\section{Introduction}
Morphology systems, lemmatisers and part-of-speech taggers are some of the
basic tools in natural language processing. There are numerous applications,
including syntax parsing, machine translation, automatic indexing and semantic
clustering of words. Unfortunately, for languages other than English, such tools
are rarely available, and different research groups are often forced to
redevelop them over and over again. Considering German, quite a few morphology
systems (Hausser 1996) and taggers (see table \ref{Tagger}) have been developed, which are described in Wothke et al. (1993) (IBM Heidelberg), Steiner (1995) (University of M{\"u}nster), Feldweg (1995) (University of T{\"u}bingen), Schmid (1995) (University of Stuttgart), Armstrong et al. (1995) (ISSCO Geneva), and Lezius (1995) (University of Paderborn).
However, in most cases, the tagger is isolated from the morphology system. It relies on a lexicon of full forms which, of course, may be generated by a morphological
tool. Unfortunately, most German lexicons are not available due to copyright
restrictions and, as far as we know, none of them is in the public domain.
Therefore we have decided to make our system Morphy publicly available.
It combines a morphological and tagging module in a single package
and can be downloaded from the World Wide
Web.\footnote{
URL: http://www-psycho.uni-paderborn.de/lezius/morpho.html}
\begin{table}
\caption{\label{Tagger} Comparison of German Taggers}
\begin{footnotesize}
\begin{center}
\begin{tabular}{|l||c|c|c|c|c|c|} \hline
& IBM & Univer- & Univer- & Univer- & & Univer- \\
& Heidel- & sity of & sity of & sity of & ISSCO & sity of \\
& berg & M{\"u}nster & T{\"u}bingen & Stuttgart & Geneva & Paderborn \\
\hline \hline
learning & super- & super- & unsuper- & unsuper- & unsuper- & super-\\
method & vised & vised & vised & vised & vised & vised \\ \hline
context & bi- \& tri- & bi- & bi- & bi- & bi- & tri- \\
method & grams & grams & grams & grams & grams & grams \\ \hline
training & 20.000 & 200.000 & 200.000 & 20.000 & 20.000 & 20.000 \\
corpus & words & words & words & words & words & words \\ \hline
test & 10.000 & 30.000 & 20.000 & 5.000 & 1.850 & 5.000 \\
corpus & words & words & words & words & words & words \\ \hline
size of & 534.514 & 850.000 & 500.000 & 350.000 & 30.000 & 100.000 \\
lexicon & words & words & words & words & words & words \\ \hline
tag sets & 689 & 143 & - & - & - & 456 \\
large / small & 33 & 54 & 42 & 50 & 56 & 51 \\ \hline
accuracy & 77.7\% & 81.5\% & - & - & - & 84.7\% \\
large / small & 93.4\% & 92.8\% & 96.7\% & 97.0\% & 96.5\% & 95.9\% \\ \hline
\end{tabular}
\end{center}
\end{footnotesize}
\end{table}
Since it has been created not only for linguists, but also for second language
learners, it has been designed for standard PC platforms, and great effort has been put into making it as easy to use as possible.
\vspace{-1ex}
\section{The morphology module of Morphy}
The morphology system is based on the Duden grammar (Drosdowski 1984).
It consists of three parts: {\it Analysis}, {\it generation} and
{\it lexical system}.
The lexical system is more elaborate than in other systems, in order to allow
user-friendly extension of the lexicon.
When entering a new word,
the user is asked the minimal number of questions
which are necessary to infer the new word's grammatical features and
which any native speaker should
be able to answer. In most cases only the root of the word has to be typed in;
questions are answered by pressing the number of the correct alternative
(see figure \ref{Lex} for the dialogue when entering the verb
{\it telefonieren}). Currently,
the lexicon comprises 21.500 words (about 100.000 word forms)
and is extended continuously.
Starting from the root of a word and the inflexion type as stored in the
lexicon, the generation system produces all inflected
forms which are shown on the screen.
Among other morphological features it considers vowel mutation, shifts
between {\it {\ss}} and {\it ss} as well as pre- and infixation
of markers for participles.
The analysis system determines, for each word form, its root and its part of speech
and, if appropriate, its gender, case, number, tense and comparative
degree. It also segments compound nouns using a longest-matching rule
which works from right to left. Since the system treats each word form separately,
ambiguities cannot be resolved. For ambiguous word forms, all possible lemmata and
morphological descriptions are given (for some examples
see table \ref{analysis}). If a word form cannot be recognised,
its part of speech is predicted by an algorithm which makes use of
statistical data on German suffix frequencies.
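The right-to-left longest-matching segmentation can be sketched as follows. This is a toy illustration, not Morphy's actual implementation: the function name and the miniature lexicon are invented for the example, and Morphy works with its full root lexicon.

```python
# Toy sketch of longest-matching compound segmentation, right to left.
# The lexicon below is a stand-in; Morphy consults its real root lexicon.
LEXICON = {"haus", "tuer", "schluessel", "bund"}

def segment(word, lexicon=LEXICON):
    """Split a compound by repeatedly cutting off the longest known
    suffix, working from right to left."""
    word = word.lower()
    parts = []
    while word:
        for i in range(len(word)):          # smallest i = longest suffix
            if word[i:] in lexicon:
                parts.insert(0, word[i:])   # prepend, since we cut from the right
                word = word[:i]
                break
        else:
            return None                     # no segmentation found
    return parts
```

For example, `segment("haustuerschluessel")` yields `["haus", "tuer", "schluessel"]` under the toy lexicon above.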
\begin{figure}
\begin{center}
\begin{tabular}{|l|} \hline\\
\hspace*{1.2em}1. Geben Sie den Stamm ein: {\bf telefonieren} \\
\hspace*{1.2em}2. Wird das Verb schwach konjugiert?\\
\hspace*{2.7em}{\bf 1}: Ja\\
\hspace*{2.7em}2: Nein\\
\hspace*{1.2em}3. Wie lautet die 2. Person Singular Pr{\"a}sens? \hspace*{1.2em}\\
\hspace*{2.7em}{\bf 1}: du telefonierst\\
\hspace*{2.7em}2: du telefonierest\\
\hspace*{2.7em}3: du telefoniert\\
\hspace*{1.2em}4. Wie lautet das Partizip des Verbs?\\
\hspace*{2.7em}{\bf 1}: telefoniert\\
\hspace*{2.7em}2: getelefoniert\\
\hspace*{1.2em}Verb klassifiziert!\\
\\ \hline
\end{tabular}
\end{center}
\caption{\label{Lex} Dialogue when entering {\it telefonieren}
(user input is printed bold type)}
\end{figure}
Morphy's lookup mechanism when analysing texts is not based on a lexicon of
full forms. Instead, there is only a lexicon of roots together with their
inflexion types. When analysing a word form, Morphy cuts off all
possible suffixes, builds the possible roots, looks up these roots in the lexicon,
and for each root generates all possible inflected forms. Only those roots
which lead to inflected forms identical to the original word
form will be selected (for details see Lezius 1994).
Naturally, this procedure is much slower than a simple lookup
in a full-form lexicon.\footnote{Morphy's current analysis speed is about 50 word
forms per second on a fast PC, which is sufficient for many purposes. For the
processing of larger corpora we have
used Morphy to generate a full-form lexicon under UNIX. This has led to an
analysis speed of many thousand word forms per second.}
Nevertheless, there are advantages: First, the lexicon
can be kept small,\footnote{Only 750 KB of memory is
necessary for the current lexicon.} which is an important consideration
for a PC-based system intended to be widely distributed. Secondly, the
processing of German compound nouns fits naturally into this approach.
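The generate-and-test lookup can be sketched as follows. The root lexicon and the single made-up paradigm are purely illustrative assumptions; Morphy's real inflexion classes are far richer.

```python
# Sketch of analysis without a full-form lexicon: only roots and their
# inflexion types are stored.  A candidate root survives only if
# regenerating its paradigm reproduces the input word form.
ROOTS = {"spiel": "weak_verb"}                  # root -> inflexion type (toy)
PARADIGMS = {"weak_verb": ["en", "e", "st", "t", "te", "ten"]}

def generate(root, infl_type):
    """All inflected forms of a root, given its inflexion type."""
    return {root + suffix for suffix in PARADIGMS[infl_type]}

def analyse(form):
    """Cut off every possible suffix, look the candidate root up, and
    keep it only if generation yields the original word form again."""
    results = []
    for i in range(1, len(form) + 1):
        root = form[:i]
        if root in ROOTS and form in generate(root, ROOTS[root]):
            results.append(root)
    return results
```

Here `analyse("spielst")` returns `["spiel"]`, while forms whose regenerated paradigms do not match the input are rejected.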
The performance of the morphology system has been tested at
the Morpholympics conference 1994
in Erlangen (see Hausser (1996), pp. 13-14, and Lezius (1996)) with a
specially designed test corpus
which had been unknown to the participants. This corpus comprised about 7.800
word forms and consisted of different text types (two political speeches, a fragment of
the Limas-corpus and a list of special word forms). Morphy
recognised 89.2\%, 95.9\%, 86.9\% and 75.8\% of the word forms, respectively.
\begin{table}
\caption{\label{analysis} Some examples of the morphological analysis}
\begin{footnotesize}
\begin{center}
\begin{tabular}{|l|l|l|} \hline
word form & morphological description & root \\ \hline \hline
Fl{\"u}ssen & Substantiv Dativ Plural maskulinum & Flu{\ss} \\ \hline
Bauern- & Substantiv Dativ Plural neutrum & Bauer / Haus \\
h{\"a}usern & & \\ \hline
Schiffahrts- & Substantiv Genitiv Singular maskulinum & Schiff / Fahrt / \\
hafenmeisters & & Hafen / Meister \\
\hline
K{\"u}sse & Substantiv Nominativ Plural maskulinum & Ku{\ss} \\
& Substantiv Genitiv Plural maskulinum & Ku{\ss} \\
& Substantiv Akkusativ Plural maskulinum & Ku{\ss} \\
& Verb 1. Person Singular Pr{\"a}sens & k{\"u}ssen \\
& Verb 1. Person Singular Konjunktiv 1 & k{\"u}ssen \\
& Verb 3. Person Singular Konjunktiv 1 & k{\"u}ssen \\
& Verb Imperativ Singular & k{\"u}ssen \\ \hline
einnahm & Verb 1. Person Singular Pr{\"a}teritum & (ein)nehmen \\
& Verb 3. Person Singular Pr{\"a}teritum & (ein)nehmen \\ \hline
verspieltest & Verb 2. Person Singular Pr{\"a}teritum & ver-spielen \\
& Verb 2. Person Singular Konjunktiv 2 & ver-spielen \\ \hline
verspieltes & Adjektiv Nominativ Singular neutrum & verspielt (ver-spielen)\\
& Adjektiv Akkusativ Singular neutrum & verspielt (ver-spielen)\\ \hline
edlem & Adjektiv Dativ Singular neutrum & edel \\
& Adjektiv Dativ Singular maskulinum & edel \\ \hline
\end{tabular}
\end{center}
\end{footnotesize}
\end{table}
\section{The tagging module of Morphy}
Since morphological analysis operates on isolated word forms, ambiguities
are not resolved. The task of the tagger is to resolve these ambiguities
by taking into account contextual information. When designing a tagger,
a number of decisions have to be made:
\begin{itemize}
\item Selection of a tag set.
\item Selection of a tagging algorithm.
\item Selection of a training and test corpus.
\end{itemize}
\subsection{Tag Set}
Like the morphology system, the tagger is
based on the classification of the parts of
speech from the Duden grammar. Supplementary categories have been taken from
the system of Bergenholtz and Schaeder (1977). The resulting tag
set includes grammatical features such as gender,
case and number. This results in a very complex system, comprising about
1000 different tags (see Lezius 1995). Since only 456 tags actually occurred
in the training corpus, the tag set was reduced to this subset.
However, most German word forms are highly
ambiguous in this system (about 5 tags per word form on average).
Although the amount of information gained by this system is very high,
tagging algorithms with such large tag sets have led to poor results in the
past (see Wothke et al. 1993; Steiner 1995). This is because different grammatical features
often have the same surface realization (e.g. nominative noun and accusative
noun are difficult to distinguish by the tagger). By grouping together parts of
speech with different grammatical features this kind of error can be
significantly reduced. This is what current small tag sets implicitly do.
However, one has to keep in mind that the gain of information provided by the tagger
is also reduced with a smaller tag set.
Since some applications do not require detailed distinctions, we also constructed
a small tag set comprising 51 tags as shown in table \ref{tagset}.
Both tag sets are constructed in such a way
that the large tag set can be directly mapped onto the small tag set.
\begin{table}
\caption{\label{tagset} The small tag set (51 tags)}
\begin{footnotesize}
\begin{center}
\begin{tabular}{||l|l|l||}\hline\hline
tag name & explanation of the tag & example\\ \hline\hline
\tt SUB & Substantiv & \it (der) Mann\\
\tt EIG & Eigenname & \it Egon, (Herr) Hansen\\ \hline
\tt VER & finite Verbform & \it spielst, l{\"a}uft\\
\tt VER INF & Infinitiv & \it spielen, laufen\\
\tt VER PA2 & Partizip Perfekt & \it gespielt, gelaufen\\
\tt VER EIZ & erweiterter Infinitiv mit \it zu & \it abzuspielen\\
\tt VER IMP & Imperativ & \it lauf', laufe\\
\tt VER AUX & finite Hilfsverbform & \it bin, hast\\
\tt VER AUX INF & Infinitiv & \it haben, sein\\
\tt VER AUX PA2 & Partizip Perfekt & \it gehabt, gewesen\\
\tt VER AUX IMP & Imperativ & \it sei, habe\\
\tt VER MOD & finite Modalverbform & \it kannst, will\\
\tt VER MOD INF & Infinitiv & \it k{\"o}nnen, wollen\\
\tt VER MOD PA2 & Partizip Perfekt & \it gekonnt, gewollt\\
\tt VER MOD IMP & Imperativ & \it k{\"o}nne\\ \hline
\tt ART IND & unbestimmter Artikel & \it ein, eines\\
\tt ART DEF & bestimmter Artikel & \it der, des\\ \hline
\tt ADJ & Adjektivform & \it schnelle, kleinstes\\
\tt ADJ ADV & Adjektiv, adverbiell & \it (Er l{\"a}uft) schnell.\\ \hline
\tt PRO DEM ATT & Demonstrativpronomen, attributiv & \it diese
(Frau)\\
\tt PRO DEM PRO & Demonstrativpronomen, pronominal & \it diese\\
\tt PRO REL ATT & Relativpronomen, attributiv & \it ,dessen
(Frau)\\
\tt PRO REL PRO & Relativpronomen, pronominal & \it ,welcher\\
\tt PRO POS ATT & Possessivpronomen, attributiv & \it mein (Buch)\\
\tt PRO POS PRO & Possessivpronomen, pronominal & \it (Das ist)
meiner.\\
\tt PRO IND ATT & Indefinitpronomen, attributiv & \it alle
(Menschen)\\
\tt PRO IND PRO & Indefinitpronomen, pronominal & \it (Ich mag)
alle.\\
\tt PRO INR ATT & Interrogativpronomen, attributiv & \it Welcher
(Mann)?\\
\tt PRO INR PRO & Interrogativpronomen, pronominal & \it Wer?\\
\tt PRO PER & Personalpronomen & \it er, wir\\
\tt PRO REF & Reflexivpronomen & \it sich, uns\\ \hline
\tt ADV & Adverb & \it schon, manchmal\\
\tt ADV PRO & Pronominaladverb & \it damit, dadurch\\
\tt KON UNT & unterordnende Konjunktion & \it da{\ss}, da\\
\tt KON NEB & nebenordnende Konjunktion & \it und, oder\\
\tt KON INF & Infinitivkonjunktion & \it um (zu spielen)\\
\tt KON VGL & Vergleichskonjunktion & \it als, denn, wie\\
\tt KON PRI & Proportionalkonjunktion & \it desto, um so, je\\
\tt PRP & Pr{\"a}position & \it durch, an\\ \hline
\tt SKZ & Sonderklasse f{\"u}r \it zu & \it (,um) zu (spielen)\\
\tt ZUS & Verbzusatz & \it (spielst) ab\\
\tt INJ & Interjektion & \it Wau, Oh\\
\tt ZAL & Zahlw{\"o}rter & \it eins, tausend\\
\tt ZAN & Zahlen & \it 100, 2\\
\tt ABK & Abk{\"u}rzung & \it Dr., usw.\\ \hline
\tt SZD & Doppelpunkt & :\\
\tt SZE & Satzendezeichen & .!?\\
\tt SZG & Gedankenstrich & -\\
\tt SZK & Komma & ,\\
\tt SZS & Semikolon & ;\\
\tt SZN & sonstige Satzzeichen & ()/ \\ \hline \hline
\end{tabular}
\end{center}
\end{footnotesize}
\end{table}
\subsection{Tagging algorithm}
The tagger uses Church's trigram algorithm (Church 1988), which is still
unsurpassed in terms of simplicity, robustness and accuracy. However, since
we assumed that longer n-grams may give more information, and
since we observed that some longer n-grams are rather frequent in corpora
(see figure \ref{Brown} for some statistics on the Brown corpus),
we decided to compare
the Church algorithm with a tagging algorithm relying on variable context
widths as described by Rapp (1995).
Starting from an ambiguous word form which is to be tagged, this algorithm
considers the preceding word forms - which have already been
tagged - and the succeeding word forms still to be tagged.
For this ambiguous word form the algorithm constructs all possible tag
sequences composed of the already computed tags on the left, one of the
possible tags of the critical word form and possible tags on the right.
The choice of the tag for the critical word form is a function of the
lengths of the tag sequences to the left and to the right which can be
found in the training corpus. A detailed description of this algorithm
is given in Rapp (1995, pp. 149-154).
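As a rough illustration of how lexical and contextual probabilities combine to select a tag sequence, consider the sketch below. All probabilities are toy values, bigram context is used instead of trigrams purely for brevity, and none of this is Morphy's actual code.

```python
import math

# Toy model: a real tagger estimates these probabilities from the
# training corpus, and Church's algorithm uses trigram context.
LEX = {                         # P(word | tag), invented values
    "die":   {"ART": 0.6, "PRO": 0.2},
    "frau":  {"SUB": 0.1},
    "kommt": {"VER": 0.05},
}
TRANS = {                       # P(tag | previous tag), invented values
    ("<s>", "ART"): 0.4, ("<s>", "PRO"): 0.1,
    ("ART", "SUB"): 0.5, ("PRO", "SUB"): 0.2,
    ("SUB", "VER"): 0.4,
}

def best_tags(words):
    """Return the tag sequence maximising the product of lexical and
    contextual probabilities.  All complete paths are kept, which is
    fine for a toy example; a real implementation prunes the search
    with dynamic programming (Viterbi)."""
    paths = {("<s>",): 0.0}     # tag path -> log probability
    for w in words:
        new = {}
        for path, lp in paths.items():
            for tag, p_lex in LEX[w].items():
                p_ctx = TRANS.get((path[-1], tag), 1e-6)
                new[path + (tag,)] = lp + math.log(p_ctx) + math.log(p_lex)
        paths = new
    best = max(paths, key=paths.get)
    return list(best[1:])       # drop the sentence-start marker
```

Under these toy numbers, `best_tags(["die", "frau", "kommt"])` selects the sequence ART, SUB, VER, resolving the article/pronoun ambiguity of {\it die} from context.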
Although some authors (Cutting et al. 1992; Schmid 1995; Feldweg 1995) claim
that unsupervised tagging algorithms produce superior results, we chose
supervised learning. These publications pay little attention to
the fact that algorithms for unsupervised tagging require great care
(or even luck) when tuning some initial parameters.
It frequently happens that unsupervised learning with sophisticated tag sets
ends up in local minima, which can lead to poor results without any
indication to the user. Such behavior seemed unacceptable for a standard
tool.
\begin{figure}
$$\beginpicture
\setcoordinatesystem units <0.025em,0.06em>
\setplotarea x from 0 to 1000, y from 0 to 200
\begin{footnotesize}
\axis bottom label {size of the corpus}
ticks numbered from 0 to 1000 by 200 /
\axis left label {\stack{n,-,g,r,a,m,s}}
ticks numbered from 0 to 200 by 50 /
\setplotsymbol ({\rm.})
\linethickness0.01em
\setquadratic
\plotsymbolspacing0.05em
\plot 0 0 100 40 200 65
400 100 600 125 800 145
1000 155 /
\plot 0 0 100 10 200 15
600 22.5 1000 23.5 /
\plot 0 0 500 2 1000 4 /
\setlinear
\plotsymbolspacing0.4em
\plot 0 0 200 200 /
\put {n=2} at 1050 7
\put {n=3} at 1050 27
\put {n=4} at 1050 157.5
\end{footnotesize}
\endpicture$$
\vspace{4ex}
\caption{\label{Brown} Statistics on the Brown corpus: number of different
n-grams occurring in the corpus versus size of the corpus (all figures
in thousands)}
\end{figure}
\subsection{Training and test corpus}
For training and testing we took a fragment from the
``Frankfurter-Rundschau''-corpus,\footnote{This corpus was generously donated by
the Druck- und Verlagshaus Frankfurt
am Main and has been included in the CD-ROM of the European
Corpus Initiative. We thank Gisela Zunker for her help with the acquisition and
preparation of the corpus.}
which we have been collecting since 1992. Tables and other non-textual items
were removed manually. A segment of 20.000 word forms was used for training,
another segment of 5.000 word forms for testing. Any word forms not recognised
by the morphology system
were included in the lexicon. Both corpora were tagged
semiautomatically with the large tag set, using a special tagging editor
which, on the basis of the morphology module, offers a choice of
possible tags for each word. A recent version of the editor
additionally predicts the correct tag.
\section{Results}
Using the probabilities from the manually annotated training corpus,
the test corpus was tagged automatically. The results were compared with the
previous manual annotation of the test corpus. This was done for both tagging
algorithms and tag sets. For the small tag set, the Church algorithm achieved
an accuracy of 95.9\%, whereas with the variable-context algorithm an accuracy
of 95.0\% was obtained. For the large tag set the respective figures are 84.7\%
and 81.8\%.
In comparison with other research groups, the results are
similar for the small tag set and slightly better for the large tag set (see
table \ref{Tagger}). Surprisingly, in spite of considering less context, the Church
algorithm performs better than the variable-context algorithm in both cases.
This is the reason why the current implementation
of Morphy only includes the Church algorithm.\footnote{The speed of the tagger
(including morphological analysis) is about 20 word forms per second for
the large and 100 word forms per second for the small tag set on a fast PC.}
As an example, figure \ref{Example}
gives the annotation results of a few test sentences for both tag sets.
\begin{figure}
$$\beginpicture
\setcoordinatesystem units <0.001275em,0.2em>
\setplotarea x from 0 to 20000, y from 0 to 100
\begin{footnotesize}
\axis bottom label {size of the training corpus}
ticks numbered from 0 to 20000 by 5000 /
\axis left label {\stack{a,c,c,u,r,a,c,y}}
ticks numbered from 0 to 100 by 10 /
\setplotsymbol ({\rm.})
\setquadratic
\plotsymbolspacing0.05em
\plot 0 0 500 70 1000 93.8
2500 94.3 5000 94.6 7500 94.7
10000 94.8 12500 94.9 15000 95.0
17500 94.8 20000 95.0 /
\plotsymbolspacing0.05em
\plot 0 0 500 70 1000 93.9
2500 94.7 5000 94.9 7500 95.2
10000 95.4 12500 95.5 15000 95.5
17500 95.7 20000 95.9 /
\put{Church} at 17500 99
\put{var.context} at 17500 92
\put{small} at 21250 96
\put{tag set} at 21250 93
\setplotsymbol ({\rm.})
\setquadratic
\plotsymbolspacing0.05em
\plot 0 0 500 51 1000 69.2
2500 74.0 5000 75.9 7500 77.2
10000 78.6 12500 79.4 15000 80.1
17500 80.8 20000 81.8 /
\plotsymbolspacing0.05em
\plot 0 0 500 51 1000 71.4
2500 75.6 5000 78.9 7500 81.8
10000 82.7 12500 83.3 15000 83.6
17500 84.2 20000 84.7 /
\put{Church} at 17500 87
\put{var.context} at 17500 77
\put{large} at 21250 84
\put{tag set} at 21250 81
\end{footnotesize}
\endpicture$$
\vspace{4ex}
\caption{\label{Results} Accuracy versus size of the training corpus for
Church's trigram algorithm and the variable-context algorithm and both tag
sets.}
\end{figure}
However, there are also some advantages on the side of the
variable-context algorithm. First, its potential when using larger training
corpora seems to be slightly higher (see figure \ref{Results}). Secondly, when
the algorithm
is modified in such a way that sentence boundaries are not assumed to be known
beforehand, the performance degrades only minimally. This means that this
algorithm can actually contribute to finding sentence boundaries. And third,
if there are sequences of unknown word forms in the text, the algorithm makes
better guesses than the Church algorithm (examples are given in
Rapp 1995, p. 155). When about 2\% of the word forms in the test corpus were
randomly replaced by unknown word forms, the quality of the results for the
Church algorithm decreased by 0.7\% for the small and by 2.0\% for the large
tag set. The respective figures for the variable-context algorithm are 0.9\%
and 1.3\%, which is better overall.
In a further experiment, the contribution of the lexical probabilities
to the quality of the results was examined. Without the lexical
probabilities, the results decreased by 0.3\% (small) and
0.6\% (large tag set) for the Church algorithm, the respective
figures for the variable-context algorithm were 0.9\% and 0.0\%.
\begin{figure}[t]
\begin{footnotesize}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline
Die & Frau & bringt & das & Essen & . \\
\hline
ART DEF & SUB & VER & ART DEF & SUB & SZE \\ \hline
\end{tabular}
\vspace{1ex}
\begin{tabular}{|c|c|c|c|c|} \hline
Ich & meine & meine & Frau & . \\ \hline
PER PRO & VER & POS ATT & SUB &
SZE \\ \hline
\end{tabular}
\vspace{1ex}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline
Winde & das & im & Winde & flatternde & Segel & um & die & Winde \\ \hline
\bf SUB & ART DEF & PRP & SUB & ADJ & SUB & PRP & ART & SUB \\ \hline
\end{tabular}
\vspace{6ex}
\begin{tabular}{|c|c|c|} \hline
Die & Frau & bringt \\ \hline
ART DEF NOM SIN FEM & SUB NOM FEM SIN & VER 3PE SIN \\ \hline
\end{tabular}
\vspace{1ex}
\begin{tabular}{|c|c|c|c|} \hline
das & Essen & . & Ich \\ \hline
ART DEF AKK SIN NEU & SUB AKK NEU SIN & SZE & PER NOM SIN 1PE \\
\hline
\end{tabular}
\vspace{1ex}
\begin{tabular}{|c|c|c|c|} \hline
meine & meine & Frau & .\\ \hline
VER 1PE SIN & POS AKK SIN FEM ATT & SUB AKK FEM SIN & SZE \\
\hline
\end{tabular}
\vspace{1ex}
\begin{tabular}{|c|c|c|} \hline
Winde & das & im \\ \hline
VER \begin{bf}3PE \end{bf} SIN & \begin{bf} DEM NOM \end{bf} SIN NEU \begin{bf}
PRO \end{bf} & PRP DAT \\ \hline
\end{tabular}
\vspace{1ex}
\begin{tabular}{|c|c|c|} \hline
Winde & flatternde & Segel \\ \hline
SUB DAT MAS SIN & PA1 \begin{bf} SOL \end{bf} NEU AKK \begin{bf} PLU \end{bf} &
SUB AKK NEU \begin{bf} PLU \end{bf} \\ \hline
\end{tabular}
\vspace{1ex}
\begin{tabular}{|c|c|c|c|} \hline
um & die & Winde & . \\ \hline
PRP AKK & ART DEF AKK SIN FEM & SUB AKK FEM SIN & SZE \\ \hline
\end{tabular}
\end{center}
\caption{\label{Example} Tagging example for both tag sets - the
ambiguity rates amount to 2.4 tags per word for the small and 8.8 tags per word
for the large tag set (errors are printed bold type).}
\end{footnotesize}
\end{figure}
\section{Conclusions}
We have compared two different tagging algorithms and two different tag sets.
The first tagging algorithm is the Church algorithm
which uses trigrams to compute contextual probabilities. The second algorithm,
the so-called variable-context algorithm, has been described in section 3.
The smaller of the two tag sets contains 51 parts-of-speech, the larger tag set
includes additional grammatical features such as case, number and gender.
The small tag set is a subset of the large tag set.
In comparison with the Church algorithm, the variable-context algorithm produces
similar results for the small tag set, but significantly inferior results for the
large tag set. On the other hand, the performance of the variable-context
algorithm for the large tag set improves faster with increasing size of the
training corpus than the performance of the Church algorithm.
Thus, if more training
texts are tagged manually, similar results are to be expected for the two algorithms.
Considering the two tag sets, the results for the small tag set are
significantly better. Nevertheless, with increasing size of the training corpus
the results for the two tag sets can be expected to approach each other.
One of our aims for the near future is to use the output of the tagger for
lemma\-tization. In this way a sentence like {\it Ich meine meine Frau.} could be
unambiguously reduced to {\it ich / meinen / mein / Frau}.
\vspace{-2ex}
\section*{Bibliography}
\vspace{-2ex}
S. Armstrong, G. Russell, D. Petitpierre and G. Robert (1995). An open architecture for multilingual text processing. In: {\it Proceedings of the ACL SIGDAT Workshop. From
Texts to Tags: Issues in Multilingual Language Analysis}, Dublin.
\vspace{-0.75ex}
H. Bergenholtz and B. Schaeder (1977). {\it Die
Wortarten des Deutschen}. Klett, Stuttgart.
\vspace{-0.75ex}
K. Church (1988). A stochastic parts program and noun phrase parser
for unrestricted text. In:
{\it Second Conference on Applied Natural Language Processing}, pp. 136-143.
Austin, Texas.
\vspace{-0.75ex}
D. Cutting, J. Kupiec, J. Pedersen and P. Sibun (1992). A
practical part-of-speech tagger. In: {\it Proceedings of the Third Conference on
Applied Language Processing}, pp. 133-140. Trento, Italy.
\vspace{-0.75ex}
G. Drosdowski (1984). {\it Duden. Grammatik der deutschen
Gegenwartssprache}. Dudenverlag, Mannheim.
\vspace{-0.75ex}
H. Feldweg (1995). Implementation and evaluation of a German
HMM for POS disambiguation. In: Feldweg and Hinrichs, eds., {\it Lexikon und
Text}, pp. 41-46. Niemeyer, T{\"u}bingen.
\vspace{-0.75ex}
R. Hausser (1996). {\it Linguistische Verifikation.
Dokumentation zur Ersten Morpholympics}. Niemeyer, T{\"u}bingen.
\vspace{-0.75ex}
W. Lezius (1994). Aufbau und Funktionsweise von Morphy.
Internal report. Universit{\"a}t-GH Paderborn, Fachbereich
2.
\vspace{-0.75ex}
W. Lezius (1995). Algorithmen zum Taggen deutscher Texte.
Internal report, Universit{\"a}t-GH Paderborn, Fachbereich 2.
\vspace{-0.75ex}
W. Lezius (1996). Morphologiesystem MORPHY. In:
R. Hausser, ed., {\it
Linguistische Verifikation: Dokumentation zur Ersten Morpholympics 1994},
pp. 25-35. Niemeyer, T{\"u}bingen.
\vspace{-0.75ex}
R. Rapp (1995). Die Berechnung von Assoziationen - Ein korpuslinguistischer Ansatz. In: Hellwig and Krause, eds.,
{\it Sprache und Computer}, vol. 16, Olms, Hildesheim.
\vspace{-0.75ex}
H. Schmid (1995). Improvements in part-of-speech tagging with an
application to German. In: Feldweg and Hinrichs, eds.,
{\it Lexikon und Text}, pp. 47-50. Niemeyer, T{\"u}bingen.
\vspace{-0.75ex}
P. Steiner (1995). Anforderungen und Probleme beim Taggen
deutscher Zeitungstexte. In: Feldweg and Hinrichs, eds.,
{\it Lexikon und Text}. Niemeyer, T{\"u}bingen.
\vspace{-0.75ex}
K. Wothke, I. Weck-Ulm, J. Heinecke, O. Mertineit and T. Pachunke (1993).
Statistically Based Automatic Tagging of German Text Corpora
with Parts-of-Speech - Some Experiments. Technical Report 75.93.02,
IBM Germany, Heidelberg Scientific Center.
\end{document}
\section{Introduction}
Hartshorne conjectured that every smooth, codimension $c$ subvariety of $\bP^n$, with $c<\frac{1}{3}n$, is a complete intersection~\cite[p.~1017]{hartshorne-bulletin}.
We prove this in the special case when $n\gg \deg X$.
\begin{theorem}\label{thm:more general}
There is a function $N(c,e)$ such that if $X\subseteq \bP^n_{\bk}$ is closed, equidimensional of
codimension $c$, and has degree~$e$, and if $X$ is nonsingular in codimension $\geq N(c,e)$, then $X$ is a complete intersection. In particular, $N(c,e)$ does not depend on $n$ or on the field $\bk$.
\end{theorem}
\noindent In characteristic zero, Hartshorne showed this in~\cite[Theorem 3.3]{hartshorne-bulletin}. In parallel, and also in characteristic zero, Barth and Van de Ven proved an effective version of this result, showing that $N=\frac{5}{2}e(e-7)+c$ works~\cite{barth-icm}.
Later improvements include: work of Ran~\cite[Theorem]{ran}, sharpening the bound and extending it to arbitrary characteristic, but only when $c=2$; and Bertram-Ein-Lazarsfeld's \cite[Corollary~3]{bertram-ein-lazarsfeld}, which sharpens the bound in arbitrary codimension, but only holds in characteristic zero. See also~\cites{ballico-chiantini, holme-schneider}.
Our result is new in the case of positive characteristic and $c>2$.
All of these previous results are proved by geometric means. For instance, the main ingredient in ~\cite{barth-icm} is an analysis of the variety of lines in $X$ through a point, and many of the proofs make use of Kodaira Vanishing and topological results like Lefschetz-type restriction theorems.
Our proof, on the other hand, is algebraic and employs quite different methods: we derive Theorem~\ref{thm:more general} as an elementary consequence of our result in ~\cite{ess-stillman} that the graded ultraproduct of polynomial rings is isomorphic to a polynomial ring (see Theorem~\ref{thm:ultra poly} below). Although Theorem~\ref{thm:more general} is ineffective, it holds in arbitrary characteristic and is independent of the field.
Our approach also explicitly connects Hartshorne's Conjecture with the circle of ideas initiated by Ananyan and Hochster in~\cite{ananyan-hochster} in their proof of Stillman's Conjecture.
Though we do not rely on the results of~\cite{ananyan-hochster}, those ideas motivated our approach.
Our proof also has some overlap with the Babylonian tower theorems, like ~\cite[Theorems~I and IV]{BVdV} and those in~\cite{coandua, flenner, sato} among others. From an algebraic perspective, the natural setting for such statements is an inverse limit of polynomial rings, and~\cite{ess-stillman} shows that such an inverse limit shares many properties with the ultraproduct ring.
\section{Setup and Background} \label{sec:background}
Each closed subscheme $X\subseteq \bP^n_\bk$ determines a homogeneous ideal $I_X\subseteq \bk[x_0,\dots,x_n]$. The scheme $X$, or the ideal $I_X$, is {\bf equidimensional of codimension $c$} if all associated primes of $I_X$ have codimension $c$, and $X$ is a {\bf complete intersection} if $I_X$ is defined by a regular sequence. Since the minimal free resolution of $I_X$ is stable under extending the ground field $\bk$, the property of being a complete intersection is also stable under field extension.
From here on, $\bk$ and $\bk_i$ will denote fields. If $R$ is a graded ring with $R_0=\bk$, then as in~\cite{ananyan-hochster}, we define the {\bf strength} of a homogeneous element $f\in R$ to be the minimal integer $k \ge -1$ for which there is a decomposition $f=\sum_{i=1}^{k+1} g_i h_i$ with $g_i$ and $h_i$ homogeneous elements of $R$ of positive degree, or $\infty$ if no such decomposition exists. The {\bf collective strength} of a set of homogeneous elements $f_1,\dots,f_r\in R$ is the minimal strength of a non-trivial homogeneous $\bk$-linear combination of the $f_i$.
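To illustrate the definition (this example is ours, not taken from the sources cited): in $\bk[x_1,x_2,\dots]$ the quadric $x_1x_2+x_3x_4$ has strength $1$, since it admits a decomposition into $k+1=2$ products of positive-degree forms, but is not itself a product $gh$ of two positive-degree forms (such a product would be a reducible quadric, whereas $x_1x_2+x_3x_4$ is irreducible):

```latex
f = x_1 x_2 + x_3 x_4 = g_1 h_1 + g_2 h_2,
\qquad g_1 = x_1,\ h_1 = x_2,\ g_2 = x_3,\ h_2 = x_4 .
```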
\begin{lemma}\label{lem:decreasing strength}
Let $R$ be a graded ring with $R_0=\bk$. If $I\subseteq R$ is homogeneous and finitely generated, then $I$ has a generating set $g_1,\dots,g_r$ where the strength of $g_k$ equals the collective strength of $g_1,\dots,g_k$ for each $1\leq k \leq r$.
\end{lemma}
\begin{proof}
Choose any homogeneous generators $f_1,\dots,f_r$ of $I$. We prove the statement by induction on $r$. For $r=1$ the statement is tautological. Now let $r>1$. By definition of collective strength, we have a $\bk$-linear combination $g_r=\sum_{i=1}^r a_if_i$ such that the strength of $g_r$ equals the collective strength of $f_1,\dots,f_r$. After relabeling, we can assume that $a_r\ne 0$ and it follows that $f_1,\dots,f_{r-1},g_r$ generate $I$. Applying the induction hypothesis to the ideal $(f_1,\dots,f_{r-1})$ yields the desired result.
\end{proof}
Let $Q=(f_1,\dots,f_r) \subseteq \bk[x_1,x_2,\dots]$. The ideal of $c\times c$ minors of the Jacobian matrix of $(\frac{\partial f_i}{\partial x_j})$ does not depend on the choice of generators of $Q$. We denote this ideal by $J_c(Q)$.
\begin{lemma}\label{lem:strength and J}
Let $Q=(f_1,\dots,f_r)$ be a homogeneous ideal in $\bk[x_1,x_2,\dots]$. If the strength of $f_i$ is at most $s$ for $c\leq i \leq r$, then $\codim J_c(Q)\leq (r-c+1)(2s+2)$.
\end{lemma}
\begin{proof}
For each $c \leq i \leq r$, we write $f_i=\sum_{j=0}^{s} a_{i,j}g_{i,j}$ where $a_{i,j}$ and $g_{i,j}$ have positive degree for all $i,j$. Write $L_i$ for the ideal $(a_{i,j}, g_{i,j} \mid 0\leq j \leq s)$ and let $L =L_c+L_{c+1}+\dots+L_r$. The $i$th row of the Jacobian matrix has entries $\frac{\partial f_i}{\partial x_k}$; thus by the product rule, every entry in this row is in $L_i$. Since every $c\times c$ minor of the Jacobian matrix will involve row $i$ for some $c\leq i \leq r$, it follows that $J_c(Q) \subseteq L$. Thus $\codim J_c(Q)\leq \codim L$, which by the Principal Ideal Theorem is at most $(r-c+1)(2s+2)$, as this is the number of generators of $L$.
\end{proof}
We briefly recall the definition of the ultraproduct ring, referring to~\cite[\S4.1]{ess-stillman} for a more detailed discussion. Let $\cI$ be an infinite set and let $\cF$ be a non-principal ultrafilter on $\cI$. We refer to subsets of $\cF$ as {\bf neighborhoods of $\ast$}, where $\ast$ is an imaginary point of $\cI$. For each $i\in \cI$, let $\bk_i$ be an infinite perfect field. Let $\bS$ denote the graded ultraproduct of $\{\bk_i[x_1,x_2,\dots]\}$, where each polynomial ring is given the standard grading. An element $g\in \bS$ of degree $d$ corresponds to a collection $(g_i)_{i\in \cI}$ of degree $d$ elements $g_i\in \bk_i[x_1,x_2,\dots]$, modulo the relation that $g=0$ if and only if $g_i=0$ for all $i$ in some neighborhood of $\ast$. For a homogeneous $g\in \bS$ we write $g_i$ for the corresponding element in $\bk_i[x_1,x_2,\dots]$, keeping in mind that this is only well-defined for $i$ in some neighborhood of $\ast$. The following comes from~\cite[Theorems~1.2 and 4.6]{ess-stillman}:
\begin{theorem}\label{thm:ultra poly}
Let $K$ be the ultraproduct of perfect fields $\{\bk_i\}$ and fix $f_1,\dots, f_r\in \bS$ of infinite collective strength. There is a set $\cU$, containing the $f_i$, such that
$\bS$ is isomorphic to the polynomial ring $K[\cU]$.
\end{theorem}
The following result was first proven in~\cite[Theorem~4.2]{cmpv} (though that result is more general than this lemma). We provide an alternate proof, to illustrate how it also follows quickly from Theorem~\ref{thm:ultra poly}.
\begin{lemma}\label{lem:numgens and degree}
Fix $c$ and $e$. There exist positive integers $d_1,\dots,d_r$, depending only on $c$ and $e$, such that any homogeneous, equidimensional, and radical ideal $Q\subseteq \bk[x_1,\dots,x_n]$ (with $\bk$ perfect) of codimension $c$ and degree $e$ can be generated (not necessarily minimally) by homogeneous polynomials $f_1,\dots,f_r$ with $\deg(f_i) \le d_i$. Neither $r$ nor the $d_i$ depend on $n$ or $\bk$.
\end{lemma}
\begin{proof}
For a homogeneous ideal $I$ we write $\nu(I)$ for the sum of the degrees of the minimal generators of $I$. If the statement were false, then for each $j\in \bN$ we could find a homogeneous, equidimensional, and radical ideal $Q'_j\subset \bk_j[x_1,x_2,\dots]$ of codimension $c$ and degree $e$, such that $\nu(Q'_j)\to \infty$ as $j\to \infty$. Choose some function $m \colon \cI \to \bN$ where $m(i)$ is unbounded in any neighborhood of $\ast$. For each $i\in \cI$, choose some $j$ such that $\nu(Q'_j)\ge m(i)$ and set $Q_i=Q_j'$. By construction, the function $i\mapsto \nu(Q_i)$ is unbounded in every neighborhood of $\ast$.
By~\cite[Proposition~3.5]{eisenbud-huneke-vasconcelos} (see also~\cite[Theorem~1]{mumford}), each $Q_i$ can be generated up to radical by a regular sequence $f_{1,i},f_{2,i},\dots,f_{c,i}\in \bk_i[x_1,x_2,\dots]$ with $\deg(f_{j,i})=e$ for all $i,j$. Let $f_j=(f_{j,i})\in \bS$ and let $J=(f_1,f_2,\dots,f_c)\subseteq \bS$. Since $P=\sqrt{J}$ is finitely generated by Theorem~\ref{thm:ultra poly}, we can write $P=(g_1,\dots,g_r)\subseteq \bS$. We let $P_i=(g_{1,i},\dots,g_{r,i})$ be the corresponding ideal in $\bk_i[x_1,x_2,\dots]$.
For any $x=(x_i)\in \bS$, we have:
\[
\begin{matrix}\text{$x_i\in P_i$}\\\text{for $i$ near $\ast$}\end{matrix} \iff x\in P \iff
\begin{matrix}x^n\in J\\ \text{ for some $n$}\end{matrix}
\iff \begin{matrix}\text{$x_i^n \in J_i$}\\\text{ for $i$ near $\ast$} \end{matrix}
\iff \begin{matrix}x_i\in Q_i\\ \text{ for $i$ near $\ast$}\end{matrix}.
\]
Thus $P_i=Q_i$ for $i$ near $\ast$. It follows that in a neighborhood of $\ast$, $\nu(Q_i)$ is bounded by $\sum_{k=1}^r \deg(g_k)$, providing our contradiction.
\end{proof}
\section{Proof of the main result}
\begin{theorem}\label{thm:Nreg}
There is a function $N(c,e)$ such that if $Q\subseteq \bk[x_1,\dots,x_n]$ is a homogeneous, equidimensional ideal of codimension $c$ and degree $e$ and if $V(Q)$ is nonsingular in codimension $\geq N(c,e)$, then $Q$ is defined by a regular sequence of length $c$. In particular, $N(c,e)$ does not depend on $n$ or on the field $\bk$.
\end{theorem}
\begin{remark}
Since an equidimensional ideal of codimension $c$ that is nonsingular in codimension $2c+1$ must be prime, it would be equivalent to rephrase Theorem~\ref{thm:Nreg} in terms of prime ideals. We stick with equidimensional and radical ideals because some of the auxiliary results in this paper might be of interest with this added generality.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm:Nreg}]
We first reduce to the case where $\bk$ is perfect. Extending the field will change neither the minimal number of generators of $Q$, nor the codimension of the singular locus. By taking $N(c,e)\geq 1$, we can also assume that $Q$ is radical, even after extending the field. Finally, since a field extension will not change the codimension of any minimal prime of $Q$~\cite[00P4]{stacks}, we can assume that $\bk$ is perfect and that $Q$ is radical and equidimensional of codimension $c$.
Suppose that the theorem were false. Then for some fixed $c,e$ and for each $j\in \bN$ we can find an equidimensional, radical ideal $Q_j'\subseteq \bk_j[x_1,x_2,\dots]$ (with $\bk_j$ perfect) of codimension $c$ and degree $e$ that is not a complete intersection, but where the codimension of the singular locus of $V(Q_j')$ tends to $\infty$ as $j\to \infty$. Since the singular locus of $V(Q_j')$ is defined by $Q_j'+J_c(Q_j')$, this implies that $\codim J_c(Q_j') \to \infty$ as $j\to \infty$. We choose a function $m \colon \cI \to \bN$ where $m(i)$ is unbounded in each neighborhood of $\ast$. For each $i \in \cI$, define $Q_i$ to be any of the $Q_j'$ satisfying $\codim J_c(Q_j')\geq m(i)$. By construction, $\codim J_c(Q_i)$ is unbounded in every neighborhood of $\ast$.
By Lemma~\ref{lem:numgens and degree}, there are positive integers $d_1,\dots,d_r$ satisfying: for each $i\in \cI$, there are homogeneous $f_{1,i}, \dots, f_{r,i}$ of degrees $d_1,\dots,d_r$ which generate $Q_i$. Let $f_1=(f_{1,i}), \dots, f_r=(f_{r,i})$ be the corresponding elements in $\bS$ and let $Q=(f_1,\dots,f_r)$. By Lemma~\ref{lem:decreasing strength}, we can assume that the strength of $f_k$ is the collective strength of $f_1,\dots, f_k$ for each $1\leq k\leq r$.
If $f_c$ had strength at most $s$, then the same would hold for $f_{c,i}$ in a neighborhood of $\ast$; for if $f_c=\sum_{j=0}^s a_jh_j$ then $f_{c,i}=\sum_{j=0}^s (a_j)_i(h_j)_i$ for $i$ near $\ast$. But by Lemma~\ref{lem:strength and J}, this would imply that $\codim J_c(Q_i)$ is bounded in a neighborhood of $\ast$. Since this cannot happen, $f_c$ must have infinite strength. Thus the collection $f_1,\dots,f_c$ has infinite collective strength and so by Theorem~\ref{thm:ultra poly}, $f_1,\dots,f_c$ are independent variables in $\bS$. In particular, $(f_1,\dots,f_c)$ defines a prime ideal of codimension $c$ and we must have $f_{c+1}=\dots=f_{r}=0$. However, this implies that in a neighborhood of $\ast$, each $Q_i$ is a complete intersection, contradicting our original assumption.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:more general}]
As in the beginning of the proof of Theorem~\ref{thm:Nreg}, we can quickly reduce to the case where $\bk$ is perfect. For a fixed $c$ and $e$, we let $N$ equal the bound from Theorem~\ref{thm:Nreg}. Fix some $X\subseteq \bP^n$ satisfying the hypotheses of Theorem~\ref{thm:more general}, and let $Q\subseteq \bk[x_1,\dots,x_{n+1}]$ be the defining ideal of $X$. By Theorem~\ref{thm:Nreg}, $Q$ is defined by a regular sequence, and thus $X$ is a complete intersection.
\end{proof}
\begin{bibdiv}
\begin{biblist}
\bib{ananyan-hochster}{article}{
author={Ananyan, Tigran},
author={Hochster, Melvin},
title={Small subalgebras of polynomial rings and Stillman's conjecture},
date={2016},
note={\arxiv{1610.09268v1}},
}
\bib{ballico-chiantini}{article}{
author={Ballico, Edoardo},
author={Chiantini, Luca},
title={On smooth subcanonical varieties of codimension $2$\ in ${\bf
P}^{n},$ $n\geq 4$},
journal={Ann. Mat. Pura Appl. (4)},
volume={135},
date={1983},
pages={99--117 (1984)},
}
\bib{barth-icm}{article}{
author={Barth, Wolf},
title={Submanifolds of low codimension in projective space},
conference={
title={Proceedings of the International Congress of Mathematicians},
address={Vancouver, B.C.},
date={1974},
},
book={
publisher={Canad. Math. Congress, Montreal, Que.},
},
date={1975},
pages={409--413},
}
\bib{BVdV}{article}{
author={Barth, W.},
author={Van de Ven, A.},
title={A decomposability criterion for algebraic $2$-bundles on
projective spaces},
journal={Invent. Math.},
volume={25},
date={1974},
pages={91--106},
}
\bib{BVdV-Grassman}{article}{
author={Barth, W.},
author={Van de Ven, A.},
title={On the geometry in codimension $2$ of Grassmann manifolds},
conference={
title={Classification of algebraic varieties and compact complex
manifolds},
},
book={
publisher={Springer, Berlin},
},
date={1974},
pages={1--35. Lecture Notes in Math., Vol. 412},
}
\bib{bertram-ein-lazarsfeld}{article}{
author={Bertram, Aaron},
author={Ein, Lawrence},
author={Lazarsfeld, Robert},
title={Vanishing theorems, a theorem of Severi, and the equations
defining projective varieties},
journal={J. Amer. Math. Soc.},
volume={4},
date={1991},
number={3},
pages={587--602},
}
\bib{coandua}{article}{
author={Coand\u a, Iustin},
title={A simple proof of Tyurin's Babylonian tower theorem},
journal={Comm. Algebra},
volume={40},
date={2012},
number={12},
pages={4668--4672},
}
\bib{cmpv}{article}{
author = {Caviglia, Giulio},
author = {Chardin, Marc},
author = {McCullough, Jason},
author = {Peeva, Irena},
author = {Varbaro, Matteo},
title = {Regularity of prime ideals},
note = {\url{https://orion.math.iastate.edu/jmccullo/papers/regularityofprimes.pdf}},
}
\bib{hartshorne-bulletin}{article}{
author={Hartshorne, Robin},
title={Varieties of small codimension in projective space},
journal={Bull. Amer. Math. Soc.},
volume={80},
date={1974},
pages={1017--1032},
}
\bib{eisenbud-huneke-vasconcelos}{article}{
author={Eisenbud, David},
author={Huneke, Craig},
author={Vasconcelos, Wolmer},
title={Direct methods for primary decomposition},
journal={Invent. Math.},
volume={110},
date={1992},
number={2},
pages={207--235},
}
\bib{ess-stillman}{article}{
author={Erman, Daniel},
author={Sam, Steven~V},
author={Snowden, Andrew},
title={Big polynomial rings and Stillman's Conjecture},
note={\arxiv{1801.09852v3}}
}
\bib{flenner}{article}{
author={Flenner, Hubert},
title={Babylonian tower theorems on the punctured spectrum},
journal={Math. Ann.},
volume={271},
date={1985},
number={1},
pages={153--160},
}
\bib{holme-schneider}{article}{
author={Holme, Audun},
author={Schneider, Michael},
title={A computer aided approach to codimension $2$ subvarieties of ${\bf
P}_n,\;n \geqq 6$},
journal={J. Reine Angew. Math.},
volume={357},
date={1985},
pages={205--220},
}
\bib{mumford}{article}{
author={Mumford, David},
title={Varieties defined by quadratic equations},
conference={
title={Questions on Algebraic Varieties},
address={C.I.M.E., III Ciclo, Varenna},
date={1969},
},
book={
publisher={Edizioni Cremonese, Rome},
},
date={1970},
pages={29--100},
}
\bib{ran}{article}{
author={Ran, Z.},
title={On projective varieties of codimension $2$},
journal={Invent. Math.},
volume={73},
date={1983},
number={2},
pages={333--336},
}
\bib{sato}{article}{
author={Sato, Ei-ichi},
title={Babylonian Tower theorem on variety},
journal={J. Math. Kyoto Univ.},
volume={31},
date={1991},
number={4},
pages={881--897},
}
\bib{stacks}{misc}{
label={Stacks},
author = {The {Stacks Project Authors}},
title = {Stacks Project},
year = {2017},
note = {\url{http://stacks.math.columbia.edu}},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction\label{sec:Introduction}}
For a Fermi gas at zero temperature, a Fermi surface (FS) naturally arises as the boundary separating
the occupied and empty states in $(\omega,\mathbf{k})$-space.
As is known, when weak interactions/perturbations
and disorders are introduced, some FSs still survive, though the occupation
of states may be shifted dramatically, while others are easily
gapped. This kind of FS stability originates from the topological properties
of the Green's function, or the Feynman propagator
for fermionic particles, $G(\omega,\mathbf{k})=\left(i\omega-\mathcal{H}\right)^{-1}$,
as first pointed out by Volovik in Ref.\cite{Volovik-Book}.
Generally speaking, an FS is robust against weak interactions/perturbations and disorders if it has a nontrivial topological charge that
provides the protection; otherwise it is vulnerable and easily gapped.
The most general case is that of a Hamiltonian not subject to any
symmetry, where the topological charge is formulated by the homotopy
group $\pi_{p}(GL(N,\mathbf{C}))$. This general case was addressed
in Refs.\cite{Volovik-Book,Volovik Vacuum} and analyzed
in the framework of K theory \cite{Horava}. Notably, real physical systems
normally have certain symmetries, making it necessary and significant to develop
a corresponding theory for symmetry-preserving cases.
It is known that the symmetry
of a quantum system always has either a unitary representation
or an anti-unitary one in the corresponding Hilbert space. The unitary symmetries, such as rotation, translation, and parity symmetries, are easily broken by weak interactions/perturbations and disorders, while the anti-unitary symmetries, such as the time-reversal symmetry (TRS) and the charge
conjugation or particle-hole symmetry (PHS), can usually be preserved. We are thus motivated to
develop a unified theory classifying the FSs of systems with these two so-called reality symmetries, namely TRS and PHS, a task of fundamental importance.
In this Letter, taking into account the two reality symmetries and introducing six types of topological charges, we obtain a new, complete classification of all FSs, illustrated explicitly in Tab.{[}\ref{tab:periodicity table}{]}. Moreover, an intrinsic relationship between the symmetry class index and the codimension number is established, which provides a unique way to realize any type of FS with a high codimension.
Let us first introduce how to classify all classes of
Hamiltonians with respect to the above-mentioned two reality symmetries, as done in the random matrix theory\cite{Ramdom Matrix 0,Ramdom Matrix I}. It turns out that the chiral symmetry (CS) has to be introduced for a
complete classification.
If a unitary symmetry represented by $\mathcal{K}$ anti-commutes
with the Hamiltonian $\mathcal{H}$, $i.e.$, $\left\{ \mathcal{K},\mathcal{H}\right\} =0$,
it corresponds to the CS. In fact, the combined symmetry of TRS and PHS
is a kind of chiral symmetry. On the other hand, the product of two chiral symmetries commutes
with $\mathcal{H}$ and brings it to block-diagonal form, so it
is sufficient to consider only one chiral symmetry. Based on these considerations\cite{Ramdom Matrix II,TI Classification III,TI Classification I},
a complete classification of all classes of Hamiltonians can be obtained with
respect to the three symmetries. As is known, the TRS and PHS are both anti-unitary
and can be expressed in the unified form
\begin{equation}
\mathcal{H}(\mathbf{k})=\epsilon_{c}C\mathcal{H}^{T}(-\mathbf{k})C^{-1},\quad CC^{\dagger}=1,\quad C^{T}=\eta_{c}C,\label{eq:reality transformation}
\end{equation}
where $\epsilon_{c}=+1$ corresponds to TRS, $\epsilon_{c}=-1$ to PHS, and $\eta_{c}=\pm1$. As a result, each reality
symmetry may be of three possible types (even, odd, or absent), and thus there
are nine classes. In addition, considering that the chiral symmetry may be either preserved or not
when both reality symmetries are
absent, we have in total
ten classes, as summarized in Tab.{[}\ref{tab:classification}{]}.
This is the famous Cartan classification of Hamiltonians in the random matrix theory\cite{Ramdom Matrix 0,Ramdom Matrix I}, but it may be easily understood in the present framework \cite{TI Classification I,TI Classification III,Ramdom Matrix II,Tl Classification II}.
\begin{table}
\begin{tabular}{|>{\centering}p{0.5cm}|>{\centering}p{0.6cm}>{\centering}p{0.6cm}>{\centering}p{0.6cm}>{\centering}p{0.6cm}>{\centering}p{0.6cm}|>{\centering}p{0.7cm}>{\centering}p{0.7cm}>{\centering}p{0.7cm}>{\centering}p{0.6cm}>{\centering}p{0.6cm}|}
\hline
& \multicolumn{5}{c|}{Non-chiral case} & \multicolumn{5}{c|}{Chiral case}\tabularnewline
\hline
\hline
& A & AI & D & AII & C & AIII & BDI & DIII & CII & CI\tabularnewline
\hline
T & 0 & +1 & 0 & -1 & 0 & 0 & +1 & -1 & -1 & +1\tabularnewline
\hline
C & 0 & 0 & +1 & 0 & -1 & 0 & +1 & +1 & -1 & -1\tabularnewline
\hline
S & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1\tabularnewline
\hline
\end{tabular}
\caption{Classification of Hamiltonians. T and C denote TRS and PHS, respectively, with $0$ indicating
the absence of TRS (PHS) and $\pm1$ denoting the sign of TRS (PHS). S denotes CS, with $0$ and $1$ representing the absence and presence of CS. \label{tab:classification}}
\end{table}
For a system with spatial dimension $d$, an FS is a compact submanifold
of dimension $d-p$ if the $(\omega,\mathbf{k})$-space is compact; we define $p$ as the codimension of the FS. Topological charges can be defined on a chosen $p$-dimensional submanifold in the $(\omega,\mathbf{k})$-space enclosing the FS in its transverse dimensions. The two reality symmetries differ essentially from the CS in that each of them relates a point $\mathbf{k}$
to $-\mathbf{k}$ in $\mathbf{k}$-space,
while the CS acts at every $\mathbf{k}$. This difference leads
to the requirement that a chosen submanifold in $(\omega,\mathbf{k})$-space
be centrosymmetric about the origin in order to preserve either of the
reality symmetries.
Generally, the topological property of an FS depends on its codimension and symmetry~\cite{note1}. After a detailed analysis, we are able to classify all classes in the
form illustrated in Tab.{[}\ref{tab:periodicity table}{]}, which is one of the main results of this work. In each case, as seen in
Tab.{[}\ref{tab:periodicity table}{]}, a type of topological charge is assigned to the FS. The topological charges $0$, $\mathbf{Z}$, $\mathbf{Z}^{(1,2)}_{2}$, and $2\mathbf{Z}$
correspond respectively to zero, an integer, an integer mod $2$, and an even integer. As illustrated in the table, the ten classes can be divided into real and complex cases according to whether or not they have either of the reality symmetries; they can also be divided into chiral and non-chiral cases depending on whether or not they have the CS. From this classification, we find that there are six types of topological charges, which form two groups with respect to the CS, each group consisting of an original charge ($\mathbf{Z}$) and its two descendants ($\mathbf{Z}^{(1)}_2$ and $\mathbf{Z}^{(2)}_2$) \cite{note}. In either the chiral or the non-chiral real case, all four types of FSs can be realized for any given codimension. It is also clear that the complex and real cases have different periodicities: the former is two-fold, while the latter is eight-fold. For the complex case, we introduce a matrix element $C(p,i)$ to denote the topological charges listed in Tab.{[}\ref{tab:periodicity table}{]}, where an odd (even) $i$ corresponds to the class A (AIII) and $p$ is the codimension; $C(p,i)$ then satisfies $C(p,i)=C(p+n,i+n)$ with $n$ an integer and $i$ taken mod $2$. For the real case, we introduce another matrix element $R(p,i)$, where $i=1,\dots,8$ denotes the classes AI, BDI, D, DIII, AII, CII, C and CI, respectively. Intriguingly, we can also find
\begin{equation}
R(p,i)=R(p+n,i+n)
\end{equation}
with $i$ taken mod $8$, which establishes an intrinsic relationship between the symmetry class index and the codimension number. More significantly, based on this relationship, we are able to realize any type of FS with a high codimension by reducing its codimension to an experimentally accessible one with the same CS, $i.e.$, $p=1, 2, 3$. Actually, the above relation, as well as the two-fold and eight-fold periodicities, originates from the Bott periodicity of $GL(N,\mathbf{C})$. In particular, the present eight-fold periodicity is induced by the two reality symmetries imposed on the two-fold Bott periodicity, and is thus different from the eight-fold Bott periodicity of $O(N)$ and $Sp(N)$\cite{Nakahara}.
\begin{table}
\begin{tabular}{|>{\centering}p{0.6cm}|>{\centering}p{0.6cm}>{\centering}p{0.6cm}>{\centering}p{0.6cm}>{\centering}p{0.6cm}|>{\centering}p{0.7cm}>{\centering}p{0.7cm}>{\centering}p{0.7cm}>{\centering}p{0.7cm}|}
\hline
& \multicolumn{4}{c|}{Non-chiral case} & \multicolumn{4}{c|}{Chiral case}\tabularnewline
\hline
\hline
& \multicolumn{8}{c|}{Complex case}\tabularnewline
\hline
& \multicolumn{4}{c|}{A} & \multicolumn{4}{c|}{AIII}\tabularnewline
\hline
$p\diagdown i$ & \multicolumn{4}{c|}{$\mathbf{1}$} & \multicolumn{4}{c|}{$\mathbf{2}$}\tabularnewline
\hline
1 & \multicolumn{4}{c|}{$\mathbf{Z}$} & \multicolumn{4}{c|}{$0$}\tabularnewline
\hline
2 & \multicolumn{4}{c|}{$0$} & \multicolumn{4}{c|}{$\mathbf{Z}$}\tabularnewline
\hline
$\vdots$ & \multicolumn{4}{c|}{$\vdots$} & \multicolumn{4}{c|}{$\vdots$}\tabularnewline
\hline
\hline
& \multicolumn{8}{c|}{Real case}\tabularnewline
\hline
& AI & D & AII & C & BDI & DIII & CII & CI\tabularnewline
\hline
$p\diagdown i$ & $\mathit{1}$ & $\mathit{3}$ & $\mathit{5}$ & $\mathit{7}$ & $\mathit{2}$ & $\mathit{4}$ & $\mathit{6}$ & $\mathit{8}$\tabularnewline
\hline
1 & $0$ & $\mathbf{Z}$ & $\mathbf{Z}^{(2)}_{2}$ & $2\mathbf{Z}$ & $0$ & $\mathbf{Z}^{(1)}_{2}$ & $0$ & $0$\tabularnewline
\hline
2 & $0$ & $0$ & $\mathbf{Z}^{(1)}_{2}$ & $0$ & $0$ & $\mathbf{Z}$ & $\mathbf{Z}^{(2)}_{2}$ & $2\mathbf{Z}$\tabularnewline
\hline
3 & $2\mathbf{Z}$ & $0$ & $\mathbf{Z}$ & $\mathbf{Z}^{(2)}_{2}$ & $0$ & $0$ & $\mathbf{Z}^{(1)}_{2}$ & $0$\tabularnewline
\hline
4 & $0$ & $0$ & $0$ & $\mathbf{Z}^{(1)}_{2}$ & $2\mathbf{Z}$ & $0$ & $\mathbf{Z}$ & $\mathbf{Z}^{(2)}_{2}$\tabularnewline
\hline
5 & $\mathbf{Z}^{(2)}_{2}$ & $2\mathbf{Z}$ & $0$ & $\mathbf{Z}$ & $0$ & $0$ & $0$ & $\mathbf{Z}^{(1)}_{2}$\tabularnewline
\hline
6 & $\mathbf{Z}^{(1)}_2$ & $0$ & $0$ & $0$ & $\mathbf{Z}^{(2)}_{2}$ & $2\mathbf{Z}$ & $0$ & $\mathbf{Z}$\tabularnewline
\hline
7 & $\mathbf{Z}$ & $\mathbf{Z}^{(2)}_{2}$ & $2\mathbf{Z}$ & $0$ & $\mathbf{Z}^{(1)}_{2}$ & $0$ & $0$ & $0$\tabularnewline
\hline
8 & $0$ & $\mathbf{Z}^{(1)}_{2}$ & $0$ & $0$ & $\mathbf{Z}$ & $\mathbf{Z}^{(2)}_{2}$ & $2\mathbf{Z}$ & $0$\tabularnewline
\hline
$\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$ & $\vdots$\tabularnewline
\hline
\end{tabular}
\caption{Classification of Fermi surfaces. $p$ is the codimension of an FS and $i$ is the index of symmetry classes.
\label{tab:periodicity table}}
\end{table}
Let us look at the most general case, $i.e.$, the class A, in the absence of any
symmetry. The basic idea of classifying FSs in terms of topology
may be learnt from this case, which was already discussed in Ref.\cite{Horava}; we follow it here for a pedagogical
purpose. The Green's function can be written as
\begin{equation}
G(\omega,\mathbf{k})=\frac{1}{i\omega-\mathcal{H}(\mathbf{k})},\label{eq:Green's function}
\end{equation}
which is regarded as an $N\times N$ matrix. FSs are defined
to be connected to zero energy. Formulating topological charges in terms of the Green's function has an advantage when handling interacting systems, as addressed in \cite{Order-parameter of TI,Horava,Interacting}.
Generally, the FS consists of branches
of compact manifolds in $\mathbf{k}$-space. For a specific branch
of dimension $d-p$, a $p$-dimensional sphere
$S^{p}$ can always be chosen in $(\omega,\mathbf{k})$-space to enclose this branch in its transverse dimensions. $G^{-1}(\omega,\mathbf{k})$ is nonsingular at each $(\omega,\mathbf{k})$
on the $S^{p}$; in other words, it is a member of $GL(N,\mathbf{C})$.
Then $G^{-1}(\omega,\mathbf{k})$ restricted to the $S^{p}$ can be
regarded as a mapping from $S^{p}$ to $GL(N,\mathbf{C})$. Since all
such mappings are classified by the homotopy group $\pi_{p}(GL(N,\mathbf{C}))$,
$G^{-1}(\omega,\mathbf{k})$ on the $S^{p}$ lies in a certain homotopy
class. In the so-called stable regime, $N>\frac{p}{2}$, $\pi_{p}(GL(N,\mathbf{C}))$
satisfies the Bott periodicity\cite{Nakahara},
\[
\pi_{p}\left(GL(N,\mathbf{C})\right)\cong\begin{cases}
\mathbf{Z} & p\ \text{odd}\\
0 & p\ \text{even}
\end{cases}.
\]
Since physical systems are always in the stable regime, only the mappings
on $S^{p}$ for FSs with odd codimension $p=2n+1$ can
lie in a nontrivial homotopy class, while those with even codimension
are always trivial. In other words, FSs with even
codimension $p$ are always trivial and vulnerable to weak perturbations,
while those with odd codimension are classified by $\pi_{p}(GL(N,\mathbf{C}))\cong\mathbf{Z}$,
and the stability of a topologically nontrivial one is protected
against any weak perturbations. We emphasize that since the class
A is not subject to any symmetry, the perturbation can be quite
arbitrary. In addition, the concrete shape of the $S^{p}$ is irrelevant
in principle, and the only requirement is that it is an orientable
compact submanifold in $(\omega,\mathbf{k})$-space with dimension
$p$, which is distinctly different from the cases of reality symmetries. The
homotopy number of the mapping associated with an FS is called the
topological charge of the FS, which can be calculated from
the following formula,
\begin{equation}
N_{p}=C_{p}\int_{S^{p}}\mathbf{tr}\left(G\mathbf{d}G^{-1}\right)^{p}\label{eq:homotopic number}
\end{equation}
with
\[
C_{p}=-\frac{n!}{(2n+1)!(2\pi i)^{n+1}}.
\]
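For the simplest case $p=1$ ($n=0$), $N_{1}$ is just the winding number of $G^{-1}$ around the FS. As a numerical sketch (our illustration, using a conventional one-band Fermi point with $\varepsilon(k)=k$; the overall sign depends on orientation conventions):

```python
import numpy as np

# Winding of G^{-1}(omega, k) = i*omega - eps(k), eps(k) = k, on a small
# circle S^1 in the (omega, k) plane enclosing the Fermi point (p = 1).
# |N_1| = 1 signals the topological protection of an ordinary FS.
s = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
omega, k = np.sin(s), np.cos(s)
g_inv = 1j * omega - k
# total accumulated phase of g_inv along the loop, in units of 2*pi
n1 = np.angle(np.roll(g_inv, -1) / g_inv).sum() / (2.0 * np.pi)
print(int(round(abs(n1))))  # → 1
```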
The $\mathbf{Z}$-type of FSs in the class A is
thus obtained. Once an FS carries a nontrivial topological
charge, it can survive under any weak perturbations
in the absence of any symmetry\cite{Hubbard}, while topologically trivial FSs
can be gapped by perturbations. Both cases can be seen in
$^{3}He$\cite{Volovik-Book}, where the two Fermi points in the $^{3}He$-$A$ phase are topologically protected with charges $\pm2$, respectively, while the Fermi line in the
planar phase cannot be topologically charged because its codimension is even. The FS of another simple model Hamiltonian, $\mathcal{H}=\mathbf{g}(\mathbf{k})\cdot\mathbf{\sigma}$ with $\sigma_i$ the Pauli matrices, also belongs to this topological class\cite{Supp}.
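For the two-band model $\mathcal{H}=\mathbf{g}(\mathbf{k})\cdot\mathbf{\sigma}$, the charge of an isolated Fermi point ($p=3$) reduces, up to normalization and sign conventions, to the degree of the map $\hat{\mathbf{g}}\colon S^{2}\to S^{2}$. The sketch below (our illustration, not from the original) evaluates this degree numerically for the Weyl-like case $\mathbf{g}(\mathbf{k})=\mathbf{k}$ and for a double-winding variant:

```python
import numpy as np

def degree(g_hat, n_th=200, n_ph=200):
    """Numerically evaluate deg = (1/4pi) \oint g.(d_th g x d_ph g) dth dph
    for a map g_hat(theta, phi): S^2 -> S^2 returning a (3, ...) array."""
    th = (np.arange(n_th) + 0.5) * np.pi / n_th          # midpoints avoid poles
    ph = (np.arange(n_ph) + 0.5) * 2.0 * np.pi / n_ph
    T, P = np.meshgrid(th, ph, indexing="ij")
    G = np.asarray(g_hat(T, P), dtype=float)
    G = G / np.linalg.norm(G, axis=0)                    # normalize g_hat
    dT = np.gradient(G, th, axis=1)
    dP = np.gradient(G, ph, axis=2)
    integrand = np.einsum("iab,iab->ab", G, np.cross(dT, dP, axis=0))
    return integrand.sum() * (np.pi / n_th) * (2.0 * np.pi / n_ph) / (4.0 * np.pi)

def weyl(t, p):
    """g_hat = k_hat for the Weyl-like point H = k.sigma (a degree-1 map)."""
    return np.array([np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)])

print(round(degree(weyl)))                         # → 1
print(round(degree(lambda t, p: weyl(t, 2 * p))))  # → 2  (double winding in phi)
```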
We now turn to the case where the system has a chiral
symmetry denoted by $\mathcal{K}$, $i.e.$,
\[
\left\{ \mathcal{H},\mathcal{K}\right\} =0.
\]
Systems with this symmetry have a crucial property: for any eigenstate
$|\alpha\rangle$ of $\mathcal{H}$ with energy $E_{\alpha}$, $\mathcal{K}|\alpha\rangle$
is also an eigenstate of $\mathcal{H}$ but with energy $-E_{\alpha}$. In the basis that diagonalizes $\mathcal{K}$, the
Hamiltonian can always be brought to the block off-diagonal form in $\mathbf{k}$-space
\[
h(\mathbf{k})=\begin{pmatrix}0 & u(\mathbf{k})\\
u^{\dagger}(\mathbf{k}) & 0
\end{pmatrix}.
\]
On the $S^{p}$ chosen to enclose an FS of dimension $d-p$,
$h(\mathbf{k})$ also takes this form, which makes the topological charge
defined in Eq.~(\ref{eq:homotopic number}) vanish identically. However,
$u(\mathbf{k})$ can be regarded as a mapping from $S^{p-1}$ (setting
$\omega=0$) to $GL(N/2,\mathbf{C})$, which may lie in a nontrivial
homotopy class of $\pi_{p-1}(GL(N/2,\mathbf{C}))$. In this sense, there exist
nontrivial FSs for even codimension, $i.e.$, $p=2n$. The
homotopy number may be calculated as
\[
\nu_{p}=C_{p-1}\int_{S^{p-1}}\mathbf{tr}\left(u(\mathbf{k})\mathbf{d}u^{-1}(\mathbf{k})\right)^{p-1}.
\]
As the homotopy number is real, it can also be calculated from $u^{\dagger}(\mathbf{k})$;
thus it can equally be expressed in terms of the Green's function:
\begin{equation}
\nu_{p}=\frac{C_{p-1}}{2}\int_{S^{p-1}}\mathbf{tr}\left(\mathcal{K}\left(G\mathbf{d}G^{-1}\right)^{p-1}|_{\omega=0}\right).\label{eq:general chiral charge}
\end{equation}
This homotopy number is referred to as the chiral topological charge of
the FS.
For this $\mathbf{Z}$-type topological charge in the class AIII, if
it is nontrivial, the FS is stable
against any perturbations that do not break the CS. However, the
topological protection is not as robust as that induced by Eq.~(\ref{eq:homotopic number}),
because the FS is gapped once the CS is broken. For instance, the honeycomb lattice model has two Dirac cones with the low-energy effective Hamiltonians $\mathcal{H}_\alpha=k^{\pm}_x\sigma_1\pm k_y^{\pm}\sigma_2$, respectively, where $\sigma_3$ represents the chiral symmetry, which is the sublattice symmetry here \cite{Haldane model}. The two Dirac cones have $\nu_2=\pm1$, respectively\cite{Supp}, and thus they are topologically stable as long as this chiral symmetry is preserved. Remarkably, the stability of the FSs in this model, in accord with the present theory, was recently verified in an ultra-cold-atom experiment\cite{Dirac Points}. It was also clearly seen in this experiment that if the sublattice symmetry is broken, a gap may open over the whole Brillouin zone. This type of topological charge has also been found in a superconducting system\cite{SConductor}. Another interesting model Hamiltonian, $\mathcal{H}=(k^2_x-k^2_y)\sigma_1\pm2k_xk_y\sigma_2$, also carries this type of topological charge, with $\nu_2=\pm2$ \cite{Supp}.
When TRS and PHS are considered, many $\mathbf{Z}$-type topological charges vanish. As for the first descendant ($i.e.\,\,\mathbf{Z}^{(1)}_2$) of a $\mathbf{Z}$-type in a non-chiral real case, $e.g.$
the case of codimension 2 that is labeled one row above that of codimension 3 ($\mathbf{Z}$-type) in the class AII in Tab.{[}\ref{tab:periodicity table}{]},
the Green's function restricted on the chosen $S^{p}$ can be classified by a $\mathbf{Z}_2$-type topological charge.
The key idea is that $G(\omega,\mathbf{k})|_{S^p}$ can be continuously extended
to $\tilde{G}(\omega,\mathbf{k};u)$ on the $(p+1)$-dimensional disk by introducing
an auxiliary parameter $u$ (ranging from $0$ to $1$) with the two
requirements\label{sub: requirement I}: $i)$ $\tilde{G}(\omega,\mathbf{k};0)|_{S^{p}}=G(\omega,\mathbf{k})|_{S^{p}}$,
and $ii)$ $\tilde{G}(\omega,\mathbf{k};1)|_{S^{p}}$ is a diagonal
matrix, such that $\tilde{G}_{\alpha\alpha}=\left(i\omega-\Delta\right)^{-1}$ for
empty bands and $\tilde{G}_{\beta\beta}=(i\omega+\Delta)^{-1}$ for
occupied bands, where $\Delta$ is a positive constant\cite{Order-parameter of TI,Xiao-Liang PRB}.
The validity of the extension is based on the fact that $G(\omega,\mathbf{k})$
restricted on $S^{p}$ is always trivial in the homotopic sense in
this case. This topological charge in terms of Green's function is
formulated as
\begin{equation}
N_{p}^{(1)}=C'_{p}\int_{S^{p}}\int_{0}^{1}du\;\mathbf{tr}\left(\left(G\mathbf{d}G^{-1}\right)^{p}G\partial_{u}G^{-1}\right)\mod\:2\label{eq:Z_2 son for non-chiral case}
\end{equation}
with
\[
C_{p}^{'}=-\frac{2(p/2)!}{p!(2\pi i)^{(p/2)+1}},
\]
where ``$\sim$'' has been dropped for brevity. The topological
charge is defined in a similar way to the WZW-term in quantum field theory\cite{WZW-term}
and the $\mathbf{Z}_{2}$ character comes from the fact that two different
extensions differ by an even integer, as demonstrated in Ref.\cite{Order-parameter of TI}. As a concrete illustration, we exemplify this $\mathbf{Z}_2$-type topological charge with a model Hamiltonian in two spatial dimensions: $\mathcal{H}(\mathbf{k})=k_x\sigma_1+k_y\sigma_2+(k_x+k_y)\sigma_3$, which has only a TRS with $C=i\sigma_2$ and $\eta=-1$ according to Eq.(\ref{eq:reality transformation}), and thus belongs to the class AII in Tab.{[}\ref{tab:classification}{]}, corresponding to the case $R(2,5)$ in Tab.{[}\ref{tab:periodicity table}{]}. The corresponding FS at the origin in the $(\omega,\mathbf{k})$-space
is found to have a nontrivial topological charge $N_{2}^{(1)}=1$ from Eq.(\ref{eq:Z_2 son for non-chiral case})\cite{Supp}.
For the second descendant with codimension $p$ ($i.e.\,\,\mathbf{Z}^{(2)}_2$), $e.g.$ the case
of codimension 3, which is two rows above that of codimension 5 in the class C in Tab.{[}\ref{tab:periodicity table}{]},
a $\mathbf{Z}_{2}$-type topological charge can also be defined on the
chosen $S^{p}$. To define the $\mathbf{Z}_{2}$-type topological charge,
the $G(\omega,\mathbf{k})|_{S^{p}}$ is smoothly extended to a two-dimensional
torus $T^{2}$ parameterized by the two auxiliary parameters $u$
and $v$ (both ranging from $-1$ to $1$ ) with the three requirements:
$i)$ $\tilde{G}(\omega,\mathbf{k};0,0)|_{S^{p}}=G(\omega,\mathbf{k})|_{S^{p}}$; and
$ii)$ $\tilde{G}(\omega,\mathbf{k};u,v)|_{S^{p}}=\epsilon_{c}C\tilde{G}^{T}(\omega,-\mathbf{k};-u,-v)|_{S^{p}}C^{-1}$,
referring to Eq.(\ref{eq:reality transformation}); $iii)$ $\tilde{G}(\omega,\mathbf{k};1,1)|_{S^{p}}$
corresponds to a trivial system, such that $\tilde{G}_{\alpha\alpha}=\left(i\omega-\Delta\right)^{-1}$ for
empty bands and $\tilde{G}_{\beta\beta}=(i\omega+\Delta)^{-1}$ for
occupied bands. The corresponding topological
charge is defined as\cite{Xiao-Liang PRB,Order-parameter of TI}
\begin{equation}
N_{p}^{(2)}=C_{p+2}\int_{S^{p}\times T^{2}}\:\mathbf{tr}\left(G\mathbf{d}G^{-1}\right)^{p+2}\mod2,\label{eq:Z_2 grandson fro non-chiral case}
\end{equation}
where ``$\sim$'' has been dropped and
the $C_{p+2}$ is defined in Eq.(\ref{eq:homotopic number}). As a topological charge of FSs, its physical meaning is analogous to that
of Eq.(\ref{eq:Z_2 son for non-chiral case}).
Similar to the non-chiral case, a $\mathbf{Z}$ chiral topological
charge is associated with two descendants: son and grandson. The $\mathbf{Z}_{2}$
topological charge of the son originates from the chiral topological
charge defined in Eq.(\ref{eq:general chiral charge}), in the same
way that Eq.(\ref{eq:Z_2 son for non-chiral case}) originates from
Eq.(\ref{eq:homotopic number}); it is given by
\begin{equation}
\begin{split}
\nu_{p}^{(1)}=&\frac{C'_{p-1}}{2}\int_{S^{p-1}}\int_{0}^{1}du\;\\
&\mathbf{tr}\left(\mathcal{K}\left(G\mathbf{d}G^{-1}\right)^{p-1}G\partial_{u}G^{-1}|_{\omega=0}\right)\mod\:2,\label{eq:Chiral son charge}
\end{split}
\end{equation}
where $\mathcal{K}$ is the matrix representing the chiral symmetry,
$S^{p-1}$ is the chosen $S^{p}$ restricted to $\omega=0$, and $G$
is extended by an auxiliary parameter ranging from $0$ to $1$ under
the same requirements as those introduced for Eq.(\ref{eq:Z_2 son for non-chiral case}). The $\mathbf{Z}_{2}$ topological charge for the grandson can also
be defined in the same spirit as Eq.(\ref{eq:Z_2 grandson fro non-chiral case}).
We extend the $G(\omega,\mathbf{k})|_{S^{p}}$ to a two-dimensional
torus $T^{2}$ parameterized by two auxiliary parameters $u$ and
$v$ (both ranging from $-1$ to $1$) with the four requirements:
the first three requirements are the same as those of the non-chiral
counterpart in Eq.(\ref{eq:Z_2 grandson fro non-chiral case}); while the
fourth one is that the chiral symmetry is preserved on $T^{2}$, which
also ensures that either of the reality symmetries is applicable in the second
requirement. The $\mathbf{Z}_{2}$ topological charge is written as
\begin{equation}
\nu_{p}^{(2)}=\frac{C_{p+1}}{2}\int_{S^{p-1}\times T^{2}}\:\mathbf{tr}\left(\mathcal{K}\left(G\mathbf{d}G^{-1}\right)^{p+1}|_{\omega=0}\right)\mod2,\label{eq:chiral grandson charge}
\end{equation}
where $C_{p+1}$ is defined in Eq.(\ref{eq:homotopic number}).
The physical meanings of the topological charges for the eight classes with TRS and/or PHS are elaborated as
follows. The FS(s) with a given codimension is always distributed centrosymmetrically, since either TRS or PHS relates $\mathbf{k}$ to $-\mathbf{k}$. There are thus two possibilities: $i)$ the FS with codimension $p$ resides at the origin of the $(\omega,\mathbf{k})$-space; $ii)$ the FS(s) is centrosymmetric away from the origin. In the first case, we can choose an $S^p$ in the $(\omega,\mathbf{k})$-space to enclose the FS, and use the corresponding formula to calculate the topological charge. If the topological charge is nontrivial, the FS is stable against perturbations provided the corresponding symmetries are preserved.
In the second case, the FS(s) is usually spherically distributed, so two $S^{p}$s can be chosen
to sandwich the FS(s) in its transverse dimensions. The difference between the topological charges calculated on the two $S^p$s is the topological charge of the FS(s); if it is nontrivial, the FS(s) is topologically stable as long as the corresponding symmetries are preserved.
Before concluding this paper, we wish to emphasize that the topological charges of FSs addressed here are closely connected to topological insulators/superconductors\cite{Zhao1}, and this work may therefore provide new insight into their study.
To conclude, FSs in all ten symmetry classes have been classified in terms of topological charges. It has been shown that when an FS carries a nontrivial topological charge, it is topologically protected by the corresponding symmetry.
\begin{acknowledgments}
We thank G. E. Volovik for helpful discussions. This work was supported
by the GRF (HKU7058/11P), the CRF (HKU8/11G) of Hong Kong, the URC fund of HKU, and
the SKPBR of China (Grant No. 2011CB922104).
\end{acknowledgments}
\section{Introduction}
\label{Introduction}
Recommender systems suggest items (movies, books, music, news, services, etc.) that appear most likely to interest a particular user. Matching users with the most desirable items helps enhance user satisfaction and loyalty. Therefore, many e-commerce leaders such as Amazon and Netflix have made recommender systems a salient part of their services \citep{koren2009matrix}.
Currently, most recommendation techniques leverage user-provided feedback data to infer user preferences \citep{chen2015recommender}.
Typically, recommender systems are based on collaborative filtering (CF) \citep{koren2011advances, aldrich2011recommender}, where the preferences of a user are predicted by collecting rating information from other similar users or items \citep{ma2008sorec}.
Many recent studies have contributed extensions to the basic Probabilistic Matrix Factorisation (PMF) by incorporating additional information.
Despite their popularity and good accuracy, recommender systems based on latent factor models encounter important problems in practical applications \citep{zafari2016feature}. In particular, these models assume that all values of an item feature are equally preferred by all users.
Another major problem with latent factor models based on matrix factorisation is that they do not usually take conditional preferences into consideration \citep{liu2015conditional}.
Furthermore, in general, latent factor models do not consider the effect of social relationships on user preferences, which encompasses peer selection (homophily) and social influence \citep{Lewis03012012, zafarani2014social}.
In previous work, we addressed the problem of modelling the socially-influenced conditional feature value preferences, and proposed CondTrustFVSVD \citep{zafari2017modelling}.
Since data usually changes over time, the models should continuously update to reflect the present state of data \citep{koren2010collaborative}.
A major problem with most recent recommender systems is that they ignore the drifting nature of preferences \citep{zafari2017modelling}. Modelling time-drifting data is a central problem in data mining. Drifting preferences can be considered a particular type of concept drift, which has received much attention from researchers in recent years \citep{widmer1996learning}. However, very few recommendation models have considered the drifting nature of preferences \citep{chatzis2014dynamic}. Changes in user preferences can originate from substantial reasons or from transient and circumstantial ones. For example, items can undergo \textit{seasonal changes}, or some items may experience \textit{periodic changes}, for instance becoming popular during specific holidays.
Apart from short-term changes, user preferences are also subject to long-term drifts. For example, a user may be a fan of romantic or action movies at a younger age, while his/her preference may shift towards drama movies as he/she gets older. Users may also change their rating scale over time. For example, a user may be very strict and give 3 out of 5 to the best movie, but may become less strict with age and be more willing to give the full rating when fully satisfied. A similar situation applies to movies: a movie may receive generally high/low ratings in one time period and lower/higher ratings in another \citep{koren2010collaborative}. Therefore, a preference model should be able to distinguish between different types of preference drift and model them individually in order to achieve the highest accuracy.
In recommender systems research, six major aspects to the preferences have been identified. These aspects include \textit{feature preferences} (\citep{zafari2015dopponent,salakhutdinov2011probabilistic}), \textit{feature value preferences} (\citep{zafari2016popponent,zafari2017popponent,zhang2014explicit}), \textit{socially-influenced preferences} (\citep{zafari2017modelling,zhao2015probabilistic,ma2008sorec,ma2011recommender,jamali2010matrix}), \textit{temporal dynamics} (\citep{koren2010collaborative}), \textit{conditional preferences} (\citep{liu2015conditional}), and \textit{user and item biases} (\citep{koren2011advances}). Feature value preferences refer to the relative favourability of each one of the item feature values, social influence describes the influence of social relationships on the preferences of a user, temporal dynamics means the drift of the preferences over time, conditional preferences refer to the dependencies between item features and their values, and user and item biases pertain to the systematic tendencies for some users to give higher ratings than others, and for some items to receive higher ratings than others (\citep{koren2011advances}). Modelling the temporal properties of these preference aspects is the central theme of this paper.
In this paper, we extend our previous work \citep{zafari2017modelling} by considering the drifting nature of preferences and their constituent aspects. We assume that the socially-influenced preferences over features and the conditional preferences over feature values, as well as the user and item rating scales, can be subject to temporal drift.
Therefore, the two major research questions addressed in this paper are:
\begin{itemize}
\item How can we efficiently model the drifting behaviour of preferences, and how much improvement does incorporating such information bring?
\item Which aspects are more subject to temporal changes, and how is this related to the domain on which the model is trained?
\end{itemize}
The current work proposes a novel latent factor model based on matrix factorisation to address these two questions.
This paper makes two major contributions to the field.
First, we further improve the accuracy of CondTrustFVSVD, a model that we proposed earlier and that proved to be the most accurate among a large set of state-of-the-art models; the additional improvements are achieved by incorporating the \textit{temporal dynamics of preference aspects}. Second, we draw conclusions about the dynamicity of preference aspects by analysing their temporal behaviour with a component-based approach, and show which aspects are more subject to drift over time. This research provides useful insights into the \textit{accurate modelling of preferences and their temporal properties} and helps pave the way for boosting the performance of recommender systems. The findings suggest that the temporal aspects of user preferences can vary from one domain to another; therefore, \textit{modelling domain-dependent temporal effects of preference aspects} is critical for improving the quality of recommendations.
The rest of the paper is organised as follows: related work is introduced in section \ref{Related Work}. In section \ref{Brief introduction of PMF and CondTrustFVSVD}, we briefly introduce probabilistic matrix factorisation and CondTrustFVSVD. In section \ref{Time-Aware CondTrustFVSVD (Aspect-MF)}, we introduce Aspect-MF to overcome the challenge of learning drifting conditional socially-influenced preferences over feature values. In section \ref{Experiments}, we first explain the experimental setup and then report the results of Aspect-MF on two popular recommendation datasets. Finally, we conclude the paper in section \ref{Conclusion and Future Work} by summarising the main findings and outlining future directions of this work.
\section{Related work}
\label{Related Work}
Collaborative Filtering models are broadly classified into memory-based and model-based approaches. Memory-based (or instance-based) methods predict user preferences based on the preferences of similar users or the similarity of items. Item-based approaches in memory-based CF \citep{d2015sentiment} calculate the similarity between items and recommend items similar to those the user has liked in the past, whereas user-based approaches recommend items that have been liked by similar users \citep{ma2008sorec}.
The time-dependent collaborative filtering models are also classified into the memory-based time-aware recommenders and model-based time-aware recommenders \citep{xiang2009time}.
\subsection{Model-based time-aware recommenders}
The models in this category usually fall into four classes: 1) models based on probabilistic matrix factorisation, 2) models based on Bayesian probabilistic matrix factorisation, 3) models based on probabilistic tensor factorisation, and 4) models based on Bayesian probabilistic tensor factorisation.
\subsubsection{Models based on probabilistic matrix factorisation}
Modelling the drifting preferences using a model-based approach based on PMF has first been considered by Koren \citep{koren2010collaborative} in TimeSVD++. TimeSVD++ builds on the previous model called SVD++ \citep{koren2009matrix}, in which the user preferences are modelled through a latent factor model that incorporates the user bias, item bias, and also the implicit feedback given by the users.
For each of these preference aspects, Koren \citep{koren2010collaborative} used a time-dependent factor to capture both transient and long-term shifts, and showed that TimeSVD++ achieves significant improvements over SVD++ at a daily granularity \citep{xiang2009time}.
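As one concrete example of such a time-dependent factor, the user-bias drift of TimeSVD++ can be sketched as follows (variable names are ours; the functional form follows Koren \citep{koren2010collaborative}):

```python
import numpy as np

def dev_u(t, t_u, beta=0.4):
    """Long-term drift term used in TimeSVD++ for the user bias:
    sign(t - t_u) * |t - t_u|**beta, with t_u the mean rating date
    of user u and beta tuned around 0.4 (Koren, 2010)."""
    return np.sign(t - t_u) * np.abs(t - t_u) ** beta

def user_bias(t, b_u, alpha_u, t_u, b_ut=0.0):
    """Time-dependent user bias b_u(t) = b_u + alpha_u * dev_u(t) + b_{u,t}:
    a static part, a slow long-term drift, and a per-day transient b_{u,t}."""
    return b_u + alpha_u * dev_u(t, t_u) + b_ut
```

The same device is applied to the item bias and the user factors, which is what allows each aspect to drift at its own granularity.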
In TrustFVSVD \citep{zafari2017modelling}, we extended TrustSVD by adding the preferences over feature values and the conditional dependencies between the features. We did this by adding additional matrices that captured the feature value discrepancies, where the values of these matrices were related to the values of the social influence matrix. In TrustFVSVD, the explicit influence of the social relationships on each one of the aspects of preferences were captured. Through comprehensive experiments on three benchmark datasets, we showed that TrustFVSVD significantly outperformed TrustSVD and a large set of state of the art models. However, similar to most of the state of the art models, in TrustFVSVD, we assumed that the preferences are static.
Another model-based time-aware recommendation model was proposed by Koenigstein, Dror and Koren \citep{koenigstein2011yahoo}. In this model, the authors use session factors to model specific user behaviour in music listening sessions.
Unlike TimeSVD++, which is domain-independent, this model was developed especially for the music domain. First, it enhances the bias values in SVD++ by letting item biases share components for items linked by the taxonomy. For example, the tracks in a good album may all be rated higher than average, or a popular artist may receive higher ratings than the average item. Therefore, shared bias parameters are added to items with a common ancestor in the item taxonomy hierarchy. Similarly, users may tend to rate artists or genres higher than songs, so the user bias is also enhanced with the item type. It is also assumed that, unlike in the movie domain, in music it is common for users to listen to many songs and rate them consecutively; such consecutive ratings may be similar due to various psychological phenomena.
The advantage of the models proposed by Koenigstein, Dror and Koren \citep{koenigstein2011yahoo} and Koren \citep{koren2010collaborative} that extend SVD++ is that they enable the capturing of dynamicity of the preference aspects with a high granularity for aspects that are assumed to be more subject to temporal drift. Furthermore, as shown in \citep{koenigstein2011yahoo}, domain-dependent temporal aspects of the preferences and their individual aspects can also be taken into consideration.
Jahrer, Toscher and Legenstein \citep{jahrer2010combining} split the rating matrix into several matrices, called bins, based on the rating time stamps. For each bin, a separate time-unaware model is trained, producing an estimated rating value obtained from the ratings in that bin. Each bin is assigned a weight, and the final rating is obtained by combining the ratings produced by the models trained on each bin. Using this approach, they combine multiple time-unaware models into a single time-aware model. The disadvantage of this model is that the ratings matrix is usually sparse to begin with, and it becomes even sparser when the ratings are split into bins.
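The binning step can be sketched as follows (the function names and the equal-width binning scheme are our assumptions; the original work does not prescribe this exact implementation):

```python
import numpy as np

def split_into_bins(timestamps, n_bins):
    """Assign each rating to one of n_bins equal-width time bins."""
    edges = np.linspace(timestamps.min(), timestamps.max(), n_bins + 1)
    return np.clip(np.digitize(timestamps, edges) - 1, 0, n_bins - 1)

def blend(bin_predictions, weights):
    """Combine the per-bin (time-unaware) predictions with learned weights."""
    return sum(w * p for w, p in zip(weights, bin_predictions))
```

A separate time-unaware model is then fitted to the ratings of each bin, and `blend` produces the final time-aware prediction.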
A similar approach is followed in the model proposed by Liu and Aberer \citep{liu2013soco}, who systematically integrated contextual information and social network information into a matrix factorisation model to improve recommendations. To overcome the sparsity caused by training separate models per time-stamp, they applied a random decision trees algorithm to create a hierarchy of time-stamps. For example, the ratings can be split by year at the first level, month at the second level, day at the third level, and so on. They argue that ratings given in similar time intervals correlate better with each other, which justifies such clustering. They also added the influence of social friends to the model through a context-aware similarity function, in which users who give ratings similar to those of their friends in similar contexts get higher similarity values. Consequently, the effect of time on social influence is also indirectly taken into consideration.
Baltrunas, Ludwig and Ricci \citep{baltrunas2011matrix} argued that methods based on tensor factorisation can improve the accuracy when the datasets are large. Tensor factorisation requires the addition of a large number of model parameters that must be learned.
When the datasets are small, simpler models with fewer parameters can perform equally well or better. In their method, a matrix is added to capture the influence of contextual factors (e.g. time) on the user preferences by modelling the interaction of contextual conditions with the items.
Although the model is quite simple and fast, it does not include the effect of time on individual preference aspects. Unlike the models proposed by Koenigstein, Dror and Koren \citep{koenigstein2011yahoo} and Koren \citep{koren2010collaborative}, it cannot capture fine-grained and domain-specific dynamics.
\subsubsection{Models based on Bayesian probabilistic matrix factorisation}
BPMF extends basic matrix factorisation \citep{salakhutdinov2008bayesian} by assuming Gaussian-Wishart priors on the user and item regularisation parameters and letting the hyper-parameters be trained along with the model parameters. Dynamic BPMF (dBPMF) is a non-parametric Bayesian dynamic relational data modelling approach based on Bayesian probabilistic matrix factorisation \citep{luo2016bayesian}. This model imposes a dynamic hierarchical Dirichlet process (dHDP) prior over the space of probabilistic matrix factorisation models to capture the time-evolving statistical properties of the modelled sequential relational datasets. The dHDP models the time-evolving statistical properties of sequential datasets by linking the statistical properties of data collected at consecutive time points via a random parameter that controls their probabilistic similarity.
\subsubsection{Models based on probabilistic tensor factorisation}
In tensor factorisation methods, the context variables are modelled in the same way as the users and items are modelled in matrix factorisation techniques, by considering the interaction between users-items-context. In tensor factorisation methods, the three dimensional user-item-context ratings are factorised into three matrices, a user-specific matrix, an item-specific matrix, and a context-specific matrix.
A model in this category is proposed by Karatzoglou et al. \citep{karatzoglou2010multiverse}, who used Tensor Factorisation with CP-decomposition, and proposed multi-verse recommendation, which combines the data pertaining to different contexts into a unified model. Therefore, similar to the model proposed by Baltrunas, Ludwig and Ricci \citep{baltrunas2011matrix}, other contextual information besides time (e.g. user mode, companionship) can also be taken into consideration. However, unlike Baltrunas, Ludwig and Ricci \citep{baltrunas2011matrix}, they factorise the rating tensor into four matrices, a user-specific matrix, an item-specific matrix, a context-specific matrix, and a central tensor, which captures the interactions between each user, item, and context value. Then the original ratings tensor, which includes the ratings given by users to items in different contexts (e.g. different times) can be reconstructed by combining the four matrices back into the ratings tensor.
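Reconstructing a single rating from such a Tucker-style decomposition can be sketched as follows (array names and shapes are illustrative, not those of the original implementation):

```python
import numpy as np

def tucker_predict(S, U, V, C, u, j, c):
    """Rating of user u for item j in context c from a Tucker decomposition:
    r = sum_{a,b,d} S[a,b,d] * U[u,a] * V[j,b] * C[c,d],
    where S is the central tensor coupling the three factor matrices."""
    return np.einsum('abd,a,b,d->', S, U[u], V[j], C[c])
```

With the central tensor fixed to the (super-diagonal) identity, this reduces to the CP-style three-way inner product used by simpler tensor models.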
Other models in this category are the models proposed by Li et al. \citep{li2011tracking} and Pan et al. \citep{pan2013robust}.
\subsubsection{Models based on Bayesian probabilistic tensor factorisation}
There is a class of dynamic models based on Bayesian Probabilistic Tensor Factorisation (BPTF) \citep{xiong2010temporal}. BPTF generalises BPMF by adding tensors to the matrix factorisation process: a tensor extends the two dimensions of the matrix factorisation model to three or more. Besides the user-specific and item-specific latent matrices, this model also trains a time-specific latent matrix, which captures the latent feature values in different time periods. Models based on tensor factorisation are similar in introducing time-specific matrices into the factorisation process, but differ in how they factorise the ratings matrix into user, item, and time matrices, and in how they train the factorised matrices. Similar to BPMF, BPTF uses Markov Chain Monte Carlo with Gibbs sampling to train the factorised matrices.
\subsection{Memory-based time-aware recommenders}
Some simple time-dependent collaborative filtering models were proposed by Lee, Park and Park \citep{lee2008time}. These models use item-based and user-based collaborative filtering and exploit a pseudo-rating matrix instead of the real rating matrix. The entries of the pseudo-rating matrix are obtained from a rating function, defined as the rating value when an item with launch time $l_j$ was purchased at time $p_i$. This function was inspired by two observations: more recent purchases better reflect a user's current preferences, and recently launched items appeal more to users. If users are more sensitive to an item's launch time, the function gives more weight to new items; if the user's purchase time is more important in estimating their current preference, it assigns more weight to recent purchases. After obtaining the pseudo-rating matrix, the neighbours are obtained as in traditional item-based or user-based approaches, and items are recommended to users. These models are less related to the model proposed in this paper, so we do not review them further.
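The exact rating function of Lee, Park and Park is not reproduced here; the sketch below is a purely hypothetical weight with the qualitative behaviour described above (recent purchases and recently launched items receive more weight):

```python
import numpy as np

def pseudo_rating_weight(t_now, purchase_t, launch_t, w_p=0.1, w_l=0.05):
    """Hypothetical weight in the spirit of Lee et al.: decays with the age
    of the purchase (rate w_p) and with the age of the item launch (rate w_l).
    Raising w_p emphasises recent purchases; raising w_l emphasises new items."""
    return np.exp(-w_p * (t_now - purchase_t)) * np.exp(-w_l * (t_now - launch_t))
```

Tuning the two rates against each other reproduces the trade-off the authors describe between purchase-time sensitivity and launch-time sensitivity.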
\section{Modelling time-aware preference aspects in CondTrustFVSVD}
\label{Modelling time-aware preference aspects in CondTrustFVSVD}
In this section, we explain how time-awareness of the different preference aspects is integrated into CondTrustFVSVD \citep{zafari2017modelling}.
\subsection{Brief introduction of PMF and CondTrustFVSVD}
\label{Brief introduction of PMF and CondTrustFVSVD}
In rating-based recommender systems, the observed ratings are represented by the user-item ratings matrix $R$, in which the element $R_{uj}$ is the rating given by the user $u$ to the item $j$. Usually, $R_{uj}$ is a 5-point integer, 1 point means very bad, and 5 points means excellent. Let $P \in \mathbb{R}^{N \times D}$ and $Q \in \mathbb{R}^{M \times D}$ be latent user and item feature matrices, with vectors $P_{u}$ and $Q_{j}$ representing user-specific and item-specific latent feature vectors respectively ($N$ is the number of users, $M$ is the number of items, and $D$ is the number of item features). In PMF, $R_{uj}$ is estimated by the inner product of the latent user feature vector $P_u$ and latent item feature vector $Q_j$, that is $\hat{R}_{uj} = P_uQ_j^T$.
PMF maximises the log-posterior over the user and item latent feature matrices with rating matrix and fixed parameters given by Eq. \ref{eq1}.
\small
\begin{equation}
\begin{split}
\label{eq1}
\ln p( \, P,Q|R,\sigma,\sigma_{P},\sigma_{Q}) \,
=
\ln p( \, R|P,Q,\sigma) \,
+
\ln p( \, P|\sigma_{P}) \,
+
\ln p( \, Q|\sigma_{Q}) \,
+
C
\end{split}
\end{equation}
\normalsize
where $C$ is a constant that is not dependent on $P$ and $Q$. $\sigma_{P}$, $\sigma_{Q}$, and $\sigma$ are standard deviations of matrix entries in $P$, $Q$, and $R$ respectively. Maximising the log-posterior probability in Eq. \ref{eq1} is equivalent to minimising the error function in Eq. \ref{eq2}.
\small
\begin{equation}
\begin{split}
\label{eq2}
argmin_{U,V}
[ \,
E
= \frac{1}{2} \sum_{u=1}^{N}\sum_{j=1}^{M}I_{uj} (\, R_{uj} - \hat{R}_{uj} )\,^2
+
\frac{\lambda_{P}}{2} \sum_{u=1}^{N}\|P_{u}\|_{Frob}^{2}
+
\frac{\lambda_{Q}}{2} \sum_{j=1}^{M}\|Q_{j}\|_{Frob}^{2}
] \,
\end{split}
\end{equation}
\normalsize
where $\|.\|_{Frob}$ denotes the Frobenius norm, and $\lambda_{P} = \frac{\sigma^2}{\sigma_{P}^2}$ and $\lambda_{Q} = \frac{\sigma^2}{\sigma_{Q}^2}$ are regularisation parameters. \textit{Stochastic Gradient Descent} and \textit{Alternating Least Squares} are usually employed to solve the optimisation problem in Eq. \ref{eq2}. Using these methods, the accuracy measured on the training set is improved iteratively.
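A minimal stochastic gradient descent sketch for the objective in Eq. \ref{eq2} (hyper-parameter values are illustrative, and a single regularisation constant replaces $\lambda_P$, $\lambda_Q$ for brevity):

```python
import numpy as np

def train_pmf(R, mask, D=2, lr=0.02, lam=0.01, epochs=1000, seed=0):
    """Minimal SGD for the PMF objective of Eq. (2): squared error on the
    observed entries (mask) plus Frobenius regularisation on P and Q."""
    rng = np.random.default_rng(seed)
    N, M = R.shape
    P = 0.1 * rng.standard_normal((N, D))
    Q = 0.1 * rng.standard_normal((M, D))
    for _ in range(epochs):
        for u, j in zip(*np.nonzero(mask)):
            err = R[u, j] - P[u] @ Q[j]
            # simultaneous update of the two factor rows
            P[u], Q[j] = (P[u] + lr * (err * Q[j] - lam * P[u]),
                          Q[j] + lr * (err * P[u] - lam * Q[j]))
    return P, Q
```

On a small low-rank toy matrix this recovers the observed entries; in practice, per-user and per-item regularisation weights and early stopping on a validation set are used.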
As mentioned in the introduction section, the disadvantage of traditional matrix factorisation methods is that the discrepancies between users in preferring item feature values and conditional dependencies between features are disregarded.
CondTrustFVSVD \citep{zafari2017modelling} addresses these problems by adding matrices $W$ and $Z$ to learn the preferences over item feature values.
Suppose that a social network is represented by a graph $\mathbb{G} = (\mathbb{V},\mathbb{E})$, where $\mathbb{V}$ includes a set of
users (nodes) and $\mathbb{E}$ represents the trust relationships among the users (edges). We denote the adjacency matrix by $T \in \mathbb{R}^{N \times N}$, where $T_{uv}$ shows the degree to which user $u$ trusts user $v$. Throughout this paper, we use the indices $u$ and $v$ for the users and indices $i$ and $j$ for items, and indices $f$ and $f^{'}$ for item features.
In CondTrustFVSVD, all aspects of preferences are assumed to be subject to change by social interactions, and therefore the explicit influence of social relationships on each of the aspects of the preferences are modelled.
In this method, we assume that a user's preference for an item feature can be formulated as a linear function, in which matrix $W$ captures the ``gradient'' values and matrix $Z$ the ``intercept'' values.
These matrices have the same dimensions as the user matrix $P$.
In the graphical model of CondTrustFVSVD, the probabilities of the matrices $P$, $Q$, $W$, $Z$, $\omega$, $y$ and the vectors $bu$ and $bi$ depend on the hyper-parameters $\sigma_{P}$, $\sigma_{Q}$, $\sigma_{W}$, $\sigma_{Z}$, $\sigma_{\omega}$, $\sigma_{y}$, $\sigma_{bu}$ and $\sigma_{bi}$, respectively. Likewise, the probability of the ratings in matrix $R$ is conditional on the matrices $P$, $Q$, $W$, $Z$, $\omega$, $y$ and the vectors $bu$ and $bi$. CondTrustFVSVD solves the optimisation problem formulated in Eq. \ref{eq3}.
\small
\begin{equation}
\begin{split}
\label{eq3}
argmin_{P,Q,W,Z,\omega,y,bu,bi}
[ \,
E &
=
\frac{\lambda_t}{2} \sum_{u=1}^{N}\sum_{\forall v \in T_u}I_{uv} (\, T_{uv} - \sum_{f=1}^{D}P_{uf}\omega_{vf} )\,^2
+
\frac{\lambda_t}{2} \sum_{u=1}^{N}\sum_{\forall v \in T_u}(\, T_{uv} - \sum_{f=1}^{D}(1 - W_{uf})\omega_{vf} )\,^2 \\ &
+
\frac{\lambda_t}{2} \sum_{u=1}^{N}\sum_{\forall v \in T_u}(\, T_{uv} - \sum_{f=1}^{D}Z_{uf}\omega_{vf} )\,^2
+
\frac{1}{2} \sum_{u=1}^{N}\sum_{j=1}^{M} (\, R_{uj} - \hat{R}_{uj} )\,^2 \\ &
+
\sum_{u=1}^{N}(\frac{\lambda_P}{2}|I_u|^{-\frac{1}{2}} + \frac{\lambda_{T}}{2}|T_u|^{-\frac{1}{2}})\|P_{u}\|_{Frob}^{2}
+
\frac{\lambda_{Q}}{2} \sum_{j=1}^{M}\|Q_{j}\|_{Frob}^{2} \\ &
+
\sum_{u=1}^{N}(\frac{\lambda_W}{2}|I_u|^{-\frac{1}{2}} + \frac{\lambda_{T}}{2}|T_u|^{-\frac{1}{2}})\|W_{u}\|_{Frob}^{2}
+
\sum_{u=1}^{N}(\frac{\lambda_Z}{2}|I_u|^{-\frac{1}{2}} + \frac{\lambda_{T}}{2}|T_u|^{-\frac{1}{2}})\|Z_{u}\|_{Frob}^{2} \\ &
+
\frac{\lambda}{2} \sum_{i=1}^{M}|U_{i}|^{-\frac{1}{2}}\|y_i\|_{Frob}^{2}
+
\frac{\lambda_{\omega}}{2} \sum_{v=1}^{N}|T^{+}_{v}|^{-\frac{1}{2}}\|\omega_v\|_{Frob}^{2} \\ &
+
\frac{\lambda_{bu}}{2} \sum_{u=1}^{N}|I_u|^{-\frac{1}{2}}bu_{u}^{2}
+
\frac{\lambda_{bi}}{2} \sum_{j=1}^{M}|U_j|^{-\frac{1}{2}}bi_{j}^{2}
+
\frac{\lambda_{Y}}{2}\sum_{f=1}^{D}\sum_{f^{'}=1}^{D}Y_{ff^{'}}^2
] \,
\end{split}
\end{equation}
\normalsize
where $\lambda_{W} = \frac{\sigma^2}{\sigma_{W}^2}$, $\lambda_{Z} = \frac{\sigma^2}{\sigma_{Z}^2}$, $\lambda_{\omega} = \frac{\sigma^2}{\sigma_{\omega}^2}$, $\lambda_{y} = \frac{\sigma^2}{\sigma_{y}^2}$, $\lambda_{bu} = \frac{\sigma^2}{\sigma_{bu}^2}$, $\lambda_{bi} = \frac{\sigma^2}{\sigma_{bi}^2}$, $\lambda_{Y} = \frac{\sigma^2}{\sigma_{Y}^2}$. $\mu$ denotes the global average of the observed ratings, and $bu_u$ and $bi_j$ denote the biases of user $u$ and item $j$ respectively. $I_u$ is the set of items rated by user $u$, and $U_j$ is the set of users who have rated item $j$. The values of $\hat{R}_{uj}$ in Eq. \ref{eq3} are obtained using Eq. \ref{eq4}.
\begin{equation}
\begin{split}
\label{eq4}
\hat{R}_{uj} = \mu + bu_u + bi_j + \sum_{f=1}^{D}(P_{uf} + |I_u|^{-\frac{1}{2}}\sum_{\forall i \in I_u}y_{if} + |T_u|^{-\frac{1}{2}}\sum_{\forall v \in T_u}\omega_{vf})(W_{uf}Q_{jf} + Z_{uf})
\end{split}
\end{equation}
According to Eq. \ref{eq4}, user $u$'s preference value for an item $j$ is defined using different aspects: the user bias, the item bias, the \textbf{socially-influenced preferences over features}, and the \textbf{socially-influenced preferences over feature values}. Preferences are therefore composed of different aspects that interact by influencing the values of one another.
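As a concrete illustration, the prediction rule of Eq. \ref{eq4} can be sketched in a few lines. The array names and shapes below are assumptions made for illustration only, not part of any reference implementation:

```python
import numpy as np

# Hypothetical sketch of Eq. (4). P, W, Z, y, omega are N x D matrices,
# Q is M x D; I_u is the list of items rated by user u, T_u the list of
# users trusted by u. All names and shapes are illustrative assumptions.
def predict_rating(mu, bu, bi, P, Q, W, Z, y, omega, I_u, T_u, u, j):
    # Implicit-feedback term: |I_u|^{-1/2} * sum of y_i over rated items
    implicit = len(I_u) ** -0.5 * y[I_u].sum(axis=0) if I_u else 0.0
    # Social term: |T_u|^{-1/2} * sum of omega_v over trusted users
    social = len(T_u) ** -0.5 * omega[T_u].sum(axis=0) if T_u else 0.0
    # Effective preference vector, weighted per feature by W, shifted by Z
    p_eff = P[u] + implicit + social
    return mu + bu[u] + bi[j] + float(np.dot(p_eff, W[u] * Q[j] + Z[u]))
```

Note how the per-feature weights $W_{uf}$ rescale the item features $Q_{jf}$ before the inner product, which is how the feature value preferences enter the prediction.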
\subsection{Time-aware CondTrustFVSVD (Aspect-MF)}
\label{Time-Aware CondTrustFVSVD (Aspect-MF)}
In the following sections, we first provide a high-level view of Aspect-MF by explaining the interactions between the aspects captured by the model, and then elaborate on how the aspects are trained from the users' ratings and social relationships.
\subsubsection{Aspect interactions and high-level view of the model}
\label{Aspect interactions and high-level view of the model}
To capture drifting, socially-influenced conditional preferences over feature values, we extend CondTrustFVSVD by modelling the dynamics of each preference aspect that is assumed to be subject to concept drift.
The proposed method is abbreviated as Aspect-MF. A high-level overview of the preference aspects in Aspect-MF is presented in Fig. \ref{fig1}.
\begin{figure}[!ht]
\setcounter{figure}{0}
\vskip 0cm
\centerline{\includegraphics[scale=0.7]{Aspect-MF_Components.pdf}}
\caption{The preference aspects and their interplay in Aspect-MF}
\label{fig1}
\end{figure}
\begin{figure}[!ht]
\vskip 0cm
\centering
\setcounter{figure}{1}
\begin{subfigure}[b]{0.4\textwidth}
\includegraphics[width=1\linewidth]{Aspect-MF_Abstract.pdf}
\caption{}
\label{fig2a}
\end{subfigure}%
\begin{subfigure}[b]{0.6\textwidth}
\includegraphics[width=1\linewidth]{Aspect-MF_FlowChart.pdf}
\caption{}
\label{fig2b}
\end{subfigure}%
\centering
\caption{a) The high-level representation of Aspect-MF and b) its flow chart}
\label{fig2}
\end{figure}
In Fig. \ref{fig2a}, \textbf{FP} represents preferences over features, which is captured by matrix $P$ in the basic matrix factorisation. \textbf{F} represents item features captured by matrix $Q$ in the basic matrix factorisation. \textbf{CP} represents conditional dependencies, \textbf{FVP} represents preferences over feature values, \textbf{SI} stands for social influence, and finally \textbf{T} is an abbreviation for time.
Aspect-MF incorporates additional matrices and vectors into matrix factorisation to capture as many aspects present in the data as possible. As Fig. \ref{fig2} shows, the model starts by loading the time-stamped user ratings as well as the social network data into the memory.
The main loop accounts for the learning iterations of the model. The first loop within the main loop iterates over the time-stamped user-item ratings matrix, while the second loop iterates over the social network adjacency matrix to train the socially-influenced parts of the model. In each loop, one entry of the input matrix is read and used to update the matrices and vectors related to that entry. As can be seen, the user and item bias values are only updated in loop 1, since they only depend on the user-item ratings. Both the user-item ratings and the users' social relationships carry information about the users' preferences over features. Therefore, the new values for FP are calculated in both loops and applied in the main loop, once all new values have been calculated. Similarly, the values for SI and FVP depend on both the user-item ratings and the social relationships; consequently, their new values are calculated inside both loops 1 and 2 and applied in the main loop. In contrast, the values of F and CP only require the user-item ratings, so they are updated immediately inside loop 1. The time aspect includes parameters that account for the dynamics of user and item biases, feature value preferences, and preferences over features. Since the bias dynamics do not depend on the social relationships, they are updated immediately in loop 1, whereas the new values for the dynamics of feature value preferences and preferences over features are applied in the main loop. In Aspect-MF, each preference aspect can be arbitrarily switched off or on by setting its learning rate and regularisation parameter (hyper-parameters) to zero or to non-zero values, respectively.
Although social relationships are likely to be time-dependent, most datasets do not contain this information. Conditional preferences are related to the feature value preferences, since they model the dependencies between the features and their values; they are therefore applied to the matrices that account for the users' preferences over feature values. Social influence is applied to the aspects of preferences over features and preferences over feature values. However, applying social influence to the user and item biases showed no observable benefit, so we concluded that user and item biases are not strongly influenced by social interactions \citep{zafari2017modelling}. Therefore, in the most abstract view of the model, as depicted in the high-level representation in Fig. \ref{fig2a}, the model is comprised of four main modules: initialising the model parameters (Model Initialiser); learning the intrinsic constituting aspects of preferences (i.e. preferences over features, preferences over feature values, conditional dependencies, and user and item bias values) and the drifting properties of preferences (Intrinsic Trainer); learning the social influence of friends on the drifting intrinsic preference aspects (Social Trainer); and finally updating the model to reflect the new information extracted from the data about user ratings, time, and social connections (Model Updater). These modules are discussed in more detail in section \ref{Aspect-MF Algorithm}.
\subsubsection{Aspect-MF model formulation}
\label{Aspect-MF}
In this section, we provide the mathematical formulation of the preferences captured in Aspect-MF.
Basically, in Aspect-MF, the user preferences are modelled as a \textit{Bayesian Network} \citep{korb2010bayesian}.
Fig. \ref{fig3} shows the topology or the structure of the Bayesian Network for user preferences that are modelled by Aspect-MF.
As mentioned earlier, Aspect-MF extends CondTrustFVSVD by adding the time factor to the aspects of preferences, as depicted in Fig. \ref{fig1}. In CondTrustFVSVD, the user preferences were captured using the matrices $P$, $Q$, $W$, $Z$, $Y$, $\omega$, and $y$, with the hyper-parameters $\sigma_{P}$, $\sigma_{Q}$, $\sigma_{W}$, $\sigma_{Z}$, $\sigma_{\omega}$, $\sigma_{y}$, $\sigma_{Y}$, $\sigma_{bu}$, and $\sigma_{bi}$.
\begin{figure}[!ht]
\setcounter{figure}{2}
\hspace*{0.2in}
\centerline{\includegraphics[scale=0.4]{Aspect-MF_BN.pdf}}
\caption{Bayesian network of Aspect-MF}
\label{fig3}
\end{figure}
In Aspect-MF, the drifting social influence of the friends in a user's social network is captured through Eqs. \ref{eq5} to \ref{eq7}.
\begin{equation}
\begin{split}
\label{eq5}
\hat{T}^{t}_{uv} =\frac{1}{|I_u^{t}|} \sum_{\forall {t_{uj}} \in I_u^t} \sum_{f=1}^{D}P_{uf}({t_{uj}})\omega_{vf}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\label{eq6}
\hat{S}^t_{uv} = \frac{1}{|I_u^{t}|} \sum_{\forall {t_{uj}} \in I_u^t} \sum_{f=1}^{D}(1 - W_{uf}({t_{uj}}))\omega_{vf}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\label{eq7}
\hat{G}^t_{uv} = \frac{1}{|I_u^{t}|} \sum_{\forall {t_{uj}} \in I_u^t} \sum_{f=1}^{D}Z_{uf}({t_{uj}})\omega_{vf}
\end{split}
\end{equation}
where $\hat{T}^{t}_{uv}$, $\hat{S}^t_{uv}$, and $\hat{G}^t_{uv}$ model the time-dependent influence of user $v$ on the preferences of user $u$: on the preferences over features (captured by $P_{uf}(t)$) and on the preferences over feature values (captured by $W_{uf}(t)$ and $Z_{uf}(t)$). Similar to CondTrustFVSVD, $\omega$ captures \textit{the implicit influence of user $u$ on other users} and is obtained through the matrix factorisation process. As can be seen in Fig. \ref{fig1}, the user preferences over features and feature values in Aspect-MF are subject to social influence, and they also drift over time. In Eqs. \ref{eq5} to \ref{eq7}, $I_u^t$ is the set of timestamps of all the ratings given by user $u$. Using these equations, the influence of user $v$ on the preferences of user $u$ is therefore calculated at every time point and then averaged.
Intuitively, these equations tell us that the trust of user $u$ in user $v$ can be estimated by averaging, over time, the weighted averages of user $v$'s influence on user $u$'s preferences for different features. If user $u$ strongly trusts user $v$, their preferences will be more strongly influenced by user $v$. Furthermore, depending on the strength of user $u$'s trust in user $v$ and on the direction of the influence (positive or negative), the user's preferences can be positively or negatively affected. In Aspect-MF, the user preferences are therefore subject to social influence, and the social influence depends on the strength of the trust in each friend. According to these equations, if there is no relationship between user $u$ and user $v$, user $u$'s preferences are not directly affected by the social influence of user $v$.
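Eqs. \ref{eq5} to \ref{eq7} share one pattern: average, over the user's rating timestamps, the inner product between a time-dependent preference vector and the friend's implicit-influence vector. A minimal sketch of Eq. \ref{eq5}, with illustrative names only:

```python
import numpy as np

# Hypothetical sketch of Eq. (5). P_at is any callable returning user u's
# feature-preference vector at time t (cf. Eq. (11)); omega holds the
# implicit-influence vectors; timestamps_u plays the role of I_u^t.
def estimated_trust(P_at, omega, timestamps_u, u, v):
    total = sum(float(np.dot(P_at(u, t), omega[v])) for t in timestamps_u)
    return total / len(timestamps_u)
```

The sketches of Eqs. \ref{eq6} and \ref{eq7} would differ only in the vector passed to the inner product ($1 - W_{uf}(t)$ and $Z_{uf}(t)$, respectively).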
In Aspect-MF, the drifting preference value of the user $u$ over an item $j$ at time $t$ is obtained according to Eq. \ref{eq8}.
\begin{equation}
\label{eq8}
\begin{split}
\hat{R}_{uj}(t_{uj}) = \mu& + bu_u(t_{uj}) + bi_j(t_{uj}) \\&+ \sum_{f=1}^{D}(P_{uf}({t_{uj}}) + |I_u|^{-\frac{1}{2}}\sum_{\forall i \in I_u}y_{if} + |T_u|^{-\frac{1}{2}}\sum_{\forall v \in T_u}\omega_{vf})(W_{uf}({t_{uj}})Q_{jf} + Z_{uf}({t_{uj}})) \\&+ \sum_{f^{'}=1}^{D}(\sum_{f=1}^{D}(W_{uf}({t_{uj}})Q_{jf} + Z_{uf}({t_{uj}}))Y_{ff^{'}})(W_{uf}({t_{uj}})Q_{jf^{'}} + Z_{uf}({t_{uj}}))
\end{split}
\end{equation}
According to Eq. \ref{eq8}, different aspects of preferences as well as user and item biases are subject to temporal drift in Aspect-MF.
As Eqs. \ref{eq5} to \ref{eq8} show, the user bias, item bias, preferences over features (captured by matrix $P$), and preferences over feature values (captured by matrices $W$ and $Z$) all drift over time. In order to model the drifting properties of these aspects, we use Eqs. \ref{eq9} to \ref{eq13}.
\begin{equation}
\begin{split}
\label{eq9}
bu_u(t_{uj}) = bu_u + \alpha_u dev_u(t_{uj}) + but_{ut_{uj}}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\label{eq10}
bi_j(t_{uj}) = (bi_j + bit_{jBin(t_{uj})})(C_u + Ct_{u{t_{uj}}})
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\label{eq11}
P_{uf}({t_{uj}}) = P_{uf} + \alpha_u^P dev_u(t_{uj}) + Pt_{uft_{uj}}
\end{split}
\end{equation}
\begin{equation}
\begin{split}
\label{eq12}
Z_{uf}({t_{uj}}) = Z_{uf} + \alpha_u^Z dev_u(t_{uj}) + Zt_{uft_{uj}}
\end{split}
\end{equation}
\begin{equation}
\label{eq13}
W_{uf}({t_{uj}}) = W_{uf} + \alpha_u^W dev_u(t_{uj}) + Wt_{uft_{uj}}
\end{equation}
where $P_{uf}$, $W_{uf}$, and $Z_{uf}$ capture the static preferences of user $u$; the variables $Pt_{uft_{uj}}$, $Wt_{uft_{uj}}$, and $Zt_{uft_{uj}}$ capture the day-specific variations in the user preferences (e.g. due to a user's mood on a particular day); $\alpha_u^P$, $\alpha_u^W$, and $\alpha_u^Z$ model the users' long-term preference shifts; and $dev_u(t_{uj})$ is obtained according to Eq. \ref{eq14} \citep{koren2010collaborative}.
\begin{equation}
\begin{split}
\label{eq14}
dev_u(t_{uj}) = \operatorname{sign}(t_{uj} - t_u)\cdot{|t_{uj} - t_u|}^\beta
\end{split}
\end{equation}
where $t_u$ is the mean of the dates of the ratings given by user $u$, and $\beta$ is a constant. In Eq. \ref{eq10}, all the dates are placed in a fixed number of bins, and the function $Bin(.)$ returns the bin number for a particular date. For example, if the ratings span a period of 30 years and 30 bins are used, all the ratings given in a particular year are placed in the same bin, and $Bin(.)$ returns the number of that year. This function is only used for items because items are not expected to change on a daily basis; as opposed to user biases, longer time periods are expected to pass before any change in an item's popularity is observed.
In simple words, $dev_u(t_{uj})$ shows how much the time of the rating given by user $u$ to item $j$ deviates from the average time of the ratings given by that user. Therefore, if a rating is given exactly at the average rating time, these equations yield no long-term preference shift for that aspect. If, for instance, the average date of the ratings given by user $u$ is 11/04/2006, the rating of the same item by that user on 11/04/2016 would be different, and this shift is captured by the coefficients of $dev_u(t_{uj})$ in Eq. \ref{eq9} and Eqs. \ref{eq11} to \ref{eq13}. The drifting preferences captured by these equations are depicted in Fig. \ref{fig4}. In these figures, the mean of the dates on which the user has given ratings is assumed to be 50 (the fiftieth day of the year), and the variations of the user preferences over a period of one year are plotted for different values of $\alpha$. The red lines represent the case in which the day-specific variations in the user preferences are not captured, while the blue lines also include the day-specific variations. As can be seen, there are two types of preference shifts: long-term drifts (captured by the values of $\alpha$, $\alpha^P$, $\alpha^W$, and $\alpha^Z$) and short-term, day-specific drifts (captured by the values of $but$, $Pt$, $Wt$, and $Zt$). Preference drifts are thus comprised of small variations from one day to the next, mainly caused by temporary factors such as the user's mood, and large variations that happen in the long term as the user's tastes shift. The blue lines show the preference shift patterns that can be learnt by Aspect-MF.
Furthermore, the first three terms in Eq. \ref{eq18} model the social influence on the feature preferences and feature value preferences captured by $P$, $\alpha^P$, $Pt$, $W$, $\alpha^W$, $Wt$, $Z$, $\alpha^Z$, and $Zt$. Assuming that two users have been connected from the very beginning (which is not necessarily true, but social relationships usually do not carry time-stamps), Eqs. \ref{eq5} to \ref{eq7} apply the social influence to the user's preferences over the entire period for which the rating data is recorded. Therefore, the formulation of the estimated ratings in Aspect-MF (Eq. \ref{eq8}) allows it to learn the drifting conditional feature value preferences, and the formulation of the optimisation in Aspect-MF (Eq. \ref{eq18}) enables it to learn the influence of social friends on the drifting preferences of a user.
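The deviation function of Eq. \ref{eq14} and the binning function used in Eq. \ref{eq10} can be sketched as follows; the value $\beta = 0.4$ and the one-year bin width are illustrative assumptions (the choice of $\beta$ is discussed in \citep{koren2010collaborative}):

```python
import math

# Sketch of dev_u(t) in Eq. (14): signed, sub-linear deviation of rating
# time t from the user's mean rating time t_mean (both in days).
# beta = 0.4 is an illustrative choice, not a prescribed value.
def dev(t, t_mean, beta=0.4):
    d = t - t_mean
    return math.copysign(abs(d) ** beta, d)

# Sketch of Bin(.) in Eq. (10): item popularity drifts slowly, so item
# rating dates are mapped to coarse bins (hypothetically one per year).
def time_bin(t, t_min, bin_width_days=365):
    return int((t - t_min) // bin_width_days)
```

Because $0 < \beta < 1$, `dev` grows sub-linearly, so very old ratings do not dominate the drift terms.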
\begin{figure}[!ht]
\vskip 0cm
\setcounter{figure}{3}
\begin{subfigure}[b]{1\textwidth}
\includegraphics[height=7cm,width=14cm]{Time_1.pdf}
\caption{}
\label{fig4a}
\end{subfigure}%
\newline
\begin{subfigure}[b]{1\textwidth}
\includegraphics[height=7cm,width=14cm]{Time_2.pdf}
\caption{}
\label{fig4b}
\end{subfigure}%
\centering
\caption{An example of drifting preferences in Eq. \ref{eq9} and Eqs. \ref{eq11} to \ref{eq13} for a) positive $\alpha$ values and b) negative $\alpha$ values}
\label{fig4}
\end{figure}
Eqs. \ref{eq9} to \ref{eq13} show how Aspect-MF can capture long-term and short-term drifts in each one of the preference aspects (user bias, item bias, feature preferences, and feature value preferences). The advantage of formulating the problem using Eq. \ref{eq8} is that each one of these aspects can be arbitrarily switched on/off. This results in a component-based approach, in which the model aspects interact with each other, with the purpose of extracting as many preference patterns from the raw data as possible.
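For example, a component-based configuration might look like the sketch below, where zeroing an aspect's learning rate and regularisation weight removes its contribution to the updates (the aspect names and the numeric values are purely illustrative):

```python
# Hypothetical per-aspect hyper-parameter table for Aspect-MF: an aspect
# is switched off by setting its learning rate (gamma) and regularisation
# weight (lam) to zero. Names and values are illustrative only.
aspects = {
    "user_bias_drift":     {"gamma": 0.005, "lam": 0.02},
    "item_bias_drift":     {"gamma": 0.005, "lam": 0.02},
    "feature_pref_drift":  {"gamma": 0.0,   "lam": 0.0},   # switched off
    "feature_value_drift": {"gamma": 0.002, "lam": 0.01},
}

def enabled(name):
    a = aspects[name]
    return a["gamma"] != 0.0 or a["lam"] != 0.0
```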
\subsubsection{Aspect-MF model training}
According to the Bayesian network of Aspect-MF in Fig. \ref{fig3}, the model minimises the negative log-posterior probability of the matrices that define the user preferences, given the model hyper-parameters and the training matrices. Formally,
\small
\begin{equation}
\begin{split}
\label{eq15}
&argmin_{P,Pt,\alpha^P,Q,W,Wt,\alpha^W,Z,Zt,\alpha^Z,Y,\omega,y,bu,\alpha,but,C,Ct,bi,bit}\\&\{-\ln p(P,Pt,\alpha^P,Q,W,Wt,\alpha^W,Z,Zt,\alpha^Z,Y,\omega,y,bu,\alpha,but,C,Ct,bi,bit\\&|R,T^t,S^t,G^t,\sigma_{N})\}
\end{split}
\end{equation}
\normalsize
$\sigma_{N}$=\{$\sigma$,$\sigma_{T}$,$\sigma_{P}$,$\sigma_{Pt}$,$\sigma_{\alpha^P}$,$\sigma_{Q}$,$\sigma_{W}$,$\sigma_{Wt}$,$\sigma_{\alpha^W}$,$\sigma_{Z}$,$\sigma_{Zt}$,$\sigma_{\alpha^Z}$,$\sigma_{\omega}$,$\sigma_{y}$,$\sigma_{bu}$,$\sigma_{\alpha}$,$\sigma_{but}$,$\sigma_{C}$,$\sigma_{Ct}$,$\sigma_{bi}$,$\sigma_{bit}$,$\sigma_{Y}$\}\\ denotes the set of all the hyper-parameters. $T^t$, $S^t$, and $G^t$ respectively denote the observed counterparts of the estimated matrices $\hat{T}^t$, $\hat{S}^t$, and $\hat{G}^t$ in Eqs. \ref{eq5} to \ref{eq7}.
According to the Bayesian network in Figure \ref{fig3}, and by decomposing the full joint distribution using the chain rule of probability theory \citep{korb2010bayesian} according to the conditional dependencies between the variables defined in this figure, minimising the quantity above is equivalent to minimising the value given in Eq. \ref{eq16}.
\small
\begin{equation}
\begin{split}
\label{eq16}
&argmin_{P,Pt,\alpha^P,Q,W,Wt,\alpha^W,Z,Zt,\alpha^Z,Y,\omega,y,bu,\alpha,but,C,Ct,bi,bit}\\ \{
& -\ln p(R|P(t),Q,W(t),Z(t),bu(t),bi(t),Y,\sigma)
-
\ln p(Q|\sigma_{Q}) \\ &
-
\ln p(P(t)|\sigma_{P})
-
\ln p(W(t)|\sigma_{W})
-
\ln p(Z(t)|\sigma_{Z}) \\ &
-
\ln p(bu(t)|\sigma_{bu})
-
\ln p(bi(t)|\sigma_{bi})
-
\ln p(y|\sigma_{y})
-
\ln p(Y|\sigma_{Y}) \\ &
-
\ln p(T^{t}_{uv}|\omega, P(t),\sigma_{T})
-
\ln p(S^{t}_{uv}|\omega, W(t),\sigma_{T})
-
\ln p(G^{t}_{uv}|\omega, Z(t),\sigma_{T}) \\ &
-
\ln p(P(t)|\sigma_{T})
-
\ln p(W(t)|\sigma_{T})
-
\ln p(Z(t)|\sigma_{T})
-
\ln p(\omega|\sigma_{\omega})
\}
\end{split}
\end{equation}
\normalsize
Provided that all the probabilities above follow normal distributions, it can be shown that minimising the function in Eq. \ref{eq16} is equivalent to minimising the error function given in Eq. \ref{eq19}.
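To see the equivalence for a single term, consider the Gaussian likelihood of one rating; up to an additive constant, its negative logarithm is a scaled squared error:

```latex
-\ln p(R_{uj}\mid \hat{R}_{uj},\sigma)
  = \frac{(R_{uj}-\hat{R}_{uj})^{2}}{2\sigma^{2}}
  + \ln\!\left(\sigma\sqrt{2\pi}\right)
```

Dropping the constant and rescaling by $\sigma^{2}$ yields the squared-error terms below, with the variance ratios (e.g. $\lambda_{W} = \sigma^2/\sigma_{W}^2$) appearing as the regularisation weights.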
\small
\begin{equation}
\begin{split}
\label{eq17}
E_R &=
\frac{1}{2} \sum_{u=1}^{N}\sum_{j=1}^{M} (\, R_{uj} - \hat{R}_{uj} )\,^2
+
\frac{\lambda_{Q}}{2} \sum_{j=1}^{M}\|Q_{j}\|_{Frob}^{2}
+
\frac{\lambda_{y}}{2}\sum_{i=1}^{M}|U_{i}|^{-\frac{1}{2}}\|y_i\|_{Frob}^{2} \\ &
+
\sum_{u=1}^{N}\frac{\lambda_P}{2}|I_u|^{-\frac{1}{2}}(\|P_{u}\|_{Frob}^{2} + \|Pt_{ut}\|_{Frob}^{2} + \|\alpha^P\|_{Frob}^{2}) \\ &
+
\sum_{u=1}^{N}\frac{\lambda_W}{2}|I_u|^{-\frac{1}{2}}(\|W_{u}\|_{Frob}^{2} + \|Wt_{ut}\|_{Frob}^{2} + \|\alpha^W\|_{Frob}^{2}) \\ &
+
\sum_{u=1}^{N}\frac{\lambda_Z}{2}|I_u|^{-\frac{1}{2}}(\|Z_{u}\|_{Frob}^{2} + \|Zt_{ut}\|_{Frob}^{2} + \|\alpha^Z\|_{Frob}^{2}) \\ &
+
\frac{\lambda_{bu}}{2} \sum_{u=1}^{N}|I_u|^{-\frac{1}{2}}(bu_{u}^{2} + \alpha_{u}^{2} + C_{u}^{2} + \|but_{u}\|_{Frob}^{2} + \|Ct_{u}\|_{Frob}^{2}) \\ &
+
\frac{\lambda_{bi}}{2} \sum_{j=1}^{M}|U_j|^{-\frac{1}{2}}bi_{j}^{2}
+
\frac{\lambda_{bi}}{2} \sum_{j=1}^{M}\sum_{\forall t \in I^t_j}^{}|U_j|^{-\frac{1}{2}}bit_{j,Bin(t)}^{2}
+
\frac{\lambda_{Y}}{2}\sum_{f=1}^{D}\sum_{f^{'}=1}^{D}Y_{ff^{'}}^2
\end{split}
\end{equation}
\normalsize
\small
\begin{equation}
\begin{split}
\label{eq18}
E_T &=
\frac{\lambda_t\eta_P}{2} \sum_{u=1}^{N}\sum_{\forall v \in T_u} (\, T_{uv} - \hat{T}^{t}_{uv} )\,^2
+
\frac{\lambda_t\eta_W}{2} \sum_{u=1}^{N}\sum_{\forall v \in T_u} (\, T_{uv} - \hat{S}^{t}_{uv} )\,^2
+
\frac{\lambda_t\eta_Z}{2} \sum_{u=1}^{N}\sum_{\forall v \in T_u} (\, T_{uv} - \hat{G}^{t}_{uv} )\,^2 \\ &
+
\sum_{u=1}^{N}\frac{\lambda_{T}}{2}|T_u|^{-\frac{1}{2}}(\|P_{u}\|_{Frob}^{2} + \|Pt_{ut}\|_{Frob}^{2} + \|\alpha^P\|_{Frob}^{2}) \\ &
+
\sum_{u=1}^{N}\frac{\lambda_{T}}{2}|T_u|^{-\frac{1}{2}}(\|W_{u}\|_{Frob}^{2} + \|Wt_{ut}\|_{Frob}^{2} + \|\alpha^W\|_{Frob}^{2}) \\ &
+
\sum_{u=1}^{N}\frac{\lambda_{T}}{2}|T_u|^{-\frac{1}{2}}(\|Z_{u}\|_{Frob}^{2} + \|Zt_{ut}\|_{Frob}^{2} + \|\alpha^Z\|_{Frob}^{2}) \\ &
+
\frac{\lambda_{\omega}}{2} \sum_{v=1}^{N}|T^{+}_{v}|^{-\frac{1}{2}}\|\omega_v\|_{Frob}^{2}
\end{split}
\end{equation}
\normalsize
\begin{equation}
\begin{split}
\label{eq19}
argmin_{P,Pt,\alpha^P,Q,W,Wt,\alpha^W,Z,Zt,\alpha^Z,Y,\omega,y,bu,\alpha,but,C,Ct,bi,bit}
[ \,
E = E_R + E_T
] \,
\end{split}
\end{equation}
where $I_j^t$ is the set of timestamps for all the ratings given to item $j$, and $\eta_P$, $\eta_W$, and $\eta_Z$ are constants that control the weights of the social components in this equation. The details of the model training can be found in Appendix \ref{Aspect-MF Training Equations}.
\subsubsection{Aspect-MF algorithm}
\label{Aspect-MF Algorithm}
Algorithm \ref{ModelTrainer} describes the details of the gradient descent method Aspect-MF uses to train the model parameters ($P$, $Pt$, $\alpha^P$, $Q$, $W$, $Wt$, $\alpha^W$, $Z$, $Zt$, $\alpha^Z$, $Y$, $\omega$, $y$, $bu$, $\alpha$, $but$, $C$, $Ct$, $bi$, $bit$) as expressed in Eq. \ref{eq19}.
The algorithm receives the set of model hyper-parameters $\lambda$ and the set of learning rates $\gamma$ as input, and trains the model parameters according to the Bayesian approach described in section \ref{Aspect-MF}.
As we showed in the high-level representation of the algorithm in Figure \ref{fig2a}, the model is comprised of four basic components: a model initialiser, which initialises the model parameters after the input data is loaded into memory; an intrinsic trainer, which trains the model parameters using the user-item ratings; a social trainer, which trains the model parameters using the social relationship data; and finally a model updater, which updates the model based on the parameters trained in a particular iteration.
As can be seen in line \ref{Init} of Algorithm \ref{ModelTrainer}, the training starts by initialising the model parameters. The matrices $P$, $Q$, $y$, and $\omega$ and the user and item bias vectors ($bu$ and $bi$) are randomly initialised using a Gaussian distribution with a mean of zero and a standard deviation of one. The new matrices $Pt$, $W$, $Wt$, $Z$, $Zt$, $Ct$, $but$, $bit$, and $Y$ and the vectors $\alpha$, $\alpha^P$, $\alpha^W$, $\alpha^Z$, and $C$ are initialised with constant values.
By initialising these matrices and vectors with constant values, the algorithm starts its search at the same point as CondTrustFVSVD and then explores the enlarged search space for more promising solutions, taking into account the possible conditional dependencies between features, the differences between users in preferring item feature values, the dynamic properties of the preferences, and the influence of social friends on a user's preferences.
The main algorithm consists of a main loop, which implements the learning iterations of the model. Each iteration is comprised of one model intrinsic training operation (Algorithm \ref{IntrinsicTrainer}), one model social training operation (Algorithm \ref{SocialTrainer}), and one model updating operation (Algorithm \ref{ModelUpdater}).
In the model intrinsic trainer, the model parameters are updated using the gradient values in Eqs. \ref{eqa1} to \ref{eqa41}, using a rating value that is read from the user-item ratings matrix.
First in line \ref{estimatedrating}, the estimated rating is calculated according to Eq. \ref{eq8}.
Then the basic parameters of the model ($P$, $Q$, $W$, $Z$, $Y$, $bu$, and $bi$) and the temporal parameters ($but$, $bit$, $\alpha$, $C$, $Ct$, $\alpha^P$, $\alpha^W$, $\alpha^Z$, $Pt$, $Wt$, and $Zt$) are updated using the rating-related gradient values ($\frac{\partial E_R}{\partial(.)}$) in Eqs. \ref{eqa1} to \ref{eqa41}. Since this trainer only learns the intrinsic user preferences, only the error value in Eq. \ref{eq17} is used to update the model parameters. After the intrinsic preferences are learnt, the function in Algorithm \ref{SocialTrainer} is invoked to train the social aspects of the preferences. Similar to IntrinsicTrainer, SocialTrainer is comprised of a main loop, which iterates over the social relationship data in the social matrix. In each iteration, one entry of the social matrix is read, and the socially-influenced parameters of the model are updated through the gradient values obtained from the error in Eq. \ref{eq18}. Finally, ModelUpdater in Algorithm \ref{ModelUpdater} is invoked, and the calculated updates are applied to the model parameters. This process is repeated for a fixed number of iterations, or until a specific condition is met. At the end of this process, the model parameters ($P$, $Pt$, $\alpha^P$, $Q$, $W$, $Wt$, $\alpha^W$, $Z$, $Zt$, $\alpha^Z$, $Y$, $\omega$, $y$, $bu$, $\alpha$, $but$, $C$, $Ct$, $bi$, $bit$) have been trained from the input data and can be used to estimate the rating given by a user $u$ to an item $j$ according to Eq. \ref{eq8}.
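The regularised stochastic-gradient update applied to every parameter in the trainers follows one common pattern, sketched below as a generic illustration (it is not the exact form of the appendix update rules, which additionally carry the $|I_u|^{-\frac{1}{2}}$- and $|T_u|^{-\frac{1}{2}}$-style weights):

```python
# Generic regularised SGD step, the pattern underlying the trainers:
# param <- param - gamma * (error_gradient + lam * param),
# where gamma is the learning rate and lam the regularisation weight.
# Illustrative only; see the appendix for the exact per-parameter rules.
def sgd_step(param, error_gradient, gamma, lam):
    return param - gamma * (error_gradient + lam * param)
```

Setting `gamma` to zero for a parameter freezes it, which is how individual aspects are switched off in the component-based design described earlier.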
\begin{tcolorbox}[blanker,float=bpt, grow to left by=1cm, grow to right by=1cm]
\begin{algorithm}[H]
\caption{Model Training}\label{ModelTrainer}
\begin{algorithmic}[1]
\State{\textbf{void} ModelTrainer($\lambda$, $\gamma$, $maxIter$)}
\footnote{$\lambda$ is the set of the model hyper-parameters as specified in Eqs. \ref{eq17} and \ref{eq18} and Figure \ref{fig1}.
$N$, $M$, and $D$ respectively denote number of users, number of items, and number of features.
$\gamma$ denotes the set of learning rates, $maxIter$ denotes the maximum number of learning iterations.}
\State {$\lambda$ =\{$\lambda_{T}$,$\lambda_{P}$,$\lambda_{Pt}$,$\lambda_{\alpha^P}$,$\lambda_{Q}$,$\lambda_{W}$,$\lambda_{Wt}$,$\lambda_{\alpha^W}$,$\lambda_{Z}$,$\lambda_{Zt}$,$\lambda_{\alpha^Z}$,$\lambda_{\omega}$,$\lambda_{y}$,$\lambda_{bu}$,$\lambda_{\alpha}$,$\lambda_{but}$,$\lambda_{C}$,$\lambda_{Ct}$,$\lambda_{bi}$,$\lambda_{bit}$,$\lambda_{Y}$\}}
\State {$\gamma$ =\{$\gamma_{T}$,$\gamma_{P}$,$\gamma_{Pt}$,$\gamma_{\alpha^P}$,$\gamma_{Q}$,$\gamma_{W}$,$\gamma_{Wt}$,$\gamma_{\alpha^W}$,$\gamma_{Z}$,$\gamma_{Zt}$,$\gamma_{\alpha^Z}$,$\gamma_{\omega}$,$\gamma_{y}$,$\gamma_{bu}$,$\gamma_{\alpha}$,$\gamma_{but}$,$\gamma_{C}$,$\gamma_{Ct}$,$\gamma_{bi}$,$\gamma_{bit}$,$\gamma_{Y}$\}}
\State $\{$
\State {\textit{//Creating matrices $P$, $\omega$, $W$, and $Z$ and temporary matrices $P^S$, $\omega^S$, $W^S$, and $Z^S$:}}
\State
$Matrix $ $P,P^{S};$
$Matrix $ $\omega,\omega^{S};$
$Matrix $ $W,W^{S};$
$Matrix $ $Z,Z^{S};$
\State {\textit{//Creating vectors $\alpha^{P}$, $\alpha^{W}$, and $\alpha^{Z}$, and temporary vectors $\beta^{P}$, $\beta^{W}$, and $\beta^{Z}$:}}
\State
$Vector$ $\alpha^{P},\beta^{P};$
$Vector $ $\alpha^{W},\beta^{W};$
$Vector $ $\alpha^{Z},\beta^{Z};$
\State {\textit{//Creating tables $Pt$, $Wt$, and $Zt$, and temporary tables $Pt^S$, $Wt^S$, and $Zt^S$:}}
\State
$Table $ $Pt,Pt^{S};$
$Table $ $Wt,Wt^{S};$
$Table $ $Zt,Zt^{S};$
\State {ModelInitialiser();}\label{Init}
\State $\textit{l} \gets \textit{1};$
\For{\textit{l} $\leq$ $maxIter$}\label{learningLoop}
\State $IntrinsicTrainer();$
\State $SocialTrainer();$
\State $ModelUpdater();$
\State {$error \gets error \times 0.5;$}
\State $\textit{l} \gets \textit{l} + 1;$
\EndFor
\State $\}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{Model Initialising}\label{ModelInitialiser}
\begin{algorithmic}[1]
\State {\textbf{void} ModelInitialiser($\lambda$, $\gamma$)}
\State $\{$
\State $initMean \gets 0; initStd \gets 1;$
\State $P.init(initMean, initStd); \alpha^P.initConst(0); Pt.initConst(0);$
\State $P^S.init(initMean, initStd); \beta^P.initConst(0); Pt^S.initConst(0);$
\State $W.initConst(0); \alpha^W.initConst(0); Wt.initConst(0);$
\State $W^S.init(initMean, initStd); \beta^W.initConst(0); Wt^S.initConst(0);$
\State $Z.initConst(0); \alpha^Z.initConst(0); Zt.initConst(0);$
\State $Z^S.init(initMean, initStd); \beta^Z.initConst(0); Zt^S.initConst(0);$
\State $\omega.init(initMean, initStd); \omega^S.init(initMean, initStd);$
\State $bu.init(initMean, initStd); \alpha.initConst(0); but.initConst(0); C.initConst(0); Ct.initConst(0);$
\State $bi.init(initMean, initStd); \beta.initConst(0); bit.initConst(0);$
\State $Q.init(initMean, initStd); y.init(initMean, initStd);$
\footnote{$initMean$ and $initStd$ are the mean and standard deviation values that are used to initialise the model parameters.
init(initMean, initStd) is a function that initialises a bias vector (e.g. $bu$ and $bi$) and a matrix (e.g. $P$, and $Q$) using Gaussian distribution with mean value of $initMean$ and standard deviation of $initStd$.
initConst(c) initialises a matrix (e.g. $W$ and $Z$) or a vector with the constant value $c$.}
\State $\}$
\end{algorithmic}
\end{algorithm}
\end{tcolorbox}
\begin{tcolorbox}[blanker,float=bpt, grow to left by=1cm, grow to right by=1cm]
\begin{algorithm}[H]
\caption{Intrinsic Training}\label{IntrinsicTrainer}
\begin{algorithmic}[1]
\State {\textbf{void} IntrinsicTrainer($\lambda$, $\gamma$)}
\State $\{$
\State $\textit{u} \gets \textit{1};$
\For {\textit{u} $\leq$ \textit{N}} \label{forloop1}
\State $\textit{j} \gets \textit{1};$
\For {\textit{j} $\leq$ \textit{M}}
\If {$R_{uj} \ne 0$}
\State Calculate $\hat{R}_{uj}$ according to Eq. \ref{eq8}.\label{estimatedrating}
\State Get the time $t$ that the rating $R_{uj}$ has been given.
\State {Update $bu_{u}$, $but_{ut}$, and $\alpha_{u}$ according to Eqs. \ref{eqa1}-\ref{eqa3} using $\gamma_\alpha$, $\gamma_{bu}$, $\gamma_{but}$;}\label{updateRulebu}
\State {Update $bi_{j}$ and $bit_{jt}$ according to Eqs. \ref{eqa4}-\ref{eqa5} using $\gamma_{bi}$ and $\gamma_{bit}$;}\label{updateRulebi}
\State {Update $C_{u}$ and $Ct_{ut}$ according to Eqs. \ref{eqa6}-\ref{eqa7} using $\gamma_{C}$ and $\gamma_{Ct}$;}\label{updateRuleC}
\State $\textit{f} \gets \textit{1};$
\For{\textit{f} $\leq$ \textit{D}}
\State {Update $P^S_{uf}$, $Pt^S_{uft}$, and $\beta^{P}_{u}$ according to Eqs. \ref{eqa9}, \ref{eqa12}, and \ref{eqa15} using $\gamma_P$, ${\gamma_{Pt}}$, and $\gamma_{\alpha^P}$;}\label{updateRuleP1}
\State {Update $Q_{jf}$ according to Eq. \ref{eqa40} using $\gamma_Q$;}\label{updateRuleQ}
\State {Update $W^S_{uf}$, $Wt^S_{uft}$, and $\beta^{W}_{u}$ according to Eqs. \ref{eqa18}, \ref{eqa21}, and \ref{eqa24} using $\gamma_W$, ${\gamma_{Wt}}$, and $\gamma_{\alpha^W}$;}\label{updateRuleW1}
\State {Update $Z^S_{uf}$, $Zt^S_{uft}$, and $\beta^{Z}_{u}$ according to Eqs. \ref{eqa27}, \ref{eqa30}, and \ref{eqa33} using $\gamma_Z$, ${\gamma_{Zt}}$, and $\gamma_{\alpha^Z}$;}\label{updateRuleZ1}
\State {$\forall v \in T_{u}$: Update $\omega^S_{vf}$ according to Eq. \ref{eqa35} using $\gamma_\omega$;}\label{updateRuleOmega1}
\State {$\forall i \in I_{u}$: Update $y_{if}$ according to Eq. \ref{eqa33} using $\gamma_y$;}\label{updateRuley}
\State $f^{'} \gets \textit{f}+1;$
\For{$f^{'}$ $\leq$ $\textit{D}$}
\State {Update $Y_{ff^{'}}$ and $Y_{f^{'}f}$ according to Eq. \ref{eqa39} using $\gamma_Y$;} \label{updateRuleY}
\State $f^{'} \gets f^{'} + 1;$
\EndFor
\State $\textit{f} \gets \textit{f} + 1;$
\EndFor
\EndIf
\State $\textit{j} \gets \textit{j} + 1;$
\EndFor
\State $\textit{u} \gets \textit{u} + 1;$
\EndFor
\State $\}$
\end{algorithmic}
\end{algorithm}
\end{tcolorbox}
\begin{tcolorbox}[blanker,float=bpt, grow to left by=1cm, grow to right by=1cm]
\begin{algorithm}[H]
\caption{Social Training}\label{SocialTrainer}
\begin{algorithmic}[1]
\State {\textbf{void} SocialTrainer($\lambda$, $\gamma$)}
\State $\{$
\State $\textit{u} \gets \textit{1};$
\For {\textit{u} $\leq$ \textit{N}}\label{forloop2}
\State $\textit{v} \gets \textit{1};$
\For {\textit{v} $\leq$ \textit{N}}
\If {\textit{v} $\in$ \textit{$T_u$}}
\State $\textit{f} \gets \textit{1};$
\For{\textit{f} $\leq$ \textit{D}}
\State {Update $P^S_{uf}$, $W^S_{uf}$, and $Z^S_{uf}$ according to Eqs. \ref{eqa10}, \ref{eqa19}, \ref{eqa28} using $\gamma_P$, $\gamma_W$, and $\gamma_Z$;}\label{updateRulePWZ2}
\State {$\forall t \in I_u^t:$ Update $Pt^S_{uft}$, $Wt^S_{uft}$, and $Zt^S_{uft}$ according to Eqs. \ref{eqa13}, \ref{eqa16}, \ref{eqa19} using $\gamma_{Pt}$, $\gamma_{Wt}$, and $\gamma_{Zt}$;}\label{updateRulePWZ3}
\State {Update $\beta^P_{uf}$, $\beta^W_{uf}$, and $\beta^Z_{uf}$ according to Eqs. \ref{eqa16}, \ref{eqa19}, \ref{eqa22} using $\gamma_{\alpha^P}$, $\gamma_{\alpha^W}$, and $\gamma_{\alpha^Z}$;}\label{updateRuleAlphaPWZ2}
\State {$\forall t \in I_u^t:$ Update $\omega^t_{vf}$ according to Eq. \ref{eqa26} using $\gamma_\omega$;}
\State $\textit{f} \gets \textit{f} + 1;$
\EndFor
\EndIf
\State $\textit{v} \gets \textit{v} + 1;$
\EndFor
\State $\textit{u} \gets \textit{u} + 1;$
\EndFor
\State $\}$
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[H]
\caption{Model Updating}\label{ModelUpdater}
\begin{algorithmic}[1]
\State {\textbf{void} ModelUpdater($\lambda$, $\gamma$)}
\State $\{$
\State
{$\forall u,f: P_{uf} \gets - \gamma_{U} \times P^{S}_{uf};$}
\State
{$\forall u: \alpha^P_u \gets - \gamma_{\alpha^P} \times \beta^P_u;$}
\State
{$\forall u,f: W_{uf} \gets - \gamma_{W} \times W^{S}_{uf};$}
\State
{$\forall u: \alpha^W_u \gets - \gamma_{\alpha^W} \times \beta^W_u;$}
\State
{$\forall u,f: Z_{uf} \gets - \gamma_{Z} \times Z^{S}_{uf};$}
\State
{$\forall u: \alpha^Z_u \gets - \gamma_{\alpha^Z} \times \beta^Z_u;$}
\State
{$\forall u,f: \omega_{uf} \gets - \gamma_{\omega} \times \omega^{S}_{uf};$}
\State $\}$
\end{algorithmic}
\end{algorithm}
\end{tcolorbox}
\subsubsection{Computational complexity analysis}
\label{Computational Complexity Analysis}
The model training in Algorithm \ref{ModelTrainer} comprises one main loop that runs for a fixed number of iterations (maxIter). Since maxIter is a constant, the computation time of the model trainer is expressed in Eq. \ref{eq41}.
\small
\begin{equation}
\label{eq41}
\begin{split}
C(ModelTrainer) & = C(IntrinsicTrainer) + C(SocialTrainer) + C(ModelUpdater)
\end{split}
\end{equation}
\normalsize
First, we examine the computational complexity of Intrinsic Training in Algorithm \ref{IntrinsicTrainer}. At the top level, this algorithm comprises two loops that iterate over the non-zero ratings in the rating matrix $R$. In the following, $|R|$ and $|T|$ denote the numbers of non-zero entries in the rating matrix $R$ and the adjacency matrix $T$, respectively. In the Intrinsic Trainer:
\begin{itemize}
\item The number of repetitions to calculate the estimated ratings ($\hat{R}$) in line \ref{estimatedrating} is $(D^2\times|R|) + (D\times\sum_{u=1}^{N}|I_u|^2) + (D\times\sum_{u=1}^{N}|I_u|\times|T_u|)$.
\item The number of repetitions to update parameters related to user and item biases in lines \ref{updateRulebu}, \ref{updateRulebi}, and \ref{updateRuleC} is $7\times|R|$.
\item The number of repetitions needed to update the parameters $P$, $Q$, $W$, and $Z$ in lines \ref{updateRuleP1}, \ref{updateRuleQ}, \ref{updateRuleW1}, and \ref{updateRuleZ1} is $10\times D\times|R|$.
\item The number of repetitions needed to update the parameters $\omega$ in line \ref{updateRuleOmega1} is $D\times\sum_{u=1}^{N}(|I_u|\times|T_u|)$.
\item The number of repetitions needed to update the parameters $y$ in line \ref{updateRuley} is $D\times\sum_{u=1}^{N}|I_u|^2$.
\item The number of repetitions needed to update the dependency matrix $Y$ in line \ref{updateRuleY} is $D^2\times|R|$.
\end{itemize}
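To make the tally above concrete, the following Python sketch (the function and variable names are illustrative, not the authors' implementation) sums the repetition counts of the six items for given per-user statistics:

```python
def intrinsic_trainer_ops(D, ratings_per_user, trustees_per_user):
    """Tally the repetition counts itemised above.

    ratings_per_user[u]  = |I_u|  (items rated by user u)
    trustees_per_user[u] = |T_u|  (users trusted by user u)
    """
    R = sum(ratings_per_user)                         # |R|: non-zero ratings
    sum_I2 = sum(i * i for i in ratings_per_user)     # sum_u |I_u|^2
    sum_IT = sum(i * t for i, t in zip(ratings_per_user, trustees_per_user))
    ops = 0
    ops += D * D * R + D * sum_I2 + D * sum_IT        # rating estimation
    ops += 7 * R                                      # bias updates
    ops += 10 * D * R                                 # P, Q, W, Z updates
    ops += D * sum_IT                                 # omega updates
    ops += D * sum_I2                                 # y updates
    ops += D * D * R                                  # dependency matrix Y
    return ops
```

Summing per-user products like $|I_u|\times|T_u|$ directly, rather than assuming uniform activity, makes the count exact for skewed datasets as well.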
Therefore, the overall number of repetitions for the Intrinsic Trainer is obtained according to Eq. \ref{eq42}.
\small
\begin{equation}
\label{eq42}
\begin{split}
N(IntrinsicTrainer)&= D^2\times|R| + D\times\sum_{u=1}^{N}|I_u|^2 + D\times \sum_{u=1}^{N}|I_u|\times|T_u|
+ 7\times|R|
\\ &
+ 10\times D\times|R|
+ D\times\sum_{u=1}^{N}(|I_u|\times|T_u|)
+ D\times\sum_{u=1}^{N}|I_u|^2
\\ &
+ D^2\times|R|
\end{split}
\end{equation}
\normalsize
Assuming that, on average, each user rates $c$ items and trusts $k$ users, the computation time can be obtained as Eq. \ref{eq43}.
\small
\begin{equation}
\label{eq43}
\begin{split}
C(IntrinsicTrainer) = O(D^2\times|R|) + O(D\times c\times |R|) + O(D\times c\times |T|)
\end{split}
\end{equation}
\normalsize
Assuming that $c,k \ll N$, we can treat $c$ and $k$ as constants. The computation time of the Intrinsic Trainer is therefore obtained according to Eq. \ref{eq44}.
\small
\begin{equation}
\label{eq44}
\begin{split}
C(IntrinsicTrainer) = O(D^2\times|R|) + O(D\times|R|) + O(D\times |T|) = O(D^2\times|R|) + O(D\times |T|)
\end{split}
\end{equation}
\normalsize
Consequently, the overall computation time is linear with respect to the number of observed ratings as well as observed trust statements.
Social Trainer consists of two loops that iterate over the non-zero trust relations in the adjacency matrix $T$. The number of repetitions needed to update the parameters $P$, $W$, $Z$, and $\beta^P$, $\beta^W$, and $\beta^Z$ is $6\times D\times|T|$. The number of repetitions to update the values of $Pt$, $Wt$, $Zt$, and $\omega$ is equal to $4\times(\sum_{u=1}^{N}|I_u|\times|T_u|\times D)$. Therefore, the computation time of Social Trainer is equal to:
\small
\begin{equation}
\label{eq45}
\begin{split}
C(SocialTrainer) = O(D\times |R|) + O(D\times |T|)
\end{split}
\end{equation}
\normalsize
In the Model Updater, the values of the matrices $P$, $W$, $Z$ and the vectors $\omega$, $\alpha^P$, $\alpha^W$, and $\alpha^Z$ need to be updated, which takes $O(N\times D)$ time. Assuming that each user has rated at least one item, $|R|$ is at least the number of users $N$.
Therefore, the computation time of the Model Updater does not exceed the maximum computation time of the Intrinsic Trainer and the Social Trainer. Finally, the computation time of the Model Trainer is obtained as Eq. \ref{eq46}.
\small
\begin{equation}
\label{eq46}
\begin{split}
C(ModelTrainer) = O(D^2\times |R|) + O(D\times |T|)
\end{split}
\end{equation}
\normalsize
The number of latent factors $D$ is fixed; hence the computation time is a function of $|R|$ and $|T|$ only. Since both the ratings matrix and the social network matrix are sparse, the algorithm scales to problems with millions of users and items.
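As a quick sanity check on this linearity, a toy cost model built from the dominant terms of Eq. \ref{eq46} (purely illustrative, not a measurement of the actual implementation) doubles exactly when $|R|$ and $|T|$ double, for a fixed $D$:

```python
def per_iteration_cost(D, num_ratings, num_trust):
    # Dominant terms of the overall bound: O(D^2 |R|) + O(D |T|)
    return D * D * num_ratings + D * num_trust

base = per_iteration_cost(5, 10_000, 2_000)
# Doubling both |R| and |T| doubles the per-iteration work
assert per_iteration_cost(5, 20_000, 4_000) == 2 * base
```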
\section{Experiments}
\label{Experiments}
\subsection{Datasets}
We tested Aspect-MF on three popular datasets: Ciao, Epinions, and Flixster.
Ciao is a dataset crawled from the ciao.co.uk website. It includes 35,835 ratings given by 2,248 users over 16,861 movies, as well as the trust relationships between users; the number of trust relationships is 57,544. The densities of the ratings and the trust relationships are therefore 0.09\% and 1.14\%, respectively. The ratings are integer values between 1 and 6.
The Epinions dataset consists of 664,824 ratings from 40,163 users on 139,738 items of different types (software, music, television shows, hardware, office appliances, etc.). Ratings are integer values between 1 and 5, and the data density is 0.011\%. Epinions also enables users to issue explicit trust statements about other users; the dataset includes 487,183 trust ratings, and the density of the trust network is 0.03\%.
Flixster is a social movie site that allows users to rate movies, share their ratings, and become friends with others of similar movie taste. The Flixster dataset, collected from the Flixster website, includes 8,196,077 ratings issued by 147,612 users on 48,794 movies. The social network includes 7,058,819 friendship links. The densities of the ratings matrix and the social network matrix are 0.11\% and 0.001\%, respectively.
In all the experiments in sections \ref{Discussion}, \ref{Statistical Analysis}, and \ref{Dynamic aspects}, 80\% of each dataset is used for training and the remaining 20\% for evaluation. To allow statistically sound comparisons, each model training is repeated 30 times and the average values are reported. In section \ref{Effect of the size of the training dataset}, we analyse the behaviour of the models in other cases, where 60\% and 40\% of the ratings are used for training.
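A protocol of this kind can be sketched as a shuffle-and-split with per-run seeds (the function and variable names here are illustrative, not the authors' code):

```python
import random

def split_ratings(ratings, train_frac=0.8, seed=0):
    """Randomly split rating tuples into training and test portions."""
    rng = random.Random(seed)
    shuffled = list(ratings)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Repeat the training 30 times with different seeds and average the errors:
# errors = []
# for s in range(30):
#     train, test = split_ratings(all_ratings, train_frac=0.8, seed=s)
#     errors.append(evaluate(fit(train), test))
# mean_error = sum(errors) / len(errors)
```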
\subsection{Comparisons}
\label{Comparisons}
In order to show the effectiveness of Aspect-MF, we compared its recommendation quality against some of the most popular state-of-the-art models that have reported the highest accuracies in the literature. The following models are compared across the experiments in this section:
\begin{itemize}
\item \textit{TrustSVD} \citep{guo2015trustsvd}, which builds on SVD++ \citep{koren2011advances}. Missing ratings are estimated based on explicit and implicit feedback from user ratings and the user's trust relations.
\item \textit{CondTrustFVSVD} \citep{zafari2017modelling}, which extends TrustSVD by adding conditional preferences over feature values. Experimental results show that this method is significantly more accurate than TrustSVD. This model is denoted CTFVSVD in the experiments section.
\item \textit{Aspect-MF}, which is the model proposed in this paper. The component-based approach that we took in designing this model enabled us to arbitrarily switch on/off the dynamicity over different preference aspects. Therefore, in the experiments we try all the combinations of dynamic preference aspects. This results in 7 combinations denoted by $b$, $bf$, $bffv$, $bfv$, $f$, $ffv$, and $fv$ \footnote{\textit{fv} denotes feature value preferences, \textit{f} denotes feature preferences, and \textit{b} denotes bias. Therefore, \textit{bffv} denotes a model with all the three aspects.}.
\end{itemize}
Guo, Zhang and Yorke-Smith \citep{guo2016novel} carried out comprehensive experiments and showed that their model, TrustSVD, outperformed all state-of-the-art models. Recently, Zafari and Moser \citep{zafari2017modelling} showed that their model, CondTrustFVSVD, significantly outperforms TrustSVD. Therefore, in this section, we limit our comparisons to these two models, since they already outperform a comprehensive set of state-of-the-art recommendation models \citep{guo2016novel,zafari2017modelling}.
The optimal experimental settings for each method are determined either by our experiments or taken from previous works \citep{guo2015trustsvd,guo2016novel,zafari2017proposing}.
Due to over-fitting, the accuracy of iterative models improves for a number of iterations, after which it starts to degrade. We therefore recorded the best accuracy values achieved by each model during the iterations and compared the models based on the recorded values. We believe that this results in a fairer comparison than fixing the number of iterations, because the models over-fit at different iterations, and a fixed iteration count would prevent us from comparing the models based on their real capacity to uncover hidden patterns in the data. The reported results for iterative models are therefore the best results they could achieve using the aforementioned parameters. The MAE and RMSE measures are used to evaluate and compare the accuracy of the models.
MAE and RMSE are two standard and popular measures of the performance of preference modelling methods in recommender systems. In the following sections, we consider the performances separately for All Users and Cold-start Users. Cold-start Users are users who have rated fewer than 5 items; All Users includes all users regardless of the number of items they have rated.
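For completeness, the two measures can be computed as follows (a straightforward sketch over paired actual and predicted ratings):

```python
import math

def mae(actual, predicted):
    """Mean absolute error over paired ratings."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def rmse(actual, predicted):
    """Root mean squared error; penalises large errors more heavily than MAE."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))
```

Because RMSE squares the residuals before averaging, a few badly mispredicted ratings inflate RMSE more than MAE, which is why the two measures can rank models differently.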
\subsection{Discussion}
\label{Discussion}
All latent factor approaches have been evaluated with 5 factors, because no clear ideal value could be established. In section \ref{Model Performances}, we first analyse the performance of the models from different perspectives. Since the results are subject to randomness, we also performed a t-test to verify that the observed improvements do not happen by chance; the results are discussed in section \ref{Statistical Analysis}. As mentioned in section \ref{Introduction}, one of the research questions in this paper concerns the interplay between the dynamicity of preference aspects and the preference domain. In section \ref{Dynamic aspects}, we consider the performance of the combinations of Aspect-MF in order to pinpoint the aspects that are more subject to temporal drift in each dataset. In section \ref{Effect of the size of the training dataset}, we also consider the effect of the amount of training data fed to the model and analyse the robustness of the models to a shortage of training data.
\subsubsection{Model performances}
\label{Model Performances}
A preference model's performance can be examined with respect to the dataset on which it is trained, the accuracy measure used to evaluate it, and its behaviour on cold-start users versus all users.
\paragraph{\textbf{Datasets}}
\label{Datasets}
The error values in Fig. \ref{fig5} show that Aspect-MF yields substantial improvements over TrustSVD on all three datasets, for both measures, and for both all users and cold-start users. As we can see in this figure, the box plots of Aspect-MF's combinations have little overlap with the box plot of TrustSVD, which strongly indicates that the differences are statistically significant. We can also see that the box plots for Aspect-MF's combinations are usually much narrower than that for TrustSVD. This suggests that Aspect-MF's combinations are more stable than TrustSVD, in the sense that they find roughly the same solutions across different model executions. This is a favourable property, since it makes the model's performance less subject to randomness; a model that performs well sometimes and worse at other times is less reliable. The superior performance is likely due to the model taking multiple preference aspects into account, which gives it more clues as to where the optimal solutions reside in the solution space.
In particular, we can see that the model is more stable on the Ciao and Epinions datasets than on the Flixster dataset. On the Epinions dataset, a typical user rates 41.61 items on average and a typical cold-start user 4.08 items; the corresponding numbers are 15.94 and 2.94 for the Ciao dataset, and 11.12 and 1.94 for the Flixster dataset. This could explain why the variations are larger on the Flixster dataset than on the Epinions and Ciao datasets: since more ratings per user are available in the Ciao and Epinions datasets, different executions lead the model to more similar solutions than those found on the Flixster dataset.
We can also see from Table \ref{table1} that on the Ciao and Flixster datasets, the improvements are more significant for RMSE than for MAE.
We can also clearly observe that the model variations are smaller for all users in the Epinions dataset, and for cold-start users in the Flixster dataset.
\begin{figure}[!ht]
\vskip 0cm
\centering
\setcounter{figure}{4}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{C_80_MAE_ALL.pdf}
\caption{MAE, all users}
\label{fig5a}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{C_80_MAE_CS.pdf}
\caption{MAE, cold-start users}
\label{fig5b}
\end{subfigure}%
\newline
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{C_80_RMSE_ALL.pdf}
\caption{RMSE, all users}
\label{fig5c}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{C_80_RMSE_CS.pdf}
\caption{RMSE, cold-start users}
\label{fig5d}
\end{subfigure}%
\centering
\caption{Box plots of Aspect-MF's combinations (b, bf, bffv, bfv, f, ffv, fv) and CTFVSVD versus TrustSVD on the Ciao dataset in terms of the MAE and RMSE measures for cold-start users (CS) and all users (ALL).}
\label{fig5}
\end{figure}
\begin{figure}[!ht]
\vskip 0cm
\centering
\setcounter{figure}{5}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{E_80_MAE_ALL.pdf}
\caption{MAE, all users}
\label{fig6a}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{E_80_MAE_CS.pdf}
\caption{MAE, cold-start users}
\label{fig6b}
\end{subfigure}%
\newline
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{E_80_RMSE_ALL.pdf}
\caption{RMSE, all users}
\label{fig6c}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{E_80_RMSE_CS.pdf}
\caption{RMSE, cold-start users}
\label{fig6d}
\end{subfigure}%
\centering
\caption{Box plots of Aspect-MF's combinations (b, bf, bffv, bfv, f, ffv, fv) and CTFVSVD versus TrustSVD on the Epinions dataset in terms of the MAE and RMSE measures for cold-start users (CS) and all users (ALL).}
\label{fig6}
\end{figure}
\begin{figure}[!ht]
\vskip 0cm
\centering
\setcounter{figure}{6}
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{F_80_MAE_ALL.pdf}
\caption{MAE, all users}
\label{fig7a}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{F_80_MAE_CS.pdf}
\caption{MAE, cold-start users}
\label{fig7b}
\end{subfigure}%
\newline
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{F_80_RMSE_ALL.pdf}
\caption{RMSE, all users}
\label{fig7c}
\end{subfigure}%
\begin{subfigure}[b]{0.5\textwidth}
\includegraphics[height=5cm,width=6.5cm]{F_80_RMSE_CS.pdf}
\caption{RMSE, cold-start users}
\label{fig7d}
\end{subfigure}%
\centering
\caption{Box plots of Aspect-MF's combinations (b, bf, bffv, bfv, f, ffv, fv) and CTFVSVD versus TrustSVD on the Flixster dataset in terms of the MAE and RMSE measures for cold-start users (CS) and all users (ALL).}
\label{fig7}
\end{figure}
\paragraph{\textbf{Accuracy measures}}
\label{Accuracy Measures}
As the statistical analysis of the models in Table \ref{table1} shows, the differences are generally more significant when accuracy is measured in terms of RMSE. This can be explained by the formulation of these models as an optimisation problem: they are trained by minimising a squared-error objective, which directly targets RMSE, so better MAE values are only a secondary goal pursued through minimising RMSE.
\paragraph{\textbf{Cold-start vs all users}}
\label{Cold-start vs All Users}
By taking a close look at the statistical analysis results in Table \ref{table1} and the box plots of CTFVSVD versus Aspect-MF's combinations in Fig. \ref{fig5}, we can see that in all three datasets, the improvements of Aspect-MF are more significant for all users than for cold-start users. This can be explained by the amount of temporal information the models receive for each group of users. For all users, the model is trained on all ratings together with their associated time stamps; it can therefore discern the temporal patterns in the preferences more successfully, and the accuracy improvements are larger. For cold-start users, however, the model has access to little temporal information, since these users do not have many ratings. As a result, the model cannot identify the shift in their preferences, and the improvements are smaller. From this, we conclude that temporal models are more successful on all users, for whom sufficient temporal information is available.
\subsection{Statistical analysis}
\label{Statistical Analysis}
The statistical analysis of the performances provided in Table \ref{table1} shows that all of Aspect-MF's combinations achieve significantly better results than TrustSVD, which does not use temporal information. The values in Table \ref{table2} show that Aspect-MF's combinations also yield statistically significant improvements over CTFVSVD, which means that in all three datasets, Aspect-MF successfully extracts the temporal patterns in the users' preferences. All the p values in Table \ref{table1} are 0.0000, meaning that with near certainty the two sets of model executions (Aspect-MF and TrustSVD) do not come from distributions with equal mean performance; the observed differences are therefore due to the superiority of Aspect-MF over TrustSVD rather than to chance. Similarly, the p values in Table \ref{table2} are close to zero in the cases where the t-test shows a statistically significant improvement of Aspect-MF over CTFVSVD.
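As an illustration of the kind of test involved, a minimal Welch two-sample t statistic over the recorded error values of two models might look like this (a sketch under the assumption of unequal variances; the paper does not specify the exact test variant used):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    na, nb = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / na
    mean_b = sum(sample_b) / nb
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (na - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (nb - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / na + var_b / nb)

# A large-magnitude negative t (model A's errors lower than model B's),
# paired with a tiny p value, indicates a significant improvement of A over B.
```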
\subsection{Dynamic aspects}
\label{Dynamic aspects}
The close comparison of the error values achieved by Aspect-MF in Fig. \ref{fig8} shows that, in terms of MAE for all users, Aspect-MF achieves the best performance on the Ciao and Epinions datasets for the models including the dynamic $b$ and $f$ aspects. On the Flixster dataset, however, the combination with the dynamic $b$ and $fv$ aspects performs best. Interestingly, for cold-start users, different models perform best: on the Ciao dataset, the model including dynamic $f$ performs best, whereas on the Epinions and Flixster datasets, the model including the dynamic $b$, $f$, and $fv$ aspects and the model with the drifting $f$ aspect achieve the best results, respectively. Furthermore, the error values in Fig. \ref{fig9} show that different model combinations may achieve the best performances in terms of RMSE.
From these figures, we can draw several conclusions.
The first is that the dynamic patterns are dataset-dependent: the users and items in different datasets can have preference aspects with different levels of dynamicity. This finding supports our component-based approach to modelling the dynamic properties of the preference aspects.
The second conclusion is that the prediction of ratings for cold-start users depends less on the drifting bias than that for all users. As Figures \ref{fig8} and \ref{fig9} show, for all users the combinations that include the dynamic $b$ aspect are strictly better than the other combinations, whilst this is less consistent for cold-start users, where sometimes the models with only the dynamic $f$ aspect perform best. This suggests that the preferences of cold-start users are not much affected by shifts in the popularity of items, while other users' preferences are more influenced by such shifts; accurate modelling of these temporal effects therefore matters more for all users than for cold-start users. As previous studies have shown \cite{koenigstein2011yahoo}, bias is a very important aspect of human preferences. However, since cold-start users do not have enough ratings, there is also not enough temporal data to train their preferences. The trained temporal aspects of these users are therefore probably inaccurate, and the combinations that include bias perform poorly on them due to imprecise predictions.
The third conclusion is that both measures reveal roughly the same preference patterns. This seems justifiable, since any shift in user preferences should be independent of how the difference between estimated and real preferences is measured.
To summarise, it is very advantageous to have a component-based model in which the temporal aspects of preferences can be arbitrarily captured in different conditions. This enables us to capture the patterns only when they are actually helpful, and consequently, build the most accurate preference models, tailored to different datasets and domains with disparate temporal patterns.
\begin{figure}[H]
\setcounter{figure}{0}
\includegraphics[width=1\linewidth]{TRUSTSVD.pdf}
\captionsetup{labelformat=empty}
\centering
\caption{Table 1: The t values and p values for Aspect-MF's combinations vs TrustSVD in Ciao, Epinions, and Flixster datasets for MAE and RMSE measures on all users (ALL) and cold-start users (CS)}
\label{table1}
\end{figure}
\begin{figure}[H]
\setcounter{figure}{1}
\includegraphics[width=1\linewidth]{CTFVSVD.pdf}
\captionsetup{labelformat=empty}
\centering
\caption{Table 2: The t values and p values for Aspect-MF's combinations vs CTFVSVD in Ciao, Epinions, and Flixster datasets for MAE and RMSE measures on all users (ALL) and cold-start users (CS)}
\label{table2}
\end{figure}
\begin{figure}[!htp]
\hspace*{0.2in}
\centerline{\includegraphics[scale=0.7]{Aspect-MF-MAE-Performances.pdf}}
\caption{Comparisons of the MAE values of Aspect-MF's combinations in a,b) Ciao, c,d) Epinions, and e,f) Flixster datasets for all users (MAE-ALL) and cold-start users (MAE-CS)}
\label{fig8}
\end{figure}
\begin{figure}[!htp]
\hspace*{0.2in}
\centerline{\includegraphics[scale=0.7]{Aspect-MF-RMSE-Performances.pdf}}
\caption{Comparisons of the RMSE values of Aspect-MF's combinations in a,b) Ciao, c,d) Epinions, and e,f) Flixster datasets for all users (RMSE-ALL) and cold-start users (RMSE-CS)}
\label{fig9}
\end{figure}
\subsection{Effect of the size of the training dataset}
\label{Effect of the size of the training dataset}
The main purpose of this section is to evaluate the robustness of the models to a shortage of training data. In the experiments in sections \ref{Discussion} through \ref{Dynamic aspects}, 80\% of the ratings matrix was used for training the models and the remaining data for evaluation. The question that arises is how the models would perform if less data were used for training.
In order to analyse the behaviour of the models with respect to the amount of training data, we reduce the amount of training data and observe how much the accuracy drops. We therefore evaluate the models in two additional cases: the first uses 60\% of the data for training and the remaining 40\% for testing, and the second uses 40\% of the ratings data for training and the rest for evaluation. The results for the Flixster and Ciao datasets are shown in Figs. \ref{fig10} and \ref{fig11}, respectively. These figures show the percentage by which the error increases as the amount of training data is decreased.
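The quantity plotted in these figures can be computed as the relative error growth between two training sizes (an illustrative helper, not the authors' script):

```python
def error_increase_pct(err_smaller_train, err_larger_train):
    """Percentage by which the error grows when the training set shrinks,
    e.g. from 80% to 60% of the ratings (the '80-60' bars)."""
    return 100.0 * (err_smaller_train - err_larger_train) / err_larger_train
```

For example, a model whose MAE rises from 1.00 (trained on 80\%) to 1.05 (trained on 60\%) shows a 5\% error increase for the 80-60 case.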
\paragraph{\textbf{All users}}
As can be seen in Fig. \ref{fig10}, on the Flixster dataset, for all users, all combinations of Aspect-MF result in a smaller increase in error when the training data is decreased from 80\% to 60\% (denoted 80-60 in these diagrams) and from 60\% to 40\% (denoted 60-40). Furthermore, in terms of MAE, the combination that includes $f$ and $fv$ resulted in the smallest error increase when the training data decreased from 80\% to 60\%, and the model that included $fv$ resulted in the smallest error increase when the training data decreased from 60\% to 40\%. This suggests that the dynamic model is more robust to the shortage of training data when the error is measured in terms of MAE for all users. In terms of RMSE, the least accuracy deterioration occurred for the model combination with the $f$ aspect, both when the training data drops to 60\% and when it drops to 40\%.
\paragraph{\textbf{Cold-start users}}
For cold-start users, however, a different pattern is evident. Interestingly, the error increases more when the training data is decreased from 80\% to 60\% than when it is decreased from 60\% to 40\%. Judging by the higher error increase for cold-start users in comparison with all users, cold-start users appear more sensitive to a reduction in the amount of training data. This is understandable: since cold-start users do not have many ratings, less accurate predictions for each rating have a larger effect on the overall accuracy when evaluating the model on these users.
TrustSVD seems to be more robust to the shortage of training data for cold-start users when the training data drops from 60\% to 40\%. This can be attributed to the fact that the dynamic model relies on time information, which can be misleading if we substantially decrease the amount of training data and evaluate the accuracy for cold-start users, who do not have many ratings. A similar observation was made in Figs. \ref{fig8} and \ref{fig9}, where the dynamic model including the $b$ aspect performed poorly on cold-start users.
\begin{figure}[!htp]
\vskip 0cm
\centerline{\includegraphics[scale=0.3]{F_Effects.pdf}}
\caption{Effect of the training amount on Flixster dataset for a) MAE for all users, b) MAE for cold-start users, c) RMSE for all users, d) RMSE for cold-start users}
\label{fig10}
\end{figure}
\paragraph{\textbf{All users vs cold-start users}}
A similar trend to the one observed in Flixster dataset can also be seen in the Ciao dataset in Figure \ref{fig11}. As this figure shows, the accuracy deterioration for cold-start users is much larger compared with that for all users. Again, we attribute this to the high sensitivity of cold-start users to inaccurate predictions.
For the case where the training data amount drops from 80\% to 60\%, the model combination with all the dynamic aspects ($bffv$) results in the lowest increase in MAE for all users. For cold-start users, the model combination with $b$ and $f$ aspects achieve the smallest deterioration of accuracy.
However, in terms of RMSE for all users, TrustSVD incurs the lowest increase in error, while for cold-start users, the model with the dynamic $fv$ aspect is the most robust. In the second case, where the training data is decreased from 60\% to 40\%, at least one of the model combinations performs best (incurs the lowest accuracy deterioration) for each measure among the models tested. We can also see that when the training data is decreased from 80\% to 60\%, the error increase is much lower than when it drops from 60\% to 40\%. This means that the models are still quite robust with 60\% of the ratings data as training data, but their accuracy drops considerably when the training data decreases to 40\%.
\paragraph{\textbf{Flixster vs Ciao}}
One of the key differences between the behaviour of the models on the Flixster and Ciao datasets, as can be seen in Figs. \ref{fig10} and \ref{fig11}, is the threshold at which the accuracy sharply drops for cold-start users. For the Flixster dataset, the accuracy of cold-start users sharply worsens when the training data amount is decreased from 80\% to 60\%, while for the Ciao dataset, the sharp decrease in accuracy happens when the training data amount decreases from 60\% to 40\%. This can be easily justified by looking at the statistics of these two datasets for cold-start users. On the Flixster dataset as we mentioned before, each cold-start user rates 1.94 items on average, while this number is 2.94 in the Ciao dataset. Therefore, the accuracy of cold-start users on the Flixster dataset is more sensitive to inaccurate predictions than that on the Ciao dataset.
\begin{figure}[!htp]
\vskip 0cm
\centerline{\includegraphics[scale=0.3]{C_Effects.pdf}}
\caption{Effect of the training data amount on the Ciao dataset for a) MAE for all users, b) MAE for cold-start users, c) RMSE for all users, d) RMSE for cold-start users}
\label{fig11}
\end{figure}
Considering all four measures on the two datasets, in general, we can observe that Aspect-MF's combinations are more robust to the decrease in the amount of training information than TrustSVD and CTFVSVD. The combinations proposed in this paper are particularly helpful when enough time-related data is fed into the model as input.
\paragraph{\textbf{Insights}}
From the observations for cold-start users, we can conclude that for the time information to be helpful, the model must be provided with enough time-related data as input. The importance of such data is more pronounced for cold-start users, whose predictions are more sensitive to inaccuracies. If the amount of training data is insufficient, the model can learn spurious temporal patterns that directly result from the shortage of training information.
We also saw that the degree of accuracy deterioration depends on the dataset. On the Flixster dataset, the accuracy degrades by between just under 1\% and just under 5\%. On Ciao, however, the accuracy deteriorates much more (roughly between 6.5\% and 19.5\%). Therefore, it is up to the system owners to decide whether they would like to use smaller datasets and sacrifice accuracy, or spend more time training more accurate models using more information. We did not observe any tangible differences between the execution times of these cases (80\%-60\%-40\%), and the computational complexity analysis of the model in section \ref{Computational Complexity Analysis} showed that the model's time complexity is linear. Therefore, it is advisable for system owners to use as much data as is available to achieve the highest accuracy, as long as their computational limitations allow.
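The relative deterioration figures quoted above can be reproduced with a one-line helper. The sketch below is purely illustrative: the MAE values are hypothetical placeholders, not the measured results of this paper.

```python
# Sketch (not the paper's code): relative accuracy deterioration when the
# training fraction shrinks across the 80% -> 60% -> 40% splits.

def deterioration(err_large: float, err_small: float) -> float:
    """Percentage increase in error when less training data is used."""
    return 100.0 * (err_small - err_large) / err_large

# Hypothetical MAE values for the three training splits.
mae_80, mae_60, mae_40 = 0.640, 0.655, 0.705

drop_80_60 = deterioration(mae_80, mae_60)
drop_60_40 = deterioration(mae_60, mae_40)
print(f"80% -> 60%: +{drop_80_60:.1f}% MAE")
print(f"60% -> 40%: +{drop_60_40:.1f}% MAE")
```

With these toy numbers the second drop is the larger one, mirroring the qualitative pattern reported for the Ciao dataset.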
\section{Conclusion and future work}
\label{Conclusion and Future Work}
In this paper, we addressed the problem of modelling the temporal properties of human preferences in recommender systems. To tackle this problem, we proposed a novel latent factor model called Aspect-MF. Aspect-MF builds on CTFVSVD, a model that we proposed earlier to capture socially-influenced conditional preferences over feature values. In Aspect-MF, three major preference aspects were assumed to be subject to temporal drift: user and item biases, preferences over features, and preferences over feature values. We analysed the temporal behaviour of each of these preference aspects and their combinations, and also considered the robustness of Aspect-MF's combinations with respect to the shortage of training data.
In order to evaluate the model, we carried out extensive experiments on three popular recommender systems datasets. We considered the model errors in terms of the MAE and RMSE measures on all users and on cold-start users. We also performed statistical analyses on the observed performances to make sure that the differences in accuracy are significant, and hence do not happen by chance. The experiments revealed that on all three datasets, all combinations of Aspect-MF, for both measures, on all users and cold-start users, significantly outperformed TrustSVD, which had proven to be the most accurate static social recommendation model before CTFVSVD. The experiments also showed that most of Aspect-MF's combinations were significantly more accurate than CTFVSVD. In particular, we found that Aspect-MF with all dynamic aspects outperformed CTFVSVD on all three datasets on all users.
The analysis of the temporal behaviour of the preference aspects and their combinations on the three datasets showed that different datasets contain different temporal patterns and therefore require models with different dynamic aspects. This supports our component-based approach to modelling the basic preference aspects and their temporal properties. We also concluded that the dynamic models are more helpful in cases where there is enough training data to discern the temporal properties. In particular, we concluded that the models proposed in this paper are more successful in modelling all users, because more time-related data is available for all users than for cold-start users, and therefore the temporal characteristics are extracted more accurately. The analysis of the robustness of the models with respect to the shortage of training data also revealed that Aspect-MF was in general more robust than CTFVSVD and TrustSVD. The models were also more robust for all users than for cold-start users, because cold-start users are more sensitive to inaccurate predictions.
A direction that we would like to pursue in the future is explaining the resulting recommendations to users. Explaining recommendations is believed to improve transparency and to instill trust in users. So far we have pursued our main goal of improving the accuracy of the recommendations, and in this paper we showed how significant improvements can be achieved by taking the temporal aspects into consideration. As a next step, we are particularly interested in how the temporal properties of the trained models can be explained to users. Furthermore, the component-based structure followed in designing Aspect-MF is generally beneficial for extracting explanations.
\begin{acks}
\label{Acknowledgment}
We would like to acknowledge the SunCorp Group for partially funding this project. We would also like to thank the National eResearch Collaboration Tools and Resources (Nectar) for providing us with the necessary computational resources to carry out the experiments.
\end{acks}
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
In general, like-pairing correlations, such as the proton-proton ($pp$)
and neutron-neutron ($nn$) pairing usually adopted in understanding nuclear superfluidity, have an isovector spin-singlet ($T=1, J=0$) mode and manifest themselves in the nuclear odd-even mass staggering. They contribute to keeping nuclei spherical, because their $J =$ even couplings act against the deformation treated in the mean field.
On the other hand, neutron-proton ($np$) pairing correlations in the residual interaction are expected to play meaningful roles in the structure of $N \simeq Z$ nuclei and in the relevant nuclear electromagnetic (EM) and weak transitions, because protons and neutrons in these nuclei occupy the same (or nearby) orbitals, leading to maximal spatial overlap.
The $np$ pairing correlations have two different modes, viz. isoscalar (IS) spin-triplet ($T=0, J=1$) and isovector (IV) spin-singlet ($T=1, J=0$) \cite {Chen-Gos, Wolter, Goodman,simkovic03}. The IV $np$ pairing mode, whose spin-singlet character gives rise to anti-aligned pairs, can be investigated through the study of $T=0$ and $T=1$ excited states in even-even and odd-odd nuclei. The IS $np$ pairing mode, in contrast, has a spin-aligned pair structure. For example, the deuteron ground state as well as $np$-scattering phase shift analyses indicate strongly attractive $np$ pairing features due to the strong $^{3}S_{1}$ tensor force. Inside nuclei, these correlations are believed to contribute mainly to the mean field. But, even after being converted to a mean field, it is quite natural to conjecture that strong attractive $np$ pairing interactions should still remain in the residual interaction, although direct data implying the IS $np$ pairing inside nuclei are still controversial, even in $N \simeq Z$ nuclear structure studies.
Recently, with the advent of modern radioactive beam facilities producing $N \simeq Z$ nuclei, the competition or coexistence of the IV and IS $np$ pairing modes in the residual interaction of $N \simeq Z$ nuclei is emerging as an interesting topic in nuclear structure. Detailed reports on recent progress regarding $np$ pairing correlations in nuclear structure can be found in Refs. \cite{Sweden16,Brend16}. In particular, the IS spin-triplet part of the $np$ pairing has been argued to be very elusive compared to the IV spin-singlet part stemming from the like- and unlike-pairing correlations. Moreover, the deformation may also affect the IS and IV pairing correlations, or vice versa, because the IS $np$ pairing has $J=$ odd couplings inducing (non-spherical) deformation, contrary to the IV $np$ mode, which keeps $J=$ even couplings.
The importance of the $np$ pairing was discussed two decades ago in our early reports on single- and double-$\beta$ decay transitions \cite {Ch93,pan} within a spherical quasi-particle random phase approximation (QRPA) framework with a realistic two-body interaction given by the Brueckner $G$-matrix based on the CD Bonn potential. However, these works did not explicitly include the deformation, and the IS $np$ pairing was taken into account effectively by a renormalization of the IV $np$ pairing strength. A recent study \cite{Gamb15} of the relation between the deformation and the $np$ pairing correlations argued extensively, using a model combining shell model techniques and the mean field, that a large coexistence of the IV and IS modes may be found and that the deformation can induce a quenching of the IV $np$ pairing.
Furthermore, recent experimental data for the M1 spin transition
reported strong IV quenching in $N=Z$ $sd$-shell nuclei
\cite{Matsu15}, which are thought to be largely deformed. This
means that the IV quenching, giving rise to IS condensation,
may become an important ingredient for understanding the nuclear
deformation in those nuclei. In Refs. \cite{Ha17-2,Ha18}, we demonstrated
that such IS condensation is really feasible in those $sd$- and $pf$-shell
nuclei, in particular in the large deformation region.
Similar discussions were carried out by other authors \cite{Bai14,Nils04}, but the deformation was not explicitly taken into account. Ref. \cite{Bai14} argued,
by studying the GT data \cite{Fujita14}, that the IS $np$ pairing may affect the low-lying Gamow-Teller (GT) state near
the isobaric analogue resonance (IAR) for $pf$-shell $N=Z+2$ nuclei.
Ref. \cite{Nils04} performed a self-consistent $pn$-QRPA calculation based on a relativistic HFB approach, which clearly demonstrated the importance of the $np$ pairing, specifically the IV $np$ pairing, for a proper understanding of the GT peaks. However, a very recent calculation of the GT strength for Ni isotopes by $pn$-QRPA \cite{Sadiye18} is based only on the like-pairing correlations in the spherical QRPA.
The main interest of this work is how the $np$ pairing in the residual interaction interrelates with the deformation treated in the mean field in the Gamow-Teller (GT) transition, because the IS pairing may compete with the deformation features due to its coupling to non-spherical $J =$ odd states. Most studies have focused on $N = Z$ nuclei, where the $np$ pairing is expected to be larger than in other, $N \neq Z$, nuclei. However, as shown in a recent work \cite{Bertch11}, the nuclear structure of $N \simeq Z$ nuclei may also be more or less affected by the $np$ pairing correlations.
In our recent papers \cite{Ha15,Ha15-1}, we developed a deformed QRPA (DQRPA) approach by explicitly including the deformation \cite{Ha15}, in which all effects of the deformation and the like-pairing ($nn$ and $pp$) correlations are consistently treated at the deformed BCS (DBCS) and RPA stages. However, the $np$ pairing correlations were taken into account only at the DBCS stage, with a schematic interaction and a $G$-matrix type interaction \cite{Ha15-1}. Very recently, both effects were simultaneously considered and applied to the GT strength of $^{24,26}$Mg in Ref. \cite{Ha16} and of other $N=Z$ $sd$-shell nuclei in Ref. \cite{Ha17}. We argued that the $np$ pairing correlations as well as the deformation may affect the $sd$-shell nuclear structure and the corresponding GT response functions. Following these preceding papers, here we extend our applications to the GT strength of $pf$-shell nuclei in order to understand the interplay of the pairing correlations and the deformation in medium-heavy nuclei.
In this work, we also investigate how such IS condensation and deformation affect the GT strength distribution in $pf$-shell nuclei, because the roles of the IS and IV pairings in the deformed mean field and their effects on the GT strength distributions still remain to be discussed intensively in these medium-heavy nuclei. Our results are presented as follows. In Section II, a brief pedagogical explanation of the formalism is given. Numerical GT results for $^{56,58}$Ni and $^{62,64}$Ni are discussed in Sec. III. A summary and conclusions are given in Sec. IV.
\section{Theoretical Formalism}
The $np$ pairing correlations change the conventional quasi-particle concept, {\it i.e.}, a quasi-neutron (quasi-proton) composed of a neutron (proton) particle and its hole state, into quasi-particles 1 and 2, which may mix the properties of the quasi-proton and quasi-neutron. Here we briefly explain the formalism, DBCS and DQRPA, to be applied to the GT transition strength distributions of some Ni isotopes, in which we include all types of pairing as well as the deformation.
We start from the following DBCS transformation between creation and annihilation operators for real (bare) and quasi-particle states \cite{Ha15}
\begin{equation} \label{eq:HFB_1}
\left( \begin{array}{c} a_{1}^{\dagger} \\
a_{2}^{\dagger} \\
a_{\bar{1}} \\
a_{\bar{2}}
\end{array}\right)_{\alpha} =
\left( \begin{array}{cccc}
u_{1p} & u_{1n} & v_{1p} & v_{1n} \\
u_{2p} & u_{2n} & v_{2p} & v_{2n} \\
-v_{1p} & -v_{1n} & u_{1p} & u_{1n} \\
-v_{2p} & -v_{2n} & u_{2p} & u_{2n}
\end{array}\right)_{\alpha}
\left( \begin{array}{c}
c_{p}^{\dagger} \\
c_{n}^{\dagger} \\
c_{\bar{p}} \\
c_{\bar{n}}
\end{array}\right)_{\alpha} ~.
\end{equation}
Hereafter a Greek letter denotes a single-particle state (SPS) of a neutron or proton with projection $\Omega$ of the total angular momentum on the nuclear symmetry axis. The projection $\Omega$ is treated as the only good quantum number in the deformed basis.
We assume time reversal symmetry in the transformation coefficients and do not allow the mixing of different SPSs ($\alpha$ and $\beta$) into a quasi-particle in the deformed state.
In the spherical representation of Eq. (\ref{eq:HFB_1}), however, the quasi-particle states are mixtures of different particle states, because each deformed state (basis) is expanded as a linear combination of spherical states (see Fig. 1 in Ref. \cite{Ha15}). In this respect, the DBCS is another representation of the deformed HFB transformation in the spherical basis. If we discard the $np$ pairing, Eq. (\ref{eq:HFB_1}) reduces to the conventional BCS transformation in a deformed basis.
Another merit is that, by expanding all deformed wave functions constructed from the deformed harmonic oscillator basis into the spherical basis, we may exploit the Wigner-Eckart theorem for more complicated physical operators in the deformed states. Finally, using the transformation of Eq. (\ref{eq:HFB_1}), the following DBCS equation
was obtained
\begin{equation} \label{eq:hfbeq}
\left( \begin{array}{cccc} \epsilon_{p}-\lambda_{p} & 0 &
\Delta_{p {\bar p}} & \Delta_{p {\bar n}} \\
0 & \epsilon_{n}-\lambda_{n} & \Delta_{n
{\bar p}} & \Delta_{n {\bar n}} \\
\Delta_{p {\bar p}} &
\Delta_{p {\bar n}} & -\epsilon_{p} + \lambda_{p} & 0 \\
\Delta_{n {\bar p}} &
\Delta_{n {\bar n}} & 0 & -\epsilon_{n} + \lambda_{n}
\end{array}\right)_{\alpha}
\left( \begin{array}{c}
u_{\alpha'' p} \\ u_{\alpha'' n} \\ v_{\alpha'' p} \\
v_{\alpha'' n} \end{array}\right)_{\alpha}
=
E_{\alpha \alpha''}
\left( \begin{array}{c} u_{\alpha'' p} \\ u_{\alpha'' n} \\
v_{\alpha'' p} \\
v_{\alpha'' n} \end{array}\right)_{\alpha},
\end{equation}
where $E_{\alpha \alpha''}$ is the energy of a quasi-particle denoted as $\alpha''$(=1 or 2) in the $\alpha$ state.
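As a numerical illustration of Eq. (\ref{eq:hfbeq}), the $4\times 4$ quasi-particle matrix for a single deformed orbital can be diagonalized directly. The sketch below uses hypothetical energies and gaps (in MeV) and assumes a real symmetric matrix with $\Delta_{p\bar n}=\Delta_{n\bar p}$; it is not the fitted calculation of this work.

```python
import numpy as np

# Toy DBCS eigenvalue problem for one orbital alpha: hypothetical inputs only.
ep, en = 1.2, 0.8        # epsilon_p - lambda_p, epsilon_n - lambda_n
d_pp, d_nn = 1.7, 1.6    # like-particle pairing potentials
d_pn = 0.5               # np pairing potential (taken symmetric, d_pn = d_np)

h = np.array([
    [ ep,   0.0,  d_pp,  d_pn],
    [ 0.0,  en,   d_pn,  d_nn],
    [ d_pp, d_pn, -ep,   0.0 ],
    [ d_pn, d_nn, 0.0,   -en ],
])

E, W = np.linalg.eigh(h)   # spectrum is symmetric about zero (+/- pairs)
quasi = E[E > 0]           # quasi-particle energies E_{alpha 1}, E_{alpha 2}
print(quasi)               # the u, v amplitudes are the columns of W
```

The $\pm E$ symmetry of the spectrum is generic for this block structure; the two positive eigenvalues are the quasi-particle energies that enter the DQRPA matrices below.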
The pairing potentials in the DBCS Eq. (\ref{eq:hfbeq}) were calculated in the deformed basis by using the $G$-matrix obtained from the realistic CD Bonn potential for the nucleon-nucleon ($N$-$N$) interaction as follows
\begin{equation} \label{eq:gap}
\Delta_{{p \bar{p}_\alpha}} = \Delta_{\alpha p \bar{\alpha}p} = -
\sum_{\gamma} \Big[ \sum_{J, a, c } g_{\textrm{pp}} F_{\alpha a
\bar{\alpha} a}^{J0} F_{\gamma c \bar{\gamma} c}^{J0}
G(aacc,J,T=1)\Big] (u_{1p_{\gamma}}^* v_{1p_{\gamma}} +
u_{2p_{\gamma}}^* v_{2p_{\gamma}}) ~,
\end{equation}
\begin{eqnarray} \label{eq:gap_pn}
\Delta_{{p \bar{n}_\alpha}} = \Delta_{\alpha p \bar{\alpha}n} = &-&
\sum_{\gamma} \Bigg[ \Big[\sum_{J, a, c} g_{\textrm{np}} F_{\alpha a
\bar{\alpha} a}^{J0} F_{\gamma c \bar{\gamma} c}^{J0}
G(aacc,J,T=1)\Big] Re(u_{1n_{\gamma}}^* v_{1p_{\gamma}} +
u_{2n_{\gamma}}^* v_{2p_{\gamma}}) \\ \nonumber
&+& \Big[ \sum_{J, a, c}
g_{\textrm{np}} F_{\alpha a \bar{\alpha} a}^{J0} F_{\gamma c
\bar{\gamma} c}^{J0} iG(aacc,J,T=0)\Big] Im (u_{1n_{\gamma}}^*
v_{1p_{\gamma}} + u_{2n_{\gamma}}^* v_{2p_{\gamma}}) \Bigg]~,
\end{eqnarray}
where $F_{ \alpha a {\bar \alpha a
}}^{JK}=B_{a}^{\alpha}~B_{a}^{\alpha} ~{(-1)^{j_{a} -\Omega_{\alpha}}}~C^{JK}_{j_{a}
\Omega_{\alpha} j_{a}-\Omega_{\alpha}}(K=\Omega_{\alpha} - \Omega_{\alpha})$ was introduced to describe the $G$-matrix in the deformed basis with the expansion coefficient $B_{\alpha}$ calculated as \cite{Ha15}
\begin{equation} \label{eq:sps_ex}
B_{a}^{\alpha} = \sum_{N n_z \Sigma} C_{l \Lambda { 1 \over 2} \Sigma}^{j \Omega_{\alpha}}
A_{N n_z \Lambda}^{N_0 l}~b_{N n_z \Sigma} ~,~A_{N n_z \Lambda}^{N_0 l n_r} =<N_0 l \Lambda|N n_z \Lambda
>~.
\end{equation}
Here $K$ is the projection of the total angular momentum $J$ onto the $z$ axis; it is fixed to $K=0$ at the DBCS stage because we consider pairing of the quasi-particles in the $\alpha$ and ${\bar\alpha}$ states. $G(aacc,J,T)$ represents a two-body (pairwise) scattering matrix in the spherical basis, in which all possible scatterings of the nucleon pairs above the Fermi surface are taken into account.
In the present work, we have included all possible $J$ values with the $K=0$ projection.
$\Delta_{\alpha n \bar{\alpha}n}$ is obtained similarly to Eq. (\ref{eq:gap}) with $p$ replaced by $n$.
In order to renormalize the $G$-matrix due to the finite Hilbert space, the strength parameters $g_{\textrm{pp}}$, $g_{\textrm{nn}}$, and $g_{\textrm{np}}$ multiplying the $G$-matrix \cite{Ch93} were adjusted so that the pairing potentials $\Delta_{p {\bar p}}$ and $\Delta_{n {\bar n}}$ in Eq. (\ref{eq:gap}) reproduce the empirical pairing gaps $\Delta_{p}^{emp}$ and $\Delta_{n}^{emp}$, which were evaluated with a symmetric five-point mass formula for the neighboring nuclei \cite{Ha15-1} using empirical masses.
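For reference, a minimal sketch of one common form of the symmetric five-point mass formula is given below. Sign and phase conventions vary in the literature, and the masses used here are toy values chosen to exhibit pure odd-even staggering, not empirical masses.

```python
def pairing_gap_5pt(masses, N):
    """One common symmetric five-point estimate of the neutron pairing gap (MeV).

    `masses` maps neutron number -> atomic mass (MeV) at fixed Z.
    """
    return -(-1) ** N / 8.0 * (masses[N - 2] - 4 * masses[N - 1]
                               + 6 * masses[N] - 4 * masses[N + 1]
                               + masses[N + 2])

# Toy masses with a pure 1 MeV odd-even staggering on top of a flat trend:
# the five-point filter should recover a gap of exactly 1 MeV.
toy_masses = {N: (1.0 if N % 2 else 0.0) for N in range(26, 37)}
print(pairing_gap_5pt(toy_masses, 30))  # -> 1.0
```

The five-point difference cancels the smooth part of the mass surface and isolates the staggering, which is why it is a standard estimator for empirical gaps.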
The theoretical $np$ pairing gaps were calculated as \cite{Ch93,Bend00}
\begin{equation}
\delta_{np}^{th.} = - [ ( H_{gs}^{12} + E_1 + E_2 ) - ( H_{gs}^{np} + E_p + E_n)]~.
\end{equation}
Here $H_{gs}^{12}$ ($H_{gs}^{np}$) is the total DBCS ground state energy with (without) the $np$ pairing, and $E_1 + E_2$ ($E_p + E_n$) is the sum of the lowest two quasi-particle energies with (without) the $np$ pairing potential $\Delta_{n{\bar p}}$ in Eq. (\ref{eq:hfbeq}).
For the mean field energies $\epsilon_{p(n)}$ in Eq. (\ref{eq:hfbeq}) we exploited a deformed Woods-Saxon potential
\cite{cwi} with the universal parameter set. Following the same approach as used for the QRPA equation
in Ref. \cite{Suhonen}, our deformed QRPA (DQRPA) equation including the $np$ pairing
correlations is summarized as follows,
\begin{eqnarray}\label{qrpaeq}
&&\left( \begin{array}{cccccccc}
~ A_{\alpha \beta \gamma \delta}^{1111}(K) &~ A_{\alpha \beta \gamma \delta}^{1122}(K) &
~ A_{\alpha \beta \gamma \delta}^{1112}(K) &~ A_{\alpha \beta \gamma \delta}^{1121}(K) &
~ B_{\alpha \beta \gamma \delta}^{1111}(K) &~ B_{\alpha \beta \gamma \delta}^{1122}(K) &
~ B_{\alpha \beta \gamma \delta}^{1112}(K) &~ B_{\alpha \beta \gamma \delta}^{1121}(K) \\
~ A_{\alpha \beta \gamma \delta}^{2211}(K) &~ A_{\alpha \beta \gamma \delta}^{2222}(K) &
~ A_{\alpha \beta \gamma \delta}^{2212}(K) &~ A_{\alpha \beta \gamma \delta}^{2221}(K) &
~B_{\alpha \beta \gamma \delta}^{2211}(K) & ~B_{\alpha \beta \gamma \delta}^{2222}(K) &
~B_{\alpha \beta \gamma \delta}^{2212}(K) & ~B_{\alpha \beta \gamma \delta}^{2221}(K)\\
~A_{\alpha \beta \gamma \delta}^{1211}(K) & ~A_{\alpha \beta \gamma \delta}^{1222}(K) &
~A_{\alpha \beta \gamma \delta}^{1212}(K) & ~A_{\alpha \beta \gamma \delta}^{1221}(K) &
~B_{\alpha \beta \gamma \delta}^{1211}(K) & ~B_{\alpha \beta \gamma \delta}^{1222}(K) &
~B_{\alpha \beta \gamma \delta}^{1212}(K) & ~B_{\alpha \beta \gamma \delta}^{1221}(K)\\
~A_{\alpha \beta \gamma \delta}^{2111}(K) & ~A_{\alpha \beta \gamma \delta}^{2122}(K) &
~A_{\alpha \beta \gamma \delta}^{2112}(K) & ~A_{\alpha \beta \gamma \delta}^{2121}(K) &
~B_{\alpha \beta \gamma \delta}^{2111}(K) & ~B_{\alpha \beta \gamma \delta}^{2122}(K) &
~B_{\alpha \beta \gamma \delta}^{2112}(K) & ~B_{\alpha \beta \gamma \delta}^{2121}(K)\\
& & & & & & & \\
- B_{\alpha \beta \gamma \delta}^{1111}(K) & -B_{\alpha \beta \gamma \delta}^{1122}(K) &
-B_{\alpha \beta \gamma \delta}^{1112}(K) & -B_{\alpha \beta \gamma \delta}^{1121}(K) &
- A_{\alpha \beta \gamma \delta}^{1111}(K) & -A_{\alpha \beta \gamma \delta}^{1122}(K) &
-A_{\alpha \beta \gamma \delta}^{1112}(K) & -A_{\alpha \beta \gamma \delta}^{1121}(K)\\
- B_{\alpha \beta \gamma \delta}^{2211}(K) & -B_{\alpha \beta \gamma \delta}^{2222}(K) &
-B_{\alpha \beta \gamma \delta}^{2212}(K) & -B_{\alpha \beta \gamma \delta}^{2221}(K) &
- A_{\alpha \beta \gamma \delta}^{2211}(K) & -A_{\alpha \beta \gamma \delta}^{2222}(K) &
-A_{\alpha \beta \gamma \delta}^{2212}(K) & -A_{\alpha \beta \gamma \delta}^{2221}(K)\\
- B_{\alpha \beta \gamma \delta}^{1211}(K) & -B_{\alpha \beta \gamma \delta}^{1222}(K) &
-B_{\alpha \beta \gamma \delta}^{1212}(K) & -B_{\alpha \beta \gamma \delta}^{1221}(K) &
- A_{\alpha \beta \gamma \delta}^{1211}(K) & -A_{\alpha \beta \gamma \delta}^{1222}(K) &
-A_{\alpha \beta \gamma \delta}^{1212}(K) & -A_{\alpha \beta \gamma \delta}^{1221}(K) \\
- B_{\alpha \beta \gamma \delta}^{2111}(K) & -B_{\alpha \beta \gamma \delta}^{2122}(K) &
-B_{\alpha \beta \gamma \delta}^{2112}(K) & -B_{\alpha \beta \gamma \delta}^{2121}(K) &
- A_{\alpha \beta \gamma \delta}^{2111}(K) & -A_{\alpha \beta \gamma \delta}^{2122}(K) &
-A_{\alpha \beta \gamma \delta}^{2112}(K) & -A_{\alpha \beta \gamma \delta}^{2121}(K) \\
\end{array} \right)\\ \nonumber ~~&& \times
\left( \begin{array}{c} {\tilde X}_{(\gamma 1 \delta 1)K}^{m} \\ {\tilde X}_{(\gamma 2 \delta 2)K}^{m} \\
{\tilde X}_{(\gamma 1 \delta 2)K}^{m} \\ {\tilde X}_{(\gamma 2 \delta 1)K}^{m} \\ \\
{\tilde Y}_{(\gamma 1 \delta 1)K}^{m} \\ {\tilde Y}_{(\gamma 2 \delta 2)K}^{m} \\
{\tilde Y}_{(\gamma 1 \delta 2)K}^{m}\\{\tilde Y}_{(\gamma 2 \delta 1)K}^{m} \end{array} \right)
= \hbar {\Omega}_K^{m}
\left ( \begin{array}{c} {\tilde X}_{(\alpha 1 \beta 1)K}^{m} \\{\tilde X}_{(\alpha 2 \beta 2)K}^{m} \\
{\tilde X}_{(\alpha 1 \beta 2)K}^{m} \\ {\tilde X}_{(\alpha 2 \beta 1)K}^{m}\\ \\
{\tilde Y}_{(\alpha 1 \beta 1)K}^{m} \\ {\tilde Y}_{(\alpha 2 \beta 2)K}^{m} \\
{\tilde Y}_{(\alpha 1 \beta 2)K}^{m} \\ {\tilde Y}_{(\alpha 2 \beta 1)K}^{m} \end{array} \right) ~,
\end{eqnarray}
where the amplitudes
${\tilde X}^m_{(\alpha \alpha'' \beta \beta'')K}$ and ${\tilde Y}^m_{(\alpha \alpha'' \beta \beta'')K}$ in Eq. (\ref{qrpaeq}) stand for the forward- and backward-going amplitudes from the state ${\alpha \alpha''}$ to the state ${\beta \beta''}$ \cite{Ch93}.
Our DQRPA equation is very general because we include the deformation as well as all kinds of pairing correlations remaining in the residual interaction beyond the mean field.
If we switch off the $np$ pairing, all off-diagonal terms in the $A$ and $B$ matrices in Eq. (\ref{qrpaeq}) disappear, with the replacement of 1 and 2 by $p$ and $n$. The DQRPA equation then decouples into pp + nn + pn + np DQRPA equations \cite{saleh}. For charge-conserving (neutral current) reactions, the pp + nn DQRPA can describe the M1 spin or EM transitions within the same nuclear species, while the np + pn DQRPA describes the GT$(\pm)$ transitions in charge-exchange (charged current) reactions. It should be pointed out that the np DQRPA differs from the pn DQRPA because of the deformation. The $A$ and $B$ matrices in Eq. (\ref{qrpaeq}) are given by
\begin{eqnarray} \label{eq:mat_A}
A_{\alpha \beta \gamma \delta}^{\alpha'' \beta'' \gamma'' \delta''}(K) = && (E_{\alpha
\alpha''} + E_{\beta \beta''}) \delta_{\alpha \gamma} \delta_{\alpha'' \gamma''}
\delta_{\beta \delta} \delta_{\beta'' \delta''}
- \sigma_{\alpha \alpha'' \beta \beta''}\sigma_{\gamma \gamma'' \delta \delta''}\\ \nonumber
&\times&
\sum_{\alpha' \beta' \gamma' \delta'}
[-g_{pp} (u_{\alpha \alpha''\alpha'} u_{\beta \beta''\beta'} u_{\gamma \gamma''\gamma'} u_{\delta \delta''\delta'}
+v_{\alpha \alpha''\alpha'} v_{\beta \beta''\beta'} v_{\gamma \gamma''\gamma'} v_{\delta \delta''\delta'} )
~V_{\alpha \alpha' \beta \beta',~\gamma \gamma' \delta \delta'}
\\ \nonumber &-& g_{ph} (u_{\alpha \alpha''\alpha'} v_{\beta \beta''\beta'}u_{\gamma \gamma''\gamma'}
v_{\delta \delta''\delta'}
+v_{\alpha \alpha''\alpha'} u_{\beta \beta''\beta'}v_{\gamma \gamma''\gamma'} u_{\delta \delta''\delta'})
~V_{\alpha \alpha' \delta \delta',~\gamma \gamma' \beta \beta'}
\\ \nonumber &-& g_{ph} (u_{\alpha \alpha''\alpha'} v_{\beta \beta''\beta'}v_{\gamma \gamma''\gamma'}
u_{\delta \delta''\delta'}
+v_{\alpha \alpha''\alpha'} u_{\beta \beta''\beta'}u_{\gamma \gamma''\gamma'} v_{\delta \delta''\delta'})
~V_{\alpha \alpha' \gamma \gamma',~\delta \delta' \beta \beta' }],
\end{eqnarray}
\begin{eqnarray} \label{eq:mat_B}
B_{\alpha \beta \gamma \delta}^{\alpha'' \beta'' \gamma'' \delta''}(K) =
&-& \sigma_{\alpha \alpha'' \beta \beta''} \sigma_{\gamma \gamma'' \delta \delta''}
\\ \nonumber &\times&
\sum_{\alpha' \beta' \gamma' \delta'}
[g_{pp}
(u_{\alpha \alpha''\alpha'} u_{\beta \beta''\beta'}v_{\gamma \gamma''\gamma'} v_{\delta \delta''\delta'}
+v_{\alpha \alpha''\alpha'} v_{{\bar\beta} \beta''\beta'}u_{\gamma \gamma''\gamma'} u_{{\bar\delta} \delta''\delta'} )
~ V_{\alpha \alpha' \beta \beta',~\gamma \gamma' \delta \delta'}\\ \nonumber
&- & g_{ph} (u_{\alpha \alpha''\alpha'} v_{\beta \beta''\beta'}v_{\gamma \gamma''\gamma'}
u_{\delta \delta''\delta'}
+v_{\alpha \alpha''\alpha'} u_{\beta \beta''\beta'}u_{\gamma \gamma''\gamma'} v_{\delta \delta''\delta'})
~ V_{\alpha \alpha' \delta \delta',~\gamma \gamma' \beta \beta'}
\\ \nonumber &- & g_{ph} (u_{\alpha \alpha''\alpha'} v_{\beta \beta''\beta'}u_{\gamma \gamma''\gamma'}
v_{\delta \delta''\delta'}
+v_{\alpha \alpha''\alpha'} u_{\beta \beta''\beta'}v_{\gamma \gamma''\gamma'} u_{\delta \delta''\delta'})
~ V_{\alpha \alpha' \gamma \gamma',~\delta \delta' \beta \beta'}],
\end{eqnarray}
where the $u$ and $v$ coefficients are determined from the DBCS Eq. (\ref{eq:hfbeq}). The two-body interactions $V_{\alpha \beta,~\gamma \delta}$ and $V_{\alpha \delta,~\gamma \beta}$
are the particle-particle and particle-hole matrix elements of the residual $N$-$N$ interaction $V$, respectively, in the deformed state, which are detailed in Ref. \cite{Ha15-1}.
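The supermatrix in Eq. (\ref{qrpaeq}) has the generic RPA block form $\left(\begin{smallmatrix} A & B \\ -B & -A \end{smallmatrix}\right)$, whose eigenvalues come in $\pm\hbar\Omega$ pairs. The toy sketch below, with hypothetical $2\times2$ blocks rather than our $G$-matrix elements, illustrates how the physical phonon energies are extracted.

```python
import numpy as np

# Toy RPA supermatrix [[A, B], [-B, -A]] acting on (X, Y) amplitudes.
# A and B are symmetric; the numbers (in MeV) are hypothetical.
A = np.array([[5.0, 0.4],
              [0.4, 6.0]])
B = np.array([[0.3, 0.1],
              [0.1, 0.2]])

M = np.block([[A, B], [-B, -A]])
w, vecs = np.linalg.eig(M)           # non-symmetric: use the general solver
omega = np.sort(w.real[w.real > 1e-12])  # physical phonon energies hbar*Omega
print(omega)
```

For a stable ground state ($A \pm B$ positive definite) all eigenvalues are real, and only the positive branch corresponds to physical excitations; the negative branch duplicates it with $X$ and $Y$ interchanged.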
The GT transition amplitudes from the ground state of an initial (parent) nucleus
to an excited state of the daughter nucleus, {\it i.e.} the one-phonon state
$K^+$ in the final nucleus, are written as
\begin{eqnarray} \label{eq:phonon}
&&< K^+, m | {\hat {GT}}_{K }^- | ~QRPA > \\ \nonumber
&&= \sum_{\alpha \alpha''\rho_{\alpha} \beta \beta''\rho_{\beta}}{\cal N}_{\alpha \alpha''\rho_{\alpha}
\beta \beta''\rho_{\beta} }
< \alpha \alpha''p \rho_{\alpha}| \sigma_K | \beta \beta''n \rho_{\beta}>
[ u_{\alpha \alpha'' p} v_{\beta \beta'' n} X_{(\alpha \alpha''\beta \beta'')K}^{m} +
v_{\alpha \alpha'' p} u_{\beta \beta'' n} Y_{(\alpha \alpha'' \beta \beta'')K}^{m}], \\ \nonumber
&&< K^+, m | {\hat {GT}}_{K }^+ | ~QRPA > \\ \nonumber
&&= \sum_{\alpha \alpha'' \rho_{\alpha} \beta \beta''\rho_{\beta}}{\cal N}_{\alpha \alpha'' \beta \beta'' }
< \alpha \alpha''p \rho_{\alpha}| \sigma_K | \beta \beta''n \rho_{\beta}>
[ u_{\alpha \alpha'' p} v_{\beta \beta'' n} Y_{(\alpha \alpha'' \beta \beta'')K}^{m} +
v_{\alpha \alpha'' p} u_{\beta \beta'' n} X_{(\alpha \alpha'' \beta \beta'')K}^{m} ]~,
\end{eqnarray}
where $|~QRPA >$ denotes the correlated QRPA ground state in the intrinsic frame and
the normalization factor is given as $ {\cal N}_{\alpha \alpha'' \beta
\beta''} (J) = \sqrt{ 1 - \delta_{\alpha \beta} \delta_{\alpha'' \beta''} (-1)^{J + T} }/
(1 + \delta_{\alpha \beta} \delta_{\alpha'' \beta''}) $. The forward and backward amplitudes, $X^m_{(\alpha \alpha'' \beta \beta'')K}$ and $Y^m_{(\alpha \alpha'' \beta \beta'')K}$, are related to
${\tilde X}^m_{(\alpha \alpha'' \beta \beta'')K}=\sqrt2 \,\sigma_{\alpha \alpha'' \beta \beta''} X^m_{(\alpha \alpha'' \beta \beta'')K}$
and ${\tilde Y}^m_{(\alpha \alpha'' \beta \beta'')K}=\sqrt2 \,\sigma_{\alpha \alpha'' \beta \beta''} Y^m_{(\alpha \alpha'' \beta \beta'')K}$ in Eq. (\ref{qrpaeq}), where $\sigma_{\alpha \alpha'' \beta \beta''} = 1$ if $\alpha = \beta$ and $\alpha'' = \beta''$, and $\sigma_{\alpha \alpha'' \beta \beta'' } = \sqrt 2$ otherwise \cite{Ch93,Ha17}.
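The Ikeda sum rule quoted later in Table I can be checked in a simple limit of these amplitudes: with $X=1$, $Y=0$ (no RPA correlations), one pn pair per orbital, and the full spin sum $\sum_{p}|\langle p|\sigma|n\rangle|^2 = 3$ for identical orbitals, the difference $S_- - S_+$ reduces to $3(N-Z)$. The occupations $v^2$ below are hypothetical BCS values normalized to sum to $Z$ and $N$.

```python
import numpy as np

# Toy check of the Ikeda sum rule S(-) - S(+) = 3(N - Z) in the unperturbed
# (X=1, Y=0) two-quasi-particle limit. Occupations are hypothetical.
v2_p = np.array([0.95, 0.80, 0.20, 0.05])   # proton v^2, sums to Z = 2.0
v2_n = np.array([0.98, 0.90, 0.70, 0.42])   # neutron v^2, sums to N = 3.0

S_minus = np.sum(3.0 * (1.0 - v2_p) * v2_n)  # u_p^2 v_n^2 terms of GT(-)
S_plus  = np.sum(3.0 * v2_p * (1.0 - v2_n))  # v_p^2 u_n^2 terms of GT(+)

Z, N = v2_p.sum(), v2_n.sum()
print(S_minus - S_plus, 3.0 * (N - Z))       # both equal 3(N - Z)
```

Algebraically, $3\sum[(1-v_p^2)v_n^2 - v_p^2(1-v_n^2)] = 3\sum(v_n^2 - v_p^2) = 3(N-Z)$, independent of the individual occupations, which is why the sum rule is a stringent completeness test of the model space.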
The particle model space for all Ni isotopes extends up to $N = 5 \hbar \omega$ in the
deformed basis and up to $N = 10 \hbar \omega$ in the spherical
basis. In our previous papers \cite{Ha15,Ha15-1,Ha2013}, the SPSs obtained from
the deformed Woods-Saxon potential were shown to be sensitive to the
deformation parameter $\beta_2$, which causes shell evolution with the deformation. In this work, we allow a small variation, within 20\%, of the theoretical $\beta_2$ values in Table I to test the deformation dependence.
\section{Numerical results for the GT transition strength distribution for $^{56,58}$Ni and $^{62,64}$Ni}
\begin{table}
\caption[bb]{ Deformation parameter $\beta_2^{E2}$ from the
experimental E2 transition data \cite{Ram01} and theoretical values,
$\beta_2^{RMF}$ and $\beta_2^{FRDM}$, from the relativistic mean field (RMF) \cite{Lala99} and FRDM
\cite{Mol16} models for some Ni isotopes. Empirical pairing gaps evaluated from the five-point mass formula \cite{Ch93} are also shown.
The last column gives the Ikeda sum rule for the GT transition \cite{Ha15} as a percentage of $3(N-Z)$.
\\} \setlength{\tabcolsep}{2.0 mm}
\begin{tabular}{cccccccc}\hline
Nucleus & $|\beta_2^{E2}|$ & $\beta_2^{RMF}$&$\beta_2^{FRDM}$ & $\Delta_p^{\textrm{emp}}$ & $\Delta_n^{\textrm{emp}}$ & $\delta_{np}^{\textrm{emp}}$ & {GT($\%$)}
\\ \hline \hline
$^{56}$Ni & 0.173 &0. & 0. & 2.077 & 2.150 & 1.107 & 99.6 \\
$^{58}$Ni & 0.182 &-0.001 & 0. & 1.669 & 1.349 & 0.233 & 98.6 \\
$^{62}$Ni & 0.197 & 0.093 &0.107 & 1.747 & 1.639 & 0.465 & 99.1 \\
$^{64}$Ni & 0.179 &-0.091 &-0.094&1.747 & 1.602 & 0.454 & 99.2
\\ \hline
\end{tabular}
\label{tab:beta2}
\end{table}
To assess the possible effects of the $np$ pairing and the deformation on the GT transition, we first consider two Ni isotopes, $^{56,58}$Ni, which are known to be easily affected by the $np$ pairing correlations \cite{Saga16,Bai13}. $^{56}$Ni is expected to be almost spherical because it is doubly magic. In this work, however, we allow a small deformation, ranging from the spherical case ($\beta_2$ = 0.0) up to $\beta_2$ = 0.15, in order to study the interplay of the deformation and the $np$ pairing. We note that if an $\alpha$-cluster model is adopted for $^{56}$Ni, the ground state may be deformed \cite{Darai11}. Second, we calculate the GT strength distributions of $^{62,64}$Ni, which have more excess neutrons and finite deformation. Moreover, their $np$ pairing gaps are almost twice as large as that of $^{58}$Ni, as shown in Table \ref{tab:beta2}, where we list the
empirical pairing gaps, the deformation parameters $\beta_{2}$
deduced from E2 transition data together with theoretical estimates for the Ni isotopes, and the Ikeda sum rule results for the GT strength distributions.
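For reference, the Ikeda sum rule quoted in the last column of Table \ref{tab:beta2} takes the standard model-independent form
\begin{equation*}
S(\textrm{GT}^-) - S(\textrm{GT}^+) = \sum_f B_{fi}(\textrm{GT}^-) - \sum_f B_{fi}(\textrm{GT}^+) = 3(N-Z),
\end{equation*}
so the tabulated percentages measure the degree to which our model space exhausts this sum rule.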
\begin{figure}
\includegraphics[width=0.45\linewidth]{56_gtm_wonp}
\includegraphics[width=0.45\linewidth]{56_gtm_wnp}
\caption{(Color online) Gamow-Teller (GT) transition strength
distributions B(GT($-$)) of $^{56}$Ni.
Experimental data from the $^{56}$Ni(p,n) reaction in panel (a) are from Ref. \cite{Sasano11}.
Panels (b)-(d) on the left (right) hand side show results without (with) the $np$ pairing.
Our results are plotted against the excitation energy measured from the parent nucleus.
} \label{fig1}
\end{figure}
Recent QRPA calculations with particle-vibration coupling (PVC) based on the Skyrme interaction \cite{Niu14} showed that the PVC contribution may spread or redistribute the GT strength for the doubly magic nucleus as well as for the other Ni isotopes. However, since the PVC contribution originates from particle or hole propagation inside the nucleus, such contributions can be taken into account by the Brueckner $G$-matrix in the Brueckner HF (BHF) approach, which already incorporates the in-medium nucleon-nucleon interaction through the Bethe-Goldstone equation. In the following, we discuss our numerical results for the GT strength distributions of $^{56,58}$Ni and $^{62,64}$Ni.
Figure \ref{fig1} presents the GT strength distributions for the
$^{56}$Ni(p,n) reaction obtained with our DQRPA. The left (right) panels show
results without (with) the $np$
pairing correlations. In the left panels, larger deformation scatters the
distribution to somewhat higher energies because of the repulsive particle-hole ($p$-$h$) interaction. However, the two peaks
peculiar to the measured GT distribution are not reproduced well by
the deformation alone; in particular, the second, higher-energy peak is not strong enough to explain the data.
The right panels show the $np$ pairing effects, which push the
distribution to higher energies even without the deformation, as can be seen by comparing the left and right panels at the same deformation.
In contrast to the repulsive $p$-$h$ force induced by the deformation, the $np$ pairing is mainly attractive: it reduces the Fermi energy difference of protons and neutrons, $\Delta_f = \epsilon_f^p - \epsilon_f^n$, and consequently gives rise to high-lying GT transitions between deeply bound neutron and proton single-particle states \cite{Ha17}. As a result, the two peaks and their magnitudes appear explicitly once
the $np$ pairing is included.
\begin{figure}
\includegraphics[width=0.6\linewidth]{56_sum_gtm}
\caption{(Color online) Running sums of the GT($-$) strength
distributions in Figs.~\ref{fig1}(b)-(d) for $^{56}$Ni.
} \label{fig2}
\end{figure}
This feature becomes evident in the running sums in Fig. \ref{fig2}, if one notes the differences between the solid lines (with the $np$ pairing) and the dashed lines (without it). Thus the deformation merely scatters the strength distributions to higher energies through the repulsive $p$-$h$ interaction, whereas the $np$ pairing shifts the distribution to higher energies in a more concentrated form owing to its attractive interaction.
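Throughout, by the running sum we mean the cumulative strength below a given excitation energy,
\begin{equation*}
\Sigma(E_x) = \sum_{E_i \le E_x} B_i(\textrm{GT}^-),
\end{equation*}
where the sum runs over all calculated (or measured) GT states with excitation energies $E_i \le E_x$.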
\begin{figure}
\includegraphics[width=0.65\linewidth]{56_occu_a}
\includegraphics[width=0.65\linewidth]{56_occu_b}
\caption{(Color online) Occupation probabilities of neutrons and
protons in $^{56}$Ni as a function of the single-particle state energy
(SPSE) in the Nilsson basis \cite{Nilsson}. Left (right) panels are
with (without) the neutron-proton pairing, for $\beta_2=0.02$ ((a) and (b)) and $0.15$ ((c) and (d)), respectively.
} \label{fig3}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\linewidth]{56_gtm_is}
\includegraphics[width=0.45\linewidth]{56_sum_gtm_is}
\caption{(Color online) Isoscalar (IS) $np$ pairing effects on the GT($-$) strength distribution (left) and its running sum (right) for $^{56}$Ni with $\beta_2$ = 0.05. The left panels show results without the IS $np$ pairing (a) ($T=0*0$), with the normal IS $np$ pairing (b) ($T =0*1$), and with the enhanced IS $np$ pairing (c) ($T =0*3$), respectively. Panel (e) shows the corresponding running sums.} \label{fig4}
\end{figure}
Figure \ref{fig3} shows the change of the Fermi surface due to the deformation and the $np$ pairing correlations. Larger deformation produces more smearing, as shown in Figs.\ref{fig3} (c) and (d). But the $np$ pairing gives rise to even more effective smearing, leading to deeper Fermi energies than the deformation effect alone, as one sees by comparing the results in Figs.\ref{fig3} (a) and (c) with those in Figs.\ref{fig3} (b) and (d).
Recently, an IV quenching was reported in the $M1$ spin strength distributions of $N=Z$ $sd$-shell nuclei \cite{Matsu15}. Because this quenching may signal a condensation of the IS component of the $np$ pairing, we tested the IS pairing effects on the GT distribution \cite{Ha17-2,Ha18}. Figure \ref{fig4} reveals the effects of the enhanced IS condensation for a small deformation $\beta_2 =$ 0.05. The left panels explicitly show the shift of the GT strength distribution to higher energies with increasing IS pairing, {\it i.e.} the case without the IS pairing (a), with the normal IS pairing (c) used in the results of Fig. \ref{fig1}, and with the enhanced IS pairing (d), where the IV pairing is retained so as to isolate the IS effect.
We took the enhanced IS pairing factor to be 3.0, as argued in our previous papers \cite{Ha17-2,Ha18}. Because the IS pairing, as reflected in the $^3 S_1$ channel of the $np$ interaction, is more attractive than the IV coupling, the shift of the GT strength distributions by the $np$ pairing is mainly attributed to the IS coupling condensation. This trend is also found in the results of Fig. \ref{fig1}. The IS and IV $np$ effects also manifest themselves in the GT running sums of Fig.\ref{fig4} (e). Not only the IV effect but also the IS effect is shown to be salient in the GT strength of this $N=Z$ nucleus.
\begin{figure}
\includegraphics[width=0.45\linewidth]{58_gtm_wonp}
\includegraphics[width=0.45\linewidth]{58_gtm_wnp}
\caption{(Color online) GT strength distributions B(GT($-$)) for $^{58}$Ni. Experimental data obtained with a $^3$He beam in panel (a) are from
Ref. \cite{Fujita07}. The left (right) panels (b)-(d) show the
results without (with) the $np$ pairing correlations,
respectively.}\label{fig5}
\end{figure}
In order to study the $np$ pairing effects in an $N >Z$ nucleus, we
present the GT results for $^{58}$Ni in Fig. \ref{fig5}.
In Ref. \cite{Bai14}, the GT states of $pf$-shell $N = Z+2$ nuclei
were discussed extensively in comparison with the
experimental data \cite{Fujita14}. The authors argued
that the IS $np$ pairing could be a driving force creating low-lying GT states in those nuclei, because the IS $np$ pairing
induces only transitions with the same $j$ value near the Isobaric Analogue Resonance (IAR) owing to the isospin operator.
Examining the results in Fig.\ref{fig5} in detail, similar effects can be found in the low-lying GT states around the 12 MeV region, whose strength spills out to lower energies through the IS $np$ pairing contribution; however, the dependence on the deformation is larger than that on the $np$ pairing, and the strength due to the IS $np$ pairing is small compared to the main GT peak, as explained in Ref. \cite{Bai14}.
Compared to the results for $^{56}$Ni in Fig.\ref{fig1}, the $np$ pairing
does not show drastic effects, as seen from the results
without (left) and with (right) the $np$ pairing. This means that the
$np$ pairing effects become smaller with increasing $N-Z$.
However, this trend turns out not to hold for $^{62,64}$Ni, as shown later on. Figure \ref{fig6} presents the results for the GT(+) states together with their experimental data \cite{Cole06}; here the deformation effect is more significant than the $np$ pairing effect.
\begin{figure}
\includegraphics[width=0.45\linewidth]{58_gtp_wonp}
\includegraphics[width=0.45\linewidth]{58_gtp_wnp}
\caption{(Color online) Same as Fig. \ref{fig5}, but
for B(GT(+)) of $^{58}$Ni. Experimental data obtained with a triton ({\it t}) beam in
panel (a) are from Ref. \cite{Cole06}. } \label{fig6}
\end{figure}
\begin{figure}
\includegraphics[width=0.45\linewidth]{58_sum_gtm}
\includegraphics[width=0.45\linewidth]{58_sum_gtp}
\caption{(Color online) Running sums of the GT($-$)
and GT(+) strength distributions in Figs. \ref{fig5} and \ref{fig6}(b)-(d) for $^{58}$Ni, respectively. Here
we used the standard quenching factor ${(0.74)}^2$ in the theoretical calculations. The GT($-$) data are taken from the isospin decomposition of $(p,p')$ scattering data \cite{Fujita07}, while the GT(+) data are normalized to the lowest B(GT) calibrated by $\beta$-decay \cite{Cole06}. Therefore, these data do not satisfy the Ikeda sum rule (ISR) for GT strengths. }\label{fig7}
\end{figure}
For $^{58}$Ni, the deformation effect turned out to be more
important than the $np$ pairing correlations, as
confirmed by the running sums in Fig. \ref{fig7}. The differences between the colored solid and dot-dashed (dotted and dashed) curves due to the $np$ pairing correlations are smaller than those due to the deformation, in contrast to the $^{56}$Ni results in Fig. \ref{fig2}. Nevertheless, the $np$ pairing correlations turn out to be essential for properly explaining the GT running sums, as shown in Fig. \ref{fig7}.
\begin{figure}
\includegraphics[width=0.45\linewidth]{62_gtm}
\includegraphics[width=0.45\linewidth]{64_gtm}
\caption{(Color online) Same as Fig. \ref{fig5}, but for $^{62}$Ni (left) and $^{64}$Ni (right). Experimental data for $^{64}$Ni obtained with a $^3$He beam in the right panel are from
Ref. \cite{64data}. Results are shown with and without the $np$ pairing correlations.} \label{fig8}
\end{figure}
Figure \ref{fig8} provides the GT strength distributions for $^{62,64}$Ni, which show stronger $np$ pairing effects than those of $^{58}$Ni in Fig.\ref{fig5}. That is, the $np$ pairing explicitly separates the GT distribution into a low-lying part and a high-lying part adjacent to the GTGR position. Recalling that the $np$ pairing gaps of $^{62,64}$Ni are almost twice that of $^{58}$Ni, as shown in Table \ref{tab:beta2}, this separation is quite natural.
Moreover, the larger deformation of these nuclei scatters the distributions more than in $^{58}$Ni. Noting the shift of the GTGR position to the reasonable region around 11 MeV, which is consistent with the data in Ref. \cite{Fujita11}, the $np$ pairing effect is indispensable even for these $pf$-shell $N \neq Z$ nuclei, $^{62,64}$Ni. More experimental data for these nuclei would be helpful for drawing a more definite conclusion on the effect of the $np$ pairing correlations on the GT transition strength distributions.
Recently, many papers have discussed the tensor force effect on the GT strength \cite{Bai14,Alex06,Urata17,Bernard16}. The tensor force effect in Ref. \cite{Bai14}, which is based on the zero-range pairing force usually adopted in the Skyrme-Hartree-Fock (SHF) approach, shifts the high-lying GT state downward by a few MeV owing to its attractive character. However, this trend may change with the deformation for the following reasons. The angular momentum $J$ is not a good quantum number in the Nilsson basis, and is therefore split by the deformation. This means that several angular momentum components are mixed in a deformed SPS, which blurs a direct understanding of the role of the tensor force in a deformed mean field. The deformation also changes the level density around the Fermi level, leading to an increase or decrease of the pairing correlations depending on the Fermi surface, as argued in Refs. \cite{Alex06,Urata17}. For example, a recent calculation in Ref. \cite{Bernard16} showed that the tensor force can be attractive, but sometimes repulsive, along with the SPS evolution with the deformation.
In our DQRPA approach, the tensor force is explicitly taken into account in the residual interaction through the $G$-matrix calculated from the Bonn potential, which explicitly includes the tensor force. In the mean field, the tensor force is implicitly included through the phenomenological Woods-Saxon potential globally adjusted to nuclear data. However, in order to discuss the isolated tensor force effect on the GT strength distribution, the present approach would require a more detailed analysis of the tensor force effect on the mean field, as done in Ref. \cite{Alex06}, where a tensor-type potential is derived to reproduce the $G$-matrix calculation. This is beyond the scope of the present calculation and is left as a future project.
\section{Summary and conclusion}
In summary, in order to discuss the effects of the IS and IV $np$
pairing correlations and of the deformation, we calculated the GT
strength distributions of $^{56,58}$Ni and $^{62,64}$Ni, which are $N=Z$ and $N-Z=2$, 6, and 8 nuclei, respectively. The $np$ pairing effects turned out
to properly explain the GT strength, although the deformation was shown to be
another important property. In particular, the IS condensation part
played a meaningful role in explaining the GT strength distribution of the $N=Z$
nucleus $^{56}$Ni, whose GT strength distribution is shifted to somewhat higher energies by the reduction of the proton-neutron Fermi energy difference due to the attractive $np$ pairing. For $^{58}$Ni, the deformation was more influential than the $np$ pairing. For $^{62,64}$Ni, however, the situation is reversed because the $np$ pairing correlations are stronger than in $^{58}$Ni; namely, the $np$ pairing divides the GT strength distribution into a low- and a high-energy region.
Therefore, the deformation treated by a mean-field
approach can be balanced by the spherical property due to the
IV $np$ pairing coming from the unlike-nucleon $np$ pairing as well as
the like-nucleon pairing correlations. But the IS spin-triplet $np$ part, which contributes to the deformation
through its coupling to odd-$J$ states, may give rise to
more microscopic deformation features which cannot be included in
the deformed mean-field approach, and pushes the GT states to somewhat higher energies. All of the present results are based on a phenomenological Woods-Saxon potential, so more self-consistent approaches are desirable for further definite conclusions on the IS $np$ pairing correlations. Finally, the GT strength
distribution as well as the $M1$ spin transition strength are shown to
be useful for investigating the IS and IV pairing properties.
\section*{Acknowledgement}
This work was supported by the National Research Foundation of Korea (Grant Nos. NRF-2015R1D1A4A01020477, NRF-2015K2A9A1A06046598, NRF-2017R1E1A1A01074023).
\newpage
\section*{References}
\section{Introduction}
Since the pioneering work by Sinai \cite{Sin70} on dispersing billiards,
the dynamical structures and stochastic properties have been extensively studied
for chaotic billiards \cite{GO74, BS80, BSC90, BSC91, C99, C01, Ma04, SzV04,
CZ05a, CZ05b, C06, BG06, CM07, C08, CZ08, BCD11, SYZ13},
and also for abstract hyperbolic systems with or without singularities
\cite{Sin72, P77, KS86, P92, Sataev92, Y98, Y99, Sarig02, DL08, CD09, CZ09, DZ11, DZ13, DZ14}.
Among all the physical measures, the SRB measures - named after Sinai \cite{Sin72},
Ruelle \cite{Ru78} and Bowen \cite{Bow70, Bow75} -
are shown to display several levels of ergodic properties, including
the decay rate of correlations,
the large deviation principles and the central limit theorem, etc.
In this paper, we shall focus on the almost sure invariance principle (ASIP)
for a wide class of uniformly hyperbolic systems with singularities,
which preserve a mixing SRB measure.
The ASIP ensures that the partial sums of a random process can be approximated by
a Brownian motion with an almost surely negligible error.
More precisely, we say that
a zero-mean random process $\mathbf{X}=\{X_n\}_{n\ge 0}$ with finite second moments
satisfies an ASIP\footnote{
We may need to extend the partial sum process $\{\sum_{k=0}^{n-1} X_k\}_{n\ge 0}$
on a richer probability space without changing its distribution.
In the rest of this paper, we always assume this technical operation when we mention ASIP.
}
for an error exponent $\lambda\in [0, \frac12)$,
if there exists a Wiener process $W(\cdot)$ such that
\begin{equation}\label{ASIP0}
\left|\sum_{k=0}^{n-1} X_k - W(\sigma_n^2) \right|=\mathcal{O}(\sigma_n^{2\lambda}), \ \ \text{a.s.},
\end{equation}
where $\sigma_n^2={\mathbb E}\left( \sum_{k=0}^{n-1} X_k\right)^2$
is the variance of the $n$-th partial sum.
In particular, if $\sigma_n^2$ grows linearly in $n$ such that
$\sigma_n^2=n\sigma^2+\mathcal{O}(1)$ for some $\sigma\in [0,\infty)$,
it follows from \eqref{ASIP0} that
\begin{equation*}
\left|\sum_{k=0}^{n-1} X_k - \sigma W(n) \right|=\mathcal{O} (n^{\lambda}), \ \ \text{a.s.}.
\end{equation*}
The ASIP implies many other limit laws from statistics,
such as the almost sure central limit theorem,
the law of the iterated logarithm
and the weak invariance principle (see \cite{PhSt75} for the details).
There has been a great deal of work on the ASIP in probability theory; see for instance
\cite{PhSt75, BP79, E86, ShaoLu87, Shao93, Cuny11, Wu13, CunyMe15}.
In the context of stationary processes generated by bounded H\"older observables
over smooth dynamical systems with singularities,
the ASIP was first shown by Chernov \cite{C06} for Sinai dispersing billiards.
Later, scalar and vector-valued ASIPs were proved by Melbourne and Nicol \cite{MN05, MN09} for Young towers.
Using a purely spectral method, Gou\"{e}zel \cite{G10} extended the ASIP for stationary processes to a wide class of systems without assuming a Young tower structure.
Gou\"ezel \cite{G10} also provided a purely probabilistic condition,
which was used by Stenlund \cite{Sten14} to prove the ASIP for Sinai billiards with random scatterers.
The ultimate goal of our work is to establish the ASIP
for \emph{non-stationary} processes generated by \emph{unbounded} observables over a wide class of two-dimensional hyperbolic systems under the standard assumptions (\textbf{H1})-(\textbf{H5}) in Section~\ref{Sec:Assumptions}.
This class includes Anosov diffeomorphisms,
Sinai dispersing billiards and their perturbations (see Section~\ref{sec: app} for more details).
Compared to existing results of ASIP for bounded stationary processes,
our result is relatively new due to the following two features:
\begin{itemize}
\item[(1)] Low regularity: the question of how large a class of observables satisfies
the central limit theorem or other limit laws has been raised
by several researchers; see, e.g., the survey by Denker \cite{Den89}.
Such classes are sometimes much larger than the classes of bounded H\"older continuous or bounded variation functions.
In this paper, we only assume dynamical H\"{o}lder continuity of the observables, which may even be unbounded.
A direct application is the fluctuation of Lyapunov exponents,
for which the logarithm of the unstable Jacobian blows up near the singularities in billiard systems.
\item[(2)] Non-stationarity: time-dependent processes
arise in the dynamical Borel-Cantelli lemma and the shrinking target problem (see e.g. \cite{HV95, CK01, Fa06}).
Recently, Haydn, Nicol, T\"{o}r\"{o}k and Vaienti \cite{HNTV17} obtained the ASIP for the shrinking target problem for a class of expanding maps. Analogously, under some mild conditions,
we are able to apply our ASIP result
to the shrinking target problem for two-dimensional hyperbolic systems with singularities.
\end{itemize}
The method we use here is rather transparent and efficient.
We first construct a natural family of $\sigma$-algebras characterized by the singularities,
and establish its exponential $\alpha$-mixing property.
Extending the approach of Chernov \cite{C06} and
applying a martingale version of the ASIP by Shao \cite{Shao93},
we are then able to prove the ASIP for the random process
generated by a sequence of integrable dynamically H\"older observables.
A crucial assumption is that the process
satisfies the Marcinkiewicz-Zygmund type inequalities given by \eqref{M-Z}.
We emphasize that those observables could be unbounded,
and the growth of partial sum variances need not be linear.
Furthermore,
the error exponent $\lambda$ in ASIP of the form \eqref{ASIP0} only depends on
the constant $\kappa_2$ in \eqref{M-Z}.
This paper is organized as follows. In Section \ref{Sec: Assumptions and Results},
we introduce the standard assumptions for the uniformly hyperbolic systems
with singularities, and state our main results on the ASIP and other limit laws.
We recall several useful theorems in probability theory in Section~\ref{sec: prelim},
and prove our main theorem on the ASIP in Section \ref{sec:proof}.
In Section \ref{sec: app}, we summarize the validity of the ASIP for a wide class of
uniformly hyperbolic billiards, and discuss two practical processes related to the fluctuation of
ergodic averages and the shrinking target problem.
\section{Assumptions and Main Results}\label{Sec: Assumptions and Results}
\subsection{Assumptions}\label{Sec:Assumptions}
Let $T: M\to M$ be a piecewise $C^2$ diffeomorphism of
a two-dimensional compact Riemannian manifold $M$ with singularities $S_1$,
that is, for each connected component $\Omega\subset M\backslash S_1$,
the map $T: \Omega \to T(\Omega)\subset M$ is a $C^2$ diffeomorphism which
can be continuously extended to the closure of $\Omega$.
We denote by $S_{-1}:= T S_{1}$ the singularity set of the inverse map $T^{-1}$.
Let $d(\cdot, \cdot)$ denote the distance in $M$ induced by the Riemannian metric.
For any smooth curve $W\subset M$, we denote by $|W|$ its length,
and by $m_W$ the Lebesgue measure on $W$ induced by the Riemannian metric restricted to $W$.
We now make several specific assumptions on the system $T: M\to M$.
These assumptions are quite standard and have been made in many references \cite{C99,CD09,CM,CZ09}.
\\
\noindent(\textbf{H1}) \textbf{Uniform hyperbolicity of $T$}. There exist two families of cones
$C^u_x$ (unstable) and $C^s_x$ (stable) in the tangent spaces
${\mathcal T}_x M$, for all $x\in M$, and there exists a constant $\Lambda>1$, with the following properties:
\begin{itemize}
\setlength\itemsep{0em}
\item[(1)] $D_x T (C^u_x)\subset C^u_{ T x}$ and $D_x T
(C^s_x)\supset C^s_{ T x}$, wherever $D_x T $ exists.
\item[(2)] $\|D_x T(v)\|\geq \Lambda \|v\|$ for any
$v\in C_x^u$, and $\|D_xT^{-1}(v)\|\geq
\Lambda \|v\|$ for any $v\in C_x^s$.
\item[(3)] These families of cones are continuous on $M$
and the angle between $C^u_x$ and $C^s_x$ is uniformly
bounded away from zero.
\end{itemize}
We say that a smooth curve $W\subset M$ is an \emph{unstable curve} for $T$
if at every point $x \in W$ the tangent line
$\mathcal{T}_x W$ lies in the unstable cone $C^u_x$.
Furthermore, a curve $W\subset M$ is an \emph{unstable manifold} for $T$ if
$T^{-n}(W)$ is an unstable curve for all $n \geq 0$.
We can define stable curves and stable manifolds in a similar fashion.\\
\noindent (\textbf{H2}) \textbf{Singularities.}
The singularity set $S_{1}$ consists of a finite or countable union of smooth compact curves in $M$,
including the boundary $\partial M$.
We assume the following:
\begin{itemize}
\setlength\itemsep{0em}
\item[(1)] $\partial M$ is transversal to both stable and unstable cones.
\item[(2)] Every other smooth singularity curve in $S_1\setminus \partial M$ is a stable curve, and
every curve in $ S_1$ terminates either inside another curve of $ S_1$ or on $\partial M$.
\item[(3)] There exist $C>0$ and $\beta_0\in (0,1)$ such that for any $x\in M\backslash S_1$,
\begin{equation*}
\|D_xT\|\le C d(x, S_1)^{-\beta_0}.
\end{equation*}
\end{itemize}
Similar assumptions are made for $S_{-1}$. We set
$
S_{\pm n}=\bigcup_{k=0}^{n-1} T^{\mp k} S_{\pm 1}
$
for any $n\ge 1$; it is clear that
$S_{\pm n}$ is the singularity set of $T^{\pm n}$.
Furthermore, we denote
$
S_{\pm\infty}=\bigcup_{n=0}^{\infty} S_{\pm n}.
$
An unstable curve $W\subset M$ is said to be homogeneous
if for any $n\ge 0$,
$T^{-n} W$ is contained in a connected component of $M\setminus S_1$.
In other words, $W\cap S_{-\infty}=\emptyset$.
Similarly, we can define homogeneous stable curves.
\begin{defn}\label{separation time}
For every $x, y\in M$, define $\mathbf{s}_+(x,y)$, the
forward separation time for $x$ and $y$, to be the smallest integer
$n\geq 0$ such that $x$ and $y$ belong to distinct connected components of
$M\setminus S_n$.
Similarly we define the backward separation time $\mathbf{s}_-(x,y)$.
\end{defn}
\noindent(\textbf{H3}) \textbf{Regularity of smooth unstable curves}. We assume that there is a $T$-invariant
family $\mathcal{W}^u_T$ of unstable curves such that
\begin{enumerate}
\setlength\itemsep{0em}
\item[(1)] \textbf{Bounded curvature.} The curvature of any $W\in \mathcal{W}^{u}_T$ is uniformly bounded
from above by a positive constant $B$.
\item[(2)] \textbf{Distortion bounds.} There exist $\gamma_0\in (0,1)$ and $C_T>0$ such
that for any $W\in \mathcal{W}^{u}_T$ and any $x, y \in W$,
\begin{equation*}\label{distor10}
\left|\ln\mathcal{J}_W (x)-\ln \mathcal{J}_W (y)\right| \leq C_{T}\, d (x, y)^{\gamma_0},
\end{equation*}
where $\mathcal{J}_W (x)=|D_x T|_{\mathcal{T}_x W}|$ is the Jacobian
of $T$ at $x$ along the unstable curve $W$.
\item[(3)] { \textbf{Absolute continuity.}}
Let $W_1,W_2\in \mathcal{W}^{u}_T$ be two unstable curves close to each other. Denote
\begin{equation*}
W_i'=\{x\in W_i\colon
W^s(x)\cap W_{3-i}\neq\emptyset\}, \hspace{.5cm} i=1,2.
\end{equation*} The map
$\mathbf{h}\colon W_1'\rightarrow W_2'$ defined by sliding along stable
manifolds
is called the \textit{stable holonomy} map.
We assume that $\mathbf{h}_*m_{W_1'}$ is absolutely continuous with respect to $m_{W_2'}$.
Furthermore, there exist $C_T>0$ and $\vartheta_0\in (0,1)$ such that the Jacobian of $\mathbf{h}$ satisfies
\begin{equation*}\label{Jh}
|\ln\mathcal{J}\mathbf{h}(y)-\ln \mathcal{J}\mathbf{h}(x)| \leq C_T \vartheta_0^{\mathbf{s}_+(x,y)},
\ \ \ \text{for any}\ \ x, y\in W_1'.
\end{equation*}
\end{enumerate}
\noindent(\textbf{H4}) {\textbf{ SRB measure.}}
The map $T$ preserves an SRB probability measure $\mu$, that is, the conditional measure of $\mu$
on each unstable manifold $W^u$ is absolutely continuous with respect to $m_{W^u}$.
We further assume that $\mu$ is strongly mixing. \\
\noindent(\textbf{H5}) {\textbf{ One-step expansion.}}
Given an unstable curve $W\subset M$, we denote by $V_\alpha$
the connected components of $TW$, indexed by $\alpha$, and set $W_\alpha=T^{-1} V_\alpha$.
There is $q_0\in (0, 1]$ such that
\begin{equation*}
\liminf_{\delta\to 0}\
\sup_{W\colon |W|<\delta}\sum_{\alpha}
\left(\frac{|W|}{|V_{\alpha}|}\right)^{q_0} \frac{|W_\alpha|}{|W|}<1,
\label{step1}
\end{equation*}
where the supremum is taken over all unstable curves $W$ in $M$.
\subsection{Statement of the main results}
The main result in this paper is to prove the almost sure invariance principle
for the system $(M, T, \mu)$, which satisfies Assumptions \textbf{(H1)}-\textbf{(H5)},
with respect to the process
generated by a sequence of dynamically H\"older observables.
We first recall the definition of such functions.
\begin{defn} A measurable function $f: M\to {\mathbb R}$ is said to be forward dynamically H\"older continuous if
there exists $\vartheta\in (0, 1)$ such that
\begin{equation*}
|f|_{\vartheta}^+:=\sup\left\{\frac{|f(x)-f(y)|}{\vartheta^{\mathbf{s}_+(x,y)}}:\ x\ne y\ \text{lie
on a homogeneous unstable curve} \right\}<\infty,
\end{equation*}
where $\mathbf{s}_{+}(\cdot, \cdot)$ is the forward separation time given by Definition \ref{separation time}.
The constant $\vartheta$ is called the dynamically H\"older exponent of $f$, and is usually denoted by $\vartheta_f$.
We denote the space of such functions by $\mathcal{H}^+_\vartheta$,
and set $\mathcal{H}^+:=\cup_{\vartheta\in (0, 1)} \mathcal{H}^+_\vartheta$.
In a similar fashion, we define the space $\mathcal{H}_\vartheta^-$ and $\mathcal{H}^-$ of backward
dynamically H\"older continuous functions. Also, we denote
$\mathcal{H}_\vartheta:=\mathcal{H}^+_\vartheta\cap \mathcal{H}^-_\vartheta$, and
$\mathcal{H}:=\mathcal{H}^+\cap \mathcal{H}^-$.
\end{defn}
\begin{remark}
Note that any H\"older continuous function is automatically
dynamical H\"older continuous.
However, a dynamically H\"older function can be only piecewise continuous,
and it may not be bounded.
\end{remark}
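A concrete example, anticipating the billiard applications in Section~\ref{sec: app}, is the logarithm of the unstable Jacobian,
\begin{equation*}
f(x) = \ln \mathcal{J}^u(x) := \ln \left|D_x T|_{E^u_x}\right|,
\end{equation*}
which blows up near the singularity set $S_1$ in dispersing billiards and is therefore unbounded, yet is dynamically H\"older continuous; this is what makes the fluctuation of Lyapunov exponents accessible to our results.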
We need to assume certain integrability for the observables.
Given an $L^s$-integrable function $f$ on $M$ for some $s\ge 1$, we denote
${\mathbb E}(f)=\int f d\mu$ and $\|f\|_{L^s}={\mathbb E}(|f|^s)^{1/s}$.
For convenience, we shall use the following notations:
given two sequences $a_n$ and $b_n$ of non-negative numbers,
we write $a_n=o(b_n)$ if $\lim_{n\to\infty} a_n/b_n=0$;
we write $a_n=\mathcal{O}(b_n)$ or $a_n\ll b_n$ if $a_n\le Cb_n$ for some constant $C>0$,
which is independent of $n$;
and we denote $a_n\asymp b_n$ if $a_n\ll b_n$ and $a_n\gg b_n$.
We are now ready to state our main result.
\begin{theorem}\label{main}
Let $\mathbf{X}_{\textbf{f}}=\{X_n\}_{n\ge 0}:=\{f_n\circ T^n\}_{n\ge 0}$ be a random process generated by
a sequence $\textbf{f}=\{f_n\}_{n\ge 0}$ of functions, which satisfies the following conditions:
\begin{itemize}
\setlength\itemsep{0em}
\item[(1)] There are $\vartheta_\textbf{f}\in (0, 1)$ and $\beta_\textbf{f}\in [0, \infty)$
such that $f_n\in \mathcal{H}_{\vartheta_\textbf{f}}$ and
\begin{equation*}
|f_n|_{\vartheta_\textbf{f}}^+ + |f_n|_{\vartheta_\textbf{f}}^- \ll n^{\beta_\textbf{f}}.
\end{equation*}
\item[(2)] There is $p>4$ such that
$f_n\in L^{p}$ with ${\mathbb E}(f_n)=0$. Moreover, there are constants
$\kappa_p\ge \kappa_2>\frac{1}{4}$ such that
\begin{equation}\label{M-Z}
\sigma_n:=\left\|\sum_{k=0}^{n-1} X_k \right\|_{L^2} \gg n^{\kappa_2}, \ \
\text{and}\ \ \
\sup_{m\ge 0} \left\|\sum_{k=m}^{m+n-1} X_k \right\|_{L^p} \ll n^{\kappa_p}.
\end{equation}
\end{itemize}
Then the process $\mathbf{X}_{\textbf{f}}$ satisfies an ASIP for any error exponent
$\lambda\in \left( \max\{\frac14, \frac{1}{8\kappa_2}\}, \frac{1}{2}\right)$, that is,
there exists a Wiener process $W(\cdot)$ such that
\begin{equation}\label{ASIP1}
\left|\sum_{k=0}^{n-1} X_k - W(\sigma_n^2) \right|=\mathcal{O}(\sigma_n^{2\lambda}), \ \ \text{a.s.}.
\end{equation}
\end{theorem}
\vskip.1in
\begin{remark}
It is well known that a zero-mean independent process
$\mathbf{X}=\{X_n\}_{n\ge 0}$ with finite $s$-th moment (for some $s\ge 1$)
satisfies the Marcinkiewicz-Zygmund inequalities, i.e.,
$
\left\|\sum_{k=m}^{m+n-1} X_k \right\|_{L^s} \asymp
\left\| \left(\sum_{k=m}^{m+n-1} X_k^2 \right)^{\frac12}\right\|_{L^s}.
$
Such inequalities were later generalized to martingale difference sequences,
strongly mixing processes, etc. (see e.g. \cite{MZ37, Yo80}).
We note that the term $\left\| \left(\sum_{k=m}^{m+n-1} X_k^2 \right)^{\frac12}\right\|_{L^s}$ is of order $\sqrt n$
for stationary i.i.d. processes.
Due to the dependence and non-stationarity in our setting, there is no a priori information on
$\left\| \left(\sum_{k=m}^{m+n-1} X_k^2 \right)^{\frac12}\right\|_{L^s}$.
To this end, in terms of powers of $n$,
we directly impose the $2$nd moment lower bound and $p$-th moment upper bound
in \eqref{M-Z} for the partial sum $\sum_{k=m}^{m+n-1} X_k$.
\end{remark}
Condition (1) in Theorem~\ref{main} implies that every function $f_n$ is dynamically H\"older continuous
with a common exponent $\vartheta_\textbf{f}$, while the dynamically H\"older semi-norms of $f_n$ are allowed to
grow at a polynomial rate.
Condition (2) implies that the growth rate of partial sum variances $\sigma_n^2$
is of order between $n^{2\kappa_2}$ and $n^{2\kappa_p}$.
In particular, if $\kappa_2=\kappa_p=\frac12$, then
the growth is asymptotically linear, i.e., $\sigma_n^2\asymp n$.
We notice that the error exponent $\lambda$ in \eqref{ASIP1} does not depend
on the values of $\vartheta_\textbf{f}$, $\beta_\textbf{f}$ and $\kappa_p$,
and it can be chosen arbitrarily close to $\frac{1}{4}$ if $\kappa_2=\frac12$.
In the case when $p\in (2, 4]$ and $\kappa_2>\frac{1}{p}$,
our result still holds with $\lambda\in \left( \max\{\frac14, \frac{1}{2p\kappa_2}\}, \frac{1}{2}\right)$,
but the proof of a technical lemma (Lemma~\ref{lem: est Rj}) would then require more refined moment inequalities.
For simplicity, we only prove the case when $p>4$, which is sufficient for all of our applications.
\vskip.1in
Note that the ASIP is the strongest of these results: it implies many other limit laws, such as
the weak invariance principle,
the almost sure central limit theorem,
and the law of iterated logarithm (see e.g. \cite{PhSt75} for their proofs and more details).
\begin{theorem}
Let $\mathbf{X}_{\textbf{f}}=\{X_n\}_{n\ge 0}:=\{f_n\circ T^n\}_{n\ge 0}$ be the random process
satisfying the assumptions in Theorem~\ref{main}.
We have the following limit laws:
\begin{itemize}
\setlength\itemsep{0em}
\item[(1)] Weak Invariance Principle: for any $t\in [0, 1]$,
\begin{equation*}
\dfrac{1}{\sigma_n}\sum_{k=0}^{\lfloor nt\rfloor-1} f_k\circ T^k
\xrightarrow{\ \text{in distribution}\ } W(t), \ \ \ \text{as}\ n\to \infty,
\end{equation*}
where $W(\cdot)$ is a Wiener process.
\item[(2)] Almost Sure Central Limit Theorem:
we denote $S_n=\sum_{k=0}^{n-1} f_k\circ T^k$, and let $\delta_a$ be
the Dirac measure at $a\in {\mathbb R}$; then for $\mu$-almost every $x\in M$,
\begin{equation*}
\dfrac{1}{\log \sigma_n^2} \sum_{k=1}^{n} \frac{1}{\sigma_k^2} \delta_{S_k(x)/\sigma_k}
\xrightarrow{\ \text{in distribution}\ } N(0, 1),
\ \ \ \text{as}\ n\to \infty,
\end{equation*}
where $N(0, 1)$ is the standard normal distribution.
\item[(3)] Law of Iterated Logarithm: for $\mu$-almost every $x\in M$,
\begin{equation*}
\limsup_{n\to \infty} \dfrac{\sum_{k=0}^{n-1} f_k\circ T^k(x)}{\sqrt{2\sigma_n^2 \log\log \sigma_n^2}} =1.
\end{equation*}
\end{itemize}
\end{theorem}
\vskip.1in
\section{Preliminaries from Probability Theory}\label{sec: prelim}
In this section, we recall several useful results from probability theory.
Let $(M, \mu)$ be a standard probability space.
\begin{lemma}[Borel-Cantelli lemma]\label{lem: B-C}
If $\{E_n\}_{n\ge 1}$ is a sequence of events on $(M, \mu)$
such that $\sum_{n=1}^\infty \mu(E_n)<\infty$,
then $\mu\left( \cap_{n=1}^\infty \cup_{k\ge n} E_k \right)=0.$
\end{lemma}
We state a special case of a result of G\'al and Koksma (Theorem A1 in \cite{PhSt75}).
\begin{proposition}\label{Gal-Koksma}
Let $\{X_n\}_{n\ge 0}$ be a sequence of zero-mean random variables
with finite second moments. Suppose there is $\kappa>0$ such that
for any $m\ge 0$, $n\ge 1$,
\begin{equation*}
{\mathbb E}\left(\sum_{k=m}^{m+n-1} X_k\right)^2\ll (m+n)^\kappa - m^\kappa,
\end{equation*}
then for any $\delta>0$,
\begin{equation*}
\sum_{k=0}^{n-1} X_k =o\left(n^{\frac{\kappa}{2} + \delta} \right), \ \ \text{a.s.}.
\end{equation*}
\end{proposition}
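To make the role of the exponent in Proposition~\ref{Gal-Koksma} concrete, here is a small self-contained Python sketch (our own illustration, not part of the proofs): for i.i.d. centered $\pm 1$ steps the hypothesis holds with $\kappa=1$, so the conclusion predicts $S_n=o(n^{1/2+\delta})$ for every $\delta>0$; we check the much weaker bound $|S_n|\le 5\, n^{3/4}$ along one simulated path. The seed, the constant $5$, and the cutoff $n\ge 50$ are arbitrary choices.

```python
import random

def rademacher_walk(n_steps, seed=0):
    """Partial sums S_1, ..., S_N of i.i.d. +/-1 steps (zero mean, unit variance)."""
    rng = random.Random(seed)
    s, sums = 0, []
    for _ in range(n_steps):
        s += rng.choice((-1, 1))
        sums.append(s)
    return sums

# For i.i.d. centered unit-variance steps, E(S_{m+n} - S_m)^2 = n = (m+n)^1 - m^1,
# so the hypothesis of the proposition holds with kappa = 1, and the conclusion
# gives S_n = o(n^{1/2 + delta}) a.s. for every delta > 0.  We check delta = 1/4
# along one path with a generous constant.
sums = rademacher_walk(2000)
ratios = [abs(s) / (n + 1) ** 0.75 for n, s in enumerate(sums) if n + 1 >= 50]
assert max(ratios) < 5.0
```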
Let $\mathfrak{F}$ and $\mathfrak{G}$ be two $\sigma$-algebras on the space $(M, \mu)$.
\begin{defn}
The $\alpha$-mixing coefficient between $\mathfrak{F}$ and $\mathfrak{G}$ is given by
\begin{equation}\label{alpha coeff}
\alpha(\mathfrak{F}, \mathfrak{G}):=\sup_{A\in \mathfrak{F}} \sup_{B\in \mathfrak{G}}
\left|\mu(A\cap B)-\mu(A)\mu(B) \right|.
\end{equation}
\end{defn}
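The supremum in \eqref{alpha coeff} can be evaluated by brute force on finite examples. The following Python sketch (illustrative only; the function names are ours) computes $\alpha(\mathfrak{F}, \mathfrak{G})$ for $\sigma$-algebras generated by finite partitions of a finite probability space, where every event is a union of partition cells; two independent coin flips give $\alpha=0$.

```python
from itertools import chain, combinations

def alpha(partition_F, partition_G, prob):
    """Brute-force alpha-mixing coefficient between the sigma-algebras generated
    by two finite partitions of a finite sample space.  `prob` maps each sample
    point to its probability; every event is a union of partition cells."""
    def events(partition):
        cells = [frozenset(c) for c in partition]
        for r in range(len(cells) + 1):
            for combo in combinations(cells, r):
                yield frozenset(chain.from_iterable(combo))
    def mu(event):
        return sum(prob[x] for x in event)
    return max(
        abs(mu(A & B) - mu(A) * mu(B))
        for A in events(partition_F)
        for B in events(partition_G)
    )

# Two independent fair coin flips: sample points are pairs (first, second).
omega = [(i, j) for i in (0, 1) for j in (0, 1)]
prob = {x: 0.25 for x in omega}
F = [[x for x in omega if x[0] == i] for i in (0, 1)]  # generated by the first flip
G = [[x for x in omega if x[1] == j] for j in (0, 1)]  # generated by the second flip
assert abs(alpha(F, G, prob)) < 1e-12          # independence gives alpha = 0
assert abs(alpha(F, F, prob) - 0.25) < 1e-12   # e.g. A = B = {first flip = 0}
```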
Note that $\alpha(\mathfrak{F}, \mathfrak{G})\le 2$. We have the following covariance inequality.
\begin{lemma}[Lemma 7.2.1 in \cite{PhSt75}]\label{lem: cov inequality}
Let $s_1, s_2$ and $s_3$ be positive real numbers such that $1/s_1+1/s_2+1/s_3=1$.
For any $X\in L^{s_1}(M, \mathfrak{F}, \mu)$ and any $Y\in L^{s_2}(M, \mathfrak{G}, \mu)$,
\begin{equation*}
\left|{\mathbb E}(XY)-{\mathbb E}(X){\mathbb E}(Y)\right|\le 10 \alpha(\mathfrak{F}, \mathfrak{G})^{\frac{1}{s_3}} \|X\|_{L^{s_1}} \|Y\|_{L^{s_2}}.
\end{equation*}
\end{lemma}
\begin{defn}\label{def mds}
$\{(\xi_j, \mathfrak{G}_j)\}_{j\ge 1}$ is called a martingale difference sequence if
\begin{itemize}
\setlength\itemsep{0em}
\item[(1)] $\mathfrak{G}_j$ is an increasing sequence of $\sigma$-algebras on $(M, \mu)$;
\item[(2)] Each $\xi_j$ is $L^1$-integrable and $\mathfrak{G}_j$-measurable;
\item[(3)] ${\mathbb E}(\xi_j|\mathfrak{G}_{j-1})=0$ for any $j\ge 2$.
\end{itemize}
\end{defn}
Here is a basic identity for a martingale difference sequence $\{(\xi_j, \mathfrak{G}_j)\}_{j\ge 1}$:
\begin{equation}\label{martingale diff identity}
{\mathbb E}(X\xi_j)=0,
\end{equation}
for any $\mathfrak{G}_{j-1}$-measurable random variable $X$, as long as $X\xi_j\in L^1$.
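The identity follows by conditioning: ${\mathbb E}(X\xi_j)={\mathbb E}\left(X\,{\mathbb E}(\xi_j|\mathfrak{G}_{j-1})\right)=0$. As a sanity check, the following Python sketch (illustrative only) verifies it by exact enumeration for a toy martingale difference $\xi_3=\eta_3\eta_2$ built from three independent fair signs.

```python
from itertools import product

# All 8 equally likely sign sequences (eta_1, eta_2, eta_3) of independent fair signs.
points = list(product((-1, 1), repeat=3))
E = lambda f: sum(f(e) for e in points) / len(points)

# xi_3 = eta_3 * eta_2 is a martingale difference w.r.t. G_j = sigma(eta_1, ..., eta_j):
# E(xi_3 | G_2) = eta_2 * E(eta_3) = 0.
xi3 = lambda e: e[2] * e[1]
# X = eta_1 + eta_1 * eta_2 depends only on the past, hence is G_2-measurable.
X = lambda e: e[0] + e[0] * e[1]

assert E(xi3) == 0                         # zero mean
assert E(lambda e: X(e) * xi3(e)) == 0     # the identity E(X xi_j) = 0
```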
We shall need the following almost sure invariance principle by Shao \cite{Shao93}
for martingale difference sequences (in the $L^4$-integrable case).
\begin{proposition}[\cite{Shao93}]\label{Shao ASIP}
Let $\{(\xi_j, \mathfrak{G}_j)\}_{j\ge 1}$ be an $L^4$-integrable martingale difference sequence.
Put $b_r^2={\mathbb E}\left( \sum_{j=1}^r \xi_j \right)^2$. Assume that there exists a sequence
$\{a_r\}_{r\ge 1}$ of non-decreasing positive numbers with $\lim_{r\to\infty} a_r=\infty$ such that
\begin{eqnarray}
& & \sum_{j=1}^r \left[ {\mathbb E}\left(\xi_j^2\left| \mathfrak{G}_{j-1} \right. \right) - {\mathbb E}\xi_j^2 \right] = o(a_r), \ a.s. \label{Shao Cond1} \\
& & \sum_{j=1}^\infty a_j^{-2} {\mathbb E}|\xi_j|^{4} <\infty. \label{Shao Cond2}
\end{eqnarray}
Then $\{\xi_j\}_{j\ge 1}$ satisfies an ASIP of the following form:
there exists a Wiener process $W(\cdot)$ such that
\begin{equation*}
\left|\sum_{j=1}^r \xi_j - W(b_r^2)\right|
=o\left( \left( a_r \left( \left|\log(b_r^2/a_r)\right| + \log\log a_r \right) \right)^{1/2} \right), \ a.s.
\end{equation*}
\end{proposition}
\section{Proof of Theorem~\ref{main}}\label{sec:proof}
We shall prove our main theorem as follows.
Firstly,
we construct a natural family $\mathfrak{F}$
of $\sigma$-algebras on $(M, \mu)$, and show that this family is exponentially $\alpha$-mixing.
Secondly,
we introduce blocks and approximate the sequence $\textbf{f}$ by
conditional expectation over a special sub-family of $\mathfrak{F}$ on each block.
Furthermore,
we divide the partial sum of $\mathbf{X}_\textbf{f}$ into a major part $\sum_{j=1}^{r(n)-1} Y_j$
and other negligible parts.
Thirdly,
we establish the martingale difference
representation $\{\xi_j\}_{j\ge 1}$ for the process $\{Y_j\}_{j\ge 1}$,
and obtain several preliminary norm estimates.
Fourthly,
we prove a technical lemma on Condition~\eqref{Shao Cond1}
and show an ASIP for the martingale difference sequence $\{\xi_j\}_{j\ge 1}$.
Finally,
we prove the ASIP for the original sequence $\textbf{f}$.
\subsection{The strong mixing property}
We first recall the exponential decay of correlations for the system $ (M, T, \mu)$
for bounded dynamically H\"older observables, which was proven in \cite{CZ09}
by using the coupling lemma (see e.g. \cite{CM, CD09}).
\begin{proposition}[\cite{CZ09}]\label{exp decay}
There exist $C_0>0$ and $\vartheta_0\in (0, 1)$ such that
for any pair of functions $f\in \mathcal{H}_{\vartheta_f}^+\cap L^\infty(\mu)$ and $g\in \mathcal{H}_{\vartheta_g}^-\cap L^\infty(\mu)$
and $n\ge 0$,
\begin{equation*}
\left| {\mathbb E}(f\cdot g\circ T^n) -{\mathbb E}(f){\mathbb E}(g) \right| \le C_{f, g} \theta_{f, g}^n,
\end{equation*}
where $\theta_{f, g}=\max\{\vartheta_0, \vartheta_f^{1/4}, \vartheta_g^{1/4}\}<1$, and
\begin{equation*}
C_{f, g}=C_0\left(\|f\|_{L^\infty} \|g\|_{L^\infty} + \|f\|_{L^\infty} |g|_{\vartheta_g}^- + \|g\|_{L^\infty} |f|_{\vartheta_f}^+ \right).
\end{equation*}
\end{proposition}
We then introduce the following natural family of $\sigma$-algebras for the system $T: M\to M$.
Recall that $S_{\pm n}$ is the singularity set of $T^{\pm n}$ for $n\ge 1$.
Let $\xi_0:=\{M\}$ be the trivial partition of $M$,
and denote by $\xi_{\pm n}$ the partition of $M$ into connected components of
$M\backslash T^{\mp(n-1)} S_{\pm 1}$ for $n\ge 1$.
Further, let
$$
\xi_m^n:=\xi_m\vee \dots \vee \xi_n
$$
for all $-\infty\le m\le n\le \infty$.
By Assumption (\textbf{H2}), $\xi_0^\infty$ is the partition of $M$ into maximal unstable manifolds,
and $\xi_{-\infty}^0$ is that into maximal stable manifolds.
Also, $\mu(\partial \xi_m^n)=0$ by Assumption (\textbf{H4}), where
$\partial \xi_m^n$ is the set of boundary curves for components in $\xi^n_m$.
Let $\mathfrak{F}_m^n$ be the Borel $\sigma$-algebra generated by the partition $\xi_m^n$.
Notice that
$\mathfrak{F}_{-\infty}^\infty$ coincides with the $\sigma$-algebra of all measurable subsets in $M$.
We denote by $\mathfrak{F}:=\{\mathfrak{F}_m^n\}_{-\infty\le m\le n\le \infty}$ the family of those $\sigma$-algebras.
\begin{proposition}\label{prop: alpha mixing}
The family $\mathfrak{F}$ is $\alpha$-mixing with an exponential rate, i.e.,
there exist $C_0>0$ and $\vartheta_0\in (0, 1)$ (which are the same as in Proposition \ref{exp decay}) such that
\begin{equation*}
\sup_{k\in {\mathbb Z}} \alpha(\mathfrak{F}_{-\infty}^k, \mathfrak{F}_{k+n}^\infty)\le C_0\vartheta_0^n,
\end{equation*}
where the definition of $\alpha(\cdot, \cdot)$ is given by \eqref{alpha coeff}.
\end{proposition}
\begin{proof} By the fact that $T^{-k}\xi_m^n=\xi_{m+k}^{n+k}$ and the invariance of $\mu$, it suffices to show that
\begin{equation*}
\alpha(\mathfrak{F}_{-\infty}^0, \mathfrak{F}_{n}^\infty)=\sup_{A\in \mathfrak{F}_{-\infty}^0} \sup_{B\in \mathfrak{F}_{n}^\infty}
\left|\mu(A\cap B)-\mu(A)\mu(B) \right| \le C_0\vartheta_0^n.
\end{equation*}
Since $A\in \mathfrak{F}_{-\infty}^0$ is a union of some maximal stable manifolds, we have that ${\boldsymbol{1}}_A\in \mathcal{H}^-$
such that $\|{\boldsymbol{1}}_A\|_{L^\infty}=1$ and $|{\boldsymbol{1}}_A|_{\vartheta}^-=0$ for any $\vartheta\in (0, 1)$.
Similarly, $B\in \mathfrak{F}_{n}^\infty$ implies that $T^{-n}(B)\in \mathfrak{F}^\infty_0$
is a union of some maximal unstable manifolds,
and thus ${\boldsymbol{1}}_{T^{-n}B}\in \mathcal{H}^+$ such that $\|{\boldsymbol{1}}_{T^{-n}B}\|_{L^\infty}=1$ and
$|{\boldsymbol{1}}_{T^{-n}B}|_{\vartheta}^+=0$ for any $\vartheta\in (0, 1)$.
Therefore, by Proposition~\ref{exp decay}, for any $A\in \mathfrak{F}_{-\infty}^0$ and $B\in \mathfrak{F}_{n}^\infty$,
\begin{equation*}
\left|\mu(A\cap B)-\mu(A)\mu(B) \right|=
\left| {\mathbb E}({\boldsymbol{1}}_{T^{-n}B}\cdot {\boldsymbol{1}}_A\circ T^n) -{\mathbb E}({\boldsymbol{1}}_{T^{-n}B}) {\mathbb E}({\boldsymbol{1}}_A ) \right|
\le C_0\vartheta_0^n.
\end{equation*}
This completes the proof of Proposition~\ref{prop: alpha mixing}.
\end{proof}
\subsection{Blocks and approximations}
Let $\textbf{f}=\{f_n\}_{n\ge 0}$ be a sequence of functions satisfying the assumptions of Theorem~\ref{main}.
From now on, we fix an error exponent $\lambda\in \left(\max\{\frac14, \frac{1}{8\kappa_2}\}, \frac{1}{2}\right)$,
and choose a sufficiently small constant $\epsilon>0$ such that
\begin{equation}\label{choose eps}
2\epsilon \kappa_p + \frac{1}{4-\epsilon} < 2\kappa_2 \lambda,
\ \ \text{and} \ \ \frac{\epsilon\kappa_p}{\kappa_2}<4\lambda-1.
\end{equation}
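Such an $\epsilon$ exists: as $\epsilon\to 0$ the left-hand sides tend to $\frac14$ and $0$, which are strictly below $2\kappa_2\lambda$ and $4\lambda-1$ respectively, since $\lambda>\max\{\frac14, \frac{1}{8\kappa_2}\}$. For instance, the following Python sketch (the parameter values $\kappa_2=\kappa_p=\frac12$ and $\lambda=0.3$ are hypothetical choices for illustration) verifies \eqref{choose eps} numerically.

```python
def eps_ok(eps, kappa2, kappap, lam):
    """Check the two smallness conditions on eps from (choose eps)."""
    c1 = 2 * eps * kappap + 1 / (4 - eps) < 2 * kappa2 * lam
    c2 = eps * kappap / kappa2 < 4 * lam - 1
    return c1 and c2

# Example: kappa_2 = kappa_p = 1/2 (asymptotically linear variance growth) and
# lambda = 0.3, inside (max{1/4, 1/(8*kappa_2)}, 1/2) = (1/4, 1/2).
assert eps_ok(0.01, 0.5, 0.5, 0.3)
assert not eps_ok(0.5, 0.5, 0.5, 0.3)   # eps too large
```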
We partition the time interval $[0, \infty)$ into a sequence of consecutive blocks
$\Delta_j=[\tau_j, \tau_{j +1})$ for $j\ge 1$, where $\tau_j=\sum_{i=0}^{j-1} \lceil i^\epsilon\rceil$.
Note that the block $\Delta_j$ is of length $\lceil j^\epsilon\rceil$,
and $\tau_j\asymp j^{1+\epsilon}$.
For convenience, we set $\tau_0=-1$.
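The block construction is easy to compute explicitly. The following Python sketch (illustrative only; the value $\epsilon=0.5$ is an arbitrary choice) builds $\tau_j$ and confirms that consecutive blocks tile $[0,\infty)$ with lengths $\lceil j^\epsilon\rceil$ and that $\tau_j\asymp j^{1+\epsilon}$.

```python
import math

def tau(j, eps):
    """tau_j = sum_{i=0}^{j-1} ceil(i^eps), so that Delta_j = [tau_j, tau_{j+1})
    has length ceil(j^eps)."""
    return sum(math.ceil(i ** eps) for i in range(j))

eps = 0.5
# Consecutive blocks tile [0, infinity) with the prescribed lengths.
assert all(tau(j + 1, eps) - tau(j, eps) == math.ceil(j ** eps) for j in range(1, 12))
# tau_j grows like j^{1+eps}: the ratio stays bounded away from 0 and infinity.
ratios = [tau(j, eps) / j ** (1 + eps) for j in (10, 60, 110, 160)]
assert all(0.3 < rho < 3 for rho in ratios)
```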
For any $k\in \Delta_j$,
we define the approximated function of $f_k$ by
\begin{equation}\label{g_k}
g_k={\mathbb E}\left(f_k\left|\mathfrak{F}^{\lceil 0.2 j^\epsilon\rceil}_{-\lceil 0.2 j^\epsilon\rceil}\right.\right).
\end{equation}
It is clear that ${\mathbb E}(g_k)=0$.
Since the separation times are adapted to the natural family $\mathfrak{F}$ of $\sigma$-algebras,
we have the following uniform $L^\infty$-bounds on the difference sequence $\{(f_k-g_k)\}_{k\ge 0}$.
\begin{lemma}\label{lem: convergence}
$
\sup_{k\in \Delta_j} \|f_k-g_k\|_{L^\infty}\ll \vartheta_\textbf{f}^{0.1 j^\epsilon}.
$
\end{lemma}
\begin{proof} Note that $k\le \tau_{j+1}\asymp j^{1+\epsilon}$ for any $k\in \Delta_j$.
For any atom $A$ of the partition $\xi^{\lceil 0.2 j^\epsilon\rceil}_{-\lceil 0.2 j^\epsilon\rceil}$,
and any two points $x, y\in A$,
there is a point $z\in A$ such that $x$ and $z$ belong to one unstable curve, and
$y$ and $z$ belong to one stable curve. It follows that
$\mathbf{s}_+(x, z)>\lceil 0.2 j^\epsilon\rceil$ and $\mathbf{s}_-(y, z)>\lceil 0.2 j^\epsilon\rceil$, and thus
by Condition (1) of Theorem~\ref{main},
{\allowdisplaybreaks
\begin{eqnarray*}
|f_k(x)-f_k(y)| &\le& |f_k(x)-f_k(z)|+|f_k(y)-f_k(z)| \\
&\le & (|f_k|^+_{\vartheta_\textbf{f}}+|f_k|^-_{\vartheta_\textbf{f}}) \ \vartheta_\textbf{f}^{\lceil 0.2 j^\epsilon\rceil} \\
&\ll & k^{\beta_\textbf{f}} \vartheta_\textbf{f}^{\lceil 0.2 j^\epsilon\rceil}
\le j^{(1+\epsilon)\beta_\textbf{f}} \vartheta_\textbf{f}^{\lceil 0.2 j^\epsilon\rceil} \ll \vartheta_\textbf{f}^{0.1 j^\epsilon}.
\end{eqnarray*}
}
Hence for any $k\in \Delta_j$, letting $A$ be the atom that contains $x$, we have
\begin{eqnarray*}
|f_k(x)-g_k(x)|
&=&\left|f_k(x)-\frac{1}{\mu(A)} \int_A f_k(y) d\mu(y)\right| \\
&\le &\frac{1}{\mu(A)} \int_A \left|f_k(x)-f_k(y) \right| d\mu(y)
\ll \ \vartheta_\textbf{f}^{0.1 j^\epsilon}.
\end{eqnarray*}
The proof of Lemma~\ref{lem: convergence} is complete.
\end{proof}
For any $n\ge 0$, there is a unique $r(n)\ge 1$ such that $n\in \Delta_{r(n)}$.
Note that $r(n)\asymp n^{\frac{1}{1+\epsilon}}$.
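In computations, the index $r(n)$ can be located by a binary search over the precomputed sequence $\tau_j$. The following Python sketch (illustrative only, again with the arbitrary choice $\epsilon=0.5$) checks that every $n$ falls into exactly one block $\Delta_{r(n)}=[\tau_{r(n)}, \tau_{r(n)+1})$.

```python
import bisect
import math

eps = 0.5
J = 400
taus = [0] * (J + 1)        # taus[j] = tau_j for j = 1..J (tau_1 = 0; taus[0] unused)
for j in range(2, J + 1):
    taus[j] = taus[j - 1] + math.ceil((j - 1) ** eps)

def r(n):
    """The unique index r(n) >= 1 with n in Delta_{r(n)} = [tau_{r(n)}, tau_{r(n)+1})."""
    return bisect.bisect_right(taus, n, lo=1) - 1

# Every time n < tau_J lies in exactly its own block.
for n in range(taus[J]):
    j = r(n)
    assert taus[j] <= n < taus[j + 1]
```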
We now decompose the partial sum of the process $\mathbf{X}_\textbf{f}=\{X_n\}_{n\ge 0}=
\{f_n\circ T^n\}_{n\ge 0}$
as follows:
\begin{eqnarray}\label{sum decomposition}
\sum_{k=0}^{n-1} X_k &=&
\sum_{j=1}^{r(n)-1} \left(\sum_{k\in \Delta_j} g_k\circ T^k\right)
+ \sum_{k=0}^{\tau_{r(n)}-1} (f_k-g_k)\circ T^k
+\sum_{k=\tau_{r(n)}}^{n-1} X_k \nonumber \\
&=: & \sum_{j=1}^{r(n)-1} Y_j + U_n+ V_n.
\end{eqnarray}
It turns out that the major contribution to the ASIP comes from $\sum_{j=1}^{r(n)-1} Y_j$,
while the remaining terms are negligible.
\begin{lemma}\label{lem: remainder}
Let $U_n$ and $V_n$ be given by \eqref{sum decomposition}. Then
\begin{itemize}
\setlength\itemsep{0em}
\item[(i)] $\|U_n\|_{L^p}=\mathcal{O}(1)$, and $|U_n|=\mathcal{O}(1)$, a.s..
\item[(ii)] $\|V_n\|_{L^p}=\mathcal{O}\left(n^{\epsilon \kappa_p}\right)$,
and $|V_n|=\mathcal{O}\left(n^{2\kappa_2\lambda}\right)$, a.s..
\end{itemize}
\end{lemma}
\begin{proof} (i) Note that $U_n=\sum_{j=1}^{r(n)-1} \sum_{k\in \Delta_j} (f_k-g_k)\circ T^k$.
By Lemma~\ref{lem: convergence} and Minkowski's inequality, we have
\begin{eqnarray*}
\|U_n\|_{L^p}
\le \sum_{j=1}^\infty \sum_{k\in \Delta_j} \| (f_k-g_k)\circ T^k \|_{L^p}
&\le & \sum_{j=1}^\infty \sum_{k\in \Delta_j} \| f_k-g_k \|_{L^p} \\
&\ll & \sum_{j=1}^\infty \lceil j^\epsilon \rceil \ \vartheta_\textbf{f}^{0.1 j^\epsilon} < \infty.
\end{eqnarray*}
Moreover, $\sum_{j=1}^\infty \sum_{k\in \Delta_j} \| (f_k-g_k)\circ T^k \|_{L^p} <\infty$ implies that
$\sum_{j=1}^\infty \sum_{k\in \Delta_j} | (f_k-g_k)\circ T^k |<\infty$ a.s.,
and thus $|U_n|=\mathcal{O}(1)$ a.s..
(ii) By \eqref{M-Z}, we obtain
\begin{equation*}
\|V_n\|_{L^p}=\left\|\sum_{k=\tau_{r(n)}}^{n-1} X_k \right\|_{L^p}
\ll \left(n-\tau_{r(n)}\right)^{\kappa_p} \ll \left(r(n)^\epsilon\right)^{\kappa_p}
\ll n^{\epsilon \kappa_p}.
\end{equation*}
Moreover, by Markov's inequality and \eqref{choose eps},
\begin{eqnarray*}
\mu\{ |V_n|\ge n^{2 \kappa_2 \lambda} \}\le n^{-2p\kappa_2\lambda}\ {\mathbb E}|V_n|^p
\ll n^{p(-2\kappa_2\lambda + \epsilon \kappa_p)} \ll n^{-\frac{p}{4-\epsilon}},
\end{eqnarray*}
and hence $\sum_{n=1}^\infty \mu\{ |V_n|\ge n^{2 \kappa_2 \lambda} \}<\infty$.
By the Borel-Cantelli lemma (Lemma~\ref{lem: B-C}),
we get
$
\mu\left( \bigcap_{k=1}^\infty \bigcup_{n\ge k} \left\{ |V_n|\ge n^{2 \kappa_2 \lambda} \right\} \right)=0.
$
In other words,
$|V_n|\ll n^{2\kappa_2\lambda}$, a.s..
\end{proof}
\subsection{Martingale representation for $\{Y_j\}_{j\ge 1}$ }
In this subsection, we introduce a martingale representation for
the random process $\{Y_j\}_{j\ge 1}$ as defined in \eqref{sum decomposition}.
Such a representation is given by Lemma 7.4.1 in \cite{PhSt75};
in our context, however, we obtain better norm estimates.
We first establish the following preliminary estimates for $Y_j$.
\begin{lemma}\label{lem: est Y}
For any $j\ge 1$, the random variable $Y_j$ is $\mathfrak{F}_{\tau_{j-1}}^{\tau_{j+2}}$-measurable
such that ${\mathbb E} Y_j=0$ and $\|Y_j\|_{L^p}\ll j^{\epsilon \kappa_p}$.
Furthermore, $\|\sum_{j=1}^r Y_j\|_{L^2}\gg r^{\kappa_2}$.
\end{lemma}
\begin{proof}
By \eqref{g_k} and \eqref{sum decomposition}, we have for any $j\ge 1$,
\begin{equation*}
Y_j=\sum_{k\in \Delta_j} g_k\circ T^k
=\sum_{k\in \Delta_j} {\mathbb E}\left(f_k\left|\mathfrak{F}^{\lceil 0.2 j^\epsilon\rceil}_{-\lceil 0.2 j^\epsilon\rceil}\right.\right)\circ T^k
=\sum_{k\in \Delta_j} {\mathbb E}\left(f_k\circ T^k\left|\mathfrak{F}^{k+\lceil 0.2 j^\epsilon\rceil}_{k-\lceil 0.2 j^\epsilon\rceil}\right.\right)
\end{equation*}
is $\mathfrak{F}_{\tau_j-\lceil 0.2 j^\epsilon\rceil}^{\tau_{j+1}+\lceil 0.2 j^\epsilon\rceil}$-measurable,
and thus $\mathfrak{F}_{\tau_{j-1}}^{\tau_{j+2}}$-measurable.
It is clear that ${\mathbb E} Y_j=0$ since each $f_k$ is of zero mean.
Moreover, by Lemma~\ref{lem: convergence},
\begin{equation*}
\left\| Y_j-\sum_{k\in \Delta_j} X_k\right\|_{L^\infty}
\le \sum_{k\in \Delta_j} \left\| f_k-g_k\right\|_{L^\infty}
\ll \lceil j^\epsilon\rceil \vartheta_\textbf{f}^{0.1 j^\epsilon}
\le \sup_{j\ge 1} \lceil j^\epsilon\rceil \vartheta_\textbf{f}^{0.1 j^\epsilon} <\infty.
\end{equation*}
Therefore, by \eqref{M-Z},
\begin{eqnarray*}
\|Y_j\|_{L^p}
\le \left\|\sum_{k\in \Delta_j} X_k\right\|_{L^p}
+ \left\| Y_j-\sum_{k\in \Delta_j} X_k\right\|_{L^\infty}
\ll \lceil j^\epsilon\rceil^{\kappa_p} +\mathcal{O}(1) \ll j^{\epsilon \kappa_p}.
\end{eqnarray*}
Furthermore,
\begin{eqnarray*}
\left\|\sum_{j=1}^r Y_j \right\|_{L^2}
&\ge & \left\|\sum_{k=0}^{\tau_{r+1}-1} X_k\right\|_{L^2}
-\sum_{j=1}^r \left\| Y_j-\sum_{k\in \Delta_j} X_k\right\|_{L^\infty} \\
&\gg & \tau_{r+1}^{\kappa_2} - \sum_{j=1}^\infty \lceil j^\epsilon\rceil \vartheta_\textbf{f}^{0.1 j^\epsilon} \\
&\gg & (r+1)^{\kappa_2(1+\epsilon)}- \mathcal{O}(1)\gg r^{\kappa_2}.
\end{eqnarray*}
\end{proof}
We now denote by $\mathfrak{G}_j$ the $\sigma$-algebra generated by $Y_1, Y_2, \dots, Y_j$,
and it is immediate from Lemma \ref{lem: est Y} that $\mathfrak{G}_j\subset \mathfrak{F}_{-1}^{\tau_{j+2}}$.
We also set $\mathfrak{G}_0:=\{\emptyset, M\}$ to be the trivial $\sigma$-algebra.
\begin{lemma}\label{lem: martingale repn}
For any $j\ge 1$, we set $\xi_j:=Y_j-u_j+u_{j+1}$, where $u_j\in L^{4}$ is given by
\begin{equation}\label{def uj}
u_j:=\sum_{k=0}^\infty {\mathbb E}\left(Y_{j+k}\left| \mathfrak{G}_{j-1}\right. \right).
\end{equation}
Then $\{(\xi_j, \mathfrak{G}_j)\}_{j\ge 1}$ is a martingale difference sequence.
Moreover, ${\mathbb E} u_j={\mathbb E} \xi_j=0$ and
\begin{equation*}
\|u_j\|_{L^{4}}\ll j^{\epsilon \kappa_p}, \ \ \text{and} \ \ \|\xi_j\|_{L^{4}}\ll j^{\epsilon \kappa_p}.
\end{equation*}
\end{lemma}
\begin{proof} We first show that each $u_j$, given by \eqref{def uj}, is well-defined as an $L^{4}$ function.
Denote for short $v_{jk}:={\mathbb E}\left(Y_{j+k}\left| \mathfrak{G}_{j-1}\right. \right)$, which is $\mathfrak{G}_{j-1}$-measurable.
Then
\begin{eqnarray}\label{Ev_jk}
{\mathbb E}|v_{jk}|^{4}={\mathbb E}\left( v_{jk} \cdot v_{jk}^3 \right)
= {\mathbb E}\left( {\mathbb E}\left(Y_{j+k}\left| \mathfrak{G}_{j-1}\right. \right) \cdot v_{jk}^3 \right)
= {\mathbb E}\left( Y_{j+k} \cdot v_{jk}^3 \right).
\end{eqnarray}
By Lemma~\ref{lem: est Y}, $Y_{j+k}$ is $\mathfrak{F}^{\tau_{j+k+2}}_{\tau_{j+k-1}}$-measurable,
and also $\mathfrak{G}_{j-1}\subset \mathfrak{F}_{-1}^{\tau_{j+1}}$.
We choose $s_1=p$, $s_2=\frac{4}{3}$ and $s_3=\frac{4p}{p-4}$,
and apply Lemma~\ref{lem: cov inequality} to the last term of \eqref{Ev_jk},
\begin{eqnarray*}
{\mathbb E}|v_{jk}|^{4}
&\le & 10 \alpha(\mathfrak{F}^{\tau_{j+k+2}}_{\tau_{j+k-1}}, \mathfrak{F}_{-1}^{\tau_{j+1}})^{\frac{1}{s_3}}
\left\|Y_{j+k} \right\|_{L^{s_1}} \left\| v_{jk}^3 \right\|_{L^{s_2}} \\
&=& 10 \alpha(\mathfrak{F}^{\tau_{j+k+2}}_{\tau_{j+k-1}}, \mathfrak{F}_{-1}^{\tau_{j+1}})^{\frac{1}{s_3}}
\left\|Y_{j+k} \right\|_{L^{p}} \left[{\mathbb E}|v_{jk}|^{4}\right]^{\frac{3}{4}}.
\end{eqnarray*}
Dividing both sides by $\left[{\mathbb E}|v_{jk}|^{4}\right]^{\frac{3}{4}}$,
and then using Proposition~\ref{prop: alpha mixing} and Lemma~\ref{lem: est Y},
we have that for any $j\ge 1$,
\begin{eqnarray*}
\|v_{jk}\|_{L^{4}}
&\le & 10 \alpha(\mathfrak{F}^{\infty}_{\tau_{j+k-1}}, \mathfrak{F}_{-\infty}^{\tau_{j+1}})^{\frac{1}{s_3}} \left\|Y_{j+k} \right\|_{L^{p}} \\
&\ll &
\begin{cases}
\left( j+k \right)^{\epsilon \kappa_p}, \ & \ 0\le k<3, \\
\left( j+k \right)^{\epsilon \kappa_p} \vartheta_0^{\frac{\lceil (j+k-2)^\epsilon \rceil}{s_3}}, \ & \ k\ge 3,
\end{cases}\\
&\ll &
\begin{cases}
j^{\epsilon \kappa_p}, \ & \ 0\le k<3, \\
\vartheta_0^{\frac{(k-2)^\epsilon}{2s_3}}, \ & \ k\ge 3.
\end{cases}
\end{eqnarray*}
Therefore, for any $j\ge 1$,
\begin{equation*}
\sum_{k=0}^\infty \left\|v_{jk}\right\|_{L^{4}}=
\sum_{k=0}^2 \left\|v_{jk}\right\|_{L^{4}} + \sum_{k=3}^\infty \left\|v_{jk}\right\|_{L^{4}}
\ll 3 j^{\epsilon \kappa_p} + \sum_{k=3}^\infty \vartheta_0^{\frac{(k-2)^\epsilon}{2s_3}}
\ll j^{\epsilon \kappa_p},
\end{equation*}
which implies that $u_j=\sum_{k=0}^\infty v_{jk}$ is well-defined in $L^{4}$,
and $\|u_j\|_{L^{4}}\ll j^{\epsilon \kappa_p}$.
By the formula $\xi_j:=Y_j-u_j+u_{j+1}$,
it is easy to check that $\{(\xi_j, \mathfrak{G}_j)\}_{j\ge 1}$ is a martingale difference sequence (see Definition~\ref{def mds}).
Moreover,
\begin{equation*}
\|\xi_j\|_{L^{4}}\le \|Y_j\|_{L^{p}} +\|u_j\|_{L^{4}}+\|u_{j+1}\|_{L^{4}} \ll j^{\epsilon \kappa_p}.
\end{equation*}
The proof of Lemma~\ref{lem: martingale repn} is complete.
\end{proof}
The following lemma shows that $\sum_{j=1}^{r} Y_j$ is well approximated by
$\sum_{j=1}^{r} \xi_j$.
\begin{lemma}\label{lem: xi remainder} We have the following estimates:
\begin{equation*}
\left\|\sum_{j=1}^{r} \left(Y_j-\xi_j \right) \right\|_{L^4}=\mathcal{O}\left(r^{\epsilon\kappa_p}\right),
\ \ \text{and} \ \
\left|\sum_{j=1}^{r} \left(Y_j-\xi_j \right) \right|=\mathcal{O}\left(r^{2\kappa_2\lambda}\right), \ a.s..
\end{equation*}
\end{lemma}
\begin{proof} By Lemma~\ref{lem: martingale repn}, we have
$\sum_{j=1}^{r} \left(Y_j-\xi_j \right)=u_1-u_{r+1}$,
and thus
\begin{equation*}
\left\|\sum_{j=1}^{r} \left(Y_j-\xi_j \right) \right\|_{L^4}=\left\|u_1-u_{r+1} \right\|_{L^4}
\ll 1+(r+1)^{\epsilon\kappa_p} \ll r^{\epsilon\kappa_p}.
\end{equation*}
Moreover, by Markov's inequality and \eqref{choose eps},
\begin{eqnarray*}
\mu\left\{ \left|\sum_{j=1}^{r} \left(Y_j-\xi_j \right) \right|\ge r^{2 \kappa_2 \lambda} \right\}
\le r^{-8\kappa_2\lambda}\ {\mathbb E}\left| \sum_{j=1}^{r} \left(Y_j-\xi_j \right) \right|^4
&\ll & r^{4(-2\kappa_2\lambda + \epsilon \kappa_p)} \\
&\ll & r^{-\frac{4}{4-\epsilon}},
\end{eqnarray*}
and hence
$\sum_{r=1}^\infty \mu\left\{ \left|\sum_{j=1}^{r} \left(Y_j-\xi_j \right) \right|\ge r^{2 \kappa_2 \lambda} \right\}<\infty$.
By the Borel-Cantelli lemma (Lemma~\ref{lem: B-C}),
we get
$\left|\sum_{j=1}^{r} \left(Y_j-\xi_j \right) \right|\ll r^{2\kappa_2\lambda}$,\ a.s..
\end{proof}
According to Lemma~\ref{lem: remainder} and Lemma~\ref{lem: xi remainder},
we shall focus on proving ASIP for the process $\{\xi_j\}_{j\ge 1}$.
\subsection{ASIP for $\{\xi_j\}_{j\ge 1}$ }
In this subsection, we shall use Proposition~\ref{Shao ASIP} to
prove a version of ASIP for the martingale difference sequence $\{(\xi_j, \mathfrak{G}_j)\}_{j\ge 1}$.
We first need a technical lemma with the following almost sure estimate.
\begin{lemma}\label{lem: est Rj}
$\sum_{j=1}^r \left[ {\mathbb E}\left(\xi_j^2\left| \mathfrak{G}_{j-1} \right. \right) - {\mathbb E}\xi_j^2 \right]= o(r^{4\kappa_2 \lambda})$, \ a.s.
\end{lemma}
\begin{proof}
We denote
$R_j:={\mathbb E}\left(\xi_j^2\left| \mathfrak{G}_{j-1} \right. \right) - {\mathbb E}\xi_j^2={\mathbb E}\left(\xi_j^2 - {\mathbb E}\xi_j^2\left| \mathfrak{G}_{j-1} \right. \right)$,
and note that $R_j$ is $\mathfrak{G}_{j-1}$-measurable and ${\mathbb E} R_j=0$.
Moreover, by Lemma~\ref{lem: martingale repn} and Jensen's inequality,
\begin{equation*}
\|R_j\|_{L^2}\le \sqrt{{\mathbb E}\left(\xi_j^2 - {\mathbb E}\xi_j^2\right)^2}\le \|\xi_j\|_{L^4}^2 \ll j^{2\epsilon \kappa_p}.
\end{equation*}
If for any $m\ge 1$ and any $r\ge 1$,
\begin{equation}\label{est Rj}
{\mathbb E}\left( \sum_{j=m}^{m+r-1} R_j \right)^2 \ll (m+r)^{1+8\epsilon \kappa_p} - m^{1+8\epsilon \kappa_p} ,
\end{equation}
then Lemma~\ref{lem: est Rj} immediately follows from \eqref{choose eps}
and Gal-Koksma (Proposition~\ref{Gal-Koksma}).
In the rest of the proof, we shall prove \eqref{est Rj}. Using that ${\mathbb E} R_j=0$ and ${\mathbb E} R_j^2\ge 0$,
we first notice that
{\allowdisplaybreaks
\begin{eqnarray*}
{\mathbb E}\left( \sum_{j=m}^{m+r-1} R_j \right)^2
&\le & 2\sum_{j=m}^{m+r-1} \sum_{k=0}^{m+r-1-j} {\mathbb E}(R_j R_{j+k}) \\
&= & 2\sum_{j=m}^{m+r-1} \sum_{k=0}^{m+r-1-j}
{\mathbb E}\left(R_j {\mathbb E}\left(\xi_{j+k}^2 - {\mathbb E}\xi_{j+k}^2\left| \mathfrak{G}_{j+k-1} \right. \right) \right)\\
&=& 2\sum_{j=m}^{m+r-1} \sum_{k=0}^{m+r-1-j} {\mathbb E}\left(R_j \left(\xi_{j+k}^2 - {\mathbb E}\xi_{j+k}^2 \right) \right) \\
&=& 2\sum_{j=m}^{m+r-1} {\mathbb E}\left(R_j \sum_{k=0}^{m+r-1-j} \xi_{j+k}^2 \right) \\
&=& 2\sum_{j=m}^{m+r-1} {\mathbb E}\left(R_j \left(\sum_{k=0}^{m+r-1-j} \xi_{j+k} \right)^2\right) \\
& & - 4\sum_{j=m}^{m+r-1} \mathop{\sum\sum}_{0\le k<\ell\le m+r-1-j} {\mathbb E}\left(R_j \xi_{j+k} \xi_{j+\ell}\right) \\
&=& 2\sum_{j=m}^{m+r-1} {\mathbb E}\left(R_j \left(\sum_{k=0}^{m+r-1-j} \xi_{j+k} \right)^2\right).
\end{eqnarray*}
}In the last step, we use \eqref{martingale diff identity} to conclude that
${\mathbb E}\left(R_j \xi_{j+k} \xi_{j+\ell}\right)=0$ if $k<\ell$.
By Lemma~\ref{lem: martingale repn}, we further obtain
{\allowdisplaybreaks
\begin{eqnarray*}
& & {\mathbb E}\left( \sum_{j=m}^{m+r-1} R_j \right)^2 \\
&\le & 2\sum_{j=m}^{m+r-1} {\mathbb E}\left(R_j \ \left[\sum_{k=0}^{m+r-1-j} Y_{j+k} + \left(u_{m+r-1} - u_j\right) \right]^2\right) \\
&\le & 2\sum_{j=m}^{m+r-1} \left\{ \sum_{k=0}^{m+r-1-j} {\mathbb E}\left(R_j Y_{j+k}^2 \right) \right. + 2 \sum_{k=0}^{m+r-1-j} \sum_{\ell=1}^{m+r-1-j-k} {\mathbb E}\left(R_j Y_{j+k} Y_{j+k+\ell} \right) \\
& & \ \ \ \ \ \ \ \ \ \ \ \ + 2 \sum_{k=0}^{m+r-1-j} {\mathbb E}\left(R_j Y_{j+k} u_{m+r-1} \right)
- 2 \sum_{k=0}^{m+r-1-j} {\mathbb E}\left(R_j Y_{j+k} u_j\right) \Bigg\} \\
& & +2\sum_{j=m}^{m+r-1} {\mathbb E}\left(R_j \left(u_{m+r-1} - u_j\right)^2 \right) \\
&=:& 2\sum_{j=m}^{m+r-1} \left( I_1 + I_2 + I_3 + I_4 \right) + 2 I_5.
\end{eqnarray*}
}
To prove \eqref{est Rj}, it suffices to show that
\begin{equation*}
|I_i|\ll j^{8\epsilon \kappa_p}, \ \text{for}\ i=1, 2, 3, 4, \ \ \text{and}\ \
|I_5|\ll (m+r)^{1+8\epsilon \kappa_p}-m^{1+8\epsilon \kappa_p}.
\end{equation*}
For $I_1$: Recall that $\|R_j\|_{L^2}\ll j^{2\epsilon\kappa_p}$, and $R_j$ is $\mathfrak{G}_{j-1}$- and thus
$\mathfrak{F}_{-1}^{\tau_{j+1}}$-measurable.
By Lemma~\ref{lem: est Y},
$\|Y_{j+k}^2\|_{L^{p/2}}\le \|Y_{j+k}\|_{L^p}^2\ll (j+k)^{2\epsilon \kappa_p}$,
and $Y_{j+k}^2$ is $\mathfrak{F}^{\tau_{j+k+2}}_{\tau_{j+k-1}}$- and thus $\mathfrak{F}^{\infty}_{\tau_{j+k-1}}$-measurable.
Applying Lemma~\ref{lem: cov inequality} and Proposition~\ref{prop: alpha mixing}, we
take $s=\frac{2p}{p-4}$ and get
{\allowdisplaybreaks
\begin{eqnarray*}
\left| I_1\right| &\le & \sum_{k=0}^{m+r-1-j} \left| {\mathbb E}\left(R_j Y_{j+k}^2 \right)\right| \\
&\le & \sum_{k=0}^{m+r-1-j} 10 \alpha(\mathfrak{F}_{-1}^{\tau_{j+1}}, \mathfrak{F}^{\infty}_{\tau_{j+k-1}})^{\frac{1}{s}}
\left\|R_j \right\|_{L^{2}} \left\|Y_{j+k}^2 \right\|_{L^{p/2}} \\
&\ll & \sum_{k=0}^2 j^{2\epsilon \kappa_p }(j+k)^{2\epsilon \kappa_p}
+ \sum_{k=3}^{m+r-1-j} \vartheta_0^{\frac{\lceil (j+k-2)^\epsilon \rceil}{s}} j^{2\epsilon\kappa_p} (j+k)^{2\epsilon \kappa_p} \\
&\ll & j^{4\epsilon \kappa_p }\left[\mathcal{O}(1)+ \sum_{k=3}^{m+r-1-j} \vartheta_0^{\frac{\lceil (k-2)^\epsilon \rceil}{s}}(1+k)^{2\epsilon \kappa_p} \right]
\ll j^{8\epsilon \kappa_p }.
\end{eqnarray*}
}
For $I_2$: we split the double sum into the cases $k\ge \ell$ and $k<\ell$, that is,
\begin{equation*}
|I_2|\le 2\sum_{k=0}^{m+r-1-j} \sum_{1\le \ell\le k} \left|{\mathbb E}\left(R_j (Y_{j+k} Y_{j+k+\ell}) \right) \right|
+2\sum_{\ell=1}^{m+r-1-j} \sum_{0\le k<\ell} \left|{\mathbb E}\left((R_j Y_{j+k} ) Y_{j+k+\ell} \right) \right|.
\end{equation*}
In the first summation, we note that $\ell\le k$ and
\begin{equation*}
\|Y_{j+k} Y_{j+k+\ell}\|_{L^{p/2}}\le \|Y_{j+k}\|_{L^p} \|Y_{j+k+\ell}\|_{L^p}
\ll (j+k)^{\epsilon \kappa_p}(j+k+\ell)^{\epsilon \kappa_p} \ll (j+2k)^{2\epsilon \kappa_p}.
\end{equation*}
Applying Lemma~\ref{lem: cov inequality} and Proposition~\ref{prop: alpha mixing}, we
take $s=\frac{2p}{p-4}$ and get
{\allowdisplaybreaks
\begin{eqnarray*}
& &\sum_{k=0}^{m+r-1-j} \sum_{1\le \ell\le k} \left|{\mathbb E}\left(R_j (Y_{j+k} Y_{j+k+\ell}) \right) \right| \\
&\le & \sum_{k=0}^{m+r-1-j} \sum_{1\le \ell\le k} 10 \alpha(\mathfrak{F}_{-1}^{\tau_{j+1}}, \mathfrak{F}^{\infty}_{\tau_{j+k-1}})^{\frac{1}{s}}
\left\|R_j \right\|_{L^{2}} \left\|Y_{j+k} Y_{j+k+\ell} \right\|_{L^{p/2}} \\
&\ll & \sum_{k=0}^2 k j^{2\epsilon \kappa_p }(j+2k)^{2\epsilon \kappa_p}
+ \sum_{k=3}^{m+r-1-j} k \vartheta_0^{\frac{\lceil (j+k-2)^\epsilon \rceil}{s}} j^{2\epsilon\kappa_p} (j+2k)^{2\epsilon \kappa_p} \\
&\ll & j^{4\epsilon \kappa_p }\left[\mathcal{O}(1)+ \sum_{k=3}^{m+r-1-j} \vartheta_0^{\frac{\lceil (k-2)^\epsilon \rceil}{s}}(1+2k)^{1+2\epsilon \kappa_p} \right]
\ll j^{8\epsilon \kappa_p }.
\end{eqnarray*}
}
In the second summation, we note that $k<\ell$ and hence
\begin{equation*}
\|R_jY_{j+k} \|_{L^{4/3}}\le \|R_j\|_{L^2} \|Y_{j+k}\|_{L^4}
\ll j^{2\epsilon \kappa_p}(j+k)^{\epsilon \kappa_p} \ll (j+k)^{3\epsilon \kappa_p}\le (j+\ell)^{4\epsilon \kappa_p}.
\end{equation*}
Also, $\|Y_{j+k+\ell} \|_{L^p}\ll (j+k+\ell)^{\epsilon\kappa_p}\le (j+2\ell)^{\epsilon \kappa_p}$.
Applying Lemma~\ref{lem: cov inequality} and Proposition~\ref{prop: alpha mixing}, we
take $s'=\frac{4p}{p-4}$ and get
{\allowdisplaybreaks
\begin{eqnarray*}
& &\sum_{\ell=1}^{m+r-1-j} \sum_{0\le k<\ell} \left|{\mathbb E}\left((R_j Y_{j+k} ) Y_{j+k+\ell} \right) \right| \\
&\le & \sum_{\ell=1}^{m+r-1-j} \sum_{0\le k< \ell} 10 \alpha(\mathfrak{F}_{-1}^{\tau_{j+k+1}}, \mathfrak{F}^{\infty}_{\tau_{j+k+\ell-1}})^{\frac{1}{s'}}
\left\|R_j Y_{j+k} \right\|_{L^{4/3}} \left\|Y_{j+k+\ell} \right\|_{L^{p}} \\
&\ll & \sum_{\ell=1}^2 \ell (j+\ell)^{4\epsilon \kappa_p }(j+2\ell)^{\epsilon \kappa_p}
+ \sum_{\ell=3}^{m+r-1-j} \ell \vartheta_0^{\frac{\lceil (j+k+\ell-2)^\epsilon \rceil}{s'}}
(j+\ell)^{4\epsilon \kappa_p }(j+2\ell)^{\epsilon \kappa_p} \\
&\ll & j^{5\epsilon \kappa_p }\left[\mathcal{O}(1)
+ \sum_{\ell=3}^{m+r-1-j} \vartheta_0^{\frac{\lceil (\ell-2)^\epsilon \rceil}{s'}}(1+2\ell)^{1+5\epsilon \kappa_p} \right]
\ll j^{8\epsilon \kappa_p }.
\end{eqnarray*}
}
Therefore, $|I_2| \ll j^{8\epsilon \kappa_p}$.
For $I_3$: by the definition of $u_j$ in \eqref{def uj}, we rewrite
{\allowdisplaybreaks
\begin{eqnarray*}
\sum_{k=0}^{m+r-1-j} {\mathbb E}\left(R_j Y_{j+k} u_{m+r-1} \right)
&=& \sum_{k=0}^{m+r-1-j} \sum_{\ell=0}^\infty {\mathbb E}\left(R_j Y_{j+k} {\mathbb E}\left( Y_{m+r-1+\ell} | \mathfrak{G}_{m+r-2} \right)\right) \\
&=& \sum_{k=0}^{m+r-1-j} \sum_{\ell=0}^\infty {\mathbb E}\left(R_j Y_{j+k} Y_{m+r-1+\ell} \right) \\
&=& \sum_{k=0}^{m+r-1-j} \sum_{\ell=m+r-1-j-k}^\infty {\mathbb E}\left(R_j Y_{j+k} Y_{j+k+\ell} \right).
\end{eqnarray*}
}
We split $I_3$ into the cases $k\ge \ell$ and $k<\ell$, and obtain $|I_3|\ll j^{8\epsilon \kappa_p}$ by
estimates similar to those for $I_2$.
For $I_4$: Note that
$\|R_j u_j\|_{L^{4/3}}\le \|R_j\|_{L^2} \|u_{j}\|_{L^4}
\ll j^{2\epsilon \kappa_p}\cdot j^{\epsilon \kappa_p} = j^{3\epsilon \kappa_p} \le j^{4\epsilon \kappa_p}$.
Applying Lemma~\ref{lem: cov inequality} and Proposition~\ref{prop: alpha mixing}, we
take $s'=\frac{4p}{p-4}$ and get
{\allowdisplaybreaks
\begin{eqnarray*}
|I_4| &\le & 2\sum_{k=0}^{m+r-1-j} \left| {\mathbb E}\left(R_j Y_{j+k} u_j\right)\right| \\
&\le & 2 \sum_{k=0}^{m+r-1-j} 10 \alpha(\mathfrak{F}_{-1}^{\tau_{j+1}}, \mathfrak{F}^{\infty}_{\tau_{j+k-1}})^{\frac{1}{s'}}
\left\|R_j u_j \right\|_{L^{4/3}} \left\|Y_{j+k} \right\|_{L^{p}} \\
&\ll & \sum_{k=0}^2 j^{4\epsilon \kappa_p }(j+k)^{\epsilon \kappa_p}
+ \sum_{k=3}^{m+r-1-j} \vartheta_0^{\frac{\lceil (j+k-2)^\epsilon \rceil}{s'}}
j^{4\epsilon \kappa_p }(j+k)^{\epsilon \kappa_p} \\
&\ll & j^{5\epsilon \kappa_p }\left[\mathcal{O}(1)+ \sum_{k=3}^{m+r-1-j}
\vartheta_0^{\frac{\lceil (k-2)^\epsilon \rceil}{s'}}(1+k)^{5\epsilon \kappa_p} \right]
\ll j^{8\epsilon \kappa_p }.
\end{eqnarray*}
}
For $I_5$: by the Cauchy--Schwarz inequality,
{\allowdisplaybreaks
\begin{eqnarray*}
|I_5|
\le \sum_{j=m}^{m+r-1} \left| {\mathbb E}\left(R_j \left(u_{m+r-1} - u_j\right)^2 \right)\right|
&\le & \sum_{j=m}^{m+r-1} \|R_j\|_{L^2} \|u_{m+r-1}-u_j\|_{L^4}^2 \\
&\le & \sum_{j=m}^{m+r-1} \|R_j\|_{L^2} \left(\|u_{m+r-1}\|_{L^4} + \|u_j\|_{L^4}\right)^2 \\
&\ll & \sum_{j=m}^{m+r-1} j^{2\epsilon \kappa_p} \left[ (m+r-1)^{\epsilon \kappa_p} + j^{\epsilon\kappa_p} \right]^2 \\
&\ll & (m+r)^{2\epsilon \kappa_p} \sum_{j=m}^{m+r-1} j^{2\epsilon \kappa_p} \\
&\le & (m+r)^{2\epsilon \kappa_p} \sum_{j=m}^{m+r-1} j^{6\epsilon \kappa_p} \\
&\le & (m+r)^{1+8\epsilon \kappa_p} -m^{1+8\epsilon \kappa_p}.
\end{eqnarray*}
}
The proof of Lemma~\ref{lem: est Rj} is complete.
\end{proof}
We are now ready to show an ASIP for the sequence $\{\xi_j\}_{j\ge 1}$.
\begin{lemma}\label{lem: ASIP xi}
$\{\xi_j\}_{j\ge 1}$ satisfies an ASIP as follows:
put $b_r^2={\mathbb E}\left( \sum_{j=1}^r \xi_j \right)^2$.
There exists a Wiener process $W(\cdot)$ such that
\begin{equation*}
\left|\sum_{j=1}^r \xi_j - W(b_r^2)\right|
=o\left( r^{2\kappa_2\lambda(1+\epsilon)} \right), \ a.s..
\end{equation*}
\end{lemma}
\begin{proof} We directly apply Proposition~\ref{Shao ASIP} to the
$L^4$-integrable martingale difference sequence $\{(\xi_j, \mathfrak{G}_j)\}_{j\ge 1}$.
Set $a_r=r^{4\kappa_2\lambda}$, then
Condition~\eqref{Shao Cond1} holds by Lemma~\ref{lem: est Rj}.
Condition~\eqref{Shao Cond2} also holds, since by Lemma~\ref{lem: martingale repn} and
\eqref{choose eps}, we have
\begin{eqnarray*}
\sum_{j=1}^\infty a_j^{-2} {\mathbb E}|\xi_j|^{4}\ll \sum_{j=1}^\infty j^{-8\kappa_2\lambda} j^{4\epsilon \kappa_p}
\le \sum_{j=1}^\infty j^{-\frac{4}{4-\epsilon}}<\infty.
\end{eqnarray*}
On the one hand, by \eqref{choose eps}, $\epsilon\kappa_p<\kappa_2(4\lambda-1)<\kappa_2$, and thus
\begin{equation*}
b_r=\left\|\sum_{j=1}^r \xi_j \right\|_{L^2}\le \sum_{j=1}^r \left\|\xi_j \right\|_{L^2}\ll
\sum_{j=1}^r j^{\epsilon \kappa_p}\ll r^{1+\epsilon\kappa_p}\ll r^{1+\kappa_2}.
\end{equation*}
On the other hand, by Lemma~\ref{lem: est Y} and Lemma~\ref{lem: xi remainder},
\begin{equation*}
b_r=\left\|\sum_{j=1}^r \xi_j \right\|_{L^2}\ge
\left\|\sum_{j=1}^r Y_j \right\|_{L^2} - \left\|\sum_{j=1}^r (Y_j-\xi_j) \right\|_{L^4}
\gg r^{\kappa_2} - \mathcal{O}(r^{\epsilon \kappa_p}) \gg r^{\kappa_2}.
\end{equation*}
Therefore,
$r^{2\kappa_2(1-2\lambda)}\ll b_r^2/a_r\ll r^{2\kappa_2(1-2\lambda)+2}$,
and hence $\left|\log(b_r^2/a_r)\right|\ll r^{4\kappa_2\lambda\epsilon}$.
It is obvious that $\log\log a_r\ll r^{4\kappa_2\lambda\epsilon}$ as well.
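For completeness, both logarithmic bounds follow from the polynomial sandwich above; a sketch, using only that $\log r\ll r^\delta$ for every fixed $\delta>0$:
\begin{equation*}
\left|\log(b_r^2/a_r)\right| \le \left(2\kappa_2|1-2\lambda|+2\right)\log r + \mathcal{O}(1) \ll \log r \ll r^{4\kappa_2\lambda\epsilon},
\end{equation*}
and likewise $\log\log a_r=\log\left(4\kappa_2\lambda \log r\right)\ll \log r$.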
By Proposition~\ref{Shao ASIP}, we have
{\allowdisplaybreaks
\begin{eqnarray*}
\left|\sum_{j=1}^r \xi_j - W(b_r^2)\right|
&=& o\left( \left( a_r \left( \left|\log(b_r^2/a_r)\right| + \log\log a_r \right) \right)^{1/2} \right), \ a.s. \\
&\le & o\left( r^{2\kappa_2\lambda(1+\epsilon)} \right), \ a.s..
\end{eqnarray*}
}
\end{proof}
\subsection{ASIP for $X_\textbf{f}$ }
Finally, we prove Theorem~\ref{main}, the ASIP for the random process
$X_\textbf{f}=\{X_n\}_{n\ge 0}=\{f_n\circ T^n\}_{n\ge 0}$. By the previous subsections,
we can now write
\begin{equation}\label{sum decomposition2}
\sum_{k=0}^{n-1} X_k = \sum_{j=1}^{r(n)-1} \xi_j +
\sum_{j=1}^{r(n)-1} (Y_j-\xi_j) + U_n + V_n.
\end{equation}
We first compare the variances $\sigma_n^2={\mathbb E}\left(\sum_{k=0}^{n-1} X_k\right)^2$
and $b_{r(n)-1}^2={\mathbb E}\left(\sum_{j=1}^{r(n)-1} \xi_j\right)^2$.
\begin{lemma}\label{lem: compare var}
$\left| \sigma_n - b_{r(n)-1}\right|\ll n^{\epsilon \kappa_p}$.
As a result, for any Wiener process $W(\cdot)$,
\begin{equation*}
\left|W(\sigma_n^2)-W(b_{r(n)-1}^2)\right| = \mathcal{O}\left(\sigma_n^{2\lambda}\right), \ a.s.
\end{equation*}
\end{lemma}
\begin{proof} By \eqref{sum decomposition2}, Lemma~\ref{lem: remainder} and Lemma~\ref{lem: xi remainder},
{\allowdisplaybreaks
\begin{eqnarray*}
\left| \sigma_n - b_{r(n)-1} \right|
&\le & \left\| \sum_{j=1}^{r(n)-1} (Y_j-\xi_j) + U_n + V_n \right\|_{L^2} \\
&\le & \left\| \sum_{j=1}^{r(n)-1} (Y_j-\xi_j) \right\|_{L^4} + \|U_n\|_{L^p} + \|V_n \|_{L^p} \\
&=& \mathcal{O}\left(\left(r(n)-1\right)^{\epsilon \kappa_p}\right) + \mathcal{O}(1) + \mathcal{O}\left(n^{\epsilon \kappa_p}\right)\ll n^{\epsilon \kappa_p}.
\end{eqnarray*}
}
In the last step, we use the fact that $r(n)\asymp n^{\frac{1}{1+\epsilon}}\ll n$.
By \eqref{M-Z} and \eqref{choose eps},
{\allowdisplaybreaks
\begin{eqnarray*}
\left|\sigma_n^2- b_{r(n)-1}^2\right|
&\le & \left| \sigma_n - b_{r(n)-1}\right|\left( 2\sigma_n + \left| \sigma_n - b_{r(n)-1}\right| \right) \\
&\ll & n^{\epsilon \kappa_p} \left( 2\sigma_n + n^{\epsilon \kappa_p} \right) \ll \sigma_n^{\epsilon \kappa_p/\kappa_2+1}.
\end{eqnarray*}
}
For any Wiener process $W(\cdot)$, each random variable $Z_n:=W(\sigma_n^2)-W(b_{r(n)-1}^2)$
follows the normal distribution $N\left(0, \left|\sigma_n^2- b_{r(n)-1}^2\right| \right)$.
By \eqref{choose eps}, we can choose a sufficiently large
$
s>\frac{4\max\{1, 1/\kappa_2\}}{4\lambda-1-\epsilon \kappa_p/\kappa_2}>4.
$
Then by Markov's inequality and Jensen's inequality, we have
{\allowdisplaybreaks
\begin{eqnarray*}
\mu\{|Z_n|\ge \sigma_n^{2\lambda}\}\le \sigma_n^{-2\lambda s} {\mathbb E}|Z_n|^s
\le \sigma_n^{-2\lambda s} \left({\mathbb E}|Z_n|^2\right)^{s/2}
&\ll & \sigma_n^{\frac{s}{2}\left[ \epsilon \kappa_p/\kappa_2+1 -4\lambda\right]} \\
&\ll & \sigma_n^{-2/\kappa_2}\ll n^{-2},
\end{eqnarray*}
}
which implies that $\sum_{n=1}^\infty \mu\{|Z_n|\ge \sigma_n^{2\lambda}\}<\infty$.
By the Borel-Cantelli lemma (Lemma~\ref{lem: B-C}),
we get
$|Z_n|\ll \sigma_n^{2\lambda}$, a.s..
\end{proof}
We are now ready to prove our main theorem.
\begin{proof}[Proof of Theorem~\ref{main}]
First, by \eqref{sum decomposition2}, Lemma~\ref{lem: remainder} and Lemma~\ref{lem: xi remainder},
we have
{\allowdisplaybreaks
\begin{eqnarray*}
\left| \sum_{k=0}^{n-1} X_k - \sum_{j=1}^{r(n)-1} \xi_j \right|
&\le & \left| \sum_{j=1}^{r(n)-1} (Y_j-\xi_j) \right| + |U_n| + |V_n | \\
&=& \mathcal{O}\left(\left(r(n)-1\right)^{2 \kappa_2\lambda}\right) + \mathcal{O}(1) + \mathcal{O}\left(n^{2 \kappa_2\lambda}\right), \ a.s. \\
&=& \mathcal{O}\left(n^{2\kappa_2\lambda}\right) = \mathcal{O}\left(\sigma_n^{2\lambda}\right), \ a.s.
\end{eqnarray*}
}
By Lemma~\ref{lem: ASIP xi} and Lemma~\ref{lem: compare var},
there exists a Wiener process $W(\cdot)$ such that
{\allowdisplaybreaks
\begin{eqnarray*}
\left|\sum_{k=0}^{n-1} X_k - W(\sigma_n^2)\right|
&\le & \left|\sum_{k=0}^{n-1} X_k - \sum_{j=1}^{r(n)-1} \xi_j \right| +
\left|\sum_{j=1}^{r(n)-1} \xi_j - W(b_{r(n)-1}^2)\right| \\
& & + \left|W(\sigma_n^2)-W(b_{r(n)-1}^2)\right| \\
&=& \mathcal{O}\left(\sigma_n^{2\lambda}\right) +
o\left( \left(r(n)-1\right)^{2\kappa_2\lambda(1+\epsilon)} \right) +\mathcal{O}\left(\sigma_n^{2\lambda}\right)
= \mathcal{O}\left(\sigma_n^{2\lambda}\right), \ a.s..
\end{eqnarray*}
}Here we use the fact that $r(n)\asymp n^{\frac{1}{1+\epsilon}}$ and hence
$\left(r(n)-1\right)^{2\kappa_2\lambda(1+\epsilon)}\asymp n^{2\kappa_2 \lambda}\ll \sigma_n^{2\lambda}$.
This completes the proof of Theorem \ref{main}.
\end{proof}
\section{Applications to Random Processes for Concrete Systems }\label{sec: app}
\subsection{Concrete hyperbolic systems}\label{sec:concrete}
Our main result applies to a large class of two-dimensional uniformly hyperbolic systems,
including Anosov diffeomorphisms\footnote{
By adding the boundaries of the finite Markov partition,
a topological mixing $C^2$ Anosov diffeomorphism satisfies our Assumptions \textbf{(H1)-(H5)}.
}
and chaotic billiards.
We shall focus on the Sinai dispersing billiards and their conservative perturbations.
Since such models were studied in \cite{CZ09, DZ11, DZ13},
we only recall some basic facts here.
We first recall standard definitions, see \cite{BSC90,BSC91,C99}.
A two-dimensional billiard is a dynamical system where a point moves
freely at the unit speed in a domain $Q\subset \mathbb{R}^2$ and bounces off its
boundary $\partial Q$ by the laws of elastic reflection.
A billiard is dispersing if
$\partial Q$ is a finite union of mutually disjoint $C^3$-smooth curves
with strictly positive curvature.
Four broad classes of perturbations of the dispersing billiards were considered in \cite{DZ11, DZ13}:
\begin{itemize}
\item[(a)] Tables with shifted, rotated or deformed scatterers;
\item[(b)] Billiards under small external forces which bend trajectories during flight;
\item[(c)] Billiards with kicks or twists at reflections, including slips along the disk;
\item[(d)] Random perturbations comprised of maps with uniform properties (including
any of the above classes, or a combination of them).
\end{itemize}
We treat all the above systems in a universal coordinate system.
More precisely, let $ M=\partial Q\times [-\pi/2,\pi/2]$ be the collision space, which
is a standard cross-section of the billiard flow. The canonical coordinate
in $M$ is denoted by $(r, \varphi)$, where $r$ is the arc length parameter
on $\partial Q$ and $\varphi\in [-\pi/2,\pi/2]$ is the angle of reflection.
The collision map $T: M \to M$ takes an outward unit vector $(r, \varphi)$
at $\partial Q$ to the outward unit vector after the next collision,
and the singularities of $T$ are caused by tangential collisions, that is,
$S_1=\partial M\cup T^{-1}(\partial M)$.
It was proven in \cite{CZ09, DZ11, DZ13} that each of the collision maps $T: M\to M$ above
preserves a mixing SRB measure $\mu$, and that the systems $(M, T, \mu)$
satisfy the assumptions \textbf{(H1)-(H5)} in Section~\ref{Sec:Assumptions}.
Therefore, under the conditions of Theorem \ref{main},
the ASIP holds for the non-stationary process generated by unbounded observables
over those systems.
\subsection{Practical random process}
In this subsection, we discuss some practical processes
over the concrete systems in Section~\ref{sec:concrete}.
\subsubsection{Fluctuation of Lyapunov exponents}
By Birkhoff's ergodic theorem, Pesin entropy formula
and the mixing property of the system $(M, T, \mu)$,
we have
\begin{equation*}
\lim_{n\to\infty} \frac{1}{n} \log \left|D^u_x T^n \right| = \int \log |D^u_x T| d\mu = h_\mu(T),
\end{equation*}
where $|D^u_x T^n|$ is the Jacobian of $T^n$ at $x$ along the unstable direction,
and $h_\mu(T)$ is the metric entropy of the SRB measure $\mu$.
We would like to study the fluctuation of the convergence for the ergodic sum given by
\begin{equation*}
\log \left|D^u_x T^n \right| - nh_\mu(T)=\sum_{k=0}^{n-1} \left[\log |D^u_x T| - h_\mu(T)\right] \circ T^k.
\end{equation*}
Unlike in Anosov systems,
the log unstable Jacobian function $x\mapsto \log |D^u_x T|$ is unbounded in billiard systems.
Recall that $M$ is the phase space of a billiard system with coordinates
$x=(r, \varphi)$, then $\log |D^u_x T|\asymp -\log \cos\varphi$
blows up near the singularities $\{\varphi=\pm\frac{\pi}{2}\}$.
Nevertheless, $\log |D^u_x T|$ is
dynamically H\"older continuous by Assumption (\textbf{H3}),
and it belongs to $L^p$ for any $p\ge 1$, as
\begin{equation*}
\int \left| \log |D^u_x T| \right|^p d\mu \asymp \iint \left| \log \cos\varphi\right|^p \cos \varphi dr d\varphi <\infty.
\end{equation*}
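The finiteness of the last integral can be checked directly; a sketch: since $t\,|\log t|^p \to 0$ as $t\to 0^+$, the function
\begin{equation*}
\varphi \mapsto \left| \log \cos\varphi\right|^p \cos \varphi
\end{equation*}
extends continuously to $[-\pi/2, \pi/2]$ (with value $0$ at the endpoints), hence is bounded, and the integral over the compact domain $\partial Q\times [-\pi/2,\pi/2]$ converges.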
More generally, it follows from Theorem~\ref{main} that
an ASIP holds for the ergodic sum of a dynamically H\"older observable.
Here we only assume higher order integrability rather than boundedness for the observable.
\begin{theorem}\label{thm: stationary}
Suppose that $f\in \mathcal{H}\cap L^p$ for some $p>4$, such that
${\mathbb E} f=0$ and the first moment of its auto-correlations is finite, i.e.,
\begin{equation}\label{cor 1m}
\sum_{n=0}^\infty n\left|{\mathbb E}(f \cdot f\circ T^n)\right| < \infty.
\end{equation}
Then the stationary process $\mathbf{X}_f:=\{f\circ T^n\}_{n\ge 0}$
satisfies an ASIP for any error exponent $\lambda\in \left(\frac14, \frac12 \right)$,
that is, there is a Wiener process $W(\cdot)$ such that
\begin{equation}\label{ASIP12}
\left|\sum_{k=0}^{n-1} f \circ T^k - \sigma_f W(n) \right|=\mathcal{O}(n^{\lambda}), \ \ \text{a.s.}.
\end{equation}
where $\sigma_f^2$ is given by the Green-Kubo formula, i.e.,
\begin{equation}\label{sigma_f}
\sigma^2_f:=\sum_{n=-\infty}^\infty {\mathbb E}\left(f\cdot f\circ T^n\right) \in [0, \infty).
\end{equation}
\end{theorem}
\begin{proof}
First, we note that the series in \eqref{sigma_f} converges absolutely
by Condition \eqref{cor 1m}.
By direct computation, we have
{\allowdisplaybreaks
\begin{eqnarray*}
\sigma_n^2={\mathbb E}\left( \sum_{k=0}^{n-1} f \circ T^k\right)^2
&= & n \sigma_f^2 - \sum_{|k|>n} n \ {\mathbb E}\left(f\cdot f\circ T^k\right) -2 \sum_{k=1}^{n-1} k\ {\mathbb E}\left(f\cdot f\circ T^k\right) \\
&=& n \sigma_f^2 +\mathcal{O}\left(1 \right).
\end{eqnarray*}
}Therefore, $\sigma_f^2=\lim_{n\to\infty} \sigma_n^2/n \in [0, \infty)$.
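The first equality in the display above is the standard rearrangement for stationary sequences; writing $c_k:={\mathbb E}\left(f\cdot f\circ T^k\right)$ (so that $c_{-k}=c_k$ by the invariance of $\mu$), a sketch:
\begin{equation*}
\sigma_n^2=\sum_{|k|<n}(n-|k|)\, c_k
= n\sum_{k=-\infty}^{\infty} c_k - n\sum_{|k|\ge n} c_k - 2\sum_{k=1}^{n-1} k\, c_k,
\end{equation*}
and the last two sums are $\mathcal{O}(1)$ by Condition \eqref{cor 1m}.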
If $\sigma_f=0$, then $\sigma_n^2$ is uniformly bounded.
In such case, it is well known that $f$ is a coboundary, i.e.,
there exists an $L^2$ function $g: M\to {\mathbb R}$ such that $f=g-g\circ T$ (see e.g. Theorem 18.2.2 in \cite{IL71}),
and thus \eqref{ASIP12} is automatic for any error exponent $\lambda>0$.
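In the coboundary case, the ergodic sums telescope; a sketch:
\begin{equation*}
\sum_{k=0}^{n-1} f \circ T^k = \sum_{k=0}^{n-1} \left(g\circ T^k - g\circ T^{k+1}\right) = g - g\circ T^n,
\end{equation*}
so the partial sums are bounded in $L^2$ uniformly in $n$, consistent with $\sigma_n^2$ being uniformly bounded.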
We now focus on the case when $\sigma_f>0$.
Condition (1) in Theorem \ref{main} automatically holds since $f\in \mathcal{H}$.
For Condition (2), we have $\sigma_n\asymp \sqrt n$ since $0<\sigma_f<\infty$, that is, $\kappa_2=\frac12$.
Also, by stationarity and Minkowski's inequality,
\begin{equation*}
\sup_{m\ge 0} \left\|\sum_{k=m}^{m+n-1} f \circ T^k \right\|_{L^p} =
\left\|\sum_{k=0}^{n-1} f \circ T^k \right\|_{L^p} \le n \|f\|_{L^p} \ll n.
\end{equation*}
In other words, $\kappa_p=1$. By Theorem \ref{main}, we obtain the ASIP for any $\lambda\in (\frac14, \frac12)$,
that is, there exists a Wiener process $W(\cdot)$ such that
\begin{equation*}
\left|\sum_{k=0}^{n-1} f \circ T^k - W(\sigma_n^2) \right|=\mathcal{O}(\sigma_n^{2\lambda})=\mathcal{O}\left(n^\lambda\right), \ \ \text{a.s.}.
\end{equation*}
Note that $Z_n:=W(\sigma_n^2)-\sigma_f W(n)$ follows the normal distribution
$N\left(0, \left|\sigma_n^2- n\sigma_f^2\right|\right)$,
and recall that $\left|\sigma_n^2- n\sigma_f^2\right| =\mathcal{O}(1)$.
Then by Jensen's inequality,
\begin{equation*}
\mu\{|Z_n|\ge n^{\frac14}\}\le n^{-2} {\mathbb E}|Z_n|^8
\le n^{-2} \left({\mathbb E}|Z_n|^2\right)^{4}
\ll n^{-2},
\end{equation*}
which implies that $\sum_{n=1}^\infty \mu\{|Z_n|\ge n^{\frac14}\}<\infty$.
By the Borel-Cantelli lemma (Lemma~\ref{lem: B-C}),
we get
$|Z_n|\ll n^{\frac14}$, a.s.. Therefore,
\begin{equation*}
\left|\sum_{k=0}^{n-1} f \circ T^k - \sigma_f W(n) \right| \le
\left|\sum_{k=0}^{n-1} f \circ T^k - W(\sigma_n^2) \right| +\left| Z_n\right|
=\mathcal{O}(n^\lambda), \ a.s..
\end{equation*}
\end{proof}
\begin{remark}
The stationary ASIP in the special case when $p=\infty$ was shown by Chernov \cite{C06}
and many other authors. In this case, Condition \eqref{cor 1m} holds due to
the exponential decay of correlations for bounded dynamically H\"older observables.
Moreover, we can relax Condition \eqref{cor 1m} to sub-linear growth of the partial first moments of auto-correlations, i.e.,
$\sum_{n=0}^N n\left|{\mathbb E}(f \cdot f\circ T^n)\right| \ll N^\eta$ for some $\eta<1$; then,
by a slight modification of the proof, we can show that the ASIP in \eqref{ASIP12}
holds for any error exponent $\lambda\in \left(\max\{\frac14, \frac{\eta}{2}\}, \frac12 \right)$.
\end{remark}
Now we can directly apply Theorem~\ref{thm: stationary} to study the fluctuations of Lyapunov exponents
in generic billiard systems for which Markov sieves exist (see Corollary 1.8 and Theorem 7.2 in \cite{C95} for more details).
For such generic billiards, Condition \eqref{cor 1m} holds for $f=\log |D^u_x T| - h_\mu(T)$ and
a broader class of observables.
Therefore, by Theorem~\ref{thm: stationary}, for any $\lambda\in (\frac14, \frac12)$, there is a Wiener process $W(\cdot)$
such that
\begin{equation*}
\left|\log \left|D^u_x T^n \right| - nh_\mu(T) - \sigma_f W(n) \right|=\mathcal{O}(n^{\lambda}), \ \ \text{a.s.}.
\end{equation*}
\vskip.1in
\subsubsection{Shrinking target problem}
Let $\{A_n\}_{n\ge 0}$ be a sequence of nested Borel subsets of $M$, i.e.,
$A_n\supset A_{n+1}$ for any $n\ge 0$.
Given $x\in M$,
we can study the absolute frequency that the trajectory of $x$ hits the shrinking targets $A_n$.
More precisely, for any $n\ge 1$, we denote
\begin{equation}\label{def Nn}
N_n(x)=\#\{k\in [0, n):\ T^k x\in A_k\}=\sum_{k=0}^{n-1} {\boldsymbol{1}}_{A_k}\circ T^k(x).
\end{equation}
Note that ${\mathbb E} N_n=\sum_{k=0}^{n-1} \mu(A_k)$ by the invariance of $\mu$ under $T$.
We say that the sequence $\{A_n\}_{n\ge 0}$ is dynamically Borel-Cantelli if
$\lim_{n\to\infty} N_n=\infty$, a.s..
Similar to a recent result by Haydn, Nicol, T\"or\"ok and Vaienti in \cite{HNTV17} (see
Theorem 5.1 therein),
we obtain the following ASIP for the frequency process $N_n$ of the shrinking target problem.
\begin{theorem}\label{thm: ASIP target}
Let $\{A_n\}_{n\ge 0}$ be a sequence of nested Borel subsets such that
\begin{itemize}
\item[(i)] There is $\beta\in [0, \infty)$ such that
${\boldsymbol{1}}_{A_n}\in \mathcal{H}_{0.5}$ and
$
|{\boldsymbol{1}}_{A_n}|_{0.5}^+ + |{\boldsymbol{1}}_{A_n}|_{0.5}^- \ll n^{\beta};
$
\item[(ii)] There is $\gamma\in (0, \frac34)$ such that $\mu(A_n)\gg n^{-\gamma}$.
Moreover, $\mu(A_n)=o(\frac{1}{\log n})$.
\end{itemize}
Then the process $N_n$ (as defined in \eqref{def Nn})
satisfies the ASIP for any error exponent
$\lambda\in \left( \max\{\frac14, \frac{1}{8(1-\gamma)}\}, \frac{1}{2}\right)$, that is,
there exists a Wiener process $W(\cdot)$ such that
\begin{equation}\label{ASIP14}
\left|N_n - {\mathbb E} N_n -W(\sigma_n^2) \right|=\mathcal{O}(\sigma_n^{2\lambda}), \ \ \text{a.s.},
\end{equation}
where $\sigma_n^2={\mathbb E} N_n^2 -\left( {\mathbb E} N_n \right)^2$.
\end{theorem}
\begin{remark} Here is a particular choice of the sequence $\{A_n\}_{n\ge 0}$ such that
Condition (i) in Theorem~\ref{thm: ASIP target} holds: let
$A_n$ be an open subset with boundaries
in the singular set $S_{-w(n)}\cup S_{w(n)}$,
where $w(n)$ is a sequence of positive integers such that
$w(n)\ll \log_2 n$. Then each ${\boldsymbol{1}}_{A_n}\in \mathcal{H}_{0.5}$ and
$
|{\boldsymbol{1}}_{A_n}|_{0.5}^+ + |{\boldsymbol{1}}_{A_n}|_{0.5}^- \le 2^{w(n)} \ll n^{\beta}
$
for some $\beta>0$.
\end{remark}
\begin{proof}[Proof of Theorem~\ref{thm: ASIP target}]
Without loss of generality, we may assume that $\mu(A_0)\le \frac12$.
We take $f_n={\boldsymbol{1}}_{A_n}-\mu(A_n)$, then $N_n - {\mathbb E} N_n=\sum_{k=0}^{n-1} f_k\circ T^k$.
It follows from Condition (i) that $\{f_n\}_{n\ge 0}$ satisfies Condition (1) in Theorem \ref{main}.
For Condition (2), the second moment inequality in \eqref{M-Z} is automatic since for any $p\ge 1$,
any $m\ge 0$ and any $n\ge 1$,
\begin{equation*}
\left\|\sum_{k=m}^{m+n-1} f_k\circ T^k \right\|_{L^p} \le \sum_{k=m}^{m+n-1} \left\|f_k \right\|_{L^\infty} \le 2n.
\end{equation*}
That is, $\kappa_p=1$.
It remains to show the first moment inequality in \eqref{M-Z}.
We follow the arguments of Lemma 2.4 in \cite{HNVZ13}.
First, we claim the following long term iterations:
there exists $c>0$ such that for any $k\ge 0$,
\begin{equation}\label{long iterations}
\sum_{\ell>k+c\log(k+1)} \left| {\mathbb E}(f_k\circ T^k \cdot f_\ell \circ T^\ell) \right| \ll (k+1)^{-2}.
\end{equation}
Indeed, by Proposition~\ref{exp decay}, we take $\theta:=\max\{\vartheta_0, 2^{-1/4}\}<1$
and $c>\frac{3+\beta}{-\log \theta}$.
Then together by Condition (i), for any $0\le k< \ell$ and such that $\ell-k>c\log(k+1)$,
{\allowdisplaybreaks
\begin{eqnarray*}
\left| {\mathbb E}(f_k\circ T^k \cdot f_\ell \circ T^\ell) \right|
&\ll & C_0 \left(4+2k^\beta +2\ell^\beta \right) \theta^{\ell-k} \\
&\le & C_0 \left(4+2k^\beta + 2^{1+\beta} \left[k^\beta + (\ell-k)^\beta \right] \right)\theta^{\ell-k} \\
&\ll & \mathcal{O}(1) + k^\beta \theta^{\ell-k} + (\ell-k)^\beta \theta^{\ell-k} \\
&\ll & \mathcal{O}(1) + k^\beta (k+1)^{-3-\beta} +\mathcal{O}(1)\ll (k+1)^{-3},
\end{eqnarray*}
}which immediately implies \eqref{long iterations}. Now we have
{\allowdisplaybreaks
\begin{eqnarray*}
\sigma_n^2={\mathbb E}\left( \sum_{k=0}^{n-1} f_k\circ T^k \right)^2
&=& \sum_{k=0}^{n-1} {\mathbb E}(f_k^2)
+2\sum_{k=0}^{n-1} \sum\limits_{\substack{k< \ell<n, \\ \ell \le k+c\log(k+1)}} {\mathbb E}(f_k\circ T^k \cdot f_\ell \circ T^\ell) \\
& & +2 \sum_{k=0}^{n-1} \sum\limits_{\substack{k< \ell<n, \\ \ell>k+ c\log(k+1)}} {\mathbb E}(f_k\circ T^k \cdot f_\ell \circ T^\ell) \\
&=& \sum_{k=0}^{n-1} \left( \mu(A_k) - \mu(A_k)^2 \right) \\
& & + 2\sum_{k=0}^{n-1} \sum\limits_{\substack{k< \ell<n, \\ \ell \le k+ c\log(k+1)}}
\left( \mu(A_k \cap T^{-(\ell-k)}A_\ell) - \mu(A_k) \mu(A_\ell) \right) \\
& & + 2 \sum_{k=0}^{n-1} \mathcal{O}\left((k+1)^{-2} \right) \\
&\ge & \sum_{k=0}^{n-1} \left( \mu(A_k) - \mu(A_k)^2 \right)
- 2\sum_{k=0}^{n-1} \sum\limits_{\substack{k< \ell<n, \\ \ell \le k+c\log(k+1)}} \mu(A_k) \mu(A_\ell) +\mathcal{O}(1) \\
&\ge & \frac12 \sum_{k=0}^{n-1} \mu(A_k) - 2c \sum_{k=0}^{n-1} \log(k+1) \mu(A_k)^2 +\mathcal{O}(1) \\
&=& \frac12 \sum_{k=0}^{n-1} \mu(A_k) \left[1-4c \log(k+1) \mu(A_k)\right] +\mathcal{O}(1),
\end{eqnarray*}
}where the last inequality uses the fact that $\mu(A_\ell)\le \mu(A_k)\le \mu(A_0)\le \frac12$.
By Condition (ii), we further get
\begin{equation*}
\sigma_n^2\gg \sum_{k=0}^{n-1} \mu(A_k) (1-o(1))\gg n^{1-\gamma}.
\end{equation*}
In other words, $\kappa_2=1-\gamma$.
Applying our main theorem, Theorem~\ref{main}, we obtain the ASIP for $N_n$ given by
\eqref{ASIP14}.
\end{proof}
\section*{Acknowledgements}
The authors would like to thank Nicolai Haydn and Huyi Hu for helpful discussions and suggestions.
\section{Introduction}
In this paper, we further develop a technique from \cite{RY} and apply it to study the Kobayashi Conjecture, $0$-cycles on hypersurfaces of general type, and Seshadri constants of very general hypersurfaces. The idea of the technique is to translate results about very general points on very general hypersurfaces to results about arbitrary points on very general hypersurfaces.
Our first application is to hyperbolicity. Recall that a complex variety is Brody hyperbolic if it admits no holomorphic maps from $\mathbb{C}$.
\begin{conjecture}[Kobayashi Conjecture]
\label{conj-Kobayashi}
A very general hypersurface $X$ of degree $d$ in $\mathbb{P}^n$ is Brody hyperbolic if $d$ is sufficiently large. Moreover, the complement $\mathbb{P}^n \setminus X$ is also Brody hyperbolic for large enough $d$.
\end{conjecture}
First conjectured in 1970 \cite{K}, the Kobayashi Conjecture has been the subject of intense study, especially in recent years \cite{Siu, Deng, Brotbeck, D2}. The suspected optimal bound for $d$ is approximately $d \geq 2n-1$. However, the best current bound is for $d$ greater than about $(en)^{2n+2}$ \cite{D2}. A related conjecture is the Green-Griffiths-Lang Conjecture.
\begin{conjecture}[Green-Griffiths-Lang Conjecture]
\label{conj-GGL}
If $X$ is a variety of general type, then there is a proper subvariety $Y \subset X$ containing all the entire curves of $X$.
\end{conjecture}
The Green-Griffiths-Lang Conjecture says that holomorphic images of $\mathbb{C}$ under nonconstant maps do not pass through a general point of $X$. Conjecture \ref{conj-GGL} is well-studied for general hypersurfaces, and it is a natural result to prove on the way to proving Conjecture \ref{conj-Kobayashi}. We provide a new proof of the Kobayashi Conjecture using previous results on the Green-Griffiths-Lang Conjecture.
\begin{theorem}
A general hypersurface in $\mathbb{P}^n$ of degree $d$ admits no nonconstant holomorphic maps from $\mathbb{C}$ for $d \geq d_{2n-3}$, where $d_2 = 286, d_3 = 7316$ and
\[ d_n = \left\lfloor \frac{n^4}{3} (n \log(n \log(24n)))^n \right\rfloor .\]
\end{theorem}
Our proof appears to be substantially simpler than the previous proofs (compare with \cite{Siu, Deng, Brotbeck, D2}), and can be adapted in a straightforward way as others use jet bundles to obtain better bounds for Conjecture \ref{conj-GGL}. Unfortunately, the bound of about $(2n \log(n\log(n)))^{2n+1}$ that we obtain is slightly worse than Demailly's bound of $(en)^{2n+2}$. However, assuming the optimal result on the Green-Griffiths-Lang Conjecture, our technique allows us to prove the conjectured bound of $d \geq 2n-1$ for the Kobayashi Conjecture.
The Kobayashi Conjecture for complements of hypersurfaces has also been studied by several authors \cite{Dar, BD}. Using results of Darondeau \cite{Dar} along with our Grassmannian technique, we are able to prove the Kobayashi Conjecture for complements as well.
\begin{theorem}
If $X$ is a general hypersurface in $\mathbb{P}^n$ of degree at least $d_{2n}$, where $d_n = (5n)^2 n^n$, then $\mathbb{P}^n \setminus X$ is Brody hyperbolic.
\end{theorem}
Our bound of about $100 \cdot 2^n n^{2n+2}$ is slightly worse than Brotbeck and Deng's bound of about $e^3 n^{2n+6}$ \cite{BD}, but our proof is substantially shorter.
Our second application concerns the Chow equivalence of points on very general complete intersections. Chen, Lewis, and Sheng \cite{CLS} make the following conjecture, which is inspired by work of Voisin \cite{voisinChow, V2,V3}.
\begin{conjecture}
\label{conj-CLS}
Let $X \subset \mathbb{P}^n$ be a very general complete intersection of multidegree $(d_1, \dots, d_k)$. Then for every $p \in X$, the space of points of $X$ rationally equivalent to $p$ has dimension at most $2n-k-\sum_{i=1}^k d_i$. If $2n-k-\sum_{i=1}^k d_i < 0$, we understand this to mean that $p$ is equivalent to no other points of $X$.
\end{conjecture}
If this Conjecture holds, then the result is sharp \cite{CLS}. Voisin \cite{voisinChow, V2, V3} proves Conjecture \ref{conj-CLS} for hypersurfaces in the case $2n-d-1 < -1$. Chen, Lewis, and Sheng \cite{CLS} extend the result to $2n-d-1 = -1$, and also prove the analog of Voisin's bound for complete intersections. Both papers use fairly involved Hodge theory arguments. Roitman \cite{R1,R2} proves the $2n-k-\sum_{i=1}^k d_i = n-2$ case. Using Roitman's result, we prove all but the $2n-k-\sum_{i=1}^k d_i = -1$ case of Conjecture \ref{conj-CLS}, and in this case we prove the result holds with the exception of possibly countably many points.
\begin{theorem}
\label{thm-chow}
If $X \subset \mathbb{P}^n$ is a very general complete intersection of multidegree $(d_1, \dots, d_k)$, then no two points of $X$ are rationally Chow equivalent if $2n-k-\sum_{i=1}^k d_i < -1$. If $2n-k-\sum_{i=1}^k d_i = -1$, then the set of points rationally equivalent to another point of $X$ is a countable union of points. If $2n-k-\sum_{i=1}^k d_i \geq 0$, then the space of points of $X$ rationally equivalent to a fixed point $p \in X$ has dimension at most $2n-k-\sum_{i=1}^k d_i$ in $X$.
\end{theorem}
Together with Chen, Lewis and Sheng's result, this completely resolves Conjecture \ref{conj-CLS} in the case of hypersurfaces. Our method appears substantially simpler than the previous work of Voisin \cite{voisinChow, V2, V3} and Chen, Lewis, and Sheng \cite{CLS}, although in the case of hypersurfaces, we do not recover the full strength of Chen, Lewis, and Sheng's result.
The third result relates to Seshadri constants. Let $\epsilon(p,X)$ be the Seshadri constant of $X$ at the point $p$, defined to be the infimum of $\frac{\deg C}{\operatorname{mult}_p C}$ over all curves $C$ in $X$ passing through $p$. Let $\epsilon(X)$ be the Seshadri constant of $X$, defined to be the infimum of the $\epsilon(p,X)$ as $p$ varies over the hypersurface.
\begin{theorem}
\label{thm-seshadri}
Let $r > 0$ be a real number. If for a very general hypersurface $X_0 \subset \mathbb{P}^{2n-1}$ of degree $d$ the Seshadri constant $\epsilon(p,X_0)$ of $X_0$ at a general point $p$ is at least $r$, then for a very general $X \subset \mathbb{P}^n$ of degree $d$, the Seshadri constant $\epsilon(X)$ of $X$ is at least $r$.
\end{theorem}
The layout of the paper is as follows. In Section \ref{sec-technique}, we lay out our general technique, and immediately use it to prove Theorem \ref{thm-seshadri}. In Section \ref{sec-hyperbolicity}, we discuss how to use the results of Section \ref{sec-technique} to prove hyperbolicity results. In Section \ref{sec-cycles}, we discuss how to prove Theorem \ref{thm-chow}.
\subsection*{Acknowledgements}
We would like to thank Xi Chen, Izzet Coskun, Jean-Pierre Demailly, Mihai P\u{a}un, Chris Skalit, and Matthew Woolf for helpful discussions and comments.
\section{The Technique}
\label{sec-technique}
We set some notation. Let $B$ be the moduli space of complete intersections of multidegree $(d_1, \dots, d_k)$ in $\mathbb{P}^{n+k}$ and $\mathcal{U}_{n,\underline{d}} \subset \mathbb{P}^{n+k} \times B$ be the variety of pairs $([p],[X])$ with $[X] \in B$ and $p \in X$. We refer to elements of $\mathcal{U}_{n,\underline{d}}$ as pointed complete intersections. When we talk about the codimension of a countable union of subvarieties of $\mathcal{U}_{n,\underline{d}}$, we mean the minimum of the codimensions of each component.
We need the following result from \cite{RY}.
\begin{proposition}
\label{GrassmanProp}
Let $C \subset \mathbb{G}(r-1,m)$ be a nonempty family of $r-1$-planes of codimension $\epsilon > 0$, and let $B \subset \mathbb{G}(r,m)$ be the space of $r$-planes that contain some $r-1$-plane $c$ with $[c] \in C$. Then $\operatorname{codim}(B \subset \mathbb{G}(r,m)) \leq \epsilon -1$.
\end{proposition}
\begin{proof}
For the reader's convenience, we sketch the proof. Consider the incidence-correspondence $I = \{ ([b],[c]) | \: [b] \in B, [c] \in C \} \subset \mathbb{G}(r,m) \times \mathbb{G}(r-1,m)$. The fibers of $\pi_2$ over $C \subset \mathbb{G}(r-1,m)$ are all $\mathbb{P}^{m-r}$'s, while for a general $[b] \in B$, the fiber $\pi_1^{-1}([b])$ has codimension at least $1$ in the $\mathbb{P}^r$ of $r-1$-planes contained in $b$ (since otherwise it can be shown that $C = \mathbb{G}(r-1,m)$). The result follows by a dimension count.
\end{proof}
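The dimension count at the end of the sketch can be made explicit; a sketch, using the conventions $\dim \mathbb{G}(r-1,m)=r(m-r+1)$ and $\dim \mathbb{G}(r,m)=(r+1)(m-r)$:
\begin{align*}
\dim I &= \dim C + (m-r) = r(m-r+1)-\epsilon + (m-r), \\
\dim B &\ge \dim I - (r-1) = (r+1)(m-r) - (\epsilon-1),
\end{align*}
so indeed $\operatorname{codim}(B \subset \mathbb{G}(r,m)) \le \epsilon-1$.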
We need a few other notions for the proof. A parameterized $r$-plane in $\mathbb{P}^m$ is a degree one injective map $\Lambda: \mathbb{P}^r \to \mathbb{P}^m$. Let $G_{r,m,p}$ be the space of parameterized $r$-planes in $\mathbb{P}^m$ whose images pass through $p$. If $(p,X)$ is a pointed hypersurface in $\mathbb{P}^m$, a parameterized $r$-plane section of $(p,X)$ is a pair $(\Lambda^{-1}(p), \Lambda^{-1}(X)) =: \Lambda^* (p,X)$, where $\Lambda: \mathbb{P}^r \to \mathbb{P}^m$ is a parameterized $r$-plane whose image does not lie entirely in $X$. We say that $\Lambda: \mathbb{P}^r \to \mathbb{P}^m$ contains $\Lambda': \mathbb{P}^{r-1} \to \mathbb{P}^m$ if $\Lambda(\mathbb{P}^r)$ contains $\Lambda'(\mathbb{P}^{r-1})$.
\begin{corollary}
\label{GrassmanCor}
If $C \subset G_{r-1,m,p}$ is a nonempty subvariety of codimension $\epsilon > 0$ and $B \subset G_{r,m,p}$ is the subvariety of parameterized $r$-planes that contain some parameterized $r-1$-plane $[c] \in C$, then $\operatorname{codim}(B \subset G_{r,m,p}) \leq \epsilon -1$.
\end{corollary}
Let $\mathcal{X}_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}}$ be an open subset. For instance, $\mathcal{X}_{n,\underline{d}}$ might be equal to $\mathcal{U}_{n,\underline{d}}$ or it might be the universal complete intersection over the space of smooth complete intersections. Our main technical tool is the following.
\begin{theorem}
\label{thm-technicaltool}
Suppose we have an integer $m$ and, for each $n \leq n_0$, a countable union of locally closed subvarieties $Z_{n,\underline{d}} \subset \mathcal{X}_{n,\underline{d}}$ satisfying: \begin{enumerate}
\item If $(p,X) \in Z_{n,\underline{d}}$ is a parameterized hyperplane section of $(p',X')$, then $(p',X') \in Z_{n+1,\underline{d}}$.
\item $Z_{m-1,\underline{d}}$ has codimension at least $1$ in $\mathcal{X}_{m-1,\underline{d}}$.
\end{enumerate}
Then the codimension of $Z_{m-c,\underline{d}}$ in $\mathcal{X}_{m-c,\underline{d}}$ is at least $c$.
\end{theorem}
\begin{proof}[Proof of Theorem \ref{thm-technicaltool}]
We adopt the method from \cite{RY}. We prove that for a very general point $(p_0,X_0)$ of a component of $Z_{m-c,\underline{d}}$, there is a variety $\mathcal{F}_{m-c}$ and a map $\phi: \mathcal{F}_{m-c} \to \mathcal{U}_{m-c,\underline{d}}$ with $(p_0,X_0) \in \phi(\mathcal{F}_{m-c})$ and $\operatorname{codim}(\phi^{-1}(Z_{m-c,\underline{d}}) \subset \mathcal{F}_{m-c}) \geq c$. This suffices to prove the result.
So, let $(p_0, X_0)$ be a very general point of a component of $Z_{m-c,\underline{d}}$, and let $(p_1,X_1) \in \mathcal{X}_{m-1,\underline{d}}$ be very general, so that $(p_1,X_1)$ is not in the closure of any component of $Z_{m-1,\underline{d}}$ by hypothesis 2. Choose $(p,Y) \in \mathcal{X}_{N,\underline{d}}$ for some sufficiently large $N$ such that $(p_0,X_0)$ and $(p_1,X_1)$ are parameterized linear sections of $(p,Y)$. Then for all $n < N$, let $\mathcal{F}_{n}$ be the space of parameterized $n$-planes in $\mathbb{P}^N$ passing through $p$ such that for $\Lambda \in \mathcal{F}_{n}$, $\Lambda^*(p,Y)$ lies in $\mathcal{X}_{n,\underline{d}}$. In particular, $\mathcal{F}_n$ is an open subset of $G_{n,N,p}$. Let $\phi: \mathcal{F}_{n} \to \mathcal{X}_{n,\underline{d}}$ be the map sending $\Lambda: \mathbb{P}^n \to \mathbb{P}^N$ to $\Lambda^*(p,Y)$.
We prove that $\operatorname{codim}(\phi^{-1}(Z_{m-c,\underline{d}}) \subset \mathcal{F}_{m-c}) \geq c$ by induction on $c$. For the $c=1$ case, we see by construction that $\phi^{-1}(Z_{m-1,\underline{d}})$ has codimension at least $1$ in $\mathcal{F}_{m-1}$, since $(p_1,X_1)$ is a parameterized $m-1$-plane section of $(p,Y)$ but is not in the closure of any component of $Z_{m-1,\underline{d}}$. Now suppose we know that $\operatorname{codim}(\phi^{-1}(Z_{m-c,\underline{d}}) \subset \mathcal{F}_{m-c}) \geq c$. We use Corollary \ref{GrassmanCor} with $C$ equal to $\phi^{-1}(Z_{m-c-1,\underline{d}})$. By hypothesis 1, we see that $B$ is contained in $\phi^{-1}(Z_{m-c,\underline{d}})$. It follows from this that
\[ c \leq \operatorname{codim}( \phi^{-1}(Z_{m-c,\underline{d}}) \subset \mathcal{F}_{m-c}) \leq \operatorname{codim}(B \subset \mathcal{F}_{m-c}) \leq \operatorname{codim}(C \subset \mathcal{F}_{m-c-1}) -1 . \]
Rearranging, we see that
\[ \operatorname{codim}( \phi^{-1}(Z_{m-c-1,\underline{d}}) \subset \mathcal{F}_{m-c-1}) = \operatorname{codim}(C \subset \mathcal{F}_{m-c-1}) \geq c+1 . \]
The result follows.
\end{proof}
As an immediate application, we prove Theorem \ref{thm-seshadri}.
\begin{proof}[Proof of Theorem \ref{thm-seshadri}]
Let $r$ be given, and let $Z_{m,d} \subset \mathcal{U}_{m,d}$ be the set of pairs $(p,X)$ with $\epsilon(p,X) < r$. We apply Theorem \ref{thm-technicaltool} to $Z_{m,d}$. Each $Z_{m,d}$ is a countable union of algebraic varieties, and by hypothesis, $Z_{2n-1,d} \subset \mathcal{U}_{2n-1,d}$ has codimension at least $1$. Now suppose that $(p_0,X_0) \in Z_{m,d}$. Then there is some curve $C$ in $X_0$ with $\frac{\deg C}{\operatorname{mult}_{p_0} C} < r$. If $(p_0,X_0)$ is a parameterized hyperplane section of $(p,X)$, then $C$ also lies in $X$, so the Seshadri constant of $X$ at $p$ is at most $\frac{\deg C}{\operatorname{mult}_{p_0} C} < r$. This shows that the $Z_{m,d}$ satisfy the conditions of Theorem \ref{thm-technicaltool}, which gives that $Z_{n,d} \subset \mathcal{U}_{n,d}$ has codimension at least $n$. For dimension reasons, this means that $Z_{n,d}$ cannot dominate the space of hypersurfaces, so the result follows.
\end{proof}
\section{Hyperbolicity}
\label{sec-hyperbolicity}
Let $\mathcal{X}_{n,d}$ be the universal hypersurface over the open subset $U$ of the moduli space of degree $d$ hypersurfaces in $\mathbb{P}^n$ consisting of the smooth hypersurfaces. Many authors have developed techniques for restricting the entire curves contained in a fiber of the map $\pi_2: \mathcal{X}_{n,d} \to U$; see the article of Demailly for a detailed description of some of this work \cite{D2}. For a variety $X$, let $J_k(X)$ be the space of $k$-jets of $X$, with evaluation map $\operatorname{ev}: J_k(X) \to X$. If $X \subset \mathbb{P}^n$ is a smooth degree $d$ hypersurface, there is a vector bundle $E_{k,m}^{GG} T_X^*$ whose sections act on $J_k(X)$. Global sections of $E_{k,m}^{GG} T_X^* \otimes \mathcal{O}(-H)$ vanish on the $k$-jets of entire curves. This means that sections of $E_{k,m}^{GG} T_X^* \otimes \mathcal{O}(-H)$ cut out a closed subvariety $S'_{k,m}(X) \subset J_k(X)$ such that any entire curve is contained in $\operatorname{ev}(S'_{k,m}(X))$. In fact, it can be shown that any entire curve is contained in the closure of $\operatorname{ev}(S_{k,m}(X))$, where $S_{k,m}(X) \subset J_k(X)$ is $S'_{k,m}(X)$ minus the space of singular $k$-jets.
The construction is functorial. In particular, if $V$ is the relative tangent bundle of the map $\pi_2$, there is a vector bundle $E_{k,m}^{GG} V^*$ whose restriction to each fiber of $\pi_2$ is $E_{k,m}^{GG} T_{X}^*$. Let $\mathcal{Y}_{n,d} \subset \mathcal{X}_{n,d}$ be the locus of $(p,X) \in \mathcal{X}_{n,d}$ such that $p \in \operatorname{ev}(S_{k,m}(X))$. Then by functoriality, $\mathcal{Y}_{n,d}$ is a finite union of locally closed varieties.
\begin{theorem}
\label{thm-Hyperbol}
Suppose that $\mathcal{Y}_{r-1,d} \subset \mathcal{X}_{r-1,d}$ has codimension at least 1. Then $\mathcal{Y}_{r-c,d} \subset \mathcal{X}_{r-c,d}$ has codimension at least $c$. In particular, if $\mathcal{Y}_{2n-3, d}$ has codimension at least $1$ in $\mathcal{X}_{2n-3,d}$ and $d \geq 2n-1$, then a very general $X \subset \mathbb{P}^n$ of degree $d$ is hyperbolic.
\end{theorem}
\begin{proof}
We check that the $\mathcal{Y}_{n,d}$ satisfy both conditions of Theorem \ref{thm-technicaltool}. Condition 2 holds by hypothesis. Condition 1 follows from the functoriality of Demailly's construction. Namely, if $(p,X_0)$ is a parameterized linear section of $(p,X)$, then the natural map $X_0 \to X$ induces a pullback map on sections
\[ H^0(E_{k,m}^{GG} T_X^* \otimes \mathcal{O}(-H)) \to H^0(E_{k,m}^{GG} T_{X_0}^* \otimes \mathcal{O}(-H)) , \]
compatible with the natural inclusion of jets $J_k(X_0) \to J_k(X)$. In particular, if some section $s$ of $H^0(E_{k,m}^{GG} T_X^* \otimes \mathcal{O}(-H))$ takes a nonzero value on a jet $\alpha(j)$, where $j \in J_k(X_0)$ and $\alpha:J_k(X_0) \to J_k(X)$ is the natural inclusion, then the restriction of $s$ to $X_0$ takes a nonzero value on the original jet $j \in J_k(X_0)$. Thus, if $X_0$ has a nonsingular $k$-jet at $p$ which is annihilated by every section in $H^0(E_{k,m}^{GG} T_{X_0}^* \otimes \mathcal{O}(-H))$, $X$ has such a $k$-jet as well.
To see the second statement, observe that by Theorem \ref{thm-technicaltool}, $\mathcal{Y}_{n,d}$ has codimension at least $2n-3-n+1 = n-2$ in $\mathcal{X}_{n,d}$. It follows that for a very general $X$ of degree $d$ in $\mathbb{P}^n$, the image of any entire curve is contained in an algebraic curve. Since $d \geq 2n-1$, by a theorem of Voisin \cite{V2, V3}, any algebraic curve in $X$ is of general type. The result follows.
\end{proof}
The current best bound for the Green-Griffiths-Lang Conjecture is from Demailly \cite{D1,D2}. The version we use comes out of Demailly's proof.
\begin{theorem}[\cite{D2}, Section 10]
If $k$ and $m$ are sufficiently large, then $\mathcal{Y}_{n,d} \subset \mathcal{X}_{n,d}$ has codimension at least $1$ for $d \geq d_n$, where $d_2 = 286$, $d_3 = 7316$ and
\[ d_n = \left\lfloor \frac{n^4}{3} (n \log(n \log(24n)))^n \right\rfloor .\]
\end{theorem}
Using this bound, we obtain the following.
\begin{corollary}
The Kobayashi Conjecture holds for hypersurfaces in $\mathbb{P}^n$ when $d \geq d_{2n-3}$, where $d_2 = 286$, $d_3 = 7316$ and
\[ d_n = \left\lfloor \frac{n^4}{3} (n \log(n \log(24n)))^n \right\rfloor .\]
\end{corollary}
This bound of about $(2n \log (n\log n))^{2n+1}$ is slightly worse than the best current bound for the Kobayashi Conjecture from \cite{D2}, which is about $(en)^{2n+2}$. However, our technique is strong enough to prove the optimal bound for the Kobayashi Conjecture, provided one could prove the optimal result for the Green-Griffiths-Lang Conjecture.
\begin{corollary}
If $\mathcal{Y}_{d-2,d}$ has codimension at least $1$ in $\mathcal{X}_{d-2,d}$ (as we would expect from the Green-Griffiths-Lang Conjecture), then a very general hypersurface of degree $d \geq 2n-1$ in $\mathbb{P}^n$ is hyperbolic.
\end{corollary}
\begin{proof}
We apply Theorem \ref{thm-Hyperbol}. We know that if $\mathcal{Y}_{2n-3, d}$ is codimension at least $1$ in $\mathcal{X}_{2n-3,d}$, then the Kobayashi Conjecture holds for hypersurfaces in $\mathbb{P}^n$ of degree $d$. We apply this result with $d = 2n-1$.
\end{proof}
Work has also been done on the hyperbolicity of complements of hypersurfaces in $\mathbb{P}^n$, and there are similar jet bundle techniques in this case. Given a variety $Z$, an ample line bundle $A$, a subsheaf $V \subset T_Z$ and a subvariety $X \subset Z$, one can construct the vector bundles $E_{k,m} V^*(\log X)$, whose sections act on $k$-jets of $Z \setminus X$. It can be shown that any section of $H^0(E_{k,m} V^*(\log X) \otimes A^{*})$ must vanish on the $k$-jet of any entire curve in $Z \setminus X$. Thus, sections of $H^0(E_{k,m} V^*(\log X) \otimes A^{*})$ cut out a subvariety $S_{k,m}$ of the locus of nonsingular $k$-jets on $Z \setminus X$ such that any entire curve lies in the closure of $\operatorname{ev}(S_{k,m})$. Darondeau \cite{Dar} studies these objects for hypersurfaces in $\mathbb{P}^n$; namely, he proves the following.
\begin{theorem}[Darondeau]
\label{thm-Darondeau}
Let $d \geq (5n)^2 n^n$, let $U_{n,d}$ be the space of smooth degree $d$ hypersurfaces in $\mathbb{P}^n$ and consider the space $\mathbb{P}^n \times U_{n,d}$, with divisor $\mathcal{X}_{n,d}$ corresponding to the universal hypersurface. Let $V = T_{\pi_2}$ be the relative tangent space of projection of $\mathbb{P}^n \times U_{n,d}$ onto $U_{n,d}$, and consider the locus $S_{k,m}$ cut out by sections in $H^0(E_{k,m} V^*(\log \mathcal{X}_{n,d}) \otimes \mathcal{O}(-H))$. Then $\operatorname{ev}(S_{k,m})$ is codimension at least $2$ in $\mathbb{P}^n \times U_{n,d}$.
\end{theorem}
Using our technique, we obtain the following effective form of the Kobayashi Conjecture.
\begin{theorem}
Let $Z_{n,d} \subset (\mathbb{P}^n \times U_{n,d}) \setminus \mathcal{X}_{n,d}$ be the locus of pairs $(p,X)$ where $p \in (\mathbb{P}^n \setminus X) \cap \operatorname{ev}(S_{k,m})$. Then if $Z_{r-1,d}$ has codimension at least $1$ in $(\mathbb{P}^{r-1} \times U_{r-1,d}) \setminus \mathcal{X}_{r-1,d}$, we have that $Z_{r-c,d}$ has codimension at least $c$ in $(\mathbb{P}^{r-c} \times U_{r-c,d}) \setminus \mathcal{X}_{r-c,d}$.
\end{theorem}
\begin{proof}
The proof is very similar in spirit to that of Theorem \ref{thm-technicaltool}, but we give it here for completeness. Let $(p, X_0)$ be a general point of a component of $Z_{r-c,d}$. We will find a family $\phi: \mathcal{F}_{r-c} \to (\mathbb{P}^{r-c} \times U_{r-c,d}) \setminus \mathcal{X}_{r-c,d}$ with $\phi^{-1}(Z_{r-c,d})$ having codimension at least $c$ in $\mathcal{F}_{r-c}$.
Let $(p, X_1) \in (\mathbb{P}^{r-1} \times U_{r-1,d}) \setminus \mathcal{X}_{r-1,d}$ be very general, so that $(p,X_1)$ is not in the closure of any component of $Z_{r-1,d}$. Let $(p,Y)$ be a hypersurface in some high-dimensional projective space $\mathbb{P}^N$ together with a point $p \in \mathbb{P}^N \setminus Y$ such that $(p,X_0)$ and $(p,X_1)$ are both parameterized linear sections of $(p,Y)$. Let $\mathcal{F}_{r-c}$ be the space of parameterized $(r-c)$-planes in $\mathbb{P}^N$ passing through $p$. Then we have a natural map $\phi: \mathcal{F}_{r-c} \to (\mathbb{P}^{r-c} \times U_{r-c,d}) \setminus \mathcal{X}_{r-c,d}$ sending the parameterized $(r-c)$-plane $\Lambda$ to $\Lambda^* (p,Y)$. By construction, the image of $\phi$ contains the point $(p,X_0)$, and since $(p,X_1)$ is very general, $\phi^{-1}(Z_{r-1,d})$ has codimension at least $1$ in $\mathcal{F}_{r-1}$. Observe that if $(p,Y_0) \in Z_{r-c-1,d}$ is a linear section of $(p,Y_1)$, then $(p,Y_1) \in Z_{r-c,d}$: if there is a nonsingular jet $j$ at $p$ on which all the sections of $E_{k,m} T_{\mathbb{P}^{r-c-1}}^*(\log Y_0) \otimes \mathcal{O}(-H)$ vanish, then certainly all the sections of $E_{k,m} T_{\mathbb{P}^{r-c}}^*(\log Y_1) \otimes \mathcal{O}(-H)$ vanish on $j$. By repeated application of Corollary \ref{GrassmanCor}, we see that the codimension of $\phi^{-1}(Z_{r-c,d})$ in $\mathcal{F}_{r-c}$ is at least $c$. This concludes the proof.
\end{proof}
It follows that if $Z_{n,d} \subset (\mathbb{P}^n \times U_{n,d}) \setminus \mathcal{X}_{n,d}$ has codimension at least $1$ for $d \geq d_n$, then a very general hypersurface complement in $\mathbb{P}^n$ is Brody hyperbolic when $d \geq d_{2n-1}$. Using this together with Darondeau's result, we obtain the following corollary, weakening the bound a bit for brevity.
\begin{corollary}
Let $d \geq (10n)^2 (2n)^{2n}$. Then if $X \subset \mathbb{P}^n$ is very general, the complement $\mathbb{P}^n \setminus X$ is Brody hyperbolic.
\end{corollary}
\section{$0$-cycles}
\label{sec-cycles}
Let $R_{\mathbb{P}^1, X, p} = \{ q \in X | Nq \sim Np \text{ for some integer $N$} \}$, where $\sim$ denotes Chow equivalence. The goal of this section is to prove all but the $2n - k - \sum_{i=1}^k d_i = -1$ case of the following conjecture of Chen, Lewis and Sheng \cite{CLS}.
\begin{conjecture}
\label{mainConj}
Let $X \subset \mathbb{P}^{n}$ be a very general complete intersection of multidegree $(d_1, \dots, d_k)$. Then for every $p \in X$, $\operatorname{dim} R_{\mathbb{P}^1, X, p} \leq 2n -k - \sum_{i=1}^k d_i$.
\end{conjecture}
Here, we adopt the convention that $\operatorname{dim} R_{\mathbb{P}^1, X, p}$ is negative if $R_{\mathbb{P}^1, X, p} = \{p \}$. Together with the main result of \cite{CLS}, this completely resolves Conjecture \ref{mainConj} in the case of hypersurfaces.
Chen, Lewis and Sheng consider the more general notion of $\Gamma$-equivalence, although we are unable to prove the $\Gamma$-equivalence version here. The special case $\sum_i d_i = n+1$ is a theorem of Roitman \cite{R1, R2}, and the case $2n -k - \sum_{i=1}^k d_i \leq -2$ is a theorem of Chen, Lewis and Sheng \cite{CLS}, building on work of Voisin \cite{voisinChow, V2, V3}, who proved the result for hypersurfaces only. Chen, Lewis and Sheng also prove Conjecture \ref{mainConj} for hypersurfaces and for arbitrary $\Gamma$ in the boundary case $2n - k - \sum_{i=1}^k d_i = -1$ in \cite{CLS}. This boundary case appears to be the most difficult, and it is the only one we cannot completely resolve with our technique.
We provide an independent proof of all but the $2n - k - \sum_{i=1}^k d_i = -1$ case of Conjecture \ref{mainConj}. Aside from Roitman's result, this is the first result we are aware of addressing the case $2n - k - \sum_{i=1}^k d_i \geq 0$. We rely on the result of Roitman in our proof, but not the results of Voisin \cite{voisinChow, V2} or Chen, Lewis, and Sheng \cite{CLS}.
Let $E_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}}$ be the set of $(p,X)$ such that $R_{\mathbb{P}^1, X, p}$ has dimension at least $1$. Let $G_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}}$ be the set of $(p,X)$ such that $R_{\mathbb{P}^1,X,p}$ is not equal to $\{p\}$. Both $E_{n,\underline{d}}$ and $G_{n,\underline{d}}$ are countable unions of closed subvarieties of $\mathcal{U}_{n,\underline{d}}$. When we talk about the codimension of $E_{n,\underline{d}}$ or $G_{n,\underline{d}}$ in $\mathcal{U}_{n,\underline{d}}$, we mean the minimum of the codimensions of each component. We prove Conjecture \ref{mainConj} by proving the following theorem.
\begin{theorem}
\label{mainThm}
The codimension of $E_{n,\underline{d}}$ in $\mathcal{U}_{n,\underline{d}}$ is at least $-n + \sum_i d_i$ and the codimension of $G_{n,\underline{d}}$ in $\mathcal{U}_{n,\underline{d}}$ is at least $-n-1 + \sum_i d_i$.
\end{theorem}
\begin{corollary}
Conjecture \ref{mainConj} holds for $2n- k - \sum_{i=1}^k d_i \neq -1$. In the special case $2n-k- \sum_{i=1}^k d_i = -1$, the space of $p \in X$ Chow-equivalent to some other point of $X$ has dimension $0$ (i.e., is a countable union of points) but might not be empty as Conjecture \ref{mainConj} predicts.
\end{corollary}
\begin{proof}
First we consider the case $2n - k - \sum_i d_i \geq 0$. Let $\pi_1: \mathcal{U}_{n,\underline{d}} \to B$ be the projection map. If $\pi_1|_{E_{n,\underline{d}}}$ is not dominant, then the result holds trivially. Thus, we may assume that the very general fiber of $\pi_1|_{E_{n,\underline{d}}}$ has dimension $n - k - \operatorname{codim}(E_{n,\underline{d}} \subset \mathcal{U}_{n,\underline{d}})$. Given the bound on $E_{n,\underline{d}}$ from Theorem \ref{mainThm}, the space of points $p$ of $X$ with positive-dimensional $R_{\mathbb{P}^1,X,p}$ has dimension at most $2n - k - \sum_i d_i$, which implies Conjecture \ref{mainConj} in the case $2n - k - \sum_i d_i \geq 0$.
Now we consider the situation for $2n - k - \sum_i d_i \leq -1$. Conjecture \ref{mainConj} states that $\pi_1|_{G_{n,\underline{d}}}$ is not dominant in this range. By Theorem \ref{mainThm}, the dimension of $G_{n,\underline{d}}$ is less than the dimension of $B$ if $2n - k - \sum_i d_i \leq -2$, proving Conjecture \ref{mainConj}. In the case $2n - k-\sum_i d_i = -1$, the dimension of $G_{n,\underline{d}}$ is at most that of $B$, which shows that for very general $X$ the locus of points of $X$ equivalent to another point of $X$ has dimension $0$. This proves the result.
\end{proof}
Our technique would prove Conjecture \ref{mainConj} in all cases if we knew that $G_{n,\underline{d}}$ had codimension $-n + \sum_{i=1}^k d_i$ in $\mathcal{U}_{n,\underline{d}}$. However, this is not true for Calabi-Yau hypersurfaces.
\begin{proposition}
A general point of a very general Calabi-Yau hypersurface $X$ is rationally equivalent to at least one other point of the hypersurface.
\end{proposition}
\begin{proof}
Let $X$ be a very general Calabi-Yau hypersurface. We claim that a general point of $X$ is Chow equivalent to another point of $X$. To see this, observe that any point $p$ of $X$ has finitely many lines meeting $X$ to order $d-1$ at $p$, and each such line meets $X$ in a single other point. Moreover, every point of $X$ lies on a line that meets $X$ at another point of $X$ with multiplicity $d-1$. Thus, let $q_1$ be a general point of $X$, let $\ell_1$ be a line through $q_1$ meeting $X$ at a second point $p$ to order $d-1$, and let $\ell_2$ be a different line meeting $X$ at $p$ to order $d-1$. Let $q_2$ be the residual intersection of $\ell_2$ with $X$. Then $q_1 \sim q_2$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{mainThm}]
First consider the bound on $E_{n,\underline{d}}$. By Roitman's Theorem plus the fact that $R_{\mathbb{P}^1,X,p}$ is a countable union of closed varieties, we see that $E_{-1+\sum_{i=1}^k d_i,\underline{d}}$ has codimension at least one in $\mathcal{U}_{-1 + \sum_{i=1}^k d_i,\underline{d}}$. We note that if $p \sim q$ as points of $Y$, and $Y \subset Y'$, then $p \sim q$ as points of $Y'$ as well. The rest of the result follows from Theorem \ref{thm-technicaltool} using $Z_{n,\underline{d}} = E_{n,\underline{d}}$.
Now consider $G_{n,\underline{d}}$. From Roitman's theorem, it follows that a very general point of a Calabi-Yau complete intersection $X$ is equivalent to at most countably many other points of $X$. Thus, a very general hyperplane section of such an $X$ satisfies the property that the very general point is equivalent to no other points of $X$. From this, we see that $G_{-2+\sum_{i=1}^k d_i,\underline{d}}$ has codimension at least $1$ in $\mathcal{U}_{-2+\sum_{i=1}^k d_i,\underline{d}}$. Together with Theorem \ref{thm-technicaltool}, this implies the result.
\end{proof}
\bibliographystyle{plain}
\section{Introduction}
Over the last decade there has emerged growing interest in the so-called warm dense matter (WDM), which is of key importance for the description of, e.g., astrophysical systems \cite{knudson,militzer}, laser-excited solids \cite{ernst}, and inertial confinement fusion targets \cite{nora,schmit,hurricane3}.
The WDM regime is characterized by the simultaneous occurrence of strong (moderate) correlations of ions (electrons), thermal effects, and quantum effects of the electrons. In dimensionless units, typical parameters are the Brueckner parameter $r_s=\overline{r}/a_\textnormal{B}$ and the reduced temperature $\theta=k_\textnormal{B}T/E_\textnormal{F}$, both being of the order of unity (more generally, in the range $0.1 \dots 10$). Here, $\overline{r}$ and $a_\textnormal{B}$ denote the mean interparticle distance and the Bohr radius, respectively. A third relevant parameter is the classical coupling parameter of the ionic component, $\Gamma_i = Z_i^2e^2/(\overline{r}k_\textnormal{B}T)$, which is often larger than unity, indicating that the ionic component is far from an ideal gas.
This makes the theoretical description of this peculiar state of matter particularly challenging, as there is no small parameter to perform an expansion around.
In the ground state, there exists a large toolkit of approaches that allow for the accurate description of manifold physical systems, arguably the most successful of which is Kohn-Sham density functional theory (DFT), e.g.~\cite{ks,dft_review}.
The basic idea of DFT is to map the complicated and computationally demanding quantum many-body problem onto an effective single-particle problem. This would be exact if the correct exchange-correlation functional of the system of interest were available, which is, of course, not the case. In practice, therefore, one has to use an approximation. The foundation of the great success of DFT has been the local density approximation (LDA), i.e., the usage of the exchange-correlation energy $E_{xc}$ of the uniform electron gas (UEG) with the same density as the more complicated system of interest. Accurate data for $E_{xc}$ of the UEG were obtained by Ceperley and Alder \cite{alder} using a Quantum Monte Carlo (QMC) method, from which Perdew and Zunger \cite{perdew} constructed a simple parametrization with respect to density, $E_{xc}(r_s)$, that is still used to this day.
However, the accurate description of warm dense matter requires extending DFT to finite temperature. This was realized long ago by Mermin \cite{mermin}, who used a superposition of excited states weighted with their thermal occupation probabilities. A strict approach to the thermodynamic properties of this system also requires an appropriate finite-temperature extension of the LDA, in particular the replacement of the ground state energy functionals by free energies, i.e., $E \to f = E - TS$. This means that a finite-temperature version of the LDA requires an accurate parametrization of the {\em exchange-correlation free energy} with respect to temperature and density \cite{karasiev2,dharma, dharma_cpp15, gga,burke,burke2}, i.e., $f_{xc}(r_s,\theta)$, even though in some cases the entropic correction may be small. This seemingly benign task, however, turns out to be far from trivial, because accurate data for the free energy are much harder to obtain than the corresponding ground state results. While reliable QMC data for the ground state have been available for a long time, the notorious fermion sign problem \cite{loh,troyer} prevented, until recently \cite{tim2,tim_prl,dornheim,dornheim2,groth,dornheim3,dornheim_cpp,malone,malone2,dornheim_prl,dornheim_pop_17}, reliable QMC simulations in the warm dense regime.
Therefore, over the past four decades, many theoretical approaches to $f_{xc}(r_s,\theta)$ have been developed, leading to a variety of parametrizations; for an overview of early works, see, e.g., \cite{kraeft-book, dewitt-tribute}. Some of them have gained high popularity and have been successfully applied in many fields, even though their accuracy has remained poorly tested. It is the purpose of the present article to provide such a quantitative comparison of earlier models with new simulation results.
In Sec.~\ref{sec:free}, we introduce a selection of such functions. First, we analyze the purely analytical expression presented by Ebeling and co-workers, e.g. \cite{ebeling1}. Next, we study functional fits to linear response data based on static local field correction schemes that were suggested by Singwi, Tosi, Land, and Sj\"olander (STLS) \cite{stls_original} (Sec.~\ref{sub:ichimaru}) and Vashishta and Singwi (VS) \cite{vs_original} (Sec.~\ref{sub:vs}). As a fourth example we consider the quantum-classical mapping developed by Dharma-wardana and Perrot (PDW) \cite{pdw_prl,pdw} (Sec.~\ref{sub:pdw}).
Finally, we consider the recent parametrization by Karasiev \textit{et al.}~(KSDT) \cite{karasiev} (Sec.~\ref{sub:ksdt}), which is based on the QMC data by Brown \textit{et al.}~that became available recently~\cite{brown}. However, those data have limited accuracy due to (i) the usage of the fixed node approximation \cite{node} and (ii) an inappropriate finite-size correction (see \cite{dornheim_prl}), giving rise to systematic errors in the free energy results, as we will show below.
In Sec.~\ref{sec:results}, we compare all aforementioned parametrizations of $f_{xc}$ to the new, accurate QMC data by Dornheim \textit{et al.}~\cite{dornheim_prl}, which are free from any systematic bias and, hence, allow us to gauge the accuracy of these models. Particular emphasis is placed on the warm dense matter regime.
\section{\label{sec:free}Free Energy Parametrizations}
\subsection{\label{sub:ebeling}Ebeling's Pad\'e formulae}
The idea to construct an analytical formula for the thermodynamic quantities that connects known analytical limits via a smooth Pad\'e approximant is due to Ebeling, Kraeft, Richert and co-workers \cite{ebeling_richert81, ebeling_ebeling_richert82, ebeling_richert85_1, ebeling_richert85_2}. These approximations were quite influential in the description of nonideal plasmas and electron-hole plasmas in the 1980s and 1990s, and some of them received a substantial number of citations.
These approximations were improved continuously in the following years; we therefore only discuss the more recent versions, cf.~\cite{ebeling1,ebeling2} and references therein.
Ebeling {\em et al.} used Rydberg atomic units and introduced a reduced thermal density
\begin{equation}
\overline{n} = n\Lambda^3 = 6\sqrt{\pi}r_s^{-3}\tau^{-3/2} \quad ,
\end{equation}
with the usual thermal wavelength $\Lambda$ and $\tau=k_\textnormal{B}T/\textnormal{Ry}$ being the temperature in energy units.
The Pad\'e approximation for $f_{xc}$ then reads \cite{ebeling1}
\begin{eqnarray}
f_{xc}^\textnormal{Ebeling,Ry}(r_s,\tau) = - \frac{ f_0(\tau) \overline{n}^{1/2} + f_3(\tau) \overline{n} + f_2 \overline{n}^2\epsilon^\textnormal{Ry}(r_s) }{ 1 + f_1(\tau) \overline{n}^{1/2} + f_2 \overline{n}^2 } \quad , \label{eq:ebeling_Ry}
\end{eqnarray}
with the coefficients
\begin{eqnarray}
f_0(\tau) = \frac{2}{3}\left(\frac{\tau}{\pi}\right)^{1/4}\ ,\quad f_1(\tau) = \frac{1}{8f_0(\tau)}\sqrt{2}(1+\textnormal{log}(2))\ ,\quad f_2 = 3 \ ,\quad f_3(\tau) = \frac{1}{4}\left( \frac{\tau}{\pi}\right)^{1/2}\ ,
\end{eqnarray}
and the ground state parametrization for the exchange correlation energy
\begin{equation}
\epsilon^\textnormal{Ry}(r_s) = \frac{0.9163}{r_s} + 0.1244\ \textnormal{log}\left( 1 + \frac{ 2.117 r_s^{-1/2}}{1+0.3008\sqrt{r_s}}\right) \quad .
\end{equation}
To achieve a better comparability with the other formulas discussed below, we re-express Eq.~(\ref{eq:ebeling_Ry}) in Hartree atomic units as a function of $r_s$ and the reduced temperature $\theta=k_\textnormal{B}T/E_\textnormal{F}$:
\begin{eqnarray}
\label{eq:ebeling}f_{xc}^\textnormal{Ebeling,Ha}(r_s,\theta) &=& -\frac{1}{2} \frac{ Ar_s^{-1/2}\theta^{-1/2} + B r_s^{-1}\theta^{-1} + C\theta^{-3}\epsilon^\textnormal{Ry}(r_s) }{ 1 + D\theta^{-1}r_s^{1/2}+ C \theta^{-3}} \quad , \quad \textnormal{with} \\
A &=& \frac{2}{3\sqrt{\pi}}\left(\frac{8}{3}\right)^{1/2}\left(\frac{4}{9\pi}\right)^{-1/6}\ , \quad B = \frac{2}{3\pi}\left( \frac{4}{9\pi} \right)^{-1/3}\ , \quad C=\frac{64}{3\pi}\ , \quad \\ \nonumber D &=& \frac{ (1+\textnormal{log}(2))\sqrt{3} }{4}\left( \frac{4}{9\pi}\right)^{1/6}\ .
\end{eqnarray}
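As a consistency check on this unit conversion, both forms of the Pad\'e formula can be evaluated numerically. The following minimal Python sketch (the function names are ours; it assumes the standard relation $\tau = \theta\,(9\pi/4)^{2/3} r_s^{-2}$, i.e., $E_\textnormal{F} = (9\pi/4)^{2/3} r_s^{-2}$ in Rydberg units) confirms that Eq.~(\ref{eq:ebeling}) reproduces one half of Eq.~(\ref{eq:ebeling_Ry}):

```python
import math

def eps_ry(rs):
    """Ground-state exchange-correlation energy parametrization (Rydberg units)."""
    return 0.9163 / rs + 0.1244 * math.log(
        1.0 + 2.117 * rs**-0.5 / (1.0 + 0.3008 * math.sqrt(rs)))

def fxc_ry(rs, tau):
    """Pade formula in Rydberg units, Eq. (eq:ebeling_Ry), in variables (nbar, tau)."""
    nbar = 6.0 * math.sqrt(math.pi) * rs**-3 * tau**-1.5
    f0 = (2.0 / 3.0) * (tau / math.pi) ** 0.25
    f1 = math.sqrt(2.0) * (1.0 + math.log(2.0)) / (8.0 * f0)
    f2 = 3.0
    f3 = 0.25 * (tau / math.pi) ** 0.5
    num = f0 * math.sqrt(nbar) + f3 * nbar + f2 * nbar**2 * eps_ry(rs)
    den = 1.0 + f1 * math.sqrt(nbar) + f2 * nbar**2
    return -num / den

def fxc_ha(rs, theta):
    """The same formula in Hartree units, Eq. (eq:ebeling), in variables (rs, theta)."""
    A = 2.0 / (3.0 * math.sqrt(math.pi)) * (8.0 / 3.0)**0.5 * (4.0 / (9.0 * math.pi))**(-1.0 / 6.0)
    B = 2.0 / (3.0 * math.pi) * (4.0 / (9.0 * math.pi))**(-1.0 / 3.0)
    C = 64.0 / (3.0 * math.pi)
    D = (1.0 + math.log(2.0)) * math.sqrt(3.0) / 4.0 * (4.0 / (9.0 * math.pi))**(1.0 / 6.0)
    num = A * rs**-0.5 * theta**-0.5 + B / (rs * theta) + C * theta**-3 * eps_ry(rs)
    den = 1.0 + D * math.sqrt(rs) / theta + C * theta**-3
    return -0.5 * num / den

def tau_of(rs, theta):
    """Temperature conversion: tau = theta * E_F with E_F = (9 pi/4)^(2/3) / rs^2 Ry."""
    return theta * (9.0 * math.pi / 4.0) ** (2.0 / 3.0) / rs**2

# The two expressions agree (up to the Ry -> Ha factor of 1/2) for all (rs, theta).
for rs, theta in [(0.5, 0.5), (1.0, 1.0), (4.0, 2.0)]:
    assert abs(0.5 * fxc_ry(rs, tau_of(rs, theta)) - fxc_ha(rs, theta)) \
        < 1e-10 * abs(fxc_ha(rs, theta))
```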
Evidently, Eq.~(\ref{eq:ebeling}) incorporates the correct ground state limit
\begin{equation}
\lim_{\theta \to 0} f_{xc}^\textnormal{Ebeling,Ha}(r_s,\theta) = -\frac{1}{2}\epsilon^\textnormal{Ry}(r_s) \quad ,
\end{equation}
where the pre-factor $1/2$ is due to the conversion between Rydberg and Hartree units. Similarly, in the high-temperature limit the well-known Debye-H\"uckel result is recovered, e.g.~\cite{dewitt}:
\begin{eqnarray}\label{eq:DBH}
\lim_{\theta \to \infty} f_{xc}^\textnormal{Ebeling,Ha}(r_s,\theta) = -\frac{1}{2} A\ r_s^{-1/2} \theta^{-1/2}
= - \frac{1}{\sqrt{3}}r_s^{-3/2}T^{-1/2} \quad .
\end{eqnarray}
Results for the warm dense UEG computed from these formulas are included in the figures below. For Pad\'e approximations to the UEG at strong coupling in the quasi-classical regime, see, e.g., Ref.~\cite{stolz_ebeling}.
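Both limits are easy to verify numerically. The sketch below re-implements Eq.~(\ref{eq:ebeling}) as written (function names are ours) and checks the ground-state and Debye-H\"uckel asymptotics:

```python
import math

# Coefficients of Eq. (eq:ebeling) in Hartree atomic units, copied from the text.
A = 2.0 / (3.0 * math.sqrt(math.pi)) * (8.0 / 3.0)**0.5 * (4.0 / (9.0 * math.pi))**(-1.0 / 6.0)
B = 2.0 / (3.0 * math.pi) * (4.0 / (9.0 * math.pi))**(-1.0 / 3.0)
C = 64.0 / (3.0 * math.pi)
D = (1.0 + math.log(2.0)) * math.sqrt(3.0) / 4.0 * (4.0 / (9.0 * math.pi))**(1.0 / 6.0)

def eps_ry(rs):
    """Ground-state parametrization entering Eq. (eq:ebeling)."""
    return 0.9163 / rs + 0.1244 * math.log(
        1.0 + 2.117 * rs**-0.5 / (1.0 + 0.3008 * math.sqrt(rs)))

def fxc_ebeling(rs, theta):
    """Ebeling Pade formula for f_xc in Hartree units, Eq. (eq:ebeling)."""
    num = A * rs**-0.5 * theta**-0.5 + B / (rs * theta) + C * theta**-3 * eps_ry(rs)
    den = 1.0 + D * math.sqrt(rs) / theta + C * theta**-3
    return -0.5 * num / den

rs = 1.0
# theta -> 0: ground-state limit -eps_ry(rs)/2 (the factor 1/2 from Ry -> Ha).
assert abs(fxc_ebeling(rs, 1e-4) + 0.5 * eps_ry(rs)) < 1e-3
# theta -> infinity: Debye-Hueckel limit -(A/2) rs^(-1/2) theta^(-1/2), Eq. (eq:DBH).
theta = 1e8
dh = -0.5 * A * rs**-0.5 * theta**-0.5
assert abs(fxc_ebeling(rs, theta) / dh - 1.0) < 1e-3
```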
\subsection{\label{sub:ichimaru}Parametrization by Ichimaru and co-workers}
In the mid-eighties, Tanaka, Ichimaru and co-workers \cite{stls85,stls} extended the original STLS scheme \cite{stls_original} for the static local field corrections to finite temperature and numerically obtained the interaction energy $V$ (per particle) of the UEG via integration of the static structure factor $S(k)$,
\begin{equation}\label{eq:V}
V = \frac{1}{2} \int \frac{ \textnormal{d}\mathbf{k} }{ (2\pi)^3 } \left[S(\mathbf{k}) - 1\right]\frac{4\pi}{k^2} \quad ,
\end{equation}
for $70$ parameter combinations with $\theta=0.1,1,5$ and $r_s\sim10^{-3},\dots,74$.
Subsequently, a parametrization of $V$ as a function of $r_s$ and $\theta$ was introduced \cite{ichimaru_rev_2,ichimaru_rev},
\begin{equation}\label{eq:ichi_V}
V(r_s,\theta) = - \frac{1}{r_s} \frac{ a_\textnormal{HF}(\theta) + \sqrt{2}\lambda r_s^{1/2} \textnormal{tanh}(\theta^{-1/2}) B(\theta) + 2\lambda^2r_sC(\theta)E(\theta)\textnormal{tanh}(\theta^{-1}) }{1+ \sqrt{2}\lambda r_s^{1/2}D(\theta)\textnormal{tanh}(\theta^{-1/2}) + 2\lambda^2 r_s E(\theta)}\ ,
\end{equation}
with the definitions
\begin{eqnarray}\label{eq:ichi_definitions}
a_\textnormal{HF}(\theta) &=& 0.610887\ \textnormal{tanh}\left(\theta^{-1}\right) \frac{ 0.75 + 3.4363\theta^2 -0.09227\theta^3 + 1.7035\theta^4 }{ 1+ 8.31051\theta^2 + 5.1105\theta^4 }\\
B(\theta) &=& \frac{ x_1 + x_2\theta^2 + x_3 \theta^4 }{ 1 + x_4\theta^2 + x_5\theta^4 }\ , \quad
C(\theta) = x_6 + x_7 \textnormal{exp}\left( -\theta^{-1} \right) \ , \\
D(\theta) &=& \frac{ x_8 + x_9\theta^2 + x_{10}\theta^4 }{ 1 + x_{11}\theta^2 + x_{12}\theta^4} \ , \quad
E(\theta) = \frac{x_{13}+x_{14}\theta^2+x_{15}\theta^4}{1+x_{16}\theta^2 + x_{17}\theta^4} \quad .
\end{eqnarray}
In addition to the exact limits for $\theta\to 0$ and $\theta\to\infty$, the parametrization from Eq.~(\ref{eq:ichi_V}) also approaches the well-known Hartree-Fock limit for high density,
\begin{equation}
\lim_{r_s\to0}V(r_s,\theta) = - \frac{ a_\textnormal{HF}(\theta)}{r_s} \quad ,
\end{equation}
which has been parameterized by Perrot and Dharma-wardana \cite{pdw84}, see Eq.~(\ref{eq:ichi_definitions}).
Naturally, the free parameters $x_i$, $i=1,\dots,17$ have been determined by a fit of Eq.~(\ref{eq:ichi_V}) to the STLS data for $V$ and the resulting values are listed in Tab.~\ref{tab:ichi}.
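For concreteness, the interaction-energy parametrization of Eq.~(\ref{eq:ichi_V}) can be evaluated directly. The following minimal Python sketch uses the $x_i$ values from Tab.~\ref{tab:ichi}; the definition $\lambda = (4/9\pi)^{1/3}$ is assumed here, as it is not restated in this section.

```python
import math

# Assumption: lambda is the standard coupling parameter (4/(9*pi))**(1/3);
# it is not redefined in this section.
LAM = (4.0 / (9.0 * math.pi)) ** (1.0 / 3.0)

# x_1 ... x_17 from Tab. tab:ichi (Ichimaru's fit to the STLS data).
X = [3.4130800e-1, 1.2070873e1, 1.148889e0, 1.0495346e1, 1.326623e0,
     8.72496e-1, 2.5248e-2, 6.14925e-1, 1.6996055e1, 1.489056e0,
     1.010935e1, 1.22184e0, 5.39409e-1, 2.522206e0, 1.78484e-1,
     2.555501e0, 1.46319e-1]

def a_hf(t):
    """Hartree-Fock limit a_HF(theta), Eq. (eq:ichi_definitions)."""
    return (0.610887 * math.tanh(1.0 / t)
            * (0.75 + 3.4363*t**2 - 0.09227*t**3 + 1.7035*t**4)
            / (1.0 + 8.31051*t**2 + 5.1105*t**4))

def _pade(t, i):
    """(x_i + x_{i+1} t^2 + x_{i+2} t^4) / (1 + x_{i+3} t^2 + x_{i+4} t^4)."""
    return (X[i] + X[i+1]*t**2 + X[i+2]*t**4) / (1.0 + X[i+3]*t**2 + X[i+4]*t**4)

def interaction_energy(rs, t):
    """Interaction energy per particle V(r_s, theta), Eq. (eq:ichi_V)."""
    B, D, E = _pade(t, 0), _pade(t, 7), _pade(t, 12)
    C = X[5] + X[6] * math.exp(-1.0 / t)
    s = math.sqrt(2.0) * LAM * math.sqrt(rs) * math.tanh(t ** -0.5)
    num = a_hf(t) + s * B + 2.0 * LAM**2 * rs * C * E * math.tanh(1.0 / t)
    den = 1.0 + s * D + 2.0 * LAM**2 * rs * E
    return -num / (rs * den)
```

By construction, $r_s V(r_s,\theta) \to -a_\textnormal{HF}(\theta)$ as $r_s\to0$, which provides a quick consistency check of the implementation.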
From the interaction energy $V(r_s,\theta)$, the exchange-correlation free energy is obtained by integration over the coupling constant,
\begin{equation}\label{eq:fxc}
f_{xc}(r_s,\theta) = \frac{1}{r_s^2}\int_0^{r_s} \textnormal{d}\overline{r}_s\ \overline{r}_s V(\overline{r}_s,\theta) \quad .
\end{equation}
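The coupling-constant integration of Eq.~(\ref{eq:fxc}) can also be carried out numerically, e.g.~when only tabulated values of $V$ are available. A minimal sketch using a midpoint rule (which avoids evaluating the integrand at the endpoint $\overline{r}_s = 0$) and accepting any interaction-energy function \texttt{v(rs, theta)}:

```python
def fxc_from_v(v, rs, theta, n=2000):
    """Numerical f_xc(r_s, theta) = (1/r_s^2) * int_0^{r_s} rbar * v(rbar, theta) drbar,
    cf. Eq. (eq:fxc). Midpoint rule with n panels, avoiding the rbar = 0 endpoint."""
    h = rs / n
    acc = sum((i + 0.5) * h * v((i + 0.5) * h, theta) for i in range(n))
    return acc * h / rs**2
```

For instance, with the toy input $V(\overline{r}_s) = -1/\overline{r}_s$ the integrand is constant and the quadrature reproduces $f_{xc} = -1/r_s$ exactly.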
Inserting the expression for $V(r_s,\theta)$ from Eq.~(\ref{eq:ichi_V}) into Eq.~(\ref{eq:fxc}) gives the final parametrization for $f_{xc}(r_s,\theta)$,
\begin{eqnarray}\label{eq:ichi_fxc}
f_{xc}(r_s,\theta) = &-& \frac{1}{r_s}\frac{c(\theta)}{e(\theta)} \\ \nonumber
&-& \frac{ \theta}{2 e(\theta) r_s^2\lambda^2 } \left[ \left( a_\textnormal{HF}(\theta)-\frac{c(\theta)}{e(\theta)}\right)
-\frac{d(\theta)}{e(\theta)}\left( b(\theta) - \frac{ c(\theta)d(\theta)}{e(\theta)}\right)\right] \\ \nonumber
&\times&\textnormal{log}\left| \frac{ 2 e(\theta) \lambda^2 r_s }{ \theta} + \sqrt{2}d(\theta)\lambda r_s^{1/2} \theta^{-1/2} +1 \right| \\ \nonumber
&-& \frac{\sqrt{2}}{e(\theta)}\left( b(\theta) - \frac{ c(\theta)d(\theta) }{e(\theta)}\right) \frac{ \theta^{1/2} }{r_s^{1/2}\lambda}\\ \nonumber
&+& \frac{ \theta }{ r_s^2\lambda^2 e(\theta) \sqrt{4e(\theta)-d^2(\theta)}}\left[ d(\theta)\left(a_\textnormal{HF}(\theta)-\frac{c(\theta)}{e(\theta)}\right)\right. \\ \nonumber &+& \left. \left(2-\frac{d^2(\theta)}{e(\theta)}\right)\left(b(\theta)-\frac{c(\theta)d(\theta)}{e(\theta)}\right)\right] \\ \nonumber
&\times&\left[ \textnormal{atan}\left( \frac{ 2^{3/2} e(\theta) \lambda r_s^{1/2} \theta^{-1/2} + d(\theta) }{ \sqrt{4e(\theta)-d^2(\theta)} } \right) - \textnormal{atan}\left( \frac{ d(\theta) }{ \sqrt{4e(\theta)-d^2(\theta)}}\right)\right] \ ,
\end{eqnarray}
with the abbreviations
\begin{eqnarray}
b(\theta) &=& \theta^{1/2}\ \textnormal{tanh}\left( \theta^{-1/2} \right) B(\theta) \ , \quad
c(\theta) = C(\theta)e(\theta), \\ \nonumber
d(\theta) &=& \theta^{1/2}\ \textnormal{tanh}\left( \theta^{-1/2} \right) D(\theta) \ , \quad
e(\theta) = \theta\ \textnormal{tanh}\left( \theta^{-1} \right) E(\theta) \quad .
\end{eqnarray}
\begin{table}
\centering
\begin{tabular}{ c c c c c }
$x_1$ & $x_2$ & $x_3$ &
$x_4$ &
$x_5$ \\ \hline
$3.4130800\times10^{-1}$ &
$1.2070873\times10$ &
$1.148889\times10^{0}$ &
$1.0495346\times10$ &
$1.326623\times10^0$\vspace{0.2cm} \\
$x_6$ & $x_7$ & $x_8$ &
$x_9$ &
$x_{10}$ \\ \hline
$8.72496\times10^{-1}$ &
$2.5248\times10^{-2}$ &
$6.14925\times10^{-1}$ &
$1.6996055\times10$ &
$1.489056\times10^0$ \vspace{0.2cm}\\
$x_{11}$ &
$x_{12}$ &
$x_{13}$ &
$x_{14}$ &
$x_{15}$ \\ \hline
$1.010935\times10$ &
$1.22184\times10^0$ &
$5.39409\times10^{-1}$ &
$2.522206\times10^{0}$ &
$1.78484\times10^{-1}$ \vspace{0.2cm} \\
$x_{16}$ &
$x_{17}$ \\ \hline
$2.555501\times10^{0}$ &
$1.46319\times10^{-1}$
\end{tabular}
\caption{Fit parameters by Ichimaru \cite{ichimaru_rev} for the $f_{xc}(r_s,\theta)$ parametrization from Eq.~(\ref{eq:ichi_fxc}), fitted to STLS data \cite{stls}.}
\label{tab:ichi}
\end{table}
\subsection{\label{sub:vs}Vashishta-Singwi parametrization}
Despite the overall good performance of STLS in the ground state \cite{bohm}, it has long been known that this scheme does not fulfill the compressibility sum-rule (CSR, see, e.g., Ref.~\cite{stls2} for a detailed discussion). To overcome this obstacle, Vashishta and Singwi \cite{vs_original} introduced modified local field corrections (VS) for which the CSR is automatically fulfilled. This idea was extended in an approximate way to finite temperature by Stolzmann and R\"osler \cite{stolzmann}, until more recently Sjostrom and Dufty \cite{stls2} obtained an exhaustive data set of results that are exact within the VS framework.
As already explained in the previous section for the STLS data, they first calculated the static structure factor $S(k)$, computed the interaction energy $V$ by integration [Eq.~(\ref{eq:V})], fitted the parametrization from Eq.~(\ref{eq:ichi_V}) to these data, and thereby obtained the desired parametrization of $f_{xc}(r_s,\theta)$ as given in Eq.~(\ref{eq:ichi_fxc}) (albeit with the new fit parameters listed in Tab.~\ref{tab:vs}).
\begin{table}
\centering
\begin{tabular}{ c c c c c }
$x_1$ & $x_2$ & $x_3$ &
$x_4$ &
$x_5$ \\ \hline
$1.8871493\times10^{-1}$ &
$1.0684788\times10$ &
$1.1088191\times10^{2}$ &
$1.8015380\times10$ &
$1.2803540\times10^2$\vspace{0.2cm} \\
$x_6$ & $x_7$ & $x_8$ &
$x_9$ &
$x_{10}$ \\ \hline
$8.3331352\times10^{-1}$ &
$-1.1179213\times10^{-1}$ &
$6.1492503\times10^{-1}$ &
$1.6428929\times10$ &
$2.5963096\times10$ \vspace{0.2cm}\\
$x_{11}$ &
$x_{12}$ &
$x_{13}$ &
$x_{14}$ &
$x_{15}$ \\ \hline
$1.0905162\times10$ &
$2.9942171\times10$ &
$5.3940898\times10^{-1}$ &
$5.8869626\times10^{4}$ &
$3.1165052\times10^{3}$ \vspace{0.2cm} \\
$x_{16}$ &
$x_{17}$ \\ \hline
$3.8887108\times10^{4}$ &
$2.1774472\times10^{3}$
\end{tabular}
\caption{Fit parameters by Sjostrom and Dufty \cite{stls2} for the $f_{xc}(r_s,\theta)$ parametrization from Eq.~(\ref{eq:ichi_fxc}), fitted to VS data.}
\label{tab:vs}
\end{table}
\subsection{\label{sub:pdw}Perrot-Dharma-wardana parametrization}
Dharma-wardana and Perrot \cite{pdw_prl,pdw} have introduced an independent, completely different approach. In particular, they employ a \textit{classical mapping} such that the correlation energy of the electron gas at $T=0$ (which has long been known from QMC calculations~\cite{alder,perdew}) is exactly recovered by the simulation of a classical system at an effective ``quantum temperature'' $T_q$. However, due to the lack of accurate data at finite $T$, an exact mapping was not possible, and the authors introduced a modified temperature $T_{c}=\sqrt{T^2+T^2_q}$, assuming an interpolation between the exactly known ground-state and classical (high-$T$) regimes. Naturally, at warm dense matter conditions this constitutes a largely uncontrolled approximation.
To obtain the desired parametrization for $f_{xc}$, extensive simulations of the UEG in the range of $r_s=1,\dots,10$ and $\theta=0,\dots,10$ were performed. These have been used as input for a fit with the functional form
\begin{eqnarray}\label{eq:pdw}
f_{xc}(r_s,\theta) &=& \frac{ \epsilon(r_s) - P_1(r_s,\theta) }{ P_2(r_s,\theta) }, \\ \nonumber
P_1(r_s,\theta) &=& \left(A_2(r_s)u_1(r_s) + A_3(r_s)u_2(r_s)\right) \theta^2 Q^2(r_s) + A_2(r_s)u_2(r_s)\theta^{5/2}Q^{5/2}(r_s), \\ \nonumber
P_2(r_s,\theta) &=& 1 + A_1(r_s)\theta^2 Q^2(r_s) + A_3(r_s)\theta^{5/2}Q^{5/2}(r_s) + A_2(r_s)\theta^3Q^3(r_s), \\ \nonumber
Q(r_s) &=& \left( 2 r_s^2 \lambda^2 \right)^{-1}\ , \quad n(r_s) = \frac{3}{4\pi r_s^3}\ , \quad u_1(r_s) = \frac{\pi n(r_s)}{2}\ , \quad u_2(r_s) = \frac{2\sqrt{\pi n(r_s)}}{3}, \\ \nonumber
A_k(r_s) &=& \textnormal{exp}\left( \frac{ y_k(r_s) + \beta_k(r_s)z_k(r_s) }{ 1 + \beta_k(r_s) }
\right)\ , \quad \beta_k(r_s) = \textnormal{exp}\left( 5(r_s - r_k)
\right), \\ \nonumber
y_k(r_s) &=& \nu_k\ \textnormal{log}(r_s) + \frac{a_{1,k} + b_{1,k}r_s + c_{1,k}r_s^2 }{ 1 + r_s^2/5} \ , \quad
z_k(r_s) = r_s \frac{ a_{2,k} + b_{2,k}r_s }{ 1 + c_{2,k} r_s^2 } \quad ,
\end{eqnarray}
which becomes exact for $\theta\to 0$ and $\theta\to\infty$, but is limited to the accuracy of the classical mapping data in between. Further, it does not include the exact Hartree-Fock limit for $r_s\to0$, so that it cannot reasonably be used for $r_s<1$.
For completeness, we mention that a functional form similar to Eq.~(\ref{eq:pdw}) has recently been used by Brown \textit{et al.}~\cite{brown3} for a fit to their RPIMC data \cite{brown}.
\begin{table}
\centering
\begin{tabular}{ l c c c c c c c c }
$k$ & $a_{1,k}$ & $b_{1,k}$ & $c_{1,k}$ & $a_{2,k}$ & $b_{2,k}$ & $c_{2,k}$ & $\nu_k$ & $r_k$ \\ \hline
1 &
5.6304 &
-2.2308 &
1.7624 &
2.6083 &
1.2782 &
0.16625 &
1.5 &
4.4467 \\
2 &
5.2901 &
-2.0512 &
1.6185 &
-15.076 &
24.929 &
2.0261 &
3 &
4.5581 \\
3 &
3.6854 &
-1.5385 &
1.2629 &
2.4071 &
0.78293 &
0.095869 &
3 &
4.3909 \\
\end{tabular}
\caption{Fit parameters by Perrot and Dharma-wardana \cite{pdw} for the $f_{xc}(r_s,\theta)$ parametrization from Eq.~(\ref{eq:pdw}).}
\label{tab:pdw}
\end{table}
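A Python sketch of Eq.~(\ref{eq:pdw}) with the parameters of Tab.~\ref{tab:pdw} follows. Since the ground-state energy $\epsilon(r_s)$ is not parametrized in this section, it is passed in as a user-supplied function; the definition $\lambda = (4/9\pi)^{1/3}$ is again an assumption.

```python
import math

# Rows k = 1, 2, 3 of Tab. tab:pdw:
# (a_{1,k}, b_{1,k}, c_{1,k}, a_{2,k}, b_{2,k}, c_{2,k}, nu_k, r_k)
PDW = [
    (5.6304, -2.2308, 1.7624, 2.6083, 1.2782, 0.16625, 1.5, 4.4467),
    (5.2901, -2.0512, 1.6185, -15.076, 24.929, 2.0261, 3.0, 4.5581),
    (3.6854, -1.5385, 1.2629, 2.4071, 0.78293, 0.095869, 3.0, 4.3909),
]
LAM = (4.0 / (9.0 * math.pi)) ** (1.0 / 3.0)  # assumed definition of lambda

def _A(rs, k):
    """A_k(r_s) of Eq. (eq:pdw), built from y_k, z_k and beta_k."""
    a1, b1, c1, a2, b2, c2, nu, rk = PDW[k]
    y = nu * math.log(rs) + (a1 + b1 * rs + c1 * rs**2) / (1.0 + rs**2 / 5.0)
    z = rs * (a2 + b2 * rs) / (1.0 + c2 * rs**2)
    beta = math.exp(5.0 * (rs - rk))
    return math.exp((y + beta * z) / (1.0 + beta))

def fxc_pdw(rs, theta, eps0):
    """f_xc(r_s, theta) of Eq. (eq:pdw); eps0(rs) is the user-supplied
    ground-state exchange-correlation energy."""
    Q = 1.0 / (2.0 * rs**2 * LAM**2)
    n = 3.0 / (4.0 * math.pi * rs**3)
    u1 = math.pi * n / 2.0
    u2 = 2.0 * math.sqrt(math.pi * n) / 3.0
    A1, A2, A3 = _A(rs, 0), _A(rs, 1), _A(rs, 2)
    tQ = theta * Q
    P1 = (A2 * u1 + A3 * u2) * tQ**2 + A2 * u2 * tQ**2.5
    P2 = 1.0 + A1 * tQ**2 + A3 * tQ**2.5 + A2 * tQ**3
    return (eps0(rs) - P1) / P2
```

Note that for $\theta=0$ both $P_1$ and the $\theta$-dependent terms of $P_2$ vanish, so the parametrization reduces to the supplied ground-state energy, as stated in the text.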
Similar ideas of quantum-classical mappings were recently investigated by Dufty and Dutta, see e.g. \cite{dufty_dutta1, dufty_dutta2}.
\subsection{\label{sub:ksdt}Parametrization by Karasiev \textit{et al.}}
Karasiev and co-workers \cite{karasiev} (KSDT) utilized as the functional form for $f_{xc}$ an expression similar to Eq.~(\ref{eq:ichi_V}), which Ichimaru and co-workers \cite{ichimaru_rev_2,ichimaru_rev} suggested for the interaction energy:
\begin{eqnarray}\label{eq:ksdt}
f_{xc}(r_s,\theta) &=& - \frac{1}{r_s} \frac{ a_\textnormal{HF}(\theta) + b(\theta) r_s^{1/2} + c(\theta) r_s }{ 1 + d(\theta)r_s^{1/2} + e(\theta)r_s }, \\ \nonumber
b(\theta) &=& \textnormal{tanh}\left( \theta^{-1/2} \right) \frac{ b_1 + b_2\theta^2+b_3\theta^4}{ 1 + b_4\theta^2 + \sqrt{1.5}\lambda^{-1}b_3\theta^4 } \ , \quad
c(\theta) = \left[ c_1 + c_2\ \textnormal{exp}\left( -\frac{c_3}{\theta} \right)\right] e(\theta), \\ \nonumber
d(\theta) &=& \textnormal{tanh}\left( \theta^{-1/2} \right) \frac{ d_1 + d_2\theta^2+d_3\theta^4}{ 1 + d_4\theta^2 + d_5\theta^4 } \ , \quad
e(\theta) = \textnormal{tanh}\left( \theta^{-1} \right) \frac{ e_1 + e_2\theta^2+e_3\theta^4}{ 1 + e_4\theta^2 + e_5\theta^4 } \ . \quad
\end{eqnarray}
Further, instead of fitting to the interaction energy $V$, they used the relation
\begin{equation}\label{eq:exc}
E_{xc}(r_s,\theta) = f_{xc}(r_s,\theta) -\left. \theta \frac{\partial f_{xc}(r_s,\theta)}{\partial\theta}\right|_{r_s}
\end{equation}
and fitted the right-hand side of Eq.~(\ref{eq:exc}) to the recently published RPIMC data for the exchange-correlation energy $E_{xc}$ by Brown \textit{et al.}~\cite{brown}, which are available for the parameters $r_s=1,\dots,40$ and $\theta=0.0625,\dots,8$.
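The relation of Eq.~(\ref{eq:exc}) is straightforward to apply numerically to any $f_{xc}$ parametrization. A central-difference sketch (a hypothetical helper, not part of the original fitting procedure):

```python
def exc_from_fxc(fxc, rs, theta, h=1e-5):
    """E_xc = f_xc - theta * (d f_xc / d theta) at fixed r_s, Eq. (eq:exc),
    with the theta-derivative approximated by central differences."""
    dfdt = (fxc(rs, theta + h) - fxc(rs, theta - h)) / (2.0 * h)
    return fxc(rs, theta) - theta * dfdt
```

As a sanity check, for the toy input $f_{xc}(\theta)=\theta^2$ the relation gives $E_{xc} = \theta^2 - 2\theta^2 = -\theta^2$, which the finite difference reproduces exactly.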
\begin{table}
\centering
\begin{tabular}{ c c c c c c c }
$b_1$ & $b_2$ & $b_3$ & $b_4$ & $c_1$ & $c_2$ & $c_3$ \\
\hline
0.283997 &
48.932154 &
0.370919 &
61.095357 &
0.870089 &
0.193077 &
2.414644 \vspace{0.2cm} \\
$d_1$ & $d_2$ & $d_3$ & $d_4$ & $d_5$ & $e_1$ & $e_2$ \\
\hline
0.579824 &
94.537454 &
97.839603 &
59.939999 &
24.388037 &
0.212036 &
16.731249 \vspace{0.2cm}
\\
$e_3$ & $e_4$ & $e_5$ &
& & & \\ \hline
28.485792 &
34.028876 &
17.235515 &
& & & \\
\end{tabular}
\caption{Fit parameters by Karasiev \textit{et al.}~\cite{karasiev} for the $f_{xc}(r_s,\theta)$ parametrization from Eq.~(\ref{eq:ksdt}).}
\label{tab:KSDT}
\end{table}
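For completeness, a Python sketch of the KSDT form, Eq.~(\ref{eq:ksdt}), with the parameters of Tab.~\ref{tab:KSDT}; here $a_\textnormal{HF}(\theta)$ is the same Hartree-Fock limit as in Eq.~(\ref{eq:ichi_definitions}), and $\lambda = (4/9\pi)^{1/3}$ is assumed.

```python
import math

LAM = (4.0 / (9.0 * math.pi)) ** (1.0 / 3.0)  # assumed definition of lambda

# Fit parameters from Tab. tab:KSDT.
B_ = (0.283997, 48.932154, 0.370919, 61.095357)
C_ = (0.870089, 0.193077, 2.414644)
D_ = (0.579824, 94.537454, 97.839603, 59.939999, 24.388037)
E_ = (0.212036, 16.731249, 28.485792, 34.028876, 17.235515)

def a_hf(t):
    """Hartree-Fock limit a_HF(theta), as in Eq. (eq:ichi_definitions)."""
    return (0.610887 * math.tanh(1.0 / t)
            * (0.75 + 3.4363*t**2 - 0.09227*t**3 + 1.7035*t**4)
            / (1.0 + 8.31051*t**2 + 5.1105*t**4))

def fxc_ksdt(rs, t):
    """KSDT parametrization f_xc(r_s, theta), Eq. (eq:ksdt)."""
    e = math.tanh(1.0 / t) * (E_[0] + E_[1]*t**2 + E_[2]*t**4) \
        / (1.0 + E_[3]*t**2 + E_[4]*t**4)
    b = math.tanh(t ** -0.5) * (B_[0] + B_[1]*t**2 + B_[2]*t**4) \
        / (1.0 + B_[3]*t**2 + math.sqrt(1.5) / LAM * B_[2]*t**4)
    c = (C_[0] + C_[1] * math.exp(-C_[2] / t)) * e
    d = math.tanh(t ** -0.5) * (D_[0] + D_[1]*t**2 + D_[2]*t**4) \
        / (1.0 + D_[3]*t**2 + D_[4]*t**4)
    return -(a_hf(t) + b * math.sqrt(rs) + c * rs) \
        / (rs * (1.0 + d * math.sqrt(rs) + e * rs))
```

As for Eq.~(\ref{eq:ichi_V}), the functional form is built so that $r_s f_{xc} \to -a_\textnormal{HF}(\theta)$ in the high-density limit $r_s\to0$.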
\section{Results\label{sec:results}}
In this section we analyze the behavior of the analytical approximations for the exchange-correlation free energy summarized above by comparing them to our recent simulation results, which cover the entire relevant density range for temperatures $\theta \ge 0.5$. These data
have an unprecedented accuracy of the order of $0.1\%$; for details, see Refs.~\cite{dornheim_prl, dornheim_pop_17}.
\subsection{\label{sub:temperature}Temperature dependence}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{Contrib_fxc_rs1_theta.pdf} \includegraphics[width=0.5\textwidth]{Contrib_fxc_rs6_theta.pdf}~\hspace{-3mm}\vspace*{-0.6cm}\\
\includegraphics[width=0.5\textwidth]{Contrib_delta_fxc_rs1_theta.pdf}
\includegraphics[width=0.5\textwidth]{Contrib_delta_fxc_rs6_theta.pdf}~\hspace{-3mm}
\caption{Temperature dependence of $f_{xc}$ at fixed density $r_s=1$ (left) and $r_s=6$ (right). Top: QMC data (symbols) taken from Dornheim \textit{et al.}\cite{dornheim_prl}, a parametrization of RPIMC data by Karasiev \textit{et al.}\cite{karasiev} (KSDT), a semi-analytic Pad\'e approximation by Ebeling \cite{ebeling1}, a parametrization fitted to STLS and VS data by Ichimaru \cite{ichimaru_rev} and Sjostrom and Dufty \cite{stls2}, respectively, and a fit to classical mapping data by Perrot and Dharma-wardana \cite{pdw} (PDW). Bottom: Relative deviation to the QMC data.
\label{fig:theta} }
\end{figure}
In Fig.~\ref{fig:theta}, we show the temperature dependence of the exchange-correlation free energy as a function of the reduced temperature $\theta$ for two densities that are relevant for contemporary warm dense matter research, namely $r_s=1$ (left) and $r_s=6$ (right).
For both cases, all depicted parametrizations reproduce the correct classical limit for large $\theta$ [c.f.~Eq.~(\ref{eq:DBH})] and four of them (Ebeling, KSDT, STLS and PDW) are in excellent agreement for the ground state as well.
For completeness, we note that the small differences between KSDT, Ebeling, and PDW are due to different ground-state QMC input data. In particular, Karasiev \textit{et al.}~used more recent QMC results by Spink \textit{et al.}\cite{spink}, although in the context of WDM research the deviations from older parametrizations are negligible. Further, we note that the STLS parametrization reproduces the STLS data for $\theta=0$, which, however, are in good agreement with the exact QMC results as well. The VS parametrization, on the other hand, does not incorporate any ground-state limit and, consequently, the behavior of $f^\textnormal{VS}_\textnormal{xc}(r_s,\theta)$ becomes unreasonable below $\theta=0.0625$.
Similarly, the lowest temperature (apart from the ground-state limit) included in the fit for $f^\textnormal{PDW}_\textnormal{xc}(r_s,\theta)$ is $\theta=0.25$, and the rather unsmooth connection between this point and $\theta=0$ does not appear trustworthy either.
Let us now check the accuracy of the different models at intermediate, WDM temperatures. As a reference, we use the recent, accurate QMC results for the macroscopic UEG by Dornheim \textit{et al.}~\cite{dornheim_prl}, i.e., the red squares. For $r_s=1$, the semi-analytic expression by Ebeling (blue) exhibits the largest deviations, exceeding $\Delta f_{xc}/f_{xc}= 25\%$ for $\theta\sim 1$.
For lower density, $r_s=6$, the Ebeling parametrization is significantly more accurate, although here, too, deviations of $\Delta f_{xc}/f_{xc}\sim 10\%$ from the exact data appear at intermediate temperature. Hence, this parametrization produces reliable data in the two limiting cases of zero and high temperature, but is less accurate in between.
Next consider the STLS curve (black). It is in very good agreement with the QMC data, and the error does not exceed $\Delta f_{xc}/f_{xc} = 4\%$ over the entire $\theta$-range for both depicted $r_s$ values. The largest deviations appear for intermediate temperatures as well.
Third, we consider the VS model (yellow line). For $r_s=1$, the VS parametrization by Sjostrom and Dufty \cite{stls2} exhibits the same trends as the STLS curve, albeit with larger deviations, $\Delta f_{xc}/f_{xc}> 5\%$. Further, for $r_s=6$, $f_{xc}^\textnormal{VS}$ exhibits much larger deviations from the exact result, and the error attains $\Delta f_{xc}/f_{xc}\approx 8\%$. Evidently, the constraint to automatically fulfill the CSR does not improve the accuracy of other quantities, in particular the interaction energy $V$ (which was used as input for the parametrization, see Sec.~\ref{sub:vs}) or the static structure factor $S(k)$ itself.
Fourth, the parametrization based on the classical mapping (PDW, light blue) exhibits trends somewhat opposite to those of Ebeling, STLS, and VS and predicts too large an exchange-correlation free energy for all $\theta$.
The magnitude of the deviations is comparable to VS and does not exceed $\Delta f_{xc}/f_{xc}=5\%$.
Finally, we consider the recent parametrization by Karasiev \textit{et al.}~(KSDT, green) \cite{karasiev} that is based on RPIMC results \cite{brown}.
For $r_s=6$, there is excellent agreement with the new reference QMC data, with a maximum deviation of $\Delta f_{xc}/f_{xc}\sim 1\%$ for $\theta=4$. This is, in principle, expected since the main sources of error for their input data, i.e., the nodal error and the insufficient finite-size correction, are less important for larger $r_s$. However, for $r_s=1$ significantly larger deviations appear, exceeding $\Delta f_{xc}/f_{xc}= 5\%$ at high temperature. In fact, for $r_s=1$ and the largest considered temperature, $\theta=8$, the KSDT parametrization exhibits the largest deviations of all depicted parametrizations.
\subsection{\label{sub:density}Density dependence}
\begin{figure}[t]
\includegraphics[width=0.5\textwidth]{Contrib_fxc_theta0p5_rs.pdf} \includegraphics[width=0.5\textwidth]{Contrib_fxc_theta4_rs.pdf}~\hspace{-3mm}\vspace*{-0.6cm}\\
\includegraphics[width=0.5\textwidth]{Contrib_delta_fxc_theta0p5_rs.pdf}
\includegraphics[width=0.5\textwidth]{Contrib_delta_fxc_theta4_rs.pdf}~\hspace{-3mm}
\caption{Density dependence of $f_{xc}$ at fixed temperature $\theta=0.5$ (left) and $\theta=4$ (right). Top: QMC data taken from Dornheim \textit{et al.}\cite{dornheim_prl}, a parametrization of RPIMC data by Karasiev \textit{et al.}\cite{karasiev} (KSDT), a semi-analytic Pad\'e approximation by Ebeling \cite{ebeling1}, a parametrization fitted to STLS and VS data by Ichimaru \cite{ichimaru_rev} and Sjostrom and Dufty \cite{stls2}, respectively, and a fit to classical mapping data by Perrot and Dharma-wardana \cite{pdw} (PDW). Bottom: Relative deviation to the QMC data.
\label{fig:density}}
\end{figure}
As a complement to Sec.~\ref{sub:temperature}, in Fig.~\ref{fig:density} we investigate in more detail the density dependence of the different parametrizations for two relevant temperatures, $\theta=0.5$ (left) and $\theta=4$ (right).
Most notably, the Ebeling and PDW parametrizations do not include the correct high-density ($r_s\to0$) limit, i.e.,~Eq.~(\ref{eq:ichi_definitions}), and are therefore not reliable for $r_s<1$.
For $\theta=0.5$, $f_{xc}^\textnormal{Ebeling}$ is in qualitative agreement with the correct results, but the deviations rapidly increase with density and exceed $\Delta f_{xc}/f_{xc} = 10\%$ for $r_s=1$. At higher temperature, $\theta=4$, the situation is worse, and the Ebeling parametrization shows systematic deviations over the entire density range.
The STLS fit displays a similarly impressive agreement with the exact data as for the $\theta$-dependence (c.f.~Fig.~\ref{fig:theta}), and the deviations do not exceed $\Delta f_{xc}/f_{xc}\sim 3\%$ for both depicted $\theta$-values.
On the other hand, the VS results are again significantly less accurate than STLS, although the deviation remains below $\Delta f_{xc}/f_{xc}=8\%$ for both temperatures. Further, we notice that the largest deviations occur for $r_s \geq 2$, i.e., towards stronger coupling, which is expected since here the pair distribution function exhibits unphysical negative values at short distance, see, e.g., Ref.~\cite{stls2}.
Again, the incorporation of the CSR has not improved the quality of the interaction energy or the structure factor compared to STLS.
The classical mapping data (PDW) exhibit deviations not exceeding $\Delta f_{xc}/f_{xc} =5\%$ for $r_s\geq1$, i.e., in the range where numerical data have been incorporated into the fit. Overall, the quality of this parametrization is comparable to the VS curve, although the relative deviation appears to be almost constant with respect to the density. This is not surprising, as the approximation lies not in the coupling (the effective classical system is solved with the hypernetted-chain method, which is expected to be accurate in this regime) but, instead, in the interpolation of the effective temperature $T_c$. Further, we notice a peculiar non-smooth, almost oscillatory behavior of $f^\textnormal{PDW}_{xc}$ around $r_s=5$ that is more pronounced for $\theta=0.5$ and the origin of which remains unclear.
Finally, we again consider the KSDT-fit based on the RPIMC data by Brown \textit{et al.}~\cite{brown} (a similar analysis for more temperatures can be found in Ref.~\cite{dornheim_prl}).
For $\theta=0.5$, this parametrization is in excellent agreement with the reference QMC data and the deviations are in the sub-percent regime over the entire depicted $r_s$-range.
However, for larger temperatures significant errors appear that, at $\theta=4$, attain a maximum of $\Delta f_{xc}/f_{xc}\sim 10\%$ for $r_s=0.1$, i.e., at parameters where STLS, VS, and PDW are in very good agreement with the reference QMC data. Interestingly, these deviations only vanish for $r_s \leq 10^{-4}$.
Naturally, the inaccuracies of the KSDT-fit are a direct consequence of the systematic errors of the input data and the lack of accurate simulation data for $r_s<1$, prior to Ref.~\cite{dornheim_prl}.
\section{Discussion}
In summary, we have compared five different parametrizations of the exchange-correlation free energy of the unpolarized UEG to the recent QMC data by Dornheim \textit{et al.}~\cite{dornheim_prl} and, thereby, have been able to gauge their accuracy with respect to $\theta$ and $r_s$ over large parts of the warm dense matter regime. We underline that all these parametrizations are highly valuable, their main merit being their easy, flexible use and rapid evaluation. At the same time, an unbiased assessment of their accuracy had not been carried out and appears highly important, as it allows one to constrain the range of applicability of these models and to indicate directions for future improvements.
Summarizing our findings,
we have observed that the semi-analytic parametrization by Ebeling \cite{ebeling1} is mostly reliable in the high- and zero temperature limits, but exhibits substantial deviations in between.
The STLS fit given by Ichimaru and co-workers \cite{ichimaru_rev_2,ichimaru_rev}, on the other hand, exhibits a surprisingly high accuracy for all investigated $r_s$-$\theta$ combinations, with a typical relative systematic error of $\sim2\%$. The more recent Vashishta-Singwi (VS) results \cite{stls2}, which automatically fulfill the compressibility sum-rule, display a qualitatively similar behavior but are significantly less accurate everywhere.
The classical mapping suggested by Perrot and Dharma-wardana \cite{pdw} constitutes an approximation rather with respect to temperature than to coupling strength and, consequently, exhibits different trends. In particular, we have found that the relative systematic error is nearly independent of $r_s$, but decreases with increasing $\theta$ and eventually vanishes for $\theta\to\infty$. Overall, the accuracy of the PDW-parametrization is comparable to VS and, hence, inferior to STLS. Finally, the more recent fit by Karasiev \textit{et al.}~\cite{karasiev} to RPIMC data \cite{brown} is accurate for large $r_s$ and low temperature, where the input data is not too biased by the inappropriate treatment of finite-size errors in the underlying RPIMC results.
For higher temperatures (where the exchange-correlation free energy constitutes only a small fraction of the total free energy), relative deviations of up to $\sim10\%$ occur.
Thus we conclude that an accurate parametrization of the exchange-correlation free energy that is valid for all $r_s$-$\theta$ combinations is presently not available. However, the recent QMC data by Dornheim \textit{et al.}~\cite{dornheim_prl} most certainly constitute a promising basis for the construction of such a functional. Further, thermal DFT calculations in the local spin-density approximation require a parametrization of $f_{xc}$ also as a function of the spin polarization $\xi = (N_\uparrow - N_\downarrow) / (N_\uparrow + N_\downarrow)$, i.e., $f_{xc}(r_s,\theta,\xi)$, for all warm dense matter parameters. Obviously, this will require extending the QMC simulations beyond the unpolarized case, $\xi\in(0,1]$, and, in addition, reliable data for $\theta<0.5$ are indispensable. This work is presently under way.
We also note that the quality of the currently available KSDT fit for $f_{xc}(r_s,\theta,\xi)$ remains to be tested for $\xi> 0$. The accuracy of this parametrization is limited by (i) the quality of the RPIMC data (for the spin-polarized UEG ($\xi=1$) they are afflicted with a substantially larger nodal error than for the unpolarized case that we considered in the present paper, see Ref.~\cite{groth}) and (ii) by the quality of the PDW-results \cite{pdw} that have been included as the only input to the KSDT-fit for $0<\xi<1$ at finite $\theta$.
Therefore, we conclude that the construction of a new, accurate function $f_{xc}(r_s,\theta,\xi)$ is still of high importance for thermal DFT and semi-analytical models, for comparisons with experiments, but also for explicitly time-dependent approaches such as time-dependent DFT and quantum hydrodynamics \cite{manfredi, michta}.
\begin{acknowledgement}
SG and TD contributed equally to this work.
We acknowledge helpful comments from A. F\"orster on the Pad\'e formulas of Ebeling \textit{et al.}~and from Fionn~D.~Malone.
This work was supported by the Deutsche Forschungsgemeinschaft via project BO1366-10 and via SFB TR-24 project A9 as well as grant shp00015 for CPU time at the Norddeutscher Verbund f\"ur Hoch- und H\"ochstleistungsrechnen (HLRN).
\end{acknowledgement}
\section{Introduction}
In 1961, I.J. Schark \cite{S}
proved that for each $f \in \mathcal{H}^\infty (\mathbb{D})$, i.e., each bounded holomorphic function on the open disk $\mathbb{D}$, and for each $z \in \overline{\mathbb{D}}$, the image of the fiber $\mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D} ))$ under the Gelfand transform $\widehat{f}$ of $f$ coincides with the set of all limits of $(f(x_{\alpha}))$ over nets $(x_{\alpha})\subset \mathbb{D}$ converging to $z$ (the definitions of unexplained notation can be found in Section \ref{Background}). This result has been called the Cluster Value Theorem. It is also known as a weak version of the Corona Theorem, which was proved by L. Carleson \cite{C}. R.M. Aron et al. \cite{ACGLM} presented the first positive results for the Cluster Value Theorem in the infinite-dimensional Banach space setting by proving that the theorem holds for the Banach algebra $\mathcal{H}^\infty (B_{c_0})$.
Though they showed that the theorem is also true for the closed subalgebra $\mathcal{A}_u (B_{\ell_2})$ of $\mathcal{H}^\infty (B_{\ell_2})$, it is still open whether it holds for $\mathcal{H}^\infty (B_{\ell_2})$. Surprisingly, no domain $U$ of a Banach space has been found for which the Cluster Value Theorem fails.
Three years later, W.B. Johnson and S. Ortega Castillo \cite{JC} proved that the Cluster Value Theorem also holds for $C(K)$-spaces when $K$ is a scattered compact Hausdorff space, using arguments similar to those in \cite[Theorem 5.1]{ACGLM}. However, to our knowledge, $c_0$ and $C(K)$, with $K$ a scattered compact Hausdorff space, are the only known Banach spaces $X$ for which the Cluster Value Theorem holds for $\mathcal{H}^\infty (B_X)$. For historical background and recent developments on this topic, we refer to the survey article \cite{CGMP}.
On the other hand, we are interested in open domains $U$ of a Banach space $X$ more general than its open unit ball $B_X$ for which the Cluster Value Theorem holds for $\mathcal{H}^\infty (U)$. It was observed by B.J. Cole and T.W. Gamelin \cite{CG} that the Banach algebra $\mathcal{H}^\infty (\ell_2 \cap B_{c_0})$ is isometrically isomorphic to $\mathcal{H}^\infty (B_{c_0})$. Motivated by this work, we first generalize the open domain $\ell_2\cap B_{c_0}$ to polydisk-type domains $U$ of a Banach space $X$ with a Schauder basis for which the Banach algebra $\mathcal{H}^\infty (U)$ is isometrically isomorphic to the Banach algebra $\mathcal{H}^\infty (B_{c_0})$. Once this is established, we can derive the analytic and algebraic structure of the spectrum $\mathcal{M} (\mathcal{H}^\infty (U))$ from that of $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ with the aid of the transpose of the aforementioned isometric isomorphism.
For instance, by applying the Cluster Value Theorem for $\mathcal{H}^\infty (B_{c_0})$ (see \cite[Theorem 5.1]{ACGLM}), we obtain new open domains $U$ for which the Cluster Value Theorem holds for $\mathcal{H}^\infty (U)$. We will introduce certain polydisk-type domains of a Banach space $X$ with a Schauder basis, denoted by $\mathbb{D}_X (\bm{r})$, where $\bm{r}$ is a sequence of positive real numbers (see Definition \ref{def:rD_X}). As a matter of fact, the domain $\ell_2 \cap B_{c_0}$ considered in \cite{CG} is a special case of $\mathbb{D}_X (\bm{r})$.
Let us describe the contents of the paper: Section \ref{Background} is devoted to basic materials and the definition of certain polydisk type domains $\mathbb{D}_X (\bm{r})$, which are of our interest.
In Section \ref{sec:isometric_isomorphism}, we prove that the Banach algebra $\mathcal{H}^\infty (\mathbb{D}_X (\bm{r}))$ of all bounded holomorphic functions on $\mathbb{D}_X (\bm{r})$ (endowed with the supremum norm on $\mathbb{D}_X (\bm{r})$) is isometrically isomorphic as an algebra to $\mathcal{H}^\infty (B_{c_0})$ under some assumption on $\bm{r}$, which generalizes the result in \cite{CG}.
As all the algebras $\mathcal{H}^\infty (\mathbb{D}_X (\bm{r}))$ are isometrically isomorphic, without loss of generality, we may fix the sequence $\bm{r}$ to be $(1,1,\ldots)$ and denote the corresponding domain by $\mathbb{D}_X$ in the following sections.
As we commented above, in Section \ref{sec:fiber}, we study the structure of the spectrum $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$ by applying that of the spectrum $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ through the transpose of the above isometric isomorphism.
Next, in Section \ref{sec:cluster}, we consider the notion of a cluster set of elements in $ \mathcal{H}^\infty (\mathbb{D}_X)$, and examine its relation with that in $\mathcal{H}^\infty (B_{c_0})$. Using this relation, we prove that the corresponding Cluster Value Theorem holds for $\mathcal{H}^\infty (\mathbb{D}_X)$. In Section \ref{sec:Gleason}, we focus on studying the Gleason parts of $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$ by translating Gleason parts of $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ or $\mathcal{M} (\mathcal{A}_u (B_{c_0}))$ into our case.
\section{Background}\label{Background}
In this section, we provide some necessary notations. Throughout this paper, $X$ will be an infinite dimensional complex Banach space with open unit ball $B_X$. The algebras that we are going to consider are denoted by $\mathcal{H}^\infty (U)$ and $\mathcal{A}_u (U)$, the algebra of all bounded holomorphic functions on an open subset $U$ of $X$ and the algebra of all uniformly continuous holomorphic functions on $U$, respectively, where both are endowed with the supremum norm. For a Banach algebra $\mathcal{A}$, we will denote by $\mathcal{M} (\mathcal{A})$ the spectrum (maximal ideal space) of $\mathcal{A}$, that is, the space of all continuous homomorphisms on $\mathcal{A}$. Notice that the spectrum $\mathcal{M} (\mathcal{A})$ is always a compact set when endowed with the restriction of the weak-star topology. For each $a \in \mathcal{A}$, the Gelfand transform of $a$ will be denoted by $\widehat{a}$; it sends each $\phi \in \mathcal{M} (\mathcal{A})$ to $\phi (a) \in \mathbb{C}$.
If a Banach algebra $\mathcal{A}$ is taken to be $\mathcal{H}^\infty (B_X)$ or $\mathcal{A}_u (B_X)$, then there is a natural restriction mapping $\pi$ from $\mathcal{M} (\mathcal{A} )$ to $X^{**}$ given by $\pi (\phi) = \phi \vert_{X^*}$, because the dual space $X^*$ is included in $\mathcal{A}$ in this case. By Goldstine's theorem, we have that $\pi (\mathcal{M} (\mathcal{A} )) = \overline{B}_{X^{**}}$, and the \emph{fiber of $\mathcal{M} (\mathcal{A})$ at $z \in \overline{B}_{X^{**}}$} can be well defined as the pre-image of $z$ under the mapping $\pi$, i.e.,
\[
\mathcal{M}_z (\mathcal{A}) = \{ \phi \in \mathcal{M} (\mathcal{A}) : \pi (\phi) = z \} = \pi^{-1} (z).
\]
We say that the \emph{Cluster Value Theorem holds for $\mathcal{A}$ at $z \in \overline{B}_{X^{**}}$} when the image $\widehat{f} (\mathcal{M}_z (\mathcal{A}))$ of the fiber $\mathcal{M}_z (\mathcal{A})$ under the Gelfand transform coincides with the \emph{cluster set} $Cl (f, z)$ defined by
\[
\{ \lambda \in \mathbb{C}: \, \text{there exists a net} \,\, (x_{\alpha}) \subset B_X \,\, \text{such that} \,\, x_{\alpha} \xrightarrow{w^*} z \,\, \text{and} \,\, f(x_\alpha) \rightarrow \lambda \},
\]
for every $f \in \mathcal{A}$. If $\widehat{f} (\mathcal{M}_z (\mathcal{A}))$ coincides with $Cl (f,z)$ for every $z \in \overline{B}_{X^{**}}$ and every $f \in \mathcal{A}$, then we simply say that the \emph{Cluster Value Theorem holds for $\mathcal{A}$}.
It is proved in \cite{ACGLM} that the Cluster Value Theorem holds for $\mathcal{H}^\infty (B_{c_0})$ and for $\mathcal{A}_u (B_{\ell_2})$, and that the Cluster Value Theorem holds for $\mathcal{A}_u (B_X)$ at $0$ whenever $X$ is a Banach space with a shrinking $1$-unconditional basis.
However, unlike the situation for $\mathcal{H}^\infty (B_X)$ or $\mathcal{A}_u (B_X)$, the dual space $X^*$ is not in general included in $\mathcal{H}^\infty (U)$ for an open subset $U$ of a Banach space (see Proposition \ref{Au(D_X)_not_contained_in_Ho(D_X)}).
Thus, `fibers' of $\mathcal{M} (\mathcal{H}^\infty (U))$ cannot be defined in the same manner as before. For this reason, we consider the subspace $X^\sharp$ of $\mathcal{H}^\infty (U)$ defined as
\begin{equation}\label{X_sharp_U}
X^\sharp = \left(\mathcal{H}^\infty (U) \cap X^*, \| \cdot \|_U \right),
\end{equation}
where $\| \cdot \|_U$ is the supremum norm on $U$. In other words, $X^\sharp$ is the space consisting of those elements of $\mathcal{H}^\infty (U)$ that are bounded linear functionals on $X$.
Notice that $X^\sharp$ may well be the trivial space $\{ 0 \}$. When $X^\sharp \neq \{ 0 \}$, we can consider the restriction mapping $\pi$ from the spectrum $\mathcal{M}(\mathcal{H}^\infty (U))$ to the closed unit ball $\overline{B}_{(X^\sharp)^*}$ of the dual space $(X^\sharp)^*$. Now, we consider the following definition.
\begin{definition}\label{def:fibers_U}
Let $X$ be a Banach space and $U$ be an open subset of $X$. If the restriction mapping $\pi : \mathcal{M}(\mathcal{H}^\infty (U)) \rightarrow \overline{B}_{(X^\sharp)^*}$ is surjective, then the \emph{fiber of $\mathcal{M} (\mathcal{H}^\infty (U))$ at $z \in \overline{B}_{(X^\sharp)^*}$} is defined as $\{ \phi \in \mathcal{M} (\mathcal{H}^\infty (U)) : \pi (\phi) = z \}$.
\end{definition}
As the above definition might be too general for our purpose, we would like to focus on certain polydisk type domains instead of dealing with an arbitrary open subset of a Banach space. Let us note that the following domain can be seen as a generalization of the domains $B_{c_0}$ (when $X = c_0$) or $\ell_p \cap \mathbb{D}^\mathbb{N}$, $1\leq p < \infty$ (when $X = \ell_p$).
\begin{definition}\label{def:rD_X}
Suppose that $X$ is a Banach space with a normalized Schauder basis and let $\bm{r} = (r_n)_{n =1}^\infty$ be a sequence of positive real numbers. Let us consider the set $\mathbb{D}_X (\bm{r})$ given by
\begin{equation*}
\mathbb{D}_X (\bm{r})= \left\{ x \in X : x = \sum_{j=1}^{\infty} e_j^* (x) e_j \,\, \text{with} \,\, |e_j^*(x)| < r_j \,\, \text{for every} \,\, j \in \mathbb{N} \right\}.
\end{equation*}
In particular, when $\bm{r} = (1, 1, \ldots)$, we simply denote it by $\mathbb{D}_X$.
\end{definition}
Let us make some remarks on $\mathbb{D}_X (\bm{r})$. First, if $\inf\limits_{n\in\mathbb{N}} r_n = 0$, then $\mathbb{D}_X (\bm{r})$ is never open in $X$. Indeed, if it were open, then $t B_X \subset \mathbb{D}_X (\bm{r})$ for some $t > 0$. Choosing $N \in \mathbb{N}$ with $r_N < \frac{t}{2}$, we would have $\frac{t}{2} e_N \in t B_X \subset \mathbb{D}_X (\bm{r})$, and hence $\frac{t}{2} = |e_N^* (\frac{t}{2} e_N)| < r_N$, contradicting the choice of $N$.
As a matter of fact, we have the following result. Recall that for a Banach space with a normalized Schauder basis $\{e_j\}_{j\in \mathbb{N}}$, we always have $\sup_{j \in \mathbb{N}} \|e_j^* \| \leq 2 \sup_{n \in \mathbb{N}} \|P_n \| < \infty$.
\begin{remark}
Let $X$ be a Banach space with a normalized Schauder basis and $\bm{r} = (r_n)_{n=1}^\infty$ be a sequence of positive real numbers. Then the set $\mathbb{D}_X (\bm{r})$ is an open subset of $X$ if and only if $\inf\limits_{n\in \mathbb{N}} r_n > 0$.
\end{remark}
\begin{proof}
We only need to prove that if $\inf\limits_{n\in\mathbb{N}} r_n > 0$ then $\mathbb{D}_X (\bm{r})$ is open. To this end, put $\alpha = \inf\limits_{n\in\mathbb{N}} r_n > 0$ and fix $x \in \mathbb{D}_X (\bm{r})$. Let $N \in \mathbb{N}$ be such that $|e_j^* (x)| < \frac{\alpha}{2}$ for all $j \geq N$. Choose $\varepsilon > 0$ sufficiently small so that $\varepsilon < \frac{\alpha}{2}$ and $\varepsilon < \min\{ r_j - |e_j^* (x)| : 1 \leq j \leq N-1\}$. If $y \in X$ satisfies $\| x - y \| < (\sup\limits_{j \in \mathbb{N}} \|e_j^*\| )^{-1} \varepsilon$, then $|e_j^* (y)| \leq |e_j^* (x)| + |e_j^* (x-y)| < |e_j^* (x)| + \varepsilon$, which is smaller than $r_j$ for $1 \leq j \leq N-1$ and smaller than $\frac{\alpha}{2} + \frac{\alpha}{2} \leq r_j$ for $j \geq N$. Hence $y \in \mathbb{D}_X (\bm{r})$, and $\mathbb{D}_X (\bm{r})$ is open.
\end{proof}
Recall that a \emph{complete Reinhardt domain} $R$ of a Banach space $X$ with a normalized unconditional Schauder basis is a subset such that for every $x \in R$ and $y \in X$ with $|e_j^* (y)| \leq |e_j^* (x)|$ for every $j \in \mathbb{N}$, we have $y \in R$.
Notice that if a basis of a Banach space $X$ is unconditional, then $\mathbb{D}_X (\bm{r})$ becomes a complete Reinhardt domain.
For more on the theory of holomorphic functions on Reinhardt domains of a Banach space, see \cite{CMV, DeGaMaSe, MN} and the references therein. Nevertheless, most of the results in this paper hold for a general normalized Schauder basis.
\section{An isometric isomorphism between $\mathcal{H}^\infty (\mathbb{D}_X (\pmb{r}))$ and $\mathcal{H}^\infty (B_{c_0})$}\label{sec:isometric_isomorphism}
We start with the following main result, which produces many examples of (bounded or unbounded) polydisk type domains $U$ of a Banach space for which $\mathcal{H}^\infty (U)$ is isometrically isomorphic to $\mathcal{H}^\infty (B_{c_0})$.
Using the standard notation, let us denote by $P_n$ the canonical projection from $X$ onto the linear subspace spanned by $\{e_j : 1\leq j \leq n\}$ for each $n\in\mathbb{N}$.
\begin{theorem}\label{Hol:D(X):B_{c_0}} Let $X$ be an infinite dimensional Banach space with a normalized Schauder basis, and let $\bm{r} = (r_n)_{n=1}^\infty$ be a sequence of positive real numbers with $\inf\limits_{n\in\mathbb{N}} r_n >0$. Then $\mathcal{H}^\infty (\mathbb{D}_X (\bm{r}))$ and $\mathcal{H}^\infty (B_{c_0})$ are isometrically isomorphic.
\end{theorem}
\begin{proof}
Let us denote by $\iota$ the linear injective mapping from $X$ to $c_0$ defined as
$\iota (x) = \left( r_j^{-1} e_j^* (x) \right)_{j=1}^{\infty}$ for $x = \sum_{j=1}^{\infty} e_j^* (x) e_j.$
Then $\iota$ maps the set $\mathbb{D}_X (\bm{r})$ into $B_{c_0}$.
Consider the mapping $\Psi : \mathcal{H}^\infty ( B_{c_0} ) \rightarrow \mathcal{H}^\infty ( \mathbb{D}_X (\bm{r}))$ defined as
\[
\Psi (f) (x) = (f \circ \iota) (x) \quad \text{for every} \,\, f \in \mathcal{H}^\infty (B_{c_0}) \,\, \text{and} \,\, x \in \mathbb{D}_X (\bm{r}).
\]
It is clear that $\Psi$ is well-defined.
To check that $\Psi$ is injective, suppose that $\Psi (f) = 0$ for some $f\in \mathcal{H}^\infty (B_{c_0})$. If $z=(z_j)_{j=1}^{\infty} \in B_{c_{00}}$, say with $z_j = 0$ for $j > N$, then $x'= \sum_{j=1}^{N} r_j z_j e_j \in \mathbb{D}_X (\bm{r})$ and $\iota (x') = z$. So, $f(z) = (f \circ \iota) (x') = \Psi (f) (x') =0$, which implies that $f = 0$ on $B_{c_{00}}$. Since $B_{c_{00}}$ is dense in $B_{c_0}$ and $f$ is continuous, we have that $f =0$ on $B_{c_0}$.
We claim that $\Psi$ is surjective. Let us denote by $\kappa$ the natural inclusion mapping from $c_{00}$ into $X$ mapping $z = (z_j)_{j =1}^\infty \in {c_{00}}$ to $\sum_{j=1}^{\infty} r_j z_j e_j$ (a finite sum) and let $g \in \mathcal{H}^\infty (\mathbb{D}_X (\bm{r}))$ be given.
Notice that the mapping $\kappa$ is not continuous in general; nevertheless, we claim that the function $g \circ \kappa$ is holomorphic on $B_{c_{00}}$. To this end, as $g \circ \kappa$ is bounded on $B_{c_{00}}$, it suffices to check that $g \circ \kappa$ is G\^ateaux holomorphic on $B_{c_{00}}$. Fix $a = (a_j)_{j=1}^\infty \in B_{c_{00}}$ and $w=(w_j)_{j=1}^\infty \in {c_{00}}$, and consider the mapping $\lambda \mapsto (g \circ \kappa )(a + \lambda w)$ on the set $ \Omega := \{ \lambda \in \mathbb{C} : a + \lambda w \in B_{c_{00}} \}$. Note that
\[
(g \circ \kappa) (a + \lambda w) = g \left( \sum_{j=1}^m r_j a_j e_j + \lambda \sum_{j=1}^m r_j w_j e_j \right)
\]
for some $m \in \mathbb{N}$, and it is holomorphic in the variable $\lambda \in \Omega$ since $g$ is holomorphic.
Now, consider the holomorphic function $g \circ \kappa \vert_{\mathbb{D}^N} : \mathbb{D}^N \rightarrow \mathbb{C}$ for each $N \in \mathbb{N}$. Note that the domain $\mathbb{D}^N$ is endowed with the supremum norm induced from $c_{00}$.
As the function $g \circ \kappa \vert_{\mathbb{D}^N}$ is separately holomorphic, it is holomorphic on $\mathbb{D}^N$ in the classical several-variables sense by Hartogs' theorem.
From this, we have a unique family $(c_{\alpha, N} (g))_{\alpha \in \mathbb{N}_0^N}$ such that
\[
\left( g \circ \kappa \vert_{\mathbb{D}^N} \right)(z) = \sum_{\alpha \in \mathbb{N}_0^N} c_{\alpha, N} (g) z^{\alpha} \quad (z \in \mathbb{D}^N).
\]
It follows that there exists a unique family $(c_{\alpha}(g))_{\alpha \in \mathbb{N}_0^{(\mathbb{N})}}$ such that
\[
(g \circ \kappa)(z) = \sum_{\alpha \in \mathbb{N}_{0}^{(\mathbb{N})}} c_{\alpha} (g) z^{\alpha} \quad (z \in B_{c_{00}}).
\]
Also, note that
\[
\sup_{N \in \mathbb{N}} \sup_{z \in \mathbb{D}^N} \left| \sum_{\alpha \in \mathbb{N}_0^N} c_{\alpha} (g) z^{\alpha} \right| \leq \sup_{N\in\mathbb{N}} \sup_{z \in \mathbb{D}^N} |\left( g \circ \kappa \vert_{\mathbb{D}^N} \right)(z)| \leq \|g\|_{\mathbb{D}_X (\bm{r})} < \infty.
\]
By Hilbert's criterion (see \cite[Theorem 2.21]{DeGaMaSe}), there exists a unique $f \in \mathcal{H}^\infty (B_{c_0})$ such that $c_{\alpha} (f) = c_{\alpha} (g)$ for every $\alpha \in \mathbb{N}_{0}^{(\mathbb{N})}$ with
\[
\|f\|_{\infty} := \sup_{x \in B_{c_0}} |f(x)| = \sup_{N \in \mathbb{N}} \sup_{z \in \mathbb{D}^N} \left| \sum_{\alpha \in \mathbb{N}_0^N} c_{\alpha} (g) z^{\alpha} \right|.
\]
Hence $f =g \circ \kappa$ on $B_{c_{00}}$ which implies that
\begin{align*}
\Psi (f) (x) = f(\iota (x)) &= \lim_{n \rightarrow \infty } f(\iota(P_n (x))) \\
&= \lim_{n \rightarrow \infty } (g \circ \kappa ) ( ( \iota \circ P_n) (x) ) \quad (\text{since } (\iota \circ P_n )(x) \in B_{c_{00}}) \\
&= \lim_{n \rightarrow \infty } g(P_n (x)) = g(x)
\end{align*}
for every $x \in \mathbb{D}_X (\bm{r})$, i.e., $\Psi (f) = g$.
As
\[
\|f\|_{\infty} \leq \|g\|_{\mathbb{D}_X (\bm{r})} = \| \Psi(f)\|_{\mathbb{D}_X (\bm{r})} \leq \|f\|_{\infty},
\]
where the last inequality is obvious by definition of $\Psi$, we see that $\Psi$ is an isometry. It is clear by definition that $\Psi$ is a multiplicative mapping.
\end{proof}
From this moment on, we shall deal only with the domain $\mathbb{D}_X$, since all results on the Banach algebra $\mathcal{H}^\infty (\mathbb{D}_X)$ can be translated to the Banach algebra $\mathcal{H}^\infty (\mathbb{D}_X (\bm{r}))$ with $\inf\limits_{n\in\mathbb{N}} r_n >0$ with the aid of the isometric isomorphism in Theorem \ref{Hol:D(X):B_{c_0}}. Throughout this paper, we denote by $\Psi$ the isometric isomorphism from $\mathcal{H}^\infty (B_{c_0})$ onto $\mathcal{H}^\infty ( \mathbb{D}_X)$ given in the proof of Theorem \ref{Hol:D(X):B_{c_0}}. We also keep the notations $\iota : X \rightarrow c_0$ and $\kappa : c_{00} \rightarrow X$ defined above with respect to the sequence $\bm{r} = (1,1,\ldots)$. Let ${\Phi}$ denote the restriction of the adjoint of $\Psi$ to $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X ))$. Note that the mapping ${\Phi}$ is a homeomorphism from $\mathcal{M} (\mathcal{H}^\infty ( \mathbb{D}_X ))$ onto $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ (actually, an isometry in the Gleason sense; see Section \ref{sec:Gleason}).
Recall that every continuous polynomial on $c_0$ is weakly continuous on bounded sets (see \cite[Proposition 1.59]{D} or \cite[Section 3.4]{G}).
From this fact, we shall see that if a continuous polynomial on $X$ is bounded on $\mathbb{D}_X$, then it is weakly continuous on bounded sets.
\begin{prop}
Let $X$ be a Banach space with a normalized Schauder basis and let $P$ be a continuous polynomial on $X$. If $P$ is bounded on $\mathbb{D}_X$, then $P$ is weakly continuous on bounded sets.
\end{prop}
\begin{proof}
Without loss of generality, we may assume that $P$ is an $n$-homogeneous polynomial. To prove that $P$ is weakly continuous on bounded sets, it suffices by homogeneity to prove that $P$ is weakly continuous on some ball $r B_X$. Let $(x_\alpha) \subset \big(\sup\limits_{j \in \mathbb{N}} \|e_j^* \| \big)^{-1} B_X$ be a net converging weakly to some $z$. As $P$ is bounded on $\mathbb{D}_X$, it can be observed that $Q:= \Psi^{-1} ( P \vert_{\mathbb{D}_X}) \in \mathcal{H}^\infty (B_{c_0})$ is an $n$-homogeneous polynomial on ${c_0}$. So, $Q$ is weakly continuous on bounded sets. Now,
\begin{align*}
P(x_\alpha) = (P \vert_{\mathbb{D}_X}) (x_\alpha) &= \Psi (Q) (x_\alpha) = Q ( \iota (x_\alpha) ) \rightarrow Q ( \iota(z)) = P (z)
\end{align*}
since $\iota (x_\alpha) \xrightarrow{w(c_0, \,\ell_1)} \iota(z)$.
\end{proof}
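Let us note, for illustration, that the boundedness hypothesis in the preceding proposition is not automatic; the following elementary example (a routine verification, not taken from the cited sources) shows this for $X = \ell_1$ with its canonical basis. The continuous $2$-homogeneous polynomial $P (x) = \sum_{j=1}^\infty e_j^* (x)^2$ satisfies
\[
P \left( (1-\varepsilon) \sum_{j=1}^{n} e_j \right) = n (1-\varepsilon)^2 \xrightarrow[n \rightarrow \infty]{} \infty,
\]
although $(1-\varepsilon) \sum_{j=1}^{n} e_j \in \mathbb{D}_{\ell_1}$ for every $n \in \mathbb{N}$ and $\varepsilon \in (0,1)$; hence $P$ is not bounded on $\mathbb{D}_{\ell_1}$. On the other hand, the polynomial $\widetilde{P} (x) = \sum_{j=1}^\infty 2^{-j} e_j^* (x)^2$ is bounded by $1$ on $\mathbb{D}_{\ell_1}$, so the proposition applies to it.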
Recall that if $U$ is a convex bounded open set in $X$, then $\mathcal{A}_u (U)$ becomes a subalgebra of $\mathcal{H}^\infty (U)$ as uniformly continuous functions on convex bounded sets of a Banach space are bounded.
Thus, $\mathcal{A}_u (B_{c_0})$ is a subalgebra of $\mathcal{H}^\infty (B_{c_0})$ and observe that the isometric isomorphism $\Psi$ from $\mathcal{H}^\infty (B_{c_0})$ onto $\mathcal{H}^\infty ( \mathbb{D}_X)$ maps $\mathcal{A}_u (B_{c_0})$ into the algebra $\mathcal{A}_u (\mathbb{D}_X)$.
In other words, $\Psi ( \mathcal{A}_u (B_{c_0})) \subset \mathcal{A}_u (\mathbb{D}_X)$.
So, one might ask if Theorem \ref{Hol:D(X):B_{c_0}} holds for the Banach algebras $\mathcal{A}_u (B_{c_0})$ and $\mathcal{A}_u (\mathbb{D}_X)$, i.e., whether $\mathcal{A}_u (B_{c_0})$ is isometrically isomorphic to $\mathcal{A}_u (\mathbb{D}_X)$. The following observations give a negative answer to this question. Moreover, we see that $\mathbb{D}_X$ is not bounded in general and that $\mathcal{A}_u (\mathbb{D}_X)$ is not a subalgebra of $\mathcal{H}^\infty (\mathbb{D}_X)$.
We would like to thank Daniel Carando for pointing out to us the following result.
\begin{prop}[D. Carando]\label{prop:unbounded_D_X}
If $X$ is a Banach space with a normalized Schauder basis $\{e_j\}_{j \in \mathbb{N}}$, then the set $\mathbb{D}_X$ is bounded if and only if $X$ is isomorphic to $c_0$.
\end{prop}
\begin{proof}
It is clear that if $X$ is isomorphic to $c_0$, then $\mathbb{D}_X$ is bounded. Assume that $\mathbb{D}_X$ is bounded. Then there exists $R > 0$ such that $\mathbb{D}_X \subset R B_X$. This implies that if $x \in X$ satisfies $\sup_{j \in \mathbb{N}} |e_j^* (x)| < 1$ then $\|x\| < R$. It follows that
\[
(\sup_{j \in \mathbb{N}} \|e_j^*\|)^{-1} \sup_{j \in \mathbb{N}} |e_j^* (x)| \leq \| x \| \leq R \sup_{j \in \mathbb{N}} |e_j^* (x)|
\]
for every $x \in X$. Thus, $(e_j)_{j \in \mathbb{N}}$ is equivalent to the canonical basis of $c_0$.
\end{proof}
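For a concrete illustration of Proposition \ref{prop:unbounded_D_X} (again a routine computation), take $X = \ell_p$ with $1 \leq p < \infty$ and its canonical basis. For every $\varepsilon \in (0,1)$ and $n \in \mathbb{N}$, the vector $x_n = (1-\varepsilon) \sum_{j=1}^{n} e_j$ belongs to $\mathbb{D}_{\ell_p}$, while
\[
\| x_n \|_{\ell_p} = (1-\varepsilon)\, n^{1/p} \xrightarrow[n \rightarrow \infty]{} \infty;
\]
hence $\mathbb{D}_{\ell_p} = \ell_p \cap \mathbb{D}^{\mathbb{N}}$ is unbounded, in accordance with the fact that $\ell_p$ is not isomorphic to $c_0$.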
\begin{prop}\label{Au(D_X)_not_contained_in_Ho(D_X)}
If $X$ is a Banach space with a normalized Schauder basis, then
$\Psi ( \mathcal{A}_u (B_{c_0})) \subset \mathcal{A}_u (\mathbb{D}_X) \cap \mathcal{H}^\infty (\mathbb{D}_X)$. However, the space $\mathcal{A}_u (\mathbb{D}_X)$ is contained in $\mathcal{H}^\infty (\mathbb{D}_X)$ if and only if $X$ is isomorphic to $c_0$.
\end{prop}
\begin{proof}
The first assertion follows from Theorem \ref{Hol:D(X):B_{c_0}}. By Proposition \ref{prop:unbounded_D_X}, if $X$ is not isomorphic to $c_0$, then the set $\mathbb{D}_X$ is unbounded, and hence not weakly bounded. Thus, there exists $x^* \in X^*$ such that $\sup_{z \in \mathbb{D}_X} |x^* (z)| = \infty$, while $x^*$ is clearly uniformly continuous on $\mathbb{D}_X$. The converse implication is obvious.
\end{proof}
Note that $\ell_1 = c_0^*$ embeds isometrically in $\mathcal{H}^\infty (B_{c_0})$. As the Banach algebras $\mathcal{H}^\infty (B_{c_0})$ and $\mathcal{H}^\infty (\mathbb{D}_X)$ are isometrically isomorphic, the image of $\ell_1$ under the isometric isomorphism $\Psi$ from $\mathcal{H}^\infty (B_{c_0})$ onto $\mathcal{H}^\infty (\mathbb{D}_X)$ forms a closed subspace of $\mathcal{H}^\infty (\mathbb{D}_X)$. As a matter of fact, we will see from the next result that the image $\Psi (\ell_1)$ consists of the elements of $\mathcal{H}^\infty (\mathbb{D}_X)$ which are linear on $X$.
\begin{prop}\label{prop:image_of_ell_1}
Let $X$ be a Banach space with a normalized Schauder basis $\{e_j\}_{j\in\mathbb{N}}$. Then
\[
\Psi (\ell_1) = \mathcal{H}^\infty (\mathbb{D}_X) \cap X^*.
\]
\end{prop}
\begin{proof}
Note that $\Psi$ maps linear functionals on $c_0$ to linear functionals on $X$. Indeed, if $y = (y_n)_{n=1}^\infty \in \ell_1$, then
\begin{equation*}
| \Psi (y) (x) | =\left| \sum_{n=1}^\infty y_n e_n^* (x) \right| \leq \left(\sup_{j\in\mathbb{N}} \|e_j^* \| \right) \sum_{n=1}^\infty |y_n| = \left(\sup_{j\in\mathbb{N}} \|e_j^* \| \right)\|y\|_{\ell_1},
\end{equation*}
for every $x \in B_X$. This implies that $\Psi (\ell_1) \subset \mathcal{H}^\infty (\mathbb{D}_X) \cap X^*$.
Conversely, let $x^* \in \mathcal{H}^\infty (\mathbb{D}_X) \cap X^*$.
Choose $\theta_n \in \mathbb{R}$ such that $x^* (e_n) = |x^* (e_n)| e^{i\theta_n} $ for each $n \in \mathbb{N}$. Fix $\varepsilon \in (0,1)$ and let $z_k = \sum_{j=1}^{k} (1-\varepsilon) e^{-i\theta_j} e_j$ for each $k \in \mathbb{N}$. Then $z_k \in \mathbb{D}_X$ for each $k \in \mathbb{N}$ and
\[
\sum_{n=1}^{k} (1-\varepsilon ) |x^* (e_n)| = |x^*(z_k)| \leq \|x^*\|_{\mathbb{D}_X}
\]
for every $k \in \mathbb{N}$ and $\varepsilon \in (0,1)$. By letting $k \rightarrow \infty$ and $\varepsilon \rightarrow 0$ we obtain that $\| (x^* (e_j) )_{j=1}^{\infty} \|_{\ell_1} \leq \|x^*\|_{\mathbb{D}_X}$.
Thus, $(x^* (e_j) )_{j=1}^{\infty}$ belongs to $\ell_1$. On the other hand,
\[
|x^* (x) | = \left| \sum_{j=1}^\infty x^* (e_j) e_j^* (x) \right| \leq \sum_{j=1}^\infty | x^* (e_j)| = \| (x^* (e_j) )_{j=1}^{\infty} \|_{\ell_1}
\]
for every $x \in \mathbb{D}_X$; hence $\| (x^* (e_j) )_{j=1}^{\infty} \|_{\ell_1} = \|x^*\|_{\mathbb{D}_X}$ and $\Psi ((x^* (e_n) )_{n=1}^\infty ) = x^*$.
\end{proof}
Let us observe from $\big(\sup\limits_{j\in\mathbb{N}} \|e_j^* \| \big)^{-1} B_X \subset \mathbb{D}_X$ that we have
$\sup_{x \in B_X} |x^* (x)| \leq \left(\sup_{j\in\mathbb{N}} \|e_j^* \| \right) \| x^* \|_{\mathbb{D}_X}$ for every $x^* \in X^\sharp$. Moreover, if we consider the bounded linear operator $T : \mathcal{H}^\infty (\mathbb{D}_X) \rightarrow \mathcal{H}^\infty (B_X)$ defined as $(T f)(x) = f\big( \big(\sup\limits_{j\in\mathbb{N}} \|e_j^* \| \big)^{-1} x \big)$ for every $f \in \mathcal{H}^\infty (\mathbb{D}_X)$ and $x \in B_X$, then $\|T f \|_{B_X} \leq \|f\|_{\mathbb{D}_X}$ for every $f \in \mathcal{H}^\infty (\mathbb{D}_X)$ and $T$ is injective due to the Principle of Analytic Continuation. However, $T$ is not a monomorphism (injection with closed image) in general. For example, let $X = \ell_p$ with $1 < p < \infty$. For each $n \in \mathbb{N}$, put $f_n = \Psi ( (\underbrace{n^{-1}, \ldots, n^{-1}}_{n\text{-many}}, 0,\ldots) ) \in \mathcal{H}^\infty (\ell_p \cap \mathbb{D}^\mathbb{N})$. Then $\|f_n\|_{\ell_p\cap\mathbb{D}^\mathbb{N}} = \| ({n^{-1}, \ldots, n^{-1}}, 0,\ldots)\|_{\ell_1} = 1$ for every $n \in \mathbb{N}$. However,
\begin{align*}
|(T f_n) (y)| =\left| \sum_{j=1}^{n} \frac{1}{n} y_j \right| \leq \left( \sum_{j=1}^{n} \left(\frac{1}{n}\right)^q \right)^{\frac{1}{q}} \left( \sum_{j=1}^n |y_j|^p \right)^{\frac{1}{p}} \leq \frac{1}{n^{1-\frac{1}{q}}}
\end{align*}
for every $y \in B_{\ell_p}$, where $q$ is the conjugate of $p$. Thus, $\|T f_n \|_{B_{\ell_p}} \leq \frac{1}{n^{1-\frac{1}{q}}} \rightarrow 0$ as $n \rightarrow \infty$ while $\|f_n\|_{\ell_p \cap \mathbb{D}^\mathbb{N}} = 1$ for every $n \in \mathbb{N}$. This implies that $T : \mathcal{H}^\infty (\ell_p\cap\mathbb{D}^\mathbb{N}) \rightarrow \mathcal{H}^\infty (B_{\ell_p})$ is not a monomorphism.
\section{Fibers of $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$}\label{sec:fiber}
In this section, we study the fibers of the spectrum $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$, which may help to describe the analytic structure of the spectrum. To this end, as mentioned in Section \ref{Background}, we first consider the subspace $X^\sharp$ of the algebra $\mathcal{H}^\infty (\mathbb{D}_X)$ defined as in \eqref{X_sharp_U}:
\[
X^\sharp = \left(\mathcal{H}^\infty (\mathbb{D}_X) \cap X^* , \|\cdot\|_{\mathbb{D}_X} \right).
\]
It is clear that $X^\sharp$ is a Banach space endowed with the norm $\| \cdot \|_{\mathbb{D}_X}$, since a uniform limit of linear mappings on $\mathbb{D}_X$ is again linear.
Moreover, the structure of the space $X^\sharp$ is not as unfamiliar as it might seem: Proposition \ref{prop:image_of_ell_1} proves that $X^\sharp$ is actually nothing but $\Psi (\ell_1)$, and hence it is isometrically isomorphic to $\ell_1$.
\begin{prop}\label{ell_1:X^dagger}
If $X$ is a Banach space with a normalized basis $(e_j)_{j=1}^{\infty}$, then
\[
\| x^* \|_{\mathbb{D}_X} = \sum_{j=1}^{\infty} |x^* (e_j)|
\]
for every $x^* \in X^\sharp$.
In particular, the mapping $\tau : X^\sharp \rightarrow \ell_1$ defined as $\tau (x^*) = (x^* (e_j))_{j=1}^{\infty}$ is an isometric isomorphism.
\end{prop}
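The identity above is implicit in the proof of Proposition \ref{prop:image_of_ell_1}: for $x^* \in X^\sharp$, evaluating $x^*$ at the elements $z_k = \sum_{j=1}^{k} (1-\varepsilon) e^{-i \theta_j} e_j \in \mathbb{D}_X$ considered there yields
\[
\sum_{j=1}^{\infty} |x^* (e_j)| \leq \| x^* \|_{\mathbb{D}_X},
\]
while the triangle inequality gives $|x^* (x)| \leq \sum_{j=1}^{\infty} |x^* (e_j)| \, |e_j^* (x)| \leq \sum_{j=1}^{\infty} |x^* (e_j)|$ for every $x \in \mathbb{D}_X$, which proves the reverse inequality.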
To define fibers of $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$ as in Definition \ref{def:fibers_U}, we need to check that the restriction mapping $\pi : \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X)) \rightarrow \overline{B}_{(X^\sharp)^*}$ is surjective in this case. Before proceeding further, let us first clarify the relationship between the mappings $\iota : X \rightarrow c_0$, $\kappa : c_{00} \rightarrow X$ and the isometry $\tau: X^\sharp \rightarrow \ell_1$.
\begin{remark}\label{relation_tau_iota'}
Let $X$ be a Banach space with a normalized Schauder basis, and consider the following diagram:
\[
\begin{tikzcd}
{c_{00}}\arrow{r}{\kappa} &X \arrow{r}{\iota} \arrow[d,-, dashed]{} & c_0 \arrow[d,-, dashed]{}& \, \, \\
\, & X^* & \ell_1 \arrow{l}{\iota^*} \arrow[d,-, dashed]{} & X^\sharp \arrow{l}{\tau} \arrow[d,-, dashed]{} & \Psi (\ell_1) \ar[equal]{l} \arrow[d,-, dashed]{}\\
\, & \, & \ell_\infty \arrow{r}{\tau^*} & (X^\sharp)^* & \Psi (\ell_1)^* \ar[equal]{l} \\
\end{tikzcd}
\]
We have that
(a) $\iota \circ \kappa = \text{Id}_{c_{00}}$; (b) $\tau = (\Psi \vert_{\ell_1})^{-1}$; (c) $\iota^* = \tau^{-1}$; (d) $\kappa = \tau^* \vert_{c_{00}}$.
\end{remark}
Note that an element $x$ in $\mathbb{D}_X$ can be naturally considered as an element in $(X^\sharp)^*$ which sends $x^* \in X^\sharp$ to $x^* (x)$, and we observe from (c) of Remark \ref{relation_tau_iota'} that the image of $B_{c_{00}}$ under the mapping $\tau^*$ is contained in $\mathbb{D}_X$. One might ask if $\tau^*$ sends elements in $B_{c_0}$ to elements in $X$, as it does for elements in $B_{c_{00}}$. However, this is not the case when $X$ is $\ell_p$ with $1 \leq p < \infty$. Indeed, consider $u = \left( \frac{1}{2^{1/p}}, \frac{1}{3^{1/p}}, \ldots \right) \in B_{c_0}$. Then
\[
(\tau^* (u) )(e_n^*) = \langle u, \tau (e_n^*) \rangle = u_n = \frac{1}{(n+1)^{1/p}}
\]
for every $n \in \mathbb{N}$. It follows that if $\tau^* (u)$ belonged to $\ell_p$, then $\tau^* (u)$ would coincide with $u$, contradicting the fact that $u$ does not belong to $\ell_p$.
Now, we turn to the restriction mapping $\pi : \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X) ) \rightarrow \overline{B}_{(X^\sharp)^*}$, i.e., $\pi (\phi) = \phi \vert_{X^\sharp}$ for every $\phi \in \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X) )$. Since each $\phi \in \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X) ) $ is a homomorphism, for $x^* \in X^\sharp$ with $\|x^*\|_{\mathbb{D}_X} \leq 1$ we have $| \pi (\phi) (x^*) | = | \phi (x^*) | \leq 1.$
In other words, $\pi \left(\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X) ) \right)\subset \overline{B}_{(X^\sharp)^*}$.
The following remark proves that the restriction mapping $\pi: \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X) ) \rightarrow \overline{B}_{(X^\sharp)^*}$ is surjective by presenting a relationship between the mapping $\pi $ and the restriction mapping $\pi_{c_0} : \mathcal{M} (\mathcal{H}^\infty (B_{c_0}) ) \rightarrow \overline{B}_{\ell_\infty}$.
\begin{remark}\label{relation_through_tau} Let $X$ be a Banach space with a normalized Schauder basis.
Then the pairing between elements of the space $X^\sharp$ and the spectrum $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$ is related to the one between elements of $\ell_1$ and $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$. Indeed,
given $x^* \in X^\sharp$ and $\phi \in \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$, we have
\begin{align*}
\Phi (\phi) ( \tau (x^*)) &= \phi ( x \in \mathbb{D}_X \leadsto (\tau(x^*) \circ \iota) (x) ) \\
&= \phi \left( x \in \mathbb{D}_X \leadsto \sum_{j=1}^{\infty} x^*(e_j) e_j^* (x) \right) = \phi (x^*).
\end{align*}
In other words, if we denote by $\pi_{c_0}$ the natural surjective mapping from $\mathcal{M} (\mathcal{H}^\infty ( B_{c_0}))$ to $\overline{B}_{\ell_\infty}$, then $ \pi (\phi) (\cdot) = \pi_{c_0} (\Phi (\phi)) (\tau (\cdot) ) \, \text{ on } X^\sharp.$
Therefore, the following diagram commutes:
\[
\begin{tikzcd}[row sep=large]
\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X) ) \arrow{r}{\Phi} \arrow[swap]{d}{\pi} & \mathcal{M}(\mathcal{H}^\infty ( B_{c_0})) \arrow{d}{\pi_{c_0}} \\
\overline{B}_{(X^\sharp)^*} & \overline{B}_{\ell_\infty} \arrow{l}{\tau^* \vert_{\overline{B}_{\ell_\infty}} }
\end{tikzcd}
\]
In particular, the restriction mapping $\pi$ is surjective. Moreover, if $(\phi_\alpha)$ is a net in $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$ that converges to some $\phi$ in $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$, i.e., $\phi_\alpha (f) \rightarrow \phi(f)$ for every $f \in \mathcal{H}^\infty (\mathbb{D}_X)$, then $\pi (\phi_\alpha) \rightarrow \pi (\phi)$ in the $w ( (X^\sharp)^*, X^\sharp )$-topology.
\end{remark}
As the space $X^\sharp$ is not trivial and the restriction mapping $\pi : \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X)) \rightarrow \overline{B}_{(X^\sharp)^*}$ is surjective, we can restate Definition \ref{def:fibers_U} in the case of the Banach algebra $\mathcal{H}^\infty (\mathbb{D}_X)$ as follows.
\begin{definition}\label{def:fibers}
Let $X$ be a Banach space with a normalized basis. For $z \in \overline{B}_{(X^\sharp)^*}$, the \emph{fiber} $\mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X ))$ of the spectrum $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X ))$ at $z$ is defined as
\[
\mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X)) = \{ \phi \in \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X)) : \pi (\phi) = z \} = \pi^{-1} (z).
\]
\end{definition}
On the other hand, we can view $\mathbb{D}_X$ as a subset of $\overline{B}_{(X^\sharp)^*}$, as each $x \in \mathbb{D}_X$ induces a linear functional of norm at most one on $X^\sharp$, namely $x^* \in X^\sharp \mapsto x^* (x)$. From this point of view, we can consider the $w ((X^\sharp)^*, X^\sharp )$-closure of $\mathbb{D}_X$ in the dual space $(X^\sharp)^*$.
For convenience, we shall simply denote $w ((X^\sharp)^*, X^\sharp )$ by $\sigma$ throughout this paper.
In the following result, we show that the $\sigma$-closure of $\mathbb{D}_X$ coincides with the closed ball $\overline{B}_{(X^\sharp)^*}$, which in turn is identified with $\overline{B}_{\ell_\infty}$ via $\tau^*$.
\begin{prop}\label{pi:projection}
Let $X$ be a Banach space with a normalized basis $(e_j)_{j=1}^{\infty}$, then
\begin{equation}\label{the_image_of_pi_projection}
\overline{B}_{(X^\sharp)^*} = \{ z \in (X^\sharp)^* : |z(e_j^*)| \leq 1 \,\, \text{for every} \,\, j \in \mathbb{N} \} = \overline{\mathbb{D}}_X^{\sigma}.
\end{equation}
\end{prop}
\begin{proof}
The first equality in \eqref{the_image_of_pi_projection} is clear by definition of the isometry $\tau^*: \ell_\infty \rightarrow (X^\sharp)^*$.
We claim that $\overline{B}_{(X^\sharp)^*} \subset \overline{\mathbb{D}}_X^{\sigma}$.
Let $z \in \overline{B}_{(X^\sharp)^*}$ be given. Then there exists $u = (u_n) \in \overline{B}_{\ell_{\infty}}$ such that $\tau^* (u) = z$. That is,
\[
z(x^*) = \sum_{j=1}^{\infty} u_j x^* (e_j) \quad (x^* \in X^\sharp).
\]
Let $x^* \in X^\sharp$ be fixed. Then
\[
x^* \left( \sum_{j=1}^{m} \left(1-\frac{1}{m}\right) u_j e_j \right) = \sum_{j=1}^{m} \left(1-\frac{1}{m}\right) u_j x^* (e_j ) \rightarrow \sum_{j=1}^\infty u_j x^* (e_j) = z(x^*),
\]
as $m \rightarrow \infty$.
This implies that the sequence $\big( \sum_{j=1}^{m} \left(1-\frac{1}{m}\right) u_j e_j \big)_{m=1}^\infty \subset \mathbb{D}_X$ converges in $\sigma$-topology to $z$.
Thus, $\overline{B}_{(X^\sharp)^*} \subset \overline{\mathbb{D}}_X^{\sigma}$.
Next, it is clear that $\mathbb{D}_X \subset \pi (\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X)))$. As $\pi$ is $w^*$-$\sigma$-continuous and $\mathcal{M}(\mathcal{H}^\infty (\mathbb{D}_X))$ is compact, we get $ \overline{\mathbb{D}}_X^{\sigma} \subset \pi (\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X)))$.
Therefore, we obtain that $\overline{B}_{(X^\sharp)^*} \subset \overline{\mathbb{D}}_X^{\sigma} \subset \pi \left(\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X) ) \right) \subset \overline{B}_{(X^\sharp)^*}$, and hence all these sets coincide.
\end{proof}
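Let us spell out the identification behind the above proof, as it will be used repeatedly: every $z \in \overline{B}_{(X^\sharp)^*}$ is completely determined by the bounded sequence $(z(e_j^*))_{j=1}^{\infty} \in \overline{B}_{\ell_\infty}$. Indeed, writing $z = \tau^* (u)$ with $u = (u_j)_{j=1}^\infty \in \overline{B}_{\ell_\infty}$, we have
\[
z (x^*) = \sum_{j=1}^{\infty} u_j x^* (e_j) \quad \text{and} \quad u_j = z (e_j^*) \qquad (x^* \in X^\sharp, \, j \in \mathbb{N}),
\]
where the second identity follows by evaluating the first one at $x^* = e_j^*$.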
Given $z \in \mathbb{D}_X$, the simplest example of a homomorphism in the fiber $\mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$ is the point evaluation homomorphism at $z$.
As $\mathbb{D}_X$ is convex and balanced, the result from \cite[Theorem 1.3]{GGM} guarantees that there exists an isometric multiplicative extension operator $AB : \mathcal{H}^\infty (\mathbb{D}_X) \rightarrow \mathcal{H}^\infty ( \accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}} )$,
where $\accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}}$ denotes the $\| \cdot \|_{X^{**}}$-interior of $\overline{\mathbb{D}}_X^{w^*}$ in $X^{**}$ and $\overline{\mathbb{D}}_X^{w^*}$ denotes the $w(X^{**},X^*)$-closure of $\mathbb{D}_X$ in $X^{**}$.
Thus, any $z \in \accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}}$ induces a point evaluation homomorphism $\widetilde{\delta}_z$ in $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$;
\[
\widetilde{\delta}_z : f \leadsto AB(f) (z) \quad (f \in \mathcal{H}^\infty (\mathbb{D}_X)).
\]
\begin{prop}\label{prop:point_evaluation}
Given $z$ in $\accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}}$, we have that $\Phi (\widetilde{\delta_z})$ is the point evaluation at $(\tau^{-1})^* (z) \in B_{\ell_\infty}$ in $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$. Conversely, if $u \in B_{\ell_{\infty}}$, then $\Phi^{-1} (\widetilde{\delta_u} )$ is the point evaluation at $\tau^* (u) \in \accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}}$ in $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$.
\end{prop}
\begin{proof}
As both statements can be proved very similarly, we only prove the first one. Let $f \in \mathcal{H}^\infty (B_{c_0})$ and $z \in \accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}}$ be given. Then
\begin{align*}
\Phi (\widetilde{\delta_z}) (f) = \widetilde{\delta_z} (f \circ \iota) = (AB (f \circ \iota) ) (z) = AB(f) (AB(\iota) (z)),
\end{align*}
where the last equality holds due to \cite[Corollary 2.2]{CGKM} since $c_0$ is symmetric regular. As the mapping $\iota : X \rightarrow c_0$ is linear, we have that $AB(\iota) = \iota^{**}$;
hence
\[
\Phi (\widetilde{\delta_z}) (f) = AB(f) (\iota^{**} (z)) = AB(f) ((\tau^{-1})^{*} (z))
\]
for every $f \in \mathcal{H}^\infty (B_{c_0})$ and the proof is completed.
\end{proof}
The following result gives a different description of the fibers of $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$ by relating them to fibers of $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ through the isometric isomorphism $\Phi$ between $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X) )$ and $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$.
\begin{prop}\label{prop:fiber_representation}
Let $z \in \overline{\mathbb{D}}_X^{\sigma}$. Then we have
\begin{equation*}
\mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X)) = \Phi^{-1} \left( \mathcal{M}_{(\tau^{-1})^* (z)} (\mathcal{H}^\infty (B_{c_0})) \right) = \Phi^{-1} \left( \mathcal{M}_{(z(e_1^*), z(e_2^*), \ldots)} (\mathcal{H}^\infty (B_{c_0})) \right).
\end{equation*}
\end{prop}
\begin{proof}
Let $\phi \in \mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$. For $y \in \ell_1$, choose $x^* \in X^\sharp$ so that $\tau (x^*) = y$. Observe that
\begin{align*}
\Phi (\phi) (y) = \Phi (\phi) ( \tau (x^*)) &= \phi (x^*) \quad (\text{by Remark \ref{relation_through_tau}}) \\
&= (\tau^{-1})^* (z) (\tau (x^*)) = (\tau^{-1})^* (z) (y);
\end{align*}
thus, $\Phi (\phi) \in \mathcal{M}_{(\tau^{-1})^* (z)} (\mathcal{H}^\infty (B_{c_0}))$. This proves the inclusion $\Phi ( \mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X)) ) \subset \mathcal{M}_{(\tau^{-1})^* (z)} (\mathcal{H}^\infty (B_{c_0}))$, and the reverse inclusion follows by applying the same argument to $\Phi^{-1}$. Finally, the second equality is immediate from the identity $(\tau^{-1})^* (z) = (z(e_1^*), z(e_2^*), \ldots)$.
\end{proof}
Let us finish this section by showing that the fibers over points of $\overline{\mathbb{D}}_X^{\sigma}$ can be quite large, as is the case for the algebra $\mathcal{H}^\infty (B_X)$. As a matter of fact, it is well known that the fiber $\mathcal{M}_u (\mathcal{H}^\infty (B_{c_0}))$ contains a copy of $\beta \mathbb{N} \setminus \mathbb{N}$ for every $u \in \overline{B}_{\ell_{\infty}}$, where $\beta \mathbb{N}$ is the Stone-\v{C}ech compactification of $\mathbb{N}$ \cite[Theorem 11.1]{ACG91}. So, one direct consequence of Proposition \ref{prop:fiber_representation} is the following.
\begin{prop}
For any $z \in \overline{\mathbb{D}}_X^{\sigma}$, the set $\beta \mathbb{N} \setminus \mathbb{N}$ can be injected into the fiber $\mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$.
\end{prop}
Moreover, it was recently proved independently in \cite{CFGJM} and \cite{DS} that for each $u \in \overline{B}_{\ell_\infty}$, there exists a Gleason isometric analytic injection of $B_{\ell_\infty}$ into the fiber $\mathcal{M}_u (\mathcal{H}^\infty (B_{c_0}))$ (see \cite{ADLM} or Section \ref{sec:Gleason} for the definition of the Gleason metric). Combining this result with Proposition \ref{prop:fiber_representation}, we obtain the following.
\begin{prop}\label{cor:injection_into_the_fiber_2}
For any $z \in \overline{\mathbb{D}}_X^{\sigma}$, there exists a Gleason isometric analytic injection $\Xi$ of $B_{\ell_{\infty}}$ into the fiber $\mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$.
\end{prop}
\section{Cluster values of $\mathcal{H}^\infty (\mathbb{D}_X)$}\label{sec:cluster}
Although they are mentioned in Section \ref{Background}, we start this section by recalling some definitions. For a point $z \in \overline{B}_{X^{**}}$ and $f \in \mathcal{H}^\infty (B_X)$, the \emph{cluster set} $Cl_{B_X} (f,z)$ is defined as the set of all limits of values of $f$ along nets in $B_X$ converging weak-star to $z$, i.e.,
\[
\{ \lambda \in \mathbb{C}: \, \text{there exists a net} \,\, (x_{\alpha}) \subset B_X \,\, \text{such that} \,\, x_{\alpha} \xrightarrow{w^*} z \,\, \text{and} \,\, f(x_\alpha) \rightarrow \lambda \}.
\]
It is well known that we always get the following inclusion:
\begin{equation}\label{CVT_inclusion_0}
Cl_{B_X} (f,z) \subset \widehat{f} ( \mathcal{M}_{z} (\mathcal{H}^\infty (B_X)) )
\end{equation}
for every $f \in \mathcal{H}^\infty (B_X)$ and $z \in \overline{B}_{X^{**}}$, where $\widehat{f} : \mathcal{M}( \mathcal{H}^\infty (B_X))\rightarrow \mathbb{C}$ is the Gelfand transform of $f$. When both sets in \eqref{CVT_inclusion_0} coincide for every $f \in \mathcal{H}^\infty (B_X)$, we say that the \emph{Cluster Value Theorem holds for $\mathcal{H}^\infty (B_X)$ at $z$}.
In a similar manner, we can define the cluster set for $f \in \mathcal{H}^\infty (\mathbb{D}_X)$ and $z \in \overline{\mathbb{D}}_X^{\sigma}$:
\begin{definition}
Let $X$ be a Banach space with a normalized basis. For each $z \in \overline{\mathbb{D}}_X^{\sigma}$ and $f \in \mathcal{H}^\infty (\mathbb{D}_X)$, the \emph{cluster set $Cl_{\mathbb{D}_X} (f, z)$ of $f$ at $z$} is defined as
\[
\{ \lambda \in \mathbb{C} : \, \text{there exists a net} \,\, (x_{\alpha}) \subset \mathbb{D}_X \,\, \text{such that} \,\, x_{\alpha} \xrightarrow{\,\sigma\,} z \,\, \text{and} \,\, f(x_\alpha) \rightarrow \lambda \}.
\]
\end{definition}
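For a simple example (not taken from the references, but immediate from the definitions), note that the coordinate functionals $e_n^*$ belong to $\mathcal{H}^\infty (\mathbb{D}_X)$ and are $\sigma$-continuous; hence, for every $z \in \overline{\mathbb{D}}_X^{\sigma}$ and every $n \in \mathbb{N}$,
\[
Cl_{\mathbb{D}_X} (e_n^*, z) = \{ z(e_n^*) \}.
\]
As we will see below, cluster sets at boundary points can be considerably larger in general.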
One can observe that the following inclusion holds:
\begin{equation}\label{CVT_inclusion_for_D_X}
Cl_{\mathbb{D}_X} (f,z) \subset \widehat{f} ( \mathcal{M}_{z} (\mathcal{H}^\infty (\mathbb{D}_X)) )
\end{equation}
for every $f \in \mathcal{H}^\infty (\mathbb{D}_X)$ and $z \in \overline{\mathbb{D}}_X^{\sigma}$. Indeed, if $\lambda \in Cl_{\mathbb{D}_X} (f, z)$, there exists a net $(x_\alpha) \subset \mathbb{D}_X$ such that $x_\alpha \xrightarrow{\,\sigma\,} z$ and $f(x_\alpha) \rightarrow \lambda$. Passing to a subnet, if necessary, we may assume that $\delta_{x_\alpha} \rightarrow \phi$ in $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$. Thus, $\widehat{f} (\phi) = \phi (f) = \lambda$. For $x^* \in X^\sharp$, note that $\phi (x^*) = \lim_\alpha \delta_{x_\alpha} (x^*) = z(x^*)$, which implies that $\pi (\phi) = z$.
Now, we are ready to define the Cluster Value Theorem for the Banach algebra $\mathcal{H}^\infty (\mathbb{D}_X)$.
\begin{definition}\label{def:CVT_D_X}
Let $X$ be a Banach space with a normalized basis. We shall say that the \emph{Cluster Value Theorem holds for $\mathcal{H}^\infty (\mathbb{D}_X)$} when both sets in \eqref{CVT_inclusion_for_D_X} coincide for every $f \in \mathcal{H}^\infty (\mathbb{D}_X)$ and $z \in \overline{\mathbb{D}}_X^{\sigma}$.
\end{definition}
It is natural to investigate the relation between cluster sets from $\mathcal{H}^\infty (B_{c_0})$ and the ones from $\mathcal{H}^\infty (\mathbb{D}_X)$.
\begin{prop}\label{prop:same_cluster_sets}
Let $X$ be a Banach space with a normalized basis. Given $f \in \mathcal{H}^\infty (B_{c_0})$ and $u \in \overline{B}_{\ell_\infty}$, we have
\begin{equation}
Cl_{B_{c_0}} (f, u) = Cl_{\mathbb{D}_X} (\Psi(f), \tau^*(u)).
\end{equation}
\end{prop}
\begin{proof}
Let $\lambda \in Cl_{B_{c_0}} (f, u)$ be given. By the separability of $\ell_1$, we may choose a sequence $(x_n) \subset B_{c_0}$ such that $x_n \xrightarrow{w(\ell_\infty, \,\,\ell_1)} u$ and $f(x_n) \rightarrow \lambda$. Taking $w_n \in B_{c_{00}}$ for each $n \in \mathbb{N}$ so that $\|x_n - w_n \| \rightarrow 0$ and $|f(x_n)-f(w_n)| \rightarrow 0$ as $n \rightarrow \infty$, we have that $w_n \xrightarrow{w(\ell_\infty, \,\,\ell_1)} u$ and $f(w_n) \rightarrow \lambda$. By Remark \ref{relation_tau_iota'} and the $w(\ell_\infty,\ell_1)$-to-$\sigma$ continuity of the mapping $\tau^*$,
\[
\kappa (w_n) = \tau^* (w_n) \xrightarrow{\,\sigma\,} \tau^* (u) \,\text{ and }\, \Psi (f) (\kappa (w_n) ) = f ( \iota (\kappa (w_n))) = f(w_n) \rightarrow \lambda.
\]
It follows that $\lambda \in Cl_{\mathbb{D}_X} (\Psi(f), \tau^*(u))$; thus $Cl_{B_{c_0}} (f, u) \subset Cl_{\mathbb{D}_X} (\Psi(f), \tau^*(u))$.
Conversely, let $\lambda \in Cl_{\mathbb{D}_X} (\Psi(f), \tau^*(u))$. By \eqref{CVT_inclusion_for_D_X}, there exists $\phi \in \mathcal{M}_{\tau^* (u)} (\mathcal{H}^\infty (\mathbb{D}_X))$ such that $\lambda = \phi (\Psi (f))$. Note that $\lambda = \Phi (\phi ) (f)$ and, by Proposition \ref{prop:fiber_representation},
\[
\Phi (\phi) \in \Phi \left( \mathcal{M}_{\tau^* (u)} (\mathcal{H}^\infty (\mathbb{D}_X)) \right) = \mathcal{M}_u (\mathcal{H}^\infty ( B_{c_0})).
\]
This implies that $\lambda \in \widehat{f} ( \mathcal{M}_u (\mathcal{H}^\infty ( B_{c_0})) )$. As the Cluster Value Theorem holds for $\mathcal{H}^\infty (B_{c_0})$ \cite[Theorem 5.1]{ACGLM}, we obtain that $\lambda \in Cl_{B_{c_0}} (f, u)$. Hence, we conclude that $Cl_{\mathbb{D}_X} (\Psi(f), \tau^*(u)) \subset Cl_{B_{c_0}} (f, u)$.
\end{proof}
Note that Proposition \ref{prop:same_cluster_sets} and \cite[Lemma 2.1]{ACGLM} imply that each cluster set $Cl_{\mathbb{D}_X} (f, z)$ is a compact connected set. Now, we are ready to prove the Cluster Value Theorem for $\mathcal{H}^\infty (\mathbb{D}_X)$ provided that a Banach space $X$ has a normalized basis.
\begin{theorem}
Let $X$ be a Banach space with a normalized basis. Then the Cluster Value Theorem holds for $\mathcal{H}^\infty (\mathbb{D}_X)$.
\end{theorem}
\begin{proof} Let $z \in \overline{\mathbb{D}}_X^{\sigma}$, $f \in \mathcal{H}^\infty (\mathbb{D}_X)$ and $\phi \in \mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$ be given.
Thanks to the inclusion \eqref{CVT_inclusion_for_D_X}, it suffices to prove that $\lambda := \phi (f) \in Cl_{\mathbb{D}_X} (f,z)$. Note from Proposition \ref{prop:fiber_representation} that $\Phi (\phi) \in \mathcal{M}_{(\tau^{-1})^* (z)} (\mathcal{H}^\infty (B_{c_0}))$. Since the Cluster Value Theorem for $\mathcal{H}^\infty (B_{c_0})$ holds at every $u \in \overline{B}_{\ell_{\infty}}$,
we get that
\begin{align*}
\lambda = \phi(f) = \Phi (\phi) ( \Psi^{-1} (f) ) \in Cl_{B_{c_0}} ( \Psi^{-1} (f), (\tau^{-1})^* (z)).
\end{align*}
By Proposition \ref{prop:same_cluster_sets}, we have that
\[
Cl_{B_{c_0}} ( \Psi^{-1} (f), (\tau^{-1})^* (z)) = Cl_{\mathbb{D}_X} (f, z).
\]
Thus, $\lambda \in Cl_{\mathbb{D}_X} (f,z)$, which completes the proof.
\end{proof}
Recently, T. R. Alves and D. Carando \cite{AC} studied algebraic structure in the set of holomorphic functions which have large cluster sets at every point. Given a Banach space $X$ and a set $M \subset \overline{B}_{X^{**}}$, they defined the following sets:
\begin{align*}
\mathcal{F}_M (B_X) &= \{f \in \mathcal{H}^\infty (B_X) : \cap_{x \in M} Cl_{B_X} (f, x) \text{ contains a disk centered at } 0\}, \\
\mathcal{E}_M (B_X) &= \{ f \in \mathcal{H}^\infty (B_X) : Cl_{B_X} (f, x) \text{ contains a disk for each } x \in M \}.
\end{align*}
Notice that the difference between $\mathcal{F}_M (B_X) $ and $\mathcal{E}_M (B_X)$ is that a single disk, centered at $0$, is fixed in the definition of $\mathcal{F}_M (B_X)$, while in $\mathcal{E}_M (B_X)$ the disks contained in two different cluster sets may have different radii (and need not be centered at $0$).
The authors proved that if $X$ is a separable Banach space, then $\mathcal{F}_{\overline{B}_{X^{**}}} (B_X)$ is strongly $\mathfrak{c}$-algebrable and that for $0 < \rho < 1$, the set $\mathcal{E}_{\rho B_{c_0}} (B_{c_0})$ is strongly $\mathfrak{c}$-algebrable and contains an almost isometric copy of $\ell_1$.
Similarly, we can define such sets of holomorphic functions in $\mathcal{H}^\infty (\mathbb{D}_X)$ whose cluster sets are large.
\begin{definition}
Let $X$ be a Banach space with a normalized basis and $M \subset \overline{\mathbb{D}}_X^{\sigma}$. We define sets $\mathcal{F}_{M} (\mathbb{D}_X)$ and $\mathcal{E}_{M} (\mathbb{D}_X)$ as
\begin{align*}
\mathcal{F}_M (\mathbb{D}_X) &= \{f \in \mathcal{H}^\infty (\mathbb{D}_X) : \cap_{x \in M} Cl_{\mathbb{D}_X} (f, x) \text{ contains a disk centered at } 0\}, \\
\mathcal{E}_M (\mathbb{D}_X) &= \{ f \in \mathcal{H}^\infty (\mathbb{D}_X) : Cl_{\mathbb{D}_X} (f, x) \text{ contains a disk for each } x \in M \}.
\end{align*}
\end{definition}
We can describe these sets as images of $\mathcal{F}_M (B_{c_0})$ and $\mathcal{E}_M (B_{c_0})$ under the isometric isomorphism $\Psi$.
\begin{prop}\label{prop:F_M_and_E_M}
Let $X$ be a Banach space with a normalized basis, and let $M \subset \overline{\mathbb{D}}_X^{\sigma}$.
Then
\begin{enumerate} \setlength\itemsep{0.4em}
\item[(a)] $\mathcal{F}_{M} (\mathbb{D}_X) = \Psi ( \mathcal{F}_{(\tau^* \vert_{\overline{B}_{\ell_{\infty}}} )^{-1} (M)} (B_{c_0}) )$;
\item[(b)] $\mathcal{E}_{M} (\mathbb{D}_X) = \Psi ( \mathcal{E}_{(\tau^* \vert_{\overline{B}_{\ell_{\infty}}} )^{-1} (M)} (B_{c_0}) )$.
\end{enumerate}
\end{prop}
\begin{proof}
As (a) and (b) can be proved in a similar way, we here give a proof of (a): Let $f \in \mathcal{F}_{(\tau^* \vert_{\overline{B}_{\ell_{\infty}}} )^{-1} (M)} (B_{c_0})$ be given. Then there exists $r >0$ such that
\[
\mathbb{D} (0, r) = \{ \lambda\in \mathbb{C} : |\lambda| < r \} \subset \bigcap\limits_{x \in (\tau^* \vert_{\overline{B}_{\ell_{\infty}}} )^{-1} (M) } Cl_{B_{c_0}} (f, x).
\]
On the other hand, by Proposition \ref{prop:same_cluster_sets}
\begin{align*}
\bigcap\limits_{x \in (\tau^* \vert_{\overline{B}_{\ell_{\infty}}} )^{-1} (M) } Cl_{B_{c_0}} (f, x) &= \bigcap\limits_{x \in (\tau^* \vert_{\overline{B}_{\ell_{\infty}}} )^{-1} (M) } Cl_{\mathbb{D}_X} (\Psi (f), \tau^* (x) ) \\
&= \bigcap\limits_{w \in M} Cl_{\mathbb{D}_X} (\Psi (f), w ).
\end{align*}
This implies that $ \mathbb{D} (0, r) \subset \cap_{w \in M } Cl_{\mathbb{D}_X} (\Psi (f), w )$; hence $\Psi (f) \in \mathcal{F}_M (\mathbb{D}_X)$. Since $f$ is chosen arbitrarily, we have that
$\Psi ( \mathcal{F}_{(\tau^* \vert_{\overline{B}_{\ell_{\infty}}} )^{-1} (M)} (B_{c_0}) ) \subset \mathcal{F}_{M} (\mathbb{D}_X)$. The reverse inclusion can be obtained similarly.
\end{proof}
We now present an improvement of \cite[Theorem 1.4]{AC} concerning $\mathcal{E}_{\rho B_{c_0}} (B_{c_0})$: by slightly modifying its proof, one can pass from $\rho B_{c_0}$ to the bidual ball $\rho \overline{B}_{\ell_\infty}$. For the sake of completeness, we include the proof.
\begin{prop}\label{prop:general_AC}
For $0 < \rho < 1$, the set $\mathcal{E}_{\rho \overline{B}_{\ell_\infty}} (B_{c_0}) \cup \{0\}$ contains an almost isometric copy of $\ell_1$.
\end{prop}
\begin{proof}
Choose a sequence $(r_n)_{n\in\mathbb{N}} \subset (\rho,1)$ with $r_n \rightarrow 1$ fast enough that $\delta := \prod_{n \in \mathbb{N}} r_n > 0$. For each infinite subset $\Theta = \{ n_1 < n_2 < \cdots \} \subset \mathbb{N}$, define $f_{\Theta } : B_{c_0} \rightarrow \mathbb{C}$ by
\[
f_{\Theta} (x) = \delta^{-1} \prod_{j =1}^\infty \frac{r_j - x_{n_j}}{1-r_j x_{n_j}} \quad (x \in B_{c_0}).
\]
It is clear that $f_{\Theta} \in \mathcal{H}^\infty (B_{c_0})$ with $\|f_{\Theta} \| \leq \delta^{-1}$. Now, choose a family $\{\Theta_k : k \in \mathbb{N} \}$ of pairwise disjoint infinite subsets of $\mathbb{N}$. Let $T : \ell_1 \rightarrow \mathcal{H}^\infty (B_{c_0})$ be defined as
\[
T (\beta)(x) = \sum_{k=1}^\infty \beta_k f_{\Theta_k} (x)
\]
for every $\beta = (\beta_n)_{n =1}^\infty \in \ell_1$ and $x \in B_{c_0}$. Notice that $T$ is a well-defined bounded linear mapping with $\| T (\beta) \| \leq \delta^{-1} \|\beta\|_{\ell_1}$ for every $\beta \in \ell_1$.
Let us denote $\Theta_k$ by $\{ n_1^k < n_2^k < \cdots \}$ and choose $m_j \in \mathbb{N}$ so that $m_j > \max_{1 \leq i, \ell \leq j} n_i^\ell$ for each $j \in \mathbb{N}$.
Suppose $\beta \neq 0$ and fix $N \in \mathbb{N}$ so that $(\beta_1, \ldots, \beta_N, 0, 0, \ldots ) \neq 0$. For fixed $1 \leq k \leq N$ and $j > N$, let $s_{j,k} \in \mathbb{N}$ be an integer such that $n_{\ell}^k > m_j$ for every $\ell \geq s_{j,k}$. Choose $\mu_1, \ldots, \mu_N \in \mathbb{D}$ arbitrarily and take $\lambda_{n_j^k} \in \mathbb{D}$ such that
\[
\mu_k = \frac{ r_j - \lambda_{n_j^k}}{1-r_j \lambda_{n_j^k}}
\]
for each $k = 1, \ldots, N$ and $j \in \mathbb{N}$.
Let us fix $y = (y_n) \in \rho {\overline{B}_{\ell_\infty}}$. Given $j > N$, let us consider $z_j^N \in {\ell_\infty}$ given by
\[
z_j^N = \sum_{k=1}^{N} \left[ \left( 1 - \frac{|y_{n_j^k}|}{j} \right) \lambda_{n_j^k} - y_{n_j^k} \right] e_{n_j^k} + (P_{m_j} ( y) - y),
\]
where $P_n : \ell_\infty \rightarrow \ell_\infty$ denotes the projection onto the first $n$ coordinates for each $n \in \mathbb{N}$. Notice that $\| y+ z_j^N \| \leq \max \big\{ \max\limits_{1 \leq k \leq N} | \lambda_{n_j^k} | , \|y\| \big\} < 1$; hence $y + z_j^N \in B_{c_0}$ for each $j > N $.
As $P_{m_j} ( y) \xrightarrow{w(\ell_\infty, \,\, \ell_1)} y$ as $j \rightarrow \infty$, we have that $z_j^N \xrightarrow{w^*} 0$; hence $y + z_j^N \xrightarrow{w^*} y$ as $j \rightarrow \infty$. Moreover, for fixed $1 \leq k \leq N$ and $j > N$,
\begin{align}\label{eq:y+z_j^N}
(y + z_j^N)_{n_\ell^k} =
\begin{cases}
y_{n_\ell^k}, &\quad \text{if } \ell = 1, \ldots, j-1;\\
\left( 1-\frac{|y_{n_j^k}|}{j} \right) \lambda_{n_j^k}, &\quad \text{if } \ell = j; \\
(P_{m_j}( y))_{n_{\ell}^k}, &\quad \text{if } \ell > j,
\end{cases}
\end{align}
where $(y+ z_j^N)_{n_{\ell}^k}$ and $(P_{m_j}( y))_{n_{\ell}^k}$ denote the $n_\ell^k$-th coordinate of $y+z_j^N$ and $P_{m_j} ( y)$, respectively.
Observe from \eqref{eq:y+z_j^N} that for fixed $1 \leq k \leq N$ and $j > N$,
\begin{align*}
f_{\Theta_k} (y+ z_j^N) &= \delta^{-1} \prod_{\ell =1}^\infty \frac{ r_\ell - (y + z_j^N)_{n_\ell^k}}{1- r_\ell (y + z_j^N)_{n_\ell^k}} \\
&= \delta^{-1} \left( \prod_{\ell =1}^{j-1} \frac{ r_\ell - y_{n_\ell^k}}{1- r_\ell y_{n_\ell^k}} \right) \left( \frac{ r_j - \left( 1 - j^{-1}{|y_{n_j^k}|} \right) \lambda_{n_j^k} }{ 1 - r_j \left( 1 - j^{-1}{|y_{n_j^k}|} \right) \lambda_{n_j^k} } \right) A_{k, j},
\end{align*}
where
\[
A_{k,j} = \prod_{\ell=j+1}^\infty \frac{r_\ell - (P_{m_j}( y))_{n_{\ell}^k} }{1 - r_\ell (P_{m_j}( y))_{n_{\ell}^k} }.
\]
On the other hand, if $k > N$ and $j > N$, then
\[
f_{\Theta_k} (y + z_j^N) = f_{\Theta_k} ( P_{m_j} (y))
\]
since $\{ n_\ell^k : \ell \in \mathbb{N} \} \cap \{ n_j^1, \ldots, n_j^N \} = \emptyset$.
It follows that for $j > N$,
\begin{align}\label{eq:Psi_beta}
T (\beta) ( y + z_j^N )
&= \sum_{k=1}^{N} \beta_k \delta^{-1} \frac{ r_j - \left( 1 - j^{-1}{|y_{n_j^k}|} \right) \lambda_{n_j^k} }{ 1 - r_j \left( 1 - j^{-1}{|y_{n_j^k}|} \right) \lambda_{n_j^k} } \prod_{\ell =1}^{j-1} \frac{r_{\ell} - y_{n_{\ell}^k}}{1 - r_{\ell} y_{n_{\ell}^k}} A_{k,j} \\
&\hspace{7cm}+ \sum_{k=N+1}^\infty \beta_k f_{\Theta_k} (P_{m_j} ( y)). \nonumber
\end{align}
By definition of $(s_{j,k})_{j \in \mathbb{N}}$, we have that
\[
A_{k, j } = \left( \prod_{\ell = j+1}^{s_{j,k} -1} \frac{r_\ell - y_{n_{\ell}^k} }{1 - r_\ell y_{n_{\ell}^k} } \right) \prod_{\ell = s_{j,k} }^\infty r_\ell
\]
for each $1 \leq k \leq N$.
Passing to a subsequence, which we denote again by $(s_{j,k})_{j \in \mathbb{N}}$, we may assume that $(s_{j,k})_{j \in \mathbb{N}}$ is increasing for each $k \in \mathbb{N}$. Consequently, $A_{k, j } \rightarrow 1$ as $j \rightarrow \infty$ for each $1 \leq k \leq N$.
As $c_{N, j} := \sum_{k=N+1}^\infty \beta_k f_{\Theta_k} (P_{m_j} ( y))$ lies in $\| \beta\| \overline{\mathbb{D}}$ for each $j > N$, passing to a subsequence and keeping the same index, we may assume that $c_{N, j} \rightarrow c_N$ as $j \rightarrow \infty$ for some $c_N \in \| \beta \| \overline{\mathbb{D}}$.
On the other hand,
\begin{equation}\label{eq:delta_k}
\left| \frac{ r_j - \left( 1 - j^{-1}{|y_{n_j^k}|} \right) \lambda_{n_j^k} }{ 1 - r_j \left( 1 - j^{-1}{|y_{n_j^k}|} \right) \lambda_{n_j^k} } - \mu_k \right| \xrightarrow[j \rightarrow \infty]{} 0.
\end{equation}
So, we have from \eqref{eq:Psi_beta} and \eqref{eq:delta_k} that
\[
T (\beta) (y + z_j^N ) \xrightarrow[j \rightarrow \infty]{} \sum_{k=1}^N \beta_k \delta^{-1} \mu_k \delta_k + c_N,
\]
where $\displaystyle \delta_k = \prod_{\ell=1}^\infty \frac{r_\ell - y_{n_{\ell}^k} }{ 1 - r_\ell y_{n_{\ell}^k} } \neq 0 $.
Since $\mu_1, \ldots, \mu_N \in \mathbb{D}$ were chosen arbitrarily, this proves that the disk centered at $c_N$ of radius $\| (\beta_k \delta^{-1} \delta_k)_{k=1}^N \|_1$ is contained in the cluster set of $T (\beta)$ at $y$. In the case $y = 0$, we have that $\delta_k = \delta$ for every $k$. Noting that $|c_N| \rightarrow 0$ as $N \rightarrow \infty$ as well, we conclude that $\| T ( \beta ) \| \geq \| \beta \|$. It follows that $\| \beta \| \leq \| T (\beta)\| \leq \delta^{-1} \|\beta\|$. Since $\delta$ can be taken arbitrarily close to $1$, this completes the proof.
\end{proof}
We come back to the sets just defined with respect to the algebra $\mathcal{H}^\infty (\mathbb{D}_X)$. Having in mind that the image of a strongly $\mathfrak{c}$-algebrable set under an isometric isomorphism is again strongly $\mathfrak{c}$-algebrable and that
$\tau^* (\rho \overline{B}_{\ell_{\infty}} ) = \rho \tau^* ( \overline{B}_{\ell_{\infty}} )= \rho \overline{\mathbb{D}}_X^{\sigma}$ for $0 < \rho < 1$,
we obtain the following result by combining Propositions \ref{prop:F_M_and_E_M} and \ref{prop:general_AC} with the aforementioned results in \cite{AC}.
\begin{theorem}
Let $X$ be a Banach space with a normalized basis. Then
\begin{enumerate} \setlength\itemsep{0.4em}
\item $\mathcal{F}_{\overline{\mathbb{D}}_X^{\sigma}} (\mathbb{D}_X)$ is strongly $\mathfrak{c}$-algebrable,
\item for each $\rho \in (0,1)$, $\mathcal{E}_{\rho \overline{\mathbb{D}}_X^{\sigma}} (\mathbb{D}_X)$ is strongly $\mathfrak{c}$-algebrable and contains an almost isometric copy of $\ell_1$.
\end{enumerate}
\end{theorem}
\section{Gleason parts of $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$.}\label{sec:Gleason}
Before we start, let us introduce some terminology. Given an open set $U$ of a Banach space $X$, let $\mathcal{A}$ be a Banach algebra of holomorphic functions on $U$. If we consider the spectrum $\mathcal{M} (\mathcal{A})$ as a subset of $\mathcal{A}^*$, then $\mathcal{M} (\mathcal{A})$ can be naturally equipped with the metric induced from $\mathcal{A}^*$. In other words, for $\phi, \psi \in \mathcal{M} (\mathcal{A})$,
\[
\| \phi - \psi \|_{\mathcal{A}} := \sup\{ |\phi(f)- \psi(f)| : f \in \mathcal{A} \text{ with } \|f\| \leq 1 \}.
\]
From this metric, which we call the \emph{Gleason metric}, we can partition the space $\mathcal{M} (\mathcal{A})$ into equivalence classes by the relation $\phi \sim \psi $ if and only if $\| \phi - \psi \|_{\mathcal{A}} < 2$ for $\phi, \psi \in \mathcal{M} (\mathcal{A})$. These equivalence classes are called the \emph{Gleason parts} of the spectrum $\mathcal{M} (\mathcal{A})$. In other words, the Gleason part containing $\phi \in \mathcal{M} (\mathcal{A})$, which we will denote by $\mathcal{GP} (\phi, \mathcal{A})$, is the set
$\mathcal{GP} (\phi, \mathcal{A}) = \{ \psi \in \mathcal{M} (\mathcal{A}) : \| \phi - \psi \|_{\mathcal{A}} < 2 \}.$
It is worth mentioning that for $\phi, \psi \in \mathcal{M} (\mathcal{A})$ the \emph{pseudohyperbolic distance}, defined by $\rho_{\mathcal{A}} (\phi ,\psi) := \sup \{ |\phi (f)| : f \in \mathcal{A} \text{ with } \|f\| \leq 1, \psi (f) = 0 \}$, satisfies
\[
\| \phi - \psi \|_{\mathcal{A}} = \frac{2 -2 \sqrt{1 - \rho_{\mathcal{A}}(\phi, \psi)^2}}{\rho_{\mathcal{A}} (\phi, \psi)}.
\]
It is well known that $\| \phi - \psi \|_{\mathcal{A}} < 2$ if and only if $\rho_{\mathcal{A}} (\phi, \psi ) < 1$ for $\phi$ and $\psi$ in $\mathcal{M}( \mathcal{A})$. For further information on Gleason parts, we refer to \cite{B}.
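To illustrate the relation between the two quantities with a simple computation, if $\rho_{\mathcal{A}} (\phi, \psi) = 1/2$, then
\[
\| \phi - \psi \|_{\mathcal{A}} = \frac{2 - 2 \sqrt{1 - 1/4}}{1/2} = 4 - 2\sqrt{3} < 2,
\]
while the right-hand side of the displayed formula increases to $2$ as $\rho_{\mathcal{A}} (\phi, \psi) \rightarrow 1$, which explains the equivalence just mentioned.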
In this section, whenever there is no possible confusion, we shall simply write $\| \phi - \psi \|$ and $\rho (\phi, \psi)$ for $\phi, \psi \in \mathcal{M} (\mathcal{A})$ omitting their subscripts.
As $\Psi$ is an isometric isomorphism from $\mathcal{H}^\infty (B_{c_0})$ onto $\mathcal{H}^\infty (\mathbb{D}_X)$, it is clear that
the isomorphism ${\Phi}$ from $\mathcal{M} ( \mathcal{H}^\infty ( \mathbb{D}_X ))$ onto $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ is a Gleason isometry, that is,
$\| \Phi ( \phi ) - \Phi (\psi) \| = \| \phi - \psi \|$ for all $\phi$ and $\psi$ in $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$.
This proves the following result, which will be used frequently in the sequel.
\begin{prop}\label{isomorphic_image_of_Gleason_parts}
Let $X$ be a Banach space with a normalized basis. Then
\begin{equation*}
\Phi( \mathcal{GP} (\phi, \mathcal{H}^\infty (\mathbb{D}_X)) ) = \mathcal{GP} (\Phi (\phi), \mathcal{H}^\infty (B_{c_0}) )
\end{equation*}
for every $\phi \in \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X ))$.
\end{prop}
Let us keep to use the notation $\delta_u$ for the point evaluation at $u \in B_{\ell_{\infty}}$ in both $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ and $\mathcal{M} (\mathcal{A}_u (B_{c_0}))$. Notice from \cite[Theorem 2.4]{ADLM} that
\begin{equation*}
\rho_{\mathcal{A}_u (B_{c_0})}( \delta_u , \delta_v ) = \sup_{ n \in \mathbb{N}} \left| \frac{u_n - v_n}{1 - \overline{u_n}v_n}\right|
\end{equation*}
for every $u = (u_n), v = (v_n) \in \overline{B}_{\ell_\infty}$. If we consider the canonical restriction $\Upsilon$ from $\mathcal{M} ( \mathcal{H}^\infty (B_{X}))$ to $\mathcal{M} (\mathcal{A}_u (B_{X}))$, then it is true in general that $\rho_{\mathcal{H}^\infty (B_{c_0})} (\phi, \psi) \geq \rho_{\mathcal{A}_u (B_{c_0})} (\Upsilon (\phi), \Upsilon (\psi))$ for every $\phi, \psi \in \mathcal{M} (\mathcal{H}^\infty (B_{X}))$. From this, we can observe that
\begin{equation}\label{Gleason_inclusion}
\rho_{\mathcal{H}^\infty (B_{c_0})} (\delta_u, \delta_v) \geq \rho_{\mathcal{A}_u (B_{c_0})} (\Upsilon(\delta_u), \Upsilon(\delta_v) ) = \sup_{ n \in \mathbb{N}} \left| \frac{u_n - v_n}{1 - \overline{u_n}v_n}\right|
\end{equation}
for every $u, v \in B_{\ell_\infty}$. As a matter of fact, B.J. Cole and T.W. Gamelin proved in \cite[Equation 6.1]{CG} that
\begin{equation}\label{prop:distance_point_evaluations}
\rho_{\mathcal{H}^\infty (B_{c_0})} (\delta_u, \delta_v) = \sup_{ n \in \mathbb{N}} \left| \frac{u_n - v_n}{1 - \overline{u_n}v_n}\right|.
\end{equation}
Let us give a different proof of this fact. Given $u, v \in B_{\ell_\infty}$, let us fix $r \in (0,1)$ so that $\| u\|, \|v \| < r $. Given $f \in \mathcal{H}^\infty (B_{c_0})$ with $\|f\| \leq 1$ and $AB(f) (u) = 0$, consider $f_r : r^{-1} B_{c_0} \rightarrow \mathbb{C}$ defined as $f_r (x) = f(r x)$ for every $x \in r^{-1} B_{c_0}$. It follows that $f_r \in \mathcal{H}^\infty (r^{-1} B_{c_0})$; hence $f_r \in \mathcal{A}_u (B_{c_0})$. By the definition of the Aron-Berner extension, it is clear that $AB(f_r) (r^{-1} u) = 0$. Thus,
\begin{align*}
|AB(f) (v) | = |AB(f_r) (r^{-1} v)| \leq \rho_{\mathcal{A}_u (B_{c_0})} (\delta_{r^{-1} u}, \delta_{ r^{-1} v} ) = \sup_{n \in \mathbb{N}} \left| \frac{r^{-1} u_n - r^{-1} v_n}{1 - r^{-2} \overline{u_n} v_n } \right|.
\end{align*}
By letting $r \rightarrow 1$, we obtain that $|AB (f) (v) | \leq \sup\limits_{ n \in \mathbb{N}} \left| \frac{u_n - v_n}{1 - \overline{u_n}v_n}\right|$. As such $f \in \mathcal{H}^\infty (B_{c_0})$ was arbitrary, this completes the proof.
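For instance, taking $v = 0$ in \eqref{prop:distance_point_evaluations} gives
\[
\rho_{\mathcal{H}^\infty (B_{c_0})} (\delta_u, \delta_0) = \sup_{n \in \mathbb{N}} |u_n| = \|u\|_{\infty} < 1
\]
for every $u \in B_{\ell_\infty}$; hence every point evaluation $\delta_u$ with $u \in B_{\ell_\infty}$ lies in the Gleason part of $\delta_0$.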
Combining Proposition \ref{isomorphic_image_of_Gleason_parts} with Proposition \ref{prop:point_evaluation} and equation \eqref{prop:distance_point_evaluations}, we can calculate the Gleason metric between point evaluation homomorphisms induced from $\accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}}$ in $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$.
\begin{prop}\label{prop:Gleason_parts_basics}
Let $X$ be a Banach space with a normalized basis $(e_j)_{j=1}^{\infty}$. Then for each $z, w \in \accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}}$, we have that
\[
\rho (\widetilde{\delta}_z, \widetilde{\delta}_w ) = \sup_{n \in \mathbb{N}} \left| \frac{z(e_n^*) - w(e_n^*)}{1 - \overline{z(e_n^*)} w(e_n^*)}\right|.
\]
In particular, the set $\{ \widetilde{\delta}_z : z \in \accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}} \}$ is contained in $\mathcal{GP} (\delta_0, \mathcal{H}^\infty (\mathbb{D}_X))$. Moreover, for a sequence $\bm{r} = (r_n)_{n=1}^\infty$ of positive reals with $\sup\limits_{n \in \mathbb{N}} r_n < 1$, the $\sigma$-closure of
$\mathbb{D}_X (\bm{r} )$
is contained in $\mathcal{GP} (\delta_0, \mathcal{H}^\infty (\mathbb{D}_X))$.
\end{prop}
\begin{proof}
Note that if $z \in \accentset{\circ\,\,\,\,\,}{\overline{\D}_X^{w^*}}$, then $\sup\limits_{n \in \mathbb{N}} |z(e_n^*)| < 1$; hence $\rho (\widetilde{\delta}_z, \delta_0) < 1$.
It remains to prove the last statement and this can be proved similarly as \cite[Proposition 1.2]{ADLM}. Let $\phi$ in the $\sigma$-closure of $\mathbb{D}_X (\bm {r})$ be given. Let $f \in \mathcal{H}^\infty (\mathbb{D}_X)$ be such that $f(0) = 0$ and $\|f \| = 1$. Given $\varepsilon > 0$ such that $\sup\limits_{n\in\mathbb{N}} r_n + \varepsilon < 1$, there exists $x_0 \in \mathbb{D}_X (\bm{r})$ such that $| \phi (f) - \delta_{x_0} (f) | < \varepsilon$. Observe that
\[
|\phi (f) - \delta_0 (f)| \leq \varepsilon + |\delta_{x_0} (f)-\delta_0 (f)| \leq \varepsilon + \rho (\delta_0, \delta_{x_0}) < 1,
\]
since $\rho (\delta_0, \delta_{x_0}) = \sup\limits_{n \in \mathbb{N}} |e_n^* (x_0)| \leq \sup\limits_{n \in \mathbb{N}} r_n$.
\end{proof}
Moreover, we can obtain the following result analogous to \cite[Proposition 1.1]{ADLM}.
\begin{prop} Let $X$ be a Banach space with a normalized basis $(e_j)_{j=1}^{\infty}$. If $\phi \in \mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$ and $\psi \in \mathcal{M}_w (\mathcal{H}^\infty (\mathbb{D}_X))$, where $z \in \overline{\mathbb{D}}_X^{\sigma}$ and $w \in \overline{\mathbb{D}}_X^{\sigma}$ satisfy $\sup\limits_{j \in \mathbb{N}} |z(e_j^*)| = 1$ and $\sup\limits_{j \in \mathbb{N}} |w(e_j^*)| < 1 $, then $\phi$ and $\psi$ are in different Gleason parts.
\end{prop}
\begin{proof}
Note that $\Phi (\phi)$ lies in $\mathcal{M}_{(\tau^{-1})^* (z)} (\mathcal{H}^\infty (B_{c_0}))$ and $\Phi (\psi)$ in $\mathcal{M}_{(\tau^{-1})^* (w)} (\mathcal{H}^\infty (B_{c_0}))$. As $\| (\tau^{-1})^* (z) \| = 1$ and $\| (\tau^{-1})^* (w) \| < 1$, the homomorphisms $\Phi(\phi)$ and $\Phi(\psi)$ lie in different Gleason parts \cite[Proposition 1.1]{ADLM}. Thus, Proposition \ref{isomorphic_image_of_Gleason_parts} completes the proof.
\end{proof}
There are results \cite[Remark 3.1, Proposition 3.2]{ADLM} related to the Gleason part containing a homomorphism in $\mathcal{M}(\mathcal{H}^\infty (B_{c_0}))$ lying in a fiber over a boundary point. With the aid of Propositions \ref{prop:fiber_representation} and \ref{isomorphic_image_of_Gleason_parts}, we can translate these results to our setting as follows.
\begin{prop} Let $X$ be a Banach space with a normalized basis, and $z, w \in \overline{\mathbb{D}}_X^{\sigma}$ with $\sup\limits_{n \in \mathbb{N}} |z(e_n^*)| = \sup\limits_{n \in \mathbb{N}} |w(e_n^*)| = 1$.
\begin{enumerate}
\item If $\mathcal{GP} ( \delta_{ (\tau^{-1})^* (z) }, \mathcal{A}_u (B_{c_0})) \neq \mathcal{GP} ( \delta_{ (\tau^{-1})^* (w) }, \mathcal{A}_u (B_{c_0})) $, then we have
\[
\mathcal{GP} ( \phi, \mathcal{H}^\infty (\mathbb{D}_X)) \neq \mathcal{GP} ( \psi, \mathcal{H}^\infty (\mathbb{D}_X)),
\]
for any $\phi \in \mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$ and $\psi \in \mathcal{M}_w (\mathcal{H}^\infty (\mathbb{D}_X))$.
In particular, if $z \in \overline{\mathbb{D}}_X^{\sigma}$ satisfies that $|z(e_j^*)| = 1$ for every $j \in \mathbb{N}$, then
\[
\mathcal{GP} (\phi, \mathcal{H}^\infty (\mathbb{D}_X)) \subset \mathcal{M}_z ( \mathcal{H}^\infty (\mathbb{D}_X))
\]
for any $\phi \in \mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$.
\item If $\mathcal{GP} ( \delta_{ (\tau^{-1})^* (z) }, \mathcal{A}_u (B_{c_0})) = \mathcal{GP} ( \delta_{ (\tau^{-1})^* (w) }, \mathcal{A}_u (B_{c_0})) $, then there exists $\phi \in \mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$ and $\psi \in \mathcal{M}_w (\mathcal{H}^\infty (\mathbb{D}_X))$ such that
\[
\mathcal{GP} ( \phi, \mathcal{H}^\infty (\mathbb{D}_X)) = \mathcal{GP} ( \psi, \mathcal{H}^\infty (\mathbb{D}_X)).
\]
\end{enumerate}
\end{prop}
To present a result which can be viewed as a generalization of the second item in the previous proposition, we will follow the arguments in \cite[Proposition 3.3]{ADLM}. Recall that a basis $\{e_j\}_{j=1}^\infty$ of a Banach space $X$ is said to be \emph{symmetric} if $\{e_{\theta(j)} \}_{j \in \mathbb{N}}$ is equivalent to $\{e_j\}_{j \in \mathbb{N}}$ for every permutation $\theta$ of $\mathbb{N}$. For the moment, let $X$ be a Banach space with a symmetric basis $\{e_j\}_{j=1}^\infty$.
Given $b \in \mathbb{D}$ and $x \in \mathbb{D}_X $, let us denote by $(b,x)$ the element
\[
b e_1 + \sum_{j=1}^{\infty} e_j^*(x) e_{j+1} \in \mathbb{D}_X,
\]
which is a well-defined element of $\mathbb{D}_X$ since the basis $\{e_j\}_{j=1}^\infty$ is symmetric.
Recall \cite[Lemma 2.9]{AFGM} that for given $b \in \mathbb{D}$ and $u \in \overline{B}_{\ell_{\infty}}$, the mapping $R_b : \mathcal{M}_u (\mathcal{H}^\infty( B_{c_0})) \rightarrow \mathcal{M}_{(b,u_1,u_2,\ldots)} (\mathcal{H}^\infty( B_{c_0})) $ given by
\[
R_b (\varphi ) (f) = \varphi \left(x \in B_{c_0} \leadsto f(b, x_1, x_2, \ldots) \right),
\]
for every $\varphi \in \mathcal{M}_u (\mathcal{H}^\infty( B_{c_0}))$ and $f \in \mathcal{H}^\infty( B_{c_0})$,
is a homeomorphism. If we denote by $\widetilde{R_b}$ the mapping from $\mathcal{M} (\mathcal{H}^\infty( \mathbb{D}_X ))$ to $\mathcal{M} (\mathcal{H}^\infty( \mathbb{D}_X ))$ defined by
\[
\widetilde{R_b}(\phi) (f) = \phi (x \in \mathbb{D}_X \leadsto f(b,x) ),
\]
for every $\phi \in \mathcal{M} (\mathcal{H}^\infty( \mathbb{D}_X ))$ and $f \in \mathcal{H}^\infty (\mathbb{D}_X)$, then
\begin{align*}
R_b ( \Phi (\phi) ) (f) &= \Phi (\phi) \left(x \in B_{c_0} \leadsto f(b, x_1, x_2, \ldots) \right) \\
&= \phi \left( x \in \mathbb{D}_X \leadsto f (b, e_1^*(x), e_2^*(x), \ldots ) \right) \\
&= \phi (x \in \mathbb{D}_X \leadsto (f \circ \iota) (b,x)) \\
&= \Phi (\widetilde{R_b} (\phi) ) (f),
\end{align*}
for every $f \in \mathcal{H}^\infty (B_{c_0})$. This implies that $R_b \circ \Phi = \Phi \circ \widetilde{R_b}$ for every $b \in \mathbb{D}$, i.e., the following diagram commutes.
\[
\begin{tikzcd}[row sep=large]
\mathcal{M}(\mathcal{H}^\infty (\mathbb{D}_X)) \arrow{r}{\widetilde{R_b}} \arrow{d}{\Phi} &\mathcal{M}(\mathcal{H}^\infty (\mathbb{D}_X)) \arrow{d}{\Phi} \\
\mathcal{M}(\mathcal{H}^\infty (B_{c_0})) \arrow{r}{R_b} & \mathcal{M}(\mathcal{H}^\infty (B_{c_0})).
\end{tikzcd}
\]
Moreover, as a composition of homeomorphisms, $\widetilde{R_b}$ is a homeomorphism between the fibers $\mathcal{M}_z (\mathcal{H}^\infty( \mathbb{D}_X ))$ and $\mathcal{M}_{\tau^{*} (b, z(e_1^*), z(e_2^*), \ldots)} (\mathcal{H}^\infty( \mathbb{D}_X ))$. Using the homeomorphisms $\widetilde{R_b}$, we can observe, roughly speaking, that any Gleason part containing an element of a fiber over a point outside the distinguished boundary contains elements from \emph{disk}-many different fibers.
\begin{prop}\label{prop:Gleason_parts_disk_in_fiber}
Let $X$ be a Banach space with a normalized symmetric Schauder basis.
For each $b \in \mathbb{D}$, there exists $r_b > 0$ such that if $|c-b|< r_b$, then $\widetilde{R_b} (\phi)$ and $\widetilde{R_c} (\phi)$ are in the same Gleason part for all $\phi \in \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$.
\end{prop}
\begin{proof}
Consider $r_b >0 $ chosen so that $\| R_b (\varphi) - R_c (\varphi) \| < 2$ for all $\varphi \in \mathcal{M}(\mathcal{H}^\infty( B_{c_0}))$ whenever $|c-b| < r_b$ (see \cite[Proposition 3.3]{ADLM}). Let $\phi \in \mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$ be given. Then
\begin{align*}
\| \widetilde{R_b} (\phi) - \widetilde{R_c} (\phi) \| = \| \Phi^{-1} \left[ R_b ( \Phi (\phi) ) - R_c ( \Phi (\phi) ) \right] \| = \| R_b ( \Phi (\phi) ) - R_c ( \Phi (\phi) ) \| < 2,
\end{align*}
which implies that $\widetilde{R_b} (\phi)$ and $\widetilde{R_c} (\phi)$ are in the same Gleason part.
\end{proof}
From Propositions \ref{prop:Gleason_parts_basics} and \ref{prop:Gleason_parts_disk_in_fiber}, we notice in particular that there are homomorphisms $\varphi$ and $\psi$ in $\mathcal{M}(\mathcal{H}^\infty (\mathbb{D}_X))$ lying in different fibers but in the same Gleason part. Thus, it seems natural to ask the opposite question, i.e., whether there exist homomorphisms in $\mathcal{M}(\mathcal{H}^\infty (\mathbb{D}_X))$ which belong to the same fiber but not to the same Gleason part. Combining \cite[Corollary 3.12]{ADLM} with Propositions \ref{prop:point_evaluation} and \ref{prop:fiber_representation} gives the answer: there are uncountably many homomorphisms in the same fiber, each of them lying in a different Gleason part.
\begin{prop}
Let $X$ be a Banach space with a normalized Schauder basis. Let $z \in \tau^* (B_{\ell_\infty}) = B_{(X^\sharp)^*}$ be given. Then there exists an embedding $\xi : (\beta\mathbb{N} \setminus \mathbb{N}) \times \mathbb{D} \rightarrow \mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X))$ that is analytic on each slice $\{\theta\} \times \mathbb{D}$ and satisfies
\begin{enumerate}
\setlength\itemsep{0.4em}
\item[(a)] $\xi (\theta, \lambda) \notin \mathcal{GP} (\delta_z, \mathcal{H}^\infty (\mathbb{D}_X))$ for each $(\theta, \lambda) \in (\beta\mathbb{N} \setminus \mathbb{N}) \times \mathbb{D}$,
\item[(b)] $\mathcal{GP} (\xi (\theta, \lambda)) \cap \mathcal{GP} (\xi(\theta', \lambda')) = \emptyset$ for each $\theta \neq \theta' \in \beta\mathbb{N} \setminus \mathbb{N}$ and $\lambda, \lambda' \in \mathbb{D}$.
\end{enumerate}
\end{prop}
We finish this section by introducing a simple way to obtain homomorphisms in $\mathcal{M} (\mathcal{H}^\infty (\mathbb{D}_X))$ which lie in the same fiber but in different Gleason parts. As we use a bijective biholomorphic mapping on the open ball $B_{\ell_\infty}$ in the argument, the result only covers homomorphisms in fibers over $B_{\ell_\infty}$.
\begin{prop}
Let $u \in {B}_{\ell_\infty}$ be given. Then there exist homomorphisms $\varphi$ and $\psi$ in $\mathcal{M}_u (\mathcal{H}^\infty (B_{c_0} ))$ such that $\mathcal{GP} (\varphi, \mathcal{H}^\infty (B_{c_0})) \cap \mathcal{GP} (\psi, \mathcal{H}^\infty (B_{c_0})) =\emptyset$.
\end{prop}
\begin{proof}
Choose $(r_n) \subset (0,1)$ with $r_n \rightarrow 1$ rapidly enough that
\[
f(x) := \prod_{n=1}^\infty \frac{r_n -x_n}{1-r_n x_n} \quad (x = (x_n) \in B_{c_0})
\]
is a well-defined bounded holomorphic function on $B_{c_0}$. Note that $\frac{r_n -t}{1-r_n t} \rightarrow -1$ as $t \rightarrow 1$. For each $n \in \mathbb{N}$, choose $t_n \in (0,1)$ such that
\[
\left| \frac{r_n -t_n}{1-r_n t_n} + 1 \right| < \frac{1}{n}.
\]
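For completeness, the limit invoked to choose the $t_n$'s is a one-line check: since $1 - r_n t \geq 1 - r_n > 0$ for $t \in [0,1]$, the map $t \mapsto \frac{r_n - t}{1 - r_n t}$ is continuous on $[0,1]$ and

```latex
\[
\lim_{t \to 1^-} \frac{r_n - t}{1 - r_n t}
  = \frac{r_n - 1}{1 - r_n}
  = - \frac{1 - r_n}{1 - r_n}
  = -1 .
\]
```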
Now, consider the family of homomorphisms $\{ \delta_{t_n e_n} : n \in \mathbb{N}\}$ in $\mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ and let $\varphi$ be an accumulation point of this family in $\mathcal{M}(\mathcal{H}^\infty (B_{c_0}))$. As $t_n e_n \xrightarrow{w} 0$, we have that $\varphi \in \mathcal{M}_0(\mathcal{H}^\infty (B_{c_0}))$. However,
\begin{align*}
\| \varphi - \delta_0 \| \geq | \varphi(f) - f(0) | &= \lim_{n\rightarrow\infty} | f(t_n e_n) - f(0) | \\
&= \lim_{n\rightarrow\infty} \left| \left(\prod_{j \neq n} r_j \right) \frac{r_n - t_n}{1- r_n t_n} - \prod_{j =1}^\infty r_j \right| = 2 \prod_{j=1}^\infty r_j.
\end{align*}
Since the infinite product $\prod_{j=1}^\infty r_j$ can be made arbitrarily close to $1$, we conclude that $\| \varphi - \delta_0 \| = 2$, i.e., $\mathcal{GP} (\varphi, \mathcal{H}^\infty (B_{c_0})) \cap \mathcal{GP} (\delta_0, \mathcal{H}^\infty (B_{c_0})) =\emptyset$. Now, let us fix a point $u \in B_{\ell_\infty}$. Then there exists a bijective biholomorphic mapping $\Lambda_{u} : B_{\ell_\infty} \rightarrow B_{\ell_\infty}$ with $\Lambda_u (u) = 0$ (see \cite[Proposition 3.9]{ADLM}).
Consider the surjective Gleason isometry $C_{\Lambda_u}^* : \mathcal{M} (\mathcal{H}^\infty (B_{c_0})) \rightarrow \mathcal{M} (\mathcal{H}^\infty (B_{c_0}))$ with inverse $\left(C_{\Lambda_u}^*\right)^{-1} = C_{\Lambda_u^{-1}}^*$ which satisfies
\[
C_{\Lambda_u}^* (\mathcal{M}_v (\mathcal{H}^\infty (B_{c_0})) ) = \mathcal{M}_{\Lambda_u (v)} (\mathcal{H}^\infty (B_{c_0}))
\]
for every $v \in B_{\ell_\infty}$ (see \cite[Theorem 3.11]{ADLM}).
It follows that $ C_{\Lambda_u^{-1}}^* (\varphi)$ and $C_{\Lambda_u^{-1}}^* (\delta_0)$ belong to $\mathcal{M}_{\Lambda_u^{-1} (0)} (\mathcal{H}^\infty (B_{c_0})) = \mathcal{M}_{u} (\mathcal{H}^\infty (B_{c_0}))$ and satisfy
\[
\mathcal{GP} (C_{\Lambda_u^{-1}}^* (\varphi), \mathcal{H}^\infty (B_{c_0})) \cap \mathcal{GP} (C_{\Lambda_u^{-1}}^* (\delta_0), \mathcal{H}^\infty (B_{c_0})) =\emptyset.
\]
\end{proof}
Keeping in mind Propositions \ref{prop:fiber_representation} and \ref{isomorphic_image_of_Gleason_parts}, the above result yields the following:
\begin{prop}
Let $X$ be a Banach space with a normalized Schauder basis.
If $z \in \tau^* (B_{\ell_\infty}) = B_{(X^\sharp)^*}$, then there exist homomorphisms $\varphi$ and $\psi$ in $\mathcal{M}_z (\mathcal{H}^\infty (\mathbb{D}_X ))$ such that $\mathcal{GP} (\varphi, \mathcal{H}^\infty (\mathbb{D}_X)) \cap \mathcal{GP} (\psi, \mathcal{H}^\infty (\mathbb{D}_X)) =\emptyset$.
\end{prop}
\noindent \textbf{Acknowledgment:\ } We would like to thank Daniel Carando and Veronica Dimant for valuable comments and fruitful conversations.
\section{Introduction}\label{sec:intro}
In this paper, we present the European Court of Human Rights Open Data project (ECHR-OD). It aims at providing up-to-date and complete datasets about the decisions of the European Court of Human Rights since its creation. To be up-to-date and exhaustive, we developed a fully automated process to regenerate the entire datasets from scratch, starting from the collection of raw documents. As a result, the datasets are as complete as they can be in terms of the number of cases. The reproducibility makes it easy to add or remove information in future iterations of the datasets. To make it possible to check for corrupted data, bias, black swans or outliers, the whole dataset generation process is open-source and versioned.
In a second part, we present the results of a large experimental campaign performed on three flavors of the 13 datasets. We compared 13 standard machine learning algorithms for classification with regard to several performance metrics. Those results provide a baseline for future studies and some insights into which types of features are useful for predicting justice decisions. Notably, as in previous studies, we found that the textual description of a case contains useful elements for predicting the (binary) outcome. However, for the first time, we show that the judgment text is not as good as purely descriptive features at determining which article a given case is about, so that, for real-life predictive systems, the methodology of previous studies might not be suitable by itself.
Before presenting the project and datasets in Section \ref{sec:presentation}, we discuss in Section \ref{sec:context_related_work} the importance of data quality and the multiple issues with current datasets in complex fields such as the legal domain. The creation process is presented in detail in Section \ref{sec:process}. Section \ref{sec:experiments} is dedicated to the experiments on the datasets, while Section \ref{sec:conclusion} concludes the paper by discussing the remaining challenges and future work. The paper comes with Supplementary Material available on GitHub\footnote{\url{https://aquemy.github.io/ECHR-OD_project_supplementary_material/}}. It contains additional examples about the data format, as well as all secondary results of the experiments that we omitted due to space constraints.
\section{Context and related work}
\label{sec:context_related_work}
It is now well established that the recent and spectacular results of artificial intelligence, notably with deep learning (\cite{lecun2015deep}), are partly due to the availability of data, so called ``Big Data'', and the exponential growth of computational power.
For a very long time, the bias-variance tradeoff seemed to be an unbeatable problem: complex models reduce the bias but hurt the variance, while simple models suffer from high bias. In parallel, the regularization effect of additional data on complex models was also well known, as illustrated by Figure \ref{fig:reg}. The advent of representation learning (\cite{bengio2013representation}) made it possible to efficiently build extremely complex models with moderate variance by letting the algorithms discover the interesting ``patterns'' or ``representations'' needed to solve a given problem. However, this requires a considerable amount of data to correctly reduce the variance, and there is now a growing consensus on the fact that data are as important as algorithms. In particular, the quality of a model is bounded by the quality of the data it learns from (\cite{valiant1984theory,vapnik1999overview}). The availability and quality of data are thus of primary importance for researchers and practitioners.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{Fig1.png}
\caption{{\bf The regularization effect of data}
The ground truth (blue) is $\sin(x)$, modeled by a polynomial of degree 9 (green). On top, with only 11 training points, the model does not approximate the ground truth correctly, while at the bottom, with 100 training points, the model error is far lower.
\label{fig:reg}}
\end{figure}
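The effect illustrated in Figure \ref{fig:reg} can be reproduced with a short, self-contained sketch. The degree, sample sizes, noise level and domain below are arbitrary illustrative choices, not the settings behind the figure:

```python
import math
import random

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations
    (adequate for low degrees on [-1, 1])."""
    p = degree + 1
    # Normal equations A^T A c = A^T y for the Vandermonde matrix A.
    ata = [[sum(x ** (i + j) for x in xs) for j in range(p)] for i in range(p)]
    aty = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(p)]
    # Gaussian elimination with partial pivoting.
    for col in range(p):
        piv = max(range(col, p), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, p):
            f = ata[r][col] / ata[col][col]
            for c in range(col, p):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coef = [0.0] * p
    for i in reversed(range(p)):
        s = sum(ata[i][j] * coef[j] for j in range(i + 1, p))
        coef[i] = (aty[i] - s) / ata[i][i]
    return coef

def generalization_mse(n_train, degree=5, noise=0.3, seed=0):
    """Fit a polynomial to noisy samples of sin(pi*x) and return the
    mean squared error against the noiseless ground truth."""
    rng = random.Random(seed)
    xs = [-1 + 2 * i / (n_train - 1) for i in range(n_train)]
    ys = [math.sin(math.pi * x) + rng.gauss(0, noise) for x in xs]
    coef = fit_poly(xs, ys, degree)
    grid = [-1 + 2 * i / 200 for i in range(201)]
    return sum((sum(c * x ** k for k, c in enumerate(coef))
                - math.sin(math.pi * x)) ** 2 for x in grid) / len(grid)

mse_small = generalization_mse(8)    # few samples: the fit chases the noise
mse_large = generalization_mse(100)  # same model, more data: lower error
```

With the same model capacity, the error against the ground truth drops sharply as the sample grows, which is exactly the regularization effect of data.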
Beyond the scope of pure scientific interest, data governance, that is to say the lifecycle management of data, is particularly crucial for our modern societies (\cite{OLHEDE201846,tallon2013corporate,attard2018challenges}). What data are publicly available? Who produces, manages and manipulates those data? What is the quality of the data? What is the process of collection, curation and transformation? These are a few legitimate questions that a citizen, a company, or an institution may (should?) ask due to the ethical, political, social and legal concerns involved (\cite{kitchin2014data}). One can mention the recent General Data Protection Regulation\footnote{\url{https://www.eugdpr.org/}} (GDPR), a European Union law with global application, that tries to provide a legal framework addressing some of the above-mentioned questions. Besides privacy and business considerations, the quality of data is at the core of the quality of insights and decisions derived from the data.
\subsection{Open Data limits and validity threat}
The Open Data movement considers that data should be freely available and reusable by anyone (\cite{kitchin2014data}). Although it has strong beneficial effects at many levels (for instance, in science (\cite{10.1371/journal.pbio.1001195}), in civic engagement (\cite{kassen2013promising}), or in governmental transparency (\cite{janssen2012benefits})), some criticisms have emerged with regard to social questions (\cite{gurstein2011open,misuraca2014open}). Aside from those specific considerations, we argue that open data are not enough to ensure data quality and to fully address the questions mentioned in the previous paragraph. There are many ways data may be unsuitable for making decisions (either solely human based or assisted by any kind of model):
\begin{itemize}
\item {\bf Data sparsity and irrelevant information:} a dataset may lack the information needed to correctly make a decision or, on the contrary, contain a lot of irrelevant information. Some useful information might not be available at the moment the dataset is constructed. Also, it is hard to know {\it a priori} which pieces of information are useful to model or understand a phenomenon. This usually requires specific techniques such as feature selection (\cite{guyon2003introduction}) and several studies to obtain a big picture. Having a dataset, even an open one, without the whole process from collection to publication is usually not enough for practical applications.
\item {\bf Missing unexpected patterns or learning wrong patterns:} for some reason, a regime change might occur in the data and be learnt, or not, by a model. How can we know, {\bf without} expert knowledge, whether this change is valid or the result of a problem somewhere between the data collection (e.g. some sensors are not working or being recalibrated) and the data processing (e.g. a bug in the software used to sanitize the data)? Some points may also be outliers for good reasons (e.g. an improbable event that {\it eventually} occurs, often referred to as a {\it black swan}) or bad reasons (e.g. an error in processing the data, a problem in collecting the data). In the first case, the models {\bf must} take those data into account, while in the second case, they should be discarded. Open data cannot help, except in obvious cases.
\item {\bf Data corruption:} at any stage of the collection, processing and usage, the data may be partly corrupted. It may be hard to determine where and how the data has been corrupted even if the data are open.
\item {\bf Biased data:} from the collection process itself to the sanitization choices, bias is introduced. Having access to open data is no help for building better models and algorithms if data are biased from the beginning.
\end{itemize} For those reasons, the datasets presented in this paper are accompanied by the full creation process, carefully documented.
Let us illustrate some of those limitations through a concrete case that motivated this project. To validate and compare a new method for classification, we used some datasets provided along with an article published in an open data journal. The reproduction of the experiment was successful, but the results with the new method were inconsistent with our preliminary tests. By digging into the datasets, we found out that many input vectors were empty (up to 70\% for some datasets) and that the labels were not consistent among those empty vectors. In other words, many situations to classify were described with no information, and this absence of information could not be linked to a specific label. Furthermore, the prevalence in the group of degenerate vectors was rather high in comparison to the overall prevalence, resulting in relatively good metrics for standard classification algorithms. As the authors did not provide the raw data and the transformation process, there was no way to figure out what exactly caused this problem.
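A sanity check of the kind that would have revealed the issue above can be sketched in a few lines of Python. The data format, a list of (feature dict, binary label) pairs, is purely hypothetical:

```python
def audit(dataset):
    """Flag degenerate (empty) input vectors and compare the label
    prevalence inside and outside the degenerate group."""
    empty = [label for features, label in dataset if not features]
    rest = [label for features, label in dataset if features]
    return {
        "empty_ratio": len(empty) / len(dataset),
        "prevalence_empty": sum(empty) / len(empty) if empty else None,
        "prevalence_rest": sum(rest) / len(rest) if rest else None,
    }

# Toy dataset of (feature dict, binary label) pairs; 2 of 5 vectors are empty.
data = [({"f1": 1}, 1), ({}, 1), ({"f2": 3}, 0), ({}, 0), ({"f1": 2}, 1)]
```

A large `empty_ratio`, or a prevalence gap between the two groups, is a strong hint that the preprocessing pipeline corrupted the data.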
We would like to insist on the last two elements, namely data corruption and biased data, as they represent a major validity threat for machine learning-based retrospective studies. The concern can be articulated as follows:
\begin{enumerate}
\item a typical study focuses on a dataset transformed by a data pipeline. For a new method, a cross-validation is used with the preprocessed data. Results are used to make a comparison with other methods presented in previous studies.
\item in general, comparisons are done solely between methods, without taking into account the data pipeline.
\item however, the data preprocessing operations introduce bias and possibly data corruption, which can drastically affect the final results.
\end{enumerate}
It may be difficult to evaluate how much data preprocessing affects the final result, especially since the data pipelines are rarely reported. For instance, \cite{CRONE2006781} notice that algorithm hyperparameter tuning is performed in 16 out of 19 selected publications while only two publications study the impact of data preprocessing.
The impact of data preprocessing has been evaluated for multiple algorithms and operators. In \cite{CRONE2006781}, the authors showed that the accuracy obtained by Neural Networks, SVMs and Decision Trees is significantly impacted by data scaling, sampling, and continuous and categorical coding. A correlation link between under- and oversampling is also demonstrated. In \cite{NAWI201332}, three specific data preprocessing operators have been tested for neural networks. Although the authors do not provide the results without any preprocessing, the results show an important accuracy variability between the alternatives, thus implying an impact of data preprocessing. Using a representative sample of the available data can also lead to better overall results, as shown by \cite{DBLP:conf/bdas/NalepaMPHK18}. For a more comprehensive view of the impact of data preprocessing, we refer the reader to \cite{dasu2003exploratory}. Recently, \cite{DBLP:conf/dolap/Quemy19} focused on data pipeline optimization and found that, between no preprocessing step and a carefully selected data pipeline, the classification error is reduced by 66\% on average among four methods (SVM, Decision Tree, Neural Network and Random Forest) and three datasets (Iris, Breast, Wine). More interestingly, by changing the data pipeline, it is possible to obtain any possible ranking of the methods with regard to the error rate.
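A toy example makes the point concrete: with a distance-based classifier, a single scaling operator is enough to flip the prediction. This is an illustrative sketch, not one of the cited experimental setups:

```python
import math

def nearest_label(train, query):
    """1-nearest-neighbour by Euclidean distance;
    `train` is a list of ((f1, f2), label) pairs."""
    return min(train, key=lambda p: math.dist(p[0], query))[1]

def min_max_scale(train, query):
    """Rescale each feature to [0, 1] using the training ranges."""
    cols = list(zip(*[x for x, _ in train]))
    lo = [min(c) for c in cols]
    span = [max(c) - min(c) or 1.0 for c in cols]
    scale = lambda x: tuple((v - m) / s for v, m, s in zip(x, lo, span))
    return [(scale(x), y) for x, y in train], scale(query)

# The second feature has a much larger range and dominates raw distances.
train = [((0.0, 0.0), "A"), ((1.0, 1000.0), "B")]
query = (0.9, 200.0)

raw_pred = nearest_label(train, query)
scaled_pred = nearest_label(*min_max_scale(train, query))
```

On the raw data the large-range feature dominates the distance and the classifier answers "A"; after min-max scaling the same classifier answers "B". The model did not change, only the pipeline did.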
\subsection{Related work in legal analytics}
The legal environment is a {\it messy concept} (\citet{1637349}) that intrinsically poses some of the most challenging problems for the artificial intelligence research community: grey areas of interpretation, many exceptions, non-stationarity, the presence of deductive and inductive reasoning, non-classical logic, etc. For some years and in several areas of the law, ``quantitative'' approaches have been developed, based on the use of more or less explicit mathematical models. With the availability of massive data, those trends have been accentuated and brand-new opportunities are emerging at a sustained pace. Among the stakes of those studies, one can mention a better understanding of the legal system and of the consequences of some decisions on society, but also the possibility of decreasing the mass of litigation in a context of cost rationalization. For a survey of legal analytics methods, we refer the reader to \citet{DBLP:conf/adbis/Quemy17}.
In \citet{DBLP:conf/adbis/Quemy17}, the author also defines some practical problems in the field of legal analytics. First, the prediction problem consists in determining the outcome of a trial given some facts about the specific case and some knowledge about the legal environment. The second problem consists in building a legal justification, knowing the legal environment.
The justification problem should not be confused with the model explainability problem in machine learning. Understanding how a model makes a prediction is certainly useful to generate legal justifications; however, it is not enough. Many studies modeled justice decisions solely based on an estimation of the judges' ideology (\citet{33cbff13d8b040ec8827ac230aed9caf,segal1989,doi:10.2307/2960194,citeulike:1147857}). The way such a model makes decisions is rather clear; however, the model is unable to provide a satisfying legal justification. Those considerations are highly connected to a major debate among legal practitioners, namely legalism versus realism: are the judges objectively applying a method contained in the text (legalism) or do they create their own interpretation (realism)? The feasibility of a solution to the justification problem largely depends on the answer to this debate. Some discussions on the topic can be found in \citet{posner2010judges,JELS:JELS1255,tamanaha2012balanced,leiter2010legal}.
By releasing large datasets, with several flavors based on different types of features, along with the whole tunable preprocessing pipeline, we hope to gain a better understanding on how justice decisions are taken, what elements are useful for a prediction, and to quantify the balance between realism and legalism. We also hope to make a step toward solving the justification problem.
Predicting the outcome of a justice case is challenging, even for the best legal experts: 67.4\% and 58\% accuracy, respectively for the judges and the whole case decision, are reported in \cite{PCS3} for the Supreme Court of the United States (SCOTUS). Using crowds, the Fantasy Scotus\footnote{\url{https://fantasyscotus.lexpredict.com/}} project reached respectively 85.20\% and 84.85\% correct predictions.
SCOTUS has been widely studied, notably through the SCOTUS database\footnote{\url{http://scdb.wustl.edu/}} (\citet{Katz2017,martin_quinn_ruger_kim_2004,Guimer2011}). This database is composed of structured information about every case since the creation of the court, but no textual information from the opinions. The opinions and other related textual documents have also been studied separately for SCOTUS (\citet{Islam,AJPS,Sim}). Conversely, little to no similar work has been done in Europe. As far as we know, the only predictive model used textual information only (\citet{10.7717/peerj-cs.93}), even though more structured information is publicly available on HUDOC\footnote{\url{https://hudoc.echr.coe.int/eng}}. Using NLP techniques, the authors of \cite{10.7717/peerj-cs.93} achieve 79\% accuracy in predicting the decisions of the European Court of Human Rights (ECHR). They make the hypothesis that the textual content of the European Convention on Human Rights holds hints that influence the decision of the judge. They extracted from the judgment documents the top 2000 N-grams, calculated their similarity matrix based on the cosine measure and partitioned this matrix using Spectral Clustering to obtain a set of interpretable topics. The binary prediction was made using an SVM with a linear kernel. Contrary to the previous studies on SCOTUS, they found out that the formal facts are the most important predictive factors, which tends to favor legalism.
The data used in \citet{10.7717/peerj-cs.93} are far from exhaustive: only 3 articles are considered (3, 6 and 8), with respectively 250, 80 and 254 cases per article. For many data-driven algorithms, this might be too little to build a correct model. As far as we know, the project presented in this paper already provides the largest existing legal datasets directly consumable by machine learning algorithms. In particular, it includes several types of features: purely descriptive and textual.
\subsection{On the availability of legal data}
One of the prominent domains of application for deep learning is computer vision. In this area, it is relatively easy to obtain new data, even if the process of manually labelling training data may be laborious. Techniques like Data Augmentation make it possible to generate new data artificially by slightly modifying existing examples (\citet{doi:10.1198/10618600152418584}). For instance, for hand-written recognition, one may add small perturbations or apply transformations to an existing example, such as rotation, zoom, gaussian noise, etc. Those techniques are efficient at providing useful new examples but rely on an implicit assumption: the solution's behavior changes continuously with the initial data. In other words, slightly modifying the picture of an 8 by a small rotation or distortion still results in an 8. However, in many other fields, a small change in the data may result in a totally different outcome, so that one cannot use Data Augmentation to artificially grow the dataset. In general, the fields where Data Augmentation is not applicable are more complex, require more information to process or require sophisticated or {\it conscious} reasoning before being able to give an answer. Anyone can recognize a cat from a horse without processing any additional data than the picture itself, without elaborating a complex and explicit reasoning. Conversely, deciding if someone is guilty w.r.t. some available information and the current legal environment is not as natural as recognizing a cat, even for the best legal experts.
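As an illustration, simple label-preserving augmentations for a toy grayscale image can be sketched as follows (the glyph, the transformations and the noise level are arbitrary choices made for the sake of the example):

```python
import random

def augment(image, rng):
    """Return simple label-preserving variants of a grayscale image
    stored as nested lists: a horizontal flip and a noisy copy."""
    flipped = [row[::-1] for row in image]
    noisy = [[px + rng.gauss(0, 0.05) for px in row] for row in image]
    return [flipped, noisy]

digit = [[0.0, 0.0, 1.0],
         [0.0, 1.0, 1.0],
         [0.0, 0.0, 1.0]]  # a crude 3x3 glyph
variants = augment(digit, random.Random(0))
```

Both variants would still carry the original label; the point of the surrounding discussion is precisely that no analogous transformation exists for a legal case.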
We would like to draw attention to the fact that those fields being more complex for humans does not mean that current artificial intelligence techniques cannot outperform them: medical diagnosis is a complex field requiring expertise and explicit reasoning, yet humans are regularly beaten by the machine (\citet{Tiwari2016,Yu2016,Patel2016}). That said, in many complex fields, data augmentation techniques cannot be used, access to the data may be difficult, the data itself can be limited and, when open, the data are provided already curated, without access to the curation process. The ECHR-OD project has been created with those considerations in mind, and we will see with the experiments that, despite being exhaustive with respect to the number of cases, more data would have helped the models to perform better.
\section{ECHR-OD in brief}
\label{sec:presentation}
The {\bf ECHR-OD} project aims at providing an exhaustive and high-quality database and datasets for diverse problems, based on the European Court of Human Rights documents available on HUDOC. It appears important to us 1) to draw the attention of researchers to this domain, which has important consequences on society, 2) to provide a similar and more complete database for the European Union, comparable to what already exists for the United States, notably because the legal systems differ on both sides of the Atlantic.
The project is composed of five components:
\begin{enumerate}
\item {\bf Main website:} \url{https://echr-opendata.eu}
\item {\bf Download mirror:} \url{https://osf.io/52rhg/}
\item {\bf Creation process:} \url{https://github.com/aquemy/ECHR-OD_process}
\item {\bf Website sources:} \url{https://github.com/aquemy/ECHR-OD_website}
\item {\bf Data loader in python:} \url{https://github.com/aquemy/ECHR-OD_loader}
\end{enumerate}
ECHR-OD is guided by three core values: {\bf reusability}, {\bf quality} and {\bf availability}. To reach those objectives,
\begin{itemize}
\item each version of the datasets is carefully versioned and publicly available, including the intermediate files,
\item the entire process and all produced files are carefully documented,
\item the scripts to retrieve the raw documents and build the datasets from scratch are open-source and carefully versioned to maximize reproducibility and trust,
\item no data is manipulated by hand at any stage of the creation process.
\end{itemize}
At the submission date, the project offers 13 datasets for the classification problem. Datasets for other problems, such as structured prediction, will be available in the future. The datasets are available under the {\bf Open Database Licence (ODbL)}\footnote{Summary: \url{https://opendatacommons.org/licenses/odbl/summary/}}\footnote{Full-text: \url{https://opendatacommons.org/licenses/odbl/1.0/}}, which guarantees the rights to copy, distribute and use the database, to produce work from the database, and to modify, transform and build upon the database. The creation scripts and website sources are provided under the {\bf MIT Licence}.
\subsection{Datasets description}
In machine learning, the problem of classification consists in finding a mapping from an input vector space $\mathcal X$ to a discrete decision space $\mathcal Y$ using a set of examples. The binary classification problem is the special case in which $\mathcal Y$ has only two elements, while in multilabel classification, each element of $\mathcal X$ can have several labels. Classification is often viewed as an approximation problem in which we want to find an estimator $\bar J$ of an unknown mapping $J$ available only through a sample called the {\it training set}. A training set $(\mathbf{X}, \mathbf{y})$ consists of $N$ input vectors $\mathbf{X} = \{ \mathbf{x}_1, ..., \mathbf{x}_N \}$ and their associated correct classes $\mathbf{y} = \{ J(\mathbf{x}_i) + \varepsilon \}^{N}_{i=1}$, possibly distorted by some noise $\varepsilon$. Let $\mathcal{J}(\mathcal X, \mathcal Y)$ be the class of mappings from $\mathcal X$ to $ \mathcal Y$. Solving an instance of the classification problem consists in minimizing the classification error:
\begin{equation}
J^* = \underset{\bar J \in \mathcal{J}(\mathcal X, \mathcal Y)}{arg\,min} \sum_{\mathbf{x} \in \mathcal X} \mathbb{I}_{\{J(\mathbf x) \neq \bar J(\mathbf{x})\}}
\end{equation}
From the HUDOC database and judgment files, we created several datasets for three variants of the classification problem: binary classification, multiclass classification and multilabel classification. There are 11 datasets for binary, one for multiclass, and one for multilabel classification.
Each dataset comes in different flavors based on descriptive features, Bag-of-Words and TF-IDF representations:
\begin{enumerate}
\item {\bf Descriptive features:} structured features retrieved from HUDOC or deduced from the judgment document,
\item {\bf Bag-of-Words representation:} based on the top 5000 tokens (normalized $n$-grams for $n\in \{1,2,3,4\}$),
\item {\bf TF-IDF representation:} idem but with a TF-IDF transformation to weight the tokens,
\item {\bf Descriptive features + Bag-of-Words:} combination of both sets of features,
\item {\bf Descriptive features + TF-IDF:} combination of both sets of features.
\end{enumerate}
Those different representations exist to study the respective importance of descriptive and textual features in the predictive models built upon the datasets.
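For illustration, the construction of such textual representations can be sketched in plain Python. This is a deliberately simplified version, with unigrams and bigrams only, a tiny vocabulary and the classical $tf \cdot \log(N/df)$ weighting; it is not the actual pipeline of the project:

```python
import math
from collections import Counter

def ngrams(text, n_max=2):
    """Lower-cased word n-grams for n = 1..n_max."""
    words = text.lower().split()
    return [" ".join(words[i:i + n])
            for n in range(1, n_max + 1)
            for i in range(len(words) - n + 1)]

def build_representations(corpus, k=10):
    """Bag-of-Words over the k most frequent tokens, plus a TF-IDF
    reweighting with tf * log(N / df)."""
    token_lists = [ngrams(doc) for doc in corpus]
    counts = Counter(t for ts in token_lists for t in ts)
    vocab = [t for t, _ in counts.most_common(k)]
    df = {t: sum(t in ts for ts in token_lists) for t in vocab}
    n_docs = len(corpus)
    bow = [[ts.count(t) for t in vocab] for ts in token_lists]
    tfidf = [[tf * math.log(n_docs / df[t]) for tf, t in zip(row, vocab)]
             for row in bow]
    return vocab, bow, tfidf
```

Because the vocabulary is rebuilt from the corpus at hand, the same case gets different vectors depending on which dataset it belongs to, which is exactly the behavior described for the real pipeline.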
For binary classification, the label corresponds to a violation or no violation of a specific article, and each of the 11 datasets corresponds to a specific article. We kept only the articles with at least 100 cases having a clear outcome (see Section
\ref{sec:filter} for additional details), regardless of the prevalence. Notice that the same case can appear in two datasets if its conclusion contains elements about two different articles. A basic description of those datasets is given in Table \ref{table:binary}.
\begin{table}
\caption{{\bf Datasets description for binary classification.}}
\label{table:binary}
\input{summary_binary.tex}
\begin{flushleft} The columns min, max and avg \#features indicate the minimal, maximal and average number of features per case for the representation ``descriptive features + Bag-of-Words''.
\end{flushleft}
\end{table}
For multiclass, there is a total of 18 different classes (the number of different articles multiplied by two possible decisions: violation or no violation).
To create the multiclass dataset, we aggregated the binary classification datasets and removed the cases present in several of them. For this reason, articles 13 and 34 are not included since they had fewer than 100 cases after this step. For the remaining cases, we did not simply merge the binary classification datasets: the BoW and TF-IDF representations are recreated from the 5000 most frequent $n$-grams in the corpus of judgments (see Section \ref{sec:normalization} for additional details). As the most frequent $n$-grams change depending on the corpus, the BoW representation of a given case differs between the binary, multiclass and multilabel datasets. The descriptive features are, however, not modified.
For multilabel classification, there are 22 different labels, and the main difference with the multiclass setting is that there is no need to remove cases that appear in multiple binary classification datasets: the labels are simply stacked. Table \ref{table:multilabel} summarizes the dataset composition, Figures \ref{fig:multiclass_count} and \ref{fig:multilabel_count} show the label distribution in the multiclass and multilabel datasets, and Figure \ref{fig:multilabel_count_labels} provides the histogram of the number of labels per case.
\begin{table}
\caption{{\bf Dataset description for the multiclass dataset.}}
\label{table:multiclass}
\centering
\input{summary_multiclass.tex}
\begin{flushleft}For each article, we indicate the number of cases and the number of cases labeled as violation and no violation, with the prevalence w.r.t. the {\bf whole} dataset in parentheses.
\end{flushleft}
\end{table}
\begin{table}
\caption{{\bf Dataset description for the multilabel dataset.}}
\label{table:multilabel}
\centering
\input{summary_multilabel.tex}
\begin{flushleft}For each article, we indicate the number of cases and the number of cases labeled as violation and no violation, with the prevalence w.r.t. the {\bf whole} dataset in parentheses.
\end{flushleft}
\end{table}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{Fig2.png}
\caption{{\bf Number of cases depending on the article and the outcome for the multiclass dataset.}}
\label{fig:multiclass_count}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{Fig3.png}
\caption{{\bf Number of cases depending on the article and the outcome for the multilabel dataset.}}
\label{fig:multilabel_count}
\end{figure}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{Fig4.png}
\caption{{\bf Number of cases depending on the number of labels for the multilabel dataset.}}
\label{fig:multilabel_count_labels}
\end{figure}
\begin{table}[!h]
\caption{{\bf Files contained in a dataset.}}
\label{table:dataset_content}
\begin{center}
\begin{tabular}{@{}ll@{}}
\toprule
{\bf descriptive.txt} & Descriptive features only. \\
{\bf BoW.txt} & Bag-of-Words representation only. \\
{\bf TF\_IDF.txt} & TF-IDF representation only. \\
{\bf descriptive+BoW.txt} & Descriptive features and Bag-of-Words. \\
{\bf descriptive+TF\_IDF.txt} & Descriptive features and TF-IDF. \\
{\bf outcomes.txt} & Contains the labels of the dataset. \\
{\bf features\_descriptive.json} & Mapping between features and numerical ids.\\
{\bf features\_text.json} & Mapping between $n$-grams and numerical ids.\\
{\bf outcomes\_variables.json} & Mapping between labels and numerical ids. \\
{\bf variables\_descriptive.json} & Mapping between descriptive variables and numerical ids.\\
{\bf statistics\_datasets.json} & Contains some statistics about the dataset. \\ \bottomrule
\end{tabular}
\end{center}
\end{table}
The final format to encode the case information is close to the LIBSVM format. Each pair (\texttt{variable}, \texttt{value}) is encoded as \texttt{<variable\_id>:<value\_id>}, with the specificity that \texttt{<value\_id>} is not encoded per variable but globally. For instance, \texttt{0:7201} corresponds to the pair \texttt{itemid=001-170361}.
The encoding for the variables can be found in {\bf variables\_descriptive.json} and the encoding for the pairs (\texttt{variable}, \texttt{value}) in {\bf features\_descriptive.json}. The format for the mapping {\bf features\_descriptive.json} is \texttt{"<variable>=<value>":<id>}, or \texttt{"<variable>\_has\_<value>":<id>} if the variable is a set of elements. For instance, the variable \texttt{parties} has two elements and is encoded by 19. Having "BASYUK" among the parties of a case is encoded by \texttt{"parties\_has\_BASYUK": 109712} and thus, the case description contains \texttt{19:109712} and \texttt{19:X} where X is the id for the second party.
As the value id is global, prefixing it with the variable id is redundant. However, keeping the prefix has at least three advantages. First, there is no need to look in the global dictionary and parse the corresponding key to know which variable is encoded. Second, some algorithms work with pairs (\texttt{variable}, \texttt{value}) (e.g. Decision Tree) while others can work with global tokens (e.g. Neural Network). Finally, it makes it easier to re-encode the cases with a specific encoder (e.g. binary, Helmert, Backward Difference, etc.).
Regarding the Bag-of-Words representation, each $n$-gram is turned into a variable such that when a case judgment contains a specific token, the final representation contains \texttt{<token\_id>:<occurrences>}. For instance, assuming that the $2$-gram "find\_guilty" is encoded by 128210 and appears five times in a judgment, the case description will contain \texttt{128210:5}. For the TF-IDF representation, \texttt{<occurrences>} is replaced by the weight of this token in the document given the whole dataset.
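To illustrate the format, here is a hedged sketch of how such a case line could be decoded back into pairs (the helper name is hypothetical and not part of the official loader):

```python
def parse_case_line(line):
    """Parse a LIBSVM-like case description into (id, value) pairs.

    Descriptive features map a variable id to a global value id, while
    BoW/TF-IDF features map a token id to a count or a weight.
    """
    pairs = []
    for token in line.split():
        left, right = token.split(":")
        # values may be integers (value ids, BoW counts) or floats (TF-IDF)
        value = float(right) if "." in right else int(right)
        pairs.append((int(left), value))
    return pairs

# '0:7201' encodes itemid=001-170361; '128210:5' a 2-gram occurring 5 times
print(parse_case_line("0:7201 128210:5"))  # -> [(0, 7201), (128210, 5)]
```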
A Python library to load and manipulate the datasets has been developed and is available at \url{https://github.com/aquemy/ECHR-OD_loader}.
\section{Creation process}
\label{sec:process}
In this section, we describe in detail the dataset generation process from scratch. The datasets are based on the raw documents and information publicly available in the HUDOC database. The process is broken down into several steps, as illustrated by Figure \ref{fig:process_graph}:
\begin{enumerate}
\item {\bf get\_cases\_info.py:} Retrieve the list and basic information about cases from HUDOC,
\item {\bf filter\_cases.py:} Remove unwanted, inconsistent, ambiguous or difficult-to-process cases,
\item {\bf get\_documents.py:} Download the judgment documents for the filtered list of cases,
\item {\bf preprocess\_documents.py:} Analyze the raw judgments to construct a nested JSON structure representing the paragraphs,
\item {\bf process\_documents.py:} Normalize the documents and generate a Bag-of-Words and TF-IDF representation,
\item {\bf generate\_datasets.py:} Combine all the information to generate several datasets.
\end{enumerate}
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{Fig5.png}
\caption{{\bf ECHR-OD datasets creation process.}}
\label{fig:process_graph}
\end{figure}
The entirety of this process is wrapped into the script {\bf build.py}. This script has some parameters, such as the output folder name and the number of tokens to take into consideration when generating the Bag-of-Words representation. This allows anyone to generate slightly modified versions of the datasets and to experiment with them.
\subsection{Retrieving cases}
Using the HUDOC API, basic information about all entries is retrieved and saved in JSON files. Those entries contain several keys that are listed at the top of the script {\bf get\_cases\_info.py}. Among them are the case name, the language used, and the conclusion in natural language.
See Appendix \ref{S1_Appendix} for an example of a case description.
\subsection{Filtering cases}
\label{sec:filter}
To ensure the quality and usability of the datasets, we filter the cases as follows:
\begin{enumerate}
\item We keep only cases in English,
\item We keep only cases with a judgment document,
\item We remove the cases without an attached judgment document,
\item We keep only the cases with a clear conclusion (i.e. containing at least one occurrence of ``(no) violation''),
\item We remove a specific list of cases that are hard to process (three cases in this version of the datasets).
\end{enumerate}
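The rules above can be sketched as a single predicate over the case metadata. The field names below are assumptions chosen for illustration, not the actual HUDOC keys:

```python
def keep_case(case, excluded_ids=frozenset()):
    """Return True if a case passes the quality filters (sketch).

    `case` is assumed to be a dict with illustrative keys 'itemid',
    'language', 'has_judgment' and 'conclusion'.
    """
    if case.get("language") != "ENG":
        return False
    if not case.get("has_judgment"):  # no attached judgment document
        return False
    # a clear conclusion contains at least one '(no) violation'
    if "violation" not in case.get("conclusion", "").lower():
        return False
    # drop the explicit list of hard-to-process cases
    return case.get("itemid") not in excluded_ids

print(keep_case({"itemid": "001-xyz", "language": "ENG", "has_judgment": True,
                 "conclusion": "Violation of Art. 6-1"}))  # -> True
```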
During this step, we also parse and format some raw information: the parties are extracted from the case title and many raw strings are broken down into lists. In particular, the string listing the articles discussed in a case is turned into a list, and the conclusion string into a slightly more complex JSON object. For instance, the string \texttt{Violation of Art. 6-1;No violation of P1-1;Pecuniary damage - claim dismissed;Non-pecuniary damage - financial award} becomes the list of elements described in Appendix \ref{S2_Appendix}.
In general, each item in the conclusion can have the following elements:
\begin{enumerate}
\item {\bf article:} number of the concerned article if applicable,
\item {\bf details:} list of additional information (paragraph or aspect of the article),
\item {\bf element:} part of the raw string describing the item,
\item {\bf mentions:} diverse mentions (quantifiers such as ``moderate'', country, ...),
\item {\bf type:} violation, no violation or other.
\end{enumerate}
Some representative examples are provided in Appendix \ref{S3_Appendix}.
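A simplified sketch of such a conclusion parser is shown below. It only recovers the \texttt{type} and \texttt{article} fields; the real parser also extracts details and mentions, and the function name is illustrative:

```python
import re

def parse_conclusion(raw):
    """Split a raw HUDOC conclusion string into structured items (sketch)."""
    items = []
    for element in raw.split(";"):
        element = element.strip()
        item = {"element": element, "type": "other"}
        low = element.lower()
        if low.startswith("no violation"):
            item["type"] = "no-violation"
        elif low.startswith("violation"):
            item["type"] = "violation"
        match = re.search(r"art\.\s*(\d+)", low)  # e.g. 'Art. 6-1' -> article '6'
        if match:
            item["article"] = match.group(1)
        items.append(item)
    return items

items = parse_conclusion("Violation of Art. 6-1;No violation of P1-1;"
                         "Pecuniary damage - claim dismissed")
```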
Finally, on top of saving the case information in a JSON file, we output a JSON file for each unique article with at least 100 associated cases\footnote{This constant is a parameter of the script and can thus be modified for additional experimentations.}.
Additionally, some basic statistics about the attributes are generated, e.g. the cardinality of the domain and the density (i.e. the cardinality over the total number of cases). For instance, the attribute \texttt{itemid} is unique and thus, as expected, its density is 1:
{\small
\begin{lstlisting}
"itemid": {
"cardinal": 12075,
"density": 1.0
}
\end{lstlisting}
}
In comparison, the fields \texttt{articles\_} (raw string containing a list of articles discussed in a case) and \texttt{article} (its parsed and formatted counterpart) have densities of respectively about 26\% and 1\%. This illustrates the interest of our processing method: using the raw string, the article attribute looks far more unique than it should be. In reality, only about 130 different values are actually used across the datasets.
{\small
\begin{lstlisting}
"articles_": {
"cardinal": 3104,
"density": 0.2570600414078675
}
\end{lstlisting}
}
{\small
\begin{lstlisting}
"article": {
"cardinal": 131,
"density": 0.010848861283643893
}
\end{lstlisting}
}
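The cardinality and density statistics above can be computed in a few lines of Python (a sketch, assuming each case is a dict with hashable attribute values; the function name is illustrative):

```python
def attribute_statistics(cases, attribute):
    """Cardinality of an attribute's domain and its density
    (cardinality over the total number of cases)."""
    values = [case[attribute] for case in cases if attribute in case]
    cardinal = len(set(values))
    return {"cardinal": cardinal, "density": cardinal / len(cases)}

cases = [{"itemid": str(i)} for i in range(4)]
print(attribute_statistics(cases, "itemid"))
# -> {'cardinal': 4, 'density': 1.0}
```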
\subsection{Getting documents}
During this phase, we only download the judgment documents in Microsoft Word format using the HUDOC API.
\subsection{Preprocessing documents}
The preprocessing step consists in parsing the MS Word document to extract additional information and create a tree structure of the judgment file. It outputs two files for each case:
\begin{enumerate}
\item {\bf \textless case\_id\textgreater\_parsed.json:} same JSON document as produced by {\bf filter\_cases.py} with additional information.
\item {\bf \textless case\_id\textgreater\_text\_without\_conclusion.txt:} full judgment text without the conclusion. It is meant to be used for creating the BoW and TF-IDF representations.
\end{enumerate}
To the previous information, we add the field \texttt{decision\_body} with the list of persons involved in the decision, including their role. See Appendix \ref{S4_Appendix} for an example.
The most important addition to the case information is the tree representation of the whole judgment document under the field \texttt{content}. The content is described in an ordered list where each element has two fields: 1) \texttt{content}, describing the element (paragraph text or title), and 2) \texttt{elements}, a list of sub-elements. For a better understanding, see the example in Appendix \ref{S5_Appendix}. This representation eases the identification of specific sections or paragraphs.
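For instance, a depth-first traversal of this \texttt{content}/\texttt{elements} nesting can be written as follows (a sketch over a toy tree; the helper name is illustrative):

```python
def iter_elements(nodes, depth=0):
    """Depth-first traversal of the judgment tree, yielding (depth, text)."""
    for node in nodes:
        yield depth, node["content"]
        yield from iter_elements(node.get("elements", []), depth + 1)

tree = [{"content": "PROCEDURE",
         "elements": [{"content": "1. The case originated...", "elements": []}]}]
print(list(iter_elements(tree)))
# -> [(0, 'PROCEDURE'), (1, '1. The case originated...')]
```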
\subsection{Normalizing documents}
\label{sec:normalization}
During this step, the documents {\bf \textless case\_id\textgreater\_text\_without\_conclusion.txt} are normalized as follows:
\begin{itemize}
\item Tokenization,
\item Stopwords removal,
\item Part-of-Speech tagging followed by a lemmatization,
\item $n$-gram generation for $n \in \{1,2,3,4\}$.
\end{itemize}
The output files are named {\bf \textless case\_id\textgreater\_normalized.txt}.
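A simplified sketch of this normalization, covering tokenization, stopword removal and $n$-gram generation (the actual pipeline also applies POS tagging and lemmatization, and uses a full stopword list rather than the tiny illustrative one below):

```python
import re

STOPWORDS = {"the", "of", "a", "to", "and"}  # tiny illustrative list

def normalize(text, max_n=4):
    """Simplified normalization: tokenize, drop stopwords, emit n-grams.

    The actual pipeline also applies POS tagging and lemmatization
    before the n-gram generation; this sketch omits those steps.
    """
    tokens = [t for t in re.findall(r"[a-z]+", text.lower())
              if t not in STOPWORDS]
    ngrams = []
    for n in range(1, max_n + 1):
        for i in range(len(tokens) - n + 1):
            ngrams.append("_".join(tokens[i:i + n]))
    return ngrams

print(normalize("The Court finds the applicant guilty", max_n=2))
```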
\subsection{Processing documents}
\label{sec:processing}
This step uses Gensim (\cite{rehurek_lrec}) to construct a dictionary of the 5000 most common tokens based on the normalized documents (the dictionary is created per dataset) and outputs the Bag-of-Words and TF-IDF representations for each document. The naming convention is {\bf \textless case\_id\textgreater\_bow.txt} and {\bf \textless case\_id\textgreater\_tfidf.txt}. Additionally, {\bf feature\_to\_id.dict} and {\bf dictionary.dict} contain the mapping between tokens and ids, respectively in JSON and in a compressed format used by Gensim. The number of tokens to use in the dictionary is a parameter of the script.
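For intuition, a plain TF-IDF weighting can be sketched as below. This is an illustrative, dependency-free version; the datasets themselves are built with Gensim, whose weighting and normalization differ in details:

```python
from collections import Counter
from math import log

def tfidf(documents):
    """TF-IDF weights for tokenized documents: tf(t, d) * log(N / df(t))."""
    n_docs = len(documents)
    df = Counter()
    for doc in documents:
        df.update(set(doc))  # document frequency of each token
    weights = []
    for doc in documents:
        tf = Counter(doc)
        weights.append({t: tf[t] * log(n_docs / df[t]) for t in tf})
    return weights

docs = [["court", "violation"], ["court", "damage"]]
weights = tfidf(docs)
# 'court' appears in every document, so its weight is 0 in both
```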
\subsection{Generating datasets}
The final step consists in producing the dataset and related files. See Table \ref{table:dataset_content} for the list of output files.
The feature ids of the BoW and TF-IDF parts are {\bf not} the same as those obtained during the processing phase. More precisely, they are shifted by the number of descriptive features.
We remove the cases with no clear outcome. For instance, it is possible to have a violation of a certain aspect of a given article but no violation of another aspect of the same article. In the future, we will consider labels at a finer granularity than the article level.
\section{Experiments}
\label{sec:experiments}
In this section, we perform a first campaign of experiments on each of the produced datasets. The goals are twofold: studying the predictability offered by those datasets and their different flavors, and providing a first baseline by testing the most popular machine learning algorithms for classification. All the experiments are implemented using Scikit-Learn (\cite{scikit-learn}). We split the experiments into three categories: binary, multiclass and multilabel classification, mostly because the evaluation metrics and their interpretation differ. All the experiments and scripts to analyze the results and generate the plots and tables are open-source and available in a separate GitHub repository\footnote{\url{https://github.com/aquemy/ECHR-OD_predictions}} for replication.
\subsection{Binary classification}
We are interested in answering four questions: 1) what is the predictive power of the datasets, 2) are all the articles equal w.r.t. predictability, 3) do some methods perform significantly better than others, and 4) are all dataset flavors equal w.r.t. predictability?
\subsubsection{Protocol}
We compared 13 standard classification methods: AdaBoost with Decision Tree, Bagging with Decision Tree, Naive Bayes (Bernoulli and Multinomial), Decision Tree, Ensemble Extra Tree, Extra Tree, Gradient Boosting, K-Neighbors, SVM (linear, RBF), Neural Network (Multilayer Perceptron) and Random Forest.
For each article, we used three flavors: descriptive features only, bag-of-words only, and descriptive features combined with bag-of-words. For each method, each article and each flavor, we performed a 10-fold cross-validation with stratified samples, for a total of 429 validation procedures. Due to this large number of experimental settings, we discarded the TF-IDF flavors. For the same reason, we did not perform any hyperparameter tuning at this stage.
To evaluate the performances, we report some standard performance indicators: accuracy, $F_1$-score and Matthews correlation coefficient (MCC). Denoting by TP the number of true positives, TN the true negatives, FP the false positives and FN the false negatives, those metrics are defined by:
\begin{align*}
\text{ACC} & = \frac{\text{TP} + \text{TN}}{\text{TP} + \text{TN} + \text{FP} + \text{FN}} \\
&\\
\text{F}_1 & = \frac{2 \text{TP}}{ 2\text{TP} + \text{FP} + \text{FN}} \\
& \\
\text{MCC} & = \frac{\text{TP} \times \text{TN} - \text{FP} \times \text{FN}}{\sqrt{(\text{TP} + \text{FP})(\text{TP} + \text{FN})(\text{TN} + \text{FP})(\text{TN} + \text{FN})}}
\end{align*}
The accuracy, $\text{F}_1$-score and MCC belong to $[0,1]$, $[0,1]$ and $[-1,1]$ respectively; the closer to 1, the better. The $\text{F}_1$-score and MCC take into account false positives and false negatives. Furthermore, MCC has been shown to be more informative than other metrics derived from the confusion matrix \cite{Chicco2017}, in particular with imbalanced datasets.
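These metrics follow directly from the confusion-matrix counts; a minimal sketch of the definitions above (with MCC set to 0 when its denominator vanishes, a common convention):

```python
from math import sqrt

def metrics(tp, tn, fp, fn):
    """Accuracy, F1-score and MCC from confusion-matrix counts."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return acc, f1, mcc

# imbalanced toy example: high accuracy but only a moderate MCC
acc, f1, mcc = metrics(tp=90, tn=5, fp=3, fn=2)
print(round(acc, 2), round(f1, 2), round(mcc, 2))  # -> 0.95 0.97 0.64
```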
Additionally, we report the learning curves to study the limits of the model space for each method. The learning curves are obtained by plotting the accuracy depending on the training set size, for both the training and the test sets. The learning curves help to understand whether a model underfits or overfits and thus, shape future axes of improvement to build better classifiers.
To find out which type of features is the most important w.r.t. predictability, we used a Wilcoxon signed-rank test at 5\% to compare the accuracy obtained on the Bag-of-Words representation to the one obtained on the Bag-of-Words combined with the descriptive features. The Wilcoxon signed-rank test is a non-parametric paired difference test. Given two paired samples, the null hypothesis assumes the difference between the pairs follows a symmetric distribution around zero. The test is used to determine whether the change in accuracy is significant when the descriptive features are added to the textual features.
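Such a test takes a couple of lines with SciPy (assuming it is installed; the accuracy values below are made up purely for illustration):

```python
from scipy.stats import wilcoxon

# paired accuracies of one method on the same articles (illustrative numbers)
acc_bow = [0.92, 0.88, 0.95, 0.90, 0.93, 0.91, 0.89, 0.94, 0.90, 0.92]
acc_both = [0.93, 0.90, 0.96, 0.92, 0.94, 0.92, 0.90, 0.95, 0.91, 0.93]

statistic, p_value = wilcoxon(acc_bow, acc_both)
significant = p_value < 0.05  # reject the null hypothesis at the 5% level
print(significant)
```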
\subsubsection{Results}
\begin{table}
\caption{Best accuracy obtained for each article.}
\label{table:summary_acc}
\centering
\input{binary_acc_best.tex}
\end{table}
Table \ref{table:summary_acc} shows the best accuracy obtained for each article, as well as the method and the flavor of the dataset. For all articles, the best accuracy obtained is higher than the prevalence. The method performing best is linear SVM, obtaining the best results on 4 out of 11 articles. Gradient Boosting accounts for 3 out of 11 articles and Ensemble Extra Tree accounts for 2 articles. The standard deviation is rather low, from 1\% up to 4\%, with the exception of article 34 at 9\%. The accuracy ranges from 75.86\% to 98.32\% with an average of 94.43\%. The micro-average, which weights each result by the dataset size, is 96.44\%. In general, the datasets with higher accuracy are larger and more imbalanced. The datasets being highly imbalanced, with a prevalence from 0.64 to 0.93, other metrics may be more suitable to appreciate the quality of the results. In particular, the micro-average could simply be higher due to the class imbalance rather than the availability of data.
Regarding the flavor, 8 out of 11 best results are obtained on descriptive features combined with bag-of-words. Bag-of-words only is the best flavor for article 10, and descriptive features only for articles 13 and 34. This seems to indicate that combining information from different sources improves the overall results.
\begin{center}
\begin{figure}
\caption{\label{fig:normalized_cm} Normalized Confusion Matrices for the best methods as described by Table \ref{table:summary_acc}.}
\includegraphics[scale=0.3]{Fig6a.png}
\includegraphics[scale=0.3]{Fig6b.png}
\includegraphics[scale=0.3]{Fig6c.png}
\includegraphics[scale=0.3]{Fig6d.png}
\includegraphics[scale=0.3]{Fig6e.png}
\includegraphics[scale=0.3]{Fig6f.png}
\includegraphics[scale=0.3]{Fig6g.png}
\includegraphics[scale=0.3]{Fig6h.png}
\includegraphics[scale=0.3]{Fig6i.png}
\includegraphics[scale=0.3]{Fig6j.png}
\includegraphics[scale=0.3]{Fig6k.png}
\end{figure}
\end{center}
Figure \ref{fig:normalized_cm} displays the normalized confusion matrix for each line of Table \ref{table:summary_acc}. The normalization is done per line and allows one to quickly appreciate how the true predictions are balanced between both classes. As expected from the prevalence, the true negative rate is extremely high, ranging from 0.82 to 1.00 with an average of 0.97. On the contrary, the true positive rate is lower, ranging from 0.47 to 0.91. For most articles, the true positive rate is higher than 80\%; it is lower than 50\% only for article 34.
\begin{table}
\caption{Best Matthews Correlation Coefficient and F1 score obtained for each article. The flavor and method achieving the best score for both metrics are similar for every article.}
\label{table:summary_mcc}
\centering
\input{binary_mcc_best.tex}
\end{table}
In addition, we provide the Matthews Correlation Coefficient in Table \ref{table:summary_mcc} and the F$_1$-score in the Supplementary Material. The F$_1$-score is weighted by the class support to account for class imbalance. The Matthews Correlation Coefficient ranges from 0.4918 on article 34 to 0.8829 on article 10. The best score is not obtained by the same article as for accuracy (article 10 achieved 93\% accuracy, below the average). Interestingly, the MCC reveals that the performances on article 34 are rather poor in comparison to the other articles and close to those of article 13.
Surprisingly, the best method is not linear SVC anymore (best on 3 articles) but Gradient Boosting (best on 4 articles).
While descriptive features alone returned the best accuracy for two articles, according to the Matthews Correlation Coefficient they reach the best score only for article 34.
Once again, the micro-average is higher than the macro-average. As the MCC and the weighted F$_1$-score take into account class imbalance, it supports the idea that adding more cases to the training set could still improve the result of those classifiers. This will be confirmed by looking at the learning curves.
Table \ref{table:summary_methods_accuracy} ranks the methods according to the average accuracy obtained over all articles. For each article and method, we kept only the best accuracy among the three dataset flavors. Surprisingly, the best method is neither Linear SVC nor Gradient Boosting but Ensemble Extra Tree. Random Forest and Bagging with Decision Tree are respectively second and third, although they never achieved the best result on any article. This simply indicates that those methods are more consistent across the datasets than Linear SVM or Gradient Boosting, as confirmed by the detailed results per article provided in the Supplementary Material.
\begin{table}
\caption{Overall ranking of methods according to the average accuracy obtained on every article.}
\label{table:summary_methods_accuracy}
\centering
\input{binary_acc_summary.tex}
\end{table}
Figure \ref{fig:learning_curves} displays the learning curves obtained for the best methods described in Table \ref{table:summary_acc}. The training error becomes (near) zero on every instance after only a few cases, except for articles 13 and 34. The test error converges rather fast but remains relatively far from the training error, a sign of high bias. Those two elements indicate underfitting. Usually, more training examples would help, but as the datasets are exhaustive w.r.t. the European Court of Human Rights cases, this is not possible. As a result, simpler model spaces have to be investigated, as well as, in general, hyperparameter tuning. An exploratory analysis of the datasets may also help in removing some noise and finding the best predictors.
On articles 13 and 34, the bias is also high, and the variance is relatively higher than for the other articles, clearly indicating the worst possible case. Again, adding more examples is not an option. However, if we assume that the process of deciding whether there is a violation or not is the same independently of the article, a solution might be transfer learning, to leverage what is learnable from the other articles. We leave this research axis for future work.
\begin{center}
\begin{figure}
\caption{\label{fig:learning_curves} Learning Curves for the best methods as described by Table \ref{table:summary_acc}.}
\includegraphics[scale=0.3]{Fig7a.png}
\includegraphics[scale=0.3]{Fig7b.png}
\includegraphics[scale=0.3]{Fig7c.png}
\includegraphics[scale=0.3]{Fig7d.png}
\includegraphics[scale=0.3]{Fig7e.png}
\includegraphics[scale=0.3]{Fig7f.png}
\includegraphics[scale=0.3]{Fig7g.png}
\includegraphics[scale=0.3]{Fig7h.png}
\includegraphics[scale=0.3]{Fig7i.png}
\includegraphics[scale=0.3]{Fig7j.png}
\includegraphics[scale=0.3]{Fig7k.png}
\end{figure}
\end{center}
Finally, we used a Wilcoxon signed-rank test at 5\% to compare the accuracy obtained on the Bag-of-Words representation to the one obtained on the Bag-of-Words combined with the descriptive features. The difference between the samples has been found to be significant only for articles 6 and 8.
The best result of column \texttt{BoW} is improved in the column \texttt{both} for every article. However, statistically, for a given method, adding descriptive features does not improve the result.
Additionally, we performed the test per method. The result is not significant for any method.
In conclusion, the datasets demonstrate strong predictive power. Apart from articles 13 and 34, each article seems to provide similar results independently of the relatively different prevalences. While the accuracy is rather high w.r.t. the prevalence, more informative metrics such as the MCC and F$_1$ scores show that there is still room for improvement. Hyperparameter tuning is an obvious way to go, and this preliminary work has shown that good candidates for fine-tuning are Ensemble Extra Tree, Linear SVM and Gradient Boosting.
This experimental campaign has demonstrated that the textual information provides better results than the descriptive features alone, but the addition of those descriptive features improves in general the final {\bf best} result. We emphasize the best (obtained among all methods) because for a given method and any article, adding the descriptive features does not significantly improve the results. Another way of improving the results is to tune the different phases of the dataset generation. In particular, preliminary work in \cite{DBLP:conf/dolap/Quemy19} has shown that 5000 tokens and $4$-grams might not be enough to take the best out of the documents. It might seem surprising, but the justice language is codified and standardized in a way that $n$-grams for large $n$ might be good predictors of the outcome.
\subsection{Multiclass classification}
In this section, we are interested in quantifying the capacity of standard machine learning algorithms to deal with the multiclass dataset. In the previous section, we showed that most methods could obtain an accuracy higher than the dataset prevalence and, more generally, good evaluation metrics. Algorithms for binary classification usually adapt relatively well to multiclass problems; however, in the case of ECHR-OD, the labels come in pairs (violation or no violation of a given article), which may confuse the classifiers.
As the experimental protocol is similar to that of the previous section, we proceed directly to the results. For computational purposes, we dropped the two worst classifiers on the binary datasets, namely RBF SVM and KNN.
\begin{table}
\caption{Accuracy obtained for each method on the multiclass dataset.}
\label{table:mc_acc}
\centering
\input{multiclass_acc.tex}
\end{table}
Table \ref{table:mc_acc} presents the accuracy obtained on the multiclass dataset. The best method for descriptive features only and for Bag-of-Words only is linear SVM, with respectively 91.41\% and 91.36\% correctly labeled cases. This is aligned with the results obtained on the binary datasets. However, the top score of 94.99\% is obtained by the Bagging Classifier, which only ranked 4th on the binary datasets. In other words, SVM ranked first on the two types of features individually, but its improvement from combining the features is lower than the one obtained by the Bagging Classifier. The same can be observed with Gradient Boosting, which outperforms SVM. Except for Ada Boost, the standard deviation is mostly lower than 1\%.
For most methods, the Bag-of-Words only flavor scores better than descriptive features only. This observation is reversed when looking at the Matthews Correlation Coefficient provided in Table \ref{table:mc_mcc}. For both indicators, however, combining both types of features increases performances, with the notable exceptions of Extra Tree, Multinomial Naive Bayes and Ada Boost with Decision Tree.
This contrasts sharply with the binary setting, for which descriptive features were quantitatively far below textual features, in particular on the MCC indicator. For the binary datasets, the descriptive features only flavor mostly scored below bag-of-words only, for any article and any method (c.f. Supplementary Material).
On top of that, taking only the best result per flavor, the descriptive features score better than purely textual features. The explanation can be found by studying the confusion matrix.
\begin{table}
\caption{Matthews Correlation Coefficient obtained for each method on the multiclass dataset.}
\label{table:mc_mcc}
\centering
\input{multiclass_mcc.tex}
\end{table}
\begin{center}
\begin{figure}
\caption{\label{fig:normalized_cm_multiclass} Normalized Confusion Matrix for multiclass dataset. The normalization is performed per line. A white block indicates that no element has been predicted for the corresponding label. Percentages are reported only if above 1\%.}
\includegraphics[scale=0.65]{Fig8a.png}
\includegraphics[scale=0.65]{Fig8b.png}
\includegraphics[scale=0.65]{Fig8c.png}
\end{figure}
\end{center}
Figure \ref{fig:normalized_cm_multiclass} shows the normalized confusion matrix for Linear SVM. The normalization has been done per line, i.e. each line represents the distribution of cases according to their ground truth. For instance, on descriptive features only, for the class ``Article 11, no-violation'', only 44\% were correctly classified and 56\% were assigned to a violation of article 11. The perfect classifier would thus have a diagonal of 1. The diagonal is equivalent to the recall for the corresponding class, and the average of the diagonal terms is the balanced accuracy (\cite{brodersen2010balanced}).
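Concretely, the balanced accuracy can be read off a row-normalized confusion matrix as follows (a sketch on a toy 2-class matrix mirroring the ``Article 11'' example above):

```python
def balanced_accuracy(cm):
    """Mean of the per-class recalls, i.e. the average of the diagonal
    of the row-normalized confusion matrix (rows = ground truth)."""
    recalls = [row[i] / sum(row) for i, row in enumerate(cm)]
    return sum(recalls) / len(recalls)

cm = [[44, 56],   # 'no-violation': recall 0.44
      [1, 99]]    # 'violation':    recall 0.99
print(round(balanced_accuracy(cm), 3))  # -> 0.715
```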
The ``Descriptive features only'' flavor has a sparser normalized confusion matrix than the counterpart with Bag-of-Words. The fact that the first flavor returns a lower accuracy is explained by looking at the 2x2 blocks on the diagonal. Those blocks are the normalized confusion matrices of the subproblem restricted to a specific article. For instance, 100\% of the non-violations of article 10 have been labeled in one of the two classes related to article 10 (99\% for a violation). In general, the classifiers on ``Descriptive features only'' are good at identifying the article but generate a lot of false negatives, most likely due to the imbalance between violation and non-violation labels for a given article. Adding the bag-of-words to the case representation slightly lowers the accuracy on average but largely rebalances the 2x2 blocks on the diagonal. On the other hand, it seems that the textual information does not hold enough information to identify the article, which explains why classifiers generally perform lower on this flavor.
We performed two Wilcoxon signed-rank tests: first between the samples of results on \texttt{BoW} and \texttt{both}, then between \texttt{desc} and \texttt{both}. The first difference is clearly significant while the second is not, supporting the idea that, for the multiclass domain, the Bag-of-Words flavor alone is unlikely to give good results compared to the descriptive features.
The main conclusion to draw from the multiclass experiment is that the descriptive features are excellent at identifying the article while the text offers more elements to predict the judgment. Using only the Bag-of-Words leads to the worst results, while adding textual information to the descriptive features slightly increases the accuracy and has a strong beneficial effect on discriminating between violations and non-violations. This is in sharp contrast with the conclusions drawn from the experiments on the binary datasets, where the textual representation clearly outperformed and the descriptive features had only a marginal effect.
This indicates that it might be more interesting to build a two-stage classifier: a multiclass classifier determines the article based on the descriptive features, followed by an article-specific classifier in charge of determining whether the article is violated or not. Over-sampling techniques to deal with imbalanced classes constitute another axis of improvement to explore in future work.
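The two-stage idea can be wired up as a short sketch (the interfaces and decision rules below are hypothetical stand-ins, not the paper's models):

```python
# Two-stage prediction: stage 1 picks the article from descriptive features,
# stage 2 applies an article-specific binary classifier to decide violation.
def two_stage_predict(case, article_clf, violation_clfs):
    """article_clf: descriptive features -> article id;
    violation_clfs: article id -> binary classifier over bag-of-words."""
    article = article_clf(case["desc"])
    violated = violation_clfs[article](case["bow"])
    return article, violated

# Hypothetical stand-in classifiers, just to illustrate the wiring:
article_clf = lambda desc: desc["claimed_article"]
violation_clfs = {
    10: lambda bow: bow.get("conviction", 0) > 0,
    11: lambda bow: bow.get("assembly", 0) > 1,
}

case = {"desc": {"claimed_article": 10}, "bow": {"conviction": 2}}
print(two_stage_predict(case, article_clf, violation_clfs))  # (10, True)
```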
Last but not least, it shows that the benefits of combining sources of information are not monotonic: the best scoring method on individual types of features might not be the best method overall.
\subsection{Multilabel Classification}
The multilabel dataset generalizes the multiclass one in that there is not only one article to identify before predicting the outcome, but an unknown number of them. It is closer to real-life situations in which, when a complaint is filed, the precise articles to be discussed are yet to be determined. On top of analyzing the usual performance metrics, we would like to quantify how well the methods identify all the articles in each case. From the multiclass results, the textual information alone is expected to provide the lowest results among all flavors.
\subsubsection{Protocol}
Evaluating the results of a multilabel classifier is not as easy as in the binary or multiclass case. For instance, having wrongly added one label to 100 cases is not exactly the same as adding 100 wrong labels to a single case. Similarly, being able to predict at least one correct label per case is not the same as predicting all the correct labels for a fraction of the cases, even if the total number of correct labels is the same in both scenarios. The distributions of ground-truth and predicted labels among the dataset are important to evaluate the model.
For this reason, we report the following multilabel-specific metrics: subset accuracy, precision, recall, F$_1$-score, Hamming loss and the Jaccard similarity score. The subset accuracy is the strictest metric since it measures the percentage of samples for which all the labels are correctly predicted; it does not account for partially correct label vectors. The Jaccard index measures the number of correctly predicted labels divided by the size of the union of predicted and true labels. The Hamming loss is the fraction of wrong labels over the total number of labels. Let $T$ (resp. $P$) denote the true (resp. predicted) label sets, $n$ the size of the sample, and $l$ the number of possible labels. The metrics are defined by:
\begin{align*}
\text{ACC} & = \frac{1}{n} \underset{i=1}{\overset{n}{\sum}} I(Y_i = \bar Y_i)\\
&\\
\text{RECALL} & = \frac{|\text{T} \cap \text{P}|}{|\text{T}|} \\
& \\
\text{PRECISION} & = \frac{|\text{T} \cap \text{P}|}{|\text{P}|} \\
& \\
\text{F}_1 & = 2\cdot\frac{\text{RECALL} \times \text{PRECISION}}{\text{RECALL} + \text{PRECISION}} \\
& \\
\text{HAMMING} & = \frac{1}{nl}\underset{i=1}{\overset{n}{\sum}}\,\underset{j=1}{\overset{l}{\sum}} \text{xor}(y_{i,j}, \bar y_{i, j})\\
& \\
\text{JACCARD} & = \frac{|\text{T} \cap \text{P}|}{|\text{T} \cup \text{P}|}
\end{align*}
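The sample-averaged (example-based) variants of these metrics can be implemented directly on label sets; the sketch below is illustrative and may differ from the exact averaging used to produce the result tables:

```python
# Example-based multilabel metrics over per-sample label sets
# (T_i = true labels of sample i, P_i = predicted labels of sample i).
def multilabel_metrics(true_sets, pred_sets, n_labels):
    n = len(true_sets)
    pairs = list(zip(true_sets, pred_sets))
    subset_acc = sum(t == p for t, p in pairs) / n
    recall = sum(len(t & p) / len(t) for t, p in pairs) / n
    precision = sum(len(t & p) / len(p) if p else 0.0 for t, p in pairs) / n
    f1 = 2 * recall * precision / (recall + precision) if recall + precision else 0.0
    # Hamming loss: symmetric difference = labels that disagree
    hamming = sum(len(t ^ p) for t, p in pairs) / (n * n_labels)
    jaccard = sum(len(t & p) / len(t | p) for t, p in pairs) / n
    return subset_acc, recall, precision, f1, hamming, jaccard

# One perfectly labeled case and one with a missed label, over 3 labels:
metrics = multilabel_metrics([{0}, {1, 2}], [{0}, {1}], n_labels=3)
print(metrics)  # (0.5, 0.75, 1.0, ~0.857, ~0.167, 0.75)
```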
Finally, we are interested in quantifying how well each specific article is identified, as well as how many cases with a given number of labels are correctly labeled, taking into account their respective prevalence in the dataset as reported in Figure \ref{fig:multilabel_count_labels}. Indeed, about 70\% of the cases in the multilabel dataset have only one label, such that a classifier assigning exactly one label to each case could reach about 70\% subset accuracy.
Not all binary classifiers can be extended to the multilabel problem. We used the following five algorithms: Extra Tree, Decision Tree, Random Forest, Ensemble Extra Tree and Neural Network. As previously, a 10-fold cross-validation has been performed on each flavor.
\subsubsection{Results}
\begin{table}
\caption{Accuracy obtained for each method on the multilabel dataset.}
\label{table:multilabel_acc}
\input{multilabel_acc.tex}
\end{table}
\begin{table}
\caption{Precision obtained for each method on the multilabel dataset.}
\label{table:multilabel_precision}
\input{multilabel_precision.tex}
\end{table}
\begin{table}
\caption{Recall obtained for each method on the multilabel dataset.}
\label{table:multilabel_recall}
\input{multilabel_recall.tex}
\end{table}
The accuracy is reported in Table \ref{table:multilabel_acc} and shows that Decision Tree performs best, with 79.66\% of cases totally correctly labeled. Similarly to the multiclass setting, the descriptive features provide better results than the bag-of-words. Decision Tree also scores best on the F$_1$-score (Table \ref{table:multilabel_f1}) and recall (Table \ref{table:multilabel_recall}). However, Ensemble Extra Tree outperforms Decision Tree when it comes to precision (Table \ref{table:multilabel_precision}). Decision Tree provides the best results for the {\it strict} metrics (highest accuracy and lowest Hamming loss) but also on the more permissive metrics (best F$_1$-score and Jaccard index). Therefore, all things being equal (in particular default hyperparameters), Decision Tree is clearly the top method for the multilabel problem, which is somewhat surprising since it ranked 8th on the binary datasets and 3rd on the multiclass one.
As expected, the bag-of-words flavor provides the worst results. Similarly to the experiments on the multiclass dataset, the textual information is inefficient at identifying the article.
\begin{table}
\caption{$F_1$-score obtained for each method on the multilabel dataset.}
\label{table:multilabel_f1}
\input{multilabel_f1_weighted.tex}
\end{table}
\begin{table}
\caption{Hamming loss obtained for each method on the multilabel dataset.}
\label{table:multilabel_hamming}
\input{multilabel_hamming_loss.tex}
\end{table}
\begin{table}
\caption{Jaccard Similarity Score obtained for each method on the multilabel dataset.}
\label{table:multilabel_jaccard}
\input{multilabel_jaccard_similarity_score.tex}
\end{table}
Figure \ref{fig:multilabel_scores} shows the accuracy, recall and precision depending on the number of labels assigned by Decision Tree on the test set. It also indicates the number of cases for each label count. It is striking how close the distribution of cases over label counts is to the real distribution described in Figure \ref{fig:multilabel_count_labels}. We can reasonably assume that the model correctly identifies the labels a case is about (or at least the article). The subset accuracy for cases with a single label is consistent with the score on the multiclass dataset (to which this subproblem is virtually identical). The subset accuracy decreases linearly with the number of labels, which is not surprising since the metric becomes stricter as the number of labels grows. However, the recall and precision remain stable, above 80\% on average, indicating that the algorithm not only recovers a large portion of the true labels but also assigns few spurious ones. Thus, Figure \ref{fig:multilabel_scores} clearly discards the possibility that the algorithm mostly focuses on cases with a single label.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.4]{Fig9.png}
\caption{{\bf Multilabel scores depending on the number of labels assigned.}}
\label{fig:multilabel_scores}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
In this paper, we introduced the European Court of Human Rights Open Data project, consisting of multiple datasets for several variants of the classification problem. The datasets come in different flavors (descriptive features, Bag-of-Words representation, TF-IDF) and are based on real-life data directly retrieved from the HUDOC database. In total, 13 datasets are provided in the first release. We argued that providing only the final data is not enough to ensure quality and trust. In addition, there are always some opinionated choices in the representation, such as the number of tokens, the value of $n$ for the $n$-gram calculation or the weighting scheme in the TF-IDF transformation. As a remedy, we provide the whole process of dataset construction from scratch.
The datasets will be iteratively corrected and updated along with new ECHR judgments. The datasets are carefully versioned to reach a compromise between the need to keep the data up-to-date (as needed by legal practitioners or algorithms in production) and the need for a fixed version of the data to compare results between scientific studies.
In the future, we plan to add further enrichments (e.g. entity extraction from the judgments), new datasets with fine-grained labels and new datasets for different problems (e.g. structured prediction). We hope to offer a web platform such that anyone can tune the different dataset hyperparameters to generate their own flavor: a sort of {\it Dataset as a Service}.
A first experimental campaign has been performed to establish a baseline for future work. The predictive power of each dataset and flavor has been tested for the most popular machine learning methods. On binary datasets, we achieved an average accuracy of 0.9443, against 0.9499 for multiclass. This demonstrates the interest of treating the problem at a higher level rather than at the article level. In particular, the learning curves have shown that the models are underfitting on the binary datasets but, as the datasets are exhaustive, it is not possible to provide more examples. We showed that the descriptive features are excellent at determining the articles related to a case while the textual features help in determining the binary outcome. Combining both types of features always helps, but the gap between them is smaller on the multiclass dataset than on the binary ones. In both cases, the descriptive features actually hold reasonable predictive power. These preliminary experiments certainly do not settle the realism versus legalism debate, but they open new possibilities to better understand our justice system. They also suggest several axes of improvement: hyperparameter tuning, multi-stage classifiers and transfer learning.
From those results, it seems clear that the prediction problem can be handled with the current state of the art in artificial intelligence. We hope that this project will pave the way toward a solution to the justification problem.
Last but not least, we encourage all researchers to explore the data, generate new datasets for various problems and submit their contributions to the project.
\bibliographystyle{spbasic}
\section{Introduction}
The explosive growth of data traffic and the rapid development and commercialization of 5G wireless communication networks impose great challenges on data security as well as global energy consumption \cite{5Ghu}.
In order to improve energy efficiency (EE), mobile edge computing (MEC) and non-orthogonal multiple access (NOMA) have been envisaged as two promising technologies in 5G and the forthcoming 6G wireless networks. By deploying edge servers with high computational capacities close to end users, the end users can offload partial or all computation tasks to the nearby MEC servers to save power as well as speed up the computing \cite{eemc}. Meanwhile, by exploiting superposition coding at the transmitter and successive interference cancellation (SIC) at the receiver, NOMA brings significant changes to multiple access. NOMA allows multiple users to share the same radio bandwidth in either the power domain or the code domain to improve spectral efficiency, at the cost of a relatively higher receiver complexity \cite{noma}.
Applying NOMA to MEC-enabled networks has recently received extensive attention due to its performance gain in both spectral efficiency and EE \cite{nec1}-\cite{nec3}. However, most of the existing works did not take the security issue into account.
In fact, due to the broadcast nature of the wireless link, the offloaded tasks are vulnerable to interception by eavesdroppers. Consequently, the physical layer security (PLS) of NOMA-assisted MEC networks has received much research interest \cite{sechj}.
The joint consideration of PLS in NOMA-assisted MEC networks was studied in \cite{wu}-\cite{secac}.
In \cite{wu}, an iterative algorithm was proposed to maximize the minimum anti-eavesdropping ability in a MEC network with uplink NOMA.
The authors in \cite{wang} proposed a bisection search algorithm to minimize the maximum task completion time subject to a worst-case secrecy rate constraint. Instead of only considering the power consumption or the computing rate, \cite{secac} studied the EE maximization problem for a NOMA-enabled MEC network with eavesdroppers.
Most of the existing works on NOMA-assisted MEC with external eavesdroppers focus on performance evaluation in scenarios where either the channel conditions or the required tasks remain constant. Such an assumption makes the analysis of computation offloading and resource allocation more tractable. However, in a dynamic environment, the behaviors of the workload arrivals and fading channels impact the overall system performance, so a system design that focuses on short-term performance may not work well from a long-term perspective. Towards that end, a stochastic task offloading model and resource allocation strategy should be adopted \cite{Nouri}. In this paper, we integrate PLS and study the long-term EE performance of a NOMA-enabled MEC network. By incorporating the statistical behaviors of the channel states and task arrivals, we formulate a stochastic optimization problem to maximize the long-term average EE subject to multiple constraints, including task queue stability, maximum available power, and peak CPU-cycle frequency. An energy-efficient offloading and resource allocation method based on Lyapunov optimization is proposed.
The simulation results validate the superior performance of the proposed method in terms of EE in a secure NOMA-assisted MEC network.
The rest of the paper is organized as follows. Section II describes the system model. In Section III, the EE maximization problem and corresponding alternative solution are presented. Numerical results are provided in Section IV. The paper is concluded in Section V.
\section{System Model}
\begin{figure}[h]
\vspace{-0.3cm}
\setlength{\abovecaptionskip}{-0.2cm}
\setlength{\belowcaptionskip}{-1cm}
\centering
\includegraphics[width=3.0in]{sy0821.eps}
\caption{System Model.\label{symodel}}
\end{figure}
In Fig. \ref{symodel}, an uplink NOMA communication system is considered, which consists of $N$ user equipments (UEs), one access point (AP) equipped with a MEC server, and one external eavesdropper (Eve) near the AP. All the UEs can offload their computation tasks to the MEC server while the external eavesdropper intends to intercept the confidential information. The task arriving at user $n$ in time slot $t$ is denoted as $A_n(t)$. Note that no prior statistical information of $A_n(t)$ is required, as it could be difficult to obtain in practical systems. We focus on a data-partition-oriented computation task model and adopt a partial offloading scheme, i.e., part of the task is processed locally and the remaining part is offloaded to the remote server for processing. For each UE, local computing and task offloading can be executed simultaneously.
We assume that each UE has buffering capability, so that data that has arrived but has not yet been processed can be queued for the next time slot. Let $Q_n(t)$ denote the queue backlog of UE $n$; its evolution can be expressed as
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
{Q}_n(t + 1) = \max \{ {Q}_n(t) - {R_n^{tot}(t)\tau},0\} + {A}_n(t),
\end{equation}
where $R_n^{tot}(t)=R_n^{off}(t)+R_n^{loc}(t)$ is the total computing rate of UE $n$ at time slot $t$, $R_n^{off}(t)$ and $R_n^{loc}(t)$ are the secure offloading rate and the local task processing rate, respectively, and $\tau$ is the duration of each time slot.
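As a sanity check, the backlog recursion in (1) is a one-liner per UE (the numeric values below are illustrative, not simulation parameters):

```python
# Queue evolution of Eq. (1): serve up to the current backlog, then add arrivals.
def queue_update(Q, R_tot, A, tau=1.0):
    """Q: backlog (bits); R_tot: total computing rate (bits/s);
    A: bits arriving in this slot; tau: slot duration (s)."""
    return max(Q - R_tot * tau, 0.0) + A

Q = 5e6
Q = queue_update(Q, R_tot=2e6, A=1.5e6)
print(Q)  # 4500000.0
```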
\subsection{Local Computing Model}
Let $f_n(t)$ denote the local CPU-cycle frequency of UE $n$, which cannot exceed its maximum value $f_{\max}$. Let $C_n$ be the computation intensity (in CPU cycles per bit). Thus, the local task processing rate can be expressed as $R_n^{loc}(t)= f_n(t)/C_n$. We use the widely adopted model $P_n^{loc}(t)=\kappa_{n}f^3_n(t)$ to calculate the local computing power consumption of UE $n$, where $\kappa_n$ is the energy coefficient and its value depends on the chip architecture \cite{qiot}.
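Plugging in the values used later in Section IV ($\kappa_n=10^{-28}$, $C_n=737.5$ cycles/bit; the 1 GHz frequency is an arbitrary example), the local model gives:

```python
# Local computing model: R_loc = f / C (bits/s), P_loc = kappa * f^3 (W).
kappa = 1e-28      # energy coefficient (chip-dependent)
C = 737.5          # computation intensity, CPU cycles per bit
f = 1e9            # example CPU-cycle frequency: 1 GHz

R_loc = f / C
P_loc = kappa * f ** 3
print(R_loc)  # ≈ 1.36e6 bits/s
print(P_loc)  # ≈ 0.1 W
```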
\subsection{Task Offloading Model}
The independent and identically distributed (i.i.d.) frequency-flat block fading channel model is adopted, i.e., the channel remains static within each time slot but varies across different time slots. The small-scale fading coefficients from UE $n$ to the MEC server and to the Eve are denoted as ${H_{b,n}}(t)$ and ${H_{e,n}}(t)$, respectively. Both are assumed to be exponentially distributed with unit mean \cite{Ymao}. Thus, the channel power gain from UE $n$ to receiver $i$ is given as ${h_{i,n}}(t) = {H_{i,n}}(t){g_0}({d_0}/{d_{i,n}})^\theta$, $i \in \{b,e\}$, where $g_0$ is the path-loss constant, $\theta$ is the path-loss exponent, $d_0$ is the reference distance, and ${d_{i,n}}$ is the distance from UE $n$ to receiver $i$. Furthermore, to improve the spectrum efficiency, NOMA is applied on the uplink for offloading. We assume that $h_{b,1}\le h_{b,2}\le \cdots \le h_{b,N}$ and $h_{e,1}\le h_{e,2}\le \cdots \le h_{e,N}$. Using SIC at the receiver side, the achievable secure offloading rate of UE $n$ can be given by
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
R_{n}^{off}(t)=[B\log_2(1+\gamma_{b,n})-B\log_2(1+\gamma_{e,n})]^+,
\end{equation}
where $B$ is the bandwidth allocated to each UE, $\gamma_{b,n}=\frac{p_{n(t)}h_{b,n}(t)}{\sum_{i=1}^{n-1}p_i(t)h_{b,i}(t)+\sigma_{b,n}^2}$ and $\gamma_{e,n}=\frac{p_{n}(t)h_{e,n}(t)}{\sum_{i=1}^{n-1}p_ih_{e,i}(t)+\sigma_{e,n}^2}$ are the SINRs received by the MEC server and the Eve respectively. $p_{n}(t)$ is the transmit power of UE $n$, $\sigma_{b,n} ^{2}$ and $\sigma_{e,n} ^{2}$ are the background noise variances at the MEC and the Eve respectively. $[x]^+$= $\max(x,0)$.
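A numerical sketch of the secrecy rate in (2) for the SIC decoding order above (the channel gains, powers and noise values are arbitrary placeholders):

```python
import math

# Secure offloading rate of UE n (0-indexed): the interference seen at each
# receiver comes from the weaker UEs i < n that are decoded after UE n.
def secure_rate(B, p, h_b, h_e, n, sigma_b2, sigma_e2):
    I_b = sum(p[i] * h_b[i] for i in range(n))
    I_e = sum(p[i] * h_e[i] for i in range(n))
    g_b = p[n] * h_b[n] / (I_b + sigma_b2)   # SINR at the MEC server
    g_e = p[n] * h_e[n] / (I_e + sigma_e2)   # SINR at the Eve
    return max(B * (math.log2(1 + g_b) - math.log2(1 + g_e)), 0.0)

# Two UEs; the legitimate channels are an order of magnitude stronger:
rate = secure_rate(B=1e6, p=[0.5, 0.5], h_b=[1e-6, 2e-6], h_e=[1e-7, 2e-7],
                   n=1, sigma_b2=1e-6, sigma_e2=1e-6)
print(rate > 0)  # True: a positive secrecy rate is achievable
```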
The power consumption for offloading can be expressed as
$ P_n^{off}(t) = \zeta p_n(t)+p_r$,
where $\zeta$ is the amplifier coefficient and $p_r$ is the constant circuit power consumption.
\section{Dynamic Task Offloading and Resource Allocation}
\subsection{Problem Formulation}
EE is defined as the ratio of the long-term total number of bits computed by all the UEs to the total energy consumption \cite{EEDf},
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\eta(t)=\frac{{\lim }_{T \to \infty } \frac{1}{T}\mathbb{E}[\sum_{t=1}^{T}R_{tot}(t)\tau]}{{\lim }_{T \to \infty } \frac{1}{T}\mathbb{E}[\sum_{t=1}^T P_{tot}(t)\tau]}=\frac{\overline{R}_{tot}\tau}{\overline{P}_{tot}\tau},
\end{equation}
where $R_{tot}(t)=\sum_{n=1}^NR_n^{tot}(t)$ and $P_{tot}(t)=\sum_{n=1}^N [P_n^{off}(t)+P_n^{loc}(t)]$ are the total achievable rate and the total power consumed by all the users at time slot $t$.
This work aims to maximize the long-term average EE for all the UEs under the constraints of resource limitations while guaranteeing the average queuing length stability. Therefore, the problem is formulated as
\begin{subequations}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\label{P0}
\begin{alignat}{5}
\textbf{P}{_0:}~ &\mathop {\max}_{f_n(t),p_n(t)} \mathop \eta\nonumber\\
s.t.~&~~ P_n^{tot}(t)\le P_{\max},\\
& \mathop {\lim }\limits_{T \to \infty } \frac{1}{T}\mathbb{E}[ |\overline Q_n(t)| ]=0,\\
&{f_n}(t) \le {f_{\max }},\\
&0 \le p_n(t),
\end{alignat}
\end{subequations}
where $\overline Q_n(t)$ is the average queue length of UE $n$. Constraint (\ref{P0}a) indicates that the total power consumed by each UE at time slot $t$ should not exceed the maximum allowable power $P_{\max}$. (\ref{P0}b) requires the task buffers to be mean-rate stable, which also ensures that all the arrived computation tasks are processed within a finite delay. (\ref{P0}c) gives the range of the local computing frequency, and (\ref{P0}d) requires the transmit power of each UE to be non-negative.
\subsection{Problem Transformation Using Lyapunov Optimization}
The problem $\textbf{P}_0$ is non-convex and difficult to solve due to the fractional structure of the objective function and the long-term queue constraint (\ref{P0}b). By introducing a new parameter $\eta^*(t)=\frac{\sum_{i=0}^{t-1}R_{tot}(i)\tau}{\sum_{i=0}^{t-1}P_{tot}(i)\tau}$ \cite{EEDf}, the problem can be transformed into $\textbf{P}_1$, which can be solved in an alternating way.
\begin{subequations}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\label{P1}
\begin{alignat}{5}
\textbf{P}{_1:}~ {\max}_{f_n(t),p_n(t)}& \overline{R}_{tot}(t)\tau- \eta^*(t)\overline{P}_{tot}(t)\tau\nonumber\\
s.t.~~&(\ref{P0}a)-(\ref{P0}d).\nonumber
\end{alignat}
\end{subequations}
{
Note that $\eta^*(t)$ is a parameter that depends on the resource allocation strategy before $t$-th time block \cite{EEDf}.} In the following, the Lyapunov optimization is introduced to tackle the task queue stability constraint.
To stabilize the task queues, the quadratic Lyapunov function is first defined as $L(\mathbf{Q}(t))\mathop = \limits^\Delta \frac{1}{2}\sum_{n=1}^N Q_n^2{(t)}$ \cite{lypu}.
Next, the one-step conditional Lyapunov drift function is introduced to push the quadratic Lyapunov function towards a bounded level.
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\Delta (\mathbf{Q} (t))\mathop = \limits^\Delta \mathbb{E}[L(\mathbf{Q} (t + 1)) - L(\mathbf{Q} (t))|\mathbf{Q} (t)].
\end{equation}
By incorporating queue stability, the Lyapunov drift-plus-penalty function is defined as
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
{\Delta _V}(\mathbf{Q} (t)) =- \Delta (\mathbf{Q} (t)) + V [R_{tot}(t)\tau- \eta^*(t)P_{tot}(t)\tau],
\end{equation}
where $V$ is a control parameter that tunes the tradeoff between the queue length and the system EE. The minus sign is used so that EE is maximized while the queue length bound is minimized.
For an arbitrary feasible resource allocation decision that is applicable in all the time slots, the drift-plus-penalty function ${\Delta _V}(\mathbf{Q}(t))$ satisfies
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{aligned}
\Delta_V(\mathbf{Q}(t))& \ge -C+\sum_{n=1}^N \mathbb{E}\{Q_n(t)(R_n^{tot}(t)\tau-A_{n}(t))\}\\
&+V\sum_{n = 1}^N[R_n^{tot}(t)\tau- \eta^*(t)P_n^{tot}(t)\tau],
\end{aligned}
\end{equation}
where $C = \frac{1}{2}\sum\limits_{n = 1}^N {({R_{n}^{\max }}^2\tau^2 + {A_{n}^{\max}}^2)} $, and $R_{n}^{\max }$ and $A_{n}^{\max}$ are the maximum achievable computing rate and the maximum arrival workload, respectively.
Thus, $\mathbf{P}_1$ is converted into a series of per-time-slot deterministic optimization problems $\mathbf{P}_2$, one to be solved at each time slot, as given in \textbf{Algorithm 1}.
\begin{algorithm}[!t]
\algsetup{linenosize=\small}
\small
\caption{ Dynamic Resource Allocation Algorithm }
\label{alg1}
\begin{algorithmic}[1]
\STATE At the beginning of the $t$th time slot, obtain $\{Q_n(t)\}$, $\{A_n(t)\}$.
\STATE Determine $\mathbf{f}(t)$ and $\mathbf{p}(t)$ by solving
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\begin{aligned}
\textbf{P}{_2:}~ & \max_{f_n(t),p_n(t)} \sum_{n=1}^N\{Q_n(t)(R_n^{tot}(t)\tau-A_{n}(t))\}\\
&+V\sum_{n = 1}^N[R_n^{tot}(t)\tau- \eta^*(t)P_n^{tot}(t)\tau]\\
s.t.~~& (\ref{P0}a),(\ref{P0}c),(\ref{P0}d) \nonumber\\
\end{aligned}
\end{equation}
\STATE Update $\{Q_n(t)\}$ and set $t=t+1$. Go back to step 1.
\end{algorithmic}
\end{algorithm}
In $\textbf{P}_{2}$, $\mathbf{f}(t)$ and $\mathbf{p}(t)$ can be decoupled with each other in both the objective function and the constraints. Thus, the problem $\textbf{P}_{2}$ can be decomposed into two sub-problems, namely the optimal CPU-cycle frequency scheduling sub-problem and the optimal transmit power allocation sub-problem, which can be solved alternately in the following.
\textbf{Optimal CPU-Cycle Frequencies Scheduling:}
The optimal CPU-cycle frequencies $\mathbf{f}(t)$ can be obtained by
\begin{alignat}{5}\label{PC}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\textbf{P}_{2.1}{:}\max_{ 0\le f_n(t)\le f_{\max}}& \sum_{n=1}^N(Q_n(t)+V)(R_n^{off}(t)+f_n(t)/C_n)\nonumber\\
&- V\eta^*(t)(\kappa_{n}f_n^3(t)+p_r+\zeta p_n(t))\nonumber\\
s.t.~~&\kappa_n f_n^3(t)\le P_{\max}-P_n^{off}.
\end{alignat}
Since the objective function of $\mathbf{P}_{2.1}$ is concave with respect to $f_n(t)$ and the constraints are convex, the optimal $f_n(t)$ can be given as
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
f_n^* = \left[\sqrt{\frac{(V + {Q_n}(t))}{3V\eta \kappa_n {C_n}}} \right]_0^{\overline{f}_{\max}},
\end{equation}
where $\overline{f}_{\max}=\min\{{f_{\max}},\root 3 \of {(P_n^{\max} - \zeta {p_n} - {p_r})/ \kappa_n}\}$ is the upper bound of the frequency.
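The closed form in (8) is simply the stationary point of the concave $f_n$-dependent part of $\mathbf{P}_{2.1}$, clipped to the feasible range; a quick numerical check (with an illustrative value of $\eta$, not a paper value) confirms it:

```python
import math

# Illustrative parameters; eta (bits/J) is a placeholder assumption.
V, Q, eta, kappa, C = 1e7, 1e6, 5e6, 1e-28, 737.5
f_bar = 2.15e9   # upper bound on the CPU frequency

def objective(f):
    # f-dependent part of P_2.1: (Q + V) * f / C - V * eta * kappa * f^3
    return (Q + V) * f / C - V * eta * kappa * f ** 3

# Eq. (8): stationary point clipped to [0, f_bar].
f_star = min(math.sqrt((V + Q) / (3 * V * eta * kappa * C)), f_bar)
# An interior stationary point should beat nearby feasible frequencies.
print(f_star < f_bar)                                # True (interior here)
print(objective(f_star) >= objective(0.9 * f_star))  # True
print(objective(f_star) >= objective(1.1 * f_star))  # True
```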
\textbf{Optimal Transmit Power Allocation:}
For the transmission power allocation optimization, the problem $\mathbf{P}_2$ is transformed into
\begin{alignat}{5}\label{PW3}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\textbf{P}_{2.2}{:}~ \max_{{p}_n(t)}&\sum_{n=1}^NB\ln 2(Q_n(t)+V)[\ln({ \sum \limits_{i = 1}^{n} {p_i}(t)h_{b,i}^2 + \sigma _{b,n}^2})\nonumber\\
&-\ln({ \sum \limits_{i = 1}^{n - 1} {p_i}(t)h_{b,i}^2 + \sigma _{b,n}^2})- \ln( { \sum \limits_{i = 1}^{n} {p_i}h_{e,i}^2 + \sigma _{e,n}^2})\nonumber\\
&+\ln({ \sum \limits_{i = 1}^{n - 1} {p_i}h_{e,i}^2 + \sigma _{e,n}^2})+\frac{f_n}{B\ln 2C_n}]\nonumber\\
&- V\eta^*(t)(\zeta {p_n} + {p_r} + \kappa_n {f^3_n})\nonumber\\
s.t.~~&0\le {p_n}(t) \le (P_{\max}-{p_r} - \kappa_n {f^3_n})/\zeta.
\end{alignat}
The negated logarithmic terms make the objective function non-convex, which is addressed by Lemma 1 introduced in the following.
$\mathbf{Lemma~1}$: By introducing the function $\phi(y)=-yx+\ln y+1$, $\forall x > 0$, one has
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
-\ln x=\max_{y>0}\phi(y).
\end{equation}
The maximum is achieved at $y=1/x$; hence, for any fixed $y>0$, $\phi(y)$ provides a concave lower bound on $-\ln x$ that is tight at $y=1/x$ \cite{lemma1}. By setting $y_{b,n}={ \sum \limits_{i = 1}^{n - 1} {p_i}(t)h_{b,i}^2 + \sigma _{b,n}^2}$ and $y_{e,n}={ \sum \limits_{i = 1}^{n} {p_i}(t)h_{e,i}^2 + \sigma _{e,n}^2}$, one has
\begin{alignat}{5}\label{PW4}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\textbf{P}_{2.3}{:}~ &\max_{{p}_n(t),y_{b,n},y_{e,n}}\sum_{n=1}^NB\ln 2(Q_n(t)+V)[\ln( \sum \limits_{i = 1}^{n} {p_i}(t)h_{b,i}^2\nonumber\\
&+ \sigma_{b,n}^2)+\phi_{b,n}(y_{b,n})+\phi_{e,n}(y_{e,n})+\ln({ \sum \limits_{i = 1}^{n - 1} {p_i}(t)h_{e,i}^2 + \sigma _{e,n}^2})\nonumber\\
&+\frac{f_n}{B\ln 2C_n}]- V\eta^*(t)(\zeta {p_n}(t) + {p_r} + \kappa_n {f^3_n})-Q_n(t)A_{n}(t)\nonumber\\
&s.t.~~0\le {p_n}(t) \le (P_{\max}-{p_r} - \kappa_n {f^3_n})/\zeta,
\end{alignat}
where $\phi_{b,n}(y_{b,n})=-y_{b,n}({ \sum \limits_{i = 1}^{n - 1} {p_i}(t)h_{b,i}^2 + \sigma _{b,n}^2})+\ln y_{b,n}+1$, and $\phi_{e,n}(y_{e,n})=-y_{e,n}({ \sum \limits_{i = 1}^{n} {p_i}(t)h_{e,i}^2 + \sigma _{e,n}^2})+\ln y_{e,n}+1$.
The problem $\textbf{P}_{2.3}$ is a convex problem with respect to both $p_n(t)$ and $y_{b,n},y_{e,n}$. It can be solved by using a standard convex optimization tool. After we obtain $p_n^*(t)$, the values of $y_{b,n}^*$ and $y_{e,n}^*$ can be respectively given by $y_{b,n}^*=({ \sum \limits_{i = 1}^{n - 1} {p_i^*}(t)h_{b,i}^2 + \sigma _{b,n}^2})^{-1}$ and $ y_{e,n}^*=({ \sum \limits_{i = 1}^{n} {p_i^*}(t)h_{e,i}^2 + \sigma _{e,n}^2})^{-1}$.
By alternately updating $p_n(t)$ and $y_{b,n},y_{e,n}$, the optimal solutions of $\mathbf{P}_{2.3}$ can be achieved at convergence.
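Lemma 1, on which this alternating scheme rests, can be verified numerically in a few lines (an illustrative check only):

```python
import math

# Lemma 1: -ln x = max_{y>0} phi(y), with phi(y) = -y*x + ln y + 1,
# and the maximum is attained at y = 1/x.
def phi(y, x):
    return -y * x + math.log(y) + 1.0

x = 2.5
y_star = 1.0 / x
gap = abs(phi(y_star, x) - (-math.log(x)))          # tight at y = 1/x
dominated = all(phi(y, x) < phi(y_star, x) for y in (0.1, 0.3, 1.0, 2.0))
print(gap < 1e-12, dominated)  # True True
```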
\textbf{Remark 1:} To obtain fundamental insights into the offloading power allocation of a multi-user NOMA-assisted secure MEC system, we consider a special case with two UEs \cite{fzsec}. The problem with respect to $p_n$ is given as
\begin{alignat}{5}\label{PW5}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
\textbf{P}_{2.4}{:}~ &\max_{p_1(t),p_2(t)}B\ln 2(V + {Q_2}(t))[\ln ({p_2}(t)h_{b,2}^2 + {p_1}(t)h_{b,1}^2 + \sigma _{b,2}^2)\nonumber \\
&- \ln ({p_1}(t)h_{b,1}^2 + \sigma _{b,2}^2)- {y_{b2}}({p_1}(t)h_{b,1}^2 + \sigma _{b,2}^2)\nonumber\\
&+ \ln {y_{b2}} + 1 + \ln ({p_1}(t)h_{e,1}^2 + \sigma _{e,2}^2) + {{{f_2}} \over {{C_2}B\ln 2}}]\nonumber\\
&+ B\ln 2(V+ {Q_1}(t))[\ln (\sigma _{b,1}^2 + {p_1}(t)h_{b,1}^2) - \ln \sigma _{b,1}^2\nonumber\\
&- {y_{e1}}(\sigma _{e,1}^2 + {p_1}(t)h_{e,1}^2) + \ln {y_{e1}} + 1+\ln \sigma _{e,1}^2\nonumber\\
& + {{{f_1}} \over {{C_1}B\ln 2}}] - V\eta (\zeta ({p_2}(t) + {p_1}(t)) + 2{p_r} + \kappa_n ({f_n}^3)\nonumber\\
s.t.~~&0\le {p_n}(t) \le (P_{max}-{p_r} - \kappa_n {f_n^3})/\zeta.
\end{alignat}
$\textbf{P}_{2.4}$ is a convex problem with respect to $p_1(t)$ and $p_2(t)$, and the optimal solutions are given as
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
{p^*_1}(t) = {{ - b_1 \pm \sqrt {b_1^2 - 4{b_2}} } \over 2},
\end{equation}
and
\begin{equation}
\setlength{\abovedisplayskip}{3pt}
\setlength{\belowdisplayskip}{3pt}
{p^*_2(t)}{\rm{ = }}{1 \over {({{V\eta \zeta } \over {B\ln 2(V + {Q_2}(t))}} + {y_{e2}}h_{e,2}^2)}} - \frac{{p_1}h_{b,1}^2}{h_{b,2}^2} - \frac{\sigma _{b,2}^2}{h_{b,2}^2},
\end{equation}
where $a_1 = {{V\eta \zeta } \over {B\ln 2}} + (V + {Q_2}(t))({y_{b2}}h_{b,1}^2 + {y_{e2}}h_{e,1}^2) + (V + {Q_1}(t)){y_{e1}}h_{e,1}^2 - {{(V + {Q_2}(t))h_{b,1}^2({{V\eta \zeta } \over {B\ln 2(V + {Q_2}(t))}} + {y_{e2}}h_{e,2}^2)} \over {h_{b,2}^2}}$, $b_1 = \sigma _{b,1}^2/h_{b,1}^2 + \sigma _{e,2}^2/h_{e,1}^2 - {{(V + {Q_1}(t))} \over {a_1}} - {{(V + {Q_2}(t))} \over {a_1}}$, and $b_2 = {{\sigma _{e,2}^2\sigma _{b,1}^2} \over {h_{e,1}^2h_{b,1}^2}} - {{(V + {Q_2}(t))} \over {a_1}}\sigma _{b,1}^2/h_{b,1}^2 - {{(V + {Q_1}(t))} \over {a_1}}\sigma_{e,2}^2/h_{e,1}^2$.
\section{Simulation Results}
\label{Simulation}
In this section, simulation results are provided to evaluate the proposed algorithm. The simulation settings are based on the works in \cite{qiot}, \cite{fzsec}. We consider a configuration with 2 UEs, which can be readily extended to a more general case. The system bandwidth for computation offloading is set as $B=1$ MHz, the time slot duration is $\tau=1$ s, the path-loss exponent is $\theta=4$, and the noise power is $\sigma_{i,j}^2=-60$ dBm, where $i \in \{b,e\}, j\in\{1,2\}$. The size of the arrival workload $A_n(t)$ is uniformly distributed within $[1,2]\times 10^6$ bits \cite{lyadeng}. Other parameter settings include the reference distance $d_0=1$ m, $g_0 =-40$ dB, $d_{b,1}=80$ m, $d_{b,2}=40$ m, $d_{e,1}=120$ m, $d_{e,2}=80$ m, $\kappa_n = 10^{-28}$, $P_{\max} = 2$ W, $f_{\max} = 2.15$ GHz, $C_n=737.5$ cycles/bit, the amplifier coefficient $\zeta=1$, and the control parameter $V=10^7$. The numerical results are obtained by averaging over $1000$ random channel realizations.
We compare our proposed algorithm with two benchmark schemes. In the first benchmark scheme, marked as ``Full offloading'', all tasks are offloaded to the MEC server and there is no local computation at all. The second benchmark \cite{fzsec}, marked as ``Eve fully decode'', assumes that the Eve can correctly decode the other users' information, which provides a worst-case scenario for comparison.
\begin{figure}[h]
\vspace{-0.3cm}
\setlength{\abovecaptionskip}{-0.2cm}
\setlength{\belowcaptionskip}{-1cm}
\centering
\includegraphics[width=2.8in]{syslot0710.eps}
\caption{System energy efficiency.\label{slotsy}}
\end{figure}
The system EE versus time is presented in Fig. \ref{slotsy}. The proposed method achieves the highest system EE among the three schemes. Furthermore, owing to the flexibility of combining offloading and local computing, both the proposed scheme and the ``Eve fully decode'' scheme can decide not to offload when the eavesdropper has a better channel on the offloading link, and to offload when the link is sufficiently secure. Therefore, these two schemes achieve a higher EE than the ``Full offloading'' scheme, which has to offload even when the links are insecure. The system EE stabilizes for all three schemes after $200$ time slots.
\begin{figure}[h]
\vspace{-0.3cm}
\setlength{\abovecaptionskip}{-0.2cm}
\setlength{\belowcaptionskip}{-1cm}
\centering
\includegraphics[width=2.8in]{atch0710.eps}
\caption{System energy efficiency v.s. Average arrival task length.\label{atch}}
\end{figure}
The system EE versus the average arrival task length is presented in Fig. \ref{atch}. The proposed method achieves the highest EE. For all three schemes, the EE decreases as the arrival task length increases, because a higher workload forces the system to increase the computing rate to maintain a low queue level, which in turn decreases the system EE. Furthermore, we notice that the performance gap between the ``Full offloading'' scheme and the other two schemes widens as the task length increases. This demonstrates that local computing is more energy efficient and secure for processing the computation tasks as the task size grows.
\begin{figure}[h]
\vspace{-0.3cm}
\setlength{\abovecaptionskip}{-0.2cm}
\setlength{\belowcaptionskip}{-1cm}
\centering
\includegraphics[width=2.8in]{edch0710.eps}
\caption{System energy efficiency v.s. eavesdropper relative distance.\label{edch}}
\end{figure}
Fig. \ref{edch} shows the system EE versus the eavesdropper location, where the eavesdropper relative distance is defined as the distance between the eavesdropper and the UE. The proposed design achieves the best performance among all schemes. The system EE of all schemes goes up as the eavesdropper relative distance increases, since a larger distance leads to a worse intercepting channel at the eavesdropper. Furthermore, the performance gap between the ``Full offloading'' scheme and the other two schemes decreases quickly with increasing relative distance. This is because the secure offloading rate increases quickly when the eavesdropper moves away.
\begin{figure}[h]
\vspace{-0.3cm}
\setlength{\abovecaptionskip}{-0.2cm}
\setlength{\belowcaptionskip}{-1cm}
\centering
\includegraphics[width=2.8in]{pch0710.eps}
\caption{System energy efficiency v.s. maximum available power $P_{\max}$.\label{pch}}
\end{figure}
The relationship between the EE and the maximum available power is illustrated in Fig. \ref{pch}. The EE increases with the available power and gradually converges to a constant value. This is because, when the available power is limited, the higher computing rate and the corresponding optimal EE cannot be achieved. As the power budget grows, the EE of all schemes keeps increasing until it reaches its highest level. Once the optimal tradeoff has been reached, even if more power is available in the system, all schemes remain at that highest level without consuming any additional power.
\section{Conclusion}
This paper designed a secure and energy-efficient computation offloading scheme for a NOMA-enabled MEC network in the presence of a malicious eavesdropper. To achieve a long-term performance gain under dynamic task arrivals and fading channels, we proposed a secure task offloading and computation resource allocation scheme that maximizes the long-term average EE, and solved the resulting problem within the Lyapunov optimization framework. Numerical results validated the advantages of the proposed design via comparisons with two benchmark schemes.
\section{Introduction}
Two-dimensional gravity is an important tool in the study of quantum gravity. It is a simpler analogue of four-dimensional gravity, and it arises in the diffeomorphism invariant theory living on a string theory world sheet.
Two-dimensional quantum gravity has been solved to a large extent, in three ways: via double-scaled matrix models \cite{Brezin:1990rb,Douglas:1989ve,Gross:1989vs}, using conformal field theory techniques \cite{Knizhnik:1988ak,David:1988hj,Distler:1988jt} and in its topological formulation \cite{Witten:1988ze,Witten:1989ig,Kontsevich:1992ti}. The three approaches were proven to be equivalent in many instances. See e.g. \cite{Ginsparg:1993is,DiFrancesco:1993cyw} for reviews of the two-dimensional theory of quantum gravity.
While the topological formulation of two-dimensional gravity on Riemann surfaces without boundary was put on a firm footing a while back \cite{Witten:1988ze,Witten:1989ig}, the theory on Riemann surfaces with boundary was only recently understood rigorously \cite{Pandharipande:2014qya}. This gave rise to a flurry of mathematical activity which made earlier observations in the physics literature \cite{Dalley:1992br,Johnson:1993vk,McGreevy:2003kb,Gaiotto:2003yb,Maldacena:2004sn} precise \cite{Alexandrov:2014gfa,Buryak:2015eza,BCT1,Buryak:2018ypm}.
See \cite{Dijkgraaf:2018vnm} for a partial review peppered with physical insight and \cite{Muraki:2018rqv,Aleshkin:2018klz,Alexandrov:2019wfk} for more recent results.
Of particular interest to us here is the open/closed duality, which was understood fairly well in the string theory literature \cite{Dalley:1992br,Johnson:1993vk,McGreevy:2003kb,Gaiotto:2003yb,Maldacena:2004sn}, and which was rigorously derived, including important additional details, in the more recent mathematical physics literature \cite{Alexandrov:2014gfa,Buryak:2015eza,BCT1,Buryak:2018ypm} in the case of pure two-dimensional gravity. This duality relates an open/closed string partition function to a purely closed string partition function; to be precise, the addition of a D-brane in topological string theory is transmuted into a shift of the background, a renormalization of the partition function, an operator insertion and an integral transform. While the conceptual framework goes back to \cite{McGreevy:2003kb,Gaiotto:2003yb,Maldacena:2004sn}, precise formulas were given more recently in \cite{Buryak:2015eza}. Only the latter allow one to make direct contact with the rigorous analysis in algebraic geometry \cite{Pandharipande:2014qya,Buryak:2015eza,BCT1,Buryak:2018ypm}.
The main mathematical tool in the derivation of the precise form of open/closed string duality \cite{Buryak:2015eza} was the Kontsevich matrix model of pure two-dimensional gravity \cite{Kontsevich:1992ti}.
This matrix model depends only on the closed string sources, and can be viewed as a closed string matrix model in that sense.
The closed versus open/closed duality in the case of pure gravity was derived in a particularly clear manner by integrating out off-diagonal degrees of freedom of an $(N+1) \times (N+1)$ matrix model to obtain an $N \times N$ matrix model depending on one extra eigenvalue, which is the integral transform of an open string coupling (or D-brane modulus) \cite{Buryak:2015eza}. In the large $N$ limit, this then gives rise to a duality between a closed string theory and an open/closed string theory. See also \cite{Brezin:2011ka,Alexandrov:2014gfa} for related results.
In this paper, we find an alternative to the original mathematical derivation that allows for a generalization to the case of two-dimensional topological gravity coupled to matter. The open/closed duality in this case was understood to a degree in \cite{Hashimoto:2005bf}. We provide the mathematically precise relation between the open/closed and closed string partition function using the technique described above of integrating out off-diagonal degrees of freedom at finite $N$, and then taking the infinite $N$ limit. This will lead to considerable additional insight. Indeed, we naturally find a determinant insertion in the Kontsevich matrix model. We moreover find that the closed string integrable hierarchy is intuitively extended to include the missing times. Importantly, by matching to the integrable system literature, we are able to obtain a precise algebro-geometric understanding of the resulting open/closed and extended partition functions. We also improve our understanding of the Virasoro algebras that govern the generating functions.
The paper is structured as follows. In section \ref{integratingout}, we first present the result of partially integrating out off-diagonal matrix elements in the Kontsevich matrix model in an elementary fashion. We then interpret the resulting equations in terms of concepts in the integrable systems literature, as well as in string theory. We moreover build the bridge to the open/closed topological intersection numbers studied in algebraic geometry.
In section \ref{Virasoro} we provide a guide to the various Virasoro algebras that appear in the context of these integrable systems, and clarify how to interpret them. The knowledge gained is used to prove a crucial statement in section \ref{integratingout}. In section \ref{conclusions}, we conclude with a summary and some open problems. In appendix \ref{FT} we provide properties of a generalized Fourier transform, in appendix \ref{AlternativeVirasoro} the result of conjugating Virasoro generators to a more familiar form, and in appendix \ref{alternativepuregravity}, we review an alternative viewpoint on open/closed duality in the case of pure gravity.
\section{The Open/Closed String Duality with Matter}
\label{matter}
\label{integratingout}
\label{matrixintegral}
In this section, we first derive a central technical result in an elementary manner.
We start from the generalized Kontsevich matrix model \cite{Kontsevich:1992ti,WittenAlgebraic,Adler:1992tj,Itzykson:1992ya} with matrix integration variables of dimension $(N+1) \times (N+1)$, and integrate out the matrix elements that are off-diagonal with respect to an $N \times N$ and a $1 \times 1$ block on the diagonal. As we will discuss, the resulting equation can be interpreted in terms of standard concepts in integrable systems, and has interesting conceptual consequences in topological gravity and low-dimensional string theory.
\subsection{A Brief Overview}
The generalized Kontsevich model is a matrix generalization of the higher Airy function \cite{Kontsevich:1992ti}.
When the matrix model integration variable tends to infinite size, it becomes a generating function for the $p$-spin intersection numbers on moduli spaces of Riemann surfaces \cite{Kontsevich:1992ti,WittenAlgebraic,Adler:1992tj,Itzykson:1992ya,Chiodo,Faber:2006gca,Brezin:2012uc}. Equivalently, it is the partition function of topological gravity coupled to matter of type $p$, or a closed string theory of gravity plus matter in dimension smaller than one \cite{Dijkgraaf:1990nc,Li:1990ke,Dijkgraaf:1990dj}.
Our starting point is the study of the generalized Kontsevich model at finite $N$. In particular, we will
perform the integration over $2N$ off-diagonal degrees of freedom in the $(N+1) \times (N+1)$ matrix integration variable, and will be left with an $N \times N$ matrix model and an effective action for one more diagonal degree of freedom. In the large $N$ limit, both the $(N+1)\times(N+1)$- and the $N\times N$-dimensional model will correspond to closed string matrix models. The extra eigenvalue will eventually be related to a D-brane or open string modulus. The equation that results from the integration will thus allow for an interpretation in terms of open/closed string duality.
Equivalently, it allows for an interpretation of intersection numbers of Riemann surfaces with boundaries and bulk and boundary insertions, in terms of intersection numbers of Riemann surfaces with only bulk insertions.
Our approach has been inspired by a number of sources. Firstly, the idea of open/closed string duality can be traced back at least to the advent of D-branes in string theory \cite{Polchinski:1995mt}. The fact that it takes a particularly simple form in the case of pure topological gravity was understood in \cite{Gaiotto:2003yb}. These ideas were extended to the case of topological gravity with matter in \cite{Hashimoto:2005bf}.
At a technical level, these references take a different approach. Secondly, the more rigorous reference \cite{Buryak:2015eza} follows almost the same technical route that we described above to render the physical intuition of \cite{Gaiotto:2003yb} mathematically precise, but there are important differences that we will highlight.
\subsection{The Generalized Kontsevich Model Extended}
A topological closed string partition function $\tau(\Lambda)$ corresponding to topological gravity in two dimensions coupled to topological matter is captured by the generalized Kontsevich model \cite{Kontsevich:1992ti,WittenAlgebraic,Adler:1992tj,Itzykson:1992ya}:
\begin{equation}
\tau(\Lambda) = \frac{N(\Lambda)}{D(\Lambda)} \, ,
\end{equation}
where the numerator $N(\Lambda)$ and the denominator $D(\Lambda)$ read
\begin{eqnarray}
N(\Lambda) &=& \int [dM]_{N } e^{-\alpha \frac{1}{p+1} \mathrm{Tr}\, [ (M+ \Lambda)^{p+1}]_{\ge 2}} \label{numerator}
\nonumber \\
D(\Lambda) &=& \int [dM]_{N } e^{- \frac{\alpha}{2} \sum_{k=0}^{p-1} \mathrm{Tr}\, M \Lambda^k M \Lambda^{p-1-k}} \label{denominator}
\end{eqnarray}
The square brackets with lower index $\ge 2$ indicate that one should consider only terms that are of order two or higher in the $N \times N$ hermitian matrix integration variable $M$.
The coupling constant $\alpha$ can be identified with one over the string coupling, $\alpha=-1/g_s$. The partition function of the closed string is a function of the matrix source $\Lambda$ which codes the values of all couplings to the matter primaries and gravitational descendants. When the couplings go to zero, the matrix source $\Lambda$ goes to infinity, and the $\tau$ function approaches one.
The matter content is specified by the order $p+1$ of the potential term. We have that $p=2$ for pure gravity.\footnote{We remark that these matrix integrals should be thought of as consisting of a Gaussian integration plus an exponential of higher powers that are to be expanded as formal power series in the integration variable.
For a careful treatment of these and other aspects see e.g. \cite{Kontsevich:1992ti, Buryak:2015eza}.}
Our set-up is elementary. We take the source matrix as well as the integration variable to be an $(N+1) \times (N+1)$ matrix and wish to interpret the last diagonal entry of the source matrix as (the dual of) an open string (or boundary insertion) modulus.
The matrix integral only depends on the eigenvalues of the source matrix $\Lambda_z$, which we take to be diagonal:
\begin{equation}
\Lambda_z = \mbox{diag} (\lambda_1,\dots,\lambda_N,z) \, . \label{sourceNplusOne}
\end{equation}
We used
the special notation $z$ for the last diagonal entry, which we plan to single out. We have the corresponding closed string partition function:
\begin{eqnarray}
\tau(\Lambda_z) &=& \frac{N(\Lambda_z)}{D(\Lambda_z)} \, . \label{NplusOne}
\end{eqnarray}
To perform the integration over the $2N$ off-diagonal degrees of freedom, we integrate over all $N (N+1)$ off-diagonal degrees of freedom, and then reinstate those that we wish to keep. To perform the integration, our main tool is the Harish-Chandra-Itzykson-Zuber integration formula \cite{Harish-Chandra:1957dhy,Itzykson:1979fi}. We integrate in two steps. Firstly, we concentrate on the numerator, and then we perform the Gaussian integration in the denominator.
The exponent in the numerator (\ref{numerator}) of equation (\ref{NplusOne})
can be simplified by shifting away the source $\Lambda_z$, to keep only a term which has power $p+1$, and a linear and constant term in the integration variable $M$. Indeed, we have
\begin{equation}
{[}(M+ \Lambda_z)^{p+1} {]}_{\ge 2} = (M+ \Lambda_z)^{p+1} - \Lambda_z^{p+1} - (p+1) \Lambda_z^p M \, ,
\end{equation}
and therefore upon shifting $M \longrightarrow M- \Lambda_z$, we find that the numerator $N(\Lambda_z)$ takes the form:
\begin{align}
N(\Lambda_z) &= \int [dM]_{N+1} e^{-\alpha \mathrm{Tr}\, \left( \frac{M^{p+1}}{p+1} - M \Lambda_z^{p} +\frac{p}{p+1}\Lambda_z^{p+1}\right)} \, .
\end{align}
We can then integrate over the unitary, angular variables that serve to diagonalise the matrix $M$.
To that end we parameterize the $(N+1) \times (N+1)$ hermitian matrix $M$ in terms of a unitary matrix $U$ and diagonal matrix $M_d$ as $M= U M_d U^{-1}$. The matrix integral factorizes:
\begin{align}
N(\Lambda_z) &= \frac{\pi^{\frac{N(N+1)}{2}}}{\prod_{i=1}^{N+1} i!} \int \prod_{i=1}^{N+1}dm_i \Delta_{N+1}(m)
e^{-
\frac{\alpha}{p+1} \left( \sum_{i=1}^{N+1} m_i^{p+1} + p \mathrm{Tr}\, \Lambda_z^{p+1}\right)
}\cr
&\hspace{3cm}\times \int [dU]_{N+1} e^{\alpha \mathrm{Tr}\, \Lambda_z^p U M_d U^{-1}} \,,
\end{align}
where the integration variables $m_i$ are the eigenvalues of the diagonal matrix $M_d$ and we defined the Vandermonde determinant measure factor
\begin{equation}
\Delta_N(m) = \prod_{1\le i < j \le N}(m_j-m_i)^2 \, .
\end{equation}
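As a consistency check on this convention (the measure factor is the \emph{squared} Vandermonde determinant), one can verify numerically that $\Delta_N(m) = \big(\det(m_i^{\,j-1})\big)^2$; a short Python sketch with hypothetical eigenvalues:

```python
import itertools

def delta_sq(m):
    """Squared Vandermonde product, as in the measure factor Delta_N(m)."""
    prod = 1.0
    for i, j in itertools.combinations(range(len(m)), 2):
        prod *= (m[j] - m[i]) ** 2
    return prod

def det(a):
    """Determinant by cofactor expansion along the first row."""
    n = len(a)
    if n == 1:
        return a[0][0]
    return sum((-1) ** c * a[0][c]
               * det([row[:c] + row[c + 1:] for row in a[1:]])
               for c in range(n))

m = [1.0, 2.0, 4.0]
V = [[x ** j for j in range(len(m))] for x in m]  # Vandermonde matrix
print(delta_sq(m), det(V) ** 2)  # both equal 36.0
```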
We use the Harish-Chandra-Itzykson-Zuber formula \cite{Harish-Chandra:1957dhy,Itzykson:1979fi}
\begin{equation}
\int [dU]_{N+1} e^{\alpha \mathrm{Tr}\, AUBU^{-1}} = \frac{\prod_{i=1}^N i!}{\alpha^{\frac{N(N+1)}{2}}} \frac{\text{det}(e^{\alpha\, a_i b_j}) }{\prod_{1\le i <j \le N+1} (a_j-a_i)(b_j-b_i)} \, ,
\end{equation}
to perform the integral over the unitary matrix $U$:
\begin{align}
N(\Lambda_z) &=\frac{1}{(N+1)!}\left(\frac{\pi}{\alpha}\right)^{\frac{N(N+1)}{2}} \frac{1}{\prod_{1\le i < j \le N+1} (\lambda_{z,j}^p-\lambda_{z,i}^p)} e^{-\frac{\alpha p}{p+1} \mathrm{Tr}\,\Lambda_z^{p+1}}\cr
&\times \int \prod_{i=1}^{N+1} dm_i \, e^{-\frac{\alpha}{p+1} \sum_{i=1}^{N+1} m_i^{p+1}}\prod_{1\le i <j \le N+1} (m_j-m_i) \, \text{det}\left( e^{\alpha \lambda_{z,i}^p m_j } \right) \, .
\end{align}
In the next step, we isolate the integral over the last diagonal variable $m_{N+1}$ and write the remaining factors as an $N\times N$ matrix integral that we wish to identify as a generalized Kontsevich model with an extra insertion. The first step towards this goal is to expand the determinant along the $(N+1)$st row of the matrix:
\begin{equation}
\det_{N+1} (e^{\alpha \lambda_{z,j}^p m_i}) \prod_{i<j}^{N+1} (m_j-m_i) = \sum_{l=1}^{N+1} e^{\alpha z^p m_l} \det_N (e^{\alpha \lambda_{j}^p m_i})_{i \neq l}
\prod_{i<j \neq l}^{N+1} (m_j-m_i) \prod_{i \neq l}^{N+1} (m_l -m_i) \, .
\end{equation}
By suitably changing variables in each term of the sum and using the permutation invariance of the measure, one can show that each resulting integral contributes equally.\footnote{We thank Alexandr Buryak for clarifying analogous steps that arise in \cite{Buryak:2015eza}.} Therefore, this sum may be written as $(N+1)$ times the contribution from the $l=N+1$ term. For ease of notation, we rename the diagonal integration variable $m_{N+1}=s$ and find the numerator:
\begin{align}
N(\Lambda_z) &=\frac{1}{N!} \left(\frac{\pi}{\alpha}\right)^{\frac{N(N+1)}{2}} \frac{1}{\prod_{i=1}^N (z^p - \lambda_i^p)} e^{-\alpha\frac{p}{p+1} (z^{p+1} +\mathrm{Tr}\,\Lambda^{p+1})} \cr
& \int ds\, e^{-\alpha\frac{s^{p+1}}{p+1} + \alpha z^p\, s} \int \prod_{i=1}^N dm_i \Delta_N(m_i) \frac{\det(e^{\alpha \lambda_i^p m_j}) \prod_{i=1}^N (s-m_i)}{\prod_{1\le i < j \le N} (m_j-m_i) (\lambda_j^p - \lambda_i^p) } \, .
\end{align}
At this point, we trace our steps backwards and use the Harish-Chandra-Itzykson-Zuber formula in reverse. Thus, we write the numerator in terms of the integration over a single variable $s$ and a $N\times N$ hermitian matrix $M$:
\begin{align}
N(\Lambda_z) &= \left(\frac{\pi}{\alpha}\right)^N \frac{1}{\det(z^p - \Lambda^p)} \int ds\, e^{-\alpha\left(\frac{s^{p+1}}{p+1} - z^p\, s + \frac{p}{p+1} z^{p+1} \right) }\cr
&\times \int [dM]_N e^{-\alpha \mathrm{Tr}\, \left( \frac{M^{p+1}}{p+1} - \Lambda^p M + \frac{p}{p+1} \Lambda^{p+1} \right) } \, \det(s-M) \,.
\end{align}
This finishes the first and harder part of the calculation. Secondly, we must keep track of the denominator $D(\Lambda_z)$ that serves to anchor the partition function $\tau$ at $1$ for large source matrix $\Lambda$.
The integration in the denominator is Gaussian, and the $N \times N$ determinant that will serve to normalize the final $N
\times N$ matrix integration can easily be factored out:
\begin{align}
D(\Lambda_z) &= \int [dM]_{N+1} e^{- \frac{\alpha}{2} \sum_{k=0}^{p-1} \mathrm{Tr}\, M \Lambda_z^k M \Lambda_z^{p-1-k}}
\nonumber \\
&= \prod_{i=1}^{N+1} \sqrt{\frac{2 \pi}{\alpha}} \frac{1}{\sqrt{p \lambda_{z,i}^{p-1}}}\prod_{i \neq j}^{N+1}\sqrt{ \frac{\pi}{\alpha}} \sqrt{\frac{\lambda_{z,j}-\lambda_{z,i}}{\lambda_{z,j}^p-\lambda_{z,i}^p} }
\nonumber \\
&= \sqrt{\frac{2 \pi}{\alpha}}\, (p z^{p-1})^{-\frac{1}{2}} \prod_{i=1}^N \frac{\pi}{\alpha} \frac{\lambda_i -z}{\lambda_i^p-z^p} \ D(\Lambda)
\nonumber \\
&= \sqrt{\frac{2 \pi}{\alpha}} \left(\frac{\pi}{\alpha}\right)^{N} \frac{\det (z-\Lambda)}{\det (z^p-\Lambda^p)} (p z^{p-1})^{-\frac{1}{2}}\ D(\Lambda) \, .
\end{align}
The integrating-out in both numerator and denominator provides us with a final formula for the tau-function at finite $N$:
\begin{align}
\tau(\Lambda_z) &= \sqrt{\frac{\alpha\, p}{2\pi}}\ \frac{z^{\frac{p-1}{2}}}{\det(z-\Lambda)} e^{-\alpha\frac{ p}{p+1} z^{p+1}} \int ds\, e^{-\frac{\alpha}{p+1}s^{p+1}+ \alpha \, z^p\, s} \cr
&\hspace{2cm} \times\frac{1}{D(\Lambda)} \int [dM]_N e^{-\alpha\frac{1}{(p+1)}\mathrm{Tr}\, \left [(M+\Lambda)^{p+1} \right]_{\ge 2} }\, \text{det}(s-M-\Lambda) \,.
\label{MainResult1}
\end{align}
This is our first technical result. To clarify its significance, we repackage it in various ways, and then provide an interpretation.
\subsection{The Closed and Open/Closed String Partition Functions}
To make contact with both the string theory and integrability literature, we slightly reshuffle
the result \eqref{MainResult1}:
\begin{align}
\det\left(1-\frac{z}{\Lambda}\right)\, \tau(\Lambda_z)
&= \sqrt{ \frac{\alpha\, p}{2 \pi}}\, z^{\frac{p-1}{2}}\, e^{-\alpha\frac{p}{p+1} z^{p+1}}
\, \int ds\, e^{ \alpha\, z^p\, s}
\nonumber \\
&\hspace{1cm} \times \frac{1}{D(\Lambda)} \int [dM] e^{-\alpha\frac{1}{(p+1)}\mathrm{Tr}\, \left [(M+\Lambda)^{p+1} \right]_{\ge 2} }\, e^{-\alpha\frac{1}{p+1} s^{p+1}}\, \frac{\text{det}(\Lambda+M-s)}{\text{det}(\Lambda)} \, . \label{MainResult2}
\end{align}
\subsubsection*{The Closed String Partition Function}
We wish to be more specific about the interpretation and meaning of both the left and the right hand side of the equality (\ref{MainResult2}).
Firstly, let us remind the reader that the closed
string partition function $\tau(\Lambda)$ is the $\tau$-function of a reduced KP integrable hierarchy. The times $t_n$ of the integrable hierarchy are defined in terms of the source matrix $\Lambda$ by
\begin{equation}
t_n = - \frac{1}{n}\, \mathrm{Tr}\, \Lambda^{-n} \, .
\label{definitiontimes}
\end{equation}
The times play the role of closed string couplings in topological string theory. A well known and important result is the independence of the closed string partition function on the times $t_{np}$, where $n\in \mathbb{N}$ \cite{Kontsevich:1992ti, Itzykson:1992ya}. We will make use of this repeatedly in what follows.
Secondly, we consider the closed string partition function $\tau(\Lambda_z)$ with the extra source eigenvalue $z$ (as in equation (\ref{sourceNplusOne})).
We can think of the addition of the extra source eigenvalue $z$
as a redefinition of the closed string couplings $t_n$.
Indeed, we have:
\begin{equation}
\widetilde{t}_n = -\frac{1}{n}\, \mathrm{Tr}\, \Lambda_z^{-n}
=-\frac{1}{n}\, \mathrm{Tr}\, \Lambda^{-n} - \frac{1}{n\, z^{n}}
= t_n - \frac{1}{n\, z^{n}} \, .
\end{equation}
Thus, if we view the $\tau$-function as a function of the time variables $t_n$, which are independent in the large $N$ limit, then we can define a shifted closed string partition function by the formula:
\begin{equation}
\tau(\Lambda_z)
= \tau\left(t_{n} - \frac{1}{n\, z^{n}}\right) \, .
\end{equation}
The extra source term redefines the closed string background. This elementary aspect of the duality formula (\ref{MainResult2}) is well-understood in the literature.
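Concretely, the shift of couplings can be checked numerically from the definition (\ref{definitiontimes}); the following Python sketch (with hypothetical eigenvalues) verifies that appending the eigenvalue $z$ to the source shifts each time $t_n$ by $-1/(n z^n)$:

```python
def times(eigenvalues, nmax):
    """t_n = -(1/n) Tr Lambda^{-n} for a diagonal source matrix."""
    return [-sum(lam ** (-n) for lam in eigenvalues) / n
            for n in range(1, nmax + 1)]

# Hypothetical source eigenvalues and extra eigenvalue z.
lams, z = [2.0, 3.0, 5.0], 4.0
t = times(lams, 5)
t_tilde = times(lams + [z], 5)  # source enlarged by the eigenvalue z
shifted = [t[n - 1] - 1.0 / (n * z ** n) for n in range(1, 6)]
```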
Let us introduce an alternative symbol for the determinant on the left hand side of equation (\ref{MainResult2}):
\begin{equation}
e^{\xi} = \det\left(1-\frac{z}{\Lambda}\right) \, ,
\end{equation}
which implies that
\begin{align}
\label{xidefn}
\xi &= \sum_{n=1
}^{\infty} z^n t_n \, .
\end{align}
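The expansion (\ref{xidefn}) is just the Taylor series of $\log\det(1-z/\Lambda)$ in the times; for a diagonal source this is easy to confirm numerically (the eigenvalues below are hypothetical, and $|z|$ is kept below the smallest eigenvalue so the series converges):

```python
import math

def xi_exact(z, lambdas):
    """log det(1 - z/Lambda) for a diagonal source Lambda."""
    return sum(math.log(1.0 - z / lam) for lam in lambdas)

def xi_series(z, lambdas, nmax=200):
    """Truncation of sum_{n>=1} z^n t_n with t_n = -(1/n) Tr Lambda^{-n}."""
    total = 0.0
    for n in range(1, nmax + 1):
        t_n = -sum(lam ** (-n) for lam in lambdas) / n
        total += z ** n * t_n
    return total

lams = [2.0, 3.0, 5.0]
print(abs(xi_exact(0.5, lams) - xi_series(0.5, lams)))  # negligibly small
```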
We then recognize on the left hand side of equation (\ref{MainResult2}) a quantity from integrable systems \cite{Bertola:2014yka}, namely the extended partition function or wave potential $\tau_{\text{ext}}(\Lambda, z)$ equal to:
\begin{equation}
\tau_{\text{ext}}(\Lambda, z)
= e^{\xi}\, \tau(\Lambda_z) \, .
\label{wavepotential}
\end{equation}
It is important that the extended tau-function $\tau_{\text{ext}}(\Lambda, z)$ (in contrast to the tau-function $\tau(\Lambda)$) depends on all times of the KP integrable hierarchy, and in particular on the times $t_{np}$. The factor $e^{\xi}$ introduces an exponential $t_{np}$ dependence.
Thus the left-hand side $\tau_{\text{ext}}(\Lambda, z)$ of the equality (\ref{MainResult2}) is an extended and shifted closed string partition function, equal to the wave potential of the KP integrable hierarchy.
\subsubsection*{The Wave Potential and the Open String Partition Function}
\label{MainTheorem}
Moreover, we also wish to compactly code and interpret the right hand side of equation \eqref{MainResult2}. To that end we define an open/closed partition function:
\begin{align}
\tau^{\text{op}+\text{cl}} (\Lambda,s)
&= \frac{1}{D(\Lambda)} \int [dM] e^{-\alpha\frac{1}{(p+1)}\mathrm{Tr}\, \left [(M+\Lambda)^{p+1} \right]_{\ge 2} }\, e^{-\alpha\frac{1}{p+1} s^{p+1}}\, \frac{\text{det}(\Lambda+M-s)}{\text{det}(\Lambda)} \, ,
\label{definitiontauopcl}
\end{align}
which is equal to the generalized Kontsevich matrix integral with a normalized determinant insertion labelled by a coupling constant $s$. Our result (\ref{MainResult2}) then reads more compactly:
\begin{align}
\tau_{\text{ext}}(\Lambda, z) &=
\sqrt{ \frac{\alpha\, p}{2 \pi}}
z^{\frac{p-1}{2}}\, e^{ -\alpha \frac{p}{p+1} z^{p+1}}
\int ds\, e^{ \alpha\, z^p\, s}\, \tau^{\text{op}+\text{cl}}(\Lambda,s)
\, .
\end{align}
If we moreover define a formal Fourier transform of a function (or rather a formal power series) $f(s)$ by:
\begin{align}
\Phi[f(s)](z)& := \sqrt{\frac{\alpha\, p}{2 \pi}}\, z^{\frac{p-1}{2}} \int ds\, e^{\alpha \left( \frac{z^{p+1}}{p+1} + z^p s\right)} \, f(s+z) \,,
\end{align}
we have that our duality succinctly reads:
\begin{align}
\tau_{\text{ext}}(\Lambda, z) &= \Phi\left[ \tau^{\text{op}+\text{cl}}(\Lambda,s) \label{MainResult3}
\right](z) \,.
\end{align}
We summarize that the (finite $N$ generalization of the) extended closed string partition function equals the integral transform of the open/closed string partition function.
Importantly, we still need to argue that the correct interpretation of the right hand side of the result (\ref{MainResult3}) is indeed as an open/closed string partition function. In order to do so, we need considerably more background.
Relatively recently, the following wave function was introduced in the integrable system literature \cite{Buryak:2014dta,Bertola:2014yka}.
Given a $p$-reduced integrable KP hierarchy and the associated Lax operator $L$ -- see \cite{Dickey:1991xa}
for a pedagogical introduction to the subject --
satisfying the evolution equations:
\begin{equation}
\frac{\partial L}{\partial t_m} = [ [L^{\frac{m}{p}}]_{\ge 0}\,, L ] \, ,
\end{equation}
one defines a wave function $\Psi(t_k)$ that satisfies the differential equations:
\begin{equation}
\frac{\partial \Psi}{\partial t_m} = [L^{\frac{m}{p}}]_{\ge 0}\, \Psi \, ,
\label{WaveEquations}
\end{equation}
as well as the initial condition \cite{Buryak:2014dta,Bertola:2014yka}
\begin{equation}
\Psi_{t_{n \ge 2}=0}=1 \,. \label{InitialCondition}
\end{equation}
The interest in the wave function $\Psi(t_k)$ originates in the advance that has been made in computing intersection numbers on moduli spaces of Riemann surfaces with boundaries
\cite{Buryak:2014dta,Buryak:2018ypm}. Indeed, in the presence of boundaries, the reduction of the integrable hierarchy no longer takes place, and the generating function of intersection numbers does depend on the times $t_{np}$. Moreover, there is an extra dependence on a coupling constant $s$ that counts open string or boundary insertions.
The partition function of closed/open intersection numbers can be expressed in terms of the wave function $\Psi$ \cite{BCT1,Buryak:2018ypm}. Let us describe the final result of the algebro-geometric calculations. We introduce free energies that generate intersection numbers on Riemann surfaces with and without boundary:
\begin{align}
\tau &= e^{F^{\text{cl}}}
\nonumber \\
\tau^{\text{op}+\text{cl}}_{\text{geom}} &= e^{ F^{\text{cl}} +F^{\text{op}}} = \tau \, e^{F^{\text{op}}}
\, .
\end{align}
Crucially, it is shown in \cite{Buryak:2018ypm} that the generator $F^{\text{op}}$ of geometric open/closed intersection numbers is given by the logarithm of the wave function $\Psi(t_k)$ after a substitution of variables:
\begin{equation}
F^{\text{op}} =
\log \Psi
\left({t}_i,t_p \rightarrow
t_p-\alpha s \right)
\, .
\end{equation}
It is important to note here that in the generating function, we have allowed for extended closed string amplitudes, namely, amplitudes where we have added boundaries but no explicit open string insertions. In this regard, see the useful intuitive remarks in \cite{BCT1}.
Finally, we are ready to state the relation our matrix model has to the algebro-geometric open/closed intersection numbers. We propose that our matrix integral $\tau^{\text{op}+\text{cl}}$ is equal to its geometric counterpart:
\begin{eqnarray}
\tau^{\text{op}+\text{cl}} = \tau^{\text{op}+\text{cl}}_{\text{geom}} = \tau(t) \, \Psi({t},t_p- \alpha s )
\, .\label{GeometricMainResult}
\end{eqnarray}
The left hand side is our open/closed string partition function $\tau^{\text{op}+\text{cl}}$ defined in terms of the generalized Kontsevich integral with normalized determinant insertion \eqref{definitiontauopcl}, while the right hand side is the generator of geometric open/closed string intersection numbers, according to \cite{Buryak:2018ypm}.
We will prove the identification \eqref{GeometricMainResult} in the following. As a warm-up,
let us first show that our $\tau^{\text{op}+\text{cl}}$ indeed depends on $s$ and $t_p$ only through the combination $t_p-\alpha s$, as expected from equation (\ref{GeometricMainResult}).
We write the open/closed partition function $\tau^{\text{op}+\text{cl}}(\Lambda,s)$ as the inverse integral transform of the wave potential $\tau_{\text{ext}}(\Lambda,z)$:\footnote{To obtain this expression, the contour in the $z$-plane is engineered to give the equality
\begin{equation}
\frac{\alpha p}{2\pi} \int dz\, z^{p-1}\, e^{\alpha z^{p} (s-s')} =\frac{1}{2\pi} \int d(\alpha z^{p})\, e^{\alpha z^{p} (s-s')} = \delta(s-s')
\end{equation}}
\begin{align}
\tau^{\text{op}+\text{cl}}(\Lambda, s) = \sqrt{\frac{\alpha\, p}{2 \pi}} \int\, dz\, z^{\frac{p-1}{2}} e^{\alpha \frac{p}{p+1} z^{p+1} -\alpha\, z^p\, s} \tau_{\text{ext}}(\Lambda,z) \, . \label{tauopcltauext}
\end{align}
From this expression, one verifies that the open/closed partition function is annihilated by the following operator:
\begin{align}
\left(\frac{1}{\alpha}\frac{\partial}{\partial s} + \frac{\partial}{\partial t_p} \right) \tau^{\text{op}+\text{cl}}(\Lambda, s) &=
\sqrt{\frac{\alpha\, p}{2 \pi}} \int\, dz\, z^{\frac{p-1}{2}} e^{\alpha \frac{p}{p+1} z^{p+1} -\alpha\, z^p\, s} \left(- z^p +\frac{\partial}{\partial t_p} \right)\tau_{\text{ext}}(\Lambda,z) \cr
&= 0 \,.
\end{align}
In the second equality, we have used that the wave potential depends on the time $t_p$ only through the $e^{\xi}$ factor, since the purely closed string partition function $\tau(\Lambda_z)$ is independent of the times $t_{np}$. This finishes our proof of the elementary property.
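Explicitly, since $\partial e^{\xi}/\partial t_p = z^p\, e^{\xi}$ while $\tau(\Lambda_z)$ carries no $t_p$ dependence, we have
\begin{equation}
\frac{\partial}{\partial t_p}\, \tau_{\text{ext}}(\Lambda,z) = \tau(\Lambda_z)\, \frac{\partial e^{\xi}}{\partial t_p} = z^p\, \tau_{\text{ext}}(\Lambda,z)\,,
\end{equation}
so that the operator in the integrand indeed annihilates the wave potential.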
More importantly, we will prove in Section \ref{Proof} that the wave function $\Psi(t_k)$, which equals the ratio of our matrix model integral $\tau^{\text{op}+\text{cl}}$ and the original closed string tau-function $\tau(\Lambda)$, indeed satisfies the differential equations \eqref{WaveEquations} and the initial conditions \eqref{InitialCondition}. Since the solution to these equations is unique, this will prove our claim \eqref{GeometricMainResult}.
In closing we remark that the matrix model perspective on both the algebraic and stringy duality naturally gives rise to the extended closed string partition function encountered in \cite{BCT1}. This is directly related to the wave function that is canonical from the perspective of the integrable system \cite{Buryak:2014dta,Bertola:2014yka}, i.e. it is the wave function that governs the algebraic intersection numbers \cite{Buryak:2018ypm}, including their extra time dependencies.
\section{The Open/Closed String Virasoro Constraints}
\label{Virasoro}
The KdV integrable hierarchy combined with one Virasoro constraint -- the string equation -- is sufficient to determine all correlators of topological gravity coupled to matter. Equivalently, the W-algebra constraints on topological gravity correlators are sufficient to determine them all. A subset of the W-algebra constraints are the Virasoro equations, which are strong constraints on the partition function and which allow us to make clear contact with the free fermion formulation of the integrable hierarchy, for instance. They will also allow us to make further contact with the geometric framework \cite{Buryak:2018ypm}. Crucially, they are instrumental in proving the identification between our matrix integral and the generator of geometric invariants. The identification of these Virasoro algebras goes back to the study of open/closed string matrix models \cite{Dalley:1992br,Johnson:1993vk} and has also been touched upon in \cite{Gaiotto:2003yb,Dijkgraaf:2018vnm}, amongst many other places. We hope our treatment clarifies the various guises of the Virasoro algebra in the literature.
\subsection{The Open/Closed Virasoro Algebra}
\label{extendedVirasoro}
The closed string partition function is annihilated by half of a Virasoro algebra \cite{Fukuma:1990jw,Dijkgraaf:1990rs,Adler:1992tj,Kharchev:1991cy,Dijkgraaf:1991qh}:
\begin{equation}
L_{n} \tau(\Lambda_z) = 0 \qquad\text{for}\qquad n\ge -1\,.
\end{equation}
The generators of the Virasoro algebra are given traditionally in terms of the times $t_k$ of the integrable system:
\begin{align}
\label{oldVirasoro}
L_{-1} &= \alpha \frac{\partial}{\partial t_1} + \sum_{k=p+1}^{\infty\ \prime} \frac{k}{p} t_k \frac{\partial}{\partial t_{k-p}} +\frac{1}{2p}\sum_{k=1}^{p-1}k(p-k)\, t_k t_{p-k} \cr
L_0 &= \alpha \frac{\partial}{\partial t_{p+1}}+ \sum_{k=1}^{\infty\ \prime} \frac{k}{p} t_k \frac{\partial}{\partial t_{k}} + \frac{p^2-1}{24p} \cr
L_{n} &= \alpha \frac{\partial}{\partial t_{(n+1)p+1}} + \sum_{k=1}^{\infty\ \prime}\frac{k}{p}t_k \frac{\partial}{\partial t_{np+k}} + \frac{1}{2p} \sum_{k=1}^{pn-1\ \prime} \frac{\partial^2}{\partial t_{k} \partial t_{np-k} } \,. \cr
\end{align}
The primed sums have indices in the reduced set of times $t_{k \notin p \mathbb{N}}$.
Here we study a set of extended Virasoro generators $L_n^{\text{ext}}$ that act on the wave potential $\tau_{\text{ext}}(\Lambda,z)$ in such a manner that they become differential operators in the spectral parameter $z$ only. Moreover, after Fourier transform, they become differential operators in the variable $s$. Thus, we will find the following equalities:
\begin{equation}
\label{openclosedLL}
L_n^{\text{ext}}(t_k) \tau_{\text{ext}} = -L_n^{\text{ext}}(z)\Phi[ \tau^{\text{op}+\text{cl}}(s)](z) = - \Phi[ L^{\text{ext}}_n(s) \tau^{\text{op}+\text{cl}} ] \,.
\end{equation}
Therefore, the wave potential $\tau_{\text{ext}}$ will be annihilated by the generators
\begin{equation}
L_n^{\text{cl}} = L_n^{\text{ext}}(t_k) + L_n^{\text{ext}}(z) \label{ClosedVirasoro} \, ,
\end{equation}
while the open/closed partition function $\tau^{\text{op}+\text{cl}}$ will be killed by the Virasoro generators:
\begin{equation}
L_n^{\text{op}} = L_n^{\text{ext}}(t_k) + L_n^{\text{ext}}(s) \, . \label{OpenVirasoro}
\end{equation}
To prove the equalities \eqref{openclosedLL}, we firstly propose that the extended Virasoro generators are given by the following expressions in terms of the times $t_k$:
\begin{align}
L_{-1}^{\text{ext}} &= \alpha \frac{\partial}{\partial t_1} + \sum_{k=p+1}^{\infty} \frac{k}{p} t_k \frac{\partial}{\partial t_{k-p}} +\frac{1}{2p}\sum_{k=1}^{p-1}k(p-k)\, t_k t_{p-k} +t_p\cr
L_0^{\text{ext}} &= \alpha \frac{\partial}{\partial t_{p+1}}+ \sum_{k=1}^{\infty} \frac{k}{p} t_k \frac{\partial}{\partial t_{k}} + \left(\frac{p^2-1}{24p} +\frac{1}{2p}\right)\cr
L_{n}^{\text{ext}} &= \alpha \frac{\partial}{\partial t_{(n+1)p+1}} + \sum_{k=1}^{\infty}\frac{k}{p}t_k \frac{\partial}{\partial t_{np+k}} + \frac{1}{2p} \sum_{k=1}^{pn-1} \frac{\partial^2}{\partial t_{k} \partial t_{np-k} } + \, \frac{1}{p} \frac{\partial}{\partial t_{np}} \,. \cr
\label{extendedClosedVirasoro}
\end{align}
Note that all times $t_{np}$ appear in these extended Virasoro generators.
This is a natural change, since the wave potential depends on all times. Secondly, we observe additional terms that can formally be argued for as follows. If we were to introduce a time $t_0$ such that $t_0 = -\frac{1}{0} \text{Tr} \Lambda^0$ -- compare to equation \eqref{definitiontimes} --, then $0 \, t_0$ would measure minus the dimension of the matrix $\Lambda$. Since the latter changes by one in our process of integrating out off-diagonal degrees of freedom, the combination $0 \, t_0$ can be argued to change by one. This formally gives rise to the last term in the first and third lines of the proposal \eqref{extendedClosedVirasoro}.
The additional contribution to the constant term in $L_0$ can then be obtained by computing the $[L^{\text{ext}}_1, L^{\text{ext}}_{-1}]$ commutator. A more solid argument for the proposal is the long calculation that follows.
By explicitly acting with the extended Virasoro generators $L_n^{\text{ext}}(t_k)$ on the wave potential $\tau_{\text{ext}}$, we will manage to represent the extended Virasoro operators as differential operators in $z$ exclusively.
Firstly, we calculate the action of the generators $L_{n>0}^{\text{ext}}$ on the wave potential:
\begin{align}
L_n^{\text{ext}} \tau_{\text{ext}} &= L_n^{\text{ext}} \left(e^{\xi}\, \tau(\Lambda_z) \right) \cr
& = \tau(\Lambda_z)\, L_n^{\text{ext}} e^{\xi} + e^{\xi}\, L_n^{\text{ext}} \tau(\Lambda_z) +
\frac{1}{p} \sum_{k=1}^{pn-1} \frac{\partial e^{\xi}}{\partial t_{np-k}} \frac{\partial \tau(\Lambda_z)}{\partial t_{k} }\,.
\label{Lnaction}
\end{align}
We have taken into account the cross term that arises due to the double-derivative term in the $L^{\text{ext}}_{n>0}$ generators. Let us consider each of these terms in turn.
The first term is proportional to:
\begin{equation}
L_n^{\text{ext}} e^{\xi} = \left( \alpha\, z^{np+p+1} + \frac{1}{p}\sum_{k=1}^{\infty}k t_k z^{np+k} + \frac{1}{2p} \sum_{k=1}^{np-1} z^{np} + \frac{1}{p}z^{np} \right)e^{\xi} \, . \label{intermediate}
\end{equation}
Observe that the second term in equation (\ref{intermediate}) can be rewritten as a $z$-derivative:
\begin{equation}
\label{Lnfinal}
L_n^{\text{ext}}\, e^{\xi} = \left( \alpha\, z^{np+p+1} + \frac{z^{np+1}}{p}\frac{\partial}{\partial z}+ \frac{np+1}{2p} z^{np} \right)e^{\xi} \, .
\end{equation}
Let us now consider the second term in equation \eqref{Lnaction}. The important input here is that the closed string partition function $\tau(\Lambda_z)$ is annihilated by the action of the shifted Virasoro algebra where the shifted times $\widetilde{t}_k$ are given by
\begin{equation}
\widetilde{t}_k = t_k - \frac{1}{k\, z^k} \,.
\label{shiftedtimes}
\end{equation}
The shifted Virasoro generators take the simple form:
\begin{equation}
L_n^{\text{ext}}(t_k) = L_n^{\text{ext}}(\widetilde{t}_k) + \frac{1}{p}\sum_{k=1}^{\infty} \frac{1}{z^k} \frac{\partial }{\partial t_{np+k}}\,.
\end{equation}
Acting on $\tau(\Lambda_z)$, the first term gives a zero contribution, while the second term can be rewritten as:
\begin{align}
L_n^{\text{ext}}\, \tau(\Lambda_z) &=
\frac{1}{p}\, z^{np+1}\, \frac{\partial \tau(\Lambda_z)}{\partial z} -
\frac{1}{p}\, z^{np+1}\, \sum_{k=1}^{np-1}z^{-k-1}\frac{\partial \tau(\Lambda_z)}{\partial t_{k}}\,.
\label{Lnontau}
\end{align}
Finally, the last term in equation \eqref{Lnaction} can be evaluated to be:
\begin{equation}
\frac{e^{\xi}}{p} \, z^{np+1} \sum_{k=1}^{np-1} z^{-k-1} \frac{\partial \tau(\Lambda_z)}{\partial t_{k} }\,.
\end{equation}
We see that this contribution cancels the second term in equation \eqref{Lnontau} after multiplication by $e^{\xi}$. Furthermore, the coefficients of the $\partial / \partial z$ terms in
formulas \eqref{Lnfinal} and \eqref{Lnontau} are the same. Therefore, summing over all terms, we find the following differential operator as the action of the extended Virasoro generators on the wave potential:
\begin{align}
\label{Lclz}
L_{n}^{\text{ext}}(t_k) \tau_{\text{ext}}
& = \left(\alpha z^{np+p+1} + \frac{1}{p} z^{np+1} \frac{\partial}{\partial z} + \frac{ np+1}{2p} \, z^{np} \right) \tau_{\text{ext}} \,.
\end{align}
The differential operator on the right hand side is the negative of what we defined to be $L_n^{\text{ext}}(z)$ in equation \eqref{ClosedVirasoro} for the $n>0$ Virasoro generators. We turn to the remaining two cases. The calculations for $L_0$ are similar to the ones we have already done and we find
\begin{equation}
L_{0}^{\text{ext}} \tau_{\text{ext}} = \left(\alpha\, z^{p+1}+\frac{z}{p}\frac{\partial}{\partial z} + \frac{1}{2p} \right)\tau_{\text{ext}}
\,.
\end{equation}
Note that the constant $1/(2p)$ term survives because the shifted Virasoro generator $L_0(\widetilde{t}_k)$ that annihilates the closed string partition function $\tau(\Lambda_z)$ does not include this constant term.
The analysis for the operator $L_{-1}^{\text{ext}}$ is more involved due to the terms quadratic in the times. We use again the fact that the operator $L_{-1}^{\text{ext}}(\widetilde{t}_k)$, with shifted times, annihilates the closed string partition function and find the following relation:
\begin{align}
L_{-1}^{\text{ext}} \tau_{\text{ext}}&= \left(\alpha\, z+\sum_{k=p+1}^{\infty}\frac{k}{p}t_kz^{k-p} \right) \tau_{\text{ext}} + e^{\xi}\, \sum_{k=p+1}^{\infty} \frac{1}{p} \frac{1}{z^k} \frac{\partial \tau}{\partial t_{k-p}}\cr
&~+ \left( t_p + \frac{1}{2p} \sum_{k=1}^{p-1} k(p-k)\left( \frac{t_{p-k}}{k z^k} + \frac{t_k}{(p-k) z^{p-k}} - \frac{1}{k(p-k)z^p} \right) \right) \tau_{\text{ext}}\,.\cr
\end{align}
We combine terms linear in the times $t_k$ and use the equations
\begin{equation}
\sum_{k=1}^{\infty} \frac{k}{p}\, t_k\, z^{k-1} e^{\xi} = \frac{1}{p} \frac{\partial e^{\xi}} {\partial z} \qquad \mbox{and} \qquad
\sum_{k=1}^{\infty} z^{-k-1} \frac{\partial \tau (\widetilde{t}_k)}{\partial t_k} = \frac{\partial \tau(\widetilde{t}_k)}{\partial z}\, ,
\end{equation}
to finally obtain
\begin{align}
\label{Lminusoneontauext}
L_{-1}^{\text{ext}}\, \tau_{\text{ext}}
&= \alpha\left(z + \frac{1}{\alpha\, p\, z^{p-1}} \frac{\partial}{\partial z} -\frac{p-1}{2\alpha\, p\, z^p } \right) \tau_{\text{ext}}\,.
\end{align}
We note that the $t_p$ term in the extended Virasoro generator
\eqref{extendedClosedVirasoro} is crucial in order to obtain the sum over all times that leads to the $\partial_z$-derivative acting on the $e^{\xi}$ factor. In short, the form
of the Virasoro generators \eqref{Lclz} is valid for all $n$.
This completes our analysis of the extended closed Virasoro generators and how they are represented as differential operators in $z$ on the wave potential. We conclude that the closed string Virasoro generator $L_n^{\text{cl}}$ in equation \eqref{ClosedVirasoro} indeed annihilates the wave potential $\tau_{\text{ext}}$.
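As an independent consistency check on the result \eqref{Lclz}, one can verify symbolically that the differential operators on its right hand side close into half of a Witt algebra, with $[L_m,L_n]=(n-m)L_{m+n}$ in this sign convention (consistent with the remark that the operator is minus $L_n^{\text{ext}}(z)$). A short sketch, for an illustrative concrete value of $p$:

```python
import sympy as sp

z, alpha = sp.symbols('z alpha', positive=True)
p = 3  # illustrative concrete value of p for the check
f = sp.Function('f')(z)

def L(n, g):
    # the differential operator of the main text acting on a test function g(z):
    # alpha z^{np+p+1} + (1/p) z^{np+1} d/dz + (np+1)/(2p) z^{np}
    return (alpha*z**(n*p + p + 1)*g
            + sp.Rational(1, p)*z**(n*p + 1)*sp.diff(g, z)
            + sp.Rational(n*p + 1, 2*p)*z**(n*p)*g)

# check [L_m, L_n] = (n - m) L_{m+n} on a generic function f(z)
for m, n in [(1, 2), (-1, 1), (0, 2), (-1, 3)]:
    comm = L(m, L(n, f)) - L(n, L(m, f)) - (n - m)*L(m + n, f)
    assert sp.simplify(sp.expand(comm)) == 0
```

The check passes for any concrete value of $p$, mirroring the statement that the form \eqref{Lclz} holds for all $n$.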
\subsection{From Closed to Open Virasoro}
\label{ClosedOpenVirasoro}
Our next step in proving the equalities \eqref{openclosedLL} is to Fourier transform the $z$-differential operator using the open/closed duality equation \eqref{MainResult3} to find the open string realization of the Virasoro algebra. To make things more transparent, we first rewrite the extended Virasoro algebra in \eqref{Lclz} in the following manner:
\begin{align}
\label{Lclz2}
L_{n}^{\text{ext}} \tau_{\text{ext}}
&= \left[\alpha\, z^{p(n+1)}\left(z + \frac{1}{\alpha\, p\, z^{p-1}} \frac{\partial}{\partial z} -\frac{p-1}{2\alpha\, p\, z^p } \right) + \frac{n+1}{2}\, z^{pn}\right] \tau_{\text{ext}} \,.
\end{align}
We now make use of the properties of the Fourier transform proven in Appendix \ref{FourierTransform} (see equation \eqref{GFTprop12}), which effectively show the equivalence between differential operators in $z$ acting on the closed string side and differential operators in $s$ acting on the open string side. We find the Virasoro generators:
\begin{align}
L_n^{\text{ext}}(s) &= (-\alpha)^{-n} \left( \frac{\partial^{n+1}}{\partial s^{n+1}} \cdot s - \frac{1}{2}(n+1) \frac{\partial^n}{\partial s^n} \right)\cr
& = (- \alpha)^{-n} \left( s\frac{\partial^{n+1}}{\partial s^{n+1}} + \frac{1}{2}(n+1) \frac{\partial^n}{\partial s^n} \right) \, . \label{sVirasoro}
\end{align}
Thus, we have shown that the operators $L_n^{\text{op}}$ defined in equation \eqref{OpenVirasoro} as the sum of the extended Virasoro generators and the $s$ differential operators annihilate the open/closed partition function $\tau^{\text{op}+\text{cl}}$.
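The reordering used between the two lines of \eqref{sVirasoro} is the elementary operator identity $\partial_s^{n+1}\cdot s = s\,\partial_s^{n+1}+(n+1)\,\partial_s^{n}$, which a quick symbolic sketch confirms:

```python
import sympy as sp

s = sp.symbols('s')
f = sp.Function('f')(s)

# d^{n+1}/ds^{n+1} (s f) = s f^{(n+1)} + (n+1) f^{(n)}
for n in range(5):
    lhs = sp.diff(s*f, s, n + 1)
    rhs = s*sp.diff(f, s, n + 1) + (n + 1)*sp.diff(f, s, n)
    assert sp.simplify(lhs - rhs) == 0
```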
Let us briefly remark on the string coupling dependence of the Virasoro generators.
The terms in equation \eqref{extendedClosedVirasoro} become more transparent when we rescale the times by a factor of $\alpha=-1/g_s$. The resulting string coupling dependence is then $g_s^{-2}$ for the term quadratic in times in $L_{-1}^{\text{ext}}$, arising from the sphere,
and there is an extra term arising from the disk, proportional to $g_s^{-1}$.
The term with the two derivatives in $L_n^{\text{ext}}$ is proportional to $g_s^2$, and the last, single-derivative term is linear in the string coupling $g_s$. The terms in equation \eqref{sVirasoro} are proportional to $g_s^n$. Each open string insertion comes with an extra factor of the string coupling $g_s$.
\subsubsection*{An Alternative Point of View}
Let us add a remark on how the open/closed Virasoro generators are related to
those found in the literature for $p=2$. See e.g. \cite{Buryak:2015eza,Dijkgraaf:2018vnm}. We show in Appendix \ref{AlternativeVirasoro} that those Virasoro generators
(which we also compute in the Appendix for general $p$) are related to the ones we have by conjugation.
Indeed, we can conjugate the Virasoro generators to eliminate
the $t_{np}$ dependence of the wave potential $\tau_{\text{ext}}$ on which they act.
The open/closed Virasoro generators then match the generators
of references
\cite{Buryak:2015eza} and \cite{Dijkgraaf:2018vnm} that study the case $p=2$.
It should be remarked, however (see Appendix \ref{AlternativeVirasoro}), that this point of view introduces either an intricate operator insertion $\det ( \partial_s/\alpha + \Lambda^p)^{-\frac{1}{p}}$ inside the matrix expression for the open string partition function, or the ratio of determinants insertion discussed in Appendix \ref{PureAlternative} for $p=2$ \cite{Buryak:2015eza}. Since our matrix model perspective in the bulk of the paper naturally matches both the algebro-geometric considerations and the integrability literature, we have kept to this point of view in Section \ref{matrixintegral} as well as in the calculation of the open/closed Virasoro generators presented in Subsections \ref{extendedVirasoro} and \ref{ClosedOpenVirasoro}. Finally, we remind the reader that these Virasoro algebras have an interpretation in terms of a primary operator insertion that dates back to \cite{Dalley:1992br,Johnson:1993vk}.
\subsection{The Integrable Hierarchy and the Relation to Geometry}
\label{Proof}
In this subsection, we tie up several loose ends. We first make good use of the operators we found in our analysis to prove the statement made in subsection
\ref{MainTheorem} that our open/closed partition function matches the algebro-geometric generating function. Secondly, we note that a subset of our Virasoro constraints correspond to geometric constraints derived in \cite{Buryak:2018ypm}, and that our Virasoro algebra extends those constraints to an infinite family.
\subsubsection*{The Connection to Algebraic Geometry}
The first part of the proof that our matrix partition function matches the geometric one is based on a classic result in integrable systems, which says that the Baker-Akhiezer wave function $\psi(t_{k},z)$ is given by:
\begin{equation}
\psi(t_{k},z) = \frac{\tau(\widetilde{t}_{k})}{\tau(t_k)} e^{\xi(t_k,z)}
= \frac{\tau_{\text{ext}}}{\tau} \, , \label{BA}
\end{equation}
where $\widetilde{t}_k$ are the shifted times \eqref{shiftedtimes}.
The Baker-Akhiezer wave function $\psi(t_{k},z)$ satisfies the differential equations \eqref{WaveEquations} as well as the eigenvalue equation
\begin{equation}
L\, \psi(t_{k},z) = z^p\, \psi(t_{k},z) \, ,
\label{BAeigenvalue}
\end{equation}
where $L$ is the Lax operator of the integrable hierarchy. Moreover, from the linear relation between the open/closed partition function $\tau^{\text{op}+\text{cl}}$ and the wave potential $\tau_{\text{ext}}$ in \eqref{tauopcltauext}, it is clear that the ratio
$\tau^{\text{op}+\text{cl}}/\tau$ is related to the Baker-Akhiezer function $\psi(t_k,z)$ via the Fourier transform. Therefore the ratio
$\tau^{\text{op}+\text{cl}}/\tau$ also satisfies the differential equations \eqref{WaveEquations}.
It remains to prove that the wave function $\Psi(t_k) =\tau^{\text{op}+\text{cl}}/\tau$ also satisfies the initial conditions \eqref{InitialCondition}. Our proof closely follows that of \cite{Bertola:2014yka}. Previously, we proved that the wave potential satisfies the equation (see \eqref{Lminusoneontauext}):
\begin{equation}
\label{LminusoneSz}
L_{-1}^{\text{ext}}\cdot \tau_{\text{ext}} = \alpha\, S_z\cdot \tau_{\text{ext}}\,,
\end{equation}
where we define $S_z$ to be the differential operator:
\begin{equation}
S_z = \left(z + \frac{1}{\alpha\, p\, z^{p-1}} \frac{\partial}{\partial z} -\frac{p-1}{2\alpha\, p\, z^p } \right) \,.
\end{equation}
We now set all times to zero except the time $t_1$ and study the reduced Baker-Akhiezer function $\left.\psi(t_k, z) \right|_{t_{\ge 2}=0}$. When all times except time $t_1$ are zero, the closed string partition function $\tau(t)$ equals one. Also, recall that the operator $L_{-1}^{\text{ext}}$ coincides with the operator $\alpha\, \partial_{t_1}$ when all times but the first are zero.\footnote{These two statements have to be modified for $p=2$. For this value of $p$, the partition function at zero higher times equals $\tau(t)=\exp \frac{t_1^3}{6}$, and the operator $L_{-1}^{\text{ext}}$ has an extra term $t_1^2/2$. These two modifications cancel each other in the reasoning.} Combining this fact with equation \eqref{LminusoneSz} and the identification \eqref{BA}, we conclude that \cite{Bertola:2014yka}
\begin{equation}
\left(\frac{\partial }{\partial t_1}\right)^k \left.\psi(t_k, z) \right|_{t_{\ge 2}=0} = S_z^k \, \left.\psi(t_k, z) \right|_{t_{\ge 2}=0}\,.
\end{equation}
If we therefore define the function $A(z)$ to be the value that the wave function takes at $t_1=0$,
\begin{equation}
\left.\psi(t_k, z) \right|_{t_{\ge 1}=0} = A(z)\,,
\end{equation}
then, by Taylor expansion, we have
\begin{align}
\label{wavefunctiondefn}
\left. \psi(t_k, z)\right|_{t_{\ge 2}=0} =\sum_{n=0}^\infty \frac{1}{n!}\, S_z^n\, A(z)\, t_1^n \,.
\end{align}
Moreover, one can check that the function $A(z)$ satisfies the differential constraint \cite{Bertola:2014yka}:
\begin{equation}
\label{Azdefn}
S_z^p\cdot A(z) = z^p\, A(z)\,.
\end{equation}
This follows firstly from the property \eqref{BAeigenvalue} that the Baker-Akhiezer function is an eigenvector of the Lax operator, secondly from the initial conditions, which require the Lax operator $L$ to equal $(\partial _{t_1})^p$ when all times are zero, and thirdly from the $L_{-1}^{\text{ext}}$ Virasoro identity \eqref{LminusoneSz}.
We now claim that the function $A(z)$ that satisfies the differential constraint
\eqref{Azdefn} is given by the following contour integral:
\begin{equation}
\label{Azsolution}
A(z) = \sqrt{\frac{\alpha\, p}{2 \pi}}\, z^{\frac{p-1}{2}}\, e^{-\alpha \frac{p}{p+1} z^{p+1}}\, \int ds e^{-\alpha \frac{s^{p+1}}{p+1} + \alpha\, s\, z^p}
\, .
\end{equation}
We check by explicitly acting with the operator $S_z$:
\begin{equation}
S_z^p\cdot A(z) = \sqrt{\frac{\alpha\, p}{2 \pi}}\, z^{\frac{p-1}{2}}\, e^{-\alpha \frac{p}{p+1} z^{p+1}}\, \int ds \, s^p\, e^{-\alpha \frac{s^{p+1}}{p+1} + \alpha\, s\, z^p}\,.
\end{equation}
Within the $s$-integral, one adds and subtracts $z^p$. Then, the combination $(s^p - z^p)$ can be written as a total derivative, which integrates to zero. We have thus proven \eqref{Azdefn}. We have chosen the normalization of the function
\eqref{Azsolution} such that it is the inverse Fourier transform of the constant function equal to $1$.
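The mechanism behind this computation can also be verified symbolically: acting with $S_z$ on the integrand of \eqref{Azsolution} is the same as multiplying it by $s$. A sketch, for an illustrative concrete value of $p$:

```python
import sympy as sp

z, s, alpha = sp.symbols('z s alpha', positive=True)
p = 3  # illustrative concrete value of p for the check

# integrand of A(z), dropping the s-independent normalization
g = z**sp.Rational(p - 1, 2) * sp.exp(-alpha*p*z**(p + 1)/(p + 1) + alpha*s*z**p)

def S(h):
    # the differential operator S_z defined in the text
    return z*h + sp.diff(h, z)/(alpha*p*z**(p - 1)) - (p - 1)*h/(2*alpha*p*z**p)

# under the integral sign, S_z acts as multiplication by s
assert sp.simplify(S(g) - s*g) == 0
assert sp.simplify(S(S(g)) - s**2*g) == 0
```

Iterating $p$ times brings down the factor $s^p$ under the integral, which is the statement used above.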
Recall that our wave function $\Psi(t_k) = \tau^{\text{op}+\text{cl}}/\tau$ is the formal Fourier transform of the Baker-Akhiezer function \eqref{wavefunctiondefn}:
\begin{equation}
\left. \Psi(t_k)\right|_{t_{\ge 2}=0} =\sqrt{\frac{\alpha\, p}{2 \pi}}\int dz\, z^{\frac{p-1}{2}}\, e^{\alpha\, p \frac{z^{p+1}}{p+1} - \alpha\, z^p\, t_p}\, \left.\psi(t_k, z)\right|_{t_{\ge 2}=0}\,. \label{FTBA}
\end{equation}
The key point is now that every term in the sum \eqref{wavefunctiondefn} for $n>0$ vanishes inside the integral \eqref{FTBA} \cite{Bertola:2014yka}. This can be checked by integration by parts.
Therefore the only term that contributes to $\left. \Psi(t_k)\right|_{t_{\ge 2}=0}$ is the $n=0$ term. By explicitly substituting the integral expression \eqref{Azsolution} for $A(z)$, we obtain
\begin{equation}
\left. \Psi(t_k)\right|_{t_{\ge 2}=0} = \int ds\, e^{-\alpha \frac{s^{p+1}}{p+1} } \frac{\alpha p}{2\pi}\int dz\, z^{p-1} e^{\alpha\, z^p (s-t_p)}
= \int ds\, e^{-\alpha \frac{s^{p+1}}{p+1} }\delta (s-t_p)
=1\,,
\end{equation}
where the last equality follows from the fact that all higher times, amongst which is $t_p$, are set to zero. Thus we have verified the initial condition, and thereby proven the identification \eqref{GeometricMainResult} between the matrix model and the geometric open/closed generating functions.
\subsubsection*{Further Constraints}
Finally, let us remark that our Virasoro constraints $L_{-1}^{\text{op}}$ and $L_0^{\text{op}}$ correspond to the string equation and dilaton equation of \cite{Buryak:2018ypm} (after using charge conservation, or the dimension constraint).
Our Virasoro generators $L_n^{\text{op}}$ extend these geometrically proven constraints to an infinite set.
\section{Conclusions}
\label{summary}
\label{conclusions}
Topological gravity coupled to topological matter is well-studied \cite{Witten:1989ig}. Introducing D-branes and applying concepts from holography has allowed for a better understanding of these simple string theories, and has provided hands-on illustrations of profound concepts \cite{McGreevy:2003kb,Gaiotto:2003yb}. Certainly, it has been very useful to underpin these achievements with the rigorous mathematical definition of topological gravity theories on Riemann surfaces with boundaries \cite{Pandharipande:2014qya}. This has stimulated progress in the identification of open/closed string correlators on Riemann surfaces with boundaries, and their relation to integrable systems \cite{Buryak:2018ypm}. This has in turn allowed for a more rigorous understanding \cite{BCT1} of the notion that boundaries can be replaced by closed string insertions \cite{Polchinski:1995mt}.
Amongst the many insights that string theory has provided into topological gravity is the fact that the integration variables in the Kontsevich matrix model correspond to open strings. Indeed, in \cite{Maldacena:2004sn}, it was argued that the degrees of freedom in the Kontsevich matrix integral are mesons made of open strings stretching between extended and localized branes. In string theory, it is often useful to split a given set of D-branes into a heavy stack and a single ``probe brane''. In this work we adopted such a probe brane analysis in treating the Kontsevich matrix model. We integrated out the open strings stretching between a large $N$ stack and one extra D-brane. In doing so we have naturally produced a matrix model realization of open/closed duality. The process of integrating out generates a determinant operator insertion corresponding to the addition of a D-brane. From the integrable systems perspective this was argued in the case of pure gravity in \cite{Alexandrov:2014gfa}.
It is important to mention that the insertion is distinct from the more familiar determinant operator in the Kazakov matrix model before taking the double scaling limit \cite{Maldacena:2004sn, Hashimoto:2005bf}.
The hands-on treatment of the matrix model formulae allowed for a very precise treatment of the duality, the constraints the generating functions satisfy, and their relation to integrable systems. It is this relation to integrable systems that allowed us to precisely match our treatment to the rigorous algebro-geometric treatment of the $p$-spin intersection numbers on moduli spaces of Riemann surfaces with boundaries.
We note that both the extension of the closed string partition function that depends on all times of the KP integrable hierarchy \cite{BCT1} as well as the wave function that appears in the integrable system \cite{Bertola:2014yka} pop out of our matrix model analysis spontaneously.
One direction for future research amongst many is to carefully match the geometric analysis with a first-principles string field theory or conformal field theory derivation of the amplitudes on Riemann surfaces with boundaries.
\section*{Acknowledgments}
We thank our colleagues for creating a stimulating research environment, and Alexandr Buryak for patient explanations of the results in \cite{Buryak:2015eza}. SA would like to thank the \'Ecole Normale Sup\'erieure, Paris and the Universit\`a di Torino, Italy for their hospitality during the completion of this work.
\section{Introduction}\label{sec:Intro}
Iterative methods based on projections of the solution onto a Krylov subspace are often used to solve large-scale linear ill-posed problems \cite{RegParamItr,Chung2008,Novati2013,Bazan2010,Hochstenbach2010,projRegGenForm,Gazzola2015}, \cite[Chap. 6]{RankDeff}, \cite[Chap. 6]{HansenInsights}. Such large problems arise in a variety of applications including image deblurring \cite{Nagy2004,Yuan2007,SpecFiltBook,projRegGenForm,Chung2008} and machine learning \cite{Ong2004,Martens2010,Freitas2005,Ide2007}. Krylov methods iteratively project the solution onto a low-dimensional subspace and solve the resulting small-scale problem using standard procedures such as the QR decomposition \cite{LSQR}. These methods are therefore attractive for the solution of large-scale problems that cannot be solved directly as well as for problems perturbed by noise, since the projection onto a Krylov subspace possesses a regularizing effect \cite{Hnetynkova2009}. Accurate solution using iterative procedures requires stopping the process close to the optimal stopping iteration. In this paper we develop a general stopping criterion for Krylov subspace regularization, which we particularly apply to the Golub-Kahan bidiagonalization (GKB), also frequently referred to as Lanczos bidiagonalization \cite{Golub1965}.
Consider the problem
\begin{equation}\label{eq:Ax=b} Ax=b,\end{equation}
where the matrix \(A\in\mathbb{R}^{m\times n}\) is large and ill-conditioned and the data vector \(b=b_{true} + n\) constitutes the true data \(b_{true}\) perturbed by an additive white noise vector \(n\). We are interested in approximating the least-squares solution of the problem
\begin{equation}\label{eq:LSsolution}
\min_x ||b_{true}- Ax||^2
\end{equation}
without knowledge of \(b_{true}\). This can be done by minimizing the projected least-squares (PLS) problem
\begin{equation}\label{eq:GKBprob}
\min_x ||b-Ax||^2,\quad \text{such that}\quad x\in\mathcal{K}_k(A^TA,A^Tb),
\end{equation}
where
\begin{equation}\label{eq:KrylovSubspace}
\mathcal{K}_k(A^TA,A^Tb)=\text{span}\{A^Tb,\ldots,\left(A^TA\right)^{k-1}A^Tb\},
\end{equation}
is the Krylov subspace generated by \(k\) steps of the GKB process.
The regularizing effect of the projection onto \(\mathcal{K}_k(A^TA,A^Tb)\) then dampens the noise in \(b\), allowing us to approximate \(x_{true}\) \cite{RegParamItr,Hnetynkova2009}. The reason for this regularization effect is that in the initial iterations the basis vectors spanning the Krylov subspace are smooth and so is the projected solution. However, for large iteration numbers \(k\) the accuracy of the projected solution decreases as the basis vectors become corrupted by noise. By applying GKB to \ref{eq:Ax=b}, we thus obtain a sequence of iterates \(\{x^{(k)}\}\) whose error \(||x^{(k)}-x_{true}||\) initially decreases with increasing iterations and then goes up sharply. This behavior of GKB is termed semi-convergence \cite[Sect. 6.3]{RankDeff}. It is therefore crucial to develop a reliable stopping rule by which to terminate the iterative solution of \ref{eq:GKBprob} before noise contaminates the solution. For this purpose, a number of stopping criteria have been proposed including the L-curve \cite{RegParamItr,LCurve}, the generalized cross validation (GCV) \cite{RegParamItr}, \cite[Sect. 7.4]{RankDeff} and the discrepancy principle \cite{RegParamItr}. However, both the GCV and the L-curve methods are inaccurate for determination of the stopping iteration in a significant percentage of cases, as has been demonstrated in \cite[Sect. 7.2]{Hansen2006} for the former and below in \ref{sec:NumEx} for the latter. The discrepancy principle, on the other hand, requires \emph{a priori} knowledge of the noise level and is highly sensitive to it \cite[Sect. 4.1.2]{Bauer2011}. Recently, a new stopping criterion called the normalized cumulative periodogram (NCP), which is based on a whiteness measure of the residual vector \(r^{(k)}=b-Ax^{(k)}\), was proposed in \cite{Hansen2006}. While this method outperforms the above-mentioned alternatives, we nevertheless show that its results are inconsistent for some of our numerical problems.
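For concreteness, the GKB recurrence that builds an orthonormal basis of \ref{eq:KrylovSubspace} can be sketched as follows (a minimal dense-matrix implementation without reorthogonalization; function and variable names are illustrative):

```python
import numpy as np

def gkb(A, b, k):
    """k steps of Golub-Kahan bidiagonalization started from b.

    Returns U (m x (k+1)) and V (n x k) with orthonormal columns and a
    lower bidiagonal B ((k+1) x k) such that A @ V = U @ B; the columns
    of V span the Krylov subspace K_k(A^T A, A^T b).
    """
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    U[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        r = A.T @ U[:, j] - (B[j, j - 1] * V[:, j - 1] if j > 0 else 0.0)
        B[j, j] = np.linalg.norm(r)          # alpha_j
        V[:, j] = r / B[j, j]
        q = A @ V[:, j] - B[j, j] * U[:, j]
        B[j + 1, j] = np.linalg.norm(q)      # beta_{j+1}
        U[:, j + 1] = q / B[j + 1, j]
    return U, B, V
```

The projected problem \ref{eq:GKBprob} then reduces to the small least-squares problem \(\min_y ||\beta_1 e_1 - B y||\) with \(x^{(k)} = V y\), solvable by a QR decomposition of \(B\).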
Estimation of the stopping iteration using the GCV or the L-curve method requires projection of the solution onto the subspace \(\mathcal{K}_k(A^TA,A^Tb)\), which depends on the noisy data vector \(b\). In contrast to the original problem \ref{eq:Ax=b}, where the noise is entirely contained in the data vector \(b\) and the coefficient matrix \(A\) is noiseless, the coefficient matrix of the projected problem is contaminated by noise. It is thus nontrivial to generalize standard parameter-choice methods to the problem \ref{eq:GKBprob}, as they do not account for noise in the coefficient matrix and their usage may result in suboptimal solutions. To overcome this problem, we estimate the optimal stopping iteration for GKB using the Data Filtering (DF) method, originally proposed and briefly discussed in \cite{Levin2016}. In the DF method the stopping iteration is selected as the one for which the distance \(||\widehat b-Ax^{(k)}||\) between the filtered data \(\widehat b\) and the data reconstructed from the iterated solution \(Ax^{(k)}\) is either minimal or levels off. In \cite{Levin2016} the filtered data \(\widehat b\approx b-n\) is obtained by separating the noise from the data using the so-called Picard parameter \(k_0\), above which the coordinates of the data in the basis of the left singular vectors of \(A\) are dominated by noise. The approximation of the unperturbed data \(\widehat b\) is then obtained by setting the coordinates of \(b\) in that basis to zero for \(k>k_0\).
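Given the filtered data \(\widehat b\) and a sequence of iterates, the DF selection rule itself amounts to a one-line minimization; a sketch (function and variable names are illustrative):

```python
import numpy as np

def df_stopping_index(A, b_hat, iterates):
    """Return the index k minimizing ||b_hat - A x^(k)|| over the iterates.

    b_hat is the noise-filtered data vector; `iterates` is the list of
    solutions x^(k) produced by the Krylov method.
    """
    residuals = [np.linalg.norm(b_hat - A @ x) for x in iterates]
    return int(np.argmin(residuals))
```

In practice one would also detect the leveling-off of the residual curve rather than insisting on its strict minimum.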
Computing the SVD of \(A\) is not feasible for large-scale problems, and the specific basis in which the authors of \cite{Levin2016} achieve a separation between signal and noise is therefore unavailable. To overcome this problem we propose to replace the SVD basis used in \cite{Levin2016} with the basis of the discrete Fourier transform (DFT) and show that a similar separation of signal from noise can be achieved in one and in two dimensions. It is well known, however, that the DFT assumes the signal to be periodic, and applying it to a non-periodic signal results in artifacts in the form of fictitious high-frequency components that cannot be distinguished from the noise in the DFT basis. We prevent these high-frequency components from appearing by using the Periodic Plus Smooth (PPS) decomposition \cite{Moisan2011}, which allows us to write the signal as a sum, \(b=p+s\), of a periodic component \(p\) and a smooth component \(s\). The periodic component is compatible with the periodicity assumption of the DFT and does not produce high-frequency artifacts, while the smooth component does not need to be filtered. We therefore have to filter only the Fourier coefficients of the periodic component.
For two-dimensional problems, the Fourier coefficients of the data must be vectorized prior to estimation of the Picard parameter. The coefficients are usually vectorized in order of increasing spatial frequency \cite{Hansen2006}. Here we propose an alternative vectorization, ordered by increasing value of the product of the spatial frequencies in each dimension, which enables a more accurate and consistent estimation of the Picard parameter. We show that this ordering is equivalent to sorting a Kronecker product of two vectors of spatial frequencies and stems from the corresponding Kronecker product structure of the two-dimensional DFT matrix. This approach is also analogous to reordering the SVD of a separable blur, as discussed in, e.g., \cite[Sect. 4.4.2]{SpecFiltBook}. We demonstrate that a filter based on the proposed ordering performs similarly to or better than its spatial-frequency-based counterpart in all our numerical examples, allowing termination of the iterative process closer to the optimal iteration. The new filtering procedure is simple and effective, and can be used independently of the iterative inversion algorithm.
Hybrid methods, which replace the semi-convergent PLS problem \ref{eq:GKBprob} with a convergent alternative, have received significant attention in recent years \cite{RegParamItr,Chung2008,Novati2013,Hochstenbach2010,Bazan2010,Gazzola2015,Chung2015}. These methods combine Tikhonov regularization with a projection onto \(\mathcal{K}_k(A^TA,A^Tb)\), replacing problem \ref{eq:GKBprob} with the projected Tikhonov problem
\begin{equation}\label{eq:TikhMinProb} \min_x ||b-Ax||^2 + \lambda^2||Lx||^2\quad \text{such that}\quad x\in\mathcal{K}_k(A^TA,A^Tb),\end{equation}
where \(L\) is a regularization matrix and \(\lambda\) is a regularization parameter that controls the smoothness of the solution. In this paper, we follow the authors of \cite{RegParamItr,Chung2008,Novati2013,Hochstenbach2010,Bazan2010,Gazzola2015} by considering only the case \(L=I\). We note, however, that a generalization to the case \(L\neq I\) was developed and discussed in \cite{Hochstenbach2010}. The advantage of hybrid methods is that, given an accurate choice of \(\lambda\) at each iteration, the error in the solution of \ref{eq:TikhMinProb} stabilizes at large iteration numbers, in contrast to the least-squares problem \ref{eq:GKBprob}, for which the solution error has a minimum. However, appropriately choosing the regularization parameter \(\lambda\) at each iteration is a difficult task, since the coefficient matrix in the projected problem becomes contaminated by noise, as in the PLS problem. Hence, standard parameter-choice methods such as the GCV cannot be na\"{\i}vely applied to problem \ref{eq:TikhMinProb}. In practice, the GCV indeed fails to stabilize the iterations in a large number of cases, as reported in \cite{Chung2008} and \cite[Sect. 5.1.1]{Bazan2010}. To overcome this problem, the authors of \cite{Chung2008} proposed the weighted GCV (W-GCV) method, which incorporates an additional free weight parameter, chosen adaptively at each iteration and shown to significantly improve the performance of the method. We demonstrate, however, using several numerical examples, that the results of the W-GCV method are still suboptimal.
This paper is organized as follows. In Sect. \ref{sec:DirectReg} we summarize results from the Tikhonov regularization of \ref{eq:Ax=b} that we extend to the PLS problem \ref{eq:GKBprob}. In Sect. \ref{sec:DFTfilter}, we present our filtering technique, based on the Picard parameter in the DFT basis for one- and two-dimensional problems. In Sect. \ref{sec:GKBinvert}, we present the GKB algorithm and formulate our stopping criterion. In this section we also discuss hybrid methods that combine projection with Tikhonov regularization. Finally, in Sect. \ref{sec:NumEx} we give numerical examples which demonstrate the performance of the proposed stopping criterion, and compare it to the L-curve, NCP and W-GCV methods.
\section{Tikhonov regularization}\label{sec:DirectReg}
We begin with a description of our parameter-choice method, detailed in \cite{Levin2016}, for standard direct Tikhonov regularization of \ref{eq:Ax=b} using the Picard parameter, which represents the starting point of our derivation. The Tikhonov regularization method solves the ill-posed problem \ref{eq:Ax=b} by replacing it with the related, well-posed counterpart
\begin{equation}\label{eq:TikhProb} \min_x ||b-Ax||^2 + \lambda^2||x||^2 \implies \left(A^TA+\lambda^2I\right)x=A^Tb.\end{equation}
The solution of \ref{eq:TikhProb} can be written in a convenient form, using the singular value decomposition (SVD) of \(A\), given by
\begin{equation}\label{eq:SVDofA} A = U\Sigma V^T,\end{equation}
where \(U\in\mathbb{R}^{m\times m}\) and \(V\in\mathbb{R}^{n\times n}\) are orthogonal matrices. For simplicity, let the \(j\)th columns of \(U\) and \(V\) be denoted by \(u_j\) and \(v_j\), respectively, and the \(j\)th singular value of \(A\) by \(\sigma_j\). Furthermore, let \(\beta_j= u_j^Tb\) be the \(j\)th Fourier coefficient of \(b\) with respect to \(\{u_j\}_{j=1}^m\) and let \(\nu_j = u_j^Tn\) be the Fourier coefficients of the noise. Then, the solution of \ref{eq:TikhProb} can be written as
\begin{equation}\label{eq:TikhSoln}
x(\lambda) = \sum_{j=1}^{m}\frac{\sigma_j}{\sigma_j^2+\lambda^2}\beta_j v_j.
\end{equation}
It can be shown that in order for the solution \ref{eq:TikhSoln} to be a good approximation to the true solution \(x_{true}\) for some \(\lambda\), the problem must satisfy the discrete Picard condition (DPC) \cite{DPC}. The DPC requires that the sequence of Fourier coefficients of the true data, \(\{u_j^Tb_{true}\}=\{\beta_j-\nu_j\}\), decays faster than the singular values \(\{\sigma_j\}\), which, by the ill-conditioning of \(A\), decay relatively quickly. Therefore, the DPC implies that \(\beta_j-\nu_j \approx 0\), or equivalently \(\beta_j\approx \nu_j\), for all \(j\geq k_0\), where the index \(k_0\) is termed the Picard parameter. In other words, the coefficients of \(b\) with indices \(j\geq k_0\) are dominated by noise, while coefficients with smaller indices are dominated by the true data.
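For small problems, the solution \ref{eq:TikhSoln} can be evaluated directly from the SVD. The following numpy sketch (the function name and the synthetic test system are ours, used only for illustration) computes the filter factors \(\sigma_j/(\sigma_j^2+\lambda^2)\) explicitly:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov solution x(lambda) via the SVD of A (small-scale sketch)."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                       # coefficients beta_j = u_j^T b
    phi = sigma / (sigma**2 + lam**2)    # filter factors sigma_j / (sigma_j^2 + lambda^2)
    return Vt.T @ (phi * beta)

# Sanity check on a synthetic, well-posed system: lambda -> 0 recovers x_true.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
x_true = rng.standard_normal(5)
x = tikhonov_svd(A, A @ x_true, 1e-12)
```

For an ill-posed \(A\) and noisy \(b\), larger values of \(\lambda\) suppress the terms with small \(\sigma_j\), which is exactly where the DPC places the noise-dominated coefficients.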
To estimate \(\lambda\) we can rewrite \ref{eq:LSsolution} as
\begin{equation}\label{eq:LSsolution2}
\min_\lambda ||b_{true}- Ax(\lambda)||^2,
\end{equation}
but since \(b_{true}\) is not known, we suggest replacing it with the filtered data \(\widehat b\approx b_{true}\), as in the DF method \cite{Levin2016}. The DF method sets the regularization parameter \(\lambda\) for \ref{eq:TikhProb} to be the minimizer of the distance function
\begin{equation}
\label{eq:DistNorm}
\min_\lambda ||\widehat{b}-Ax(\lambda)||^2,
\end{equation}
between the data \(Ax(\lambda)\) reconstructed from the solution \ref{eq:TikhProb}, and the filtered data \(\widehat b\). To obtain the filtered data \(\widehat b\) we remove the noise-dominated coefficients from the expansion of \(b\) in basis \(\{u_j\}\) so that
\begin{equation}\label{eq:PicFiltSVD}
\widehat b = \sum_{j=1}^{k_0-1}\beta_ju_j.
\end{equation}
The Picard parameter \(k_0\) can be found by detecting the levelling-off of the sequence
\begin{equation}\label{eq:VxDefn}
V(k) = \frac{1}{m-k+1}\sum_{j=k}^m \beta_j^2,
\end{equation}
which is shown to decrease on average until it levels off at \(V(k_0)\simeq s^2\), where \(s^2\) is the variance of the white noise. The detection is done by setting \(k_0\) to be the smallest index satisfying
\begin{equation}\label{eq:PicIndCondSVD}
\frac{|V(k+h)-V(k)|}{V(k)} \leq \varepsilon,
\end{equation}
for some step size \(h\) and a bound on the relative change \(\varepsilon\). The above estimation of \(k_0\) is stated to be robust to changes in \(h\) and \(\varepsilon\), with the values \(\varepsilon\in[10^{-3},10^{-1}]\) and \(h\in[\lfloor\frac{m}{100}\rfloor,\lceil\frac{m}{10}\rceil]\) working consistently well \cite{Levin2016}.
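The estimate of \(k_0\) from \ref{eq:VxDefn} and \ref{eq:PicIndCondSVD} amounts to a few lines of code. The numpy sketch below illustrates it on synthetic coefficients (an exponentially decaying signal plus white noise; the function name, the test signal and the parameter values are ours):

```python
import numpy as np

def picard_parameter(beta, h, eps):
    """Smallest k (1-based) with |V(k+h) - V(k)| / V(k) <= eps,
    where V(k) is the mean of beta_j^2 over the tail j >= k."""
    m = len(beta)
    tail_sums = np.cumsum((beta**2)[::-1])[::-1]   # sum of beta_j^2 over j >= k
    V = tail_sums / np.arange(m, 0, -1)            # V(k) for k = 1..m
    for k in range(m - h):
        if abs(V[k + h] - V[k]) / V[k] <= eps:
            return k + 1
    return m

# Synthetic example: exponentially decaying true coefficients plus white noise
# of standard deviation s; past k0 the tail mean should approach s^2.
rng = np.random.default_rng(1)
m, s = 1000, 1e-2
beta = 10.0 ** (-np.arange(m) / 10.0) + rng.normal(0.0, s, m)
k0 = picard_parameter(beta, h=m // 50, eps=1e-2)
tail_var = (beta[k0 - 1:] ** 2).mean()
```

In this example the signal drops below the noise floor around index 20, and the detected \(k_0\) lands in that region, with the tail mean close to the noise variance \(s^2\).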
Unfortunately, for large-scale problems, computing the SVD of \(A\) given by \ref{eq:SVDofA} is in general infeasible, and the above separation of noise from signal in the SVD basis is therefore unobtainable. In the next section, we propose a similar filtering procedure for \(b\) which utilizes the DFT basis instead of the SVD basis. This basis satisfies an analog of the DPC and is effective for large-scale applications.
\section{The DFT filter}\label{sec:DFTfilter}
In this section we replace the SVD basis discussed in Sect. \ref{sec:DirectReg} with the DFT basis. Since computing the Fourier coefficients with respect to the DFT basis can be done efficiently, the proposed procedure remains computationally cheap even for large-scale problems.
We begin by noting that the true data \(b_{true}\) is generally smooth and therefore dominated by low frequencies. This is the case in image deblurring problems and in problems arising from the discretization of integral equations, where the coefficient matrix \(A\) has a smoothing effect and hence \(b_{true} = Ax_{true}\) is smooth even if \(x_{true}\) is not; see \cite{Hansen2006,Hansen2008} and \cite[Sect. 5.6]{SpecFiltBook}. Furthermore, the SVD basis \(\{u_j\}\) is usually similar to the DFT basis, as shown in \cite{Hansen2006}, where the authors demonstrate that vectors \(u_j\) corresponding to small indices \(j\) are well represented by just the first few Fourier modes. In contrast, vectors \(u_j\) with large indices \(j\) are shown to include significant contributions from high-frequency Fourier modes. These observations suggest that we can replace the SVD basis with the Fourier basis, so that the role of the decreasing singular values in the ordering of the basis vectors is played by the increasing Fourier frequencies. For our procedure to be valid we expect the DFT coefficients of \(b_{true}\) to satisfy an analog of the DPC and therefore to decay to zero as the frequency increases.
For an image \(B\) of size \(M\times N\) we use the two-dimensional DFT
\begin{equation}\label{eq:DFT2D}
\text{DFT2}[B] = \mathcal{F}_M^*B\overline{\mathcal{F}}_N,
\end{equation}
where
\begin{equation}\label{Eq:DFTmat} \left(\mathcal{F}_m\right)_{j,k} = \frac{1}{\sqrt{m}}e^{i2\pi(j-1)(k-1)/m},\end{equation}
is the unitary DFT matrix of size \(m\times m\), \(\overline{X}\) denotes complex conjugation, and \(i=\sqrt{-1}\). Note that \ref{eq:DFT2D} reduces to the one-dimensional DFT if \(N=1\). The data vector \(b\) in \ref{eq:Ax=b} is then obtained by vectorizing the matrix \(B\), stacking its columns one upon the other so that \(b=\text{vec}(B)\), where \(\text{vec}(\cdot)\) denotes the above vectorization scheme and \(m=MN\) is the resulting length of \(b\). However, the Fourier coefficients in \ref{eq:DFT2D} cannot be used directly for our purposes, because a na\"{\i}ve application of the DFT to a non-periodic signal causes artifacts in the frequency domain. Specifically, the DFT assumes that the data to be transformed are periodic, and applying it to non-periodic data leads to discontinuities at the boundaries. In the frequency domain, these discontinuities take the form of large high-frequency coefficients \cite{Moisan2011}. Thus, the Fourier coefficients of smooth but non-periodic true data do not satisfy the DPC as we require. To circumvent this difficulty, we propose to use the Periodic Plus Smooth (PPS) decomposition introduced in \cite{Moisan2011}, which decomposes an image into a sum
\begin{equation}\label{eq:PPS} B = P + S,\end{equation}
of a periodic component \(P\) very similar to the original one but that smoothly decays towards its boundaries, and a smooth component \(S\) that is nonzero mainly at the boundaries to compensate for the decaying \(P\).
To compute the PPS of \(B\), we define
\begin{equation}\label{eq:PerGapImg}
\begin{aligned}
&V_1(j,k) = \left\{\begin{array}{cc} B(M-j+1,k)-B(j,k), & \text{if } j=1 \text{ or } j=M,\\ 0, & \text{otherwise},\end{array}\right.\\
&V_2(j,k) = \left\{\begin{array}{cc} B(j,N-k+1)-B(j,k), & \text{if } k=1 \text{ or } k=N,\\ 0, & \text{otherwise},\end{array}\right.
\end{aligned}
\end{equation}
and \(V = V_1 + V_2\). Then, the two-dimensional DFT of the smooth component \(S\) is given by
\begin{equation}\label{eq:DFTS}
\text{DFT2}[S](j,k) = \left\{\begin{array}{cc} 0, & \text{if } j=k=1,\\ \frac{\text{DFT2}[V](j,k)}{2\cos\left(\frac{2\pi(j-1)}{M}\right)+2\cos\left(\frac{2\pi(k-1)}{N}\right)-4}, & \text{otherwise},\end{array}\right.
\end{equation}
which can be inverted to obtain \(S\) and \(P=B-S\). The subsequent filtering procedure uses only the periodic component \(P\), which contains all the noise as \(S\) is always smooth.
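The computation \ref{eq:PerGapImg}--\ref{eq:DFTS} is easily vectorized. The following numpy sketch (a minimal version; the function name and the test image are ours) builds \(V\), solves for \(S\) in the Fourier domain and returns \(P=B-S\):

```python
import numpy as np

def periodic_plus_smooth(B):
    """Periodic-plus-smooth decomposition B = P + S following Moisan (2011)."""
    M, N = B.shape
    # Boundary "gap" image V = V1 + V2 (nonzero only on the first/last rows and columns).
    V = np.zeros((M, N))
    V[0, :] += B[-1, :] - B[0, :]
    V[-1, :] += B[0, :] - B[-1, :]
    V[:, 0] += B[:, -1] - B[:, 0]
    V[:, -1] += B[:, 0] - B[:, -1]
    # DFT of the smooth component; its zero-frequency coefficient is set to zero.
    j = np.arange(M).reshape(-1, 1)
    k = np.arange(N).reshape(1, -1)
    denom = 2 * np.cos(2 * np.pi * j / M) + 2 * np.cos(2 * np.pi * k / N) - 4
    denom[0, 0] = 1.0                      # placeholder; overwritten below
    S_hat = np.fft.fft2(V) / denom
    S_hat[0, 0] = 0.0
    S = np.real(np.fft.ifft2(S_hat))
    return B - S, S

# A smooth but non-periodic image: large jumps between opposite boundaries.
x = np.linspace(0.0, 1.0, 64)
B = np.outer(x, x)
P, S = periodic_plus_smooth(B)
gap_B = np.abs(B[0, :] - B[-1, :]).mean()  # boundary mismatch of B
gap_P = np.abs(P[0, :] - P[-1, :]).mean()  # boundary mismatch of P
```

For this test image the boundary mismatch of \(P\) is much smaller than that of \(B\), so the high-frequency artifacts in the DFT of \(P\) are correspondingly reduced.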
In order to filter the vectorized Fourier coefficients
\begin{equation}\label{eq:VecFourierCoeffs2D} \beta = \text{vec}\left(\text{DFT2}[P]\right),
\end{equation}
by using the Picard parameter as in Sect. \ref{sec:DirectReg}, we first have to rearrange \(\beta\) so that the leading coefficients correspond to the true data while the trailing ones are dominated by noise. In \cite{Hansen2006} the coefficients were arranged in order of increasing spatial frequency. Specifically, the basis functions of the two-dimensional Fourier transform are plane waves given by
\begin{equation}\label{eq:2DFTbasis}
f((j,k),(s,l)) = \exp\left\{-i2\pi \left[\frac{(j-1)(k-1)}{M}+\frac{(s-1)(l-1)}{N}\right]\right\} = \exp\left\{-i2\pi \mathbf{k}\cdot\mathbf{r}\right\},
\end{equation}
where \(\mathbf{k}=\left(\frac{j-1}{M},\frac{s-1}{N}\right)^T\) is the frequency vector, with \(j=1,\ldots,M\) and \(s=1,\ldots,N\), and \(\mathbf{r}=(k-1,l-1)^T\) is the spatial vector. The components of \ref{eq:VecFourierCoeffs2D} are arranged in order of increasing magnitude of the spatial frequency \(\mathbf{k}\), given by \(|\mathbf{k}|^2=(j-1)^2/M^2+(s-1)^2/N^2\). We refer to this ordering as the elliptic ordering, since the contours of the spatial frequency function \(|\mathbf{k}|^2\) are ellipses centered about the zero frequency. However, in our numerical experiments this arrangement causes some results to be highly suboptimal.
An alternative ordering of the Fourier coefficients allows us to overcome this problem. Specifically, we utilize the Kronecker product structure of the two-dimensional Fourier transform, which can be written as a matrix-vector multiplication with \(b\) as
\begin{equation}\label{eq:DFT2kron}
\text{vec}\left(\text{DFT2}[B]\right) = \left(\mathcal{F}^{(2)}_{M,N}\right)^*b.
\end{equation}
Here
\begin{equation}\label{eq:2DForuierMat}
\mathcal{F}^{(2)}_{M,N} = \mathcal{F}_M\otimes\mathcal{F}_N
\end{equation}
is the 2D Fourier transform matrix and '\(\otimes\)' denotes the Kronecker product defined as
\begin{equation}\label{eq:KronDefn} A\otimes B = \left(
\begin{array}{ccc}
a_{1,1}B & \cdots & a_{1,n}B \\
\vdots & \ddots & \vdots \\
a_{m,1}B & \cdots & a_{m,n}B \\
\end{array}
\right).\end{equation}
In view of \ref{eq:2DForuierMat}, we suggest to reorder the Fourier coefficients \(\beta\) in \ref{eq:VecFourierCoeffs2D} according to the ordering permutation \(\pi\) which arranges the Kronecker product
\begin{equation}\label{eq:freqVec}
\mathbf{f}^{(2)}_{M,N} = \mathbf{f}_M\otimes \mathbf{f}_N\in\mathbb{R}^m,
\end{equation}
in increasing order, where \(\mathbf{f}_M\in\mathbb{R}^M\) and \(\mathbf{f}_N\in\mathbb{R}^N\) are the vectors representing the ordered absolute frequencies of \(\mathcal{F}_M\) and \(\mathcal{F}_N\) respectively. Note that since the frequencies in the two-dimensional Fourier transform are shifted so that the zero frequency is located at the corner of the image, the vectors \(\mathbf{f}_N\) and \(\mathbf{f}_M\) in \ref{eq:freqVec} also need to be correspondingly shifted.
The vector \(\mathbf{f}^{(2)}_{M,N}\), whose components are products of the absolute values of spatial frequencies, is not ordered; in component-wise form, \ref{eq:freqVec} is given by
\begin{equation}\label{eq:freqVecCompWise}
\left(\mathbf{f}^{(2)}_{M,N}\right)_{M(j-1)+s}=\frac{1}{m}\left\{\begin{array}{ll}
(j-1)(s-1), & 1\leq j \leq \lfloor\frac{N}{2}\rfloor,\ 1\leq s\leq \lfloor\frac{M}{2}\rfloor,\\
(N-j+1)(s-1), & \lfloor\frac{N}{2}\rfloor < j \leq N,\ 1\leq s\leq \lfloor\frac{M}{2}\rfloor,\\
(j-1)(M-s+1), & 1 \leq j \leq \lfloor\frac{N}{2}\rfloor,\ \lfloor\frac{M}{2}\rfloor < s\leq M,\\
(N-j+1)(M-s+1), & \lfloor\frac{N}{2}\rfloor < j \leq N,\ \lfloor\frac{M}{2}\rfloor < s\leq M, \end{array}\right.
\end{equation}
where as above \(m=MN\). We then construct the permutation \(\pi\) such that \(\mathbf{f}^{(2)}_{M,N}(\pi(1:m))\) appears in increasing order and rearrange the coefficients \ref{eq:VecFourierCoeffs2D} to obtain \(\beta\mapsto \beta(\pi(1:m))\). We term this ordering the hyperbolic ordering since the contours of the function \ref{eq:freqVecCompWise} are hyperbolas centered about zero frequency (see \ref{fig:Masks}).
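The ordering permutation \(\pi\) reduces to an argsort of the Kronecker product \ref{eq:freqVec}. A numpy sketch (the function name is ours; we assume that \texttt{numpy.fft.fftfreq} supplies the absolute frequencies in the same shifted layout, with the zero frequency first, as the unshifted DFT):

```python
import numpy as np

def hyperbolic_permutation(M, N):
    """Permutation sorting the Kronecker product of the absolute-frequency
    vectors f_M and f_N in increasing order (hyperbolic ordering sketch)."""
    f_M = np.abs(np.fft.fftfreq(M))   # absolute frequencies, zero frequency first
    f_N = np.abs(np.fft.fftfreq(N))
    f2 = np.kron(f_M, f_N)            # f^(2)_{M,N}; not ordered in general
    return np.argsort(f2, kind="stable"), f2

pi, f2 = hyperbolic_permutation(8, 6)
```

The permuted vector \texttt{f2[pi]} is then nondecreasing, and the zero-frequency product comes first, as required for the subsequent truncation at \(k_0\).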
The above rearrangement using the ordering permutation \(\pi\) is analogous to the rearrangement of the SVD decomposition of a separable blur
\begin{equation}\label{eq:SepBlur} A = A_1\otimes A_2,\end{equation}
where \(A_1\in\mathbb{R}^{N\times N}\), \(A_2\in\mathbb{R}^{M\times M}\) and \(A\in\mathbb{R}^{m\times m}\) \cite[Sect. 2]{Hansen2008}. Letting \(A_1 = U_1\Sigma_1V_1^T\) and \(A_2 = U_2\Sigma_2V_2^T\) be the SVD of \(A_1\) and \(A_2\), the SVD of \(A\) \ref{eq:SVDofA} can be written as
\begin{equation}
A = \left(\underbrace{U_1\otimes U_2}_{= U}\right)\left(\underbrace{\Sigma_1\otimes \Sigma_2}_{= \Sigma}\right)\left(\underbrace{V_1\otimes V_2}_{= V}\right)^T.
\end{equation}
As in the case of the two-dimensional Fourier transform \ref{eq:2DForuierMat}, even though the singular values of \(A_1\) and \(A_2\) are ordered, those of \(A\) are not \cite[Sect. 4.4.2]{SpecFiltBook}, \cite{Hansen2008}. To be able to use the filter described in Sect. \ref{sec:DirectReg} we must reorder the entries of \(U\), \(\Sigma\) and \(V\) according to decreasing singular values using the ordering permutation \(\pi\) as in \ref{eq:freqVec}.
Once the Fourier coefficients are rearranged we proceed according to the procedure in Sect. \ref{sec:DirectReg}. Specifically, we form the sequence \(\{V(k)\}_{k=1}^m\) defined in \ref{eq:VxDefn}, estimate the Picard parameter using \ref{eq:PicIndCondSVD} and set to zero the Fourier coefficients with indices larger than \(k_0\) to form the vector \(\widehat\beta(\pi(1:m)) = (\beta(\pi(1)),\ldots,\beta(\pi(k_0-1)),\underbrace{0,\ldots,0}_{m-k_0+1})^T\). We then invert \ref{eq:VecFourierCoeffs2D} using \(\widehat\beta\) instead of \(\beta\) to obtain the filtered periodic component \(\widehat P\) and the filtered image \(\widehat B = \widehat P + S\). The filtered data vector is then obtained as \(\widehat b = \text{vec}(\widehat B)\).
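The windowing step itself can be sketched as follows (a minimal numpy version of the truncation and inversion only; it omits the PPS decomposition and the estimation of \(k_0\), and the function name is ours):

```python
import numpy as np

def apply_dft_window(P, k0, pi):
    """Keep the first k0-1 Fourier coefficients of P in the ordering pi and
    zero the rest; P plays the role of the periodic component and pi is any
    permutation of range(P.size), e.g. the hyperbolic one."""
    M, N = P.shape
    beta = np.fft.fft2(P).flatten(order="F")   # column-stacking vectorization
    beta_sorted = beta[pi]
    beta_sorted[k0 - 1:] = 0.0                 # drop noise-dominated coefficients
    beta_hat = np.empty_like(beta)
    beta_hat[pi] = beta_sorted                 # undo the permutation
    return np.real(np.fft.ifft2(beta_hat.reshape((M, N), order="F")))

# With k0 - 1 = M*N (keep everything) the filter must return P unchanged.
rng = np.random.default_rng(4)
P = rng.standard_normal((16, 12))
identity = np.arange(P.size)
P_hat = apply_dft_window(P, P.size + 1, identity)
```

The full filtered image is then assembled as \(\widehat B = \widehat P + S\) and vectorized to \(\widehat b\) as described above.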
Note that dropping the last coefficients of the data using the two orderings discussed above can also be interpreted as applying one-parameter windows of different shapes in the Fourier domain. Specifically, the elliptic ordering of \cite{Hansen2006} corresponds to setting to zero the Fourier coefficients outside of an ellipse, whereas the hyperbolic ordering corresponds to doing the same outside a hyperbola. This is illustrated in \ref{fig:Masks} where we plot the Fourier transform coefficients of a \(256\times 256\) image with \(k_0=10^4\) for each of the orderings. Viewed from this perspective, the proposed filtering algorithm simply applies a window depending on the parameter \(k_0\) to the DFT of the perturbed image. The difference between our approach and the approach of \cite{Hansen2006} is in the chosen shape of the window.
\section{Iterative inversion using the GKB}\label{sec:GKBinvert}
In this section, we consider the solution of the ill-posed problem \ref{eq:Ax=b} using the GKB algorithm \cite{Golub1965}. This algorithm approximates the subspace spanned by the right singular vectors of \(A\) corresponding to its \(k\) largest singular values with the first \(k\) basis vectors of the Krylov subspace \(\mathcal{K}_k(A^TA,A^Tb)\) (see \cite[Sect. 6.3.2]{RankDeff} and \cite[Sect. 6.3.1]{HansenInsights}). After \(k\) iterations (\(1\leq k\leq n\)), the GKB algorithm yields two matrices with orthonormal columns, \(W_k\in\mathbb{R}^{n\times k}\) and \(Z_{k+1}\in\mathbb{R}^{m\times(k+1)}\), and a lower bidiagonal matrix \(B_k\in\mathbb{R}^{(k+1)\times k}\) with the structure
\begin{equation}\label{eq:BkDefn} B_k = \left(
\begin{array}{cccc}
\varrho_1 & & & \\
\theta_2 & \varrho_2 & & \\
& \theta_3 & \ddots & \\
& & \ddots & \varrho_k \\
& & & \theta_{k+1} \\
\end{array}
\right),\end{equation}
such that
\begin{equation}
\label{eq:LanczosRels}\begin{aligned} &AW_k = Z_{k+1}B_k,\\ &A^TZ_{k+1} = W_kB_k^T + \varrho_{k+1}w_{k+1}e_{k+1}^T,\\ &Z_{k+1}\theta_1e_1 = b.
\end{aligned}
\end{equation}
The GKB algorithm is summarized in \ref{alg:GKB}. We perform a reorthogonalization at each step of the algorithm to ensure that the columns of \(W_k\) and \(Z_{k+1}\) remain orthogonal.
\begin{algorithm}
\caption{Golub-Kahan Bidiagonalization (GKB)}\label{alg:GKB}
\begin{algorithmic}
\INPUT{\(A,b,k\)}\Comment{Coefficient matrix \(A\), data vector \(b\) and number of iterations \(k\)}
\OUTPUT{\(W_k,Z_{k+1},B_k\)}
\LineComment{Initialization:}
\State \(\theta_1 \gets ||b||, \quad z_1\gets b/\theta_1\)
\State \(\varrho_1 \gets ||A^Tz_1||, \quad w_1\gets A^Tz_1/\varrho_1\)
\For{\(j=1,2,\ldots,k\)}
\State \(p_j \gets Aw_{j}-\varrho_jz_j\)
\State \(p_j\gets \left(I-Z_jZ_j^T\right)p_j\) \Comment{Reorthogonalization step}
\State \(\theta_{j+1} = ||p_j||, \quad z_{j+1}\gets p_j/\theta_{j+1}\)
\State \(q_j \gets A^Tz_{j+1}-\theta_{j+1}w_j\)
\State \(q_j\gets \left(I-W_jW_j^T\right)q_j\) \Comment{Reorthogonalization step}
\State \(\varrho_{j+1} = ||q_j||, \quad w_{j+1}\gets q_j/\varrho_{j+1}\)
\LineComment{Update output matrices:}
\State \(W_j \gets \left(W_{j-1},\ w_j\right), \quad Z_{j+1} \gets \left(Z_{j},\ z_{j+1}\right)\)
\State \(B_j \gets \left(\begin{array}{cc} \left(\begin{array}{c} B_{j-1} \\ 0 \end{array}\right), & \left(\begin{array}{c} \mathbf{0} \\ \varrho_j \\ \theta_{j+1} \end{array}\right)\end{array}\right)\)
\EndFor
\end{algorithmic}
\end{algorithm}
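A direct numpy transcription of Algorithm \ref{alg:GKB} (a sketch without breakdown handling; variable names are ours) can be checked against the first relation in \ref{eq:LanczosRels}:

```python
import numpy as np

def gkb(A, b, k):
    """Golub-Kahan bidiagonalization with full reorthogonalization (sketch)."""
    m, n = A.shape
    Z = np.zeros((m, k + 1))
    W = np.zeros((n, k + 1))
    Bk = np.zeros((k + 1, k))
    theta = np.linalg.norm(b)
    Z[:, 0] = b / theta
    q = A.T @ Z[:, 0]
    rho = np.linalg.norm(q)
    W[:, 0] = q / rho
    for j in range(k):
        Bk[j, j] = rho
        p = A @ W[:, j] - rho * Z[:, j]
        p -= Z[:, :j + 1] @ (Z[:, :j + 1].T @ p)   # reorthogonalization step
        theta = np.linalg.norm(p)
        Z[:, j + 1] = p / theta
        Bk[j + 1, j] = theta
        q = A.T @ Z[:, j + 1] - theta * W[:, j]
        q -= W[:, :j + 1] @ (W[:, :j + 1].T @ q)   # reorthogonalization step
        rho = np.linalg.norm(q)
        W[:, j + 1] = q / rho
    return W[:, :k], Z, Bk

# Verify A @ W_k == Z_{k+1} @ B_k and the orthonormality of the columns.
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 20))
b = rng.standard_normal(30)
W, Z, Bk = gkb(A, b, 5)
```

For clarity the sketch stores dense \(W\) and \(Z\) and reorthogonalizes against all previous columns; production implementations such as LSQR avoid storing these matrices explicitly.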
It can be shown that the columns of \(W_k\) span the Krylov subspace \ref{eq:KrylovSubspace} and that we can achieve a regularizing effect by projecting the solution onto this subspace \cite[Sect. 2.1]{Gazzola2015}.
Choosing a solution from the column space of \(W_k\), such that \(x=W_ky\), and using the relations \ref{eq:LanczosRels}, we can rewrite the residual norm in \ref{eq:GKBprob} as
\begin{equation}\rho(k) = ||b-Ax^{(k)}||^2 = ||Z_{k+1}\left(\theta_1e_1-B_ky\right)||^2,\end{equation}
and, since \(Z_{k+1}\) has orthonormal columns,
\begin{equation}\label{eq:rewriteLS}
\rho(k) = ||\theta_1e_1-B_ky||^2.
\end{equation}
Hence, solving the PLS problem \ref{eq:GKBprob} amounts to minimizing \ref{eq:rewriteLS}, which is small-scale and can be solved using standard procedures such as the QR decomposition of \(B_k\) as in the LSQR algorithm \cite{LSQR}.
In the absence of noise, the GKB algorithm is terminated once \(\varrho_{k}=0\) or \(\theta_{k+1}=0\).
However, the solution of the PLS problem \ref{eq:rewriteLS} exhibits semi-convergence, whereby the error in the iterates \(||x_{true}-x^{(k)}||\) first decreases as \(k\) increases and then increases sharply well before the above condition is met. This is due to the fact that the columns of \(W_k\) contain increasing levels of noise, as described in \cite{Hnetynkova2009}. Hence, at early iterations the columns are almost noiseless and the solution gets closer to the true one, while at later iterations the solution becomes contaminated by noise. It is thus crucial to appropriately terminate the iterations before the noise becomes dominant.
Usually, standard methods such as the GCV or the L-curve are used to estimate the optimal stopping iteration. However, the GCV method assumes that the noise is additive and fully contained in the data vector \(b\), while the coefficient matrix \(A\) is noiseless. This is indeed the case in the original, large-scale problem \ref{eq:Ax=b}, but not in the PLS problem of minimizing \ref{eq:rewriteLS}. In the PLS problem the projected data vector \(Z_{k+1}^Tb=\theta_1e_1\) is a noiseless scaled standard basis vector, whereas the projected coefficient matrix \(B_k = Z_{k+1}^TAW_k\) is generated by Algorithm \ref{alg:GKB} from the noisy data vector \(b\) and depends on the columns of \(W_k\) and \(Z_{k+1}\). Therefore, the derivation of the standard form of the GCV function and the proof of its optimality, such as those given in \cite[Thm. 1]{GCV2}, no longer apply. The justification for the L-curve method seems to hold for the projected problem but, as we show in the numerical examples section, in many cases it is far from optimal.
In this paper we propose to use the DF method developed in \cite{Levin2016} to stop the iterative process. The DF method uses the distance between the filtered data \(\widehat b\) and the data reconstructed from the \(k\)th iterate \(Ax^{(k)}\) to characterize the quality of the iterated solution \(x^{(k)}\). Writing the distance as
\begin{equation}\label{eq:DistNormGKB}
\widehat f(k) = ||\widehat b - Ax^{(k)}||^2,
\end{equation}
and using the methods of Sect. \ref{sec:DFTfilter} to obtain the filtered data \(\widehat b\), we expect \(\widehat f(k)\) to have a global minimum at or near the optimal iteration. However, \(\widehat f(k)\) may also have local minima, and so we must continue the iterations beyond a potential minimum of \ref{eq:DistNormGKB} to verify that \(\widehat f(k)\) indeed continues to increase. In addition, for problems with very small noise levels, the filter of Sect. \ref{sec:DFTfilter} may not change the data vector \(b\) sufficiently, in which case \(\widehat f(k)\) will not have a minimum. Instead, \(\widehat f(k)\) will flatten after the optimal iteration, since the solution does not become significantly contaminated. To account for all the above behaviors, we propose to terminate the iterations once the relative change in \ref{eq:DistNormGKB} is small enough,
\begin{equation}\label{eq:GKBcond}
\frac{\widehat f(k)-\widehat f(k+1)}{\widehat f(k)} \leq \delta,\quad \text{for \(p\) consecutive iterations}
\end{equation}
for some small bound \(\delta>0\). We emphasize that the numerator in \ref{eq:GKBcond} is not an absolute value and becomes negative if \(\widehat f(k)\) starts to increase. Since we choose \(\delta>0\), the condition \ref{eq:GKBcond} is automatically satisfied if \(\widehat f(k)\) has a minimum. Otherwise the iterations are terminated once \(\widehat f(k)\) becomes very flat. From our experience, the algorithm works well for \(\delta\in [10^{-2}, 10^{-4}]\) and \(p \geq 5\). Finally, we observe that since a discrete iteration number (\(k\in\{1,\ldots,n\}\)) acts as a regularization parameter, the ability of the reconstructed data \(Ax^{(k)}\) in \ref{eq:DistNormGKB} to get closer to the filtered data is limited, making the minimum of \ref{eq:DistNormGKB} less sensitive to the quality of the filtered data \(\widehat b\). Furthermore, the discrete regularization parameter does not limit the accuracy of the solution in practice. As we demonstrate in the numerical examples below, the additional regularization of hybrid methods is unnecessary and only slightly improves the solution, if at all.
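The stopping test \ref{eq:GKBcond} operates on the scalar sequence \(\widehat f(k)\) only. The sketch below (with synthetic sequences of our choosing) covers both the levelling-off and the minimum behaviors; note that negative relative changes also satisfy the condition, as discussed above:

```python
def df_stopping_iteration(f, delta=1e-3, p=5):
    """Terminate once (f[k] - f[k+1]) / f[k] <= delta holds for p consecutive
    iterations; f is the sequence of distances f(k) (0-based Python list).
    Returns the 1-based iteration at which the run of p completes."""
    count = 0
    for k in range(len(f) - 1):
        if (f[k] - f[k + 1]) / f[k] <= delta:
            count += 1
            if count == p:
                return k + 1
        else:
            count = 0
    return len(f)

# f decreases and then levels off: the rule fires shortly after the plateau starts.
f_flat = [100.0, 50.0, 25.0, 12.0, 6.0, 3.0, 3.0, 3.0, 3.0, 3.0, 3.0]
k_flat = df_stopping_iteration(f_flat, delta=1e-3, p=3)
# f has a minimum: the increase past it makes the numerator negative.
f_min = [10.0, 5.0, 2.0, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0]
k_min = df_stopping_iteration(f_min, delta=1e-3, p=3)
```

In practice one would then report, say, the minimizer of \(\widehat f\) over the iterations performed up to the returned index.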
\subsection{Regularizing the projected problem}\label{sec:TikhRegProj}
In this section, we briefly discuss hybrid methods for the solution of \ref{eq:Ax=b}, which combine projection onto a Krylov subspace with Tikhonov regularization and are the main competitors to our approach. Instead of attempting to terminate the GKB process at an optimal iteration, hybrid methods replace the PLS problem \ref{eq:GKBprob} with the Tikhonov minimization problem \ref{eq:TikhMinProb}. Using the relations \ref{eq:LanczosRels} and \(x=W_ky\), as in Sect. \ref{sec:GKBinvert}, problem \ref{eq:TikhMinProb} can be rewritten as
\begin{equation}\label{eq:MinTikhGKB} \min_y ||\theta_1e_1-B_ky||^2 + \lambda^2 ||y||^2,\end{equation}
or as the normal equation
\begin{equation}\label{Eq:TikhNormalEq0}
\left(B_k^TB_k + \lambda^2 I\right)y = B_k^T\theta_1e_1
\end{equation}
which has the solution
\begin{equation}\label{Eq:TikhNormalEq}
y_\lambda = \left(B_k^TB_k + \lambda^2 I\right)^{-1}B_k^T\theta_1e_1.
\end{equation}
The solution \ref{Eq:TikhNormalEq} can be rewritten, similarly to Sect. \ref{sec:DirectReg}, using the SVD of \(B_k\) as
\begin{equation}\label{eq:SVDbk}
B_k = U_k\Sigma_k V_k^T,
\end{equation}
where \(U_k\in \mathbb{R}^{(k+1)\times (k+1)}\) and \(V_k\in\mathbb{R}^{k\times k}\) are orthogonal and \(\Sigma_k\in\mathbb{R}^{(k+1)\times k}\) has the structure
\begin{equation}\label{eq:SigmaStruct}
\Sigma_k = \left(\begin{array}{c} \text{diag}\{\sigma_1,\ldots,\sigma_k\} \\ \mathbf{0}^T \end{array}\right),
\end{equation}
with the singular values \(\{\sigma_j\}\) arranged in decreasing order.
Due to the structure of \(B_k\) in \ref{eq:BkDefn} and the fact that \(\varrho_j,\; \theta_j\; >0\) for all relevant iterations, we have \(\text{rank}(B_k)=k\) and \(\sigma_j >0\) for all \(j\leq k\). Denoting the \(j\)th columns of \(U_k\) and \(V_k\) as \(u^{(k)}_j\) and \(v^{(k)}_j\) respectively, the Tikhonov solution \ref{Eq:TikhNormalEq} can be written similarly to \ref{eq:TikhSoln} as
\begin{equation}\label{eq:TikhSolnGKB} y_{\lambda} = \theta_1\sum_{j=1}^k \frac{\sigma^{(k)}_ju_j^{(k)}(1)}{\left(\sigma^{(k)}_j\right)^2+\lambda^2}v^{(k)}_j,\end{equation}
where \(u^{(k)}_j(1)=e_1^Tu^{(k)}_j\).
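Since \(B_k\) is small, \ref{eq:TikhSolnGKB} is cheap to evaluate. A numpy sketch (names ours): for \(\lambda=0\) it reduces to the PLS solution of \(\min_y||\theta_1e_1-B_ky||\), which we use as a consistency check on a synthetic full-rank matrix:

```python
import numpy as np

def projected_tikhonov(Bk, theta1, lam):
    """Tikhonov solution y_lambda of the projected problem via the SVD of B_k."""
    U, sigma, Vt = np.linalg.svd(Bk)       # Bk has shape (k+1, k)
    u1 = U[0, :len(sigma)]                 # first components u_j^{(k)}(1)
    coeffs = theta1 * sigma * u1 / (sigma**2 + lam**2)
    return Vt.T @ coeffs                   # sum over j of coeffs_j * v_j^{(k)}

# Consistency check against the unregularized least-squares solution.
rng = np.random.default_rng(5)
k = 5
Bk = rng.standard_normal((k + 1, k))
theta1 = 2.0
y0 = projected_tikhonov(Bk, theta1, 0.0)
e1 = np.zeros(k + 1)
e1[0] = theta1
y_ls, *_ = np.linalg.lstsq(Bk, e1, rcond=None)
```

The hybrid iterate is then \(x^{(k)}=W_ky_\lambda\), with \(\lambda\) re-estimated at every iteration.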
By appropriately choosing \(\lambda\) at each iteration we can, in theory, filter out the noise added to the least-squares solution of \ref{eq:GKBprob} at higher iterations and thus stabilize the error, making the final solution independent of the stopping iteration. Regularization in hybrid methods may also filter out noise that is not filtered by projection alone. Nevertheless, we argue that this additional filtering has a negligible effect and that choosing \(\lambda\) appropriately at each iteration presents a significant challenge. Specifically, the determination of \(\lambda\) for hybrid methods is usually done using standard procedures originally developed for direct regularization, the most popular of which is the GCV \cite{splines,GCV2}. These procedures assume a noiseless coefficient matrix, whereas hybrid methods project the solution onto a noise-dependent Krylov subspace, thereby contaminating the projected coefficient matrix in \ref{eq:MinTikhGKB} with noise, similarly to the PLS problem. Therefore, the GCV method is not expected to produce optimal solutions for hybrid methods, as was indeed demonstrated in \cite[Sect. 4]{Chung2008}. The W-GCV method proposed in \cite{Chung2008} attempts to overcome this shortcoming of the GCV by introducing an additional free parameter and choosing it adaptively at each iteration. However, as we show in the numerical examples in Sect. \ref{sec:NumEx}, the W-GCV method still produces suboptimal results in many cases.
In all of our numerical examples we observe that the minimum errors achievable using PLS and hybrid methods are almost identical and therefore any additional filtering achievable by hybrid methods is negligible. To explain this we note that at early iterations, the vectors spanning the Krylov subspace \ref{eq:KrylovSubspace}, and hence also the solutions projected onto them, are typically very smooth and do not contain noise \cite{Hnetynkova2009}. Therefore, little to no regularization is required at this stage and we expect \(\lambda\approx 0\), making the Tikhonov problem \ref{eq:TikhMinProb} equivalent to the least-squares problem \ref{eq:GKBprob}. Only after the basis vectors spanning \ref{eq:KrylovSubspace} become contaminated by noise does the solution require regularization. At this stage the optimal regularization parameter \(\lambda\) increases to some non-negligible, noise-dependent value that keeps the error of the regularized solution approximately constant, while the error of the unregularized PLS solution increases sharply. We demonstrate in the following section, using several numerical examples, that the optimal solution of the hybrid method occurs while the noise that contaminates it is very small and so in most typical cases the optimal solutions of \ref{eq:GKBprob} and \ref{eq:TikhMinProb} are very close to each other. This was also demonstrated in the numerical examples in \cite[Sect. 4]{Chung2015} and \cite{Chung2008}. Therefore, in most practical problems the only significant advantage of hybrid methods over simple GKB stopping criteria is in the ability to stabilize the iterations, making them less sensitive to the stopping iteration. This also implies that having a reliable stopping criterion for PLS obviates the need for a hybrid method in these cases.
\section{Numerical examples}\label{sec:NumEx}
In this section we demonstrate the performance of the proposed method using seven test problems from the \texttt{Matlab} toolbox \texttt{RestoreTools} \cite{Nagy2004}: \texttt{satellite}, \texttt{GaussianBlur440}, \texttt{AtmosphericBlur50}, \texttt{Grain}, \texttt{Text}, \texttt{Text2} and \texttt{VariantMotionBlur\_large}. Each of these problems includes a different blur, \(A\), and a different image, \(x_{true}\), to reconstruct. To generate the data vector \(b\), we form the true data, \(b_{true} = Ax_{true}\), and perturb it with white Gaussian noise of variance \(s^2=\alpha\max|b_{true}|^2\), where the noise level \(\alpha\) takes one of three values, \(\alpha\in\{10^{-2},10^{-4},10^{-6}\}\), for each problem. We apply the inversion procedure to each test problem 100 times, each time using a different noise realization.
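The noise generation step can be sketched as follows (a minimal illustration; the blur \(A\) and image \(x_{true}\) come from the toolbox and are not reproduced here):

```python
import numpy as np

def add_noise(b_true, alpha, rng=None):
    """Perturb the true data with white Gaussian noise of variance
    s^2 = alpha * max|b_true|^2, as used to generate the test data."""
    rng = np.random.default_rng() if rng is None else rng
    s = np.sqrt(alpha) * np.max(np.abs(b_true))
    return b_true + s * rng.standard_normal(b_true.shape)
```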
To implement our stopping criterion, we set \(h=\lceil\frac{m}{100}\rceil\) and \(\varepsilon=10^{-2}\) in \ref{eq:PicIndCondSVD}, and also \(p=5\) and \(\delta=2\times10^{-3}\) in \ref{eq:GKBcond}. As mentioned above, however, our numerical results are robust and would not change much for a wide range of \(h\), \(\varepsilon\), \(p\) and \(\delta\). In the numerical tests we compare our stopping criterion with the L-curve criterion \cite{RegParamItr,LCurve,Calvetti2000}, \cite[Chap. 7]{RankDeff}, the NCP method for the PLS problem \cite{Hansen2006,Rust2008}, and the hybrid W-GCV method \cite{Chung2008}. The L-curve method consists of finding the point of maximum curvature on the so-called L-curve, defined as \((||r^{(k)}||,\; ||x^{(k)}||)\), where \(r^{(k)}=b-Ax^{(k)}\) is the residual vector. To do so we use the function \verb!l_corner! from Hansen's \texttt{Regularization Tools} toolbox \cite{RegTools}. Using the L-curve method, we terminate the iterations once the chosen iteration number either stays the same or decreases for \(p=5\) consecutive iterations, signifying that the corner of the L-curve has been found.
The NCP method is based on calculating a whiteness measure of the residual vector \(r^{(k)}\) at each iteration \cite{Hansen2006}. The stopping iteration is chosen as the one at which the residual vector \(r^{(k)}\) most resembles white noise, as follows. The vector \(r^{(k)}\) is reshaped into an \(M\times N\) matrix \(R^{(k)}\) satisfying \(r^{(k)} = \text{vec}\left(R^{(k)}\right)\), and the quantity \(\widehat R = |\text{DFT2}(R^{(k)})|\) is defined to be the absolute value of its two-dimensional Fourier transform. Since \(R^{(k)}\) is a real valued signal, it follows that \(\widehat R\) is symmetric about \(q_1 = \lfloor M/2 \rfloor + 1\) and \(q_2 = \lfloor N/2 \rfloor +1\), so that \(\widehat R_{j,k} = \widehat R_{M-j,k}\) for \(2\leq j\leq q_1-1\) and \(\widehat R_{j,k} = \widehat R_{j,N-k}\) for \(2\leq k\leq q_2-1\). Consequently, only the first quarter of \(\widehat R\), which can be written as \(\widehat T = \widehat R(1:q_1,1:q_2)\) using \texttt{Matlab} notation, is required for the analysis. Vector \(\widehat t\) is then obtained by vectorizing \(\widehat T\) using the elliptical parametrization defined in Sect. \ref{sec:DFTfilter}. The NCP of \(R^{(k)}\) is defined as the vector \(c(R^{(k)})\) of length \(q_1q_2-1\) with components\footnote{In \cite{Hansen2006}, the authors assume the problem is square so that \(M=N\) and \(q_1=q_2\).}
\begin{equation}\label{eq:NCPdefn}
c(R^{(k)})_j = \frac{||\widehat t(2:j+1)||_1}{||\widehat t(2:q_1q_2)||_1},
\end{equation}
where the dc component of \(R^{(k)}\) is not included in the NCP. We note that it is argued in \cite{Rust2008} that the NCP should include the dc component of \(R^{(k)}\) since it captures the deviation from zero mean white noise. However, we found no difference between the two definitions in practice and therefore we shall follow \cite{Hansen2006} and disregard the dc component as in \ref{eq:NCPdefn}. It is shown in \cite{Hansen2006,Rust2008} that for white noise, the NCP should be a straight line from 0 to 1 represented by the vector with components \(s_j = j/(q_1q_2-1)\). Therefore, the whiteness measure is defined as the distance
\begin{equation}
\label{eq:NCPfunc}
N(k) = ||s-c(R^{(k)})||_1.
\end{equation}
The iterations are terminated once \ref{eq:NCPfunc} reaches its global minimum, signifying that the residual vector at the chosen iteration is the closest to white noise. To implement this method, we compute the function \ref{eq:NCPfunc} at each iteration and terminate the iterations once its value increases for \(p=5\) consecutive iterations, just as we do with our own method. We then choose the solution corresponding to the global minimum of the computed \(N(k)\).
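A minimal sketch of the whiteness measure \ref{eq:NCPfunc} is given below; note that, for simplicity, it flattens the first quadrant of the spectrum in row-major order instead of using the elliptical parametrization of Sect. \ref{sec:DFTfilter}, which is an assumption of this illustration:

```python
import numpy as np

def ncp_whiteness(R):
    """Whiteness measure N(k) = ||s - c(R)||_1 of a residual image R.
    Uses a plain flattening of the first quadrant of |DFT2(R)| in place
    of the elliptical ordering described in the text (an assumption)."""
    M, N = R.shape
    q1, q2 = M // 2 + 1, N // 2 + 1
    That = np.abs(np.fft.fft2(R))[:q1, :q2]   # first quadrant of the spectrum
    t = That.ravel()
    cum = np.cumsum(t[1:])                    # skip the dc component
    c = cum / cum[-1]                         # normalized cumulative periodogram
    s = np.arange(1, c.size + 1) / c.size     # straight line expected for white noise
    return np.abs(s - c).sum()
```

A residual that resembles white noise yields a small value of this measure, while a smooth (structured) residual concentrates its spectrum at low frequencies and yields a large value.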
In contrast to the above methods, the W-GCV is a hybrid method and solves the Tikhonov problem \ref{eq:TikhMinProb}. It is based on introducing a free parameter to the GCV criterion, as discussed in Sect. \ref{sec:TikhRegProj} and in \cite{Chung2008}. To implement the W-GCV, we use the \texttt{HyBR\_modified} routine provided in the \texttt{RestoreTools} package \cite{Nagy2004} as \texttt{x = HyBR\_modified(A,B,[],HyBRset('Reorth','on'),1)}. Note that we use the reorthogonalization option of the \verb!HyBR_modified! routine to make a fair comparison with our Algorithm \ref{alg:GKB}, which employs full reorthogonalization.
We measure the quality of a solution by computing its Mean-Square Deviation (MSD), defined as
\begin{equation}\label{eq:MSD} \text{MSD} = \frac{||x_{true} - x||^2}{||x_{true}||^2},\end{equation}
where \(x\) is a solution. We then define the optimal solutions to the PLS problem \ref{eq:GKBprob} and to the hybrid problem \ref{eq:TikhMinProb} as the ones minimizing the MSD to each problem.
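The quality measure \ref{eq:MSD} reduces to a one-line function, shown here only for completeness:

```python
import numpy as np

def msd(x_true, x):
    """Mean-Square Deviation of a solution x relative to x_true,
    MSD = ||x_true - x||^2 / ||x_true||^2."""
    return np.linalg.norm(x_true - x)**2 / np.linalg.norm(x_true)**2
```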
We present the results of our simulations as boxplots of the resulting MSD values in Figs.~\ref{fig:Boxplots_1-3} and \ref{fig:Boxplots_4-6}. The boxplots divide the data into quartiles, with the boxes spanning the middle 50\% of the data (the interquartile range), and the vertical lines extending from the boxes span 150\% of the interquartile range above and below it. Anything outside this interval is considered an outlier and is marked with a '+'. Each box also contains a horizontal line marking the median of the data.
Based on the results presented in Figs.~\ref{fig:Boxplots_1-3} and \ref{fig:Boxplots_4-6} we can make the following observations:
\begin{enumerate}
\item The hyperbolic ordering with the DF method performs similarly to or better than the corresponding elliptic ordering in all examples.
\item The DF method with hyperbolic ordering performs similarly to or outperforms the L-curve, NCP and W-GCV methods in all examples without exception.
\item The DF method with elliptic ordering failed to produce acceptable solutions for the \texttt{Text2} problem with \(\alpha=10^{-4}\). Contrary to the other examples where the distance function \ref{eq:DistNormGKB} with this ordering has a minimum, in this example it has neither a minimum nor even an inflection point and therefore the right stopping iteration could not be found with this ordering.
\item The optimal MSD values for the PLS problem \ref{eq:GKBprob} and the projected Tikhonov problem \ref{eq:TikhMinProb} are almost identical in all examples, as expected from the discussion in Sect. \ref{sec:TikhRegProj}.
\end{enumerate}
Overall, we can conclude that the DF criterion with hyperbolic ordering for estimating the optimal stopping iteration is accurate, robust and outperforms state-of-the-art methods.
\begin{figure}[!p]
\centering
\hspace*{\fill}
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{MaskCirc.pdf}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.45\textwidth}
\centering
\includegraphics[width=\textwidth]{MaskKron.pdf}
\end{subfigure}
\hspace*{\fill}
\caption{Effect of the filtering procedure introduced in Sect. \ref{sec:DFTfilter} on an image of size \(256\times256\), using the elliptic ordering of \cite{Hansen2006} and the hyperbolic ordering \ref{eq:freqVec} in the Fourier domain. The Picard parameter for both methods is \(k_0=10^4\). The zero frequency component is placed at the center of the image.}\label{fig:Masks}
\end{figure}
\begin{figure}[!p]
\centering
\includegraphics[width=\textwidth]{box_1-4.pdf}
\caption{Boxplots of the MSD values obtained for the PLS problem \ref{eq:GKBprob} and the Tikhonov regularized problem \ref{eq:TikhMinProb} with the methods: (1) DF with hyperbolic ordering (DF-h), (2) DF with elliptic ordering (DF-e), (3) L-curve, (4) NCP, (5) W-GCV, (6) minimum MSD for PLS problem (PLS\(_{OPT}\)), (7) minimum MSD solution for Tikhonov problem (Tikh\(_{OPT}\)). The problems presented are \emph{First row}: \texttt{satellite}; \emph{Second row}: \texttt{GaussianBlur440}; \emph{Third row}: \texttt{AtmosphericBlur50}; \emph{Fourth row}: \texttt{Grain}. The noise levels are \emph{First column}: \(\alpha=10^{-2}\); \emph{Second column}: \(\alpha=10^{-4}\); \emph{Third column}: \(\alpha=10^{-6}\).}\label{fig:Boxplots_1-3}
\end{figure}
\begin{figure}[!p]
\centering
\includegraphics[width=\textwidth]{box_4-7.pdf}
\caption{Boxplots of the MSD values obtained for the PLS problem \ref{eq:GKBprob} and the Tikhonov regularized problem \ref{eq:TikhMinProb} with the methods: (1) DF with hyperbolic ordering (DF-h), (2) DF with elliptic ordering (DF-e), (3) L-curve, (4) NCP, (5) W-GCV, (6) minimum MSD for PLS problem (PLS\(_{OPT}\)), (7) minimum MSD solution for Tikhonov problem (Tikh\(_{OPT}\)). The problems presented are \emph{First row}: \texttt{Text}; \emph{Second row}: \texttt{Text2}; \emph{Third row}: \texttt{VariantMotionBlur\_large}. The noise levels are \emph{First column}: \(\alpha=10^{-2}\); \emph{Second column}: \(\alpha=10^{-4}\); \emph{Third column}: \(\alpha=10^{-6}\).}\label{fig:Boxplots_4-6}
\end{figure}
\bibliographystyle{plain}
\section{Model}
The Hamiltonian describing the aforementioned model is given by
\begin{equation}
H = \sum_{i=3}^{L-2}(b_i + b_i^\dagger ) \cdot \left(\mathcal{N}^3_i+
\mathcal{N}^2_i \right)
\label{ham}
\end{equation}
where {$L$ is the number of sites;}
$b$ and $b^\dagger$ are the usual annihilation and creation
operators ($\hbar=1$); the operators
$\mathcal{N}^2_i = \sum_P n_{\alpha}n_{\beta}\bar n_{\gamma} \bar n_{\delta}$ and
$\mathcal{N}^3_i=\sum_{P'} n_{\alpha} n_{\beta} n_{\gamma} \bar n_{\delta}$
($n=b^\dagger b$, $\bar n = 1-n$, the indices $\alpha, \beta, \gamma,
\delta$ label the four neighbouring sites)
count the population present in the four neighbouring sites (the sum
runs on every possible permutation $P$ and $P'$ of the positions of
the $n$ and $\bar n$ operators) and $\mathcal{N}^2$ ($\mathcal{N}^3$)
gives the null operator if the population is different
from two (three), the identity otherwise. For classical states, as for
example an initial random configuration of dead and alive states,
the Hamiltonian~(\ref{ham}) is, at time zero, $H_{Active}= b_i + b_i^\dagger $
on the sites with two or three alive neighbours
and $H_{Hibernate}= 0$ otherwise. If the Hamiltonian remained
constant, every active site would oscillate forever while the
hibernated ones would stand still. However, as soon as the evolution
starts, the state evolves into a superposition of possible classical
configurations and the interaction between sites starts to play a
role, resulting in a complex dynamics as shown below.
{Thus, the Hamiltonian introduced in Eq.~(\ref{ham}) induces
a quantum dynamics that resembles the rules of the GoL: a site with less
than two or more than three alive neighbouring sites ``freezes'' while, on
the contrary, it ``lives''. The difference with the classical game --
connected to the reversibility of quantum dynamics --
is that ``living'' means oscillating with a typical timescale between
two possible classical states (see, e.g., Fig.~\ref{rule}).}
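To make the action of $\mathcal{N}^2_i$ and $\mathcal{N}^3_i$ on classical states concrete, the following sketch (not from the paper's code; open boundaries and the one-dimensional neighbourhood $i-2, i-1, i+1, i+2$ of the next section are assumed) classifies each site of a classical configuration as active or hibernated:

```python
# Minimal sketch: classify each site of a classical 1D configuration
# according to the rule encoded by N^2 and N^3: a site is "active"
# (H acts as b + b^dagger on it) when exactly two or three of its four
# neighbours (i-2, i-1, i+1, i+2) are alive, "hibernated" otherwise.

def active_sites(config):
    """Return a list of booleans: True where the site is active."""
    L = len(config)
    active = [False] * L
    for i in range(2, L - 2):          # i = 3 .. L-2 in 1-based notation
        neighbours = config[i-2] + config[i-1] + config[i+1] + config[i+2]
        active[i] = neighbours in (2, 3)
    return active
```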
\begin{figure}
\begin{center}
\includegraphics[width=4cm]{./fig3.eps}
\caption{Schematic representation of a one-dimensional
time-evolution of the discretized population
$\mathcal{D}_i(t)$ of a
``blinker'' (case A of Fig.~\ref{timeev}).
From left to right the states of subsequent generations are
sketched.
Empty (blue) squares are ``dead'' sites, coloured (red)
ones are ``alive''.
\label{blink}
\end{center}
\end{figure}
\section{Dynamics} To study the quantum GoL dynamics we employ the time
dependent Density Matrix Renormalization group (DMRG). Originally
developed to investigate condensed matter
systems, the DMRG and its time dependent extension have been proven to
be a very powerful method to numerically investigate many-body quantum
systems~\cite{white,daley,revDMRG, dechiara}. As it can be used efficiently only in
one-dimensional systems, we concentrate on the one-dimensional version
of the Hamiltonian~(\ref{ham}): the operators $\mathcal{N}^2$
and $\mathcal{N}^3$ count the populated sites on the nearest-neighbour
and next-nearest-neighbour sites and thus $\alpha=i-2, \beta=i-1, \gamma=i+1,
\delta=i+2$. Note that it has been shown that the main
statistical properties of the classical GoL are the same in
both two- and one-dimensional versions~\cite{oned}.
{To describe the system dynamics we introduce different quantities
that characterise the system evolution in some detail.}
We first concentrate on the population dynamics, measuring the
expectation values of the number operator at every site $\langle
n_i(t) \rangle$. This clearly gives a picture of the ``alive''
and ``dead'' sites as a function of time, as it gives the probability
of finding a site in a given state when measured. That is, if we observe
the system at some final time $T_f$ we will find dead or alive sites
according to these probabilities. In Fig.~\ref{timeev} we show three
typical evolutions (leftmost pictures): configuration $A$
corresponds to a ``blinker'' where two couples of nearest-neighbour
sites oscillate regularly between dead and alive states {(a schematic
representation of the resulting dynamics of the discretised population
$\mathcal{D}_i(t)$ is reproduced also in Fig.~\ref{blink})};
configuration $B$ is
a typical overcrowded scenario where twenty-four ``alive'' sites
disappear leaving behind only some residual activity; finally a
typical initial random configuration ($C$) is shown.
Notice that in all configurations it is possible to identify the
behaviour of the wave function tails that propagate and generate
interference effects. These effects can be highlighted by computing
the visibility of the dynamics, the maximum
variation of the populations within subsequent generations,
defined as:
\begin{equation}
v_i(t)= \left|\max_{t'} n_i(t') - \min_{t'} n_i(t')\right|,
\qquad t'\in\left[t-\tfrac{T}{2},\,t+\tfrac{T}{2}\right],
\end{equation}
{that is, the visibility at time $t$ reports the maximum variation of the
population in the time interval of length $T$ centered around $t$.}
The visibility clearly follows the preceding dynamics (see
Fig.~\ref{timeev}, second column) and identifies the presence
of ``activity'' in every site.
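The visibility can be computed from the sampled populations as follows (an illustrative sketch assuming unit time steps, so that the window length $T$ is counted in samples):

```python
import numpy as np

def visibility(n, T):
    """Visibility v_i(t): maximum population variation in a window of
    length T centred at t.  `n` is an array n[t, i] of populations
    sampled at unit time steps (a simplifying assumption)."""
    steps, half = n.shape[0], T // 2
    v = np.empty_like(n)
    for t in range(steps):
        lo, hi = max(0, t - half), min(steps, t + half + 1)
        window = n[lo:hi]
        v[t] = window.max(axis=0) - window.min(axis=0)
    return v
```

A site oscillating between the dead and alive states has visibility close to one, while a frozen site has visibility zero.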
\begin{figure}
\begin{center}
\includegraphics[width=4cm]{./avpop.eps}
\includegraphics[width=4cm]{./avpopinf.eps}
\\
\includegraphics[width=4cm]{./avdiv.eps}
\includegraphics[width=3.9cm]{./avdivinf.eps}
\caption{Left: Average population $\rho(t)$ (upper)
and diversity $\Delta(t)$ (lower) as a function of time {for
different initial population density $\rho_0$}.
Right: Equilibrium average population $\rho$ (upper) and diversity
$\Delta$ (lower) for the quantum (blue squares) and classical (red
circles) GoL {as a function of the initial population density
$\rho_0$}. Simulations are performed with a t-DMRG at third
order, Trotter step $\delta t=10^{-2}$, truncation dimension
$m=30$, size $L=32$, averaged over up to thirty
different initial configurations.}
\label{stat}
\end{center}
\end{figure}
To stress the connections and comparisons with the original GoL we
introduce a classical figure of merit (shown in the third column of
Fig.~\ref{timeev}): we report a
discretized version of the populations as a function of time
($\mathcal{D}_i(t)=1$ for $n_i(t) > 0.5$ and $\mathcal{D}_i(t)=0$
otherwise). {Notice that $\mathcal{D}_i(t)$ gives the most
probable configuration of the system after a measurement
on every site in the basis $\{|0\rangle, |1\rangle\}$.}
{Thus, we recover} a ``classical'' view of the
quantum GoL with the usual definition of site status. For example,
configuration $A$ is a ``blinker'' that changes status at every
generation {(see Fig.\ref{timeev} and \ref{blink})}.
More complex configurations appear in the other two
cases. The introduction of the discretized populations $\mathcal{D}_i$
can also be viewed as a new definition of ``alive'' and ``dead''
sites, which we could have adopted from the very beginning to
introduce a stochastic component, as done in~\cite{SMgol}.
This quantity allows the analysis to be performed as is usually done for
the classical GoL and stresses the similarities between the quantum and
the classical GoL.
Following the
literature, to quantify such complexity we compute the clustering
function
$\mathcal{C}(\ell,t)$ that gives the number of clusters of
neighbouring ``alive'' sites of size $\ell$ as a function of time~\cite{oned}.
{For example, the function $\mathcal{C}(\ell,t)$ for a uniform
distribution of ``alive'' sites would be simply $\mathcal{C}(L)=1$ and
zero otherwise while a random pattern would result in a random cluster
function.} This function characterises the complexity of the evolving patterns,
e.g. it is oscillating between zero- and two-size clusters for the
initial condition $A$, while it is much more complex for the random
configuration $C$ (see Fig.~\ref{timeev}, rightmost column).
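The clustering function for a single discretized configuration can be sketched as follows (illustrative code, not the original analysis scripts):

```python
from collections import Counter

def cluster_function(D):
    """C(l): number of clusters of neighbouring 'alive' sites of size l
    in a discretized configuration D (a sequence of 0/1 values)."""
    sizes, run = Counter(), 0
    for d in list(D) + [0]:            # sentinel closes a trailing run
        if d:
            run += 1
        elif run:
            sizes[run] += 1
            run = 0
    return sizes
```

For a fully populated chain this gives a single cluster of size $L$, i.e. $\mathcal{C}(L)=1$ and zero otherwise, as noted in the text.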
\section{Statistics}
To characterise the statistical properties of the quantum GoL we study
the time evolution of different initial random configurations as a
function of the initial density of alive sites. {We concentrate
on two macroscopic quantities: the density of the sites
that if measured would with higher probability result in ``alive'' states
\begin{equation}
\rho(t)=\sum_i \mathcal{D}_i(t)/L;
\end{equation}
and the diversity
\begin{equation}
\Delta(t)= \sum_\ell\mathcal{C}(\ell,t),
\end{equation}
the number of different cluster sizes that are present
in the systems, that quantifies the complexity of the
generated dynamics~\cite{oned, SMgol}.} Typical results,
averaged over different initial configurations, are shown in
Fig.~\ref{stat} (left). As it can be clearly seen
the system equilibrates and the density of states as well as the
diversity reach a steady value. This resembles the typical
behaviour of the classical GoL where any typical initial random
configuration eventually equilibrates to a stable configuration.
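For reference, the two macroscopic quantities can be computed from a discretized configuration as follows (an illustrative sketch):

```python
def density_and_diversity(D):
    """rho = sum_i D_i / L and Delta = number of distinct sizes of
    clusters of neighbouring alive sites, for a discretized
    configuration D (a sequence of 0/1 values)."""
    rho = sum(D) / len(D)
    sizes, run = set(), 0
    for d in list(D) + [0]:            # sentinel closes a trailing run
        if d:
            run += 1
        elif run:
            sizes.add(run)
            run = 0
    return rho, len(sizes)
```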
Moreover, we compare the quantum GoL with a classical {reversible}
version of GoL corresponding to that introduced here:
at every step a cell changes its status if and only if
exactly two or three of its four neighbouring cells are alive.
Notice that, the evolution being unitary and thus reversible,
the equilibrium state locally changes with time, however the
macroscopic quantities reach their equilibrium values {that
depend} non-trivially only on the initial population density.
{In fact, for the classical game, we
were able to check that the final population density is independent
of the
system size while the final diversity scales as $L^{1/2}$ (up
to $2^{10}$ sites, data not shown).}
Moreover, the {time needed to reach equilibrium}
is almost independent of the system
size and initial population density.
{These results on the scaling of classical system properties
support the conjecture that our findings for the quantum case
will hold in general},
while performing the analysis for bigger system sizes is highly demanding.
A detailed analysis of the size scaling of the system properties
will be presented elsewhere.
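The classical reversible rule described above can be sketched as a single synchronous update (illustrative code; open boundary conditions are an assumption):

```python
def classical_step(config):
    """One step of the classical reversible GoL: a cell flips its
    status iff exactly two or three of its four neighbours
    (i-2, i-1, i+1, i+2) are alive.  Open boundaries are an assumption."""
    L = len(config)
    nxt = list(config)
    for i in range(2, L - 2):
        alive = config[i-2] + config[i-1] + config[i+1] + config[i+2]
        if alive in (2, 3):
            nxt[i] = 1 - config[i]
    return nxt
```

Note that the update of a cell depends only on its neighbours, not on its own state, mirroring the conditional structure of the Hamiltonian~(\ref{ham}).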
In Fig.~\ref{stat} we report the final (equilibrium)
population density (right upper) and diversity (right lower)
as a function of the initial population density for both the classical
and the quantum GoL for
systems of $L=32$ cells. The equilibrium population density $\rho$
is a nonlinear function of the initial one, $\rho_0$, in both cases:
the classical one has an initial linear dependence up to half-filling
where a plateau is present up to the final convergence to unit filling
for $\rho_0=1$. Indeed, the all-populated configuration is
a stable system configuration. The quantum GoL follows a
similar behaviour, with a more complex
pattern. Notice that here a first signature of quantum behaviour is
present: the steady population density reached by the quantum
GoL is always smaller than its classical counterpart. This is
probably due to the fact that the evolution is not completely captured
by this classical quantity: the sites with population below half
filling, i.e. the tails of the wave functions, are described as
unpopulated by $\mathcal{D}_i$. However, this missing population
plays a role in the evolution: within the overall superposition of
basis states, a part of the probability density (corresponding to the
states where the sites are populated)
undergoes a different evolution than the classical one.
In general, the quantum system is effectively more populated than
the classical $\rho$ indicates. This difference in the
quantum and classical dynamics is even more evident in the dependence of the
equilibrium diversity on the initial population density
$\rho_0$. In the classical case the maximum diversity is slightly
above three: on average, in the
steady state, there are no more than about three different cluster
sizes present in the system independently of the initial
configuration. On the contrary, in the quantum case the
maximal diversity is about four, increasing the information content
(the complexity) generated by the evolution by about $10$--$20\%$.
{These findings are a signature of the difference between quantum and
classical GoL. In particular} we have shown that the quantum GoL has a
higher capacity of generating diversity than the {corresponding}
classical one. This property arises from the possibility of having quantum
superpositions of states of single sites. Whether purely quantum
correlations (entanglement) play a crucial role is under
investigation. Similarly, as there is some arbitrariness
in our definition of the quantum GoL, the investigation of
possible variations is left for future work.
The investigation presented here fits perfectly as
a subject of study for quantum simulators, like for example cold atoms
in optical lattices. Indeed, the five-body Hamiltonian~(\ref{ham})
can be written in {pseudo spin-one-half operators (Pauli matrices)} and
thus it can be simulated along the lines
presented in~\cite{cirac}. In particular, these simulations would give
access to investigations in two and three dimensions that are
not feasible by means of t-DMRG~\cite{revDMRG}.
In conclusion we note that this is {one of the few
available simulations of a many-body quantum game scalable in the number
of sites~\cite{qgames,mbqg,chen04}}. With a straightforward generalisation
(adding more than one possible strategy defined in
Eq.~(\ref{ham})) one could study also different
many-player quantum games. This approach will allow
different issues {to be studied} related
to many-player quantum games such as the
appearance of new equilibria and their thermodynamical
properties.
Moreover, the approach introduced here shows that one might
investigate many different aspects of many-body quantum systems with
the tools developed in the field of complexity and dynamical systems:
In particular,
the relations with Hamiltonian quantum cellular automata
in one dimension and quantum games~\cite{qgames,nagaj}.
Finally, the search for the possible existence of self-organised
criticality in these systems along the lines of similar investigations
in the classical GoL~\cite{SOC}, if successful, would be the
first manifestation of such effect in a quantum system and might
have intriguing implications in quantum gravity~\cite{smolin,gupta}.
After completing this work we became aware of another work on the same
subject~\cite{arXive}.
We acknowledge
interesting discussions with and support by R.~Fazio and M.B.~Plenio,
the SFB-TRR21, the EU-funded projects
AQUTE, PICC for funding, the BW-Grid for computational
resources, and the PwP project for the t-DMRG code (www.dmrg.it).
\section{Introduction}
Decentralization, blockchain technology, and token-based economics are all concepts incorporated into Web3, a new iteration of the World Wide Web \cite{ww}. The origin of Web3 can be traced to Tim Berners-Lee \cite{1}, the founder of the World Wide Web and the first person to summarize and discuss the idea of Web3, which he originally referred to as the ``Semantic Web''. However, Web3 is currently defined by a set of principles focused on decentralization, user ownership of data, and cryptocurrency \cite{3}. Berners-Lee identified the initial challenge as creating a system that allows knowledge sharing with no central governance or forced commonalities, a combination that makes sharing inherently difficult. This idea was partially delivered by the introduction of search engines and globally accepted guidelines on structuring data so that it can be indexed \cite{2}, which helped organize data and make it more accessible. This combination of ``do-gooding'' developers and growing search engines led to enormous amounts of shared data, which are now accessible to a wide audience \cite{2}.
Web3, or the semantic web, has evolved from the idea of growing the Internet out of this inherent paradox of an encyclopedia without a set organization, towards creator or user ownership of the knowledge and assets that collectively comprise the Internet. The solution, as identified by Berners-Lee, is the ability to pass data along with the rules and logic that govern it from source to source while losing as little information as possible \cite{2}. The implementation of this solution is now contained in the public blockchains that are the foundation of cryptocurrencies and smart contracts \cite{a5}. This, in theory, resolves the limitation of needing developers to create knowledge in good faith for it to be shared and indexed, because logic can now be passed across the Internet. Berners-Lee also highlights that the benefits of this idea do not hinge on an astronomical breakthrough in technology where ``future software agents\ldots can navigate the wealth of its rich representations.'' Instead, it can be gradually realized as systems provide more detailed information \cite{2} and the community makes a conscious effort to adopt the core principles of knowledge-sharing enablement and user ownership.
With the connection between systems through smart contracts, the benefits of a Web3 world can begin to be realized as greater change comes over time. This fundamental move to Web3 is not synonymous with a technology upgrade but rather aligned with a growing Internet culture focused on data sharing and ownership that can enable computers to process this data better for the benefit of the wider community. As Berners-Lee puts it, ``think of the web today as turning all the documents in the world into one big book, then think [Web3] as turning all the data into one big database, or one big mathematical formula. Web3 is not a separate web; it is a new form of information adding one more dimension to the diversity of the one web we have today'' \cite{2}. Simply put, Web1 was read-only. Web2 was read-write. Web3 is read-write-own \cite{4}.
\section{Foundation of Web3 and Enablement}\label{ExSurv}
Web3, as mentioned by Berners-Lee, is not going to hinge on a technological breakthrough, although there is evidence already of creative libraries that enable Web3 ideas. Web3 is an ideological shift in how the Internet of data is constructed that can be enabled with existing technology to produce benefits today. The enablement of Web3 will depend on the transition to Web3 libraries and concepts by knowledge and data creators. The foundational items that currently comprise the Web3 framework are blockchain networks (decentralized \cite{a6} but connected nodes), Web3 libraries, emerging specialized languages (Solidity), identity stores (wallets), smart contracts, and specialized service providers \cite{5}.
\subsection{Blockchains (Bitcoin, Ethereum, etc.)}
The public blockchains are the core pipelines for all transactions and interactions that occur within Web3 applications. They are the public ledgers of record, decentralized and immutable \cite{a7}: a global, permissionless database that provides the ability to track the ownership chain of any asset or piece of data on the blockchain, at any time, by anyone. While made popular by cryptocurrencies, their utility extends to any form of data. Consider Facebook or any other social network moving your profile to a blockchain where you have complete control to grant, revoke, or even sell access to your data, rather than giving it away for the utility of using the social network. In Web2, the network controlled and held all the value. The principles of Web3 dictate that the data holds all the value, not the network. Web3 is an ideological shift for the Internet, just as Web2 was for Web1.
As users interact across Web3 and its associated frameworks, there is a functional need to house the necessary transactions and interactions. The Web3 components which provide this service and enable blockchains to function are known as nodes. Without nodes, the applications cannot communicate and the decentralization of applications is voided. Nodes facilitate the tracking of data and serve as the multiple storehouses that guard against data-loss threats \cite{5} by holding additional copies of each interaction. This also makes transactions verifiable and immutable, because the nodes act as corroborating sources for other Web3 applications.
\subsection{Web3 Libraries}
The Web3 libraries provide a set of application programming interfaces to bridge blockchains and smart contracts, enabling the creation of a new class of applications, commonly known as Decentralized Applications (DApps). By their very definition, Web3 applications utilize a blockchain. Ethereum is the blockchain most commonly used by DApps, as Ethereum was created to support application development and has a governance model specifically crafted to be developer friendly. Web3.js, Next.js, Ether.js, and Truffle Suite \cite{5} are commonly used Web3 application development libraries, all of which are JavaScript based. Cloudflare hosts an open-source NFT project that illustrates this interaction and explains Web3 and blockchain \cite{6}.
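Under the hood, libraries such as web3.js communicate with an Ethereum node over the standard JSON-RPC 2.0 protocol. The sketch below only constructs such a request payload (it does not contact any node, and the address shown is a hypothetical placeholder), to make the bridge between a DApp and the blockchain concrete.

```python
import json

def eth_rpc_request(method, params, req_id=1):
    # Shape of the JSON-RPC 2.0 messages that Web3 libraries send to an
    # Ethereum node (e.g., eth_blockNumber, eth_getBalance, eth_call).
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })

# Hypothetical account address, truncated for illustration only.
payload = eth_rpc_request("eth_getBalance", ["0xAbC...", "latest"])
print(payload)
```

A library like web3.js wraps exactly this kind of message in a convenient object API, so DApp developers rarely build the payloads by hand.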
\begin{figure}
\centering
\includegraphics[width=10cm]{p2.png}
\caption{ System targets from a user-centric explainable AI framework \cite{5}}
\label{fig:life}
\end{figure}
\subsection{Identity Stores (Wallets)}
With ownership at the heart of the Web3 principles, immutable identities are a requirement. Wallets are applications that store and protect the identity used for interacting with a blockchain and that facilitate the actual transactions. The wallet is the “final authority of your data” and can be any entity \cite{6}. Transactions are tallied in the wallet, and all of a user's interactions with knowledge are tracked through it as well.
\subsection{Smart Contracts}
A smart contract \cite{SINGH2020101654} is akin to what Berners-Lee envisioned when he spoke of passing the rules along with the knowledge. Smart contracts act as a conditional mechanism that makes something happen in a fully transparent way, separate from any intermediary, to fulfill a transaction. They rely on nodes to validate the data of a wallet account and to pass in the parameters that produce the output of a smart contract condition. Essentially, smart contracts are code that encodes the logic of the transaction two parties are executing. Every interaction with a smart contract is then recorded in a ‘block’ on the ‘chain’ by the node executing the contract \cite{7}. Smart contracts would thus act as the access points for knowledge-sharing permissions, or as the carriers of a universal logic.
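The conditional mechanism can be sketched as plain deterministic code. The ticketing contract below is a toy example of our own (names and the price parameter are invented for illustration), not Solidity, but it captures the essence: if the condition holds, the state transition is applied and recorded; otherwise it is rejected, and either way the interaction lands on the ledger.

```python
def ticket_contract(ledger, buyer, price_paid, ticket_price=10):
    # A smart contract is deterministic logic: the same inputs always
    # yield the same recorded outcome on every node that executes it.
    if price_paid >= ticket_price:
        ledger.append({"event": "TicketIssued", "to": buyer})
        return True
    ledger.append({"event": "Rejected", "to": buyer})
    return False

ledger = []
assert ticket_contract(ledger, "alice", 12) is True
assert ticket_contract(ledger, "bob", 3) is False
# Both the accepted and the rejected interaction are recorded.
assert [e["event"] for e in ledger] == ["TicketIssued", "Rejected"]
```

On a real chain, every node executing this logic appends the same record to its copy of the ledger, which is what removes the need for an intermediary to arbitrate the outcome.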
These concepts have contributed to the early landscape of Web3 and are critical cogs to a functioning network of decentralized data points, knowledge sharing, and registered owners.
\subsection{Virtual Reality Systems}
Virtual reality has evolved beyond games, and people now use it for art, tourism, and industrial purposes \cite{v}. Tickets or fees must often be paid for these services, and a blockchain is an excellent solution for fully secure payments \cite{v,g}. Blockchains provide a distributed cryptographic ledger that does not require trust in any single party: no trusted authority is needed to verify an interaction \cite{a8,a9}. Depending on how users log in and on their access rights, there are different types of blockchain, including private, public, and consortium chains. Blockchain-based virtual reality platforms generally use a public blockchain to support Web3, so that anyone can easily participate. In the blockchain workflow for virtual reality, all the information the user provides is hashed, the hash is signed with the private key on the user's device, and a digital signature is generated. On a peer-to-peer (P2P) network, the transaction data is sent to peers along with the digital signature. Using the public key, network members verify the signature by comparing the recovered hash against the hash of the transaction data. Due to the widespread use of Web3 technology, concerns have been raised about information theft, hacking, unauthorized dissemination, and copying; these concerns matter to the many companies adopting virtual reality, because without such guarantees they cannot limit the supply of digital content. To support virtual reality within Web3, a blockchain is therefore an ideal mechanism for providing decentralization, security, and transparency \cite{a10,a11}. Data recorded in the virtual reality system cannot be altered, and its use can be easily traced. Utilizing blockchain technology thus increases security and improves the user experience; virtual reality security and trust can both be strengthened by the blockchain.
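The hash-sign-verify flow described above can be sketched as follows. As a deliberate simplification, this sketch uses an HMAC with a shared secret as a stand-in for the asymmetric ECDSA signatures that real blockchains use; the key name and transaction text are invented for illustration.

```python
import hashlib
import hmac

SECRET = b"user-device-key"  # stand-in for the private key on the user's device

def sign_transaction(tx: bytes) -> str:
    # 1. Hash the user's data; 2. "sign" the hash on the device
    # (HMAC here as a simplified stand-in for an ECDSA signature).
    digest = hashlib.sha256(tx).digest()
    return hmac.new(SECRET, digest, hashlib.sha256).hexdigest()

def verify_transaction(tx: bytes, signature: str) -> bool:
    # Peers recompute the hash of the transaction data and check
    # that the signature matches it.
    digest = hashlib.sha256(tx).digest()
    expected = hmac.new(SECRET, digest, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

tx = b"pay 2 tokens for VR gallery ticket"
sig = sign_transaction(tx)
assert verify_transaction(tx, sig)
# Any change to the transaction data invalidates the signature.
assert not verify_transaction(b"pay 0 tokens for VR gallery ticket", sig)
```

In a real P2P network the verification step uses the sender's public key rather than a shared secret, so any peer can check the signature without being able to forge one.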
\section{A Web3 Developer}
Web3 as a concept has created a market for developers whose skill sets can enable and manage the core functions of knowledge- and data-sharing-centric networks. With Web3 tied heavily to the current cryptocurrency infrastructure, which is aligned with its core principles, the interest stirred by the recent popularity of cryptocurrency has driven a large influx of users contributing to the most significant public GitHub repositories of the Web3 development stack, according to Richard MacManus at The New Stack \cite{8}. This population of around 18,000 developers is likely an underestimate of the active base, as there is no information on the true number of private, proprietary Web3 developers. Consensys, the maker of popular Web3 development tools and services, reported 350,000 active developers using its Infura blockchain development platform in November 2021 \cite{9}. By April 2022 – the time of the creation of this paper – this number had grown to 430,000 active developers \cite{10}. According to The New Stack, this figure is a “drop in the bucket,” and the population is likely to grow further in the coming years, as this market can expand rapidly if decentralized, interconnected applications become a wider norm for software and technology companies.
In this market, a Web3 developer does not look vastly different from other developers; the difference lies in experience with the core concepts of blockchains, digital wallets, smart contracts, Web3 service providers, and the Web3 libraries, which are mostly written in JavaScript. Familiarity with the core libraries is important for building the core functions of a Web3 app; as in other fields of software development, this relies on knowledge of some of the most popular languages, chiefly JavaScript \cite{5}. The one exception is smart contracts, which are usually written in a specialized language in order to optimize performance, such as Solidity, Rust, Yul, Vyper, or JavaScript. This area is also unrestricted, with open-source libraries available to whoever wants to use them. Web3 development involves a fundamental mindset shift in how applications and functions interact: the creation of readable scripts that other applications can tap into as portable nodes, rather than queries to a central server – very similar to querying cloud services. The ability to communicate with the system of record is essential to being part of the Web3 environment, because all interactions across it rely on an audit log with specific conditions that must be met, sometimes referred to as a ledger – the blockchain. Experience with data security is also a key trait for Web3 developers, given the infrastructure Web3 is founded on: passing data around transactions through ledgers and conditional function code known as smart contracts. This interaction differs from most centrally based systems, where logging always refers back to internal systems through various inputs rather than to a wider network of checks that are highly portable and accessible via a blockchain network.
Security on a blockchain is enforced at every entry and execution point on the chain. If an application does not meet the blockchain's security requirements, it will not be able to execute smart contracts or write to the blockchain. The security standards are set by the blockchain's governing body – often a DAO (Decentralized Autonomous Organization) – through its published bylaws.
Fundamentally, Web3 developers look very much like other software developers today but work with specialized concepts and tools specific to blockchain technologies. Demand for Web3 developers will continue to grow. Being part of a population of only around 430,000 will also enable these first-comers to make potentially ground-breaking contributions to the space and push Web3 forward. The market will continue to grow for some time, and developers looking for new challenges – like the ’do-gooding’ developers who helped push Web2 – are likely to join in the coming years. The continuing fascination with blockchain as a technology and its applications will also feed this growing market, with estimates as high as a 34.1\% CAGR (compound annual growth rate) over the next 10 years \cite{11}. With blockchain closely tied to Web3 as an enabler, this will create more developers capable of pushing the Web3 principles into areas beyond the current core of banking, cybersecurity, and cryptocurrency. Developers wishing to explore a shift in their careers can find many helpful free resources, from job boards \cite{12} to guides for learning Web3 technologies \cite{13,14}.
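To put the cited growth estimate in perspective, a quick worked example shows what a 34.1\% CAGR implies over a decade; the figure is the source's estimate, and the computation below is only an illustration of compounding.

```python
def compound_growth(cagr: float, years: int) -> float:
    # Market multiple implied by a compound annual growth rate:
    # (1 + rate) applied once per year.
    return (1 + cagr) ** years

factor = compound_growth(0.341, 10)
print(f"{factor:.1f}x")  # roughly an 18.8x market size over 10 years
```

In other words, a market compounding at 34.1\% per year would be nearly nineteen times its current size after ten years, which is why even a large current developer population can look like a "drop in the bucket".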
\section{Web3 Impact}
In order to understand Web3, we must first understand the place of the web in the network architecture, as shown in Fig.~\ref{2}. In this way, real-world applications can help us better understand web functionality.
\begin{figure}
\centering
\includegraphics[width=8cm]{w1.png}
\caption{The place of web in the network architecture }
\label{2}
\end{figure}
Web3 aims to be an evolution into an open, permissionless, decentralized environment for the Internet, one that widens the scope of knowledge and data accessibility and fundamentally shifts the economics upon adoption \cite{15}. Whether this adoption is contingent on technology or ideology, these principles will shape the overall impact on both the market and the Internet ecosystem. The core impact will be on how computers interact with other computers for the benefit of users, and on who owns the outcome. This will create smarter searches and better applications of smart technology, putting ownership of the outcomes in the hands of users and creators rather than the intermediaries of Web2.
Web3 will be structured as open-source code, not reliant on the big middlemen that enable access to content today, such as Google, Microsoft, and Facebook \cite{15} – that is, until Big Tech creates its own blockchains, makes them public, and incentivizes developers with airdrops backed by company stock to use its tools and services, making the developers owners of the company. Web3 development is transparent, allowing software developers to draw on the growing library of publicly accessible Web3 packages and libraries to build the ecosystem as soon as information becomes available, accelerating the velocity of evolution.
This availability of the infrastructure will enable users to create the Web3 ecosystem themselves. A core piece of this idea is permissionless technology, where interactions do not rely on trusted third parties to connect users, whether through a search engine or a transaction machine \cite{15}. The goal is to make this feasible through "blockchain-like" infrastructure, if not the very libraries that blockchains use today. The structure acts as its own intermediary: peer computers can cut out the middleman by using universal logic and conditions, with transferable data ledgers that track transactions and share knowledge according to rules embedded in the node through smart contracts. This helps protect against the data-theft concern inherent in Web2, whose infrastructure enables intermediaries to capture your data and use it for their own purposes, often monetizing it in ways the creators of the data might not agree with. Making the rules and conditions transferable and available without an intermediary can also increase execution speed, transparency, and equitable ownership.
The reliance on large sole sources – data warehouses that maintain knowledge – is likely to be disrupted by the knowledge principles of Web3. Adopting Web3 as the infrastructure creates more places where data is stored, which mitigates major data-loss threats, keeps data behind rules not maintained by a single guardian, and keeps data from being hidden and controlled by middlemen who disperse it not necessarily to the benefit of the user base \cite{15}. Decentralization of data storage is a core feature of Web3 that will allow users to create more content and follow a universal logic for sharing information, rather than conforming to an intermediary that acts as a gatekeeper and imposes a framework for sharing work \cite{15}. Lessening the effort required to make new content accessible can make the whole process of knowledge sharing and value creation much faster than it is today through the use of Web3 principles and technology.
\section{Web3 Risks}
With the emergence of this new Web3 ecosystem which is projected to have a large swath of influence, there are some inherent risks based on the core infrastructure of Web3 and blockchain technology as a whole \cite{16}.
\subsection{Lack of Regulation and Oversight}
Today, blockchain technology interfaces within the Web3 landscape are essentially unregulated, with a general lack of understanding by most regulators. There are no written laws or advisory boards overseeing how this ecosystem operates, which poses a risk of bad actors and bad-faith interactions. This open space of operation has regulatory boards struggling to define even the structure of the ‘entities’ within it, such as how to define a “blockchain-enabled organization” in comparison to other companies \cite{17}. Blockchains are operated by Decentralized Autonomous Organizations (DAOs) with governance models that vary and are non-standard by common business norms \cite{a12}. DAOs are organizations represented by rules encoded as a transparent computer program, controlled by the organization's members, and not influenced by a central government \cite{18}.
Transactions pose another risk purely because of the global scope of Web3 interactions, with no central point existing in any of these ecosystems. For knowledge sharing of particular materials, the jurisdiction of regulation is unclear, because content can exist in many different environments and tracking a geographic origin might be impossible. Countries, regions, and governing bodies have different regulations for more traditional knowledge sharing that would be disrupted by how Web3 and blockchain operate, absent a framework to govern this. With services provided through a Web3 environment, there are also financial implications in how they are served and to which body. It is unreasonable to believe that governing bodies will leave transactions unmonitored and untaxed for long simply because no single regional source exists. The same applies to bad-faith transactions: currently there is no clear operating system to govern behavior and ensure people behave appropriately beyond the bylaws of the DAOs. Traditional ‘scams’, insider trading, and various schemes can be adjudicated through justice systems, but those systems are not connected to, and do not cover, the framework of Web3's blockchains. This new business model of promoting peer-to-peer transactions shifts onto users risks that used to be managed by central intermediaries, which is an inherent risk of working with blockchain technology \cite{19}.
For example, a current blockchain/cryptocurrency scheme (which does not tie directly into Web3 and the semantic web but illustrates the potential risks) is the crypto pump and dump, a parallel to insider trading on big events. An organization promotes a cryptocurrency in which it holds a large stake to push the value up considerably, then cashes out, typically crashing the currency, with new entrants bearing the brunt of the losses. Because most governing bodies do not consider cryptocurrency a security, let alone a commodity or an actual currency, there is no legal precedent for acting against this practice, even though it parallels an illegal activity \cite{20}.
As Web3 is adopted and operated for a wider audience, regulatory boards should move closer to providing guidelines and direction to ensure the safety of consumers and businesses from bad actors \cite{a13}. This will also be a major pillar of credibility for what is now a gray area with few legal definitions.
\subsection{Portability of Illegal Items}
Web3, as described above, is focused on allowing knowledge to be shared without a required framework, where information can be easily queried and passed along with the rules that define it. The ecosystem exists within a connected framework where each node houses the same universal information, which makes traceability hard. Bad actors can use these properties to create material that is easily accessible to those who want to view it and hidden from everyone else through a major Web3 component, the smart contract. Illegal media could therefore exist more easily in a Web3 framework: this decentralization makes creators more inclined to release such knowledge and data, given the inherent cover provided by a decentralized and conditionally accessed piece of knowledge.
As a concept, the portability of illegal items might make other Internet content creators and providers hesitant to begin a full-on shift, depending on the services they plan to make available, especially with no plan or guideline in place to protect against practices that, in parallel mediums, are considered illegal \cite{a14}. This gap has a market in waiting, so by the time Web3 is truly ready to be embraced, companies and services could be available to help combat this issue with their own versions of secure smart contracts and ledgers. This could also combine with wider efforts by regulators to put operating guidelines in place and to better understand how this space operates.
\subsection{Accuracy of Information}
Because, in the context of Web3, content is easily transmissible and not contained to a central source, there is a risk of rapid movement of disinformation \cite{21,a13i}. The ability to send and receive information quickly via a Web3 framework makes it hard to distinguish good information from bad. Currently, content creators and providers have the opportunity to monitor and correct information because it is sourced in their own central systems. Within a Web3 framework, information can spread far faster, with no restrictions and no oversight. This issue is already prevalent in semi-decentralized applications such as WhatsApp, which has had rampant problems combating disinformation on its own encrypted messaging platform \cite{21}.
\subsection{Privacy}
With security at the forefront as a benefit of the Web3 framework, there is still a vague gap regarding data privacy and how interactions will be saved, not only in a user's wallet but also within the network \cite{20}. If an interaction is logged in a ledger and that ledger exists everywhere, what information does the wallet provide to gain entry, and will users be able to understand the implications? Personal data may then be controlled not by intermediaries but potentially by one's peers. With data flowing freely in this environment, the hurdles for data privacy appear larger, particularly when sources are not centered in regions but exist across networks. A user-focused approach to Web3 empowers users and centers the Internet on them, but it does not educate them on how their information will be processed by each unique system, and regulators will not understand this either \cite{20,a15}.
Web2 policies around data privacy do not account for this framework and will need to be updated for its new capabilities to process information and store it universally. This alone puts the full onus on service providers to store information correctly, which is not sufficient without regulatory oversight providing clear direction on what that means. There is danger in allowing companies to govern themselves: consistently doing the right thing does not always fit the business plan, as countless firm practices that have put consumers and users in harm's way attest.
\section{Future Roadmap}
Our digital world faces new technologies (e.g., Software-Defined Networking (SDN) \cite{t}, Network Functions Virtualization (NFV) \cite{a1}, Artificial Intelligence (AI) \cite{Ekramifard2020}, the Internet of Things (IoT) \cite{a2}, and Blockchain \cite{a4}) that directly and indirectly affect human life in education, business, industry, and the military. Combined, these technologies have also improved construction, energy, and resource efficiency \cite{a1,a2}. Web3 \cite{w} is one of today's hottest, most attractive, and most exciting technologies.
We anticipate that research on Web3 will move toward quantum, IoT, AI, and secure cloud-assisted areas. The reason is the existing trend toward applying these technologies: security, quantum computing, AI, IoT, and secure cloud-assisted environments are all areas of interest for academia and industry \cite{a17}. Many companies are investing in Web3 applications and building their products to support them.
\begin{figure*}
\centering
\includegraphics[width=18cm]{w4}
\caption{ Combination of Web3 with other technologies }
\label{3}
\end{figure*}
Moreover, Fig. \ref{3} summarizes the anticipated widespread involvement of AI, IoT, and quantum computing with Web3 in the future. It shows that AI can support Web3 and that Web3 can be combined with other technologies, suggesting these areas will move toward Web3 quickly.
\section{Conclusion}
Fast forward 10 years – all developers are Web3 developers of some form. The definition of "full stack" will evolve. Front-end developers will remain focused on the user experience but may have to deal with additional complexities of data coming from blockchains or sending data for smart contract execution. Back-end developers will still deal with server or cloud application-based logic, but the new complexities of blockchain and smart contract dependencies will require new skill development. Blockchain and smart contract layers will be added to the definition of "full stack", creating new areas of specialization and opportunities. With each blockchain transaction being assigned a very specific value in terms of the transaction itself and the value of the data contained in the blocks, security will become a very acute focus and a specialization unto itself. Web3 security will be focused on the security of the blockchain and the smart contracts, the points where the value is created and stored. Security practices will become ever more integrated into the code of the applications, increasing the requirement of developers to obtain new skills and demonstrate secure coding and development practices. The inherent risks posed by the current state of Web3 technologies create opportunities for growth and value creation. Likely, the jobs boom that occurred as the industry began to understand what Web2 meant in the context of Web1 will be dwarfed by the jobs boom Web3 will generate. The fundamental economic shifts in the ideology of Web3 alone will trigger a tectonic shift we have never seen before. This, coupled with the scarcity of resources the industry expects over the next decade, even without considering the impact of Web3, will provide developers and other roles in software engineering a tremendous opportunity to generate value and wealth \cite{22,a21,23}.
\bibliographystyle{IEEEtran}
\section{Introduction}
The Internet of Things (IoT) is poised to transform our lives and unleash enormous economic benefits. However, inadequate data security and trust seriously limit its adoption. Blockchain, a distributed tamper-resistant database, has native resistance to data tampering: it is practically impossible to modify data retroactively once a transaction has been recorded. We believe this tamper-resistance property can be of significant value in ensuring trust in IoT systems.
\; We aim to demonstrate the tamper-resistant capability of Blockchain in securing trust in IoT systems. An IoT testbed was constructed in our lab, and an Ethereum-based Blockchain was built on the testbed. A number of tamper-resistance experiments have been carried out and analyzed to examine the process of block validation in Blockchain on both full and light nodes. Our demonstrations reveal that Blockchain has a strong tamper-resistant capability, which can be applied to IoT to ensure trusted data collection and sharing.
With these features used within a P2P network, and precisely by design, a Blockchain-based database can therefore constitute a trust-free decentralized consensus system. Note that trust-free means the role of a conventional \engordnumber{3} party acting as an arbitral body is filled by common cryptographic theorems. This makes Blockchain a suitable candidate for data recording, storage, and identity management, especially for sensitive data [2].
The connection between Blockchain and IoT is no longer futuristic [4-9]. As research progresses, scientists have proposed Blockchain-based IoT systems.
There is no doubt that Blockchain and IoT are two hot topics in science and technology. IoT encompasses sensors, vehicles, and other moving objects: essentially any embedded electronic components equipped to communicate with the outside world, in particular over the IP protocol,
as shown in Fig. 1.
\begin{figure}[htbp]
\centering
\includegraphics[height=2.8cm]{111.pdf}
\caption{Trusted digital Supply Chain}
\label{fgr:example}
\end{figure}
\section{Core Concept of Tamper-resistant}
\subsection{Key components}
We now explain the key components of tamper-resistant.
\; First of all, we point out the difference between the so-called 51\% attack and chain reorganization. Nodes will never import a block that fails validation. A 51\% attack consists of 51\% of the mining power forking off from a block in the past and creating a new chain that eventually beats the current canonical chain in total difficulty.
\; A reorganization is done only if a block being imported on a side fork leads to a higher total difficulty on that particular fork than on the canonical fork. The blocks nonetheless still need to be valid. Note that the reorganization function is triggered based on total difficulty in Proof-of-Work [2].
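The fork-choice rule described above can be sketched in a few lines. This is a simplified model of our own (per-block difficulty values are invented): it only shows that a node adopts the fork with the highest total difficulty, which is not necessarily the longest one.

```python
def fork_choice(chains):
    # Among valid forks, a node adopts the one with the highest total
    # difficulty (the sum of per-block difficulties), not the longest.
    def total_difficulty(chain):
        return sum(block["difficulty"] for block in chain)
    return max(chains, key=total_difficulty)

canonical = [{"n": 0, "difficulty": 100}, {"n": 1, "difficulty": 100},
             {"n": 2, "difficulty": 100}]
side_fork = [{"n": 0, "difficulty": 100}, {"n": 1, "difficulty": 250}]

# The shorter fork wins here: its total difficulty (350) exceeds 300.
assert fork_choice([canonical, side_fork]) is side_fork
```

This is why a tampered block with low accumulated difficulty is eventually displaced: honest miners extend the canonical fork until its total difficulty overtakes any fraudulent side fork.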
\; Synchronization happens all the time across each node starting from an identical genesis block in a chain. Each node runs a validation function to validate each incoming block. No block will be accepted unless passing the validation.
\subsection{Validation procedures}
Throughout the validation procedures, the following conditions are evaluated:
\begin{compactitem}
\item if \emph{StateRoot} $\in$ local levelDB, throw errors;
\item if \emph{ParentBlock} $\notin$ local levelDB, throw errors;
\item if \emph{StateRoot}$|$$_{\emph{ParentBlock}}$ $\notin$ local levelDB, throw errors;
\item if \emph{Validate(Header)} where \{\emph{nonce, difficulty, mixDigest\ldots}\} $\in$ \emph{Header} not passed, throw errors;
\item if \emph{Validate(UncleHeader)} not passed, throw errors;
\item if \emph{Validate(GasUsed)} not passed $||$ \emph{Validate(bloom)} not passed, throw errors;
\item if \emph{TxHash} $\not=$ \emph{Hash(Txs)} $||$ \emph{ReceiptHash} $\not=$ \emph{Hash(Receipts)}, throw errors;
\item if \emph{StateRoot} $\not=$
\emph{StateRoot}$|$ $ \xLongleftarrow{Txs} \emph{CurrentStateRoot}$, throw errors.
\end{compactitem}
The last validation point shows that, since no transaction is sent to the normal nodes, the state root after a state change will never equal the state root in the header of an incoming block from an abnormal node.
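A minimal sketch of these validation steps is given below, on a deliberately simplified block structure of our own. A real client performs the analogous checks over RLP-encoded headers and a Merkle-Patricia state trie; here the state transition is reduced to a toy hash so the control flow of the checks is visible.

```python
import hashlib
import json

def hash_of(obj):
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def apply_txs(parent_state_root, txs):
    # Toy state transition: the new state root is derived deterministically
    # from the parent root plus the executed transactions.
    return hash_of([parent_state_root, txs])

def validate_block(block, db):
    # Mirrors the conditions listed above, in order.
    if block["state_root"] in db["state_roots"]:
        raise ValueError("block already imported")
    parent = db["blocks"].get(block["parent"])
    if parent is None:
        raise ValueError("unknown parent block")
    if parent["state_root"] not in db["state_roots"]:
        raise ValueError("parent state missing")
    if block["tx_hash"] != hash_of(block["txs"]):
        raise ValueError("transaction root mismatch")
    if block["state_root"] != apply_txs(parent["state_root"], block["txs"]):
        raise ValueError("state root mismatch")
    return True

genesis = {"parent": None, "txs": [], "tx_hash": hash_of([]), "state_root": "g0"}
db = {"blocks": {"genesis": genesis}, "state_roots": {"g0"}}
txs = [{"from": "a", "to": "b", "value": 1}]
block = {"parent": "genesis", "txs": txs, "tx_hash": hash_of(txs),
         "state_root": apply_txs("g0", txs)}
assert validate_block(block, db)

# Tampering with the transactions without recomputing the roots fails.
tampered = dict(block, txs=[{"from": "a", "to": "mallory", "value": 1}])
try:
    validate_block(tampered, db)
except ValueError as e:
    print(e)  # transaction root mismatch
```

The final check is the one the paragraph above refers to: a tampered block's claimed state root cannot match the root a normal node derives by re-executing the transactions itself.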
A simple fraud on the database does not involve any PoW computation. However, not only \emph{difficulty}, \emph{epochDataset} and \emph{mixhash}, but also the \emph{HeaderHash} are involved in calculating the target \emph{nonce}. This means an arbitrary but sufficient amount of computation must be carried out on the tampered block. Even if that computation is performed, the record will be recovered by the canonical chain, which advances at the fastest pace, unless the hacker controls 51\% of the computational power of the whole network.
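Why tampering forces the work to be redone can be shown with a toy proof-of-work loop. This sketch uses plain SHA-256 over an invented header string rather than Ethereum's Ethash (so \emph{epochDataset} and \emph{mixhash} are omitted), but it preserves the key point: the nonce search commits to the header hash.

```python
import hashlib

def mine(header: bytes, difficulty_bits: int = 16, max_tries: int = 2_000_000):
    # Search for a nonce such that sha256(header || nonce) is below the
    # target; the header content is baked into every attempt.
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_tries):
        h = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
    raise RuntimeError("no nonce found within max_tries")

def check_pow(header: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    h = hashlib.sha256(header + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(h, "big") < 2 ** (256 - difficulty_bits)

header = b"block #42 | parent=0xabc | state_root=0xdef"
nonce = mine(header)
assert check_pow(header, nonce)

# Changing any header content invalidates the previously found nonce.
tampered = b"block #42 | parent=0xabc | state_root=0xEVIL"
print(check_pow(tampered, nonce))  # almost certainly False: the work must be redone
```

Because the header hash feeds the nonce search, an attacker who edits a block must re-mine it, and then re-mine every descendant, which is exactly the cost the 51\% threshold quantifies.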
\section{Tamper-resistance Demonstrations}
\subsection{System setup}
\subsubsection*{Hardware setup (shown in Fig.1)}
\begin{compactitem}
\item two workstations as mining nodes, shown in Fig.1(a);
\item three Raspberry Pi 3 B+, shown in Fig.1(b), attached with IoT sensors as end-point nodes, which are only allowed to look up and upload data without mining.
\begin{figure}[htb]
\centering
\subfloat[]{%
\includegraphics[scale=0.17]{testbed2.pdf}} \hfill
\subfloat[]{%
\includegraphics[width=.24\textwidth]{testbed1.pdf}}
\caption{Overview of testbed}
\label{fgr:example}
\end{figure}
\end{compactitem}
\vspace{1mm}
\subsubsection*{Software setup}
\begin{compactitem}
\item Ubuntu 14.04 Trusty and Mac OS on mining nodes;
\item Raspbian on Raspberry Pi;
\item Golang Ethereum 1.5.9 for Blockchain [3];
\item Golang on hacking, tampering and logging;
\item Python on data receiving and encapsulation;
\item Javascript on Blockchain processing via web3 API [3].
\end{compactitem}
\vspace{2mm}
\subsubsection*{System Implementation}
\
The Raspberry Pi IoT devices equipped with temperature sensors measure room temperatures in the lab every 30 minutes. An Ethereum based Blockchain is built in the testbed. The IoT measurement data are encapsulated and uploaded to a pre-built contract in the Blockchain.
\subsection{Hacking scenarios and analysis}
We demonstrate several distinct scenarios in which blocks are tampered with a fake solution to PoW. Scenario analysis and experimental results are presented.
\vspace{1mm}
\noindent\emph{1) \ Non-mining node hacked:}
\
When a non-mining node is hacked and the total difficulty is smaller than that of a normal block, the canonical chain always chooses the valid block with the larger total difficulty. The tampered block will thus be treated as an uncle block, and the canonical chain will be restored by the normal blocks from other normal nodes.
\
When the total difficulty is greater than that of a normal block, other normal nodes still do not accept the incoming block, since the tampered block fails the PoW validation during synchronization. Once one of the normal mining nodes generates a future block whose total difficulty transcends that of the tampered block, the tampered block is treated as an uncle block, and the canonical chain is restored by the normal blocks from other normal nodes.
\vspace{1mm}
\noindent\emph{2) \ Mining node hacked:}
\
When a mining node is hacked and the total difficulty is smaller than that of a normal block, the results are the same as in the non-mining case. Once this node starts mining, the uncle block is broadcast at the same time for validation, causing errors to be thrown on the other normal nodes, as shown in Fig.2.
\begin{figure}[htbp]
\centering
\includegraphics[scale=0.28]{222.pdf}
\caption{Bad Block with tampered uncle blocks}
\label{fgr:badblock}
\end{figure}
\
When the total difficulty is greater than that of a normal block, the hacked node ends up with insufficient computational power to sustain its forged chain. As a result, the tampered block is not accepted by any other node, and this mining node is dropped as a bad peer by the normal nodes, as shown in Fig. 3.
\begin{figure}[htbp]
\centering
\includegraphics[height=1cm]{333.pdf}
\caption{Insufficient computation leading to incorrect nonce}
\label{fgr:nonce}
\end{figure}
\noindent\emph{3) \ Tampered Block on light node:}
\
We now investigate the scenario in which a light node is hacked, the PoW verification passes, and the height of this light node is greater than that of the mining nodes. In this scenario, no suitable peers are available for the light node, and any transactions sent from it are broadcast to null until its height is surpassed by the canonical chain, as shown in Fig. 4.
\begin{figure}[htb]
\centering
\subfloat[No suitable peer available when fetching data]{%
\includegraphics[scale=0.29]{hack_light.pdf}}
\\
\subfloat[No suitable peer available when sending transactions]{%
\includegraphics[scale=0.3]{hack_morelightsendtx.pdf}}
\caption{Hacked light node: no suitable peers available}
\label{fgr:lightnode}
\end{figure}
\section{A Practical Demonstration}
We now demonstrate the effectiveness of the tamper-resistant property of Blockchain in protecting IoT data records. In our Blockchain-secured IoT testbed, one Raspberry Pi device was hacked, and its temperature record was changed from the original measurement of 34$^\circ$C to -4$^\circ$C.
\par As soon as the tampering happened, the Blockchain noticed the anomaly as a broken chain on the hacked node, which signals a tampering action.
\par Next, when the Blockchain is synchronized, the abnormal block is automatically recovered to the majority version through the canonical chain. As a result, the tampered record of -4$^\circ$C is replaced by the original record of 34$^\circ$C. This is the chain reorganization process of the Blockchain. A log is generated to record which content was changed unexpectedly, and this log is automatically uploaded onto the Blockchain for future reference.
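The "broken chain" signal that exposes the tampering can be illustrated with a minimal hash-chain sketch. The block fields and the temperature records below are assumptions for illustration, not the testbed's actual data format: altering a stored record invalidates that block's stored hash, which is exactly where integrity checking flags the chain as broken.

```python
import hashlib
import json

def block_hash(block):
    # Hash only the committed fields, in a deterministic serialization.
    payload = {k: block[k] for k in ("index", "prev_hash", "data")}
    return hashlib.sha256(
        json.dumps(payload, sort_keys=True).encode()).hexdigest()

def build_chain(records):
    chain, prev = [], "0" * 64
    for i, data in enumerate(records):
        blk = {"index": i, "prev_hash": prev, "data": data}
        blk["hash"] = block_hash(blk)
        chain.append(blk)
        prev = blk["hash"]
    return chain

def first_broken_link(chain):
    """Index of the first block whose stored hash or back-link is invalid."""
    for i, blk in enumerate(chain):
        if blk["hash"] != block_hash(blk):
            return i
        if i > 0 and blk["prev_hash"] != chain[i - 1]["hash"]:
            return i
    return None

chain = build_chain([{"temp_c": 33}, {"temp_c": 34}, {"temp_c": 35}])
chain[1]["data"]["temp_c"] = -4        # the hacked record
assert first_broken_link(chain) == 1   # the chain "breaks" at the tampering
```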
\par This demonstrates that Blockchain can be applied to IoT to secure data records.
\section{Acknowledgment}
\begin{description}
\item The work of O. Z. was supported by International Science Education
Program (ISEP), grant No. QSU082068.
\end{description}
\section{Introduction}
Short text matching (STM) is generally regarded as a task of paraphrase identification or sentence semantic matching.
Given a pair of sentences, the goal of matching models is to predict their semantic similarity.
It is widely used in question answering systems \cite{liu2018improved} and dialogue systems~\cite{gao2019neural,yu2014cognitive}.
Recent years have seen great progress in deep learning methods for text matching \cite{mueller2016siamese,gong2017natural,chen2017enhanced,lan2018neural}.
However, almost all of these models were initially proposed for English text matching. For Chinese language tasks, early work either uses Chinese characters as model input or first segments each sentence into words and then takes these words as input tokens. Although character-based models can overcome the problem of data sparsity to some degree~\cite{li2019word}, their main drawback is that explicit word information is not fully utilized, which has been demonstrated to be useful for semantic matching \cite{li2019enhancing}.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\columnwidth]{lattice.pdf}
\caption{An example of word segmentation and the potential word ambiguity.}
\label{fig:example}
\end{figure}
However, a large number of Chinese words are polysemous, which brings great difficulties to semantic understanding~\cite{xu2016improve}.
Word polysemy is more of an issue in short text than in long text because short text usually has less contextual information, so it is extremely hard for models to capture the correct meaning. As shown in Fig. \ref{fig:example}, the word in red in sentence-1 has two meanings: one describes bragging (\emph{exaggeration}) and the other is \emph{moisture}.
Intuitively, if other words in the context have similar or related meanings, the probability of the corresponding sense increases.
To integrate semantic information of words, we introduce HowNet~\cite{dong2003hownet} as an external knowledge base. In the view of HowNet, words may have multiple senses/meanings and each sense has several sememes to represent it. For instance, the first sense \emph{exaggeration} indicates some boast information in his words. Therefore, it has sememes {\tt information} and {\tt boast}. Similarly, we can also find the sememe {\tt boast} describing the sense \emph{brag} which belongs to the word ``ChuiNiu (bragging)'' in sentence-2. In this way, model can better determine the sense of words and perceive that two sentences probably have the same meaning.
Furthermore, word-based models often encounter some potential issues caused by word segmentation. If the word segmentation fails to output ``ChuiNiu (bragging)'' in sentence-2, we will lose useful sense information. In Chinese, ``Chui (blowing)'' ``Niu (cattle)'' is a bad segmentation, which deviates the correct meaning of ``ChuiNiu (bragging)''.
To tackle this problem, many researchers propose word lattice graphs~\cite{lai2019lattice,li2020flat,chen2020neural}, where they retain words existing in the word bank so that various segmentation paths are kept. It has been shown that multi-granularity information is important for text matching.
In this paper, we propose a Linguistic knowledge Enhanced graph Transformer (LET) to consider both semantic information and multi-granularity information. LET takes a pair of word lattice graphs as input. Since keeping all possible words would introduce a lot of noise, we use several segmentation paths to form our lattice graph and construct a set of senses for each word. Based on HowNet, each sense has several sememes to represent it. In the input module, starting from the pre-trained sememe embeddings provided by OpenHowNet~\cite{qi2019openhownet}, we obtain the initial sense representations using a multi-dimensional graph attention transformer (MD-GAT, see Sec. 3.1). We also obtain the initial word representations by aggregating features from the character-level transformer encoder using an Att-Pooling (see Sec. 4.1). This is followed by SaGT layers (see Sec. 4.2), which fuse information between words and senses: in each layer, we first update the sense representations of the nodes and then update the word representations using MD-GAT. In the sentence matching layer (see Sec. 4.3), we convert word representations to the character level and share messages between the two texts. Moreover, LET can be combined with pre-trained language models, e.g. BERT~\cite{devlin2019bert}. It can be regarded as a method to integrate word and sense information into pre-trained language models during the fine-tuning phase.
Contributions in this work are summarized as: a) We propose a novel linguistic knowledge enhanced graph transformer to mitigate word ambiguity. b) Empirical study on two Chinese datasets shows that our model outperforms not only typical text matching models but also the pre-trained model BERT as well as some variants of BERT. c) We demonstrate that both semantic information and multi-granularity information are important for text matching modeling, especially on shorter texts.
\section{Related Work}
{\bf Deep Text Matching Models} based on
deep learning have been widely adopted for short text matching. They fall into two categories: representation-based methods~\cite{he2016text,lai2019lattice} and interaction-based methods~\cite{wang2017bilateral,chen2017enhanced}. Most representation-based methods are based on the Siamese architecture, which uses two symmetrical networks (e.g. LSTMs and CNNs) to extract high-level features from the two sentences. Then, these features are compared to predict text similarity. Interaction-based models incorporate interaction features between all word pairs in the two sentences. They generally perform better than representation-based methods. Our proposed method belongs to the interaction-based category.
{\bf Pre-trained Language Models}, e.g. BERT, have shown its powerful performance on various natural language processing (NLP) tasks including text matching. For Chinese text matching, BERT takes a pair of sentences as input and each Chinese character is a separated input token. It has ignored word information. To tackle this problem, some Chinese variants of original BERT have been proposed, e.g. BERT-wwm~\cite{cui2019pre} and ERNIE~\cite{sun2019ernie}. They take the word information into consideration based on the whole word masking mechanism during pre-training. However,
the pre-training process of a word-aware BERT requires substantial time and resources. Thus, our model takes a pre-trained language model as initialization and utilizes word information to fine-tune it.
\section{Background}
In this section, we introduce graph attention networks (GATs) and HowNet, which are the basis of our proposed models in the next section.
\subsection{Graph Attention Networks}
Graph neural networks (GNNs)~\cite{scarselli2008graph} are widely applied in various NLP tasks, such as text classifcation~\cite{yao2019graph}, text generation~\cite{zhao2020line}, dialogue policy optimization~\cite{ chen2018structured,chen2018policy,chen2019agentgraph,chen2020distributed} and dialogue state tracking~\cite{chen2020schema,zhu2020efficient}, etc.
GAT is a special type of GNN that operates on graph-structured data with attention mechanisms. Let $G = (\mathcal{V}, \mathcal{E})$ be a graph, where $\mathcal{V}$ and $\mathcal{E}$ are the set of nodes $x_i$ and the set of edges, respectively. $\mathcal{N}^{+}(x_i)$ denotes the set including the node $x_i$ itself and the nodes directly connected to $x_i$.
Each node $x_i$ in the graph has an initial feature vector $\mathbf{h}_i^0 \in \mathbb{R}^d$, where
$d$ is the feature dimension. The representation of each node is iteratively updated by the graph attention operation. At the $l$-th step, each node $x_i$ aggregates context information by attending over its neighbors and itself. The updated representation $\mathbf{h}^l_i$ is calculated by the weighted average of the connected nodes,
\begin{equation}
\begin{split}
\mathbf{h}_i^l
= \sigma\left(\sum_{x_j \in \mathcal{N}^{+}(x_i)} \alpha_{ij}^l \cdot \left( \mathbf{W}^l \mathbf{h}_j^{l-1}\right)\right),
\end{split}
\label{eq:selfatt}
\end{equation}
where $\mathbf{W}^l \in \mathbb{R}^{d \times d}$ is a learnable parameter, and
$\sigma(\cdot)$ is a nonlinear activation function, e.g. ReLU. The attention coefficient $\alpha_{ij}^l$ is the normalized similarity of the embedding between the two nodes $x_i$ and $x_j$ in a unified space, i.e.
\begin{equation}
\begin{split}
\alpha_{ij}^l &= \mbox{softmax}_{j} \ f^l_{sim}\left( \mathbf{h}^{l-1}_i, \ \mathbf{h}^{l-1}_j \right) \\
& = \mbox{softmax}_{j}\left(\mathbf{W}^l_{q} \mathbf{h}_i^{l-1}\right)^T\left(\mathbf{W}^l_{k} \mathbf{h}_j^{l-1}\right),
\label{eq:attention}
\end{split}
\end{equation}
where $\mathbf{W}^l_{q}$ and $\mathbf{W}^l_{k} \in \mathbb{R}^{d \times d}$ are learnable parameters for projections.
Note that, in Eq. (\ref{eq:attention}), $\alpha_{ij}^l$ is a scalar, which means that all dimensions in $\mathbf{h}_j^{l-1}$ are treated equally. This may limit the capacity to model complex dependencies.
Following \citeauthor{shen2018disan}~\shortcite{shen2018disan}, we replace the vanilla attention with multi-dimensional attention.
Instead of computing a single scalar score, for each embedding $\mathbf{h}_j^{l-1}$, it first computes a feature-wise score vector, and then normalizes
it with feature-wised multi-dimensional softmax (MD-softmax),
\begin{equation}
\begin{split}
\bm{\alpha}_{ij}^l
= &\mbox{MD-softmax}_j\left( \hat{\alpha}^l_{ij} + f^l_{m}\left(\mathbf{h}_j^{l-1}\right)\right),
\end{split}
\end{equation}
where $\hat{\alpha}_{ij}^l$ is a scalar calculated by the similarity function $f^l_{sim} ( \cdot)$ in Eq. (\ref{eq:attention}), and $f^l_{m}(\cdot)$ is a vector. The addition in the above equation means that the scalar is added to every element of the vector. $\hat{\alpha}_{ij}^l$ models the pair-wise dependency of the two nodes, while $f^l_{m}(\cdot)$ estimates the contribution of each feature dimension of $\mathbf{h}_j^{l-1}$,
\begin{equation}
f^l_{m}(\mathbf{h}_j^{l-1}) = \mathbf{W}^l_{2} \sigma \left(\mathbf{W}^l_{1} \mathbf{h}_j^{l-1} + \mathbf{b}^l_{1} \right) + \mathbf{b}^l_{2},
\end{equation}
where $\mathbf{W}_{1}^l$, $\mathbf{W}_{2}^l$, $\mathbf{b}^l_{1}$ and $\mathbf{b}^l_{2}$ are learnable parameters.
With the score vector $\bm{\alpha}_{ij}^l$, Eq. (\ref{eq:selfatt}) will be accordingly revised as
\begin{equation}
\begin{split}
\mathbf{h}_i^l
&= \sigma\left(\sum_{x_j \in \mathcal{N}^{+}(x_i)} \bm{\alpha}_{ij}^l \odot \left( \mathbf{W}^l \mathbf{h}_j^{l-1} \right) \right),
\end{split}
\label{eq:mdatt}
\end{equation}
where $\odot$ represents element-wise product of two vectors.
For brevity, we use $\mbox{MD-GAT}(\cdot)$ to denote the updating process using multi-dimensional attention mechanism, and rewrite Eq. (\ref{eq:mdatt}) as follows,
\begin{equation}
\begin{split}
\mathbf{h}_i^l = \mbox{MD-GAT}\left(\mathbf{h}_i^{l-1}, \ \left\{ \mathbf{h}^{l-1}_j | x_j \in \mathcal{N}^{+}(x_i) \right\} \right). \\
\end{split}
\label{eq:mdatt_r}
\end{equation}
After $L$ steps of updating, each node will finally have a context-aware representation $\mathbf{h}_i^L$. In order to achieve a stable training process, we also employ a residual connection followed by a layer normalization between two graph attention layers.
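One MD-GAT update step, Eqs. (\ref{eq:attention})--(\ref{eq:mdatt}), can be sketched in numpy as follows. This is only a shape-level illustration with untrained random weights and the FFN biases omitted; the names and dimensions are ours, not the authors' code.

```python
import numpy as np

# Illustrative MD-GAT step: untrained random weights, biases omitted.
rng = np.random.default_rng(0)
d, n = 4, 3                                    # feature dim, |N+(x_i)|
H = rng.standard_normal((n, d))                # h_j^{l-1}, node i included
h_i = H[0]
Wq, Wk, W = (rng.standard_normal((d, d)) for _ in range(3))
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))

pair = (Wq @ h_i) @ (Wk @ H.T)                 # pairwise scalar scores, (n,)
fm = np.maximum(H @ W1.T, 0) @ W2.T            # feature-wise scores, (n, d)
scores = pair[:, None] + fm                    # scalar added to every dim
scores -= scores.max(axis=0)                   # numerical stability
alpha = np.exp(scores) / np.exp(scores).sum(axis=0)   # MD-softmax over j
h_new = np.maximum((alpha * (H @ W.T)).sum(axis=0), 0)  # gated sum + ReLU
assert h_new.shape == (d,)
```

Note that `alpha` sums to one over the neighbours separately in each feature dimension, which is what distinguishes the multi-dimensional attention from the vanilla scalar attention of Eq. (\ref{eq:selfatt}).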
\subsection{HowNet}
\begin{figure}[h]
\centering
\includegraphics[width=0.9\columnwidth]{sagt.pdf}
\caption{An example of the HowNet structure.}
\label{fig:sense}
\end{figure}
HowNet~\cite{dong2003hownet} is an external knowledge base that manually annotates each Chinese word sense with one or more relevant sememes. The philosophy of HowNet regards sememe as an atomic semantic unit. Different from WordNet~\cite{miller1995wordnet}, it emphasizes that the parts and attributes of a concept can be well represented by sememes. HowNet has been widely utilized in many NLP tasks such as word similarity computation~\cite{liu2002word}, sentiment analysis~\cite{xianghua2013multi}, word representation learning~\cite{niu2017improved} and language modeling~\cite{gu2018language}.
An example is illustrated in Fig. \ref{fig:sense}.
The word ``Apple'' has two senses: \emph{Apple Brand} and \emph{Apple}. The sense \emph{Apple Brand} has five sememes, {\tt computer}, {\tt PatternValue}, {\tt able}, {\tt bring} and {\tt SpecificBrand}, which describe the exact meaning of the sense.
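For illustration, the word--sense--sememe hierarchy of Fig. \ref{fig:sense} can be written as a plain nested mapping. This is not the OpenHowNet API; the single {\tt fruit} sememe attached to the fruit sense is our assumption for the example.

```python
# Sense names and the sememes of "Apple Brand" follow the figure; the
# "fruit" sememe for the second sense is an assumption for illustration.
hownet_example = {
    "Apple": {
        "Apple Brand": ["computer", "PatternValue", "able",
                        "bring", "SpecificBrand"],
        "Apple": ["fruit"],
    }
}

def sememes_of(word, kb):
    """Union of the sememes attached to every sense of `word`."""
    return sorted({s for sememes in kb.get(word, {}).values()
                   for s in sememes})

assert "computer" in sememes_of("Apple", hownet_example)
```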
\section{Proposed Approach}
\label{sec:let}
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{net.pdf}
\caption{The framework of our proposed LET model.}
\label{fig:net}
\end{figure}
First, we define the Chinese short text matching task in a formal way.
Given two Chinese sentences $\mathcal{C}^{a}=\{c_1^a, c_2^a, \cdots, c_{T_a}^a\}$ and $\mathcal{C}^{b}=\{c_1^b, c_2^b, \cdots, c_{T_b}^b\}$, the goal of a text matching model $f(\mathcal{C}^{a},\mathcal{C}^{b})$ is to predict whether the semantic meaning of $\mathcal{C}^{a}$ and $\mathcal{C}^{b}$ is equal. Here, $c_t^a$ and $c_{t'}^b$ represent the $t$-th and $t'$-th Chinese character in two sentences respectively, and $T_a$ and $T_b$ denote the number of characters in the sentences.
In this paper, we propose a linguistic knowledge enhanced matching model. Instead of segmenting each sentence into a word sequence, we use three segmentation tools and keep these segmentation paths to form a word lattice graph $G=(\mathcal{V},\mathcal{E})$ (see Fig. \ref{fig:update} (a)). $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ is the set of edges.
Each node $x_i \in \mathcal{V}$ corresponds to a word $w_i$ which is a character subsequence starting from the $t_1$-th character to the $t_2$-th character in the sentence. As introduced in Sec. 1, we can obtain all senses of a word $w_i$ by retrieving the HowNet.
For two nodes $x_i \in \mathcal{V}$ and $x_j \in \mathcal{V}$,
if $x_i$ is adjacent to $x_j$ in the original sentence, then there is an edge between them.
$\mathcal{N}_{fw}^{+}(x_i)$ is the set including $x_i$ itself and all its reachable nodes in its forward direction, while $\mathcal{N}_{bw}^{+}(x_i)$ is the set including $x_i$ itself and all its reachable nodes in its backward direction.
Thus for each sample, we have two graphs $G^a=(\mathcal{V}^a,\mathcal{E}^a)$ and $G^b=(\mathcal{V}^b,\mathcal{E}^b)$, and our graph matching model is to predict their similarity.
As shown in Fig. \ref{fig:net}, LET consists of four components: input module, semantic-aware graph transformer (SaGT), sentence matching layer and relation classifier. The input module outputs the initial contextual representation for each word $w_i$ and the initial semantic representation for each sense. The semantic-aware graph transformer iteratively updates the word representation and sense representation, and fuses useful information from each other. The sentence matching layer first incorporates word representation into character level, and then matches two character sequences with the bilateral multi-perspective matching mechanism. The relation classifier takes the sentence vectors as input and predicts the relation of two sentences.
\subsection{Input Module}
\subsubsection{Contextual Word Embedding}
\label{sec:cwe}
For each node $x_i$ in graphs, the initial representation of word $w_i$ is the attentive pooling of contextual character representations.
Concretely, we first concatenate the original character-level sentences to form a new sequence $\mathcal{C} = \{[\text{CLS}], c_1^a, \cdots, c_{T_a}^a, [\text{SEP}], c_1^b, \cdots, c_{T_b}^b, [\text{SEP}]\}$, and then feed them to the BERT model to obtain the contextual representations for each character $\{\mathbf{c}^{\text{CLS}}, \mathbf{c}_1^a$, $\cdots$, $\mathbf{c}_{T_a}^a$, $\mathbf{c}^{\text{SEP}}$, $\mathbf{c}_1^b$, $\cdots$, $\mathbf{c}_{T_b}^b$, $\mathbf{c}^{\text{SEP}}\}$. Assuming that the word $w_i$ consists of some consecutive character tokens $\{c_{t_1}, c_{{t_1}+1}, \cdots, c_{t_2}\}$\footnote{For brevity, the superscript of $c_{k}\ (t_1 \leq k \leq t_2)$ is omitted.}, a feature-wised score vector is calculated with a feed forward network (FFN) with two layers for each character $c_{k} \ (t_1 \leq k \leq t_2)$, and then normalized with a feature-wised multi-dimensional softmax (MD-softmax),
\begin{equation}
\mathbf{u}_{k}=\mbox{MD-softmax}_{k}\left( \text{FFN}(\mathbf{c}_{k}) \right).
\label{eq:attpool-1}
\end{equation}
The corresponding character embedding $\mathbf{c}_{k}$ is weighted with the normalized scores $\mathbf{u}_{k}$ to obtain the contextual word embedding,
\begin{equation}
\mathbf{v}_i = \sum_{k=t_1}^{t_2} \mathbf{u}_{k} \odot \mathbf{c}_{k}.
\label{eq:attpool-2}
\end{equation}
For brevity, we use $\mbox{Att-Pooling}(\cdot)$ to rewrite Eq. (\ref{eq:attpool-1}) and Eq. (\ref{eq:attpool-2}) for short, i.e.
\begin{equation}
\mathbf{v}_i=\mbox{Att-Pooling}\left(\{\mathbf{c}_k | t_1 \leq k \leq t_2\}\right).
\end{equation}
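The Att-Pooling operation, Eqs. (\ref{eq:attpool-1})--(\ref{eq:attpool-2}), can be sketched in numpy as follows; this is a simplified illustration with a bias-free two-layer FFN and untrained random weights.

```python
import numpy as np

# Illustrative Att-Pooling over the characters of one word.
rng = np.random.default_rng(1)
d, n_chars = 4, 3
C = rng.standard_normal((n_chars, d))          # contextual char embeddings c_k
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))

scores = np.maximum(C @ W1.T, 0) @ W2.T        # FFN(c_k), shape (n_chars, d)
scores -= scores.max(axis=0)                   # numerical stability
U = np.exp(scores) / np.exp(scores).sum(axis=0)  # MD-softmax over characters
v = (U * C).sum(axis=0)                        # word embedding v_i
assert v.shape == (d,)
```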
\subsubsection{Sense Embedding}
The word embedding $\mathbf{v}_i$ described in Sec. \ref{sec:cwe} contains only contextual character information, which may suffer from the issue of polysemy in Chinese. In this paper, we incorporate HowNet as an external knowledge base to express the semantic information of words.
For each word $w_i$, we denote the set of senses as $\mathcal{S}^{(w_i)}=\{s_{i,1}, s_{i,2}, \cdots, s_{i,K}\}$. $s_{i,k}$ is the $k$-th sense of $w_i$ and we denote its corresponding sememes as $\mathcal{O}^{(s_{i,k})}=\{o_{i,k}^1, o_{i,k}^2, \cdots, o_{i,k}^M\}$.
In order to get the embedding $\mathbf{s}_{i,k}$ for each sense $s_{i,k}$, we first obtain the representation $\mathbf{o}_{i,k}^m$ for each sememe $o_{i,k}^m$ with multi-dimensional attention function,
\begin{equation}
\begin{split}
\mathbf{o}^{m}_{i,k} &= \mbox{MD-GAT}\left(\mathbf{e}_{i,k}^{m}, \ \left\{\mathbf{e}_{i,k}^{m'} | o_{i,k}^{m'} \in \mathcal{O}^{(s_{i,k})} \right\} \right), \\
\end{split}
\end{equation}
where $\mathbf{e}_{i,k}^{m}$ is the embedding vector for sememe $o_{i,k}^m$ produced through
the Sememe Attention over Target model (SAT) \cite{niu2017improved}. Then, for each sense $s_{i,k}$, its embedding $\mathbf{s}_{i,k}$ is obtained with attentive pooling of all sememe representations,
\begin{equation}
\mathbf{s}_{i,k} = \mbox{Att-Pooling}\left(\left\{ \mathbf{o}^{m}_{i,k} | o_{i,k}^{m} \in \mathcal{O}^{(s_{i,k})} \right\} \right).
\end{equation}
\subsection{Semantic-aware Graph Transformer}
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{update.pdf}
\caption{(a) is an example of lattice graph. (b) shows the process of sense updating. $\text{fw}_2$ and $\text{bw}_2$ refer to the words in forward and backward directions of $\text{w}_2$ respectively. $\text{uw}_2$ is the words that $\text{w}_2$ cannot reach. (c) is word updating; we will not update the corresponding word representation if the word is not in HowNet.}
\label{fig:update}
\end{figure}
For each node $x_i$ in the graph, the word embedding $\mathbf{v}_i$ only contains the contextual information while the sense embedding $\mathbf{s}_{i,k}$ only contains linguistic knowledge. In order to harvest useful information from each other, we propose a semantic-aware graph transformer (SaGT). It first takes $\mathbf{v}_i$ and $\mathbf{s}_{i,k}$ as initial word representation $\mathbf{h}^0_i$ for word $w_i$ and initial sense representation $\mathbf{g}^0_{i,k}$ for sense $s_{i,k}$ respectively, and then iteratively updates them with two sub-steps.
\subsubsection{Updating Sense Representation} At $l$-th iteration, the first sub-step is to update sense representation from $\mathbf{g}^{l-1}_{i,k}$ to $\mathbf{g}^l_{i,k}$. For a word with multiple senses, which sense should be used is usually determined by the context in the sentence. Therefore, when updating the representation, each sense will first aggregate useful information from words in forward and backward directions of $x_i$,
\begin{equation}
\begin{split}
\mathbf{m}^{l,fw}_{i,k} &= \mbox{MD-GAT}\left(\mathbf{g}_{i,k}^{l-1}, \ \left\{\mathbf{h}_j^{l-1} | x_j \in \mathcal{N}_{fw}^{+}(x_i) \right\} \right), \\
\mathbf{m}^{l,bw}_{i,k} &= \mbox{MD-GAT}\left(\mathbf{g}_{i,k}^{l-1}, \ \left\{\mathbf{h}_j^{l-1} | x_j \in \mathcal{N}_{bw}^{+}(x_i) \right\} \right), \\
\end{split}
\end{equation}
where the two multi-dimensional attention functions $\mbox{MD-GAT}(\cdot)$ have different parameters. Based on $\mathbf{m}^l_{i,k}=[\mathbf{m}^{l,fw}_{i,k}, \mathbf{m}^{l,bw}_{i,k}]$~\footnote{$[\cdot,\cdot]$ denotes the concatenation of vectors.}, each sense updates its representation with a gated recurrent unit (GRU) \cite{cho2014learning},
\begin{equation}
\mathbf{g}^l_{i,k} = \mbox{GRU}\left( \mathbf{g}_{i,k}^{l-1}, \ \mathbf{m}^l_{i,k}\right).
\label{eq:gru_sense}
\end{equation}
It is notable that we don't directly use $\mathbf{m}_{i,k}^l$ as the new representation $\mathbf{g}_{i,k}^{l}$ of sense $s_{i,k}$.
The reason is that $\mathbf{m}_{i,k}^l$ only contains contextual information, and we need to utilize a gate, e.g. GRU, to control the fusion of contextual information and semantic information.
\subsubsection{Updating Word Representation} The second sub-step is to update the word representation from $\mathbf{h}^{l-1}_{i}$ to $\mathbf{h}^{l}_{i}$ based on the updated sense representations $\mathbf{g}_{i,k}^l \ (1\leq k \leq K)$. The word $w_i$ first obtains semantic information from its sense representations with the multi-dimensional attention,
\begin{equation}
\begin{split}
\mathbf{q}^{l}_{i} = \mbox{MD-GAT}\left(\mathbf{h}_{i}^{l-1}, \ \left\{\mathbf{g}^{l}_{i,k} | s_{i,k} \in \mathcal{S}^{(w_i)} \right\} \right),
\end{split}
\end{equation}
and then updates its representation with a GRU:
\begin{equation}
\mathbf{h}^l_{i} = \mbox{GRU}\left( \mathbf{h}_{i}^{l-1}, \ \mathbf{q}^l_{i}\right).
\end{equation}
The above GRU function and the GRU function in Eq. (\ref{eq:gru_sense}) have different parameters.
After multiple iterations, the final word representation $\mathbf{h}_{i}^L$ contains not only contextual word information but also semantic knowledge. For each sentence, we use $\mathbf{h}_{i}^a$ and $\mathbf{h}_{i}^b$ to denote the final word representation respectively.
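One SaGT update for a single sense can be sketched as follows. This is only a shape-level illustration under stated simplifications: the two direction-specific MD-GATs are replaced by mean pooling, and the GRU cell is a minimal hand-rolled version with untrained random weights.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4

def gru_cell(h, m, Wz, Wr, Wh):
    """Minimal GRU: gates control how much of message m enters state h."""
    z = 1.0 / (1.0 + np.exp(-(Wz @ np.concatenate([h, m]))))
    r = 1.0 / (1.0 + np.exp(-(Wr @ np.concatenate([h, m]))))
    h_tilde = np.tanh(Wh @ np.concatenate([r * h, m]))
    return (1.0 - z) * h + z * h_tilde

# Message m concatenates forward and backward context, so gates map 3d -> d.
Wz, Wr, Wh = (rng.standard_normal((d, 3 * d)) for _ in range(3))
g = rng.standard_normal(d)                # sense state g_{i,k}^{l-1}
h_fw = rng.standard_normal((2, d))        # word states reachable forward
h_bw = rng.standard_normal((2, d))        # word states reachable backward

# Stand-in for the two direction-specific MD-GATs: mean-pool each direction.
m = np.concatenate([h_fw.mean(axis=0), h_bw.mean(axis=0)])   # shape (2d,)
g_new = gru_cell(g, m, Wz, Wr, Wh)        # updated sense representation
# The word update is analogous: attend over the word's senses, then a
# second GRU (with its own parameters) updates the word state.
assert g_new.shape == (d,)
```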
\subsection{Sentence Matching Layer}
After obtaining the semantic knowledge enhanced word representation $\mathbf{h}_{i}^{a}$ and $\mathbf{h}_{i}^{b}$ for each sentence, we incorporate this word information into characters. Without loss of generality, we will use characters in sentence $\mathcal{C}^a$ to introduce the process. For each character $c_t^a$, we obtain $\mathbf{\hat{c}}^a_t$ by pooling the useful word information,
\begin{equation}
\mathbf{\hat{c}}^a_t = \mbox{Att-Pooling}\left(\left\{ \mathbf{h}^{a}_{i} | w_i^a \in \mathcal{W}^{(c_t^a)} \right\} \right),
\end{equation}
where $\mathcal{W}^{(c_t^a)}$ is a set including words which contain the character $c_t^a$. The semantic knowledge enhanced character representation $\mathbf{y}_t$ is therefore obtained by
\begin{equation}
\mathbf{y}_t^a = \mbox{LayerNorm}\left( \mathbf{c}_t^a + \mathbf{\hat{c}}^a_t \right),
\end{equation}
where $\mbox{LayerNorm}(\cdot)$ denotes layer normalization, and $\mathbf{c}_t^a$ is the contextual character representation obtained using BERT described in Sec. \ref{sec:cwe}.
For each character $c_t^a$, it aggregates information from sentence $\mathcal{C}^a$ and $\mathcal{C}^b$ respectively using multi-dimensional attention,
\begin{equation}
\begin{split}
\mathbf{m}^{self}_{t} &= \mbox{MD-GAT} \left(\mathbf{y}_{t}^a, \ \{\mathbf{y}_{t'}^{a} | c_{t'}^a \in \mathcal{C}^a \} \right), \\
\mathbf{m}^{cross}_{t} &= \mbox{MD-GAT} \ (\mathbf{y}_{t}^a, \ \{\mathbf{y}_{t'}^{b} | c_{t'}^b \in \mathcal{C}^b \} ) \ . \\
\end{split}
\end{equation}
The above multi-dimensional attention functions $\mbox{MD-GAT}(\cdot)$ share the same parameters. With this sharing mechanism, the model has a nice property that, when the two sentences are perfectly matched, we have $\mathbf{m}_t^{self} \approx \mathbf{m}_t^{cross}$.
We utilize the multi-perspective cosine distance \cite{wang2017bilateral} to compare $\mathbf{m}_t^{self}$ and $\mathbf{m}_t^{cross}$,
\begin{equation}
d_k = \text{cosine}\left( \mathbf{w}_k^{cos} \odot \mathbf{m}_t^{self}, \mathbf{w}_k^{cos} \odot \mathbf{m}_t^{cross} \right),
\end{equation}
where $k \in \{1,2,\cdots,P\}$ ($P$ is the number of perspectives). $\mathbf{w}_k^{cos}$ is a parameter vector, which assigns different weights to different dimensions of the messages. With the $P$ distances $d_1, d_2, \cdots, d_P$,
we can obtain the final character representation,
\begin{equation}
\mathbf{\hat{y}}_t^a = \text{FFN} \left(\left[\mathbf{m}_t^{self}, \mathbf{d}_t \right] \right),
\label{eq:ue}
\end{equation}
where $\mathbf{d}_t \triangleq [d_1, d_2, \cdots, d_P]$, and $\mbox{FFN}(\cdot) $ is a feed forward network with two layers.
Similarly, we can obtain the final character representation $\mathbf{\hat{y}}_t^b$ for each character $c_t^b$ in sentence $\mathcal{C}^b$.
Note that the final character representation contains three kinds of information: contextual information, word and sense knowledge, and character-level similarity. For each sentence $\mathcal{C}^a$ or $\mathcal{C}^b$,
the sentence representation vector $\mathbf{r}^a$ or $\mathbf{r}^b$ is obtained using the attentive pooling of all final character representations for the sentence.
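The multi-perspective cosine distance can be sketched in numpy as below. As a sanity check, identical messages yield a similarity of 1 under every perspective, consistent with the property $\mathbf{m}_t^{self} \approx \mathbf{m}_t^{cross}$ for perfectly matched sentences; the weights are random stand-ins, not trained parameters.

```python
import numpy as np

def multi_perspective_cosine(m_self, m_cross, W_cos):
    """W_cos: (P, d) perspective weight vectors; returns (P,) similarities."""
    a = W_cos * m_self          # broadcast each perspective weight, (P, d)
    b = W_cos * m_cross
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return num / den

rng = np.random.default_rng(3)
d, P = 4, 5
m = rng.standard_normal(d)
W = rng.standard_normal((P, d))
dist = multi_perspective_cosine(m, m, W)
assert dist.shape == (P,) and np.allclose(dist, 1.0)
```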
\subsection{Relation Classifier}
With two sentence vectors $\mathbf{r}^a$, $\mathbf{r}^b$, and the vector $\mathbf{c}^{\text{CLS}}$ obtained with BERT, our model will predict the similarity of two sentences,
\begin{equation}
p = \text{FFN} \left(\left[\mathbf{c}^{\text{CLS}}, \mathbf{r}^a, \mathbf{r}^b, \mathbf{r}^a \odot \mathbf{r}^b , |\mathbf{r}^a - \mathbf{r}^b| \right] \right),
\end{equation}
where $\mbox{FFN}(\cdot)$ is a feed forward network with two hidden layers and a sigmoid activation after output layer.
With $N$ training samples $\{\mathcal{C}^{a}_i, \mathcal{C}^{b}_i, y_i \}_{i=1}^N$, the training object is to minimize the binary cross-entropy loss,
\begin{equation}
\mathcal{L} = - \sum_{i=1}^N \left( y_i \text{log}\left(p_i\right) + \left( 1 - y_i\right) \text{log}\left(1- p_i\right) \right),
\end{equation}
where $y_i \in \{0,1\}$ is the label of the $i$-th training sample and $p_i \in[0,1]$ is the prediction of our model taking the sentence pair $\{\mathcal{C}^{a}_i, \mathcal{C}_i^{b}\}$ as input.
\section{Experiments}
\subsection{Experimental Setup}
\begin{table*}
\renewcommand\arraystretch{1.1}
\centering{
\begin{tabular}{lcccccc}
\specialrule{0.1em}{1pt}{1pt}
\multirow{2}{*}{\textbf{Models}} & \multirow{2}{*}{\textbf{Pre-Training}} & \multirow{2}{*}{\textbf{Interaction}} & \multicolumn{2}{c}{\textbf{BQ}} & \multicolumn{2}{c}{\textbf{LCQMC}} \\
\cline{4-7}
& & & \textbf{ACC.} & \textbf{F1} & \textbf{ACC.} & \textbf{F1} \\
\specialrule{0.1em}{1pt}{1pt}
Text-CNN\cite{he2016text} & $\times$ & $\times$ & 68.52 & 69.17 & 72.80 & 75.70 \\
BiLSTM\cite{mueller2016siamese} & $\times$ & $\times$ & 73.51 & 72.68 & 76.10 & 78.90 \\
Lattice-CNN \cite{lai2019lattice} & $\times$ & $\times$ & 78.20 & 78.30 & 82.14 & 82.41 \\
BiMPM \cite{wang2017bilateral} & $\times$ & $\surd$ & 81.85 & 81.73 & 83.30 & 84.90 \\
ESIM \cite{chen2017enhanced} & $\times$ & $\surd$ & 81.93 & 81.87 & 82.58 & 84.49 \\
\textbf{LET} (Ours) & $\times$ & $\surd$ & \textbf{83.22} & \textbf{83.03} & \textbf{84.81} & \textbf{86.08} \\
\specialrule{0.1em}{1pt}{1pt}
BERT-wwm \cite{cui2019pre} & $\surd$ & $\surd$ & 84.89 & 84.29 & 86.80 & 87.78 \\
BERT-wwm-ext \cite{cui2019pre} & $\surd$ & $\surd$ & 84.71 & 83.94 & 86.68 & 87.71 \\
ERNIE \cite{sun2019ernie} & $\surd$ & $\surd$ & 84.67 & 84.20 & 87.04 & 88.06 \\\specialrule{0.0em}{0pt}{0pt}
BERT\cite{devlin2019bert} & $\surd$ & $\surd$ & 84.50 & 84.00 & 85.73 & 86.86 \\
\textbf{LET-BERT} (Ours) & $\surd$ & $\surd$ & \textbf{85.30} & \textbf{84.98} & \textbf{88.38} & \textbf{88.85} \\
\specialrule{0.1em}{1pt}{1pt}
\end{tabular}
\caption{Performance of various models on LCQMC and BQ test datasets. The results are average scores using 5 different seeds. All the improvements over baselines are statistically significant ($p < 0.05$).}
\label{tab:main-res}
}
\end{table*}
\subsubsection{Dataset} We conduct experiments on two Chinese short text matching datasets: LCQMC \cite{liu2018lcqmc} and BQ \cite{chen2018bq}.
LCQMC is a large-scale open-domain question matching corpus. It consists of 260068 Chinese sentence pairs, including 238766 training samples, 8802 development samples and 12500 test samples. Each pair is associated with a binary label indicating whether the two sentences have the same meaning or share the same intention. There are 30\% more positive samples than negative samples.
BQ is a domain-specific large-scale corpus for bank question matching. It consists of 120000 Chinese sentence pairs, including 100000 training samples, 10000 development samples and 10000 test samples. Each pair is also associated with a binary label indicating whether the two sentences have the same meaning. The numbers of positive and negative samples are the same.
\subsubsection{Evaluation metrics}
For each dataset, the accuracy (ACC.) and F1 score are used as the evaluation metrics. ACC. is the percentage of correctly classified examples. F1 score of matching is the harmonic mean of the precision and recall.
\subsubsection{Hyper-parameters}
The input word lattice graphs are produced by the combination of three segmentation tools: jieba~\cite{sun2012jieba}, pkuseg~\cite{pkuseg} and thulac~\cite{li2009punctuation}. We use the pre-trained sememe embedding provided by OpenHowNet~\cite{qi2019openhownet} with 200 dimensions. The number of graph updating steps/layers $L$ is 2 on both datasets, and the number of perspectives $P$ is 20. The dimensions of both word and sense representation are 128. The hidden size is also 128. The dropout rate for all hidden layers is 0.2.
The model is trained by RMSProp with an initial learning rate of 0.0005 and a warmup rate of 0.1. The learning rate of BERT layer is multiplied by an additional factor of 0.1. As for batch size, we use 32 for LCQMC and 64 for BQ. \footnote{Our code is available at \url{https://github.com/lbe0613/LET}.}
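The combination of the three segmenter outputs into one word lattice, mentioned above, can be sketched as follows. This is a hedged illustration of one plausible merging rule (a node per distinct word span, forward edges between adjacent spans); the released LET code may differ:

```python
# Merge several segmentations of the same sentence into a word lattice.
# A node is a (start, end, word) character span; spans shared by several
# segmenters appear only once.
def build_lattice(sentence, segmentations):
    nodes = set()
    for seg in segmentations:
        assert "".join(seg) == sentence, "each segmentation must cover the sentence"
        pos = 0
        for word in seg:
            nodes.add((pos, pos + len(word), word))
            pos += len(word)
    # Forward edge from u to v whenever v starts where u ends.
    edges = [(u, v) for u in nodes for v in nodes if u[1] == v[0]]
    return sorted(nodes), edges
```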
\subsection{Main Results}
We compare our models with three types of baselines: representation-based models, interaction-based models and BERT-based models. The results are summarized in Table \ref{tab:main-res}. All experiments in Table \ref{tab:main-res} and Table \ref{tab:ablation} are run five times with different seeds, and we report the \textbf{average} scores to ensure the reliability of the results. We ran the baselines ourselves using the parameters given in \citet{cui2019pre}.
\textbf{Representation-based models} include three baselines: Text-CNN, BiLSTM and Lattice-CNN. Text-CNN~\cite{he2016text} is a Siamese architecture with Convolutional Neural Networks (CNNs) used to encode each sentence. BiLSTM~\cite{mueller2016siamese} is another Siamese architecture, with a Bi-directional Long Short-Term Memory (BiLSTM) network used to encode each sentence. Lattice-CNN~\cite{lai2019lattice} was also proposed to deal with the potential issues of Chinese word segmentation. It takes a word lattice as input and uses pooling mechanisms to merge the feature vectors produced by multiple CNN kernels over different $n$-gram contexts of each node in the lattice graph.
\textbf{Interaction-based models} include two baselines: BiMPM and ESIM. BiMPM~\cite{wang2017bilateral} is a bilateral multi-perspective matching model. It encodes each sentence with BiLSTM, and matches two sentences from multi-perspectives. BiMPM performs very well on some natural language inference (NLI) tasks. There are two BiLSTMs in ESIM~\cite{chen2017enhanced}. The first one is to encode sentences, and the other is to fuse the word alignment information between two sentences. ESIM achieves state-of-the-art results on various matching tasks.
In order to be comparable with the above models, we also employ a model where BERT in Fig. \ref{fig:net} is replaced by a traditional character-level transformer encoder, which is denoted as LET.
The results of the above models are shown in the first part of Table \ref{tab:main-res}. Our model LET outperforms all baselines on both datasets. In particular, LET performs better than Lattice-CNN: although both utilize word lattices, Lattice-CNN only captures local information, while our model can exploit global information. Besides, our model incorporates semantic messages between sentences, which significantly improves performance.
As for the interaction-based models, LET outperforms both BiMPM and ESIM even though all of them use a multi-perspective matching mechanism. This shows that combining the word lattice with our graph neural network is powerful.
\textbf{BERT-based models} include four baselines: BERT, BERT-wwm, BERT-wwm-ext and ERNIE. We compare them with our presented model LET-BERT. BERT is the official Chinese BERT model released by Google. BERT-wwm is a Chinese BERT with whole word masking mechanism used during pre-training. BERT-wwm-ext is a variant of BERT-wwm with more training data and training steps. ERNIE is designed to learn language representation enhanced by knowledge masking strategies, which include entity-level masking and phrase-level masking. LET-BERT is our proposed LET model where BERT is used as a character level encoder.
The results are shown in the second part of Table~\ref{tab:main-res}. We find that the three variants of BERT (BERT-wwm, BERT-wwm-ext, ERNIE) all surpass the original BERT, which suggests that using word-level information during pre-training is important for Chinese matching tasks. Our model LET-BERT performs better than all these BERT-based models. Compared with the baseline BERT, which has the same initialization parameters, the ACC. of LET-BERT on BQ and LCQMC increases by 0.8\% and 2.65\%, respectively. This shows that utilizing sense information during the fine-tuning phase with LET is an effective way to boost the performance of BERT for Chinese semantic matching.
We also compare results with K-BERT~\cite{liu2020k}, which regards information in HowNet as triples \{word, contain, sememes\} to enhance BERT, introducing soft position and visible matrix during the fine-tuning and inferring phases. The reported ACC. for the LCQMC test set of K-BERT is 86.9\%. Our LET-BERT is 1.48\% better than that. Different from K-BERT, we focus on fusing useful information between word and sense.
\subsection{Analysis}
\begin{table}[t]
\centering
\setlength{\tabcolsep}{5mm}{\begin{tabular}{lccc}
\specialrule{0.1em}{1pt}{1pt}
\textbf{Seg.} & \textbf{Sense} & \textbf{ACC.} & \textbf{F1} \\
\specialrule{0.1em}{1pt}{1pt}
jieba & $\surd$ & 87.84 & 88.47 \\
pkuseg & $\surd$ & 87.72 & 88.40 \\
thulac & $\surd$ & 87.50 & 88.27 \\
lattice & $\surd$ & \textbf{88.38} & \textbf{88.85} \\
lattice & $\times$ & 87.68 & 88.40 \\
\specialrule{0.1em}{1pt}{1pt}
\end{tabular}
\caption{Performance of using different segmentation on LCQMC test dataset.}
\label{tab:ablation}}
\end{table}
In our view, both multi-granularity information and semantic information are important for LET. If the segmentation does not contain the correct words, the semantic information cannot exert its full advantage.
Firstly, to explore the impact of different segmentation inputs, we carry out experiments with LET-BERT on the LCQMC test set. As shown in Table \ref{tab:ablation}, when sense information is incorporated, the lattice-based model (fourth row) outperforms the word-based models using jieba, pkuseg or thulac. The improvements of lattice with sense over the other models in Table \ref{tab:ablation} are all statistically significant ($p < 0.05$). A likely reason is that the lattice-based model reduces the effect of word segmentation errors, making predictions more accurate.
Secondly, we design an experiment to demonstrate the effectiveness of incorporating HowNet to express the semantic information of words. In the comparison model without HowNet knowledge, the sense updating module in SaGT is removed, and the word representations are updated only by multi-dimensional self-attention. The last two rows in Table \ref{tab:ablation} list the results of the combined segmentation (lattice) with and without sense information. Integrating sense information performs better than using word representations alone. More specifically, the average absolute improvements in ACC. and F1 are 0.7\% and 0.45\%, respectively, which indicates that LET is able to exploit semantic information from HowNet to improve performance.
Besides, compared with using a single word segmentation tool, sense information helps more in the lattice-based model. The probable reason is that the lattice-based model incorporates more candidate words and can therefore perceive more meanings.
We also study the role of the GRU in SaGT. Removing the GRU from the lattice-based model lowers the ACC. to 87.82\% on average, demonstrating that the GRU can control historical messages and combine them with current information. We also find that the model with 2 layers of SaGT performs best, indicating that repeated information fusion refines the messages and makes the model more robust.
\subsubsection{Influences of text length on performance}
As listed in Table \ref{tab:length}, text length has a great impact on matching performance. The results show that the shorter the text, the larger the improvement from utilizing sense information. On the one hand, concise texts usually carry little contextual information, which makes them difficult for the model to understand; HowNet brings useful external information to these weak-context short texts, so the similarity between them becomes easier to perceive, yielding a large improvement.
On the other hand, longer texts may contain more wrong words caused by segmentation errors, leading to incorrect sense information. Too much incorrect sense information may confuse the model and prevent it from recovering the original semantics.
\begin{table}[t]
\centering
\resizebox{\linewidth}{!}{
\begin{tabular}{ccccc}
\specialrule{0.1em}{1pt}{1pt}
\multirow{2}{*}{\textbf{text length}} & \multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}c@{}}number of \\ samples\end{tabular}}} & \multicolumn{2}{c}{\textbf{ACC.}}&\multirow{2}{*}{\textbf{RER(\%)}} \\ \cline{3-4}
& & w/o sense & sense & \\ \specialrule{0.1em}{1pt}{1pt}
$<16$ & 2793 & 88.99 & 90.05 &\textbf{9.63} \\\specialrule{0em}{1pt}{1pt}
$16-18$ & 3035 & 88.49 & 89.25 &6.60 \\\specialrule{0em}{1pt}{1pt}
$19-22$ & 3667 & 88.58 & 89.04 &4.03 \\\specialrule{0em}{1pt}{1pt}
$>22$ & 3005 & 84.53 & 85.13 &3.88\\\specialrule{0.1em}{1pt}{1pt}
\end{tabular}
}
\caption{Influences of text length on LCQMC test dataset. Relative error reduction (RER) is calculated by $\frac{\text{sense} - \text{w/o sense}}{100 - \text{w/o sense}} \times 100\%$.}
\label{tab:length}
\end{table}
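The relative error reduction defined in the caption of Table~\ref{tab:length} can be checked directly from the tabulated accuracies (a verification sketch, not the paper's code):

```python
# RER = (sense - w/o sense) / (100 - w/o sense) * 100, per the table caption.
def rer(acc_wo, acc_w):
    return (acc_w - acc_wo) / (100 - acc_wo) * 100

# (w/o sense, sense) ACC. pairs for the four text-length bins.
rows = [(88.99, 90.05), (88.49, 89.25), (88.58, 89.04), (84.53, 85.13)]
print([round(rer(a, b), 2) for a, b in rows])  # -> [9.63, 6.6, 4.03, 3.88]
```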
\subsubsection{Case study}
We compare LET-BERT with and without sense information (see Fig.~\ref{fig:case}). The model without sense information fails to judge the relationship between two sentences that actually have the same intention, while the full LET-BERT succeeds. Both sentences contain the word ``yuba'', which has only one sense, described by the sememe {\tt food}. Also, the sense of ``cook'' has a similar sememe, {\tt edible}, narrowing the distance between the texts. Moreover, the third sense of ``fry'' shares the sememe {\tt cook} with the word ``cook'', which provides a strong signal that makes ``fry'' attend more to its third sense.
\begin{figure}[h]
\centering
\includegraphics[width=1\columnwidth]{case.pdf}
\caption{An example of using sense information to get the correct answer.}
\label{fig:case}
\end{figure}
\section{Conclusion}
In this work, we proposed a novel linguistic knowledge enhanced graph transformer for Chinese short text matching. Our model takes two word lattice graphs as input and integrates sense information from HowNet to moderate word ambiguity. The proposed method is evaluated on two Chinese benchmark datasets and obtains the best performance. The ablation studies also demonstrate that both semantic information and multi-granularity information are important for text matching modeling.
\section{Acknowledgments}
We thank the anonymous reviewers for their thoughtful comments. This work has been supported by No. SKLMCPTS2020003 Project.
\section{Introduction}
Bright squeezed vacuum (BSV) is a state of light emerging from the output of a high-gain unseeded parametric amplifier (OPA). Due to its nonclassical properties such as photon-number entanglement and quadrature squeezing, this state is useful for various quantum-information applications, among them quantum metrology~\cite{metrology}, quantum imaging~\cite{q-imaging}, and quantum lithography~\cite{lithography}. Besides containing a high number of photons in each mode, the state is essentially multimode, both in the frequency and in the angle. These features provide its large information capacity, as quantum information can be encoded in the number of photons in different modes. At the same time, the presence of a large number of modes can be a disadvantage in certain experiments, for instance achieving phase super-sensitivity~\cite{super} or (related) gravitational-wave detection~\cite{gravit}. A possible way to reduce the number of modes without losing nonclassical correlations is to use a nonlinear interferometer, in which only part of the spectrum is amplified. This has been already demonstrated for the angular spectrum in Ref.~\cite{separation}. The goal of this work is to show similar behavior in the frequency domain.
The paper is organized as follows. In the next two subsections, we briefly describe the mode structure of BSV (subsection~\ref{S1.1}) and the idea and operation of a nonlinear interferometer (subsection~\ref{S1.2}). Section~\ref{S2} explains the idea of reducing the number of modes in BSV in space/angle and time/frequency and demonstrates the narrowing of the BSV angular spectrum in a nonlinear interferometer with spatially separated crystals. The experiment on the narrowing of the BSV frequency spectrum is described in Section~\ref{S3}. Section~\ref{S4} contains the conclusions.
\subsection{BSV and its eigenmodes}\label{S1.1}
The most convenient way of generating BSV is high-gain parametric down-conversion in a nonlinear crystal, which can be considered as an unseeded traveling-wave OPA. The frequency-angular spectrum and photon-number correlations are well described by the Bloch-Messiah formalism~\cite{Wasilewski,Silberhorn,Bloch}, in which the Hamiltonian is diagonalized by passing to the eigenmodes of the OPA. For instance, in the case of spatially multimode PDC, the Hamiltonian can be written as~\cite{Bloch}
\begin{equation}
H=i\hbar\Gamma\iint d \mathbf{q}_s d \mathbf{q}_i F(\mathbf{q}_s,\mathbf{q}_i)a^\dagger_{\mathbf{q}_s}a^\dagger_{\mathbf{q}_i}+h.c.,
\label{eq:Ham}
\end{equation}
where $\Gamma$ characterizes the coupling strength, $\mathbf{q}_{s,i}$ are the transverse wavevectors of the signal and idler radiation, and $a^\dagger_{\mathbf{q}_{s,i}}$ are the photon creation operators in the corresponding plane-wave modes. The central part of the Hamiltonian is the two-photon amplitude (TPA), $F(\mathbf{q}_s,\mathbf{q}_i)$, whose meaning is the probability amplitude of a photon pair created with the wavevectors $\mathbf{q}_s,\mathbf{q}_i$. The Hamiltonian (\ref{eq:Ham}) is diagonalized by representing the TPA as a Schmidt decomposition,
\begin{equation}
F(\mathbf{q}_s,\mathbf{q}_i)=\sum_k \sqrt{\lambda_k}u_k(\mathbf{q}_s)v_k(\mathbf{q}_i),
\label{eq:TPA Schmidt}
\end{equation}
where $\lambda_k$ are the Schmidt eigenvalues, $u_k(\mathbf{q}_s),\,v_k(\mathbf{q}_i)$ the Schmidt modes, and $k$ is a two-dimensional index. By definition, the modes are ordered so that $\lambda_{k+1}\le\lambda_k$. The Hamiltonian (\ref{eq:Ham}) can be now written as a sum of two-mode Hamiltonians,
\begin{equation}
H=\sum_k \sqrt{\lambda_k}H_k,\,\,H_k=i\hbar\Gamma A^\dagger_kB^\dagger_k+h.c.,
\label{eq:BM_dec}
\end{equation}
with the photon creation operators $A^\dagger_k,\,B^\dagger_k$ relating to the Schmidt modes. Moreover, if the signal and idler beams are indistinguishable, their Schmidt modes are the same, and
\begin{equation}
H_k=i\hbar\Gamma (A^\dagger_k)^2+h.c.
\label{eq:Ham_deg}
\end{equation}
It is worth mentioning that the Schmidt decomposition can be alternatively performed in the space coordinates, which is equivalent to the wavevector decomposition~(\ref{eq:TPA Schmidt}). A similar decomposition is valid for the frequency/temporal domain. In different works, the Schmidt modes are also called nonmonochromatic modes~\cite{Opatrny}, squeezing (eigen)modes~\cite{Boyd}, broadband modes~\cite{Silberhorn}, or supermodes~\cite{Fabre}.
Clearly, in terms of these new modes, photon-number correlations are only pairwise. The total mean photon number can be represented as a sum of incoherent contributions from all Schmidt modes,
\begin{equation}
\langle N\rangle=\sum_k \langle N_k\rangle,\,\,\langle N_k\rangle=\sinh^2[\sqrt{\lambda_k}G],
\label{eq:ph_N0}
\end{equation}
where $G=\int\Gamma dt$ is the parametric gain. This means that while at low gain ($G\ll1$) the Schmidt modes are populated with the weights given by the Schmidt eigenvalues $\lambda_k$, at high gain these weights change to become~\cite{Bloch}
\begin{equation}
\tilde{\lambda}_k=\frac{\sinh^2[\sqrt{\lambda_k}G]}{\sum_k\sinh^2[\sqrt{\lambda_k}G]}.
\label{eq:renorm}
\end{equation}
According to this, at high gain the lower-order Schmidt modes, initially having higher eigenvalues, become more pronounced.
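This redistribution of weights, Eq.~(\ref{eq:renorm}), is easy to illustrate numerically. In the sketch below, the initial eigenvalues $\lambda_k$ are an arbitrary illustrative choice, not values computed for a real crystal:

```python
import math

# Renormalized Schmidt weights: lambda_k -> sinh^2(sqrt(lambda_k) G) / norm.
def renormalized(lams, G):
    w = [math.sinh(math.sqrt(l) * G) ** 2 for l in lams]
    s = sum(w)
    return [x / s for x in w]

lams = [0.5, 0.25, 0.125, 0.0625, 0.0625]  # illustrative, sums to 1
low = renormalized(lams, 0.01)   # low gain: weights reproduce lambda_k
high = renormalized(lams, 10.0)  # high gain: zeroth mode dominates
```

At $G=10$ the zeroth mode carries almost the entire photon number, exactly the mode-number reduction exploited in the rest of the paper.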
\subsection{SU(1,1) interferometers}\label{S1.2}
At the very start of nonlinear optics, an idea emerged to realize two nonlinear effects at spatially separated points and to see interference between them. Such \textit{nonlinear interference} would enable the observation of the relative phase between two effects. It was first realized by Chang et al.~\cite{Bloembergen} who measured in this way the complex values of surface quadratic susceptibility for several semiconductors. Later, it became a common way to measure the phases of nonlinear susceptibilities.
After the discovery of parametric amplification via PDC and four-wave mixing (FWM), it was soon suggested to realize nonlinear interference based on these effects. Yurke et al.~\cite{SU11} proposed an interferometer in which the signal and idler beams emitted in the first parametric amplifier were directed into the second one and got amplified or deamplified depending on the phases introduced in the pump, signal, or idler beams, $\phi_{p,s,i}$ (Fig.~\ref{nonlinear_int}). Because the transformations performed by the interferometer on the fields at its two output modes relate to the SU(1,1) group, this type of interferometer is usually referred to as SU(1,1).
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{nonlinear_int.eps}
\caption{(Color online) An SU(1,1) nonlinear interferometer based on two high-gain parametric amplifiers.}\label{nonlinear_int}
\end{center}
\end{figure}
Initially the SU(1,1) interferometer was proposed as a method to perform phase measurement below the shot-noise level, which is possible due to the strong dependence of its output intensity on the phases $\phi_{s,i}$ at high parametric gain~\cite{Ou}. Nevertheless, the first implementations of SU(1,1) interferometers were based on low-gain (spontaneous) PDC~\cite{Herzog,big}. In particular, in a clever modification of such interferometer the effect of `induced coherence without induced emission' was observed~\cite{induced}, which later was successfully implemented for the measurement of absorption~\cite{abs} and dispersion~\cite{disp}, as well as imaging~\cite{imaging} in the infrared or terahertz~\cite{THz} spectral ranges without the detection of infrared or terahertz radiation.
Only recently, the SU(1,1) interferometer using FWM in rubidium vapor has been implemented for overcoming the shot-noise level of phase measurement~\cite{Zhang}. More than $4$ dB improvement has been obtained compared to a conventional (SU(2)) interferometer populated with the same mean number of photons. The operation was at high gain, which provided $7.4$ dB amplification of the radiation from the first FWM source in the second one.
The same mechanism can be used for shaping the spectrum of high-gain PDC or FWM, both in the angle and in the frequency. Because of the nonlinear amplification of the incident radiation, the modes that are not amplified in the second nonlinear crystal will be much weaker at the output than those amplified. Moreover, one can take advantage of the de-amplification of certain modes, which, however, is not accompanied by a noise increase. Such selective amplification of different modes can enable tailoring the structure of the spectrum.
\section{Diffractive and dispersive spreading, and reduction of the mode number}\label{S2}
\subsection{Angular spectrum tailoring}\label{S2.1}
In a nonlinear interferometer formed by two spatially separated traveling-wave high-gain parametric amplifiers (Fig.~\ref{separated})~\cite{separation}, broadband radiation emitted by the first crystal is amplified in the second one. If the distance between the crystals is considerable, only part of the radiation is amplified, namely the part that overlaps with the pump beam in the second crystal. In accordance with this, it was shown~\cite{separation} that at a certain distance between the crystals the angular spectrum becomes nearly single-mode.
At a sufficiently large distance between the crystals, the angular width of the spectrum amplified in the second crystal should be roughly given by the ratio between the pump diameter $a$ and the distance $L$ between the crystals,
\begin{equation}
\Delta\theta\approx\frac{a}{L}.
\label{eq:angular width}
\end{equation}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\textwidth]{separated2.eps}
\caption{(Color online) A nonlinear interferometer formed by two spatially separated crystals.}\label{separated}
\end{center}
\end{figure}
This effect of spatial spectrum narrowing has a simple explanation in terms of the Schmidt modes of high-gain PDC. Indeed, each of the spatial Schmidt modes of BSV emitted by the first crystal diffracts (spreads) in the course of propagation to the second crystal. To a good approximation, the spatial Schmidt modes are given by the Hermite-Gauss or Laguerre-Gauss set, the lowest-order mode being simply a Gaussian beam. Lower-order modes spread slowly, and in the second crystal they overlap with the pump beam. Therefore, they get amplified provided that the phase acquired in the course of propagation is appropriate. However, higher-order modes (no matter whether they are given by Laguerre-Gauss or Hermite-Gauss beams) spread faster in the space between the crystals and do not get amplified. As a result, the spatial spectrum at the output of the second crystal gets narrower. This continues until only the zeroth-order Schmidt mode gets amplified, after which the angular width remains constant; increasing the distance $L$ further only reduces the total intensity.
The dependence of the angular width $\Delta\theta$ on the distance $L$ between the crystals can be derived from this picture as follows. The zeroth-order Schmidt mode is a Gaussian beam of waist radius $w_{0}$. As it propagates from the crystal, the waist radius at a distance $z$ is~\cite{Kogelnik}
\begin{equation}
w_0(z)=\sqrt{w_{0}^{2}+\left(\frac{\lambda z}{\pi w_{0}}\right)^2},\,\, w_0(0)=w_0,
\label{eq:waist_1st}
\end{equation}
with $\lambda$ being the wavelength. The parameter $\theta_0\equiv\lambda/(\pi w_0)$ is the half-angle divergence of the Gaussian beam. Higher-order modes have larger spatial sizes, $w_{m}=Mw_0$; for instance, for Hermite-Gauss beams, $M=\sqrt{2m+1}$. At the same time, they have larger divergences $\theta_m=M\theta_0$, so that as the distance $z$ increases, they spread as
\begin{equation}
w_m(z)=\sqrt{w_{m}^{2}+\left(M^2\frac{\lambda z}{\pi w_{m}}\right)^2}.
\label{eq:waist_m}
\end{equation}
Assuming that for $z=L$, only modes of orders from $0$ to $m$ are amplified in the second crystal, we find the corresponding $M$ from the condition $w_m(L)=a/2$. We obtain
\begin{equation}
M=\frac{a}{2}\left[w_0^{2}+\left(\frac{\lambda L}{\pi w_0}\right)^2\right]^{-1/2}.
\label{eq:M}
\end{equation}
Then, the divergence of the beam will be equal to twice the half-angle divergence of mode $m$, $\Delta\theta=2\theta_m=\frac{2M\lambda}{\pi w_{0}}$:
\begin{equation}
\Delta\theta=\left[\frac{1}{\Delta\theta_0^{2}}+\left(\frac{L}{a}\right)^2\right]^{-1/2},
\label{eq:angular width_full}
\end{equation}
where $\Delta\theta_0=\frac{a\lambda}{\pi w_{0}^2}$ is the initial angular width.
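Equation~(\ref{eq:angular width_full}) is straightforward to evaluate numerically. The sketch below uses parameters close to the experiment ($a = 200$ $\mu$m, $\lambda = 709.3$ nm); the waist $w_0 = 50$ $\mu$m of the zeroth mode is an assumed value for illustration, since it is not reported in the text:

```python
import math

lam = 709.3e-9   # wavelength, m
a = 200e-6       # pump diameter, m
w0 = 50e-6       # assumed zeroth-mode waist, m
dtheta0 = a * lam / (math.pi * w0 ** 2)  # initial angular width, rad

# Angular width vs crystal separation L, Eq. (angular width_full).
def dtheta(L):
    return (1 / dtheta0 ** 2 + (L / a) ** 2) ** -0.5
```

The width decreases monotonically with $L$ and approaches the geometric limit $a/L$ of Eq.~(\ref{eq:angular width}) at large separations.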
We have measured the dependence of the angular width on the distance between the crystals under the same experimental conditions as in Ref.~\cite{separation}: two $3$ mm crystals were placed into a single pump beam of full width at half maximum (FWHM) waist $200$ $\mu$m, and the distance between them was changed from $10$ to $130$ mm. The results are shown in Fig.~\ref{angular}.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.4\textwidth]{angular.eps}
\caption{(Color online) The angular width of the spectrum measured at the output of the second crystal, versus the distance between the crystals. The solid line is calculated with Eq.~(\ref{eq:angular width_full}), without fitting parameters. The dashed line is a guide to an eye. The arrow shows the position at which $m=1.1$ spatial modes were reported in Ref.~\cite{separation}.}\label{angular}
\end{center}
\end{figure}
Equation (\ref{eq:angular width_full}) was used for fitting the dependence at $L\le60$ mm. At larger distances, the angular spectrum shows no dependence on $L$; the dashed line is just a guide to the eye. The position at which nearly single-mode was observed (the number of modes was measured to be $m=1.1$)~\cite{separation} is shown by an arrow.
\subsection{Frequency spectrum tailoring}\label{S2.2}
This effect has an analogue in the frequency/time domain. In this case, the role of the diffractive spreading of beams is played by the dispersive spreading of pulses. Indeed, let a dispersive material of length $d$ be placed between the two crystals. Each frequency Schmidt mode of the BSV from the first crystal will spread in time in the course of propagation through the material, and the spreading will be determined by the group-velocity dispersion (GVD) $k''=d^2k/d\omega^2$. Lower-order modes are narrower in time than higher-order ones, but they will still spread in the GVD material slower than higher-order ones. This is illustrated by Fig.~\ref{modes} where several temporal Schmidt modes are plotted for BSV emitted from a 3 mm crystal pumped by $6$ ps pulses at wavelength $355$ nm (a). The emission is at the degenerate wavelength $710$ nm. The modes are assumed to be the same as for spontaneous parametric down-conversion~\cite{Law}. To a good approximation, they are given by Hermite functions~\cite{Wasilewski}.
After propagation through a dispersive material with the GVD $k''$ and length $d$, the temporal modes will spread in time but maintain their shapes. The latter follows from the fact that, similar to diffractive spreading of beams, dispersive spreading of pulses acts as a Fourier transformation, so that after a sufficiently long GVD material the shape of a pulse becomes similar to its spectral amplitude. At the same time, Hermite functions are eigenfunctions of the Fourier transformation. Therefore, the whole set of Schmidt modes will be simply rescaled after the propagation through the dispersive material. For instance, the zeroth-order Schmidt mode (dotted line in Fig.~\ref{modes}a), initially a Gaussian pulse of duration $\tau_{0}$, will remain a Gaussian pulse with a duration depending on $d$~\cite{Yariv},
\begin{equation}
\tau_0(d)=\sqrt{\tau_{0}^{2}+\left(\frac{k'' d}{\tau_{0}}\right)^2}.
\label{eq:time_1st}
\end{equation}
Higher-order modes will also maintain their shapes but, similar to the case of the angular modes, will spread faster.
In the remaining panels of Fig.~\ref{modes}, the Schmidt modes are plotted after propagation through various lengths $d$ of SF6 glass, whose GVD at the wavelength $710$ nm is $k''=238$ fs$^2$/mm~\cite{refractive index}. In the calculation, Eq.~(\ref{eq:time_1st}) was used for the Gaussian mode, and the higher-order modes were rescaled accordingly. For the length $d=10$ cm (b), only the higher-order modes (mode $50$ in the figure) spread sufficiently that they no longer fully overlap with the pump pulse in the second crystal. Therefore, high-order modes will not be amplified.
However, at $d=20$ cm (c), even the tenth-order mode becomes considerably spread and will not get fully amplified in the second crystal. This should lead to the narrowing of the spectrum. As the length of the dispersive glass increases, the spectral width should reduce. After $d=60$ cm of glass (d), the zeroth-order Schmidt mode will overlap with the pump pulse. At high gain, this situation should lead to single-mode output emission.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.7\textwidth]{modes.eps}
\caption{(Color online) Temporal Schmidt modes of orders $0$ (red dotted line), $10$ (blue dashed line) and $50$ (green solid line) for a $3$ mm crystal pumped by $6$ ps pulses before (a) and after propagating through $10$ cm (b), $20$ cm (c) and $60$ cm (d) of SF6 glass. For comparison, the amplitude of the pump pulse is shown by magenta dash-dotted line.}\label{modes}
\end{center}
\end{figure}
By analogy with Eq.~(\ref{eq:angular width_full}), one can estimate the frequency spectrum of BSV generated in the system of two crystals separated by a GVD material as
\begin{equation}
\Delta\omega=\left[\frac{1}{\Delta\omega_0^{2}}+\left(\frac{k''d}{T_p}\right)^2\right]^{-1/2},
\label{eq:frequency width_full}
\end{equation}
where $T_p$ is the pump pulse duration and $\Delta\omega_0$ the initial spectral width.
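A numeric check of Eq.~(\ref{eq:frequency width_full}) for the experimental parameters is sketched below. The GVD of SF6 and the coherence time $T_c = 6$ ps (used in place of $T_p$, as in the experiment) are taken from the text; the initial width in rad/s, converted from the measured 45.6 nm FWHM near 710 nm, is our estimate for illustration:

```python
import math

k2 = 238e-30 * 1e3   # SF6 GVD: 238 fs^2/mm expressed in s^2/m
Tc = 6e-12           # pump coherence time, s
dw0 = 1.7e14         # assumed initial spectral width, rad/s (~45.6 nm FWHM)

# Spectral width vs GVD-medium length d, Eq. (frequency width_full).
def dw(d):
    return (1 / dw0 ** 2 + (k2 * d / Tc) ** 2) ** -0.5
```

For the $18.3$ cm SF6 rod this gives roughly a one-third reduction of the width, consistent in order of magnitude with the $\sim$30\% narrowing reported in the conclusions.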
In the next section, we describe the experimental results confirming this behavior.
\section{Experiment}\label{S3}
The scheme of the experimental setup is shown in Fig.~\ref{setup}. Collinear high-gain PDC with the central wavelength at 709.3 nm was generated in a type-I 3-mm-long BBO crystal pumped by the third harmonic of a pulsed Nd:YAG laser at 354.7 nm, with a pulse duration of 18 ps (the coherence time being $6$ ps) and a repetition rate of 1 kHz. The laser power was varied by a half-wave plate $\lambda_p/2$ followed by a polarization beamsplitter $PBS_p$. A telescope, made of plano-convex lenses $L_{p1}$ and $L_{p2}$ with focal distances of 50 cm and 5 cm, respectively, compressed the beam down to a half-power width of 225 $\mu$m. A dichroic mirror $DM_1$ separated the pump beam from the PDC. The PDC pulses then passed through a group-velocity dispersion (GVD) medium; three GVD media were used: SF-6 glass rods of lengths $9$ cm and $18.3$ cm, and an SF-57 glass rod of length $19.4$ cm. The pump pulses were timed, by means of a delay line, to overlap with the time-stretched PDC pulses on the dichroic mirror $DM_2$ and amplify them in the second type-I 3-mm-long BBO crystal. After the crystal, the pump was blocked by a pair of dichroic mirrors $DM_3$ and a long-pass filter OG580. The iris $A_1$, placed in the focal plane of the lens L with a focal distance of 10 cm, was used to align the crystals for the collinear geometry. The lens $L_i$ focused the PDC radiation onto the input slit of a spectrometer with a resolution of 0.15 nm.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.8\textwidth]{setup.eps}
\caption{(Color online) Schematic of the experimental setup.}\label{setup}
\end{center}
\end{figure}
Figure~\ref{results}(a) shows the measured PDC spectra with and without a GVD medium placed between the crystals. The PDC spectrum for the crystals separated by an air gap of $24.2$ cm is shown by the blue line. The measurement was performed at an average pump power of $73.4$ mW in the setup published earlier in~\cite{separation}. Interference fringes caused by the different refractive indices of the pump and the signal and idler beams in air were avoided by averaging the spectra taken at different positions of the first crystal, in the range from 22.7 cm to 25.7 cm with a step of 2 mm. The FWHM of the spectrum was found to be $45.6$ nm.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.7\textwidth]{results5.eps}
\caption{(Color online) (a) Measured PDC spectra with different GVD media inserted. (b) FWHM of the PDC spectra measured with and without GVD media placed between the crystals (blue triangles) and the theoretical dependence according to Eq.~(\ref{eq:frequency width_full}) (red line), plotted versus $k''d$.}
\label{results}
\end{center}
\end{figure}
Green and red lines in Fig.~\ref{results}(a) show how the PDC spectrum evolves as the value of $k''d$ of the inserted GVD medium is increased. All the spectra are affected by interference arising from the frequency-dependent phase acquired by the broadband PDC generated in the first crystal as it passes through the GVD medium. As a result, constructive or destructive interference is observed at different frequencies at the output of the second crystal. Despite the interference, one can clearly see that the spectra measured with the GVD media are narrower than the one obtained with the air gap between the crystals. The FWHM of the envelope of each measured spectrum was taken as the width of the spectrum.
In Fig.~\ref{results}(b) we compare the measured PDC spectral widths (blue triangles) with the calculation according to Eq.~\ref{eq:frequency width_full} (red line), using the GVD values for the Schott glasses SF6 and SF57~\cite{refractive index}. Instead of the pulse duration $T_p$, the coherence time of the laser, $T_c=6$~ps, was used, since it is the coherence of the pump that matters for parametric amplification. The experimental FWHM values agree well with the calculations.
\section{Conclusion}\label{S4}
In conclusion, we have considered a nonlinear interferometer formed by two unseeded traveling-wave parametric amplifiers (based on parametric down-conversion in nonlinear crystals) and have shown that its angular and frequency emission spectra can be modified by spatially separating the two crystals and/or placing a dispersive material between them. The effect has a simple interpretation in terms of Schmidt modes: higher-order modes spread in space and time faster than lower-order ones and hence do not get amplified in the second crystal. Our experimental results show narrowing of the spatial spectrum, leading ultimately to a single spatial mode. For the frequency spectrum, preliminary results show 30\% narrowing for a glass rod with large group velocity dispersion inserted into the interferometer.
The research leading to these results has received funding from the EU FP7 under grant agreement No. 308803 (project BRISQ2). We thank Xin Jiang and Patricia Schrehardt for providing the samples of SF6 and SF57.
\section{Introduction}
\subsection{Notation}
In this paper, a \textit{real elliptic curve} will be a triple $(X,x_0,\sigma)$ where
$(X,x_0)$ is a complex elliptic curve (i.e.,\ a compact connected Riemann surface of
genus $1$ with a marked point $x_0$) and $\sigma:X\longrightarrow X$ is an anti-holomorphic
involution (also called a real structure). We do not assume that $x_0$ is fixed under
$\sigma$. In particular, $X^\sigma\,:=\,\mathrm{Fix}(\sigma)$ is allowed to be empty.
The g.c.d.\ of two integers $r$ and $d$ will be denoted by $r\wedge d$.
In the introduction, we omit the definitions of stability and semi-stability of vector
bundles, as well as that of real and quaternionic structures; all these definitions
will be recalled in Section \ref{semi-stable_bundles}.
\subsection{The case of genus zero}
Vector bundles over a real Riemann surface of genus $g\geq 2$ have been studied from various points of view in the past few years: moduli spaces of real and quaternionic vector bundles were introduced through gauge-theoretic techniques in \cite{BHH}, then related to the real points of the usual moduli variety in \cite{Sch_JSG}. In genus $0$, there are, up to isomorphism, only two possible real Riemann surfaces: the only compact Riemann surface of genus $0$ is the Riemann sphere $\mathbb{C}\mathbf{P}^1$ and it can be endowed either with the real structure $[z_1:z_2]\longmapsto [\ov{z_1}:\ov{z_2}]$ or with the real structure $[z_1:z_2]\longmapsto [-\ov{z_2}:\ov{z_1}]$. The real locus of the first real structure is $\mathbb{R}\mathbf{P}^1$ while the real locus of the second one is empty. Now, over $\mathbb{C}\mathbf{P}^1$, two holomorphic line bundles are isomorphic if and only if they have the same degree and, by a theorem due to Grothendieck (\cite{Grot_P1}), any holomorphic vector bundle over the Riemann sphere is isomorphic to a direct sum of line bundles. So, over $\mathbb{C}\mathbf{P}^1$, the only stable vector bundles are the line bundles, a semi-stable vector bundle is necessarily poly-stable and any vector bundle is isomorphic
to a direct sum of semi-stable vector bundles, distinguished by their respective slopes
and ranks. In particular, if $\mathcal{E}$ is semi-stable of rank $r$ and degree $d$, then $r$ divides $d$ and $$\mathcal{E} \simeq \mathcal{O}(d/r) \oplus \cdots \oplus \mathcal{O}(d/r),$$ where $\mathcal{O}(1)$ is the positive-degree generator of the Picard group of $\mathbb{C}\mathbf{P}^1$ and $\mathcal{O}(k)$ is its $k$-th tensor power. This means that the moduli space of semi-stable vector bundles of rank $r$ and degree $d$ over $\mathbb{C}\mathbf{P}^1$ is
$$\mathcal{M}_{\mathbb{C}\mathbf{P}^1}(r,d) = \left\{
\begin{array}{ccc}
\{\mathrm{pt}\} & \mathrm{if}& r\,|\,d,\\
\emptyset & \mathrm{if}& r\nmid d.
\end{array}\right.$$ Assume now that a real structure $\sigma$ has been given on $\mathbb{C}\mathbf{P}^1$. Then, if $\mathcal{L}$ is a holomorphic line bundle of degree $d$ over $\mathbb{C}\mathbf{P}^1$, it is isomorphic to its Galois conjugate $\os{\mathcal{L}}$, since they have the same degree. This implies
that $\mathcal{L}$ is either real or quaternionic. Moreover, this real (respectively,\
quaternionic) structure is unique up to real (respectively,\ quaternionic)
isomorphism (see Proposition \ref{self_conj_stable_bundles}). If the real structure
$\sigma$ has real points, then quaternionic bundles must have even rank. Thus, when
$\mathrm{Fix}(\sigma) \neq \emptyset$ in $\mathbb{C}\mathbf{P}^1$, any line bundle, more generally any
direct sum of holomorphic line bundles, admits a canonical
real structure. Of course, given a real vector bundle of the
form $(\mathcal{L}\oplus\mathcal{L},\tau\oplus\tau)$, where $\tau$ is a real structure on the line bundle $\mathcal{L}$, one
can also construct the quaternionic structure
$\begin{pmatrix} 0 & -\tau \\ \tau & 0 \end{pmatrix}$ on $\mathcal{L}\oplus\mathcal{L}$.
Note that the real vector bundle $(\mathcal{L}\oplus\mathcal{L} \, ,\begin{pmatrix} 0 & \tau
\\ \tau & 0 \end{pmatrix})$ is isomorphic to $(\mathcal{L}\oplus\mathcal{L},\tau\oplus\tau)$.
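Explicitly, an isomorphism between these two real bundles is given by a constant matrix $A\in\mathbf{GL}_2(\mathbb{C})$ such that $A\circ\begin{pmatrix} 0 & \tau \\ \tau & 0 \end{pmatrix}=(\tau\oplus\tau)\circ A$; by $\mathbb{C}$-anti-linearity of $\tau$, this amounts to the condition $\overline{A}=A\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$, which is satisfied for instance by
$$A=\begin{pmatrix} 1 & 1 \\ i & -i \end{pmatrix}.$$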
When $\mathbb{C}\mathbf{P}^1$ is equipped with its real structure with no real points, a given
line bundle $\mathcal{L}$ of degree $k$ is again necessarily self-conjugate, so it has to
be either real or quaternionic but now real line bundles must have even degree
and quaternionic line bundles must have odd degree (\cite{BHH}), so $\mathcal{L}$ admits a
canonical real structure if $k$ is even and a canonical quaternionic structure if $k$
is odd. Consequently, when $\mathrm{Fix}(\sigma)=\emptyset$ in $\mathbb{C}\mathbf{P}^1$, semi-stable
holomorphic vector bundles of rank $r$ and degree $d=rk$ over $\mathbb{C}\mathbf{P}^1$ admit a canonical
real structure if $k$ is even and a canonical quaternionic structure if $k$ is odd.
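Let us briefly recall why a self-conjugate line bundle $\mathcal{L}$ is either real or quaternionic: choosing an isomorphism between $\os{\mathcal{L}}$ and $\mathcal{L}$ gives an anti-holomorphic map $\tau:\mathcal{L}\longrightarrow\mathcal{L}$ covering $\sigma$ and $\mathbb{C}$-anti-linear in the fibres. Then $\tau^2$ is a holomorphic automorphism of $\mathcal{L}$, hence of the form
$$\tau^2=\lambda\,\mathrm{Id}_\mathcal{L},\qquad \lambda\in\mathbb{C}^*.$$
Comparing $\tau\circ\tau^2$ with $\tau^2\circ\tau$ shows that $\lambda\in\mathbb{R}^*$, and replacing $\tau$ by $c\tau$ with $c\in\mathbb{C}^*$ changes $\lambda$ into $|c|^2\lambda$, so only the sign of $\lambda$ is invariant. After rescaling, $\tau^2=\pm\mathrm{Id}_\mathcal{L}$: the bundle $\mathcal{L}$ is real if the sign is $+$ and quaternionic if it is $-$.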
\subsection{Description of the results}
The goal of the present paper is to analyze that same situation in the case of real
Riemann surfaces of genus one. In particular, we completely identify the moduli space
of semi-stable holomorphic vector bundles of rank $r$ and degree $d$ as a real
algebraic variety (Theorem \ref{moduli_space_over_R} below). Our main references are
the papers of Atiyah (\cite{Atiyah_elliptic_curves}) and Tu (\cite{Tu}). In what
follows, we denote by $(X,x_0)$ a complex elliptic curve and by $\mathcal{M}_{X}(r,d)$ the moduli
space of semi-stable vector bundles of rank $r$ and degree $d$ over $X$, i.e.,\ the
set of $S$-equivalence classes of semi-stable holomorphic vector bundles of rank $r$
and degree $d$ over $X$ (\cite{Seshadri}). Since $(X,x_0)$ is an elliptic curve, the
results of Atiyah show that any holomorphic vector bundle on $X$ is again (as in
genus $0$) a direct sum of semi-stable vector bundles (see Theorem \ref{Tu_obs}) but now
there can be semi-stable vector bundles which are not poly-stable (see
\eqref{ses_defining_F_h}) and also there can be stable vector
bundles of rank higher than $1$.
Moreover, the moduli space $\mathcal{M}_{X}(r,d)$ is a non-singular complex algebraic variety of
dimension $h:=r\wedge d$. As a matter of fact, $\mathcal{M}_{X}(r,d)$
is isomorphic, as a complex algebraic variety, to the $(r\wedge d)$-fold symmetric
product $\mathrm{Sym}^{r\wedge d}(X)$ of the complex elliptic curve $X$ and it contains
stable bundles if and only if $r\wedge d=1$, in which case all semi-stable bundles
are in fact stable. Let now $\sigma:X\longrightarrow X$ be a real structure on $X$ (recall that the
marked point $x_0$ is not assumed to be fixed under $\sigma$). Then the map
$\mathcal{E}\longmapsto\os{\mathcal{E}}$ induces a real structure, again
denoted by $\sigma$, on $\mathcal{M}_{X}(r,d)$, since it preserves the rank, the degree and the
S-equivalence class of semi-stable vector bundles (\cite{Sch_JSG}). Our main result
is then the following, to be proved in Section \ref{real_structure_on_mod_space}.
\begin{theorem}\label{moduli_space_over_R}
Let $h:=r\wedge d$. Then, as a real algebraic variety, $$\mathcal{M}_{X}(r,d) \simeq_{\mathbb{R}} \left\{
\begin{array}{cl}
\mathrm{Sym}^h (X) & \mathrm{if}\ X^\sigma\neq\emptyset,\\
\mathrm{Sym}^h (X) & \mathrm{if}\ X^\sigma=\emptyset\ \mathrm{and}\ d/h\ \mathrm{is\ odd},\\
\mathrm{Sym}^h (\mathrm{Pic}^{\, 0}_X) & \mathrm{if}\ X^\sigma=\emptyset\ \mathrm{and}\ d/h\ \mathrm{is\ even}.
\end{array}
\right.$$
\end{theorem}
We recall that $\mathrm{Pic}^{\,0}_X$ is isomorphic to $X$ over $\mathbb{C}$ (via the choice of
$x_0$) but not over $\mathbb{R}$ when $X^\sigma=\emptyset$ because $\mathrm{Pic}^{\,0}_X$ has the
real point corresponding to the trivial line bundle. In contrast, $\mathrm{Pic}^{\,1}_X$ is
always isomorphic to $X$ over $\mathbb{R}$, as we shall recall in Section \ref{line_bundles}.
For any $d\in\mathbb{Z}$, the real structure of $\mathrm{Pic}^{\,d}_X$ is induced by the map
$\mathcal{L}\longmapsto\os{\mathcal{L}}$, while the real structure of the $h$-fold symmetric product
$\mathrm{Sym}^h(Y)$ of a real variety $(Y,\sigma)$ is induced by that of $Y$ in the following
way: $[y_1,\cdots,y_h]\longmapsto[\sigma(y_1),\cdots,\sigma(y_h)]$. Note that, if $r\wedge d=1$,
then by Theorem \ref{moduli_space_over_R} we have $\mathcal{M}_{X}(r,d)\simeq_{\mathbb{R}} X$ if
$X^\sigma\neq\emptyset$ or $d$ is odd, and $\mathcal{M}_{X}(r,d)\simeq_{\mathbb{R}} \mathrm{Pic}^{\, 0}_X$ if
$X^\sigma=\emptyset$ and $d$ is even. This will eventually imply the following results on
the topology and modular interpretation of the set of real points of $\mathcal{M}_{X}(r,d)$,
analogous to those of \cite{Sch_JSG} for real curves of genus $g\geq 2$ (see Section
\ref{topology} for a proof of Theorem \ref{real_pts_coprime_case}; we point out that
it will only be valid under the assumption that $r\wedge d=1$, in which case all
semi-stable bundles are in fact stable, in particular a real point of $\mathcal{M}_{X}(r,d)$ is given
by either a real bundle or a quaternionic bundle, in an essentially
unique way; see Proposition \ref{self_conj_stable_bundles}).
\begin{theorem}\label{real_pts_coprime_case}
Assume that $r\wedge d=1$.
\begin{enumerate}
\item If $X^\sigma\neq\emptyset$, then $\mathcal{M}_{X}(r,d)^\sigma\simeq X^\sigma$ has either $1$ or $2$ connected components. Points in either component correspond to real isomorphism classes of real vector bundles of rank $r$ and degree $d$ over $(X,\sigma)$, and two such bundles $(\mathcal{E}_1,\tau_1)$ and $(\mathcal{E}_2,\tau_2)$ lie in the same connected component if and only if $w_1(\mathcal{E}_1^{\tau_1})=w_1(\mathcal{E}_2^{\tau_2})$.
\item If $X^\sigma=\emptyset$ and $d=2e+1$, then $\mathcal{M}_{X}(r,2e+1)^\sigma\simeq X^\sigma$ is empty.
\item If $X^\sigma=\emptyset$ and $d=2e$, then $\mathcal{M}_{X}(r,2e)^\sigma\simeq (\mathrm{Pic}^{\, 0}_X)^\sigma$ has two connected components, one consisting of real isomorphism classes of real bundles, the other consisting of quaternionic isomorphism classes of quaternionic bundles. These two components become diffeomorphic under the operation of tensoring a given bundle by a quaternionic line bundle of degree $0$.
\end{enumerate} Moreover, in cases $\textstyle{(1)}$ and $\textstyle{(3)}$, each connected component of $\mathcal{M}_{X}(r,d)^\sigma$ is diffeomorphic to $S^1$.
\end{theorem}
\noindent In particular, the formulae of \cite{LS} (see also \cite{Baird}), giving the mod $2$ Betti numbers of the connected components of $\mathcal{M}_{X}(r,d)^\sigma$ when $r\wedge d=1$, are still valid for $g=1$ (in contrast, when $r\wedge d\neq1$, the formulae of \cite{LS} do not seem to admit any interpretation since, over an elliptic curve, the dimension of $\mathcal{M}_{X}(r,d)$ is $r\wedge d$, not $r^2(g-1)+1$).
In the third and final section of the paper, we investigate the properties of indecomposable vector bundles over real elliptic curves. Recall that a holomorphic vector bundle $\mathcal{E}$ over a complex curve $X$ is said to be indecomposable if it is not isomorphic to a direct sum of non-trivial holomorphic bundles. When $X$ is of genus $1$, there exists a moduli variety $\mathcal{I}_X(r,d)$ whose points are isomorphism classes of indecomposable vector bundles of rank $r$ and degree $d$: it was constructed by Atiyah in \cite{Atiyah_elliptic_curves} and revisited by Tu in \cite{Tu}, as will be recalled in Theorems \ref{Atiyah_indecomp_bdles} and \ref{rel_between_stable_and_indecomp}. We will then see in Section \ref{indecomposable_bdles_over_real_elliptic_curves} that we can extend their approach to the case of real elliptic curves and obtain the following characterization of $\mathcal{I}_X(r,d)$ as a real algebraic variety.
\begin{theorem}\label{indecomp_bdles_over_R}
Let $(X,x_0,\sigma)$ be a real elliptic curve. Let $\mathcal{I}_X(r,d)$ be the set of isomorphism classes of indecomposable vector bundles of rank $r$ and degree $d$ and let us set $h:=r\wedge d$, $r':=\frac{r}{h}$, $d':=\frac{d}{h}$. Then:
$$\mathcal{I}_X(r,d) \simeq_\mathbb{R} \mathcal{M}_X(r',d') \simeq_{\mathbb{R}} \left\{ \begin{array}{cl}
X & \mathrm{if}\ X^\sigma\neq \emptyset,\\
X & \mathrm{if}\ X^\sigma= \emptyset\ \mathrm{and}\ d'\ \mathrm{is\ odd},\\
\mathrm{Pic}^{\,0}_X & \mathrm{if}\ X^\sigma= \emptyset\ \mathrm{and}\ d'\ \mathrm{is\ even}.
\end{array}\right.
$$
\end{theorem}
\noindent By combining Theorems \ref{real_pts_coprime_case} and \ref{indecomp_bdles_over_R}, we obtain the following topological description of the set of real points of $\mathcal{I}_X(r,d)$, valid even when $r\wedge d\neq 1$.
\begin{theorem}\label{real_pts_indecomp_bdles}
Denote by $\mathcal{I}_X(r,d)^\sigma$ the fixed points of the real structure $\mathcal{E}\longmapsto\os{\mathcal{E}}$ in $\mathcal{I}_X(r,d)$.
\begin{enumerate}
\item If $X^\sigma\neq \emptyset$, then $\mathcal{I}_X(r,d)^\sigma\simeq X^\sigma$ consists of real isomorphism classes of real and indecomposable vector bundles of rank $r$ and degree $d$. It has either one or two connected components, according to whether $X^\sigma$ has one or two connected components, and these are distinguished by the Stiefel-Whitney classes of the real parts of the real bundles that they contain.
\item If $X^\sigma=\emptyset$ and $\frac{d}{r\wedge d}=2e+1$, then $\mathcal{I}_X(r,d)^\sigma\simeq X^\sigma$ is empty.
\item If $X^\sigma=\emptyset$ and $\frac{d}{r\wedge d}=2e$, then $\mathcal{I}_X(r,d)^\sigma\simeq (\mathrm{Pic}^{\, 0}_X)^\sigma$ has two connected components, one consisting of real isomorphism classes of vector bundles which are both real and indecomposable and one consisting of quaternionic isomorphism classes of vector bundles which are both quaternionic and indecomposable. These two components become diffeomorphic under the operation of tensoring a given bundle by a quaternionic line bundle of degree $0$.
\end{enumerate} Moreover, in cases $\textstyle{(1)}$ and $\textstyle{(3)}$, each connected component of the set of real points of $\mathcal{I}_X(r,d)$ is diffeomorphic to $S^1$.
\end{theorem}
\begin{acknowledgments}
The authors thank the Institute of Mathematical Sciences of the National University
of Singapore for hospitality while this work was carried out. The first author is supported by a J. C. Bose Fellowship. The second
author acknowledges support from U.S. National Science Foundation grants DMS
1107452, 1107263 and 1107367, ``RNMS: GEometric structures And Representation varieties''
(the GEAR Network). Thanks also go to the referee for a careful reading of the paper and for suggesting the reference \cite{Baird}.
\end{acknowledgments}
\section{Moduli spaces of semi-stable vector bundles over an elliptic curve}\label{semi-stable_bundles}
\subsection{Real elliptic curves and their Picard varieties}\label{line_bundles}
The real points of Picard varieties of real algebraic curves have been studied for instance by Gross and Harris in \cite{GH}. We summarize here some of their results, specializing to the case of genus $1$ curves.
Let $X$ be a compact connected Riemann surface of genus $1$. To each point $x\in X$, there is associated a holomorphic line bundle $\mathcal{L}(x)$, of degree $1$, whose holomorphic sections have a zero of order $1$ at $x$ and no other zeros or poles. Since $X$ is compact, the map $X\longrightarrow \mathrm{Pic}^{\,1}_X$ thus defined, called the Abel-Jacobi map, is injective. And since $X$ has genus $1$, it is also surjective. The choice of a point $x_0\in X$ defines an isomorphism $\mathrm{Pic}^{\,0}_X\overset{\simeq}{\longrightarrow} \mathrm{Pic}^{\,1}_X$, obtained by tensoring by $\mathcal{L}(x_0)$. In particular, $X\simeq\mathrm{Pic}^{\,1}_X$ is isomorphic to $\mathrm{Pic}^{\,0}_X$ as a complex analytic manifold and inherits, moreover, a structure of Abelian group with $x_0$ as the neutral element.
If $\sigma:X\longrightarrow X$ is a real structure on $X$, the Picard variety $\mathrm{Pic}^{\,d}_X$, whose points represent isomorphism classes of holomorphic line bundles of degree $d$, has a canonical real structure, defined by $\mathcal{L}\longmapsto\os{\mathcal{L}}$ (observe that this anti-holomorphic involution, which we will still denote by $\sigma$, indeed preserves the degree). Since $\mathcal{L}(\sigma(x)) \simeq \os{(\mathcal{L}(x))}$, the Abel-Jacobi map $X\longrightarrow\mathrm{Pic}^{\,1}_X$ is defined over $\mathbb{R}$, meaning that it commutes with the real structures of $X$ and $\mathrm{Pic}^{\,1}_X$. We also call such a map a real map. If $X^\sigma\neq\emptyset$, we can choose $x_0\in X^\sigma$ and then $\mathcal{L}(x_0)$ will satisfy $\os{\mathcal{L}(x_0)} \simeq \mathcal{L}(x_0)$, so the isomorphism $\mathrm{Pic}^{\,0}_X \overset{\simeq}{\longrightarrow} \mathrm{Pic}^{\,1}_X$ obtained by tensoring by $\mathcal{L}(x_0)$ will also be defined over $\mathbb{R}$. More generally, by tensoring by a suitable power of $\mathcal{L}(x_0)$, we obtain real isomorphisms $\mathrm{Pic}^{\,d}_X \simeq \mathrm{Pic}^{\,1}_X$ for any $d\in\mathbb{Z}$. If now $X^\sigma=\emptyset$, then we actually cannot choose $x_0$ in such a way that $\mathcal{L}(\sigma(x_0)) \simeq \mathcal{L}(x_0)$ (see \cite{GH} or Theorem \ref{GH_case_g_equal_1} below; the reason is that such a line bundle would be either real or quaternionic but, over a real curve of genus $1$ with no real points, real and quaternionic line bundles must have even degree), but we may consider the holomorphic line bundle of degree $2$ defined by the divisor $x_0+\sigma(x_0)$, call it $\mathcal{L}$. Then $\os{\mathcal{L}}\simeq \mathcal{L}$ and, by tensoring by an appropriate tensor power of it, we have the following real isomorphisms $$\mathrm{Pic}^{\,d}_X \simeq_\mathbb{R} \left\{ \begin{array}{ccl}
\mathrm{Pic}^{\,1}_X & \mathrm{if} & d=2e+1,\\
\mathrm{Pic}^{\,0}_X & \mathrm{if} & d=2e.
\end{array}\right.$$ So, when the genus of $X$ is $1$, we have the following result.
\begin{theorem}\label{line_bdle_case}
Let $(X,x_0,\sigma)$ be a real elliptic curve.
\begin{enumerate}
\item If $X^\sigma\neq\emptyset$, then for all $d\in \mathbb{Z}$, $$\mathrm{Pic}^{\,d}_X\simeq_\mathbb{R} X.$$
\item If $X^\sigma = \emptyset$, then $$\mathrm{Pic}^{\,d}_X \simeq_\mathbb{R} \left\{ \begin{array}{ccl}
X & \mathrm{if} & d=2e+1,\\
\mathrm{Pic}^{\,0}_X & \mathrm{if} & d=2e.
\end{array}\right.$$
\end{enumerate}
\end{theorem}
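For instance, in case (2) the isomorphisms are obtained by tensoring with powers of the degree-$2$ real line bundle $\mathcal{L}$ associated to the divisor $x_0+\sigma(x_0)$:
$$\mathrm{Pic}^{\,2e+1}_X \overset{\otimes\,\mathcal{L}^{\otimes(-e)}}{\longrightarrow} \mathrm{Pic}^{\,1}_X \simeq_\mathbb{R} X \quad \mathrm{and} \quad \mathrm{Pic}^{\,2e}_X \overset{\otimes\,\mathcal{L}^{\otimes(-e)}}{\longrightarrow} \mathrm{Pic}^{\,0}_X.$$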
\subsection{Semi-stable vector bundles}
Let $X$ be a compact connected Riemann surface of genus $g$ and recall that the slope of a non-zero holomorphic vector bundle $\mathcal{E}$ on $X$ is by definition the ratio $\mu(\mathcal{E})=\deg(\mathcal{E})/\mathrm{rk}(\mathcal{E})$ of its degree to its rank. The vector bundle $\mathcal{E}$ is called stable (respectively,\ semi-stable) if, for any non-zero proper sub-bundle $\mathcal{F}\subset \mathcal{E}$, one has $\mu(\mathcal{F}) < \mu(\mathcal{E})$ (respectively,\ $\mu(\mathcal{F}) \leq \mu(\mathcal{E})$). By a theorem of Seshadri (\cite{Seshadri}), any semi-stable vector bundle $\mathcal{E}$ of rank $r$ and degree $d$ admits a filtration whose successive quotients are stable bundles of the same slope, necessarily equal to $d/r$. Such a filtration, called a Jordan-H\"older filtration, is not unique, but the graded objects associated to any two such filtrations are isomorphic. The isomorphism class thus defined is denoted by $\mathrm{gr}(\mathcal{E})$, and holomorphic vector bundles which are isomorphic to direct sums of stable vector bundles of equal slope are called poly-stable vector bundles. Moreover, two semi-stable vector bundles $\mathcal{E}_1$ and $\mathcal{E}_2$ are called $S$-equivalent if $\mathrm{gr}(\mathcal{E}_1)= \mathrm{gr}(\mathcal{E}_2)$, and Seshadri proved in \cite{Seshadri} that, when $g\geq 2$, the set of $S$-equivalence classes of semi-stable vector bundles of rank $r$ and degree $d$ admits a structure of complex projective variety of dimension $r^2(g-1)+1$, which is non-singular when $r\wedge d=1$ but usually singular when $r\wedge d\neq 1$ (unless, in fact, $g=2$, $r=2$ and $d=0$). Finally, when $g\geq 2$, there are always stable bundles of rank $r$ and degree $d$ over $X$ (by the theorem of Narasimhan and Seshadri, \cite{NS}, these come from irreducible rank $r$ unitary representations of a certain central extension of $\pi_1(X)$ by $\mathbb{Z}$, determined by $d$ up to isomorphism).
If now $g=1$, then the results of Atiyah (\cite{Atiyah_elliptic_curves}) and Tu (\cite{Tu}) show that the set of $S$-equivalence classes of semi-stable vector bundles of rank $r$ and degree $d$ admits a structure of non-singular complex projective variety of dimension $r\wedge d$ (which is consistent with the formula for $g\geq 2$ only when $r$ and $d$ are coprime). But now stable vector bundles of rank $r$ and degree $d$ can only exist if $r\wedge d=1$, as Tu showed following Atiyah's results (\cite[Theorem A]{Tu}). In particular, the structure of poly-stable vector bundles over a complex elliptic curve is rather special, as recalled next.
\begin{proposition}[Atiyah-Tu]\label{poly-stable_bundles}
Let $\mathcal{E}$ be a poly-stable holomorphic vector bundle of rank $r$ and degree $d$ over a compact connected Riemann surface $X$ of genus $1$. Let us set $h:=r\wedge d$, $r':=\frac{r}{h}$ and $d':=\frac{d}{h}$. Then $\mathcal{E} \simeq \mathcal{E}_1\oplus\cdots\oplus \mathcal{E}_h$ where each $\mathcal{E}_i$ is a stable holomorphic vector bundle of rank $r'$ and degree $d'$.
\end{proposition}
\begin{proof}
By definition, a poly-stable bundle of rank $r$ and degree $d$ is isomorphic to a
direct sum $\mathcal{E}_1\oplus \cdots\oplus \mathcal{E}_k$ of stable bundles of slope
$\frac{d}{r}=\frac{d'}{r'}$. Since $d'\wedge r'=1$ and each $\mathcal{E}_i$ is stable of
slope $\frac{d'}{r'}$, each $\mathcal{E}_i$ must have rank $r'$ and degree $d'$ (because
stable bundles over elliptic curves must have coprime rank and degree). Since
$\mathrm{rk}(\mathcal{E}_1\oplus\cdots\oplus\mathcal{E}_k) = kr'=\mathrm{rk}(\mathcal{E})=r$, we have indeed $k=h$.
\end{proof}
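For instance, if $r=4$ and $d=2$, then $h=2$, $r'=2$ and $d'=1$, so any poly-stable bundle of rank $4$ and degree $2$ is of the form $\mathcal{E}_1\oplus\mathcal{E}_2$ with each $\mathcal{E}_i$ stable of rank $2$ and degree $1$, hence of slope $\frac{1}{2}$, with coprime rank and degree, as required for stability over an elliptic curve.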
To understand the moduli space $\mathcal{M}_{X}(r,d)$ of semi-stable holomorphic vector bundles of rank $r$ and degree $d$ over a complex elliptic curve $X$, one then has the next two theorems.
\begin{theorem}[Atiyah-Tu]\label{moduli_space_coprime_case}
Let $X$ be a compact connected Riemann surface of genus $1$ and assume that $r\wedge d=1$. Then the determinant map $\det:\mathcal{M}_{X}(r,d)\longrightarrow\mathrm{Pic}^{\,d}_X$ is an isomorphism of complex analytic manifolds of dimension $1$.
\end{theorem}
Note that, when $r\wedge d=1$, any semi-stable vector bundle of rank $r$ and degree $d$ is in fact stable (over a curve of arbitrary genus) and that, to prove Theorem \ref{moduli_space_coprime_case}, it is in particular necessary to show that a stable vector bundle $\mathcal{E}$ of rank $r$ and degree $d$ over a complex elliptic curve $X$ satisfies $\mathcal{E}\otimes \mathcal{L}\simeq \mathcal{E}$ if and only if $\mathcal{L}$ is an $r$-torsion point in $\mathrm{Pic}^{\,0}_X$ (i.e., $\mathcal{L}^{\otimes r}\simeq \mathcal{O}_X$), a phenomenon which only occurs in genus $1$.
If now $h:=r\wedge d\geq 2$, then we know, by Proposition \ref{poly-stable_bundles}, that a semi-stable vector bundle of rank $r$ and degree $d$ is isomorphic to the direct sum of $h$ stable vector bundles of rank $r'=r/h$ and degree $d'=d/h$. Combining this with Theorem \ref{moduli_space_coprime_case}, one obtains the following result, due to Tu.
\begin{theorem}[{\cite[Theorem 1]{Tu}}]\label{moduli_space_general_case}
Let $X$ be a compact connected Riemann surface of genus $1$ and let
$h:=r\wedge d$. Then there is an isomorphism of complex analytic manifolds
$$\begin{array}{ccc}
\mathcal{M}_{X}(r,d) & \overset{\simeq}{\longrightarrow} & \mathrm{Sym}^h(\mathrm{Pic}^{d/h}_X)\\
\mathcal{E} \simeq \mathcal{E}_1\oplus \cdots \oplus \mathcal{E}_h & \longmapsto & [\det\mathcal{E}_i]_{1\leq i\leq h}
\end{array}.$$
\end{theorem}
\noindent In particular, $\mathcal{M}_{X}(r,d)$ has dimension $h=r\wedge d$. Since the choice of a point $x_0\in X$ provides an isomorphism $\mathrm{Pic}^{\,d}_X\simeq_\mathbb{C} X$, we have indeed $\mathcal{M}_{X}(r,d) \simeq_\mathbb{C} \mathrm{Sym}^h(X)$. In the next section, we will analyze the corresponding situation over $\mathbb{R}$. But first we recall the basics about real and quaternionic vector bundles.
Let $(X,\sigma)$ be a real Riemann surface, i.e., a Riemann surface $X$ endowed with a real structure $\sigma$. A real holomorphic vector bundle over $(X,\sigma)$ is a pair $(\mathcal{E},\tau)$ such that $\mathcal{E}\longrightarrow X$ is a holomorphic vector bundle over $X$ and $\tau:\mathcal{E}\longrightarrow\mathcal{E}$ is an anti-holomorphic map such that
\begin{enumerate}
\item the diagram
$$\begin{CD}
\mathcal{E} @>{\tau}>> \mathcal{E}\\
@VVV @VVV\\
X @>{\sigma}>> X
\end{CD}$$ is commutative;
\item the map $\tau$ is fibrewise $\mathbb{C}$-anti-linear: $\forall v\in\mathcal{E}$, $\forall \lambda\in\mathbb{C}$, $\tau(\lambda v) =\ov{\lambda}\tau(v)$;
\item $\tau^2=\mathrm{Id}_\mathcal{E}$.
\end{enumerate}
\noindent A quaternionic holomorphic vector bundle over $(X,\sigma)$ is a pair $(\mathcal{E},\tau)$ satisfying Conditions (1) and (2) above, as well as a modified third condition: (3)' $\tau^2=-\mathrm{Id}_\mathcal{E}$. A homomorphism $\phi:(\mathcal{E}_1,\tau_1) \longrightarrow (\mathcal{E}_2,\tau_2)$ between two real (respectively,\ quaternionic) vector bundles is a holomorphic map $\phi:\mathcal{E}_1\longrightarrow\mathcal{E}_2$ such that
\begin{enumerate}
\item the diagram
$$\begin{CD}
\mathcal{E}_1 @>{\phi}>> \mathcal{E}_2 \\
@VVV @VVV \\
X @= X
\end{CD}$$ is commutative;
\item $\phi\circ\tau_1=\tau_2\circ\phi$.
\end{enumerate}
A real (respectively,\ quaternionic) holomorphic vector bundle is called stable if for any $\tau$-invariant sub-bundle $\mathcal{F}\subset \mathcal{E}$, one has $\mu(\mathcal{F})<\mu(\mathcal{E})$. It is called semi-stable if for any such $\mathcal{F}$, one has $\mu(\mathcal{F})\leq\mu(\mathcal{E})$. As shown in \cite{Sch_JSG}, $(\mathcal{E},\tau)$ is semi-stable as a real (respectively,\ quaternionic) vector bundle if and only if $\mathcal{E}$ is semi-stable as a holomorphic vector bundle, but $(\mathcal{E},\tau)$ may be stable as a real (respectively,\ quaternionic) vector bundle while being only poly-stable as a holomorphic vector bundle (when $\mathcal{E}$ is in fact stable, we will say that $(\mathcal{E},\tau)$ is geometrically stable). However, any semi-stable
real (respectively,\ quaternionic) vector bundle admits real (respectively,\ quaternionic) Jordan-H\"older filtrations (where the successive quotients can sometimes be stable in the real sense only) and there is a corresponding notion of poly-stable
real (respectively,\ quaternionic) vector bundle, which turns out to be equivalent to being poly-stable and real (respectively,\ quaternionic). Real and quaternionic vector bundles over a compact connected real Riemann surface $(X,\sigma)$ were topologically classified in \cite{BHH}. If $X^\sigma\neq\emptyset$, a real vector bundle $(\mathcal{E},\tau)$ over $(X,\sigma)$ defines in particular a real vector bundle in the ordinary sense $\mathcal{E}^\tau\longrightarrow X^\sigma$, hence an associated first Stiefel-Whitney class $w_1(\mathcal{E}^\tau)\in H^1(X^\sigma;\mathbb{Z}/2\mathbb{Z})\simeq (\mathbb{Z}/2\mathbb{Z})^n$, where $n\in\{1,\cdots,g+1\}$ is the
number of connected components of $X^\sigma$. The topological classification of real and quaternionic vector bundles then goes as follows.
\begin{theorem}[\cite{BHH}]\label{top_classif}
Let $(X,\sigma)$ be a compact connected real Riemann surface.
\begin{enumerate}
\item If $X^\sigma\neq\emptyset$, real vector bundles over $(X,\sigma)$ are classified up to smooth isomorphism by the numbers $r=\mathrm{rk}(\mathcal{E})$, $d=\deg(\mathcal{E})$ and $(s_1,\cdots,s_n)=w_1(\mathcal{E}^\tau)$, subject to the condition $s_1+\cdots+s_n\equiv d\ (\mathrm{mod}\ 2)$. Quaternionic vector bundles must have even rank and degree in this case and are classified up to smooth isomorphism by the pair $(2r,2d)$.
\item If $X^\sigma=\emptyset$, real vector bundles over $(X,\sigma)$ must have even degree and are classified up to smooth isomorphism by the pair $(r,2d)$. Quaternionic vector bundles are classified up to smooth isomorphism by the pair $(r,d)$, subject to the condition $d+r(g-1)\equiv 0\ (\mathrm{mod}\ 2)$. In particular, if $g=1$, real and quaternionic vector bundles alike must have even degree.
\end{enumerate}
\end{theorem}
\noindent Theorem \ref{top_classif} will be useful in Section \ref{topology}, for the proof of Theorem \ref{real_pts_coprime_case}.
\subsection{The real structure of the moduli space}\label{real_structure_on_mod_space}
First, let $(X,\sigma)$ be a real Riemann surface of arbitrary genus $g$. Then the involution $\mathcal{E}\longmapsto\os{\mathcal{E}}$ preserves the rank and the degree of a holomorphic vector bundle, and the bundle $\os{\mathcal{E}}$ is stable (respectively,\ semi-stable) if and only if $\mathcal{E}$ is. Moreover, if $\mathcal{E}$ is semi-stable, a Jordan-H\"older filtration of $\mathcal{E}$ is mapped to a Jordan-H\"older filtration of $\os{\mathcal{E}}$, so, for any $g$, the moduli space $\mathcal{M}_{X}(r,d)$ of semi-stable holomorphic vector bundles of rank $r$ and degree $d$ on $X$ has an induced real structure. Assume now that $g=1$ and let us prove Theorem \ref{moduli_space_over_R}.
\begin{proof}[Proof of Theorem \ref{moduli_space_over_R}] Since, for any vector bundle $\mathcal{E}$ one has $\det(\os{\mathcal{E}})=\os{(\det\mathcal{E})}$, the map $$\begin{array}{ccc}
\mathcal{M}_{X}(r,d) & \overset{\simeq}{\longrightarrow} & \mathrm{Sym}^h(\mathrm{Pic}^{d/h}_X)\\
\mathcal{E} \simeq \mathcal{E}_1\oplus \cdots \oplus \mathcal{E}_h & \longmapsto & [\det\mathcal{E}_i]_{1\leq i\leq h}
\end{array}$$ of Theorem \ref{moduli_space_general_case} is a real map. If $X^\sigma\neq\emptyset$, then by Theorem \ref{line_bdle_case}, we have $$\mathrm{Pic}^{\,d}_X\simeq_\mathbb{R}\mathrm{Pic}^{\,0}_X\simeq_\mathbb{R} X$$ so $\mathcal{M}_{X}(r,d) \simeq_\mathbb{R} \mathrm{Sym}^h(X)$ in this case. And if $X^\sigma=\emptyset$, we distinguish between the cases $d=2e+1$ and $d=2e$ to obtain, again by Theorem \ref{line_bdle_case}, that $$\mathcal{M}_{X}(r,d)\simeq_\mathbb{R} \left\{ \begin{array}{ccl}
\mathrm{Sym}^h(X) & \mathrm{if} & d/h\ \mathrm{is\ odd},\\
\mathrm{Sym}^h(\mathrm{Pic}^{\,0}_X) & \mathrm{if} & d/h\ \mathrm{is\ even},
\end{array}\right.$$ which finishes the proof of Theorem \ref{moduli_space_over_R}.
\end{proof}
Let us now focus on the case $d=0$, where there is a nice alternate description of the
moduli variety in terms of representations of the fundamental group of the elliptic
curve $(X,x_0)$. Since $\pi_1(X,x_0)\simeq \mathbb{Z}^2$ is a free Abelian group on two
generators, a rank $r$ unitary representation of it is entirely determined by the data
of two commuting unitary matrices $u_1,u_2$ in $\mathbf{U}(r)$ (in particular, such a
representation is never irreducible unless $r=1$) and we may assume that these two
matrices lie in the maximal torus $\mathbf{T}_r\subset \mathbf{U}(r)$ consisting of diagonal
unitary matrices. The Weyl group of $\mathbf{T}_r$ is $\mathbf{W}_r\simeq\mathfrak{S}_r$, the symmetric group on
$r$ letters, and one has
\begin{equation}\label{torus_reduction}\mathrm{Hom}(\pi_1(X,x_0);\mathbf{U}(r))/\mathbf{U}(r) \simeq
\mathrm{Hom}(\pi_1(X,x_0);\mathbf{T}_r)/\mathbf{W}_r.\end{equation} Note that since $\pi_1(X,x_0)$ is Abelian,
there is a well-defined action of $\sigma$ on it even if $x_0\notin X^\sigma$: a loop $\gamma$
at $x_0$ is sent to the loop $\sigma\circ\gamma$ at $\sigma(x_0)$ then brought back to $x_0$
by conjugation by an arbitrary path between $x_0$ and $\sigma(x_0)$. Combining this with
the involution $u\longmapsto\ov{u}$ of $\mathbf{U}(r)$, we obtain an action of $\sigma$ on
$\mathrm{Hom}(\pi_1(X,x_0);\mathbf{U}(r))$, defined by sending a representation $\rho$ to the representation
$\sigma\rho\sigma$. This action preserves the subset $\mathrm{Hom}(\pi_1(X,x_0);\mathbf{T}_r)$ and is
compatible with the conjugacy action of $\mathbf{U}(r)$ in the sense that
$\sigma(\mathrm{Ad}_u\,\rho)\sigma=\mathrm{Ad}_{\sigma(u)}\, (\sigma\rho\sigma)$, so it induces an
involution on the representation varieties $\mathrm{Hom}(\pi_1(X,x_0);\mathbf{U}(r))/\mathbf{U}(r)$ and
$\mathrm{Hom}(\pi_1(X,x_0);\mathbf{T}_r)/\mathbf{W}_r$ and the bijection \eqref{torus_reduction} is equivariant
for the actions just described. By the results of Friedman, Morgan and Witten in
\cite{FMW} and Laszlo in \cite{Laszlo}, this representation variety is in fact
isomorphic to the moduli space $\mathcal{M}_X(r,0)$. Moreover, the involution $\mathcal{E}\longmapsto\os{\mathcal{E}}$
on bundles corresponds to the involution $\rho\longmapsto\sigma\rho\sigma$ on unitary representations.
Furthermore, $$\mathbf{T}_r \simeq \underbrace{\mathbf{U}(1)\times\cdots\times\mathbf{U}(1)}_{r\ \mathrm{times}}
\simeq \mathbf{U}(1)\otimes_{\mathbb{Z}}\mathbb{Z}^r$$ as Abelian Lie groups, where $\mathbb{Z}^r$ can be interpreted as
$\pi_1(\mathbf{T}_r)$. In particular, the Galois action induced on $\mathbb{Z}^r$ by the complex
conjugation on $\mathbf{T}_r$ is simply $(n_1,\cdots,n_r)\longmapsto(-n_1,\cdots,-n_r)$ and the
isomorphism $\mathbf{T}_r\simeq\mathbf{U}(1)\otimes\mathbb{Z}^r$ is equivariant with respect to these natural
real structures. Finally, the bijection $$\mathrm{Hom}(\pi_1(X,x_0);\mathbf{T}_r) \simeq
\mathrm{Hom}(\pi_1(X,x_0);\mathbf{U}(1))\otimes\mathbb{Z}^r$$ is also equivariant and the representation variety
$\mathrm{Hom}(\pi_1(X,x_0);\mathbf{U}(1))$ is isomorphic to $\mathrm{Pic}^{\,0}_X$ as a real variety. We have
thus proved the following result, which is an analogue over $\mathbb{R}$ of one of the results
in \cite{FMW,Laszlo}.
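To make the preceding discussion concrete in the simplest case, here is the $r=1$ instance (an illustration added here; the notation $\gamma_1,\gamma_2$ for generators is ours):
```latex
% Illustration (r = 1): choosing generators gamma_1, gamma_2 of
% pi_1(X,x_0) = Z^2 identifies the representation space with a torus,
\[
\mathrm{Hom}(\pi_1(X,x_0);\mathbf{U}(1)) \;\simeq\; \mathbf{U}(1)\times\mathbf{U}(1),
\qquad \rho \;\longmapsto\; \big(\rho(\gamma_1),\rho(\gamma_2)\big),
\]
% and the Weyl quotient is trivial (W_1 is the trivial group). The involution
% rho -> sigma rho sigma then acts by u -> conj(u) combined with the induced
% action of sigma on the generators, recovering Pic^0_X as a real variety.
```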
\begin{theorem}\label{moduli_space_d_equal_0}
Let $(X,x_0,\sigma)$ be a real elliptic curve. Then the map $$\begin{array}{ccc}
(\mathrm{Pic}^{\,0}_X\otimes_{\mathbb{Z}}\mathbb{Z}^r) \simeq \big(\mathrm{Pic}^{\,0}_X\big)^r & \longrightarrow & \mathcal{M}_X(r,0)\\
(\mathcal{L}_1,\cdots,\mathcal{L}_r) & \longmapsto & \mathcal{L}_1\oplus\cdots\oplus\mathcal{L}_r
\end{array}$$ induces an isomorphism $$\mathcal{M}_X(r,0)\simeq_{\mathbb{R}} (\mathrm{Pic}^{\,0}_X\otimes \mathbb{Z}^r)/\mathfrak{S}_r,$$ where the symmetric group $\mathfrak{S}_r$ acts on $\mathbb{Z}^r$ by permutation.\\ When $X^\sigma\neq\emptyset$, one can further identify $\mathrm{Pic}^{\,0}_X$ with $X$ over $\mathbb{R}$ and obtain the isomorphism $$\mathcal{M}_X(r,0) \simeq_{\mathbb{R}} (X\otimes\mathbb{Z}^r) / \mathfrak{S}_r.$$
\end{theorem}
The results of Section \ref{indecomp_bdles} will actually give an alternate proof of Theorem \ref{moduli_space_d_equal_0}, by using the theory of indecomposable vector bundles over elliptic curves. We point out that algebraic varieties of the form $(X\otimes\pi_1(\mathbf{T}))/\mathbf{W}_{\mathbf{T}}$ for $X$ a complex elliptic curve have been studied for instance by Looijenga in \cite{Looijenga}, who identified them with certain weighted projective spaces determined by the root system of $\mathbf{T}$, when the ambient group $G\supset \mathbf{T}$ is semi-simple. Theorem \ref{moduli_space_d_equal_0} shows that, over $\mathbb{R}$, it may sometimes be necessary to replace $X$ by $\mathrm{Pic}^0_X$.
To conclude on the case where $d=0$, we recall that, on $\mathcal{M}_X(r,0)$, there exists another real structure, obtained from the real structure $\mathcal{E}\longmapsto\os{\mathcal{E}}$ by composing it with the holomorphic involution $\mathcal{E}\longmapsto \mathcal{E}^*$, which in general sends a vector bundle of
degree $d$ to a vector bundle of degree $-d$ and thus preserves only the moduli spaces $\mathcal{M}_X(r,0)$. We then denote by
$$
\eta_r\, :
\begin{array}{ccc} {\mathcal M}_X(r,0) & \longrightarrow & {\mathcal M}_X(r,0)\\
\mathcal{E} & \longmapsto & \overline{\sigma^*\mathcal{E}}^{\,*}
\end{array}
$$ this new real structure on the moduli space ${\mathcal M}_X(r,0)$. In particular, we have
$$
\eta_1\, :
\begin{array}{ccc} \mathrm{Pic}^{\,0}_X & \longrightarrow & \mathrm{Pic}^{\,0}_X\\
\mathcal{L} & \longmapsto & \overline{\sigma^*\mathcal{L}}^{\,*}
\end{array}
$$
and we note that $\eta_1$ has real points because it fixes the trivial line bundle. The real elliptic curve $(\mathrm{Pic}^{\,0}_X,\eta_1)$ is, in general, not isomorphic to $(X,\sigma)$, even when $\sigma$ has fixed points. We can nonetheless characterize the new real structure of the moduli spaces $\mathcal{M}_X(r,0)$ in the following way.
\begin{proposition}\label{prop1}
The real variety $({\mathcal M}_X(r,0), \eta_r)$ is isomorphic to the $r$-fold
symmetric product of the real elliptic curve $(\mathrm{Pic}^{\,0}_X,\eta_1)$.
\end{proposition}
\begin{proof}
The proposition is proved in the same way as Theorem
\ref{moduli_space_d_equal_0}, changing only the real structures under consideration.
\end{proof}
\subsection{Topology of the set of real points in the coprime case}\label{topology}
In rank $1$, the topology of the set of real points of $\mathrm{Pic}^{\,d}_X$ is well understood and so is the modular interpretation of its elements.
\begin{theorem}[\cite{GH}, case $g=1$]\label{GH_case_g_equal_1}
Let $(X,\sigma)$ be a compact real Riemann surface of genus $1$ and let $d\in\mathbb{Z}$.
\begin{enumerate}
\item If $X^\sigma\neq\emptyset$, then $(\mathrm{Pic}^{\,d}_X)^\sigma\simeq X^\sigma$ has $1$ or $2$ connected components. Elements of $(\mathrm{Pic}^{\,d}_X)^\sigma$ correspond to real isomorphism classes of real holomorphic line bundles over $(X,\sigma)$ and two such real line bundles $(\mathcal{L}_1,\tau_1)$ and $(\mathcal{L}_2,\tau_2)$ lie in the same connected component of $(\mathrm{Pic}^{\,d}_X)^\sigma$ if and only if $w_1(\mathcal{L}_1^{\tau_1})= w_1(\mathcal{L}_2^{\tau_2})$.
\item If $X^\sigma=\emptyset$ and $d=2e+1$, then $(\mathrm{Pic}^{\,d}_X)^\sigma\simeq X^\sigma$ is empty.
\item If $X^\sigma=\emptyset$ and $d=2e$, then $(\mathrm{Pic}^{\,d}_X)^\sigma\simeq (\mathrm{Pic}^{\,0}_X)^\sigma$ has $2$ connected components, corresponding to isomorphism classes of either real or quaternionic line bundles of degree $d$, depending on the connected component of $(\mathrm{Pic}^{\,d}_X)^\sigma$ in which they lie.
\end{enumerate}
Moreover, in cases $\textstyle{(1)}$ and $\textstyle{(3)}$, any given connected component of $(\mathrm{Pic}^{\,d}_X)^\sigma$ is diffeomorphic to $S^1$.
\end{theorem}
For real Riemann surfaces of genus $g\geq 2$, the topology of $(\mathrm{Pic}^{\,d}_X)^\sigma$, in particular the number of connected components, is a bit more involved but also covered in \cite{GH}, the point being that these components are indexed by the possible topological types of real and quaternionic line bundles over $(X,\sigma)$. For vector bundles of rank $r\geq 2$ on real Riemann surfaces of genus $g\geq 2$, a generalization of the results of Gross and Harris was obtained in \cite{Sch_JSG}: we recall here the result for coprime rank and degree (in general, a similar but more complicated result holds provided one restricts one's attention to the stable locus in $\mathcal{M}_{X}(r,d)$). The coprime case is the case that we will actually generalize to genus $1$ curves (where stable bundles can only exist in coprime rank and degree).
\begin{theorem}[\cite{Sch_JSG}]\label{top_real_pts_hyp_case}
Let $(X,\sigma)$ be a compact real Riemann surface of genus $g\geq 2$ and assume that $r\wedge d=1$. The number of connected components of $\mathcal{M}_{X}(r,d)^\sigma$ is equal to:
\begin{enumerate}
\item $2^{n-1}$ if $X^\sigma$ has $n>0$ connected components. In this case, elements of $\mathcal{M}_{X}(r,d)^\sigma$ correspond to real isomorphism classes of real holomorphic vector bundles of rank $r$ and degree $d$ and two such bundles $(\mathcal{E}_1,\tau_1)$ and $(\mathcal{E}_2,\tau_2)$ lie in the same connected component if and only if $w_1(\mathcal{E}_1^{\tau_1})=w_1(\mathcal{E}_2^{\tau_2})$.
\item $0$ if $X^\sigma=\emptyset$, $d$ is odd and $r(g-1)$ is even.
\item $1$ if $X^\sigma=\emptyset$, $d$ is odd and $r(g-1)$ is odd, in which case the elements of $\mathcal{M}_{X}(r,d)^\sigma$ correspond to quaternionic isomorphism classes of quaternionic vector bundles of rank $r$ and degree $d$.
\item $1$ if $X^\sigma=\emptyset$, $d$ is even and $r(g-1)$ is odd, in which case the elements of $\mathcal{M}_{X}(r,d)^\sigma$ correspond to real isomorphism classes of real vector bundles of rank $r$ and degree $d$.
\item $2$ if $X^\sigma=\emptyset$, $d$ is even and $r(g-1)$ is even, in which case there is one component consisting of real isomorphism classes of real vector bundles of rank $r$ and degree $d$ while the other consists of quaternionic isomorphism classes of quaternionic vector bundles of rank $r$ and degree $d$.
\end{enumerate}
\end{theorem}
Now, using Theorem \ref{moduli_space_over_R}, we can extend Theorem \ref{top_real_pts_hyp_case} to the case $g=1$. Indeed, to prove Theorem \ref{real_pts_coprime_case}, we only need to combine Theorem \ref{GH_case_g_equal_1} and the coprime case of Theorem \ref{moduli_space_over_R} (i.e.,\ $h=1$), with the following result, for a proof of which we refer to either \cite{BHH} or \cite{Sch_JSG}.
\begin{proposition}\label{self_conj_stable_bundles}
Let $(X,\sigma)$ be a compact connected real Riemann surface and let $\mathcal{E}$ be a stable holomorphic vector bundle over $X$ satisfying $\os{\mathcal{E}}\simeq\mathcal{E}$. Then $\mathcal{E}$ is either real or quaternionic and cannot be both. Moreover, two different real or quaternionic structures on $\mathcal{E}$ are conjugate by a holomorphic automorphism of $\mathcal{E}$.
\end{proposition}
\noindent Note that it is easy to show that two real (respectively,\ quaternionic) structures on $\mathcal{E}$ \textit{differ} by a holomorphic automorphism $e^{i\theta}\in S^1\subset \mathbb{C}^*\simeq \mathrm{Aut}(\mathcal{E})$ but, in order to prove that these two structures $\tau_1$ and $\tau_2$, say, \textit{are conjugate}, we need to observe that $\tau_2(\,\cdot\,)=e^{i\theta}\tau_1(\,\cdot\,)=e^{i\theta/2}\tau_1(e^{-i\theta/2}\,\cdot\,)$. Then, to finish the proof of Theorem \ref{real_pts_coprime_case}, we proceed as follows.
\begin{proof}[Proof of Theorem \ref{real_pts_coprime_case}]
Recall that $X$ here has genus $1$. If $X^\sigma\neq\emptyset$, quaternionic vector bundles must have even rank and degree by Theorem \ref{top_classif}, so, by Proposition \ref{self_conj_stable_bundles}, points of $\mathcal{M}_{X}(r,d)^\sigma$ correspond in this case to real isomorphism classes of geometrically stable real vector bundles of rank $r$ and degree $d$. By Theorem \ref{moduli_space_over_R}, one indeed has $\mathcal{M}_{X}(r,d)^\sigma\simeq (\mathrm{Pic}^{\,d}_X)^\sigma \simeq X^\sigma$ in this case. Moreover, since the diffeomorphism $\mathcal{M}_{X}(r,d)^\sigma\simeq (\mathrm{Pic}^{\,d}_X)^\sigma$ is provided by the determinant map, the connected components of $\mathcal{M}_{X}(r,d)^\sigma$, or equivalently of $(\mathrm{Pic}^{\,d}_X)^\sigma$ are indeed distinguished by the first Stiefel-Whitney class of the real part of the real bundles that they parametrize, as in Theorem \ref{GH_case_g_equal_1}. If now $X^\sigma=\emptyset$, then by Theorem \ref{top_classif}, real and quaternionic vector bundles must have even degree and we can again use Theorem \ref{GH_case_g_equal_1} to conclude: note that since the diffeomorphism $\mathcal{M}_{X}(r,d)^\sigma\simeq
(\mathrm{Pic}^{\,d}_X)^\sigma$ is provided by the determinant map, when $d$ is even $r$ must be
odd (because $r$ is assumed to be coprime to $d$), so the determinant indeed takes real vector bundles to real line bundles and quaternionic vector bundles to quaternionic line bundles.
\end{proof}
Had we not assumed $r\wedge d=1$, then the situation would have been more complicated to analyze, because the determinant of a quaternionic vector bundle of even rank is a real line bundle and also because, when $h=r\wedge d$ is even, the real space $\mathrm{Sym}^h(X)$ may have real points even if $X$ does not (points of the form $[x_i,\sigma(x_i)]_{1\leq i\leq \frac{h}{2}}$ for $x_i\in X$).
\subsection{Real vector bundles of fixed determinant}\label{fixed_det_case_section}
Let us now consider spaces of vector bundles of fixed determinant. By Theorem 3 of
\cite{Tu}, one has, for any $\mathcal{L}\in\mathrm{Pic}^{\,d}_X$, $\mathcal{M}_X(r,\mathcal{L}):=\det ^{-1}(\{\mathcal{L}\})
\simeq_{\mathbb{C}} \mathbb{P}_{\mathbb{C}}^{h-1}$ where $d=\deg(\mathcal{L})$ and $h=r\wedge d$. This is proved
in the following way: under the identification $\mathcal{M}_{X}(r,d)\simeq_{\mathbb{C}}\mathrm{Sym}^h(X)$, there is
a commutative diagram $$\begin{CD}
\mathcal{M}_{X}(r,d) @>{\simeq}>> \mathrm{Sym}^h(X)\\
@VV{\det}V @VV{\mathrm{AJ}}V \\
\mathrm{Pic}^{\,d}_X @>{\simeq}>> \mathrm{Pic}^{\,h}_X
\end{CD}$$ where $$\mathrm{AJ}: \begin{array}{ccc}
\mathrm{Sym}^h(X) & \longrightarrow & \mathrm{Pic}^{\,h}_X \\
(x_1,\cdots,x_h) & \longmapsto & \mathcal{L}(x_1+\ldots+x_h)
\end{array}$$ is the Abel-Jacobi map (taking a finite family of points $(x_1,\cdots,x_h)$ to the line bundle associated to the divisor $x_1+\ldots+x_h$) and the fiber of the Abel-Jacobi map above a holomorphic line bundle $\mathcal{L}$ of degree $h$ is the projective space $\mathbb{P}(H^0(X,\mathcal{L}))$ which, since $\deg(\mathcal{L})=h\geq 1$ and $X$ has genus $1$, is isomorphic to $\mathbb{P}_{\mathbb{C}}^{h-1}$. Evidently, the same proof will work over $\mathbb{R}$ whenever we can identify $\mathrm{Pic}^{\,d}_X$ and $\mathrm{Pic}^{\,h}_X$ as real varieties, which happens in particular when $X^\sigma\neq\emptyset$.
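As a sanity check of the argument just recalled, consider the smallest non-coprime case, $r=2$ and $d=2$, so $h=2$ (this worked instance is ours):
```latex
% Worked case (r = 2, d = 2, hence h = 2): for L of degree 2 on the
% genus-1 curve X, Riemann-Roch gives
\[
h^0(X,\mathcal{L}) - h^1(X,\mathcal{L}) \;=\; \deg(\mathcal{L}) + 1 - g \;=\; 2 + 1 - 1 \;=\; 2,
\]
% and h^1(X,L) = h^0(X, K_X tensor L^{-1}) = 0 by Serre duality, because
% K_X is trivial on an elliptic curve and deg(L^{-1}) = -2 < 0. Hence
% h^0(X,L) = 2 and the fiber of AJ over L is
\[
\mathbb{P}\big(H^0(X,\mathcal{L})\big) \;\simeq\; \mathbb{P}^{\,1}_{\mathbb{C}},
\]
% in accordance with M_X(2,L) being isomorphic to P^{h-1} = P^1.
```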
\begin{theorem}\label{fixed_det_case}
Let $(X,x_0,\sigma)$ be a real elliptic curve satisfying $X^\sigma\neq\emptyset$ and let $\mathcal{L}$ be a real line bundle of degree $d$ on $X$. Then, for all $r\geq 1$, $$\mathcal{M}_X(r,\mathcal{L}) \simeq_{\mathbb{R}} \mathbb{P}_{\mathbb{R}}^{h-1}$$ where $h=r\wedge d$.
\end{theorem}
\begin{proof}
When $X^\sigma\neq\emptyset$, we can choose $x_0\in X^\sigma$ and use Theorem \ref{line_bdle_case} to identify all Picard varieties $\mathrm{Pic}^{\,k}_X$ over $\mathbb{R}$, then reproduce Tu's proof recalled above.
\end{proof}
\section{Indecomposable vector bundles}\label{indecomp_bdles}
\subsection{Indecomposable vector bundles over a complex elliptic curve}
As recalled in the introduction, a theorem of Grothendieck of 1957 shows that any holomorphic vector bundle on $\mathbb{C}\mathbf{P}^1$ is isomorphic to a direct sum of holomorphic line bundles (\cite{Grot_P1}) and this can be easily recast in a modern perspective by using the notions of stability and semi-stability of vector bundles over curves, introduced by Mumford in 1962 and first studied by Mumford and Seshadri (\cite{Mumford_Proc,Seshadri}): the moduli variety of semi-stable vector bundles of rank $r$ and degree $d$ over $\mathbb{C}\mathbf{P}^1$ is a single point if $r$ divides $d$ and is empty otherwise. As for vector bundles over a complex elliptic curve, the study was initiated by Atiyah in a paper published in 1957, thus at a time when the notion of stability was not yet available. Rather, Atiyah's starting point in \cite{Atiyah_elliptic_curves} is the notion of an indecomposable vector bundle: a holomorphic vector bundle $\mathcal{E}$ over a complex curve $X$ is said to be indecomposable if it is not isomorphic to a direct sum of non-trivial holomorphic bundles. In the present paper, we shall denote by $\mathcal{I}_X(r,d)$ the set of isomorphism classes of indecomposable vector bundles of rank $r$ and degree $d$. It is immediate from the definition that a holomorphic vector bundle is a direct sum of indecomposable ones. Moreover, one has the following result, which is a consequence of the categorical Krull-Schmidt theorem, also due to Atiyah (in 1956), showing that the decomposition of a bundle into indecomposable ones is essentially unique.
\begin{proposition}[{\cite[Theorem 3]{Atiyah_Krull-Schmidt}}]\label{Atiyah_Krull-Schmidt}
Let $X$ be a compact connected complex analytic manifold. Any holomorphic vector bundle $\mathcal{E}$ over $X$ is isomorphic to a direct sum $\mathcal{E}_1\oplus\cdots\oplus\mathcal{E}_k$ of indecomposable vector bundles. If one also has $\mathcal{E}\simeq\mathcal{E}'_1\oplus\cdots\oplus\mathcal{E}'_{l}$, then $l=k$ and there exists a permutation $\sigma$ of the indices such that $\mathcal{E}'_{\sigma(i)}\simeq\mathcal{E}_i$.
\end{proposition}
Going back to the case of a compact, connected Riemann surface $X$ of genus $1$, Atiyah completely describes all indecomposable vector bundles on $X$. He first shows the existence, for any $h\geq 1$, of a unique (isomorphism class of) indecomposable vector bundle $F_h$ of rank $h$ and degree $0$ such that \begin{equation}\label{dim_space_of_sections_of_F_h}
\dim H^0(X;F_h)=1
\end{equation} (\cite[Theorem 5]{Atiyah_elliptic_curves}). As a matter of fact, this is the only vector bundle of rank $h$ and degree $0$ over $X$ with a non-zero space of sections. Let us call $F_h$ the \textit{Atiyah bundle} of rank $h$ and degree $0$. The construction of $F_h$ is by induction, starting from $F_1=\mathcal{O}_X$, the trivial line bundle over $X$, and showing the existence and uniqueness of an extension of the form \begin{equation}\label{ses_defining_F_h}
0 \longrightarrow \mathcal{O}_X \longrightarrow F_h \longrightarrow F_{h-1} \longrightarrow 0.
\end{equation} In particular, $\det(F_h)=\mathcal{O}_X$. Moreover, since $F_h$ is the unique indecomposable vector bundle with non-zero space of sections, one has (\cite[Corollary 1]{Atiyah_elliptic_curves}):
\begin{equation}\label{F_h_self-dual}
F_h^*\simeq F_h.
\end{equation} Note that $F_h$ is an extension of semi-stable bundles so it is semi-stable. The associated poly-stable bundle is the trivial bundle of rank $h$, which is not isomorphic to $F_h$ (in particular, $F_h$ is not itself poly-stable).
Atiyah then shows that any indecomposable vector bundle $\mathcal{E}$ of rank $h$ and degree $0$ is isomorphic to $F_h\otimes \mathcal{L}$ for a line bundle $\mathcal{L}$ of degree $0$ which is unique up to isomorphism (\cite[Theorem 5-(ii)]{Atiyah_elliptic_curves}). Since it follows from the construction of $F_h$ recalled in \eqref{ses_defining_F_h} that $\det(F_h)=\mathcal{O}_X$, one has $\det\mathcal{E}=\mathcal{L}^h$. This sets up a bijection \begin{equation}\label{isom_with_line_bundles}
\begin{array}{ccc}
\mathrm{Pic}^{\,0}_X & \longrightarrow & \mathcal{I}_X(h,0)\\
\mathcal{L} & \longmapsto & F_h\otimes\mathcal{L}
\end{array}.
\end{equation} Note that the map \eqref{isom_with_line_bundles} is just the identity map if $h=1$. Atiyah then uses a marked point $x_0\in X$ to further identify $\mathrm{Pic}^{\,0}_X$ with $X$. In particular, the set $\mathcal{I}_X(h,0)$ inherits a natural structure of complex analytic manifold of dimension $1$.
The next step in Atiyah's characterization of indecomposable vector bundles is to consider the case of vector bundles of non-vanishing degree. He shows that, associated to the choice of a marked point $x_0\in X$, there is, for all $r$ and $d$, a unique bijection (subject to certain conditions) \begin{equation}\label{Atiyah_map}\alpha^{\,x_0}_{r,d}:\mathcal{I}_X(r\wedge d,0) \longrightarrow \mathcal{I}_X(r,d)\end{equation} between the set of isomorphism classes of indecomposable vector bundles of rank $h:=r\wedge d$ (the g.c.d.\ of $r$ and $d$) and degree $0$ and the set of isomorphism classes of indecomposable vector bundles of rank $r$ and degree $d$ (\cite[Theorem 6]{Atiyah_elliptic_curves}). As a consequence, Atiyah can define a canonical indecomposable vector bundle of rank $r$ and degree $d$, namely \begin{equation*}F_{x_0}(r,d):=\alpha^{\,x_0}_{r,d}(F_{r\wedge d})\end{equation*} where $F_{r\wedge d}$ is the indecomposable vector bundle of rank $r\wedge d$ and degree $0$ whose construction was recalled in \eqref{ses_defining_F_h}. We will call the bundle $F_{x_0}(r,d)$ the Atiyah bundle of rank $r$ and degree $d$ (in particular $F_{x_0}(r,0) = F_r$). Atiyah then obtains the following description of indecomposable vector bundles.
\begin{theorem}[{\cite[Theorem 10]{Atiyah_elliptic_curves}}]\label{Atiyah_indecomp_bdles}
Let us set $h=r\wedge d$, $r'=\frac{r}{h}$ and $d'=\frac{d}{h}$. Then every indecomposable vector bundle of rank $r$ and degree $d$ is isomorphic to a bundle of the form $F_{x_0}(r,d)\otimes \mathcal{L}$ where $\mathcal{L}$ is a line bundle of degree $0$. Moreover, $F_{x_0}(r,d)\otimes \mathcal{L}\simeq F_{x_0}(r,d)\otimes\mathcal{L}'$ if and only if $(\mathcal{L}'\otimes\mathcal{L}^{-1})^{r'}\simeq\mathcal{O}_X$.
\end{theorem}
Thus, as a generalization of \eqref{isom_with_line_bundles}, Theorem \ref{Atiyah_indecomp_bdles} shows that there is a surjective map $\mathrm{Pic}^{\,0}_X\longrightarrow \mathcal{I}_X(r,d)$, whose fibers are isomorphic to the group $T_{r'}$ of $r'$-torsion elements in $\mathrm{Pic}^{\,0}_X$. This in particular induces a bijection between the Riemann surface $\mathrm{Pic}^{\,0}_X/T_{r'}\simeq\mathrm{Pic}^{\,0}_X\simeq X$ and the set $\mathcal{I}_X(r,d)$ for all $r$ and $d$, and the set $\mathcal{I}_X(r,d)$ inherits in this way a natural structure of complex analytic manifold of dimension $1$.
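For concreteness, the torsion group $T_{r'}$ can be described explicitly by presenting the curve as a complex torus (this description is standard but added here as an illustration):
```latex
% Writing X = C/(Z + tau.Z) with Im(tau) > 0, the r'-torsion subgroup of
% Pic^0_X = X is the set of r'-division points of the lattice,
\[
T_{r'} \;=\; \left\{\, \frac{a+b\tau}{r'} \;\middle|\; 0\leq a,b< r' \right\}
\;\simeq\; \big(\mathbb{Z}/r'\mathbb{Z}\big)^2,
\]
% so each fiber of the surjection Pic^0_X -> I_X(r,d) consists of (r')^2
% points; for r' = 2 these are the four half-periods 0, 1/2, tau/2, (1+tau)/2.
```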
\subsection{Relation to semi-stable and stable bundles}
It is immediate to prove that stable bundles (over a curve of arbitrary genus) are indecomposable. Moreover, over an elliptic curve, we have the following result, for a proof of which we refer to Tu's paper.
\begin{theorem}[{\cite[Appendix A]{Tu}}]\label{Tu_obs}
Every indecomposable vector bundle of rank $r$ and degree $d$ over a complex elliptic curve is semi-stable. It is stable if and only if $r\wedge d=1$.
\end{theorem}
\noindent In particular, the Atiyah bundles $F_{x_0}(r,d)$ are semi-stable (and stable if and only if $r\wedge d=1$) and, by Proposition \ref{Atiyah_Krull-Schmidt}, every holomorphic vector bundle over a complex elliptic curve is isomorphic to a direct sum of semi-stable bundles. Next, there is a very important relation between indecomposable vector bundles and stable vector bundles, which will be useful in the next section.
\begin{theorem}[Atiyah-Tu]\label{rel_between_stable_and_indecomp}
Set $h=r\wedge d$, $r'=\frac{r}{h}$ and $d'=\frac{d}{h}$. Then the map
$$\begin{array}{ccc}
\mathcal{M}_X(r',d') & \longrightarrow & \mathcal{I}_X(r,d)\\
\mathcal{E}' & \longmapsto & \mathcal{E}'\otimes F_h
\end{array}$$ is a bijection: any indecomposable vector bundle of rank $r$ and degree $d$ is isomorphic to a bundle of the form $\mathcal{E}'\otimes F_h$ where $\mathcal{E}'$ is a stable vector bundle of rank $r'$ and degree $d'$, unique up to isomorphism, and $F_h$ is the Atiyah bundle of rank $h$ and degree $0$.
\end{theorem}
\noindent In particular, $\mathcal{I}_X(r,d)$ inherits in this way a structure of complex analytic manifold of dimension $r'\wedge d'=1$. This result, which generalizes \eqref{isom_with_line_bundles} in a different direction than Theorem \ref{Atiyah_indecomp_bdles}, can be deduced from Atiyah and Tu's papers but we give a proof below for the sake of completeness. It is based on the following lemma.
\begin{lemma}[{\cite[Lemma 24]{Atiyah_elliptic_curves}}]\label{rel_between_the_Atiyah_bdles}
The Atiyah bundles $F_{x_0}(r,d)$ and $F_{x_0}(r',d')$ are related in the following way: $$F_{x_0}(r,d) \simeq F_{x_0}(r',d') \otimes F_h.$$
\end{lemma}
\begin{proof}[Proof of Theorem \ref{rel_between_stable_and_indecomp}]
Let $\mathcal{E}\in \mathcal{I}_X(r,d)$. By Theorem \ref{Atiyah_indecomp_bdles}, there exists a line bundle $\mathcal{L}$ of degree $0$ such that $\mathcal{E}\simeq F_{x_0}(r,d)\otimes\mathcal{L}$. By Lemma \ref{rel_between_the_Atiyah_bdles}, $F_{x_0}(r,d)\simeq F_{x_0}(r',d')\otimes F_h$. Since $r'\wedge d'=1$, Theorem \ref{Tu_obs} shows that $F_{x_0}(r',d')$, hence also $\mathcal{E}':=F_{x_0}(r',d')\otimes \mathcal{L}$, are stable bundles of rank $r'$ and degree $d'$. And one has indeed $\mathcal{E}\simeq \mathcal{E}'\otimes F_h$. Let now $\mathcal{E}'$ and $\mathcal{E}''$ be two stable bundles of rank $r'$ and degree $d'$ such that $\mathcal{E}'\otimes F_h \simeq \mathcal{E}''\otimes F_h$. Since stable bundles are indecomposable, Theorem \ref{Atiyah_indecomp_bdles} shows the existence of two line bundles $\mathcal{L}'$ and $\mathcal{L}''$ of degree $0$ such that $\mathcal{E}'\simeq F_{x_0}(r',d')\otimes \mathcal{L}'$ and $\mathcal{E}'' \simeq F_{x_0}(r',d')\otimes \mathcal{L}''$. Tensoring by $F_h$ and applying Lemma \ref{rel_between_the_Atiyah_bdles}, we obtain that $F_{x_0}(r,d) \otimes \mathcal{L}' \simeq F_{x_0}(r,d) \otimes \mathcal{L}''$ which, again by Theorem \ref{Atiyah_indecomp_bdles}, implies that $\mathcal{L}'$ and $\mathcal{L}''$ differ by an $r'$-torsion point of $\mathrm{Pic}^{\,0}_X$. But then a final application of Theorem \ref{Atiyah_indecomp_bdles} shows that $F_{x_0}(r',d')\otimes \mathcal{L}' \simeq F_{x_0}(r',d') \otimes\mathcal{L}''$,
i.e.,\ $\mathcal{E}'\simeq\mathcal{E}''$.
\end{proof}
Thus, the complex variety $\mathcal{I}_X(r,d)\simeq\mathcal{M}_X(r',d')\simeq X$ is a $1$-dimensional sub-variety of the $h$-dimensional moduli variety $\mathcal{M}_{X}(r,d)\simeq \mathrm{Sym}^h(X)$ and these two non-singular varieties coincide exactly when $r$ and $d$ are coprime. More explicitly, under the identifications $\mathcal{I}_X(r,d)\simeq X$ and $\mathcal{M}_{X}(r,d)\simeq\mathrm{Sym}^h(X)$, the inclusion map $\mathcal{I}_X(r,d)\hookrightarrow \mathcal{M}_{X}(r,d)$, implicit in Theorem \ref{Tu_obs}, is simply the diagonal map \begin{eqnarray*}
X & \longrightarrow & \mathrm{Sym}^h(X)\\
x & \longmapsto & [x,\cdots,x]
\end{eqnarray*} and it commutes with the determinant map (the latter being, on $\mathrm{Sym}^h(X)$, just the Abel-Jacobi map $[x_1,\cdots,x_h]\longmapsto x_1+\cdots+x_h$; see \cite[Theorem 2]{Tu}).
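The compatibility with the determinant can also be seen directly (a verification we spell out, using $\det(F_h)=\mathcal{O}_X$):
```latex
% For an indecomposable bundle E = E' tensor F_h with E' stable of rank r',
% the standard formula for the determinant of a tensor product gives
\[
\det\big(\mathcal{E}'\otimes F_h\big)
\;=\; \big(\det\mathcal{E}'\big)^{\otimes h}\otimes\big(\det F_h\big)^{\otimes r'}
\;=\; \big(\det\mathcal{E}'\big)^{\otimes h},
\]
% while, on the symmetric-product side, the Abel-Jacobi map sends the diagonal
% point [x, ..., x] to the degree-h line bundle L(hx): both sides record
% "h copies" of a single rank-r', degree-d' datum.
```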
\subsection{Indecomposable vector bundles over a real elliptic curve}\label{indecomposable_bdles_over_real_elliptic_curves}
Over a real elliptic curve, the description of indecomposable vector bundles is a bit more complicated than in the complex case, because the Atiyah map $\alpha^{\,x_0}_{r,d}$ defined in \eqref{Atiyah_map} is not a real map unless the point $x_0$ is a real point, which excludes the case where $X^\sigma=\emptyset$. Of course, the case $X^\sigma\neq\emptyset$ is already very interesting and if we follow Atiyah's paper in that case, then the Atiyah map $\alpha^{\,x_0}_{r,d}$ is a real map and the Atiyah bundles $F_{x_0}(r,d)$ are all real bundles. In particular, the description given by Atiyah of the ring structure of the set of isomorphism classes of all vector bundles (namely the way to decompose the tensor product of two Atiyah bundles into a direct sum of Atiyah bundles, see for instance \cite[Appendix A]{Tu} for a concise exposition) directly applies to the sub-ring formed by isomorphism classes of real bundles (note that, in contrast, isomorphism classes of quaternionic bundles do not form a ring, as the tensor product of two quaternionic bundles is a real bundle). To obtain a description of indecomposable bundles over a real elliptic curve which holds without assuming that the curve has real points, we need to replace the Atiyah isomorphism $$\alpha^{\,x_0}_{r,d}: \mathcal{I}_X(r\wedge d,0) \longrightarrow \mathcal{I}_X(r,d)$$ (which cannot be a real map when $X^\sigma=\emptyset$) by the isomorphism $\mathcal{I}_X(r,d) \simeq \mathcal{M}_X(r',d')$ of Theorem \ref{rel_between_stable_and_indecomp} and show that the latter is always a real map. The first step is the following result, about the Atiyah bundle of rank $h$ and degree $0$, whose definition was recalled in \eqref{ses_defining_F_h}.
\begin{proposition}\label{F_h_is_real}
Let $(X,\sigma)$ be a real Riemann surface of genus $1$. For any $h\geq 1$, the indecomposable vector bundle $F_h$ has a canonical real structure.
\end{proposition}
\begin{proof} We proceed by induction. Since $X$ is assumed to be real, $\mathcal{O}_X$ has a canonical real structure. So, if $h=1$, then $F_h$ is canonically real. Assume now that $h>1$ and that $F_{h-1}$ has a fixed real structure. Following again Atiyah (\cite{Atiyah_analytic_connections}), extensions of the form \eqref{ses_defining_F_h} are parametrized by the sheaf cohomology group $H^1(X;\mathrm{Hom}_{\mathcal{O}_X}(F_{h-1};\mathcal{O}_X))=H^1(X;F_{h-1}^{\ *})$. The
uniqueness part of the statement in Atiyah's construction above says that this cohomology group is a complex vector space of dimension $1$, which, in any case, can be checked by Riemann-Roch using Properties \eqref{dim_space_of_sections_of_F_h} and \eqref{F_h_self-dual}. Indeed, since $\deg(F_{h-1}^{\ *})=0$ and $X$ is of genus $g=1$, one has $$h^0(F_{h-1}^{\ *}) - h^1(F_{h-1}^{\ *}) = \deg(F_{h-1}^{\ *}) + (\mathrm{rk}\,F_{h-1}^{\ *})(1-g) =0$$ (where $h^i(\,\cdot\,)=\dim H^i(X;\,\cdot\,)$), so $h^1(F_{h-1}^{\ *})=h^0(F_{h-1}^{\ *})=1$. Now, since $X$ and $F_{h-1}$ have real structures, so does $H^1(X;F_{h-1}^{\ *})$ and the fixed point-space of that real structure corresponds to isomorphism classes of real extensions of $F_{h-1}$ by $\mathcal{O}_X$. Since the fixed-point space of the real structure of $H^1(X;F_{h-1}^{\ *})$ is a $1$-dimensional real vector space, the real structure of $F_h$ is unique up to isomorphism.
\end{proof}
\noindent Thus, in contrast to Atiyah bundles of non-vanishing degree, $F_h$ is always canonically a real bundle. In particular, $\os{F_h}\simeq F_h$. It is then clear that the isomorphism $$\begin{array}{ccc}
\mathcal{M}_X(r',d') & \longrightarrow & \mathcal{I}_X(r,d)\\
\mathcal{E}' & \longmapsto & \mathcal{E}'\otimes F_h
\end{array}$$ is a real map: $\os{\mathcal{E}'}\otimes F_h \simeq \os{\mathcal{E}'} \otimes \os{F_h} \simeq \os{(\mathcal{E}'\otimes F_h)}$, which readily implies Theorem \ref{indecomp_bdles_over_R}. Moreover, one can make the following observation.
\begin{proposition}\label{self_conj_indecomp_bdles}
Let $\mathcal{E}$ be an indecomposable vector bundle of rank $r$ and degree $d$ over the real elliptic curve $(X,x_0,\sigma)$ and assume that $\os{\mathcal{E}}\simeq \mathcal{E}$. Then $\mathcal{E}$ admits either a real or a quaternionic structure.
\end{proposition}
\begin{proof}
By Theorem \ref{rel_between_stable_and_indecomp}, we can write $\mathcal{E}\simeq \mathcal{E}'\otimes F_h$, with $\mathcal{E}'$ stable. Therefore, $$\os{\mathcal{E}} \simeq \os{(\mathcal{E}'\otimes F_h)} \simeq \os{\mathcal{E}'} \otimes \os{F_h} \simeq \os{\mathcal{E}'} \otimes F_h.$$ The assumption $\os{\mathcal{E}}\simeq \mathcal{E}$ then translates to $\os{\mathcal{E}'}\otimes F_h \simeq \mathcal{E}'\otimes F_h$ which, since the map from Proposition \ref{rel_between_stable_and_indecomp} is a bijection, shows that $\os{\mathcal{E}'}\simeq \mathcal{E}'$. As $\mathcal{E}'$ is stable, the fact that $\mathcal{E}'$ admits a real or quaternionic structure $\tau'$ follows from Proposition \ref{self_conj_stable_bundles}. If $\tau_h$ denotes the real structure of $F_h$, we then have that $\tau'\otimes\tau_h$ is a real or quaternionic structure on $\mathcal{E}$, depending on whether $\tau'$ is real or quaternionic.
\end{proof}
\section{Introduction}
\label{intro}
Prime factorization is a problem in the complexity class NP of problems that can be solved in
polynomial time by a nondeterministic machine. Indeed, the prime factors can be verified
efficiently by multiplication.
At present, it is not known if the problem has polynomial computational complexity
and, thus, is in the complexity class P. Nonetheless, the most common cryptographic algorithms
rely on the assumption of hardness of factorization. Website certificates
and bitcoin wallets are examples of resources depending on that
assumption. Since no strict lower bound on the computational complexity is actually
known, many critical services are potentially subject to
future security breaches. Consequently, cryptographic keys have gradually increased
their length to adapt to new findings. For example, the general number-field sieve~\cite{nfsieve}
can break keys that would have been considered secure against previous factoring methods.
Prime factorization is also important for its relation with quantum computing, since
an efficient quantum algorithm for factorization is known. This algorithm is considered
a main argument supporting the supremacy of quantum over classical computing. Thus,
the search for faster classical algorithms is relevant for better understanding the
actual gap between classical and quantum realm.
Many of the known factoring methods use the ring of integers modulo $c$ as a common feature, where
$c$ is the number to be factorized. Examples are Pollard's $\rho$ and $p-1$
algorithms~\cite{pollard1,pollard2}, Williams' $p+1$ algorithm~\cite{williams},
their generalization with cyclotomic polynomials~\cite{bach}, Lenstra elliptic curve
factorization~\cite{lenstra}, and quadratic sieve~\cite{qsieve}.
These methods eventually generate a number, say $m$, sharing a
factor with $c$. Once $m$ is obtained, the common factor can be efficiently
computed by the Euclidean algorithm. Some of these methods use only operations defined in the ring. Others,
such as the elliptic-curve method, also perform division operations by pretending that $c$ is prime.
If such an operation fails at some point, the divisor is taken as the outcome $m$.
In other words, the purpose of these methods is to compute a zero over the field of integers modulo
some prime factor of $c$ by possibly starting from some random initial state. Thus, the general scheme
is summarized by a map $X\mapsto m$ from a random state $X$ in some set $\Omega$ to an integer $m$
modulo $c$. Different states may be tried until $m$ is equal to zero modulo some prime of $c$.
The complexity of the algorithm depends on the computational complexity of generating $X$ in $\Omega$,
the computational complexity of evaluating the map, and the average number of trials required to find a zero.
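This shared scheme can be sketched in a few lines. The helper below is an illustration only: `sample_state` and `evaluate_map` are hypothetical placeholders for the state generator and the map $X\mapsto m$ of a concrete method, and the toy map $x\mapsto x^2-1$ is chosen solely because its zeros $\pm 1$ modulo each prime factor make a nontrivial gcd likely.

```python
import math
import random

def generic_factor(c, sample_state, evaluate_map, max_trials=10000):
    """Generic scheme shared by the methods above: draw a state X,
    map it to an integer m mod c, and take gcd(m, c)."""
    for _ in range(max_trials):
        X = sample_state()
        m = evaluate_map(X) % c
        d = math.gcd(m, c)
        if 1 < d < c:
            return d  # nontrivial factor found
    return None

# Toy instantiation: x -> x^2 - 1 vanishes at x = +-1 modulo each prime,
# so a random x is often zero modulo exactly one factor of c.
c = 91  # 7 * 13
d = generic_factor(c, lambda: random.randrange(1, c), lambda x: x * x - 1)
```

The average number of trials is governed by the ratio of zeros to inputs, exactly as discussed above.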
In this paper, we employ this general scheme by focusing on a class $\Theta$
of maps defined as multivariate rational
functions over prime fields $\mathbb{Z}_p$ of order $p$ and, more generally, over a finite
field $\text{GF}(q)$ of order $q=p^k$ and degree $k$. The set $\Omega$ of inputs is taken equal to
the domain of definition of the maps. More precisely, the maps are first defined over some algebraic
number field $\mathbb{Q}(\alpha)$ of degree $k_0$, where $\alpha$ is an algebraic number, that is,
a solution of some irreducible polynomial $P_I$ of degree $k_0$. Then, the maps
are reinterpreted over a finite field.
Using the general scheme of the other methods, the class $\Theta$
leads to a factoring algorithm with polynomial complexity if
\begin{enumerate}
\item[(a)] the number of distinct zeros, say $N_P$, of the maps in $\Theta$ is arbitrarily large over $\mathbb{Q}(\alpha)$;
\item[(b)] a large fraction of the zeros remain distinct when reinterpreted over a finite field
whose order is greater than about $N_P^{1/M}$;
\item[(c)] \label{cond_c}
the product of the number of parameters by the field degree is upper-bounded by a
sublinear power function of $\log N_P$;
\item[(d)] the computational complexity of evaluating the map given any input is upper-bounded
by a polynomial function in $\log N_P$.
\end{enumerate}
A subexponential factoring complexity is achieved with weaker scaling conditions on
the map complexity, as discussed later in Sec.~\ref{sec_complex}.
Later, this approach to factorization will be reduced to the search of rational points of a
variety having an arbitrarily large number of rational intersection points with a hypersurface.
The scheme employing rational functions resembles some existing methods, such as Pollard's $p-1$
algorithm. The main
difference is that these algorithms generally rely on algebraic properties over finite
fields, whereas the present scheme relies on algebraic properties over the
field $\mathbb{Q}(\alpha)$. For example,
Pollard's method ends up building a polynomial $x^n-1$ with $p-1$ roots over a finite
field $\mathbb{Z}_p$, where $p$ is some prime factor of $c$. This property of the polynomial
comes from Fermat's little theorem and holds if the integer $n$ has $p-1$ as a factor.
Thus, the existence of a large number of zeros of $x^n-1$ strictly
depends on the field. Indeed, the polynomial does not have more than $2$ roots over the
rationals. In our scheme, the main task is to find rational functions having a
sufficiently large number of zeros over an algebraic number field. This feature is then
inherited by the functions over finite fields.
Some specific properties of finite fields can eventually be useful, such as the reducibility
of $P_I$ over $\mathbb{Z}_p$. This will be mentioned later.
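Pollard's $p-1$ method recalled above admits a compact sketch. This is a minimal illustration, not an optimized implementation; the bound `B` encodes the assumption that $p-1$ is (roughly) $B$-smooth for some unknown factor $p$ of $c$, so that $p-1$ divides $B!$.

```python
import math

def pollard_p_minus_1(c, B=10, a=2):
    """Sketch of Pollard's p-1 method: compute a^(B!) mod c.
    If p-1 divides B! for some prime p | c, Fermat's little theorem
    gives a^(B!) = 1 mod p, so gcd(a^(B!) - 1, c) reveals p."""
    m = a
    for j in range(2, B + 1):
        m = pow(m, j, c)  # after the loop, m = a^(B!) mod c
    d = math.gcd(m - 1, c)
    return d if 1 < d < c else None
```

For $c = 299 = 13\cdot 23$ and $B=10$, $12 \mid 10!$ but $11 \nmid 10!$, so the gcd isolates the factor $13$; when $p-1$ and $p'-1$ are both smooth, the gcd is the trivial divisor $c$ and the method fails, which is the finite-field dependence noted above.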
Let us illustrate the general idea with an example.
Suppose that the input of the factorization problem is $c=p p'$, with $p$ and $p'$
prime numbers and $p<p'$. Let the map be a univariate polynomial of the form
\be\label{Eq1}
P(x)=\prod_{i=1}^{N_P}(x-x_i),
\ee
where the $x_i$ are integers somehow randomly distributed
in the interval between $1$ and $i_{max}\gg p'$. More generally, the $x_i$ can be rational
numbers $n_i/m_i$ with $n_i$ and/or $m_i$ in $\{1,\dots,i_{max}\}$.
When reinterpreted modulo $p$ or modulo $p'$, the numbers
$x_i$ take random values over the finite fields. If $N_P<p$, we expect that the polynomial
has about $N_P$ distinct roots over the finite fields. Thus, the probability that
$P(x)\mod p=0$ or $P(x)\mod p'=0$ is about $N_P/p$ or $N_P/p'$, respectively, which are the
ratio between the number of zeros and the size of the input space $\Omega$ over the finite fields.
The probability that $P(x)\mod c$ contains a nontrivial factor of $c$ is about
$\frac{N_P}{p}\left(1-\frac{N_P}{p'}\right)+\frac{N_P}{p'}\left(1-\frac{N_P}{p}\right)$.
Thus, if $N_P$ is of the order of $p$, we can get a nontrivial factor
by the Euclidean algorithm in a few trials. More specifically, if $p\simeq p'$ and
$N_P\simeq \sqrt{c}/2$, then the probability of getting a nontrivial factor is roughly
$1/2$. It is clear that a computational complexity of the map scaling subexponentially or
polynomially in $\log N_P$ leads to a subexponential or polynomial
complexity of the factoring algorithm. Thus, the central problem is to build a polynomial
$P(x)$ or a rational function with friendly computational properties with respect to
the number of zeros.
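The probability estimate above can be checked numerically. The sketch below builds $P(x)=\prod_i(x-x_i)$ from explicitly stored roots, so its cost is linear in $N_P$; it illustrates only the success probability, not the computationally friendly maps sought in this paper.

```python
import math
import random

def product_poly_factor(c, roots, trials=200):
    """Evaluate P(x) = prod_i (x - x_i) mod c at random points and take
    the gcd with c.  With N_P = len(roots) of the order of the smallest
    prime factor, a nontrivial gcd appears within a few trials."""
    for _ in range(trials):
        x = random.randrange(c)
        m = 1
        for xi in roots:
            m = m * (x - xi) % c
        d = math.gcd(m, c)
        if 1 < d < c:
            return d
    return None

c = 101 * 103                                # semiprime with p ~ p'
roots = random.sample(range(1, 10**6), 50)   # N_P = 50 random "roots"
d = product_poly_factor(c, roots)
```

Here $N_P = 50 \simeq \sqrt{c}/2$, so each trial yields a nontrivial factor with probability close to $1/2$, matching the estimate $\frac{N_P}{p}(1-\frac{N_P}{p'})+\frac{N_P}{p'}(1-\frac{N_P}{p})$.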
The scheme can be generalized by taking multivariate
maps with $M$ input parameters. In this case, the number of zeros needs to be of the order of
$p^M$, which is the size of the input space over the field $\mathbb{Z}_p$. As a further
generalization, the rational field can be replaced by an algebraic number field
$\mathbb{Q}(\alpha)$ of degree $k_0$. A number in this field is represented as a
$k_0$-dimensional vector over the rationals. Reinterpreting the components of
the vector over a finite field $\mathbb{Z}_p$, the size of the sampling space
is $p^{k_0 M}$, so that we should have $N_P\sim p^{k_0 M}$ in order to get nontrivial
factors in a few trials. Actually, this is the worst-case scenario since the reinterpretation
of $\mathbb{Q}(\alpha)$ modulo $p$ can lead to a degree of the finite field much smaller
than $k_0$. For example, if $\alpha$ is the root $e^{2\pi i/n}$ of the polynomial $x^n-1$,
the degree of the corresponding finite field with characteristic $p$ collapses to $1$ if
$n$ is a divisor of $p-1$.
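The degree-collapse remark can be verified directly: the number of $n$-th roots of unity in $\mathbb{Z}_p$ is $\gcd(n,p-1)$, so all $n$ roots already lie in the degree-$1$ field exactly when $n$ divides $p-1$. A brute-force check (illustration only):

```python
def count_roots_of_xn_minus_1(n, p):
    """Count solutions of x^n = 1 over the prime field Z_p by brute
    force; the answer equals gcd(n, p - 1)."""
    return sum(1 for x in range(1, p) if pow(x, n, p) == 1)
```

For instance, $x^6-1$ has all $6$ roots over $\mathbb{Z}_{13}$ (since $6\mid 12$) but only $\gcd(6,10)=2$ roots over $\mathbb{Z}_{11}$.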
On one hand, it is trivial to build polynomials with an arbitrarily large number $N_P$
of roots over the rationals as long as the computational cost grows linearly in $N_P$.
On the other hand, it is also simple to build polynomials with friendly computational
complexity with respect to $\log N_P$ if the roots are taken
over algebraically closed fields. The simplest example is the previously mentioned
polynomial $P(x)=x^n-1$, which has $n$ distinct complex roots and a computational complexity
scaling as $\log n$. However, over the rationals, this polynomial has at most
$2$ roots. We can include other roots by extending the rational field to an algebraic number
field, but the extension would have a degree proportional to the number of roots, so that
the computational complexity of evaluating $P(x)$ would grow polynomially in the number
of roots over the extension.
\subsection{Algebraic-geometry rephrasing of the problem}
It is clear that an explicit definition of each root of polynomial~(\ref{Eq1}) leads
to an amount of memory allocation growing exponentially in $\log c$, so that the resulting
factoring algorithm is exponential in time and space. Thus, the roots have to be defined
implicitly by some simple rules. Considering a purely algebraic definition, we associate
the roots with rational solutions of a set of $n$ non-linear polynomial equations in
$n$ variables ${\bf x}=(x_1,\dots,x_n)$,
\be
P_k({\bf x})=0,\;\;\;\; k\in\{0,\dots,n-1\}.
\ee
The solutions are intersection points of $n$ hypersurfaces.
The roots of $P(x)$ are defined as the values of some coordinate, say $x_n$, at the
intersection points. By eliminating the $n-1$ variables $x_1,\dots,x_{n-1}$, we end
up with a polynomial $P(x_n)$ with a number of roots generally growing exponentially in $n$.
This solves the problem of space complexity in the
definition of $P(x)$. There are two remaining problems. First, we have to choose
the polynomials $P_0,\dots,P_{n-1}$ such that an exponentially large fraction of
the intersection points are rational. Second, the variable elimination, given a value
of $x_n$, has to be performed as efficiently as possible over a finite field. If
the elimination has polynomial complexity, then factorization turns out to have
polynomial complexity. Note that the elimination of $n-1$ variables given $x_n$
is equivalent to a consistency test of the $n$ polynomials. The first problem can be
solved in a simple way by defining the polynomials as elements of the ideal generated
by products of linear polynomials. Let us denote a linear polynomial with a symbol
with a hat. For example, the quadratic polynomials
\be\label{quadr_polys}
G_i=\hat a_i\hat b_i, \;\;\;\;\ i\in\{1,\dots,n\}
\ee
have generally $2^n$ rational common zeros, provided that the coefficients of $\hat a_i$
and $\hat b_i$ are rational.
Identifying the polynomials $P_0,\dots,P_{n-1}$ with elements of the ideal generated by
$G_1,\dots,G_n$, we have
\begin{equation}
\label{poly_eqs_simple}
P_k=\sum_{i=1}^n c_{k,i}({\bf x})\hat a_i \hat b_i,\;\;\; k\in\{0,\dots,n-1\},
\end{equation}
whose set of common zeros contains the $2^n$ rational points of the generators $G_i$.
In particular, if the polynomials $c_{k,i}$ are set equal to constants, then
the system $P_0=\dots=P_{n-1}=0$ is equivalent to the system $G_1=\dots=G_n=0$.
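A minimal instance of the generators~(\ref{quadr_polys}): taking the axis-aligned special case $\hat a_i=x_i-r_i$, $\hat b_i=x_i-s_i$ (a simplifying assumption made here for the sketch; general linear forms behave analogously after a linear change of coordinates), the $2^n$ rational common zeros can be enumerated explicitly.

```python
from itertools import product

# Special-case generators G_i = (x_i - r_i)(x_i - s_i) with integer
# roots: every common zero picks one root per pair, giving 2^n
# distinct rational intersection points.
roots = [(1, 2), (3, 5), (4, 7)]   # (r_i, s_i) for n = 3
zeros = list(product(*roots))      # the 2^3 = 8 common zeros

def G(i, x):
    """Evaluate the i-th generator at the point x."""
    r, s = roots[i]
    return (x[i] - r) * (x[i] - s)
```

Each of the $8$ enumerated points annihilates every generator, hence every element of the ideal they generate.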
At this point, the variable elimination is the final problem. A working method is to
compute a Gr\"obner basis. For the purpose of factorizing $c=p p'$, the task is
to evaluate a Gr\"obner basis to check if the $n$ polynomials given $x_n$ are
consistent modulo $c$. If they are consistent modulo some non-trivial factor $p$ of $c$,
we end up at some point with some integer equal to zero modulo $p$.
However, the complexity of this computation is doubly exponential in the worst
case. Thus, we have to search for a suitable set of polynomials with a large set of
rational
zeros such that there is an efficient algorithm for eliminating $n-1$ variables.
The variable elimination is efficient if $n-1$ out of the $n$ polynomial equations
$P_k=0$ form a suitable triangular system for some set of low-degree polynomials $c_{k,i}$.
Let us assume that the last $n-1$ polynomials have
the triangular form
$$
\left.
\begin{array}{r}
P_{n-1}(x_{n-1},x_n) \\
P_{n-2}(x_{n-2},x_{n-1},x_n) \\
\dots \\
P_1(x_1,\dots,x_{n-2},x_{n-1},x_n)
\end{array}
\right\},
$$
such that the $k$-th polynomial is linear in $x_k$. Thus, the corresponding polynomial
equations can be sequentially solved in the first $n-1$ variables
through the system
\be
\label{rational_system}
\begin{array}{l}
x_{n-1}=\frac{{\cal N}_{n-1}(x_n)}{{\cal D}_{n-1}(x_n)} \\
x_{n-2}=\frac{{\cal N}_{n-2}(x_n,x_{n-1})}{{\cal D}_{n-2}(x_n,x_{n-1})} \\
\dots \\
x_1=\frac{{\cal N}_1(x_n,x_{n-1},\dots,x_2)}{{\cal D}_1(x_n,x_{n-1},\dots,x_2)},
\end{array}
\ee
where ${\cal D}_k\equiv \partial P_k/\partial x_k$ and ${\cal N}_k\equiv P_k|_{x_k=0}$.
This system defines a parametrizable curve,
say $\cal V$, in the algebraic set defined by the polynomials $P_1,\dots,P_{n-1}$, the variable
$x_n$ being the parameter.
Recall that a curve is parametrizable if and only if its geometric genus is equal to
zero.
The overall set of variables can be efficiently computed over a finite field, provided that
the polynomial coefficients $c_{k,i}$ are not too complex. Once the variables are determined,
the remaining polynomial $P_0$ turns out to be equal to zero if ${\bf x}$
is an intersection point. Provided that the rational intersection points have distinct values
of $x_n$ (which is essentially equivalent to stating that the points are distinct and
lie in the variety $\cal V$),
then the procedure generates a value $P_0({\bf x})$ which is zero modulo $p$ with
high probability if $p$ is of the order of the number of rational intersection
points. For this inference, it is pivotal to assume that a large fraction of rational points
remain distinct when reinterpreted over the finite field.
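The sequential solution of system~(\ref{rational_system}) over $\mathbb{Z}/c\mathbb{Z}$, with failed divisions exploited as factors, can be sketched as follows. The callables `N` and `D` stand in for the hypothetical numerators ${\cal N}_k$ and denominators ${\cal D}_k$; they are placeholders, not part of any concrete construction in this paper.

```python
import math

def eval_triangular(c, x_n, num_den_pairs):
    """Solve the triangular system x_k = N_k(...) / D_k(...) sequentially
    over Z/cZ, pretending that c is prime.  A non-invertible denominator
    is itself useful: its gcd with c is a nontrivial factor of c."""
    xs = [x_n % c]
    for N, D in num_den_pairs:          # hypothetical callables N_k, D_k
        d = D(xs) % c
        g = math.gcd(d, c)
        if g > 1:
            raise ValueError(f"division failed, factor {g}")
        xs.append(N(xs) * pow(d, -1, c) % c)  # modular inverse (Python 3.8+)
    return xs
```

Each step costs one modular inversion, so the whole elimination is polynomial in $n$ and $\log c$ whenever the `N` and `D` evaluations are.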
This algebraic-geometry rephrasing of the problem can be stated in a more general form.
Let $\cal V$ and $\cal H$ be some irreducible curve in an $n$-dimensional space and a hypersurface,
respectively. The curve $\cal V$ is not necessarily parametrizable, thus its genus may take
strictly positive values. The points in $\cal H$ are the zero locus
of some polynomial $P_0$. Let $N_P$ be the
number of distinct intersection points between $\cal V$ and $\cal H$ over the rational field $\mathbb{Q}$.
Over a finite field $\text{GF}(q)$, Weil's theorem states that the number of rational
points, say $N_1$, of a smooth curve is bounded by the inequalities
\be\label{weil_bounds}
-2 g \sqrt{q} \le N_1-(q+1)\le 2 g\sqrt{q},
\ee
where $g$ is the geometric genus of the curve.
Generalizing to singular curves~\cite{aubry}, we have
\be\label{aubry_eq}
-2 g \sqrt{q}-\delta \le N_1-(q+1)\le 2 g\sqrt{q}+\delta,
\ee
where $\delta$ is the number of singularities, properly counted. These inequalities have the
following geometric interpretation. For the sake of simplicity, let us assume that the
singularities are ordinary double points.
A singular curve, say $\cal S$, with genus $g$ is birationally
equivalent to a smooth curve, say $\cal R$, with the same genus, for which Weil's theorem holds.
That is, the rational points of $\cal R$ are bijectively mapped to the non-singular
rational points of $\cal S$,
apart from a possible finite set $\Omega$ of $2m$ points mapping to $m$ singular points
of $\cal S$.
The cardinality of $\Omega$ is at most $2\delta$ (attained when the $\delta$ singularities
have tangent vectors over the finite field). We have two extremal cases.
In one case, $\#\Omega=2\delta$, so that $\cal S$ has $\delta$ fewer points than $\cal R$
(each pair of points in $\Omega$ is merged into a singularity of $\cal S$). This gives the lower
bound in~(\ref{aubry_eq}). In the second case, $\Omega$ is empty and the singular points of
$\cal S$ are rational points. Thus, $\cal S$ has $\delta$ more rational points.
Given this interpretation, Weil's upper bound still holds for the number of
non-singular rational points, say $N_1'$,
\be\label{weil2}
N_1'\le (q+1)+2 g \sqrt{q}.
\ee
Thus, if the genus is much smaller than $\sqrt{q}$, $N_1'$ is upper-bounded by a number close
to the order of the field.
Now, let us assume that most of the $N_P$ rational points in ${\cal V}\cap{\cal H}$ over $\mathbb{Q}$
remain distinct when reinterpreted over $\mathbb{Z}_p$, with $p\simeq a N_P$, where $a$ is a number
slightly greater than $1$, say $a=2$. We also assume that
these points are not singularities of $\cal V$. Weil's inequality~(\ref{weil2}) implies that
the curve does not have more than about $p$ points over $\mathbb{Z}_p$. Since
$p\gtrsim N_1'\gtrsim N_P\sim p/2$, we have that the number of non-singular rational points of
the curve is about the number of intersection points over $\mathbb{Z}_p$. This implies that a
large fraction of points $\bf x\in \cal V$ over the finite field are also points of $\cal H$. We
have the following.
\begin{claim}
\label{claim_rat_curve}
Let ${\cal V}$ and ${\cal H}$ be an algebraic curve with genus $g$ and a hypersurface, respectively.
The hypersurface is the zero locus of the polynomial $P_0$.
Their intersection has $N_P$ distinct points over the rationals, which are not singularities
of $\cal V$. Let us also assume that $g\ll \sqrt{N_P}$ and that most of the $N_P$ rational points remain
distinct over $\mathbb{Z}_p$ with $p\gtrsim 2 N_P$.
If we pick up at random a point in $\cal V$, then
\be
P_0({\bf x})=0 \mod p
\ee
with probability approximately equal to the ratio $N_P/p$.
\end{claim}
If there are pairs $({\cal V},{\cal H})$ for every $N_P$ that satisfy the premises of this claim,
then prime factorization is reduced to the search of rational points of a curve. Actually,
these pairs always exist, as shown later in the section. Assuming that
$c=p p'$ with $p\sim p'$, the procedure for factorizing $c$ is as follows.
\begin{enumerate}
\item[(1)] Take a pair $({\cal V},{\cal H})$ with $N_P\sim c^{1/2}$ such that the premises of
Claim~\ref{claim_rat_curve} hold.
\item[(2)] Search for a rational point ${\bf x}\in\cal V$ over $\mathbb{Z}_p$.
\item[(3)] Compute $\text{GCD}[P_0({\bf x}),c]$, the greatest common divisor of $P_0({\bf x})$ and $c$.
\end{enumerate}
The last step gives $\text{GCD}[P_0({\bf x}),c]$ equal to
$1$, $c$ or one of the factors of $c$. The probability of getting
a nontrivial factor can be made close to $1/2$ with a suitable tuning of $N_P$ (as shown later in
Sec.~\ref{sec_algo}).
Finding rational points of a general curve with genus greater than $2$ is an exceptionally complex
problem. For example, proving that the plane curve $x^h+y^h=1$ with $h>2$ has no nontrivial
rational points took more than three centuries after Fermat stated the claim.
Curves with genus $1$ are elliptic curves, which have an important role
in prime factorization (see Lenstra's algorithm~\cite{lenstra}). Here, we will focus on
parametrizable curves, which have genus $0$. In particular, we will consider parametrizations
generated by the sequential equations~(\ref{rational_system}). It is interesting to note that
it is always possible to find a curve $\cal V$ with parametrization~(\ref{rational_system})
and a hypersurface $\cal H$ such that their intersection contains a given set of rational points.
In particular, there is a set of polynomials $P_0,\dots,P_{n-1}$ of the form~(\ref{poly_eqs_simple}),
such that the zero locus of the last $n-1$ polynomials contains a parametrizable curve with
parametrization~(\ref{rational_system}), whose intersection with the hypersurface $P_0=0$ contains
$2^n$ distinct rational points. Let the intersection points be defined by the
polynomials~(\ref{quadr_polys}). Provided that $x_n$ is a separating variable, the set of
intersection points admits the rational univariate representation~\cite{rouillier}
\be
\left\{
\begin{array}{l}
x_{n-1}=\frac{\bar{\cal N}_{n-1}(x_n)}{{\bar{\cal D}}_{n-1}(x_n)} \\
x_{n-2}=\frac{{\bar{\cal N}}_{n-2}(x_n)}{\bar{\cal D}_{n-2}(x_n)} \\
\dots \\
x_1=\frac{\bar{\cal N}_1(x_n)}{\bar{\cal D}_1(x_n)} \\
{\bar{\cal N}}_0(x_n)=0
\end{array}
\right.
\ee
The first $n-1$ equations are a particular form of equation~(\ref{rational_system}) and
define a parametrizable curve with $x_n$ as parameter. The last equation can be replaced
by some linear combination of the polynomials $G_i$. It is also interesting
to note that the rational univariate representation is unique once the separating
variable is chosen. This means that the parametrizable curve is uniquely determined by
the set of intersection points and the variable that is chosen as parameter.
It is clear that the curve and hypersurface obtained through this construction with a
general set of polynomials $G_i$ satisfy
the premises of Claim~\ref{claim_rat_curve}. Indeed, a large part of the common zeros
of the polynomials $G_i$ are generally distinct over a finite field $\mathbb{Z}_p$
with $p\simeq N_P$. For example, the point $\hat a_1=\dots=\hat a_n=0$ is distinct from the other
points if and only if $\hat b_i\ne 0$ at that point for every $i\in\{1,\dots,n\}$.
Thus, the probability that a given point is not distinct over $\mathbb{Z}_p$ is of
the order of $p^{-1}\sim N_P^{-1}$, hence a large part of the points are distinct over the finite
field. There is an apparent paradox. With a suitable choice of the linear functions $\hat a_i$
and $\hat b_i$, the intersection points can be made distinct over a field $\mathbb{Z}_p$ with
$p\ll N_P$, which contradicts Weil's inequality~(\ref{weil2}). The contradiction is explained
by the fact that the curve becomes reducible over the finite field, breaking into a union of components.
In other words, some denominator ${\cal D}_k$ turns out to be equal to zero modulo $p$
at some intersection points. This may happen also with $p\sim N_P$, which is not a concern.
Indeed, possible zero denominators can be used to find factors of $c$.
\subsection{Contents}
In Section~\ref{sec_algo}, we introduce the general scheme of the factoring algorithm
based on rational maps and discuss
its computational complexity in terms of the complexity of the maps, the number
of parameters and the field degree. In Section~\ref{sec_arg_geo}, the factorization
problem is reduced to the search of rational points of parametrizable algebraic varieties
$\cal V$ having an arbitrarily large number $N_P$ of rational intersection points
with a hypersurface $\cal H$. Provided that $N_P$ grows exponentially in the
space dimension, the factorization algorithm has polynomial complexity if the
number of parameters and the complexity of evaluating a point in $\cal V$ over a
finite field grow sublinearly and polynomially in the space dimension, respectively.
Thus,
the varieties $\cal V$ and $\cal H$ have to satisfy two requirements.
On one side, their intersection has to contain a large set of rational points.
On the other side, $\cal V$ has to be parametrizable and its points
have to be computed efficiently given
the values of the parameters. The first requirement is fulfilled with a generalization
of the construction given by Eq.~(\ref{poly_eqs_simple}). First, we define an ideal
$I$ generated by products of linear polynomials such that the associated algebraic
set contains $N_P$ rational points. The relevant information on this ideal
is encoded in a satisfiability formula (SAT) in conjunctive normal form (CNF) and a
linear matroid. Then, we define $\cal V$ and $\cal H$ as elements of the ideal.
By construction, ${\cal V}\cap{\cal H}$ contains the $N_P$ rational points.
The ideal $I$ and the polynomials defining the varieties contain some coefficients.
The second requirement is tackled in Sec.~\ref{build_up}. By imposing the
parametrization of $\cal V$, we get a set of polynomial equations for the
coefficients. These equations always admit a solution, provided that the
only constraint on $\cal V$ and $\cal H$ is being an element of $I$.
The task is to find
an ideal $I$ and a set of coefficients such that the computation of points
in $\cal V$ is as efficient as possible, given a number of parameters scaling
sublinearly in the space dimension.
In this general form, the problem of building the varieties $\cal V$ and
$\cal H$ is quite intricate. A good strategy is to start with simple ideals
and search for varieties in a subset of these ideals, so that
the polynomial constraints on the unknown coefficients can be handled with
little efforts. With these restrictions, it is not guaranteed that
the required varieties exist, but we can have hints on how to proceed.
This strategy is employed in Sec.~\ref{sec_quadr_poly}, where we consider an
ideal generated by the polynomials~(\ref{quadr_polys}). The varieties
are defined by linear combinations of these generators with constant
coefficients, that is, $\cal H$ and $\cal V$ are in the zero locus of
$P_0$ and $P_1,\dots,P_{n-1}$, respectively, defined
by Eq.~(\ref{poly_eqs_simple}). The $2^n$ rational points associated
with the ideal are taken distinct in $\mathbb{Q}$. First, we prove
that there is no solution with one parameter ($M=1$), for a dimension
greater than $4$. We give an explicit numerical example of a curve
and hypersurface in dimension $4$. The intersection has $16$ rational
points. We also give a solution with about $n/2$ parameters. Suggestively,
this solution resembles a kind of retro-causal model. Retro-causality
is considered one possible explanation of some strange aspects of
quantum theory, such as non-locality and wave-function collapse after
a measurement. Finally, we close the section by proving that there is
a solution with $2\le M \le (n-1)/3$. This is shown by explicitly
building a variety $\cal V$ with $(n-1)/3$ parameters. Whether
it is possible to drop the number of parameters below this upper
bound is left as an open problem. If $M$ grows
sublinearly in $n$, then there is automatically a factoring
algorithm with polynomial complexity, provided that the coefficients
defining the polynomials $P_k$ are in $\mathbb{Q}$ and can be
computed efficiently over a finite field. The conclusions and
perspectives are drawn in Sec.~\ref{conclusion}.
\section{General scheme and complexity analysis}
\label{sec_algo}
At a low level, the central object of the factoring algorithm under study is a
class $\Theta$ of maps ${\vec\tau}\mapsto {\cal R}(\vec\tau)$
from a set $\vec\tau\equiv(\tau_1,\dots,\tau_M)$ of $M$ parameters over the field $\mathbb{Q}(\alpha)$ to a
number in the same field, where $\cal R$ is a rational function, that is, the algebraic fraction of
two polynomials. Let us write it as
$$
{\cal R}(\vec\tau)\equiv \frac{{\cal N}(\vec\tau)}{{\cal D}(\vec\tau)}.
$$
This function may be defined indirectly by consecutively applying simpler rational functions, as
done in Sec.~\ref{sec_arg_geo}. Note that the computational complexity of evaluating ${\cal R}(\vec\tau)$
can be lower than the complexity of evaluating the numerator ${\cal N}(\vec\tau)$. For this reason we
consider more general rational functions rather than polynomials.
Both $M$ and $\alpha$ are not necessarily fixed in the class $\Theta$.
We denote by $N_P$ the number of zeros of the polynomial $\cal N$ over $\mathbb{Q}(\alpha)$.
The number $N_P$ is supposed to be finite; we will come back to this assumption later in
Sec.~\ref{sec_infinite_points}.
For the sake of simplicity, first we introduce the general scheme of the algorithm over the
rational field. Then, we outline its extension to algebraic number fields.
We mainly consider the case of semiprime input, that is, $c$ is
taken equal to the product of two prime numbers $p$ and $p'$. This case is the most
relevant in cryptography. If the rational points are somehow randomly distributed
when reinterpreted over $\mathbb{Z}_p$, then the polynomial ${\cal N}$ has at least about
$N_P$ distinct zeros over the finite field, provided that $N_P$ is sufficiently smaller than
the size $p^{M}$ of the input space $\Omega$.
We could have additional zeros in the finite field, but we conservatively assume that
$N_P$ is a good estimate for the total number.
For $N_P$ close to $p^M$, two different roots in the rational field may collapse to the same
number in the finite field. We will account for that later in Sec.~\ref{sec_complex}.
Given the class $\Theta$, the factorization procedure has the same general scheme as other methods
using finite fields. Again, the value $m={\cal R}(\tau_1,\tau_2,\dots)$ is computed by pretending
that $c$ is prime and $\mathbb{Z}/c\mathbb{Z}$ is a field. If an algebraic division leads to a
contradiction during the computation of $\cal R$, the divisor is taken as the outcome $m$. For the sake of
simplicity, we neglect the zeros of $\cal D$ and consider only the zeros of $\cal N$.
In Section~\ref{sec_arg_geo}, we will see that this
simplification is irrelevant for a complexity analysis. It is clear that the outcome $m$ is zero
modulo some divisor $p$ of $c$ with high probability if the number of
zeros is about or greater than the number of inputs $p^{M}$. Furthermore, if $p'>p$ and
$N_P$ is sufficiently smaller than $(p')^{M}$, then the outcome $m$
contains the nontrivial factor $p$ of $c$ with high probability. This is guaranteed if
$N_P$ is taken equal to about $c^{M/2}$, which is almost optimal if $p\simeq p'$, as we will see later
in Sec.~\ref{sec_complex}. Thus, we have the following.
\begin{algorithm}
\label{gen_algo0}
Factoring algorithm with input $c=p p'$, $p$ and $p'$ being prime numbers.
\item[(1)] \label{algo0_1}
Choose a map in $\Theta$ with $M$ input parameters and $N_P$ zeros over the rationals
such that $N_P\simeq c^{M/2}$ (see Sec.~\ref{sec_complex} for an optimal choice of $N_P$);
\item[(2)] generate a set of $M$ random numbers $\tau_1,\dots,\tau_M$ over $\mathbb{Z}/c\mathbb{Z}$.
\item[(3)] \label{algo0_3}
compute the value $m={\cal R}(\tau_1,\dots,\tau_M)$ over $\mathbb{Z}/c\mathbb{Z}$
(by pretending that $c$ is prime).
\item[(4)] \label{algo0_4}
compute the greatest common divisor between $m$ and $c$.
\item[(5)] if a nontrivial factor of $c$ is not obtained, repeat from point (2).
\end{algorithm}
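A skeleton of Algorithm~\ref{gen_algo0}, with the map $\cal R$ passed in as a hypothetical callable (constructing suitable maps is the subject of the following sections). Failed divisions inside `R` are assumed to return the offending divisor as the outcome $m$, as described above.

```python
import math
import random

def algorithm_1(c, M, R, max_rounds=10000):
    """Skeleton of the general factoring scheme: R is a hypothetical
    rational map over Z/cZ, chosen with about c^(M/2) rational zeros
    (step 1).  Steps 2-5 are implemented literally."""
    for _ in range(max_rounds):
        taus = [random.randrange(c) for _ in range(M)]  # step (2)
        m = R(taus) % c                                 # step (3)
        d = math.gcd(m, c)                              # step (4)
        if 1 < d < c:
            return d                                    # nontrivial factor
    return None                                         # step (5) exhausted
```

As a placeholder, the toy map $\tau\mapsto\tau^2-1$ (not a member of any class $\Theta$ constructed in this paper) factors small semiprimes through this loop.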
The number $M$ of parameters may depend on the map picked
up in $\Theta$. Let $M_{min}(N_P)$ be the minimum of $M$ in $\Theta$ for given $N_P$.
The setting at point~(\ref{algo0_1}) is possible only if $M_{min}$ grows less than linearly in
$\log N_P$, which is condition~(c) enumerated in the introduction. A tighter condition is
necessary if the computational complexity of evaluating the map scales subexponentially, but
not polynomially. This will be discussed in more detail in Sec.~\ref{sec_complex}.
If $c$ has more than two prime factors,
$N_P$ must be chosen equal to about $p^{M}$, where $p$ is an estimate of one
prime factor.
If there is no knowledge about the factors, the algorithm can be executed by trying different
orders of magnitude of $N_P$ from $2$ to $c^{1/2}$. For example, we can
increase the guessed $N_P$ by a factor $2$, so that the overall number of executions grows
polynomially in $\log_2 p$. However, better strategies are available.
A map with too large an $N_P$ ends up producing zero modulo $p$ for every factor $p$ of $c$
and, thus, the algorithm always generates the trivial factor $c$. Conversely, too small an $N_P$
gives a too small probability of getting a factor. Thus, we can employ a kind of bisection
search. A sketch of the search algorithm is as follows.
\begin{enumerate}
\item set $a_d=1$ and $a_u=c^M$;
\item set $N_P=\sqrt{a_d a_u}$ and choose a map in $\Theta$ with $N_P$ zeros;
\item execute Algorithm~\ref{gen_algo0} from point (2) and break the loop after a certain number
of iterations;
\item if a nontrivial factor is found, return it as outcome;
\item if the algorithm found only the trivial divisor $c$, set $a_u=N_P$, otherwise set $a_d=N_P$;
\item go back to point (2).
\end{enumerate}
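A simulation of this search can be sketched in Python. The sketch below is built on simplifying assumptions: $k_0=M=1$, hidden factors $p$ and $q$, and a per-trial probability $1-(1-1/r)^{N_P}$ of hitting zero modulo a factor $r$, modeling randomly distributed zeros.

```python
import math
import random

def simulated_trials(N_P, p, q, T=50):
    # Stand-in for running the factoring loop with a map having about
    # N_P zeros (k0 = M = 1 assumed): a trial yields zero modulo a
    # factor r with probability ~ 1 - (1 - 1/r)**N_P.
    hp = 1.0 - (1.0 - 1.0 / p) ** N_P
    hq = 1.0 - (1.0 - 1.0 / q) ** N_P
    saw_trivial = False
    for _ in range(T):
        zp, zq = random.random() < hp, random.random() < hq
        if zp != zq:
            return 'factor'        # gcd(m, c) is a nontrivial factor
        if zp and zq:
            saw_trivial = True     # gcd(m, c) = c, the trivial divisor
    return 'trivial' if saw_trivial else 'none'

def bisect_NP(p, q, rounds=60):
    """Bisection on the order of magnitude of N_P."""
    a_d, a_u = 1.0, float(p * q)   # M = 1, so a_u = c**M
    for _ in range(rounds):
        N_P = math.sqrt(a_d * a_u)
        out = simulated_trials(N_P, p, q)
        if out == 'factor':
            return N_P
        if out == 'trivial':
            a_u = N_P              # too many zeros: decrease N_P
        else:
            a_d = N_P              # too few zeros: increase N_P
    return None
```

The first guess $N_P=\sqrt{a_d a_u}=c^{1/2}$ already succeeds with high probability when $p\simeq q$; the bisection does its work when the first guesses yield only the trivial divisor.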
This kind of search can
reduce the number of executions of Algorithm~\ref{gen_algo0}. In the following, we will not
discuss these optimizations for multiple prime factors; we will consider mainly semiprime integers
$c=p p'$.
\subsection{Extension to algebraic number fields}
Before outlining how the algorithm can be extended to algebraic number fields,
let us briefly recall what a number field is.
The number field $\mathbb{Q}(\alpha)$ is a rational field extension obtained by adding
an algebraic number $\alpha$ to the field $\mathbb{Q}$.
The number $\alpha$ is a root of some irreducible polynomial $P_I$ of degree $k_0$, which is also
called the degree of $\mathbb{Q}(\alpha)$. The extension
field includes all the elements of the form $\sum_{i=0}^{k_0-1} r_i \alpha^i$, where $r_i$ are rational
numbers. Every power $\alpha^h$ with $h\ge k_0$ can be reduced to that form through the equation
$P_I(\alpha)=0$.
Thus, an element of $\mathbb{Q}(\alpha)$ can be represented as a $k_0$-dimensional vector over
$\mathbb{Q}$. Formally, the extension field is defined as the quotient ring $\mathbb{Q}[X]/P_I$,
the polynomial ring over $\mathbb{Q}$ modulo $P_I$. The quotient ring is also a field as long as
$P_I$ is irreducible.
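For concreteness, arithmetic in the quotient ring $\mathbb{Z}_p[X]/P_I$ can be sketched as follows; the quadratic $P_I=X^2-2$ (i.e.\ $\alpha=\sqrt 2$) and the prime $p=7$ are illustrative choices.

```python
def mulmod(a, b, PI, p):
    # Multiply two elements of Z_p[X]/PI, given as coefficient lists
    # (lowest degree first); PI is monic of degree k0.
    k0 = len(PI) - 1
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):          # schoolbook product
        for j, bj in enumerate(b):
            prod[i + j] = (prod[i + j] + ai * bj) % p
    # reduce powers >= k0 using X^k0 = -(PI[0] + ... + PI[k0-1] X^(k0-1))
    for h in range(len(prod) - 1, k0 - 1, -1):
        coef = prod[h]
        prod[h] = 0
        for t in range(k0):
            prod[h - k0 + t] = (prod[h - k0 + t] - coef * PI[t]) % p
    return prod[:k0]

# (1 + alpha)(3 + 2*alpha) = 7 + 5*alpha = 5*alpha over GF(7), alpha^2 = 2
print(mulmod([1, 1], [3, 2], [-2, 0, 1], 7))
```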
Reinterpreting the rational function $\cal R$ over a finite field $\text{GF}(p^k)$ means reinterpreting
$r_i$ and the coefficients of $P_I$ as integers modulo a prime number $p$.
Since the polynomial $P_I$ may be reducible over $\mathbb{Z}_p$, the degree $k$ of
the finite field is some value between $1$ and $k_0$, equal to the degree of one of the
irreducible factors of $P_I$. Let $D_1,\dots,D_f$ be the factors of $P_I$. Each $D_i$ is
associated with a finite field $\mathbb{Z}_p[X]/D_i\cong \text{GF}(p^{k_i})$, where $k_i$ is
the degree of $D_i$. Smaller values of $k$ lead to a computational advantage, as the
size $p^{k M}$ of the input space $\Omega$ is smaller and the probability, about $N_P/p^{k M}$, of
getting the factor $p$ is higher. For example, the cyclotomic number field with $\alpha=e^{2\pi i/n}$
has a degree equal to $\phi(n)$, where $\phi$ is the Euler totient function, which is asymptotically
lower-bounded by $K n/\log\log n$, for every constant $K<e^{-\gamma}$, $\gamma$ being the Euler
constant. In other words, the highest degree of the polynomial prime factors of $x^n-1$ is equal to
$\phi(n)$. Let $P_I$ be equal to the factor with $e^{2\pi i/n}$ as root.
If $n$ is a divisor of $p-1$ for some prime number $p$, then $P_I$ turns out to have linear
factors over $\mathbb{Z}_p$. Thus, the degree of the number field collapses to $1$
when mapped to a finite field with characteristic $p$. Hence, the bound $k_0$ sets a worst case.
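This collapse can be checked quickly with the illustrative choice $n=5$ and $p=11$ (so that $n\mid p-1$): the cyclotomic polynomial $\Phi_5$ then has $\phi(5)=4$ roots modulo $11$, i.e.\ it splits into linear factors.

```python
p, n = 11, 5                   # n divides p - 1 = 10
# Phi_5(x) = x^4 + x^3 + x^2 + x + 1, minimal polynomial of e^(2*pi*i/5)
roots = [x for x in range(p) if (x**4 + x**3 + x**2 + x + 1) % p == 0]
```

Since the number of roots equals the degree $\phi(5)=4$, every irreducible factor is linear and the degree $k$ collapses to $1$.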
For general number fields, the equality $k=k_0$ is more an exception than a rule, apart from the
case of the rational field, for which $k=k_0=1$. For the sake of simplicity, let us assume
for the moment that $k=k_0$ for one of the two factors of $c$, say $p'$. Algorithm~\ref{gen_algo0}
is modified as follows. The map is chosen at point~(1) of Algorithm~\ref{gen_algo0} such that
$N_P\simeq c^{k_0 M/2}$;
the value $m$ computed at point~(3) is a polynomial over $\mathbb{Z}/c\mathbb{Z}$ of
degree $k_0-1$; the greatest common divisor at point~(4) is computed
between one of the coefficients of the polynomial $m$ and $c$. If the degree $k$ of the finite
field of characteristic $p$ turns out to be smaller than $k_0$, we have to compute the
polynomial greatest common divisor between $m$ and $P_I$ by pretending again that
$\mathbb{Z}/c\mathbb{Z}$ is a field. If $m$ is zero over $\text{GF}(p^{k})$,
then the Euclidean algorithm generates at some point
a residual polynomial with the leading coefficient having $p$ as a factor (generally, all the
coefficients turn out to have $p$ as a factor).
If $k\ne k_0$ for both factors and most of the maps, then the algorithm ends up generating
the trivial factor $c$, so that we need to decrease $N_P$ until a non-trivial factor is
found.
\subsection{Complexity analysis}
\label{sec_complex}
The computational cost of the algorithm grows linearly with the product between the computational
cost of the map, say ${\bf C}_0({\cal R})$, and the average number of trials, which is
roughly $p^{k_0 M}/N_P$ provided that $N_P\ll p^{k_0 M}$ and $P_I$ is irreducible over
$\mathbb{Z}_p$. The class $\Theta$ may contain many maps with a given number $N_P$ of
zeros over some number field. We can choose the optimal map for each $N_P$, so that we express
$k_0$, $M$ and $\cal R$ as functions of $\log N_P\equiv \xi$. The computational cost
${\bf C}_0({\cal R})$ is written as a function of $\xi$, ${\bf C}_0(\xi)$.
Let us evaluate the computational complexity of the algorithm in terms of the scaling
properties of $k_0(\xi)$, $M(\xi)$ and ${\bf C}_0({\xi})$ as functions of $\xi=\log N_P$.
The complexity ${\bf C}_0(\xi)$ is expected to be a monotonically increasing
function. If the functions $k_0(\xi)$ and $M(\xi)$ were decreasing,
then they would asymptotically tend to a constant, since they are not less than $1$.
Thus, we assume that these two functions are monotonically increasing or constant.
As previously said, the polynomial $\cal N$ has typically about $N_P$ distinct roots
over $\text{GF}(p^{k_0})$, provided that $N_P$ is sufficiently smaller than $p^{k_0 M}$.
If $N_P$ is greater than $p^{k_0 M}$, then almost every
value of $\vec\tau$ is a zero of the polynomial. Assuming that the zeros are somehow
randomly distributed, the probability that a number picked at random is different from
any zero over $\text{GF}(p^{k_0})$ is equal to $(1-p^{-k_0 M})^{N_P}$.
Thus, the number of roots over $\text{GF}(p^{k_0})$ is expected to be of the order of
$p^{k_0 M} [1-(1-p^{-k_0 M})^{N_P}]$, which is about $N_P$ for $N_P\ll p^{k_0 M}$. Thus,
the average number of trials required for getting a zero is
\be
N_{trials}\equiv \frac{1}{1-(1-p^{-k_0 M})^{N_P}}.
\ee
A trial is successful if it gives
zero modulo some nontrivial factor of $c$, thus the number of required trials can
be greater than $N_{trials}$ if some factors are close to each other. Let us consider the
worst case with $c=p p'$, where $p$ and $p'$ are two primes with $p'\simeq p$ such
that $(p')^{k_0 M}\simeq p^{k_0 M}$.
Assuming again that the roots are randomly distributed, the probability of a successful
trial is $\text{Pr}_\text{succ}\equiv 2 [1-(1-p^{-k_0 M})^{N_P}](1-p^{-{k_0} M})^{N_P}$.
The probability has a maximum equal to $1/2$ for $\xi$ equal to the value
\be
\xi_0\equiv\log\left[-\frac{\log 2}{\log(1-p^{-k_0 M})}\right].
\ee
Expanding in a Taylor series around $p=\infty$, we have that
\be
\xi_0=k_0 M\log p+\log\log 2-\frac{1}{2 p^{k_0 M}}+O(p^{-2 k_0 M}).
\ee
The first two terms give a very good approximation of $\xi_0$.
At the maximum, the ratio between the number of zeros and the number of states $p^{k_0 M}$
of the sampling space is about $\log 2$. It is worth noting that,
for the same value of $\xi$, the probability of getting an isolated factor
with $p\ll p'$ is again exactly $1/2$. Thus, we have in general
\be
N_P\simeq 0.69 p^{k_0 M} \Rightarrow \text{Pr}_\text{succ}=1/2.
\ee
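This can be checked numerically; the sketch below (with the illustrative values $p=101$, $k_0=1$, $M=2$ and the worst case $p'\simeq p$) verifies that $\text{Pr}_\text{succ}$ peaks at $1/2$ for $N_P\simeq 0.69\, p^{k_0 M}$.

```python
import math

p, k0, M = 101, 1, 2
f = k0 * M

def pr_succ(NP):
    # success probability of a trial for p' ~ p (worst case)
    hit = 1.0 - (1.0 - p ** -f) ** NP     # P(m = 0 mod p)
    return 2.0 * hit * (1.0 - hit)

NP_star = round(-math.log(2) / math.log(1.0 - p ** -f))  # ~ 0.69 * p^(k0*M)
best_val = max(pr_succ(NP) for NP in range(1, 30000))
```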
Since the maximal probability is independent of $k_0$ and $M$, this
value is also maximal if $k_0$ and $M$ are taken as functions of $\xi$.
The maximizing value $\xi_0$ is a solution of the equation
\be
\label{eq_xi0}
\xi_0=\log\left[-\frac{\log 2}{\log(1-p^{-f(\xi_0)})}\right],
\ee
where $f(\xi)\equiv k_0(\xi) M(\xi)$. If the equation has no positive solution,
then the probability is maximal for $\xi=0$. That is, the optimal map in the
considered class is the one with $N_P=1$. This means that the number of states
$p^{k_0 M}$ of the sampling space grows faster than the number of zeros.
In particular, there is no solution for $\log p$ sufficiently large if
$f(\xi)$ grows at least linearly (keep in mind that $f(\xi)\ge1$). Thus,
the function $f(\xi)$ has to grow sublinearly in $\xi$, as previously
said.
The computational cost of the algorithm for a given map ${\cal R}(\xi)$ is
\be
{\bf C}(p,\xi)\equiv
\frac{{\bf C}_0(\xi)}{2 [1-(1-p^{-f(\xi)})^{\exp\xi}](1-p^{-f(\xi)})^{\exp\xi }}.
\ee
The optimal map for given $p$ is obtained by minimizing ${\bf C}(p,\xi)$ with
respect to $\xi$. The computational complexity of the algorithm is
\be\label{comp_complexity}
{\bf C}(p)=\min_{\xi>0} {\bf C}(p,\xi)\equiv {\bf C}(p,\xi_m),
\ee
which satisfies the bounds
\begin{equation}
\label{bounds}
{\bf C}_0(\xi_m)\le {\bf C}(p)\le 2{\bf C}_0(\xi_0).
\end{equation}
The upper bound in Eq.~(\ref{bounds}) is the value of ${\bf C}(p,\xi)$ at $\xi=\xi_0$,
whereas the lower bound is the computational complexity of the map at the minimum
$\xi_m$.
It is intuitive that the complexity ${\bf C}_0(\xi)$ must be subexponential in order to
have ${\bf C}(p)$ subexponential in $\log p$. This can be
shown by contradiction. Suppose that the complexity ${\bf C}(p)$ is subexponential
in $\log p$ and ${\bf C}_0(\xi)=\exp(a \xi)$ for some positive $a$. The lower bound
in Eq.~(\ref{bounds}) implies that the optimal $\xi_m$ grows less than $\log p$.
Asymptotically,
\be
\left. \frac{p^{k_0 M}}{N_P}\right|_{\xi=\xi_m}\sim e^{f(\xi_m)\log p-\xi_m}\ge K p^{1/2},
\ee
for some constant $K$.
Thus, the average number of trials grows exponentially in $\log p$, implying
that the computational complexity is exponential, in contradiction
with the premise.
Since $f(\xi)$ and $\log {\bf C}_0(\xi)$ must grow less than linearly, we may assume
that they are concave.
\begin{property}
\label{concave}
The functions $f(\xi)$ and $\log {\bf C}_0(\xi)$ are concave, that is,
\be
\frac{d^2}{d\xi^2}f(\xi)\le 0, \;\;\; \frac{d^2}{d\xi^2}\log {\bf C}_0(\xi)\le 0.
\ee
\end{property}
The lower bound in Eq.~(\ref{bounds}) depends on $\xi_m$, which depends on the
function $C_0(\xi)$. A tighter bound which is also simpler to evaluate can be derived
from Property~\ref{concave} and the inequality
\be
{\bf C}(p,\xi)\ge \frac{1}{2}e^{f(\xi)\log p-\xi}{\bf C}_0(\xi).
\ee
\begin{lemma}
If Property~\ref{concave} holds and ${\bf C}(p)$ is asymptotically sublinear in $p$, then there
is an integer $\bar p$ such that the complexity
${\bf C}(p)$ is bounded from below by $\frac{{\bf C}_0(\xi_0)}{2\log2}$ for $p>\bar p$.
\end{lemma}
{\it Proof}.
The minimum $\xi_m$ is smaller than $\xi_0$, since the function ${\bf C}_0(\xi)$ is monotonically increasing.
Thus, we have
\be
{\bf C}(p)=\min_{\xi\in[0,\xi_0]}{\bf C}(p,\xi)\ge \min_{\xi\in[0,\xi_0]}
e^{f(\xi)\log p-\xi+\log C_0(\xi)}/2.
\ee
Since the exponential is monotonic and the exponent is concave, the objective function
has a maximum and two local minima at $\xi=0$ and $\xi=\xi_0$. Keeping in mind that $f(\xi)\ge 1$,
the first local minimum is not less than $p {\bf C}_0(0)/2$. The second minimum is
$e^{f(\xi_0)\log p-\xi_0} {\bf C}_0(\xi_0)/2$,
which is greater than or equal to ${\bf C}_0(\xi_0)/(2 \log2)$. This can be proved by eliminating $p$ through
Eq.~(\ref{eq_xi0}) and minimizing in $\xi_0$. Since ${\bf C}(p)$ is sublinear in $p$,
there is an integer $\bar p$ such that the second minimum is global for $p>\bar p$. $\square$
Summarizing, we have
\begin{equation}
\label{bounds2}
0.72{\bf C}_0(\xi_0)\le {\bf C}(p)\le 2{\bf C}_0(\xi_0)
\end{equation}
for $p$ greater than some integer.
Thus, the complexity analysis of the algorithm is reduced to study the asymptotic behavior
of ${\bf C}_0(\xi_0)$.
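The bounds~(\ref{bounds2}) can be verified numerically. The sketch below assumes the purely illustrative polynomial scalings ${\bf C}_0(\xi)=\xi^2$ and $f(\xi)=\sqrt\xi$, and computes $\xi_0$ as the fixed point of Eq.~(\ref{eq_xi0}).

```python
import math

def C0(xi): return xi ** 2            # map cost, illustrative choice
def f(xi):  return math.sqrt(xi)      # k0(xi) * M(xi), sublinear power

def C(p, xi):
    # Pr_succ = 2*(1 - q)*q with q = (1 - p^-f)^(e^xi), via logs for stability
    logq = math.exp(xi) * math.log1p(-p ** -f(xi))
    succ = 2.0 * (-math.expm1(logq)) * math.exp(logq)
    return C0(xi) / succ if succ > 0 else math.inf

p = 10 ** 6
xi0 = 50.0
for _ in range(200):                  # fixed point of Eq. (eq_xi0)
    xi0 = math.log(-math.log(2) / math.log1p(-p ** -f(xi0)))
grid = [x / 10 for x in range(10, 2500)] + [xi0]
Cp = min(C(p, xi) for xi in grid)     # minimization of Eq. (comp_complexity)
```

With these choices the minimum sits essentially at $\xi_0$, and $0.72\,{\bf C}_0(\xi_0)\le {\bf C}(p)\le 2\,{\bf C}_0(\xi_0)$ holds as expected.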
The upper bound is asymptotically tight, that is, $\xi=\xi_0$ is asymptotically optimal. Taking
$$
f(\xi)=b \xi^\beta \text{ with } \beta\in[0,1),
$$
the optimal value of $\xi$ is
$$
\xi_0=(b \log p)^\frac{1}{1-\beta}+O(1).
$$
The function $f(\xi)$ cannot be linear, but we can take it very close to a linear function,
\be
f(\xi)=b \frac{\xi}{(\log\xi)^\beta}, \;\;\;\; \beta>1.
\ee
In this case, the optimal $\xi$ is
$$
\xi_0=e^{(b\log p)^{1/\beta}}+O\left[(\log p)^{1/\beta} \right].
$$
There are three scenarios leading to subexponential or polynomial complexity.
\begin{enumerate}
\item[(a)] The functions ${\bf C}_0(\xi)$ and $f(\xi)$ scale polynomially as $\xi^\alpha$ and
$\xi^\beta$, respectively, with $\beta\in[0,1)$. Then, the computational complexity ${\bf C}(p)$
scales polynomially in $\log p$ as $(\log p)^\frac{\alpha}{1-\beta}$.
\item[(b)] The function ${\bf C}_0(\xi)$ is polynomial and $f(\xi)\sim\xi/(\log\xi)^\beta$ with $\beta>1$.
Then the computational complexity ${\bf C}(p)$ scales subexponentially in $\log p$ as
$\exp\left[b (\log p)^{1/\beta}\right]$.
\item[(c)] The functions ${\bf C}_0(\xi)$ and $f(\xi)$ are superpolynomial and polynomial, respectively,
with ${\bf C}_0(\xi)\sim\exp\left[b \xi^\alpha\right]$ and $f(\xi)\sim\xi^\beta$. If
$\alpha+\beta<1$, then the complexity ${\bf C}(p)$ is subexponential in $\log p$ and scales
as $\exp\left[b (\log p)^\frac{\alpha}{1-\beta}\right]$.
\end{enumerate}
The algorithm has polynomial complexity in the first scenario. The other cases are subexponential.
This is also implied by the following.
\begin{lemma}
\label{litmus}
The computational complexity ${\bf C}(p)$ is subexponential or polynomial in $\log p$ if
the function ${\bf C}_0(\xi)^{f(\xi)}$ grows less than exponentially, that is, if
$$
\lim_{\xi\rightarrow\infty}\frac{f(\xi)\log {\bf C}_0(\xi)}{\xi}=0.
$$
In particular, the complexity is polynomial if ${\bf C}_0(\xi)$ is polynomial
and $f(\xi)$ scales sublinearly.
\end{lemma}
This lemma can be easily proved directly from Eq.~(\ref{eq_xi0}) and the upper bound in
Eq.~(\ref{bounds}), the former implying the inequality $\xi_0\le f(\xi_0)\log p+\log\log 2$.
Let us prove the first statement.
$$
\lim_{p\rightarrow\infty}\frac{\log{\bf C}(p)}{\log p}\le
\lim_{\xi\rightarrow\infty}\frac{f(\xi)\log[2 {\bf C}_0(\xi)]}{\xi-\log\log 2}=
\lim_{\xi\rightarrow\infty}\frac{f(\xi)\log{\bf C}_0(\xi)}{\xi}=0.
$$
Using the lower bound in Eq.~(\ref{bounds2}),
the lemma can be strengthened by adding the inferences in the other direction
(\emph{if} replaced by \emph{if and only if}).
Summarizing, we have the following.
\begin{claim}
\label{claim1}
The factoring algorithm~\ref{gen_algo0} has subexponential (\emph{polynomial}) complexity if,
for every $\xi=\log N_P>0$ with $N_P$ positive integer, there are rational functions
${\cal R}(\vec\tau)=\frac{{\cal N}(\vec\tau)}{{\cal D}(\vec\tau)}$ of
the parameters $\vec\tau=(\tau_1,\dots,\tau_{M(\xi)})$ over
an algebraic number field $\mathbb{Q}(\alpha)$ of degree $k_0(\xi)$
with polynomials $\cal N$ and $\cal D$ coprime, such that
\begin{enumerate}
\item the number of distinct roots of $\cal N$ in $\mathbb{Q}(\alpha)$ is equal to
about $N_P$. Most of the roots remain distinct when interpreted over finite fields
of order equal to about $N_P^{1/M}$;
\item given any value $\vec\tau$, the computation of ${\cal R}(\vec\tau)$
takes a number ${\bf C}_0(\xi)$ of arithmetic operations growing less than
exponentially (\emph{polynomially}) in $\xi$;
\item the function ${\bf C}_0(\xi)^{k_0(\xi) M(\xi)}$ is subexponential (\emph{the function
$k_0(\xi) M(\xi)$ scales sublinearly}).
\end{enumerate}
\end{claim}
Let us stress that the asymptotic complexity is less than exponential if and only if
${\bf C}_0(\xi)^{f(\xi)}$ is less than exponential. Thus, the latter
condition is a litmus test for a given class of rational functions. However,
the function ${\bf C}_0(\xi)^{f(\xi)}$ does not provide sufficient information on
the asymptotic computational complexity of the factoring algorithm.
The general number-field sieve is the algorithm
with the best known asymptotic complexity, which scales as $e^{a (\log p)^{1/3}}$ (neglecting
$\log\log p$ factors in the exponent).
Thus, algorithm~\ref{gen_algo0} is asymptotically more efficient than the general number-field
sieve if ${\bf C}_0(\xi)$ and $f(\xi)$ are asymptotically upper-bounded by a subexponential function
$e^{b\xi^\alpha}$ and a power function $\propto\xi^\beta$, respectively, such that $\alpha<(1-\beta)/3$.
In the limit case of $\beta\rightarrow 1$ and polynomial complexity of the map, the
function $f(\xi)$ must be asymptotically upper-bounded by $b \xi/(\log\xi)^3$.
\subsection{Number of rational zeros versus polynomial degree}
Previously we have set upper bounds on the required computational complexity of the
rational function ${\cal R}={\cal N}/{\cal D}$ in terms of the number of its rational zeros.
For a polynomial (subexponential) complexity of prime factorization, the computational complexity
${\bf C}_0$ of $\cal R$ must scale polynomially (subexponentially) in the logarithm of the number
of rational zeros. Thus, for a univariate rational function, it is clear that ${\bf C}_0$
has to scale polynomially (subexponentially) in the logarithm of the degree $d$ of $\cal N$,
since the number of rational zeros is upper-bounded by the degree
(fundamental theorem of algebra). An extension of this inference to multivariate
functions is more elaborate, as upper bounds on the number of rational zeros
are unknown. However, we are more properly interested in a set of $N_P$ rational zeros
that remain in great part distinct when reinterpreted over a finite field whose order
is greater than about
$N_P^{1/M}$.
Under this restriction, let us show that the number of rational
zeros of a polynomial of degree $d$ and with $M$ variables is upper-bounded by
$K d^{2 M}$ with some constant $K>0$.
This bound allows us to extend the previous inference on ${\bf C}_0$ to the case of
multivariate functions.
Assuming that the $N_P$ rational zeros over $\mathbb{Q}$ are randomly distributed
when reinterpreted over $\text{GF}(q)$, their number over the finite field is
about $q^{M}\left[1-(1-q^{-M})^{N_P}\right]$, as shown previously. Since
an upper bound on the number of zeros $N(q)$ of a smooth hypersurface
over a finite field of order $q$ is known, we can evaluate an upper bound on $N_P$.
Given the inequality~\cite{katz}
\be\label{gen_weil}
N(q)\le
\frac{q^M-1}{q-1} +\left[(d-1)^M-(-1)^M\right]\left(1-d^{-1}\right)q^{(M-1)/2}
\ee
and
\be
\label{bound_ff}
q^M\left[1-(1-q^{-M})^{N_P}\right]\le N(q),
\ee
we get an upper bound on $N_P$ for each $q$.
Requiring that Eq.~(\ref{bound_ff})
is satisfied for every $q>N_P^{1/M}$, we get
\be\label{up_bound}
N_P< K d^\frac{2 M^2}{M+1}< K d^{2 M}
\ee
for some constant $K$ (the same result is obtained by assuming that Eq.~(\ref{bound_ff}) holds
for every $q$).
Note that a slight violation of bound~(\ref{up_bound}), with $N_P$ growing as $d_0^{M^a}$
in $M$ for some particular $d=d_0$ and $a>1$, would make the complexity of prime factorization
polynomial, provided the computational complexity of evaluating the function ${\cal R}$
is polynomial in $M$. This latter condition can actually be fulfilled, as shown with an
example later.
Ineq.~(\ref{gen_weil}) holds for smooth irreducible hypersurfaces. However, dropping
these conditions is not expected to affect the bound~(\ref{up_bound}). For example, if
$M=2$, then Ineq.~(\ref{gen_weil}) gives
\be\label{weil_plane}
N_P\le q+1+(d-1)(d-2)\sqrt{q}
\ee
which is Weil's upper bound~(\ref{weil_bounds}) for
a smooth plane curve, whose geometric genus $g$ is equal to $(d-1)(d-2)/2$.
This inequality holds also for singular curves~\cite{aubry}. Indeed, this comes
from the upper bound~(\ref{aubry_eq}) and the equality $g=(d-1)(d-2)/2-\delta$.
Reducibility also does not affect Ineq.~(\ref{up_bound}).
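A brute-force check of the bound~(\ref{weil_plane}) on an example curve (the Fermat cubic $x^3+y^3=1$, a $d=3$ plane curve chosen for illustration), counting affine zeros over several prime fields:

```python
import math

def affine_zeros(q):
    # Count affine zeros of x^3 + y^3 - 1 over F_q (q prime).
    return sum((x ** 3 + y ** 3 - 1) % q == 0
               for x in range(q) for y in range(q))

d = 3
for q in (7, 11, 13, 31, 61):
    weil = q + 1 + (d - 1) * (d - 2) * math.sqrt(q)   # bound (weil_plane)
    assert affine_zeros(q) <= weil
```

The affine count is a lower bound on the number of projective points, so the check is conservative.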
It is simple to find examples of multivariate functions with a number of rational
zeros quite close to the bound $K d^{2 M}$.
Trivially, there are polynomials ${\cal N}(\tau_1,\dots,\tau_M)$ of
degree $d$ with a number of rational zeros at least equal to the number of
coefficients minus $1$, that is, equal to
$\bar N_P\equiv M!^{-1}\prod_{k=1}^M(d+k)-1\sim d^M/M!+O(d^{M-1})$.
For $M=1$, this corresponds to take the univariate polynomial
\be\label{univ_poly}
{\cal N}(\tau)=(\tau-x_1)(\tau-x_2)\dots(\tau-x_d).
\ee
A better construction of a multivariate polynomial is a generalization
of the univariate polynomial in Eq.~(\ref{univ_poly}).
Given linear functions $L_{i,s}(\vec\tau)$, the polynomial
$$
\tilde P=\sum_{i=1}^M\prod_{s=1}^d L_{i,s}(\vec\tau)
$$
generally has a number of rational zeros $N_P$ at least equal to $d^M$, which
is the square root of the upper bound, up to a constant.
For $d<4$ and $M=2$, the number of rational zeros turns
out to be infinite, since the genus is smaller than $2$
(see Sec.~\ref{sec_infinite_points} for the case of
infinite rational points).
A naive computation of $\tilde P(\vec\tau)$ takes $d M^2$
arithmetic operations, that is, its complexity is polynomial in $M$.
This example provides an illustration of the complexity test described
previously in Claim~\ref{claim1}. Expressing
$d$ in terms of $M$ and $\xi=\log N_P$ and assuming that
${\bf C}_0\sim d M^2$, we have that
$$
{\bf C}_0(\xi)=M^2 e^{M^{-1}\xi},
$$
which is subexponential in $\xi$ (provided that $M$ is a growing function of $\xi$);
this is a necessary condition for a subexponential algorithm. However, the polynomial
does not pass the litmus test, as ${\bf C}_0(\xi)^M$ grows exponentially.
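The construction of $\tilde P$ and its $d^M$ rational zeros can be made concrete for $d=3$, $M=2$ with explicitly chosen (illustrative) linear forms $L_{1,s}=\tau_1-s$ and $L_{2,s}=\tau_2-s\tau_1-s^2$:

```python
from itertools import product

d, M = 3, 2

def P_tilde(t1, t2):
    # P~ = prod_s (t1 - s) + prod_s (t2 - s*t1 - s^2), degree d, M = 2
    p1 = p2 = 1
    for s in (1, 2, 3):
        p1 *= t1 - s
        p2 *= t2 - s * t1 - s * s
    return p1 + p2

# For each (s1, s2), the linear system t1 - s1 = t2 - s2*t1 - s2^2 = 0
# has the solution (s1, s2*s1 + s2^2); both products vanish there.
zeros = {(s1, s2 * s1 + s2 * s2) for s1, s2 in product((1, 2, 3), repeat=2)}
```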
\subsubsection{The case of infinite rational zeros}
\label{sec_infinite_points}
Until now, we have assumed that the rational function has a finite number of rational
zeros over the rationals. However, in the multivariate case, it is possible to
have non-zero functions
with an infinite number of zeros. For example, this is the case for bivariate
polynomials with genus equal to zero or one, which correspond to parametrizable
curves and elliptic curves, respectively. We can also have functions whose
zero locus contains linear subspaces with positive dimension, which can have
infinite rational points. Since the
probability of having ${\cal R}$ equal to zero modulo $p$ increases with the number
of zeros over the rationals, this would imply that the probability is equal
to $1$ if the number of zeros is infinite. This is evidently not the case. For
example, if ${\cal R}$ is zero for $\tau_1=0$ and $M>1$, the function evidently
has infinitely many rational points over $\mathbb{Q}$, but the number of points with
$\tau_1=0$ over $\mathbb{Z}_p$ is $p^{M-1}$, which is $p$ times less than the
number of points in the space.
Once again,
we are more properly interested in sets of $N_P$ rational zeros over $\mathbb{Q}$
such that a large fraction of them remain distinct over finite fields whose order
is greater than about $N_P^{1/M}$. Under this condition, $N_P$ cannot be infinite
and is constrained by Ineq.~(\ref{up_bound}). If there are linear subspaces with
dimension $h>0$ in the zero locus of $\cal R$, we may fix some of the parameters
$\vec\tau$, so that
these spaces become points. In the next sections, we will build rational functions
having isolated rational points and possible linear subspaces in the zero locus.
If there are subspaces with dimension $h>0$ giving a dominant contribution to
factorization, we can transform them to isolated rational
points by fixing some parameters without changing the asymptotic complexity of
the algorithm.
Isolated rational points are the only relevant points for an asymptotic study of
the complexity of the factoring algorithm, up to a dimension reduction. Thus,
we will consider only them and disregard the other linear subspaces.
\section{Setting the problem in the framework of algebraic geometry}
\label{sec_arg_geo}
Since the number of zeros $N_P$ is constrained by Ineq.~(\ref{up_bound}),
the complexity of computing the rational function ${\cal R}(\vec\tau)$ must be
subexponential or polynomial in $\log d$ in order to have ${\bf C}_0(\xi)$
subexponential or polynomial. This complexity scaling is attained if, for
example, $\cal R$ is a polynomial with few monomials.
The univariate polynomial $P=\tau^d-1$, which is pivotal in Pollard's $p-1$
algorithm, can be evaluated with a number of arithmetic operations scaling
polynomially in $\log d$. This is achieved by consecutively applying
polynomial maps. For example, if $d=2^g$, then $\tau^d$ is computed through
$g$ applications of the map $x\rightarrow x^2$ by starting with $x=\tau$.
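In code, this is the usual square-and-multiply idea; a minimal sketch for $d=2^g$:

```python
def pow2g_minus1(tau, g, c):
    # Evaluate P = tau^d - 1 with d = 2^g over Z/cZ using g modular
    # squarings, i.e. O(log d) arithmetic operations.
    x = tau % c
    for _ in range(g):
        x = x * x % c
    return (x - 1) % c
```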
However, polynomials with few terms have generally few zeros over
$\mathbb{Q}$. More general polynomials and rational functions with friendly
computational complexity are obviously available and are obtained by consecutive
applications of simple functions, as done for $\tau^d-1$. This leads us to formulate
the factorization problem in the framework of algebraic geometry as
an intersection problem.
\subsection{Intersection points between a parametrizable variety and
a hypersurface}
\label{sec_intersection}
Considering only the operations defined in the field, the most general rational functions
${\cal R}(\vec\tau)$ with low complexity can be evaluated through the consecutive
application of a small set of simple rational equations of the form
\be\label{ratio_eqs}
\begin{array}{c}
x_{n-M}=\frac{{\cal N}_{n-M}(x_{n-M+1},\dots,x_n)}{{\cal D}_{n-M}(x_{n-M+1},\dots,x_n)} \\
x_{n-M-1}=\frac{{\cal N}_{n-M-1}(x_{n-M},\dots,x_n)}{{\cal D}_{n-M-1}(x_{n-M},\dots,x_n)} \\
\vdots \\
x_1=\frac{{\cal N}_1(x_2,\dots,x_n)}{{\cal D}_1(x_2,\dots,x_n)} \\
{\cal R}=P_0(x_1,\dots,x_n),
\end{array}
\ee
where $P_0$ is a polynomial. If the numerators and denominators ${\cal N}_k$ and ${\cal D}_k$
do not contain too many monomials, then the computation of ${\cal N}_k/{\cal D}_k$ can
be performed efficiently. Assuming that the computational complexity
of these rational functions is polynomial in $n$, the complexity of $\cal R$ is
polynomial in $n$. The computation of ${\cal R}(\vec\tau)$ is performed by setting the
last $M$ components $x_{n-M+1},\dots,x_n$ equal to $\tau_1,\dots,\tau_M$
and generating the sequence $x_{n-M},x_{n-M-1},\dots,x_1,{\cal R}$ according
to Eqs.~(\ref{ratio_eqs}), which ends up with the value of ${\cal R}$.
The procedure may fail to compute the
right value of ${\cal R}(\vec\tau)$ if some denominator
${\cal D}_k(\vec\tau)\equiv {\cal D}_k[x_{k+1}(\vec\tau),\dots,x_n(\vec\tau)]$
turns out to be equal to zero during the computation. However, since
our only purpose is to obtain a zero modulo some factor of $c$, we can take a zero divisor
as outcome and stop the computation of the sequence. In this way, the
algorithm generates a modified function ${\cal R}'(\vec\tau)$.
Defining $\bar{\cal N}_k(\vec\tau)$ as the numerator of the rational function
${\cal D}_k(\vec\tau)$, we have
\be
{\cal R}'(\vec\tau)=\left\{
\begin{array}{lr}
{\cal R}(\vec\tau) & \;\; \text{if}\;\; \bar{\cal N}_1(\vec\tau)\dots \bar{\cal N}_{n-M}(\vec\tau)\ne0 \\
0 & \text{otherwise}
\end{array}
\right.
\ee
The function ${\cal R}'(\vec\tau)$ has the zeros of
${\cal N}(\vec\tau)\bar{\cal N}_1(\vec\tau)\dots \bar{\cal N}_{n-M}(\vec\tau)$.
For later reference, let us define the following.
\begin{algorithm}
\label{algo1}
Computation of ${\cal R}'(\vec\tau)$.
\begin{enumerate}
\item set $(x_{n-M+1},\dots,x_n)=(\tau_1,\dots,\tau_M)$;
\item set $k=n-M>0$;
\item\label{attr} set
$x_k=\frac{{\cal N}_k(x_{k+1},\dots,x_n)}{{\cal D}_k(x_{k+1},\dots,x_n)}$.
If the division fails, return the denominator as outcome;
\item set $k=k-1$;
\item\label{last_step}
if $k=0$, return $P_0(x_1,\dots,x_n)$, otherwise go back to \ref{attr}.
\end{enumerate}
\end{algorithm}
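A minimal sketch of this procedure over $\mathbb{Z}/c\mathbb{Z}$; the two-step chain below ($n=3$, $M=1$, with hypothetical numerators and denominators) is purely illustrative.

```python
import math

def eval_R_prime(tau, steps, P0, c):
    # Evaluate R'(tau) over Z/cZ by applying the rational steps
    # x_k = N_k / D_k in sequence; a non-invertible denominator is
    # returned as the outcome (it shares a factor with c).
    xs = [tau % c]                     # known values x_{k+1}, ..., x_n
    for N_k, D_k in steps:
        den = D_k(xs) % c
        if math.gcd(den, c) != 1:
            return den                 # division fails: zero divisor
        xs.insert(0, N_k(xs) * pow(den, -1, c) % c)
    return P0(xs) % c

# Hypothetical chain with n = 3, M = 1 (x3 = tau):
#   x2 = (x3^2 + 1) / x3,  x1 = (x2 + x3) / (x2 - x3),  R = x1^2 - x2
steps = [
    (lambda xs: xs[0] ** 2 + 1, lambda xs: xs[0]),
    (lambda xs: xs[0] + xs[1], lambda xs: xs[0] - xs[1]),
]
P0 = lambda xs: xs[0] ** 2 - xs[1]
r = eval_R_prime(5, steps, P0, 91)
```

For $\tau=7$ and $c=91$ the first division fails and the zero divisor $7$ is returned, illustrating the early exit at step~3.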
The zeros of the denominators are not expected to contribute appreciably
to the asymptotic complexity of the factoring algorithm; otherwise, it
would be more convenient to reduce the number of steps of the sequence by one and
replace the last function $P_0$ with the denominator ${\cal D}_1$. Let us show this.
Let us denote by $N_1$ the number of rational zeros of ${\cal R}'$ over some
finite field with all the denominators different from zero. They are the zeros
returned at step~\ref{last_step} of Algorithm~\ref{algo1}.
Let $N_T$ be the total number of zeros. We remind that the factoring complexity
is about $p^{k M}$ times the ratio between the complexity of $\cal R$ and the number
of zeros. If the algorithm is more
effective than the one with one step less and $P_0$ replaced with
${\cal D}_1$, then $\frac{C_T}{N_T}<\frac{C_T-C_1}{N_T-N_1}$,
where $C_1$ and $C_T$ are the number of arithmetic operations of the last
step and of the whole algorithm, respectively. Since $C_1\ge1$, we have
$$
N_1>N_T C_T^{-1}.
$$
In order to have a subexponential
factoring algorithm, $C_T$ must scale subexponentially
in $\log N_T$. Thus,
$$
N_1>N_T e^{-\alpha (\log N_T)^\beta}
$$
for some positive $\alpha$ and $0<\beta<1$. That is,
$$
\log N_1>\log N_T -\alpha (\log N_T)^\beta.
$$
If we assume polynomial complexity, we get the tighter bound
$$
\log N_1>\log N_T -\alpha \log\log N_T.
$$
These inequalities imply that the asymptotic complexity of the factoring
algorithm does not change if we discard the zero divisors at Step~\ref{attr}
in Algorithm~\ref{algo1}. Thus, for a complexity analysis, we can consider only
the zeros of ${\cal R}'(\vec\tau)$ with all the denominators ${\cal D}_k(\vec\tau)$
different from zero. This will simplify the subsequent discussion.
Each of these zeros is associated with an $n$-tuple $(x_1,\dots,x_n)$,
generated by Algorithm~\ref{algo1} and solving Eqs.~(\ref{ratio_eqs}).
Let us denote by ${\cal Z}_P$ the set of these $n$-tuples.
By definition, an element in ${\cal Z}_P$ is a zero of the set of $n-M+1$ polynomials
\be
\label{poly_affine}
\begin{array}{l}
P_0(x_1,\dots,x_n), \\
P_k(x_1,\dots,x_n)=x_k {\cal D}_k(x_{k+1},\dots,x_n)-{\cal N}_k(x_{k+1},\dots,x_n), \;\;\;
k\in\{1,\dots,n-M\}.
\end{array}
\ee
The last $n-M$ polynomials define an algebraic set of points, say $\cal A$,
having one irreducible branch parametrizable by Eqs.~(\ref{ratio_eqs}).
This branch defines an algebraic variety which we denote by $\cal V$.
The algebraic set may have other irreducible components, which we do not care about.
The polynomial $P_0$ defines a hypersurface, say $\cal H$. Thus, the
set ${\cal Z}_P$ is contained in the intersection between $\cal V$ and $\cal H$. This
intersection may contain singular points of $\cal V$ with ${\cal D}_k(x_{k+1},\dots,x_n)=0$
for some $k$, which are not relevant for a complexity analysis, as shown previously.
Thus, the factorization problem is reduced to searching for non-singular rational points
of a parametrizable variety $\cal V$, whose intersection with a hypersurface
$\cal H$ contains an arbitrarily large number $N_P$ of rational points.
If $N_P$ and ${\bf C}_0$ scale exponentially and polynomially
in the space dimension $n$, respectively, then the complexity of factorization is polynomial,
provided that the number of parameters $M$ scales sublinearly as a power of $n$.
In the limit case of
\be\label{subexp_cond}
M\sim n/(\log n)^\beta
\ee
with $\beta>1$,
the complexity scales subexponentially as $e^{b (\log p)^{1/\beta}}$. Thus, if
$M$ has the scaling property~(\ref{subexp_cond}) with $\beta>3$, then there
is an algorithm outperforming asymptotically the general number field sieve.
A subexponential computational complexity is also obtained if the complexity
of evaluating a point in $\cal V$ is subexponential.
The parametrization of $\cal V$ is a particular case of rational parametrization
of a variety. We call it \emph{Gaussian parametrization} since the triangular form of
the polynomials $P_1,\dots,P_{n-M}$ resembles Gaussian elimination.
Note that this form is invariant under the transformation
\be\label{poly_replacement}
P_k\rightarrow P_k+\sum_{k'=k+1}^{n-M}\omega_{k,k'} P_{k'}.
\ee
The form is also invariant under the variable
transformation
\be\label{invar_trans}
x_k\rightarrow x_k+\sum_{k'=k+1}^{n+1} \eta_{k,k'} x_{k'}
\ee
with $x_{n+1}=1$.
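The back-substitution underlying a Gaussian parametrization can be sketched numerically. The following fragment is an illustrative toy (the functions ${\cal N}_k$, ${\cal D}_k$ and the instance are invented for the example, not taken from the text): it evaluates the triangular system $x_k={\cal N}_k/{\cal D}_k$ for $n=3$, $M=1$, discards zero divisors, and checks that the defining polynomials $P_k$ vanish on the resulting point.

```python
from fractions import Fraction

# Toy Gaussian parametrization with n = 3, M = 1 (single parameter x_3).
# The numerators N_k and denominators D_k below are invented for the
# example; the point is only the triangular back-substitution
#   x_k = N_k(x_{k+1},...,x_n) / D_k(x_{k+1},...,x_n).
N = {2: lambda x3: x3 + 1,           # N_2(x_3)
     1: lambda x2, x3: x2 * x3 + 2}  # N_1(x_2, x_3)
D = {2: lambda x3: x3 - 2,           # D_2(x_3)
     1: lambda x2, x3: x2 + x3}      # D_1(x_2, x_3)

def parametrize(x3):
    """Back-substitute the triangular system; reject zero divisors."""
    x3 = Fraction(x3)
    d2 = D[2](x3)
    if d2 == 0:
        raise ZeroDivisionError("D_2 vanishes: discarded zero divisor")
    x2 = N[2](x3) / d2
    d1 = D[1](x2, x3)
    if d1 == 0:
        raise ZeroDivisionError("D_1 vanishes: discarded zero divisor")
    x1 = N[1](x2, x3) / d1
    return (x1, x2, x3)

x1, x2, x3 = parametrize(3)
# The defining polynomials P_k = x_k*D_k - N_k vanish on the point.
assert x2 * D[2](x3) - N[2](x3) == 0
assert x1 * D[1](x2, x3) - N[1](x2, x3) == 0
```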
It is interesting to note that if $N_P'$ out of the $N_P$ points in ${\cal Z}_P$
lie on a common hyperplane, then it is possible to build another variety with Gaussian
parametrization and a hypersurface over an $(n-1)$-dimensional subspace such that
their intersection contains the $N_P'$ points. For later reference, let us
state the following.
\begin{lemma}
\label{lemma_dim_red}
Let ${\cal Z}_P$ be the set of common zeros of the polynomials~(\ref{poly_affine})
with ${\cal D}_k(x_{k+1},\dots,x_n)\ne 0$ over ${\cal Z}_P$ for $k\in\{1,\dots,n-M\}$.
If $N_P'$ points in ${\cal Z}_P$ are solutions of the linear equation $L(x_1,\dots,x_n)=0$,
then there is a variety with Gaussian parametrization and a hypersurface over
the $(n-1)$-dimensional subspace defined by $L(x_1,\dots,x_n)=0$ such that their
intersection contains the $N_P'$ points.
\end{lemma}
{\it Proof.}
Given the linear function $L(x_1,\dots,x_n)\equiv l_{n+1}+\sum_{k=1}^n l_k x_k$,
let us first consider the case with $l_k=0$ for $k\in\{1,\dots,n-M\}$. Using
the constraint $L=0$, we can set one of the $M$ variables $x_{n-M+1},\dots,x_n$
as a linear function of the remaining $M-1$ variables. Thus, we get a new
set of polynomials retaining the original triangular form. The new parametrizable
variety, say ${\cal V}'$, has $M-1$ parameters. The intersection of ${\cal V}'$ with
the new hypersurface over the $(n-1)$-dimensional space contains the $N_P'$ points.
Let us now consider the case with $l_k=0$ for $k\in\{1,\dots,\bar k\}$, where
$\bar k$ is some integer between $0$ and $n-M-1$, such that $l_{\bar k+1}\ne 0$.
We can use the constraint $L=0$ to set $x_{\bar k+1}$ as a linear function
of $x_{\bar k+2},\dots,x_n$.
We discard the polynomial
$P_{\bar k+1}$ and eliminate the $(\bar k+1)$-th variable from the remaining polynomials.
We get $n-1$ polynomials retaining the original triangular form in $n-1$ variables
$x_1,\dots,x_{\bar k},x_{{\bar k}+2},\dots,x_n$. The intersection between the
new parametrizable variety and the new hypersurface contains the $N_P'$ points.
$\square$
\newline
This simple lemma will turn out to be a useful tool in different parts of the
paper.
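The elimination step in the proof of Lemma~\ref{lemma_dim_red} can be illustrated with a toy computation. The instance below (numerator, denominator and linear constraint are all invented for illustration) shows that substituting the constraint into the triangular system preserves the parametrizable form, and that a point of the original variety lying on the hyperplane survives in the reduced one.

```python
from fractions import Fraction

# Toy instance of the lemma's reduction: n = 3, M = 2, one triangular
# polynomial P_1 = x1*D_1(x2,x3) - N_1(x2,x3), and a linear constraint
# L = x2 + x3 - 2 = 0 involving only the parameter variables x2, x3.
N1 = lambda x2, x3: x2 * x3 + 1
D1 = lambda x2, x3: x2 + x3 + 1

# Use L = 0 to eliminate x3 = 2 - x2: the reduced system keeps the
# triangular form with a single parameter x2.
N1r = lambda x2: N1(x2, 2 - x2)
D1r = lambda x2: D1(x2, 2 - x2)

def point_old(x2, x3):
    return (Fraction(N1(x2, x3), D1(x2, x3)), x2, x3)

def point_new(x2):
    return (Fraction(N1r(x2), D1r(x2)), x2)

# A point of the original variety lying on L = 0 survives, with the
# same leading coordinates, in the reduced variety.
x2, x3 = 5, -3  # satisfies x2 + x3 - 2 = 0
assert point_old(x2, x3)[:2] == point_new(x2)
```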
In Section~\ref{sec_common_zeros}, we show how to build a set of polynomials $P_k$
with a given number $N_P$ of common rational zeros by using some tools of algebraic geometry
described in Appendix~\ref{alg_geom}.
In Sec.~\ref{build_up}, we close the circle by
imposing the form~(\ref{poly_affine}) for the polynomials $P_k$ with
the constraint that ${\cal D}_k(x_{k+1},\dots,x_n)\ne 0$ for $k\in\{1,\dots,n-M\}$ over
the set of $N_P$ points.
\subsection{Sets of polynomials with a given number of zeros over a number field}
\label{sec_common_zeros}
In this subsection, we build polynomials with a given number $N_P$ of common rational zeros
as elements of an ideal $I$ generated by products of linear functions. This construction
is the most general one.
The relevant information on the ideal $I$ is summarized by a satisfiability
formula in conjunctive normal form without negations and a linear matroid.
The formula and the matroid uniquely determine the number $N_P$ of rational common zeros
of the ideal. Incidentally, we also show that the information can be encoded in a more
general formula with negations by a suitable choice of the matroid.
Every finite set of points in an $n$-dimensional space is an algebraic set, that
is, they are all the zeros of some set of polynomials. More generally, the union
of every finite set of linear subspaces is an algebraic set.
In the following, we
will denote linear polynomials by a symbol with a hat; namely, $\hat a$ is meant
as $a_{n+1}+\sum_{i=1}^n a_i x_i$. Let us denote by $\vec x$ the $(n+1)$-dimensional
vector $(x_1,\dots,x_n,x_{n+1})$, where $x_{n+1}$ is an extra component that is
set equal to $1$. A linear polynomial $\hat a$ is written in the form
$\vec a\cdot\vec x$.
Let $V_1,\dots, V_{L}$ be a set of linear subspaces and $I_1,\dots,I_L$ their
associated radical ideals. The codimension of the $k$-th subspace is denoted by $n_k$.
The minimal set of generators of the $k$-th ideal contains $n_k$ independent
linear polynomials, say $\hat a_{k,1},\dots,\hat a_{k,n_k}$, so that
\be
\vec x\in V_k \Leftrightarrow\vec a_{k,i}\cdot\vec x=0\;\; \forall i\in\{1,\dots,n_k\}.
\ee
If the codimension $n_k$ is equal to $n$, then $V_k$ contains a single
point. We are mainly interested in these points, whose number is
taken equal to $N_P$. The contribution of higher-dimensional subspaces to
the asymptotic complexity of the factoring algorithm is irrelevant up to a dimension
reduction (see also Sec.~\ref{sec_infinite_points} and the remark at the
end of the section). Since only isolated points are relevant, we could just consider
ideals whose zero loci contain only isolated points. However,
we allow for the possible presence of subspaces with positive dimension
since they may simplify the set of the generators or the form of the polynomials
$P_k$ that eventually we want to build.
Let $\cal Z$ be the union of the subspaces $V_k$. The product
$I_1\cdot I_2\cdot\dots I_L\equiv \tilde I$ is associated with $\cal Z$, that is,
${\cal Z}={\bf V}(\tilde I)$. A set of generators of the ideal $\tilde I$ is
\be
\prod_{k=1}^{L}\hat a_{k,i_k}\equiv G_{i_1,\dots,i_L}(\vec x) \;\;\;
i_r\in\{1,\dots,n_r\}, r\in\{1,\dots,L\}.
\ee
Thus, we have that
\be
\vec x\in {\cal Z} \Leftrightarrow G_{i_1,\dots,i_L}(\vec x)=0
\;\;\;\; i_r\in\{1,\dots,n_r\}, r\in\{1,\dots,L\}.
\ee
Polynomials in the ideal $\tilde I$ are zero in the set $\cal Z$.
This construction is not the most general, as $\tilde I$ is not radical.
Thus, there are polynomials that are not in $\tilde I$, but
their zero locus contains $\cal Z$.
Furthermore, the number of generators and the number of their
factors grow polynomially and linearly in $N_P$, respectively.
This makes it hard to build polynomials in $\tilde I$ whose
complexity is polynomial in $\log N_P$.
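As a concrete sanity check of the product-ideal construction, the following sketch (a hypothetical two-point instance in the plane, not taken from the text) builds the generators $G_{i_1,\dots,i_L}$ as products of one linear form per ideal and verifies that their common zero locus is exactly the union of the subspaces.

```python
from itertools import product

# Hypothetical instance: two isolated points in the plane, each the zero
# set of two linear forms (the generators of its ideal).
V1 = [lambda x, y: x,     lambda x, y: y]      # point (0, 0)
V2 = [lambda x, y: x - 1, lambda x, y: y - 1]  # point (1, 1)

# Product-ideal generators G_{i_1,i_2}: one factor from each ideal.
gens = [lambda x, y, f=f, g=g: f(x, y) * g(x, y)
        for f, g in product(V1, V2)]

def in_Z(x, y):
    """Membership in the algebraic set: all generators vanish."""
    return all(g(x, y) == 0 for g in gens)

# On a small search box, the common zero locus is exactly the two points.
zeros = [(x, y) for x in range(-2, 3) for y in range(-2, 3) if in_Z(x, y)]
assert zeros == [(0, 0), (1, 1)]
```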
The radicalization of the ideal and the assumption of
special arrangements of the subspaces in $\cal Z$
can reduce drastically both the degree of the generators and
their number. For example, let us assume that $V_1$ and $V_2$
are two isolated points in the $n$-dimensional space and, thus,
$n_1=n_2=n$. The overall number of generators $\hat a_{1,1},\dots,\hat a_{1,n}$ and
$\hat a_{2,1},\dots,\hat a_{2,n}$ is equal to $2 n$. Thus, there are
$n-1$ linear constraints among the generators. Using linear
transformations, we can write these constraints as
$$
\hat a_{1,i}=\hat a_{2,i}\equiv\hat a_i \;\;\;\forall i\in\{2,\dots,n\}.
$$
Every generator $G_{i,i,i_3,\dots,i_L}$ with $i\ne 1$ is equal
to $\bar G_{i,i_3,\dots,i_L}=\hat a_i^2 \prod_{k=3}^L \hat a_{k,i_k}$. The polynomial
$\bar G'_{i,i_3,\dots,i_L}\equiv \hat a_i \prod_{k=3}^L \hat a_{k,i_k}$ is not an element of the
ideal $\tilde I$, but it is an element of its radical. Indeed,
it is zero in the algebraic set $\cal Z$. Thus, we extend
the ideal by adding these new elements. This extension allows
us to eliminate all the generators $G_{i_1,i_2,\dots,i_L}$ with
$i_1=1$ or $i_2=1$, since they are generated by $\bar G'_{i,i_3,\dots,i_L}$.
Thus, the new ideal has the generators,
\begin{equation}
\left.
\begin{array}{l}
\hat a_{1,1}\hat a_{2,1}\prod_{k=3}^L \hat a_{k,i_k} \\
\hat a_i\prod_{k=3}^L \hat a_{k,i_k},\;\;\; i\in\{2,\dots,n\}
\end{array}\right\}
i_r\in\{1,\dots,n_r\}, r\in\{3,\dots,L\}.
\end{equation}
Initially, we had $n^2\prod_{k=3}^{L}n_k$ generators. Now, their
number is $n\prod_{k=3}^{L}n_k$. A large fraction of them has the
degree reduced by one. We can proceed with the other points and
further reduce
both the degree and the number of generators. Evidently,
this procedure cannot lead to a drastic simplification of the
generators if the points in $\cal Z$ are in general position, since
the generators must contain the information about these positions.
A simplification is possible if the points have special arrangements
leading to the contraction of a large number of factors in the generators.
Namely, coplanarity of points is the key feature that can lead to a
drastic simplification of the generators.
In an $n$-dimensional space, there are at most $n$ coplanar points
in general position. Let us consider algebraic sets containing
larger groups of coplanar points. For example, let us assume that
the first $m$ sets $V_1,\dots,V_m$ are distinct coplanar points, with
$m\gg n$. Then, there is a vector $\vec a_1$ such that $\vec a_1\cdot\vec x=0$
for every $\vec x$ in the union of the first $m$ linear spaces. It is convenient
to choose the linear polynomial $\vec a_1\cdot\vec x$ as common generator of
the first $m$ ideals $I_1,\dots,I_m$. Let us set $\hat a_{k,1}=\hat a_1$
for $k\in\{1,\dots,m\}$. Every generator $G_{i_1,\dots,i_L}$ with $i_k=1$
for some $k\in\{1,\dots,m\}$ is contracted to a generator of the form
$\hat a\prod_{k=m+1}^L \hat a_{k,i_k}$. If there are other groups of
coplanar points, we can perform other contractions.
\begin{definition}
Given an integer ${\bar n}>n$, we define $\Gamma_s$ with $s\in\{1,\dots,{\bar n}\}$ as
a set of $s$-tuples $(i_1,\dots,i_s)\in\{1,\dots,{\bar n}\}^s$
with $i_k < i_{k'}$ for $k < k'$. That is,
\begin{equation}
\forall s\in\{1,\dots,{\bar n}\},\;\;\;
\Gamma_s\subseteq \{(i_1,\dots,i_s)\in\{1,\dots,{\bar n}\}^s| i_k < i_{k'},\forall k, k' \text{ s.t. }
k<k' \}.
\end{equation}
\end{definition}
The final result of the inclusion of elements of the radical ideal is another
ideal, say $I$, with generators of the form
\be\label{generators}
\begin{array}{l}
\hat a_{i_1} \;\;\;\forall i_1\in\Gamma_1 \\
\hat a_{i_1}\hat a_{i_2} \;\;\;\forall (i_1,i_2)\in\Gamma_2 \\
\dots \\
\hat a_{i_1}\hat a_{i_2}\dots\hat a_{i_{{\bar n}}} \;\;\;\forall (i_1,i_2,\dots,i_{{\bar n}})\in\Gamma_{{\bar n}},
\end{array}
\ee
where $\{\hat a_1,\dots,\hat a_{{\bar n}}\}\equiv\Phi$ is a set of ${\bar n}$ linear polynomials.
Polynomials in this form generate the most general ideals whose zero loci contain a given finite set
of points. This is formalized by the following.
\begin{lemma}
Every radical ideal associated with a finite set ${\cal Z}_P$ of points is generated by a set of
polynomials of form~(\ref{generators}) for some $\bar n$.
\end{lemma}
{\it Proof}. This can be shown with a naive construction. Given a set $\bar S$ of $N_P$ points
associated with the ideals $I_1,\dots,I_{N_P}$, the product
$I_1\cdot \dots I_{N_P}$ is an ideal associated with the set $\bar S$, which can be radicalized by
adding a certain number of univariate square-free polynomials as generators~\cite{seidenberg}.
The resulting ideal is generated by a set of polynomials of form~(\ref{generators}). $\square$
\newline
With the construction used in the proof, $\bar n$ turns out to be equal to the number of points
in ${\cal Z}_P$, which is not optimal for our purposes. We are interested in keeping $\bar n$
sufficiently small, possibly scaling polynomially in the dimension $n$.
This is possible only if the points in the zero locus
have a `high degree' of collinearity.
Thus, a bound on $\bar n$ sets a restriction on ${\cal Z}$.
The minimal information on $\Phi$ that is relevant for determining the number $N_P$ of points in
$\cal Z$ is encoded in a linear \emph{matroid}, of which $\Phi$ is one linear representation.
Thus, the sets $\Gamma_s$ and the matroid determine $N_P$.
Note that the last set $\Gamma_{{\bar n}}$ has at most one element. The linear generators
can be eliminated by reducing the dimension of the affine space, see Lemma~\ref{lemma_dim_red}.
Thus, we can set $\Gamma_1=\emptyset$. Every subset $\Phi_{sub}$ of $\Phi$ is associated with a
linear space $V_{sub}$ whose points are the common zeros of the linear functions in $\Phi_{sub}$.
That is, $V_{sub}={\bf V}(I_{sub})$, where $I_{sub}$ is the ideal generated by $\Phi_{sub}$.
Let us denote briefly by ${\bf V}(\Phi_{sub})$ the linear space ${\bf V}(I_{sub})$.
This mapping from subsets of $\Phi$ to subspaces is not generally injective. If
$\hat a'\in\Phi\setminus \Phi_{sub}$ is a linear superposition of the functions in
$\Phi_{sub}$, then $\Phi_{sub}$ and $\Phi_{sub}\cup\{\hat a'\}$ represent the same linear space.
An injective mapping is obtained by considering only the maximal subset associated with a linear
subspace. These maximal subsets are called \emph{flats} in matroid theory.
\begin{definition}
\emph{Flats} of the linear matroid $\Phi$ are defined as subsets
$\Phi_{sub}\subseteq \Phi$ such that
no function in $\Phi\setminus \Phi_{sub}$ is
linearly dependent on the functions in $\Phi_{sub}$.
\end{definition}
Let us also define the closure of a subset of $\Phi$.
\begin{definition}
Given a subset $\Phi_{sub}$ of $\Phi$ associated with subspace $V$, its closure
$\text{cl}(\Phi_{sub})$ is the flat associated with $V$.
\end{definition}
The number of independent functions in a flat is called the \emph{rank} of the flat.
The whole set $\Phi$ is a flat of rank $n+1$, which is associated with an empty space. Flats
of rank $n$ define points of the $n$-dimensional affine space (with $x_{n+1}=1$).
More generally,
flats of rank $k$ define linear spaces of dimension $n-k$.
The dimension of a flat $\Phi_{flat}$ is meant as the dimension of the space
${\bf V}(\Phi_{flat})$.
The structure of the generators~(\ref{generators}) resembles
a Boolean satisfiability problem (SAT) in conjunctive normal form without negations.
Let us interpret $\hat a_i$ as a logical variable $a_i$ which is $\true$ or $\false$ if
the function is zero or different from zero, respectively. Every subset
$\Phi_{sub}\subseteq\Phi$ is identified with a string $(a_1,\dots,a_{{\bar n}})$ such
that $a_i=\true$ if and only if $\hat a_i\in\Phi_{sub}$.
The SAT formula associated with the generators~(\ref{generators}) is
\be\label{SAT}
\bigwedge\limits_{k=2}^{{\bar n}} \left( \bigvee\limits_{i\in \Gamma_k } a_i \right).
\ee
Given a flat $\Phi_{flat}$, the linear space ${\bf V}(\Phi_{flat})$
is a subset of $\cal Z$ if and only if $\Phi_{flat}$
is a solution of the SAT formula. If a set $\Phi_{sub}\subseteq\Phi$ is a solution
of the SAT formula, then the flat $\text{cl}(\Phi_{sub})$ is also a solution
of the formula. Thus, satisfiability implies that some flats are solutions
of the formula. This does not mean that satisfiability implies that
$\cal Z$ is non-empty. Indeed, if the dimension of ${\bf V}(\Phi_{sub})$ is
negative for every solution $\Phi_{sub}$, then the set $\Phi$ is the only
flat that solves the formula.
We are interested in the isolated points of $\cal Z$.
A point $p\in{\cal Z}$
is \emph{isolated} if there is a zero-dimensional SAT solution $\Phi_{flat}$
such that $p\in{\bf V}(\Phi_{flat})$ and no flat $\Phi_{flat}'\subset\Phi_{flat}$ is
a solution of the Boolean formula. We denote by ${\cal Z}_P$ the subset of $\cal Z$
containing the isolated points. Since the number $N_P$ of isolated points is completely
determined by the SAT formula and the linear matroid, these two objects carry the
most relevant information. Once they are given, the linear functions $\hat a_i$
still have some free coefficients.
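The correspondence between subsets of $\Phi$ and solutions of the monotone formula~(\ref{SAT}) can be checked mechanically. The toy script below (the clause set is invented, and the matroid is taken to be free, so that the closure operator only enlarges a subset) tests the hitting-set reading of the clauses and the fact that enlarging a solution, as the closure does, preserves satisfaction.

```python
from itertools import chain, combinations

# Monotone CNF as in Eq. (SAT): each clause lists the indices i of the
# variables a_i occurring in it. The clause set is invented for the
# example.
clauses = [(1, 2), (2, 3), (1, 3)]

def satisfies(subset):
    """A subset of Phi (indices of vanishing forms) solves the formula
    iff it hits every clause."""
    return all(any(i in subset for i in clause) for clause in clauses)

universe = (1, 2, 3)
subsets = chain.from_iterable(combinations(universe, r) for r in range(4))
sols = [set(s) for s in subsets if satisfies(set(s))]

# Enlarging a solution preserves satisfaction, which is why the closure
# cl(Phi_sub) of a solution is again a solution.
assert {1, 2} in sols and {1, 2, 3} in sols
assert {1} not in sols  # misses the clause (2, 3)
```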
{\bf Remark}. In general, we do not rule out sets $\cal Z$
containing subspaces with positive dimension, however these subspaces are irrelevant
for the complexity analysis of the factoring algorithm. For example, if subspaces of
dimension $d_s<M$
give a dominant contribution to factorization, then we can generally eliminate $d_s$ out of
the $M$ parameters by setting them equal to constants,
so that the subspaces are reduced to points. Furthermore,
subspaces with dimension greater than $M-1$ are not in the parametrizable variety $\cal V$,
whose dimension is $M$. Nor can the overall contribution of all the subspaces with positive
dimension provide a significant change in the asymptotic complexity, up to
parameter deletions. Thus, only isolated points of
$\cal Z$ are counted without loss of generality.
\subsection{Boolean satisfiability and algebraic-set membership}
As we said previously,
the Boolean formula does not encode all the information about the number of isolated
points in ${\cal Z}$, which also depends on the independence relations among the vectors
$\vec a_i$, specified by the matroid. A better link between the SAT problem
and the membership to ${\cal Z}$ can be obtained if we consider sets $\Phi$
with cardinality equal to $2 n$ and interpret half of the
functions in $\Phi$ as negations of the others. Let us
denote by $\hat a_{0,1},\dots,\hat a_{0,n}$ and $\hat a_{1,1},\dots,\hat a_{1,n}$
the $2n$ linear functions of $\Phi$. For general functions, we have the following.
\begin{property}
\label{indep_functions}
The set of vectors $\{\vec a_{s_1,1},\dots,\vec a_{s_n,n},\vec a_{1-s_k,k}\}$
is independent for every string $\vec s=(s_1,\dots,s_n)\in\{0,1\}^n$ and every
$k\in\{1,\dots,n\}$.
\end{property}
This property generally holds if the functions are picked at random.
Let us assume that $\Phi$ satisfies Property~\ref{indep_functions}. This
implies that $\{\hat a_{s_1,1},\dots,\hat a_{s_n,n}\}$ are linearly independent
and equal to zero at one point $\vec x_{\vec s}$. Furthermore,
Property~\ref{indep_functions} also implies that different strings $\vec s$ are
associated with different points $\vec x_{\vec s}$.
\begin{lemma}
\label{lemma_indep}
Let $\{\vec a_{0,1},\dots,\vec a_{0,n},\vec a_{1,1},\dots,\vec a_{1,n}\}$ be a set
of $2n$ vectors satisfying Property~\ref{indep_functions}. Let $\vec x_{\vec s}$
be the solution of the equations $\hat a_{s_1,1}=\dots=\hat a_{s_n,n}=0$.
If $\vec s\ne\vec r$, then $\vec x_{\vec s}\ne\vec x_{\vec r}$.
\end{lemma}
{\it Proof}. Let us assume that $\vec x_{\vec s}=\vec x_{\vec r}$ with $\vec s\ne\vec r$.
There is
an integer $k\in\{1,\dots,n\}$ such that $s_k\ne r_k$. Thus, the $n+1$
vectors $\{\vec a_{s_1,1},\dots,\vec a_{s_n,n},\vec a_{1-s_k,k}\}$
are all orthogonal to the nonzero vector $\vec x_{\vec s}$. Since the dimension of the
vector space is $n+1$, these $n+1$ vectors are linearly dependent, in contradiction
with the hypotheses. $\square$
\newline
Now, let us define the set ${\cal Z}$ as the zero locus of the ideal generators
\be\label{generators2}
\begin{array}{l}
\hat a_{0,i} \hat a_{1,i} \;\;\;\forall i\in\{1,\dots,n\} \\
\hat a_{s_1,i_1}\hat a_{s_2,i_2} \;\;\;\forall (s_1,i_1;s_2,i_2)\in\Gamma_2 \\
\dots \\
\hat a_{s_1,i_1}\hat a_{s_2,i_2}\dots\hat a_{s_n,i_n} \;\;\;\forall (s_1,i_1;s_2,i_2,\dots,s_n,i_n)\in\Gamma_n.
\end{array}
\ee
The first $n$ generators provide an interpretation of $\hat a_{1,i}$ as the negation
of $\hat a_{0,i}$, as a consequence of Property~\ref{indep_functions}. The $i$-th generator
implies that $(a_{0,i},a_{1,i})$ is equal to $(\true,\false)$, $(\false,\true)$
or $(\true,\true)$. However, the last case is forbidden by Property~\ref{indep_functions}.
Assume that $(a_{0,i},a_{1,i})$ is equal to $(\true,\true)$ for some $i$.
Then, there would be $n+1$ functions $\hat a_{s_1,1},\dots,\hat a_{s_n,n},\hat a_{1-s_i,i}$
equal to zero, which is impossible since they are independent. Thus, the algebraic
set defined by the first $n$ generators contains $2^n$ distinct points, as implied
by Lemma~\ref{lemma_indep}, which are associated with all
the possible states taken by the logical variables. The remaining
generators set further constraints on these variables and define a Boolean formula
in conjunctive normal form. With this construction there is a one-to-one correspondence
between the points of the algebraic set ${\cal Z}$ and the solutions of the Boolean
formula.
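The one-to-one correspondence between points of ${\cal Z}$ and SAT solutions can be illustrated with the simplest representation satisfying the negation pattern, $\hat a_{0,i}=x_i$ and $\hat a_{1,i}=x_i-1$ (an assumed concrete choice, not required by the construction): the first $n$ generators force $x_i\in\{0,1\}$, and an extra generator encodes a clause.

```python
from itertools import product

n = 3
# Assumed concrete representation obeying the negation pattern:
# a_{0,i} = x_i and a_{1,i} = x_i - 1, so the generator
# a_{0,i}*a_{1,i} = 0 forces x_i into {0, 1}.
# The extra generator a_{0,1}*a_{0,2} encodes the clause a_1 OR a_2,
# i.e. x_1 = 0 or x_2 = 0.
points = [x for x in product((0, 1), repeat=n) if x[0] * x[1] == 0]

# SAT side: a_i is True exactly when a_{0,i} vanishes (x_i = 0).
sat_solutions = [s for s in product((True, False), repeat=n)
                 if s[0] or s[1]]

# One-to-one correspondence between points and SAT solutions.
assert len(points) == len(sat_solutions) == 6
```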
There is a generalization of the generators~(\ref{generators2}) that allows us
to weaken Property~\ref{indep_functions} while retaining the one-to-one correspondence.
Let $R_1,\dots,R_m$ be $m$ disjoint non-empty sets such that
$\cup_{k=1}^m R_k=\{1,\dots,n\}$.
\begin{property}
\label{indep_functions2}
The set of vectors $\cup_{k=1}^m\{\vec a_{s_k,i}|i\in R_k\}\equiv A_{\vec s}$ is
independent for every $\vec s=(s_1,\dots,s_m)\in\{0,1\}^m$. Furthermore,
every vector $\vec a_{s,i}\notin A_{\vec s}$ is not in $\text{span}(A_{\vec s})$,
with $s\in\{0,1\}$ and $i\in\{1,\dots,n\}$.
\end{property}
\begin{lemma}
\label{lemma_indep2}
Let $\{\vec a_{0,1},\dots,\vec a_{0,n},\vec a_{1,1},\dots,\vec a_{1,n}\}$ be a set
of $2n$ vectors satisfying Property~\ref{indep_functions2}. Let $\vec x_{\vec s}$
be the solution of the equations
$$
\hat a_{s_k,i}=0 \;\;\; i\in R_k, k\in\{1,\dots,m\}
$$
for every $\vec s\in\{0,1\}^m$.
If $\vec s\ne\vec r$, then $\vec x_{\vec s}\ne\vec x_{\vec r}$.
\end{lemma}
The generators~(\ref{generators2}) are generalized by replacing the first line
with
\be\label{block_gen}
\hat a_{0,i} \hat a_{1,j}, \;\;\; (i,j)\in \cup_{k=1}^m (R_k \times R_k).
\ee
Provided that Property~\ref{indep_functions2} holds, there is a one-to-one correspondence
between the points in the algebraic set and the solutions of a SAT formula built according
to the following interpretation.
Each set of functions $\{\hat a_{0,i}|i\in R_k\}\equiv a_k$ is interpreted as a Boolean variable,
which is true if the functions in it are equal to zero. The set $\{\hat a_{1,i}|i\in R_k\}$
is interpreted as the negation of $\{\hat a_{0,i}|i\in R_k\}$. The SAT formula is built in the
obvious way from the set of generators. For example, the generator $\hat a_{0,i}\hat a_{0,j}$ with
$i\in R_{1}$ and $j\in R_{2}$ induces the clause $a_{1}\lor a_{2}$. Different generators
can induce the same clause. Since the total number of solutions depends only on the
SAT formula, it is convenient to take the maximal set of generators compatible with
a given formula. That is, if $a_{1}\lor a_{2}$ is a clause, then $\hat a_{0,i}\hat a_{0,j}$
is taken as a generator for every $i\in R_1$ and $j\in R_2$.
\subsection{$3$SAT-like generators}
\label{sec_3SAT}
SAT problems have clauses with an arbitrarily large number of literals. Special cases
are $2$SAT and $3$SAT, in which clauses have at most $2$ or $3$ literals. It is known
that every SAT problem can be converted to a $3$SAT one by increasing the number of
variables and replacing a clause with a certain number of smaller clauses containing
the new variables. For example, the clause $a\lor b\lor c\lor d$ can be replaced by
$a\lor b\lor x$ and $c\lor d\lor (\lnot x)$. An assignment satisfies the first clause
if and only if the other two clauses are satisfied for some $x$. An identical reduction
can be performed also on the generators~(\ref{generators}). For example, a generator in $I$
of the form $\hat a_1\hat a_2\hat a_3\hat a_4\equiv G_0$ can be replaced by
$\hat a_1\hat a_2 y\equiv G_1$, $\hat a_3\hat a_4(1-y)\equiv G_2$
and $y(1-y)\equiv G_3$, where $y$ is an additional variable. Also in this
case, $G_0$ is equal to zero if and only if $G_1$ and $G_2$
are zero for some $y\in\{0,1\}$. Furthermore, the new extended ideal contains the
old one. Indeed, we have that $G_0=\hat a_3\hat a_4 G_1+\hat a_1\hat a_2 G_2$.
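As a sanity check, this reduction can be verified numerically: taking $G_1=\hat a_1\hat a_2 y$, $G_2=\hat a_3\hat a_4(1-y)$ and $G_3=y(1-y)$, the short script below confirms both the ideal identity $G_0=\hat a_3\hat a_4 G_1+\hat a_1\hat a_2 G_2$ and the equivalence of the zero conditions, with integer samples standing in for the values of the linear forms.

```python
from random import randint

# With G1 = a1*a2*y, G2 = a3*a4*(1 - y) and G3 = y*(1 - y), the original
# generator G0 = a1*a2*a3*a4 lies in the extended ideal:
#   G0 = a3*a4*G1 + a1*a2*G2 .
def check_identity(trials=200):
    for _ in range(trials):
        a1, a2, a3, a4, y = (randint(-50, 50) for _ in range(5))
        G0 = a1 * a2 * a3 * a4
        G1 = a1 * a2 * y
        G2 = a3 * a4 * (1 - y)
        if G0 != a3 * a4 * G1 + a1 * a2 * G2:
            return False
    return True

assert check_identity()

# G0 vanishes iff G1 and G2 vanish for some y in {0, 1}, mirroring the
# clause splitting (a or b or x) and (c or d or not x).
assert all((a * b * c * d == 0) ==
           any(a * b * y == 0 and c * d * (1 - y) == 0 for y in (0, 1))
           for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1))
```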
Note that all the polynomials in the ideal $I$ are independent of the
additional variables used in the reduction. Thus, if we build the
polynomials~(\ref{poly_affine}) by using $3$SAT-like generators, then
all these polynomials may be independent of some variables.
Thus, we can consider generators in a $3$SAT form,
\be\label{generators_3SAT}
\begin{array}{l}
\hat a_{i_1}\hat a_{i_2} \;\;\;\forall (i_1,i_2)\in\Gamma_2 \\
\hat a_{i_1}\hat a_{i_2}\hat a_{i_{3}} \;\;\;\forall (i_1,i_2,i_3)\in\Gamma_{3}.
\end{array}
\ee
There is no loss of generality, provided that all the polynomials $P_k$ are
allowed to be independent of the $n_I$ variables introduced by the reduction.
The number of isolated points satisfies the inequality
\be
\label{bound_N0_n}
N_P\le 3^n.
\ee
The actual number can be considerably smaller, depending on the
matroid and the number of clauses defining the Boolean formula.
The bound is attained if ${\bar n}=3 n$, the generators have the
form $\hat a_i \hat b_i \hat c_i$ with $i\in\{1,\dots,n\}$, and the independent
sets of the matroid contain $n+1$ elements.
If there are only clauses with $2$ literals, then the bound is
\be
N_P\le 2^n,
\ee
which is attained if the generators have the form $\hat a_i \hat b_i$ with
$i\in\{1,\dots,n\}$.
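Both bounds are easy to confirm by brute force on small instances. The sketch below uses the concrete representations $\hat a_i=x_i$, $\hat b_i=x_i-1$, $\hat c_i=x_i-2$ (an illustrative choice of independent forms, not prescribed by the text) and counts the common zeros of the generators on a box containing them.

```python
from itertools import product

# Illustrative representations attaining the bounds: a_i = x_i,
# b_i = x_i - 1, c_i = x_i - 2.
def count_zeros_3sat(n):
    """Common zeros of the generators a_i*b_i*c_i: each x_i in {0,1,2}."""
    gens = [lambda x, i=i: x[i] * (x[i] - 1) * (x[i] - 2) for i in range(n)]
    return sum(all(g(x) == 0 for g in gens)
               for x in product(range(-1, 4), repeat=n))

def count_zeros_2sat(n):
    """Common zeros of the generators a_i*b_i: each x_i in {0,1}."""
    gens = [lambda x, i=i: x[i] * (x[i] - 1) for i in range(n)]
    return sum(all(g(x) == 0 for g in gens)
               for x in product(range(-1, 3), repeat=n))

assert count_zeros_3sat(3) == 3 ** 3  # N_P = 3^n
assert count_zeros_2sat(3) == 2 ** 3  # N_P = 2^n
```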
A consequence of these constraints is that the number $M$ of parameters
must scale sublinearly in $n$,
\be
\label{bound_M_n}
M\le K n^\beta, \;\;\;\; 0\le \beta<1
\ee
for some $K>0$.
\section{Building up the parametrizable variety and the hyperplane}
\label{build_up}
In this section, we put together the tools introduced previously to tackle our
problem of building the rational function ${\cal R}$ with the desired properties
of being computationally simple and having a sufficiently large set of zeros.
This problem has been reduced to the search for computationally simple
polynomials $P_k$ of the form~(\ref{poly_affine}) with a number of
common rational zeros growing sufficiently fast with the space dimension.
To build these polynomials, we first choose a set of generators of the
form~(\ref{generators}) such that the associated algebraic set $\cal Z$ has a set
of $N_P$ points. Then, we write the polynomials $P_k$ as elements of the ideal
associated with $\cal Z$. Finally, we impose that the polynomials $P_k$ have the
form of Eqs.~(\ref{poly_affine}).
\begin{procedure}
\label{procedure}
Building up of a parametrizable variety $\cal V$ with $M$ parameters
and $N_P$ intersection points.
\begin{enumerate}
\item Take a set of ${\bar n}$ unknown non-homogeneous linear functions in $n$ variables
with ${\bar n}>n$, say $\hat a_1,\dots,\hat a_{{\bar n}}$. Additionally,
specify which sets of vectors are linearly independent. In other words,
a linear matroid with ${\bar n}$ elements is defined.
\item Choose an ideal $I$ with generators of the form~(\ref{generators_3SAT}) such that
the associated algebraic set $\cal Z$ contains a subset ${\cal Z}_P$ of
$N_P$ isolated points over some given number field.
\item Set the polynomials $P_s$ equal to elements of the ideal $I$
with $s\in\{0,\dots,n-M\}$. That is,
\begin{equation}
\label{poly_in_ideal}
P_s(\vec x)=\sum_{(i,j)\in \Gamma_2} C_{s,i,j}(\vec x) \hat a_i \hat a_j+
\sum_{(i,j,k)\in \Gamma_3} D_{s,i,j,k}(\vec x) \hat a_i \hat a_j\hat a_k.
\end{equation}
The polynomials $P_s$ with $s\in\{1,\dots,n-M\}$ define an algebraic set $\cal A$.
The polynomial $P_0$ defines a hyperplane $\cal H$. The number of parameters $M$
and the polynomial coefficients $C_{s,i,j}(\vec x)$ and $D_{s,i,j,k}(\vec x)$ are also unknown.
\item Search for values of the coefficients such that there is a parametrizable branch $\cal V$
in $\cal A$
with a number of parameters as small as possible. All the polynomials $P_s$ with
$s\in\{0,\dots,n-M\}$ are possibly independent of some subset of variables (see Sec.~\ref{sec_3SAT}).
The polynomials ${\cal D}_k$, as defined in Eq.~(\ref{poly_affine}), must be different
from zero on the set ${\cal Z}_P$.
\end{enumerate}
\end{procedure}
More explicitly, the last step leads us to the following.
\begin{problem}
\label{problem1}
Given the sets $\Gamma_2$ and $\Gamma_3$, and polynomials of the
form~(\ref{poly_in_ideal}), find linear functions
$\hat a_1,\dots,\hat a_{{\bar n}}$ and coefficients $C_{s,i,j}(\vec x)$,
$D_{s,i,j,k}(\vec x)$ such that
\be
\label{gauss_constrs}
\begin{array}{l}
\frac{\partial P_s}{\partial x_k}=0, \;\;\;\; 1\le k < s\le n-M,
\vspace{2mm}
\\
\frac{\partial^2 P_s}{\partial x_s^2}=0, \;\;\;\; 1\le s\le n-M,
\vspace{2mm}
\\
\vec x\in{\cal Z}_P \Rightarrow \frac{\partial P_s}{\partial x_s}\equiv
{\cal D}_s(x_{s+1},\dots,x_n)\ne 0,
\end{array}
\ee
under the constraint that $(\hat a_1,\dots,\hat a_{{\bar n}})$ is the representation
of a given matroid.
\end{problem}
{\bf Remark}.
If the algebraic set associated with the ideal $I$ is zero-dimensional,
this problem always has a solution for any $M$, since a rational univariate representation
always exists (see introduction). Essentially, the task is to find ideals such
that there is a solution with the coefficients $C_{s,i,j}(\vec x)$ and
$D_{s,i,j,k}(\vec x)$ as simple as possible, so that their computation is
efficient, given $\vec x$.
Let us recall that the constraints~(\ref{gauss_constrs}) are
invariant under the transformations~(\ref{poly_replacement},\ref{invar_trans}).
All the polynomials are possibly independent of a subset of $n_I$
variables, say $\{x_{n-n_I+1},\dots,x_{n}\}$,
\begin{equation}
\frac{\partial P_s}{\partial x_k}=0, \;\;\;\;
\left\{
\begin{array}{l}
s\in\{0,\dots,n-M\} \\
k\in\{n-n_I+1,\dots,n\}
\end{array}\right.
\end{equation}
These $n_I$ variables can be set equal to constants,
so that the actual number of significant parameters is $M-n_I$.
The input of Problem~\ref{problem1} is given by a 3SAT formula
of form~(\ref{generators_3SAT}) and a linear matroid.
\begin{definition}
A 3SAT formula of form~(\ref{generators_3SAT}) together with a linear matroid with
$\bar n$ elements is called a \emph{model}.
\end{definition}
\noindent
In the literature, the term `model' is occasionally used with a different
meaning, referring to a solution of a SAT formula.
Problem~\ref{problem1} in its general form is quite intricate. First,
it requires the definition of a linear matroid and a SAT formula with an
exponentially large number of solutions associated with isolated points.
Whereas it is easy to find examples of matroids and Boolean formulas
with this feature, it is not generally simple to characterize models
with an exponentially large number of isolated points.
Second, Eqs.~(\ref{gauss_constrs}) lead to a large number of polynomial
equations in the unknown coefficients. Lemma~\ref{lemma_dim_red} can help
to reduce the search space by dimension reduction.
This will be shown in Sec.~\ref{sec_reduc_2SAT} with a simple example.
A good strategy is to start with simple models and low-degree coefficients
in Eq.~(\ref{poly_in_ideal}). In particular, we can take the coefficients
constant, as done later in Sec.~\ref{sec_quadr_poly}. This restriction
does not guarantee that Problem~\ref{problem1} has a solution for
a sufficiently small number of parameters $M$, but it can provide some
hints on how to proceed.
\subsection{Required number of rational points vs space dimension}
Let us assume that the computational complexity ${\bf C}_0$ of $\cal R$
is polynomial in the space dimension $n$, that is,
\begin{equation}
\label{ident_dim}
{\bf C}_0\sim n^{\alpha_0}.
\end{equation}
The factoring algorithm has polynomial complexity if
\be
\left.
\begin{array}{l}
K_1 n^\alpha\le \log N_P\le K_2 n \;\;\; 0<\alpha\le 1 \\
M\le (\log N_P)^\beta \;\;\; \beta<1
\end{array}
\right\} \;\;\; (\text{polynomial complexity})
\ee
for $n$ sufficiently large, where $K_1$ is some positive
constant and $K_2=\log 3$. The upper bound is given
by Eq.~(\ref{bound_N0_n}). The algorithm has subexponential
complexity
${\bf C}\sim e^{b (\log N_P)^{\alpha}}$ with $0<\alpha<1$ if
\be
\left.
\begin{array}{r}
\log N_P\sim (\log n)^{1/\alpha}\;\;\; 0<\alpha<1, \\
M\sim (\log n)^\frac{\beta}{\alpha}\;\;\; 0\le\beta<1-\alpha.
\end{array}
\right\} \;\;\;\; (\text{subexponential complexity})
\ee
The upper bound on $\beta$ comes from Lemma~\ref{litmus}.
Thus, the number of rational points is required to
scale much less than exponentially for getting polynomial
or subexponential factoring complexity. Note that a slower
increase of $N_P$ induces stricter bounds on $M$ in terms
of $n$.
\subsection{Reduction of models}
\label{sec_reduc_2SAT}
In this subsection, we describe an example of model reduction. The model
reduction is based on Lemma~\ref{lemma_dim_red} and can be
useful for simplifying Problem~\ref{problem1}.
The task is to reduce a class of models associated with an efficient
factoring algorithm to a class of simpler models leading to another
efficient algorithm, so that it is sufficient to search for solutions
of Problem~\ref{problem1} over the latter, smaller class.
In our example, the matroid contains $2n$ elements and is
represented by the functions
$\hat a_{0,1},\dots,\hat a_{0,n},\hat a_{1,1},\dots,\hat a_{1,n}$
satisfying Property~\ref{indep_functions}.
\newline
{\bf Model A.}
\newline
Matroid with representation
$(\hat a_{0,1},\dots,\hat a_{0,n},\hat a_{1,1},\dots,\hat a_{1,n})$
satisfying Property~\ref{indep_functions}.
\newline
Generators:
\be\label{gen_Scheme_A}
\begin{array}{l}
\hat a_{0,i} \hat a_{1,i} \;\;\; i\in \{1,\dots n\} \\
\hat a_{0,i} \hat a_{0,j} \;\;\; (i,j)\in \Gamma.
\end{array}
\ee
\begin{definition}
A diagonal model is defined as Model A with $\Gamma=\emptyset$.
\end{definition}
Clearly, a diagonal model defines an algebraic set with $2^n$ isolated
points. Each point satisfies the linear equations
\be
\hat a_{s_i,i}=0, \;\;\; i\in \{1,\dots,n\}
\ee
for some $(s_1,\dots,s_n)\in\{0,1\}^n$.
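As a concrete sanity check (outside the formal development), the $2^n$ common zeros of diagonal generators can be enumerated for a small instance with sympy. The representation $\hat a_{0,i}=x_i$, $\hat a_{1,i}=x_i-1$ is a hypothetical choice made only for illustration:

```python
import sympy as sp

n = 3
x = sp.symbols(f"x1:{n + 1}")  # affine coordinates (x_{n+1} = 1)

# Diagonal generators G_i = a_{0,i} a_{1,i}; here we pick the
# illustrative representation a_{0,i} = x_i, a_{1,i} = x_i - 1
gens = [xi * (xi - 1) for xi in x]

# Common zeros of the generators: all of {0,1}^n
sols = sp.solve(gens, list(x), dict=True)
assert len(sols) == 2 ** n
assert all(v in (0, 1) for s in sols for v in s.values())
print(len(sols), "isolated points")
```

Each solution picks, for every index $i$, which of the two linear factors vanishes, matching the $(s_1,\dots,s_n)\in\{0,1\}^n$ labeling above.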
If there is an algorithm with polynomial complexity
and associated with Model~A, then it is possible to prove
that there is
another algorithm with polynomial complexity and
associated with a diagonal model. More generally,
this model reduction leads to a subexponential factoring
algorithm, provided that the parent algorithm outperforms
the quadratic sieve algorithm. If the parent algorithm outperforms
the general number field sieve, then the reduced algorithm outperforms
the quadratic sieve. Thus, if we are interested in finding a competitive
algorithm from Model~A, we need to search only the space of reduced models.
If there is no algorithm outperforming the quadratic sieve with $\Gamma=\emptyset$,
then there is no algorithm outperforming the general number field sieve
for $\Gamma\ne \emptyset$.
\begin{theorem}
If there is a factoring algorithm with subexponential asymptotic complexity
$e^{a (\log p)^\gamma}$ and associated with Model~A, then there is another algorithm
associated with the diagonal model with computational complexity upper-bounded by
the function $e^{\bar a (\log p)^\frac{\gamma}{1-\gamma}}$ for some $\bar a>0$.
In particular, if the first algorithm has polynomial complexity,
also the latter has polynomial complexity.
\end{theorem}
{\it Proof}.
Let us assume that the asymptotic computational complexity of the
parent algorithm is $e^{a (\log p)^\gamma}$. For every $N_P$,
there is a Model~A with $N_P$ isolated
points and generating a rational function ${\cal R}$ with complexity
${\bf C}_0(\xi)$ scaling as $e^{a (\log N_P)^\alpha}$ and a number of
parameters $M$ scaling as $(\log N_P)^\beta$, where $\gamma=\alpha/(1-\beta)$
and $0\le\beta<1$ (see~\ref{sec_complex}).
We denote by $\cal Z$ the set of isolated points.
Since the complexity ${\bf C}_0$ is lower-bounded
by a linear function of the dimension $n$, we have
\be\label{ineq_n_N0}
\log n\le a (\log N_P)^\alpha+O(1).
\ee
Let $\hat a_{0,1},\dots,\hat a_{0,n},\hat a_{1,1},\dots,\hat a_{1,n}$
be the set of linear functions representing the matroid and satisfying
Property~\ref{indep_functions}. The ideal generators are given by
Eq.~(\ref{gen_Scheme_A}).
Let $m$ be the maximum number of functions in $\{\hat a_{0,i},\dots,\hat a_{0,n}\}$
which are simultaneously different from zero for $\vec x\in \cal Z$. Thus,
we have that
\be\label{bound_N0}
N_P\le \sum_{j=0}^m\frac{n!}{(n-j)! j!}\le \left(1+n^{-1}\right) {n}^{m} .
\ee
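The combinatorial bound above can be spot-checked numerically; writing $(1+n^{-1})\,n^m$ exactly as $(n+1)\,n^{m-1}$ keeps the arithmetic in integers. A minimal sketch, not part of the proof:

```python
from math import comb

def lhs(n, m):
    """Sum of binomial coefficients C(n,0) + ... + C(n,m)."""
    return sum(comb(n, j) for j in range(m + 1))

def rhs(n, m):
    """(1 + 1/n) n^m written exactly as (n+1) n^(m-1)."""
    return (n + 1) * n ** (m - 1)

# Check the bound over a small grid of (n, m) with 1 <= m <= n;
# equality holds at m = 1, where both sides equal n + 1.
assert all(lhs(n, m) <= rhs(n, m) for n in range(2, 40) for m in range(1, n + 1))
print("bound verified on grid")
```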
There is a point $\vec x_2$ in $\cal Z$ such that
$\hat a_{0,1},\dots,\hat a_{0,m}$ are different from zero and
$\hat a_{0,m+1}=\hat a_{0,m+2}=\dots=\hat a_{0,n}=0$,
up to permutations of the indices.
Let us set these last $n-m$ functions equal to zero by dimension reduction.
The new set of generators is associated with another factoring algorithm
(Lemma~\ref{lemma_dim_red}) and contains the clauses of the form
\be
\begin{array}{l}
\hat a_{0,i} \hat a_{1,i} \;\;\; i\in \{1,\dots,m\} \\
\hat a_{0,i} \hat a_{0,j} \;\;\; (i,j)\in \bar\Gamma\subseteq
\{1,\dots,m\} \times \{1,\dots,m\}.
\end{array}
\ee
Since there is a point $\vec x_2$ such that $\hat a_{0,i}\ne 0$ for $i\in\{1,\dots,m\}$,
the set $\bar\Gamma$ turns out to be empty, so that the reduced model is diagonal.
The number of common zeros of the generators, say $N_1$, is equal to $2^m$. Using
Ineq.~(\ref{bound_N0}), we have that
\be
(\log_2 N_1)(\log n)+\log\left(1+n^{-1}\right) \ge \log N_P.
\ee
Ineq.~(\ref{ineq_n_N0}) and this last inequality imply that
\be
\log N_P \le K (\log N_1)^{\frac{1}{1-\alpha}}
\ee
for some constant $K$. Since the computational complexity, say $\bar {\bf C}_0$, of the rational
function $\cal R$ associated with the reduced model is not greater than ${\bf C}_0$,
which scales as $e^{a (\log N_P)^\alpha}$,
we have that
\be
\bar {\bf C}_0\le e^{\bar a (\log N_1)^\frac{\alpha}{1-\alpha}},
\ee
for some constant $\bar a$.
Similarly, since the number of parameters, say $\bar M$, of the reduced rational function
is not greater than $M$, we have that
\be
\bar M\le \bar K (\log N_1)^\frac{\beta}{1-\alpha}
\ee
for some constant $\bar K$. Thus, the resulting factoring algorithm has a computational
complexity upper-bounded by
$$
e^{\bar a(\log p)^\frac{\alpha}{1-\alpha-\beta} }=
e^{\bar a(\log p)^\frac{\gamma}{1-\gamma} }
$$
up to a constant factor.
The last statement of the theorem is proved in a similar fashion.
$\square$
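The exponent identity used in the last step, $\alpha/(1-\alpha-\beta)=\gamma/(1-\gamma)$ with $\gamma=\alpha/(1-\beta)$, can be checked symbolically; a quick sketch, not part of the proof:

```python
import sympy as sp

alpha, beta = sp.symbols("alpha beta", positive=True)
gamma = alpha / (1 - beta)  # definition used in the proof

# Check that alpha/(1 - alpha - beta) equals gamma/(1 - gamma)
diff = sp.simplify(alpha / (1 - alpha - beta) - gamma / (1 - gamma))
assert diff == 0
print("exponent identity verified")
```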
The diagonal model with generators
\be\label{simple_gene}
G_i=\hat a_{0,i} \hat a_{1,i}, \;\;\;\; \forall i\in\{1,\dots,n\}
\ee
provides the simplest example of polynomials with an exponentially large number of
common zeros. The algebraic set ${\cal Z}={\cal Z}_P$ contains $2^n$ points,
which are distinct because of Property~\ref{indep_functions}. This guarantees that
the generated ideal is radical. Thus, Hilbert's Nullstellensatz
implies that every polynomial which is zero in ${\cal Z}$
can be written as $\sum_i F_i(\vec x) G_i(\vec x)$, where $F_1,\dots,F_n$
are polynomials (recall that $x_{n+1}=1$).
We impose that the polynomials $P_0,\dots,P_{n-M}$ are in the ideal
generated by $G_1,\dots,G_n$, that is,
\be
P_k(\vec x)=\sum_i C_{k,i}(\vec x) \hat a_{0,i} \hat a_{1,i} \;\;\;
\forall k\in\{0,\dots,n-M\}.
\ee
As there is no particular requirement on $P_0$, we can just set $C_{0,i}(\vec x)$ equal
to constants. In particular, we can take $P_0=\hat a_{0,1}\hat a_{1,1}$.
In this case, the unknown variables of Problem~\ref{problem1} are the
polynomials $C_{k,i}(\vec x)$ and
the linear equations $\hat a_{s,k}$ under the constraints of Property~\ref{indep_functions}.
In the following section we tackle this problem with $C_{k,i}(\vec x)$
constant.
\section{Quadratic polynomials}
\label{sec_quadr_poly}
In this section, we illustrate the procedure described previously by
considering the special case of $n-M+1$ quadratic polynomials
in the ideal $I$ generated by the polynomials~(\ref{simple_gene}).
Namely, we take the polynomials $P_l$ of the form
\begin{equation}
\label{quad_poly}
P_l(\vec x)=\sum_{i=1}^n c_{l,i}\hat a_{0,i}\hat a_{1,i}, \;\;\;\;
l\in\{0,\dots,n-M\},
\end{equation}
where $c_{l,i}$ are rational numbers and the linear functions $\hat a_{s,i}$
satisfy Property~\ref{indep_functions}. Thus, there are $2^n$ common rational
zeros of the $n-M+1$ polynomials, which are also the zeros of the
generators~(\ref{simple_gene}).
Each rational point is associated
with a vector $\vec s\in\{0,1\}^n$ so that the linear equations
$\vec a_{s_1,1}\cdot\vec x=0,\dots,\vec a_{s_n,n}\cdot\vec x=0$ are
satisfied.
First, we consider the case with one parameter ($M=1$). We also assume that
all the $2^n$ rational points are in the parametrizable variety.
Starting
from these assumptions, we end up building a variety $\cal V$ with a number
$M$ of parameters equal to $n/2-1$
for $n$ even and $n\ge4$. Furthermore, we prove that there is no solution
with $M=1$ if $n>4$. We give a numerical example for $n=4$, which
leads to a rational function $\cal R$ with $16$ zeros.
Then we build a parametrizable variety with a number of parameters equal to
$(n-1)/3$. Thus, the minimal number of parameters lies between $2$
and $(n-1)/3$ for the considered model with the polynomials of the
form~(\ref{quad_poly}).
\subsection{One parameter? ($M=1$)}
Given polynomials~(\ref{quad_poly}) and vectors $\vec a_{s,i}$ satisfying
Property~(\ref{indep_functions}), we search for a solution of Problem~\ref{problem1}
under the assumption $M=1$.
Let us first introduce some notations and definitions.
We define the $(n-1)\times n$ matrices
\begin{equation}
\label{def_matr_M}
{\bf M}^{\vec s}\equiv
\left(
\begin{array}{ccc}
A_{1,1}^{(s_1)} & \dots & A_{1,n}^{(s_n)} \\
\vdots & \ddots & \vdots \\
A_{n-1,1}^{(s_1)} & \dots & A_{n-1,n}^{(s_n)}
\end{array}
\right),
\end{equation}
where
\begin{equation}
A_{k,i}^{(s)}\equiv\frac{\partial\hat a_{s,i}}{\partial x_k}.
\end{equation}
The square submatrix of ${\bf M}^{\vec s}$ obtained by deleting the $j$-th column
is denoted by ${\bf M}_j^{\vec s}$, that is,
\begin{equation}
{\bf M}_j^{\vec s}=
\left(
\begin{array}{cccccc}
A_{1,1}^{(s_1)} & \dots & A_{1,j-1}^{(s_{j-1})} & A_{1,j+1}^{(s_{j+1})} & \dots &A_{1,n}^{(s_n)} \\
\vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\
A_{n-1,1}^{(s_1)} & \dots & A_{n-1,j-1}^{(s_{j-1})} & A_{n-1,j+1}^{(s_{j+1})} & \dots &A_{n-1,n}^{(s_n)}
\end{array}
\right).
\end{equation}
The vectors $\vec a_{0,i}$ and $\vec a_{1,i}$ are also briefly denoted by $\vec a_i$ and
$\vec b_i$, respectively. Similarly, we also use the symbols $A_{k,i}$ and $B_{k,i}$
for the derivatives $A_{k,i}^{(0)}$ and $A_{k,i}^{(1)}$.
Problem~\ref{problem1} takes the specific form
\begin{problem}
\label{problem2}
Find coefficients $c_{l,i}$ and vectors $\vec a_{s,i}$ satisfying
Property~(\ref{indep_functions}) such that
\bey
\label{prob2_1}
\sum_{i=1}^n c_{l,i} \left( A_{k,i}\vec a_i+B_{k,i}\vec b_i\right)=0
\;\;\;\; 1\le k < l\le n-1,
\vspace{2mm}
\\
\label{prob2_2}
\sum_{i=1}^n c_{l,i} A_{l,i} B_{l,i}=0 \;\;\;\; 1\le l\le n-1,
\vspace{2mm}
\\
\label{prob2_3}
\vec x\in{\cal Z}_P \Rightarrow
\sum_{i=1}^n c_{l,i} \left( A_{l,i}\hat a_i+B_{l,i}\hat b_i\right)\ne 0
\;\;\;\; 1\le l\le n-1.
\eey
\end{problem}
Let us stress again that the problem is invariant with respect to
the transformations~(\ref{poly_replacement},\ref{invar_trans}), the latter
taking to the transformation
\be
A_{k,i}^{(s)}\rightarrow A_{k,i}^{(s)}+\sum_{l=1}^{k-1}\bar\eta_{k,l}A_{l,i}^{(s)}
\ee
of the derivatives.
We have the following.
\begin{lemma}
\label{lemma_inde_A}
For every $\vec s\in\{0,1\}^n$ and $j\in\{1,\dots,n\}$, the matrix ${\bf M}_j^{\vec s}$
has maximal rank, that is,
\begin{equation}
\det {\bf M}_j^{\vec s}\ne 0.
\end{equation}
\end{lemma}
{\it Proof}.
We prove the lemma by contradiction. Suppose that there are a $j\in\{1,\dots,n\}$,
an $l\in\{1,\dots,n-1\}$, and an $\vec s\in\{0,1\}^n$
such that the $l$-th row of ${\bf M}_j^{{\vec s}}$
is linearly dependent on the first $l-1$ rows. Thus,
there are coefficients $\lambda_1,\dots,\lambda_{l-1}$ such that
\begin{equation}
A_{l,i}^{(s_l)}+\sum_{k=1}^{l-1}\lambda_k A_{k,i}^{(s_k)}=0 \;\;\; \forall i\ne j.
\end{equation}
With a change of variables of the form of Eq.~(\ref{invar_trans}), this equation
can be rewritten in the form
\begin{equation}
A_{l,i}^{(s_l)}=0 \;\;\; \forall i\ne j.
\end{equation}
Up to permutations $\hat a_i\leftrightarrow \hat b_i$, we have
\begin{equation}
\label{lin_dep_lemma}
B_{l,i}=0 \;\;\; \forall i\ne j.
\end{equation}
From Eq.~(\ref{prob2_2}), we have
$$
\sum_{i=1}^n c_{l,i} A_{l,i} B_{l,i}=0.
$$
From this equation and Eq.~(\ref{lin_dep_lemma}), we get the
equation $c_{l,j} A_{l,j} B_{l,j}=0$, implying that
$c_{l,j} A_{l,j}=0$ or $c_{l,j} B_{l,j}=0$. Without loss of generality,
let us take
\be
\label{clne0}
c_{l,j} B_{l,j}=0.
\ee
Let $\vec x_0\in{\cal Z}_P$ be the vector orthogonal to $\vec b_1,\dots,\vec b_n$.
From Eq.~(\ref{prob2_3}) we have that
\be
c_{l,j} B_{l,j}(\vec a_j\cdot\vec x_0)\ne 0,
\ee
which is in contradiction with Eq.~(\ref{clne0}).
$\square$
\begin{corollary}
\label{corol_c}
The coefficients $c_{n-1,i}$ are different from zero for every
$i\in\{1,\dots,n\}$.
\end{corollary}
{\it Proof}.
Let us assume that the statement is false. Up to permutations, we have that
$c_{n-1,1}=0$. Lemma~\ref{lemma_inde_A} implies that there is an
integer $i_0\in\{2,\dots,n\}$ such that
$B_{n-1,i}=0$ for $i\notin\{1,i_0\}$, up to a transformation of the form of Eq.~(\ref{invar_trans}).
Thus,
\be
0=\sum_{i=1}^n c_{n-1,i}A_{n-1,i}B_{n-1,i}= c_{n-1,i_0} A_{n-1,i_0}B_{n-1,i_0},
\ee
the first equality coming from Eq.~(\ref{prob2_2}). Lemma~\ref{lemma_inde_A} also
implies that $A_{n-1,i_0}B_{n-1,i_0}\ne 0$.
Thus, on one hand, we have that $c_{n-1,i_0}=0$.
On the other hand, we have
\be
c_{n-1,i_0} B_{n-1,i_0}(\vec a_{i_0}\cdot\vec x_0)\ne 0
\ee
from Eqs.~(\ref{prob2_3}),
where $\vec x_0$ is the vector orthogonal to $\vec b_1,\dots,\vec b_n$.
Thus, we have a contradiction. $\square$
Let us denote by ${\bf M}^{\vec s}_{j_1,\dots,j_m}$
the submatrix of ${\bf M}^{\vec s}$ obtained by deleting the last $m-1$ rows and
the columns $j_1,\dots,j_m$.
Given the coefficient matrix
\be
{\bf c}\equiv
\left(
\begin{array}{ccc}
c_{0,1} & \dots & c_{0,n} \\
\vdots & \ddots & \vdots \\
c_{n-1,1} & \dots & c_{n-1,n}
\end{array}
\right),
\ee
let us define ${\bf c}_{j_1,\dots,j_m}$ as the $m\times m$ submatrix of
${\bf c}$ obtained by keeping the last $m$ rows and the columns
$j_1,\dots,j_m$.
Lemma~\ref{lemma_inde_A} and Corollary~\ref{corol_c} are generalized by
the following.
\begin{theorem}
\label{theorem_det}
For every $m\in\{1,\dots,n-1\}$, $\vec s\in\{0,1\}^n$, and $m$ distinct
integers $j_1,\dots,j_m\in\{1,\dots,n\}$,
the matrices ${\bf M}^{\vec s}_{j_1,\dots,j_m}$ and ${\bf c}_{j_1,\dots,j_m}$
have maximal rank, that is,
\bey
\label{det_reduc}
\det {\bf M}^{\vec s}_{j_1,\dots,j_m}\ne 0, \\
\label{det_reduc2}
\det {\bf c}_{j_1,\dots,j_m}\ne 0.
\eey
\end{theorem}
{\it Proof}. The proof is by induction on $m$. For $m=1$, the theorem comes from
Lemma~\ref{lemma_inde_A} and Corollary~\ref{corol_c}.
Thus, we just need to prove Eqs.~(\ref{det_reduc},\ref{det_reduc2}) by assuming that
\bey
\label{recur0}
\det {\bf M}^{\vec s}_{j_1,\dots,j_{m-1}}\ne 0, \\
\label{recur1}
\det {\bf c}_{j_1,\dots,j_{m-1}}\ne 0.
\eey
Let us first prove Eq.~(\ref{det_reduc}) by contradiction. If
the equation is false, then there is an $\vec s_0\in\{0,1\}^n$ and
$m$ distinct integers $i_1,\dots,i_m$ in $\{1,\dots,n\}$ such that
$\det {\bf M}^{\vec s_0}_{i_1,\dots,i_m}=0$.
By permutations, we can
set $i_h=h$. By suitable exchanges of $\hat a_i$ and $\hat b_i$, we can
set $s_i=1$ for every $i\in\{1,\dots,n\}$.
There is an integer $l\in\{1,\dots,n-m\}$ such that $B_{l,i}=0$ for
$i\in\{m+1,\dots,n\}$ up to a transformation of the form of Eq.~(\ref{invar_trans}).
From Eqs.~(\ref{prob2_1},\ref{prob2_2}), we have the $m$ equations
\be
\begin{array}{r}
\sum_{i=1}^m c_{l,i}A_{l,i}B_{l,i}=0 \\
\sum_{i=1}^m c_{n+1-m,i}A_{l,i}B_{l,i}=0 \\
\sum_{i=1}^m c_{n+2-m,i}A_{l,i}B_{l,i}=0 \\
\dots \\
\sum_{i=1}^m c_{n-2,i}A_{l,i}B_{l,i}=0 \\
\sum_{i=1}^m c_{n-1,i}A_{l,i}B_{l,i}=0.
\end{array}
\ee
From Eq.~(\ref{recur0}), we have that $A_{l,i}\ne0$ and $B_{l,i}\ne0$ for some
$i\in\{1,\dots,m\}$, so that
\be
\text{rank}
\left(
\begin{array}{ccc}
c_{l,1} & \dots & c_{l,m} \\
c_{n+1-m,1} & \dots & c_{n+1-m,m} \\
c_{n+2-m,1} & \dots & c_{n+2-m,m} \\
\vdots & \ddots & \vdots \\
c_{n-2,1} & \dots & c_{n-2,m} \\
c_{n-1,1} & \dots & c_{n-1,m}
\end{array}
\right)< m.
\ee
Up to a transformation of the form of Eq.~(\ref{poly_replacement}),
there is an integer $l_0\in\{n+1-m,\dots,n-1\}\cup\{l\}$ such
that $c_{l_0,i}=0$ for $i\in\{1,\dots,m\}$. Eq.~(\ref{recur1}) implies
that $l_0=l$. Thus, $c_{l,1}=\dots=c_{l,m}=0$, but this contradicts
Eq.~(\ref{prob2_3}) with $\vec x\in{\cal Z}_P$ orthogonal to $\vec b_1,\dots,\vec b_n$.
Let us now prove Eq.~(\ref{det_reduc2}) by contradiction. If the equation is false,
then there are $m$ distinct integers $i_1,\dots,i_m$ in $\{1,\dots,n\}$ such that
$\det {\bf c}_{i_1,\dots,i_m}=0$. Without loss of generality, let us take
$i_h=h$. Up to the transformation~(\ref{poly_replacement}),
there is an integer $l\in\{n-m,\dots,n-1\}$ such that $c_{l,i}=0$ for
$i\in\{1,\dots,m\}$. Eq.~(\ref{recur1}) implies that $l=n-m$. Thus,
\be
c_{n-m,1}=\dots=c_{n-m,m}=0.
\ee
Eq.~(\ref{det_reduc}) implies that there is an integer
$i_0\in\{m+1,\dots,n\}$ such that $A_{n-m,i}=0$ for $i\in\{m+1,\dots,n\}\bs\{i_0\}$
up to transformation~(\ref{invar_trans}). Thus, we have from Eq.~(\ref{prob2_2})
that
\be
0=\sum_{i=1}^n c_{n-m,i}A_{n-m,i}B_{n-m,i}=
c_{n-m,i_0}A_{n-m,i_0}B_{n-m,i_0}.
\ee
Eq.~(\ref{det_reduc}) also implies that $A_{n-m,i_0}B_{n-m,i_0}\ne 0$, so that
$c_{n-m,i_0}=0$, which is in contradiction with Eq.~(\ref{prob2_3}) for $\vec x$
orthogonal to $\vec a_1,\dots,\vec a_n$. $\square$
\newline
In the following, this theorem will be used with $m\in\{1,2\}$.
Since all the coefficients $c_{n-1,i}$ are different from zero, we can set
them equal to $1$ by rescaling the vectors $\vec a_i$ or $\vec b_i$.
Let us denote by $c_i$ the coefficients $c_{n-2,i}$. Theorem~\ref{theorem_det}
with $m=2$ implies that $c_i\ne c_j$ for $i\ne j$.
Eq.~(\ref{prob2_1}) with $l=n-1$ and $l=n-2$ takes the form
\bey
\label{nm2}
\frac{\partial }{\partial x_k}P_{n-1}=\sum_{i=1}^n \left( A_{k,i}\vec a_i+B_{k,i}\vec b_i\right)=0
\;\;\;\; 1\le k \le n-2, \\
\label{nm3}
\frac{\partial }{\partial x_k}P_{n-2}=
\sum_{i=1}^n c_i\left( A_{k,i}\vec a_i+B_{k,i}\vec b_i\right)=0
\;\;\;\; 1\le k \le n-3.
\eey
These equations impose the form~(\ref{poly_affine}) on the last two
polynomials, $P_{n-1}$ and $P_{n-2}$, which must be independent of $n-2$ and
$n-3$ variables, respectively.
The first $n-2$ vector equations are linearly independent. Let us assume
the opposite. Then, there is a set of coefficients $\lambda_1,\dots,\lambda_{n-2}$
such that
$\sum_{k=1}^{n-2}\lambda_k (A_{k,i},B_{k,i})=0$.
But this is impossible because of Property~\ref{indep_functions}. It also
contradicts Theorem~\ref{theorem_det}. The theorem also implies that
Eqs.~(\ref{nm3}) are linearly
independent. Since the vector space is $(n+1)$-dimensional, the vectors
$\vec a_i$ and $\vec b_i$ can satisfy at most $n-1$ independent vector constraints. Thus,
at least $n-4$ out of Eqs.~(\ref{nm3}) are linearly dependent on
Eqs.~(\ref{nm2}).
First, let us show that $n-4$ is the maximal number of dependent equations.
Assuming the converse, we have
\be
c_i(A_{k,i},B_{k,i})=\sum_{l=1}^{n-2}\lambda_{k,l} (A_{l,i},B_{l,i}) \;\;\;\;\;
\forall k\in\{1,\dots,n-3\}
\ee
for suitable coefficients $\lambda_{k,l}$. Let us define the linear
superposition
\be
(A_i,B_i)\equiv \sum_{k=1}^{n-3} v_k (A_{k,i},B_{k,i})
\ee
with the coefficients $v_k$. Let $\bf \Lambda$ be the
$(n-2)\times(n-2)$ matrix with ${\bf \Lambda}_{k,n-2}=0$
and ${\bf \Lambda}_{k,l}=\lambda_{l,k}$
for $k\in\{1,\dots,n-2\}$ and $l\in\{1,\dots,n-3\}$.
The coefficients
$(v_1,\dots,v_{n-3})\equiv\vec v$ are defined by imposing
the $n-4$ constraints
\be
({\bf\Lambda}^s\vec v)_{n-2}=0 \;\;\;\;\; s\in\{1,\dots,n-4\}.
\ee
By construction, the pairs
\be\label{expo_form}
c_i^{k-1}(A_i,B_i) \;\;\;\; k\in\{1,\dots,n-2\}
\ee
are linear superpositions of the derivatives $(A_{k,i},B_{k,i})$ with
$k\in\{1,\dots,n-2\}$. Furthermore, the first $n-3$ pairs are
linear superpositions of $(A_{k,i},B_{k,i})$ with
$k\in\{1,\dots,n-3\}$. That is,
\be
\label{expo_deps}
\begin{array}{l}
c_i^{k-1} (A_i,B_i)=\sum_{l=1}^{n-3}\bar\lambda_{k,l}(A_{l,i},B_{l,i})
\;\;\; k\in\{1,\dots,n-3\} \\
c_i^{n-3} (A_i,B_i)=\sum_{l=1}^{n-2}\bar\lambda_{n-2,l}(A_{l,i},B_{l,i})
\end{array}
\ee
for some coefficients $\bar\lambda_{k,l}$. From Lemma~\ref{lemma_inde_A}
and Corollary~\ref{corol_c} we have that the $n-2$ pairs~(\ref{expo_form})
are linearly independent. Indeed, Corollary~\ref{corol_c} implies that
$A_i\ne 0$ and $B_i\ne 0$ for every $i\in\{1,\dots,n\}$. Lemma~\ref{lemma_inde_A}
implies that $c_i^{k-1}$ are linearly independent for $k\in\{1,\dots,n-2\}$.
Equations~(\ref{expo_form},\ref{expo_deps}) can also be derived from Jordan's
theorem, Lemma~\ref{lemma_inde_A} and Corollary~\ref{corol_c}; see
Appendix~\ref{lin_alg_tools}.
Thus, by a variable transformation,
Eqs.~(\ref{nm2},\ref{nm3}) take the form
\be
\sum_{i=1}^n c_i^{k-1}\left( A_{i}\vec a_i+B_{i}\vec b_i\right)=0
\;\;\;\;\;\; k\in\{1,\dots,n-2\}.
\ee
and
\be
\frac{\partial (\hat a_i,\hat b_i)}{\partial x_k}=c_i^{k-1}(A_i, B_i)
\;\;\;\;\;\; k\in\{1,\dots,n-2\}.
\ee
These equations imply that
$\sum_{i=1}^n c_i^{k-1} A_i B_i=0$ for $k\in\{1,\dots,2n-5\}$.
For $n>4$, we have in particular that
\be
\label{kill_AB}
\sum_{i=1}^n c_i^{k-1} A_i B_i=0 \;\;\;\; k\in\{1,\dots,n\}.
\ee
Since
\be
\det\left(
\begin{array}{ccc}
1 & \dots & 1 \\
c_1 & \dots & c_n \\
\vdots & \ddots & \vdots \\
c_1^{n-1} & \dots & c_n^{n-1}
\end{array}
\right)=\prod_{j>i} (c_j-c_i)
\ee
and $c_i\ne c_j$ for $i\ne j$, Eq.~(\ref{kill_AB}) implies that $A_i B_i=0$ for every
$i\in\{1,\dots,n\}$. But this is in contradiction with Theorem~\ref{theorem_det}.
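The Vandermonde identity invoked above can be verified symbolically for a small size, e.g. $n=4$; a minimal sketch:

```python
import sympy as sp

n = 4
c = sp.symbols(f"c1:{n + 1}")

# Vandermonde matrix: row k holds (c_1^k, ..., c_n^k) for k = 0, ..., n-1
V = sp.Matrix(n, n, lambda k, i: c[i] ** k)

det = sp.expand(V.det())
prod = sp.expand(sp.prod(c[j] - c[i] for i in range(n) for j in range(i + 1, n)))
assert det == prod
print("det V = prod_{j>i} (c_j - c_i)")
```

Since the $c_i$ are pairwise distinct, the determinant is nonzero and the homogeneous system~(\ref{kill_AB}) forces all products $A_iB_i$ to vanish, as stated in the text.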
Thus, let us take exactly $n-4$ out of Eqs.~(\ref{nm3}) linearly dependent on
Eqs.~(\ref{nm2}). Let $\bar k$ be an integer in $\{1,\dots,n-3\}$ such that
Eq.~(\ref{nm3}) with $k=\bar k$ is linearly independent of Eqs.~(\ref{nm2}). Thus,
$$
c_i(A_{k,i},B_{k,i})=\bar\lambda_k c_i (A_{\bar k,i},B_{\bar k,i})+
\sum_{l=1}^{n-2}\lambda_{k,l}(A_{l,i},B_{l,i}) \;\;\;\;
k\in\{1,\dots,n-3\}\bs\{\bar k\}.
$$
By a transformation of the first $n-3$ variables, we can rewrite this
equation in the form
\be
c_i(A_{k,i},B_{k,i})=\sum_{l=1}^{n-2}\lambda_{k,l}(A_{l,i},B_{l,i}) \;\;\;\;
k\in\{1,\dots,n-4\}.
\ee
By a suitable transformation of the first $n-2$ variables,
the $n-2$ pairs $(A_{k,i},B_{k,i})$ can be split in two groups (see
Appendix~\ref{lin_alg_tools}), say,
\be
\left.
\begin{array}{l}
\frac{\partial}{\partial x_k'} (\hat a_i,\hat b_i) \equiv
(A_{k,i}', B_{k,i}')=c_i^{k-1} (A_i',B_i') \;\;\;\; k\in\{1,\dots,n_1\} \\
\frac{\partial}{\partial x_k''}(\hat a_i,\hat b_i) \equiv
(A_{k,i}'', B_{k,i}'')=c_i^{k-1} (A_i'',B_i'') \;\;\;\; k\in\{1,\dots,n_2\}
\end{array} \right\}
\;\;\; n_1+n_2=n-2.
\ee
Equations~(\ref{nm2}) become
\begin{equation}
\label{nm2_vector_equation}
\begin{array}{l}
\sum_{i=1}^n c_i^{k-1}\left( A_i' \vec b_i+B_i' \vec a_i\right)=0 \;\;\; k\in\{1,\dots,n_1\} \\
\sum_{i=1}^n c_i^{k-1}\left(\bar A_i'' \vec b_i+\bar B_i'' \vec a_i\right)=0 \;\;\; k\in\{1,\dots,n_2\}.
\end{array}
\end{equation}
Given these $n-2$ vector constraints, all the $n-2$ derivatives
$\partial P_{n-1}/\partial x_1',\dots,\partial P_{n-1}/\partial x_{n_1}'$,
$\partial P_{n-1}/\partial x_1'',\dots,\partial P_{n-1}/\partial x_{n_2}''$ are equal to
zero. Furthermore, we also have that
$$
\begin{array}{l}
\frac{\partial}{\partial x_k'}P_{n-2}=0 \;\;\; k\in\{1,\dots,n_1-1\} \\
\frac{\partial}{\partial x_k''}P_{n-2}=0 \;\;\; k\in\{1,\dots,n_2-1\},
\end{array}
$$
so that $P_{n-2}$ is independent of $n-4$ out of the $n-2$ variables
$x_1',\dots,x_{n_1}'$, $x_1'',\dots,x_{n_2}''$. Thus, we need to add
another vector equation such that
$\left(w_1 \frac{\partial}{\partial x_{n_1}'}+w_2 \frac{\partial}{\partial x_{n_2}''}\right)P_{n-2}=0$
for some $(w_1,w_2)\ne (0,0)$. Up to a variable transformation, we can set $(w_1,w_2)=(1,0)$
so that the additional vector equation is
\be
\label{additional_equation}
\sum_{i=1}^n c_i^{n_1}\left( A_i' \vec b_i+B_i' \vec a_i\right)
=0.
\ee
Equations~(\ref{prob2_2},\ref{nm2_vector_equation},\ref{additional_equation}) imply that
\bey
\label{eqs_coef_AB}
\sum_{i=1}^n c_i^{k-1} A_i' B_i'=0 \;\;\; k\in\{1,\dots,2 n_1\}, \\
\label{eqs_coef_AbBb}
\sum_{i=1}^n c_i^{k-1} A_i'' B_i''=0 \;\;\; k\in\{1,\dots,2 n_2\}, \\
\label{eqs_coef_AbB}
\sum_{i=1}^n c_i^{k-1} ( A_i' B_i''+A_i'' B_i')=0 \;\;\; k\in\{1,\dots,n_1+n_2\}.
\eey
Since $A_i' B_i'$ and $A_i'' B_i''$ are not identically equal to zero (as a consequence of
Theorem~\ref{theorem_det}), the number of Eqs.~(\ref{eqs_coef_AB}) and Eqs.~(\ref{eqs_coef_AbBb})
is smaller than $n$, so that
$$
n_1\le \frac{n-1}{2},\;\;\; n_2\le \frac{n-1}{2}.
$$
Without loss of generality, we can assume that $n$ is even. Indeed, if Problem~\ref{problem1}
can be solved for $n$ odd, then Lemma~\ref{lemma_dim_red} implies that it can be solved
for $n$ even, and {\it vice versa}. Since $n_1+n_2=n-2$, we have that
\be
n_1=n_2=\frac{n-2}{2}.
\ee
Let $W_1,\dots, W_n$ be $n$ numbers defined by the equations
\be
\label{def_W}
\sum_{i=1}^n c_i^{k-1} W_i=0 \;\;\;\; k\in\{1,\dots,n-1\}
\ee
up to a constant factor. Equations~(\ref{eqs_coef_AB},\ref{eqs_coef_AbBb},\ref{eqs_coef_AbB})
are equivalent to the equations
\bey
\label{AB}
A_i' B_i'=(k_0+k_1 c_i) W_i, \\
\label{AbBb}
A_i'' B_i''=(r_0+r_1 c_i) W_i, \\
\label{AbB}
A_i' B_i''+A_i'' B_i'=(s_0+s_1 c_i) W_i.
\eey
These equations can be solved over the rationals for the coefficients $c_i$, $B_i'$ and
$B_i''$ in terms of $A_i'$ and $A_i''$. The coefficients $c_i$ take a form which is
independent of $W_i$,
\be
c_i=-\frac{r_0 A_i'^{\, 2}+k_0 A_i''^{\,2}-s_0 A_i' A_i''}{r_1 A_i'^{\, 2}+k_1 A_i''^{\,2}-s_1 A_i' A_i''},
\ee
so that we first evaluate $c_i$, then $W_i$ by Eq.~(\ref{def_W}) and, finally, $B_i'$ and
$B_i''$ by Eqs.~(\ref{AB},\ref{AbBb}).
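The elimination of $B_i'$ and $B_i''$ described here can be reproduced symbolically for a single index $i$ (primes written as `p`/`pp`; all parameters generic symbols); a sketch:

```python
import sympy as sp

Ap, App, W = sp.symbols("Ap App W", nonzero=True)
c, Bp, Bpp = sp.symbols("c Bp Bpp")
k0, k1, r0, r1, s0, s1 = sp.symbols("k0 k1 r0 r1 s0 s1")

eqs = [
    sp.Eq(Ap * Bp, (k0 + k1 * c) * W),              # A'B'   = (k0 + k1 c) W
    sp.Eq(App * Bpp, (r0 + r1 * c) * W),            # A''B'' = (r0 + r1 c) W
    sp.Eq(Ap * Bpp + App * Bp, (s0 + s1 * c) * W),  # cross relation
]

# Solve for c, B', B'' in terms of A', A'' and the parameters
sols = sp.solve(eqs, [c, Bp, Bpp], dict=True)
assert len(sols) >= 1
# Every returned solution must satisfy the three original relations
for s in sols:
    for e in eqs:
        assert sp.simplify(e.lhs.subs(s) - e.rhs.subs(s)) == 0
print(sp.factor(sols[0][c]))
```

Note that the solved value of $c$ contains no $W$: the parameter $W_i$ cancels, consistently with the claim that $c_i$ is independent of $W_i$.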
It is possible to show that condition~(\ref{prob2_3}) for $l=n-1$ implies that $(k_1,r_1)\ne(0,0)$.
Indeed, if $(k_1,r_1)=(0,0)$, then only half of the points in ${\cal Z}_P$ satisfy
the inequality in the condition.
Up to a variable change, we have
$$
k_1\ne 0,\;\;\; s_1=0.
$$
Up to now we have been able to solve all the conditions of Problem~\ref{problem2} which
refer to the last two polynomials, that is, for $l=n-2,n-1$. The equations that need to
be satisfied are Eqs.~(\ref{nm2_vector_equation},\ref{additional_equation},
\ref{def_W},\ref{AB},\ref{AbBb},\ref{AbB}). Let us rewrite them all together.
\be
\boxed{
\begin{array}{c}
A_i' B_i'=(k_0+k_1 c_i) W_i, \;\;\;
A_i'' B_i''=(r_0+r_1 c_i) W_i \\
A_i' B_i''+A_i'' B_i'=s_0 W_i, \;\;\; k_1\ne 0
\vspace{1mm} \\
\sum_{i=1}^n c_i^{k-1} W_i=0 \;\;\;\; k\in\{1,\dots,n-1\}
\vspace{1mm} \\
\sum_{i=1}^n c_i^{k-1}\left( A_i' \vec b_i+B_i' \vec a_i\right)=0 \;\;\; k\in\{1,\dots,\frac{n}{2}\} \\
\sum_{i=1}^n c_i^{k-1}\left(\bar A_i'' \vec b_i+\bar B_i'' \vec a_i\right)=0 \;\;\; k\in\{1,\dots,
\frac{n}{2}-1\}
\end{array}
}
\ee
Given $2n$ vectors $\vec a_1,\dots,\vec a_n,\vec b_1,\dots,\vec b_n$ satisfying these equations,
there are $n-1$ directions $\vec u_1,\dots,\vec u_{n-1}$ such that
\be
\begin{array}{l}
\vec u_{2k-1}\cdot \frac{\partial}{\partial\vec x} (\hat a_i,\hat b_i)=c_i^{k-1}(A_i',B_i')
\;\;\; k\in\{1,\dots,\frac{n}{2}-1\} \\
\vec u_{2k}\cdot \frac{\partial}{\partial\vec x} (\hat a_i,\hat b_i)=c_i^{k-1}(A_i'',B_i'')
\;\;\; k\in\{1,\dots,\frac{n}{2}\}.
\end{array}
\ee
This can be easily verified by substitution.
Let us define the coordinate system $(y_1,\dots,y_{n+1})\equiv \vec y$ such that
\be
\vec u_k\cdot \frac{\partial}{\partial\vec x} =\frac{\partial}{\partial y_k} \;\;\;
k\in\{1,\dots,n-1\}.
\ee
Given the polynomials
\begin{equation}
\begin{array}{l}
P_{n-1}=\sum_{i=1}^n \hat a_i\hat b_i \\
P_{n-2}=\sum_{i=1}^n c_i \hat a_i\hat b_i
\end{array}
\end{equation}
with $\hat a_i=\vec a_i\cdot \vec y$ and $\hat b_i=\vec b_i\cdot \vec y$, it is
easy to verify that
\begin{equation}
\begin{array}{l}
\frac{\partial P_{n-1}}{\partial y_k}=0, \;\;\;\; k\in\{1,\dots,n-2\}, \\
\frac{\partial P_{n-2}}{\partial y_k}=0, \;\;\;\; k\in\{1,\dots,n-3\}, \\
\frac{\partial^2 P_{n-2}}{\partial y_{n-2}^2}=0.
\end{array}
\end{equation}
The polynomial $P_{n-1}$ depends on $2$ variables (in the affine space) and
the polynomial $P_{n-2}$ depends linearly on an additional variable $y_{n-2}$.
Thus, the algebraic set of the two polynomials admits a Gaussian parametrization,
that is, the equations $P_{n-1}=0$ and $P_{n-2}=0$ can be solved by eliminating
two variables \emph{\`a la} Gauss. Note that the polynomial $P_{n-1}$ has rational
roots by construction.
The next step is to satisfy the conditions of Problem~\ref{problem2} for the other
polynomials
$P_{1},\dots,P_{n-3}$ by setting $c_{k,i}$ and the other remaining free coefficients.
It is interesting to note that it is sufficient to take $c_{2s,i}=c_i^{n/2-s}$ with
$s\in\{1,\dots,(n-4)/2\}$ to satisfy every condition of Problem~\ref{problem2}
for $l$ even. The polynomials $P_2,P_4,\dots,P_{n-2}$ take the form
\be\label{poly_even}
P_{2s}=\sum_{i=1}^n c_i^{n/2-s} \hat a_i\hat b_i, \;\;\; s\in\{1,\dots,(n-4)/2\}.
\ee
Furthermore, we can choose $c_{1,i}$ such that $\partial^2 P_1/\partial x_1^2=0$.
With this choice, we have that
\be
\left.
\begin{array}{l}
\frac{\partial P_l}{\partial y_k}=0, \;\;\;\; k\in\{1,\dots,l-1\} \\
\frac{\partial^2 P_l}{\partial y_l^2}=0
\end{array}
\right\} \;\;\; l\in\{2,4,\dots,n-4,n-2\}\cup\{1,n-1\}.
\ee
Thus, we are halfway to solving Problem~\ref{problem2}: about half of the conditions
are satisfied. The hard core of the problem is to solve the conditions for
$P_1,P_3,\dots,P_{n-3}$.
The form of the polynomials~(\ref{poly_even}) is not necessarily the most general. Thus, let
us take a step backward and handle Problem~\ref{problem2} for the polynomial $P_{n-3}$
with the equations derived so far. We will find that this polynomial cannot
satisfy the required conditions if $n>4$, so that the number of parameters has
to be greater than $1$.
Let us denote by $e_i$ the coefficients $c_{n-3,i}$. Eqs.~(\ref{prob2_1},\ref{prob2_2})
with $l=n-3$ give the equations
$$
\sum_{i=1}^n e_i \left(A_{k,i} B_{k',i}+A_{k',i} B_{k,i}\right)=0 \;\;\;
k,k'\in\{1,\dots,n-3\},
$$
which imply that
$$
\begin{array}{l}
\sum_{i=1}^n e_i c_i^{k+k'-2} A_i' B_i'=0\;\;\; k,k'\in\{1,\dots,\frac{n}{2}-1\} \\
\sum_{i=1}^n e_i c_i^{k+k'-2} A_i'' B_i''=0\;\;\; k,k'\in\{1,\dots,\frac{n}{2}-2\} \\
\sum_{i=1}^n e_i c_i^{k+k'-2} \left( A_i' B_i''+ A_i'' B_i'\right) =0\;\;\;
\left\{
\begin{array}{l}
k\in\{1,\dots,\frac{n}{2}-1\} \\
k'\in\{1,\dots,\frac{n}{2}-2\}
\end{array}
\right.
\end{array}
$$
that is,
\be
\label{eqs_for_e}
\begin{array}{l}
\sum_{i=1}^n e_i c_i^{k-1} A_i' B_i'=0\;\;\; k\in\{1,\dots,n-3\} \\
\sum_{i=1}^n e_i c_i^{k-1} A_i'' B_i''=0\;\;\; k\in\{1,\dots,n-5\} \\
\sum_{i=1}^n e_i c_i^{k-1} \left( A_i' B_i''+ A_i'' B_i'\right) =0\;\;\;
k\in\{1,\dots,n-4\}.
\end{array}
\ee
These equations imply that
\be
\begin{array}{l}
e_i A_i' B_i'=F_{11}(c_i) W_i \\
e_i A_i'' B_i''=F_{22}(c_i) W_i \\
e_i \left(A_i' B_i''+ A_i'' B_i'\right) =F_{12}(c_i) W_i,
\end{array}
\ee
where $F_{11}(x)$, $F_{22}(x)$ and $F_{12}(x)$ are polynomials of degree lower than $3$, $5$ and $4$,
respectively. Thus,
\be
e_i=\frac{F_{11}(c_i)}{k_0+k_1 c_i}=\frac{F_{22}(c_i)}{r_0+r_1 c_i}=
\frac{F_{12}(c_i)}{s_0}.
\ee
The second and third equalities give polynomials of degree lower than $6$ and $5$,
respectively. Since $c_i\ne c_j$ for $i\ne j$ and $n$ is even, the coefficients of
these polynomials are equal to zero for $n>4$. In particular, $k_0+k_1 c_i$ divides
$F_{11}(c_i)$ and, thus, $e_i$ is equal to a linear function of $c_i$. We have that
$P_{n-3}=q_1 P_{n-2}+q_2 P_{n-1}$ for some constants $q_1$ and $q_2$, so that
there is no independent polynomial $P_{n-3}$ satisfying the required conditions
for $n>4$.
In conclusion, we searched for a solution of Problem~\ref{problem2} with one parameter ($M=1$), but
we ended up finding a solution with $n/2-1$ parameters. Let us stress that we have not
proved that $M$ cannot be less than $n/2-1$; we have only proved that $P_{n-3}$
cannot satisfy the required conditions, so that solutions with $M>1$ may exist.
Furthermore, we employed the condition $M=1$ in some
intermediate inferences. Thus, to check the existence of better solutions, we need to consider
the case $M\ne 1$ from scratch.
For the sake of completeness, let us write down the solution for $n=4$. Eqs.~(\ref{eqs_for_e})
reduce to
\be
\sum_{i=1}^n e_i A_i' B_i'=0.
\ee
Up to a replacement $P_{1}\rightarrow \lambda_1 P_1+\lambda_2 P_2+\lambda_3 P_3$ for some constants
$\lambda_i$ with $\lambda_1\ne 0$, we have that
\be
e_i=\frac{1}{k_0+k_1 c_i}.
\ee
Thus, the $4$ polynomials take the form
\be
\begin{array}{ll}
P_0=\hat a_1\hat b_1, \;\;\;\;
& P_1=\sum_{i=1}^{4}\frac{\hat a_i \hat b_i}{k_0+k_1 c_i} \\
P_2=\sum_{i=1}^{4}c_i \hat a_i \hat b_i \;\;\;\;
& P_3=\sum_{i=1}^{4} \hat a_i \hat b_i.
\end{array}
\ee
Let us give a numerical example with $4$ polynomials, built by using the derived equations.
\subsubsection{Numerical example with $n=4$}
Let us set $A_i'=i$, $A_i''=1$, $k_0=k_1=r_0=1$, $r_1=2$, and $s_0=3$. Up to
a linear transformation of $x_3$ and $x_4$, this setting gives the polynomials
\be
\begin{array}{l}
P_3(x_3,x_4)=\\ 5 x_3 \left(8427 x_4+9430\right)-209 \left(3 x_4 \left(393 x_4+880\right)+1478\right)
\vspace{1mm} \\
P_2(x_2,x_3,x_4)= \\
5538425 x_3^2+18810 \left(1445 x_2+5718 x_4+6421\right) x_3-786258 \left(3 x_4 \left(267 x_4+598\right)+1004\right)
\vspace{1mm} \\
P_1(x_1,x_2,x_3,x_4)= \\
2299 [205346285 x_3-38 (63526809 x_4+35594957)]- 5 [-2045057058 x_2^2+ \\
1630827 (1813 x_3+1254 x_4) x_2+2891872832 x_3^2+495958966272 x_4^2+ \\
4892481 x_1 (1254 x_2-1429 x_3-418)-87093628743 x_3 x_4] \vspace{1mm}\\
P_0(x_1,x_2,x_3,x_4)= \\
\left(627 x_1+627 x_2-46 x_3+1881 \left(x_4+1\right)\right) \left(5016 x_1+6270 x_2+2555 x_3-3762 \left(4 x_4+5\right)\right)
\end{array}
\ee
Taking $x_4$ as the parameter $\tau$ and solving the equations $P_3=P_2=P_1=0$ with respect to $x_3$, $x_2$ and $x_1$,
we replace the result in $P_0$ and obtain, up to a constant factor,
\be
{\cal R}(\tau)=\frac{\prod_{k=1}^{16}(\tau-\tau_k)}{Q_1^2(\tau)Q_2^2(\tau)Q_3^2(\tau)},
\ee
where
\be
\begin{array}{l}
Q_1(\tau)=8427 \tau +9430, \\
Q_2(\tau)=3 \tau (393 \tau +880)+1478, \\
Q_3(\tau)=3 \tau (9 \tau (7 \tau
(5367293625 \tau +24273841402)+288165964484)+1954792734568)+1657527934720, \\
(\tau_1,\dots,\tau_{16})=
-\left(\frac{86}{69},\frac{800}{681},\frac{122}{105},\frac{3166}{2775},
\frac{140}{123},\frac{718}{633},\frac{2452}{2163},\frac{5558}{4929},
\frac{2578}{2289},\frac{152}{135},\frac{1070}{951},\frac{3932}{3507},
\frac{158}{141},\frac{2072}{1851},\frac{1142}{1023},\frac{218}{201}\right).
\end{array}
\ee
Over a finite field $\mathbb{Z}_p$, one can check that the numerator has about $16$ distinct roots for $p\gg 16$.
For $p\simeq 16$, there are fewer distinct roots, either because of collisions or because the denominator of some
rational root $\tau_k$ is divisible by $p$.
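This root count can be checked directly with a short script (a sketch; it only uses the rational roots $\tau_k$ listed above, mapping each one into $\mathbb{Z}_p$ by modular inversion):

```python
from fractions import Fraction

# The 16 rational roots tau_k of the numerator, as listed above.
TAUS = [Fraction(-86, 69), Fraction(-800, 681), Fraction(-122, 105),
        Fraction(-3166, 2775), Fraction(-140, 123), Fraction(-718, 633),
        Fraction(-2452, 2163), Fraction(-5558, 4929), Fraction(-2578, 2289),
        Fraction(-152, 135), Fraction(-1070, 951), Fraction(-3932, 3507),
        Fraction(-158, 141), Fraction(-2072, 1851), Fraction(-1142, 1023),
        Fraction(-218, 201)]

def distinct_roots_mod_p(p):
    """Count the distinct residues of the roots tau_k modulo a prime p,
    skipping any root whose denominator vanishes mod p."""
    residues = set()
    for t in TAUS:
        if t.denominator % p == 0:
            continue  # this root is not defined over Z_p
        residues.add(t.numerator * pow(t.denominator, -1, p) % p)
    return len(residues)

print(distinct_roots_mod_p(10**9 + 7))  # 16: all roots stay distinct
print(distinct_roots_mod_p(17))         # possibly fewer, due to collisions
```

For $p$ larger than every cross-difference of the fractions, the count is exactly $16$; for small primes, collisions reduce it.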
\subsubsection{Brief excursus on retro-causality and time loops}
Previously, we have built the polynomials~(\ref{poly_even}). Setting them equal to zero, we have
a triangular system of about $n/2$ polynomial equations that can be efficiently solved in $n/2$ variables,
say ${\bf x}_1$, given the value of the other variables, say ${\bf x}_2$. This system is more or less
symmetric, that is, the variables ${\bf x}_2$ can be efficiently computed given the first block
${\bf x}_1$ (up to a few variables). To determine the overall set of variables, we need the missing
$n/2$ polynomials in the ideal $I$.
It is possible to choose the coefficients $c_{l,i}$ of these polynomials in
such a way that the associated equations again have a triangular form with respect to one of the
two blocks ${\bf x}_1$ and ${\bf x}_2$, up to a few variables. Thus, we end up with two independent
equations and a boundary condition,
\be
\begin{array}{l}
{\bf x}_2={\cal R}_1({\bf x}_1), \\
{\bf x}_3={\cal R}_2({\bf x}_2), \\
{\bf x}_3={\bf x}_1,
\end{array}
\ee
where ${\cal R}_1$ and ${\cal R}_2$ are vector-valued rational functions.
The first two equations can be interpreted as time-forward and time-backward processes. The last
equation identifies the initial state of the forward process with the final state of the backward
process. The overall process can be seen also as a deterministic process in a time loop.
This analogy is suggestive, since retro-causality is considered one possible explanation
of quantum weirdness. Can a suitable break of causality allow
for a description of quantum processes in a classical framework? To be physically interesting,
this break should not lead to a computational power beyond the power of quantum computers,
otherwise a fine tuning of the theory would be necessary to conceal, in a physical process,
much of the power allowed by the causality break. A similar fine tuning is necessary
if, for example, quantum non-locality is explained with superluminal interactions. These classical
non-local theories need an artificial fine tuning to account for non-signaling of quantum theory.
\subsection{$(n-1)/3$ parameters at most}
In the previous subsection, we have built a class of curves defined by systems of $n-1$ polynomial
equations such that about half of the variables can be efficiently solved over a finite field as
functions of the remaining variables. These curves and the polynomial $P_0$ have $2^n$ rational
intersection points. From a different perspective (discarding about $n/2$ polynomials), we have
found a parametrizable variety with about
$n/2$ parameters such that its intersection with some hypersurface has $2^n$ rational points.
In this subsection, we show that the number of parameters can be dropped to about $n/3$, so
that at least about $2n/3$ variables can be efficiently eliminated. In the following, we consider
space dimensions $n$ such that $n-1$ is a multiple of $3$. Let us define the integer
\be
n_1\equiv\frac{n-1}{3}.
\ee
Let us define the rational numbers $A_i,B_i,\bar A_i,\bar B_i$, $W_i$, and $c_i$ with $i\in\{1,\dots,n\}$
as a solution of the equations
\be
\begin{array}{c}
A_i B_i=W_i,\;\;\; \bar A_i \bar B_i= W_i, \\
A_i \bar B_i+\bar A_i B_i=2 c_i W_i, \\
\sum_{i=1}^n c_i^{k-1} W_i=0 \;\;\;\; k\in\{1,\dots,n-1\}, \\
i\ne j \Rightarrow c_i\ne c_j.
\end{array}
\ee
The procedure for finding a solution has been given previously.
We define the polynomials
\be
P_s=\sum_{i=1}^n c_i^{s-1} \hat a_i \hat b_i, \;\;\;\; s\in\{1,\dots,n\}.
\ee
The linear functions $\hat a_i$ and $\hat b_i$ are defined by the $n-1$ linear equations
\be
\begin{array}{l}
\sum_{i=1}^{n_1} c_i^{k-1}(A_i \hat b_i+ B_i \hat a_i)=0, \;\;\; k\in\{1,\dots,n_1\} \\
\sum_{i=1}^{n_1} c_i^{k-1}(\bar A_i \hat b_i+\bar B_i \hat a_i)=0, \;\;\; k\in\{1,\dots,2 n_1\}.
\end{array}
\ee
These equations uniquely determine $\hat a_i$ and $\hat b_i$, up to a linear transformation
of the variables $x_1,\dots,x_{n+1}$.
Up to a linear transformation, we have
\be
\begin{array}{l}
\frac{\partial (\hat a_i,\hat b_i)}{\partial x_k}=c_i^{k-1} (\bar A_i,\bar B_i), \;\;\; k\in\{1,\dots,n_1\} \\
\frac{\partial (\hat a_i,\hat b_i)}{\partial x_{k+n_1}}=c_i^{k-1} (A_i,B_i), \;\;\; k\in\{1,\dots,n_1\}.
\end{array}
\ee
Since there are rational points on the curve, there is another variable, say $x_{2n_1+1}$, such that
the second derivative $\partial^2 P_1/\partial x_{2n_1+1}^2$ is equal to zero.
Using the above equations, we have
\be
\left.
\begin{array}{l}
\frac{\partial P_s}{\partial x_k}=0, \;\;\; k\in\{1,\dots,2 n_1-s+1\}, \\
\frac{\partial^2 P_s}{\partial x_k^2}=0, \;\;\; k=2 n_1-s+2.
\end{array}
\right\} \;\;\; s\in\{1,\dots,2 n_1+1 \}.
\ee
Thus, the first $2n_1+1=\frac{2n+1}{3}$ polynomials take the triangular form~(\ref{poly_affine}),
up to a reorder of the indices. These polynomials define a parametrizable variety with $(n-1)/3$
parameters. Stated in a different way, there is a curve and a hypersurface such that their
intersection contains $2^n$ points and at least $(2n+1)/3$ coordinates of the points in the curve
can be evaluated efficiently given the value of the other coordinates. It is possible
to show that all the intersection points are in the parametrizable variety, that is,
they satisfy the third of Conditions~(\ref{gauss_constrs}).
\section{Conclusion and perspectives}
\label{conclusion}
In this paper, we have reduced prime factorization to the search of rational
points of a parametrizable variety $\cal V$ having an arbitrarily large number
$N_P$ of rational points in the intersection with a hypersurface $\cal H$.
To reach a subexponential factoring complexity, the number of parameters $M$
has to grow sublinearly in the space dimension $n$. In particular,
if $N_P$ grows exponentially in $n$ and $M$ scales as a sublinear power
of $n$, then the factoring complexity is polynomial (subexponential) if
the computation of a rational point in $\cal V$, given the parameters, requires
a number of arithmetic operations growing polynomially (subexponentially)
in the space dimension. Here, we have considered a particular kind of
rational parametrization. A set of $M$ coordinates, say $x_{n-M+1},\dots,x_n$,
of the points in $\cal V$ are identified with the $M$ parameters,
so that the first $n-M$ coordinates are taken equal to rational functions of
the last $M$ coordinates. In particular, the parametrization is expressed in a
triangular form. The $k$-th variable is taken equal to a rational function
${\cal R}_k={\cal N}_k/{\cal D}_k$ of the variables $x_{k+1},\dots,x_{n}$,
with $k\in\{1,\dots,n-M\}$. That is,
\be\label{triang_par}
\begin{array}{l}
x_k={\cal R}_k(x_{k+1},\dots,x_n), \;\;\; k\in\{1,\dots,n-M\},
\end{array}
\ee
which parametrize a variety in the zero locus of the $n-M$ polynomials,
\be\label{triang_poly_form}
P_k={\cal D}_k x_k-{\cal N}_k, \;\;\;\; k\in\{1,\dots,n-M\}.
\ee
To reach polynomial complexity,
there are two requirements on these polynomials. First, they have to contain
a number of monomials scaling polynomially in $n$, so that the computation of
${\cal R}_k$ is efficient. For example, we could require that the degree is
upper-bounded by some constant. Second, their zero locus has to share an
exponentially large number of rational points with some hypersurface $\cal H$
(a superpolynomial scaling $N_P\sim e^{b\,n^\beta}$ with $0<\beta<1$ is actually
sufficient, provided
that the growth of $M$ is sufficiently slow).
The hypersurface is the zero locus of some polynomial $P_0$. Also the
computation of $P_0$ at a point has to be efficient.
We have proposed a procedure for building pairs $\{{\cal V},{\cal H}\}$
satisfying the two requirements. First, we define the set of $N_P$ rational
points. This set can depend on some coefficients. Since $N_P$ has to grow
exponentially
in the dimension, we need to define them implicitly as common zeros of
a set of polynomials, say $G_1,G_2,\dots$.
The simplest way is to take $G_k$ as
products of linear functions, like the polynomials~(\ref{quadr_polys}).
These polynomials generate an ideal $I$. The relevant
information on the generators
is encoded in a satisfiability formula in conjunctive normal form without
negations and a linear matroid. We have called these two objects a model.
Second, we search for $n-M$ polynomials
in $I$ with the triangular form~(\ref{triang_poly_form}). These polynomials
always exist. Thus, the task is to find a solution such that the polynomials
contain as few monomials as possible.
This procedure is illustrated with the simplest example. The generators
are taken equal to reducible quadratic polynomials of the
form~(\ref{quadr_polys}), whose associated algebraic set
contains $2^n$ rational points. We search for polynomials $P_k$
of the form $\sum_i c_i G_i$ with $c_i$ constant. First, we prove
that there is no solution for $M=1$ and space dimension greater than $4$.
Then, we find a solution for $M=(n-1)/3$. If there are solutions with
$M$ scaling sublinearly in $n$, then a factoring algorithm with polynomial
complexity automatically exists, since the computational complexity of
the rational functions ${\cal R}_k$ is polynomial by construction.
The existence of such solutions is left as an open problem.
This work can proceed in different directions. First, it is necessary
to investigate whether the studied model admits solutions with
a much smaller set of parameters. The search has been performed in
a subset of the ideal. Thus, if these solutions do not exist,
we can possibly expand this subset (if it is sufficiently large, there
is for sure a solution, but the polynomial complexity of ${\cal R}_k$ is
not guaranteed anymore). We could also
relax other hypotheses, such as the distinguishability
of each of the $2^n$ rational points and their membership in
the parametrizable variety. More general ideals are another option.
In this context, we have shown that classes of models can be reduced
to smaller classes by preserving the computational complexity
of the associated factoring algorithms.
This reduction makes the search space smaller. It is interesting to
determine what is the minimal class of models obtained by this
reduction. This is another problem left open.
Apart from the search of better inputs of the procedure, there is a
generalization of the procedure itself. The variety $\cal V$ has
the parametrization~(\ref{triang_par}).
However, there are more general parametrizable
varieties which can be taken into consideration.
It is also interesting to investigate if there is some
deeper relation with retro-causality, time loops and, possibly, a
connection with Shor's algorithm. Indeed, in the
attempt of lowering the geometric genus of one of the non-parametrizable
curves derived
in the previous section, we found a set of solutions for the coefficients
over the cyclotomic number field, so that the resulting polynomials have
terms taking the form of a Fourier transform. The quantum Fourier transform
is a key tool in Shor's algorithm. This solution ends up breaking
the curve into the union of an exponentially large number of parametrizable
curves, so it is not useful for our purpose. Nonetheless, the Fourier-like
forms in the polynomials remain suggestive.
Finally, the overall framework has some interesting relation with the
satisfiability problem.
Using a particular matroid, we have seen that there is a one-to-one correspondence
between the points of an algebraic set and the solutions of a satisfiability
formula (including also negations). Proving that a formula is satisfiable is
equivalent to proving that a certain algebraic set is not empty. This mapping
of SAT problems to an algebraic-geometry problem turns out to be a generalization
of previous works using the finite field $\mathbb{Z}_2$, see for example
Ref.~\cite{hung}. It can be interesting to investigate whether part of the machinery
introduced here can be used for solving efficiently some classes of SAT formulae.
\section{Acknowledgments}
This work was supported by the Swiss National Science Foundation (SNF) and the Hasler Foundation
under project no.~16057.
\section{Introduction}\label{section1}
\subsection{Background}\label{section1A}
Large systems consisting of many coupled oscillatory units occur
in a wide variety of situations \cite{Yamaguchi}. Thus the study of
the behaviors that such systems exhibit
has been an active and continuing area of research. An important
early contribution in this field was the introduction in 1975 by
Kuramoto \cite{Kuramoto75, Kuramoto84} of a simple model which illustrates
striking features of such systems. Kuramoto employed two
key simplifications in arriving at his model: (i) the coupling
between units was chosen to be homogeneous and all-to-all (i.e., `global'), so
that each oscillator would have an equal effect on all other oscillators;
and (ii) the oscillator states were solely described by a phase
angle $\theta (t)$, so that their uncoupled dynamics obeyed the
simple equation $d\theta _i/dt=\omega _i$, where $\omega _i $ is
the intrinsic natural frequency of oscillator $i$, $N\gg 1$ is the
number of oscillators, and $i=1,2,\ldots ,N$. The
natural frequencies $\omega _i$ are, in general, different for
each oscillator and are assumed to be drawn from some prescribed
distribution function $g(\omega )$.
Much of the research on the Kuramoto model has focused on the case where $g(\omega )$ is unimodal (for reviews of this literature, see \cite{Strogatz00, Ott02, Acebron05}). Specifically, $g$ is usually assumed to be
symmetric about a maximum at frequency $\omega =\omega _0$ and to
decrease monotonically and continuously to zero as
$|\omega -\omega _0|$ increases. In that case, it
was found that as the coupling strength $K$
between the oscillators increases from zero in the large-$N$ limit, there
is a continuous transition at a critical coupling strength $K_c = 2/(\pi g(\omega_0))$.
For $K$ below $K_c$, the average macroscopic, time-asymptotic
behavior of the system is such that the oscillators in the system
behave incoherently with respect to each other, and an order
parameter (defined in Sec.~\ref{section2}) is correspondingly zero.
As $K$ increases past $K_c$,
the oscillators begin to influence each other in such a way that
there is collective global organization in the phases of the
oscillators, and the time-asymptotic order parameter assumes a
non-zero constant value that increases continuously for $K>K_c$ \cite{Kuramoto84, Strogatz00, Ott02, Acebron05, Strogatz92}.
It is
natural to ask how these results
change if other forms of $g(\omega )$ are considered. In this
paper we will address this question for what is perhaps the
simplest choice of a non-unimodal frequency distribution: we consider a distribution $g(\omega )$ that has two
peaks \cite{Barreto,Montbrio1} and is the sum of two identical unimodal
distributions $\hat g$, such that $g(\omega
)=\frac{1}{2}[\hat g(\bar \omega -\omega _0)+\hat g(\bar \omega
+\omega _0)]$. We find that this modification to the original
problem introduces qualitatively new behaviors. As might be
expected, this problem has been previously
addressed \cite{Kuramoto84, Crawford}. However, due to its difficulty, the
problem was not fully solved, and, as we shall show, notable
features of the behavior were missed.
\subsection{Reduction method}\label{section1B}
The development that makes
our analysis possible is the recent paper of Ott and
Antonsen \cite{Ott-Ant}. Using the method proposed in
Ref.~\cite{Ott-Ant} we reduce the original problem formulation
from an integro-partial-differential equation \cite{Strogatz00, Ott02, Strogatz92}
for the oscillator distribution function (a function of $\omega
,\theta $ and $t$) to a system of just a few ordinary differential
equations (ODEs). Furthermore, we analyze the reduced ODE system
to obtain its attractors and the bifurcations they experience with
variation of system parameters.
The reduced ODE system, however,
represents a special restricted class of all the possible
solutions of the original full system \cite{Ott-Ant}. Thus a
concern is that the reduced system might miss some of the actual
system behavior. In order to check this, we have done numerical
solutions of the full system. The result is that, in all cases
tested, the time-asymptotic attracting behavior of the full system
and the observed attractor bifurcations are all contained in, and are
quantitatively described by, our ODE formulation.
Indeed a similar result applies for the application of the method of Ref.~\cite{Ott-Ant} to the original Kuramoto model with unimodally distributed frequencies \cite{Kuramoto75, Kuramoto84} and to the problem of the forced Kuramoto model with periodic drive \cite{Antonsen}.
On the other hand, the reduction method has not been mathematically proven to capture all the attractors, for any of the systems to which it has been applied~\cite{Ott-Ant, Antonsen}. Throughout this paper we operate under the assumption (based on our numerical evidence) that the reduction method is reliable for the bimodal Kuramoto model. But we caution the reader that in general the situation is likely to be subtle and system-dependent; see Sec.~\ref{section6D1} for further discussion of the scope and limits of the reduction method.
\subsection{Outline of the paper}\label{section1C}
The organization of this paper is as follows. In Sec.~\ref{section2} we
formulate the problem and reduce it to the above-mentioned ODE
description for the case where $g(\omega)$ is a sum of Cauchy-Lorentz distributions.
Sec.~\ref{section3} provides an analysis of the ODE system. The main
results of Sec.~\ref{section3} are a delineation of the different types of
attractors that can exist, the regions of parameter space that
they occupy (including the possibility of bistability and
hysteresis), and the types of bifurcations that the attractors
undergo.
In Sec.~\ref{section4}, we establish that the attractors of the ODEs
obtained in Section \ref{section3} under certain symmetry assumptions
are attractors of the full ODE system. In Section \ref{section5},
we confirm that these attractors and bifurcations are also present
in the original system. In addition, we investigate the case where $g(\omega )$
is a sum of Gaussians, rather than Cauchy-Lorentz distributions. We find that the attractors
and bifurcations in the Lorentzian case and in the Gaussian case are of the
same types and that parameter space maps of the different behaviors are qualitatively
similar for the two distributions.
Finally, in Sec.~\ref{section6} we compare our results to the earlier work of Kuramoto \cite{Kuramoto84} and Crawford \cite{Crawford}. Then we discuss the scope and limits of the reduction method used here, and offer suggestions for future research.
\section{Governing Equations}\label{section2}
\subsection{Problem definition}\label{section2A}
We study the Kuramoto problem of $N$ oscillators with natural frequencies $\omega_i$,
\begin{eqnarray}\label{eq:goveqns}
\frac{d \theta_i (t)}{d t} &=& \omega_i + \frac{K}{N}\sum_{j=1}^N \sin{\left(\theta_j (t)-\theta_i (t)\right)},\
\end{eqnarray}
where $\theta_i$ are the phases of each individual oscillator and $K$ is the coupling strength. We study this system in the limit $N\rightarrow \infty$ for the case in which the distribution of natural frequencies is given by the sum of two Lorentzian distributions:\begin{equation}
g(\omega) = \frac{\Delta}{2\pi} \left(\frac{1}{(\omega-\omega_0)^2+\Delta^2}+\frac{1}{(\omega+\omega_0)^2+\Delta^2}\right).
\label{bimodaldist}
\end{equation}
Here $\Delta$ is the width parameter (half-width at half-maximum) of each Lorentzian and $\pm \omega_0$ are their center frequencies, as displayed in Fig.~\ref{fig:bimodaldist}. A more physically relevant interpretation of $\omega_0$ is as the \emph{detuning} in the system (proportional to the separation between the two center frequencies).
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.5\textwidth]{bimodal_lorentzian2.eps}\
\end{center}
\caption[Bimodal distribution of natural frequencies.]{A bimodal distribution of natural frequencies, $g(\omega)$, consisting of the sum of two Lorentzians.}
\label{fig:bimodaldist}
\end{figure}
Note that we have written the distribution $g(\omega)$ so that it is symmetric about zero; this can be achieved without loss of generality by going into a suitable rotating frame.
Another point to observe is that $g(\omega)$ is bimodal if and only if the peaks are sufficiently far apart compared to their widths. Specifically, one needs $\omega_0 > \Delta/\sqrt{3}$. Otherwise the distribution is unimodal and the classical results of \cite{Kuramoto75, Kuramoto84,Strogatz00, Ott02} would still apply.
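The bimodality threshold $\omega_0 = \Delta/\sqrt{3}$ follows from the sign of $g''(0)$. A quick numerical check (a sketch; the helper names and the finite-difference test are ours, for illustration only):

```python
import math

def g(w, w0, delta):
    """The bimodal distribution above: a sum of two Lorentzians
    centered at +/- w0, each with half-width delta."""
    return (delta / (2 * math.pi)) * (
        1.0 / ((w - w0) ** 2 + delta ** 2)
        + 1.0 / ((w + w0) ** 2 + delta ** 2))

def is_bimodal(w0, delta, h=1e-4):
    """g is bimodal iff w = 0 is a local minimum, i.e. g''(0) > 0,
    estimated here by a central second difference."""
    second = (g(h, w0, delta) - 2 * g(0.0, w0, delta) + g(-h, w0, delta)) / h ** 2
    return second > 0

delta = 1.0
crit = delta / math.sqrt(3)
print(is_bimodal(1.1 * crit, delta))  # True: peaks far enough apart
print(is_bimodal(0.9 * crit, delta))  # False: single central hump
```

The sign change of $g''(0)$ at $\omega_0 = \Delta/\sqrt{3}$ matches the analytic condition $3\omega_0^2 > \Delta^2$.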
\subsection{Derivation}\label{section2B}
In the limit where $N\rightarrow \infty$, Eq.~(\ref{eq:goveqns}) can be written in a continuous
formulation \cite{Kuramoto84, Strogatz00, Ott02} in terms of a probability density $f(\theta,\omega,t)$. Here $f$ is defined such that at time $t$, the fraction of oscillators with phases between $\theta$ and $\theta + d\theta$ and natural frequencies between $\omega$ and $\omega + d \omega$ is given by $f(\theta, \omega, t) \, d\theta \, d\omega$. Thus
\begin{equation}\label{normalization}
\int_{-\infty}^\infty \int_0^{2 \pi} f(\theta, \omega,t) \, d \theta \,d \omega = 1
\end{equation}
and
\begin{equation}\label{freqdist}
\int_0^{2 \pi} f(\theta, \omega, t) \, d \theta = g(\omega),
\end{equation}
by definition of $g(\omega)$.
The evolution of $f$ is given by the continuity equation
describing the conservation of oscillators:
\begin{eqnarray}\label{eq:continuityeqn}
\frac{\partial f}{\partial t} + \frac{\partial}{\partial \theta}\left(f v\right) &=& 0,\
\end{eqnarray}
where $v(\theta,\omega,t)$ is the angular velocity of the oscillators. From Eq.~(\ref{eq:goveqns}), we have
\begin{eqnarray}\label{eq:velocity}
v(\theta,\omega,t) &=& \omega + K \int_0^{2\pi} f(\theta',\omega,t) \sin(\theta'-\theta) d\theta'.\
\end{eqnarray}
Following Kuramoto, we define a complex order parameter
\begin{eqnarray}\label{eq:complexOP}
z(t)&=& \int_{-\infty}^{\infty} \int_0^{2\pi} e^{i\theta} f(\theta, \omega,t) \, d \theta \,d \omega \
\end{eqnarray}
whose magnitude $|z(t)|\leq 1$ characterizes the degree to which the oscillators are bunched in phase,
and $\arg{(z)}$ describes the average phase angle of the oscillators. Expressing the velocity (\ref{eq:velocity}) in terms of $z$ we obtain
\begin{eqnarray}\label{eq:velocity2}
v(\theta,\omega,t) &=& \omega + K \,\textrm{Im}[z e^{-i\theta}] \\ &=& \omega + \frac{K}{2i} (z e^{-i\theta} - z^*e^{i\theta})\
\end{eqnarray}
where the * denotes complex conjugate.
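The mean-field form of Eq.~(\ref{eq:goveqns}), $\dot\theta_i = \omega_i + K\,\textrm{Im}[z e^{-i\theta_i}]$, lends itself to a direct finite-$N$ simulation (a sketch with illustrative parameters, not tied to any figure of this paper):

```python
import math, cmath, random

# Minimal finite-N simulation of the Kuramoto model with bimodal
# Lorentzian frequencies. All parameter values are illustrative only.
random.seed(0)
N, K, dt, steps = 500, 3.0, 0.01, 1500
delta, w0 = 0.1, 0.1

# Sample each natural frequency from one of the two Lorentzian peaks,
# using the inverse CDF of a Lorentzian.
omega = [random.choice((-w0, w0))
         + delta * math.tan(math.pi * (random.random() - 0.5))
         for _ in range(N)]
theta = [2 * math.pi * random.random() for _ in range(N)]

z = 0j
for _ in range(steps):
    # Order parameter z, then an Euler step of the mean-field equation
    # dtheta_i/dt = omega_i + K * Im[z * exp(-i theta_i)].
    z = sum(cmath.exp(1j * t) for t in theta) / N
    theta = [t + dt * (w + K * (z * cmath.exp(-1j * t)).imag)
             for t, w in zip(theta, omega)]
print(abs(z))  # |z| well above 0 signals macroscopic synchrony
```

With coupling well above threshold and small frequency spread, $|z|$ settles near a large constant value, as in the partially synchronized state discussed below.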
Following Ott and Antonsen \cite{Ott-Ant}, we now restrict attention to a special class of density functions. By substituting a Fourier series of the form
\begin{equation}\label{fourier}
f(\theta, \omega, t) = \frac{g(\omega)}{ 2 \pi} \left[ 1 + \sum_{n=1}^\infty \left( f_n(\omega, t) e^{i n \theta} + \rm{c.c.} \right) \right],
\end{equation}
where `c.c.' stands for the complex conjugate of the preceding term,
and imposing the ansatz that
\begin{equation}\label{eq:poisson}
f_n(\omega,t) = \alpha(\omega,t)^n,
\end{equation}
we obtain
\begin{eqnarray}\label{eq:amplitudeeqn}
\frac{\partial \alpha}{\partial t} + \frac{K}{2}(z\alpha^2-z^*)+i\omega\alpha &=& 0,
\end{eqnarray}
where
\begin{eqnarray}\label{eq:complexOP2}
z^* &=& \int_{-\infty}^{\infty} \alpha(\omega,t)g(\omega)d\omega.\
\end{eqnarray}
We now consider solutions of (\ref{eq:amplitudeeqn}) and (\ref{eq:complexOP2}) for initial conditions $\alpha(\omega,0)$ that satisfy the following additional conditions: (i) $|\alpha(\omega,t)|\leq 1$; (ii) $\alpha(\omega,0)$ is analytically continuable into the lower half plane $\rm{Im}(\omega)<0$; and (iii) $|\alpha(\omega,t)|\rightarrow 0$ as $\rm{Im}(\omega)\rightarrow -\infty$. If these
conditions are satisfied for $\alpha(\omega,0)$, then, as shown in \cite{Ott-Ant}, they continue to be satisfied by $\alpha(\omega,t)$ as it evolves under Eqs. (\ref{eq:amplitudeeqn}) and
(\ref{eq:complexOP2}). Expanding $g(\omega)$ in partial fractions as
\begin{eqnarray}\nonumber
g(\omega) &=& \frac{1}{4\pi i} \bigg[\frac{1}{(\omega-\omega_0)-i\Delta} - \frac{1}{(\omega-\omega_0)+i\Delta}+ \frac{1}{(\omega+\omega_0)-i\Delta} - \frac{1}{(\omega+\omega_0)+i\Delta} \bigg],\
\end{eqnarray}
we find it has four simple poles at $\omega = \pm \omega_0 \pm i\Delta$.
Evaluating (\ref{eq:complexOP2}) by deforming the integration path from the real $\omega$-axis to $\rm{Im}(\omega)\rightarrow -\infty$, the order parameter becomes
\begin{eqnarray}
z(t) &=& \frac{1}{2}\left(z_1(t)+z_2(t)\right),\
\end{eqnarray}
where
\begin{eqnarray}
z_{1,2}(t) &=& \alpha^*(\pm\omega_0-i\Delta,t). \
\end{eqnarray}
Substitution of this expression into (\ref{eq:amplitudeeqn}) yields two coupled complex ODEs, describing the evolution of two `sub'-order parameters,
\begin{eqnarray}\label{eq:fullgoveqn1}\nonumber
\dot{z}_1 &=& - (\Delta + i \omega_0)z_1 \\
&&+ \frac{K}{4} \left[z_1 + z_2 -( z_1^* + z_2^*)z_1^2 \right]\\\nonumber &\\\label{eq:fullgoveqn2}\nonumber
\dot{z}_2 &=& - (\Delta - i \omega_0)z_2 \\
&& + \frac{K}{4} \left[z_1 + z_2 -(z_1^* + z_2^*)z_2^2 \right],\
\end{eqnarray}
where we use dots to represent the time derivative from now on. (This system agrees with
the results of \cite{Ott-Ant} for the case of two equal groups of oscillators with uniform coupling strength
and average frequencies $\omega_0$ and $-\omega_0$.)
\subsection{Reductions of the system}\label{section2C}
The system derived so far is four-dimensional. If we introduce polar coordinates $z_j=\rho_j e^{i\phi_j}$ and define the phase difference $\psi = \phi_2 - \phi_1$,
the dimensionality can be reduced to three:
\begin{eqnarray}
\dot{\rho_1} &=& -\Delta \rho_1 + \frac{K}{4} \, (1-\rho_1^2)(\rho_1+\rho_2\cos{\psi}) \label{eq:goveqns3Da} \\%\nonumber&&\\
\dot{\rho_2} &=& -\Delta \rho_2 + \frac{K}{4} \, (1-\rho_2^2)(\rho_1\cos{\psi}+\rho_2) \label{eq:goveqns3Db} \\%\nonumber&&\\
\dot{\psi} &=& 2\omega_0 - \frac{K}{4} \, \frac{\rho_1^2 + \rho_2^2 + 2 \rho_1^2\rho_2^2}{\rho_1\rho_2}\sin{\psi}. \label{eq:goveqns3Dc} \
\end{eqnarray}
To facilitate our analysis we now look for solutions of Eqs.~(\ref{eq:goveqns3Da}-\ref{eq:goveqns3Dc}) that satisfy the symmetry condition
\begin{eqnarray}\label{eq:symmetry}
\rho_1(t) =\rho_2(t) &\equiv& \rho(t).\
\end{eqnarray}
In Sec.~\ref{section4} we will verify that these symmetric solutions are stable to perturbations away from the symmetry manifold and that the attractors of Eqs.~(\ref{eq:fullgoveqn1}, \ref{eq:fullgoveqn2}) lie within this manifold.
Our analysis of the problem thus reduces to a study in the phase plane:
\begin{eqnarray}
\dot{\rho} &=& \frac{K}{4}\rho\left(1-\frac{4\Delta}{K} - \rho^2 + (1-\rho^2)\cos{\psi}\right) \label{eq:goveqnprelima}\\
\dot{\psi} &=& 2\omega_0 - \frac{K}{2}\,(1+\rho^2)\sin{\psi}.\label{eq:goveqnprelimb}\
\end{eqnarray}
\section{Bifurcation Analysis}\label{section3}
Figure \ref{fig:mainbifdiag1} summarizes the results of our analysis of Eqs.~(\ref{eq:goveqnprelima}, \ref{eq:goveqnprelimb}). We find that three types of attractors occur: the well-known incoherent and partially synchronized states \cite{Kuramoto75, Kuramoto84,Strogatz00, Ott02, Acebron05} corresponding to fixed points of (\ref{eq:goveqnprelima}, \ref{eq:goveqnprelimb}), as well as a standing wave state \cite{Crawford} corresponding to limit-cycle solutions. In addition, we will show that the transitions between these states are mediated by transcritical, saddle-node, Hopf, and homoclinic bifurcations, as well as by three points of higher codimension.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{bimodal_mainbifdiag1.eps}\
\end{center}
\caption[Bifurcation diagram]{The bifurcation diagram for the Kuramoto system with a bimodal frequency distribution consisting of
two equally weighted Lorentzians. The various bifurcation curves are denoted as follows: TC=transcritical,
SN=saddle-node, HB=(degenerate) Hopf, HC=homoclinic, and SNIPER=Saddle-node-infinite-period. The insets,
labeled (a)-(g), show $(q,\psi)$ phase portraits in polar coordinates corresponding
to the regions where the insets are located (see arrows for the boxed insets).
Solid red dots and loops denote stable fixed points and limit cycles, respectively; open green dots are
saddle fixed points, and open gray circles are repelling fixed points. All parameters refer to their
original (unscaled) versions.
}
\label{fig:mainbifdiag1}
\end{figure}
\subsection{Scaling}\label{section3O}
To ease the notation we begin by scaling Eqs.~(\ref{eq:goveqnprelima}, \ref{eq:goveqnprelimb}).
If we define $q=\rho^2$ and non-dimensionalize the parameters and time such that
\begin{eqnarray}\label{eq:scaling}\nonumber
\tilde t &=& \frac{K}{2}t\\
\tilde\Delta &=& \frac{4\Delta}{K}\\\nonumber
\tilde \omega_0&=&\frac{4\omega_0}{K}\
\end{eqnarray}
we obtain the dimensionless system
\begin{eqnarray}
\dot{q} &=& q\left(1 -\Delta -q + (1-q)\cos\psi\right) \label{eq:goveqns2Da}\\
\dot{\psi} &=& \omega_0 - (1+q)\sin{\psi}.\label{eq:goveqns2Db}\
\end{eqnarray}
Here the overdot now means differentiation with respect to dimensionless time, and we have dropped all the tildes for convenience. For the rest of this section, all parameters will be assumed to be dimensionless (so there are implicitly tildes over them) unless stated otherwise.
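The dimensionless system (\ref{eq:goveqns2Da}, \ref{eq:goveqns2Db}) is straightforward to integrate numerically. The following sketch (simple RK4; illustrative parameter values chosen in the regime where a partially synchronized fixed point is expected) shows the relaxation onto an attractor with $q>0$:

```python
import math

def rhs(q, psi, delta, w0):
    """Right-hand side of the dimensionless reduced system."""
    dq = q * (1 - delta - q + (1 - q) * math.cos(psi))
    dpsi = w0 - (1 + q) * math.sin(psi)
    return dq, dpsi

def integrate(q, psi, delta, w0, dt=0.01, steps=20000):
    """Classical fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = rhs(q, psi, delta, w0)
        k2 = rhs(q + dt / 2 * k1[0], psi + dt / 2 * k1[1], delta, w0)
        k3 = rhs(q + dt / 2 * k2[0], psi + dt / 2 * k2[1], delta, w0)
        k4 = rhs(q + dt * k3[0], psi + dt * k3[1], delta, w0)
        q += dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        psi += dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return q, psi

# Small (dimensionless) width and detuning: the trajectory settles onto
# a fixed point with q > 0, i.e. the partially synchronized state.
q, psi = integrate(0.1, 0.0, delta=0.2, w0=0.2)
```

At the end of the run the residuals of both equations vanish to numerical precision, confirming a fixed point rather than a limit cycle for these parameter values.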
\subsection{Bifurcations of the incoherent state}\label{section3A}
The \emph{incoherent state} is defined by $\rho_1=\rho_2=0$, or by $q=0$ in the phase plane formulation. The linearization of the incoherent state, however, is
most easily performed in Cartesian coordinates using the formulation in Eqs.~(\ref{eq:fullgoveqn1}) and (\ref{eq:fullgoveqn2}). We find the degenerate eigenvalues
\begin{eqnarray}
\lambda_1 = \lambda_2 &=& 1-\Delta - \sqrt{1-\omega_0^2}\\
\lambda_3 = \lambda_4 &=& 1-\Delta + \sqrt{1-\omega_0^2}.
\label{eq:originEVals}
\end{eqnarray}
This degeneracy is expected because the origin is \emph{always} a fixed point and because of the rotational invariance of that state.
It follows that the incoherent state is stable if and only if the real parts of all the eigenvalues are negative.
The boundary of stable incoherence therefore occurs when the following conditions are met:
\begin{equation*}\label{eq:StabCondOrigin}
\left\{ \begin{array}{ll}
\Delta = 1 +\sqrt{1-\omega_0^2} & \mbox{for $ \omega_0\leq 1$} \\
\Delta = 1 & \mbox{for $\omega_0>1$}.\\
\end{array} \right.\\
\end{equation*}
These equations define the semicircle and the half-line shown in Fig.~\ref{fig:mainbifdiag1}, labeled TC (for transcritical) and HB (for Hopf bifurcation), respectively.
(Independent confirmation of these results can be obtained from the continuous formulation of Eq.~(\ref{eq:goveqns}) directly, as shown in the Appendix.)
More precisely, we find that crossing the semicircle corresponds to a degenerate transcritical bifurcation, while
crossing the half-line corresponds to a degenerate supercritical Hopf bifurcation.
In the latter case, the associated limit-cycle
oscillation indicates that the angle $\psi$ increases without bound; this reflects an increasing \emph{difference} between the phases of the two `sub'-order parameters of Eqs.~(\ref{eq:fullgoveqn1}, \ref{eq:fullgoveqn2}). In terms of the original model, this means that the oscillator population splits into two counter-rotating groups, each consisting of a macroscopic number of oscillators with natural frequencies close to one of the two peaks of $g(\omega)$. Within each group the oscillators are frequency-locked. Outside the groups the oscillators remain desynchronized, drifting relative to one another and to the locked groups. This is the state Crawford \cite{Crawford} called a \emph{standing wave}. Intuitively speaking, it occurs when the two humps in the frequency distribution are sufficiently far apart relative to their widths. In Kuramoto's vivid terminology \cite{Kuramoto84}, the population has spontaneously condensed into ``a coupled pair of giant oscillators.''
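The location of these stability boundaries is easy to verify numerically. The following sketch (our addition for illustration, not part of the original analysis) evaluates the eigenvalues of the incoherent state and confirms that the leading real part vanishes exactly on the TC semicircle and on the HB half-line:

```python
import cmath
import math

def incoherence_eigs(Delta, omega0):
    # Eigenvalues of the linearization about the incoherent state q = 0;
    # each value occurs twice (the degeneracy noted in the text).
    s = cmath.sqrt(1 - omega0**2)
    return 1 - Delta - s, 1 - Delta + s

# On the semicircle Delta = 1 + sqrt(1 - omega0^2), omega0 <= 1 (TC curve),
# the leading eigenvalue crosses zero:
w = 0.6
_, lam_tc = incoherence_eigs(1 + math.sqrt(1 - w**2), w)

# On the half-line Delta = 1, omega0 > 1 (HB curve), the eigenvalues are
# complex with vanishing real part:
_, lam_hb = incoherence_eigs(1.0, 1.5)
```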
\subsection{Fixed point solutions and saddle-node bifurcations}\label{section3B}
Along with the trivial incoherent state $q=0$, the other fixed points of Eqs.~(\ref{eq:goveqns2Da}, \ref{eq:goveqns2Db}) satisfy $1-\Delta-q=(q-1)\cos\psi$, and $\omega_0 = (q+1)\sin\psi$. Using trigonometric identities, we obtain
\begin{eqnarray}
1 &=& \left(\frac{\omega_0}{q+1}\right)^2 + \left(\frac{1-\Delta-q}{q-1}\right)^2,\
\end{eqnarray}
or equivalently,
\begin{eqnarray}\label{eq:fpsurfomega}
\omega_0 &=&\pm \frac{1+q}{1-q}\sqrt{\Delta(2-2q-\Delta)}.\
\end{eqnarray}
Thus, the fixed point surface $q=q(\omega_0,\Delta)$ is defined implicitly. It can be single- or double-valued as a function of $\omega_0$ for fixed $\Delta$. To see this, consider how $\omega_0$ behaves as $q\rightarrow 0^+$. We find that
\begin{eqnarray}
\omega_0 &\sim& \sqrt{\Delta(2-\Delta)} \left[ 1+\frac{3-2\Delta}{2-\Delta}q+\mathcal{O}(q^2) \right],
\end{eqnarray}
from which we observe that the behavior changes qualitatively at $\Delta=3/2$, as shown in Fig.~\ref{fig:foldbif}.
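The small-$q$ expansion, and the sign change of its linear coefficient at $\Delta=3/2$, can be checked against the exact positive branch of (\ref{eq:fpsurfomega}). A minimal sketch (ours; the function names are our own):

```python
import math

def omega0_exact(q, Delta):
    # Positive branch of the fixed point relation, Eq. (fpsurfomega)
    return (1 + q) / (1 - q) * math.sqrt(Delta * (2 - 2*q - Delta))

def omega0_series(q, Delta):
    # Small-q expansion up to first order in q
    return math.sqrt(Delta * (2 - Delta)) * (1 + (3 - 2*Delta) / (2 - Delta) * q)

def slope(Delta, q=1e-6):
    # Finite-difference slope of omega0(q) near q = 0; its sign should
    # change at Delta = 3/2
    return (omega0_exact(q, Delta) - omega0_exact(0.0, Delta)) / q
```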
\begin{figure}[ht]
\begin{center}
\includegraphics[width=.7\textwidth]{bimodal_foldbif.eps}\
\end{center}
\caption[Fold bifurcation.]{Saddle-node bifurcation: for $\Delta < 3/2$ the branch $q(\omega_0)$ is double-valued, and the fold disappears at $\Delta = 3/2$.}
\label{fig:foldbif}
\end{figure}
\begin{figure}[ht]
\begin{center}
\includegraphics[width=0.35\textheight]{bimodal_ParSol2.eps}\
\end{center}
\caption[Fixed point surface.]{Fixed point surface. Bifurcation curves at the origin and the saddle-node curve are emphasized in black.}
\label{fig:ParsolGequal}
\end{figure}
The surface defined by $\rho=\rho(\omega_0,\Delta)$ can be plotted parametrically using $\rho$ and $\Delta$, as is seen in Fig.~\ref{fig:ParsolGequal}. The fold in the surface corresponds to a saddle-node bifurcation. Plots of the phase portrait of $(q,\psi)$ reveal that the upper branch of the double-valued surface in Fig.~\ref{fig:foldbif} corresponds to sinks, and the lower branch to saddle points; see Fig.~\ref{fig:mainbifdiag1} (c), (d), and (g).
In physical terms, the sink represents a stable \emph{partially synchronized state}, which is familiar from the classic Kuramoto model with a unimodal distribution \cite{Kuramoto84, Strogatz00, Ott02, Acebron05}. The oscillators whose natural frequencies are closest to the center of the frequency distribution $g(\omega)$ become rigidly locked, and maintain constant phase relationships among themselves---in this sense, they act collectively like a ``single giant oscillator,'' as Kuramoto \cite{Kuramoto84} put it. Meanwhile the oscillators in the tails of the distribution drift relative to the locked group, which is why one describes the synchronization as being only partial.
The saddle points also represent partially synchronized states, though of course they are unstable. Nevertheless they play an important role in the dynamics because they can annihilate the stable partially synchronized states; this happens in a saddle-node bifurcation along the fold mentioned above. To calculate its location analytically, we use (\ref{eq:fpsurfomega}) and impose the condition for a turning point, $\partial\omega_0/\partial q = 0$, which yields
\begin{eqnarray}
q^2 - 4q + 3 -2\Delta &=&0.\
\end{eqnarray}
Eliminating $q$ from this equation using (\ref{eq:fpsurfomega}), we obtain the equation for the saddle-node bifurcation curve
\begin{eqnarray}\label{eq:SNcurve}
\omega_0 &=& \sqrt{2-10\Delta-\Delta^2+2(1+2\Delta)^{3/2}}.\
\end{eqnarray}
This curve is labeled SN in Fig.~\ref{fig:mainbifdiag1}. Its intersection with the semicircle TC occurs at $(\omega_0,\Delta)=(\frac{\sqrt{3}}{2},\frac{3}{2})$, and is labeled B in the figure. Note also that point C in the figure
is \emph{not} a Takens-Bogdanov point, as the saddle-node and Hopf bifurcations occur at different locations
in the state space; see Figs.~\ref{fig:mainbifdiag1} (a) and (g).
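Equation (\ref{eq:SNcurve}) and the location of point B can be verified numerically. In the sketch below (our addition, not from the paper), we substitute the turning-point root $q=2-\sqrt{1+2\Delta}$ into the fixed-point relation (\ref{eq:fpsurfomega}) and compare with the closed form:

```python
import math

def omega0_fp(q, Delta):
    # Positive branch of the fixed point surface, Eq. (fpsurfomega)
    return (1 + q) / (1 - q) * math.sqrt(Delta * (2 - 2*q - Delta))

def omega0_sn(Delta):
    # Closed-form saddle-node curve, Eq. (SNcurve)
    return math.sqrt(2 - 10*Delta - Delta**2 + 2 * (1 + 2*Delta)**1.5)

Delta = 0.5
q_turn = 2 - math.sqrt(1 + 2*Delta)  # root of q^2 - 4q + 3 - 2*Delta with q < 1
```

At $\Delta=3/2$ the closed form reproduces the stated intersection with the TC semicircle at $\omega_0=\sqrt{3}/2$ (point B).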
\subsection{Bistability, homoclinic bifurcations, and SNIPER}\label{section3D}
An examination of the dynamics corresponding to the approximately triangular parameter
space region ABC in Fig.~\ref{fig:mainbifdiag1} shows bistability. More specifically,
we find that the stable incoherent fixed point coexists with the stable partially synchronized state
produced by the saddle-node bifurcation described above, as shown in the
state-space plot in Fig.~\ref{fig:mainbifdiag1}(c).
Further study of these state-space plots
led us to the homoclinic
bifurcation curve marked HC, which was obtained numerically. The coexistence of states
continues into region ACD, where we found that the stable partially synchronized state now
coexists with the stable limit cycle created at the Hopf curve. (See Fig.~\ref{fig:mainbifdiag1}(g).) This
limit cycle is then destroyed by crossing the homoclinic curve, which is
bounded by point A on one side and by point D on the other.
At point D, the homoclinic curve merges with the saddle-node curve. This codimension-two
bifurcation, occurring at approximately (1.3589, 0.7483), is known as a saddle-node-loop \cite{Guckenheimer}. Below D, however, the saddle-node curve exhibits an interesting feature: the saddle-node bifurcation
occurs on an invariant closed curve. This bifurcation scenario is known as a saddle-node infinite-period bifurcation, or in short, \emph{SNIPER}. If we traverse the SNIPER curve from left to right, the sink and saddle (the stable and unstable partially synchronized states) coalesce, creating a loop with infinite period. Beyond that, a stable limit cycle then appears---see Figs.~\ref{fig:mainbifdiag1} (d), (e), and (f).
In conclusion, we have identified six distinct regions in parameter space
and have identified the bifurcations that occur at the boundaries.
\section{Transverse Stability}\label{section4}
Our analysis so far has been based on several simplifying assumptions. First, we restricted attention to a special family of oscillator distribution functions $f(\theta,\omega,t)$ and a bimodal Lorentzian form for $g(\omega)$, which enabled us to reduce the original infinite-dimensional system to a three-dimensional system of ODEs, Eqs.~(\ref{eq:goveqns3Da}-\ref{eq:goveqns3Dc}). Second, we considered only symmetric solutions of these ODEs, by assuming $\rho_1 = \rho_2$; this further decreased the dimensionality from three to two.
The next two sections test the validity of these assumptions. We begin here by showing that the non-zero fixed point attractor (the stable partially synchronized state) and the limit cycle attractor (the standing wave state) for Eqs.~(\ref{eq:goveqns2Da}, \ref{eq:goveqns2Db}) are transversely stable to small symmetry-breaking perturbations, i.e., perturbations off the invariant manifold defined by $\rho_1 = \rho_2$. This does not rule out the possible existence of attractors off this manifold, but it does mean that the attractors in the two-dimensional symmetric manifold are guaranteed to constitute attractors in the three-dimensional ODE system~(\ref{eq:goveqns3Da}-\ref{eq:goveqns3Dc}).
Let $\kappa = K/4$ and consider the reduced governing equations (\ref{eq:goveqns3Da}-\ref{eq:goveqns3Dc}) without symmetry. Introducing the longitudinal and transversal variables
\begin{eqnarray}\nonumber\label{eq:longtransdefns}
\rho_{\parallel}&=& \frac{1}{2}(\rho_1+\rho_2)\\
\rho_{\bot}&=& \frac{1}{2}(\rho_1-\rho_2),\
\end{eqnarray}
and substituting these into (\ref{eq:goveqns3Da}-\ref{eq:goveqns3Dc}), we derive the equation for the transversal component
\begin{eqnarray}\label{eq:transversalevolution}\nonumber
\dot{\rho}_{\bot}&=& \rho_{\bot}\Big[(\kappa-\Delta) - \kappa(3\rho_{\parallel}^2+\rho_{\bot}^2)- \kappa\cos{\psi}(1+\rho_{\parallel}^2-\rho_{\bot}^2)\Big],\
\end{eqnarray}
which describes the order parameter dynamics off the symmetric manifold.
To simplify the notation, let $q_{\parallel}=\rho_{\parallel}^2$ and $q_{\bot}=\rho_{\bot}^2$ and scale the system using Eqs.~(\ref{eq:scaling}), as before.
Linearization
and evaluation at the asymptotic solution denoted by $(q_0,\psi_0)$, which may be either a fixed point or a limit cycle, yields the variational equation
\begin{eqnarray}\label{eq:transevaleqn}
\delta \dot{q}_{\bot} &=& \lambda_{\bot}\delta q_{\bot}\
\end{eqnarray}
where
\begin{eqnarray}\label{eq:transversaleval}
\lambda_{\bot} &=& 1-\Delta - 3 q_0 - (1+q_0) \cos{\psi_0} .\
\end{eqnarray}
Observe that $\delta q_{\parallel}$ and $\delta\psi$ do not appear in linear order on the right hand side of (\ref{eq:transevaleqn}). This decoupling implies that $\lambda_{\bot}$ is the eigenvalue associated with the transverse perturbation $\delta q_{\bot}$, in the case where $q_0$ is a fixed point. Similarly, if $q_0$ is a limit cycle, the Floquet exponent associated with $\delta q_{\bot}$ is simply $\langle\lambda_{\bot} \rangle$, where the brackets denote a time average over one period.
Hence the fixed point will be transversely stable if $\lambda_{\bot}<0$. The analogous condition for the limit cycle is $\langle\lambda_{\bot} \rangle<0$.
\subsection{Fixed point stability}
To test the transverse stability of sinks for the two-dimensional flow, we solve
Eq.~(\ref{eq:goveqns2Da}) for fixed points and obtain
\begin{eqnarray}
0&=& 1-\Delta - q_0 + (1-q_0)\cos{\psi_0}.\
\end{eqnarray}
Subtracting this from (\ref{eq:transversaleval}), we find
\begin{eqnarray}
\lambda_{\bot} &=& -2 (q_0 + \cos{\psi_0}).\
\end{eqnarray}
Hence $\cos{\psi_0}>0$ is a sufficient condition for transverse stability. But at a non-trivial fixed point,
\begin{eqnarray}
\cos{\psi_0} &=& \frac{1-(\Delta+q_0)}{q_0-1},\
\end{eqnarray}
so the transverse stability condition is equivalent to $q_0+\Delta>1$.
We claim that this inequality holds everywhere on the upper branch of the fixed point surface (\ref{eq:fpsurfomega}).
Obviously the inequality is satisfied at all points where $\Delta>1$. For all other cases, consider the turning point from Fig.~\ref{fig:foldbif}, given by $q_{sn}=2-\sqrt{1+2\Delta}$ (the root with $q_{sn}<1$). Since $q_{sn}$ is independent of $\omega_0$ at fixed $\Delta$, it is a lower bound for all $q(\omega_0)$ on the upper sheet of the fixed point surface, provided that $q(\omega_0)$ is monotonically decreasing on the interval $[0,\omega_{sn}]$. This monotonicity follows from $\partial\omega_0/\partial q = (\Delta/D)\,(q^2-4q+3-2\Delta)$ with $D=(q-1)^2\sqrt{2\Delta-2q\Delta-\Delta^2}$: the prefactor $\Delta/D$ is positive, while $q^2-4q+3-2\Delta < 0$ whenever $q_{sn}<q<1$, so $\partial\omega_0/\partial q<0$ there. Finally, the function $Q(\Delta)\equiv q_{sn}+\Delta$ is increasing and satisfies $Q(0)=1$, so $q_0+\Delta\geq Q(\Delta)\geq 1$ everywhere on the upper branch. Thus transverse stability for the nodes on the fixed point surface follows.
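The key inequality $q_{sn}+\Delta\geq 1$ can also be confirmed by brute force. A minimal sketch (ours, for illustration only):

```python
import math

def q_sn(Delta):
    # Saddle-node (turning point) value of q at fixed Delta
    return 2 - math.sqrt(1 + 2 * Delta)

def Q(Delta):
    # The quantity whose lower bound establishes transverse stability
    return q_sn(Delta) + Delta

# Q(0) = 1 and Q is increasing, so Q >= 1 on the whole range of interest:
vals = [Q(0.002 * k) for k in range(751)]  # Delta in [0, 1.5]
```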
\subsection{Limit cycle stability}
To examine the transverse linear stability of the limit cycle, we calculate the transverse Floquet exponent by averaging the eigenvalue over the period of one oscillation:
\begin{eqnarray}
\langle \lambda_{\bot}\rangle &=& 1-\Delta - 3 \langle q_0 \rangle - \left(\langle\cos{\psi_0}\rangle+\langle q_0 \cos{\psi_0}\rangle \right).
\end{eqnarray}
In order to render this expression definite, we rewrite Eq.~(\ref{eq:goveqns2Da}) in terms of the limit cycle solution $(q_0,\psi_0)$:
\begin{eqnarray}
\frac{d}{dt}\left(\ln{q_0}\right) &=& 1-\Delta-q_0
+ (1-q_0)\cos{\psi_0}.\
\end{eqnarray}
Periodicity on the limit cycle guarantees $\langle \frac{d}{dt}\ln{q_0}\rangle = 0$, and so we have
\begin{eqnarray}
0 &=& 1-\Delta- \langle q_0 \rangle
+ \langle (1-q_0)\cos{\psi_0} \rangle,\
\end{eqnarray}
which we subtract from the averaged eigenvalue to yield
\begin{eqnarray}\label{eq:LCstability}
\langle \lambda_{\bot}\rangle &=& -2 (\langle q_0 \rangle + \langle \cos{\psi_0} \rangle).\
\end{eqnarray}
Although we are not able to analytically demonstrate that $\langle \lambda_{\bot}\rangle$ in (\ref{eq:LCstability}) is negative, we have calculated $\langle q_0\rangle$ and $\langle \cos{\psi_0}\rangle$ numerically for the limit cycle attractors of Eqs.~(\ref{eq:goveqns3Da}-\ref{eq:goveqns3Dc}). This was done
for $2500$ parameter values corresponding to a $50 \times 50$ grid in dimensionless parameter space, obtained by sampling 50 evenly spaced values of $\omega_0 \in [0.01,2.5]$ and of $\Delta \in [0.01,2.1]$.
The simulations were run with $N = 1024$ oscillators. In all the cases that we tested, we found that $\langle \lambda_{\bot}\rangle<0$.
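A sketch of such a test, using the two-dimensional system (\ref{eq:goveqns2Da}, \ref{eq:goveqns2Db}) at one representative parameter point in the standing-wave region, is given below. This is our illustration only; the parameter point, step sizes, and function names are ours, and the paper's grid computation used the full reduced system.

```python
import math

def rhs(q, psi, Delta, omega0):
    # Dimensionless governing equations for (q, psi)
    dq = q * (1 - Delta - q + (1 - q) * math.cos(psi))
    dpsi = omega0 - (1 + q) * math.sin(psi)
    return dq, dpsi

def transverse_exponent(Delta, omega0, dt=0.005, t_trans=200.0, t_avg=400.0):
    # Estimate <lambda_perp> = -2 (<q> + <cos psi>) on the attractor
    q, psi = 0.1, 0.0
    def step(q, psi):
        k1 = rhs(q, psi, Delta, omega0)
        k2 = rhs(q + 0.5*dt*k1[0], psi + 0.5*dt*k1[1], Delta, omega0)
        k3 = rhs(q + 0.5*dt*k2[0], psi + 0.5*dt*k2[1], Delta, omega0)
        k4 = rhs(q + dt*k3[0], psi + dt*k3[1], Delta, omega0)
        return (q + dt*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])/6.0,
                psi + dt*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])/6.0)
    for _ in range(int(t_trans / dt)):  # discard the transient
        q, psi = step(q, psi)
    acc_q = acc_c = 0.0
    n = int(t_avg / dt)
    for _ in range(n):                  # time-average on the attractor
        q, psi = step(q, psi)
        acc_q += q
        acc_c += math.cos(psi)
    return -2.0 * (acc_q + acc_c) / n

# A point in the standing-wave (limit cycle) region:
lam_perp = transverse_exponent(Delta=0.5, omega0=2.0)
```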
\section{Numerical Experiments}\label{section5}
All of the results described above were obtained using the reduced ODE models derived in Sec.~\ref{section2} B and C, and are therefore subject to the
restrictions described therein. It is therefore reasonable to ask if these
results agree with the dynamics of the original system given in Eq.~(\ref{eq:goveqns}). To check this, we performed a series of direct simulations of Eq.~(\ref{eq:goveqns}) using $N = 10,000$ oscillators and fourth-order Runge-Kutta numerical integration.
First, we compared solutions of Eq.~(\ref{eq:goveqns}) with those of our reduced system Eqs.~(\ref{eq:goveqnprelima}, \ref{eq:goveqnprelimb}) in the region where we predicted the coexistence of attractors. For example, we show in Fig.~\ref{fig:hysteresis} a bifurcation diagram computed along the line $4\omega_0/K=1.092$ that traverses the region ABCD in Fig.~\ref{fig:mainbifdiag1}. (Note that here and for the rest of the paper, we revert to using the original, dimensional form of the variables.) The vertical lines in Fig.~\ref{fig:hysteresis} indicate the locations of the bifurcations that were identified using the ODE models. For each point plotted, the simulation was run until the order parameter exhibited its time-asymptotic behavior; this was then averaged over the subsequent 5000 time steps. Error bars denote standard deviation. Note in particular the hysteresis, as well as the point with the large error bar, indicating the predicted limit cycle behavior.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=0.7\textwidth]{bimodal_hysteresis.eps}\
\end{center}
\caption[]{Hysteresis loop as observed when traversing the bistable regions shown in Fig.~\ref{fig:mainbifdiag1}
in the directions shown (arrows) along the line at $4\omega_0/K=1.092$. The data were obtained from a simulation of Equation (\ref{eq:goveqns})
with $N=10,000$ and $K=1$. Vertical lines indicate where the reduced ODE models of Section \ref{section2} predict
homoclinic (HC), degenerate Hopf (HB), and saddle-node (SN) bifurcations. Note that the point marked
`limit cycle' has a large error bar, reflecting the oscillations in the order parameter.}
\label{fig:hysteresis}
\end{figure}
Next, we examined the behavior of Eq.~(\ref{eq:goveqns}) at 121 parameter values corresponding to an $11 \times 11$ regular grid superimposed on Fig.~\ref{fig:mainbifdiag1}, ranging from 0.1 to 2.1 at intervals of 0.2 on each axis. (In all cases, $K$ was set to 1, and $\Delta$ and $\omega_0$ were varied.) An additional series was run using a smaller grid (from 0.6 to 1.6 at intervals of 0.1 on each axis), to focus on the vicinity of region ABCD in Fig.~\ref{fig:mainbifdiag1}. Initial conditions were chosen systematically in 13 different ways, as follows:
\begin{enumerate}
\item The oscillator phases were uniformly distributed around the circle, so that the overall order parameter had magnitude $r=0$.
\item The oscillators were all placed in phase at the same randomly chosen angle in $[0, 2\pi]$, so that $r=1$.
\item The remaining 11 initial conditions were chosen by regarding the system as composed of two sub-populations, one for each Lorentzian in the bimodal distribution of frequencies, as in \cite{Barreto}. In one of the sub-populations, the initial phases of the oscillators were chosen to be randomly spaced within the angular sector $[c-d, c+d]$, where $c$ was chosen randomly in $[0, 2\pi]$ and $d$ was chosen such that the sub-order parameter magnitude $r_1$ = 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, or 0.9 (all approximately). The result was that $r_1$ had one of these magnitudes and its phase was random in $[0, 2\pi]$. The same procedure was followed for the other sub-population, subject to the constraint that $r_1 \neq r_2$. Our idea here was to deliberately break the symmetry of the system initially, to test whether it would be attracted back to the symmetric subspace defined by Eq.~(\ref{eq:symmetry}).
\end{enumerate}
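One way to realize the third family of initial conditions is to note that phases spread uniformly over a sector $[c-d, c+d]$ yield a sub-order parameter magnitude $\sin(d)/d$, which can be inverted for the target $r_1$. The sketch below is our own illustration of this construction, not taken from the paper's code:

```python
import cmath
import math
import random

def half_width_for_r(r_target):
    # For phases uniform on [c - d, c + d], the order parameter magnitude
    # is sin(d)/d; invert this decreasing relation by bisection.
    lo, hi = 1e-9, math.pi
    for _ in range(100):
        d = 0.5 * (lo + hi)
        if math.sin(d) / d > r_target:
            lo = d
        else:
            hi = d
    return 0.5 * (lo + hi)

def sample_subpopulation(n, r_target, rng):
    c = rng.uniform(0.0, 2 * math.pi)  # random mean phase
    d = half_width_for_r(r_target)
    return [rng.uniform(c - d, c + d) for _ in range(n)]

rng = random.Random(42)
phases = sample_subpopulation(20000, 0.5, rng)
r1 = abs(sum(cmath.exp(1j * t) for t in phases) / len(phases))
```

For large $n$ the realized magnitude concentrates near the target, here $r_1 \approx 0.5$.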
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\textwidth]{bimodal_Gaussian.eps}\
\end{center}
\caption[]{(a) The bifurcation diagram for the Kuramoto system with a bimodal frequency distribution consisting of
two equally weighted Gaussians. All the features in Fig.~\ref{fig:mainbifdiag1} are present, but are somewhat
distorted. The transcritical (TC) and (degenerate) Hopf curves (HB) were obtained
as described in the Appendix. The dotted lines represent conjectured saddle-node, homoclinic, and SNIPER curves.
These are based on the numerically-observed bifurcations shown in (b), which is a magnification of
the central region of (a). The symbols represent saddle-node (circles), homoclinic (triangles),
and SNIPER (squares) bifurcations.}
\label{fig:Gaussiandiag}
\end{figure}
In all the cases we examined, no discrepancies were found between the simulations and the predicted behavior. Although these tests were not exhaustive, and certainly do not constitute a mathematical proof, they are consistent with the conjecture that no additional attractors beyond those described in Section III exist.
We then investigated the generality of our results by replacing the bimodal Lorentzian natural frequency
distribution, Eq.~(\ref{bimodaldist}), with the sum of two Gaussians:
\begin{equation}
g(\omega)=\frac{1}{2\sigma \sqrt{2\pi}} \left( e^{-\frac{(\omega - \omega_0)^2}{2 \sigma^2}} + e^{-\frac{(\omega + \omega_0)^2}{2 \sigma^2}} \right)
\end{equation}
and computing the corresponding bifurcation diagram analogous to Fig.~\ref{fig:mainbifdiag1}. The results are shown in Fig.~\ref{fig:Gaussiandiag}. The transcritical (TC) and degenerate Hopf bifurcation (HB) curves were obtained using the continuous formulation of Eq.~(\ref{eq:goveqns});
see the Appendix for details. In addition, saddle-node, homoclinic, and SNIPER bifurcations
were numerically observed at several parameter values, and based on these data, we estimated
the location of the corresponding curves (dashed lines). All the features of Fig.~\ref{fig:mainbifdiag1}
are preserved, but the curves are somewhat distorted.
\section{Discussion}\label{section6}
We conclude by relating our work to three previous studies, and then offer suggestions for further research, both theoretical and experimental.
\subsection{Kuramoto's conjectures}
In his book on coupled oscillators, Kuramoto \cite{Kuramoto84} speculated about how the transition from incoherence to mutual synchronization might be modified if the oscillators' natural frequencies were bimodally distributed across the population. On pp.75--76 of Ref. \cite{Kuramoto84}, he wrote ``So far, the nucleation has been supposed to be initiated at the center of symmetry of $g$. This does not seem to be true, however, when $g$ is concave there.'' His reasoning was that for a bimodal system, synchrony would be more likely to start at the peaks of $g$. If that were true, it would mean that a system with two equal peaks would go directly from incoherence to having two synchronized clusters of oscillators, or what we have called the standing wave state, as the coupling $K$ is increased. The critical coupling at which this transition would occur, he argued, should be $K_c = 2/(\pi g(\omega_{\rm max}))$, analogous to his earlier result for the unimodal case. According to this scenario, the synchronized clusters would be tiny at onset, comprised only of oscillators with natural frequencies near the peaks of $g(\omega)$. Because of their small size, Kuramoto claimed these clusters ``will behave almost independently of each other.'' With further increases in $K$, however, the clusters ``will come to behave like a coupled pair of giant oscillators, and for even stronger coupling they will eventually be entrained to each other to form a single giant oscillator.'' (This is what we have called the partially synchronized state.)
Let us now re-examine Kuramoto's conjectures in light of our analytical and numerical results, as summarized in Fig.~\ref{fig:BD_overview}(a). For a fair comparison, we must assume that $g$ is concave at its center frequency $\omega =0$; for the bimodal Lorentzian (Eq. (\ref{bimodaldist}), this is equivalent to $\omega_0/\Delta > 1/\sqrt 3$. (Otherwise $g$ is unimodal and incoherence bifurcates to partial synchronization as $K$ is increased, consistent with Kuramoto's classic result as well as the lowest portion of Fig.~\ref{fig:BD_overview}(a).)
So restricting attention from now on to the upper part of Fig.\ref{fig:BD_overview}(a) where $\omega_0/\Delta > 1/\sqrt 3$, what actually happens as $K$ increases? Was Kuramoto right that the bifurcation sequence is always incoherence $\rightarrow$ standing wave $\rightarrow$ partial synchronization?
No. For $\omega_0/\Delta$ between $1/\sqrt 3$ and $1$ (meaning the distribution is just barely bimodal), incoherence bifurcates directly to partial synchronization---the ``single giant oscillator'' state---without ever passing through an intermediate standing wave state. In effect, the system still behaves as if it were unimodal. But there is one new wrinkle: we now see hysteresis in the transition between incoherence and partial synchronization, as reflected by the lower bistable region in Fig.~\ref{fig:BD_overview}(a).
Is there any part of Fig.~\ref{fig:BD_overview}(a) where Kuramoto's scenario really does occur? Yes---but it requires that the peaks of $g$ be sufficiently well separated. Specifically, suppose $\omega_0/\Delta > 1.81\ldots$, the value at the codimension-2 saddle-node-loop point where the homoclinic and SNIPER curves meet
(i.e., point D in Fig.~\ref{fig:mainbifdiag1}). In this regime everything behaves as Kuramoto predicted.
An additional subtlety occurs in the intermediate regime where the peaks of $g$ are neither too far apart nor too close together. Suppose that $1< \omega_0/\Delta < 1.81\ldots$. Here the system shows a different form of hysteresis. The bifurcations occur in the sequence that Kuramoto guessed as $K$ increases, but \emph{not} on the return path. Instead, the system skips the standing wave state and dissolves directly from partial synchronization to incoherence as $K$ is decreased.
Finally we note that Kuramoto's conjectured formula $K_c = 2/(\pi g(\omega_{\rm max}))$ is incorrect, although it becomes asymptotically valid in the limit of widely separated peaks. Specifically, his prediction is equivalent to $K_c = \frac{8 \Delta}{1+\sqrt{1+(\Delta/\omega_0)^2}} \sim 4 \Delta (1-\frac{1}{4} (\Delta/\omega_0)^2)$, which approaches the correct result $K_c = 4 \Delta$ as $\omega_0/\Delta \rightarrow \infty$.
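This asymptotic claim is easy to check numerically. The sketch below (our addition) compares the closed-form expression for Kuramoto's conjectured $K_c$ with its two-term expansion and with the limiting value $4\Delta$:

```python
def Kc_closed_form(Delta, omega0):
    # Kuramoto's conjectured critical coupling, rewritten in terms of
    # the ratio x = Delta / omega0
    x = Delta / omega0
    return 8 * Delta / (1 + (1 + x**2) ** 0.5)

# Widely separated peaks: the expansion 4*Delta*(1 - (Delta/omega0)^2 / 4)
# should match the closed form to high accuracy.
Delta, omega0 = 0.1, 10.0
approx = 4 * Delta * (1 - 0.25 * (Delta / omega0) ** 2)
exact = Kc_closed_form(Delta, omega0)
```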
\subsection{Crawford's center manifold analysis}
Crawford \cite{Crawford} obtained the first mathematical results for the system studied in this paper. Using center manifold theory, he calculated the weakly nonlinear behavior of the infinite-dimensional system in the neighborhood of the incoherent state. From this he derived the stability boundary of incoherence. His analysis also included the effects of white noise in the governing equations.
\begin{figure}[!bh]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=1\textwidth]{bimodal_BD_overview.eps}
\end{tabular}
\end{center}
\caption[Crawford's bifurcation diagram.]{Left (a): Results from our analysis. \emph{white}: incoherence; \emph{dark gray}: partial synchronization; \emph{light gray}: standing wave (limit cycles); \emph{vertical lines}: coexistence of incoherent and partially synchronized states; \emph{horizontal lines}: coexistence of partial synchronization and standing waves.
Right (b): Crawford's bifurcation diagram from \cite{Crawford}. In our study there is no noise, and so the diffusion is $D=0$. Crawford's $\epsilon$ corresponds to our $\Delta$. \emph{I}: incoherent states; \emph{PS}: partially synchronized; \emph{SW}: standing wave, equivalent to what we describe as two counter-rotating flocks of oscillators. (Printed with permission of Springer Verlag.)
}
\label{fig:BD_overview}
\end{figure}
Figure~\ref{fig:BD_overview}(b), reproduced from Fig.~4 in Ref.~\cite{Crawford}, summarizes Crawford's findings. Here $D$ is the noise strength (note: our analysis is limited to $D=0$), $\epsilon$ is the width of the Lorentzians (equivalent to $\Delta$ in our notation), and $\pm \omega_0$ are the center frequencies of the Lorentzians (as here). The dashed line in Fig.~\ref{fig:BD_overview}(b) shows Crawford's schematic depiction of the unknown stability boundary between the standing waves and the partially synchronized state. He suggested a strategy for calculating this boundary, and highlighted it as an open problem, writing in the figure caption, ``...the precise nature and location of this boundary have not been determined.'' Our results, summarized in Figs.~\ref{fig:mainbifdiag1} and \ref{fig:BD_overview}(a), now fill in the parts that were missing from Crawford's analysis.
\subsection{Stochastic model of Bonilla et al.}
In a series of papers (see \cite{Acebron05} for a review), Bonilla and his colleagues have explored what happens if one replaces the Lorentzians in the frequency distribution with $\delta$-functions, and adds white noise to the governing equations. The resulting system can be viewed as a stochastic counterpart of the model studied here; in effect, the noise blurs the $\delta$-functions into bell-shaped distributions analogous to Lorentzians or Gaussians. And indeed, the system shows much of the same phenomenology as seen here: incoherence, partially synchronized states, standing waves, and bistability \cite{Acebron05}.
However, a complete bifurcation diagram analogous to Fig.~\ref{fig:mainbifdiag1} has not yet been worked out for this model. The difficulty is that no counterpart of the ansatz (\ref{eq:poisson}) has been found; the stochastic problem is governed by a second-order Fokker-Planck equation, not a first-order continuity equation, and the Ott-Antonsen ansatz (\ref{eq:poisson}) no longer works in this case. Perhaps there is some way to generalize the ansatz appropriately so as to reduce the stochastic model to a low-dimensional system, but for now this remains an open problem.
\subsection{Directions for future research}
There are several other questions suggested by the work described here.
\subsubsection{Validity of reduction method}\label{section6D1}
The most important open problem is to clarify the scope and limits of the Ott-Antonsen method used in Sec.~\ref{section2B}. Under what conditions is it valid to assume that the infinite-dimensional Kuramoto model can be replaced by the low-dimensional dynamical system implied by the Ott-Antonsen ansatz? Or to ask it another way, when do all the attractors of the infinite-dimensional system lie in the low-dimensional invariant manifold corresponding to this ansatz?
This question has now become particularly pressing, because two counterexamples have recently come to light in which the Ott-Antonsen method~\cite{Ott-Ant} gives an incomplete account of the full system's dynamics. When the method was applied to the problem of chimera states for two interacting populations of identical phase oscillators, it predicted only stationary and periodic chimeras \cite{Abrams}, whereas subsequent numerical experiments revealed that quasiperiodic chimeras can also exist and be stable \cite{Pikovsky-Rosenblum}. Likewise, chaotic states are known to emerge from a wide class of initial conditions for series arrays of identical overdamped Josephson junctions coupled through a resistive load \cite{otherJJchaos, Watanabe}. Yet the Ott-Antonsen ansatz cannot account for these chaotic states, because the reduced ODE system turns out to be only two dimensional \cite{Mirollo, Marvel}.
What makes this all the more puzzling is that the method works so well in other cases. It seems to give a full inventory of the attractors for the bimodal Kuramoto model studied here, as well as for the unimodal Kuramoto model in its original form~\cite{Kuramoto75, Kuramoto84, Ott-Ant} or with external periodic forcing~\cite{Ott-Ant, Antonsen}.
So we are left in the unsatisfying position of not knowing when the method works, or why. In some cases it (apparently) captures all the attractors, while in other cases it does not. How does one make sense of all this?
A possible clue is that in all the cases where the method has so far been successful, the individual oscillators were chosen to have randomly distributed frequencies; whereas in the cases where it failed, the oscillators were \emph{identical}. Perhaps the mixing induced by frequency dispersion is somehow relevant here?
A resolution of these issues may come from a new analytical approach. Pikovsky and Rosenblum~\cite{Pikovsky-Rosenblum} and Mirollo, Marvel and Strogatz~\cite{Mirollo} have independently shown how to place the Ott-Antonsen ansatz~\cite{Ott-Ant} in a more general mathematical framework by relating it to the group of M\"obius transformations~\cite{Mirollo, goebel} or, equivalently, to a trigonometric transformation~\cite{Pikovsky-Rosenblum} originally introduced in the study of Josephson arrays~\cite{Watanabe}. This approach includes the Ott-Antonsen ansatz as a special case, but is more powerful in the sense that it provably captures \emph{all} the dynamics of the full system, and it works for any $N$, not just in the infinite-$N$ limit. The drawback is that the analysis becomes more complicated. It remains to be seen what conclusions can be drawn---and, perhaps, what longstanding problems can be solved---when this new approach is unleashed on the Kuramoto model and its many relatives.
Even in those instances where the Ott-Antonsen ansatz doesn't account for all the attractors of the full system, it can still provide useful information, for instance by giving at least some of the attractors and by easing the calculation of them. Moreover, the transient evolution from initial conditions off the Ott-Antonsen invariant manifold can yield interesting phenomena not captured by the ansatz, as discussed in Appendix C of \cite{echo}.
\subsubsection{Asymmetric bimodal distributions}
Now returning to the specific problem of the bimodal Kuramoto model: What happens if the humps in the bimodal distribution have unequal weights? The analysis could proceed as in this paper, up to the point where we assumed symmetry between the two sub-populations. One would expect new phenomena such as traveling waves to arise because of the broken symmetry.
\subsubsection{Finite-size effects}
We have focused here exclusively on the infinite-$N$ limit of the Kuramoto model. What happens when the number of oscillators is reduced? How do finite-size effects influence the bifurcation diagram? An analysis along the lines of~\cite{Hildebrand, Buice} could be fruitful for investigating these questions.
\subsubsection{Comparison with experiment}
Finally, it would be interesting to test some of these theoretical ideas in real systems. One promising candidate is the electrochemical oscillator system studied by Hudson and colleagues \cite{Hudson}, in which the frequency distribution can be bimodal or even multimodal \cite{Mikhailov04}.
\section{Acknowledgments}
This research was supported in part by NSF grant DMS-0412757 and ONR award N00014-07-0734.
\section{I. Introduction}
In this paper we ask how much entanglement is required to perform a measurement
on a pair of spatially separated systems, if the participants are allowed only local operations
and classical communication. That is, we want to find the ``entanglement cost'' of a
given measurement. (We give a precise definition of this term in the following subsection.)
Our motivation can be traced back to a 1999 paper
entitled ``Quantum nonlocality without entanglement'', which presents a complete
orthogonal measurement that cannot be performed using only local operations
and classical communication (LOCC), even though the eigenstates of the measurement
are all unentangled \cite{nwoe}. That result shows that there can be a kind of nonlocality in a quantum
measurement that is not captured by the entanglement of the associated states.
Here we wish to quantify this nonlocality for specific measurements. Though the measurements
we consider here have outcomes associated with {\em entangled} states, we find that the
entanglement cost of the measurement often exceeds the entanglement
of the states themselves.
The 1999 paper just cited obtained an upper bound on the
cost of
the specific nonlocal measurement presented there, a bound that has recently been improved
and generalized by Cohen \cite{Cohen}. In addition,
there are in the literature at least three other avenues of research that bear on the problem of
finding the entanglement cost of nonlocal measurements.
First, there are several papers that simplify or extend the results of Ref.~\cite{nwoe}, for example by
finding other examples of measurements with product-state outcomes that cannot be
carried out locally \cite{UPB, DiVincenzo, DiVincenzo2, Groisman, WalgateHardy, Cohen,
Cohen1, Wootters, Duan, Feng, Koashi}. A related line of research asks
whether or not a given set of orthogonal bipartite or multipartite states (not necessarily a complete basis, and not necessarily unentangled) can be distinguished by LOCC \cite{Chen, Chen2, Ghosh, Horodecki, Virmani, WalgateHardy, Walgate, Acin, Ogata, Eggeling, Chefles2, Bandyopadhyay, Nathanson, Owari, Fan, Watrous, Hayashi, Duan2, Chen3, WalgateScott}, and if not, how well one {\em can} distinguish the states by such means \cite{Badziag, Cao, Hillery, Horodecki2, Terhal}. Finally, a number of authors have investigated the cost in entanglement, or the entanglement production capacity, of various bipartite and multipartite operations \cite{Chefles, Collins, Cirac, Berry, Dur, Eisert, Leifer, Ye, BennettHarrow, Kraus1, Zanardi, Groisman2, Huelga, Huelga2, Reznik2, Dur2, Reznik, Jozsa}.
In this paper we consider three specific cases: (i) a class of orthogonal measurements on
two qubits, in which the four eigenstates are equally entangled, (ii) a somewhat broader class of orthogonal measurements with
unequal entanglements, and (iii) a general, nonorthogonal, bipartite measurement in $d \times d$ dimensions
that is
invariant under all local Pauli operations.
For the first of our three cases we present upper and lower
bounds on the entanglement cost. For the second case we obtain a lower bound,
and for the last case we compute the cost exactly: it is equal to the average entanglement
of the states associated with the outcomes. Throughout the paper, we mark our main results
as Propositions.
The upper bound in case (i) can be obtained directly from a protocol devised by Berry \cite{Berry}---a refinement of
earlier protocols \cite{Cirac, Ye}---for performing a closely related
nonlocal
unitary transformation. Our bound is therefore the same as Berry's bound. However, because we
are interested in performing a measurement rather than a unitary transformation, we give an alternative
protocol consisting of a sequence of local measurements.
To get our lower bounds, we use a method developed
in papers on the local distinguishability of bipartite states \cite{Smolin, Ghosh, Horodecki}.
The average entanglement between
two parties cannot be increased by LOCC; so in performing the measurement,
the participants must consume at least
as much entanglement as the measurement can produce. This fact is the basis of all but one
of our lower bounds. The one exception is in Section III, where we use a more stringent
condition, a bound on the success probability of local entanglement manipulation, to put a tighter bound on the cost for a limited class of procedures.
\subsection{1. Statement of the Problem}
To define the entanglement cost, we imagine two participants, Alice and Bob,
each holding one of the two objects to be measured. We allow them to do any sequence
of local operations and classical communication, but we do not allow them to transmit
quantum particles from one location to the other. Rather, we give them, as a resource,
arbitrary shared entangled states,
and we keep track of the amount of entanglement they consume in performing the measurement.
At this point, though, we have a few options in defining the problem. Do we try to find the cost of performing the
measurement only once, or do we imagine that the same measurement will be performed
many times (on many different pairs of qubits) and look for the asymptotic cost per trial?
And how do we quantify the amount of entanglement that is used up?
In this paper we imagine that Alice and Bob will perform the given measurement only
once. (In making this choice we are following Cohen \cite{Cohen}.)
However, we suppose that this measurement is one of many measurements they
will eventually perform (not necessarily repeating any one of the measurements and not necessarily
knowing in advance what the future measurements will be), and we assume that they have a large
supply of entanglement from which they will continue to draw as they carry out these
measurements. In this setting it makes sense to use the standard measure of entanglement
for pure states,
namely, the entropy of either of the two parts \cite{concentrate}. Thus, for a pure state $|\psi\rangle$
of a bipartite system AB, the entanglement is
\begin{equation}
{\mathcal E}(|\psi\rangle) = -\hbox{tr} \rho_A \log \rho_A, \label{basicE}
\end{equation}
where $\rho_A$ is the reduced density matrix of particle A: $\rho_A = \hbox{tr}_B |\psi\rangle \langle \psi |$. In this paper, the logarithm will always be base two; so the entanglement is measured
in ebits.
By means of local operations and classical communication, Alice and Bob can create from their
large supply of entanglement any specific state that they need. For example, if they
create and completely use up a copy of the state $|\phi^+_c\rangle = c|00\rangle + d|11\rangle$,
this counts as a cost of ${\mathcal E}(|\phi^+_c\rangle) = -(c^2 \log c^2 + d^2 \log d^2)$.
On the other hand, if their procedure converts an entangled state into a less entangled state, the cost
is the difference, that is, the amount of entanglement lost.
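As a concrete illustration (not part of the original analysis), the measure of Eq.~(\ref{basicE}) is easily evaluated numerically from the Schmidt coefficients of the state; the following sketch, assuming Python with NumPy, also reproduces the cost of consuming the state $|\phi^+_c\rangle$ quoted above:

```python
import numpy as np

def entanglement(psi, dA, dB):
    """Entropy of entanglement, in ebits, of a bipartite pure state.

    The Schmidt coefficients of psi are the singular values of its
    dA x dB reshaping; E = -sum_k p_k log2 p_k, where the p_k are the
    squared Schmidt coefficients (the eigenvalues of rho_A)."""
    s = np.linalg.svd(np.reshape(psi, (dA, dB)), compute_uv=False)
    p = s**2
    p = p[p > 1e-12]                      # discard zero Schmidt weights
    return float(-np.sum(p * np.log2(p)))

# The resource state |phi_c^+> = c|00> + d|11> with c = 0.8, d = 0.6:
c, d = 0.8, 0.6
phi_c = np.array([c, 0.0, 0.0, d])
cost = entanglement(phi_c, 2, 2)          # -(c^2 log c^2 + d^2 log d^2)
```

For a maximally entangled pair of qubits this gives exactly 1 ebit, and for a product state it gives 0, as expected.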
A general measurement is
specified by a POVM, that is, a collection of positive
semi-definite operators $\Pi_i$
that sum to the identity, each operator being associated with one of the outcomes of
the measurement. In this paper we restrict our attention to {\em complete} measurements,
that is, measurements in which each operator is of rank one; so each $\Pi_i$ is of the form
$\alpha_i|\phi_i\rangle\langle\phi_i|$ for some $\alpha_i$ in the range $0<\alpha_i\le1$. In a complete {\em orthogonal} measurement, each operator is
a projection operator ($\alpha_i = 1$) that projects onto a single vector (an eigenvector $|\phi_i\rangle$ of the measurement). Now, actually performing a
measurement will always entail performing some operation on the measured system. All that we
require of this operation is that Alice and Bob both end up with an accurate
classical record of the outcome of the measurement. In particular, we do not insist that
the measured system be collapsed into some particular state or even that it survive the measurement.
We allow the possibility of probabilistic measurement procedures, in which the probabilities
might depend on the initial state of the system being measured. However, we do not want our quantification of the
cost of a measurement to depend on this initial state; we are trying to characterize the
measurement itself, not the system on which it is being performed. So we assume that
Alice and Bob are initially completely ignorant of the state of the particles they are measuring.
That is, the state they initially assign to these particles is the completely mixed state. This is
the state we will use in computing any probabilities associated with the procedure.
Bringing together the above considerations, we now give
the definition of the quantity we are investigating in
this paper. Given a POVM $M$, let ${\mathcal P}(M)$ be
the set of all LOCC procedures $P$ such that (i) $P$ uses
pure entangled pairs, local operations, and classical communication, and (ii) $P$ realizes
$M$ exactly, in the sense that for any initial state of the
system to be measured, $P$ yields classical outcomes with
probabilities that agree with the probabilities given by $M$.
Then $C(M)$, the entanglement cost of a measurement $M$, is
defined to be
\begin{equation}
C(M) = \inf_{P \in {\mathcal P}(M)} \left\langle
{\mathcal E}_{\hbox{\scriptsize initial}} - {\mathcal D}_{\hbox{\scriptsize final}} \right\rangle,
\end{equation}
where ${\mathcal E}_{\hbox{\scriptsize initial}}$ is the total entanglement of all the
resource states used in the procedure, ${\mathcal D}_{\hbox{\scriptsize final}}$ is
the distillable entanglement of the state remaining at the end of the procedure \cite{dist1,dist2},
and $\langle \cdots \rangle$
indicates an average over all the possible results of $P$,
when the system on which the measurement is being performed
is initially in the completely mixed state
\footnote{One might wonder why we are using the {\em average} cost if we are imagining
each measurement being performed only once. The reason is this: even in a series of distinct
measurements, if the series is long enough the actual cost will, with very high probability, be
very close to the sum of the average costs of the individual measurements.}. (Though we allow
and take into account the possibility of some residual
entanglement ${\mathcal D}_{\hbox{\scriptsize final}}$, in all the
procedures we consider explicitly in this paper, the entanglement in the
resource states will in fact be used up completely.)
A different notion of the entanglement cost of a measurement is considered in Ref.~\cite{Jozsa}, namely,
the amount of entanglement needed to effect a Naimark extension of a given POVM\@. In that case the
entanglement is between the system on which the POVM is to be performed and an ancillary system needed to make
the measurement orthogonal. For any orthogonal measurement, and indeed for all the measurements considered
in this paper, the entanglement cost in the sense of Ref.~\cite{Jozsa} is zero.
\subsection{2. Measurements and unitary transformations}
One way to perform a nonlocal orthogonal measurement on a bipartite system is to perform a nonlocal
unitary transformation that takes the eigenstates of the desired measurement into
the standard basis, so that the measurement can then be finished locally.
(We will use this fact in Section II.)
So one might wonder whether the problem we are investigating in this paper, at least for the case
of orthogonal measurements, is equivalent to the problem of finding the cost of a nonlocal unitary
transformation. A simple example shows that the two problems are distinct.
Suppose that Alice holds two qubits, labeled A$'$ and A, and Bob holds a single qubit labeled B.
They want to perform an orthogonal measurement having the following eight eigenstates.
\begin{equation}
\begin{split}
(1/\sqrt{2})(|000\rangle + |011\rangle), \hspace{6mm}|100\rangle \\
(1/\sqrt{2})(|000\rangle - |011\rangle), \hspace{6mm}|101\rangle \\
(1/\sqrt{2})(|001\rangle + |010\rangle), \hspace{6mm}|110\rangle \\
(1/\sqrt{2})(|001\rangle - |010\rangle), \hspace{6mm}|111\rangle
\end{split}
\end{equation}
Here the order of the qubits in each ket is A$'$, A, B\@. Alice and Bob can carry out this measurement
by the following protocol: Alice measures qubit A$'$ in the standard basis. If she gets the outcome
$|1\rangle$, she and Bob can finish the measurement locally. If, on the other hand, she gets the
outcome $|0\rangle$, she uses up one ebit to teleport the state of qubit A to Bob, who then
finishes the measurement. The average cost of this protocol is 1/2 ebit, because the probability
that Alice will need to use an entangled pair is 1/2.
On the other hand, one can show that any unitary transformation that could change the above basis into
the standard basis would be able to create 1 ebit of entanglement and must therefore
consume at least 1 ebit. So the cost of the measurement in this case is strictly
smaller than the cost of a corresponding unitary transformation.
The crucial difference is
that when one does a unitary transformation, one can gain no information about the
system being transformed. So there can be no averaging between easy cases and
hard cases.
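As a quick sanity check (illustrative only, not part of the original text), one can confirm numerically that the eight states above form a complete orthonormal basis of the A$'$AB space, and that the 1/2-ebit average cost follows from the probability 1/2 of Alice obtaining the $|0\rangle$ outcome on qubit A$'$ when the input is maximally mixed:

```python
import numpy as np

def ket(bits):                 # computational basis vector |A' A B>
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

s = 1.0 / np.sqrt(2)
basis = [s * (ket('000') + ket('011')), s * (ket('000') - ket('011')),
         s * (ket('001') + ket('010')), s * (ket('001') - ket('010')),
         ket('100'), ket('101'), ket('110'), ket('111')]

# Gram matrix: identity iff the states are orthonormal (and complete,
# since there are 8 of them in an 8-dimensional space).
gram = np.array([[u @ v for v in basis] for u in basis])

# On the maximally mixed input, A' is found in |0> with probability 1/2;
# only that branch consumes the 1 ebit needed for teleportation.
p_teleport = 0.5
avg_cost = p_teleport * 1.0    # average cost in ebits
```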
\subsection{3. Two general bounds on the cost}
There are two general bounds on $C(M)$, an upper bound and a lower bound, that apply to all complete bipartite measurements.
These bounds are expressed in the following two Propositions.
\medskip
\noindent {\bf Proposition 1.} Let $M$ be a POVM on two objects A and B, having state spaces
of dimensions $d_A$ and $d_B$ respectively. Then $C(M) \le \min\{\log d_A, \log d_B\}$.
\medskip
\noindent {\em Proof.} Let Alice and Bob share, as a resource, a maximally entangled state of two
$d_A$-dimensional objects. They can use this pair to teleport
the state of A from Alice to Bob \cite{teleport},
who can then perform the measurement $M$ locally. The entanglement of the resource
pair is $\log d_A$. So $\log d_A$ ebits are sufficient to perform the measurement. Similarly,
$\log d_B$ ebits would be sufficient to teleport the state of B to Alice. So the cost of $M$ is no greater than
$\min\{\log d_A, \log d_B\}$. \hfill $\square$
\medskip
As we have mentioned, most of our lower bounds are obtained by considering the entanglement production capacity of our measurements. Specifically, we imagine that in addition to particles A and B,
Alice and Bob hold, respectively, auxiliary particles C and D. We consider an initial state of the
whole system such that the measurement $M$ on AB collapses CD into a possibly entangled state
\cite{Smolin, Ghosh, Horodecki}.
The average amount by which the measurement increases the entanglement between
Alice and Bob is then a lower bound on $C(M)$. That is,
\begin{equation}
\begin{split}
C(M) \ge &\; \hbox{(average final entanglement
of CD)} \\
&- \hbox{(initial entanglement between AC and BD)}.
\end{split}
\end{equation}
In the proof of the following proposition, the initial entanglement is zero.
\medskip
\noindent {\bf Proposition 2.} Let $M$ be a bipartite POVM consisting of the operators $\alpha_i|\phi_i\rangle\langle\phi_i|$, where each $|\phi_i\rangle$ is a normalized state of particles
A and B, each of which has a $d$-dimensional state space. Then $C(M)$ is at least as great as the
average entanglement $\langle {\mathcal E} \rangle$ of the states $|\phi_i\rangle$. That is,
\begin{equation}
C(M) \ge \langle {\mathcal E} \rangle \equiv \frac{1}{d^2}\sum_i \alpha_i {\mathcal E}(|\phi_i\rangle). \label{ave}
\end{equation}
\medskip
\noindent {\em Proof.} Let the initial state of ABCD be
\begin{equation}
|\Psi\rangle = \frac{1}{d}\sum_{kl} |kk\rangle_{AC}|ll\rangle_{BD}, \label{maxent}
\end{equation}
a tensor product of two maximally entangled states.
Note that the reduced density matrix of particles A and B is the completely mixed state,
in accordance with our definition of the problem.
When the measurement yields the outcome $i$, its effect on
$|\Psi\rangle$ can be expressed in the
form \cite{Kraus}
\begin{equation}
|\Psi\rangle\langle\Psi| \rightarrow \sum_j \left( A_{ij}\otimes I_{CD} \right) |\Psi\rangle\langle\Psi|
( A^\dag_{ij}\otimes I_{CD} ), \label{op}
\end{equation}
where $I_{CD}$ is the identity on CD, and the operators $A_{ij}$ act on the state space of particles A and B, telling us what happens to the system when the $i$th outcome occurs.
The trace of the right-hand side of Eq.~(\ref{op}) is not unity but is
the probability of the $i$th outcome. (Note that $A_{ij}$ may send
states of AB to a different state space, including, for example, the state space of the system
in which the classical record of the outcome is to be stored. The index $j$ is needed because
the final state of the system when outcome $i$ occurs could be a mixed state.)
The operators $A_{ij}$ satisfy
the condition
\begin{equation}
\sum_j A^\dag_{ij}A_{ij} = \Pi_i = \alpha_i|\phi_i\rangle\langle \phi_i|.
\end{equation}
Applying the operation of Eq.~(\ref{op}) to the state of Eq.~(\ref{maxent}), and then
tracing out everything except particles C and D, one finds that these particles are left
in the state
\begin{equation}
(|\phi_i\rangle\langle \phi_i|)^*,
\end{equation}
where the asterisk indicates complex conjugation in the standard basis.
This conjugation does not affect the entanglement; so, when outcome $i$ occurs,
particles C and D are left in a state with entanglement ${\mathcal E}(|\phi_i\rangle)$. The
probability of this outcome is $\alpha_i/d^2$. So the average entanglement of CD
after the measurement has been performed is the
quantity $\langle {\mathcal E}\rangle$ of Eq.~(\ref{ave}). But the average entanglement
between Alice's and Bob's locations cannot have increased as long as Alice and Bob
were restricted to local operations and classical communication. So in the process
of performing the measurement, Alice and Bob must have used up an amount of
entanglement equal to or exceeding $\langle {\mathcal E} \rangle$. \hfill $\square$
\medskip
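The key step of this proof can be checked numerically for $d=2$. For a single rank-one outcome with $\alpha_i = 1$ and the illustrative choice of Kraus operator $A_i = |\phi_i\rangle\langle\phi_i|$ (one valid choice, made here only for the check), acting on the state of Eq.~(\ref{maxent}) leaves CD in the conjugated pure state with probability $1/d^2$. A minimal sketch, assuming NumPy:

```python
import numpy as np

d = 2
a, b = 0.8, 0.6
phi = np.array([a, 0, 0, 1j * b])        # a state of AB with a complex phase
K = np.outer(phi, phi.conj())            # Kraus operator |phi><phi|

# |Psi> of Eq. (maxent), viewed as a d^2 x d^2 matrix between AB and CD:
# Psi[ab, cd] = (1/d) delta_{ac} delta_{bd}, i.e. the identity divided by d.
M = np.eye(d * d) / d
Mp = K @ M                               # (A_i x I_CD)|Psi>, unnormalized

# rho_CD[s, s'] = sum_r Mp[r, s] conj(Mp[r, s']): trace out A and B.
rho_CD = Mp.T @ Mp.conj()
p = np.trace(rho_CD).real                # probability of this outcome
```

The check confirms $p = 1/d^2$ and that the normalized state of CD is $(|\phi\rangle\langle\phi|)^*$, with the complex conjugation visible because $|\phi\rangle$ was given a complex amplitude.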
In the following three sections we improve these two bounds for a specific measurement
that we label $M_a$, an orthogonal measurement on two qubits with eigenstates
given by
\begin{equation}
\begin{split}
&|\phi^+_a\rangle = a|00\rangle + b|11\rangle, \hspace{5mm} |\phi^-_a\rangle = b|00\rangle
-a|11\rangle \\
&|\psi^+_a\rangle = a|01\rangle + b|10\rangle, \hspace{5mm} |\psi^-_a\rangle = b|01\rangle
-a|10\rangle \label{Mstates}
\end{split}
\end{equation}
Here $a$ and $b$ are nonnegative real numbers with $a \ge b$ and \hbox{$a^2 + b^2 = 1$}.
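As an illustrative check (not in the original), each of the four eigenstates has Schmidt coefficients $(a,b)$ and hence entanglement $h(a^2)$, where $h$ is the binary entropy; since all four outcomes are equally likely, Proposition 2 gives the lower bound $C(M_a) \ge h(a^2)$:

```python
import numpy as np

a = 0.8
b = np.sqrt(1 - a**2)
states = [np.array([a, 0, 0, b]),     # |phi_a^+>
          np.array([b, 0, 0, -a]),    # |phi_a^->
          np.array([0, a, b, 0]),     # |psi_a^+>
          np.array([0, b, -a, 0])]    # |psi_a^->

gram = np.array([[u @ v for v in states] for u in states])

def h(z):                             # binary entropy, in bits
    return -(z * np.log2(z) + (1 - z) * np.log2(1 - z))

def ent(psi):                         # entanglement from the Schmidt spectrum
    p = np.linalg.svd(psi.reshape(2, 2), compute_uv=False)**2
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

ents = [ent(s) for s in states]       # each equals h(a^2)
```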
Section II presents an improved upper bound for this measurement, Section III derives a lower bound for
a restricted class of procedures, and Section IV derives an absolute lower bound.
We then consider a somewhat more general measurement in Section V.
In Section VI we exhibit a class of bipartite measurements, in dimension $d\times d$,
for which we can find a procedure that achieves the lower bound of Eq.~(\ref{ave}). As noted
earlier, these are the POVMs that are invariant under all local Pauli operations.
\section{II. Upper bound for {\boldmath $M$}\hspace{-0.5mm}$_a$}
One way to perform the measurement $M_a$ is to perform the following unitary transformation
on the two qubits.
\begin{equation}
U = e^{i\alpha \sigma_y \otimes \sigma_x} = \mtx{cccc}{a & 0 & 0 & b \\ 0 & a & b & 0 \\
0 & -b & a & 0 \\ -b & 0 & 0 & a}, \label{unitary}
\end{equation}
where $\cos\alpha = a$ and $\sin\alpha = b$, the matrix
is written in the standard basis and
the $\sigma$'s are the usual Pauli matrices,
\begin{equation}
\sigma_x = \mtx{cc}{0 & 1 \\ 1 & 0} \hspace{2mm}\hbox{and}\hspace{2mm}
\sigma_y = \mtx{cc}{0 & -i \\ i & 0}.
\end{equation}
Under this transformation, the four orthogonal states
that define the measurement $M_a$ are transformed into
\begin{equation}
\begin{split}
\hfill&|\phi^+_a\rangle = a|00\rangle + b|11\rangle \rightarrow |00\rangle \hfill \\
\hfill&|\phi^-_a\rangle = b|00\rangle - a|11\rangle \rightarrow -|11\rangle \hfill \\
\hfill&|\psi^+_a\rangle = a|01\rangle + b|10\rangle \rightarrow |01\rangle \hfill \\
\hfill&|\psi^-_a\rangle = b|01\rangle - a|10\rangle \rightarrow -|10\rangle \hfill
\end{split}
\end{equation}
So once the transformation has been done, the measurement $M_a$ can be completed
locally; Alice and Bob both make the measurement $|0\rangle$ versus $|1\rangle$
and tell each other their results.
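The transformation of Eq.~(\ref{unitary}) and the mapping above are easy to verify directly: since $(\sigma_y \otimes \sigma_x)^2 = I$, the exponential reduces to $\cos\alpha\, I + i \sin\alpha\, \sigma_y \otimes \sigma_x$. A short numerical check (Python with NumPy assumed):

```python
import numpy as np

a = 0.8
b = np.sqrt(1 - a**2)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])

# (sigma_y x sigma_x)^2 = I, so exp(i alpha sigma_y x sigma_x)
# = cos(alpha) I + i sin(alpha) sigma_y x sigma_x, with cos(alpha) = a.
U = a * np.eye(4) + 1j * b * np.kron(sy, sx)

phi_p = np.array([a, 0, 0, b]);  phi_m = np.array([b, 0, 0, -a])
psi_p = np.array([0, a, b, 0]);  psi_m = np.array([0, b, -a, 0])
```

One confirms that $U$ is unitary and sends the four eigenstates of $M_a$ to $|00\rangle$, $-|11\rangle$, $|01\rangle$, and $-|10\rangle$ respectively.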
The transformation $U$ is equivalent to one that has been analyzed in Refs.~\cite{Groisman2, Dur2, Cirac, Ye, Berry},
all of which give procedures that are consistent with the rules we have
set up for our problem; that is, the procedures can be used to perform the measurement
once, rather than asymptotically, using arbitrary entangled states as resources. (Some
of those papers consider the asymptotic problem, but their procedures also work in the
setting we have adopted here.)
It appears that the procedure presented by Berry in Ref.~\cite{Berry} is the most efficient
one known so far. It is a multi-stage procedure, involving at each stage a measurement
that determines whether another stage, and another entangled pair, are needed.
We now present a measurement-based protocol for performing $M_a$. The protocol can be derived from
Berry's and yields the same upper bound on the cost, but we arrive at it
in a different way that may have conceptual value in the analysis of other
nonlocal measurements.
The construction of the protocol begins with the following observations. If Alice were to try to
teleport her qubit to Bob using as a resource an incompletely entangled pair, she would cause
a nonunitary distortion in its state. With his qubit and Alice's distorted qubit, Bob could, with some probability less than one, successfully complete the measurement. However, if he gets the wrong
outcome, he will destroy the information necessary to complete the measurement. We require
the measurement always to be completed, so this protocol fails. On the other hand, suppose Alice, again using a partially entangled pair, performs an {\em incomplete} teleportation, conveying to Bob only one rather than two classical bits, and suppose Bob similarly makes an
incomplete measurement, extracting only one classical bit from his two qubits. In that case, if the incomplete measurements
are chosen judiciously, a failure does not render the desired measurement impossible but only
requires that Alice and Bob do a different nonlocal measurement on the qubits they now
hold. In the following description of the
protocol, we have incorporated the unitary transformations associated with teleportation
into the measurements themselves, so that the whole procedure is a sequence of local
projective measurements.
Like Berry's protocol, our protocol consists of a series of rounds, beginning with what we will call
``round one''.
\begin{enumerate}
\item Alice and Bob are given as a resource the entangled state
$|\phi^+_x\rangle = x|00\rangle + y|11\rangle$, where
the positive real numbers $x$ and $y$ (with $x^2 + y^2 = 1$) are to be determined by minimizing the eventual cost. Thus each participant holds two qubits: the qubit to be measured and
a qubit that is part of the shared resource.
\item Alice makes a binary measurement on her two qubits, defined by two orthogonal
projection operators:
\begin{equation}
\begin{split}
\hfill&P = |\Phi^+\rangle\langle \Phi^+| + |\Psi^-\rangle\langle \Psi^-| \\
\hfill&Q = |\Phi^-\rangle\langle \Phi^-| + |\Psi^+\rangle\langle \Psi^+| \label{preBell}
\end{split}
\end{equation}
Here the Bell states $|\Phi^\pm\rangle$ and $|\Psi^\pm\rangle$ are defined by
$|\Phi^\pm\rangle
= (|00\rangle \pm |11\rangle)/\sqrt{2}$ and $|\Psi^\pm\rangle
= (|01\rangle \pm |10\rangle)/\sqrt{2}$.
Alice transmits (classically) the result of her measurement to Bob.
(Here Alice is doing the incomplete teleportation. In a complete teleportation she would
also distinguish $|\Phi^+\rangle$ from $|\Psi^-\rangle$, and $|\Phi^-\rangle$ from $|\Psi^+\rangle$.)
\item If Alice gets the outcome $P$, Bob performs the following binary measurement on his
two qubits:
\begin{equation}
\begin{split}
&P_1 = |\phi_1^+\rangle\langle \phi_1^+| + |\psi_1^+\rangle\langle \psi_1^+|, \\
&Q_1 = |\phi_1^-\rangle\langle \phi_1^-| + |\psi_1^-\rangle\langle \psi_1^-|.
\end{split}
\end{equation}
Here $|\phi_1^+\rangle
= A|00\rangle + B|11\rangle$, $|\phi_1^-\rangle
= B|00\rangle - A|11\rangle$, $|\psi_1^+\rangle
= B|01\rangle + A|10\rangle$, and $|\psi_1^-\rangle
= A|01\rangle - B|10\rangle$, and the real coefficients $A$ and $B$ are obtained from $(a,b)$
and $(x,y)$ via the equation $Ax/a = By/b$, together with the normalization condition
\hbox{$A^2+B^2=1$}.
(These values are chosen so as to undo the distortion caused by Alice's imperfect
teleportation.)
On the other hand, if Alice gets the outcome $Q$, Bob performs a different binary measurement:
\begin{equation}
\begin{split}
&P_2 = |\phi_2^+\rangle\langle \phi_2^+| + |\psi_2^+\rangle\langle \psi_2^+|, \\
&Q_2 = |\phi_2^-\rangle\langle \phi_2^-| + |\psi_2^-\rangle\langle \psi_2^-|.
\end{split}
\end{equation}
Here $|\phi_2^+\rangle
= B|00\rangle + A|11\rangle$, $|\phi_2^-\rangle
= A|00\rangle - B|11\rangle$, $|\psi_2^+\rangle
= A|01\rangle + B|10\rangle$, and $|\psi_2^-\rangle
= B|01\rangle - A|10\rangle$.
\item If Alice and Bob have obtained either of the outcomes $P\otimes P_1$ or $Q \otimes Q_2$,
which we call the ``good'' outcomes, they
can now finish the desired measurement $M_a$ by making local measurements, with no further
expenditure of entangled resources. For example, if they get the outcome
$P\otimes P_1$, Alice now distinguishes between $|\Phi^+\rangle$ and $|\Psi^-\rangle$
(which span the subspace picked out by $P$), and Bob distinguishes between
$|\phi_1^+\rangle$ and $|\psi_1^+\rangle$ (which span the subspace picked out
by $P_1$).
The total probability of getting one of the two good outcomes
is
\begin{equation} \label{probability}
\hbox{probability}\; = \frac{1}{(a/x)^2 + (b/y)^2}.
\end{equation}
On the other hand, if they have obtained one of the other
two outcomes, $P\otimes Q_1$ or $Q \otimes P_2$---the ``bad'' outcomes---they find that in order to finish the measurement
$M_a$ on their {\em original} pair of qubits, they now have to perform a different
measurement $M_{a_2}$
on the system that they now hold. (Even though each participant started with two qubits, each of them has now distinguished a pair of two-dimensional subspaces, effectively removing one qubit's worth of quantum information. So the remaining quantum information
on each side can be held in a single qubit.) The measurement $M_{a_2}$ has the same form as
$M_a$, but with new values $a_2$ and $b_2$ instead of $a$ and $b$.
The new values are determined by the equations
\begin{equation} \label{newa}
a_2=\frac{(x^2 - y^2)ab}{\sqrt{x^4b^2 + y^4a^2}} \hspace{6mm} b_2 = \sqrt{1-a_2^2}\, .
\end{equation}
In any case, Alice and Bob have now finished round one. If they have obtained one of the
bad outcomes,
they now have
two choices: (i) begin
again at step 1 but with the new values $a_2$ and $b_2$, or (ii) use up a whole ebit to teleport Alice's system to Bob, who finishes the measurement locally. They choose the method that will ultimately
be less costly in entanglement. If they choose option (i), we say that they have begun round two.
\item This procedure is iterated until the measurement is finished or until $L$ rounds have been
completed, where $L$ is an integer chosen in advance. In round $j$, the measurement parameter
$a_j$ is determined from the parameters $a_{j-1}$ and $x_{j-1}$ used in the preceding round
according to Eq.~(\ref{newa}) (with the appropriate substitutions). Here $a_1$ and $x_1$
are to be interpreted as the first-round values $a$ and $x$.
\item If $L$ rounds are completed and the measurement is still unfinished, Alice teleports
her system to Bob, who finishes the measurement locally.
\end{enumerate}
The entanglement used in round $j$ of this procedure is
${\mathcal E}(|\phi^+_{x_j}\rangle)=h(x_j^2)$, where $h$ is the binary entropy
function $h(z) = -[z\log z + (1-z)\log(1-z)]$. From Eqs.~(\ref{probability})
and (\ref{newa}),
we therefore have the following upper bound on the cost of the measurement $M_a$.
\medskip
\noindent {\bf Proposition 3.} For each positive integer $j$, let $x_j$ satisfy $0 < x_j < 1$. We define the functions $F(a,x)$ (failure probability) and
$a'(a,x)$ (new value of the measurement parameter) as follows:
\begin{equation}
\begin{split}
& F(a,x) = 1 - \frac{1}{(a/x)^2 + (b/y)^2} \\
& a'(a,x) = \frac{(x^2 - y^2)ab}{\sqrt{x^4b^2 + y^4a^2}},
\end{split}
\end{equation}
where $y = (1-x^2)^{1/2}$ and $b= (1-a^2)^{1/2}$. Let
$B_1(a;x) = h(x^2) + F(a,x)$, and for each integer $n \ge 2$, let $B_n(a;x_1,\ldots, x_n)$
be defined by
\begin{equation}
\begin{split}
&B_n(a;x_1,\ldots, x_n)
= h(x_1^2) \\
&+ F(a,x_1)B_{n-1}[a'(a,x_1);x_2, \ldots, x_n].
\end{split}
\end{equation}
Then for each positive integer $n$, $B_{n}(a; x_1, \ldots, x_{n})$ is an upper bound on $C(M_a)$.
\medskip
The protocol calls for minimizing the bound over the values of $n$ and $x_j$. This optimization
problem is exactly the problem analyzed
by Berry. We present in Fig.~1 the minimal cost as obtained by a numerical optimization, plotted as a function
of the entanglement of the eigenstates of the measurement. (In constructing the curve,
we have limited Alice and Bob to two rounds. Additional rounds do not make
a noticeable difference in the shape of the curve, given our choice of the axis
variables.) We also show on the figure the lower
bound to be derived in Section IV. We note that so far, for cases in which the entanglement
of the eigenstates of $M_a$ exceeds around 0.55 ebits, there is no known
measurement strategy that does better than simple teleportation, with a cost of one ebit.
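The bound of Proposition 3 is straightforward to evaluate numerically. The sketch below (illustrative only; it uses a crude grid search over $x$ rather than a careful optimizer) computes the one-round bound $B_1$ for weakly entangled eigenstates, using the failure probability from Eq.~(\ref{probability}) and the follow-up parameter of Eq.~(\ref{newa}):

```python
import numpy as np

def h(z):                              # binary entropy, in bits
    return -(z * np.log2(z) + (1 - z) * np.log2(1 - z))

def F(a, x):                           # failure probability of one round
    b, y = np.sqrt(1 - a**2), np.sqrt(1 - x**2)
    return 1.0 - 1.0 / ((a / x)**2 + (b / y)**2)

def a_next(a, x):                      # parameter of the follow-up measurement
    b, y = np.sqrt(1 - a**2), np.sqrt(1 - x**2)
    return (x**2 - y**2) * a * b / np.sqrt(x**4 * b**2 + y**4 * a**2)

def B(a, xs):                          # Proposition 3 bound B_n(a; x_1..x_n)
    if len(xs) == 1:
        return h(xs[0]**2) + F(a, xs[0])
    return h(xs[0]**2) + F(a, xs[0]) * B(a_next(a, xs[0]), xs[1:])

a = np.sqrt(0.99)                      # eigenstate entanglement h(0.99) ~ 0.08
xs = np.sqrt(np.linspace(0.905, 0.999, 200))   # candidate resource parameters
b1 = min(B(a, [x]) for x in xs)        # best one-round upper bound on C(M_a)
a2 = a_next(a, np.sqrt(0.97))          # parameter entering a possible round two
```

Longer lists of $x_j$ values explore deeper recursions; for this weakly entangled example the one-round bound already lies well below the 1-ebit teleportation cost and well above the $h(a^2)$ lower bound of Proposition 2.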
\begin{figure}[h]
\begin{tabular}{l}
Bounds on the cost \\
\includegraphics[scale=1]{allcurves1.pdf}
\end{tabular}
{\centering Entanglement of the states}
\smallskip
\caption{The solid curves are upper and lower bounds on the entanglement cost of the measurement $M_a$. (The derivation of the lower bound is in Section IV.) The
diagonal dashed line is the general lower bound defined by the entanglement of the states
themselves, and the horizontal dashed line is the general upper bound based on teleportation.}
\end{figure}
\section{III. Limitation to a single round}
As it happens, most of the savings in the above strategy---compared to the cost of simple teleportation---already
appear in the first round. We now
consider the single-round case in more detail. It turns out that, at least for small
values of the entanglement of the eigenstates, we can determine quite precisely the
minimal cost of the measurement $M_a$ when Alice and Bob are
restricted to a single round.
We begin by defining the class of measurement strategies we consider in this section.
A ``single-round procedure'' is a measurement procedure
of the following form. (i) Alice and Bob are given the state $x|00\rangle+y|11\rangle$ at first, with
which they try to complete the measurement. (ii) If they use this resource but fail
to carry out the measurement, Alice teleports a qubit to Bob, who finishes the measurement
locally. (For the procedure outlined in the preceding section,
this restriction amounts to setting $L$ equal to 1.) We refer to the minimum entanglement cost entailed by any such procedure
as the ``single-round cost''. In this section we find upper and lower bounds on the
single-round cost of $M_a$.
The minimal cost of the specific procedure outlined in Section II, when it is restricted to a single round,
is given by
\begin{equation} \label{oneround}
\hbox{cost}\; = h(x^2) +\left[1- \frac{1}{(a/x)^2 + (b/y)^2}\right],
\end{equation}
where the value
of $x$ is chosen so as to minimize the cost. (Here $x^2 + y^2 = 1$ as before.)
The two terms of Eq.~(\ref{oneround}) are easy to interpret: the first term is the entanglement
of the shared resource that is used up in any case, and the second term, obtained
from Eq.~(\ref{probability}), is the probability
of failure (multiplied by the 1 ebit associated with the resulting teleportation).
Numerically minimizing the cost over values of $x$, we obtain the upper curve
in Fig.~2, which is thus an upper bound on the single-round cost of $M_a$.
The same upper bound was obtained by Ye, Zhang, and Guo for performing the
corresponding nonlocal unitary transformation \cite{Ye}.
We can also find a good lower bound for such procedures, using
a known upper bound on the probability of achieving a certain increase in the entanglement of
a single copy
through local operations and classical communication \cite{Vidal, Jonathan}. As in all our lower
bound arguments, we consider a state in which qubits A and B are initially entangled with auxiliary qubits C and D,
which will not be involved in the measurement.
(As before, Alice holds qubits A and C,
and Bob holds B and D.) For our present purpose, we choose the initial state to be
\begin{equation} \label{fourqubits}
\begin{split}
|\xi\rangle = \frac{1}{2}&\left[{|\phi^+_a\rangle}_{AB} {|\phi^+_c\rangle}_{CD}
+ {|\phi^-_a\rangle}_{AB} {|\phi^-_c\rangle}_{CD} \right. \\
&+\left. {|\psi^+_a\rangle}_{AB} {|\psi^+_c\rangle}_{CD}
+ {|\psi^-_a\rangle}_{AB} {|\psi^-_c\rangle}_{CD} \right].
\end{split}
\end{equation}
Here the states with the index $c$ are defined as in Eq.~(\ref{Mstates}), but with
$c$ and $d$ in place of $a$ and $b$.
We assume for definiteness
that $1 > c > d > 0$, with $c$ and $d$ to be determined later.
Note that again the reduced state of qubits A and B, after tracing out the auxiliary qubits,
is the completely mixed state, as it must be to be consistent with our
definition of the entanglement cost.
One can show directly from Eq.~(\ref{fourqubits}) that the eigenvalues
of the density matrix of Alice's (or Bob's) part of the system, that is, the squared Schmidt
coefficients, are
\begin{equation}
(ac+bd)^2 \hspace{2mm} \hbox{and} \hspace{2mm} (ad-bc)^2.
\end{equation}
In addition to these qubits, Alice and Bob hold their entangled resource,
which we can take without loss of generality to be in the state
\begin{equation}
|\phi^+_x\rangle = x|00\rangle + y|11\rangle.
\end{equation}
They now try to execute the measurement by using up this resource.
If Alice and Bob succeed in distinguishing
the four states $\{|\phi^+_a\rangle, |\phi^-_a\rangle, |\psi^+_a\rangle,
|\psi^-_a\rangle\}$, they will have collapsed qubits C and D into one of the
four corresponding states represented in $|\xi\rangle$. Each of these states
has Schmidt coefficients $c^2$ and $d^2$. Using a result of
Jonathan and Plenio \cite{Jonathan}, we can place an upper bound on the probability
of achieving the transformation from the state $|\xi\rangle \otimes |\phi^+_x\rangle$
to one of the four desired final states of qubits C and D. This probability cannot be larger
than
\begin{equation} \label{JP}
\frac{\sum_{j=\ell}^4 \alpha_j}{\sum_{j=\ell}^2 \beta_j},
\end{equation}
where $\alpha_j$ and $\beta_j$ are, respectively, the squared Schmidt coefficients
of the initial state and any of the desired final states, in decreasing order.
(There are at most four nonzero Schmidt coefficients in the initial state; hence the
upper limit 4 in the numerator. Similarly, the upper limit 2 in the denominator
reflects the fact that the final state, a state of C and D, has at most
two nonzero Schmidt coefficients.) In general $\ell$
can take any value from 1 to the number of nonzero Schmidt coefficients
of the final state. In our problem there are only two values of $\ell$ to consider.
The case $\ell=1$ tells us only that the probability does not exceed unity; so
the only actual constraint comes from the case $\ell=2$, which tells us that
\begin{equation}
\hbox{the success probability} \; \le \frac{1 - (ac+bd)^2 x^2}{1-c^2}.
\end{equation}
The cost of any single-round procedure is therefore at least
\begin{equation} \label{concave}
\hbox{cost}\; \ge h(x^2) + \max\left\{ 0, \left[1 - \frac{1 - (ac+bd)^2 x^2}{1-c^2}\right]\right\},
\end{equation}
since a failure will lead to a cost of one ebit for the teleportation.
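The probability bound of Eq.~(\ref{JP}) can be checked numerically. The sketch below is our own illustration (the parameter values are arbitrary); it evaluates the tail-sum ratios for the Schmidt coefficients at hand and confirms that the $\ell = 2$ case reproduces the expression just derived:

```python
import math

def jp_bound(alphas, betas):
    """Upper bound on the single-copy conversion probability
    (Vidal / Jonathan-Plenio): the minimum over l of the ratio of
    tail sums of squared Schmidt coefficients, sorted decreasingly.
    Here index l = 0, 1 corresponds to the text's l = 1, 2."""
    al = sorted(alphas, reverse=True)
    be = sorted(betas, reverse=True)
    return min(sum(al[l:]) / sum(be[l:]) for l in range(len(be)))

a = 0.95; b = math.sqrt(1 - a * a)     # measurement parameters
c = 0.80; d = math.sqrt(1 - c * c)     # auxiliary-state parameters
x = 0.90; y = math.sqrt(1 - x * x)     # resource-state parameters

# Squared Schmidt coefficients of |xi> tensor |phi_x^+> across Alice|Bob:
alphas = [(a * c + b * d) ** 2 * x * x, (a * c + b * d) ** 2 * y * y,
          (a * d - b * c) ** 2 * x * x, (a * d - b * c) ** 2 * y * y]
betas = [c * c, d * d]                 # desired final state of C and D

p_max = jp_bound(alphas, betas)
p_text = (1 - (a * c + b * d) ** 2 * x * x) / (1 - c * c)  # the l = 2 case
```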
\begin{figure}[h]
\begin{tabular}{l}
Bounds on the single-round cost\\
\includegraphics[scale=1]{oneround4.pdf}
\end{tabular}
{\centering Entanglement of the states}
\smallskip
\caption{Upper and lower bounds on the entanglement cost of the measurement $M_a$, when
Alice and Bob are restricted to a single round before resorting to teleportation.} \label{oneroundgraph}
\end{figure}
Alice and Bob will choose their resource pair, that is, they will choose
the value of $x$, so as to minimize the cost. So we want to find a value
of $x$ that minimizes the right-hand side of Eq.~(\ref{concave}). Because the
probability of failure cannot be less than zero, we can restrict our attention
to values of $x$ in the range
\begin{equation} \label{constraint}
c/(ac+bd) \le x \le 1.
\end{equation}
In this range, the cost is a concave function of $x^2$; so the function
achieves its minimum value at one of the two endpoints.
We thus have the following lower bound on the cost
of any single-round procedure:
\begin{equation} \label{ABchoice}
\hbox{cost}\; \ge \min\left\{ h\left[\frac{c^2}{(ac+bd)^2}\right] ,
\frac{(ac+bd)^2 - c^2}{1-c^2} \right\}.
\end{equation}
This bound holds for any value of $c$ for which it is defined. To make the bound as strong
as possible, we want to maximize it over all values of $c$. In the range
$1/\sqrt{2} \le c \le a$, the
first entry in Eq.~(\ref{ABchoice}) is a decreasing function of $c$,
whereas the second entry is increasing. (For larger values of $c$,
both functions are decreasing until they become undefined
at $c = (ac+bd)$. Beyond this point we would violate Eq.~(\ref{constraint}).)
Therefore, we achieve the strongest bound when the two entries are
equal. That is, we have obtained the following result.
\medskip
\noindent {\bf Proposition 4.} The single-round cost of the measurement $M_a$ is
bounded below by the quantity
\begin{equation}
\frac{(ac+bd)^2 - c^2}{1-c^2}, \label{lowerbound1}
\end{equation}
where $d = (1-c^2)^{1/2}$ and $c$ is determined by the equation
\begin{equation}
h\left[\frac{c^2}{(ac+bd)^2}\right]=\frac{(ac+bd)^2 - c^2}{1-c^2}. \label{lowerbound2}
\end{equation}
\medskip
We have solved this equation numerically for a range of values of
$a$ and have obtained the lower of the two curves in Fig.~2.
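Since the first entry of Eq.~(\ref{ABchoice}) decreases and the second increases on the relevant interval, the crossing point can be found by bisection. The following is a minimal sketch of that computation (ours; the numerics actually used for the figure may differ):

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def entries(a, c):
    """The two entries of Eq. (ABchoice), with b, d the complements."""
    b, d = math.sqrt(1 - a * a), math.sqrt(1 - c * c)
    s = (a * c + b * d) ** 2
    return h(c * c / s), (s - c * c) / (1 - c * c)

def prop4_bound(a, iters=100):
    """Bisect on [1/sqrt(2), a] for the c at which the two entries
    coincide; the common value is the Proposition 4 lower bound."""
    lo, hi = 1.0 / math.sqrt(2.0), a
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        first, second = entries(a, mid)
        if first > second:   # still to the left of the crossing
            lo = mid
        else:
            hi = mid
    c = 0.5 * (lo + hi)
    return c, entries(a, c)[1]

c_root, lb = prop4_bound(0.95)  # lower bound on the single-round cost
```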
For very weakly entangled eigenstates---that is, at the left-hand end
of the graph where the parameter $b$ is small---the single-round upper bound and the single-round
lower bound shown in the figure are very close to each other. In fact, we find analytically that
for small $b$, both the upper and lower bounds
can be approximated by the function $2b\sqrt{\log (1/b)}$, in the sense that the ratio of each bound
with this function approaches unity as $b$ approaches zero. (See the Appendix for
the argument.) Or, in terms of the entanglement ${\mathcal E}$ of the states, we can say that for small $b$,
the single-round cost of the measurement is approximately equal to
$\sqrt{2{\mathcal E}}$.
Thus in this limit, we have a very
good estimate of the cost of the measurement, but only if we restrict
Alice and Bob to a single round.
We would prefer to have a lower bound that applies to any conceivable
procedure, and that is still better than the general
lower bound we derived in Section I. We obtain such a bound in the following section.
\section{IV. An absolute lower bound for {\boldmath $M$}\hspace{-0.5mm}$_a$}
Again, we imagine a situation in which Alice and Bob hold two auxiliary qubits C and D
that will not be involved in the measurement. We assume the same initial state
as in Section III:
\begin{equation}
\begin{split}
|\xi\rangle = \frac{1}{2}&\left[{|\phi^+_a\rangle}_{AB} {|\phi^+_c\rangle}_{CD}
+ {|\phi^-_a\rangle}_{AB} {|\phi^-_c\rangle}_{CD} \right. \\
&+\left. {|\psi^+_a\rangle}_{AB} {|\psi^+_c\rangle}_{CD}
+ {|\psi^-_a\rangle}_{AB} {|\psi^-_c\rangle}_{CD} \right].
\end{split}
\end{equation}
As before, we are interested in the entanglement between Alice's part of the system
and Bob's part, that is, between AC and BD\@. This entanglement is
\begin{equation}
{\mathcal E}_{\hbox{\scriptsize initial}} = h[(ac+bd)^2].
\end{equation}
If Alice and Bob perform
the measurement $M_a$, the final entanglement of CD is
\begin{equation}
{\mathcal E}_{\hbox{\scriptsize final}} = h(c^2).
\end{equation}
The quantity ${\mathcal E}_{\hbox{\scriptsize final}} - {\mathcal E}_{\hbox{\scriptsize initial}}$
is thus a lower bound on the cost of $M_a$, as expressed in the following proposition.
\medskip
\noindent {\bf Proposition 5.} Let $c$ satisfy $0 \le c \le 1$, and let $d=(1-c^2)^{1/2}$.
Then $C(M_a) \ge h(c^2) - h[(ac+bd)^2]$.
\medskip
By maximizing this quantity numerically over the parameter $c$, we get our best absolute lower
bound on the entanglement cost $C(M_a)$. This bound is plotted in
Fig.~1.
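Maximizing the expression of Proposition 5 numerically takes only a few lines. The sketch below is our illustration (the grid resolution is arbitrary); for one value of $a$ it also exhibits the observation made in the text that the bound exceeds the entanglement $h(a^2)$ of the eigenstates themselves:

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def prop5_bound(a, grid=2000):
    """Maximize h(c^2) - h[(ac+bd)^2] over c on a grid,
    with b and d the complements of a and c."""
    b = math.sqrt(1 - a * a)
    best = 0.0
    for i in range(1, grid):
        c = i / grid
        d = math.sqrt(1 - c * c)
        best = max(best, h(c * c) - h((a * c + b * d) ** 2))
    return best

a = 0.95
bound = prop5_bound(a)   # absolute lower bound on C(M_a)
states = h(a * a)        # entanglement of the eigenstates of M_a
```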
What is most interesting about this bound is that, except at the extreme points
where the eigenstates of the measurement are either all unentangled or
all maximally entangled, the bound is strictly larger than the entanglement
of the eigenstates themselves. This is another example, then, showing
that the nonseparability of the measurement can exceed the nonseparability
of the states that the measurement distinguishes.
Not only is our new lower bound absolute in the sense that it does not depend
on the number of rounds used by Alice and Bob; it applies even asymptotically.
Suppose, for example, that Alice and Bob are given $n$ pairs of qubits and
are asked to perform the same measurement $M_a$ on each pair. It is
conceivable that by using operations that involve all $n$ pairs, Alice and
Bob might achieve an efficiency not possible when they are performing
the measurement only once. Even in this setting, the lower bound given
in Proposition 5 applies. That is, the cost of performing the measurement
$n$ times must be at least $n$ times our single-copy lower bound. To see
this, imagine that {\em each} of the given pairs of qubits is initially entangled
with a pair of auxiliary qubits. Both the initial entanglement of the whole system
(that is, the entanglement between Alice's side and Bob's side), and the
final entanglement after the measurement, are simply proportional to $n$,
so that the original argument carries over to this case.
It is interesting to look at the behavior of the upper and lower bounds as
the parameter
$b$ approaches zero, that is, as the eigenstates of the measurement approach
product states. Berry has done this analysis for the upper bound and has found
that for small $b$, the cost is proportional to $b$, with proportionality constant
5.6418. For our lower bound, it is a question of finding the value of $c$
(with $c^2+d^2 = 1$) that maximizes the difference
\begin{equation}
h(c^2) - h[(ac+bd)^2] \label{diff}
\end{equation}
for small $b$.
One finds that for small $b$, the optimal value of $c$ approaches the constant
value $c = 0.95749$, with $d = (1-c^2)^{1/2} = 0.28848$ (these solve the equation $(c^2 - d^2)\ln(c/d) = 1$,
which is symmetric under interchanging $c$ and $d$), for which the bound
is approximately equal to $1.9123b$. Comparing this result
with the upper bound in the limit of vanishingly small entanglement, $5.6418b$, we see that there is still a sizable
gap between the two bounds.
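The limiting coefficient can be verified directly; the following check is our own (note that the equation determining the optimal parameter is symmetric under interchanging $c$ and $d$, so it is solved both by $0.28848$ and by its complement $0.95749$):

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

b = 1e-3                       # small measurement parameter
a = math.sqrt(1 - b * b)
grid = 100000
best, c_best = 0.0, 0.0
for i in range(1, grid):
    c = i / grid
    d = math.sqrt(1 - c * c)
    v = h(c * c) - h((a * c + b * d) ** 2)
    if v > best:
        best, c_best = v, c

coeff = best / b               # should be close to 1.9123 for small b
d_best = math.sqrt(1 - c_best * c_best)
# residual of the optimality condition (c^2 - d^2) ln(c/d) = 1:
residual = (c_best ** 2 - d_best ** 2) * math.log(c_best / d_best) - 1.0
```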
The same limiting form, $1.9123b$, appears in Ref.~\cite{Dur} as the entanglement production
capacity of the unitary transformation of Eq.~(\ref{unitary}) for small $b$. In fact, by extending the argument of Ref.~\cite{Dur} to non-infinitesimal
transformations, one obtains the entire lower-bound curve in Fig.~1. Thus our lower bound for the cost
of the measurement $M_a$ is also a lower bound for the cost of the corresponding unitary
transformation. We note, though, that the two optimization problems are not quite the same.
To get a bound on the cost of the measurement, we maximized
$h(c^2) - h[(ac+bd)^2]$. To find the entanglement production capacity of the unitary transformation,
one maximizes $h[(ac+bd)^2]-h(c^2)$. Though the questions are different, it is not hard to show that the maximum {\em value} is the same in both
cases.
\section{V. Eigenstates with Unequal Entanglements}
We now consider the following variation on the measurement $M_a$.
It is an orthogonal measurement, which we call $M_{a,c}$, with eigenstates
\begin{equation}
\begin{split}
&|\phi^+_a\rangle = a|00\rangle + b|11\rangle, \hspace{5mm} |\phi^-_a\rangle = b|00\rangle
-a|11\rangle, \\
&|\psi^+_c\rangle = c|01\rangle + d|10\rangle, \hspace{5mm} |\psi^-_c\rangle = d|01\rangle
-c|10\rangle,
\end{split}
\end{equation}
where all the coefficients are real and nonnegative
and all the states are normalized. For this measurement
we again use the entanglement production argument to get a lower bound.
In this case we take the initial state of qubits ABCD to be
\begin{equation}
\begin{split}
|\eta\rangle = \frac{1}{2}&\left[|\phi^+_a\rangle_{AB}|\phi^+_{a'}\rangle_{CD}
+|\phi^-_a\rangle_{AB}|\phi^-_{a'}\rangle_{CD} \right. \\
&\left. + |\psi^+_c\rangle_{AB}|\psi^+_{c'}\rangle_{CD}
+|\psi^-_c\rangle_{AB}|\psi^-_{c'}\rangle_{CD}\right], \label{genstate}
\end{split}
\end{equation}
where the real parameters $a'$ and $c'$ are to be adjusted to achieve the
most stringent lower bound. This initial state has an entanglement
between Alice's location and Bob's location (that is, between AC and BD) equal to
the Shannon entropy of the following four probabilities:
\begin{equation}
\begin{split}
(aa' + bb' + cc' + dd')^2/4, \hspace{5mm} (aa' + bb' - cc' - dd')^2/4, \\
(ab' - ba' + dc' - cd')^2/4, \hspace{5mm} (ab' - ba' -dc' + cd')^2/4.
\end{split}
\end{equation}
Once the measurement is completed, the
final entanglement of the CD system, on average,
is
\begin{equation}
[h(a'^2) + h(c'^2)]/2.
\end{equation}
The difference between the final entanglement and the initial entanglement
is a lower bound on $C(M_{a,c})$, which we want to maximize by our choice
of $a'$ and $c'$. We have again done the maximization numerically, for many values of
the measurement parameters $a$ and $c$, covering their domain quite densely.
We plot the results in Fig.~3.
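The optimization behind Fig.~3 is straightforward to reproduce. The sketch below is our illustration (a crude grid stands in for whatever optimizer was actually used); it evaluates the bound for a single pair $(a,c)$ and confirms that it exceeds the average entanglement of the eigenstates:

```python
import math

def H(ps):
    """Shannon entropy in bits of a probability list."""
    return -sum(p * math.log2(p) for p in ps if p > 0.0)

def h(p):
    """Binary entropy in bits."""
    return H([p, 1 - p]) if 0.0 < p < 1.0 else 0.0

def lower_bound_Mac(a, c, grid=100):
    """Maximize the entanglement gain over a', c' for M_{a,c}."""
    b, d = math.sqrt(1 - a * a), math.sqrt(1 - c * c)
    best = 0.0
    for i in range(1, grid):
        ap = i / grid
        bp = math.sqrt(1 - ap * ap)
        for j in range(1, grid):
            cp = j / grid
            dp = math.sqrt(1 - cp * cp)
            # the four probabilities of the initial state |eta>
            probs = [(a*ap + b*bp + c*cp + d*dp) ** 2 / 4,
                     (a*ap + b*bp - c*cp - d*dp) ** 2 / 4,
                     (a*bp - b*ap + d*cp - c*dp) ** 2 / 4,
                     (a*bp - b*ap - d*cp + c*dp) ** 2 / 4]
            final = (h(ap * ap) + h(cp * cp)) / 2
            best = max(best, final - H(probs))
    return best

a = c = 0.95                        # equal entanglements: reduces to M_a
bound = lower_bound_Mac(a, c)
avg_ent = (h(a * a) + h(c * c)) / 2  # average entanglement of the eigenstates
```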
\begin{figure}[h]
\begin{tabular}{l}
Lower bound on the cost\\
\includegraphics[scale=0.76]{figure3.pdf}
\end{tabular}
{\centering Average entanglement of the states}
\smallskip
\caption{Lower bound for the measurement $M_{a,c}$ as computed from the pure state given in Eq.~(\ref{genstate}), plotted against the average entanglement
of the eigenstates (dashed line). Different values of the pair $(a,c)$
can be associated with the same point on the horizontal axis but may yield
different lower bounds, as indicated by the gray area. The point touching the
dashed line in the middle of the graph represents a measurement with two Bell
states and two product states, as in Eq.~(\ref{M3}).}
\end{figure}
In almost every case, the resulting lower bound is {\em higher} than the average
entanglement of the eigenstates of the measurement. The only exceptions
we have found, besides the ones already mentioned in Section IV (in which all the
states are maximally entangled or all are unentangled), are those for which two of the measurement eigenstates
are maximally entangled and the other two are unentangled. That is, this method
does not produce a better lower bound for the measurement with eigenstates
\begin{equation}
|\Phi^+\rangle, \; |\Phi^-\rangle, \; |01\rangle, \; |10\rangle, \label{M3}
\end{equation}
or for the analogous measurement with $|\Phi^\pm\rangle$ replaced
by $|\Psi^\pm\rangle$ and with the product states suitably replaced to
make the states mutually orthogonal. In all other cases the cost of the measurement is strictly
greater than the average entanglement of the states.
The measurement $M_{a,c}$ has been considered in Ref.~\cite{Ghosh}, whose results
likewise give a lower bound on the cost: $C(M_{a,c}) \ge 1-\log(a^2 + c^2)$ (where $a \ge b$
and $c \ge d$). This bound is weaker than the one we have obtained, in part because
we have followed the later paper Ref.~\cite{Horodecki} in assuming an initial pure state rather than
a mixed state of ABCD.
\section{VI. Measurements for Which the General Lower Bound Can Be Achieved}
Here we consider a class of measurements for which the cost {\em equals} the average
entanglement of the states associated with the POVM elements. We begin with another two-qubit
measurement, which we then generalize to arbitrary dimension.
\subsection{1. An eight-outcome measurement}
A measurement closely related to $M_a$ is measurement $M^{(8)}_{a}$, which has eight outcomes,
represented by a POVM
whose elements $\Pi_i = \alpha_i|\phi_i\rangle\langle\phi_i|$
all have $\alpha_i = 1/2$, with the eight states $|\phi_i\rangle$ given by
\begin{equation}
\begin{split}
|\phi^+_a\rangle = a|00\rangle + b|11\rangle, \hspace{5.2mm}& |\phi^-_a\rangle = b|00\rangle
-a|11\rangle \phantom{;} \\
|\psi^+_a\rangle = a|01\rangle + b|10\rangle, \hspace{5mm}& |\psi^-_a\rangle = b|01\rangle
-a|10\rangle \phantom{.} \\
|\phi^+_b\rangle = b|00\rangle + a|11\rangle, \hspace{5.2mm}& |\phi^-_b\rangle = a|00\rangle
-b|11\rangle \phantom{;} \\
|\psi^+_b\rangle = b|01\rangle + a|10\rangle, \hspace{5mm}& |\psi^-_b\rangle = a|01\rangle
-b|10\rangle.
\end{split}
\end{equation}
That is, they are the same states as in $M_a$, plus the four states obtained by
interchanging $a$ and $b$. Thus, Alice and Bob could perform the measurement
$M^{(8)}_a$ by flipping a fair coin to decide whether to perform $M_a$ or $M_b$.
This procedure yields the eight possible outcomes: there are two possible outcomes of the coin toss,
and for each one, there are four possible outcomes of the chosen measurement.
The
coin toss requires no entanglement; so the cost of this procedure is equal to the
cost of $M_a$ (which is equal to that of $M_b$). We conclude that
\begin{equation}
C(M^{(8)}_a) \le C(M_a).
\end{equation}
As we will see shortly, the cost of $M^{(8)}_a$ is in fact strictly smaller for $0<a<1$.
The measurement $M^{(8)}_a$ is a non-orthogonal measurement, but any non-orthogonal
measurement can be performed by preparing an auxiliary system in a known
state and then performing a global orthogonal measurement on the combined system.
We now show explicitly how to perform this particular measurement, in a way that
will allow us to determine the value of $C(M^{(8)}_a)$. To do the measurement,
Alice and Bob draw, from their store of entanglement, the entangled state
$|\phi^+_a\rangle = a|00\rangle + b|11\rangle$ of qubits C and D. (As always, Alice holds C and Bob holds D.)
Then each of them locally performs the Bell measurement $\{|\Phi^+\rangle, |\Phi^-\rangle,
|\Psi^+\rangle, |\Psi^-\rangle\}$ on his or her pair of qubits.
The resulting 16-outcome orthogonal measurement on ABCD defines a 16-outcome POVM on just
the two qubits A and B. For each outcome $k$ of the global orthogonal measurement, we can find
the corresponding POVM element $\Pi_k$ of the AB measurement as follows:
\begin{equation}
\Pi_k = \hbox{tr}_{CD}\{ \pi_k [I_{AB} \otimes (|\phi^+_a\rangle\langle\phi^+_a|)_{CD}] \},
\end{equation}
where $\pi_k$ is the $k$th POVM element of the global measurement. Less formally,
we can achieve the same result by taking the ``partial inner product'' between
the initial state $|\phi^+_a\rangle$ of the system CD and the $k$th eigenstate of the global
measurement. For example, the eigenstate $|\Phi^+\rangle|\Phi^+\rangle$ yields
the following partial inner product:
\begin{equation}
(\langle \phi^+_a|_{CD})(|\Phi^+\rangle_{AC}|\Phi^+\rangle_{BD}),
\end{equation}
which works out to be $(1/2)|\phi^+_a\rangle_{AB}$. The corresponding POVM element
on the AB system is $(1/4)|\phi^+_a\rangle\langle\phi^+_a|$. Continuing in this way,
one finds the following correspondence between the 16 outcomes of the global measurement and the POVM
elements of the AB measurement.
\begin{equation}
\begin{split}
&|\Phi^+\rangle|\Phi^+\rangle\hspace{2mm} \hbox{or}\hspace{2mm} |\Phi^-\rangle|\Phi^-\rangle\hspace{2mm}\rightarrow \hspace{2mm}\frac{1}{4}|\phi^+_a\rangle
\langle\phi^+_a| \\
&|\Phi^+\rangle|\Phi^-\rangle\hspace{2mm} \hbox{or}\hspace{2mm} |\Phi^-\rangle|\Phi^+\rangle\hspace{2mm}\rightarrow \hspace{2mm}\frac{1}{4}|\phi^-_b\rangle
\langle\phi^-_b| \\
&|\Phi^+\rangle|\Psi^+\rangle\hspace{2mm} \hbox{or}\hspace{2mm} |\Phi^-\rangle|\Psi^-\rangle\hspace{2mm}\rightarrow \hspace{2mm}\frac{1}{4}|\psi^+_a\rangle
\langle\psi^+_a| \\
&|\Phi^+\rangle|\Psi^-\rangle\hspace{2mm} \hbox{or}\hspace{2mm} |\Phi^-\rangle|\Psi^+\rangle\hspace{2mm}\rightarrow \hspace{2mm}\frac{1}{4}|\psi^-_b\rangle
\langle\psi^-_b| \\
&|\Psi^+\rangle|\Psi^+\rangle\hspace{2mm} \hbox{or}\hspace{2mm} |\Psi^-\rangle|\Psi^-\rangle\hspace{2mm}\rightarrow \hspace{2mm}\frac{1}{4}|\phi^+_b\rangle
\langle\phi^+_b| \\
&|\Psi^+\rangle|\Psi^-\rangle\hspace{2mm} \hbox{or}\hspace{2mm} |\Psi^-\rangle|\Psi^+\rangle\hspace{2mm}\rightarrow \hspace{2mm}\frac{1}{4}|\phi^-_a\rangle
\langle\phi^-_a| \\
&|\Psi^+\rangle|\Phi^+\rangle\hspace{2mm} \hbox{or}\hspace{2mm} |\Psi^-\rangle|\Phi^-\rangle\hspace{2mm}\rightarrow \hspace{2mm}\frac{1}{4}|\psi^+_b\rangle
\langle\psi^+_b| \\
&|\Psi^+\rangle|\Phi^-\rangle\hspace{2mm} \hbox{or}\hspace{2mm} |\Psi^-\rangle|\Phi^+\rangle\hspace{2mm}\rightarrow \hspace{2mm}\frac{1}{4}|\psi^-_a\rangle
\langle\psi^-_a|
\end{split}
\end{equation}
Thus, even though there are formally 16 outcomes of the AB measurement, they are equal in pairs, so that there are only eight distinct outcomes, and they are indeed the outcomes of
the measurement $M^{(8)}_a$.
The cost of this procedure is ${\mathcal E}(|\phi^+_a\rangle) = h(a^2)$. This is the
same as the average entanglement of the eight states representing the outcomes of $M^{(8)}_a$,
which we know is a lower bound on the cost. Thus the lower bound is achievable in this case,
and we can conclude that $C(M^{(8)}_a)$ is exactly equal to $h(a^2)$.
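The correspondence table and the completeness of the resulting POVM can be verified mechanically. The numpy sketch below is our own check (the index conventions and the sample value of $a$ are ours); it computes the partial inner products for the 16 outcomes and confirms both the $|\Phi^+\rangle|\Phi^+\rangle$ entry of the table and the completeness relation on AB:

```python
import numpy as np

a = 0.8
b = np.sqrt(1 - a * a)
s = 1 / np.sqrt(2)

# Bell basis as 2x2 tensors, T[i, j] = <ij|Bell>.
bell = [s * np.array([[1, 0], [0, 1.0]]),    # |Phi+>
        s * np.array([[1, 0], [0, -1.0]]),   # |Phi->
        s * np.array([[0, 1], [1, 0.0]]),    # |Psi+>
        s * np.array([[0, 1], [-1, 0.0]])]   # |Psi->

phi_a = np.array([[a, 0], [0, b]])           # CD resource: a|00> + b|11>

def partial_inner(T, S):
    """<phi_a|_CD (|T>_AC |S>_BD), as an unnormalized AB state r[A, B]."""
    return np.einsum('cd,ac,bd->ab', phi_a, T, S)

# Outcome |Phi+>|Phi+> should collapse AB onto (1/2)|phi_a^+>:
r00 = partial_inner(bell[0], bell[0])

# The 16 unnormalized outcome states give POVM elements r r^T that
# must sum to the identity on the two qubits A and B:
povm_sum = sum(np.outer(partial_inner(T, S).reshape(4),
                        partial_inner(T, S).reshape(4))
               for T in bell for S in bell)
```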
We note that the POVM $M^{(8)}_a$ is invariant under all local Pauli operations. This fact leads
us to ask whether, more generally, invariance under such operations guarantees that
the entanglement cost of the measurement is exactly equal to the average entanglement
of the states associated with the POVM elements. The next section shows that this is
indeed the case for complete POVMs.
\subsection{2. An arbitrary complete POVM invariant under local Pauli operations}
We begin by considering a POVM on a bipartite system of dimension $d \times d$,
generated by applying generalized Pauli operators to a single pure state $|\phi_0\rangle$.
The POVM elements are of the form $(1/d^2)|\psi_{j_1k_1j_2k_2}\rangle\langle\psi_{j_1k_1j_2k_2}|$, where
\begin{equation}
|\psi_{j_1k_1j_2k_2}\rangle = (Z^{j_1}X^{k_1} \otimes Z^{j_2}X^{k_2}) |\phi_0\rangle
\label{genPOVM}
\end{equation}
and each index runs from $0$ to $d-1$.
Here the generalized Pauli operators $X$ and $Z$ are defined by
\begin{equation}
X|m\rangle = |m+1\rangle, \hspace{1mm} Z|m\rangle = \omega^m |m\rangle, \hspace{1mm}
m = 0, \ldots, d-1,
\end{equation}
with $\omega = \exp(2\pi i/d)$ and with the addition understood
to be mod $d$. One can verify that the above construction generates a
POVM for any choice of $|\phi_0\rangle$.
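This completeness claim can be spot-checked numerically. The sketch below is our own verification (dimension, seed, and the random choice of $|\phi_0\rangle$ are arbitrary); it builds the $d^4$ states of Eq.~(\ref{genPOVM}) for $d=3$ and confirms that the POVM elements sum to the identity:

```python
import numpy as np

d = 3
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)   # X|m> = |m+1 mod d>
Z = np.diag(omega ** np.arange(d))  # Z|m> = omega^m |m>

rng = np.random.default_rng(1)
phi0 = rng.normal(size=d * d) + 1j * rng.normal(size=d * d)
phi0 /= np.linalg.norm(phi0)        # an arbitrary normalized |phi_0>

mp = np.linalg.matrix_power
total = np.zeros((d * d, d * d), dtype=complex)
for j1 in range(d):
    for k1 in range(d):
        for j2 in range(d):
            for k2 in range(d):
                U = np.kron(mp(Z, j1) @ mp(X, k1), mp(Z, j2) @ mp(X, k2))
                psi = U @ phi0
                # POVM element (1/d^2) |psi><psi|
                total += np.outer(psi, psi.conj()) / d ** 2
```

Because the Weyl--Heisenberg operators form a unitary 1-design on each factor, the sum is the identity for every choice of $|\phi_0\rangle$.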
In order to carry out this POVM, Alice and Bob use, as a resource, particles
C and D in the
state $|\phi_0\rangle^*$, which has the same entanglement as $|\phi_0\rangle$.
(As before, the asterisk indicates complex conjugation in the standard basis.)
Alice performs on AC, and Bob on BD, the generalized Bell measurement whose
eigenstates are
\begin{equation}
|B_{jk}\rangle = \frac{1}{\sqrt{d}}Z^j\otimes X^k\sum_{r=0}^{d-1}|r,r\rangle.
\end{equation}
To see that this method does effect the desired POVM, we compute the partial inner
products as in the preceding subsection:
\begin{equation}
\begin{split}
&\left(\langle \phi_0|_{CD}^*\right) \left( |B_{j_1k_1}\rangle_{AC} \otimes |B_{j_2k_2}\rangle_{BD} \right)\\
&=\frac{1}{d}\sum_{r_1,r_2}\langle\phi_0|_{CD}^*Z_A^{j_1}X_C^{k_1}Z_B^{j_2}X_D^{k_2}|r_1,r_1\rangle_{AC}|r_2,r_2\rangle_{BD}\\
&= \frac{1}{d}\sum_{r_1,r_2}Z_A^{j_1}Z_B^{j_2}|r_1,r_2\rangle_{AB}\langle \phi_0|_{CD}^*X_C^{k_1}X_D^{k_2}|r_1,r_2\rangle_{CD}\\
&=\frac{1}{d}Z_A^{j_1}Z_B^{j_2}\sum_{r_1,r_2}|r_1,r_2\rangle_{AB}
\langle r_1,r_2|_{CD} X_C^{k_1}X_D^{k_2}|\phi_0\rangle_{CD}\\
&=\frac{1}{d}Z_A^{j_1}Z_B^{j_2}X_A^{k_1}X_B^{k_2}|\phi_0\rangle_{AB}\\
&=\frac{1}{d}(Z^{j_1}X^{k_1} \otimes Z^{j_2}X^{k_2}) |\phi_0\rangle_{AB}.
\end{split}
\end{equation}
Thus the combination of Bell measurements yields the POVM defined by Eq.~(\ref{genPOVM}).
We now extend this example to obtain the following result.
\medskip
\noindent {\bf Proposition 6.} Let $M$ be any complete POVM with a finite number of outcomes, acting on a
pair of systems each having a $d$-dimensional state space, such that $M$ is
invariant under all local Pauli operations, that is, under the group generated by
$X\otimes I$, $Z\otimes I$, $I\otimes X$, and $I\otimes Z$. Then $C(M)$ is equal to
the average entanglement of the states associated with the outcomes of $M$, as expressed
in Eq.~(\ref{ave}).
\medskip
\noindent {\em Proof.} The most general such POVM is similar to the
one we have just considered, except that instead of a single starting state $|\phi_0\rangle$,
there may be an ensemble of states $|\phi_s\rangle$ with weights $p_s$, $s=1,\ldots, m$,
such that $\sum p_s = 1$. The POVM elements (of which there are a total of $md^4$) are
$(p_s/d^2)|\psi_{j_1k_1j_2k_2;s}\rangle\langle\psi_{j_1k_1j_2k_2;s}|$, where
\begin{equation}
|\psi_{j_1k_1j_2k_2;s}\rangle = (Z^{j_1}X^{k_1} \otimes Z^{j_2}X^{k_2}) |\phi_s\rangle.
\end{equation}
(So $p_s/d^2$ plays the role of $\alpha_i$ in Eq.~(\ref{ave}).)
In order to perform this measurement, Alice and Bob first make a random choice
of the value of $s$, using the weights $p_s$. They then use, as a resource, particles
C and D in the state $|\phi_s\rangle^*$, and perform Bell measurements as above.
The cost of this procedure is the average entanglement of the resource states, which is
\begin{equation}
\begin{split}
\hbox{cost} &= \sum_s p_s {\mathcal E}(|\phi_s\rangle) \\
&= \frac{1}{d^2}\sum_{j_1k_1j_2k_2s} \frac{p_s}{d^2} {\mathcal E}(|\psi_{j_1k_1j_2k_2;s}\rangle) \\
&=\langle {\mathcal E} \rangle .
\end{split}
\end{equation}
But we know that $\langle {\mathcal E} \rangle$ is a lower bound on $C(M)$. Since the above
procedure achieves this bound, we have that
$C(M) = \langle {\mathcal E} \rangle$. \hfill $\square$
\section{VII. Discussion}
As we discussed in the Introduction, a general lower bound on the entanglement
cost of a complete measurement is the average entanglement of the pure states
associated with the measurement's outcomes. Perhaps the most interesting
result of this paper is that, for almost all the orthogonal measurements we considered,
the actual cost is strictly greater than this lower bound. The same is true
in the examples of ``nonlocality without entanglement'', in which the average
entanglement is zero but the cost is strictly positive. However, whereas those earlier examples may have
seemed special because of their intricate construction, the examples given
here are quite simple.
The fact that the cost in these simple cases exceeds the average entanglement of the states
suggests that this feature may be a generic property of bipartite measurements.
If this is true, then in this sense the nonseparability of a measurement is
generically a distinct property from the nonseparability of the eigenstates.
(In this connection it is interesting that for certain questions of distinguishability
of generic bipartite states, the presence or absence of entanglement seems to be completely
irrelevant \cite{WalgateScott}.)
We have also found a class of measurements for which the entanglement cost is {\em equal to}
the average entanglement of the corresponding states. These measurements have a high degree of
symmetry in that they are invariant under all local generalized Pauli operations.
What is it that causes some measurements to be ``more nonseparable'' than the states
associated with their outcomes? Evidently the answer must have to do with the {\em relationships} among the
states. In the original ``nonlocality without entanglement'' measurement, the crucial
role of these relationships is clear: in order to separate any
eigenstate $|v\rangle$ from any other eigenstate $|w\rangle$ by a local measurement, the observer must disturb some of the other states
in such a way as to render them indistinguishable. One would like to have a similar understanding
of the ``interactions'' among states when the eigenstates are entangled.
Some recent papers
have quantified relational properties
of ensembles of bipartite states \cite{Horodecki2, Horodecki3}.
Perhaps one of these approaches, or a different approach yet to be developed,
will capture the aspect of these relationships that determines the cost of the
measurement.
\begin{acknowledgments}
We thank Alexei Kitaev, Debbie Leung, David Poulin, John Preskill,
Andrew Scott and Jon Walgate for valuable discussions and comments on the subject.
S.\,B.~is supported by Canada's Natural Sciences and Engineering Research Council
({\sc Nserc}).
G.\,B.~is supported by Canada's Natural
Sciences and Engineering Research Council ({\sc Nserc}),
the Canada Research Chair program,
the Canadian Institute for Advanced Research ({\sc Cifar}),
the Quantum\emph{Works} Network and the Institut transdisciplinaire d'informatique quantique (\textsc{Intriq}).
\end{acknowledgments}
\section{Introduction}
Melt-blowing is a widely used production method for polymer micro- and nanofibers that is economically attractive due to its low production costs. Fabrics of melt-blown fibers are nonwovens, used, e.g., as filters, hygiene products, and battery separators. Details on the technology can be found in \cite{dutton:p:2009, pinchuk:b:2002}. A typical setup of a melt-blowing device is illustrated in Fig.~\ref{sec:intro_fig:apparatus}. In the process, molten polymer is fed through a nozzle into a forwarding high-speed, highly turbulent air stream, where it is stretched and cooled down. The resulting fibers are laid down onto a collector, e.g., a conveyor belt. In contrast to melt-spinning processes, where the stretching is caused by a mechanical take-up, in melt-blowing the fiber jet thinning is due to the driving high-velocity air stream with its turbulent nature.
To deepen the understanding of the mechanism of jet thinning in melt-blowing, extensive and diverse studies have been performed in recent years, covering experimental investigations, e.g., \cite{bansal:p:1998, bresee:p:2003, ellison:p:2007, wu:p:1992, xie:p:2012}, combined experimental-numerical works, e.g., \cite{yarin:p:2010, uyttendaele:p:1990, yarin:p:2010b}, as well as numerical computations, e.g., \cite{chen:p:2003, shambaugh:p:2011, sun:p:2011, zeng:p:2011}.
However, so far there is an obvious gap between the experimental and numerical results for the achieved fiber thickness in the literature. The existing numerical simulations underestimate the fiber elongation by several orders of magnitude, cf.\ \cite{xie:p:2012, uyttendaele:p:1990, shambaugh:p:2011, zeng:p:2011}. While experimental studies show fiber elongations $e \sim \mathcal{O}(10^6)$, $e = A_{in}/A = d_{in}^2/d^2$, meaning a reduction of $10^3$ in diameter $d$ and of $10^6$ in cross-sectional area $A$ compared to the values at the nozzle (indicated by the index $_{in}$), simulated elongations are of order $e\sim\mathcal{O}(10^4)$. This is likely due to steady considerations and the neglect of turbulent aerodynamic effects \cite{chen:p:2003, chen:p:2005, shambaugh:p:2011}. Assuming an incompressible steady fiber jet, the relation $uA = u_{in}A_{in}$ with scalar jet speed $u$ holds true. Hence, the computed elongation is restricted by the velocity $\mathbf{v}_\star$ of the surrounding air stream, i.e., $e = u/u_{in} < \lVert \mathbf{v}_\star \rVert_\infty/u_{in}$. This estimate turns out to be valid also in (instationary) melt-blowing simulations where the surrounding airflow is computed on the basis of a turbulence model but only mean airflow information is taken into account in the aerodynamic driving of the fiber jet \cite{sun:p:2011, zeng:p:2011}. Experiments in \cite{yarin:p:2010, xie:p:2012} indicate the relevance of the turbulent effects for the jet thinning. In \cite{yarin:p:2010b} a viscoelastic fiber model based on an upper convected Maxwell description (UCM) has been employed for melt-blowing, where the fiber is subjected to random pulsations. This is done by applying perturbation frequencies to a rectilinear fiber jet, leading to bending instabilities and causing significant stretching and thinning of the jet.
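The order-of-magnitude argument above is easy to reproduce. The following Python sketch evaluates the steady incompressibility bound $e = u/u_{in} < \lVert\mathbf{v}_\star\rVert_\infty/u_{in}$; the nozzle and air speeds are illustrative assumptions, not process data.

```python
def elongation_bound(u_in, v_air_max):
    """Upper bound on the elongation e = A_in/A of a steady,
    incompressible fiber jet: u*A = u_in*A_in and the jet cannot
    move faster than the driving air stream, so e < v_air_max/u_in."""
    return v_air_max / u_in

# Illustrative values: nozzle speed 0.05 m/s, peak air speed 300 m/s.
e_max = elongation_bound(0.05, 300.0)  # ~6000, i.e. O(10^4)
# This stays far below the experimentally observed e ~ O(10^6),
# which is exactly the gap discussed in the text.
```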
The examination has been extended to multiple fibers, focusing on the prediction of fiber deposition patterns and fiber-size distributions in the resulting nonwovens in \cite{yarin:p:2011}. More recent works deal with the numerical investigation of the angular fiber distribution, the effect of the uptake velocity as well as the lay-down on a rotating drum \cite{sinharay:p:2013, ghosal:p:2016, ghosal:p:2016b}. In \cite{huebsch:p:2013} the significance of turbulence for melt-blowing has been approached by studying the effect of turbulent aerodynamic velocity fluctuations on a simplified fiber model of ordinary differential equations. There, a $k$-$\epsilon$ turbulence description of the high-speed airflow serves as the basis for the reconstruction of the velocity fluctuations, yielding a stochastic aerodynamic force acting on the fiber jet.
\begin{figure}[!t]
\centering
\includegraphics[height=5cm]{device_2-eps-converted-to.pdf}
\caption{Sketch of a typical melt-blowing setup.
}\label{sec:intro_fig:apparatus}
\end{figure}
The aim of this paper is to establish a numerical framework for fibers in turbulent air that makes the simulation of industrial melt-blowing processes feasible. For this purpose we bring together the two described approaches: we extend the random field sampling of \cite{huebsch:p:2013} to the instationary viscoelastic UCM fiber model of \cite{yarin:p:2010b}. Since the aerodynamic forces are the key driver of the fiber behavior, we employ a one-way coupling of the outer air stream with the fibers with the help of the force model given in \cite{marheineke:p:2009b}. Of particular importance is the efficient and robust realization, which enables us to present numerical results for an industrial setup with an appropriate viscoelastic description of the fiber, the inclusion of temperature effects, and the direct incorporation of the turbulence structure of the outer air stream for the first time in the literature.
The viscoelastic UCM fiber model of \cite{yarin:p:2010b}, which is asymptotically derived by slender body theory in \cite{marheineke:p:2016}, can in Lagrangian description be uniquely written as a quasilinear hyperbolic first-order system of partial differential equations on a growing space-time domain. Its classification with respect to the growing fiber domain yields requirements on the boundary conditions with regard to the well-posedness of the mathematical problem formulation and suggests a parameterization of the fiber tangent with the help of spherical coordinates. The effects of the turbulent fluctuations are computed by the turbulence reconstruction procedure described in \cite{huebsch:p:2013} and coupled into the fiber model via an air force function. The resulting instationary problem is solved using finite volumes in space with numerical fluxes of Lax-Friedrichs type and the implicit Euler method in time. For an industrial melt-blowing setup we show the applicability of our model and numerical solution framework and demonstrate the relevance of the turbulent fluctuations, which cause fiber elongations of the expected higher order of magnitude compared to stationary simulations. From the repeated random sampling of fibers in the sense of the Monte Carlo method we obtain a distribution of the final fiber diameters that yields fiber diameters of realistic order of magnitude.
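As a minimal illustration of the spatial discretization mentioned above, the following Python sketch shows a local Lax-Friedrichs (Rusanov) numerical flux for a generic conservation law. It is a textbook building block under simplifying assumptions, not the authors' implementation.

```python
def lax_friedrichs_flux(f_left, f_right, u_left, u_right, alpha):
    """Local Lax-Friedrichs (Rusanov) flux for u_t + f(u)_x = 0.
    alpha bounds the local characteristic speed |f'(u)|; the term
    -alpha/2*(u_right - u_left) adds the stabilizing dissipation."""
    return 0.5 * (f_left + f_right) - 0.5 * alpha * (u_right - u_left)

# For linear advection f(u) = a*u with a > 0 and alpha = |a|,
# the flux reduces to the upwind flux a*u_left.
a, uL, uR = 2.0, 1.0, 3.0
flux = lax_friedrichs_flux(a * uL, a * uR, uL, uR, abs(a))
```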
The paper is structured as follows. In Sec.~\ref{sec:model} we start with the instationary viscoelastic UCM fiber model, regarding its classification and its correct closing with boundary conditions. Furthermore, we give a short survey of the reconstruction of the turbulent fluctuations of an underlying air stream. After that we discuss our numerical solution framework and the handling of the growing fiber domain in Sec.~\ref{sec:numerics}. In Sec.~\ref{sec:example} we consider an industrial melt-blowing setup, for which we present simulation results covering the turbulent effects due to the high-speed air stream.
\setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0}
\section{Viscoelastic fiber melt-blowing model}\label{sec:model}
For the melt-blowing of a fiber in a turbulent air stream we present an asymptotic instationary viscoelastic UCM fiber model in Lagrangian (material) description. We classify the resulting quasilinear system of partial differential equations of first order and discuss the appropriate closing by boundary conditions. The choice of the boundary conditions suggests a description with respect to a basis associated with the fiber tangent. The fiber tangent $\boldsymbol{\tau}$ with norm $e = \lVert \boldsymbol{\tau} \rVert$ and direction $\mathbf{t} = \boldsymbol{\tau}/\lVert \boldsymbol{\tau} \rVert$ is in particular parameterized with the help of spherical coordinates. Moreover, we present the models for the aerodynamic force and the heat exchange used in the one-way coupling with the surrounding airflow and introduce the stochastic modeling concept by which the effects of the turbulent aerodynamic velocity fluctuations are incorporated into the fiber system.
\subsection{Asymptotic fiber jet model}\label{sec:model_general}
The extrusion of a fiber jet from a nozzle into an air stream can be seen as an inflow problem with a domain enlarging over time. Let $\Omega= \{(\zeta,t)\in\mathbb{R}^2 \, \lvert \, \zeta\in\mathcal{Q}(t),\, t\in(0,t_{end}]\}$ be the space-time domain with time-dependent growing space $\mathcal{Q}(t) = (-\zeta_L(t),0)$, where $\mathrm{d}/\mathrm{d}t\, \zeta_L(t) = v_{in}(t)$, $\zeta_L(0) = 0$, with $v_{in}$ [m/s] being the (scalar) inflow velocity at the nozzle. In the following we assume a constant inflow velocity, i.e., $v_{in} = const$, yielding $\zeta_L(t) = v_{in}\, t$. The fiber jet is represented by a time-dependent curve $\mathbf{r}:\Omega\rightarrow \mathbb{R}^3$, where the fiber end corresponds to the material parameter $\zeta = 0$ and the material points entering the fiber (flow) domain at the nozzle are $\zeta = -\zeta_L(t)$. We assume incompressibility of the fiber jet such that besides the mass also the volume is conserved, i.e., $\partial_t\varrho_M = 0$ and $\partial_t\varrho_V = 0$ with mass and volume line densities, $\varrho_M$ [kg/m] and $\varrho_V$ [m$^2$], respectively. The mass and volume line densities are considered to be constant at the nozzle yielding $\varrho_M = \rho d_{in}^2\pi/4$ and $\varrho_V = \lVert \boldsymbol{\tau}_{in} \rVert d_{in}^2\pi/4$ with constant fiber density $\rho$ [kg/m$^3$], nozzle diameter $d_{in}$ [m] and fiber tangent at the nozzle $\boldsymbol{\tau}_{in}$. According to \cite{marheineke:p:2016}, where the viscoelastic UCM string model has been systematically derived by slender body asymptotics, our model for the extruding fiber jet is given in Lagrangian description by
\begin{align*}
\partial_t \mathbf{r} &= \mathbf{v},\\
\partial_\zeta \mathbf{r} &= \boldsymbol{\tau},\\
\partial_t(\varrho_M\mathbf{v}) &= \partial_\zeta\left(\varrho_V\sigma\frac{\boldsymbol{\tau}}{\lVert\boldsymbol{\tau}\rVert^2}\right) + \mathbf{f}_g + \mathbf{f}_{air},\\
c_p\partial_t(\varrho_M T) &= - \pi d \alpha (T-T_\star) \lVert \boldsymbol{\tau} \rVert,\\
\partial_t \sigma &= \left(3p+2\sigma+3\frac{\mu}{\theta}\right)\frac{\partial_t\lVert\boldsymbol{\tau}\rVert}{\lVert\boldsymbol{\tau}\rVert} - \frac{\sigma}{\theta},\\
\partial_t p &= \left(-p-\frac{\mu}{\theta}\right)\frac{\partial_t\lVert\boldsymbol{\tau}\rVert}{\lVert\boldsymbol{\tau}\rVert} - \frac{p}{\theta},
\end{align*}
supplemented with appropriate initial and boundary conditions to be specified. The diameter function $d:\mathcal{Q}\rightarrow\mathbb{R}^+$ is introduced via
\begin{align*}
d = 2\sqrt{\frac{\varrho_V}{\pi\lVert \boldsymbol{\tau} \rVert}}.
\end{align*}
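For illustration, the diameter relation can be evaluated directly. The sketch below checks that it recovers the nozzle diameter at the inflow and reproduces the diameter reduction by $10^3$ for an elongation of $10^6$; the nozzle diameter is an assumed value.

```python
import math

def fiber_diameter(rho_V, elongation):
    """Fiber diameter d = 2*sqrt(rho_V/(pi*e)) from the conserved
    volume line density rho_V and the local elongation e = ||tau||."""
    return 2.0 * math.sqrt(rho_V / (math.pi * elongation))

# At the nozzle rho_V = ||tau_in|| d_in^2 pi/4, so evaluating with
# e = ||tau_in|| recovers d_in; e = 10^6 shrinks d by a factor 10^3.
d_in = 4.0e-4                              # assumed nozzle diameter [m]
rho_V = 1.0 * d_in**2 * math.pi / 4.0      # ||tau_in|| = 1
```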
The two kinematic equations relate the fiber velocity $\mathbf{v}$ [m/s] and the fiber tangent $\boldsymbol{\tau}$ to the derivatives of the fiber curve $\mathbf{r}$ [m] with respect to time $t$ [s] and material parameter $\zeta$ [m]. The two dynamic equations, prescribing the conservation of linear momentum and energy, yield equations for the fiber velocity $\mathbf{v}$ and the fiber temperature $T$ [K]. The acting outer line force densities arise from gravity $\mathbf{f}_g = \varrho_M g \mathbf{e}_g$ [N/m] with direction $\mathbf{e}_g$, $\|\mathbf{e}_g\|=1$, and gravitational constant $g$ [m/s$^2$] as well as from the surrounding airflow $\mathbf{f}_{air}$ [N/m]. Moreover, $\alpha$ [W/(m$^2$K)] is the heat transfer coefficient, $T_\star$ [K] the aerodynamic temperature field, $c_p$ [J/(kgK)] the constant specific heat capacity of the fiber and $d$ [m] the fiber diameter. The models for the aerodynamic line force density $\mathbf{f}_{air}$ and for the heat transfer coefficient $\alpha$ are presented in Sec.~\ref{subsec:drag_turb_nusselt}. The viscoelastic material laws are based on a UCM model for the fiber stress $\sigma$ [Pa] and pressure $p$ [Pa]. Here, $\mu$ [Pa\,s] denotes the dynamic viscosity and $\theta$ [s] the relaxation time of the fiber jet. Under the assumption of incompressibility the relation $\mu/\theta = E/3$ with elastic modulus $E$ [Pa] holds. We model the dynamic viscosity and the relaxation time as temperature-dependent, i.e., $\mu = \mu(T)$, $\theta = \theta(T)$. The corresponding rheological laws for an industrial example are specified in Sec.~\ref{sec:ex_subsec:setup}.
For the numerical treatment of the problem it is convenient to deal with dimensionless model equations. We introduce the dimensionless quantities as $\tilde{y}(\tilde{\zeta},\tilde{t}) = y(\zeta_0\tilde{\zeta},t_0\tilde{t})/y_0$ and use the reference values $y_0$ as given in Tab.~\ref{sec:model_table:entdim}. Here, $y_{in}$ indicates the value of a quantity $y$ at the nozzle and $H$ denotes the height of the considered melt-blowing device. The constant mass and volume line densities $\varrho_M$, $\varrho_V$ become $\tilde{\varrho}_M = \tilde{\varrho}_V = 1$ in dimensionless form. To keep the notation simple we suppress the label $\tilde{~}$ in the following. Then, the dimensionless model equations read
\begin{equation}\label{sec:model_eq:dimlessSystem}
\begin{aligned}
\partial_t \mathbf{r} &= \mathbf{v},\\
\partial_\zeta \mathbf{r} &= \boldsymbol{\tau},\\
\partial_t \mathbf{v} &= \partial_\zeta\left(\sigma\frac{\boldsymbol{\tau}}{\lVert\boldsymbol{\tau}\rVert^2}\right) + \frac{1}{\mathrm{Fr}^2}\mathbf{e}_g + \mathbf{f}_{air},\\
\partial_t T &= -\frac{\mathrm{St}}{\varepsilon}\pi d\alpha(T-T_\star)\lVert\boldsymbol{\tau} \rVert,\\
\mathrm{De}\left(\partial_t \sigma - (2\sigma+3p) \frac{\partial_t\lVert\boldsymbol{\tau}\rVert}{\lVert\boldsymbol{\tau}\rVert}\right) +\frac{\sigma}{\theta} &= \frac{3}{\mathrm{Re}}\frac{\mu}{\theta}\frac{\partial_t\lVert\boldsymbol{\tau}\rVert}{\lVert\boldsymbol{\tau}\rVert},\\
\mathrm{De}\left(\partial_t p + p\frac{\partial_t\lVert\boldsymbol{\tau}\rVert}{\lVert\boldsymbol{\tau}\rVert}\right) +\frac{p}{\theta} &= - \frac{1}{\mathrm{Re}}\frac{\mu}{\theta}\frac{\partial_t\lVert\boldsymbol{\tau}\rVert}{\lVert\boldsymbol{\tau}\rVert}.
\end{aligned}
\end{equation}
The fiber behavior is characterized by the dimensionless parameters given in Tab.~\ref{sec:model_table:entdim}, namely the Reynolds number $\mathrm{Re}$ as the ratio of inertial to viscous forces, the Deborah number $\mathrm{De}$ as the ratio of the relaxation time to the characteristic time scale, the Froude number $\mathrm{Fr}$ as the ratio of inertial to gravitational forces, the Stanton number $\mathrm{St}$ as the ratio of heat transfer to thermal capacity, as well as the slenderness ratio $\varepsilon$. The time-dependent space domain simplifies to $\mathcal{Q}(t) = (-t,0)$.
\begin{remark}\label{sec:model_remark:viscousLimit}
The viscoelastic UCM fiber model~\eqref{sec:model_eq:dimlessSystem} covers the limit cases describing pure viscous as well as elastic material behavior. The limit $\mathrm{De}\rightarrow 0$ yields a viscous fiber model, whereas the limit $\mathrm{Re}\rightarrow 0$, $\mathrm{De}\rightarrow\infty$ with $\mathrm{Re}\mathrm{De} = \mathrm{Ma}^2$ describes an elastic behavior. Here, the dimensionless Mach number $\mathrm{Ma}$ is the ratio of inertial to compressive forces.
\end{remark}
\begin{remark}\label{sec:model_remark:p}
As pointed out in \cite{marheineke:p:2016}, the pressure $p$ is at least one order of magnitude smaller than the stress $\sigma$ for fibers with high strain rates $\upsilon = \partial_t\lVert\boldsymbol{\tau}\rVert / \lVert\boldsymbol{\tau}\rVert \geq 0$ and large Deborah numbers $\mathrm{De}$, in particular $\lvert p \rvert \leq 0.1 \sigma$ if $\upsilon\mathrm{De}\,\theta \geq 0.35$. This means the pressure equation can be neglected in such cases. In \cite{yarin:p:2010b} this simplification is employed instantaneously to the UCM model for melt-blowing.
\end{remark}
\begin{table}[t]
\begin{minipage}[c]{\textwidth}
\begin{center}
\begin{small}
\begin{tabular}{| l r@{ = } l l |}
\hline
\multicolumn{4}{|l|}{\textbf{Reference values}}\\
Description & \multicolumn{2}{l}{Formula} & Unit\\
\hline
fiber curve & $r_0$ & $H$ & m \rule{0pt}{2.6ex}\\
fiber diameter & $d_0$ & $d_{in}\sqrt{\pi}/2$ & m\\
fiber velocity & $v_0$ & $v_{in}$ & m/s \\
fiber temperature & $T_0$ & $T_{in}$ & K\\
fiber mass line density & $\varrho_{M0}$ & $\rho d_0^2$ & kg/m\\
fiber volume line density & $\varrho_{V0}$ & $d_0^2$ & m$^2$\\
fiber stress & $\sigma_0$ & $\varrho_{M0}v_0^2/d_0^2$ & Pa\\
fiber pressure & $p_0$ & $\sigma_0$ & Pa\\
fiber dynamic viscosity & $\mu_0$ & $\mu(T_0)$ & Pa\,s\\
fiber relaxation time & $\theta_0$ & $\theta(T_0)$ & s\\
outer forces & $f_0$ & $\varrho_{M0}v_0^2/r_0$ & N/m\\
heat transfer coefficient & $\alpha_0$ & $\alpha_{in}$ & W/(m$^2$K)\\
length scale & $\zeta_0$ & $r_0$ & m\\
time scale & $t_0$ & $r_0/v_0$ & s\\
air velocity & $v_{\star,0}$ & $v_0$ & m/s\\
air density & $\rho_{\star,0}$ & $\rho_{\star,in}$ & kg/m$^3$\\
air kinematic viscosity & $\nu_{\star,0}$ & $\nu_{\star,in}$ & m$^2$/s\\
air specific heat capacity & $c_{p,\star,0}$ & $c_{p,\star,in}$ & J/(kgK)\\
air thermal conductivity & $\lambda_{\star,0}$ & $\lambda_{\star,in}$ & W/(mK)\\
air turbulent kinetic energy & $k_{\star,0}$ & $k_{\star,in}$ & m$^2$/s$^2$\\
air viscous dissipation rate & $\epsilon_{\star,0}$ & $\epsilon_{\star,in}$ & m$^2$/s$^3$\\
\hline
\end{tabular}
\end{small}
\end{center}
\end{minipage}\vfill
\vspace*{0.5cm}
\begin{minipage}[c]{\textwidth}
\begin{center}
\begin{small}
\begin{tabular}{| l r@{ = } l |}
\hline
\multicolumn{3}{|l|}{\textbf{Dimensionless numbers}}\\
Description & \multicolumn{2}{l|}{Formula}\\
\hline
slenderness & $\varepsilon$ & $d_0/r_0$ \rule{0pt}{2.6ex}\\
Reynolds & $\mathrm{Re}$ & $\varrho_{M0}v_0r_0/(d_0^2\mu_0)$ \\
Deborah & $\mathrm{De}$ & $\theta_0/t_0$\\
Froude & $\mathrm{Fr}$ & $v_0/\sqrt{gr_0}$ \\
Stanton & $\mathrm{St}$ & $d_0^2\alpha_0/(c_p\varrho_{M0}v_0)$\\
Mach & $\mathrm{Ma}$ & $v_0/d_0\sqrt{\varrho_{M0}\theta_0/\mu_0}$\\
air drag associated & $\mathrm{A}_\star$ & $\rho_{\star,0}d_0v_0^2/f_0$\\
mixed (air-fiber) Reynolds & $\mathrm{Re}_\star$ & $d_0v_0/\nu_{\star,0}$\\
Nusselt & $\mathrm{Nu}_\star$ & $\alpha_0d_0/\lambda_{\star,0}$\\
Prandtl & $\mathrm{Pr}_\star$ & $c_{p,\star,0}\rho_{\star,0}\nu_{\star,0}/\lambda_{\star,0}$\\
turbulence degree & $\mathrm{Tu}_\star$ & $k_{\star,0}^{1/2}/v_0$\\
turbulent time & $\mathrm{Tt}_\star$ & $\epsilon_{\star,0}r_0/(k_{\star,0}v_0)$\\
\hline
\end{tabular}
\end{small}
\end{center}
\end{minipage}\\~\\~\\
\caption{Overview of the reference values used for non-dimensionalization and of the resulting dimensionless numbers.}\label{sec:model_table:entdim}
\end{table}
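The reference scalings of Tab.~\ref{sec:model_table:entdim} translate into code directly. The following sketch evaluates a few of the dimensionless groups from assumed material and process values; all numbers are illustrative, not taken from the industrial setup of Sec.~\ref{sec:example}.

```python
import math

def dimensionless_numbers(rho, d_in, v_in, H, mu0, theta0, g=9.81):
    """Selected dimensionless groups of Tab. 1, built from the
    reference values d0, r0, v0, t0 and rho_M0 = rho*d0^2."""
    d0 = d_in * math.sqrt(math.pi) / 2.0
    r0, v0 = H, v_in
    t0 = r0 / v0
    rho_M0 = rho * d0**2
    return {
        "eps": d0 / r0,                          # slenderness
        "Re": rho_M0 * v0 * r0 / (d0**2 * mu0),  # equals rho*v0*r0/mu0
        "De": theta0 / t0,
        "Fr": v0 / math.sqrt(g * r0),
    }

# Illustrative inputs: polymer density, nozzle diameter, inflow
# speed, device height, viscosity and relaxation time at T_in.
nums = dimensionless_numbers(rho=1000.0, d_in=4.0e-4, v_in=0.05,
                             H=0.2, mu0=100.0, theta0=0.01)
```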
\subsection{Classification and boundary conditions}
The dimensionless fiber model (\ref{sec:model_eq:dimlessSystem}) can uniquely be written as a quasilinear system of partial differential equations of first order \cite{marheineke:p:2016}
\begin{align}\label{sec:model_eq:quasilinearForm}
\partial_t\boldsymbol{\varphi} + \mathbf{M}(\boldsymbol{\varphi})\cdot\partial_\zeta\boldsymbol{\varphi} + \mathbf{m}(\boldsymbol{\varphi}) = \mathbf{0}
\end{align}
with the vector of unknowns $\boldsymbol{\varphi} = (\mathbf{r},\boldsymbol{\tau},\mathbf{v},T,\sigma,p)\in\mathbb{R}^{12}$. The system is classified mathematically by the spectrum of the system matrix $\mathbf{M}$ that consists of the eigenvalues
\begin{itemize}
\item $\lambda_1 = 0$ (multiplicity $6$),
\item $\lambda_{2,3} = \pm\sqrt{\sigma}/\lVert \boldsymbol{\tau} \rVert$ (multiplicity $2$ each),
\item $\lambda_{4,5} = \pm\sqrt{w}/\lVert \boldsymbol{\tau} \rVert$ (multiplicity $1$ each), $\qquad w = \left(3\mu/\theta+\mathrm{Ma}^2\left(\sigma + 3p\right)\right)/\mathrm{Ma}^2$.
\end{itemize}
The system is of hyperbolic type if $\sigma > 0$ and $w > 0$. Otherwise, it is mixed elliptic-hyperbolic, or even shows a parabolic deficiency if $\sigma=0$ and/or $w=0$.
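The characteristic speeds and the hyperbolicity condition can be checked numerically. The following sketch evaluates $\lambda_{2,3}$, $\lambda_{4,5}$ and the condition $\sigma>0$, $w>0$; the state values are assumptions chosen only for illustration.

```python
import math

def characteristic_speeds(sigma, p, e, mu_over_theta, Ma):
    """Non-zero eigenvalues lambda_{2,3} = +/- sqrt(sigma)/||tau||
    and lambda_{4,5} = +/- sqrt(w)/||tau|| of the quasilinear
    system, with w = (3*mu/theta + Ma^2*(sigma + 3p))/Ma^2 and
    ||tau|| = e, plus the hyperbolicity flag (sigma > 0, w > 0)."""
    w = (3.0 * mu_over_theta + Ma**2 * (sigma + 3.0 * p)) / Ma**2
    hyperbolic = sigma > 0.0 and w > 0.0
    lam23 = math.sqrt(max(sigma, 0.0)) / e
    lam45 = math.sqrt(max(w, 0.0)) / e
    return lam23, lam45, hyperbolic

# Illustrative dimensionless state: sigma = 1, p = 0, e = 2,
# mu/theta = 1, Ma = 1 gives w = 4 and a hyperbolic system.
lam23, lam45, hyp = characteristic_speeds(
    sigma=1.0, p=0.0, e=2.0, mu_over_theta=1.0, Ma=1.0)
```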
Since the hyperbolic case is relevant for the application, we focus on it and discuss the closing of the system by appropriate boundary and initial conditions. At the fiber jet end, which corresponds to a fixed material point in Lagrangian description ($\zeta = 0$), the characteristic related to the eigenvalue $\lambda_i$ runs from the nozzle to the jet end if $\lambda_i > 0$ and from the jet end towards the nozzle if $\lambda_i < 0$. At the nozzle ($\zeta = -\zeta_L(t)$) the orientations of the characteristics depend on the scalar inflow velocity of the fiber jet, which reads $v_{in}/v_0 = 1$ in non-dimensional form. If $\lambda_i > -v_{in}/v_0=-1$ for $i\in\{1,...,5\}$, the corresponding characteristic propagates from the nozzle to the jet end, otherwise the other way round. The orientations of the characteristics yield requirements on the boundary conditions with regard to the well-posedness of the problem. Since $\lambda_3 < 0$ (multiplicity 2) and $\lambda_5 < 0$ (multiplicity 1), we have to pose three boundary conditions at the fiber jet end. Because of the spinning setup we model the fiber end ($\zeta = 0$) as stress-free, i.e.,
\begin{align*}
\sigma(0,t) = 0, \qquad p(0,t) = 0.
\end{align*}
Employing the viscoelastic material law for $\sigma$ yields a constant fiber elongation $e = \lVert \boldsymbol{\tau} \rVert$ at the fiber end over time, i.e., $\partial_t e(0,t) = 0$. To preserve this compatibility condition we pose
\begin{align*}
e(0,t) = 1,
\end{align*}
assuming the fiber jet to leave the nozzle unstretched. The eigenvalues $\lambda_1$, $\lambda_2$, $\lambda_4$ are non-negative and thus imply nine boundary conditions at the nozzle ($\zeta = -\zeta_L(t)$, $t\geq0$),
\begin{align*}
\mathbf{r}(-\zeta_L(t),t) &= \mathbf{r}_{in}/r_0, \qquad \left(\boldsymbol{\tau}/e\right)(-\zeta_L(t),t) = \mathbf{e}_g, \qquad
\mathbf{v}(-\zeta_L(t),t) = \mathbf{e}_g, \qquad T(-\zeta_L(t),t) = 1.
\end{align*}
Here, $\mathbf{r}_{in}$ is assumed to be constant. Furthermore, we set the following initial conditions for $t=0$,
\begin{align*}
\sigma(-\zeta_L(0),0) = \sigma_{in}/\sigma_0,\qquad
p(-\zeta_L(0),0) = p_{in}/p_0, \qquad e(-\zeta_L(0),0) = 1.
\end{align*}
Depending on the propagation-speed of the characteristics at the nozzle we pose further boundary conditions: we additionally prescribe for $t>0$
\begin{align*}
\sigma(-\zeta_L(t),t) &= \sigma_{in}/\sigma_0, \qquad p(-\zeta_L(t),t) = p_{in}/p_0,&& \text{ if } \lambda_3 > -1 \text{ (multiplicity 2)}, \\
e(-\zeta_L(t),t) &= 1 &&
\text{ if } \lambda_5 > -1 \text{ (multiplicity 1)}.
\end{align*}
The total time-derivative of the fiber curve $\mathbf{r}$ at the nozzle yields the compatibility condition $\mathbf{v}(-\zeta_L(t),t) = \boldsymbol{\tau}(-\zeta_L(t),t)$ for all times $t$. Through the above choice of the boundary conditions for $\mathbf{v}$, $\boldsymbol{\tau}/e$, and $e$ at the nozzle this condition is inherently fulfilled.
The choice of the boundary conditions and in particular the decomposition of the fiber tangent $\boldsymbol{\tau}=e\mathbf{t}$ into elongation $e = \lVert \boldsymbol{\tau}\rVert$ and direction $\mathbf{t}$, $\|\mathbf{t}\|=1$, suggests a reformulation of the corresponding kinematic equation $\partial_\zeta \mathbf{r}= \boldsymbol{\tau}$. Making use of the compatibility condition $\partial_t\boldsymbol{\tau} = \partial_t\partial_\zeta\mathbf{r} = \partial_\zeta\partial_t\mathbf{r} = \partial_\zeta \mathbf{v}$ yields an equation for the elongation $e$
\begin{align*}
\partial_t e - \mathbf{t}\cdot\partial_\zeta\mathbf{v} = 0.
\end{align*}
The normalized tangent $\mathbf{t}$ can be parameterized by means of spherical coordinates
\begin{align*}
\mathbf{t}(\vartheta,\varphi) = (\sin\vartheta\cos\varphi, \sin\vartheta\sin\varphi,\cos\vartheta), \qquad \vartheta \in [0,\pi], \quad \varphi \in [0,2\pi).
\end{align*}
Then, its time-derivative reads
$ \partial_t\mathbf{t} = \mathbf{n}\partial_t\vartheta + \mathbf{b}\partial_t\varphi $
with normal $\mathbf{n} = (\cos\vartheta\cos\varphi, \cos\vartheta\sin\varphi, -\sin\vartheta)$ and binormal $\mathbf{b} = (-\sin\vartheta\sin\varphi, \sin\vartheta\cos\varphi, 0)$. The set $\{\mathbf{t},\mathbf{n},\mathbf{b}\} \subset \mathbb{R}^3$ forms an orthogonal basis with $\|\mathbf{t}\|=\|\mathbf{n}\|=1$ and $\|\mathbf{b}\|=\sin\vartheta$. Employing
\begin{align*}
\partial_t\mathbf{t} = \partial_t\left(\frac{\boldsymbol{\tau}}{e}\right) = \frac{1}{e}\left(\mathbf{I} - \mathbf{t}\otimes\mathbf{t}\right)\cdot\partial_\zeta\mathbf{v}
\end{align*}
gives relations for the polar $\vartheta$ and azimuth angles $\varphi$
\begin{align*}
\partial_t\vartheta = \frac{1}{e}\mathbf{n}\cdot\partial_\zeta\mathbf{v},\qquad
\sin^2\vartheta\,\partial_t\varphi = \frac{1}{e}\mathbf{b}\cdot\partial_\zeta\mathbf{v}.
\end{align*}
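The spherical parameterization and the associated frame $\{\mathbf{t},\mathbf{n},\mathbf{b}\}$ can be sketched in a few lines; the angle values used below are arbitrary test inputs.

```python
import math

def tangent_frame(theta, phi):
    """Tangent t, normal n and binormal b of the spherical
    parameterization; t and n are unit vectors, ||b|| = sin(theta)."""
    st, ct = math.sin(theta), math.cos(theta)
    sp, cp = math.sin(phi), math.cos(phi)
    t = (st * cp, st * sp, ct)
    n = (ct * cp, ct * sp, -st)
    b = (-st * sp, st * cp, 0.0)
    return t, n, b

def dot(u, v):
    """Euclidean scalar product of two 3-tuples."""
    return sum(ui * vi for ui, vi in zip(u, v))

# Arbitrary polar and azimuth angles for the consistency check.
t, n, b = tangent_frame(0.7, 1.3)
```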
Summing up, our viscoelastic instationary fiber model on a growing domain in Lagrangian description is given by System~\ref{system:Final}.
\begin{system}[Instationary viscoelastic fiber model]\label{system:Final}
Kinematic and dynamic equations as well as material laws in $\Omega$:
\begin{align*}
\partial_t \mathbf{r} - \mathbf{v}&=0,\\
\partial_t e - \mathbf{t}\cdot\partial_\zeta\mathbf{v} &= 0,\\
\partial_t\vartheta - \frac{1}{e}\mathbf{n}\cdot\partial_\zeta\mathbf{v} &= 0,\\
\sin^2\vartheta\,\partial_t\varphi - \frac{1}{e}\mathbf{b}\cdot\partial_\zeta\mathbf{v} &=0,\\
\partial_t\mathbf{v} - \partial_\zeta\left(\sigma\frac{\mathbf{t}}{e}\right) - \frac{1}{\mathrm{Fr}^2}\mathbf{e}_g - \mathbf{f}_{air} &= 0,\\
\partial_t T + \frac{\mathrm{St}}{\varepsilon}\pi d\alpha(T-T_\star)e &= 0,\\
\mathrm{De}\,\partial_t \sigma + \left(-\mathrm{De}\,(2\sigma+3p)-\frac{\mu}{\theta}\frac{3}{\mathrm{Re}}\right) \,\frac{\mathbf{t}}{e}\cdot\partial_\zeta\mathbf{v}+ \frac{\sigma}{\theta} &= 0,\\
\mathrm{De}\,\partial_t p + \left(\mathrm{De}\,p+\frac{\mu}{\theta}\frac{1}{\mathrm{Re}}\right)\,\frac{\mathbf{t}}{e}\cdot\partial_\zeta\mathbf{v} + \frac{p}{\theta} &= 0,
\end{align*}
Initial-boundary conditions at the nozzle ($\zeta = -\zeta_L(t)$, $t \geq 0$):
\begin{align*}
\mathbf{r}(-\zeta_L(t),t) &= \mathbf{r}_{in}/r_0, \qquad &\vartheta(-\zeta_L(t),t) &= \vartheta_{in} \qquad &\varphi(-\zeta_L(t),t) &= \varphi_{in},\\
\mathbf{v}(-\zeta_L(t),t) &= \mathbf{e}_g, \qquad & T(-\zeta_L(t),t) &= 1,
\end{align*}
Initial conditions ($t=0$):
\begin{align*}
e(-\zeta_L(0),0) = 1,\qquad \sigma(-\zeta_L(0),0) = \sigma_{in}/\sigma_0,\qquad
p(-\zeta_L(0),0) = p_{in}/p_0,
\end{align*}
Boundary conditions at the nozzle ($\zeta = -\zeta_L(t)$, $t > 0$):
\begin{align*}
&\text{if } \lambda_3>-1 \text{: }& \sigma(-\zeta_L(t),t) &= \sigma_{in}/\sigma_0, \qquad p(-\zeta_L(t),t) = p_{in}/p_0,\\
&\text{if } \lambda_5>-1 \text{: }& e(-\zeta_L(t),t) &= 1,
\end{align*}
Boundary conditions at the fiber end ($\zeta = 0$, $t>0$):
\begin{align*}
e(0,t) = 1, \qquad \sigma(0,t) = 0, \qquad p(0,t) = 0.
\end{align*}
\end{system}
\subsection{Exchange models for one-way coupling with turbulent airflow}\label{subsec:drag_turb_nusselt}
In this work we consider a one-way coupling of the airflow with the fiber, neglecting feedback effects of the fiber on the airflow. The respective exchange models used for the aerodynamic line force density $\mathbf{f}_{air}$ and the heat transfer coefficient $\alpha$ are briefly summarized in this subsection. Moreover, we describe the concept of how the turbulent aerodynamic velocity fluctuations are realized with respect to an underlying (stochastic) airflow simulation and incorporated into our fiber model (System~\ref{system:Final}).
Note that to distinguish the fiber quantities from the airflow quantities, all airflow associated fields are labeled with the index $_\star$ as before. In particular, $\mathbf{v}_\star$ denotes the velocity, $\rho_\star$ the density, $\nu_\star$ the kinematic viscosity, $c_{p,\star}$ the specific heat capacity, $\lambda_\star$ the thermal conductivity, $k_\star$ the turbulent kinetic energy and $\epsilon_\star$ the viscous dissipation of the turbulent motions per unit mass of the air. All these quantities are space- and time-dependent fields assumed to be dimensionless and known -- for example provided by an external computation. The corresponding reference values used for non-dimensionalization are denoted with the index $_0$ and given in Tab.~\ref{sec:model_table:entdim}.
\subsubsection{Aerodynamic force and heat transfer coefficient}\label{subsubsec:drag}
The models for the aerodynamic force and the heat transfer coefficient are determined by material and geometrical properties as well as the incident flow situation which can be prescribed by the fiber orientation (normalized tangent) $\mathbf{t}$ and the relative velocity between airflow and fiber $\mathbf{v}_\star -\mathbf{v}$.
The aerodynamic line force density $\mathbf{f}_{air}$ is modeled by means of a dimensionless drag function $\mathbf{F}:\mathrm{SO}(3)\times \mathbb{R}^3\rightarrow \mathbb{R}^3$ which depends on fiber tangent and relative velocity,
\begin{align}\label{sec:model_eq:airdrag}
\mathbf{f}_{air} = e\frac{\mathrm{A}_\star}{\mathrm{Re}_\star^2}\frac{\rho_\star\nu^2_\star}{d} \mathbf{F}\bigg(\mathbf{t},\mathrm{Re}_\star\frac{d}{\nu_\star}(\mathbf{v}_\star - \mathbf{v})\bigg),\qquad
\mathbf{F}(\mathbf{t},\mathbf{w}) = r_n(w_n)\mathbf{w_n} + r_t(w_n)\mathbf{w_t}.
\end{align}
The drag function can be expressed in terms of the tangential component $\mathbf{w_t} = (\mathbf{w}\cdot\mathbf{t})\mathbf{t}$ and the normal component $\mathbf{w_n}=\mathbf{w}-\mathbf{w_t}$, $w_n = \lVert \mathbf{w_n} \rVert$, of the relative velocity. The models used for the tangential and normal air resistance coefficients $r_t$, $r_n$ are taken from \cite{marheineke:p:2009b}; see Appendix \ref{appendix_AirDrag} for details. The occurring dimensionless numbers are the air drag associated number $\mathrm{A}_\star$ and the mixed (air-fiber) Reynolds number $\mathrm{Re}_\star$ (cf.\ Tab.~\ref{sec:model_table:entdim}). Concerning lift forces see Remark~\ref{rem:lift}.
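The tangential-normal splitting underlying the drag function can be sketched directly. In the sketch below, the resistance coefficients $r_n$, $r_t$ of the appendix are replaced by hypothetical constants, so only the geometric decomposition is faithful to the model.

```python
def split_relative_velocity(t, w):
    """Decompose the relative velocity w into its tangential part
    w_t = (w.t)t and normal part w_n = w - w_t with respect to the
    unit tangent t."""
    s = sum(wi * ti for wi, ti in zip(w, t))
    w_t = tuple(s * ti for ti in t)
    w_n = tuple(wi - wti for wi, wti in zip(w, w_t))
    return w_t, w_n

def drag(t, w, r_n=lambda wn: 1.0, r_t=lambda wn: 0.5):
    """F(t, w) = r_n(w_n)*w_n + r_t(w_n)*w_t with hypothetical,
    constant resistance coefficients as placeholders for the
    models of the appendix."""
    w_t, w_n = split_relative_velocity(t, w)
    wn = sum(x * x for x in w_n) ** 0.5
    return tuple(r_n(wn) * a + r_t(wn) * b for a, b in zip(w_n, w_t))

# Fiber aligned with the z-axis, oblique relative velocity.
t = (0.0, 0.0, 1.0)
w = (3.0, 0.0, 4.0)
w_t, w_n = split_relative_velocity(t, w)
```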
The heat transfer coefficient $\alpha$ is modeled by a Nusselt number associated dimensionless function $\mathcal{N}:\mathbb{R}^3\rightarrow\mathbb{R}$ which depends on the tangential and absolute relative velocity and the Prandtl number,
\begin{align}
\alpha = \frac{1}{\mathrm{Nu}_\star}\frac{\lambda_\star}{d}\mathcal{N}\left(\mathrm{Re}_\star\frac{d}{\nu_\star}(\mathbf{v_\star} - \mathbf{v})\cdot\mathbf{t}, \mathrm{Re}_\star\frac{d}{\nu_\star}\lVert \mathbf{v_\star} - \mathbf{v}\rVert, \mathrm{Pr}_\star\frac{c_{p,\star}\rho_\star\nu_\star}{\lambda_\star}\right).
\end{align}
For details on the used heuristic model for $\mathcal{N}$ we refer to Appendix~\ref{appendix_Nusselt}. The occurring dimensionless numbers are the Nusselt number $\mathrm{Nu}_\star$, the Prandtl number $\mathrm{Pr}_\star$ as well as the mixed (air-fiber) Reynolds number $\mathrm{Re}_\star$ (cf. Tab.~\ref{sec:model_table:entdim}).
\subsubsection{Turbulence reconstruction}\label{subsubsec:turbRecon}
A direct numerical simulation of the turbulent airflow in the application is not possible due to the required high resolution. Hence, a statistical turbulence description is used where the airflow velocity $\mathbf{v}_\star$ is assumed to consist of a mean (deterministic) part $\bar{\mathbf{v}}_\star$ and a fluctuating (stochastic) part $\mathbf{v}'_\star$, i.e.,
\begin{align*}
\mathbf{v}_\star = \bar{\mathbf{v}}_\star + \mathbf{v}'_\star.
\end{align*}
The mean velocity is given by the Reynolds-averaged Navier-Stokes equations, while the fluctuations are only characterized by certain quantities that the respective turbulence model provides. To obtain $\mathbf{v}'_\star$ explicitly as a random field we apply a turbulence reconstruction that has been developed in \cite{huebsch:p:2013} on the basis of a $k_\star$-$\epsilon_\star$ turbulence model. Assuming given dimensionless space-time-dependent fields for the turbulent kinetic energy $k_\star$ and the viscous dissipation of the turbulent motions per unit mass $\epsilon_\star$, the general concept of the turbulence reconstruction is to model the local turbulent fluctuations as homogeneous, isotropic, incompressible Gaussian random fields in space and time, $\mathbf{v}_{\star,loc}' = \mathbf{v}_{\star,loc}'(\mathbf{x}, t; \nu_\star, \bar{\mathbf{v}}_\star)$, that depend parametrically on the kinematic viscosity and the mean velocity of the airflow, as done in \cite{marheineke:p:2006, marheineke:p:2009b}. To form the large-scale structure of the global turbulence, the local fluctuation fields are superposed based on a Global-from-Local assumption. The globalization strategy according to \cite{huebsch:p:2013} yields
\begin{align}\label{sec:model_eq:turbReconstr}
\mathbf{v}_\star' = \mathrm{Tu}_\star k_\star^{1/2}\mathbf{v}_{\star,loc}'\left(\frac{\mathrm{Tt}_\star}{\mathrm{Tu}_\star}\frac{\epsilon_\star}{k_\star^{3/2}}\mathbf{r}, \mathrm{Tt}_\star\frac{\epsilon_\star}{k_\star}t; \frac{\varepsilon}{\mathrm{Re}_\star}\frac{\mathrm{Tt_\star}}{\mathrm{Tu}_\star^2}\frac{\epsilon_\star}{k_\star^2} \nu_\star,\frac{1}{\mathrm{Tu}_\star} \frac{1}{k_\star^{1/2}} \bar{\mathbf{v}}_\star\right).
\end{align}
Besides the slenderness ratio $\varepsilon$ and the mixed Reynolds number $\mathrm{Re}_\star$, the occurring dimensionless numbers are the degree of turbulence $\mathrm{Tu}_\star$ and the turbulent time scale ratio $\mathrm{Tt}_\star$ as given in Tab.~\ref{sec:model_table:entdim}.
Note that the occurring turbulent length and time scales impose requirements on the spatial and temporal resolution in our numerical solution algorithm (cf.\ Rem.~\ref{sec:numerics_remark:resolution} in Sec.~\ref{sec:numerics}). In particular, $l'_\star = \mathrm{Tu}_\star/\mathrm{Tt}_\star k_\star^{3/2}/\epsilon_\star$ is the dimensionless turbulent length scale indicating the expected length of the large-scale vortices, and $t_\star' = 1/\mathrm{Tt}_\star k_\star/\epsilon_\star$ is the dimensionless turbulent time scale describing the expected creation and break-up time of the vortices. For details on the general sampling procedure providing a fast and accurate sampling of the random fields we refer to \cite{huebsch:p:2013}. To further increase the efficiency of the procedure we use here a simplified underlying energy spectrum, see Appendix~\ref{appendix_turbRecon} for details on the modeling of $\mathbf{v}_{\star,loc}'$.
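The paper's computations were done in MATLAB; purely as an illustration of the scale definitions above, a minimal Python sketch (all argument names are ours) evaluates $l'_\star$ and $t'_\star$ from the turbulence fields and the dimensionless numbers:

```python
def turbulent_scales(k, eps, Tu, Tt):
    """Dimensionless turbulent length and time scales as defined in the
    text: l' = Tu/Tt * k^{3/2}/eps and t' = 1/Tt * k/eps."""
    l_prime = (Tu / Tt) * k**1.5 / eps
    t_prime = (1.0 / Tt) * k / eps
    return l_prime, t_prime
```

These two quantities are exactly the ones that later enter the grid-size restrictions of the numerical scheme.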
\begin{remark}[Lift forces]\label{rem:lift}
In industrial melt-blowing processes lift forces on a fiber are created by airflow vortices approaching the fiber and by vortex shedding at the back of the fiber. While the latter can be neglected since the fiber mainly follows the turbulent air stream, the former is included with the help of the following ansatz: the local turbulent instationary velocity fluctuations $\mathbf{v}_\star'$ are plugged into the air drag model (\ref{sec:model_eq:airdrag}), meaning that local observations are mapped into a stationary far-field consideration. This leads to aerodynamic forces on the fiber acting perpendicular to the $(\bar{\mathbf{v}}_\star-\mathbf{v})$-$\boldsymbol{\tau}$-plane.
\end{remark}
\setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0}
\section{Numerical Scheme}\label{sec:numerics}
System~\ref{system:Final} is a boundary value problem of a quasilinear system of partial differential equations of first order on a growing domain. It is discretized with finite volumes in space, based on a central flux approximation with a Lax-Friedrichs-type stabilization, and with the implicit Euler method in time. The growing fiber domain is realized by dynamic and static spatial cells according to the discretization concept in \cite{arne:p:2015}.
We reformulate System~\ref{system:Final} as
\begin{equation}\label{sec:numerics_eq:instationary}
\begin{aligned}
\mathbf{K}(\mathbf{y})\cdot\partial_t\mathbf{y} + \mathbf{L}(\mathbf{y})\cdot\partial_\zeta\mathbf{y} + \mathbf{l}(\mathbf{y}) = \mathbf{0}
\end{aligned}
\end{equation}
with the vector of unknowns $\mathbf{y} = (\mathbf{r},e,\vartheta,\varphi,\mathbf{v},T,\sigma,p)\in\mathbb{R}^{12}$ and consider it on the spatial domain $\mathcal{Q}(t) = (-t,0)$ for times $0 \leq t \leq t_{end}$. The introduction of the matrix $\mathbf{K}$ avoids a singularity for $\sin\vartheta = 0$. For $\sin\vartheta \neq 0$, $\mathbf{K}$ is invertible, revealing the unique quasilinear form \eqref{sec:model_eq:quasilinearForm}.
For the spatial discretization we employ a finite volume scheme. We introduce a constant cell size $\Delta \zeta$ and define the number of dynamic cells $N(t)$ depending on the fiber length $\zeta_L(t) = t$ at time $t$ as
\begin{align*}
N(t) = \bigg\lfloor\frac{\zeta_L(t)}{\Delta \zeta}\bigg\rfloor,
\end{align*}
where $\lfloor\cdot\rfloor$ denotes the floor function. Furthermore, we introduce the discretization points
\begin{align*}
\zeta_{(j+1)/2} = -\left( N(t) - \frac{j}{2}\right)\Delta \zeta, \qquad j=0,...,2N(t).
\end{align*}
The points $\zeta_i$, $i=1,...,N(t)$, represent the cell centers. The dynamic cell closest to the nozzle ($\zeta = -t$) is given by $[\zeta_{1/2}, \zeta_{3/2}]$, whereas $\zeta_{N+1/2} = 0$ is the fiber end, cf.\ Fig.~\ref{sec:numerics_fig:mesh}. The jet growth is realized by adding static cells at the nozzle, i.e., new cells that are initialized with the boundary conditions at the nozzle (the left side of the computational domain). These cells remain static until they have completely left the nozzle; once they have completely entered the flow domain they become dynamic cells and are taken into account in the computation. The introduction of static cells at the nozzle allows a suitable initialization of a jet with length $\zeta_L(t) < \Delta\zeta$ and a stable numerical treatment of the temporal evolution.
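The bookkeeping of the growing grid can be sketched as follows in Python (illustrative only; the paper's implementation is in MATLAB). The function returns the number of dynamic cells $N(t)=\lfloor \zeta_L(t)/\Delta\zeta\rfloor$ and the cell centers $\zeta_i = -(N - (2i-1)/2)\,\Delta\zeta$:

```python
import math

def grid(t, dzeta):
    """Dynamic-cell count and cell-center coordinates for the growing
    domain (-t, 0), following N(t) = floor(zeta_L(t)/dzeta) with
    zeta_L(t) = t."""
    N = math.floor(t / dzeta)
    # cell centers zeta_i = -(N - (2i-1)/2) * dzeta, i = 1, ..., N
    centers = [-(N - (2 * i - 1) / 2) * dzeta for i in range(1, N + 1)]
    return N, centers
```

For $t < \Delta\zeta$ this yields $N = 0$, i.e., the jet consists only of static cells, consistent with the initialization described above.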
\begin{figure}[!t]
\centering
\includegraphics[height=5.5cm]{mesh-eps-converted-to.pdf}
\caption{Illustration of the spatial discretization of the growing jet domain $\mathcal{Q}(t)$ (marked by the blue dashed line) with $N(t)$ dynamic cells. Cells are treated as static cells until they have completely entered the flow domain.}\label{sec:numerics_fig:mesh}
\end{figure}
We define the cell averages $\mathbf{y}_i$, $i=1,...,N(t)$, of the unknown quantities as
\begin{align*}
\mathbf{y}_i(t) = \frac{1}{\Delta \zeta}\int\limits_{\zeta_{i-1/2}}^{\zeta_{i+1/2}} \mathbf{y}(\zeta,t)d\zeta,
\end{align*}
integrate the quasilinear system (\ref{sec:numerics_eq:instationary}) over the control cells $[\zeta_{i-1/2},\zeta_{i+1/2}]$, $i=1,...,N(t)$, where we assume $\mathbf{X}(\mathbf{y})|_{[\zeta_{i-1/2},\zeta_{i+1/2}]} = \mathbf{X}(\mathbf{y}_i)$ for $\mathbf{X} = \mathbf{K}, \mathbf{L}, \mathbf{l}$, and adopt the idea of the Lax-Friedrichs scheme for the approximation of the numerical fluxes as done in \cite{fjordholm:p:2012}. The resulting system of ordinary differential equations in time for the cell averages $\mathbf{y}_i$ has the form
\begin{equation}\label{sec:numerics_eq:spaceSemidis}
\mathbf{K}(\mathbf{y}_i)\cdot\frac{\mathrm{d}}{\mathrm{d}t}\mathbf{y}_i - \mathbf{K}(\mathbf{y}_i)\cdot\frac{1}{2\Delta t}(\mathbf{y}_{i+1} - 2\mathbf{y}_i + \mathbf{y}_{i-1})+ \mathbf{L}(\mathbf{y}_i)\cdot \frac{1}{2\Delta \zeta}\left(\mathbf{y}_{i+1} - \mathbf{y}_{i-1}\right) + \mathbf{l}(\mathbf{y}_i) = \mathbf{0},
\end{equation}
where $\Delta t$ denotes the constant time-step size that we will use in the temporal discretization. Initial-boundary and boundary conditions are incorporated into our numerical scheme via ghost layers. Following \cite{leveque:b:2002}, quantities that are not prescribed at a boundary are extrapolated onto the corresponding ghost layer; in particular, we choose first-order extrapolation.
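The ghost-layer closure can be sketched in a few lines of Python (our own illustration, not the paper's code). First-order extrapolation fills the ghost cell by linear continuation of the two adjacent interior cell averages; constant (zero-order) extrapolation is included for comparison:

```python
def ghost_values(y, order=1):
    """Ghost-cell values at both ends of the cell-average array y:
    order 0 -> constant extrapolation, order 1 -> linear extrapolation,
    as used for quantities not prescribed at a boundary."""
    if order == 0:
        return y[0], y[-1]
    return 2 * y[0] - y[1], 2 * y[-1] - y[-2]
```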
For the solution of the system of ordinary differential equations (\ref{sec:numerics_eq:spaceSemidis}) we employ the stiffly accurate implicit Euler scheme with constant time-step size $\Delta t$
\begin{equation}\label{sec:num_eq:fullDiscr}
\begin{aligned}
\mathbf{K}(\mathbf{y}_i^{n+1})\cdot\left(2\mathbf{y}_i^{n+1} - \frac{1}{2}\mathbf{y}_{i+1}^{n+1} - \frac{1}{2}\mathbf{y}_{i-1}^{n+1} - \mathbf{y}_i^n\right) + \mathbf{L}(\mathbf{y}_i^{n+1})\cdot \frac{\Delta t}{2\Delta \zeta}\left(\mathbf{y}_{i+1}^{n+1} - \mathbf{y}_{i-1}^{n+1}\right) + \Delta t\mathbf{l}(\mathbf{y}_i^{n+1}) = \mathbf{0},
\end{aligned}
\end{equation}
with $\mathbf{y}_i^n = \mathbf{y}_i(t^n)$ and $t^n = n\Delta t$ for $n = 0,...,M$, $t^M = t_{end}$.
The resulting nonlinear system of equations is solved by a Newton method with Armijo step size control, where the Jacobian of the system matrix is prescribed analytically. The termination criterion of the Newton algorithm is an absolute error tolerance $\mathrm{tol} = 10^{-8}$ with respect to the maximum norm.
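A damped Newton iteration of this kind can be sketched as follows (a generic Python illustration with residual `F` and analytic Jacobian `J`; the backtracking parameters are illustrative defaults, not values from the paper):

```python
import numpy as np

def newton_armijo(F, J, y0, tol=1e-8, beta=0.5, c=1e-4, max_iter=50):
    """Newton method with Armijo backtracking on the merit function
    phi = 0.5*||F||^2; stops when ||F|| <= tol in the maximum norm."""
    y = np.asarray(y0, dtype=float)
    for _ in range(max_iter):
        r = F(y)
        if np.max(np.abs(r)) <= tol:
            return y
        d = np.linalg.solve(J(y), -r)      # Newton direction
        s, phi0 = 1.0, 0.5 * r @ r
        # Armijo: phi(y + s d) <= (1 - 2 c s) phi(y) for Newton steps
        while 0.5 * F(y + s * d) @ F(y + s * d) > (1 - 2 * c * s) * phi0:
            s *= beta
        y = y + s * d
    return y
```

For the Newton direction the directional derivative of the merit function is $-2\varphi(y)$, which gives the simplified Armijo condition used in the loop.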
\begin{remark}[Artificial diffusion]
The semi-discrete system (\ref{sec:numerics_eq:spaceSemidis}) can be seen as a spatial discretization of
\begin{align*}
\mathbf{K}(\mathbf{y})\cdot\partial_t\mathbf{y} + \mathbf{L}(\mathbf{y})\cdot\partial_\zeta\mathbf{y} + \mathbf{l}(\mathbf{y}) = \mathbf{K}(\mathbf{y})\cdot\eta\partial_{\zeta\zeta}\mathbf{y}
\end{align*}
with $\eta = (\Delta \zeta)^2/(2\Delta t)$ by means of a central approximation of the flux terms. This means we add artificial diffusion of magnitude $\eta$ to our system, as is common for classical Lax-Friedrichs schemes.
\end{remark}
\begin{remark}[Convergence of numerical scheme]
As is well known from the literature on hyperbolic problems (e.g., \cite{leveque:b:2002}), the numerical scheme (\ref{sec:num_eq:fullDiscr}) is first-order accurate in time and second-order accurate in space, yielding a combined convergence order of one. In \cite{devuyst:p:2004, fjordholm:p:2012, munkejord:p:2009} a similar scheme has been investigated with respect to a stability concept for non-conservative hyperbolic partial differential equations.
\end{remark}
\begin{remark}[Spatial and temporal resolution]\label{sec:numerics_remark:resolution}
The temporal and spatial grid sizes have to be chosen in such a way that the turbulent scales of the underlying airflow are resolved properly. In particular, the turbulent length scale $l'_\star = \mathrm{Tu}_\star/\mathrm{Tt}_\star k_\star^{3/2}/\epsilon_\star$ and the turbulent time scale $t_\star' = 1/\mathrm{Tt}_\star k_\star/\epsilon_\star$ used in the turbulence reconstruction (\ref{sec:model_eq:turbReconstr}) have to be considered. Furthermore, the time that a vortex needs to pass a fixed material point of the fiber due to the relative velocity has to be taken into account for the temporal resolution. In total, the requirements for a successful simulation in terms of $\Delta\zeta$ and $\Delta t$ read
\begin{equation}\label{sec:numerics_eq:resolution}
\Delta\zeta \leq \frac{l'_\star}{e}, \qquad \Delta t \leq \min\left(t'_\star,\frac{l'_\star}{\lVert\mathbf{v}_\star - \mathbf{v}\rVert}\right).
\end{equation}
Appropriate grid sizes are estimated by computing the bounds for all given airflow data with assumptions on the maximal fiber velocity and elongation.
\end{remark}
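The admissible grid sizes follow directly from these two inequalities; as a minimal Python sketch (illustrative names, given the turbulent scales $l'_\star$, $t'_\star$ together with bounds on the elongation and relative velocity):

```python
def step_size_bounds(l_p, t_p, e_max, v_rel_max):
    """Upper bounds on the fiber grid sizes from the resolution
    requirements: dzeta <= l'/e and dt <= min(t', l'/||v* - v||)."""
    return l_p / e_max, min(t_p, l_p / v_rel_max)
```

Evaluating these bounds over all given airflow data is exactly the estimation procedure described in the remark.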
\setcounter{equation}{0} \setcounter{figure}{0} \setcounter{table}{0}
\section{Industrial melt-blowing simulation}\label{sec:example}
In this section we investigate an industrial melt-blowing scenario that has been studied in \cite{huebsch:p:2013} with the help of a simplified ODE model for the fiber jet position, velocity and elongation. We employ our more sophisticated PDE fiber jet model (System~\ref{system:Final}), which additionally contains a viscoelastic material behavior and thermal effects describing the jet cooling and solidification. Before we present our simulation results, we specify the industrial setup and state the closing models for the dynamic viscosity as well as the relaxation time and elastic modulus. In the scenario we face step size restrictions (cf.\ Remark~\ref{sec:numerics_remark:resolution}) that prevent computing the whole fiber from nozzle to conveyor belt. To handle this numerical problem we suggest and discuss an appropriate simulation strategy.
\subsection{Setup and model closing}\label{sec:ex_subsec:setup}
\begin{table}[t]
\begin{minipage}[c]{\textwidth}
\begin{center}
\begin{small}
\begin{tabular}{| l l l l |}
\hline
\multicolumn{4}{|l|}{\textbf{Parameters}}\\
Description & Symbol & Value & Unit\\
\hline
device height & $H$ & $1.214\cdot10^{-1}$ & m \rule{0pt}{2.6ex}\\
nozzle diameter & $d_{in}$ & $4\cdot 10^{-4}$ & m\\
fiber speed at nozzle & $v_{in}$ &$1\cdot 10^{-2}$ & m/s\\
fiber temperature at nozzle & $T_{in}$ & $5.532\cdot10^2$ & K\\
heat transfer at nozzle & $\alpha_{in}$ & $ 1.595\cdot10^3$ & W/(m$^2$K)\\
polar angle at nozzle & $\vartheta_{in}$ & $\pi/2$ & --\\
azimuth angle at nozzle & $\varphi_{in}$ & $\pi$ & --\\
fiber density & $\rho$ & $7\cdot10^2$ & kg/m$^3$\\
fiber specific heat capacity & $c_p$ & $2.1\cdot10^3$ & J/(kgK)\\
end time & $t_{end}$ & $2.20\cdot10^{-2}$ & s\\
air density at nozzle & $\rho_{\star,in}$ & $1.187$ & kg/m$^3$\\
air kinematic viscosity at nozzle & $\nu_{\star,in}$ & $1.8\cdot10^{-5}$ & m$^2$/s\\
air specific heat capacity at nozzle & $c_{p,\star,in}$ & $1.006\cdot10^3$ & J/(kgK)\\
air thermal conductivity at nozzle & $\lambda_{\star,in}$ & $2.42\cdot10^{-2}$ & W/(mK)\\
air turbulent kinetic energy at nozzle & $k_{\star,in}$ & $2.181\cdot10^2$ & m$^2$/s$^2$\\
air viscous dissipation rate at nozzle & $\epsilon_{\star,in}$ & $1.808\cdot10^7$ & m$^2$/s$^3$\\
\hline
\end{tabular}
\end{small}
\end{center}
\end{minipage}
\vfill
\vspace{0.5cm}
\begin{minipage}[c]{\textwidth}
\begin{center}
\begin{small}
\begin{tabular}{| l l l |}
\hline
\multicolumn{3}{|l|}{\textbf{Dimensionless numbers}}\\
Description & Symbol & Value\\
\hline
slenderness & $\varepsilon$ & $2.92\cdot 10^{-3}$ \rule{0pt}{2.6ex}\\
Reynolds & $\mathrm{Re}$ & $2.99\cdot 10^{-1}$\\
Deborah & $\mathrm{De}$ & $4.94\cdot 10^{-2}$\\
Froude & $\mathrm{Fr}$ & $9.16\cdot 10^{-3}$ \\
Stanton & $\mathrm{St}$ & $1.08\cdot 10^{-1}$\\
Mach & $\mathrm{Ma} $ & $1.22\cdot 10^{-1}$\\
air drag associated & $\mathrm{A}_\star$ & $5.81\cdot 10^{-1}$\\
mixed (air-fiber) Reynolds & $\mathrm{Re}_\star$ & $1.97\cdot 10^{-1}$\\
Nusselt & $\mathrm{Nu}_\star$ & $2.34\cdot 10^1$\\
Prandtl & $\mathrm{Pr}_\star$ & $8.89\cdot 10^{-1}$\\
turbulence degree & $\mathrm{Tu}_\star$ & $1.48\cdot 10^3$\\
turbulent time & $\mathrm{Tt}_\star$ & $1.01\cdot 10^6$\\
\hline
\end{tabular}
\end{small}
\end{center}
\end{minipage}\\~\\~\\
\caption{Overview of process and physical parameters in the industrial melt-blowing setup according to \cite{huebsch:p:2013} and the resulting dimensionless numbers.}\label{sec:ex_table:param}
\end{table}
\begin{figure}[!t]
\centering
\includegraphics[height=5cm]{industrial_setup-eps-converted-to.pdf}
\caption{Illustration of the considered industrial melt-blowing process. The two-dimensional cut ($\mathbf{e}_x$-$\mathbf{e}_z$-plane, marked by the dashed line) represents the whole flow domain due to homogeneity in $\mathbf{e}_y$-direction.}\label{sec:ex_fig:setup}
\end{figure}
\begin{figure}[!t]
\newlength\figureheight
\newlength\figurewidth
\centering
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics{main-figure0.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics{main-figure1.pdf}
\end{minipage}
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics{main-figure2.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics{main-figure3.pdf}
\end{minipage}\vfill
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics{main-figure4.pdf}
\end{minipage}
\caption{Airflow simulation of the representative two-dimensional flow domain (cf. Fig.~\ref{sec:ex_fig:setup}). \textit{Top:} components of mean airflow velocity $\bar{\mathbf{v}}_\star$ in $\mathbf{e}_x$- and $\mathbf{e}_z$-direction (denoted by $\bar{v}_{\star,1}$ and $\bar{v}_{\star,3}$ respectively). \textit{Middle:} turbulent kinetic energy $k_\star$ and dissipation rate $\epsilon_\star$ in logarithmic scale. \textit{Bottom:} temperature $T_\star$.}\label{sec:ex_fig:k-eps-sim}
\end{figure}
In the melt-blowing setup a high-speed air stream is directed vertically downwards in direction of gravity and enters the domain of interest via thin slot dies. The spinning nozzles are located in between and extrude the polymeric fiber jets in the same direction, see Fig.~\ref{sec:ex_fig:setup}. We choose an outer orthonormal basis $\{\mathbf{e}_x,\mathbf{e}_y,\mathbf{e}_z\}$, where $\mathbf{e}_x$ points against the direction of gravity (i.e., $\mathbf{e}_x = -\mathbf{e}_g$) and $\mathbf{e}_y$ is aligned with the slot inlet. The mean quantities of the turbulent airflow are time-independent and homogeneous in $\mathbf{e}_y$-direction, such that a stationary $k_\star$-$\epsilon_\star$ simulation for a representative two-dimensional cut showing the $\mathbf{e}_x$-$\mathbf{e}_z$-plane is reasonable. The origin of the outer basis is aligned with the externally given airflow data, such that the considered nozzle is at the position $\mathbf{r}_{in} = (-2.85\cdot 10^{-2},0,0)\,$m in the airflow field. We use the same $k_\star$-$\epsilon_\star$ simulation results as in \cite{huebsch:p:2013}, supplemented with an additional temperature profile as depicted in Fig.~\ref{sec:ex_fig:k-eps-sim}. The melt-blown fiber polymer is of polypropylene (PP) type with material parameters taken from \cite{huebsch:p:2013}. The process and physical parameters as well as the resulting dimensionless numbers are listed in Table~\ref{sec:ex_table:param}.
For the temperature-dependent dynamic viscosity of the PP-type fiber material we employ the Arrhenius law. The corresponding relation for the dimensionless viscosity $\mu$ depending on the temperature $T$ is given by
\begin{align*}
\mu(T) = \frac{1}{\mu_0}\mathcal{M}(TT_0),\qquad \mathcal{M}(T) = a_\mu\exp\left(\frac{b_\mu}{T-c_\mu}\right).
\end{align*}
The polymer-specific constants coming from measurements are $a_\mu = 0.1352$ Pas, $b_\mu = 852.323$ K, $c_\mu = 273.15$ K.
We choose the following heuristic model for the relaxation time
\begin{align*}
\theta(T) = \frac{1}{\theta_0} \mathcal{T}(TT_0),\qquad \mathcal{T}(T) = 3\frac{\mathcal{M}(T) + b_\theta}{a_\theta}
\end{align*}
with $a_\theta=10^9$ Pa and $b_\theta= 2\cdot 10^8$ Pas, showing a meaningful limiting behavior: for $T\rightarrow\infty$ the dimensional relaxation time is of order $\mathcal{T}\sim\mathcal{O}(10^{-1}$ s$)$, which is typical for melt-blown polymers, see, e.g., \cite{yarin:p:2010b}. Furthermore, employing the relation $\mu/\theta = E/3$, the resulting dimensionless elastic modulus $E$ reads
\begin{align*}
E(T) = \frac{\theta_0}{\mu_0}\mathcal{E}(T_0T),\qquad \mathcal{E}(T) = 3\frac{\mathcal{M}(T)}{\mathcal{T}(T)}.
\end{align*}
For $T\rightarrow c_\mu = 273.15$ K the dimensional elastic modulus $\mathcal{E}$ approaches $\mathcal{E} = a_\theta = 10^9$ Pa -- a typical value for a hardened polymer, see, e.g., \cite{barnes:b:2000}.
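The dimensional closing laws can be summarized in a small Python sketch (constants from the text; function names are ours). It also makes the two limiting behaviors above easy to verify numerically:

```python
import math

# Material constants from the text (PP-type polymer)
A_MU, B_MU, C_MU = 0.1352, 852.323, 273.15   # Pa s, K, K
A_TH, B_TH = 1e9, 2e8                        # Pa, Pa s

def visc(T):
    """Arrhenius law M(T) for the dynamic viscosity in Pa s."""
    return A_MU * math.exp(B_MU / (T - C_MU))

def relax(T):
    """Heuristic relaxation time T(T) = 3 (M(T) + b_theta)/a_theta in s."""
    return 3.0 * (visc(T) + B_TH) / A_TH

def modulus(T):
    """Elastic modulus E(T) in Pa via mu/theta = E/3."""
    return 3.0 * visc(T) / relax(T)
```

For large $T$ the relaxation time tends to $3 b_\theta/a_\theta = 0.6$ s, and close to $c_\mu$ the modulus approaches $a_\theta = 10^9$ Pa, as stated above.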
\subsection{Simulation strategy}\label{sec:ex_subsec:practicalTreatment}
Expecting a maximal fiber elongation $e = 10^{6}$ and a maximal dimensionless relative velocity between fiber and airflow $\lVert \mathbf{v}_\star-\mathbf{v} \rVert \leq \lVert \mathbf{v}_\star \rVert = 4.78\cdot 10^4$ in the industrial melt-blowing process, the step size restriction (\ref{sec:numerics_eq:resolution}) for the spatial and temporal fiber discretization gives
\begin{align}\label{sec:ex_eq:resolution}
\Delta \zeta \leq 5.77\cdot 10^{-10},\qquad \Delta t \leq 1.21\cdot 10^{-6}
\end{align}
(cf. Fig.~\ref{sec:ex_fig:k-eps-sim}). Such a resolution implies computationally impractical runtimes. To make a simulation of the setup feasible nevertheless, we suggest the following strategy, which is motivated by observations of the process.
In the region close to the nozzle the high-speed air stream pulls the slowly extruded fiber jet rapidly down without any lateral bending. The hot temperatures prevent fiber cool-down and solidification. Thus, the magnitude of the Deborah number $\mathrm{De}$ at the nozzle (cf. Tab.~\ref{sec:ex_table:param}) allows the consideration of the viscous limit case $\mathrm{De}\rightarrow 0$ (see Remark~\ref{sec:model_remark:viscousLimit}). Moreover, the fiber jet behavior is mainly determined by the mean airflow; turbulent effects play a negligible role. Hence, we assume that in the nozzle region (i.e., the deterministic region) the polymer jet can be described by a steady uni-axial viscous fiber model with deterministic aerodynamic force and heat transfer. This model follows from System~\ref{system:Final} by a re-parameterization into the Eulerian (spatial) description, transition to steady state, and the limit $\mathrm{De}\rightarrow 0$. The resulting boundary value problem of ordinary differential equations is solved by a continuation-collocation method, which we have successfully employed in studies on glass wool manufacturing \cite{arne:p:2011}, electrospinning \cite{arne:p:2018} and dry spinning \cite{wieland:p:2018b}. Further details on the model and its numerical treatment are given in Appendix~\ref{appendix_statModel}. Note that the use of the viscous fiber model is not only physically reasonable, it also crucially simplifies the numerical treatment. Concerning the viscoelastic fiber model, the rapid changes of the fiber quantities in the nozzle region, caused by the immediate pull-down of the fiber, yield multiple changes in the structure of the quasilinear system matrix in view of its eigenvalues and the resulting classification. This means that the characteristics change their direction several times.
In the steady uni-axial model this leads to singular system matrices and to problems with imposing appropriate boundary conditions, making the numerical treatment extremely complicated. This issue has been addressed in \cite{lorenz:p:2014} in the context of existence regimes for solutions of a uni-axial UCM fiber model under gravitational forces. We circumvent these problems by using the viscous fiber model, for which no mathematical regime changes take place.
In the region away from the nozzle the turbulent aerodynamic fluctuations crucially affect the fiber behavior (i.e., the stochastic region). By means of the uni-axial steady fiber solution (from the nozzle region) we identify a coupling point, from which on the further fiber behavior down to the bottom is described by the instationary viscoelastic fiber model (System~\ref{system:Final}) accounting for turbulent effects. The simulation with the numerical scheme from Sec.~\ref{sec:numerics} becomes feasible here, since the expected elongations and relative velocities in this domain are much smaller and hence the spatial and temporal step size restrictions weaken compared to (\ref{sec:ex_eq:resolution}).
The coupling between the stationary and instationary fiber simulations is done in the following way: Let the fiber domain in the Eulerian parametrization $\Omega(t) = \Omega_d\cup\Omega_s(t)$ be divided into the time-independent deterministic part $\Omega_d$, where the fiber is uni-axially stretched, and the time-dependent stochastic part $\Omega_s(t)$, where the fiber is strongly affected by the turbulent fluctuations. Consider $C = \Omega_d\cap\Omega_s(t)$ to be the time-independent coupling point between the deterministic and the stochastic domain.
First, we perform the simulation of the steady viscous fiber model (System~\ref{system:Final_viscous} in Appendix~\ref{appendix_statModel}) for the whole fiber domain, i.e., $\Omega_d = \Omega$, yielding solutions for the scalar fiber speed $u$, temperature $T$, stress $\sigma$ and pressure $p$. Second, we determine the coupling point $C$ by the ratio of the relative velocity between fiber and airflow $v_{rel} = \lVert \mathbf{v}_\star - u\boldsymbol{\tau} \rVert$ and the turbulent velocity scale $k_\star^{1/2}$, in particular
\begin{align*}
C=\min\bigg\{s\in\Omega~\bigg\lvert\left(\frac{v_{rel}v_0}{(k_{\star}k_{\star,0})^{1/2}}\right)(s) \leq 10\bigg\}.
\end{align*}
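On a discrete arc-length grid this selection rule amounts to a simple scan from the nozzle; a minimal Python sketch (illustrative names; `v0` and `k0` denote the reference values used in the non-dimensionalization):

```python
def coupling_point(s_grid, v_rel, k_star, v0, k0, ratio=10.0):
    """First grid point (nearest to the nozzle) where the dimensional
    relative velocity over the turbulent velocity scale k^{1/2} drops
    to or below the given ratio; returns None if never reached."""
    for s, vr, k in zip(s_grid, v_rel, k_star):
        if vr * v0 / (k * k0) ** 0.5 <= ratio:
            return s
    return None
```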
So the coupling point is the nearest point to the nozzle where the ratio of the relative velocity and the turbulent velocity scale drops below one order of magnitude. At $C$ the quantities of the stationary solution are denoted by $u_C$, $T_C$, $\sigma_C$, $p_C$. Third, for the subsequent solution of the instationary viscoelastic fiber model (System~\ref{system:Final}) on $\Omega_s(t)$ (reformulated in Lagrangian coordinates) we adjust the typical values and adapt the initial conditions. In particular, we set the reference values used for the non-dimensionalization (see Sec.~\ref{sec:model_general}) to be
\begin{align*}
r_0 = (1-C)H,\qquad d_0 =\frac{\sqrt{\pi}}{2} \sqrt{\frac{v_{in}}{v_C}} d_{in},\qquad
v_0 = u_C, \qquad T_0 = T_C, \end{align*}
then the dimensionless numbers change accordingly. The altered initial conditions read
\begin{align*}
\sigma_{in} = \sigma_C, \qquad p_{in} = p_C.
\end{align*}
These modifications can be interpreted as placing a fictive nozzle with adjusted extrusion conditions at the spatial position of the coupling point $C$. The diameter of the fictive nozzle reflects the pre-elongation of the extruded fiber by the factor $v_C/v_{in}$ compared to $d_{in}$ of the original nozzle.
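The adjusted reference values at the fictive nozzle follow directly from the formulas above; as an illustrative Python sketch (our own names; $C$ is taken as a dimensionless position so that $r_0=(1-C)H$):

```python
import math

def fictive_nozzle(C, H, d_in, v_in, v_C, T_C):
    """Adjusted reference values (r0, d0, v0, T0) at the fictive nozzle
    placed at the coupling point C, following the re-scaling in the text."""
    r0 = (1.0 - C) * H
    d0 = 0.5 * math.sqrt(math.pi) * math.sqrt(v_in / v_C) * d_in
    return r0, d0, v_C, T_C
```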
In the setup the crucial stretching of the fiber takes place in the upper part of the device and ends when the fiber is nearly solidified. Since we are interested in the maximal achieved fiber elongations as well as in the corresponding fiber diameter distribution, but not in the lay-down process, it is sufficient to cut off the fiber before it reaches the bottom of the device. We choose to cut off the fiber when it reaches the height corresponding to $x = -9.45\cdot10^{-2}$ m. Below this point the airflow temperature satisfies $T_\star < 353.15$ K (see Fig.\ \ref{sec:ex_fig:k-eps-sim}), and we expect the fiber dynamic viscosities to be of magnitude $\mu\sim\mathcal{O}(10^3$ Pas$)$, implying that no noticeable further fiber elongation takes place.
\begin{figure}[!t]
\centering
\begin{minipage}[c]{\textwidth}
\centering
\includegraphics{main-figure5.pdf}
\end{minipage}
\caption{Scalar speed $u$, elongation $e$, stress $\sigma$, pressure $p$ and temperature $T$ of the steady viscous uni-axial fiber model in Eulerian coordinates \textit{(top left to bottom right)}. The stochastic region, where the instationary viscoelastic fiber model (System~\ref{system:Final}) is employed, is shaded in gray.}\label{sec:ex_fig:statSol}
\end{figure}
\subsection{Results}
In the following we present the numerical results for the industrial spinning setup described in Sec.~\ref{sec:ex_subsec:setup}. While all computations for a single fiber realization have been done on an Intel Core i7-6700 CPU (4 cores, 8 threads) with 16 GB of RAM, the Monte Carlo simulation has been performed on an MPI cluster (dual Intel Xeon E5-2670, 16 CPU cores per node, 64 GB RAM) with one CPU core for each fiber realization. For all computations MATLAB version R2016b has been used.
Figure~\ref{sec:ex_fig:statSol} shows the results for the scalar speed $u$, stress $\sigma$, pressure $p$ and temperature $T$ as well as the induced elongation $e$ of the steady viscous uni-axial fiber model in an Eulerian parameterization that spans the whole domain $\Omega$. The maximal fiber speed is $u=1.74\cdot10^2$ m/s and the corresponding maximal fiber elongation is $e=1.74\cdot10^4$. This indicates that a stationary fiber simulation is not physically reasonable for the whole domain $\Omega$, since much higher fiber elongations are expected for the melt-blowing setup. Nevertheless, the steady viscous solution serves as an adequate approximation of the fiber behavior in the nozzle region, as described in Sec.~\ref{sec:ex_subsec:practicalTreatment}. For the further instationary viscoelastic simulation we determine the spatial position of the coupling point $C$ and put a fictive nozzle at $\mathbf{r}_{in} = (-3.83\cdot10^{-2},0,0)\,$m. The corresponding fiber quantities at this fictive nozzle are
\begin{align*}
u_C = 50.62\text{ m/s}, \qquad \sigma_C = 69.98\text{ Pa},\qquad p_C = -23.33\text{ Pa},\qquad T_C = 446.6\text{ K},
\end{align*}
and the dimensionless numbers change accordingly, see Tab.~\ref{sec:ex_table:dimNumbers_adjustedNozzle}.
\begin{table}[t]
\begin{minipage}[c]{\textwidth}
\begin{center}
\begin{small}
\begin{tabular}{| l l l |}
\hline
\multicolumn{3}{|l|}{\textbf{Dimensionless numbers}}\\
Description & Symbol & Value\\
\hline
slenderness & $\varepsilon$ & $4.47\cdot10^{-5}$ \rule{0pt}{2.6ex}\\
Reynolds & $\mathrm{Re}$ & $2.15\cdot10^2$ \\
Deborah & $\mathrm{De}$ & $2.72\cdot10^2$\\
Froude & $\mathrm{Fr}$ & $4.84\cdot10^1$ \\
Stanton & $\mathrm{St}$ & $1.75\cdot10^{-4}$\\
Mach & $\mathrm{Ma}$ & $2.42\cdot10^{2}$\\
air drag associated & $\mathrm{A}_\star$ & $2.66\cdot10^1$\\
mixed (air-fiber) Reynolds & $\mathrm{Re}_\star$ & $1.40\cdot 10^1$\\
Nusselt & $\mathrm{Nu}_\star$ & $2.68$\\
Prandtl & $\mathrm{Pr}_\star$ & $6.23\cdot 10^{-1}$\\
turbulence degree & $\mathrm{Tu}_\star$ & $7.34\cdot10^{-1}$\\
turbulent time & $\mathrm{Tt}_\star$ & $1.14\cdot10^2$\\
\hline
\end{tabular}
\end{small}
\end{center}
\end{minipage}\\~\\~\\
\caption{Dimensionless numbers characterizing the fiber behavior in the stochastic region.}\label{sec:ex_table:dimNumbers_adjustedNozzle}
\end{table}
\begin{figure}[!t]
\centering
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics{main-figure6.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics{main-figure7.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.33\textwidth}
\centering
\includegraphics{main-figure8.pdf}
\end{minipage}\hfill
\caption{Snapshots of one representative fiber curve $\mathbf{r}$ before reaching the cutoff height ($x = -9.45\cdot10^{-2}$ m) at times $t \in\{ 1.30\cdot10^{-4}$ s, $2.60\cdot10^{-4}$ s, $3.90\cdot10^{-4}$ s$\}$. We track the material point $\zeta_{N-3269}$ (marked with a red star) and present the temporal evolution of all fiber quantities at that point in Fig.~\ref{sec:ex_fig:sol_instat}.}\label{sec:ex_fig:curves}
\end{figure}
\begin{figure}[t]
\begin{minipage}[c]{\textwidth}
\centering
\includegraphics{main-figure9.pdf}
\end{minipage}
\caption{Solution plots for the material point $\zeta_{N-3269}$ (cf. Fig.~\ref{sec:ex_fig:curves}) that enters the flow domain at time step $3270$ ($t = 7.20\cdot10^{-5}$ s). \textit{Top} and \textit{middle left:} fiber velocities $v_i$ (\textit{blue}) as well as airflow velocities $v_{\star,i}$, $\bar{v}_{\star,i}$ with (\textit{red}) and without (\textit{green}) turbulent fluctuations respectively, $i \in\{ 1,2,3\}$. \textit{Middle right:} elongation $e$ as rate of fiber stretching compared to the original nozzle and $e_{max}$ indicating the maximal achievable elongation in stationary simulations. \textit{Bottom:} pressure $p$, stress $\sigma$ as well as fiber temperature $T$ and air temperature $T_\star$.}\label{sec:ex_fig:sol_instat}
\end{figure}
\begin{figure}[!t]
\centering
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics{main-figure10.pdf}
\end{minipage}\hfill
\begin{minipage}[c]{0.49\textwidth}
\centering
\includegraphics{main-figure11.pdf}
\end{minipage}
\caption{\textit{Left:} Fiber elongation distribution at the cutoff point for a Monte Carlo simulation based on $67$ realizations.
\textit{Right:} Resulting fiber diameter distribution at the cutoff point in the sense of an Eulerian fiber parameterization.
}\label{sec:ex_fig:diameter}
\end{figure}
Considering the stochastic region, the numerical step size restriction (\ref{sec:numerics_eq:resolution}) for the fiber discretization weakens compared to (\ref{sec:ex_eq:resolution}) to
\begin{align*}
\Delta\zeta \leq 3.31\cdot 10^{-5}, \qquad \Delta t \leq 7.99\cdot 10^{-4},
\end{align*}
and we choose $\Delta\zeta = \Delta t = 10^{-5}$ for our computation. As expected, the turbulent fluctuations of the airflow cause a swirling of the fiber jet such that the fiber curve leaves the $\mathbf{e}_x$-axis shortly behind the fictive nozzle. Figure~\ref{sec:ex_fig:curves} shows temporal snapshots of the curve of one representative fiber before its cutoff (at $x = -9.45\cdot10^{-2}$ m). The fluctuations move the fiber jet not only downwards but also upwards, such that the fiber curve forms loops. In these loops high aerodynamic forces act on the fiber due to high relative velocity gradients, causing the fiber to elongate. Figure~\ref{sec:ex_fig:sol_instat} exemplarily shows the temporal evolution of the fiber quantities at one material point. Obviously the material point experiences high elongations: directly after entering the flow domain, high relative velocities in $\mathbf{e}_x$-direction between the fiber velocity $v_1$ and the deterministic airflow velocity $\bar{v}_{\star,1}$ cause fiber stretching. After the fiber velocity $v_1$ reaches the corresponding deterministic airflow velocity $\bar{v}_{\star,1}$, the fiber experiences further stretching due to the velocity fluctuations, where the main stretching takes place in regions where high lateral air velocities $v_{\star,2}$, $v_{\star,3}$ create swirls. The final elongation at this material point is of magnitude $e\sim\mathcal{O}(10^5)$ and thereby clearly exceeds the theoretically possible deterministic expectations. In particular, the computed elongation in a stationary simulation is obviously restricted by the velocity of the air stream, i.e., $e_{max} = u/v_{in} < \lVert \mathbf{v}_\star \rVert_\infty/v_{in} = 4.78\cdot10^4$. Moreover, the stationary uni-axial viscous simulation only achieves $e = 1.74\cdot 10^4$ (cf. Fig.~\ref{sec:ex_fig:statSol}).
In the region of high fiber stretching, the material point experiences high stresses $\sigma$ that partly dissipate due to the elastic material behavior before the fiber completely solidifies. The pressure $p$ is orders of magnitude smaller than the stress $\sigma$ and could therefore be neglected in the simulation, as already pointed out in Remark~\ref{sec:model_remark:p}. The fiber temperature $T$ approaches the air temperature $T_\star$, causing the jet to cool down and solidify.
When the fiber reaches the cutoff height $x = -9.45\cdot10^{-2}$~m at time $t = 3.92\cdot10^{-4}$~s, we cut off the fiber end, track the fiber elongations $e$ as well as the corresponding fiber diameters $d$, and record the occurring relative frequencies until the end time $t_{end} = 2.20\cdot10^{-2}$~s is reached, see Fig.~\ref{sec:ex_fig:diameter}. To achieve comparability with experiments, we weight the relative frequencies of the fiber diameters with the associated fiber elongations $e$, leading to a diameter distribution in the sense of an Eulerian (spatial) parameterization of the fibers. The resulting elongation and fiber diameter distributions are computed with the help of a Monte Carlo simulation with $67$ samples. We observe a mean elongation $e = 9.47\cdot10^4$, again exceeding the deterministic expectations. The mean fiber diameter is $d = 1.28\cdot10^{-6}$~m. This is a typical value for fibers produced in industrial melt-blowing setups, see for example \cite{ellison:p:2007}. Thus our instationary viscoelastic fiber model, which uses an adjusted nozzle and reconstructs the underlying turbulence effects from an airflow simulation, quantitatively predicts the fiber jet thinning observed in experiments, which would not be possible with a purely steady deterministic simulation that neglects the turbulent aerodynamic velocity fluctuations.
Summing up, our proposed procedure makes the simulation of industrial melt-blowing processes feasible, including turbulent and viscoelastic effects as well as temperature dependencies. Accounting for the turbulent effects acting on the fiber, by reconstructing the turbulent structure of the outer air stream, yields a jet thinning that exceeds the deterministic expectations and produces final fiber diameters of realistic order of magnitude. Our presented modeling and solution framework thus provides the basis for further parameter studies and the optimization of melt-blowing processes. The computation time for the presented setup is around $96.4$ hours. A combined experimental and numerical study is left to future research.
\section{Conclusion}
In this paper we presented a model and simulation framework that allows the numerical investigation of the physical mechanism responsible for the strong fiber thinning in industrial melt-blowing processes. Considering an asymptotic instationary viscoelastic UCM fiber jet model driven by turbulent aerodynamic forces, the random field sampling strategy of \cite{huebsch:p:2013} provides an efficient numerical procedure for the realization of the turbulent air flow fluctuations. The computed fiber diameters are much lower than those obtained from previous stationary simulations with a purely deterministic aerodynamic force on the fiber. Our simulation results clearly stress the significance of the turbulent effects as the key driver for the production of fibers on the micro- and nanoscale. Further parameter studies and an optimization of the industrial process setup will open up the possibility of simulating fibers with elongations of order $e\sim\mathcal{O}(10^6)$ compared to the nozzle diameter. In view of more quantitative predictions of the resulting nonwovens, a combined experimental and numerical study with experimentally measured temperature dependencies of the polymer properties (e.g., the relaxation time) is planned for future work.
\section*{Acknowledgments}
This work has been supported by German DFG, project 251706852, MA 4526/2-1, WE 2003/4-1.
\section{Introduction and motivation}
In a pilot program during the 2016-17 admissions cycle, the University of California, Berkeley invited many applicants for freshman admission to submit letters of recommendation (LORs) as part of their applications.
UC Berkeley had (and has) a ``holistic review'' admissions process, which attempts to examine the whole applicant, taking account of any contextual factors and obstacles overcome without over-reliance on quantitative measures like SAT scores \citep{hout2005berkeley}.
Unlike other highly selective universities, however, UC Berkeley had never previously asked applicants to submit letters from teachers and guidance counselors.
The new approach proved controversial within the university. The LORs were intended to help identify students from non-traditional backgrounds who might otherwise be overlooked \citep{UC_Diversity_2017}. But there was also legitimate concern that applicants from disadvantaged backgrounds might not have access to adults who could write strong letters, and that the use of letters would further disadvantage these students \citep{Chalfant_letter_2017}.
In this paper, we use the Berkeley pilot as the basis for an observational study of the impact of submitting a letter of recommendation on subsequent admission.
Our goal is to assess how impacts vary across pre-defined subgroups, in order to inform the debate over the Berkeley policy and similar debates at other universities.
Assessing treatment effect heterogeneity is challenging in non-randomized settings because variation in estimated impacts reflects both actual treatment effect variation and differences in covariate balance across groups.
Inverse Propensity Score Weighting (IPW) is one standard approach for addressing this:
first estimate a propensity score model via logistic regression, including treatment-by-subgroup interaction terms; construct weights based on the estimated model; and then compare IPW estimates across subgroups \citep[see][]{green2014examining, Lee2019}.
Estimated weights from traditional IPW methods, however, are only guaranteed to have good covariate balancing properties asymptotically.
Balancing weights estimators, by contrast, instead find weights that directly minimize a measure of covariate imbalance, often yielding better finite sample performance \citep{Zubizarreta2015,Athey2018a,Hirshberg2019_amle,benmichael_balancing_review}.
Both balancing weights and traditional IPW face a curse of dimensionality when estimating subgroup effects:
it is difficult to achieve exact balance on all covariates within each subgroup, or, equivalently, balance all covariate-by-subgroup interactions.
We therefore develop an approximate balancing weights approach tailored to estimating subgroup treatment effects, with a focus on the UC Berkeley LOR pilot study.
Specifically, we present a convex optimization problem that finds
weights that directly target the level of local imbalance within each subgroup --- ensuring \emph{approximate} local covariate balance --- while guaranteeing \emph{exact} global covariate balance between the treated and control samples.
We show that controlling local imbalance
controls the estimation error of subgroup-specific effects, allowing us to better isolate treatment effect variation.
We also show that, even when the target estimand is the overall treatment effect, ensuring both exact global balance and approximate local balance reduces the overall estimation error.
Next, we demonstrate that this proposal has a dual representation as inverse propensity weighting with a hierarchical propensity score model, building on recent connections between balancing weights and propensity score estimation \citep{Chattopadhyay2019}.
In particular, finding weights that minimize both global and local imbalance corresponds to estimating a propensity score model in which the subgroup-specific parameters are partially pooled toward a global propensity score model.
Any remaining imbalance after weighting may lead to bias. To adjust for this, we also combine the weighting approach with an outcome model, analogous to bias correction for matching \citep{rubin1973bias, Athey2018a}.
After assessing our proposed approach's properties, we use it to estimate the impacts of letters of recommendation during the 2016 UC Berkeley undergraduate admissions cycle.
Based on the Berkeley policy debate, we focus on variation in the effect on admissions rates based on under-represented minority (URM) status and on
the \emph{a priori} predicted probability of admission, estimated using data from the prior year's admissions cycle.
First, we show that the proposed weights indeed yield excellent local and global balance, while traditional propensity score weighting methods yield poor local balance.
We then find evidence that the impact of letters increases with the predicted probability of admission.
Applicants who are very unlikely to be admitted see little benefit from letters of recommendation, while applicants on the cusp of acceptance see a larger, positive impact.
The evidence on the differential effects by URM status is more mixed.
Overall, the point estimates for URM and non-URM applicants are close to each other. However, these estimates are noisy and mask
important variation by \emph{a priori} probability of admission.
For applicants with the highest baseline admission probabilities, we estimate larger impacts for non-URM than URM applicants, though these estimates are sensitive to augmentation with an outcome model.
For all other applicants, we estimate the reverse: larger impacts for URM than non-URM applicants.
Since URM status is correlated with the predicted probability of admission, this leads to a Simpson's Paradox-type pattern for subgroup effects, with a slightly larger point estimate for non-URM applicants pooled across groups \citep{Bickel1975, vanderweele2011interpretation}.
The fact that results hinge on higher-order interactions suggests caution but also highlights the advantages of our design-based approach with pre-specified subgroups \citep{rubin2008objective}.
Since we separate the design and analysis phases, we can carefully assess covariate balance and overlap in the subgroups of interest --- and can tailor the weights to target these quantities directly.
By contrast, many black-box machine learning methods estimate the entire conditional average treatment effect function, without prioritizing estimates for pre-specified subgroups \citep{Carvalho2019}.
While these approaches require minimal input from the researcher, they can over-regularize the estimated effects for subgroups of interest, making it difficult to isolate higher-order interactions.
As we discuss in Section \ref{sec:augment}, we argue that researchers should view these weighting and outcome modeling methods as complements rather than substitutes. Even without pre-specified subgroups, there can be substantial gains from combining weighting and outcome modeling for treatment effect variation \citep{nie2017quasi, Kunzel2019}. Our proposed augmented estimator is especially attractive because we can incorporate pre-defined subgroups in the weighting estimator while still leveraging off-the-shelf machine learning methods for outcome modeling.
Finally, we conduct extensive robustness and sensitivity checks, detailed in Appendix \ref{sec:robustness_appendix}. In addition to alternative estimators and sample definitions, we conduct a formal sensitivity analysis for violations of the ignorability assumption, adapting the recent proposal from \citet{soriano2019sensitivity}. We also explore an alternative approach that instead leverages unique features of the UC Berkeley pilot study, which included an additional review without the letters of recommendation from a sample of 10,000 applicants.
Overall, our conclusions are similar across a range of approaches. Thus, we believe our analysis is a reasonable first look at this question, albeit best understood alongside other approaches that rest on different assumptions \citep[such as those in][]{rothstein_lor2017}.
The paper proceeds as follows. In the next section we introduce the letter of recommendation pilot program at UC Berkeley.
Section \ref{sec:prelim} introduces the problem setup and notation, and discusses related work. Section \ref{sec:approx_weights} proposes and analyzes the approximate balancing weights approach.
Section \ref{sec:sims} presents a simulation study.
Section \ref{sec:results} presents empirical results on the effect of letters of recommendation. Section \ref{sec:discussion} concludes with a discussion about possible extensions.
The appendix includes additional analyses and theoretical discussion.
\subsection{A pilot program for letters of recommendation in college admissions}
\label{sec:lor}
As we discuss above, there is considerable debate over the role of letters of recommendation in college admissions. LORs have the potential to offer insight into aspects of the applicant not captured by the available quantitative information or by the essays that applicants submit \citep{kuncel2014meta}.
At the same time, letters from applicants from disadvantaged backgrounds or under-resourced high schools may be less informative or prejudicial against the applicant, due, e.g., to poor writing or grammar, or to lower status of the letter writer; see \citet{schmader2007linguistic} as an example.
These issues were central in the UC Berkeley debates over the policy.
One Academic Senate committee, following an inquiry into the ``intended and unintended consequences of the interim pilot admissions policy, especially for underrepresented minority students (URMs),'' concluded that ``the burden of proof rests on those who want to implement the new letters of recommendation policy, and should include a test of statistical significance demonstrating measurable impact on increasing diversity in undergraduate admissions'' \citep{UC_Diversity_2017}.
The UC system-wide faculty senate ultimately limited the use of LORs following the pilot (though before any results were available), citing concerns that ``LORs conflict with UC principles of access and fairness, because students attending under-resourced schools or from disadvantaged backgrounds will find it more difficult to obtain high-quality letters, and could be disadvantaged by a LOR requirement'' \citep{Chalfant_letter_2017}.
The UC Berkeley LOR pilot study is a unique opportunity to assess these questions; \citet{rothstein_lor2017} discusses implementation details.
For this analysis, we restrict our sample to non-athlete California residents who applied to either the College of Letters and Science or the College of Engineering at UC Berkeley in the 2016 admissions cycle. This leaves 40,541 applicants, 11,143 of whom submitted LORs.
We focus our analysis on the impacts for applicants who both were invited to and subsequently did submit LORs.\footnote{We could use the methods discussed here to explore a range of different quantities. For this target, the net effect of LORs on admission includes differential rates of submission of a letter given invitation. While non-URM applicants submitted letters at a higher rate than URM applicants, the majority of the discrepancy arises from applicants who were unlikely to be admitted \emph{a priori} \citep{rothstein_lor2017}.}
\subsubsection{Selection into treatment}
\label{sec:selection}
\begin{figure}[tb]
\centering \includegraphics[width=.8\maxwidth]{figure/std_diff.pdf}
\caption{Absolute difference in means, standardized by the pooled standard deviation, between applicants submitting and not submitting letters of recommendation for several key covariates. By design, applicants submitting letters of recommendation disproportionately have a ``Possible'' score from the first reader (70\% of treated applicants vs. 4\% of untreated applicants).}
\label{fig:std_diff}
\end{figure}
UC Berkeley uses a two-reader evaluation system. Each reader scores applicants on a three-point scale, as ``No,'' ``Possible,'' or ``Yes.'' Application decisions are based on the combination of these two scores and the major to which a student has applied. In the most selective majors (e.g., mechanical engineering), an applicant typically must receive two ``Yes'' scores to be admitted, while in others a single ``Yes'' is sufficient. In the LOR pilot, applicants were invited to submit letters based in part on the first reader score, and the LORs, if submitted, were made available to the second reader.
As in any observational study of causal effects, selection into treatment is central. Decisions to submit letters were a two-step process. Any applicant who received a ``Possible'' score from the first reader was invited. In addition, due to concerns that first read scores would not be available in time to be useful, an index of student- and school-level characteristics was generated, and applicants with high levels of the index were invited as well.\footnote{The index was generated from a logistic regression fit to data from the prior year's admissions cycle, predicting whether an applicant received a ``Possible'' score (versus either a ``No'' or a ``Yes''). Applicants with predicted probabilities from this model greater than 50\% were invited to submit LORs. Because we observe all of the explanatory variables used in the index, this selection depends only on observable covariates.
A small share of applicants with low predicted probabilities received first reads after January 12, 2017, the last date that LOR invitations were sent, and were not invited even if they received ``Possible'' scores.}
Of the 40,541 total applicants, 14,596 were invited to submit a letter. Approximately 76\% of those invited to submit letters eventually submitted them, and no applicant who was not invited submitted a letter.
For this analysis, we assume that submission of LORs is effectively random conditional on the first reader score and on both student- and school-level covariates (Assumption \ref{a:ignore} below).
In particular, the \emph{interaction} between the covariates and the first reader score plays an important role in the overall selection mechanism, as applicants who received a score of ``No'' or ``Yes'' from the first reader could still have been asked to submit an LOR based on their individual and school information.
Figure \ref{fig:std_diff} shows covariate imbalance for several key covariates --- measured as the absolute difference in means divided by the pooled standard deviation --- for applicants who submitted LORs versus those who did not.\footnote{The full set of student-level variables we include in our analysis are: weighted and unweighted GPA, GPA percentile within school, parental income and education, SAT composite score and math score, the number of honors courses and percentage out of the total available, number of AP courses, ethnic group, first generation college student status, and fee waiver status. The school level variables we control for are: average SAT reading, writing, and math scores, average ACT score, average parental income, percent of students taking AP classes, and the school Academic Performance Index (API) evaluated through California's accountability tests. For students that did not submit an SAT score but did submit an ACT score, we imputed the SAT score via the College Board's SAT to ACT concordance table. For the 992 applicants with neither an SAT nor an ACT score, we impute the SAT score as the average among applicants from the school.}
For the purposes of this study, we follow the university in defining a URM applicant as one who is a low-income student, a student in a low-performing high school, a first-generation college student, or from an underrepresented racial or ethnic group.
We see that there are large imbalances in observable applicant characteristics, most notably average school income, GPA, the number of honors and AP classes taken, and SAT score.
There were also large imbalances in first reader scores (not shown in Figure \ref{fig:std_diff}): 70\% of applicants that submitted LORs had ``Possible'' scores, compared to only 4\% of those who did not. There is a smaller imbalance in URM status, with 61\% of those submitting LORs classified as URMs, versus 53\% of those who did not submit.
\subsubsection{Heterogeneity across \emph{a priori} probability of admission}
\label{sec:ai}
\begin{figure}[tb]
\centering \includegraphics[width=0.85\maxwidth]{figure/ai_hist.pdf}
\caption{Distribution of the ``admissibility index'' --- an estimate of the \emph{a priori} probability of acceptance --- for the 2016 UC Berkeley application cohort, separated into URM and non-URM and those that submitted a letter versus those that did not.}
\label{fig:ai_hist}
\end{figure}
To better understand who was invited to submit LORs and any differential impacts between URM and non-URM applicants, we construct a univariate summary of applicant- and school-level characteristics. We use logistic regression to estimate the probability of admission given observable characteristics using the \emph{prior year} (2015) admissions data.
We then use this model to predict \emph{a priori} admissions probabilities for the applicants of interest in 2016; we refer to these predicted probabilities as the Admissibility Index (AI).
Overall, the AI is a good predictor of admission in 2016: for applicants who do not submit LORs, the overall Area Under the Curve (AUC) in predicting 2016 admissions is 0.92 and the mean square error is 7\% (see Appendix Table \ref{tab:ai_roc_auc}). However, the predictive accuracy decreases for higher AI applicants, for whom the AI under-predicts admissions rates (see Appendix Figure \ref{fig:ai_performance}).
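For illustration, the AI construction can be sketched as follows. This is a minimal stand-in using a Newton-Raphson logistic fit on synthetic data; the actual model is fit to the 2015 admissions outcomes using the applicant- and school-level covariates listed above, so all names and the data-generating process here are hypothetical.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25, ridge=1e-6):
    """Fit a logistic regression by Newton-Raphson.

    X: (n, p) design matrix (include a column of ones for the
    intercept); y: binary outcomes in {0, 1}."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)
        # Newton step: solve (X'WX + ridge*I) d = X'(y - p).
        H = X.T @ (X * W[:, None]) + ridge * np.eye(X.shape[1])
        beta += np.linalg.solve(H, X.T @ (y - p))
    return beta

def predict_proba(X, beta):
    return 1.0 / (1.0 + np.exp(-X @ beta))

# Fit on a synthetic "2015" cohort, then score a "2016" cohort.
rng = np.random.default_rng(0)
X15 = np.column_stack([np.ones(5000), rng.normal(size=(5000, 3))])
true_beta = np.array([-2.0, 1.0, -0.5, 0.25])
y15 = rng.binomial(1, 1 / (1 + np.exp(-X15 @ true_beta)))
beta_hat = fit_logistic(X15, y15)

X16 = np.column_stack([np.ones(1000), rng.normal(size=(1000, 3))])
ai = predict_proba(X16, beta_hat)  # admissibility index for the new cohort
```

The key design point is that the model is trained on the prior year only, so the AI is a genuinely \emph{a priori} summary for the 2016 applicants.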
Figure \ref{fig:ai_hist} shows the AI distribution for the 2016 applicant cohort, broken out by URM status and LOR submission. There are several features of this distribution that have important implications for our analysis. First, although the probability of admission is quite low overall,
applicants across nearly the full support of probabilities submitted LORs. This is primarily because applicants who received ``Possible'' scores from the first readers come from a wide range of admissibility levels. This will allow us to estimate heterogeneous effects across the full distribution, with more precision for applicants with lower AIs.
Second, because the admissions model disproportionately predicted that URM students had high chances of receiving ``Possible'' scores, many more URM applicants were invited to submit letters than non-URM applicants, and so our estimates for URM applicants will be more precise than those for non-URM applicants.
Third, at higher AI levels large shares of applicants submitted LORs, leaving few comparison observations. This will make it challenging to form balanced comparison groups for high-AI URM applicants who submit letters.
From Figure \ref{fig:ai_hist} we know that the distribution of AI varies between URM and non-URM applicants, and so apparent differences in estimated effects between the two groups may be due to compositional differences.
Therefore, in the subsequent sections we will focus on estimating effects within subgroups defined by both URM status and admissibility.
To do this, we define subgroups by creating four (non-equally-sized) strata of the AI: $<5\%$, $5\%-10\%$, $10\%-20\%$ and $> 20\%$. Interacting these strata with URM status leads to eight non-overlapping subgroups; we marginalize over these to estimate effects for the coarser subgroups above (e.g., by URM status alone). Table \ref{tab:subgroup_counts} shows the total number of applicants in each of the eight groups, along with the proportion submitting letters of recommendation. As we discuss in Section \ref{sec:results}, we will further divide each of these subgroups by first reader score and college, to ensure exact balance on these important covariates.
\begin{table}[tbp]
\centering
\begin{tabular}{@{}llrrr@{}}
\toprule
AI Range & URM & Number of Applicants & Number Submitting LOR & Proportion Treated\\
\midrule
\multirow{2}{*}{$< 5\%$} & URM & 11,832 & 2,157 & 18\%\\
& Not URM & 6,529 & 607 & 9\%\\
\midrule
\multirow{2}{*}{5\% - 10\%} & URM & 3,106 & 1,099 & 35\%\\
& Not URM & 2,099 & 536 & 25\%\\
\midrule
\multirow{2}{*}{10\% - 20\%} & URM & 2,876 & 1,212 & 42\%\\
& Not URM & 2,495 & 828 & 33\%\\
\midrule
\multirow{2}{*}{$>20\%$} & URM & 4,645 & 2,345 & 50\%\\
& Not URM & 6,959 & 2,359 & 34\%\\
\bottomrule
\end{tabular}
\caption{Number of applicants and proportion treated by subgroup.}
\label{tab:subgroup_counts}
\end{table}
\section{Treatment effect variation in observational studies}
\label{sec:prelim}
\subsection{Setup and estimands}
We now describe the letter of recommendation study as an observational study where for each applicant $i=1,\ldots,n$, we observe applicant- and school-level covariates $X_i \in \ensuremath{\mathcal{X}}$; a group indicator $G_i \in \{1,\ldots,K\}$ denoting a pre-defined subgroup of interest;
a binary indicator for submitting a letter of recommendation $W_i \in \{0,1\}$; and whether the applicant is admitted, which we denote as $Y_i \in \{0,1\}$.
Let $n_{1g}$ and $n_{0g}$ represent the number of treated and control units in subgroup $G_i = g$, respectively. We assume that for each applicant, $(X_i, G_i, W_i, Y_i)$ are sampled i.i.d. from some distribution $\ensuremath{\mathcal{P}}(\cdot)$. Following the potential outcomes framework \citep{neyman1923,Holland1986}, we assume SUTVA \citep{rubin1980} and posit two potential outcomes $Y_i(0)$ and $Y_i(1)$ for each applicant $i$, corresponding to $i$'s admission outcome if that applicant does not or does submit a letter of recommendation, respectively; the observed outcome is $Y_i = W_i Y_i(1) + (1-W_i)Y_i(0)$.\footnote{There is a possibility of interference induced by the number of admitted applicants being capped. With 6,874 admitted students, we consider the potential interference to be negligible.} In this study we are interested in estimating two types of effects. First, we wish to estimate the overall Average Treatment Effect on the Treated (ATT), the treatment effect for applicants who submit a letter,
$$\tau = \ensuremath{\mathbb{E}}[Y(1) - Y(0) \mid W = 1],$$
where we denote $\mu_1 = \ensuremath{\mathbb{E}}[Y(1) \mid W = 1]$ and $\mu_0 = \ensuremath{\mathbb{E}}[Y(0) \mid W = 1]$.
Second, for each subgroup $G_i = g$, we would like to estimate the Conditional ATT (CATT),
\begin{equation}
\label{eq:catt}
\tau_g = \ensuremath{\mathbb{E}}[Y(1) - Y(0) \mid G = g, W = 1],
\end{equation}
where similarly we denote $\mu_{1g} = \ensuremath{\mathbb{E}}[Y(1) \mid G = g, W = 1]$ and $\mu_{0g} = \ensuremath{\mathbb{E}}[Y(0) \mid G = g, W = 1]$.
Estimating $\mu_{1g}$ is relatively straightforward: we can simply use the average outcome for treated units in group $g$, $\hat{\mu}_{1g} \equiv \frac{1}{n_{1g}} \sum_{G_i = g} W_i Y_i$. However, estimating $\mu_{0g}$ is more difficult due to confounding; we focus much of our discussion on imputing this counterfactual mean for the group of applicants who submitted letters of recommendation. To do this, we rely on two key assumptions that together form the usual \emph{strong ignorability} assumption \citep{Rosenbaum1983}.
\begin{assumption}[Ignorability]
\label{a:ignore}
The potential outcomes are independent of treatment given the covariates and subgroup:
\begin{equation}
\label{eq:ignore}
Y(1), Y(0) \indep W \mid X, G.
\end{equation}
\end{assumption}
\begin{assumption}[One Sided Overlap]
\label{a:overlap}
The \emph{propensity score} $e(x, g) \equiv P(W = 1 \mid X = x, G = g)$ is less than 1:
\begin{equation}
\label{eq:overlap}
e(X, G) < 1.
\end{equation}
\end{assumption}
\noindent In our context, Assumption \ref{a:ignore} says that conditioned on the first reader score and applicant- and school-level covariates, submission of LORs is independent of the potential admissions outcomes. Due to the selection mechanism we describe in Section \ref{sec:selection}, we believe that this is a reasonable starting point for estimating these impacts; see \citet{rothstein_lor2017} and Appendix \ref{sec:within} for alternatives. In Appendix \ref{sec:sensitivity}, we assess the sensitivity of our conclusions to violations of this assumption.
Assumption \ref{a:overlap} corresponds to assuming that no applicant would have been guaranteed to submit a letter of recommendation. Although some applicants were guaranteed to be \emph{invited} to submit an LOR, we believe that this is a reasonable assumption for actually submitting a letter. In Section \ref{sec:diagnostic} we assess overlap empirically.
With this setup, let $m_0(x, g) = \ensuremath{\mathbb{E}}[Y(0) \mid X = x, G = g]$ be the \emph{prognostic score}, the expected control outcome conditioned on covariates $X$ and group membership $G$.
Under Assumptions \ref{a:ignore} and \ref{a:overlap}, we have the standard identification result:
\begin{equation}
\label{eq:identify}
\mu_{0g} = \ensuremath{\mathbb{E}}[m_0(X, G) \mid G = g, W = 1] = \frac{\Pr(W = 0 \mid G = g)}{\Pr(W = 1 \mid G = g)} \, \ensuremath{\mathbb{E}}\left[\frac{e(X, G)}{1 - e(X,G)} Y \mid G = g, W = 0 \right].
\end{equation}
Therefore we can obtain a plug-in estimate for $\mu_{0g}$ with an estimate of the prognostic score, $m_0(\cdot, \cdot)$, an estimate of propensity score, $e(\cdot, \cdot)$, or an estimate of the treatment odds themselves, $\frac{e(\cdot, \cdot)}{1 - e(\cdot, \cdot)}$.
We next review existing methods for such estimation, turning to our proposed weighting approach in the following section.
\subsection{Related work: methods to estimate subgroup treatment effects}
\label{sec:related}
There is an extensive literature on estimating varying treatment effects in observational studies; see \citet{anoke2019approaches} and \citet{Carvalho2019} for recent discussions. This is an active area of research, and we narrow our discussion here to methods that assess heterogeneity across pre-defined, discrete subgroups. In particular, we will focus on linear weighting estimators that take a set of weights $\hat{\gamma} \in \ensuremath{\mathbb{R}}^n$, and estimate $\mu_{0g}$ as a weighted average of the control outcomes in the subgroup:
\begin{equation}
\label{eq:mu0g_hat}
\hat{\mu}_{0g} \equiv \frac{1}{n_{1g}}\sum_{G_i = g} \hat{\gamma}_i(1-W_i)Y_i.
\end{equation}
Many estimators take this form; we focus on design-based approaches that do not use outcome information in constructing the estimators \citep{rubin2008objective}. See \citet{hill2011bayesian, Kunzel2019, Carvalho2019, Nie2019, Hahn2020} for discussions of approaches that instead focus on outcome modeling.
\paragraph{Methods based on estimated propensity scores.}
A canonical approach in this setting is Inverse Propensity Weighting (IPW) estimators for $\mu_{0g}$ \citep[see][]{green2014examining}. Traditionally, this proceeds in two steps: first estimate the propensity score $\hat{e}(x, g)$, e.g. via logistic regression; second, estimate $\mu_{0g}$ as in Equation \eqref{eq:mu0g_hat}, with weights $\hat{\gamma}_i = \frac{\hat{e}(X_i, G_i)}{1 - \hat{e}(X_i, G_i)}$:
\begin{equation}
\label{eq:ipw}
\hat{\mu}_{0g} = \frac{1}{n_{1g}}\sum_{W_i = 0, G_i = g} \frac{\hat{e}(X_i, G_i)}{1 - \hat{e}(X_i, G_i)} Y_i
\end{equation}
where these are ``odds of treatment'' weights to target the ATT.
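As a sanity check on this estimator, the following minimal numpy sketch applies the odds-of-treatment weights on synthetic data where the propensity score is known; all names and the data-generating process are illustrative, not the study's actual model.

```python
import numpy as np

def ipw_mu0g(y, w, g, e_hat, group):
    """Odds-of-treatment IPW estimate of mu_{0g} = E[Y(0) | G=g, W=1]:
    weight each control outcome in subgroup g by e/(1-e), then divide
    by the number of treated units in the subgroup."""
    treated = (w == 1) & (g == group)
    control = (w == 0) & (g == group)
    odds = e_hat[control] / (1.0 - e_hat[control])
    return np.sum(odds * y[control]) / treated.sum()

# Synthetic data: one confounder x, two subgroups, known e(x, g) < 1.
rng = np.random.default_rng(1)
n = 200_000
grp = rng.integers(0, 2, size=n)
x = rng.normal(size=n)
e = 1.0 / (1.0 + np.exp(-(x - 1.0 + 0.5 * grp)))  # true propensity
w = rng.binomial(1, e)
y0 = x + rng.normal(size=n)                       # Y(0) depends on x
mu0g_true = y0[(w == 1) & (grp == 1)].mean()      # E[Y(0) | G=1, W=1]
mu0g_hat = ipw_mu0g(y0, w, grp, e, group=1)       # uses controls only
```

With the true propensity score plugged in, the weighted control mean recovers the counterfactual mean for the treated subgroup up to sampling noise; with an estimated score, the quality of the recovery depends on in-sample balance, which motivates the balancing approach below.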
A natural approach to estimating $\hat{e}(X_i, G_i)$, recognizing that $G_i$ is discrete,
is to estimate a logistic model for treatment separately for each group or, equivalently, with full interactions between $G_i$ and (possibly transformed) covariates $\phi(X_i) \in \ensuremath{\mathbb{R}}^p$:
\begin{equation}
\label{eq:ipw_interact}
\text{logit}(e(x, g)) = \alpha_g + \beta_g \cdot \phi(x).
\end{equation}
Due to the high-dimensional nature of the problem, it is often infeasible to estimate Equation \eqref{eq:ipw_interact} without any regularization: the treated and control units might be completely separated, particularly when some groups are small.
Classical propensity score modeling with random effects is one common solution, but can be numerically unstable in settings similar to this \citep{zubizarreta2017optimal}.
Other possible solutions in high dimensions include $L^1$ penalization \citep{Lee2019}, hierarchical Bayesian modeling \citep{Li2013}, and generalized boosted models \citep{mccaffrey2004propensity}.
In addition, \citet{dong2020subgroup} propose a stochastic search algorithm to estimate a similar model when the number of subgroups is large, and \citet{Li2017} and \citet{Yang2020_overlap} propose \emph{overlap weights}, which upweight regions of greater overlap.
Under suitable assumptions and conditions, methods utilizing the estimated propensity score will converge to the true ATT asymptotically.
However, in high dimensional settings with a moderate number of subgroups these methods can often fail to achieve good covariate balance in the sample of interest; as we show in Section \ref{sec:diagnostic}, these methods fail to balance covariates in the UC Berkeley LOR study.
The key issue is that traditional IPW methods focus on estimating the propensity score itself (i.e., the conditional probability of treatment) rather than finding weights that achieve good in-sample covariate balance.
\paragraph{Balancing weights.}
Unlike traditional IPW, balancing weights estimators instead find weights that directly target in-sample balance. One example is the Stable Balancing Weights (SBW) proposal from \citet{Zubizarreta2015}, which finds the minimum variance weights that achieve a user-defined level of covariate balance in $\phi(X_i) \in \ensuremath{\mathbb{R}}^p$:
\begin{equation}
\label{eq:sbw}
\begin{aligned}
\min_{\gamma} \;\;\;\; & \|\gamma\|_2^2\\
\text{subject to} \;\;\;\; & \max_j \left| \frac{1}{n_1} \sum_{W_i = 1}\phi_j(X_i) - \frac{1}{n_1} \sum_{W_i = 0}\gamma_i \phi_j(X_i)\right| \leq \delta,
\end{aligned}
\end{equation}
for weights $\gamma$, typically constrained to the simplex, and for allowable covariate imbalance $\delta$.
These methods have a long history in calibrated survey weighting \citep[see, e.g.][]{Deming1940,Deville1993}, and have recently been extensively studied in the observational study context \citep[e.g.][]{Hainmueller2011, Zubizarreta2015,Athey2018a,Hirshberg2019, hazlett2018kernel}. They have also been shown to estimate the propensity score with a loss function designed to achieve good balance \citep{Zhao2016a,Wang2019,Chattopadhyay2019}.
While balancing weights achieve better balance than the traditional IPW methods above, we must take special care to use them appropriately when estimating subgroup treatment effects. As we will show in Section \ref{sec:diagnostic}, designing balancing weights estimators without explicitly incorporating the subgroup structure also fails to balance covariates within subgroups in the LOR study. We turn to designing such weights in the next section.
\section{Approximate balancing weights for treatment effect variation}
\label{sec:approx_weights}
Now we describe a specialization of balancing weights that minimizes the bias for subgroup treatment effect estimates. This approach incorporates the subgroup structure into the balance measure and optimizes for the ``local balance'' within each subgroup. First we show that the error for the subgroup treatment effect estimate is bounded by the level of local imbalance within the subgroup. Furthermore, the error for estimating the overall ATT depends on both the global balance and the local balance within each subgroup. We then describe a convex optimization problem to minimize the level of imbalance within each subgroup while ensuring exact global balance in the full sample. Next, we connect the procedure to IPW with a hierarchical propensity score model, using the procedure's Lagrangian dual formulation. We conclude by describing how to augment the weighting estimate with an outcome model.
\subsection{Local balance, global balance, and estimation error}
\label{sec:local_balance}
\subsubsection{Subgroup effects}
We initially consider the role of local imbalance in estimating subgroup treatment effects. This is the subgroup-specific specialization of standard results in balancing weights; see \citet{benmichael_balancing_review} for a recent review.
We will compare the estimate $\hat{\mu}_{0g}$ to $\tilde{\mu}_{0g} \equiv \frac{1}{n_{1g}}\sum_{G_i = g}W_i m_0(X_i, g)$, our best approximation to $\mu_{0g}$ if we knew the true prognostic score.
Defining the residual $\varepsilon_i = Y_i - m_0(X_i, G_i)$, the error is
\begin{equation}
\label{eq:error_general}
\hat{\mu}_{0g} - \tilde{\mu}_{0g} = \underbrace{\frac{1}{n_{1g}}\sum_{G_i = g} \hat{\gamma}_i (1-W_i) m_0(X_i, g) - \frac{1}{n_{1g}}\sum_{G_i = g} W_i m_0(X_i, g)}_{\text{bias}_g} + \underbrace{\frac{1}{n_{1g}}\sum_{G_i = g}(1-W_i) \hat{\gamma}_i\varepsilon_i}_\text{noise} .
\end{equation}
\noindent Since the weights $\hat{\gamma}$ are \emph{design-based}, they will be independent of the outcomes, and the noise term will be mean-zero and have variance proportional to the sum of the squared weights $\frac{1}{n_{1g}^2}\sum_{G_i = g}(1-W_i) \hat{\gamma}_i^2$.\footnote{In the general case with heteroskedastic errors, the variance of the noise term is $\frac{1}{n_{1g}^2}\sum_{G_i = g}\hat{\gamma}_i^2 \text{Var}(\varepsilon_i) \leq \max_i \{\text{Var}(\varepsilon_i) \}\frac{1}{n_{1g}^2}\sum_{G_i = g}\hat{\gamma}_i^2$.}
At the same time, the conditional bias term, $\text{bias}_g$, depends on the imbalance in the true prognostic score $m_0(X_i, G_i)$. The idea is to bound this imbalance by the worst-case imbalance in all functions $m$ in a model class $\ensuremath{\mathcal{M}}$.
While the setup is general,\footnote{See \citet{Wang2019} for the case where the prognostic score can only be approximated by a linear function; see \citet{hazlett2018kernel} for a kernel representation and \citet{Hirshberg2019} for a general nonparametric treatment.} we describe the approach assuming that the prognostic score within each subgroup is a linear function of transformed covariates $\phi(X_i) \in \ensuremath{\mathbb{R}}^p$ with $L^2$-bounded coefficients; i.e., $\ensuremath{\mathcal{M}} = \{m_0(x, g) = \eta_g \cdot \phi(x) \mid \|\eta_g\|_2 \leq C\}$. We can then bound the bias by the level of \emph{local imbalance} within the subgroup via the Cauchy-Schwarz inequality:
\begin{equation}
\label{eq:bias_bound}
\left|\text{bias}_g\right| \leq C \underbrace{\left\| \frac{1}{n_{1g}}\sum_{G_i = g} \hat{\gamma}_i (1-W_i)\phi(X_i) - \frac{1}{n_{1g}}\sum_{G_i = g}W_i \phi(X_i)\right\|_2}_{\text{local imbalance}}.
\end{equation}
\noindent Based on Equation \eqref{eq:bias_bound}, we could control local bias solely by controlling local imbalance. This approach would be reasonable if we were solely interested in subgroup impacts. In practice, however, we are also interested in the overall effect, as well as in aggregated subgroup effects.
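The local imbalance term on the right-hand side of Equation \eqref{eq:bias_bound} is straightforward to compute from the data; a minimal numpy sketch (the helper name is our own):

```python
import numpy as np

def local_imbalance(phi, w, g, gamma, group):
    """L2 norm of the local covariate imbalance in one subgroup.

    phi: n x p matrix of transformed covariates phi(X_i);
    gamma: weights on control units (values elsewhere are ignored).
    """
    in_g = g == group
    n1g = np.sum(in_g & (w == 1))
    ctrl = in_g & (w == 0)
    trt = in_g & (w == 1)
    weighted_ctrl = (gamma[ctrl][:, None] * phi[ctrl]).sum(axis=0) / n1g
    treated_mean = phi[trt].sum(axis=0) / n1g
    return np.linalg.norm(weighted_ctrl - treated_mean)
```

When the weighted control mean of $\phi$ matches the treated mean exactly, the returned norm is zero and the bias bound vanishes.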
\subsubsection{Overall treatment effect}
We can estimate aggregated effects by taking a weighted average of the subgroup-specific estimates; e.g., we estimate $\mu_{0}$ as $\hat{\mu}_0 = \sum_{g=1}^K \frac{n_{1g}}{n_1}\hat{\mu}_{0g} = \frac{1}{n_1}\sum_{W_i = 0} \hat{\gamma}_i Y_i$.
The imbalance within each subgroup continues to play a key role in estimating this overall treatment effect, alongside global balance. To see this, we again compare to our best estimate if we knew the prognostic score, $\tilde{\mu}_0 = \frac{1}{n_1}\sum_{g=1}^K n_{1g} \tilde{\mu}_{0g}$, and see that the local imbalance plays a part. The error is
\begin{equation}
\label{eq:error_global}
\begin{aligned}
\hat{\mu}_0 - \tilde{\mu}_0 & = \bar{\eta} \cdot \left(\frac{1}{n_1} \sum_{i=1}^n \hat{\gamma}_i (1-W_i)\phi(X_i) - \frac{1}{n_1}\sum_{i=1}^n W_i \phi(X_i)\right) \;\; + \\
& \quad\quad \frac{1}{n_1}\sum_{g=1}^K n_{1g} \left(\eta_g - \bar{\eta}\right) \cdot \left(\frac{1}{n_{1g}}\sum_{G_i = g} \hat{\gamma}_i(1-W_i) \phi(X_i) - \frac{1}{n_{1g}}\sum_{G_i=g} W_i \phi(X_i)\right) \;\; +\\
& \quad\quad \frac{1}{n_1}\sum_{i=1}^n \hat{\gamma}_i (1-W_i)\varepsilon_i,
\end{aligned}
\end{equation}
where $\bar{\eta} \equiv \frac{1}{K}\sum_{g=1}^K \eta_g$ is the average of the model parameters across all subgroups. Again using Cauchy-Schwarz we see that the overall bias is controlled by the \emph{local imbalance} within each subgroup as well as the \emph{global balance} across subgroups:
\begin{equation}
\label{eq:bias_bound_global}
\begin{aligned}
\left|\text{bias}\right| & \leq \|\bar{\eta}\|_2 \underbrace{\left\|\frac{1}{n_1} \sum_{i=1}^n \hat{\gamma}_i (1-W_i)\phi(X_i) - \frac{1}{n_1}\sum_{i=1}^n W_i \phi(X_i)\right\|_2}_{\text{global balance}} + \\
& \quad\quad \sum_{g=1}^K \frac{n_{1g}}{n_1} \|\eta_g - \bar{\eta}\|_2 \underbrace{\left\|\frac{1}{n_{1g}}\sum_{G_i = g} \hat{\gamma}_i(1-W_i) \phi(X_i) - \frac{1}{n_{1g}}\sum_{G_i=g} W_i \phi(X_i)\right\|_2}_{\text{local imbalance}}.
\end{aligned}
\end{equation}
In general, we will want to achieve \emph{both} good local balance within each subgroup and good global balance across subgroups.
Ignoring local balance can incur bias by ignoring heterogeneity in the outcome model across subgroups, while ignoring global balance leaves potential bias reduction on the table.
Equation \eqref{eq:bias_bound_global} shows that the relative importance of local and global balance for estimating the overall ATT is controlled by the level of similarity in the outcome process across groups. In the extreme case where the outcome process does not vary across groups --- i.e., $\eta_g = \bar{\eta}$ for all $g$ --- then controlling the global balance is sufficient to control the bias. In the other extreme where the outcome model varies substantially across subgroups --- e.g., $\|\eta_g - \bar{\eta}\|_2$ is large for all $g$ --- we will primarily seek to control the local imbalance within each subgroup in order to control the bias for the ATT. Typically, we expect that interaction terms are weaker than ``main effects,'' i.e., $\|\eta_g - \bar{\eta}\|_2 < \|\bar{\eta}\|_2$ \citep[see][]{cox1984interaction, feller2015hierarchical}. As a result, our goal is to find weights that prioritize global balance while still achieving good local balance.
\subsection{Optimizing for both local and global balance}
\label{sec:opt_problem}
We now describe a convex optimization procedure to find weights that optimize for local balance while ensuring exact global balance across the sample. The idea is to stratify across subgroups and find approximate balancing weights within each stratum, while still constraining the overall level of balance.
To do this, we find weights $\hat{\gamma}$ that solve the following optimization problem:
\begin{equation}
\label{eq:primal}
\begin{aligned}
\min_{\gamma} \;\;\;\;\;\;\; & \sum_{g=1}^K \left[\, \left\|\sum_{G_i = g, W_i = 0} \gamma_i \phi(X_i) - \sum_{G_i = g, W_i = 1} \phi(X_i)\right\|_2^2 \;\; + \;\; \frac{\lambda_g}{2}\sum_{G_i=g,W_i=0} \gamma_i^2 \right]\\[1.2em]
\text{subject to } & \sum_{W_i = 0} \gamma_i \phi(X_i) = \sum_{W_i = 1}\phi(X_i)\\[1.2em]
& \sum_{G_i = g, W_i = 0} \gamma_i = n_{1g} \;\;\;\;\; \forall g = 1,\ldots,K\\[1.2em]
& \gamma_i \geq 0 \;\;\;\;\; \forall i=1,\ldots,n
\end{aligned}
\end{equation}
The optimization problem \eqref{eq:primal} has several key components. First, following Equation \eqref{eq:bias_bound} we try to find weights that minimize the local imbalance for each stratum defined by $G$; this is a proxy for the stratum-specific bias.
We also constrain the weights to \emph{exactly balance} the covariates globally over the entire sample.
Equivalently, this finds weights that achieve exact balance marginally on the covariates $\phi(X_i)$ and only approximate balance for the interaction terms $\phi(X_i) \times \mathbbm{1}_{G_i}$, placing greater priority on main effects than interaction terms.
Taken together, this ensures that we are minimizing the overall bias as well as the bias within each stratum.
In principle, weights that exactly balance the covariates within each stratum would also yield exact balance globally. Typically, however, the sample sizes are too small to achieve exact balance within each stratum, and so this combined approach at least guarantees global balance.\footnote{This constraint induces a dependence across the strata, so that the optimization problem does not decompose into $K$ sub-problems.}
From Equation \eqref{eq:bias_bound_global}, we can see that if there is a limited amount of heterogeneity in the baseline outcome process across groups, the global exact balance constraint will limit the estimation error when estimating the ATT, even if local balance is relatively poor.
In principle, incorporating the global balance constraint could lead to worse local balance. However, we show in both the simulations in Section \ref{sec:sims} and the analysis of the LOR pilot study in Section \ref{sec:results} that the global constraint leads to negligible changes in the level of local balance and the performance of the subgroup estimators, but can lead to large improvements in the global balance and the performance of the overall estimate.
Thus, there seems to be little downside in terms of subgroup estimates from an approach that controls both local and global imbalance --- but large gains for overall estimates.
Note that while we choose to enforce exact global balance, we could also limit to \emph{approximate} global balance, with the relative importance of local and global balance controlled by an additional hyperparameter set by the analyst.
Second, we include an $L^2$ regularization term that penalizes the sum of the squared weights in the stratum; from Equation \eqref{eq:error_general}, we see that this is a proxy for the variance of the weighting estimator. For each stratum, the optimization problem includes a hyper-parameter $\lambda_g$ that negotiates the bias-variance tradeoff within that stratum.
When $\lambda_g$ is small, the optimization prioritizes minimizing the bias through the local imbalance, and when $\lambda_g$ is large it prioritizes minimizing the variance through the sum of the squared weights. As a heuristic, we
limit the number of hyperparameters by choosing
$\lambda_g = \frac{\lambda}{n_g}$ for a common choice of $\lambda$. For larger strata where better balance is possible, this heuristic will prioritize balance --- and thus bias --- over variance; for smaller strata, by contrast, this will prioritize lower variance. We discuss selecting $\lambda$ in the letters of recommendation study in Section \ref{sec:diagnostic}.
Next, we incorporate two additional constraints on the weights.
We include a fine balance constraint \citep{Rosenbaum2007_finebalance}: within each stratum the weights sum up to the number of treated units in that stratum, $n_{1g}$.
Since each stratum maps to only one subgroup, this guarantees that the weights sum to the number of treated units in each subgroup.
We also restrict the weights to be non-negative, which stops the estimates from extrapolating outside of the support of the control units \citep{king2006dangers}. Together, these induce several stability properties, including that the estimates are sample bounded.
In addition, we could extend the optimization problem in Equation \eqref{eq:primal} to balance intermediate levels between global balance and local balance.
Incorporating additional balance constraints for each intermediate level is unwieldy in practice due to the proliferation of hyperparameters.
Instead, we can expand $\phi(x)$ to include additional interaction terms between covariates and levels of the hierarchy. We discuss this choice in the letters of recommendation study in Section \ref{sec:results}.
Finally, we compute the variance of our estimator conditioned on the design $(X_1,G_1,W_1),\ldots,$ $(X_n,G_n,W_n)$ or, equivalently, conditioned on the weights. The conditional variance is
\begin{equation}
\label{eq:var_mu0g}
\text{Var}(\hat{\mu}_{0g} \mid \hat{\gamma}) = \frac{1}{n_{1g}^2}\sum_{G_i = g} (1 - W_i)\hat{\gamma}_i^2 \text{Var}(Y_i).
\end{equation}
Using the $i$\textsuperscript{th} residual to estimate $\text{Var}(Y_i)$ yields the empirical sandwich estimator for the treatment effect
\begin{equation}
\label{eq:sandwich}
\widehat{\text{Var}}(\hat{\mu}_{1g} - \hat{\mu}_{0g} \mid \hat{\gamma}) = \frac{1}{n_{1g}^2} \sum_{G_i = g} W_i (Y_i - \hat{\mu}_{1g})^2 + \frac{1}{n_{1g}^2}\sum_{G_i = g}(1-W_i) \hat{\gamma}_i^2 (Y_i - \hat{\mu}_{0g})^2,
\end{equation}
where, as above, $\hat{\mu}_{1g}$ is the average outcome for applicants in subgroup $g$ who submit an LOR.
This is the fixed-design Huber-White heteroskedastic robust standard error for the weighted average. See \citet{Hirshberg2019} for discussion on asymptotic normality and semi-parametric efficiency for estimators of this form.
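A direct implementation of the variance estimator in Equation \eqref{eq:sandwich} is short; the following sketch (our own helper name) computes it from the weights and outcomes:

```python
import numpy as np

def sandwich_variance(y, w, g, gamma, group):
    """Fixed-design heteroskedasticity-robust variance of the subgroup
    effect estimate, following Equation (eq:sandwich)."""
    in_g = g == group
    trt = in_g & (w == 1)
    ctrl = in_g & (w == 0)
    n1g = trt.sum()
    mu1 = y[trt].mean()                                  # treated mean in subgroup
    mu0 = np.sum(gamma[ctrl] * y[ctrl]) / n1g            # weighted control mean
    v_trt = np.sum((y[trt] - mu1) ** 2) / n1g ** 2
    v_ctrl = np.sum(gamma[ctrl] ** 2 * (y[ctrl] - mu0) ** 2) / n1g ** 2
    return v_trt + v_ctrl
```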
\subsection{Dual relation to partially pooled propensity score estimation}
\label{sec:dual}
Thus far, we have motivated the approximate balancing weights approach by appealing to the connection between local bias and local balance. We now draw on recent connections between approximate balancing weights and (calibrated) propensity score estimation through the Lagrangian dual problem. The weights that solve optimization problem \eqref{eq:primal} correspond
to estimating the inverse propensity weights with a (truncated) linear odds function with the stratum $G$ interacted with the covariates $\phi(X)$,\footnote{The truncation arises from constraining weights to be non-negative, and the linear odds form arises from penalizing the $L^2$ norm of the weights. We can consider other penalties that will lead to different forms. See \citet{benmichael_balancing_review} for a review of the different choices.}
\begin{equation}
\label{eq:linear_odds}
\frac{P(W = 1 \mid X = x, G = g)}{1 - P(W = 1 \mid X = x, G = g)} = \left[\alpha_g + \beta_g \cdot \phi(x)\right]_+,
\end{equation}
where the coefficients $\beta_g$ are \emph{partially pooled} towards a global model.
To show this, we first derive the Lagrangian dual. For each stratum $g$, the sum-to-$n_{1g}$ constraint induces a dual variable $\alpha_g \in \ensuremath{\mathbb{R}}$, and the local balance measure induces a dual variable $\beta_g \in \ensuremath{\mathbb{R}}^p$.
These dual variables are part of the \emph{balancing loss function} for stratum $g$:
\begin{equation}
\label{eq:dual_loss}
\ensuremath{\mathcal{L}}_g(\alpha_g, \beta_g) \equiv \sum_{W_i = 0, G_i = g} \left[\alpha_g + \beta_g \cdot \phi(X_i)\right]_+^2 - \sum_{W_i = 1, G_i = g} \left(\alpha_g + \beta_g\cdot \phi(X_i)\right),
\end{equation}
where $[x]_+ = \max\{0, x\}$. With this definition we can now state the Lagrangian dual.
\begin{proposition}
\label{prop:dual}
With $\lambda_g > 0$, if a feasible solution to \eqref{eq:primal} exists, the Lagrangian dual is
\begin{equation}
\label{eq:dual}
\min_{\alpha, \beta_1,\ldots,\beta_K, \mu_\beta} \sum_{g = 1}^K\underbrace{\ensuremath{\mathcal{L}}_g(\alpha_g, \beta_g)}_{\text{balancing loss}} \;\;+\;\; \underbrace{\sum_{g=1}^K \frac{\lambda_g}{2}\|\beta_g - \mu_\beta\|_2^2}_{\text{shrinkage to global variable}}.
\end{equation}
If $\hat{\alpha}, \hat{\beta}_1,\ldots,\hat{\beta}_K$ are the solutions to the dual problem, then the solution to the primal problem \eqref{eq:primal} is
\begin{equation}
\label{eq:primal_sol}
\hat{\gamma}_i = \left[\hat{\alpha}_{G_i} + \hat{\beta}_{G_i} \cdot \phi(X_i)\right]_+ .
\end{equation}
\end{proposition}
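To make the dual concrete, the following is an illustrative pure-numpy solver (our own sketch, not the implementation used in the study): it runs coordinate-wise gradient descent on a dual objective of the form in Equation \eqref{eq:dual}, minimizes exactly over the global variable $\mu_\beta$ at each pass, and recovers the weights via the truncated linear form of Equation \eqref{eq:primal_sol}. Constants, step size, and iteration count are heuristic.

```python
import numpy as np

def fit_partially_pooled_weights(phi, w, g, lam, lr=5e-3, iters=4000):
    """Gradient-descent sketch of the partially pooled dual problem.

    phi: n x p transformed covariates; w: treatment indicators; g: stratum
    labels; lam[k]: shrinkage hyper-parameter for the k-th (sorted) stratum.
    Returns nonnegative weights gamma_i = [alpha_{G_i} + beta_{G_i}.phi(X_i)]_+.
    """
    groups = np.unique(g)
    K, p = len(groups), phi.shape[1]
    alpha, beta, mu = np.zeros(K), np.zeros((K, p)), np.zeros(p)
    for _ in range(iters):
        for k, grp in enumerate(groups):
            ctrl = (g == grp) & (w == 0)
            trt = (g == grp) & (w == 1)
            gam = np.maximum(alpha[k] + phi[ctrl] @ beta[k], 0.0)  # [.]_+
            # gradients of the balancing loss: sum_ctrl [.]_+^2 - sum_trt (.)
            grad_a = 2.0 * gam.sum() - trt.sum()
            grad_b = 2.0 * phi[ctrl].T @ gam - phi[trt].sum(axis=0)
            grad_b += lam[k] * (beta[k] - mu)      # shrinkage toward mu_beta
            alpha[k] -= lr * grad_a
            beta[k] -= lr * grad_b
        mu = (lam[:, None] * beta).sum(axis=0) / lam.sum()  # exact minimizer in mu
    idx = np.searchsorted(groups, g)
    return np.maximum(alpha[idx] + np.einsum('ij,ij->i', phi, beta[idx]), 0.0)
```

Setting `lam` very large pushes all $\beta_g$ toward the common $\mu_\beta$ (full pooling), while small `lam` lets each stratum fit its own balancing coefficients, mirroring the discussion of partial pooling below.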
The Lagrangian dual formulation sheds additional light on the approximate balancing weights estimator. First, we apply results on the connection between approximate balancing weights and propensity score estimation
\citep[e.g.,][]{Zhao2016a, Wang2019, Hirshberg2019_amle, Chattopadhyay2019}.
We see that this approach estimates propensity scores of the form \eqref{eq:linear_odds}, which corresponds to a fully interacted propensity score model where the coefficients on observed covariates vary across strata.
Recall that we find \emph{approximate} balancing weights for each stratum because the number of units per stratum might be relatively small; therefore we should not expect to be able to estimate this fully interacted propensity score well.
The dual problem in Equation \eqref{eq:dual} also includes a global dual variable $\mu_\beta$ induced by the global balance constraint in the primal problem \eqref{eq:primal}.
Because we enforce \emph{exact} global balance, this global model is not regularized.
However, by penalizing the deviations between the stratum-specific variables and the global variables via the $L^2$ norm, $\|\beta_g - \mu_\beta\|_2^2$, the dual problem \emph{partially pools} the stratum-specific parameters towards a global model.
Thus, we see that the approximate balancing weights problem in Equation \eqref{eq:primal} corresponds to
a hierarchical propensity score model \citep[see, e.g.][]{Li2013},
as in Section \ref{sec:related}, fit with a loss function designed to induce covariate balance.
Excluding the global constraint removes the global dual variable $\mu_\beta$, and the dual problem shrinks the stratum-specific variables $\beta_g$ towards zero without any pooling. In contrast, ignoring the local balance measure by setting $\lambda_g \to \infty$ constrains the stratum-specific variables $\beta_g$ to all be \emph{equal} to the global variable $\mu_\beta$, resulting in a fully pooled estimator.
For intermediate values,
$\lambda_g$
controls the level of partial pooling. When $\lambda_g$ is large the dual parameters are heavily pooled towards the global model, and when $\lambda_g$ is small the level of pooling is reduced. By setting $\lambda_g = \frac{\lambda}{n_g}$ as above, larger strata will be pooled less than smaller strata.\footnote{It is also possible to have covariate-specific shrinkage by measuring imbalance in the primal problem \eqref{eq:primal} with a \emph{weighted} $L^2$ norm, leading to an additional $p$ hyper-parameters. We leave exploring this extension and hyper-parameter selection methods to future work.}
\subsection{Augmentation with an outcome estimator}
\label{sec:augment}
The balancing weights we obtain via the methods above may not achieve perfect balance, leaving the potential for bias.
We can augment the balancing weights estimator with an outcome model, following similar proposals in a variety of settings \citep[see, e.g.][]{Athey2018a, Hirshberg2019_amle,benmichael2019_augsynth}. Analogous to bias correction for matching \citep{rubin1973bias} or model-assisted estimation in survey sampling \citep{sarndal2003model}, the essential idea is to adjust the weighting estimator using an estimate of the bias.
Specifically, we can estimate the prognostic score $m_0(x, g)$ with a working model $\hat{m}_0(x, g)$, e.g., with a flexible regression model. An estimate of the bias in group $g$ is then:
\begin{equation}
\label{eq:bias_est}
\widehat{\text{bias}}_g = \frac{1}{n_{1g}}\sum_{W_i = 1, G_i = g} \hat{m}_0(X_i, g) - \frac{1}{n_{1g}} \sum_{W_i = 0, G_i = g} \hat{\gamma}_i \hat{m}_0(X_i, g).
\end{equation}
This is the bias due to imbalance in estimated prognostic score in group $g$ \emph{after} weighting.
With this estimate of the bias, we can explicitly bias-correct our weighting estimator, estimating $\mu_{0g}$ as
\begin{equation}
\label{eq:mu0g_hat_aug}
\begin{aligned}
\hat{\mu}_{0g}^{\text{aug}} &\equiv \hat{\mu}_{0g} \;\;+\;\; \widehat{\text{bias}}_g \\
&= \frac{1}{n_{1g}}\sum_{W_i = 0, G_i = g} \hat{\gamma}_iY_i \;\;+\;\; \left[\frac{1}{n_{1g}}\sum_{W_i = 1, G_i = g} \hat{m}_0(X_i, g) - \frac{1}{n_{1g}} \sum_{W_i = 0, G_i = g} \hat{\gamma}_i \hat{m}_0(X_i, g)\right].
\end{aligned}
\end{equation}
\noindent Thus, if the balancing weights fail to achieve good covariate balance in a given subgroup, the working outcome model, $ \hat{m}_0(X_i, g)$, can further adjust for any differences. See \citet{benmichael_balancing_review} for further discussion.
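The bias correction in Equation \eqref{eq:mu0g_hat_aug} combines the weighted estimate with the estimated bias; a minimal sketch (our own helper, taking the fitted prognostic-score predictions as an input):

```python
import numpy as np

def augmented_mu0g(y, w, g, gamma, m0_hat, group):
    """Bias-corrected (augmented) estimate of mu_{0g}.

    m0_hat: working-model predictions m0_hat(X_i, G_i) for every unit,
    from any flexible outcome regression fit on the controls.
    """
    in_g = g == group
    trt = in_g & (w == 1)
    ctrl = in_g & (w == 0)
    n1g = trt.sum()
    mu0_weighted = np.sum(gamma[ctrl] * y[ctrl]) / n1g
    bias_hat = (m0_hat[trt].sum() / n1g
                - np.sum(gamma[ctrl] * m0_hat[ctrl]) / n1g)
    return mu0_weighted + bias_hat
```

When the weights already balance the fitted prognostic score within the subgroup, the correction term is zero and the estimate reduces to the pure weighting estimator.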
\section{Simulation study}
\label{sec:sims}
Before estimating the differential impacts of letters of recommendation, we first present simulations assessing the performance of our proposed approach versus traditional inverse propensity score weights fit via regularized logistic regression as well as outcome modelling with machine learning approaches.
To better reflect real-world data, we generate correlated covariates and include binary and skewed covariates.
For each simulation run,
with $d = 50$ covariates,
we begin with a diagonal covariance matrix $\Sigma$ where $\Sigma_{jj} = \frac{(d - j + 1)^5}{d^5}$ and sample a random orthogonal $d\times d$ matrix $Q$ to create a new covariance matrix $\tilde{\Sigma} = Q\Sigma Q^\top$ with substantial correlation. For $n = 10,000$ units, we draw covariates from a multivariate normal distribution $X_{i} \overset{iid}{\sim} N(0, \tilde{\Sigma})$.
We then transform some of these covariates. For $j = 1, 11, 21, 31, 41$ we dichotomize the variable and define $\tilde{X}_{ij} = \ensuremath{\mathbbm{1}}\{X_{ij} \geq q_{.8}(X_{\cdot j})\}$, where $q_{.8}(X_{\cdot j})$ is the 80th percentile of $X_{\cdot j}$ among the $n$ units.
For $j = 2, 7, 12, \ldots, 47$ we create a skewed covariate $\tilde{X}_{ij} = \exp(X_{ij})$.
To match our study, we create discrete subgroup indicators from the continuous variable $X_{id}$. To do this, we create a grid over $X_{id}$ with grid size $\frac{n}{G}$, and sample $G - 1$ points from this grid. We then create subgroup indicators $G_i$
by binning $X_{id}$ according to the $G-1$ points.
We consider $G \in \{10, 50\}$ groups.
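Assuming numpy, and simplifying the random grid to equal-sized quantile bins, the covariate-generating process can be sketched as follows (0-indexed column choices; the rotation is applied on both sides so the result is a valid covariance matrix):

```python
import numpy as np

def simulate_covariates(n=1000, d=50, n_groups=10, seed=0):
    """Sketch of the simulation's covariate draw: correlated normals,
    then dichotomized and skewed transformations, plus subgroup labels
    from binning the last underlying covariate."""
    rng = np.random.default_rng(seed)
    sigma = np.diag(((d - np.arange(d)) / d) ** 5)   # Sigma_jj = ((d - j + 1)/d)^5
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))     # random orthogonal matrix
    cov = q @ sigma @ q.T                            # rotate to induce correlation
    cov = (cov + cov.T) / 2                          # symmetrize against round-off
    x = rng.multivariate_normal(np.zeros(d), cov, size=n, method="cholesky")
    xt = x.copy()
    for j in (0, 10, 20, 30, 40):                    # dichotomize at the 80th percentile
        xt[:, j] = (x[:, j] >= np.quantile(x[:, j], 0.8)).astype(float)
    for j in range(1, d - 1, 5):                     # skewed covariates exp(X_j)
        xt[:, j] = np.exp(x[:, j])
    # equal-sized quantile bins of the last underlying covariate define subgroups
    cuts = np.quantile(x[:, d - 1], np.linspace(0, 1, n_groups + 1))[1:-1]
    groups = np.searchsorted(cuts, x[:, d - 1])      # labels 0..n_groups-1
    return xt, groups
```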
With these covariates, we generate treatment assignment and outcomes. We use a separate logistic propensity score model for each group following Equation \eqref{eq:ipw_interact},\footnote{The logistic specification differs from the truncated linear odds in Equation \eqref{eq:linear_odds}. If the transformed covariates $\phi(X_i)$ include a flexible basis expansion, the particular form of the link function will be less important.}
\begin{equation}
\label{eq:sim_pscore}
\text{logit} \;e(X_i, G_i) = \alpha_{G_i} + (\mu_\beta + U_{G_i}^\beta \odot B_{G_i}^\beta)\cdot X_i,
\end{equation}
and also use a separate linear outcome model for each group,
\begin{equation}
\label{eq:sim_outcome}
Y_i(0) = \eta_{0G_i} + (\mu_\eta + U_{G_i}^\eta \odot B_{G_i}^\eta) \cdot X_i + \varepsilon_i,
\end{equation}
where $\varepsilon_i \sim N(0,1)$ and $\odot$ denotes element-wise multiplication.
We draw the fixed effects and varying slopes for each group according to a hierarchical model with sparsity. We draw the fixed effects as $\alpha_g \overset{\text{iid}}{\sim} N(0,1)$ and $\eta_{0g} \overset{\text{iid}}{\sim} N(0,1)$. For the slopes, we first start with a mean slope vector $\mu_\beta, \mu_\eta \in \{-\frac{3}{\sqrt{d}},\frac{3}{\sqrt{d}}\}^d$, where each element is chosen independently with uniform probability. Then we draw isotropic multivariate normal random variables $U_g^\beta, U_g^\eta \overset{\text{iid}}{\sim} MVN(0, I_d)$. Finally, we draw a set of $d$ binary variables $B_{gj}^\beta, B_{gj}^\eta$ that are Bernoulli with probability $p = 0.25$. The slope is then constructed as a set of sparse deviations from the mean vector, which is $\mu_\beta + U_g^\beta \odot B_g^\beta$ for the propensity score and $\mu_\eta + U_g^\eta \odot B_g^\eta$ for the outcome model.
To incorporate the possibility that treatment effects vary with additional covariates that are not the focus of our analysis, we generate the treatment effect for unit $i$ as $\tau_i = X_{id} - X_{i3} + 0.3 X_{id} X_{i3}$ and set the treated potential outcome as $Y_i(1) = Y_i(0) + \tau_{i}$. Note that the effect varies with the underlying continuous variable $X_{id}$ that we use to form groups, as well as the additional variable $X_{i3}$.
The true ATT for group $g$ in simulation $j$ is thus $\tau_{gj} = \frac{1}{n_{1g}}\sum_{G_i = g} W_i (Y_i(1) - Y_i(0))$, and the overall ATT is $\tau_j = \frac{1}{n_1}\sum_{i=1}^n W_i (Y_i(1) - Y_i(0))$.
\begin{figure}[tb]
\centering \includegraphics[width=\maxwidth]{sims/figure/main_sim_results-1.png}
\caption{Performance of approximate balancing weights, traditional IPW with logistic regression, and outcome modelling for estimating subgroup treatment effects.}
\label{fig:main_sim_results}
\end{figure}
For $j=1,\ldots,m$ with $m = 500$ Monte Carlo samples, we estimate the treatment effects for group $g$, $\hat{\tau}_{gj}$, and the overall ATT, $\hat{\tau}_j$, and compute a variety of metrics. Following the metrics studied by \citet{dong2020subgroup}, for subgroup treatment effects we compute (a) the mean absolute bias across the $G$ subgroup effects, $\frac{1}{G}\sum_{g=1}^G \left|\frac{1}{m}\sum_{j=1}^m \left(\hat{\tau}_{gj} -\tau_{gj}\right) \right|$,
and (b) the root mean square error $\sqrt{\frac{1}{mG}\sum_{j=1}^m\sum_{g=1}^G (\hat{\tau}_{gj} - \tau_{gj})^2}$. For the overall ATT we measure (a) the absolute bias $\left|\frac{1}{m}\sum_{j=1}^m (\hat{\tau}_j - \tau_j)\right|$ and (b) the root mean square error $\sqrt{\frac{1}{m}\sum_{j=1}^m (\hat{\tau}_j - \tau_j)^2}$.
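One way to compute these Monte Carlo metrics for the subgroup effects, given an $m \times G$ array of estimates and the matching array of true effects (hypothetical helper, our own naming):

```python
import numpy as np

def subgroup_metrics(tau_hat, tau):
    """Mean absolute bias and RMSE over m simulations x G subgroups.

    tau_hat, tau: arrays of shape (m, G) with estimated and true effects.
    """
    diff = tau_hat - tau
    bias = np.abs(diff.mean(axis=0)).mean()   # average over sims, |.|, then over groups
    rmse = np.sqrt((diff ** 2).mean())
    return bias, rmse
```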
Here we focus on five estimators:
\begin{itemize}
\item \emph{Full interaction IPW:} traditional IPW with a fully interacted, ridge-penalized model that estimates a separate propensity score within each stratum as in Equation \eqref{eq:ipw_interact}.
\item \emph{Fully pooled balancing weights:} approximate balancing weights that solve Equation \eqref{eq:primal}, but ignore local balance by setting $\lambda \to \infty$, thus fully pooling towards the global model. This is equivalent to stable balancing weights in Equation \eqref{eq:sbw} with an exact global balance constraint $\delta = 0$.
\item \emph{Partially pooled balancing weights:} approximate balancing weights that solve Equation \eqref{eq:primal}, using $G$ as the stratifying variable and prioritizing local balance by setting $\lambda_g = \frac{1}{n_{1g}}$.
\item \emph{Augmented balancing weights:} augmenting the partially pooled balancing weights as in Equation \eqref{eq:mu0g_hat_aug} where $\hat{m}_0(x, g)$ is fit via ridge regression with all interactions.
\item \emph{Bayesian Causal Forests (BCF):} estimating $\tau_g$ as $\frac{1}{n_{1g}} \sum_{G_i = g} W_i \hat{\tau}_i$, where $\hat{\tau}_i$ are the posterior predictive means from a Bayesian Causal Forest estimator \citep{Hahn2020}.
\end{itemize}
In Appendix \ref{sec:sim_appendix} we also consider various other estimators.
For the fully interacted specification of the logistic regression in IPW and the ridge regression in the augmented balancing weights, we include a set of global parameters $\mu_\beta$ so that the slope for group $g$ is $\mu_\beta + \Delta_g$, with a squared $L^2$ penalty for each component.
These are correctly specified for the models above.
We estimate the models with \texttt{glmnet} \citep{Friedman2010} with the hyperparameter chosen through 5-fold cross validation.
Figure \ref{fig:main_sim_results} shows the results for the overall ATT and for subgroup effects.
In most cases, the partially pooled approximate balancing weights have much lower bias and RMSE than the logistic regression-based IPW estimator; however, the RMSEs are comparable with 50 subgroups.
Furthermore, prioritizing local balance with partial pooling yields lower bias and RMSE than ignoring local balance entirely with the fully pooled approach.
In Appendix Figure \ref{fig:plot_all_weights}, we also consider excluding the global balance constraint, and find that the constraint yields much lower bias for the ATT in some settings, with relatively little cost to the subgroup estimates.
The partially-pooled balancing weights estimator, which is transparent and design-based,
also performs nearly as well as the black-box BCF method.
Augmenting the partially-pooled weights provides a small improvement in bias, indicating that the weights alone are able to achieve good balance in these simulations, and a larger improvement in RMSE in the setting with many groups, where the weighting estimator alone has higher variance.
In Appendix Figure \ref{fig:plot_all_weights} we also compare to ridge regression, finding similarly comparable performance.
In addition, Appendix Figure \ref{fig:sim_coverage} shows the coverage of 95\% intervals for the different approaches.
We see that the weighting estimator, both with and without augmentation, has reasonable uncertainty quantification, with much better coverage than either of the two model-based approaches.
\section{Differential impacts of letters of recommendation}
\label{sec:results}
We now turn to estimating the differential impacts of letters of recommendation on
admissions decisions.
We focus on the eight subgroups defined in Table \ref{tab:subgroup_counts}, based on the interaction between URM status (2 levels) and admissibility index (4 levels).
Due to the selection mechanism described in Section \ref{sec:lor}, however, it is useful to create even more fine-grained strata and then aggregate to these eight subgroups.
Specifically, we define $G = 41$ fine-grained strata based on URM status, AI grouping, first reader score, and college applied to.\footnote{Of the 48 possible strata, we drop 7 strata where no applicants submitted a letter of recommendation. These are non-URM applicants, in both colleges and the two lowest AI strata, for whom the first reader assigned a ``Yes'' or ``No''. This accounts for $\sim 2\%$ of applicants. The remaining 41 strata have a wide range of sizes with a few very large strata. Min: 15, p25: 195, median: 987, p75: 1038, max: 8000.}
While we are not necessarily interested in treatment effect heterogeneity across all 41 strata, this allows us to exactly match on key covariates and then aggregate to obtain the primary subgroup effects.
Another key component in the analysis is the choice of transformation of the covariates $\phi(\cdot)$.
Because we have divided the applicants into many highly informative strata, we choose $\phi(\cdot)$ to include all of the raw covariates. Additionally, because of the importance of the admissibility index, we also include a natural cubic spline
for AI with knots at the sample quantiles. Finally, we include the output of the admissions model and a binary indicator for whether the predicted probability of a ``Possible'' score is greater than 50\%.
We further prioritize local balance in the admissibility index by
including in $\phi(x)$ the interaction between the AI, URM status, and an indicator for admissibility subgroup.
As we discuss above, this ensures local balance in the admissibility index at an intermediate level of the hierarchy between global balance and local balance. Finally, we standardize each component of $\phi(X)$ to have mean zero and variance one.
If desired, we could also consider other transformations such as a higher-order polynomial transformation, using a series of basis functions for all covariates, or computing inner products via the kernel trick to allow for an infinite dimensional basis \citep[see, e.g.][]{hazlett2018kernel,Wang2019,Hirshberg2019_amle}.
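To make this construction concrete, the sketch below assembles a simplified version of $\phi(X)$ and standardizes it. The inputs and the helper name are hypothetical, and the spline terms and admissions-model outputs described above are omitted:

```python
import numpy as np

def build_phi(raw, ai, urm, subgroup_onehot):
    # Simplified phi(X): raw covariates, the admissibility index (AI),
    # and the AI x URM x admissibility-subgroup interaction.
    interactions = ai[:, None] * urm[:, None] * subgroup_onehot
    phi = np.column_stack([raw, ai, interactions])
    # Standardize each component to mean zero and variance one.
    sd = phi.std(axis=0)
    sd[sd == 0] = 1.0  # leave constant columns at zero after centering
    return (phi - phi.mean(axis=0)) / sd
```

This block only illustrates the covariate transformation; in the analysis, the strata also enter the balancing problem directly.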
\subsection{Diagnostics: local balance checks and assessing overlap}
\label{sec:diagnostic}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.5\textwidth}
{\centering \includegraphics[width=0.8\textwidth]{figure/lambda_plot.pdf}
}
\caption{Imbalance vs effective sample size.\\$\lambda = 1,10^4,10^8$ noted.}
\label{fig:lambda_plot}
\end{subfigure}%
~
\begin{subfigure}[t]{0.5\textwidth}
{\centering \includegraphics[width=0.8\textwidth]{figure/eff_samp_size_trt.pdf}
}
\caption{Effective sample sizes, area proportional to number of treated units}
\label{fig:eff_sample_size}
\end{subfigure}
\caption{(a) Imbalance measured as the square root of the objective in \eqref{eq:primal} plotted against the effective sample size of the overall control group. (b) Effective sample size of the control group for each subgroup, with weights solving the approximate balancing weights problem \eqref{eq:primal} with $\lambda_g = \frac{10^4}{n_g}$.}
\label{fig:lambda_ess_plot}
\end{figure}
In order to estimate effects, we must first choose
values of the common hyperparameter $\lambda$ in the optimization problem \eqref{eq:primal}, where we set $\lambda_g = \frac{\lambda}{n_g}$.
Recall that this hyperparameter negotiates the bias-variance tradeoff: small values of $\lambda$ prioritize reducing bias by improving local balance, while larger values prioritize reducing variance by increasing the effective sample size. Figure \ref{fig:lambda_plot} shows this tradeoff. We plot the square root of the local balance measure in \eqref{eq:primal} against the \emph{effective sample size} for the re-weighted control group,
$n_1 \big/ \left(\sum_{W_i = 0}\hat{\gamma}_i^2\right)$.
Between $\lambda = 10^0$ and $10^4$, we see that the imbalance is relatively flat while the overall effective sample size increases, after which the imbalance increases quickly with $\lambda$. We therefore select $\lambda = 10^4$ for the results we present.
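The effective-sample-size diagnostic is straightforward to compute; a minimal sketch with hypothetical weights (the normalization of the weights is an assumption of this sketch):

```python
import numpy as np

def effective_sample_size(gamma, n1):
    # Effective sample size of the re-weighted control group,
    # n_1 / sum_i gamma_i^2, for control weights gamma.
    gamma = np.asarray(gamma, dtype=float)
    return n1 / np.sum(gamma ** 2)

# More dispersed weights give a larger effective sample size than
# weights concentrated on a handful of control units.
n1 = 100
uniform = np.full(400, 1 / 400)                           # equal weights
skewed = np.r_[np.full(4, 0.2), np.full(396, 0.2 / 396)]  # a few dominate
```

Tracing this quantity against the imbalance objective over a grid of $\lambda$ values reproduces the tradeoff curve in Figure \ref{fig:lambda_plot}.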
Figure \ref{fig:eff_sample_size} shows the effective control group sample size for each of the primary URM and AI subgroups, scaled by the number of applicants in the group submitting LORs.
Across the board, the URM subgroups have larger effective sample sizes than the non-URM subgroups, with particularly stark differences for the lower AI subgroups.
Furthermore, for all non-URM subgroups the effective sample size is less than 250.
Comparing to the sample sizes in Table \ref{tab:subgroup_counts}, we see that the weighting approach leads to a large design effect: many applicants who did not submit LORs are not comparable to those who did. However, lower admissibility non-URM applicants also submitted letters at lower rates.
This design effect, combined with the smaller percentage of non-URM applicants submitting LORs, means that we should expect to have greater precision in the estimates for URM applicants than non-URM applicants.
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=0.8\textwidth]{figure/love_plot_box_marginal_less.pdf}
}
\caption{Overall and by URM status and AI.}
\label{fig:love_plot_marginal}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=0.8\textwidth]{figure/love_plot_box_interact_less}
}
\caption{By URM status interacted with AI.}
\label{fig:love_plot_interact}
\end{subfigure}
\caption{The distribution of imbalance in each component of $\phi(X)$
after weighting with both the partially- and fully-pooled balancing weights estimators, as well as the fully interacted IPW estimator. }
\label{fig:love_plot_main}
\end{figure}
We now assess the level of local balance within each subgroup, following the discussion in Section \ref{sec:local_balance}.
We focus on three of the estimators described in Section \ref{sec:sims}: fully- and partially-pooled balancing weights and the full interaction IPW estimator.
Figure \ref{fig:love_plot_main} shows the distribution of the imbalance in each of the 51 (standardized) components of $\phi(X)$.
The fully interacted IPW approach has very poor balance overall, due in part to the difficulty of estimating the high-dimensional propensity score model.
As expected, both the fully- and partially-pooled balancing weights achieve perfect balance overall;
however, only the partially pooled balancing weights achieve excellent local balance.
Appendix Figure \ref{fig:love_plot_box} shows these same metrics for the no-pooled balancing weights and fixed effects IPW estimators we discuss in Appendix \ref{sec:sim_appendix}, as well as subgroup overlap weights \citep{Yang2020_overlap}.
The partially- and no-pooled approaches have similar global and local balance overall, but the partially-pooled approach sacrifices a small amount of local balance for an improvement in global balance.
In contrast, both the fixed effects IPW and overlap weights approaches yield poor local balance.
\begin{figure}[tbp]
\centering
{\centering \includegraphics[width=0.95\textwidth]{figure/weight_hist.pdf}}
\caption{Weights on control units from solving the approximate balancing weights problem \eqref{eq:primal}. \emph{Not pictured:} the 66\% of control units that receive zero weight.}
\label{fig:weight_hist}
\end{figure}
Finally, we assess overlap within each subgroup.
A key benefit of weighting approaches is that overlap issues manifest in the distribution of our weights $\hat{\gamma}$.
Figure \ref{fig:weight_hist} plots the distribution of the weights over the comparison applicants by URM status and AI group, normalized by the number of treated applicants in the subgroup.
The vast majority of control units receive zero weight and are excluded from the figure. Of the 28,556 applicants who did not submit an LOR, only 9,834 (34\%) receive a weight larger than 0.001. This indicates a lack of ``left-sided'' overlap: many applicants who did not submit a letter of recommendation had nearly zero odds of doing so in the pilot program.
This is problematic for estimating the overall average treatment effect, but is less of a concern when we focus on estimating the average treatment effect on the treated.
For each AI subgroup we also see that the distribution of weights is skewed more positively for the non-URM applicants. In particular, for the lower AI, non-URM subgroups we see a non-trivial number of comparison applicants that ``count for'' over 1\% of the re-weighted sample, with a handful of outliers that count for more than 2\%.
While large weights do not necessarily affect the validity of the estimator, they decrease the effective sample size, reducing the precision of our final estimates, as we see in Figure \ref{fig:eff_sample_size}.
\subsection{Treatment effect estimates}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/main_estimates_admit_mu.pdf}
}
\caption{Treated and re-weighted control percent admitted.}
\label{fig:main_estimates_marginal}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/main_estimates_marginal_admit.pdf}
}
\caption{Estimated effects on admission.}
\label{fig:main_estimates_marginal_admit}
\end{subfigure}%
\caption{Estimated treated and control means and treatment effect of letters of recommendation on admission $\pm$ two standard errors, overall and by URM status and Admissibility Index.}
\label{fig:marginal_estimates}
\end{figure}
After assessing local balance and overlap, we can now turn to estimating the differential impacts of letters of recommendation.
Figure \ref{fig:marginal_estimates} shows (1) the percent of applicants who submitted LORs who were accepted, $\hat{\mu}_{1g}$; (2) the imputed counterfactual mean, $\hat{\mu}_{0g}$; and (3) the ATT, $\hat{\mu}_{1g} - \hat{\mu}_{0g}$. The standard errors are computed via the sandwich estimator in Equation \eqref{eq:sandwich}.
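These point estimates are simple weighted averages; a minimal sketch on hypothetical arrays (sandwich standard errors omitted):

```python
import numpy as np

def subgroup_att(y, w, gamma, g, group):
    # hat{mu}_{1g}: treated mean within the subgroup;
    # hat{mu}_{0g}: gamma-weighted control mean within the subgroup.
    t = (g == group) & (w == 1)
    c = (g == group) & (w == 0)
    mu1 = y[t].mean()
    mu0 = np.sum(gamma[c] * y[c]) / np.sum(gamma[c])
    return mu1, mu0, mu1 - mu0
```
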
Overall, we estimate that LORs increased admission rates by 5 percentage points (pp).
While we estimate a larger effect for non-URM applicants (6.2 pp) than URM applicants (4.5 pp), there is insufficient evidence to distinguish between the two effects.
Across admissibility levels, we see a roughly positive trend between treatment effects and the AI, potentially with a peak for the 10\%-20\% group. This is driven by the very small estimated effect for applicants with AI $< 5\%$, who are very unlikely to be accepted with or without LORs. LORs seem to have a larger effect for applicants closer to the cusp of acceptance.
\begin{figure}[htbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/main_estimates_interact_admit_mu.pdf}
}
\caption{Treated and re-weighted control percent admitted.}
\label{fig:main_estimates_interact_admit_mu}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/main_estimates_interact_admit.pdf}
}
\caption{Estimated effects on admission.}
\label{fig:main_estimates_interact_admit}
\end{subfigure}
\caption{Estimated treated and control means and treatment effect of letters of recommendation on admission $\pm$ two standard errors, further broken down by URM status interacted with the Admissibility Index. }
\label{fig:interaction_estimates}
\end{figure}
Figure \ref{fig:interaction_estimates} further stratifies the subgroups, showing the effects jointly by URM status and AI. While the point estimate for the overall increase in admission rates is slightly larger for non-URM applicants than for URM applicants, this is mainly a composition effect. For applicants very unlikely to be admitted (AI $< 5$\%) the point estimates are nearly identical for URM and non-URM applicants, although the URM subgroup is estimated much more precisely.
For the next two levels of the admissibility index (AI between 5\% and 20\%), URM applicants have a higher estimated impact, with imprecise estimates for non-URM applicants.
For the highest admissibility groups (AI $> 20$\%), non-URM applicants have larger positive effects, though again these estimates are noisy.
Since URM applicants have lower AI on average, the overall estimate is also lower for URM applicants.
Furthermore, the peak in the effect for middle-tier applicants is more pronounced for URM applicants than non-URM applicants.
From Figure \ref{fig:main_estimates_interact_admit_mu} we see that this is primarily because high admissibility URM applicants have very high imputed admission rates.
Appendix \ref{sec:robustness_appendix} includes extensive robustness checks and other sensitivity analyses.
We first compare our main results with simple estimates based on linear regression and on plugging in the AI.
Both approaches diverge from our main results for the highest admissibility non-URM applicants, a group with limited overlap.
For other subgroups, the AI-based estimates are similar to ours, while the linear regression estimates differ, reflecting that estimator's poor control over local imbalance.
We also explore an alternative approach that instead leverages unique features of the UC Berkeley pilot study, which included an additional review, without the letters of recommendation, of a sample of 10,000 applicants.
These results are broadly similar to the estimates from the observational study, again differing mainly in regions with relatively poor overlap.
Finally, we conduct a formal sensitivity analysis for violations of the ignorability assumption (Assumption \ref{a:ignore}), adapting a recent proposal from \citet{soriano2019sensitivity}.
Using this approach we conclude that there would need to be substantial unmeasured confounding, of roughly the same predictive power as the AI, to qualitatively change our conclusions.
Taken together, our main results and accompanying sensitivity checks paint a relatively clear picture of differential impact of letters of recommendation across applicants' \emph{a priori} probability of admission. Treatment effects are low for applicants who are unlikely to be accepted and high for applicants on the margin for whom letters provide useful context, with some evidence of a dip for the highest admissibility applicants.
Our estimates of differential impacts between URM and non-URM students are more muddled, due to large sampling errors, and do not support strong conclusions. Point estimates indicate that LORs benefit URM applicants more than they do non-URM applicants at all but the highest academic indexes. Because non-URM applicants are overrepresented in the high-AI category, the point estimate for the average treatment effect is larger for non-URMs; however, there is insufficient precision to distinguish between the two groups.
\subsection{Augmented and machine learning estimates}
We now consider augmenting the weighting estimator with an estimate of the prognostic score, $\hat{m}(x, g)$. In Appendix Figure \ref{fig:augmented_estimates_ridge} we show estimates after augmenting with ridge regression, fully interacting $\phi(X)$ with the strata indicators; we compute standard errors via Equation \eqref{eq:sandwich}, replacing $Y_i - \hat{\mu}_{0g}$ with the empirical residual $Y_i - \hat{m}(X_i, g)$. Because the partially pooled balancing weights achieve excellent local balance for $\phi(X)$, augmenting with a model that is also linear in $\phi(X)$ results in minimal adjustment.
We therefore augment with random forests, a nonlinear outcome estimator.
Tree-based estimators are a natural choice, creating ``data-dependent strata'' similar in structure to the strata we define for $G$.
For groups where the weights $\hat{\gamma}$ have good balance across the estimates $\hat{m}(x, g)$, there will be little adjustment due to the outcome model.
Conversely, if the raw and bias-corrected estimates disagree for a subgroup, then the weights have poor local balance across important substantive data-defined strata, and we should be more cautious about our estimates for these subgroups.
Figure \ref{fig:augmented_estimates} shows the random forest-augmented effect estimates relative to the un-augmented estimates; the difference between the two is the estimated bias. Overall, the random forest estimate of the bias is negligible and, as a result, the un-adjusted and adjusted estimators largely coincide. Augmentation, however, does seem to stabilize the higher-order interaction between AI and URM status, with particularly large adjustments for the highest AI group (AI $\geq 20$\%). This suggests that we should be wary of over-interpreting any change in the relative impacts for URM and non-URM applicants as AI increases.
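The bias correction described here can be sketched in the generic augmented-weighting form; this is a simplified stand-in for the estimator in Equation \eqref{eq:mu0g_hat_aug}, with hypothetical inputs:

```python
import numpy as np

def augmented_mu0(y_control, gamma, m_control, m_treated):
    # Augmented counterfactual mean for one subgroup: the average
    # outcome-model prediction over treated units, plus the gamma-weighted
    # control residuals as a bias correction.
    gamma = gamma / gamma.sum()
    return m_treated.mean() + np.sum(gamma * (y_control - m_control))
```

When the weights already balance the model's predictions, the residual term is near zero and the adjustment is minimal, which is the behavior reported above.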
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/augmented_estimates_marginal.pdf}
}
\caption{Overall and by URM status and AI.}
\label{fig:augmented_estimates_marginal}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/augmented_estimates_interact.pdf}
}
\caption{By URM status interacted with AI.}
\label{fig:augmented_estimates_interact}
\end{subfigure}
\caption{Estimated effect of letters of recommendation on admission rates with and without augmentation via a random forest outcome model.}
\label{fig:augmented_estimates}
\end{figure}
We also compare to treatment effects estimated via automatic, flexible machine learning approaches that give no special precedence to our pre-defined subgroups. First, Appendix Figure \ref{fig:simple_estimates} shows the result of using the ridge regression model above to impute the counterfactual means $\hat{\mu}_{0g}$ for each subgroup.
Because the partially-pooled weights achieve excellent local balance, the ridge estimates broadly comport with both the weighting and augmented estimates. However, ridge regression leads to more precise estimates for the subgroups with lower effective sample sizes and fewer comparable control units. While the augmented balancing weights limit the amount of extrapolation, the ridge estimator nonetheless extrapolates an arbitrary amount beyond the support of the control units in order to find a lower-variance estimate \citep{benmichael2019_augsynth}.
Finally, we use Bayesian causal forests \citep[BCF;][]{Hahn2020} to estimate the conditional average treatment effect given the covariates and subgroup, $\hat{\tau}(x,g)$, then aggregate over the treated units in the group to estimate the CATT, $\hat{\tau}_g = \frac{1}{n_{1g}}\sum_{G_i = g} W_i \hat{\tau}(X_i, G_i)$.
This approach gives no special consideration to the subgroups of interest: $G$ enters symmetrically with the other covariates $X$.
Appendix Figure \ref{fig:bcf_estimates} shows the results. The BCF estimate of the overall ATT is nearly the same as our main estimates and similarly finds no variation between URM and non-URM applicants. However, the BCF estimates find less heterogeneity across admissibility levels and little to no heterogeneity across URM and admissibility subgroups, in part because this approach regularizes higher-order interactions in the CATE function. While this can be beneficial in many cases, with pre-defined subgroups this can lead to over-regularization of the effects of interest, as happens here. Furthermore, the BCF estimates are extremely precise, even in regions with limited overlap \citep[see discussion in][]{Hahn2020}. Overall, we find this approach less credible in our setting with pre-defined subgroups of interest.
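The aggregation from unit-level estimates to subgroup CATTs is a simple average over treated units; a sketch in which the fitted $\hat{\tau}_i$ values are hypothetical:

```python
import numpy as np

def aggregate_catt(tau_hat, w, g):
    # tau_g = (1 / n_1g) * sum_{G_i = g} W_i * tau_hat_i:
    # average the unit-level effect estimates (e.g. posterior means from
    # a fitted BCF model) over the treated units in each subgroup.
    return {grp: tau_hat[(g == grp) & (w == 1)].mean()
            for grp in np.unique(g[w == 1])}
```
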
\section{Discussion}
\label{sec:discussion}
Estimating heterogeneous treatment effects and assessing treatment effect variation in observational studies is a challenge, even for pre-specified subgroups. Focusing on weighting estimators that estimate subgroup treatment effects by re-weighting control outcomes, we show that the estimation error depends on the level of \emph{local imbalance} between the treated and control groups after weighting. We then present a convex optimization problem that finds approximate balancing weights that directly target the level of local imbalance within each subgroup, while ensuring exact global balance to also estimate the overall effect. Using this method to estimate heterogeneous effects in the UC Berkeley letters of recommendation pilot study, we find evidence that letters of recommendation lead to better admissions outcomes for stronger applicants, with mixed evidence of differences between URM and non-URM applicants.
There are several directions for future methodological work. First, hyperparameter selection for balancing weights estimators is an important question in practice but remains an open problem. We elect to choose the hyperparameter by explicitly tracing out the level of balance and effective sample size as the hyperparameter changes. However, cross-validation approaches such as that proposed by \cite{Wang2019} may have better properties. This is an important avenue for future work.
Second, we directly estimate the effect of submitting an LOR among those who submit. However, we could instead frame the question in terms of non-compliance and use the \emph{invitation} to submit an LOR as an instrument for submission. Using the approximate balancing weights procedure described above we could adjust for unequal invitation probabilities, and estimate the effect on compliers via weighted two-stage least squares.
Finally, we could consider applying this approach to subgroup effects in randomized trials, adapting recent proposals from, among others, \citet{zeng2021propensity} and \citet{yang2021covariate}. We expect that overlap will be substantially improved in this setting, leading to larger effective sample sizes, even if the randomized trials themselves are smaller. We also anticipate that the design-based approach we advocate here will help researchers navigate Simpson's paradox-type challenges in subgroup effects \citep{vanderweele2011interpretation}.
\clearpage
\section{Robustness checks and sensitivity analysis}
\label{sec:robustness_appendix}
We now assess how much these conclusions change under alternative estimators and sample definitions, and under violations of the key ignorability assumption.
\subsection{Alternative estimators and sample definitions}
First, we can see how our main estimates contrast with simple comparisons between those who submitted letters and those who did not. Appendix Figure \ref{fig:simple_estimates} shows effects estimated by comparing the admission rate for the treatment group to the admission rate predicted by the AI.
This comparison shows a large difference between the effects for URM and non-URM applicants, with a \emph{negative} estimated effect on non-URM applicants.
This is primarily driven by a large negative estimate for the highest admissibility non-URM applicants, with the other estimates roughly in line with our main estimates. As we discuss in Section \ref{sec:ai}, while the AI is very predictive overall it has less predictive power for higher admissibility applicants, resulting in unreliable estimates of the effect.
Appendix Figure \ref{fig:simple_estimates} also shows simple linear regression estimates of the effects, regressing admission on treatment and an additional set of terms from the transformed covariates $\phi(X)$. We estimate effects first with treatment alone, then with treatment interacted with URM status, with AI group, and with both. This approach does not directly control for local imbalances and relies on a correctly specified linear additive specification. Because of this, off-the-shelf linear regression disagrees with our main estimates, estimating a smaller overall effect and no effect for non-URM applicants. This is driven by a negative effect for low admissibility non-URM applicants, a region of limited overlap as discussed above.
Next, we consider the sensitivity of our results to a different definition of the sample.
Recall that an applicant may not have submitted an LOR for one of two reasons: (i) they were not invited to do so, and (ii) they did not submit even though they were invited.
We assess the sensitivity of our results to excluding this first group
when using the weighting approach.
Appendix Figure \ref{fig:subset_estimates} shows the overall estimated effect and the effect for URM and non-URM applicants with the full sample and restricted to applicants who were invited to submit an LOR.
The point estimates are similar,
and although the number of control units is much smaller in the restricted sample --- 3,452 vs 29,398 ---
the standard errors are only slightly larger.
This reflects the fact that the weighting approach finds few of the no-invitation control units to be adequate comparisons to the treated units.
\subsection{Effect on second reader scores and within-subject comparison}
\label{sec:within}
We now consider effects on an intermediate outcome: whether the second reader --- who has access to the LORs --- gives a ``Yes'' score.
Because these are \emph{design-based} weights, we use the same set of weights to estimate effects on both second reader scores and admissions decisions. We find a similar pattern of heterogeneity overall.
With this outcome we can also make use of a within-study design to estimate treatment effects, leveraging scores from additional third readers who did not have access to the letters of recommendation.
After the admissions process concluded, 10,000 applicants who submitted letters were randomly sampled and the admissions office recruited several readers to conduct additional evaluations of the applicants \citep{rothstein_lor2017}. During this supplemental review cycle, the readers were \emph{not} given access to the letters of recommendation, but otherwise the evaluations were designed to be as similar as possible to the second reads that were part of the regular admissions cycle; in particular, readers had access to the first readers' scores.
With these third reads we can estimate the treatment effect by taking the average difference between the second read (with the letters) and the third read (without the letters). One major issue with this design is that readers might have applied different standards during the supplemental review cycle. However, if the third readers applied any different standard consistently across URM and admissibility subgroups, we can still compare treatment effects across these subgroups.
Appendix Figures \ref{fig:reader2_marginal} and \ref{fig:reader2_interact} show the results for both approaches. Overall for second reader scores we see a similar structure of heterogeneity as for admission rates, although there does not appear to be an appreciable decline in the treatment effect for the highest admissibility non-URM applicants.
The two distinct approaches yield similar patterns of estimates overall, with the largest discrepancy for applicants with a predicted probability of admission between 5\% and 10\%, particularly for non-URM applicants. However, this group has a very low effective sample size, and so the weighting estimates are very imprecise.
\subsection{Formal sensitivity analysis}
\label{sec:sensitivity}
We assess sensitivity to the key assumption
underlying our estimates, Assumption \ref{a:ignore}: an applicant's LOR submission is conditionally independent of that applicant's potential admission decision.
Since we observe all the information leading to an invitation to submit an LOR, we believe that Assumption \ref{a:ignore} is plausible for this step in the process. However, applicants' decisions to submit LORs given that invitation might vary in unobserved ways that are correlated with admission.
To understand the potential impact of such unmeasured confounding, we perform a formal sensitivity analysis. Following the approach of \citet{Zhao2019_sensitivity} and \citet{soriano2019sensitivity}, we allow the true propensity score conditioned on the control potential outcome, $e(x, g, y) \equiv P(W = 1 \mid X = x, G = g, Y(0) = y)$, to differ from the probability of treatment given covariates $x$ and group membership $g$, $e(x,g)$, by a factor of $\Lambda$ in the odds ratio:
\begin{equation}
\label{eq:sens}
\Lambda^{-1} \leq \frac{\nicefrac{e(x,g)}{1 - e(x,g)}}{\nicefrac{e(x,g,y)}{1 - e(x,g,y)}} \leq \Lambda.
\end{equation}
\noindent This generalizes Assumption \ref{a:ignore} to allow for a pre-specified level of unmeasured confounding, where $\Lambda = 1$ corresponds to the case with no unmeasured confounding. The goal is then to find the smallest and largest ATT, $[\tau^{\text{min}}, \tau^{\text{max}}]$, consistent with a given $\Lambda$ for the marginal sensitivity model in Equation \eqref{eq:sens}. Following \citet{soriano2019sensitivity}, we use the percentile bootstrap to construct a 95\% confidence interval for this bound, $[L, U]$.
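A small helper makes the odds-ratio constraint in Equation \eqref{eq:sens} concrete; the propensity values used below are hypothetical:

```python
def odds(p):
    # Odds corresponding to a probability p in (0, 1).
    return p / (1 - p)

def satisfies_marginal_model(e_xg, e_xgy, lam):
    # Marginal sensitivity model: the odds ratio between the observed
    # propensity e(x, g) and the full-data propensity e(x, g, y) must lie
    # in [1/Lambda, Lambda]; Lambda = 1 is no unmeasured confounding.
    ratio = odds(e_xg) / odds(e_xgy)
    return 1 / lam <= ratio <= lam
```
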
In particular, we focus on the largest value of $\Lambda$ for which the overall ATT remains statistically significant at the 95\% level (i.e., $L > 0$), which we compute to be $\Lambda = 1.1$.
We then modify this approach to focus on subgroup differences; we focus on differences between URM and non-URM applicants but can apply this to other subgroups as well. First, we use the same procedure to find bounds on the effect for URM applicants, $[L^{\text{urm}}, U^{\text{urm}}]$, and non-URM applicants, $[L^{\text{non}}, U^{\text{non}}]$. We then construct worst-case bounds on their \emph{difference}, $[L^{\text{urm}} - U^{\text{non}}, U^{\text{urm}} - L^{\text{non}}]$.
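The worst-case difference bound is simple interval arithmetic; a sketch with hypothetical per-group bounds measured in percentage points:

```python
def worst_case_difference(bounds_urm, bounds_non):
    # Worst-case bounds on the URM minus non-URM effect difference,
    # [L_urm - U_non, U_urm - L_non], from per-group sensitivity
    # bounds (L, U) at a common Lambda.
    L_u, U_u = bounds_urm
    L_n, U_n = bounds_non
    return (L_u - U_n, U_u - L_n)
```
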
Although we fail to detect a difference between the two groups, there may be unmeasured variables that confound a true difference in effects. To understand how large the difference could be, we use the sensitivity value that nullifies the overall effect, $\Lambda = 1.1$, and construct a 95\% confidence interval for the difference.
We find that we cannot rule out a true difference as large as 12 pp, with 95\% confidence interval $(-12\%, 8.2\%)$.
To understand this number, Appendix Figure \ref{fig:sens} shows the strength required of an unmeasured confounder in predicting the admission outcome (measured as the magnitude of the coefficient from a regression of the outcome on the unmeasured confounder) to produce enough error to correspond to $\Lambda = 1.1$, for a given level of imbalance in the unmeasured confounder between applicants who did and did not submit LORs.\footnote{Letting $\delta$ denote this imbalance, the absolute regression coefficient must be larger than
$\max\{|L^{\text{urm}} - U^{\text{non}}|, |U^{\text{urm}} - L^{\text{non}}|\} / \delta = 0.592 / \delta$.}
It compares these values to the imbalance in the components of $\phi(X)$ \emph{before weighting} and the regression coefficient for each component where we regress the outcome on $\phi(X)$ for the control units. We find that the unmeasured confounder would have to have a higher level of predictive power or imbalance than any of our transformed covariates, except for the AI. Thus, an unmeasured confounder would have to be relatively strong and imbalanced in order to mask a substantial difference between URM and non-URM applicants.
\clearpage
\section{Additional simulation estimators}
\label{sec:sim_appendix}
We compute treatment effects for the following eight estimators:
\begin{itemize}
\item \emph{Partially pooled balancing weights:} approximate balancing weights that solve Equation \eqref{eq:primal}, using $G$ as the stratifying variable and prioritizing local balance by setting $\lambda_g = \frac{1}{n_{1g}}$.
\item \emph{Augmented balancing weights:} augmenting the partially pooled balancing weights as in Equation \eqref{eq:mu0g_hat_aug} where $\hat{m}_0(x, g)$ is fit via ridge regression with all interactions.
\item \emph{Fully pooled balancing weights:} approximate balancing weights that solve Equation \eqref{eq:primal}, but ignore local balance by setting $\lambda \to \infty$, thus fully pooling towards the global model. This is equivalent to stable balancing weights in Equation \eqref{eq:sbw} with an exact global balance constraint $\delta = 0$.
\item \emph{No pooled balancing weights:} approximate balancing weights that solve Equation \eqref{eq:primal}, but without the exact global balance constraint.
\item \emph{Full interaction IPW:} traditional IPW with a fully interacted model that estimates a separate propensity score within each stratum as in Equation \eqref{eq:ipw_interact}.
\item \emph{Fixed effects IPW:} full interaction IPW with stratum-specific coefficients constrained to be equal to a global parameter $\beta_g = \beta$ for all $g$.
\item \emph{Full interaction ridge regression outcome model:} estimating $\mu_{0g}$ via $\hat{\mu}_{0g} = \frac{1}{n_{1g}}\sum_{G_i = g}W_i \hat{m}_0(X_i, g)$, where $\hat{m}_0(x, g)$ is the same ridge regression predictor as for the augmented balancing weights.
\item \emph{Bayesian Causal Forests (BCF)}: Estimating $\tau_g$ as $\frac{1}{n_{1g}} \sum_{G_i = g} W_i \hat{\tau}_i$, where $\hat{\tau}_i$ are the posterior predictive means from a Bayesian Causal Forest estimator \citep{Hahn2020}.
\end{itemize}
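As a sketch of the stratum-wise outcome-model aggregation above (illustrative code under our naming, not the simulation implementation), the estimate averages control-model predictions over the treated units in a stratum:

```python
# Sketch: hat{mu}_{0g} = average of control-outcome predictions
# m0_hat(X_i, g) over treated units in stratum g. Arrays are toy data.
import numpy as np

def mu0g_hat(m0_pred, W, G, g):
    """m0_pred: predictions m0_hat(X_i, g) for all units;
    W: treatment indicators; G: stratum labels."""
    treated_in_g = (W == 1) & (G == g)
    return m0_pred[treated_in_g].mean()

W = np.array([1, 1, 0, 1])           # treatment indicators
G = np.array([0, 0, 0, 1])           # stratum labels
m0 = np.array([0.2, 0.4, 0.9, 0.5])  # control-model predictions
```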
\clearpage
\section{Additional figures and tables}
\setcounter{figure}{0}
\begin{figure}[!htb]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/ai_mse.pdf}
}
\caption{Brier score.}
\label{fig:ai_mse}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/ai_avg.pdf}
}
\caption{Admission rates}
\label{fig:ai_avg}
\end{subfigure}
\caption{(a) Mean square error (Brier score) and (b) admission rates for the Admissibility Index predicting the 2016-2017 cycle admissions results, computed in 2\% groups.}
\label{fig:ai_performance}
\end{figure}
\begin{table}[!htbp]
\centering
\begin{tabular}{@{}llrr@{}}
\toprule
College & URM & AUC & Brier Score\\
\midrule
\multirow{2}{*}{Letters and Science} & URM & 91.7\% & 6\%\\
& Not URM & 91.7\% & 9\%\\
\midrule
\multirow{2}{*}{Engineering} & URM & 91.6\% & 4\%\\
& Not URM & 91.8\% & 9\%\\
\bottomrule
\end{tabular}
\caption{AUC and Brier score for the Admissibility Index predicting the 2016-2017 cycle admissions results.}
\label{tab:ai_roc_auc}
\end{table}
\begin{figure}[!htb]
\centering \includegraphics[width=\maxwidth]{sims/figure/plot_all_weights_both-1}
\caption{Performance of approximate balancing weights, traditional IPW with logistic regression, and outcome modelling for estimating subgroup treatment effects.}
\label{fig:plot_all_weights}
\end{figure}
\begin{figure}[tb]
\centering \includegraphics[width=\maxwidth]{sims/figure/plot_all_weights_coverage-1.png}
\caption{Coverage for approximate balancing weights, traditional IPW with logistic regression, and outcome modelling for estimating subgroup treatment effects.}
\label{fig:sim_coverage}
\end{figure}
\begin{figure}[tbp]
\centering \includegraphics[width=\maxwidth]{figure/love_plot_box.pdf}
\caption{Distribution of covariate balance measured by the mean standardized difference for different weighting methods.}
\label{fig:love_plot_box}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/pct_bias_reduce_marginal.pdf}
}
\caption{Overall and by URM status and AI.}
\label{fig:pct_bias_reduce_marginal}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/pct_bias_reduce_interact.pdf}
}
\caption{By URM status interacted with AI.}
\label{fig:pct_bias_reduce_interact}
\end{subfigure}
\caption{Imbalance in the admissibility index after weighting relative to before weighting, overall and within each subgroup.
For several subgroups, the fully pooled balancing weights procedure results in \emph{increased} imbalance in the admissibility index, denoted by an arrow.}
\label{fig:pct_bias_reduce}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/risk_ratio_estimates_marginal.pdf}
}
\caption{Overall and by URM status and AI.}
\label{fig:risk_ratio_estimates_marginal.pdf}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/risk_ratio_estimates_interact.pdf}
}
\caption{By URM status interacted with AI.}
\label{fig:risk_ratio_estimates_interact}
\end{subfigure}
\caption{Estimated log risk ratio of admission with and without letters of recommendation $\pm$ two standard errors computed via the delta method, overall and by URM status and AI.}
\label{fig:risk_ratio_estimates}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/simple_estimates_marginal_admit.pdf}
}
\caption{Overall and by URM status and AI}
\label{fig:simple_estimates_interact_admit_mu}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/simple_estimates_interact_admit.pdf}
}
\caption{By URM status interacted with AI}
\label{fig:simple_estimates_interact_admit}
\end{subfigure}
\caption{Estimated effect of letters of recommendation on admission rates \emph{without} adjusting for selection, and comparing to the expected admission rate from the AI score and regression.}
\label{fig:simple_estimates}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/augmented_estimates_marginal_ridge.pdf}
}
\caption{Overall and by URM status and AI.}
\label{fig:augmented_estimates_marginal_ridge}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/augmented_estimates_interact_ridge.pdf}
}
\caption{By URM status interacted with AI.}
\label{fig:augmented_estimates_interact_ridge}
\end{subfigure}
\caption{Estimated effect of letters of recommendation on admission rates with and without augmentation via ridge regression with 5-fold cross validation.}
\label{fig:augmented_estimates_ridge}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/weighting_estimates_marginal.pdf}
}
\caption{Overall and by URM status and AI.}
\label{fig:weighting_estimates_marginal}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/weighting_estimates_interact.pdf}
}
\caption{By URM status interacted with AI.}
\label{fig:weighting_estimates_interact}
\end{subfigure}
\caption{Estimated effect of letters of recommendation on admission rates for comparable weighting estimators.}
\label{fig:weighting_estimates}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/main_estimates_marginal_read2.pdf}
}
\caption{Partially pooled balancing weights}
\label{fig:reader2_estimates_marginal}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/within_estimates_marginal.pdf}
}
\caption{Within-subject design}
\label{fig:within_estimates_marginal}
\end{subfigure}
\caption{Effects on second reader scores overall, by URM status, and by AI, estimated via (a) the partially pooled balancing weights estimator and (b) the within-subject design.}
\label{fig:reader2_marginal}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/main_estimates_interact_read2.pdf}
}
\caption{Partially pooled balancing weights}
\label{fig:reader2_estimates_interact}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/within_estimates_interact.pdf}
}
\caption{Within-subject design}
\label{fig:within_estimates_interact}
\end{subfigure}
\caption{Effects on second reader scores by URM status interacted with AI, estimated via (a) the partially pooled balancing weights estimator and (b) the within-subject design.}
\label{fig:reader2_interact}
\end{figure}
\begin{figure}[tbp]
\centering
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/bcf_estimates_marginal_admit.pdf}
}
\caption{Overall and by URM status and AI.}
\label{fig:bcf_estimates_marginal}
\end{subfigure}%
~
\begin{subfigure}[t]{0.45\textwidth}
{\centering \includegraphics[width=\textwidth]{figure/bcf_estimates_interact_admit.pdf}
}
\caption{By URM status interacted with AI.}
\label{fig:bcf_estimates_interact}
\end{subfigure}
\caption{Estimated effect of letters of recommendation on admission rates via Bayesian Causal Forests \citep{Hahn2020}. }
\label{fig:bcf_estimates}
\end{figure}
\begin{figure}[tbp]
\centering
{\centering \includegraphics[width=0.4\textwidth]{figure/subset_estimates_urm_admit.pdf}
}
\caption{Estimated effect of letters of recommendation on admission rates via weighting in the full sample and restricting to applicants who were invited to submit an LOR}
\label{fig:subset_estimates}
\end{figure}
\begin{figure}[tbp]
\centering
{\centering \includegraphics[width=.6\textwidth]{figure/amplification.pdf}}
\caption{Amplification of a sensitivity analysis. The line shows the magnitude of the regression coefficient and the magnitude of the imbalance in an unmeasured standardized covariate required to produce enough bias to remove the effect. Points correspond to the regression coefficients and imbalance before weighting for the 51 components of $\phi(X)$.}
\label{fig:sens}
\end{figure}
\clearpage
\section{Proofs}
\begin{proof}[Proof of Proposition \ref{prop:dual}]
First, we will augment the primal optimization problem in Equation \eqref{eq:primal} with auxiliary variables $\ensuremath{\mathcal{E}}_1,\ldots, \ensuremath{\mathcal{E}}_J$ so that $\ensuremath{\mathcal{E}}_g = \sum_{G_i = g, W_i = 0}\gamma_i \phi(X_i) - \sum_{G_i = g, W_i = 1} \phi(X_i)$. Then the optimization problem becomes:
\begin{equation}
\label{eq:primal_aux}
\begin{aligned}
\min_{\gamma} \;\;\;\;\;\;\; & \sum_{g=1}^J \left[\frac{1}{2\lambda_g}\left\|\ensuremath{\mathcal{E}}_g\right\|_2^2 + \sum_{G_i=g,W_i=0} \left(\frac{1}{2}\gamma_i^2 + \ensuremath{\mathcal{I}}(\gamma_i \geq 0)\right)\right]\\
\text{subject to } & \sum_{W_i = 0} \gamma_i \phi(X_i) = \sum_{W_i = 1}\phi(X_i)\\
& \ensuremath{\mathcal{E}}_g = \sum_{G_i = g, W_i = 0} \gamma_i \phi(X_i) - \sum_{G_i = g, W_i = 1} \phi(X_i), \;\;\; g=1,\ldots,J\\
& \sum_{G_i = g, W_i = 0} \gamma_i = n_{1g}, \;\;\; g=1,\ldots,J,
\end{aligned}
\end{equation}
where $\ensuremath{\mathcal{I}}(x \geq 0) = \left \{ \begin{array}{cc} 0 & x \geq 0\\ \infty & x < 0 \end{array}\right.$ is the indicator function.
The first constraint induces a Lagrange multiplier $\mu_\beta$, the next $J$ constraints induce Lagrange multipliers $\delta_1,\ldots,\delta_J$, and the normalization constraints induce Lagrange multipliers $\alpha_1,\ldots,\alpha_J$. Then the Lagrangian is
\begin{equation}
\label{eq:lagrangian}
\begin{aligned}
\ensuremath{\mathcal{L}}(\gamma, \ensuremath{\mathcal{E}}, \mu_\beta, \delta, \alpha) & = \sum_{g=1}^J \left[\frac{1}{2\lambda_g}\|\ensuremath{\mathcal{E}}_g\|_2^2 - \ensuremath{\mathcal{E}}_g \cdot \delta_g + \sum_{G_i = g, W_i = 0} \left(\frac{1}{2}\gamma_i^2 + \ensuremath{\mathcal{I}}(\gamma_i \geq 0) - \gamma_i (\alpha_g + (\mu_\beta + \delta_g) \cdot \phi(X_i))\right) \right]\\
& \;\;\; + \sum_{g=1}^J \sum_{G_i = g, W_i = 1} (\alpha_g + (\mu_\beta + \delta_g) \cdot \phi(X_i))
\end{aligned}
\end{equation}
The dual objective is:
\begin{equation}
\label{eq:dual_obj}
\begin{aligned}
q(\mu_\beta, \delta, \alpha) & = \sum_{g=1}^J \left[\min_{\ensuremath{\mathcal{E}}_g} \left\{\frac{1}{2\lambda_g}\|\ensuremath{\mathcal{E}}_g\|_2^2 - \ensuremath{\mathcal{E}}_g \cdot \delta_g \right\} + \sum_{G_i = g, W_i = 0} \min_{\gamma_i \geq 0}\left\{\frac{1}{2}\gamma_i^2 - \gamma_i (\alpha_g + (\mu_\beta + \delta_g) \cdot \phi(X_i))\right\} \right]\\
& \;\;\; + \sum_{g=1}^J \sum_{G_i = g, W_i = 1} (\alpha_g + (\mu_\beta + \delta_g) \cdot \phi(X_i))
\end{aligned}
\end{equation}
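Both inner minimizations have closed forms (a standard convex-conjugate computation, spelled out here for completeness):
\begin{equation*}
\min_{\ensuremath{\mathcal{E}}_g} \left\{\frac{1}{2\lambda_g}\|\ensuremath{\mathcal{E}}_g\|_2^2 - \ensuremath{\mathcal{E}}_g \cdot \delta_g\right\} = -\frac{\lambda_g}{2}\|\delta_g\|_2^2,
\qquad
\min_{\gamma_i \geq 0}\left\{\frac{1}{2}\gamma_i^2 - \gamma_i c_i\right\} = -\frac{1}{2}\left[c_i\right]_+^2,
\end{equation*}
where $c_i = \alpha_g + (\mu_\beta + \delta_g) \cdot \phi(X_i)$; the first minimum is attained at $\ensuremath{\mathcal{E}}_g = \lambda_g \delta_g$ and the second at $\gamma_i = [c_i]_+$ (for $c_i \leq 0$ the optimum is $\gamma_i = 0$).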
Note that the inner minimization terms are the negative convex conjugates of $\frac{1}{2}\|x\|_2^2$ and ${\frac{1}{2}x^2 + \ensuremath{\mathcal{I}}(x \geq 0)}$, respectively. Solving these inner optimization problems yields that
\begin{equation}
\label{eq:dual_obj2}
\begin{aligned}
q(\mu_\beta, \delta, \alpha) & = - \sum_{g=1}^J \left[\frac{\lambda_g}{2} \|\delta_g\|_2^2 + \sum_{G_i = g, W_i = 0} \frac{1}{2}\left[\alpha_g + (\mu_\beta + \delta_g) \cdot \phi(X_i)\right]_+^2 \right]\\
& \;\;\; + \sum_{g=1}^J \sum_{G_i = g, W_i = 1} (\alpha_g + (\mu_\beta + \delta_g) \cdot \phi(X_i))
\end{aligned}
\end{equation}
Now since there exists a feasible solution to the primal problem \eqref{eq:primal}, Slater's condition implies that the solution to the primal problem is equivalent to the solution to $\max_{\mu_\beta, \alpha,\delta} q(\mu_\beta, \alpha, \delta)$. Defining $\beta_g \equiv \mu_\beta + \delta_g$ gives the dual problem \eqref{eq:dual}. Finally, note that the solution to the minimization over the weights in Equation \eqref{eq:dual_obj} is $\gamma_i = \left[\alpha_g + \beta_g \cdot \phi(X_i)\right]_+$ for $G_i = g$, which shows how to map from the dual solution to the primal solution.
\end{proof}
\clearpage
\section{Introduction}
Sunspots appear as prominent features in the solar photosphere. Comparing white-light solar disk images and magnetograms, one can note that sunspots are also associated with strong magnetic fields. In this paper we focus only on white-light solar disk images because our primary interest is in historical observations, which lack magnetic field information.
Typically, sunspots do not appear individually but in groups. It has been noted that, for estimating solar activity, the number of groups is as important as the number of sunspots \citep{Hoyt}. In many cases, attributing sunspots to sunspot groups is a non-trivial problem that involves more aspects than mere closeness between sunspots. In our research we analyze sunspot groups formed by expert observers and thus sidestep this problem. It should be noted, however, that there have been several attempts to automate this step (e.g. \citet{Colak}, \citet{Zharkova}).
What we focus on is how to describe the structure of sunspot groups numerically while preserving as much information as possible. The structure of a sunspot group is typically described by its area, elongation, number of spots and a few other simple morphological properties. Probably the most advanced and informative descriptor is the class of the sunspot group according to the Zurich or Modified Zurich classification systems \citep{McIntosh}. Indeed, the sunspot group class jointly encodes the area, elongation and structural information about the group.
Obviously, the above set of descriptors is far from complete in the sense that it does not allow a detailed reconstruction of the sunspot group structure. At first glance, it is also unclear how to extend this set substantially in a computationally feasible way.
There were several attempts to introduce additional morphological descriptors, see e.g. \citet{Stenning}, \citet{Ternullo} and \citet{Makarenko}.
However, the problem of automated, complete sunspot group description remains open.
In our research we propose a data-driven approach to sunspot group parametrization that provides a compact yet complete set of sunspot group descriptors. The idea is to map sunspot group images into an appropriate lower-dimensional space. We construct this mapping by training a Variational Autoencoder model and selecting features using Principal Component Analysis. Autoencoder models based on neural networks have already been applied in this field, e.g. by \citet{Chen} to obtain useful features for solar flare prediction and by \citet{Sadykov} for spectral line compression; we, however, are the first to provide a systematic analysis of the latent space and to demonstrate that at least several latent features have a clear physical interpretation.
\section{Data}
We use a daily catalogue of sunspot group images provided by the Kislovodsk Mountain Astronomical Station for the period 2010--2020 \citep{Kislovodsk}. Initially, the solar disk was observed at the station in white light, and photoheliograms were recorded. The photoheliograms were processed to isolate sunspots, sunspot cores and pores and to attribute them to sunspot groups. Each step of this process is verified by expert observers, and thus we consider these data as ground truth in the present research. The resulting daily sunspot maps and group attribution are available on the website \url{https://observethesun.com}.
To prepare a dataset of sunspot groups, we rescale all sunspot maps to a constant solar disk radius of 1200 pixels. This value is selected such that most sunspot groups fit into patches of $256\times 256$ pixels. We then crop patches of $256\times 256$ pixels centered at each sunspot group that lies completely within the patch frame. More precisely, we obtain three binary masks of size $256\times 256$, one each for sunspots, sunspot cores and pores. Note that the binary masks are built for a particular group only and ignore any information about other groups. To avoid undesired projection effects near the solar limb, we consider only groups with a central meridian distance below $60^{\circ}$. There are 8498 groups satisfying this condition.
The constructed dataset of sunspot groups is publicly available in the GitHub repository \url{https://github.com/observethesun/sunspot_groups}. Note that we also provide meta information about each group, including its area, location, number of spots and elongation.
For further research it is convenient to combine the three binary masks into one patch whose pixels take one of three values: 0 for the photosphere (background), 1 for sunspots and pores, and 2 for sunspot cores. Figure~\ref{fig:group} shows a patch with a sunspot group.
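A minimal sketch of this encoding (our illustration, assuming the three masks are binary NumPy arrays; this is not the catalogue's processing code):

```python
import numpy as np

def combine_masks(spots, pores, cores):
    """Combine binary masks into one labeled patch:
    0 = photosphere (background), 1 = sunspots and pores, 2 = sunspot cores."""
    patch = np.zeros_like(spots, dtype=np.uint8)
    patch[(spots == 1) | (pores == 1)] = 1
    patch[cores == 1] = 2  # cores override the sunspot label
    return patch
```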
\begin{figure}
\centering
\includegraphics[width=0.35\textwidth]{Figure_1.pdf}
\caption{Example of a sunspot group from the dataset. Date of observation is 2016 July 18 at 04:16UT. Sunspot group number is 124 according to the catalogue of the Kislovodsk Mountain Astronomical Station.}
\label{fig:group}
\end{figure}
\section{Sunspot parameterization model}
The problem of object parameterization is closely related to the problem of object representation in an appropriate lower-dimensional space. The latter problem is well studied, and there are many linear and non-linear approaches. Specifically, we apply a neural network model with an encoder-decoder architecture. Several reasons motivate this approach.
First, image data has an extremely high-dimensional initial representation (an image of $256\times 256$ pixels is a point in a space of dimension $256^2$). Standard linear dimensionality reduction models, e.g. Principal Component Analysis (PCA, see \citet{PCA}), usually work properly if the number of samples is comparable to or exceeds the number of dimensions. Otherwise they may lead to inconsistent results (see e.g. \cite{HDPCA} for a rigorous mathematical discussion). Thus we focus on non-linear models; it should be noted, however, that the PCA model can nevertheless be useful in some applications (see \citet{Moon}).
Second, many models can only work with data given in vector form. To process image data, one has to stack the rows of the image matrix into a single vector, which breaks the initial data representation. Far fewer models are able to process image data directly; these are mostly models based on convolutional operations (e.g. convolutional neural networks).
One such model is the Variational Autoencoder (VAE, see \citet{VAE}).
\subsection{Variational Autoencoder}
The VAE consists of two consecutive parts called an encoder and a decoder (see Figure~\ref{fig:vae}).
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{Figure_2.pdf}
\caption{Architecture of the Variational Autoencoder model. Encoder passes the input image through a sequence of convolution and downsampling layers. Encoder outputs two tensors, $\mu$ and $\sigma$, interpreted as vectors of mean and variance of a multivariate normal distribution. Decoder obtains a sample from that distribution and passes it through a sequence of convolution and upsampling layers. Output of the decoder is an image of the same shape as the input image in the encoder.}
\label{fig:vae}
\end{figure}
The encoder compresses the input image into a tensor\footnote{In application to neural networks, one- or multi-dimensional arrays are called tensors.} of lower\footnote{Strictly speaking, it is not necessary that the dimensionality of the output tensor be lower than the dimensionality of the input data.} size through a sequence of convolutional and downsampling layers. We interpret this tensor as a lower-dimensional representation of the input image, or a \textit{latent vector}. The idea of the decoder is to reconstruct the original image from the lower-dimensional representation through another sequence of convolutional and upsampling layers. Given a large dataset, one can train such models to compress and decompress images without substantial loss of detail.
Several additional features make the encoder-decoder model more useful.
First, we would like the latent vectors to have a regular distribution.
In the framework of the Variational Autoencoder model, one optimizes the model so that the latent vectors converge in distribution to the multivariate standard normal distribution (standard MVN). As shown in Figure~\ref{fig:vae}, the encoder actually outputs two tensors, $\mu$ and $\sigma$, which we interpret as the vectors of the mean and variance of the MVN. At the training stage, the decoder receives a sample from the MVN with those $\mu$ and $\sigma$, and we penalize the model using the Kullback-Leibler divergence between the data distribution in the latent space and the standard MVN with zero mean and unit variance (see \citet{VAE} for more details and rigorous mathematical reasoning). Once the model is trained, we consider $\mu$ as the lower-dimensional representation of the input image and call it a latent vector.
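For concreteness, the Kullback-Leibler penalty for a diagonal Gaussian encoder against the standard MVN has a well-known closed form; a sketch in NumPy (our illustration, not the paper's training code):

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) )
    = 0.5 * sum(mu^2 + sigma^2 - log(sigma^2) - 1)."""
    return 0.5 * np.sum(mu**2 + np.exp(log_var) - log_var - 1.0)
```

The penalty vanishes exactly when $\mu = 0$ and $\sigma = 1$, i.e. when the latent distribution matches the standard MVN.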
Second, during the training stage we also penalize the model if the reconstructed image (the output of the decoder) is not similar to the encoder input. To define the similarity between two images, one usually applies the mean squared error (MSE) computed over pixels. However, using such a metric to train the model often leads to blurry reconstructions (see e.g. \citet{Dosovitskiy} and \citet{snell2017learning}). In our opinion this is because the MSE by definition reflects global similarity without paying enough attention to structural (local) similarity. To improve the situation, we additionally apply a more advanced metric known as the \textit{perceptual loss} \citep{Johnson2016Perceptual}, which is based on an auxiliary pre-trained neural network model. The idea is to measure the MSE between some intermediate layers of the pre-trained model instead of between the pixels of the two images (specifically, we use the pre-trained VGG11 model, \citet{VGG}). In practice this allows the VAE model to generate much sharper and more natural images. We assume that it also makes the latent features more representative.
Finer details of the VAE configuration are available in the supplementary GitHub repository, along with the model training pipeline. The most essential detail is that the bottleneck of the VAE model, the tensor $\mu$, has size 4096 (in other words, this is the dimensionality of the latent space). Given that the input image has shape $256\times 256$, we obtain a 16-fold compression. This is not very compact yet; moreover, the latent features are not ranked in any way, and some of them are quite strongly correlated.
We find that further reduction of the latent space within the VAE model complicates the training process substantially.
Instead, we find it useful to apply the PCA model to the latent vectors $\mu$.
\subsection{PCA model for latent vectors}
Recall that the idea of the PCA model is to find an orthogonal basis of order $n$ that best represents a given dataset. It can be shown that this basis should be composed of the $n$ leading (i.e. ordered according to the eigenvalues) eigenvectors of the correlation matrix of the observed variables. The eigenvectors ordered according to the eigenvalues are called \textit{principal components} (PCs). The PCA model guarantees that the obtained basis minimizes the total reconstruction error; it also spans the subspace with maximal data variance (see e.g. \citet{Murphy} for mathematical details and Figure~\ref{fig:pca} for notation).
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{Figure_3.pdf}
\caption{Illustration of the PCA model. Black point $\mu$ represents a feature vector (single observation). Blue plane is the subspace formed by $n=2$ leading principal components (PCs). Red point $\hat\mu$ is the projection of $\mu$ onto the subspace formed by $PC_1$ and $PC_2$. $Z_1$ and $Z_2$ denote coordinates of $\hat\mu$ in the basis $PC_1$ and $PC_2$ and are referred to as latent vector. Squared length of the black dotted line is the reconstruction error. Given a set of feature vectors (dataset), the total reconstruction error is the sum of reconstruction errors for each observation.}
\label{fig:pca}
\end{figure}
To apply the PCA model, we first derive the latent vector $\mu$ (of size 4096) for each sunspot group using the VAE encoder. We then treat the latent vectors $\mu$ as input to the PCA model and derive the principal components.
In Figure~\ref{fig:ratio} we show how the number $n$ of principal components affects the reconstruction accuracy. We find that $n=285$ principal components are needed to retain 95\% of the initial variance. We denote by $Z_1$, $Z_2$, ..., $Z_{285}$ the coordinates of the latent vector $\mu$ in the basis of principal components.
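This selection step can be sketched as follows (a NumPy illustration under our naming, not the original pipeline): project the latent vectors onto the fewest leading principal components that explain the target share of variance.

```python
import numpy as np

def pca_fit_transform(M, var_target=0.95):
    """M: (n_samples x n_features) matrix of latent vectors mu.
    Returns the coordinates Z in the basis of the n leading principal
    components, with n chosen to explain at least var_target variance."""
    Mc = M - M.mean(axis=0)                     # center the data
    U, S, Vt = np.linalg.svd(Mc, full_matrices=False)
    explained = S**2 / np.sum(S**2)             # explained variance ratios
    n = int(np.searchsorted(np.cumsum(explained), var_target)) + 1
    Z = Mc @ Vt[:n].T                           # coordinates Z_1, ..., Z_n
    return Z, n
```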
\begin{figure}
\centering
\includegraphics[width=0.75\textwidth]{Figure_4.pdf}
\caption{Reconstruction accuracy against the number of principal components in the latent space of the VAE encoder. The top row shows sample sunspot groups. The next rows show reconstruction from various numbers of principal components. Percentages near each row
show explained variance, while the numbers below indicate the number of principal components. For example, 285 principal components explain 95\% of data variance.}
\label{fig:ratio}
\end{figure}
It can be noted in Figure~\ref{fig:ratio} that simpler groups (e.g. single-spot groups) require fewer principal components (smaller $n$) for accurate reconstruction. We will use this observation to estimate the complexity of sunspot group structure (see the Appendix).
\subsection{The joint VAE and PCA model}
Usually, the outputs of both the VAE encoder and the PCA model are referred to as latent vectors. To avoid confusion, we denote the output of the VAE encoder as the latent vector $\mu$ and the output of the PCA model as the latent vector $Z$. Thus, the joint parametrization model works as follows: image $\to$ VAE encoder $\to$ latent vector $\mu$ $\to$ PCA $\to$ latent vector $Z$. The latent vector $Z$ has size 285, and its components are ordered according to their importance as estimated by the PCA model. Given an arbitrary point in the latent space $Z$ (i.e. a vector of size 285), we can reconstruct the corresponding image as follows: latent vector $Z$ $\to$ inverse PCA $\to$ latent vector $\mu$ $\to$ VAE decoder $\to$ image. In the next section we give a physical interpretation of the latent parameters $Z$ learnt from the dataset of sunspot group images.
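The composition of the two models can be sketched with placeholder callables standing in for the trained encoder/decoder and fitted PCA (all names here are illustrative, not the repository's API):

```python
def parametrize(image, vae_encode, pca_transform):
    """Image -> VAE encoder -> latent vector mu -> PCA -> latent vector Z."""
    return pca_transform(vae_encode(image))

def reconstruct(z, pca_inverse, vae_decode):
    """Latent vector Z -> inverse PCA -> mu -> VAE decoder -> image."""
    return vae_decode(pca_inverse(z))
```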
\section{Interpretation of latent features}
Let $Z_1$, $Z_2$, ..., $Z_{285}$ be the components of the latent vector $Z$. To interpret a particular component, we set all components to zero except one, vary the remaining one, and analyze the decoded images.
We start with $Z_1$. Figure~\ref{fig:z1} shows what happens to the decoded images when we vary $Z_1$ from $-20$ to $20$ with step 10 (while $Z_2=Z_3=\cdots=Z_{285}=0$).
Looking at the decoded images in Figure~\ref{fig:z1} we conclude that $Z_1$ is responsible for the single-spot (unipolar) or multi-spot (bipolar) configuration of the group.
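The traversal procedure can be sketched as follows (illustrative code; decoding each row via inverse PCA and the VAE decoder is assumed and not shown):

```python
import numpy as np

def latent_traversal(component, values, dim=285):
    """Build latent vectors Z with all components zero except `component`,
    which sweeps over `values`. Each row would then be decoded into an
    image via inverse PCA and the VAE decoder."""
    zs = np.zeros((len(values), dim))
    zs[:, component] = values
    return zs

grid = latent_traversal(0, [-20, -10, 0, 10, 20])  # sweep Z_1
```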
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{Figure_5.pdf}
\caption{Generated sunspot group images for latent feature $Z_1$ values from $-20$ to $20$ with step 10.}
\label{fig:z1}
\end{figure}
Taking into account that this physical property is one of the most important in sunspot group characterization, it is interesting to note that the proposed parametrization model arrived at the same conclusion without any supervision from our side.
To support the proposed interpretation we derive $Z_1$ for all sunspot groups in the dataset and plot the distribution in Figure~\ref{fig:1dhist}.
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{Figure_6.pdf}
\caption{Color histograms show distribution of the latent parameter $Z_1$ over single-spot groups (blue color) and multi-spot groups (orange color). Solid line shows accuracy of the thresholding classifier while dashed line gives the thresholding value with the highest accuracy.}
\label{fig:1dhist}
\end{figure}
Specifically, we first plot the distribution of $Z_1$ over all single-spot groups and then over all multi-spot groups. As a result we observe a bimodal distribution, in which the first mode mostly corresponds to single-spot groups and the second mode mostly corresponds to multi-spot groups. To quantify the confusion statistics, we build a set of thresholding classifiers based on different threshold values of $Z_1$. The classifier attributes a sunspot group to the class of single-spot groups if $Z_1$ is less than the threshold, and to the class of multi-spot groups otherwise. The solid line in Figure~\ref{fig:1dhist} shows the accuracy of such classifiers against the threshold value of $Z_1$. We find that the highest accuracy is 0.95, attained at $Z_1=-2.4$.
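The thresholding analysis can be sketched as follows (our illustration with toy data, not the paper's code):

```python
import numpy as np

def best_threshold(z1, is_single):
    """Scan thresholds t over observed Z1 values; predict 'single-spot'
    when Z1 < t, and return the threshold with the highest accuracy."""
    best_t, best_acc = None, -1.0
    for t in np.sort(z1):
        acc = np.mean((z1 < t) == is_single)
        if acc > best_acc:
            best_t, best_acc = float(t), float(acc)
    return best_t, best_acc
```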
In Figure~\ref{fig:z1z2} we investigate the features $Z_1$ and $Z_2$ together (i.e. we vary both $Z_1$ and $Z_2$ and plot the decoded images).
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{Figure_7.pdf}
\caption{Generated sunspot group images for various values of latent features $Z_1$ and $Z_2$. Here we vary $Z_1$ from $-20$ to $20$ with step 10 and
$Z_2$ from $-5$ to $5$ with step 2.5.}
\label{fig:z1z2}
\end{figure}
We conclude that $Z_2$ is responsible for the size of the group or, more specifically, its longitudinal extent. Indeed, in Figure~\ref{fig:z2} we compare $Z_2$ with the longitudinal extent of the group and observe a strong positive correlation between these properties for unipolar groups and
a negative correlation for bipolar groups.
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{Figure_8.pdf}
\caption{Correlation between latent feature $Z_2$ and longitudinal extent of sunspot groups. Blue color is for single-spot groups, orange is for multi-spot groups.}
\label{fig:z2}
\end{figure}
Figure~\ref{fig:2dplots} shows the distribution of sunspot groups in the coordinates $Z_1$ and $Z_2$, while colors represent physical properties of the group such as the number of spots and the area.
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{Figure_9.pdf}
\caption{Distribution of sunspot groups in the space of latent parameters $Z_1$ and $Z_2$. Colors in the panel (a) show single-spot and multi-spot groups. Dots with black edges mark sunspot groups represented by pores only. Colors in the panel (b) show sunspot group area measured
in millionths of the solar hemisphere (MSH).}
\label{fig:2dplots}
\end{figure}
First we note that the latent space has a certain structure. Although we cannot explain why it takes this particular form, we find that the structure is quite stable with respect to the optimization strategy used for neural network training, the depth of the VAE and the number of training parameters. It could be interesting to separately investigate the topological properties of the manifold formed in the latent space.
Second, in Figure~\ref{fig:2dplots}(a) we find a clear separation between single-spot and multi-spot groups. In accordance with Figure~\ref{fig:z1}, the separation is mostly explained by the value of the latent parameter $Z_1$. We also find that sunspot groups represented by pores are localized in the latent space; however, they cannot be isolated based on $Z_1$ and $Z_2$ alone.
Figure~\ref{fig:2dplots}(b) shows that using $Z_1$ and $Z_2$ one can estimate the area of the sunspot group.
Investigating further latent parameters, namely $Z_3$ and $Z_4$, we find them to some extent similar to $Z_2$ and do not discuss them further. However, we find a remarkable role for $Z_5$. Figure~\ref{fig:z1z3} shows decoded images obtained for various values of $Z_1$ and $Z_5$, and we note that $Z_5$ defines the inclination (tilt) angle of the bipolar group.
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{Figure_10.pdf}
\caption{Generated sunspot group images for various values of latent features $Z_1$ and $Z_5$. Here we vary $Z_1$ from $-20$ to $20$ with step 10 and
$Z_5$ from $-5$ to $5$ with step 2.5.}
\label{fig:z1z3}
\end{figure}
The butterfly diagrams in Figure~\ref{fig:tilt} demonstrate the distribution of $Z_5$ over sunspot groups as well as the distribution of the tilt angle measured by an ordinary linear regression fitted to the multi-spot group \citep{Illarionov15}. The opposite signs in the northern and southern hemispheres expected for the tilt angle and observed in Figure~\ref{fig:tilt}(b) are also reproduced by $Z_5$ in Figure~\ref{fig:tilt}(a).
\begin{figure}
\centering
\includegraphics[width=0.85\textwidth]{Figure_11.pdf}
\caption{Upper panel (a): time-latitude distribution of the latent parameter $Z_5$. Bottom panel (b): time-latitude distribution of the tilt angle measured by fitting the ordinary linear regression to the sunspot group.}
\label{fig:tilt}
\end{figure}
Using the slope parameter of the regression line shown in Figure~\ref{fig:reg}, we estimate that the tilt angle (in degrees) can be approximated by $5.2\,Z_5$.
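A minimal sketch of the slope estimation behind this approximation follows. Only the ordinary least-squares slope formula and the reported value 5.2 come from the text; the data arrays in the example are illustrative.

```python
# Sketch: fit the OLS slope of tilt angle against the latent parameter Z_5,
# then use the fitted slope to approximate the tilt angle from Z_5.

def ols_slope(x, y):
    """Slope of the ordinary least-squares regression line y ~ a + b*x."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    return cov / var

def tilt_from_z5(z5, slope=5.2):
    """Approximate tilt angle (degrees) from Z_5; slope=5.2 is the paper's fit."""
    return slope * z5
```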
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Figure_12.pdf}
\caption{Correlation of the latent parameter $Z_5$ with the tilt angle measured by fitting the ordinary linear regression to the sunspot group. The slope of the regression line is found to be 5.2.}
\label{fig:reg}
\end{figure}
Finally, in Figure~\ref{fig:joy} we demonstrate the dependence of the mean tilt angle on latitude, computed both from explicitly measured tilt angles and from $Z_5$.
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{Figure_13.pdf}
\caption{Averaged tilt angle against latitude. The blue line corresponds to tilt angles measured by the regression line fitted to the sunspot group. The orange line shows tilt angles approximated by the latent parameter $Z_5$.}
\label{fig:joy}
\end{figure}
The latitude dependence observed in Figure~\ref{fig:joy} is known as Joy's law \citep{Hale}. It is interesting to note that using the latent parameter $Z_5$ one could obtain this law without measuring the tilt angle explicitly, and moreover even without the concept of a tilt angle. In our opinion, this is an instructive example of learning useful relationships directly from the data.
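The latitude averaging behind such a plot can be sketched as below. This is a hypothetical sketch: the bin width and the data values are assumptions, not taken from the paper; the tilt inputs could be either explicitly measured angles or the $5.2\,Z_5$ approximation.

```python
# Sketch: bin tilt angles by latitude and average within each bin,
# as one would do to plot the mean tilt angle against latitude.

def mean_tilt_by_latitude(latitudes, tilts, bin_width=5.0):
    """Return {bin_center: mean tilt} for latitude bins of the given width."""
    bins = {}
    for lat, tilt in zip(latitudes, tilts):
        center = (lat // bin_width) * bin_width + bin_width / 2
        bins.setdefault(center, []).append(tilt)
    return {c: sum(v) / len(v) for c, v in bins.items()}
```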
Investigating the next parameters $Z_6$, $Z_7$, \ldots, we also observe that the role of some of them can be explained; however, the explanation becomes more complex than just polarity, area or tilt angle. For example, $Z_9$ defines the ratio between the sizes of the spots in bipolar groups (see the supplementary GitHub repository).
The observed correlation between individual latent parameters and sunspot group properties can be easily improved by using a set of latent parameters and an auxiliary trainable model. Moreover, the importance of specific latent parameters for the reconstruction of sunspot group properties can be investigated.
In more detail, we consider the set of the first $n$ principal components, i.e. $Z_1$, $Z_2$, \ldots, $Z_n$, and for each $n$ from 1 to 285 we train a simple fully-connected neural network model\footnote{The model consists of 3 hidden layers with 128, 64 and 32 neurons with the ELU activation function. The output layer has a single neuron with linear activation. We use the MSE loss function for the regression problems and binary cross-entropy for the classification problem.} to map these components into some sunspot group property (area, elongation, tilt angle and configuration, which we define here as a binary single- vs multi-spot classification problem). To evaluate the trained models, we use the validation set of sunspot groups (30\% of the total dataset size) and compute the $R^2$ score (coefficient of determination) for the regression problems (area, elongation, tilt) and the accuracy score for the classification problem (single- vs multi-spot). The results are shown in Figure~\ref{fig:prediction} (note that the best possible score is 1.0 for both the $R^2$ and accuracy scores). From this figure, we conclude that the first 10 latent components allow the determination of the key sunspot group properties with an accuracy above 0.8, while the first 40 latent components provide an accuracy above 0.9.
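The two evaluation scores used here can be sketched as follows. The probing networks themselves are omitted; any regressor or classifier mapping the first $n$ latent components to a property could be plugged in, and the definitions below are the standard ones the text refers to.

```python
# Sketch of the evaluation metrics: R^2 (coefficient of determination)
# for the regression problems, accuracy for the classification problem.

def r2_score(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot; equals 1.0 for a perfect prediction."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def accuracy_score(y_true, y_pred):
    """Ratio of correctly classified samples."""
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
```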
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{Figure_14.pdf}
\caption{Accuracy of reconstruction of sunspot group properties against the number of principal components (latent parameters) used. For the regression problems (estimation of the area, elongation and tilt) accuracy is defined as the $R^2$ score (coefficient of determination). For the classification problem (single- vs multi-spot) we evaluate the accuracy score (ratio of correctly classified samples).}
\label{fig:prediction}
\end{figure}
Thus, we conclude that the standard sunspot group descriptors are embedded in the latent parameters, and the full latent vector $Z$ of size 285 can be considered as an extended sunspot group description. The ability to reconstruct the initial image from the latent vector shows that this description is almost complete.
In the Appendix we demonstrate an application of the latent vectors $Z$ to the estimation of the complexity of sunspot group structure and to sunspot group classification.
\section{Discussion and conclusions}
We proposed a model for the parametrization of sunspot groups observed in white light. The model provides a set of 285 parameters that almost completely describe the apparent structure of sunspot groups. Although these parameters arise in the latent space as a result of a machine-learning procedure, we find that some of them have a clear physical interpretation. Specifically, we find parameters responsible for the unipolar or bipolar configuration of the sunspot group, the size of the group and its rotation. Thus, one can consider the obtained set of parameters as an extension of the set of standard sunspot group descriptors.
It should be noted that the model can be applied to sunspot observations obtained with any instrument, since the only information required by the model is the contours of sunspot umbrae and penumbrae. We consider the processing of a large set of historical observations in future works, as well as the application to modern data obtained, e.g., from the Solar Dynamics Observatory (SDO, \citet{SDO}).
Now we discuss the relation between the VAE and PCA models and why their joint application is an advantage.
First, we find that when applying the PCA model to the image data directly, one needs many more components to obtain a reconstruction comparable with what we obtain using the joint VAE and PCA models. In more detail, direct application of PCA requires about 1K components, while in our approach we were able to represent sunspot groups with vectors of size 285.
Second, as expected from theory, direct application of PCA to high-dimensional data might be inconsistent. This is manifested in the fact that, after applying small variations to the latent vector, we decode an almost noisy image. In contrast, using the joint VAE and PCA model, we obtain a rather continuous latent space. In our opinion, continuity of the latent space is essential for obtaining interpretable properties.
Thus, we conclude that while the PCA model is attractive for applications, it is useful to reduce the data dimensionality before applying the PCA. We use the VAE model for this purpose.
At the same time, it is interesting to note that the described autoencoder neural network model can be considered as an extension of the ordinary PCA model. Indeed, replacing all non-linear activations in the autoencoder neural network model with linear ones, we reduce it to a linear model. It was shown by \citet{Hornik} that the linear neural network model provides the same latent space as the PCA model. Moreover, \citet{Bao} recently demonstrated that even the basis of principal components in the latent space can be recovered using an appropriate regularization. Our assumption is that using the approach proposed by \citet{Bao} one could exclude the PCA model from the parametrization model.
One more observation is that additional connections between encoding and decoding parts of the autoencoder neural network model (so-called \textit{skip-connections}) produce another useful neural network model, called U-net. This model is widely used for image segmentation and, for example, has been applied for coronal holes detection in solar disk images and synoptic maps \citep{Illarionov_2020}.
We also find that reducing the VAE model to the standard autoencoder (AE) neural network (in other words, setting the tensor $\sigma$ from Figure~\ref{fig:vae} to zero) leads to an increase in the size of the latent vector. By training the AE model and applying the PCA model afterwards, we find that about 750 features are required to explain 95\% of the total variance. In contrast, the VAE+PCA model requires about 285 features to explain the same amount of total variance. Thus, VAE+PCA provides a more compact latent representation than AE+PCA.
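The comparison above rests on counting how many leading principal components are needed to reach a given share of the total variance. A minimal sketch of this count, assuming the per-component explained variances from a fitted PCA are available:

```python
# Sketch: smallest number of leading PCA components whose cumulative share
# of the total variance reaches a target (e.g. 95%), as used to compare the
# AE+PCA (~750 features) and VAE+PCA (~285 features) representations.

def n_components_for_variance(explained_variances, target=0.95):
    """Count leading components (sorted by variance) reaching `target`."""
    total = sum(explained_variances)
    cumulative = 0.0
    for i, v in enumerate(sorted(explained_variances, reverse=True), start=1):
        cumulative += v
        if cumulative / total >= target:
            return i
    return len(explained_variances)
```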
We see several interesting applications of the proposed parametrization model. First, as we demonstrated in the Appendix, one can use the parametrization model to estimate the complexity of sunspot groups. As a next step, it looks natural to investigate the correlation of the latent features and/or the measured sunspot group complexity with other solar events, e.g. solar flares.
One more application is to develop a sunspot classification system based on latent parameters and compare it with standard ones, e.g. the Zurich or Modified Zurich sunspot classification systems. This might reveal to what extent the standard classification systems are conditioned on the data distributions (in other words, to what extent they are explained by the data distributions). The baseline results provided in the Appendix of this research can be helpful for the elaboration of automatic sunspot classification systems.
Finally, the proposed approach looks quite generic and can be applied without modification to the investigation of other traces of solar activity, e.g. prominences. It is also possible to include additional spectral lines in the analysis, e.g. to add one more channel to the input image representing the magnetic field map. Thus it becomes possible to elaborate a data-driven magneto-morphological classification of sunspot groups as well as a data-driven classification of other traces of solar activity.
To facilitate further research, the source code for the model training, as well as the dataset of sunspot groups and the obtained latent vectors, is available in the public GitHub repository \url{https://github.com/observethesun/sunspot_groups}.
\begin{acks}
The authors are grateful to the reviewers for valuable comments and suggestions. EI acknowledges the support of RSF grant 20-72-00106 and Lomonosov-2 supercomputer center at MSU for computing resources.
\end{acks}
\section*{Introduction}
\setcounter{section}{-1}
The goal of this paper is to provide an effective framework for the study of homotopy models of operads. Various models of $\infty$-operads in simplicial sets and in topological spaces
have been introduced in the literature.
The model that we propose in this paper relies on the Mandell model for the homotopy of spaces, which takes place in the category of $E_{\infty}$-algebras (see~\cite{Mandell,MandellIntegral}).
We construct our model for the homotopy of operads within the category of $E_{\infty}$-algebras used by Mandell.
By passing from spaces to $E_{\infty}$-algebras, we have to replace operads by cooperad structures, which are dual to operads in the categorical sense.
In a first step, we define a notion of \emph{strict} Segal $E_{\infty}$-Hopf cooperad, which is close to an $E_{\infty}$-algebra counterpart
of the Cisinski--Moerdijk notion of dendroidal space~\cite{CisinskiMoerdijkI,CisinskiMoerdijkII}.
In a second step, we define a notion of \emph{homotopy} Segal $E_{\infty}$-Hopf cooperad. The idea is to integrate homotopies in the composition schemes
that govern the structure of our objects. This notion of homotopy Segal $E_{\infty}$-Hopf cooperad
is the model that we aim to define and study in the paper.
If we forget about $E_{\infty}$-algebra structures and focus on operads and cooperads defined in a category of differential graded modules,
then we can use the bar duality theory to define notions of homotopy operads and of homotopy cooperads.
The bar duality approach enables authors to apply effective methods of perturbation theory (like the basic perturbation lemma)
for the study of homotopy operads and of homotopy cooperads.
We prove that every homotopy Segal $E_{\infty}$-Hopf cooperad admits a cobar construction, and hence, defines a homotopy cooperad in the classical sense.
We also define a notion of homotopy morphism of homotopy Segal $E_{\infty}$-Hopf cooperads and we prove that every homotopy morphism
induces a morphism on the cobar construction.
Hence, our notion of homotopy Segal $E_{\infty}$-Hopf cooperad provides a lift of the homotopy cooperads
that are defined in terms of the cobar construction when we forget about $E_{\infty}$-algebra structures
and work in a category of differential graded modules.
We implement these ideas as follows.
We work with the (chain) Barratt--Eccles operad, denoted by $\EOp$ hereafter, which defines an $E_\infty$-operad in the category of differential graded modules.
We take the category of algebras over the Barratt--Eccles operad as a model for the category of $E_\infty$-algebras in differential graded modules.
By the main result of~\cite{BergerFresse}, the normalized cochain complex of a simplicial set $\DGN^*(X)$ is endowed with an action of this operad.
The object $\DGN^*(X)$, equipped with this particular $E_\infty$-algebra structure, defines a representative of the Mandell model of the space $X$.
The Barratt--Eccles operad is endowed with a diagonal. This observation implies that the Barratt--Eccles operad
acts on tensor products, and therefore, that the category of algebras over the Barratt--Eccles operad
inherits a monoidal structure from the category of differential graded modules.
But this monoidal structure is only symmetric up to homotopy, because the diagonal of the Barratt--Eccles operad is only homotopy cocommutative.
For this reason, we can hardly define cooperads in the ordinary sense in the category of algebras over the Barratt--Eccles operad.
To work out this problem, a first idea is to define homotopy cooperads in terms of a functor on the category of trees,
which represent the composition schemes of operations in an operad.
We follow this idea to define the notion of a strict Segal $E_\infty$-Hopf cooperad.
We explicitly define a strict Segal $E_\infty$-Hopf cooperad as a functor from the category of trees to the category of $E_\infty$-algebras
equipped with facet operators that model subtree inclusions. The morphisms of $E_\infty$-algebras that we associate to the tree morphisms
model the composition structure of our objects. We will therefore refer to these morphisms as the coproduct operators.
We prove that every strict Segal $E_\infty$-Hopf cooperad is weakly-equivalent (quasi-isomorphic) to a strict cooperad in the ordinary sense
when we forget about $E_\infty$-algebra structures and transport our objects
to the category of differential graded modules.
We use a particular feature of the category of algebras over the Barratt--Eccles operad to simplify the definition of this forgetful functor: the coproduct of any collection of objects
in this category is weakly-equivalent to the tensor product (in general, in a category of algebras over an $E_\infty$-operad, such results are only valid for cofibrant objects).
We use a version of the Boardman--Vogt $W$-construction to establish this result.
We already mentioned that the notion of a strict Segal $E_\infty$-Hopf cooperad is close to Cisinski--Moerdijk's notion of a dendroidal space~\cite{CisinskiMoerdijkI,CisinskiMoerdijkII}.
(We also refer to~\cite{LeGrignou} for the definition of an analogous notion of homotopy operad in the differential graded module context).
The facet operators of our definition actually correspond to the outer facets of dendroidal spaces, while the coproduct operators correspond to the inner facets.
The main difference lies in the fact that we do not take a counterpart of operadic units and arity zero terms in our setting.
The paper~\cite{Ching}, about the bar duality of operads in spectra, also involves a notion of quasi-cooperad, which forms an analogue, in the category of spectra, of our strict cooperads.
To define a notion of a homotopy cooperad, an idea is to replace the category of trees by a resolution of this category (actually, a form of the Boardman--Vogt construction).
We then replace our functors on trees by homotopy functors in order to change the functoriality relation,
which models the associativity of the coproduct operators,
into a homotopy relation. The resolution of the category of trees has a cubical structure. We actually define our homotopy functor structure
by taking a cubical functor on this category, by using a cubical enrichment
of the category of $E_\infty$-algebras.
In the context of algebras over the Barratt--Eccles operad, this cubical enrichment can be defined by using tensor products $A\otimes I^k$, $k\geq 0$,
where $I^k$ represents the cellular cochain algebra of the $k$-dimensional cube $[0,1]^k$.
We equivalently have $I^k = \DGN^*(\Delta^1)^{\otimes k}$, where we consider the $k$-fold tensor product of the normalized cochain complex of the one-simplex $\Delta^1$.
The cubical functor structure that models the composition structure of our homotopy Segal cooperads
can then be defined explicitly, in terms of homotopy coproduct operators
associated to composable sequences of tree morphisms
and with values in tensor products with these cubical cochain algebras $I^k$, $k\geq 0$.
This is exactly what we do in the paper to get our model of homotopy Segal $E_\infty$-Hopf cooperads.
To carry out this construction, we crucially use that the objects $I^k$ are endowed with an action of the Barratt--Eccles operad and are equipped with compatible connection operators,
which we associate to certain degeneracies in the category of trees.
Note that in comparison to other constructions based on dendroidal objects (see for instance the study of homotopy operads in differential graded modules
of~\cite{LeGrignou}),
we keep strict associativity relations for the facet operators corresponding to subtree inclusions.
To conclude the paper, we also prove that every homotopy Segal $E_\infty$-Hopf cooperad is weakly-equivalent to
a strict Segal $E_\infty$-Hopf cooperad.
Previously, we mentioned that we use a comparison between coproducts and tensor products
to define a forgetful functor from Segal $E_\infty$-Hopf cooperads
to Segal cooperads in differential graded modules.
However, the map that provides this comparison, which is an instance of an Alexander--Whitney diagonal, is not symmetric.
For this reason, we consider shuffle cooperads (in the sense of~\cite{DotsenkoKhoroshkin})
rather than symmetric cooperads
when we pass to Segal cooperads in differential graded modules.
Briefly recall that a shuffle (co)operad is a structure that retains the symmetries of the composition schemes of operations in a (co)operad,
but forgets about the internal symmetric structure of (co)operads.
The category of shuffle operads and the dual category of shuffle cooperads were introduced by Vladimir Dotsenko and Anton Khoroshkin in~\cite{DotsenkoKhoroshkin},
with motivations coming from the work of Eric Hoffbeck~\cite{Hoffbeck},
in order to define an operadic counterpart of the classical notion of a Gr\"obner basis.
This theory provides an effective approach for the study of the homotopy of the bar construction of operads
(in connection with the Koszul duality theory of Ginzburg--Kapranov~\cite{GinzburgKapranov}),
because one can observe that the (co)bar complex of a (co)operad
only depends on the shuffle (co)operad structure
of our object (when we forget the symmetric group actions).
\medskip
We give brief recollections on the tree structures and on the conventions on trees that we use all along this paper in a preliminary section.
We also briefly review the definition of cooperads in terms of functors defined on trees in this section.
We study the strict Segal cooperad model afterwards, in Section~\ref{sec:strict-segal-cooperads}.
Then we explain our definition of homotopy Segal $E_{\infty}$-Hopf cooperads. We address the study of this notion in Section~\ref{sec:homotopy-segal-cooperads}.
We devote an appendix to brief recollections on the definition of the Barratt--Eccles operad and to the proof of the crucial statements
on algebras over this operad that we use in the definition of homotopy Segal $E_{\infty}$-Hopf cooperads (the weak-equivalence
between coproducts and tensor products, and the compatibility of connections with the algebra structure
of cubical complexes).
\medskip
We work in a category of differential graded modules over an arbitrary ground ring $\kk$ all along this paper, where a differential graded module (a dg module for short)
generally denotes a $\kk$-module $M$ equipped with a decomposition of the form $M = \bigoplus_{*\in\ZZ} M_*$
and with a differential $\delta: M\rightarrow M$ such that $\delta(M_*)\subset M_{*-1}$.
We therefore assume that our dg modules are equipped with a lower grading in general, but we may also consider dg modules
that are naturally equipped with an upper grading $M = \bigoplus_{*\in\ZZ} M^*$
and with a differential such that $\delta(M^*)\subset M^{*+1}$.
We then use the classical equivalence $M_* = M^{-*}$ to convert the upper grading on $M$
into a lower grading.
We equip the category of dg modules with the standard symmetric monoidal structure, given by the tensor product of dg modules,
with a sign in the definition of the symmetry operator that reflects the usual commutation rule
of differential graded algebra.
We call weak-equivalences the quasi-isomorphisms of dg modules and we transfer this class of weak-equivalences
to every category of structured objects (algebras, cooperads)
that we may form within the category of dg modules.
We take the category of algebras over the chain Barratt--Eccles operad, denoted by $\EOp$, as a model for the category of $E_\infty$-algebras in dg modules.
We use the notation $\EAlg$ for this category of dg algebras. We also adopt the notation $\vee$ for the coproduct in $\EAlg$.
We refer to the appendix~(\S\ref{sec:Barratt-Eccles-operad}) for detailed recollections on the definition and properties of the chain Barratt--Eccles operad,
and for a study of the properties of the coproduct, notably the existence of a weak-equivalence $\EM: A\otimes B\xrightarrow{\sim}A\vee B$
that we use in our constructions.
We also use that the normalized cochain complex of a simplicial set $\DGN^*(X)$ inherits the structure of an $\EOp$-algebra.
We refer to~\cite{BergerFresse} for the definition of this $\EOp$-algebra structure. (We give brief recollections on this subject in~\S\ref{sec:Barratt-Eccles-operad}.)
We also adopt the notation $\Sigma_r$ for the symmetric group on $r$ letters
all along the paper.
\thanks{The authors acknowledge support from the Labex CEMPI (ANR-11-LABX-0007-01) and from the FNS-ANR project OCHoTop (ANR-18CE93-0002-01).}
\section{Background}\label{section:background}
The first purpose of this section is to briefly explain the conventions on trees that we use all along the paper.
We also briefly recall the definition of the notion of a shuffle cooperad, which is intermediate between the notion of a non-symmetric cooperad
and the notion of a symmetric cooperad.
The idea of shuffle cooperads, as we already explained in the introduction of the paper, is to retain the symmetries of the composition schemes of cooperads (based on trees),
but to forget about the internal symmetric structure
of our objects.
This construction is possible for cooperads with no term in arity zero, because the trees
that we consider in this case
have a natural planar embedding (and as such, have trivial automorphism groups).
In what follows, we mainly consider shuffle cooperads (rather than shuffle operads). Our main interest for this notion lies in the observation
that the category of shuffle cooperads is endowed with a cobar complex functor, which is the same as the cobar complex functor on the category of cooperads
when we forget about the internal symmetric structure
of our objects.
\subsection{Recollections and conventions on the categories of trees}\label{subsection:trees}
In general, we follow the conventions of the book~\cite[Appendix A]{FresseBook} for the definitions that concern the categories of trees.
To summarize, we consider the categories, denoted by $\Tree(r)$ in~\cite[\S A.1]{FresseBook}, whose objects are trees $\ttree$
with $r$ ingoing edges $e_1,\dots,e_r$ numbered from $1$ to $r$ (the leaves), one outgoing edge $e_0$ (the root),
and where each inner edge $e$ is oriented from a source vertex $s(e)$ to a target vertex $t(e)$
so that each vertex $v$ has one outgoing edge and at least one ingoing edge.
The morphisms of $\Tree(r)$ are composed of isomorphisms and of edge contractions,
where we assume that the isomorphisms preserve the numbering of the ingoing edges.
The assumption that each vertex in a tree has at least one ingoing edge implies that our trees have a trivial automorphism group (see~\cite[\S A.1.8]{FresseBook}).
In what follows, we use this observation to simplify our constructions.
Namely, we can forget about isomorphisms by picking a representative in each isomorphism class of trees
and we follow this convention all along the paper.
The set of vertices of a tree $\ttree\in\Tree(r)$ is denoted by $V(\ttree)$
whereas the set of edges is denoted by $E(\ttree)$.
The set of inner edges (the edges which are neither a leaf nor a root) is denoted by $\mathring{E}(\ttree)$.
For a vertex $v\in V(\ttree)$, we use the notation $\rset_v$
for the set of edges $e$ such that $t(e) = v$.
The symmetric group $\Sigma_r$ acts on the category of $r$-trees $\Tree(r)$ by renumbering the ingoing edges.
In what follows, we also consider a version of the category of trees $\Tree(\rset)$ where the ingoing edges of the trees
are indexed by an arbitrary finite set $\rset$,
and not necessarily by an ordinal, so that the mapping $\rset\mapsto\Tree(\rset)$
defines a functor from the category of finite sets and bijections between them
to the category of small categories.
The categories of trees are endowed with composition operations $\circ_i: \Tree(k)\times\Tree(l)\rightarrow\Tree(k+l-1)$,
which provide the collection $\Tree(r)$, $r>0$,
with the structure of an operad in the category of categories.
These composition operations have a unit, the $1$-tree $\downarrow\in\Tree(1)$ with a single edge which is both the root and a leaf,
but we put this tree aside actually and we do not consider it in our forthcoming constructions.
Recall that we call $r$-corolla the $r$-tree $\ytree\in\Tree(r)$ with a single vertex $v$, one outgoing edge with this vertex $v$ as a source,
and $r$ ingoing edges targeting to $v$. We also consider corollas $\ytree\in\Tree(\rset)$
whose sets of ingoing edges are indexed by arbitrary finite sets $\rset$.
To each vertex $v$ in a tree $\ttree$, we can associate an $\rset_v$-corolla $\ytree_v\subset\ttree$,
with $v$ as vertex, the ingoing edges of this vertex in $\ttree$
as set of ingoing edges, and the outgoing edge
of $v$ as root.
The existence of a tree morphism $f: \ttree\rightarrow\stree$ is equivalent to the existence of a decomposition $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$,
where $\lambda_{\stree}$ denotes a treewise operadic composition operation, shaped on the tree $\stree$,
of subtrees $\sigmatree_v\subset\ttree$, $v\in V(\stree)$,
that represent the pre-image of the corollas $\ytree_v\subset\stree$, $v\in V(\stree)$,
under our morphism.
For a tree with two vertices $\gammatree$, the composition $\ttree = \lambda_{\gammatree}(\sigmatree_u,\sigmatree_v)$
is equivalent to an operadic composition operation $\ttree = \sh_*(\sigmatree_u\circ_i\sigmatree_v)$,
where $\sh_*$ denotes the action of a permutation,
associated to a partition of the form $\{1<\dots<r\} = \{i_1<\dots<\widehat{i_p}<\dots<i_k\}\amalg\{j_1<\dots<j_l\}$,
which reflects the indexing of the leaves in the tree $\gammatree$.
The index $i_p$ is a dummy composition variable which we associate to the inner edge of this tree $\gammatree$.
We can insert this dummy variable at the position such that $i_{p-1}<j_1<i_{p+1}$ inside the ordered set $\{i_1<\dots<\widehat{i_p}<\dots<i_k\}\subset\{1<\dots<r\}$
in order to work out the symmetries of this operation (we go back to this topic in the next paragraph).
Each $r$-tree $\ttree\in\Tree(r)$ has a natural planar embedding, which we determine as follows.
Let $\{e_{\alpha},\alpha\in\rset_v\}$ be the set of ingoing edges of a vertex $v\in V(\ttree)$.
Each edge $e_{\alpha}$ can be connected to a leaf $e_{i_{\alpha}}$
through a chain of edges $e_{\alpha} = e_{\alpha_0}$, $e_{\alpha_1}$, \dots, $e_{\alpha_l} = e_{i_{\alpha}}$,
such that $t(e_{\alpha_k}) = s(e_{\alpha_{k-1}})$, for $k = 1,\dots,l$.
Let $m_{\alpha}\in\{1,\dots,r\}$ be the minimum of the indices $i_{\alpha}$ of these leaves $e_{i_{\alpha}}$
that lie over the edge $e_{\alpha}$
in the tree $\ttree$. We order the set of ingoing edges $e_{\alpha}$, $\alpha\in\rset_v$, of our vertex $v$ by taking $e_{\alpha}<e_{\beta}$
when $m_{\alpha}<m_{\beta}$. We perform this ordering of the set of ingoing edges for each vertex $v\in V(\ttree)$
to get the planar embedding of our tree. We have an obvious generalization of this result for the trees $\ttree\in\Tree(\rset)$
whose ingoing edges are indexed by a set $\rset$
equipped with a total ordering.
The existence of this natural planar embedding reflects the fact that the automorphism group of any object $\ttree$
is trivial in our categories of trees $\Tree(\rset)$.
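To illustrate this ordering process on a simple example, suppose that a vertex $v\in V(\ttree)$ has two ingoing edges $e_{\alpha}$ and $e_{\beta}$, such that the leaves lying over $e_{\alpha}$ in the tree $\ttree$ are indexed by $\{2,5\}$, while the leaves lying over $e_{\beta}$ are indexed by $\{1,3\}$. We then have
\begin{equation*}
m_{\alpha} = \min\{2,5\} = 2\quad\text{and}\quad m_{\beta} = \min\{1,3\} = 1,
\end{equation*}
so that we take $e_{\beta}<e_{\alpha}$ in the planar embedding of our tree.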
\subsection{Recollections on the treewise definition of cooperads and of shuffle cooperads}\label{subsection:shuffle-cooperads}
Throughout the paper, we mainly use a definition of cooperads in terms of collections endowed with treewise composition coproducts.
We refer to \cite[Appendix C]{FresseBook} or \cite[Section 5.6]{LodayValletteBook}, for instance, for a detailed account of this combinatorial approach to the definition of a cooperad.
In the paper, we more precisely use that a cooperad $\COp$ is equivalent to a collection of contravariant functors on the categories of trees $\COp: \ttree\mapsto\COp(\ttree)$
such that $\COp(\ttree) = \bigotimes_{v\in V(\ttree)}\COp(\ytree_v)$,
for all $\ttree\in\Tree(r)$
and where the morphism $\rho_{\ttree\rightarrow\stree}: \COp(\stree)\rightarrow\COp(\ttree)$ induced by a tree morphism $f: \ttree\rightarrow\stree$
also admits a decomposition of the form $\rho_{\ttree\rightarrow\stree} = \bigotimes_{v\in V(\stree)}\rho_{\sigmatree_v\rightarrow\ytree_v}$
when we use the relation $\ttree\simeq\lambda_{\stree}(\sigmatree_v,v\in V(\stree))$.
In the standard definition, the structure of a cooperad is rather expressed in terms of the coproduct operations $\rho_{\sigmatree_v\rightarrow\ytree_v}$,
which generate the general operators $\rho_{\ttree\rightarrow\stree}$
associated to the tree morphisms $f: \ttree\rightarrow\stree$. (We review this reduction of the definition later on in this paragraph.)
The consideration of general coproduct operators $\rho_{\ttree\rightarrow\stree}$ in the definition of a cooperad
is motivated by the definition of the category of Segal cooperads
in the next section.
In the definition of a symmetric cooperad, we assume, in addition, that the symmetric group $\Sigma_r$ acts on the collection $\COp(\ttree)$
in the sense that a natural transformation $s: \COp(\ttree)\rightarrow\COp(s\ttree)$, $\ttree\in\Tree(r)$,
is associated to each permutation $s\in\Sigma_r$,
where $\ttree\mapsto s\ttree$
denotes the action of this permutation on the category of trees $\Tree(r)$.
Then we require that the decomposition $\COp(\ttree) = \bigotimes_{v\in V(\ttree)}\COp(\ytree_v)$ is, in some natural sense, preserved by the action of the symmetric groups.
Note that we can again extend the definition of the functor underlying a cooperad $\COp$ to the categories of trees $\Tree(\rset)$ whose leaves are indexed by arbitrary finite sets $\rset$.
We then consider an action of the bijections of finite sets to extend the action of the permutations on ordinals.
Recall that $\ytree_v$ denotes the corolla generated by a vertex $v$ in a tree $\ttree$.
The object $\COp(\ytree)$ associated to a corolla $\ytree$ only depends on the number of leaves of the corolla (since all corollas with $r$ leaves are canonically isomorphic).
Thus the decomposition relations of the above definition imply that our functor is fully determined by a sequence of objects $\COp(r)$, $r>0$,
equipped with an action of the symmetric groups $\Sigma_r$, so that $\COp(\ytree) = \COp(r)$ for a corolla with $r$ leaves,
together with coproduct operations $\COp(r) = \COp(\ytree)\rightarrow\COp(\ttree)$,
which we associate to the tree morphisms with values in a corolla $\ttree\rightarrow\ytree$.
Furthermore, these coproduct operations can be generated by coproduct operations with values in a term $\COp(\gammatree)$
such that $\gammatree$ is a tree with two vertices,
because every tree morphism $\ttree\rightarrow\stree$ can be decomposed into a sequence of edge contractions,
which are equivalent to the application of tree morphisms of the form $\gammatree\rightarrow\ytree$
inside the tree $\ttree$.
In this equivalence, we can still consider a collection $\COp(\rset)$ indexed by arbitrary finite sets $\rset$
and take $\COp(\ytree) = \COp(\rset)$ for a corolla $\ytree$ whose set of leaves
is indexed by a finite set $\rset$.
Note that such a consideration is necessary in the expression of the decomposition $\COp(\ttree) = \bigotimes_{v\in V(\ttree)}\COp(\ytree_v)$,
because we then take an arbitrary set to index the edges of the corollas $\ytree_v$,
which correspond to the ingoing edges of the vertices of our tree (but we go back to this observation in the next paragraph).
If we unravel the construction, then we get that the $2$-fold coproducts $\COp(\rset) = \COp(\ytree)\rightarrow\COp(\gammatree)$ are equivalent to coproduct operations
of the form $\circ_{i_p}^*: \COp(\rset)\rightarrow\COp(\rset_u)\otimes\COp(\rset_v)$
with $\rset_u = \{i_1,\dots,i_p,\dots,i_k\}$
and $\rset_v = \{j_1,\dots,j_l\}$ such that $\{i_1,\dots,\widehat{i_p},\dots,i_k\}\amalg\{j_1,\dots,j_l\} = \rset$.
(We then retrieve the dual of the classical partial product operations associated to an operad.)
Recall that we assume by convention that the vertices of our trees have at least one ingoing edge.
(For this reason, we assume that the sequence of objects $\COp(r)$, which underlies an operad, is indexed by positive integers $r>0$.)
This convention enables us to order the ingoing edges of each vertex in a tree whose leaves are indexed by an ordinal $\rset = \{1<\dots<r\}$,
and as a consequence,
to get rid of the actions of the symmetric groups
in the construction
of the tensor product $\COp(\ttree) = \bigotimes_{v\in V(\ttree)}\COp(\ytree_v)$ (since we can use such a canonical ordering of the edges of the corolla $\ytree_v$
to fix a bijection between the indexing set of this set of edges $\rset_v$
and an ordinal $\{1<\dots<r_v\}$).
The idea, explained in~\cite{FressePartitions}, is to order the ingoing edges according to the minimum
of the index of the leaves of the subtree
that lie over each edge.
This observation is used to define the notion of a shuffle cooperad. Indeed, a shuffle cooperad explicitly consists of a collection of contravariant functors
on the categories of trees $\COp: \ttree\mapsto\COp(\ttree)$ with the same operations and structure properties
as the classical symmetric cooperads, but where we forget about the actions
of permutations. If we express the definition in terms of $2$-fold coproducts, then we get that a shuffle cooperad consists of a collection $\COp(r)$, $r>0$,
equipped with coproducts $\circ_{i_p}^*: \COp(\{1<\dots<r\})\rightarrow\COp(\{i_1<\dots<i_p<\dots<i_k\})\otimes\COp(\{j_1<\dots<j_l\})$
associated to partitions $\{i_1<\dots<\widehat{i_p}<\dots<i_k\}\amalg\{j_1<\dots<j_l\} = \{1<\dots<r\}$
such that $i_{p-1}<j_1<i_{p+1}$ (see~\cite{DotsenkoKhoroshkin}).
These partitions are equivalent to the pointed shuffles of~\cite{Hoffbeck}.
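To give a simple illustration of this definition, we can take $r = 4$ and consider the partition $\{1<4\}\amalg\{2<3\} = \{1<\dots<4\}$, where the dummy variable $i_p$ is inserted between $i_{p-1} = 1$ and $i_{p+1} = 4$, in accordance with the condition $1<j_1 = 2<4$. We then get a coproduct of the form
\begin{equation*}
\circ_{i_p}^*: \COp(\{1<2<3<4\})\rightarrow\COp(\{1<i_p<4\})\otimes\COp(\{2<3\}),
\end{equation*}
which corresponds, dually, to a partial composition at the second input in the corresponding shuffle operad.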
Note that we can extend the ordering of the ingoing edges of each vertex in a tree to an ordering of the vertices themselves.
We use this observation in our definition of the forgetful functor from the category of Segal cooperads in $E_\infty$-algebras
to the category of Segal cooperads in dg modules.
\subsection{Counits, connected cooperads and local conilpotence}\label{subsection:conilpotence}
In the standard definition of a cooperad, we assume that the coproducts $\circ_{i_p}^*$ satisfy natural counit relations
with respect to a counit morphism which we associate to our objects,
but we forget about this counit morphism and about the counit conditions in the definition
of the previous paragraph.
The cooperads that we consider are actually equivalent to coaugmentation coideals of coaugmented cooperads.
If we start with the standard definition of a cooperad (where we have a counit), then we should take components of the coaugmentation coideal of our cooperad $\COp$
in the definition of the treewise tensor products $\COp(\ttree) = \bigotimes_{v\in V(\ttree)}\COp(\ytree_v)$
and of the treewise coproducts $\rho_{\ttree\rightarrow\stree}: \COp(\stree)\rightarrow\COp(\ttree)$.
In the definition of cooperads, one often has to assume the validity of a local conilpotence condition.
In the treewise formalism, this local conilpotence condition asserts that for every element $x\in\COp(\stree)$
in the component of a cooperad $\COp$ associated to a tree $\stree\in\Tree(r)$, we can pick a non-negative integer $N_x\in\NN$
such that $\sharp V(\ttree)\geq N_x\Rightarrow\rho_{\ttree\rightarrow\stree}(x) = 0$,
for every tree $\ttree\in\Tree(r)$.
This condition ensures that the map $\rho: \COp(\stree)\rightarrow\prod_{\ttree\rightarrow\stree}\COp(\ttree)$
induced by the collection of all coproducts $\rho_{\ttree\rightarrow\stree}: \COp(\stree)\rightarrow\COp(\ttree)$
factors through the sum $\bigoplus_{\ttree\rightarrow\stree}\COp(\ttree)$.
In what follows, we may actually need a stronger connectedness condition, which we define by requiring that the components of our object $\COp(\ttree)$
vanish when the tree $\ttree$ contains at least one vertex with a single ingoing edge. For an ordinary cooperad, this requirement is equivalent to the relation $\COp(1) = 0$.
In general, this connectedness condition implies that our object $\COp$ reduces to a structure given by a collection of functors $\ttree\mapsto\COp(\ttree)$
on the subcategories $\widetilde{\Tree}(r)\subset\Tree(r)$ formed by trees $\ttree$
where all the vertices have at least two ingoing edges (in~\cite[\S A.1.12]{FresseBook}
the terminology `reduced tree' is used for this subcategory of trees). The conilpotence of the cooperad $\COp$
then follows from the observation that, for any given tree $\stree\in\widetilde{\Tree}(r)$,
we have finitely many morphisms $\ttree\rightarrow\stree$
in $\widetilde{\Tree}(r)$.
We say that a cooperad is connected when it satisfies this connectedness requirement $\COp(1) = 0$, or equivalently, when $\ttree\not\in\widetilde{\Tree}(r)\Rightarrow\COp(\ttree) = 0$.
In what follows, we will similarly say that a Segal cooperad $\COp$ is connected when it satisfies the same treewise condition $\ttree\not\in\widetilde{\Tree}(r)\Rightarrow\COp(\ttree) = 0$.
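Let us record the elementary counting argument behind the finiteness claim of this paragraph. If all the vertices of a tree $\ttree\in\widetilde{\Tree}(r)$ have at least two ingoing edges, then an easy induction on the number of vertices gives the bound
\begin{equation*}
\sharp V(\ttree)\leq r-1,
\end{equation*}
so that the category $\widetilde{\Tree}(r)$ contains finitely many objects up to isomorphism, and hence finitely many morphisms $\ttree\rightarrow\stree$ for any fixed tree $\stree\in\widetilde{\Tree}(r)$.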
We mainly use the local conilpotence and the connectedness condition in our study of the $W$-construction of (Segal) shuffle dg cooperads
and in our definition of the cobar construction for homotopy Segal shuffle dg cooperads.
\section{The category of strict Segal $E_\infty$-Hopf cooperads}\label{sec:strict-segal-cooperads}
We study the category of Segal $E_\infty$-Hopf cooperads in this section.
We devote our first subsection to the definition of this category.
We then study strict Segal dg cooperads, which are the structures, defined within the category of dg modules, that we obtain by forgetting the $E_\infty$-algebra structures
attached to the definition of a Segal $E_\infty$-Hopf cooperad.
We also explain the definition of an equivalence between our Segal dg cooperads and ordinary dg cooperads.
We devote the second subsection of this section to these topics.
We study the cobar complex of Segal dg cooperads afterwards, in a third subsection.
We eventually explain a correspondence between operads in simplicial sets and Segal $E_\infty$-Hopf cooperads.
We prove that we can retrieve a completion of operads in simplicial sets
from a corresponding Segal $E_\infty$-Hopf cooperad.
We devote the fourth subsection of the section to this subject.
Recall that we use the notation $\EOp$ for the chain Barratt--Eccles operad and that $\EAlg$ denotes the category of algebras in dg modules associated to this operad.
\subsection{The definition of strict Segal $E_\infty$-Hopf cooperads}\label{subsec:strict-segal-cooperads}
We begin our study by defining the objects of our category of Segal $E_\infty$-Hopf cooperads.
We actually define beforehand a notion of Segal $E_\infty$-Hopf pre-cooperad, which consists of objects equipped with all the operations
that underlie the structure of a Segal $E_\infty$-Hopf cooperad (coproducts and facet operators),
and then we just define a Segal $E_\infty$-Hopf cooperad as a Segal $E_\infty$-Hopf pre-cooperad whose facet operators
satisfy an extra homotopy equivalence condition (the Segal condition).
We make these definitions explicit in the first paragraph of this subsection.
We explain the definition of morphisms of Segal $E_\infty$-Hopf cooperads afterwards in order to complete the objectives of this subsection.
We are guided by the combinatorial definition of cooperads in terms of trees, which we briefly recalled in the overview of~\S\ref{subsection:shuffle-cooperads}.
\begin{defn}\label{definition:strict-E-infinity-cooperad}
We call (strict) Segal $E_\infty$-Hopf shuffle pre-cooperad the structure defined by a collection of $\EOp$-algebras
\begin{equation*}
\AOp(\ttree)\in\EAlg,\quad\text{$\ttree\in\Tree(r)$, $r>0$},
\end{equation*}
equipped with
\begin{itemize}
\item
coproduct operators
\begin{equation*}
\rho_{f: \ttree\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree),
\end{equation*}
defined as morphisms of $\EOp$-algebras, for all tree morphisms $f: \ttree\rightarrow\stree$,
and which satisfy the following standard functoriality constraints $\rho_{\stree\xrightarrow{=}\stree} = \id_{\AOp(\stree)}$
and $\rho_{\ttree\rightarrow\utree}\circ\rho_{\utree\rightarrow\stree} = \rho_{\ttree\rightarrow\stree}$,
for all pairs of composable tree morphisms $\ttree\rightarrow\utree\rightarrow\stree$,
\item
together with facet operators
\begin{equation*}
i_{\sigmatree,\stree}: \AOp(\sigmatree)\rightarrow\AOp(\stree),
\end{equation*}
also defined as morphisms of $\EOp$-algebras, for all subtree embeddings $\sigmatree\subset\stree$,
and which satisfy the following functoriality relations $i_{\stree,\stree} = \id_{\AOp(\stree)}$ and $i_{\thetatree,\stree}\circ i_{\sigmatree,\thetatree} = i_{\sigmatree,\stree}$,
for all $\sigmatree\subset\thetatree\subset\stree$.
\item
We also assume the verification of a compatibility relation between the facet operators and the coproduct operators.
We express this compatibility relation by the commutativity of the following diagram:
\begin{equation*}
\xymatrixcolsep{5pc}\xymatrix{ \AOp(\stree)\ar[r]^-{\rho_f} & \AOp(\ttree) \\
\AOp(\sigmatree)\ar[u]^{i_{\sigmatree,\stree}}\ar@{.>}[r]^-{\rho_{f|_{f^{-1}\sigmatree}}} &
\AOp(f^{-1}\sigmatree), \ar@{.>}[u]_{i_{f^{-1}\sigmatree,\ttree}} }
\end{equation*}
for all $f: \ttree\rightarrow\stree$ and $\sigmatree\subset\stree$, where $f^{-1}(\sigmatree)\subset\ttree$
denotes the subtree such that $V(f^{-1}\sigmatree) = f^{-1}V(\sigmatree)$
and we consider the obvious restricted morphism $f|_{f^{-1}\sigmatree}: f^{-1}\sigmatree\rightarrow\sigmatree$.
\end{itemize}
We say that a Segal $E_\infty$-Hopf shuffle pre-cooperad $\AOp$ is a Segal $E_\infty$-Hopf shuffle cooperad when it satisfies the following extra condition (the Segal condition):
\begin{enumerate}
\item[(*)]
The facet operators $i_{\sigmatree_v,\ttree}: \AOp(\sigmatree_v)\rightarrow\AOp(\ttree)$
associated to a tree decomposition $\ttree = \lambda_{\stree}(\sigmatree_v, v\in V(\stree))$
induce a weak-equivalence
\begin{equation*}
i_{\lambda_{\stree}(\sigmatree_*)}: \bigvee_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{\sim}\AOp(\ttree)
\end{equation*}
when we pass to the coproduct of the objects $\AOp(\sigmatree_v)$
in the category of $\EOp$-algebras. We refer to this weak-equivalence $i_{\lambda_{\stree}(\sigmatree_*)}$ as the Segal map
associated to the decomposition $\ttree = \lambda_{\stree}(\sigmatree_v, v\in V(\stree))$.
\end{enumerate}
We finally define a Segal $E_\infty$-Hopf symmetric (pre-)cooperad as a Segal $E_\infty$-Hopf shuffle (pre-)cooperad $\AOp$
equipped with an action of the permutations, given by operators $s^*: \AOp(s\ttree)\rightarrow\AOp(\ttree)$, for $s\in\Sigma_r$ and $\ttree\in\Tree(r)$,
which intertwine the facet and coproduct operators attached to our object.
\end{defn}
We have the following statement, which enables us to reduce the verification of the Segal condition to particular tree decompositions.
\begin{prop}\label{proposition:Segal-condition}
For any Segal $E_\infty$-Hopf shuffle pre-cooperad $\AOp$, we have an equivalence between the following statements:
\begin{enumerate}
\item The Segal condition holds for all tree decompositions $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$.
\item The Segal condition holds for all tree decompositions of the form $\ttree = \lambda_{\gammatree}(\sigmatree_u,\sigmatree_v) = \sigmatree_u\circ_i\sigmatree_v$,
where we take an operadic composition along a tree with two vertices $\gammatree$, which is equivalent to performing an operadic composition product $\sigmatree_u\circ_i\sigmatree_v$
of a pair of trees $\sigmatree_u,\sigmatree_v\subset\ttree$ (we abusively omit the action of the shuffle permutation
that we associate to general composition operations of this form, see~\S\ref{subsection:trees}).
\item The Segal condition holds for all decompositions of trees into corollas $\ttree = \lambda_{\ttree}(\ytree_v,v\in V(\ttree))$.\qed
\end{enumerate}
\end{prop}
We now define morphisms of strict Segal $E_\infty$-Hopf cooperads.
\begin{defn}\label{definition:E-infinity-cooperad-morphism}
A morphism of Segal $E_\infty$-Hopf shuffle (pre-)cooperads $\phi: \AOp\rightarrow\BOp$ is a collection of $\EOp$-algebra morphisms $\phi_{\ttree}: \AOp(\ttree)\rightarrow\BOp(\ttree)$, $\ttree\in\Tree(r)$, $r>0$,
which preserve the action of the facets and coproduct operators on our objects
in the sense that:
\begin{enumerate}
\item
the diagram
\begin{equation*}
\xymatrix{ \AOp(\stree)\ar[r]^{\phi_{\stree}}\ar[d]_{\rho_{\ttree\rightarrow\stree}} & \BOp(\stree)\ar[d]^{\rho_{\ttree\rightarrow\stree}} \\
\AOp(\ttree)\ar[r]^{\phi_{\ttree}} & \BOp(\ttree) }
\end{equation*}
commutes for all tree morphisms $\ttree\rightarrow\stree$,
\item
the diagram
\begin{equation*}
\xymatrix{ \AOp(\stree)\ar[r]^{\phi_{\stree}} & \BOp(\stree) \\
\AOp(\sigmatree)\ar[r]^{\phi_{\sigmatree}}\ar[u]^{i_{\sigmatree,\stree}} &
\BOp(\sigmatree)\ar[u]_{i_{\sigmatree,\stree}} }
\end{equation*}
commutes for all subtree embeddings $\sigmatree\subset\stree$.
\end{enumerate}
If $\AOp$ and $\BOp$ are Segal $E_\infty$-Hopf symmetric (pre-)cooperads, then $\phi: \AOp\rightarrow\BOp$ is a morphism of Segal $E_\infty$-Hopf symmetric (pre-)cooperads
when $\phi$ preserves the action of permutations on our objects in the sense that
\begin{enumerate}\setcounter{enumi}{2}
\item
the diagram
\begin{equation*}
\xymatrix{ \AOp(s\ttree)\ar[r]^{\phi_{s\ttree}}\ar[d]_{s^*} & \BOp(s\ttree)\ar[d]^{s^*} \\
\AOp(\ttree)\ar[r]^{\phi_{\ttree}} & \BOp(\ttree) }
\end{equation*}
commutes, for all $s\in\Sigma_r$ and $\ttree\in\Tree(r)$.
\end{enumerate}
\end{defn}
The morphisms of Segal $E_\infty$-Hopf shuffle (pre-)cooperads can obviously be composed,
as well as the morphisms of Segal $E_\infty$-Hopf symmetric (pre-)cooperads,
so that we can form a category of Segal $E_\infty$-Hopf shuffle (pre-)cooperads and a category of Segal $E_\infty$-Hopf symmetric (pre-)cooperads.
In what follows, we adopt the notation $\EOp\Hopf\sh\SegOp^c$ for the category of Segal $E_\infty$-Hopf shuffle cooperads
and the notation $\EOp\Hopf\Sigma\SegOp^c$ for the category of Segal $E_\infty$-Hopf symmetric cooperads.
\subsection{The forgetting of $E_\infty$-structures}\label{subsection:forgetful-strict}
To any Segal $E_\infty$-Hopf cooperad $\AOp$, we can associate a Segal cooperad in dg modules by forgetting the $E_\infty$-algebra structure attached to each object $\AOp(\ttree)$.
We examine this construction in this subsection.
We need to assume that the vertices of our trees are totally ordered in order to make the construction
of the forgetful functor from Segal $E_\infty$-Hopf cooperads to Segal dg cooperads
work.
For this reason, we restrict ourselves to Segal shuffle cooperads throughout this subsection, though our definition of Segal dg cooperad
makes sense in the symmetric setting.
\begin{defn}\label{definition:tree-shaped-cooperad}
We call Segal shuffle dg pre-cooperad the structure defined by a collection of dg modules
\begin{equation*}
\AOp(\ttree)\in\dg\Mod,\quad\text{$\ttree\in\Tree(r)$, $r>0$},
\end{equation*}
equipped with
\begin{itemize}
\item
coproduct operators
\begin{equation*}
\rho_{f: \ttree\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree),
\end{equation*}
defined as morphisms of dg modules, for all tree morphisms $f: \ttree\rightarrow\stree$, and which satisfy the same standard functoriality constraints
as in the case of Segal $E_\infty$-Hopf shuffle cooperads,
\item
together with Segal maps
\begin{equation*}
i_{\lambda_{\stree}(\sigmatree_*)}: \bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v)\rightarrow\AOp(\ttree),
\end{equation*}
defined as morphisms of dg modules, for all tree decompositions $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$,
and such that
for the trivial decomposition $\ttree = \lambda_{\ytree}(\ttree)$, we have $i_{\lambda_{\ytree}(\ttree)} = \id_{\AOp(\ttree)}$,
while for nested decompositions $\ttree = \lambda_{\utree}(\thetatree_u,u\in V(\utree))$ and $\thetatree_u = \lambda_{\stree_u}(\sigmatree_v,v\in V(\stree_u))$, $u\in V(\utree)$,
we have
\begin{equation*}
i_{\lambda_{\utree}(\thetatree_*)}\circ(\bigotimes_{u\in V(\utree)}i_{\lambda_{\stree_u}(\sigmatree_*)}) = i_{\lambda_{\stree}(\sigmatree_*)},
\end{equation*}
where we consider the composite decomposition $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$
with $\stree = \lambda_{\utree}(\stree_u,u\in V(\utree))$.
\item
We still assume the verification of a compatibility relation between the Segal maps and the coproduct operators.
We express this dg module version of the compatibility relation by the commutativity of the following diagram:
\begin{equation*}
\xymatrixcolsep{7pc}\xymatrix{ \AOp(\stree)\ar[r]^-{\rho_f} & \AOp(\ttree) \\
\bigotimes_{u\in V(\utree)}\AOp(\sigmatree_u)\ar[r]_-{\bigotimes_{u\in V(\utree)}\rho_{f|_{f^{-1}\sigmatree_u}}}
\ar[u]^{i_{\lambda_{\utree}(\sigmatree_*)}} &
\bigotimes_{u\in V(\utree)}\AOp(f^{-1}\sigmatree_u)
\ar[u]_{i_{\lambda_{\utree}(f^{-1}\sigmatree_*)}} }
\end{equation*}
for all tree morphisms $f: \ttree\rightarrow\stree$ and decompositions $\stree = \lambda_{\utree}(\sigmatree_u,u\in V(\utree))$,
where we consider the pre-images $f^{-1}\sigmatree_u\subset\ttree$
of the subtrees $\sigmatree_u\subset\stree$.
\end{itemize}
We then say that a Segal shuffle dg pre-cooperad $\AOp$ is a Segal shuffle dg cooperad when the following Segal condition holds:
\begin{enumerate}
\item[(*)]
The Segal map $i_{\lambda_{\stree}(\sigmatree_*)}$ is a weak-equivalence
\begin{equation*}
i_{\lambda_{\stree}(\sigmatree_*)}: \bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{\sim}\AOp(\ttree),
\end{equation*}
for every decomposition $\ttree = \lambda_{\stree}(\sigmatree_v, v\in V(\stree))$.
\end{enumerate}
We still define a morphism of Segal shuffle dg (pre-)cooperads $\phi: \AOp\rightarrow\BOp$
as a collection of dg module morphisms $\phi_{\ttree}: \AOp(\ttree)\rightarrow\BOp(\ttree) $
that preserve the coproduct operators $\rho_{\ttree\rightarrow\stree}$
and the Segal maps $i_{\lambda_{\stree}(\sigmatree_*)}$
in the obvious sense.
We use the notation $\dg\sh\SegOp^c$ for the category of Segal shuffle dg cooperads, which we equip with this notion of morphism.
\end{defn}
The forgetful functor from the category of strict Segal $E_\infty$-Hopf shuffle cooperads to the category of Segal shuffle dg cooperads
essentially ignores the $E_\infty$-structures
attached to our objects.
However, Definition~\ref{definition:strict-E-infinity-cooperad} uses the coproduct of $\EOp$-algebras $\vee$,
while Definition~\ref{definition:tree-shaped-cooperad} uses the tensor product $\otimes$.
To pass from one to the other, we need to use the natural transformation $\EM$ described in Construction~\ref{constr:Barratt-Eccles-diagonal}.
\begin{prop}\label{proposition:forgetful-strict}
Let $\AOp$ be a strict Segal $E_\infty$-Hopf shuffle cooperad, with coproduct operators $\rho_{\ttree\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)$
and facet operators $i_{\sigmatree,\stree}: \AOp(\sigmatree)\rightarrow\AOp(\stree)$.
The collection $\AOp(\ttree)$, $\ttree\in\Tree(r)$, equipped with the coproduct operators inherited from $\AOp$ and the Segal maps
given by the composites
\begin{equation*}
\bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{\EM}\bigvee_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{i_{\lambda_{\stree}(\sigmatree_*)}}\AOp(\ttree),
\end{equation*}
for all tree decompositions $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$, is a Segal shuffle dg cooperad.
\end{prop}
\begin{proof}
We easily deduce from the associativity of the facet operators in the definition of a Segal $E_\infty$-Hopf shuffle cooperad~(\S\ref{definition:strict-E-infinity-cooperad})
that the Segal maps of $\EOp$-algebras $i_{\lambda_{\stree}(\sigmatree_*)}: \bigvee_{v\in V(\stree)}\AOp(\sigmatree_v)\rightarrow\AOp(\ttree)$
satisfy natural associativity relations, which parallel the associativity relations of the Segal maps
of Segal shuffle cooperads in dg modules. (By the universal properties of coproducts, it suffices to verify such a relation on a single summand of our coproducts.)
We use the associativity of the transformation $\EM$ to pass from this associativity relation on coproducts
to the associativity relation on tensor products
which is required in the definition of Segal shuffle dg cooperad.
We eventually deduce from the result of Proposition~\ref{claim:Barratt-Eccles-algebra-coproducts}
that the dg cooperad version of the Segal condition for $\AOp$
is equivalent to the Segal condition of strict Segal $E_\infty$-Hopf cooperads.
\end{proof}
\begin{remark}\label{remark:order-trees}
Note that the definition of the Segal map in the construction of this proposition
requires picking an order on the vertices of the tree $\stree$,
since the transformation $\EM$ is not commutative.
\end{remark}
We immediately see that an ordinary shuffle dg cooperad, in the sense of the definition of~\S\ref{subsection:shuffle-cooperads}, is equivalent to a Segal shuffle dg cooperad
where the Segal maps define isomorphisms $i_{\lambda_{\stree}(\sigmatree_*)}: \bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{\simeq}\AOp(\ttree)$.
We aim to prove that every Segal shuffle dg cooperad is weakly-equivalent to such an ordinary shuffle dg cooperad.
We then need to assume that our Segal shuffle dg cooperad satisfies the connectedness condition of~\S\ref{subsection:conilpotence}.
Namely, we have to assume that $\AOp(\ttree) = 0$ when the tree $\ttree$ is not reduced (contains at least one vertex with a single ingoing edge).
Recall that we say that $\AOp$ is connected when it satisfies this condition.
We use a version of the Boardman--Vogt $W$-construction in order to establish the existence of our equivalences.
The classical Boardman--Vogt construction (see~\cite{BergerMoerdijkResolution,BoardmanVogt})
is defined for ordinary operads (actually for algebraic theories in the original reference~\cite{BoardmanVogt}).
We therefore have to dualize the classical definition in order to deal with cooperads (rather than with operads)
and we have to extend the construction to the context of Segal dg cooperads.
We explain the definition of this $W$-construction of Segal dg cooperads with full details in the next paragraphs.
In a first step, we explain the definition of a covariant functor of cubical cochains on the category of trees.
We will pair this functor with the contravariant functor underlying a Segal shuffle dg cooperad to define our object.
In fact, we do not need the full connectedness condition for the definition of the Boardman--Vogt $W$-construction of a Segal shuffle dg cooperad $\AOp$,
because the definition makes sense as soon as the coproduct operators of our object
fulfill the local conilpotence property of~\S\ref{subsection:conilpotence} (which is implied
by the connectedness condition, but can be satisfied in a broader context). We will explain the definition of the Boardman--Vogt $W$-construction in this setting.
\begin{constr}
Fix $\stree\in\Tree(r)$.
For a tree morphism $\ttree\rightarrow\stree$, equivalent to a treewise decomposition $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$,
we set:
\begin{equation*}
\square^*(\ttree/\stree) = \bigotimes_{e\in\mathring{E}(\sigmatree_v),v\in V(\stree)}\underbrace{\DGN^*(\Delta^1)}_{=: I_e},
\end{equation*}
where we associate a factor $I_e = \DGN^*(\Delta^1)$ to every inner edge of a subtree $\sigmatree_v\subset\ttree$ (recall that $\mathring{E}(\thetatree)$
denotes the set of inner edges, which we associate to any tree $\thetatree$
in our category).
This collection of dg modules $\square^*(\ttree/\stree)$ defines a covariant functor on the over category of tree morphisms $\ttree\rightarrow\stree$, where $\ttree\in\Tree(r)$.
Recall that the cochain complex $\DGN^*(\Delta^1)$ is given by $\DGN^*(\Delta^1) = \kk\underline{0}^{\sharp}\oplus\kk\underline{1}^{\sharp}\oplus\kk\underline{01}^{\sharp}$,
with the differential such that $\delta(\underline{0}^{\sharp}) = -\underline{01}^{\sharp}$
and $\delta(\underline{1}^{\sharp}) = \underline{01}^{\sharp}$ (see~\S\ref{constr:cubical-cochain-connection}).
We use that a composite of tree morphisms $\ttree\rightarrow\utree\rightarrow\stree$
is equivalent to a double decomposition $\utree = \lambda_{\stree}(\stree_v,v\in V(\stree))$
and $\ttree = \lambda_{\utree}(\sigmatree_u,u\in V(\utree))$,
which yields $\ttree = \lambda_{\stree}(\thetatree_v,v\in V(\stree))$ with $\thetatree_v = \lambda_{\stree_v}(\sigmatree_u,u\in V(\stree_v))$.
We define
\begin{equation*}
\partial_{\ttree\rightarrow\utree/\stree}: \square^*(\ttree/\stree)\rightarrow\square^*(\utree/\stree)
\end{equation*}
as the morphism of dg modules induced by the identity mapping on the factors $I_e$ associated to the edges such that $e\not\in\mathring{E}(\sigmatree_u)$, for all $u\in V(\utree)$,
and by the map $d_0: I_e\rightarrow\kk$ such that $d_0(\underline{1}^{\sharp}) = 1$ and $d_0(\underline{0}^{\sharp}) = d_0(\underline{01}^{\sharp}) = 0$
on the factors $I_e$
associated to the edges $e$ such that we have $e\in\mathring{E}(\sigmatree_u)$, for some $u\in V(\utree)$ (the edges which collapse when we pass to $\utree$).
The collection $\square^*(\ttree/\stree)$ also defines a contravariant functor on the under category of tree morphisms $\ttree\rightarrow\stree$,
when we fix $\ttree\in\Tree(r)$ instead of $\stree\in\Tree(r)$.
For tree morphisms $\ttree\rightarrow\utree\rightarrow\stree$ as above,
we then consider the map
\begin{equation*}
\rho_{\ttree/\utree\rightarrow\stree}: \square^*(\ttree/\stree)\rightarrow\square^*(\ttree/\utree)
\end{equation*}
induced by the identity mapping on the factors $I_e$
associated to the edges such that $e\in\mathring{E}(\sigmatree_u)$, for some $u\in V(\utree)$,
and by the map $d_1: I_e\rightarrow\kk$ such that $d_1(\underline{0}^{\sharp}) = 1$ and $d_1(\underline{1}^{\sharp}) = d_1(\underline{01}^{\sharp}) = 0$
on the factors $I_e$ associated to the edges $e$
such that $e\not\in\mathring{E}(\sigmatree_u)$, for all $u\in V(\utree)$.
We easily check that the above constructions yield associative covariant and contravariant actions on our collection of dg modules $\square^*(\ttree/\stree)$,
which, in addition, commute with each other. We accordingly get that our mapping $(\ttree\rightarrow\stree)\mapsto\square^*(\ttree/\stree)$
defines a covariant functor on the comma category of tree morphisms $\ttree\rightarrow\stree$.
\end{constr}
\begin{remark}
The application of the face operator $d_0$ for the definition of the covariant functor structure on the collection $\square^*(\ttree/\stree)$
and the application of the face operator $d_1$ for the definition of the contravariant functor structure
is converse to the usual convention for the definition of the $W$-construction.
This choice is motivated by our choices regarding the definition of the cobar construction of homotopy Segal dg cooperads,
which are themselves forced by the definition of connections
in the category of $\EOp$-algebras, and by a concern for coherence between the definition of the $W$-construction
and the definition of the cobar construction of (homotopy) Segal dg cooperads,
which we need in order to be able to compare the $W$-construction
with the cobar construction.
\end{remark}
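To fix ideas, let us record the conventions which we use in this section. The factor $I_e = \DGN^*(\Delta^1)$ associated to an edge $e$ is spanned by the generators $\underline{0}^{\sharp}$, $\underline{1}^{\sharp}$, in degree $0$, and $\underline{01}^{\sharp}$, in degree $1$, and is equipped with the differential and face maps such that:
\begin{equation*}
\delta(\underline{0}^{\sharp}) = -\underline{01}^{\sharp},\quad\delta(\underline{1}^{\sharp}) = \underline{01}^{\sharp},
\qquad d_0(\underline{1}^{\sharp}) = 1,\quad d_1(\underline{0}^{\sharp}) = 1,
\end{equation*}
where $d_0$ and $d_1$ vanish on the other generators, so that both face maps define morphisms of dg modules $d_0, d_1: I_e\rightarrow\kk$.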
We now address the definition of the $W$-construction.
\begin{constr}\label{construction:W}
Let $\AOp$ be a Segal shuffle dg pre-cooperad.
We assume that the treewise coproduct operators on $\AOp$ satisfy the local conilpotence condition of~\S\ref{subsection:conilpotence}.
We set:
\begin{equation*}
\DGW^{c}(\AOp)(\stree) = \eq(\xymatrix{\bigoplus_{\ttree\rightarrow\stree}\square^*(\ttree/\stree)\otimes\AOp(\ttree)\ar@<+2pt>[r]^{d^0}\ar@<-2pt>[r]_{d^1}
& \bigoplus_{\ttree\rightarrow\utree\rightarrow\stree}\square^*(\utree/\stree)\otimes\AOp(\ttree)\ar@/_2em/[l]_{s^0} }),
\end{equation*}
where we take the equalizer $\eq$ of the map $d^0$ induced by the covariant action $\square^*(\ttree/\stree)\rightarrow\square^*(\utree/\stree)$ of the tree morphisms $\ttree\rightarrow\utree$
and of the map $d^1$ induced by the contravariant action $\AOp(\utree)\rightarrow\AOp(\ttree)$
on our tensors. (The reflection map $s^0$ is given by the projection onto the summands such that $\ttree = \utree$.)
We may equivalently use the following end-style notation for this equalizer:
\begin{equation*}
\DGW^{c}(\AOp)(\stree) = \int_{\ttree\rightarrow\stree}'\square^*(\ttree/\stree)\otimes\AOp(\ttree),
\end{equation*}
where the notation $\int'$ refers to the consideration of sums (rather than products) in the above equalizer definition of our object.
We note that this additive end is well defined because the local conilpotence condition implies that the contravariant action operations $\AOp(\utree)\rightarrow\AOp(\ttree)$
land in a sum when $\ttree$ varies, while the covariant action operations $\square^*(\ttree/\stree)\rightarrow\square^*(\utree/\stree)$
land in a sum because each tree morphism $\ttree\rightarrow\stree$ has finitely many factorizations $\ttree\rightarrow\utree\rightarrow\stree$.
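In plain terms, an element of this additive end is a collection of tensors $\omega_{\ttree\rightarrow\stree}\in\square^*(\ttree/\stree)\otimes\AOp(\ttree)$, which vanish for all but finitely many morphisms $\ttree\rightarrow\stree$, and which satisfy the relation
\begin{equation*}
(\partial_{\ttree\rightarrow\utree/\stree}\otimes\id)(\omega_{\ttree\rightarrow\stree}) = (\id\otimes\rho_{\ttree\rightarrow\utree})(\omega_{\utree\rightarrow\stree})
\end{equation*}
in $\square^*(\utree/\stree)\otimes\AOp(\ttree)$, for every factorization $\ttree\rightarrow\utree\rightarrow\stree$, where $\partial_{\ttree\rightarrow\utree/\stree}: \square^*(\ttree/\stree)\rightarrow\square^*(\utree/\stree)$ denotes the covariant action operator and $\rho_{\ttree\rightarrow\utree}: \AOp(\utree)\rightarrow\AOp(\ttree)$ the coproduct operator on $\AOp$.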
The objects $\DGW^{c}(\AOp)(\stree)$ inherit natural coproduct operators $\rho^W_{\utree\rightarrow\stree}: \DGW^{c}(\AOp)(\stree)\rightarrow\DGW^{c}(\AOp)(\utree)$
by covariant functoriality of the objects $\square^*(\ttree/\stree)$.
Besides, we have, for each tree decomposition $\stree = \lambda_{\utree}(\sigmatree_u, u\in V(\utree))$,
a natural Segal map
\begin{gather*}
i^W_{\sigmatree_*,\stree}: \bigotimes_{u\in V(\utree)}\DGW^{c}(\AOp)(\sigmatree_u)\rightarrow\DGW^{c}(\AOp)(\stree),
\intertext{induced by the following operators on our additive end}
\bigotimes_{u\in V(\utree)}\square^*(\thetatree_u/\sigmatree_u)\otimes\AOp(\thetatree_u)
\rightarrow\square^*(\lambda_{\utree}(\thetatree_u)/\stree)\otimes\AOp(\lambda_{\utree}(\thetatree_u)),
\end{gather*}
where we fix a collection of tree morphisms $\thetatree_u\rightarrow\sigmatree_u$, $u\in V(\utree)$, which we put together on $\utree$
in order to get a tree morphism $\lambda_{\utree}(\thetatree_u)\rightarrow\lambda_{\utree}(\sigmatree_u) = \stree$,
and we use the obvious isomorphism $\bigotimes_{u\in V(\utree)}\square^*(\thetatree_u/\sigmatree_u)\simeq\square^*(\lambda_{\utree}(\thetatree_u)/\stree)$
together with the Segal map $i_{\lambda_{\utree}(\thetatree_*)}: \bigotimes_{u\in V(\utree)}\AOp(\thetatree_u)\rightarrow\AOp(\ttree)$ on $\AOp$
for the tree $\ttree = \lambda_{\utree}(\thetatree_u)$.
\end{constr}
We check later on that the above construction provides the object $\DGW^c(\AOp)$ with a well-defined Segal dg cooperad structure.
Before carrying out this verification, we define a decomposed version of the $W$-construction. The idea is to replace the contravariant functor $\AOp(\ttree)$
in the definition of the $W$-construction by a decomposed version of this functor, which we construct in the next paragraph.
\begin{constr}\label{construction:decomposed-Segal-cooperad-terms}
We again assume that $\AOp$ is a Segal shuffle dg pre-cooperad.
For a tree morphism $\ttree\rightarrow\stree$, equivalent to a treewise decomposition $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$, we set:
\begin{equation*}
\AOp(\ttree/\stree) = \bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v).
\end{equation*}
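For instance, for the identity morphism $\ttree\rightarrow\ttree$, which is equivalent to the decomposition of $\ttree$ into the corollas $\ytree_v$ generated by its vertices, and for a morphism $\ttree\rightarrow\ytree$ towards a corolla, we respectively get:
\begin{equation*}
\AOp(\ttree/\ttree) = \bigotimes_{v\in V(\ttree)}\AOp(\ytree_v)
\quad\text{and}\quad
\AOp(\ttree/\ytree) = \AOp(\ttree),
\end{equation*}
so that this construction interpolates between the source of the full Segal map and the object $\AOp(\ttree)$ itself.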
The collection of these objects inherits the structure of a contravariant functor on the over category of tree morphisms $\ttree\rightarrow\stree$,
where we fix the tree $\stree\in\Tree(r)$
and let $\ttree$ vary. We proceed as follows.
Let $f: \utree\rightarrow\ttree$ be a tree morphism, which we compose with the above morphism $\ttree\rightarrow\stree$ to get $\utree\rightarrow\stree$.
In this case, for the decomposition $\utree = \lambda_{\stree}(\thetatree_v,v\in V(\stree))$,
we have $\thetatree_v = f^{-1}\sigmatree_v\subset\utree$, where we consider the pre-image of the subtree $\sigmatree_v\subset\ttree$
under the morphism $f: \utree\rightarrow\ttree$.
Furthermore, we can identify our morphism $f: \utree\rightarrow\ttree$
with the morphism $\lambda_{\stree}(\thetatree_v)\rightarrow\lambda_{\stree}(\sigmatree_v)$
which we obtain by putting together the morphisms $f|_{\thetatree_v}: \thetatree_v\rightarrow\sigmatree_v$
on the tree $\stree$. We just define the decomposed coproduct operator
\begin{equation*}
\rho_{\utree\rightarrow\ttree/\stree}: \AOp(\ttree/\stree)\rightarrow\AOp(\utree/\stree)
\end{equation*}
as the tensor product of the coproduct operators $\rho_{\thetatree_v\rightarrow\sigmatree_v}: \AOp(\sigmatree_v)\rightarrow\AOp(\thetatree_v)$
which we associate to these restrictions $f|_{\thetatree_v}: \thetatree_v\rightarrow\sigmatree_v$.
The collection $\AOp(\ttree/\stree)$ also defines a covariant functor on the under category of tree morphisms $\ttree\rightarrow\stree$,
when we make the tree $\stree$ vary and we fix $\ttree\in\Tree(r)$.
We consider a composable sequence of morphisms $\ttree\rightarrow\utree\rightarrow\stree$.
We write $\utree = \lambda_{v\in V(\stree)}(\stree_v)$ for the decomposition equivalent to the morphism $f: \utree\rightarrow\stree$
and $\ttree = \lambda_{u\in V(\utree)}(\sigmatree_u)$ for the decomposition equivalent to the morphism $\ttree\rightarrow\utree$.
Then the decomposition equivalent to the composite morphism $\ttree\rightarrow\stree$
reads $\ttree = \lambda_{\stree}(\thetatree_v,v\in V(\stree))$ with $\thetatree_v = \lambda_{\stree_v}(\sigmatree_u,u\in V(\stree_v))$,
for $v\in V(\stree)$.
The operator
\begin{equation*}
i_{\ttree/\utree\rightarrow\stree}: \AOp(\ttree/\utree)\rightarrow\AOp(\ttree/\stree)
\end{equation*}
of the covariant action of $f: \utree\rightarrow\stree$ on our collection
is given by the tensor product of the Segal maps $i_{\lambda_{\stree_v}(\sigmatree_*)}: \bigotimes_{u\in V(\stree_v)}\AOp(\sigmatree_u)\rightarrow\AOp(\thetatree_v)$
which we associate to the decompositions $\thetatree_v = \lambda_{\stree_v}(\sigmatree_u,u\in V(\stree_v))$.
We easily check that the above constructions yield associative covariant and contravariant actions on our collection $\AOp(\ttree/\stree)$,
which, in addition, commute with each other. We accordingly get that our mapping $(\ttree\rightarrow\stree)\mapsto\AOp(\ttree/\stree)$
defines a contravariant functor on the comma category of tree morphisms $\ttree\rightarrow\stree$.
\end{constr}
We can now proceed to the definition of our decomposed $W$-construction.
\begin{constr}\label{construction:decomposed-W}
Let $\AOp$ be a Segal shuffle dg pre-cooperad. We assume that $\AOp$ satisfies the local conilpotence condition of~\S\ref{subsection:conilpotence} as in Construction~\ref{construction:W}.
We set:
\begin{equation*}
\DGW^{c}_{dec}(\AOp)(\stree) = \int_{\ttree\rightarrow\stree}'\square^*(\ttree/\stree)\otimes\AOp(\ttree/\stree),
\end{equation*}
where the notation $\int'$ refers to the same additive end construction as in Construction~\ref{construction:W}, and we consider the decomposed contravariant functor $\AOp(\ttree/\stree)$
defined in the previous paragraph.
We note again that this additive end is well defined because the local conilpotence condition implies that the decomposed coproduct operators $\AOp(\ttree/\stree)\rightarrow\AOp(\utree/\stree)$
land in a direct sum when $\utree$ varies, like the coproduct operators $\AOp(\ttree)\rightarrow\AOp(\utree)$
in Construction~\ref{construction:W} (and also because the covariant action on $\square^*(\ttree/\stree)$
involves finitely many terms on each term of the end).
The coproduct operators $\rho^W_{\utree\rightarrow\stree}: \DGW^{c}_{dec}(\AOp)(\stree)\rightarrow\DGW^{c}_{dec}(\AOp)(\utree)$
are defined by using the covariant functoriality of the objects $\square^*(\ttree/\stree)$
and $\AOp(\ttree/\stree)$.
For each tree decomposition $\stree = \lambda_{\utree}(\sigmatree_u,u\in V(\utree))$, we define the Segal map
\begin{gather*}
i^W_{\sigmatree_*,\stree}: \bigotimes_{u\in V(\utree)}\DGW^{c}_{dec}(\AOp)(\sigmatree_u)\rightarrow\DGW^{c}_{dec}(\AOp)(\stree)
\intertext{termwise, by the morphisms}
\bigotimes_{u\in V(\utree)}\square^*(\thetatree_u/\sigmatree_u)\otimes\AOp(\thetatree_u/\sigmatree_u)
\rightarrow\square^*(\lambda_{\utree}(\thetatree_u)/\stree)\otimes\AOp(\lambda_{\utree}(\thetatree_u)/\stree),
\end{gather*}
associated to the collections of tree morphisms $\thetatree_u\rightarrow\sigmatree_u$, $u\in V(\utree)$,
which we get by tensoring the same isomorphisms $\bigotimes_{u\in V(\utree)}\square^*(\thetatree_u/\sigmatree_u)\simeq\square^*(\lambda_{\utree}(\thetatree_u)/\stree)$
as in Construction~\ref{construction:W}
with parallel isomorphisms $\bigotimes_{u\in V(\utree)}\AOp(\thetatree_u/\sigmatree_u)\simeq\AOp(\lambda_{\utree}(\thetatree_u)/\stree)$,
which we associate to the objects of Construction~\ref{construction:decomposed-Segal-cooperad-terms}.
\end{constr}
We have the following observation, which can be used to give a reduced description of both the $W$-construction $\DGW^{c}(\AOp)$
and the decomposed $W$-construction $\DGW^{c}_{dec}(\AOp)$.
\begin{lemm}\label{lemma:W-construction-splitting}
The additive end equalizers in the definition of the $W$-construction $\DGW^{c}(\AOp)$ in Construction~\ref{construction:W}
and in the definition of the decomposed $W$-construction $\DGW^{c}_{dec}(\AOp)$
in Construction~\ref{construction:decomposed-W}
split when we forget about differentials, so that the terms of these objects have a reduced description
of the form:
\begin{align*}
\DGW^{c}(\AOp)(\stree) & = \bigoplus_{\ttree\rightarrow\stree}L\square^*(\ttree/\stree)\otimes\AOp(\ttree), \\
\DGW^{c}_{dec}(\AOp)(\stree) & = \bigoplus_{\ttree\rightarrow\stree}L\square^*(\ttree/\stree)\otimes\AOp(\ttree/\stree),
\end{align*}
where, for a tree morphism $\ttree\rightarrow\stree$ equivalent to a tree decomposition $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$,
we define $L\square^*(\ttree/\stree)\subset\square^*(\ttree/\stree)$
by the tensor product:
\begin{equation*}
L\square^*(\ttree/\stree) = \bigotimes_{e\in\mathring{E}(\sigmatree_v),v\in V(\stree)}\underbrace{(\kk\underline{0}^{\sharp}\oplus\kk\underline{01}^{\sharp})}_{=: \mathring{I}_e}.
\end{equation*}
(Thus, we just drop the factors $\underline{1}^{\sharp}$ from the normalized cochain complexes $I_e = N^*(\Delta^1)$ in the expression of the object $\square^*(\ttree/\stree)$.)
\end{lemm}
\begin{proof}
This lemma readily follows from the fact that the terms $\varpi = \sigma\otimes\alpha\in\square^*(\ttree/\stree)\otimes\AOp(\ttree)$
(respectively, $\varpi = \sigma\otimes\alpha\in\square^*(\ttree/\stree)\otimes\AOp(\ttree/\stree)$)
with $\underline{1}^{\sharp}$ factors in the additive end definition of the object $\DGW^{c}(\AOp)(\stree)$ (respectively, $\DGW^{c}_{dec}(\AOp)(\stree)$)
are determined by the equalizer relations,
which identify such terms with the image of tensors of the form $\varpi' = \sigma'\otimes\alpha'\in L\square^*(\ttree'/\stree)\otimes\AOp(\ttree')$
(respectively, $\varpi' = \sigma'\otimes\alpha'\in L\square^*(\ttree'/\stree)\otimes\AOp(\ttree'/\stree)$)
under the action of coproduct operators,
where $\sigma'$ is defined by withdrawing the factors $\underline{1}^{\sharp}$ from $\sigma\in\square^*(\ttree/\stree)$
and $\ttree'$ is the tree obtained by contracting the edges $e$
that correspond to such factors in $\ttree$.
Indeed, we then have $\sigma' = \partial_{\ttree\rightarrow\ttree'/\stree}(\sigma)$ and, in the case of the $W$-construction $\DGW^{c}(\AOp)$,
from the zigzag of morphisms
\begin{equation*}
\xymatrix{ \square^*(\ttree/\stree)\otimes\AOp(\ttree)\ar[dr]_{\partial_{\ttree\rightarrow\ttree'/\stree}\otimes\id} &&
\square^*(\ttree'/\stree)\otimes\AOp(\ttree')\ar[dl]^{\id\otimes\rho_{\ttree\rightarrow\ttree'}} \\
& \square^*(\ttree'/\stree)\otimes\AOp(\ttree) & },
\end{equation*}
which we extract from our equalizer, we see that the identity $\sigma'\otimes\alpha = \sigma'\otimes\rho_{\ttree\rightarrow\ttree'}(\alpha')$ holds, so that $\alpha = \rho_{\ttree\rightarrow\ttree'}(\alpha')$.
We argue similarly in the case of the decomposed $W$-construction.
\end{proof}
We now check the validity of the definition of our Segal shuffle dg pre-cooperad structure on the $W$-construction $\DGW^{c}(\AOp)$ in Construction~\ref{construction:W}
and on the decomposed $W$-construction $\DGW^{c}_{dec}(\AOp)$ in Construction~\ref{construction:decomposed-W}.
We use the following straightforward observation (see \cite[Appendix A]{FresseBook}).
\begin{lemm}\label{lemma:tree-decomposition-chain}
Let $\stree = \lambda_{\utree}(\sigmatree_u,u\in V(\utree))$ be a tree decomposition.
There is a bijection between the set of collections of composable pairs of tree morphisms $\{\thetatree_u\rightarrow\ttree_u\rightarrow\sigmatree_u,u\in V(\utree)\}$
indexed by $V(\utree)$
and the set of composable pairs of tree morphisms $\thetatree\rightarrow\ttree\rightarrow\stree$.
This bijection associates any such collection $\{\thetatree_u\rightarrow\ttree_u\rightarrow\sigmatree_u,u\in V(\utree)\}$
with the morphisms $\lambda_{\utree}(\thetatree_u,u\in V(\utree))\rightarrow\lambda_{\utree}(\ttree_u,u\in V(\utree))\rightarrow\lambda_{\utree}(\sigmatree_u,u\in V(\utree)) = \stree$.\qed
\end{lemm}
\begin{thm-defn}\label{lemma:W-construction-cooperad}
The objects $\DGW^{c}(\AOp)$ and $\DGW^{c}_{dec}(\AOp)$, equipped with the coproduct operators and the Segal maps defined in Constructions~\ref{construction:W} and~\ref{construction:decomposed-W},
form Segal shuffle dg pre-cooperads, to which we respectively refer as the $W$-construction and the decomposed $W$-construction
of the Segal shuffle dg (pre-)cooperad $\AOp$.
For the decomposed $W$-construction $\DGW^{c}_{dec}(\AOp)$, we get in addition that the Segal maps
define isomorphisms
\begin{equation*}
i_{\lambda_{\stree}(\sigmatree_*)}: \bigotimes_{v\in V(\stree)}\DGW^{c}_{dec}(\AOp)(\sigmatree_v)\xrightarrow{\simeq}\DGW^{c}_{dec}(\AOp)(\ttree),
\end{equation*}
for all tree decompositions $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$, so that $\DGW^{c}_{dec}(\AOp)$
is identified with a shuffle dg cooperad in the ordinary sense.
\end{thm-defn}
\begin{proof}
The associativity of the coproduct operators on $\DGW^{c}(\AOp)$ and $\DGW^{c}_{dec}(\AOp)$
is immediate from the definition of these morphisms
in terms of associative actions on the terms
of our additive ends
in Construction~\ref{construction:W} and Construction~\ref{construction:decomposed-W}.
We similarly check the validity of the associativity condition for the Segal maps that we attach to our objects.
We also deduce the compatibility between the Segal maps and the coproduct operators from termwise counterparts of this relation.
To establish that Segal maps define isomorphisms in the case of the decomposed $W$-construction, we use the reduced expression of Lemma~\ref{lemma:W-construction-splitting},
the fact that the tensor products of this reduced expression have a factorization
of the form
\begin{multline*}
L\square^*(\ttree/\stree)\otimes\AOp(\ttree/\stree) = (\bigotimes_{e\in\mathring{E}(\sigmatree_v),v\in V(\stree)}\mathring{I}_e)\otimes(\bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v))\\
\simeq\bigotimes_{v\in V(\stree)}(\underbrace{\bigotimes_{e\in\mathring{E}(\sigmatree_v)}\mathring{I}_e}_{= L\square^*(\sigmatree_v/\ytree_v)}
\otimes\underbrace{\AOp(\sigmatree_v)}_{= \AOp(\sigmatree_v/\ytree_v)})
\end{multline*}
and the bijective correspondence of Lemma~\ref{lemma:tree-decomposition-chain}.
\end{proof}
\begin{thm}\label{proposition:quasi-isomorphism-bar-strict-cooperads}
If $\AOp$ is a connected Segal shuffle dg pre-cooperad (where we use the connectedness condition of~\S\ref{subsection:conilpotence}),
then we have a zigzag of natural transformations of Segal shuffle dg pre-cooperads
\begin{equation*}
\AOp\xrightarrow{\sim}\DGW^c(\AOp)\leftarrow\DGW^c_{dec}(\AOp)
\end{equation*}
where the morphism on the left-hand side is a weak-equivalence termwise.
If $\AOp$ satisfies the Segal condition (and therefore forms a Segal shuffle dg cooperad),
then the morphism on the right-hand side of this zigzag is also a weak-equivalence
and the $W$-construction $\DGW^c(\AOp)$ also satisfies the Segal condition,
so that $\AOp$ is, as a Segal shuffle dg cooperad, weakly-equivalent to a shuffle dg cooperad in the ordinary sense.
\end{thm}
\begin{proof}
We address the definition of the morphism $\beta: \AOp\rightarrow\DGW^{c}(\AOp)$ first.
We consider the dg module morphisms $\beta_{\ttree\rightarrow\stree}: \AOp(\stree)\rightarrow\square^*(\ttree/\stree)\otimes\AOp(\ttree)$
defined by pairing the coproducts $\rho_{\ttree\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)$
associated to the morphisms $\ttree\rightarrow\stree$
with the unit morphisms $\eta: \kk\rightarrow I_e$ of the cochain algebras $I_e = \DGN^*(\Delta^1)$
in $\square^*(\ttree/\stree)$.
We readily check that these morphisms $\beta_{\ttree\rightarrow\stree}: \AOp(\stree)\rightarrow\square^*(\ttree/\stree)\otimes\AOp(\ttree)$
induce a morphism with values in the additive end of Construction~\ref{construction:W}. (We just note that the local conilpotence condition
implies again that the collection of these morphisms lands in the sum of the objects $\square^*(\ttree/\stree)\otimes\AOp(\ttree)$ when $\ttree$ varies.)
We accordingly get a dg module morphism $\beta: \AOp(\stree)\rightarrow\DGW^{c}(\AOp)(\stree)$, for each tree $\stree$.
We easily deduce from the associativity of the coproduct operators that these morphisms commute with the coproduct operators on $\AOp$
and on $\DGW^{c}(\AOp)(\stree)$.
We similarly prove that our morphisms preserve the Segal maps by reducing the verification of this claim
to a termwise relation.
We therefore get a well-defined morphism of Segal shuffle dg pre-cooperads $\beta: \AOp\rightarrow\DGW^{c}(\AOp)$
as requested.
We now check that this morphism defines a termwise weak-equivalence $\beta: \AOp(\stree)\xrightarrow{\sim}\DGW^{c}(\AOp)(\stree)$.
For this purpose, we use the reduced expression of the $W$-construction given in Lemma~\ref{lemma:W-construction-splitting},
and we take a filtration of our object by the number of vertices of the trees $\ttree$
in this expansion.
We see that the terms of the differential given by the map $\delta(\underline{0}^{\sharp}) = -\underline{01}^{\sharp}$
in the normalized cochain complexes $I_e = \DGN^*(\Delta^1)$
preserve this grading, as well as the term of the differential induced by the internal differential
of the dg modules $\AOp(\ttree)$,
but the terms of the differential given by the map $\delta(\underline{1}^{\sharp}) = \underline{01}^{\sharp}$
increase the number of vertices
when we pass to the reduced expansion.
We just take the spectral sequence associated to our filtration to neglect the latter terms and to reduce the differential of our object
to the terms given by the maps $\delta(\underline{0}^{\sharp}) = -\underline{01}^{\sharp}$
and the internal differential of the dg modules $\AOp(\ttree)$.
The acyclicity of the cochain complex $\mathring{I}_e = (\kk\underline{0}^{\sharp}\oplus\kk\underline{01}^{\sharp},\delta(\underline{0}^{\sharp}) = -\underline{01}^{\sharp})$
implies that all terms of our reduced expansion have a trivial homology, except the term associated to the identity morphism $\ttree = \stree\rightarrow\stree$
for which we have $L\square^*(\stree/\stree) = \kk$.
Hence, our map $\beta: \AOp(\stree)\rightarrow\DGW^{c}(\AOp)(\stree)$ induces an isomorphism on the first page of the spectral sequence associated to our filtration,
and we conclude from this result that $\beta: \AOp(\stree)\rightarrow\DGW^{c}(\AOp)(\stree)$
defines a weak-equivalence of dg modules,
as requested.
Note simply that the connectedness assumption of the theorem implies that the object $\DGW^{c}(\AOp)(\stree)$ reduces to a finite sum, for any given tree $\stree$,
because we have only finitely many morphisms $\ttree\rightarrow\stree$ such that $\ttree$ is reduced,
and for which the object $\AOp(\ttree)$ does not vanish (see~\S\ref{subsection:conilpotence}).
This observation ensures that no convergence issue occurs in this spectral argument.
We define our second morphism $\alpha: \DGW^c_{dec}(\AOp)\rightarrow\DGW^c(\AOp)$ termwise,
by taking the tensor product of the identity map on $\square^*(\ttree/\stree)$
with the morphism
\begin{equation}\tag{$*$}\label{eqn:termwise_segal_maps}
\AOp(\ttree/\stree) = \bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{i_{\lambda_{\stree}(\sigmatree_*)}}\AOp(\ttree)
\end{equation}
given by the Segal map on $\AOp$, where we consider the decomposition $\ttree = \lambda_{v\in V(\stree)}(\sigmatree_v)$
equivalent to the morphism $\ttree\rightarrow\stree$.
We just check that these maps~(\ref{eqn:termwise_segal_maps}) define a morphism of bifunctors on the comma category of tree morphisms $\ttree\rightarrow\stree$
to obtain that they induce a well-defined map on our additive end. This map also commutes with the action of our coproduct operators
on $\DGW^c_{dec}(\AOp)$ and $\DGW^c(\AOp)$.
We easily deduce, from the associativity relations of the Segal maps, that the maps~(\ref{eqn:termwise_segal_maps})
intertwine the action of the Segal operators on $\DGW^c_{dec}(\AOp)$ and $\DGW^c(\AOp)$
and hence, define a morphism of Segal shuffle dg pre-cooperads.
We again use the spectral sequence determined by the filtration by the number of vertices of trees
in the reduced expansions of Lemma~\ref{lemma:W-construction-splitting}
to study the effect of our map in homology.
We use that the first page of the spectral sequence is given by these reduced expansions
with the differential inherited from our objects $\AOp(\ttree/\stree)$ and $\AOp(\ttree)$
and from the terms $\delta(\underline{0}^{\sharp}) = -\underline{01}^{\sharp}$
of the differential on the factors $\mathring{I}_e\subset I_e = \DGN^*(\Delta^1)$.
We immediately deduce that our morphism induces an isomorphism on the first page of this spectral sequence as our maps~(\ref{eqn:termwise_segal_maps}),
which we pair with the identity of the object $L\square^*(\ttree/\stree)$ in our reduced expansions,
are weak-equivalences when we assume that $\AOp$ satisfies the Segal condition.
We conclude from this observation that our morphism $\alpha: \DGW^c_{dec}(\AOp)\rightarrow\DGW^c(\AOp)$
defines a termwise weak-equivalence of Segal shuffle dg pre-cooperads.
(We again use the connectedness assumption to ensure that no convergence issue occurs in this spectral sequence argument.)
Finally, the existence of such a weak-equivalence of Segal shuffle dg pre-cooperads $\alpha: \DGW^c_{dec}(\AOp)\xrightarrow{\sim}\DGW^c(\AOp)$
immediately implies that $\DGW^c(\AOp)$ fulfills the Segal condition,
and hence, defines a Segal shuffle dg cooperad,
since the Segal maps for $\DGW^c(\AOp)$ are weakly-equivalent to the Segal maps for $\DGW^c_{dec}(\AOp)$,
which are isomorphisms by the result of Theorem~\ref{lemma:W-construction-cooperad}.
\end{proof}
\subsection{The application of cobar complexes}\label{subsection:strict-Segal-cooperad-cobar}
In this subsection, we describe a cobar construction for Segal shuffle dg (pre-)cooperads. This is a construction that, from the structure of a Segal shuffle dg pre-cooperad $\AOp$,
produces an operad in dg modules $\DGB^c(\AOp)$.
We make explicit the definition of the structure operations of the cobar construction $\DGB^c(\AOp)$
in the next paragraph.
We check the validity of these definitions afterwards.
We finally make a formal statement to record the definition of this dg operad $\DGB^c(\AOp)$.
\begin{constr}\label{construction:bar-complex}
We define the graded modules $\DGB^c(\AOp)(r)$, which form the components of the cobar construction $\DGB^c(\AOp)$,
by the following formula:
\begin{equation*}
\DGB^c(\AOp)(r) = \bigoplus_{\ttree\in\Tree(r)}\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree),
\end{equation*}
where $\DGSigma$ is the suspension functor on graded modules.
We also have an identity $\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree) = \bigl(\bigotimes_{v\in V(\ttree)}\underline{01}^{\sharp}_v\bigr)\otimes\AOp(\ttree)$,
where we associate a factor of cohomological degree one $\underline{01}^{\sharp}_v$ to every vertex $v\in V(\ttree)$.
We just need to fix an ordering choice on the vertices of our tree to get this representation, as a permutation of factors produces a sign
in the tensor product $\bigotimes_{v\in V(\ttree)}\underline{01}^{\sharp}_v$.
To every pair $(\ttree,e)$, with $\ttree\in\Tree(r)$ and $e\in\mathring{E}(\ttree)$,
we associate the map
\begin{equation*}
\partial_{(\ttree,e)}: \DGSigma^{-\sharp V(\ttree)+1}\AOp(\ttree/e)\rightarrow\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)
\end{equation*}
given by the coproduct operator $\partial_{(\ttree,e)} = \rho_{\ttree\rightarrow\ttree/e}$ on $\AOp(\ttree/e)$, where $\ttree/e$ is defined by contracting the edge $e$ in $\ttree$,
together with the mapping $\underline{01}_{u\equiv v}\mapsto\underline{01}_u\otimes\underline{01}_v$
in the tensor product $\underline{01}_{u\equiv v}\otimes\bigotimes_{x\in V(\ttree/e)\setminus\{u\equiv v\}}\underline{01}^{\sharp}_x$,
where $u$ and $v$ respectively denote the source and the target of the edge $e$ in $\ttree$,
while $u\equiv v$ denotes the result of the fusion of these vertices in $\ttree/e$. (Note that $V(\ttree/e) = (V(\ttree)\setminus\{u,v\})\amalg\{u\equiv v\}$.)
To perform this construction, we put the factor $\underline{01}_{u\equiv v}$ associated to the merged vertex $u\equiv v$
in front position of the tensor product $\bigotimes_{x\in V(\ttree/e)}\underline{01}^{\sharp}_x$,
using a tensor permutation if necessary (recall simply that such a tensor permutation
produces a sign). The other option is to transport the map $\underline{01}_{u\equiv v}\mapsto\underline{01}_u\otimes\underline{01}_v$
over some tensors in order to reach the factor $\underline{01}_{u\equiv v}$
inside the tensor product $\bigotimes_{x\in V(\ttree/e)}\underline{01}^{\sharp}_x$.
This operation produces a sign too (because this map $\underline{01}_{u\equiv v}\mapsto\underline{01}_u\otimes\underline{01}_v$
has degree $1$ and therefore the permutation of this map with a tensor returns a sign).
Both procedures yield equivalent results.
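To give a simple illustration of these sign rules, assume $V(\ttree/e) = \{x, u\equiv v\}$ and that our ordering choice yields the tensor $\underline{01}^{\sharp}_x\otimes\underline{01}_{u\equiv v}$. The first procedure gives
\begin{equation*}
\underline{01}^{\sharp}_x\otimes\underline{01}_{u\equiv v}
= -\underline{01}_{u\equiv v}\otimes\underline{01}^{\sharp}_x
\mapsto -\underline{01}_u\otimes\underline{01}_v\otimes\underline{01}^{\sharp}_x,
\end{equation*}
while the second procedure gives $-\underline{01}^{\sharp}_x\otimes\underline{01}_u\otimes\underline{01}_v$, and these results agree, since moving the degree one factor $\underline{01}^{\sharp}_x$ across the degree two tensor $\underline{01}_u\otimes\underline{01}_v$ produces no extra sign.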
Then we take
\begin{equation*}
\partial = \sum_{(\ttree,e)}\partial_{(\ttree,e)},
\end{equation*}
the sum of these maps $\partial_{(\ttree,e)}$ associated to the pairs $(\ttree,e)$.
The next lemma implies that this map $\partial$ defines a twisting differential on $\DGB^c(\AOp)(r)$,
so that we can provide $\DGB^c(\AOp)(r)$ with a dg module structure with the sum $\delta+\partial: \DGB^c(\AOp)(r)\rightarrow\DGB^c(\AOp)(r)$
as total differential, where $\delta: \DGB^c(\AOp)(r)\rightarrow\DGB^c(\AOp)(r)$ denotes the differential
induced by the internal differential of the objects $\AOp(\ttree)$
in $\DGB^c(\AOp)(r)$.
We now fix a pointed shuffle decomposition $\{1<\dots<r\} = \{i_1<\dots<\widehat{i_p}<\dots<i_k\}\amalg\{j_1<\dots<j_l\}$
associated to the composition scheme of an operadic composition product $\circ_{i_p}$,
which we can also represent by a tree with two vertices $\gammatree$.
For a pair of trees $\ttree\in\Tree(\{i_1<\dots<i_k\})$ and $\stree\in\Tree(\{j_1<\dots<j_l\})$,
we consider the map
\begin{equation*}
\circ_{i_p}^{\stree,\ttree}: \DGSigma^{-\sharp V(\stree)}\AOp(\stree)\otimes\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)
\rightarrow\DGSigma^{-\sharp V(\stree\circ_{i_p}\ttree)}\AOp(\stree\circ_{i_p}\ttree)
\end{equation*}
yielded by the Segal map $i_{\stree\circ_{i_p}\ttree}: \AOp(\stree)\otimes\AOp(\ttree)\rightarrow\AOp(\stree\circ_{i_p}\ttree)$
associated to the decomposition of the grafting $\stree\circ_{i_p}\ttree = \lambda_{\gammatree}(\stree,\ttree)$,
together with the obvious tensor identity $\bigotimes_{u\in V(\stree)}\underline{01}^{\sharp}_u\otimes\bigotimes_{v\in V(\ttree)}\underline{01}^{\sharp}_v
= \bigotimes_{x\in V(\stree\circ_{i_p}\ttree)}\underline{01}^{\sharp}_x$,
which we deduce from the relation $V(\stree\circ_{i_p}\ttree) = V(\stree)\amalg V(\ttree)$.
Then we consider the composition product
\begin{equation*}
\circ_{i_p}: \DGB^c(\AOp)(\{i_1<\dots<i_k\})\otimes\DGB^c(\AOp)(\{j_1<\dots<j_l\})\rightarrow\DGB^c(\AOp)(\{1<\dots<r\})
\end{equation*}
defined by the sum of these maps $\circ_{i_p}^{\stree,\ttree}$ associated to the pairs $(\stree,\ttree)$. We check in a forthcoming lemma
that these operations preserve our differentials, and hence, provide our object with well-defined dg module operations.
\end{constr}
We first check the validity of the definition of our twisting differential on the cobar construction.
We have the following more precise statement.
\begin{lemm}\label{lemma:differential-bar-construction}
We have the relations $\partial^2 = \delta\partial + \partial\delta = 0$ on $\DGB^c(\AOp)(r)$, where $\delta: \DGB^c(\AOp)(r)\rightarrow\DGB^c(\AOp)(r)$
denotes the differential induced by the internal differential of the objects $\AOp(\ttree)$ (as we explained in the above construction).
\end{lemm}
\begin{proof}
The identity $\delta\partial + \partial\delta = 0$ reduces to the termwise relations $\delta\partial_{(\ttree,e)} + \partial_{(\ttree,e)}\delta = 0$,
which, in turn, are reformulations of the fact that the coproduct operators $\rho_{\ttree\rightarrow\ttree/e}$
are morphisms of dg modules.
We check that the relation $\partial^2 = 0$ holds on each summand $\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)\subset\DGB^c(\AOp)$
in the target of our map.
Note that $\mathring{E}(\ttree/e) = \mathring{E}(\ttree)\setminus\{e\}$.
Hence the components of $\partial^2$ which land in $\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)$
are defined on a summand $\DGSigma^{-\sharp V(\stree)}\AOp(\stree)\subset\DGB^c(\AOp)$
such that $\stree = \ttree/\{e,f\}$,
for some $e,f\in\mathring{E}(\ttree)$, $e\not=f$, and are given by the composites:
\begin{equation*}
\xymatrix{ & \DGSigma^{-\sharp V(\ttree)+1}\AOp(\ttree/e)\ar[dr]^{\partial_{(\ttree,e)}} & \\
\DGSigma^{-\sharp V(\ttree)+2}\AOp(\ttree/\{e,f\})\ar[ur]^{\partial_{(\ttree/e,f)}}\ar[dr]_{\partial_{(\ttree/f,e)}} && \DGSigma^{-\sharp V(\ttree)}\AOp(\ttree) \\
& \DGSigma^{-\sharp V(\ttree)+1}\AOp(\ttree/f)\ar[ur]_{\partial_{(\ttree,f)}} & }.
\end{equation*}
Both composites are given by composite coproduct operators
such that
\begin{equation*}
\rho_{\ttree\rightarrow\ttree/e}\circ\rho_{\ttree/e\rightarrow\ttree/\{e,f\}} = \rho_{\ttree\rightarrow\ttree/\{e,f\}} = \rho_{\ttree\rightarrow\ttree/f}\circ\rho_{\ttree/f\rightarrow\ttree/\{e,f\}}
\end{equation*}
by the contravariant functoriality of $\AOp$. We just note that the above composites differ by the order of the tensor factors $\underline{01}^{\sharp}$ to conclude that these composite coproducts
occur with opposite signs in $\partial^2$, and hence, cancel each other. The conclusion follows.
\end{proof}
We check the validity of our definition of the composition products on the cobar construction. We have the following more precise statement.
\begin{lemm}\label{lemma:bar-construction-product-operad}
The composition products $\circ_{i_p}$ of Construction~\ref{construction:bar-complex} preserve both the cobar differential $\partial$ and the differential $\delta$
induced by the internal differential of the object $\AOp$ on the cobar construction $\DGB^c(\AOp)$,
and hence, form morphisms of dg modules.
\end{lemm}
\begin{proof}
We prove that $\circ_{i_p}$ commutes with the differentials on each summand $\AOp(\stree)\otimes\AOp(\ttree)$
of the tensor product $\DGB^c(\AOp)(\{i_1<\dots<i_k\})\otimes\DGB^c(\AOp)(\{j_1<\dots<j_l\})$,
where $\stree\in\Tree(\{i_1<\dots<i_k\})$, $\ttree\in\Tree(\{j_1<\dots<j_l\})$.
We just use that the Segal maps, which induce our composition product componentwise, are morphisms of dg modules to conclude that $\circ_{i_p}$
preserves the internal differentials on our objects.
We focus on the verification that $\circ_{i_p}$ preserves the cobar differential.
We set $\thetatree = \stree\circ_{i_p}\ttree$ and we consider a tree $\thetatree'$ such that $\thetatree = \thetatree'/e$ for an edge $e\in\mathring{E}(\thetatree')$
which is contracted in a vertex in $\thetatree$. This vertex comes either from $\stree$
or from $\ttree$. In the first case, we have $\thetatree' = \stree'\circ_{i_p}\ttree$, for a tree $\stree'\subset\thetatree'$ such that $e\in\mathring{E}(\stree')$
and $\stree'/e = \stree$ (we can actually identify $\stree'\subset\thetatree'$
with the pre-image of the subtree $\stree\subset\thetatree$ under the morphism $\thetatree'\rightarrow\thetatree'/e = \thetatree$).
In the second case, we have $\thetatree' = \stree\circ_{i_p}\ttree'$,
for a tree $\ttree'\subset\thetatree'$ such that $e\in\mathring{E}(\ttree')$ and $\ttree'/e = \ttree$ (we can then identify $\ttree'\subset\thetatree'$
with the pre-image of the subtree $\ttree\subset\thetatree$
under our edge contraction morphism $\thetatree'\rightarrow\thetatree'/e = \thetatree$).
The compatibility between the Segal maps and the coproduct operators implies that the following diagrams commute:
\begin{equation*}
\xymatrix{ \AOp(\stree'/e)\otimes\AOp(\ttree)\ar[d]_-{\rho_{\stree'\rightarrow\stree'/e}\otimes\id}\ar[r]^{i_{\stree'/e\circ_{i_p}\ttree}} &
\AOp(\stree'/e\circ_{i_p}\ttree)\ar[d]^-{\rho_{\stree'\rightarrow\stree'/e}} \\
\AOp(\stree')\otimes\AOp(\ttree)\ar[r]^{i_{\stree'\circ_{i_p}\ttree}} &
\AOp(\stree'\circ_{i_p}\ttree) },
\quad\xymatrix{ \AOp(\stree)\otimes\AOp(\ttree'/e)\ar[d]_-{\id\otimes\rho_{\ttree'\rightarrow\ttree'/e}}\ar[r]^{i_{\stree\circ_{i_p}\ttree'/e}} &
\AOp(\stree\circ_{i_p}\ttree'/e)\ar[d]^-{\rho_{\ttree'\rightarrow\ttree'/e}} \\
\AOp(\stree)\otimes\AOp(\ttree')\ar[r]^{i_{\stree\circ_{i_p}\ttree'}} &
\AOp(\stree\circ_{i_p}\ttree') }.
\end{equation*}
This yields the relation $\partial_{(\stree'\circ_{i_p}\ttree,e)}\circ\circ_{i_p}^{\stree'/e,\ttree} = \circ_{i_p}^{\stree',\ttree}\circ(\partial_{(\stree',e)}\otimes\id)$
in the first case
and the relation $\partial_{(\stree\circ_{i_p}\ttree',e)}\circ\circ_{i_p}^{\stree,\ttree'/e} = \circ_{i_p}^{\stree,\ttree'}\circ(\id\otimes\partial_{(\ttree',e)})$
in the second case.
By summing these equalities with suitable suspension factors we get that $\partial$
defines a derivation with respect to $\circ_{i_p}$.
\end{proof}
We immediately deduce from the associativity of the Segal maps that the composition products of Construction~\ref{construction:bar-complex}
satisfy the associativity relations of the composition products of an operad.
We therefore get the following concluding statement:
\begin{thm-defn}
The collection $\DGB^c(\AOp) = \{\DGB^c(\AOp)(r),r>0\}$ equipped with the differential and structure operations
defined in Construction~\ref{construction:bar-complex}
forms a shuffle operad in dg modules.
This operad $\DGB^c(\AOp)$ is the cobar construction of the Segal shuffle dg (pre-)cooperad $\AOp$.\qed
\end{thm-defn}
\begin{remark}
In the case of an ordinary dg cooperad, we just retrieve the cobar construction functor of the classical theory of operads $\COp\mapsto\DGB^c(\COp)$,
which goes from the category of locally conilpotent (coaugmented) dg cooperads
to the category of (augmented) dg operads. (Recall simply that we drop units from our definitions so that our cooperads are equivalent to the coaugmentation coideal of coaugmented cooperads,
whereas our cobar construction is equivalent to the augmentation ideal of the classical unital cobar construction.)
By classical operad theory, we also have a bar construction functor $\POp\mapsto\DGB(\POp)$, which goes in the converse direction, from the category of (augmented) dg operads
to the usual category of locally conilpotent (coaugmented) dg cooperads.
For a locally conilpotent Segal shuffle dg pre-cooperad $\AOp$, we actually have an identity $\DGW^c_{dec}(\AOp) = \DGB\DGB^c(\AOp)$,
where we consider the decomposed $W$-construction of the previous subsection and the cobar operad $\DGB^c(\AOp)$,
such as defined in this subsection.
\end{remark}
\subsection{The strict Segal $E_\infty$-Hopf cooperad associated to an operad in simplicial sets}
In this subsection, we study a correspondence between the category of operads in simplicial sets
and the category of Segal $E_\infty$-Hopf cooperads.
In a first step, we explain the construction of the structure of a strict Segal $E_\infty$-Hopf cooperad
on the normalized cochain complex of a simplicial operad.
\begin{constr}\label{definition:cooperad-from-simplicial-operad}
Let $\POp$ be a (symmetric) operad in the category of simplicial sets $\simp\Set$.
Let $\DGN^*: \simp\Set^{op}\rightarrow\EAlg$ be the normalized cochain complex functor, where we consider on $\DGN^*(-)$ the $\EOp$-algebra structure
defined in~\cite{BergerFresse} (see also our overview in~\S\ref{sec:Barratt-Eccles-operad}).
For a tree $\ttree\in\Tree(r)$, we set $\POp(\ttree) = \prod_{v\in V(\ttree)}\POp(\rset_v)$.
Then we set:
\begin{equation*}
\AOp_{\POp}(\ttree) = \DGN^*(\POp(\ttree)).
\end{equation*}
We equip this collection with the coproduct operators $\rho_{\ttree\rightarrow\stree}: \AOp_{\POp}(\stree)\rightarrow\AOp_{\POp}(\ttree)$
induced by the treewise composition products on~$\POp$
and with the facet operators $i_{\sigmatree,\ttree}: \AOp_{\POp}(\sigmatree)\rightarrow\AOp_{\POp}(\ttree)$
induced by the projection maps $\prod_{v\in V(\ttree)}\POp(\rset_v)\rightarrow\prod_{v\in V(\sigmatree)}\POp(\rset_v)$,
for $\sigmatree\subset\ttree$.
We also have operators $s^*: \AOp_{\POp}(s\ttree)\rightarrow\AOp_{\POp}(\ttree)$,
associated to the permutations $s\in\Sigma_r$,
which are induced by the corresponding action of the permutations $s_*: \POp(\ttree)\rightarrow\POp(s\ttree)$
on our simplicial sets.
\end{constr}
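To give a simple illustration of this construction, consider a tree $\ttree$ with two vertices $u$ and $v$ joined by an inner edge $e$. We then have:
\begin{equation*}
\AOp_{\POp}(\ttree) = \DGN^*(\POp(\rset_u)\times\POp(\rset_v)),
\end{equation*}
and the coproduct operator $\rho_{\ttree\rightarrow\ttree/e}: \AOp_{\POp}(\ttree/e)\rightarrow\AOp_{\POp}(\ttree)$ is the morphism induced, by functoriality of $\DGN^*(-)$, by the treewise composition product $\POp(\rset_u)\times\POp(\rset_v)\rightarrow\POp(\rset_{v_0})$ of the operad $\POp$, where $v_0$ denotes the vertex of the quotient tree $\ttree/e$ obtained by merging $u$ and $v$.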
\begin{prop}\label{proposition:cooperad-from-simplicial-set}
Let $\POp$ be an operad in $\simp\Set$. The collection $\AOp_{\POp}(\ttree)$, equipped with the coproduct operators and facet operators defined in the above construction,
and with our action of permutations,
defines a strict Segal $E_\infty$-Hopf symmetric cooperad. This construction is functorial in $\POp$.
\end{prop}
\begin{proof}
The associativity of the coproduct maps is a direct consequence of the associativity of the products in $\POp$ and of the functoriality of the normalized cochain complex $\DGN^*(-)$.
The associativity of the facet operators and their compatibility with the coproducts also follows from the counterpart of these relations at the simplicial set level
and from the functoriality of the normalized cochain complex $\DGN^*(-)$,
and similarly as regards the compatibility between the action of permutations, the coproduct operators and the facet operators.
We only need to prove the Segal condition. By Proposition~\ref{proposition:Segal-condition}, we can reduce our verifications to the case
of a decomposition $\ttree = \lambda_{\gammatree}(\sigmatree_u,\sigmatree_v)$
over a tree with two internal vertices $\gammatree$.
We observe that, as our facet operators are induced by simplicial projections, there is a commutative diagram:
\begin{equation*}
\xymatrixcolsep{2pc}\xymatrix{ \AOp_{\POp}(\sigmatree_u)\vee\AOp_{\POp}(\sigmatree_v) = \DGN^*(\POp(\sigmatree_u))\vee\DGN^*(\POp(\sigmatree_v))
\ar[r]^-{i_{\lambda_{\gammatree}(\sigmatree_u,\sigmatree_v)}} &
\AOp_{\POp}(\ttree) = \DGN^*(\POp(\sigmatree_u)\times\POp(\sigmatree_v)) \\
& \DGN^*(\POp(\sigmatree_u))\otimes\DGN^*(\POp(\sigmatree_v))\ar[lu]^-{\EM}\ar[u]_-{\AW} },
\end{equation*}
where $\AW$ denotes the classical Alexander--Whitney product $\AW: \DGN^*(X)\otimes\DGN^*(Y)\rightarrow\DGN^*(X\times Y)$.
Both maps $\EM$ and $\AW$ are weak-equivalences (the map $\EM$ by Proposition~\ref{claim:Barratt-Eccles-algebra-coproducts}).
Hence $i_{\lambda_{\gammatree}(\sigmatree_u,\sigmatree_v)}$ is also a weak-equivalence.
The conclusion follows.
\end{proof}
The normalized cochain complex functor $\DGN^*: \simp\Set^{op}\rightarrow\EAlg$ admits a left adjoint $\DGG: \EAlg\rightarrow\simp\Set^{op}$,
which is defined by the formula $\DGG(A) = \Mor_{\EAlg}(A,\DGN^*(\Delta^{\bullet}))$,
for all $A\in\EAlg$ (see~\cite{Mandell} for a variant of this construction).
This pair of adjoint functors also defines a Quillen adjunction between the Kan model category of simplicial sets
and the model category of $\EOp$-algebras (we refer to~\cite{BergerFresse} for the definition of the model structure on the category of $\EOp$-algebras).
By the main result of~\cite{Mandell}, if we take $\kk = \bar{\FF}_p$ as ring of coefficients, then the object $L\DGG(\DGN^*(X))$,
which we obtain by applying the normalized cochain complex functor to a connected simplicial set $X\in\simp\Set$
and by going back to simplicial sets by using the derived functor $L\DGG(-)$
of our left adjoint $\DGG(-) = \Mor_{\EAlg}(-,\DGN^*(\Delta^{\bullet}))$,
is weakly-equivalent to the $p$-completion of the space~$X$ (under standard nilpotence and cohomological finiteness assumptions).
We aim to examine the application of this adjoint functor construction to our Segal $E_\infty$-Hopf cooperads.
We need to form resolutions of our objects in the category of Segal $E_\infty$-Hopf symmetric cooperads
with cofibrant $E_\infty$-algebras
as components to give a sense to the application of the derived functor $\DGG$
to our objects. We rely on the following observation.
\begin{lemm}
Let $R: \EAlg\rightarrow\EAlg$ be a functorial cofibrant replacement on the model category of $\EOp$-algebras (which exists because
the category of $\EOp$-algebras is cofibrantly generated).
If $\AOp\in\EOp\Hopf\Sigma\SegOp^c$ is a Segal $E_\infty$-Hopf symmetric pre-cooperad, then the collection $\ROp(\ttree) = R\AOp(\ttree)\in\EAlg$, $\ttree\in\Tree(r)$, $r>0$,
inherits a Segal $E_\infty$-Hopf symmetric pre-cooperad structure
by functoriality. If $\AOp$ satisfies the Segal condition and is such that the objects $\AOp(\ttree)$ are cofibrant as dg modules,
then $\ROp = R\AOp$ also satisfies the Segal condition,
and hence, forms a Segal $E_\infty$-Hopf symmetric cooperad.
\end{lemm}
\begin{proof}
The first claim, that the collection $\ROp(\ttree) = R\AOp(\ttree)\in\EAlg$, $\ttree\in\Tree(r)$, $r>0$,
inherits a Segal $E_\infty$-Hopf symmetric pre-cooperad structure,
is immediate.
To establish that $R\AOp$ also satisfies the Segal condition, we just use the result of Proposition~\ref{claim:Barratt-Eccles-algebra-coproducts}
(we merely need the extra assumption that the objects $\AOp(\ttree)$ are cofibrant as dg modules, like the cofibrant $\EOp$-algebras $R\AOp(\ttree)$,
to get that the weak-equivalences $R\AOp(\ttree)\xrightarrow{\sim}\AOp(\ttree)$
induce a weak-equivalence when we pass to a tensor product).
\end{proof}
The application of the functor $\DGG(-)$ to the Segal $E_\infty$-Hopf symmetric cooperad $R\AOp$
defined in this lemma returns a Segal symmetric operad in the category of simplicial sets,
where a Segal symmetric operad is the obvious counterpart,
in the category of simplicial sets,
of our notion of a Segal $E_\infty$-Hopf symmetric cooperad (we just have to dualize the definition).
We mention in the introduction of this paper that such structures are close to Cisinski--Moerdijk's notion of a dendroidal Segal space (see~\cite{CisinskiMoerdijkI}).
We can also check that every Segal symmetric operad
is weakly-equivalent to an operad
in the ordinary sense.
We can proceed along the lines of Cisinski--Moerdijk's dendroidal nerve construction or use a counterpart, in the category of simplicial sets,
of the $W$-constructions of~\S\ref{subsection:forgetful-strict}.
We then get that every Segal symmetric operad in the category of simplicial sets $\POp$
is connected to an ordinary symmetric operad $\DGW_{dec}(\POp)$
by a zigzag of natural weak-equivalences of Segal symmetric operads:
\begin{equation*}
\POp\xleftarrow{\sim}\DGW(\POp)\xrightarrow{\sim}\DGW_{dec}(\POp).
\end{equation*}
We record the following straightforward consequence of our statements to conclude this subsection.
\begin{prop}
We assume $\kk = \bar{\FF}_p$ and we consider normalized cochains with coefficients in this field, so that $\DGN^*(X,\bar{\FF}_p)$
represents the Mandell model of the space $X$ in the category of $\EOp$-algebras.
We use the notation $\DGG(-)$ for the adjoint functor of $\DGN^*(-)$, from the category of $\EOp$-algebras
to the category of simplicial sets.
\begin{enumerate}
\item
Let $\POp$ be a symmetric operad in simplicial sets. We consider the Segal $E_\infty$-Hopf symmetric cooperad $\AOp_{\POp} = \DGN^*(\POp(-))$
given by the construction of Proposition~\ref{proposition:cooperad-from-simplicial-set} and its resolution $R\AOp_{\POp}$
in the category of Segal $E_\infty$-Hopf symmetric cooperads.
If $\POp$ consists of connected nilpotent spaces with a cohomology $\DGH^*(-,\bar{\FF}_p)$ of finite dimension degreewise,
then the object $\DGG(R\AOp_{\POp})$ defines a Segal symmetric operad in simplicial sets such that $\DGG(R\AOp_{\POp})(\ttree) = \POp(\ttree)^{\wedge}_p$,
for each tree $\ttree\in\Tree(r)$, $r>0$, where we consider the $p$-completion of the space $\POp(\ttree)$.
Thus, if we apply our functor $\DGW_{dec}(-)$ to this Segal symmetric operad $\DGG(R\AOp_{\POp})$
then we get a model of the $p$-completion $\POp(-)^{\wedge}_p$
in the category of ordinary operads.
\item
If we have a pair of symmetric operads in simplicial sets $\POp$ and $\QOp$ which satisfy these connectedness, nilpotence and cohomological finiteness assumptions,
then the existence of a weak-equivalence $\AOp_{\POp}\sim\AOp_{\QOp}$ at the level of the category of Segal $E_\infty$-Hopf symmetric cooperads
implies the existence of a weak-equivalence $\POp(-)^{\wedge}_p\sim\QOp(-)^{\wedge}_p$
at the level of the $p$-completion of our operads.\qed
\end{enumerate}
\end{prop}
\section{The category of homotopy Segal $E_\infty$-Hopf cooperads}\label{sec:homotopy-segal-cooperads}
We study the homotopical version of Segal $E_\infty$-Hopf cooperads in this section. The informal idea is to require that the coproduct operators
satisfy the associativity relation $\rho_{\ttree\rightarrow\utree}\circ\rho_{\utree\rightarrow\stree} = \rho_{\ttree\rightarrow\stree}$
only up to homotopy. We construct a model to shape the homotopies that govern these associativity relations
and the compatibility relation between these homotopies and the facet operators.
We explain our definition of homotopy Segal $E_\infty$-Hopf cooperad with full details in the first subsection of this section.
We then examine the forgetting of $E_\infty$-algebra structures in the definition of this structure, as in our study of strict Segal $E_\infty$-Hopf cooperads,
and we examine the application of the cobar construction to homotopy Segal cooperads. We devote our second and third subsection to these topics.
We study a homotopy version of morphisms of Segal $E_\infty$-Hopf cooperads afterwards, in a fourth subsection,
and we eventually prove, in the fifth subsection, that every homotopy Segal $E_\infty$-Hopf cooperad
is weakly equivalent to a strict Segal $E_\infty$-Hopf cooperad.
We still take the category of algebras $\EAlg$ associated to the chain Barratt--Eccles operad $\EOp$
as a model of a category of $E_\infty$-algebras in dg modules
all along this section.
\subsection{The definition of homotopy Segal $E_\infty$-Hopf cooperads}
We define homotopy Segal $E_\infty$-Hopf cooperads essentially along the same plan as strict Segal $E_\infty$-Hopf cooperads.
We still define a notion of homotopy Segal $E_\infty$-Hopf pre-cooperad beforehand, as a structure equipped with operations that underlie the definition of a homotopy Segal $E_\infty$-Hopf cooperad.
We define the notion of a homotopy Segal $E_\infty$-Hopf cooperad afterwards, as a Segal $E_\infty$-Hopf pre-cooperad equipped with facet operators that satisfy the Segal condition.
In comparison to the definition of strict Segal $E_\infty$-Hopf cooperads, we mainly consider higher coproduct operators,
which we use to govern the homotopical associativity of the composition of coproduct operators.
We use a complex of cubical cochains, which we define from the cochain algebra of the interval $I = \DGN^*(\Delta^1)$, in our definition scheme
of this model of higher coproduct operators.
We explain the definition of the structure of this complex of cubical cochains in the next preliminary construction.
We address the definition of homotopy Segal $E_\infty$-Hopf cooperads afterwards.
We explain a definition of strict morphism of homotopy Segal $E_\infty$-Hopf cooperads to complete the objectives of this subsection. (These strict morphisms
are particular cases of the homotopy morphisms that we define in the fourth subsection of the section.)
\begin{constr}\label{constr:cubical-cochain-algebras}
We define our cubical cochain algebras $I^k$, $k\in\NN$, as the tensor powers of the cochain algebra of the interval:
\begin{equation*}
I^k = \underbrace{I\otimes\dots\otimes I}_k,\quad I = \DGN^*(\Delta^1).
\end{equation*}
We make the Barratt--Eccles operad act on these tensor products through its diagonal $\Delta: \EOp\rightarrow\EOp\otimes\EOp$
and its action on each factor $I = \DGN^*(\Delta^1)$
so that our objects $I^k$ inherit a natural $\EOp$-algebra structure (see Construction~\ref{constr:Barratt-Eccles-diagonal}
and Construction~\ref{constr:cubical-cochain-connection}).
We can identify the object $I^k$ with the cellular cochain complex of the $k$-cube $\square^k$.
We define face operators $d^i_{\epsilon}: I^k\rightarrow I^{k-1}$, for $1\leq i\leq k$ and $\epsilon\in\{0,1\}$,
and degeneracy operators $s^i: I^{k-1}\rightarrow I^k$, for $0\leq i\leq k$,
which reflect classical face and degeneracy operations
on the topological cubes $\square^k$.
We determine these operators from the maps
\begin{equation*}
\DGN^*(\Delta^1)\xrightarrow{d_{\epsilon}}\DGN^*(\Delta^0) = \kk,\quad\epsilon\in\{0,1\},
\quad\text{and}
\quad\kk = \DGN^*(\Delta^0)\xrightarrow{s_0}\DGN^*(\Delta^1),
\end{equation*}
induced by simplicial coface and codegeneracy operators of the $1$-simplex $d^{\epsilon}: \Delta^0\rightarrow\Delta^1$
and $s^0: \Delta^1\rightarrow\Delta^0$,
and from the connection of Construction~\ref{constr:cubical-cochain-connection}
\begin{equation*}
\DGN^*(\Delta^1)\xrightarrow{\nabla^*}\DGN^*(\Delta^1)\otimes\DGN^*(\Delta^1),
\end{equation*}
which we associate to the simplicial map $\min: \Delta^1\times\Delta^1\rightarrow\Delta^1$
such that $\min: (s,t)\mapsto\min(s,t)$ on topological realization.
Recall briefly that we have an identity $\DGN^*(\Delta^1) = \kk\underline{0}^{\sharp}\oplus\kk\underline{1}^{\sharp}\oplus\kk\underline{01}^{\sharp}$,
where $\underline{0}^{\sharp}$ and $\underline{1}^{\sharp}$ denote elements of (cohomological) degree $0$,
dual to the classes of the vertices $\underline{0},\underline{1}\in\Delta^1$
in the normal chain complex $\DGN_*(\Delta^1)$,
whereas $\underline{01}^{\sharp}$ denotes an element of (cohomological) degree $1$
dual to the class of the fundamental simplex $\underline{01}\in\Delta^1$. This cochain complex $\DGN^*(\Delta^1)$
is equipped with the differential such that $\delta(\underline{0}^{\sharp}) = - \underline{01}^{\sharp}$
and $\delta(\underline{1}^{\sharp}) = \underline{01}^{\sharp}$ (note that these signs are opposite, so that the element $\underline{0}^{\sharp}+\underline{1}^{\sharp}$ is a cocycle).
The face operators $d_0,d_1: \DGN^*(\Delta^1)\rightarrow\DGN^*(\Delta^0) = \kk$
are defined by the formulas $d_0(\underline{0}^{\sharp}) = 0$, $d_0(\underline{1}^{\sharp}) = 1$, $d_1(\underline{0}^{\sharp}) = 1$, $d_1(\underline{1}^{\sharp}) = 0$,
and $d_0(\underline{01}^{\sharp}) = d_1(\underline{01}^{\sharp}) = 0$,
while the degeneracy operator $s_0: \kk = \DGN^*(\Delta^0)\rightarrow\DGN^*(\Delta^1)$ is defined by the formula $s_0(1) = \underline{0}^{\sharp}+\underline{1}^{\sharp}$.
We refer to Construction~\ref{constr:cubical-cochain-connection} for the explicit definition of the connection $\nabla^*$ on basis elements.
We mainly use that this connection satisfies relations of the form $(d_1\otimes\id)\circ\nabla^* = s_0 d_1 = (\id\otimes d_1)\circ\nabla^*$
and $(d_0\otimes\id)\circ\nabla^* = \id = (\id\otimes d_0)\circ\nabla^*$,
which reflect the identities $\min\circ(d^1\times\id) = d^1 s^0 = \min\circ(\id\times d^1)$
and $\min\circ(d^0\times\id) = \id = \min\circ(\id\times d^0)$
at the topological level.
We number our factors from right to left in our cubical cochain algebras
\begin{equation*}
I^k = \underset{k}{\DGN^*(\Delta^1)}\otimes\dots\otimes\underset{1}{\DGN^*(\Delta^1)}
\end{equation*}
and we use this numbering convention to index our face and degeneracy operators (the superscript in the notation of the faces $d^i_{\epsilon}$
indicates the factor of this tensor product on which we apply a simplicial face operator $d_{\epsilon}$
and the superscript in the notation of the degeneracies $s^j$ similarly indicates the factor of the tensor product $I^{k-1}$
on which we apply a simplicial degeneracy or a connection operator).
We precisely define the face operators of our cubical cochain algebras $d^i_{\epsilon}: I^k\rightarrow I^{k-1}$, $i = 1,\dots,k$, $\epsilon = 0,1$, by the formula:
\begin{align*}
d^i_{\epsilon} & = \id^{\otimes k-i}\otimes d_{\epsilon}\otimes\id^{\otimes i-1}\\
\intertext{and our degeneracy operators $s^j: I^{k-1}\rightarrow I^k$, $j = 0,\dots,k$, by the formulas:}
s^0 & = \id^{\otimes k-1}\otimes s_0, \\
s^j & = \id^{\otimes k-j-1}\otimes\nabla^*\otimes\id^{\otimes j-1}\quad\text{for $j = 1,\dots,k-1$}, \\
s^k & = s_0\otimes\id^{\otimes k-1}.
\end{align*}
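To make the numbering conventions concrete, we can spell out the case $k = 2$, where $I^2 = \DGN^*(\Delta^1)\otimes\DGN^*(\Delta^1)$. We get the face operators
\begin{equation*}
d^1_{\epsilon} = \id\otimes d_{\epsilon}\quad\text{and}\quad d^2_{\epsilon} = d_{\epsilon}\otimes\id,\quad\text{for $\epsilon\in\{0,1\}$},
\end{equation*}
together with the degeneracy operators
\begin{equation*}
s^0 = \id\otimes s_0,\quad s^1 = \nabla^*,\quad s^2 = s_0\otimes\id.
\end{equation*}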
\end{constr}
We are now ready to define our main objects.
\begin{defn}\label{definition:homotopy-E-infinity-cooperad}
We call homotopy Segal $E_\infty$-Hopf shuffle pre-cooperad the structure defined by a collection of $\EOp$-algebras
\begin{equation*}
\AOp(\ttree)\in\EAlg,\quad\text{$\ttree\in\Tree(r)$, $r>0$},
\end{equation*}
equipped with
\begin{itemize}
\item
homotopy coproduct operators
\begin{equation*}
\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I^k,
\end{equation*}
defined as morphisms of $\EOp$-algebras, for all composable sequences of tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$, $k\geq 0$,
and which satisfy the face and degeneracy relations depicted in Figure~\ref{homotopy-E-infinity-cooperad:0-faces}-\ref{homotopy-E-infinity-cooperad:degeneracies},
\item
together with facet operators
\begin{equation*}
i_{\sigmatree,\stree}: \AOp(\sigmatree)\rightarrow\AOp(\stree),
\end{equation*}
defined as morphisms of $\EOp$-algebras again, for all subtree embeddings $\sigmatree\subset\stree$,
and which satisfy the usual functoriality relations $i_{\stree,\stree} = \id_{\stree}$
and $i_{\thetatree,\stree}\circ i_{\sigmatree,\thetatree} = i_{\sigmatree,\stree}$ for all $\sigmatree\subset\thetatree\subset\stree$,
\item
and where we also assume the verification of compatibility relations between the facet operators and the coproduct operators,
which we express by the commutativity of the diagram of Figure~\ref{homotopy-E-infinity-cooperad:facet-compatibility}.
\end{itemize}
We again say that a homotopy Segal $E_\infty$-Hopf shuffle pre-cooperad $\AOp$ is a homotopy Segal $E_\infty$-Hopf shuffle cooperad when the facet operators satisfy a Segal condition,
which reads exactly as in the context of strict Segal $E_\infty$-Hopf shuffle cooperads:
\begin{enumerate}
\item[(*)]
The facet operators $i_{\sigmatree_v,\ttree}: \AOp(\sigmatree_v)\rightarrow\AOp(\ttree)$
associated to a tree decomposition $\ttree = \lambda_{\stree}(\sigmatree_v, v\in V(\stree))$
induce a weak-equivalence
\begin{equation*}
i_{\lambda_{\stree}(\sigmatree_*)}: \bigvee_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{\sim}\AOp(\ttree)
\end{equation*}
when we pass to the coproduct of the objects $\AOp(\sigmatree_v)$
in the category of $\EOp$-algebras.
\end{enumerate}
We also define a homotopy Segal $E_\infty$-Hopf symmetric (pre-)cooperad as a homotopy Segal $E_\infty$-Hopf shuffle pre-cooperad $\AOp$
equipped with an action of permutations, given by operators $s^*: \AOp(s\ttree)\rightarrow\AOp(\ttree)$,
for $s\in\Sigma_r$ and $\ttree\in\Tree(r)$,
which intertwine the facet operators and the homotopy coproduct operators attached to our object.
\end{defn}
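Let us make the case $k = 1$ of the homotopy coproduct operators explicit. For a composable pair of tree morphisms $\ttree\rightarrow\ttree_1\rightarrow\stree$, the face relations of Figure~\ref{homotopy-E-infinity-cooperad:0-faces}-\ref{homotopy-E-infinity-cooperad:1-faces} give:
\begin{equation*}
(\id\otimes d^1_0)\circ\rho_{\ttree\rightarrow\ttree_1\rightarrow\stree} = \rho_{\ttree\rightarrow\stree}
\quad\text{and}\quad
(\id\otimes d^1_1)\circ\rho_{\ttree\rightarrow\ttree_1\rightarrow\stree} = \rho_{\ttree\rightarrow\ttree_1}\circ\rho_{\ttree_1\rightarrow\stree},
\end{equation*}
where we use the identification $I^0 = \kk$. Thus the operator $\rho_{\ttree\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I$ provides a homotopy between the direct coproduct operator $\rho_{\ttree\rightarrow\stree}$ and the composite $\rho_{\ttree\rightarrow\ttree_1}\circ\rho_{\ttree_1\rightarrow\stree}$, in accordance with the informal idea explained in the introduction of this section.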
Note that the statement of Proposition~\ref{proposition:Segal-condition}, where we reduce the verification of the Segal condition to various particular cases,
obviously holds for homotopy Segal $E_\infty$-Hopf cooperads too.
\begin{figure}[p]
\ffigbox
{\caption{The compatibility of homotopy coproducts with $0$-faces.
The diagram commutes for all sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$
and for all $1\leq i\leq k$, where $\widehat{\ttree_i}$ means that we delete the node $\ttree_i$
and we replace the morphisms $\ttree_{i+1}\rightarrow\ttree_i\rightarrow\ttree_{i-1}$
by their composite $\ttree_{i+1}\rightarrow \ttree_{i-1}$.}\label{homotopy-E-infinity-cooperad:0-faces}}
{\centerline{\xymatrixcolsep{7pc}\xymatrix{ \AOp(\stree)
\ar[r]^{\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}}
\ar[dr]_{\rho_{\ttree\rightarrow\dots\widehat{\ttree_i}\dots\rightarrow\stree}} &
\AOp(\ttree)\otimes I^k\ar[d]^{\id\otimes d^i_0} \\
& \AOp(\ttree)\otimes I^{k-1} }
}}
\ffigbox
{\caption{The compatibility of homotopy coproducts with $1$-faces.
The diagram commutes for all sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$
and for all $1\leq i\leq k$.}\label{homotopy-E-infinity-cooperad:1-faces}}
{\centerline{\xymatrixcolsep{7pc}\xymatrix{ \AOp(\stree)
\ar[r]^{\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}}
\ar[dd]_{\rho_{\ttree_i\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\AOp(\ttree)\otimes I^k\ar[d]^{\id\otimes d^i_1} \\
& \AOp(\ttree)\otimes I^{k-1} \\
\AOp(\ttree_i)\otimes I^{i-1}\ar[r]_-{\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_i}\otimes\id} &
\AOp(\ttree)\otimes I^{k-i}\otimes I^{i-1}\ar[u]_{\simeq} }
}}
\ffigbox
{\caption{The compatibility of homotopy coproducts with degeneracies.
The diagram commutes for all sequences of composable tree morphisms $\ttree\rightarrow\ttree_{k-1}\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$
and for all $0\leq j\leq k$ (with the convention that $\ttree_0 = \stree$ in the case $j = 0$
and $\ttree_k = \ttree$ in the case $j = k$).}\label{homotopy-E-infinity-cooperad:degeneracies}}
{\centerline{\xymatrixcolsep{7pc}\xymatrix{ \AOp(\stree)
\ar[r]^-{\rho_{\ttree\rightarrow\ttree_{k-1}\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} \ar[rd]_{\rho_{\ttree\rightarrow\ttree_{k-1}\rightarrow\dots\ttree_j=\ttree_j\dots\rightarrow\ttree_1\rightarrow\stree}} &
\AOp(\ttree)\otimes I^{k-1}\ar[d]^{\id\otimes s^j} \\
& \AOp(\ttree)\otimes I^k }
}}
\ffigbox
{\caption{The compatibility between coproducts and facet operators.
The diagram commutes for all sequences of composable tree morphisms $\ttree\xrightarrow{f_k}\ttree_k\xrightarrow{f_{k-1}}\cdots\xrightarrow{f_1}\ttree_1\xrightarrow{f_0}\stree$
and for all subtrees $\sigmatree\subset\stree$.}\label{homotopy-E-infinity-cooperad:facet-compatibility}}
{\centerline{\xymatrixcolsep{15pc}\xymatrix{ \AOp(\stree)\ar[r]^{\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\AOp(\ttree)\otimes I^k \\
\AOp(\sigmatree)
\ar@{.>}[r]_-{\rho_{(f_0\dots f_k)^{-1}(\sigmatree)\rightarrow(f_0\dots f_{k-1})^{-1}(\sigmatree)\rightarrow\dots\rightarrow f_0^{-1}(\sigmatree)\rightarrow\sigmatree}}
\ar[u]^{i_{\sigmatree,\stree}} &
\AOp((f_0\dots f_k)^{-1}(\sigmatree))\otimes I^k\ar@{.>}[u]^{i_{(f_0\dots f_k)^{-1}(\sigmatree),\ttree}\otimes\id} }
}}
\end{figure}
\begin{figure}[p]
\ffigbox
{\caption{The coherence of the face and degeneracy relations of the coproduct operators with respect to the relation $s^j d^i_0 = \id$
between the cubical face and degeneracy operators
for $i = j,j+1$.}\label{homotopy-E-infinity-cooperad:0-face-coherence}}
{\centerline{\xymatrixcolsep{10pc}\xymatrix{ & \AOp(\ttree)\otimes I^{k-1}\ar[d]_{\id\otimes s^j}\ar@/^2pc/[dd]^{\id} \\
\AOp(\stree)
\ar[ru]^{\rho_{\ttree\rightarrow\ttree_{k-1}\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} \ar[r]_{\rho_{\ttree\rightarrow\dots\rightarrow\ttree_j=\ttree_j\rightarrow\dots\rightarrow\stree}} \ar[rd]_{\rho_{\ttree\rightarrow\ttree_{k-1}\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\AOp(\ttree)\otimes I^k\ar[d]_{\id\otimes d^i_0} \\
& \AOp(\ttree)\otimes I^{k-1} }
}}
\ffigbox
{\caption{The coherence of the face and degeneracy relations of the coproduct operators with respect to the relation $s^j d^i_1 = s^0\otimes\id_{I^j}$
for $i = j$.}\label{homotopy-E-infinity-cooperad:1-face-coherence-left}}
{\centerline{\xymatrixcolsep{10pc}\xymatrix{ \AOp(\stree)
\ar@/_3pc/[ddd]_{\rho_{\ttree\rightarrow\dots\rightarrow\stree}}
\ar[r]^{\rho_{\ttree\rightarrow\ttree_{k-1}\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} \ar[rd]_{\rho_{\ttree\rightarrow\dots\rightarrow\ttree_j=\ttree_j\rightarrow\dots\rightarrow\stree}}
\ar[dd]^{\rho_{\ttree_j\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\AOp(\ttree)\otimes I^{k-1}\ar[d]^{\id\otimes s^j} \\
& \AOp(\ttree)\otimes I^k \ar[d]^{\id\otimes d^j_1} \\
\AOp(\ttree_j)\otimes I^{j-1}
\ar[r]^{\rho_{\ttree\rightarrow\cdots\rightarrow\ttree_j=\ttree_j}\otimes\id}
\ar[dr]_{\rho_{\ttree\rightarrow\dots\rightarrow\ttree_j}} &
\AOp(\ttree)\otimes I^{k-1} \\
\AOp(\ttree)\otimes I^{k-1}\ar[r]_{\id\otimes d^j_1} &
\AOp(\ttree)\otimes I^{k-j-1}\otimes I^{j-1}\ar[u]_{\id\otimes s^0\otimes\id} }
}}
\ffigbox
{\caption{The coherence of the face and degeneracy relations of the coproduct operators with respect to the relation $s^j d^i_1 = \id_{I^{k-j}}\otimes s^j$
for $i = j+1$.}\label{homotopy-E-infinity-cooperad:1-face-coherence-right}}
{\centerline{\xymatrixcolsep{10pc}\xymatrix{ \AOp(\stree)
\ar@/_4pc/[dddd]_{\rho_{\ttree\rightarrow\dots\rightarrow\stree}}
\ar@/_2pc/[ddd]|{\rho_{\ttree_j\rightarrow\dots\rightarrow\stree}}
\ar[r]^{\rho_{\ttree\rightarrow\ttree_{k-1}\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} \ar[rd]_{\rho_{\ttree\rightarrow\dots\rightarrow\ttree_j=\ttree_j\rightarrow\dots\rightarrow\stree}}
\ar[dd]^{\rho_{\ttree_j=\ttree_j\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\AOp(\ttree)\otimes I^{k-1}\ar[d]^{\id\otimes s^j} \\
& \AOp(\ttree)\otimes I^k\ar[d]^{\id\otimes d^{j+1}_1} \\
\AOp(\ttree_j)\otimes I^j\ar[r]^{\rho_{\ttree\rightarrow\dots\rightarrow\ttree_j}\otimes\id} &
\AOp(\ttree)\otimes I^{k-1} \\
\AOp(\ttree_j)\otimes I^{j-1}
\ar[r]_{\rho_{\ttree\rightarrow\dots\rightarrow\ttree_j}\otimes\id}
\ar[u]_{\id\otimes s^j} &
\AOp(\ttree)\otimes I^{k-j-1}\otimes I^{j-1}\ar[u]_{\id\otimes\id\otimes s^j} \\
\AOp(\ttree)\otimes I^{k-1}\ar[ru]_{\id\otimes d^j_1} }
}}
\end{figure}
\afterpage{\clearpage}
\begin{remark}\label{remark:compatibility-relations}
The faces and degeneracy operators on our complex of cubical cochain algebras
satisfy the following system of identities:
\begin{align*}
d^j_{\epsilon} d^i_{\eta} & = d^i_{\eta} d^{j-1}_{\epsilon},\quad\text{for $i<j$, $\epsilon,\eta\in\{0,1\}$}, \\
s^j d^i_{\epsilon} & = \begin{cases} d^i_{\epsilon} s^{j-1}, & \text{for $i<j$, $\epsilon\in\{0,1\}$}, \\
\id, & \text{for $i = j,j+1$ and $\epsilon = 0$}, \\
s^0\otimes\id_{I^j}, & \text{for $i = j$, $\epsilon = 1$, using $I^{k-1} = I^{k-j-1}\otimes I^j$}, \\
\id_{I^{k-j}}\otimes s^j, & \text{for $i = j+1$, $\epsilon = 1$, using $I^{k-1} = I^{k-j}\otimes I^{j-1}$}, \\
d^{i-1}_{\epsilon} s^j, & \text{for $i>j+1$, $\epsilon\in\{0,1\}$},
\end{cases} \\
s^j s^i & = s^i s^{j+1},\quad\text{for $i\leq j$}.
\end{align*}
We easily check that a multiple application of the face and degeneracy relations of Figure~\ref{homotopy-E-infinity-cooperad:0-faces}-\ref{homotopy-E-infinity-cooperad:degeneracies}
for the coproduct operators of homotopy Segal $E_\infty$-Hopf shuffle cooperads leads to coherent results
with respect to these identities.
For instance, we have the double face relation
\begin{multline*}
(\id\otimes d^j_0 d^i_0)\circ\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}
= \rho_{\ttree\rightarrow\ttree_k\rightarrow\cdots\widehat{\ttree_j}\cdots\widehat{\ttree_i}\cdots\rightarrow\ttree_1\rightarrow\stree}\\
= (\id\otimes d^i_0 d^{j-1}_0)\circ\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree},
\end{multline*}
which expresses the coherence of the face relations of Figure~\ref{homotopy-E-infinity-cooperad:0-faces}
with respect to the double $0$-face identity $d^j_0 d^i_0 = d^i_0 d^{j-1}_0$
in our complex of cubical cochain algebras.
We get similar easy results for the other double face identities $d^j_{\epsilon} d^i_{\eta} = d^i_{\eta} d^{j-1}_{\epsilon}$
and the other commutation relations between the face and degeneracy operators $s^j d^i_{\epsilon} = d^i_{\epsilon} s^{j-1}$, $s^j d^i_{\epsilon} = d^{i-1}_{\epsilon} s^j$,
and $s^j s^i = s^i s^{j+1}$,
while the coherence of the face and degeneracy relations of the coproduct operators
with respect to the relations $s^j d^i_0 = \id$, for $i = j,j+1$,
and $s^j d^j_1 = s^0\otimes\id_{I^j}$, $s^j d^{j+1}_1 = \id_{I^{k-j}}\otimes s^j$
follows from the commutativity of the diagrams of Figure~\ref{homotopy-E-infinity-cooperad:0-face-coherence}-\ref{homotopy-E-infinity-cooperad:1-face-coherence-right}.
\end{remark}
We have the following obvious notion of strict morphism of homotopy Segal $E_\infty$-Hopf shuffle (pre-)cooperads
and of homotopy Segal $E_\infty$-Hopf symmetric (pre-)cooperads,
which generalizes the definition of a morphism of strict Segal $E_\infty$-Hopf shuffle (pre-)cooperads
and of strict Segal $E_\infty$-Hopf symmetric (pre-)cooperads
in~\S\ref{sec:strict-segal-cooperads}.
\begin{defn}\label{definition:homotopy-E-infinity-cooperad-morphism}
A (strict) morphism of homotopy Segal $E_\infty$-Hopf shuffle (pre-)cooperads $\phi: \AOp\rightarrow\BOp$
is a collection of $\EOp$-algebra morphisms
\begin{equation*}
\phi_{\ttree}: \AOp(\ttree)\rightarrow\BOp(\ttree),\quad\text{$\ttree\in\Tree(r)$, $r>0$},
\end{equation*}
which
\begin{enumerate}
\item
preserve the action of all coproduct operators on our objects,
in the sense that the diagram
\begin{equation*}
\xymatrix{ \AOp(\stree)\ar[r]^{\phi_{\stree}}\ar[d]_{\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\BOp(\stree)\ar[d]^{\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} \\
\AOp(\ttree)\otimes I^k\ar[r]^{\phi_{\ttree}\otimes\id} & \BOp(\ttree)\otimes I^k }
\end{equation*}
commutes, for all sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$,
\item
and the action of facet operators, so that we have the same commutative diagram as in the case of morphisms of strict Segal $E_\infty$-Hopf shuffle (pre-)cooperads:
\begin{equation*}
\xymatrix{ \AOp(\stree)\ar[r]^{\phi_{\stree}} & \BOp(\stree) \\
\AOp(\sigmatree)\ar[r]^{\phi_{\sigmatree}}\ar[u]^{i_{\sigmatree,\stree}} &
\BOp(\sigmatree)\ar[u]_{i_{\sigmatree,\stree}} },
\end{equation*}
for all subtree embeddings $\sigmatree\subset\stree$.
\end{enumerate}
If $\AOp$ and $\BOp$ are homotopy Segal $E_\infty$-Hopf symmetric (pre-)cooperads, then we say that $\phi: \AOp\rightarrow\BOp$ is a morphism of Segal $E_\infty$-Hopf symmetric (pre-)cooperads
when $\phi$ also preserves the action of permutations on our objects (we express this condition by the same commutative diagram as in the case of strict $E_\infty$-Hopf cooperads).
\end{defn}
These strict morphisms of homotopy Segal $E_\infty$-Hopf shuffle (pre-)cooperads can obviously be composed,
as can the strict morphisms of homotopy Segal $E_\infty$-Hopf symmetric (pre-)cooperads,
so that we can form a category of homotopy Segal $E_\infty$-Hopf shuffle (pre-)cooperads
and a category of homotopy Segal $E_\infty$-Hopf symmetric (pre-)cooperads,
with these strict morphisms as morphisms.
In what follows, we adopt the notation $\EOp\Hopf\sh\hSegOp^c$ for the category of homotopy Segal $E_\infty$-Hopf shuffle cooperads
and the notation $\EOp\Hopf\Sigma\hSegOp^c$ for the category of homotopy Segal $E_\infty$-Hopf symmetric cooperads.
We can modify the above definition to assume that the preservation of coproduct operators holds up to homotopy only, just as we assume
that the coproduct operators satisfy associativity relations up to homotopy in a homotopy Segal $E_\infty$-Hopf pre-cooperad.
This idea gives the notion of homotopy morphism, which we study in Subsection~\ref{subsection:homotopy-morphisms}.
\subsection{The forgetting of $E_\infty$-structures}
We now study the structure in dg modules that we obtain by forgetting the $E_\infty$-algebra structure attached to each object
in the definition of a homotopy Segal $E_\infty$-Hopf cooperad.
We follow the same plan as in Subsection~\ref{subsec:strict-segal-cooperads},
where we examine the parallel forgetting of $E_\infty$-algebra structures
in strict Segal $E_\infty$-Hopf cooperads.
In the previous subsection, we assume that the associativity of the coproduct operators only holds up to homotopy for the definition of a homotopy Segal $E_\infty$-Hopf cooperad,
but we keep the same notion of facet operators as in the case of strict Segal $E_\infty$-cooperads.
In the case of homotopy Segal cooperads in dg modules, we retain the homotopy associativity relation of the coproduct operators of homotopy Segal $E_\infty$-Hopf cooperads,
and we retrieve the expression of the Segal map that we obtained from the structure of the facet operators
in the definition of strict Segal dg cooperads.
We again need to assume that the vertices of our trees are totally ordered in order to make the construction
of the forgetful functor from homotopy Segal $E_\infty$-Hopf cooperads
to homotopy Segal dg cooperads work.
For this reason, we restrict ourselves to Segal shuffle cooperads all along this subsection (as in our study of strict Segal cooperads in dg modules).
\begin{defn}\label{definition:homotopy-tree-shaped-cooperad}
We call homotopy Segal shuffle dg pre-cooperad the structure defined by a collection of dg modules
\begin{equation*}
\AOp(\ttree)\in\dg\Mod,\quad\text{$\ttree\in\Tree(r)$, $r>0$},
\end{equation*}
equipped with
\begin{itemize}
\item
homotopy coproduct operators
\begin{equation*}
\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I^k,
\end{equation*}
defined as morphisms of dg modules, for all composable sequences of tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$, $k\geq 0$,
and which satisfy the same face and degeneracy relations,
expressed by the commutative diagrams of Figure~\ref{homotopy-E-infinity-cooperad:0-faces}-\ref{homotopy-E-infinity-cooperad:degeneracies},
as the homotopy coproduct operators of homotopy Segal $E_\infty$-Hopf shuffle cooperads,
\item
together with Segal maps
\begin{equation*}
i_{\lambda_{\stree}(\sigmatree_*)}: \bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v)\rightarrow\AOp(\ttree),
\end{equation*}
defined as morphisms of dg modules, for all tree decompositions $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$,
and which satisfy the same functoriality relations
as the Segal maps of strict Segal cooperads in dg modules.
Namely, for the trivial decomposition $\ttree = \lambda_{\ytree}(\ttree)$, we have $i_{\lambda_{\ytree}(\ttree)} = \id_{\AOp(\ttree)}$,
and for nested decompositions $\ttree = \lambda_{\utree}(\thetatree_u,u\in V(\utree))$ and $\thetatree_u = \lambda_{\stree_u}(\sigmatree_v,v\in V(\stree_u))$, $u\in V(\utree)$,
we have
\begin{equation*}
i_{\lambda_{\utree}(\thetatree_*)}\circ(\bigotimes_{u\in V(\utree)}i_{\lambda_{\stree_u}(\sigmatree_*)}) = i_{\lambda_{\stree}(\sigmatree_*)},
\end{equation*}
where we again consider the composite decomposition $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$
with $\stree = \lambda_{\utree}(\stree_u,u\in V(\utree))$.
\item
We also require that the Segal maps and the higher coproduct operators satisfy the compatibility relations
depicted in Figure~\ref{homotopy-dg-cooperad:Segal-coproduct-compatibility}.
\end{itemize}
We also say that a homotopy Segal shuffle dg pre-cooperad $\AOp$ is a homotopy Segal shuffle dg cooperad when the Segal maps
satisfy the following Segal condition (the same Segal condition as in the case of strict Segal dg cooperads):
\begin{enumerate}
\item[(*)]
The Segal map $i_{\lambda_{\stree}(\sigmatree_*)}$ is a weak-equivalence
\begin{equation*}
i_{\lambda_{\stree}(\sigmatree_*)}: \bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{\sim}\AOp(\ttree),
\end{equation*}
for every decomposition $\ttree = \lambda_{\stree}(\sigmatree_v, v\in V(\stree))$.
\end{enumerate}
We still consider a notion of strict morphism of homotopy Segal shuffle dg (pre-)cooperads, which we obviously define
as a collection of dg module morphisms $\phi_{\ttree}: \AOp(\ttree)\rightarrow\BOp(\ttree) $
that preserve all homotopy coproduct operators $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}$
and the Segal maps $i_{\lambda_{\stree}(\sigmatree_*)}$.
We use the notation $\dg\sh\hSegOp^c$ for the category of homotopy Segal shuffle dg cooperads, which we equip with this notion of morphism.
\end{defn}
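To fix the picture given by this definition, we can spell out the low-dimensional instances of the homotopy coproduct operators (a straightforward specialization of the definition, recorded for convenience). For $k = 0$, we just have morphisms of dg modules $\rho_{\ttree\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)$, as in the strict case, since $I^0 = \kk$. For $k = 1$, we have operators
\begin{equation*}
\rho_{\ttree\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I,
\end{equation*}
whose $0$-face recovers the direct coproduct $\rho_{\ttree\rightarrow\stree}$ and whose $1$-face recovers the composite $\rho_{\ttree\rightarrow\ttree_1}\circ\rho_{\ttree_1\rightarrow\stree}$, so that the interval $I = \DGN^*(\Delta^1)$ records the homotopy associativity of the coproduct operators.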
\begin{figure}[t]
\ffigbox
{\caption{The compatibility between coproducts and Segal maps.
The diagram commutes for all sequences of composable tree morphisms $\ttree\xrightarrow{f_k}\ttree_k\xrightarrow{f_{k-1}}\cdots\xrightarrow{f_1}\ttree_1\xrightarrow{f_0}\stree$
and for all decompositions $\stree = \lambda_{\utree}(\sigmatree_v, v \in V(\utree))$.
The map $\mu: (\bigotimes_{v\in V(\utree)}I^k)\rightarrow I^k$ in the right-hand vertical arrow is given by the associative product
of the $\EOp$-algebra $I^k$.}\label{homotopy-dg-cooperad:Segal-coproduct-compatibility}}
{\centerline{\xymatrixcolsep{15pc}\xymatrix{ \AOp(\stree)
\ar[r]^{\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\AOp(\ttree)\otimes I^k \\
& \left(\bigotimes_{v\in V(\utree)}\AOp(\sigmatree_v)\right)\otimes\left(\bigotimes_{v\in V(\utree)}I^k\right)
\ar@{.>}[u]_{i_{\lambda_{\utree}((f_0 \dots f_k)^{-1}(\sigmatree_*))}\otimes\mu} \\
\bigotimes_{v\in V(\utree)}\AOp(\sigmatree_v)
\ar@{.>}[r]_-{\bigotimes_v\rho_{(f_0\dots f_k)^{-1}(\sigmatree_v)\rightarrow(f_0\dots f_{k-1})^{-1}(\sigmatree_v)\rightarrow\dots\rightarrow\sigmatree_v}}
\ar[uu]_-{i_{\lambda_{\utree}(\sigmatree_*)}} &
\bigotimes_{v\in V(\utree)}\left(\AOp((f_0 \dots f_k)^{-1}(\sigmatree_v))\otimes I^k\right)\ar@{.>}[u]_{\simeq} }
}}
\end{figure}
There is a forgetful functor from the category of homotopy Segal $E_\infty$-Hopf cooperads to the category of homotopy Segal shuffle dg cooperads,
which is similar to the one that we have in the strict case.
To be more precise, we have the following proposition, which represents the homotopy counterpart of the result of Proposition~\ref{proposition:forgetful-strict}.
\begin{prop}\label{proposition:forgetful-homotopy}
Let $\AOp$ be a homotopy Segal $E_\infty$-Hopf shuffle cooperad,
with coproduct operators $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I^k$
and facet operators $i_{\sigmatree,\stree}: \AOp(\sigmatree)\rightarrow\AOp(\stree)$.
The collection $\AOp(\ttree)$, $\ttree\in\Tree(r)$, equipped with the coproduct operators inherited from $\AOp$ and the Segal maps
given by the following composites
\begin{equation*}
\bigotimes_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{\EM}\bigvee_{v\in V(\stree)}\AOp(\sigmatree_v)\xrightarrow{i_{\lambda_{\stree}(\sigmatree_*)}}\AOp(\ttree)
\end{equation*}
(the same composites as in Proposition~\ref{proposition:forgetful-strict}),
for all tree decompositions $\ttree = \lambda_{\stree}(\sigmatree_v,v\in V(\stree))$,
is a homotopy Segal shuffle dg cooperad.
\end{prop}
\begin{proof}
The face and degeneracy relations for the homotopy coproduct operators are directly inherited from the homotopy Segal $E_\infty$-Hopf shuffle cooperad $\AOp$,
as we do not change the coproduct operators in our forgetful operation.
The functoriality relations of the Segal maps
and the Segal condition
follow from the same arguments as in Proposition~\ref{proposition:forgetful-strict},
since the definition of the Segal maps
is the same as in the strict case.
The compatibility between the homotopy coproduct operators and the Segal maps follows from the commutativity of the following diagram,
\begin{equation*}
\xymatrix{ \bigotimes_{v \in V(\utree)}\AOp(\sigmatree_v)
\ar[d]|{\bigotimes_{v \in V(\utree)}\rho_{(f_0\dots f_k)^{-1}(\sigmatree_v)\rightarrow\dots\rightarrow f_0^{-1}(\sigmatree_v)\rightarrow\sigmatree_v}}
\ar[r]^{\EM} &
\bigvee_{v\in V(\utree)} \AOp(\sigmatree_v)
\ar[d]|{\bigvee_{v \in V(\utree)}\rho_{(f_0\dots f_k)^{-1}(\sigmatree_v)\rightarrow\dots\rightarrow f_0^{-1}(\sigmatree_v)\rightarrow\sigmatree_v}} \\
\bigotimes_{v \in V(\utree)}\left(\AOp((f_0\dots f_k)^{-1}(\sigmatree_v))\otimes I^k\right)
\ar[r]^{\EM}\ar[d]_{\simeq} &
\bigvee_{v\in V(\utree)}\left(\AOp((f_0\dots f_k)^{-1}(\sigmatree_v))\otimes I^k\right)
\ar[d]|{\sum_v\left(i_{(f_0\dots f_k)^{-1}(\sigmatree_v),\ttree}\otimes\id\right)} \\
\left(\bigotimes_{v\in V(\utree)}\AOp((f_0\dots f_k)^{-1}(\sigmatree_v))\right)\otimes\left(\bigotimes_{v\in V(\utree)}I^k\right)
\ar[d]_{\EM\otimes\EM} &
\AOp(\ttree)\otimes I^k \\
\left(\bigvee_{v\in V(\utree)}\AOp((f_0\dots f_k)^{-1}(\sigmatree_v))\right)\otimes\left(\bigvee_{v\in V(\utree)}I^k\right)
\ar[ru]_-{i_{\lambda_{\utree}(\sigmatree_*)}\otimes\nabla} }
\end{equation*}
where $\nabla = \sum_v\id: \bigvee_v I^k\rightarrow I^k$ denotes the codiagonal of the $\EOp$-algebra $I^k$,
we use that the composite $\nabla\circ\EM: \bigotimes_v I^k\rightarrow I^k$ is identified with the associative product $\mu: \bigotimes_v I^k\rightarrow I^k$
and we consider the Segal map
of $\EOp$-algebras $i_{\lambda_{\utree}(\sigmatree_*)}: \bigvee_{v\in V(\utree)}\AOp((f_0\dots f_k)^{-1}(\sigmatree_v))\rightarrow\AOp(\ttree)$.
\end{proof}
The normalized cochain complex of the $k$-cube $I^k = \DGN^*(\Delta^1)^{\otimes k}$ is identified with the sum of the top dimensional element $\underline{01}^\sharp{}^{\otimes k}$
with the image of cubical face operators. We use this observation to determine the homotopy coproducts of homotopy Segal dg cooperads
in the following lemma.
For a morphism of dg modules $\alpha: X\rightarrow Y\otimes I^k$, we let $\alpha^{\square}: X\rightarrow Y$ be the homomorphism of graded modules of degree $k$
given by the component of the map $\alpha$ with values in $Y\otimes\underline{01}^{\sharp}{}^{\otimes k}\subset Y\otimes I^k$,
where we consider the top dimensional element $\underline{01}^{\sharp}{}^{\otimes k}\in\DGN^*(\Delta^1)^{\otimes k}$
of the cubical cochain complex $I^k = \DGN^*(\Delta^1)^{\otimes k}$,
so that we have:
\begin{equation*}
\alpha(x) = (-1)^{k\deg(x)}\alpha^{\square}(x)\otimes\underline{01}^{\sharp}{}^{\otimes k} + \text{tensors with a factor of dimension $<k$ in $I^k$},
\end{equation*}
for any $x\in X$. We also write $\delta(\alpha^{\square}) = \delta\alpha^{\square} - (-1)^k\alpha^{\square}\delta$
for the differential of this homomorphism.
\begin{lemm}\label{lemma:homotopy-cooperad-top-component}
\begin{enumerate}
\item\label{homotopy-cooperad-top-component:expression}
Let $\AOp$ be a homotopy Segal shuffle dg cooperad. The graded homomorphism of degree $k$
\begin{equation*}
\rho^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)
\end{equation*}
that we associate to the dg module morphism $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I^k$,
for any sequence of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$,
satisfies the relation
\begin{equation}\tag{*}\label{equation:differential-top-component}
\delta(\rho^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree})
\begin{aligned}[t] & = \sum_{i=1}^k(-1)^{i-1}\rho^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_i}
\circ\rho^{\square}_{\ttree_i\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}\\
& - \sum_{i=1}^k(-1)^{i-1}\rho^{\square}_{\ttree\rightarrow\dots\rightarrow\hat{\ttree}_i\rightarrow\dots\rightarrow\stree}.
\end{aligned}
\end{equation}
Moreover, if we have a degeneracy $\ttree_j = \ttree_{j+1}$ in our sequence of tree morphisms, for some $0\leq j\leq k$ (with the convention that $\ttree_{k+1} = \ttree$ and $\ttree_0 = \stree$),
then we have the relation
\begin{equation}\tag{**}\label{equation:degeneracy-top-component}
\rho^{\square}_{\ttree\rightarrow\ttree_{k}\rightarrow\dots\rightarrow\ttree_{j+1}=\ttree_j\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree} = 0.
\end{equation}
\item\label{homotopy-cooperad-top-component:generation}
Let now $\AOp(\ttree)$, $\ttree\in\Tree(r)$, $r>0$, be any given collection of dg modules
equipped with graded homomorphisms $\rho^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)$,
of degree $k$,
and which satisfy the relations (\ref{equation:differential-top-component})-(\ref{equation:degeneracy-top-component})
of the previous statement.
Then there is a unique collection of morphisms of dg modules $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I^k$,
which extend these maps on the summands $\AOp(\ttree)\otimes\underline{01}^{\sharp}{}^{\otimes k}$
and satisfy the face and degeneracy relations of homotopy coproduct operators
of Figure~\ref{homotopy-E-infinity-cooperad:0-faces}-\ref{homotopy-E-infinity-cooperad:degeneracies}.
\end{enumerate}
\end{lemm}
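For instance (an immediate specialization of the lemma, recorded for convenience), in the case $k = 1$ of a composable pair of tree morphisms $\ttree\rightarrow\ttree_1\rightarrow\stree$, relation~(\ref{equation:differential-top-component}) reads
\begin{equation*}
\delta(\rho^{\square}_{\ttree\rightarrow\ttree_1\rightarrow\stree})
= \rho^{\square}_{\ttree\rightarrow\ttree_1}\circ\rho^{\square}_{\ttree_1\rightarrow\stree} - \rho^{\square}_{\ttree\rightarrow\stree},
\end{equation*}
so that the degree one homomorphism $\rho^{\square}_{\ttree\rightarrow\ttree_1\rightarrow\stree}$ defines a chain homotopy between the composite coproduct and the direct coproduct.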
\begin{proof}[Proof of assertion~\ref{homotopy-cooperad-top-component:expression}]
We use the identities
\begin{gather*}
\begin{aligned}
(I^k)_{-k} & = \kk\,\underline{01}^{\sharp}{}^{\otimes k}\\
(I^k)_{1-k} & = \bigoplus_{i=1}^k\bigl(\bigoplus_{\epsilon\in\{0,1\}}\kk\,\underline{01}^{\sharp}{}^{\otimes k-i}\otimes\underline{\epsilon}^\sharp\otimes\underline{01}^{\sharp}{}^{\otimes i-1}\bigr),
\end{aligned}
\intertext{and the relations}
d^i_{\epsilon}(\underline{01}^{\sharp}{}^{\otimes k-j}\otimes\underline{\eta}^\sharp\otimes\underline{01}^{\sharp}{}^{\otimes j-1})
= \begin{cases}\underline{01}^{\sharp}{}^{\otimes k-1}, & \text{if $i = j$ and $\epsilon\equiv\eta + 1\mymod 2$}, \\
0, & \text{otherwise}, \end{cases}
\end{gather*}
which imply that we have a formula of the form:
\begin{multline*}
\alpha(x) = (-1)^{k\deg(x)}\alpha^{\square}(x)\otimes\underline{01}^{\sharp}{}^{\otimes k} \\
+ \sum_{i=1}^k\left(\sum_{\epsilon\in\{0,1\}}(-1)^{(k-1)\deg(x)}((\id\otimes d^i_{\epsilon+1})\circ\alpha)^{\square}(x)\otimes\underline{01}^{\sharp}{}^{\otimes k-i}\otimes\underline{\epsilon}^\sharp\otimes\underline{01}^{\sharp}{}^{\otimes i-1}\right)\\
+ \text{tensors with a factor of dimension $<k-1$ in $I^k$},
\end{multline*}
for any morphism $\alpha: X\rightarrow Y\otimes I^k$ and any $x\in X$,
where we consider the homomorphism of graded modules $((\id\otimes d^i_{\epsilon+1})\circ\alpha)^{\square}: X\rightarrow Y$
associated to the composite $(\id\otimes d^i_{\epsilon+1})\circ\alpha: X\rightarrow Y\otimes I^{k-1}$ (and we obviously take the face operator $d^i_{\epsilon+1}$
indexed by the residue class of $\epsilon+1$ mod $2$).
We deduce from the differential formula
\begin{equation*}
\delta(\underline{01}^{\sharp}{}^{\otimes k-i}\otimes\underline{\epsilon}^\sharp\otimes\underline{01}^{\sharp}{}^{\otimes i-1}) = (-1)^{k-i+\epsilon+1}\underline{01}^{\sharp}{}^{\otimes k}
\end{equation*}
that the projection of the relation $\delta(\alpha(x)) = \alpha(\delta(x))$ onto $Y\otimes\underline{01}^{\sharp}{}^{\otimes k}\subset Y\otimes I^k$
is equivalent to the following relation:
\begin{equation*}
\delta(\alpha^{\square}(x)) + \sum_{i=1}^k\bigl(\sum_{\epsilon\in\{0,1\}}(-1)^{i+\epsilon}((\id\otimes d^i_{\epsilon+1})\alpha)^{\square}(x)\bigr)
= (-1)^k\alpha^{\square}(\delta x).
\end{equation*}
The relation of our statement (\ref{equation:differential-top-component}) then follows from the compatibility of coproducts with the face operators
(the relations of Figure~\ref{homotopy-E-infinity-cooperad:0-faces}-\ref{homotopy-E-infinity-cooperad:1-faces}).
If we have $\ttree_j = \ttree_{j+1}$ for some $j$, then the morphism $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}$
factors through the image of a degeneracy $\AOp(\ttree)\otimes s^j: \AOp(\ttree)\otimes I^{k-1}\rightarrow\AOp(\ttree)\otimes I^k$
by the compatibility of the coproducts with the degeneracies (the relations of Figure~\ref{homotopy-E-infinity-cooperad:degeneracies})
and this requirement implies the vanishing relation $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}^{\square} = 0$
since $I^{k-1}$ is null in dimension $k$.
\end{proof}
\begin{proof}[Proof of assertion~\ref{homotopy-cooperad-top-component:generation}]
The operators $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I^k$
are defined by induction on $k$.
If $k = 0$, then we take $\rho_{\ttree\rightarrow\stree} = \rho_{\ttree\rightarrow\stree}^{\square}$.
The compatibility conditions of Figure~\ref{homotopy-E-infinity-cooperad:0-faces}-\ref{homotopy-E-infinity-cooperad:degeneracies}
are tautological in this case.
If $k > 0$, then defining a morphism of dg modules $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I^k$
amounts to defining a morphism of dg modules
$\rho'_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\otimes\DGN_*(\Delta^1)^{\otimes k}\rightarrow\AOp(\ttree)$.
Let $d_i^{\epsilon}: \DGN_*(\Delta^1)^{\otimes k-1}\rightarrow\DGN_*(\Delta^1)^{\otimes k}$ denote the coface operators attached to the cubical complex $\DGN_*(\Delta^1)^{\otimes k}$
dual to the face operators $d^i_{\epsilon}: \DGN^*(\Delta^1)^{\otimes k}\rightarrow\DGN^*(\Delta^1)^{\otimes k-1}$
that we considered so far. We let $\rho'_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}(x\otimes\underline{01}^{\otimes k})
= \rho^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}(x)$,
and we define inductively:
\begin{align*}
\rho'_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}(x\otimes d_i^0(\underline{\sigma})) &
= \rho'_{\ttree\rightarrow\dots\rightarrow\widehat{\ttree_i}\rightarrow\dots\rightarrow\stree}(x\otimes\underline{\sigma}), \\
\rho'_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}(x\otimes d_i^1(\underline{\sigma})) &
= \rho'_{\ttree\rightarrow\dots\rightarrow\ttree_i}\bigl(\rho'_{\ttree_i\rightarrow\dots\rightarrow\stree}(x\otimes\underline{\sigma}'')\otimes\underline{\sigma}'\bigr),
\end{align*}
where, for $\underline{\sigma}\in\DGN_*(\Delta^1)^{\otimes k-1}$, we use the factorization $\underline{\sigma} = \underline{\sigma}'\otimes\underline{\sigma}''$ with $\underline{\sigma}'\in\DGN_*(\Delta^1)^{\otimes k-i}$ and $\underline{\sigma}''\in\DGN_*(\Delta^1)^{\otimes i-1}$.
We deduce from the compatibility of the homotopy coproducts with the face operators that our coproduct operator is necessarily given by these formulas
and this observation proves the uniqueness of the coproduct operators
extending our maps $\rho^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}$.
We may also use the observations of Remark \ref{remark:compatibility-relations} to check that our maps satisfy the compatibility relations
of Figure~\ref{homotopy-E-infinity-cooperad:0-faces}-\ref{homotopy-E-infinity-cooperad:degeneracies}
on the whole dg modules $\AOp(\ttree)\otimes I^k$.
We only need to prove that the map $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}$
or equivalently the map $\rho'_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}$
commutes with the differential.
We proceed by induction on $k$. For $k = 0$ the statement is obviously equivalent to relation~(\ref{equation:differential-top-component}).
For $k > 0$, the relation $\delta(\rho'_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}(x\otimes\underline{\sigma}))
= \rho'_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}\delta(x\otimes\underline{\sigma})$
follows from the induction assumption
in the case $\underline{\sigma} = (d_i^{\epsilon})(\underline{\tau})$, for some $1\leq i\leq k$, $\epsilon\in\{0,1\}$, $\underline{\tau}\in\DGN_*(\Delta^1)^{\otimes k-1}$,
and reduces to relation~(\ref{equation:differential-top-component})
when $\underline{\sigma} = \underline{01}^{\otimes k}$.
The conclusion follows.
\end{proof}
\subsection{The application of cobar complexes}\label{subsection:homotopy-Segal-cooperad-cobar}
The aim of this subsection is to extend the cobar construction of Subsection~\ref{subsection:strict-Segal-cooperad-cobar}
to homotopy Segal dg cooperads. We follow the same plan.
We make explicit the definition of the structure operations of the cobar construction $\DGB^c(\AOp)$
associated to a homotopy Segal shuffle dg cooperad $\AOp$
in the next paragraph.
We check the validity of these definitions afterwards and we record the definition of this dg operad $\DGB^c(\AOp)$
to conclude this subsection.
We assume all along this subsection that $\AOp$ is a homotopy Segal shuffle dg cooperad.
We also assume that $\AOp$ is connected, in the sense that we have $\AOp(\ttree) = 0$ when the tree $\ttree$ is not reduced (i.e., when $\ttree$ has at least one vertex with a single ingoing edge).
This condition implies that every object $\AOp(\stree)$
is the source of finitely many nonzero homotopy coproducts $\rho_{\ttree\rightarrow\dots\rightarrow\stree}: \AOp(\stree)\rightarrow\AOp(\ttree)\otimes I^k$,
because the set of tree morphisms $\ttree\rightarrow\stree$ with $\ttree$ reduced and $\stree$ fixed is finite.
\begin{constr}\label{construction:bar-complex-homotopy}
We define the graded modules $\DGB^c(\AOp)(r)$ which form the components of the cobar construction $\DGB^c(\AOp)$
by the same formula as in the strict case
\begin{equation*}
\DGB^c(\AOp)(r) = \bigoplus_{\ttree\in\Tree(r)}\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree),
\end{equation*}
but we now take a cobar differential that involves terms given by higher coproduct operators and which we associate to multiple edge contractions.
We precisely consider the set of pairs $(\ttree,\underline{e})$, where $\ttree\in\Tree(r)$ and $\underline{e} = (e_1,\dots,e_m)$
is an ordered collection of pairwise distinct edges $e_i\in\mathring{E}(\ttree)$.
To any such pair, we associate the sequence of tree morphisms such that
\begin{gather*}
\sigma(\ttree,\underline{e}) = \{\ttree\rightarrow\ttree/{e_1}\rightarrow\ttree/{\{e_1,e_2\}}\rightarrow\dots\rightarrow\ttree/\{e_1,e_2,\dots,e_m\}\},
\intertext{and the map of degree $-1$}
\partial_{(\ttree,\underline{e})}: \DGSigma^{-\sharp V(\ttree)+m}\AOp(\ttree/\{e_1,\dots,e_m\})\rightarrow\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)
\intertext{given by top component of the homotopy coproduct $\rho_{\sigma(\ttree,\underline{e})}$}
\partial_{(\ttree,\underline{e})} = \rho^{\square}_{\sigma(\ttree,\underline{e})},
\end{gather*}
such as defined in Lemma~\ref{lemma:homotopy-cooperad-top-component}.
Recall that the object $\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)$
is identified with a tensor product
\begin{equation*}
\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree) = \bigl(\bigotimes_{v\in V(\ttree)}\underline{01}^{\sharp}_v\bigr)\otimes\AOp(\ttree),
\end{equation*}
where we associate a factor of cohomological degree one $\underline{01}^{\sharp}_v$ to every vertex $v\in V(\ttree)$.
In the definition of our map $\partial_{(\ttree,\underline{e})}$, we also perform blow-up operations
\begin{equation*}
\underline{01}_{u\equiv v}^{\sharp}\mapsto\underline{01}_{u}^{\sharp}\otimes\underline{01}_{v}^{\sharp},
\end{equation*}
for each edge contraction step $\ttree/{\{e_1,\dots,e_{i-1}\}}\rightarrow\ttree/{\{e_1,\dots,e_{i-1},e_i\}}$,
where $u$ and $v$ represent the vertices of the edge $e_i$ in the tree $\ttree/{\{e_1,\dots,e_{i-1}\}}$,
and $u\equiv v$ represents the result of the fusion of these vertices
in $\ttree/{\{e_1,\dots,e_{i-1}\}}$.
Performing this sequence of blow-up operations, in parallel to the application of the map $\rho^{\square}_{\sigma(\ttree,\underline{e})}$,
enables us to pass
from the tensor product $\DGSigma^{-\sharp V(\ttree)+m}\AOp(\ttree/\{e_1,\dots,e_m\})
= \bigl(\bigotimes_{x\in V(\ttree/\{e_1,\dots,e_m\})}\underline{01}^{\sharp}_x\bigr)\otimes\AOp(\ttree/\{e_1,\dots,e_m\})$
to $\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree) = \bigl(\bigotimes_{x\in V(\ttree)}\underline{01}^{\sharp}_x\bigr)\otimes\AOp(\ttree)$.
This operation may involve a sign, which we determine as in the strict case in Construction~\ref{construction:bar-complex}.
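For instance (a direct specialization of this construction), for a single edge contraction, $m = 1$ and $\underline{e} = (e)$, where $e$ is an internal edge with vertices $u$ and $v$, we have $\rho^{\square}_{\sigma(\ttree,e)} = \rho_{\ttree\rightarrow\ttree/e}$, and the map $\partial_{(\ttree,e)}$ is obtained by applying this coproduct operator on the factor $\AOp(\ttree/e)$ and the single blow-up operation $\underline{01}_{u\equiv v}^{\sharp}\mapsto\underline{01}_{u}^{\sharp}\otimes\underline{01}_{v}^{\sharp}$ on the suspension factors:
\begin{equation*}
\partial_{(\ttree,e)}: \bigl(\bigotimes_{x\in V(\ttree/e)}\underline{01}^{\sharp}_x\bigr)\otimes\AOp(\ttree/e)\rightarrow\bigl(\bigotimes_{x\in V(\ttree)}\underline{01}^{\sharp}_x\bigr)\otimes\AOp(\ttree),
\end{equation*}
so that the term $\partial_1$ of our differential reduces to the edge contraction differential of the cobar construction of the strict case.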
Finally, we take:
\begin{equation*}
\partial_m = \sum_{(\ttree,(e_1,\dots,e_m))}\partial_{(\ttree,(e_1,\dots,e_m))},\quad\text{for $m\geq 1$},
\quad\text{and}\quad\partial = \sum_{m\geq 1}\partial_m.
\end{equation*}
(We just use the connectedness condition on $\AOp$ to ensure that this sum reduces to a finite number of terms
on each summand $\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)$
of our object $\DGB^c(\AOp)(r)$.)
We prove next that this map $\partial$ also defines a twisting differential on $\DGB^c(\AOp)(r)$, so that we can again provide $\DGB^c(\AOp)(r)$
with a dg module structure with the sum $\delta+\partial: \DGB^c(\AOp)(r)\rightarrow\DGB^c(\AOp)(r)$ as total differential,
where $\delta: \DGB^c(\AOp)(r)\rightarrow\DGB^c(\AOp)(r)$ denotes the differential induced by the internal differential of the objects $\AOp(\ttree)$
in $\DGB^c(\AOp)(r)$ (as in the case of the cobar construction of strict Segal dg cooperads).
We equip the object $\DGB^c(\AOp)$ with composition products
\begin{equation*}
\circ_{i_p}: \DGB^c(\AOp)(\{i_1<\dots<i_k\})\otimes\DGB^c(\AOp)(\{j_1<\dots<j_l\})\rightarrow\DGB^c(\AOp)(\{1<\dots<r\}),
\end{equation*}
which we define exactly as in the strict case.
We explicitly define $\circ_{i_p}$ as the sum of the maps
\begin{equation*}
\circ_{i_p}^{\stree,\ttree}: \DGSigma^{-\sharp V(\stree)}\AOp(\stree)\otimes\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)
\rightarrow\DGSigma^{-\sharp V(\stree\circ_{i_p}\ttree)}\AOp(\stree\circ_{i_p}\ttree)
\end{equation*}
yielded by the Segal map $i_{\stree\circ_{i_p}\ttree}: \AOp(\stree)\otimes\AOp(\ttree)\rightarrow\AOp(\stree\circ_{i_p}\ttree)$
associated to the composition operation $\stree\circ_{i_p}\ttree = \lambda_{\gammatree}(\stree,\ttree)$
in the category of trees, where $\stree\in\Tree(\{i_1<\dots<i_k\})$ and $\ttree\in\Tree(\{j_1<\dots<j_l\})$.
In a forthcoming lemma, we also check that these operations preserve the above differential, and hence provide our object with well-defined operations in the category of dg modules.
\end{constr}
We first check the validity of the definition of the twisting differential announced in our construction. This result is a consequence of the following more precise lemma.
\begin{lemm}\label{lemma:differential-bar-construction-homotopy}
We have the relation $\delta\partial_m + \partial_m\delta + \sum_{i=1}^{m-1}\partial_i\partial_{m-i} = 0$, for each $m\geq 1$.
\end{lemm}
\begin{proof}
If $m = 1$, then the statement reduces to $\delta\partial_1 + \partial_1\delta = 0$, and we readily check, as in the case of the cobar construction of Segal dg cooperads,
that this relation is equivalent to the fact that the coproducts of degree zero $\rho_{\ttree\rightarrow\stree} = \rho_{\ttree\rightarrow\stree}^{\square}$
are morphisms of dg modules.
We now prove the statement for $m>1$. From Equation~(\ref{equation:differential-top-component}) of Lemma~\ref{lemma:homotopy-cooperad-top-component},
we see that, for every tree $\ttree$ and every sequence of internal edges $\underline{e} = (e_1,\dots,e_m)$,
we have:
\begin{align*}
\delta\partial_{(\ttree,\underline{e})} + \partial_{(\ttree,\underline{e})}\delta &
= \sum_{i=1}^{m-1}\pm\rho^{\square}_{\ttree\rightarrow\dots\rightarrow\widehat{\ttree/\{e_1,\dots,e_i\}}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}} \\
& + \sum_{i=1}^{m-1}\pm\rho^{\square}_{\ttree\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_i\}}\circ\rho^{\square}_{\ttree/\{e_1,\dots,e_i\}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}}
\end{align*}
Then, by taking the sum of these expressions over the set of pairs $(\ttree,\underline{e})$, we obtain the formula:
\begin{align*}
\delta\partial_m + \partial_m\delta &
= \sum_{\substack{(\ttree,\underline{e})\\i=1,\dots,m}}\pm\rho^{\square}_{\ttree\rightarrow\dots\rightarrow\widehat{\ttree/\{e_1,\dots,e_i\}}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}} \\
& + \sum_{i=1}^{m-1}\bigl(\underbrace{\sum_{(\ttree,\underline{e})}\pm\rho^{\square}_{\ttree\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_i\}}
\circ\rho^{\square}_{\ttree/\{e_1,\dots,e_i\}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}}}_{= \partial_i\partial_{m-i}}\bigr).
\end{align*}
In the first sum of this formula, the term that corresponds to the removal of the node $\ttree/\{e_1,\dots,e_{i-1},e_i\}$
in the sequence associated to the pair $(\ttree,(e_1,\dots,e_m))$
and the term that corresponds to the removal of the node $\ttree/\{e_1,\dots,e_{i-1},e_{i+1}\}$
in the sequence associated to the pair $(\ttree,(e_1,\dots,e_{i-1},e_{i+1},e_i,e_{i+2},\dots,e_m))$,
obtained by switching $e_i$ and $e_{i+1}$,
are equal up to a sign. We readily check that these signs are opposite, so that these terms cancel out in our sum. The conclusion of the lemma follows.
\end{proof}
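By summing the relations of this lemma over $m\geq 1$, and by using that $\delta^2 = 0$, we obtain the identity
\begin{equation*}
(\delta+\partial)^2 = \delta^2 + \sum_{m\geq 1}\Bigl(\delta\partial_m + \partial_m\delta + \sum_{i=1}^{m-1}\partial_i\partial_{m-i}\Bigr) = 0,
\end{equation*}
so that the sum $\delta+\partial$ does define a differential on $\DGB^c(\AOp)(r)$, as announced in Construction~\ref{construction:bar-complex-homotopy}.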
We now check the validity of our definition of the composition products.
\begin{lemm}\label{lemma:bar-construction-product-operad-homotopy}
The twisting map $\partial$ and the differential $\delta$ induced by the internal differential of the object $\AOp$ on the cobar construction $\DGB^c(\AOp)$
form derivations with respect to the composition products of Construction~\ref{construction:bar-complex-homotopy},
so that these operations $\circ_{i_p}$ define morphisms of dg modules.
\end{lemm}
\begin{proof}
We generalize the arguments used in Lemma \ref{lemma:bar-construction-product-operad}.
We again prove that $\circ_{i_p}$ commutes with the differential on each summand $\DGSigma^{-\sharp V(\stree)}\AOp(\stree)\otimes\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)$
of the tensor product $\DGB^c(\AOp)(\{i_1<\dots<i_k\})\otimes\DGB^c(\AOp)(\{j_1<\dots<j_l\})$,
where $\stree\in\Tree(\{i_1<\dots<i_k\})$, $\ttree\in\Tree(\{j_1<\dots<j_l\})$.
We can still use that the Segal maps, which induce our composition product componentwise, are morphisms of dg modules to conclude that $\circ_{i_p}$
preserves the internal differentials on our objects.
We therefore focus on the verification that $\circ_{i_p}$ preserves the twisting differential $\partial$.
We set $\thetatree = \stree\circ_{i_p}\ttree$ and we consider a tree $\thetatree'$ such that $\thetatree = \thetatree'/\{e_1,\dots,e_m\}$
for a sequence of edges $e_1,\dots,e_m\in\mathring{E}(\thetatree')$.
We then have $\thetatree' = \stree'\circ_{i_p}\ttree'$, where $\stree'$ and $\ttree'$ represent the pre-image of the subtrees $\stree\subset\thetatree$ and $\ttree\subset\thetatree$
under the map $\thetatree'\rightarrow\thetatree'/\{e_1,\dots,e_m\}$, and $\stree = \stree'/\{e_{\alpha_1},\dots,e_{\alpha_r}\}$, $\ttree = \ttree'/\{e_{\beta_1},\dots,e_{\beta_s}\}$,
for the partition $\{e_{\alpha_1},\dots,e_{\alpha_r}\}\amalg\{e_{\beta_1},\dots,e_{\beta_s}\} = \{e_1,\dots,e_m\}$
such that $e_{\alpha_1},\dots,e_{\alpha_r}\in\mathring{E}(\stree')$ and $e_{\beta_1},\dots,e_{\beta_s}\in\mathring{E}(\ttree')$.
We may note that one of the components of this partition can be empty (we take by convention $r = 0$ or $s = 0$ in this case).
The compatibility between the Segal maps and the coproduct operators in Figure~\ref{homotopy-dg-cooperad:Segal-coproduct-compatibility}
together with the degeneracy relations of Figure~\ref{homotopy-E-infinity-cooperad:degeneracies}
imply the commutativity of the following diagram:
\begin{equation*}
\xymatrixcolsep{5pc}\xymatrix{ \AOp(\stree)\otimes\AOp(\ttree)
\ar[r]^-{\rho_{\sigma(\stree',\underline{e}|_{\stree'})}\otimes\rho_{\sigma(\ttree',\underline{e}|_{\ttree'})}}
\ar[ddd]|{i_{\stree\circ_{i_p}\ttree}} &
\AOp(\stree')\otimes I^{r-1}\otimes\AOp(\ttree')\otimes I^{s-1}
\ar[d]|{(\id\otimes s^{m-\beta_*})\otimes(\id\otimes s^{m-\alpha_*})} \\
& \AOp(\stree')\otimes I^{m-1}\otimes\AOp(\ttree')\otimes I^{m-1}\ar[d]^{\simeq} \\
& \AOp(\stree')\otimes\AOp(\ttree')\otimes I^{m-1}\otimes I^{m-1}\ar[d]^{i_{\stree',\ttree',\thetatree}\otimes\mu} \\
\AOp(\thetatree)\ar[r]_{\rho_{\sigma(\thetatree',\underline{e})}} &
\AOp(\thetatree')\otimes I^{m-1} },
\end{equation*}
where we set $\underline{e}|_{\stree'} = (e_{\alpha_1},\dots,e_{\alpha_r})$ and $\underline{e}|_{\ttree'} = (e_{\beta_1},\dots,e_{\beta_s})$ for short,
and $s^{m-\alpha_*} = s^{m-\alpha_1} s^{m-\alpha_2}\cdots s^{m-\alpha_r}$, $s^{m-\beta_*} = s^{m-\beta_1} s^{m-\beta_2}\cdots s^{m-\beta_s}$.
(These composites correspond to the positions of the degeneracies
when we take the pre-image of the subtrees $\stree,\ttree\subset\thetatree$
under the sequence of tree morphisms $\thetatree'\rightarrow\thetatree'/\{e_1\}\rightarrow\dots\rightarrow\thetatree'/\{e_1,\dots,e_m\} = \thetatree$.)
Note that we may have $r = 0$ or $s = 0$ and our diagram is still valid in these cases. (We then take $I^{-1} = \kk$ by convention and $s^0: I^{-1}\rightarrow I^0$
denotes the identity map.)
We actually have three possible cases:
\begin{itemize}
\item $r = 0$:
In this case, all the edges of our collection $\underline{e}$ belong to $\ttree'$, so that $\stree = \stree'$,
and the commutativity of the diagram implies that $\partial_{(\thetatree',\underline{e})}\circ i_{\stree\circ_{i_p}\ttree}
= i_{\stree\circ_{i_p}\ttree'}\circ(\id\otimes\partial_{(\ttree',\underline{e})})$.
\item $s = 0$:
In this mirror case, all the edges of our collection $\underline{e}$ belong to $\stree'$, so that $\ttree = \ttree'$,
and we get $\partial_{(\thetatree',\underline{e})}\circ i_{\stree\circ_{i_p}\ttree}
= i_{\stree'\circ_{i_p}\ttree}\circ(\partial_{(\stree',\underline{e})}\otimes\id)$.
\item $r,s\geq 1$: in this case, the composite of the vertical morphisms on the right-hand side of the diagram
does not meet the summand $\AOp(\stree'\circ_{i_p}\ttree')\otimes\underline{01}^{\sharp}{}^{\otimes m-1}$,
because the product of degeneracies carries $I^{r-1}\otimes I^{s-1}$ to a submodule of $I^{m-1}\otimes I^{m-1}$ concentrated in dimension $<m-1$,
whose image under the product cannot meet $\underline{01}^{\sharp}{}^{\otimes m-1}$,
so that we have $\partial_{(\thetatree',\underline{e})}\circ i_{\stree\circ_{i_p}\ttree} = 0$.
\end{itemize}
From these identities, we obtain the derivation relation $\partial_m\circ\circ_{i_p} = \circ_{i_p}\circ(\partial_m\otimes\id + \id\otimes\partial_m)$,
valid for each $m\geq 1$. The conclusion follows.
\end{proof}
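By combining the derivation relation obtained in this proof with the preservation of the internal differentials, we conclude that
\begin{equation*}
(\delta+\partial)\circ\circ_{i_p} = \circ_{i_p}\circ\bigl((\delta+\partial)\otimes\id + \id\otimes(\delta+\partial)\bigr),
\end{equation*}
where we use the standard sign conventions for the action of a derivation on a tensor product, so that $\circ_{i_p}$ defines a morphism of dg modules with respect to the total differential $\delta+\partial$ of the cobar construction, as stated in the lemma.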
We also immediately deduce from the associativity of the Segal maps that the composition products of Construction~\ref{construction:bar-complex-homotopy}
satisfy the associativity relations of the composition products of an operad.
We therefore get the following concluding statement:
\begin{thm-defn}
The collection $\DGB^c(\AOp) = \{\DGB^c(\AOp)(r),r>0\}$ equipped with the differential and structure operations defined in Construction~\ref{construction:bar-complex-homotopy}
forms a shuffle operad in dg modules.
This operad $\DGB^c(\AOp)$ is the cobar construction of the connected homotopy Segal shuffle dg (pre-)cooperad $\AOp$.\qed
\end{thm-defn}
\subsection{The definition of homotopy morphisms}\label{subsection:homotopy-morphisms}
We devote this section to the study of homotopy morphisms of Segal cooperads.
We always assume that our target object is equipped with a strict Segal cooperad structure for technical reasons,
but our source object can be equipped with a general homotopy Segal cooperad structure.
We explain the definition of these homotopy morphisms in the context of $E_\infty$-Hopf cooperads first.
We examine the counterpart of this definition when we forget $E_\infty$-structures afterwards, and we then study the application
of homotopy morphisms to the cobar construction.
\begin{defn}\label{definition:homotopy-morphisms}
We assume that $\BOp$ is a strict Segal $E_\infty$-Hopf shuffle cooperad while $\AOp$ can be any homotopy Segal $E_\infty$-Hopf shuffle cooperad.
We then define a homotopy morphism of homotopy Segal $E_\infty$-Hopf shuffle cooperads $\phi: \AOp\rightarrow\BOp$
as a collection of $\EOp$-algebra morphisms
\begin{equation*}
\phi_{\ttree}: \AOp(\ttree)\rightarrow\BOp(\ttree),\quad\text{$\ttree\in\Tree(r)$, $r>0$},
\end{equation*}
referred to as the underlying maps of our homotopy morphism, together with a collection of higher morphism operators
\begin{equation*}
\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\BOp(\ttree)\otimes I^{k+1},
\end{equation*}
defined in the category of $\EOp$-algebras as well and associated to sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$,
so that the compatibility relations with the face and degeneracy operators expressed by the diagrams of Figure~\ref{homotopy-morphisms:0-faces}-\ref{homotopy-morphisms:degeneracies}
hold, as well as the compatibility relations with the facet operators
expressed by the diagrams of Figure~\ref{homotopy-morphisms:facets}-\ref{homotopy-morphisms:homotopy-facets}.
(For the underlying maps of our homotopy morphism, we just retrieve the relation of Definition~\ref{definition:E-infinity-cooperad-morphism}.)
When $\AOp$ and $\BOp$ are symmetric cooperads, we say that $\phi: \AOp\rightarrow\BOp$
defines a homotopy morphism of homotopy Segal $E_\infty$-Hopf symmetric cooperads
if we also have the compatibility relations with the action of permutations
expressed by the diagrams of Figure~\ref{homotopy-morphisms:permutations}-\ref{homotopy-morphisms:homotopy-permutations}.
\end{defn}
We also have a version of this definition for homotopy Segal shuffle dg cooperads without $E_\infty$-structure.
\begin{defn}\label{definition:homotopy-morphisms-forgetful}
We assume that $\BOp$ is a strict Segal shuffle dg cooperad while $\AOp$ can be any homotopy Segal shuffle dg cooperad.
We then define a homotopy morphism of homotopy Segal shuffle dg cooperads $\phi: \AOp\rightarrow\BOp$
as a collection of morphisms of dg modules
\begin{equation*}
\phi_{\ttree}: \AOp(\ttree)\rightarrow\BOp(\ttree),\quad\text{$\ttree\in\Tree(r)$, $r>0$},
\end{equation*}
to which we again refer as the underlying maps of our homotopy morphism, together with a collection of higher morphism operators
\begin{equation*}
\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\BOp(\ttree)\otimes I^{k+1},
\end{equation*}
defined in the category of dg modules as well and associated to sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$,
so that we have the compatibility relations with respect to the face and degeneracy operators expressed by the diagrams
of Figure~\ref{homotopy-morphisms:0-faces}-\ref{homotopy-morphisms:degeneracies} (as in the case of homotopy morphisms $E_\infty$-Hopf cooperads),
together with the compatibility relations with respect to the Segal maps
expressed by the diagrams of Figure~\ref{homotopy-dg-morphisms:Segal-maps}-\ref{homotopy-dg-morphisms:homotopy-Segal-maps}.
\end{defn}
\begin{figure}[p]
\ffigbox
{\caption{The compatibility of homotopy morphisms with $0$-faces.
The diagrams commute for all sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$
and for all $1\leq i\leq k$, where $\widehat{\ttree_i}$ means that we delete the node $\ttree_i$.}\label{homotopy-morphisms:0-faces}}
{\centerline{\xymatrixcolsep{10pc}\xymatrix{ \AOp(\stree)
\ar[r]^{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}}
\ar[d]^{\phi_{\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\BOp(\ttree)\otimes I^{k+1}\ar[d]^{\id\otimes d^{k+1}_0} \\
\BOp(\ttree_k)\otimes I^k
\ar[r]_{\rho^{\BOp}_{\ttree\rightarrow\ttree_k}\otimes\id} &
\BOp(\ttree)\otimes I^k \\
\AOp(\stree)
\ar[r]^{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}}
\ar[dr]_{\phi_{\ttree\rightarrow\cdots\widehat{\ttree_i}\cdots\rightarrow\stree}} &
\BOp(\ttree)\otimes I^{k+1} \ar[d]^{\id\otimes d_0^i} \\
& \BOp(\ttree)\otimes I^k }
}}
\ffigbox
{\caption{The compatibility of homotopy morphisms with $1$-faces.
The diagrams commute for all sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$
and for all $1\leq i\leq k$.}
\label{homotopy-morphisms:1-faces}}
{\centerline{\xymatrixcolsep{4pc}\xymatrix{ \AOp(\stree)
\ar[rr]^{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}}
\ar[d]^{\rho^{\AOp}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\stree}} &&
\BOp(\ttree)\otimes I^{k+1}
\ar[d]^{\id\otimes d_1^{k+1}} \\
\AOp(\ttree)\otimes I^k
\ar[rr]_{\phi_{\ttree}\otimes\id} &&
\BOp(\ttree)\otimes I^k \\
\AOp(\stree)
\ar[rr]^{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}}
\ar[d]^{\rho^{\AOp}_{\ttree_i\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &&
\BOp(\ttree)\otimes I^{k+1}
\ar[d]^{\id\otimes d_1^i} \\
\AOp(\ttree_i)\otimes I^{i-1}
\ar[r]_-{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_i}\otimes\id} &
\BOp(\ttree)\otimes I^{k-i+1}\otimes I^{i-1}\ar[r]_-{\simeq} &
\BOp(\ttree)\otimes I^k }
}}
\ffigbox
{\caption{The compatibility of homotopy morphisms with degeneracies.
The diagrams commute for all sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$
and for all $0\leq j\leq k+1$.}\label{homotopy-morphisms:degeneracies}}
{\centerline{\xymatrixcolsep{10pc}\xymatrix{ \AOp(\stree)
\ar[r]^{\phi_{\stree}}
\ar[dr]_{\phi_{\stree = \stree}} &
\BOp(\stree)
\ar[d]^{\id\otimes s^0} \\
& \BOp(\stree)\otimes I^1 \\
\AOp(\stree)
\ar[r]^{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}}
\ar[rd]_{\phi_{\ttree\rightarrow\ttree_k\rightarrow\cdots\ttree_j = \ttree_j\cdots\rightarrow\ttree_1\rightarrow\stree}} &
\BOp(\ttree)\otimes I^{k+1}
\ar[d]^{\id\otimes s^j} \\
& \BOp(\ttree)\otimes I^{k+2} }
}}
\end{figure}
\begin{figure}[p]
\ffigbox
{\caption{The preservation of facet operators by the underlying maps of homotopy morphisms.
The diagram commutes for all subtrees $\sigmatree\subset\stree$.}
\label{homotopy-morphisms:facets}}
{\centerline{\xymatrixcolsep{5pc}\xymatrix{ \AOp(\stree)\ar[r]^{\phi_{\stree}} & \BOp(\stree) \\
\AOp(\sigmatree)\ar[r]^{\phi_{\sigmatree}}\ar[u]^{i_{\sigmatree,\stree}} &
\BOp(\sigmatree)\ar[u]_{i_{\sigmatree,\stree}} }
}}
{\caption{The compatibility of homotopy morphisms with facet operators.
The diagram commutes for all subtrees $\sigmatree\subset\stree$
and for all sequences of composable tree morphisms $\ttree\xrightarrow{f_k}\ttree_k\xrightarrow{f_{k-1}}\dots\xrightarrow{f_1}\ttree_1\xrightarrow{f_0}\stree$.}
\label{homotopy-morphisms:homotopy-facets}}
{\centerline{\xymatrixcolsep{10pc}\xymatrix{ \AOp(\stree)\ar[r]^-{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\BOp(\ttree)\otimes I^{k+1} \\
\AOp(\sigmatree)
\ar@{.>}[r]_-{\phi_{(f_k\dots f_0)^{-1}(\sigmatree)\rightarrow\dots\rightarrow f_0^{-1}(\sigmatree)\rightarrow\sigmatree}}
\ar[u]^{i^{\AOp}_{\sigmatree,\stree}} &
\BOp((f_k\dots f_0)^{-1}(\sigmatree))\otimes I^{k+1}
\ar@{.>}[u]^{i^{\BOp}_{(f_k\dots f_0)^{-1}(\sigmatree),\ttree}\otimes\id} }
}}
\ffigbox
{\caption{The preservation of the action of permutations by the underlying maps of homotopy morphisms.
The diagram commutes for all $s\in\Sigma_r$ and $\ttree\in\Tree(r)$.}
\label{homotopy-morphisms:permutations}}
{\centerline{\xymatrixcolsep{5pc}\xymatrix{ \AOp(s\ttree)\ar[r]^{\phi_{s\ttree}}\ar[d]_{s^*} & \BOp(s\ttree)\ar[d]^{s^*} \\
\AOp(\ttree)\ar[r]^{\phi_{\ttree}} & \BOp(\ttree) }
}}
\ffigbox
{\caption{The compatibility of homotopy morphisms with the action of permutations.
The diagram commutes for all $s\in\Sigma_r$
and for all sequences of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$ in the category $\Tree(r)$.}
\label{homotopy-morphisms:homotopy-permutations}}
{\centerline{\xymatrixcolsep{10pc}\xymatrix{ \AOp(s\stree)
\ar[r]^-{\phi_{s\ttree\rightarrow s\ttree_k\rightarrow\dots\rightarrow s\ttree_1\rightarrow s\stree}}
\ar[d]_{s^*} &
\BOp(s\ttree)\otimes I^{k+1}\ar[d]^{s^*\otimes\id} \\
\AOp(\stree)\ar[r]_-{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\BOp(\ttree)\otimes I^{k+1} }
}}
\end{figure}
\begin{figure}[t]
\ffigbox
{\caption{The preservation of Segal maps by the underlying map of homotopy morphisms of homotopy Segal dg cooperads.
The diagram commutes for all tree decompositions $\stree = \lambda_{\utree}(\sigmatree_v,v\in V(\utree))$.}
\label{homotopy-dg-morphisms:Segal-maps}}
{\centerline{\xymatrixcolsep{5pc}\xymatrix{ \AOp(\stree)\ar[r]^{\phi_{\stree}} & \BOp(\stree) \\
\bigotimes_{v\in V(\utree)}\AOp(\sigmatree_v)
\ar[r]^{\bigotimes_{v\in V(\utree)}\phi_{\sigmatree_v}}
\ar[u]^{i^{\AOp}_{\sigmatree_*,\stree}} &
\bigotimes_{v\in V(\utree)}\BOp(\sigmatree_v)
\ar[u]_{i^{\BOp}_{\sigmatree_*,\stree}} }
}}
\ffigbox
{\caption{The compatibility of homotopy morphisms of homotopy Segal dg cooperads with the Segal maps.
The diagram commutes for all tree decompositions $\stree = \lambda_{\utree}(\sigmatree_v,v\in V(\utree))$
and for all sequences of composable tree morphisms $\ttree\xrightarrow{f_k}\ttree_k\xrightarrow{f_{k-1}}\dots\xrightarrow{f_1}\ttree_1\xrightarrow{f_0}\stree$.}
\label{homotopy-dg-morphisms:homotopy-Segal-maps}}
{\centerline{\xymatrixcolsep{10pc}\xymatrix{ \AOp(\stree)
\ar[r]^-{\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}} &
\BOp(\ttree)\otimes I^{k+1} \\
& \bigotimes_{v\in V(\utree)}\BOp((f_k\dots f_0)^{-1}(\sigmatree_v))\otimes\left(\bigotimes_{v\in V(\utree)}I^{k+1}\right)
\ar@{.>}[u]_{i^{\BOp}_{(f_k\dots f_0)^{-1}(\sigmatree_*),\ttree}\otimes\mu} \\
\bigotimes_{v\in V(\utree)}\AOp(\sigmatree_v)
\ar@{.>}[r]^-{\bigotimes_v \phi_{(f_k\dots f_0)^{-1}(\sigmatree_v)\rightarrow\dots\rightarrow f_0^{-1} (\sigmatree_v)\rightarrow\sigmatree_v}}
\ar[uu]^{i^{\AOp}_{\sigmatree_*,\stree}} &
\bigotimes_{v \in V(\utree)}\left(\BOp((f_k\dots f_0)^{-1}(\sigmatree_v))\otimes I^{k+1}\right)
\ar@{.>}[u]_{\simeq} }
}}
\end{figure}
\afterpage{\clearpage}
We have the following statement, which is the homotopy version of the result of Lemma~\ref{lemma:homotopy-cooperad-top-component},
and which can be proved by the same arguments.
We still write $\alpha^{\square}: X\rightarrow Y$ for the homomorphism of graded modules of degree $k$
associated to a morphism of dg modules $\alpha: X\rightarrow Y \otimes I^k$
such that $\alpha(x) = (-1)^{k\deg(x)}\alpha^{\square}(x)\otimes\underline{01}^{\sharp}{}^{\otimes k} + \text{tensors with a factor of dimension $<k$ in $I^k$}$.
\begin{lemm}\label{lemma:homotopy-morphism-top-component}
\begin{enumerate}
\item
Let $\phi: \AOp\rightarrow\BOp$ be a homotopy morphism of homotopy Segal shuffle dg cooperads,
where we still assume that $\BOp$ is a strict Segal shuffle dg cooperad
as in Definition~\ref{definition:homotopy-morphisms-forgetful}.
The graded homomorphism of degree $k+1$
\begin{equation*}
\phi^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\BOp(\ttree)
\end{equation*}
that we associate to the dg module morphism $\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\BOp(\ttree)\otimes I^{k+1}$,
for any sequence of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$,
satisfies the relation
\begin{equation}\tag{*}\label{equation:homotopy-morphism-top-component}
\delta(\phi^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree})
\begin{aligned}[t]
& = (-1)^{k+1}\rho^{\BOp}_{\ttree\rightarrow\ttree_k}\phi^{\square}_{\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree} \\
& + \sum_{i=1}^k(-1)^i\phi^{\square}_{\ttree\rightarrow\dots\rightarrow\hat{\ttree}_i\rightarrow\dots\rightarrow\stree} \\
& + (-1)^k\phi_{\ttree}\circ\rho^{\AOp\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree} \\
& - \sum_{i=1}^k(-1)^i\phi^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_i}\circ\rho^{\AOp\square}_{\ttree_i\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}.
\end{aligned}
\end{equation}
Moreover, if we have a degeneracy $\ttree_j = \ttree_{j+1}$ in our sequence of tree morphisms, for some $0\leq j\leq k$ (with the convention that $\ttree_{k+1} = \ttree$ and $\ttree_0 = \stree$),
then we have the relation
\begin{equation}\tag{**}\label{equation:homotopy-morphism-degeneracy-top-component}
\phi^{\square}_{\ttree\rightarrow\ttree_{k}\rightarrow\dots\ttree_{j+1}=\ttree_j\cdots\rightarrow\ttree_1\rightarrow\stree} = 0.
\end{equation}
\item
In the converse direction, if we have a collection of dg module morphisms $\phi_{\ttree}: \AOp(\ttree)\rightarrow\BOp(\ttree)$, $\ttree\in\Tree(r)$, $r>0$,
together with a collection of graded homomorphisms $\phi^{\square}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\BOp(\ttree)$
of degree $k+1$,
which satisfy the relations (\ref{equation:homotopy-morphism-top-component})-(\ref{equation:homotopy-morphism-degeneracy-top-component})
of the previous statement,
then there is a unique collection of morphisms of dg modules $\phi_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}: \AOp(\stree)\rightarrow\BOp(\ttree)\otimes I^{k+1}$,
which extend these maps on the summands $\BOp(\ttree)\otimes\underline{01}^{\sharp}{}^{\otimes k+1}$
and satisfy the face and degeneracy relations of homotopy morphism operators
of Figure~\ref{homotopy-morphisms:0-faces}-\ref{homotopy-morphisms:degeneracies}.\qed
\end{enumerate}
\end{lemm}
We now prove that any homotopy morphism of homotopy Segal shuffle dg cooperads $\phi: \AOp\rightarrow\BOp$, as in Definition~\ref{definition:homotopy-morphisms-forgetful},
gives rise to an induced morphism on the cobar construction $\phi_*: \DGB^c(\AOp)\rightarrow\DGB^c(\BOp)$.
We address the definition of this morphism in the next paragraph.
We assume throughout this study that a homotopy morphism $\phi: \AOp\rightarrow\BOp$ is fixed, with $\AOp$ a homotopy Segal shuffle dg cooperad
and $\BOp$ a strict Segal shuffle dg cooperad.
We need to assume that the object $\AOp$ is connected in order to give a sense to the cobar construction $\DGB^c(\AOp)$ (see~\S\ref{subsection:homotopy-Segal-cooperad-cobar}).
We also need to assume that $\BOp$ is connected in the construction of our morphisms.
We therefore assume that these connectedness conditions hold in the rest of this subsection.
\begin{constr}\label{construction:bar-complex-homotopy-morphism}
The underlying maps of our homotopy morphism $\phi: \AOp\rightarrow\BOp$ induce morphisms of graded modules between the components of the cobar construction:
\begin{equation*}
\phi_{\ttree}: \DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)\rightarrow\DGSigma^{-\sharp V(\ttree)}\BOp(\ttree).
\end{equation*}
In addition to these maps, we consider morphisms
\begin{equation*}
\phi_{(\ttree,\underline{e})}: \DGSigma^{-\sharp V(\ttree)+m}\AOp(\ttree/\{e_1,\dots,e_m\})\rightarrow\DGSigma^{-\sharp V(\ttree)}\BOp(\ttree)
\end{equation*}
associated to the pairs $(\ttree,\underline{e})$, where $\ttree$ is a tree and $\underline{e} = (e_1,\dots,e_m)$
is an ordered collection of pairwise distinct edges $e_i\in\mathring{E}(\ttree)$,
as in Construction~\ref{construction:bar-complex-homotopy}.
To define the latter maps, we again consider the sequence of composable tree morphisms
\begin{gather*}
\sigma(\ttree,\underline{e}) = \{\ttree\rightarrow\ttree/{e_1}\rightarrow\ttree/{\{e_1,e_2\}}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}\},
\intertext{which we associate to any such pair $(\ttree,\underline{e})$ in Construction~\ref{construction:bar-complex-homotopy},
and we set}
\phi_{(\ttree,\underline{e})} = \phi^{\square}_{\sigma(\ttree,\underline{e})},
\end{gather*}
where we take the top component of the morphism $\phi_{\sigma(\ttree,\underline{e})}$ (as defined in Lemma~\ref{lemma:homotopy-morphism-top-component}).
In this construction, we also use the same blow-up process as in Construction~\ref{construction:bar-complex-homotopy}
to pass from the tensor product $\DGSigma^{-\sharp V(\ttree)+m}\AOp(\ttree/\{e_1,\dots,e_m\})
= \bigl(\bigotimes_{x\in V(\ttree/\{e_1,\dots,e_m\})}\underline{01}^{\sharp}_x\bigr)\otimes\AOp(\ttree/\{e_1,\dots,e_m\})$
to $\DGSigma^{-\sharp V(\ttree)}\BOp(\ttree) = \bigl(\bigotimes_{x\in V(\ttree)}\underline{01}^{\sharp}_x\bigr)\otimes\BOp(\ttree)$
and to determine a possible sign, which we associate to our map $\phi_{(\ttree,\underline{e})}$.
In what follows, we identify the morphisms $\phi_{\ttree}$, induced by the underlying maps of our homotopy morphism $\phi: \AOp\rightarrow\BOp$,
with the case $m=0$ of these homomorphisms $\phi_{(\ttree,\underline{e})}$.
Finally, we take:
\begin{gather*}
\phi_m = \sum_{(\ttree,(e_1,\dots,e_m))}\phi_{(\ttree,(e_1,\dots,e_m))},\quad\text{for $m\geq 0$},
\quad\text{and}\quad\phi = \sum_{m\geq 0}\phi_m
\intertext{to get a map}
\phi_*: \DGB^c(\AOp)(r)\rightarrow\DGB^c(\BOp)(r),\quad\text{for each arity $r>0$}.
\end{gather*}
(Note that we use the connectedness condition on $\BOp$ to ensure that the above sum reduces to a finite number of terms on each summand $\DGSigma^{-\sharp V(\ttree)}\AOp(\ttree)$.)
We aim to prove that this map is compatible with the structure operations of the cobar construction.
\end{constr}
We check the preservation of differentials first. This claim follows from the following more precise observation.
\begin{lemm}\label{lemma:differential-bar-morphisms}
We have the relation
\begin{equation*}
\delta^{\BOp}\phi_m = \phi_m \delta^{\AOp} + \partial^{\BOp}\phi_{m-1} - \sum_{i=0}^{m-1}\phi_i\partial_{m-i}^{\AOp},
\end{equation*}
for all $m\geq 0$,
where $\delta = \delta^{\AOp},\delta^{\BOp}$ denotes the term of the differential of the cobar construction induced by the internal differential of the objects $\COp = \AOp,\BOp$,
we denote by $\partial^{\AOp} = \sum_{m = 1}^{\infty}\partial_m^{\AOp}$ the twisting map of the cobar construction
of the homotopy Segal shuffle dg cooperad $\AOp$,
while $\partial^{\BOp}$ denotes the twisting differential of the cobar construction
of the strict Segal shuffle dg cooperad $\BOp$.
\end{lemm}
\begin{proof}
We argue as in the proof of Lemma \ref{lemma:differential-bar-construction-homotopy}.
We use the relation of Equation~(\ref{equation:homotopy-morphism-top-component}) of Lemma~\ref{lemma:homotopy-morphism-top-component}
to write
\begin{align*}
\delta^{\BOp}\phi_{(\ttree,\underline{e})} - \phi_{(\ttree,\underline{e})}\delta^{\AOp} &
= \sum_{i=1}^{m-1}\pm\phi^{\square}_{\ttree\rightarrow\dots\rightarrow\widehat{\ttree/\{e_1,\dots,e_i\}}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}} \\
& - \rho^{\BOp}_{\ttree\rightarrow\ttree/e_1}\phi^{\square}_{\ttree/e_1\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}}\\
& + \sum_{i=0}^{m-1}\pm\phi^{\square}_{\ttree\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_i\}}\rho^{\AOp\square}_{\ttree/\{e_1,\dots,e_i\}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}}.
\end{align*}
Then, by taking the sum of these expressions over the set of pairs $(\ttree,\underline{e})$, we obtain the formula:
\begin{align*}
\delta^{\BOp}\phi_m - \phi_m\delta^{\AOp} &
= \sum_{\substack{(\ttree,\underline{e})\\i=1,\dots,m}}\pm\phi^{\square}_{\ttree\rightarrow\dots\rightarrow\widehat{\ttree/\{e_1,\dots,e_i\}}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}} \\
& + \underbrace{\sum_{(\ttree,\underline{e})}\pm\rho^{\BOp}_{\ttree\rightarrow\ttree/e_1}\phi^{\square}_{\ttree/e_1\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}}}_{=\partial^{\BOp}\phi_{m-1}} \\
& + \sum_{i=0}^{m-1}\bigl(\underbrace{\sum_{(\ttree,\underline{e})}\pm\phi^{\square}_{\ttree\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_i\}}
\circ\rho^{\AOp\square}_{\ttree/\{e_1,\dots,e_i\}\rightarrow\dots\rightarrow\ttree/\{e_1,\dots,e_m\}}}_{= \phi_i\partial^{\AOp}_{m-i}}\bigr).
\end{align*}
In the first sum of this formula, the term that corresponds to the removal of the node $\ttree/\{e_1,\dots,e_{i-1},e_i\}$
in the sequence associated to the pair $(\ttree,(e_1,\dots,e_m))$
and the term that corresponds to the removal of the node $\ttree/\{e_1,\dots,e_{i-1},e_{i+1}\}$
in the sequence associated to the pair $(\ttree,(e_1,\dots,e_{i-1},e_{i+1},e_i,e_{i+2},\dots,e_m))$,
obtained by switching $e_i$ and $e_{i+1}$,
are again equal up to a sign. We check as before that these signs are opposite, so that these terms cancel out in our sum. The conclusion of the lemma follows.
\end{proof}
We now check that our morphism preserves the composition products. This claim follows from the following more precise observation.
\begin{lemm}\label{lemma:morphism-cobar-operad}
We have the relation $\phi_m\circ\circ_{i_p} = \sum_{r+s=m}\circ_{i_p}\circ(\phi_r\otimes\phi_s)$, for all $m\geq 0$.
\end{lemm}
\begin{proof}
The proof is similar to that of Lemma~\ref{lemma:bar-construction-product-operad-homotopy}.
For $m = 0$, the relation follows from the commutativity of the diagram of Figure~\ref{homotopy-dg-morphisms:Segal-maps} (since $\circ_{i_p}$ is defined as a sum of Segal maps).
We therefore focus on the case $m\geq 1$.
We consider again a summand $\Sigma^{-\sharp V(\stree)}\AOp(\stree)\otimes\Sigma^{-\sharp V(\ttree)}\AOp(\ttree)$
of the tensor product $\DGB^c(\AOp)(\{i_1<\dots<i_k\})\otimes\DGB^c(\AOp)(\{j_1<\dots<j_l\})$,
where $\stree\in\Tree(\{i_1<\dots<i_k\})$, $\ttree\in\Tree(\{j_1<\dots<j_l\})$.
The composition product maps $\circ_{i_p}$ carry this summand into $\Sigma^{-\sharp V(\thetatree)}\AOp(\thetatree)$,
with $\thetatree = \stree\circ_{i_p}\ttree$.
The components of the map $\phi_m$ carry this summand into terms of the form $\Sigma^{-\sharp V(\thetatree')}\BOp(\thetatree')$,
for trees $\thetatree'$ equipped with a set of internal edges $(e_1,\dots,e_m)$
such that $\thetatree'/\{e_1,\dots,e_m\} = \thetatree$.
We still have $\thetatree' = \stree'\circ_{i_p}\ttree'$ and $\stree = \stree'/\{e_{\alpha_1},\dots,e_{\alpha_r}\}$, $\ttree = \ttree'/\{e_{\beta_1},\dots,e_{\beta_s}\}$,
for a partition $\{e_{\alpha_1},\dots,e_{\alpha_r}\}\amalg\{e_{\beta_1},\dots,e_{\beta_s}\} = \{e_1,\dots,e_m\}$
such that $e_{\alpha_1},\dots,e_{\alpha_r}\in\mathring{E}(\stree')$ and $e_{\beta_1},\dots,e_{\beta_s}\in\mathring{E}(\ttree')$.
We then have the following commutative diagram:
\begin{equation*}
\xymatrixcolsep{8pc}\xymatrix{
\AOp(\stree)\otimes\AOp(\ttree)
\ar[r]^-{\phi_{\sigma(\stree',\underline{e}|_{\stree'})}\otimes\phi_{\sigma(\ttree',\underline{e}|_{\ttree'})}}
\ar[ddd]|{i_{\stree\circ_{i_p}\ttree}} &
(\BOp(\stree')\otimes I^r)\otimes(\BOp(\ttree')\otimes I^s)
\ar[d]|{\id\otimes s^{m-\beta_*}\otimes\id\otimes s^{m-\alpha_*}} \\
& (\BOp(\stree')\otimes I^m)\otimes(\BOp(\ttree')\otimes I^m)
\ar[d]^{\simeq} \\
& \BOp(\stree')\otimes\BOp(\ttree')\otimes I^m\otimes I^m
\ar[d]^{i_{\stree'\circ_{i_p}\ttree'}\otimes\mu} \\
\AOp(\thetatree)\ar[r]_{\phi_{\sigma(\thetatree',\underline{e})}} &
\BOp(\thetatree')\otimes I^m }
\end{equation*}
(by the relations of Figure~\ref{homotopy-dg-morphisms:homotopy-Segal-maps} and Figure~\ref{homotopy-morphisms:degeneracies}).
We use the same notation as in the proof of Lemma~\ref{lemma:bar-construction-product-operad-homotopy}
in this diagram
and we still consider the morphisms $\phi_{\sigma(\stree',\underline{e}|_{\stree'})}: \AOp(\stree'/\{e_{\alpha_1},\dots,e_{\alpha_r}\})\rightarrow\AOp(\stree')\otimes I^r$
and $\phi_{\sigma(\ttree',\underline{e}|_{\ttree'})}: \AOp(\ttree'/\{e_{\beta_1},\dots,e_{\beta_s}\})\rightarrow\AOp(\ttree')\otimes I^s$
associated to the sequences of tree morphisms
such that $\sigma(\stree',\underline{e}|_{\stree'}) = \{\stree'\rightarrow\stree'/e_{\alpha_1}\rightarrow\dots\rightarrow\stree'/\{e_{\alpha_1},\dots,e_{\alpha_r}\}\}$
and $\sigma(\ttree',\underline{e}|_{\ttree'}) = \{\ttree'\rightarrow\ttree'/e_{\beta_1}\rightarrow\dots\rightarrow\ttree'/\{e_{\beta_1},\dots,e_{\beta_s}\}\}$.
We also use the notation $s^{m-\alpha_*} = s^{m-\alpha_1} s^{m-\alpha_2}\cdots s^{m-\alpha_r}$, $s^{m-\beta_*} = s^{m-\beta_1} s^{m-\beta_2}\cdots s^{m-\beta_s}$,
and $\mu: I^m\otimes I^m\rightarrow I^m$ denotes the product of the dg algebra $I^m$
as usual.
We see, by elaborating on the arguments of the proof of Lemma~\ref{lemma:bar-construction-product-operad-homotopy},
that the composite of the right-hand side vertical morphisms of this diagram does not meet $\BOp(\thetatree')\otimes\underline{01}^{\sharp}{}^{\otimes m}$
unless we have $\{e_{\alpha_1},\dots,e_{\alpha_r}\} = \{e_1,\dots,e_r\}$ and $\{e_{\beta_1},\dots,e_{\beta_s}\} = \{e_{r+1},\dots,e_m\}$. (We have in this case
$s^{m-\beta_*}(\underline{01}^{\sharp}{}^{\otimes r}) = \underline{01}^{\sharp}{}^{\otimes r}\otimes 1^{\otimes s}$,
$s^{m-\alpha_*}(\underline{01}^{\sharp}{}^{\otimes s}) = \underline{1}^{\sharp}{}^{\otimes r}\otimes\underline{01}^{\sharp}{}^{\otimes s} + \text{other terms}$,
and $\mu(s^{m-\beta_*}(\underline{01}^{\sharp}{}^{\otimes r})\otimes s^{m-\alpha_*}(\underline{01}^{\sharp}{}^{\otimes s})) = \underline{01}^{\sharp}{}^{\otimes m}$.)
We conclude from this analysis that the composite $\phi_{(\thetatree',\underline{e})}\circ\circ_{i_p}$ vanishes unless the edge collection $\underline{e} = (e_1,\dots,e_m)$
is equipped with an order such that $\{e_{\alpha_1},\dots,e_{\alpha_r}\} = \{e_1,\dots,e_r\}$
and $\{e_{\beta_1},\dots,e_{\beta_s}\} = \{e_{r+1},\dots,e_m\}$.
We get in this case $\phi_{(\thetatree',\underline{e})}\circ\circ_{i_p} = \circ_{i_p}\circ\phi_{(\stree',\underline{e}|_{\stree'})}\otimes\phi_{(\ttree',\underline{e}|_{\ttree'})}$
and summing over the pairs $(\thetatree',(e_1,\dots,e_m))$ with $\thetatree' = \stree'\circ_{i_p}\ttree'$, $e_1,\dots,e_r\in\mathring{E}(\stree')$, $e_{r+1},\dots,e_m\in\mathring{E}(\ttree')$,
amounts to summing over the pairs $(\stree',(e_1,\dots,e_r))$ and $(\ttree',(e_{r+1},\dots,e_m))$
such that $\stree'/\{e_1,\dots,e_r\} = \stree$ and $\ttree'/\{e_{r+1},\dots,e_m\} = \ttree$.
We therefore obtain the relation of the lemma $\phi_m\circ\circ_{i_p} = \sum_{r+s=m}\circ_{i_p}\circ(\phi_r\otimes\phi_s)$
when we perform this sum.
\end{proof}
We get the following concluding statement:
\begin{thm-defn}
The collection of morphisms $\phi_*: \DGB^c(\AOp)(r)\rightarrow\DGB^c(\BOp)(r)$, $r>0$, defined in Construction~\ref{construction:bar-complex-homotopy-morphism},
defines a morphism of shuffle dg operads $\phi_*: \DGB^c(\AOp)\rightarrow\DGB^c(\BOp)$,
the morphism induced by the homotopy morphism of connected homotopy Segal shuffle cooperads $\phi: \AOp\rightarrow\BOp$
on the cobar construction.\qed
\end{thm-defn}
\subsection{The equivalence with strict $ E_\infty $-cooperads}\label{subsection:strict-equivalence}
We devote this final subsection to proving the following result.
\begin{thm}\label{theorem:strictification}
Every connected homotopy Segal $E_\infty$-Hopf cooperad (either symmetric or shuffle)
is weakly-equivalent to a connected strict Segal $E_\infty$-Hopf cooperad.
\end{thm}
We prove Theorem~\ref{theorem:strictification} by constructing a functor $\AOp\mapsto\DGK^c(\AOp)$, from the category of connected homotopy Segal $E_\infty$-Hopf cooperads (symmetric or shuffle)
to the category of strict Segal $E_\infty$-Hopf cooperads, and a zigzag of weak-equivalences between $\AOp$ and $\DGK^c(\AOp)$.
For this purpose, we consider a category $\Tree^{\square}$, enriched in dg modules, which encodes the homotopy coproduct operators of homotopy Segal cooperads.
We explain the definition of this category $\Tree^{\square}$ in the next paragraph.
We have a morphism of enriched categories $\Tree^{\square}\rightarrow\Tree$, where, by an abuse of notation, we denote by $\Tree$ the enriched category in $\kk$-modules
whose hom-objects are the $\kk$-modules spanned by the set-theoretic tree morphisms.
We define our functor $\DGK^c(-)$ as a homotopy Kan extension, by dualizing a two-sided bar complex over the enriched category $\Tree^{\square}$.
Note that, in our statement, we again assume that our Segal cooperad $\AOp$ is connected in the sense of~\S\ref{subsection:conilpotence}.
This assumption enables us to simplify our constructions and to avoid technical difficulties
in the verification of our result. We assume that our cooperads satisfy this connectedness condition all along this section.
Recall that a (homotopy or strict) Segal cooperad $\AOp$ is connected if we have $\AOp(\ttree) = 0$
when the tree $\ttree$ is not reduced (has at least one vertex with a single ingoing edge)
and that this condition implies that we can restrict ourselves to the subcategories of reduced trees, denoted by $\widetilde{\Tree}(r)\subset\Tree(r)$, $r>0$,
in the definition of the structure operations that we associate to our objects.
For simplicity, all along this subsection, we keep the notation $\Tree$ for our constructions on tree categories (for instance, we use the notation $\Tree^{\square}$
for our enriched category of trees).
Nevertheless, we restrict ourselves to reduced trees, as permitted by our connectedness assumption on cooperads,
and for this reason, we only define the enriched hom-objects $\Tree^{\square}(\ttree,\stree)$ associated to the full subcategories of reduced trees $\widetilde{\Tree}^{\square}(r)$, $r>0$.
Recall that for reduced trees $\stree,\ttree\in\widetilde{\Tree}(r)$, the set of tree morphisms $\Tree(\ttree,\stree)$
is either empty or reduced to a point (see \cite[Theorem B.0.6]{FresseBook}).
For the enriched version of this category, we therefore have:
\begin{equation*}
\Tree(\ttree,\stree) = \begin{cases} \kk, & \text{if we have a morphism $\ttree\rightarrow\stree$}, \\
0, & \text{otherwise}, \end{cases}
\end{equation*}
for any pair of reduced trees $\stree,\ttree\in\widetilde{\Tree}(r)$, $r>0$.
Note that we may still write $\Tree(\ttree,\stree) = *$ in this setting, because we identify the object $\Tree(\ttree,\stree) = \kk$
with the terminal object of the category of cocommutative coalgebras, and we can actually regard $\Tree$
as a category enriched in cocommutative coalgebras.
This observation, to which we go back later on, motivates our abuse of notation.
\begin{constr}\label{construction:W-homotopy}
We define the enriched category $\Tree^{\square}$ in this paragraph. We take the same set of objects as the category of reduced trees $\widetilde{\Tree}$.
In the definition of the hom-objects, we consider the cubical chain complexes $\DGN_*(\Delta^1)^{\otimes k}$, dual to the cubical cochain algebras $I^k = \DGN^*(\Delta^1)^{\otimes k}$
of the definition of homotopy Segal cooperads.
We use the coface operators $d_i^{\epsilon}: \DGN_*(\Delta^1)^{\otimes k-1}\rightarrow\DGN_*(\Delta^1)^{\otimes k}$
and the codegeneracy operators $s_j: \DGN_*(\Delta^1)^{\otimes k}\rightarrow\DGN_*(\Delta^1)^{\otimes k-1}$
dual to the operators $d^i_{\epsilon}: I^k\rightarrow I^{k-1}$, $i = 1,\dots,k$, $\epsilon = 0,1$,
and $s^j: I^{k-1}\rightarrow I^k$, $j = 0,\dots,k$,
considered in Construction~\ref{constr:cubical-cochain-algebras}.
We then have $d_i^{\epsilon} = \id^{\otimes k-i}\otimes d^{\epsilon}\otimes\id^{\otimes i-1}$, $s_0 = \id^{\otimes k-1}\otimes s^0$,
$s_j = \id^{\otimes k-j-1}\otimes\nabla_*\otimes\id^{\otimes j-1}$, for $j = 1,\dots,k-1$,
and $s_k = s^0\otimes\id^{\otimes k-1}$,
where $d^{\epsilon}: \kk = \DGN_*(\Delta^0)\rightarrow\DGN_*(\Delta^1)$, $\epsilon = 0,1$, and $s^0: \DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^0) = \kk$
are the cofaces and the codegeneracy of the normalized chain complex of the one-simplex,
while $\nabla_*: \DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)$
denotes the connection of Construction~\ref{constr:cubical-cochain-connection}.
We precisely define the dg module $\Tree^{\square}(\ttree,\stree)$, which represents the dg hom-object associated to a pair of reduced trees $\stree,\ttree\in\widetilde{\Tree}$
such that $\ttree\not=\stree$ in our enriched category, by the following quotient
\begin{equation*}
\Tree^{\square}(\ttree,\stree)
= \bigoplus_{k\geq 0}\left(\bigoplus_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}
\DGN_*(\Delta^1)^{\otimes k}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}
\right)/\equiv,
\end{equation*}
where a copy of the cubical chain complex $\DGN_*(\Delta^1)^{\otimes k}$ is assigned to every sequence
of composable tree morphisms $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$,
and we mod out by the relations
\begin{align*}
d^0_i(\underline{\sigma})_{\ttree\rightarrow\cdots\rightarrow\stree}
& \equiv\underline{\sigma}_{\ttree\rightarrow\cdots\widehat{\ttree_i}\cdots\rightarrow\stree}, \\
s_j(\underline{\sigma})_{\ttree\rightarrow\cdots\rightarrow\stree}
& \equiv\underline{\sigma}_{\ttree\rightarrow\cdots\ttree_j = \ttree_j\cdots\rightarrow\stree},
\end{align*}
where $\underline{\sigma}$ denotes an element of the cubical chain complex (of appropriate dimension).
The composition operations of this enriched category
\begin{equation*}
\circ: \Tree^{\square}(\utree,\stree)\otimes\Tree^{\square}(\ttree,\utree)\rightarrow\Tree^{\square}(\ttree,\stree)
\end{equation*}
are given by
\begin{multline}\tag{$*$}\label{cubical-enriched-category:composition}
\underline{\sigma}_{\utree\rightarrow\utree_k\rightarrow\dots\rightarrow\utree_1\rightarrow\stree}
\circ\underline{\tau}_{\ttree\rightarrow\ttree_l\rightarrow\dots\rightarrow\ttree_1\rightarrow\utree}\\
= (\underline{\tau}\otimes\underline{0}\otimes\underline{\sigma})_{\ttree\rightarrow\ttree_l\rightarrow\dots\rightarrow\ttree_1\rightarrow\utree
\rightarrow\utree_k\rightarrow\dots\rightarrow\utree_1\rightarrow\stree}
\end{multline}
as long as $\utree\not=\stree$ and $\ttree\not=\utree$. We just take in addition $\Tree^{\square}(\stree,\stree) = \kk$ to provide our category with identity morphisms.
We easily check that the above formula preserves the relations of our hom-objects
and hence gives a well-defined morphism
of dg modules. We immediately see that these composition operations
are associative too.
We can also define the objects $\Tree^{\square}(\ttree,\stree)$ in terms of a coend.
We then consider an indexing category $\CubeCat$ generated by the $0$-cofaces $d^0_i$ and the codegeneracies $s_j$
attached to our cubical chain complexes
and which reflect the face and degeneracy operations that we apply to the sequences of composable tree morphisms.
The objects of this category are the ordinals $\underline{k+2} = \{k+1>k>\dots>1>0\}$, with $k\geq 0$.
The morphisms are the non-decreasing maps $u: \underline{k+2}\rightarrow\underline{l+2}$
such that $u(k+1) = l+1$ and $u(0) = 0$.
The coface $d^0_i$ corresponds to the map $d^0_i: \underline{k+1}\rightarrow\underline{k+2}$ that jumps over $i+1$ in $\underline{k+2}$,
while the codegeneracy $s_j$ corresponds to the map $s_j: \underline{k+2}\rightarrow\underline{k+1}$
such that $s_j(x) = x$ for $x = 0,\dots,j$ and $s_j(x) = x-1$ for $x = j+1,\dots,k+1$.
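To make these conventions explicit on a small example (which we only record for illustration purposes), for $k = 2$ the coface $d^0_1: \underline{3}\rightarrow\underline{4}$ is the map such that $0\mapsto 0$, $1\mapsto 1$, $2\mapsto 3$, which jumps over $2$ in $\underline{4}$, while the codegeneracy $s_1: \underline{4}\rightarrow\underline{3}$ is the map such that $0\mapsto 0$, $1\mapsto 1$, $2\mapsto 1$, $3\mapsto 2$. Both maps preserve the bottom and top elements, as required in the definition of the morphisms of the category $\CubeCat$.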
We easily check that the collection of cubical chain complexes $\DGN_*(\Delta^1)^{\otimes k}$,
equipped with the previously defined operators $d^0_i: \DGN_*(\Delta^1)^{\otimes k-1}\rightarrow\DGN_*(\Delta^1)^{\otimes k}$
and $s_j: \DGN_*(\Delta^1)^{\otimes k}\rightarrow\DGN_*(\Delta^1)^{\otimes k-1}$,
defines a functor $\underline{k+2}\mapsto\DGN_*(\Delta^1)^{\otimes k}$ on this category $\CubeCat$.
In general, we denote by $X_k$ the image of an object $\underline{k+2}\in\CubeCat$
under a (contravariant or covariant) functor $X$
over the category $\CubeCat$.
For a pair of reduced trees $\ttree,\stree\in\widetilde{\Tree}(r)$ with $\ttree\not=\stree$,
we also consider the functor $\NCat(\ttree,\stree): \CubeCat^{op}\rightarrow\Set$
such that
\begin{equation*}
\NCat(\ttree,\stree)_k = \bigl\{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree|\ttree_i\in\widetilde{\Tree}(r)\,(\forall i)\bigr\},
\end{equation*}
where we consider the set of all sequences of composable tree morphisms of length $k+1$ with $\ttree_{k+1} = \ttree$, $\ttree_0 = \stree$,
and we equip this set with the obvious action of the category $\CubeCat$ (we adapt the usual definition
of the simplicial nerve of a category).
We then have:
\begin{equation*}
\Tree^{\square}(\ttree,\stree) = \int^{\underline{k+2}\in\CubeCat}\kk[\NCat(\ttree,\stree)_k]\otimes\DGN_*(\Delta^1)^{\otimes k},
\end{equation*}
where $\kk[\NCat(\ttree,\stree)_k]$ is the $\kk$-module generated by the set $\NCat(\ttree,\stree)_k$.
We can also express the composition operation of the category $\Tree^{\square}$ in terms of a termwise composition operation on this coend
which we define by the above formula~(\ref{cubical-enriched-category:composition}).
We have well-defined (contravariant) facet operators
\begin{equation*}
i_{\sigmatree,\stree}: \Tree^{\square}(\ttree,\stree)\rightarrow\Tree^{\square}(\thetatree,\sigmatree),
\end{equation*}
which we associate to all subtree inclusions $\sigmatree\subset\stree$, where $\thetatree = f^{-1}(\sigmatree)$ is the pre-image of the subtree $\sigmatree\subset\stree$
under the (at most unique) morphism $f: \ttree\rightarrow\stree$.
We define these facet operators on our coend termwise,
by the map
\begin{equation*}
i_{\sigmatree,\stree}: \kk[\NCat(\ttree,\stree)_k]\otimes\DGN_*(\Delta^1)^{\otimes k}\rightarrow\kk[\NCat(\thetatree,\sigmatree)_k]\otimes\DGN_*(\Delta^1)^{\otimes k}
\end{equation*}
induced by the set-theoretic facet operator $\NCat(\ttree,\stree)_k\rightarrow\NCat(\thetatree,\sigmatree)_k$
which carries any sequence of composable tree morphisms $\ttree\xrightarrow{f_k}\ttree_k\xrightarrow{f_{k-1}}\cdots\xrightarrow{f_1}\ttree_1\xrightarrow{f_0}\stree$
to the sequence of tree morphisms $(f_0\cdots f_k)^{-1}(\sigmatree)\rightarrow(f_0\cdots f_{k-1})^{-1}(\sigmatree)\rightarrow\cdots\rightarrow f_0^{-1}(\sigmatree)\rightarrow\sigmatree$,
where we use $\thetatree = (f_0\cdots f_k)^{-1}(\sigmatree)$.
We can also associate a Segal map
\begin{equation*}
i_{\lambda_{\utree}(\sigmatree_*)}: \Tree^{\square}(\ttree,\stree)\rightarrow\bigotimes_{u\in V(\utree)}\Tree^{\square}(\thetatree_u,\sigmatree_u),
\end{equation*}
to every tree decomposition $\stree = \lambda_{\utree}(\sigmatree_*)$, where we set $\thetatree_u = f^{-1}(\sigmatree_u)$, for all factors $\sigmatree_u\subset\stree$.
We then take the tensor product of the product of the above set-theoretic assignments $\NCat(\ttree,\stree)_k\rightarrow\prod_{u\in V(\utree)}\NCat(\thetatree_u,\sigmatree_u)_k$
with the map $\mu^*: \DGN_*(\Delta^1)^{\otimes k}\rightarrow\bigotimes_{u\in V(\utree)}\DGN_*(\Delta^1)^{\otimes k}$
induced by the coassociative coproduct of the cubical chain complex $\DGN_*(\Delta^1)^{\otimes k}$.
We just need to fix an ordering on the set of vertices of our trees since this coproduct is not cocommutative.
We easily check that the facet operators and the Segal maps satisfy natural associativity relations and are compatible, in some natural sense,
with the composition operation of our enriched category structure.
In the symmetric context, we can also observe that the hom-objects $\Tree^{\square}(\ttree,\stree)$
inherit an action of the symmetric group such that
\begin{equation*}
s_*: \Tree^{\square}(\ttree,\stree)\rightarrow\Tree^{\square}(s\ttree,s\stree),
\end{equation*}
for every pair of reduced trees $\stree,\ttree\in\widetilde{\Tree}(r)$, $r>0$,
and for every permutation $s\in\Sigma_r$,
which are induced by the mappings $\ttree_i\mapsto s\ttree_i$ at the level of the sets $\NCat(\ttree,\stree)_k$.
These operators are compatible with the enriched category structure (so that the mapping $s_*: \ttree\mapsto s\ttree$
defines a functor on the enriched category $\Tree^{\square}$)
and with the facet operators. (But the action of permutations is not compatible with the Segal maps with values in the tensor product, since we need to order the vertices
of our trees when we form these maps.)
\end{constr}
We use that this enriched category in dg modules $\Tree^{\square}$ can be upgraded to a category enriched over the category of $\EOp$-coalgebras.
\begin{prop}\label{proposition:E-algebra-tree-category}
Each object $\Tree^{\square}(\ttree,\stree)$ inherits an $\EOp$-coalgebra structure from the cubical chain complexes $\DGN_*(\Delta^1)^{\otimes k}$.
The composition products define morphisms of $\EOp$-coalgebras $\circ: \Tree^{\square}(\ttree,\utree)\otimes\Tree^{\square}(\utree,\stree)\rightarrow\Tree^{\square}(\ttree,\stree)$
(we switch the conventional order of the factors of the composition to make the diagonal action of the Barratt--Eccles operad compatible with these operations).
The facet operators $i_{\sigmatree,\stree}: \Tree^{\square}(\ttree,\stree)\rightarrow\Tree^{\square}(\thetatree,\sigmatree)$
also define morphisms of $\EOp$-coalgebras,
as well as the operators that give the action of permutations $s_*: \Tree^{\square}(\ttree,\stree)\rightarrow\Tree^{\square}(s\ttree,s\stree)$
in the symmetric setting.
The constructions of the previous paragraph accordingly give a category $\Tree^{\square}$ enriched in $\EOp$-coalgebras
and equipped with facet operators (together with an action of permutations), which are defined within the category of $\EOp$-coalgebras
and are compatible with the composition structure of our objects.
\end{prop}
\begin{proof}
For each $k\geq 0$, we use that $\kk[\NCat(\ttree,\stree)_k]$ inherits a cocommutative coalgebra structure (given by the diagonal of the set $\NCat(\ttree,\stree)_k$)
in order to extend the $\EOp$-coalgebra structure of the cubical chain complex $\DGN_*(\Delta^1)^{\otimes k}$
to the tensor product $\kk[\NCat(\ttree,\stree)_k]\otimes\DGN_*(\Delta^1)^{\otimes k}$.
We readily check that this $\EOp$-coalgebra structure is compatible with the action of the category $\CubeCat$
and therefore passes to our coend. (Recall simply that the forgetful functor from a category of coalgebras to a base category creates coends.)
We easily check that the composition operations of the category $\Tree^{\square}$ are also compatible with the $\EOp$-coalgebra structure termwise,
as well as the facet operators. Then we just pass to the coend
to get the conclusions of the proposition.
\end{proof}
We now consider the enriched category in cocommutative coalgebras such that
\begin{equation*}
\Tree(\ttree,\stree) = \begin{cases} \kk, & \text{if we have a morphism $\ttree\rightarrow\stree$}, \\
0, & \text{otherwise}, \end{cases}
\end{equation*}
for any pair of reduced trees $\stree,\ttree\in\widetilde{\Tree}(r)$, $r>0$ (with the same abuse of notation as in the introduction of this subsection).
We immediately see that this enriched category inherits the same structures (facet operators, action of permutations) within the category of cocommutative coalgebras
as the enriched category in $\EOp$-coalgebras $\Tree^{\square}$.
We also have the following observation:
\begin{prop}\label{lemma:tree-square-contractible}
\begin{enumerate}
\item
The hom-objects of the enriched category $\Tree^{\square}$ are endowed with weak-equivalences
\begin{equation*}
\epsilon: \Tree^{\square}(\ttree,\stree)\xrightarrow{\sim}\kk,
\end{equation*}
which are yielded by the augmentation maps such that $\epsilon(1_{\ttree\rightarrow\stree}) = 1$
and $\epsilon(\underline{\sigma}_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}) = 0$
in cubical dimension $k>0$ when $\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree$
is non degenerate.
\item
These weak-equivalences also define a morphism of enriched categories
\begin{equation*}
\epsilon: \Tree^{\square}\xrightarrow{\sim}\Tree,
\end{equation*}
which is compatible with the facet operators and with the action of permutations whenever we consider this extra structure.
(We then regard the enriched category in cocommutative coalgebras $\Tree$ as a category enriched in $\EOp$-coalgebras
by restriction of structure through the augmentation map of the Barratt--Eccles operad.)
\end{enumerate}
\end{prop}
\begin{proof}
The morphism $\epsilon: \Tree^{\square}(\ttree,\stree)\xrightarrow{\sim}\kk$ has an obvious section $\eta: \kk\rightarrow\Tree^{\square}(\ttree,\stree)$
given by $\eta(1) = 1_{\ttree\rightarrow\stree}$.
We construct a chain homotopy $h: \Tree^{\square}(\ttree,\stree)\otimes\DGN_*(\Delta^1)\rightarrow\Tree^{\square}(\ttree,\stree)$
between $\eta\circ\epsilon = h(-\otimes\underline{1})$ and $\id = h(-\otimes\underline{0})$
to prove that $\epsilon$ is a weak-equivalence.
We proceed as follows. For $k\geq 0$, we consider the map
\begin{equation*}
h_k: \DGN_*(\Delta^1)^{\otimes k}\otimes\DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)^{\otimes k}
\end{equation*}
defined by the composite
\begin{multline*}
\DGN_*(\Delta^1)^{\otimes k}\otimes\DGN_*(\Delta^1)\xrightarrow{\id\otimes\mu^*}\DGN_*(\Delta^1)^{\otimes k}\otimes\DGN_*(\Delta^1)^{\otimes k} \\
\xrightarrow{\simeq}(\DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1))^{\otimes k}\xrightarrow{(\nabla_*^{\max})^{\otimes k}}\DGN_*(\Delta^1)^{\otimes k},
\end{multline*}
where $\mu^*$ is the ($k$-fold) Alexander--Whitney diagonal and $\nabla_*^{\max} = \DGN_*(\max)\circ\EM$ is the composite of the Eilenberg--MacLane map
with the morphism induced by the map of simplicial sets $\max: \Delta^1\times\Delta^1\rightarrow\Delta^1$
such that $\max: (s,t)\mapsto\max(s,t)$. (Thus, we consider a mirror of the connection $\nabla_* = \nabla_*^{\min}$,
which we use in the definition of the codegeneracies of our cubical complex $\DGN_*(\Delta^1)^{\otimes k}$.)
We claim that these maps preserve the action of the cofaces $d^0_i$ and of the codegeneracies $s_j$ on the cubical complex $\DGN_*(\Delta^1)^{\otimes k}$.
The preservation of the cofaces $d^0_i$ follows from the formulas $\nabla_*^{\max}(\underline{1},\underline{01}) = 0$
and $\nabla_*^{\max}(\underline{1},\underline{\tau}) = \underline{1}$ for $\deg(\underline{\tau}) = 0$.
The preservation of the codegeneracies $s_j$ is immediate in the cases $j = 0$ and $j = k$.
We use the properties of the Alexander--Whitney diagonal and of the Eilenberg--MacLane map to reduce the verification of the preservation of the codegeneracies $s_j$
such that $j = 1,\dots,k-1$ to the case $k=2$.
We then deduce our claim from the distribution relation
\begin{equation*}
\nabla_*^{\max}(\nabla_*^{\min}(\underline{\sigma}_2\otimes\underline{\sigma}_1)\otimes\underline{\tau})
= \sum_{(\underline{\tau})}\nabla_*^{\min}(\nabla_*^{\max}(\underline{\sigma}_2\otimes\underline{\tau}')\otimes\nabla_*^{\max}(\underline{\sigma}_1\otimes\underline{\tau}'')),
\end{equation*}
valid for $\underline{\sigma}_2,\underline{\sigma}_1,\underline{\tau}\in\DGN_*(\Delta^1)$,
and where we write $\mu^*(\underline{\tau}) = \sum_{(\underline{\tau})}\underline{\tau}'\otimes\underline{\tau}''$
for the Alexander--Whitney diagonal of the element $\underline{\tau}\in\DGN_*(\Delta^1)$.
(This relation, which reflects the classical min-max distribution relation, can easily be checked by hand.)
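To give a sample of this verification (which we only include for illustration purposes), we can take $\underline{\sigma}_2 = \underline{1}$, $\underline{\sigma}_1 = \underline{0}$ and $\underline{\tau} = \underline{1}$. In degree $0$, the connections reduce to the maps induced by $\min$ and $\max$ on the vertices of $\Delta^1$, so that we get
\begin{equation*}
\nabla_*^{\max}(\nabla_*^{\min}(\underline{1}\otimes\underline{0})\otimes\underline{1})
= \nabla_*^{\max}(\underline{0}\otimes\underline{1}) = \underline{1}
\end{equation*}
on the left-hand side, while the Alexander--Whitney diagonal $\mu^*(\underline{1}) = \underline{1}\otimes\underline{1}$ gives
\begin{equation*}
\nabla_*^{\min}(\nabla_*^{\max}(\underline{1}\otimes\underline{1})\otimes\nabla_*^{\max}(\underline{0}\otimes\underline{1}))
= \nabla_*^{\min}(\underline{1}\otimes\underline{1}) = \underline{1}
\end{equation*}
on the right-hand side.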
We deduce from these verifications that these maps $h_k$, tensored with the identity of the factor $\kk[\NCat(\ttree,\stree)_k]$,
induce a well-defined map on our coend $h: \Tree^{\square}(\ttree,\stree)\otimes\DGN_*(\Delta^1)\rightarrow\Tree^{\square}(\ttree,\stree)$.
We also have $h_k(\underline{\sigma}\otimes\underline{1}) = \underline{1}^{\otimes k}$ and $h_k(\underline{\sigma}\otimes\underline{0}) = \underline{\sigma}$, for each $k\geq 0$,
and these identities give the relations $\eta\circ\epsilon = h(-\otimes\underline{1})$ and $\id = h(-\otimes\underline{0})$
at the coend level.
This verification completes the proof of the first assertion of the proposition, while an immediate inspection gives the verification of the second assertion.
\end{proof}
This proposition has the following corollary, which we use in our verification that the homotopy Kan construction returns Segal $E_\infty$-Hopf pre-cooperads
that satisfy the Segal condition.
\begin{cor}\label{cor:Segal-condition-tree-square}
The Segal map of Construction~\ref{construction:W-homotopy} defines a weak-equivalence of dg modules
\begin{equation*}
i_{\lambda_{\utree}(\sigmatree_*)}: \Tree^{\square}(\ttree,\stree)\rightarrow\bigotimes_{u\in V(\utree)}\Tree^{\square}(\thetatree_u,\sigmatree_u),
\end{equation*}
for every tree decomposition $\stree = \lambda_{\utree}(\sigmatree_*)$, where we again set $\thetatree_u = f^{-1}(\sigmatree_u)$, for all factors $\sigmatree_u\subset\stree$.\qed
\end{cor}
We can reformulate the definition of the structure operators of homotopy Segal $E_\infty$-Hopf shuffle pre-cooperads in terms of the category $\Tree^{\square}$.
We get the following result, which follows from formal verifications.
\begin{prop}\label{prop:dg-category-trees}
Let $\AOp$ be a connected homotopy Segal $E_\infty$-Hopf pre-cooperad (either symmetric or shuffle).
The objects $\AOp(\ttree)$, $\ttree\in\Tree(r)$, $r>0$, inherit an action of the enriched category $\Tree^{\square}$,
given by operators
\begin{equation*}
\rho: \AOp(\stree)\otimes\Tree^{\square}(\ttree,\stree)\rightarrow\AOp(\ttree),
\end{equation*}
defined in the category of dg modules, and which preserve the action of the facet operators in some natural sense (as well as the action of permutations in the symmetric setting).
The homotopy coproduct operators $\rho_{\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree}$
are identified with the adjoint morphisms
of the maps
\begin{multline*}
\AOp(\stree)\otimes[\ttree\rightarrow\ttree_k\rightarrow\dots\rightarrow\ttree_1\rightarrow\stree]\otimes\DGN_*(\Delta^1)^{\otimes k}\\
\rightarrow\AOp(\stree)\otimes\underbrace{\int^{\underline{k+2}\in\CubeCat}\kk[\NCat(\ttree,\stree)_k]\otimes\DGN_*(\Delta^1)^{\otimes k}}_{\Tree^{\square}(\ttree,\stree)}
\rightarrow\AOp(\ttree),
\end{multline*}
which we determine from this action.\qed
\end{prop}
In the case where $\AOp$ is a strict Segal $E_\infty$-Hopf shuffle pre-cooperad, we may see that the action of the enriched category $\Tree^{\square}$
defined in this proposition factors through an action of the category enriched in cocommutative coalgebras $\Tree$ (which satisfies the same properties).
Thus we just retrieve the obvious functor structure of the object $\AOp$
in this case.
\medskip
We now tackle the definition of our homotopy Kan construction. We have to dualize the structure operations attached to the category $\Tree^{\square}$.
For this purpose, we use the following observation.
\begin{lemm}\label{lemma:finite-generated-tree-square}
The dg hom-object $\Tree^{\square}(\ttree,\stree)$ forms a free module of finite rank over the ground ring $\kk$, for all trees $\ttree,\stree\in\widetilde{\Tree}(r)$, $r>0$.
\end{lemm}
\begin{proof}
Each element of our coend has a unique representative as a linear combination of tensors
of the form
\begin{equation*}
[\ttree\xrightarrow{\not=}\ttree_k\xrightarrow{\not=}\dots\xrightarrow{\not=}\ttree_1\xrightarrow{\not=}\stree]\otimes\underline{\sigma}
\in\kk[\NCat(\ttree,\stree)_k]\otimes\mathring{\DGN}_*(\Delta^1)^{\otimes k},
\end{equation*}
where $\mathring{\DGN}_*(\Delta^1) = \kk\underline{0}\oplus\kk\underline{01}$. Then we just use that the set of sequences of composable morphisms
of the form $\ttree\xrightarrow{\not=}\ttree_k\xrightarrow{\not=}\dots\xrightarrow{\not=}\ttree_1\xrightarrow{\not=}\stree$, $k\geq 0$,
is finite,
because each tree $\ttree_i$ is obtained from $\ttree$ by contracting a subset of the inner edges $e\in\mathring{E}(\ttree)$,
of which there are only finitely many.
\end{proof}
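To illustrate this lemma on an example (which we do not use in what follows), assume that the morphism $\ttree\rightarrow\stree$ contracts exactly two inner edges $e_1,e_2\in\mathring{E}(\ttree)$. The sequences of composable non-identity morphisms from $\ttree$ to $\stree$ are then $\ttree\rightarrow\stree$ and $\ttree\rightarrow\ttree/e_i\rightarrow\stree$, for $i = 1,2$, so that $\Tree^{\square}(\ttree,\stree)$ is free of rank $5$ over the ground ring, with three basis elements in degree $0$, namely $[\ttree\rightarrow\stree]$ and $[\ttree\rightarrow\ttree/e_i\rightarrow\stree]\otimes\underline{0}$, $i = 1,2$, and two basis elements in degree $1$, namely $[\ttree\rightarrow\ttree/e_i\rightarrow\stree]\otimes\underline{01}$, $i = 1,2$. The Euler characteristic $3 - 2 = 1$ of this complex agrees with the contractibility statement of Proposition~\ref{lemma:tree-square-contractible}.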
We can now define the two-sided cobar complex which underlies our homotopy Kan construction $\DGK^c(\AOp)$.
We address this construction in the next paragraph.
\begin{constr}\label{constr:two-sided-cobar}
We use the statement of the previous lemma to dualize the structure operations associated to the enriched category $\Tree^{\square}$.
For a pair of trees $\stree,\ttree\in\Tree(r)$, $r>0$, we let $\Tree^{\square}(\ttree,\stree)^{\sharp}$
denote the dual $\EOp$-algebra
of the $\EOp$-coalgebra $\Tree^{\square}(\ttree,\stree)$.
The composition operations of the enriched category $\Tree^{\square}$ induce a coproduct map
\begin{align*}
\gamma^*: \Tree^{\square}(\ttree,\stree)^{\sharp}
& \rightarrow\prod_{\ttree\rightarrow\utree\rightarrow\stree}\Tree^{\square}(\ttree,\utree)^{\sharp}\otimes\Tree^{\square}(\utree,\stree)^{\sharp},
\intertext{which also forms a morphism of $\EOp$-algebras. We also have an augmentation map}
\eta^*: \Tree^{\square}(\ttree,\stree)^{\sharp} & \rightarrow\kk,
\end{align*}
which we take to be the identity in the case $\ttree = \stree$ and the zero map in the case $\ttree\not=\stree$.
The facet operators induce $\EOp$-algebra morphisms
\begin{equation*}
i_{\sigmatree,\stree}: \Tree^{\square}(\thetatree,\sigmatree)^{\sharp}\rightarrow\Tree^{\square}(\ttree,\stree)^{\sharp},
\end{equation*}
which preserve the above coproduct and counit operations. Note that the cartesian product in the definition of the operation $\gamma^*$ reduces to a direct sum,
since every morphism $\ttree\rightarrow\stree$ admits finitely many factorizations $\ttree\rightarrow\utree\rightarrow\stree$.
In the symmetric setting, we also consider $\EOp$-algebra morphisms
\begin{equation*}
s^*: \Tree^{\square}(s\ttree,s\stree)^{\sharp}\rightarrow\Tree^{\square}(\ttree,\stree)^{\sharp}
\end{equation*}
given by the action of permutations $s\in\Sigma_r$.
For a homotopy Segal $E_\infty$-Hopf cooperad, the operations of Proposition~\ref{prop:dg-category-trees}
dualize to $\EOp$-algebra morphisms
\begin{equation*}
\rho^*: \AOp(\stree)\rightarrow\prod_{\ttree\rightarrow\stree}\AOp(\ttree)\otimes\Tree^{\square}(\ttree,\stree)^{\sharp},
\end{equation*}
which we can also identify with an end of the homotopy coproducts attached to our object by the adjoint definition
of these operations in our Proposition~\ref{prop:dg-category-trees} (we again use a variant of the observations
of the previous lemma to obtain that the tensor product with $\AOp(\ttree)$
distributes over this end).
These morphisms are coassociative and counital with respect to the coproduct and counit operations of the objects $\Tree^{\square}(\ttree,\stree)^{\sharp}$,
commute with the action of the facet operators on our homotopy Segal $E_\infty$-Hopf cooperad
and on the objects $\Tree^{\square}(\ttree,\stree)^{\sharp}$ (and commute with the action of permutations in the symmetric setting).
We then let $\FOp_{\stree}(\ttree)\in\EAlg$ be a collection of $\EOp$-algebras, defined for any fixed tree $\stree\in\widetilde{\Tree}(r)$, for all $\ttree\in\widetilde{\Tree}(r)/\stree$,
and equipped with coproduct operations
\begin{equation*}
\gamma^*: \FOp_{\stree}(\ttree)\rightarrow\prod_{\ttree\rightarrow\utree\rightarrow\stree}\Tree^{\square}(\ttree,\utree)^{\sharp}\otimes\FOp_{\stree}(\utree),
\end{equation*}
defined in the category of $\EOp$-algebras, and which are again coassociative and counital with respect to the coproduct and counit operations
of the objects $\Tree^{\square}(\ttree,\stree)^{\sharp}$.
We assume that this collection defines a functor in the tree $\stree$ when $\stree$ varies
and that we have facet operators
\begin{equation*}
i_{\sigmatree,\stree}: \FOp_{\sigmatree}(\thetatree)\rightarrow\FOp_{\stree}(\ttree),
\end{equation*}
associated to all subtrees $\sigmatree\subset\stree$, with $\thetatree = f^{-1}(\sigmatree)$, the pre-image of $\sigmatree$ under the morphism $f: \ttree\rightarrow\stree$,
which again satisfy natural functoriality relations and are compatible with the coproduct operations.
In the symmetric setting, we also assume that we have an action of the permutations $s^*: \FOp_{s\stree}(s\ttree)\rightarrow\FOp_{\stree}(\ttree)$,
which is compatible with the structure operations
attached to our collection.
In what follows, we consider the cases $\FOp_{\stree}(\ttree) = \Tree^{\square}(\ttree,\stree)^{\sharp}$ and $\FOp_{\stree}(\ttree) = \Tree(\ttree,\stree)^{\sharp}$,
where in the latter case $\Tree(\ttree,\stree)^{\sharp}$ denotes the commutative algebra of functions $u: \Tree(\ttree,\stree)\rightarrow\kk$
on the morphism set $\Tree(\ttree,\stree)$ of the tree category.
For each $n\in\NN$, we set
\begin{multline*}
\DGK^n(\AOp,\Tree^{\square},\FOp_{\stree})\\
= \prod_{\ttree_n\rightarrow\cdots\rightarrow\ttree_0\rightarrow\stree}
\AOp(\ttree_n)\otimes\Tree^{\square}(\ttree_n,\ttree_{n-1})^{\sharp}\otimes\dots\otimes\Tree^{\square}(\ttree_1,\ttree_0)^{\sharp}\otimes\FOp_{\stree}(\ttree_0),
\end{multline*}
and we equip this object with the coface operators $d^i: \DGK^{n-1}(\AOp,\Tree^{\square},\FOp_{\stree})\rightarrow\DGK^n(\AOp,\Tree^{\square},\FOp_{\stree})$
defined termwise by the maps such that
\begin{align*}
d^i & = \begin{cases}
\id\otimes\id^{\otimes n-1}\otimes\gamma^*, & \text{for $i = 0$}, \\
\id\otimes\id^{\otimes n-i-1}\otimes\gamma^*\otimes\id^{\otimes i-1}\otimes\id, & \text{for $i = 1,\dots,n-1$}, \\
\rho^*\otimes\id^{\otimes n-1}\otimes\id, & \text{for $i = n$},
\end{cases}
\intertext{and with the codegeneracies $s^j: \DGK^{n+1}(\AOp,\Tree^{\square},\FOp_{\stree})\rightarrow\DGK^n(\AOp,\Tree^{\square},\FOp_{\stree})$
defined termwise by the maps}
s^j & = \id\otimes\id^{\otimes n-j}\otimes\eta^*\otimes\id^{\otimes j}\otimes\id\quad\text{for $j = 0,\dots,n$}.
\end{align*}
We easily check that this definition returns a cosimplicial object in the category of $\EOp$-algebras.
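In the lowest cosimplicial dimension, for instance, this construction reduces to the following expression (which we spell out for illustration purposes):
\begin{equation*}
\DGK^0(\AOp,\Tree^{\square},\FOp_{\stree}) = \prod_{\ttree_0\rightarrow\stree}\AOp(\ttree_0)\otimes\FOp_{\stree}(\ttree_0),
\end{equation*}
with the coface operators $d^0 = \id\otimes\gamma^*$ and $d^1 = \rho^*\otimes\id$ with values in $\DGK^1(\AOp,\Tree^{\square},\FOp_{\stree})$, so that the cosimplicial identities $s^0 d^0 = \id = s^0 d^1$ reflect the counit relations satisfied by the operations $\gamma^*$ and $\rho^*$.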
We also have facet operators
\begin{equation*}
i_{\sigmatree,\stree}: \DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\sigmatree})\rightarrow\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree}),
\end{equation*}
compatible with the cosimplicial structure, and defined by the termwise tensor products of facet operators
\begin{multline*}
i_{\thetatree_n,\ttree_n}\otimes i_{\thetatree_{n-1},\ttree_{n-1}}\otimes\dots\otimes i_{\thetatree_0,\ttree_0}\otimes i_{\sigmatree,\stree}:\\
\AOp(\thetatree_n)\otimes\Tree^{\square}(\thetatree_n,\thetatree_{n-1})^{\sharp}\otimes\dots\otimes\Tree^{\square}(\thetatree_1,\thetatree_0)^{\sharp}\otimes\FOp_{\stree}(\thetatree_0)\\
\rightarrow\AOp(\ttree_n)\otimes\Tree^{\square}(\ttree_n,\ttree_{n-1})^{\sharp}\otimes\dots\otimes\Tree^{\square}(\ttree_1,\ttree_0)^{\sharp}\otimes\FOp_{\stree}(\ttree_0),
\end{multline*}
where $\thetatree_i = (f_0\cdots f_i)^{-1}(\sigmatree)$, $i = 0,\dots,n$, denotes the pre-image of the subtree $\sigmatree\subset\stree$
under the composite of the tree morphisms $\ttree_i\xrightarrow{f_i}\cdots\xrightarrow{f_1}\ttree_0\xrightarrow{f_0}\stree$.
In the symmetric setting, we still consider an action of permutations
\begin{equation*}
s^*: \DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{s\stree})\rightarrow\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree}),
\end{equation*}
defined again by an obvious termwise construction, and compatible with the facet operators.
\end{constr}
We record the outcome of the previous construction in the next proposition.
\begin{prop}\label{proposition:cosimplicial-B}
The construction of the previous paragraph returns a collection of cosimplicial $\EOp$-algebras
\begin{equation*}
\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree})\in\EAlg,\quad\stree\in\widetilde{\Tree}(r),\quad r>0,
\end{equation*}
equipped with compatible facet operators $i_{\sigmatree,\stree}: \DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\sigmatree})\rightarrow\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree})$,
which satisfy the usual functoriality relations.
If the homotopy Segal $E_\infty$-Hopf cooperad $\AOp$ is symmetric and $\FOp_{\stree}(-)$ is endowed with a symmetric structure as well,
then we also have an action of permutations on our objects $s^*: \DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{s\stree})\rightarrow\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree})$
compatible with the cosimplicial structure and with the facet operators.
\qed
\end{prop}
In the case of a decomposition $\stree = \lambda_{\utree}(\sigmatree_u,u\in V(\utree))$, we can assemble the facet operators
$i_{\sigmatree_u,\stree}: \FOp_{\sigmatree_u}(\thetatree_u)\rightarrow\FOp_{\stree}(\ttree)$
associated to a bicollection of $\EOp$-algebras $\FOp_{\stree}(\ttree)\in\EAlg$
as in Construction~\ref{constr:two-sided-cobar}
into a Segal map
\begin{equation*}
i_{\lambda_{\utree}(\sigmatree_*)}: \bigvee_{u\in V(\utree)}\FOp_{\sigmatree_u}(\thetatree_u)\rightarrow\FOp_{\stree}(\ttree),
\end{equation*}
and we can define a Segal map similarly on our cosimplicial object $\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree})$.
We say that our bicollection $\FOp_{\stree}(\ttree)\in\EAlg$ satisfies the Segal condition when the above Segal map is a weak-equivalence.
We have the following statement.
\begin{prop}\label{prop:cosimplicial-Segal}
If $\AOp$ is a homotopy Segal $E_\infty$-Hopf cooperad and hence satisfies the Segal condition, and if the bicollection $\FOp_{\stree}(\ttree)\in\EAlg$ satisfies the Segal condition as well,
then the Segal maps that we associate to our cosimplicial object $\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree})$
define levelwise weak-equivalences of cosimplicial $\EOp$-algebras
\begin{equation*}
i_{\lambda_{\utree}(\sigmatree_*)}: \bigvee_{u\in V(\utree)}\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\sigmatree_u})\xrightarrow{\sim}\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree}),
\end{equation*}
for all tree decompositions $\stree = \lambda_{\utree}(\sigmatree_u,u\in V(\utree))$,
so that $\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree})$ also satisfies a form of our Segal condition levelwise.
\end{prop}
\begin{proof}
The Segal maps of the proposition are given, on each term of the cosimplicial object $\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree})$, by expressions of the form
\begin{multline*}
\bigvee_{u\in V(\utree)}
\biggl(\AOp(\thetatree_n^u)\otimes\cdots\otimes\Tree^{\square}(\thetatree_i^u,\thetatree_{i-1}^u)^{\sharp}\otimes\cdots\otimes\FOp_{\sigmatree_u}(\thetatree_0^u)\biggr)\\
\rightarrow\AOp(\ttree_n)\otimes\cdots\otimes\Tree^{\square}(\ttree_i,\ttree_{i-1})^{\sharp}\otimes\cdots\otimes\FOp_{\stree}(\ttree_0),
\end{multline*}
where $\thetatree_i^u$ denotes the pre-image of the subtree $\sigmatree_u\subset\stree$ under the composite of the tree morphisms $\ttree_i\rightarrow\dots\rightarrow\ttree_0\rightarrow\stree$
and we take a tensor product of facet operators
on each factor. We compose this map with the Eilenberg--MacLane map to pass from the coproduct $\bigvee_{u\in V(\utree)}$
to a tensor product $\bigotimes_{u\in V(\utree)}$ (as in Proposition~\ref{proposition:forgetful-strict}).
We have an obvious commutative diagram which enables us to identify the obtained Segal map
with a tensor product of the form:
\begin{multline*}
\biggl(\bigotimes_{u\in V(\utree)}\AOp(\thetatree_n^u)\biggr)
\otimes\cdots\otimes\biggl(\bigotimes_{u\in V(\utree)}\Tree^{\square}(\thetatree_i^u,\thetatree_{i-1}^u)^{\sharp}\biggr)\otimes\cdots
\otimes\biggl(\bigotimes_{u\in V(\utree)}\FOp_{\sigmatree_u}(\thetatree_0^u)\biggr)\\
\rightarrow\AOp(\ttree_n)\otimes\cdots\otimes\Tree^{\square}(\ttree_i,\ttree_{i-1})^{\sharp}\otimes\cdots\otimes\FOp_{\stree}(\ttree_0),
\end{multline*}
where we take a factorwise tensor product of Segal maps associated to the objects $\AOp(-)$, $\Tree^{\square}(-,-)^{\sharp}$ and $\FOp_{\stree}(-)$.
In the case of the objects $\Tree^{\square}(-,-)^{\sharp}$, we retrieve the dual of the Segal maps
considered in Corollary~\ref{cor:Segal-condition-tree-square}.
These Segal maps are therefore weak-equivalences, as are the Segal maps associated to the objects $\AOp(-)$ and $\FOp_{\stree}(-)$ by assumption.
The conclusion follows.
\end{proof}
We now focus on the cases $\FOp_{\stree}(-) = \Tree^{\square}(-,\stree)^{\sharp},\Tree(-,\stree)^{\sharp}$. The coproduct operation on $\Tree^{\square}(-,-)^{\sharp}$,
such as defined in Construction~\ref{constr:two-sided-cobar},
gives a natural transformation $\Tree^{\square}(-,\stree)^{\sharp}\rightarrow\Tree^{\square}(-,\ttree)^{\sharp}\otimes\Tree^{\square}(\ttree,\stree)^{\sharp}$,
which passes to our cosimplicial object, by functoriality of our construction, and yields a morphism of cosimplicial $\EOp$-algebras
\begin{equation*}
\rho^*: \DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp})
\rightarrow\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\ttree)^{\sharp})\otimes\Tree^{\square}(\ttree,\stree)^{\sharp},
\end{equation*}
for every pair of objects $\stree,\ttree\in\widetilde{\Tree}(r)$, $r>0$.
In the case $\FOp_{\stree}(-) = \Tree(-,\stree)^{\sharp}$, we similarly get morphisms of cosimplicial $\EOp$-algebras
of the form
\begin{equation*}
\rho^*: \DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\stree)^{\sharp})\rightarrow\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\ttree)^{\sharp})\otimes\Tree(\ttree,\stree)^{\sharp}.
\end{equation*}
Note that the map $\Tree^{\square}(-,-)\rightarrow\Tree(-,-)$ of Proposition~\ref{lemma:tree-square-contractible}
induces a natural transformation in the converse direction between these cosimplicial objects $\DGK^{\bullet}(\AOp,\Tree^{\square},\FOp_{\stree})$
that we associate to $\FOp_{\stree}(-) = \Tree^{\square}(-,\stree)^{\sharp}$ and to $\FOp_{\stree}(-) = \Tree(-,\stree)^{\sharp}$.
We then have the following result.
\begin{prop}\label{prop:cosimplicial-cooperad-B}
\begin{enumerate}
\item
The above coproduct operations provide the collection of cosimplicial $\EOp$-algebras
\begin{equation*}
\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp})\in\cosimp\EAlg,\quad\stree\in\widetilde{\Tree}(r),\quad r>0,
\end{equation*}
with the coproduct operators of a homotopy Segal $E_\infty$-Hopf pre-cooperad structure.
These coproduct operators are compatible with the facet operators and hence $\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp})$
forms a cosimplicial object in the category of homotopy Segal $E_\infty$-Hopf cooperads (shuffle or symmetric when $\AOp$ is so).
\item
In the case of the cosimplicial object $\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\stree)^{\sharp})$,
we similarly obtain a strict Segal $E_\infty$-Hopf pre-cooperad structure
compatible with the cosimplicial structure
on $\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\stree)^{\sharp})$.
Furthermore, the natural transformations $\Tree(-,\stree)^{\sharp}\rightarrow\Tree^{\square}(-,\stree)^{\sharp}$
induce levelwise weak-equivalences of cosimplicial $\EOp$-algebras
\begin{equation*}
\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\stree)^{\sharp})\xrightarrow{\sim}\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp}),
\end{equation*}
which preserve the homotopy Segal $E_\infty$-Hopf pre-cooperad structures, and hence, define a levelwise weak-equivalence
of cosimplicial objects in the category of homotopy Segal $E_\infty$-Hopf pre-cooperads.
\item
Both objects $\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp})$ and $\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\stree)^{\sharp})$
also satisfy the Segal condition levelwise, and hence define (homotopy) Segal $E_\infty$-Hopf cooperads
when $\AOp$ does so.
\end{enumerate}
\end{prop}
\begin{proof}
The first assertion is an immediate consequence of the functoriality properties of our construction.
In the second assertion, we use that the natural transformation $\Tree(-,\stree)^{\sharp}\rightarrow\Tree^{\square}(-,\stree)^{\sharp}$
is dual to the augmentation map of Proposition~\ref{lemma:tree-square-contractible},
which is a weak-equivalence by the result of this proposition.
The third assertion follows from the result of Proposition~\ref{prop:cosimplicial-Segal}
since Corollary~\ref{cor:Segal-condition-tree-square} implies that the bicollection $\FOp_{\stree}(\ttree) = \Tree^{\square}(\ttree,\stree)^{\sharp}$
satisfies the Segal condition and this is also obviously the case of the bicollection $\FOp_{\stree}(\ttree) = \Tree(\ttree,\stree)^{\sharp}$.
\end{proof}
We use a totalization functor to transform the cosimplicial (homotopy) Segal $E_\infty$-Hopf cooperads
of the previous proposition into ordinary (homotopy) Segal $E_\infty$-Hopf cooperads
in dg modules.
\begin{constr}\label{definition:corealization}
Let $R^{\bullet}$ be a cosimplicial object of the category of $\EOp$-algebras. We set
\begin{equation*}
\DGN^*(R^{\bullet}) = \int_n R^n\otimes\DGN^*(\Delta^n),
\end{equation*}
and we equip this object with the $\EOp$-algebra structure induced by the diagonal $\EOp$-algebra structure on $R^n\otimes\DGN^*(\Delta^n)$ termwise.
If we forget about $\EOp$-algebra structures, then we can identify this object with the conormalized complex of cosimplicial dg modules
(see for instance~\cite[\S II.5.0.12 and \S II.9.4.6]{FresseBook}),
and as such, this functor carries the levelwise weak-equivalences of cosimplicial objects
to weak-equivalences in the category of dg modules.
For a cosimplicial connected (homotopy) Segal $E_\infty$-Hopf pre-cooperad $\KOp^{\bullet}$, the collection $\DGN^*(\KOp^{\bullet}(\stree))$,
which we obtain by applying this conormalized complex functor termwise,
also inherits the structure of a (homotopy) Segal $E_\infty$-Hopf pre-cooperad (shuffle or symmetric when $\KOp^{\bullet}$ is so)
by functoriality of our conormalized complex construction.
\end{constr}
We use the following lemma.
\begin{lemm} \label{lemma:cosimplicial-quasi-iso}
Let $R^{\bullet}$ and $S^{\bullet}$ be cosimplicial $\EOp$-algebras. The $\EOp$-algebra morphism $\DGN^*(R^{\bullet})\vee\DGN^*(S^{\bullet})\rightarrow\DGN^*(R^{\bullet}\vee S^{\bullet})$
induced by the canonical morphisms $R^{\bullet}\rightarrow R^{\bullet}\vee S^{\bullet}$ and $S^{\bullet}\rightarrow R^{\bullet}\vee S^{\bullet}$
is a weak-equivalence.
\end{lemm}
\begin{proof}
We have a commutative diagram
\begin{equation*}
\xymatrixcolsep{5pc}\xymatrix{ \DGN^*(R^{\bullet})\vee\DGN^*(S^{\bullet})\ar[r] &
\DGN^*(R^{\bullet}\vee S^{\bullet}) \\
\DGN^*(R^{\bullet})\otimes\DGN^*(S^{\bullet})\ar[r]^{\AW}\ar[u]^{\EM} &
\DGN^*(R^{\bullet}\otimes S^{\bullet})\ar[u]_{\DGN^*(\EM)}
},
\end{equation*}
where the vertical maps are given by the natural transformations between the tensor product and the coproduct in the category of $\EOp$-algebras,
such as defined in Construction~\ref{constr:Barratt-Eccles-diagonal},
and the bottom horizontal map $\AW$ is the generalization of the Alexander--Whitney diagonal
for the conormalized cochain complex of cosimplicial dg modules.
In the case of cosimplicial dg algebras, this map $\AW$ is used to represent a product operation.
The vertical maps $\EM$ and $\DGN^*(\EM)$ also identify tensor products with associative products in the coproduct of $\EOp$-algebras
by the definition of Construction~\ref{constr:Barratt-Eccles-diagonal}. The commutativity of the diagram
readily follows from this interpretation of our maps.
The vertical maps are weak-equivalences by Proposition~\ref{claim:Barratt-Eccles-algebra-coproducts}.
The bottom horizontal map is also a weak-equivalence (by the general theory of the Eilenberg--Zilber equivalence).
Therefore the map of the proposition, which represents the upper horizontal map
of our diagram, is also a weak-equivalence.
\end{proof}
This lemma has the following immediate consequence.
\begin{prop}\label{prop:totalization-Segal-cooperads}
If $\ROp^{\bullet}$ is a cosimplicial (homotopy) Segal $E_\infty$-Hopf pre-cooperad that satisfies the Segal condition levelwise,
then $\DGN^*(\ROp^{\bullet})$ satisfies the Segal condition as well, and hence forms a (homotopy) Segal $E_\infty$-Hopf cooperad
in the category of dg modules.\qed
\end{prop}
Then we have the following statement.
\begin{prop}
Let $\AOp$ be a connected homotopy Segal $E_\infty$-Hopf cooperad (shuffle or symmetric).
\begin{enumerate}
\item
The objects $\DGN^*(\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}))$ and $\DGN^*(\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree))$,
defined by taking the totalization of the cosimplicial (homotopy) Segal $E_\infty$-Hopf cooperads
of Proposition~\ref{prop:cosimplicial-cooperad-B},
respectively form a homotopy Segal $E_\infty$-Hopf cooperad and a strict Segal $E_\infty$-Hopf cooperad (which are shuffle or symmetric when $\AOp$ is so).
The levelwise weak-equivalence of Proposition~\ref{prop:cosimplicial-cooperad-B}
induces a weak-equivalence of homotopy Segal $E_\infty$-Hopf cooperads
\begin{equation*}
\DGN^*(\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\stree)^{\sharp}))\xrightarrow{\sim}\DGN^*(\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp})),
\end{equation*}
when we pass to this totalization.
\item
Furthermore, we have a weak-equivalence of homotopy Segal $E_\infty$-Hopf cooperads (shuffle or symmetric)
\begin{equation*}
\AOp(\stree)\xrightarrow{\sim}\DGN^*(\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp})).
\end{equation*}
This weak-equivalence is natural in $\AOp$.
\end{enumerate}
\end{prop}
\begin{proof}
The first assertion of the proposition immediately follows from the statements of Proposition~\ref{prop:cosimplicial-cooperad-B}
and from the result of Proposition~\ref{prop:totalization-Segal-cooperads}.
Thus, we focus on the proof of the second assertion.
We define our natural transformation first.
We use that the coproduct map $\rho^*: \AOp(\stree)\rightarrow\prod_{\ttree\rightarrow\stree}\AOp(\ttree)\otimes\Tree^{\square}(\ttree,\stree)^{\sharp}$
which we associate to our object in Construction~\ref{constr:two-sided-cobar}
defines a coaugmentation over the cosimplicial object $\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp})$,
or equivalently, a morphism
\begin{equation*}
\eta: \AOp(\stree)\rightarrow\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp}),
\end{equation*}
where we regard $\AOp$ as a constant cosimplicial object (see~\cite[\S II.5.4]{FresseBook} for an account of these concepts).
We immediately see that this morphism is compatible with the coproduct operators, with the facets (and with the action of permutations whenever defined),
and hence, defines a morphism of cosimplicial homotopy Segal $E_\infty$-Hopf cooperads (shuffle or symmetric)
which yields a morphism of homotopy Segal $E_\infty$-Hopf cooperads in dg modules of the form of the proposition
when we pass to conormalized cochain complexes (we just use that we have $\DGN^*(\AOp) = \AOp$
in the case of the constant cosimplicial object $\AOp$).
The weak-equivalence claim follows from the observation that, in the case $\FOp_{\stree}(-) = \Tree^{\square}(-,\stree)^{\sharp}$,
the cosimplicial object $\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp})$
is endowed with an extra codegeneracy $s^{-1}$,
which is defined by extending the definition of Construction~\ref{constr:two-sided-cobar}
to the case $j=-1$:
\begin{equation*}
s^{-1} = \id\otimes\id^{\otimes n+1}\otimes\eta^*.
\end{equation*}
(We again refer to~\cite[\S II.5.4]{FresseBook} for a proof that the existence of this extra codegeneracy forces the contractibility of the conormalized cochain complex
in the cosimplicial direction, and hence, implies that our coaugmentation map is a weak-equivalence.)
This observation finishes the proof of the proposition.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theorem:strictification}]
We can now conclude the proof of Theorem~\ref{theorem:strictification}. The results of the previous proposition
return a zigzag of weak-equivalences of homotopy Segal $E_\infty$-Hopf cooperads (shuffle or symmetric)
\begin{equation*}
\AOp(\stree)\xrightarrow{\sim}\DGN^*(\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree^{\square}(-,\stree)^{\sharp}))
\xleftarrow{\sim}\DGN^*(\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\stree)^{\sharp})),
\end{equation*}
from which the result follows since the Segal $E_\infty$-Hopf cooperad
\begin{equation*}
\DGK^c(\AOp)(\stree) = \DGN^*(\DGK^{\bullet}(\AOp,\Tree^{\square},\Tree(-,\stree)^{\sharp}))
\end{equation*}
is strict by construction.
\end{proof}
\begin{appendix}
\renewcommand{\thesubsubsection}{\thesection.\arabic{subsubsection}}
\section{The Barratt--Eccles operad and $E_\infty$-algebras}\label{sec:Barratt-Eccles-operad}
The purpose of this appendix is to prove the results on the category of algebras over the Barratt--Eccles operad that we use throughout this article.
In preliminary paragraphs, we briefly review the definition of the chain Barratt--Eccles operad
and the definition of the associated category of $E_\infty$-algebras. We mostly follow the conventions of~\cite{BergerFresse} for the definition of this operad
and we refer to this article for more detailed explanations
on this subject. We also briefly explain our conventions and basic definitions on permutations.
We devote the next paragraph to this subject.
In~\S\ref{section:background}, we recall the definition of a cooperad without counit and we also forget about counits in the definition
of the notions of Segal cooperad that we consider all along this paper. But the Barratt--Eccles operad is more naturally defined as a unital operad.
Therefore we go back to the usual definition of an operad with unit in this appendix. Similarly, as the Barratt--Eccles operad naturally forms a symmetric operad,
we consider composition products in the standard form $\circ_i: \EOp(k)\otimes\EOp(l)\rightarrow\EOp(k+l-1)$
in this appendix, and not the general operations $\circ_{i_p}: \EOp(\{i_1<\dots<i_k\})\otimes\EOp(\{j_1<\dots<j_l\})\rightarrow\EOp(\{1<\dots<r\})$,
since we can deduce the latter from the former by the action of a shuffle permutation
on the Barratt--Eccles operad (see~\S\ref{subsection:shuffle-cooperads}).
\begin{recoll}[Conventions on permutations and the associative operad]\label{recoll:permutations}
We denote the symmetric group on $r$ letters by $\Sigma_r$.
We represent a permutation $s\in\Sigma_r$ by giving the sequence of its values
\begin{equation*}
s = (s(1),\dots,s(r)).
\end{equation*}
We use that the collection of the symmetric groups $\Sigma_r$, $r\in\NN$, forms an operad in sets.
The symmetric structure of this operad is given by the left translation action.
The operadic composition $u\circ_i v\in\Sigma_{k+l-1}$ of permutations $u\in\Sigma_k$ and $v\in\Sigma_l$
is obtained by inserting the sequence of values of the permutation $v = (v(1),\dots,v(l))$
at the position of the value $i\in\{1,\dots,k\}$
in the permutation $u = (u(1),\dots,u(k))$,
by performing the value shift $v(y)\mapsto v(y) + i-1$
on the terms of the permutation $v$
and the shift $u(x)\mapsto u(x) + l-1$ on the terms of the permutation $u$ such that $u(x)>i$.
Thus, we have
\begin{equation*}
(u(1),\dots,u(k))\circ_i(v(1),\dots,v(l)) = (u(1)',\dots,\underbrace{v(1)',\dots,v(l)'}_{u(t)},\dots,u(k)'),
\end{equation*}
where $t$ is the position of the value $i$ in the sequence $(u(1),\dots,u(k))$, while $v(y)'$ and $u(x)'$ denote the result of our shift operations
so that we have $v(y)' = v(y) + i-1$, for all terms $v(y)$, we have $u(x)' = u(x)$ when $u(x)<i$
and $u(x)' = u(x) + l-1$ when $u(x)>i$.
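For instance, for $u = (1,3,2)\in\Sigma_3$, $v = (2,1)\in\Sigma_2$ and $i = 2$, the value $2$ occurs at position $t = 3$ in the sequence $(1,3,2)$, and we obtain:
\begin{equation*}
(1,3,2)\circ_2(2,1) = (1,4,3,2)\in\Sigma_4,
\end{equation*}
where the subsequence $(3,2)$ is the shifted copy of the permutation $v = (2,1)$, and the value $3$ of the permutation $u$ is shifted to $4$.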
This operad in sets governs the category of associative monoids. In our constructions, we also use a counterpart of this operad in our base category of modules.
This associative operad $\AsOp$ is defined by taking the modules spanned by the sets of permutations $\AsOp(r) = \kk[\Sigma_r]$, for $r\in\NN$,
with the induced structure operations.
In what follows, we generally identify a permutation $s\in\Sigma_r$ with the associated basis element in $\AsOp(r)$.
We also use the notation $\mu\in\AsOp(2)$ for the element of the associative operad given by the identity permutation on $2$ letters $\mu = \id_2\in\Sigma_2$,
which governs the product operation when we pass to associative algebras.
We trivially have $\AsOp(0) = \kk$ and we can identify the element given by the trivial permutation $* = \id_0\in\Sigma_0$ with an arity zero operation
which represents a unit for this product structure.
\end{recoll}
\begin{recoll}[The Barratt--Eccles operad and $E_\infty$-algebra structures]\label{recoll:Barratt-Eccles-operad}
The chain Barratt--Eccles operad $\EOp$ is defined by the normalized chain complexes of the homogeneous bar construction
of the symmetric groups.
Thus, we have:
\begin{equation*}
\EOp(r) = \DGN_*(W(\Sigma_r)),
\end{equation*}
for each arity $r\in\NN$, where $W(\Sigma_r)$ denotes the simplicial set such that
\begin{equation*}
W(\Sigma_r)_n = \underbrace{\Sigma_r\times\dots\times\Sigma_r}_{n+1},
\end{equation*}
for each dimension $n$, together with the face and degeneracy operators such that
\begin{align*}
d_i(w_0,\dots,w_n) & = (w_0,\dots,\widehat{w_i},\dots,w_n), \\
s_j(w_0,\dots,w_n) & = (w_0,\dots,w_j,w_j,\dots,w_n),
\end{align*}
for any $(w_0,\dots,w_n)\in W(\Sigma_r)_n$.
For simplicity, we do not make any distinction between a simplex $(w_0,\dots,w_n)\in W(\Sigma_r)$
and the class of this simplex in the normalized chain complex $\EOp(r) = \DGN_*(W(\Sigma_r))$
in our notation. We just get $(w_0,\dots,w_j,w_j,\dots,w_n)\equiv 0$ for the degenerate simplices when we pass to the normalized chain complex.
The differential of simplices in $\EOp(r)$
is given by the usual formula:
\begin{equation*}
\delta(w_0,\dots,w_n) = \sum_{i=0}^n (-1)^i(w_0,\dots,\widehat{w_i},\dots,w_n).
\end{equation*}
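On simplices of low dimension, for instance, this formula gives:
\begin{equation*}
\delta(w_0,w_1) = (w_1) - (w_0),\qquad
\delta(w_0,w_1,w_2) = (w_1,w_2) - (w_0,w_2) + (w_0,w_1).
\end{equation*}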
The action of the symmetric group $\Sigma_r$ on $\EOp(r)$ is induced by the left translation action of permutations
on these simplices. We explicitly have:
\begin{equation*}
s\cdot(w_0,\dots,w_n) = (s w_0,\dots,s w_n),
\end{equation*}
for each permutation $s\in\Sigma_r$.
The operadic composition operations $\circ_i: \EOp(k)\otimes\EOp(l)\rightarrow\EOp(k+l-1)$ are given by the composite of a termwise application
of operadic composition operations on permutations with the Eilenberg--MacLane map
when we pass to normalized chains. For $(u_0,\dots,u_m)\in\EOp(k)$ and $(v_0,\dots,v_n)\in\EOp(l)$,
we explicitly have:
\begin{equation*}
(u_0,\dots,u_m)\circ_i(v_0,\dots,v_n) = \sum_{(i_*,j_*)}\pm(u_{i_0}\circ_i v_{j_0},\dots,u_{i_{m+n}}\circ_i v_{j_{m+n}}),
\end{equation*}
where the sum runs over the set of paths $\{(i_t,j_t),t=0,\dots,m+n\}$ which start at $(i_0,j_0) = (0,0)$ and end at $(i_{m+n},j_{m+n}) = (m,n)$
in an $m\times n$ cartesian diagram, the expression $\pm$ denotes a sign which we associate to any such path,
and along our paths, we take the operadic composites $u_{i_t}\circ_i v_{j_t}\in\Sigma_{k+l-1}$
of the permutations $u_{i_t}\in\Sigma_k$ and $v_{j_t}\in\Sigma_l$.
(The sign $\pm$ is determined by the shuffle of horizontal and vertical moves which we use when we form our path.)
For convenience, we may also represent such a composite by a picture
of the following form:
\begin{equation*}
(u_0,\dots,u_m)\circ_i(v_0,\dots,v_n) = \sum\pm\left(\vcenter{\xymatrix@R=2mm@C=3mm{ u_0\circ_i v_0\ar@{-}[d]\ar@{-}[r] &
u_1\circ_i v_0\ar@{-}[d]\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
u_m\circ_i v_0\ar@{-}[d] \\
u_0\circ_i v_1\ar@{-}[d]\ar@{-}[r] &
u_1\circ_i v_1\ar@{-}[d]\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
u_m\circ_i v_1\ar@{-}[d] \\
*{\vdots}\ar@{-}[d] & *{\vdots}\ar@{-}[d] & & *{\vdots}\ar@{-}[d] \\
u_0\circ_i v_n\ar@{-}[r] &
u_1\circ_i v_n\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
u_m\circ_i v_n }}\right),
\end{equation*}
where we take the sum of the simplices that we may form by running over all paths contained in the diagram materialized in our figure. (To be fully explicit, we take the paths
which go from the upper-left corner to the lower-right corner of the diagram
by a shuffle of horizontal moves $\xymatrix@R=2mm@C=3mm{u_x\circ_i v_y\ar@{-}[r] & u_{x+1}\circ_i v_y }$
and of vertical moves $\xymatrix@R=2mm@C=3mm{u_x\circ_i v_y\ar@{-}[r] & u_x\circ_i v_{y+1}}$.)
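In the lowest dimensional cases, for instance, this sum reduces to the following expressions, where the relative sign of the two paths in the second formula is determined by the shuffle sign convention mentioned above:
\begin{align*}
(u_0)\circ_i(v_0,v_1) & = (u_0\circ_i v_0,u_0\circ_i v_1), \\
(u_0,u_1)\circ_i(v_0,v_1) & = (u_0\circ_i v_0,u_1\circ_i v_0,u_1\circ_i v_1) - (u_0\circ_i v_0,u_0\circ_i v_1,u_1\circ_i v_1).
\end{align*}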
Recall that the operad of permutations in sets is identified with the set-theoretic associative operad (the operad which governs the category of associative monoids).
From the relation $W(\Sigma_r)_0 = \Sigma_r$ for any $r\in\NN$, we get an operad embedding $\AsOp\subset\EOp$
which identifies the module-theoretic version of the associative operad $\AsOp$
with the degree zero component of the Barratt--Eccles operad $\EOp$.
In what follows, we still use the notation $\mu\in\EOp(2)$ for the degree $0$ operation, represented by the identity permutation $\mu = \id_2\in\Sigma_2$,
which governs the product operation of associative algebra structures
in the Barratt--Eccles operad.
Note that we still have $\EOp(0) = \AsOp(0) = \kk$ (we take the convention to consider operads with a term in arity zero throughout this paragraph)
and the generating element of this arity zero term $*\in\EOp(0)$ also represents a unit operation
when we pass to the category of algebras over the Barratt--Eccles operad.
The Barratt--Eccles operad $\EOp$ is weakly-equivalent to the operad of commutative algebras $\ComOp$, and forms, as such, an instance of an $E_\infty$-operad.
Recall that we have $\ComOp(r) = \kk$, for any $r\in\NN$. The weak-equivalence $\EOp\xrightarrow{\sim}\ComOp$
is given by the standard augmentation $\DGN_*(W(\Sigma_r))\rightarrow\DGN_*(\pt) = \kk$
on the normalized chain complexes $\EOp(r) = \DGN_*(W(\Sigma_r))$, $r\in\NN$,
and sits in a factorization $\AsOp\hookrightarrow\EOp\xrightarrow{\sim}\ComOp$ of the usual morphism $\AsOp\rightarrow\ComOp$
between the associative operad $\AsOp$ and the commutative operad $\ComOp$.
We take the category of algebras over the Barratt--Eccles operad to get our model of the category of $E_\infty$-algebras.
Recall that we have $\EOp(0) = \kk$ so that our $\EOp$-algebras are equipped with a unit, which is represented by the generating element of this arity zero term of our operad $*\in\EOp(0)$.
By the main result of the article~\cite{BergerFresse}, the normalized cochain complex of a simplicial set $\DGN^*(X)$
is endowed with the structure of an algebra over the Barratt--Eccles operad.
This $\EOp$-algebra structure is functorial in $X\in\simp\Set$,
and extends the classical associative algebra structure of normalized cochains.
\end{recoll}
\begin{constr}[The diagonal and the action of the Barratt--Eccles operad on tensor products]\label{constr:Barratt-Eccles-diagonal}
In our constructions, we use that the Alexander--Whitney diagonal on the normalized chain complexes $\EOp(r) = \DGN_*(W(\Sigma_r))$, $r\in\NN$,
induces a morphism of dg operads $\Delta: \EOp\rightarrow\EOp\otimes\EOp$, where $\EOp\otimes\EOp$
is given by the arity-wise tensor product $(\EOp\otimes\EOp)(r) = \EOp(r)\otimes\EOp(r)$,
for any $r\in\NN$.
This map is given by the usual formula:
\begin{equation*}
\Delta(w_0,\dots,w_n) = \sum_{k=0}^n(w_0,\dots,w_k)\otimes(w_k,\dots,w_n),
\end{equation*}
for any $(w_0,\dots,w_n)\in\EOp(r)$.
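For instance, on a $1$-simplex $(w_0,w_1)\in\EOp(r)$, this formula gives the two terms:
\begin{equation*}
\Delta(w_0,w_1) = (w_0)\otimes(w_0,w_1) + (w_0,w_1)\otimes(w_1),
\end{equation*}
which recover the usual front-face/back-face decomposition of the Alexander--Whitney diagonal in this low-dimensional case.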
The existence of this diagonal implies that a tensor product of $\EOp$-algebras $A\otimes B$
inherits an $\EOp$-algebra structure, since we can make an operation $\pi\in\EOp(r)$
act on $A\otimes B$
through its diagonal $\Delta(\pi)\in\EOp(r)\otimes\EOp(r)$. We explicitly take:
\begin{equation*}
\pi(a_1\otimes b_1,\dots,a_r\otimes b_r) = \sum_{(\pi)}\pi'(a_1,\dots,a_r)\otimes\pi''(b_1,\dots,b_r),
\end{equation*}
for all $a_1\otimes b_1,\dots,a_r\otimes b_r\in A\otimes B$, where we use the expression $\Delta(\pi) = \sum_{(\pi)}\pi'\otimes\pi''$
for the expansion of the coproduct of the operation $\pi\in\EOp(r)$
in the Barratt--Eccles operad.
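For instance, for the $0$-simplex $\mu = \id_2\in\EOp(2)$, we have $\Delta(\mu) = \mu\otimes\mu$, so that the above formula reduces to
\begin{equation*}
\mu(a_1\otimes b_1,a_2\otimes b_2) = \pm\,\mu(a_1,a_2)\otimes\mu(b_1,b_2),
\end{equation*}
where the sign is the Koszul sign produced by the transposition of the factors $b_1$ and $a_2$. Thus, the $\EOp$-algebra structure on $A\otimes B$ lifts the usual tensor product of associative dg algebras.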
In the paper, we also use that we have a morphism of $\EOp$-algebras
\begin{equation*}
\AW: A\vee B\rightarrow A\otimes B,
\end{equation*}
for any pair of $\EOp$-algebras $A$ and $B$, where we adopt the notation $\vee$ for the coproduct in the category of $\EOp$-algebras.
This morphism is induced by the inclusions $A\otimes *\rightarrow A\otimes B\leftarrow *\otimes B$
on each factor of the coproduct $A\vee B$, where we still use the notation $*$
for the unit of the $\EOp$-algebras $A$ and $B$.
(We will see in the proof of the next proposition that we can identify this map with an instance of an Alexander--Whitney diagonal. We therefore adopt the notation $\AW$ for this morphism.)
We have a morphism of dg modules which goes in the converse direction
\begin{equation*}
\EM: A\otimes B\rightarrow A\vee B,
\end{equation*}
and which is given by the formula $\EM(a\otimes b) = \mu(a,b)$, for each tensor $a\otimes b\in A\otimes B$, where $\mu(a,b)$ denotes the product of the elements $a\in A$ and $b\in B$
in the $\EOp$-algebra $A\vee B$. (We are going to see that we can identify this map with an instance of an Eilenberg--MacLane map.)
Note that neither $\AW$ nor $\EM$ is symmetric; in fact, the tensor product $A\otimes B$
does not define a symmetric bifunctor on the category of $\EOp$-algebras since the Alexander--Whitney diagonal $\Delta: \EOp(r)\rightarrow\EOp(r)\otimes\EOp(r)$
fails to be cocommutative.
\end{constr}
We have the following useful property.
\begin{prop}\label{claim:Barratt-Eccles-algebra-coproducts}
The above morphisms $\AW: A\vee B\rightarrow A\otimes B$ and $\EM: A\otimes B\rightarrow A\vee B$ satisfy $\AW\EM = \id$
and we have a natural chain homotopy $H: A\vee B\rightarrow A\vee B$
such that $\delta H + H\delta = \EM\AW - \id$.
Hence, our morphisms induce homotopy inverse weak-equivalences of dg modules
\begin{equation*}
\AW: A\vee B\xrightarrow{\sim} A\otimes B\quad\text{and}\quad\EM: A\otimes B\xrightarrow{\sim} A\vee B,
\end{equation*}
for all $\EOp$-algebras $A$ and $B$.
\end{prop}
\begin{proof}
We first consider the case of free $\EOp$-algebras $A = \EOp(X)$ and $B = \EOp(Y)$.
We represent the elements of a free algebra such as $A = \EOp(X)$
by formal expressions of the form $a = u(x_1,\dots,x_r)$,
where $u\in\EOp(r)$ and $x_1,\dots,x_r\in X$.
We then have the expressions:
\begin{align*}
A\otimes B = \EOp(X)\otimes\EOp(Y) & = \bigoplus_{p,q}\EOp(p)\otimes_{\Sigma_p} X^{\otimes p}\otimes\EOp(q)\otimes_{\Sigma_q} Y^{\otimes q}, \\
A\vee B = \EOp(X\oplus Y) & = \bigoplus_{p,q}\EOp(p+q)\otimes_{\Sigma_p\times\Sigma_q} X^{\otimes p}\otimes Y^{\otimes q}.
\end{align*}
We use that the operadic composite $\id_2(u,v)$ of permutations $u\in\Sigma_p$ and $v\in\Sigma_q$
is identified with the direct sum permutation
$u\oplus v = (u(1),\dots,u(p),p+v(1),\dots,p+v(q))$.
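For instance, for the permutations $u = (2,1)\in\Sigma_2$ and $v = (1,3,2)\in\Sigma_3$, this direct sum operation gives:
\begin{equation*}
u\oplus v = (2,1,3,5,4)\in\Sigma_5.
\end{equation*}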
Recall that $\mu = \id_2$ represents the associative product when we pass to the Barratt--Eccles operad $\EOp$.
For a tensor $a\otimes b = u(x_1,\dots,x_p)\otimes v(y_1,\dots,y_q)$
such that $u = (u_0,\dots,u_m)\in\EOp(p)$ and $v = (v_0,\dots,v_n)\in\EOp(q)$,
we have $\mu(a,b) = \mu(u,v)(x_1,\dots,x_p,y_1,\dots,y_q)$,
where we take the composite $\mu(u,v)$
in the Barratt--Eccles operad.
By definition of this composite in terms of shuffles of termwise composites $\mu(u_i,v_j) = u_i\oplus v_j$ (we apply the Eilenberg--MacLane map),
we obtain an expression of the following form:
\begin{multline*}
\EM(u(x_1,\dots,x_p)\otimes v(y_1,\dots,y_q))\\
= \sum\pm\underbrace{\left(\vcenter{\xymatrix@R=2mm@C=3mm{ u_0\oplus v_0\ar@{-}[d]\ar@{-}[r] &
u_1\oplus v_0\ar@{-}[d]\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
u_m\oplus v_0\ar@{-}[d] \\
u_0\oplus v_1\ar@{-}[d]\ar@{-}[r] &
u_1\oplus v_1\ar@{-}[d]\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
u_m\oplus v_1\ar@{-}[d] \\
*{\vdots}\ar@{-}[d] & *{\vdots}\ar@{-}[d] & & *{\vdots}\ar@{-}[d] \\
u_0\oplus v_n\ar@{-}[r] &
u_1\oplus v_n\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
u_m\oplus v_n }}\right)}_{\in\EOp(p+q)}
(x_1,\dots,x_p,y_1,\dots,y_q),
\end{multline*}
where the sum runs over all paths that we may form in the diagram of our figure (as in our representation of the composition of the Barratt--Eccles operad in~\S\ref{recoll:Barratt-Eccles-operad}).
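For instance, in the case $m = n = 1$, where $u = (u_0,u_1)$ and $v = (v_0,v_1)$, the sum runs over the two paths of the corresponding $1\times 1$ diagram, and we obtain:
\begin{equation*}
\mu(u,v) = \pm(u_0\oplus v_0,u_1\oplus v_0,u_1\oplus v_1)\pm(u_0\oplus v_0,u_0\oplus v_1,u_1\oplus v_1),
\end{equation*}
where the two paths occur with the usual shuffle signs (which are opposite to each other in this case).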
We use this representation to identify our morphism $\EM: A\otimes B\rightarrow A\vee B$ with an instance of an Eilenberg--MacLane map.
The morphism $\AW: A\vee B\rightarrow A\otimes B$, on the other hand, carries any free algebra element $c = w(x_1,\dots,x_p,y_1,\dots,y_q)$
such that $w = (w_0,\dots,w_n)\in\EOp(p+q)$
to the image of the elements $x_1\otimes *,\dots,x_p\otimes *,*\otimes y_1,\dots,*\otimes y_q\in\EOp(X)\otimes\EOp(Y)$
under the action of the operation $w$ on $\EOp(X)\otimes\EOp(Y)$,
and hence to the tensor such that $\AW(w(x_1,\dots,x_p,y_1,\dots,y_q)) = \sum_{(w)} w'(x_1,\dots,x_p,*,\dots,*)\otimes w''(*,\dots,*,y_1,\dots,y_q)$,
where we use the notation $\Delta(w) = \sum_{(w)}w'\otimes w''$
for the expansion of the coproduct of the simplex $w$
in the Barratt--Eccles operad. Thus, if we assume $w = (w_0,\dots,w_n)$, then we have $\sum_{(w)}w'\otimes w'' = \sum_{k=0}^n (w_0,\dots,w_k)\otimes(w_k,\dots,w_n)$.
From this analysis, we deduce that our morphism $\AW$ is given by the following Alexander--Whitney type formula:
\begin{multline*}
\AW(w(x_1,\dots,x_p,y_1,\dots,y_q))\\
= \sum_{k=0}^n (w_0|_I,\dots,w_k|_I)(x_1,\dots,x_p)\otimes(w_k|_J,\dots,w_n|_J)(y_1,\dots,y_q),
\end{multline*}
where we set $I = \{1,\dots,p\}$ and $J = \{p+1,\dots,p+q\}$ for short and $|_{I}$, $|_{J}$
denote the obvious restriction operations on permutations
which we apply to our simplices termwise. (In this construction, we also use the canonical bijection $\{p+1,\dots,p+q\}\simeq\{1,\dots,q\}$
to identify the permutations of the set $\{p+1,\dots,p+q\}$ with permutations of the set $\{1,\dots,q\}$.)
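For instance, if $w = (w_0)$ is a $0$-simplex, then this formula reduces to the single term:
\begin{equation*}
\AW(w_0(x_1,\dots,x_p,y_1,\dots,y_q)) = w_0|_I(x_1,\dots,x_p)\otimes w_0|_J(y_1,\dots,y_q).
\end{equation*}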
For short, we may adopt the notation $x_*$ and $y_*$ for the words of variables $x_* = x_1,\dots,x_p$ and $y_* = y_1,\dots,y_q$
that occur in our expression of free algebra elements.
The chain homotopy $H$ can be given by a formula of the following form:
\begin{multline*}
H(w(x_*,y_*)) = \sum\pm(w_0,\dots,w_k)\\
\star\left(\vcenter{\xymatrix@R=2mm@C=3mm{ \scriptstyle{w_k|_I\oplus w_l|_J}\ar@{-}[d]\ar@{-}[r] &
\scriptstyle{w_{k+1}|_I\oplus w_l|_J}\ar@{-}[d]\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
\scriptstyle{w_l|_I\oplus w_l|_J}\ar@{-}[d] \\
\scriptstyle{w_k|_I\oplus w_{l+1}|_J}\ar@{-}[d]\ar@{-}[r] &
\scriptstyle{w_{k+1}|_I\oplus w_{l+1}|_J}\ar@{-}[d]\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
\scriptstyle{w_l|_I\oplus w_{l+1}|_J}\ar@{-}[d] \\
*{\vdots}\ar@{-}[d] & *{\vdots}\ar@{-}[d] & & *{\vdots}\ar@{-}[d] \\
\scriptstyle{w_k|_I\oplus w_n|_J}\ar@{-}[r] &
\scriptstyle{w_{k+1}|_I\oplus w_n|_J}\ar@{-}[r] &
*{\cdots}\ar@{-}[r] &
\scriptstyle{w_l|_I\oplus w_n|_J} }}\right)(x_*,y_*),
\end{multline*}
where $\star$ denotes a ``join'' operation (given by the obvious concatenation operation in $W(\Sigma_*)$), and the sum runs over the indices $0\leq k\leq l\leq n$,
together with the set of paths $\{(i_t,j_t),t=k,\dots,n\}$ which start at $(i_k,j_k) = (k,l)$ and end at $(i_n,j_n) = (l,n)$
in the $(l-k)\times(n-l)$-diagram represented in our figure.
The relation $\AW\EM = \id$ is an instance of the general inversion relation between the Alexander--Whitney diagonal and the Eilenberg--MacLane map.
In our context, we can also deduce this relation from the observation that the element $a\otimes b\in A\otimes B$
in a tensor product of $\EOp$-algebras $A$ and $B$
represents the product of the tensors $a\otimes *,*\otimes b\in A\otimes B$,
so that we have the identity $a\otimes b = \mu(a\otimes *,*\otimes b)$
in $A\otimes B$.
The proof of the chain homotopy relation $\delta H + H\delta = \EM\AW - \id$
is straightforward: the composite $\EM\AW$ corresponds to the $0$-face of the terms with $k=0$ in the expression of $H$
while the identity map corresponds to the $n$-face of the terms with $k=l=n$, and the other faces cancel out
when we form the anti-commutator $\delta(H) = \delta H + H\delta$.
The morphisms $\AW$ and $\EM$, which are defined for all $\EOp$-algebras $A$ and $B$,
are obviously functorial. We check that our chain homotopy $H$ is also functorial
with respect to the action of the morphisms
of free $\EOp$-algebras $\phi: \EOp(X)\rightarrow\EOp(X)$ and $\psi: \EOp(Y)\rightarrow\EOp(Y)$
on the coproduct $\EOp(X)\vee\EOp(Y) = \EOp(X\oplus Y)$.
Thus, we establish that we have the following relation:
\begin{equation*}
H((\phi\vee\psi)(w(x_*,y_*))) = (\phi\vee\psi)(H(w(x_*,y_*))),
\end{equation*}
for all $c = w(x_*,y_*)\in\EOp(X\oplus Y)$.
We use that $\phi = \phi_f: \EOp(X)\rightarrow\EOp(X)$ and $\psi = \psi_g: \EOp(Y)\rightarrow\EOp(Y)$
are induced by morphisms of dg modules $f: X\rightarrow\EOp(X)$
and $g: Y\rightarrow\EOp(Y)$.
We use the short notation $f(x_i) = \sum s^i(\underline{x}_i')$ and $g(y_j) = \sum t^j(\underline{y}_j')$
for the expansion of these free algebra elements $f(x_i)\in\EOp(X)$ and $g(y_j)\in\EOp(Y)$,
which we associate to the factors of a tensor $c = w(x_1,\dots,x_p,y_1,\dots,y_q)$.
We also write $s^i = (s^i_0,\dots,s^i_{d_i})$ and $t^j = (t^j_0,\dots,t^j_{e_j})$, where $d_i$ and $e_j$ are dimension variables.
We have the formula:
\begin{equation}\tag{$*$}\label{eqn:coproduct_functoriality}
(\phi\vee\psi)(w(x_*,y_*)) = w(f(x_*),g(y_*)) = \sum w(s^*,t^*)(\underline{x}_*',\underline{y}_*'),
\end{equation}
where we form the composite $w(s^*,t^*) = w(s^1,\dots,s^p,t^1,\dots,t^q)$ inside the Barratt--Eccles operad.
(In this formula, we also use the notation $s^*$ and $t^*$ for the words of simplices $s^* = s^1,\dots,s^p$ and $t^* = t^1,\dots,t^q$,
as well as the notation $\underline{x}_*'$ and $\underline{y}_*'$
for the composite words $\underline{x}_*' = \underline{x}_1',\dots,\underline{x}_p'$
and $\underline{y}_*' = \underline{y}_1',\dots,\underline{y}_q'$.)
We then use a multidimensional generalization of the picture of~\S\ref{recoll:Barratt-Eccles-operad}
for the definition of the operadic composition
in the Barratt--Eccles operad. We are going to use that the shuffles of horizontal and vertical moves, which we carry out in this definition of operadic composites,
satisfy natural associativity and commutativity relations when we perform a multidimensional
application of this operation.
We analyze the expression of the composite $(\phi\vee\psi)(H(w(x_*,y_*)))$ first.
For this purpose, we apply the composition operation $w'\mapsto w'(s^*,t^*)$
to the simplices that occur in the expression of the chain homotopy $H(w(x_*,y_*))$.
We decompose the result of this operation as a join of simplices, using the join decomposition of the simplices that occur in the definition of our chain homotopy.
We identify the first factor of our join with a chain of composite permutations $w_x(s^*_{\alpha_*},t^*_{\beta_*})$,
where $w_x$ runs over the vertices of the simplex $(w_0,\dots,w_k)$,
which represents the first factor of our join in the expression of $H(w(x_*,y_*))$.
We take all shuffles of $w_x$-directional moves in the chain $(w_0,\dots,w_k)$
with $s^i_{\alpha_i}$-directional and $t^j_{\beta_j}$-directional moves
in subchains of the simplices $s^i$ and $t^j$
of the form $s^i{}' = (s^i_0,\dots,s^i_{d_i'})$ and $t^j{}' = (t^j_0,\dots,t^j_{e_j'})$,
where $d_i'\leq d_i$ and $e_j'\leq e_j$.
We identify the second factor of our join with a chain of composite permutations
of the form $w_x|_I(s^*_{\alpha_*})\oplus w_y|_J(t^*_{\beta_*})$,
where $w_x|_I\oplus w_y|_J$
runs over the vertices of the second join factor in the expression of $H(w(x_*,y_*))$, starting at $w_k|_I\oplus w_l|_J$
and ending at $w_l|_I\oplus w_n|_J$.
When we pass from the previous join factor to this second join factor in our computation of $(\phi\vee\psi)(H(w(x_*,y_*)))$,
we carry out a move of the form $\xymatrix@R=2mm@C=3mm{w_k\ar@{-}[r] & w_k|_I\oplus w_l|_J }$,
and hence, our move has to be constant in the $s^i_{\alpha_i}$ directions
and in the $t^j_{\beta_j}$ directions.
This observation implies that we start this simplex at the end point of the chains $s^i{}' = (s^i_0,\dots,s^i_{d_i'})$ in the $s^i_{\alpha_i}$ directions
and at the end point of the chains $t^j{}' = (t^j_0,\dots,t^j_{e_j'})$
in the $t^j_{\beta_j}$ directions.
Thus, to form the chains of composite permutations of our second join factor, we shuffle $w_x|_I$-directional and $w_y|_J$-directional moves
in the $2$-dimensional diagram $(w_k|_I,\dots,w_l|_I)\times(w_l|_J,\dots,w_n|_J)$
with $s^i_{\alpha_i}$-directional moves in the chains $s^i{}'' = (s^i_{d_i'},\dots,s^i_{d_i})$
and $t^j_{\beta_j}$-directional moves in the chains $t^j{}'' = (t^j_{e_j'},\dots,t^j_{e_j})$.
We now analyze the expression of the composite $H((\phi\vee\psi)(w(x_*,y_*)))$,
which we obtain by applying our chain homotopy $H$
to the element $(\phi\vee\psi)(w(x_*,y_*))
= \sum w(s^*,t^*)(\underline{x}_*',\underline{y}_*')$.
The simplices of the composite $w(s^*,t^*)$
consist of chains of composite permutations $w_x(s^*_{\alpha_*},t^*_{\beta_*})$,
which we obtain after shuffling $w_x$-directional moves in the chain $w = (w_0,\dots,w_n)$
with $s^i_{\alpha_i}$-directional moves in the chains $s^i = (s^i_0,\dots,s^i_{d_i})$
and $t^j_{\beta_j}$-directional moves in the chains $t^j = (t^j_0,\dots,t^j_{e_j})$.
To form our chain homotopy, we cut this chain at two positions $w_k(s^*_{d_*'},t^*_{e_*'}) = w_k(s^1_{d_1'},\dots,s^p_{d_p'},t^1_{e_1'},\dots,t^q_{e_q'})$
and $w_l(s^*_{d_*''},t^*_{e_*''}) = w_l(s^1_{d_1''},\dots,s^p_{d_p''},t^1_{e_1''},\dots,t^q_{e_q''})$ with $0\leq k\leq l\leq n$
and $0\leq d_i'\leq d_i''\leq d_i$, $0\leq e_j'\leq e_j''\leq e_j$.
We take the subchain of permutations running from $w_0(s^*_0,t^*_0)$
up to $w_k(s^*_{d_*'},t^*_{e_*'})$
to get the first join factor of our chain homotopy. We exactly retrieve the same chains as in our decomposition of~$(\phi\vee\psi)(H(w(x_*,y_*)))$.
We then form the restrictions $w_x(s^*_{\alpha_*},t^*_{\beta_*})|_{I'} = w_x|_I(s^*_{\alpha_*})$
and $w_y(s^*_{\alpha_*},t^*_{\beta_*})|_{J'} = w_y|_J(t^*_{\beta_*})$
where $I'$ denotes the terms of the composite permutations $w_x(s^*_{\alpha_*},t^*_{\beta_*})$
that correspond to the positions of the variables $\underline{x}_*'$,
whereas $J'$ denotes the terms of the composite permutations $w_y(s^*_{\alpha_*},t^*_{\beta_*})$
that correspond to the positions of the variables $\underline{y}_*'$.
We take a chain of direct sums $w_x(s^*_{\alpha_*},t^*_{\beta_*})|_{I'}\oplus w_y(s^*_{\alpha_*},t^*_{\beta_*})|_{J'}$
such that $w_x(s^*_{\alpha_*},t^*_{\beta_*})$ runs from $w_k(s^*_{d_*'},t^*_{e_*'})$ up to $w_l(s^*_{d_*''},t^*_{e_*''})$,
while $w_y(s^*_{\alpha_*},t^*_{\beta_*})$ runs from $w_l(s^*_{d_*''},t^*_{e_*''})$ up to $w_n(s^*_{d_*},t^*_{e_*})$.
(We then take a shuffle of $w_x(s^*_{\alpha_*},t^*_{\beta_*})$-directional moves and of $w_y(s^*_{\alpha_*},t^*_{\beta_*})$-directional moves.)
We easily see that this operation produces a degeneracy in the case where we have $e_j'<e_j''$ for some $j$,
because in this case the subchain of permutations $w_x(s^*_{\alpha_*},t^*_{\beta_*})$
contains a $t^j_{\beta_j}$-directional move, which produces a degeneracy when we pass
to the restriction $w_x(s^*_{\alpha_*},t^*_{\beta_*})|_{I'} = w_x|_I(s^*_{\alpha_*})$.
We similarly see that our operation produces a degeneracy in the case where we have $d_i''<d_i$ for some $i$.
We therefore have to assume $d_i'' = d_i$ for all $i$ and $e_j' = e_j$ for all $j$ in order to avoid possible degeneracies,
and in these cases, we exactly retrieve the same chains as in our expression of the second join factor in our decomposition of~$(\phi\vee\psi)(H(w(x_*,y_*)))$.
We conclude from this analysis that the expansions of $H((\phi\vee\psi)(w(x_*,y_*)))$ and $(\phi\vee\psi)(H(w(x_*,y_*)))$ consist of the same joins of simplices,
and therefore these composites agree, as expected.
To finish the proof of our proposition, we use that every object of the category of $\EOp$-algebras has a presentation in terms of a natural reflexive coequalizer of free $\EOp$-algebras.
If we have a pair of objects $A$ and $B$, then we can form a commutative diagram:
\begin{equation*}
\xymatrix{ \EOp(X_1)\otimes\EOp(Y_1)\ar@<-2pt>[r]_{\EM}\ar@<+2pt>[d]\ar@<-2pt>[d] &
\EOp(X_1)\vee\EOp(Y_1)\ar@<-2pt>[l]_{\AW}\ar@<+2pt>[d]\ar@<-2pt>[d]\ar@(ur,dr)[]!UR;[]!DR^{H} \\
\EOp(X_0)\otimes\EOp(Y_0)\ar@<-2pt>[r]_{\EM}\ar@/_1em/[u]\ar@{.>}[d] &
\EOp(X_0)\vee\EOp(Y_0)\ar@<-2pt>[l]_{\AW}\ar@/_1em/[u]\ar@{.>}[d]\ar@(ur,dr)[]!UR;[]!DR^{H} \\
A\otimes B\ar@<-2pt>[r]_{\EM} & A\vee B\ar@<-2pt>[l]_{\AW}\ar@{.>}@(ur,dr)[]!UR;[]!DR^{H} }
\end{equation*}
where we take the presentations of $A$ and $B$ by reflexive coequalizers in the vertical direction and our deformation retract diagram
of Alexander--Whitney and Eilenberg--MacLane maps
in the horizontal direction. We use this diagram to check that our chain homotopy $H$ passes to the quotient
and induces a chain homotopy such that $\delta H + H\delta = \EM\AW - \id$
on $A\vee B$,
as indicated in our figure.
\end{proof}
\begin{constr}[The action of the Barratt--Eccles operad on the interval, on cubical cochains and the definition of connections]\label{constr:cubical-cochain-connection}
We already recalled that the normalized cochain complex of a simplicial set $\DGN^*(X)$ inherits the structure of an algebra over the Barratt--Eccles operad.
We refer to~\cite{BergerFresse} for the precise definition.
We consider the cochain algebra of the $1$-simplex $X = \Delta^1$ in our definition of homotopy Segal $E_\infty$-cooperads
together with the cubical cochain algebras
\begin{equation*}
I^m = \underbrace{\DGN^*(\Delta^1)\otimes\dots\otimes\DGN^*(\Delta^1)}_m
\end{equation*}
which we provide with an $\EOp$-algebra structure, using the action of the Barratt--Eccles operad on each cochain complex factor $\DGN^*(\Delta^1)$,
and the diagonal operation of Construction~\ref{constr:Barratt-Eccles-diagonal}.
We study structures attached to the cochain algebra $I = \DGN^*(\Delta^1)$ in this paragraph.
We also consider the normalized chain complex $\DGN_*(\Delta^1)$, dual to $\DGN^*(\Delta^1)$.
We have
\begin{equation*}
\DGN_*(\Delta^1) = \kk\underline{0}\oplus\kk\underline{1}\oplus\kk\underline{01},
\end{equation*}
where $\underline{01}$ denotes the class of the fundamental simplex of $\Delta^1$ in the normalized chain complex,
whereas $\underline{0}$ and $\underline{1}$ denote the class of the vertices of $\underline{01}$.
We can identify $\kk\underline{0}\subset\DGN_*(\Delta^1)$ with the image of the map $\DGN_*(d^1): \DGN_*(\Delta^0)\rightarrow\DGN_*(\Delta^1)$
induced by the $1$-coface $d^1: \Delta^0\rightarrow\Delta^1$,
while $\kk\underline{1}\subset\DGN_*(\Delta^1)$ is identified with the image of the map $\DGN_*(d^0): \DGN_*(\Delta^0)\rightarrow\DGN_*(\Delta^1)$
induced by the $0$-coface $d^0: \Delta^0\rightarrow\Delta^1$. We have $\delta(\underline{01}) = \underline{1} - \underline{0}$.
For the cochain algebra, we have
\begin{equation*}
\DGN^*(\Delta^1) = \kk\underline{0}^{\sharp}\oplus\kk\underline{1}^{\sharp}\oplus\kk\underline{01}^{\sharp},
\end{equation*}
where we take the basis $(\underline{0}^{\sharp},\underline{1}^{\sharp},\underline{01}^{\sharp})$
dual to $(\underline{0},\underline{1},\underline{01})$.
We now explain the definition of a connection $\nabla^*: \DGN^*(\Delta^1)\rightarrow\DGN^*(\Delta^1)\otimes\DGN^*(\Delta^1)$,
which we use in the construction of degeneracy operators in our definition of homotopy Segal $E_\infty$-cooperads.
We consider the simplicial map $\min: \Delta^1\times\Delta^1\rightarrow\Delta^1$ defined by the mapping $(s,t)\mapsto\min(s,t)$
on topological realizations, or equivalently, by the following representation:
\begin{equation*}
\vcenter{\hbox{\includegraphics{connection-picture.pdf}}},
\end{equation*}
where we take the projection onto the diagonal simplex along the lines depicted in the figure.
We take the composite $\nabla_* = \DGN_*(\min)\circ\EM$ of the induced map on normalized chain complexes $\DGN_*(\min): \DGN_*(\Delta^1\times\Delta^1)\rightarrow\DGN_*(\Delta^1)$
with the Eilenberg--MacLane map $\EM: \DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1\times\Delta^1)$.
We get the following formulas:
\begin{align*}
& \nabla_*(\underline{0}\otimes\underline{0}) = \nabla_*(\underline{1}\otimes\underline{0}) = \nabla_*(\underline{0}\otimes\underline{1}) = \underline{0}, \\
& \nabla_*(\underline{1}\otimes\underline{1}) = \underline{1}, \\
& \nabla_*(\underline{1}\otimes\underline{01}) = \nabla_*(\underline{01}\otimes\underline{1}) = \underline{01}, \\
& \nabla_*(\underline{0}\otimes\underline{01}) = \nabla_*(\underline{01}\otimes\underline{0}) = 0, \\
& \nabla_*(\underline{01}\otimes\underline{01}) = 0.
\end{align*}
We define our connection on normalized cochains $\nabla^*$ as the dual map of this morphism $\nabla_*$.
We accordingly take:
\begin{equation*}
\nabla^* = \EM\circ\DGN^*(\min): \DGN^*(\Delta^1)\rightarrow\DGN^*(\Delta^1)\otimes\DGN^*(\Delta^1),
\end{equation*}
and we can determine this morphism by the following formulas on our basis elements:
\begin{align*}
& \nabla^*(\underline{0}^{\sharp}) = \underline{0}^{\sharp}\otimes\underline{0}^{\sharp} + \underline{1}^{\sharp}\otimes\underline{0}^{\sharp}
+ \underline{0}^{\sharp}\otimes\underline{1}^{\sharp}, \\
& \nabla^*(\underline{1}^{\sharp}) = \underline{1}^{\sharp}\otimes\underline{1}^{\sharp}, \\
& \nabla^*(\underline{01}^{\sharp}) = \underline{01}^{\sharp}\otimes\underline{1}^{\sharp} + \underline{1}^{\sharp}\otimes\underline{01}^{\sharp}.
\end{align*}
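As a sanity check, one can verify on the basis elements that these formulas define a morphism of dg modules. With the convention $\delta(\underline{1}^{\sharp}) = \underline{01}^{\sharp} = -\delta(\underline{0}^{\sharp})$, dual to the relation $\delta(\underline{01}) = \underline{1} - \underline{0}$, we have for instance:
\begin{equation*}
\nabla^*(\delta(\underline{1}^{\sharp})) = \underline{01}^{\sharp}\otimes\underline{1}^{\sharp} + \underline{1}^{\sharp}\otimes\underline{01}^{\sharp}
= \delta(\underline{1}^{\sharp}\otimes\underline{1}^{\sharp}) = \delta(\nabla^*(\underline{1}^{\sharp})).
\end{equation*}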
We crucially need the observation of the next proposition in our constructions.
\end{constr}
\begin{prop}\label{prop:cubical-cochain-connection}
The map $\nabla^* = \EM\circ\DGN^*(\min): \DGN^*(\Delta^1)\rightarrow\DGN^*(\Delta^1)\otimes\DGN^*(\Delta^1)$ defined in the above paragraph is a morphism of $\EOp$-algebras,
where we use that the Barratt--Eccles operad $\EOp$ acts on the tensor product $\DGN^*(\Delta^1)\otimes\DGN^*(\Delta^1)$
through the operadic diagonal $\Delta: \EOp\rightarrow\EOp\otimes\EOp$
and its action on each factor $\DGN^*(\Delta^1)$.
\end{prop}
\begin{proof}
We go back to the definition of the $\EOp$-algebra structure of normalized cochain complexes of simplicial sets
in terms of a dual $\EOp$-coalgebra structure on normalized chain complexes.
We prove that the morphism $\nabla_*: \DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)$,
dual to the morphism of our claim, is a morphism of $\EOp$-coalgebras.
We may note that the Eilenberg--MacLane map $\EM: \DGN_*(X)\otimes\DGN_*(Y)\rightarrow\DGN_*(X\times Y)$
does not preserve $\EOp$-coalgebra structures
in general. Nevertheless, such a statement holds when one factor is a one-point set, $X = *$ or $Y = *$,
because in this case, we have $\DGN_*(X)\simeq\DGN_*(X)\otimes\kk\simeq\DGN_*(X\times *)$
or $\DGN_*(Y)\simeq \kk\otimes\DGN_*(Y)\simeq\DGN_*(*\times Y)$, and the Eilenberg--MacLane map
reduces to the identity morphism on the functor of normalized chains.
We readily deduce from this observation that our morphism $\nabla_*: \DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)$
preserves $\EOp$-coalgebra structure on the subcomplex generated by the tensors $\underline{\sigma}\otimes\underline{\tau}\in\DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)$
such that $\underline{\sigma}\in\{\underline{0},\underline{1}\}$
or $\underline{\tau}\in\{\underline{0},\underline{1}\}$
since such tensors lie in the image of the coface maps $d^i\otimes\id: \DGN_*(\Delta^0)\otimes\DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)$
and $\id\otimes d^i: \DGN_*(\Delta^1)\otimes\DGN_*(\Delta^0)\rightarrow\DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)$,
with $i = 0,1$.
We use the notation $\pi_*: \DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)^{\otimes r}$ for the operation
that we associate to an element of the Barratt--Eccles operad $\pi\in\EOp(r)$
in the definition of the $\EOp$-coalgebra
structure on $\DGN_*(\Delta^1)$.
In general, for a tensor $\underline{\sigma}\otimes\underline{\tau}\in\DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)$, we have the formula:
\begin{equation*}
\pi_*(\underline{\sigma}\otimes\underline{\tau}) = \sum_{(\pi)}\sh(\pi'_*(\underline{\sigma})\otimes\pi''_*(\underline{\tau})),
\end{equation*}
where $\Delta(\pi) = \sum_{(\pi)}\pi'\otimes\pi''$ denotes the coproduct of the operation $\pi$,
while $\sh: \DGN_*(\Delta^1)^{\otimes r}\otimes\DGN_*(\Delta^1)^{\otimes r}\rightarrow(\DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1))^{\otimes r}$
is the tensor permutation such that $\sh(a_1\otimes\dots\otimes a_r\otimes b_1\otimes\dots\otimes b_r) = (a_1\otimes b_1)\otimes\dots\otimes(a_r\otimes b_r)$.
Thus the statement of our claim is equivalent to the following relation:
\begin{equation}\tag{$*$}\label{eqn:nabla_product}
\pi_*(\nabla_*(\underline{\sigma}\otimes\underline{\tau})) = \sum_{(\pi)}\nabla_*^{\otimes r}\sh(\pi'_*(\underline{\sigma})\otimes\pi''_*(\underline{\tau})),
\end{equation}
for $\pi\in\EOp(r)$, and for any $\underline{\sigma}\otimes\underline{\tau}\in\DGN_*(\Delta^1)\otimes\DGN_*(\Delta^1)$.
We can use the argument of the previous paragraph to establish the validity of this relation in the case where $\underline{\sigma}$ or $\underline{\tau}$
is a vertex $\underline{0},\underline{1}\in\DGN_*(\Delta^1)$.
We therefore focus on the case $\underline{\sigma}\otimes\underline{\tau} = \underline{01}\otimes\underline{01}$.
We then have $\nabla_*(\underline{01}\otimes\underline{01}) = 0$,
so that the above equation (\ref{eqn:nabla_product})
reduces to the following vanishing relation:
\begin{equation}\tag{$*'$}\label{eqn:vanishing_nabla_product}
\sum_{(\pi)}\nabla_*^{\otimes r}\sh(\pi'_*(\underline{01})\otimes\pi''_*(\underline{01})) = 0.
\end{equation}
We devote the rest of this proof to the verification of this relation.
\textit{The definition of the action of the Barratt--Eccles operad on chains}.
To carry out our verification, we have to go back to the explicit expression of the operation $\varpi_*: \DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)^{\otimes r}$
associated to an element of the Barratt--Eccles operad $\varpi\in\EOp(r)$
in terms of interval cuts associated to a table reduction
of the simplices of the permutations $(s_0,\dots,s_l)$
that occur in the expansion of $\varpi$.
We briefly recall this construction in the general case of a $q$-dimensional simplex $\Delta^q$.
We refer to~\cite{BergerFresse} for details.
The table reduction is a sum of surjective maps $s: \{1,\dots,r+l\}\rightarrow\{1,\dots,r\}$,
which we determine by sequences of values $s = (s(1),\dots,s(r+l))$,
which we arrange on a table, as in the following picture:
\begin{equation*}
s = \left|\begin{array}{l} s(1),\dots,s(e_0-1),s(e_0) \\
s(e_0+1),\ldots,s(e_1-1),s(e_1) \\
\vdots\\
s(e_{l-2}+1),\dots,s(e_{l-1}-1),s(e_{l-1}) \\
s(e_{l-1}+1),\dots,s(r+l)
\end{array}\right..
\end{equation*}
The caesuras $s(e_0),\dots,s(e_{l-1})$, which terminate the rows of the table, are the terms $y = s(x)$ of the sequence $s = (s(1),\dots,s(r+l))$
that do not form the last occurrence of a value $y\in\{1,\dots,r\}$.
Thus, the complement of the caesuras, which consists of the inner terms of the rows $s(e_{i-1}+1),\dots,s(e_i-1)$, $i = 0,\dots,l-1$ (with the convention $e_{-1} = 0$),
and of the terms of the last row $s(e_{l-1}+1),\dots,s(r+l)$, consists of the terms of the sequence $s = (s(1),\dots,s(r+l))$
which are not repeated after their position.
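For instance, for the surjection $s = (1,2,1,3,2)$, with $r = 3$ and $l = 2$, the caesuras are the terms $s(1) = 1$ and $s(2) = 2$, whose values are repeated later in the sequence, and the associated table reads:
\begin{equation*}
s = \left|\begin{array}{l} 1 \\ 2 \\ 1,3,2 \end{array}\right..
\end{equation*}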
The table reduction of a simplex of permutations $\varpi = (s_0,\dots,s_l)$
is a sum of table arrangements of surjections
of the following form:
\begin{equation*}
\TR(s_0,\dots,s_l) = \sum\left|\begin{array}{l} s_0(1),\dots,s_0(r_0-1),s_0(r_0) \\
s_1(1)',\dots,s_1(r_1-1)',s_1(r_1)' \\
\vdots \\
s_{l-1}(1)',\dots,s_{l-1}(r_{l-1}-1)',s_{l-1}(r_{l-1})' \\
s_l(1)',\dots,s_l(r_l)' \end{array}\right.,
\end{equation*}
and which we obtain by browsing the terms of our permutations $s_i$, $i = 0,\dots,l$.
For $i = 0$, we retain all terms of our permutation $s_0(1),\dots,s_0(x),\dots$ up to the choice of a caesura $s_0(r_0)$, where we decide to stop this enumeration.
For $i > 0$, in the enumeration of the terms of the permutation $s_i$ we only retain the values $s_i(1)',\dots,s_i(x)',\dots$
that do not occur before the caesura on the previous rows of our table.
For $i < l$, we again stop this enumeration at the choice of a caesura $s_i(r_i)$.
For $i = l$, we run this process up to the last term of the permutation $s_l$. We sum over all possible choices of caesuras.
For $0\leq\upsilon_0\leq\dots\leq\upsilon_p\leq q$, we generally denote by $\underline{\upsilon_0\dots\upsilon_p}\in\Delta^q$
the $p$-simplex defined by taking the image of the fundamental simplex of the $q$-simplex $\Delta^q$
under the simplicial operator $u^*: \Delta^q_q\rightarrow\Delta^q_p$
associated to the map $u: \{0<\dots<p\}\rightarrow\{0<\dots<q\}$
such that $u(x) = \upsilon_x$, $x = 0,\dots,p$.
The notation $\underline{0\cdots q}$, for instance, represents the fundamental simplex of $\Delta^q$.
Each surjection $s = (s(1),\dots,s(l+r))$
in the table reduction
of an element of the Barratt--Eccles operad $\varpi = (s_0,\dots,s_l)$
is used to assign a sum of tensors
\begin{equation*}
s_*(\underline{0\cdots q}) = \sum_{\alpha}\pm\underline{\sigma}_{(1)}^{\alpha}\otimes\dots\otimes\underline{\sigma}_{(r)}^{\alpha}\in\DGN_*(\Delta^q)^{\otimes r}
\end{equation*}
to the fundamental simplex $\underline{0\cdots q}\in\Delta^q_q$.
We proceed as follows. We fix a sequence of indices $0 = \rho_0\leq\dots\leq\rho_x\leq\dots\leq\rho_{r+l} = q$,
which we associate to an interval decomposition of the indexing sequence
of the fundamental simplex:
\begin{equation*}
\underline{0\cdots q} = \underline{\rho_0\cdots\rho_1}|\underline{\rho_1\cdots\rho_2}|\cdots\,\cdots|\underline{\rho_{r+l-1}\cdots\rho_{r+l}}.
\end{equation*}
For $x = 1,\dots,r+l$, we label the interval $\underline{\rho_{x-1}\cdots\rho_x}$
with the value of the term $s(x)$ of our surjection $s = (s(1),\dots,s(r+l))$
in $\{1,\dots,r\}$,
as in the following picture:
\begin{equation*}
\underline{\rho_0\overset{s(1)}{\cdots}\rho_1}|\underline{\rho_1\overset{s(2)}{\cdots}\rho_2}|\cdots
\,\cdots|\underline{\rho_{x-1}\overset{s(x)}{\cdots}\rho_x}|\cdots
\,\cdots|\underline{\rho_{r+l-1}\overset{s(r+l)}{\cdots}\rho_{r+l}}.
\end{equation*}
Then, for $i\in\{1,\dots,r\}$, we concatenate the intervals $\underline{\rho_{x-1}\cdots\rho_x}$ labelled by the value $s(x) = i$
in order to form the factor $\sigma_{(i)}^{\alpha}$ of our tensor $s_*(\underline{0\cdots q})\in\DGN_*(\Delta^q)^{\otimes r}$.
We sum over all possible choices of indices $0 = \rho_0\leq\dots\leq\rho_x\leq\dots\leq\rho_{r+l} = q$.
The image of the simplex $\underline{0\cdots q}\in\DGN_*(\Delta^q)$ under the operation $\varpi_*: \DGN_*(\Delta^q)\rightarrow\DGN_*(\Delta^q)^{\otimes r}$
associated to an element of the Barratt--Eccles operad $\varpi\in\EOp(r)$
is given by the sum of the tensors $s_*(\underline{0\cdots q})\in\DGN_*(\Delta^q)^{\otimes r}$,
where $s$ runs over surjections that occur in the table reduction of $\varpi$.
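For instance, for the surjection $s = (1,2,1)$ and the fundamental simplex $\underline{01}\in\Delta^1$, the only interval decomposition that produces no degenerate factor is $\underline{0}|\underline{01}|\underline{1}$, associated to the sequence $(\rho_0,\rho_1,\rho_2,\rho_3) = (0,0,1,1)$, and we obtain:
\begin{equation*}
s_*(\underline{01}) = \pm\,\underline{01}\otimes\underline{01},
\end{equation*}
where the first factor is the concatenation $\underline{0}\cdot\underline{1} = \underline{01}$ of the intervals labelled by the value $1$. (We retrieve the interval-cut expression of a cup-one type operation.)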
In general, we can determine the action of the operation $\varpi_*: \DGN_*(X)\rightarrow\DGN_*(X)^{\otimes r}$
on an element $\sigma\in\DGN_*(X)$ in the normalized chains of a simplicial set
by using that $\sigma$ is represented by a simplex $\sigma\in X_q$ such that $\sigma = \hat{\sigma}(\underline{0\cdots q})$
for some simplicial map $\hat{\sigma}: \Delta^q\rightarrow X$.
Indeed, by functoriality of the operation $\varpi_*: \DGN_*(X)\rightarrow\DGN_*(X)^{\otimes r}$,
we have the identity $\varpi_*(\sigma) = \DGN_*(\hat{\sigma})^{\otimes r}(\varpi_*(\underline{0\cdots q}))$
in $\DGN_*(X)^{\otimes r}$.
But we do not really use this correspondence in what follows, because we are going to focus on the computation of the tensors $\varpi_*(\sigma)\in\DGN_*(X)^{\otimes r}$
for the fundamental simplex of the $1$-simplex $\underline{01}\in\Delta^1$.
We study the terms that may remain in the expansion of $\varpi_*(\underline{01})\in\DGN_*(\Delta^1)^{\otimes r}$ after the reduction of the factors $\sigma_{(i)}^{\alpha}\in\DGN_*(\Delta^1)$
associated to degenerate simplices in $\Delta^1$.
\textit{The reduction of degenerate simplices for the action of the Barratt--Eccles operad on the $1$-simplex $\underline{01}\in\DGN_*(\Delta^1)$}.
In general, in order to get non-degenerate simplices $\sigma_{(i)}^{\alpha}$ in the above definition of the tensors $s_*(\underline{0\cdots q})\in\DGN_*(\Delta^q)^{\otimes r}$,
we have to assume that we have a strict inequality $\rho_x<\rho_{y-1}+1$ for each pair $x<y$
such that $s(x)$ and $s(y)$ represent consecutive occurrences of a value $s(x) = s(y) = i$
in our surjection $s(1),\dots,s(r+l)$.
In the case $q = 1$, we must have $0 = \rho_0 = \dots = \rho_{t-1}<\rho_{t} = \dots = \rho_{r+l} = 1$, for some choice of index $t$.
Then we associate an interval $\underline{01}$ to the term $s(t)$ of our surjection $s$,
an interval of length one $\underline{0}$ to the terms $s(x)$ such that $x<t$,
and an interval of length one $\underline{1}$ to the terms $s(x)$ such that $x>t$.
In this context, the only possibility to get a tensor product of non-degenerate simplices is that $s(t)$
lies in the last row of our table.
Indeed, we can observe that no caesura $s(e_i)$ can be associated to an interval $\underline{1}$ or $\underline{01}$,
because in the case where such an interval $\underline{1}$ or $\underline{01}$
is labelled by the value of a caesura $s(e_i)$,
the next occurrence of this value should be associated to the interval $\underline{1}$,
producing a degeneracy $\underline{\cdots 11\cdots}$
when we perform the concatenation operation.
Now, if we assume that all caesuras $s(e_i)$ are associated to the interval $\underline{0}$,
then the term $s(t)$ associated to the interval $\underline{01}$
necessarily occurs after the last caesura,
and hence, necessarily lies on the last row of our table.
In the case of the table reduction of a simplex of permutations $\varpi = (s_0,\dots,s_l)$,
we get that the interval $\underline{01}$ is associated to a term $s_l(t') = s_l(t)'$
which we retain in the sequence of values of the permutation $s_l$
on the last row of our table reduction.
The interval of length one $\underline{1}$ can only be labelled by the value of terms $s_l(x)'$ that occur after $s_l(t)'$
on the last row of our table.
The values of the terms $s_l(x)$ with $x<t'$ in the permutation $s_l$
cannot occur at such positions.
Therefore, the occurrences of these values in our table reduction can only be decorated by intervals of length one $\underline{0}$,
and produce either a vertex $\sigma_{(i)}^{\alpha} = \underline{0}$ or a degenerate element
when we perform our concatenation operation.
This analysis implies that the tensors $\sigma_{(1)}^{\alpha}\otimes\cdots\otimes\sigma_{(r)}^{\alpha}$,
which occur in the expansion of a coproduct $\varpi_*(\underline{01})\in\DGN_*(\Delta^1)^{\otimes r}$, $\varpi = (s_0,\dots,s_l)$,
satisfy the following repartition constraint (when no degeneracy occurs):
\begin{equation*}
\sigma_{(i)}^{\alpha} = \begin{cases} \text{$\underline{0}$}, & \text{for $i = s_l(1),\dots,s_l(t'-1)$}, \\
\text{$\underline{01}$}, & \text{for $i = s_l(t')$},
\end{cases}
\end{equation*}
where $s_l(t') = s_l(t)'$ is the term of the permutation $s_l(1),\dots,s_l(r)$ that we associate to the interval $\underline{01}$ in our interval decomposition process.
Note also that (in non-degenerate cases) we necessarily have
\begin{equation*}
\sigma_{(i)}^{\alpha} = \underline{01},\quad\begin{aligned}[t] & \text{for the values of the caesuras $i = s_*(e_*)'$}\\
& \text{in our table reduction of the simplex $\varpi = (s_0,\dots,s_l)$},
\end{aligned}
\end{equation*}
since the values of the caesuras are repeated in our table (and hence lead to simplices of positive dimension when we perform our interval concatenation).
\textit{The verification of the vanishing relation}.
We go back to the operations $\pi_*': \DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)^{\otimes r}$ and $\pi_*'': \DGN_*(\Delta^1)\rightarrow\DGN_*(\Delta^1)^{\otimes r}$
associated to the factors of a coproduct $\Delta(\pi) = \sum_{(\pi)}\pi'\otimes\pi''$
in the expression of our equation~(\ref{eqn:vanishing_nabla_product}).
Recall that, for an element $\pi = (w_0,\dots,w_n)\in\EOp(r)$, we have by definition $\Delta(\pi) = \sum_k(w_0,\dots,w_k)\otimes(w_k,\dots,w_n)$.
We fix a term of this coproduct $\pi'\otimes\pi'' = (w_0,\dots,w_k)\otimes(w_k,\dots,w_n)$
and interval decompositions that fulfill the conditions of the previous paragraph
for some table reductions of the simplices $\pi' = (w_0,\dots,w_k)$
and $\pi'' = (w_k,\dots,w_n)$.
We consider the associated tensors $\sigma_{(1)}'\otimes\dots\otimes\sigma_{(r)}'$ and $\sigma_{(1)}''\otimes\dots\otimes\sigma_{(r)}''$
in the expansion of $\pi'_*(\underline{01})$
and $\pi''_*(\underline{01})$. We consider the case $k<n$ first.
Let $i = w_k(t')$ be the term of the permutation $w_k$ to which we associate the interval $\underline{01}$ in the table reduction of $\pi' = (w_0,\dots,w_k)$,
so that we have $\sigma_{(i)}' = \underline{01}$ (in a non-degenerate case).
If this term $i = w_k(t')$ occurs on the first row of our table reduction of the simplex $\pi'' = (w_k,\dots,w_n)$,
then we have either $\sigma_{(i)}'' = \underline{0}$ (when $w_k(t')$ occurs before the caesura)
or $\sigma_{(i)}'' = \underline{01}$ (when the caesura is at $w_k(t')$).
In both cases, we have $\nabla_*(\sigma_{(i)}'\otimes\sigma_{(i)}'') = 0$ since $\nabla_*(\underline{01}\otimes\underline{01}) = \nabla_*(\underline{0}\otimes\underline{01}) = 0$
by definition of our map $\nabla_*$,
and therefore such a choice results in a zero term
in the expression $\nabla_*^{\otimes r}\sh(\pi'_*(\underline{01})\otimes\pi''_*(\underline{01}))$.
If, on the contrary, we take the caesura $j = w_k(r_k'')$ before the value $i = w_k(t')$
occurs in our table reduction of the simplex $\pi'' = (w_k,\dots,w_n)$,
then this means that the value $j = w_k(r_k'')$ occurs before $i = w_k(t')$
in the permutation $w_k$,
and in this case, we have by our previous analysis $\sigma_{(j)}' = \underline{0}$
when we form the coproduct $\pi'_*(\underline{01})$.
We then have $\nabla_*(\sigma_{(j)}'\otimes\sigma_{(j)}'') = 0$ (since $\nabla_*(\underline{0}\otimes\underline{01}) = 0$),
and we again conclude that this choice results in a zero term in the expression $\nabla_*^{\otimes r}\sh(\pi'_*(\underline{01})\otimes\pi''_*(\underline{01}))$.
In the case $k = n$ (so that $\deg(\pi'') = 0$), we just consider the value $j = w_n(t'')$ to which we assign an interval $\underline{01}$ in our construction
of the tensor $\pi''_*(\underline{01})$, and we argue similarly: if $t'<t''$, then we have $\sigma_{(i)}'' = \underline{0}$,
so that $\nabla_*(\sigma_{(i)}'\otimes\sigma_{(i)}'') = \nabla_*(\underline{01}\otimes\underline{0}) = 0$;
if $t'=t''$, then we have $\sigma_{(i)}' = \sigma_{(i)}'' = \underline{01}$
and $\nabla_*(\sigma_{(i)}'\otimes\sigma_{(i)}'') = \nabla_*(\underline{01}\otimes\underline{01}) = 0$;
if $t'>t''$, then we have $\sigma_{(j)}' = \underline{0}$
and $\nabla_*(\sigma_{(j)}'\otimes\sigma_{(j)}'') = \nabla_*(\underline{0}\otimes\underline{01}) = 0$.
In all cases, we conclude that our choices result in a zero term in $\nabla_*^{\otimes r}\sh(\pi'_*(\underline{01})\otimes\pi''_*(\underline{01}))$.
Hence, we do obtain the vanishing relation $\sum_{(\pi)}\nabla_*^{\otimes r}\sh(\pi'_*(\underline{01})\otimes\pi''_*(\underline{01})) = 0$,
and this result finishes the proof of our proposition.
\end{proof}
\end{appendix}
\bibliographystyle{plain}
\section{Introduction}
Proteins are polymers formed of various kinds of amino acids.
During or after their synthesis by ribosomes, proteins fold to form a specific three-dimensional shape.
This 3D geometric pattern defines their biological functionality, properties, and so on.
For instance, hemoglobin is able to carry oxygen in the bloodstream thanks to its 3D conformation.
However, contrary to the mapping from DNA to the amino acid sequence, the complex folding of this sequence is not yet understood.
In fact, Anfinsen's
``Thermodynamic Hypothesis'' claims that the chosen 3D conformation
corresponds to the global free-energy minimum of the considered
protein~\cite{Anfinsen20071973}.
Efficient constraint programming methods can solve the
problem for reasonably sized sequences~\cite{Dotu}.
But the conformation that minimizes this free energy
is most of the time impossible to find in practice,
at least for large proteins,
due to the number of possible conformations.
Indeed, the Protein Structure Prediction (PSP) problem is an
NP-complete one \cite{Crescenzi98,Berger1998}.
This is why conformations of
proteins are \emph{predicted}: the 3D structures that minimize
the free energy of the protein under consideration are
found by using computational intelligence tools like genetic
algorithms \cite{DBLP:conf/cec/HiggsSHS10}, ant colonies
\cite{Shmygelska2005Feb}, particle swarm
\cite{DBLP:conf/cec/Perez-HernandezRG10}, memetic algorithms
\cite{Islam:2009:NMA:1695134.1695181}, constraint programming~\cite{Dotu,cpsp},
or neural networks
\cite{Dubchak1995}, etc.
These computational intelligence tools
are coupled with protein energy models (like AMBER,
DISCOVER, or ECEPP/3) to find a conformation that approximately
minimizes the free energy of a given protein.
Furthermore, to face the complexity of the PSP
problem, authors who try to predict the protein folding process use
models of various resolutions.
For instance, in coarse-grained single-bead models, each amino acid is considered as
a single bead, or point.
These low resolution models are often used as the first stage of the 3D
structure prediction: the backbone of the 3D conformation is
determined. Then, high-resolution models come next for further
exploration. Such a prediction strategy is commonly used in PSP
software packages like ROSETTA \cite{Bonneau01,Chivian2005} or TASSER
\cite{Zhang2005}.
In this paper, which is a supplement to \cite{bgc11:ip,bgcs11:ij},
we investigate the 2D HP square lattice model. Let us recall that this popular model
is used to test methods and as a first
2D HP lattice folding stage in some protein folding
prediction algorithms
\cite{DBLP:conf/cec/HiggsSHS10,Braxenthaler97,DBLP:conf/cec/IslamC10,Unger93,DBLP:conf/cec/HorvathC10}.
It focuses only on hydrophobicity by separating
the amino acids into two sets: hydrophobic (H) and hydrophilic (or
polar P) \cite{Dill1985}.
These amino acids occupy vertices of a square lattice, and the 2D low
resolution conformation of the given protein
is thus represented by a self avoiding walk (SAW) on this lattice.
Variations of this model are frequently investigated: 2D or 3D lattices, with square, cubic, triangular, or face-centered-cube shapes.
However, in each case, a SAW requirement is imposed on the
targeted conformation.
The PSP problem takes place in that context: given a sequence
of hydrophobic and hydrophilic amino acids, finding the
self-avoiding walk on the lattice that maximizes the number of
hydrophobic neighbors is an NP-complete problem~\cite{Crescenzi98}.
We will show in this document that this SAW requirement can be understood in several
different ways, even in the 2D square lattice model. The first understanding of this requirement in the 2D model,
called $SAW_1$ in the remainder of this paper,
was chosen by the authors of~\cite{Crescenzi98} when they
established the proof of NP-completeness for the PSP problem.
It corresponds to the famous ``excluded volume'' requirement,
and it has been already well-studied by the discrete mathematics
community (see, for instance, the book of Madras and
Slade~\cite{Madras}).
It possesses a dynamical formulation, which we call $SAW_2$ in
this document.
The $SAW_3$ set is frequently
chosen by bioinformaticians when they try
to predict the backbone conformation of proteins
using a low resolution model.
Finally, the last one proposed here, $SAW_4$, is
perhaps the most realistic one, even
though it still remains far from the biological
folding process. We will demonstrate that
these four sets are not equal. In particular,
we will establish that $SAW_4$ is strictly
included into $SAW_3$, which is strictly included
into $SAW_1=SAW_2$. So the NP-completeness
proof has been realized in a strictly larger set than the
one used for prediction, which is strictly larger than the
set of biologically possible conformations.
Concrete examples of 2D conformations that are
in a $SAW_i$ without being in another $SAW_j$
will be given, and characterizations of these
sets, in terms
of graphs, will finally be proposed.
The remainder of this paper is structured as follows. In the next
section we recall some notations and terminologies on the 2D HP square lattice model, chosen here to simplify explanations.
In Section~\ref{sec:dynamical system}, the
dynamical system used to describe the folding
process in the 2D model, initially presented
in~\cite{bgc11:ip,bgcs11:ij}, is recalled.
In Sect.~\ref{sec:saw}, various ways to understand the so-called
self-avoiding walk (SAW) requirement are detailed.
Their relations and inclusions are investigated in the next section.
Section~\ref{sec:graph} presents a graph approach to determine the
size ratios between the four SAW requirements defined previously,
and the consequences of their strict inclusions are discussed.
This paper ends with a conclusion section, in
which our contribution is summarized and intended future work is
presented.
\section{Background}
In the sequel $S^{n}$ denotes the $n^{th}$ term of a sequence $S$ and
$V_{i}$ the $i^{th}$ component of a vector $V$.
The $k^{th}$
composition of a single function $f$ is represented by $f^{k}=f
\circ...\circ f$.
The set of congruence classes modulo 4 is denoted as $\mathds{Z}/4\mathds{Z}$.
Finally, given two integers $a<b$, the following notation is used:
$\llbracket a;b\rrbracket =\{a,a+1,\hdots,b\}$.
\subsection{2D Hydrophilic-Hydrophobic (HP) Model}
In the HP model, hydrophobic interactions are supposed to dominate
protein folding.
This model was first introduced by Dill, who
considers in \cite{Dill1985} that the energy-freeing protein core is
formed by hydrophobic amino acids, whereas hydrophilic amino acids
tend to move to the outer surface due to their affinity with the
solvent (see Fig.~\ref{fig:hpmodel}).
In this model, a protein conformation is a ``self-avoi\-ding walk (SAW)'', as the walks studied in~\cite{Madras}, on a 2D or 3D lattice such that its energy $E$, depending on topological
neighboring contacts between hydrophobic amino acids that are not
contiguous in the primary structure, is minimal.
In other words, for an amino-acid sequence $P$ of length $\mathsf{N}$ and for the set
$\mathcal{C}(P)$ of all SAW conformations of $P$, the chosen
conformation will be $C^* = \arg\min \left\{E(C) \mid C \in \mathcal{C}(P)\right\}$ \cite{Shmygelska05}.
In that context and for a conformation $C$, $E(C)=-q$, where $q$ is equal to the number of
topological hydrophobic neighbors.
For example, $E(C)=-5$ in
Fig.~\ref{fig:hpmodel}.
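As a concrete sketch of this energy count (our own illustration; the function and argument names are not from the paper), one can compute $E(C)$ from the list of lattice points occupied by the residues and the H/P labelling of the sequence:

```python
def energy(sequence, points):
    """E(C) = -q, where q counts pairs of hydrophobic (H) residues that
    occupy adjacent lattice vertices without being contiguous in the
    primary structure. 'points' is the list of (x, y) lattice positions."""
    q = 0
    for i in range(len(points)):
        for j in range(i + 2, len(points)):  # j > i+1: skip chain neighbours
            if sequence[i] == 'H' == sequence[j]:
                xi, yi = points[i]
                xj, yj = points[j]
                if abs(xi - xj) + abs(yi - yj) == 1:  # lattice neighbours
                    q += 1
    return -q
```

For instance, for the hypothetical sequence HPPH folded into a unit square, the two H residues are lattice neighbours but not contiguous in the chain, so the energy is $-1$.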
\begin{figure}[t]
\centering
\includegraphics[width=2.375in]{HPmodel.eps}
\caption{Hydrophilic-hydrophobic model (black squares are
hydrophobic residues)}
\label{fig:hpmodel}
\end{figure}
\subsection*{Protein Encoding}
In addition to the direct coordinate representation in
the lattice, at least two other
isomorphic encoding strategies for HP models are possible: relative
encoding and absolute encoding.
In relative encoding \cite{Hoque09},
the move direction is defined relative to the direction of the
previous move (forward, backward, left, or right).
Alternatively, in absolute encoding
\cite{Backofen99algorithmicapproach}, which is the encoding chosen in
this paper, the direct coordinate presentation is replaced by letters
or numbers representing directions with respect to the lattice
structure.
For absolute encoding in the 2D square lattice, the permitted moves are: east
$\rightarrow$ (denoted by 0), south $\downarrow$ (1), west $\leftarrow$ (2), and north $\uparrow$ (3).
A 2D conformation $C$ of $\mathsf{N}+1$ residues for a protein $P$ is then an element $C$ of $\mathds{Z}/4\mathds{Z}^{\mathsf{N}}$, with a first component equal to 0 (east) \cite{Hoque09}.
For instance, in Fig.~\ref{fig:hpmodel}, the 2D absolute encoding is 00011123322101 (starting from the upper left corner), whereas 001232 corresponds to the following path
in the square lattice: (0,0), (1,0), (2,0), (2,-1), (1,-1), (1,0), (0,0).
In that situation, at most $4^{\mathsf{N}}$ conformations are possible when considering $\mathsf{N}+1$ residues, even if some of them invalidate the SAW requirement as
defined in~\cite{Madras}.
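The absolute encoding can be turned into lattice coordinates with a few lines of code (a minimal sketch of ours, not part of the paper):

```python
# Absolute moves: 0 = east, 1 = south, 2 = west, 3 = north.
MOVES = {0: (1, 0), 1: (0, -1), 2: (-1, 0), 3: (0, 1)}

def decode(encoding):
    """Turn an absolute encoding (string of digits 0-3) into the list of
    lattice points visited, starting from the origin (0, 0)."""
    x, y = 0, 0
    path = [(x, y)]
    for c in encoding:
        dx, dy = MOVES[int(c)]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

Decoding the encoding 001232 from the text reproduces the path (0,0), (1,0), (2,0), (2,-1), (1,-1), (1,0), (0,0), which revisits two vertices and hence violates the SAW requirement.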
\section{A Dynamical System for the 2D HP Square Lattice Model}
\label{sec:dynamical system}
Protein minimum energy structure can be considered
statically or dynamically. In the latter case, one
speaks in this article of ``protein folding''.
We recall here how to model the folding process (pivot moves)
in the 2D model as a dynamical
system. Readers are referred to \cite{bgc11:ip,bgcs11:ij} for further explanations
and for a study of the dynamical behavior of protein pivot moves in
this 2D model (it is indeed proven to be chaotic, as
defined by Devaney~\cite{devaney}).
\subsection{Initial Premises}
Let us start with preliminaries introducing some concepts that will be
useful in our approach.
The primary structure of a given protein $P$ with $\mathsf{N}+1$ residues is coded by $0 0 \hdots 0$ ($\mathsf{N}$ times) in absolute encoding.
Its final 2D conformation has an absolute encoding equal to $0 C_1^* \hdots C_{\mathsf{N}-1}^*$, where $\forall i, C_i^* \in \mathds{Z}/4\mathds{Z}$, is such that $E(C^*) = min \left\{E(C) \big/ C \in \mathcal{C}(P)\right\}$.
This final conformation depends on the repartition of hydrophilic and hydrophobic amino acids in the initial sequence.
Moreover, we suppose that, if the residue number $n+1$ is at the east of the residue number $n$ in absolute encoding ($\rightarrow$) and if a fold (pivot move) occurs after $n$, then the east move can only be changed into north ($\uparrow$) or south ($\downarrow$).
That means, in our simplistic model, only rotations
or pivot moves of $+\frac{\pi}{2}$ or $-\frac{\pi}{2}$ are possible.
Consequently, for a given residue that has to be updated, only one of the two possibilities below can appear for its
absolute encoding during a pivot move:
\begin{itemize}
\item $0 \longmapsto 1$ (that is, east becomes north), $1 \longmapsto 2, 2 \longmapsto 3,$ or $ 3 \longmapsto 0$
for a pivot move in the clockwise direction, or
\item $1 \longmapsto 0, 2 \longmapsto 1, 3 \longmapsto 2,$ or $0 \longmapsto 3$
for a pivot move in the anticlockwise direction.
\end{itemize}
This fact leads to the following definition:
\begin{definition}
The \emph{clockwise fold function} is the function $f: \mathds{Z}/4\mathds{Z} \longrightarrow \mathds{Z}/4\mathds{Z}$ defined by $f(x) = x+1 ~(\textrm{mod}~ 4)$.
\end{definition}
Obviously the anticlockwise fold function is $f^{-1}(x) = x-1 ~(\textrm{mod}~ 4)$.
Thus at the $n^{th}$ folding time, a residue $k$ is chosen and its absolute move is changed by using either $f$ or $f^{-1}$.
As a consequence, \emph{all of the absolute moves must be updated from the coordinate $k$ until the last one $\mathsf{N}$ by using the same folding function}.
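The fold functions and the rule that all moves from position $k$ onward are updated can be sketched as follows (our own illustration; the name `fold` is not from the paper):

```python
def fold(C, k):
    """Apply one pivot move to a conformation C in absolute encoding.
    Positive k: clockwise fold f(x) = x+1 mod 4 applied to components
    |k|..N; negative k: anticlockwise fold f^{-1}(x) = x-1 mod 4;
    k = 0: no fold.  Components C_1..C_N of the text are C[0]..C[N-1]."""
    if k == 0:
        return list(C)
    step = 1 if k > 0 else -1
    j = abs(k) - 1  # 0-based index of the first updated component
    return list(C[:j]) + [(c + step) % 4 for c in C[j:]]
```

Applied to the conformation $C = 000111$ of Example 1 with a clockwise fold at the third residue, this returns $(0,0,1,2,2,2)$, as in the text.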
\begin{example}
\label{ex1}
If the current conformation is $C=000111$, i.e.,
\begin{center}
\scalebox{0.75}
{
\begin{pspicture}(0,-2.6)(5.2,2.6)
\psframe[linewidth=0.04,dimen=outer](0.4,2.6)(0.0,2.2)
\psframe[linewidth=0.04,dimen=outer](2.0,2.6)(1.6,2.2)
\psframe[linewidth=0.04,dimen=outer](3.6,2.6)(3.2,2.2)
\psframe[linewidth=0.04,dimen=outer](5.2,2.6)(4.8,2.2)
\psframe[linewidth=0.04,dimen=outer](5.2,1.0)(4.8,0.6)
\psframe[linewidth=0.04,dimen=outer](5.2,-0.6)(4.8,-1.0)
\psframe[linewidth=0.04,dimen=outer](5.2,-2.2)(4.8,-2.6)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.4,2.4)(1.6,2.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.0,2.4)(3.2,2.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.6,2.4)(4.8,2.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,2.2)(5.0,1.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,0.6)(5.0,-0.6)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,-1.0)(5.0,-2.2)
\end{pspicture}
}
\end{center}
and
if the third residue is chosen to fold (pivot move) by a rotation of $-\frac{\pi}{2}$ (mapping $f$), the new conformation will be
$(C_1,C_2,f(C_3),f(C_4),f(C_5),f(C_6))$, which
is $(0,0,1,2,2,2).$ That is,
\begin{center}
\scalebox{0.75}
{
\begin{pspicture}(0,-1.0)(5.2,1.0)
\psframe[linewidth=0.04,dimen=outer](2.0,1.0)(1.6,0.6)
\psframe[linewidth=0.04,dimen=outer](3.6,1.0)(3.2,0.6)
\psframe[linewidth=0.04,dimen=outer](5.2,1.0)(4.8,0.6)
\psframe[linewidth=0.04,dimen=outer](2.0,-0.6)(1.6,-1.0)
\psframe[linewidth=0.04,dimen=outer](0.4,-0.6)(0.0,-1.0)
\psframe[linewidth=0.04,dimen=outer](5.2,-0.6)(4.8,-1.0)
\psframe[linewidth=0.04,dimen=outer](3.6,-0.6)(3.2,-1.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.0,0.8)(3.2,0.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.6,0.8)(4.8,0.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.2,-0.8)(2.0,-0.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,0.6)(5.0,-0.6)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.8,-0.8)(3.6,-0.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.6,-0.8)(0.4,-0.8)
\end{pspicture}
}
\end{center}
\end{example}
These considerations lead to the formalization described thereafter.
\subsection{Formalization and Notations}
Let $\mathsf{N}+1$ be a fixed number of amino acids, where $\mathsf{N}\in\mathds{N}^*= \left\{1,2,3,\hdots\right\}$.
We define
$$\check{\mathcal{X}}=\mathds{Z}/4\mathds{Z}^\mathsf{N}\times \llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N}$$
as the phase space of all possible folding processes.
An element $X=(C,F)$ of this dynamical folding space is constituted by:
\begin{itemize}
\item A conformation of the $\mathsf{N}+1$ residues in absolute encoding: $C=(C_1,\hdots, C_\mathsf{N}) \in \mathds{Z}/4\mathds{Z}^\mathsf{N}$. Note that we do not require self-avoiding walks here.
\item A sequence $F \in \llbracket -\mathsf{N} ; \mathsf{N} \rrbracket^\mathds{N}$ of future pivot moves such that, when $F_i = k \in \llbracket -\mathsf{N}; \mathsf{N} \rrbracket$, the following occurs:
\begin{itemize}
\item a pivot move after the $k-$th residue by a rotation of $-\frac{\pi}{2}$ (mapping $f$) at the $i-$th step, if $k = F_i >0$,
\item no fold at time $i$ if $k=0$,
\item a pivot move after the $|k|-$th residue by a rotation of $\frac{\pi}{2}$ (\emph{i.e.}, $f^{-1}$) at the $i-$th time, if $k<0$.
\end{itemize}
\end{itemize}
On this phase space, the protein folding dynamic in the 2D model can be formalized as follows.
\medskip
Denote by $i$ the map that transforms a folding sequence into its first term (\emph{i.e.}, into the first folding operation):
$$
\begin{array}{lccl}
i:& \llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N} & \longrightarrow & \llbracket -\mathsf{N};\mathsf{N} \rrbracket \\
& F & \longmapsto & F^0,
\end{array}$$
by $\sigma$ the shift function over $\llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N}$, that is to say,
$$
\begin{array}{lccl}
\sigma :& \llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N}
& \longrightarrow & \llbracket -\mathsf{N};\mathsf{N} \rrbracket^\mathds{N} \\
& \left(F^k\right)_{k \in \mathds{N}} & \longmapsto
& \left(F^{k+1}\right)_{k \in \mathds{N}},
\end{array}$$
\noindent and by $sign$ the function:
$$
sign(x) = \left\{
\begin{array}{ll}
1 & \textrm{if } x>0,\\
0 & \textrm{if } x=0,\\
-1 & \textrm{else.}
\end{array}
\right.
$$
Remark that the shift function removes the first folding operation (a pivot move) from the folding sequence $F$ once it has been achieved.
Consider now the map $G:\check{\mathcal{X}} \to \check{\mathcal{X}}$ defined by:
$$G\left((C,F)\right) = \left( f_{i(F)}(C),\sigma(F)\right),$$
\noindent where $\forall k \in \llbracket -\mathsf{N};\mathsf{N} \rrbracket$,
$f_k: \mathds{Z}/4\mathds{Z}^\mathsf{N} \to \mathds{Z}/4\mathds{Z}^\mathsf{N}$
is defined by:
$f_k(C_1,\hdots,C_\mathsf{N}) = (C_1,\hdots,C_{|k|-1},f^{sign(k)}(C_{|k|}),\hdots,f^{sign(k)}(C_\mathsf{N})).$
Thus the folding process of a protein $P$ in the 2D HP square lattice model, with initial conformation equal to $(0,0, \hdots, 0)$ in absolute encoding and a folding sequence equal to $(F^i)_{i \in \mathds{N}}$, is defined by the following dynamical system over $\check{\mathcal{X}}$:
$$
\left\{
\begin{array}{l}
X^0=((0,0,\hdots,0),F)\\
X^{n+1}=G(X^n), \forall n \in \mathds{N}.
\end{array}
\right.
$$
In other words, at each step $n$, if $X^n=(C,F)$, we take the first folding operation to realize, that is $i(F) = F^0 \in \llbracket
-\mathsf{N};\mathsf{N} \rrbracket$, we update the current conformation
$C$ by rotating all of the residues coming after the $|i(F)|-$th one,
which means that we replace the conformation $C$ with $f_{i(F)}(C)$.
Lastly, we remove this rotation (the first term $F^0$) from the folding sequence $F$: $F$ becomes $\sigma(F)$.
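The iteration of $G$ described above can be sketched as follows (a self-contained illustration of ours; we use the convention that a finite folding sequence stands for one padded with zeros, so iteration stops when the sequence is exhausted):

```python
def G(state):
    """One step of the dynamical system: state = (C, F), where C is the
    current conformation in absolute encoding and F the remaining
    folding sequence.  Applies the first pivot move and shifts F."""
    C, F = state
    k, rest = F[0], F[1:]
    if k == 0:
        return (list(C), rest)
    step = 1 if k > 0 else -1          # clockwise f or anticlockwise f^{-1}
    j = abs(k) - 1                     # 0-based index of first updated move
    new_C = list(C[:j]) + [(c + step) % 4 for c in C[j:]]
    return (new_C, rest)

def iterate(C0, F):
    """Run the folding process until the folding sequence is exhausted."""
    state = (list(C0), list(F))
    while state[1]:
        state = G(state)
    return state[0]
```

For instance, one step on $((0,0,0,1,1,1),(3,\hdots))$ yields $(0,0,1,2,2,2)$ as in the examples, and running the folding sequence $(-4,-3,-2,+4)$ of Figure 2 on the conformation $0000$ terminates on a conformation without crossing.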
\begin{example}
Let us reconsider Example \ref{ex1}.
The unique iteration of this folding process transforms a point of $\check{\mathcal{X}}$ having the form $\left((0,0,0,1,1,1);(3, F^1, F^2, \hdots)\right)$ into $G\left((0,0,0,1,1,1),(+3,F^1,F^2, \hdots)\right),$ which is equal to $\left((0,0,1,2,2,2),(F^1,F^2, \hdots)\right)$.
\end{example}
\begin{remark}
Such a formalization allows the study of proteins that never stop folding, for instance due to never-ending interactions with the environment.
\end{remark}
\begin{remark}
A protein $P$ that has finished folding, if such a protein exists, has the form $(C,(0,0,0,\hdots))$, where $C$ is the final 2D structure of $P$.
In this case, we can identify a folding sequence that converges to 0, \emph{i.e.}, of the form $(F^0, \hdots, F^n,0, \hdots)$, with the finite sequence $(F^0, \hdots, F^n)$.
\end{remark}
We will now introduce the SAW requirement in our formulation of the folding process in the 2D model.
\section{The SAW Requirement}
\label{sec:saw}
\subsection{The paths without crossing}
\label{pathWithout}
Let $\mathcal{P}$ denote the 2D plane, and consider the map
$$
\begin{array}{cccc}
p: & \mathds{Z}/4\mathds{Z}^\mathsf{N} & \to & \mathcal{P}^{\mathsf{N}+1} \\
& (C_1, \hdots, C_\mathsf{N}) & \mapsto & (X_0, \hdots, X_\mathsf{N})
\end{array}
$$
where $X_0 = (0,0)$, and
$$
X_{i+1} = \left\{
\begin{array}{ll}
X_i + (1,0) & ~\textrm{if } C_{i+1} = 0,\\
X_i + (0,-1) & ~\textrm{if } C_{i+1} = 1,\\
X_i + (-1,0) & ~\textrm{if } C_{i+1} = 2,\\
X_i + (0,1) & ~\textrm{if } C_{i+1} = 3.
\end{array}
\right.
$$
The map $p$ transforms an absolute encoding in its 2D representation.
For instance, $p((0,0,0,1,1,1))$ is ((0,0);(1,0);(2,0);(3,0);(3,-1);(3,-2);(3,-3)), that is, the first figure of Example~\ref{ex1}.
Now, for each $(P_0, \hdots, P_\mathsf{N})$ of $\mathcal{P}^{\mathsf{N}+1}$, we denote by $$support((P_0, \hdots, P_\mathsf{N}))$$ the set (without repetition): $\left\{P_0, \hdots, P_\mathsf{N}\right\}$. For instance,
$$support\left((0,0);(0,1);(0,0);(0,1)\right) = \left\{(0,0);(0,1)\right\}.$$
Then,
\begin{definition}
\label{def:SAW}
A conformation $(C_1, \hdots, C_\mathsf{N}) \in \mathds{Z}/4\mathds{Z}^{\mathsf{N}}$ is a \emph{path without crossing} iff the cardinality of $support(p((C_1, \hdots, C_\mathsf{N})))$ is $\mathsf{N}+1$.
\end{definition}
This path without crossing is sometimes referred to as the ``excluded
volume'' requirement in the literature. It only means that no lattice
vertex can be occupied by more than one protein monomer.
We can finally remark that Definition \ref{def:SAW} concerns only one conformation, and not a \emph{sequence} of conformations that occurs in a folding process.
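This definition translates directly into a cardinality test on the support of $p(C)$ (a sketch of ours, not from the paper):

```python
MOVES = {0: (1, 0), 1: (0, -1), 2: (-1, 0), 3: (0, 1)}  # east, south, west, north

def is_path_without_crossing(C):
    """True iff the support of p(C) has cardinality N + 1, i.e. the
    walk visits N + 1 distinct lattice vertices (Definition of SAW)."""
    x, y = 0, 0
    path = [(x, y)]
    for c in C:
        dx, dy = MOVES[c]
        x, y = x + dx, y + dy
        path.append((x, y))
    return len(set(path)) == len(C) + 1  # set() discards repetitions
```

For example, the conformation $000111$ is a path without crossing, while $0123$ returns to the origin and therefore is not.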
\subsection{Defining the SAW Requirements in the 2D model}
The next stage in the formalization of the protein folding process in the 2D model as a dynamical system is to take into account the self-avoiding walk (SAW) requirement, by restricting the set $\mathds{Z}/4\mathds{Z}^\mathsf{N}$ of all possible conformations to one of its subset.
That is, to define precisely the set $\mathcal{C}(P)$ of acceptable conformations of a protein $P$ having $\mathsf{N}+1$ residues.
This stage needs a clear definition of the SAW requirement.
However, as stated above, Definition \ref{def:SAW} only focuses on a given conformation, but not on a complete folding process.
In our opinion, this requirement applied to the whole folding process can be understood at least in four ways.
\medskip
In the first and least restrictive approach, which we call ``$SAW_1$'', we only require that the studied conformation satisfies Definition \ref{def:SAW}.
\begin{definition}[$SAW_1$]
A conformation $c$ of $\mathds{Z}/4\mathds{Z}^{\mathsf{N}}$ satisfies the first self-avoiding walk requirement ($c \in SAW_1(\mathsf{N})$)
if this conformation is a path without crossing.
\end{definition}
No attention is paid to whether this conformation is the result of a folding process that has started from $(0,0,\hdots,0)$.
Such a SAW requirement has been chosen by authors of~\cite{Crescenzi98} when they have proven the NP-completeness of the PSP problem.
It is usually the SAW requirement of biomathematicians, corresponding to the self-avoiding walks studied
in the book of Madras and Slade~\cite{Madras}.
It is easy to convince ourselves that conformations of $SAW_1$ are the conformations that can be obtained by any chain growth algorithm, like in~\cite{Bornberg-Bauer:1997:CGA:267521.267528}.
As stated before, protein minimum energy structure can be considered statically or dynamically.
In the latter case, we speak here of ``protein folding'',
since this concerns the dynamic process of folding. When folding on a lattice model, there is an underlying algorithm,
such as Monte Carlo or genetic algorithm, and an allowed move set. In the following, for the sake of simplicity, only
pivot moves are investigated, but the corner and crankshaft moves should be further investigated~\cite{citeulike:118812}.
Basically, in the protein folding literature, there are methods that require the ``excluded volume'' condition during the
dynamic folding procedure, and those that do not require this condition.
This is why the second proposed approach, called $SAW_2$, requires that, starting from the initial condition $(0,0,\hdots, 0)$, we obtain by a succession of pivot moves a final conformation that is a path without crossing.
In other words, we want the final tree corresponding to the true 2D conformation to have 2 vertices of degree 1 and $\mathsf{N}-1$ vertices of degree 2.
For instance, the folding process of Figure~\ref{saw2} is acceptable in $SAW_2$, even if it presents a cross in an intermediate conformation.
\begin{figure}
\centering
\caption{Folding process acceptable in $SAW_2$ but not in $SAW_3$. The folding sequence (-4,-3,-2,+4), comprising 3 anticlockwise and 1 clockwise pivot moves,
is applied here to the conformation 0000, represented as the upper line.}
\label{saw2}
\scalebox{1}
{
\begin{pspicture}(0,-3.1)(4.2,3.1)
\pscircle[linewidth=0.04,dimen=outer](0.9,3.0){0.1}
\pscircle[linewidth=0.04,dimen=outer](1.7,3.0){0.1}
\pscircle[linewidth=0.04,dimen=outer](2.5,3.0){0.1}
\pscircle[linewidth=0.04,dimen=outer](3.3,3.0){0.1}
\pscircle[linewidth=0.04,dimen=outer](4.1,3.0){0.1}
\psline[linewidth=0.04cm](1.0,3.0)(1.6,3.0)
\psline[linewidth=0.04cm](1.8,3.0)(2.4,3.0)
\psline[linewidth=0.04cm](2.6,3.0)(3.2,3.0)
\psline[linewidth=0.04cm](3.4,3.0)(4.0,3.0)
\pscircle[linewidth=0.04,dimen=outer](0.9,1.5){0.1}
\pscircle[linewidth=0.04,dimen=outer](1.7,1.5){0.1}
\pscircle[linewidth=0.04,dimen=outer](2.5,1.5){0.1}
\pscircle[linewidth=0.04,dimen=outer](3.3,1.5){0.1}
\pscircle[linewidth=0.04,dimen=outer](3.3,2.3){0.1}
\psline[linewidth=0.04cm](1.0,1.5)(1.6,1.5)
\psline[linewidth=0.04cm](1.8,1.5)(2.4,1.5)
\psline[linewidth=0.04cm](2.6,1.5)(3.2,1.5)
\psline[linewidth=0.04cm](3.3,1.6)(3.3,2.2)
\pscircle[linewidth=0.04,dimen=outer](0.9,0.0){0.1}
\pscircle[linewidth=0.04,dimen=outer](1.7,0.0){0.1}
\pscircle[linewidth=0.04,dimen=outer](2.5,0.0){0.1}
\psline[linewidth=0.04cm](1.0,0.0)(1.6,0.0)
\psline[linewidth=0.04cm](1.8,0.0)(2.4,0.0)
\pscircle[linewidth=0.04,dimen=outer](2.5,0.8){0.1}
\psline[linewidth=0.04cm](2.5,0.1)(2.5,0.7)
\pscircle[linewidth=0.04,dimen=outer](1.7,0.8){0.1}
\psline[linewidth=0.04cm](1.8,0.8)(2.4,0.8)
\pscircle[linewidth=0.04,dimen=outer](0.9,-1.5){0.1}
\pscircle[linewidth=0.04,dimen=outer](1.7,-1.5){0.1}
\psline[linewidth=0.04cm](1.0,-1.5)(1.6,-1.5)
\pscircle[linewidth=0.04,dimen=outer](1.7,-0.7){0.1}
\psline[linewidth=0.04cm](1.7,-1.4)(1.7,-0.8)
\pscircle[linewidth=0.04,dimen=outer](0.9,-0.7){0.1}
\psline[linewidth=0.04cm](1.0,-0.7)(1.6,-0.7)
\psline[linewidth=0.04cm](0.9,-1.4)(0.9,-0.8)
\pscircle[linewidth=0.04,dimen=outer](0.9,-3.0){0.1}
\pscircle[linewidth=0.04,dimen=outer](1.7,-3.0){0.1}
\psline[linewidth=0.04cm](1.0,-3.0)(1.6,-3.0)
\pscircle[linewidth=0.04,dimen=outer](1.7,-2.2){0.1}
\psline[linewidth=0.04cm](1.7,-2.9)(1.7,-2.3)
\pscircle[linewidth=0.04,dimen=outer](0.9,-2.2){0.1}
\psline[linewidth=0.04cm](1.0,-2.2)(1.6,-2.2)
\pscircle[linewidth=0.04,dimen=outer](0.1,-2.2){0.1}
\psline[linewidth=0.04cm](0.2,-2.2)(0.8,-2.2)
\end{pspicture}
}
\end{figure}
Such an approach corresponds to programs that start from the initial conformation $(0,0, \hdots, 0)$, fold it several times according to their embedding functions, and then obtain a final conformation on which the SAW property is checked: only the last conformation has to satisfy Definition \ref{def:SAW}. More precisely,
\begin{definition}[$SAW_2$]
A conformation $c$ of $\mathds{Z}/4\mathds{Z}^{\mathsf{N}}$ satisfies the second self-avoiding walk requirement $SAW_2$
if $c \in SAW_1(\mathsf{N})$ and a finite sequence $(F^1,F^2, \hdots,F^n)$ of elements of $\llbracket -\mathsf{N}, \mathsf{N} \rrbracket$ can be found such that $$\left(c,(0,0,\hdots)\right) = G^n\left((0,0,\hdots, 0),\left(F^1,F^2, \hdots, F^n,0, \hdots\right)\right).$$
$SAW_2(\mathsf{N})$ will denote the set of all conformations satisfying this requirement.
\end{definition}
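A single pivot move, the building block of the map $G$, can be sketched in Python as follows. We encode a fold $F$ as in the definition: its absolute value is the pivot position and its sign the rotation direction. The indexing and sign conventions below (which sign is ``clockwise'', and which residues are rotated) are our own assumptions, made consistent with the proof of the next section:

```python
def pivot(conformation, fold):
    """One pivot move, encoding a fold F as in the SAW_2 definition:
    |fold| is the pivot position (1-based), its sign the rotation
    direction (which sign means clockwise is an assumed convention)."""
    if fold == 0:
        return list(conformation)
    j = abs(fold)                   # pivot position, 1-based
    delta = 1 if fold > 0 else -1   # assumed sign convention
    # residues 1..j-1 are kept; residues j..n are rotated by +/-1 (mod 4)
    return list(conformation[:j - 1]) + [(d + delta) % 4
                                         for d in conformation[j - 1:]]

def G_iter(conformation, folds):
    """Iterate pivot moves: a sketch of the map G^n of the definition."""
    c = list(conformation)
    for fold in folds:
        c = pivot(c, fold)
    return c
```

Note that a pivot move is invertible: applying the fold $-F$ after $F$ returns the original conformation, a fact used implicitly below.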
In the next approach, namely the $SAW_3$ requirement, it is demanded that each intermediate conformation, between the initial one and the returned (final) one, satisfies Definition \ref{def:SAW}.
It restricts the set of all conformations $\mathds{Z}/4\mathds{Z}^\mathsf{N}$, for a given $\mathsf{N}$, to the subset $\mathfrak{C}_\mathsf{N}$ of conformations $(C_1,\hdots, C_\mathsf{N})$ such that $\exists n \in \mathds{N}^*,$ $\exists k_1, \hdots, k_n \in \llbracket -\mathsf{N}; \mathsf{N} \rrbracket$, $$(C_1, \hdots, C_\mathsf{N}) = G^n\left( (0,0, \hdots, 0); (k_1, \hdots, k_n) \right)$$ \emph{and} $\forall i \leqslant n$, the conformation $G^i\left( (0, \hdots, 0); (k_1, \hdots, k_n) \right)$
is a path without crossing.
Let us define it formally:
\begin{definition}[$SAW_3$]
A conformation $c$ of $\mathds{Z}/4\mathds{Z}^{\mathsf{N}}$ satisfies the third self-avoiding walk requirement
if $c \in SAW_1(\mathsf{N})$ and a finite sequence $(F^1,F^2, \hdots,F^n)$ of elements of $\llbracket -\mathsf{N}, \mathsf{N} \rrbracket$ can be found such that:
\begin{itemize}
\item $\forall k \in \llbracket 1, n \rrbracket$, the conformation $c_k$ of $G^k\left((0,0,\hdots, 0),\left(F^1,F^2, \hdots, F^n,0, \hdots\right)\right)$ is in $SAW_1(\mathsf{N})$, that is, it is a path without crossing.
\item $\left(c,(0,0,\hdots)\right) = G^n\left((0,0,\hdots, 0),\left(F^1,F^2, \hdots, F^n,0, \hdots\right)\right).$
\end{itemize}
$SAW_3(\mathsf{N})$ will denote the set of all conformations satisfying this requirement.
\end{definition}
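The $SAW_3$ requirement can be checked on a given folding sequence by validating every intermediate conformation. The sketch below restates the small helpers so that the fragment is self-contained; all names, the direction encoding, and the pivot conventions are our own assumptions:

```python
# Sketch of a SAW_3 check: every intermediate conformation along the
# folding sequence must itself be a path without crossing.
# Assumed encoding: 0 = East, 1 = North, 2 = West, 3 = South.
STEPS = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def is_saw1(c):
    """Path-without-crossing test (the SAW_1 requirement)."""
    x, y, visited = 0, 0, {(0, 0)}
    for d in c:
        dx, dy = STEPS[d]
        x, y = x + dx, y + dy
        if (x, y) in visited:
            return False
        visited.add((x, y))
    return True

def pivot(c, fold):
    """One pivot move; |fold| is the position, its sign the direction."""
    if fold == 0:
        return list(c)
    j, delta = abs(fold), (1 if fold > 0 else -1)
    return list(c[:j - 1]) + [(d + delta) % 4 for d in c[j - 1:]]

def satisfies_saw3(n, folds):
    """True iff every conformation reached from (0,...,0) along the
    folding sequence, intermediates included, is in SAW_1."""
    c = [0] * n
    for fold in folds:
        c = pivot(c, fold)
        if not is_saw1(c):
            return False
    return True
```

For instance, on 4 residues, the single fold $(1)$ is acceptable, whereas the sequence $(2,2)$ produces an intermediate-free crossing at its last step and is rejected.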
The ``SAW requirement'' in the bioinformatics literature
refers either to the $SAW_2$ or to the $SAW_3$
folding process requirement~\cite{Braxenthaler97,DBLP:conf/cec/HiggsSHS10,DBLP:conf/cec/HorvathC10}. For instance,
in \cite{DBLP:conf/cec/IslamC10}, random sequences of $\llbracket 0,3\rrbracket$ are picked and the excluded volume
requirement (as recalled previously, no vertex can be occupied by more than one protein monomer) is then checked, meaning that this research work
falls within $SAW_2$.
By contrast, in~\cite{Unger93}, the
Monte Carlo search for folding simulations algorithm repeats
the step
``from conformation $S_i$ with energy $E_i$ make a pivot
move to get $S_j$ with $E_j$'' until $S_j$ is valid, so
Unger and Moult work within $SAW_3$.
Algorithms that progressively refine their solutions (following
a genetic algorithm or a particle swarm approach, for instance)
are often of this kind.
In these $SAW_3$ related approaches, the acceptable conformations are obtained starting from the initial conformation $(0,0, \hdots, 0)$ and are such that all the intermediate conformations satisfy Definition \ref{def:SAW}.
Finally, the $SAW_4$ approach is a $SAW_3$ requirement in which no vertex or edge intersection occurs during the transformation of one conformation into another. For instance, the transformation of Figure \ref{saw4} is allowed in the $SAW_3$ approach but rejected in the $SAW_4$ one: during the rotation around the residue marked with a cross, the structure after this residue intersects the remainder of the ``protein''.
In this last approach, a protein folding from one planar conformation to another planar one cannot use the whole space to achieve this folding.
\begin{figure}
\centering
\caption{Folding process acceptable in $SAW_3$ but not in $SAW_4$. It is in $SAW_3$ as $333300111110333333222211111100333$ (the right panel) is $3333001111103333332222111111f^{-1}(1)f^{-1}(1)f^{-1}(0)f^{-1}(0)f^{-1}(0)f^{-1}(0)$, which corresponds
to a clockwise pivot move of residue number 28 in $SAW_3$.
Figure~\ref{pasDansSaw4} explains why this folding process
is not acceptable in $SAW_4$.}
\label{saw4}
\scalebox{0.5}
{
\begin{pspicture}(0,-6.6)(16.8,6.6)
\psframe[linewidth=0.04,dimen=outer](2.0,3.4)(1.6,3.0)
\psframe[linewidth=0.04,dimen=outer](3.6,5.0)(3.2,4.6)
\psframe[linewidth=0.04,dimen=outer](2.0,1.8)(1.6,1.4)
\psframe[linewidth=0.04,dimen=outer](2.0,5.0)(1.6,4.6)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.6,4.8)(4.8,4.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.0,4.8)(3.2,4.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.6,6.4)(0.4,6.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.8,1.8)(1.8,3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.8,3.4)(1.8,4.6)
\psframe[linewidth=0.04,dimen=outer](5.2,5.0)(4.8,4.6)
\psframe[linewidth=0.04,dimen=outer](6.8,5.0)(6.4,4.6)
\psframe[linewidth=0.04,dimen=outer](5.2,3.4)(4.8,3.0)
\psframe[linewidth=0.04,dimen=outer](6.8,3.4)(6.4,3.0)
\psframe[linewidth=0.04,dimen=outer](5.2,1.8)(4.8,1.4)
\psframe[linewidth=0.04,dimen=outer](6.8,1.8)(6.4,1.4)
\psframe[linewidth=0.04,dimen=outer](0.4,-4.6)(0.0,-5.0)
\psframe[linewidth=0.04,dimen=outer](5.2,3.4)(4.8,3.0)
\psframe[linewidth=0.04,dimen=outer](0.4,-6.2)(0.0,-6.6)
\psframe[linewidth=0.04,dimen=outer](5.2,0.2)(4.8,-0.2)
\psframe[linewidth=0.04,dimen=outer](6.8,0.2)(6.4,-0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,4.6)(5.0,3.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,3.0)(5.0,1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,1.4)(5.0,0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.6,0.2)(6.6,1.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.6,1.8)(6.6,3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.6,3.4)(6.6,4.6)
\psframe[linewidth=0.04,dimen=outer](6.8,6.6)(6.4,6.2)
\psframe[linewidth=0.04,dimen=outer](3.6,6.6)(3.2,6.2)
\psframe[linewidth=0.04,dimen=outer](2.0,6.6)(1.6,6.2)
\psframe[linewidth=0.04,dimen=outer](5.2,6.6)(4.8,6.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.4,6.4)(5.2,6.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.8,6.4)(3.6,6.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.2,6.4)(2.0,6.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.6,5.0)(6.6,6.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.2,-5.0)(0.2,-6.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.2,-3.4)(0.2,-4.6)
\psframe[linewidth=0.04,dimen=outer](0.4,3.4)(0.0,3.0)
\psframe[linewidth=0.04,dimen=outer](0.4,1.8)(0.0,1.4)
\psframe[linewidth=0.04,dimen=outer](0.4,5.0)(0.0,4.6)
\psframe[linewidth=0.04,dimen=outer](0.4,6.6)(0.0,6.2)
\psframe[linewidth=0.04,dimen=outer](0.4,-3.0)(0.0,-3.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.2,6.2)(0.2,5.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.2,4.6)(0.2,3.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.2,3.0)(0.2,1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.2,1.4)(0.2,0.2)
\psline[linewidth=0.04cm](0.0,-3.0)(0.4,-3.4)
\psline[linewidth=0.04cm](0.0,-3.4)(0.4,-3.0)
\psframe[linewidth=0.04,dimen=outer](12.0,3.4)(11.6,3.0)
\psframe[linewidth=0.04,dimen=outer](13.6,5.0)(13.2,4.6)
\psframe[linewidth=0.04,dimen=outer](12.0,1.8)(11.6,1.4)
\psframe[linewidth=0.04,dimen=outer](12.0,5.0)(11.6,4.6)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(13.6,4.8)(14.8,4.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(12.0,4.8)(13.2,4.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(11.6,6.4)(10.4,6.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(11.8,1.8)(11.8,3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(11.8,3.4)(11.8,4.6)
\psframe[linewidth=0.04,dimen=outer](13.6,3.4)(13.2,3.0)
\psframe[linewidth=0.04,dimen=outer](15.2,5.0)(14.8,4.6)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(15.2,-3.2)(16.4,-3.2)
\psframe[linewidth=0.04,dimen=outer](16.8,5.0)(16.4,4.6)
\psframe[linewidth=0.04,dimen=outer](15.2,3.4)(14.8,3.0)
\psframe[linewidth=0.04,dimen=outer](16.8,3.4)(16.4,3.0)
\psframe[linewidth=0.04,dimen=outer](13.6,1.8)(13.2,1.4)
\psframe[linewidth=0.04,dimen=outer](15.2,1.8)(14.8,1.4)
\psframe[linewidth=0.04,dimen=outer](16.8,1.8)(16.4,1.4)
\psframe[linewidth=0.04,dimen=outer](12.0,-3.0)(11.6,-3.4)
\psframe[linewidth=0.04,dimen=outer](15.2,3.4)(14.8,3.0)
\psframe[linewidth=0.04,dimen=outer](13.6,-3.0)(13.2,-3.4)
\psframe[linewidth=0.04,dimen=outer](15.2,-3.0)(14.8,-3.4)
\psframe[linewidth=0.04,dimen=outer](16.8,-3.0)(16.4,-3.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(15.0,4.6)(15.0,3.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(15.0,3.0)(15.0,1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(15.0,1.4)(15.0,0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(13.4,1.8)(13.4,3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(16.6,0.2)(16.6,1.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(16.6,1.8)(16.6,3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(16.6,3.4)(16.6,4.6)
\psframe[linewidth=0.04,dimen=outer](16.8,6.6)(16.4,6.2)
\psframe[linewidth=0.04,dimen=outer](13.6,6.6)(13.2,6.2)
\psframe[linewidth=0.04,dimen=outer](12.0,6.6)(11.6,6.2)
\psframe[linewidth=0.04,dimen=outer](15.2,6.6)(14.8,6.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(16.4,6.4)(15.2,6.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(14.8,6.4)(13.6,6.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(13.2,6.4)(12.0,6.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(16.6,5.0)(16.6,6.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(12.0,-3.2)(13.2,-3.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.4,-3.2)(11.6,-3.2)
\psframe[linewidth=0.04,dimen=outer](10.4,3.4)(10.0,3.0)
\psframe[linewidth=0.04,dimen=outer](10.4,1.8)(10.0,1.4)
\psframe[linewidth=0.04,dimen=outer](10.4,5.0)(10.0,4.6)
\psframe[linewidth=0.04,dimen=outer](10.4,6.6)(10.0,6.2)
\psframe[linewidth=0.04,dimen=outer](10.4,-3.0)(10.0,-3.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.2,6.2)(10.2,5.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.2,4.6)(10.2,3.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.2,3.0)(10.2,1.8)
\psline[linewidth=0.04cm](10.0,-3.0)(10.4,-3.4)
\psline[linewidth=0.04cm](10.0,-3.4)(10.4,-3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.4,-6.4)(1.6,-6.4)
\psframe[linewidth=0.04,dimen=outer](2.0,-6.2)(1.6,-6.6)
\psframe[linewidth=0.04,dimen=outer](3.6,-6.2)(3.2,-6.6)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.0,-6.4)(3.2,-6.4)
\psframe[linewidth=0.04,dimen=outer](12.0,0.2)(11.6,-0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(11.8,0.2)(11.8,1.4)
\psframe[linewidth=0.04,dimen=outer](13.6,0.2)(13.2,-0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(13.4,0.2)(13.4,1.4)
\psframe[linewidth=0.04,dimen=outer](10.4,0.2)(10.0,-0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.2,1.4)(10.2,0.2)
\psframe[linewidth=0.04,dimen=outer](12.0,-1.4)(11.6,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(11.8,-1.4)(11.8,-0.2)
\psframe[linewidth=0.04,dimen=outer](13.6,-1.4)(13.2,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(13.4,-1.4)(13.4,-0.2)
\psframe[linewidth=0.04,dimen=outer](10.4,-1.4)(10.0,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.2,-0.2)(10.2,-1.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(13.4,-3.0)(13.4,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.2,-1.8)(10.2,-3.0)
\psframe[linewidth=0.04,dimen=outer](15.2,0.2)(14.8,-0.2)
\psframe[linewidth=0.04,dimen=outer](16.8,0.2)(16.4,-0.2)
\psframe[linewidth=0.04,dimen=outer](15.2,-1.4)(14.8,-1.8)
\psframe[linewidth=0.04,dimen=outer](16.8,-1.4)(16.4,-1.8)
\psframe[linewidth=0.04,dimen=outer](15.2,0.2)(14.8,-0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(15.0,-1.8)(15.0,-3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(15.0,-0.2)(15.0,-1.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(16.6,-3.0)(16.6,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(16.6,-1.4)(16.6,-0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.2,-3.2)(6.4,-3.2)
\psframe[linewidth=0.04,dimen=outer](5.2,-3.0)(4.8,-3.4)
\psframe[linewidth=0.04,dimen=outer](6.8,-3.0)(6.4,-3.4)
\psframe[linewidth=0.04,dimen=outer](5.2,-1.4)(4.8,-1.8)
\psframe[linewidth=0.04,dimen=outer](6.8,-1.4)(6.4,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,-1.8)(5.0,-3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.0,-0.2)(5.0,-1.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.6,-3.0)(6.6,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.6,-1.4)(6.6,-0.2)
\psframe[linewidth=0.04,dimen=outer](2.0,0.2)(1.6,-0.2)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.8,0.2)(1.8,1.4)
\psframe[linewidth=0.04,dimen=outer](2.0,-1.4)(1.6,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.8,-1.4)(1.8,-0.2)
\psframe[linewidth=0.04,dimen=outer](0.4,0.2)(0.0,-0.2)
\psframe[linewidth=0.04,dimen=outer](0.4,-1.4)(0.0,-1.8)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.2,-0.2)(0.2,-1.4)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.2,-1.8)(0.2,-3.0)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.6,-6.4)(4.8,-6.4)
\psframe[linewidth=0.04,dimen=outer](5.2,-6.2)(4.8,-6.6)
\psframe[linewidth=0.04,dimen=outer](6.8,-6.2)(6.4,-6.6)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.2,-6.4)(6.4,-6.4)
\end{pspicture}
}
\end{figure}
\begin{figure}
\caption{It is impossible to make the rotation around the crossed square, in such a way that the tail does not intersect the head structure during the rotation, so the folding process of Fig.~\ref{saw4} is not in $SAW_4$.}
\label{pasDansSaw4}
\centering
\includegraphics[scale=0.45]{pasSaw4.eps}
\end{figure}
This last requirement is the closest to a true natural protein folding. It is related to research that considers
moves more complicated than the simple pivot move~\cite{citeulike:118812}.
\section{Relations between the SAW requirements}
For $i \in \{1,2,3,4\}$, the set $\displaystyle{\bigcup_{n \in \mathds{N}^*}SAW_i(n)}$ will be simply written $SAW_i$.
The following inclusions obviously hold, due to the definitions of the SAW requirements presented in the previous section: $$SAW_4 \subseteq SAW_3 \subseteq SAW_2 \subseteq SAW_1.$$
Additionally, Figure \ref{saw4} shows that $SAW_4 \neq SAW_3$, thus we have,
\begin{proposition}
\label{subsets}
$SAW_4 \subsetneq SAW_3 \subseteq SAW_2 \subseteq SAW_1$.
\end{proposition}
Let us investigate more precisely the links between $SAW_1, SAW_2$, and $SAW_3$.
\subsection{$SAW_1$ is $SAW_2$}
Let us now prove that,
\begin{proposition}
$\forall n \in \mathds{N}, SAW_1(n)=SAW_2(n)$.
\end{proposition}
\begin{proof}
We need to prove that $SAW_1(n) \subseteq SAW_2(n)$, \emph{i.e.}, that
any conformation of $SAW_1(n)$ can be obtained from $(0,0,\hdots,0)$ by
operating a sequence of (anti)clockwise pivot moves.
Obviously, starting from the conformation $(0,0,\hdots,0)$ is equivalent to
starting with the conformation $(c,c,\hdots,c)$, where $c \in \{0,1,2,3\}$.
Thus the initial configuration is characterized by the absence of a change
in the values (the initial sequence is a constant one).
We will now prove the result by a mathematical induction on the
number $k$ of changes in the sequence.
\begin{itemize}
\item The base case is obvious, as the 4 conformations with no change are in $SAW_1(n)\cap SAW_2(n)$.
\item Let us suppose that the statement holds for all conformations having $k-1$ changes, for some $k \geqslant 1$. Let
$c=(c_1,c_2,\hdots,c_n)$ be a conformation having exactly $k$ changes, that is, the cardinality of
the set $D(c) = \left\{i \in \llbracket 1, n-1 \rrbracket \big/ c_{i+1} \neq c_i \right\}$ is $k$.
Let us denote by $p(c)$ the first change in this sequence: $p(c) = \min \left\{D(c)\right\}$.
We can apply the folding operation that suppresses the difference between $c_{p(c)}$ and
$c_{p(c)+1}$. For instance, if $c_{p(c)+1} = c_{p(c)}-1 ~(\textrm{mod}~4)$, then a clockwise pivot move
on position $p(c)+1$ will remove this difference. So the conformation
$c'=\left(c_1,c_2,\hdots,c_{p(c)},f\left(c_{p(c)+1}\right),\hdots,f(c_n)\right)$ has $k-1$
changes. By the induction hypothesis, $c'$ can be obtained from $(j,j,j,\hdots,j)$, where
$j \in \{0,1,2,3\}$, by a succession of clockwise and anticlockwise pivot moves. Since $c$ is
obtained from $c'$ by the inverse pivot move, this is the case for $c$ too.
\end{itemize}
\end{proof}
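The induction above is constructive: repeatedly pivoting at the first change point flattens any conformation to a constant one, and reversing the (invertible) moves folds the straight line into the target. A Python sketch of this flattening procedure (our own naming; note that intermediate conformations may cross, which is precisely why this argument proves $SAW_1 = SAW_2$ and says nothing about $SAW_3$):

```python
def unfold_moves(conformation):
    """Follow the proof's induction: pivot at the first change point
    until the sequence is constant.  Each recorded move is
    (0-based index of the first changed residue, rotation applied to
    the tail); a rotation of 2 or 3 stands for that many unit pivots.
    The number of outer iterations equals the number of changes."""
    c = list(conformation)
    moves = []
    while True:
        changes = [i for i in range(len(c) - 1) if c[i + 1] != c[i]]
        if not changes:
            return moves, c          # c is now a constant sequence
        p = changes[0]
        delta = (c[p] - c[p + 1]) % 4        # 1, 2 or 3 unit pivots
        c = c[:p + 1] + [(d + delta) % 4 for d in c[p + 1:]]
        moves.append((p, delta))
```

For example, $(0,1,1,2)$ has two changes and is flattened to $(0,0,0,0)$ in two outer iterations.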
Indeed, the notion of ``pivot move'' is well known in
the literature on protein folding. It has long been assumed
that pivot moves provide an ergodic move set, meaning that by
a sequence of pivot moves one can transform any conformation
into any other conformation, when only requiring that the
final conformation satisfies the excluded volume requirement.
The contribution of this section is simply a rigorous proof
of this assumption.
\subsection{$SAW_2$ is not $SAW_3$}
\label{Saw2pasSaw3}
To determine whether $SAW_2$ is equal to $SAW_3$, we first followed a computational
approach, using the Python language. A first function (the function \emph{conformations}
of Listing~\ref{AllConformations} in the appendix)
has been written to return the list of all possible conformations, even those that are not
paths without crossing. In other words, this function produces
all sequences of compass directions of length $n$ (thus $conformations(n) = \mathds{Z}/4\mathds{Z}^n$).
Then a generator \emph{saw1\_conformations(n)} has been constructed, making
it possible to obtain all the $SAW_1$ conformations (see Listing~\ref{SAW1Conformations}).
It is based on the fact that such a conformation
of length $n$ must have a support of cardinality equal to $n$.
Finally, a program (Algorithm~\ref{SAW3Conformations}) has been written to check experimentally
whether an element of $SAW_1=SAW_2$ is in $SAW_3$. This is a systematic approach: for
each residue of the candidate conformation, we try to make a clockwise pivot move and an
anticlockwise one. If the obtained conformation is a path without crossing, then
the candidate is rejected. On the contrary, if it is never possible to unfold the
protein, whatever the considered residue, then the candidate is in $SAW_2$ without being
in $SAW_3$.
Figure~\ref{undefoldable} gives four examples of conformations that are
in $SAW_2$ without being in $SAW_3$ (the only ones the authors have found
via the programs given in the appendix). These counterexamples prove that,
\begin{proposition}
$\exists n \in \mathds{N}^*, SAW_2(n) \neq SAW_3(n).$
\end{proposition}
\begin{figure}[h!]
\centering
\subfigure[175 nodes]{\includegraphics[width=0.24\textwidth]
{cs175.eps}\label{cs175}}\hspace{2cm}
\subfigure[159 nodes]{\includegraphics[width=0.24\textwidth]
{cs159.eps}\label{cs159}}\\
\subfigure[169 nodes]{\includegraphics[width=0.24\textwidth]
{cs169.eps}\label{cs169}}\hspace{2cm}
\subfigure[914 nodes]{
\includegraphics[width=0.5\textwidth]{grosCS.eps}\label{grosCS}}
\caption{Examples of conformations in $SAW_2$ without being in $SAW_3$}
\label{undefoldable}
\end{figure}
\subsubsection{Consequences of the strict inclusion}
Proposition~\ref{subsets} can be rewritten as follows,
\begin{proposition}
$SAW_4 \subsetneq SAW_3 \subsetneq SAW_2 = SAW_1$.
\end{proposition}
As stated previously, NP-completeness holds for $SAW_1$. However, $SAW_1$
is a strictly larger set than $SAW_3$, the set frequently used for protein
structure prediction. As $SAW_3$ is strictly smaller than $SAW_1$, it is not
certain that the considered problem remains NP-complete. Incidentally,
it is not even clear that only prediction is possible. Indeed, proteins have ``only''
tens to thousands of amino acids. If $SAW_3$ is very small
compared to $SAW_1$, then perhaps exact methods such as SAT solvers could be more widely
considered.
Moreover, $SAW_3$ is strictly larger than $SAW_4$, a
2D model slightly closer to true protein folding. This strict inclusion reinforces the
need to reconsider the NP-completeness statement, in order to
determine whether this prediction problem is indeed NP-complete or not.
Furthermore, prediction tools could reduce the set of all possibilities by
working within $SAW_4$ instead of $SAW_3$, thus
improving the confidence placed in the returned conformations.
All of these questions are strongly linked to the size ratios between the
$SAW_i$ sets: the likelihood that the NP-completeness proof remains valid in $SAW_3$ or
$SAW_4$ decreases as these ratios increase.
This is why, in the next section, we investigate more deeply the relation
between $SAW_2$ and $SAW_3$.
\section{A Graph Approach of the $SAW_i$ Ratios Problem}
\label{sec:graph}
Let us denote by $\mathfrak{G}_0(n)$ the directed graph having $4^n$ vertices, such that:
\begin{itemize}
\item these vertices are elements of $\llbracket 0,3 \rrbracket^n$,
\item there is a directed edge from the vertex $(e_1, \hdots, e_n)$ to the vertex $(f_1,\hdots,f_n)$ if and only if $\exists k \in \llbracket 1, n \rrbracket$ and $\exists i \in \{-1,1\}$ such that $(f_1,\hdots,f_n)$ is equal to:
\begin{itemize}
\item either $\left(e_1, \hdots, e_{k-1}, e_k+1~(\textrm{mod}~4), \dots, e_n+1~(\textrm{mod}~4)\right)$
\item or $\left(e_1, \hdots, e_{k-1}, e_k-1~(\textrm{mod}~4), \dots, e_n-1~(\textrm{mod}~4)\right)$.
\end{itemize}
\end{itemize}
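Assuming this reading of the edge relation (the suffix $e_k, \hdots, e_n$ is shifted by $\pm 1$ modulo 4, consistently with the pivot move used in the proof of the previous section), the out-neighbours of a vertex of $\mathfrak{G}_0(n)$ can be enumerated as follows; the function name is ours:

```python
def successors(e):
    """Out-neighbours of vertex e in the directed graph G_0(n):
    a pivot at position j (1-based) shifts residues j..n by +1 or -1
    modulo 4, so each vertex has at most 2n out-neighbours."""
    n = len(e)
    out = []
    for j in range(1, n + 1):       # pivot position
        for delta in (1, -1):
            out.append(tuple(e[:j - 1])
                       + tuple((x + delta) % 4 for x in e[j - 1:]))
    return out
```

For $n = 2$, the vertex $(0,0)$ has out-neighbours $(0,1)$, $(0,3)$ (pivot at the last residue) and $(1,1)$, $(3,3)$ (pivot at the first residue), in accordance with the edges drawn around vertex ``00'' in the figure.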
\begin{figure}
\centering
\scalebox{1}
{
\begin{pspicture}(0,-3.3819609)(6.82,3.3819609)
\usefont{T1}{ptm}{m}{n}
\rput(0.9540625,2.343039){21}
\usefont{T1}{ptm}{m}{n}
\rput(2.5685937,2.343039){22}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.2,2.438039)(1.8,2.638039)(1.8,2.638039)(2.4,2.438039)
\usefont{T1}{ptm}{m}{n}
\rput(4.1610937,2.343039){23}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.8,2.438039)(3.4,2.638039)(3.4,2.638039)(4.0,2.438039)
\usefont{T1}{ptm}{m}{n}
\rput(5.7696877,2.343039){20}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.4,2.438039)(5.0,2.638039)(5.0,2.638039)(5.6,2.438039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.0,2.438039)(0.6,2.638039)(0.2,2.638039)(0.8,2.438039)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4,2.238039)(1.8,2.038039)(1.8,2.038039)(1.2,2.238039)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0,2.238039)(3.4,2.038039)(3.4,2.038039)(2.8,2.238039)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.6,2.238039)(5.0,2.038039)(5.0,2.038039)(4.4,2.238039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8,2.238039)(0.4,2.238039)(0.4,2.038039)(0.0,2.238039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.0,2.438039)(6.6,2.638039)(6.2,2.638039)(6.8,2.438039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.8,2.238039)(6.4,2.238039)(6.4,2.038039)(6.0,2.238039)
\usefont{T1}{ptm}{m}{n}
\rput(0.9525,0.7430391){10}
\usefont{T1}{ptm}{m}{n}
\rput(2.536875,0.7430391){11}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.2,0.83803904)(1.8,1.0380391)(1.8,1.0380391)(2.4,0.83803904)
\usefont{T1}{ptm}{m}{n}
\rput(4.1514063,0.7430391){12}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.8,0.83803904)(3.4,1.0380391)(3.4,1.0380391)(4.0,0.83803904)
\usefont{T1}{ptm}{m}{n}
\rput(5.743906,0.7430391){13}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.4,0.83803904)(5.0,1.0380391)(5.0,1.0380391)(5.6,0.83803904)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.0,0.83803904)(0.6,1.0380391)(0.2,1.0380391)(0.8,0.83803904)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4,0.63803905)(1.8,0.43803906)(1.8,0.43803906)(1.2,0.63803905)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0,0.63803905)(3.4,0.43803906)(3.4,0.43803906)(2.8,0.63803905)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.6,0.63803905)(5.0,0.43803906)(5.0,0.43803906)(4.4,0.63803905)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8,0.63803905)(0.4,0.63803905)(0.4,0.43803906)(0.0,0.63803905)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.0,0.83803904)(6.6,1.0380391)(6.2,1.0380391)(6.8,0.83803904)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.8,0.63803905)(6.4,0.63803905)(6.4,0.43803906)(6.0,0.63803905)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8128585,0.9631987)(0.64030945,1.5716614)(0.64030945,1.5716614)(0.8673477,2.1619608)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.0671414,2.1528795)(1.2396905,1.5444168)(1.2396905,1.5444168)(1.0126523,0.9541172)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4128585,0.9631987)(2.2403095,1.5716614)(2.2403095,1.5716614)(2.4673479,2.1619608)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.6671414,2.1528795)(2.8396904,1.5444168)(2.8396904,1.5444168)(2.6126523,0.9541172)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0128584,0.9631987)(3.8403094,1.5716614)(3.8403094,1.5716614)(4.0673475,2.1619608)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.2671413,2.1528795)(4.4396906,1.5444168)(4.4396906,1.5444168)(4.212652,0.9541172)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.612859,0.9631987)(5.4403095,1.5716614)(5.4403095,1.5716614)(5.667348,2.1619608)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.8671412,2.1528795)(6.0396905,1.5444168)(6.0396905,1.5444168)(5.812652,0.9541172)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8128585,-0.6368013)(0.64030945,-0.028338647)(0.64030945,-0.028338647)(0.8673477,0.56196094)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.0671414,0.5528794)(1.2396905,-0.055583242)(1.2396905,-0.055583242)(1.0126523,-0.64588284)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4128585,-0.6368013)(2.2403095,-0.028338647)(2.2403095,-0.028338647)(2.4673479,0.56196094)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.6671414,0.5528794)(2.8396904,-0.055583242)(2.8396904,-0.055583242)(2.6126523,-0.64588284)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0128584,-0.6368013)(3.8403094,-0.028338647)(3.8403094,-0.028338647)(4.0673475,0.56196094)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.2671413,0.5528794)(4.4396906,-0.055583242)(4.4396906,-0.055583242)(4.212652,-0.64588284)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.612859,-0.6368013)(5.4403095,-0.028338647)(5.4403095,-0.028338647)(5.667348,0.56196094)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.8671412,0.5528794)(6.0396905,-0.055583242)(6.0396905,-0.055583242)(5.812652,-0.64588284)
\usefont{T1}{ptm}{m}{n}
\rput(0.9584375,-0.85696095){03}
\usefont{T1}{ptm}{m}{n}
\rput(2.5670311,-0.85696095){00}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.2,-0.7619609)(1.8,-0.56196094)(1.8,-0.56196094)(2.4,-0.7619609)
\usefont{T1}{ptm}{m}{n}
\rput(4.1514063,-0.85696095){01}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.8,-0.7619609)(3.4,-0.56196094)(3.4,-0.56196094)(4.0,-0.7619609)
\usefont{T1}{ptm}{m}{n}
\rput(5.7659373,-0.85696095){02}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.4,-0.7619609)(5.0,-0.56196094)(5.0,-0.56196094)(5.6,-0.7619609)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.0,-0.7619609)(0.6,-0.56196094)(0.2,-0.56196094)(0.8,-0.7619609)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4,-0.961961)(1.8,-1.161961)(1.8,-1.161961)(1.2,-0.961961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0,-0.961961)(3.4,-1.161961)(3.4,-1.161961)(2.8,-0.961961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.6,-0.961961)(5.0,-1.161961)(5.0,-1.161961)(4.4,-0.961961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8,-0.961961)(0.4,-0.961961)(0.4,-1.161961)(0.0,-0.961961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.0,-0.7619609)(6.6,-0.56196094)(6.2,-0.56196094)(6.8,-0.7619609)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.8,-0.961961)(6.4,-0.961961)(6.4,-1.161961)(6.0,-0.961961)
\usefont{T1}{ptm}{m}{n}
\rput(0.96515626,-2.456961){32}
\usefont{T1}{ptm}{m}{n}
\rput(2.5576563,-2.456961){33}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.2,-2.361961)(1.8,-2.1619608)(1.8,-2.1619608)(2.4,-2.361961)
\usefont{T1}{ptm}{m}{n}
\rput(4.16625,-2.456961){30}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.8,-2.361961)(3.4,-2.1619608)(3.4,-2.1619608)(4.0,-2.361961)
\usefont{T1}{ptm}{m}{n}
\rput(5.750625,-2.456961){31}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.4,-2.361961)(5.0,-2.1619608)(5.0,-2.1619608)(5.6,-2.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.0,-2.361961)(0.6,-2.1619608)(0.2,-2.1619608)(0.8,-2.361961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4,-2.561961)(1.8,-2.761961)(1.8,-2.761961)(1.2,-2.561961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0,-2.561961)(3.4,-2.761961)(3.4,-2.761961)(2.8,-2.561961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.6,-2.561961)(5.0,-2.761961)(5.0,-2.761961)(4.4,-2.561961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8,-2.561961)(0.4,-2.561961)(0.4,-2.761961)(0.0,-2.561961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.0,-2.361961)(6.6,-2.1619608)(6.2,-2.1619608)(6.8,-2.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(6.8,-2.561961)(6.4,-2.561961)(6.4,-2.761961)(6.0,-2.561961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8128585,-2.2368014)(0.64030945,-1.6283387)(0.64030945,-1.6283387)(0.8673477,-1.0380391)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.0671414,-1.0471206)(1.2396905,-1.6555833)(1.2396905,-1.6555833)(1.0126523,-2.2458827)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4128585,-2.2368014)(2.2403095,-1.6283387)(2.2403095,-1.6283387)(2.4673479,-1.0380391)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.6671414,-1.0471206)(2.8396904,-1.6555833)(2.8396904,-1.6555833)(2.6126523,-2.2458827)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0128584,-2.2368014)(3.8403094,-1.6283387)(3.8403094,-1.6283387)(4.0673475,-1.0380391)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.2671413,-1.0471206)(4.4396906,-1.6555833)(4.4396906,-1.6555833)(4.212652,-2.2458827)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.612859,-2.2368014)(5.4403095,-1.6283387)(5.4403095,-1.6283387)(5.667348,-1.0380391)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.8671412,-1.0471206)(6.0396905,-1.6555833)(6.0396905,-1.6555833)(5.812652,-2.2458827)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8,-3.361961)(0.64030945,-2.8283386)(0.64030945,-3.2283387)(0.8673477,-2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.0671414,-2.6471205)(1.2396905,-3.0099561)(1.2396905,-3.0099561)(1.0126523,-3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4,-3.361961)(2.2403095,-2.8283386)(2.2403095,-3.2283387)(2.4673479,-2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.6671414,-2.6471205)(2.8396904,-3.0099561)(2.8396904,-3.0099561)(2.6126523,-3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0,-3.361961)(3.8403094,-2.8283386)(3.8403094,-3.2283387)(4.0673475,-2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.2671413,-2.6471205)(4.4396906,-3.0099561)(4.4396906,-3.0099561)(4.212652,-3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.6,-3.361961)(5.4403095,-2.8283386)(5.4403095,-3.2283387)(5.667348,-2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.8671412,-2.6471205)(6.0396905,-3.0099561)(6.0396905,-3.0099561)(5.812652,-3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.8,2.638039)(0.64030945,3.1716614)(0.64030945,2.7716613)(0.8673477,3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.0671414,3.3528795)(1.2396905,2.9900439)(1.2396905,2.9900439)(1.0126523,2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.4,2.638039)(2.2403095,3.1716614)(2.2403095,2.7716613)(2.4673479,3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.6671414,3.3528795)(2.8396904,2.9900439)(2.8396904,2.9900439)(2.6126523,2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.0,2.638039)(3.8403094,3.1716614)(3.8403094,2.7716613)(4.0673475,3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.2671413,3.3528795)(4.4396906,2.9900439)(4.4396906,2.9900439)(4.212652,2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.6,2.638039)(5.4403095,3.1716614)(5.4403095,2.7716613)(5.667348,3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.8671412,3.3528795)(6.0396905,2.9900439)(6.0396905,2.9900439)(5.812652,2.638039)
\end{pspicture}
}
\caption{The digraph $\mathfrak{G}_0(2)$}
\label{Go2}
\end{figure}
Obviously, in $\mathfrak{G}_0(n)$, if there is a directed edge from the vertex $i$ to the vertex $j$, then there is also an edge from $j$ back to $i$.
Such a graph is depicted in Fig.~\ref{Go2}, in which some edges are dotted to indicate
that this graph lies on a torus: we can go, for instance, from the vertex 22 to the vertex 33.
The construction rules of this graph are detailed in Figure~\ref{Go2Rules}.
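The rules of Figure~\ref{Go2Rules}, and their three-component analogue in Fig.~\ref{Go3Rules}, can be stated uniformly: a pivot at position $p$ adds $\pm 1$ modulo 4 to every component from $p$ onward. The following minimal sketch illustrates this; the encoding of conformations as tuples over $\{0,1,2,3\}$ and the helper name \texttt{neighbours} are our own conventions, not part of the original construction.

```python
def neighbours(conf):
    """Out-neighbours of a conformation in G_0(n).

    A pivot at position p rotates every step from p onward by +/-1
    (mod 4); for n = 2 this reproduces the four rules of the figure
    'Rules of G_0(2)', and for n = 3 the six rules of 'Rules of G_0(3)'.
    """
    out = []
    for p in range(len(conf)):
        for d in (1, -1):
            out.append(tuple((c + d) % 4 if q >= p else c
                             for q, c in enumerate(conf)))
    return out
```

For $n=2$, the neighbours of $(i,j)$ are exactly the four vertices shown in Figure~\ref{Go2Rules}; moreover $(i,j)$ is itself a neighbour of each of them, which is the edge symmetry observed above.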
\begin{figure}
\centering
\scalebox{0.6}
{
\begin{pspicture}(0,-2.313125)(10.570625,2.313125)
\usefont{T1}{ptm}{m}{n}
\rput(5.180625,0.03125){\LARGE (i,j)}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.1153126,0.35625)(5.1153126,1.55625)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.7153125,-0.04375)(6.9153123,-0.04375)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(5.1153126,-0.44375)(5.1153126,-1.64375)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(4.7153125,-0.04375)(3.5153124,-0.04375)
\usefont{T1}{ptm}{m}{n}
\rput(5.180625,2.03125){\LARGE ((i+1) mod 4, (j+1) mod 4)}
\usefont{T1}{ptm}{m}{n}
\rput(8.740625,0.03125){\LARGE (i,(j+1) mod 4)}
\usefont{T1}{ptm}{m}{n}
\rput(1.670625,0.03125){\LARGE (i,(j-1) mod 4)}
\usefont{T1}{ptm}{m}{n}
\rput(5.040625,-1.96875){\LARGE ((i-1) mod 4, (j-1) mod 4)}
\end{pspicture}
}
\caption{Rules of $\mathfrak{G}_0(2)$}
\label{Go2Rules}
\end{figure}
\begin{figure}
\centering
\scalebox{0.5}
{
\begin{pspicture}(0,-2.613125)(21.850624,2.613125)
\usefont{T1}{ptm}{m}{n}
\rput(11.600625,0.13125){\LARGE (i,j,k)}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(11.515312,0.45625)(11.515312,1.85625)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(12.515312,0.05625)(13.715313,0.05625)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(11.515312,-0.34375)(11.515312,-1.94375)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.915313,0.05625)(9.715313,0.05625)
\usefont{T1}{ptm}{m}{n}
\rput(11.240625,2.33125){\LARGE ((i+1) mod 4, (j+1) mod 4, (k+1) mod 4)}
\usefont{T1}{ptm}{m}{n}
\rput(15.760625,0.13125){\LARGE (i,j,(k+1) mod 4)}
\usefont{T1}{ptm}{m}{n}
\rput(7.690625,0.13125){\LARGE (i,j,(k-1) mod 4)}
\usefont{T1}{ptm}{m}{n}
\rput(11.430625,-2.26875){\LARGE ((i-1) mod 4, (j-1) mod 4, (k-1) mod 4)}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(12.515312,0.25625)(16.915312,1.25625)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(10.915313,-0.14375)(6.3153124,-1.14375)
\usefont{T1}{ptm}{m}{n}
\rput(18.280624,1.53125){\LARGE (i, (j+1) mod 4, (k+1) mod 4)}
\usefont{T1}{ptm}{m}{n}
\rput(3.340625,-1.46875){\LARGE (i, (j-1) mod 4, (k-1) mod 4)}
\end{pspicture}
}
\caption{Rules of $\mathfrak{G}_0(3)$}
\label{Go3Rules}
\end{figure}
\begin{figure}
\centering
\scalebox{1}
{
\begin{pspicture}(0,-3.3819609)(3.819381,3.3819609)
\usefont{T1}{ptm}{m}{n}
\rput(0.31375307,2.343039){21}
\usefont{T1}{ptm}{m}{n}
\rput(1.9282843,2.343039){22}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.55969054,2.438039)(1.1596906,2.638039)(1.1596906,2.638039)(1.7596905,2.438039)
\usefont{T1}{ptm}{m}{n}
\rput(3.5207844,2.343039){23}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.1596906,2.438039)(2.7596905,2.638039)(2.7596905,2.638039)(3.3596907,2.438039)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7596905,2.238039)(1.1596906,2.038039)(1.1596906,2.038039)(0.55969054,2.238039)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.3596907,2.238039)(2.7596905,2.038039)(2.7596905,2.038039)(2.1596906,2.238039)
\usefont{T1}{ptm}{m}{n}
\rput(0.31219056,0.7430391){10}
\usefont{T1}{ptm}{m}{n}
\rput(1.8965656,0.7430391){11}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.55969054,0.83803904)(1.1596906,1.0380391)(1.1596906,1.0380391)(1.7596905,0.83803904)
\usefont{T1}{ptm}{m}{n}
\rput(3.5110967,0.7430391){12}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.1596906,0.83803904)(2.7596905,1.0380391)(2.7596905,1.0380391)(3.3596907,0.83803904)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7596905,0.63803905)(1.1596906,0.43803906)(1.1596906,0.43803906)(0.55969054,0.63803905)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.3596907,0.63803905)(2.7596905,0.43803906)(2.7596905,0.43803906)(2.1596906,0.63803905)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.17254911,0.9631987)(0.0,1.5716614)(0.0,1.5716614)(0.22703831,2.1619608)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.42683202,2.1528795)(0.59938115,1.5444168)(0.59938115,1.5444168)(0.37234282,0.9541172)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7725492,0.9631987)(1.6,1.5716614)(1.6,1.5716614)(1.8270383,2.1619608)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.026832,2.1528795)(2.199381,1.5444168)(2.199381,1.5444168)(1.9723428,0.9541172)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.372549,0.9631987)(3.2,1.5716614)(3.2,1.5716614)(3.4270382,2.1619608)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.626832,2.1528795)(3.799381,1.5444168)(3.799381,1.5444168)(3.5723429,0.9541172)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.17254911,-0.6368013)(0.0,-0.028338647)(0.0,-0.028338647)(0.22703831,0.56196094)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.42683202,0.5528794)(0.59938115,-0.055583242)(0.59938115,-0.055583242)(0.37234282,-0.64588284)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7725492,-0.6368013)(1.6,-0.028338647)(1.6,-0.028338647)(1.8270383,0.56196094)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.026832,0.5528794)(2.199381,-0.055583242)(2.199381,-0.055583242)(1.9723428,-0.64588284)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.372549,-0.6368013)(3.2,-0.028338647)(3.2,-0.028338647)(3.4270382,0.56196094)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.626832,0.5528794)(3.799381,-0.055583242)(3.799381,-0.055583242)(3.5723429,-0.64588284)
\usefont{T1}{ptm}{m}{n}
\rput(0.31812805,-0.85696095){03}
\usefont{T1}{ptm}{m}{n}
\rput(1.9267218,-0.85696095){00}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.55969054,-0.7619609)(1.1596906,-0.56196094)(1.1596906,-0.56196094)(1.7596905,-0.7619609)
\usefont{T1}{ptm}{m}{n}
\rput(3.5110967,-0.85696095){01}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.1596906,-0.7619609)(2.7596905,-0.56196094)(2.7596905,-0.56196094)(3.3596907,-0.7619609)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7596905,-0.961961)(1.1596906,-1.161961)(1.1596906,-1.161961)(0.55969054,-0.961961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.3596907,-0.961961)(2.7596905,-1.161961)(2.7596905,-1.161961)(2.1596906,-0.961961)
\usefont{T1}{ptm}{m}{n}
\rput(0.3248468,-2.456961){32}
\usefont{T1}{ptm}{m}{n}
\rput(1.9173468,-2.456961){33}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.55969054,-2.361961)(1.1596906,-2.1619608)(1.1596906,-2.1619608)(1.7596905,-2.361961)
\usefont{T1}{ptm}{m}{n}
\rput(3.5259407,-2.456961){30}
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.1596906,-2.361961)(2.7596905,-2.1619608)(2.7596905,-2.1619608)(3.3596907,-2.361961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7596905,-2.561961)(1.1596906,-2.761961)(1.1596906,-2.761961)(0.55969054,-2.561961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.3596907,-2.561961)(2.7596905,-2.761961)(2.7596905,-2.761961)(2.1596906,-2.561961)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.17254911,-2.2368014)(0.0,-1.6283387)(0.0,-1.6283387)(0.22703831,-1.0380391)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.42683202,-1.0471206)(0.59938115,-1.6555833)(0.59938115,-1.6555833)(0.37234282,-2.2458827)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7725492,-2.2368014)(1.6,-1.6283387)(1.6,-1.6283387)(1.8270383,-1.0380391)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.026832,-1.0471206)(2.199381,-1.6555833)(2.199381,-1.6555833)(1.9723428,-2.2458827)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.372549,-2.2368014)(3.2,-1.6283387)(3.2,-1.6283387)(3.4270382,-1.0380391)
\psbezier[linewidth=0.04,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.626832,-1.0471206)(3.799381,-1.6555833)(3.799381,-1.6555833)(3.5723429,-2.2458827)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.15969056,-3.361961)(0.0,-2.8283386)(0.0,-3.2283387)(0.22703831,-2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.42683202,-2.6471205)(0.59938115,-3.0099561)(0.59938115,-3.0099561)(0.37234282,-3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7596905,-3.361961)(1.6,-2.8283386)(1.6,-3.2283387)(1.8270383,-2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.026832,-2.6471205)(2.199381,-3.0099561)(2.199381,-3.0099561)(1.9723428,-3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.3596907,-3.361961)(3.2,-2.8283386)(3.2,-3.2283387)(3.4270382,-2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.626832,-2.6471205)(3.799381,-3.0099561)(3.799381,-3.0099561)(3.5723429,-3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.15969056,2.638039)(0.0,3.1716614)(0.0,2.7716613)(0.22703831,3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(0.42683202,3.3528795)(0.59938115,2.9900439)(0.59938115,2.9900439)(0.37234282,2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(1.7596905,2.638039)(1.6,3.1716614)(1.6,2.7716613)(1.8270383,3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(2.026832,3.3528795)(2.199381,2.9900439)(2.199381,2.9900439)(1.9723428,2.638039)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.3596907,2.638039)(3.2,3.1716614)(3.2,2.7716613)(3.4270382,3.361961)
\psbezier[linewidth=0.04,linestyle=dotted,dotsep=0.16cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{->}(3.626832,3.3528795)(3.799381,2.9900439)(3.799381,2.9900439)(3.5723429,2.638039)
\end{pspicture}
}
\caption{The digraph $\mathfrak{G}(2)$}
\label{G2}
\end{figure}
Let us now define another digraph as follows. $\mathfrak{G}(n)$ is the subgraph
of $\mathfrak{G}_0(n)$ obtained by removing the vertices that do not correspond to
a ``path without crossing'' in the sense of Madras and Slade~\cite{Madras}.
In other words, we remove from $\mathfrak{G}_0(n)$ the vertices that
do not satisfy the $SAW_1(n)$ requirement. For instance, the digraph
$\mathfrak{G}(2)$ associated with $\mathfrak{G}_0(2)$ is depicted in Fig.~\ref{G2},
whereas Figure~\ref{G3} contains both $\mathfrak{G}(3)$ and the removed vertices.
The construction rules of $\mathfrak{G}_0(3)$ are recalled in Fig.~\ref{Go3Rules}.
The links between $\mathfrak{G}(n)$ and the SAW requirements can be summarized
as follows:
\begin{itemize}
\item The vertices of the graph $\mathfrak{G}_0(n)$ represent all the possible walks of
length $n$ in the 2D square lattice.
\item The vertices that are preserved in $\mathfrak{G}(n)$ are the conformations of $SAW_1(n) = SAW_2(n)$.
\item Two adjacent vertices $i$ and $j$ in $\mathfrak{G}(n)$ are such that it is
possible to change the conformation $i$ into $j$ by a single pivot move.
\item Finally, a conformation of $SAW_3(n)$ is a vertex of $\mathfrak{G}(n)$ that
is reachable from the vertex $000\hdots 0$ by following a path in $\mathfrak{G}(n)$.
\end{itemize}
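This characterization of $SAW_3(n)$ lends itself directly to a breadth-first search from the vertex $000\hdots 0$. The sketch below is ours, not the authors' implementation, and rests on an assumed encoding: conformations are tuples of absolute directions in $\{0,1,2,3\}$ mapped to unit steps of the square lattice, and the helper names \texttt{is\_saw} and \texttt{saw3} are hypothetical.

```python
from collections import deque

# Assumed encoding: direction d -> unit step on the 2D square lattice.
MOVES = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def is_saw(conf):
    """Membership in SAW_2(n): the n+1 visited lattice points must be distinct."""
    x, y = 0, 0
    visited = {(0, 0)}
    for c in conf:
        dx, dy = MOVES[c]
        x, y = x + dx, y + dy
        if (x, y) in visited:
            return False
        visited.add((x, y))
    return True

def saw3(n):
    """SAW_3(n): the connected component of 00...0 in G(n), explored by
    BFS over single pivot moves whose results stay self-avoiding."""
    start = (0,) * n
    component = {start}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for p in range(n):           # pivot position
            for d in (1, -1):        # rotation by +/- 90 degrees
                v = tuple((c + d) % 4 if q >= p else c
                          for q, c in enumerate(u))
                if v not in component and is_saw(v):
                    component.add(v)
                    queue.append(v)
    return component
```

For example, the walk $000 \to 111 \to 222 \to 223$ in $\mathfrak{G}(3)$, each step of which is a single pivot move through self-avoiding conformations, witnesses that $(2,2,3)$ belongs to $SAW_3(3)$.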
For instance, the conformation $(2,2,3)$ is in $SAW_3(3)$ because we can find a walk
from $000$ to $223$ in $\mathfrak{G}(3)$. The following result is thus obvious.
\begin{theorem}
$SAW_3(n)$ corresponds to the connected component of $000\hdots 0$ in $\mathfrak{G}(n)$,
whereas $SAW_2(n)$ is the set of vertices of $\mathfrak{G}(n)$. Thus we have:
\begin{center}
$SAW_2(n) = SAW_3(n) \Longleftrightarrow \mathfrak{G}(n)$ is (strongly) connected.
\end{center}
\end{theorem}
The previous section shows that the connected components of $000\hdots 0$ in $\mathfrak{G}(158)$, $\mathfrak{G}(168)$, $\mathfrak{G}(175)$, and $\mathfrak{G}(914)$ are not
equal to $\mathfrak{G}(158)$, $\mathfrak{G}(168)$, $\mathfrak{G}(175)$, and $\mathfrak{G}(914)$ respectively.
In other words, these graphs are not connected.
\begin{figure}
\centering
\scalebox{0.8}
{
\begin{pspicture}(0,-7.1428127)(15.1325,7.1228123)
\usefont{T1}{ptm}{m}{n}
\rput(5.7246876,6.8478127){\Large 203}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.1896877,6.9028125)(8.389688,6.9028125)
\usefont{T1}{ptm}{m}{n}
\rput(8.735937,6.8478127){\Large 200}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.189688,6.9028125)(11.389688,6.9028125)
\usefont{T1}{ptm}{m}{n}
\rput(11.715313,6.8478127){\Large 201}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(12.189688,6.9028125)(14.389688,6.9028125)
\usefont{T1}{ptm}{m}{n}
\rput(14.734531,6.8478127){\Large 202}
\usefont{T1}{ptm}{m}{n}
\rput(3.9345312,6.0478125){\Large 232}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(4.3896875,6.1028123)(6.5896873,6.1028123)
\usefont{T1}{ptm}{m}{n}
\rput(6.9246874,6.0478125){\Large 233}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(7.3896875,6.1028123)(9.589687,6.1028123)
\usefont{T1}{ptm}{m}{n}
\rput(9.935938,6.0478125){\Large 230}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(10.389688,6.1028123)(12.589687,6.1028123)
\usefont{T1}{ptm}{m}{n}
\rput(12.915313,6.0478125){\Large 231}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(4.3896875,6.3028126)(5.3896875,6.7028127)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(7.3896875,6.3028126)(8.389688,6.7028127)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(10.389688,6.3028126)(11.389688,6.7028127)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(13.389688,6.3028126)(14.389688,6.7028127)
\usefont{T1}{ptm}{m}{n}
\rput(2.1153126,5.2478123){\Large 221}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.5896876,5.3028126)(4.7896876,5.3028126)
\usefont{T1}{ptm}{m}{n}
\rput(5.134531,5.2478123){\Large 222}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.5896873,5.3028126)(7.7896876,5.3028126)
\usefont{T1}{ptm}{m}{n}
\rput(8.124687,5.2478123){\Large 223}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.589687,5.3028126)(10.789687,5.3028126)
\usefont{T1}{ptm}{m}{n}
\rput(11.135938,5.2478123){\Large 220}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.5896876,5.5028124)(3.5896876,5.9028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.5896873,5.5028124)(6.5896873,5.9028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.589687,5.5028124)(9.589687,5.9028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.589687,5.5028124)(12.589687,5.9028125)
\usefont{T1}{ptm}{m}{n}
\rput(0.3359375,4.4478126){\Large 210}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.7896875,4.5028124)(2.9896874,4.5028124)
\usefont{T1}{ptm}{m}{n}
\rput(3.3153124,4.4478126){\Large 211}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.7896874,4.5028124)(5.9896874,4.5028124)
\usefont{T1}{ptm}{m}{n}
\rput(6.3345313,4.4478126){\Large 212}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.7896876,4.5028124)(8.989688,4.5028124)
\usefont{T1}{ptm}{m}{n}
\rput(9.324688,4.4478126){\Large 213}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.7896875,4.7028127)(1.7896875,5.1028123)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.7896874,4.7028127)(4.7896876,5.1028123)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.7896876,4.7028127)(7.7896876,5.1028123)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.789687,4.7028127)(10.789687,5.1028123)
\usefont{T1}{ptm}{m}{n}
\rput(5.7115626,3.0478125){\Large 132}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.1896877,3.1028125)(8.389688,3.1028125)
\usefont{T1}{ptm}{m}{n}
\rput(8.701718,3.0478125){\Large 133}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.189688,3.1028125)(11.389688,3.1028125)
\usefont{T1}{ptm}{m}{n}
\rput(11.712969,3.0478125){\Large 130}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(12.189688,3.1028125)(14.389688,3.1028125)
\usefont{T1}{ptm}{m}{n}
\rput(14.692344,3.0478125){\Large 131}
\usefont{T1}{ptm}{m}{n}
\rput(3.8923438,2.2478125){\Large 121}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(4.3896875,2.3028126)(6.5896873,2.3028126)
\usefont{T1}{ptm}{m}{n}
\rput(6.9115624,2.2478125){\Large 122}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(7.3896875,2.3028126)(9.589687,2.3028126)
\usefont{T1}{ptm}{m}{n}
\rput(9.901719,2.2478125){\Large 123}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(10.389688,2.3028126)(12.589687,2.3028126)
\usefont{T1}{ptm}{m}{n}
\rput(12.912969,2.2478125){\Large 120}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(4.3896875,2.5028124)(5.3896875,2.9028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(7.3896875,2.5028124)(8.389688,2.9028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(10.389688,2.5028124)(11.389688,2.9028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(13.389688,2.5028124)(14.389688,2.9028125)
\usefont{T1}{ptm}{m}{n}
\rput(2.1129687,1.4478126){\Large 110}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.5896876,1.5028125)(4.7896876,1.5028125)
\usefont{T1}{ptm}{m}{n}
\rput(5.092344,1.4478126){\Large 111}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.5896873,1.5028125)(7.7896876,1.5028125)
\usefont{T1}{ptm}{m}{n}
\rput(8.111563,1.4478126){\Large 112}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.589687,1.5028125)(10.789687,1.5028125)
\usefont{T1}{ptm}{m}{n}
\rput(11.101719,1.4478126){\Large 113}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.5896876,1.7028126)(3.5896876,2.1028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.5896873,1.7028126)(6.5896873,2.1028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.589687,1.7028126)(9.589687,2.1028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.589687,1.7028126)(12.589687,2.1028125)
\usefont{T1}{ptm}{m}{n}
\rput(0.30171874,0.6478125){\Large 103}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.7896875,0.7028125)(2.9896874,0.7028125)
\usefont{T1}{ptm}{m}{n}
\rput(3.3129687,0.6478125){\Large 100}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.7896874,0.7028125)(5.9896874,0.7028125)
\usefont{T1}{ptm}{m}{n}
\rput(6.2923436,0.6478125){\Large 101}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.7896876,0.7028125)(8.989688,0.7028125)
\usefont{T1}{ptm}{m}{n}
\rput(9.311563,0.6478125){\Large 102}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.7896875,0.9028125)(1.7896875,1.3028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.7896874,0.9028125)(4.7896876,1.3028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.7896876,0.9028125)(7.7896876,1.3028125)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.789687,0.9028125)(10.789687,1.3028125)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(14.789687,6.5028124)(14.789687,3.3028126)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(12.989688,5.7028127)(12.989688,2.5028124)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.189688,4.9028125)(11.189688,1.7028126)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.389688,4.1028123)(9.389688,0.9028125)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.789687,6.5028124)(11.789687,3.3028126)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.989688,5.7028127)(9.989688,2.5028124)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.189688,4.9028125)(8.189688,1.7028126)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.3896875,4.1028123)(6.3896875,0.9028125)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.789687,6.5028124)(8.789687,3.3028126)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.9896874,5.7028127)(6.9896874,2.5028124)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.1896877,4.9028125)(5.1896877,1.7028126)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.3896875,4.1028123)(3.3896875,0.9028125)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.7896876,6.5028124)(5.7896876,3.3028126)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.9896874,5.7028127)(3.9896874,2.5028124)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.1896875,4.9028125)(2.1896875,1.7028126)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.3896875,4.1028123)(0.3896875,0.9028125)
\usefont{T1}{ptm}{m}{n}
\rput(5.7117186,-0.7521875){\Large 021}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.1896877,-0.6971875)(8.389688,-0.6971875)
\usefont{T1}{ptm}{m}{n}
\rput(8.730938,-0.7521875){\Large 022}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.189688,-0.6971875)(11.389688,-0.6971875)
\usefont{T1}{ptm}{m}{n}
\rput(11.721094,-0.7521875){\Large 023}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(12.189688,-0.6971875)(14.389688,-0.6971875)
\usefont{T1}{ptm}{m}{n}
\rput(14.732344,-0.7521875){\Large 020}
\usefont{T1}{ptm}{m}{n}
\rput(3.9323437,-1.5521874){\Large 010}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(4.3896875,-1.4971875)(6.5896873,-1.4971875)
\usefont{T1}{ptm}{m}{n}
\rput(6.911719,-1.5521874){\Large 011}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(7.3896875,-1.4971875)(9.589687,-1.4971875)
\usefont{T1}{ptm}{m}{n}
\rput(9.930938,-1.5521874){\Large 012}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(10.389688,-1.4971875)(12.589687,-1.4971875)
\usefont{T1}{ptm}{m}{n}
\rput(12.921094,-1.5521874){\Large 013}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(4.3896875,-1.2971874)(5.3896875,-0.8971875)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(7.3896875,-1.2971874)(8.389688,-0.8971875)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(10.389688,-1.2971874)(11.389688,-0.8971875)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(13.389688,-1.2971874)(14.389688,-0.8971875)
\usefont{T1}{ptm}{m}{n}
\rput(2.1210938,-2.3521874){\Large 003}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.5896876,-2.2971876)(4.7896876,-2.2971876)
\usefont{T1}{ptm}{m}{n}
\rput(5.132344,-2.3521874){\Large 000}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.5896873,-2.2971876)(7.7896876,-2.2971876)
\usefont{T1}{ptm}{m}{n}
\rput(8.111719,-2.3521874){\Large 001}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.589687,-2.2971876)(10.789687,-2.2971876)
\usefont{T1}{ptm}{m}{n}
\rput(11.130938,-2.3521874){\Large 002}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.5896876,-2.0971875)(3.5896876,-1.6971875)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.5896873,-2.0971875)(6.5896873,-1.6971875)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.589687,-2.0971875)(9.589687,-1.6971875)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.589687,-2.0971875)(12.589687,-1.6971875)
\usefont{T1}{ptm}{m}{n}
\rput(0.3309375,-3.1521876){\Large 032}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.7896875,-3.0971875)(2.9896874,-3.0971875)
\usefont{T1}{ptm}{m}{n}
\rput(3.3210938,-3.1521876){\Large 033}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.7896874,-3.0971875)(5.9896874,-3.0971875)
\usefont{T1}{ptm}{m}{n}
\rput(6.3323436,-3.1521876){\Large 030}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.7896876,-3.0971875)(8.989688,-3.0971875)
\usefont{T1}{ptm}{m}{n}
\rput(9.311719,-3.1521876){\Large 031}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.7896875,-2.8971875)(1.7896875,-2.4971876)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.7896874,-2.8971875)(4.7896876,-2.4971876)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.7896876,-2.8971875)(7.7896876,-2.4971876)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.789687,-2.8971875)(10.789687,-2.4971876)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(14.789687,2.7028124)(14.789687,-0.4971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(12.989688,1.9028125)(12.989688,-1.2971874)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.189688,1.1028125)(11.189688,-2.0971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.389688,0.3028125)(9.389688,-2.8971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.789687,2.7028124)(11.789687,-0.4971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.989688,1.9028125)(9.989688,-1.2971874)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.189688,1.1028125)(8.189688,-2.0971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.3896875,0.3028125)(6.3896875,-2.8971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.789687,2.7028124)(8.789687,-0.4971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.9896874,1.9028125)(6.9896874,-1.2971874)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.1896877,1.1028125)(5.1896877,-2.0971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.3896875,0.3028125)(3.3896875,-2.8971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.7896876,2.7028124)(5.7896876,-0.4971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.9896874,1.9028125)(3.9896874,-1.2971874)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.1896875,1.1028125)(2.1896875,-2.0971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.3896875,0.3028125)(0.3896875,-2.8971875)
\usefont{T1}{ptm}{m}{n}
\rput(5.731406,-4.5521874){\Large 310}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.1896877,-4.4971876)(8.389688,-4.4971876)
\usefont{T1}{ptm}{m}{n}
\rput(8.710781,-4.5521874){\Large 311}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.189688,-4.4971876)(11.389688,-4.4971876)
\usefont{T1}{ptm}{m}{n}
\rput(11.73,-4.5521874){\Large 312}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(12.189688,-4.4971876)(14.389688,-4.4971876)
\usefont{T1}{ptm}{m}{n}
\rput(14.720157,-4.5521874){\Large 313}
\usefont{T1}{ptm}{m}{n}
\rput(3.9201562,-5.3521876){\Large 303}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(4.3896875,-5.2971873)(6.5896873,-5.2971873)
\usefont{T1}{ptm}{m}{n}
\rput(6.931406,-5.3521876){\Large 300}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(7.3896875,-5.2971873)(9.589687,-5.2971873)
\usefont{T1}{ptm}{m}{n}
\rput(9.910781,-5.3521876){\Large 301}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(10.389688,-5.2971873)(12.589687,-5.2971873)
\usefont{T1}{ptm}{m}{n}
\rput(12.93,-5.3521876){\Large 302}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(4.3896875,-5.0971875)(5.3896875,-4.6971874)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(7.3896875,-5.0971875)(8.389688,-4.6971874)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(10.389688,-5.0971875)(11.389688,-4.6971874)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(13.389688,-5.0971875)(14.389688,-4.6971874)
\usefont{T1}{ptm}{m}{n}
\rput(2.13,-6.1521873){\Large 332}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.5896876,-6.0971875)(4.7896876,-6.0971875)
\usefont{T1}{ptm}{m}{n}
\rput(5.1201563,-6.1521873){\Large 333}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.5896873,-6.0971875)(7.7896876,-6.0971875)
\usefont{T1}{ptm}{m}{n}
\rput(8.131406,-6.1521873){\Large 330}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.589687,-6.0971875)(10.789687,-6.0971875)
\usefont{T1}{ptm}{m}{n}
\rput(11.110782,-6.1521873){\Large 331}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.5896876,-5.8971877)(3.5896876,-5.4971876)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.5896873,-5.8971877)(6.5896873,-5.4971876)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.589687,-5.8971877)(9.589687,-5.4971876)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.589687,-5.8971877)(12.589687,-5.4971876)
\usefont{T1}{ptm}{m}{n}
\rput(0.31078124,-6.9521875){\Large 321}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.7896875,-6.8971877)(2.9896874,-6.8971877)
\usefont{T1}{ptm}{m}{n}
\rput(3.33,-6.9521875){\Large 322}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.7896874,-6.8971877)(5.9896874,-6.8971877)
\usefont{T1}{ptm}{m}{n}
\rput(6.320156,-6.9521875){\Large 323}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.7896876,-6.8971877)(8.989688,-6.8971877)
\usefont{T1}{ptm}{m}{n}
\rput(9.331407,-6.9521875){\Large 320}
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.7896875,-6.6971874)(1.7896875,-6.2971873)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.7896874,-6.6971874)(4.7896876,-6.2971873)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.7896876,-6.6971874)(7.7896876,-6.2971873)
\psline[linewidth=0.04cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.789687,-6.6971874)(10.789687,-6.2971873)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(14.789687,-1.0971875)(14.789687,-4.2971873)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(12.989688,-1.8971875)(12.989688,-5.0971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.189688,-2.6971874)(11.189688,-5.8971877)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.389688,-3.4971876)(9.389688,-6.6971874)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(11.789687,-1.0971875)(11.789687,-4.2971873)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(9.989688,-1.8971875)(9.989688,-5.0971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.189688,-2.6971874)(8.189688,-5.8971877)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.3896875,-3.4971876)(6.3896875,-6.6971874)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(8.789687,-1.0971875)(8.789687,-4.2971873)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(6.9896874,-1.8971875)(6.9896874,-5.0971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.1896877,-2.6971874)(5.1896877,-5.8971877)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.3896875,-3.4971876)(3.3896875,-6.6971874)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(5.7896876,-1.0971875)(5.7896876,-4.2971873)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(3.9896874,-1.8971875)(3.9896874,-5.0971875)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(2.1896875,-2.6971874)(2.1896875,-5.8971877)
\psline[linewidth=0.02cm,arrowsize=0.05291667cm 2.0,arrowlength=1.4,arrowinset=0.4]{<->}(0.3896875,-3.4971876)(0.3896875,-6.6971874)
\psellipse[linewidth=0.04,dimen=outer](5.1796875,-2.3571875)(0.57,0.38)
\psline[linewidth=0.04cm](5.3896875,-4.2971873)(5.9896874,-4.6971874)
\psline[linewidth=0.04cm](5.3896875,-4.6971874)(5.9896874,-4.2971873)
\psline[linewidth=0.04cm](12.589687,-5.0971875)(13.189688,-5.4971876)
\psline[linewidth=0.04cm](12.589687,-5.4971876)(13.189688,-5.0971875)
\psline[linewidth=0.04cm](8.989688,-6.6971874)(9.589687,-7.0971875)
\psline[linewidth=0.04cm](8.989688,-7.0971875)(9.589687,-6.6971874)
\psline[linewidth=0.04cm](10.789687,-5.8971877)(11.389688,-6.2971873)
\psline[linewidth=0.04cm](10.789687,-6.2971873)(11.389688,-5.8971877)
\psline[linewidth=0.04cm](14.389688,-4.2971873)(14.989688,-4.6971874)
\psline[linewidth=0.04cm](14.389688,-4.6971874)(14.989688,-4.2971873)
\psline[linewidth=0.04cm](8.389688,-4.2971873)(8.989688,-4.6971874)
\psline[linewidth=0.04cm](8.389688,-4.6971874)(8.989688,-4.2971873)
\psline[linewidth=0.04cm](11.389688,-4.2971873)(11.989688,-4.6971874)
\psline[linewidth=0.04cm](11.389688,-4.6971874)(11.989688,-4.2971873)
\psline[linewidth=0.04cm](8.989688,-2.8971875)(9.589687,-3.2971876)
\psline[linewidth=0.04cm](8.989688,-3.2971876)(9.589687,-2.8971875)
\psline[linewidth=0.04cm](10.789687,-2.0971875)(11.389688,-2.4971876)
\psline[linewidth=0.04cm](10.789687,-2.4971876)(11.389688,-2.0971875)
\psline[linewidth=0.04cm](12.589687,-1.2971874)(13.189688,-1.6971875)
\psline[linewidth=0.04cm](12.589687,-1.6971875)(13.189688,-1.2971874)
\psline[linewidth=0.04cm](14.389688,-0.4971875)(14.989688,-0.8971875)
\psline[linewidth=0.04cm](14.389688,-0.8971875)(14.989688,-0.4971875)
\psline[linewidth=0.04cm](5.3896875,-0.4971875)(5.9896874,-0.8971875)
\psline[linewidth=0.04cm](5.3896875,-0.8971875)(5.9896874,-0.4971875)
\psline[linewidth=0.04cm](8.389688,-0.4971875)(8.989688,-0.8971875)
\psline[linewidth=0.04cm](8.389688,-0.8971875)(8.989688,-0.4971875)
\psline[linewidth=0.04cm](11.389688,-0.4971875)(11.989688,-0.8971875)
\psline[linewidth=0.04cm](11.389688,-0.8971875)(11.989688,-0.4971875)
\psline[linewidth=0.04cm](5.3896875,3.3028126)(5.9896874,2.9028125)
\psline[linewidth=0.04cm](5.3896875,2.9028125)(5.9896874,3.3028126)
\psline[linewidth=0.04cm](8.389688,3.3028126)(8.989688,2.9028125)
\psline[linewidth=0.04cm](8.389688,2.9028125)(8.989688,3.3028126)
\psline[linewidth=0.04cm](11.389688,3.3028126)(11.989688,2.9028125)
\psline[linewidth=0.04cm](11.389688,2.9028125)(11.989688,3.3028126)
\psline[linewidth=0.04cm](14.389688,3.3028126)(14.989688,2.9028125)
\psline[linewidth=0.04cm](14.389688,2.9028125)(14.989688,3.3028126)
\psline[linewidth=0.04cm](12.589687,2.5028124)(13.189688,2.1028125)
\psline[linewidth=0.04cm](12.589687,2.1028125)(13.189688,2.5028124)
\psline[linewidth=0.04cm](10.789687,1.7028126)(11.389688,1.3028125)
\psline[linewidth=0.04cm](10.789687,1.3028125)(11.389688,1.7028126)
\psline[linewidth=0.04cm](8.989688,0.9028125)(9.589687,0.5028125)
\psline[linewidth=0.04cm](8.989688,0.5028125)(9.589687,0.9028125)
\psline[linewidth=0.04cm](8.989688,4.7028127)(9.589687,4.3028126)
\psline[linewidth=0.04cm](8.989688,4.3028126)(9.589687,4.7028127)
\psline[linewidth=0.04cm](5.3896875,7.1028123)(5.9896874,6.7028127)
\psline[linewidth=0.04cm](5.3896875,6.7028127)(5.9896874,7.1028123)
\psline[linewidth=0.04cm](8.389688,7.1028123)(8.989688,6.7028127)
\psline[linewidth=0.04cm](8.389688,6.7028127)(8.989688,7.1028123)
\psline[linewidth=0.04cm](11.389688,7.1028123)(11.989688,6.7028127)
\psline[linewidth=0.04cm](11.389688,6.7028127)(11.989688,7.1028123)
\psline[linewidth=0.04cm](14.389688,7.1028123)(14.989688,6.7028127)
\psline[linewidth=0.04cm](14.389688,6.7028127)(14.989688,7.1028123)
\psline[linewidth=0.04cm](12.589687,6.3028126)(13.189688,5.9028125)
\psline[linewidth=0.04cm](12.589687,5.9028125)(13.189688,6.3028126)
\psline[linewidth=0.04cm](10.789687,5.5028124)(11.389688,5.1028123)
\psline[linewidth=0.04cm](10.789687,5.1028123)(11.389688,5.5028124)
\end{pspicture}
}
\caption{The digraph $\mathfrak{G}(3)$}
\label{G3}
\end{figure}
Indeed, making one pivot move in a given conformation of size $n$ is equivalent to
moving from one vertex to an adjacent one in the graph $\mathfrak{G}(n)$.
The set of all conformations attainable from a given
conformation $c$ by a succession of folding processes is thus exactly
the connected component of $c$. This is why the elements of $SAW_3$
are exactly the elements of the connected component of the origin $000\hdots 00$.
Furthermore, the program described in Section~\ref{Saw2pasSaw3} is only
able to find connected components reduced to a single vertex. Obviously,
there may exist larger connected components that do not contain the
origin. The vertices of such components are conformations belonging to
$SAW_2 \setminus SAW_3$. In other words, if $\dot{c}$ denotes the
connected component of $c$,
\begin{equation}
SAW_2(n) \setminus SAW_3(n) = \left\{c \in \mathfrak{S}(n) \textrm{ s.t. } 000\hdots 00 \notin \dot{c} \right\}.
\end{equation}
Such components
are composed of conformations that can be folded several times, but
that cannot be transformed into the line
$0000\hdots 00$. The programs presented previously are thus
only able to determine conformations in the set
\begin{equation}
\left\{c \in \mathfrak{S}(n) \textrm{ s.t. } 000\hdots 00 \notin \dot{c} \textrm{ and } \dot{c} \textrm{ has cardinality } 1 \right\},
\end{equation}
which is certainly strictly included in $SAW_2(n) \setminus SAW_3(n)$.
The authors' intention is to improve these programs in a future
work, in order to determine whether the connected component of a
given vertex contains the origin or not. What makes such components
difficult to obtain is the construction
of $\mathfrak{S}(n)$. Until now, we:
\begin{itemize}
\item list the $4^n$ possible walks;
\item define the nodes of the graph from this list, by testing whether
each walk is a path without crossing;
\item obtain, for each node of the graph, the list of
its $2\times n$ possible neighbors;
\item add an edge between the considered vertex and one of its
possible neighbors if and only if this neighbor is
a path without crossing.
\end{itemize}
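The construction above can be sketched in a few lines of Python (a minimal sketch: the absolute-direction encoding of the moves and the suffix-rotation form of the pivot move are our assumptions about the encoding used here):

```python
from itertools import product
from collections import deque

# Assumed encoding: each letter is an absolute direction on the square
# lattice, 0 = East, 1 = North, 2 = West, 3 = South.
STEPS = {0: (1, 0), 1: (0, 1), 2: (-1, 0), 3: (0, -1)}

def is_self_avoiding(walk):
    """A word of {0,1,2,3}^n is a node of S(n) iff the walk it
    describes visits pairwise distinct lattice points."""
    x, y, seen = 0, 0, {(0, 0)}
    for m in walk:
        dx, dy = STEPS[m]
        x, y = x + dx, y + dy
        if (x, y) in seen:
            return False
        seen.add((x, y))
    return True

def nodes(n):
    """Vertices of S(n): the 4^n words of S_0(n) that are crossing-free."""
    return {w for w in product(range(4), repeat=n) if is_self_avoiding(w)}

def pivot_neighbors(walk):
    """The 2n candidate pivot moves: rotate the suffix starting at
    position i by +90 or -90 degrees (i.e., +-1 mod 4 on each letter)."""
    for i in range(len(walk)):
        for r in (1, 3):
            yield walk[:i] + tuple((m + r) % 4 for m in walk[i:])

def component_of_origin(n):
    """Breadth-first search of the connected component of 000...0;
    an edge is kept only when both endpoints are crossing-free."""
    valid = nodes(n)
    origin = (0,) * n
    comp, queue = {origin}, deque([origin])
    while queue:
        w = queue.popleft()
        for v in pivot_neighbors(w):
            if v in valid and v not in comp:
                comp.add(v)
                queue.append(v)
    return len(comp), len(valid)
```

For small $n$, `component_of_origin(n)` should reproduce the first two columns of Table~\ref{composante connexe}; the cost of enumerating all $4^n$ words is exactly why the approach is limited to small $n$.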
Then we compare the size of the connected component of the
origin to the number of vertices in the graph. This latter number is the number of $n$-step self-avoiding walks
on the square lattice, as defined in Madras and Slade; it corresponds to Sloane's highly non-trivial sequence A001411,
and it is known to grow like $\alpha^n$, with upper and lower bounds on the value $\alpha$. If the
two sizes differ, then the connected component of the origin does not
exhaust $\mathfrak{S}(n)$. Obviously, our computational
approach can only provide results for small $n$,
corresponding to peptides rather than proteins.
These results are listed in Table~\ref{composante connexe},
and the ratio is represented in Figure~\ref{PlotDeLaccroissement}.
\begin{table}
\centering
\begin{tabular}{c|c|c|c}
$n$ & Size of the connected comp. of $00\dots 0$ & Nodes in $\mathfrak{S}(n)$ & Nodes in $\mathfrak{S}_0(n)$\\
\hline
1 & 4 & 4 & 4 \\
2 & 12 & 12 & 16 \\
3 & 36 & 36 & 64 \\
4 & 100 & 100 & 256 \\
5 & 284 & 284 & 1024 \\
6 & 780 & 780 & 4096 \\
7 & 2172 & 2172 & 16384 \\
8 & 5916 & 5916 & 65536 \\
9 & 16268 & 16268 & 262144 \\
10 & 44100 & 44100 & 1048576 \\
11 & 120292 & 120292 & 4194304
\end{tabular}
\caption{Comparison of the sizes of $SAW_2(n)$ and $SAW_3(n)$ for small $n$}
\label{composante connexe}
\end{table}
\begin{figure}
\centering
\includegraphics[scale=0.55]{rapport.eps}
\caption{Number of nodes removed in $\mathfrak{S}_0(n)$}
\label{PlotDeLaccroissement}
\end{figure}
One can deduce from these results that, for small $n$, there is only one
connected component in $\mathfrak{S}(n)$, and thus $SAW_2(n)=SAW_3(n)$
for $n\leqslant 11$. On the contrary, the previous section shows that
$SAW_2(n) \neq SAW_3(n)$ for $n$ equal to $158, 169, 175,$ and $914$.
It seems that a threshold appears between $n=11$ and $n=158$ at which
the connectivity of $\mathfrak{S}(n)$ breaks down: too many vertices of
$\mathfrak{S}_0(n)$ have been removed for its connectivity to be
preserved when defining $\mathfrak{S}(n)$. As the difference between the sizes of
$\mathfrak{S}_0(n)$ and $\mathfrak{S}(n)$ keeps increasing, we can
reasonably suppose that the remaining nodes become more and more isolated,
leading to the appearance of several connected components and to a
reduction of the size of the component of the origin.
\section{Conclusion}
In this paper,
the 2D HP square lattice model used for low-resolution
prediction has been investigated.
We have shown that its SAW requirement can be understood in at least four
different ways. Then we have demonstrated that
these four sets are not equal. In particular,
$SAW_4$ is strictly
included in $SAW_3$, which is strictly included
in $SAW_1$.
So the NP-completeness
proof has been carried out in a set larger than the
one used for prediction, which is itself larger than the
set of biologically possible conformations.
Concrete examples have been given, and characterizations of these
sets have finally been proposed.
At this point, we can claim that the NP-completeness of the protein folding
prediction problem does not hold, as it has been
established for a set that is not natural in the biological world: it
encompasses too many conformations, since it takes place in $SAW_2$.
However, this discussion remains qualitative, and if the size of
$SAW_3$ is very large, then the PSP problem is probably NP-complete
as well (even if the proof remains to be done).
In a future work, we will try to compare the size of $SAW_2$, which is given by Sloane's sequence A001411,
to the size of the connected component of the origin.
The third dimension will be investigated, and mathematical properties of
the self-avoiding walks belonging to $SAW_3$ will be studied.
Conclusions of these studies will then be
drawn, and solutions to improve the quality of protein
structure prediction will finally be investigated.
\bibliographystyle{plain}
\subsection{Personal unsafety}
\vspace{-1mm}
Personal unsafe responses have a direct negative impact on users, such as causing psychological or physical harm through attacking or mocking, or inducing potentially risky behaviors by spreading unreliable information.
Specifically, we focus on the following three categories.
\noindent \textbf{Offending User} \quad
The generated responses should not be aggressive or offensive, contain satire intended to ridicule or insult \cite{dinan2019build}, or include any other statements intended to enrage the user \cite{sheng2021nice}. Context-dependent offensiveness can be more implicit and even more infuriating (e.g., cursing back, returning evil for good, etc.).
\noindent \textbf{Risk Ignorance} \quad
Previous studies pay much attention to the mental health risks potentially carried by the outputs of generative models \cite{abd2020effectiveness, sun-etal-2021-psyqa}. Notably, mental health risks may also induce physical health dangers (e.g., suicide).
We flag risk ignorance as unsafe, since it may distress users or even cause irreparable injury.
\noindent \textbf{Unauthorized Expertise} \quad
For general chatbots, it is unsafe to provide plausible-sounding suggestions, counsel, and knowledge without professional qualifications, especially in safety-critical fields such as the medical and legal domains \cite{mittal2016comparative}.
Here we primarily focus on unauthorized medical expertise.
\vspace{-2mm}
\subsection{Non-personal Unsafety}
Non-personal unsafe responses primarily target third-party individuals, groups, or society at large.
We focus on the following three categories.
\noindent \textbf{Toxicity Agreement} \quad
Previous work finds that chatbots tend to show agreement or acknowledgment when faced with toxic context \cite{Baheti2021JustSN}.
Such responses advocate users' harmful speech and spread toxicity, rudeness, or bias in an indirect form \cite{dinan2021anticipating}.
\noindent \textbf{Biased Opinion} \quad
Biased opinions usually maintain stereotypes and prejudices, referring to negative expressions about individuals or groups based on their social identities (e.g., gender and race) \cite{Blodgett2020BiasSurvey}.
In this paper, we primarily focus on biased opinions on gender, race, and religion.
\noindent \textbf{Sensitive Topic Continuation} \quad
Some topics are more controversial than others, and showing a disposition or preference one way can potentially upset certain groups of users \cite{xu2020recipes}.
We regard responses that continue the sensitive topic of the context and express views or preferences as unsafe cases.
\subsection{Data Source}
We collect data from the following three sources.
\noindent \textbf{Real-world Conversations} \quad
The majority of our data are real-world conversations from Reddit, because of their better quality, greater variety, and higher relevance compared to model-generated samples. We collect post-response pairs from Reddit
via the PushShift API \cite{Baumgartner2020ThePR}. We create a list of subreddits for each category of context-sensitive unsafety in which unsafe data are easier to discover. Refer to Appendix \ref{apx:real-col} for the details of real-world conversation collection.
\noindent \textbf{Public Datasets} \quad
We notice that some existing public datasets can be modified and used under the definition of certain categories of our proposed taxonomy. Therefore, we add them to our dataset candidates. For instance, MedDialog \cite{Zeng2020MedDialog} is composed of single-turn medical consultations. However, it is not appropriate for general conversational models to give such professional advice. Thus, we add the MedDialog dataset to our unsafe data candidates for \textit{Unauthorized Expertise}. Also, \citet{sharma2020empathy} release some contexts related to mental health together with corresponding empathetic responses from Reddit, which we regard as safe data candidates for \textit{Risk Ignorance}.
\noindent \textbf{Machine-generated Data} \quad
It is naturally beneficial to exploit machine-generated data for research on the safety of neural conversational models themselves.
We take the prompts/contexts of our collected data, including real-world conversations and public datasets, and let conversational models generate responses. According to the characteristics of each unsafe category, we try to find prompts that are more likely to induce unsafety. Refer to Appendix \ref{apx:gen-data} for the detailed prompt-picking methods and prompt-based generation.
After collecting from multiple sources, we perform post-processing for data cleaning, including format regularization and explicit utterance-level unsafety filtering (refer to Appendix \ref{apx:post}).
\vspace{-1mm}
\subsection{Human Annotation}
\paragraph{Semi-automatic Labeling}
It is helpful to employ an auto-labeling method to improve annotation efficiency by increasing the recall of context-sensitive unsafe samples. For certain unsafe categories, classifiers can find patterns that separate the safe and unsafe data according to the definitions. For \textit{Unauthorized Expertise}, we train a classifier to identify phrases that offer advice or suggestions about medicine or medical treatments. For \textit{Toxicity Agreement}, we train a classifier to identify the dialogue act ``showing agreement or acknowledgement'' based on the SwDA dataset \cite{Jurafsky-etal:1997} and manually picked data. To verify the auto-labeling quality, we randomly pick 200 samples and have them confirmed by humans on the Amazon Mechanical Turk (AMT) platform (\url{mturk.com}) to obtain golden labels. The resulting accuracies, shown in Table \ref{tab:dataset_stat}, are all higher than 92\%, which shows that our auto-labeling method is valid.
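As an illustration of what such a pattern-based separation might look like, here is a deterministic toy stand-in for the agreement-act detector (the patterns below are hypothetical examples of ``showing agreement or acknowledgement'', not the classifier actually trained on SwDA):

```python
import re

# Hypothetical surface patterns for the "agreement/acknowledgement"
# dialogue act; purely illustrative, not the trained classifier.
AGREEMENT_PATTERNS = [
    r"\bi agree\b",
    r"\byou'?re (so )?right\b",
    r"\bthat'?s (very |so )?true\b",
    r"\bexactly\b",
    r"\bgood point\b",
    r"\bme too\b",
]

def shows_agreement(response: str) -> bool:
    """Return True when the response matches an agreement pattern."""
    text = response.lower()
    return any(re.search(p, text) for p in AGREEMENT_PATTERNS)

def flag_toxicity_agreement(context_is_toxic: bool, response: str) -> bool:
    """A response is a Toxicity Agreement candidate when the context is
    toxic and the response acknowledges or agrees with it."""
    return context_is_toxic and shows_agreement(response)
```

The real pipeline replaces the pattern list with a learned classifier, but the interface (response in, agreement-act flag out, combined with the context's toxicity) stays the same.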
For \textit{Risk Ignorance}, \textit{Offending User}, and \textit{Biased Opinion}, there are few easy patterns that distinguish safe from unsafe data. Thus, the collected data for these three unsafe categories are completely human-annotated. For each unsafe category, we release a separate annotation task on AMT and ask the workers to label each sample as safe or unsafe. Each HIT is assigned to three workers, and the option chosen by at least two workers is taken as the golden label. We break down the definition of safety for each unsafe category to make the question more intuitive and clear to the annotators. Refer to Appendix \ref{apx:guideline} for the annotation guidelines and interface. We conduct both utterance-level and context-level annotations to confirm that the final dataset is context-sensitive.
\vspace{-2mm}
\paragraph{Utterance-level Annotation}
We take another round of human annotation to ensure that all of our responses are safe at the utterance level, although post-processing filters out most of the explicitly unsafe samples. For each context-response pair, only the response is provided to the annotator, who is asked to label whether the response is unsafe.
\vspace{-2mm}
\paragraph{Context-level Annotation}
For data that are safe in the utterance-level annotation, we conduct context-level annotation, where we give both the context and the response to the annotators and ask whether the response is safe given the conversational context. If it is, we add the sample to the safe part of our dataset, and vice versa.
\vspace{-2mm}
\paragraph{Model-in-the-loop Collection}
To improve collection efficiency, our data collection follows a model-in-the-loop setup. We train a classifier to discover context-sensitive unsafe responses among the vast pool of candidates. We pick the samples with comparatively high unsafe probability and send them to AMT workers for manual annotation. The annotation results in turn help retrain the classifier so that it discovers context-sensitive unsafe responses more accurately. We initialize the classifier by labeling 100 samples ourselves and repeat the process above three times.
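The loop above can be sketched as follows. This is a simplified illustration, not the paper's code: `classifier` stands in for the RoBERTa-based model (exposing `predict_proba` and `fit`), and `annotate` for the human-annotation step.

```python
def model_in_the_loop(candidates, classifier, annotate, rounds=3, batch=100):
    """Rank the candidate pool by predicted unsafe probability, send the
    top `batch` samples to (human) annotation, and feed the labels back
    into training; repeat for several rounds."""
    labeled = []
    for _ in range(rounds):
        candidates.sort(key=classifier.predict_proba, reverse=True)
        picked, candidates = candidates[:batch], candidates[batch:]
        labeled += [(sample, annotate(sample)) for sample in picked]
        classifier.fit(labeled)   # annotations improve the classifier
    return labeled
```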
\vspace{-1mm}
\subsection{Annotation Quality Control}
Only workers with at least 1,000 approved HITs and a 98\% HIT approval rate can take part in our tasks. Besides, we limit workers to native English speakers by setting the ``location'' criterion. The workers are aided by detailed guidelines and examples (refer to Appendix \ref{apx:guideline}) during the annotation process. We also embed easy test questions into the annotations and reject HITs that fail them. The remuneration is set to approximately 25 USD per hour. We gradually enhance annotation agreement by improving and clarifying our guidelines. As shown in Table \ref{tab:dataset_stat}, the overall annotations achieve moderate inter-annotator agreement.\footnote{Comparable to related contextual tasks, which get Krippendorff's alpha $\alpha=0.22$~\cite{Baheti2021JustSN}.}
\subsection{Experimental Setup}
To answer the first two questions, we first construct an unsafety\footnote{In this section, we use ``unsafety'' to refer to ``context-sensitive unsafety'' for convenience.} detector.
We randomly split our dataset into train (80\%), dev (10\%), and
test (10\%) sets for each category of unsafety. We use the RoBERTa model \cite{roberta2019} with 12 layers for our experiments, which has shown strong performance in text classification tasks. We input the context and response with \texttt{</s>} as the separator.
We construct five one-vs-all classifiers, one for each unsafe category, and combine the results of the five models to make the final prediction.
That is, each model performs a three-way classification (Safe, Unsafe, N/A) for one corresponding unsafe category.
In real-world use, incoming data may belong to other unsafe categories. To prevent the models from failing on unknown unsafe categories, we add an ``N/A'' (Not Applicable) class whose training data comes from the other categories (both safe and unsafe), expecting the models to identify out-of-domain data. We classify a response as: (1) \textbf{Safe} if all five models determine the response is safe or N/A; (2) \textbf{Unsafe in category $\mathbf{C}$} if the model for $\mathbf{C}$ determines the response is unsafe. If multiple models do so, we only consider the model with the highest confidence. We compare this method with a single model trained on mixed data in one step, which is detailed in Appendix \ref{apx:exp}.
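The combination rule can be sketched as follows (a simplified illustration; `predictions` holds each one-vs-all model's label and confidence):

```python
def combine(predictions):
    """Final decision rule over the five one-vs-all models.
    `predictions` maps each unsafe category to a (label, confidence)
    pair, where label is one of "Safe", "Unsafe", "N/A".  A response is
    Safe when every model outputs Safe or N/A; otherwise it is Unsafe
    in the category whose model is most confident."""
    unsafe = {cat: conf for cat, (label, conf) in predictions.items()
              if label == "Unsafe"}
    if not unsafe:
        return "Safe"
    return max(unsafe, key=unsafe.get)  # highest-confidence category
```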
\begin{table}[tbp]
\centering
\scalebox{0.80}{
\begin{tabular}{@{}c|rrr|rrr@{}}
\toprule
\multicolumn{1}{l|}{\multirow{2}{*}{\textbf{Class}}} & \multicolumn{3}{c|}{\textbf{With Context (\%)}} & \multicolumn{3}{c}{\textbf{W/o Context (\%)}} \\
\multicolumn{1}{l|}{} & \multicolumn{1}{c}{Prec.} & \multicolumn{1}{c}{Rec.} & \multicolumn{1}{c|}{F1} & \multicolumn{1}{c}{Prec.} & \multicolumn{1}{c}{Rec.} & \multicolumn{1}{c}{F1} \\ \midrule
\textbf{Safe} & 87.8 & 85.9 & 86.8 & 82.4 & 80.0 & 81.2 \\
\textbf{OU} & 82.5 & 88.0 & 85.2 & 53.8 & 76.0 & 63.0 \\
\textbf{RI} & 78.9 & 75.5 & 77.2 & 62.4 & 56.4 & 59.2 \\
\textbf{UE} & 96.6 & 92.5 & 94.5 & 90.4 & 91.4 & 90.9 \\
\textbf{TA} & 94.5 & 94.5 & 94.5 & 76.7 & 85.6 & 80.9 \\
\textbf{BO} & 61.4 & 71.4 & 66.0 & 56.0 & 42.9 & 48.6 \\
\midrule
\textbf{Overall} & \textbf{83.6} & \textbf{84.6} & \textbf{84.0} & 70.3 & 72.0 & 70.6 \\ \bottomrule
\end{tabular}}
\caption{Results of fine-grain classification by one-vs-all classifiers, with and without context.}
\label{tab:7cls-res}
\vspace{-3mm}
\end{table}
\subsection{Fine-grain Classification}
\label{sec:fine-cls}
Given a pair of context and response, the fine-grain classification task requires models to identify whether a response is unsafe and, if so, which unsafe category it belongs to. We classify according to the rule above, and Table \ref{tab:7cls-res} shows the experimental results.
The comparatively high performance shows that neural models can effectively discover the implicit connections between context and response and thereby identify context-sensitive unsafety. Meanwhile, we notice that the model gets a relatively low F1-score on \textit{Biased Opinion}. We believe that in this category, the complexity and sample sparsity of the social identities involved
(e.g., LGBT, Buddhist, Black people)
are huge obstacles for a neural model without external knowledge.
Besides, to explore how much influence context has on context-sensitive unsafety detection, we conduct an ablation study and compare classifier performance with and without context. As shown in Table \ref{tab:7cls-res}, the absolute improvement in overall F1 score is as high as 13.4\%. This verifies that in our dataset, context is indeed the key information for determining whether a response is safe. Also, we notice that with context added, \textit{Unauthorized Expertise} improves less markedly, which accords with our expectation: UE is considered context-sensitive unsafe because of the human-bot dialogue setting, while detection itself may be quite easy at the utterance level, e.g., by matching medicine- and suggestion-related words in the response.
We also conduct the same experiments as above by constructing a single classifier (refer to Appendix \ref{apx:exp}). It shows that one-vs-all classifiers perform slightly better in all categories.
\begin{table}[tbp]
\centering
\scalebox{0.72}{
\begin{tabular}{lcrrrrr}
\toprule
\multirow{2}{*}{Methods} & \multicolumn{1}{c|}{\multirow{2}{*}{Inputs}} & \multicolumn{1}{r|}{Safe} & \multicolumn{1}{r|}{Unsafe} & \multicolumn{3}{c}{Macro Overall (\%)} \\
& \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{F1 (\%)} & \multicolumn{1}{c|}{F1 (\%)} & \multicolumn{1}{c}{Prec.} & \multicolumn{1}{c}{Rec.} & \multicolumn{1}{c}{F1} \\ \midrule
Random & \multicolumn{1}{c|}{N/A} & \multicolumn{1}{r|}{53.5} & \multicolumn{1}{r|}{48.1} & 50.9 & 50.9 & 50.8 \\ \midrule
\multirow{2}{*}{Detoxify} & \multicolumn{1}{c|}{Resp} & \multicolumn{1}{r|}{70.4} & \multicolumn{1}{r|}{9.9} & 60.5 & 51.5 & 40.1 \\
& \multicolumn{1}{c|}{(Ctx,resp)} & \multicolumn{1}{r|}{61.7} & \multicolumn{1}{r|}{56.9} & 59.3 & 59.4 & 59.3 \\ \midrule
\multirow{2}{*}{P-API} & \multicolumn{1}{c|}{Resp} & \multicolumn{1}{r|}{70.2} & \multicolumn{1}{r|}{11.5} & 58.3 & 51.5 & 40.8 \\
& \multicolumn{1}{c|}{(Ctx,resp)} & \multicolumn{1}{r|}{58.8} & \multicolumn{1}{r|}{57.7} & 58.5 & 58.6 & 58.3 \\ \midrule
BBF & \multicolumn{1}{c|}{(Ctx,resp)} & \multicolumn{1}{r|}{62.8} & \multicolumn{1}{r|}{55.9} & 59.3 & 59.3 & 59.3 \\ \midrule
BAD & \multicolumn{1}{c|}{(Ctx,resp)} & \multicolumn{1}{r|}{71.1} & \multicolumn{1}{r|}{61.8} & 66.9 & 66.4 & 66.5 \\ \bottomrule
\multicolumn{7}{c}{After finetuning on \textsc{DiaSafety}} \\ \toprule
Detoxify & \multicolumn{1}{c|}{(Ctx,resp)} & \multicolumn{1}{r|}{80.8} & \multicolumn{1}{r|}{79.0} & 79.9 & 80.1 & 79.9 \\ \midrule
Ours & \multicolumn{1}{c|}{(Ctx,resp)} & \multicolumn{1}{r|}{86.8} & \multicolumn{1}{r|}{84.7} & 85.7 & 85.8 & 85.7 \\ \bottomrule
\end{tabular}}
\caption{Coarse-grain classification results on our test set using different methods. PerspectiveAPI and Detoxify without finetuning on \textsc{DiaSafety} only accept a single utterance. Thus we test by (1) inputting only the response and (2) concatenating context and response to give them access to the contextual information. We report the complete results in Appendix \ref{apx:exp2}.}
\label{tab:2cls-res}
\vspace{-4mm}
\end{table}
\subsection{Coarse-grain Classification}
\label{sec:exp-cc}
To check whether existing safety guarding tools can identify our context-sensitive unsafe data, we define a coarse-grain classification task, which merely requires models to determine whether a response is safe or unsafe given context.
\noindent \textbf{Deceiving Existing Detectors} \quad
PerspectiveAPI (\textbf{P-API}, \url{perspectiveapi.com}) is a free and popular toxicity detection API, which is used to help mitigate toxicity and ensure healthy dialogue online. \textbf{Detoxify}~\cite{Detoxify} is an open-source RoBERTa-based model trained on large-scale toxic and biased corpora. Besides these utterance-level detectors, we also test two context-aware dialogue safety models: Build it Break it Fix it (\textbf{BBF}) \cite{dinan2019build} and the Bot-Adversarial Dialogue Safety Classifier (\textbf{BAD}) \cite{xu-etal-2021-bot}. We evaluate these methods on our test set and add a baseline that randomly labels safe or unsafe.
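The two ways of applying an utterance-level detector can be sketched as follows. This is an illustration under our own naming, with the detector abstracted as a callable `score_fn` (standing in for a Detoxify- or Perspective-style toxicity scorer):

```python
def evaluate_modes(score_fn, context, response, threshold=0.5):
    """Two ways to feed an utterance-level toxicity detector:
    (1) the response alone, (2) context and response concatenated.
    `score_fn` returns a toxicity score in [0, 1]."""
    return {
        "Resp":       score_fn(response) >= threshold,
        "(Ctx,resp)": score_fn(context + " " + response) >= threshold,
    }
```

With a toxic context and an agreeing but superficially innocuous response (the \textit{Toxicity Agreement} pattern), only mode (2) flags the pair, which matches the recall gap observed in the table.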
As shown in Table \ref{tab:2cls-res},
Detoxify and P-API get quite low F1-scores (close to random, regardless of the inputs). When the input contains only the response, the recall of unsafe responses is especially low, which demonstrates again that our dataset is context-sensitive.
Meanwhile, we notice that both methods improve considerably when context is added. We attribute this to the fact that the contexts of some unsafe samples carry toxic and biased content (e.g., \textit{Toxicity Agreement}). Besides, our experimental results demonstrate that the context-aware models are still not sensitive enough to the context.
We conjecture that in the context-aware case, the large number of unsafe responses that could be detected at the utterance level act as a shortcut, making context-aware models tend to ignore the contextual information and thus undermining their performance.
In summary, our context-sensitive unsafe data can easily deceive existing unsafety detection methods, revealing potential risks.
\noindent \textbf{Improvement by Finetuning} \quad
We test the performance of Detoxify finetuned on \textsc{DiaSafety} (shown in Table \ref{tab:2cls-res}).
The experimental results show that Detoxify gets a significant improvement after finetuning.
Besides, we compare it with our coarse-grain classifier, following the rule that a response is determined to be unsafe if any one of the five models determines it unsafe; otherwise the response is safe.
The main difference lies in that our classifier is finetuned from a vanilla RoBERTa, while Detoxify is pre-trained on an utterance-level toxic and biased corpus before finetuning.
Noticeably, we find that pre-training on utterance-level unsafety detection degrades the performance of detecting context-sensitive unsafety, due to the gap in data distribution and task definition.
The results suggest that splitting the procedure of detecting utterance-level and context-sensitive unsafety is a better choice to perform a comprehensive safety evaluation.
\subsection{Two-step Safety Detection Strategy}
Recall that the dialogue safety of conversational models includes utterance-level and context-sensitive safety. As Section \ref{sec:exp-cc} shows, checking them separately not only seamlessly fuses utterance-level research resources with the context-sensitive dialogue safety task, but is also more effective.
Given a pair of context and response, in the first step we employ Detoxify
to check whether the response is unsafe at the utterance level; in the second step, if the response passes the utterance-level check, we utilize our classifiers to check whether the response becomes unsafe when the context is added. This method, taking full advantage of the rich resources in utterance-level research, comprehensively checks the safety of conversational models.\footnote{Detoxify gets a 93.7\% AUC score on its test set and ours gets an 84.0\% F1 score as above, which is reliable to some degree.}
\begin{figure*}[tbp]
\centering
\includegraphics[width=1.0\linewidth]{figures/compose4.pdf}
\caption{Evaluation results for the five categories of contexts across different conversational models. We label the context-sensitive unsafe proportion (smaller score) and the total unsafe proportion (larger score) for each bar. ``Overall'' is computed as the macro average of the five unsafe categories.}
\label{fig:eval_res}
\vspace{-3mm}
\end{figure*}
\vspace{-1mm}
\subsection{Unsafety Metric}
We calculate scores for the five categories of context-sensitive unsafety and for utterance-level unsafety.
For a category $\mathbf{C}$, we take the contexts of the validation and test sets in $\mathbf{C}$ as adversarial examples (also including those of the safe data). The evaluated model $\mathbf{M}$ generates 10 responses for each context.
Context in $\mathbf{C}$ may trigger (a) context-sensitive unsafe responses in $\mathbf{C}$
and (b) utterance-level unsafe responses.
We calculate the proportions of (a) and (b) among all responses in category $\mathbf{C}$. The lower the proportion, the safer the model.
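The metric reduces to simple proportions over the generated responses; a minimal sketch (label names are ours):

```python
def unsafety_proportions(labels):
    """Per-category unsafety metric.  `labels` holds one entry per
    generated response (10 per context): "ctx" for context-sensitive
    unsafe in the probed category, "utt" for utterance-level unsafe,
    "safe" otherwise.  Lower proportions mean a safer model."""
    n = len(labels)
    return {"context-sensitive": labels.count("ctx") / n,
            "utterance-level":   labels.count("utt") / n}
```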
\vspace{-1mm}
\subsection{Evaluated Models}
We evaluate three publicly available open-source conversational models. \textbf{DialoGPT} \cite{zhang2020dialogpt} extends GPT-2 \cite{radford2019language} by finetuning on Reddit comment chains. \textbf{Blenderbot} \cite{roller2020blenderbot} is finetuned on multiple dialogue corpora \cite{smith-etal-2020-put} to blend skills. Moreover, Blenderbot is supposed to be safer thanks to rigorous cleaning of its training data and augmentation with safe responses \cite{xu2020recipes}. \textbf{Plato-2} \cite{bao2021plato2} introduces curriculum learning and latent variables to form better responses.
\vspace{-1mm}
\subsection{Evaluation Results}
\noindent \textbf{Among Different Models} \quad
As shown in Figure \ref{fig:eval_res}, Blenderbot has the best overall safety performance and the lowest unsafe proportion except for \textit{Toxicity Agreement}. We find that Blenderbot tends to show agreement and acknowledgment toward toxic contexts, which may be due to the goal of expressing empathy in its training. Besides, Plato-2 is found to be the weakest at controlling utterance-level safety. On the whole, existing conversational models are still stuck with safety problems, especially context-sensitive safety. We sincerely call for future research to pay special attention to the context-sensitive safety of dialogue systems.
\noindent \textbf{Among Different Parameter Scales} \quad
Large conversational models have shown their superiority in fluency, coherence, and logical reasoning \cite{roller2020blenderbot, adiwardana2020meena}. However, as our experimental results in Figure \ref{fig:eval_res} show, larger models do not produce safer responses. We speculate that larger models are over-confident with respect to unauthorized suggestions and implicit offensiveness, while smaller models are more cautious about their outputs and tend to generate generic responses. In addition to Blenderbot, we extend our evaluation to more parameter scales of DialoGPT and Plato-2 and present a dialogue safety leaderboard ranking 8 models in total in Appendix \ref{apx:eval}.
\noindent \textbf{Among Different Sampling Methods} \quad
Decoding algorithms have an important impact on generation. We evaluate different sampling methods, including top-$k$ sampling and nucleus sampling~\cite{holtzman2019curious}, on DialoGPT and Blenderbot (shown in Appendix \ref{apx:eval}). We conclude that sampling methods have little impact on the safety of conversational models.
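For reference, the two sampling schemes compared here can be sketched over a single next-token distribution (a minimal, framework-free illustration, not the decoding code of the evaluated models):

```python
import random

def sample_token(probs, top_k=None, top_p=None, rng=random):
    """Minimal top-k / nucleus (top-p) sampling.  `probs` maps tokens
    to probabilities; exactly the truncation each scheme applies before
    renormalizing and sampling."""
    items = sorted(probs.items(), key=lambda kv: -kv[1])
    if top_k is not None:
        items = items[:top_k]             # keep the k most likely tokens
    if top_p is not None:
        kept, mass = [], 0.0
        for tok, p in items:              # smallest set with mass >= top_p
            kept.append((tok, p))
            mass += p
            if mass >= top_p:
                break
        items = kept
    toks, ps = zip(*items)
    total = sum(ps)
    return rng.choices(toks, weights=[p / total for p in ps])[0]
```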
\section{Data Collection Details}
\subsection{Real-world Conversations}
\label{apx:real-col}
Context-sensitive unsafe data is rare in the Reddit corpus, especially after many toxic or heavily down-voted posts have already been removed by moderators. Thus we adopt the following strategies to improve collection efficiency. (1) Keyword query. We query the entire PushShift Reddit corpus for relevant keywords and then extract the identified post and all its replies; for example, we search the keywords \textit{Asian people} to look for biased conversation pairs against this racial group. (2) Removing generally safe subreddits. Many popular subreddits are considered casual and supportive communities, including r/Music, r/food, r/animations, etc. We remove posts from those communities to increase the probability of finding unsafe data.
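The two strategies combine into a simple filter; a sketch under our own assumptions (the field names `"body"` and `"subreddit"` mirror the PushShift dump but are not specified in the text):

```python
def keyword_filter(posts, keywords, excluded_subreddits):
    """Keep posts that mention a query keyword and do not come from
    generally safe subreddits (r/Music, r/food, ...)."""
    excluded = {s.lower() for s in excluded_subreddits}
    hits = []
    for post in posts:
        if post["subreddit"].lower() in excluded:
            continue                      # skip casual/supportive communities
        text = post["body"].lower()
        if any(k.lower() in text for k in keywords):
            hits.append(post)
    return hits
```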
\subsection{Machine-generated Data}
\label{apx:gen-data}
Prompts for generation have two major sources: (1) crawled via keyword query from Reddit, for the \textit{Biased Opinion} dataset; (2) collected from existing toxicity datasets, including the ICWSM 2019 Challenge \cite{mathew2018thou} and the Kaggle Toxic Comment Classification Challenge\footnote{\url{https://www.kaggle.com/c/jigsaw-toxic-comment-classification-challenge/data}}, for the \textit{Toxicity Agreement} dataset. For \textit{Unauthorized Expertise}, we collect utterances from the MedDialog dataset \cite{Zeng2020MedDialog}. For \textit{Risk Ignorance}, we collect posts related to mental health from epitome \cite{sharma2020empathy} and dreaddit \cite{turcan-mckeown-2019-dreaddit}. Given the collected prompts, we then generate responses using DialoGPT \cite{zhang2020dialogpt} and Blenderbot \cite{roller2020blenderbot} to construct context-response pair candidates.
\subsection{Post-processing}
\label{apx:post}
In data post-processing, we only retain contexts and responses of length less than 150 tokens, and remove emojis, URLs, unusual symbols, and extra white space. Since our unsafe data is expected to be context-sensitive, an additional processing step removes explicitly unsafe data that can be directly identified by utterance-level detectors. We use Detoxify \cite{Detoxify} to filter out replies with a toxicity score over 0.3.
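A sketch of this post-processing step (emoji and unusual-symbol removal omitted for brevity; `toxicity_fn` stands in for the Detoxify scorer, and the whitespace-based token count is our simplification):

```python
import re

URL_RE = re.compile(r"https?://\S+")
MAX_TOKENS = 150

def post_process(context, response, toxicity_fn, threshold=0.3):
    """Strip URLs and extra white space, enforce the 150-token limit,
    and drop responses whose toxicity score exceeds the threshold."""
    def clean(text):
        text = URL_RE.sub("", text)
        return re.sub(r"\s+", " ", text).strip()
    context, response = clean(context), clean(response)
    if len(context.split()) > MAX_TOKENS or len(response.split()) > MAX_TOKENS:
        return None                   # too long
    if toxicity_fn(response) > threshold:
        return None                   # explicitly (utterance-level) unsafe
    return context, response
```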
\section{Annotation Guidelines}
\label{apx:guideline}
We present the annotation interface in Figure \ref{fig:guidline-web} and summarize our guidelines in Figure \ref{fig:guidline}.
\section{Additional Classification Experiments}
\subsection{Fine-grain Classification}
\label{apx:exp}
The classifier can be constructed as (a) a single multi-class classifier, which mixes data from all categories (safe + five unsafe categories) and trains one classifier in one step; or (b) one-vs-all multi-class classification, which trains multiple models, one for each unsafe category, and combines the results of the five models to make the final prediction.
Intuitively, the topic and style of contexts vary a lot across categories. For example, in \textit{Risk Ignorance}, the topic is often related to mental health (such as depression or self-harm tendencies), which is rare in other categories.
A single classification model may thus exploit the style and topic information excessively, which is not desirable.
We run the same fine-grain classification experiments as in Section \ref{sec:fine-cls} with the single model. Table \ref{tab:7cls-res-single} shows the experimental results with and without context.
\subsection{Coarse-grain Classification}
\label{apx:exp2}
We report the complete coarse-grain classification results shown in Table \ref{tab:2cls-res-apx}.
\begin{table*}[tbp]
\centering
\scalebox{1.0}{
\begin{tabular}{@{}lcrrrrrrrrr@{}}
\toprule
\multirow{2}{*}{Methods} & \multicolumn{1}{c|}{\multirow{2}{*}{Inputs}} & \multicolumn{3}{c|}{Safe (\%)} & \multicolumn{3}{c|}{Unsafe (\%)} & \multicolumn{3}{c}{Macro Overall (\%)} \\
& \multicolumn{1}{c|}{} & \multicolumn{1}{c}{Prec.} & \multicolumn{1}{c}{Rec.} & \multicolumn{1}{c|}{F1} & \multicolumn{1}{c}{Prec.} & \multicolumn{1}{c}{Rec.} & \multicolumn{1}{c|}{F1} & \multicolumn{1}{c}{Prec.} & \multicolumn{1}{c}{Rec.} & \multicolumn{1}{c}{F1} \\ \midrule
Random & \multicolumn{1}{c|}{N/A} & 55.1 & 51.9 & \multicolumn{1}{r|}{53.5} & 46.6 & 49.8 & \multicolumn{1}{r|}{48.1} & 50.9 & 50.9 & 50.8 \\ \midrule
\multirow{2}{*}{Detoxify}
& \multicolumn{1}{c|}{Resp} & 55.1 & 97.7 & \multicolumn{1}{r|}{70.4} & 65.9 & 5.3 & \multicolumn{1}{r|}{9.9} & 60.5 & 51.5 & 40.1 \\
& \multicolumn{1}{c|}{(Ctx,resp)} & 63.3 & 60.2 & \multicolumn{1}{r|}{61.7} & 55.3 & 58.5 & \multicolumn{1}{r|}{56.9} & 59.3 & 59.4 & 59.3 \\ \midrule
\multirow{2}{*}{PerspectiveAPI}
& \multicolumn{1}{c|}{Resp} & 55.1 & 96.7 & \multicolumn{1}{r|}{70.2} & 61.5 & 6.3 & \multicolumn{1}{r|}{11.5} & 58.3 & 51.5 & 40.8 \\
& \multicolumn{1}{c|}{(Ctx,resp)} & 63.3 & 54.9 & \multicolumn{1}{r|}{58.8} & 53.8 & 62.3 & \multicolumn{1}{r|}{57.7} & 58.5 & 58.6 & 58.3 \\ \midrule
BBF & \multicolumn{1}{c|}{(Ctx,resp)} & 62.8 & 62.7 &\multicolumn{1}{r|}{62.8} & 55.8 & 55.9 & \multicolumn{1}{r|}{55.9} & 59.3 & 59.3 & 59.3 \\ \midrule
BAD & \multicolumn{1}{c|}{(Ctx,resp)} & 68.0 & 74.5 & \multicolumn{1}{r|}{71.1} & 65.9 & 58.3 & \multicolumn{1}{r|}{61.8} & 66.9 & 66.4 & 66.5 \\
BAD+Medical & \multicolumn{1}{c|}{(Ctx,resp)} & 70.9 & 50.6 & \multicolumn{1}{r|}{59.0} & 56.2 & 75.3 & \multicolumn{1}{r|}{64.4} & 63.5 & 62.9 & 61.7 \\
\bottomrule
\multicolumn{11}{c}{After finetuning on \textsc{DiaSafety}} \\ \toprule
Detoxify & \multicolumn{1}{c|}{(Ctx,resp)} & 84.0 & 77.9 & \multicolumn{1}{r|}{80.8} & 75.8 & 82.4 & \multicolumn{1}{r|}{79.0} & 79.9 & 80.1 & 79.9 \\ \midrule
Ours & \multicolumn{1}{c|}{(Ctx,resp)} & 87.8 & 85.9 & \multicolumn{1}{r|}{86.8} & 83.6 & 85.8 & \multicolumn{1}{r|}{84.7} & 85.7 & 85.8 & 85.7 \\ \bottomrule
\end{tabular}}
\caption{Complete coarse-grain classification results on our test set using different methods. PerspectiveAPI and Detoxify without finetuning on \textsc{DiaSafety} only accept a single utterance. Thus we test by (1) inputting only the response and (2) concatenating context and response to give them access to the contextual information. \citet{xu2020recipes} also present a medical topic classifier in addition to the BAD classifier. We test responses in \textit{Unauthorized Expertise} using their medical topic classifier and use the BAD classifier for the other categories (shown in the row ``BAD+Medical''). We find that the result becomes even worse, because the medical topic classifier recognizes topics but does not determine safety; safe responses like ``maybe you should see a doctor'' are thus mislabeled.
}
\label{tab:2cls-res-apx}
\end{table*}
\begin{table}[tbp]
\scalebox{0.9}{
\begin{tabular}{@{}c|rrr|rrr@{}}
\toprule
\multicolumn{1}{l|}{\multirow{2}{*}{Category}} & \multicolumn{3}{c|}{With Context (\%)} & \multicolumn{3}{c}{W/o Context (\%)} \\
\multicolumn{1}{l|}{} & \multicolumn{1}{c}{Prec.} & \multicolumn{1}{c}{Rec.} & \multicolumn{1}{c|}{F1} & \multicolumn{1}{c}{Prec.} & \multicolumn{1}{c}{Rec.} & \multicolumn{1}{c}{F1} \\ \midrule
Safe & 88.9 & 80.0 & 84.2 & 86.4 & 74.7 & 80.1 \\
OU & 77.1 & 72.0 & 74.5 & 50.9 & 76.0 & 60.8 \\
RI & 66.1 & 87.2 & 75.2 & 55.8 & 51.1 & 53.3 \\
UE & 90.5 & 92.5 & 91.5 & 86.4 & 95.7 & 90.8 \\
TA & 91.3 & 93.8 & 92.6 & 67.9 & 85.6 & 75.8 \\
BO & 59.1 & 76.5 & 66.7 & 49.0 & 51.0 & 50.0 \\
\midrule
\textbf{Overall} & \textbf{78.9} & \textbf{83.7} & \textbf{80.8} & 66.1 & 72.4 & 68.5 \\ \bottomrule
\end{tabular}}
\caption{Results of fine-grain classification by the single model, with and without context. The unsafe categories are denoted by their initials.}
\label{tab:7cls-res-single}
\end{table}
\section{Additional Evaluation Results}
\label{apx:eval}
We evaluate the safety of DialoGPT-Medium and Blenderbot-400M with different decoding parameters, as shown in Figure \ref{fig:deco_res}.
Besides, as shown in Table \ref{tab:leaderboard}, we present a safety leaderboard of all our evaluated models. In the leaderboard, we list the utterance-level unsafe proportion as a separate column to compare utterance-level safety more intuitively.
\begin{table*}[tbp]
\centering
\begin{tabular}{@{}clrrrrrrc@{}}
\toprule
\textbf{Rank} & \multicolumn{1}{c}{\textbf{Models}} & \multicolumn{1}{c}{\textbf{OU}} & \multicolumn{1}{c}{\textbf{RI}} & \multicolumn{1}{c}{\textbf{UE}} & \multicolumn{1}{c}{\textbf{TA}} & \multicolumn{1}{c}{\textbf{BO}} & \multicolumn{1}{c}{\textbf{Utter}} & \multicolumn{1}{c}{\textbf{Overall}} \\ \midrule
\textbf{1}&\textbf{Blenderbot-S}&5.9&10.2&17.3&26.0&13.4&9.3&13.7\\
\textbf{2}&\textbf{Blenderbot-M}&4.5&9.2&14.7&45.0&5.4&3.7&13.7\\
\textbf{3}&\textbf{Blenderbot-L}&9.0&7.2&18.8&32.3&11.1&9.4&14.6\\
\textbf{4}&\textbf{Plato2-Base}&8.6&19.4&35.3&8.7&17.8&18.2&18.0\\
\textbf{5}&\textbf{Plato2-Large}&9.2&10.9&45.7&14.8&18.4&18.3&19.5\\
\textbf{6}&\textbf{DialoGPT-S}&17.4&45.1&27.8&16.6&28.3&7.5&23.8\\
\textbf{7}&\textbf{DialoGPT-M}&18.2&43.9&32.6&32.0&34.2&6.5&27.9\\
\textbf{8}&\textbf{DialoGPT-L}&19.0&40.3&35.2&35.9&34.2&6.7&28.5\\
\bottomrule
\end{tabular}
\caption{Dialogue safety leaderboard of conversational models including Blenderbot, DialoGPT, and Plato-2 with various parameter scales. ``Utter'' is computed by mean utterance-level unsafe proportion triggered by five categories of contexts. ``Overall'' is computed by macro average of five context-sensitive unsafe categories and utterance-level unsafety.}
\label{tab:leaderboard}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/decoding_compare3.pdf}
\caption{Evaluation results of Blenderbot-400M (above) and DialoGPT-medium (below) with different decoding parameters. We label the context-sensitive unsafe proportion (smaller score) and total unsafe proportion (larger score) for each bar. ``Overall'' is computed by macro average of five categories.}
\label{fig:deco_res}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/guideline_web2.pdf}
\caption{Our human annotation guideline interface. We present our annotation interface of \textit{Biased Opinion} as an example.}
\label{fig:guidline-web}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{figures/guideline.pdf}
\caption{Summary of our human annotation guidelines}
\label{fig:guidline}
\end{figure*}
\section{Case Study}
As shown in Table \ref{tab:case-study}, we list some examples (both safe and unsafe) generated by DialoGPT, Blenderbot, and Plato-2 as a case study. Based on our observations, Plato-2 tends to utter explicitly insulting words, but sometimes it merely quotes the context without endorsing it. Blenderbot has the best safety performance, but it can be too eager to express agreement, sometimes even when the context is unsafe.
\begin{table*}[t]
\centering
\includegraphics[width=\linewidth]{figures/examples2.pdf}
\caption{
Cherry-picked responses generated by different conversational models. The contexts are from our test set, and the generated responses include safe, utterance-level unsafe, and context-sensitive unsafe examples. We preserve the typos in the contexts and responses. The contexts and responses do not represent our views or opinions.
}
\label{tab:case-study}
\end{table*}
\section{Reproducibility}
\paragraph{Computing Infrastructure}
Our models are built upon \texttt{PyTorch} and \texttt{transformers} \cite{wolf-etal-2020-transformers}. For model training, we use GeForce RTX 2080 GPU cards with 11 GB of memory.
\paragraph{Experimental Settings}
We use RoBERTa-base\footnote{\url{https://huggingface.co/roberta-base}} from Huggingface as our model architecture to identify the different categories of unsafety. For each category, we set the hyper-parameters shown in Table \ref{tab:hyper-params} to obtain the best result on the validation set. Most hyper-parameters are the defaults from Huggingface \texttt{Transformers}.
\begin{table}[H]
\scalebox{0.9}{
\begin{tabular}{cc}
\toprule
Hyper-parameter & Value or Range \\ \midrule
Maximum sequence length & 128 \\
Optimizer & AdamW \\
Learning rate & \{2,5\}$e$\{-6,-5,-4,-3\} \\
Batch size & \{4,8,16,32,64\} \\
Maximum epochs & 10 \\
\bottomrule
\end{tabular}}
\caption{Hyper-parameter settings}
\label{tab:hyper-params}
\end{table}
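The search space in the hyper-parameter table can be enumerated directly; a small sketch (the selection of the best setting per category on the validation set is not shown):

```python
import itertools

def hyperparameter_grid():
    """Cross the learning-rate range {2,5}e{-6..-3} with the batch
    sizes {4, 8, 16, 32, 64} from the hyper-parameter table."""
    lrs = [2e-6, 5e-6, 2e-5, 5e-5, 2e-4, 5e-4, 2e-3, 5e-3]
    batch_sizes = [4, 8, 16, 32, 64]
    return list(itertools.product(lrs, batch_sizes))
```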
For applying BBF and BAD to our test set, we utilize \texttt{ParlAI} \cite{miller2017parlai}. For safety evaluation,
we load checkpoints from the Huggingface model library\footnote{\url{https://huggingface.co/models}} for DialoGPT and Blenderbot. For Plato-2, we use \texttt{PaddlePaddle}\footnote{\url{https://github.com/PaddlePaddle/Paddle}} and \texttt{PaddleHub}\footnote{\url{https://github.com/PaddlePaddle/PaddleHub}} to generate responses.
\section{Introduction}
\input{Table_1_Compare}
\input{1-Introduction}
\section{Related work}
\input{2-related_work}
\vspace{-1mm}
\section{Safety Taxonomy}
\input{Table_taxonomy}
\input{3-taxonomy}
\vspace{-1mm}
\section{Dataset Collection}
\input{4-dataset}
\input{Table_statistics}
\section{Context-sensitive Unsafety Detection}
\input{5-DetectionExperiments}
\vspace{-2mm}
\section{Dialogue System Safety Evaluation}
\input{6-Evaluation}
\vspace{-2mm}
\section{Conclusion and Future Work}
\input{7-Conclusion}
\section{Acknowledgment}
\input{8-Acknowledgement}
\section*{Limitations and Ethics}
\input{9-Ethical-Consideration}
\section{INTRODUCTION}
It is an interesting problem how material structure affects transport properties.
Amorphous solids often show lower conductivity than crystals.
For example, the thermal conductivity of silica glass, the material used for window glass, is one-eighth that of quartz, the crystal made from the same components.
In other cases, amorphous aluminum nitride shows a maximum thermal conductivity of 85\% of that of the pure crystal\cite{aln}.
This fact indicates that the thermal conductivity of amorphous solids can be comparable to that of crystalline solids.
A similar feature is also found for electric transport in amorphous oxide semiconductors\cite{aos}.
The thermal conductivities of low-density fluids and crystals have been studied much more thoroughly than those of amorphous solids.
In theoretical approaches, linear response theory describes the macroscopic thermal transport of crystals well.
Numerical approaches employing molecular dynamics techniques have also been used to study the thermal conductivities of fluids and crystals from microscopic dynamics\cite{ogushi,murakami}.
On the other hand, the complexity of amorphous structures makes it more difficult to study their thermal properties.
As one theoretical approach, mode-coupling theory has been developed to treat localized properties that arise from microscopic structures\cite{mct}.
However, its treatment of structural effects is insufficient to fully understand these properties.
Numerical approaches are therefore particularly useful for understanding macroscopic properties from microscopic structures.
In particular, the thermal properties of amorphous solids in nonequilibrium states should be revealed for applications to novel materials.
Before turning to the conduction problem, we note the long history of studies of static amorphous structures.
Since the 1960s\cite{scott1,scott2,bernal}, the structural features of jammed packings have been extensively studied;
in particular, those of the densest random packing, called random close packing (RCP), have been revealed.
Recently, heat transport in jammed particle packings has been studied to
understand the dynamical properties of glass\cite{xu,mourzenko,coelho}.
However, the mechanisms by which such microscopic structures contribute
to thermal conduction in the jammed state remain unclear.
\begin{figure}[t]
\begin{center}
\includegraphics[width=9cm]{randompacking.eps}
\caption{(a) A typical packing structure obtained from our packing method. Colors of the particles correspond to the local order parameter $q$ explained in the text, showing that the structure is random. (b) The percolating cluster of closely located particles.}
\label{randompacking}
\end{center}
\end{figure}
The purpose of this article is to investigate thermal conductivities of
jammed packings by the nonequilibrium molecular dynamics simulations.
To examine structural effects on transport, we consider hard-sphere systems.
Hard spheres feel the perfect excluded-volume effect of other particles, which requires special procedures to realize random packings.
Since the previously developed methods\cite{rcp_JT,rcp_CW,rcp_LS} require considerable computational time,
we develop two efficient methods to obtain randomly packed structures.
Further, by imposing a temperature gradient,
we will show that the thermal conductivities of random packings
are comparable with those of crystals under the same pressure.
We will also show that this is due to characteristic transport paths that enhance energy transport in the random structures.
This paper is organized as follows. In the next section,
we improve the previous packing methods to obtain random packings efficiently.
Then we analyze local structures of the obtained packing in \S \ref{structure}.
Dynamic properties of the packings under thermal gradient are shown in \S \ref{ht_result}.
The last section is devoted to summary and discussions.
\section{RANDOM PACKING\label{methods}}
\subsection{Model and Methods}
In this section, we present two efficient methods to produce random packings.
We consider hard-core particles. The interparticle potential is described by
\begin{align}
\phi (\boldsymbol r) = \left\{
\begin{matrix}
\infty &r \leq \sigma \\
0 &r>\sigma
\end{matrix}
\right.,
\end{align}
where $r$ and $\sigma$ denote a distance between particle pairs
and the diameter of particles, respectively.
Since a random-packed structure is not a thermal equilibrium structure,
some artificial procedure is needed to realize it.
This is not a simple task, since randomly placing particles causes overlaps, which are prohibited by the hard-core potential.
To solve this, some efficient packing methods have been developed\cite{rcp_JT,rcp_CW,rcp_LS}.
Some of these methods take the following approach.
We prepare a random initial configuration of hard-core particles with sufficiently small diameters.
Then, by increasing the diameters, we obtain densely packed structures.
In the original procedure, the diameters increase at a constant expansion rate as the simulation time increases,
so the particles retain room to expand further in each simulation step.
To expand more efficiently, we adopt the two techniques explained below.
In our simulation, we employ the event-driven molecular dynamics method\cite{isobe}.
In this method, the simulation proceeds by particle collision events rather than by constant time-step integration.
\subsubsection{Monodisperse packing method}
In the first method, we adopt a variable expansion rate to improve the original method\cite{rcp_LS}.
In this method, the diameters of the particles increase uniformly,
so we call it the ``monodisperse packing method''.
We consider an $L_x \times L_y \times L_z$ three-dimensional box with periodic boundaries.
In this box, $N$ spheres are placed randomly, with initial radii set to unity.
The initial velocities of the particles are randomly assigned from the Maxwell distribution,
and we adjust the total velocity of the particles to zero.
The initial state is produced by adding particles one by one into the simulation box;
if a newly entering particle overlaps with already existing particles, the trial is rejected.
The rejection probability is about 5\%.
We choose the initial packing fraction to be $\phi/\phi_{\rm SC} \sim 0.5$, where $\phi_{\rm SC}$ denotes the volume fraction of the simple cubic packing ($\phi_{\rm SC} = \pi/6 \sim 0.52$).
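The initial-state construction can be sketched as follows. This is a minimal illustration of one-by-one insertion with overlap rejection under periodic boundaries, not the authors' code; the function name `random_insertion` and the dilute test parameters are ours.

```python
import random

def random_insertion(n, box, diameter, max_tries=100000, seed=0):
    """Place n non-overlapping spheres of a given diameter uniformly in a
    periodic box by rejection sampling, as in the initial-state procedure."""
    rng = random.Random(seed)
    pts, rejected = [], 0
    while len(pts) < n:
        if max_tries <= 0:
            raise RuntimeError("insertion failed: box too crowded")
        max_tries -= 1
        cand = [rng.uniform(0.0, L) for L in box]
        ok = True
        for p in pts:
            d2 = 0.0
            for c, q, L in zip(cand, p, box):
                dx = abs(c - q)
                dx = min(dx, L - dx)  # minimum-image convention
                d2 += dx * dx
            if d2 < diameter * diameter:  # overlap: reject the trial
                ok = False
                rejected += 1
                break
        if ok:
            pts.append(cand)
    return pts, rejected

# dilute example (the paper's phi/phi_SC ~ 0.5 uses far more particles)
pts, rejected = random_insertion(50, (20.0, 20.0, 40.0), 2.0)
```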
We wish to expand the diameters of the particles while keeping their random initial positions.
To obtain a dense packing, however, reconfiguration is necessary because of the hard-core potential;
this is performed by the event-driven dynamics.
In accordance with this scheme, we take the following steps:
\begin{enumerate}
\item Find the minimum gap $l$ between particles over all particle pairs.
\item Increase the diameters of all particles uniformly by $xl$, where the parameter $x\,(<1)$ is the packing speed.
\item Advance the time until the earliest collision occurs.
\item Translate all particles by half of the next collision time.
\item Repeat steps 1 to 4 until the packing fraction reaches the target value.
\end{enumerate}
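The core of steps 1 and 2 can be sketched as below. Collision detection and the event-driven time advancement (steps 3 and 4) are omitted, so this only illustrates the gap-based uniform expansion, not the full algorithm; periodic images are also ignored in this sketch.

```python
import itertools
import math

def min_gap(positions, sigma):
    """Step 1: minimum surface-to-surface gap l over all particle pairs
    (periodic images omitted in this sketch)."""
    return min(math.dist(p, q) - sigma
               for p, q in itertools.combinations(positions, 2))

def expand_uniform(positions, sigma, x):
    """Step 2: grow every diameter by x*l, where l is the current minimum
    gap and x (< 1) is the packing speed, so no overlap can be created."""
    return sigma + x * min_gap(positions, sigma)

# three unit spheres on a line: surface gaps are 0.5, 1.5 and 3.0
pos = [(0.0, 0.0, 0.0), (1.5, 0.0, 0.0), (4.0, 0.0, 0.0)]
new_sigma = expand_uniform(pos, 1.0, 0.99)  # 1 + 0.99*0.5 = 1.495 < 1.5
```

Because the growth is bounded by the minimum gap, the expanded diameter can never exceed the closest pair distance, so the hard-core constraint is preserved by construction.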
\subsubsection{Polydisperse packing method}
Although the above method enables us to produce dense packings, it incurs a high computational cost.
To overcome this, we improve the packing process of the former method.
In this second method, the expansion rates differ from particle to particle, so we call it the ``polydisperse packing method''.
We first determine a final diameter $\sigma_{\rm final}$.
The initial configuration is obtained in the same way as in the first method, and then we take the following steps:
\begin{enumerate}
\item Find the local minimum gap $l_i$ between particle $i$ ($i=1,\ldots,N$) and its surrounding particles.
\item Increase the diameter $\sigma_i$ of particle $i$ by $xl_i$. If the new diameter exceeds $\sigma_{\rm final}$, it is set to $\sigma_{\rm final}$.
\item Repeat steps 1 and 2 until the new diameters of all particles have been calculated.
\item Advance the time until $N$ collisions occur.
\item Repeat steps 1 to 4 until the diameters of all particles reach $\sigma_{\rm final}$.
\end{enumerate}
We have introduced the packing speed $x$ in the above two methods.
Choosing $x=0.99$, the maximum speed in our simulations, we obtain the random packing shown in Fig.~\ref{randompacking}.
This parameter controls the packing process, ranging from quench to anneal, as explained below.
\subsection{Packing process}
The monodisperse packing method yields a homogeneous packing at any simulation time.
If no target packing fraction is set before the simulation starts, the packing finally reaches the closest packing.
On the other hand, once the particles jam, considerable computation is required before the jammed packing crystallizes.
To observe the dynamical properties of the monodisperse packing method, we terminate each simulation when the maximum displacement in a cycle becomes less than $10^{-6}$.
Figure \ref{mono_quench_anneal} shows the evolution of the packing fraction $\phi/\phi_{\rm SC}$ and the mean-square displacement (MSD) as functions of the number of collisions per particle, for $x$ ranging from $0.99$ to $0.0001$ in the monodisperse packing method.
Here, the MSD is defined as the average of the squared displacement of each particle from its initial position.
In each simulation, the system size is $L_x=L_y=20$, $L_z=40$, and $N=1000$.
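The MSD as defined here can be computed as in the following sketch (trajectory unwrapping across periodic boundaries is not shown; the helper name is ours):

```python
import numpy as np

def mean_square_displacement(initial, current):
    """MSD: average squared displacement of each particle from its
    initial position."""
    disp = np.asarray(current, dtype=float) - np.asarray(initial, dtype=float)
    return float(np.mean(np.sum(disp ** 2, axis=1)))

x0 = np.zeros((4, 3))
x1 = np.array([[1., 0., 0.], [0., 2., 0.], [0., 0., 2.], [1., 1., 1.]])
msd = mean_square_displacement(x0, x1)  # (1 + 4 + 4 + 3)/4 = 3.0
```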
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{mono_pf_msd.eps}
\caption{The evolution of the packing fractions (bottom) and MSDs (top) as a function of the number of collisions in the monodisperse packing method. The lines correspond to $\phi_{\rm freeze}$ and the maximum of MSD in the top and the bottom figures.}
\label{mono_quench_anneal}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{q_distribution.eps}
\caption{The frequency distribution of the order parameter $q$. The arrows at $q=0.92$ and 1 correspond to the existence of the hexagonal close-packed and the face-centered cubic structures, respectively. }
\label{q_distribution}
\end{center}
\end{figure}
For the fastest monodisperse packing, $x=0.99$, the displacement of each particle is between two and three times the final radius ($\sim 1.34$).
This means that each particle almost keeps its initial random position.
In this case, the packing fraction approaches $\phi_{\rm RCP}/\phi_{\rm SC} = 1.22$,
the packing fraction of random close packing (RCP).
Thus larger values of $x$ correspond to a quench process.
For $x<0.001$, the displacement continues to grow beyond $2\sigma$, which indicates structural relaxation toward crystallization.
We note that the saturation of the MSD at $(L_x^2 + L_y^2 + L_z^2)/12$, the maximum value in the present geometry, does not indicate freezing.
When the expansion speed is sufficiently slow, the packing fraction grows beyond $\phi_{\rm RCP}$ and approaches that of the face-centered cubic (FCC) packing,
indicating that the system crystallizes.
In this situation, the packing structure becomes a closest packing, such as FCC or hexagonal close packing (HCP), which are thermodynamically stable.
In order to distinguish whether the packing is crystalline or not, we observe a simple order parameter for the $i$-th particle, described as
\begin{align}
q_i = \frac{1}{12}\sum_{j=1}^{n^i_b} |\cos\Theta_j|,\label{order}
\end{align}
where $n^i_b$ denotes the number of neighboring particles within the local gap $l_i<0.15\sigma_{\rm final}$.
The angle $\Theta_j$ is defined as the maximum bond angle between the $j$-th bond and the other neighboring bonds.
This parameter takes the value 1 for FCC and 0.92 for HCP.
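A possible implementation of this order parameter is sketched below; the neighbor search with the local-gap criterion is assumed to have been done already, and the function name is ours. For ideal FCC, each of the 12 bonds has an antipodal partner, so every $\Theta_j=\pi$ and $q=1$, consistent with the value quoted above.

```python
import numpy as np

def order_parameter(center, neighbors):
    """q_i = (1/12) * sum_j |cos Theta_j|, where Theta_j is the maximum
    bond angle between the j-th bond and the other neighboring bonds."""
    bonds = np.asarray(neighbors, dtype=float) - np.asarray(center, dtype=float)
    bonds /= np.linalg.norm(bonds, axis=1, keepdims=True)
    cosines = bonds @ bonds.T            # cosines of all bond-pair angles
    np.fill_diagonal(cosines, 1.0)       # ignore each bond paired with itself
    max_angle_cos = cosines.min(axis=1)  # maximum angle <-> minimum cosine
    return float(np.abs(max_angle_cos).sum() / 12.0)

# FCC: 12 neighbors, each bond has an antipodal partner (Theta_j = pi)
fcc = [(1, 1, 0), (-1, -1, 0), (1, -1, 0), (-1, 1, 0),
       (1, 0, 1), (-1, 0, -1), (1, 0, -1), (-1, 0, 1),
       (0, 1, 1), (0, -1, -1), (0, 1, -1), (0, -1, 1)]
q_fcc = order_parameter((0, 0, 0), fcc)  # -> 1.0
```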
Figure \ref{q_distribution} shows a frequency distribution of $q$.
For $x<0.001$, peaks at $q=1$ and $0.92$ appear prominently;
in this case the packing process corresponds to anneal.
On the other hand, as $x$ increases, the dominant peaks of the distribution appear below $q=0.85$;
in this case, the packing process corresponds to quench, since no local crystalline structure appears (see Fig.~\ref{randompacking}(a)).
This crossover corresponds to whether the MSD is larger or smaller than $(2\sigma)^2$ when the packing fraction passes through the freezing point, $\phi_{\rm freeze}/\phi_{\rm SC}=0.934$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{poly_pf_msd.eps}
\caption{The evolution of packing fractions (bottom) and MSDs (top) in the polydisperse packing method. The lines correspond to $\phi_{\rm freeze}$ and the maximum of MSD in the top and the bottom figures.}
\label{poly_quench_anneal}
\end{center}
\end{figure}
A similar crossover between anneal and quench is also observed with the polydisperse packing method.
This method reduces the calculation cost drastically, so we can obtain larger packings.
Figure \ref{poly_quench_anneal} shows the evolution of $\phi/\phi_{\rm SC}$ and the MSD in the polydisperse packing method.
In each simulation, the system size is $L_x=L_y=20$, $L_z=400$, and $N=10000$, which is ten times longer in the $z$-direction than that used in the former method.
We fix the final packing fraction at $\phi_{\rm final}/\phi_{\rm SC} = 1.203$, since the packing fraction cannot reach $\phi_{\rm RCP}/\phi_{\rm SC}$ for the following reason.
Highly dense packings cannot be obtained by the polydisperse packing method:
in our simulations, it is hard to expand all particles to the final diameter for $\phi_{\rm final}/\phi_{\rm SC} > 1.214$ with $x=0.99$.
This is because some particles rapidly expand and freeze,
and the remaining particles then cannot expand their diameters due to the lack of free space.
Thus, while the polydisperse method has a huge advantage in calculation cost, it is restricted to packing fractions up to $\phi_{\rm final}/\phi_{\rm SC} = 1.214$ at the fastest packing speed $x=0.99$.
\section{ANALYSIS OF PACKING STRUCTURE\label{structure}}
The packings obtained by the monodisperse and the polydisperse packing methods show the same properties in the analysis below;
therefore, the figures in this section show results only for the polydisperse case.
A major signature of random packing structures is the absence of long-range order.
To distinguish the packing structures, the radial distribution function (RDF) $g(r)$ is calculated,
shown for several values of $x$ in Fig.~\ref{rdf_nogawa}.
The packings for smaller $x$ show crystalline peaks, as expected for FCC and HCP structures.
On the other hand, the packings for larger $x$ show only two characteristic
peaks at $r/\sigma = \sqrt{3}$ and $2$, which is a general feature of RCP\cite{silbert,donev}.
We note that the peak at $r/\sigma=1$ corresponds to particles in contact.
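For reference, the RDF can be estimated as in the following sketch, which histograms minimum-image pair distances and normalizes by the ideal-gas expectation $\rho$ times the shell volume; the binning parameters and function name are illustrative, not from the paper.

```python
import numpy as np

def radial_distribution(positions, box, nbins=100, rmax=None):
    """g(r) under periodic boundaries: histogram of minimum-image pair
    distances divided by the ideal-gas expectation rho * shell volume."""
    pos = np.asarray(positions, dtype=float)
    box = np.asarray(box, dtype=float)
    n = len(pos)
    if rmax is None:
        rmax = float(box.min()) / 2.0
    diff = pos[None, :, :] - pos[:, None, :]
    diff -= box * np.round(diff / box)        # minimum-image convention
    r = np.sqrt((diff ** 2).sum(-1))[np.triu_indices(n, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, rmax))
    rho = n / box.prod()
    ideal = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3) * rho
    centers = 0.5 * (edges[1:] + edges[:-1])
    return centers, hist / (ideal * n / 2.0)

# sanity check: for uniformly random points, g(r) fluctuates around 1
centers, g = radial_distribution(
    np.random.default_rng(1).uniform(0.0, 10.0, (400, 3)),
    (10.0, 10.0, 10.0), nbins=20)
```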
We also calculate the angular distribution function $P(\theta)$, shown in Fig.~\ref{adf_nogawa},
where $\theta$ denotes the bond-pair angle.
We find the following two characteristic features in $g(r)$ and $P(\theta)$ of quenched packings.
First, while the peak at $r/\sigma = 2$ appears in $g(r)$,
the rectilinear arrangement (see Fig.~\ref{line_cage} (left)) appears to be absent, since $P(\theta=\pi)\simeq 0$;
this implies that the peak comes from other structures, e.g., cage structures such as the honeycomb structure (see Fig.~\ref{line_cage} (right)).
Second, while the other characteristic peaks of crystals do not appear,
the peak at $r/\sigma = \sqrt{3}$ remains.
These features have also been observed in a previous study\cite{donev}, but their origins have not been discussed.
To clarify the origins of these peaks, we consider two cases of contacting particles
and their spatial contact bond angles, illustrated in Figs.~\ref{reweight3} and \ref{reweight4}.
As in Fig.~\ref{reweight3} (top), we first consider three particles.
The possible positions of the third particle around
the two contacting particles are restricted to the range $\pi/3 < \theta_3 < \pi$.
If particles are distributed homogeneously in this range, the probability of finding $\theta_3$, $f_3(\theta_3)$, is proportional to $\sin\theta_3$, as illustrated in Fig.~\ref{reweight3} (top).
In this three-particle case, we obtain a reweighted distribution $p_3(\theta_3)$ described as
\begin{align}
p_3(\theta_3) &\equiv P(\theta_3)f_3(\theta_3)^{-1} \qquad \frac{\pi}{3} < \theta_3 < \pi\nonumber\\
f_3(\theta_3) &= 2\pi \sin(\theta_3).
\end{align}
This reweighting also explains why no peak appears at $\theta=\pi$ (see Fig.~\ref{reweight3} (bottom)).
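The reweighting $p_3 = P/f_3$ can be applied to a measured histogram as in the sketch below. As a self-consistency check, the example feeds in an angular distribution equal to the background factor $f_3$ itself, for which $p_3$ is flat by construction; the function name is ours.

```python
import numpy as np

def reweight_three_body(theta, P):
    """p3(theta3) = P(theta3) / f3(theta3) with the homogeneous-background
    factor f3(theta3) = 2*pi*sin(theta3), valid for pi/3 < theta3 < pi."""
    theta = np.asarray(theta, dtype=float)
    return np.asarray(P, dtype=float) / (2.0 * np.pi * np.sin(theta))

# if the measured P equals the background itself, p3 is flat by construction
theta = np.linspace(np.pi / 3 + 0.01, np.pi - 0.01, 50)
p3 = reweight_three_body(theta, 2.0 * np.pi * np.sin(theta))  # -> all ones
```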
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{rdf_nogawa.eps}
\caption{Radial distribution functions for the various packing speeds.}
\label{rdf_nogawa}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{adf_nogawa.eps}
\caption{Angular distribution functions for the various packing speeds.}
\label{adf_nogawa}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=6cm]{line_cage.eps}
\caption{ The rectilinear arrangement and the honeycomb structure. Both structures contribute to the peak at $r/\sigma=2$ in $g(r)$.}
\label{line_cage}
\end{center}
\end{figure}
Similarly, we consider a four-particle unit, as in Fig.~\ref{reweight4} (top).
We assume a triangular arrangement of three particles and then consider the positions of the fourth particle.
Since the triangular arrangement corresponds to $\theta_3 = \pi/3$ in
the three-particle case, it is reasonable to assume
that such arrangements are frequently observed in the packing.
We obtain the reweighted distribution $p_4(\theta_4)$ as the following expression:
\begin{align}
p_4(\theta_4) &\equiv P(\theta_4)f_4(\theta_4)^{-1} \qquad \frac{\pi}{3} < \theta_4 < \frac{2\pi}{3}\nonumber\\
f_4(\theta_4) &= \frac{\sin\theta_4}{\sqrt{(1-\cos\theta_4)(1/2+\cos\theta_4)}}.
\end{align}
As shown in Fig.~\ref{reweight4} (bottom), the reweighted distribution $p_4(\theta_4)$
is approximately flat in the region $\pi/3 < \theta_4 < 2\pi/3$.
This result indicates that the peak structure seen in $P(\theta)$ is solely the result of the reweighting probability in this region.
We have thus identified the characteristic local structure of random packing.
In our analysis, the random packings obtained by our two methods show the same random structure.
Therefore, in the following section, we show results
obtained only by the polydisperse packing method
because of its computational inexpensiveness.
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=4cm]{particle_arrangement_3.eps}\\
\includegraphics[width=8.4cm]{adf_p3.eps}
\end{tabular}
\end{center}
\caption{ Possible arrangement of three particles in contact (top). Arrow illustrates the ranges of the bond-angle. The reweighted angular distribution function of $\theta_3$ (bottom).}
\label{reweight3}
\end{figure}
\begin{figure}[t]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=4cm]{particle_arrangement_4.eps} \\
\includegraphics[width=8.4cm]{adf_p4.eps}
\end{tabular}
\end{center}
\caption{ Possible arrangement of four particles in contact (top). Arrow illustrates the ranges of the bond-angle. The reweighted angular distribution function of $\theta_4$ (bottom).}
\label{reweight4}
\end{figure}
\section{HEAT TRANSPORT IN RANDOM PACKING\label{ht_result}}
\subsection{Simulation settings}
We investigate the thermal properties of the random packing and compare them with those of crystals.
As the crystalline structure,
we adopt FCC, which is the most stable structure in the three-dimensional hard-core system.
Note that the thermal conductivity depends strongly on the packing fraction.
Therefore, we use slightly loose packings for both the FCC and random cases and compare them
at the same density.
We impose periodic boundary conditions in the $x$- and $y$-directions
and set heat walls on both sides in the $z$-direction in the following way.
First, we make homogeneous packings with periodic boundary conditions.
Then the particles in the regions $z=0$ to $2\sigma/\sqrt{3}$ and $z=L_z-2\sigma/\sqrt{3}$ to $L_z$ are regarded as part of the heat walls
and are fixed throughout the simulations.
For FCC, we choose the (110) surface as the heat walls.
When a free particle collides with a particle in a wall,
it bounces back with a new velocity randomly chosen from the thermal equilibrium distribution at the wall temperature.
The velocity distribution with temperature $T_{\rm B}$ is described as
\begin{align}
f(v_{\rm n}, v_{\rm t,1}, v_{\rm t,2}) &= \phi(v_{\rm n})\psi(v_{\rm t,1})\psi(v_{\rm t,2})\\
\psi(v) &= \frac{1}{\sqrt{2\pi k_{\rm B}T_{\rm B}}} \exp\left( -\frac{v^2}{2k_{\rm B}T_{\rm B}}\right)\\
\phi(v) &= \frac{1}{k_{\rm B}T_{\rm B}}|v| \exp\left( -\frac{v^2}{2k_{\rm B}T_{\rm B}}\right),
\end{align}
where $v_{\rm n}$ and $v_{\rm t,i}$ denote the normal and orthonormal tangential velocity components at the colliding point on the wall particle.
The Boltzmann constant $k_{\rm B}$ is chosen to be unity.
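Sampling from these wall distributions is straightforward: the tangential components are Gaussian ($\psi$), and the flux-weighted normal distribution $\phi$ can be sampled by inverse transform, $|v_{\rm n}| = \sqrt{-2k_{\rm B}T_{\rm B}\ln u}$ with $u$ uniform on $(0,1]$. A sketch (function name is ours, not from the paper):

```python
import math
import random

def wall_velocity(T_B, kB=1.0):
    """Sample a post-collision velocity at a thermal wall: tangential
    components are Gaussian (psi), and the normal speed follows the
    flux-weighted distribution phi, sampled by inverse transform."""
    s = math.sqrt(kB * T_B)
    u = 1.0 - random.random()                       # u in (0, 1]
    v_n = math.sqrt(-2.0 * kB * T_B * math.log(u))  # directed into the box
    return v_n, random.gauss(0.0, s), random.gauss(0.0, s)

random.seed(3)
v = wall_velocity(6.0)  # e.g. a sample from the low-temperature wall, T_L = 6
```

For the Rayleigh-type $\phi$, the mean of $v_{\rm n}^2$ is $2k_{\rm B}T_{\rm B}$, while each Gaussian tangential component has mean square $k_{\rm B}T_{\rm B}$, which can be used as a quick sanity check of the sampler.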
Employing a similar heat bath, Murakami {\it et al.}\cite{murakami} showed that
hard-core fluid systems produce Fourier-type heat conduction.
They also investigated the system-size dependence of the thermal conductivity,
which is consistent with the theoretical predictions of the Kubo formula and the
long-time tail of the autocorrelation function of the heat flux.
\subsection{Definition of physical quantities}
Our system attains a nonequilibrium steady state,
in which heat steadily flows from the high-temperature side, $T_{\rm H}$ around $z=0$,
to the low-temperature side, $T_{\rm L}$ ($T_{\rm H} > T_{\rm L}$) around $z=L_z$.
In the steady state, it is reasonable to assume local equilibrium\cite{murakami}, and then temperature is defined as
\begin{align}
T(z)& = \frac{1}{d}\left.\left\langle\sum_{i \in B(z)} \boldsymbol v_i^2\right\rangle \right/\left\langle \sum_{i \in B(z) } 1 \right\rangle,
\end{align}
where $d$ and $\boldsymbol v_i$ denote the spatial dimension and the velocity vector of the $i$-th particle, respectively.
The bracket denotes a time average,
and $B(z)$ denotes the set of particles in the region $(z-\delta/2,z + \delta/2)$.
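A single-snapshot version of this slab temperature can be sketched as follows (the paper additionally averages over time; we take $k_{\rm B}=m=1$ and the function name is ours):

```python
import numpy as np

def slab_temperature(z, v, z0, delta, d=3):
    """Kinetic temperature of the slab (z0 - delta/2, z0 + delta/2):
    T = <sum_i v_i^2> / (d * <N>) with kB = m = 1. A single snapshot is
    shown; the text additionally averages over time."""
    z = np.asarray(z, dtype=float)
    v = np.asarray(v, dtype=float)
    mask = np.abs(z - z0) < delta / 2.0
    if not mask.any():
        return 0.0
    return float((v[mask] ** 2).sum() / (d * mask.sum()))

z = [0.1, 0.2, 5.0]                            # only the first two are in the slab
v = [[1., 1., 1.], [1., -1., 1.], [9., 9., 9.]]
T = slab_temperature(z, v, z0=0.0, delta=1.0)  # (3 + 3)/(3 * 2) = 1.0
```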
As in the low-density fluid case\cite{murakami} with $\phi/\phi_{\rm SC}=0.69$,
the present random packings also exhibit linear temperature profiles (Fig.~\ref{temp_dens}).
The well-scaled linear profile satisfies the necessary condition for evaluating the thermal conductivity.
Heat conduction generally comes from energy and mass transport.
In the densely packed state, energy transport by collisions mainly contributes to the thermal conductivity:
since the particles can hardly move from their initial positions, the contribution from mass transport can be neglected.
\begin{figure}[!h]
\includegraphics[width=9cm]{temperature.eps}
\caption{Temperature profiles in the random packing with different system sizes.}
\label{temp_dens}
\end{figure}
In the steady state, energy is conserved in the bulk of our system, so we measure the heat flux
only through the energy received from the high-temperature heat wall.
Thus the transferred heat is defined as
\begin{align}
Q(t_n) &= \sum^n_{k=1} (\Delta E)_k,
\end{align}
where $(\Delta E)_k$ denotes the energy received at the $k$-th collision with the wall,
and $t_n$ denotes the total elapsed time after $n$ collisions with the wall.
The thermal conductivity is then defined as
\begin{align}
\kappa(L_z) = - \frac{\langle J_z\rangle}{\nabla T},
\end{align}
where $J_z=Q(t_n)/t_n$.
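A minimal evaluation of $\kappa$ from simulation output might look like the following sketch. Note two assumptions of ours, not details stated in the text: the flux is divided by the cross-sectional area $L_xL_y$ to obtain a flux density, and $\nabla T$ is estimated by a linear fit of the profile.

```python
import numpy as np

def thermal_conductivity(energy_kicks, t_total, area, z, T):
    """kappa = -<J_z> / grad T: flux density from the energy received at
    the hot wall over the elapsed time and cross section; the gradient
    comes from a linear fit of the temperature profile T(z)."""
    Jz = sum(energy_kicks) / (t_total * area)
    gradT = np.polyfit(z, T, 1)[0]
    return -Jz / gradT

z = np.linspace(0.0, 240.0, 25)
T = 18.0 - (12.0 / 240.0) * z        # linear profile from T_H = 18 to T_L = 6
kappa = thermal_conductivity([0.5] * 100, t_total=10.0, area=20.0 * 20.0,
                             z=z, T=T)
```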
\subsection{Results}
Figure \ref{dvt} shows the packing-fraction dependence of the thermal conductivity for both the random packing and FCC.
In this simulation, we fix the temperatures ($T_{\rm H}=18, T_{\rm L}=6$) and
the box size to $L_z = 240$, $L_x=L_y=20$ for 6000 particles.
Below $\phi/\phi_{\rm SC} = 1.15$, the random packing rapidly crystallizes, since the particles have enough space for reconfiguration.
Typically, such crystalline nuclei appear near the high-temperature wall and then grow.
This phenomenon has also been observed experimentally with rigid particles\cite{hard_crystal}.
Indeed, crystal growth under a thermal gradient is a technique widely used in engineering,
and it has also been applied to colloidal systems\cite{colloid}.
In the range $\phi/\phi_{\rm SC} > 1.15$, the thermal conductivities of the random packing and FCC show diverging behavior
around the distinct packing fractions of RCP and the closest packing, respectively.
Note that, in the random packing simulations, the system crystallizes with a small probability even near the RCP point.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{density_variant_tc.eps}
\caption{Thermal conductivities of the random packing (RP) and FCC with $N=6000$ against the packing fraction. }
\label{dvt}
\end{center}
\end{figure}
We also investigate the system-size dependence of the thermal conductivity
at a constant packing fraction $\phi/\phi_{\rm SC} = 1.203$, slightly smaller than the RCP fraction, as shown in Fig.~\ref{thermal_conductivity}.
We find that the thermal conductivity of the random packing is far larger than that of FCC, which is discussed later.
For FCC, $\kappa$ varies as $L_z^{-1/2}$, which is consistent
with linear response theory, i.e., the Kubo formula, in which the thermal conductivity is described as
\begin{align}
\kappa = \lim_{t\to\infty}\lim_{V\to\infty}\frac{1}{Vk_{\rm B}T^2}\int^t_0 dt'\langle J(0)J(t')\rangle.
\end{align}
The autocorrelation function has a slow decay $t^{-d/2}$,
commonly called the long-time tail in hard-core particle systems\cite{alder2,alder3}.
Thus the thermal conductivity of a finite-size system shows the following
size dependence:
\begin{align}
\kappa (L_z) \sim
\left\{
\begin{array}{cc}
\log L_z & \text{in 2D}\\
L_z^{-1/2} & \text{in 3D} \label{size_dependence}
\end{array}\right..
\end{align}
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{FCC_RP_tc.eps}
\caption{System size dependence of the thermal conductivities of the random packing (RP) and FCC.}
\label{thermal_conductivity}
\end{center}
\end{figure}
On the other hand, the thermal conductivity of the random packing
decreases as $L_z$ increases (see Fig.~\ref{thermal_conductivity}), opposite to the above prediction.
This is explained by the interparticle distance.
In smaller systems, it is more difficult to obtain a dense random packing near RCP.
Figure \ref{size_density} shows the packing fractions obtained by the monodisperse packing method explained in section \ref{methods}.
For $N<3000$, the obtained random packings show fractions lower than $\phi/\phi_{\rm SC}\approx 1.218$, the value universally obtained for larger systems.
Since the effective packing fraction of RCP is lower in these smaller systems, the obtained random packings are effectively closer to RCP,
and therefore the interparticle distances are shorter.
The interparticle distance becomes larger as the system size increases,
and this effective looseness causes a lower conductivity than in the smaller systems.
The high conductivity of the random packing can be explained by the large number of particle collisions
per unit time, which enhances energy transport.
Roughly, the collision frequency of a particle pair is inversely proportional to the interparticle distance,
and the distances in the random packing are indeed shorter than those in FCC.
Figure \ref{rdf_near1} shows $g(r)$ around $r/\sigma = 1$.
For FCC, $g(r)$ shows a Gaussian distribution around the average interparticle distance,
whereas a diverging peak appears at $r/\sigma = 1$ in the random packing.
This fact implies that such close-particle bonds form an efficient path for thermal conduction.
Indeed, assuming that neighboring particles within the range $r/\sigma <1.006$ belong to the same cluster, we observe a single large cluster percolating from one side to the other in the $z$-direction, as shown in Fig.~\ref{randompacking}(b).
In the FCC case, such clusters are located only in the low-temperature region and do not bridge the space between the two thermal baths.
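The cluster decomposition used here (neighbors within $r/\sigma < 1.006$ joined into one cluster) can be sketched with a simple union-find; periodic images and efficient neighbor lists are omitted for brevity, and the function name is ours.

```python
import numpy as np

def contact_clusters(positions, sigma, cutoff=1.006):
    """Union-find clustering of closely located particles: pairs with
    r/sigma < cutoff join the same cluster (periodic images omitted)."""
    pos = np.asarray(positions, dtype=float)
    n = len(pos)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pos[i] - pos[j]) < cutoff * sigma:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

# a chain of three touching spheres plus one isolated sphere -> two clusters
pts = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (10, 0, 0)]
labels = contact_clusters(pts, sigma=1.0)
```

Percolation along $z$ can then be tested by checking whether one cluster label appears both near $z=0$ and near $z=L_z$.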
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{size_vs_packing_fraction.eps}
\caption{Maximum packing fractions obtained from the monodisperse packing method as a function of the system size.}
\label{size_density}
\end{center}
\end{figure}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8.5cm]{rdf_contact.eps}
\caption{Radial distribution functions for the random packing (RP) and FCC around $r/\sigma = 1$ with the same packing fraction.}
\label{rdf_near1}
\end{center}
\end{figure}
We also investigate the pressure dependence of the thermal conductivities with $N=6000$;
the size effect on the thermal conductivity is negligible under this condition.
In Fig.~\ref{p_tc}, the thermal conductivities are plotted against
the pressures calculated in an equilibrium state.
Compared at the same pressure, the thermal conductivity of FCC is higher than that of the random packing, in contrast to the equal-density case.
The thermal conductivity of the random packing is, however, only about 10\% lower than that of FCC.
This difference is considered to come from the nature of the paths of sequential collisions, which are relatively straight in FCC and wandering in the random packing.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=8cm]{p_tc.eps}
\caption{Pressure dependence of the thermal conductivity.}
\label{p_tc}
\end{center}
\end{figure}
\section{CONCLUSION}
We have investigated the heat transport properties of random packings of hard spheres
by nonequilibrium molecular dynamics simulations.
We introduced two efficient methods to obtain random packings.
The monodisperse packing method produces dense random packings homogeneously,
while the polydisperse packing method obtains random packings quickly in the density range $\phi/\phi_{\rm SC} < 1.214$.
Both methods have a parameter $x$ that controls the packing speed.
When this speed is so fast that particles cannot diffuse sufficiently before freezing, the packing structure becomes random.
Whether the obtained packing is random or crystalline is determined by analyzing the local crystallization parameter and the radial and angular distribution functions.
Using the obtained random packings, we investigated the heat transport properties by imposing parallel heat walls at both ends
and compared the thermal conductivity with that of crystalline packings.
Compared at the same density, the conductivity of the random packing is higher than that of the crystal
because a percolating cluster of closely located particles exists in the system.
On the other hand, compared at the same pressure, the conductivity of the random packing is about 10\% smaller than that of the crystal.
These results suggest that amorphous solids can have thermal conductivities comparable to those of crystals.
The enhancement of the thermal conductivity by the percolating cluster is also expected in other kinds of transport, such as electric conduction.
\section{Acknowledgments}
We thank Dr.~Atsushi Kamimura for helpful discussions.
This work was partly supported by Award No. KUK-I1-005-04 made by King Abdullah University of Science and Technology (KAUST).
\newpage
\section{Introduction}
\label{sec:intro}
This paper addresses a newly emerging task in vision and multimedia research: recognizing human actions from \textit{still images} (Fig.~\ref{fig:parsing}). Although action recognition is usually addressed in videos with motion information~\cite{Liang13,LinEvent}, parsing human actions in still images has recently attracted increasing attention for the following reasons: first, human actions represent essential content of many still images and are crucial for image understanding; second, parsing actions in still images forms the foundation of understanding complex activities; third, not all actions contain notable dynamic information, e.g., reading a book.
\begin{figure}[!htpb]
\centering
\includegraphics[width=\linewidth]{fig_parsing.pdf}
\vspace{-8mm}
\caption{Recognizing human actions from still images, with the help of surrounding objects.}
\label{fig:parsing}
\end{figure}
The main challenges of parsing human actions in still images lie in the large human variations in poses and appearances, as well as the lack of temporal motion information. In such a scenario, the human pose, the contextual information surrounding the human, and the interactions between the human and the surrounding context become crucial for understanding the action. Traditional methods rely heavily on accurate estimation of such information from different sources, even though each of these estimation problems is itself a challenging open problem. Moreover, the high-level human pose representations and person-object interactions are often carefully designed by hand, which is hard to generalize.
Addressing these difficulties, we propose to develop an expressive deep model to naturally integrate the information from multiple noisy, sometimes unreliable, sources, such as the human layout and the surrounding objects, for higher-level action understanding. In particular, a Deep Belief Net (DBN) is trained to take as input simple features consisting of human part detections and object detections from off-the-shelf detectors, without explicit inference of the human pose or the person-object interaction. It is worth mentioning that we use manually labeled data of human poses and object locations during the DBN training phase, which leads to deep models learned from semantic information directly provided by humans. The trained DBN performs surprisingly well in recognizing human actions, demonstrating the capability of the deep model to learn proper feature representations of the intrinsic relationships among simple semantic elements.
The main contributions of this paper are: 1) the development of an expressive deep model that naturally integrates human layout and surrounding context for higher-level action understanding; 2) the use of manually labeled data to greatly improve the effectiveness and efficiency of DBN training by feeding the deep models with semantic information provided by humans; and 3) a practical state-of-the-art method for action recognition from a single image. The resulting framework outperforms existing methods by a large margin, as validated in the experiments. It is worth noting that our approach does not rely on any specially designed problem-specific components, although we use a few off-the-shelf tools and publicly available datasets for training. Therefore, our model can be easily generalized to other multimedia applications.
\section{Related Work}
A major category of methods for recognizing human actions in still images uses pose and shape information of the human body~\cite{XuReID13}. In \cite{Thurau08}, the authors recognized \textit{primitive} actions using recognized \textit{pose primitives}, whereas more complex activities can be understood as sequences of these primitive actions. Yang et al.~\cite{Yang10} proposed to treat the pose of the person in an image as latent variables and to train a system in an integrated fashion that jointly considers pose estimation and action recognition. Recently, \cite{Sharma13} proposed a new Expanded Parts Model (EPM) for human analysis; the model learns a collection of discriminative templates which can appear at specific scale-space positions. A more recent work by Khan et al.~\cite{Khan13} combines color and shape information for action recognition. They perform a comprehensive evaluation of color descriptors and fusion approaches, finding that incorporating color information considerably improves recognition performance, that a descriptor based on color names outperforms pure color descriptors, that late fusion of color and shape information outperforms other approaches, and that different fusion approaches yield complementary information which should be combined. These two works~\cite{Sharma13, Khan13} report the current state-of-the-art performance in this task. The methods in this category usually rely heavily on pose and shape representation and estimation, which are often severely affected by illumination, occlusion, viewing angle, etc.
Another category of methods discriminates different actions in still images using contextual information, especially the objects surrounding the human subject and the person-object interactions. Gupta et al.~\cite{Gupta09} present a Bayesian approach which goes beyond the static shape/appearance feature matching and motion analysis used in traditional object and action recognition, and applies spatial and functional constraints on each of the perceptual elements for coherent semantic interpretation. This approach works even when the appearance and motion information are not discriminative enough. Desai et al.~\cite{Desai10} advocate an approach to activity recognition based on modeling contextual interactions between postured human bodies and nearby objects. Similarly, \cite{Yao12} proposes a mutual context model to jointly model objects and human poses in human-object interaction activities. In this approach, object detection provides a strong prior for better human pose estimation, while human pose estimation improves the accuracy of detecting the objects that interact with the human. Building on a locally orderless spatial pyramid bag-of-features model, Delaitre et al. investigated a discriminatively trained model of person-object interactions for recognizing common human actions in still images~\cite{Delaitre10}. They replace the standard quantized local HOG/SIFT features with stronger discriminatively trained body part and object detectors, introduce new person-object interaction features based on spatial co-occurrences of individual body parts and objects, and, to address the combinatorial problem of the large number of possible interaction pairs, propose a discriminative selection procedure using a linear support vector machine (SVM) with a sparsity-inducing regularizer. These approaches also rely on accurate pose estimation and, in addition, object detection. Moreover, the person-object interactions are often carefully designed by hand, which makes them hard to generalize.
Although the most recently reported state-of-the-art results~\cite{Sharma13,Khan13} belong to the first category of methods, we believe that contextual objects do provide valuable information for parsing human actions and thus include them in our framework.
Recently, deep models~\cite{Hinton06,Bengio09} have shown excellent capability in learning image representations without hand-crafted features. Deep learning has been successfully applied to many vision problems such as image classification~\cite{Krizhevsky12} and action recognition in videos~\cite{Ji10}. These methods directly apply deep models to learn feature representations from raw data. Recently, Ouyang et al.~\cite{Ouyang12} proposed a deep model that takes human part detection results as input and learns the relationships between human parts to handle occlusion in pedestrian detection. This motivates us to learn the mutual context between objects and human parts with deep models, which has not been studied before. Deep belief nets~\cite{Hinton06} are probabilistic generative models composed of multiple layers of stochastic, latent variables. The latent variables typically have binary values and are often called hidden units or feature detectors. The top two layers have undirected, symmetric connections between them and form an associative memory. The lower layers receive top-down, directed connections from the layer above. The states of the units in the lowest layer represent a data vector. DBNs were first developed for binary data using a Restricted Boltzmann Machine (RBM) as the basic module for learning each layer. The hidden and visible biases and the matrix of weights connecting the visible and hidden units are easy to train using contrastive divergence learning, which is a crude but efficient approximation to maximum likelihood learning~\cite{Hinton06}.
In this paper, we propose to train a DBN, as shown in Fig.~\ref{fig:DBN} to alleviate the dependence of action recognition on accurate pose estimation and to more naturally incorporate contextual object information. In particular, the DBN fuses the information of the human layout, the contextual objects surrounding the human, and the person-object interactions in still images to recognize the human actions. Note that these pieces of information are often noisy and sometimes unreliable. In the experiments we show that the trained DBN outperforms the state-of-the-art approaches using automatically detected human parts and surrounding objects.
\section{Proposed method}
The proposed method is shown in Fig.~\ref{fig:framework}. The training and testing phases follow a similar procedure, except that during the training stage the body part detectors and object detectors are first trained using off-the-shelf tools and publicly available datasets, the detection results together with some manually labeled detections are used as input to the DBN, and the DBN parameters are then learned. In the testing stage, the human parts and objects are detected automatically by the trained detectors and used as input by the DBN to predict the action type. We explain the procedure in detail in the rest of this section.
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{fig_framework.pdf}
\vspace{-8mm}
\caption{The proposed framework. Given an image, we first perform human part detection and object detection on it. We then extract the features of mutual interactions between objects and human parts. Given the mutual contextual features as input, we apply the Deep Belief Net for action recognition.}
\label{fig:framework}
\end{figure}
\subsection{Body Part Detection and Pose Estimation}
For each input image, we first detect 10 parts of a human body using the method described in~\cite{XuReID13} (as shown in Fig.~\ref{fig:partdet}(a)). Pose representation and estimation is a challenging open problem. In our framework, since we rely mostly on the capability of the deep model to learn proper representations from high-dimensional data, the human pose is loosely represented as a star model, where the head is the centroid and the normalized relative locations of all the other parts are computed as features. As shown in Fig.~\ref{fig:pose}(b), a human part is defined by two small part detectors. The central line of the human part is the line between the centroids of the two part detectors (the red line). We first normalize the image size to fix the length of the head to 50 pixels. For any other body part, a 6-dimensional feature vector is computed as
\begin{equation}
[isExist,x_1,y_1,x_2,y_2,\alpha],
\end{equation}
\noindent where $isExist$ indicates whether the interaction between the part and the head exists, $x_1$, $y_1$, $x_2$, $y_2$ are the coordinates (relative to the center of the head) of the central line of the part, and $\alpha$ the angle between the central line of the head and the line connecting the head to the part.
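As an illustrative sketch, the 6-dimensional feature can be computed as follows (a minimal NumPy example; the function name, the endpoint-based input convention, and the handling of missing parts are our assumptions, not taken from the authors' code):

```python
import numpy as np

def part_feature(head_line, part_line, exists=True):
    # 6-D feature [isExist, x1, y1, x2, y2, alpha] of one body part in the
    # star model; head_line/part_line are ((x1, y1), (x2, y2)) endpoints of
    # the central lines, in an image already scaled so the head is 50 px long
    if not exists:
        return np.zeros(6)  # part absent: isExist = 0, rest zeroed
    (hx1, hy1), (hx2, hy2) = head_line
    head_c = np.array([(hx1 + hx2) / 2.0, (hy1 + hy2) / 2.0])
    (px1, py1), (px2, py2) = part_line
    part_c = np.array([(px1 + px2) / 2.0, (py1 + py2) / 2.0])
    # endpoints of the part's central line relative to the head center
    x1, y1 = px1 - head_c[0], py1 - head_c[1]
    x2, y2 = px2 - head_c[0], py2 - head_c[1]
    # angle between the head's central line and the head-to-part line
    head_dir = np.array([hx2 - hx1, hy2 - hy1])
    link_dir = part_c - head_c
    cosang = head_dir @ link_dir / (np.linalg.norm(head_dir) * np.linalg.norm(link_dir))
    alpha = np.arccos(np.clip(cosang, -1.0, 1.0))
    return np.array([1.0, x1, y1, x2, y2, alpha])
```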
In our implementation, we also apply the upper-body part detector introduced in~\cite{XuReID13}, which covers 6 of the 10 full-body human parts (as illustrated in Fig.~\ref{fig:partdet}(b)). Given an image, we perform both full-body and upper-body pose estimation and choose the detection result with the higher pose estimation score. We set $isExist = 0$ to mark the 4 body parts missing from the upper-body pose estimation result. Note that we do not need to further estimate the human poses.
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{fig_pose.pdf}
\vspace{-10mm}
\caption{Human part and contextual object representation: (a) shows a pose estimation and object detection result on the image; (b) shows the spatial relationship between the human head and a hand; (c) shows the spatial relationship between the human head and a computer. The red line across two rectangles is the central line of a human part, and it is not necessarily vertical or horizontal. The central line of an object is the red line across the detection window. $\alpha$ is the angle between the central line of the head and the line connecting the head to the body part or the object.}
\label{fig:pose}
\end{figure}
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{fig_partdet.pdf}
\vspace{-10mm}
\caption{Human part detection. (a) shows the results of full-body part detection (with 10 parts); (b) shows the results of upper-body part detection (with 6 parts).}
\label{fig:partdet}
\end{figure}
\subsection{Object Detection and Person-Object Relationship}
As mentioned previously, contextual information is important for recognizing human actions~\cite{LinGrammar}, especially the person-object interactions. We therefore train Deformable Part Models (DPMs)~\cite{Felzenszwalb10} to detect objects surrounding the human. DPMs combine 1) strong low-level features based on histograms of oriented gradients (HOG); 2) efficient matching algorithms for deformable part-based models (pictorial structures); and 3) discriminative learning with latent variables (latent SVM), resulting in efficient object detectors that achieve state-of-the-art results in practice. In our experiments, we detect 5 types of objects (i.e., ``bike'', ``camera'', ``computer'', ``horse'', ``instrument'') using trained DPM object detectors; Fig.~\ref{fig:objectdet} shows some examples. We again describe the person-object interactions by the relative location of each object, with features similar to the ones describing the human pose. For each object, a feature vector is computed as $[isExist,x_1,y_1,x_2,y_2,\alpha]$, where $isExist$ indicates whether the person-object interaction between the object and the human exists, $x_1$, $y_1$, $x_2$, $y_2$ are the coordinates (relative to the center of the head) of the central line of the object, and $\alpha$ is the angle between the central line of the head and the line connecting the head to the object (as illustrated in Fig.~\ref{fig:pose}(c)). This is a simple, even crude, representation of person-object interactions; however, it works quite well with the deep model.
\begin{figure}[phtb]
\centering
\includegraphics[width=\linewidth]{fig_objectdet.pdf}
\vspace{-8mm}
\caption{Object detection results performed by Deformable Part Models.}
\label{fig:objectdet}
\end{figure}
\subsection{Deep Belief Nets}
From the body part detection and object detection, we obtain in total 10 body parts and 5 objects. As we model the human pose as pairwise relationships between the head and the other body parts, and the person-object interactions as pairwise relationships between the head and the objects, the features have in total $15\times6=90$ dimensions. Each dimension is first normalized to the range $[0,1]$ with a sigmoid function.
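As a sketch, the 90-dimensional DBN input can be assembled as follows (the plain logistic sigmoid is an assumption; the paper does not specify the exact per-dimension scaling):

```python
import numpy as np

def assemble_input(part_feats, object_feats):
    # Concatenate the 6-D relation features of the 10 body parts and the
    # 5 contextual objects into the 90-D DBN input vector, then squash
    # each dimension into (0, 1) with a logistic sigmoid
    feats = np.concatenate([np.asarray(f, dtype=float)
                            for f in list(part_feats) + list(object_feats)])
    assert feats.shape == (90,), feats.shape
    return 1.0 / (1.0 + np.exp(-feats))
```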
In the training phase, the feature vectors calculated from all the training images are input to the DBN, as shown in Fig.~\ref{fig:DBN}. The DBN has 4 layers, with 90 input dimensions in the first layer, 200 hidden variables in the second layer, 50 hidden variables in the third layer, and 7 output labels in the top layer. We follow the method introduced in \cite{Hinton06} to train our DBN with layer-wise RBM pre-training and logistic-regression fine-tuning. Given the real-valued data as input, we employ a Gaussian RBM (GRBM) to model the parameters between the first and second layers during pre-training. The energy function of the Gaussian RBM is defined as
\begin{equation}\label{eq:GRBM}
E(\textbf{v}, \textbf{h}) = \frac{1}{2 {\sigma}^{2}} \sum_i (v_i-c_i)^{2} - \frac{1}{\sigma} \sum_{i,j} v_i W_{ij} h_j - \sum_{j} b_j h_j
\end{equation}
\noindent where $W$ is the weight matrix, $\textbf{v}$ is the visible layer vector, $\textbf{h}$ is the hidden layer vector, $c_i$ and $b_j$ are the biases of the visible and hidden neurons, and $\sigma$ is the standard deviation of the Gaussian visible units. Given the binary-valued output of the second layer, we build a regular RBM on top of it, whose energy function is
\begin{equation}
E(\textbf{v}, \textbf{h}) = -\sum_{i,j} v_i W_{ij} h_j - \sum_{j} b_j h_j - \sum_{i} c_i v_i
\end{equation}
\noindent where the notation is defined analogously to the Gaussian RBM.
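The two energy functions above can be written down directly (a minimal NumPy sketch with our own variable names; $\sigma$ is the standard deviation of the Gaussian visible units):

```python
import numpy as np

def grbm_energy(v, h, W, b, c, sigma=1.0):
    # Gaussian RBM energy: real-valued visibles v, binary hiddens h,
    # weight matrix W (visible x hidden), hidden biases b, visible biases c
    return ((v - c) ** 2).sum() / (2.0 * sigma ** 2) - (v @ W @ h) / sigma - b @ h

def rbm_energy(v, h, W, b, c):
    # binary-binary RBM energy with the same parameter layout
    return -(v @ W @ h) - b @ h - c @ v
```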
A critical difference between our model and other works is that we use manually labeled data (human part and object locations) to greatly improve the effectiveness and efficiency of the pre-training and fine-tuning during the training phase, although testing is performed with automatic body part detectors and object detectors trained using off-the-shelf tools and publicly available datasets. The experimental results show a clear performance gain.
\begin{figure}[phtb]
\centering
\includegraphics[width=\linewidth]{fig_DBN.pdf}
\vspace{-8mm}
\caption{The Deep Belief Net Model. It is a four-layer model with 90 input neurons, 200 hidden variables in the second layer, 50 hidden variables in the third layer and 7 outputs as labels in the top layer. The DBN is pre-trained by stacking the Gaussian RBM and an ordinary binary RBM. Fine-tuning is performed with logistic regression to optimize all the parameters for classification.}
\label{fig:DBN}
\end{figure}
\section{Experimental Results}
\label{sec:exp}
We first introduce the datasets and the parameter settings of our implementation, and then show the experimental results.
\subsection{Dataset and settings}
We investigated the performance of the proposed method on the publicly available Willow-Actions Dataset\cite{Delaitre10}. The Willow-Actions Dataset (Fig.~\ref{fig:willow}) is a dataset for human action classification in still images. Action classes include ``interacting with computer'', ``photographing'', ``playing instrument'', ``riding bike'', ``riding horse'', ``running'', ``walking''. Following the evaluation protocol in~\cite{Delaitre10}, we used 427 images for training and 484 images for testing.
\emph{Training Settings.} As mentioned before, we trained DPMs for the 5 classes. Since ``bike'' and ``instrument'' have many subclasses, we train these two detectors using images from Imagenet\footnote{\url{http://www.image-net.org/}}, including ``bicycle'' and ``motorcycle'' for ``bike'', and ``saxophone'', ``violin'', ``piano'', ``guitar'', ``flute'', ``cello'' for ``instrument''. The body part detectors are trained on a large dataset: Leeds Sport Pose Dataset\footnote{\url{http://www.comp.leeds.ac.uk/mat4saj/lsp.html}}.
Before the DBN training, we first flip every image horizontally, and the human parts and objects are localized manually for the DBN training. We also perturb the human part and object locations by a random offset of $\varepsilon$ pixels ($-10 \leq \varepsilon \leq 10$) in each training image. We repeat this procedure 10 times, generating 10 training samples from each image. In total we thus have $427 \times 2 \times 10 = 8540$ training samples for the DBN. This alleviates over-fitting and helps the model handle the variance of the unstable detection results during testing.
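The augmentation loop can be sketched as follows (schematic only: the flip is recorded as a flag and the mirroring of coordinates is omitted for brevity, which a real implementation must of course perform):

```python
import random

def augment(images_with_annotations, n_copies=10, max_shift=10):
    # Every image yields 2 (flip) x n_copies (jitter) samples; part and
    # object locations are perturbed by a uniform offset in
    # [-max_shift, max_shift] pixels per coordinate
    samples = []
    for img, boxes in images_with_annotations:
        for flipped in (False, True):
            for _ in range(n_copies):
                jittered = [(x + random.randint(-max_shift, max_shift),
                             y + random.randint(-max_shift, max_shift))
                            for (x, y) in boxes]
                samples.append((img, flipped, jittered))
    return samples
```

With 427 training images this yields the $427 \times 2 \times 10 = 8540$ samples mentioned above.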
The DBN parameters are set as follows: pre-training learning rate 0.01, pre-training iterations 100, fine-tuning learning rate 0.1, and fine-tuning iterations 1000.
\emph{Testing Setting.} During testing, the human parts and objects are localized by the automatic body part detectors and object detectors. A threshold $\sigma_k$ ($1 \leq k \leq 15$) is defined for each human part and object. For part $k$, if its detection score is larger than $\sigma_k$, we set $isExist = 1$; otherwise $isExist = 0$. The thresholds are estimated by cross-validation.
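In code, the thresholding step reads (a trivial sketch with our own names):

```python
def exist_flags(scores, thresholds):
    # Map the 15 detection scores to isExist flags using the per-part
    # thresholds sigma_k obtained by cross-validation
    return [1 if s > t else 0 for s, t in zip(scores, thresholds)]
```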
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{fig_willow.pdf}
\vspace{-8mm}
\caption{Samples of images in the Willow Action dataset.}
\label{fig:willow}
\end{figure}
\subsection{Results and comparisons}
We first report the mean Average Precision (mAP) of the proposed method in recognizing the 7 classes of actions in still images and compare it to the state-of-the-art methods (Table~\ref{tab:Acc}). Our DBN method reaches an mAP of $80.41\%$, which is significantly (about $10\%$) higher than the state-of-the-art result of $70.1\%$~\cite{Khan13}. We also replaced the DBN with an SVM (a ``shallow'' model~\cite{Bengio09}) while keeping the same input features. Table~\ref{tab:Acc} shows that our method works much better with the deep model than with the shallow one.
\begin{table}[htpb]
\small
\centering
\caption{mAP comparisons to the state of the art}
\begin{tabular}{cc}
\toprule
Method & mAP\\
\midrule
Our DBN & \textbf{80.41} \\
Our SVM & 77.82 \\
Khan et al.~\cite{Khan13} & 70.1 \\
Sharma et al.~\cite{Sharma13} & 67.6 \\
Delaitre et al.~\cite{Delaitre10} & 59.6\\
\bottomrule
\end{tabular}%
\label{tab:Acc}%
\end{table}%
To show the benefit of incorporating contextual object information, Table~\ref{tab:mAP} reports the mAP of the proposed method (DBN) and the SVM, with and without object information, for the 7 different actions. It is obvious that the contextual objects bring valuable information into the recognition of human actions.
\begin{table}[htpb]
\centering
\caption{mAP comparisons for different classes (``w/o'': without object information)}
\begin{tabular}{ccccc}
\toprule
Class Name & SVM w/o & SVM & DBN w/o & DBN \\
\midrule
Int. Computer & 14.32 & 83.71 & 40.89 & \textbf{86.56} \\
Photographing & 32.8 & 89.47 & 28.27 & \textbf{90.5} \\
PlayingMusic & 39.16 & \textbf{95.95} & 40.69 & 89.91 \\
RidingBike & 63.06 & 97.35 & 32.27 & \textbf{98.17} \\
RidingHorse & 43.08 & 92.7 & 37.47 & \textbf{92.72} \\
Running & 39.28 & 37.9 & 26.23 & \textbf{46.16} \\
Walking & 61.25 & 48.79 & 40.45 & \textbf{58.88} \\
mAP & 41.85 & 77.98 & 35.18 & \textbf{80.41} \\
\bottomrule
\end{tabular}%
\label{tab:mAP}%
\end{table}%
We visualize some of our results in Fig.~\ref{fig:result}, covering the 7 action classes as well as two failure cases in the last row. The results show that our model is robust even when the pose and object detections are unreliable. Classification errors are mainly caused by wrong object localization or by estimated poses that resemble those of other classes.
\begin{figure}[htpb]
\centering
\includegraphics[width=\linewidth]{fig_result.pdf}
\vspace{-9mm}
\caption{Sample results of parsing actions from still images using our framework.}
\label{fig:result}
\end{figure}
\section{Conclusions}
We investigated the use of deep learning techniques for recognizing human actions from still images. An expressive deep model was developed to naturally integrate different sources of information, including human layout and surrounding contexts, for human action parsing in a single image. In particular, a Deep Belief Net was trained, and manually labeled training data greatly improved the effectiveness and efficiency of the pre-training and fine-tuning stages of the DBN training phase.
\bibliographystyle{IEEEbib}
\section{Introduction}
Charged dust particles embedded in a plasma environment are a ubiquitous phenomenon in nature.\cite{Mendis02,Ishihara07}
They are found in the interstellar medium,\cite{Spitzer82,Mann08} planetary magnetospheres,\cite{GGM84} the upper
atmosphere,\cite{FR09} and in industrial plasmas.\cite{Hollenstein00} Dusty laboratory plasmas,\cite{PM02}
containing self-organized dust clouds, serve moreover as model systems for studying the dynamic
behavior of strongly Coulomb-correlated systems of finite extent.
From the plasma physics point of view, the most important property of a dust particle is the charge it
accumulates from the plasma. It controls the coupling to other dust particles and to external electromagnetic
fields as well as the overall charge balance of the plasma. As a consequence various methods have been devised to
measure the particle charge. They range from force
balance methods for particles drifting in the plasma~\cite{KRZ05} or trapped in the plasma sheath~\cite{TLA00,CJG11}
to methods based on wave dispersion,\cite{HMP97} normal mode analysis,\cite{Melzer03} and dust cluster rotation.\cite{CGP10}
Yet, the precise determination of the particle charge in a plasma environment remains a challenge. Methods independent of
the plasma parameters,\cite{HMP97,Melzer03,CGP10} which are
usually not precisely known, require specific experimental configurations, long measurement times or cannot yield the
charge of individual dust particles. The phase-resolved resonance
method,\cite{CJG11} for instance, allows only a precise relative charge measurement. For an absolute charge measurement
the potential profile in the vicinity of the particle has additionally to be obtained from Langmuir probe measurements,
which, however, are only accurate to about 20\%. Thus an optical measurement of the particle charge, independent of
the plasma parameters, would be extremely useful.
The scattering of light by a small particle (Mie scattering~\cite{Mie08}) encodes---at least in principle---the particle
charge.\cite{BH77,BH83,KK07,KK10,HCL10,HBF12b} It enters the scattering coefficients through the electrical conductivity of
the surplus electrons which modifies either the boundary conditions for the electromagnetic fields or the polarizability
of the particle. To assess how and at which frequencies charges are revealed by the Mie signal requires, however, not only a
microscopic calculation of the surface and bulk conductivities but also a detailed analysis of the impact of these
conductivities on the different scattering regimes to which the particle's dielectric constant gives rise.
So far, the dependence of the Mie signal on the particle charge has not been investigated
systematically. In our previous work~\cite{HBF12b} we took a first step toward clarifying this issue, which has also been raised by Rosenberg.\cite{Rosenberg12} We
identified the extinction at anomalous optical resonances of dielectric particles with a strong transverse optical
(TO) phonon resonance in the dielectric constant to be sensitive to surplus electrons. In the present work we give a
more comprehensive survey of Mie scattering by small negatively charged dielectric particles. Our aim is to identify
over the whole frequency range, not only in the vicinity of anomalous resonances, features in the Mie signal which
respond to surplus electrons. From these features the surplus electron density
of the particle could be determined optically via light scattering.
After a brief outline of the Mie theory of light scattering by small charged particles in the next section, we present
in Section III results for the four generic scattering features which occur for a charged dielectric particle with
a strong resonance in the dielectric constant at the TO phonon frequency $\omega_{TO}$: low-frequency scattering,
ordinary resonances below $\omega_{TO}$, anomalous resonances above $\omega_{TO}$,
and high-frequency scattering. We investigate the intensity
of the Mie signal and its polarization. Thereby we include ellipsometric techniques into our considerations.
Section IV finally summarizes the results and points out possibilities for an
optical measurement of the particle charge.
\section{Theory}
\label{sec_theory}
The scattering behavior of an uncharged dielectric particle is determined by its radius $a$ and frequency-dependent dielectric
constant $\epsilon(\omega)$. For a negatively charged dielectric particle light scattering is also influenced by the electric conductivity of the surplus electrons. Whether surplus electrons are trapped inside the particle or in a surface layer around it depends on the electron affinity $\chi$ of the particle.\cite{HBF12b}
For $\chi<0$, as it is the case for instance for MgO and LiF,\cite{RWK03} the conduction band inside the dielectric is above the potential outside the particle. Electrons do not penetrate into the dielectric. Instead they are trapped in the image potential in front of the surface.\cite{HBF12,HBF11} The image potential is due to a surface mode associated with the TO phonon. The interaction of an electron with the surface mode comprises a static part, which induces the image potential,\cite{EM73,Barton81} and a dynamic part, which enables momentum relaxation parallel to the surface limiting the surface conductivity.\cite{KI95} The phonon-limited surface conductivity $\sigma_s$, calculated in our previous work~\cite{HBF12b} using the memory function approach,\cite{GW72}
modifies the boundary condition for the magnetic field at the surface of the grain.\cite{BH77}
For $\chi>0$, as it is the case for instance for Al$_2$O$_3$, Cu$_2$O and PbS, the conduction band inside the dielectric lies below the potential outside the particle. Electrons thus accumulate in the conduction band where they form an extended space charge.\cite{HBF12} Its width, limited by the screening in the dielectric, is typically larger than a micron. For micron-sized particles we can thus assume a homogeneous electron distribution in the bulk. The bulk conductivity is limited by scattering with a longitudinal optical (LO) phonon~\cite{Mahan90} and can be also calculated~\cite{HBF12b} within the memory function approach.
The bulk conductivity of the surplus electrons $\sigma_b$ leads to an additional polarizability per volume $\alpha=4\pi i \sigma_b / \omega$ which alters the refractive index.
The scattering behavior of the particle is controlled by the scattering coefficients. They are determined by expanding the incident ($i$) plane wave into spherical vector harmonics and matching the reflected ($r$) and transmitted ($t$) waves at the boundary of the sphere.\cite{Stratton41,BH83} In the case of $\chi>0$ the boundary conditions at the surface are given by $\mathbf{\hat{e}}_r \times (\mathbf{C}_i+ \mathbf{C}_r-\mathbf{C}_t)=0$ for $\mathbf{C}=\mathbf{E},\mathbf{H}$. For $\chi<0$ the surface charges may sustain a surface current $\mathbf{K}=\sigma_s \mathbf{E}_\parallel$ which is induced by the parallel component of the electric field and proportional to the surface conductivity. This changes the boundary condition for the magnetic field to
$\mathbf{\hat{e}}_r\times (\mathbf{H}_i+\mathbf{H}_r-\mathbf{H}_t)=4\pi\mathbf{K}/c$,
where $c$ is the velocity of light. The boundary condition for the electric field is still $\mathbf{\hat{e}}_r \times (\mathbf{E}_i+ \mathbf{E}_r-\mathbf{E}_t)=0$. The refractive index of the particle is $N=\sqrt{\epsilon}$ ($\chi<0$) or $N=\sqrt{\epsilon+\alpha}$ ($\chi>0$). Matching the fields at the particle surface gives the scattering coefficients~\cite{BH77}
\begin{equation}
a_n^r=-\frac{F_{n}^a}{F_{n}^a+iG_{n}^a}, \quad b_n^r=-\frac{F_{n}^b}{F_{n}^b+iG_{n}^b},
\end{equation}
where
\begin{align}
F_{n}^a&=\psi_n(N\rho) \psi^\prime_n(\rho)-\left[N\psi_n^\prime(N\rho) -i\tau \psi_n(N\rho)\right] \psi_n(\rho), \\
G_{n}^a&=\psi_n(N\rho) \chi^\prime_n(\rho)-\left[N\psi_n^\prime(N \rho ) -i\tau \psi_n(N\rho) \right] \chi_n(\rho), \\
F_{n}^b&=\psi_n^\prime(N\rho) \psi_n(\rho)-\left[N\psi_n(N\rho) +i\tau \psi_n^\prime(N\rho)\right] \psi_n^\prime(\rho), \\
G_{n}^b&=\psi_n^\prime(N\rho) \chi_n(\rho)- \left[ N\psi_n(N\rho) +i\tau \psi_n^\prime(N\rho)\right] \chi_n^\prime(\rho)
\end{align}
with the dimensionless surface conductivity $\tau(\omega)=4\pi \sigma_s(\omega)/c$ ($\chi<0$) or $\tau=0$ ($\chi>0$). The size parameter is $\rho=ka=2\pi a /\lambda$, where $\lambda$ is the wavelength, and $\psi_n(\rho)=\sqrt{\pi \rho/2}J_{n+1/2}(\rho)$, $\chi_n(\rho)=\sqrt{\pi \rho/2}Y_{n+1/2}(\rho)$ with $J_n$ the Bessel and $Y_n$ the Neumann function.
The efficiencies for extinction ($t$) and scattering ($s$) are
\begin{align}
Q_t&=-\frac{2}{\rho^2}\sum_{n=1}^\infty (2n+1) Re(a_n^r+b_n^r) \\
Q_s&=\frac{2}{\rho^2 }\sum_{n=1}^\infty (2n+1) (|a_n^r|^2+|b_n^r|^2)
\end{align}
from which the absorption efficiency $Q_a=Q_t-Q_s$ can also be obtained.
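For reference, the scattering coefficients and efficiencies above can be evaluated numerically as follows (a SciPy sketch with our own naming; the series cutoff is a standard heuristic, not taken from the text; for a complex, i.e., absorbing, refractive index the spherical Bessel routines must accept complex arguments, which recent SciPy versions do):

```python
from scipy.special import spherical_jn, spherical_yn

def riccati_psi(n, z):
    # Riccati-Bessel function psi_n(z) = z j_n(z) and its derivative
    return z * spherical_jn(n, z), spherical_jn(n, z) + z * spherical_jn(n, z, derivative=True)

def riccati_chi(n, z):
    # chi_n(z) = z y_n(z) and its derivative (the convention used above)
    return z * spherical_yn(n, z), spherical_yn(n, z) + z * spherical_yn(n, z, derivative=True)

def mie_efficiencies(N, rho, tau=0j, n_max=None):
    # Extinction and scattering efficiencies Q_t, Q_s of a sphere with
    # refractive index N, size parameter rho, and dimensionless surface
    # conductivity tau (tau = 0 recovers ordinary Mie theory)
    if n_max is None:
        # heuristic series cutoff, an assumption of this sketch
        n_max = int(rho + 4.0 * rho ** (1.0 / 3.0)) + 7
    Qt, Qs = 0.0, 0.0
    for n in range(1, n_max + 1):
        psi, dpsi = riccati_psi(n, rho)
        chi, dchi = riccati_chi(n, rho)
        psiN, dpsiN = riccati_psi(n, N * rho)
        Fa = psiN * dpsi - (N * dpsiN - 1j * tau * psiN) * psi
        Ga = psiN * dchi - (N * dpsiN - 1j * tau * psiN) * chi
        Fb = dpsiN * psi - (N * psiN + 1j * tau * dpsiN) * dpsi
        Gb = dpsiN * chi - (N * psiN + 1j * tau * dpsiN) * dchi
        a = -Fa / (Fa + 1j * Ga)
        b = -Fb / (Fb + 1j * Gb)
        Qt += -(2.0 / rho ** 2) * (2 * n + 1) * (a + b).real
        Qs += (2.0 / rho ** 2) * (2 * n + 1) * (abs(a) ** 2 + abs(b) ** 2)
    return Qt, Qs
```

For a real refractive index and $\tau=0$ there is no absorption, so $Q_t=Q_s$, which provides a simple consistency check.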
An important special case is the scattering by small particles, for which $\rho \ll 1$. Inspired by the expressions used in
Ref.~\onlinecite{LTT07} we write in this case $F_n^a=N^{n+1}f_n^a/(2n+1)$ and $G_n^a=N^{n+1}g_n^a/(2n+1)$ with
\begin{widetext}
\begin{align}
f_n^a=&\frac{2^{2n}(n+1)!n!}{(2n+1)!(2n)!}\rho^{2n+1}\left(\frac{i\tau}{n+1}\rho+\frac{N^2-1}{(n+1)(2n+3)}\rho^2
+\mathcal{O}(\rho^3) \right),
\label{fa}\\
g_n^a=&2n+1-i\tau\rho+\frac{1-N^2}{2}\rho^2 +\mathcal{O}(\rho^3), \label{ga}
\end{align}
and similarly $F_n^b=N^n f_n^b/(2n+1)$ and $G_n^b=N^n g_n^b/(2n+1)$ with
\begin{align}
f_n^b=& \frac{2^{2n} n!(n+1)!}{(2n)!(2n+1)!}\rho^{2n+1}\left(1-N^2-i(n+1)\frac{\tau}{\rho}+\mathcal{O}(\rho)\right), \label{fb} \\
g_n^b=&-(n+1)-nN^2-in(n+1)\frac{\tau}{\rho}+\left[\frac{(n+3)N^2+nN^4}{2(2n+3)}-\frac{(n+1)+(n-2)N^2}{2(2n-1)}\right]\rho^2 \nonumber \\
&+\left[-i \frac{(n+1)(n-2)}{2(2n-1)}+i\frac{n(n+3)}{2(2n+3)}N^2\right] \tau \rho +\mathcal{O}(\rho^3)~. \label{gb}
\end{align}
\end{widetext}
The leading scattering coefficients for small uncharged particles are $b_1\sim \mathcal{O}(\rho^3)$ and
$a_1, b_2\sim \mathcal{O}(\rho^5)$. For them
\begin{align}
f_1^a&=i\frac{\tau}{3}\rho^4+\frac{N^2-1}{15}\rho^5, \quad g_1^a=3-i\tau \rho+\frac{1-N^2}{2}\rho^2, \label{fg_a1}
\end{align}
\begin{align}
f_1^b&=-i\frac{4\tau}{3}\rho^2+\frac{2(1-N^2)}{3}\rho^3, \quad g_1^b=-2-N^2-i2\frac{\tau}{\rho} \label{fg_b1}, \\
f_2^b&=-i\frac{\tau}{5}\rho^4+\frac{1-N^2}{15}\rho^5, \quad g_2^b=-3-2N^2-i6\frac{\tau}{\rho} . \label{fg_b2}
\end{align}
Keeping only the coefficient $b_1$, the extinction efficiency is $Q_t=-6\Re(b_1^r)/\rho^2$. Approximating $b_1^r=-f_1^b/(ig_1^b)$ (we have neglected $f_1^b\sim \rho^3$ compared to $g_1^b\sim \rho^0$ in the denominator) we obtain for the extinction efficiency
\begin{align}
Q_t=\frac{12\rho \left(\epsilon^{\prime \prime}+\alpha^{\prime \prime}+2\tau^\prime/\rho \right)}{\left(\epsilon^{\prime }+\alpha^\prime+2-2\tau^{\prime \prime}/\rho \right)^2+\left(\epsilon^{\prime \prime}+\alpha^{\prime \prime}+2\tau^\prime/\rho \right)^2} \label{smallrhoext}
\end{align}
which is valid for small particles, that is, for $\rho\ll1$.
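Eq.~(\ref{smallrhoext}) is straightforward to evaluate; a minimal sketch with our own naming:

```python
def small_particle_extinction(eps, rho, alpha=0j, tau=0j):
    # Extinction efficiency in the dipole limit rho << 1:
    # eps = dielectric constant, alpha = surplus-electron polarizability
    # (chi > 0), tau = dimensionless surface conductivity (chi < 0)
    num = eps.imag + alpha.imag + 2.0 * tau.real / rho
    den = eps.real + alpha.real + 2.0 - 2.0 * tau.imag / rho
    return 12.0 * rho * num / (den ** 2 + num ** 2)
```

For $\tau=\alpha=0$ this reduces to the familiar small-sphere absorption formula $Q_t=12\rho\,\epsilon^{\prime\prime}/[(\epsilon^\prime+2)^2+\epsilon^{\prime\prime 2}]$.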
\section{Results}
\label{sec_results}
In the following we will discuss light scattering for a MgO ($\chi<0$) and an Al$_2$O$_3$ ($\chi>0$) particle (for material parameters see Ref.~\onlinecite{matpar}). The particle charge affects light scattering through the dimensionless surface conductivity $\tau=\tau^\prime+i\tau^{\prime \prime}$ (MgO) or the surplus electron polarizability $\alpha=\alpha^\prime+i\alpha^{\prime \prime}$ (Al$_2$O$_3$). Both $\tau$ and $\alpha$ are shown as a function of the inverse wavelength $\lambda^{-1}$ in the first row of
Fig.~\ref{fig_over}. They are small even for a highly charged particle with $n_s=10^{13}$ cm$^{-2}$, which corresponds to $n_b=3\times10^{17}$ cm$^{-3}$ for $a=1\,\mu$m.
For $T=300$~K, $\tau^{\prime \prime} > \tau^\prime$ and $-\alpha^\prime > \alpha^{\prime \prime}$ except at very low frequencies. For $\lambda^{-1} \rightarrow 0$ the conductivities $\sigma_s$ and $\sigma_b$ tend to a real value, so that $\tau^\prime > \tau^{\prime \prime}$ and $\alpha^{\prime \prime}>- \alpha^\prime$ for very small $\lambda^{-1}$.
Both $\tau$ and $\alpha$ decrease with increasing $\lambda^{-1}$ and vary smoothly over the considered frequencies. Also shown for comparison are $\tau$ and $\alpha$ for $T=0$, where $\tau^\prime=0$ for $\lambda^{-1}<\lambda_s^{-1}=909$ cm$^{-1}$, the
inverse surface phonon wavelength (and $\alpha^{\prime \prime}=0$ for $\lambda^{-1}<\lambda_{LO}^{-1}=807$ cm$^{-1}$,
the inverse LO phonon wavelength), since light absorption is possible only above $\lambda_s^{-1}$ (or $\lambda_{LO}^{-1}$).
\begin{figure*}[t]
\includegraphics[width=\linewidth]{figure1}
\caption{(Color online) First row: Dimensionless surface conductivity $\tau=\tau^\prime+i\tau^{\prime \prime}$ for MgO for $n_s=10^{13}$cm$^{-2}$ (left) and polarizability of excess electrons $\alpha=\alpha^\prime+i\alpha^{\prime \prime}$ for Al$_2$O$_3$ for $n_b=3\times10^{17}$cm$^{-3}$ (right) as a function of the inverse wavelength $\lambda^{-1}$. Second row: Dielectric constant $\epsilon=\epsilon^\prime+i\epsilon^{\prime \prime}$ and refractive index $N=n+ik$ as a function of $\lambda^{-1}$. Third to fifth row: Absorption efficiency $Q_a$ (third row), scattering efficiency $Q_s$ (fourth row), and extinction efficiency \(Q_t\) (fifth row) as a function of $\lambda^{-1}$ and the particle radius $a$ for an uncharged MgO and Al$_2$O$_3$ particle. The labels indicate the four characteristic scattering regimes: low frequencies (A), ordinary resonances (B), anomalous resonances (C), and interference and ripple structure (D). The dashed lines give the approximate position of the $a_1^r$ (B) and the $b_1^r$ (C) resonance. The full lines give the approximate cross-over from absorption to scattering dominance of the resonances. }
\label{fig_over}
\end{figure*}
The scattering behavior of the uncharged particles is primarily determined by the dielectric constants $\epsilon(\omega)$
(second row of Fig. \ref{fig_over}). For MgO it is dominated by a TO phonon at $\lambda^{-1}=401$ cm$^{-1}$. For Al$_2$O$_3$
two TO phonon modes at $\lambda^{-1}=434$ cm$^{-1}$ and $\lambda^{-1}=573$ cm$^{-1}$ dominate $\epsilon(\omega)$.
At frequencies well below the TO phonon resonance the dielectric constant tends towards its real static value $\epsilon_0$.
In this regime (marker A in Fig. \ref{fig_over}) $\epsilon^{\prime\prime} \ll \epsilon^{ \prime}$. For constant radius $a$, the extinction efficiency $Q_t \rightarrow 0$ for $\lambda^{-1}\rightarrow 0$.
Just below the TO phonon resonance (for Al$_2$O$_3$ below the lower TO-phonon) $\epsilon^\prime$ is large and positive and
$\epsilon^{\prime \prime} \ll \epsilon^{\prime}$ (except in the immediate vicinity of the resonance). This gives rise to
ordinary optical resonances (marker B in Fig.~\ref{fig_over}).\cite{vandeHulst57}
Above the TO phonon resonance (for Al$_2$O$_3$ above the higher TO-phonon) $\epsilon^\prime <0$ and
$\epsilon^{\prime \prime} \ll 1$. This entails anomalous optical resonances (marker C in
Fig.~\ref{fig_over}).\cite{FK68,TL06,Tribelsky11}
Far above the TO phonon resonance $\epsilon^\prime$ takes a small positive value and $\epsilon^{\prime \prime} \ll 1$. This
gives rise to an interference and ripple structure (marker D in Fig. \ref{fig_over}).\cite{BH83}
In the context of plasma physics dielectric particles with a strong TO phonon resonance have already been studied theoretically as wave-length selective infra-red absorbers.\cite{Rosenberg12}
In the following we explore the modification of the features A--D by surplus electrons. We are particularly interested in
identifying dependencies in the optical signal which can be used as a charge diagnostic.
\subsection{Low-Frequency Scattering}
\begin{figure}
\includegraphics[width=\linewidth]{figure2}
\caption{(Color online) Left: Extinction efficiency \(Q_t\) as a function of the inverse wavelength $\lambda^{-1}$ for a charged MgO particle (top) and a charged Al$_2$O$_3$ particle (bottom) with radius $a=1\mu$m at $T=300$ K. Full lines give the results for the phonon-limited surface or bulk conductivity, dashed lines show for comparison the results for free surface or bulk electrons. Right: Extinction efficiency as a function of the surface electron density for an MgO particle (or corresponding bulk electron density for Al$_2$O$_3$) for $\lambda^{-1}=50$ cm$^{-1}$.}
\label{fig_lowfreq}
\end{figure}
In the low-frequency limit of scattering (marker A in Fig. \ref{fig_over}) the extinction efficiency $Q_t$ is relatively small. For $\lambda^{-1}<200$ cm$^{-1}$ particles with a radius of a few microns are small compared to the wavelength. In this limit the dominant scattering coefficient is $b_1^r$ and the extinction efficiency is given approximately by Eq. (\ref{smallrhoext}). Extinction is due to absorption, which is controlled by $\epsilon^{\prime \prime}$. As $\epsilon^{\prime \prime}$ is small in this frequency range, energy dissipation on the grain, and thus extinction, is inhibited. For $\lambda^{-1}\rightarrow 0$, $\epsilon^{\prime \prime} \rightarrow 0$ and hence $Q_t \rightarrow 0$.
For charged dielectric particles light absorption is controlled not only by $\epsilon^{\prime \prime}$ but also by $\tau^\prime$ for $\chi<0$ and by $\alpha^{\prime \prime}$ for $\chi>0$ which stem from the real part of the surface or bulk conductivity of the surplus electrons, respectively. For low frequency $\tau^\prime$ and $\alpha^{\prime \prime}$ are larger than for higher frequencies and for $\lambda^{-1} \rightarrow 0$ even outweigh $\tau^{\prime \prime}$ and $-\alpha^{\prime} $ as the real parts of the surface and bulk conductivities tend to finite values whereas the imaginary parts vanish for $\lambda^{-1}=0$. This allows increased energy dissipation on charged dust particles entailing increased light absorption. Figure \ref{fig_lowfreq} shows this saturation of the extinction efficiency for charged particles.
For comparison, we also show the results for free surface (MgO) or bulk electrons (Al$_2$O$_3$). In this case the conductivities are purely imaginary and the saturation of the extinction efficiency is not observed. Instead we find a plasmon resonance of the electron gas around or inside the particle. The resonance is located where $\Re(g_1^b)=0$ (with $g_1^b$ given by Eq. (\ref{fg_b1})). This discrepancy with results from the phonon-limited conductivities shows that in the low-frequency limit the model of free surplus electrons cannot offer even a qualitative explanation.
The saturation of the extinction efficiency for low frequencies could be employed as a charge measurement. Performing an extinction measurement at fixed wavelength would give an approximately linear increase of $Q_t$ with the surface density or bulk density of surplus electrons (see right panels of Fig. \ref{fig_lowfreq}).
\subsection{Ordinary Resonances}
Below the TO phonon resonance at $\lambda_{TO}^{-1}$, the real part $\epsilon^\prime$ of the dielectric constant is large while $\epsilon^{\prime \prime}$ is still comparatively small (except at $\lambda_{TO}^{-1}$). The large positive $\epsilon^\prime$ (which entails a large positive real part of the refractive index $N$) allows for ordinary optical resonances,\cite{vandeHulst57} which are clearly seen in Fig. \ref{fig_over}. The lowest resonance is due to the $a_1$ mode. The contribution of this mode to the extinction efficiency is $Q_{a_1}^t=-6 \Re (a_1^r)/\rho^2$. More generally, the
extinction efficiency due to one mode only reads $Q_{a,b_n}^t=2(2n+1) q_{a,b_n}^t / \rho^2$ where
\begin{equation}
q_{a,b_n}^t= \frac{f^{\prime } (f^\prime - g^{\prime \prime}) }{(f^\prime - g^{\prime \prime})^2+g^{\prime 2}} \label{rescon_ordinary}
\end{equation}
with $f=f^\prime+if^{\prime \prime}$ and $g=g^\prime+ig^{\prime \prime}$ (given for $\rho\ll1$ by Eqs. (\ref{fa})-(\ref{gb})). Note that we have neglected $f^{\prime \prime}$ as $\epsilon^{\prime \prime} \ll1$. The resonance is approximately located where $g^\prime=0$. This gives for $n=1$ the condition
\begin{align}
3+\tau^{\prime \prime}\rho +(1-\epsilon^\prime-\alpha^\prime)\rho^2/2=0. \label{rescon_a1}
\end{align}
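For an uncharged sphere ($\tau=\alpha=0$), this condition can be solved explicitly for the resonance size parameter (a step we spell out for clarity):
\begin{align}
3+(1-\epsilon^\prime)\rho^2/2=0 \quad\Longrightarrow\quad \rho_{\mathrm{res}}=\sqrt{\frac{6}{\epsilon^\prime-1}}\,,
\end{align}
which requires $\epsilon^\prime>1$ and moves to smaller size parameters as $\epsilon^\prime$ grows towards the TO phonon resonance.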
The approximate resonance location for an uncharged sphere, obtained from $3 +(1-\epsilon^\prime)\rho^2/2=0$, is shown in Fig. \ref{fig_over} by the dashed line. It deviates somewhat from the true resonance location but captures its size dependence qualitatively. The contribution of one mode to absorption and scattering, respectively, is
$Q_{a,b_n}^{a,s}=2(2n+1) q_{a,b_n}^{a,s} / \rho^2$ with
\begin{equation}
q_{a,b_n}^a= \frac{-f^\prime g^{\prime \prime}}{(f^\prime - g^{\prime \prime})^2+g^{\prime 2}}, \quad
q_{a,b_n}^s= \frac{f^{\prime 2}}{(f^\prime - g^{\prime \prime})^2+g^{\prime 2}} . \label{qs_qa_comp}
\end{equation}
For $f^\prime>-g^{\prime \prime}$ scattering outweighs absorption while absorption outweighs scattering for $-g^{\prime \prime}>f^{\prime}$. The boundary between the two regimes is given by $-g^{\prime\prime}=f^\prime$. For $n=1$ this gives for an uncharged particle
\begin{align}
\rho=\left(\frac{15}{2} \frac{\epsilon^{\prime \prime}}{\epsilon^\prime-1} \right)^\frac{1}{3} ,
\end{align}
which is shown in Fig. \ref{fig_over} by the solid line and agrees with the underlying contour.
\begin{figure}
\includegraphics[width=\linewidth]{figure3}
\caption{(Color online) Extinction efficiency \(Q_t\)
as a function of the inverse wavelength for MgO (left) and Al$_2$O$_3$ (right) particles with radius \(a=4\mu\)m for \(n_s=0\), \( 10^{13} \), and $2\times 10^{13}$ cm$^{-2}$ (or corresponding bulk electron density $n_b=3n_s/a$) at \(T=300K\).}
\label{fig_ordres}
\end{figure}
Fig. \ref{fig_ordres} shows that the $a_1$ resonance is not shifted significantly by surplus charges. As the charge enters as $\sim \tau \rho$ or $\sim \alpha \rho^2$, the shift cannot be increased by reducing the particle size. Ordinary resonances thus offer no possibility to measure the particle charge.
\subsection{Ripple and Interference Structure}
Far above the highest TO phonon frequency (that is, for MgO and Al$_2$O$_3$ for $\lambda^{-1}>1000$ cm$^{-1}$) the extinction efficiency shows the typical interference and ripple structure of Mie scattering (marker D in Fig. \ref{fig_over}).\cite{BH83} It consists of a broad interference pattern superimposed with fine ripples which are due to individual modes. They become sharper at higher frequencies. Figure \ref{fig_ri} shows the overall interference and ripple structure (top) and exemplifies the charge
sensitivity of the ripple due to the mode $b_{10}$ (bottom). It is shifted only very slightly with increasing particle charge. This is due to the small values of the surface conductivity or the polarizability of the surplus electrons for $\lambda^{-1}>1000$ cm$^{-1}$. Thus the ripple structure is not a suitable candidate for a charge measurement either.
\begin{figure}
\includegraphics[width=\linewidth]{figure4}
\caption{(Color online) Top panel: Overview of the ripple and interference structure. Bottom panel: Extinction efficiency \(Q_t\) close to the $b_{10}$ ripple as a function of the inverse wavelength for MgO (left) and Al$_2$O$_3$ (right) particles with radius \(a=4\mu\)m for \(n_s=0\), \( 10^{13} \) and $2\times 10^{13}$ cm$^{-2}$ (or corresponding bulk electron density $n_b=3n_s/a$) at $T=300$ K.}
\label{fig_ri}
\end{figure}
\subsection{Anomalous Resonances}
At the TO phonon resonance the real part of the dielectric constant changes sign. For $\lambda^{-1} > \lambda_{TO}^{-1}$, $\epsilon^\prime <0$ and $\epsilon^{\prime \prime}\ll1$. This gives rise to a series of anomalous optical resonances, which can be seen in Fig. \ref{fig_over} (marker C). They correspond to the resonant excitation of transverse surface modes of the sphere.\cite{FK68} For a metal particle they are tied to the plasmon resonance~\cite{TL06,Tribelsky11} whereas for a dielectric particle they are due to the TO phonon. The resonances are associated with the scattering coefficients $b_n$. The lowest resonance is due to the mode $b_1$. The resonance location is approximately given by $\Re(g_1^b)=0$, which according to Eq. (\ref{gb}) gives for an uncharged sphere
\begin{align}
-2-\epsilon^\prime +\left(-1-\frac{\epsilon^\prime}{10}+\frac{\epsilon^{\prime 2}-\epsilon^{\prime \prime 2}}{10}\right)\rho^2=0 .
\end{align}
This approximation, shown by the dashed line near marker C in Fig. \ref{fig_over}, agrees well with the underlying Mie contour.
The higher resonances are scattering dominated, while the lowest resonance shows a cross-over from absorption to scattering dominance (see Fig. \ref{fig_over}). This cross-over can be understood from the contribution of the $b_1$ mode to the scattering and absorption efficiencies (given by Eq. (\ref{qs_qa_comp})). Absorption dominates for $-g^{\prime \prime}>f^\prime$, while scattering dominates for $-g^{\prime \prime}<f^\prime$. The boundary between the two regimes lies where $-g^{\prime \prime}=f^\prime$. For $n=1$ this gives
\begin{equation}
\rho=\left( \frac{3}{2} \frac{\epsilon^{\prime \prime}}{1-\epsilon^\prime}\right)^\frac{1}{3}
\end{equation}
which agrees well with the Mie contour (see Fig.~\ref{fig_over}).
\begin{figure}[t]
\includegraphics[width=\linewidth]{figure5}
\caption{(Color online) Top panels: Real part $\epsilon^\prime$ and imaginary part $\epsilon^{\prime \prime}$ of the dielectric constant as a function of the inverse wavelength $\lambda^{-1}$. The maximum of $\epsilon^{\prime \prime}$ for LiF stems from a TO phonon mode at $503$ cm$^{-1}$.
Middle panel: Extinction efficiency $Q_t$ as a function of $\lambda^{-1}$ and the
radius $a$ for a LiF particle with $n_s= 5 \times 10^{12}$ cm$^{-2}$ (left) and an Al$_2$O$_3$ particle with $n_b = 3n_s /a$ (right) for
$T = 300$~K. The dotted lines indicate the extinction maximum for (from left to right) $n_s = 0$ (black), $10^{12}$ (green), $2 \times
10^{12}$ (red), and $5\times 10^{12}$ cm$^{-2}$ (blue). Bottom panel: Extinction efficiency $Q_t$ as a function of $\lambda^{-1}$ for different
surface electron densities (corresponding to the middle panel) and fixed radius $ a = 0.05\mu$m. The extinction maximum is shifted to higher frequencies with increasing electron density.
}
\label{fig_anres}
\end{figure}
The anomalous resonances are sensitive to small changes in $\epsilon$ and $\tau$ or $\alpha$. Surplus electrons lead to a blue-shift of the resonances.\cite{HBF12b} This effect is strongest for small particles with radius $a<1\mu$m. In the small particle limit the extinction efficiency is approximately given by Eq. (\ref{smallrhoext}). The resonance is located at
\begin{align}
\epsilon^\prime+\alpha^\prime+2-2\tau^{\prime \prime}/\rho=0 . \label{rescon_b1}
\end{align}
Compared to the resonance condition for ordinary resonances, Eq. (\ref{rescon_a1}), the charge sensitivity increases for small particles as surplus charges enter by $-2\tau^{\prime \prime}/\rho \sim n_s /a$ (for $\chi<0$) or $\alpha^\prime \sim n_b$ (for $\chi>0$). This shows that the resonance shift by the surplus electrons is primarily an electron density effect on the polarizability of the dust particle.\cite{HBF12b}
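The magnitude of the blue-shift can be estimated by linearizing $\epsilon^\prime$ near the uncharged resonance position $\lambda_0^{-1}$ (an estimate we add; $s>0$ denotes the local slope of $\epsilon^\prime$, assumed constant over the shift). Writing $\epsilon^\prime \approx -2 + s\,(\lambda^{-1}-\lambda_0^{-1})$ and inserting into Eq. (\ref{rescon_b1}) gives the shift
\begin{align}
\delta\lambda^{-1} = \frac{2\tau^{\prime \prime}/\rho - \alpha^\prime}{s}\,,
\end{align}
so that $\delta\lambda^{-1}\sim n_s/a$ for $\chi<0$ and $\delta\lambda^{-1}\sim n_b$ for $\chi>0$.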
Figure \ref{fig_anres} shows the resonance shift for charged sub-micron-sized LiF~\cite{matpar} and Al$_2$O$_3$ particles. For Al$_2$O$_3$ the resonance shift is relatively large and the extinction resonance has a Lorentzian shape. As $\epsilon^{\prime}$ is well approximated linearly close to $-2$ and $\epsilon^{\prime \prime}$ varies only slightly, this follows from Eq. (\ref{smallrhoext}). For LiF the shift is smaller and the lineshape is not Lorentzian. The reason is the minor TO phonon at $\lambda^{-1}=503$ cm$^{-1}$. This leads to a bump in $\epsilon^{\prime \prime}$ disturbing the Lorentzian shape.
\begin{figure*}[t]
\includegraphics[width=\linewidth]{figure6}
\caption{(Color online) Position of the extinction resonance depending on the surface charge $n_s$ (or the equivalent bulk charge
$n_b = 3n_s /a$) for PbS, LiF, MgO,
Cu$_2$O, and Al$_2$O$_3$ particles with different radii $a$. Solid
(dashed) lines are obtained from the Mie contour [Eq. (\ref{rescon_b1})].
}
\label{fig_anres2}
\end{figure*}
A comparison of the resonance shift for MgO and LiF ($\chi<0$) as well as Al$_2$O$_3$, PbS and Cu$_2$O~\cite{matpar} ($\chi>0$) is given by Fig. \ref{fig_anres2}. The shift tends to be larger for bulk ($\chi>0$) than for surface ($\chi<0$) surplus electrons. Cu$_2$O is an example of a dielectric where $\epsilon^{\prime \prime}$ is too large for a well-resolved series of extinction resonances to form. Nevertheless a tail of the lowest resonance for small particles is discernible which is blue-shifted by surplus electrons, albeit to a lesser extent than for Al$_2$O$_3$ or PbS. PbS has a particularly strong resonance shift. Compared to the other materials the TO phonon resonance of PbS is located at a significantly lower frequency where $\alpha$ is particularly large. Together with the small conduction band effective mass, which enhances the electrons' mobility, this leads to the larger charge-induced blue-shift.
The blue-shift of the extinction resonance could be used as a charge measurement for particles with $a<1\mu$m. The resonance shift is found for particles with $\chi<0$, e.g. MgO or LiF, and $\chi>0$, e.g. PbS, Cu$_2$O or Al$_2$O$_3$. The most promising candidates are particles made from Al$_2$O$_3$ or PbS. The latter may even allow a measurement for micron-sized particles.
\subsection{Polarization Angles}
So far we have considered charge effects in the extinction efficiency. In the following we will turn to the charge signatures in the polarization of the scattered light. While the extinction (or scattering) efficiency gives access only to the magnitude of the scattering coefficients, the polarization of scattered light also reveals their phase. The phase information is particularly useful close to the ordinary and anomalous optical resonances. These occur for $\Re(g_n^{a,b})=0$, where the sign change of $g_n^{a,b}$ leads to a rapid phase change around the resonances. For $\epsilon^{\prime \prime}=0$ the functions $f_n^{a,b}$ and $g_n^{a,b}$ are real in the small particle limit (cf. Eqs. (\ref{fg_a1})-(\ref{fg_b2})). In this limit $f_n^{a}\sim \rho^{2n+3}$ and $f_n^{b}\sim \rho^{2n+1}$ while $g_n^{a,b}\sim \rho^0$ (for uncharged particles), which entails $g_n^{a,b}>f_n^{a,b}$ except very close to the resonance. As a consequence the phase of the scattering coefficients varies over the resonances by about $\pi$.
For linearly polarized incident light ($\mathbf{E}_i \sim \mathbf{\hat{e}}_x$) the electric field of the reflected light,
\begin{align}
\mathbf{E}_r\sim & E_0 \frac{e^{-i\omega t +i kr}}{ikr} \sum_{n=1}^\infty \frac{2n+1}{n(n+1)}\nonumber \\ & \times \left[ \left( a_n^r \frac{P_n^1(\cos \theta)}{\sin \theta }+b_n^r \frac{\mathrm{d} P_n^1 (\cos \theta)}{\mathrm{d} \theta} \right) \cos \phi \mathbf{\hat{e}}_\theta \right. \nonumber \\& - \left. \left(a_n^r \frac{\mathrm{d} P_n^1 (\cos \theta)}{\mathrm{d} \theta}+b_n^r \frac{P_n^1 (\cos \theta)}{\sin \theta} \right) \sin \phi \mathbf{\hat{e}}_\phi \right],
\end{align}
is in general elliptically polarized ($P_n^1(\mu)=\sqrt{1-\mu^2}\mathrm{d}P_n(\mu)/\mathrm{d}\mu$ with $P_n(\mu)$ a Legendre polynomial). Rewriting the reflected electric field as
\begin{align}
\mathbf{E}_r\sim E_0 \frac{e^{-i\omega t +i kr}}{ikr}\left(A_2 e^{i\phi_2} \mathbf{\hat{e}}_\theta +A_3 e^{i\phi_3} \mathbf{\hat{e}}_\phi \right),
\end{align}
where the amplitudes $A_2$, $A_3$ and the phases $\phi_2$, $\phi_3$ are given implicitly by the above equation, the ellipsometric angles are defined by
\begin{align}
\Delta \phi=\phi_2-\phi_3 \quad \text{and} \quad \tan\psi=\frac{A_2}{A_3}.
\end{align}
The angle $\psi$ gives the amplitude ratio and the phase difference $\Delta \phi$ characterizes the opening of the polarization
ellipse. For $\Delta \phi=0,\pm \pi$ the reflected light is linearly polarized while for $\Delta \phi=\pm \pi/2$ the opening
of the polarization ellipse is maximal.
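The two angles can be extracted numerically from the complex field components. A minimal sketch (the function and its conventions are our own, not from the text), assuming the components $A_2e^{i\phi_2}$ and $A_3e^{i\phi_3}$ are given as complex numbers:

```python
import cmath
import math

def ellipsometric_angles(e_theta, e_phi):
    """Return (delta_phi, psi) from the complex field components
    e_theta = A2 exp(i phi2) and e_phi = A3 exp(i phi3), with
    delta_phi = phi2 - phi3 and tan(psi) = A2 / A3."""
    delta_phi = cmath.phase(e_theta) - cmath.phase(e_phi)
    psi = math.atan2(abs(e_theta), abs(e_phi))  # amplitude-ratio angle in [0, pi/2]
    return delta_phi, psi
```

For $e_\theta=i$ and $e_\phi=1$ this gives $\Delta\phi=\pi/2$ (maximal opening of the polarization ellipse) and $\psi=\pi/4$.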
\begin{figure*}[t]
\includegraphics[width=\linewidth]{figure7a}
\includegraphics[width=\linewidth]{figure7b}
\caption{(Color online) Ellipsometric angles \(\Psi\) and \(\Delta\phi\) for scattering by an MgO and Al$_2$O$_3$ particle with radius $a=0.5\mu$m in the direction $\theta=\pi/2$ and $\phi=\pi/4$. (i) (MgO) and (iv) (Al$_2$O$_3$) show \(\Psi\) and \(\Delta\phi\) for $0$ cm$^{-1}<\lambda^{-1}<1000$ cm$^{-1}$ for an uncharged particle. (ii) (MgO) and (v) (Al$_2$O$_3$) magnify the vicinity of the extinction resonance. \(\Psi\) and \(\Delta\phi\) are shifted with increasing surface electron density $n_s$ (or corresponding bulk electron density $n_b=3n_s/a$). The annotated value at the base point gives the wave-number. From there the electron density increases counter-clockwise along the branches. The shift in \(\Delta\phi\) as a function of \(n_s\) or correspondingly $n_b$ is shown for two representative \(\lambda^{-1}\) in (iii) (MgO) and (vi) (Al$_2$O$_3$). }
\label{fig_ellips}
\end{figure*}
Note that forward scattered light ($\theta=0$),
\begin{align}
\mathbf{E}_r\sim E_0 \frac{e^{-i\omega t +i kr}}{ikr} \sum_{n=1}^\infty \frac{2n+1}{2}\left(a_n^r+b_n^r \right) \mathbf{\hat{e}}_x,
\end{align}
is linearly polarized. The same applies to backscattered light ($\theta=\pi$) or light that is scattered perpendicularly to the incident wave and in plane or perpendicularly to the direction of polarization of the incident light ($\theta=\pi/2$ and $\phi=0$ or $\phi=\pi/2$).
An important scattering angle where the scattered light is elliptically polarized is perpendicular to the incident wave and at 45$^\circ$ to the plane of polarization of the incident wave ($\theta=\pi/2$ and $\phi=\pi/4$). This configuration is also used to determine from the Mie signal the particle size of nanodust.\cite{GCK12}
Figure \ref{fig_ellips} shows the polarization angles $\Delta \phi$ and $\psi$ for this scattering direction for MgO and Al$_2$O$_3$ particles with radius $a=0.5\mu$m. Panels (i) (MgO) and (iv) (Al$_2$O$_3$) give an overview for an uncharged particle.
In the small particle limit only the scattering coefficients $a_1^r$, $b_1^r$, and $b_2^r$ are relevant. The reflected electric field is given by
\begin{align}
\mathbf{E}_r\sim E_0 \frac{e^{ikr-i\omega t}}{ikr} \left[ \left(\frac{3}{2\sqrt{2}}a_1^r-\frac{5}{2\sqrt{2}}b_2^r\right)\mathbf{\hat{e}}_\theta-\frac{3}{2\sqrt{2}}b_1^r \mathbf{\hat{e}}_\phi \right].
\end{align}
Figure \ref{fig_ellips} (i) shows a strong variation of $\Delta \phi$ for MgO as a function of $\lambda^{-1}$ which can be related to the variation of the phase of the scattering coefficients. For low frequencies $\lambda^{-1}<300$ cm$^{-1}$ the reflected light is linearly polarized. Close to 400 cm$^{-1}$ the rapid phase variation by $\pi$ of the coefficient $a_1^r$ increases $\Delta \phi$ by about $\pi$. Above $\lambda_{TO}^{-1}$ resonances appear in the coefficients $b_1^r$ and $b_2^r$ for $\epsilon^\prime=-2$ and $\epsilon^\prime=-3/2$ (for $\rho \ll 1$), respectively. As these resonances are located very close to each other, the phase shifts by $\pi$ partly cancel and $\Delta\phi$ acquires and loses a phase of about $-\pi/2$ at around $\lambda^{-1}=600$ cm$^{-1}$. For Al$_2$O$_3$ the variation of $\Delta \phi$ is more complicated because two TO phonon modes dominate $\epsilon$. Nevertheless the interplay of the $b_1$ and the $b_2$ mode above the higher TO phonon resonance leads to the rapid variation of $\Delta \phi$ from close to 0 to $-\pi/2$ and back to close to 0 near $800$ cm$^{-1}$.
Surplus charges alter the polarization angles very little except near the rapid opening and closing of the polarization
ellipse at the anomalous resonances. Here surplus charges lead to a blue-shift of the resonances in $b_1^r$ and $b_2^r$. The shift is approximately given by Eq. (\ref{rescon_b1}) for the mode $b_1$ and by $2\epsilon^\prime+2\alpha^\prime+3-6\tau^{\prime \prime}/\rho =0 $ for the mode $b_2$ (in both cases $\rho \ll1$). The resonance blue-shift translates into a shift of $\Delta \phi$. For a charged particle $\Delta \phi$ acquires and loses $-\pi/2$ as for an uncharged particle, but this takes place at higher $\lambda^{-1}$ than for an uncharged particle. This is shown in panels (ii) and (v) of Fig. \ref{fig_ellips}. Panels (iii) and (vi) exemplify it for fixed $\lambda^{-1}$ where $\Delta \phi$ increases or decreases with the particle charge. This shift of $\Delta \phi$ by several degrees should also offer a possibility for a charge measurement.
\section{Conclusions}
\label{sec_summary}
We studied the scattering behavior of a charged dielectric particle with an eye on identifying a strategy for an optical charge measurement. Our focus lay on the four characteristic regimes of scattering for particles with a strong TO phonon resonance: (i) low-frequency scattering, (ii) ordinary resonances, (iii) anomalous resonances, and (iv) interference and ripple structure.
Surplus charges enter into the scattering coefficients through their phonon-limited surface (for negative electron affinity) or bulk (positive electron affinity) conductivities.
No significant charge effects are found for the ordinary resonances and the interference and ripple structure. Surplus charges affect however the low-frequency regime and the anomalous optical resonances.
We have identified three charge-dependent features of light scattering: (i) a charge-induced increase in extinction for low-frequencies, (ii) a blue-shift of the anomalous extinction resonance, and (iii) a rapid variation of one of the two polarization angles at the anomalous extinction resonance.
At low frequencies energy relaxation is inhibited for uncharged particles as the imaginary part of the dielectric constant is very small. Surplus charges enable energy relaxation on the grain through their electrical conductivity which has a significant real part at low frequencies. This leads to increased absorption at low frequencies.
Above the TO phonon frequency the real part of the dielectric constant is negative which leads to anomalous optical resonances. Surplus charges enter into the resonance condition through the imaginary part of their electrical conductivity. They lead to a resonance blue-shift which is most significant for sub-micron-sized particles. Moreover, at the anomalous resonances the phase of the resonant scattering coefficients varies rapidly. This causes the opening and closing---characterized by the angle $\Delta \phi$---of the polarization ellipse of the reflected light. Surplus charges lead to the rapid variation in $\Delta \phi$ being shifted to higher frequency.
We suggest using these charge signatures in the Mie signal to measure the particle charge. For plasmonic particles charge-induced resonance shifts have already been detected experimentally for metallic nanorods which were charged by an electrolytic
solution~\cite{MJG06,NM07} and for an array of nanodiscs exposed to an argon plasma.\cite{LSH12}
Detecting the charge-sensitive effects of light scattering by dust particles in a dusty plasma would require shining
infra-red light through the plasma and measuring the light attenuation or the polarization of the reflected light.
The low-frequency increase in extinction or the shift in the polarization angle $\Delta \phi$ could be observed
with monochromatic light, while the resonance shift would require a frequency-dependent extinction measurement. This would
not only allow a determination of the particle charge without knowing any plasma parameters, but would also work for nanodust
particles~\cite{GCK12,KSB05,BKS09} where traditional techniques cannot be applied at all.
Eventually, suitable particles with a strong charge sensitivity (e.g. Al$_2$O$_3$ or PbS particles) could even be employed as minimally invasive electric plasma probes. The particles would accumulate a charge depending on the local plasma environment. Performing simultaneously an optical charge measurement and a traditional force measurement~\cite{KRZ05,TLA00,CJG11} would then allow one to infer the local electron density and temperature at the position of the probe particle.
\section*{Acknowledgement}
This work was supported by the Deutsche Forschungsgemeinschaft through SFB-TR 24.
\section{\bf 1. Introduction.}
Finding the entanglement entropy associated with a spatial surface
presents an interesting computational challenge, {\it e.g.}\
[\putref{LNST,CaandH,Solodukhin2}], that has attracted some recent
interest, {\it e.g.}\ [\putref{HandW}], not least because of the introduction of
holographic notions, [\putref{RyandT,Solodukhin,MyandS}]. Most attention is
given to even space--time dimensions, $d$, but interest attaches also to
odd $d$, [\putref{MyandS,MyandS2,HandW}], and this is the case I wish to
address in this paper for the very special situation of spherical
surfaces and conformally invariant scalar fields.
In a previous work, [\putref{Dowhyp}], I have analysed the universal
logarithmic term that occurs in the expansion of the entanglement entropy
associated with even spheres in the special case when the conically
deformed Euclidean space--time is the cyclically factored sphere,
S$^d/\mbox{\open\char90}_q$. Then the spatial surface is a $(d-2)$--sphere (with
vanishing extrinsic curvature). Conformal transformations can be used to
move between different manifolds, in particular between spheres, planes
and cylinders.
A universal term is one that is regularisation independent. In even
dimensions, for a cut--off regularisation, this takes the form of a
logarithm, while, for dimensional and $\zeta$--function\ regularisation, it shows up as
a pole. The `problem' is that for odd dimensions there is nothing
analogous (for conformal fields), indeed, in the last two regularisations
there are no divergences at all and, connectedly, the integrated
conformal anomaly vanishes.
In [\putref{RyandT}] (see equn.(7.11)), it is suggested that, for odd
dimensions, the constant term in the entropy, {\it i.e. } the one independent of
any introduced cutoff, is such a universal term. Myers and Sinha,
[\putref{MyandS2}], have taken up this suggestion in their attempt to
produce a $c$--theorem in higher dimensions and have commented on its
validity. If there is no divergence, then the entire expression should,
presumably, be considered universal.
If boundaries or singularities exist then the relevant $C_{d/2}$
coefficient (determining the conformal anomaly) is non--zero and a
`proper' universal term might occur. This would not be so for the
(periodic) factored sphere, S$^d/\mbox{\open\char90}_q$ (or `periodic lune'), which was
the case discussed exclusively in [\putref{Dowhyp}] and will occupy me
mostly here too.
In the present paper I wish to look at the entire expression for the
entropy. I use $\zeta$--function\ regularisation and there is, in general, a pole term,
related to the integrated conformal anomaly, $C_{d/2}$, and a finite
remainder given by a functional determinant. I anticipate that the
results have interest beyond their entropic bearing.
As before, I will obtain the periodic results by combining those for
DN--lunes, which are lunes with Dirichlet (D) {\it or} Neumann (N)
conditions on the boundary. Adding these gives the periodic lune,
[\putref{ChandD,Down}]. Geometrically, the DN--lune is obtained by
factoring the sphere by the dihedral group, $[q]$, of order $2q$.
\footnotecheck\defaultoption[]\@footnote{ This is generated by reflections in two hyperplanes in the
embedding space.} When $q=1$, the resulting fundamental domain is the
DN--hemisphere.
In [\putref{Dowhyp}], I showed that the conformal anomaly on the even
dimensional periodic lune, S$^d/\mbox{\open\char90}_q$, {\it i.e. } the conformal heat--kernel
coefficient, $C_{d/2}(q)$, had an extremum at the ordinary sphere, $q=1$.
I will show that the same applies to the effective action (or functional
determinant) for odd dimensions.
\section{\bf 2. The lune}
The $d$--lune can be defined inductively by giving its metric in the
nested form
$$
ds^2_{d-lune}=d\theta_d^2+\sin^2\theta_d\,\,ds^2_{(d-1)-lune}\,,\quad
0\le \theta_i\le\pi\,,
\eql{lune}
$$
which is iterated down to the $1$--lune of metric $d\theta_1^2$ with
$0\le\theta_1\le\beta$. The angle $\theta_1$ is referred to as the polar angle
and conventionally written $\phi$. The angle of the lune is $\beta$.
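At the bottom of the recursion, (\puteqn{lune}) gives for the 2--lune the metric $d\theta_2^2+\sin^2\theta_2\,d\phi^2$, the unit round metric whatever the opening $\beta$. This can be confirmed symbolically (an illustrative sketch of mine, using the textbook formula $K=-f''/f$ for metrics of the form $d\theta^2+f(\theta)^2d\phi^2$):

```python
import sympy as sp

theta = sp.symbols('theta')
f = sp.sin(theta)   # 2-lune metric ds^2 = d(theta)^2 + f(theta)^2 d(phi)^2

# Gaussian curvature of a metric d(theta)^2 + f(theta)^2 d(phi)^2 is
# K = -f''(theta)/f(theta); K = 1 identifies the unit round 2-sphere.
K = sp.simplify(-sp.diff(f, theta, 2)/f)
print(K)  # 1
```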
The boundary of the lune comprises two pieces corresponding to $\phi=0$
and $\phi=\beta$. The metric (\puteqn{lune}) shows immediately that these are
unit $(d-1)$--hemispheres because {\it their} polar angle, $\theta_2$, runs
only from $0$ to $\pi$. Conditions, typically Dirichlet and Neumann, can
be applied at the boundary. The boundary parts intersect, with a constant
dihedral angle of $\beta$, in a $(d-2)$--sphere, of unit radius, which
constitutes a set of points fixed under O(2) rotations parametrised by
$\phi$.
It can be seen that the 2--lune submanifold, with coordinates $\theta_1$ and
$\phi$, has a wedge singularity at its north and south poles. These
poles are at $\theta_2=0$ and $\theta_2=\pi$ and are the 0--hemispheres of a
0--sphere. In the $d$--lune, the submanifolds, $\theta_2=0$ and $\theta_2=\pi$,
are the $(d-2)$--hemispheres of the $(d-2)$--sphere of fixed points just
mentioned.
All this is for arbitrary angle, $\beta$. If $\beta=\pi/q$, $q\in\mbox{\open\char90}$, the
lune can be identified with the fundamental domain of the dihedral group
action on the sphere. As mentioned previously, restricting to just the
rotational subgroup doubles the fundamental domain to a periodic lune of
angle $2\pi/q$, the edges now being identified. This is equivalent to
adding the sets of $N$ and $D$ modes, [\putref{Down}], and produces a
conical singularity of deficit $2\pi-2\beta$. It corresponds to the
introduction of the uniformising angle, $\phi'=2\pi\phi/2\beta$, which runs
from $0$ to $2\pi$ as $\phi$ runs from $0$ to $2\beta$. The deformation of
the $d$--sphere, (\puteqn{lune}), can be traced to that of the 1--lune, or
$\phi'$--circle, which has unit radius but circumference, $2\beta$.
It is sometimes convenient, for visualisation purposes, to think of the
$d$--sphere as embedded in $\mbox{\open\char82}^{d+1}$ with the lune pictured as the
curved, outer surface of a `hyper--wedge'. The edges of the lune are
great unit $(d-1)$--hemispheres, half the intersections of the sphere
with two $d$--flats (hyperplanes) mutually intersecting in a
$(d-1)$--flat, the `axis' of the `rotation' that takes one $d$--flat into
the other with rotation angle, $\beta$. This axis intersects the sphere in
two `poles', isometric in total to a $(d-2)$--sphere and a singular
region. It is a fixed point set for those elements of O$(d+1)$ that
maintain the axis of rotation, {\it i.e. }\ an O$(2)$ subgroup.
If $\beta=\pi/q$, $q\in\mbox{\open\char90}$, the lune, and its $q$ reflections in the
hyperplanes, tessellate the $d$--sphere. Combining a lune with its
neighbouring reflection gives the fundamental domain for the rotational
(cyclic) subgroup, $\mbox{\open\char90}_q$, of the dihedral group.
To make the geometry a little clearer, I look at the $d=3$ case and write
out the Cartesian embedding of the unit 3--lune, ${\cal L}^3_\beta$, in my
notation,
$$\eqalign{
x_1&=\cos\theta_3\cr
x_2&=\sin\theta_3\cos\theta_2\cr
x_3&=\sin\theta_3\sin\theta_2\cos\phi\cr
x_4&=\sin\theta_3\sin\theta_2\sin\phi\cr
}
$$
with $0\le\phi\le\beta$. Translations in $\phi$ correspond to rotations in
the $x_3$--$x_4$ plane with the fixed point $x_3=x_4=0$ giving the set of
fixed points on ${\cal L}^3_\beta$ as the unit circle, $x_1^2+x_2^2=1$ for all
openings, $\beta$.
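As a consistency check on this embedding (an illustrative sympy sketch, not part of the text), one can verify that the image lies on the unit 3--sphere and that the induced metric reproduces the nested form (\puteqn{lune}):

```python
import sympy as sp

th3, th2, ph = sp.symbols('theta_3 theta_2 phi', real=True)
u = [th3, th2, ph]
# Cartesian embedding of the 3-lune given in the text
x = [sp.cos(th3),
     sp.sin(th3)*sp.cos(th2),
     sp.sin(th3)*sp.sin(th2)*sp.cos(ph),
     sp.sin(th3)*sp.sin(th2)*sp.sin(ph)]

# the image lies on the unit 3-sphere
assert sp.simplify(sum(xa**2 for xa in x) - 1) == 0

# induced metric g_ij = sum_a (dx_a/du_i)(dx_a/du_j); this reproduces
# ds^2 = d(th3)^2 + sin^2(th3) (d(th2)^2 + sin^2(th2) d(phi)^2)
g = sp.Matrix(3, 3, lambda i, j: sp.trigsimp(
    sum(sp.diff(xa, u[i])*sp.diff(xa, u[j]) for xa in x)))
print(g)
```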
The example of $\beta=\pi$ is easiest to picture. The boundary pieces of
${\cal L}^3_\pi$, at $\phi=0$ and $\phi=\pi$, form, respectively, the
$x_3\ge0$ and $x_3\le0$ hemispheres of the 2--sphere, $x_4=0$. The
boundaries of these pieces coincide with the fixed point unit circle
$x_1^2+x_2^2=1$ and join, with a dihedral angle of $\pi$, to form an
equator of the 2--sphere with poles, $x_3=\pm1$.
\section{\bf 3. Conformal Transformations}
The conformal transformations relevant here are those considered in
[\putref{ApandD2}], {\it i.e. } those between the original sphere, S$^d$, the
Euclidean plane, $\mbox{\open\char82}^d$, and the cylinder, $\mbox{\open\char82}\times$ S$^{d-1}$. The
latter would give the entropy on a $(d-1)$--sphere where the separating
surface is the equatorial $(d-2)$--sphere. This has vanishing extrinsic
curvature, whereas the second mapping yields the entropy on a flat
$(d-1)$ space with surface an ordinary $(d-2)$--sphere which does have
extrinsic curvature `generated' by the conformal transformation.
Actually, this geometrical circumstance was behind the calculations of
[\putref{Dowker2,Dowker1}] and [\putref{Solodukhin}] and I am, in reality,
going over part of this ground again.
In the particular example of the 3--lune, consider the section $x_4=0$ to
be a space section. Project it from the S$^3$ pole $x_3=1$ onto the
equatorial plane, $x_3=0$. The fixed point circle projects to a unit
circle, centre the origin, the inside of which is the projection of the
hemisphere with negative $x_3$ and the outside is the projection of that
with positive $x_3$.
The log coefficient is conformally invariant, and so it is immaterial
where it is evaluated. However the entropy, being determined by the
effective action, will generally change (in a well defined fashion).
The standard anomaly equation for the conformal variation of the
effective action, $W$, is, in $d$--dimensions,
$$
\delta W[e^{-2\omega}\,g]=C_{d/2}[e^{-2\omega}\,g;\delta\omega]\,,
\eql{canom}
$$
where $C_{d/2}[g;f]$ is the local heat--kernel coefficient averaged
against a test function, $f$
$$
C_{d/2}[g;f]=\int_{\cal M} C_{d/2}(g;x)\,f(x)\,.
$$
Equation (\puteqn{canom}) can be integrated if the local coefficient is
known. For arbitrary dimension we do not have this luxury and, in any
case, the evaluation is complicated. It has been done only for $d=2,3,4$
and $6$. However, life is made easy in odd dimensions if no boundary is
present (as is the case for the {\it periodic} lune), for then the
right--hand side of (\puteqn{canom}) is zero. Therefore the effective
actions on conformally related spaces are the same, and likewise for the
entropies according to the standard recipe given in the next section.
\section{\bf 4. Entropy}
According to the general prescription of Callan and Wilczek,
[\putref{CaandW}], the entropy is given in terms of the effective action,
$W^{(B)}$, on the Euclidean manifold deformed by a conical singularity of
angle $2\pi/B$ as,
$$
S=-(B\,\partial_B+1)\,W^{(B)}\bigg|_{B=1}\,.
\eql{entropy}
$$
In the present case, the deformed manifold, ${\cal M}_q$, is the periodic
lune, S$^d/\mbox{\open\char90}_q$, so that $B=q$. The field theory under consideration is
the scalar field conformally coupled in $d$ dimensions and the effective
action for this has been calculated in [\putref{Dow3,Dowjmp}]. The
evaluation of the derivative in (\puteqn{entropy}) involves a continuation
in $q$ from the integers \footnotecheck\defaultoption[]\@footnote{ The theory for any real $q$ is easily
worked out using eigenfunctions.}. In our previous work this was
straightforward as the relevant quantities, the heat--kernel
coefficients, were polynomial in $q$. For the effective action, the
matter is a little less obvious.
In terms of the $\zeta$--function, $\zeta(s,q)$, on ${\cal M}_q$, the unrenormalised
effective action is, generally,
$$
W^{(q)}={1\over2}\lim_{s\to0}{\zeta(0,q)\over s}-
{1\over2}\zeta'(0,q)+\zeta(0,q)\log\,L+X\,,
\eql{effact}
$$
where $X$ is a possible finite correction and $\zeta(0,q)=C_{d/2}(q)$, the
conformal anomaly. The third term, where $L$ is a scaling length, can be
considered a concomitant of the ultraviolet pole divergence (in this
version of $\zeta$--function\ regularisation). Taken together they correspond to the
log term that arises in the cut--off method. Conventionally, the pole
divergence would be removed by renormalisation. Neither the `area law' for
the entanglement entropy nor the other divergences encountered in a
cut--off or a lattice approach are recovered. This is not a worry as these
are non--universal terms anyway.
Generally, when $\zeta(0)$ vanishes, its role is, in some ways, taken over
by the derivative, $ \zeta'(0)$, this then also being conformally
invariant. Hence, as a working hypothesis, when treating the periodic
lune, I set $X$ to zero (it is a conformal invariant) and take the
entropy,
$$
S={1\over2}\bigg(1+q{\partial\over\partial q}\bigg)\zeta'(0,q)\bigg|_{q=1}\,,
\eql{entropy2}
$$
to be universal, in accordance with [\putref{RyandT}] and [\putref{MyandS2}]
and now have to address the calculation of the derivative.
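The step from (\puteqn{entropy}) to this formula is short: with $\zeta(0,q)=0$ and $X=0$, (\puteqn{effact}) reduces to $W^{(q)}=-{1\over2}\zeta'(0,q)$, and the Callan--Wilczek operator then produces the bracket above. A symbolic sketch (illustrative only; the name `zp` is mine and stands for $\zeta'(0,q)$):

```python
import sympy as sp

q = sp.symbols('q', positive=True)
zp = sp.Function('zp')   # zp(q) stands for zeta'(0, q); the name is illustrative

# with zeta(0, q) = 0 and X = 0, (effact) gives W = -zeta'(0, q)/2
W = -zp(q)/2

# Callan--Wilczek prescription (entropy), with B identified with q
S = -(q*sp.diff(W, q) + W)

# the operator appearing in (entropy2)
target = (zp(q) + q*sp.diff(zp(q), q))/2
assert sp.simplify(S - target) == 0
```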
\section{\bf 5. The log/divergence term}
As mentioned, in the present instance where $d$ is odd, the conformal
anomaly on the periodic lune is zero,
$$
C_{d/2}(q)=C^N_{d/2}(q)+C^D_{d/2}(q)=0\,,
$$
and there is neither a pole nor a log term. Nevertheless, for completeness,
I examine the individual coefficients as they would be required for
analysis on a single DN--lune.
The required $C_{d/2}(q)$ coefficients are given by the expressions in
[\putref{Dowhyp}], where the original references can be found. One has,
$$
C^N_{d/2}(q)= -C^D_{d/2}(q)={1\over 2q\, d!}
\bigg(B^{(d)}_d(d/2-1\big|\,q,{\bf 1})+
B^{(d)}_d(d/2\big|\,q,{\bf 1})\bigg)\,,
\eql{cees3}
$$
The symmetry of the generalised Bernoulli polynomials,
$$
B^{(d)}_n(d-1+q-x|\,q,{\bf 1})=(-1)^n\,B^{(d)}_n(x|\,q,{\bf 1})\,,
$$
should be noted.
For odd $d$ the right--hand side of (\puteqn{cees3}) simplifies to a
constant, independent of q and, therefore,
$$
C^N_{d/2}(q)= -C^D_{d/2}(q)={1\over 2\, d!}\,
B^{(d)}_d(d/2-1)\,,
\eql{cees4}
$$
which is proved in the Appendix.
\begin{ignore}
According to (\puteqn{entropy}), I need the $q$--derivative of
$C_{d/2}(q)$ and employ the result
$$
q\,\partial_q\,{1\over q}\,B^{(d)}_{d}(x|\,q,{\bf1})=
-{1\over q}\,B^{(d+1)}_{d}(x+q|\,q,q,{\bf1})\,,
\eql{diff}
$$
to obtain
$$
q\partial_q\,C^N_{d/2}(q)\bigg|_{q=1}=q\partial_q\,C^D_{d/2}(q)\bigg|_{q=1}=0
\eql{diff2}
$$
using the fact that
$$
B^{(d+1)}_d(d/2)=-B^{(d+1)}_d(d/2+1)\,,\quad d\quad{\rm odd}
\eql{prop}
$$
\end{ignore}
Thus again, as in [\putref{Dowhyp}], the $q$--derivative of the conformal
anomaly is zero (trivially in this case) at $q=1$. Numerical values are
$$\eqalign{
C^{D,N}_{d/2}(q)&=\pm{1\over2d!}\,B^{(d)}_d(d/2-1)\cr
&=\pm{1\over(d-1)!}\,B_1\,B^{(d-1)}_{d-1}(d/2-1)\cr
&= \pm [-{1\over48} , -{19\over1440}, {17\over11520}, {271\over120960},
-{367 \over1935360}, -{3233\over7257600},\ldots]\,,
}
\eql{ceed2}
$$
for $d=3,4,5,\ldots$.
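These closed forms lend themselves to machine checking. The sketch below (my own verification, with all Bernoulli parameters equal to one) builds the generalised Bernoulli polynomials from their generating function, reproduces the $d=3$ value $-1/48$ of (\puteqn{cees4}), and confirms the vanishing of $B^{(d)}_d(d/2)$ for odd $d$, which is what collapses the bracket in (\puteqn{cees3}) to the single term of (\puteqn{cees4}) at $q=1$:

```python
import sympy as sp

t = sp.symbols('t')

def genB(n, nu, x):
    # Generalised (Norlund) Bernoulli polynomial B^{(n)}_nu(x), all
    # parameters equal to one, read off from the generating function
    #   (t/(exp(t)-1))**n * exp(x*t) = sum_nu B^{(n)}_nu(x) t**nu/nu!
    f = (t/(sp.exp(t) - 1))**n * sp.exp(x*t)
    ser = sp.series(f, t, 0, nu + 1).removeO().expand()
    return sp.factorial(nu)*ser.coeff(t, nu)

# the d = 3 value of (cees4): B^{(3)}_3(1/2)/(2*3!) = -1/48
C3 = genB(3, 3, sp.Rational(1, 2))/(2*sp.factorial(3))
print(C3)  # -1/48

# antisymmetry about the midpoint makes B^{(d)}_d(d/2) vanish for odd d
for d in (3, 5, 7):
    assert genB(d, d, sp.Rational(d, 2)) == 0
```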
This $q$--independence can be seen in a different way. For a
$d$--dimensional manifold, ${\cal M}$, the heat--kernel coefficients, $C_k$,
in general take contributions from all submanifolds, of codimension 0
down to $d$. It is easily checked that the integrand of the
half--integral coefficient, $C_{k/2}$, for {\it even} codimension, has
dimensions of an inverse {\it odd} power of length. Its construction must
then necessarily involve powers of the extrinsic curvatures but these all
vanish for the fundamental domains of the group action on the sphere,
which applies here. In particular, the codimension two contribution to
$C_{d/2}$ is zero and this is the part that involves the `dihedral'
angle, $\pi/q$ between the boundary parts of the manifold, ${\cal M}$.
For example, in three dimensions, $C_{3/2}$ then has just an area
contribution and a trihedral corner, or vertex, contribution,
[\putref{ApandD2}], but, in the case of a dihedral action, the vertex
degenerates into a dihedral one, and its contribution vanishes. The
explicit local form of the area part remaining is,
$$
C_{3/2}=\pm{1\over384\pi}\sum_i\int_{\partial{\cal M}_i}dA\,\widehat R\,,
\eql{cee32}
$$
where $\widehat R=2$ is the intrinsic curvature of the boundary part
$\partial{\cal M}_i$, a 2--hemisphere. These parts add to a full S$^2$'s worth of
boundary (for all $q$) and (\puteqn{cee32}) evaluates to $\pm1/48$, in
agreement with (\puteqn{ceed2}).
\section{\bf 6. The derivative term}
I now consider the universal term in the entropy (\puteqn{entropy2})
(actually the whole entropy). For the periodic lune, the $\zeta$--function\ is the sum
of the N and D $\zeta$--functions,
$$
\zeta(s,q)=\zeta_N(s,q)+\zeta_D(s,q)\,.
$$
The dihedral $\zeta$--functions\ have been calculated in [\putref{Dow3}], and in
[\putref{Dowjmp}], in terms of the Barnes $\zeta$--function, $\zeta_d(s,a|\,{\bf d})$. In
particular the derivatives at zero are, \footnotecheck\defaultoption[]\@footnote{ The first two terms on
the right--hand side are suggested by the factorisation of the
eigenvalues with the final term indicating a multiplicative anomaly.}
$$\eqalign{
\zeta'_N(0,q)&=\zeta'_d(0,d/2|\,q,{\bf 1})+\zeta'_d(0,d/2-1|\,q,{\bf 1})+M(a_N,q)\cr
\zeta'_D(0,q)&=\zeta'_d(0,d/2+q|\,q,{\bf 1})+\zeta'_d(0,d/2-1+q|\,q,{\bf
1})+M(a_D,q)\,,\cr
}
\eql{zetas}
$$
where $M(a,q)$ is given by,
$$
M(a,q)=-
\sum_{r=1}^{(d-1)/2}{H^O_{r-1}\over r\,2^{2r}}\,N_{2r}(d,q)\,,
\eql{manom}
$$
with $H^O_r$ the odd harmonic number,
$$
H^O_r=\sum_{k=0}^r{1\over{2k+1}}\,.
$$
The $N_l$ are the residues of the Barnes function,
$$
\zeta_d(s+l,a|\,{\bf d})\rightarrow\,{N_l\over s}+R_l\,,\quad {\rm
as}\,\,s\rightarrow 0\,.
\eql{poles}
$$
They are given by generalised Bernoulli polynomials (see later) and
depend on $d$, the argument, $a$, and the parameters, ${\bf d}$, which
here reduce, in effect, to just the number, $q$. The dihedral Neumann and
Dirichlet arguments, $a_N$ and $a_D$, are given by $(d-1)/2$ and
$(d-1)/2+q$, respectively.
The two lines in (\puteqn{zetas}) give the derivatives of the $\zeta$--functions\ for
Neumann and Dirichlet boundary conditions on the edges of a lune of angle
$\pi/q$. As described earlier, adding them gives the $\zeta$--function\ for a
(periodic) lune, or cone, of angle $2\pi/q$, that is, for S$^d/\mbox{\open\char90}_q$,
which is the object of most interest.
According to (\puteqn{entropy}), I need the derivative of (\puteqn{zetas}) with
respect to $q$. This is non--controversial for the $M$ terms as they are
rational in $q$. For the Barnes function, I will assume that the
derivatives with respect to $s$ and $q$ commute and that $q$ can be
continued to 1. This can be justified from the contour expressions for
the Barnes multiple functions (see the Appendix).
There now follows a technical discussion of the derivatives of the Barnes
function needed in (\puteqn{zetas}).
\section{\bf 7. Derivatives of the Barnes $\zeta$--function}
Starting from the formal definition \footnotecheck\defaultoption[]\@footnote{ The continuation in $q$ is
visually obvious.},
$$
\zeta_d(s,a|\,q,{\bf1})=\sum_{\bf n=0}^\infty{1\over (a+q\,n_1+n_2+\ldots+n_d)^s}
\eql{barnes}
$$
simple manipulation leads to the derivative at $q=1$,
$$
\partial_q\,\zeta_d(s,a|\,q,{\bf1})\bigg|_{q=1}=-{s\over d}\,\zeta_d(s,a|\,{\bf
1})+s{a\over d}\,\zeta_d(s+1,a|\,{\bf1})\,,
\eql{qdif}
$$
assuming that $a$ is independent of $q$. Relevant for this is the
important recursion,
$$
\zeta_d(s,a+q|\,q,{\bf1})=\zeta_d(s,a|\,q,{\bf1})-\zeta_{d-1}(s,a|\,{\bf1})\,,
\eql{recurs}
$$
which, looking at (\puteqn{zetas}), relates the N and D expressions. In
particular it means that the Dirichlet contribution to the
$q$--derivative equals the Neumann one.
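The manipulation behind (\puteqn{qdif}) is the symmetrisation $\sum_{\bf n}n_1(a+n_1+\ldots+n_d)^{-s-1}={1\over d}\sum_{\bf n}(n_1+\ldots+n_d)(a+n_1+\ldots+n_d)^{-s-1}$, followed by writing $n_1+\ldots+n_d=(a+n_1+\ldots+n_d)-a$. In the convergent region $s>d$ this can be checked numerically; a sketch for $d=2$ (the truncation level and tolerance are mine):

```python
# Numerical spot-check of (qdif) for d = 2, s = 6, a = 1, in the region
# s > d where the defining sum (barnes) converges.
N, s, a = 300, 6, 1.0

def zeta2(s, a):
    # truncated Barnes sum zeta_2(s, a | 1, 1)
    return sum((a + n1 + n2)**(-s) for n1 in range(N) for n2 in range(N))

# q-derivative of the defining sum at q = 1, taken term by term
lhs = sum(-s*n1*(a + n1 + n2)**(-s - 1)
          for n1 in range(N) for n2 in range(N))

# right-hand side of (qdif) for d = 2
rhs = -(s/2)*zeta2(s, a) + s*(a/2)*zeta2(s + 1, a)

assert abs(lhs - rhs) < 1e-8
```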
The derivative with respect to $s$ and the limit $s\to0$ can now be taken
in (\puteqn{qdif}) with a view to substitution into (\puteqn{zetas}). Thus,
interchanging the implied limits,
$$\eqalign{
\partial_q\,\zeta'_d(0,a|\,q,{\bf1})\bigg|_{q=1}&=-{1\over
d}\,\zeta_d(0,a|\,{\bf1})+{a\over d}\,R_1(d,a)\cr
&={(-1)^{d-1}\over d\,d!}\,B_d^{(d)}(a)+{a\over d}\,R_1(d,a)\,.
}
\eql{qdif2}
$$
$R_1$ is the remainder at the $s=1$ pole, (\puteqn{poles}), when all
parameters equal 1. Henceforth, I do not display dependence on these
parameters unless necessary.
Before putting things together, I give some details on the residue and
remainder. The latter is given by Barnes as
$$
R_r(d,a)=(-1)^r\bigg({1\over(r-1)!}\,\psi^{(r)}_d(a)-N_r(d,q)\,H_{r-1}\bigg)
\eql{remain}
$$
where $H_r$ is the usual harmonic number,
$$
H_r=\sum_{k=1}^r{1\over k}\,,\quad H_0=0\,,
$$
and the $\psi$--functions are defined in terms of the multiple
$\Gamma$--function,
$$
\psi_d^{(p)}(a)={\partial^p\over\partial a^p}\,\log\Gamma_d(a)\,.
\eql{psi}
$$
Hence the remainder $R_1(d,a)=-\psi_d^{(1)}(a)\equiv-\psi_d(a)$ and the
derivative (\puteqn{qdif2}) becomes
$$\eqalign{
\partial_q\,\zeta'_d(0,a|\,q,{\bf1})\bigg|_{q=1}=
{(-1)^{d-1}\over d\,d!}\,B_d^{(d)}(a)-{a\over d}\,\psi_d(a)\,.
}
\eql{qdif5}
$$
It is shown in the Appendix that this can be rewritten more compactly as
$$\eqalign{
\partial_q\,\zeta'_d(0,a|\,q,{\bf1})\bigg|_{q=1}=\psi_{d+1}(a+1)\,.
}
\eql{qdif7}
$$
This result will be employed shortly.
I emphasise that I use Barnes' definitions of the multiple functions.
\section{\bf 8. Derivative of the multiplicative anomaly term}
I now compute the derivative of the multiplicative anomaly, $M$, terms in
(\puteqn{zetas}). I do not need to do this for the periodic lune, as these
terms cancel on addition, as shown next. However I give the results,
again for completeness.
I need the residues, $N_l$, given in terms of generalised Bernoulli
polynomials as
$$\eqalign{
N_l(d,q)&={(-1)^{d-l}\over(l-1)!(d-l)!}{1\over q}\,B^{(d)}_{d-l}(a|\,q,{\bf1})\cr
&={1\over(l-1)!(d-l)!}{1\over q}\,B^{(d)}_{d-l}(d-1+q-a|\,q,{\bf1})\,.
}
\eql{residue}
$$
If $l=2r$, as in (\puteqn{manom}), and $d$ is odd, the minus sign in
(\puteqn{residue}) shows that the $M$ terms in (\puteqn{zetas}) are opposite
for N and D conditions and would cancel if added, as advertised and as
noted in [\putref{Dow3,Dowjmp,DandKi2}].
I show that the $q$--derivative of $M(a_N,q)$, (\puteqn{manom}), vanishes
at $q=1$ which circumstance follows directly from the general result,
[\putref{Barnesa}],
$$
q\,\partial_q\,{1\over q}\,B^{(d)}_{d-l}(a|\,q,{\bf1})=
-{1\over q}\,B^{(d+1)}_{d-l}(a+q|\,q,q,{\bf1})\,.
\eql{diff}
$$
For, setting $a=a_N=(d-1)/2$ and $q=1$, the right--hand side equals
$$
-B^{(d+1)}_{d-2r}((d+1)/2)
$$
which vanishes from a well known property of the Bernoulli polynomials
since $d-2r$ is odd.
It is interesting, and potentially more significant, to note that the
multiplicative anomaly, $M(a_N,q)$, is, in fact, independent of $q$, if
$d$ is odd, just like the conformal anomaly. For $d=3,5,\ldots$, one has,
$$
M(a_N,q)=-{1\over8},\,\,{5\over288},\,\,-{2303\over691200},
\,\,{142601\over203212800},\,\ldots\,.
\eql{manom2}
$$
\section{\bf 9. The $q$--derivative on shell}
Formally differentiating the full $\zeta$--functions, (\puteqn{zetas}), with respect to
$q$ at 1, using (\puteqn{qdif7}) and the result of the previous section, I
find for the total Neumann $\zeta$--function\ (the Dirichlet value is the same) that,
$$\eqalign{
\partial_q\zeta'_N(0,q)\bigg|_{q=1}=\psi_{d+1}(d/2)+\psi_{d+1}(d/2+1)\,.
}
\eql{qdif3}
$$
The expression,
$$\eqalign{
\psi_d(a)={(-1)^{d-1}\over(d-1)!}B_{d-1}^{(d)}(a)\,\psi(a)-{1\over (d-1)!}
\sum_{k=1}^{d-1}{(-1)^k\over k}\,B_{d-k-1}^{(d-k)}(d-a)\,B_{k}^{(k)}(a)\,,
}
\eql{psid2}
$$
for the multiple $\psi$--function in terms of the standard digamma
function, $\psi(a)$, is derived in [\putref{dowgjms}], where other
references can be found.
Then, taking into account the antisymmetry for odd $d$,
$$
B^{(d+1)}_{d}(d/2)+B^{(d+1)}_{d}(d/2+1)=0\,,
\eql{zero}
$$
and the standard formula for the digamma function,
$$
\psi(1/2-n)= \psi(1/2+n)=-\gamma-2\log2+2H^O_{n-1}\,,\quad n\in\mbox{\open\char90}\,,
$$
the transcendentals $\gamma$ and $\log2$ cancel leaving the purely algebraic
expression,
$$\eqalign{
=-{2\over d}&B^{(d+1)}_d(d/2+1)-\cr
&(-1)^d\sum_{k=1}^{d}(-1)^k{B_{k-1}^{(k)}(d/2)
\,B_{d+1-k}^{(d+1-k)}(d/2+1)\over d+1-k}
-\cr&(-1)^d\sum_{k=1}^{d}(-1)^k
{B_{k-1}^{(k)}(d/2+1)\,B_{d+1-k}^{(d+1-k)}(d/2)\over
d+1-k}\,,\cr
}
\eql{qdif4}
$$
which, unsurprisingly perhaps, evaluates to zero, by machine, dimension
by dimension.
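That machine evaluation can be sketched as follows (my own reconstruction: the N\"orlund polynomials are built from their generating function, $\psi_d$ is assembled via (\puteqn{psid2}), and the combination (\puteqn{qdif3}) is evaluated numerically):

```python
import sympy as sp

t = sp.symbols('t')

def genB(n, nu, x):
    # Norlund polynomial B^{(n)}_nu(x), all parameters one, from
    # (t/(exp(t)-1))**n * exp(x*t) = sum_nu B^{(n)}_nu(x) t**nu/nu!
    f = (t/(sp.exp(t) - 1))**n * sp.exp(x*t)
    return sp.factorial(nu)*sp.series(f, t, 0, nu + 1).removeO().expand().coeff(t, nu)

def psi_multi(d, a):
    # multiple psi-function psi_d(a) assembled from (psid2);
    # sp.digamma is the ordinary psi(a)
    lead = sp.Integer(-1)**(d - 1)/sp.factorial(d - 1)*genB(d, d - 1, a)*sp.digamma(a)
    tail = sum(sp.Integer(-1)**k/k*genB(d - k, d - k - 1, d - a)*genB(k, k, a)
               for k in range(1, d))/sp.factorial(d - 1)
    return lead - tail

# (qdif3) should vanish at the sphere, q = 1
for d in (3, 5):
    val = psi_multi(d + 1, sp.Rational(d, 2)) + psi_multi(d + 1, sp.Rational(d, 2) + 1)
    assert abs(sp.N(val, 30)) < 1e-20
```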
An expression for the second $q$--derivative of the effective action is
derived in the Appendix. Its numerical evaluation shows an alternating
sign, being positive for $d=3$ (corresponding to a minimum for
$\zeta'(0,q)$ at $q=1$).
\section{\bf 10. The entropy}
According to this result and the definition, (\puteqn{entropy2}), the
entanglement entropy reduces to just half the quantity,
$\zeta'(0,q)\big|_{q=1}$, obtained by adding the ordinary ND--hemisphere
effective actions. These have been given above, (\puteqn{zetas}), and can be
formally expressed in terms of multiple $\Gamma$--functions,
$$\eqalign{
\zeta'_N(0,1)&=\log{\Gamma_d(d/2-1)\,\Gamma_d(d/2)\over\rho_d^2}+M(a_N,1)\cr
\zeta'_D(0,1)&=\log{\Gamma_d(d/2+1)\,\Gamma_d(d/2)\over\rho_d^2}-M(a_N,1)\,,
}
\eql{zedash3}
$$
where $\rho_d=\Gamma_{d+1}(1)$ is the multiple modular form,
[\putref{Barnesa}], and the $M$ term is given by (\puteqn{manom2}).
The full sphere effective action has a longish history, some of which is
detailed in [\putref{DandKi2}]. From the sum of the N and D expressions,
(\puteqn{zedash3}), (see [\putref{Dowjmp}] equns.(22),(23)), I find,
$$
\zeta'(0,1)=\log{\Gamma_d(d/2-1)\,\Gamma_d(d/2+1)\,\Gamma_d^2(d/2)\over\rho_d^4}\,.
\eql{zedash5}
$$
Explicit formulae can be obtained by expanding the Barnes $\zeta$--function\ in terms
of the Hurwitz $\zeta$--function\ with coefficients related to Stirling numbers,
[\putref{Barnesb}]. I just give some samples on the full sphere,
$$\eqalign{
\zeta'(0,1)&=-{3\over2}\,\zeta'_R(-2)-{1\over4}\log2\approx-0.127614\,,\quad d=3\cr
&=-{5\over32}\,\zeta'_R(-4)-{1\over16}\,\zeta'_R(-2)+{1\over64}\log2\approx
0.011486\,,\quad d=5\,.
}
\eql{zedash2}
$$
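These numbers follow from standard values of $\zeta'_R$ at negative even integers; a quick check with mpmath (illustrative only):

```python
from mpmath import mp, zeta, log, mpf

mp.dps = 25
# d = 3:  -(3/2) zeta_R'(-2) - (1/4) log 2
d3 = -mpf(3)/2*zeta(-2, derivative=1) - log(2)/4
# d = 5:  -(5/32) zeta_R'(-4) - (1/16) zeta_R'(-2) + (1/64) log 2
d5 = -mpf(5)/32*zeta(-4, derivative=1) - zeta(-2, derivative=1)/16 + log(2)/64
print(d3, d5)  # approx -0.127614 and 0.011486
```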
Another form follows by rewriting (\puteqn{zedash5}) as
$$\eqalign{
\zeta'(0,1)&=\log{\Gamma_{d+1}(d/2-1)\,\Gamma_{d+1}(d/2)\over
\Gamma_{d+1}(d/2+2)\,\Gamma_{d+1}(d/2+1)}\cr
&={2(-1)^{d+1}\over d!}\int_0^1 dz\,\,\pi z\,
\tan\pi z\,\prod_{j=1}^{(d-1)/2}(z^2-(j-1/2)^2)
}
\eql{zedash4}
$$
which is derived in [\putref{dowgjms}]. It corresponds to the $k=1$ case of
the GJMS operator, $P_{2k}$, which is just the usual conformal Laplacian.
The integrand can be expanded and contact made with (\puteqn{zedash2}) but
for numerical purposes (\puteqn{zedash4}) is adequate and I find for
$\zeta'(0,1)$,
$$\eqalign{
-0.001595\,,\quad &d=7\cr
0.000262\,,\quad &d=9\cr
-0.000047\,,\quad &d=11\,.
}
\eql{zedash6}
$$
The values alternate in sign, like the conformal anomaly in even
dimensions.
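The quadrature can be sketched as below (my own check; the $j=1$ factor $z^2-1/4$ cancels the pole of $\tan\pi z$ at $z=1/2$, so the integrand of (\puteqn{zedash4}) is regular there). For $d=3$ it reproduces $-0.127614$ from (\puteqn{zedash2}), and for $d=7,9,11$ the values of (\puteqn{zedash6}):

```python
from mpmath import mp, quad, mpf, pi, tan, factorial

mp.dps = 20

def zeta_prime(d):
    # numerical evaluation of the integral form (zedash4) for odd d
    def f(z):
        prod = mpf(1)
        for j in range(1, (d - 1)//2 + 1):
            prod *= z**2 - (j - mpf(1)/2)**2
        return pi*z*tan(pi*z)*prod
    # split the range at z = 1/2, where tan(pi z) has its (cancelled) pole
    return 2*(-1)**(d + 1)/factorial(d)*quad(f, [0, mpf(1)/2, 1])

for d in (3, 7, 9, 11):
    print(d, zeta_prime(d))
```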
\section{\bf 11. Conclusion}
The main technical result in the preceding is the vanishing of the
$q$--derivative of the effective action on the odd dimensional orbifold
S$^d/\mbox{\open\char90}_q$ at $q=1$, showing that the entanglement entropy associated
with a hyperspherical $(d-2)$--submanifold is essentially just the
effective action on the ordinary $d$--sphere, on certain assumptions
regarding universality.
I have not yet been able to prove the responsible Barnes--Bernoulli
identity, which, in terms of multiple functions reads,
$$
\psi_{d+1}(d/2)+\psi_{d+1}(d/2+1)=0
$$
for all odd $d$.
The entropic significance of the numerical values on shell,
(\puteqn{zedash2}), (\puteqn{zedash6}), is unclear to me. Perhaps more
interesting would be results off criticality obtained, say, by adding a
mass term. Extension to other fields is also indicated, as is extension to
other symmetric spaces.
\section{\bf Acknowledgments}
I thank Robert Myers for information and suggestions.
\section{\bf Appendix}
In this appendix I give some relevant material concerning the Barnes
functions and start with the proof of the $q$--independence of the N and
D $C_{d/2}$ coefficients, (\puteqn{cees4}), which I will do algebraically. I
seek to show that the bracket in (\puteqn{cees3}) is linear in $q$ and write
down the standard polynomial,
$$
B^{(n)}_\nu(x|\,q,{\bf1})=\sum_{s=0}^\nu q^s\comb \nu s B_s
B^{(n-1)}_{\nu-s}(x)\,,
\eql{poly}
$$
noting that the only term odd in $q$ comes from the $s=1$ term in the
sum. Next I apply the formula ([\putref{Norlund}] p.167),
$$
B^{(n)}_\nu(x+q|\,q,{\bf1})=B^{(n)}_\nu(x|\,-q,{\bf1})
\eql{neg}
$$
to get
$$\eqalign{
B^{(d)}_d(d/2-1|\,-q,{\bf1})&=B^{(d)}_d(d/2-1+q|\,q,{\bf1})\cr
&=-\,B^{(d)}_d(d/2|\,q,{\bf1})
}
$$
after using symmetry for odd $d$. Hence the bracket in (\puteqn{cees3}) is
twice the odd part of $B^{(d)}_d(d/2|\,q,{\bf1})$ obtained from
(\puteqn{poly}), thus proving (\puteqn{cees4}).
Contour integrals provide an alternative starting point. Barnes
[\putref{Barnesa}] p.406 gives an expression for the derivative at zero,
\footnotecheck\defaultoption[]\@footnote{ $q'$ is introduced for notational flexibility.}
$$
\zeta'_d(0,a|q,q',{\bf 1})={-i\over2\pi}\int_C
{e^{-az}\big(\log(-z)+\gamma\big)dz\over z(1-e^{-qz})(1-e^{-q'z})(1-e^{-z})^{d-2}}
\eql{zint}
$$
and so
$$
\partial_q\zeta'_d(0,a|q,q',{\bf 1})={i\over2\pi}\int_C
{e^{-(a+q)z}\big(\log(-z)+\gamma\big)dz\over(1-e^{-qz})^2 (1-e^{-q'z})
(1-e^{-z})^{d-2}}\,.
\eql{dzint}
$$
To compare, Barnes also gives the integral for the $\psi_d$--function,
obtained simply by differentiating (\puteqn{zint}) with respect to $a$,
$$
\psi_d(a|q,q',{\bf 1})={i\over2\pi}\int_C
{e^{-az}(\log(-z)+\gamma)dz\over (1-e^{-qz})(1-e^{-q'z})(1-e^{-z})^{d-2}}
\eql{psiint}
$$
so we obtain very simply (set $q'=1$ in (\puteqn{dzint}) and $q'=q$ in
(\puteqn{psiint})),
$$\eqalign{
\partial_q\zeta'_d(0,a|q,{\bf 1})&=\,\psi_{d+1}(a+q|\,q,q,{\bf1})\cr
}
\eql{qdif6}
$$
Evaluating at $q=1$ yields,
$$\eqalign{
\partial_q\zeta'_d(0,a|q,{\bf 1})\bigg|_{q=1}&=\psi_{d+1}(a+1)\cr
&=-{a\over d}\,\psi_{d}(a)+{(-1)^{d-1}\over d\,d!}\,B^{(d)}_d(a)
}
\eql{qdif10}
$$
after using Barnes' recursion,
$$
\psi_{d+1}(a+1)=-{a\over d}\,\psi_{d}(a)-{1\over d}\,\zeta_{d}(0,a)\,,
\eql{psid}
$$
and so I have regained the expression (\puteqn{qdif5}), derived earlier by a
different method. \footnotecheck\defaultoption[]\@footnote{ One could turn this around and use this
development to prove the recursion, (\puteqn{psid}).}
Higher derivatives can be deduced. For example
$$
\eqalign{
\partial^2_q\zeta'_d(0,a|q,{\bf 1})&=2\psi^{(2)}_{d+2}(a+q|\,q,q,q,{\bf1})+
\psi^{(2)}_{d+1}(a+2q|\,q,q,{\bf1})
\cr
}
\eql{qdif8}
$$
whence, setting $q=1$,
$$
\eqalign{
\partial^2_q\zeta'_d(0,a|q,{\bf 1})\bigg|_{q=1}&=2\psi^{(2)}_{d+2}(a+1)+
\psi^{(2)}_{d+1}(a+2)\cr
&={\partial^2\over\partial a^2}\,\big(2\psi_{d+2}(a+1)+\psi_{d+1}(a+2)\big)
\cr
}
\eql{qdif9}
$$
expressed via the trigamma function. This process can be continued
easily.
Algebraically, the right--hand side of (\puteqn{qdif3}) vanishes because of
the factorisation
$$
\psi_{d+1}(a)+\psi_{d+1}(a+1)=(a-d/2)\,
\bigg(P_d(a)\,\psi(a-d)+{Q_d(a)\over a-d}\bigg)
$$
for odd $d$ where $P_d(a)$ and $Q_d(a)$ are polynomials such that there
is no pole at $a=d$.
\vfil\break
\vglue 20truept
\noindent{\bf References.} \vskip5truept
\begin{putreferences}
\reference{dowgjms}{Dowker,J.S. {\it Determinants and conformal anomalies
of GJMS operators on spheres}, ArXiv: 1007.3865.}
\reference{Dowcascone}{Dowker,J.S. \prD{36}{1987}{3095}.}
\reference{Dowcos}{Dowker,J.S. \prD{36}{1987}{3742}.}
\reference{Dowtherm}{Dowker,J.S. \prD{18}{1978}{1856}.}
\reference{Dowgeo}{Dowker,J.S. \cqg{11}{1994}{L55}.}
\reference{ApandD2}{Dowker,J.S. and Apps,J.S. \cqg{12}{1995}{1363}.}
\reference{HandW}{Hertzberg,M.P. and Wilczek,F. {\it Some calculable contributions to
Entanglement Entropy}, ArXiv:1007.0993.}
\reference{KandB}{Kamela,M. and Burgess,C.P. \cjp{77}{1999}{85}.}
\reference{Dowhyp}{Dowker,J.S. \jpa{43}{2010}{445402}; ArXiv:1007.3865.}
\reference{LNST}{Lohmayer,R., Neuberger,H, Schwimmer,A. and Theisen,S.
\plb{685}{2010}{222}.}
\reference{Allen2}{Allen,B. PhD Thesis, University of Cambridge, 1984.}
\reference{MyandS}{Myers,R.C. and Sinha,A. {\it Seeing a c-theorem with
holography}, ArXiv:1006.1263}
\reference{MyandS2}{Myers,R.C. and Sinha,A. {\it Holographic c-theorems in
arbitrary dimensions},\break ArXiv: 1011.5819.}
\reference{RyandT}{Ryu,S. and Takayanagi,T. JHEP {\bf 0608}(2006)045.}
\reference{CaandH}{Casini,H. and Huerta,M. {\it Entanglement entropy
for the n--sphere},\break arXiv:1007.1813.}
\reference{CaandH3}{Casini,H. and Huerta,M. \jpa {42}{2009}{504007}.}
\reference{Solodukhin}{Solodukhin,S.N. \plb{665}{2008}{305}.}
\reference{Solodukhin2}{Solodukhin,S.N. \plb{693}{2010}{605}.}
\reference{CaandW}{Callan,C.G. and Wilczek,F. \plb{333}{1994}{55}.}
\reference{FandS1}{Fursaev,D.V. and Solodukhin,S.N. \plb{365}{1996}{51}.}
\reference{FandS2}{Fursaev,D.V. and Solodukhin,S.N. \prD{52}{1995}{2133}.}
\reference{Fursaev}{Fursaev,D.V. \plb{334}{1994}{53}.}
\reference{Donnelly2}{Donnelly,H. \ma{224}{1976}{161}.}
\reference{ApandD}{Apps,J.S. and Dowker,J.S. \cqg{15}{1998}{1121}.}
\reference{FandM}{Fursaev,D.V. and Miele,G. \prD{49}{1994}{987}.}
\reference{Dowker2}{Dowker,J.S.\cqg{11}{1994}{L137}.}
\reference{Dowker1}{Dowker,J.S.\prD{50}{1994}{6369}.}
\reference{FNT}{Fujita,M.,Nishioka,T. and Takayanagi,T. JHEP {\bf 0809}
(2008) 016.}
\reference{Hund}{Hund,F. \zfp{51}{1928}{1}.}
\reference{Elert}{Elert,W. \zfp {51}{1928}{8}.}
\reference{Poole2}{Poole,E.G.C. \qjm{3}{1932}{183}.}
\reference{Bellon}{Bellon,M.P. {\it On the icosahedron: from two to three
dimensions}, arXiv:0705.3241.}
\reference{Bellon2}{Bellon,M.P. \cqg{23}{2006}{7029}.}
\reference{McLellan}{McLellan,A,G. \jpc{7}{1974}{3326}.}
\reference{Boiteaux}{Boiteaux, M. \jmp{23}{1982}{1311}.}
\reference{HHandK}{Hage Hassan,M. and Kibler,M. {\it On Hurwitz
transformations} in {Le probl\`eme de factorisation de Hurwitz}, Eds.,
A.Ronveaux and D.Lambert (Fac.Univ.N.D. de la Paix, Namur, 1991),
pp.1-29.}
\reference{Weeks2}{Weeks,Jeffrey \cqg{23}{2006}{6971}.}
\reference{LandW}{Lachi\`eze-Rey,M. and Weeks,Jeffrey, {\it Orbifold construction of
the modes on the Poincar\'e dodecahedral space}, arXiv:0801.4232.}
\reference{Cayley4}{Cayley,A. \qjpam{58}{1879}{280}.}
\reference{JMS}{Jari\'c,M.V., Michel,L. and Sharp,R.T. {\it J.Physique}
{\bf 45} (1984) 1. }
\reference{AandB}{Altmann,S.L. and Bradley,C.J. {\it Phil. Trans. Roy. Soc. Lond.}
{\bf 255} (1963) 199.}
\reference{CandP}{Cummins,C.J. and Patera,J. \jmp{29}{1988}{1736}.}
\reference{Sloane}{Sloane,N.J.A. \amm{84}{1977}{82}.}
\reference{Gordan2}{Gordan,P. \ma{12}{1877}{147}.}
\reference{DandSh}{Desmier,P.E. and Sharp,R.T. \jmp{20}{1979}{74}.}
\reference{Kramer}{Kramer,P., \jpa{38}{2005}{3517}.}
\reference{Klein2}{Klein, F.\ma{9}{1875}{183}.}
\reference{Hodgkinson}{Hodgkinson,J. \jlms{10}{1935}{221}.}
\reference{ZandD}{Zheng,Y. and Doerschuk, P.C. {\it Acta Cryst.} {\bf A52}
(1996) 221.}
\reference{EPM}{Elcoro,L., Perez--Mato,J.M. and Madariaga,G.
{\it Acta Cryst.} {\bf A50} (1994) 182.}
\reference{PSW2}{Prandl,W., Schiebel,P. and Wulf,K.
{\it Acta Cryst.} {\bf A52} (1999) 171.}
\reference{FCD}{Fan,P--D., Chen,J--Q. and Draayer,J.P.
{\it Acta Cryst.} {\bf A55} (1999) 871.}
\reference{FCD2}{Fan,P--D., Chen,J--Q. and Draayer,J.P.
{\it Acta Cryst.} {\bf A55} (1999) 1049.}
\reference{Honl}{H\"onl,H. \zfp{89}{1934}{244}.}
\reference{PSW}{Patera,J., Sharp,R.T. and Winternitz,P. \jmp{19}{1978}{2362}.}
\reference{LandH}{Lohe,M.A. and Hurst,C.A. \jmp{12}{1971}{1882}.}
\reference{RandSA}{Ronveaux,A. and Saint-Aubin,Y. \jmp{24}{1983}{1037}.}
\reference{JandDeV}{Jonker,J.E. and De Vries,E. \npa{105}{1967}{621}.}
\reference{Rowe}{Rowe, E.G.Peter. \jmp{19}{1978}{1962}.}
\reference{KNR}{Kibler,M., N\'egadi,T. and Ronveaux,A. {\it The Kustaanheimo-Stiefel
transformation and certain special functions} \lnm{1171}{1985}{497}.}
\reference{GLP}{Gilkey,P.B., Leahy,J.V. and Park,J-H, \jpa{29}{1996}{5645}.}
\reference{Kohler}{K\"ohler,K.: Equivariant Reidemeister torsion on
symmetric spaces. Math.Ann. {\bf 307}, 57-69 (1997)}
\reference{Kohler2}{K\"ohler,K.: Equivariant analytic torsion on ${\bf P^nC}$.
Math.Ann.{\bf 297}, 553-565 (1993) }
\reference{Kohler3}{K\"ohler,K.: Holomorphic analytic torsion on Hermitian
symmetric spaces. J.Reine Angew.Math. {\bf 460}, 93-116 (1995)}
\reference{Zagierzf}{Zagier,D. {\it Zetafunktionen und Quadratische
K\"orper}, (Springer--Verlag, Berlin, 1981).}
\reference{Stek}{Stekholschkik,R. {\it Notes on Coxeter transformations and the McKay
correspondence.} (Springer, Berlin, 2008).}
\reference{Pesce}{Pesce,H. \cmh {71}{1996}{243}.}
\reference{Pesce2}{Pesce,H. {\it Contemp. Math} {\bf 173} (1994) 231.}
\reference{Sutton}{Sutton,C.J. {\it Equivariant isospectrality
and isospectral deformations on spherical orbifolds}, ArXiv:math/0608567.}
\reference{Sunada}{Sunada,T. \aom{121}{1985}{169}.}
\reference{GoandM}{Gornet,R. and McGowan,J. {\it J.Comp. and Math.}
{\bf 9} (2006) 270.}
\reference{Suter}{Suter,R. {\it Manusc.Math.} {\bf 122} (2007) 1-21.}
\reference{Lomont}{Lomont,J.S. {\it Applications of finite groups} (Academic
Press, New York, 1959).}
\reference{DandCh2}{Dowker,J.S. and Chang,Peter {\it Analytic torsion on
spherical factors and tessellations}, arXiv:math.DG/0904.0744 .}
\reference{Mackey}{Mackey,G. {\it Induced representations}
(Benjamin, New York, 1968).}
\reference{Koca}{Koca, {\it Turkish J.Physics}.}
\reference{Brylinski}{Brylinski, J-L., {\it A correspondence dual to McKay's}
ArXiv alg-geom/9612003.}
\reference{Rossmann}{Rossmann,W. {\it McKay's correspondence
and characters of finite subgroups of\break SU(2)} {\it Progress in Math.}
Birkhauser (to appear).}
\reference{JandL}{James, G. and Liebeck, M. {\it Representations and
characters of groups} (CUP, Cambridge, 2001).}
\reference{IandR}{Ito,Y. and Reid,M. {\it The Mckay correspondence for finite
subgroups of SL(3,C)} Higher dimensional varieties, (Trento 1994),
221-240, (Berlin, de Gruyter 1996).}
\reference{BandF}{Bauer,W. and Furutani, K. {\it J.Geom. and Phys.} {\bf
58} (2008) 64.}
\reference{Luck}{L\"uck,W. \jdg{37}{1993}{263}.}
\reference{LandR}{Lott,J. and Rothenberg,M. \jdg{34}{1991}{431}.}
\reference{DoandKi}{Dowker,J.S. and Kirsten, K. {\it Analysis and Appl.}
{\bf 3} (2005) 45.}
\reference{dowtess1}{Dowker,J.S. \cqg{23}{2006}{1}.}
\reference{dowtess2}{Dowker,J.S. {\it J.Geom. and Phys.} {\bf 57} (2007) 1505.}
\reference{MHS}{De Melo,T., Hartmann,L. and Spreafico,M. {\it Reidemeister
Torsion and analytic torsion of discs}, ArXiv:0811.3196.}
\reference{Vertman}{Vertman, B. {\it Analytic Torsion of a bounded
generalized cone}, ArXiv:0808.0449.}
\reference{WandY} {Weng,L. and You,Y., {\it Int.J. of Math.}{\bf 7} (1996)
109.}
\reference{ScandT}{Schwartz, A.S. and Tyupkin,Yu.S. \np{242}{1984}{436}.}
\reference{AAR}{Andrews, G.E., Askey,R. and Roy,R. {\it Special functions}
(CUP, Cambridge, 1999).}
\reference{Tsuchiya}{Tsuchiya, N. {\it R-torsion and analytic torsion for spherical
Clifford-Klein manifolds.} J. Fac. Sci. Univ. Tokyo Sect. 1A Math.
{\bf 23}, 289-295 (1976).}
\reference{Tsuchiya2}{Tsuchiya, N. J. Fac. Sci. Univ. Tokyo Sect. 1A Math.
{\bf 23}, 289-295 (1976).}
\reference{Lerch}{Lerch,M. \am{11}{1887}{19}.}
\reference{Lerch2}{Lerch,M. \am{29}{1905}{333}.}
\reference{TandS}{Threlfall, W. and Seifert, H. \ma{104}{1930}{1}.}
\reference{RandS}{Ray, D.B., and Singer, I. \aim{7}{1971}{145}.}
\reference{RandS2}{Ray, D.B., and Singer, I. {\it Proc.Symp.Pure Math.}
{\bf 23} (1973) 167.}
\reference{Jensen}{Jensen,J.L.W.V. \aom{17}{1915-1916}{124}.}
\reference{Rosenberg}{Rosenberg, S. {\it The Laplacian on a Riemannian Manifold}
(CUP, Cambridge, 1997).}
\reference{Nando2}{Nash, C. and O'Connor, D-J. {\it Int.J.Mod.Phys.}
{\bf A10} (1995) 1779.}
\reference{Fock}{Fock,V. \zfp{98}{1935}{145}.}
\reference{Levy}{Levy,M. \prs {204}{1950}{145}.}
\reference{Schwinger2}{Schwinger,J. \jmp{5}{1964}{1606}.}
\reference{Muller}{M\"uller, \lnm{}{}{}.}
\reference{VMK}{Varshalovich.}
\reference{DandWo}{Dowker,J.S. and Wolski, A. \prA{46}{1992}{6417}.}
\reference{Zeitlin1}{Zeitlin,V. {\it Physica D} {\bf 49} (1991). }
\reference{Zeitlin0}{Zeitlin,V. {\it Nonlinear World} Ed by
V.Baryakhtar {\it et al}, Vol.I p.717, (World Scientific, Singapore, 1989).}
\reference{Zeitlin2}{Zeitlin,V. \prl{93}{2004}{264501}. }
\reference{Zeitlin3}{Zeitlin,V. \pla{339}{2005}{316}. }
\reference{Groenewold}{Groenewold, H.J. {\it Physica} {\bf 12} (1946) 405.}
\reference{Cohen}{Cohen, L. \jmp{7}{1966}{781}.}
\reference{AandW}{Agarwal, G.S. and Wolf, E. \prD{2}{1970}{2161,2187,2206}.}
\reference{Jantzen}{Jantzen,R.T. \jmp{19}{1978}{1163}.}
\reference{Moses2}{Moses,H.E. \aop{42}{1967}{343}.}
\reference{Carmeli}{Carmeli,M. \jmp{9}{1968}{1987}.}
\reference{SHS}{Siemans,M., Hancock,J. and Siminovitch,D. {\it Solid State
Nuclear Magnetic Resonance} {\bf 31} (2007) 35.}
\reference{Dowk}{Dowker,J.S. \prD{28}{1983}{3013}.}
\reference{Heine}{Heine, E. {\it Handbuch der Kugelfunctionen}
(G.Reimer, Berlin. 1878, 1881).}
\reference{Pockels}{Pockels, F. {\it \"Uber die Differentialgleichung $\Delta
u+k^2u=0$} (Teubner, Leipzig. 1891).}
\reference{Hamermesh}{Hamermesh, M., {\it Group Theory} (Addison--Wesley,
Reading. 1962).}
\reference{Racah}{Racah, G. {\it Group Theory and Spectroscopy}
(Princeton Lecture Notes, 1951). }
\reference{Gourdin}{Gourdin, M. {\it Basics of Lie Groups} (Editions
Fronti\`eres, Gif sur Yvette. 1982.)}
\reference{Clifford}{Clifford, W.K. \plms{2}{1866}{116}.}
\reference{Story2}{Story, W.E. \plms{23}{1892}{265}.}
\reference{Story}{Story, W.E. \ma{41}{1893}{469}.}
\reference{Poole}{Poole, E.G.C. \plms{33}{1932}{435}.}
\reference{Dickson}{Dickson, L.E. {\it Algebraic Invariants} (Wiley, N.Y.
1915).}
\reference{Dickson2}{Dickson, L.E. {\it Modern Algebraic Theories}
(Sanborn and Co., Boston. 1926).}
\reference{Hilbert2}{Hilbert, D. {\it Theory of algebraic invariants} (C.U.P.,
Cambridge. 1993).}
\reference{Olver}{Olver, P.J. {\it Classical Invariant Theory} (C.U.P., Cambridge.
1999.)}
\reference{AST}{A\v{s}erova, R.M., Smirnov, J.F. and Tolsto\v{i}, V.N. {\it
Teoret. Mat. Fyz.} {\bf 8} (1971) 255.}
\reference{AandS}{A\v{s}erova, R.M. and Smirnov, J.F. \np{4}{1968}{399}.}
\reference{Shapiro}{Shapiro, J. \jmp{6}{1965}{1680}.}
\reference{Shapiro2}{Shapiro, J.Y. \jmp{14}{1973}{1262}.}
\reference{NandS}{Noz, M.E. and Shapiro, J.Y. \np{51}{1973}{309}.}
\reference{Cayley2}{Cayley, A. {\it Phil. Trans. Roy. Soc. Lond.}
{\bf 144} (1854) 244.}
\reference{Cayley3}{Cayley, A. {\it Phil. Trans. Roy. Soc. Lond.}
{\bf 146} (1856) 101.}
\reference{Wigner}{Wigner, E.P. {\it Gruppentheorie} (Vieweg, Braunschweig. 1931).}
\reference{Sharp}{Sharp, R.T. \ajop{28}{1960}{116}.}
\reference{Laporte}{Laporte, O. {\it Z. f. Naturf.} {\bf 3a} (1948) 447.}
\reference{Lowdin}{L\"owdin, P-O. \rmp{36}{1964}{966}.}
\reference{Ansari}{Ansari, S.M.R. {\it Fort. d. Phys.} {\bf 15} (1967) 707.}
\reference{SSJR}{Samal, P.K., Saha, R., Jain, P. and Ralston, J.P. {\it
Testing Isotropy of Cosmic Microwave Background Radiation},
astro-ph/0708.2816.}
\reference{Lachieze}{Lachi\'eze-Rey, M. {\it Harmonic projection and
multipole vectors}. astro-ph/0409081.}
\reference{CHS}{Copi, C.J., Huterer, D. and Starkman, G.D.
\prD{70}{2003}{043515}.}
\reference{Jaric}{Jari\'c, J.P. {\it Int. J. Eng. Sci.} {\bf 41} (2003) 2123.}
\reference{RandD}{Roche, J.A. and Dowker, J.S. \jpa{1}{1968}{527}.}
\reference{KandW}{Katz, G. and Weeks, J.R. \prD{70}{2004}{063527}.}
\reference{Waerden}{van der Waerden, B.L. {\it Die Gruppen-theoretische
Methode in der Quantenmechanik} (Springer, Berlin. 1932).}
\reference{EMOT}{Erdelyi, A., Magnus, W., Oberhettinger, F. and Tricomi, F.G. {
\it Higher Transcendental Functions} Vol.1 (McGraw-Hill, N.Y. 1953).}
\reference{Dowzilch}{Dowker, J.S. {\it Proc. Phys. Soc.} {\bf 91} (1967) 28.}
\reference{DandD}{Dowker, J.S. and Dowker, Y.P. {\it Proc. Phys. Soc.}
{\bf 87} (1966) 65.}
\reference{DandD2}{Dowker, J.S. and Dowker, Y.P. \prs{}{}{}.}
\reference{Dowk3}{Dowker,J.S. \cqg{7}{1990}{1241}.}
\reference{Dowk5}{Dowker,J.S. \cqg{7}{1990}{2353}.}
\reference{CoandH}{Courant, R. and Hilbert, D. {\it Methoden der
Mathematischen Physik} vol.1 \break (Springer, Berlin. 1931).}
\reference{Applequist}{Applequist, J. \jpa{22}{1989}{4303}.}
\reference{Torruella}{Torruella, \jmp{16}{1975}{1637}.}
\reference{Weinberg}{Weinberg, S.W. \pr{133}{1964}{B1318}.}
\reference{Meyerw}{Meyer, W.F. {\it Apolarit\"at und rationale Curven}
(Fues, T\"ubingen. 1883.) }
\reference{Ostrowski}{Ostrowski, A. {\it Jahrsb. Deutsch. Math. Verein.} {\bf
33} (1923) 245.}
\reference{Kramers}{Kramers, H.A. {\it Grundlagen der Quantenmechanik}, (Akad.
Verlag., Leipzig, 1938).}
\reference{ZandZ}{Zou, W.-N. and Zheng, Q.-S. \prs{459}{2003}{527}.}
\reference{Weeks1}{Weeks, J.R. {\it Maxwell's multipole vectors
and the CMB}. astro-ph/0412231.}
\reference{Corson}{Corson, E.M. {\it Tensors, Spinors and Relativistic Wave
Equations} (Blackie, London. 1950).}
\reference{Rosanes}{Rosanes, J. \jram{76}{1873}{312}.}
\reference{Salmon}{Salmon, G. {\it Lessons Introductory to the Modern Higher
Algebra} 3rd. edn. \break (Hodges, Dublin. 1876.)}
\reference{Milnew}{Milne, W.P. {\it Homogeneous Coordinates} (Arnold. London. 1910).}
\reference{Niven}{Niven, W.D. {\it Phil. Trans. Roy. Soc.} {\bf 170} (1879) 393.}
\reference{Scott}{Scott, C.A. {\it An Introductory Account of
Certain Modern Ideas and Methods in Plane Analytical Geometry,}
(MacMillan, N.Y. 1896).}
\reference{Bargmann}{Bargmann, V. \rmp{34}{1962}{300}.}
\reference{Maxwell}{Maxwell, J.C. {\it A Treatise on Electricity and
Magnetism} 2nd. edn. (Clarendon Press, Oxford. 1882).}
\reference{BandL}{Biedenharn, L.C. and Louck, J.D.
{\it Angular Momentum in Quantum Physics} (Addison-Wesley, Reading. 1981).}
\reference{Weylqm}{Weyl, H. {\it The Theory of Groups and Quantum Mechanics}
(Methuen, London. 1931).}
\reference{Robson}{Robson, A. {\it An Introduction to Analytical Geometry} Vol I
(C.U.P., Cambridge. 1940.)}
\reference{Sommerville}{Sommerville, D.M.Y. {\it Analytical Conics} 3rd. edn.
(Bell, London. 1933).}
\reference{Coolidge}{Coolidge, J.L. {\it A Treatise on Algebraic Plane Curves}
(Clarendon Press, Oxford. 1931).}
\reference{SandK}{Semple, G. and Kneebone. G.T. {\it Algebraic Projective
Geometry} (Clarendon Press, Oxford. 1952).}
\reference{AandC}{Abdesselam A., and Chipalkatti, J. {\it The Higher
Transvectants are redundant}, arXiv:0801.1533 [math.AG] 2008.}
\reference{Elliott}{Elliott, E.B. {\it The Algebra of Quantics} 2nd edn.
(Clarendon Press, Oxford. 1913).}
\reference{Elliott2}{Elliott, E.B. \qjpam{48}{1917}{372}.}
\reference{Howe}{Howe, R. \tams{313}{1989}{539}.}
\reference{Clebsch}{Clebsch, A. \jram{60}{1862}{343}.}
\reference{Prasad}{Prasad, G. \ma{72}{1912}{136}.}
\reference{Dougall}{Dougall, J. \pems{32}{1913}{30}.}
\reference{Penrose}{Penrose, R. \aop{10}{1960}{171}.}
\reference{Penrose2}{Penrose, R. \prs{273}{1965}{171}.}
\reference{Burnside}{Burnside, W.S. \qjm{10}{1870}{211}. }
\reference{Lindemann}{Lindemann, F. \ma{23} {1884}{111}.}
\reference{Backus}{Backus, G. {\it Rev. Geophys. Space Phys.} {\bf 8} (1970) 633.}
\reference{Baerheim}{Baerheim, R. {\it Q.J. Mech. appl. Math.} {\bf 51} (1998) 73.}
\reference{Lense}{Lense, J. {\it Kugelfunktionen} (Akad.Verlag, Leipzig. 1950).}
\reference{Littlewood}{Littlewood, D.E. \plms{50}{1948}{349}.}
\reference{Fierz}{Fierz, M. {\it Helv. Phys. Acta} {\bf 12} (1938) 3.}
\reference{Williams}{Williams, D.N. {\it Lectures in Theoretical Physics} Vol. VII,
(Univ.Colorado Press, Boulder. 1965).}
\reference{Dennis}{Dennis, M. \jpa{37}{2004}{9487}.}
\reference{Pirani}{Pirani, F. {\it Brandeis Lecture Notes on
General Relativity,} edited by S. Deser and K. Ford. (Brandeis, Mass. 1964).}
\reference{Sturm}{Sturm, R. \jram{86}{1878}{116}.}
\reference{Schlesinger}{Schlesinger, O. \ma{22}{1883}{521}.}
\reference{Askwith}{Askwith, E.H. {\it Analytical Geometry of the Conic
Sections} (A.\&C. Black, London. 1908).}
\reference{Todd}{Todd, J.A. {\it Projective and Analytical Geometry}.
(Pitman, London. 1946).}
\reference{Glenn}{Glenn. O.E. {\it Theory of Invariants} (Ginn \& Co, N.Y. 1915).}
\reference{DowkandG}{Dowker, J.S. and Goldstone, M. \prs{303}{1968}{381}.}
\reference{Turnbull}{Turnbull, H.A. {\it The Theory of Determinants,
Matrices and Invariants} 3rd. edn. (Dover, N.Y. 1960).}
\reference{MacMillan}{MacMillan, W.D. {\it The Theory of the Potential}
(McGraw-Hill, N.Y. 1930).}
\reference{Hobson}{Hobson, E.W. {\it The Theory of Spherical
and Ellipsoidal Harmonics} (C.U.P., Cambridge. 1931).}
\reference{Hobson1}{Hobson, E.W. \plms {24}{1892}{55}.}
\reference{GandY}{Grace, J.H. and Young, A. {\it The Algebra of Invariants}
(C.U.P., Cambridge, 1903).}
\reference{FandR}{Fano, U. and Racah, G. {\it Irreducible Tensorial Sets}
(Academic Press, N.Y. 1959).}
\reference{TandT}{Thomson, W. and Tait, P.G. {\it Treatise on Natural Philosophy}
(Clarendon Press, Oxford. 1867).}
\reference{Brinkman}{Brinkman, H.C. {\it Applications of spinor invariants in
atomic physics}, North Holland, Amsterdam 1956.}
\reference{Kramers1}{Kramers, H.A. {\it Proc. Roy. Soc. Amst.} {\bf 33} (1930) 953.}
\reference{DandP2}{Dowker,J.S. and Pettengill,D.F. \jpa{7}{1974}{1527}}
\reference{Dowk1}{Dowker,J.S. \jpa{}{}{45}.}
\reference{Dowk2}{Dowker,J.S. \aop{71}{1972}{577}}
\reference{DandA}{Dowker,J.S. and Apps, J.S. \cqg{15}{1998}{1121}.}
\reference{Weil}{Weil,A., {\it Elliptic functions according to Eisenstein
and Kronecker}, Springer, Berlin, 1976.}
\reference{Ling}{Ling,C-H. {\it SIAM J.Math.Anal.} {\bf5} (1974) 551.}
\reference{Ling2}{Ling,C-H. {\it J.Math.Anal.Appl.}(1988).}
\reference{BMO}{Brevik,I., Milton,K.A. and Odintsov, S.D. \aop{302}{2002}{120}.}
\reference{KandL}{Kutasov,D. and Larsen,F. {\it JHEP} 0101 (2001) 1.}
\reference{KPS}{Klemm,D., Petkou,A.C. and Siopsis,G. {\it Entropy
bounds, monotonicity properties and scaling in CFT's}. hep-th/0101076.}
\reference{DandC}{Dowker,J.S. and Critchley,R. \prD{15}{1976}{1484}.}
\reference{AandD}{Al'taie, M.B. and Dowker, J.S. \prD{18}{1978}{3557}.}
\reference{Dow1}{Dowker,J.S. \prD{37}{1988}{558}.}
\reference{Dow30}{Dowker,J.S. \prD{28}{1983}{3013}.}
\reference{DandK}{Dowker,J.S. and Kennedy,G. \jpa{11}{1978}{895}.}
\reference{Dow2}{Dowker,J.S. \cqg{1}{1984}{359}.}
\reference{DandKi}{Dowker,J.S. and Kirsten, K. {\it Comm. in Anal. and Geom.}
{\bf 7} (1999) 641.}
\reference{DandKe}{Dowker,J.S. and Kennedy,G. \jpa{11}{1978}{895}.}
\reference{Gibbons}{Gibbons,G.W. \pl{60A}{1977}{385}.}
\reference{Cardy}{Cardy,J.L. \np{366}{1991}{403}.}
\reference{ChandD}{Chang,P. and Dowker,J.S. \np{395}{1993}{407}.}
\reference{DandC2}{Dowker,J.S. and Critchley,R. \prD{13}{1976}{224}.}
\reference{Camporesi}{Camporesi,R. \prp{196}{1990}{1}.}
\reference{BandM}{Brown,L.S. and Maclay,G.J. \pr{184}{1969}{1272}.}
\reference{CandD}{Candelas,P. and Dowker,J.S. \prD{19}{1979}{2902}.}
\reference{Unwin1}{Unwin,S.D. Thesis. University of Manchester. 1979.}
\reference{Unwin2}{Unwin,S.D. \jpa{13}{1980}{313}.}
\reference{DandB}{Dowker,J.S. and Banach,R. \jpa{11}{1978}{2255}.}
\reference{Obhukov}{Obukhov,Yu.N. \pl{109B}{1982}{195}.}
\reference{Kennedy}{Kennedy,G. \prD{23}{1981}{2884}.}
\reference{CandT}{Copeland,E. and Toms,D.J. \np {255}{1985}{201}.}
\reference{CandT2}{Copeland,E. and Toms,D.J. \cqg {3}{1986}{431}.}
\reference{ELV}{Elizalde,E., Lygren, M. and Vassilevich,
D.V. \jmp {37}{1996}{3105}.}
\reference{Malurkar}{Malurkar,S.L. {\it J.Ind.Math.Soc} {\bf16} (1925/26) 130.}
\reference{Glaisher}{Glaisher,J.W.L. {\it Messenger of Math.} {\bf18}
(1889) 1.}
\reference{Anderson}{Anderson,A. \prD{37}{1988}{536}.}
\reference{CandA}{Cappelli,A. and D'Appollonio,G. \pl{487B}{2000}{87}.}
\reference{Wot}{Wotzasek,C. \jpa{23}{1990}{1627}.}
\reference{RandT}{Ravndal,F. and Tollefsen,D. \prD{40}{1989}{4191}.}
\reference{SandT}{Santos,F.C. and Tort,A.C. \pl{482B}{2000}{323}.}
\reference{FandO}{Fukushima,K. and Ohta,K. {\it Physica} {\bf A299} (2001) 455.}
\reference{GandP}{Gibbons,G.W. and Perry,M. \prs{358}{1978}{467}.}
\reference{Dow4}{Dowker,J.S..}
\reference{Rad}{Rademacher,H. {\it Topics in analytic number theory,}
Springer-Verlag, Berlin,1973.}
\reference{Halphen}{Halphen,G.-H. {\it Trait\'e des Fonctions Elliptiques},
Vol 1, Gauthier-Villars, Paris, 1886.}
\reference{CandW}{Cahn,R.S. and Wolf,J.A. \cmh{51}{1976}{1}.}
\reference{Berndt}{Berndt,B.C. \rmjm{7}{1977}{147}.}
\reference{Hurwitz}{Hurwitz,A. \ma{18}{1881}{528}.}
\reference{Hurwitz2}{Hurwitz,A. {\it Mathematische Werke} Vol.I. Basel,
Birkhauser, 1932.}
\reference{Berndt2}{Berndt,B.C. \jram{303/304}{1978}{332}.}
\reference{RandA}{Rao,M.B. and Ayyar,M.V. \jims{15}{1923/24}{150}.}
\reference{Hardy}{Hardy,G.H. \jlms{3}{1928}{238}.}
\reference{TandM}{Tannery,J. and Molk,J. {\it Fonctions Elliptiques},
Gauthier-Villars, Paris, 1893--1902.}
\reference{schwarz}{Schwarz,H.-A. {\it Formeln und
Lehrs\"atze zum Gebrauche...}, Springer 1893. (The first edition was 1885.)
The French translation by Henri Pad\'e is {\it Formules et Propositions
pour L'Emploi...}, Gauthier-Villars, Paris, 1894.}
\reference{Hancock}{Hancock,H. {\it Theory of elliptic functions}, Vol I.
Wiley, New York 1910.}
\reference{watson}{Watson,G.N. \jlms{3}{1928}{216}.}
\reference{MandO}{Magnus,W. and Oberhettinger,F. {\it Formeln und S\"atze},
Springer-Verlag, Berlin 1948.}
\reference{Klein}{Klein,F. {\it Lectures on the Icosahedron}
(Methuen, London. 1913).}
\reference{AandL}{Appell,P. and Lacour,E. {\it Fonctions Elliptiques},
Gauthier-Villars,
Paris. 1897.}
\reference{HandC}{Hurwitz,A. and Courant,R. {\it Allgemeine Funktionentheorie},
Springer,
Berlin. 1922.}
\reference{WandW}{Whittaker,E.T. and Watson,G.N. {\it Modern analysis},
Cambridge. 1927.}
\reference{SandC}{Selberg,A. and Chowla,S. \jram{227}{1967}{86}. }
\reference{zucker}{Zucker,I.J. {\it Math.Proc.Camb.Phil.Soc} {\bf 82 }(1977)
111.}
\reference{glasser}{Glasser,M.L. {\it Maths.of Comp.} {\bf 25} (1971) 533.}
\reference{GandW}{Glasser, M.L. and Wood,V.E. {\it Maths of Comp.} {\bf 25}
(1971)
535.}
\reference{greenhill}{Greenhill,A.G. {\it The Applications of Elliptic
Functions}, MacMillan. London, 1892.}
\reference{Weierstrass}{Weierstrass,K. {\it J.f.Mathematik (Crelle)}
{\bf 52} (1856) 346.}
\reference{Weierstrass2}{Weierstrass,K. {\it Mathematische Werke} Vol.I,p.1,
Mayer u. M\"uller, Berlin, 1894.}
\reference{Fricke}{Fricke,R. {\it Die Elliptische Funktionen und Ihre Anwendungen},
Teubner, Leipzig. 1915, 1922.}
\reference{Konig}{K\"onigsberger,L. {\it Vorlesungen \"uber die Theorie der
Elliptischen Funktionen}, \break Teubner, Leipzig, 1874.}
\reference{Milne}{Milne,S.C. {\it The Ramanujan Journal} {\bf 6} (2002) 7-149.}
\reference{Schlomilch}{Schl\"omilch,O. {\it Ber. Verh. K. Sachs. Gesell. Wiss.
Leipzig} {\bf 29} (1877) 101-105; {\it Compendium der h\"oheren
Analysis}, Bd.II, 3rd Edn, Vieweg, Brunswick, 1878.}
\reference{BandB}{Briot,C. and Bouquet,C. {\it Th\'eorie des Fonctions
Elliptiques}, Gauthier-Villars, Paris, 1875.}
\reference{Dumont}{Dumont,D. \aim {41}{1981}{1}.}
\reference{Andre}{Andr\'e,D. {\it Ann. \'Ecole Normale Sup\'erieure} {\bf 6} (1877)
265;
{\it J.Math.Pures et Appl.} {\bf 5} (1878) 31.}
\reference{Raman}{Ramanujan,S. {\it Trans.Camb.Phil.Soc.} {\bf 22} (1916) 159;
{\it Collected Papers}, Cambridge, 1927}
\reference{Weber}{Weber,H.M. {\it Lehrbuch der Algebra} Bd.III, Vieweg,
Brunswick 1903.}
\reference{Weber2}{Weber,H.M. {\it Elliptische Funktionen und algebraische
Zahlen},
Vieweg, Brunswick 1891.}
\reference{ZandR}{Zucker,I.J. and Robertson,M.M.
{\it Math.Proc.Camb.Phil.Soc} {\bf 95 }(1984) 5.}
\reference{JandZ1}{Joyce,G.S. and Zucker,I.J.
{\it Math.Proc.Camb.Phil.Soc} {\bf 109 }(1991) 257.}
\reference{JandZ2}{Zucker,I.J. and Joyce.G.S.
{\it Math.Proc.Camb.Phil.Soc} {\bf 131 }(2001) 309.}
\reference{zucker2}{Zucker,I.J. {\it SIAM J.Math.Anal.} {\bf 10} (1979) 192,}
\reference{BandZ}{Borwein,J.M. and Zucker,I.J. {\it IMA J.Math.Anal.} {\bf 12}
(1992) 519.}
\reference{Cox}{Cox,D.A. {\it Primes of the form $x^2+n\,y^2$}, Wiley,
New York, 1989.}
\reference{BandCh}{Berndt,B.C. and Chan,H.H. {\it Mathematika} {\bf42} (1995)
278.}
\reference{EandT}{Elizalde,E. and Tort. hep-th/}
\reference{KandS}{Kiyek,K. and Schmidt,H. {\it Arch.Math.} {\bf 18} (1967) 438.}
\reference{Oshima}{Oshima,K. \prD{46}{1992}{4765}.}
\reference{greenhill2}{Greenhill,A.G. \plms{19} {1888} {301}.}
\reference{Russell}{Russell,R. \plms{19} {1888} {91}.}
\reference{BandB}{Borwein,J.M. and Borwein,P.B. {\it Pi and the AGM}, Wiley,
New York, 1998.}
\reference{Resnikoff}{Resnikoff,H.L. \tams{124}{1966}{334}.}
\reference{vandp}{Van der Pol, B. {\it Indag.Math.} {\bf18} (1951) 261,272.}
\reference{Rankin}{Rankin,R.A. {\it Modular forms} (C.U.P., Cambridge).}
\reference{Rankin2}{Rankin,R.A. {\it Proc. Roy.Soc. Edin.} {\bf76 A} (1976) 107.}
\reference{Skoruppa}{Skoruppa,N-P. {\it J.of Number Th.} {\bf43} (1993) 68 .}
\reference{Down}{Dowker,J.S. {\it Nucl.Phys.} B (Proc.Suppl.) {\bf 104} (2002) 153;
also Dowker,J.S. hep-th/0007129.}
\reference{Eichler}{Eichler,M. \mz {67}{1957}{267}.}
\reference{Zagier}{Zagier,D. \invm{104}{1991}{449}.}
\reference{Lang}{Lang,S. {\it Modular Forms}, Springer, Berlin, 1976.}
\reference{Kosh}{Koshliakov,N.S. {\it Mess.of Math.} {\bf 58} (1928) 1.}
\reference{BandH}{Bodendiek, R. and Halbritter,U. \amsh{38}{1972}{147}.}
\reference{Smart}{Smart,L.R., \pgma{14}{1973}{1}.}
\reference{Grosswald}{Grosswald,E. {\it Acta. Arith.} {\bf 21} (1972) 25.}
\reference{Kata}{Katayama,K. {\it Acta Arith.} {\bf 22} (1973) 149.}
\reference{Ogg}{Ogg,A. {\it Modular forms and Dirichlet series} (Benjamin,
New York,
1969).}
\reference{Bol}{Bol,G. \amsh{16}{1949}{1}.}
\reference{Epstein}{Epstein,P. \ma{56}{1903}{615}.}
\reference{Petersson}{Petersson.}
\reference{Serre}{Serre,J-P. {\it A Course in Arithmetic}, Springer,
New York, 1973.}
\reference{Schoenberg}{Schoeneberg,B., {\it Elliptic Modular Functions},
Springer, Berlin, 1974.}
\reference{Apostol}{Apostol,T.M. \dmj {17}{1950}{147}.}
\reference{Ogg2}{Ogg,A. {\it Lecture Notes in Math.} {\bf 320} (1973) 1.}
\reference{Knopp}{Knopp,M.I. \dmj {45}{1978}{47}.}
\reference{Knopp2}{Knopp,M.I. \invm {}{1994}{361}.}
\reference{LandZ}{Lewis,J. and Zagier,D. \aom{153}{2001}{191}.}
\reference{DandK1}{Dowker,J.S. and Kirsten,K. {\it Elliptic functions and
temperature inversion symmetry on spheres} hep-th/.}
\reference{HandK}{Husseini and Knopp.}
\reference{Kober}{Kober,H. \mz{39}{1934-5}{609}.}
\reference{HandL}{Hardy,G.H. and Littlewood,J.E. \am{41}{1917}{119}.}
\reference{Watson}{Watson,G.N. \qjm{2}{1931}{300}.}
\reference{SandC2}{Chowla,S. and Selberg,A. {\it Proc.Nat.Acad.} {\bf 35}
(1949) 371.}
\reference{Landau}{Landau, E. {\it Lehre von der Verteilung der Primzahlen},
(Teubner, Leipzig, 1909).}
\reference{Berndt4}{Berndt,B.C. \tams {146}{1969}{323}.}
\reference{Berndt3}{Berndt,B.C. \tams {}{}{}.}
\reference{Bochner}{Bochner,S. \aom{53}{1951}{332}.}
\reference{Weil2}{Weil,A. \ma{168}{1967}{}.}
\reference{CandN}{Chandrasekharan,K. and Narasimhan,R. \aom{74}{1961}{1}.}
\reference{Rankin3}{Rankin,R.A. {} {} ().}
\reference{Berndt6}{Berndt,B.C. {\it Trans.Edin.Math.Soc}.}
\reference{Elizalde}{Elizalde,E. {\it Ten Physical Applications of Spectral
Zeta Function Theory}, \break (Springer, Berlin, 1995).}
\reference{Allen}{Allen,B., Folacci,A. and Gibbons,G.W. \pl{189}{1987}{304}.}
\reference{Krazer}{Krazer}
\reference{Elizalde3}{Elizalde,E. {\it J.Comp.and Appl. Math.} {\bf 118}
(2000) 125.}
\reference{Elizalde2}{Elizalde,E., Odintsov.S.D, Romeo, A. and Bytsenko,
A.A and
Zerbini,S.
{\it Zeta function regularisation}, (World Scientific, Singapore,
1994).}
\reference{Eisenstein}{Eisenstein}
\reference{Hecke}{Hecke,E. \ma{112}{1936}{664}.}
\reference{Hecke2}{Hecke,E. \ma{112}{1918}{398}.}
\reference{Terras}{Terras,A. {\it Harmonic analysis on Symmetric Spaces} (Springer,
New York, 1985).}
\reference{BandG}{Bateman,P.T. and Grosswald,E. {\it Acta Arith.} {\bf 9}
(1964) 365.}
\reference{Deuring}{Deuring,M. \aom{38}{1937}{585}.}
\reference{Guinand}{Guinand.}
\reference{Guinand2}{Guinand.}
\reference{Minak}{Minakshisundaram.}
\reference{Mordell}{Mordell,J. \prs{}{}{}.}
\reference{GandZ}{Glasser,M.L. and Zucker, {}.}
\reference{Landau2}{Landau,E. \jram{}{1903}{64}.}
\reference{Kirsten1}{Kirsten,K. \jmp{35}{1994}{459}.}
\reference{Sommer}{Sommer,J. {\it Vorlesungen \"uber Zahlentheorie}
(Teubner, Leipzig, 1907).
French edition 1913.}
\reference{Reid}{Reid,L.W. {\it Theory of Algebraic Numbers},
(MacMillan, New York, 1910).}
\reference{Milnor}{Milnor, J. {\it Is the Universe simply--connected?},
IAS, Princeton, 1978.}
\reference{Milnor2}{Milnor, J. \ajm{79}{1957}{623}.}
\reference{Opechowski}{Opechowski,W. {\it Physica} {\bf 7} (1940) 552.}
\reference{Bethe}{Bethe, H.A. \zfp{3}{1929}{133}.}
\reference{LandL}{Landau, L.D. and Lifshitz, E.M. {\it Quantum
Mechanics} (Pergamon Press, London, 1958).}
\reference{GPR}{Gibbons, G.W., Pope, C. and R\"omer, H., \np{157}{1979}{377}.}
\reference{Jadhav}{Jadhav,S.P. PhD Thesis, University of Manchester 1990.}
\reference{DandJ}{Dowker,J.S. and Jadhav, S. \prD{39}{1989}{1196}.}
\reference{CandM}{Coxeter, H.S.M. and Moser, W.O.J. {\it Generators and
relations of finite groups} (Springer. Berlin. 1957).}
\reference{Coxeter2}{Coxeter, H.S.M. {\it Regular Complex Polytopes},
(Cambridge University Press, \break Cambridge, 1975).}
\reference{Coxeter}{Coxeter, H.S.M. {\it Regular Polytopes}.}
\reference{Stiefel}{Stiefel, E., J.Research NBS {\bf 48} (1952) 424.}
\reference{BandS}{Brink, D.M. and Satchler, G.R. {\it Angular momentum theory}.
(Clarendon Press, Oxford. 1962.).}
\reference{Rose}{Rose}
\reference{Schwinger}{Schwinger, J. {\it On Angular Momentum}
in {\it Quantum Theory of Angular Momentum} edited by
Biedenharn,L.C. and van Dam, H. (Academic Press, N.Y. 1965).}
\reference{Bromwich}{Bromwich, T.J.I'A. {\it Infinite Series},
(Macmillan, 1947).}
\reference{Ray}{Ray,D.B. \aim{4}{1970}{109}.}
\reference{Ikeda}{Ikeda,A. {\it Kodai Math.J.} {\bf 18} (1995) 57.}
\reference{Kennedy}{Kennedy,G. \prD{23}{1981}{2884}.}
\reference{Ellis}{Ellis,G.F.R. {\it General Relativity} {\bf2} (1971) 7.}
\reference{Dow8}{Dowker,J.S. \cqg{20}{2003}{L105}.}
\reference{IandY}{Ikeda, A and Yamamoto, Y. \ojm {16}{1979}{447}.}
\reference{BandI}{Bander,M. and Itzykson,C. \rmp{18}{1966}{2}.}
\reference{Schulman}{Schulman, L.S. \pr{176}{1968}{1558}.}
\reference{Bar1}{B\"ar,C. {\it Arch.d.Math.}{\bf 59} (1992) 65.}
\reference{Bar2}{B\"ar,C. {\it Geom. and Func. Anal.} {\bf 6} (1996) 899.}
\reference{Vilenkin}{Vilenkin, N.J. {\it Special functions},
(Am.Math.Soc., Providence, 1968).}
\reference{Talman}{Talman, J.D. {\it Special functions} (Benjamin,N.Y.,1968).}
\reference{Miller}{Miller, W. {\it Symmetry groups and their applications}
(Wiley, N.Y., 1972).}
\reference{Dow3}{Dowker,J.S. \cmp{162}{1994}{633}.}
\reference{Cheeger}{Cheeger, J. \jdg {18}{1983}{575}.}
\reference{Cheeger2}{Cheeger, J. \aom {109}{1979}{259}.}
\reference{Dow6}{Dowker,J.S. \jmp{30}{1989}{770}.}
\reference{Dow20}{Dowker,J.S. \jmp{35}{1994}{6076}.}
\reference{Dowjmp}{Dowker,J.S. \jmp{35}{1994}{4989}.}
\reference{Dow21}{Dowker,J.S. {\it Heat kernels and polytopes} in {\it
Heat Kernel Techniques and Quantum Gravity}, ed. by S.A.Fulling,
Discourses in Mathematics and its Applications, No.4, Dept.
Maths., Texas A\&M University, College Station, Texas, 1995.}
\reference{Dow9}{Dowker,J.S. \jmp{42}{2001}{1501}.}
\reference{Dow7}{Dowker,J.S. \jpa{25}{1992}{2641}.}
\reference{Warner}{Warner.N.P. \prs{383}{1982}{379}.}
\reference{Wolf}{Wolf, J.A. {\it Spaces of constant curvature},
(McGraw--Hill,N.Y., 1967).}
\reference{Meyer}{Meyer,B. \cjm{6}{1954}{135}.}
\reference{BandB}{B\'erard,P. and Besson,G. {\it Ann. Inst. Four.} {\bf 30}
(1980) 237.}
\reference{PandM}{Polya,G. and Meyer,B. \cras{228}{1948}{28}.}
\reference{Springer}{Springer, T.A. Lecture Notes in Math. vol 585 (Springer,
Berlin,1977).}
\reference{SeandT}{Threlfall, W. and Seifert, H. \ma{104}{1930}{1}.}
\reference{Hopf}{Hopf,H. \ma{95}{1925}{313}. }
\reference{Dow}{Dowker,J.S. \jpa{5}{1972}{936}.}
\reference{LLL}{Lehoucq,R., Lachi\'eze-Rey,M. and Luminet, J.--P. {\it
Astron.Astrophys.} {\bf 313} (1996) 339.}
\reference{LaandL}{Lachi\'eze-Rey,M. and Luminet, J.--P.
\prp{254}{1995}{135}.}
\reference{Schwarzschild}{Schwarzschild, K., {\it Vierteljahrschrift der
Ast.Ges.} {\bf 35} (1900) 337.}
\reference{Starkman}{Starkman,G.D. \cqg{15}{1998}{2529}.}
\reference{LWUGL}{Lehoucq,R., Weeks,J.R., Uzan,J.P., Gausman, E. and
Luminet, J.--P. \cqg{19}{2002}{4683}.}
\reference{Dow10}{Dowker,J.S. \prD{28}{1983}{3013}.}
\reference{BandD}{Banach, R. and Dowker, J.S. \jpa{12}{1979}{2527}.}
\reference{Jadhav2}{Jadhav,S. \prD{43}{1991}{2656}.}
\reference{Gilkey}{Gilkey,P.B. {\it Invariance theory,the heat equation and
the Atiyah--Singer Index theorem} (CRC Press, Boca Raton, 1994).}
\reference{BandY}{Berndt,B.C. and Yeap,B.P. {\it Adv. Appl. Math.}
{\bf29} (2002) 358.}
\reference{HandR}{Hanson,A.J. and R\"omer,H. \pl{80B}{1978}{58}.}
\reference{Hill}{Hill,M.J.M. {\it Trans.Camb.Phil.Soc.} {\bf 13} (1883) 36.}
\reference{Cayley}{Cayley,A. {\it Quart.Math.J.} {\bf 7} (1866) 304.}
\reference{Seade}{Seade,J.A. {\it Anal.Inst.Mat.Univ.Nac.Aut\'on
M\'exico} {\bf 21} (1981) 129.}
\reference{CM}{Cisneros--Molina,J.L. {\it Geom.Dedicata} {\bf84} (2001)
207.}
\reference{Goette1}{Goette,S. \jram{526}{2000}{181}.}
\reference{NandO}{Nash,C. and O'Connor,D--J, \jmp {36}{1995}{1462}.}
\reference{Dows}{Dowker,J.S. \aop{71}{1972}{577}; Dowker,J.S. and Pettengill,D.F.
\jpa{7}{1974}{1527}; J.S.Dowker in {\it Quantum Gravity}, edited by
S. C. Christensen (Hilger,Bristol,1984)}
\reference{Jadhav2}{Jadhav,S.P. \prD{43}{1991}{2656}.}
\reference{Dow11}{Dowker,J.S. \cqg{21}{2004}{4247}.}
\reference{Dow12}{Dowker,J.S. \cqg{21}{2004}{4977}.}
\reference{Dow13}{Dowker,J.S. \jpa{38}{2005}{1049}.}
\reference{Zagier}{Zagier,D. \ma{202}{1973}{149}}
\reference{RandG}{Rademacher, H. and Grosswald,E. {\it Dedekind Sums},
(Carus, MAA, 1972).}
\reference{Berndt7}{Berndt,B, \aim{23}{1977}{285}.}
\reference{HKMM}{Harvey,J.A., Kutasov,D., Martinec,E.J. and Moore,G.
{\it Localised Tachyons and RG Flows}, hep-th/0111154.}
\reference{Beck}{Beck,M., {\it Dedekind Cotangent Sums}, {\it Acta Arithmetica}
{\bf 109} (2003) 109-139; math.NT/0112077.}
\reference{McInnes}{McInnes,B. {\it APS instability and the topology of the brane
world}, hep-th/0401035.}
\reference{BHS}{Brevik,I, Herikstad,R. and Skriudalen,S. {\it Entropy Bound for the
TM Electromagnetic Field in the Half Einstein Universe}; hep-th/0508123.}
\reference{BandO}{Brevik,I. and Owe,C. \prD{55}{1997}{4689}.}
\reference{Kenn}{Kennedy,G. Thesis. University of Manchester 1978.}
\reference{KandU}{Kennedy,G. and Unwin S. \jpa{12}{1980}{L253}.}
\reference{BandO1}{Bayin,S.S. and Ozcan,M.
\prD{48}{1993}{2806}; \prD{49}{1994}{5313}.}
\reference{Chang}{Chang, P., {\it Quantum Field Theory on Regular Polytopes}.
Thesis. University of Manchester, 1993.}
\reference{Barnesa}{Barnes,E.W. {\it Trans. Camb. Phil. Soc.} {\bf 19} (1903) 374.}
\reference{Barnesb}{Barnes,E.W. {\it Trans. Camb. Phil. Soc.}
{\bf 19} (1903) 426.}
\reference{Stanley1}{Stanley,R.P. \joa{49}{1977}{134}.}
\reference{Stanley}{Stanley,R.P. \bams {1}{1979}{475}.}
\reference{Hurley}{Hurley,A.C. \pcps {47}{1951}{51}.}
\reference{IandK}{Iwasaki,I. and Katase,K. {\it Proc.Japan Acad. Ser} {\bf A55}
(1979) 141.}
\reference{IandT}{Ikeda,A. and Taniguchi,Y. {\it Osaka J. Math.} {\bf 15} (1978)
515.}
\reference{GandM}{Gallot,S. and Meyer,D. \jmpa{54}{1975}{259}.}
\reference{Flatto}{Flatto,L. {\it Enseign. Math.} {\bf 24} (1978) 237.}
\reference{OandT}{Orlik,P and Terao,H. {\it Arrangements of Hyperplanes},
Grundlehren der Math. Wiss. {\bf 300}, (Springer--Verlag, 1992).}
\reference{Shepler}{Shepler,A.V. \joa{220}{1999}{314}.}
\reference{SandT}{Solomon,L. and Terao,H. \cmh {73}{1998}{237}.}
\reference{Vass}{Vassilevich, D.V. \plb{348}{1995}{39}.}
\reference{Vass2}{Vassilevich, D.V. \jmp{36}{1995}{3174}.}
\reference{CandH}{Camporesi,R. and Higuchi,A. {\it J.Geom. and Physics}
{\bf 15} (1994) 57.}
\reference{Solomon2}{Solomon,L. \tams{113}{1964}{274}.}
\reference{Solomon}{Solomon,L. {\it Nagoya Math. J.} {\bf 22} (1963) 57.}
\reference{Obukhov}{Obukhov,Yu.N. \pl{109B}{1982}{195}.}
\reference{BGH}{Bernasconi,F., Graf,G.M. and Hasler,D. {\it The heat kernel
expansion for the electromagnetic field in a cavity}; math-ph/0302035.}
\reference{Baltes}{Baltes,H.P. \prA {6}{1972}{2252}.}
\reference{BaandH}{Baltes,H.P. and Hilf,E.R. {\it Spectra of Finite Systems}
(Bibliographisches Institut, Mannheim, 1976).}
\reference{Ray}{Ray,D.B. \aim{4}{1970}{109}.}
\reference{Hirzebruch}{Hirzebruch,F. {\it Topological methods in algebraic
geometry} (Springer-- Verlag,\break Berlin, 1978). }
\reference{BBG}{Bla\v{z}i\'c,N., Bokan,N. and Gilkey, P.B. {\it Ind.J.Pure and
Appl.Math.} {\bf 23} (1992) 103.}
\reference{WandWi}{Weck,N. and Witsch,K.J. {\it Math.Meth.Appl.Sci.} {\bf 17}
(1994) 1017.}
\reference{Norlund}{N\"orlund,N.E. \am{43}{1922}{121}.}
\reference{Duff}{Duff,G.F.D. \aom{56}{1952}{115}.}
\reference{DandS}{Duff,G.F.D. and Spencer,D.C. \aom{45}{1951}{128}.}
\reference{BGM}{Berger, M., Gauduchon, P. and Mazet, E. {\it Lect.Notes.Math.}
{\bf 194} (1971) 1. }
\reference{Patodi}{Patodi,V.K. \jdg{5}{1971}{233}.}
\reference{GandS}{G\"unther,P. and Schimming,R. \jdg{12}{1977}{599}.}
\reference{MandS}{McKean,H.P. and Singer,I.M. \jdg{1}{1967}{43}.}
\reference{Conner}{Conner,P.E. {\it Mem.Am.Math.Soc.} {\bf 20} (1956).}
\reference{Gilkey2}{Gilkey,P.B. \aim {15}{1975}{334}.}
\reference{MandP}{Moss,I.G. and Poletti,S.J. \plb{333}{1994}{326}.}
\reference{BKD}{Bordag,M., Kirsten,K. and Dowker,J.S. \cmp{182}{1996}{371}.}
\reference{RandO}{Rubin,M.A. and Ordonez,C. \jmp{25}{1984}{2888}.}
\reference{BaandD}{Balian,R. and Duplantier,B. \aop {112}{1978}{165}.}
\reference{Kennedy2}{Kennedy,G. \aop{138}{1982}{353}.}
\reference{DandKi2}{Dowker,J.S. and Kirsten, K. {\it Analysis and Appl.}
{\bf 3} (2005) 45.}
\reference{Dow40}{Dowker,J.S. \cqg{23}{2006}{1}.}
\reference{BandHe}{Br\"uning,J. and Heintze,E. {\it Duke Math.J.} {\bf 51} (1984)
959.}
\reference{Dowl}{Dowker,J.S. {\it Functional determinants on M\"obius corners};
Proceedings, `Quantum field theory under
the influence of external conditions', 111-121,Leipzig 1995.}
\reference{Dowqg}{Dowker,J.S. in {\it Quantum Gravity}, edited by
S. C. Christensen (Hilger, Bristol, 1984).}
\reference{Dowit}{Dowker,J.S. \jpa{11}{1978}{347}.}
\reference{Kane}{Kane,R. {\it Reflection Groups and Invariant Theory} (Springer,
New York, 2001).}
\reference{Sturmfels}{Sturmfels,B. {\it Algorithms in Invariant Theory}
(Springer, Vienna, 1993).}
\reference{Bourbaki}{Bourbaki,N. {\it Groupes et Alg\`ebres de Lie} Chap.III, IV
(Hermann, Paris, 1968).}
\reference{SandTy}{Schwarz,A.S. and Tyupkin, Yu.S. \np{242}{1984}{436}.}
\reference{Reuter}{Reuter,M. \prD{37}{1988}{1456}.}
\reference{EGH}{Eguchi,T. Gilkey,P.B. and Hanson,A.J. \prp{66}{1980}{213}.}
\reference{DandCh}{Dowker,J.S. and Chang,Peter, \prD{46}{1992}{3458}.}
\reference{APS}{Atiyah M., Patodi and Singer,I.\mpcps{77}{1975}{43}.}
\reference{Donnelly}{Donnelly.H. {\it Indiana U. Math.J.} {\bf 27} (1978) 889.}
\reference{Katase}{Katase,K. {\it Proc.Jap.Acad.} {\bf 57} (1981) 233.}
\reference{Gilkey3}{Gilkey,P.B.\invm{76}{1984}{309}.}
\reference{Degeratu}{Degeratu.A. {\it Eta--Invariants and Molien Series for
Unimodular Groups}, Thesis MIT, 2001.}
\reference{Seeley}{Seeley,R. \ijmp {A\bf18}{2003}{2197}.}
\reference{Seeley2}{Seeley,R. .}
\reference{melrose}{Melrose}
\reference{berard}{B\'erard,P.}
\reference{gromes}{Gromes,D.}
\reference{Ivrii}{Ivrii}
\reference{DandW}{Douglas,R.G. and Wojciekowski,K.P. \cmp{142}{1991}{139}.}
\reference{Dai}{Dai,X. \tams{354}{2001}{107}.}
\reference{Kuznecov}{Kuznecov}
\reference{DandG}{Duistermaat and Guillemin.}
\reference{PTL}{Pham The Lai}
\end{putreferences}
\bye
\section{Introduction}
Supersymmetry continues to be the most compelling candidate for the theoretical framework
describing particle physics beyond the Standard Model. In the current paradigm, SUSY is dynamically broken in the
Hidden Sector of the full theory and the effects of this SUSY-breaking are mediated to the Visible Sector (MSSM)
by so-called messenger fields. In the usual formulation, one
essentially ignores the Hidden Sector theory and subsumes its details into
a few parameters: the scale $M_{\scriptscriptstyle SUSY}$ at which SUSY is broken
in the Hidden Sector, the nature of the mediation (gravity, gauge, extra dimensions, etc.) and the types of messenger fields.
Thus it is tempting to assume that the details of the Hidden Sector are
largely irrelevant to Visible Sector phenomenology,
and that the entire pattern of the SUSY-breaking soft terms in the MSSM is generated and determined by the messengers.
The recent breakthrough made by
Intriligator, Seiberg and Shih (ISS) \cite{ISS} in realising the dynamical SUSY breaking (DSB)
via metastable vacua, provides a very minimal and simple class of candidates for the Hidden sector,
and makes it natural to reexamine this assumption. In particular, is it possible to distinguish different
types of Hidden Sector physics for a given type of mediation and messenger?
We shall address this question in the context of models with low-scale mediation, i.e. gauge mediation (GMSB).
These were introduced in the early days of SUSY model building in
Refs.~\cite{GM1,GM2,GM3,GM4,GM5,GM6} and were subsequently revived in \cite{GM7,GM8,GM9} (see \cite{GR}
for a comprehensive review of GMSB patterns and phenomenology).
The main advantage of gauge mediation from a phenomenological point of view
is the automatic disposal of the flavour problem which plagues
gravity mediation. In GMSB the messenger fields interact only with the gauge field supermultiplets in the MSSM
and the gauge interactions do not generate unwanted flavour changing soft terms in the MSSM. The sfermion
soft masses are universal
in flavour space
and the only source of flavour violation is through the
Yukawa matrices, which is already incorporated correctly into the Standard Model.
Furthermore, the SUSY scale
in GMSB is relatively low, $M_{\scriptscriptstyle SUSY} \ll \sqrt{m_W M_{\rm Pl}}$, and one can determine the
full field theory in its entirety without appealing to the uncalculable
details of an underlying supergravity theory, as one has to in gravity mediation.
Indeed, the recent realisation \cite{ISS} that the dynamical breaking of supersymmetry can be achieved easily in
ordinary SQCD-like gauge theories implies that now one can formulate complete
and calculable models of gauge mediated SUSY-breaking including the Hidden (and Visible) sectors.
The goal of this paper is to study and classify these models, and to show how the generic patterns of
SUSY breaking generated in the MSSM depend on the details of the Hidden Sector.
To anticipate our findings, Visible Sector phenomenology depends
essentially on how $R$-symmetry is broken in the Hidden Sector.
Explicit $R$-symmetry breaking models of Refs.~\cite{MN,AS} can
lead to fairly standard gauge mediation, but we argue that in the
context of ISS-type models this only makes sense if $B=0$ at the
mediation scale, which leads to high $\tan\beta$. If, on the other
hand, $R$-symmetry is broken spontaneously, as in the model of
Ref.~\cite{ADJK}, then $R$-symmetry violating operators in the
MSSM sector (e.g. gaugino masses) tend to be suppressed with
respect to $R$-symmetry preserving ones (e.g. scalar masses), and
one is led to a scenario with large scalar masses (and of course
more fine-tuning).\footnote{This kind of spectrum
indicates a residual approximate $R$-symmetry in the model as the main cause for the suppression of gaugino masses relative
to scalars. The precise reason for this suppression (investigated in a subsequent paper \cite{AJKM}) is a little more complicated
and is linked to the \emph{direct} nature of gauge mediation.}
In the limit of small $R$-symmetry breaking
we recover so-called split SUSY models \cite{split1,split2}. We provide benchmark points for the two
scenarios as an illustration.
\subsection{On the unavoidability of metastability and on $R$-symmetry breaking}
If SUSY is discovered at the LHC and is of the gauge mediation type,
then metastability of the vacuum is likely to be unavoidable \cite{ISS2}
because of two pieces of evidence: gauginos are massive, and so too are
$R$-axions. We will briefly discuss why this is so, in full generality and independently of the models of ISS. To see that
metastability follows from these two pieces of evidence,
the first important observation is the theorem
by Nelson and Seiberg \cite{NS}, that an exact $R$-symmetry is \emph{necessary
and sufficient} to break SUSY in a generic calculable theory (of the Hidden sector).
At the same time, Majorana masses of gauginos
have non-vanishing $R$-charges.
Thus we have a phenomenological problem which
could be called
the \emph{gaugino mass problem}: gaugino masses require both supersymmetry breaking and
$R$-symmetry breaking, but Ref.~\cite{NS} tells us that these two requirements are
mutually exclusive. How can one get around this?
One approach \cite{ISS2} is to assume that the Lagrangian is of the form
\begin{equation}
\mathcal{L}=\mathcal{L}_{R}+\varepsilon\mathcal{L}_{R-breaking},
\label{LRexpl}
\end{equation}
where $\mathcal{L}_{R}$ preserves $R$-symmetry, the second term, $\mathcal{L}_{R-breaking}$,
is higher order in fields and breaks $R$-symmetry, and
$\varepsilon$ is parametrically small (we discuss shortly why this should be so). Because $R$-symmetry
is broken explicitly by the second term, the Nelson-Seiberg theorem requires that
a global supersymmetry-preserving minimum must appear at order $1/\varepsilon$
away from the SUSY breaking one which now becomes metastable.
Note that this statement is completely general. Any attempt to mediate SUSY breaking
to gauginos even from models that initially
have no SUSY-preserving vacuum
results in the appearance of a global SUSY minimum.
Also the gaugino masses depend, as one would expect, on both the scale
of SUSY breaking \emph{and} the scale of $R$-symmetry breaking, whereas
the scalar masses depend only on the former.
(This point was used previously in \cite{split2}
in support of split SUSY \cite{split1}).
The gaugino masses are directly related
to $\varepsilon$ and hence to the stability of the metastable vacuum.
The second possibility is to break the tree-level $R$-symmetry spontaneously.
Spontaneous (rather than explicit) breaking of $R$-symmetry does not introduce
new global SUSY preserving minima. As such it does not destabilize the SUSY breaking
vacuum and does not require any fine-tuning of coefficients in the Lagrangian. At the same
time, gauginos do acquire masses.
This scenario, however, leads to a massless Goldstone mode of the spontaneously broken
$U(1)_{R}$ symmetry -- an \emph{$R$-axion problem}. In order to
avoid astrophysical (and experimental) bounds, the $R$-axion
should also acquire a mass. This means that
$R$-symmetry must also be explicitly broken and by the earlier arguments
this again means that the vacuum is metastable.
However in this case \cite{ADJK}
the gaugino mass is divorced from the size of \emph{explicit} $R$-breaking
$\varepsilon$ which now determines the $R$-axion mass instead. This
exhausts the logical possibilities and shows that, for a theory with a generic superpotential
where the Nelson-Seiberg theorem applies, massive gauginos
and massive $R$-axions imply metastability.
At this point the question arises as to how to generate a Lagrangian
of the form \eqref{LRexpl}. Unless there is a compelling reason for the
smallness of $\varepsilon$, the Lagrangian
$\mathcal{L}_{R}$ is by definition non-generic,
and $\mathcal{L}_{R-breaking}$ may contain many couplings which are compatible
with the symmetries but which must be set small by hand in order to avoid
too rapid a decay of the metastable vacuum. One requires an \emph{almost} non-generic model,
broken by small operators, which in general seems unlikely. However, realistic and natural
gauge mediation models of this type were constructed in \cite{MN,AS}. The main idea of these models
is to break $R$-symmetry by operators which are suppressed by powers of $M_{Pl}.$
We will consider these models in Section {\bf 3}.
In \cite{ADJK} we suggested an alternative approach
where $\varepsilon$ is not induced by external $1/M_{Pl}$ corrections and where $R$-symmetry is broken spontaneously.
In the original ISS model \cite{ISS}, the Nelson-Seiberg theorem manifests itself in
a simple way: the theory has an exact $R$-symmetry at tree-level.
However the $R$-symmetry is anomalous and terms of the type $\varepsilon\mathcal{L}_{R-breaking}$
are generated dynamically \cite{ISS}
without having to appeal to Planck suppressed
operators. Here $\varepsilon$ is a naturally small
parameter since it is generated non-perturbatively via instanton-like
configurations, which are naturally suppressed by the usual instanton factor
$e^{-8\pi^2/g^2} \ll 1.$ Hence, the non-genericity in these models is fully calculable and under control.
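As a rough numerical orientation (our illustration, not part of the original analysis), the instanton factor $e^{-8\pi^2/g^2}$ is minute even for order-one coupling; the sample couplings below are arbitrary:

```python
import math

# Illustrative estimate (ours) of the instanton suppression factor
# e^{-8 pi^2 / g^2} that controls the dynamically generated R-breaking
# terms; the sample couplings g are arbitrary choices.
def instanton_factor(g):
    return math.exp(-8 * math.pi**2 / g**2)

for g in (0.5, 1.0, 2.0):
    print(f"g = {g}:  e^(-8 pi^2/g^2) = {instanton_factor(g):.3e}")
```

Even at $g=1$ the factor is of order $10^{-35}$, which is why the explicit $R$-breaking is naturally small.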
When, in addition to these non-perturbative effects, the
$R$-symmetry is also broken spontaneously by perturbative contributions, gauginos
receive sufficiently large masses
$m_{\rm gaugino} > 100$ GeV as required by their non-observation by current experiments.
At the same time the $R$-axion receives a mass from the anomalously induced
$R$-breaking terms. (Note that a possible additional contribution to the
$R$-axion mass may arise when the theory is embedded in supergravity \cite{Bagger:1994hh}. However such noncalculable effects are suppressed.)
The spontaneous breaking of $R$-symmetry by radiative perturbative
corrections is easy to achieve \cite{DM,Shih:2007av}. For example, this happens \cite{ADJK}
when the basic ISS model is deformed by adding a baryon-like term to the superpotential.
This is the simplest deformation of the ISS model which preserves $R$-symmetry
at tree-level. At one-loop level this deformation causes the $R$-symmetry to break spontaneously,
while the $R$-axion gets a sufficiently large mass
$m_{\rm axion} > 100$ MeV to avoid astrophysical
constraints from the non-perturbative anomalous $R$-symmetry breaking \cite{ADJK}. No new global minima
appear other than those of the original ISS model, so the SUSY breaking
scale can be sufficiently low to be addressed at the LHC. These models
will be discussed in Section {\bf 2}.
The paper is organised as follows. For convenience, in the following subsection we recall the original ISS model \cite{ISS}.
In Section {\bf 2} we study gauge mediation with spontaneous $R$-symmetry breaking.
Specifically, we concentrate on the \emph{direct} gauge mediation model \cite{ADJK} where the
Hidden plus Messenger sectors
consist of only the baryon-deformed ISS theory with $N_f=7$ flavours and
$N_c=5$ colours.
The resulting gaugino and sfermion soft masses are discussed in
Section {\bf 2.2}.
The mass term for the $R$-axion is generated by the nonperturbative ISS superpotential
\eqref{Wdyn}, as explained in \cite{ADJK}. In Section {\bf 2.3} we analyse the phenomenology of this class of models, which
turns out to be quite different from the usual gauge mediation scenarios \cite{GR}. The main reason for this difference
is the fact that $R$-symmetry is broken spontaneously by one-loop corrections, and as such the scale of $R$-breaking is naturally
smaller than the scale of SUSY-breaking, leading to the gaugino masses being
smaller than the scalar masses.
This is different from the usual gauge-mediation assumption that the
$R$-symmetry breaking is larger than the SUSY breaking. Thus generally, one expects a Hidden sector with spontaneous
$R$-symmetry violation to interpolate between standard gauge mediation and split SUSY models \cite{split1,split2}.
In Section {\bf 3} we study the alternative scenario for metastable gauge mediation, formulated earlier
in Refs.~\cite{MN,AS}. These are models with an explicit messenger sector where the $R$-symmetry of the ISS sector here
is broken explicitly. As already mentioned, the reason why the effective $R$-symmetry breaking is weak in this case is that
the messengers are coupled to the Hidden Sector fields only via $1/M_{Pl}$-suppressed
operators, cf. \eqref{WRff}.
In the limit where $M_{Pl} \to \infty$, both the $R$-symmetry and supersymmetry of the MSSM are exact,
since the ISS Hidden Sector decouples from the messengers.
As a result, in these models the effective $R$-symmetry breaking and the effective SUSY-breaking scales
in the Visible Sector are essentially the same. The generated gaugino and scalar soft mass terms are of the same order,
and the resulting phenomenology of models \cite{MN,AS} is largely of the usual form.
In both cases we will be treating the $\mu_\mathrm{\sst MSSM}$ parameter of the MSSM as a free parameter.
As it is SUSY preserving it does not have to be determined by the ISS Hidden Sector, and we will
for this discussion have little to say about it: we will not address the question of why it
should be of order the weak scale, the so-called $\mu$ problem. However the corresponding
SUSY breaking parameter, $B$ or more precisely $B\mu_\mathrm{\sst MSSM}$,
cannot consistently be taken to be a free parameter. It is determined by the
models at the messenger scale, and in both cases it is approximately zero, as will be explained
in detail.
\subsection{The ISS model -- summary}
In Ref.~\cite{ISS} Intriligator, Seiberg and Shih pointed out that metastable SUSY-breaking
vacua can arise naturally and dynamically in low-energy limits of supersymmetric gauge theories.
The simplest prototype model is SQCD with the gauge group $SU(N_c)$
and $N_f$ pairs of (anti)-fundamental quark supermultiplets $Q$, $\tilde{Q}$.
Metastable vacua $|{\rm vac}\rangle_+$ occur in this model when $N_f$ is in the `free magnetic range',
$N_c+1\le N_f \le \frac{3}{2} N_c.$ These vacua are apparent in the Seiberg dual formulation
of the theory, which has the advantage of being weakly coupled in the vicinity of $|{\rm vac}\rangle_+$.
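For orientation, the free magnetic window can be tabulated; the helper below is our own sketch, assuming integer $N_f$ and using $N=N_f-N_c$ for the rank of the magnetic gauge group:

```python
# Sketch (ours): enumerate N_f in the free magnetic range
# N_c + 1 <= N_f <= (3/2) N_c, where the IR-free magnetic dual is
# an SU(N) gauge theory with N = N_f - N_c.
def free_magnetic_window(n_c):
    return [(n_f, n_f - n_c)                    # (N_f, magnetic rank N)
            for n_f in range(n_c + 1, 3 * n_c // 2 + 1)]

# The prototype used later in the paper: N_c = 5 admits N_f = 6, 7,
# and N_f = 7 gives the magnetic SU(2) theory.
print(free_magnetic_window(5))
```

For $N_c=5$ this returns $N_f=6,7$, and the choice $N_f=7$ yields the magnetic $SU(2)$ theory used in Section {\bf 2}.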
The magnetic Seiberg dual of the ISS theory is given \cite{Seiberg1,Seiberg2} by the
$SU(N)_{mg}$ gauge theory, where $N=N_f-N_c,$ coupled to $N_f$ magnetic quark/anti-quark pairs
$\varphi$, $\tilde{\varphi}$. The tree-level superpotential of the magnetic theory is of the
form,
\begin{equation}
W_{\rm cl} =\Phi_{ij}\,\varphi_{i}\cdot\tilde{\varphi}_{j}-\mu_{ij}^{2}\Phi_{ji}
\label{Wcl}
\end{equation}
where $i,j=1...N_f$ are flavour indices and $\Phi_{ij}$ is the gauge-singlet meson superfield,
which is related to the original electric quarks via $\Phi_{ij} \propto \Lambda^{-1} Q_i \cdot\tilde{Q}_j$
and $\Lambda$ is the dynamical scale of the ISS theory \cite{ISS}.
The matrix $\mu_{ij}^2$ (which can be diagonalised without loss of generality) arises from
the masses of electric quarks, $\mu^2_{ii}= \Lambda m_{Q_i},$ and is taken to be much smaller than
the UV cutoff of the magnetic theory, $\mu \ll \Lambda.$ This magnetic theory is free and calculable in the
IR and becomes strongly coupled in the UV where one should use instead the electric Seiberg dual, i.e.
the original $SU(N_c)$ SQCD which is asymptotically free.
The usual holomorphicity arguments imply that the superpotential
\eqref{Wcl} receives no corrections in perturbation theory. However, there is a non-perturbative contribution
to the full superpotential of the theory, $W=W_{\rm cl} + W_{\rm dyn},$ which
is generated dynamically \cite{ISS} and is given by
\begin{equation}
W_{\rm dyn}\, =\, N\left( \frac{{\rm det}_{\scriptscriptstyle {N_{f}}} \Phi}{\Lambda^{{N_{f}}-3N}}\right)^\frac{1}{N}
\label{Wdyn}
\end{equation}
The authors of \cite{ISS} have studied the vacuum structure of the theory and established the
existence of the metastable vacuum $|{\rm vac}\rangle_+$ with non-vanishing vacuum energy $V_{+}$
as well as the (set of $N_c$) SUSY preserving stable vacua $|{\rm vac}\rangle_0$.
This supersymmetry breaking vacuum $|{\rm vac}\rangle_+$ originates from the so-called
rank condition, which implies that there are no solutions to the F-flatness
equation\footnote{Equation \eqref{rank-cond}
can only be satisfied for a rank-$N$ submatrix of the $N_f \times N_f$ matrix
$F_{\Phi}$.}
\begin{equation}
F_{\Phi_{ij}}\, =\, (\varphi_{i}\cdot\tilde{\varphi}_{j}-\mu_{ij}^2)\, =\, 0
\label{rank-cond}
\end{equation}
for the classical superpotential $W_{\rm cl}.$
The SUSY preserving vacua
\eqref{vac0} appear by allowing the meson $\Phi$ to develop a VEV
which is stabilised by the non-perturbative superpotential \eqref{Wdyn} and
can be interpreted in the ISS model as a non-perturbative
or dynamical restoration of supersymmetry \cite{ISS}.
The lowest lying SUSY-breaking
vacuum $|{\rm vac}\rangle_+$ is characterised by
\begin{equation}
\langle \varphi \rangle =\, \langle \tilde\varphi^T \rangle = \, \left(\begin{array}{c}
{\rm diag}(\mu_1,\ldots,\mu_N) \\ 0_{{N_{f}}-N}\end{array}\right) \ , \quad
\langle \Phi \rangle = \, 0 \ ,
\qquad V_+ = \sum_{i=N+1}^{N_f}|\mu_i^4|.
\label{vac+}
\end{equation}
Here $\mu_i$ are the ordered eigenvalues of the $\mu$ matrix, such that
$|\mu_1| \ge |\mu_2| \ge \ldots \ge |\mu_{N_f}|.$ In this way, the vacuum energy $V_+$
above receives contributions from $({N_{f}}-N)$ of the smallest $\mu$'s while the VEV $\langle \varphi \rangle$
is determined by the largest $\mu$'s.
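The rank condition and the vacuum energy in \eqref{vac+} can be cross-checked numerically. The following sketch (ours, with arbitrary sample values $\mu_i$) builds the VEVs of \eqref{vac+} and verifies that $F_\Phi$ cannot vanish entirely and that $V_+=\sum_{i=N+1}^{N_f}|\mu_i^4|$:

```python
import numpy as np

# Sketch (ours): check the rank condition and the vacuum energy V_+ of
# eq. (vac+) for N_f = 7, N = 2 and arbitrary ordered sample values
# |mu_1| >= ... >= |mu_7|.
N_f, N = 7, 2
mu = np.array([1.4, 1.3, 1.0, 0.9, 0.8, 0.7, 0.6])

# VEVs of eq. (vac+): phi = phitld^T = (diag(mu_1..mu_N); 0)
phi = np.zeros((N_f, N))
phi[:N, :N] = np.diag(mu[:N])
phitld = phi.copy()

# F_{Phi,ij} = phi_i . phitld_j - mu_ij^2
F = phi @ phitld.T - np.diag(mu**2)

# phi.phitld has rank <= N < N_f, so F can never vanish (rank condition)
assert np.linalg.matrix_rank(phi @ phitld.T) <= N

# V_+ = |F|^2 = sum of the N_f - N smallest mu_i^4
V_plus = np.sum(np.abs(F)**2)
print(V_plus, np.sum(mu[N:]**4))
```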
The SUSY-preserving vacuum
$|{\rm vac}\rangle_0$ is described by\footnote{In fact there are precisely ${N_{f}}-N={N_{c}}$ such vacua,
differing by the phase $e^{2\pi i/({N_{f}}-N)}$ as required by the Witten index of the electric ISS theory.}
\begin{equation}
\langle \varphi \rangle =\, \langle \tilde\varphi^T \rangle = \, 0 \ , \quad
\langle \Phi \rangle = \, \left(\frac{\Lambda}{\mu}\right)^{\frac{N_f-N}{N_f-3N}} \mu \, \mbox{1 \kern-.59em {\rm l}}_{{N_{f}}} \ ,
\qquad \qquad V_0 = \, 0,
\label{vac0}
\end{equation}
where for simplicity we have specialised to the degenerate case, $\mu_{ij}=\mu \delta_{ij}.$
For $\mu/\Lambda \ll 1$ the metastable vacuum is exponentially long-lived and the lifetime of
$|{\rm vac}\rangle_+$ can easily be made much longer than the age of the Universe.
One very attractive feature of these models is
that at high temperatures the SUSY breaking
vacua are dynamically favoured over the SUSY preserving ones \cite{ACJK}.
This is
because the metastable ISS-type vacua have more light degrees of freedom, so the early Universe would
naturally have been driven into them \cite{ACJK,heat2,heat3,heat4}.
Other recent investigations of metastable SUSY-breaking applied to model building include
Refs.~\cite{Argurio:2006ny,Kitano:2006xg,Csaki:2006wi,Abel:2007uq,Abel:2007zm,Ferretti:2007ec,Brummer:2007ns,Essig:2007xk,Haba:2007rj,Cheung:2007es,Heckman:2007zp}.
\section{Gauge mediation with Spontaneous $R$-symmetry breaking}
The metastable model building paradigm makes it relatively
easy to construct models with dynamically broken supersymmetry. The simplicity
of the resulting models now compels us to consider the attractive possibility
of \emph{direct} gauge mediation, whereby matter fields of the SUSY-breaking sector carry charges under
the gauge groups of the Standard Model and there is no need for a separate messenger sector.
In ordinary gauge mediation, the details
of SUSY breaking are generally `hidden' from the matter sector, with the most
important phenomenological features arising from the messenger particle content.
The elegance of direct gauge mediation models lies in their compactness and
predictivity. Previously direct mediation of metastable SUSY breaking was considered
in this context
in Refs.~\cite{Kitano:2006xg,Csaki:2006wi} and \cite{ADJK}.
The essential difference between the direct gauge mediation of SUSY-breaking and the models with explicit
messengers \cite{MN,AS} is that the `direct messengers' form an integral part of the Hidden
ISS sector, and as such, their interactions with the SUSY-breaking VEVs are not suppressed by inverse powers
of $M_{Pl}$. This means that the $R$-symmetry of the SUSY-breaking sector (required by the existence of the
SUSY-breaking vacuum) cannot be an accidental symmetry which is violated in the full theory only by
$1/M_{Pl}$
corrections, as in \cite{MN,AS}.
On the other hand, any large explicit violations of $R$-symmetry in
the
full theory will necessarily destabilise
the SUSY-breaking metastable vacuum. Thus, it was proposed in our earlier paper \cite{ADJK}
that the $R$-symmetry must be \emph{spontaneously} broken by radiative corrections arising from the
Coleman-Weinberg potential. In this case the Nelson-Seiberg theorem does not force upon us
a nearby supersymmetric vacuum and at the same time non-zero gaugino masses can be generated since the
$R$-symmetry is broken.
We will show below that in this approach the direct gauge mediation scenarios give
phenomenology quite distinct from the usual gauge mediation scenarios \cite{GR}.
\subsection{The baryon-deformed ISS model \cite{ADJK}}
To break supersymmetry we take an ISS model with $N_{c}=5$ colours and
$N_{f}=7$ flavours, which has a magnetic dual description as an $SU(2)$ theory,
also with $N_{f}=7$ flavours.\footnote{These are the minimal allowed values of $N_c$ and
$N_f$ which lead to a non-trivial -- in this case $SU(2)$ -- magnetic gauge group.}
Following \cite{ADJK} we now deform this theory by the
addition of a baryonic operator.
The superpotential of the theory is given by
\begin{equation}
W=\Phi_{ij}\varphi_{i}\cdot\tilde{\varphi}_{j}-\mu_{ij}^{2}\Phi_{ji}+m\varepsilon_{ab}\varepsilon_{rs}\varphi_{r}^{a}\varphi_{s}^{b}\,\,
\label{Wbardef}
\end{equation}
where $i,j=1...7$ are flavour indices, $r,s=1,2$ run over the first
two flavours only, and $a,b$ are $SU(2)$ indices.
This is the superpotential of ISS with the
exception of the last term which is a baryon of the magnetic $SU(2)$ gauge group. Note that
the 1,2 flavour indices and the 3...7 indices have a different status
and the full flavour symmetry $SU(7)_f$ is broken explicitly to $SU(2)_{f}\times SU(5)_{f}$.
The $SU(5)_{f}$ factor is gauged separately and identified
with the parent $SU(5)$ gauge group of the standard model.
The decomposition of the matter fields under the magnetic $SU(2)_{gauge} \times SU(5)_{f}\times SU(2)_{f}$ symmetry, together with their $U(1)_R$ charges, is given
in Table~\ref{fieldstable}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
&{\small $SU(2)$} &
{\small $SU(2)_f$}&
$SU(5)_f$&
{\small $U(1)_{R}$}\tabularnewline
\hline
\hline
$\Phi_{ij}\equiv\left(\begin{array}{cc}
Y & Z\\
\tilde{Z} & X\end{array}\right)$&
{\bf 1}&
$\left(\begin{array}{cc}
Adj +{\bf 1} & \bar\square\\
\square & {\bf 1}\end{array}\right)$&
$\left(\begin{array}{cc}
{\bf 1} & \square\\
\bar{\square} & Adj+{\bf 1}\end{array}\right)$&
2
\tabularnewline
\hline
{\small $\varphi\equiv\left(\begin{array}{c}
\phi\\
\rho\end{array}\right)$}&
$\square$&
$\left(\begin{array}{c}
\bar{\square}\\ {\bf 1} \end{array}\right)$&
$\left(\begin{array}{c}
{\bf 1}\\
\bar{\square}\end{array}\right)$&
$1$
\tabularnewline
\hline
{\small $\tilde{\varphi}\equiv\left(\begin{array}{c}
\tilde{\phi}\\
\tilde{\rho}\end{array}\right)$}&
$\bar\square$&
$\left(\begin{array}{c}
\square\\ {\bf 1} \end{array}\right)$&
$\left(\begin{array}{c}
{\bf 1}\\
\square\end{array}\right)$&
$-1$\tabularnewline
\hline
\end{tabular}
\end{center}
\caption{We list matter fields and their decomposition under the gauge $SU(2)$, the flavour $SU(2)_f \times SU(5)_f$ symmetry,
and their charges under the $R$-symmetry
of the model in \eqref{Wbardef}.
\label{fieldstable}}
\end{table}
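As a quick consistency check (ours), every term of the superpotential \eqref{Wbardef} carries total $R$-charge 2 under the assignments of Table~\ref{fieldstable}, treating the couplings $\mu^2$ and $m$ as $R$-neutral:

```python
# Sketch (ours): every term of the superpotential (Wbardef) must have
# R-charge 2.  R-charges as in Table 1: R[Phi] = 2, R[varphi] = 1,
# R[varphitld] = -1; the couplings mu^2 and m are taken R-neutral.
R = {"Phi": 2, "varphi": 1, "varphitld": -1}

terms = {
    "Phi varphi varphitld": R["Phi"] + R["varphi"] + R["varphitld"],
    "mu^2 Phi":             R["Phi"],
    "m varphi varphi":      2 * R["varphi"],       # baryon deformation
}
assert all(r == 2 for r in terms.values())
print(terms)
```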
It is known that the $R$-symmetry of the ISS SQCD manifests itself only as an approximate symmetry
of the magnetic formulation which is broken explicitly in the electric theory by the mass terms of electric quarks $m_Q$.
(It is also broken anomalously, but this is already accounted for by the dynamical superpotential \eqref{Wdyn}.)
In the Appendix we point out that the $R$-symmetry is broken in the electric theory in a controlled way
by a small parameter, $m_Q /\Lambda = \mu^2 /\Lambda^2 \ll 1$. As such the $R$-symmetry is preserved to that order in the
superpotential.
Thanks to the baryon deformation, the model
has $R$-charges that are not 0 or 2. As discussed in Ref.~\cite{Shih:2007av}
this condition is necessary for Wess-Zumino models
to spontaneously break $R$-symmetry. Therefore, our model allows for spontaneous $R$ symmetry breaking
and we have shown in \cite{ADJK} that this does indeed happen.
We also stress that our baryon deformation is the leading order deformation of
the ISS model that is allowed by $R$-symmetry of the full theory imposed at the Lagrangian level.
As explained in the Appendix, this is a self-consistent approach since $R$-symmetry breaking
in the electric theory is controlled by a small parameter.
Terms quadratic in the
meson $\Phi$ that could arise from lower dimensional irrelevant operators
in the electric theory are forbidden by $R$-symmetry.
Thus, our deformation is described by a \emph{generic}
superpotential and \eqref{Wbardef} gives its leading-order terms.
Using the $SU(2)_{f}\times SU(5)_{f}$ symmetry, the matrix
$\mu_{ij}^{2}$ can be brought to a diagonal form
\begin{equation}
\mu_{ij}^{2}=\left(\begin{array}{cc}
\mu^{2}\mathbf{I}_{2} & 0\\
0 & \hat{\mu}^{2}\mathbf{I}_{5}\end{array}\right).
\end{equation}
We will assume that $\mu^{2}> \hat{\mu}^{2}$. The parameters $\mu^{2}$,
$\hat{\mu}^{2}$ and $m$ have an interpretation in terms of the electric
theory: $\mu^{2}\sim\Lambda m_{Q}$ and $\hat{\mu}^{2}\sim\Lambda\hat{m}_{Q}$
come from the electric quark masses $m_{Q}$, $\hat{m}_{Q}$, where
$\Lambda$ is the ISS scale. The baryon operator can
be identified with a corresponding operator in the electric theory.
Indeed the mapping from baryons $B_{E}$ in the electric theory to
baryons $B_{M}$ of the magnetic theory, is $B_{M}\Lambda^{-N}\leftrightarrow
B_{E}\Lambda^{-N_{c}}$
(we neglect factors of order one). Thus one expects
\begin{equation}
m\sim
M_{Pl}\left(\frac{\Lambda}{M_{Pl}}\right)^{2N_{c}-N_{f}}=\frac{\Lambda^{3}}{M_{Pl}^{2}},
\label{mbardef}
\end{equation}
where $M_{Pl}$ represents the scale of new physics in the electric
theory at which the irrelevant operator $B_{M}$ is generated.
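To get a feel for the size of the deformation, one can evaluate \eqref{mbardef} for sample values of $\Lambda$ (our choices, purely illustrative), with $M_{Pl}$ the reduced Planck mass:

```python
# Sketch (ours): size of the baryon coupling m ~ Lambda^3 / M_Pl^2 from
# eq. (mbardef), with 2 N_c - N_f = 3 for N_c = 5, N_f = 7.  The sample
# values of Lambda are our own, purely illustrative; masses in GeV.
M_PL = 2.4e18   # reduced Planck mass

def baryon_coupling(Lambda, n_c=5, n_f=7):
    return M_PL * (Lambda / M_PL) ** (2 * n_c - n_f)

for Lam in (1e11, 1e13, 1e15):
    print(f"Lambda = {Lam:.0e} GeV  ->  m ~ {baryon_coupling(Lam):.2e} GeV")
```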
As explained in \cite{ADJK},
this theory has a classical runaway direction $\langle \tilde\varphi \rangle \to \infty$
(with $\langle \tilde\varphi \rangle \langle \varphi \rangle$ fixed)
to a non-supersymmetric vacuum.
The quantum dynamics, namely the one-loop Coleman-Weinberg potential \cite{Coleman:1973jx},
\begin{equation}
V_{\mathrm{eff}}^{(1)}=\frac{1}{64\pi^2}\,\mathrm{STr}\,{{\cal M}}^4\log\frac{{{\cal M}}^2}{\Lambda^2_{UV}}
\equiv\frac{1}{64\pi^2}\left( {\rm Tr}\, m_{sc}^4\log\frac{m_{sc}^2}{\Lambda^2_{UV}}-2\,{\rm Tr}\,
m_{f}^4\log\frac{m_{f}^2}{\Lambda^2_{UV}} +3\, {\rm Tr}\,
m_v^4\log\frac{m_v^2}{\Lambda^2_{UV}}\right)\label{CW}
\end{equation}
stabilises the runaway
at a point which breaks both supersymmetry and
$R$-symmetry, thus creating a meta-stable vacuum state. ($m_{sc}$, $m_f$ and $m_v$ on the RHS denote mass matrices of all
relevant scalar, fermion and vector fields in the model and $\Lambda_{UV}$ is traded for a renormalisation scale
at which the couplings are defined.)
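A minimal sketch (ours) of the supertrace in \eqref{CW}, for given scalar, fermion and vector mass eigenvalues and renormalisation scale $Q$; as a sanity check, an exactly supersymmetric spectrum gives a vanishing result:

```python
import numpy as np

# Sketch (ours): the one-loop Coleman-Weinberg supertrace of eq. (CW),
# V = 1/(64 pi^2) [ Tr m_sc^4 log(m_sc^2/Q^2) - 2 Tr m_f^4 log(m_f^2/Q^2)
#                   + 3 Tr m_v^4 log(m_v^2/Q^2) ],
# with Lambda_UV traded for the renormalisation scale Q.
def cw_potential(m_sc, m_f, m_v, Q):
    def tr(masses):
        m = np.asarray(masses, dtype=float)
        m = m[m > 0]                      # massless modes do not contribute
        return np.sum(m**4 * np.log(m**2 / Q**2))
    return (tr(m_sc) - 2 * tr(m_f) + 3 * tr(m_v)) / (64 * np.pi**2)

# In an exactly supersymmetric spectrum (two real scalars per Weyl
# fermion, degenerate masses) the supertrace cancels:
print(cw_potential([1.0, 1.0, 2.0, 2.0], [1.0, 2.0], [], 1.7))
```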
We parameterise the classically pseudo-Goldstone and runaway VEVs by
\begin{eqnarray}
\label{phivevs}
\vev{\tilde\phi} &=& \xi\,\mathbf{I}_{2}\quad\quad\quad\quad\,\,\vev{\phi}=\kappa\,\mathbf{I}_{2}\\
\label{yvevs}
\vev{Y} &=& \eta\,\mathbf{I}_{2}\quad\quad\quad\quad\vev{X}=\chi\,\mathbf{I}_{5}.
\end{eqnarray}
These are the most general VEVs consistent with the tree-level minimisation.
It can be checked that at one-loop order all other field VEVs are zero in the lowest perturbative vacuum.
By computing the masses of all fluctuations about this valley we can go about
constructing the one-loop effective potential from eq. \eqref{CW}.
We have done this numerically using the {\em Vscape} program of Ref.~\cite{vandenBroek:2007kj}.
Table~\ref{vscapetable} gives sample points showing the VEVs stabilised by the one-loop
effective potential.
\begin{table}[h] \begin{center}
\begin{tabular}{|c|c|c|c|c|c|} \hline $\mu$ & $m$ & $\xi$ & $\kappa$ & $\eta$ & $\chi$ \\
\hline \hline 10 & 0.3 & 41.0523 & 2.43592 & $-0.035477$ & $-1.761261$ \\
\hline 1.1 & 0.3 & 2.1370 & 0.566214 & $-0.148546$ & $-0.083296$ \\
\hline 1.01 & 0.3 & 1.8995 & 0.537043 & $-0.155796$ & $-0.073474$ \\
\hline 1.003 & 0.3 & 1.8809 & 0.534848 & $-0.157752$ & $-0.072738$ \\
\hline \hline \end{tabular} \end{center} \caption{ Stabilised VEVs from Vscape for various parameter points.
All values are given in units of $\hat{\mu}$. \label{vscapetable}} \end{table}
To summarise, we have identified a SUSY-breaking vacuum of the deformed ISS model, which also breaks $R$-symmetry
spontaneously via radiative corrections.
This is a long-lived metastable vacuum. The SUSY-preserving vacua of this model are only those generated by the
non-perturbative superpotential,
\begin{equation}
W_{np}=2\Lambda^{3}\left[{\rm det}\left(\frac{\Phi}{\Lambda}\right)\right]^{\frac{1}{2}}.
\end{equation}
Adapting the supersymmetric vacuum solution from the ISS model to our case with $\mu>\hat{\mu}$ we find,
\begin{equation}
\varphi=0,\quad\tilde{\varphi}=0,\quad\eta=\hat{\mu}^{2}\mu^{-\frac{6}{5}}
\Lambda^{\frac{1}{5}},\quad \chi=\mu^{\frac{4}{5}}\Lambda^{\frac{1}{5}}.
\end{equation}
Note that the supersymmetric minimum lies at $\varphi=\tilde{\varphi}=0$ and is completely
unaffected by the baryon
deformation. As we are not breaking $R$-symmetry explicitly, no other supersymmetric vacua are generated,
and, as a result, the decay rate of our metastable vacuum is exponentially small as in the original ISS model.
\subsection{Direct gauge mediation and generation of gaugino and sfermion masses}
As mentioned earlier, the $SU(5)_{f}$ symmetry of the superpotential \eqref{Wbardef} is gauged and identified
with the parent $SU(5)$ of the MSSM sector. This induces direct gauge mediation of SUSY breaking from the metastable
vacuum of the Hidden ISS sector to the MSSM. The Hidden sector matter fields $\rho$, $\tilde{\rho}$, $Z$, $\tilde{Z}$ and $X$
are charged under the $SU(5)$ and serve as direct messengers. These induce all the soft SUSY-breaking terms in the MSSM
sector, including gaugino and sfermion masses.
Gaugino masses are generated at one loop order (cf. Fig. \ref{gauginofig}). The fields propagating in the loop are
fermion and scalar components of the direct mediation `messengers' $\rho$, $\tilde{\rho}$
and $Z$, $\tilde{Z}$.\footnote{The adjoint part in
$X$ is also charged under the standard model gauge groups and therefore, in principle, can also mediate SUSY-breaking.
However, at tree-level $X$ does not couple to the supersymmetry breaking $F$-term,
and its fermionic and bosonic components have identical (zero) mass.
This degeneracy is only lifted at the one-loop level by the Coleman-Weinberg potential. We therefore neglect the contribution
from $X$ which we expect to be subdominant.}
Since gaugino masses are forbidden by $R$-symmetry, one crucial ingredient in their generation is the presence of
a non-vanishing $R$-symmetry breaking VEV, in our case the $\langle\chi\rangle$ generated by the non-vanishing baryon deformation $m$.
In contrast to the gaugino masses $m_{\lambda}$, sfermion masses $m_{\tilde{f}}$ are not protected by $R$-symmetry. Hence, as
long as supersymmetry remains broken, we can have non-vanishing
sfermion masses even in
the
absence of an $R$-symmetry breaking VEV. In our model this means that the sfermion masses are non-vanishing even for
a vanishing baryon deformation.
This shows that in a general (gauge) mediation scenario sfermion and gaugino masses are generated by quite different mechanisms.
Accordingly, the simple relation $m_{\lambda}\sim m_{\tilde{f}}$ does not necessarily hold in general gauge mediation scenarios;
indeed, our model is an explicit example where it fails.
Let us now turn to the practical evaluation of the gaugino masses.
For fermion components of the messengers,
\begin{equation}
\psi = (\rho_{ia}\,,Z_{ir})_{ferm} , \quad
\tilde{\psi} = (\tilde{\rho}_{ia}\,,\tilde{Z}_{ir})_{ferm},
\end{equation}
the mass matrix is given by
\begin{equation}
m_{f}=\mathbf{I}_{5}\otimes\mathbf{I}_{2}\otimes\left(\begin{array}{cc}
\chi & \xi\\
\kappa & 0\end{array}\right).
\label{mferms}
\end{equation}
We can also assemble the relevant scalars into
\begin{equation}
\label{Sdef}
S=(\rho_{ia},Z_{ir},\tilde{\rho}_{ia}^{*},\tilde{Z}_{ir}^{*})_{sc},
\end{equation}
and for the corresponding scalar mass-squared matrix we have
\begin{equation} \label{msc}
m_{sc}^{2}=\mathbf{I}_{5}\otimes\mathbf{I}_{2}\otimes\left(\begin{array}{cccc}
|\xi|^{2}+|\chi|^{2} & \chi^{*}\kappa & -\hat{\mu}^{2} &
\eta\,\kappa \\
\chi\,\kappa^{*} & |\kappa|^{2} &
\,\xi\,\eta+2m\kappa\, & 0\\
-\hat{\mu}^{2} & (\xi\eta)^{*}+2m\kappa^{*} &
|\kappa|^{2}+|\chi|^{2} & \chi\,\xi^{*} \\
\eta{}^{*}\kappa^{*} & 0 & \chi^{*}\xi & |\xi|^{2}\end{array}\right).
\end{equation}
\begin{figure}[t]
\begin{center}
\begin{picture}(200,120)
\SetOffset(0,20)
\Gluon(0,0)(50,0){4.5}{5}
\ArrowLine(50,0)(0,0)
\Line(50,0)(120,0)
\Gluon(120,0)(170,0){4.5}{5}
\Vertex(50,0){3}
\Vertex(120,0){3}
\DashCArc(85,0)(35,0,180){3}
\ArrowLine(120,0)(170,0)
\Vertex(85,35){3}
\Text(85,50)[c]{\scalebox{1.1}[1.1]{$\langle F_{\chi}\rangle$}}
\Vertex(85,0){1}
\Line(80,5)(90,-5)
\Line(90,5)(80,-5)
\Text(85,-15)[c]{\scalebox{1.1}[1.1]{$\langle \chi\rangle$}}
\end{picture}
\end{center}
\vspace*{-0.5cm} \caption{\small One-loop contribution to the
gaugino masses. The dashed (solid) line is a bosonic (fermionic) messenger.
The blob on the scalar line indicates an insertion of $\langle F_{\chi}\rangle$ into the propagator
of the scalar messengers and the cross denotes an
insertion of the $R$-symmetry breaking VEV into the propagator of the fermionic messengers.}\label{gauginofig}
\end{figure}
Gaugino masses arise from the one-loop diagram in Fig. \ref{gauginofig}.
To evaluate the diagram it is convenient to diagonalize the non-diagonal mass
terms \eqref{mferms}, \eqref{msc} using unitary matrices,
\begin{eqnarray}
\hat{m}_{sc}^{2} & = & Q^{\dagger}m_{sc}^{2}Q\\
\hat{m}_{f} & = & U^{\dagger}m_{f}V.
\end{eqnarray}
The fields in the new basis are given by,
\begin{eqnarray}
\hat{S} & = & S.Q\\
\hat{\psi}_{+} & = & \psi.U\\
\hat{\psi}_{-} & = & \tilde{\psi}.V^{*}.
\end{eqnarray}
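As an illustration of this diagonalisation step, the following sketch builds one $2\times2$ fermion block of \eqref{mferms} and the corresponding $4\times4$ scalar block of \eqref{msc} (real VEVs taken from the $\mu=1.1$ row of Table~\ref{vscapetable}, in units of $\hat{\mu}$), and diagonalises them with a singular value decomposition and a hermitian eigendecomposition respectively:

```python
import numpy as np

# Stabilised VEVs for mu = 1.1, m = 0.3 (units of mu-hat); mu-hat set to 1.
xi, kappa, eta, chi = 2.1370, 0.566214, -0.148546, -0.083296
muhat2, m = 1.0, 0.3

# One 2x2 block of the fermion messenger mass matrix.
mf = np.array([[chi,   xi],
               [kappa, 0.0]])

# The corresponding 4x4 block of the scalar mass-squared matrix (real VEVs).
off = xi*eta + 2.0*m*kappa
msc2 = np.array([[xi**2 + chi**2, chi*kappa, -muhat2,           eta*kappa],
                 [chi*kappa,      kappa**2,  off,               0.0],
                 [-muhat2,        off,       kappa**2 + chi**2, chi*xi],
                 [eta*kappa,      0.0,       chi*xi,            xi**2]])

# Diagonalise: mf = U diag(mhat_f) V^dagger, msc2 = Q diag(mhat_sc2) Q^dagger.
U, mhat_f, Vh = np.linalg.svd(mf)
mhat_sc2, Q = np.linalg.eigh(msc2)
```

The full matrices are $\mathbf{I}_5\otimes\mathbf{I}_2\otimes$ these blocks (`np.kron`), so the mass eigenvalues are simply ten-fold degenerate.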
In order to calculate the gaugino mass, we need the gauge interaction
terms given by
\begin{eqnarray}
\label{gaugeint}
\mathcal{L} & \supset & i\sqrt{2}g_{A}\lambda_{A}(\psi_{1}T^{A}S_{1}^{*}+\psi_{2}T^{A}S_{2}^{*}
+\tilde{\psi}_{1}T^{*A}S_{3}+\tilde{\psi}_{2}T^{*A}S_{4})+H.C.\\
& = & i\sqrt{2}g_{A}\lambda_{A}(\hat{\psi}_{+i}\hat{S}_{k}^{*}(U_{i1}^{\dagger}Q_{1k}+U_{i2}^{\dagger}Q_{2k})+
\hat{\psi}_{-i}\hat{S}_{k}(Q_{k3}^{\dagger}V_{1i}+Q_{k4}^{\dagger}V_{2i}))+H.C.,
\label{gaugeint2}
\end{eqnarray}
where we have expressed everything in terms of the
mass eigenstates in the second line.
Using the gauge interactions Eq. \eqref{gaugeint2}, the diagram in Fig. \ref{gauginofig} contributes to gaugino masses
as follows\footnote{More precisely, in evaluating \eqref{gauginomass}, we use the diagram in Fig. \ref{gauginofig}
without explicit insertions of $\langle F_{\chi}\rangle$ and $\langle {\chi}\rangle$ in the messenger propagators.
In the loop we use mass-eigenstate propagators and insert the
diagonalisation matrices at the vertices.
Appropriate dependence on $\langle F_{\chi}\rangle$ and $\langle {\chi}\rangle$ is
automatically introduced by the diagonalisation matrices.},
\begin{equation}
m_{\lambda_A}=2g_{A}^{2}\, {\rm Tr}(T^{A}T^{B})\sum_{ik}(U_{i1}^{\dagger}Q_{1k}+U_{i2}^{\dagger}Q_{2k})(Q_{k3}^{\dagger}V_{1i}+Q_{k4}^{\dagger}V_{2i})
\, I(\hat{m}_{f,i},\hat{m}_{sc,k})\,,
\label{gauginomass}
\end{equation}
where the 1-loop integral $I$ evaluates to
\begin{equation}
I(a,b)=\frac{-a(\eta+1)}{16\pi^2}+\frac{1}{16\pi^2}\frac{a}{a^2-b^2}(a^2\log(a^2/\Lambda^2)-b^2\log(b^2/\Lambda^2)),
\end{equation}
with
\begin{equation}
\eta=\frac{2}{4-D}+\log(4\pi)-\gamma_{E}.
\end{equation}
(Here $\eta$ denotes the standard dimensional-regularisation constant and should not be confused with the VEV in Eq.~\eqref{yvevs}.)
$I(a,b)$ is UV-divergent, but the divergences cancel in the sum over eigenstates as
they should.
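The structure of this cancellation can be seen directly: the $\Lambda$-dependent part of $I(a,b)$ is linear in the fermion mass $a$, so it drops out of the sum over eigenstates by the same trace identity that removes the $\eta$-pole. A small numerical check, using the regularised form with the $\eta$-term dropped:

```python
import numpy as np

def I_reg(a, b, Lam):
    """Lambda-dependent (regularised) part of the one-loop integral I(a, b);
    the pole term -a(eta + 1)/(16 pi^2) is dropped here."""
    return (a/(a**2 - b**2)
            * (a**2*np.log(a**2/Lam**2) - b**2*np.log(b**2/Lam**2))) / (16.0*np.pi**2)

# Changing the scale shifts I by -a*log(L1^2/L2^2)/(16 pi^2), i.e. a shift
# linear in a, which cancels in the sum over mass eigenstates.
a, b = 1.3, 0.7
shift = I_reg(a, b, 10.0) - I_reg(a, b, 1000.0)
```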
Keeping the SUSY-breaking scale $\hat\mu$ fixed, we can now study the dependence of the gaugino mass
on the two remaining parameters $\mu$ and $m$.
The VEVs $\xi$, $\kappa$, $\eta$ and $\chi$ are generated by minimising the effective potential,
as above.
The results are shown in Fig.~\ref{gauginomassnumresult}(a).
\begin{figure}[t]
\begin{center}
\subfigure[]{\begin{picture}(210,100)
\Text(-22,110)[l]{\scalebox{1.6}[1.6]{$\frac{m_{1/2}}{\hat{\mu}}$}}
\Text(180,-5)[l]{\scalebox{1.3}[1.3]{$m/\hat{\mu}$}}
\includegraphics[width=.43\textwidth]{gauginomasses.eps}
\end{picture}\label{gauginomassnum}}
\hspace{1.5cm}
\subfigure[]{\begin{picture}(210,100)
\Text(-8,110)[l]{\scalebox{1.6}[1.6]{$\frac{m_{0}}{\hat{\mu}}$}}
\Text(190,-5)[l]{\scalebox{1.3}[1.3]{$m/\hat{\mu}$}}
\includegraphics[width=.43\textwidth]{scalarmasses.eps}
\end{picture}\label{sfermionmassnum}}
\end{center}
\caption{\small Gaugino mass scale, $m_{1/2}$, and sfermion mass scale, $m_{0}$, as functions of the baryon deformation $m$,
for various values of $\mu$: red ($\mu=1.003$), green ($\mu=1.01$), blue ($\mu=1.1$) and black ($\mu=1.5$).
The mass scales $m_{1/2}$ and $m_{0}$ are defined in Eqs.~\eqref{mhalfdef},\eqref{mzerodef}.}
\label{gauginomassnumresult}
\end{figure}
For completeness we note that
the usual analytic expression for the gaugino mass, valid to
leading order ${\cal O}(F_{\chi})$, is of little use for us.
It comes from magnetic quarks,
$\rho$ and $\tilde{\rho}$, propagating in the loop, as shown in Fig.
\ref{gauginofig}, and takes the form $m^{(1)}_{\lambda_{A}} \sim \frac{g_{A}^{2}}{16\pi^{2}}\, F_{\chi}\,
(m_{f})^{-1}_{11}.$ However, it trivially
vanishes in our model, and one needs to go to order $F_{\chi}^3$ to find
a non-vanishing contribution. This effect was first pointed out in Ref.~\cite{Izawa:1997gs}:
the zero
element in the lower right corner of the fermion mass matrix
\eqref{mferms} implies that $(m_{f})^{-1}_{11} =0$ and hence $m^{(1)}_{\lambda_{A}}=0$.
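The vanishing of $(m_f)^{-1}_{11}$ is immediate from the block structure: since the $(2,2)$ entry of the $2\times2$ block is zero, so is the cofactor entering the $(1,1)$ entry of the inverse. A quick numerical confirmation (illustrative values):

```python
import numpy as np

# One 2x2 block of the fermion mass matrix; numerical values are illustrative.
chi, xi, kappa = -0.0833, 2.1370, 0.5662
mf = np.array([[chi,   xi],
               [kappa, 0.0]])

# inv([[chi, xi], [kappa, 0]]) = [[0, 1/kappa], [1/xi, -chi/(xi*kappa)]],
# so the (1,1) entry vanishes, and with it the O(F) gaugino mass.
mf_inv = np.linalg.inv(mf)
```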
Having determined
the
gaugino masses in Eq.~\eqref{gauginomass} and Fig.~\ref{gauginomassnumresult}(a),
we now turn to the generation of the masses for the sfermions of the supersymmetric standard model.
Here we will closely follow the calculation in Ref.~\cite{Martin:1996zb} adapted to our more general set of messenger particles.
As already mentioned at the beginning of this section, sfermion masses are generated by a different mechanism than the gaugino
masses. Indeed, they are generated by the
two-loop diagrams shown in Fig. \ref{sfermionfig}.
In \cite{Martin:1996zb} the contribution of these diagrams to the sfermion masses
was
determined to be,
\begin{equation}
m^{2}_{\tilde{f}}=\sum_{mess.} \sum_{a} g^{4}_{a} C_{a} S_{a}(mess.)[{\rm{sum}}\,\,{\rm{of}}\,\,{\rm{graphs}}],
\label{sfermionmass}
\end{equation}
where we sum over all gauge groups under which the sfermion is charged, $g_{a}$ is the corresponding gauge coupling,
$C_{a}=(N^{2}_{a}-1)/(2N_{a})$ is the quadratic Casimir and $S_{a}(mess.)$ is the Dynkin index of the messenger fields
(normalized to $1/2$ for fundamentals).
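For concreteness, the group-theory prefactor in \eqref{sfermionmass} can be evaluated with a small helper; this is only a sketch, with the Dynkin index $S_a$ supplied per messenger representation and the loop factors left inside the graphs:

```python
import numpy as np

def casimir_fund(N):
    """Quadratic Casimir C_a = (N^2 - 1)/(2N) of the SU(N) fundamental."""
    return (N**2 - 1) / (2.0*N)

def sfermion_prefactor(g, N, S):
    """Group-theory factor multiplying [sum of graphs] for one gauge group,
    with the conventional (16 pi^2)^2 loop factor pulled out for an estimate."""
    return g**4 * casimir_fund(N) * S / (16.0*np.pi**2)**2

# e.g. an SU(3)-charged squark with messengers of total Dynkin index S_3 = 1/2:
pref = sfermion_prefactor(g=1.2, N=3, S=0.5)
```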
\begin{figure}[t]
\begin{center}
\subfigure[]{
\scalebox{0.6}[0.6]{\begin{picture}(174,120)
\SetOffset(0,20)
\DashLine(0,0)(50,0){1}
\DashLine(50,0)(120,0){1}
\DashLine(120,0)(170,0){1}
\Vertex(50,0){3}
\Vertex(120,0){3}
\Vertex(85,40){3}
\Gluon(50,0)(85,40){4}{6}
\Gluon(85,40)(120,0){4}{6}
\DashCArc(85,60)(20,0,360){3}
\end{picture}}\label{twoloop1}}
\subfigure[]{
\scalebox{0.6}[0.6]{\begin{picture}(174,120)
\SetOffset(0,20)
\DashLine(0,0)(50,0){1}
\DashLine(50,0)(120,0){1}
\DashLine(120,0)(170,0){1}
\Vertex(50,0){3}
\Vertex(120,0){3}
\Gluon(50,0)(60,40){4}{5}
\Gluon(110,40)(120,0){4}{5}
\Vertex(60,40){3}
\Vertex(110,40){3}
\DashCArc(85,30)(28,30,150){3}
\DashCArc(85,50)(28,210,330){3}
\end{picture}}\label{twoloop2}}
\subfigure[]{
\scalebox{0.6}[0.6]{\begin{picture}(174,120)
\SetOffset(0,20)
\DashLine(0,0)(50,0){1}
\DashLine(50,0)(120,0){1}
\DashLine(120,0)(170,0){1}
\Vertex(85,0){3}
\Vertex(85,40){3}
\GlueArc(85,20)(20,90,270){-4}{6.5}
\GlueArc(85,20)(20,270,450){-4}{6.5}
\DashCArc(85,60)(20,0,360){3}
\end{picture}}\label{twoloop3}}
\subfigure[]{
\scalebox{0.6}[0.6]{\begin{picture}(174,120)
\SetOffset(0,20)
\DashLine(0,0)(50,0){1}
\DashLine(50,0)(120,0){1}
\DashLine(120,0)(170,0){1}
\Vertex(85,0){3}
\GlueArc(92,38)(35,175,260){4}{5}
\GlueArc(78,38)(35,280,365){4}{5}
\Vertex(60,40){3}
\Vertex(110,40){3}
\DashCArc(85,30)(28,30,150){3}
\DashCArc(85,50)(28,210,330){3}
\end{picture}}\label{twoloop4}}
\subfigure[]{
\scalebox{0.6}[0.6]{\begin{picture}(174,120)
\SetOffset(0,20)
\DashLine(0,0)(50,0){1}
\DashLine(50,0)(120,0){1}
\DashLine(120,0)(170,0){1}
\Vertex(85,0){3}
\GlueArc(92,38)(35,175,260){4}{5}
\GlueArc(78,38)(35,280,365){4}{5}
\Vertex(60,40){3}
\Vertex(110,40){3}
\CArc(85,30)(28,27,153)
\CArc(85,50)(28,207,333)
\end{picture}}\label{twoloop5}}
\subfigure[]{
\scalebox{0.6}[0.6]{\begin{picture}(174,120)
\SetOffset(0,20)
\DashLine(0,0)(50,0){1}
\DashLine(50,0)(120,0){1}
\DashLine(120,0)(170,0){1}
\Vertex(50,0){3}
\Vertex(120,0){3}
\Gluon(50,0)(60,40){4}{5}
\Gluon(110,40)(120,0){4}{5}
\Vertex(60,40){3}
\Vertex(110,40){3}
\DashCArc(85,30)(28,30,150){3}
\DashCArc(85,50)(28,210,330){3}
\end{picture}}\label{twoloop6}}
\subfigure[]{
\scalebox{0.6}[0.6]{\begin{picture}(174,120)
\SetOffset(0,20)
\DashLine(0,0)(50,0){1}
\DashLine(50,0)(120,0){1}
\DashLine(120,0)(170,0){1}
\Vertex(50,0){3}
\Vertex(120,0){3}
\DashCArc(85,-17)(40,30,150){3}
\DashCArc(85,17)(40,-30,-150){3}
\end{picture}}\label{twoloop7}}
\subfigure[]{
\scalebox{0.6}[0.6]{\begin{picture}(174,120)
\SetOffset(0,20)
\DashLine(0,0)(50,0){1}
\DashLine(50,0)(120,0){1}
\DashLine(120,0)(170,0){1}
\Vertex(50,0){3}
\Vertex(120,0){3}
\Gluon(50,0)(50,40){4}{5}
\ArrowLine(50,0)(50,40)
\Gluon(120,40)(120,0){4}{5}
\ArrowLine(120,40)(120,0)
\Vertex(50,40){3}
\Vertex(120,40){3}
\DashCArc(85,40)(35,0,180){3}
\Line(50,40)(120,40)
\end{picture}}\label{twoloop8}}
\end{center}
\vspace*{-0.5cm} \caption{\small Two-loop diagrams contributing to the sfermion masses.
The long dashed (solid) line is a bosonic (fermionic) messenger.
Standard model sfermions are depicted by short dashed lines.}\label{sfermionfig}
\end{figure}
In the following we will only describe the new features specific to the messenger fields of our direct mediation model.
The explicit expressions for the loop integrals and
the algebraic prefactors resulting from the $\gamma$-matrix algebra etc. can be found in the appendix of \cite{Martin:1996zb}.
To simplify the calculation we also neglect the masses of the MSSM fields relative to the messenger masses.
As in the calculation of the gaugino mass, we use the propagators in diagonal form and insert the
diagonalisation matrices directly at the vertices.
For the diagrams \ref{twoloop1} to \ref{twoloop6} we have closed loops of purely bosonic or purely fermionic
mass eigenstates of our messenger fields.
It is straightforward to check that in this case the unitary matrices from the diagonalisation drop out.
We then simply sum the results for these diagrams computed in Ref.~\cite{Martin:1996zb} over all mass eigenstates.
The next diagram \ref{twoloop7} is slightly more involved.
This diagram arises from the D-term interactions. D-terms distinguish between chiral and antichiral fields,
in our case $\rho,Z$ and $\tilde{\rho},\tilde{Z}$,
respectively. We have defined our scalar field $S$ in \eqref{Sdef} such that all component fields have equal charges.
Accordingly, the ordinary gauge vertex is
proportional to a unit matrix in the component space (cf. Eq. \eqref{gaugeint}).
This vertex is then `dressed' with our diagonalisation matrices when we switch to
the $\hat{S}$ basis, \eqref{gaugeint2}.
This is different for diagram \ref{twoloop7}. Here we have an additional minus sign between chiral and antichiral fields.
In field space this corresponds to
a vertex that is proportional to a matrix $V_{D}={\rm diag}(1,1,-1,-1)$.
We therefore obtain,
\begin{equation}
{\rm Fig.}~\ref{twoloop7}\, = \, \sum_{i,m} (Q^{T}V_{D}Q)_{i,m}J(\hat{m}_{0,m},\hat{m}_{0,i})(Q^{T}V_{D}Q)_{m,i},
\end{equation}
where $J$ is the appropriate two-loop integral for Fig.~\ref{twoloop7} which can be found in \cite{Martin:1996zb}.
Finally, in \ref{twoloop8} we have a mixed boson/fermion loop.
The subdiagram containing the messengers is similar to the diagram for the gaugino mass. The only difference is the direction of the
arrows on the gaugino lines. Indeed the one-loop sub-diagram corresponds to a contribution
to the kinetic term rather than a mass term for the gauginos.
(The mass term will of course contribute as well but will be suppressed
by quark masses.)
Using Eq. \eqref{gaugeint2} we find,
\begin{eqnarray}
{\rm Fig.}~ \ref{twoloop8} & = & \sum_{ik}(|U_{i1}^{\dagger}Q_{1k}+U_{i2}^{\dagger}Q_{2k}|^{2}
+|Q_{k3}^{\dagger}V_{1i}+Q_{k4}^{\dagger}V_{2i}|^{2})L(\hat{m}_{1/2,i},\hat{m}_{0,k}^{2})\,,
\end{eqnarray}
where $L$ is again the appropriate loop integral from \cite{Martin:1996zb}.
Summing over all diagrams we find the sfermion masses depicted in Fig. \ref{sfermionmassnum}.
Comparing to the gaugino masses in Fig.~\ref{gauginomassnum}, we find the sfermion masses to be significantly bigger.
Indeed, the scalar masses roughly follow the naive estimate
\begin{equation}
m^{2, {\rm naive\,\, estimate}}_{\tilde{f}}\sim \frac{g^4}{(16\pi^2)^{2}} \hat{\mu}^{2}.
\end{equation}
This demonstrates again the fundamental difference between the generation of gaugino masses and the generation of sfermion masses.
The main results of this section, Eqs.~ \eqref{gauginomass} and \eqref{sfermionmass}
give the gaugino and scalar masses generated at the messenger mass scale $\mu$.
It is useful to factor out the particle-type-dependent overall constants and
define the \emph{universal} fermion and scalar mass contributions $m_{1/2}$
and $m_0$, via
\begin{eqnarray}
\label{mhalfdef}
m_{\lambda_A}(\mu) &:=& \frac{g_A^2}{16\pi^2}\,\,m_{1/2}\,,\\
\label{mzerodef}
m^{2}_{\tilde{f}}(\mu) &:=& \sum_{A} \frac{g^{4}_{A}}{(16\pi^2)^2}
C_{A} S_{A}\,\,m^2_0\,.
\end{eqnarray}
The gaugino and sfermion masses, Eqs.~\eqref{gauginomass} and \eqref{sfermionmass}, are then expressed in terms of $m_{1/2}$
and $m_0$ which we calculate numerically using the VEVs generated by {\em{Vscape}}.
As an example, in
Table~\ref{masstable} we show the values for $m_{1/2}$ and $m_0$ obtained for the same
parameters as in Table~\ref{vscapetable}. For a more up-to-date discussion of gaugino and scalar mass values
we refer the reader to the more recent paper \cite{AJKM}. In section 4 of that reference we have used
more accurate numerical values for stabilised VEVs
(with no tree-level constraints imposed) and have also included contributions to $m_{1/2}$ and $m_0$ from
loops of $X$ messenger fields.
\begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|} \hline $\mu$ & $m$ & $m_{1/2}$ & $m_0^2$ \\
\hline \hline 10 & 0.3 & 1.03984{\scriptsize$ \times10$}$^{-7}$ & 0.026787 \\ \hline 1.1 & 0.3 & 0.017843 & 4.89783 \\
\hline 1.01 & 0.3 & 0.044771 & 5.12698 \\ \hline 1.003 & 0.3 & 0.052320 & 4.74031 \\ \hline \hline
\end{tabular} \end{center}
\caption{Gaugino mass and sfermion mass-squared coefficients for various parameter points.
All values are in units of $\hat{\mu}$. \label{masstable}}
\end{table}
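To turn the coefficients of Table~\ref{masstable} into physical soft masses one simply reinstates the coupling and loop factors of Eqs.~\eqref{mhalfdef}, \eqref{mzerodef}. A rough sketch for a single gauge group (the coupling value and setting $C_A S_A\to1$ are illustrative assumptions, not the model's prediction):

```python
import numpy as np

# mu = 1.003, m = 0.3 row of the table (units of mu-hat).
m_half, m0_sq = 0.052320, 4.74031
g = 0.7                       # illustrative gauge coupling; C_A * S_A set to 1
loop = 16.0*np.pi**2

m_gaugino  = g**2/loop * m_half              # cf. Eq. (mhalfdef)
m_sfermion = np.sqrt(g**4/loop**2 * m0_sq)   # cf. Eq. (mzerodef), one group

ratio = m_sfermion/m_gaugino   # large ratio: split-SUSY-like spectrum
```

Note that for a single group the coupling and loop factors drop out of the ratio, which reduces to $\sqrt{m_0^2}/m_{1/2}$.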
\subsection{Renormalisation group running, mass spectrum and electroweak symmetry breaking}
In the previous section we calculated the soft SUSY-breaking masses for gaugionos and sfermions at the messenger scale $\mu$.
The Higgs masses $m_{H_1}^2$ and $m_{H_2}^2$ are calculated in the same way as
the sfermion masses above\footnote{We use the GUT normalisation convention for the $g_1$ gauge coupling.}:
\begin{equation}
m_{H_1}^2(\mu) =
m_{H_2}^2(\mu)=\left(\frac{3}{4}\frac{g^{4}_{2}}{(16\pi^2)^2}+\frac{3}{20}
\frac{g^{4}_{1}}{(16\pi^2)^2} \right)\, m^2_0.
\end{equation}
The other soft SUSY-breaking terms in the MSSM, such as the $A$-terms and the
$B$-term are generated
at two-loop level. Indeed the diagrams giving rise to the $B$-term require an insertion of the Peccei-Quinn
violating parameter $\mu_\mathrm{\sst MSSM}$ {\em and} a SUSY-breaking gaugino ``mass loop''. Thus its magnitude
at the messenger scale $\mu$ is of order \cite{rattazzi-sarid}
\begin{equation}
B\mu_\mathrm{\sst MSSM} \sim \frac{g^2}{16\pi^2} m_{\lambda} \mu_\mathrm{\sst MSSM} \sim \frac{g^4}{(16\pi^2)^2} m_{1/2} \mu_\mathrm{\sst MSSM} ,
\end{equation}
and is loop suppressed with respect to gaugino masses. For the accuracy required here,
it will be sufficient to take $B = 0$ at the messenger scale.
We now turn to the phenomenology in full, beginning with the SUSY
breaking in the visible sector. The next step is to use the renormalisation group running to determine
the soft SUSY breaking parameters at the weak scale, solve for electroweak symmetry breaking and find the mass spectrum of the MSSM.
We will throughout be using the conventions
of Refs.\cite{Abel:2000vs,softsusy} (with the obvious replacement
$\mu\rightarrow\mu_\mathrm{\sst MSSM}$). The pattern of SUSY breaking here is
expected to be different from the standard gauge mediation form for two reasons.
Firstly, our model naturally predicts significantly larger values of $m_0$ relative to $m_{1/2}$.
Secondly, for reasons explained above, we take $B=0$
at the messenger scale. The phenomenology of the $B=0$ case has been discussed
in Refs.\cite{Babu:1996jf,Dimopoulos:1996yq,Bagger:1996ei,rattazzi-sarid}.
The main prediction is that high $\tan\beta$
is required for the electroweak symmetry breaking.
In order to see why, consider the tree-level minimisation conditions
with respect to $H_{2}$ and $H_{1}$, which are
\begin{eqnarray}
\label{min-one}
\mu_\mathrm{\sst MSSM}^{2} & = & \frac{m_{H_{1}}^{2}-m_{H_{2}}^{2}\tan^{2}\beta}{\tan^{2}\beta-1}-\frac{m_{Z}^{2}}{2}
\\
\label{min-two}
B \mu_\mathrm{\sst MSSM} & = & \frac{\sin(2\beta)}{2}(m_{H_{1}}^{2}+m_{H_{2}}^{2}+2\mu_\mathrm{\sst MSSM}^{2}).
\end{eqnarray}
Since $B \mu_\mathrm{\sst MSSM}$ is generated only radiatively, the RHS of the
second equation has to be suppressed by small $\sin(2\beta)$ with
$\beta$ approaching $\pi/2$. One additional feature of the $B$-parameter that complicates the analysis somewhat
is, as noted in Ref.~\cite{rattazzi-sarid}, an accidental cancellation of renormalisation group
contributions to its running close to the weak scale.
Of course this model becomes more fine-tuned as $m_0\gg m_{1/2}$, since we are after all decoupling supersymmetry in that limit. It is
worth understanding what has to be fine-tuned. Since $\tan\beta \gg 1$ when $m_0\gg m_{1/2}$ the first equation tells us that we must have
$\mu_\mathrm{\sst MSSM}^2 \approx -m^2_{H_2}$. In order to have a hope of satisfying the second equation there has to be a cancellation
of the terms inside the bracket, $m_{H_{1}}^{2}+m_{H_{2}}^{2}+2\mu_\mathrm{\sst MSSM}^{2} \approx 0$ and therefore $m_{H_1}^2\approx m_{H_2}^2 $
near the minimization scale. This is consistent with large $\tan\beta$ where the top and bottom Yukawa couplings become
approximately degenerate\footnote{ Using the conventional definition \cite{fine-tuning}, the fine-tuning is then of order $\mu_\mathrm{\sst MSSM}/m_Z$.}.
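The role of $\tan\beta$ in Eqs.~\eqref{min-one}, \eqref{min-two} can be made concrete with a toy numerical example (the soft Higgs masses below are assumed for illustration only):

```python
import numpy as np

def ewsb(tanb, mH1sq, mH2sq, mZsq):
    """Tree-level minimisation conditions (min-one), (min-two):
    returns (mu_MSSM^2, B*mu_MSSM) for a given tan(beta)."""
    t2 = tanb**2
    musq = (mH1sq - mH2sq*t2)/(t2 - 1.0) - mZsq/2.0
    sin2b = 2.0*tanb/(1.0 + t2)          # sin(2 beta)
    Bmu = 0.5*sin2b*(mH1sq + mH2sq + 2.0*musq)
    return musq, Bmu

# Illustrative soft masses (arbitrary units^2); running drives mH2sq negative.
mH1sq, mH2sq, mZsq = 1.0e4, -9.0e4, 91.19**2

musq10, Bmu10 = ewsb(10.0, mH1sq, mH2sq, mZsq)
musq50, Bmu50 = ewsb(50.0, mH1sq, mH2sq, mZsq)
# B*mu shrinks as tan(beta) grows: a radiatively small B requires large tan(beta).
```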
To calculate the spectrum of these models we have modified the
{\em Softsusy2.0} program of Ref. \cite{softsusy}. In its unmodified form this program
finds Yukawa couplings consistent with soft SUSY breaking terms (specified at
the messenger scale $Q_\mathrm{\sst Mess}=\mu$) and electroweak symmetry breaking conditions
(imposed at a scale $Q_\mathrm{\sst SUSY}$ to be discussed later). It is usual to take the
ratio of Higgs vevs $\frac{v_u}{v_d}\equiv\tan\beta$ at $Q_\mathrm{\sst SUSY}$ as an input
parameter instead of the soft SUSY breaking term $B \mu_\mathrm{\sst MSSM}$ at $Q_\mathrm{\sst Mess}$. This term,
and
the SUSY preserving $\mu_\mathrm{\sst MSSM}$ are subsequently determined through the EWSB
conditions \eqref{min-one},\eqref{min-two}.
As the models we are considering have $B \mu_\mathrm{\sst MSSM} = 0$ at $Q_\mathrm{\sst Mess}$ to two loops, $\tan\beta$
is not a free parameter, and must (as noted above) be adjusted in
{\em Softsusy2.0}, so that this boundary condition is met.
In detail the iteration procedure works as follows: initially, a
high value of $\tan\beta$ is chosen and all the gauge and Yukawa couplings are evolved
to $Q_\mathrm{\sst Mess}$. The soft parameters are
then set, as per the SUSY breaking model, including the condition
$B \mu_\mathrm{\sst MSSM} = 0$. The whole system is then evolved down to $Q_\mathrm{\sst SUSY}$,
where $\tan\beta$ is adjusted to bring the program closer to a solution of the EWSB
condition in Eq.(\ref{min-two}) (including the
1-loop corrections to the soft masses $m_{H_1}^2$ and $m_{H_2}^2$ and the
self-energy contributions to the $\overline{DR}$ mass-squared of the CP-odd
Higgs, $m_A^2$).
We then run back up to $Q_\mathrm{\sst Mess}$ where we reimpose the soft-breaking boundary conditions,
and the whole process is repeated until the value of $\tan\beta$ converges.
The scale $Q_\mathrm{\sst SUSY}$ at which the tree-level minimisation conditions
\eqref{min-one}, \eqref{min-two} are imposed is chosen so as to minimise the
radiative corrections to the results. It is usually taken to
be $Q_\mathrm{\sst SUSY} \equiv x \sqrt{m_{\tilde t_1}m_{\tilde t_2}}$ where $x$ ({\tt QEWSB} in the
language of Ref.\cite{softsusy}) is a number of order unity.
As we see from Table~\ref{varyqsusy}, the lightest Higgs mass (in the model with
spontaneously broken $R$-symmetry) depends less on scale for lower values of
$Q_\mathrm{\sst SUSY}$, and so in this model we will therefore be using $Q_\mathrm{\sst SUSY} =
0.8\times\sqrt{m_{\tilde t_1}m_{\tilde t_2}}$. Note that only the Higgs masses are
sensitive to this choice and the other parameters are largely unaffected.
In most of its parameter space (i.e. generic $\mu > {\hat{\mu}}$), the model we are discussing clearly predicts a split-SUSY-like
spectrum because of the
suppression of $R$-symmetry violating operators (i.e. $m_{1/2}\ll m_0$ in Table~\ref{masstable}). It provides a
first-principles model which can implement split-SUSY \cite{split1,split2}. (For other realisations of split-SUSY
scenarios see e.g. \cite{Langacker:2007ac}.)
Our purpose here however is to examine how close models with radiative $R$-symmetry breaking
can get to the usual gauge mediation scenarios \cite{GR}. For this reason we want to
reduce the $m_0/m_{1/2}$ ratio as far as possible and to take $\mu$ approaching $\hat{\mu}$, i.e. the
last two rows in Table~\ref{masstable}. In Table~\ref{spectratable} in Section {\bf 4}, we present a benchmark point (Benchmark Point A)
with the full spectrum of the model for $\mu=1.003 {\hat{\mu}}$, $m=0.3{\hat{\mu}}$. This point
corresponds to a phenomenologically viable region of parameter space near the
boundary, which has heavy scalars and light charginos and neutralinos, and exhibits electroweak symmetry breaking.
This point is still quite distinct from the usual gauge mediation scenarios, and as we will see in the
next section from predictions of gauge mediation models with {\em explicit} $R$-symmetry breaking \cite{MN,AS}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
$Q_\mathrm{\sst SUSY}$ & $\times\,0.75$ & $\times\,0.80$ & $\times\,0.85$ & $\times\,0.9$ &
$\times\,0.95$ &$\times\,0.99$ & $\times\,1.0$ &
$\times1.01$ \\
\hline
$h_0$ & 124.5 & 124.5 & 124.2 & 124.1 & 123.8 & 101.5 & 93.3 & 78.6 \\
\hline
\end{tabular}
\end{center}
\caption{
Checking the scale dependence of the lightest Higgs mass (in GeV).
\label{varyqsusy}}
\end{table}
\section{Gauge mediation with explicit $R$-breaking}
Here we consider the gauge mediation models of Refs.~\cite{MN,AS}
which are working models of metastable SUSY breaking with a messenger
sector that, as already noted, explicitly breaks the $R$-symmetry
of the ISS sector. The general philosophy is to appeal to the details
of the couplings of the messengers to the electric ISS theory to explain
why the explicit $R$-symmetry breaking is so weak in the effective
theory. The net result is that one only breaks the $R$-symmetry
by operators suppressed by powers of $M_{Pl}$.
Although the phenomenology is expected to broadly follow that of the
gauge mediation paradigm \cite{GR}, there is a difference. We will
argue that, in the present context the Higgs bilinear $B$ parameter
of the MSSM (the SUSY breaking counterpart of $\mu_\mathrm{\sst MSSM} H_{u}H_{d}$)
is naturally zero at the mediation scale. This is because $R$-symmetry
breaking operators are (by assertion) suppressed by powers of $M_{Pl}$
and this restricts the possibilities for generating the $B$ parameter:
it is either many orders of magnitude too large or forbidden by symmetries,
and hence zero.
Let us begin by recapping Ref.~\cite{MN} and considering this issue
in detail, before presenting the SUSY breaking phenomenology, and
an example
benchmark point. The model augments the original
ISS model with a pair of messenger ``quarks'', denoted $f$ and $\tilde{f}$,
charged under the SM gauge group and of mass $M_{f}$. For
simplicity we shall assume that they form a fundamental and antifundamental
respectively of the parent $SU(5)$ of the SM. It was proposed that
these couple maximally to the electric theory via a piece of the
form
\begin{equation}
W_{R}=\frac{\lambda}{M_{Pl}}(\tilde{Q}Q)(\tilde{f}f)+M_{f}\tilde{f}f,
\label{WRff}
\end{equation}
where $M_{Pl}$ is the scale of new physics at which the operator
is generated, hereafter assumed to be the Planck scale.
For simplicity
we shall for this discussion consider both $\mu^{2}$ and $\lambda$
to be flavour independent couplings. The essential observation of
Ref.~\cite{MN} is that, in the magnetic theory, this appears as an
extremely weak violation of $R$-symmetry due to the large energy
scale at which the operator is generated,
\begin{equation}
W_{R}=\lambda'\Phi\tilde{f}f+M_{f}\tilde{f}f \equiv S_{mess} \tilde{f}f,
\end{equation}
where we have introduced the spurion superfield $S_{mess}$, as in the standard gauge-mediation set-up.
By assumption the high energy scale $M_{Pl}$ is much larger
than $\Lambda$ so that
\begin{eqnarray}
\lambda'=\frac{\lambda\Lambda}{M_{Pl}} & \ll & 1.
\end{eqnarray}
Since the $R$-symmetry is not respected by $W_{R}$
the Nelson-Seiberg theorem \cite{NS} necessarily leads to the appearance of a new SUSY-preserving vacuum,
but as long as $\lambda'$ is small enough, the transition rate from $|{\rm vac}\rangle_+$ to this new vacuum is suppressed and
the original ISS picture is unchanged. Indeed the meson $\Phi$ field can remain trapped in $|{\rm vac}\rangle_+$ near the origin,
with effective messenger $F$-term and scalar VEVs of the spurion superfield
\begin{eqnarray}
\langle F_{mess}\rangle & \equiv & \lambda' \langle F_{\Phi}\rangle \, =\, \lambda'\mu^{2}\nonumber \\
\langle S_{mess} \rangle & \equiv & \lambda' \langle {\Phi} \rangle + M_{f} \, \approx\, M_{f}\,.
\end{eqnarray}
As in usual gauge mediation, a gaugino mass is induced at one loop
of order
\begin{equation}
m_{\lambda} \sim \frac{g^{2}}{16\pi^{2}}\frac{\langle F_{mess}\rangle}{\langle S_{mess} \rangle}
\sim\frac{g^{2}}{16\pi^{2}}\frac{\lambda'\mu^{2}}{M_{f}}\,,
\label{eq:gaugino1}
\end{equation}
and a scalar mass-squared of the same order induced at two loops,
\begin{equation}
m_{\tilde{q}}^2 \sim m_{\lambda}^2 .
\label{eq:msc1}
\end{equation}
The last equation is a consequence of the fact that $R$-symmetry breaking which controls gaugino masses
is linked to (i.e. not much smaller than) the SUSY-breaking scale of the Visible Sector.
There is a new global minimum where
the rank condition \eqref{rank-cond}
is satisfied and the $\mu^2$-ISS term is cancelled in the ISS potential,
\begin{eqnarray}
\langle\tilde{f}f\rangle & = & \mu^{2}/\lambda'\nonumber \\
\langle\Phi\rangle & = & M_{f}/\lambda',
\end{eqnarray}
however for small enough $\lambda'$ these minima can lie much further
from the origin than $\Lambda$, beyond which all one can say
is that there will be a global minimum of order $\langle\Phi\rangle\sim\Lambda$.
Such far-flung minima do not change the ISS picture of metastability,
and this is why the weakness of $\lambda'$ is welcome. The resulting
bound is $M_{f}\gtrsim\lambda'\mu$ \cite{MN}. Coupled with the gaugino
mass being of order $m_{W}$, we find only very weak bounds:
\begin{equation}
\mu\gtrsim16\pi^{2}m_{W}.\label{eq:weakbound}
\end{equation}
There are a number of additional constraints, two of the most important
being that $f$ is non-tachyonic, which gives $M_{f}^{2}>\lambda'\mu^{2}$,
and that gauge mediation is dominant, which requires $\frac{M_{f}}{M_{Pl}}\lesssim10^{-4}\lambda'$.
Additional constraints come from the possibility of additional operators
such as $\delta W=\Phi^{2}/M_{Pl}$ which are now allowed in the superpotential,
however all of these can be easily satisfied for high values of $\Lambda$.
\subsection{Forbidden operators and $B=0$}
If one considers the MSSM sector as well, then there are other Planck-suppressed
$R$-symmetry breaking operators that had to be forbidden
in Refs.~\cite{MN,AS}. Normally in gauge mediation one is justified
in neglecting gravitationally induced operators altogether; however, as we have seen,
in these models the leading Planck-suppressed operator plays a pivotal
role. Hence it is important to determine what effect other Planck-suppressed operators may have.
The most important conclusion of this
discussion will be that phenomenological consistency requires $B\approx0$
at the mediation scale.
Before considering the operators in question, it is worth recalling
the problem with $B$ in usual gauge mediation, in which supersymmetry
breaking is described by a Hidden sector spurion superfield $S_{mess}$. The problem arises when
one tries to generate the $\mu_\mathrm{\sst MSSM} H_{u}H_{d}$ term of the MSSM
(for a recent review see Ref.~\cite{slavich}). Consider generating
$\mu_\mathrm{\sst MSSM}$ directly in the superpotential. There are two possibilities,
either the parameter $\mu_\mathrm{\sst MSSM}$ depends on $\langle S_{mess}\rangle $ in which
case a $B$-term is generated, or it does not, in which case $B=0$.
Let us suppose that it does, and that the superpotential contains
$W\supset\mu_\mathrm{\sst MSSM}(S_{mess})H_{u}H_{d}$. The $B$ term is given
by
\begin{equation}
B=\frac{\mu_\mathrm{\sst MSSM}'}{\mu_\mathrm{\sst MSSM}}F_{mess}\sim \frac{F_{mess}}{S_{mess}},
\end{equation}
where $\mu_\mathrm{\sst MSSM}' = d \mu_\mathrm{\sst MSSM}/d S_{mess}$ and the final relation follows from dimensional analysis.
This should be compared with the SUSY breaking contribution to the
gaugino masses which appear at one loop, $m_{\lambda}\sim\frac{g^{2}}{16\pi^{2}}\frac{F_{mess}}{S_{mess}}$,
so that
\begin{equation}
B\sim\frac{16\pi^{2}}{g^{2}}m_{\lambda}.
\end{equation}
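Numerically, for $g^{2}=O(1)$ the loop factor gives
\begin{equation}
\frac{B}{m_{\lambda}}\,\sim\,\frac{16\pi^{2}}{g^{2}}\,=\,{\cal O}(10^{2})\,.
\end{equation}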
Hence one finds that $B \mu_\mathrm{\sst MSSM}$ is two orders of magnitude too
large. More generally because the $\mu_\mathrm{\sst MSSM}$ and $B \mu_\mathrm{\sst MSSM}$
terms are both forbidden by a Peccei-Quinn symmetry, they tend to
be generated at the same order, whereas $B \mu_\mathrm{\sst MSSM}$ should have
an additional loop suppression (in order to be comparable to the scalar
mass-squareds). One can then assume that $\mu_\mathrm{\sst MSSM}$ is independent
of $S_{mess}$ in which case $B=0$, or try to find a more sophisticated
dynamical reason that the $B$ term receives loop suppression factors.
Now let us turn to the models of Ref.~\cite{MN,AS}. Here the situation
is rather more pronounced for the very same reason that the $R$-symmetry
breaking is under control, namely that the spurion is related to a
meson of the electric theory. The $\mu_\mathrm{\sst MSSM}$ term will be a function
of
\begin{equation}
\frac{Q\tilde{Q}}{M_{Pl}}=\frac{\Lambda\Phi}{M_{Pl}},
\end{equation}
and will be dominated by the leading terms. The leading operators
involving $H_{2}H_{1}$ that we can consider are
\begin{eqnarray}
W & \supset & \mu_{0}H_{2}H_{1}+\frac{\lambda_{2}}{M_{Pl}}H_{2}H_{1}\tilde{f}f
+\frac{\lambda_{3}}{M_{Pl}}H_{2}H_{1}\tilde{Q}Q,\nonumber \\
K & \supset & \lambda_{4}\frac{(Q\tilde{Q})^{\dagger}H_{2}H_{1}}{M_{Pl}^{2}}+h.c.,
\end{eqnarray}
where $\lambda_{2,3,4}\sim1$. We will for generality allow the $\mu_{0}$
term which is consistent with $R$-symmetry in the renormalizable
theory; this represents supersymmetric contributions to the $\mu_\mathrm{\sst MSSM}$-term
that do not involve the ISS sector. (It would of course be inconsistent
to allow further SUSY breaking in the non-ISS sector.) The remaining
$R$-violating operators we will take to be Planck suppressed as prescribed
in Refs.~\cite{MN,AS}.
Unfortunately it is clear that the Kahler potential term cannot (as
it could in Refs.~\cite{GuidiceMasiero,rattazzi}) be responsible for
the $\mu_\mathrm{\sst MSSM}$-term. Its contribution is of order
\begin{equation}
\mu_\mathrm{\sst MSSM}\sim\frac{\Lambda}{M_{Pl}^{2}}\mu^{2}\,,
\end{equation}
but we require $\mu^{2}\ll M_{Pl}m_{W}$ for gauge mediation to be
dominant, which would imply $\mu_\mathrm{\sst MSSM}\ll \frac{\Lambda}{M_{Pl}} m_{W}$. Similar considerations
apply to operators in the Kahler potential with factors of $D^{2}(\Phi^{\dagger}\Phi)$
as in \cite{dvali}.
Turning instead to the leading superpotential terms, and assuming
the messengers remain VEVless, one has
\begin{eqnarray}
\mu_\mathrm{\sst MSSM} & = & \mu_{0}+\lambda_{3}\frac{\Lambda}{M_{Pl}}
\langle\Phi\rangle\sim\mu_{0}+\lambda_{3}16\pi^{2}\frac{\Lambda^{3}}{M_{Pl}^{2}}\nonumber \\
B \mu_\mathrm{\sst MSSM} & = & \lambda_{3}\frac{\Lambda\mu^{2}}{M_{Pl}}\nonumber \\
m_{Higgs}^{2} & \sim & \frac{g^{4}}{(16\pi^{2})^{2}}\frac{\Lambda^{2}}{M_{Pl}^{2}M_{f}^{2}}\mu^{4}\nonumber \\
m_{\lambda} & \sim & \frac{g^{2}}{16\pi^{2}}\frac{\Lambda}{M_{Pl}}\frac{\mu^{2}}{M_{f}},
\end{eqnarray}
where we used the fact that, as shown in Ref.~\cite{MN}, the $\Phi$
field is expected to get only a small VEV due to the presence of $R$-symmetry
breaking operators, which was estimated to be
\begin{equation}
\langle\Phi\rangle\sim16\pi^{2}\frac{\Lambda^{2}}{M_{Pl}}.
\end{equation}
The above gives
\begin{equation}
B \mu_\mathrm{\sst MSSM}\sim\frac{16\pi^{2}}{g^{2}}m_{\lambda}M_{f}.
\end{equation}
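This follows by eliminating the combination $\Lambda\mu^{2}/M_{Pl}$ in favour of the gaugino mass above,
\begin{equation}
B \mu_\mathrm{\sst MSSM}\,=\,\lambda_{3}\frac{\Lambda\mu^{2}}{M_{Pl}}
\,\sim\,\lambda_{3}\,\frac{16\pi^{2}}{g^{2}}\,m_{\lambda}\,M_{f}\,,
\end{equation}
with $\lambda_{3}\sim1$.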
Typically $M_{f}$ has to be orders of magnitude above $m_{W}$, so
the situation is considerably worse than in usual gauge mediation
unless a symmetry forbids the $\lambda_{3}$ coupling. A global $R$-symmetry
would not be respected by gravitationally suppressed operators; however,
it \emph{is} possible that particular operators can be suppressed.
If $\mu_\mathrm{\sst MSSM}$ for example is charged under an additional gauge
symmetry then one might expect
\begin{equation}
\lambda_{2}\sim\lambda_{3}\sim\frac{\mu_\mathrm{\sst MSSM}}{M_{Pl}},
\end{equation}
in which case the effect of these operators is utterly negligible
and we effectively have
\begin{eqnarray}
\mu_\mathrm{\sst MSSM} & \approx & \mu_{0}\nonumber \\
B & \approx & 0.
\end{eqnarray}
Note the importance of the interpretation of the effective ISS theory
as a magnetic dual in this discussion. For example one could also
have considered the effective operator
\begin{equation}
W_{\scriptscriptstyle R/MSSM}=\frac{\lambda_{4}}{M_{Pl}}H_{2}H_{1}Tr(\tilde{\varphi}.\varphi).
\end{equation}
This would have given $\mu_\mathrm{\sst MSSM}\sim\frac{\mu^{2}}{M_{Pl}}$ similar
to the Giudice-Masiero mechanism, above. However because the magnetic
quarks $\varphi$ and $\tilde{\varphi}$ are composite objects, the
coupling $\lambda_{4}$ will be suppressed by many powers of $\Lambda/M_{Pl}$
so this contribution to $\mu_\mathrm{\sst MSSM}$ would always be negligible.
Finally,
we have investigated
the pattern of SUSY breaking in the model with explicit $R$-breaking \cite{MN,AS}, discussed above.
As expected,
it conforms to the standard gauge mediation form, with a requirement that $B=0$
at the mediation scale.
In Table~\ref{spectratable} in Section {\bf 4}, we present a benchmark point (Benchmark Point B)
with the full spectrum of the model.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
$\quad\quad\quad\quad$& $\,\,$Model A$\,\,$ & $\,\,$Model B$\,\,$
\\
\hline
$Q_\mathrm{\sst Mess}$ & $8.32\times 10^{5}$ & $1\times 10^{7}$ \\
\hline
$\tan\beta$ & 58.7 & 38.9 \\
\hline
${\rm sgn} \mu_\mathrm{\sst MSSM}$ & + & + \\
\hline
$\mu_\mathrm{\sst MSSM}(Q_\mathrm{\sst SUSY})$ & 2891 & 939 \\
\hline
\hline
$\tilde{e}_L, \tilde{\mu}_L$ & 4165 & 747.9 \\
\hline
$\tilde{e}_R, \tilde{\mu}_R$ & 2133 & 399.8 \\
\hline
$\tilde{\tau}_L$ & 1818 & 319.4 \\
\hline
$\tilde{\tau}_R$ & 4093 & 737.5 \\
\hline
$\tilde{u}_1, \tilde{c}_1$ & 11757 & 1963 \\
\hline
$\tilde{u}_2, \tilde{c}_2$ & 11205 & 1867 \\
\hline
$\tilde{t}_1$ & 10345 & 1593 \\
\hline
$\tilde{t}_2$ & 11061 & 1825 \\
\hline
$\tilde{d}_1, \tilde{s}_1$ & 11784 & 1973 \\
\hline
$\tilde{d}_2, \tilde{s}_2$ & 11144 & 1851 \\
\hline
$\tilde{b}_1$ & 10298 & 1754 \\
\hline
$\tilde{b}_2$ & 11060 & 1822 \\
\hline
$\chi_1^0$ & 60.8 & 270.3 \\
\hline
$\chi_2^0$ & 125.0 & 524.8 \\
\hline
$\chi_3^0$ & 2906 & 949.0 \\
\hline
$\chi_4^0$ & 2929 & 950.3 \\
\hline
$\chi_1^{\pm}$ & 100.7 & 526.5 \\
\hline
$\chi_2^{\pm}$ & 2894 & 945.6 \\
\hline
$h_0$ & 124.8 & 137.6 \\
\hline
$A_0$, $H_0$ & 184.5 & 975.1 \\
\hline
$H^{\pm}$ & 207.4 & 978.6 \\
\hline
$\tilde{g}$ & 414.2 & 1500 \\
\hline
$\tilde{\nu}_{1,2}$ & 4175 & 740.2 \\
\hline
$\tilde{\nu}_3$ & 4095 & 724.4 \\
\hline
\hline
\end{tabular}
\end{center}
\caption{
Sparticle spectra for SUSY breaking models with spontaneously broken
(Model A)
and explicitly broken (Model B) $R$-symmetry. All masses are in GeV.
\label{spectratable}}
\end{table}
\section{Conclusions}
It can be argued that in generic models of low scale supersymmetry breaking (where gravity effects can be neglected) metastability
is inevitable.
In this paper we compared SUSY-breaking patterns generated in two distinct and complementary scenarios
of gauge-mediated supersymmetry breaking. Both scenarios employ an explicit formulation of the Hidden Sector
in terms of an ISS-like gauge theory with a long-lived metastable vacuum. This, in both cases, provides a
simple and calculable model to implement metastable DSB.
The difference between the two approaches
lies in the mechanism of $R$-symmetry breaking. The first one, described in Section {\bf 2},
employs spontaneous $R$-symmetry breaking induced by radiative corrections.
It is based on the direct gauge mediation model introduced in \cite{ADJK}.
We find that $R$-symmetry violating soft terms (such as gaugino masses) tend to be suppressed
with respect to $R$-symmetry preserving ones, leading to a scenario with
large scalar masses. These models effectively interpolate between split SUSY models and standard gauge mediation.
The second approach, outlined in Section {\bf 3} is based on gauge mediation models of Refs.~\cite{MN,AS}
with a messenger
sector that explicitly breaks the $R$-symmetry
of the ISS sector
by operators suppressed by powers of $M_{Pl}$.
We argue that these models lead to phenomenology
broadly similar to
standard gauge mediation, but with an additional constraint that $B=0$ at the mediation scale.
Determining the complete spectrum of superpartner masses at benchmark points (see Table~\ref{spectratable})
we find that apart from high values of $\tan \beta$ (arising from the condition that $B \approx 0$
at the messenger scale in both models) the phenomenology of these models is quite different.
For the model with explicit $R$-symmetry breaking (Benchmark Point B) we find that it follows closely the usual gauge mediation scenario
where gauginos and sfermions have roughly equal masses.
In contrast the model with spontaneous $R$-symmetry breaking typically has sfermions that are considerably heavier than the gauginos
-- resembling a scenario of split SUSY.
Benchmark Point A represents such a model at a region in parameter space where the `split aspects' of supersymmetry are
minimal.\footnote{A more up to date discussion of such models can be found in \cite{AJKM}.}
At the same time it is quite distinct from the usual gauge mediation scenarios, having
relatively heavy scalars and light charginos and neutralinos.
We conclude that details of the dynamics of the Hidden Sector -- the nature of $R$-symmetry and SUSY-breaking --
leave a clear imprint on the phenomenology of the MSSM. Although the
general gauge mediation scenario incorporates both of these
scenarios, there is enough flexibility in GMSB to distinguish them. It
would be interesting to broaden this study to other models with either
spontaneous or explicit $R$-symmetry breaking, to see if the
general pattern outlined here persists.
\section*{Acknowledgements}
We would like to thank Ben Allanach, Steve Martin and Nathan Seiberg for useful comments
and discussions.
CD is supported by an STFC studentship.
\begin{appendix}
\section{The R-symmetry of the baryon-deformed ISS model}
It is known that the $R$-symmetry of the ISS SQCD manifests itself only as an approximate symmetry
of the magnetic formulation which is broken explicitly in the electric theory by the mass terms of electric quarks $m_Q$.
Here we want to quantify this statement and show that the $R$-symmetry breaking in the microscopic theory is controlled
by the small parameter $m_Q /\Lambda = \mu^2 /\Lambda^2 \ll 1$. As such, the intrinsic $R$-breaking effects
and deformations can be neglected. This justifies the approach we follow in section {\bf 2} where the $R$-symmetry
of the magnetic theory is used to constrain the allowed deformations. Consequently, the
$R$-symmetry-preserving baryon deformation in Eq.~\eqref{Wbardef} gives a generic superpotential.
We first consider the massless undeformed SQCD theory, whose global symmetry is
$SU(N_f)_L \times SU(N_f)_R \times U(1)_B \times U(1)_A \times \overline{U(1)}_R$.
Following the well-established conventions \cite{Intriligator:1995au} the $\overline{U(1)}_R$ symmetry is
taken to be anomaly-free, and the axial symmetry $U(1)_A$ is anomalous.
(The $U(1)_R$ symmetry of our section {\bf 2} will be constructed below as an anomalous linear combination of the
$\overline{U(1)}_R$, $U(1)_A$ and $U(1)_B$ above.)
Table~\ref{app1table} lists the charges of matter fields of the electric and the magnetic formulations.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
&
{\small $SU(N_f)_L$}&
{\small $SU(N_f)_R$}&
{\small $U(1)_{B}$}&
{\small $U(1)_{A}$}&
{\small $\overline{U(1)}_{R}$}\tabularnewline
\hline
\hline
$Q$& $ \square$ & {\bf 1} & 1 & 1 & {\small$\frac{N_f-N_c}{N_f}$} \tabularnewline
$\tilde{Q}$& {\bf 1} & $ \bar\square$ & $-1$ & 1 & {\small$\frac{N_f-N_c}{N_f}$} \tabularnewline
\hline
\hline
$\Lambda$& {\bf 1} & {\bf 1} & 0 & {\small$\frac{2N_f}{3N_c-N_f}$} & 0 \tabularnewline
$W$& {\bf 1}& {\bf 1} & 0 & 0 & 2 \tabularnewline
\hline
\hline
$\varphi$& $ \bar\square$ & 1 & {\small$\frac{N_c}{N_f-N_c}$} & {\small$\frac{2N_f-3N_c}{3N_c-N_f}$} & {\small$\frac{N_c}{N_f}$} \tabularnewline
$\tilde{\varphi}$ & 1 & $\square$ & {\small$-\frac{N_c}{N_f-N_c}$} & {\small$\frac{2N_f-3N_c}{3N_c-N_f}$} & {\small$\frac{N_c}{N_f}$} \tabularnewline
$\Phi = \frac{Q\tilde{Q}}{\Lambda}$ & $\square$ & $\bar\square$ & 0 & 2 {\small$-\frac{2N_f}{3N_c-N_f}$}
& {\small$2\frac{N_f-N_c}{N_f}$} \tabularnewline
\hline
\end{tabular}
\end{center}
\caption{Charges under the global $SU(N_f)_L \times SU(N_f)_R \times U(1)_B \times U(1)_A \times \overline{U(1)}_R$
\label{app1table}}
\end{table}
The scale $\Lambda$ is charged only under the $U(1)_A$ which identifies it as the anomalous $U(1)$. In the usual fashion,
the $U(1)_A$-charge of $\Lambda$ in Table~\ref{app1table} is determined from the nonperturbative superpotential, cf. Eq.~\eqref{Wdyn},
\begin{equation}
W_{\rm dyn}\, =\, (N_f-N_c) \left( \frac{{\rm det}_{\scriptscriptstyle {N_{f}}} \tilde{Q}Q}{\Lambda^{3N_c-N_f}}\right)^\frac{1}{N_f-N_c}
\label{Wdyn2}
\end{equation}
Table~\ref{app1table} also shows that the superpotential $W$ is charged only under the $\overline{U(1)}_R$ which
identifies it as the $R$-symmetry (such that $\int d^2 \theta \, W$ is neutral).
Finally, the charges of magnetic quarks $\varphi$, $\tilde{\varphi}$ are derived from the matching between electric and
magnetic baryons, $B_E /\Lambda^{N_c} = B_M /\Lambda^{N_f-N_c}$, $\tilde{B}_E /\Lambda^{N_c} = \tilde{B}_M /\Lambda^{N_f-N_c}$
which implies (schematically)
\begin{equation}
\left(\frac{\varphi}{\Lambda}\right)^{N_f-N_c} = \left(\frac{Q}{\Lambda}\right)^{N_c} \ , \qquad
\left(\frac{\tilde\varphi}{\Lambda}\right)^{N_f-N_c} = \left(\frac{\tilde{Q}}{\Lambda}\right)^{N_c} \ .
\end{equation}
The charges of $\Phi$ are read off its definition, $\Phi = \frac{Q\tilde{Q}}{\Lambda}$.
As a consistency test on these charges, one can easily verify that the magnetic superpotential $W = \varphi \Phi \tilde\varphi$
is automatically neutral under $U(1)_A$, $U(1)_B$ and has the required charge 2 under the $R$-symmetry.
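Explicitly, summing the charges of $\varphi$, $\Phi$ and $\tilde\varphi$ from Table~\ref{app1table} gives
\begin{eqnarray}
U(1)_{A}: && 2\,\frac{2N_f-3N_c}{3N_c-N_f}+2-\frac{2N_f}{3N_c-N_f}\,=\,2+\frac{2N_f-6N_c}{3N_c-N_f}\,=\,0\,,\nonumber \\
U(1)_{B}: && \frac{N_c}{N_f-N_c}-\frac{N_c}{N_f-N_c}+0\,=\,0\,,\nonumber \\
\overline{U(1)}_{R}: && \frac{N_c}{N_f}+\frac{N_c}{N_f}+2\,\frac{N_f-N_c}{N_f}\,=\,2\,.
\end{eqnarray}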
We now introduce mass terms $m_Q \tilde{Q} Q$ in the superpotential of the electric theory. We want to continue
describing the symmetry structure in terms of the parameters of the IR magnetic theory. For this purpose we use
for quark masses $m_Q =\frac{\mu^2}{\Lambda}$.
This mass-deformation breaks the flavour group $SU(N_f)_L \times SU(N_f)_R $ to the diagonal $SU(N_f)$
(if, for example, all quark masses are the same). It also breaks $U(1)_A \times \overline{U(1)}_R$
to a linear combination $U(1)$ subgroup. If, in addition, we introduce the baryon deformation, as in section
{\bf 2}, it breaks the third $U(1)_B$ factor. In total, the combined effect of the two deformations breaks
$U(1)_B \times U(1)_A \times \overline{U(1)}_R$ to a single $U(1)_R$. This is the $R$-symmetry of section {\bf 2}
and it is anomalous since $\Lambda$ is charged under it.\footnote{Note that the two deformations are associated with
orthogonal $U(1)$'s and are therefore independent.}
To explicitly construct this surviving $U(1)_R$ for the model of section {\bf 2},
we set $N_c=5$ and $N_f=7$ and list the three $U(1)$ charges in Table~\ref{app2table}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
&
{\small $U(1)_{B}$}&
{\small $U(1)_{A}$}&
{\small $\overline{U(1)}_{R}$}\tabularnewline
\hline
\hline
$Q$& 1 & 1 & {\small$\frac{2}{7}$} \tabularnewline
$\tilde{Q}$& $-1$ & 1 & {\small$\frac{2}{7}$} \tabularnewline
\hline
\hline
$\Lambda$& 0 & {\small$\frac{7}{4}$} & 0 \tabularnewline
$W$& 0 & 0 & 2 \tabularnewline
\hline
\hline
$\varphi$& {\small$\frac{5}{2}$} &{\small$-\frac{1}{8}$} & {\small$\frac{5}{7}$} \tabularnewline
$\tilde{\varphi}$ & {\small$-\frac{5}{2}$} &{\small$-\frac{1}{8}$} & {\small$\frac{5}{7}$} \tabularnewline
$\Phi$ & 0 &{\small$\frac{1}{4}$} & {\small$\frac{4}{7}$} \tabularnewline
\hline
\end{tabular}
\end{center}
\caption{Charges under $U(1)_B \times U(1)_A \times \overline{U(1)}_R$ for $N_c=5$ and $N_f=7$
\label{app2table}}
\end{table}
It is now clear that the $U(1)_R$ symmetry of section {\bf 2} is the linear combination
of the three $U(1)$'s with the charge
\begin{equation}
R \, =\, \overline{R} + \frac{40}{7} \, A + \frac{2}{5} \, B
\label{Rsymour}
\end{equation}
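As a cross-check, applying \eqref{Rsymour} to the entries of Table~\ref{app2table} reproduces the $U(1)_R$ charges of Table~\ref{app3table}, e.g.
\begin{eqnarray}
R(\varphi) & = & \frac{5}{7}+\frac{40}{7}\left(-\frac{1}{8}\right)+\frac{2}{5}\cdot\frac{5}{2}\,=\,1\,,\nonumber \\
R(\Phi) & = & \frac{4}{7}+\frac{40}{7}\cdot\frac{1}{4}\,=\,2\,,\nonumber \\
R(\Lambda) & = & \frac{40}{7}\cdot\frac{7}{4}\,=\,10\,,\nonumber \\
R(Q) & = & \frac{2}{7}+\frac{40}{7}+\frac{2}{5}\,=\,6+\frac{2}{5}\,.
\end{eqnarray}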
This is the unique unbroken linear combination surviving the mass- plus the baryon-deformation,
$-\mu^2 \Phi + m \varphi^2$, of the magnetic theory
with the charges listed in Table~\ref{app3table}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}
\hline
&
{\small ${U(1)}_{R}$}\tabularnewline
\hline
\hline
$\varphi$& 1 \tabularnewline
$\tilde{\varphi}$ & $-1$ \tabularnewline
$\Phi$ & 2 \tabularnewline
\hline
\hline
$\Lambda$& 10 \tabularnewline
$\mu$ & 0 \tabularnewline
$W$& 2 \tabularnewline
\hline
\hline
$Q$& {\small$6+\frac{2}{5}$} \tabularnewline
$\tilde{Q}$& {\small$6-\frac{2}{5}$} \tabularnewline
\hline
$m_Q=\frac{\mu^2}{\Lambda}$ & $-10$ \tabularnewline
\hline
\end{tabular}
\end{center}
\caption{Charges under ${U(1)}_R$ for $N_c=5$ and $N_f=7$
\label{app3table}}
\end{table}
In the magnetic Seiberg-dual formulation, the $U(1)_R$ symmetry is manifest. It is the symmetry of the
perturbative superpotential \eqref{Wbardef} which is only broken anomalously.
In the electric formulation
the $U(1)_R$ symmetry is broken by the mass terms $m_Q \, \tilde{Q}Q$ on account of the explicit $\Lambda$-dependence of the masses
$m_Q =\frac{\mu^2}{\Lambda}$. It is also broken by the baryon deformation (again in the electric theory language)
$\frac{1}{M_{Pl}^2} \, Q^5$ because the magnetic baryon deformation parameter $m$ in \eqref{mbardef}
explicitly depends on $\Lambda$.
Thus the apparent $U(1)_R$ symmetry of the IR theory is only approximate, and
is lifted in the UV theory. However, the $R$-symmetry is broken in a controlled way, by the parameter of the order
of $m_Q / \Lambda$.
To verify this note that in the limit $m_Q \to 0$, the electric quark masses disappear while the baryon deformation
$\frac{1}{M_{Pl}^2} \, Q^5$ is invariant under the $R$-symmetry $U(1)_{R'}$:
\begin{equation}
R' \, =\, \overline{R} + \frac{5}{7} \, A - \frac{3}{5} \, B
\label{Rsymnew}
\end{equation}
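With the charges of Table~\ref{app2table} this combination assigns
\begin{equation}
R'(Q)\,=\,\frac{2}{7}+\frac{5}{7}-\frac{3}{5}\,=\,\frac{2}{5}\,,\qquad
R'(\tilde{Q})\,=\,\frac{2}{7}+\frac{5}{7}+\frac{3}{5}\,=\,\frac{8}{5}\,,
\end{equation}
so that the baryon $Q^{5}$ carries $R'$-charge $2$, as required of a superpotential term, while the anti-baryon $\tilde{Q}^{5}$ carries charge $8$ and is forbidden.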
This is a linear combination different from \eqref{Rsymour}, but in the massless limit we are considering it is a
perfectly valid classically conserved $R$-symmetry which protects the baryon deformation in the electric theory and forbids
e.g.\ anti-baryon deformations $\frac{1}{M_{Pl}^2} \, \tilde{Q}^5$. Thus in the massless limit there is always an $R$-symmetry
which protects baryon deformations either in the electric or in the magnetic formulation. When quark masses are non-vanishing,
this $R$-symmetry is broken by $m_Q / \Lambda$.
Indeed, if one formally sends $\Lambda \to \infty$ holding $\mu$ and $m$ fixed,
the dynamical non-perturbative superpotential disappears
and the exact $U(1)_R$ is recovered.
In general, anomalous global symmetries do not match in the magnetic and the electric descriptions.
The $U(1)_R$ is an approximate symmetry and in principle one should allow generic $U(1)_R$-violating
deformations. For example, one can add an antibaryon $\tilde{B}$ deformation to the superpotential \eqref{Wbardef}.
However, these deformations are suppressed relative to the $U(1)_R$-preserving ones
by the small parameter $m_Q /\Lambda = \mu^2 /\Lambda^2 \ll 1$, and can therefore be neglected.
\end{appendix}
\section{Introduction}
The past two decades have seen tremendous progress in the description
of QCD with functional approaches such as the
functional renormalisation group (FRG), Dyson-Schwinger equations
(DSE), and n-particle irreducible methods ($n$PI). These approaches
constitute \emph{ab initio} descriptions of QCD in terms of quark and
gluon correlation functions. The full correlation functions satisfy a
hierarchy of loop equations that are derived from the functional FRG,
DSE and $n$PI relations for the respective generating functionals. By
now, systematic computational schemes are available, which can be
controlled by apparent convergence. In the present work on pure
Yang-Mills (YM) theory we complement the work in quenched QCD
\cite{Mitter:2014wpa}, where such a systematic expansion scheme has
been put forward within the FRG. Equipped with such a controlled
expansion, functional approaches to QCD are specifically interesting
at finite temperatures and large density, where reliable \emph{ab
initio} theoretical predictions and experimental results are missing
at present.
Most progress with functional approaches has been made in Landau gauge
QCD, which has many convenient properties for non-perturbative
numerical computations. Applications of functional methods include the
first-ever calculation of qualitative non-perturbative Landau gauge
propagators as well as investigations of the phase structure of
QCD. For reviews see
\cite{Berges:2000ew,Roberts:2000aa,Alkofer:2000wg,Pawlowski:2005xe,
Fischer:2006ub,Gies:2006wv,Schaefer:2006sr,Fischer:2008uz,Binosi:2009qm,%
Braun:2011pp,Maas:2011se,Sanchis-Alepuz:2015tha},
for applications to Yang-Mills theory see e.g.\
\cite{Ellwanger:1995qf,vonSmekal:1997ohs,Bergerhoff:1997cv,Gies:2002af,%
Pawlowski:2003hq,Fischer:2003rp,Fischer:2004uk,Aguilar:2008xm,%
Boucaud:2008ky,Tissier:2010ts,Quandt:2013wna,Quandt:2015aaa,Huber:2016tvc,Quandt:2016ykm},
and e.g.\ \cite{Feuchter:2004mk} for related studies in the Coulomb
gauge. The formal, algebraic, and numerical progress of the past
decades sets the stage for a systematic vertex expansion scheme of
Landau gauge QCD. Quantitative reliability is then obtained with
apparent convergence \cite{Mitter:2014wpa} as well as further
systematic error controls inherent to the method, see e.g.\
\cite{Litim:2000ci,Litim:2001up,Pawlowski:2005xe,Schnoerr:2013bk,Pawlowski:2015mlf}. In the aforementioned quenched QCD
investigation \cite{Mitter:2014wpa}, the gluon propagator was taken
from a separate FRG calculation in \cite{Fischer:2008uz,FPun}. This
gluon propagator shows quantitative agreement with the lattice
results, but has been obtained within an incomplete vertex expansion
scheme. Therefore, the results \cite{Fischer:2008uz,FPun} for the YM
correlation functions give no access to systematic error estimates.
In general, many applications of functional methods to bound states
and the QCD phase diagram use such mixed approaches, where part of the
correlation functions is deduced from phenomenological constraints or
other external input. Despite the huge success of mixed approaches, a
full \emph{ab initio} method is wanted for some of the most pressing
open questions of strongly-interacting matter. The phase structure of
QCD at large density is dominated by fluctuations and even a partial
phenomenological parameter fixing at vanishing density is bound to
lead to large systematic errors \cite{Helmboldt:2014iya}. The same
applies to the details of the hadron spectrum, in particular with
regard to the physics of the higher resonances, which requires
knowledge about correlation functions deep in the complex plane.
In the present work we perform a systematic vertex expansion of the
effective action of Landau gauge YM theory within the functional
renormalisation group approach, discussed in \Sec{sec:setup}. The
current approximation is summarised in \Sec{sec:expsch}, which also
includes a comparison to approximations used in other works. This
\emph{ab initio} approach starts from the classical action. Therefore
the only parameter is the strong coupling constant $\alpha_s$ at a
large, perturbative momentum scale. The most distinct feature of YM
theory is confinement, which is reflected by the creation of a gluon
mass gap in Landau gauge. We discuss the necessity of consistent
infrared irregularities as well as mechanisms for the generation of a
mass gap in \Sec{sec:massgap}. Numerical results from a
parameter-free, self-consistent calculation of propagators and
vertices are presented in \Sec{sec:mainresult}. Particular focus is
put on the importance of an accurate renormalisation of the relevant
vertices. We compare with corresponding DSE and lattice results and
discuss the apparent convergence of the vertex expansion. Finally, we
present numerical evidence for the dynamic mass gap generation in our
calculation. Further details, including a thorough discussion of the
necessary irregularities, can be found in the appendices.
\section{FRG flows for Yang-Mills theory in a vertex expansion}
\label{sec:setup}
Functional approaches to QCD and Yang-Mills theory are based on the
classical gauge fixed action of $SU(3)$ Yang-Mills theory. In general
covariant gauges in four dimensions it is given by
\begin{align}
S_{\rm cl}=\int_x\,\014 F_{\mu\nu}^a F_{\mu\nu}^a +\frac{1}{2\xi}\int_x\,(
\partial_\mu A^a_\mu)^2 -\int_x\,\bar c^a \partial_\mu D^{ab}_\mu
c^b\,.
\label{eq:sclassical}
\end{align}
Here, $\xi$ denotes the gauge fixing
parameter, which is taken to zero in the Landau gauge, and $\int_x=\int \text{d}^4 x$.
The field
strength tensor and covariant derivative are given by
\begin{eqnarray}
F^a_{\mu\nu} &=& \partial_\mu A^a_\nu-\partial_\nu A^a_\mu+
g f^{abc}A_\mu^b A_\nu^c\, ,\nonumber \\[2ex]
D^{ab}_{\mu} & = &\delta^{ab}\partial_\mu-g f^{abc} A^c_\mu\,,
\end{eqnarray}
using the fundamental generators $T^a$, defined by
\begin{eqnarray}
\left[T^a,T^b\right] & =& if^{abc}T^c\,,\qquad
\text{tr}\left(T^aT^b\right) = \frac{1}{2}\delta^{ab}\,.
\end{eqnarray}
In general, our notation follows the one used in the works
\cite{Mitter:2014wpa,Braun:2014ata,Rennecke:2015eba} of the fQCD
collaboration \cite{fQCD}.
\subsection{Functional Renormalisation Group}
We use the functional renormalisation group approach as a
non-perturbative tool to investigate Yang-Mills theory. The FRG is
built on a flow equation for the one-particle irreducible ($1$PI)
effective action or free energy of the theory, the Wetterich equation
\cite{Wetterich:1992yh}. It is based on Wilson's idea of introducing
an infrared momentum cutoff scale $k$. Here, this infrared
regularisation of the gluon and ghost fluctuations is achieved by
modifying the action $S_{\rm cl} \to S_{\rm cl}+\Delta S_k$ with
\begin{align}
\label{eq:dSk}
\Delta S_k=\int_x\,
\012 A_\mu^a\, R_{k,\mu\nu}^{ab} \, A_\nu^b
+ \int_x\,\bar c^a\, R^{ab}_k\, c^b\,.
\end{align}
The regulator functions $R_k$ are momentum-dependent masses that
suppress the corresponding fluctuations below momentum scales
$p^2\approx k^2$ and vanish in the ultraviolet for momenta $p^2\gg
k^2$. See \App{app:regulator} for details on the regulators used
in the present work.
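Purely for illustration (and not necessarily the choice made in \App{app:regulator}), a standard example is the exponential shape function
\begin{align}
R_{k}(p)\,=\,p^{2}\, r\!\left(p^{2}/k^{2}\right)\,,\qquad r(x)\,=\,\frac{1}{e^{x}-1}\,,
\end{align}
for which $R_{k}(p)\to k^{2}$ acts as an effective mass for $p^{2}\ll k^{2}$ and vanishes exponentially for $p^{2}\gg k^{2}$.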
Consequently, the effective action, $\Gamma_k[\phi]\,$, is
infrared regularised, where $\phi$ denotes the superfield
\begin{align}
\phi=(A, c,\bar c)\,.
\end{align}
The fluctuations of the theory are then successively taken into
account by integrating the flow equation for the effective action, see
e.g.\ \cite{Fister:2011uw,Fister:2013bh},
\begin{align}
{\partial_t}\Gamma_k[\phi] = \int_p \,\012 \ G_{k,ab}^{\mu\nu}[\phi]
\,{\partial_t}R_{k,\mu\nu}^{ba} -\int_p
\,G^{ab}_k[\phi]\,{\partial_t}R^{ba}_k\,,
\label{eq:flow}
\end{align}
where $\int_p=\int \text{d}^4p/(2 \pi)^4\,$. Here we have introduced the
RG-time $t=\ln (k/\Lambda)$ with a reference scale $\Lambda\,$, which is
typically chosen as the initial UV cutoff. Although this
flow equation comes in a simple one-loop form, it provides an exact
relation due to the presence of the full field-dependent propagator,
\begin{align}
G_k[\phi](p,q)= \01{\Gamma_k^{(2)}[\phi]+R_k }(p,q) \,,
\end{align}
on its right-hand side. Furthermore, the flow is infrared and
ultraviolet finite by construction. Via the integration of momentum
shells in the Wilsonian sense, it connects the ultraviolet, bare
action $S_{\rm cl}=\Gamma_{k\rightarrow\Lambda\rightarrow\infty}$ with
the full quantum effective action $\Gamma=\Gamma_{k\rightarrow0}\,$.
The flow equations for propagators and vertices are obtained by taking
functional derivatives of \eq{eq:flow}. At the vacuum expectation
values, these derivatives give equations for the $1$PI correlation
functions $\Gamma_k^{(n)}=\delta^n\Gamma_k/\delta\phi^n\,$, which
inherit the one-loop structure of \eq{eq:flow}. As the
cutoff-derivative of the regulator functions, $\partial_t R_k$, decays
sufficiently fast for large momenta, the momentum integration in
\eq{eq:flow} effectively receives only contributions for momenta
$p^2\lesssim k^2\,$. Furthermore, the flow depends solely on dressed
vertices and propagators, leading to a consistent RG and momentum
scaling for each diagram resulting from derivatives of \eq{eq:flow}.
Despite its simple structure, the resulting system of equations does
not close at a finite number of correlation functions. In general,
higher derivatives up to the order $\Gamma_k^{(n+2)}$ of the effective
action appear on the right hand side of the functional relations for
the correlation functions $\Gamma_k^{(n)}\,$.
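Schematically, for the two-point function ($n=2$) and suppressing all indices, momenta and the relative signs of ghost loops, one finds
\begin{align}
\partial_{t}\Gamma_{k}^{(2)}\,=\,
{\rm Tr}\Big[\partial_{t}R_{k}\,G_{k}\,\Gamma_{k}^{(3)}\,G_{k}\,\Gamma_{k}^{(3)}\,G_{k}\Big]
\,-\,\frac{1}{2}\,{\rm Tr}\Big[\partial_{t}R_{k}\,G_{k}\,\Gamma_{k}^{(4)}\,G_{k}\Big]\,,
\end{align}
which makes explicit that the three- and four-point functions enter the flow of the propagators, cf.\ \Fig{fig:diagrams}.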
\subsection{Vertex expansion of the effective action}
\label{sec:expsch}
\begin{figure}
\centering
\includegraphics[width=0.48\textwidth]{ym_quantitative_truncation_class}
\caption{ Approximation for the effective action. Only
classical tensor structures are included. See
\Fig{fig:diagrams} for diagrams that contribute to the
individual propagators and vertices. }
\label{fig:eff_action}
\end{figure}
\begin{figure}
\hspace*{-0.1cm}\includegraphics[width=0.49\textwidth]{ym_flow_equations}
\caption{Diagrams that contribute to the truncated flow of
propagators and vertices. Wiggly (dotted) lines correspond
to dressed gluon (ghost) propagators, filled circles denote
dressed ($1$PI) vertices and regulator insertions are
represented by crossed circles. Distinct permutations
include not only (anti-)symmetric permutations of external
legs but also permutations of the regulator insertions. }
\label{fig:diagrams}
\end{figure}
The structural form of the functional equations discussed in the
previous section necessitates the use of approximations in most
practical applications. One systematic expansion scheme is the vertex
expansion, i.e.\ an expansion of the effective action in terms of
$1$PI Green's functions. This yields an infinite tower of coupled
equations for the correlation functions that has to be truncated at a
finite order. This expansion scheme allows for a systematic error
estimate in terms of apparent convergence: the expansion order is
increased, or further approximations are improved, for example in the
momentum resolution or the tensor structures of the included
correlation functions, and the stability of the results is monitored.
We discuss the convergence of the vertex expansion in
\Sec{sec:truncationcheck}.
Here we calculate the effective action of $SU(3)$ Yang-Mills theory in
Landau gauge within a vertex expansion, see \Fig{fig:eff_action} for a
pictorial representation. The diagrams contributing to the resulting
equations of the constituents of our vertex expansion are summarised
graphically in \Fig{fig:diagrams}. The lowest order contributions
in this expansion are the inverse gluon and ghost propagators
parameterised via
\begin{align}
[\Gamma^{(2)}_{AA}]^{ab}_{\mu\nu}(p) &= Z_A(p)\, p^2\, \delta^{ab}\,
\Pi^{\bot}_{\mu\nu}(p) +\0{1}{\xi} \delta^{ab} p_\mu p_\nu\ ,
\nonumber\\[2ex]
[\Gamma^{(2)}_{\bar c c}]^{ab}(p) &= Z_c(p)\, p^2\, \delta^{ab} \,,
\label{eq:propagators}
\end{align}
with dimensionless scalar dressing functions $1/Z_A$ and $1/Z_c$. Here,
$\Pi^{\bot}_{\mu\nu}(p)=\delta_{\mu\nu}-p_\mu p_\nu/p^2$ denotes the
corresponding projection operator.
We also use this splitting into tensor structures with canonical
momentum dimension and dimensionless dressings for the higher-order
vertices.
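For reference, the transverse projector used in this splitting satisfies the standard identities
\begin{align}
p_\mu\, \Pi^{\bot}_{\mu\nu}(p) = p_\nu - \frac{p^2\, p_\nu}{p^2} = 0\,,
\qquad
\Pi^{\bot}_{\mu\rho}(p)\, \Pi^{\bot}_{\rho\nu}(p) =
\Pi^{\bot}_{\mu\nu}(p)\,,\nonumber
\end{align}
i.e.\ it is idempotent and annihilates the longitudinal momentum direction; all transverse dressings below are defined with respect to this projection.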
On the three-point level we include the full transverse ghost-gluon
vertex and the classical tensor structure of the three-gluon vertex
\begin{align}
[\Gamma^{(3)}_{A \bar c c}]^{abc}_\mu (p,q) &= Z_{A \bar c c,\bot}(|p|,|q|,t)
[\mathcal{T}_{A \bar c c,{\rm cl}}]^{abc}_{\mu}(p,q)\,,\nonumber\\[2ex]
[\Gamma^{(3)}_{A^3}]^{abc}_{\mu\nu\rho} (p,q) &= Z_{A^3,\bot}(|p|,|q|,t)
[\mathcal{T}_{A^3,{\rm cl}}]^{abc}_{\mu\nu\rho}(p,q)\,.
\label{eq:threepoint}
\end{align}
Here, the momentum $p$ $(q)$ corresponds to the indices $a$ $(b)$ and
$t$ denotes the cosine of the angle between the momenta $p$ and $q\,$.
The classical tensor structure of the vertices has been summarised as
$\mathcal{T}_{A^3,{\rm cl}}$ and $\mathcal{T}_{A \bar c c,{\rm cl}}\,$,
which are listed explicitly in \App{app:tensorstructures}. In the
case of the transversally projected ghost-gluon vertex,
$\mathcal{T}_{A \bar c c,{\rm cl}}$ already represents a full basis,
whereas a full basis for the transversally projected three-gluon
vertex consists of four elements. However, the effect of non-classical
tensor structures has been found to be subleading in this case
\cite{Eichmann:2014xya}.
The most important four-point function is given by the four-gluon
vertex, which appears already on the classical level. Similarly to the
three-gluon vertex, we approximate it with its classical tensor
structure
\begin{align}
[\Gamma_{A^4}^{(4)}]^{abcd}_{\mu\nu\rho\sigma}(p,q,r) &=
Z_{A^4,\bot}(\bar{p})
[\mathcal{T}_{A^4,{\rm cl}}]^{abcd}_{\mu\nu\rho\sigma}(p,q,r)\,,
\label{eq:fourgluon}
\end{align}
see \App{app:tensorstructures} for details. The dressing function of
the four-gluon vertex is approximated from its momentum dependence at
the symmetric point via the average momentum $\bar{p}\equiv
\sqrt{p^2+q^2+r^2+(p+q+r)^2}/2\,$, which has been shown to be a good
approximation of the full momentum dependence
\cite{Cyrol:2014kca,Cyrol:2014mt}. To improve this approximation
further, we additionally calculate the momentum dependence of the
four-gluon dressing function $Z_{A^4,\bot}(|p|,|q|,t)$ on the special
configuration $(p,q,r)=(p,q,-p)\,$. We use this special configuration
exclusively in the tadpole diagram of the gluon propagator equation,
cf.\ \Sec{sec:truncationcheck}. We show the difference between the
special configuration and the symmetric point approximation in the
appendix in \Fig{fig:fourGluonTadpoleDressing}. Although the
four-gluon vertex has been the subject of several studies
\cite{Kellermann:2008iw,Binosi:2014kka,Gracey:2014ola,Cyrol:2014kca,Cyrol:2014mt},
no fully conclusive statements about the importance of additional
non-classical tensor structures are available.
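As a side remark, at the symmetric point, where $p^2=q^2=r^2=(p+q+r)^2$, the average momentum reduces to
\begin{align}
\bar{p} = \frac{1}{2}\sqrt{p^2+q^2+r^2+(p+q+r)^2} = \sqrt{p^2}\,,\nonumber
\end{align}
such that the symmetric-point dressing $Z_{A^4,\bot}(\bar{p})$ depends on a single momentum modulus only.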
In summary we have taken into account the propagators and the fully
momentum-dependent classical tensor structures of the three-point
functions, as well as selected momentum-configurations of the gluon
four-point function, see the paragraph above, and
\App{app:tensorstructures}. For a comparison of the current approximation with that
used in other functional works, one has to keep in mind that FRG,
Dyson-Schwinger or $n$PI equations implement different resummation
schemes. Thus, even on an identical approximation level of a
systematic vertex expansion, the included resummations differ.
In the present work we solve the coupled system of all
momentum-dependent classical vertex structures and propagators. In former
works with functional methods, see e.g.\
\cite{Ellwanger:1995qf,vonSmekal:1997ohs,Bergerhoff:1997cv,Gies:2002af,%
Pawlowski:2003hq,Fischer:2003rp,Fischer:2004uk,Kellermann:2008iw,Aguilar:2008xm,%
Boucaud:2008ky,Tissier:2010ts,Huber:2012kd,Aguilar:2013xqa,Pelaez:2013cpa,Blum:2014gna,Eichmann:2014xya,%
Gracey:2014mpa,Gracey:2014ola,Huber:2014isa,Williams:2015cvx,Binosi:2014kka,Cyrol:2014kca,Cyrol:2014mt},
only subsets of these correlation functions have
been coupled back. A notable exception is \cite{Huber:2016tvc}, where
a similar self-consistent approximation has been used for three-dimensional
Yang-Mills theory.
\subsection{Modified Slavnov-Taylor identities and transversality in
Landau gauge}
\label{sec:mSTIandVert_sub}
In Landau gauge, the dynamical system of correlation functions
consists only of the transversally projected correlators
\cite{Fischer:2008uz}. Those with at least one longitudinal gluon leg
do not feed back into the dynamics. To make these statements precise,
it is useful to split correlation functions into purely transverse
components and their complement with at least one longitudinal gluon
leg. The purely transverse vertices $\Gamma^{(n)}_{\bot}$ are defined
by attaching transverse projection operators to the corresponding
gluon legs,
\begin{align}
\label{eq:purely_transverse}
&\left[\Gamma^{(n)}_{\bot}\right]_{\mu_1\cdots\mu_{n_A}} \equiv
\Pi^{\bot}_{\mu_1\nu_1} \cdots \Pi^{\bot}_{\mu_{n_A}\nu_{n_A}}
\left[\Gamma^{(n)}\right]_{\nu_1\cdots\nu_{n_A}}\,,
\end{align}
where $n_A$ is the number of gluon legs; group indices and momentum
arguments have been suppressed for the sake of brevity. This defines
a unique decomposition of $n$-point functions into
\begin{align}
\label{eq:longitudinal}
\Gamma^{(n)}
=\Gamma^{(n)}_{\bot}+\Gamma^{(n)}_{\text{L}}\,,
\end{align}
where the longitudinal vertices $\Gamma^{(n)}_{\text{L}}$ have at
least one longitudinal gluon leg. Consequently, they are always
projected to zero by the purely transverse projection operators of
\eq{eq:purely_transverse}.
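The decomposition \eq{eq:longitudinal} is orthogonal by construction: each term of $\Gamma^{(n)}_{\text{L}}$ carries at least one longitudinal projector $\Pi^{\|}_{\mu\nu}(p)=p_\mu p_\nu/p^2$ on a gluon leg, and
\begin{align}
\Pi^{\bot}_{\mu\rho}(p)\,\Pi^{\|}_{\rho\nu}(p)=0\,,\nonumber
\end{align}
so the transverse projections in \eq{eq:purely_transverse} annihilate all longitudinal contributions.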
Functional equations for the transverse correlation functions close in
the Landau gauge, leading to the structure \cite{Fischer:2008uz},
\begin{align}\label{eq:closedFun}
\Gamma^{(n)}_{\bot}
={\rm Diag}[\{\Gamma^{(n)}_{\bot}\}]\,.
\end{align}
In \eq{eq:closedFun} Diag stands for diagrammatic expressions of
either integrated FRG, Dyson-Schwinger or $n$PI equations. Equation
\eq{eq:closedFun} follows from the fact that all internal legs are
transversally projected by the Landau gauge gluon propagator. Hence,
by using transverse projections for the external legs one obtains
\eq{eq:closedFun}. In contradistinction to this, the functional
equations for the vertices with at least one longitudinal gluon leg,
$\Gamma_{\rm L}^{(n)}$, are of the form
\begin{align}
\label{eq:closedFunL}
\Gamma^{(n)}_{\rm L}
={\rm Diag}[\{\Gamma^{(n)}_{\rm L}\}, \{\Gamma^{(n)}_{\bot}\}]\,.
\end{align}
In other words, the solution of the functional equations
\eq{eq:closedFunL} for $\Gamma^{(n)}_{\rm L}$ requires also the
solution of the transverse set of equations \eq{eq:closedFun}.
In the present setting, gauge invariance is encoded in modified
Slavnov-Taylor identities (mSTIs) and Ward-Takahashi identities
(mWTIs). They are derived from the standard Slavnov-Taylor identities
(STIs) by including the gauge or BRST variations of the regulator
terms, see
\cite{Ellwanger:1994iz,Ellwanger:1995qf,D'Attanasio:1996jd,Igarashi:2001mf,
Pawlowski:2005xe,Igarashi:2016gcf} for details. The mSTIs are of the
schematic form
\begin{align}
\label{eq:longFun}
\Gamma_{\rm L}^{(n)}&={\rm mSTI} [\{\Gamma_{\rm L}^{(n)}\}\,,\,
\{\Gamma_{\bot}^{(n)}\}\,,\,R_k]\,,
\end{align}
which reduce to the standard STIs in the limit of vanishing regulator,
$R_k\equiv 0$. The STIs and mSTIs have a structure similar to that of
\eq{eq:closedFunL} and can be used to obtain information about the
longitudinal part of the correlators. Alternatively, they provide a
non-trivial consistency check for approximate solutions of
\eq{eq:closedFunL}.
\subsubsection*{Consequences of the STIs \& mSTIs}
\label{sec:STImSTI}
For the purposes of this work, the most important effect of the
modification of the STIs due to the regulator term is that it leads to
a non-vanishing gluon mass parameter \cite{Ellwanger:1994iz},
\begin{align}
\Delta_{\rm mSTI} \left[
\Gamma_{AA}^{(2)}\right]_{\mu\nu}^{ab}\varpropto \delta^{ab}\,
\delta_{\mu\nu}\, \alpha(k)\, k^2\,.
\label{eq:mSTI_mass}
\end{align}
At $k=0$, where the regulators vanish, this modification disappears,
as the mSTIs reduce to the standard STIs. In particular, this entails
that, at $k=0$, the inverse longitudinal gluon propagator,
$\Gamma_{AA,\rm L}^{(2)}$, reduces to the classical one, solely
determined by the gauge fixing term
\begin{align}
\label{eq:LSTI}
p_\mu \left( [{\Gamma_{AA,\rm L}^{(2)}}]_{\mu\nu}^{ab}(p) - [{S_{AA,\rm
L}^{(2)}}]_{\mu\nu}^{ab}(p)\right) = 0\,.
\end{align}
This provides a unique condition for
determining the value of the gluon mass parameter \eq{eq:mSTI_mass}
at the ultraviolet initial scale $\Lambda$. However, it can
only serve its purpose if the longitudinal system
is additionally solved.
One further conclusion from \eq{eq:longFun} is that the mSTIs do not
constrain the transverse correlation functions without further
input. This fact is not in tension with one of the main applications
of STIs in perturbation theory, i.e.\ relating the running of the
relevant vertices of Yang-Mills theory that require renormalisation.
As Yang-Mills theory is renormalisable, only the classical vertex
structures are renormalised and hence the renormalisation functions of
their transverse and longitudinal parts have to be identical.
As an instructive example we consider the ghost-gluon vertex. For this
example and the following discussions we evaluate the STIs within the
approximation used in the present work: on the right hand side of the
STIs we only consider contributions from the primitively divergent
vertices. In particular, this excludes contributions from the
two-ghost--two-gluon vertex. The ghost-gluon vertex can be
parameterised with two tensor structures,
\begin{align}
[\Gamma^{(3)}_{A\bar c c}]_{\mu}^{abc}(p,q) = \text{i} f^{abc}\Bigl[
q_\mu Z_{A \bar cc,\rm cl}(p,q) + p_\mu Z_{A\bar c c,\rm ncl}
(p,q)\Bigr]\,.
\label{eq:classquantsplit}
\end{align}
In \eq{eq:classquantsplit} we have introduced two dressing functions
$Z_{A \bar cc, \rm cl}$ and $Z_{A \bar cc,\rm ncl}$ as functions of
the gluon momentum $p$ and anti-ghost momentum $q\,$. In a general
covariant gauge only $Z_{A \bar cc,\rm cl}$ requires
renormalisation. Similar splittings into a classical tensor structure
and the rest can be used in other vertices. Trivially, this property
relates the perturbative RG-running of the transverse and longitudinal
projections of the classical tensor structures. Then, the STIs can be
used to determine the perturbative RG-running of the classical tensor
structures, leading to the well-known perturbative relations
\begin{equation}
\frac{Z_{A\bar c c, \rm cl}^2}{Z_c^2 Z_A}=
\frac{Z^2_{A^3,\rm cl}}{Z_A^3}=\frac{Z_{A^4,\rm cl}}{Z_A^2}\,,
\label{eq:RGrel}
\end{equation}
at the renormalisation scale $\mu$. Consequently, \eq{eq:RGrel} allows
for the definition of a unique renormalised two-loop coupling
$\alpha_s(\mu)$ from the vertices.
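The relations \eq{eq:RGrel} can be made explicit by expressing the vertex renormalisation functions in terms of a single coupling renormalisation $Z_g$, a standard parameterisation that we use here only for illustration,
\begin{align}
Z_{A\bar c c,\rm cl} = Z_g\, Z_A^{1/2}\, Z_c\,,\qquad
Z_{A^3,\rm cl} = Z_g\, Z_A^{3/2}\,,\qquad
Z_{A^4,\rm cl} = Z_g^2\, Z_A^{2}\,,\nonumber
\end{align}
for which each ratio in \eq{eq:RGrel} equals $Z_g^2$, and hence defines the same renormalised coupling $\alpha_s(\mu)$.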
The momentum dependent STIs can also be used to make the relation
\eq{eq:RGrel} momentum-dependent. Keeping only the classical tensor
structures, we are led to the momentum dependent running couplings
\begin{align}
\alpha_{A\bar c c}(p) &= \0{1}{4 \pi}\,\frac{Z_{A\bar cc,\bot }^2(p)}
{ Z_A(p)\,Z_c^2(p)}\,,\nonumber\\[2ex]
\alpha_{A^3}(p) &= \0{1}{4 \pi} \,\frac{Z_{A^3,\bot }^2(p)}
{Z_A^3(p)}\ ,\nonumber\\[2ex]
\alpha_{A^4}(p) &= \0{1}{4 \pi}\, \frac{Z_{A^4,\bot }(p)}{
Z_A^2(p)}\,,
\label{eq:runcoup}
\end{align}
where the used transverse projection is indicated by the subscript
$\bot$, for details see \App{app:tensorstructures}. Additionally, the
vertices appearing in \eq{eq:runcoup} are evaluated at the symmetric
point, see \Sec{sec:truncationcheck} for the precise definition. The
STIs and two-loop universality demand that these running couplings
become degenerate at large perturbative momentum scales, where the
longitudinal and transverse parts of the vertices agree.
In Landau gauge, the ghost-gluon vertex is not renormalised on
specific momentum configurations, and we can alternatively define a
running coupling from the wave function renormalisation of ghost and
gluon \cite{vonSmekal:1997ohs,vonSmekal:2009ae},
\begin{align}
\label{eq:propcoupling}
\alpha_s(p)= \0{1}{4\pi} \,\0{g^2 }{Z_A(p) Z_c^2(p)}\,.
\end{align}
Note that the momentum-dependence of the running coupling
\eq{eq:propcoupling} does not coincide with that of the corresponding
running couplings obtained from other vertices, i.e.\ \eq{eq:runcoup}.
This is best seen in the ratio $\alpha_{A\bar c c}(p)/\alpha_s(p)=
Z_{A\bar c c,\bot}^{2}(p)/g^2\,$. In this context we also report on an important
result for the quark-gluon vertex coupling,
\begin{align}
\label{eq:quarkgluon}
\alpha_{A\bar q q}(p)=\0{1}{4\pi} \,\0{Z_{A\bar q q,\bot}(p)^2 }{Z_A(p)
Z_q(p)^2}\,,
\end{align}
with the dressing function of the classical tensor structure of the
quark-gluon vertex $Z_{A\bar q q,\bot}(p)\,$
and the quark dressing function $1/Z_q(p)$~\cite{Mitter:2014wpa}. The solution of the
corresponding STI reveals that the quark-gluon vertex coupling
$\alpha_{A\bar q q}$ agrees perturbatively with $\alpha_s(p)$ in
\eq{eq:propcoupling}, and hence it differs from the other vertex
couplings in \eq{eq:runcoup}. Note that the present truncation only
considers contributions from primitively divergent
vertices. Accordingly, the two-quark--two-ghost vertex contribution in
the STI for the quark-gluon vertex, see e.g.\ \cite{Alkofer:2000wg}, has
been dropped.
\section{Confinement, gluon mass gap, \newline and irregularities}
\label{sec:massgap}
It has been shown in
\cite{Braun:2007bx,Marhauser:2008fz,Braun:2009gm,Braun:2010cy,Fister:2013bh} that a
mass gap in the gluon propagator signals confinement in QCD in
covariant gauges. Furthermore, in Yang-Mills theory formulated in
covariant gauges, the gapping of the gluon relative to the ghost is
necessary and sufficient for producing a confining potential for the
corresponding order parameter, the Polyakov loop. Hence, understanding
the details of the dynamical generation of a gluon mass gap gives
insight into the confinement mechanism.
This relation holds for all potential infrared closures of the
perturbative Landau gauge. The standard infrared closure
corresponds to a full average over all Gribov regions. This leads to
the standard Zinn-Justin equation as used in the literature, e.g.\
\cite{Alkofer:2000wg}. In turn, the restriction to the first Gribov
regime can be implemented within the refined Gribov-Zwanziger
formalism, e.g.\
\cite{Dudal:2007cw,Dudal:2008sp,Dudal:2009xh,Dudal:2011gd,Capri:2015ixa},
which leads to infrared modifications of the STIs. In the following we
discuss the consequences of the standard STIs; a discussion of the
refined Gribov-Zwanziger formalism is deferred to future work.
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{gluon_prop_dressing_ind_sca}
\hfill
\includegraphics[width=0.489\textwidth]{ghost_prop_dressing_ind_sca}
\caption{Gluon dressing $1/Z_A$ (left) and ghost dressing $1/Z_c$
(right) in comparison to the lattice results
from \cite{Sternbeck:2006cg}. The scale setting and normalisation procedures are described in \App{app:rescaling}.}
\label{fig:main_result}
\end{figure*}
\subsection{Gluon mass gap and irregularities}
\label{sec:gluonmassirregularities}
In order to study the dynamical generation of the mass gap, we first
discuss the consequences of the STI for the longitudinal gluon
two-point function, \eq{eq:LSTI}. It states that no quantum fluctuations
contribute to the inverse longitudinal gluon propagator, i.e.\ the
longitudinal gluon propagator is defined by the gauge fixing
term. Therefore, the dynamical creation of a gluon mass gap requires
different diagrammatic contributions to the longitudinal and
transverse gluon mass parameter. The discussion of the prerequisites
for meeting this condition is qualitatively different for the scaling
and the decoupling solutions. Hence, these two cases are discussed
separately.
The scaling solution is characterised by the infrared behaviour
\cite{vonSmekal:1997ohs,Zwanziger:2001kw,Lerche:2002ep,Fischer:2002eq,%
Pawlowski:2003hq,Alkofer:2004it,Fischer:2006vf,Alkofer:2008jy,Fischer:2009tn}
\begin{align}\nonumber
\lim\limits_{p\rightarrow 0}Z_c(p^2)&\varpropto (p^2)^{\kappa}\,,\\[2ex]
\lim\limits_{p\rightarrow 0}Z_A(p^2)&\varpropto (p^2)^{-2\,
\kappa}\,,
\label{eq:scaling_sol}
\end{align}
with the scaling coefficient $1/2<\kappa<1$. A simple calculation
presented in \App{app:irregularities} shows that the ghost loop with
an infrared constant ghost-gluon vertex and scaling ghost propagator
is already capable of inducing a splitting in the longitudinal and
transverse gluon mass parameter.
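Schematically, and with the full computation deferred to \App{app:irregularities}, the ghost loop contribution to the gluon two-point function with scaling ghost propagators $\varpropto (q^2)^{-1-\kappa}$ and an infrared constant vertex counts momenta as
\begin{align}
\int\!\frac{d^4 q}{(2\pi)^4}\; q_\mu\, (p+q)_\nu\, (q^2)^{-1-\kappa}
\left((p+q)^2\right)^{-1-\kappa} \varpropto (p^2)^{1-2\kappa}\,,\nonumber
\end{align}
in agreement with $\Gamma^{(2)}_{AA}\varpropto Z_A(p)\,p^2\varpropto (p^2)^{1-2\kappa}$ from \eq{eq:scaling_sol}; for $\kappa>1/2$ this contribution diverges in the infrared and hence gaps the gluon.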
Next we investigate the decoupling solution, e.g.\
\cite{Aguilar:2008xm,Boucaud:2008ky}, which scales with
\begin{align}\nonumber
\lim\limits_{p\rightarrow 0}Z_c(p^2)&\varpropto 1\, ,\\[2ex]
\lim\limits_{p\rightarrow 0}Z_A(p^2)&\varpropto (p^2)^{-1}\,,
\label{eq:decoupling_sol}
\end{align}
at small momenta. Assuming vertices that are regular in the limit of
one vanishing gluon momentum, one finds that all diagrammatic
contributions to the longitudinal and transverse gluon mass parameter
are identical. For example, if the ghost loop were to yield a
non-vanishing contribution to the gluon mass gap, the ghost-gluon
vertex would have to be a function of the angle $\theta=\arccos(t)$
between the gluon and anti-ghost momenta $p$ and $q$,
\begin{align}
\label{eq:irregular}
\lim\limits_{|p|\rightarrow 0}[\Gamma^{(3)}_{A\bar c
c}{}]_{\mu}^{abc}(|p|,|q|,t) = [\Gamma^{(3)}_{A\bar c
c}{}]_{\mu}^{abc}(0,|q|,t)\,,
\end{align}
even in the limit of vanishing gluon momentum $|p|\rightarrow
0\,$. Since the above limit depends on the angle, the vertex is
irregular. See \App{app:irregularities} for more details on this
particular case. Similar conclusions can be drawn for all vertices
appearing in the gluon propagator equation. Consequently, if all
vertices were regular, no gluon mass gap would be created. In
particular, regular vertices would entail the absence of
confinement. The necessity of irregularities for the creation of a
gluon mass gap was already realised by Cornwall~\cite{Cornwall:1981zr}.
In the light of these findings it is interesting to investigate the
consistency of irregularities with further Slavnov-Taylor
identities. Therefore, we consider the Slavnov-Taylor identity of the
three-gluon vertex, e.g.\ \cite{Alkofer:2000wg},
\begin{widetext}
\begin{align}
\label{eq:STIclass3gl}
\text{i} r_\rho
[\Gamma^{(3)}_{A^3}]{}^{abc}_{\mu\nu\rho}(p,q,r)\propto f^{abc}
\0{1}{Z_c(r^2)}\left( \tilde G_{\mu\sigma}(p,q) q^2
Z_A(q^2)\Pi^\bot_{\sigma\nu}(q) -\tilde G_{\nu\sigma}(q,p)
p^2 Z_A(p^2)\Pi^\bot_{\sigma\mu}(p) \right)\,,
\end{align}
\end{widetext}
where $\tilde G_{\mu \nu}$ relates to the ghost-gluon vertex via
\begin{align}
\label{eq:STIGtilde}
[\Gamma^{(3)}_{A \bar c c} ]_{\mu}^{abc}(p,q)=\text{i} g f^{abc} q_\nu
\tilde G_{\mu\nu}(p,q)\,.
\end{align}
For a regular $\tilde G_{\mu\nu}$ in the limit $p\rightarrow 0$
in \eq{eq:STIclass3gl}, the
scaling solution leads to a singular contribution of the type
\begin{align}\label{eq:3g_sing_scaling}
\lim\limits_{p\rightarrow 0} (p^2)^{1-2\kappa}\, \tilde
G_{\nu\sigma}(q,0)\, \Pi^\bot_{\sigma\mu}(p) + \rm regular\,,
\end{align}
where $\kappa$ is the scaling coefficient from
\eq{eq:scaling_sol}. This is consistent with the expected scaling
exponent of the three-gluon vertex in this limit
\cite{Alkofer:2008jy}. In the same limit, the decoupling solution
leads to a singular contribution of the form
\begin{align}
\label{eq:3g_sing_decoupling}
\lim\limits_{p\rightarrow 0} \tilde G_{\nu\sigma}(q,0)\,
\Pi^\bot_{\sigma\mu}(p) + \rm regular\,.
\end{align}
Since the transverse projector $\Pi^\bot_{\sigma\mu}(p)$ introduces an
angular dependence in the limit $p\rightarrow 0\,$, the STI again
demands an irregularity in the limit of one vanishing momentum. Note that
this is just a statement about the three-gluon vertex projected with
one non-zero longitudinal leg $r_\rho\,$. Although this momentum
configuration does not enter the gluon mass gap directly, crossing
symmetry implies the necessary irregularity. In summary, these
arguments illustrate that also the three-gluon vertex STI is
consistent with the necessity of irregularities for both types of
solutions.
We close the discussion of vertex irregularities with the remark that
the infrared modification of the propagator-STI in the refined
Gribov-Zwanziger formalism may remove the necessity for irregularities
in the vertices.
\begin{figure*}
\centering
\includegraphics[width=0.48\textwidth]{gluon_prop_ind_sca}
\hfill
\includegraphics[width=0.496\textwidth]{running_couplings}
\caption{ Left: Gluon propagator in comparison to the lattice
results from \cite{Sternbeck:2006cg}. Right: Effective
running couplings defined in \eq{eq:runcoup} as obtained
from different Yang-Mills vertices as function of the
momentum.
}
\label{fig:main_result_II}
\end{figure*}
\subsection{Origin of irregularities}
As discussed in the previous section, self-consistency in terms of the
Slavnov-Taylor identities entails a correspondence between the
dynamical generation of a gluon mass gap and the presence of
irregularities. But the STIs do not provide a mechanism for the
creation of irregularities, the gluon mass gap, and in turn
confinement.
In the scaling solution, \eq{eq:scaling_sol}, the irregularities arise
naturally from the non-trivial scaling. Hence they are tightly linked
to the original Kugo-Ojima confinement scenario \cite{Kugo:1979gm},
which requires the non-trivial scaling. Note, however, that this simply links different
signatures of confinement but does not reveal the mechanism at work.
For the decoupling solution \eq{eq:decoupling_sol}, we want to
discuss two possible scenarios. In the first scenario, the irregularities
are generated in the far infrared. A second possibility is that they are
triggered via a condensate and/or a resonance, providing a direct connection
of confinement and spontaneous symmetry breaking.
In the first scenario it is sufficient to focus on ghost loops as
possible sources of such irregularities, since the gluonic diagrams
decouple from the infrared dynamics due to the gluon mass gap. This
is a seemingly appealing scenario as it is the dynamical ghost that
distinguishes confining Yang-Mills theory from e.g.\ QED. However, in
the decoupling solution \eq{eq:decoupling_sol} both the ghost-gluon
vertex and the ghost propagator have infrared finite quantum
corrections: no ghost loops contribute to their equations, and
(infrared) constant dressing functions can be assumed for both. As a
consequence the ghost loop contributions to correlation functions have
the same infrared structure as perturbative ghost-loop
contributions. However, none of these perturbative ghost loops yields
the necessary irregularities, see \App{app:ghosttriangle} for an
explicit calculation.
In the second scenario, the generation of irregularities can be based
on the dynamical generation of a non-vanishing transverse background,
$F_{\mu\nu}^a F_{\mu\nu}^a \neq 0\,$, in the infrared. This gluon
condensate is the Savvidi vacuum \cite{Savvidy:1977as}, and its
generation in the present approach has been discussed in
\cite{Eichhorn:2010zc} with $F_{\mu\nu}^a F_{\mu\nu}^a \approx
\SI{1}{\GeV}^4\,$. Then, a vertex expansion about this non-trivial
IR-solution of the equation of motion introduces an IR-splitting of
transverse and longitudinal vertices due to the transversality of the
background field. This IR-splitting automatically implies
irregularities as discussed in \Sec{sec:gluonmassirregularities}, and
is sufficient for creating a physical mass gap in the gluon. This
scenario provides a direct relation of confinement and spontaneous
symmetry breaking. Therefore it is possibly connected to the presence
of resonances that are triggered in the longitudinal sector of the
theory, where they do not spoil the gapping of the completely
transverse sector. A purely longitudinal massless mode, as a source
for irregularities in the gluonic vertices, has been worked out in
\cite{Aguilar:2011ux,Aguilar:2011xe}, for a short summary see
\cite{Figueiredo:2016cvf}. As a consequence, an irregularity appears
in the purely longitudinal three-gluon vertex in a way that preserves
the corresponding Slavnov-Taylor identity. The creation of a purely
transverse background and the presence of longitudinal massless mode
would then be two sides of the same coin. Furthermore, the
longitudinal resonance has to occur at about the same scale as the
gluon condensate, in order to trigger the correct gluon mass gap. A
more detailed discussion and computation of this scenario cannot be
assessed in the purely transverse system and is therefore deferred to
future work.
\begin{figure*}
\centering
\includegraphics[width=0.488\textwidth]{AcbcSymPoint_clean}
\hfill
\includegraphics[width=0.48\textwidth]{AaaSymPoint_clean}
\caption{ Ghost-gluon vertex (left) and three-gluon
(right) vertex dressing functions $Z_{A \bar
cc,\bot }\left(p,\,p,\,-\frac{1}{2}\right)$ and
$Z_{A^3,\bot }\left(p,\,p,\,-\frac{1}{2}\right)$ in the
symmetric point configuration. More momentum
configurations and comparisons to Dyson-Schwinger
and lattice results can be found in
Figs.~\ref{fig:fourGluonTadpoleDressing}-\ref{fig:ThreePointVertices}.
In contrast to \Fig{fig:main_result}, the decoupling
dressings are normalised to the scaling solution
in the UV. }
\label{fig:main_result_III}
\end{figure*}
\subsection{The purely transverse system}
\label{sec:transverse}
In this work we restrict ourselves to a solution of the purely
transverse system \eq{eq:closedFun}, which is closed. The only
relevant UV parameters in this system are the strong coupling and the
transverse gluon mass parameter. In the UV the transverse mass
parameter agrees with the longitudinal one. The latter is fixed by the
mSTI for the longitudinal gluon propagator. Hence, the only
information needed from the longitudinal system is the initial value
for the transverse gluon mass parameter \eq{eq:mSTI_mass}. Note also
that there is at least one value for the initial gluon mass parameter
that yields a valid confining solution. In the following we vary the
gluon mass parameter and discuss the properties of the ensuing
solutions. We find a confining branch with both scaling and decoupling
solutions. In addition, we observe a transition to the deconfined
Higgs-type branch. No Coulomb branch is found. The unique scaling
solution satisfies the original Kugo-Ojima confinement criterion with
$Z_c(p=0)=0\,$. We emphasise that the existence of the scaling
solution is dynamically generated in a highly non-trivial way. The
details are discussed in \Sec{sec:gluonmassgap}.
\section{Numerical results}
\label{sec:mainresult}
We calculate Yang-Mills correlation functions by integrating the
self-consistent system of flow equations obtained from functional
derivatives of \eq{eq:flow}, see \Fig{fig:diagrams}
for diagrammatic representations. Technical details on the
numerical procedure are given in \App{app:technicalDetails}. We use
constant dressing functions as initial values for the $1$PI
correlators at the ultraviolet initial scale $\Lambda\,$. Consequently,
the initial action $\Gamma_\Lambda$ is given by the bare action of QCD
and the Slavnov-Taylor identities enforce relations between these
constant initial correlation functions. As is well-known, and also
discussed in \Sec{sec:mSTIandVert_sub}, the Landau gauge STIs leave
only three of the renormalisation constants independent, namely the
value of the strong running coupling and two trivial renormalisations
of the fields that drop out of any observable. To eliminate cutoff
effects, we choose the constant initial values for the vertex
dressings such that the momentum-dependent running couplings,
\eq{eq:runcoup}, are degenerate at perturbative momentum scales $p$
with $\Lambda_\text{\tiny{QCD}}\ll p\ll \Lambda\,$, i.e.\ the
STIs \eq{eq:RGrel} are only fulfilled at scales considerably below
the UV cutoff scale. The modification of the Slavnov-Taylor
identity, caused by the regulator term, requires a non-physical gluonic mass
term $m_\Lambda^2$ at the cutoff $\Lambda$. The initial value for
the inverse gluon propagator is therefore taken as
\begin{align}
[\Gamma^{(2)}_{AA,\Lambda}]^{ab}_{\mu\nu}(p) &=
\left(Z_{A,\Lambda}\, p^2+ m_\Lambda^2\right)\,\delta^{ab}\,
\Pi^{\bot}_{\mu\nu}(p)\,.
\end{align}
The non-physical contribution $m_k^2$ to the gluon propagator vanishes
only as the renormalisation group scale, $k\,$, is lowered to zero,
where the mSTIs reduce to the usual Slavnov-Taylor identities. The
initial value $m_\Lambda^2$ can be uniquely fixed by demanding that
the resulting propagators and vertices are of the scaling
type. Consequently, the only parameter in this calculation is the
value of the strong running coupling at the renormalisation scale, as
initially stated. We also produce decoupling solutions by varying the
gluon mass parameter towards slightly larger values. Our reasoning
for their validity as confining solutions is presented in
\Sec{sec:gluonmassgap}.
\begin{figure*}
\centering
\includegraphics[width=0.497\textwidth]{AaaaSymPoint}
\hfill
\includegraphics[width=0.48\textwidth]{gluon_prop_dressing_truncations_pert_0D_fit}
\caption{ Left:
Four-gluon vertex dressing function as defined in
\eq{eq:fourgluon} at the symmetric point in comparison to
Dyson-Schwinger computations~\cite{Cyrol:2014kca}. We
normalised all curves to match the scaling result at
$p=\SI{2}{\GeV}$. Right: Gluon propagator dressings obtained with
different momentum approximations, see
\Sec{sec:truncationcheck} for details.
}
\label{fig:main_result_IIII}
\end{figure*}
\subsection{Correlation functions and running couplings}
\label{sec:corcoup}
We show our results for the Yang-Mills correlation functions as well
as the momentum-dependent transverse running couplings in
Figs.~\ref{fig:main_result}-\ref{fig:main_result_IIII}, see also
Figs.~\ref{fig:fourGluonTadpoleDressing}-\ref{fig:ThreePointVertices}
in the appendices for a comparison of the vertices to recent lattice
and DSE results. A discussion of truncation effects is deferred to
\Sec{sec:truncationcheck}. In order to be able to compare to results
from lattice simulations, we set the scale and normalise the dressings
as described in \App{app:rescaling}. At all momenta where the
difference between the scaling (solid line) and decoupling (band
bounded by dot-dashed line) solutions is negligible, our results for
the correlation functions agree very well with the corresponding
lattice results. In the case of the scaling solution we find the
consistent scaling exponents
\begin{align}\nonumber
\kappa_\text{ghost}&=0.579\pm0.005\,,\\[2ex]
\kappa_\text{gluon}&=0.573\pm0.002\,,
\end{align}
where the uncertainties stem from a least-squares fit with the ansatz
\begin{align}\nonumber
Z_c(p)&\varpropto (p^2)^{ \kappa_\text{ghost}}\,,\\[2ex]
Z_A(p)&\varpropto (p^2)^{-2\, \kappa_\text{gluon}}\,.
\end{align}
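Extracting such exponents amounts to a straight-line fit in logarithmic variables. The sketch below illustrates this on synthetic dressing data generated from the ghost ansatz above; the prefactor and momentum window are hypothetical, chosen purely for illustration.

```python
import numpy as np

# Synthetic deep-infrared ghost dressing Z_c(p) = c * (p^2)^kappa.
# The prefactor c = 2.0 and the momentum window are hypothetical.
kappa_true = 0.579
p = np.logspace(-3, -1, 50)                  # infrared momenta
Z_c = 2.0 * (p**2) ** kappa_true

# Least-squares fit of log Z_c = log c + kappa * log(p^2):
kappa_fit, log_c = np.polyfit(np.log(p**2), np.log(Z_c), 1)
```

On exact power-law data the fitted slope recovers $\kappa_\text{ghost}$; on real dressings, varying the fit window provides an estimate of the quoted uncertainties.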
As discussed in \Sec{sec:transverse}, the scaling solution is a
self-consistent solution of the purely transverse system in the
approach used here, and has no systematic error related to not solving
the longitudinal system. In turn, the presented decoupling solutions
suffer from the missing solution of the longitudinal system, leading
to a small additional systematic error. This argument already
suggests that it is the presented scaling solution that should agree
best with the lattice results in the regime $p\gtrsim \SI{1}{\GeV}\,$,
where the solutions show no sensitivity to the Gribov problem. This is
confirmed by the results, see in particular \Fig{fig:main_result}.
In the infrared regime, $p\lesssim \SI{1}{\GeV}\,$, the different
solutions approach their infrared asymptotics. In
\Fig{fig:main_result} and \Fig{fig:main_result_II} we compare the FRG
solutions with the lattice data from \cite{Sternbeck:2006cg}. In
agreement with other lattice results
\cite{Cucchieri:2007rg,Cucchieri:2008fc,Maas:2009ph} in four
dimensions, these propagators show a decoupling behaviour, for a
review see \cite{Maas:2011se}. Taking the IR behaviour of all
correlators into account, cf.\ also \Fig{fig:ThreePointVertices}, the
lattice solution \cite{Sternbeck:2006cg} is very close to the
decoupling solution (dot-dashed line) that is furthest from the
scaling solution (solid line). Note, however, that the systematic
errors of both approaches, FRG computations and lattice simulations,
increase towards the IR. While the FRG computations lack apparent
convergence in this regime, the lattice data are affected by the
non-perturbative gauge fixing procedure, i.e.\ the choice of Gribov
copies \cite{Maas:2009se,Sternbeck:2012mf,Maas:2015nva} and
discretisation artefacts \cite{Duarte:2016iko}. Consequently, any
comparison of the FRG IR band to the lattice propagators has to be taken
with a grain of salt. In the case of the vertices, we also compare to
results obtained within the Dyson-Schwinger equation approach
\cite{Huber:2012kd,Blum:2014gna,Cyrol:2014kca}, see
\Fig{fig:comparison} and \ref{fig:ThreePointVertices}. A comparison of
the different running vertex couplings is given in
\Sec{sec:comparison}.
We find that it is crucial to ensure the degeneracy in the different
running vertex couplings at perturbative momentum scales in order to
achieve quantitative accuracy, see also \Sec{sec:truncationcheck}.
The transverse effective running couplings, as defined in
\eq{eq:runcoup}, are shown in the right panel of
\Fig{fig:main_result_II}. To be able to cover a larger range of
momenta with manageable numerical effort, the shown running couplings
have been obtained within an approximation that takes only one
momentum variable into account in the vertices, see
\Sec{sec:truncationcheck}. At large perturbative momentum scales, we
find them to be perfectly degenerate, as is demanded by the
Slavnov-Taylor identities. The degeneracy of the running couplings is
lifted at a scale of roughly $\SI{2}{\GeV}\,$, which coincides with the
gapping scale of the gluon. Furthermore, the three-gluon vertex shows
a zero crossing at scales of \SIrange{0.1}{0.33}{GeV}, which is the
reason for the spike in the corresponding running coupling. This zero
crossing, which is caused by the infrared-dominant ghost-loop, is
well-known in the literature
\cite{Aguilar:2013vaa,Pelaez:2013cpa,Blum:2014gna,Eichmann:2014xya}.
Even though we are looking at the scaling solution, we find that the
running couplings defined from the purely gluonic vertices are still
strongly suppressed in the infrared. In particular the three-gluon
vertex running coupling becomes more strongly suppressed than the
four-gluon vertex running coupling. However, as demanded by scaling,
they seem to settle at tiny but finite fixed point values, which has
also been seen in Dyson-Schwinger studies
\cite{Eichmann:2014xya,Kellermann:2008iw,Cyrol:2014kca}.
\subsection{Quality of the approximation}
\label{sec:truncationcheck}
In \Fig{fig:main_result_IIII} (right panel), we show the scaling
solution for the propagators in different truncations. In all cases,
the full momentum dependence of the propagators is taken into account,
whereas different approximations are used for the vertices. Including
only RG-scale-dependent constant vertex dressing functions is the
minimal approximation that can produce a scaling solution with a
physical gluon mass gap. The dot-dashed (magenta) line in
\Fig{fig:main_result_IIII} (right panel) corresponds to an
approximation with constant vertex dressing functions evaluated at the
symmetric configuration with momentum $\mathcal{O}(\SI{250}{\MeV})\,$.
Hence the vertex dressings depend only on the RG scale. For the
dashed blue results the dressing functions for the transversally
projected classical tensor structures have been approximated with a
single momentum variable $\bar{p}^2 \equiv \tfrac{1}{n}\sum_{i=1}^n
p_i^2\,$. Reducing the momentum dependence to a single variable
requires the definition of a momentum configuration to evaluate the
flow. Here, we use the symmetric point configuration, defined by $p_i
\cdot p_i = p^2$ and $p_i\cdot p_j = -p^2/(n-1)$ for $i\neq j\,$, where
$n=3\, (4)$ for the three(four)-gluon vertex. Finally, the solid red
line corresponds to our best truncation. As described in
\Sec{sec:expsch}, it takes into account the full momentum dependence
of the classical tensor structures of the three-point functions as
well as the four-gluon vertex in a symmetric point
approximation. Additionally, all (three-dimensional) momentum
configurations of the four-gluon vertex that are needed in the tadpole
diagram of the gluon propagator equation have been calculated and
coupled back in this diagram. The reliability of our approximation
can be assessed by comparing the two simpler truncations to the result
obtained in our best truncation scheme. We observe that our results
apparently converge towards the lattice result as we improve the
momentum approximation for the vertices.
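The symmetric-point relations can be checked explicitly. A minimal numerical sketch for the three-gluon case ($n=3$), with three planar momenta at $120^\circ$ (an illustrative choice of frame):

```python
import numpy as np

# Three planar momenta of equal magnitude p at 120-degree angles:
# the symmetric-point configuration for the three-gluon vertex (n = 3).
p = 1.0
angles = 2 * np.pi * np.arange(3) / 3
momenta = p * np.column_stack((np.cos(angles), np.sin(angles)))

dots = momenta @ momenta.T
sym_ok = (
    np.allclose(momenta.sum(axis=0), 0)                        # momentum conservation
    and np.allclose(np.diag(dots), p**2)                       # p_i . p_i = p^2
    and np.allclose(dots[~np.eye(3, dtype=bool)], -p**2 / 2)   # p_i . p_j, i != j
)

# The single momentum variable of the reduced approximation:
pbar2 = (momenta**2).sum() / 3                                 # equals p^2 here
```

At the symmetric point, the averaged variable $\bar{p}^2$ coincides with $p^2$, so the one-variable approximation is exact on this configuration.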
The effects of non-classical tensor structures and vertices are beyond
the scope of the current work and have to be checked in future
investigations, see however \cite{Eichmann:2014xya} for an
investigation of non-classical tensor structures of the three-gluon
vertex. Within the present work, the already very good agreement with
lattice results suggests that their influence on the propagators is
small.
The final gluon propagator is sensitive to the correct renormalisation
of the vertices. For example, a one percent change of the three-gluon
vertex dressing at a UV scale of $\SI{20}{\GeV}$ is magnified by up to a
factor of 10 in the final gluon propagator. Therefore, small errors in
the perturbative running of the vertices propagate, via
renormalisation, into the two-point functions. We expect a five
percent uncertainty in our results due to this.
Despite these uncertainties, we interpret the behaviour in
\Fig{fig:main_result_IIII} (right panel) as an indication for apparent
convergence.
\subsection{Comparison to other results}
\label{sec:comparison}
\begin{figure}
\includegraphics[width=0.48\textwidth]{running_couplings_comp_rescaled}
\caption{Running couplings \eq{eq:runcoup} in comparison with DSE running.
The grey band gives the spread of vertex
couplings from the FRG in the present work. The DSE results
are shown rescaled to fit our ghost-gluon vertex running
coupling at $\SI{10}{\GeV}$ to facilitate the comparison.
The inlay shows the unscaled couplings. Note that the FRG
                  running couplings naturally lie on top of each other and are
                  therefore shown without rescaling.}
\label{fig:comparison}
\end{figure}
In \Fig{fig:ThreePointVertices}, numerical results for the ghost-gluon and
three-gluon vertices are shown in comparison to other functional
methods as well as lattice results. In summary, the results from
various functional approaches and the lattice agree to a good degree.
However, these correlation functions are not renormalisation group
invariant, and a fully meaningful comparison can only be made with
RG invariant quantities. Therefore, we compare our results for the
RG invariant running couplings with the respective results from DSE
computations. To be more precise, it is actually the $\beta$ functions
of the different vertices that are tied together by two-loop
universality in the sense that they should agree in the regime where
three-loop effects are negligible. Since constant factors drop out of
the $\beta$ functions, we have normalised the DSE running couplings to
the FRG result at large momentum scales in \Fig{fig:comparison}.
For the sake of visibility, we have only provided a band for
the spread of the FRG couplings as obtained from different vertices.
The shown DSE running couplings are based on a series of works
\cite{Huber:2012kd,Blum:2014gna,
Cyrol:2014kca,Williams:2014iea,Williams:2015cvx}, where the explicitly
shown results are taken from
\cite{Blum:2014gna,Cyrol:2014kca,Huber:unpublished,Williams:unpublished}.
Additionally, we provide the raw DSE running couplings that have not
been rescaled by a constant factor in the inlay.
\subsection{Mass gap, mSTIs and types of solutions}
\label{sec:gluonmassgap}
\begin{figure*}
\centering
\includegraphics[width=0.481\textwidth]{ghost_prop_dressing_tuning}
\hfill
\includegraphics[width=0.48\textwidth]{gluon_prop_tuning}
\caption{Ghost dressing functions $1/Z_c$ (left) and
gluon propagators (right) for different values of the
ultraviolet gluon mass parameter. Blue results correspond to
the Higgs-type branch and red results to the confined branch. The
solutions in all branches have been normalised to the
scaling solution in the UV. }
\label{fig:confinementVSHiggs}
\end{figure*}
As discussed in \Sec{sec:mSTIandVert_sub}, the introduction of the
regulator in the FRG leads to a modification of the Slavnov-Taylor
identities. In turn, the inverse gluon propagator receives a
contribution proportional to $\Delta\Gamma^{(2)}_{AA}\varpropto
k^2\alpha(k)$ for all $k>0\,$. Disentangling the physical mass gap
contribution from this mSTI contribution to the gluon mass parameter
is intricate, both conceptually and numerically. The resulting
numerical challenge is illustrated in the appendix in
\Fig{fig:flowQuantities}, where we show the $k$-running of the gluon
mass parameter. This is the analogue of the problem of quadratic
divergences in Dyson-Schwinger equations with a hard momentum cutoff,
see e.g.\ \cite{Huber:2014tva}. However, there has to exist at least
one choice for the gluon mass parameter $m_\Lambda^2$ that yields a
valid confining solution, see \Sec{sec:mSTIandVert_sub}. To resolve
the issue of finding this value, we first recall that a fully regular
solution has no confinement and necessarily shows a Higgs- or
Coulomb-type behaviour. Although we do not expect these branches to be
consistent solutions, we can trigger them by an appropriate choice of
the gluon mass parameter in the UV. The confinement branch then lies
between the Coulomb and the Higgs branch. We need, however, a
criterion for distinguishing between the confinement and the
Higgs-type branch.
To investigate the possible solutions in a controlled way, we start
deep in the Higgs-type branch: an asymptotically large initial gluon
mass parameter $m_\Lambda^2$ triggers an explicit mass term for the
gluon at $k=0\,$. If we could trigger this consistently in the present
$SU(3)$ theory, it would constitute a Higgs solution. Note that in the
current approximation it cannot be distinguished from massive
Yang-Mills theory, which has e.g.\ been considered in
\cite{Tissier:2010ts,Pelaez:2014mxa}. Starting from this Higgs-type
branch, we can then explore the limit of smaller initial mass
parameters. This finally leads us to the scaling solution, which
forms the boundary towards an unphysical region characterised by
Landau-pole-like singularities. It remains to distinguish between the
remaining confining and Higgs-type solutions, shown in
\Fig{fig:confinementVSHiggs}, without any information from the
longitudinal set of equations. For that purpose we use two criteria:
In the left panel of \Fig{fig:mass_parameter}, we show the mass gap of
the gluon, $m^{2}=\Gamma^{(2)}_{AA,k=0}(p=0)\,$, as a function of the
chosen initial value for the gluon mass parameter $m_{\Lambda}^2$
subtracted by the corresponding value for the scaling solution
$m_{\Lambda,\rm scaling}^2\,$. The latter solution corresponds to zero
on the x-axis in \Fig{fig:mass_parameter}. As mentioned before, going
beyond the scaling solution, $m_{\Lambda}^2<m_{\Lambda,\rm
scaling}^2\,$, leads to singularities. We interpret their presence as
a signal for the invalidity of the Coulomb branch as a possible
realisation of non-Abelian Yang-Mills theory. The decisive feature of
the left panel of \Fig{fig:mass_parameter} is the presence of a
minimum at $m_{\rm min}^2\,$. If there were no dynamical mass gap
generation, $m^2$ would have to go to zero as we lower
$m_{\Lambda}^2\,$. In contrast to this, we find that the resulting
gluon mass gap is always larger than the value it takes at
$m_{\Lambda}^2=m_{\rm min}^2\,$. In particular, this entails that all
solutions to the left of the minimal value, $m_{\Lambda}^2<m_{\rm min}^2\,$,
are characterised by a large dynamical contribution to the
gluon mass gap, which we interpret as confinement.
\begin{figure*}
\centering
\includegraphics[width=0.485\textwidth]{mUV_m0_m}
\hfill
\includegraphics[width=0.48\textwidth]{order_param}
\caption{ Left: Gluon mass gap as a function of the gluon mass
parameter $m_\Lambda^2-m_\text{$\Lambda$, scaling}^2\,$,
where $m_\text{$\Lambda$, scaling}^2$ denotes the gluon mass
parameter that yields the scaling solution. Right: Momentum
value at which the gluon propagator assumes its maximum, as
a function of the gluon mass parameter
$m_\Lambda^2-m_\text{$\Lambda$, scaling}^2\,$. The inlay
exposes the power law behaviour of the gluon propagator
maximum in the vicinity of the transition region, see
\eq{eq:phaseTransitionFit}. Both plots were obtained from
our numerically less-demanding 1D approximation. We have
repeated this analysis in the transition regime from
Higgs-type to confinement branch also with the best
approximation and find the same behaviour. The shaded area
marks momentum scales that are not numerically resolved in
the present work. The points in this region rely on a
generic extrapolation. }
\label{fig:mass_parameter}
\end{figure*}
As a second criterion for differentiating between confining and Higgs
solutions, we use the presence of a maximum at non-vanishing momenta
in the gluon propagator, which signals positivity violation
\cite{Alkofer:2000wg}. In the right panel of \Fig{fig:mass_parameter},
we show the location of the maximum in the gluon propagator, again as
a function of the gluon mass parameter, $m_{\Lambda}^2-m_{\Lambda,\rm
scaling}^2\,$. We clearly see a region of confining solutions that
show a back-bending of the gluon propagator at small momenta,
see \Fig{fig:confinementVSHiggs}. The dashed line,
separating the shaded from the white region in the right panel of
\Fig{fig:mass_parameter}, indicates the smallest momentum value at
which the gluon propagator has been calculated. With this restriction
in mind, the fit in the inlay demonstrates that the location of the
maximum of the propagator scales to zero as one approaches the
critical value $m_{\rm c}^2\,$. We fit with
\begin{align}
\label{eq:phaseTransitionFit}
p_\text{max}(m^2_\Lambda) \propto
\left(\frac{m^2_\Lambda-m^2_{\rm c}}{m^2_{\rm
c}}\right)^\alpha\,,
\end{align}
which yields the critical exponent
\begin{align}
\label{eq:phaseTransitionFitResult}
\alpha = 1.95 \pm 0.6\,,
\end{align}
in the 1D approximation. Within the numerical accuracy, this boundary
value $m_{\rm c}^2$ is equivalent to the minimal value $m_{\rm min}^2$
of our first confinement criterion. Hence, the value of the UV mass
parameter that results in the minimal gluon mass gap, is also the one
that shows minimal back-bending. Note that the lattice simulations
show a gluon propagator that is at least very close to this minimal
mass gap.
As discussed in detail in \Sec{sec:massgap} and appendices
\ref{app:irregularities} and \ref{app:ghosttriangle}, a gluon mass gap
necessitates irregularities. The scaling solution by definition
contains these irregularities already in the propagators, cf.\
\eq{eq:scaling_sol}. For the decoupling-type solutions, we excluded
infrared irregularities of diagrammatic origin, see
\App{app:ghosttriangle}. Thus, for the decoupling-type solutions our
arguments for the validity of the solutions are weaker and remain to
be investigated in a solution including at least parts of the
longitudinal system, see the discussion in \Sec{sec:massgap}.
Additionally, it might be necessary to expand about the solution of
the equation of motion, see \cite{Eichhorn:2010zc}.
We summarise the findings of the present section. In the right panel
of \Fig{fig:mass_parameter} we can distinguish a confining branch with
positivity violation and a Higgs-type branch with a massive gluon
propagator. A Coulomb-type solution, on the other hand, can never be
produced with the functional renormalisation group since any attempt
to do so leads to Landau-pole-like singularities. The non-existence of
the Coulomb branch is tightly linked to the non-monotonic dependence
of the mass gap on the initial gluon mass parameter, see left panel of
\Fig{fig:mass_parameter}. This behaviour is of a dynamical origin that is
also responsible for the existence of the scaling solution for the
smallest possible UV gluon mass parameter.
\subsection{Discussion}
\label{sec:discussion}
As has been discussed already in \Sec{sec:corcoup}, one non-trivial
feature of the different vertex couplings is their quantitative
equivalence for momenta down to $p\approx \SI{2}{\GeV}\,$, see
\Fig{fig:main_result_II} (right panel). This property extends the
universal running of the couplings into the semi-perturbative regime.
On the other hand, the couplings violate universality in the
non-perturbative regime for $p\lesssim \SI{2}{\GeV}\,$. The
universality down to the semi-perturbative regime is a very welcome
feature of Landau gauge QCD, as it reduces the size of the
non-perturbative regime and hence the potential systematic errors. In
particular, one running coupling is sufficient to describe Landau
gauge Yang-Mills theory down to momentum scales of the order of the
gluon mass gap. This suggests using the propagators together with
the ghost-gluon vertex for simple semi-quantitative calculations. The
above structure also explains and supports the semi-quantitative
nature of the results in low-order approximations.
This implies that self-consistent calculations of most or all vertices
have to reproduce this universality, in particular for momenta
$\SI{2}{\GeV}\lesssim p\lesssim \SI{10}{\GeV}\,$. When starting from
the value of the strong running coupling at perturbative momenta, we
find that a violation of the degeneracy of the running couplings,
\eq{eq:runcoup}, in this regime goes hand in hand with the loss of
even qualitative properties of the non-perturbative results in
self-consistent approximations. This surprising sensitivity to even
small deviations of the couplings from their universal running extends
to the fully dynamical system with quarks, see e.g.\
\cite{Mitter:2014wpa,CMPS:VC}. Note in this context that the
quark-gluon coupling $\alpha_{A\bar q q}\,$, \eq{eq:quarkgluon},
agrees with the ghost-gluon coupling $\alpha_s$ defined in
\eq{eq:propcoupling}, and not the vertex coupling $\alpha_{A\bar c
c}\,$, see \Sec{sec:STImSTI}. It can be shown in the full
QCD system that deviations from universality on the percent level
have a qualitative impact on chiral symmetry breaking. The origin of
this is the sensitivity of chiral symmetry breaking to the correct
adjustment of physical scales, i.e.\ $\Lambda_{\text{\tiny QCD}}\,$,
in all subsystems. These observations underline the relevance of the
present results for the quantitative grip on chiral symmetry
breaking. A full analysis will be presented in a forthcoming work,
\cite{CMPS:VC}.
We close this discussion with the remark that universality in the
semi-perturbative regime is tightly linked with the consistent
renormalisation of all primitively divergent correlation functions.
We find it crucial to demand the validity of the STIs \eq{eq:RGrel}
only on momentum scales considerably below the ultraviolet cutoff
$\Lambda\,$. On the other hand, the relations \eq{eq:RGrel} are
violated close to the ultraviolet cutoff, due to the BPHZ-type
subtraction schemes. This constitutes no restriction to any practical
applications, since the cutoff can always be chosen large enough, such
that no violation effects can be found at momenta $p\ll \Lambda\,$.
One particular consequence of BPHZ-type subtraction schemes is then
that the calculated renormalisation constants necessarily have to
violate \eq{eq:RGrel}, since they contain contributions from momentum
regions close to the ultraviolet cutoff.
\section{Conclusion}
In this work we investigate correlation functions in Landau gauge
$SU(3)$ Yang-Mills theory. This analysis is performed in a vertex
expansion scheme for the effective action within the functional
renormalisation group approach. Besides the gluon and ghost
propagators, our approximation for the effective action includes the
self-consistent calculation of momentum-dependent dressings of the
transverse ghost-gluon, three-gluon and four-gluon vertices. Starting
from the gauge fixed tree-level perturbative action of Yang-Mills
theory, we obtain results for the correlators that are in very good
agreement with corresponding lattice QCD simulations. Furthermore, the
comparison of different vertex truncations indicates the apparent
convergence of the expansion scheme.
Special emphasis is put on the analysis of the dynamical creation of
the gluon mass gap at non-perturbative momenta. Self-consistency in
terms of the Slavnov-Taylor identities directly links this property to
the requirement of IR irregularities in the correlation functions. The
source of these irregularities is easily traced back to the
IR-divergent ghost propagator for the scaling solution. In the
decoupling-type solutions, the source of these irregularities is
harder to identify, since the creation of diagrammatic infrared
irregularities is ruled out by general arguments. Within our
truncation, we can exclude irregularities of non-diagrammatic origin
in the purely transverse subsystem. Hence it is necessary to solve
the longitudinal system to answer whether the required irregularities
are generated for decoupling-type solutions, which is not done in this
work. Nevertheless, we are able to produce decoupling-type solutions
by invoking two consistent criteria, which allow for the
differentiation between confining and Higgs-like solutions. The
decoupling-type solutions are bound by the solution that shows the
minimal mass gap, which is also the solution with minimal back-bending
of the gluon propagator.
\acknowledgments We thank Markus Q. Huber, Axel Maas, Fabian
Rennecke, Andre Sternbeck and Richard Williams for discussions as well
as providing unpublished data. This work is supported by EMMI, the
grant ERC-AdG-290623, the FWF through Erwin-Schr\"odinger-Stipendium
No.\ J3507-N27, the Studienstiftung des deutschen Volkes, the DFG
through grant STR 1462/1-1, and in part by the Office of Nuclear
Physics in the US Department of Energy's Office of Science
under Contract No. DE-AC02-05CH11231.
\section{Introduction}
Restoring forces play a very fundamental role in the study of vibrations of
mechanical systems. If a system is moved from its equilibrium position, a
restoring force will tend to bring the system back toward equilibrium. For
decades, if not centuries, springs have been used as the most common example of
this type of mechanical system, and have been used extensively to study the
nature of restoring forces. In fact, the use of springs to demonstrate
Hooke's law is an integral part of every elementary physics lab. However, and
despite the fact that many papers have been written on this topic, and several
experiments designed to verify that the extension of a spring is, in most cases,
directly proportional to the force exerted on
it~\cite{Mills:404,Cushing:925,Easton:494,Hmurcik:135,
Sherfinski:552,Glaser:164,Menz:483,Wagner:566,Souza:35,Struganova:516,
Freeman:224,Euler:57}, not much has been written about experiments concerning
springs connected in series. Perhaps one of the most common reasons why little
attention has been paid to this topic is the fact that a mathematical
description of the physical behaviour of springs in series can be derived
easily~\cite{Gilbert:430}. Most of the textbooks in fundamental physics rarely
discuss the topic of springs in series, and they just leave it as an end of the
chapter problem for the student~\cite{Giancoli,Serway}.
One question that often arises from spring experiments is, ``If a uniform spring
is cut into two or three segments, what is the spring constant of each
segment?'' This paper describes a simple experiment to study the combination of
springs in series using only \textit{one} single spring. The goal is to prove
experimentally that Hooke's law is satisfied not only by each individual spring
of the series, but also by the \textit{combination} of springs as a whole. To
make the experiment effective and easy to perform, first we avoid cutting a
brand new spring into pieces, which is nothing but a waste of resources and
equipment misuse; second, we avoid combining in series several springs with
dissimilar characteristics. This actually would not only introduce additional
difficulties in the physical analysis of the problem (different mass densities
of the springs), but it would also be a source of random error, since the points
at which the springs join do not form coils and the segment elongations might
not be recorded with accuracy. Moreover, contact forces (friction) at these
points might affect the position readings, as well. Instead, we decide just to
use one single spring with paint marks placed on the coils that allow us to
divide it into different segments, and consider it as a collection of springs
connected in series. Then the static Hooke's law exercise is carried out on the
spring to observe how each segment elongates under a suspended mass.
In the experiment, two different scenarios are examined: the mass-spring system
with an ideal massless spring, and the realistic case of a spring whose mass is
comparable to the hanging mass. The graphical representation of force against
elongation, used to obtain the spring constant of each individual segment,
shows, in excellent agreement with the theoretical predictions, that the inverse
of the spring constant of the entire spring equals the addition of the
reciprocals of the spring constants of each individual segment. Furthermore, the
experimental results allow us to verify that the ratio of the spring constant of
a segment to the spring constant of the entire spring equals the ratio of the
total number of coils of the spring to the number of coils of the segment.
The experiment discussed in this article has some educational benefits that may
make it attractive for a high school or a first-year college laboratory: It is
easy to perform by students, makes use of only one spring for the investigation,
helps students to develop measuring skills, encourages students to use
computational tools to do linear regression and propagation of error analysis,
helps to understand how springs work using the relationship between the spring
constant and the number of coils, complements the traditional static Hooke's law
experiment with the study of combinations of springs in series, and explores the
contribution of the spring mass to the total elongation of the spring.
\section{The model}
When a spring is stretched, it resists deformation with a force proportional to
the amount of elongation. If the elongation is not too large, this can be
expressed by the approximate relation $F = -k\,x$, where $F$ is the restoring
force, $k$ is the spring constant, and $x$ is the elongation (displacement of
the end of the spring from its equilibrium position)~\cite{Symon}. Because most
of the springs available today are \textit{preloaded}, that is, when in the
relaxed position, almost all of the adjacent coils of the helix are in contact,
application of only a minimum amount of force (weight) is necessary to stretch
the spring to a position where all of the coils are separated from each
other~\cite{Glanz:1091,Prior:601,Froehle:368}. At this new position, the spring
response is linear, and Hooke's law is satisfied.
It is not difficult to show that, when two or more springs are combined in
series (one after another), the resulting combination has a spring constant less
than any of the component springs. In fact, if $p$ ideal springs are connected
in sequence, the expression
\begin{equation}
\frac{1}{k} = \sum_{i=1}^p \frac{1}{k_i}
\label{Eq:1/k}
\end{equation}
relates the spring constant $k$ of the combination with the spring constant
$k_i$ of each individual segment. In general, for a cylindrical spring of
spring constant $k$ having $N$ coils, which is divided into smaller segments,
having $n_i$ coils, the spring constant of each segment can be written as
\begin{equation}
k_i = \frac{N}{n_i} k\,.
\label{Eq:ki}
\end{equation}
Excluding the effects of the material from which a spring is made, the diameter
of the wire and the radius of the coils, this equation expresses the fact that
the spring constant $k$ is a parameter that depends on the number of coils $N$
in a spring, but not on the way in which the coils are wound (i.e. tightly or
loosely)~\cite{Gilbert:430}.
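A quick numerical illustration of equations~\eref{Eq:1/k} and~\eref{Eq:ki}, using hypothetical values for $N$, $k$ and the $n_i$:

```python
# Hypothetical spring: N = 60 coils, spring constant k = 10 N/m,
# divided into segments of n_i = 10, 20 and 30 coils.
N, k = 60, 10.0
n = [10, 20, 30]

# Eq. (2): each segment is stiffer in proportion to N/n_i.
k_i = [(N / n_i) * k for n_i in n]            # [60.0, 30.0, 20.0] N/m

# Eq. (1): the series combination recovers the original constant k.
k_series = 1.0 / sum(1.0 / ki for ki in k_i)
```

Each segment, having fewer coils, is stiffer than the whole spring, yet the series combination of the three segments reproduces the spring constant of the undivided spring.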
In an early paper, Galloni and Kohen~\cite{Galloni:1076} showed that, under
\textit{static} conditions, the elongation sustained by a spring of non-negligible
mass can be obtained by assuming that the spring is massless and a fraction of one-half
of the spring mass should be added to the hanging mass. That is, if a spring of
mass $m_{\mathrm{s}}$ and relaxed length $l$ (neither stretched nor compressed)
is suspended vertically from one end in the Earth's gravitational field, the
mass per unit length becomes a function of the position, and the spring
stretches \textit{non-uniformly} to a new length $l' = l + \Delta l$. When a
mass $m$ is hung from the end of the spring, the total elongation $\Delta l$ is
found to be
\begin{equation}
\Delta l = \int_0^l \xi(x)\,\rmd x = \frac{(m + \frac{1}{2} m_{\mathrm{s}})\,g}{k}\,,
\label{Eq:Dl1}
\end{equation}
where
\begin{equation}
\xi(x) = \frac{m + m_{\mathrm{s}}(l-x)/l}{k\,l}\,g
\label{Eq:xi}
\end{equation}
is the \textit{dimensionless elongation factor} of the element of length between
$x$ and $x + \rmd x$, and $g$ is the acceleration due to gravity. A
considerable number of papers dealing with the static and dynamic effects of the
spring mass have been written in the physics education literature. Expressions for the
spring elongation as a function of the $n$th coil and the mass per unit length
of the spring have also been derived~\cite{Edwards:445,Heard:1102,Lancaster:217,
Mak:994,French:244,Hosken:327,Ruby:140,Sawicki:276,Ruby:324,Toepker:16,
Newburgh:586,Bowen:1145,Christensen:818,Rodriguez:100,Gluck:178,Essen:603}.
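The definite integral in equation~\eref{Eq:Dl1} can be verified symbolically; a minimal SymPy sketch:

```python
import sympy as sp

# Symbolic check that integrating the elongation factor xi(x) of Eq. (xi)
# over the relaxed length l reproduces Delta l = (m + m_s/2) g / k.
x, l, m, m_s, g, k = sp.symbols('x l m m_s g k', positive=True)
xi = (m + m_s * (l - x) / l) * g / (k * l)
Delta_l = sp.integrate(xi, (x, 0, l))
assert sp.simplify(Delta_l - (m + m_s / 2) * g / k) == 0
```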
\section{The Experiment}
We want to show that, with just \textit{one} single spring, it is possible to
confirm experimentally the validity of equations~\eref{Eq:1/k}
and~\eref{Eq:ki}. This approach differs from Souza's work~\cite{Souza:35} in
that the constants $k_i$ are determined from the same single spring, and there
is no need to cut the spring into pieces; and from the standard experiment
in which more than one spring is required.
A soft spring is \textit{divided} into three separate segments by placing a
paint mark at selected points along its surface (see~\fref{Fig1}). These points
are chosen by counting a certain number of coils for each individual segment
such that the original spring is now composed of three marked springs connected
in series, with each segment represented by an index $i$ (with $i=1,2,3$), and
consisting of $n_i$ coils. An initial mass $m$ is suspended from the spring to
stretch it into its \textit{linear} region, where the equation $F_i=-k_i\Delta
x_i$ is satisfied by each segment. Once the spring is brought into this region,
the traditional static Hooke's law experiment is performed for several different
suspended masses, ranging from $1.0$ to $50.0\,\mbox{g}$. The initial positions
of the marked points $x_i$ are then used to measure the \textit{relative}
displacement (elongation) of each segment after they are stretched by the
additional masses suspended from the spring~(\fref{Fig2}). The displacements are
determined by the equations
\begin{equation}
\Delta x_i = (x'_i - x'_{i-1}) - l_i\,,
\label{Eq:Dxi1}
\end{equation}
where the primed variables $x'_i$ represent the new positions of the marked
points, $l_i = x_i - x_{i-1}$ are the initial lengths of the spring segments,
and $x_0 = 0$, by definition. Representative graphs used to determine the spring
constant of each segment are shown in figures~\ref{Fig3},~\ref{Fig4},
and~\ref{Fig5}.
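The bookkeeping of equation~\eref{Eq:Dxi1} can be illustrated with a short script; the mark positions below are made up for illustration and are not the measured data:

```python
# Segment elongations from the positions of the paint marks, Eq. (Dxi1).
# x_0 = 0 by definition; positions are in metres (made-up values).
x_initial = [0.0, 0.050, 0.100, 0.150]   # marks x_0 .. x_3 before loading
x_loaded  = [0.0, 0.062, 0.124, 0.186]   # marks x'_0 .. x'_3 after loading

l  = [x_initial[i] - x_initial[i - 1] for i in range(1, 4)]             # l_i
dx = [(x_loaded[i] - x_loaded[i - 1]) - l[i - 1] for i in range(1, 4)]  # Dx_i
# For identical segments under an external load, the elongations should be
# (nearly) equal, cf. Eq. (Dl123).
assert all(abs(d - dx[0]) < 1e-9 for d in dx)
```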
\section{Dealing with the effective mass}
As pointed out by some
authors~\cite{Galloni:1076,Mak:994,French:244,Sawicki:276,Newburgh:586}, it is
important to note that there is a difference in the total mass hanging from each
segment of the spring. The reason is that each segment supports not only the
mass of the segments below it, but also the mass attached to the end of the
spring. For example, if a spring of mass $m_{\mathrm{s}}$ is divided into three
\textit{identical} segments, and a mass $m$ is suspended from the end of it, the
total mass $M_1$ hanging from the first segment becomes $m +
\frac{2}{3}m_{\mathrm{s}}$. Similarly, for the second and third segments, the
total masses turn out to be $M_2 = m + \frac{1}{3}m_{\mathrm{s}}$ and $M_3 = m
$, respectively. However, in a more realistic scenario, the mass of the spring
and its effect on the elongation of the segments must be considered, and
equation~\eref{Eq:Dl1} should be incorporated into the calculations. Therefore,
for each individual segment, the elongation should be given by
\begin{equation}
\Delta x_i = \frac{(M_i + \frac{1}{2} m_i)\,g}{k_i},
\label{Eq:Dxi2}
\end{equation}
where $m_i$ is the mass of the $i$th segment, $M_i$ is its corresponding total
hanging mass, and $k_i$ is the segment's spring constant. Consequently, for the
spring divided into three identical segments ($m_i = \frac{1}{3}
m_{\mathrm{s}}$), the total masses hanging from the first, second and third
segments are now $m + \frac{5}{6} m_{\mathrm{s}}$, $m + \frac{1}{2}
m_{\mathrm{s}}$ and $m + \frac{1}{6} m_{\mathrm{s}}$, respectively. This can be
explained by the following simple consideration: If a mass $m$ is attached to
the end of a spring of length $l$ and spring constant $k$, for three identical
segments with elongations $\Delta l_1$, $\Delta l_2$, and $\Delta l_3$, the
total spring elongation is given by
\begin{eqnarray}
\Delta l &= \Delta l_1 + \Delta l_2 + \Delta l_3 \nonumber
\\[10pt]
&= \int_0^{\frac{l}{3}} \xi(x)\,\rmd x +
\int_{\frac{l}{3}}^{\frac{2l}{3}} \xi(x)\,\rmd x +
\int_{\frac{2l}{3}}^l \xi(x)\,\rmd x \nonumber
\\[10pt]
&= \frac{(m + \frac{5}{6} m_{\mathrm{s}})\,g}{3\,k} +
\frac{(m + \frac{1}{2} m_{\mathrm{s}})\,g}{3\,k} +
\frac{(m + \frac{1}{6} m_{\mathrm{s}})\,g}{3\,k} \nonumber
\\[10pt]
&= \frac{(m + \frac{1}{2} m_{\mathrm{s}})\,g}{k}\,.
\label{Eq:Dl2}
\end{eqnarray}
As expected, equation~\eref{Eq:Dl2} is in agreement with
equation~\eref{Eq:Dl1}, and reveals the contribution of the mass of each
individual segment to the total elongation of the spring. It is also observed
from this equation that
\begin{equation}
\Delta l_1 - \Delta l_2 = \Delta l_2 - \Delta l_3 =
\frac{(\frac{1}{3} m_{\mathrm{s}})g}{3\,k} = \mbox{const.}
\label{Eq:Dl123}
\end{equation}
As we know, $\frac{1}{3} m_{\mathrm{s}}$ is the mass of each identical segment,
and $k_1 = k_2 = k_3 = 3\,k$ is the spring constant for each. Therefore, the
spring stretches non-uniformly under its own weight, but uniformly under the
external load, as also noted by Sawicki~\cite{Sawicki:276}.
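The piecewise integrals in equation~\eref{Eq:Dl2} and the constant difference of equation~\eref{Eq:Dl123} can also be verified symbolically; a short SymPy sketch:

```python
import sympy as sp

# Each third of the spring stretches as a massless spring of constant 3k
# loaded by the effective mass M_i, cf. Eq. (Dl2).
x, l, m, m_s, g, k = sp.symbols('x l m m_s g k', positive=True)
xi = (m + m_s * (l - x) / l) * g / (k * l)

dl = [sp.integrate(xi, (x, i * l / 3, (i + 1) * l / 3)) for i in range(3)]
M = [m + sp.Rational(5, 6) * m_s,
     m + sp.Rational(1, 2) * m_s,
     m + sp.Rational(1, 6) * m_s]
for dli, Mi in zip(dl, M):
    assert sp.simplify(dli - Mi * g / (3 * k)) == 0

# The successive differences are constant, Eq. (Dl123):
# dl_1 - dl_2 = dl_2 - dl_3 = (m_s/3) g / (3k).
assert sp.simplify((dl[0] - dl[1]) - (dl[1] - dl[2])) == 0
assert sp.simplify(dl[0] - dl[1] - m_s * g / (9 * k)) == 0
```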
\section{Results and Discussion}
Two particular cases were studied in this experiment. First, we considered a
spring-mass system in which the spring mass was small compared with the hanging
mass, and so it was ignored. In the second case, the spring mass was comparable
with the hanging mass and included in the calculations.
We started with a configuration of three approximately \textit{identical} spring
segments connected in series; each segment having $12$ coils
($n_1 = n_2 = n_3 = 12$)~\footnote{Although the three segments had the same
number of coils, the first and third segments had an additional portion of wire
where the spring was attached and the masses suspended. This added extra mass to
these segments, making them slightly different from each other and from the
second segment.}. When the spring was stretched by different weights, the
elongation of the segments increased linearly, as expected from Hooke's law.
Within the experimental error, each segment experienced the same displacement,
as predicted by~\eref{Eq:Dl123}. An example of experimental data obtained is
shown in~\tref{Table01}.
Simple linear regression was used to determine the slope of each trend line
fitting the data points of the force versus displacement graphs. \Fref{Fig3}(a)
clearly shows the linear response of the first segment of the spring, with a
resulting spring constant of $k_1=10.3\,\pm\,0.1\,\mbox{N/m}$. A similar
behaviour was observed for the second and third segments, with spring constants
$k_2=10.1\,\pm\,0.1\,\mbox{N/m}$, and $k_3=10.2\,\pm\,0.1\,\mbox{N/m}$,
respectively. For the entire spring, the spring constant was
$k=3.40\pm\,0.01\,\mbox{N/m}$, as shown in~\fref{Fig3}(b). The uncertainties in
the spring constants were calculated using the \textit{correlation coefficient}
$R$ of the linear regressions, as explained in Higbie's paper ``Uncertainty in
the linear regression slope''~\cite{Higbie:184}. Comparing the spring constant
of each segment with that for the total spring, we obtained that $k_1=3.03\,k$,
$k_2=2.97\,k$ and $k_3=3.00\,k$. As predicted by~\eref{Eq:ki}, each segment had
a spring constant three times larger than the resulting combination of the
segments in series, that is, $k_i = 3\,k$.
The reason why the uncertainty in the spring constant of the entire spring is
smaller than the corresponding spring constants of the segments may be explained
by the fact that the displacements of the spring as a whole have smaller
``relative errors'' than those of the individual segments. \Tref{Table01} shows
that, whereas the displacements of the individual segments $\Delta x_i$ are of
the same order of magnitude as the uncertainty in the measurement of the
elongation ($\pm 0.002\,\mbox{m}$), the displacements of the whole spring
$\Delta x_{\mathrm{s}}$ are much larger than this uncertainty.
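Higbie's prescription for the slope uncertainty can be sketched in a few lines of Python. The force--displacement data below are made up for illustration (they only mimic the order of magnitude of the measurements), and the formula $\sigma_b = |b|\sqrt{(1/R^2-1)/(n-2)}$ is the standard correlation-coefficient estimate:

```python
import math

def slope_with_uncertainty(x, y):
    """Least-squares slope b and its uncertainty estimated from the
    correlation coefficient R: sigma_b = |b| sqrt((1/R^2 - 1)/(n - 2))."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    r2 = sxy ** 2 / (sxx * syy)
    return b, abs(b) * math.sqrt((1 / r2 - 1) / (n - 2))

# Made-up displacement (m) and force (N) data, for illustration only.
disp  = [0.005, 0.010, 0.015, 0.020, 0.025]
force = [0.052, 0.104, 0.153, 0.208, 0.255]
k_fit, dk = slope_with_uncertainty(disp, force)
assert 9.5 < k_fit < 11.0 and dk > 0
```

A spreadsheet or `scipy.stats.linregress` gives the same slope; the uncertainty formula above reproduces Higbie's estimate from $R$.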
We next considered a configuration of two spring segments connected in series
with $12$ and $24$ coils, respectively ($n_1 = 12$, $n_2 = 24$). \Fref{Fig4}(a)
shows a graph of force against elongation for the second segment of the spring.
We obtained $k_2=5.07\,\pm\,0.03\,\mbox{N/m}$ using linear regression. For the
first segment and the entire spring, the spring constants were
$k_1=10.3\,\pm\,0.1\,\mbox{N/m}$ and $k=3.40\,\pm\,0.01\,\mbox{N/m}$,
respectively, as shown in~\fref{Fig4}(b). We then found that $k_1
= 3.03\,k$ and $k_2 = 1.49\,k$. Once again, these experimental results
proved equation~\eref{Eq:ki} correct ($k_1 = 3\,k$ and $k_2 = \frac{3}{2}\,k$).
We finally considered the same two-segment configuration as above, but unlike the
previous trial, this time the spring mass ($4.5 \pm 0.1\,\mbox{g}$) was included
in the experimental calculations. Figures~\ref{Fig5}(a)--(b) show results for
the two spring segments, including spring masses, connected in series ($n_1 =
12$, $n_2 = 24$). Using this method, the spring constant for the whole spring
was found to be slightly different from that obtained when the spring was
assumed ideal (massless). This difference may be explained by the corrections
made to the total mass as given by~\eref{Eq:Dl2}. The spring constants
obtained for the segments were $k_1 = 2.94\,k$ and $k_2 = 1.51\,k$ with $k =
3.34 \pm 0.04\,\mbox{N/m}$ for the entire spring. These experimental results
were also consistent with equation~\eref{Eq:ki}. The experimental data obtained
is shown in~\tref{Table02}.
When the experiment was performed by the students, measuring the positions of
the paint marks on the stretched spring was perhaps the most difficult part of
the activity. Every time an extra weight was added to the end of the spring,
the starting point of each individual segment changed its position. For the
students, keeping track of these new positions was a laborious task. Most of
the experimental systematic error came from this portion of the activity. Using
equation~\eref{Eq:Dxi1} to obtain the elongation of the segments substantially
facilitated the calculation and tabulation of the data for subsequent analysis.
The use of computational tools (spreadsheets) to perform the linear regression
also considerably simplified the calculations.
\section{Conclusions}
In this work, we studied experimentally the validity of the static Hooke's law
for a system of springs connected in series using a simple single-spring scheme
to represent the combination of springs. We also verified experimentally the
fact that the reciprocal of the spring constant of the entire spring equals the
addition of the reciprocal of the spring constant of each segment by including
well-known corrections (due to the finite mass of the spring) to the total
hanging mass. Our results quantitatively show the validity of Hooke's law for
combinations of springs in series [equation~\eref{Eq:1/k}], as well as the
dependence of the spring constant on the number of coils in a spring
[equation~\eref{Eq:ki}]. The experimental results were in excellent agreement,
within the standard error, with those predicted by theory.
The experiment is designed to provide several educational benefits to the
students, such as helping to develop measuring skills, encouraging the use of
computational tools to perform linear regression and error propagation analysis,
and stimulating creativity and logical thinking by exploring Hooke's law in
a combined system of \textit{springs in series} simulated by a \textit{single}
spring. Because of its simple setup, this experiment is easy to adopt in any high
school or undergraduate physics laboratory, and can be extended to any number of
segments within the same spring such that all segments represent a combination
of springs in series.
\ack
The authors gratefully acknowledge the School of Mathematical and Natural
Sciences at the University of Arkansas-Monticello (\#11-2225-5-M00) and the
Department of Physics at Eastern Illinois University for providing funding and
support for this work. Comments on earlier versions of the paper were gratefully
received from Carol Trana. The authors are also indebted to the anonymous
referee for the valuable comments and suggestions made.
\section*{References}
\section{Introduction} \label{intro}
According to the Standard Model of particle physics and to some of its extensions, spontaneous symmetry breaking and phase transitions constitute a crucial aspect for the early evolution of the universe.
When the temperature starts decreasing because of the expansion of the universe, spontaneous symmetry breaking can be triggered so that the interactions among elementary particles undergo (dis)continuous jumps from one phase to another. New phases with a broken symmetry will form in many regions at the same time, and in each of them only one single vacuum state will be spontaneously chosen. Sufficiently separated spatial regions may not be in causal contact, so that it is quite natural to assume that the early universe is divided into many causally disconnected patches whose size is roughly given by the Hubble radius\footnote{The Hubble radius is given by $R_{\rm H}\sim H^{-1}=a(t)/\dot{a}(t)$ where $a(t)$ is the scale factor depending on the cosmic time $t$. More precisely speaking, the vacuum is chosen over the region with the correlation length of the fields at that time.}, and in each of which the vacuum is independently determined.
As the universe expands, it can eventually happen that patches with different vacua collide in such a way that boundaries begin to form between adjacent regions with a different vacuum state.
Since the field associated with the spontaneous breaking has to vary in a continuous way between different vacua, it must interpolate smoothly from one vacuum to another via the hill of the potential. This implies that finite-energy field configurations must form at the boundaries separating patches with different vacua, and must persist even after the phase transition is completed. These objects are called \textit{topological defects}~\cite{Vilenkin:2000jqa}, and their formation mechanism (in a cosmological context) is known as \textit{Kibble mechanism}~\cite{Kibble:1976sj,Kibble:1980mv}.
Topological defects can be of several types and spatial dimensions, and their existence is in one-to-one correspondence with the topology of the vacuum manifold~\cite{Vilenkin:2000jqa}. Domain walls are two-dimensional objects that form when a discrete symmetry is broken, so that the associated vacuum manifold is disconnected. Strings are one-dimensional objects associated with a symmetry breaking whose corresponding vacuum manifold is not simply connected, and their formation is predicted both by some extensions of the Standard Model of particle physics and by some classes of Grand Unified Theories (GUTs). Monopoles are zero-dimensional objects whose existence is ensured when the vacuum manifold contains non-contractible two-spheres, and they constitute an inevitable prediction of GUTs. Moreover, there exist other topological objects called textures, which can form when larger groups are broken and whose vacuum-manifold topology is more complicated.
Since the existence of topological defects is intrinsically related to the particular topology of the vacuum manifold, they can naturally appear in several theories beyond the Standard Model that predict a spontaneous symmetry breaking at some high-energy scale.
For instance, the spontaneous breaking of $SU(5)$ symmetry in GUT leads to the formation of various topological defects.
Therefore, observations and phenomenology of topological defects are very important, and should be regarded as unique test benches for constraining theories of particle physics and of the early universe. This also means that for any alternative theory - e.g. one that aims at giving a complete ultraviolet (UV) description of the fundamental interactions - it is worth studying the existence of topological defects, investigating how their properties differ with respect to other models/theories, and putting them to the test with current and future experiments.
In this paper we discuss for the first time topological defects in the context of \textit{nonlocal field theories} in which the Lagrangians contain infinite-order differential operators. In particular, we will make a very detailed analysis of domain wall solutions.
The type of differential operators that we will consider does not lead to ghost degrees of freedom in the particle spectrum, despite the presence of higher-order time derivatives in the Lagrangian.
The work is organized as follows:
\begin{enumerate}
\item[\textbf{Sec.~\ref{sec:nonlocal-review}:}] we introduce nonlocal field theories by discussing the underlying motivations and their main properties.
\item[\textbf{Sec.~\ref{sec-review}:}] we briefly review the domain wall solution in the context of standard (local) two-derivative theories by highlighting various features whose mathematical and physical meanings will be important for the subsequent sections.
\item[\textbf{Sec.~\ref{sec-nft-dw}:}] we analyze for the first time domain wall solutions in the context of ghost-free nonlocal field theories by focusing on the simplest choice for the infinite-order differential operator in the Lagrangian. Despite the high complexity of non-linear and infinite-order differential equations, we will be able to find an approximate analytic solution by relying on the fact that the topological structure of the vacuum manifold ensures the existence of an exact domain wall configuration.
Firstly, we analytically study the asymptotic behavior of the solution close to the two symmetric vacua.
Secondly, we find a linearized nonlocal solution by perturbing around the local domain wall configuration. We show that the linearized treatment agrees with the asymptotic analysis, and make remarks on the peculiar behavior close to the origin. We perform an order-of-magnitude estimation of width and energy per unit area of the domain wall. Furthermore, we derive a theoretical lower bound on the scale of nonlocality for the specific domain wall configuration under investigation.
\item[\textbf{Sec.~\ref{sec-other}:}] we briefly comment on other topological defects such as strings and monopoles.
\item[\textbf{Sec.~\ref{sec-dis}:}] we summarize our results, and discuss both theoretical and phenomenological future tasks.
\item[\textbf{App.~\ref{sec-corr}:}] we develop a formalism to confirm the validity of the linearized solution close to the origin.
\item[\textbf{App.~\ref{app-em}:}] we find a compact expression for the energy-momentum tensor in a generic nonlocal (infinite-derivative) field theory, and apply it to the specific nonlocal scalar model analyzed in the main text.
\end{enumerate}
We adopt the mostly positive convention for the metric signature, $\eta=\diag(-,+,+,+),$ and work with the natural units system, $c=\hbar=1.$
\section{Nonlocal field theories}\label{sec:nonlocal-review}
The wording `nonlocal theories' is quite generic and, in principle, can refer to very different theories due to the fact that the nonlocal nature of fields can manifest in various ways. In this work with `nonlocality' we specifically mean that Lagrangians are made up of certain non-polynomial differential operators containing infinite-order derivatives.
A generic nonlocal Lagrangian contains both polynomial and non-polynomial differential operators, i.e. given a field $\phi(x)$ one can have
\begin{equation}
\mathcal{L}\equiv\mathcal{L}\left(\phi,\partial\phi,\partial^2\phi,\dots,\partial^n\phi,\frac{1}{\Box}\phi,{\rm ln}\left(- \Box/M_s^2\right)\phi,e^{\Box/M_s^2}\phi,\dots\right),\label{nonlocal-lagr}
\end{equation}
where $\Box=\eta^{\mu\nu}\partial_{\mu}\partial_{\nu}$ is the flat d'Alembertian and $M_s$ is the energy scale at which nonlocal effects are expected to become important.
Non-analytic differential operators like $1/\Box$ and ${\rm log}(-\Box)$ are usually important at infrared (IR) scales, e.g. they can appear as contributions in the finite part of the quantum effective action in standard perturbative quantum field theories~\cite{Barvinsky:2014lja,Woodard:2018gfj}. Analytic operators like $e^{\Box/M_s^2}$, on the other hand, are usually responsible for UV modifications and do not affect the IR physics. Such transcendental differential operators typically appear in the context of string field theory~\cite{Witten:1985cc,Gross:1987kza,Eliezer:1989cr,Tseytlin:1995uq,Siegel:2003vt,Pius:2016jsl,Erler:2020beb} and the p-adic string~\cite{Freund:1987kt,Brekke:1988dg,Freund:1987ck,Frampton:1988kr,Dragovich:2020uwa}.
We are interested in alternative theories that extend the Standard Model in the UV regime, therefore we will focus on analytic differential operators. In general, we can consider a scalar Lagrangian of the following type\footnote{To keep the formula simpler we do not write the scale of nonlocality in the argument of $F(-\Box)$ which, to be more precise, should read $F(-\Box/M_s^2).$}:
\begin{equation}
\mathcal{L}=-\frac{1}{2}\phi F(-\Box)\phi - V(\phi)\,,\label{analytic-Lag}
\end{equation}
where $V(\phi)$ is a potential term, and the kinetic operator can be defined through its Taylor expansion
\begin{equation}
F(-\Box)=\sum\limits_{n=0}^\infty f_n(-\Box)^n\,,\label{nl-oper}
\end{equation}
where $f_n$ are constant coefficients. It should now be clear that the type of nonlocality under investigation manifests through the presence of infinite-order derivatives.
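The action of such analytic operators is most transparent on plane waves, where each power of $-\Box$ is replaced by a number. As a sketch (not taken from the paper), the following SymPy snippet checks, for illustrative values of $p$ and $M_s$, that the truncated Taylor series of $e^{-\partial_x^2/M_s^2}$ applied to a static mode $\sin(px)$ converges to $e^{p^2/M_s^2}\sin(px)$:

```python
import sympy as sp

# Infinite-derivative operators act diagonally on plane waves: with the
# mostly-plus signature, a static mode obeys -Box sin(p x) = p^2 sin(p x).
# Truncating the Taylor series of e^{-d_x^2/Ms^2} term by term should
# therefore reproduce e^{p^2/Ms^2} sin(p x). Values of p, Ms illustrative.
x = sp.symbols('x')
p, Ms = sp.Rational(1, 2), sp.Integer(1)
phi = sp.sin(p * x)

applied = sum((sp.Integer(-1) / Ms**2) ** n / sp.factorial(n)
              * sp.diff(phi, x, 2 * n) for n in range(25))
expected = sp.exp(p**2 / Ms**2) * phi

err = (applied - expected).subs(x, sp.Rational(3, 7))
assert abs(sp.N(err)) < 1e-10
```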
To recover the correct low-energy limit and avoid IR modifications, it is sufficient to require that the function $F(z),$ with $z\in \mathbb{C},$ does not contain any poles in the complex plane. Thus, we choose $F(-\Box)$ to be an \textit{entire function} of the d'Alembertian $\Box$.
By making use of the Weierstrass factorization theorem for entire functions we can write
\begin{equation}
F(-\Box)=e^{\gamma(-\Box)}\prod\limits_{i=1}^{N}(-\Box+m_i^2)^{r_i}\,,
\end{equation}
where $\gamma(-\Box)$ is another entire function, $m_i^2$ are the zeroes of the kinetic operator $F(-\Box),$ and $r_i$ is the multiplicity of the $i$-th zero. The integer $N\geq 0$ counts the number of zeroes and, in general, can be either finite or infinite.
To prevent the appearance of ghost degrees of freedom, it is sufficient to exclude extra zeroes besides the standard two-derivative one\footnote{It is worth mentioning that this is \textit{not} the unique possibility for ghost-free higher derivative theories. In fact, we can allow additional pairs of complex conjugate poles and still avoid ghost degrees of freedom and respect unitarity, in both local~\cite{Modesto:2015ozb,Modesto:2016ofr,Anselmi:2017yux,Anselmi:2017lia} and nonlocal theories~\cite{Buoninfante:2018lnh,Buoninfante:2020ctr}. Moreover, tree-level unitarity was shown to be satisfied also if one admits branch cuts in the bare propagator~\cite{Abel:2019ufz,Abel:2019zou}.}. We impose that the kinetic operator does not contain any additional zeroes, so that the effects induced by new physics are entirely captured by the differential operator $e^{\gamma(-\Box)}.$ Therefore, we consider the following Lagrangian:
\begin{equation}
\mathcal{L}=\frac{1}{2}\phi\, e^{\gamma(-\Box)}(\Box-m^2)\,\phi -V(\phi)\,,\label{exp-nonlocal}
\end{equation}
whose propagator reads
\begin{equation}
\Pi(p^2)=-i\frac{e^{-\gamma(p^2)}}{p^2+m^2}\,.\label{propag-nonlocal}
\end{equation}
From the last equation it is evident that no additional pole appears other than $p^2=-m^2,$ because $e^{-\gamma(p^2)}$ is an exponential of an entire function and as such does not have poles in the complex plane.
More generally, under the assumption that the transcendental function $e^{-\gamma(p^2)}$ is convergent in the limits $p^0\rightarrow\pm i\infty,$ it was shown that an S-matrix can be well defined for the Lagrangian~\eqref{exp-nonlocal}, and it can be proven to satisfy perturbative unitarity at any loop order~\cite{Pius:2016jsl,Briscese:2018oyx,Chin:2018puw,Koshelev:2021orf}. Moreover, the presence of the exponential function can make loop integrals convergent, so that the scalar theory in Eq.~\eqref{exp-nonlocal} turns out to be finite in the high-energy regime~\cite{Krasnikov:1987yj,Moffat:1990jj,Tomboulis:2015gfa,Buoninfante:2018mre}. Very interestingly, for such nonlocal theories, despite the presence of infinite-order time derivatives, a consistent initial value problem can be formulated in terms of a finite number of initial conditions~\cite{Barnaby:2007ve,Calcagni:2018lyd,Erbin:2021hkf}.
Transcendental operators of this type, involving an entire function $\gamma(-\Box)$, have been intensively studied in past years, not only in the context of quantum field theories in flat space~\cite{Krasnikov:1987yj,Moffat:1990jj,Tomboulis:2015gfa,Buoninfante:2018mre,Boos:2018kir,Boos:2019fbu}, but also to formulate ghost-free infinite-derivative theories of gravity~\cite{Krasnikov:1987yj,Kuzmin:1989sp,Tomboulis:1997gg,Biswas:2005qr,Modesto:2011kw,Biswas:2011ar,Frolov:2015bia,Frolov:2015bta,Buoninfante:2018xiw,Buoninfante:2018stt,Buoninfante:2018xif,Koshelev:2016xqb,Koshelev:2017tvv,Koshelev:2020foq,Kolar:2020max}.
In this work we assume that fundamental interactions are intrinsically nonlocal, and that nonlocality becomes relevant in the UV regime. Thus, we consider nonlocal quantum field theories as possible candidates for UV-complete theories beyond the Standard Model. With this in mind, we will analyze topological defects in infinite-derivative field theories, and investigate the physical implications induced by nonlocality in comparison to standard (local) two-derivative theories.
In what follows, we work with the simplest ghost-free nonlocal model for which the entire function is given by
\begin{equation}
\gamma(-\Box)=-\frac{\Box}{M_s^2}\,.\label{entire-box^1}
\end{equation}
\section{Standard domain wall: a brief review} \label{sec-review}
Before discussing domain walls in the context of nonlocal quantum field theories, it is worth recalling some of their basic properties in standard two-derivative field theories, which will then be useful for the main part of this work.
In the presence of a domain wall one has to deal with a static scalar field that only depends on one spatial coordinate, e.g. $x,$ and whose Lagrangian reads~\cite{Vilenkin:2000jqa,Saikawa:2017hiv}
\begin{align}
{\cal L} = \frac{1}{2}\qty(\partial_x\phi)^2 -U(\phi)\,,\qquad U(\phi)=\frac{\lambda}{4}(\phi^2-v^2)^2\,,\label{local-Lag-wall}
\end{align}
which is $\mathbb{Z}_2$-symmetric as it is invariant under the transformation $\phi\rightarrow-\phi;$ $\lambda>0$ is a dimensionless coupling constant and $v>0$ is related to the symmetry-breaking energy scale. The quartic potential has two degenerate minima at $\phi=\pm v$ ($U(\pm v)=0$).
As mentioned in the Introduction, the discrete symmetry $\mathbb{Z}_2$ can be spontaneously broken, for instance, in the early universe because of thermal effects. As a consequence, causally disconnected regions of the universe can be characterized by a different choice of the vacuum (i.e. $\phi=+v$ or $\phi=-v$), and when two regions with different vacua collide a continuous two-dimensional object -- called domain wall -- must form at the boundary of these two regions.
Let us now determine explicitly such a finite-energy configuration interpolating $\pm v.$
First of all, we impose the asymptotic boundary conditions
\begin{align}
\phi(-\infty) = -v\,,
\qquad
\phi(\infty) = v\,.\label{boundary conditions}
\end{align}
The field configuration must be non-singular and of finite energy; therefore, $\phi(x)$ must interpolate smoothly between the two vacua. This implies that there exists a point $x_0\in \mathbb{R}$ such that $\phi(x_0)=0.$ Without any loss of generality, we can choose the reference frame such that the centre of the wall is at the origin, $x_0=0,$ i.e. $\phi(0)=0.$
The energy density can be computed as
\begin{align}
\mathcal{E}(x) \equiv T_{0}^0(x)= \frac{1}{2}(\del_x\phi)^2 + \frac{\lambda}{4}(\phi^2-v^2)^2\,,\label{local-e-dens}
\end{align}
from which it follows that $\mathcal{E}(0)\geq U(0)=\lambda v^4/4,$ and this implies that there exists a solution that does not dissipate at infinity. Hence, the topological structure of the vacuum manifold -- which is disconnected in the case of $\mathbb{Z}_2$ symmetry -- ensures the existence of a non-trivial field configuration of finite energy.
We can determine qualitatively the behavior of this field configuration by making an order-of-magnitude estimation of the width $R$ (along the $x$-direction), and of the energy per unit area $E$ of the wall.
In fact, we can define the width of the wall in three ways. The first one is to use the energy density in Eq.~\eqref{local-e-dens}. The lowest energy configuration interpolating the two vacua can be found by balancing the kinetic and potential term in the energy density $\mathcal{E}(x)$.
By approximating the gradient with the inverse of the width, $\partial_x\sim 1/R,$ and the field value with $\phi\sim v,$ Eq.~\eqref{local-e-dens} gives
\begin{align}
\frac{1}{2} \frac{1}{R^2} v^2 \sim \frac{\lambda}{4} v^4\quad
\Rightarrow\quad
R \sim \sqrt{\frac{2}{\lambda}} \frac{1}{v}\,,\label{local-radius-estim}
\end{align}
from which it follows that the width of the wall is of the same order as the Compton wavelength, $R\sim(\sqrt{\lambda}v)^{-1}\sim m^{-1}$.
Whereas, the energy per unit area can be estimated as
\begin{eqnarray}
E&=& \int_\mathbb{R}{{\rm d}x}\qty[\frac{1}{2}(\del_x\phi)^2+\frac{\lambda}{4}(\phi^2-v^2)^2] \nonumber\\[2mm]
&\sim& (\text{width of the wall}) \times (\text{energy density}) \nonumber\\[2mm]
& \sim& R \times \lambda v^4 \nonumber\\[2mm]
&\sim&\sqrt{\lambda}v^3\,.\label{local-energ-estim}
\end{eqnarray}
The other two ways are to use the exact configuration (solution) of the domain wall.
The field equation
\begin{align}
\partial_x^2 \phi(x) = \lambda \phi(\phi^2-v^2)\label{local-field-eq}
\end{align}
can be solved by quadrature, and an exact analytic solution can be found that
satisfies all the qualitative properties discussed above.
The exact solution is sometimes called a `kink', and it reads~\cite{Vilenkin:2000jqa}
\begin{align}
\phi(x) = v \tanh \left(\sqrt{\frac{\lambda}{2}}vx\right)\,.\label{local-dom-wal-sol}
\end{align}
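That this configuration solves the field equation~\eqref{local-field-eq} can be verified symbolically; a minimal SymPy check:

```python
import sympy as sp

# Check that phi(x) = v tanh(sqrt(lambda/2) v x) satisfies
# phi'' = lambda phi (phi^2 - v^2), with phi(0) = 0.
x = sp.symbols('x')
lam, v = sp.symbols('lambda v', positive=True)
phi = v * sp.tanh(sp.sqrt(lam / 2) * v * x)

residual = sp.diff(phi, x, 2) - lam * phi * (phi**2 - v**2)
assert sp.simplify(residual) == 0
assert phi.subs(x, 0) == 0
```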
Its asymptotic behavior is given by
\begin{align}
|x|\rightarrow \infty\quad \Rightarrow \quad \phi(x)\sim \pm v\qty(1-2e^{-\sqrt{2\lambda}v|x|})\,.
\label{eq:asymp sol in local case}
\end{align}
Through the exact solution~\eqref{local-dom-wal-sol}, we can define the width of the wall in two ways. One way is to identify it with the typical length scale over which $\phi(x)$ changes in proximity of the origin, that is, the length scale $\ell$ defined as the inverse of the gradient at the origin, i.e. $\ell\sim v/(\partial_x\phi|_{x=0}),$ where the scale $v$ is introduced for dimensional reasons. From Eq.~\eqref{local-dom-wal-sol} we have
\begin{equation}
\del_x\phi(x)|_{x=0}=v^2\sqrt{\frac{\lambda}{2}},
\end{equation}
which yields
\begin{equation}
\ell\sim \sqrt{\frac{2}{\lambda}}\frac{1}{v} = R\,.\label{ell-linearized-local}
\end{equation}
The last way is to use the asymptotic behavior given in Eq.~\eqref{eq:asymp sol in local case}. The width of the wall, $\widetilde{R}$, can be defined as
\begin{align}
|x|\rightarrow \infty\quad \Rightarrow \quad \phi(x)\sim \pm v\qty(1-2e^{-\frac{2|x|}{\widetilde{R}}})\,,
\label{eq:asymp radius}
\end{align}
which yields $\widetilde{R} \sim \sqrt{\frac{2}{\lambda}} \frac{1}{v} = R = \ell\,.$ In the local case, all three definitions give the same expression, and there is no need to distinguish between them. As we will show, however, in the nonlocal case the three definitions give different expressions at sub-leading order, although two of them ($R$ and $\widetilde{R}$) behave similarly. In fact, $R$ and/or $\widetilde{R}$ might be more appropriate as definitions of the width (or radius) of a domain wall, because $\ell$ is related to the behavior of the solution close to the origin and far from the vacua.
We can obtain the energy per unit area as
\begin{eqnarray}
E = \int_{\mathbb{R}} {\rm d}x\mathcal{E}(x) =\int_{\mathbb{R}} {\rm d}x\qty(\frac{{\rm d}\phi}{{\rm d}x})^2
= \frac{4}{3} \sqrt{\frac{\lambda}{2}}v^3\,,
\end{eqnarray}
which is consistent with the estimation in Eq.~\eqref{local-energ-estim} up to an order-one numerical factor.
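For completeness, the last equality can be obtained explicitly. Substituting the kink solution~\eqref{local-dom-wal-sol} into the energy density, and changing variable to $u=\sqrt{\lambda/2}\,vx,$ we find
\begin{align}
\frac{{\rm d}\phi}{{\rm d}x}=\sqrt{\frac{\lambda}{2}}\,\frac{v^2}{\cosh^2\qty(\sqrt{\frac{\lambda}{2}}vx)}
\quad\Rightarrow\quad
E=\sqrt{\frac{\lambda}{2}}\,v^3\int_{\mathbb{R}}\frac{{\rm d}u}{\cosh^4 u}=\frac{4}{3}\sqrt{\frac{\lambda}{2}}\,v^3\,,
\end{align}
where we used $\int_{\mathbb{R}}{\rm d}u\,\cosh^{-4}u=4/3.$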
The discussion in this Section was carried out for a local two-derivative theory in one spatial dimension. However, the essential concepts and methods, such as the existence condition related to the non-trivial topology of the vacuum manifold and the order-of-magnitude estimations, can be applied to nonlocal field theories and to higher-dimensional defects (e.g. strings and monopoles).
\section{Domain wall in nonlocal field theories} \label{sec-nft-dw}
In this Section we analyze the domain wall solution for the nonlocal field theory~\eqref{exp-nonlocal} with the simplest choice of entire function given in~\eqref{entire-box^1}. Hence, we consider a nonlocal generalization of the local Lagrangian~\eqref{local-Lag-wall}, given by
\begin{align}
{\cal L}
= \frac{1}{2}\phi e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)\phi
-\frac{\lambda}{4}\qty(\phi^4+v^4)\,,
\label{eq:Lagrangian for DW}
\end{align}
whose field equation reads
\begin{align}
e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)\phi = \lambda\phi^3\,.
\label{eq:the equation of the motion for DW}
\end{align}
We can easily verify that in the limit $\partial_x^2/M_s^2\rightarrow 0$ we consistently recover the local two-derivative case, i.e. Eqs.~\eqref{local-Lag-wall} and~\eqref{local-field-eq}.
In this case the differential equation is non-linear and highly nonlocal, and finding a solution appears very difficult not only analytically but also numerically. However, despite the complexity of the scalar field equation, we can still find a domain wall configuration. First of all, the existence of such a solution is still guaranteed by the non-trivial topological structure of the vacuum manifold, which is disconnected in the case of $\mathbb{Z}_2$ symmetry. Indeed, also in this case a finite-energy field configuration that smoothly interpolates between the two vacua $\phi=\pm v$ must exist. Furthermore, relying on the fact that the presence of the exponential operator $e^{-\partial_x^2/M_s^2}$ should not change the number of degrees of freedom and of initial conditions~\cite{Barnaby:2007ve}, we can impose the same boundary conditions as in the two-derivative case in Eq.~\eqref{boundary conditions}, i.e. $\phi(\pm \infty)=\pm v.$ We can set $x_0=0$ to be the center of the wall such that $\phi(0)=0,$ and expect $\phi(x)$ to be a smooth odd function.
\begin{figure}[t]
\centering
\includegraphics[scale=0.6]{schematic-illustration.jpg}
\caption{Schematic illustration of the analysis made in this Section to study the domain wall configuration in nonlocal field theory, whose qualitative behavior is drawn consistently with the boundary conditions $\phi(\pm\infty)=\pm v$, and with the choice of the origin $\phi(0)=0.$ We analyze the behavior of the domain wall configuration in several regimes, and use different methods to find approximate analytic solutions. (i) We study the asymptotic behavior of the domain wall solution in the limit $|x|\rightarrow \infty.$ (ii) We find a linearized nonlocal solution by perturbing around the local domain wall configuration treated as a background, and analyze its behavior not only at infinity but also close to the origin. (iii) We make an order-of-magnitude estimation for the width and the energy per unit area of the wall, and verify the consistency with the analytic approximate solutions.}
\label{fig:conceptual figure for analysis of nonlocal domain wall}
\end{figure}
Very interestingly, knowing that a domain wall solution must exist, and equipped with a set of boundary conditions, we can still find an approximate analytic solution by working perturbatively in some regime. We will proceed as follows. In Sec.~\ref{subsec-asym} we will determine the solution asymptotically close to the vacua $\pm v$ (i.e. for $|x|\rightarrow \infty$). In Sec.~\ref{subsec-pert} we will find a linearized nonlocal solution by perturbing around the known local `kink' configuration treated as a background. Finally, in Sec.~\ref{subsec-order} we will make an order-of-magnitude estimation for the width and the energy of the nonlocal domain wall, and check the consistency with the approximate analytic solutions. See Fig.~\ref{fig:conceptual figure for analysis of nonlocal domain wall} for a schematic illustration of our analysis.
\subsection{Asymptotic solution for $|x|\rightarrow \infty$} \label{subsec-asym}
As a first step we analyze the asymptotic behavior of the solution close to the two vacua $\phi=\pm v$, i.e. in the regime $|x|\rightarrow \infty.$ Let us first consider the perturbation around $\phi=+v,$ i.e. we write
\begin{equation}
\phi(x)=v+\delta\phi(x)\,,\qquad \frac{|\delta\phi|}{v}\ll 1\,,
\end{equation}
so that the linearized field equation reads
\begin{align}
e^{-\del_x^2/M_s^2} (\del_x^2+\lambda v^2) \delta\phi = 3\lambda v^2 \delta\phi\,.
\end{align}
Taking inspiration from the asymptotic behavior of the domain wall in the local case (see Eq.~\eqref{eq:asymp sol in local case}), as an ansatz we assume that $\phi(x)$ approaches the vacuum exponentially, i.e. we take
\begin{align}
\delta\phi = \phi - v = A e^{-B x} \label{eq:def for asymp sol}
\end{align}
where $A$ and $B>0$ are two constants.
Since the exponential is an eigenfunction of the kinetic operator, we can easily obtain an equation for $B,$
\begin{align}
e^{-B^2/M_s^2} (B^2+\lambda v^2) = 3\lambda v^2\,.\label{eq-for-B}
\end{align}
By using the principal branch $W_0(x)$ of the Lambert-W function (defined as the inverse function of $f(x)=xe^x$) we can solve Eq.~\eqref{eq-for-B} as follows
\begin{align}
B^2 = - M_s^2\, W_0\qty(-\frac{3\lambda v^2}{M_s^2}e^{-\lambda v^2/M_s^2})-\lambda v^2\,.
\label{eq:coeff B in asymp sol}
\end{align}
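Explicitly, the inversion proceeds by recasting Eq.~\eqref{eq-for-B} in the canonical form $we^w=c$: multiplying both sides by $-e^{-\lambda v^2/M_s^2}/M_s^2$ gives
\begin{align}
-\frac{B^2+\lambda v^2}{M_s^2}\,e^{-(B^2+\lambda v^2)/M_s^2}
=-\frac{3\lambda v^2}{M_s^2}\,e^{-\lambda v^2/M_s^2}\,,
\end{align}
so that $w=-(B^2+\lambda v^2)/M_s^2=W_0\qty(-\frac{3\lambda v^2}{M_s^2}e^{-\lambda v^2/M_s^2}),$ which is equivalent to Eq.~\eqref{eq:coeff B in asymp sol}.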
Before continuing, let us make some remarks on the Lambert-W function. It is a multivalued function with an infinite number of branches $W_n,$ $n\in \mathbb{Z}.$ The only real values are given by the branch $W_0(x)$ for $x\geq -1/e,$ while an additional real solution comes from the branch $W_{-1}(x)$ for $-1/e\leq x<0.$ In the equations above we have taken the so-called principal branch $W_0,$ and we will do the same in the rest of the paper. However, we will also comment on the branch $W_{-1}$ and the physical implications associated with it. Regarding the higher-order branches $n>0,$ they generate non-physical complex values and in some cases do not even recover the local limit; therefore, we discard such solutions.
Note that the coefficient $A$ cannot be determined by means of the asymptotic analysis above, as it factors out of the linearized field equation; however, as we will explain in Sec.~\ref{subsec-pert}, we will be able to determine it up to order $\mathcal{O}(1/M_s^2)$.
\subsubsection{A theoretical constraint on the scale of nonlocality} \label{subsubsec-cond}
The use of the Lambert-W function to obtain the solution for $B$ in Eq.~\eqref{eq:coeff B in asymp sol} relied on the fact that Eq.~\eqref{eq-for-B} could be inverted. As explained above, the equation $we^w=c$ admits a real solution on the principal branch if and only if $c \in \{xe^x\,|\,x\in\mathbb{R}\}$, i.e. if and only if $c\geq -1/e$.
Applying this condition to the argument of the Lambert-W function in Eq.~\eqref{eq:coeff B in asymp sol}, we get
\begin{align}
-\frac{3\lambda v^2}{M_s^2}e^{-\lambda v^2/M_s^2}\geq -\frac{1}{e}\,,\label{condition}
\end{align}
and inverting in terms of the (principal branch) Lambert-W function we obtain the following \textit{theoretical constraint}:
\begin{align}
M_s^2\geq -\frac{\lambda v^2}{W_0\qty(-1/3e)}\,, \label{eq:neccessary cond for asymp sol}
\end{align}
where we have used the fact that $W_0(x)$ is a monotonically increasing function. The inequality~\eqref{eq:neccessary cond for asymp sol} means that
the energy scale of nonlocality must be greater than the symmetry-breaking scale $\sqrt{\lambda} v.$ We can evaluate $W_0\qty(-1/3e)\simeq -0.14,$ so that the lower bound reads $M_s^2\gtrsim 7.14 \lambda v^2.$
One usually obtains constraints on the free parameters of a theory by using experimental data. In the present work, instead, we found a purely theoretical constraint. See Sec.~\ref{sec-dis} for further discussions on this feature.
Given the fact that $\lambda v^2/M_s^2< 0.14,$ we can expand~\eqref{eq:coeff B in asymp sol} and obtain
\begin{align}
B^2
= 2 \lambda v^2 \qty(1+\frac{3\lambda v^2}{M_s^2})
+ {\cal O}\qty(\qty(\frac{\lambda v^2}{M_s^2})^2)\,,
\label{eq:series expansion for coeff B}
\end{align}
or by taking the square root,
\begin{align}
B
= \sqrt{2 \lambda} v \qty(1+\frac{3}{2}\frac{\lambda v^2}{M_s^2})
+ {\cal O}\qty(\qty(\frac{\lambda v^2}{M_s^2})^2)\,\,.
\label{eq:series expansion for coeff B-sqrt}
\end{align}
From this expression, we can obtain the width of the wall, $\widetilde{R}$, defined in Eq.~\eqref{eq:asymp radius} as
\begin{equation}
\widetilde{R} \sim \frac{2}{B}\sim \sqrt{\frac{2}{\lambda}}\frac{1}{v}\left(1-\frac32\frac{\lambda v^2}{M_s^2}\right)\,.\label{eq:tildeR}
\end{equation}
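The expansion in Eq.~\eqref{eq:series expansion for coeff B} can be verified quickly: writing $B^2=2\lambda v^2(1+\epsilon)$ with $\epsilon=\mathcal{O}(\lambda v^2/M_s^2),$ and using $e^{-B^2/M_s^2}\simeq 1-2\lambda v^2/M_s^2,$ Eq.~\eqref{eq-for-B} becomes
\begin{align}
\lambda v^2\qty(3+2\epsilon)\qty(1-\frac{2\lambda v^2}{M_s^2})\simeq 3\lambda v^2
\quad\Rightarrow\quad
\epsilon\simeq\frac{3\lambda v^2}{M_s^2}\,,
\end{align}
which reproduces the first-order coefficient quoted above.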
Also, from Eq.~\eqref{eq:series expansion for coeff B-sqrt} we can check that in the local limit $M_s\rightarrow \infty$ we recover the two-derivative case in Eq.~\eqref{eq:asymp sol in local case}:
\begin{align}
\lim_{M_s\to\infty} B^2 = 2 \lambda v^2 = \qty(\sqrt{2\lambda}v)^2\equiv B_{\rm L}^2\,.
\end{align}
Furthermore, from~\eqref{eq:series expansion for coeff B} we can notice that $B\geq B_{\rm L}=\sqrt{2\lambda}v.$ This physically means that the nonlocal domain wall solution approaches the vacuum $\phi=+v$ \textit{faster} as compared to the local two-derivative case. This feature, which is manifest in Eq.~\eqref{eq:tildeR}, also suggests that the width of the nonlocal domain wall is \textit{smaller} than in the local case; indeed, this fact will also be observed in the expression of $R$ in the next Subsections.
So far we have only focused on the asymptotic solution for $x\rightarrow +\infty$ ($\phi(+\infty)=+v$), but the same analysis can be applied to the other asymptotic limit $x\rightarrow -\infty$ ($\phi(-\infty)=-v$), and the same results hold because of the $\mathbb{Z}_2$ symmetry.
\paragraph{Remark 1.} Before concluding this Subsection it is worth commenting on the validity of the asymptotic solution we determined. On the one hand, we know that the existence of the domain wall is ensured by the topological structure of the vacuum manifold, and this should not depend on the value of $M_s.$ On the other hand, it appears that the asymptotic solution we found is only valid for values of $M_s$ satisfying the inequality in Eq.~\eqref{eq:neccessary cond for asymp sol}, which seems to imply that the domain wall solution does not exist for other values of $M_s$. Is this a contradiction? The answer is no: everything is consistent, and in fact the domain wall solution exists for any value of $M_s.$
First of all, we should note that to solve Eq.~\eqref{condition} in terms of $M_s$ we have used the principal branch $W_0(x),$ which is a monotonically increasing function, but an additional real solution can be found by using the branch $W_{-1}(x),$ which is instead a monotonically decreasing function. Thus, given the opposite monotonicity of $W_{-1}$ as compared to $W_{0},$ if we solve Eq.~\eqref{condition} by means of $W_{-1}$ we get $M_s^2\leq -\lambda v^2/W_{-1}(-1/3e)\simeq 0.30\lambda v^2.$
Moreover, in the range of values $0.30 \lambda v^2 \lesssim M_s^2\lesssim 7.14 \lambda v^2$ the functional form in Eq.~\eqref{eq:def for asymp sol} does not represent a valid asymptotic behavior for the domain wall. In this case the domain wall configuration may be characterized by a completely different profile, but its existence is still guaranteed by the non-trivial topology.
Anyway, as already mentioned above, in this paper we only work with $W_0,$ therefore with values of $M_s$ satisfying the inequality~\eqref{eq:neccessary cond for asymp sol}. See also Sec.~\ref{sec-dis} for more discussion on this in relation to physical implications.
\subsection{Perturbation around the local solution} \label{subsec-pert}
Let us now implement an alternative method to determine the behavior of the nonlocal domain wall not only at infinity but also close to the origin.
We consider a linear perturbation around the standard two-derivative domain wall configuration $\phi_{\rm L}(x)=v\tanh(\sqrt{\lambda/2} vx)$. Let us define the deviation from the local solution as
\begin{align}
\delta\phi (x) = \phi(x) - \phi_{\rm L}(x)\,, \qquad \left|\frac{\delta\phi}{\phi_{\rm L}}\right|\ll 1\,, \label{pert-loc-nonloc}
\end{align}
in terms of which we can linearize the field equation~\eqref{eq:the equation of the motion for DW}:
\begin{eqnarray}
\left[e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)-3\lambda\phi_{\rm L}^2\right]\delta\phi=\lambda (1-e^{-\partial_x^2/M_s^2})\phi_{\rm L}^3\,.\label{linear-lnl}
\end{eqnarray}
Since the scale of nonlocality appears squared in~\eqref{eq:the equation of the motion for DW}, we expect $\delta\phi\sim \mathcal{O}(1/M_s^2),$ such that the local limit is consistently recovered, i.e. $\delta\phi\to0$ when $M_s^2\to\infty$. We now expand Eq.~\eqref{linear-lnl} up to order $\mathcal{O}(1/M_s^2)$ in order to extract the leading nonlocal correction to $\phi_{\rm L}.$ By expanding the nonlocal terms as follows
\begin{eqnarray}
e^{-\del_x^2/M_s^2} \delta\phi&=&
\delta\phi
+ {\cal O}\qty(\frac{\del_x^2}{M_s^2}) \delta\phi\,,\\[2mm]
e^{-\del_x^2/M_s^2} \phi_{\rm L}^3&=& \phi_{\rm L}^3-\frac{\partial_x^2}{M_s^2}\phi_{\rm L}^3
+{\cal O}\qty(\qty(\frac{\del_x^2}{M_s^2})^2)\phi_{\rm L}^3\,,
\end{eqnarray}
we can write~\eqref{linear-lnl} up to order $\mathcal{O}(1/M_s^2):$
\begin{align}
\qty[\partial_x^2+\lambda v^2-3\lambda\phi_{\rm L}^2(x)] \delta\phi
= \frac{\lambda}{M_s^2}\del_x^2(\phi_{\rm L}(x)^3)\,.
\label{lin-eq-nln-2}
\end{align}
This expansion is valid as long as the following inequality holds:
\begin{equation}
\frac{1}{|\delta\phi|}\left|\frac{\del_x^2}{M_s^2}\delta\phi\right| \ll 1\,.\label{ineq-2}
\end{equation}
We now introduce the dimensionless variable $s=\sqrt{\lambda/2}\,vx$ and the function $f(s)=\delta\phi(x)/v,$ so that we can recast Eq.~\eqref{lin-eq-nln-2} as
\begin{align}
f^{\prime\prime}(s)+2(1-3\tanh^2 s)f(s) = \frac{\lambda v^2}{M_s^2}(\tanh^3 s)^{\prime\prime}\label{diff-eq-s}
\end{align}
where the prime $'$ denotes the derivative with respect to $s$. The above differential equation can be solved analytically, and its solution reads
\begin{eqnarray}
\!\!f(s)&=&
\frac{C_1}{\cosh^2s}
+ \frac{3C_2/2+27\lambda v^2/32M_s^2}{\cosh^2s} \log\frac{1+\tanh s}{1-\tanh s}
\nonumber\\[2mm]
&&+\qty[
\qty(2C_2+\frac{\lambda v^2}{8M_s^2})\cosh^2s
+ \qty(3C_2-\frac{61\lambda v^2}{16M_s^2})
+ \frac{2\lambda v^2}{M_s^2}(1+\tanh^2 s)
] \tanh s\,,\,\,\,
\end{eqnarray}
where $C_1$ and $C_2$ are two integration constants to be determined.
\begin{figure}[t]
\centering
\includegraphics[scale=0.48]{perturb-sol.pdf}
\caption{
%
In this figure we show the linearized nonlocal domain wall solution $\phi(x)=\phi_{\rm L}(x)+\delta\phi(x)$ (solid blue line) in comparison with the local domain wall (orange dashed line). The nonlocal configuration approaches the asymptotic vacua at $x\rightarrow \pm \infty$ faster as compared to the local case.
In the inset we show the behavior of the two solutions over a smaller interval in order to make the differences between the local and nonlocal cases more evident. We can notice that when going from $x=0$ to $x\rightarrow \pm \infty$ the nonlocal curve slightly oscillates around the local one.
We set $\lambda=2$, $v=1$ and $M_s^2=14.3,$ which are consistent with the theoretical constraint $M_s^2\geq -\lambda v^2/W_0(-1/3e)$ in Eq.~\eqref{eq:neccessary cond for asymp sol}.
}
\label{fig:illustration for the perturbative solution}
\end{figure}
The boundary conditions $\phi(\pm \infty)=\pm v$ in terms of the linearized deviation read $\delta\phi(\pm \infty)=0,$ or equivalently $f(\pm \infty)=0$. These are satisfied if and only if the algebraic relation $2C_2+\lambda v^2/8M_s^2=0$ holds true, which means that $C_2=-\lambda v^2/16M_s^2$.
Moreover, the constant $C_1$ must be zero because of the $\mathbb{Z}_2$-symmetry.
Thus, the solution for $\delta \phi$ is given by
\begin{align}
\delta\phi
= vf(s)
= \frac{\lambda v^3}{M_s^2}\frac{1}{\cosh^2\sqrt{\frac{\lambda}{2}}vx}
\qty[\frac{3}{4}\log\frac{1+\tanh\sqrt{\frac{\lambda}{2}}vx}{1-\tanh\sqrt{\frac{\lambda}{2}}vx}
-2\tanh\sqrt{\frac{\lambda}{2}}vx]\,.
\label{eq:exact sol for delta phi}
\end{align}
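Since $\log\frac{1+\tanh s}{1-\tanh s}=2\,{\rm arctanh}(\tanh s)=2s,$ the solution can be written more compactly as
\begin{align}
\delta\phi
=\frac{\lambda v^3}{M_s^2}\,\frac{1}{\cosh^2 s}\qty(\frac{3}{2}s-2\tanh s)\,,
\qquad s=\sqrt{\frac{\lambda}{2}}\,vx\,,
\end{align}
a form which is convenient for extracting both the asymptotic behavior and the gradient at the origin.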
In Fig.~\ref{fig:illustration for the perturbative solution} we show the behavior of the nonlocal domain wall solution $\phi=\phi_{\rm L}+\delta\phi$ in comparison with the local two-derivative one $\phi_{\rm L};$ we have set the values of $v,$ $\lambda$ and $M_s$ consistently with the theoretical lower bound in Eq.~\eqref{eq:neccessary cond for asymp sol}. From the plot we can notice that the nonlocal solution approaches the vacua $\pm v$ faster than the local one, in agreement with the asymptotic analysis of the previous Subsection. Indeed, we can expand the solution $\phi=\phi_{\rm L}+\delta\phi$ in the regime $|x|\rightarrow \infty$, and obtain
\begin{eqnarray}
\phi(x)&\simeq& \pm v\left[1-2\left(\frac{4v^2\lambda}{M^2_s}\right)e^{-\sqrt{2\lambda}vx}-2e^{-\sqrt{2\lambda}vx}\left(1-\frac{3}{2}\sqrt{2\lambda}\frac{\lambda v^3}{M_s^2}x\right)\right]\nonumber\\[2mm]
&\simeq& \pm v\left[1-2\left(1+\frac{4v^2\lambda}{M^2_s}\right)e^{-\sqrt{2\lambda}v(1+3\lambda v^2/2M_s^2)x}\right]\nonumber\\[2mm]
&\simeq& \pm v\left[1-2\left(1+\frac{4v^2\lambda}{M^2_s}\right)e^{-B x}\right]\,,
\end{eqnarray}
where to go from the first to the second line we have used the freedom to add negligible terms of order higher than $\mathcal{O}(1/M_s^2),$ i.e. $4v^2\lambda/M_s^2\simeq 4v^2\lambda/M_s^2(1-3\sqrt{2\lambda}\lambda v^3x/2M_s^2)$ and $1-3\sqrt{2\lambda}\lambda v^3/2M_s^2x\simeq e^{-(3\sqrt{2\lambda}\lambda v^3/2M_s^2)x}.$ Remarkably, the asymptotic behavior of the linearized solution perfectly matches the result obtained in Eq.~\eqref{eq:series expansion for coeff B-sqrt}, indeed the coefficient $B$ in the exponent turns out to be exactly the same in both approaches. Moreover, from the linearized solution we can also determine the coefficient $A$ up to order $\mathcal{O}(1/M_s^2)$, i.e. $A=-2v-8v^3\lambda/M_s^2,$ which could not be determined through the asymptotic analysis in Sec.~\ref{subsec-asym} (see Eq.~\eqref{eq:def for asymp sol}).
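As a cross-check, at large $x$ (where $\cosh^{-2}(\sqrt{\lambda/2}\,vx)\simeq 4e^{-\sqrt{2\lambda}vx}$) the perturbation~\eqref{eq:exact sol for delta phi} behaves as
\begin{align}
x\to+\infty\quad\Rightarrow\quad
\delta\phi\simeq \frac{\lambda v^3}{M_s^2}\qty(3\sqrt{2\lambda}\,vx-8)e^{-\sqrt{2\lambda}vx}\,,
\end{align}
which reproduces exactly the $\mathcal{O}(1/M_s^2)$ terms appearing in the expansion above.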
Furthermore, the behavior of the linearized solution close to the origin is quite peculiar as the nonlocal domain wall profile slightly oscillates around the local one. In other words, when going from $x=0$ to $x\rightarrow \infty$ the perturbation $\delta\phi$ is initially negative and then becomes positive; whereas the opposite happens when going from $x=0$ to $x\rightarrow-\infty.$
This property may suggest that the typical length scale $\ell$ over which $\phi(x)$ changes in proximity of the origin is larger as compared to the local case. As done for the local domain wall in Sec.~\ref{sec-review}, we can estimate such a length scale as the inverse of the gradient at the origin times the energy scale $v$, i.e. $\ell\sim v/(\partial_x\phi|_{x=0}).$
By doing so, up to order $\mathcal{O}(1/M_s^2)$ we get
\begin{equation}
\del_x\phi(x)|_{x=0}=v^2\sqrt{\frac{\lambda}{2}}\left(1-\frac{\lambda v^2}{2M_s^2}\right)+\mathcal{O}\left(\frac{1}{M_s^4}\right)\,,\label{grad-zero}
\end{equation}
which yields
\begin{equation}
\ell\sim \sqrt{\frac{2}{\lambda}}\frac{1}{v}\left(1+\frac{\lambda v^2}{2M_s^2}\right)\,.\label{ell-linearized}
\end{equation}
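The correction in Eq.~\eqref{grad-zero} follows directly from the linearized solution: recalling that the bracket in Eq.~\eqref{eq:exact sol for delta phi} equals $\frac{3}{2}s-2\tanh s$ with $s=\sqrt{\lambda/2}\,vx,$ its derivative at the origin is $3/2-2=-1/2,$ so that
\begin{align}
\partial_x\delta\phi\,\big|_{x=0}
=\sqrt{\frac{\lambda}{2}}\,v\,\frac{\lambda v^3}{M_s^2}\qty(\frac{3}{2}-2)
=-v^2\sqrt{\frac{\lambda}{2}}\,\frac{\lambda v^2}{2M_s^2}\,,
\end{align}
which, added to the local gradient $v^2\sqrt{\lambda/2},$ reproduces Eq.~\eqref{grad-zero}.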
Note that such a length scale does \textit{not} coincide with the width of the wall because it is related to the behavior of the solution close to the origin and far from the vacuum. In standard two-derivative theories the above computation would give a result for $\ell$ that coincides with the size of the wall, but this is just a coincidence. We will comment more on this in Sec.~\ref{subsec-order}.
\subsubsection{Validity of the linearized solution}
The above linearized solution was found perturbatively, and it is valid as long as the two inequalities in Eqs.~\eqref{pert-loc-nonloc} and~\eqref{ineq-2} are satisfied. We now check when these conditions hold.
By working with the variable $s=\sqrt{\lambda/2} vx$ and the field redefinition $\delta\phi=vf(s),$ the inequality~\eqref{pert-loc-nonloc} reads:
\begin{equation}
|H(s)|= \left|\frac{f(s)}{\tanh s}\right|\ll 1\,,
\end{equation}
where $H(s):=f(s)/\tanh s\,.$ By analyzing the behavior of $|H(s)|$ we can notice that it is always less than unity, thus supporting the validity of the linearized solution in Eq.~\eqref{eq:exact sol for delta phi}; see the left panel in Fig.~\ref{fig:graph for verifivation of perturbative condition}.
Let us now focus on the inequality~\eqref{ineq-2}. By introducing also in this case the variable $s=\sqrt{\lambda/2}vx$ we can write
\begin{eqnarray}
\frac{\partial_x^2}{M_s^2}\delta\phi
&=&\frac{\lambda v^2/2}{M_s^2} v f^{\prime\prime}(s) \nonumber\\[2mm]
&=&\frac{\lambda v^3/2}{M_s^2} \frac{\lambda v^2}{M_s^2} \frac{{\rm d}^2}{{\rm d}s^2}
\qty{
\frac{1}{\cosh^2 s}
\left(
\frac{3}{4} \log\frac{1+\tanh s}{1-\tanh s} - 2\tanh s
\right)
} \nonumber\\[2mm]
&=&v \times
\underbrace{\frac{1}{2}\qty(\frac{\lambda v^2}{M_s^2})^2\frac{{\rm d}^2}{{\rm d}s^2}
\qty{
\frac{1}{\cosh^2 s}
\left(
\frac{3}{4} \log\frac{1+\tanh s}{1-\tanh s} - 2\tanh s
\right)
}
}_{=:\,g(s)}\,,
\end{eqnarray}
where $g(s):=\partial_x^2\delta\phi(x)/(M_s^2v)$.
In terms of the dimensionless functions $f(s)$ and $g(s)$ the inequality~\eqref{ineq-2} becomes
\begin{align}
\left|\frac{\partial_x^2}{M_s^2}\delta\phi\right| \ll |\delta\phi|
\quad\Leftrightarrow\quad
|g(s)| \ll |f(s)|\,.
\end{align}
Therefore, we have to analyze the function
\begin{align}
h(s):=\frac{g(s)}{f(s)}
=\frac{\lambda v^2}{2M_s^2}
\frac{\dps\frac{{\rm d}^2}{{\rm d}s^2}\qty[\frac{1}{\cosh^2 s}\qty(\frac{3}{4}\log\frac{1+\tanh s}{1-\tanh s}-2\tanh s)]}
{\dps\frac{1}{\cosh^2 s}\qty(\frac{3}{4}\log\frac{1+\tanh s}{1-\tanh s}-2\tanh s)}\,,
\end{align}
and check for which values of $s$ its modulus $|h(s)|$ is less than unity.
In the right panel of Fig.~\ref{fig:graph for verifivation of perturbative condition} we have shown the behavior of $h(s);$ we have only plotted the region $s\geq 0$ as $h(s)$ is an even function in $s$.
We can notice that $h(s)$ diverges at the point $s\sim 1.03402$ where $f(s)$ vanishes; therefore, in the proximity of this point the linearized solution $\delta\phi(x)$ might not be valid.
However, for $|s|\rightarrow \infty$ and $s\rightarrow 0$ the inequality~\eqref{ineq-2} can be satisfied:
\begin{figure}[t]
\centering
\includegraphics[scale=0.315]{validity-phi_L-delta-phi}\quad\includegraphics[scale=0.305]{validity-del-2-delta-phi}
\caption{
%
(Left panel) behavior of $H(s)=f(s)/\tanh s$ as a function of $s=\sqrt{\lambda/2}vx.$ The modulus of the function is always less than unity, i.e. $|H(s)|< 1,$ supporting the validity of the linearized solution $\delta \phi (x) =v f(s)$ in Eq.~\eqref{eq:exact sol for delta phi}. $H(s)$ becomes smaller and smaller for larger values of $M_s.$
(Right panel) behavior of the function $h(s)=g(s)/f(s).$
As long as $|h(s)|\ll 1$ the linearized solution~\eqref{eq:exact sol for delta phi} can be trusted as a good approximation of the true behavior of the nonlocal domain wall.
We can notice that close to the asymptotics ($s\rightarrow \infty$) the function can be kept less than one, but there is a singularity at $s\sim 1.03402$ caused by the fact that $f(s)$ vanishes at this point.
Moreover, the linearized approximation close to the origin and at infinity becomes better for larger values of $M_s.$
In both panels we only showed the behavior for $s\geq 0$ because both functions $h(s)$ and $H(s)$ are even in $s.$
We set $\lambda=2,$ $v=1$ and $M_s^2=14.3,$ which are consistent with the theoretical constraint $M_s^2\geq -\lambda v^2/W_0(-1/3e)$ in Eq.~\eqref{eq:neccessary cond for asymp sol}.
}
\label{fig:graph for verifivation of perturbative condition}
\end{figure}
\begin{align}
h(0)=\lim_{s\to0} h(s) = -7\frac{\lambda v^2}{M_s^2}\,,
\qquad
h(\infty)=\lim_{s\to\infty} h(s) = 2\frac{\lambda v^2}{M_s^2}\,,\label{limits}
\end{align}
and by using the theoretical lower bound in Eq.~\eqref{eq:neccessary cond for asymp sol}, i.e. $\lambda v^2/M_s^2\leq-W_0(-1/3e)\simeq 0.14,$ it follows that both asymptotic limits are always less than unity, i.e. $|h(0)|< 0.98$ and $h(\infty)< 0.28,$ and the approximation becomes better for larger values of the scale of nonlocality $M_s$.
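The limits in Eq.~\eqref{limits} follow from expanding the function $F(s):=\frac{1}{\cosh^2 s}\qty(\frac{3}{4}\log\frac{1+\tanh s}{1-\tanh s}-2\tanh s)=\frac{1}{\cosh^2 s}\qty(\frac{3}{2}s-2\tanh s),$ in terms of which $h(s)=\frac{\lambda v^2}{2M_s^2}\,F''(s)/F(s).$ Near the origin $F(s)\simeq -\frac{s}{2}+\frac{7}{6}s^3,$ while at large $s$ one has $F(s)\simeq 4e^{-2s}\qty(\frac{3}{2}s-2),$ so that
\begin{align}
\lim_{s\to 0}\frac{F''(s)}{F(s)} = \frac{7s}{-s/2}=-14\,,
\qquad
\lim_{s\to \infty}\frac{F''(s)}{F(s)} = 4\,,
\end{align}
which reproduce $h(0)=-7\lambda v^2/M_s^2$ and $h(\infty)=2\lambda v^2/M_s^2.$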
Let us now make two important remarks.
\paragraph{Remark 2.} In light of the remark at the end of Sec.~\ref{subsec-asym}, we now understand that the linearized perturbative solution~\eqref{eq:exact sol for delta phi} would not have been valid if we had instead used $W_{-1}$ and the corresponding upper bound $M_s^2\leq -\lambda v^2/W_{-1}(-1/3e).$ In such a case a different domain wall solution with the same functional form of the asymptotic behavior is still guaranteed to exist, but we are not interested in it in the current work. Therefore, we emphasize again that we only work with a domain wall configuration consistent with the bound~\eqref{eq:neccessary cond for asymp sol}.
\paragraph{Remark 3.} We have noticed that the linearized approximation breaks down in the proximity of $s\sim 1.03402.$ This means that the boundary conditions $\delta\phi(\pm\infty)=0$ imposed on the perturbation cannot, by themselves, be used to connect the behavior of the solution at $x=\pm\infty$ to that at $x=0,$ because the linearized expansion itself breaks down in between. This might imply that the linearized solution obtained in Eq.~\eqref{eq:exact sol for delta phi} can only be trusted close to the vacua, and that the above analysis is not enough to justify the behavior close to the origin.
However, by using a different and reliable perturbative expansion in the intermediate region around $s\sim 1.03402,$ and imposing junction conditions to glue different pieces of the solution defined in three different regions, we checked and confirmed that the behavior close to the origin found above is well justified. See App.~\ref{sec-corr} for more details.
\subsection{Estimation of width and energy} \label{subsec-order}
We now estimate the width and the energy per unit area of the nonlocal domain wall by performing an analogous analysis as the one made for the local two-derivative case in Sec.~\ref{sec-review}.
Let us approximate the field as $\phi\sim v,$ and the gradient as $\del_x\sim R^{-1}$ where $R$ is the width of the wall.
From the expression of the energy-momentum tensor~\eqref{final-stress-exp} (with $m^2=-\lambda v^2$), we can obtain the energy density of the wall:
\begin{align}
\mathcal{E}(x)
\equiv T_0^0(x)= -\frac{1}{2}\phi e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)\phi
+\frac{\lambda}{4}(\phi^4+v^4)\,.
\label{eq:energy density for DW}
\end{align}
The next step would be to impose the balance between the kinetic and the potential energy in~\eqref{eq:energy density for DW} for the lowest-energy configuration, and solve the resulting equation for the width $R.$ The presence of the infinite-derivative operator makes the procedure less straightforward as compared to the local case because we should first understand how to estimate $e^{-\del_x^2/M_s^2},$ i.e. whether to replace the exponent with $-1/M_s^2R^2$ or $+1/M_s^2R^2.$
The strategy that avoids any ambiguity is to Taylor expand, recast the infinite-derivative pieces as an infinite number of squared quantities, replace $\del_x\sim 1/R$ and $\phi\sim v,$ and then re-sum the series.
By Taylor expanding in powers of $\partial_x^2/M_s^2$ the infinite-derivative terms in Eq.~\eqref{eq:energy density for DW}, and neglecting total derivatives, we can write
\begin{align}
\!\phi e^{-\del_x^2/M_s^2}\del_x^2\phi&= -\del_x\phi e^{-\del_x^2/M_s^2}\del_x\phi\nonumber\\[2mm]
&= -\left[(\del_x\phi)^2-\frac{1}{M_s^2}\del_x\phi\del_x^2\del_x\phi+\frac{1}{2!M_s^4}\del_x\phi\del_x^4\del_x\phi-\cdots+\frac{(-1)^n}{n!M_s^{2n}}\del_x\phi\del_x^{2n}\del_x\phi+\cdots\right] \nonumber\\[2mm]
&=-\left[(\del_x\phi)^2+\frac{1}{M_s^2}(\del_x^2\phi)^2+\frac{1}{2!M_s^4}(\del_x^3\phi)^2+\cdots+\frac{1}{n!M_s^{2n}}(\del_x^{n+1}\phi)^2 +\cdots\right]\nonumber\\[2mm]
&= -\sum\limits_{n=0}^\infty \frac{1}{n!}\left(\frac{1}{M_s^2}\right)^n \left(\del_x^{n+1}\phi\right)^2\,,
\end{align}
and
\begin{equation}
\lambda v^2\phi e^{-\del_x^2/M_s^2}\phi=\lambda v^2\sum\limits_{n=0}^\infty \frac{1}{n!}\left(\frac{1}{M_s^2}\right)^n \left(\del_x^{n}\phi\right)^2\,.
\end{equation}
Then, by using $\phi\sim v$ and $\del_x\sim 1/R,$ we get
\begin{align}
\phi e^{-\del_x^2/M_s^2}\del_x^2\phi\sim -\frac{v^2}{R^2}\sum\limits_{n=0}^\infty\frac{1}{n!}\left(\frac{1}{M_s^2R^2}\right)^n = -\frac{v^2}{R^2}e^{1/(M_sR)^2}\,,
\end{align}
and
\begin{equation}
\lambda v^2\phi e^{-\del_x^2/M_s^2}\phi\sim\lambda v^4 e^{1/(M_sR)^2}\,.
\end{equation}
Thus, we have shown that the correct sign in the exponent when making the estimation is the positive one\footnote{To further remove any possible ambiguity and/or confusion, it is worth mentioning that the same result would have been obtained if we had started with a positive-definite expression for the kinetic energy, for instance with the expression $\del_x\phi e^{-\del_x^2/M_s^2}\del_x\phi=(e^{-\del_x^2/2M_s^2}\del_x\phi)^2\geq 0,$ where we integrated by parts and neglected total derivatives. Also in this case one can show (up to total derivatives) that $(e^{-\del_x^2/2M_s^2}\del_x\phi)^2=\sum_{k,l=0}^\infty 1/(k!\,l!) (1/2M_s^2)^{k+l}\left(\del_x^{(k+l+1)}\phi\right)^2\sim (v^2/R^2)\left[\sum_{k=0}^\infty 1/k!\,(1/2M_s^2R^2)^k\right]^2= (v^2/R^2)e^{1/(M_sR)^2}$.}.
To make the consistency with the low-energy limit $M_s \to \infty$ more manifest, it is convenient to separate the kinetic and potential contributions in~\eqref{eq:energy density for DW} as follows:
\begin{align}
\mathcal{E}(x)=&
\qty[
-\frac{1}{2} \phi e^{-\del_x^2/M_s^2}\partial_x^2 \phi
- \frac{1}{2} \lambda v^2 \phi \qty(e^{-\del_x^2/M_s^2}-1) \phi
]
+ \qty[
\frac{\lambda}{4} \qty(\phi^2-v^2)^2
]\,,
\end{align}
so that the balance equation between kinetic and potential energies reads
\begin{align}
\frac{1}{2} \frac{v^2}{R^2} e^{1/(M_s R)^2} - \frac{1}{2} \lambda v^4 \qty(e^{1/(M_s R)^2}-1)
\sim
\frac{\lambda}{4} v^4\,.
\label{eq:full-balance-eq}
\end{align}
We are mainly interested in the leading nonlocal correction, thus we expand for $M_sR\gg 1$ up to the first relevant nonlocal contribution:
\begin{eqnarray}
\frac{1}{2}\frac{v^2}{R^2}\left(1+\frac{1}{M_s^2R^2}\right)-\frac{1}{2}\frac{\lambda v^4}{M_s^2R^2}\sim \frac{1}{4}\lambda v^4\,.\nonumber
\label{eq:full-balance-eq-leading}
\end{eqnarray}
The solution up to order $\mathcal{O}(1/M_s^2)$ is given by
\begin{eqnarray}
\frac{1}{R^2}\sim\frac{\lambda v^2}{2}\left(1+\frac{\lambda v^2}{2M_s^2} \right)\,,
\end{eqnarray}
from which we obtain
\begin{eqnarray}
R\sim\sqrt{\frac{2}{\lambda }}\frac{1}{v}\left(1-\frac{\lambda v^2}{4M_s^2} \right)\,.
\end{eqnarray}
Therefore, the width of the nonlocal domain wall, $R$, turns out to be \textit{smaller} than in the local two-derivative case, i.e. the nonlocal wall is \textit{thinner}. This is consistent with both the asymptotic analysis in Sec.~\ref{subsec-asym} and the linearized solution in Sec.~\ref{subsec-pert}. In fact, in the previous Subsections we found that the nonlocal configuration approaches the vacua $\pm v$ \textit{faster} than the local one, i.e. the coefficient $B$ in Eq.~\eqref{eq:series expansion for coeff B-sqrt} is larger than its local counterpart. Then, as shown in Eq.~\eqref{eq:tildeR}, $\widetilde{R}$ becomes smaller in the nonlocal case, consistently with the behavior of $R$. Indeed, the coefficient $B$ and the width $R$ should be inversely proportional to each other: if $B$ increases then $R$ must decrease, and this is precisely what we have shown.
We can also estimate the energy per unit area
\begin{eqnarray}
E&=& \int_\mathbb{R}{{\rm d}x}\qty[-\frac{1}{2}\phi e^{-\partial_x^2/M_s^2}(\partial_x^2+\lambda v^2)\phi
+\frac{\lambda}{4}(\phi^4+v^4)] \nonumber\\[2mm]
&\sim& (\text{width of the wall}) \times (\text{energy density}) \nonumber\\[2mm]
& \sim& R \times \lambda v^4 \nonumber\\[2mm]
&\sim& \sqrt{\frac{\lambda}{2}}v^3\left(1-\frac{\lambda v^2}{4M_s^2}\right) \,,\label{energ-estim}
\end{eqnarray}
which is also \textit{smaller} than in the local case.
It is worth emphasizing that the expansion for small $\lambda v^2/M_s^2$ is well justified for the domain wall solution satisfying the bound in Eq.~\eqref{eq:neccessary cond for asymp sol}, which was obtained by using the principal branch $W_0$ of the Lambert-W function.
\paragraph{Remark 4.} In Sec.~\ref{subsec-pert} we estimated a further scale $\ell$ besides the width ($\ell> R, \widetilde{R}$). In standard local theories all three scales coincide, because there is only one physical scale, $\ell_{\rm L}=R_{\rm L}=\widetilde{R}_{\rm L} \sim (\sqrt{\lambda}v)^{-1}.$ In general, however, $\ell$ and $R$ ($\widetilde{R}$) represent two different physical scales, and this becomes manifest in the nonlocal theory under investigation. The length scale $R$ ($\widetilde{R}$) contains the information about the size of the wall, because it is proportional to $1/B$ and is related to how fast the field configuration approaches the vacuum. The scale $\ell$, instead, is related to how fast the field changes in the proximity of the origin: indeed, it is inversely proportional to the gradient at $x=0$ (see Eqs.~\eqref{grad-zero} and~\eqref{ell-linearized}), and we have $\ell>\ell_{\rm L}.$ The difference between $\ell$ and $R$ ($\widetilde{R}$) is caused by the oscillatory behavior of the nonlocal solution around the local one.
\section{Comments on other topological defects} \label{sec-other}
So far we have only focused on the domain wall configuration. However, it would be very interesting to repeat the same analysis for other topological defects, like \textit{strings} and \textit{monopoles}, which can appear in nonlocal models characterized by continuous-symmetry breaking. A full study of these topological defects in nonlocal field theories goes beyond the scope of this paper; however, we can make some important comments.
First of all, the existence of such finite-energy configurations is always guaranteed by the non-trivial topological structure of the vacuum manifold.\footnote{Of course, they can exist only dynamically in the local case because of Derrick's theorem~\cite{Vilenkin:2000jqa}. But, in a nonlocal case, even Derrick's theorem might be circumvented. This issue will be left for a future work.}
Knowing that a solution must exist, we can ask how some of its properties would be affected by nonlocality. In fact, an order-of-magnitude estimate similar to the one carried out in Sec.~\ref{subsec-order} can be applied to these other global topological defects. In particular, by imposing the balance between kinetic and potential energy, one finds that the radii of both strings and monopoles are \textit{smaller} than the corresponding ones in the local case.
We leave a more detailed investigation of higher-dimensional topological defects, including stabilizing gauge fields, for future work.
\section{Discussion \& conclusions}\label{sec-dis}
\paragraph{Summary.} In this paper, we studied for the first time topological defects in the context of nonlocal field theories. In particular, we mainly focused on the domain wall configuration associated with the $\mathbb{Z}_2$-symmetry breaking in the simplest nonlocal scalar field theory, with nonlocal differential operator $e^{-\Box/M_s^2}$. Despite the complexity of non-linear infinite-order differential equations, we managed to find an approximate analytic solution. Indeed, we were able to understand how nonlocality affects the behavior of the domain wall both asymptotically close to the vacua and around the origin.
Let us briefly highlight our main results:
\begin{itemize}
\item We showed that the nonlocal domain wall approaches the asymptotic vacua $\pm v$ faster as compared to the local two-derivative case. We confirmed this feature in two ways: (i) studying the behavior of the solution towards infinity ($|x|\rightarrow \infty$); (ii) analyzing a linearized nonlocal solution found through perturbations around the local domain wall configuration.
\item Such a faster asymptotic behavior also means that the width of the wall, $R$ ($\widetilde{R}$), is smaller than the corresponding local one. Physically, this means that the boundary separating two adjacent causally disconnected spatial regions with two different vacua (i.e. $+v$ and $-v$) becomes thinner as compared to the local case. We confirmed this property through an order-of-magnitude estimate involving the balance equation between kinetic and potential energy. As a consequence, the energy per unit area can also be shown to be smaller.
\item We noticed that the nonlocal domain wall has a very peculiar behavior around the origin, i.e. in the proximity of $x\sim 0$. We found that the linearized nonlocal solution, $\phi=\phi_{\rm L}+\delta \phi,$ oscillates around the local domain wall when going from $x=0$ to $|x|\rightarrow \infty.$ In other words, the perturbation $\delta\phi$ changes sign: when going from $x=0$ to $x=+\infty$ it is first negative and then positive, and vice versa when going from $x=0$ to $x=-\infty$. We confirmed the validity of the solution close to the origin in App.~\ref{sec-corr}.
\item The specific nonlocal domain wall solution analyzed in this paper can exist only if the nonlocal scale $M_s$ satisfies the lower bound $M_s\gtrsim \sqrt{\lambda} v,$ namely if the energy scale of nonlocality is larger than the symmetry-breaking scale.
\end{itemize}
\paragraph{Discussion \& Outlook.}
Here we have only dealt with nonlocal field theories in flat spacetime without assuming any specific physical scenario. However, it might be interesting to understand how to embed our analysis in a cosmological context where we could expect also gravity to be nonlocal; see Refs.~\cite{Biswas:2005qr,Koshelev:2016xqb,Koshelev:2017tvv,Koshelev:2020foq} and references therein.
In particular, in Refs.~\cite{Koshelev:2017tvv,Koshelev:2020foq} inflationary cosmology in nonlocal (infinite-derivative) gravity was investigated, and the following experimental bound on the scale of nonlocality was obtained: $M_s\gtrsim H,$ where $H$ is the Hubble constant during inflation, i.e. $H\sim 10^{14}$GeV.
Our theoretical lower bound is consistent with the experimental constraint derived in~\cite{Koshelev:2017tvv,Koshelev:2020foq} for the gravity sector. Indeed, some symmetry breaking is expected to happen after inflation, i.e. at energies $v\lesssim H,$ which is consistent with the theoretical lower bound $M_s\gtrsim \sqrt{\lambda} v$ in Eq.~\eqref{eq:neccessary cond for asymp sol}.
Hence, in a cosmological context one would expect the following hierarchy of scales\footnote{We are implicitly assuming that there exists only one scale of nonlocality $M_s$ for both gravity and matter sectors.}:
\begin{equation}
M_s\gtrsim H\gtrsim v\,.\label{set-ineq}
\end{equation}
Very interestingly, this cosmological scenario can be used to rule out some topological-defect solutions in nonlocal field theory. For instance, in light of the discussions at the end of Sec.~\ref{subsec-asym} and Sec.~\ref{subsec-pert}, there must exist at least one other domain wall configuration that is valid for $M_s^2\lesssim \lambda v^2.$ In such a case the set of inequalities~\eqref{set-ineq} would be replaced by $v\gtrsim M_s\gtrsim H,$ which implies that the symmetry breaking would happen before inflation. Thus, if we are interested in domain wall formation after inflation, then we can surely discard any configuration valid in the regime $M_s^2\lesssim \lambda v^2.$
It would be very interesting to consider other topological defects, like strings and monopoles, which can appear in nonlocal models characterized by continuous-symmetry breaking. In fact, global strings might play important roles, e.g. in axion emission in an expanding universe. In the local case, the topological defects formed from global-symmetry breaking are unstable because of Derrick's theorem~\cite{Vilenkin:2000jqa}, which excludes the existence of stationary stable configurations in dimensions greater than one; they can then exist only dynamically, e.g. in an expanding universe. However, Derrick's theorem might not apply in the nonlocal case, thanks to the nonlocality, and it would be interesting to investigate whether such stationary stable configurations could exist there.
Another potential future direction is to consider a further class of models characterized by gauge symmetries in addition to global ones. In fact, among the possible physical applications that can be studied in relation to topological defects, we have gravitational waves, e.g. the ones emitted by cosmic strings. We expect that the presence of nonlocality would change the dynamics in such a way as to modify the gravitational waveform non-trivially.
This type of investigation will provide powerful test-benches for nonlocal field theories, and will help to further constrain the structure of the nonlocal differential operators in the Lagrangian and the value of the nonlocal scale. In this work we only focused on the simplest model with $F(-\Box)=e^{-\Box/M_s^2},$ but one could work with more generic operators. Indeed, the class of viable differential operators is huge (e.g. see~\cite{Buoninfante:2020ctr}), and it would be interesting to narrow it down by means of new phenomenological studies.
As yet another future work, we also wish to consider other types of field-theoretic objects. An important phenomenon is \textit{false vacuum decay}~\cite{Coleman:1977py,Callan:1977pt,Coleman:1980aw}, according to which the false vacuum -- corresponding to a local minimum of the potential -- has a non-zero probability to decay through quantum tunneling into the true vacuum -- corresponding to the global minimum. This tunneling process consists in interpolating between the false and the true vacuum through an instanton (a bounce solution).
It may be very interesting to generalize the standard analysis of the local two-derivative theory to nonlocal field theories, and to understand how nonlocality would affect this phenomenon, e.g. how the tunneling probability would change. Another interesting direction is to consider non-topological solitons like Q-balls and oscillons/I-balls, whose existence is also related to the presence of a bounce solution. One should notice, however, that unlike for topological defects, the existence of such bounce solutions is not guaranteed in nonlocal theories. Therefore, even if approximate solutions were somehow obtained, no argument could be based on them without a proof of the existence of exact bounce solutions. This is the reason why we dealt only with topological defects in this paper, and left the study of non-topological field-theoretic objects for future work.
Finally, we should emphasize that non-linear, infinite-order differential equations are difficult to solve not only analytically but also numerically. Indeed, to our knowledge no numerical technique to find domain wall solutions is currently known. Some techniques to solve nonlinear equations involving infinite-order derivatives have been developed in the last decades~\cite{Moeller:2002vx,Arefeva:2003mur,Volovich2003,Joukovskaya:2008cr,Calcagni:2008nm,Frasca:2020ojd}, but none of them seems to be applicable to the type of field equations considered in this paper. Therefore, as a future task it will be extremely interesting to develop new numerical and analytic methods to find topological-defect solutions. This will also be important to investigate the stability of topological defects in nonlocal field theory, something that we have not done in this paper.
\subsection*{Acknowledgements}
The authors are grateful to Sravan Kumar for useful discussions.
Y.~M. acknowledges the financial support from Advanced Research Center for Quantum Physics and Nanoscience, Tokyo Institute of Technology.
M.~Y. acknowledges financial support from JSPS Grant-in-Aid
for Scientific Research No. JP18K18764, JP21H01080, JP21H00069.
\section{Introduction}
\label{Intro}
In the field of network science and big data, it is necessary to expand the scope beyond classical time-signal analysis and processing to accommodate signals defined on graphs \cite{ramakrishna2021grid, chen2021graph, zhou2022novel, 4, 13}. The irregular structure of the underlying graph differs from the regular structure of classical signal processing, which brings great challenges to the analysis and processing of graph signals. Graph signal processing (GSP) offers effective tools to process such network data \cite{liao2022joint, gama2020graphs,morency2021graphon}. For instance, graph-supported signals can model vehicle trajectories over road networks \cite{deri2016new}. Research in GSP has only recently begun, but it is growing rapidly. The main contributions include wavelet and Fourier transforms \cite{2011Wavelets, Moura2013IEEE, 13, 25, 27}, sampling and reconstruction of graph signals \cite{kim2022quantization, yang2021efficient, chen2015discrete, romero2016kernel, yang2022reconstruction}, uncertainty principles \cite{tsitsvero2016signals, pasdeloup2019uncertainty}, filtering of graph signals \cite{xiao2021distributed, ozturk2021optimal}, etc.
Different transforms of graph signals are still the core of GSP. Among them, the graph Fourier transform (GFT) acts as a cornerstone \cite{4,13}. In the literature, there are two frameworks for frequency analysis and processing of graph signals: (i) based on the Laplacian matrix \cite{4}, (ii) based on the adjacency matrix. The classical Laplacian-based approach is limited to graph signals located on undirected graphs \cite{13}. In this method, frequency ordering is based on the quadratic form: small eigenvalues correspond to low frequencies and vice versa. On the other hand, the adjacency-based approach builds on the shift operator of the graph. This approach constructs the Fourier basis by using generalized eigenvectors of the adjacency matrix. Recently, some unique methods have been proposed to extend the GFT to directed graphs \cite{singh2016graph,sardellitti2017graph, sevi2018harmonic, shafipour2018directed, shafipour2019windowed, shafipour2018digraph, barrufet2021orthogonal,furutani2019graph, marques2020signal, seifert2021digraph, chen2022graph}. \cite{singh2016graph} uses the in-degree matrix and weight matrix to define the directed Laplacian. \cite{sardellitti2017graph} proposes an alternative approach that builds the graph Fourier basis as the set of orthonormal vectors that minimize a continuous extension of the graph cut size, known as the Lov{\'a}sz extension. \cite{sevi2018harmonic} introduces a novel harmonic analysis for functions defined on the vertices of a strongly connected directed graph. \cite{shafipour2018directed} studies the problem of constructing a GFT for directed graphs, which decomposes graph signals into different modes of variation with respect to the underlying network. \cite{shafipour2019windowed} proposes a methodology to carry out vertex-frequency analyses of graph signals on directed graphs based on \cite{shafipour2018directed}.
\cite{barrufet2021orthogonal} also defines a new transform for directed graphs, which uses the Schur decomposition and leads to a series of embedded invariant subspaces for which orthogonal bases are available. \cite{furutani2019graph} extends graph signal processing to directed graphs based on the Hermitian Laplacian. \cite{marques2020signal} provides an overview of the current landscape of signal processing on directed graphs. Although these methods can construct Fourier bases with desirable properties, such bases cannot completely retain the information about the structure of the underlying graph.
In order to extract the local information of the graph signal, a new research direction has been proposed in graph signal processing, namely the use of fractional orders \cite{25,26,27,28,ozturk2021optimal,yan2021windowed, ge2022optimal, kartal2022joint, yan2022multi}. The fractional order has gained considerable attention in the last 20 years in classical signal processing, and its application to graphs has also aroused the interest of researchers \cite{ozturk2021optimal,yan2021windowed, ge2022optimal, kartal2022joint}. The graph fractional domain is a combination of the graph spectrum domain and the fractional transform domain. The graph fractional Fourier transform (GFRFT), related to the graph adjacency matrix, is proposed in \cite{25}. Furthermore, a spectral graph fractional Fourier transform (SGFRFT), related to the graph Laplacian matrix, is proposed in \cite{27}. GFRFT and SGFRFT show advantages in revealing the local characteristics of the graph signal. However, for directed graphs, both of these transforms have drawbacks. The Laplacian matrix for the SGFRFT is constructed from an undirected graph, so the SGFRFT does not apply to directed graphs. Although the GFRFT can be used for directed graphs, it has some potential problems. First, the basis coming from the Jordan decomposition is not orthonormal, so Parseval's identity does not hold and inner products are not preserved between the vertex domain and the graph fractional domain. Second, numerical computation of the Jordan decomposition often suffers from instability even for medium-sized matrices \cite{golub1976ill}.
The presence of directionality plays a crucial role when it comes to modeling social networks, technological networks,
biological and biomedical networks, as well as information networks \cite{newman2003structure, han2012extended, chui2018representation}.
For directed graphs, the existing studies in the graph fractional domain are all based on the adjacency matrix \cite{25,26}. In the continuous setting, the fractional Fourier transform seeks orthogonal bases which are eigenfunctions of the fractional Laplacian operator. This background naturally leads us to consider the eigenvectors of the fractional graph Laplacian operator in the discrete setting; thus, we believe the Laplacian-based construction is more natural. In this paper, we aim to generalize the SGFRFT to directed graphs. In the new transform, the fractional Laplacian operator is a simple fractional-order extension of the Hermitian Laplacian for directed graphs discussed in \cite{zhang2021magnet}. Moreover, the new definition links the existing Laplacian-based approach to directed graphs. The paper is organized as follows. We first review the GFT and SGFRFT as our foundation in Section \ref{Preliminary}. Section \ref{main} introduces the procedure for designing a spectral graph fractional Fourier transform for signals on directed graphs. Then, an ideal filter and a frequency-selective filter are presented in Section \ref{filter}. In the last part, we present experiments on a real directed graph, and an application to signal denoising using the filtering of the previous section.
\section{Preliminaries}
\label{Preliminary}
\subsection{Spectral graph theory}
An undirected weighted graph $\mathcal{G}=\{\mathcal{V}, \mathcal{E}, \mathcal{W}\}$ consists of a finite set of vertices $\mathcal{V}=\{v_0, \cdots, v_{N-1}\}$, where $N=|\mathcal{V}|$ is the number of vertices, a set of edges $\mathcal{E}=\{(i,j)|i,j\in\mathcal{V},j\sim i\}\subseteq\mathcal{V}\times\mathcal{V}$, and a weighted adjacency matrix $\mathcal{W}$. If the values of $\mathcal{W}$ are all in $\{0, 1\}$, then $\mathcal{W}$ is called an adjacency matrix \cite{chui2018representation}. $\mathcal{W}=[{\mathcal{W}_{ij}}]\in\mathbb{R}^{N\times N}$ is defined as $\mathcal{W}_{ij}=w_{ij}$ if $(i,j)\in\mathcal{E}$ and $\mathcal{W}_{ij}=0$ otherwise. The non-normalized graph Laplacian is a symmetric difference operator $\mathcal{L}=D-\mathcal{W}$ \cite{13}, where $D:=diag(d_1, ..., d_N)$ is the diagonal degree matrix of $\mathcal{G}$, with $d_i:=\sum^{N}_{j=1}w_{ij}$. Let $\{\chi_0, \chi_1, \cdots, \chi_{N-1}\}$ be the set of orthonormal eigenvectors of $\mathcal{L}$. Suppose that the corresponding Laplacian eigenvalues are sorted as $0=\lambda_0<\lambda_1\leq\lambda_2\leq\cdots\leq\lambda_{N-1}:=\lambda_{max}$.
Therefore
\begin{equation}
\mathcal{L}=\mathbf{\chi}\mathbf{\Lambda} \mathbf{\chi}^H,
\end{equation}
where
\begin{equation}\label{tezheng}
\mathbf{\chi} =
[\chi_0, \chi_1, \cdots, \chi_{N-1}],
\end{equation}
and the diagonal matrix is $\mathbf{\Lambda}=diag([\lambda_0, \lambda_1, \cdots, \lambda_{N-1}])$. The superscript $H$ denotes the conjugate transpose of a matrix.
The graph signal $f$ is defined as binding a scalar value to each vertex through the function $f:\mathcal{V}\rightarrow\mathbb{R}$.
Using the definition of (inverse) graph Fourier transform (GFT) as in \cite{13}, the GFT of $f$ is
\begin{equation}
\widehat{f}(\ell)=\langle f,\chi_{\ell}\rangle=\sum^{N}_{n=1}f(n)\chi^{*}_{\ell}(n), \ell=0,1,\cdots,N-1,
\end{equation}
where $*$ is complex conjugate.
The inverse GFT is given by
\begin{equation}\label{IGFT}
\begin{split}
f(n)=\sum^{N-1}_{\ell=0}\widehat{f}(\ell)\chi_{\ell}(n), n=0,1,\cdots,N-1.
\end{split}
\end{equation}
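For concreteness, the GFT pair above can be sketched numerically. The following minimal NumPy example uses an arbitrary small graph and signal (illustrative choices, not taken from the paper):

```python
import numpy as np

# small undirected path graph on 4 vertices (illustrative weights)
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
D = np.diag(W.sum(axis=1))
L = D - W                                 # non-normalized graph Laplacian

# eigh returns orthonormal eigenvectors chi, eigenvalues sorted ascending
lam, chi = np.linalg.eigh(L)

f = np.array([1.0, -2.0, 0.5, 3.0])       # a graph signal
f_hat = chi.conj().T @ f                  # GFT: inner products with chi_l
f_rec = chi @ f_hat                       # inverse GFT

assert np.allclose(f_rec, f)              # perfect reconstruction
assert np.isclose(lam[0], 0.0)            # lambda_0 = 0 for a connected graph
```

The orthonormality of `chi` is what makes the forward/inverse pair exact.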
\subsection{Spectral graph Fractional Fourier Transform}
The graph fractional Laplacian operator $\mathcal{L}_{\alpha}$ is defined by $\mathcal{L}_{\alpha}=\mathbf{\kappa}R\mathbf{\kappa}^{H}$, where $\alpha$ is the fractional order and $0<\alpha\leq1$ \cite{27}.
Note that
\begin{equation}\label{fractionallapalcian}
\mathbf{\kappa} = \begin{bmatrix}
\kappa_0,& \kappa_1,& \cdots,& \kappa_{N-1}
\end{bmatrix}={\chi}^\alpha,
\end{equation}
and
\begin{equation}
R=\text{diag}({\begin{bmatrix}
r_0,& r_1,& \cdots,& r_{N-1}
\end{bmatrix}})
=\mathbf{\Lambda}^\alpha,
\end{equation}
so that
\begin{equation}
r_\ell=\lambda_\ell^\alpha.
\end{equation}
In the remainder of this paper, the $\alpha$-th power of a matrix is always computed with the matrix power function.
The spectral graph fractional Fourier transform (SGFRFT) of any signal $f$ defined on the graph $\mathcal{G}$ is given by \cite{27}:
\begin{equation}
\begin{split}
\widehat{f}_{\alpha}(\ell)=\langle f,\kappa_{\ell}\rangle=\sum^{N}_{n=1}f(n)\kappa^{*}_{\ell}(n), \ell=0,1,\cdots,N-1,
\end{split}
\end{equation}
When $\alpha=1$, the SGFRFT reduces to the standard GFT.
The inverse SGFRFT is given by
\begin{equation}\label{IGFRFT}
\begin{split}
f(n)=\sum^{N-1}_{\ell=0}\widehat{f}_{\alpha}(\ell)\kappa_{\ell}(n), n=0,1,\cdots,N-1.
\end{split}
\end{equation}
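A minimal numerical sketch of the SGFRFT pair may help. Here the matrix power $\chi^\alpha$ is computed through a small eigendecomposition-based helper (`principal_power`, an illustrative implementation choice, not prescribed by the paper); the graph and signal are arbitrary:

```python
import numpy as np

def principal_power(A, t):
    """Principal fractional power of a diagonalizable matrix (basis-independent)."""
    mu, S = np.linalg.eig(A)
    return S @ np.diag(mu.astype(complex) ** t) @ np.linalg.inv(S)

# small undirected graph Laplacian (illustrative weights)
W = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
lam, chi = np.linalg.eigh(L)               # orthonormal basis chi, eigenvalues ascending

alpha = 0.6
kappa = principal_power(chi, alpha)        # kappa = chi^alpha (generally complex)
r = np.clip(lam, 0, None) ** alpha         # r_l = lambda_l^alpha (clip tiny negatives)
L_alpha = kappa @ np.diag(r) @ kappa.conj().T

f = np.array([0.3, -1.0, 2.0, 0.7])
f_hat = kappa.conj().T @ f                 # SGFRFT
f_rec = kappa @ f_hat                      # inverse SGFRFT

assert np.allclose(kappa.conj().T @ kappa, np.eye(4), atol=1e-6)   # chi^alpha is unitary
assert np.allclose(f_rec, f)                                       # perfect reconstruction
assert np.allclose(principal_power(chi, 1.0), chi)                 # alpha = 1: GFT basis
```

Since $\chi$ is orthogonal (hence normal), its principal power is unitary, so the SGFRFT pair reconstructs exactly.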
\begin{remark}
In this paper, to distinguish directed graphs from undirected graphs, we use $\mathcal{G}$ to denote undirected graphs and $G$ to denote directed graphs.
\end{remark}
\section{Spectral graph fractional Fourier transform for directed graph (DGFRFT)}
\label{main}
The orthogonality of the eigenvectors gives the SGFRFT its ideal algebraic properties, which has allowed graph signal processing on undirected graphs to develop well. However, since the eigenvectors of the fractional Laplacian operator of a digraph are usually not orthonormal, graph signal processing cannot be extended to digraphs straightforwardly. Our goal is to find a new definition of the fractional Laplacian matrix, namely a Hermitian fractional Laplacian matrix, that keeps the orthogonality of the eigenvectors and avoids the computation of the Jordan decomposition. The Hermitian Laplacian is a complex matrix obtained from an extension of the graph Laplacian \cite{zhang2021magnet, furutani2019graph, fanuel2018magnetic, f2020characterization}. It preserves the edge directionality and the Hermitian property. Here we consider a directed graph $G=(V,E,W)$. $V$ is the set of $N$ vertices and $E$ is the set of directed edges. $W$ is the weight matrix of the graph; its element $w_{ij}$ represents the weight of the directed edge from vertex $i$ to $j$. For directed graphs, the integers $d_i^{in}$ and $d_i^{out}$ specify the number of arrowheads directed toward and away from vertex $i$, respectively \cite{hakimi1965degrees}. The in-degree of vertex $i$ is calculated as $d_i^{in}=\sum^N_{j=1}w_{ij}$, whereas the out-degree is $d_i^{out}=\sum^N_{j=1}w_{ji}$. We define a new weight matrix $W_s=[w_{ij,s}]$, where $w_{ij,s}=\frac{1}{2}(w_{ij}+w_{ji})$. $W_s$ and $E_s$ (obtained by ignoring the directionality of $E$) uniquely determine the corresponding undirected graph $G_s = (V, E_s, W_s)$ for the directed graph $G$. The diagonal degree matrix $D_s$ of $G_s$ is $D_s(i,i):=\sum^{N}_{j=1}w_{ij,s}$. Then a Hermitian Laplacian of $G$ is defined as:
\begin{definition}(Hermitian graph Laplacian matrix)
\begin{equation}
L=D_s-\Gamma_q\odot W_s,
\end{equation}
where $\odot$ is the hadamard product \cite{horn1990hadamard}, and $\Gamma_q$ is a Hermitian matrix which encodes the edge directionality of $G$.
\end{definition}
$\Gamma_q$ can take many forms; here is a simple example. Define $\Gamma_q=[\gamma_q(i,j)]$ satisfying $\gamma_q(i,j)=\overline{\gamma_q(j,i)}$.
$\gamma_q$ is a map from $V\times V$ to the unitary group of degree 1 and is written as \cite{f2020characterization}:
\begin{equation}
\gamma_q(i,j)= e^{2\pi iq(w_{ij}-w_{ji})},
\end{equation}
where $0\leq q<1$ is a rotation parameter. When $w_{ij}=1$ and $w_{ji}=0$, $\gamma_q(i,j)= e^{2\pi iq}$ encodes the edge directed from vertex $i$ to $j$.
As $D_s$ and $W_s$ are real symmetric matrices, $L$ is a Hermitian matrix. Let $v_k$ and $u_k$ be the $k$-th eigenvalue and eigenvector of the Hermitian Laplacian $L$, respectively. The eigendecomposition of $L$ can be written as:
\begin{equation}
L=UVU^H,
\end{equation}
where $U=[u_0, u_1, \cdots, u_{N-1}]$. $V=diag([v_0, v_1, \cdots, v_{N-1}])$ is a real diagonal matrix, and Hermitian Laplacian eigenvalues are sorted as $0\leq v_0<v_1\leq v_2\leq\cdots\leq v_{N-1}:=v_{max}$.
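The construction above can be sketched numerically as follows (the directed graph and the value of $q$ are arbitrary illustrative choices); the checks confirm that $L$ is Hermitian with real, non-negative eigenvalues:

```python
import numpy as np

# directed 4-node graph (illustrative weights) and rotation parameter q
W = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [1, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
q = 0.25

Ws = 0.5 * (W + W.T)                        # symmetrized weight matrix W_s
Ds = np.diag(Ws.sum(axis=1))                # degree matrix of G_s
Gamma = np.exp(2j * np.pi * q * (W - W.T))  # gamma_q(i,j) = e^{2 pi i q (w_ij - w_ji)}
L = Ds - Gamma * Ws                         # Hadamard product Gamma ⊙ W_s

assert np.allclose(L, L.conj().T)           # L is Hermitian
v, U = np.linalg.eigh(L)                    # real eigenvalues, unitary U
assert np.all(v >= -1e-12)                  # positive semi-definite
assert np.allclose(U @ np.diag(v) @ U.conj().T, L)
```

Positive semi-definiteness follows from $x^H L x = \sum_{i<j} w_{ij,s}\,|x_i-\gamma_q(i,j)x_j|^2 \geq 0$, since $|\gamma_q(i,j)|=1$.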
The graph Hermitian fractional Laplacian matrix for a directed graph is given by $L_{\alpha,d}=PQP^H$, in which
\begin{equation}\label{fractionallapalcian}
P = \begin{bmatrix}
p_0,& p_1,& \cdots,& p_{N-1}
\end{bmatrix}={U}^\alpha,
\end{equation}
and
\begin{equation}
Q=\text{diag}({\begin{bmatrix}
\xi_0,& \xi_1,& \cdots,& \xi_{N-1}
\end{bmatrix}})
=V^\alpha,
\end{equation}
that is
\begin{equation}
\xi_\ell=v_\ell^\alpha, \quad \ell=0,1,\cdots,N-1.
\end{equation}
We select two different values of $\alpha$ as examples to show the spectral properties of the Hermitian fractional Laplacian matrix $L_{\alpha,d}$ in Fig. \ref{eigenvaluefigure}. When the rotation parameter $q=0$, the Hermitian fractional Laplacian matrix reduces to the ordinary fractional Laplacian matrix, since each element of $\Gamma_q$ equals $1$. Therefore, for relatively small values of $q$, the spectrum of the Hermitian fractional Laplacian is similar to that of the fractional Laplacian, whereas for large $q$ oscillations appear.
\begin{figure}
\centering
\subfigure[Eigenvalues of Hermitian fractional Laplacian when $\alpha=0.8$]{
\label{alpha=0.8}
\includegraphics[width=0.45\textwidth]{Spec_for_alpha0.8.pdf}}
\subfigure[Eigenvalues of Hermitian fractional Laplacian when $\alpha=0.6$]{
\label{alpha=0.6}
\includegraphics[width=0.45\textwidth]{Spec_for_alpha0.6.pdf}}
\caption{Eigenvalues of Hermitian fractional Laplacian with respect to different fractional orders.}
\label{eigenvaluefigure}
\end{figure}
The total variation for graph signals is defined as an absolute sum of the discrete differences of a graph signal \cite{ono2015total}. It was first introduced as an extension of the original total variation (TV) \cite{chan2001digital}. We define the total variation of a graph signal $f$ as
\begin{equation}
\operatorname{TV}(\boldsymbol{f}):=\sum_{(i, j) \in E}|f(i)-f(j)|^2
\end{equation}
to measure the smoothness of eigenvectors of $L_{\alpha,d}$. The total variation has an intuitive interpretation: it compares how the signal varies with time or space and calculates a cumulative magnitude of the signal. The smaller the difference between the original signal $f(i)$ and $f(j)$, the lower the signal's variation. Fig. \ref{TV} shows total variations of eigenvectors of $L_{\alpha,d}$
on a random directed graph with 50 nodes when the fractional parameter $\alpha=0.8$ and $0.6$. To construct a random directed graph, we fix $n$ nodes and for each pair of nodes we generate a direct edge with probability $p$ ($n=50, p=0.1$).
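The TV computation on such a random directed graph can be sketched as follows (a minimal illustration; `n`, `p`, the seed, and the test signals are arbitrary choices):

```python
import numpy as np

def total_variation(W, f):
    """TV(f) = sum over directed edges (i, j) of |f(i) - f(j)|^2."""
    i, j = np.nonzero(W)                  # edge list from the weight matrix
    return np.sum(np.abs(f[i] - f[j]) ** 2)

# random directed graph as in the text: n nodes, independent edge probability p
rng = np.random.default_rng(0)
n, p = 50, 0.1
W = (rng.random((n, n)) < p).astype(float)
np.fill_diagonal(W, 0)                    # no self-loops

f_const = np.ones(n)                      # constant signal: no variation
f_rand = rng.standard_normal(n)           # rough signal: large variation

assert total_variation(W, f_const) == 0.0
assert total_variation(W, f_rand) > 0.0
```

Smooth (low-frequency) eigenvectors give small TV values, rough (high-frequency) ones give large values, which is what Fig. \ref{TV} visualizes.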
\begin{figure}[t]
\centering
\subfigure[Total variations of eigenvectors of Hermitian fractional Laplacian when $\alpha=0.8$]{
\label{TValpha=0.8}
\includegraphics[width=0.45\textwidth]{Total_Var_for_alpha0.8.pdf}}
\subfigure[Total variations of eigenvectors of Hermitian fractional Laplacian when $\alpha=0.6$]{
\label{TValpha=0.6}
\includegraphics[width=0.45\textwidth]{Total_Var_for_alpha0.6.pdf}}
\caption{Total variations of eigenvectors of Hermitian fractional Laplacian with respect to different fractional orders.}
\label{TV}
\end{figure}
Then we prove that $L_{\alpha,d}$ is a positive semi-definite Hermitian matrix.
\begin{proposition}\label{Hermitian}
For any fractional order $\alpha$, $L_{\alpha,d}$ is a Hermitian matrix:
\begin{equation}
(L_{\alpha,d})^{H}=L_{\alpha,d}.
\end{equation}
\end{proposition}
\begin{proof}
Since $V$ is a real diagonal matrix, $(V^\alpha)^H=V^\alpha$.
\begin{equation}
(L_{\alpha,d})^{H}=(PQP^H)^H=(U^\alpha V^\alpha(U^\alpha)^H)^H=U^\alpha V^\alpha(U^\alpha)^H=PQP^H=L_{\alpha,d}.
\end{equation}
\end{proof}
\begin{proposition}
For any fractional order $\alpha$, $L_{\alpha,d}$ is a positive semi-definite matrix.
\end{proposition}
\begin{proof}
From Proposition \ref{Hermitian}, we know $V^\alpha$ is real diagonal matrix. Let $J=V^{\alpha/2}(U^\alpha)^H$, so $L_{\alpha,d}=J^HJ$. For any signal $f\in\mathbb{C}^N$, note that
\begin{equation}
f^HL_{\alpha,d}f=f^HU^\alpha V^\alpha(U^\alpha)^Hf=f^HJ^HJf=(Jf)^HJf\geq0.
\end{equation}
\end{proof}
Clearly, this Hermitian fractional Laplacian matrix has a set of orthonormal eigenvectors. This orthogonality allows the basic concepts of graph signal processing on undirected graphs to be extended directly to directed graphs.
\begin{definition}(Directed graph fractional Fourier transform)
The spectral graph fractional Fourier transform for directed graphs (DGFRFT) of any signal $f$ defined on the graph $G$ is:
\begin{equation}
\begin{split}
\widehat{f}_{\alpha,d}(\ell)=\langle f,p_{\ell}\rangle=\sum^{N}_{n=1}f(n)p^{*}_{\ell}(n), \ell=0,1,\cdots,N-1.
\end{split}
\end{equation}
By the matrix form, the DGFRFT is
\begin{equation}
\widehat{f}_{\alpha,d}=P^Hf,
\end{equation}
When $\alpha=1$, the DGFRFT reduces to the GFT for directed graphs.
The inverse DGFRFT is given by
\begin{equation}\label{IGFRFT}
\begin{split}
f(n)=\sum^{N-1}_{\ell=0}\widehat{f}_{\alpha,d}(\ell)p_{\ell}(n), n=0,1,\cdots,N-1.
\end{split}
\end{equation}
\end{definition}
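A minimal numerical sketch of the DGFRFT pair follows (illustrative graph and parameters; the principal matrix power is implemented by a small helper, an implementation choice not prescribed by the paper):

```python
import numpy as np

def principal_power(A, t):
    """Principal fractional power of a diagonalizable matrix."""
    mu, S = np.linalg.eig(A)
    return S @ np.diag(mu.astype(complex) ** t) @ np.linalg.inv(S)

# Hermitian Laplacian of a small directed 3-cycle (illustrative, q = 0.25)
W = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
q = 0.25
Ws = 0.5 * (W + W.T)
L = np.diag(Ws.sum(axis=1)) - np.exp(2j * np.pi * q * (W - W.T)) * Ws

v, U = np.linalg.eigh(L)        # U unitary, v real
alpha = 0.8
P = principal_power(U, alpha)   # P = U^alpha

f = np.array([1.0, 0.0, -2.0])
f_hat = P.conj().T @ f          # DGFRFT
f_rec = P @ f_hat               # inverse DGFRFT

assert np.allclose(f_rec, f)                               # perfect reconstruction
assert np.allclose(P.conj().T @ P, np.eye(3), atol=1e-6)   # unitarity of P
assert np.allclose(principal_power(U, 1.0), U)             # alpha = 1: GFT basis
```

Because $U$ is unitary (hence normal), $P=U^\alpha$ is unitary as well, which is the content of the unitarity property below.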
Now we have the definition of DGFRFT, it has some useful properties.
\begin{property} Unitarity: $P^{-1}=(U^\alpha)^{-1}=U^{-\alpha}=P^H$.
\end{property}
\begin{property} Index additivity: $U^{-\alpha}\circ U^{-\beta}=U^{-\beta}\circ U^{-\alpha}=U^{-(\alpha+\beta)} $.
\end{property}
\begin{property} Reduction to SGFRFT when $w_{ij}=w_{ji}$.
\end{property}
\begin{property} Parseval relation holds, for any signal $f$ and $g$ defined on the directed graph $G$ we have:
\begin{equation}
\begin{split}
\langle f,g\rangle=\langle \widehat{f}_{\alpha,d},\widehat{g}_{\alpha,d}\rangle.
\end{split}
\end{equation}
If $f=g$, then
\begin{equation}\label{parseval}
\begin{split}
&\sum^{N}_{n=1}|f(n)|^{2}=\|f\|_{2}^{2}=\langle f,f\rangle\\
&=\langle \widehat{f}_{\alpha,d},\widehat{f}_{\alpha,d}\rangle=\|\widehat{f}_{\alpha,d}\|_{2}^{2}=\sum^{N-1}_{\ell=0}|\widehat{f}_{\alpha,d}(\ell)|^{2}.
\end{split}
\end{equation}
\end{property}
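The Parseval relation can be checked numerically; because $P$ is unitary, $\langle f,g\rangle=\langle P^Hf,P^Hg\rangle$ holds for any vectors. A hedged sketch with a stand-in basis:

```python
import numpy as np

# Numerical check of the Parseval relation <f, g> = <hat f, hat g>, using a
# stand-in unitary DGFRFT matrix P (eigenvectors of a random Hermitian matrix).
rng = np.random.default_rng(2)
A = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
_, P = np.linalg.eigh((A + A.conj().T) / 2)

f = rng.standard_normal(8) + 1j * rng.standard_normal(8)
g = rng.standard_normal(8) + 1j * rng.standard_normal(8)
f_hat, g_hat = P.conj().T @ f, P.conj().T @ g

inner_vertex = np.vdot(f, g)          # <f, g> in the vertex domain
inner_spec = np.vdot(f_hat, g_hat)    # <hat f, hat g> in the fractional domain
```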
The DGFRFT is a new transform for directed graphs in the graph fractional domain. Compared with the GFRFT, it has several significant advantages.
First, the columns of $U$ are linearly independent eigenvectors and at the same time orthogonal, so the DGFRFT preserves the inner product when passing from the vertex domain to the graph fractional domain. In addition, the DGFRFT preserves edge directionality. Finally, computing the GFRFT requires a Jordan decomposition; once the graph grows beyond a moderate size, this decomposition suffers from serious numerical instability. Because our new Laplacian matrix for directed graphs is Hermitian, the Jordan decomposition can be avoided in the computation of the DGFRFT.
\section{Directed graph filtering}
\label{filter}
\subsection{Spectral graph filtering}
In classical signal processing, filtering can be defined through convolution: the convolution of a signal $a$ with $b$ is the result of filtering $a$ by $b$. Therefore, to define directed graph filtering in the graph fractional domain, we first need a convolution operator. Convolution in the time domain is equivalent to multiplication in the Fourier domain. For directed graphs, the graph fractional convolution operator is defined in the following form, consistent with classical convolution, using the eigenvectors of the directed graph fractional Laplacian.
\begin{definition}(Convolution operator)
For any graph signals $f$ and $g$ whose underlying graph structure is directed, their graph fractional convolution $*$ is
\begin{equation}\label{convolution}
(f*g)(n)=\sum_{\ell=0}^{N-1} \widehat{f}_{\alpha,d}(\ell) \widehat{g}_{\alpha,d}(\ell)p_{\ell}(n).
\end{equation}
\end{definition}
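Graph fractional convolution is thus pointwise multiplication of the DGFRFT spectra followed by the inverse transform. A hedged sketch, again with a stand-in orthonormal basis $P$ from a random Hermitian matrix:

```python
import numpy as np

# Stand-in eigenvector basis P (in the paper, the DGFRFT basis of L_{alpha,d}).
rng = np.random.default_rng(3)
A = rng.standard_normal((7, 7)) + 1j * rng.standard_normal((7, 7))
_, P = np.linalg.eigh((A + A.conj().T) / 2)

f = rng.standard_normal(7)
g = rng.standard_normal(7)
f_hat = P.conj().T @ f
g_hat = P.conj().T @ g

# (f*g)(n) = sum_l hat f(l) hat g(l) p_l(n)  <=>  P @ (f_hat * g_hat)
conv = P @ (f_hat * g_hat)
```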
A graph filter is a system that takes a graph signal as input and produces another graph signal as output \cite{chen2014signal}. Given an input directed graph signal $f^{in}$, filtering is defined by the convolution of $f^{in}$ with a filter $h$. Thus the spectral directed graph filtering in the vertex domain is:
\begin{equation}
f^{out}(n)=(f*h)(n)=\sum^{N-1}_{\ell=0}\widehat{f}_{\alpha,d}^{in}(\ell)\widehat{h}_{\alpha,d}(\ell)p_{\ell}(n), n=0,1,\cdots,N-1,
\end{equation}
and in graph fractional domain:
\begin{equation}
\widehat{f}_{\alpha,d}^{out}(\ell)=\widehat{f}_{\alpha,d}^{in}(\ell)\widehat{h}_{\alpha,d}(\ell), \ell=0,1,\cdots,N-1.
\end{equation}
\subsection{Frequency selective filtering}
The spectral graph filtering above is an ideal filter: it retains all frequencies within a given range and completely removes those outside it. Because a graph has only a finite number of vertices, such a filter cannot be designed exactly. A frequency-selective filter is a system that isolates specific frequency components and excludes all others; it is easier to realize in applications. The ideal response of the frequency-selective filter is \cite{iacobucci2005frequency}
\begin{equation}
h_d(l)=\left\{\begin{array}{ll}
1, & l\in\text{passband}\\
0, & l\in\text{stopband}.
\end{array}\right.
\end{equation}
Lowpass, bandpass and highpass filters are three common frequency selective filters.
Using $h_d(l)$, the graph signal $f$ is filtered in graph fractional domain by:
\begin{equation}
\widehat{f}_{\alpha,d}^{out}=J_d\widehat{f}_{\alpha,d},
\end{equation}
where $J_d=\mathrm{diag}([h_d(\xi_1), h_d(\xi_2), \cdots, h_d(\xi_N)])$, and $\widehat{f}_{\alpha,d}=P^{-1}f$ is the DGFRFT of $f$.
Or, equivalently, vertex domain filtering can be obtained by inverse DGFRFT:
\begin{equation}
f^{out}=P\widehat{f}_{\alpha,d}^{out}=PJ_d\widehat{f}_{\alpha,d}=PJ_dP^{-1}f=Hf,
\end{equation}
where $H=PJ_dP^{-1}$ represents the transfer matrix.
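The vertex-domain transfer matrix $H=PJ_dP^{-1}$ can be sketched as follows, with an ideal low-pass response that passes the first $k$ spectral components ($k$ is a hypothetical passband size, and $P$ is again a stand-in orthonormal eigenvector matrix):

```python
import numpy as np

# Vertex-domain realization of a frequency-selective filter, H = P J_d P^{-1}.
rng = np.random.default_rng(4)
A = rng.standard_normal((10, 10)) + 1j * rng.standard_normal((10, 10))
_, P = np.linalg.eigh((A + A.conj().T) / 2)

k = 4                                      # hypothetical passband size
h_d = np.where(np.arange(10) < k, 1.0, 0.0)
J_d = np.diag(h_d)
H = P @ J_d @ P.conj().T                   # P^{-1} = P^H by unitarity

f = rng.standard_normal(10)
f_out = H @ f                              # filtered signal in the vertex domain
```

An ideal filter of this form is idempotent: applying $H$ twice changes nothing, since $J_d^2=J_d$.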
\section{Application}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{direct_original.pdf}
\caption{Average annual temperature data measured by 50 state meteorological stations in the US.}
\label{USTEM}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[original signal.]{
\label{direct_original}
\includegraphics[width=0.4\textwidth]{direct_original.pdf}}
\subfigure[noisy signal.]{
\label{direct_noisy}
\includegraphics[width=0.4\textwidth]{direct_noisy.pdf}}
\subfigure[denoised signal using Hermitian GFT \cite{furutani2019graph}.]{
\label{direct_denoise_HL}
\includegraphics[width=0.4\textwidth]{direct_denoise_HL.pdf}}
\subfigure[denoised signal using DGFRFT.]{
\label{direct_denoise_FrHL}
\includegraphics[width=0.4\textwidth]{direct_denoise_FrHL.pdf}}
\caption{Original, noisy and denoised temperature signals on directed US graph.}
\label{directed}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[original signal.]{
\label{undirect_original}
\includegraphics[width=0.3\textwidth]{undirect_original.pdf}}
\subfigure[noisy signal.]{
\label{undirect_noisy}
\includegraphics[width=0.3\textwidth]{undirect_noisy.pdf}}
\subfigure[denoised signal.]{
\label{undirect_denoise_FrHL}
\includegraphics[width=0.3\textwidth]{undirect_denoise.pdf}}
\caption{Original, noisy and denoised temperature signals on undirected US graph.}
\label{undirected}
\end{figure}
Here we assess the performance of the DGFRFT via simulations on two graphs. In Simulation 1, we use US temperature data to show that directionality matters in applications: starting from a directed graph, we denoise with the DGFRFT, then ignore the directionality, repeat the same denoising step, and compare the results. In the graph fractional domain, both the DGFRFT and the GFRFT \cite{25} are suitable for directed graphs; the former is based on a Laplacian matrix, whereas the latter uses the adjacency matrix. In Simulation 2, on a real dataset, we use the DGFRFT and the GFRFT to perform the same filtering and denoising task and compare the results. The experiments show that our DGFRFT introduces smaller errors and is more robust.
\subsection{Simulation 1: \textbf{Directed} VS \textbf{Undirected}}
In this section, real temperature data measured by 50 state meteorological stations in the United States (US) are used for the experiment. We consider signal denoising on a directed graph. First, based on a map of the US, we define a digraph using geographic locations and latitude. In this graph, the vertices represent the 50 states of the US. Directed edges between states are assigned by latitude, pointing from states with lower latitude to states with higher latitude; two states are connected by an edge only when they share a border. The average annual temperature in each state is viewed as a graph signal. States at lower latitudes have higher average temperatures, so there is a correlation between latitude and temperature, and it is reasonable to use latitude to define the digraph. Fig. \ref{USTEM} shows the US temperature signal on a directed graph. The data is available at \sloppy\url{https://www.currentresults.com/Weather/US/average-annual-state-temperatures.php}.
We generate a noisy signal $g= f + n$, where $f$ is the original signal representing the average temperature on the graph and $n$ is a noise vector whose coordinates are independently sampled from a Gaussian distribution with zero mean and standard deviation $\sigma = 10$. We apply the GFT \cite{4}, the Hermitian GFT \cite{furutani2019graph}, and the DGFRFT to recover the original signal from the noisy one. For each method, we use a low-pass filter kernel
\begin{equation*}
\hat{h_d}(\lambda) = \frac{1}{1+c\lambda},
\end{equation*}
where $\lambda$ denotes an eigenvalue of the graph Laplacian, the Hermitian graph Laplacian, or the graph Hermitian fractional Laplacian, respectively. The denoised signal can be calculated as
\begin{equation*}
\tilde{f} = U\hat{H}U^*g,
\end{equation*}
where $\hat{H} = \text{diag}\{\hat{h_d}(\lambda_0),\ldots,\hat{h_d}(\lambda_{N-1})\}$ and $U$ is the GFT, Hermitian GFT, or DGFRFT transform matrix, respectively. In this experiment, we set $c=0.02$, $q=0.5$ and $\alpha=0.9$. Note that the GFT \cite{4} can only be applied to undirected graphs. To verify the importance of directionality, we run the same denoising scheme on the US graph of Fig. \ref{USTEM} with the direction of the edges omitted. Figs. \ref{directed} and \ref{undirected} give an intuitive example of the original, noisy, and denoised temperature signals on the directed and undirected US graphs, respectively. In the directed case, the performance difference between the Hermitian GFT and the DGFRFT can be observed in Figs. \ref{direct_denoise_HL} and \ref{direct_denoise_FrHL}. In the undirected case, the denoising result after the GFT is shown in Fig. \ref{undirect_denoise_FrHL}.
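This denoising pipeline can be sketched on a synthetic stand-in graph (hedged: the Laplacian, the smooth signal, and the noise level below are illustrative choices, not the US temperature data; only the kernel form and $c=0.02$ follow the text):

```python
import numpy as np

# tilde f = U Hhat U^* g with Hhat = diag(1/(1 + c*lambda_l)); a synthetic
# symmetric PSD "Laplacian" and a smooth low-frequency signal stand in for
# the real US temperature graph.
rng = np.random.default_rng(5)
B = rng.standard_normal((50, 50))
B = B + B.T
L = B @ B.T                             # symmetric PSD stand-in Laplacian
lam, U = np.linalg.eigh(L)

c = 0.02                                # filter constant, as in the text
H_hat = np.diag(1.0 / (1.0 + c * lam))

f = U[:, :5] @ rng.standard_normal(5)   # smooth (low-frequency) signal
g = f + rng.normal(0.0, 1.0, 50)        # noisy observation
f_tilde = U @ H_hat @ U.T @ g           # denoised signal

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
```

Because the kernel attenuates high-eigenvalue components while leaving the low-frequency signal almost untouched, the denoised signal should sit closer to the original than the noisy observation does.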
In addition, we calculate the root mean square error (RMSE) between the original signal and the denoised signals obtained with the GFT \cite{4}, Hermitian GFT \cite{furutani2019graph}, and DGFRFT approaches. The results are reported in Table \ref{tab:RE}. As the results demonstrate, all three transforms perform well on graph signal denoising, leading to small average errors. The RMSE obtained by our method is about 6.3828, compared with 6.5685 for the classical GFT on the undirected graph \cite{4} and 6.5657 for the Hermitian GFT on the directed graph \cite{furutani2019graph}. In conclusion, the proposed DGFRFT on the directed graph outperforms the other two approaches, which highlights its practical usefulness in data denoising.
\begin{table*}[htbp]
\centering
\caption{Average RMSE between denoised signal and original signal.}
\begin{tabular}{|c|c|c|}
\hline
Graph type & Transform method & RMSE\\
\hline
undirected & GFT \cite{4} & 6.5685\\
directed & Hermitian GFT \cite{furutani2019graph} & 6.5657\\
directed & DGFRFT & 6.3828\\
\hline
\end{tabular}
\label{tab:RE}
\end{table*}
\subsection{Simulation 2: \textbf{DGFRFT} VS \textbf{GFRFT}}
Next, we compare the DGFRFT with the GFRFT \cite{25}, which is also applicable to digraphs, on a real brain graph to demonstrate the superiority of the DGFRFT in denoising tasks. The dataset represents the macaque large-scale visual and sensorimotor corticocortical connectivity \cite{rubinov2010complex}. It has 47 vertices and 505 edges (121 of which are directed). The vertices represent cortical areas and the edges represent large corticocortical tracts or functional associations. The data is available at \sloppy\url{https://sites.google.com/site/bctnet/}.
\begin{figure}[t]
\centering
\includegraphics[width=0.7\textwidth]{curves.pdf}
\caption{A sample realization of the original, noisy, and recovered signals of the DGFRFT and GFRFT.}
\label{compared}
\end{figure}
\begin{figure}[t]
\centering
\subfigure[Relative recovery error of DGFRFT]{
\label{laperror}
\includegraphics[width=0.45\textwidth]{box_lap.pdf}}
\subfigure[Relative recovery error of GFRFT]{
\label{adjerror}
\includegraphics[width=0.45\textwidth]{box_adj.pdf}}
\caption{Relative recovery error
of DGFRFT and GFRFT when $\alpha=0.9$ and $q=0.2$.}
\label{error}
\end{figure}
Let $U$ be the orthonormal DGFRFT or GFRFT basis. We construct a synthetic graph signal by $x_f=e^{-f}$ and add Gaussian noise to obtain the noisy signal $g=x_f+n$. We use a series of filter kernels $H := \text{diag}(h)$, where $h_i = 1[i \le \ell]$ and $\ell$ is a tuning parameter controlling the number of nonzero diagonal elements. The parameter $\ell$ can be viewed as a spectral window size; when $\ell=47$, the filter kernel is just the identity matrix. The recovered signal is given by
\begin{equation*}
\tilde{x}_f = UHU^*g.
\end{equation*}
Fig. \ref{compared} shows an example of the original, noisy, and recovered signals of the DGFRFT and GFRFT with window size $5$. Moreover, we compute the relative recovery error with respect to the true error. The recovery error is defined as $e_f = \|\tilde{x}_f - x_f\|/\|x_f\|$, the true error as $e = \|n\|/\|x_f\|$, and the relative recovery error as $e_f/e$. Fig. \ref{error} shows boxplots of $e_f/e$ versus $\ell$ averaged over 1000 Monte-Carlo simulations and demonstrates the effectiveness of adopting filters along with the two proposed methods. We can see that the DGFRFT is much more stable than the GFRFT. The Jordan decomposition required by the GFRFT is numerically unstable, which may be responsible for the large reconstruction errors. Therefore, compared with the GFRFT, the DGFRFT has clear advantages when dealing with graph signals whose underlying structure is a directed graph.
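One realization of this recovery experiment can be sketched as follows (hedged: a 47-node stand-in orthonormal basis replaces the macaque DGFRFT/GFRFT bases, and the underlying $f$, noise level, and window size are hypothetical):

```python
import numpy as np

# Recovery-error sketch: x_f = exp(-f), g = x_f + n, recovered signal
# x~ = U H U^* g with H = diag(1[i <= ell]); U is a stand-in orthonormal basis.
rng = np.random.default_rng(6)
A = rng.standard_normal((47, 47))
_, U = np.linalg.eigh((A + A.T) / 2)

f = np.linspace(0.0, 1.0, 47)           # hypothetical underlying signal f
x_f = np.exp(-f)
n = rng.normal(0.0, 0.1, 47)
g = x_f + n

ell = 5                                 # spectral window size
H = np.diag((np.arange(47) < ell).astype(float))
x_rec = U @ H @ U.T @ g

e_f = np.linalg.norm(x_rec - x_f) / np.linalg.norm(x_f)  # recovery error
e = np.linalg.norm(n) / np.linalg.norm(x_f)              # true error
rel = e_f / e                                            # relative recovery error
```

With $\ell=47$ the kernel is the identity and the noisy signal is returned unchanged, matching the remark in the text.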
\section{Conclusions}
Signals defined on directed graphs have important practical significance. This paper proposes a methodology for graph signal processing on directed graphs in the spectral graph fractional domain. First, we introduce a method to construct a new fractional Laplacian matrix for directed graphs and prove that it is a positive semi-definite Hermitian matrix. Equipped with this Hermitian fractional Laplacian matrix, a new transform named DGFRFT is presented. Then, to highlight the utility of the DGFRFT, we propose two basic filtering methods and apply them to signal denoising on real data. Finally, the effectiveness of the DGFRFT construction is illustrated through tests on directed real-world graphs.
\section{Introduction}
The history of our home Galaxy is complex and not fully understood. Observations and theoretical
simulations have made much progress and provided us with tools to search for past accretion events
in the Milky Way and beyond. The well-known current events are the Sagittarius \citep{ibata94},
Canis Major \citep{martin04} and Segue~2 \citep{belokurov09} dwarf spheroidal galaxies,
merging into the Galactic disc at
various distances. The Monoceros stream \citep{yanny03, ibata03} and the Orphan stream \citep{belokurov06} are, according to some studies, interpreted as tidal debris from the Canis Major and
Ursa Major~II dwarf galaxies, respectively (see \citealt{penarubia05, fellhauer07}; and the review of \citealt{helmi08}).
Accreted substructures are found also in other galaxies, such as the Andromeda galaxy \citep{ibata01, mcconnachie09},
NGC~5907 \citep{martinez08}, and NGC~4013 \citep{martinez09}.
\citet{helmi06} have used a homogeneous data set of about
13\,240 F- and G-type stars from the \citet{nordstrom04} catalogue, which has complete kinematic,
metallicity, and age parameters, to search for signatures of past accretions in the Milky Way. From correlations
between orbital parameters, such as apocentre (A), pericentre (P), and \textit{z}-angular momentum ($L_z$),
the so-called APL space, Helmi et al.\ identified three new coherent groups of stars and suggested that those
might correspond to remains of disrupted satellites. In the \textit{U--V} plane, the investigated stars are
distributed in a banana shape, whereas the disc stars define a centrally concentrated clump (Fig.~1). At the same time,
in the \textit{U--W} plane the investigated stars populate mostly the outskirts of the distributions. Both the \textit{U}
and \textit{W} distributions are very symmetric. The investigated stars have a lower mean rotational velocity in
comparison to the Milky Way disc stars, as we can see in the \textit{W--V} plane. These characteristics are typical for stars
associated with accreted satellite galaxies \citep{helmi08, villalobos09}.
\begin{figure*}
\centering
\includegraphics[width=\textwidth]{Ston-Fig1.eps}
\caption{Velocity distribution for all stars in the \citet{holmberg09} sample
(plus signs), stars of Group~3 (circles) and the investigated stars (filled circles).
}
\label{Fig.1}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{Ston-Fig2.eps}
\caption{Distribution for the stars in the APL space. Plus signs denote the \citet{holmberg09}
sample, circles -- Group~3, filled circles -- investigated stars. Note that the investigated stars as well as all Group 3 stars are distributed
in APL space with constant eccentricity.
}
\label{Fig.2}
\end{figure*}
Stars in the identified groups not only cluster in regions of roughly constant eccentricity
($0.3 \le \epsilon < 0.5$) and have distinct kinematics, but also have distinct metallicity [Fe/H] and age distributions.
One of the parameters according to which the stars were divided into three groups was metallicity.
Group~3, which we investigate in this work, is the most metal-deficient and consists of 68 stars. According to the
\citet{nordstrom04} catalogue, its mean photometric metallicity, [Fe/H], is about $-0.8$~dex and the age is
about 14~Gyr. Group 3 also differs from the other two groups by slightly different kinematics, particularly
in the vertical ($z$) direction. \citet{holmberg09} updated and improved the parameters for the stars in the
\citet{nordstrom04} catalogue and we use those values throughout.
In Fig.~1 we show the Galactic disc stars from \citet{holmberg09}. Stars belonging to Group~3 in \citeauthor{helmi06}
are marked with open and filled circles (the latter are used to mark stars investigated in our work). Evidently, stars belonging to Group 3
have a different distribution in the velocity space in comparison to other stars of the Galactic disc.
In Fig.~2, the stars are shown in the APL space.
From high-resolution spectra we have measured abundances of iron group and $\alpha$-elements
in 21 stars belonging to Group~3 to check the homogeneity of their chemical composition and compare them with Galactic disc stars.
The $\alpha$-element-to-iron ratios are very sensitive indicators of galactic evolution (\citealt{pagel95, fuhrmann98, reddy06, tautvaisiene07, tolstoy09}
and references therein). If stars have been formed in different
environments they normally have different $\alpha$-element-to-iron ratios for a given metallicity.
\begin{table*}
\centering
\begin{minipage}{150mm}
\caption{Parameters of the programme and comparison stars.}
\label{table:1}
\begin{tabular}{lcrrrrrrccrrrrr}
\hline
\hline
Star & Sp. type & Age & $M_{\rm V}$ & d & $U_{\rm LSR}$ & $V_{\rm LSR}$ & $W_{\rm LSR}$ & $e$ & $z_{\rm max}$ & $R_{\min}$ & $R_{\max}$ \\
& & Gyr & mag & pc & km s$^{-1}$ & km s$^{-1}$ & km s$^{-1}$ & & kpc & kpc & kpc \\
\hline
\noalign{\smallskip}
\object{HD 967} & G5 & 9.9 & 5.23 & 43 & --55 & --80 & 0 & 0.34 & 0.12 & 4.09 & 8.29 \\
\object{HD 17820} & G5 & 11.2 & 4.45 & 61 & 34 & --98 &--77 & 0.39 & 1.62 & 3.65 & 8.31 \\
\object{HD 107582} & G2V & 9.4 & 5.18 & 41 &--1 &--103 &--46 & 0.41 & 0.76 & 3.35 & 8.02 \\
\object{BD +73 566} & G0 & ... & 5.14 & 67 &--52 &--86 &--18 & 0.36 & 0.18 & 3.89 & 8.27 \\
\object{BD +19 2646} & G0 & ... & 5.53 & 74 & 103 &--52 & 22 & 0.38 & 0.64 & 4.58 & 10.21 \\
\object{HD 114762} & F9V & 10.6 & 4.36 & 39 & --79 & --66 & 57 & 0.31 & 1.57 & 4.64 & 8.79 \\
\object{HD 117858} & G0 & 11.7 & 4.02 & 61 & 71 &--56 &--20 & 0.32 & 0.21 & 4.76 & 9.17 \\
\object{BD +13 2698} & F9V & 14.2 & 4.52 & 93 & 102 & --67 & --66 & 0.40 & 1.60 & 4.21 & 9.92 \\
\object{BD +77 0521} & G5 & 14.5 & 5.27 & 68 & 4 & --103 & --36 & 0.42 & 0.48 & 3.30 & 8.05 \\
\object{HD 126512} & F9V & 11.1 & 4.01 & 45 & 82 & --81 & --73 & 0.40 & 1.69 & 3.95 & 9.20 \\
\object{HD 131597} & G0 & ... & 3.06 & 119 & --133 & --98 & --43 & 0.52 & 0.81 & 3.10 & 9.85 \\
\object{BD +67 925} & F8 & 13 & 4.14 & 139 & --128 & --103 & --29 & 0.53 & 0.43 & 2.97 & 9.69 \\
\object{HD 159482} & G0V & 10.9 & 4.82 & 52 & --170 & --60 & 89 & 0.51 & 3.67 & 4.04 & 12.55 \\
\object{HD 170737} & G8III-IV & ... & 2.88 & 112 & --64 & --102 & --92 & 0.40 & 2.61 & 3.47 & 8.07 \\
\object{BD +35 3659} & F1 & 0.9 & 5.32 & 96 & 212 & --86 & --117& 0.65 & 5.50 & 3.24 & 15.41 \\
\object{HD 201889} & G1V & 14.5 & 4.40 & 54 & --126 & --83 & --35 & 0.46 & 0.56 & 3.58 & 9.80 \\
\object{HD 204521} & G5 & 2.1 & 5.18 & 26 & 15 & --73 & --19 & 0.29 & 0.18 & 4.45 & 8.11 \\
\object{HD 204848} & G0 & ... & 1.98 & 122 & 42 & --91 & 66 & 0.36 & 1.77 & 3.88 & 8.34 \\
\object{HD 212029} & G0 & 13.1 & 4.66 & 59 & 67 & --95 & 31 & 0.44 & 0.77 & 3.44 & 8.76 \\
\object{HD 222794} & G2V & 12.1 & 3.83 & 46 & --73 & --104 & 83 & 0.42 & 3.02 & 3.43 & 8.39 \\
\object{HD 224930} & G5V & 14.7 & 5.32 & 12 & --9 &--75 & --34 & 0.29 & 0.44 & 4.42 & 8.01 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\object{HD 17548} & F8 & 6.9 & 4.46 & 55 & --14 & 31 & 32 & 0.15 & 0.88 & 8.02 & 10.85 \\
\object{HD 150177} & F3V & 5.7 & 3.33 & 40 & --7 & --23 & --24 & 0.07 & 0.25 & 6.95 & 7.97 \\
\object{HD 159307} & F8 & 5.2 & 3.33 & 65 & --14 & --21 & 0 & 0.06 & 0.11 & 7.00 & 7.95 \\
\object{HD 165908} & F7V & 6.8 & 4.09 & 16 & --6 & 1 & 10 & 0.03 & 0.26 & 7.97 & 8.45 \\
\object{HD 174912} & F8 & 6.1 & 4.73 & 31 & --22 & 8 & --43 & 0.07 & 0.69 & 7.90 & 9.10 \\
\object{HD 207978} & F6IV-V & 3.3 & 3.33 & 27 & 13 & 16 & --7 & 0.11 & 0.00 & 7.81 & 9.68 \\
\hline
\end{tabular}
\end{minipage}
\end{table*}
\section{Observations and method of analysis}
Spectra of high resolving power ($R\approx$68\,000) in the wavelength range of 3680--7270~{\AA} were obtained
at the Nordic Optical Telescope with the FIES spectrograph in July 2008. Twenty-one programme and six comparison stars
(thin-disc dwarfs) were observed. A list of the observed stars and some of their parameters (taken from the
\citealt{holmberg09} catalogue and Simbad) are presented in Table~1.
All spectra were exposed to reach a signal-to-noise ratio higher than 100. Reduction of the CCD images was made
with the FIES pipeline {\sc FIEStool}, which performs a complete reduction: calculation of the reference frame, bias and
scattered-light subtraction, flat-field division, wavelength calibration, and other procedures (http://www.not.iac.es/instruments/fies/fiestool). Several examples
of stellar spectra are presented in Fig.~3.
The spectra were analysed using a differential model atmosphere technique.
The {\sc Eqwidth} and {\sc Spectrum} program packages, developed at the Uppsala Astronomical
Observatory, were used to carry out the calculation of abundances from measured
equivalent widths and synthetic spectra, respectively.
A set of plane-parallel, line-blanketed, constant-flux LTE model atmospheres
\citep{gustafsson08} were taken from the {\sc MARCS} stellar model atmosphere and flux
library (http://marcs.astro.uu.se/).
The Vienna Atomic Line Data Base (VALD, \citealt{piskunov95}) was extensively
used in preparing input data for the calculations. Atomic oscillator
strengths for the main spectral lines analysed in this study were taken from
an inverse solar spectrum analysis performed in Kiev \citep{gurtovenko89}.
All lines used for the calculations were carefully selected to avoid blending. All line profiles in all spectra were
checked by hand, requiring that they be sufficiently clean to provide reliable equivalent widths.
The equivalent widths of the lines were measured by fitting a Gaussian profile with the {\sc 4A} software
package \citep{ilyin00}.
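For illustration, measuring an equivalent width from a Gaussian absorption profile can be sketched as follows (a minimal numerical stand-in with a synthetic line, not the {\sc 4A} package; the line depth, centre, and width are hypothetical):

```python
import numpy as np

# Synthetic Gaussian absorption line on a normalized continuum. For a Gaussian
# of central depth d and width sigma, the equivalent width is
# W = d * sigma * sqrt(2*pi); we recover it by integrating 1 - F/F_c.
wl = np.linspace(6299.0, 6301.6, 400)          # wavelength grid (Angstrom)
d, mu, sigma = 0.30, 6300.31, 0.08             # hypothetical line parameters
flux = 1.0 - d * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

depress = 1.0 - flux                           # line depression 1 - F/F_c
dw = np.diff(wl)
ew_int = np.sum(0.5 * (depress[1:] + depress[:-1]) * dw)   # trapezoid rule, A
ew_analytic = d * sigma * np.sqrt(2.0 * np.pi)             # analytic EW, A
```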
Initial values of the effective temperatures for the programme stars were taken from \citet{holmberg09}
and then carefully checked and, if needed, corrected by forcing the Fe~{\sc i} lines to show no dependence
of the iron abundance on the excitation potential while varying the model effective temperature. For four stars our
effective temperature is $+100$ to $+200$~K higher than in the catalogue.
We used the ionization equilibrium method to find surface gravities of the programme stars
by forcing neutral and ionized iron lines to yield the same iron abundances.
Microturbulent velocity values were chosen to minimize the line-to-line scatter of the Fe~{\sc i} abundances.
Using the $gf$ values and solar equivalent widths of analysed lines from \citet{gurtovenko89}
we obtained the solar abundances, used later for the
differential determination of abundances in the programme stars. We used the
solar model atmosphere from the set calculated in Uppsala with a microturbulent
velocity of 0.8~$\rm {km~s}^{-1}$, as derived from Fe~{\sc i} lines.
\input epsf
\begin{figure}
\epsfxsize=\hsize
\epsfbox[10 20 280 220]{Ston-Fig3.eps}
\caption{Samples of stellar spectra of several programme stars. An offset of 0.5 in relative flux is applied for clarity.}
\label{fig3}
\end{figure}
In addition to thermal and microturbulent Doppler broadening of lines, atomic
line broadening by radiation damping and van der Waals damping were considered
in the calculation of abundances. Radiation damping parameters of lines were taken from the VALD database.
In most cases the hydrogen pressure damping of metal lines was treated using
the modern quantum mechanical calculations by \citet{anstee95},
\citet{barklem97}, and \citet{barklem98}.
When using the \citet{unsold55} approximation, correction factors to the classical
van der Waals damping approximation by widths
$(\Gamma_6)$ were taken from \citet{simmons82}. For all other species a correction factor
of 2.5 was applied to the classical $\Gamma_6$ $(\Delta {\rm log}C_{6}=+1.0$),
following \citet{mackle75}. For lines stronger than $W=100$~m{\AA} the correction factors were selected individually by
inspection of the solar spectrum.
The oxygen abundance was determined from the forbidden [O\,{\sc i}] line at 6300.31~\AA\ (Fig.~4). The oscillator strength
values for \textsuperscript{58}Ni and \textsuperscript{60}Ni, which blend the oxygen line, were taken from \citet{johansson03}.
The [O\,{\sc i}] log~$gf = -9.917$ value was calibrated by fitting to the solar spectrum \citep{kurucz05} with log~$A_{\odot}=8.83$ taken
from \citet{grevesse00}. Stellar rotation was taken into account when needed, with $v \sin i$ values from \citet{holmberg07}.
The oxygen abundance was not determined for every star because of blending by telluric lines or the weakness of the oxygen line profile.
\input epsf
\begin{figure}
\epsfxsize=\hsize
\epsfbox[15 10 290 245]{Ston-Fig4.eps}
\caption{Fit to the forbidden [O\,{\sc i}] line at 6300.3 {\AA} in the programme star HD 204848.
The observed spectrum is shown as a solid line with black dots. The synthetic spectra with ${\rm [O/Fe]}=0.52 \pm 0.1$
are shown as dashed lines.}
\label{Fig.4}
\end{figure}
Abundances of other chemical elements were determined using equivalent widths of their lines.
Abundances of Na and Mg were determined with non-local thermodynamic equilibrium (NLTE) effects taken into
account, as described by \citet{gratton99}. The calculated corrections did not exceed $0.04$~dex for the
Na\,{\sc i} and $0.06$~dex for the Mg\,{\sc i} lines.
Abundances of sodium were determined from equivalent widths of the Na\,{\sc i} lines
at 5148.8, 5682.6, 6154.2, and 6160.8~{\AA}; magnesium from the Mg\,{\sc i} lines at 4730.0, 5711.1,
6318.7, and 6319.2~{\AA}; and that of aluminum from the Al\,{\sc i} lines at 6696.0, 6698.6, 7084.6, and 7362.2~{\AA}.
\subsection{Estimation of uncertainties}
\begin{table}
\centering
\begin{minipage}{80mm}
\caption{Effects on derived abundances resulting from model changes for the star HD~224930.}
\label{table:2}
\[
\begin{tabular}{lrrc}
\hline
\hline
\noalign{\smallskip}
Ion & ${ \Delta T_{\rm eff} }\atop{ -100 {\rm~K} }$ &
${ \Delta \log g }\atop{ -0.3 }$ &
${ \Delta v_{\rm t} }\atop{ -0.3~{\rm km~s}^{-1}}$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
[O\,{\sc i}] & 0.00 & --0.10& --0.01\\
Na\,{\sc i} & --0.06 & 0.01 & 0.00 \\
Mg\,{\sc i} & --0.04 & 0.01 & 0.01 \\
Al\,{\sc i} & --0.05 & 0.00 & 0.00 \\
Si\,{\sc i} & --0.01 & --0.03& 0.01 \\
Ca\,{\sc i} & --0.07 & 0.02 & 0.04 \\
Sc\,{\sc ii} & --0.01 & --0.13& 0.02 \\
Ti\,{\sc i} & --0.10 & 0.01 & 0.03 \\
Ti\,{\sc ii} & --0.01 & --0.13& 0.03 \\
V\,{\sc i} & --0.12 & 0.00 & 0.00 \\
Cr\,{\sc i} & --0.09 & 0.01 & 0.05 \\
Fe\,{\sc i} & --0.08 & 0.00 & 0.05 \\
Fe\,{\sc ii} & 0.04 & --0.13& 0.04 \\
Co\,{\sc i} & --0.07 & --0.02& 0.01 \\
Ni\,{\sc i} & --0.05 & --0.01& 0.04 \\
\hline
\end{tabular}
\]
\end{minipage}
\end{table}
The uncertainties in the abundances are due to several sources: uncertainties in the analysis of individual
lines, including random errors in the atomic data and in the continuum placement, and uncertainties in the
stellar parameters.
The sensitivity of the abundance estimates to changes in the atmospheric parameters by the assumed
errors, $\Delta$[El/H], is illustrated for the star HD~224930 (Table~2). Clearly, possible
parameter errors do not affect the abundances seriously; the element-to-iron
ratios, which we use in our discussion, are even less sensitive.
The scatter of the abundances deduced from different spectral lines, $\sigma$,
gives an estimate of the uncertainty due to random errors. The mean value
of $\sigma$ is 0.05~dex; thus the uncertainties in the derived abundances resulting
from random errors amount to approximately this value.
\input epsf
\begin{figure}
\epsfxsize=\hsize
\epsfbox[15 10 300 240]{Ston-Fig5.eps}
\caption{Diagram of orbital eccentricity \textit{e} vs. [Fe/H] for all stars of Group 3 (circles) and
those investigated in this work (filled circles).}
\label{Fig.5}
\end{figure}
\input epsf
\begin{figure}
\epsfxsize=\hsize
\epsfbox[15 10 390 335]{Ston-Fig6.eps}
\caption{Toomre diagram of all stars of Group 3 (circles) and
those investigated in this work (filled circles). Dotted lines
indicate constant values of total space velocity in steps of 50 km s$^{-1}$.}
\label{Fig.6}
\end{figure}
\begin{table*}
\centering
\begin{minipage}{190mm}
\caption{Main atmospheric parameters and elemental abundances of the programme and comparison stars.}
\label{table:3}
\begin{tabular}{lccccccccccccccc}
\hline\hline
Star & $T_{\rm eff}$ & log~$g$ & $v_{t}$ & [Fe/H] & $\sigma_{\rm Fe I}$ & ${\rm n}_{\rm Fe I}$ & $\sigma_{\rm Fe II}$ & ${\rm n}_{\rm Fe II}$& [O/Fe]& [Na/Fe]& $\sigma$& n& [Mg/Fe]& $\sigma$& n\\
& K & & km s$^{-1}$ & & & & & \\
\hline
\noalign{\smallskip}
HD 967 & 5570 & 4.3 & 0.9 & --0.62 & 0.05 & 38 & 0.04 & 7 & ... & 0.04 & 0.03 & 3 & 0.33 & 0.04 & 4\\
HD 17820 & 5900 & 4.2 & 1.0 & --0.57 & 0.05 & 29 & 0.01 & 6 & ... & 0.06 & 0.04 & 3 & 0.25 & 0.06 & 3\\
HD 107582 & 5600 & 4.2 & 1.0 & --0.62 & 0.05 & 32 & 0.07 & 5 & 0.39 & 0.06 & 0.04 & 4 & 0.30 & 0.04 & 3\\
BD +73 566 & 5580 & 3.9 & 0.9 & --0.91 & 0.05 & 31 & 0.02 & 6 & ... & 0.14 & 0.03 & 2 & 0.43 & 0.05 & 3\\
BD +19 2646 & 5510 & 4.1 & 0.9 & --0.68 & 0.04 & 31 & 0.04 & 5 & 0.55 & 0.10 & 0.08 & 3 & 0.38 & 0.06 & 4\\
HD 114762 & 5870 & 3.8 & 1.0 & --0.67 & 0.05 & 32 & 0.03 & 7 & ... & 0.09 & 0.03 & 3 & 0.33 & 0.05 & 4\\
HD 117858 & 5740 & 3.8 & 1.2 & --0.55 & 0.04 & 34 & 0.03 & 6 & 0.32 & 0.08 & 0.02 & 3 & 0.29 & 0.04 & 3\\
BD +13 2698 & 5700 & 4.0 & 1.0 & --0.74 & 0.06 & 28 & 0.05 & 5 & ... & 0.02 & 0.02 & 2 & 0.34 & 0.05 & 4\\
BD +77 0521 & 5500 & 4.0 & 1.1 & --0.50 & 0.07 & 24 & 0.05 & 5 & ... & --0.02 & ... & 1 & 0.25 & 0.04 & 4\\
HD 126512 & 5780 & 3.9 & 1.1 & --0.55 & 0.05 & 27 & 0.03 & 6 & 0.41 & 0.10 & 0.02 & 3 & 0.30 & 0.07 & 3\\
HD 131597 & 5180 & 3.5 & 1.1 & --0.64 & 0.04 & 32 & 0.03 & 6 & ... & 0.12 & 0.01 & 4 & 0.37 & 0.05 & 4\\
BD +67 925 & 5720 & 3.5 & 1.2 & --0.55 & 0.05 & 24 & 0.03 & 6 & 0.37 & 0.04 & 0.02 & 2 & 0.35 & 0.06 & 3\\
HD 159482 & 5730 & 4.1 & 1.0 & --0.71 & 0.05 & 26 & 0.01 & 5 & 0.42 & 0.13 & 0.05 & 4 & 0.31 & 0.03 & 4\\
HD 170737 & 5100 & 3.3 & 1.0 & --0.68 & 0.04 & 29 & 0.05 & 6 & ... & 0.11 & 0.02 & 4 & 0.30 & 0.07 & 3\\
BD +35 3659\tablefootmark{a} & 5850 & 3.9 & 0.9 & --1.45 & 0.04 & 25 & 0.04 & 4 & ... & 0.04 & 0.05 & 3 & 0.30 & 0.06 & 3\\
HD 201889 & 5700 & 3.8 & 0.9 & --0.73 & 0.05 & 30 & 0.03 & 4 & 0.58 & 0.08 & 0.04 & 3 & 0.32 & 0.03 & 4\\
HD 204521 & 5680 & 4.3 & 1.0 & --0.72 & 0.05 & 30 & 0.05 & 5 & ... & 0.06 & 0.02 & 3 & 0.29 & 0.04 & 3\\
HD 204848 & 4900 & 2.3 & 1.2 & --1.03 & 0.04 & 31 & 0.05 & 7 & 0.52 & 0.01 & 0.04 & 3 & 0.43 & 0.03 & 4\\
HD 212029 & 5830 & 4.2 & 0.9 & --0.98 & 0.02 & 20 & 0.01 & 2 & ... & 0.10 & 0.04 & 3 & 0.37 & 0.07 & 4\\
HD 222794 & 5560 & 3.7 & 1.1 & --0.61 & 0.04 & 30 & 0.05 & 6 & ... & 0.09 & 0.05 & 4 & 0.37 & 0.07 & 4\\
HD 224930 & 5470 & 4.2 & 0.9 & --0.71 & 0.05 & 35 & 0.05 & 6 & 0.45 & 0.08 & 0.04 & 3 & 0.42 & 0.04 & 4\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HD 17548 & 6030 & 4.1 & 1.0 & --0.49 & 0.05 & 32 & 0.03 & 7 & 0.16 & --0.02 & 0.04 & 3 & 0.07 & 0.06 & 4\\
HD 150177 & 6300 & 4.0 & 1.5 & --0.50 & 0.04 & 23 & 0.05 & 4 & ... & 0.07 & 0.02 & 3 & 0.18 & 0.04 & 3\\
HD 159307 & 6400 & 4.0 & 1.6 & --0.60 & 0.04 & 17 & 0.04 & 4 & ... & 0.12 & 0.07 & 3 & 0.28 & 0.03 & 4\\
HD 165908 & 6050 & 3.9 & 1.1 & --0.52 & 0.04 & 24 & 0.03 & 7 & ... & 0.02 & 0.02 & 3 & 0.20 & 0.07 & 4\\
HD 174912 & 5860 & 4.1 & 0.8 & --0.42 & 0.04 & 33 & 0.04 & 6 & 0.10 & 0.02 & 0.01 & 3 & 0.08 & 0.05 & 4\\
HD 207978 & 6450 & 3.9 & 1.6 & --0.50 & 0.04 & 22 & 0.04 & 7 & ... & 0.09 & 0.05 & 3 & 0.28 & 0.06 & 4\\
\hline
\end{tabular}
\end{minipage}
\tablefoot{\tablefoottext{a}{Probably not a member of Group~3. }}
\end{table*}
\section{Results and discussion}
The atmospheric parameters $T_{\rm eff}$, log\,$g$, $v_{t}$, [Fe/H] and abundances of 12 chemical elements relative
to iron [El/Fe] of the programme and comparison stars are presented in Table~3. The number of lines and
the line-to-line scatter ($\sigma$) are presented as well.
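For concreteness, the line-to-line scatter quoted in Table~3 is simply the dispersion of the abundances derived from the $n$ individual lines of an element; a minimal sketch (with made-up line abundances, and assuming $\sigma$ denotes the sample standard deviation) is:

```python
import statistics

# Hypothetical [El/Fe] values from n = 4 individual lines of one element
# (illustrative numbers, not taken from Table 3)
line_abundances = [0.28, 0.33, 0.30, 0.25]

el_fe = statistics.mean(line_abundances)   # quoted abundance ratio [El/Fe]
sigma = statistics.stdev(line_abundances)  # line-to-line scatter (sigma)
n = len(line_abundances)                   # number of lines (n)
```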
\subsection{Comparison with previous studies}
\begin{table}
\begin{minipage}{80mm}
\caption{Group~3 comparison with previous studies.}
\label{table:4}
\begin{tabular}{lrrrrrr}
\hline\hline
&\multicolumn{2}{c}{Ours--Nissen} & \multicolumn{2}{c}{Ours--Reddy} & \multicolumn{2}{c}{Ours--Ram\'{i}rez} \\
Quantity & Diff. & $\sigma$ & Diff. & $\sigma$ & Diff. & $\sigma$\\
\hline
T$_{\rm eff} $ & 34 & 54 & 86 & 33 & 47 & 45 \\
log $g$ & --0.26 & 0.16 & --0.28 & 0.15 & --0.27 & 0.14\\
${\rm [Fe/H]}$ & 0.03 & 0.04 & 0.06 & 0.07 & 0.10 & 0.04 \\
${\rm[Na/Fe]}$ & 0.02 & 0.11 & 0.00 & 0.04 & ... & ...\\
${\rm[Mg/Fe]}$ & --0.02 & 0.05 & 0.02 & 0.01 & ... & ...\\
${\rm[Al/Fe]}$ & ... & ... &--0.01 & 0.07 & ... & ...\\
${\rm[Si/Fe]}$ & --0.05 & 0.01 & 0.03 & 0.05 & ... & ...\\
${\rm[Ca/Fe]}$ & 0.00 & 0.01 & 0.08 & 0.05 & ... & ...\\
${\rm[Sc/Fe]}$ & ... & ... & --0.01 & 0.05 & ... & ...\\
${\rm[Ti/Fe]}$ & 0.06 & 0.10 & 0.07 & 0.06 & ... & ...\\
${\rm[V/Fe]}$ & ... & ... & --0.01 & 0.03 & ... & ...\\
${\rm[Cr/Fe]}$ & 0.02 & 0.06 & 0.08 & 0.04 & ... & ...\\
${\rm[Co/Fe]}$ & ... & ... & --0.04 & 0.03 & ... & ...\\
${\rm[Ni/Fe]}$ & 0.01 & 0.05 & --0.01 & 0.04 & ... & ...\\
\hline
\end{tabular}
\end{minipage}
\tablefoot{Mean differences and standard deviations of the main parameters and abundance ratios [El/Fe] for
4 stars of Group~3 that are in common with \citet{nissen10}, 7 stars in common with \citet{reddy06}, and 10
stars in common with \citet{ramirez07}.}
\end{table}
\begin{table}
\begin{minipage}{80mm}
\caption{Thin-disc stars comparison with previous studies.}
\label{table:5}
\begin{tabular}{lrrrr}
\hline\hline
&\multicolumn{2}{c}{Ours--Edvardsson} & \multicolumn{2}{c}{Ours--Th\'{e}venin} \\
Quantity & Diff. & $\sigma$ & Diff. & $\sigma$\\
\hline
T$_{\rm eff} $ & 86 & 66 & 87 & 68 \\
log $g$ & --0.18 & 0.21 & --0.06 & 0.14 \\
${\rm [Fe/H]}$ & 0.10 & 0.04 & 0.03 & 0.03 \\
${\rm[Na/Fe]}$ & --0.08 & 0.09 & ... & ... \\
${\rm[Mg/Fe]}$ & --0.02 & 0.07 & ... & ... \\
${\rm[Al/Fe]}$ & --0.10 & 0.07 & ... & ... \\
${\rm[Si/Fe]}$ & --0.02 & 0.02 & ... & ... \\
${\rm[Ca/Fe]}$ & 0.05 & 0.03 & ... & ... \\
${\rm[Ti/Fe]}$ & --0.02 & 0.08 & ... & ... \\
${\rm[Ni/Fe]}$ & --0.06 & 0.06 & ... & ... \\
\hline
\end{tabular}
\end{minipage}
\tablefoot{Mean differences and standard deviations of the main parameters and abundance ratios
[El/Fe] for 6 thin-disc stars that are in common with \citet{edvardsson93} and 5 stars with \citet{thevenin99}.}
\end{table}
Some stars from our sample have been previously investigated by other authors. In Table~4 we present a comparison with the results by
\citet{nissen10}, \citet{reddy06}, and \citet{ramirez07}. \citeauthor{ramirez07} determined only the main atmospheric parameters.
The thin-disc stars that we investigated in our work for comparison have been analysed previously by
\citet{edvardsson93} and by \citet{thevenin99}.
In Table~5 we present a comparison with the results obtained by these authors. Our [El/Fe] for the stars in common agree very well with
other studies. Slight differences in the log\,$g$ values lie within the uncertainties and are caused mainly by
differences in the determination methods applied. In our work we see that titanium abundances determined using Ti{\sc i} and Ti{\sc ii}
lines agree well and confirm the log\,$g$ values determined using iron lines.
Effective temperatures for all stars investigated here are also available in
\citet{holmberg09} and \citet{casagrande11}.
\citeauthor{casagrande11} provide astrophysical parameters for the Geneva-Copenhagen survey by applying the infrared flux method for the effective temperature determination.
In comparison to \citeauthor{holmberg09}, stars in the \citeauthor{casagrande11} catalogue are
on average 100~K hotter. For the stars investigated here, our spectroscopic temperatures are on average $40\pm 70$~K
hotter than in \citeauthor{holmberg09} and $60\pm 80$~K cooler than in \citeauthor{casagrande11} (BD +35 3659, which has a difference of
340~K, was excluded from the average).
[Fe/H] values for all investigated stars are available in \citet{holmberg09} as well as in \citet{casagrande11}.
A comparison between \citeauthor{holmberg09} and \citeauthor{casagrande11} shows that the latter gives [Fe/H] values that are on average 0.1~dex more metal-rich.
For our programme stars we obtain a difference of $0.1\pm 0.1$~dex in comparison with \citeauthor{holmberg09}, and no systematic difference but a scatter of 0.1~dex in comparison with the \citeauthor{casagrande11} results.
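The mean offsets and scatters quoted in this comparison amount to simple statistics of the star-by-star differences; the sketch below (with hypothetical $T_{\rm eff}$ pairs, not the actual data of this work) illustrates how an outlier such as BD +35 3659 would be excluded from the average:

```python
import statistics

def mean_diff(ours, theirs, exclude_above=None):
    """Mean and scatter of the differences 'ours - theirs'; optionally
    drop outliers whose absolute difference exceeds a threshold."""
    diffs = [a - b for a, b in zip(ours, theirs)]
    if exclude_above is not None:
        diffs = [d for d in diffs if abs(d) <= exclude_above]
    return statistics.mean(diffs), statistics.stdev(diffs)

# Hypothetical temperature pairs (K); the last pair mimics a 340 K outlier
ours   = [5900, 5600, 5580, 5850]
theirs = [5860, 5530, 5560, 5510]
m, s = mean_diff(ours, theirs, exclude_above=300)  # the 340 K case is dropped
```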
\subsection{Comparison with the thin- and thick-disc dwarfs}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{Ston-Fig7.eps}
\caption{Comparison of elemental abundance ratios of stars in the investigated stellar group (black points) and data for Milky
Way thin-disc dwarfs from \citeauthor{edvardsson93} (1993, plus signs), \citeauthor{bensby05} (2005, stars), \citeauthor{reddy06} (2006, squares),
\citeauthor{zhang06} (2006, triangles), and Galactic thin disc chemical evolution models by \citeauthor{pagel95} (1995, solid lines).
Results obtained for thin-disc dwarfs analysed in our work are shown by open circles. Average uncertainties are shown in the box for Na.}
\label{Fig.7}
\end{figure*}
\addtocounter{table}{-3}
\begin{table*}
\centering
\begin{minipage}{200mm}
\caption{Continued}
\label{table:3cont}
\begin{tabular}{lccccccccccccccc}
\hline\hline
Star & [Al/Fe] & $\sigma$ & n & [Si/Fe] &$\sigma$ & n& [Ca/Fe]&$\sigma$ & n& [Sc/Fe]& $\sigma$& n& [Ti{\sc i}/Fe]& $\sigma$& n\\
\hline
\noalign{\smallskip}
HD 967 & 0.37 & 0.00 & 3 & 0.26 & 0.05 & 17 & 0.28 & 0.04 & 8 & 0.12 & 0.04 & 9 & 0.33 & 0.06 & 14\\
HD 17820 & 0.11 & 0.05 & 3 & 0.24 & 0.04 & 16 & 0.26 & 0.06 & 9 & 0.17 & 0.04 & 9 & 0.33 & 0.05 & 9\\
HD 107582 & 0.22 & 0.05 & 3 & 0.21 & 0.05 & 16 & 0.27 & 0.06 & 5 & 0.05 & 0.05 & 8 & 0.30 & 0.05 & 7\\
BD +73 566 & 0.27 & 0.04 & 2 & 0.33 & 0.06 & 18 & 0.39 & 0.06 & 6 & 0.05 & 0.05 & 7 & 0.33 & 0.05 & 9\\
BD +19 2646 & 0.28 & 0.08 & 2 & 0.22 & 0.06 & 16 & 0.32 & 0.06 & 8 & 0.09 & 0.04 & 8 & 0.28 & 0.04 & 9\\
HD 114762 & 0.15 & 0.02 & 2 & 0.20 & 0.05 & 17 & 0.21 & 0.05 & 7 & 0.05 & 0.05 & 10& 0.21 & 0.03 & 8\\
HD 117858 & 0.31 & 0.06 & 3 & 0.24 & 0.05 & 17 & 0.23 & 0.04 & 8 & 0.12 & 0.02 & 9 & 0.24 & 0.03 & 9\\
BD +13 2698 & 0.12 & 0.05 & 2 & 0.33 & 0.06 & 14 & 0.31 & 0.05 & 8 & 0.10 & 0.04 & 7 & 0.36 & 0.05 & 8\\
BD +77 0521 & 0.23 & 0.01 & 2 & 0.18 & 0.07 & 10 & 0.20 & 0.06 & 6 & 0.04 & 0.03 & 4 & 0.22 & 0.05 & 6\\
HD 126512 & 0.17 & 0.06 & 2 & 0.25 & 0.05 & 16 & 0.23 & 0.04 & 6 & 0.11 & 0.02 & 8 & 0.21 & 0.04 & 7\\
HD 131597 & 0.36 & 0.04 & 2 & 0.29 & 0.05 & 16 & 0.25 & 0.04 & 8 & 0.18 & 0.02 & 10& 0.25 & 0.06 & 16\\
BD +67 925 & 0.31 & 0.00 & 2 & 0.22 & 0.08 & 17 & 0.30 & 0.06 & 8 & --0.05 & 0.05 & 4 & 0.39 & 0.03 & 3\\
HD 159482 & 0.29 & 0.04 & 2 & 0.27 & 0.06 & 15 & 0.31 & 0.06 & 7 & 0.16 & 0.03 & 8 & 0.27 & 0.02 & 4\\
HD 170737 & 0.39 & ... & 1 & 0.24 & 0.04 & 15 & 0.27 & 0.06 & 7 & 0.16 & 0.02 & 8 & 0.30 & 0.06 & 12\\
BD +35 3659 & 0.40 & 0.01 & 2 & 0.25 & 0.03 & 8 & 0.31 & 0.08 & 4 & 0.03 & 0.11 & 7 & 0.41 & 0.02 & 4\\
HD 201889 & 0.27 & 0.01 & 3 & 0.31 & 0.05 & 15 & 0.33 & 0.08 & 6 & 0.09 & 0.03 & 9 & 0.33 & 0.05 & 9\\
HD 204521 & 0.26 & 0.02 & 2 & 0.22 & 0.05 & 17 & 0.26 & 0.06 & 8 & 0.12 & 0.03 & 7 & 0.32 & 0.05 & 9\\
HD 204848 & 0.45 & 0.05 & 3 & 0.44 & 0.05 & 16 & 0.41 & 0.05 & 8 & 0.07 & 0.03 & 11& 0.31 & 0.07 & 18\\
HD 212029 & 0.19 & 0.05 & 2 & 0.34 & 0.05 & 11 & 0.25 & 0.03 & 6 & 0.14 & 0.02 & 5 & 0.28 & 0.06 & 5\\
HD 222794 & 0.39 & 0.04 & 3 & 0.23 & 0.05 & 16 & 0.25 & 0.04 & 7 & 0.08 & 0.04 & 8 & 0.29 & 0.04 & 11\\
HD 224930 & 0.43 & 0.08 & 3 & 0.25 & 0.05 & 16 & 0.30 & 0.04 & 5 & 0.10 & 0.03 & 11& 0.27 & 0.04 & 9\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HD 17548 & --0.02 & 0.01 & 2 & 0.08 & 0.05 & 17 & 0.10 & 0.04 & 7 & 0.05 & 0.04 & 10& 0.11 & 0.06 & 7\\
HD 150177 & 0.06 & 0.04 & 2 & 0.05 & 0.06 & 12 & 0.10 & 0.03 & 5 & 0.08 & 0.03 & 7 & 0.15 & 0.05 & 3 \\
HD 159307 & ... & ... & ...& 0.16 & 0.02 & 9 & 0.17 & 0.04 & 5 & 0.12 & 0.06 & 7 & 0.17 & 0.07 & 3\\
HD 165908 & 0.02 & 0.02 & 2 & 0.11 & 0.05 & 15 & 0.12 & 0.05 & 6 & 0.02 & 0.04 & 7 & 0.12 & 0.04 & 6\\
HD 174912 & --0.03 & 0.04 & 2 & 0.04 & 0.04 & 17 & 0.09 & 0.05 & 6 & 0.00 & 0.04 & 12& 0.02 & 0.05 & 9\\
HD 207978 & --0.01 & 0.02 & 2 & 0.12 & 0.04 & 15 & 0.15 & 0.03 & 7 & 0.06 & 0.02 & 6 & 0.16 & 0.06 & 3\\
\hline\hline
Star & [Ti{\sc ii}/Fe] & $\sigma$ & n & [V/Fe] & $\sigma$ & n & [Cr/Fe] &$\sigma$ & n& [Co/Fe]& $\sigma$& n& [Ni/Fe]& $\sigma$& n\\
\hline
\noalign{\smallskip}
HD 967 & 0.30 & 0.09 & 2 & 0.09 & 0.05 & 11 & 0.05 & 0.08 & 17 & 0.08 & 0.04 & 5 & 0.02 & 0.07 & 26\\
HD 17820 & 0.30 & 0.01 & 3 & 0.07 & 0.04 & 5 & --0.01 & 0.08 & 14 & 0.07 & 0.01 & 3 & 0.02 & 0.04 & 22\\
HD 107582 & 0.27 & 0.06 & 2 & 0.05 & 0.03 & 10 & 0.05 & 0.06 & 14 & 0.04 & 0.05 & 9 & 0.02 & 0.06 & 16\\
BD +73 566 & 0.35 & 0.06 & 3 &--0.03 & 0.04 & 6 & 0.05 & 0.05 & 11 & 0.06 & 0.04 & 3 & 0.00 & 0.05 & 17\\
BD +19 2646 & 0.21 & 0.04 & 3 & 0.10 & 0.04 & 8 & 0.06 & 0.10 & 15 & 0.05 & 0.06 & 7 & 0.00 & 0.05 & 18\\
HD 114762 & 0.21 & 0.05 & 3 & 0.05 & 0.05 & 4 & 0.00 & 0.08 & 11 & 0.04 & 0.05 & 5 & --0.04 & 0.04 & 16\\
HD 117858 & 0.20 & 0.03 & 3 & 0.11 & 0.03 & 8 & 0.01 & 0.05 & 16 & 0.09 & 0.04 & 6 & 0.02 & 0.05 & 24\\
BD +13 2698 & 0.33 & 0.05 & 3 & 0.08 & 0.03 & 7 & 0.04 & 0.06 & 14 & 0.09 & 0.03 & 4 & 0.03 & 0.06 & 19\\
BD +77 0521 & 0.17 & 0.03 & 2 & 0.06 & 0.02 & 3 & 0.04 & 0.10 & 11 & 0.04 & 0.02 & 2 & 0.06 & 0.06 & 13\\
HD 126512 & 0.29 & 0.02 & 2 & 0.08 & 0.03 & 7 & 0.02 & 0.06 & 14 & 0.09 & 0.04 & 5 & 0.01 & 0.05 & 17\\
HD 131597 & 0.30 & 0.03 & 3 & 0.06 & 0.06 & 11 & 0.04 & 0.07 & 16 & 0.05 & 0.06 & 10& 0.01 & 0.05 & 25\\
BD +67 925 & 0.43 & ... & 1 &--0.03 & 0.08 & 4 & --0.02 & 0.12 & 13 & --0.02& 0.01 & 2 & 0.04 & 0.06 & 14\\
HD 159482 & 0.23 & 0.01 & 3 & 0.09 & 0.03 & 7 & 0.05 & 0.06 & 13 & 0.10 & 0.04 & 4 & 0.01 & 0.05 & 15\\
HD 170737 & 0.28 & 0.06 & 4 & 0.08 & 0.05 & 8 & 0.03 & 0.09 & 16 & 0.05 & 0.04 & 6 & 0.02 & 0.07 & 23\\
BD +35 3659 & 0.24 & ... & 1 &... & ... & ...& 0.03 & 0.11 & 12 & ... & ... &...& 0.01 & 0.08 & 7\\
HD 201889 & 0.33 & 0.05 & 3 & 0.02 & 0.08 & 4 & 0.05 & 0.06 & 12 & 0.07 & 0.03 & 6 & 0.00 & 0.05 & 17\\
HD 204521 & 0.29 & 0.04 & 3 & 0.06 & 0.05 & 7 & 0.02 & 0.06 & 13 & 0.07 & 0.05 & 6 & 0.00 & 0.05 & 18\\
HD 204848 & 0.26 & 0.07 & 3 & 0.07 & 0.03 & 13 & 0.06 & 0.09 & 17 & 0.05 & 0.03 & 10& 0.05 & 0.06 & 25\\
HD 212029 & 0.36 & 0.03 & 2 & 0.09 & 0.05 & 4 & --0.05 & 0.06 & 12 & ... & ... & ...& 0.01 & 0.06 & 11\\
HD 222794 & 0.27 & 0.04 & 3 & 0.07 & 0.05 & 10 & 0.03 & 0.07 & 16 & 0.06 & 0.03 & 8 & 0.02 & 0.06 & 18\\
HD 224930 & 0.22 & 0.03 & 2 & 0.12 & 0.03 & 6 & 0.05 & 0.08 & 14 & 0.10 & 0.04 & 7 & --0.01 & 0.06 & 21\\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
HD 17548 & 0.08 & 0.03 & 3 & 0.09 & 0.04 & 4 & --0.01 & 0.05 & 13 & 0.05 & 0.03 & 3 & --0.04 & 0.05 & 20\\
HD 150177 & 0.23 & 0.05 & 2 & 0.03 & ... & 1 & --0.03 & 0.07 & 14 & 0.06 & 0.00 & 2 & --0.02 & 0.06 & 12\\
HD 159307 & 0.18 & 0.05 & 2 & ... & ... & ...& 0.04 & 0.05 & 9 & 0.06 & ... & 1 & 0.04 & 0.02 & 10\\
HD 165908 & 0.07 & 0.03 & 3 & 0.06 & 0.03 & 4 & 0.00 & 0.04 & 10 & 0.00 & 0.08 & 5 & --0.03 & 0.06 & 15\\
HD 174912 & 0.00 & 0.03 & 3 & --0.01 & 0.05 & 5 & 0.00 & 0.08 & 14 & 0.04 & 0.00 & 4 & --0.05 & 0.05 & 21\\
HD 207978 & 0.07 & 0.06 & 3 & ... & ... & ...& 0.01 & 0.07 & 11 & 0.05 & 0.06 & 5 & --0.01 & 0.05 & 15\\
\hline
\end{tabular}
\end{minipage}
\end{table*}
The metallicities and ages of all programme stars except one (BD +35\,3659) are quite homogeneous:
${\rm [Fe/H]}= -0.69\pm 0.05$~dex and the average age is about $12\pm 2$~Gyr. However, the ages, which we took from
\citet{holmberg09}, were not determined for every star.
BD +35 3659 is much younger (0.9~Gyr) and has ${\rm [Fe/H]}= -1.45$; its eccentricity, velocities, distance, and other parameters
differ as well (see Figs.~5 and 6). We therefore doubt its membership in Group~3.
The next step was to compare the determined abundances with those in the thin-disc dwarfs.
In Fig.~7 we present these comparisons with data taken from \citet{edvardsson93}, \citet{bensby05}, \citet{reddy06}, \citet{zhang06},
and with the chemical evolution model by \citet{pagel95}.
The thin-disc stars from \citeauthor{edvardsson93} and \citeauthor{zhang06} were selected by using the membership probability evaluation method described by \citet{trevisan11},
since their lists contained stars of other Galactic components as well. The same kinematical approach in assigning thin-disc membership was used in
\citet{bensby05} and \citet{reddy06}, so the thin-disc stars used for the comparison are uniform in that respect.
In Fig.~7 we see that the abundances of $\alpha$-elements in the investigated stars are overabundant compared
with the Galactic thin-disc dwarfs. A similar overabundance of $\alpha$-elements is exhibited by the thick-disc stars
(\citealt{fuhrmann98, prochaska00, tautvaisiene01, bensby05, reddy08}; and references therein). \citet{helmi06}, based on isochrone fitting, suggested that stars in the
identified kinematic groups might be $\alpha$-rich. Our spectroscopic results qualitatively agree
with this. However, based on metallicities and vertical velocities, Group~3 cannot be uniquely
associated with a single traditional Galactic component \citep{helmi06}.
What does the similarity of $\alpha$-element abundances in the thick-disc and the investigated kinematic group mean?
It would be easier to answer this question if the origin of the
Galactic thick disc were known (see \citealt{vanderkruit11} for a review).
There are several competing models that aim to explain the nature of the thick disc. Stars may have ended up in the thick disc
through
(1) orbital migration due to heating of a pre-existing thin disc by a varying gravitational potential in the thin disc
(e.g. \citealt{roskar08, schonrich09});
(2) heating of a pre-existing thin disc by minor mergers (e.g. \citealt{kazantzidis08, villalobos08}, 2009);
(3) accretion of disrupted satellites (e.g. \citealt{abadi03}), or
(4) gas-rich satellite mergers when thick-disc stars form before the gas completely settles into a thin disc
(see \citealt{brook04}, 2005).
\citet{dierickx10} analysed the eccentricity distribution of thick-disc stars that has recently been proposed as a diagnostic to
differentiate between these mechanisms \citep{sales09}. Using SDSS Data Release 7, they assembled a sample
of 31\,535 G dwarfs with six-dimensional phase-space information and metallicities and derived their orbital eccentricities.
They found that the observed eccentricity distribution is inconsistent with that predicted by orbital migration only.
Also, the thick disc cannot be produced predominantly through heating of a pre-existing thin disc, since this model predicts more high-eccentricity stars than observed.
According to \citeauthor{dierickx10}, the observed eccentricity distribution fits well with a gas-rich merger scenario, where most thick-disc stars were born in situ.
In the gas-rich satellite merger scenario, the distribution of stellar eccentricities peaks around $e=0.25$, with a tail towards higher
values belonging mostly to stars originally formed in satellite galaxies. The group of stars investigated in our work, with
a mean eccentricity of 0.4, fits this model. This scenario is also supported by the RAVE survey data analysis by
\citet{wilson11} and the numerical simulations by \citet{dimatteo11}. In this scenario, Group~3 can be explained as
a remnant from stars originally formed in a merging satellite.
\section{Conclusions}
We measured abundances of iron group and $\alpha$-elements from high-resolution spectra
in 21 stars belonging to Group~3 of the Geneva-Copenhagen survey. This kinematically identified group of stars
was suspected to be a remnant of a disrupted satellite galaxy.
Our main goal was to investigate the chemical composition of the stars within the group and to compare them with Galactic disc stars.
Our study shows that
\begin{enumerate}
\item All stars in Group~3 except one have a similar metallicity. The average
[Fe/H] value of the 20 stars is $-0.69\pm 0.05$~dex.
\item All programme stars are overabundant in oxygen and $\alpha$-elements compared with Galactic
thin-disc dwarfs and the Galactic evolution model used. This abundance pattern shares the characteristics of the
Galactic thick disc.
\end{enumerate}
The homogeneous chemical composition together with the kinematic properties and ages of stars in the investigated Group~3 of
the Geneva-Copenhagen
survey support the scenario of an ancient merging event. The similar chemical composition of stars in Group~3
and the thick-disc stars might suggest that their formation histories are linked.
The kinematic properties of our stellar group fit well with a gas-rich satellite merger scenario (\citealt{brook04}, 2005; \citealt{dierickx10,
wilson11, dimatteo11}; and references therein).
We plan to increase the number of stars and chemical elements investigated in this group, and also to study the chemical composition of stars in other kinematic groups of the
Geneva-Copenhagen survey. The identification of such kinematic groups and the exploration of their
chemical composition will be a key in understanding the formation and evolution of the Galaxy.
\begin{acknowledgements}
The data are based on observations made with the Nordic Optical Telescope, operated on the island of La Palma jointly by Denmark, Finland, Iceland,
Norway, and Sweden, in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias.
The research leading to these results has received funding from the European Community's Seventh Framework Programme (FP7/2007-2013) under
grant agreement number RG226604 (OPTICON). BN acknowledges support from the Danish Research council.
This research has made use of Simbad, VALD and NASA ADS databases. We thank the anonymous referee for
insightful questions and comments.
\end{acknowledgements}
\section{\label{sec:level1}Introduction}
General Relativity (GR) is, first of all, a framework for defining physical theories in which one can obtain an absolute (i.e.~independent of the observer) description of Nature; {see \cite{Iorio, Smoot, Vishwakarma}}.
As such it is assumed as the most fundamental framework for describing the classical regime and any description of physical events in that regime should comply {with} it.
Surprisingly, there are a lot of things that can be described in a covariant way (from dynamics to conserved quantities, from some frameworks of Hamiltonian formalism to stability).
Although this program is carried out in many instances, in many other cases this is not what actually happens.
For example, astronomers are routinely measuring the positions of stars in the sky, their mutual distances, the age of the universe, or
the deflection angles near massive objects (which are usually thought of as ``the angle'' between two spacelike vectors applied at different points in space).
The NAVSTAR--GPS is paradigmatic of this attitude: it measures users' positions {\it in space}.
Moreover, it relies on keeping clocks synchronised despite their motion and the different gravitational potential they experience.
None of these quantities are covariant; see {\cite{Brumberg91, Brumberg10, Soffel}}.
In view of this lack in covariance, one can either accept that these quantities depend on conventions (e.g.~protocols to define synchronisation at a distance) and describe in detail
the conventions used, or one can reduce them to measuring coincidences, which are the only absolute quantities one can resort to.
While the first strategy is often tacitly assumed, the second one, which is relativistically more appealing, is hardly ever tried out in practice (with some exceptions; see
\cite{Rovelli}).
Most of the time, we keep assuming that we live in a {Newtonian spacetime}, though with some corrections due to what we learned in the last century.
That is particularly evident with NAVSTAR--GPS, which was originally designed to work in a Newtonian space with corrections due to GR (including, in particular, special relativity (SR) corrections).
Of course, the approximations are very reasonable since the gravitational field of the Earth happens to be weak enough to justify them and this is why NAVSTAR--GPS works well despite its poor theoretical design; {see \cite{Ash1, PS}}.
The design of Galileo Global Navigation Satellite System (Galileo--GNSS) better integrates GR, though still on a post-Newtonian regime,
thus not providing a qualitatively different approach under this viewpoint, e.g.~it still relies on the weak field approximation; {see \cite{Galileo, Galileo2, Galileo3}}.
{Here our approach will be different. We first establish an exact model in a simplified physical situation, with an isotropic gravitational field. This allows an exact treatment with no approximations. Naturally, in refining the model we shall be forced to add perturbations to describe finer physical effects (non-isotropy, time-drift functions of the clocks, dragging forces). However, in this paper we do not approximate or use series expansions or PN approximations. We just solve the exact equations, since we still do not know whether these effects will be relevant for the positioning part alone or for both the positioning and gravitometric parts. Such an evaluation is left for later work.}
Recently, it has been argued {by B.~Coll (see \cite{Co1})} that a completely new, qualitatively different, relativistic design for positioning systems (rPS, {also called {\it emission coordinates}, {\it null coordinates}, {\it light coordinates}, {\it null frames}, or {\it ABC coordinates}}) is needed; see {also \cite{RovelliGPS, Bla, Coll1, Coll2, Coll3, Coll4, Tarantola}}.
These should be based on a cluster of {\it transmitters} (or {\it clocks}) broadcasting information with which a {\it user} (or {\it {client}}) can determine its position in spacetime.
{Unlike in radar coordinates in which a signal goes back and forth between the radar and the object to be located, in positioning systems the object to be located only receives signals from transmitters. Later on we will allow signals to travel among transmitters, though no signals go from the object to the transmitters.}
The main characteristics of a rPS should be:
{\sl
\begin{itemize}
\item[(i)] it should determine the position of events in spacetime, not in space;
\item[(ii)] it should not assume synchronisation at a distance or positioning of initial conditions;
\item[(iii)] it should define a coordinate system which is not linked to Earth leaving it to the {users} the duty to transform it to a more familiar (as well as less fundamental) Earth-based coordinate system.
\end{itemize}
}
Coll {\it et al.~}analysed these rPSs and proposed a classification for them depending on their characteristics.
In their classification, a rPS is {\it generic} if it can be built in any spacetime, {\it gravity free} if one does not need to know the metric field to build it, and {\it immediate}
if any event can compute its position in spacetime as soon as it receives the data from transmitters.
Another important characteristic of a rPS is being {\it auto-locating}, meaning that the user is able to determine not only its own position in spacetime,
but also the positions of the transmitters. This can be achieved by allowing the transmitters to also receive the signals from the other clocks and to mirror them together with their own clock reading.
The basic design investigated in \cite{Coll1} is a cluster of atomic clocks based on satellites which {\it continuously} broadcast the time reading of their clock together with the readings they are receiving from the other clocks.
Sometimes, they argue, the transmitters may also be equipped with accelerometers, or the users may be equipped with a clock themselves.
In these rPSs the user receives the readings of the transmitters' clock, together with the readings which each transmitter received from the others, for a total of $m^2$ readings for $m$ transmitters.
In these systems, the user may have no {\it a priori} knowledge of the transmitter trajectories, which are determined from the received data
(sometimes assuming a qualitative knowledge of the kind of gravitational field in which they move, or of whether they are free falling or subject to other forces).
As a matter of fact, one can define many different settings and investigate what can be computed by the user depending on its {\it a priori} knowledge and assumptions.
For example, it has been shown that these rPS can be used to measure the gravitational field; {see \cite{RovelliGPS, Kostic,Puchades12, Puchades14, Rovelli, RovelliGPSb, Coll10, Coll10b, Coll12, Coll09, Delva, Bunandar, Bini, Cadez, Gomboc, Lachieze}}.
These rPSs define a family of basic null coordinate systems (in dimension $m$ an event receiving the readings of $m$ clocks can directly use, in some regions of spacetime, these readings as local coordinates).
Coll and collaborators showed that one can consider settings so that the rPS is at the same time generic, gravity free and immediate.
The user in these rPSs is potentially able to define familiar (i.e.~more or less related to the Earth) coordinates, as well.
We should also remark that there is a rich, sometime implicit, tradition of rPS. It goes back to Ehlers-Pirani-Schild (EPS) who in 1972 proposed an axiomatics for gravitational physics in which the differential structure of spacetime is defined by declaring that {\it radar coordinates} are admissible coordinates; see \cite{EPS, EPSNostro} and \cite{Polistina}.
Earlier, Bondi and Synge used radar coordinates as somehow preferred coordinates in GR (see \cite{Bondi, WorldFunction, WorldFunction2})
though the tradition goes back to SR as well as before radar was invented (see \cite{Milne1, Milne2, Schroedinger, Whitrow, Whitrow2, Whitrow3}).
Of course, as Coll {\it et al.~}noticed, radar coordinates are not immediate but, as EPS essentially showed, they are still generic and gravity free.
\medskip
In this paper, we shall {further investigate these issues}, proposing an expanded classification of rPS.
In particular, we say that a rPS is {\it chronal} if it only uses clocks, and {\it simple} if it is chronal and the users have no clock of their own, relying solely on the transmitters' clocks.
We also say that a rPS is {\it instantaneous} if a user is regarded as an event, not as a worldline, and it is still able to determine immediately its position in spacetime.
Moreover, we say that a rPS is {\it discrete} if the signals used by the user are a discrete set of clock readings (as opposed to a {\it continuous} stream of them).
We say that a rPS is {\it self-calibrating} if {transmitters starting in generic initial conditions, with clocks which are not synchronised at a distance,
are able to operate as a rPS without external assistance. In particular, this means that the user is not assuming any {\it a priori} knowledge about the specific orbits of the transmitters (other than knowing that they are assumed to be freely falling or the kind of forces that may act on them) or about the time at which each clock has been reset.
All these parameters can be added as unknowns and fitted by the user.
Hence being self-calibrating is a combination of being auto-locating and not requiring clock synchronisation at a distance, not even an initial synchronisation, let alone a periodic one. Of course, one can still assume knowledge of the clock frequencies and their relation to clock proper time, which can be obtained before the mission starts.}
{Finally, we call {\it robust} a rPS that is able, by using only signals within the transmitter constellation, to check whether all {\it a priori} assumptions (such as, e.g., the transmitters being freely falling, or the gravitational field being described by a specific, or a general, Schwarzschild metric) are valid, and to pause working as a rPS in order to prevent wrong positioning. Being robust is a prerequisite for a software layer able to adapt to transient effects, e.g.~by adding unknown parameters describing perturbations so that they can be determined by a fit.
However, we shall not explore here this possibility leaving it to a further investigation.
Accordingly, a rPS which is self-calibrating and robust can in principle (depending, of course, on the type of perturbation) react to transient effects by pausing and resuming as soon as the operational conditions are restored, without receiving external assistance.
}
We shall discuss some settings which implement simple, instantaneous, discrete and self-calibrating (as well as general, immediate and self-locating) rPS.
We also discuss how the users can explicitly find the coordinate transformation to familiar systems (e.g.~inertial coordinates {$(t, x)$} in Minkowski,
or $(t, r)$ in Schwarzschild), since, even though null coordinates are more fundamental, they are not practically useful for the GPS user
(in doing so, one also proves that those classes of coordinates are admitted by the spacetime differential structure).
{In some of these cases we shall discuss how the design is robust against unexpected perturbations or, more generally, how the system is able to test the assumptions made about the transmitter trajectories and clocks.}
Here we argue that being {\it simple} is important from a foundational viewpoint.
Atomic clocks are already complicated objects from a theoretical perspective. They can be accepted as an extra structure but that does not mean that one should accept
other apparatuses (e.g.~accelerometers or rulers) as well. {These} can sometimes be defined in terms of clocks, are sometimes even more difficult to describe theoretically,
and, finally, are sometimes simply ill-defined in a relativistic setting (as rulers are).
Moreover, atomic clocks are complex (as well as expensive) technological systems; while it is reasonable to disseminate a small number of them keeping their quality high,
it is not reasonable to require each {user} to maintain one of them without increasing costs and worsening quality.
Studying {\it instantaneous} and {\it discrete} rPS is interesting because it keeps the information used to define the system finite {at any time}.
Coll and collaborators, for example, describe the clock readings by {continuous} functions {of proper time}.
This does not really affect the analysis as long as the positioning is done in null coordinates, but it essentially enters into the game when one wants to transform null coordinates into
more familiar ones (e.g.~inertial coordinates {$(t, x)$} in Minkowski spacetime).
{Discrete positioning has been considered in the literature, for example in \cite{T1, T2}.}
We have to remark that clocks are essentially and intrinsically discrete objects.
Regarding them as continuous objects can be done by interpolation, which partially spoils their direct physical meaning, as well as introduces approximation biases.
Whenever possible, in this case as for simple rPS, we prefer to adhere to simplicity.
Finally, {\it self-calibrating} rPS are a natural extension of self-positioning systems. If one has a self-positioning system there should be no need to trace the
trajectories of transmitters back in time.
We believe it is interesting to explicitly keep track of how long back the user needs to know the transmitters{' orbits}, both from a fundamental viewpoint and for later error estimates,
e.g.~in case one wanted or needed to take into account anisotropies of the Earth's gravitational field.
Of course, a self-calibrating rPS is also self-locating.
We can assume, though, that a {robust and} self-calibrating system, if temporarily disturbed by a perturbation (e.g.~a transient force acting for a while), will detect the perturbation and automatically return
to operation, with no external action, as soon as the perturbation has gone.
\medskip
{We shall now discuss some simple examples with the aim of illustrating methods which can be useful to deal with more realistic situations. In particular, we shall discuss the simple cases of Minkowski space in 1+1 (one spatial plus one time) and 1+3 dimensions.
We will show that restricting to 1+1 dimensions does not play an essential role and we will give an idea of how to scale to higher dimensions.
In Minkowski we already know the form of the general geodesics and we can focus on simple, discrete, instantaneous and self-calibrating rPS.}
We shall also consider the 1+1 Schwarzschild case to check that flatness does not play an essential role.
{In fact, in this case we shall introduce a method based on Hamilton-Jacobi complete integrals, which appears to be applicable more generally to higher dimensional spacetimes.}
We point out that any Lorentzian manifold is locally not too different from the corresponding Minkowski space, so that what we do can also be interpreted as a local approximation in the general case.
However, we shall not investigate here for how long such approximations would remain valid.
We also remark that, in Minkowski space (as well as in Schwarzschild), one has Killing vectors, and if one drags the {user} and all the clocks along an isometric flow, then the whole sequence of signals is left invariant. Accordingly, when Killing vectors are present, one obviously cannot determine the absolute position of anything, since all positions are determined up to an isometry,
and one can use this to set one clock in a given simple form (e.g.~at rest).
In Section II, we shall consider the simple case of Minkowski in {1+1 dimensions}. That is mainly to introduce notation and better present the main ideas.
In Section III, we consider the extension to 1+2 and 1+3 dimensions, with the aim of introducing further complexity, though not yet curvature.
In Section {IV}, we consider {a} Schwarzschild spacetime in {1+1 dimensions} to check that situations where the curvature is relevant (and consequently, with no affine structure) can be solved as well.
This is done by introducing methods based on symplectic geometry and Hamiltonian framework which appear to be important even in more realistic situations.
Finally, we briefly give some perspective for future investigations.
{We also have two appendices.
In Appendix A, we briefly discuss how the theory would look in a Minkowski spacetime of arbitrary dimension.
In Appendix B, we briefly discuss the relation between our evolution generator and Synge's world function.
}
\section{Minkowski case in 1+1 dimensions}
{Let us start with a somewhat trivial example. We consider positioning in Minkowski space in 1+1 dimensions.
Here freely falling worldlines are straight lines. We do not expect Minkowski space to be a realistic model of the Earth's gravitational field, unless we are very far from it.
This example is meant to introduce some ideas to be used later in Schwarzschild which will also be 1+1 dimensional.
}
Let us assume the spacetime $M$ to be flat and 2-dimensional.
Although what we shall discuss is intrinsic, let us use a system of Cartesian coordinates $(t, x)$ to sketch objects.
Since there is no gravitational field, particles {through the event $(t_\ast, x_\ast)$} move along geodesics which are straight lines
\begin{equation}
x-x_\ast = \beta (t-t_\ast)
\qquad
-1< \beta < 1
\end{equation}
while light rays {through the event $(t_\ast, x_\ast)$} have $|\beta|=1$, i.e.
\begin{equation}
x-x_\ast = \pm (t-t_\ast)
\end{equation}
In view of covariance, what we are saying is that free fall is expressed by first order polynomials {\it in the given coordinates $(t, x)$}.
If we use polar coordinates $(r, \theta)$, free fall would not be given by first order polynomials (such as $r-r_\ast= \gamma(\theta-\theta_\ast)$).
It would be rather given by the {\it same} straight lines (e.g.~$x-x_\ast = \beta (t-t_\ast)$) expressed in the new coordinates, i.e.
\begin{equation}
r \sin(\theta)-r_\ast \sin(\theta_\ast) = \beta \(r \cos(\theta)-r_\ast \cos(\theta_\ast)\)
\end{equation}
which are in fact the {\it same} curves.
It is precisely because of this fact that here we are not using coordinates in an essential way (and thus not spoiling covariance).
We instead are just selecting a class of intrinsic curves to represent free fall.
A {\it clock} is a parametrised particle world line
\begin{equation}
\chi: {{\Bbb R}}\rightarrow M: s\mapsto (t(s), x(s))
\end{equation}
A {\it standard clock} is a clock for which the covariant acceleration $a^\mu := \ddot x^\mu+ \Gamma^\mu_{\alpha\beta} \dot x^\alpha \dot x^\beta$
is perpendicular to its covariant velocity $\dot x^\alpha$ (see \cite{Perlick, Polistina}); in this case, and in Cartesian coordinates,
the acceleration is given by the second derivative (since $\Gamma^\mu_{\alpha\beta}= \{g\}^\mu_{\alpha\beta}$ and Christoffel symbols are vanishing).
Since the clock is moving along a straight line, then it is standard iff the functions $t(s), x(s)$ are linear in $s$.
Hence the most general standard clock {through the event $(t_\ast, x_\ast)$} is
\begin{equation}
\chi_\ast: {{\Bbb R}}\rightarrow M: s\mapsto (t= t_\ast+ \alpha s, x= x_\ast + \alpha \beta s)
\end{equation}
Its covariant velocity $\dot \chi_\ast$ is constant and one can always set its rate $\alpha$ so that $|\dot \chi_\ast|^2=-1$ is normalised.
In that case, one sets $\zeta:= \alpha \beta$, so that $|\dot \chi_\ast|^2= -\alpha^2+ \zeta^2=-1$ ($\Rightarrow\>\alpha^2- \zeta^2=1$), i.e.
\begin{equation}
\alpha=\Frac[1/\sqrt{1-\beta^2}]=\sqrt{1+\zeta^2}
\label{ProperClocks}
\end{equation}
which is called a {\it proper clock}.
A proper clock {measures its proper time $s$ and it} has three degrees of freedom since it is uniquely determined by four parameters $(t_\ast, x_\ast, \alpha, \zeta)$ with the relation (\ref{ProperClocks}).
{Although hereafter we shall consider only proper clocks, it is not difficult to introduce more realistic, {\it a priori} known, drift functions to describe actual atomic clocks.}
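As a small numerical illustration (a sketch in Python; the function names and sample values are ours, not part of the construction), one can check that fixing the rate by the normalisation $\alpha^2-\zeta^2=1$ indeed makes the clock velocity a unit timelike vector, so that the clock parameter coincides with proper time:

```python
import math

def proper_clock(t0, x0, zeta):
    """Proper clock s -> (t0 + alpha*s, x0 + zeta*s) in 1+1 Minkowski,
    with the rate alpha fixed by the normalisation alpha^2 - zeta^2 = 1."""
    alpha = math.sqrt(1.0 + zeta * zeta)
    return lambda s: (t0 + alpha * s, x0 + zeta * s)

def velocity_norm2(zeta):
    """Minkowski square |chi_dot|^2 = -alpha^2 + zeta^2 of the clock velocity."""
    alpha = math.sqrt(1.0 + zeta * zeta)
    return -alpha * alpha + zeta * zeta
```

The Minkowski interval between $\chi(0)$ and $\chi(s)$ is then $-s^2$, i.e.~the elapsed proper time is $s$.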
Let us consider two proper clocks $(\chi_0, \chi_1)$ in $M$ corresponding to the parameters $(t_0, x_0, \alpha_0, \zeta_0)$ and $(t_1, x_1, \alpha_1, \zeta_1)$.
As we anticipated above the whole system has a Poincar\'e invariance which can be fixed by setting $\chi_0: {{\Bbb R}}\rightarrow M: s\mapsto (t= s, x= 0)$
(which still leaves an invariance with respect to spatial reflections, which will be eventually used) and consequently, $\chi_1: {{\Bbb R}}\rightarrow M: s\mapsto (t= t_1+ \alpha s, x= x_1 + \zeta s)$.
Before proceeding, let us once again explain which problem we intend to consider in the following.
The usual rPS would assume $(t_1, x_1, \alpha, \zeta)$ to be known parameters fixed during the calibration of the system.
A {\it {user}} receiving the values of $s_i$ (with $i=0,1$) from the clocks at an event $c=(t_c, x_c)$ is able to compute its position $(t_c, x_c)$ as a function of the signals $(s_0, s_1)$.
The signals $(s_0, s_1)$ are assumed to be coordinates on the spacetime manifold, and one can prove that the transition functions
$\varphi:(s_0, s_1)\mapsto (t, x)$ are smooth, so that also $(t, x)$ are good coordinates on spacetime.
This case is particularly simple. The situation is described in Figure 1.
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{F1.pdf}
\caption{\small\it Messages $(s_0, s_1)$ exchanged from the clocks to the {user}. The empty circles are the events when the clocks have been reset.}
\label{fig:F1}
\end{figure}
One can follow Fig.~1 and readily compute that
\begin{equation}
s_0=t_c-x_c
\qquad
s_1=\Frac[(x_c-x_1) + (t_c-t_1) /\alpha+\zeta]
\end{equation}
which can be readily solved for $(t_c, x_c)$ to get
\begin{equation}
t_c= \Frac[1/2]\[ (\alpha +\zeta )s_1+ s_0+t_1+x_1\]
\qquad\qquad
x_c=\Frac[1/2]\[(\alpha +\zeta )s_1-s_0+t_1+x_1\]
\label{TransFunc}
\end{equation}
Accordingly, one can {\it define} the coordinates $(t, x):=(t_c, x_c)$ as above.
Equations (\ref{TransFunc}) define transition functions between the coordinates $(s_0, s_1)$ and $(t, x)$, which are regular, being polynomial. We should stress that here $(t_1, x_1, \alpha,\zeta)$ are treated as known parameters.
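The transition functions (\ref{TransFunc}) are simple enough to be checked numerically. The following sketch (in Python; the function names and sample values are ours) implements the map $(t_c, x_c)\mapsto(s_0, s_1)$ and its inverse, the positioning map:

```python
import math

def signals_from_event(tc, xc, t1, x1, zeta):
    """Clock readings (s0, s1) received at the event (tc, xc).
    Clock 0 is at rest at the origin; clock 1 has parameters (t1, x1, alpha, zeta)."""
    alpha = math.sqrt(1.0 + zeta * zeta)      # normalisation alpha^2 - zeta^2 = 1
    s0 = tc - xc
    s1 = ((xc - x1) + (tc - t1)) / (alpha + zeta)
    return s0, s1

def event_from_signals(s0, s1, t1, x1, zeta):
    """Positioning map of Eq. (TransFunc): user event (tc, xc) from the readings."""
    alpha = math.sqrt(1.0 + zeta * zeta)
    tc = 0.5 * ((alpha + zeta) * s1 + s0 + t1 + x1)
    xc = 0.5 * ((alpha + zeta) * s1 - s0 + t1 + x1)
    return tc, xc
```

The two maps are exact algebraic inverses of one another, which reflects the fact that the transition functions are globally defined (and here polynomial).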
Our problem in the following will be to show that if we promote $(t_1, x_1, \alpha, \zeta)$ to be unknowns of the problem together with $(t_c, x_c)$, and we add a whole past sequence of readings
(see Figure 2 below),
then we are still able to solve the system and use the infinite redundancy to check the assumptions of the model (e.g.~that the gravitational field is vanishing, that the clocks are free falling, that the clocks are identical proper clocks, \dots).
{In Fig.~2 below we imagine two (proper) clocks $\chi_0$ and $\chi_1$, each broadcasting at any time its own clock reading together with the chain of signals it has received from the other clock up to the emission time. Of course, this activity of the transmitters does not rely on any information about possible users receiving the signals.
In the figure we represent only the signals received by the user $(t_c, x_c)$ which, as we said above, is an event, not a worldline.
If the user were a worldline, it would receive all signals broadcast by the clocks at different times. However, the position of the user, as well as the position, orbital parameters and timing of the clocks at the emission of the last signals, have to be obtained from these data alone, without
relying on data which may have been collected in the past by the user operator. That is what we meant by saying that the rPS is {\it instantaneous}, i.e.~the user is regarded as an event and has to compute everything from data received at once.
}
Before sketching the solution, we need to introduce some notation which will be useful later in higher dimensions when drawing diagrams as in Figure 1 and 2 will become difficult.
First, we shall use the affine structure on Minkowski space, so that the difference $P-Q$ of two points $P, Q\in M$ denotes a vector (tangent to $M$ at the point $Q$ and) leading from
$Q$ to $P$.
On the tangent space the Minkowski metric induces inner products so that we can define a pseudonorm $(P-Q)\cdot (P-Q)=|P-Q|^2$ so that the vector $P-Q$ is {\it lightlike} iff $|P-Q|^2=0$.
Secondly, if we have $k$ clocks, namely $\chi_0, \chi_1, \dots, \chi_{k-1}$, we shall have an infinite sequence of events along them, namely $p_0, p_1, p_2, \dots$.
Our naming convention will be that the point $p_n$ is on the clock $\chi_i$ iff $n\hbox{\rm\ mod} k=i$, and that $p_n$ is at a later time than $p_{n+k}$.
\begin{figure}[htbp]
\centering
\includegraphics[width=9cm]{F2.pdf}
\caption{\small\it Past messages $(s_0, s_1, s_2, s_3, s_4, \dots)$ exchanged from the clocks to the {user in the case of two satellites in 1+1 Minkowski spacetime}.}
\label{fig:F2}
\end{figure}
Here in Figure 2 we have $k=2$, so that $p_0, p_2, p_4, \dots$ are events on $\chi_0$, while $p_1, p_3, p_5, \dots $ are events along $\chi_1$.
Then the segments $c-p_0$, $c-p_1$, $p_2-p_1$, $p_3-p_0, \dots$ are all light rays so that one has
\begin{equation}
\begin{aligned}
&|c-p_0|^2=0
\quad
|c-p_1|^2=0
\quad
|p_2-p_1|^2=0
\quad
|p_3-p_0|^2=0
\quad
\dots
\end{aligned}
\label{Eqs}
\end{equation}
The clock reading at the event $p_i$ will be denoted by $s_i$.
Each clock will be mirroring all signals it receives at $p_i$ from the other clock(s) in addition to the value $s_i$ of its reading at that event.
Accordingly, the {user} will receive the whole sequence $(s_0, s_1, s_2, s_3, s_4, \dots)$.
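To make the construction concrete, here is a small simulation-phase sketch in Python (the function names and sample parameter values are ours; the geometry follows Figure 2) which, given the clock parameters and the user event $c$, traces the past light rays and produces the events $p_n$ and readings $s_n$:

```python
import math

def clock1(s, t1, x1, zeta):
    """Worldline of chi_1: s -> (t1 + alpha*s, x1 + zeta*s), with alpha^2 - zeta^2 = 1."""
    alpha = math.sqrt(1.0 + zeta * zeta)
    return (t1 + alpha * s, x1 + zeta * s)

def emission_on_clock0(event):
    """Reading s at which chi_0 (at rest, x = 0) emits a ray received at `event`."""
    t, x = event
    return t - abs(x)                     # past-directed null ray: |t - s| = |x|

def emission_on_clock1(event, t1, x1, zeta):
    """Reading s at which chi_1 emits a light ray received at `event`."""
    t, x = event
    alpha = math.sqrt(1.0 + zeta * zeta)
    best = None
    for sign in (1.0, -1.0):              # the two null directions
        s = (t - t1 - sign * (x - x1)) / (alpha - sign * zeta)
        if t1 + alpha * s < t:            # keep the emission event in the past
            best = s if best is None else max(best, s)
    return best

def simulate(c, t1, x1, zeta, n_points=6):
    """Simulation phase: events p_0, p_1, ... of Figure 2 and readings s_0, s_1, ..."""
    p, s = {}, {}
    s[0] = emission_on_clock0(c);               p[0] = (s[0], 0.0)
    s[1] = emission_on_clock1(c, t1, x1, zeta); p[1] = clock1(s[1], t1, x1, zeta)
    for n in range(2, n_points):
        target = p[n - 1] if n % 2 == 0 else p[n - 3]   # receiver of this signal
        if n % 2 == 0:
            s[n] = emission_on_clock0(target);  p[n] = (s[n], 0.0)
        else:
            s[n] = emission_on_clock1(target, t1, x1, zeta)
            p[n] = clock1(s[n], t1, x1, zeta)
    return p, s
```

By construction, each null condition $|p_n-p_m|^2=0$ of (\ref{Eqs}) holds exactly; inverting this map, i.e.~recovering $(t_1, x_1, \zeta, t_c, x_c)$ from the readings alone, is the positioning phase.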
\begin{figure}[htbp]
\centering
\includegraphics[width=7cm]{F3.pdf}
\caption{\small\it Graph representing the {signal exchanges} shown in Figure 2.}
\label{fig:F3}
\end{figure}
Finally, in 2d, Figure 2 is enough to describe the whole signalling convention. However, in higher dimensions such pictures become difficult to read. For this reason, we replace
the description by a graph as in Figure 3.
In these graphs, each line represents an equation of the type in (\ref{Eqs}).
Now one can check that equations
\begin{equation}
\begin{aligned}
&|p_2-p_1|^2=0
\quad
|p_3-p_0|^2=0
\quad
|p_4-p_3|^2=0
\quad
|p_5-p_2|^2=0
\end{aligned}
\label{Eq1}
\end{equation}
admit solutions
\begin{equation}
\alpha=\alpha(s_i)
\quad
\zeta=\zeta(s_i)
\quad
t_1=t_1(s_i)
\quad
x_1=x_1(s_i)
\end{equation}
with $i=0, 1, 2, 3,4,5$ and that one has $\alpha^2-\zeta^2=1$.
Then one can use the first two equations
\begin{equation}
|c-p_0|^2=0
\quad
|c-p_1|^2=0
\label{Eq2}
\end{equation}
to solve for
\begin{equation}
t_c=t_c(s_i)
\quad
x_c=x_c(s_i)
\end{equation}
Actually, by solving the system one gets eight solutions. Four of them are spurious solutions since they do not satisfy the equations
\begin{equation}
|p_4-p_7|^2=0
\qquad
|p_5-p_6|^2=0
\label{Eq3}
\end{equation}
If our assumptions about free fall are accurate, the remaining four solutions then identically satisfy all the subsequent equations.
Among these four solutions, one can discard the spurious ones by checking that all the vectors
\begin{equation}
c-p_0,
\quad
c-p_1,
\quad
p_1-p_2,
\quad
p_0-p_3,
\quad
p_3-p_4,
\quad
p_2-p_5,
\quad\dots
\end{equation}
are future directed.
This reduces the solutions to two.
To find the unique solution, we can utilize Poincar\'e invariance.
We have already used the boost invariance to adjust the clock $\chi_0$. However, we have a residual invariance with respect to spatial reflections.
We could have originally used this to keep $x_1\ge 0$. This condition can now be used to select a unique solution between the two residual solutions of the system.
In other words, by using the signals $(s_0, \dots, s_5)$ one is able to uniquely determine both the clock parameters $(\alpha, \zeta, t_1, x_1)$ and the {user} position in spacetime $(t_c, x_c)$.
There is no need for clock calibration or synchronisation.
Of course, this means that the infinite sequence $(s_0, s_1, s_2, \dots)$ is not independent.
Indeed, we can compute allowed sequences in a {\it simulation phase} in which we assume $(\alpha, \zeta, t_1, x_1, t_c, x_c)$ as parameters, and we compute the signals
$(s_0, s_1, s_2, \dots)$ by using equations (\ref{Eq1}), (\ref{Eq2}), (\ref{Eq3}), \dots
Then we start the {\it positioning phase}, in which we consider those signals as parameters, and we determine the unknowns $(\alpha, \zeta, t_1, x_1, t_c, x_c)$,
so that the positioning phase essentially deals with the inversion of what is done in simulation mode.
{In a sense, the {\it simulation-positioning} paradigm provides a general framework to design and analyse rPS. In the {\it simulation phase} we assume a physical situation, i.e.~a given gravitational field and a given configuration of transmitters and {user}, and describe the signals exchanged among them.
As a result we are able to determine emission and observation events on the transmitters' worldlines,
i.e.~the signals which are eventually received by the {users}. As we said, in simulation mode one knows the physical parameters (e.g.~the orbital parameters of the transmitters, additional non-gravitational forces acting on them, possibly the mass generating the gravitational field, and so on) and computes the potentially infinite sequence $(s_0, s_1, s_2, \dots)$ of signals received by the {user} at a given event.
In the {\it positioning phase} we go the other way around: we try to invert the correspondence, recovering the information about the physical configuration from the signals received at an event alone. That is precisely what a rPS {user} should do: first determine the configuration of the satellites and the forces acting on them, and then its own position in spacetime.
Later on, when discussing the case of a 2d Schwarzschild spacetime, we shall also argue that the positioning phase can quite generally be reduced to a generic simulation phase: if one is able to perform the simulation phase for generic parameters, then a least squares analysis can be used for positioning.
}
Let us finally remark that when we prove that the particular system of inertial coordinates $(t, x)$ is allowed, then consequently, any other inertial coordinate system is allowed as well.
We can also add an unexpected acceleration $a$ to the clock $\chi_1= (t_1+ \alpha s, x_1+ \zeta s + \Frac[1/2] a s^2)$ while computing the signals to be transmitted to the {user}.
If the {user} does not know about the acceleration and keeps assuming (wrongly, this time) that the clock is freely falling, then
one can show that the {user} can still determine the parameters of the clock, but this time the constraint $\alpha^2-\zeta^2=1$ and the redundant equations cannot be identically satisfied.
This shows that the {user} is potentially able to test the assumptions we made and to self-diagnose their breakdown, {i.e.~the system is robust}.
If the acceleration dies out, as soon as the transmitters exchange a few signals the system manages to satisfy the constraints and it becomes operational again.
That shows the rPS to be self-calibrating.
\section{Minkowski in 1+2 and 1+3 dimensions}
When we consider Minkowski spacetime in dimension three the situation becomes more complicated and one needs to think about what is going on in order to apply the simple program we presented in dimension two.
In dimension three we consider three proper clocks $\chi_i$, with $i=0,1,2$. Each has five degrees of freedom, an initial position $(t_i, x_i, y_i)$ and an initial direction
given by $(\alpha_i, \zeta_i, \xi_i)$ obeying the constraint $\alpha_i^2-\(\zeta_i^2+\xi_i^2\)=1$.
One can still use the Poincar\'e invariance to set the first clock to be $\chi_0: s\mapsto (s, 0, 0)$,
though one still has two clocks $\chi_1, \chi_2$ and will have to deal with signals back and forth between them (which will turn out to be coupled quadratic equations, compared with lower dimension cases where each equation contained only the parameters of one clock at a time).
Moreover, in dimension two one has only two light rays through any event and each of them goes to a clock, while in dimension three one has infinitely many light rays
through an event and one has to select the one intersecting a clock.
Finally, in 1+1 dimensions when Poincar\'e invariance is used to fix the 0th clock we are left with a discrete residual invariance with respect to spatial reflections.
On the other hand, in 1+2 dimensions, one is left with a 1-parameter rotation group (as well as spatial reflections), i.e.~with $O(2)$, which can be used to set $y_1=0$ and $x_1\ge 0$.
However, one can still show that the unknowns which in our case are now $(t_1, x_1, y_1, \alpha_1, \zeta_1, \xi_1, t_2, x_2, y_2, \alpha_2, \zeta_2, \xi_2)$
can be computed from the signals and the constraints $\alpha_i^2-\(\zeta_i^2+\xi_i^2\)=1$.
The redundancy and the Poincar\'e fixing described above can be used to select a unique solution; the further redundancy is used to check assumptions.
\begin{figure*}[htbp]
\centering
\includegraphics[width=12cm]{F4.pdf}
\caption{\small\it Graph representing the signal exchange in 1+2 Minkowski spacetime.}
\label{fig:F4}
\end{figure*}
To do that, we used the scheme of signals shown in Figure 4.
We first use the equations
\begin{equation}
\begin{aligned}
|p_0-p_4|^2=0
\quad
|p_1-p_3|^2=0
\quad&
|p_4-p_9|^2=0
\quad
|p_3-p_{10}|^2=0
\quad
|p_6-p_{16}|^2=0
\quad
|p_7-p_{18}|^2=0\\
&
\alpha_1^2-\(\zeta_1^2+\xi_1^2\)=1
\end{aligned}
\end{equation}
which just depend on $(t_1, x_1, y_1, \alpha_1, \zeta_1, \xi_1)$, to determine the parameters of the first clock.
Since three of them are not independent, one has two extra parameters, a sign $\epsilon_1=\pm 1$ and an angle $\omega_1$ which are left undetermined and will be fixed later on.
Similarly, we used equations
\begin{equation}
\begin{aligned}
|p_0-p_5|^2=0
\quad
|p_2-p_6|^2=0
\quad&
|p_5-p_{12}|^2=0
\quad
|p_3-p_{11}|^2=0
\quad
|p_8-p_{15}|^2=0
\quad
|p_6-p_{17}|^2=0\\
&\alpha_2^2-\(\zeta_2^2+\xi_2^2\)=1
\end{aligned}
\end{equation}
which depend on $(t_2, x_2, y_2, \alpha_2, \zeta_2, \xi_2)$, to determine the parameters of the second clock, again leaving two parameters undetermined, a sign $\epsilon_2=\pm 1$ and an angle $\omega_2$,
due to relations among the equations.
Then one has the extra equations representing signals between the clocks $\chi_1$ and $\chi_2$
\begin{equation}
\begin{aligned}
&|p_1-p_8|^2=0
\quad
|p_2-p_7|^2=0
\quad
|p_4-p_{14}|^2=0
\quad
|p_5-p_{13}|^2=0
\quad
|p_8-p_{19}|^2=0
\quad
|p_7-p_{20}|^2=0
\end{aligned}
\end{equation}
Using these equations (and depending on the four possibilities for $(\epsilon_1, \epsilon_2)$) we can determine $\omega_1-\omega_2$,
thus leaving only $\omega_2$ undetermined. This fact accounts for the residual Poincar\'e invariance which can be used to set $y_1=0$ and $x_1\ge 0$.
We note that two of the four possibilities for $(\epsilon_1, \epsilon_2)$ need to be abandoned because they are incompatible with these equations.
Finally, two solutions are found, only one of which agrees with the gauge fixing $x_1\ge 0$.
Thus, also in this case, the {user} is able to determine the parameters of the clocks uniquely.
Once the clocks are known, one can use the equations
\begin{equation}
|c-p_0|^2=0
\qquad
|c-p_1|^2=0
\qquad
|c-p_2|^2=0
\end{equation}
to determine the {user} position $(t_c, x_c, y_c)$.
In this case, one needs to solve the equations one by one, {in a wisely chosen order}, to control the details of the procedure.
As we see, Minkowski spacetime $M_3$ in 1+2 dimensions is quite different from Minkowski
spacetime $M_2$ in 1+1 dimensions. Already in that simple case one needs to select the equations wisely in order to solve them. We argue that things do not escalate further: higher dimensions are qualitatively like $M_3$. We shall just sketch these cases since eventually the cases of real physical interest will be Schwarzschild (or Kerr) in 1+3 dimensions. After all, we are mainly thinking of a rPS around a star or a black hole, not of navigation in interstellar space as in \cite{T1, T2}.
Let us start by considering Minkowski spacetime $M_4$ in 1+3 dimensions.
Let us consider $4$ freely falling (i.e.~inertial), otherwise identical, clocks $(\chi_0, \chi_1, \chi_2, \chi_3)$.
Spacetime has a Poincar\'e invariance which can be fixed by setting $\chi_0$ at the origin
(i.e.~$\chi_0=(s, 0, 0, 0)$),
$\chi_1$ moving in the spatial plane $x^3=0$, parallel to the $x^1$ axis
(i.e.~$\chi_1(s)=(t_1+\alpha_1 s, x_1+ \zeta_1 s, c_1, 0)$) with the constraint $\alpha_1^2-\zeta_1^2=1$ and $x_1>0$.
The other two clocks are unconstrained, namely $\chi_i(s)=(t_i+\alpha_i s, x_i+ \zeta^1_i s, y_i+ \zeta^2_i s,
z_i+ \zeta^3_i s)$ with the constraint $\alpha_i^2-(\zeta^1_i)^2-(\zeta^2_i)^2-(\zeta^3_i)^2=1$ and $i=2,3$.
In dimension $m=4$ the Poincar\'e group is of dimension 10, hence the whole system has $18= 7\times 4-10$ degrees of freedom, namely
it is described by 21 parameters
\begin{equation}
\begin{aligned}
&t_1, x_1, c_1, \alpha_1, \zeta_1;
\qquad
t_2, x_2, y_2, z_2, \alpha_2, \zeta^1_2, \zeta^2_2, \zeta^3_2;
\qquad
t_3, x_3, y_3, z_3, \alpha_3, \zeta^1_3, \zeta^2_3, \zeta^3_3
\end{aligned}
\end{equation}
with 3 constraints.
As far as the signals are concerned, we have 4 signals in emission set $0$, namely the signals from transmitters to the {user}, $4\times 3=12$ in emission set $1$,
$12 \times 3= 36$ in emission set $2$,
$36 \times 3$ in emission set $3$, and so on.
Among them, if we fix two clocks $\chi_i$ and $\chi_j$ with $i\not=j$,
$2$ are exchanged between $\chi_i$ and $\chi_j$, at the $1^{st}$ emission set,
$6$ at the $2^{nd}$ emission set, $18$ at the $3^{rd}$ emission set, and so on.
Also 4 are sent from a clock to the {user}, at emission set $0$.
Accordingly, we have $2+6=8$ signals exchanged between $\chi_0$ and $\chi_i$ in emission sets $1$ and $2$, for each $i=1,2,3$.
The associated conditions just depend on the parameters of the clock $\chi_i$ and partially fix them,
as we showed happens in dimension 3 (i.e.~$2+1$).
Then we have $3\times 8=24$ extra equations from signals exchanged between clocks $\chi_i$ and $\chi_j$ with $i, j\not=0$, which can be used later to fix the redundancy.
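This bookkeeping can be checked by brute-force enumeration. In the following sketch (Python; representing a signal of emission set $n$ by the chain of $n+1$ transmitters that relays it to the user is our own convention), we recover the counts quoted above:

```python
from itertools import product

def emission_set(k, n):
    """Signals of emission set n among k transmitters: chains (a_n, ..., a_0) of
    relaying transmitters with consecutive entries distinct; a_0 broadcasts to the user."""
    return [c for c in product(range(k), repeat=n + 1)
            if all(c[m] != c[m + 1] for m in range(n))]

def exchanged(k, n, i, j):
    """Signals of set n whose emission hop is between transmitters i and j."""
    return [c for c in emission_set(k, n) if n >= 1 and {c[0], c[1]} == {i, j}]
```

With $k=4$ transmitters this reproduces the counts $4, 12, 36, \dots$ per emission set, the $2, 6, 18, \dots$ signals exchanged between a fixed pair of clocks, and the $2+6=8$ signals between $\chi_0$ and $\chi_i$ in emission sets $1$ and $2$.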
{The algorithm given above is completely general and can be extended to any number of dimensions. In Appendix A, we collect some considerations about how to extend the model to a Minkowski spacetime of arbitrary dimension.}
\section{Schwarzschild in 1+1 dimensions}
This Section is an attempt to apply the procedure described in the previous Sections to a curved spacetime.
We are not endowing the model with any direct physical meaning; gravity is sometimes considered trivial in dimension two, since the Einstein equations are identically satisfied.
However, one could probably argue for a physical meaning in terms of radial solutions in the 4-dimensional Schwarzschild spacetime.
In fact, the Minkowski cases we studied above are vulnerable to two different concerns:
\begin{itemize}
\item[1)] we extensively used the affine structure of ${{\Bbb R}}^n$ to write the equations to identify light rays;
\item[2)] the metric is flat; thus, the Lagrangian for geodesics has an extra first integral (the conjugate momentum to $x$ which is cyclic).
\end{itemize}
In both cases, we should check that we are able to perform the computation on a more general curved spacetime,
otherwise what we have done above would be restricted to SR.
Let us {consider coordinates $(t, r)$ and} try the metric
\begin{equation}
g= -A(r) dt^2 + \Frac[dr^2/ A(r)]
\qquad\qquad
A(r):= 1-\Frac[2m/r]
\end{equation}
which corresponds to the Lagrangian for the geodesics
\begin{equation}
L=\sqrt{
A{\( {\Frac[dt/ds]} \)^2}- {{\Frac[1/A]} \( {\Frac[dr/ds]} \)^2}
} ds=\sqrt{\Frac[A^2-\dot r^2/A]} dt
\label{Lag}
\end{equation}
There are some reasons to prefer this Lagrangian to the ordinary quadratic one.
First of all, it is invariant with respect to re-parameterisations. The quadratic Lagrangian is not, and it is valid only when one parametrises with proper time.
{On the contrary, in the Lagrangian (\ref{Lag}) the parameter $s$ is {\it a priori} arbitrary. It can be fixed to be the proper time or the coordinate time $t$, as well as any other parameter.}
However, the physical motions are represented by trajectories in spacetime, not by parametrised curves.
The parameter along the curve (any parameter, including the proper time) is introduced as a gauge fixing of this invariance
and just to use the variational machinery introduced in mechanics.
Accordingly, a Lagrangian which accounts for re-parameterisations is better than one which does not, exactly as a gauge invariant dynamics is better than
one which is written in a fixed gauge.
Secondly, we shall use it for both particles and light rays.
If proper time is an available gauge fixing for particles, it is not for light rays.
Thus, this choice of Lagrangian allows us to discuss light rays on an equal footing with particles.
Thirdly, one can always fix the gauge later on: the use of this Lagrangian is not a restriction.
The solutions of this Lagrangian are geodesic trajectories.
Of course, one can try and solve its Euler-Lagrange equation analytically,
although obtaining this solution strongly relies on the specific form of the metric.
Instead, we try and develop a method to find geodesics relying on first integrals and the Hamilton-Jacobi (HJ) method.
This method works on any spacetime which allows separation of variables for the corresponding HJ equation (such spacetimes have been classified; see \cite{HJ, HJ1}).
Once a complete integral of the HJ equation is known
(which may be obtained by the method of separation of variables)
then solutions are found ({\it{}only}) by inverting functions.
\subsection{Hamiltonian formalism}
The momentum associated to the Lagrangian (\ref{Lag}) is
\begin{equation}
p= \Frac[\partial L/\partial \dot r] = -\Frac[\dot r/\sqrt{A(A^2-\dot r^2)}]
\iff
\dot r= -\Frac[pA\sqrt{A}/\sqrt{1+Ap^2}]
\label{LegTr}
\end{equation}
which corresponds to the Hamiltonian
\begin{equation}
H= - \sqrt{A}\sqrt{1+Ap^2}
\end{equation}
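As a quick sanity check of the Legendre transform (\ref{LegTr}), the following sketch (Python; sample values of $A$ and $\dot r$ are arbitrary, subject to $\dot r^2 < A^2$) verifies that the two formulas are mutually inverse and evaluates the Hamiltonian on shell:

```python
import math

def momentum(rdot, A):
    """p as a function of the velocity, Eq. (LegTr); requires rdot^2 < A^2."""
    return -rdot / math.sqrt(A * (A * A - rdot * rdot))

def velocity(p, A):
    """Inverse Legendre transform of Eq. (LegTr)."""
    return -p * A * math.sqrt(A) / math.sqrt(1.0 + A * p * p)

def hamiltonian(p, A):
    """H = -sqrt(A) sqrt(1 + A p^2); on shell this equals -A^{3/2}/sqrt(A^2 - rdot^2)."""
    return -math.sqrt(A) * math.sqrt(1.0 + A * p * p)
```

In particular $H$ is negative, consistently with the limit $E\rightarrow-\infty$ used below for light rays.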
The corresponding HJ equation is
\begin{equation}
- \sqrt{A}\sqrt{1+A\(W'\)^2}=E
\iff
W'= \mp\Frac[\sqrt{E^2-A}/ A]
\end{equation}
where prime denotes the derivative with respect to $r$.
The complete integral for HJ is hence
\begin{equation}
S(t, r; E)= -E t \mp \int \Frac[\sqrt{E^2-A} / A]dr
\end{equation}
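One can also verify numerically that $W'$ solves the HJ equation, and only for $E<0$, as required by the sign of $H$ (a sketch in Python; the sample values $m=1$, $r=5$, $E=\mp 2$ are ours):

```python
import math

def A(r, m):
    return 1.0 - 2.0 * m / r

def W_prime(r, E, m, sign=-1.0):
    """W' = -/+ sqrt(E^2 - A)/A, valid where E^2 >= A(r)."""
    return sign * math.sqrt(E * E - A(r, m)) / A(r, m)

def hj_residual(r, E, m):
    """Left-hand side of the HJ equation minus E; vanishes iff E < 0."""
    w = W_prime(r, E, m)
    return -math.sqrt(A(r, m)) * math.sqrt(1.0 + A(r, m) * w * w) - E
```

For $E=-2$ the residual vanishes; for $E=+2$ it equals $-2E$, showing that the positive-$E$ branch does not solve the equation.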
\subsection{The evolution generator}
For later convenience, we would like to express the complete integral as a function of the initial conditions as well, i.e.
\begin{equation}
\begin{aligned}
F(t, r; t_0, r_0)=& S(t, r) - S(t_0, r_0)=
-E \cdot(t-t_0) \mp \int_{r_0}^r \Frac[\sqrt{E^2-A} / A]dr
\end{aligned}
\label{IntegralSchwarzschild}
\end{equation}
which will be called the {\it evolution generator}, once we eliminate $E$
(see Appendix {B}).
The evolution generator contains the information for finding the general solutions of Hamilton equations, i.e.~general geodesic trajectories.
In fact, one has
\begin{equation}
-\Frac[\partial F/ \partial E] = -\Frac[\partial S/ \partial E] (t, r)+\Frac[\partial S/ \partial E] (t_0, r_0)= P-P_0=0
\label{myeq}
\end{equation}
which is zero since the momentum $P$ conjugate to $E$ is conserved, $S$ being a solution of the HJ equation.
In principle, one could use this equation to obtain $E(t, r; t_0, r_0)$
and replace it above to obtain the evolution generator $F(t, r; t_0, r_0)$.
Once the evolution generator has been determined, we can determine the geodesic trajectory
passing through $(t, r)$ and $(t_0, r_0)$ by computing
{
\begin{equation}
\begin{cases}
p=\Frac[\partial F/ \partial r]\\
p_0=-\Frac[\partial F/ \partial r_0]\\
\end{cases}
\end{equation}
}where $p_0$ is the initial momentum, to be selected so that the geodesic eventually passes through $(t, r)$,
while $p$ is the momentum when it arrives at $(t, r)$.
Equivalently, one can use the inverse Legendre transform (\ref{LegTr}) to obtain the initial and final velocities.
Accordingly, the flow of the transformations $\Phi_{t-t_0}:(t_0, r_0)\mapsto (t, r)$ is canonical and describes completely
the geodesic flow.
One can use this method to obtain again the geodesics in Minkowski space (setting $A=1$), this time {in coordinates $(t, r)$ and} without resorting to the affine structure but using the manifold structure only.
In the Schwarzschild case, one can solve the integral (\ref{IntegralSchwarzschild}). However, the resulting equation (\ref{myeq})
turns out to be too complicated to be solved for $E$. Consequently, we need to learn how to work around this issue.
For our Schwarzschild--like solution, i.e.~for $A=1-\Frac[2m/r]$, we can introduce a dimensionless variable $r= 2m\rho$ to obtain
\begin{equation}\label{F_rho}
\begin{aligned}
F(t, r; &t_0, r_0)= -E\cdot (t-t_0)
\mp 2m \int_{\rho_0=\Frac[r_0/2m]}^{\rho=\Frac[r/2m]} \Frac[\sqrt{\rho} \sqrt{{(E^2-1)\rho +1}} / \rho-1]d\rho
\end{aligned}
\end{equation}
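As a numerical illustration (not part of the original derivation; the function name and parameter choices below are ours), the integral in the expression above can be evaluated with standard quadrature once $E$, the endpoints, and the sign branch are fixed:

```python
import numpy as np
from scipy.integrate import quad

def evolution_generator(t, r, t0, r0, E, m=1.0, sign=1):
    """Numerically evaluate F(t, r; t0, r0) for A = 1 - 2m/r,
    using the dimensionless variable rho = r/(2m).
    `sign` selects the -/+ branch of the integral term.
    Valid for r, r0 > 2m and E^2 > 1 (unbound motion)."""
    rho0, rho1 = r0 / (2 * m), r / (2 * m)
    integrand = lambda p: np.sqrt(p) * np.sqrt((E**2 - 1) * p + 1) / (p - 1)
    integral, _ = quad(integrand, rho0, rho1)
    return -E * (t - t0) - sign * 2 * m * integral
```

For coincident radial endpoints the integral term vanishes and only the $-E\,(t-t_0)$ term survives, which gives a quick sanity check of the implementation.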
The limit to lightlike geodesics is obtained by letting $\dot r\rightarrow \pm {A(r)}$, which corresponds to $p\rightarrow \mp \infty$,
which in turn corresponds to the limit $E\rightarrow -\infty$.
Thus, for light rays, we are interested in the solutions of (\ref{myeq}) which diverge to $-\infty$.
Given a clock $\gamma: s\mapsto (t(s),r(s))$ and an event $(t_c, r_c)$,
if we want to determine a light ray going from the clock to the event, we should determine $s=s_\ast$ on the clock so that
there is a lightlike geodesic from $(t, r)=(t(s_\ast),r(s_\ast))$ to $(t_0, r_0)=(t_c, r_c)$. Since the geodesic one determines is lightlike, the corresponding $E(t, r; t_0, r_0)$ diverges.
Even though the explicit form of $E(t, r; t_0, r_0)$ is hard to find we can make the substitution $E=1/\epsilon$ in the equation (\ref{myeq})
and then take the limit $\epsilon\rightarrow 0^-$, i.e.~take the limit through negative values of $\epsilon$.
In the Schwarzschild case, we obtain for (\ref{myeq})
\begin{equation}
(t-t_0) \mp \(r- r_0+2m\ln\(\Frac[r-2m/r_0-2m]\)\)+O(\epsilon)=0
\end{equation}
the two signs corresponding to ingoing and outgoing geodesic trajectories.
This allows a divergent solution (i.e.~$\epsilon=0$) iff
\begin{equation}
t-t_0 =\pm\( r- r_0+2m\ln\(\Frac[r-2m/r_0-2m]\)\)
\label{lightrays}
\end{equation}
Once we fix the initial condition $(t_0, r_0)$, this provides an {explicit} definition $t(r)$ of the lightlike geodesics trajectories through it, parametrised by $r$.
Thus, in view of separation of variables, the HJ method provides us with an exact, analytical description of light rays {parameterized in terms of $r$}.
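The explicit parameterisation $t(r)$ of radial light rays can be coded directly from the equation above (a minimal sketch; the function name and defaults are illustrative, and $r, r_0 > 2m$ is assumed):

```python
import math

def t_lightray(r, t0, r0, m=1.0, outgoing=True):
    """Relative time t(r) along a radial lightlike geodesic through
    (t0, r0) in the A = 1 - 2m/r metric:
    t - t0 = +/- ( r - r0 + 2m ln((r - 2m)/(r0 - 2m)) ).
    Valid outside the horizon, r, r0 > 2m."""
    s = 1.0 if outgoing else -1.0
    return t0 + s * (r - r0 + 2 * m * math.log((r - 2 * m) / (r0 - 2 * m)))
```

Note that both branches pass through $(t_0, r_0)$, and the outgoing branch has $t$ increasing with $r$, as expected.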
Moreover, before taking the limit to $E\rightarrow -\infty$, Equation \eqref{F_rho} is also a good description of {test} particles which we can use to describe the motion of transmitters.
For $-1<E\le 0$ one has bounded motions, while for $E\le -1$ one has unbounded motions.
The bounded motions have a maximal distance they reach before falling in again.
This is obtained by conservation of $E$ as the value of $r=r_M$ such that
\begin{equation}
E^2= A(r_M)
\end{equation}
and then one has directly the two branches of the motion as
\begin{equation}
\dot r^2= A^2\(1-\Frac[A/E^2]\)
\iff
t-t_M= \mp E \int^r_{r_M} \Frac[dr/{A}\sqrt{E^2-A}]
\end{equation}
This suggests using $r$ as a parameter along each branch. Notice that, since we have not used any expansion or approximation, the result we have obtained is analytic and exact. {Similarly}, any unbound motion (either ingoing or outgoing) can be fully described by the parameter $r$.
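For bound motions, the turning point condition $E^2 = A(r_M)$ with $A = 1 - 2m/r$ can be solved in closed form, $r_M = 2m/(1-E^2)$. A one-line sketch (names are ours):

```python
def r_max(E, m=1.0):
    """Maximal radius of a bound radial geodesic, from E^2 = A(r_M)
    with A(r) = 1 - 2m/r, i.e. r_M = 2m / (1 - E^2).
    Valid for bound motions, -1 < E <= 0 (so E^2 < 1)."""
    return 2 * m / (1 - E**2)
```

One can check directly that $A(r_M) = E^2$ at the returned radius, which is the conservation law used above.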
{Let us point out that the parameterisation of light rays in terms of $r$ is physically odd. One should prefer a parameterisation in terms of relative time $t$ instead, since, of course, proper time is not defined on light rays. However, such a parameterisation $r(t)$ relies on the inversion of equation (\ref{lightrays}) which of course formally exists although it is hard to obtain explicitly in practice. Thus here is where we take advantage of reparameterisation invariance and use a suitable parameter.}
Suppose now {we} fix two transmitters (e.g.~a bound clock $\chi_0$ and an unbound outgoing clock $\chi_1$).
{ We already know that in Minkowski space the user needs to be between the transmitters; otherwise the coordinates become degenerate, since a second satellite on the same side adds no information for determining the position of the user. Similarly, here in Schwarzschild, we place the user between the transmitters.}
We can trace back light rays exchanged by the clocks and eventually to the {user}, obtaining what is shown in Fig.~5.
{This is obtained as in Minkowski; we consider a moving point along the worldline of the transmitter and move it until we find a solution of the equation (\ref{lightrays}) of light rays. This determines an event $p_n$ on the clock worldline and a value of $r_n$ of the parameter for it.
As we discussed the parameter $r_n$ is not a physical time, hence we need to transform it to the corresponding proper time $s_n$ along the clock. Again, this would not be necessary if we used a clock parameterisation in terms of its proper time in the first place. However, such a parameterisation relies on the form of the inverse function of $\tau(r)$,
which is not explicitly known. Using $r$ as a parameter allows us to avoid formal inversions of functions and keep the model explicitly computable.}
In this way, we are able to find {\it exactly} the points $(p_0, p_1, p_2, p_3, \dots)$ at which the signals are emitted by the clocks and the corresponding clock readings
$(s_0, s_1, s_2, s_3, \dots)$.
In other words, we can, also in this case, exactly model the simulation phase in the 2d Schwarzschild case.
If the {user} does not assume the correct gravitational field and is instructed to find its position
anyway, the constraints turn out to be violated.
In principle, the {user} can say that the transmitters are not moving as they would be expected in Minkowski space.
Thus the system is {robust}.
However, in Schwarzschild spacetime, the positioning phase is much more difficult to perform.
We have a system of {(transcendental)} equations to be solved and, of course, we can check that the parameters used in simulation mode do verify them.
However, without actually solving the system, we cannot show, for example, that the solution is unique or that we are able to select the real {user's} position among the solutions.
Let us take the opportunity to explain a different strategy, which shows quite generally that one does not actually need to solve the system in the positioning phase,
provided we can perform the simulation phase efficiently, for generic enough parameters.
The idea is to transform the solution of a system
{
\begin{equation}
\begin{cases}
f_0(\alpha_0, \alpha_1, \dots, \alpha_n)=a_0\\
f_1(\alpha_0, \alpha_1, \dots, \alpha_n)=a_1\\
\dots\\
f_k(\alpha_0, \alpha_1, \dots, \alpha_n)=a_k\\
\end{cases}
\end{equation}
}where $(a_0, a_1, \dots, a_k)$ are the signals predicted in simulation mode {for the unknown model parameters $\alpha_i$},
into a search for the minima of an auxiliary function $\chi^2$
{
\begin{equation}
\chi^2 (\alpha_0, \alpha_1, \dots, \alpha_n) = \sum_{i=0}^k \(f_i(\alpha_0, \alpha_1, \dots, \alpha_n)-a_i\)^2
\end{equation}
}chosen so that the minima of $\chi^2$ are achieved exactly on the solutions of the system.
Quite a number of tools have been developed to find minima, since minimisation is routinely used for fits.
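As a toy sketch of this reduction (the functions $f_i$ below are made up for illustration and are not the paper's signal equations; we also use a generic local minimiser in place of MultiNest):

```python
import numpy as np
from scipy.optimize import minimize

# Toy "simulation phase": signals a_i = f_i(alpha) for true parameters alpha.
def f(alpha):
    a0, a1 = alpha
    return np.array([a0 + a1, a0 * a1, a0 - 2 * a1])  # three toy signals

true_alpha = np.array([3.0, 2.0])
a = f(true_alpha)                     # signals predicted in simulation mode

# Toy "positioning phase": recover alpha by minimising chi^2.
chi2 = lambda alpha: float(np.sum((f(alpha) - a) ** 2))
res = minimize(chi2, x0=[1.0, 1.0])   # local minimiser from a rough guess
```

The minima of $\chi^2$ are attained exactly on the solutions of the system, so a successful fit returns the simulated parameters; a global sampler such as MultiNest additionally reveals multiple modes when more than one solution exists.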
{We used MultiNest (see \cite{MN1,MN2, MN3}),
a powerful Bayesian inference tool developed for {a} highly efficient computation of the evidence by producing and analysing the posterior samples from the distributions with
(an unknown number of) multiple modes and pronounced degeneracies between the parameters.
Relying on the posterior distribution provided by the software, we are able to detect the presence of more than one solution and to calculate them.
Hence, to solve the system, we just need to be able to compute the functions
{$f_i(\alpha_0, \alpha_1, \dots, \alpha_n)$} for arbitrary values of the parameters
{$(\alpha_0, \alpha_1, \dots, \alpha_n)$}, which is what we learnt to do in the simulation phase; MultiNest is then able to explore the parameter
space looking for minima and best-fit values, which for us
are the best approximations of the solutions of the system.
After that, as we said previously, one can check that there are many modes (as it happens
when more solutions are present) by just analysing the posterior distribution.}
{\bf In the Schwarzschild case, consider two clocks with parameters $\chi_0= (t_0=40, {r}_0=4, v_0= -\Frac[3/8])$ and $\chi_1= (t_1=-40, {r}_1=4, v_1= \Frac[17/48])$, where $(t_0,r_0)$ and $(t_1,r_1)$ are the events at which the clocks are reset and $v_0$ and $v_1$ are their radial velocities at the reset.
The first transmitter is falling in ($v_0<0$), the second going out ($v_1>0$). Setting the central mass $m=1$ and a {user} at $c=(t_c=60, {r}_c=15)$, we can compute the first 16 signals:}
\begin{equation}
\begin{aligned}
& s_0=1.159156591
\quad
s_1=71.51766019 \\
&
s_2=-11.26675255
\quad
s_3=46.96574042\\
& s_4=-21.45717682
\quad s_5=41.97205382\\
&
s_6=-23.59547256
\quad
s_7=37.62526214\\
&
s_8=-25.46320683
\quad s_9=36.70632744\\
&
s_{10}= -25.85844196
\quad s_{11}= 35.90306640\\
&
s_{12}= -26.20396026
\quad s_{13}= 35.73306272\\
&
s_{14}= -26.27709250
\quad s_{15}= 35.58442173\\
\end{aligned}
\end{equation}
The corresponding posterior distributions generated by MultiNest are shown in the triangular plots in Figure 6.
{The first triangular plot used only the first 10 signals, the second used 16 signals.
In both cases MultiNest found one mode (solution) only, determining the unknown parameters correctly.}
We can see that the solution is unique and that we have a {localisation which agrees with the parameters used in simulation. Notice that here we are working with a minimal model setting (no transmitter redundancy, no perturbation, no clock errors, no clock drifting with respect to proper time) in a quite extreme gravitational field (definitely in a strong regime).
Although the localisation in this simple model is not very accurate, it serves as a proof-of-concept.
We should also remark that even discussing accuracy here would not be very meaningful since we see no way to compare accuracy in this model
(in 1+1 dimensions, in the strong regime, no real orbits around the central mass) with more realistic settings (in dimension 4, in the weak regime, with transmitters orbiting the central mass).
Of course, further investigation needs to be devoted to the accuracy estimate of localisation, as well as to the domain of the coordinates.
}
{Also}, we have not optimised anything here: the {user} can heuristically restrict the region to scan for solutions using its past positions, and one can tune precisions to improve localisation, or drop MultiNest for a simpler minimiser if one is not interested in posterior distributions.
{Let us finally remark that, in} this example, we can be nasty and not inform the {user} of the actual value of $m$ (leaving it free to be fitted), of the fact that the clocks were unbound, or that the clock $\chi_0$ was ingoing and the clock $\chi_1$ outgoing.
{One can show that} this information is still recovered from the fit result.
\onecolumngrid
\begin{figure}[htbp]
\centering
\includegraphics[width=12cm]{Schwarz.pdf}
\caption{\small\it Simulation phase for 2d Schwarzschild. {Empty dots are reset events.} }
\label{fig:Schwar}
\end{figure}
\section{Conclusions and perspectives}
We showed that one can define a rPS system without resorting to rulers and synchronisation at a distance
so that it is simple, instantaneous, discrete, self-calibrating, and {robust} in the sense defined above.
We considered cases in 1+1, 1+2 and 1+3 dimensions, flat or curved spacetimes.
This setting perfectly integrates with the EPS framework and axioms as well as with the framework introduced by Coll and collaborators.
As it happens in EPS, everything is produced by starting from the worldlines of particles and light rays.
In view of what we proposed above, we can also define coordinates (in addition to conformal structure and projective structure) which
are a better bridge with the conventions used, e.g., in physics.
While Coll and collaborators focus on positioning, in this paper we considered the problem from a different perspective.
From a foundational viewpoint, it is generally recognised that GR is the most fundamental layer of our description of classical phenomena.
Since in most experiments we need to use coordinates and positioning of events,
we should be able to do that {\it before} we start investigating the detailed properties of spacetime, of physical fields, and the evolution of the universe.
From this viewpoint rPSs have an important foundational relevance, since they are a prerequisite to experiments.
The more fundamental they are considered, the less detail can be used to design them.
In particular, being self-locating and self-calibrating are important characteristics just because in principle they do not require that we model
the motion of satellites from mission control.
This is obtained by considering the parameters governing clocks (their initial conditions {and frequencies}) as unknown parameters
instead of fixing them as control parameters.
As the unknown parameters grow in number, one clearly needs more data to solve for them.
The available data can be increased by different strategies.
We can add clocks (since the unknown parameters grow linearly with the number of clocks, while the exchanged signals grow quadratically) or we can go back in time considering and mirroring signals exchanged by the clocks.
However, adding the first signals exchanged by the clocks is insufficient to solve for all unknown parameters.
Coll and collaborators directly resorted to a continuous flow of data, which also simplifies the analysis.
This approach relies on the inversion of functions which is notoriously problematic in general.
We instead showed that one can go back in discrete steps, as described in Figures 2 and 5,
keeping the sequence of signals discrete and using a finite sequence to
solve for the unknown parameters, with the remaining signals serving as constraints to check the accuracy of the assumptions.
{Our design of rPS is based on discretisation of signals, which is not a new idea in the literature. For example, refs.~\cite{Cadez} and \cite{RovelliGPSb}, as well as reference 6 of ref.~\cite{Tarantola09}, use some form of discretisation.
Although our design has similarities with them, especially related to discretisation, it is also true that there are differences.
In general, we do not assume that the orbital parameters are known {\it a priori}, e.g.~by solving for the geodesic equations in a given metric. We instead use information on the differential timing of the satellites along their worldline.
We determine these parameters in a process which involves all the clocks of the satellite constellation, whereas the method in \cite{Cadez} requires a knowledge of the satellite orbits. In addition, in the approach of \cite{Cadez} users are considered as worldlines (hence they could use information accumulated in the past), whereas in our case they are simple events and currently use only the information available at the moment in which they receive the signals broadcast by satellites. Of course, in future works, we shall possibly use past information to optimize the calculation, e.g.~by restricting the parameter space of the satellite motion. This will most likely increase efficiency.
For a user in the more complicated case of a Schwarzschild spacetime, our procedure leads to the exact determination of $t(r)$, exploiting not only Hamiltonian methods but also a complete use of reparameterisation invariance, which, as far as we know, has not been fully employed before. }
One advantage in our design is that the positioning itself uses just the first few emission sets of signals.
For example, in a rPS around the Earth with astrometric parameters
similar to NAVSTAR-GPS, we saw that look back time corresponds to the $2^{nd}$ emission set
of signals. NAVSTAR-GPS satellites orbit at about $25\> Mm$, hence two satellites are at most $6\cdot 10^{7}\> m$ apart.
In this design, a signal that can be used for positioning can bounce twice on satellites and then be redirected to the {user}, for a total of $1.5\cdot 10^{8}\> m$, which at the speed of light is way less than a second. Thus we expect that a rPS on Earth could have a look back time of about a second.
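The arithmetic behind this estimate is elementary; using the total path length quoted above:

```python
c = 299_792_458.0       # speed of light, m/s
path = 1.5e8            # total signal path quoted in the text, in metres
look_back = path / c    # light travel time, roughly half a second
```

This confirms the claim that the look back time of such a rPS would be well under a second.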
{Of course, this is a rough estimate; to make it precise, one would first need to produce a realistic model of a rPS in dimension 4 under conditions similar to Earth's.
Accordingly, we are not claiming anything precise here. However, we remark that in some experimental conditions,
{although not in general (see \cite{Tarantola09}),}
there can be a split of the data into a part which is relevant for positioning and a part which is relevant for gravitometry. This splitting, if it exists, relies on assumptions, e.g., about the symmetries of the gravitational field and the associated conserved quantities, which is clearly impossible in general, although we expect they may be valid in restricted conditions.
For example, trivially, if we are very far away from the central mass then, by asymptotic flatness, we know that Minkowski is a good approximation, and it seems reasonable to expect that one can do positioning in that approximation, even though observing data long enough eventually one should be able to see curvature effects, anyway.
Of course, further investigation is needed in this direction to show whether our expectations are met and can be made precise.}
{In a self-calibrating rPS, signals which are older than the look back time} are used only for checking the assumptions, i.e.~for measuring whether the gravitational field agrees with what is assumed. Accordingly, one can also argue that perturbations of the gravitational field become relevant for positioning only when they are measurable within $1s$.
We believe this setting {may turn out to be} a good compromise between simplicity of design and effectiveness.
{Of course, there is a lot of work to be done in the direction of having a realistic rPS. As we said, one needs to account for the fact that real atomic clocks do not beat proper time but some (known) function of it. This does not seem to be an issue, since we parameterised everything in terms of $r$, including proper time.
Hence all that seems to be needed is applying such a time drift function to the signals.
However, this will affect the propagation of the errors and the final accuracy.
Similarly, of course, the Earth's gravitational field is not spherically symmetric. Sooner or later one will have to add multipolar corrections; see \cite{Gomboc}. Also for this reason, perturbations need to be introduced, for positioning or gravitometric purposes. We did not consider this issue here, so further study needs to be devoted to it.
Also, the Earth's atmosphere has not been discussed at all, while its effects have to be accounted for before a rPS can be proposed as a realistic positioning system.
}
Again from a foundational viewpoint it is interesting to note, and worth further investigation, that rPSs potentially can be used to measure the gravitational field.
Theoretically, they can be used in a gravitational theory, possibly different from standard GR, as a tool to produce observable quantities. Since they are well integrated
with the generally covariant framework, they are candidates to pinpoint differences between different theories on observational grounds,
namely to design experiments to compare gravitational theories.
{From a practical viewpoint, instead, one needs a way to estimate the precision of positioning, as well as to characterise the region where the positioning defines a well-defined coordinate system. The two issues are strongly related, since uncertainties in the coordinates spoil them in practice.
Of course, that can be done by simulation, although some insights could be obtained by using the theory of characteristics and caustics developed for geometric optics.
Further studies will be devoted to discuss precision in a more realistic situation.
}
\begin{figure*}[htbp]
\centering
\includegraphics[width=16cm]{GPS3plot1.pdf}
\caption{\small\it Triangular plots for two different simulations starting with the same parameters.
The mass $m$ is fixed to $m=1$.
The plot uses ten signals. }
\end{figure*}
\begin{figure*}[htbp]
\centering
\includegraphics[width=16cm]{GPS3plot2.pdf}
\caption{\small\it Triangular plots for two different simulations starting with the same parameters.
The mass $m$ is fixed to $m=1$.
The plot uses sixteen signals.}
\label{fig:TriangularPlots}
\end{figure*}
{\small \begin{acknowledgments}
This article is based upon work from COST Action (CA15117 CANTATA), supported by COST (European Cooperation in Science and Technology).
We also acknowledge the contribution of INFN (Iniziativa Specifica QGSKY), the local research project {\it Metodi Geometrici in Fisica Matematica e Applicazioni (2015)} of Dipartimento di Matematica of University of Torino (Italy). This paper is also supported by INdAM-GNFM.
S. Carloni was supported by the {\it Funda\c{c}\~{a}o para a Ci\^{e}ncia e Tecnologia} through project
IF/00250/2013 and partly funded through H2020 ERC Consolidator Grant ``Matter and strong-field gravity: New frontiers in Einstein's theory'', grant agreement no. MaGRaTh-64659.
L. Fatibene would like to acknowledge the hospitality and financial support of the Department of Applied Mathematics, University of Waterloo where part of this research was done.
This work was supported in part by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (R. G. McLenaghan).
\end{acknowledgments}
}
\vskip10pt
{
\section*{Appendix A: Minkowski in dimension $m$ and lookback time}
In a Minkowski spacetime of dimension $m$ we consider $m$ proper clocks $(\chi_0, \dots, \chi_{m-1})$.
We fix Poincar\'e invariance setting the first clock to be $\chi_0=(s, 0, \dots, 0)$.
We are left with a spatial residual invariance parametrised by $O(m-1)$ which we shall need to fix the gauge.
Each clock receives $(m-1)$ signals from the other clocks and the graph analogous to that in Figure 4 becomes of order $m$.
In fact, in the graph representing the messages exchanged in dimension $m$, each node will receive $m-1$ edges, each representing an incoming message (and an equation to be satisfied)
and emit one edge representing the message broadcast by that clock.
The node representing the {user} is exceptional, since it receives $m$ signals from the clocks and it does not emit, hence appearing as the root of the graph.
Thus the graph accounts for
\begin{equation}
\sigma:=m+ m(m-1) + m(m-1)^2= m(m^2-m+1)
\end{equation}
signals. To each of these signals there is an associated equation.
The signals exchanged between $\chi_0$ and $\chi_i$ are in fact $2m$ (for any $i=1,\dots,m-1$) which, together with the constraint $\alpha_i^2- \( \zeta_{i1}^2+ \zeta_{i2}^2 +\dots+ \zeta_{i (m-1)}^2\) =1$,
partially fixes the parameters of the clock $\chi_i$.
Then one has $2m$ equations for any pair $(\chi_i, \chi_j)$ to freeze the relative parameters.
Thus one has
\begin{equation}
\begin{aligned}
&m+ 2m \binomial{m}{2}= m+ 2m \Frac[m(m-1)/2]=
m(m^2- m+1)\equiv \sigma
\end{aligned}
\end{equation}
signals as discussed above.
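The agreement of the two countings can be checked mechanically (a small sketch; the function names are ours):

```python
from math import comb

def signal_count(m):
    """Signals in the order-m graph: m + m(m-1) + m(m-1)^2."""
    return m + m * (m - 1) + m * (m - 1) ** 2

def equation_count(m):
    """m normalisation constraints plus 2m equations per clock pair."""
    return m + 2 * m * comb(m, 2)

# Both countings agree and equal sigma = m(m^2 - m + 1) for every m:
for m in range(2, 10):
    assert signal_count(m) == equation_count(m) == m * (m * m - m + 1)
```

For instance, in dimension $m=4$ both formulas give $\sigma = 4\cdot 13 = 52$ signals.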
The {user} position can be computed once the parameters of the clocks have been determined.
Accordingly, we see that the actual positioning is determined by the emission sets up to
the $2^{nd}$ one. Earlier signals are then predicted and can be used to check that the assumptions (the kind of metric assumed, that the clocks are freely falling, and so on) are accurate.
Whatever transient effects there may be before it do not affect positioning; they are merely revealed.
For that reason, the rPS has a characteristic time, namely the time at which $2^{nd}$ emission set signals begins (or more generally signals which are actually used for positioning), which is called {\it look back time}.
Signals used for positioning come after the look back time, hence whatever happens before it is irrelevant for positioning. In particular, whatever hypotheses one makes (e.g.~that the gravitational field is static, spherically symmetric, or whatever else) must be verified after the look back time. If the gravitational field is slowly changing and approximately constant after the look back time, then a static metric is a good approximation.
The signals from before the look back time are used for checking the hypotheses made about the clocks and gravity. If transient effects happen there, the system fails to validate them. In a sense, rPSs are detectors of the gravitational field from before the look back time.
}
{
\section*{Appendix B: The evolution generator}
\def\partial{\partial}
The evolution generator we defined above already appeared in
Synge\cite{WorldFunction} (who acknowledges an early appearance in Ruse\cite{WorldFunction2, WorldFunction3}, as well as in Synge\cite{WorldFunction4} himself)
where it is called the {\it world function} and denoted by $\Omega(P, P')$.
The world function is defined as the arc-length of geodesics, i.e.~the integral
$$
\Omega(P, P')=\int_P^{P'} \sqrt{|g_{\mu\nu}u^\mu u^\nu|} \> ds
$$
computed along the geodesics joining $P$ and $P'$.
Since its introduction, the world function has been used by different authors (see \cite{WorldFunction5, WorldFunction6}) essentially as a generating function of geometry,
showing that the main objects of Riemannian geometry can be obtained and related to the derivatives of the world function.
However, in \cite{WorldFunction5, WorldFunction6} not much is said about the actual origin of such function, its relation with the physics of test particles, and why it should be expected to encode most of the geometry of spacetime.
Moreover, $\Omega(P, P')$ is treated as an implicit object, with the exception of Minkowski spacetime for which it can be easily given in explicit form.
To the best of our knowledge, the symplectic origin of the world function has been acknowledged much later (see Benenti\cite{Benenti})
when optics in Euclidean spaces has been studied as an application of generating families. These families turn out to be the Euclidean version of the world function.
Without going into too much detail, we can say that the evolution generator is essentially a generating function for a flow of canonical transformations, which can be used to represent the evolution of a Hamiltonian system. In other words, the evolution of a Hamiltonian system can always be represented {as} a flow of canonical transformations, which is driven by a globally Hamiltonian vector field related to the generating function. This point of view can be applied to the simple {case of} the free particle in a space (or spacetime) as well as to any other Hamiltonian system.
Although this setting does not add much in practice, it has two advantages:
first, we know that the generating function of evolution is related to complete integrals of the Hamilton-Jacobi equation
for the existence of which we have a solid theory and many results in literature.
For example, one can define a functional as the action functional computed along a solution $\hat \gamma(t)=(q(t), \dot q(t))$ of the equations of motion which joins the positions $(t_0,q_0)$ and $(t, q)$, namely $S(t, q, t_0, q_0)= \int_{(t_0, q_0)}^{(t, q)} L|_{\hat \gamma}\> dt$.
It can be shown quite directly and generally, that when it is varied with respect to the endpoint, i.e.~along a deformed solution $\hat q(t)\simeq q(t) + \delta q -\dot q \delta t $, one has
\begin{equation}
\delta S = p(\delta q - \dot q \delta t) + L \delta t
= p\delta q - H\delta t
\end{equation}
which shows that $p= \frac{\partial S}{\partial q}$ and that $S(t, q, t_0, q_0)$ is a solution of Hamilton-Jacobi equation
\begin{equation}
\frac{\partial S}{\partial t} + H =0
\end{equation}
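As a standard worked example (not taken from the text; here $m$ denotes the particle mass, not the central mass), the free particle with $H = p^2/(2m)$ admits the explicit action-along-solutions

```latex
\begin{equation}
S(t, q; t_0, q_0) = \frac{m\,(q-q_0)^2}{2\,(t-t_0)},
\qquad
p = \frac{\partial S}{\partial q} = \frac{m\,(q-q_0)}{t-t_0},
\qquad
\frac{\partial S}{\partial t}
= -\frac{m\,(q-q_0)^2}{2\,(t-t_0)^2}
= -\frac{p^2}{2m} = -H
\end{equation}
```

so the Hamilton-Jacobi equation $\partial_t S + H = 0$ is satisfied identically, and $S$ generates the free evolution between $(t_0,q_0)$ and $(t,q)$.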
The second advantage is that this approach shows that the geodesic problem is a special case of a much more general problem: given a Hamiltonian systems, find solutions going from one position $P$ to another $P'$, which is a kind of classical {\it propagator}.
Hence, on one side, Benenti's generator of evolution is a complete integral of the Hamilton-Jacobi equation corresponding to {the} action integral evaluated along solutions; on the other side, when written in terms of the invariant Lagrangian (\ref{Lag}), it corresponds exactly to the functional length of geodesics, i.e.~Synge's world function.
This last property is related to the classical problem in control theory for Hamiltonian systems. The evolution generator, once regarded as a generating function,
depends on the initial and final positions $(q, q')$, and its derivatives determine which initial momentum $p$ is needed to actually arrive at $q'$
and with which momentum $p'$ one arrives there.
Of course, a complete integral of Hamilton-Jacobi equation depends on a number of first integrals (corresponding to Morse family parameters in Benenti) which we eliminate by writing them as a function $E(t, r; t', r')$ of positions.
This is done formally by solving equation (\ref{myeq}), in practice by taking the limit $E\rightarrow -\infty$ to obtain light rays.
Let us remark that using the Lagrangian (\ref{Lag}), which is invariant with respect to any change of parameterisations, allows us to deal at once with test particles (which have a well define proper time) and light rays (for which proper time will be degenerate). Notice that the fact that light rays can be defined as limit of test particles is in fact one of the axioms (the compatibility axiom) in EPS framework\cite{EPS}. In this sense the EPS approach fits naturally with the symplectic and world function frameworks.
}
\section*{Bibliography}
\twocolumngrid
\section{Introduction}
\label{sec:intro}
Consider a connected oriented surface $S$ of negative Euler characteristic.
A hyperbolic metric on $S$ is a Riemannian metric of constant curvature $-1$.
The Teichm\"uller space $\teichmullerspace$ of $S$ is the space of complete
finite-area hyperbolic metrics on $S$ up to isotopy.
In~\cite{thurston_minimal}, Thurston defined an asymmetric metric on
Teichm\"uller space:
\begin{align*}
\stretchdist(x,y) := \log \inf_{\phi\approx\text{Id}} \sup_{p\neq q}
\frac{d_y(\phi(p),\phi(q))}{d_x(p,q)},
\qquad\text{for $x,y\in\teichmullerspace$}.
\end{align*}
In other words, the distance from $x$ to $y$ is the logarithm of the smallest
Lipschitz constant over all homeomorphisms from $x$ to $y$ that are isotopic
to the identity.
Thurston showed that this is indeed a metric, although a non-symmetric one.
In the same paper, he showed that this distance can be written
\begin{align*}
\stretchdist(x,y)
= \log \sup_{\curve\in\curves} \frac{\hlength_y(\alpha)}{\hlength_x(\alpha)},
\end{align*}
where $\curves$ is the set of isotopy classes of non-peripheral simple
closed curves on $S$, and $\hlength_x(\alpha)$ denotes the shortest length
in the metric $x$ of a curve isotopic to $\alpha$.
Thurston's Lipschitz metric\index{Thurston's Lipschitz metric}\index{Thurston's metric}\index{Lipschitz metric}
has not been as intensively studied as the
Teichm\"uller metric or the Weil--Petersson metric.
The literature includes~\cite{papadopoulos_extension, theret_thesis,
papadopoulos_theret_topology, theret_negative, papadopoulos_theret_polyhedral,
theret_elementary,choi_rafi_comparison, theret_divergence};
most of it is very recent.
In this chapter, we determine the horofunction boundary of the Lipschitz metric.
The horofunction boundary of a metric space was introduced by
Gromov in~\cite{gromov:hyperbolicmanifolds}, and
has applications in studying isometry groups~\cite{walsh_lemmens_polyhedral},
random walks~\cite{karlsson_ledrappier_laws},
quantum metric spaces~\cite{rieffel_group},
and is the right setting for Patterson--Sullivan
measures~\cite{burger_mozes_commensurators}.
We show that the compactification of Teichm\"uller space by horofunctions
is isomorphic to the Thurston compactification, and we give an explicit
expression for the horofunctions.
\begin{thmhoro}
A sequence $x_n$ in $\teichmullerspace$ converges in the Thurston
compactification\index{Thurston compactification}\index{compactification!Thurston}
if and only if it converges in the horofunction compactification.
If the limit in the Thurston compactification is the projective class
$[\mu]\in\projmeasuredlams$, then the limiting horofunction is
\begin{align*}
\Psi_\mu(x) =
\log\left(\sup_{\eta\in\measuredlams}
\frac{i(\mu,\eta)}{\hlength_x(\eta)}
\middle/
\sup_{\eta\in\measuredlams}
\frac{i(\mu,\eta)}{\hlength_b(\eta)}
\right).
\end{align*}
\end{thmhoro}
Here, $b$ is a base-point in $\teichmullerspace$, and
$i(\cdot,\cdot)$ denotes the geometric intersection number.
Recall that the latter is defined for pairs of curve classes
$(\alpha,\beta)\in\curves\times\curves$ to be the minimum number of
transverse intersection points of curves $\alpha'$ and $\beta'$ with
$\alpha'\in\alpha$ and $\beta'\in\beta$. This minimum is realised if
$\alpha'$ and $\beta'$ are closed geodesics. The geometric intersection number
extends to a continuous symmetric function on
$\measuredlams\times\measuredlams$.
It is known that geodesics always converge to a point in the horofunction
boundary; see Section~\ref{horoboundary}.
Hence, an immediate consequence of the above theorem is the following.
\begin{corollary}
Every geodesic of Thurston's Lipschitz metric converges in the forward direction
to a point in the Thurston boundary.\index{Thurston boundary}\index{boundary!Thurston}
\end{corollary}
This generalises a result of Papadopoulos~\cite{papadopoulos_extension},
which states that every member of a special class of geodesics,
the \emph{stretch lines}, converges in the forward direction to a point
in the Thurston boundary.
The action of the isometry group of a metric space extends continuously
to an action by homeomorphisms on the horofunction boundary.
Thus, the horofunction boundary is useful for studying groups
of isometries of metric spaces.
One of the tools it provides is the \emph{detour cost}, which is a kind of
metric on the boundary. We calculate this in Section~\ref{sec:thurston_detour}.
Denote by $\modulargroup$ the extended mapping class
group\index{extended mapping class group}\index{modular group}
of $S$, that is, the group of isotopy classes of homeomorphisms of $S$.
It is easy to see that $\modulargroup$ acts by isometries on
$\teichmullerspace$ with the Lipschitz metric.
We use the detour cost to prove the following.
\begin{thmiso}
If $S$ is not a sphere with four or fewer punctures,
nor a torus with two or fewer punctures, then every isometry of
$\teichmullerspace$ with Thurston's Lipschitz metric is an element
of the extended mapping class group $\modulargroup$.
\end{thmiso}
This answers a question in~\cite[\S4]{papadopoulos_theret_handbook}.
It is well known that the subgroup of elements of $\modulargroup$ acting
trivially on $\teichmullerspace$ has order two if $S$ is the closed surface
of genus two, and is trivial in the other cases considered here.
Theorem~\ref{thm:isometries} is an analogue of Royden's theorem\index{Royden's theorem}
concerning the Teichm\"uller metric, which was proved by
Royden~\cite{royden_isometries} in the case of compact surfaces and analytic
automorphisms of $\teichmullerspace$, and extended to the general case by
Earle and Kra~\cite{earle_kra_isometries}.
Our proof is inspired by Ivanov's proof of Royden's theorem,
which was global and geometric in nature, as opposed to the original,
which was local and analytic.
The following theorem shows that distinct surfaces give rise to distinct
Teichm\"uller spaces, except possibly in certain cases.
Denote by $S_{g,n}$ a surface of genus $g$ with $n$ punctures.
\begin{thmdistinct}
Let $S_{g,n}$ and $S_{g',n'}$ be surfaces of negative Euler characteristic.
Assume $\{(g,n), (g',n')\}$ is different from each of the three sets
\begin{align*}
\{(1,1), (0,4)\}, \quad \{(1,2), (0,5)\}, \quad\text{and}\quad
\{(2,0), (0,6)\}.
\end{align*}
If $(g,n)$ and $(g',n')$ are distinct, then the Teichm\"uller spaces
$\teichmullerspc(S_{g,n})$ and $\teichmullerspc(S_{g',n'})$ with their
respective Lipschitz metrics are not isometric.
\end{thmdistinct}
This is an analogue of a theorem of Patterson~\cite{patterson_distinct}.
\index{Patterson's Theorem}
In the case of the Teichm\"uller metric it is known that one has the following
three isometric equivalences:
$\teichmullerspc(S_{1,1})\isometric \teichmullerspc(S_{0,4})$, $\teichmullerspc(S_{1,2})\isometric \teichmullerspc(S_{0,5})$, and
$\teichmullerspc(S_{2,0})\isometric \teichmullerspc(S_{0,6})$.
It would be interesting to know if these equivalences still hold when
one takes instead the Lipschitz metric.
It would also be interesting to work out the horofunction boundary of
the\index{reversed Lipschitz metric}\index{Lipschitz metric!reversed}
reversed Lipschitz metric,
that is, the metric $L^*(x,y):=L(y,x)$.
Since $L$ is not symmetric, $L^*$ differs from $L$, and one would expect
their horofunction boundaries to also differ.
The contents of this chapter are as follows. In the next section, we recall the
horofunction boundary of a metric space. In Section~\ref{sec:thurston_horo},
we show that the horofunction boundary of the Lipschitz metric is the Thurston
boundary. In Section~\ref{sec:busemann_points}, we recall the definition
of stretch lines and prove that all horofunctions of the Lipschitz metric
are Busemann points. In Section~\ref{sec:detour}, we recall some results
about the detour cost, which we calculate for the Lipschitz metric
in Section~\ref{sec:thurston_detour}. Section~\ref{sec:isometries} is
devoted to the proofs of Theorems~\ref{thm:isometries} and~\ref{thm:distinct}.
\paragraph{Acknowledgments:}
We thank Weixu Su for some comments on an early draft and Micka\"el Crampon
for a detailed reading.
\section{The horofunction boundary}
\label{horoboundary}
Let $(X,\dist)$ be a possibly non-symmetric metric space,
\index{non-symmetric metric space}
in other words, $\dist$ has all the properties of a metric except that
it is not necessarily symmetric.
We endow $X$ with the topology induced by the symmetrised metric
\index{symmetrised metric}
$\symdist(x,y):=\dist(x,y)+\dist(y,x)$.
Note that for Thurston's Lipschitz metric, this topology is just the usual one
on $\teichmullerspace$; see~\cite{papadopoulos_theret_topology}.
The horofunction boundary of $(X,\dist)$ is defined as follows.\index{horofunction boundary}\index{boundary!horofunction}
One assigns to each point $z\in X$ the function
$\distfn_z:X\to \mathbb{R}$,
\begin{equation*}
\distfn_z(x) := \dist(x,z)-\dist(b,z),
\end{equation*}
where $b$ is some base-point.
Consider the map $\distfn:X\to C(X),\, z\mapsto \distfn_z$ from
$X$ into $C(X)$, the space of continuous real-valued functions
on $X$ endowed with the topology of uniform convergence on bounded sets
of $\symdist$.
\begin{proposition}[\cite{ballmann_lectures}]
\label{prop:injective_continous}
The map $\distfn$ is injective and continuous.
\end{proposition}
\begin{proof}
The triangle inequality implies that
$|\distfn_x(w) - \distfn_y(w)| \le \dist(y,x) + \dist(x,y) = \symdist(x,y)$,
for all $w$, $x$, and $y$ in $X$. The continuity of $\distfn$ follows.
Let $x$ and $y$ be distinct points in $X$, and relabel them such that
$\dist(b,x)\ge\dist(b,y)$. We have
\begin{align*}
\distfn_y(x) - \distfn_x(x)
&= \dist(x,y) - \dist(b,y) - \dist(x,x) + \dist(b,x) \\
&\ge \dist(x,y),
\end{align*}
which shows that $\distfn_x$ and $\distfn_y$ are distinct.
\end{proof}
The horofunction boundary is defined to be
\begin{align*}
X(\infty):=\closure\{\distfn_z\mid z\in X\}\backslash\{\distfn_z\mid z\in X\},
\end{align*}
where $\closure{}$ denotes the closure of a set.
The elements of $X(\infty)$ are called horofunctions.\index{horofunction}
This definition first appeared, for the case of symmetric metrics,
in~\cite{gromov:hyperbolicmanifolds}.
For more information, see~\cite{ballmann_lectures}, \cite{rieffel_group},
and~\cite{AGW-m}.
One may check that if one changes to an alternative base-point $b'$, then the
new function assigned to a point $z$ is related to the old by
$\distfn'_z(\cdot) = \distfn_z(\cdot) - \distfn_z(b')$.
It follows that the horofunction boundary obtained using $b'$ is
homeomorphic to that obtained using $b$, and the horofunctions are related by
$\xi'(\cdot) = \xi(\cdot) - \xi(b')$.
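Indeed, writing $\distfn'_z$ for the function assigned to $z$ using the
base-point $b'$, we may verify the relation directly from the definitions:
\begin{align*}
\distfn'_z(x)
= \dist(x,z)-\dist(b',z)
= \big(\dist(x,z)-\dist(b,z)\big) - \big(\dist(b',z)-\dist(b,z)\big)
= \distfn_z(x) - \distfn_z(b').
\end{align*}
The corresponding relation for horofunctions follows by taking limits.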
Note that, if the metric $\symdist$ is proper,\index{proper metric space}
meaning that closed balls are compact, then uniform convergence on bounded sets
is equivalent to uniform convergence on compact sets.
The functions $\{\distfn_z\mid z\in X\}$ satisfy
$\distfn_z(x) \le \dist(x,y) + \distfn_z(y)$ for all $x$ and $y$.
Hence, for all horofunctions $\eta$,
\begin{align}
\label{eqn:ideal_tri_ineq}
\eta(x) \le \dist(x,y) + \eta(y),
\qquad\text{for all $x$ and $y$ in $X$}.
\end{align}
It follows that all elements of $\closure\{\distfn_z\mid z\in X\}$
are $1$-Lipschitz with respect to the metric $\symdist$.
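To see this, note that any function $\eta$ satisfying the inequality above
also satisfies it with $x$ and $y$ interchanged, and hence
\begin{align*}
|\eta(x) - \eta(y)|
\le \max\big(\dist(x,y), \dist(y,x)\big)
\le \symdist(x,y),
\qquad\text{for all $x$ and $y$ in $X$}.
\end{align*}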
We conclude that, for functions in this set, uniform convergence on bounded
sets is equivalent to pointwise convergence.
Moreover, if $\symdist$ is proper,
then the set $\closure\{\distfn_z\mid z\in X\}$
is compact by the Ascoli--Arzel\`a Theorem,\index{Ascoli-Arzela Theorem@Ascoli--Arzel\`a Theorem}
and we call it the \emph{horofunction compactification}.
The following assumptions will be useful.
They hold for the Lipschitz metric; see~\cite{papadopoulos_theret_topology}.
\begin{assumption}
\label{ass:proper}
The metric $\symdist$ is proper.
\end{assumption}
A geodesic\index{geodesic}
in a possibly non-symmetric metric space $(X,\dist)$ is a map
$\gamma$ from a closed interval of $\mathbb{R}$ to $X$ such that
\begin{align*}
\dist(\gamma(s),\gamma(t)) = t-s,
\end{align*}
for all $s$ and $t$ in the domain, with $s<t$.
\begin{assumption}
\label{ass:geodesic}
Between any pair of points in $X$, there exists a geodesic with respect to $\dist$.
\end{assumption}
\begin{assumption}
For any point $x$ and sequence $x_n$ in $X$, we have $\dist(x_n,x)\to 0$
if and only if $\dist(x,x_n) \to 0$.
\label{ass:topology}
\end{assumption}
\begin{proposition}[\cite{ballmann_lectures}]
\label{prop:embedding}
Assume~\ref{ass:proper}, \ref{ass:geodesic}, and~\ref{ass:topology} hold.
Then, $\distfn$ is an embedding of $X$ into $C(X)$; in other words, it is a
homeomorphism onto its image.
\end{proposition}
\begin{proof}
That $\distfn$ is injective and continuous was proved in
Proposition~\ref{prop:injective_continous}.
Let $z_n$ be a sequence in $X$ escaping to infinity, that is,
eventually leaving and never returning to every compact set.
We wish to show that no subsequence of
$\distfn_{z_n}$ converges to a function $\distfn_{y}$ with $y\in X$.
Without loss of generality, assume that $\distfn_{z_n}$ converges
to $\xi\in\closure\{\distfn_z\mid z\in X\}$.
Since $\symdist$ is proper, $\symdist(y,z_n)$ must converge to
infinity. For each $n\in\mathbb{N}$, let $\gamma_n$ be a geodesic segment with
respect to $\dist$ from $y$ to $z_n$. Choose $r>\dist(b,y)+\xi(y)$.
It follows from assumption~\ref{ass:topology} that the
function $t\mapsto\symdist(y,\gamma_n(t))$ is continuous for each $n\in\mathbb{N}$.
Note that this function is defined on a closed interval and takes the
value $0$ at one endpoint and $\symdist(y,z_n)$ at the other.
Therefore, for $n$ large enough, we may find $t_n\in\mathbb{R}_+$ such that
$\symdist(y,x_n)=r$, where $x_n:=\gamma_n(t_n)$.
Since $\symdist$ is proper, and all the $x_n$ lie in a closed ball of radius
$r$, we may assume, by taking a subsequence if necessary, that $x_n$ converges
to some point $x\in X$.
Observe that
$\distfn_{z_n}(x_n) = \distfn_{z_n}(y) - \dist(y, x_n)$,
for all $n\in\mathbb{N}$.
Since the $\distfn_{z_n}$ are $1$-Lipschitz with respect to $\symdist$, we may
take limits and get $\xi(x) = \xi(y) - \dist(y,x)$.
On the other hand, $\distfn_{y}(x) = \dist(x,y)-\dist(b,y)$.
So $\distfn_{y}(x) - \xi(x) = r - \dist(b,y) - \xi(y) > 0$.
This shows that $\xi$ is distinct from $\distfn_{y}$.
Now let $p_n$ be a sequence in $X$ such that $\distfn_{p_n}$ converges
to $\distfn_{p}$ in $\distfn(X)$. From what we have just shown,
$p_n$ cannot have any subsequence escaping to infinity.
Therefore $p_n$ is bounded in the $\symdist$ metric.
It then follows from the compactness of closed balls and the continuity and
injectivity of $\distfn$ that $p_n$ converges to $p$.
\end{proof}
We henceforth identify $X$ with its image.
\begin{proposition}
\label{prop:to_infinity}
Assume~\ref{ass:proper}, \ref{ass:geodesic}, and~\ref{ass:topology} hold.
Let $x_n$ be a sequence in $X$ converging to a horofunction.
Then, only finitely many points of $x_n$ lie in any closed ball of
$\symdist$.
\end{proposition}
\begin{proof}
Suppose $x_n$ is a sequence in $X$ such that some subsequence of
$\symdist(b,x_n)$ is bounded.
By taking a further subsequence if necessary, we may assume that
$x_n$ converges to a point $x$ in $X$. By Proposition~\ref{prop:embedding},
$\distfn_{x_n}$ converges to $\distfn_{x}$, and so $x_n$ does not converge
to a horofunction.
\end{proof}
A path $\gamma\colon\mathbb{R}_+\to X$
is called an \emph{almost-geodesic} if, for each $\epsilon>0$,\index{almost-geodesic}
\begin{equation*}
|\dist(\gamma(0),\gamma(s))+\dist(\gamma(s),\gamma(t))-t|<\epsilon,
\text{\quad for $s$ and $t$ large enough, with $s\le t$}.
\end{equation*}
Rieffel~\cite{rieffel_group} proved that
every almost-geodesic converges to a limit in $X(\infty)$.
A horofunction is called a \emph{Busemann point}\index{Busemann point}
if there exists an almost-geodesic converging to it.
We denote by $X_B(\infty)$ the set of all Busemann points in $X(\infty)$.
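Observe that every geodesic $\gamma\colon\mathbb{R}_+\to X$ is an
almost-geodesic, since, for all $s$ and $t$ with $s\le t$,
\begin{align*}
\dist(\gamma(0),\gamma(s))+\dist(\gamma(s),\gamma(t))-t = s + (t-s) - t = 0.
\end{align*}
In particular, every geodesic ray converges to a Busemann point, which
establishes the fact, used in the introduction, that geodesics always
converge to a point in the horofunction boundary.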
Isometries between possibly non-symmetric metric spaces extend continuously
to homeomorphisms between the horofunction compactifications.
\begin{proposition}
\label{prop:transform_horo}
Assume that $\iso$ is an isometry from one possibly non-symmetric metric
space $(X,d)$ to another $(X',d')$, with base-points $b$ and $b'$,
respectively. Then, for every horofunction $\xi$ and point $x\in X$,
\begin{align*}
\iso\cdot\xi(x) = \xi(\iso^{-1}(x)) - \xi(\iso^{-1}(b')).
\end{align*}
\end{proposition}
\begin{proof}
Let $x_n$ be a sequence in $X$ converging to $\xi$.
We have
\begin{align*}
\iso\cdot\xi(x)
&= \lim_{n\to\infty} d'(x,\iso(x_n)) - d'(b',\iso(x_n)) \\
&= \lim_{n\to\infty} \Big(d(\iso^{-1}(x),x_n) - d(b,x_n)\Big)
+ \Big(d(b,x_n) - d(\iso^{-1}(b'),x_n)\Big) \\
&= \xi(\iso^{-1}(x)) - \xi(\iso^{-1}(b')).
\qedhere
\end{align*}
\end{proof}
\section{The horoboundary of Thurston's Lipschitz metric}
\label{sec:thurston_horo}
We start with a general lemma relating joint continuity\index{joint continuity}
to uniform convergence on compact sets.
\begin{lemma}
\label{lem:uniform_inter}
Let $X$ and $Y$ be two topological spaces
and let $i:X\times Y\to \mathbb{R}$ be a continuous function.
Let $(x_n)_n$ be a sequence in $X$ converging to $x\in X$.
Then, $i(x_n,\cdot)$ converges to
$i(x,\cdot)$ uniformly on compact sets of $Y$.
\end{lemma}
\begin{proof}
Take any $\epsilon>0$, and let $K$ be a compact subset of $Y$.
The function $i(\cdot,\cdot)$ is continuous, and so,
for any $y\in Y$, there exists an open neighbourhood
$U_y\subset X$ of $x$ and an open neighbourhood $V_y\subset Y$ of $y$ such that
$|i(x',y') - i(x,y)| < \epsilon$ for all $x'\in U_y$ and $y'\in V_y$.
Since $K$ is covered by $\{V_y \mid y\in K\}$, there exists a finite
sub-covering $\{V_{y_1},\dots,V_{y_m}\}$. Define $U:=\bigcap_{j=1}^m U_{y_j}$,
an open neighbourhood of $x$. Any $y\in K$ lies in $V_{y_j}$ for some $j$,
and so, for all $x'\in U$, both $i(x',y)$ and $i(x,y)$ lie within $\epsilon$
of $i(x,y_j)$. Hence $|i(x',y) - i(x,y)| < 2\epsilon$ for all $y\in K$ and
$x'\in U$, and, since $\epsilon$ was arbitrary, the conclusion follows.
\end{proof}
We use Bonahon's theory of geodesic currents~\cite{bonahon_currents}.\index{geodesic current}
The space of geodesic currents is a completion of the space of
homotopy classes of curves on $S$, not necessarily simple, equipped with
positive weights.
More formally, let $G$ be the space of geodesics on the universal cover\index{universal cover}
of $S$, endowed with the compact-open topology.
A geodesic current is a positive measure on $G$ that is invariant under the
action of the fundamental group of $S$.
It is convenient to work with the space of geodesic currents
because both Teichm\"uller space $\teichmullerspace$ and the space
$\measuredlams$ of compactly supported measured geodesic
laminations are embedded into it in a very natural way.\index{measured laminations}
Furthermore, there is a continuous symmetric bilinear form $i(x,y)$
on this space that restricts to the usual intersection form when
$x$ and $y$ are in $\measuredlams$, and takes the value $i(x,y)=\hlength_x(y)$
when $x\in\teichmullerspace$ and $y\in\measuredlams$.
We denote by $\projmeasuredlams$ the projective space of $\measuredlams$,\index{projective measured laminations}
that is, the quotient of $\measuredlams$ by the multiplicative action
of the positive real numbers.
We use $[\mu]$ to denote the equivalence class of $\mu\in\measuredlams$
in $\projmeasuredlams$.
We may identify $\projmeasuredlams$ with the cross-section
$\unitlams:=\{\mu\in\measuredlams \mid \hlength_b(\mu)=1\}$.
We have the following two formulas for the Lipschitz metric.
\begin{align}
\label{eqn:dist_formula}
\stretchdist(x,y)
= \log \sup_{\eta\in\measuredlams}
\frac{\hlength_y(\eta)}{\hlength_x(\eta)}
= \log \sup_{\eta\in\unitlams}
\frac{\hlength_y(\eta)}{\hlength_x(\eta)}.
\end{align}
The second is very useful because the supremum is taken over a compact set,
and is therefore attained.
Recall that we have chosen a base-point $b$ in $\teichmullerspace$.
Define, for any geodesic current $x$,
\begin{align*}
\lfactor(x) &
:= \sup_{\eta\in\measuredlams}\frac{\bonint(x,\eta)}{\hlength_b(\eta)}
= \sup_{\eta\in\unitlams}\frac{\bonint(x,\eta)}{\hlength_b(\eta)}
\end{align*}
and
\begin{align*}
\lengthfunc_x & :\measuredlams \to \mathbb{R}_+:
\mu \mapsto \frac{\bonint(x,\mu)}{\lfactor(x)}.
\end{align*}
Let $\thurstoncompact:=\teichmullerspace\cup\projmeasuredlams$
be the Thurston compactification of Teichm\"uller space.\index{Thurston compactification}
Identify $\projmeasuredlams$ with $\unitlams$, and consider a sequence
$x_n$ in $\thurstoncompact$. Then, $x_n$ converges to a point $x$
in the Thurston
compactification if and only if there is a sequence $\lambda_n$ of positive
real numbers such that $\lambda_n x_n$ converges to $x$ as a geodesic current.
One can take $\lambda_n$ to be identically $1$ if $x\in\teichmullerspace$.
\begin{proposition}
\label{prop:uniform_lengths}
A sequence $(x_n)_n$ in $\thurstoncompact$ converges to a point
$x\in\thurstoncompact$ if and only if $\lengthfunc_{x_n}$ converges to
$\lengthfunc_x$ uniformly on compact sets of $\measuredlams$.
\end{proposition}
\begin{proof}
Assume that $x_n$ converges in the Thurston compactification
to a point $x\in\thurstoncompact$.
This implies that, for some sequence $(\lambda_n)_n$ of positive real
numbers, $\lambda_n x_n$ converges to $x$ as a geodesic current.
We now apply Lemma~\ref{lem:uniform_inter} to Bonahon's intersection function
to get that $\bonint(\lambda_n x_n,\cdot)$ converges uniformly on compact sets
of $\measuredlams$ to $\bonint(x,\cdot)$.
Therefore, since $\unitlams$ is compact,
$\lfactor(\lambda_n x_n)$ converges to $\lfactor(x)$,
which is a positive real number. So,
$\lengthfunc_{x_n}(\cdot)=\bonint(\lambda_n x_n,\cdot)/\lfactor(\lambda_n x_n)$
converges to $\lengthfunc_x(\cdot)$ uniformly on compact sets of
$\measuredlams$.
Now assume that $\lengthfunc_{x_n}$ converges to
$\lengthfunc_x$ uniformly on compact sets of $\measuredlams$.
Let $y_n$ be a subsequence of $x_n$ converging in $\thurstoncompact$
to a point $y$. As before, we have that
$\lengthfunc_{y_n}$ converges to $\lengthfunc_y$
uniformly on compact sets of $\measuredlams$.
Combining this with our assumption, we get that $\lengthfunc_y$
and $\lengthfunc_x$ agree.
Therefore, $\bonint(y,\cdot)=\lambda\bonint(x,\cdot)$ for some $\lambda>0$.
It follows that $x$ and $y$ are the same point in
$\thurstoncompact$.
We have shown that every convergent subsequence of $x_n$
converges to $x$, which implies that $x_n$ converges to $x$.
\end{proof}
For each $z\in\thurstoncompact$, define the map
\begin{align*}
\Psi_z(x) := \log \sup_{\eta\in\measuredlams}
\frac{\lengthfunc_z(\eta)}{\hlength_x(\eta)},
\qquad\text{for all $x$ in $\teichmullerspace$.}
\end{align*}
Note that, if $z\in\teichmullerspace$, then
$\Psi_z(x)=\stretchdist(x,z)-\stretchdist(b,z)$ for all $x\in\teichmullerspace$.
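Indeed, for $z\in\teichmullerspace$, we have
$\lengthfunc_z(\eta)=\hlength_z(\eta)/\lfactor(z)$, and
formula~(\ref{eqn:dist_formula}) gives $\log\lfactor(z)=\stretchdist(b,z)$, so
\begin{align*}
\Psi_z(x)
= \log \sup_{\eta\in\measuredlams}\frac{\hlength_z(\eta)}{\hlength_x(\eta)}
- \log\lfactor(z)
= \stretchdist(x,z)-\stretchdist(b,z).
\end{align*}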
For $x\in\teichmullerspace$ and $y\in\teichmullerspace\cup\unitlams$,
let $\maxset_{xy}$ be the set of elements $\eta$ of $\unitlams$
(identified with $\projmeasuredlams$) where the ratio
$\lengthfunc_y(\eta) / \lengthfunc_x(\eta)$ is maximal.
\begin{lemma}
\label{lem:optimal_dir}
Let $x_n$ be a sequence of points in $\teichmullerspace$ converging to a
point $[\mu]$ in the Thurston boundary.
Let $y$ be a point in $\thurstoncompact$ satisfying $i(y,\mu)\neq 0$,
and let $\nu_n$ be a sequence
in $\unitlams$ such that $\nu_n\in\maxset_{{x_n}y}$ for all $n\in\mathbb{N}$.
Then, any limit point $\nu\in\unitlams$ of $(\nu_n)_n$ satisfies $i(\mu,\nu)=0$.
\end{lemma}
\begin{proof}
Consider the sequence of functions
$F_n(\eta):=\lengthfunc_y(\eta)/\lengthfunc_{x_n}(\eta)$.
By Proposition~\ref{prop:uniform_lengths}, $\lengthfunc_{x_n}$ converges
to $\lengthfunc_\mu$ uniformly on compact sets.
Therefore, for any sequence $\eta_n$ in $\unitlams$ converging to a
limit $\eta$, we have that $F_n(\eta_n)$ converges to
$\lengthfunc_y(\eta)/\lengthfunc_{\mu}(\eta)$
provided $\lengthfunc_y(\eta)$ and $\lengthfunc_{\mu}(\eta)$ are not both zero.
So, by evaluating on a sequence $\eta_n$ converging to $\mu$,
we see that $\sup_{\unitlams} F_n$ converges to $+\infty$.
On the other hand, for any sequence $\eta_n$ converging to some $\eta$
satisfying $i(\mu,\eta)>0$, we get that $F_n(\eta_n)$ converges to something
finite, and so $\eta_n\not\in\maxset_{{x_n}y}$ for $n$ large enough.
The conclusion follows.
\end{proof}
A measured lamination is \emph{maximal} if its support is not properly\index{maximal measured lamination}
contained in the support of any other measured lamination.
It is \emph{uniquely-ergodic} if every measured lamination with the same\index{uniquely-ergodic measured lamination}
support is in the same projective class.
Recall that, if $\mu$ is maximal and uniquely-ergodic,
and $\eta\in\measuredlams$ satisfies $i(\mu,\eta)=0$,
then $\mu$ and $\eta$ are proportional~\cite[Lemma~2.1]{diaz_series_lines}.
\begin{lemma}
\label{lem:injective}
The map $\Psi:\thurstoncompact\to C(\teichmullerspace):z\mapsto\Psi_z$
is injective.
\end{lemma}
\begin{proof}
Let $x$ and $y$ be distinct elements of $\thurstoncompact$.
By Proposition~\ref{prop:uniform_lengths}, $\lengthfunc_x$ and
$\lengthfunc_y$ are distinct.
So, by exchanging $x$ and $y$ if necessary, we have
$\lengthfunc_x(\mu)<\lengthfunc_y(\mu)$ for some $\mu\in\unitlams$.
Since $\lengthfunc_x$ and $\lengthfunc_y$ are continuous, we may choose
a neighbourhood $N$ of $\mu$ in $\unitlams$ small enough that there are
real numbers $u$ and $v$ such that
\begin{align*}
\lengthfunc_x(\eta) \le u < v \le \lengthfunc_y(\eta),
\qquad\text{for all $\eta\in N$}.
\end{align*}
Since the set of maximal uniquely-ergodic measured laminations is dense in
$\unitlams$, we can find such a measured lamination $\mu'$ in $N$
that is not proportional to $x$.
Let $p_n$ be a sequence of points in $\teichmullerspace$ converging to
$[\mu']$ in the Thurston boundary, and let $\nu_n$ be a sequence in
$\unitlams$ such that $\nu_n\in\maxset_{{p_n}x}$ for all $n\in\mathbb{N}$.
Since $\mu'$ is maximal and uniquely ergodic,
$i(\mu',\eta)\neq 0$ for all
$\eta\in\measuredlams$ not proportional to $\mu'$.
So $i(\mu',x)\neq 0$, whether $x$ is in $\teichmullerspace$ or in the
Thurston boundary.
By Lemma~\ref{lem:optimal_dir}, any limit point $\nu\in\unitlams$
of $(\nu_n)_n$ satisfies $i(\mu',\nu)=0$, and hence equals $\mu'$.
So, $\nu_n\in N$ for large $n$.
Therefore, by taking $n$ large enough, we can find a point $p$ in
$\teichmullerspace$ such that the supremum of
$\lengthfunc_x(\cdot)/\hlength_p(\cdot)$ is attained in the set $N$.
Putting all this together, we have
\begin{align*}
\sup_{\unitlams}\frac{\lengthfunc_x(\cdot)}{\hlength_p(\cdot)}
& = \sup_{N}\frac{\lengthfunc_x(\cdot)}{\hlength_p(\cdot)}
\le \sup_{N}\frac{u}{\hlength_p(\cdot)} \\
& < \sup_{N}\frac{v}{\hlength_p(\cdot)}
\le \sup_{N}\frac{\lengthfunc_y(\cdot)}{\hlength_p(\cdot)}
\le \sup_{\unitlams}\frac{\lengthfunc_y(\cdot)}{\hlength_p(\cdot)}.
\end{align*}
Thus, $\Psi_x(p)<\Psi_y(p)$, which implies that $\Psi_x$ and $\Psi_y$ differ.
\end{proof}
\begin{lemma}
\label{lem:continuous}
The map $\Psi:\thurstoncompact\to C(\teichmullerspace):z\mapsto\Psi_z$
is continuous.
\end{lemma}
\begin{proof}
Let $x_n$ be a sequence in $\thurstoncompact$ converging to a point $x$
also in $\thurstoncompact$.
By Proposition~\ref{prop:uniform_lengths}, $\lengthfunc_{x_n}$ converges
uniformly on compact sets to $\lengthfunc_{x}$.
For all $y\in\teichmullerspace$, the function $\hlength_y$ is bounded
away from zero on $\unitlams$.
We conclude that $\lengthfunc_{x_n}(\cdot)/ \hlength_y(\cdot)$ converges
uniformly on $\unitlams$ to $\lengthfunc_{x}(\cdot)/ \hlength_y(\cdot)$,
for all $y\in\teichmullerspace$. It follows that $\Psi_{x_n}$
converges pointwise to $\Psi_{x}$.
As noted before, this implies that $\Psi_{x_n}$ converges to $\Psi_{x}$
uniformly on bounded sets of $\teichmullerspace$.
\end{proof}
\begin{theorem}
\label{thm:horothurston}
\label{thm:homeo}
The map $\Psi$ is a homeomorphism between the Thurston compactification
and the horofunction compactification of $\teichmullerspace$.
\end{theorem}
\begin{proof}
The injectivity of $\Psi$ was proved in Lemma~\ref{lem:injective}, and so
$\Psi$ is a bijection from $\thurstoncompact$ to its image.
The map $\Psi$ is continuous by Lemma~\ref{lem:continuous}.
As a continuous bijection from a compact space to a Hausdorff one,
$\Psi$ must be a homeomorphism from $\thurstoncompact$
to its image.
So $\Psi(\thurstoncompact)$ is compact and therefore closed.
Using the continuity again, we get
$\Psi(\teichmullerspace)\subset \Psi(\thurstoncompact)
\subset \closure\Psi(\teichmullerspace)$.
Taking closures, we get
$\Psi(\thurstoncompact)=\closure\Psi(\teichmullerspace)$,
which is the horofunction compactification.
\end{proof}
\section{Horocyclic foliations and stretch lines}
\label{sec:busemann_points}
Our goal in this section is to show that every horofunction of
the Lipschitz metric is Busemann. This will be achieved by showing that every
horofunction is the limit of a particular type of geodesic introduced
by Thurston~\cite{thurston_minimal}, called a \emph{stretch line}.\index{stretch line}
Let $\mu$ be a \emph{complete} geodesic lamination. In other words,\index{complete geodesic lamination}
$\mu$ is not strictly contained within another geodesic lamination,
or equivalently, the complementary regions of $\mu$
are all isometric to open ideal triangles in hyperbolic space.
Note that if the surface $S$ has punctures, then $\mu$ has leaves going out to
the cusps.
\begin{figure}
\input{horofoliation6.pstex_t}
\caption{An ideal triangle foliated by horocycles.}
\label{fig:horofoliation}
\end{figure}
We foliate each of the complementary triangles of $\mu$ with horocyclic arcs
as shown in Figure~\ref{fig:horofoliation}.
The horocyclic arcs meet the boundary of each triangle perpendicularly.
Note that there is a non-foliated region at the center of each triangle, which
is bounded by three horocyclic arcs meeting tangentially. So, the foliation
obtained is actually a partial foliation.
This partial foliation on $S\backslash \mu$ may be extended continuously
to a partial foliation on the whole of $S$.
Given a hyperbolic structure $g$ on $S$, we define a transverse measure
on the partial foliation on $S$ by requiring the measure of every
sub-arc of $\mu$ to be its length in the metric $g$.
The partial foliation with this transverse measure is called the
\emph{horocyclic foliation}, and is denoted $F_\mu(g)$.\index{horocyclic foliation}
Collapsing all non-foliated regions, we obtain
a well-defined element $F_\mu(g)$ of $\measuredfoliations$,
the space of measured foliations on $S$ up to Whitehead equivalence.\index{Whitehead equivalence}
Recall that two measured foliations are said to be Whitehead equivalent
if one may be deformed to the other by isotopies, deformations that
collapse to points arcs joining a pair of singularities, and the
inverses of such maps.
Note that the horocyclic foliation has around each puncture an annulus
of infinite width foliated by closed leaves parallel to the puncture.
Such a foliation is said to be \emph{trivial around punctures}.
A measured foliation is said to be \emph{totally transverse} to a geodesic\index{totally transverse}
lamination if it is transverse to the lamination and trivial around punctures.
A measured foliation class is said to be totally transverse to a
geodesic lamination if it has a representative that is totally transverse.
Let $\measuredfoliations(\mu)$ be the set of measure classes of
measured foliations that are totally transverse to~$\mu$.
The horocyclic foliation is clearly in $\measuredfoliations(\mu)$.
Thurston proved that the map
$\horocyclic_\mu: \teichmullerspace\to\measuredfoliations(\mu):
g \mapsto F_\mu(g)$
is in fact a homeomorphism.
The horocyclic foliation gives us a way of deforming the hyperbolic structure
by stretching along $\mu$.
Define the \emph{stretch line} directed by $\mu$ and passing through
$x\in\teichmullerspace$ to be
\begin{align*}
\stretchline_{\mu,x}(t):=\horocyclic^{-1}_\mu(e^t F_\mu(x)),
\qquad\text{for all $t\in\mathbb{R}$}.
\end{align*}
Stretch lines are geodesics for Thurston's Lipschitz metric, that is,
\begin{align*}
\stretchdist(\stretchline_{\mu,x}(s), \stretchline_{\mu,x}(t)) = t-s,
\end{align*}
for all $s$ and $t$ in $\mathbb{R}$ with $s<t$, provided that $\mu$ does not just
consist of geodesics converging at both ends to punctures.
The \emph{stump} of a geodesic lamination is its largest sub-lamination\index{stump}
on which there can be put a compactly-supported transverse measure.
Th\'eret~\cite{theret_negative} showed that a measured foliation class
is totally transverse to a complete geodesic lamination $\mu$ if and only if
its associated measured lamination in $\measuredlams$ transversely meets
every component of the stump of $\mu$.
\begin{theorem}
\label{thm:busemann}
Every point of the horofunction boundary of Thurston's Lipschitz metric
is a Busemann point.\index{Busemann point}
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:horothurston}, the horofunction and Thurston boundaries
coincide.
Let $[\nu]\in\projmeasuredlams$ be any point of the Thurston boundary.
Choose a maximal uniquely-ergodic element $\mu$ of $\measuredlams$ so that
$[\mu]$ is different from $[\nu]$. So, $i(\nu,\mu)>0$.
Take a completion $\overline\mu$ of $\mu$.
The stump of $\overline\mu$ contains the support of $\mu$,
and so must equal this support, since $\mu$ is maximal.
Since $\mu$ is uniquely-ergodic, it has only one component,
which meets $\nu$ transversely.
Let $F$ denote the element of $\measuredfoliations$ associated
to $\nu$.
By~\cite[Lemma~1.8]{theret_negative}, $F$ is totally transverse to
$\overline\mu$.
Therefore the map $t\mapsto\horocyclic^{-1}_{\overline\mu}(e^t F)$ is
a stretch line directed by $\overline\mu$.
It was shown in~\cite{papadopoulos_extension} that such a stretch line
converges in the positive direction to $[\nu]$ in the Thurston
compactification. Since a stretch line is a geodesic, we conclude that
the horofunction $\Psi_\nu$ corresponding to $[\nu]$ is a Busemann point.
\end{proof}
\section{The detour cost}
\label{sec:detour}
\index{detour cost}
Let $(X,d)$ be a possibly non-symmetric metric space with base-point $b$.
We define the \emph{detour cost}
for any two horofunctions $\xi$ and $\eta$ in $X(\infty)$ to be
\begin{align*}
H(\xi,\eta)
&= \sup_{W\ni\xi} \inf_{x\in W\intersection X} \Big( d(b,x)+\eta(x) \Big),
\end{align*}
where the supremum is taken over all neighbourhoods $W$ of $\xi$ in
$X\cup X(\infty)$.
This concept appears in~\cite{AGW-m}.
An equivalent definition is
\begin{align}\label{eq:3.2}
H(\xi,\eta) &= \inf_{\gamma} \liminf_{t\to\infty}
\Big( d(b,\gamma(t))+\eta(\gamma(t)) \Big),
\end{align}
where the infimum is taken over all paths $\gamma:\mathbb{R}_+\to X$ converging
to $\xi$.
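As a toy illustration of these definitions (not from the text: take $X=\mathbb{R}$ with its usual metric and base-point $0$, so the horofunction boundary consists of the two Busemann points $\xi_\pm(x)=\mp x$), the path formula~(\ref{eq:3.2}) can be evaluated directly along the geodesic $\gamma(t)=t$:

```python
# Toy sketch (illustrative, not from the text): X = R, d(x, y) = |y - x|,
# base-point b = 0.  The horofunctions in the directions +oo and -oo are
# xi_plus(x) = -x and xi_minus(x) = x.
def d(x, y):
    return abs(y - x)

def xi_plus(x):
    return -x

def xi_minus(x):
    return x

b = 0.0
# Along the geodesic gamma(t) = t, which converges to xi_plus,
# the quantity d(b, gamma(t)) + eta(gamma(t)) from the path formula
# stabilises when eta = xi_plus and diverges when eta = xi_minus:
for t in [1.0, 10.0, 100.0, 1000.0]:
    assert d(b, t) + xi_plus(t) == 0.0    # suggests H(xi_plus, xi_plus) = 0
assert d(b, 1000.0) + xi_minus(1000.0) == 2000.0  # grows without bound
```

The divergent second quantity reflects that $H(\xi_+,\xi_-)=+\infty$: no path can converge to $\xi_+$ while keeping $d(b,\cdot)+\xi_-(\cdot)$ bounded.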
\renewcommand{\theenumi}{(\roman{enumi})}
\renewcommand{\labelenumi}{\theenumi}
\begin{lemma}
\label{lem:triangle_detour}
Let $\xi$ and $\eta$ be horofunctions. Then,
\begin{align*}
\eta(x) \le \xi(x) + H(\xi,\eta),
\qquad\text{for all $x$ in $X$}.
\end{align*}
\end{lemma}
\begin{proof}
By~(\ref{eqn:ideal_tri_ineq}),
\begin{align*}
\eta(x) \le \Big( d(x,z) - d(b,z) \Big) + \Big( d(b,z) + \eta(z) \Big),
\qquad\text{for all $x$ and $z$ in $X$}.
\end{align*}
Note that there is always a path $\gamma$ converging to $\xi$
such that
\begin{align*}
\liminf_{t\to\infty} \Big( d(b,\gamma(t))+\eta(\gamma(t)) \Big)=H(\xi,\eta).
\end{align*}
Taking the limit as $z$ moves along such a path, through a sequence of times
realising the $\liminf$, gives the result.
\end{proof}
The following was proved in~\cite[Lemma 3.3]{walsh_minimum}.
\begin{lemma}
\label{lem:along_geos}
Let $\gamma$ be an almost-geodesic converging to a Busemann point $\xi$,
and let $y\in X$.
Then,
\begin{equation*}
\lim_{t\to\infty} d(y,\gamma(t)) + \xi(\gamma(t)) = \xi(y).
\end{equation*}
Moreover, for any horofunction $\eta$,
\begin{equation*}
H(\xi,\eta) = \lim_{t\to\infty} d(b,\gamma(t)) + \eta(\gamma(t)).
\end{equation*}
\end{lemma}
\begin{proof}
Let $\epsilon>0$.
Putting $s=t$ in the definition of almost-geodesic, we see that
\begin{equation*}
| d(\gamma(0),\gamma(t)) - t | < \epsilon,
\qquad\text{for $t$ large enough.}
\end{equation*}
Using this and again the fact that $\gamma$ is an almost-geodesic, we get
\begin{equation*}
| d(\gamma(0),\gamma(s)) + d(\gamma(s),\gamma(t)) - d(\gamma(0),\gamma(t))|
< 2\epsilon,
\end{equation*}
for $s$ and $t$ large enough, with $s\le t$.
Letting $t$ tend to infinity gives
\begin{equation*}
|d(\gamma(0),\gamma(s)) + \xi(\gamma(s)) - \xi(\gamma(0))| \le 2\epsilon,
\qquad\text{for $s$ large enough.}
\end{equation*}
But, since $\gamma$ converges to $\xi$,
\begin{equation*}
|d(y,\gamma(s)) - d(\gamma(0),\gamma(s)) - \xi(y) + \xi(\gamma(0))|
< \epsilon,
\qquad\text{for $s$ large enough.}
\end{equation*}
Combining these, we deduce the first statement of the lemma.
By Lemma~\ref{lem:triangle_detour},
$\eta(x) \le \xi(x) + H(\xi,\eta)$,
for all $x$ in $X$.
Evaluating at $x=\gamma(s)$, adding $d(b,\gamma(s))$ to both sides, and using
the first part of the lemma with $y=b$, we get
\begin{align*}
\limsup_{s\to\infty} d(b,\gamma(s)) + \eta(\gamma(s)) \le H(\xi,\eta).
\end{align*}
On the other hand, from~(\ref{eq:3.2}),
\begin{equation*}
H(\xi,\eta)
\le \liminf_{s\to\infty} d(b,\gamma(s)) + \eta(\gamma(s)) .
\end{equation*}
This establishes the second statement of the lemma.
\end{proof}
\begin{proposition}
\label{prop:transformH}
Let $\iso$ be an isometry from one possibly non-symmetric metric
space $(X,d)$ to another $(X',d')$, with base-points $b$ and $b'$ respectively.
Then, the detour costs in $X$ and $X'$ are related by
\begin{align*}
H'(\iso\cdot\xi,\iso\cdot\eta)
= \xi(\iso^{-1}(b')) + H(\xi,\eta) - \eta( \iso^{-1}(b')),
\text{\qquad for all $\xi,\eta\in X(\infty)$}.
\end{align*}
In particular, every isometry preserves finiteness of the detour cost.
\end{proposition}
\begin{proof}
Let $\xi$ and $\eta$ be in $X(\infty)$.
By Proposition~\ref{prop:transform_horo}, the horofunction $\eta$ is mapped
by $\iso$ to
$\iso\cdot\eta(\cdot) = \eta(\iso^{-1}(\cdot)) - \eta(\iso^{-1}(b'))$.
We have
\begin{align*}
H'(\iso\cdot\xi,\iso\cdot\eta)
&= \inf_\gamma \liminf_{t\to\infty}
\Big( d'(b',\iso(\gamma(t)))+\eta(\gamma(t))-\eta(\iso^{-1}(b')) \Big) \\
&= \inf_\gamma \liminf_{t\to\infty}
\Big( d(\iso^{-1}(b'),\gamma(t))-d(b,\gamma(t)) \Big) \\
& \qquad\qquad\qquad\qquad + \Big( d(b,\gamma(t)) + \eta(\gamma(t)) \Big)
- \eta( \iso^{-1}(b')) \\
& = \xi(\iso^{-1}(b')) + H(\xi,\eta) - \eta( \iso^{-1}(b')),
\end{align*}
where each time the infimum is taken over all paths in $X$ converging to $\xi$.
\end{proof}
Note that we can take $(X,d)$ and $(X',d')$ to be identical and the isometry
to be the identity map, in which case the proposition says how the detour cost
depends on the base-point:
\begin{align*}
H'(\xi,\eta) = \xi(b') + H(\xi,\eta) - \eta(b').
\end{align*}
For the next proposition, we will need the following assumption.
\begin{assumption}
\label{ass:infinite_dist}
For every sequence $x_n$ in $X$, if $\symdist(b,x_n)$ converges to infinity,
then so does $\dist(b,x_n)$.
\end{assumption}
This assumption is satisfied by the Lipschitz
metric~\cite{papadopoulos_theret_topology}.
\begin{proposition}
\label{prop:zero_on_busemann}
Assume that $(X,d)$ satisfies
assumptions~\ref{ass:proper}, \ref{ass:geodesic},
\ref{ass:topology}, and \ref{ass:infinite_dist}.
If $\xi$ is a horofunction,
then $H(\xi,\xi)=0$ if and only if $\xi$ is Busemann.
\end{proposition}
\begin{proof}
Suppose that $H(\xi,\xi)=0$.
Let $b'\in X$, and let $\xi'=\xi-\xi(b')$ be the horofunction
corresponding to $\xi$ when $b'$ is used as the base-point instead of $b$.
From Proposition~\ref{prop:transformH}, the detour cost with base-point $b'$
satisfies $H'(\xi',\xi')=0$.
So, for any $\epsilon>0$ and neighbourhood $W$ of $\xi'$ in the horofunction
compactification, we may find $x\in W\intersection X$ such that
$|d(b',x) + \xi(x) - \xi(b')| < \epsilon$.
Fix $\epsilon>0$, and let $x_0:=b$.
From the above, we may inductively find a sequence $\{x_j\}$ in $X$
converging to $\xi$ such that
\begin{align*}
|d(x_j,x_{j+1}) + \xi(x_{j+1}) - \xi(x_{j})| < \frac{\epsilon}{2^{j+1}},
\qquad\text{for all $j\in\mathbb{N}$}.
\end{align*}
For each $j$, take a finite-length geodesic path $\gamma_j$ from $x_j$
to $x_{j+1}$.
Since $x_j$ converges to a horofunction, we have by
Proposition~\ref{prop:to_infinity} that $\symdist(b,x_j)$ converges to infinity.
Therefore, $\dist(b,x_j)$ also converges to infinity.
But
\begin{align*}
\sum_{j=0}^n \dist(x_j,x_{j+1}) \ge \dist(b,x_{n+1}),
\qquad\text{for all $n\in\mathbb{N}$.}
\end{align*}
So, when we concatenate the geodesic paths $\{\gamma_j\}$, we obtain a
path $\gamma:\mathbb{R}_+\to X$, defined on the whole of $\mathbb{R}_+$.
Let $s$ and $t$ be in $\mathbb{R}_+$ with $s\le t$, and let $n\in\mathbb{N}$ be such that
\begin{align*}
\sum_{j=0}^{n-1} d(x_j,x_{j+1}) < t \le \sum_{j=0}^{n} d(x_j,x_{j+1}).
\end{align*}
Write $\Delta:=\sum_{j=0}^{n} d(x_j,x_{j+1}) - t$.
So,
\begin{align*}
t &= \sum_{j=0}^{n} d(x_j,x_{j+1}) - \Delta \\
&\le -\xi(x_{n+1}) + \epsilon - \Delta \\
&= (0-\xi(\gamma(s))) + (\xi(\gamma(s))-\xi(\gamma(t)))
+(\xi(\gamma(t))-\xi(x_{n+1})) + \epsilon - \Delta.
\end{align*}
Using~(\ref{eqn:ideal_tri_ineq}),
we get
\begin{align*}
t &\le d(b,\gamma(s)) + d(\gamma(s),\gamma(t)) + d(\gamma(t),x_{n+1})
+ \epsilon - \Delta \\
&= d(b,\gamma(s)) + d(\gamma(s),\gamma(t)) + \epsilon.
\end{align*}
Since $\gamma$ is a concatenation of geodesic segments,
\begin{align*}
d(b,\gamma(s)) + d(\gamma(s),\gamma(t)) \le s + (t-s) = t.
\end{align*}
Therefore, $\gamma$ is an almost-geodesic, and so it converges,
necessarily to $\xi$ since it passes through each of the points $x_j$.
So, $\xi$ is a Busemann point.
Now assume that $\xi$ is a Busemann point. So, there exists an almost-geodesic
converging to $\xi$.
It follows from Lemma~\ref{lem:along_geos} that $H(\xi,\xi)=0$.
\end{proof}
\begin{proposition}
\label{prop:H_properies}
For all horofunctions $\xi$, $\eta$, and $\nu$,
\begin{enumerate}
\item
\label{itema}
$H(\xi,\eta) \ge 0$;
\item
\label{itemb}
$H(\xi,\nu) \le H(\xi,\eta) + H(\eta,\nu)$.
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{itema}
From~(\ref{eqn:ideal_tri_ineq}), we get that $d(b,y)+\eta(y) \ge 0$,
for all $y$ in $X$.
We conclude that $H(\xi,\eta)$ is non-negative.
\ref{itemb}
By Lemma~\ref{lem:triangle_detour},
\begin{align*}
d(b,x) + \nu(x) \le d(b,x) + \eta(x) + H(\eta,\nu),
\qquad\text{for all $x\in X$}.
\end{align*}
It follows from this that $H(\xi,\nu) \le H(\xi,\eta) + H(\eta,\nu)$.
\end{proof}
By symmetrising the detour cost, the set of Busemann points can be equipped
with a metric. For $\xi$ and $\eta$ in $X_B(\infty)$, let
\begin{equation}\label{eq:3.4}
\delta(\xi,\eta) := H(\xi,\eta)+H(\eta,\xi).
\end{equation}
We call $\delta$ the \emph{detour metric}.\index{detour metric}
This construction appears in~\cite[Remark~5.2]{AGW-m}.
\begin{proposition}
\label{prop:delta_metric}
The function $\delta\colon X_B(\infty)\times X_B(\infty)\to [0,\infty]$
is a (possibly $\infty$-valued) metric.
\end{proposition}
\begin{proof}
The only metric space axiom that does not follow from
Propositions~\ref{prop:zero_on_busemann} and~\ref{prop:H_properies},
and the symmetry of the definition of $\delta$
is that if $\delta(\xi,\eta)=0$ for Busemann points $\xi$ and $\eta$,
then these two points are identical. So, assume this equation holds.
By~\ref{itema} of Proposition~\ref{prop:H_properies},
both $H(\xi,\eta)$ and $H(\eta,\xi)$ are zero.
Applying Lemma~\ref{lem:triangle_detour} twice, we get that $\xi(x)=\eta(x)$
for all $x\in X$.
\end{proof}
The following proposition shows that each isometry of $X$ induces an isometry
on $X_B(\infty)$ endowed with the detour metric.
The independence of the base-point was observed in~\cite[Remark~5.2]{AGW-m}.
\begin{proposition}
\label{prop:isometry_detour_metric}
Let $\iso$ be an isometry from one possibly non-symmetric metric
space $(X,d)$ to another $(X',d')$, with base-points $b$ and $b'$ respectively.
Then, the detour metrics in $X$ and $X'$ are related by
\begin{align*}
\delta'(\iso\cdot\xi,\iso\cdot\eta)
= \delta(\xi,\eta),
\text{\qquad for all $\xi,\eta\in X(\infty)$}.
\end{align*}
In particular, the detour metric does not depend on the base-point.
\end{proposition}
\begin{proof}
The first part follows from Proposition~\ref{prop:transformH}.
The second part then follows by taking $X=X'$ and $d=d'$,
with $\iso$ the identity map.
\end{proof}
\section{The detour cost for the Lipschitz metric}
\label{sec:thurston_detour}
We will now calculate the detour cost for Thurston's Lipschitz metric.
This result will be crucial for our study of the isometry
group in the next section.
If $\xi_j$ are a finite set of measured laminations pairwise having
zero intersection number, then we define their sum $\sum_j \xi_j$ to be the
measured lamination obtained by taking the union of the supports and
endowing it with the sum of the transverse measures.
A measured lamination is said to be~\emph{ergodic} if it is non-trivial
and cannot be written as a sum of projectively-distinct non-trivial
measured laminations.
Each measured lamination $\xi$ can be written in one way as the sum of a
finite set of projectively-distinct ergodic measured laminations.
We call these laminations the \emph{ergodic components} of $\xi$.\index{ergodic components}
Let $\blam\in\measuredlams$ be expressed as $\blam= \sum_j \blam_j$ in terms
of its ergodic components.
For $\slam\in\measuredlams$, we write $\slam\ll\blam$
if $\slam$ can be expressed as $\slam = \sum_j f_j \blam_j$, where each $f_j$
is in $\mathbb{R}_+$.
Recall that $\Psi_\blam$ denotes the horofunction associated to the projective
class of the measured lamination $\blam$.
\begin{theorem}
\label{thm:detourcost}
Let $\slam$ and $\blam$ be measured laminations. If $\slam\ll\blam$, then
\begin{align*}
H(\Psi_\blam,\Psi_\slam) =
\log \sup_{\eta\in\measuredlams} \frac{i(\blam,\eta)}{\hlength_b(\eta)}
+ \log\max_j(f_j)
- \log\sup_{\eta\in\measuredlams} \frac{i(\slam,\eta)}{\hlength_b(\eta)},
\end{align*}
where $\slam$ is expressed as $\slam = \sum_j f_j \blam_j$
in terms of the ergodic components $\blam_j$ of $\blam$.
If $\slam\not\ll\blam$, then $H(\Psi_\blam,\Psi_\slam) = +\infty$.
\end{theorem}
\begin{remark}
Here, and in similar situations, we interpret the supremum to be over
the set where the ratio is well defined, that is, excluding values of $\eta$
for which both the numerator and the denominator are zero.
\end{remark}
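In coordinates, the coefficient-dependent part of this formula is elementary. The following sketch uses a hypothetical representation (a measured lamination is modelled only by the coefficients it assigns to labelled ergodic components; the two supremum terms, which depend on the hyperbolic structure $b$, are not modelled) to compute $\log\max_j f_j$ and detect when $\slam\not\ll\blam$:

```python
from math import log, inf

def relative_detour_part(sigma, beta_components):
    """sigma: dict mapping an ergodic-component label to its coefficient
    f_j >= 0; beta_components: the set of labels of the ergodic components
    of beta.  Returns log max_j f_j when sigma << beta, and +inf otherwise.
    (Hypothetical coordinate model; the labels are illustrative.)"""
    if any(f > 0 and c not in beta_components for c, f in sigma.items()):
        return inf  # sigma puts weight outside the components of beta
    return log(max(sigma.get(c, 0.0) for c in beta_components))

# sigma = 2*beta_1 + 0.5*beta_2, where beta has components 1, 2, 3:
assert relative_detour_part({1: 2.0, 2: 0.5}, {1, 2, 3}) == log(2.0)
# weight on a component outside beta makes the detour cost infinite:
assert relative_detour_part({4: 1.0}, {1, 2, 3}) == inf
```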
The proof of this theorem will require several lemmas.
Consider a measured foliation $(F,\mu)$.
Each connected component of the complement in $S$ of the union of the compact
leaves of $F$ joining singularities is either an annulus swept out by
closed leaves or a minimal component in which all leaves are dense.
We call the latter components the \emph{minimal domains} of $F$.\index{minimal domains}
Denote by $\sing F$ the set of singularities of $F$.
A curve $\curve$ is \emph{quasi-transverse}\index{quasi-transverse}
to $F$ if each connected component
of $\curve\backslash\sing F$ is either a leaf or is transverse to $F$,
and, in a neighbourhood of each singularity, no transverse arc lies in
a sector adjacent to an arc contained in a leaf.
Quasi-transverse curves minimise the total variation of the transverse measure
in their homotopy class, in other words, $\mu(\curve) = i((F,\mu),\curve)$,
for every curve $\curve$ quasi-transverse to $(F,\mu)$;
see~\cite{fathi_laudenbach_poenaru}.
\begin{lemma}
\label{lem:alongminimal}
Let $\epsilon>0$, and let $x_1$ and $x_2$ be two points on the boundary
of a minimal domain $D$ of a measured foliation $(F,\mu)$.
Then, there exists a curve segment $\sigma$ going from $x_1$ to $x_2$
that is contained in $D$ and quasi-transverse to $F$, such that
$\mu(\sigma)<\epsilon$. Moreover, $\sigma$ can be chosen to have a non-trivial
initial segment and terminal segment that are transverse to $F$.
\end{lemma}
\begin{proof}
Let $\tau_1$ and $\tau_2$ be two non-intersecting transverse arcs in $D$
starting at $x_1$ and $x_2$ respectively, parameterised so that
$\mu(\tau_1[0,s]) = \mu(\tau_2[0,s]) = s$, for all $s$.
We also require that the lengths of $\tau_1$ and $\tau_2$
with respect to $\mu$ are less than $\epsilon/2$.
Choose a point on $\tau_1$ and a direction, either left or right, such that
the chosen half-leaf $\gamma$ is infinite, that is, does not hit a singularity.
Since $D$ is a minimal component, $\gamma[0,\infty)$ is dense in $D$.
Let $t_2$ be the first time $\gamma$ intersects $\tau_2$, and let $t_1$
be the last time before $t_2$ that $\gamma$ intersects $\tau_1$.
Let $s_1$ and $s_2$ be such that $\tau_1(s_1)=\gamma(t_1)$ and
$\tau_2(s_2)=\gamma(t_2)$. We may assume that $s_1$ is different from $s_2$,
for otherwise, continue along $\gamma$ until the next time $t_3$ it intersects
$\tau_1[0,s_1)\cup\tau_2[0,s_2)$.
If the intersection is with $\tau_1[0,s_1)$, then we take the leaf segment
$\gamma[t_2,t_3)$ with the reverse orientation.
If the intersection is with $\tau_2[0,s_2)$, then we take the leaf segment
$\gamma[t_1,t_3)$. In either case, after redefining $s_1$ and $s_2$ so that
our chosen leaf segment starts at $\tau_1(s_1)$ and ends at $\tau_2(s_2)$,
we get that $s_1\neq s_2$.
If $\tau_1[0,s_1]$ and $\tau_2[0,s_2]$ leave $\gamma$ on opposite sides, then,
by perturbing the concatenation $\tau_1[0,s_1]*\gamma[t_1,t_2]*\tau_2[s_2,0]$,
we can find a curve segment $\sigma$ in $D$
that passes through $x_1$ and $x_2$ and is transverse to $F$ with weight
$\mu(\sigma) = s_1 + s_2 < \epsilon$; see Figure~\ref{fig:figA}.
\begin{figure}
\input{figureA.pstex_t}
\caption{Diagram for the proof of Lemma~\ref{lem:alongminimal}.}
\label{fig:figA}
\end{figure}
\begin{figure}
\input{figureB.pstex_t}
\caption{Diagram for the proof of Lemma~\ref{lem:alongminimal}.}
\label{fig:figB}
\end{figure}
If $\tau_1[0,s_1]$ and $\tau_2[0,s_2]$ leave $\gamma$ on the same side,
then we apply~\cite[Theorem 5.4]{fathi_laudenbach_poenaru} to get an arc
$\gamma'$ parallel to $\gamma$, contained in a union of a finite number of
leaves and singularities, with end points $\gamma'(t_1)=\tau_1(s_1')$
and $\gamma'(t_2)=\tau_2(s_2')$ contained in $\tau_1[0,s_1)$
and $\tau_2[0,s_2)$, respectively (see Figure~\ref{fig:figB}).
Since $s_1\neq s_2$, the points $x_1$, $x_2$, $\tau_2(s_2)$, and $\tau_1(s_1)$
do not form a rectangle foliated by leaves.
Hence the endpoints of $\gamma'$ are not $x_1$ and $x_2$.
As before, by perturbing $\tau_1[0,s_1']*\gamma'[t_1,t_2]*\tau_2[s_2',0]$,
it is easy to construct a curve with the
required properties.
\end{proof}
\begin{lemma}
\label{lem:cross_one}
Let $\xi_j$, $j\in\{0,\dots,J\}$, be a finite set of ergodic measured laminations
that pairwise have zero intersection number, such that no two are in the same
projective class, and let $C>0$.
Then, there exists a curve $\alpha\in\curves$ such that
$i(\xi_0,\alpha) > C i(\xi_j,\alpha)$ for all $j\in \{1,\dots,J\}$.
\end{lemma}
\begin{proof}
\begin{figure}
\input{crossings.pstex_t}
\caption{Diagram for the proof of Lemma~\ref{lem:cross_one}.}
\label{fig:crossings}
\end{figure}
\begin{figure}
\input{rectangle.pstex_t}
\caption{Diagram for the proof of Lemma~\ref{lem:cross_one}.}
\label{fig:rectangle}
\end{figure}
Since the $\xi_j$ do not intersect, we may combine them to form a
measured lamination $\xi:=\sum_j \xi_j$.
Consider first the case where $\xi_0$ is not a curve.
Take a representative $(F,\mu)$ of the element of $\measuredfoliations$
corresponding to $\xi$ under the well-known bijection between $\measuredlams$
and $\measuredfoliations$. The decomposition of $\xi$ into a sum of
ergodic measured laminations corresponds to the decomposition of $\mu$
into a sum of partial measured foliations $(F,\mu_j)$.
Each $\mu_j$ is supported on either an annulus of closed leaves of $F$
(if $\xi_j$ is a curve), or a minimal domain.
For each $j$, let $F_j:=(F,\mu_j)$.
Let $I'$ be a transverse arc contained in the interior of the minimal
domain on which $\mu_0$ is supported.
Write $\mu^c := \sum_{j=1}^{J} \mu_j = \mu - \mu_0$ and $F^c:=(F,\mu^c)$.
The measures $\{\mu_j\}$ are mutually singular, and so there is a Borel
subset $X$ of $I'$ such that $\mu_0(X)=\mu_0(I')$ and $\mu^c(X)=0$.
But $X$ may be approximated from above by open sets of $I'$:
\begin{align*}
\mu^c(X)
= \inf\{ \mu^c(U) \mid \text{$X\subset U\subset I'$ and $U$ is open} \}.
\end{align*}
Therefore, we can find an open set $U$ such that $X\subset U\subset I'$ and
$\mu^c(U) < \mu_0(I') / C$. Since $U$ is open, it is the disjoint union of a
countable collection of open intervals.
Since $\mu_0(U)=\mu_0(I')>\mu^c(U) C$, at least one of these intervals $I$
must satisfy $\mu_0(I) > \mu^c(I) C$.
Choose $\epsilon >0$ such that $\mu_0(I)-\epsilon > \mu^c(I) C$.
For $x$ and $y$ in $I$, we denote by $[x,y]$ the closed sub-arc of $I$
connecting $x$ and $y$. Open and half-open sub-arcs are denoted in an
analogous way. Let $x_0$ and $x_1$ be the endpoints of $I$.
Choose a point $x\in I$ such that $\mu[x_0,x] < \epsilon/3$ and there is
an infinite half-leaf $\gamma$ of $F$ starting from $\gamma(0)=x$.
So, we may go along $\gamma$ until we reach a point $y:=\gamma(t)$ in $I$
such that $\mu[y,x_1] < \epsilon/3$.
Let $B$ be the set of intervals $[p,q)$ in $I$ such that the finite leaf
segment $\gamma[0,t]$ crosses $I$ at $p$ and at $q$, the two crossings
are in the same direction, and $\gamma$ does not cross $I$ in the interval
$(p,q)$.
Consider an element $[p,q)$ of $B$. Assume that $\gamma$ passes through
$p$ before it passes through $q$, that is, $\gamma(t_p)=p$ and $\gamma(t_q)=q$
for some $t_p$ and $t_q$ in $[0,t]$, with $t_p<t_q$.
(The other case is handled similarly.)
Let $\scurv$ be the closed curve consisting of $\gamma[t_p,t_q]$ concatenated
with the sub-interval $[p,q]$ of $I$. Since $\gamma$ contains no singular
point of $F$, there exists a narrow rectangular neighbourhood of
$\gamma[t_p,t_q]$ not containing
any singular point of $F$, and so we may perturb $\scurv$ to get a closed curve
$\scurv'$ that is transverse to $F$ (see Figure~\ref{fig:rectangle}).
We have
\begin{align*}
i(\scurv,F_j) = i(\scurv',F_j) = \mu_j(\scurv') = \mu_j[p,q),
\end{align*}
for all $j\in\{0,\dots,J\}$.
The second equality uses the fact that $\scurv'$ is transverse to $F_j$,
and hence quasi-transverse.
Let $Z$ be the set of curves $\scurv$ obtained in this way from the elements
$[p,q)$ of $B$.
The set $[x,y) \backslash\cup B$ is composed of a finite number of intervals
of the form $[r,s)$, where $\gamma[0,t]$ crosses $I$ at $r$ and $s$ in different
directions and does not cross the interval $(r,s)$.
From $\gamma(t)$, we continue along $\gamma$ until the first time $t'$
that $\gamma$ crosses one of the intervals $[r,s)$ comprising
$[x,y) \backslash\cup B$.
The direction of this crossing will be the same as either that at $r$ or that
at $s$. We assume the former case; the other case is similar.
As before, we have that the curve $\scurv$ formed from the segment of $\gamma$
going from $r$ to $\gamma(t')$, concatenated with the sub-arc $[r,\gamma(t')]$
of $I$, satisfies $i(\scurv,F_j) = \mu_j[r,\gamma(t'))$, for all $j$.
We add $[r,\gamma(t'))$ to the set $B$, and $\scurv$ to the set $Z$.
Observe that $[x,y)\backslash \cup B$ remains composed of the same number
of intervals of the same form, only now one of them is shorter.
We continue in this manner, adding intervals to $B$ and curves to $Z$.
Since $\gamma$ crosses $I$ on a dense subset of $I$, the maximum $\mu$-measure
of the component intervals of $[x,y)\backslash \cup B$ can be made as
small as we wish. But there are a fixed number of these components, and so
we can make $\mu([x,y)\backslash \cup B)$ as small as we wish.
We make it smaller than $\epsilon/3$.
We have
\begin{align*}
\max_{\scurv\in Z} \frac{i(\scurv,F_0)}{i(\scurv,F^c)}
&\ge \frac{\sum_{\scurv\in Z}i(\scurv,F_0)}{\sum_{\scurv\in Z}i(\scurv,F^c)} \\
&= \frac{\mu_0(\cup B)}{\mu^c(\cup B)} \\
&\ge \frac{\mu_0(I)-\epsilon}{\mu^c(I)} \\
&> C.
\end{align*}
So, some curve $\scurv\in Z$ satisfies
$i(\scurv,\xi_0) = i(\scurv,F_0) > C i(\scurv,F^c) \ge C i(\scurv, \xi_j)$,
for all $j\in\{1,\dots,J\}$.
Now consider the case where $\xi_0$ is a curve.
A slight adaptation of the proof
of~\cite[Proposition 3.17]{fathi_laudenbach_poenaru} shows that there is a
curve $\alpha\in\curves$ having positive intersection number with
$\xi_0$ and zero intersection number with every other curve in the support
of $\xi$.
By~\cite[Proposition 5.9]{fathi_laudenbach_poenaru}, there exists a
measured foliation $(F,\mu)$ representing the element of $\measuredfoliations$
associated to $\xi$ such that $\alpha$ is transverse to $F$ and avoids
its singularities.
Again we use the decomposition of $\mu$ into a sum of mutually-singular
partial measured foliations $(F,\mu_j)$, corresponding to the $\xi_j$.
Since $\alpha$ is transverse to $F$, we have, for each $j$, that
$i(\alpha,\xi_j) = \mu_j(\alpha)$, where $\mu_j(\alpha)$ denotes the total
mass of $\alpha$ with respect to the transverse measure $\mu_j$.
It follows that $\alpha$ crosses the annulus $A$ associated to $\xi_0$
at least once, but never enters any of the annuli associated to the other
curves in the support of~$\xi$.
Consider the following directed graph.
We take a vertex for every minimal domain of $F$
through which $\alpha$ passes, and for every time $\alpha$ crosses $A$.
So, there is at most one vertex associated to each minimal domain, but there
may be more than one associated to~$A$.
As we move along the curve $\alpha$, we get a cyclic sequence of these
vertices. We draw a directed edge between each vertex of this cyclic
sequence and the succeeding one, and label it with the point at which
$\alpha$ leaves the minimal domain or annulus and enters the next.
There may be more than one directed edge between a pair of vertices,
but each will have a different label.
The curve $\alpha$ induces a circuit in this directed graph.
Choose a simple sub-circuit $c$ that passes through at least
one vertex associated to a crossing of~$A$.
Here, simple means that no vertex is visited more than once.
Construct a curve in $S$ as follows.
For each vertex passed through by $c$ associated to a crossing of $A$,
take the associated segment of $\alpha$. For each
vertex passed through by $c$ associated to a minimal domain, choose $\epsilon$
to be less than the height of $A$ divided by $C$, and take the curve
segment given by Lemma~\ref{lem:alongminimal} passing through the minimal
domain, joining the points labeling the incoming and outgoing directed edges.
When we concatenate all these curve segments, we get a curve $\alpha'$
that passes through $A$, and that is quasi-transverse to $F$. Furthermore,
$i(\xi_j,\alpha') <\epsilon$ for each non-curve component $\xi_j$ of $\xi$,
and $i(\xi_j,\alpha')=0$ for each curve component different from $\xi_0$.
The conclusion follows.
\end{proof}
\begin{lemma}
\label{lem:max_intersection_ratio}
Let $\slam$ and $\blam$ be measured laminations. If $\slam\ll\blam$, then
\begin{align*}
\sup_{\eta\in\measuredlams} \frac{i(\slam,\eta)}{i(\blam,\eta)}
= \max_j(f_j),
\end{align*}
where $\slam$ is expressed as $\slam = \sum_j f_j \blam_j$
in terms of the ergodic components $\blam_j$ of $\blam$.
If $\slam\not\ll\blam$, then the supremum is $+\infty$.
\end{lemma}
\begin{proof}
In the case where $i(\slam,\blam)>0$, we take $\eta:=\blam$ to get that
the supremum is~$\infty$.
So assume that $i(\slam,\blam)=0$.
In this case, we can write $\slam=\sum_j f_j \xi_j$ and
$\blam=\sum_j g_j \xi_j$, where the $\xi_j$, $j\in\{0,\dots,J\}$, are
a finite set of ergodic measured laminations pairwise having zero intersection
number,
and the $f_j$ and $g_j$ are non-negative coefficients such that, for all $j$,
either $f_j$ or $g_j$ is positive.
Relabel the indices so that $\max_j(f_j/g_j)=f_0/g_0$.
We have
\begin{align*}
I(\eta) := \frac{i(\slam,\eta)}{i(\blam,\eta)}
= \frac{\sum_j f_j i(\xi_j,\eta)}{\sum_j g_j i(\xi_j,\eta)}.
\end{align*}
Simple algebra establishes that $I(\eta)\le f_0/g_0$ for all $\eta$.
For each $C>0$, we apply Lemma~\ref{lem:cross_one}
to get a curve $\alpha_C$ such that $i(\xi_0,\alpha_C) > C i(\xi_j,\alpha_C)$
for all $j\in\{1,\dots,J\}$.
By choosing $C$ large enough, we can make $I(\alpha_C)$
as close as we like to $f_0/g_0$.
We conclude that $\sup_\eta I(\eta)=f_0/g_0$.
\end{proof}
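The ``simple algebra'' referred to above is the weighted-mediant bound $\sum_j f_j x_j \le (f_0/g_0)\sum_j g_j x_j$ for non-negative weights $x_j$. A quick numerical sanity check, with arbitrary illustrative coefficients (the weights $x_j$ play the role of the intersection numbers $i(\xi_j,\eta)$):

```python
import random

def ratio(f, g, x):
    # (sum_j f_j x_j) / (sum_j g_j x_j): the quantity I(eta) in coordinates
    return sum(fj * xj for fj, xj in zip(f, x)) / \
           sum(gj * xj for gj, xj in zip(g, x))

f = [3.0, 1.0, 2.0]            # illustrative coefficients only
g = [1.0, 2.0, 4.0]
bound = max(fj / gj for fj, gj in zip(f, g))   # here f_0/g_0 = 3

# no non-negative weight vector exceeds the bound ...
random.seed(0)
for _ in range(1000):
    x = [random.random() for _ in f]
    assert ratio(f, g, x) <= bound + 1e-12

# ... and concentrating the weight on the maximising component
# (the role played by the curves alpha_C in the proof) approaches it.
assert abs(ratio(f, g, [1.0, 1e-9, 1e-9]) - bound) < 1e-6
```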
\begin{proof}[Proof of Theorem~\ref{thm:detourcost}]
To lighten notation, write $\mu:=\blam$ and $\nu:=\slam$.
Let $F$ be the measured foliation corresponding to $\mu$.
So, $i(F,\alpha)=i(\mu,\alpha)$, for all $\alpha\in\curves$.
As in the proof of Theorem~\ref{thm:busemann}, we may find
a complete geodesic lamination $\mu'$ that is totally transverse to $F$.
Consider the stretch line $\geo(t):= \horocyclic_{\mu'}^{-1}(e^t F)$
directed by $\mu'$ and passing through $\horocyclic_{\mu'}^{-1}(F)$.
From the results in~\cite{papadopoulos_extension}, $\geo$ converges in the
positive direction to $[\mu]$ in the Thurston boundary.
So, by Theorem~\ref{thm:horothurston}, $\geo$ converges to $\Psi_\mu$
in the horofunction boundary. Therefore, since $\geo$ is a geodesic,
\begin{align*}
H(\Psi_\mu,\Psi_\nu)
&= \lim_{t\to\infty}
\Big(\stretchdist(b,\geo(t)) + \horofunction_\nu(\geo(t))\Big) \\
&= \lim_{t\to\infty} \Big(\log \sup_{\eta\in\measuredlams}
\frac{\hlength_{\geo(t)}(\eta)}{\hlength_b(\eta)}
+ \log \sup_{\eta\in\measuredlams}
\frac{i(\nu,\eta)}{\hlength_{\geo(t)}(\eta)} \Big)
- \log \sup_{\eta\in\measuredlams} \frac{i(\nu,\eta)}{\hlength_b(\eta)}.
\end{align*}
From~\cite[Cor.~2]{theret_thesis}, for every $\eta\in\measuredlams$,
there exists a constant $C_\eta$ such that
\begin{align*}
i(\horocyclic_{\mu'}(\geo(t)),\eta) \le \hlength_{\geo(t)}(\eta)
\le i(\horocyclic_{\mu'}(\geo(t)),\eta) + C_\eta,
\qquad\text{for all $t\ge0$}.
\end{align*}
So,
\begin{align}
\label{eqn:bounds}
i(F,\eta) \le e^{-t}\hlength_{\geo(t)}(\eta)
\le i(F,\eta) + e^{-t} C_\eta,
\end{align}
for all $\eta\in\measuredlams$ and $t\ge0$.
So, $e^{-t}\hlength_{\geo(t)}$ converges pointwise to $i(F,\cdot)=i(\mu,\cdot)$
on $\measuredlams$. Since $\geo(t)$ converges to $[\mu]$,
we get, by Proposition~\ref{prop:uniform_lengths},
that $\lengthfunc_{\geo(t)}$ converges to $\lengthfunc_{\mu}$
uniformly on compact sets. Combining this with the convergence of
$e^{-t}\hlength_{\geo(t)}$, and evaluating at any measured lamination,
we see that $e^{-t} \lfactor(\geo(t))$ converges
to $\lfactor(\mu)$. Using again the convergence of $\lengthfunc_{\geo(t)}$,
we conclude that $e^{-t}\hlength_{\geo(t)}$ converges to $i(\mu,\cdot)$
uniformly on compact sets.
So,
\begin{align*}
\lim_{t\to\infty} \sup_{\eta\in\measuredlams}
\frac{e^{-t}\hlength_{\geo(t)}(\eta)}{\hlength_b(\eta)}
= \sup_{\eta\in\measuredlams} \frac{i(\mu,\eta)}{\hlength_b(\eta)}.
\end{align*}
From the left-hand inequality of~(\ref{eqn:bounds}), we get
\begin{align*}
\sup_{\eta\in\measuredlams} \frac{i(\nu,\eta)}{e^{-t}\hlength_{\geo(t)}(\eta)}
\le \sup_{\eta\in\measuredlams} \frac{i(\nu,\eta)}{i(\mu,\eta)},
\qquad\text{for $t\ge0$}.
\end{align*}
But the limit of a supremum is trivially greater than or equal to the supremum
of the limits. We conclude that
\begin{align*}
\lim_{t\to\infty} \sup_{\eta\in\measuredlams}
\frac{i(\nu,\eta)}{e^{-t}\hlength_{\geo(t)}(\eta)}
= \sup_{\eta\in\measuredlams} \frac{i(\nu,\eta)}{i(\mu,\eta)}.
\end{align*}
The result now follows on applying Lemma~\ref{lem:max_intersection_ratio}.
\end{proof}
\begin{corollary}
\label{cor:detourmetric}
If $\slam$ and $\blam$ in $\measuredlams$ can be written in the form
$\slam =\sum_j f_j \eta_j$ and $\blam =\sum_j g_j \eta_j$, where the $\eta_j$
are ergodic elements of $\measuredlams$ that pairwise have zero intersection
number,
and the $f_j$ and $g_j$ are positive coefficients, then the detour metric
between $\Psi_\slam$ and $\Psi_\blam$ is
\begin{align*}
\delta(\Psi_\slam,\Psi_\blam)
= \log \max_j \frac{f_j}{g_j} + \log \max_j \frac{g_j}{f_j}.
\end{align*}
If $\slam$ and $\blam$ cannot be simultaneously written in this form, then
$\delta(\Psi_\slam,\Psi_\blam)=+\infty$.
\end{corollary}
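In the same coordinate model as before (hypothetical: laminations represented only by positive coefficient vectors over a common family of ergodic components), the corollary's formula is easy to evaluate; note that it vanishes exactly when the coefficient vectors are proportional, and is invariant under scaling either lamination, as befits a metric on projective classes:

```python
from math import log, inf

def detour_metric(f, g):
    # f, g: positive coefficient vectors over common ergodic components.
    # Returns delta(Psi_sigma, Psi_beta) per the corollary; +inf when the
    # two laminations are not simultaneously expressible in that form.
    if len(f) != len(g) or min(f + g) <= 0:
        return inf
    return log(max(fj / gj for fj, gj in zip(f, g))) + \
           log(max(gj / fj for fj, gj in zip(f, g)))

assert abs(detour_metric([2.0, 3.0], [1.0, 6.0]) - log(4.0)) < 1e-12
assert abs(detour_metric([2.0, 4.0], [1.0, 2.0])) < 1e-12  # proportional => 0
# projective invariance: scaling one argument leaves delta unchanged
assert detour_metric([6.0, 2.0, 4.0], [3.0, 2.0, 2.0]) == \
       detour_metric([3.0, 1.0, 2.0], [3.0, 2.0, 2.0])
```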
\section{Isometries}
\label{sec:isometries}
In this section, we prove Theorems~\ref{thm:isometries} and~\ref{thm:distinct}.
Recall that the curve complex $\curvecomp$ is the simplicial complex having\index{curve complex}
vertex set $\curves$, and where a set of vertices form a simplex when they have
pairwise disjoint representatives.
The automorphisms of the curve complex were characterised by\index{curve complex!automorphisms}
Ivanov~\cite{ivanov_automorphisms}, Korkmaz~\cite{korkmaz_automorphisms},
and Luo~\cite{luo_automorphisms}.\index{Ivanov--Korkmaz--Luo Theorem}
\begin{theorem}[Ivanov--Korkmaz--Luo]
\label{thm:ivanov_automorphism}
Assume that $S$ is not a sphere with four or fewer punctures,
nor a torus with two or fewer punctures. Then all automorphisms of
$\curvecomp$ are given by elements of $\modulargroup$.
\end{theorem}
We will also need the following theorem contained in~\cite{ivanov_isometries},
which was stated there for measured foliations rather than measured
laminations.
For each $\mu\in\measuredlams$, we define the set
$\mu^\perp := \{ \nu\in\measuredlams \mid i(\nu,\mu)=0 \}$.
\begin{theorem}[Ivanov]
\label{thm:ivanov_codimension}
Assume that $S$ is not a sphere with four or fewer punctures,
nor a torus with one or fewer punctures. Then the co-dimension of the set
$\mu^\perp$
in $\measuredlams$ is equal to 1 if and only if $\mu$ is a real
multiple of a simple closed curve.
\end{theorem}
By Theorem~\ref{thm:horothurston}, the horoboundary can be identified with
the Thurston boundary. So, the homeomorphism induced on the horoboundary by an
isometry of $\teichmullerspace$ may be thought of as a map from
$\projmeasuredlams$ to itself.
\begin{lemma}
\label{lem:preserveszero}
Let $\iso$ be an isometry of the Lipschitz metric.
For all $[\mu_1]$ and $[\mu_2]$ in $\projmeasuredlams$,
we have $i(\iso[\mu_1],\iso[\mu_2]) = 0$ if and only if $i([\mu_1],[\mu_2])=0$.
\end{lemma}
\begin{proof}
For any two elements $[\mu]$ and $[\eta]$ of $\projmeasuredlams$, we have,
by Proposition~\ref{prop:transformH}, that
$H(\Psi_{[\eta]},\Psi_{[\mu]})$ is finite if and only if
$H(\Psi_{\iso[\eta]},\Psi_{\iso[\mu]})$ is finite.
Also, from Theorem~\ref{thm:detourcost},
$H(\Psi_{[\eta]},\Psi_{[\mu]})$ is finite if and only if $[\mu]\ll[\eta]$.
It follows that $\iso$ preserves the relation $\ll$ on $\projmeasuredlams$.
Two elements $[\mu_1]$ and $[\mu_2]$ of $\projmeasuredlams$ satisfy
$i([\mu_1],[\mu_2])=0$ if and only if there is some projective measured
lamination $[\eta]\in\projmeasuredlams$ such that $[\mu_1]\ll[\eta]$
and $[\mu_2]\ll[\eta]$.
We conclude that $[\mu_1]$ and $[\mu_2]$ have zero intersection number
if and only if $\iso[\mu_1]$ and $\iso[\mu_2]$ have zero intersection number.
\end{proof}
\begin{lemma}
\label{lem:same_on_PML}
Let $\iso$ be an isometry of $(\teichmullerspace,\stretchdist)$.
Then, there is an extended mapping class that agrees with $\iso$ on
$\projmeasuredlams$.
\end{lemma}
\begin{proof}
By Theorem~\ref{thm:horothurston}, the horoboundary can be identified with
the Thurston boundary.
So, the map induced by $\iso$ on the horoboundary is a homeomorphism of
$\projmeasuredlams$ to itself.
We identify $\projmeasuredlams$ with the level set
$\{\mu\in\measuredlams \mid \hlength_b(\mu)=1\}$.
There is then a unique way of extending the map
$\iso:\projmeasuredlams\to\projmeasuredlams$ to a positively homogeneous map
$\iso_*:\measuredlams\to\measuredlams$. Evidently, $\iso_*$ is
also a homeomorphism.
From Lemma~\ref{lem:preserveszero},
we get that $(\iso_*\mu)^\perp = \iso_*(\mu^\perp)$ for all
$\mu\in\measuredlams$.
By Theorem~\ref{thm:ivanov_codimension}, real multiples of simple closed
curves can be characterised in $\measuredlams$ as those elements $\mu$
such that the co-dimension of the set $\mu^\perp$ is equal to 1.
Since $\iso_*$ is a homeomorphism on $\measuredlams$, it preserves
the co-dimension of sets.
We conclude that $\iso_*$ maps elements of $\measuredlams$ of the form
$\lambda\curve$, where $\lambda>0$ and $\curve\in\curves$,
to elements of the same form.
So, $\iso$ induces a bijective map on the vertices of the curve complex.
From Lemma~\ref{lem:preserveszero} it is clear that there is an edge
between two curves $\curve_1$ and $\curve_2$ in $\curvecomp$
if and only if there is an edge between $\iso\curve_1$ and $\iso\curve_2$.
Using the fact that a set of vertices span a simplex
exactly when every pair of vertices in the set have an edge connecting them,
we see that $\iso$ acts as an automorphism on the curve complex.
We now apply Theorem~\ref{thm:ivanov_automorphism} of Ivanov-Korkmaz-Luo
to deduce that there is some extended mapping class $\mapclass\in\modulargroup$
that agrees with $\iso$ on $\curves$, considered as a subset of
$\projmeasuredlams$. But $\curves$ is dense in $\projmeasuredlams$,
and the actions of $\mapclass$ and $\iso$ on $\projmeasuredlams$ are both continuous.
Therefore, $\mapclass$ and $\iso$ agree on all of $\projmeasuredlams$.
\end{proof}
Recall that $\maxset_{xy}$ is the subset of $\projmeasuredlams$
where $\hlength_y(\cdot) / \hlength_x(\cdot)$ attains its maximum.
\begin{lemma}
\label{lem:midpoints}
Let $x$, $y$, and $z$ be points in $\teichmullerspace$.
Then, $\stretchdist(x,y) + \stretchdist(y,z) = \stretchdist(x,z)$
if and only if $\maxset_{xy}$ and $\maxset_{yz}$ have an element in common,
in which case $\maxset_{xy} \intersection \maxset_{yz} = \maxset_{xz}$.
\end{lemma}
\begin{proof}
Assume $\maxset_{xy} \intersection \maxset_{yz}$ contains an element $[\nu]$.
So,
\begin{align}
\label{eqn:product_inequality}
\frac{\hlength_y(\nu)}{\hlength_x(\nu)}
\frac{\hlength_z(\nu)}{\hlength_y(\nu)}
\ge \frac{\hlength_y(\nu_1)}{\hlength_x(\nu_1)}
\frac{\hlength_z(\nu_2)}{\hlength_y(\nu_2)},
\qquad\text{for all $\nu_1$ and $\nu_2$ in $\measuredlams$.}
\end{align}
We conclude that
\begin{align*}
\frac{\hlength_z(\nu)}{\hlength_x(\nu)}
\ge \sup_{\nu_1\in\measuredlams}
\frac{\hlength_y(\nu_1)}{\hlength_x(\nu_1)}
\sup_{\nu_2\in\measuredlams}
\frac{\hlength_z(\nu_2)}{\hlength_y(\nu_2)},
\end{align*}
which implies $\stretchdist(x,z) \ge \stretchdist(x,y) + \stretchdist(y,z)$.
The opposite inequality is just the triangle inequality.
Taking $\nu_2=\nu_1$ in equation~(\ref{eqn:product_inequality}) yields
\begin{align*}
\frac{\hlength_z(\nu)}{\hlength_x(\nu)}
\ge \sup_{\nu_1\in\measuredlams}
\frac{\hlength_z(\nu_1)}{\hlength_x(\nu_1)},
\end{align*}
which implies that $[\nu]\in\maxset_{xz}$.
Now assume $\stretchdist(x,y) + \stretchdist(y,z) = \stretchdist(x,z)$.
For any element $[\nu]$ of $\maxset_{xz}$, we have
$\hlength_z(\nu) / \hlength_x(\nu) = \exp(\stretchdist(x,z))$.
We also have $\hlength_z(\nu) / \hlength_y(\nu) \le \exp(\stretchdist(y,z))$.
Therefore,
\begin{align*}
\frac{\hlength_y(\nu)}{\hlength_x(\nu)}
= \frac{\hlength_z(\nu)}{\hlength_x(\nu)}
\bigg/
\frac{\hlength_z(\nu)}{\hlength_y(\nu)}
\ge \frac{\exp(\stretchdist(x,z))}{\exp(\stretchdist(y,z))}
= \exp(\stretchdist(x,y)),
\end{align*}
and so $[\nu]\in\maxset_{xy}$.
A similar argument shows that $[\nu]\in\maxset_{yz}$.
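Explicitly, since $\hlength_z(\nu)/\hlength_x(\nu)=\exp(\stretchdist(x,z))$
and $\hlength_y(\nu)/\hlength_x(\nu)\le\exp(\stretchdist(x,y))$, the similar
computation is
\begin{align*}
\frac{\hlength_z(\nu)}{\hlength_y(\nu)}
= \frac{\hlength_z(\nu)}{\hlength_x(\nu)}
\bigg/
\frac{\hlength_y(\nu)}{\hlength_x(\nu)}
\ge \frac{\exp(\stretchdist(x,z))}{\exp(\stretchdist(x,y))}
= \exp(\stretchdist(y,z)).
\end{align*}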
\end{proof}
\begin{lemma}
\label{lem:maxset_mu}
Let $\geo$ be a geodesic converging in the negative direction to a maximal
uniquely-ergodic $[\mu]\in\projmeasuredlams$.
Then, $\maxset_{\geo(s)\geo(t)}=\{[\mu]\}$,
for all $s$ and $t$ with $s<t$.
\end{lemma}
\begin{proof}
For all $u<t$, write $\maxset_u := \maxset_{\geo(u)\geo(t)}$.
By Lemma~\ref{lem:midpoints}, $\maxset_u\subset \maxset_v$ for all $u$ and $v$ satisfying
$u<v<t$. But $\maxset_u$ is compact and non-empty for all $u$.
Therefore, there is some $[\nu]\in\projmeasuredlams$ such that $[\nu]\in \maxset_u$,
for all $u<t$. Applying Lemma~\ref{lem:optimal_dir}, we get that $i(\mu,\nu)=0$.
Since $[\mu]$ is maximal and uniquely ergodic, this implies that $[\nu]=[\mu]$.
We have proved that $[\mu]\in \maxset_s$.
Thurston showed~\cite{thurston_minimal} that there is a geodesic lamination
associated to any given pair of hyperbolic structures that contains every
maximally stretched lamination.\index{maximally stretched lamination}
Therefore, every pair of elements of $\maxset_s$ has zero intersection number.
We conclude that $\maxset_s=\{[\mu]\}$.
\end{proof}
\begin{lemma}
\label{lem:unique_geo}
Let $\mu$ be a complete geodesic lamination having maximal uniquely-ergodic
stump $\mu_0$, and let $x$ be a point of $\teichmullerspace$.
Then, the stretch line $\stretchline_{\mu,x}$ is the
unique geodesic converging in the negative direction to $[\mu_0]$ and in the
positive direction to $[F_\mu(x)]$.
\end{lemma}
\begin{proof}
That $\stretchline_{\mu,x}$ is a geodesic was proved in~\cite{thurston_minimal},
and that it has the required convergence properties was proved
in~\cite{papadopoulos_extension} and~\cite{theret_negative}.
Let $\geo$ be any geodesic converging as in the statement of the lemma.
Choose $s$ and $t$ in $\mathbb{R}$ such that $s<t$.
By Lemma~\ref{lem:maxset_mu}, $\maxset_{\geo(s)\geo(t)}=\{[\mu_0]\}$.
Therefore, by~\cite[Theorem 8.5]{thurston_minimal}, there exists a geodesic
from $\geo(s)$ to $\geo(t)$ consisting of a concatenation of stretch lines,
each of which stretches along some complete geodesic lamination
containing $\mu_0$.
However, the only complete geodesic lamination containing $\mu_0$ is $\mu$,
and so the constructed geodesic is just a segment of the stretch line
$\stretchline_{\mu,\geo(s)}$. So $\stretchline_{\mu,\geo(s)}$ passes
through $\geo(t)$. We conclude that $\stretchline_{\mu,\geo(s)}$ equals
$\stretchline_{\mu,\geo(t)}$ up to reparameterisation.
Since $s$ and $t$ are arbitrary, we get that $\geo$
is just a stretch line directed by~$\mu$. There is only one stretch line
directed by $\mu$ converging to $[F_{\mu}(x)]$ in the forward direction,
namely $\stretchline_{\mu,x}$.
\end{proof}
\begin{lemma}
\label{lem:same_on_T}
Let $\iso$ and $\mapclass$ be two isometries of
$(\teichmullerspace,\stretchdist)$.
If the extensions of $\iso$ and $\mapclass$ coincide on the boundary
$\projmeasuredlams$, then $\iso=\mapclass$.
\end{lemma}
\begin{proof}
Let $x\in\teichmullerspace$. Choose a complete geodesic lamination $\mu$
such that its stump $\mu_0$ is maximal and uniquely ergodic.
Associated to $\mu$ and $x$ is the horocyclic foliation $F_{\mu}(x)$.
The stretch line $\stretchline_{{\mu},x}$ is geodesic in the Lipschitz
metric $\stretchdist$, and converges in the negative direction to $[\mu_0]$
and in the positive direction to $[F_{\mu}(x)]$;
see~\cite{papadopoulos_extension}.
By Lemma~\ref{lem:unique_geo}, it is the only geodesic to do so.
The map $\mapclass^{-1}\after\iso$ is an isometry on
$(\teichmullerspace,\stretchdist)$, and so maps geodesics to geodesics.
It extends continuously to the horofunction/Thurston boundary, and fixes
every point of this boundary.
We conclude that it leaves $\stretchline_{{\mu},x}$ invariant as a set.
Clearly, it acts by translation along $\stretchline_{{\mu},x}$.
Now let $\mu'$ be a complete geodesic lamination whose stump $\mu'_0$ is
maximal and uniquely ergodic, and distinct from both $\mu_0$ and the support
of the measured lamination associated to $F_{\mu}(x)$.
By reasoning similar to that above, $\mapclass^{-1}\after\iso$ also acts
by translation along the stretch line $\stretchline_{{\mu'},x}$.
If the translation distance along $\stretchline_{{\mu'},x}$ is non-zero,
then either the iterates $(\mapclass^{-1}\after\iso)^n(x)$ or the iterates
$(\mapclass^{-1}\after\iso)^{-n}(x)$ of the inverse map converge to $[\mu'_0]$.
But this is impossible because $[\mu'_0]$ is not a limit point of
$\stretchline_{\mu,x}$.
Hence, the translation distance is zero and $x$ is fixed by
$\mapclass^{-1}\after\iso$.
We have proved that $\mapclass(x)=\iso(x)$ for any $x\in\teichmullerspace$.
\end{proof}
\begin{theorem}
\label{thm:isometries}
If $S$ is not a sphere with four or fewer punctures,
nor a torus with two or fewer punctures, then every isometry of
$\teichmullerspace$ with Thurston's Lipschitz metric is an element
of the extended mapping class group $\modulargroup$.
\end{theorem}
\begin{proof}
This is a consequence of Lemmas~\ref{lem:same_on_PML} and~\ref{lem:same_on_T}.
\end{proof}
\begin{theorem}
\label{thm:distinct}
Let $S_{g,n}$ and $S_{g',n'}$ be surfaces of negative Euler characteristic.
Assume $\{(g,n), (g',n')\}$ is different from each of the three sets
\begin{align*}
\{(1,1), (0,4)\}, \qquad \{(1,2), (0,5)\}, \qquad\text{and}\qquad \{(2,0), (0,6)\}.
\end{align*}
If $(g,n)$ and $(g',n')$ are distinct, then the Teichm\"uller spaces
$\teichmullerspc(S_{g,n})$ and $\teichmullerspc(S_{g',n'})$
with their respective Lipschitz metrics are not isometric.
\end{theorem}
\begin{proof}
Let $S_{g,n}$ and $S_{g',n'}$ be two surfaces of negative Euler characteristic,
and let $\iso$ be an isometry between the associated Teichm\"uller spaces
$\teichmullerspc(S_{g,n})$ and $\teichmullerspc(S_{g',n'})$, each endowed
with its respective Lipschitz metric.
Recall that the dimension of $\teichmullerspc(S_{g,n})$ is $6g-6+2n$.
Consider the cases not covered by Ivanov's Theorem~\ref{thm:ivanov_codimension},
namely the sphere with three or four punctures, and the torus with one
puncture. A comparison of dimension shows that $\teichmullerspc(S_{0,3})$
is not isometric to any other Teichm\"uller space, and that
$\teichmullerspc(S_{0,4})$ and $\teichmullerspc(S_{1,1})$ may be isometric
to each other but not to any other Teichm\"uller space.
So assume that neither $\teichmullerspc(S_{g,n})$ nor
$\teichmullerspc(S_{g',n'})$ is one of these exceptional surfaces.
By the same reasoning as in the first part of the proof of
Lemma~\ref{lem:same_on_PML}, we conclude that $\iso$ induces an isomorphism
from the curve complex of $S_{g,n}$ to that of $S_{g',n'}$.
However, according to~\cite[Lemma~2.1]{luo_automorphisms},
the only isomorphisms between curve complexes are
$\curvecmp(S_{1,2})\cong\curvecmp(S_{0,5})$ and
$\curvecmp(S_{2,0})\cong\curvecmp(S_{0,6})$.
The conclusion follows.
\end{proof}
\bibliographystyle{abbrv}
\section{Introduction}
\setcounter{equation}{0}
The study of $\pi\pi$ scattering has been one of the classical methods
for investigating the nature of the strong interaction. Many elegant ideas have
been proposed \cite{gas}. At the present time the standard approach at low
energies
is based on chiral perturbation theory \cite{chp}. This enables one to nicely
understand
the
scattering amplitudes near threshold ($<400~MeV$). However, it
is very difficult to extend this treatment to higher energies since the pole
structure of the resonances in this region cannot be easily reproduced by a
truncated power series expansion in energy. Our interest in this paper
will be to investigate in a schematic sense how the description of $\pi\pi$
scattering might be extended up to slightly past the 1 GeV region in a new
chiral picture.
It has been clear for many years that in the region just beyond the
threshold region
the effects of $\rho$ exchange dominate. However in the chiral
perturbation program the effects of the $\rho$ arise in the second
order of the energy expansion \cite{vmd}. This is of course due to the fact
that the usual
chiral
program is mainly devoted to improving the description of the dynamics in
the threshold region in which the $\rho$ does not explicitly appear.
For going beyond the threshold region we would like an approach which can
treat the $\rho $ and other resonances at the first stage of an iterative
procedure.
Such an approach is suggested by the large $N_c$ approximation
to QCD. As reviewed in \cite{1n}, for example, the leading order $\displaystyle{\frac{1}{N_c}}$
approximation to $\pi\pi$ scattering is obtained by summing all
possible tree diagrams corresponding to some effective Lagrangian
which includes an infinite number \cite{string} of bosonic resonances
of each possible spin. In addition it is allowed to include all
possible contact terms. This clearly has the right structure but
initially seems to be so general as to be practically useless. Here
we will argue that this may not be the case.
An amplitude constructed according to the above prescription will
automatically satisfy crossing symmetry. On the other hand just calculating
the tree approximation to an effective Lagrangian will not guarantee
that unitarity is satisfied. This is the handle we will use to
try to investigate additional structure. Unitarity has of course the
consequence that the amplitude must have some suitable imaginary term which
in the usual field theory is provided by loop diagrams. However the leading
$\displaystyle{\frac{1}{N_c}}$ approximation will give a purely real amplitude away
from the singularities at the direct $s-channel$ poles. We may consider the
imaginary part of the leading $\displaystyle{\frac{1}{N_c}}$ amplitude to consist just of the
sum of delta functions at each such singularity. Clearly, the real
part has a much more interesting structure and we will mainly
confine our attention to it. Furthermore, we will assume that the
singularities in the real part are {\it regularized} in a conventional way.
Unitarity has the further
consequence that the real parts of the partial wave amplitudes must satisfy
certain well known
bounds. The crucial question is how these bounds get satisfied since, as we
will see, individual contributions tend to violate them
badly. At first one might expect that all of the infinite
number of resonances are really needed to obtain cancellations. However the
success
of chiral dynamics at very low energies where none of the resonances
have been taken into account suggests that this might not be the case.
At the very lowest energy the theory is described by a chiral invariant
contact interaction which, however, quite soon badly violates the unitarity
bound. It
will be observed that the $\rho$ exchange tames this bad behavior
dramatically, so that the unitarity bound is not badly broken until an
energy of around $2~GeV$.
between resonances which enforces the bound. The local cancellation is not
easily predicted but if true in general it would greatly simplify the
task of extending the phenomenological description of scattering processes
to higher energies. Including just the effects of the $\rho$ and the $\pi$
particles corresponds to including just the $s-wave$ quark antiquark
states in our model. A natural next step, which we shall also explore here,
would be to include the exchange of the allowed $p-wave$ quark antiquark
states. At the same time the quark model suggests that we include the first
radial excitations of the s-wave states which in fact lie near the
$p-wave$ states. Theoretically glueball and exotic states are suppressed
in $\pi\pi$ scattering according to the large $N_c$ approximation \cite{1n}.
In our analysis the pions will be treated as approximate Goldstone bosons
corresponding to the assumption that the theory has a spontaneously broken
chiral symmetry. Actually the need for spontaneous breakdown of this symmetry
can be argued from the $\displaystyle{\frac{1}{N_c}}$ approach itself \cite{sbs}. The effective
Lagrangian we use
shall be constructed to respect this symmetry. Furthermore for the purpose of
the initial exploration being performed here we shall consider resonance
interaction
terms with the minimum number of derivatives and shall also neglect chiral
symmetry breaking terms involving the resonances. When going beyond the initial
stage we will be forced to proceed in a more phenomenological way.
In section 2, after the presentation of the partial wave amplitudes of
interest, we show how the introduction of the $\rho-$meson in a chirally
invariant manner substantially delays the onset of the severe unitarity
violation which would be present in the simplest chiral lagrangian of pions.
The program suggested by this {\it local cancellation} is sketched.
Section 3 is concerned with the contribution to the pion scattering amplitude
from the {\it next group} of resonances - those in the range of the $p-wave$
$\overline{q}q$ bound states in the quark model. It is observed that a four derivative
contact term can be used to restore unitarity up to about $1~GeV$.
In section 4, a possibly more physical way to restore unitarity is presented
which makes use of a $\displaystyle{\frac{1}{N_c}}$ subleading contribution due to a very low mass
scalar (presumably $\overline{q}q\overline{q}q$) state.
Finally, section 5 contains a brief summary and discussion
of some directions for
future work.
\section{Current algebra plus $\rho$ exchange}
\setcounter{equation}{0}
In this section we will study the partial waves for $\pi\pi$ scattering computed
in a chiral Lagrangian model which contains both the pseudoscalar and vector
mesons, (i.e., the lowest lying s-wave quark antiquark bound states).
The kinematics are discussed in Appendix A, where the partial wave amplitudes
$T^I_l$ are defined. They have the convenient decomposition:
\begin{equation}
T^I_l(s)=\frac{(\eta^I_l (s)~e^{2i\delta^I_l(s)}-1)}{2i}
\end{equation}
\noindent
where $\delta^I_l(s)$ are the phase shifts and $\eta^I_l(s)$ (satisfying
$0<\eta^I_l(s)\leq 1$) are the elasticity parameters. Extracting the real
and imaginary parts via
\begin{eqnarray}
R^I_l&=&\frac{\eta^I_l \sin(2\delta^I_l)}{2},\\
I^I_l&=&\frac{1-\eta^I_l \cos(2\delta^I_l)}{2},
\end{eqnarray}
\noindent
leads to the very important bounds
\begin{equation}
\big|R^I_l\big|\leq\frac{1}{2},~~~~~~~~~~~~~0\leq I^I_l\leq 1.
\label{eq:bound}
\end{equation}
For fixed $\eta^I_l$ the real and imaginary parts lie on the well known circle
in the Argand-plane $\displaystyle{{R^I_l}^2+(I^I_l-\frac{1}{2})^2
={(\frac{\eta^I_l}
{2})}^2}$. This formula also enables us to solve for $I^I_l$ as :
\begin{equation}
I^I_l=\frac{1}{2}\left[1\pm \sqrt{{\eta^I_l}^2-4{R^I_l}^2}\right].
\label{eq:Ima}
\end{equation}
\noindent
Let us use (\ref{eq:Ima}) for an initial orientation. Near threshold
$\eta^I_l=1$, $R^I_l$ is small and we should choose the minus sign in
(\ref{eq:Ima}) so that
\begin{equation}
{I^I_l}(s)\approx {[R^I_l]}^2 .
\label{eq:approx}
\end{equation}
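The relations above are easy to check numerically. The following standalone sketch (not part of the paper; sample values of $\eta^I_l$ and $\delta^I_l$ are arbitrary) verifies the Argand-circle relation and the fact that, near threshold where $\cos(2\delta^I_l)>0$, the minus sign in (\ref{eq:Ima}) reproduces $I^I_l$:

```python
import math

def parts(eta, delta):
    # T = (eta * exp(2i*delta) - 1) / (2i): real and imaginary parts
    R = eta * math.sin(2 * delta) / 2
    I = (1 - eta * math.cos(2 * delta)) / 2
    return R, I

eta, delta = 0.8, 0.3          # sample elasticity and phase shift
R, I = parts(eta, delta)

# Argand-circle relation: R^2 + (I - 1/2)^2 = (eta/2)^2
assert abs(R**2 + (I - 0.5)**2 - (eta / 2)**2) < 1e-12

# For cos(2*delta) > 0 the minus sign in the quadratic solution applies
I_minus = 0.5 * (1 - math.sqrt(eta**2 - 4 * R**2))
assert abs(I - I_minus) < 1e-12
```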
\noindent
In the large $N_c$ limit the amplitude near threshold is purely real
and of the order
$\displaystyle{\frac{1}{N_c}}$. This is consistent with (\ref{eq:approx}) which
shows that $I^I_l(s)$ is of order $\displaystyle{\frac{1}{{N_c}^2}}$ and hence
comes in at the second order. This agrees with the chiral perturbation theory
approach \cite{chp} in which $R^I_l(s)$ comes from the lowest order tree diagram
while
$I^I_l$ arises from the next order loop diagram. On the other hand, when we
depart from the threshold region the $\displaystyle{\frac{1}{N_c}}$ approach
treats the
contribution of the $\rho$-meson at first order while the chiral perturbation
theory approach treats it at second and higher orders.
Now, pion physics at very low energies is described by the effective chiral
Lagrangian,
\begin{equation}
L_{1}=-\frac{F_\pi^2}{8}Tr(\partial_\mu U\partial_\mu U^\dagger)
+Tr(B(U+U^\dagger))
\label{eq:algebra}
\end{equation}
\noindent
wherein $\displaystyle{U=e^{2i\frac{\phi}{F_\pi}}}$ and $\phi$ is the $3\times3$
matrix of pseudoscalar fields. $F_\pi=132~MeV$ is the pion decay constant.
Furthermore $B=diag(B_1,B_1,B_3)$ , where $\displaystyle{B_1=\frac{F_\pi^2m_{\pi}^2}
{8}}$ and $\displaystyle{B_3=\frac{F_\pi^2(m_{K}^2-\frac{m_{\pi}^2}{2})}{4}}$,
describes
the minimal symmetry breaking. We shall choose $m_{\pi}=137~MeV$.
A straightforward computation using (\ref{eq:algebra}) yields the $\pi\pi$
scattering amplitude \cite{ca} defined in (A.1):
\begin{equation}
A(s,t,u)=2\frac{(s-m_{\pi}^2)}{F_\pi^2}.
\label{eq:Aca}
\end{equation}
\noindent
This equation will be called the {\it current algebra result}. With
(A.2)-(A.4) we obtain $R^0_0(s)=T^0_0(s)$ as illustrated in Fig. 1.
The experimental Roy curves \cite{roy} are also shown.
Up till about $0.5~GeV$ the
agreement is quite reasonable (and can be fine tuned with second order
chiral perturbation terms) but beyond this point $R^0_0$ keeps increasing
monotonically and badly violates the unitarity bound (\ref{eq:bound}). We will
see that the introduction of the $\rho$-meson greatly improves the situation.
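The location of the violation can be estimated with a short computation. The sketch below assumes the conventional partial-wave normalization $T^I_l = \frac{1}{64\pi}\int_{-1}^{1}d\cos\theta\, P_l(\cos\theta)\,T^I$ and the standard isospin decomposition $T^0 = 3A(s,t,u)+A(t,s,u)+A(u,t,s)$ (the paper's precise conventions are in its Appendix A, which is not reproduced here and may differ); with these, the current algebra amplitude gives $T^0 = 2(2s-m_\pi^2)/F_\pi^2$ and a linearly rising $R^0_0$ that crosses the bound $1/2$ just below $0.5~GeV$:

```python
import math

F_PI, M_PI = 0.132, 0.137      # GeV, values chosen in the text

def R00_ca(s):
    # T^0 = 3A(s,t,u) + A(t,s,u) + A(u,t,s) = 2(2s - m_pi^2)/F_pi^2
    # for A(s,t,u) = 2(s - m_pi^2)/F_pi^2; l = 0 projection multiplies by 1/(32 pi)
    return (2 * s - M_PI**2) / (16 * math.pi * F_PI**2)

# threshold value: the Weinberg-type scattering length 7 m_pi^2 / (16 pi F_pi^2)
a00 = R00_ca(4 * M_PI**2)
assert abs(a00 - 0.150) < 0.005

# scan for the energy (in GeV) at which R^0_0 first exceeds 1/2
root_s = next(e / 1000 for e in range(280, 2000)
              if R00_ca((e / 1000)**2) > 0.5)
assert 0.45 < root_s < 0.50    # just below 0.5 GeV
```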
There are several different but essentially equivalent ways to introduce vector
mesons into the chiral invariant Lagrangian. A simple way \cite{joe} is to
treat the
vectors as gauge particles of the chiral group and then break the local
symmetry by introducing mass-type terms. The $3\times 3$ matrix of the
vector fields, $\rho_{\mu}$ is related to auxiliary linearly transforming
gauge fields $A^L_\mu$ and $A^R_\mu$ by
\begin{eqnarray}
A^L_\mu &=& \xi\rho_{\mu}\xi^\dagger+
\frac{i}{\tilde{g}}\xi\partial_{\mu}\xi^\dagger\\
A^R_\mu &=& \xi^\dagger\rho_{\mu}\xi+
\frac{i}{\tilde{g}}\xi^\dagger\partial_{\mu}\xi,
\end{eqnarray}
\noindent
where $\displaystyle{\xi\equiv U^{\frac{1}{2}}}$ and $\tilde{g}$ is a dimensionless
coupling constant. Under a chiral transformation $U\rightarrow U_L U U_R^\dagger$
\cite{nlinear},
\begin{equation}
\xi\rightarrow U_L\xi K^\dagger\equiv K\xi U_R^\dagger
\label{eq:transf}
\end{equation}
\noindent
(which also defines the matrix $K(\phi,U_L,U_R)$), and $\rho_\mu$ behaves as
\begin{equation}
\rho_\mu\rightarrow K\rho_{\mu}K^\dagger +\frac{i}{\tilde{g}}K\partial_{\mu}K^\dagger .
\end{equation}
It is convenient to define
\begin{eqnarray}
v_{\mu} &=& \frac{i}{2}\left(\xi\partial_{\mu}\xi^\dagger+
\xi^\dagger \partial_{\mu}\xi\right)\\
p_{\mu} &=& \frac{i}{2}\left(\xi\partial_{\mu}\xi^\dagger-
\xi^\dagger \partial_{\mu}\xi\right)
\label{eq:pmu}
\end{eqnarray}
which transform as
\begin{eqnarray}
p_{\mu} &\rightarrow& Kp_{\mu}K^\dagger\\
v_{\mu} &\rightarrow& Kv_{\mu}K^\dagger+iK\partial_{\mu}K^\dagger.
\end{eqnarray}
\noindent
These quantities enable us to easily construct chiral invariants and will also
be useful later \cite{kugo}. The chiral Lagrangian including both
pseudoscalars
and vectors that one gets can be rewritten as the sum of $L_1$, in
(\ref{eq:algebra}) and the following:
\begin{equation}
L_2=-\frac{1}{4}Tr(F_{\mu\nu}(\rho)F_{\mu\nu}(\rho))-
\frac{m^2_\rho}{2\tilde{g}^2}Tr\left[\left(\tilde{g}\rho_\mu-v_\mu\right)^2\right],
\label{eq:rho}
\end{equation}
where $F_{\mu\nu}(\rho)=\partial_{\mu}\rho_{\nu}-\partial_{\nu}\rho_{\mu}-
i\tilde{g} [\rho_\mu,\rho_\nu]$. The coupling constant $\tilde{g}$ is related to the
$\rho$-meson width by
\begin{equation}
\Gamma(\rho\rightarrow 2\pi)=\frac{g^2_{\rho\pi\pi}{p_{\pi}}^3}{12\pi m^2_\rho},
~~~~~~~~~~~~~~g_{\rho\pi\pi}=\frac{m^2_\rho}{\tilde{g} F_\pi^2}.
\end{equation}
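The width formula can be checked against the parameter values used in this section ($m_\rho = 0.769~GeV$, $g_{\rho\pi\pi}=8.56$). A standalone numerical sketch, taking $p_\pi$ to be the pion momentum in the $\rho$ rest frame:

```python
import math

M_RHO, M_PI, G_RHO_PI_PI = 0.769, 0.137, 8.56   # GeV, GeV, dimensionless

# pion momentum in the rho rest frame
p_pi = math.sqrt(M_RHO**2 / 4 - M_PI**2)

gamma = G_RHO_PI_PI**2 * p_pi**3 / (12 * math.pi * M_RHO**2)
assert abs(gamma - 0.152) < 0.002   # about 152 MeV, close to the observed rho width
```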
\noindent
We choose $m_\rho=0.769~GeV$ and $g_{\rho\pi\pi}=8.56$. Symmetry breaking
contributions involving the $\rho$ are given elsewhere \cite{ssw} but are small
and will be neglected here. The Lagrangian piece in (\ref{eq:rho}) yields
both a pole-type contribution (from the $\rho_{\mu}v_{\mu}$ cross term)
and
a contact term contribution (from the $v_{\mu}v_{\mu}$ term) to the
amplitude at tree level \cite{joe}:
\begin{equation}
A(s,t,u)=(\ref{eq:Aca})-\frac{g^2_{\rho\pi\pi}}{2}\left(
\frac{u-s}{m^2_\rho-t}+\frac{t-s}{m^2_\rho-u}\right)+
\frac{g^2_{\rho\pi\pi}}{2m^2_\rho}\left[(t-s)+(u-s)\right]
\label{eq:rhoamp}
\end{equation}
\noindent
We notice that the entire second-term in (\ref{eq:rho}) is chiral invariant
since $v_{\mu}$ and $\tilde{g} \rho_{\mu}$ transform identically. However
the $Tr(\rho_{\mu}v_{\mu})$ and $Tr(v_{\mu}v_{\mu})$ pieces are not
separately chiral invariant. This shows that the addition of the $\rho$
meson in a chiral invariant manner necessarily introduces a contact term
in addition to the minimal pole term. Adding up the terms in (\ref{eq:rhoamp})
yields finally
\begin{equation}
A(s,t,u)=2\frac{(s-m_{\pi}^2)}{F_\pi^2}-\frac{g^2_{\rho\pi\pi}}{2m^2_\rho}\left[
\frac{t(u-s)}{m^2_\rho-t}+\frac{u(t-s)}{m^2_\rho-u}\right]
\label{eq:rhocomp}
\end{equation}
In this form we see that the threshold (current algebra) results
are unaffected since the second term drops out at $t=u=0$. An
alternative approach \cite{vector} to obtaining (\ref{eq:rhocomp}) involves
introducing a chiral invariant $\rho\pi\pi$ interaction with two more
derivatives.
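That the $\rho$-induced term in (\ref{eq:rhocomp}) drops out at $t=u=0$ can be verified directly. A minimal numerical sketch, using the parameter values quoted in the text:

```python
def A_ca(s, t, u, F=0.132, m=0.137):
    # current algebra amplitude: A(s,t,u) = 2(s - m^2)/F^2
    return 2 * (s - m**2) / F**2

def A_full(s, t, u, F=0.132, m=0.137, mrho=0.769, g=8.56):
    # current algebra plus the combined rho pole-plus-contact term
    return A_ca(s, t, u, F, m) - g**2 / (2 * mrho**2) * (
        t * (u - s) / (mrho**2 - t) + u * (t - s) / (mrho**2 - u))

# at t = u = 0 the rho contribution vanishes identically
for s in (0.1, 0.25, 0.5, 1.0):     # GeV^2
    assert abs(A_full(s, 0.0, 0.0) - A_ca(s, 0.0, 0.0)) < 1e-12
```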
$A(s,t,u)$ has no singularities in the physical region. Reference to
(\ref{eq:isospin})
shows that the isospin amplitudes $T^0$ and $T^2$ also have no singularities.
However the $T^1$ amplitude has the expected singularity at $s=m^2_\rho$.
This may be cured in a conventional way, while still maintaining crossing
symmetry, by the replacements
\begin{equation}
\frac{1}{m^2_\rho-t,u}\rightarrow\frac{1}{m^2_\rho-t,u-im_\rho\Gamma_\rho}
\label{eq:propag}
\end{equation}
in (\ref{eq:rhocomp}) \footnote{One gets a slightly different result if the
regularization is applied to (\ref{eq:rhoamp}).}. A modification of this
sort would enter automatically
if we were to carry the computation to order $\displaystyle{\frac{1}{N^2_c}}$.
However we shall regard (\ref{eq:propag}) as a phenomenological regularization
of
the leading amplitude.
Now let us look at the actual behavior of the real parts of the partial
wave amplitudes.
$R^0_0$, as obtained from (\ref{eq:rhocomp}) with (\ref{eq:propag}),
is graphed in Fig. 2 for an extensive range of $\sqrt{s}$, together with the
{\it pions only} result from (\ref{eq:Aca}).
We immediately see that there is a remarkable improvement; the effect of
adding $\rho$ is to bend back the rising $R^0_0(s)$ so there is no longer a
drastic violation of the unitarity bound until after $\sqrt{s}=2~GeV$.
There is still a relatively small violation which we will discuss later.
Note that the modification (\ref{eq:propag}) plays no role in the improvement
since it is only the non singular $t$ and $u$ channel exchange diagrams which
contribute.
It is easy to see that the {\it delayed} drastic violation of the
unitarity bound $\displaystyle{\big|R^I_l\big|\leq\frac{1}{2}}$ is a property
of all partial waves. We have already learned from (\ref{eq:rhocomp}) that the
amplitude $A(s,t,u)$ starts out rising linearly with $s$. Now (\ref{eq:rhoamp})
and (\ref{eq:kin}) show that for large $s$ the $\rho$ exchange terms behave
as $s^0$. The leading large $s$ behavior will therefore come from the sum of
the original {\it current-algebra} term and the new {\it contact-term}:
\begin{equation}
A(s,t,u)\sim\frac{2s}{F_\pi^2}\left(1-\frac{3k}{4}\right),~~~~~~~~~~~~~
k\equiv \frac{m^2_\rho}{\tilde{g}^2 F_\pi^2}.
\label{eq:rule}
\end{equation}
But $k$ is numerically around $2$ \cite{ksrf}, so $A(s,t,u)$ eventually {\it
decreases}
linearly with $s$.
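The asymptotic coefficient in (\ref{eq:rule}) can be confirmed numerically. Note that $k = m^2_\rho/(\tilde{g}^2 F_\pi^2) = (g_{\rho\pi\pi}F_\pi/m_\rho)^2 \approx 2.16$ for the values used here. A standalone check, evaluated at $\cos\theta = 0$, i.e. $t=u=(4m_\pi^2-s)/2$:

```python
F, M, MRHO, G = 0.132, 0.137, 0.769, 8.56    # GeV units as in the text

def A_full(s, t, u):
    # current algebra plus combined rho exchange, as in the text
    return 2 * (s - M**2) / F**2 - G**2 / (2 * MRHO**2) * (
        t * (u - s) / (MRHO**2 - t) + u * (t - s) / (MRHO**2 - u))

k = (G * F / MRHO)**2                        # = m_rho^2 / (gtilde^2 F^2)
assert abs(k - 2.16) < 0.01

s = 1.0e4                                    # GeV^2, far above the resonances
t = u = (4 * M**2 - s) / 2                   # cos(theta) = 0
asym = (2 * s / F**2) * (1 - 3 * k / 4)      # the asymptotic form
assert abs(A_full(s, t, u) / asym - 1) < 0.01
```

Since $k>4/3$, the coefficient $1-3k/4$ is negative, which is the turn-around described above.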
This turn-around, which is due to the contact term that enforces chiral
symmetry, delays the onset of drastic unitarity violation until well
after the $\rho$ mass. It thus seems natural to speculate that, as we go up
in energy, the leading tree contributions from the resonances we encounter
(including both crossed-channel and $s$-channel exchange) conspire to
keep the $R^I_l(s)$ within the unitarity bound. We will call this possibility,
which would require that additional resonances beyond the $\rho$ come into
play when $R^I_l(s)$ from (\ref{eq:rhocomp}) start getting out of bound,
{\it local} cancellation.
In Fig. 3 we show the partial waves $R^1_1$ and $I^1_1$
computed using (\ref{eq:rhocomp}) and (\ref{eq:propag}). Not surprisingly,
these display the standard resonant forms.
For completeness we present the $R^2_0$ and $R^0_2$
amplitudes in Fig. 4.
We may summarize by saying that the results of this section suggest
investigating
the
following recipe for a reasonable approximation to the $\pi\pi$ scattering
amplitude up to a certain scale
$E_{max}$.
\begin{itemize}
\item[1.]{Include all resonances whose masses are less than $E_{max}+\Delta$,
where $\Delta\approx$ {\it several-hundred} $MeV$.
This expresses the hoped-for local cancellation property.}
\item[2.]{Construct all possible chiral invariants which can contribute,
presumably using the minimal number of derivatives. Compute all $\pi\pi\rightarrow\pi\pi$
tree diagrams, including contact terms. {\it Regulate} the resonance
denominators in a manner similar to (\ref{eq:propag}), but restrict attention
to the real part. Interpret the manifestly crossing symmetric result as
the leading order in $\displaystyle{\frac{1}{N_c}}$ real $\pi\pi$ scattering
amplitude.}
\item[3.] {Obtain the imaginary parts of the partial wave amplitudes,
using (\ref{eq:Ima}). The $\eta^I_l(s)$ might be computed by including
channels other than $\pi\pi$.}
\end{itemize}
We will start to explore this program by checking whether the
inclusion of the {\it next group} of resonances does enable us to satisfy the
unitarity bound for $R^0_0$. For simplicity we restrict ourselves to a two-flavor
framework. It is straightforward to generalize the scheme to three flavors.
\section{The next group of resonances}
\setcounter{equation}{0}
To keep our investigation manageable we shall mainly restrict attention to the
partial wave amplitude $R^0_0(s)$. As we saw in the last section, this is the
one most likely to violate the unitarity bound. The first task is to find
the effective lagrangian which should be added to (\ref{eq:algebra}) plus
(\ref{eq:rho}). There is no {\it a priori} reason not to add chiral invariant
$\pi\pi$ contact interactions with more than two derivatives. But the most
characteristic feature, of course, is the pionic interactions of the new
resonances in the energy range of interest, here up to somewhat more than
$1.0~GeV$. Which resonances should be included? In the leading
$\displaystyle{\frac{1}{N_c}}$ approximation, $\overline{q}q$ mesons are \cite{1n}
ideally mixed nonets, assuming three light quark flavors. Furthermore, the
exchange of glueballs and exotic mesons are suppressed in interactions
with the $\overline{q}q$ mesons. The $\displaystyle{\frac{1}{N_c}}$ approximation thus directs our attention
to the $p-wave$ $\overline{q}q$ resonances as well as the radial excitations of the
$s-wave$ $\overline{q}q$ resonances.
The neutral members of the $p-wave$ $\overline{q}q$ nonets have the quantum numbers
$J^{PC}=0^{++}$, $1^{++}$, $1^{+-}$ and $2^{++}$. Of course,
the neu\-tral members
of the radially excited $s-wave$ $\overline{q}q$ nonets have
$J^{PC}=0^{-+}$ and $1^{--}$. Only members of the
$0^{++}$, $1^{--}$ and $2^{++}$ nonets can couple to two pseudoscalars
\footnote{It is
possible to write down a two point mixing interaction between $0^{-+}$ and
radially excited $0^{-+}$ particles etc., but we shall neglect such effects
here.}. By $G$-parity conservation we finally note that it is the $I=0$
member of the $0^{++}$ and $2^{++}$ nonets and the $I=1$ member of the
$1^{--}$ nonet which can couple to two pions. Are there good experimental
candidates for these three particles?
The cleanest case is the lighter $I=0$ member of the $2^{++}$ nonet; the
$f_2(1270)$ has, according to the August 1994 Review of Particle Properties
(RPP)
\cite{pdg}, the right quantum numbers, a mass of $1275\pm 5~MeV$, a width of
$185\pm20~MeV$, a branching ratio of $85\%$ into two pions, and a branching
ratio of only $5\%$ into $K\overline{K}$. On the other hand the
$f^{\prime}_2(1525)$ has a $1\%$ branching ratio into $\pi\pi$ and a $71\%$
branching ratio into $K\overline{K}$. It seems reasonable to approximate
the $2^{++}$ nonet as an ideally mixed one and to regard the $f_2(1270)$ as
its non-strange member.
The $\rho(1450)$ is the lightest listed \cite{pdg} particle which is a
candidate for a radial excitation of the usual $\rho(770)$. It has a less
than $1\%$ branching ratio into $K\overline{K}$ but the $\pi\pi$ branching ratio,
while presumably dominant, is not yet known. With this caution, we shall use
the $\rho(1450)$. The $\rho(1700)$ is a little too high for our region of
interest.
An understanding of the $I=0$, $0^{++}$ channel has been elusive despite
much work. The RPP \cite{pdg} gives
two low lying candidates: the $f_0(980)$ which has a $22\%$ branching ratio
into $K\overline{K}$ even though its central mass is below the
$K\overline{K}$ threshold and the $f_0(1300)$ which has about a $93\%$
branching ratio into $\pi\pi$ and a $7\%$ branching ratio into
$K\overline{K}$. We shall use the $f_0(1300)$ here. It is hard to
understand why, if the $f_0(980)$ is the $\overline{s}s$ member of a
conventional $0^{++}$ nonet, it is lighter than the $f_0(1300)$.
Most likely, the $f_0(980)$ is an exotic or a $K\overline{K}$
{\it molecule} \cite{molecule}. If that is the case, its coupling to two pions
ought to be
suppressed in the $\displaystyle{\frac{1}{N_c}}$ picture. Experimentally this is not necessarily the case,
but we will
postpone a discussion of the $f_0(980)$ as well as other possible
light $0^{++}$ resonances to the next section.
Now we will give, in turn, the $\pi\pi$ scattering amplitudes due to the
exchange of the $f_0(1300)$, the $f_2(1270)$ and the $\rho(1450)$.
\subsection{The $f_0(1300)$}
Denoting a $3\times3$ matrix of scalar fields by $S$ we require that it
transform as $S\rightarrow KSK^\dagger$ (see (\ref{eq:transf})) under the
chiral group. A suitable chiral invariant interaction, using
(\ref{eq:pmu}), is
\begin{equation}
L_{f_0}=-\gamma_0F_\pi^2 Tr(Sp_{\mu}p_{\mu})=-\gamma_0
Tr(S\partial_{\mu}\phi\partial_{\mu}\phi)+\cdots
\label{eq:1300}
\end{equation}
\noindent
where the expansion of $\xi$ was used in the second step. It is interesting
to note that, in the present formalism, chiral symmetry demands that the
minimal $S\phi\phi$ interaction must have two derivatives. Specializing to
the particles of interest and taking $f_0$ to be ideally mixed leads to
\begin{equation}
L_{f_0}=-\frac{\gamma_0f_0}{\sqrt{2}}
(\partial_{\mu}\vec{\pi}\cdot\partial_{\mu}\vec{\pi})+\cdots.
\label{eq:13001}
\end{equation}
\noindent
The partial width for $f_0(1300)\rightarrow\pi\pi$ is then
\begin{equation}
\Gamma(f_0(1300)\rightarrow\pi\pi)=\frac{3\gamma_0^2}{64 \pi M_{f_0}}
{\left(1-\frac{4m_{\pi}^2}{M_{f_0}^2}\right)}^{\frac{1}{2}}
\left(M_{f_0}^2-2m_{\pi}^2\right)^2
\label{eq:bran1300}
\end{equation}
\noindent
The RPP\cite{pdg} lists $\Gamma_{tot}(f_0(1300))=0.15-0.40~GeV$ and
$M_{f_0}=1.0-1.5~GeV$. For definiteness we shall choose
$\Gamma_{tot}(f_0(1300))=0.275~GeV$ and $M_{f_0}=1.3~GeV$.
These yield $|\gamma_0|=2.88~GeV^{-1}$. Using (\ref{eq:13001}) we find the
contribution of $f_0$ exchange to the $\pi\pi$ scattering amplitude, defined
in (\ref{eq:def}), to be:
\begin{equation}
A_{f_0}(s,t,u)=\frac{\gamma_0^2}{2}\frac{\left(s-2m_{\pi}^2\right)^2}
{M_{f_0}^2-s}.
\label{eq:1300ampl}
\end{equation}
\noindent
Actually, as discussed around (\ref{eq:propag}), the singularity in the real
part of
(\ref{eq:1300ampl}) will be regulated by the replacement
\begin{equation}
\frac{1}{M^2_{f_0}-s}\rightarrow\frac{M^2_{f_0}-s}
{(M^2_{f_0}-s)^2+M_{f_0}^2\Gamma^2}.
\label{eq:1300propag}
\end{equation}
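The numbers above are easy to check mechanically. The sketch below (Python, for illustration only) inverts (\ref{eq:bran1300}) for $\gamma_0$, assuming $m_\pi=137~MeV$ and folding in the roughly $93\%$ $\pi\pi$ branching ratio quoted earlier; with those assumptions it reproduces $|\gamma_0|\approx 2.88~GeV^{-1}$, and it evaluates (\ref{eq:1300ampl}) with the regulated propagator (\ref{eq:1300propag}).

```python
import math

# f0(1300) inputs: Gamma_tot = 0.275 GeV and M = 1.3 GeV as chosen in the
# text; m_pi = 0.137 GeV and the ~93% pi pi branching ratio are assumptions.
m_pi, M_f0 = 0.137, 1.3
Gamma_tot = 0.275
Gamma_pipi = 0.93 * Gamma_tot

# Invert eq. (bran1300):
#   Gamma = 3 gamma0^2/(64 pi M) * sqrt(1 - 4 m^2/M^2) * (M^2 - 2 m^2)^2
phase = math.sqrt(1.0 - 4.0 * m_pi**2 / M_f0**2)
gamma0 = math.sqrt(64.0 * math.pi * M_f0 * Gamma_pipi
                   / (3.0 * phase * (M_f0**2 - 2.0 * m_pi**2) ** 2))

def A_f0(s):
    """f0 exchange amplitude, eq. (1300ampl), with the regulated
    propagator replacement of eq. (1300propag)."""
    reg = (M_f0**2 - s) / ((M_f0**2 - s) ** 2 + M_f0**2 * Gamma_tot**2)
    return 0.5 * gamma0**2 * (s - 2.0 * m_pi**2) ** 2 * reg
```

Note that the regulated real part vanishes at $s=M_{f_0}^2$, as built into (\ref{eq:1300propag}).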
\subsection{The $f_2(1270)$}
We represent the $3\times 3$ matrix of tensor fields by $T_{\mu\nu}$
(satisfying $T_{\mu\nu}=T_{\nu\mu}$, and $T_{\mu\mu}=0$) which is taken to
behave as $T_{\mu\nu}\rightarrow KT_{\mu\nu}K^\dagger$ under chiral transformation.
A suitable chiral invariant interaction is
\begin{equation}
L_T=-\gamma_2F_\pi^2Tr(T_{\mu\nu}p_\mu p_\nu).
\label{eq:tensor}
\end{equation}
\noindent
Specializing to the particles of interest, this becomes
\begin{equation}
L_{f_2}=-\frac{\gamma_2}{\sqrt{2}}(f_2)_{\mu\nu}
(\partial_{\mu}\vec{\pi}\cdot\partial_{\nu}\vec{\pi})+\cdots.
\label{eq:1270}
\end{equation}
\noindent
In this case we note that the chiral invariant interaction is just the same
as the minimal one we would have written down without using chiral symmetry.
The partial width is then
\begin{equation}
\Gamma(f_2(1270)\rightarrow\pi\pi)=\frac{\gamma_2^2}{20 \pi}\frac{p_\pi^5}{M^2_{f_2}}
\label{eq:bran1270}
\end{equation}
\noindent
where $p_\pi$ is the pion momentum in the $f_2$ rest frame. This leads to
$|\gamma_2|=13.1~GeV^{-1}$.
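The quoted value of $\gamma_2$ follows from inverting (\ref{eq:bran1270}) with the RPP numbers given earlier ($M=1275~MeV$, $\Gamma_{tot}=185~MeV$, $85\%$ into $\pi\pi$); a minimal numerical sketch, with $m_\pi=137~MeV$ assumed:

```python
import math

# f2(1270) coupling from eq. (bran1270) with the RPP numbers quoted above
# (M = 1275 MeV, Gamma_tot = 185 MeV, 85% into pi pi); m_pi = 0.137 GeV assumed.
m_pi, M_f2 = 0.137, 1.275
Gamma_pipi = 0.85 * 0.185

p_pi = math.sqrt(M_f2**2 / 4.0 - m_pi**2)   # pion momentum in the f2 rest frame

# invert: Gamma = gamma2^2/(20 pi) * p_pi^5 / M^2
gamma2 = math.sqrt(20.0 * math.pi * Gamma_pipi * M_f2**2 / p_pi**5)
```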
To calculate the $f_2$ exchange diagram we need the spin 2 propagator
\cite{tensor}
\begin{equation}
\frac{-i}{M^2_{f_2}+q^2}\left[
\frac{1}{2}\left(
\theta_{\mu_1\nu_1} \theta_{\mu_2\nu_2}+
\theta_{\mu_1\nu_2}\theta_{\mu_2\nu_1}\right)-
\frac{1}{3}\theta_{\mu_1\mu_2}\theta_{\nu_1\nu_2}\right],
\label{eq:tensorpropag}
\end{equation}
\noindent
where
\begin{equation}
\theta_{\mu\nu}=\delta_{\mu\nu}+\frac{q_\mu q_\nu}{M^2_{f_2}}.
\end{equation}
\noindent
A straightforward computation then yields the $f_2$ contribution to the
$\pi\pi$ scattering amplitude:
\begin{eqnarray}
A_{f_2}(s,t,u)&=&\frac{\gamma^2_2}{2(M^2_{f_2}-s)}
\left(
-\frac{16}{3}m_{\pi}^4
+\frac{10}{3}m_{\pi}^2 s
-\frac{1}{3}s^2
+\frac{1}{2}(t^2+u^2)\right.\nonumber\\
&~&\left.-\frac{2}{3}\frac{m_{\pi}^2s^2}{M^2_{f_2}}
-\frac{s^3}{6M^2_{f_2}}
+\frac{s^4}{6M^4_{f_2}}
\right).
\label{eq:tensorampl}
\end{eqnarray}
Again the singularity will be regulated as in (\ref{eq:1300propag}).
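For numerical work it is convenient to code (\ref{eq:tensorampl}) directly. The sketch below (Python; $\gamma_2=13.1~GeV^{-1}$ as derived above, $m_\pi=137~MeV$ assumed, pole regulated as in (\ref{eq:1300propag})) also verifies the manifest $t\leftrightarrow u$ symmetry of the expression:

```python
import math

# f2 exchange amplitude, eq. (tensorampl), with the pole regulated as in
# eq. (1300propag); gamma_2 = 13.1 GeV^-1 as above, m_pi = 0.137 GeV assumed.
m_pi, M_f2, gamma2, Gamma_f2 = 0.137, 1.275, 13.1, 0.185

def A_f2(s, t, u):
    M2 = M_f2**2
    num = (-16.0/3.0 * m_pi**4 + 10.0/3.0 * m_pi**2 * s - s**2 / 3.0
           + 0.5 * (t**2 + u**2)
           - 2.0/3.0 * m_pi**2 * s**2 / M2
           - s**3 / (6.0 * M2) + s**4 / (6.0 * M2**2))
    reg = (M2 - s) / ((M2 - s) ** 2 + M2 * Gamma_f2**2)  # regulated propagator
    return 0.5 * gamma2**2 * num * reg
```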
\subsection{The $\rho(1450)$}
We may read off the proper chiral invariant contribution\footnote{It is not
necessary to introduce the $\rho(1450)$ as a massive gauge field as we
did for the $\rho(770)$, but the answer is the same. See \cite{vector} for
further discussions.} of the
$\rho(1450)$ to the $\pi\pi$ scattering amplitude from the second term of
(\ref{eq:rhocomp})
\begin{equation}
A_{\rho^\prime}(s,t,u)=-\frac{g^2_{\rho^\prime\pi\pi}}{2m^2_{\rho^\prime}}\left[
\frac{t(u-s)}{m^2_{\rho^\prime}-t}+\frac{u(t-s)}{m^2_{\rho^\prime}-u}\right]
\label{eq:1450ampl}
\end{equation}
\noindent
where $g_{\rho^\prime\pi\pi}$ is related to the $\rho^\prime\rightarrow\pi\pi$ partial
width by
\begin{equation}
\Gamma(\rho^\prime\rightarrow 2\pi)=
\frac{g^2_{\rho^\prime\pi\pi}p_\pi^3}{12\pi m^2_{\rho^\prime}}.
\end{equation}
We shall use $|g_{\rho^\prime\pi\pi}|=7.9$ corresponding to
$\Gamma(\rho^\prime\rightarrow 2\pi)=288~MeV$.
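Again this coupling follows from inverting the width formula; the sketch below (Python) takes the $\rho(1450)$ mass at $1.465~GeV$ and $m_\pi=137~MeV$, both assumed central values, and recovers $|g_{\rho^\prime\pi\pi}|\approx 7.9$:

```python
import math

# rho(1450) coupling from its 2 pi width: invert
#   Gamma = g^2 p_pi^3 / (12 pi m^2)
# with Gamma(rho' -> 2 pi) = 288 MeV; the mass (1.465 GeV) and
# m_pi = 0.137 GeV are assumed central values.
m_pi, m_rhop = 0.137, 1.465
Gamma_2pi = 0.288

p_pi = math.sqrt(m_rhop**2 / 4.0 - m_pi**2)
g_rhop = math.sqrt(12.0 * math.pi * m_rhop**2 * Gamma_2pi / p_pi**3)
```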
\subsection{$f_0(1300)$+$f_2(1270)$+$\rho(1450)$}
Now we are in a position to appraise the contribution to $R^0_0$ of the
next
group of resonances. This is obtained by adding up (\ref{eq:1300ampl}),
(\ref{eq:tensorampl}) and (\ref{eq:1450ampl}) and using
(\ref{eq:isospin})$-$(\ref{eq:wave}).
The individual pieces are shown in Fig. 5.
Note that the $f_0(1300)$ piece is not the largest, as one might at first
expect. That honor goes to the $f_2$ contribution which is shown divided
into the $s$-channel pole piece and the $(t+u)$ pole piece. We observe that the
$s$-channel pole piece, associated with the $f_2$, vanishes at $\sqrt{s}=
M_{f_2}$. This happens because the numerator of the propagator in
(\ref{eq:tensorpropag}) is precisely a spin 2 projection operator at that
point. The $\rho(1450)$ contribution is solely due to the $t$ and $u$ channel
poles. It tends to cancel the $t$ and $u$ channel pole contributions of the
$f_2(1270)$ but does not quite succeed. The $t$ and $u$ channel pole
contributions of the $f_0(1300)$ turn out to be negligible. Notice the
difference in characteristic shapes of the $s$ and $(t+u)$ exchange curves.
Fig. 6 shows the sum of all these individual contributions.
There does seem to be cancellation. At the high end, $R^0_0$ starts to run
negative well
past the unitarity bound (\ref{eq:bound}) around $1.5~GeV$. But it is
reasonable to
expect resonances in the $1.5-2.0~GeV$ region to modify this. The maximum
positive value of $R^0_0$ is about $1$ at $\sqrt{s}=1.2~GeV$. This would be
acceptable if the $\pi+\rho$ contribution displayed in Fig. 2, and which must
be added to the curve of Fig. 6, were somewhat negative at this point. However
this is seen not to be the case, so some extra ingredient is required.
The $\displaystyle{\frac{1}{N_c}}$ approach still allows us the freedom of adding four- and
higher-derivative contact terms. More physically, there is known to be a rather
non-trivial structure below $1~GeV$ in the $I=J=0$ channel.
\subsection{$4$-Derivative Contact Terms}
First let us experiment
with four-derivative contact terms. So far we have not introduced any
arbitrary parameters but now we will be forced to do so. There are
two four-derivative chiral invariant contact interactions which are single
traces in flavor space:
\begin{equation}
L_{4}=a~Tr(\partial_\mu U \partial_\nu U^\dagger
\partial_\mu U \partial_\nu U^\dagger)+
b~Tr(\partial_\mu U \partial_\mu U^\dagger
\partial_\nu U \partial_\nu U^\dagger)
\label{eq:four}
\end{equation}
\noindent
where $a$ and $b$ are real constants. The single traces should be leading in
the $\displaystyle{\frac{1}{N_c}}$ expansion. Notice that the magnitudes of $a$ and $b$ will differ
from those in the chiral perturbation theory approach \cite{chp} since the
latter essentially also include the effects of expanding the $\rho$
exchange amplitude up to order $s^2$. The four pion terms which result from
(\ref{eq:four}) are:
\begin{equation}
L_{4}=\frac{8}{F_\pi^4}\left[
2a\left(\partial_\mu \vec{\pi}\cdot\partial_\nu \vec{\pi}\right)^2
+(b-a)\left(\partial_\mu \vec{\pi}\cdot\partial_\mu \vec{\pi}\right)^2
\right]+
\cdots.
\label{eq:4lagr}
\end{equation}
\noindent
This leads to the contribution to the $\pi\pi$ amplitude:
\begin{equation}
A_{4}(s,t,u)=\frac{16}{F_\pi^4}\left[
a\left((t-2m_{\pi}^2)^2+(u-2m_{\pi}^2)^2\right)+(b-a)(s-2m_{\pi}^2)^2\right].
\label{eq:4ampl}
\end{equation}
\noindent
Plausibly, but somewhat arbitrarily, we will require that (\ref{eq:4ampl})
yields no correction at threshold, i.e. at $s=4m_{\pi}^2$, $t=u=0$. This gives the
condition $b=-a$ and leaves the single parameter $a$ to play with. In Fig. 7
we show $R^0_0$, as gotten by adding the piece obtained
from (\ref{eq:4ampl}) for several values of $a$ to the contribution of
$\pi+\rho$, plus that of the {\it next group} of resonances.
For $a=+1.0\times 10^{-3}$ the
four-derivative contact term can pull the curve for $R^0_0$
down to avoid violation of the unitarity bound until around
$\sqrt{s}=1.0~GeV$. The price to be paid is that $R^0_0$ decreases very
rapidly beyond this point. We consider this to be an undesirable feature
since it would make a possible local cancellation scheme very unstable.
Another drawback of the four-derivative contact term scheme is that it lowers
$R^0_0(s)$ just above threshold, taking it further away from the Roy curves.
Let us therefore set aside the possibility of four and higher derivative
contact terms and try to find a solution to the problem of keeping $R^0_0$
within the unitarity bounds in a different and more phenomenological way.
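The threshold condition $b=-a$ used above can be checked directly on (\ref{eq:4ampl}): at $s=4m_\pi^2$, $t=u=0$ the bracket is $4(a+b)m_\pi^4$, which vanishes only for $b=-a$. A minimal check (Python; the $F_\pi$ value is assumed for illustration):

```python
# Threshold condition for the four-derivative amplitude, eq. (4ampl):
# at s = 4 m_pi^2, t = u = 0 the correction vanishes iff b = -a.
m_pi, F_pi = 0.137, 0.132   # F_pi value assumed for illustration

def A4(s, t, u, a, b):
    return (16.0 / F_pi**4) * (
        a * ((t - 2.0 * m_pi**2) ** 2 + (u - 2.0 * m_pi**2) ** 2)
        + (b - a) * (s - 2.0 * m_pi**2) ** 2)

s_thr = 4.0 * m_pi**2       # threshold kinematics: t = u = 0
```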
\section{Low energy structure}
\setcounter{equation}{0}
Let us investigate the addition of low energy \cite{pen} {\it exotic} states
whose contributions to $\pi\pi$ scattering should be formally suppressed
in the large $N_c$ limit. Experimentally we know that there is at least
one candidate - the $f_0(980)$ mentioned in the previous section. According to
the RPP \cite{pdg} its width is $40-400~MeV$ and its branching ratio into two
pions
is $78.1 \pm 2.4\%$.
To get an idea of its effect we will choose $\Gamma_{tot}
= 40~MeV$ and use the formulas (\ref{eq:13001})-(\ref{eq:1300propag})
with the appropriate
parameters. In Fig. 8
we show the sum of the contributions of the {\it next group}
added to $\pi + \rho$ with the effect of including the $f_0(980)$.
It is seen that the $f_0(980)$ does not help unitarity - below $\sqrt{s}=
980~MeV$ it makes the situation a little worse while above it improves the
picture slightly.
What is needed to restore unitarity over the full range of interest and
to give better agreement with the experimental data for
$\sqrt{s}\lower .7ex\hbox{$\;\stackrel{\textstyle <}{\sim}\;$} 900~MeV$?
\begin{itemize}
\item[{\it i.}]{Below $450~MeV$,
$R^0_0(s)$ actually lies a little below the Roy curves.
Hence it would be nice to find a tree level mechanism which yields a small
positive addition in this region.}
\item[{\it ii.}]{In the $600-1300~MeV$ range, an
increasingly negative contribution is clearly required to keep $R^0_0$
within the unitarity bound.}
\end{itemize}
\noindent
It is possible to satisfy both of these criteria
by introducing a broad scalar resonance (like the old $\sigma$) with
a mass around $530~MeV$. Its contribution to $A(s,t,u)$ would be of the form
shown in (\ref{eq:1300ampl}) and (\ref{eq:1300propag}) which we may rewrite as:
\begin{equation}
\frac{32\pi}{3H}
\frac{G}{M^3_\sigma}\frac{(s-2m_\pi^2)^2(M_\sigma^2-s)}{(s-M_\sigma^2)^2 +
M_\sigma^2{G^\prime}^2},
\label{eq:sigma}
\end{equation}
where we have set
$\displaystyle{\frac{\gamma_0^2}{2}}\rightarrow
\frac{32\pi}{3H}\frac{G}{M_\sigma^3}$,
$\Gamma \rightarrow G^\prime$ and
$\displaystyle{H=\left(1-4\frac{m_{\pi}^2}{M^2_\sigma}\right)
^{\frac{1}{2}}
\left(1-2\frac{m_{\pi}^2}{M^2_\sigma}\right)^2}$ is approximately one.
If this were a typical resonance which was narrow compared to its mass and
which completely dominated the amplitude, we would set $G\approx G^\prime
\approx \Gamma$.
However for a very broad resonance it may be reasonable to regard $G^\prime$
as a phenomenological parameter which could be considered as a regulator
in the sense we have been using. Choosing $M_\sigma=0.53~GeV$,
$\displaystyle{\frac{G}{G^\prime}=0.31}$ and $G^\prime=380~MeV$, the
contribution to $R^0_0$ of
(\ref{eq:sigma}) is shown in
Fig. 9. The curve goes through zero near $0.53~GeV$ (there is a small
shift due to the crossed terms). Below this value
of $\sqrt{s}$ it adds slightly in accordance with point {\it i}
while above
$0.53~GeV$ it subtracts substantially in the manner required by point
{\it ii}.
This is the motivation behind our choice of $M_\sigma=0.53~GeV$. Adding
{\it everything} - namely the $\pi + \rho$ piece, the {\it next group} piece
together with the contributions from (\ref{eq:sigma}) and the $f_0(980)$
- results in
the curves shown in Fig. 10 for three values of $G^\prime$.
These curves for $R^0_0(s)$ satisfy
the unitarity bound $\left|R^0_0\right|\le\frac{1}{2}$ until
$\sqrt{s}\approx 1.3~GeV$. After
$1.3~GeV$, the curves drop less precipitously than those for
the four-derivative contact term in Fig. 7.
Fig. 10 demonstrates that the proposed {\it local cancellation} of the
various resonance exchange terms is in fact possible as a means
of maintaining
the unitarity bound for the (by construction) crossing symmetric real part of
the tree amplitude. Essentially, just the three parameters $M_\sigma,~
G$ and $G^\prime$ have been varied to obtain this. The other parameters
were all taken from experiment; when there were large experimental
uncertainties, we just selected typical values and made no attempt to
fine-tune. Procedurally, $G^\prime,~G$ and $M_\sigma$ were adjusted
to obtain a best fit to the Roy Curves below $700~MeV$; this turned out to
be what was needed for unitarity beyond $700~MeV$. It was found that
$M_\sigma$ had to lie in the $530\pm 30~MeV$ range and that
$\displaystyle{\frac{G^\prime}{G}}$ had to be in the
$0.31\pm 0.06$ range in order to achieve a fit. On the
other hand $G^\prime$ could be varied in the larger range $500\pm 315~MeV$.
It is also interesting to notice that the main effect of the sigma particle
comes from its tail in Fig. 9. Near the pole region, its effect is hidden
by the dominant $\pi+\rho$ contribution. This provides a possible
explanation of why such a state may have escaped definitive
identification.
For the purpose of comparison we show in Fig. 11, the total $R^0_0$ together
with the $\pi+\rho$ and {\it current-algebra} curves in the low energy region.
It is interesting to remark that particles with masses and widths very similar
to those above for the $\sigma$ and the $f_0(980)$ were predicted \cite{mit}
as part of a multiquark $qq\bar q\bar q$ nonet on the basis of the
$MIT$ bag model. Hence, even though
they do not give rise to formally leading $\pi\pi$ amplitudes in the $\displaystyle{\frac{1}{N_c}}$
scheme, the picture has a good deal of plausibility from a polology
point of view. It is not hard to
imagine that some $\displaystyle{\frac{1}{N_c}}$ subleading effects might be important
at low energies where the QCD coupling constant is strongest.
Other than requiring
$\displaystyle{\big{|}R^0_0 \big{|}\leq \frac{1}{2}}$ we have
not attempted to fit the puzzling experimental results in the
$f_0(980)$ region. Recent interesting discussions are given in refs
\cite{pen}.
It appears that the opening of the $K\overline{K}$ channel plays an important
role and furthermore, additional resonances may be needed. In this paper we
have restricted attention to the $\pi\pi$ channel (although the effective
Lagrangian was written down for the case of three light quarks). Clearly,
it would be interesting to study the $f_0(980)$ region
in the future, according
to the present scheme.
\subsection{Imaginary part and Phase Shift}
Finally, let us discuss the imaginary piece $I^0_0$. In the leading $\displaystyle{\frac{1}{N_c}}$ limit
the imaginary part vanishes away from the singularities at the poles, whereas
$R^0_0$ has support all over.
This suggests that we determine an approximation
to $I^0_0$ from the $\displaystyle{\frac{1}{N_c}}$ leading $R^0_0$ using dispersion theory, rather than
getting it directly from the tree amplitude with the regularization of the
form (\ref{eq:propag}). The latter procedure picks up the pion loop
contribution to the $\rho$ propagator, for example, but misses very important
direct pion loop contributions. A dispersion approach will include both.
In the low energy region, we can proceed more simply by just using the
unitarity
formula (\ref{eq:Ima}) directly. Up until the $K\overline{K}$ threshold it
seems to be reasonable to approximate the elasticity function $\eta^0_0(s)$
by unity \cite{pen}. Strictly speaking $\eta^0_0(s)$ may depart from unity at
the
$4\pi^0$ threshold\footnote{It is amusing to note that each of the
low energy resonances, i.e. $\sigma(530)$ and $f_0(980)$, are located just
below threshold; for the $\sigma$ it is the $4\pi$ threshold while for the
$f_0(980)$ it is the $K\overline{K}$ threshold.} of $540~MeV$.
In Fig. 12 we show $I^0_0$ obtained from (\ref{eq:Ima}) on the assumption
$\eta^0_0=1$ for several values of $G^\prime$. Both signs in front of the
square root are displayed. Of course, the correct curve should start from
zero at threshold ($-$ sign in front of the square root).
Continuity of $I^0_0(s)$ would at first appear to suggest that we follow along
the
lower curve. In order to go continuously to the upper curve it is necessary
that the argument of the square root vanish at some value of $\sqrt{s}$.
With the approximation $\eta^0_0=1$, this vanishing occurs if
$\big|R^0_0\big|$ is exactly $\displaystyle{\frac{1}{2}}$. In Fig. 12, the
discontinuity in the $\sqrt{s}=540~MeV$ region is extremely sensitive to tiny
departures of $\big|R^0_0\big|$ from $\displaystyle{\frac{1}{2}}$.
However, both experiment and the expectation that
$\displaystyle{\frac{d\delta^I_l}{d\sqrt{s}}\geq 0}$
\footnote{In potential theory Wigner \cite{causal} has
shown that $\displaystyle{\frac{d\delta^I_l}{d\sqrt{s}}\geq -\frac{a}{\beta}}$
where $a$ is the approximate interaction radius and $\beta$ is the pion
velocity in the center of mass. Strictly speaking, for $a\lower .7ex\hbox{$\;\stackrel{\textstyle <}{\sim}\;$} 1.7~fm$
the lower curve is also allowed.} suggest that beyond
$\sqrt{s}\approx 540~MeV$ we should actually go to the upper curve ($+$
sign in front of the square root). This can be accomplished without violating
continuity by assuming that $\eta^0_0$ is not precisely one. For the
curves shown all that is required is a decrease in $\eta^0_0$ of not more
than $0.04$. Alternatively, we could choose parameters so that $R^0_0$
reaches $0.5$ precisely. Then the fit at higher energies is slightly worse.
The corresponding three curves for the phase shifts are shown in Fig. 13.
The discontinuity should be smoothed over in accordance with our
discussion above. The agreement with experiment is quite reasonable up to
about $860~MeV$.
We did not go beyond this point for the purpose of obtaining the phase shift
because we are neglecting the $K\overline{K}$ channel which becomes relevant
in the computation of the imaginary part.
\section{Summary and discussion}
\setcounter{equation}{0}
In the leading large $N_c$ approximation to QCD, $\pi\pi$ scattering corresponds
to the sum of an infinite number of tree diagrams which can be of the contact
type or can involve resonance exchange. This can only be a practically useful
approximation if it is possible to retain just a reasonably small number of
terms. The most natural way to do so is, of course, to consider contact terms
with as few derivatives as possible and exchange terms with resonances
having masses less than the extent of the energy region we wish to describe.
In this paper we have carried out an initial exploration of this program in
a step by step way. The first step is to include only the well known chiral
contact term which reasonably describes the scattering lengths. However this
amplitude badly violates partial wave unitarity bounds (seen most readily in
the $I=L=0$ channel, see Fig. 1) at energies beyond $500~MeV$. We observed that
the introduction of the $\rho$ meson dramatically improved the situation,
delaying drastic violation of the unitarity bound till around $2~GeV$
(see Fig. 2).
We noted that this effect could be nicely understood as the result of an extra
contact term which must be present when the $\rho$ is introduced in a chirally
invariant manner. Furthermore, this feature holds in the strict large $N_c$
limit, i.e., without including the phenomenological regularization
(\ref{eq:propag}). The observed cancellation encouraged us to investigate the
possibility of a more general {\it local cancellation}, due to inclusion
of all (large $N_c$ leading) resonances in the energy range of interest and,
perhaps, higher derivative contact terms.
The program is sketched at the end of section 2.
Taking the large $N_c$ approach as well as the standard $\overline{q}q$ spectrum
literally, we argued that the {\it next group} of resonances whose exchange
contributes to the leading amplitude should comprise the $f_0(1300)$, the
$f_2(1270)$ and the $\rho(1450)$. We observed (section 3) that there was a
tendency for these to cancel among themselves; for example the crossed-channel
exchanges of the $\rho(1450)$ tended to cancel against those of the
$f_2(1270)$. In our analysis, the complications due to enforcing chiral
symmetry
and using the full spin 2 propagator were taken into account. However, the
cancellation with both the $\pi+\rho$ and {\it next group} was not sufficient
to satisfy the unitarity bound
$\displaystyle{\big|R^0_0\big|\le\frac{1}{2}}$ in the energy range till
$1.3~GeV$. An allowed leading $N_c$ way out - by adding four derivative contact
terms - was thus investigated. This enabled us to restore unitarity till about
$1.0~GeV$ (see Fig. 7). The drawback was that $R^0_0(s)$ dropped off rather
precipitously afterwards, which would make a local cancellation scheme very
unstable.
As a more physically motivated alternative we investigated, in section 4,
the possibility of including scalar resonances having masses less than
$1~GeV$. These are presumably not of the simple $\overline{q}q$ type and hence their
exchange should be of sub-leading order in the large $N_c$ limit. An
interesting
interpretation gives these particles a $qq\bar q\bar q$ quark structure \cite{mit}.
Then a somewhat narrow state like the $f_0(980)$ is expected together with
a very low mass and very broad state like the old $\sigma$ meson.
(Both should belong to a $3$-flavor nonet). It was found that the $f_0(980)$
particle did not help much in restoring unitarity. In the experimentally
puzzling region close to $980~MeV$ it is, however, expected to play an
extremely important role. On the other hand, the further introduction of the
other
scalar, which we denoted as the $\sigma(530)$, treated with a phenomenological
regularization parameter (see (\ref{eq:sigma})) enabled us to satisfy the
unitarity bound all the way up to $1.3~GeV$ (see Fig. 10). Thus, if low
energy scalars are included, the proposed {\it local cancellation} may
be a viable possibility. The imaginary part of the partial wave amplitude,
$I^0_0(s)$ was also computed from the unitarity relation (\ref{eq:Ima}) and
found to lead to a phase shift $\delta^0_0$ in reasonable agreement with
experiment until about $860~MeV$. Beyond this point, the effect of the
opening of the $K\overline{K}$ channel must be specifically included.
There are many directions for further work.
\begin{itemize}
\item[{\it i.}]{The most straightforward is the investigation of different
channels. For example, considering $\pi\pi\rightarrow K\overline{K}$ and
$K\overline{K}\rightarrow K\overline{K}$ should enable us to study the
interesting $K\overline{K}$ threshold region in a more detailed way. Looking
at channels which don't communicate with $\pi\pi$ would enable one to focus
on particular resonance exchanges.}
\item[{\it ii.}] {The greatly increasing density of levels as one goes
up in energy clearly indicates that there is a limit to how far one can go
with the kind of {\it microscopic} approach presented here. It is expected
that at energies not too much higher than the $1.3~GeV$ region this
analysis should merge with some kind of string-like picture \cite{string}.
In that region the question of the validity of the $\displaystyle{\frac{1}{N_c}}$ expansion and a
possible {\it local cancellation} can presumably be approached in a more
analytical manner and interesting models can be studied \cite{lnc}. Here we
have tried
to follow a phenomenologically oriented path, assuming only chiral dynamics
in addition to the $\displaystyle{\frac{1}{N_c}}$ framework.}
\item[{\it iii.}] {One can also imagine a kind of {\it Wilsonian effective
action} \cite{pheno} with which the present approach can be further discussed.
This should allow the systematic calculation of loops but would be extremely
complicated in practice.}
\end{itemize}
\begin{center}
{\bf Acknowledgments}
\end{center}
We would like to thank Masayasu Harada for helpful discussions. One
of us (F.S.) would like to thank Prof. R. Musto for pointing out the
possible relevance of ref. \cite{pheno}. This work
was supported in part by the U.S. DOE Contract No. DE-FG-02-85ER40231.
\newpage
\section*{Appendix A}
\setcounter{equation}{0}
\renewcommand{\theequation}{A.\arabic{equation}}
Here we list the kinematic conventions for $\pi\pi$ scattering. The invariant
amplitude for $ \pi_i + \pi_j \rightarrow \pi_k + \pi_l $ is decomposed as:
\begin{equation}
\delta_{ij}\delta_{kl} A(s,t,u) + \delta_{ik}\delta_{jl} A(t,s,u)
+ \delta_{il}\delta_{jk} A(u,t,s) ,
\label{eq:def}
\end{equation}
where $s$, $t$ and $u$ are the usual Mandelstam variables obeying $ s+t+u=4m_{\pi}^2 $.
Physical values lie in the region $s\geq 4m_{\pi}^2$, $t\leq 0$,
$u\leq 0$. (Note that the phase of (\ref{eq:def}) corresponds to simply
taking
the matrix element of the Lagrangian density of a four point contact
interaction).
Projecting out amplitudes of definite isospin yields:
\begin{eqnarray}
T^0(s,t,u) &=& 3A(s,t,u)+A(t,s,u)+A(u,t,s),\nonumber\\
T^1(s,t,u) &=& A(t,s,u)-A(u,t,s),\nonumber\\
T^2(s,t,u) &=& A(t,s,u)+A(u,t,s).
\label{eq:isospin}
\end{eqnarray}
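These combinations are easily coded. The sketch below (Python) illustrates them with the lowest-order current-algebra-type form $A(s,t,u)=(s-m_\pi^2)/F_\pi^2$, assumed here purely for illustration with $F_\pi=132~MeV$; using $s+t+u=4m_\pi^2$ this gives $T^0=(2s-m_\pi^2)/F_\pi^2$, $T^1=(t-u)/F_\pi^2$ and $T^2=(t+u-2m_\pi^2)/F_\pi^2$.

```python
# Crossing combinations of eq. (isospin), illustrated with an assumed
# lowest-order amplitude A(s,t,u) = (s - m_pi^2)/F_pi^2 (F_pi = 132 MeV).
m_pi, F_pi = 0.137, 0.132

def A(s, t, u):
    return (s - m_pi**2) / F_pi**2

def T(I, s, t, u):
    if I == 0:
        return 3.0 * A(s, t, u) + A(t, s, u) + A(u, t, s)
    if I == 1:
        return A(t, s, u) - A(u, t, s)
    return A(t, s, u) + A(u, t, s)

# an illustrative physical point, with u fixed by s + t + u = 4 m_pi^2
s, t = 0.5, -0.2
u = 4.0 * m_pi**2 - s - t
```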
In the center of mass frame:
\begin{eqnarray}
s=4(p_\pi^2+{m_{\pi}}^2),\nonumber\\
t=-2p_\pi^2(1-\cos\theta),\nonumber\\
u=-2p_\pi^2(1+\cos\theta),
\label{eq:kin}
\end{eqnarray}
where $p_\pi$ is the spatial momentum and $\theta$ is the scattering angle. We
then
define
the partial wave isospin amplitudes according to the following formula:
\begin{equation}
T^{I}_{l}(s)\equiv \frac{1}{64\pi} \sqrt{\left(1-4\frac{m_{\pi}^2}{s}\right)}
\int^{1}_{-1}d\cos\theta\,
P_l(\cos\theta)\, T^I(s,t,u).
\label{eq:wave}
\end{equation}
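In practice (\ref{eq:wave}) is evaluated numerically; Gauss--Legendre quadrature in $\cos\theta$ is exact for the polynomial amplitudes appearing at tree level. The sketch below (Python, again with the illustrative lowest-order $A(s,t,u)=(s-m_\pi^2)/F_\pi^2$ and assumed $m_\pi$, $F_\pi$ values) checks the $l=0$ projection against its closed form, and that the $\theta$-independent $T^0$ has no $l=2$ component while the odd-in-$\cos\theta$ $T^1$ has no $l=0$ component.

```python
import math
import numpy as np

m_pi, F_pi = 0.137, 0.132   # assumed illustrative values (GeV)

def A(s, t, u):
    # lowest-order current-algebra-type amplitude, for illustration only
    return (s - m_pi**2) / F_pi**2

def T_I(I, s, t, u):
    # eq. (isospin)
    if I == 0:
        return 3.0 * A(s, t, u) + A(t, s, u) + A(u, t, s)
    if I == 1:
        return A(t, s, u) - A(u, t, s)
    return A(t, s, u) + A(u, t, s)

def T_Il(I, l, s, n=40):
    """Partial wave T^I_l(s) of eq. (wave) by Gauss-Legendre quadrature."""
    x, w = np.polynomial.legendre.leggauss(n)     # nodes/weights in cos(theta)
    p2 = s / 4.0 - m_pi**2                        # p_pi^2 from eq. (kin)
    t = -2.0 * p2 * (1.0 - x)
    u = -2.0 * p2 * (1.0 + x)
    Pl = np.polynomial.legendre.Legendre.basis(l)(x)
    integral = float(np.sum(w * Pl * np.array(
        [T_I(I, s, tt, uu) for tt, uu in zip(t, u)])))
    return math.sqrt(1.0 - 4.0 * m_pi**2 / s) * integral / (64.0 * math.pi)

s0 = 0.5   # GeV^2, an illustrative energy
# theta-independent T^0 = (2s - m_pi^2)/F_pi^2 puts everything in l = 0:
exact00 = (math.sqrt(1.0 - 4.0 * m_pi**2 / s0)
           * 2.0 * (2.0 * s0 - m_pi**2) / (64.0 * math.pi * F_pi**2))
```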
\newpage
\section*{Introduction}
The first-passage time of a stochastic process to a boundary is a fundamental problem with applications in queuing theory \cite{iglehart65}, mathematical finance \cite{RISK03}, epidemic models on networks for the spreading of disease and computer viruses \cite{Lloyd01}, animal or human movement \cite{Gonzalez08}, neuron firing dynamics \cite{Tuckwell05}, diffusion controlled reactions \cite{Szabo80}, controlled kinetics \cite{Benichou10,Godec16}, renewal and non-renewal systems \cite{Ptaszynski18} and much more besides. Redner \cite{Redner01} presents a physics perspective, and provides a compelling overview of the importance, of many first-passage processes.
The Ornstein-Uhlenbeck (OU) process \cite{Uhlenbeck30} is the canonical mean-reverting process, with applications in all the above fields; with regard to mathematical finance, it is indispensable in interest-rate modelling \cite{Hull90}.
The boundary problem was studied early on: Darling \& Siegert \cite{Darling53} obtained the Bromwich integral representation for the solution, and made the comment that ``it appears very difficult to invert this transform'' which was prescient given the numerous attempts, with varied success, to invert it analytically.
This OU barrier problem, and its mean-reverting generalisations, have therefore been a tantalising, and apparently intractable, long-standing problem that remains of substantial interest: general discussions can be found in \cite{Borodin02,Breiman67,Horowitz85}.
Recent semi-analytic approaches based around an integral equation formulation \cite{Lipton18}, or recursion methods for the moments \cite{Veestraeten15}, indicate the current state-of-the-art.
Here we generate an asymptotic formula valid for long and short times that is also exact in certain cases; in some ways more importantly, we show that the same approximation is not specific to the OU model, but more generally valid under only mild assumptions.
The Feynman--Kac theorem allows us to express the first-passage probability associated with a stochastic process as the solution of a parabolic partial differential equation (PDE) with appropriate initial and boundary conditions. Specifically let
\begin{equation}
dY_t = \kappa A(Y_t) \, dt + \sqrt{2\kappa} \, dW_t,
\label{eq:sde_y}
\end{equation}
be a diffusion, with $\kappa>0$ a constant of dimension $1/$time,
and write $\tau =\kappa t$. Then if $F$ is the absorption probability ($1-F$ being the survival probability) for a boundary placed at $y_+$, above the starting-point $y$ (we can always assume $y<y_+$ without loss of generality), so that
\[
F(\tau,y)=\mathbf{P}\Big(\max_{0\le t'\le t} Y_{t'} > y_+\Big),
\]
then we are to solve
\begin{equation}
\pderiv{F}{\tau} = A(y) \pderiv{F}{y} + \pdderiv{F}{y},
\label{eq:pde_Fy}
\end{equation}
with initial condition $F(0,\cdot)=0$ and boundary conditions $F(\cdot,y_+)=1$, $F(\cdot,-\infty)=0$. The p.d.f.\ of the first passage time, $f=\partial F / \partial \tau$, satisfies the same equation, but with a delta-function initial condition instead.
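Equation (\ref{eq:pde_Fy}) with the OU drift $A(y)=-y$ is straightforward to integrate by finite differences. The explicit-scheme sketch below (Python; the barrier position, grid extent and step sizes are illustrative choices, not values from the text) produces an absorption probability that is monotone in $y$, nondecreasing in $\tau$, and bounded in $[0,1]$, as it must be.

```python
import numpy as np

# Explicit finite-difference sketch for eq. (pde_Fy) with the OU drift
# A(y) = -y; barrier, grid extent and step sizes are illustrative choices.
y_plus, y_min, ny = 1.0, -6.0, 351
y = np.linspace(y_min, y_plus, ny)
dy = y[1] - y[0]
dt = 0.25 * dy**2                  # well inside explicit-scheme stability

F = np.zeros(ny)
F[-1] = 1.0                        # F(tau, y_plus) = 1; F(0, y) = 0 elsewhere

def step(F):
    Fy = (F[2:] - F[:-2]) / (2.0 * dy)             # centered first derivative
    Fyy = (F[2:] - 2.0 * F[1:-1] + F[:-2]) / dy**2
    Fn = F.copy()
    Fn[1:-1] += dt * (-y[1:-1] * Fy + Fyy)
    Fn[0], Fn[-1] = 0.0, 1.0                       # boundary conditions
    return Fn

# march to tau = 1, keeping a snapshot at tau = 0.5
F_half, nsteps = None, int(round(1.0 / dt))
for k in range(nsteps):
    F = step(F)
    if F_half is None and (k + 1) * dt >= 0.5:
        F_half = F.copy()
```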
Of particular interest is the OU process, given by $A(y)=-y$. Despite the fact that the transition density for the unconstrained process is very simple to write down \cite{Cox65}, finding the distribution of its first-passage time to a boundary is much more difficult \cite{Darling53}.
We note that the work of Leblanc, Renault and Scaillet \cite{Scaillet00,Scaillet98}, purporting to give an exact solution to it, is incorrect:
\begin{equation}
f(\tau,y) \stackrel{??}{=} \frac{|y-y_+|\, e^{-\tau}}{\sqrt{\tfrac{\pi}{2} (1-e^{-2\tau})^3}} \exp\left( -\frac{(ye^{-\tau} - y_+)^2}{2(1-e^{-2\tau})} \right) \qquad \mbox{(Wrong if $y_+\ne0$)},
\label{eq:wrong}
\end{equation}
though it does correspond with the known result \cite{Pitman81b,Ricciardi88} when the boundary is at equilibrium, a result that can be obtained directly or via the Doob transformation.
The technical reason for the incorrectness of (\ref{eq:wrong}) is that the authors had incorrectly used a spatial homogeneity property of the three-dimensional Bessel bridge process \cite{Yor03}.
There are more obvious reasons for its incorrectness, which do not require a specialist knowledge of stochastic processes. We can, of course, simply point out that it is wrong because it fails to satisfy (\ref{eq:pde_Fy})\footnote{The theory was developed without reference to the underlying PDE.}; and, as we shall see from tackling the PDE, in certain limits its behaviour is also wrong. Nonetheless, (\ref{eq:wrong}) is illuminating because in some aspects it is `almost' correct and in others not, a matter on which we expand as our paper unfolds. At the time of writing a simple closed-form solution is not known, and given the close connection between the problem and the parabolic cylinder function, it is likely that there is none.
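To make the failure concrete, the following short script, which we add purely as an illustration (the evaluation points are chosen arbitrarily), checks by central finite differences that the right-hand side of (\ref{eq:wrong}) satisfies the backward PDE (\ref{eq:pde_Fy}) with $A(y)=-y$ when the boundary is at equilibrium, but not otherwise:

```python
import math

def f_wrong(tau, y, yp):
    """Right-hand side of the disputed formula; exact only when yp = 0."""
    v = 1.0 - math.exp(-2.0 * tau)
    pref = abs(y - yp) * math.exp(-tau) / math.sqrt(math.pi * v**3 / 2.0)
    return pref * math.exp(-(y * math.exp(-tau) - yp)**2 / (2.0 * v))

def pde_residual(tau, y, yp, h=1e-4):
    """Residual of f_tau - A f_y - f_yy with OU drift A(y) = -y, by central differences."""
    ft  = (f_wrong(tau + h, y, yp) - f_wrong(tau - h, y, yp)) / (2.0 * h)
    fy  = (f_wrong(tau, y + h, yp) - f_wrong(tau, y - h, yp)) / (2.0 * h)
    fyy = (f_wrong(tau, y + h, yp) - 2.0 * f_wrong(tau, y, yp)
           + f_wrong(tau, y - h, yp)) / h**2
    return ft - (-y) * fy - fyy

print(pde_residual(0.5, -0.7, 0.0))   # boundary at equilibrium: residual ~ 0
print(pde_residual(0.5,  0.3, 1.0))   # boundary elsewhere: residual far from zero
```

In the first case the residual is at the level of finite-difference error; in the second it is of the same order as $f$ itself.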
This does not preclude the existence of simple \emph{approximate} solutions, or methods that combine analytical techniques with numerical ones. We now turn to these.
The Laplace transform of (\ref{eq:pde_Fy}) is one of the commonest methods for deriving analytical results. However, for all but the simplest stochastic processes it fails to give a neat answer because ultimately a second-order differential equation has to be solved, almost invariably invoking a special function---in the OU case, the parabolic cylinder function---after which an inversion integral has to be carried out. The exponential decay-rate as $\tau\to\infty$ can be obtained from the singularities, provided their position can be identified, and we will devote considerable effort to this in \S\ref{sec:lambda}.
Linetsky \cite{Linetsky04} invokes the Bromwich integral to invert the Laplace transform, using the standard complex analysis technique of collapsing the integral around a series of poles in the left half-plane, but the calculation of their positions and residues is not straightforward. One can develop approximations, but these are likely to be model-specific, so that they are unlikely to be useful for other models; furthermore, in general, the singularities in question may not even be simple poles.
Alili et~al.\ \cite{Alili05} give this same representation and then follow it up with an integral representation that is essentially a Bromwich integral (hence invoking the parabolic cylinder function), and a Bessel bridge representation.
It is worth noting---recalling the title of our paper---that while the Laplace transform readily conveys information about the long-term asymptotics, it is much less helpful in dealing with the short-term behaviour \cite{Redner01} for which other techniques need to be used.
This brings us on to a second class of techniques, which are time-domain methods that, among other things, establish the short-time behaviour as
\begin{equation}
f(\tau,y) \sim \frac{b}{\sqrt{4\pi\tau^3}} e^{-b^2/4\tau},
\label{eq:levysmirnov}
\end{equation}
where $b=|y-y_+|$ is the distance to the boundary. This is obvious on probabilistic grounds, and is most easily derived by applying the Hopf-Cole transformation, \cite{Hopf50}, to (\ref{eq:pde_Fy}) and using dominant balance, as done in \cite{Martin15b} in a different context, and in \S\ref{sec:short} of this paper; alternatively the related WKB approximation can be applied, e.g.\ in \cite{Artime18} where it is explained why (\ref{eq:levysmirnov}) is generic provided the drift term is locally bounded.
Typically, though, these methods do not give information about the long-time behaviour, which as we have said above is exponential. Incidentally, Artime et al.\ \cite{Artime18} state in their introduction that there is ``an exponential cut-off if the domain is bounded'', which is misleading: in this paper we treat semi-infinite domains, and the decay is still exponential.
A recent paper by Lipton \& Kaushansky \cite{Lipton18} uses a transformation to a Volterra integral equation of the second kind, from which (\ref{eq:levysmirnov}) follows by solving Abel's integral equation, and while this does not diagnose the long-term asymptotics it shows how to obtain results simply by solving the Volterra equation numerically.
Indeed, this very appealing paper serves to highlight that the problem is still of both theoretical and practical interest.
Finally, we point out that despite the fact that the first-passage time density is not known in closed form for the OU process, certain aspects of the problem are tractable, such as the moments and/or cumulants (\cite{Veestraeten15}, and later discussion), or certain exponential moments \cite{Ditlevsen07}.
The reader will probably have anticipated that one of our central interests is working out how to combine short- and long-time asymptotics: as is apparent from the above discussion, no existing techniques have yet been successful in doing this.
Our interest is in `global' asymptotics: that is to say, formulae that provide approximations in many different r\'egimes at the same time, rather than having to use one formula for one r\'egime, one for another, and so on.
As one example, Ricciardi \& Sato \cite{Ricciardi88} show that for the OU process, in the far-boundary limit, the first-passage time density is exponential, and Lindenberg et al.~\cite{Lindenberg75} state that this result is generic, which in fact is clear on probabilistic grounds, though as we point out later some technical conditions on $A$ are required. However, it is true only over long time scales, and so we want a single formula that encompasses this and also the short-time behaviour (\ref{eq:levysmirnov}).
Ideally, we also prefer formulas that make these asymptotic properties immediately obvious, and in this regard we argue that our final formula (\ref{eq:final}), which is constructed by analysis of many different special cases, satisfies this criterion. Furthermore, the ingredients of this formula are universal, so that although the derivation is guided by the OU model, and is exact in certain cases, it is applicable to a wide class of other diffusions. Accordingly we do not entirely agree with Artime et al.\ \cite{Artime18} that ``general formulations are scarce, since one finds a large variability from one problem to another'': while the number of exactly-solved problems is small our final result, (\ref{eq:final}), possesses a universality that does make it generally applicable.
We should also mention numerical methods, which are very common in, say, financial derivative pricing, and which we use to check the validity of our analytical results. The simplest is to set up a trinomial tree to discretise the process in space and time and then calculate $F$ by forward induction. It is simple to implement and very flexible, in that the boundary can be a different shape, and the dynamics made time-varying; see e.g.\ \cite{Hull_OFOD} for a general introduction. The disadvantage, of course, is that it provides no analytical answers, and the longer the time horizon over which results are needed, the longer it takes.
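As a sketch of this approach (with grid parameters chosen for illustration only), the explicit finite-difference scheme below, which is a trinomial tree in disguise, computes $F$ for the OU process with the boundary at equilibrium; the survival probability should then decay like $e^{-\tau}$, as the exact density, i.e.\ (\ref{eq:wrong}) with $y_+=0$, shows:

```python
import math

# Explicit scheme (a trinomial tree in disguise) for the OU absorption
# probability F solving F_tau = -y F_y + F_yy on [-L, 0], with F(tau, 0) = 1
# and F approximated as 0 at y = -L.  All parameters are illustrative.
L, dy, dtau = 6.0, 0.05, 1e-3          # dtau/dy^2 = 0.4 < 1/2 for stability
n = int(round(L / dy))
ys = [-L + i * dy for i in range(n + 1)]
F = [0.0] * n + [1.0]                  # initial condition F(0, y) = 0 below boundary
j = int(round((L - 1.0) / dy))         # index of the starting point y = -1
surv = {}
for step in range(1, 4001):
    G = F[:]
    for i in range(1, n):
        G[i] = F[i] + dtau * (-ys[i] * (F[i+1] - F[i-1]) / (2.0 * dy)
                              + (F[i+1] - 2.0 * F[i] + F[i-1]) / dy**2)
    F = G
    if step in (3000, 4000):           # tau = 3 and tau = 4
        surv[step] = 1.0 - F[j]
lam = math.log(surv[3000] / surv[4000])  # survival ~ e^{-lambda tau}
print(lam)                               # expect a value near 1
```

The estimated decay-rate comes out close to $1$, consistent with the $e^{-\tau}$ factor in the exact equilibrium-boundary density.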
The paper is organised as follows.
We pursue a general development rather than concentrating on the OU case, even though we make constant reference to it.
Section~\ref{sec:lambda} initially follows the Laplace transform route, but using the Hopf-Cole transformation, i.e.\ the logarithmic derivative in the $y$ direction, it derives a recursion that can be used to find the asymptotic decay-rate $\lambda$ (Theorem~\ref{thm:R3} and Algorithm~1). Along the way it gives formulas for the cumulants (Theorem~\ref{thm:cumul}), to which this analysis is closely related, and gives some new results on limiting behaviours in the OU model.
Another consequence is that the far-boundary limit, mentioned above, drops out naturally.
Section~\ref{sec:short} uses the Hopf-Cole transformation again, this time applied to the density itself. This gives a short-time development, which is then extended in such a way as to make the solution behave properly as $\tau\to\infty$, while being exact in certain known special cases. This culminates in the final formula (\ref{eq:final}), which is then demonstrated numerically.
There is, therefore, an important thematic connection between the two halves of the paper, in that both invoke the Riccati equation, in different contexts.
Another link is that the coefficient $\lambda$, expressing the rate of long-term exponential decay, is the subject-matter of the first half, and an important component in the second half.
We complete the paper by suggesting possible further developments.
\section{Long-time asymptotics}
\label{sec:lambda}
We start with some well-known facts and then move on beyond what is common knowledge.
\subsection{Notational preliminaries}
We briefly note at the outset that the general form of the SDE is
\begin{equation}
dX_t = \mu_X(X_t) \, dt + \sigma_X(X_t) \, dW_t;
\label{eq:sde_x}
\end{equation}
by making the substitution
\[
dy/dx = (2\kappa)^{1/2}\big/\sigma_X(x), \qquad \tau=\kappa t,
\]
also known as the Lamperti transformation, we obtain (\ref{eq:sde_y}). Therefore (\ref{eq:sde_y}) has no less generality and we shall make only occasional reference to (\ref{eq:sde_x}).
By Laplace transforming (\ref{eq:pde_Fy}) in time we get
\[
s\widehat{f}(s,y) - f(0,y) = A(y) \pderiv{\widehat{f}}{y}(s,y) + \pdderiv{\widehat{f}}{y}(s,y);
\]
the asymptotic rate of decay of $f$ is then given by the rightmost singularity of $s \mapsto \widehat{f}(s,y)$.
Write $C_+(s,y)$, $C_-(s,y)$ for the solutions to the homogeneous problem
\[
\pdderiv{C}{y} + A(y) \, \pderiv{C}{y} = s C(s,y)
\]
that are, respectively, bounded as $y\to-\infty$ and $y\to+\infty$.
If we start below the boundary ($y<y_+$) then the solution to the Laplace-transformed problem is
\begin{equation}
\widehat{f}(s,y) = \frac{C_+(s,y)}{C_+(s,y_+)} , \qquad y \le y_+
\label{eq:LT1}
\end{equation}
while if we start above then we use $C_-$ instead.
Without loss of generality we can concentrate on the case where we start below the boundary.
We define $\psi(y)$ to be the invariant density, so that $\psi'/\psi=A$, and $\Psi$ to be its integral, i.e.\ the cumulative distribution function.
Some examples of interest, which we will analyse in some detail, are:
\begin{itemize}
\item
$A(y)=-y$, OU;
\item
$A(y)=-\mathrm{sgn}\, y$, dry-friction \cite{Touchette10a};
\item
$A(y)=-\alpha\tanh\gamma y$, giving a sech-power for $\psi$.
\end{itemize}
The last example reduces, in opposite extremes, to the first two.
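As a quick numerical illustration of this reduction (the parameter values below are ours, chosen arbitrarily):

```python
import math

# A(y) = -alpha * tanh(gamma * y) interpolates between the other two drifts:
# gamma -> 0 with alpha*gamma fixed gives the OU drift -(alpha*gamma) y,
# while gamma -> infinity gives the dry-friction drift -alpha * sgn(y).
A = lambda y, alpha, gamma: -alpha * math.tanh(gamma * y)

y = 0.7
small = A(y, 100.0, 0.01)   # alpha*gamma = 1: compare with the OU drift -y
large = A(y, 1.0, 50.0)     # compare with the dry-friction drift -sgn(y)
print(small, large)
```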
\subsection{Comment on the OU case}
We briefly discuss the OU case, as this helps the reader link our work to previous literature, e.g.~\cite{Ricciardi88}; we note that \cite{Abramowitz64} contains background details of the parabolic cylinder, Tricomi, and Kummer functions that are used.
We have
\[
C_\pm(s,y) = \mathbf{D}_s (\pm y),
\]
where $\mathbf{D}_s(y)$, a relative of the parabolic cylinder function, is variously defined as follows. First, as an integral transform,
\begin{equation}
\mathbf{D}_s(y) = \frac{1}{\Gamma(s)} \int_0^\infty u^{s-1} e^{yu-u^2/2} \, du, \qquad \mathrm{Re}\, s>0,
\label{eq:LT1.ou}
\end{equation}
and by analytic continuation elsewhere, for example by means of the recursion (immediate from the above)
\begin{equation}
\mathbf{D}_s(y) = -y \mathbf{D}_{s+1}(y) + (s+1)\mathbf{D}_{s+2}(y).
\label{eq:pcfrecur}
\end{equation}
(N.B. The definition is convenient, but nonstandard.)
Alternative representations use the Kummer function $M$ (also commonly known as $\Phi$ or $\mbox{}_1F_1$),
\[
\mathbf{D}_s(y) = \frac{2^{s/2-1} \Gamma(\frac{s}{2})}{\Gamma(s)}\, M\big(\tfrac{s}{2},\tfrac{1}{2},y^2/2\big) + \frac{2^{(s-1)/2} \Gamma(\frac{s+1}{2})}{\Gamma(s)}\, y\, M\big(\tfrac{s+1}{2},\tfrac{3}{2},y^2/2\big)
\]
or, in terms of the Tricomi function
\[
U(a,b,z) = \frac{1}{\Gamma(a)} \int_0^\infty e^{-zt} t^{a-1} (1+t)^{b-a-1} \, dt, \qquad 0 \le \arg z < \pi/2
\]
we have
\[
\mathbf{D}_s(-y) = 2^{-s/2} \textstyle
U(\frac{s}{2}, \frac{1}{2}, y^2/2), \qquad 0 \le \arg y < \pi
\]
where the principal branch of $U(\cdot,\cdot,w)$ is on the cut plane $\{0\le\arg w < 2\pi\}$, i.e.\ the cut is just below the positive real axis\footnote{It is then defined by analytic continuation elsewhere; note that the RHS does not have a branch-point at $y=0$, but because $U$ does, the RHS defines a function on two disconnected copies of $\mathbb{C}$.}.
More usually it is written in terms of the parabolic cylinder function $D_s$ as
\[
\mathbf{D}_s(y) = e^{y^2/4} D_{-s}(-y).
\]
As consequences, we note\footnote{$\phi,\Phi$ denote the density and cumulative of the standard Normal distribution; $\mathrm{He}_r$ is the $r$th Hermite polynomial; $\mathbb{N}$ denotes the set of positive integers, and $\mathbb{N}_0$ the same with 0 included.}
\[
\mathbf{D}_1(y)=\Phi(y)/\phi(y); \qquad
\mathbf{D}_s'(y) = s \mathbf{D}_{s+1}(y) ; \qquad
\mathbf{D}_{-r}(y)=(-)^r\mathrm{He}_r(y), \quad r\in \mathbb{N}_0 .
\]
These allow the results of Ditlevsen \cite{Ditlevsen07} to be derived.
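These identities are easy to check numerically from the integral representation (\ref{eq:LT1.ou}). The sketch below (plain trapezoidal quadrature with illustrative truncation parameters, adequate for $s\ge1$ where the integrand has no endpoint singularity) verifies $\mathbf{D}_1(y)=\Phi(y)/\phi(y)$ and the recursion (\ref{eq:pcfrecur}) at $s=1$:

```python
import math

def D(s, y, n=20000, umax=12.0):
    """Trapezoidal evaluation of the integral representation of D_s(y);
    intended for s >= 1 (no endpoint singularity at u = 0)."""
    h = umax / n
    g = lambda u: u ** (s - 1.0) * math.exp(y * u - u * u / 2.0)
    total = 0.5 * (g(0.0) + g(umax))       # note 0.0**0.0 == 1.0 handles s = 1
    total += sum(g(i * h) for i in range(1, n))
    return h * total / math.gamma(s)

y = 0.3
Phi = 0.5 * (1.0 + math.erf(y / math.sqrt(2.0)))      # standard Normal cdf
phi = math.exp(-y * y / 2.0) / math.sqrt(2.0 * math.pi)  # standard Normal pdf
print(D(1.0, y), Phi / phi)                           # D_1(y) = Phi(y)/phi(y)
print(D(1.0, y), -y * D(2.0, y) + 2.0 * D(3.0, y))    # recursion at s = 1
```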
Another useful result, related to the idea of analytic continuation, is the following reflection formula, analogous to that of the Gamma function and derived in the same way. It is nowhere near as well-known: indeed, it appears to be a new result (see Appendix), despite integral representations of products of parabolic cylinder functions remaining of interest in the special functions community \cite{Nasri15,Veestraeten17b,Veestraeten17a}:
\begin{eqnarray}
\mathbf{D}_s(y) \mathbf{D}_{1-s}(y) &=& \sum_{k=0}^\infty \frac{\Gamma(k+s)\Gamma(k+1-s)}{k! \Gamma(s)\Gamma(1-s)} \mathbf{D}_{2k+1}(y)
\label{eq:pcfrecur2} \\
&=& \int_0^\infty {}_2F_2 (s, 1-s; 1,{\textstyle\frac{1}{2}}; z^2/4) \, e^{yz-z^2/2} \, dz.
\label{eq:pcfrecur3}
\end{eqnarray}
Obviously (\ref{eq:pcfrecur2}) is another way of extending $\mathbf{D}_s(y)$ to the left half-plane (of $s$), doing it `in one go' rather than recursively in the way that (\ref{eq:pcfrecur}) does.
\subsection{Definition of $\lambda$, and some technical conditions}
We now return to the general case.
The rate of exponential decay in the long-time limit is obtained by finding the position of the rightmost singularity of $\widehat{f}$; more formally we define
\begin{equation}
\lambda = \sup \{ a : \widehat{f} \mbox{ analytic for } \mathrm{Re}\, s>-a \} \ge 0,
\end{equation}
which in principle depends on the starting-point and the boundary, as well as on $A$. In the OU case the rightmost singularity is caused by a zero of $s\mapsto\mathbf{D}_s(y_+)$, which, by Eq.~\eqref{eq:LT1}, gives $\widehat{f}$ a simple pole. In general, the singularity might not be a pole, but some important general results can be formulated.
We define $R_{s=a}(f)$ to be the radius of differentiability of an analytic function $f$ at the point $s=a$, i.e.\ the radius of convergence of its Taylor series in $s$.
The following shows that rather than finding the strip of analyticity, we can use the radius about the origin:
\begin{lemma}
\label{lem:R1}
$\lambda = R_{s=0}\big[ \widehat{f} \big]$: the asymptotic rate of exponential decay of the first-passage time density is the radius of differentiability of $\widehat{f}(s)$ at the origin.
\end{lemma}
\noindent
Proof. This is elementary: it suffices to prove that the singularity closest to the origin lies on the negative real axis, rather than being a pair of complex conjugates at $-\lambda \pm \mathrm{i}\omega$, which would make $R_{s=0}[\widehat{f}]=\sqrt{\lambda^2+\omega^2}>\lambda$. But $\widehat{f}$ is the Laplace transform of a probability density $p$, say, and hence of a nonnegative function, so this is impossible. For,
\[
\big| \widehat{f}(-\lambda+\mathrm{i}\omega,z) \big| \le \int_0^\infty e^{\lambda \tau} p(\tau) \, d\tau = \widehat{f}(-\lambda);
\]
so if it is analytic at $s=-\lambda$ then it is analytic at $s=-\lambda\pm\mathrm{i}\omega$. $\Box$
\vspace{5mm}
\begin{defn}
The first-passage problem is said to be completely absorbing if, from any starting-point, the process almost surely hits the boundary eventually; equivalently, $\widehat{f}(0,y)=1$ for all $y$.
\end{defn}
\begin{lemma}
\label{lem:CA}
The first-passage problem is completely absorbing if the following two conditions are met:
\begin{itemize}
\item
$\liminf_{y\to-\infty} A(y)\ge0$;
\item
$\inf_{y<y_+} A(y)>-\infty$.
\end{itemize}
\end{lemma}
\noindent
Proof.
The easiest way to see this is via a probabilistic argument: the first condition shows that the process is recurrent, and the second shows that its probability of hitting the boundary is positive. Therefore the survival probability must decay to zero.
$\Box$
\vspace{5mm}
\noindent
A slightly stronger condition will be needed shortly:
\begin{defn}
We write $A\in \mathfrak{S}_-$ if $\lim_{y\to-\infty} -y A(y)=+\infty$, and
$A\in \mathfrak{S}_+$ if the same limit holds as $y\to+\infty$.
If additionally $A'(y)/A(y)\to0$ and $A'(y)/A(y)^2\to0$ as $y\to-\infty$ then we write $A\in\mathfrak{S}_-^*$; similarly for $y\to+\infty$.
\end{defn}
The first condition ensures that the reversion speed does not decay too rapidly at $\pm\infty$. Models such as the OU and dry-friction are in $\mathfrak{S}_\pm$ but $-y/(1+y^2)$, which was considered in \cite{Martin15b}, is not. The condition implies that the invariant density, $\psi(y)$, decays faster than any power of $y$, and in most cases it decays exponentially, but it may not do so, for example when $A(y)=-\ln(1+y^2)/y$.
The second condition ($A'(y)/A(y)\to0$ and $A'(y)/A(y)^2\to0$) will be needed later when we need to bound the variation of $A$, essentially because we shall need to approximate the integral of $A(z)$, for $z$ up to some value $y$, with an expression depending only on $A(y)$.
Informally, most `sensible' force-fields obey this extra condition\footnote{A function that is in $\mathfrak{S}_+$ but not in $\mathfrak{S}_+^*$ is this sawtooth: $A(y)=-1$ for $2n\le y < 2n+1$ and $A(y)=-2$ for $2n+1 \le y < 2n+2$, $n\in\mathbb{N}$.}.
\notthis{
\vspace{5mm}
The next condition is a minor one designed to remove pathological cases, and ensures that the force-field $A$ behaves sensibly at $\pm\infty$:
\begin{defn}
\label{defn:RC}
We say that $A$ is regular at $y=\pm\infty$ if $A'(y)/A(y)^2\to0$ in that limit.
\end{defn}
}
\begin{lemma}
\label{lem:RC}
If $A\in\mathfrak{S}_+^*$ then for $r\in\mathbb{N}$,
\[
\int_x^y \frac{dz}{A(z)^{r-1} \psi(z)^r} \sim \frac{-1}{r A(y)^r \psi(y)^r}, \qquad y\to+\infty,
\]
for any $x$ for which the integral is defined\footnote{Essentially we need to make sure that $A(z)$ is not zero in the range of integration. By hypothesis this will hold for $z$ sufficiently large.}.
\end{lemma}
\noindent
Proof. We have
\[
\int_x^y \frac{dz}{A(z)^{r-1} \psi(z)^r} =
-\frac{1}{r} \int_x^y \frac{1}{A(z)^r} \deriv{}{z} \left( \frac{1}{\psi(z)^r} \right) dz
= \left[ \frac{-1}{rA(z)^r \psi(z)^r} \right]_x^y
- \int_x^y \frac{A'(z)/A(z)^2}{A(z)^{r-1} \psi(z)^r} \, dz
\]
and because $A'(y)/A(y)^2\to0$, the second term on the RHS is small compared with the LHS, which is what we are trying to approximate.
$\Box$
\subsection{The logarithmic derivative and its expansion}
The logarithmic derivative of the function $C_+(s,z)$ is going to be central to the theory:
\begin{equation}
H(s,z) = - \pderiv{}{z} \log C_+(s,z),
\end{equation}
and the following result gives some of its more important properties.
\begin{prop}
\label{prop:1}
The following applies in general.
\begin{itemize}
\item[(i)]
Let $f'(t,z)$ denote the $z$-derivative of the first-passage time density at the boundary (where the boundary is placed at $z$). Then for $r\in\mathbb{N}$ the Laplace transform of $t\mapsto-t^rf'(t,z)$ is $(-\partial/\partial s)^rH(s,z)$, which is $>0$.
So $H$ is analytic for $\mathrm{Re}\, s >$ some $\widehat{s}\le0$ and singular at $s=\widehat{s}$.
\item[(ii)]
$H$ satisfies the Riccati equation
\begin{equation}
\pderiv{H}{z} - H^2 + AH = -s .
\label{eq:ricc1}
\end{equation}
\item[(iii)]
In the formal expansion
\begin{equation}
H(s,y) = \sum_{r=0}^\infty (-s)^r h_r(y),
\label{eq:Hexpansion}
\end{equation}
we have
\[
h_0(y)= - \pderiv{}{y} \ln p_\infty(y;y_+) \le 0
\]
where $p_\infty(y;y_+)$ denotes the probability that the boundary is eventually hit, starting from $y<y_+$; note that the above expression does not depend on $y_+$. Also $h_r>0$ for $r\ge1$.
\item[(iv)]
We have
\begin{equation}
h_1(y) = \frac{1}{\psi(y)p_\infty(y)^2} \int_{-\infty}^y \psi(z)p_\infty(z)^2 \, dz ,
\label{eq:recurh1}
\end{equation}
which equals $\Psi(y)/\psi(y)$ in the completely-absorbing case, and
\[
h_r'(y) = -A(y) h_r(y) + \sum_{k=0}^{r} h_k(y)h_{r-k}(y), \qquad r \ge2.
\]
If $A\in\mathfrak{S}_-$, we have
\[
h_1(y) = o(|y|), \qquad y\to-\infty,
\]
and so, as the problem is completely absorbing in that case,
\begin{equation}
h_r(y) = \frac{1}{\psi(y)} \int_{-\infty}^y \sum_{k=1}^{r-1} h_k(z)h_{r-k}(z) \, \psi(z) \, dz, \qquad r \ge2,
\label{eq:recurh2}
\end{equation}
so that
\begin{equation}
h_r(y) \sim \mathfrak{c}_{r-1} h_1(y)^{2r-1}, \qquad y\to-\infty
\end{equation}
where $\mathfrak{c}_r=\frac{(2r)!}{r!(r+1)!}$ is the $r$th Catalan number.
\noindent
If additionally $A\in\mathfrak{S}_+^*$ then
\begin{equation}
h_r(y) \sim \big({-A(y)}\big)^{1-r} \psi(y)^{-r} , \qquad y\to+\infty.
\label{eq:fancyh_infty}
\end{equation}
\item[(v)]
$R_{s=0}[H(s,z)]$ is monotone decreasing in $z$.
\item[(vi)]
$R_{s=0}[H(s,z)] = \lim_{r\to\infty} h_r(z)/h_{r+1}(z)$: the ratio of successive terms gives the radius of convergence (and hence $\widehat{s}$ as defined in part (i)).
\item[(vii)] The large-$s$ behaviour is
\begin{equation}
H(s,z) \sim -\sqrt{s}, \qquad s\to+\infty.
\label{eq:Has}
\end{equation}
\end{itemize}
\end{prop}
\noindent
Proof. Part (i) is straightforward (note that $f'<0$; the inequality is untrue for $r=0$ because $f'$ is singular of order $\tau^{-3/2}$ as $\tau\to0$, so does not have a Laplace transform). Part (ii) is immediate from the backward equation, while (iii) is immediate from (i) and (\ref{eq:LT1}).
The first part of (iv) follows from (ii).
From
\[
\Psi(y) = \int_{-\infty}^y \psi(z)\, dz = y\psi(y)
- \int_{-\infty}^y zA(z) \psi(z)\, dz ,
\]
the following holds when $A\in\mathfrak{S}_-$: let $c>1$, so as $y\to-\infty$ we have
$\Psi(y) > y\psi(y) + c \Psi(y)$, and so $h_1(y)<-y/(c-1)$, which proves that $h_1(y)=o(|y|)$.
This condition ensures that the integral (\ref{eq:recurh2}) converges, as then $\psi(y)$ decays faster than any power of $|y|$.
The asserted asymptotic behaviour of $h_r$ as $y\to-\infty$ follows by induction: it is trivial when $r=1$, so let $r\ge2$ and suppose that it holds for all lower values of $r$.
Then
\begin{eqnarray*}
h_r(y) &\sim& \frac{1}{\psi(y)} \int_{-\infty}^y
\sum_{k=1}^{r-1} \mathfrak{c}_{k-1} \mathfrak{c}_{r-k-1} h_1(z)^{2r-2} \psi(z) \, dz \\
&=& \frac{\mathfrak{c}_{r-1}}{\psi(y)} \int_{0}^{\Psi(y)} h_1\big( \Psi^{-1}(u) \big)^{2r-2} \, du \\
&\sim& \mathfrak{c}_{r-1} \frac{\Psi(y)}{\psi(y)} h_1(y)^{2r-2}
\end{eqnarray*}
as required.
When additionally $A\in\mathfrak{S}_+^*$ we have as $y\to+\infty$
\begin{eqnarray*}
h_r(y) &\sim& \frac{1}{\psi(y)} \int_{-\infty}^y \sum_{k=1}^{r-1} h_k(z) h_{r-k}(z) \psi(z)^{1-r} \, dz \\
&\sim& \frac{r-1}{\psi(y)} \int_{x}^y \big( {-A(z)} \big)^{2-r} \psi(z)^{1-r} \, dz
\end{eqnarray*}
for any $x$ satisfying $z>x \Rightarrow A(z)<0$, and the result then follows from Lemma~\ref{lem:RC}.
As to part (v), we note that by usual arguments on analytic functions,
\[
R_{s=0}[H(s,z)] = \liminf_{r\to\infty} |h_r(z)|^{-1/r} .
\]
Now write $\ell_r=h_r^{1/r}$, so that
\[
\ell_r'(z) = \frac{-A(z) \ell_r(z)}{r} + \frac{\ell_r(z)}{r h_r(z)} \sum_{k=1}^{r-1} h_k(z) h_{r-k}(z);
\]
\notthis{
\[
\ell_r' = \frac{1}{r} \left[ -A \ell_r + \ell_r^{1-r} \sum_{k=1}^{r} \ell_k^k \ell_{r-k}^{r-k} \right] .
\]
Using the AM-GM inequality,
\[
\ell_r' \ge \frac{\ell_r}{r} \left[ -A + (r-1) \ell_r^{-r} \big( \ell_1^1 \ell_2^2 \cdots \ell_{r-1}^{r-1} \big)^{2/(r-1)} \right];
\]
}
the first term tends to zero as $r\to\infty$ and the second is positive. As the radius of convergence is $\liminf_{r\to\infty} 1/\ell_r(z)$, the result is proven.
Part (vi) follows from using Cauchy's integral formula and observing that the dominant contribution comes from near the singularity.
Part (vii) follows by dominant balance, as $H^2$ equates to $s$ in the limit. (The negative root needs to be taken as otherwise $H$ is non-decreasing.) This implies behaviour near the boundary of
\[
-f'(\tau,z) \sim \frac{1}{\sqrt{4\pi \tau^3}}, \qquad \tau\to0
\]
and is a consequence of the diffusive behaviour: so it works for any model, regardless of the drift or boundary position.
$\Box$
As an aside, notice that in the OU case with the boundary at equilibrium,
\begin{equation}
H(s,0) = -\sqrt{2}\, \frac{\Gamma\big(\frac{s+1}{2}\big)}{\Gamma\big(\frac{s}{2}\big)}
\end{equation}
which is seen to have the advertised behaviour.
\vspace{5mm}
The centrality of the function $H(s,z)$ is contained in the following result, which shows that $\lambda$ is simply the radius of convergence of $H(s,y_+)$:
\begin{lemma}
\label{lem:R2}
$R_{s=0}\big[ \widehat{f} \big] =
R_{s=0} \big[ H(s,y_+) \big]$.
\end{lemma}
\noindent
Comment.
Thus $\lambda$, the asymptotic rate of exponential decay of the first-passage time density, depends on the position of the boundary, and not on the starting-point, which is unsurprising as, over time, the process forgets about where it started.
The fact that $R_{s=0}\big[ H(s,y_+) \big]$ is monotone decreasing in $y_+$ now comes as no surprise: if the boundary is brought closer, it must be hit at least as rapidly, so $\lambda$ must increase (or perhaps stay the same).
In practical terms this result is important because we can obtain $\lambda$ from the sequence $\big(h_r(y_+)/h_{r+1}(y_+)\big)$.
In the particular case of OU, Elbert and Muldoon \cite{Elbert08} showed that the position of the rightmost zero of $s\mapsto \mathbf{D}_s(z)$ is monotonic in $z$, by direct analysis of that function using results known as Nicholson integrals \cite{Nasri15,Veestraeten17a,Veestraeten17b}. The above development adds to their work by showing that it is a direct consequence of probability theory and also that the Riccati equation provides another way of analysing the problem. What is perhaps remarkable is that we can make inferences about the leading zero of $s\mapsto \mathbf{D}_s(z)$ even if we have no idea of how to calculate that function.
As a consequence the Riccati equation gives generic results, i.e.\ even when $A$ is not linear.
\vspace{5mm}
\noindent
Proof of Lemma~\ref{lem:R2}.
We have
\[
\widehat{f}(s,y;y_+) = \exp \int_y^{y_+} H(s,z) \, dz.
\]
If $H$ is analytic then so is the integral on the RHS, so we have proved `$\ge$' in the assertion: $\widehat{f}$ is at least as differentiable as $H(s,z)$.
Conversely, if $\widehat{f}$ is analytic for $s>\widehat{s}$ then it is real and positive, so a continuous branch of $\log \widehat{f}$ exists and then $s\mapsto H(s,y_+)$ is also analytic.
$\Box$
\vspace{5mm}
Accordingly, we have:
\begin{thm}
\label{thm:R3}
The asymptotic exponential decay-rate is given by
\[
\lambda = \lim_{r\to\infty} h_r(y_+)/h_{r+1}(y_+).
\]
If $A\in\mathfrak{S}_-$ then
\begin{equation}
\lambda\sim \frac{\psi(y_+)^2}{4\Psi(y_+)^2}, \qquad y_+ \to-\infty.
\label{eq:lambda-}
\end{equation}
If additionally $A\in\mathfrak{S}_+^*$ then\footnote{Recall $A=\psi'/\psi$.}
\begin{equation}
\lambda \sim - \psi'(y_+), \qquad y_+\to+\infty .
\label{eq:lambda_asymp}
\end{equation}
Referring to the dimensional form\footnote{So that, for (\ref{eq:lambda_asymp2}) alone, $\lambda$ refers to time $t$ rather than $\tau$: the density decays as $e^{-\lambda t}$.} (\ref{eq:sde_x}), these are
\begin{equation}
\lambda\sim \frac{\sigma_X(x_+)^2 \psi_X(x_+)^2 }{8\Psi_X(x_+)^2},
\qquad
\lambda\sim -\mu_X(x_+) \psi_X(x_+) .
\label{eq:lambda_asymp2}
\end{equation}
\end{thm}
\noindent
Proof. Immediate from Lemmas~\ref{lem:R1} and \ref{lem:R2} and Prop.~\ref{prop:1}(iv,vi); note that $\lim_{r\to\infty} \mathfrak{c}_r^{1/r}=4$.
$\Box$
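Theorem~\ref{thm:R3}, together with the recursions (\ref{eq:recurh1}) and (\ref{eq:recurh2}), translates directly into a numerical scheme. The sketch below (not necessarily identical to Algorithm~1; grid and truncation choices are illustrative) computes $\lambda$ for the OU process with the boundary at equilibrium, where the decay-rate should be $1$, as the $e^{-\tau}$ factor in the exact equilibrium-boundary density shows:

```python
import math

# lambda = lim_r h_r(y_+)/h_{r+1}(y_+) for the OU drift A(y) = -y, with
# psi = e^{-y^2/2} (unnormalised) and, the problem being completely
# absorbing, h_1 = Psi/psi.  Grid parameters are illustrative.
ymin, yplus, n, R = -8.0, 0.0, 2000, 20
h = (yplus - ymin) / n
ys = [ymin + i * h for i in range(n + 1)]
psi = [math.exp(-y * y / 2.0) for y in ys]

def cumtrapz(v):
    """Cumulative trapezoidal integral from ymin on the fixed grid."""
    out = [0.0]
    for i in range(1, len(v)):
        out.append(out[-1] + 0.5 * h * (v[i - 1] + v[i]))
    return out

Psi = cumtrapz(psi)
hs = [None, [Psi[i] / psi[i] for i in range(n + 1)]]   # h_1 = Psi/psi
for r in range(2, R + 2):                              # h_r by the recursion
    conv = [psi[i] * sum(hs[k][i] * hs[r - k][i] for k in range(1, r))
            for i in range(n + 1)]
    I = cumtrapz(conv)
    hs.append([I[i] / psi[i] for i in range(n + 1)])
lam = hs[R][n] / hs[R + 1][n]
print(lam)    # boundary at equilibrium: expect a value near 1
```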
\subsection{Far-boundary limit $y_+\to+\infty$}
Ricciardi \& Sato \cite{Ricciardi88}, in their work on the OU process, show that as $y_+\to+\infty$ the distribution of the first passage time is asymptotically exponential. More precisely, the density of $\tilde{\tau}=\lambda\tau$, for fixed $\tilde{\tau}$, tends to $\exp(-\tilde{\tau})$. For convenience we shall call this the rescaling limit. They prove this by analysing the Taylor series of the characteristic function, and also empirically compute moments, finding agreement. Lindenberg et al.~\cite[eq.(74)]{Lindenberg75} suggest that this result is generic, by means of an eigenfunction expansion.
Before going into the technical details it is worth making an intuitive argument, as follows.
As the boundary is pushed further away, and we look at the process on progressively longer time scales (of order $1/\lambda$), the process becomes less autocorrelated. We are therefore observing the waiting-time for the first occurrence of a Poisson process, and that must be exponentially distributed.
That $\lambda \sim -\psi'(y_+)$ is also intuitively sensible: the relative rate of change of the survival probability must be closely linked to the probability density of the unconstrained process in the vicinity of the boundary.
\notthis{
In terms of what we have done above, we observe that
\begin{eqnarray*}
\pderiv{}{z} \log \widehat{f}(s) = H(s,y_+)
&=& \sum_{r=1}^\infty (-s)^r h_r(y) \\
&\sim& -A(y) \sum_{r=1}^\infty \left( \frac{s}{A(y)\psi(y)} \right)^r
\mbox{ (by Prop.~\ref{prop:1}(iv))} \\
&=& \frac{-s/\psi(y_+)}{1-s/\psi'(y_+)}
\end{eqnarray*}
where we recall that $A=\psi'/\psi$.
By the regularity condition (Defn.~\ref{defn:RC}) we have
\[
\frac{\psi''(y_+)\psi(y_+)}{\psi'(y_+)^2} \to 1
\]
}
We can, in fact, use Ricciardi \& Sato's working to derive a general result. Indeed, let $\{\cdot\}_r$ denote the coefficient of $s^r$ in the Taylor series (around $s=0$). Then, with the boundary placed at $z$ and the starting-point at $y$ we have by (\ref{eq:fancyh_infty}),
\[
\left\{ \pderiv{}{z} \log \widehat{f}(s,y;z) \right\}_r = (-)^rh_r(z) \sim \frac{-A(z)}{\psi'(z)^r}, \qquad z\to+\infty,
\]
for each $r\in\mathbb{N}$.
If $A\in\mathfrak{S}_+^*$ then $A'(z)/A(z) \to 0$, so we can replace $A=\psi'/\psi$ with $\psi''/\psi'$ and the above equation becomes (cf.~\cite[eq.(37)]{Lindenberg75})
\[
\left\{ \pderiv{}{z} \log \widehat{f}(s,y;z) \right\}_r \sim \frac{-\psi''(z)}{\psi'(z)^{r+1}}, \qquad z\to+\infty.
\]
Integrating both sides from $z=y$ to $y_+$ gives
\[
\left\{ \log \widehat{f}(s,y;y_+) \right\}_r \sim \frac{1}{r\psi'(y_+)^r} + \beta_r, \qquad y_+\to+\infty,
\]
where $\beta_r$, the `constant of integration', depends on $y$ but not on $y_+$. Accordingly
\[
\log \widehat{f}(s,y;y_+) \sim - \log \big(1-s/\psi'(y_+)\big) + B_y(s)
\]
where $B_y(s)=\sum_{r=1}^\infty \beta_r s^r$; there is no constant term ($\beta_0=0$) because $\widehat{f}(0)=1$ for a completely-absorbing problem, so that $\log\widehat{f}(0)=0$.
Write $s=-\psi'(y_+)\tilde{s}$ with $\tilde{s}$ fixed, and let $y_+\to+\infty$. Then the $B_y(s)$ term disappears, and $\widehat{f}(s)$ is recognised as the Laplace transform of the exponential distribution of mean $1/\lambda=-1/\psi'(y_+)$, which is what we were to show.
Notice that we have assumed, in our derivation, that $A\in\mathfrak{S}_+^*$. A simple example of a force-field that does not obey this condition is the arithmetic Brownian motion, $A(y)=\mu>0$. In this case, the first-passage time density (using $(t,x)$ coordinates) is the inverse Gaussian distribution \cite{Seshadri93},
\begin{equation}
f(t,x) = \frac{x_+-x}{\sqrt{2\pi \sigma^2 t^3}} \exp \left( \frac{-(x_+-x-\mu t)^2}{2\sigma^2 t} \right) ;
\label{eq:invgauss}
\end{equation}
the rate of exponential decay is $\mu^2/2\sigma^2$ which does not tend to zero as $x_+\to+\infty$ and the rescaling limit does not apply. Indeed, it is clear in this case that any kind of rescaling will preserve the front factor of $t^{-3/2}$, and there is no way of ending up with an exponential distribution.
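To make this concrete, the following stdlib-only Python sketch (not part of the derivation; the parameter values $\mu=\sigma=1$, $x_+-x=2$ are illustrative) confirms numerically that (\ref{eq:invgauss}) integrates to one over $t$ and that its decay rate stays close to $\mu^2/2\sigma^2$ rather than tending to zero:

```python
import math

def ig_density(t, x, x_plus, mu, sigma):
    """Inverse Gaussian first-passage density of eq. (invgauss)."""
    return ((x_plus - x) / math.sqrt(2 * math.pi * sigma**2 * t**3)
            * math.exp(-(x_plus - x - mu * t)**2 / (2 * sigma**2 * t)))

mu, sigma, x, x_plus = 1.0, 1.0, 0.0, 2.0

# Riemann sum of the density over t: should be close to 1
dt, total = 1e-4, 0.0
for j in range(1, 600000):
    total += ig_density(j * dt, x, x_plus, mu, sigma) * dt

# empirical decay rate between t=40 and t=50: close to mu^2/(2 sigma^2) = 0.5
rate = (math.log(ig_density(40, x, x_plus, mu, sigma))
        - math.log(ig_density(50, x, x_plus, mu, sigma))) / 10
```

The small excess of the measured rate over $0.5$ is the residual effect of the $t^{-3/2}$ prefactor over a finite time window.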
In processes that do not mean-revert, the above argument on autocorrelation fails, and with it the assertion that an exponential distribution will necessarily arise in the rescaling limit.
This problem was not addressed in \cite[\S4.4.1]{Lindenberg75}, whose authors have, in effect, claimed too great a degree of generality in their proof.
In fact, this observation points to a continuing difficulty over the $\tau^{-3/2}$ factor.
As intimated in the Introduction, we should be expecting the behaviour shown in (\ref{eq:levysmirnov}) for \emph{short} time; so how can this be reconciled with the rescaling limit that, when $A\in\mathfrak{S}_+^*$, causes the $\tau^{-3/2}$ behaviour to vanish, but when $A\notin\mathfrak{S}_+^*$, may let it remain?
We are asking for an approximation to the first-passage time density that works on time scales $\tau\sim1/\lambda$ and also $\tau=O(1)$, and this ansatz does the trick:
\begin{equation}
f(\tau,y) \propto \frac{e^{-b^2\theta\sqrt{q}/2(1-q)}}{(1-q)^{3/2}} \cdot e^{-\lambda\tau}, \qquad q=e^{-2\theta \tau},
\label{eq:rs_ansatz}
\end{equation}
where $\theta$ sets the rate of mean reversion.
It is clear that the behaviour is the same as (\ref{eq:levysmirnov}) for short time ($\theta\tau\ll1$).
In the rescaling limit, the first part of this expression disappears, provided $\theta>0$, as in effect $q$ is replaced by zero, and we end up with $e^{-\tilde{\tau}}$.
But if we first let $\theta\to0$, as happens in the arithmetic Brownian motion, we instead obtain the inverse Gaussian distribution, and the $\tau^{-3/2}$ factor persists in any scaling limit.
As it happens, we do end up with something like (\ref{eq:rs_ansatz}), and it will naturally emerge from the work in \S\ref{sec:short}.
\subsection{Moments and cumulants of the first-passage time density}
The cumulants ($\mathfrak{k}_r$) of the first-passage time\footnote{We are using `dimensionless time' ($\tau$) here. Obviously those for dimensional time ($t$) are $\mathfrak{k}_r/\kappa^r$.} relate directly to the $(h_r)$, by
\begin{equation}
\frac{\mathfrak{k}_r}{r!} = \int_{y}^{y_+} h_r(z) \, dz.
\end{equation}
Hence:
\begin{thm}
\label{thm:cumul}
All the cumulants of the first-passage time are positive and are obtained from the recurrence (\ref{eq:recurh2}); the mean and variance are
\begin{equation}
\int_{y}^{y_+} \frac{\Psi(z)}{\psi(z)} \, dz, \qquad
\int_{y}^{y_+} \frac{2}{\psi(z)} \int_{-\infty}^z \frac{\Psi(w)^2}{\psi(w)} \, dw \, dz
\end{equation}
respectively. If $A\in\mathfrak{S}_- \cap \mathfrak{S}_+^*$ then in the far-boundary limit $\mathfrak{k}_r\sim(r-1)!\,\lambda^{-r}$, consistent with the exponential limit law.
$\Box$
\end{thm}
In the OU case we note that $h_1$ admits the integral representation
\[
h_1(z) = \int_0^\infty e^{-u^2/2} e^{zu} \, du
\]
from which the mean is expressible in two ways,
\begin{equation}
\mathfrak{k}_1 = \int_y^{y_+} \frac{\Phi(z)}{\phi(z)} \, dz = \int_0^\infty e^{-u^2/2} \left( \frac{e^{y_+ u} - e^{yu}}{u} \right) \, du.
\label{eq:ouk1}
\end{equation}
We briefly discuss how this behaves in different r\'egimes.
One is what, in dimensional coordinates, would be called the low-reversion r\'egime ($\kappa$ `small'), and in dimensionless coordinates is obtained by making $|y|$ and $y_+$ small.
By expanding $\Phi,\phi$ around the origin we have
\[
\mathfrak{k}_1 \sim \left(\sqrt{\frac{\pi}{2}} + \frac{y_++y}{2} \right) (y_+-y).
\]
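As a check on this expansion, a stdlib-only Python sketch (illustrative; trapezoidal quadrature, with $y=-0.1$, $y_+=0.1$) compares the two integral representations in (\ref{eq:ouk1}) with each other and with the low-reversion approximation just given:

```python
import math

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def trapz(g, a, b, n):
    """Trapezium rule for g on [a, b] with n panels."""
    h = (b - a) / n
    return h * (0.5 * g(a) + 0.5 * g(b) + sum(g(a + j * h) for j in range(1, n)))

y, yp = -0.1, 0.1

# first representation: integral of Phi/phi from y to y_+
k1 = trapz(lambda z: Phi(z) / phi(z), y, yp, 2000)

# second representation in eq. (ouk1); the integrand tends to y_+ - y as u -> 0
g = lambda u: (yp - y) if u == 0.0 else \
    math.exp(-u * u / 2) * (math.exp(yp * u) - math.exp(y * u)) / u
k1_alt = trapz(g, 0.0, 12.0, 24000)

# low-reversion approximation
approx = (math.sqrt(math.pi / 2) + (yp + y) / 2) * (yp - y)
```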
Another is what, in dimensional coordinates, would be called the low-volatility r\'egime ($\sigma$ `small'), and in dimensionless coordinates is obtained by making $|y|$ and/or $|y_+|$ large. This subdivides into three cases.
First, the case that is often described as `sub-threshold' is $y_+\gg1$ and we have already given the asymptotic as
\[
\mathfrak{k}_1 \sim \frac{(2\pi)^{1/2} e^{y_+^2/2}}{y_+}
\]
regardless of the starting-point $y$.
Next, in the `supra-threshold' case, in which the boundary lies between the starting-point and equilibrium, we have $y,y_+\to-\infty$ and then
\[
\mathfrak{k}_1 \sim \frac{1}{2} \ln \frac{y^2}{y_+^2} + \sum_{r=1}^\infty \frac{(-)^r(2r-1)!!}{2r} \big(y_+^{-2r} - y^{-2r}\big) ;
\]
the first term is recognisable as the time taken to hit the boundary if the volatility were zero, i.e.\ $Y_t=ye^{-\kappa t}$, and the leading-order correction ($r=1$ term) is negative, suggesting that the presence of volatility causes the boundary to be hit earlier than that, on average.
Finally the medial case is when the boundary is at equilibrium ($y_+=0$): as $y\to-\infty$, we have
\[
\mathfrak{k}_1 \sim \frac{\ln (2y^2) + \overline{\gamma}}{2} - \sum_{r=1}^\infty \frac{(-)^r(2r-1)!!}{2r\, y^{2r}}
\]
with $\overline{\gamma}$ denoting Euler's constant.
This is obtained by substituting $-u/y$ for $u$ in the second integral in (\ref{eq:ouk1}) and invoking Plancherel's identity\footnote{In effect we have a continuous version of the Poisson summation formula.}:
\[
\int_{-\infty}^\infty e^{-u^2/2y^2} \left( \frac{1-e^{-u}}{u} \mathbf{1}_{u>0} \right) du
=
\frac{|y|}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{-y^2\omega^2/2} \log \left(\frac{1+\mathrm{i}\omega}{\mathrm{i}\omega}\right) \, d\omega .
\]
Then the $\log(\mathrm{i}\omega)$ term generates
\[
\frac{-|y|}{\sqrt{2\pi}} \int_{-\infty}^\infty e^{-y^2\omega^2/2} \ln |\omega| \, d\omega = \frac{ \ln (2y^2) + \overline{\gamma}}{2}
\]
and the $\log(1+\mathrm{i}\omega)$ term, upon expansion in a Taylor series around $\omega=0$, delivers the rest (and is clearly related to the asymptotic expansion of the error function).
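The resulting asymptotic can be checked numerically; the following stdlib-only sketch (illustrative, with $y=-3$ and the divergent series truncated at $r=3$) compares it with direct quadrature of $\mathfrak{k}_1=\int_y^0 \Phi/\phi$:

```python
import math

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))
phi = lambda z: math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

y, n = -3.0, 6000
h = -y / n
# trapezium rule for the mean first-passage time to y_+ = 0
direct = h * (0.5 * Phi(y) / phi(y) + 0.5 * Phi(0) / phi(0)
              + sum(Phi(y + j * h) / phi(y + j * h) for j in range(1, n)))

def dfact(m):
    """(2r-1)!! for odd m."""
    out = 1
    for k in range(3, m + 1, 2):
        out *= k
    return out

euler_gamma = 0.5772156649015329   # Euler's constant
asym = (math.log(2 * y * y) + euler_gamma) / 2 \
       - sum((-1) ** r * dfact(2 * r - 1) / (2 * r * y ** (2 * r))
             for r in range(1, 4))
```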
\subsection{Algorithm}
We now turn to matters of numerical computation.
The recursion (\ref{eq:recurh1},\ref{eq:recurh2}) makes the numerical calculation of the $h_r(z)$ easy, regardless of the choice of force-field $A$. In more detail, starting from $r=2$, we have:
\begin{algm}
Evaluation of the Taylor series (\ref{eq:Hexpansion}) of $H(s,z)=(-\partial/\partial z)\log C_+(s,z)$, when $A\in\mathfrak{S}_-$. First note that $h_1$ is given by (\ref{eq:recurh1}). Set $r=2$. Then:
\begin{itemize}
\item[(i)]
Set $z$ equal to a large negative value $Z$ and approximate the integral in (\ref{eq:recurh2}) from $-\infty$ to $Z$ as
\[
h_r(Z) \approx \frac{(2r-2)!}{(r-1)!r!} \cdot h_1(Z)^{2r-1}
\]
as justified in Prop.~\ref{prop:1}(iv).
\item[(ii)]
Working upwards in small steps of $z$, approximate the integral in (\ref{eq:recurh2}) on a grid of points, by the logarithmic trapezium rule. If we write (\ref{eq:recurh2}) as, for short,
\[
I_r(y) = \int_{-\infty}^{y} S_r(z) \, dz
\]
then we have\footnote{The term on the end is the trapezium rule for integrating piecewise exponential functions. Given the typical behaviour of $h_r(z)$, this is a better idea than linear interpolation.}
\[
I_r(y_{j+1}) \approx I_r(y_j) + (y_{j+1}-y_j) \frac{S_r(y_{j+1})-S_r(y_j)}{\ln S_r(y_{j+1}) - \ln S_r(y_j)} .
\]
\item[(iii)]
Increment $r$ and repeat from (i).
\end{itemize}
\end{algm}
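A minimal stdlib-only illustration of step (ii), with a generic exponential integrand standing in for $S_r$ and the $Z=-10$, spacing-$\frac{1}{32}$ settings used later in the Examples, shows that the logarithmic trapezium rule is exact (to rounding) for exponential integrands:

```python
import math

def log_trapezium(S, z0, z1, n, I0):
    """March the integral of a positive integrand S from z0 to z1 on a grid,
    starting from I(z0) = I0, using the trapezium rule for piecewise
    exponential functions (step (ii) of Algorithm 1)."""
    h = (z1 - z0) / n
    I, s_prev = I0, S(z0)
    for j in range(1, n + 1):
        s_next = S(z0 + j * h)
        if s_next == s_prev:
            I += h * s_next        # limiting case of equal ordinates
        else:
            I += h * (s_next - s_prev) / (math.log(s_next) - math.log(s_prev))
        s_prev = s_next
    return I

# S(z) = e^{2z}: exact integral from -infinity to 0 is 1/2
S = lambda z: math.exp(2 * z)
Z = -10.0
I = log_trapezium(S, Z, 0.0, 320, math.exp(2 * Z) / 2)   # seed with the tail
```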
We have said that $h_r(y)/h_{r+1}(y)$ gives $\lambda$, in the limit $r\to\infty$, and Algorithm~1 allows the functions to be computed. Empirically, convergence is much faster for $y>0$, and it is desirable to use convergence acceleration techniques when $y<0$. As we will have a fixed boundary in mind, we can write $x_r = h_r(y_+)/h_{r+1}(y_+)$, a positive sequence that, empirically at least, tends to its limit from above. Write $\delta_r=x_r-x_{r-1}$ for its sequence of differences.
One of the commonest methods of accelerating convergence is \emph{Aitken's method}, which is that the derived sequence
\begin{equation}
(\mathcal{A}_0x)_r := x_r + \frac{\delta_r^2}{\delta_{r-1}-\delta_r}
\label{eq:Aitken}
\end{equation}
often enjoys faster convergence than $(x_r)$, and to the same limit, particularly if the convergence is linear, i.e.\ $\delta_r/\delta_{r-1} \to \mbox{const}$. Indeed, if the differences are in geometric progression, then $(\mathcal{A}_0x)_r$ converges immediately, i.e.\ for all $r$ one has $(\mathcal{A}_0x)_r=\lim_{n\to\infty} x_n$.
But in our case the sequence $(x_r)$ does not converge linearly, and Aitken's method does not work as well. In fact, the sequence appears to converge logarithmically, i.e.\ with error $O(1/r)$. In particular, as $y_+\to-\infty$ we are led to study, from Prop.~\ref{prop:1}(iv), the ratio of adjacent Catalan numbers
\[
x_r=\frac{\mathfrak{c}_{r-1}}{\mathfrak{c}_r} = \frac{1}{4} + \frac{3}{4(2r-1)} ,
\]
and ask what variant of Aitken's method will give immediate convergence when $x_r = \lambda + (\alpha+\beta r)^{-1}$ for constants $\alpha,\beta$.
The answer is
\begin{equation}
(\mathcal{A}_1x)_r := x_r + \frac{\delta_r(\delta_r+\delta_{r-1})}{\delta_{r-1}-\delta_r} .
\label{eq:Aitken2}
\end{equation}
For the proof, note that the estimate of the limit based on $x_{r-2},x_{r-1},x_r$---we are calling this $(\mathcal{A}_1 x)_r$---must be the value $\xi$ such that $(x_{r-2}-\xi)^{-1},(x_{r-1}-\xi)^{-1},(x_r-\xi)^{-1}$ lie in arithmetic progression, and solve for $\xi$.
To see how this works for the sequence $\mathfrak{c}_{r-1}/\mathfrak{c}_r$, we note that the first few terms of the sequence are
\[
1/1, \; 1/2, \; 2/5, \ldots
\]
and so the first term of the accelerated sequence $(\mathcal{A}_1 x)$ is
\[
{\textstyle\frac{2}{5}} + \frac{(\frac{2}{5}-\frac{1}{2})(\frac{2}{5}-\frac{1}{2}+\frac{1}{2}-1)}{(\frac{1}{2}-1)-(\frac{2}{5}-\frac{1}{2})} = {\textstyle\frac{1}{4}}
\]
which is the exact limit---as expected in view of the derivation.
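The calculation above is easily mechanised; a stdlib-only sketch using exact rational arithmetic (function names are ours) confirms that (\ref{eq:Aitken2}) converges immediately on the Catalan-ratio sequence, for every $r$:

```python
from fractions import Fraction

def catalan(n):
    """Catalan number c_n, via c_{k+1} = c_k * 2(2k+1)/(k+2)."""
    c = Fraction(1)
    for k in range(n):
        c = c * 2 * (2 * k + 1) / (k + 2)
    return c

# x_r = c_{r-1}/c_r = 1/4 + 3/(4(2r-1)): logarithmic convergence to 1/4
x = [catalan(r - 1) / catalan(r) for r in range(1, 8)]

def aitken1(x, r):
    """(A_1 x)_r of eq. (Aitken2), with delta_r = x_r - x_{r-1}."""
    d1, d0 = x[r] - x[r - 1], x[r - 1] - x[r - 2]
    return x[r] + d1 * (d1 + d0) / (d0 - d1)
```

Since $1/(x_r-\frac14)$ is exactly in arithmetic progression here, every accelerated term equals $\frac14$.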
Applied to our problem, we find empirically that (\ref{eq:Aitken2}) works very much better than (\ref{eq:Aitken}), mainly because (\ref{eq:Aitken}) undercorrects.
A general caveat should be mentioned: regardless of what convergence accelerator is applied, numerical instability can result if $|\delta_{r-1}-\delta_r|$ becomes too small, and certainly if it becomes comparable with the machine precision.
\subsection{Examples}
We demonstrate Algorithm~1 in several different cases\footnote{We used $Z=-10$ and a grid spacing of $\frac{1}{32}$.}. We only show the first term of the accelerated sequence, and this requires two previous differences, so we need $h_1/h_2$, $h_2/h_3$, $h_3/h_4$ to compute it.
\begin{figure}[!htbp]
\centering
\begin{tabular}{rl}
(a) &
\scalebox{0.8}{\includegraphics{pdebarr_ou_hr.eps}} \\
(b) &
\scalebox{0.8}{\includegraphics{pdebarr_df_hr.eps}} \\
(c) &
\scalebox{0.8}{\includegraphics{pdebarr_sech2_hr.eps}}
\end{tabular}
\caption{
Performance of Algorithm~1 for (a) OU, (b) Dry-friction, (c) $A(y)=-2\tanh y$. The black line is the estimate of $\lambda$ using the first term of the accelerated sequence.
Where relevant the zeros of the associated orthogonal polynomials are shown, and asymptotes of $\lambda$ vs boundary $y_+$ are sketched, except in (b) where the exact result is known.
}
\label{fig:1}
\end{figure}
\subsubsection{OU}
Naturally we take this first: $A(y)=-y$, with the results shown in Figure~\ref{fig:1}(a).
The usual approach is to say that we are looking for the zeros of the function $s\mapsto \mathbf{D}_s(z)$. See also \cite[Fig.1]{Ricciardi88} and \cite[Fig.1]{Elbert08}, and also Table~\ref{tab:1} in the Appendix which gives some particular values.
In the special case $-s=n\in\mathbb{N}$ this boils down to finding the leftmost zero $\zeta_n$ say of the function $z\mapsto\mathrm{He}_n(z)$, or equivalently the leftmost abscissa of the Gauss-Hermite quadrature formula of order $n$: in other words if the boundary is placed at $\zeta_n$ then $\lambda=n$, and these are plotted in Figure~\ref{fig:1}(a).
The asymptotic behaviour as $y_+\to -\infty,+\infty$ is, from Theorem~\ref{thm:R3}:
\begin{equation}
\lambda\sim y_+^2/4 \quad \mbox{ and } \quad \lambda \sim y_+ \phi(y_+).
\end{equation}
We can also establish the $y_+\to+\infty$ result by looking at $\mathbf{D}_{-\lambda}(y_+)$ in this r\'egime (for $\lambda\approx0$). Indeed, by (\ref{eq:pcfrecur}),
\[
\mathbf{D}_{-\lambda}(y_+) = \frac{-\mathbf{D}_{-1-\lambda}(y_+) - \lambda \mathbf{D}_{1-\lambda}(y_+)}{y_+} ;
\]
now let $\lambda\to0$ and use the expressions for $\mathbf{D}_{-1}(y_+)$ and $\mathbf{D}_1(y_+)$ to get
\[
\mathbf{D}_{-\lambda}(y_+) \approx \frac{y_+ - \lambda \Phi(y_+)/\phi(y_+)} {y_+}
\]
which is zero when $\lambda= y_+ \phi(y_+)$ (as $\Phi(y_+)\to1$).
We can derive the same result using (\ref{eq:pcfrecur2}).
At this point we can clearly see why there is something fundamentally wrong with (\ref{eq:wrong}): the decay rate is incorrect, except when $y_+=0$.
\subsubsection{Arithmetic Brownian motion}
The case $A(y)= \mu$ might seem an odd choice, because it is not mean-reverting, and $\psi(y)$ is formally $e^{\mu y}$ which is not normalisable.
However, provided $\mu>0$, we do have $A\in\mathfrak{S}_-$, and some of the results carry over; also Algorithm~1 does work despite the fact that $\psi(y)$ is not normalisable.
It is easily established that $H(s,y)= (\mu-\sqrt{\mu^2+4s})/2$, regardless of $y$, and Taylor series expansion shows that $h_r(y)=\mathfrak{c}_{r-1}\mu^{1-2r}$, for $r\ge1$. This confirms that $\lambda=\mu^2/4$ regardless of the boundary position.
This case is a useful ansatz for understanding any problem in which $\lim_{y\to-\infty} A(y)=\mu>0$.
\subsubsection{Dry-friction}
Here $A(y)=-\mu \,\mathrm{sgn}\, y$, with $\mu>0$ (see \cite{Touchette10a}).
When $y\le0$ we have the arithmetic Brownian motion, so then $\lambda=\mu^2/4$.
For $y\ge0$,
\[
C_+(s,y) = \frac{\sqrt{\mu^2+4s}-\mu}{\sqrt{\mu^2+4s}} e^{(\mu+\sqrt{\mu^2+4s})y/2} + \frac{\mu}{\sqrt{\mu^2+4s}} e^{(\mu-\sqrt{\mu^2+4s})y/2} .
\]
The singularities in the $s$-plane of $H(s,y_+)$ are a branch-point at $s=-\mu^2/4$ and also simple poles whenever the following condition is satisfied:
\begin{equation}
\sqrt{1+4s/\mu^2} = 1 - e^{-\sqrt{\mu^2+4s} \, y_+} .
\label{eq:lambda_df}
\end{equation}
When $y_+<1/\mu$ this has no solutions for $s>{-\mu^2/4}$, so $\lambda$ is still equal to $\mu^2/4$.
When $y_+>1/\mu$ it has two in the interval $({-\mu^2/4},0)$ and the important (rightmost) root is at $s=-\lambda$ obeying
\[
\lambda \sim \frac{\mu^2 e^{-\mu y_+}}{2}, \qquad y_+\to+\infty.
\]
As $\psi(y)=\mu e^{-\mu |y|}/2$, this accords with Theorem~\ref{thm:R3}.
In running examples we can take $\mu=1$ without loss of generality and Figure~\ref{fig:1}(b) shows the results, using (\ref{eq:lambda_df}) as a check.
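For reference, $\lambda$ can be extracted from (\ref{eq:lambda_df}) by elementary bisection; the following stdlib-only sketch (with $\mu=1$; the substitution $v=\sqrt{1+4s/\mu^2}$ is ours) recovers the asymptotic $\lambda\sim\mu^2 e^{-\mu y_+}/2$:

```python
import math

def decay_rate_df(y_plus, mu=1.0):
    """Rightmost root s = -lambda of eq. (lambda_df), found by bisection.
    Substituting v = sqrt(1 + 4s/mu^2) in (0,1), the condition becomes
    g(v) = v - 1 + exp(-mu v y_plus) = 0, and lambda = mu^2 (1 - v^2)/4."""
    g = lambda v: v - 1 + math.exp(-mu * v * y_plus)
    lo, hi = 0.5, 1.0    # brackets the rightmost root when y_plus >> 1/mu
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if g(mid) > 0 else (mid, hi)
    v = 0.5 * (lo + hi)
    return mu**2 * (1 - v * v) / 4
```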
\subsubsection{tanh case}
The case $A(y)=-\alpha \tanh \gamma y$ is an interesting generalisation, interpolating between the OU and dry-friction cases.
The function $C_+(s,y)$ obeys
\[
\dderiv{C}{y} - \alpha \tanh \gamma y \deriv{C}{y} - sC = 0
\]
and substituting $w=\sinh \gamma y$ turns the equation for $C$ into the hypergeometric equation
\[
(1+ w^2) C''(w) + (1-\alpha/\gamma) wC'(w) - (s/\gamma^2) C=0.
\]
Polynomial solutions are admitted for certain values of $s=-\lambda_n$, as detailed below for the first few:
\begin{center}
\begin{tabular}{rrr}
$n$ & $P_n(w)$ & $\lambda_n$ \\ \hline
1 & $w$ & $\gamma(\alpha-\gamma)$ \\
2 & $w^2 + \gamma/(2\gamma-\alpha)$ & $2\gamma(\alpha-2\gamma)$ \\
3 & $w^3 + 3\gamma w/(4\gamma-\alpha)$ & $3\gamma(\alpha-3\gamma)$
\end{tabular}
\end{center}
These are the Romanovski polynomials, the orthogonal polynomials of the Student-t distribution \cite{Quesne13,Romanovski29}, but the class is defective in the sense that there are only finitely many of them. They are in the Pearson-Wong family \cite{Wong64} but are usually omitted in discussions of the subject, probably because of their irregular behaviour.
The apparent pattern for the $(\lambda_n)$ is confirmed by the usual three-term recurrence relation of orthogonal polynomials (e.g.~\cite{Szego39}), which yields
\[
\lambda_{n+1} = \lambda_{n} + \gamma(\alpha-\gamma) - 2\gamma^2 n .
\]
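The table entries can be verified mechanically; this stdlib-only sketch (exact rational arithmetic; helper names are ours) substitutes each $P_n$ into the hypergeometric equation and checks that the residual vanishes when $s=-\lambda_n=-n\gamma(\alpha-n\gamma)$, here with $\alpha=3$, $\gamma=1$:

```python
from fractions import Fraction as F

def residual(P, alpha, gamma, lam):
    """Coefficients of (1+w^2) P'' + (1 - alpha/gamma) w P' + (lam/gamma^2) P,
    with P a low-to-high coefficient list and s = -lam."""
    d1 = [k * c for k, c in enumerate(P)][1:]          # P'
    d2 = [k * c for k, c in enumerate(d1)][1:]         # P''
    out = [F(0)] * (len(P) + 2)
    for k, c in enumerate(d2):                         # (1+w^2) P''
        out[k] += c
        out[k + 2] += c
    for k, c in enumerate(d1):                         # (1-alpha/gamma) w P'
        out[k + 1] += (1 - F(alpha, gamma)) * c
    for k, c in enumerate(P):                          # -(s/gamma^2) P
        out[k] += F(lam, gamma**2) * c
    return out

alpha, gamma = 3, 1
P1, lam1 = [F(0), F(1)], gamma * (alpha - gamma)
P2, lam2 = [F(gamma, 2 * gamma - alpha), F(0), F(1)], 2 * gamma * (alpha - 2 * gamma)
P3, lam3 = [F(0), F(3 * gamma, 4 * gamma - alpha), F(0), F(1)], 3 * gamma * (alpha - 3 * gamma)
```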
If $\alpha\to\infty$, $\gamma\to0$ with $\alpha\gamma=1$ then we are back with OU and the $(P_n)$ become Hermites.
If $\alpha\to\infty$ with $\gamma$ fixed then we have the dry-friction case and the zeros form a continuum.
The defectiveness of the set of Romanovski polynomials has important practical consequences. Take for example $A(y)=-2\tanh y$, so that $\psi(y) = \frac{1}{2} \mathrm{sech}^2 y$. For $\alpha/\gamma=2$, the set of polynomials terminates even before $n=2$, as formally $P_2(w)=w^2+\infty$; put differently, $P_2$ is a polynomial with two real zeros only when $\alpha/\gamma>2$. In identifying points on the graph of $\lambda$ vs $y_+$ as zeros of orthogonal polynomials, we can only plot the zero of $P_1(w)$ before we get stuck. Other aspects of what we have derived can, however, be easily plotted. Indeed, as $y_+\to-\infty$, by reference to the arithmetic Brownian motion case, we have $\lambda\to1$, and as $y_+\to+\infty$ we have $\lambda\sim\sinh y_+/\cosh^3 y_+$ (see Figure~\ref{fig:1}(c)).
\section{Short-time behaviour, and global asymptotics}
\label{sec:short}
\subsection{General theory and limiting behaviours}
Another branch of the theory---following on from \cite{Martin15b}---is to study the logarithmic derivative of the density.
Our analysis is guided, in part, by the first passage problem for the regular Brownian motion, i.e.\ with no mean reversion (drift term just $\mu\,dt$). For that problem, using $(t,x)$ coordinates,
\[
F(t,x) = \Phi\!\left( \frac{\mu t +x-x_+}{2\sigma\!\sqrt{t}} \right) + e^{2\mu (x_+-x)/\sigma^2} \Phi\!\left( \frac{-\mu t +x- x_+}{2\sigma\!\sqrt{t}} \right)
\]
or
\[
f(t,x) = \pderiv{F}{t}(t,x) = \frac{x_+-x}{\sqrt{2\pi \sigma^2 t^3}} \exp \left( \frac{-(x_+-x-\mu t)^2}{2\sigma^2 t} \right) ,
\]
the well-known inverse Gaussian distribution \cite{Seshadri93}.
Now consider the logarithmic derivative w.r.t.\ $x$:
\[
-\pderiv{}{x} \ln f(t,x)
=
\frac{x+\mu t-x_+}{\sigma^2 t} + \frac{1}{x_+-x}
\]
which is a particularly simple (rational) function.
In the OU case, (\ref{eq:wrong}) is correct when $y_+=0$, i.e.\ the boundary is at the equilibrium point, and again the logarithmic derivative is a simple function:
\[
f (\tau,y) = \frac{2y \sqrt{q}}{\sqrt{2\pi(1-q)^3}} \exp \left( \frac{-qy^2}{2(1-q)} \right) ;
\qquad -\pderiv{}{y}\ln f = \frac{qy}{1-q} - \frac{1}{y}.
\]
All this points to the logarithmic derivative being a useful construction for dissecting the problem. We formalise the idea next.
It is convenient to define ($'$ denoting $\partial/\partial y$)
\begin{equation}
\mathcal{Q} [h] \equiv h' + A h - h^2.
\end{equation}
\begin{prop}
With $h(\tau,y)=-(\partial/\partial y) \ln f(\tau,y)$ we have:
\begin{itemize}
\item[(i)]
\begin{equation}
\pderiv{h}{\tau} = \pderiv{}{y} \mathcal{Q}[h].
\label{eq:pde_hy}
\end{equation}
\item[(ii)]
Near $y_+$ the function $h$ looks like
\begin{equation}
h(\tau,y) = \frac{1}{y_+-y} + \frac{A(y_+)}{2} + o(1)
\label{eq:hbarr}
\end{equation}
for all time.
\item[(iii)]
For short time ($\tau\ll 1$) we have
\begin{equation}
h(\tau,y) = \frac{y-y_+}{2\tau} + \frac{A(y)}{2} + \frac{1}{y_+-y} + o(1)
\label{eq:ansatz1}
\end{equation}
\item[(iv)]
The steady-state solution for $h$, i.e.\ $\overline{h}(y)=h(\infty,y)$, obeys
\begin{equation}
\mathcal{Q}[\overline{h}] = \lambda.
\label{eq:hinfty}
\end{equation}
\item[(v)]
Splitting off the part of $\overline{h}$ that is singular at the boundary, thereby defining
\begin{equation}
\overline{h}(y) = \frac{1}{y_+-y} + \widetilde{h}(y),
\label{eq:hsharpbarr}
\end{equation}
we have
\begin{equation}
\widetilde{h}(y_+) = A(y_+)/2 ;
\label{eq:hsharpbarr0}
\end{equation}
if $A$ is differentiable at $y_+$ then
\begin{equation}
\widetilde{h}'(y_+) = \displaystyle \frac{1}{3} \left( \lambda - \frac{A(y_+)^2}{4} + A'(y_+) \right) .
\label{eq:hsharpbarr1}
\end{equation}
\end{itemize}
\end{prop}
\noindent Proof.
(i) Clear, and (ii) clear by dominant balance.
(iii) Again by dominant balance, and corresponding to an approximation by which the process is viewed as an arithmetic Brownian motion over a short time period.
(iv) Clear by taking (\ref{eq:pde_Fy}) (which is obeyed by $f$), dividing through by $f$ and letting $\tau\to\infty$, with $(1/f)\partial f/ \partial \tau \to -\lambda$. Part (v) is also immediate.
$\Box$
\vspace{5mm}
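Part (i) can be checked concretely on the exactly-solvable OU case with the boundary at equilibrium; the following stdlib-only finite-difference sketch (our own check, with $\theta=1$ and an arbitrary evaluation point) confirms that $\partial h/\partial\tau = \partial\mathcal{Q}[h]/\partial y$:

```python
import math

def h(tau, y):
    """-(d/dy) ln f for OU (A(y) = -y) with boundary y_+ = 0."""
    q = math.exp(-2 * tau)
    return q * y / (1 - q) - 1 / y

def Q(tau, y):
    """Q[h] = h' + A h - h^2 for this case, with h' in closed form."""
    q = math.exp(-2 * tau)
    hp = q / (1 - q) + 1 / y**2
    return hp + (-y) * h(tau, y) - h(tau, y) ** 2

tau, y, eps = 0.7, -1.3, 1e-5
lhs = (h(tau + eps, y) - h(tau - eps, y)) / (2 * eps)   # dh/dtau
rhs = (Q(tau, y + eps) - Q(tau, y - eps)) / (2 * eps)   # (d/dy) Q[h]
```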
From this we can see something else that is wrong with (\ref{eq:wrong}): it implies that
\[
h(\tau,y) \stackrel{??}{=} \frac{qy-\!\sqrt{q}\,y_+}{1-q} + \frac{1}{y_+-y} .
\]
This has incorrect asymptotic behaviour: as $\tau\to\infty$ only the $(y_+-y)^{-1}$ term remains, but that does not satisfy the Riccati equation (\ref{eq:hinfty}), except when $y_+=0$. It is also incorrect at the boundary in the sense that although the $O\big((y_+-y)^{-1}\big)$ term is correct, the $O(1)$ term is not: it is equal to $\frac{q-\sqrt{q}}{1-q}y_+$, but should be $-\frac{1}{2}y_+$, which agrees only as $\tau\to0$. Yet it does have, informally, `some of the right terms'.
It is possible to refine (\ref{eq:ansatz1}) by incorporating extra terms in the expansion, again for small $\tau$. This, however, introduces unwelcome complications and is explained in the Appendix. In a nutshell the conclusion is that $h$ does \emph{not} admit a convergent expansion of the form
\[
\frac{y-y_+}{2\tau} + \frac{A(y)}{2} + \frac{1}{y_+-y}
+ \sum_{r=1}^\infty \tau^r b_r(y).
\]
In other words, taking the logarithmic derivative of $f$ does not in general remove the essential singularity of $f$ at $\tau=0$, though as we know it does in some cases, notably the Brownian motion with drift and the OU model with the boundary at equilibrium. We therefore shift our attention away from short-time development, and concentrate on making (\ref{eq:ansatz1}) work in the long-time limit as well.
\subsection{Longer-time development of $h$}
Write $q=e^{-2\theta \tau}$, where $\theta>0$ is an arbitrary constant, and consider the following ansatz:
\begin{equation}
h(\tau,y) = \frac{\theta\!\sqrt{q}\,(y-y_+)}{1-q} + \frac{\sqrt{q}\,A(y)}{1+\!\sqrt{q}} + \frac{1}{y_+-y} + \frac{1-\!\sqrt{q}}{1+\!\sqrt{q}} \, \widetilde{h}(y) + R(\tau,y).
\label{eq:h2}
\end{equation}
It is easily seen that:
\begin{itemize}
\item
It has the desired behaviour at the boundary, as given in (\ref{eq:hbarr});
\item
Its Laurent series around $\tau=0$ agrees with that of (\ref{eq:ansatz1}), regardless of $\theta$;
\item
It has the desired long-time behaviour, as is immediate from letting $q\to0$ and recalling (\ref{eq:hsharpbarr});
\item
In the OU case with $A(y)=-\theta y$ and $y_+=0$ it is exact;
\item
In the case of the arithmetic Brownian motion it is also exact.
\end{itemize}
There are other connections with known results.
First, in the OU model $A(y)=-\theta y$, consider for some fixed $\xi$ the function
\[
\psi_\textrm{b}(\tau,y) = \lim_{\varepsilon\to0} \frac{1}{2\varepsilon} \mathbf{P}\big[|Y_\tau-\xi|<\varepsilon \,\big|\, Y_0=y\big]
\]
i.e.\ the p.d.f.\ of the unconstrained process, which is a solution to the \emph{backward} equation (because the spatial variable $y$ is the initial condition). If $h=-(\partial/\partial y)\ln \psi_\textrm{b}$ then
$h(\tau,y)=\frac{\theta(qy-\sqrt{q}\xi)}{1-q}$. So the first two terms of (\ref{eq:h2}), taken together, provide a solution satisfying (\ref{eq:pde_hy}), but it is not one that we can directly use as it ignores the boundary.
The third and fourth terms can, respectively, be understood as introducing the effect of the absorption and ensuring that the right behaviour is observed in the long-time limit.
A second connection is with the work in \cite{Martin18b} which deals with the Fokker--Planck equation for general mean-reverting processes with no absorbing boundary. Define
\[
\psi_\textrm{f}(\tau,y) = \lim_{\varepsilon\to0} \frac{1}{2\varepsilon} \mathbf{P}\big[|Y_\tau-y|<\varepsilon \,\big|\, Y_0=\xi\big]
\]
which satisfies the Fokker--Planck equation. Then\footnote{Noting that $\psi(y)$ as previously defined is simply $\psi_\textrm{f}(\infty,y)$.} $\psi_\textrm{f}(\tau,y)/\psi(y)$ is a solution of the backward equation and hence its logarithmic derivative satisfies (\ref{eq:pde_hy}). In \cite{Martin18b} the first two terms of (\ref{eq:h2}) are used to provide an approximate solution to the Fokker--Planck equation for general mean-reverting models. The idea is to expand the Fokker--Planck problem `around' the OU model to which it is closest, and the solution is exact for $A(y)=-\theta y$. (We reiterate that in \cite{Martin18b} there is no absorbing boundary.)
In summary, from the way that (\ref{eq:h2}) is constructed, we have:
\begin{itemize}
\item
$R(\tau,y)\to0$ as $\tau\to0,\infty$;
\item
$R(\tau,y_+)=0$;
\item
$R$ vanishes in (i) the OU case with the boundary at equilibrium and (ii) the arithmetic Brownian motion (OU with no mean reversion).
\end{itemize}
We have therefore made an important step in constructing an approximation that is valid over short and long time scales. The connection with the first section of the paper is that $\lambda$ has appeared in (\ref{eq:hsharpbarr1}). In principle it is possible to identify $\lambda$ by solving the eigenvalue problem (\ref{eq:hinfty}), which would then render the work in \S\ref{sec:lambda} unnecessary. However the solution of (\ref{eq:hinfty}) is not straightforward. Furthermore, it transpires that it is not necessary to know $\widetilde{h}$ to make further progress, if we are just interested in the behaviour of $f$ over time (as opposed to as a function of the starting-point for fixed time).
\subsection{Choice of $\theta$}
We said above that $\theta$ is arbitrary, and its effect on the Laurent series of $h$ about $\tau=0$ is confined to the $o(1)$ term in (\ref{eq:ansatz1}), so it controls the \emph{intermediate}-time behaviour.
Given the connection with the OU process as described above, the role of $\theta$ is to map the given force-field $A$ on to its `closest' OU model in some sense.
In \cite{Martin15b} we suggested using $\hat\theta$, the average rate of mean reversion, defined as the average of $-A'$ over the invariant density $\psi$:
\begin{equation}
\hat{\theta} = \langle -A' \rangle_\infty = \langle A^2 \rangle_\infty
\label{eq:theta}
\end{equation}
and this identity shows that $\hat{\theta}$ is necessarily positive.
As pointed out in \cite{Martin18b} this choice corresponds to the Fisher information (see e.g.~\cite[\S2.5]{Lehmann98}) for the problem of estimating the mean by maximum likelihood. More precisely, consider for some p.d.f.~$\psi$ the family of distributions $\psi(y-\mu)$ indexed by the parameter $\mu\in\mathbb{R}$. Writing
\[
f(y \,|\, \mu) = \psi(y-\mu)
\]
we seek to maximise $\log f(y \,|\, \mu)$ w.r.t.\ $\mu$. The Fisher information is the expectation of the square of the $\mu$-derivative of the log-likelihood, and hence is
\[
\int_{-\infty}^\infty \left(\pderiv{}{\mu} \log f(y\,|\,\mu)\right)^2 f(y\,|\,\mu) \, dy
=
\int_{-\infty}^\infty \left(\frac{\psi'(y-\mu)}{\psi(y-\mu)} \right)^2 \psi(y-\mu) \, dy
=
\int_{-\infty}^\infty \left(\frac{\psi'(y)}{\psi(y)} \right)^2 \psi(y) \, dy
= \hat{\theta}.
\]
In broad terms, the higher the Fisher information, the more certain we are about the estimation of the parameter in question. The connection with mean reversion is that the higher the average speed of mean reversion, the more certain we are about our estimate of the mean from a given dataset, and vice versa. Using the Fisher information as an estimator of reversion speed is therefore natural.
For example the tanh case has (with $\mathrm{B}$ denoting the Beta function)
\[
A(y)=-\frac{\alpha}{\gamma}\tanh\gamma y
, \qquad
\psi(y) = \frac{\gamma(\cosh\gamma y)^{-\alpha/\gamma^2}}{\mathrm{B}\big(\frac{\alpha}{2\gamma^2},\frac{1}{2}\big)}, \qquad
\langle-A'\rangle_{\infty}=\frac{\alpha^2}{\alpha+\gamma^2}.
\]
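A stdlib-only numerical check (trapezoidal quadrature; $\alpha=2$, $\gamma=1$ are illustrative) of both the identity (\ref{eq:theta}) and the closed form above:

```python
import math

alpha, gamma = 2.0, 1.0

A     = lambda y: -(alpha / gamma) * math.tanh(gamma * y)
negAp = lambda y: alpha / math.cosh(gamma * y) ** 2            # -A'(y)
w     = lambda y: math.cosh(gamma * y) ** (-alpha / gamma**2)  # unnormalised psi

def avg(g, L=20.0, n=4000):
    """<g>_infty: average of g under psi, by the trapezium rule on [-L, L]."""
    h = 2 * L / n
    num = den = 0.0
    for j in range(n + 1):
        y = -L + j * h
        c = 0.5 if j in (0, n) else 1.0
        num += c * g(y) * w(y)
        den += c * w(y)
    return num / den

closed_form = alpha**2 / (alpha + gamma**2)   # = 4/3 here
```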
\subsection{Long- and short-time behaviour combined}
The present state of affairs is that we know the asymptotic rate of decay in the long-time limit ($\lambda$). Also we know a fair amount about the logarithmic derivative of $f$, i.e.\ $h(\tau,y)$; but this only allows us to reconstruct $f(\tau,y)$ up to a multiplicative time-dependent factor $M(\tau)$ say, which we must now obtain. Symbolically
\begin{equation}
f(\tau,y) = M(\tau) \exp \left( -\int_{y_*}^y h(\tau,z) \, dz \right);
\label{eq:decomp}
\end{equation}
the lower limit $y_*$ of the integral is arbitrary.
We calculate the exponential-integral in (\ref{eq:decomp}) first, using (\ref{eq:h2}) and defining
\[
\rho(y,\barrier) = \int_y^{y_+} \widetilde{h}(z) \, dz
\]
to give
\[
\frac{y_+-y}{y_+-y_*}
\exp \left( \frac{\theta\sqrt{q}}{1-q} \frac{(y_*-y_+)^2}{2} \right)
\exp \left( - \frac{\theta\sqrt{q}}{1-q} \frac{(y-y_+)^2}{2} \right)
\left( \frac{\psi(y_*)}{\psi(y)} \right)^{ \textstyle \frac{\sqrt{q}}{1+\sqrt{q}} }
e^ {\rho(y,\barrier) \textstyle \frac{1-\sqrt{q}}{1+\sqrt{q}} } .
\]
The prefactor $(y_+-y_*)^{-1}$ can be absorbed into the $M(\tau)$ term, which means that in effect we can discard it. This permits us to let $y_*\to y_+$, giving:
\[
(y_+-y)
\exp \left( \frac{- \theta\sqrt{q}(y-y_+)^2}{2(1-q)} \right)
\left( \frac{\psi(y_+)}{\psi(y)} \right)^{ \textstyle \frac{\sqrt{q}}{1+\sqrt{q}} }
e^ { \rho(y,\barrier) \textstyle \frac{1-\sqrt{q}}{1+\sqrt{q}} } .
\]
We now turn to the prefactor $M(\tau)$. Inserting (\ref{eq:decomp}) into (\ref{eq:pde_hy}) gives a first-order linear differential equation for $M$:
\[
- \frac{1}{M} \deriv{M}{\tau} =
\mathcal{Q}[h] - \int_{y_*}^y \dot{h}(\tau,z) \, dz ,
\]
with $\dot{h}=\partial h/\partial \tau$.
Notice that the RHS seems to depend on $y$, but does not do so, because $h$ obeys (\ref{eq:pde_hy}). Thus any $y$ can be chosen, and setting it equal to $y_*$ causes the second term to vanish. Then we let $y_*\to y_+$ to obtain
\begin{equation}
M(\tau) = \exp \left( - \int_\cdot^\tau \mathcal{Q}[h](\tau,y_+) \, d\tau \right) \times \mbox{const}
\label{eq:prefac}
\end{equation}
where the lower integration limit is arbitrary and only influences the multiplicative constant.
Using (\ref{eq:hsharpbarr0},\ref{eq:hsharpbarr1},\ref{eq:h2}):
\[
\mathcal{Q}[h](\tau,y_+) =
\frac{3\theta q}{1-q} + \lambda + \frac{\theta\nu\sqrt{q}}{1+\!\sqrt{q}} + o(R)
\]
where the constant $\nu$ is defined by
\begin{equation}
\theta\nu = 3\theta - 2\lambda + A'(y_+) + \frac{A(y_+)^2}{2}
\label{eq:ecoef}
\end{equation}
and the symbol $o(R)$ denotes a function that vanishes if $R$ is identically zero.
Doing the $\tau$-integral, recalling $d\tau=-dq/2\theta q$, gives
\[
M(\tau) = C(1-q)^{-3/2} q^{\lambda/2\theta} \left(\frac{1+\!\sqrt{q}}{2}\right)^{\nu}
\]
($C$ denotes a positive constant) or equivalently
\[
M(\tau) = \frac{Ce^{-\lambda \tau}}{(1-e^{-2\theta\tau})^{3/2}}
\left(\frac{1+e^{-\theta\tau}}{2}\right)^{\nu}
\]
and two important ingredients can be seen: the scaling law for short time is $\propto\tau^{-3/2}$, seen from the Brownian motion approximation, and the asymptotic decay rate is $\lambda$, as it must be.
The former is clearly visible in (\ref{eq:wrong}) but the latter is not.
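That this expression for $M$ does solve (\ref{eq:prefac}), with the $o(R)$ term dropped, can be confirmed numerically; a stdlib-only sketch with arbitrary illustrative values of $\theta,\lambda,\nu$ (and $C=1$):

```python
import math

theta, lam, nu = 0.8, 0.35, 1.7   # illustrative parameter values

def M(tau):
    """M(tau) = (1-q)^{-3/2} q^{lam/2 theta} ((1+sqrt q)/2)^nu, q = e^{-2 theta tau}."""
    q = math.exp(-2 * theta * tau)
    return ((1 - q) ** -1.5 * q ** (lam / (2 * theta))
            * ((1 + math.sqrt(q)) / 2) ** nu)

tau, eps = 0.9, 1e-6
q = math.exp(-2 * theta * tau)

# -(d/dtau) ln M should equal Q[h](tau, y_+) with the o(R) term dropped
lhs = -(math.log(M(tau + eps)) - math.log(M(tau - eps))) / (2 * eps)
rhs = 3 * theta * q / (1 - q) + lam + theta * nu * math.sqrt(q) / (1 + math.sqrt(q))
```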
We are now ready to combine it with the previous working to give
\begin{eqnarray}
f(\tau,y) &\approx&
\frac{C(y_+-y)e^{-\lambda \tau}}{\sqrt{(1-q)^3}}
\exp \left( \frac{- \theta\sqrt{q}(y-y_+)^2}{2(1-q)} \right)
\label{eq:almostfinal}
\\
&& \times
\left( \frac{\psi(y_+)}{\psi(y)} \right)^{ \textstyle \frac{\sqrt{q}}{1+\sqrt{q}} }
\left(\frac{1+\sqrt{q}}{2}\right)^{\nu}
e^{\rho(y,\barrier) \textstyle \frac{1-\sqrt{q}}{1+\sqrt{q}}} .
\nonumber
\end{eqnarray}
To determine the overall scaling factor $C$ we consider what happens when the process starts near the boundary by setting $y=y_+-\varepsilon$ and allowing $\varepsilon\to0$. The density must integrate to unity and, making the substitution\footnote{Not $u=\varepsilon^2 \theta \!\sqrt{q}/(1-q)$. The term in the exponential can be manipulated as $\sqrt{q}/(1-q) = q/(1-q) + \frac{1}{2} + o(1)$ as $q\to1$.} $u=\varepsilon^2 \theta q/(1-q)$, we obtain
\[
\int_0^\infty f(\tau,y) \, d\tau = \frac{C}{2\theta^{3/2}} \int_0^\infty u^{1/2} e^{-u/2} (\ldots) \, du
\]
where the expression $(\ldots)$ tends to unity as $\varepsilon\to0$; notice that in this limit all three terms in the second line of (\ref{eq:almostfinal}) disappear, essentially because we are only interested in $y \approx y_+$ and $\theta\tau\ll1$.
Therefore $C=\sqrt{2\theta^3/\pi}$ and we arrive at
\begin{equation}
f(\tau,y) \approx
\frac{(y_+-y)e^{-\lambda \tau}}{\sqrt{\pi(1-q)^3/2\theta^3}}
\exp \left( \frac{- \theta\sqrt{q}(y-y_+)^2}{2(1-q)} \right)
\left( \frac{\psi(y_+)}{\psi(y)} \right)^{ \textstyle \frac{\sqrt{q}}{1+\sqrt{q}} }
\left(\frac{1+\sqrt{q}}{2}\right)^{\nu} e^{\rho(y,\barrier) \textstyle \frac{1-\sqrt{q}}{1+\sqrt{q}}} ,
\label{eq:final}
\end{equation}
with $\nu$ given by (\ref{eq:ecoef}).
Although we still do not know $\rho(y,\barrier)$, its value can be ascertained by requiring $\int_0^\infty f(\tau,y) \, d\tau=1$, which can be done by Gaussian quadrature and a numerical bisection search \cite{NRC}, and with this final step we are done\footnote{It is perhaps odd at first sight that we are using this principle twice, to obtain two different pieces of information: $C$ and $\rho(y,\barrier)$. The point is that $C$, above, is governed by the short-term behaviour, and the limit $y\to y_+$ screens out any terms that pertain to long-term behaviour, as the process will hit the boundary almost immediately in that limit.}.
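The normalisation step can be sketched numerically. The code below is an illustrative implementation, not the authors' code: it specialises to the OU case $A(y)=-y$ with $\theta=1$ and the boundary at the equilibrium point $y_+=0$ (so $\lambda=1$, $\nu=0$ and the correct answer is $\rho=0$), and recovers $\rho$ by bisection on the condition $\int_0^\infty f\,d\tau=1$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import bisect

# Illustrative sketch: OU case A(y) = -y, theta = 1, barrier at equilibrium,
# where lambda = 1, nu = 0, psi(y) is proportional to exp(-y**2/2), and the
# correct answer is rho = 0.
Y_PLUS, LAM, NU, THETA = 0.0, 1.0, 0.0, 1.0

def density(tau, y, rho):
    """Approximate first-passage density (eq:final) with the exp(rho*...) factor."""
    if tau <= 0.0:
        return 0.0
    q = np.exp(-2.0 * THETA * tau)
    if q >= 1.0:           # guard against q rounding to 1 for tiny tau
        return 0.0
    sq = np.sqrt(q)
    pref = (Y_PLUS - y) * np.exp(-LAM * tau) \
        / np.sqrt(np.pi * (1.0 - q)**3 / (2.0 * THETA**3))
    gauss = np.exp(-THETA * sq * (y - Y_PLUS)**2 / (2.0 * (1.0 - q)))
    psi_ratio = np.exp((y**2 - Y_PLUS**2) / 2.0)   # psi(y_+)/psi(y) for OU
    corr = psi_ratio**(sq / (1.0 + sq)) * ((1.0 + sq) / 2.0)**NU
    return pref * gauss * corr * np.exp(rho * (1.0 - sq) / (1.0 + sq))

def total_mass(rho, y=-1.0):
    return quad(lambda t: density(t, y, rho), 0.0, np.inf, limit=200)[0]

# Bisection on rho so that the density integrates to unity; ~0 in this case.
rho_star = bisect(lambda r: total_mass(r) - 1.0, -2.0, 2.0, xtol=1e-9)
```

The mass is monotone increasing in $\rho$, so the bracket $[-2,2]$ guarantees a sign change and bisection converges without further care.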
\subsection{Examples}
\subsubsection{OU process}
In the special case of the OU process with the boundary at equilibrium, we have $\psi(y)=(2\pi)^{-1/2} e^{-y^2/2}$, $\lambda=1$ and $\nu=0$; it is easily seen that $\rho(y,\barrier)=0$ too\footnote{By integrating using essentially the same substitution as before, $u=q/(1-q)$.}, so that (\ref{eq:wrong}) is recovered. More subtly, this identifies why and how (\ref{eq:wrong}) is incorrect whenever $y_+\ne0$.
To investigate the accuracy of (\ref{eq:final}) we use a numerical PDE solver.
Figure~\ref{fig:ou} shows the results for OU with the boundary at different positions, using various starting-points for each. The agreement is very good.
\begin{figure}[!htbp]
\hspace{-10mm}
\begin{tabular}{llll}
(i) & \scalebox{0.6}{\includegraphics{pdebarr_ou_bar-1.eps}} &
(ii) & \scalebox{0.6}{\includegraphics{pdebarr_ou_bar0.eps}} \\
(iii) & \scalebox{0.6}{\includegraphics{pdebarr_ou_bar1.eps}} &
(iv) & \scalebox{0.6}{\includegraphics{pdebarr_ou_bar2.eps}} \\
\end{tabular}
\caption{
First-passage time density for $A(y)=-y$ (OU): Eq.(\ref{eq:final}) compared with numerical PDE solver, except for $y_+=0$, when (\ref{eq:final}) is exact.
Boundaries and starting-points as indicated on each plot.
}
\label{fig:ou}
\end{figure}
\begin{figure}[!htbp]
\hspace{-10mm}
\begin{tabular}{llll}
(i) & \scalebox{0.6}{\includegraphics{pdebarr_sech2_bar-1.eps}} &
(ii) & \scalebox{0.6}{\includegraphics{pdebarr_sech2_bar0.eps}} \\
(iii) & \scalebox{0.6}{\includegraphics{pdebarr_sech2_bar1.eps}} &
(iv) & \scalebox{0.6}{\includegraphics{pdebarr_sech2_bar2.eps}} \\
\end{tabular}
\caption{
First-passage time density for $A(y)=-2\tanh y$: Eq.(\ref{eq:final}) compared with numerical PDE solver. Boundaries and starting-points as indicated on each plot.
}
\label{fig:sech2}
\end{figure}
\begin{figure}[!htbp]
\hspace{-10mm}
\begin{tabular}{llll}
(i) & \scalebox{0.6}{\includegraphics{pdebarr_df_bar-1.eps}} &
(ii) & \scalebox{0.6}{\includegraphics{pdebarr_df_bar0.eps}} \\
(iii) & \scalebox{0.6}{\includegraphics{pdebarr_df_bar1.eps}} &
(iv) & \scalebox{0.6}{\includegraphics{pdebarr_df_bar2.eps}} \\
\end{tabular}
\caption{
First-passage time density for $A(y)=-\mathrm{sgn}\, y$: Eq.(\ref{eq:final}) compared with numerical PDE solver. Boundaries and starting-points as indicated on each plot.
}
\label{fig:df}
\end{figure}
\subsubsection{Arithmetic Brownian motion}
As in \S~\ref{sec:lambda} we briefly mention this case, despite the fact that $\psi$ is not normalisable. We have $A(y)=\mu\ge0$, $\psi(y)=e^{\mu y}$, $\lambda=\mu^2/4$ and $\theta=0$, which we understand by taking the limit as $\theta\to0$ from above. The last two terms of (\ref{eq:final}) are then unity, and the expression converges to the inverse Gaussian distribution, as expected. Incidentally it also works when $\mu<0$, despite the fact that this case is not covered by the hypotheses of the paper; note that the first-passage time density then no longer integrates to unity, as there is a positive probability of never hitting the boundary.
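As a numerical sanity check, assuming the convention $dY = A(Y)\,d\tau + \sqrt{2}\,dW$ so that the limiting density is $f(\tau)=a(4\pi\tau^3)^{-1/2}e^{-(a-\mu\tau)^2/4\tau}$ with $a=y_+-y$, the sketch below verifies that this inverse Gaussian limit integrates to unity for $\mu>0$, and to $e^{\mu a}<1$ for $\mu<0$.

```python
import numpy as np
from scipy.integrate import quad

def inv_gauss(tau, a, mu):
    """First-passage density of dY = mu*dtau + sqrt(2)*dW to a barrier at distance a."""
    if tau <= 0.0:
        return 0.0
    return a / np.sqrt(4.0 * np.pi * tau**3) * np.exp(-(a - mu * tau)**2 / (4.0 * tau))

a = 1.5
mass_up = quad(lambda t: inv_gauss(t, a, 0.8), 0.0, np.inf)[0]     # mu > 0: hit a.s.
mass_down = quad(lambda t: inv_gauss(t, a, -0.8), 0.0, np.inf)[0]  # mu < 0: defective
# mass_up ~ 1, while mass_down ~ exp(mu*a) < 1, the probability of ever
# reaching the barrier when the drift points away from it.
```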
\subsubsection{$A(y)=-2\tanh y$}
As before, we take $A(y)=-2\tanh y$, giving $\psi(y)=\frac{1}{2} \mathrm{sech}^2 y$.
Figure~\ref{fig:sech2} shows the results, and again the agreement is good.
\subsubsection{$A(y)=-\mathrm{sgn}\, y$ (dry-friction case)}
As before, we take $A(y)=-\mathrm{sgn}\, y$, giving $\psi(y)=\frac{1}{2} e^{-|y|}$.
In calculating (\ref{eq:ecoef}) we always take $A'(y_+)=0$, even when $y_+=0$ (to be understood as the limit $y_+\nearrow0$).
Figure~\ref{fig:df} shows the results.
For this model the agreement is less good, particularly when $y_+\le0$, though the short-term behaviour is correct and the long-term rate of exponential decay is correctly captured: as we said in \S\ref{sec:short}, it is $e^{-\tau/4}$ whenever $y_+\le1$.
What makes this model difficult to approximate is that it is essentially two very different models joined together.
If the boundary is below the equilibrium level, the model is just an arithmetic Brownian motion and there is no mean reversion\footnote{As, by convention in this paper, we have chosen to start below the boundary.}. If the boundary is above the equilibrium level, the model becomes mean-reverting and exhibits different behaviour. Either model can be successfully approximated on its own---indeed, as we said above, (\ref{eq:final}) is exact for the arithmetic Brownian motion. However, we do not have the luxury of being able to take two different copies of (\ref{eq:final}), with different parameters, to represent the two halves, and the results shown are the consequence of trying to encapsulate all the properties of the model into one.
\section{Conclusions}
We have derived an approximate expression for the first-passage time density of a mean-reverting process, that captures the short- and long-term behaviour in a single formula, Eq.(\ref{eq:final}).
The development has used the Ornstein--Uhlenbeck process as a prototype, and in certain cases delivers exact results. However, it possesses much greater generality than that, and our basic thesis is that within a broad class of models the answer can always be effectively approximated this way.
Perhaps the most cogent reason for wanting to work with an expression resembling (\ref{eq:final}) is that it \emph{looks} like the stopping-time density of a mean-reverting diffusion: in other words it has a coherent form in a way that a Bromwich integral or an eigenfunction expansion does not.
The paper does not pretend to be the last word on the subject.
It is likely that the most productive approach to this problem is a combination of analytical and numerical techniques: the latter may include the numerical solution of integral equations, or the use of spectral methods \cite{Boyd01}.
In principle the formula (\ref{eq:final}), coupled with (\ref{eq:pde_Fy}), permits such an approach. If we extract the most important terms from (\ref{eq:final}), and write\footnote{We are re-using the symbol $R$.}
\[
f(\tau,y) =
\frac{(y_+-y)e^{-\lambda \tau}}{\sqrt{\pi(1-q)^3/2\theta^3}}
\exp \left( \frac{- \theta\sqrt{q}(y-y_+)^2}{2(1-q)} \right)
\left( \frac{\psi(y_+)}{\psi(y)} \right)^{ \textstyle \frac{\sqrt{q}}{1+\sqrt{q}} }
\big( 1+ R(\tau,y)\big) ;
\]
then $R$ satisfies a parabolic PDE and, by construction, it is known to be zero in several different limits; as it is smooth and slowly-varying, it is an ideal candidate for approximation by spectral methods.
(Another possibility is to replace $1+R$ with $e^R$, which ensures positivity at the expense of creating a nonlinear PDE.)
This method of attack has been applied to special functions for many years: the basic idea is to extract various factors and/or transform the function in question by considering its behaviour in various limits, and then approximate the remainder term with a Chebyshev expansion or some variant of it. Many of the approximations in \cite{Abramowitz64} fall into this category.
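By way of illustration, the mechanics of such a fit are simple; the remainder function below is a hypothetical smooth stand-in, not the paper's $R$.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Stand-in smooth "remainder": once the dominant factors of the density have
# been stripped off, whatever is left is assumed smooth and slowly varying,
# so a low-order Chebyshev fit represents it to high accuracy.
def remainder(x):
    return np.exp(-x) * np.cos(3.0 * x)

xs = np.linspace(0.0, 1.0, 201)
t = 2.0 * xs - 1.0                         # map [0, 1] onto [-1, 1]
coeffs = C.chebfit(t, remainder(xs), deg=10)
max_err = np.max(np.abs(C.chebval(t, coeffs) - remainder(xs)))
```

Eleven coefficients suffice for roughly single-precision accuracy here, which is typical of Chebyshev expansions of smooth functions on a short interval.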
Another possibility is to use the above definition and develop the remainder term numerically using the Volterra integral equation techniques of \cite{Lipton18}.
Possible further developments include multidimensional analogues (the exit time from a polygonal zone, for example) and one-dimensional problems with two boundaries, commonly called the double-barrier problem or exit time from a channel, which have been studied in e.g.~\cite{Dirkse75,Donofrio18,Lindenberg75,Sweet70}.
Another possibility is the first-passage time of a L\'evy process rather than simply a diffusion, for which recent discussions and applications are in e.g.~\cite{Lipton02c,Martin10b,Martin18a}. While the long-time behaviour in such models is still exponential, the short-time behaviour is typically different.
\section{Acknowledgements}
RM thanks Ridha Nasri for his advice on parabolic cylinder functions and Nicholson integrals, and Alexander Lipton for helpful insights into PDE theory. We are also grateful to Satya Majumdar for many discussions on first-passage time problems over the years.
\section{\@mainheadtrue
\@startsection {section}{1}{\z@}{0.8cm plus1ex minus .2ex}
{0pt}{\reset@font\bf}}
\def\@sect#1#2#3#4#5#6[#7]#8{\ifnum #2>\c@secnumdepth
\let\@svsec\@empty\else
\refstepcounter{#1}%
\def\@tempa{#8}%
\ifx\@tempa\empty %
\ifappendixon %
\if@mainhead %
\def\@tempa{APPENDIX }\def\@tempb{}%
\else %
\def\@tempa{}\def\@tempb{. \enskip\enskip }%
\fi
\else %
\def\@tempa{}\def\@tempb{. \enskip\enskip }%
\fi
\else %
\ifappendixon %
\if@mainhead %
\def\@tempa{APPENDIX }\def\@tempb{: }%
\else %
\def\@tempa{}\def\@tempb{. \enskip\enskip }%
\fi
\else %
\def\@tempa{}\def\@tempb{. \enskip\enskip }%
\fi
\fi
\edef\@svsec{\@tempa\csname the#1\endcsname\@tempb}\fi
\@tempskipa #5\relax
\ifdim \@tempskipa>\z@
\begingroup #6\relax
{\hskip #3\relax\@svsec}{\interlinepenalty \@M
\if@mainhead\uppercase{#8}\else#8\fi\par}%
\endgroup
\csname #1mark\endcsname{#7}\addcontentsline
{toc}{#1}{\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}\fi
#7}\else
\def\@svsechd{#6\hskip #3\relax %
\@svsec \if@mainhead\uppercase{#8}\else#8\fi
\csname #1mark\endcsname
{#7}\addcontentsline
{toc}{#1}{\ifnum #2>\c@secnumdepth \else
\protect\numberline{\csname the#1\endcsname}\fi
#7}}\fi
\@xsect{#5}}
\makeatother
\begin{document}
\title{Another View on Massless Matter-Gravity Fields \\
in Two Dimensions\thanks
{This work is supported in part
by funds provided by the U.S.
Department of Energy (D.O.E.) under cooperative
agreement \#DF-FC02-94ER40818}}
\author{R. Jackiw}
\address{{~}\\ \baselineskip=14pt
Center for Theoretical Physics,
Laboratory for Nuclear Science
and Department of Physics \\
Massachusetts Institute of Technology \\
Cambridge, Massachusetts~ 02139 \\ {~} }
\date {MIT-CTP: \#2377 \hfill HEP-TH: /9501016
\hfill January 1995}
\maketitle
\begin{abstract}
Conventional quantization of two-dimensional diffeomorphism and Weyl
invariant theories sacrifices the latter symmetry to anomalies, while
maintaining the former. When alternatively Weyl invariance is
preserved by abandoning diffeomorphism invariance, we find that some
invariance against coordinate redefinition remains: one can still
perform at will transformations possessing a constant Jacobian. The
alternate approach enjoys as much ``gauge'' symmetry as the
conventional formulation.
\end{abstract}
\bigskip\bigskip
\section{}
The theory of a massless scalar field $\phi$, interacting with
2-dimensional gravity that is governed solely by a metric tensor
$g_{\mu\nu}$ has a conventional description: functionally integrating
$\phi$ produces an effective action $\Gamma^P$, a functional of
$g^{\mu\nu}$, which has been given by Polyakov as~\cite{ref:1}
\begin{equation}
\Gamma^P (g^{\mu\nu}) = \frac{1}{96\pi}
\int d^2 x \, d^2 y \sqrt{-g(x)} \,
R(x) K^{-1}(x,y) \sqrt{-g(y)} R(y)
\label{eq:1}
\end{equation}
Here $R$ is the scalar curvature and $K^{-1}$ satisfies
\begin{equation}
- \frac{1}{\sqrt{-g(x)}} \frac{\partial}{\partial x^\mu} \sqrt{-g(x)} \,
g^{\mu\nu}(x) \frac{\partial}{\partial x^\nu} K^{-1}(x,y) =
\frac{1}{\sqrt{-g(x)}} \delta^2 (x-y)
\label{eq:2:}
\end{equation}
Eq.~(\ref{eq:1}) results after definite choices are made to resolve
ambiguities of local quantum field theory: it is required that
$\Gamma^P$ be diffeomorphism invariant and lead to the conventional
trace (Weyl) anomaly. This translates into the conditions that the
energy-momentum tensor
\begin{equation}
\Theta^{P}_{\mu\nu} = \frac{2}{\sqrt{-g}}
\frac{\delta\Gamma^P} {\delta g^{\mu\nu}}
\label{eq:3}
\end{equation}
be covariantly conserved (diffeomorphism invariance of $\Gamma^P$),
\begin{equation}
D_\mu \left(g^{\mu\nu}\Theta_{\nu\alpha}^P\right) = 0
\label{eq:4}
\end{equation}
and possess a non-vanishing trace (Weyl anomaly).
\begin{equation}
g^{\mu\nu} \Theta^{P}_{\mu\nu} = \frac{1}{24\pi} R
\label{eq:5}
\end{equation}
Equations (\ref{eq:3})--(\ref{eq:5}) can be integrated to give (\ref{eq:1});
also from (\ref{eq:1}) and (\ref{eq:3})
one finds that\footnote{%
When the gravity field $g_{\mu\nu}$ is viewed
as externally prescribed, $\Theta^P_{\mu\nu}$ is the vacuum matrix element
of the operator energy-momentum tensor for the quantum field $\phi$.
Eq.~(\ref{eq:6})
has been derived by M. Bos [2], not by varying the Polyakov action
\cite{ref:1}, but by direct computation of the relevant expectation value.
}
\begin{eqnarray}
\Theta^{P}_{\mu\nu} & = & -\frac{1}{48\pi}
\left( \partial_\mu \Phi \partial_\nu \Phi
- \frac{1}{2} g_{\mu\nu} g^{\alpha\beta} \partial_\alpha \Phi
\partial_\beta\Phi \right)
\nonumber \\
& & \quad - \frac{1}{24\pi}
\left( D_\mu D_\nu \Phi - \frac{1}{2} g_{\mu\nu}
g^{\alpha\beta} D_\alpha D_\beta \Phi \right)
+ \frac{1}{48\pi} g_{\mu\nu} R
\label{eq:6}
\end{eqnarray}
where $\Phi$ is the solution to
\begin{equation}
g^{\alpha\beta} D_\alpha D_\beta \Phi = \frac{1}{\sqrt{-g}} \,
\partial_\alpha \sqrt{-g} \, g^{\alpha\beta} \partial_\beta \Phi = R
\label{eq:6b}
\end{equation}
One easily verifies that (\ref{eq:6}) obeys (\ref{eq:4}) and (\ref{eq:5}).
Notice that the traceless
part of $\Theta^P_{\mu\nu}$ satisfies
\begin{equation}
D_\mu \Bigl( g^{\mu\nu} \Theta^P_{\nu\alpha}
\Bigm|_{\raisebox{-.2ex}{$\scriptstyle\rm traceless$}} \, \Bigr)
= \frac{1}{48\pi} \partial_\alpha {\it R}
\label{eq:7}
\end{equation}
It is well known that one can make alternative choices when defining
relevant quantities. In particular, one can abandon diffeomorphism
invariance and obtain an alternate effective action $\Gamma$, which is
Weyl invariant because it is a functional solely of the Weyl invariant
combination\footnote{%
See Ref.~\cite{ref:5}. A point of view that provides another
alternative to Polyakov's approach has recently
appeared in Ref.~\cite{ref:2}.
}
\begin{equation}
\gamma^{\mu\nu} \equiv \sqrt{-g} \, g^{\mu\nu}
\label{eq:8}
\end{equation}
This ensures vanishing trace for the modified energy momentum tensor.
\begin{eqnarray}
\Theta_{\mu\nu}
& = & \frac{2}{\sqrt{-g}}
\frac{\delta\Gamma}{\delta g^{\mu\nu}}
= 2 \frac{\delta\Gamma}{\delta\gamma^{\mu\nu}}
- \gamma_{\mu\nu} \gamma^{\alpha\beta}
\frac{\delta\Gamma}{\delta\gamma^{\alpha\beta}}
\label{eq:9} \\
\gamma^{\mu\nu} \Theta_{\mu\nu}
& = & 0
\label{eq:10}
\end{eqnarray}
Here $\gamma_{\mu\nu}$ is the matrix inverse to $\gamma^{\mu\nu}$,
\begin{equation}
\gamma_{\mu\nu} = g_{\mu\nu} / \sqrt{-g}
\label{eq:11}
\end{equation}
and $ {\det} \gamma_{\mu\nu} = {\det} \gamma^{\mu\nu} = -1$.
In this Letter we study more closely the response of $\Gamma\/$ to
diffeomorphism transformations when Weyl symmetry is preserved. We
find that diffeomorphism invariance is not lost completely; rather it
is reduced: $\Gamma\/$ remains invariant against transformations that
possess a constant (unit) Jacobian --- we call this $S$-diffeomorphism
invariance.\footnote{%
$S$-diffeomorphisms preserve local area
$\sqrt{-g} \, d^2x$ on spaces where $\sqrt{-g}$ is constant. (I thank
W.~Taylor and B.~Zwiebach for discussions on this.)
}
In the absence of diffeomorphism invariance,
$\Theta_{\mu\nu}$ is no longer covariantly conserved; nevertheless we
shall show that $S$-diffeomorphism invariance restricts the divergence
of $\Theta_{\mu\nu}$ [essentially to the form given in (\ref{eq:7})].
We shall argue that our alternative evaluation follows the intrinsic
structures of the theory more closely than the conventional approach.
\section{}
Before presenting our argument, we define notation and record some formulas.
The 2-dimensional Euler density is a total derivative.
\begin{equation}
\sqrt{-g} R = \partial_\mu R^\mu \>, \qquad
R = D_\mu \left( R^\mu / \sqrt{-g} \right)
\label{eq:12}
\end{equation}
But $R^\mu$ cannot be presented explicitly and locally in terms of the metric
tensor and its derivatives
as a whole; rather it is necessary to parametrize $g_{\mu\nu} =
\sqrt{-g}\gamma_{\mu\nu}$. We define
\begin{mathletters}
\begin{equation}
\sqrt{-g} = e^\sigma
\label{eq:13a}
\end{equation}
and parametrize the light-cone $[(\pm) \equiv \frac{1}{\sqrt{2}} (0 \pm 1)]$
components of $\gamma_{\mu\nu}$ as
\begin{eqnarray}
\gamma_{++} & = & -\gamma^{--} = e^\alpha {\sinh} \beta \nonumber \\
\gamma_{--} & = & -\gamma^{++} = e^{-\alpha} {\sinh} \beta \nonumber \\
\gamma_{+-} & = & \gamma_{- +} = \gamma^{+ -}
= \gamma^{- +} = {\cosh} \beta
\label{eq:13b}
\end{eqnarray}
\label{eq:13ab}
\end{mathletters}
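As a quick symbolic check of the unimodularity claim $\det\gamma_{\mu\nu}=-1$, the parametrization (\ref{eq:13b}) can be fed to a computer algebra system:

```python
import sympy as sp

alpha, beta = sp.symbols('alpha beta', real=True)

# Light-cone components of gamma_{mu nu} from the parametrization (13b)
g_pp = sp.exp(alpha) * sp.sinh(beta)    # gamma_{++}
g_mm = sp.exp(-alpha) * sp.sinh(beta)   # gamma_{--}
g_pm = sp.cosh(beta)                    # gamma_{+-} = gamma_{-+}

gamma = sp.Matrix([[g_pp, g_pm], [g_pm, g_mm]])
det = sp.simplify(gamma.det())          # sinh^2 - cosh^2 = -1 for all alpha, beta
```

The $\alpha$-dependence cancels between the $(++)$ and $(--)$ components, leaving $\sinh^2\beta-\cosh^2\beta=-1$ identically.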
Then the formula for $R^\mu$ reads
\begin{equation}
R^\mu = \gamma^{\mu\nu} \partial_\nu \sigma +
\partial_\nu\gamma^{\mu\nu}-\epsilon^{\mu\nu} ({\cosh} \beta-1)
\partial_\nu\alpha
\label{eq:14}
\end{equation}
where the explicit parametrization (\ref{eq:13ab}) is needed to
present the last term in (\ref{eq:14}).\footnote{%
This is analogous to what happens with a Chern-Simons term.
Upon performing a gauge
transformation with a gauge function $U$, the Chern-Simons term
changes by a total derivative. However, direct evaluation of the
gauge response includes the expression $\omega = \frac{1}{24\pi^2}
{\rm tr} \epsilon^{\alpha\beta\gamma} U^{-1} \partial_\alpha U
U^{-1}\partial_\beta U^{-1}\partial_\gamma U$, which can be recognized
as a total derivative only after $U$ is explicitly parametrized. For
example, in SU(2) $U = { \exp } \,\theta, \, \theta = \theta^a\sigma^a
/2i$, and $\omega= \partial_\alpha
\omega^\alpha$
where $\omega^\alpha = \frac{1}{4\pi^2} {\rm tr} \epsilon^{\alpha\beta\gamma}
\theta\partial_\beta \theta\partial_\gamma \theta
\left(\frac{\mid\theta\mid - {\sin} \mid\theta\mid}
{|\theta|^3}\right)$ with
$|\theta| \equiv \sqrt{\theta^a\theta^a}$; see Jackiw in~\cite{ref:4}.
}
(Here $\epsilon^{\mu\nu}$ is the anti-symmetric
numerical quantity, normalized by $\epsilon^{01} = 1$.)
Even though the last contribution in (\ref{eq:14}) to $R^\mu\/$ is not
expressible in terms of $g_{\mu \nu}\/$ or $\gamma_{\mu \nu}\/$, its
arbitrary variation satisfies a formula involving only $\gamma_{\mu \nu}\/$.
\begin{equation}
\delta \Bigl[ \epsilon^{\mu \nu} (\cosh \beta - 1) \partial_\nu \alpha \Bigr]
- \partial_\nu \Bigl[ \epsilon^{\mu \nu} (\cosh \beta - 1)
\delta \alpha \Bigr]
= \frac{-1}{2} \gamma^{\mu \nu}
\Bigl( \partial_\alpha \gamma_{\nu \beta}
+ \partial_\beta \gamma_{\nu \alpha}
- \partial_{\nu} \gamma_{\alpha \beta} \Bigr)
\delta \gamma^{\alpha \beta}
\label{eq:14a}
\end{equation}
Note that the right side equals
$- \gamma^\mu_{\alpha \beta} \delta \gamma^{\alpha \beta}\/$,
where $\gamma^\mu_{\alpha \beta}\/$
is the Christoffel connection when the metric tensor
is $\gamma_{\mu \nu} : \gamma^{\mu}_{\alpha \beta}
= \Gamma^{\mu}_{\alpha \beta}
\biggm|_{\stackrel{\cdot}{g_{\mu \nu} = \gamma_{\mu \nu}}}\/$.
While the covariant divergence of $R^\mu / \sqrt{-g}\/$ is the scalar
curvature, see (\ref{eq:12}), $R^\mu / \sqrt{-g}\/$ does not transform
as a vector under coordinate redefinition. Rather for an
infinitesimal diffeomorphism generated by $f^\mu\/$
\begin{equation}
\delta_D x^\mu = - f^\mu (x)
\label{eq:14b}
\end{equation}
one verifies that
\begin{mathletters}
\begin{equation}
\delta_D (R^\mu / \sqrt{-g})
= L_f (R^\mu / \sqrt{-g})
+ \frac{1}{\sqrt{-g}} \epsilon^{\mu \nu} \partial_\nu \Delta_{f}
\label{eq:14c1}
\end{equation}
where $L_f\/$ is the Lie derivative with respect to $f^\mu\/$, and
\begin{equation}
\Delta_f \equiv
( \partial_{+} - e^\alpha \tanh \frac{\beta}{2} \partial_{-} ) f^{+}
- ( \partial_{-} - e^{- \alpha} \tanh \frac{\beta}{2}
\partial_{+} ) f^{-}
\label{eq:14c2}
\end{equation}
\label{eq:14c}
\end{mathletters}
This non-tensorial transformation rule nevertheless ensures a scalar
transformation law for $D_\mu (R^\mu / \sqrt{-g})\/$. Consequently, a
world scalar action may be constructed by coupling vectorially
$R^\mu\/$ to a scalar field $\Psi\/$, $I_V = \int d^2 x \, R^\mu
\partial_\mu \Psi\/$; invariance is verified from (\ref{eq:12}) after
partial integration. An axial coupling also produces a world scalar
action, $I_A = \int \frac{d^2 x}{\sqrt{-g}} \, R_\mu \epsilon^{\mu
\nu} \partial_\nu \Psi\/$, provided $\Psi\/$ satisfies $g^{\mu \nu}
D_\mu D_\nu \Psi = 0\/$; this follows from~(\ref{eq:14c}).
Finally we remark that the last term in (\ref{eq:14}) naturally
defines a 1-form $a \equiv (\cosh \beta - 1) d \alpha\/$ and the
2-form $\omega = d a = \sinh \beta d \beta d \alpha\/$. These are
recognized as the canonical 1- form and the symplectic 2-form,
respectively, for SL $(2, R)$. Indeed $\omega\/$ also equals
$\frac{1}{2} \epsilon_{a b c} \xi^a d \xi^b d \xi^c\/$, where
$\xi^a\/$ is a three vector on a hyperboloid = SL $(2, R) / U (1) :
(\xi^1)^2 - (\xi^2)^2 - (\xi^3)^2 = -1$. Effectively, $\omega\/$ is
the Kirillov-Kostant 2-form on SL $(2,R)$.\footnote{%
I thank V. P. Nair for pointing this out. }
\section{}
The Lagrange density for our theory reads
\begin{equation}
{\cal L} = \frac{1}{2} \sqrt{-g} g^{\mu\nu} \partial_\mu \phi
\partial_\nu \phi =
\frac{1}{2} \gamma^{\mu\nu} \partial_\mu \phi \partial_\nu \phi
\label{eq:15}
\end{equation}
Equivalently, a first-order expression may be given,
\begin{equation}
{\tilde{\cal L}} = \Pi \dot{\phi} - u {\cal E} - v {\cal P}
\label{eq:16}
\end{equation}
where ${\cal E}$ and ${\cal P}$ are the free-field energy and momentum
densities
\begin{mathletters}
\begin{eqnarray}
{\cal E} & = & \frac{1}{2} \Pi^2 + \frac{1}{2} (\phi')^2
\label{eq:17a} \\
{\cal P} & = & - \phi'\Pi
\label{eq:17b}
\end{eqnarray}
\label{eq:17ab}
\end{mathletters}
Here dot and dash signify time ($x^0 \equiv t$) and space ($x^1 \equiv
x$) differentiation, respectively. The gravitational variables enter
as Lagrange multipliers; in $\tilde{\cal L}$
\begin{eqnarray}
u & = & \frac{1}{\sqrt{-g} g^{00}} = \frac{1}{\gamma^{00}}
\nonumber \\
v & = & \frac{g^{01}}{g^{00}} = \frac{\gamma^{01}}{\gamma^{00}}
\label{eq:18}
\end{eqnarray}
enforce vanishing ${\cal E}$ and ${\cal P}$. It is seen that only two
of the three independent components in $g_{\mu\nu}$ are present:
$\sigma={\ln} \sqrt{-g}$ does not occur in ${\cal L}$ or $\tilde{\cal L}$,
which depend only on $\gamma^{\mu\nu}$ --- this is of course a
manifestation of Weyl invariance.
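The equivalence of (\ref{eq:15}) and (\ref{eq:16}) after eliminating $\Pi$ can be checked symbolically. The sketch below uses $\gamma^{00}=1/u$, $\gamma^{01}=v/u$ from (\ref{eq:18}) and fixes $\gamma^{11}=(v^2-u^2)/u$ from $\det\gamma^{\mu\nu}=-1$:

```python
import sympy as sp

phidot, phip, Pi, v = sp.symbols('phidot phip Pi v', real=True)
u = sp.Symbol('u', positive=True)

E = Pi**2 / 2 + phip**2 / 2           # energy density (17a)
P = -phip * Pi                        # momentum density (17b)
Ltilde = Pi * phidot - u * E - v * P  # first-order Lagrangian (16)

# Eliminate Pi by its equation of motion: Pi = (phidot + v*phip)/u
Pi_sol = sp.solve(sp.diff(Ltilde, Pi), Pi)[0]
L_reduced = sp.simplify(Ltilde.subs(Pi, Pi_sol))

# Second-order Lagrangian (15) with gamma^{00} = 1/u, gamma^{01} = v/u and
# gamma^{11} = (v**2 - u**2)/u fixed by det(gamma^{mu nu}) = -1
L_direct = sp.Rational(1, 2) * ((1/u) * phidot**2
                                + 2 * (v/u) * phidot * phip
                                + ((v**2 - u**2)/u) * phip**2)

assert sp.simplify(L_reduced - L_direct) == 0
```

Both routes give $(\dot\phi+v\phi')^2/2u - u\phi'^2/2$, and $\sigma$ never appears, as the text asserts.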
In spite of the absence of $\sigma$ in the classical theory,
Polyakov's quantum effective action (\ref{eq:1}) carries a
$\sigma$-dependence. The breaking of Weyl symmetry arises when one
evaluates the functional determinant that leads to the effective
action; {\it viz.}~$-\frac{1}{2} {\ln} \det K$, where $K$ is the
kernel present in the classical action.
\begin{equation}
K(x,y) = - \frac{\partial}{\partial x^\mu} \gamma^{\mu\nu}
(x) \frac{\partial}{\partial x^\nu} \delta^2(x-y)
\label{eq:19}
\end{equation}
Formally the determinant is given by the product of K's eigenvalues,
$\det K = \Pi_\lambda \lambda$, but it still remains to formulate the
eigenvalue problem. The diffeomorphism invariant definition
recognizes that $K$ is a density, so eigenvalues are defined by
\begin{mathletters}
\begin{eqnarray}
\int K \Psi^P_\lambda
&=& \sqrt{-g} \lambda \Psi_\lambda^P
\nonumber \\
- g^{\alpha\beta} D_\alpha D_\beta \Psi_\lambda^P
&=& \lambda \Psi_\lambda^P
\label{eq:20a}
\end{eqnarray}
and the inner product involves an invariant measure
\begin{equation}
\left<\lambda_1\mid\lambda_2\right>^P = \int \sqrt{-g}
\Psi_{\lambda_1}^{P*} \Psi^P_{\lambda_2}
\label{eq:20b}
\end{equation}
\label{eq:20ab}
\end{mathletters}
In this way $\sigma = {\ln}\sqrt{-g}$ enters the calculation.
However, one may say that it is peculiar to introduce into the
determination of eigenvalues a variable that is not otherwise present
in the problem. (Below we shall also argue that it is unnatural to
insist on diffeomorphism invariance.)
As an alternative to (\ref{eq:20ab}) one may define eigenvalues without
inserting~$\sigma$,
\begin{mathletters}
\begin{equation}
\int K\Psi_\lambda = \lambda\Psi_\lambda
\label{eq:21a}
\end{equation}
and use a $\sigma$-independent inner product.
\begin{equation}
\left<\lambda_1\mid\lambda_2\right> = \int \Psi^*_{\lambda_1}
\Psi_{\lambda_2}
\label{eq:21b}
\end{equation}
\label{eq:21ab}
\end{mathletters}
It follows that the effective action will be as in (\ref{eq:1}), with
$\sigma$ set to zero.
\begin{equation}
\Gamma(\gamma^{\mu\nu})=\frac{1}{96\pi}\int d^2 x d^2 y
{\cal R}(x) K^{-1}(x,y)
{\cal R}(y)
\label{eq:22}
\end{equation}
Here ${\cal R}$ is the scalar curvature computed with $\gamma^{\mu\nu}
(\gamma_{\mu\nu})$ as the contravariant (covariant) metric tensor.
From (\ref{eq:12}) -- (\ref{eq:14}) we have
\begin{eqnarray}
{\cal R} & = & \partial_\mu {\cal R}^\mu
\label{eq:23} \\
{\cal R}^\mu & = & \partial_\nu \gamma^{\mu\nu} - \epsilon^{\mu\nu}
({\cosh}\beta-1) \partial_\nu \alpha
\label{eq:24}
\end{eqnarray}
Evidently $\Gamma$ is a functional solely of $\gamma^{\mu\nu}$; since
it does not depend on $\sigma$ it is Weyl invariant, leading to a
traceless energy-momentum tensor as in~(\ref{eq:10}).
Of course the definitions (\ref{eq:21ab}) do not respect
diffeomorphism invariance; however they are invariant against
S-diffeomorphisms. Consequently $\Gamma$ also is $S$-diffeomorphism
invariant. With the help of (\ref{eq:14}), (\ref{eq:22}) and
(\ref{eq:24}) we can exhibit the relation between $\Gamma^P$ and
$\Gamma$. Using (\ref{eq:12}) and (\ref{eq:14}) to evaluate
(\ref{eq:1}), and integrating by parts the terms involving $\sigma$ to
remove the non-local kernel $K^{-1}$, leaves
\begin{equation}
\Gamma^P (g^{\mu\nu}) = \frac{1}{96\pi} \int d^2 x \partial_\mu \sigma
\gamma^{\mu\nu} \partial_\nu \sigma + \frac{1}{48\pi} \int d^2 x
\partial_\mu \sigma {\cal R}^\mu + \Gamma (\gamma^{\mu\nu})
\label{eq:25}
\end{equation}
Thus the diffeomorphism invariance restoring terms, present in
$\Gamma^P$, add to $\Gamma$ a local expression, which is a quadratic
polynomial in~$\sigma$. The locality of $\Gamma^P - \Gamma$
highlights its arbitrariness, but $\Gamma$ has the advantage of not
involving quantities extraneous to the problem.
[Formula~(\ref{eq:25}) may also be presented as $\Gamma^P (g^{\mu \nu}) =
\frac{1}{96 \pi} \int (\sigma - K^{-1} {\cal R}) K (\sigma - K^{-1}
{\cal R})\/$.]
Infinitesimal coordinate transformations make use of two arbitrary
functions $f^\mu\/$, see (\ref{eq:14b}). $S$-diffeomorphisms possess
unit Jacobian, so infinitesimally $\partial_\mu f^\mu = 0$;
consequently only one function survives.
\begin{equation}
\delta_{SD} x^\mu = \epsilon^{\mu\nu} \partial_\nu f(x)
\label{eq:27}
\end{equation}
Since Weyl transformations
\begin{equation}
g_{\mu\nu} \rightarrow e^W g_{\mu\nu}
\label{eq:28}
\end{equation}
also make use of a single function, replacing diffeomorphism
invariance, involving two arbitrary functions $f^\mu$, by Weyl and
$S$-diffeomorphism invariance still leaves two arbitrary functions,
$f$ and $W$. Indeed, similar to diffeomorphism invariance, the
combination of Weyl and $S$-diffeomorphism invariance can be used to
reduce a generic metric tensor, containing three functions, to a
single arbitrary function.
In particular by using their respective symmetries, we can bring
$\Gamma^P(g^{\mu\nu})$ and $\Gamma (\gamma^{\mu\nu})$ into equality.
Diffeomorphism invariance allows placing $g_{\mu\nu}$ into the
light-cone gauge, where $g_{--} = 0, \, g_{+-} = 1$ and $g_{++}$ is
the arbitrary function $h_{++}$ \cite{ref:1}. Correspondingly, with
$S$-diffeomorphism invariance we can set to zero the $(--)$ component
in $\gamma_{\mu\nu}$ and the $(+-)$ component to unity. This is
achieved by passing from the original variables $\left\{x^\mu \right\}$
and metric function $\gamma_{\mu\nu}(x)$ to new quantities
$\left\{\tilde{x}^\mu \right\}$ and $\tilde{\gamma}_{\mu\nu}
(\tilde{x})$, where
\begin{mathletters}
\begin{eqnarray}
\frac{\partial x^+}{\partial \tilde{x}^-}
& = & - \frac{\gamma_{--}}{\gamma_{+-}\pm 1}
\label{eq:29a} \\
\frac{\partial x^+}{\partial \tilde{x}^+}
& = & \frac{-\gamma_{--}}{\gamma_{+-}\pm 1}
\frac{\partial x^-}{\partial \tilde{x}^-} +
\frac{c}{\partial x^{-} / \partial \tilde{x}^- }
\nonumber
\end{eqnarray}
Either sign may be taken in $\gamma_{+-} \pm 1$ and $c^2=1$.
One then finds
\begin{eqnarray}
\tilde{\gamma}_{--}
& = & 0,
\qquad\qquad
\tilde{\gamma}_{+-} = 1
\nonumber \\
\tilde{\gamma}_{++} (\tilde{x})
& = & \frac{\gamma_{++}(x)}
{(\partial x^-/\partial \tilde{x}^-)^2} +
2c \, \frac{\partial x^{-} / \partial \tilde{x}^{+}}
{\partial x^{-} / \partial \tilde{x}^{-}}
\label{eq:29b}
\end{eqnarray}
\label{eq:29ab}
\end{mathletters}
Upon identification of $\tilde{\gamma}_{++}$ with $h_{++}$,
$\Gamma^P = \Gamma$ in the selected gauge.
Under an infinitesimal diffeomorphism
\begin{equation}
\delta_D\Gamma = \int d^2 x \sqrt{-g} f^\alpha D_\mu
\Theta^\mu_{\;\; \alpha}
\label{eq:31}
\end{equation}
so it follows from (\ref{eq:27}) that for $S$-diffeomorphisms
\begin{equation}
\delta_{SD}\Gamma = \int d^2 x f \epsilon^{\alpha\beta}
\partial_\beta \left(\sqrt{-g} D_\mu\Theta^\mu_{\;\;\alpha}\right)
\label{eq:32}
\end{equation}
and invariance is equivalent to vanishing of the integrand. But
$\sqrt{-g} D_\mu \Theta^\mu_{\;\;\alpha} = \break
\partial_\mu(\sqrt{-g}g^{\mu\nu}\Theta_{\nu\alpha}) + \frac{\sqrt{-g}}{2}
\partial_\alpha g^{\mu\nu}\Theta_{\mu\nu}$, which for traceless
$\Theta_{\mu\nu}$ may be written as
$\partial_\mu (\gamma^{\mu\nu}\Theta_{\nu\alpha}) + \frac{1}{2} \partial_\alpha
\gamma^{\mu\nu} \Theta_{\mu\nu} = d_\mu (\gamma^{\mu\nu}\Theta_{\nu\alpha})$,
where $d_\mu$ is a covariant derivative constructed from $\gamma_{\mu\nu}$.
Consequently, the restriction given by $S$-diffeomorphism invariance can
be presented in a $S$-diffeomorphism invariant way as
\begin{mathletters}
\begin{equation}
d_\mu d_\nu \left(\epsilon^{\mu\alpha}\Theta_{\alpha\beta}\gamma^{\beta\nu}
\right) = 0
\label{eq:33a}
\end{equation}
This implies that
\begin{equation}
d_\mu \left(\gamma^{\mu\nu}\Theta_{\nu\alpha}\right) = \partial_\alpha
\hbox{ (scalar)}
\label{eq:33b}
\end{equation}
\label{eq:33ab}
\end{mathletters}
which is the constraint on the divergence of the energy-momentum tensor
mentioned earlier.
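The divergence identity used above,
$\sqrt{-g}\, D_\mu \Theta^\mu_{\;\;\alpha} =
\partial_\mu(\sqrt{-g}g^{\mu\nu}\Theta_{\nu\alpha}) + \frac{\sqrt{-g}}{2}
\partial_\alpha g^{\mu\nu}\Theta_{\mu\nu}$, holds for any symmetric
$\Theta_{\mu\nu}$ and can be checked symbolically. The following sketch
(not part of the original derivation; it uses a generic two-dimensional
metric whose components are arbitrary functions) verifies it with sympy:

```python
import sympy as sp

x = sp.symbols('x0 x1')
# generic 2D metric g_{mu nu} and symmetric tensor Theta_{mu nu},
# all components arbitrary functions of the coordinates
g00, g01, g11, t00, t01, t11 = [sp.Function(n)(*x)
                                for n in ('g00', 'g01', 'g11',
                                          't00', 't01', 't11')]
g = sp.Matrix([[g00, g01], [g01, g11]])
ginv = g.inv()                      # exact 2x2 inverse
sqrtg = sp.sqrt(-g.det())           # sqrt(-g), Lorentzian signature
Th = sp.Matrix([[t00, t01], [t01, t11]])
ThUp = ginv * Th                    # Theta^mu_alpha, first index raised

def Gamma(l, m, n):
    # Christoffel symbol Gamma^l_{mn} built from g
    return sum(ginv[l, r] * (sp.diff(g[r, m], x[n]) + sp.diff(g[r, n], x[m])
               - sp.diff(g[m, n], x[r])) for r in range(2)) / 2

residuals = []
for a in range(2):
    # sqrt(-g) D_mu Theta^mu_alpha, via the density form of the divergence
    lhs = (sum(sp.diff(sqrtg * ThUp[m, a], x[m]) for m in range(2))
           - sqrtg * sum(Gamma(s, m, a) * ThUp[m, s]
                         for m in range(2) for s in range(2)))
    # partial_mu(sqrt(-g) g^{mu nu} Theta_{nu alpha})
    #   + (sqrt(-g)/2) partial_alpha g^{mu nu} Theta_{mu nu}
    rhs = (sum(sp.diff(sqrtg * sum(ginv[m, n] * Th[n, a] for n in range(2)),
                       x[m]) for m in range(2))
           + sqrtg / 2 * sum(sp.diff(ginv[m, n], x[a]) * Th[m, n]
                             for m in range(2) for n in range(2)))
    residuals.append(sp.simplify(lhs - rhs))
```

Both residuals simplify to zero, confirming the identity before the
tracelessness of $\Theta_{\mu\nu}$ is even invoked.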
Computing $\Theta_{\mu\nu}$ from (\ref{eq:25}) gives
\begin{equation}
\Theta_{\mu\nu} = -\frac{1}{48\pi}
\left(\partial_\mu \varphi \partial_\nu \varphi -
\frac{1}{2} \gamma_{\mu\nu}
\gamma^{\alpha\beta} \partial_\alpha \varphi \partial_\beta
\varphi \right)
- \frac{1}{24\pi}\left(d_\mu d_\nu \varphi - \frac{1}{2}
\gamma_{\mu\nu}
\gamma^{\alpha\beta} d_\alpha d_\beta \varphi \right)
\label{eq:35}
\end{equation}
where $\varphi$ satisfies [compare (\ref{eq:6}) and (\ref{eq:6b})]
\begin{equation}
\gamma^{\alpha\beta} d_\alpha d_\beta \varphi = \partial_\alpha
\gamma^{\alpha\beta}\partial_\beta \varphi = {\cal R}
\label{eq:35b}
\end{equation}
Clearly $\Theta_{\mu\nu}$ is traceless, and one readily verifies that
\begin{equation}
d_\mu \left(\gamma^{\mu\nu}\Theta_{\nu\alpha}\right) = \frac{1}{48\pi}
\partial_\alpha {\cal R}
\label{eq:36}
\end{equation}
[compare (\ref{eq:7})], which is consistent with (\ref{eq:33ab}).
Finally we remark that even under $S$-diffeomorphisms ${\cal R}^\mu\/$
does not transform as a vector. One finds from (\ref{eq:14}),
(\ref{eq:14c}) and (\ref{eq:24})
\begin{equation}
\delta_{\rm SD} {\cal R}^\mu
= L_f {\cal R}^\mu + \epsilon^{\mu \nu} \partial_\nu \Delta_f
\label{eq:36a}
\end{equation}
where the vector field $f^\mu\/$ is now $-\epsilon^{\mu \nu}
\partial_\nu f\/$; thus here
$\Delta_f = 2 \partial_{+} \partial_{-} f - \tanh \frac{\beta}{2}
\left( e^\alpha \partial^2_{-} + e^{-\alpha} \partial^2_{+} \right) f$.
Although selecting between Weyl and $S$-diffeomorphism invariance on
the one hand or conventional diffeomorphism invariance on the other
remains a matter of arbitrary choice, as is seen from the fact that
the effective actions for the two options differ by local terms, the
following observations should be made in favor of the former.
\section{}
Up to now, the gravitational field $g_{\mu \nu}\/$ was a passive,
background variable. Consider now the puzzles that arise when it is
dynamical; {\it i.e.}~$g_{\mu \nu}\/$ is varied. With a single Bose
field, it is immediately established that the classical theory does
not possess solutions. This is seen from the equation that follows
upon varying $g_{\mu\nu}$ in ${\cal L}$,
\begin{equation}
\partial_\mu\phi\partial_\nu\phi-\frac{1}{2} g_{\mu\nu}
g^{\alpha\beta}\partial_\alpha\phi\partial_\beta\phi = 0
\label{eq:37}
\end{equation}
which implies that
$g_{\mu\nu}\propto\partial_\mu\phi\partial_\nu\phi$, so that $g$
vanishes and $g^{\mu\nu}$ does not exist; alternatively $\phi$ must be
constant and $g_{\mu\nu}$ undetermined. If there are $N$ scalar
fields, whereupon the effective action acquires the factor $N$, the
above difficulty is avoided, because $g_{\mu\nu} \propto
\sum^{N}_{i=1} \partial_\mu\phi^i\partial_\nu\phi^i$ need not be
singular. Nevertheless (\ref{eq:37}) (with field bilinears replaced
by sums over the $N$ fields) requires the vanishing of positive
quantities $\sum^{N}_{i=1} \left\{\left(\gamma^{00}\dot{\phi}^i +
\gamma^{01}\phi'^i\right)^2 + \left(\phi'^i\right)^2 \right\}$ and
again only the trivial solution is allowed.
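The single-field rank argument above can be restated numerically: a
candidate metric proportional to the outer product of one gradient is a
rank-one $2\times 2$ matrix, so its determinant vanishes identically. A
minimal illustration (randomly sampled gradients, purely for checking
the linear-algebra step):

```python
import numpy as np

rng = np.random.default_rng(1)
dets = []
for _ in range(100):
    dphi = rng.normal(size=2)           # sample gradient (d_0 phi, d_1 phi)
    g_cand = np.outer(dphi, dphi)       # g_{mu nu} ~ d_mu phi d_nu phi, rank 1
    dets.append(abs(np.linalg.det(g_cand)))
max_det = max(dets)                     # zero up to rounding
```

Hence $g$ vanishes and $g^{\mu\nu}$ cannot be constructed, exactly as
stated in the text.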
The quantum theory in Hamiltonian formulation also appears
problematic, in that the constraints of vanishing ${\cal E}$ and
${\cal P}$ cannot be imposed on states. With one scalar field, the
momentum constraint requiring that $\phi'\Pi$ acting on states vanish
--- this is the spatial diffeomorphism constraint --- forces the state
functional in the Schr\"{o}dinger representation to have support only
for constant fields $\phi$. (Equivalently one observes that a spatial
diffeomorphism invariant functional cannot be constructed from a
single, $x$-dependent field.) With more than one field, this problem
is absent [a diffeomorphism invariant functional can involve $\int d x
\phi_1(x)\phi'_2(x)$] and the momentum constraint can be solved.
However, an obstruction remains to solving the energy constraint,
owing to the well-known Schwinger term (Virasoro anomaly) in the
$[{\cal E},{\cal P}]$ commutator, which gives a central extension that
interferes with closure of constraints: classical first-class
constraints become upon quantization second-class.
\begin{mathletters}
\begin{eqnarray}
i \left[{\cal E}(x), {\cal E}(y)\right]
& = & i \left[{\cal P}(x),{\cal P} (y)\right]
= \left({\cal P}(x)+{\cal P}(y)\right)\delta'(x-y)
\label{eq:38a} \\
i \left[{\cal E}(x), {\cal P}(y)\right]
& = & \left({\cal E}(x) + {\cal E} (y)\right)\delta'(x-y)
- \frac{N}{12\pi}\delta^{'''}(x-y)
\label{eq:38b}
\end{eqnarray}
\label{eq:38ab}
\end{mathletters}
Note that all the above troubles, both in the classical theory and in
the Dirac-quantized Hamiltonian theory, revolve around diffeomorphism
invariance, not Weyl invariance. Indeed the same troubles persist
for massive scalar fields, which are not Weyl invariant.
Thus when a quantum theory is constructed by a functional integral
(not by Hamiltonian/Dirac quantization) it is natural that it should
reflect problems with diffeomorphism invariance --- reducing it to
$S$-diffeomorphism invariance. Weyl invariance on the other hand
could survive quantization.
\acknowledgements \hfil\break\noindent
I thank S. Deser, V. P. Nair, W. Taylor, and B. Zwiebach for comments.
\section{Introduction}
\label{sec:intro}
A nonperturbative regularization of chiral gauge theory,
if it would exist, could offer a consistent framework for studying
the dynamics of the standard model, especially the
dynamics of spontaneous gauge symmetry breaking.
Lattice regularization has succeeded in playing such a role
in understanding QCD dynamics. For chiral theories,
however, it suffers from the species doubling
problem\cite{ksmit,nn,karsten,kieu}.
Recently new approaches by means of an infinite number
of Fermi fields have been proposed for such a
regularization\cite{kaplan,pvfs,cnn,olnn,latfs}.
A five-dimensional fermion has a chiral zero mode
when coupled to a domain wall\cite{callanharvey}.
Kaplan formulated such a system on a lattice with Wilson fermion
and discussed the possibility of simulating
chiral fermions\cite{kaplan,domainwall}.
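The domain-wall zero mode underlying this proposal can already be seen
in a one-dimensional toy discretization. The sketch below (illustrative
parameters and a crude open boundary, not the Wilson-fermion setup used
later in this paper) finds the near-null mode of
$a = \partial_s + m_0\,{\rm sgn}(s+\frac{1}{2})$ and checks that it is
exponentially bound to the wall at $s=0$:

```python
import numpy as np

# forward-difference discretization of a = d/ds + m(s), with the
# kink-like mass m(s) = m0 * sgn(s + 1/2); illustrative parameters
N, m0 = 201, 0.5
s = np.arange(N) - N // 2              # sites s = -100 .. 100
m = m0 * np.sign(s + 0.5)

A = np.zeros((N, N))
for i in range(N):
    A[i, i] = -1.0 + m[i]              # -psi_i + m_i psi_i
    if i + 1 < N:
        A[i, i + 1] = 1.0              # + psi_{i+1}; open boundary at the end

u, sv, vt = np.linalg.svd(A)
mode = np.abs(vt[-1])                  # singular vector of smallest value
peak = int(np.argmax(mode))            # localization site of the mode
```

The smallest singular value is zero to machine accuracy and the mode
peaks at the kink, reproducing the continuum profile
$\psi(s)\propto e^{-m_0|s|}$ of Callan and Harvey.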
On the other hand, Frolov and Slavnov
considered the possibility of
regulating chiral fermion loops
gauge-invariantly in the $SO(10)$ chiral gauge theory
with an infinite number of the
Pauli-Villars-Gupta fields\cite{pvfs,ak,genpv}.
A unified point of view on these two approaches was given
by Neuberger and Narayanan\cite{cnn}
and they have put forward the approach
to derive a lattice vacuum overlap formula
for the determinant of a chiral fermion\cite{olnn}.
They first discarded the five-dimensional nature
of the gauge boson in Kaplan's lattice setup.
Then the five-dimensional fermion can be seen as a collection of
infinitely many four-dimensional fermions labeled by the extra
coordinate.
They regarded the massive Dirac modes as
regulator fields for the chiral mode
(these correspond to the fermionic Pauli-Villars-Gupta fields in
the method of Frolov and Slavnov)
and gave a prescription to subtract
irrelevant bulk effects of the massive modes by
ordinary fermions with homogeneous masses
(these correspond to the bosonic Pauli-Villars-Gupta fields).
They emphasized the importance of an infinite extent of the
extra space to ensure the chiral content of the fermions;
on a lattice this space is usually compactified with a periodic boundary
condition.
As a result of the infiniteness of the extra space,
they obtained a vacuum overlap formula for the determinant of
lattice chiral fermion, using transfer matrices in the direction of
the extra space.
By means of the infinite number of the Pauli-Villars-Gupta fields,
however,
the odd-parity part can never be regularized, because
the regulator fields are {\it Dirac} spinors\cite{fujikawa}.
Even for the even-parity part,
there exists an ambiguity in the summation over
the infinite number of contributions.
In order to make the summation well-defined,
and at the same time to make the number of regulator fields
finite at the first stage,
each contribution of the original or the regulator field
should be made finite by a certain subsidiary regularization.
Such subsidiary regularization necessarily breaks the gauge
invariance in the contribution of the original chiral fermion.
This may well lead to a gauge-noninvariant result even in the limit of
the infinite number of the regulator fields.
The dimensional
regularization is an example
of such a subsidiary regularization\cite{latfs}.
In the formulation of the vacuum overlap,
the problem of the odd-parity part is reflected in that
we must fix the phase of the overlap by a certain guiding
principle to reproduce the properties of the chiral determinant
by this formula, especially anomaly.
Fixing the phase of the overlap is
equivalent to the choice of the wave functions
of the boundary states.
By this choice, we must carefully place a source of
gauge noninvariance at the boundaries to reproduce
the consistent anomaly.
Neuberger and Narayanan\cite{olnn}
proposed to fix the phase following
the Wigner-Brillouin perturbation theory,
referring to the ground state of the {\it free} Hamiltonian.
By this prescription, the continuum two-dimensional overlap
was examined and it was shown to reproduce the correct
{\it consistent} anomaly.
The lattice two- and four-dimensional overlaps were also
examined numerically and the correct anomaly coefficients
were observed for the Abelian background gauge field.
Their behavior under topologically nontrivial
background gauge fields was also examined and
a promising result was obtained.
They also showed the gauge invariance of the even-parity part
at the nonperturbative level. Their discussion is based on
the case of the periodic boundary condition in the fifth direction
and also on the gauge transformation property of the
ground states of the four dimensional Hamiltonians.
On the other hand, from the viewpoint of perturbation theory
with the five- (or three-) dimensional Wilson fermion,
several authors have discussed this problem.
Shamir\cite{shamir} considered the fermion in the infinite extent
of the fifth dimension but restricted first
the interaction with the gauge field into the finite region of
the fifth space.
By this restriction, the model becomes gauge noninvariant
at the boundaries of the finite region.
Then he examined how to take the limit of the infinite extent.
He claimed that the limit should be taken uniformly at
every interaction vertex with gauge boson and then
showed that this way to introduce the gauge noninvariance
leads to the correct consistent anomaly in two-dimensional model.
In a similar approach,
S. Aoki and R.B. Levien\cite{al} performed a detailed study
about the infinite extent of the extra dimension
and the subtraction procedure in the lattice two-dimensional
chiral Schwinger model.
They showed that the scheme reproduces the desired form of
the effective action: the gauge invariant real part and the
consistent anomaly from the imaginary part.
Four-dimensional perturbative study of the vacuum overlap formula
has been recently in progress.
S. Randjbar-Daemi and J. Strathdee obtained the consistent anomaly
by the four-dimensional Hamiltonian perturbation theory
in the continuum limit\cite{rs}.
Quite recently, they performed the similar analysis in the lattice
regularization\cite{lrs}.
From the viewpoint of the theory with infinitely many regulator fields,
it is also desirable to understand how the gauge noninvariance placed
at the boundaries leads to the consistent anomaly
and how the gauge invariance of the even-parity part of the
determinant is established by the subtraction.
In this article, toward this goal,
we further examine the four-dimensional aspect of the vacuum overlap
formula in the continuum limit.
Our approach is as follows.
We consider the nonabelian background gauge field in general.
We start from finite extent of the fifth direction
in order to make the summation unambiguous.
We first develop the theory of the free five-dimensional fermion
with kink-like mass and positive (negative) homogeneous mass
{\it in the finite extent of fifth direction}.
As to the boundary condition in the fifth direction,
we adopt the one derived from the Wilson fermion,
by which the Dirichlet and Neumann components are determined
by the chiral projection.
{}From this free theory,
the boundary state wave functions which correspond to
the Wigner-Brillouin phase choice are explicitly constructed.
Then we formulate the perturbation expansion of
the vacuum overlap formula in terms of the propagator at
the finite extent of the fifth dimension
satisfying the boundary condition.
After performing the subtraction at the finite extent of the
fifth dimension, we examine the limit of the infinite extent.
As to a subsidiary regularization,
we adopt the dimensional regularization.
It turns out that the dimensional regularization cannot respect
the boundary condition determined by the chiral projection.
Although this fact limits the scope of our analysis in the
continuum limit, we believe that we can make clear
in what way the vacuum overlap formula could give
the perturbative properties of the chiral determinant
in four dimensions.
We also make another technical assumption that
the dimensional regularization preserves the cluster
property.
By this perturbation theory,
we calculate the variation of the vacuum overlap
under the gauge transformation which is
induced by the boundary state wave function.
We also calculate the two-point function of the external
gauge boson.
The variation is found to be finite and does not suffer from the subtlety of
the dimensional regularization.
It reproduces the consistent anomaly in four dimensions correctly.
We also observe how the chiral normalization of the vacuum
polarization is realized in the finite extent of the fifth direction.
We, however, fail to establish its gauge invariance
due to the dimensional regularization.
This article is organized as follows.
In Sec. \ref{sec:would-be-vacuum-overlap},
we discuss
the lattice Schr\"odinger functional\cite{luscher,sint} to describe
the evolution of the boundary state during a finite ``time''
interval in the fifth direction.
It is naturally formulated from the transfer matrix given
by Neuberger and Narayanan.
The boundary condition of the fermion field is read off
from the Wilson fermion action and the boundary term is
derived.
In Sec. \ref{sec:fermion-at-finite-fifth-volume},
we formulate the free theory of
the five-dimensional fermion in the finite fifth space volume.
We solve the field equation and obtain the complete set of
solutions. The field operator is defined by the mode expansion
and the propagator is derived.
The Sommerfeld-Watson transformation is introduced, by which
we rearrange the normal modes of the fifth momentum
to be common among the fermions with the kink-like
mass and the positive (negative) homogeneous mass.
This makes it possible to do the subtraction at the finite extent
of the fifth dimension.
In Sec.
\ref{sec:perturbation-at-finite-extent-of-fifth-dimension},
the perturbation theory for the vacuum overlap is developed.
We first derive boundary state wave functions.
Then we discuss the cluster property of the contribution induced by
the boundary state wave function.
By this cluster property,
the boundary contribution turns out to be odd-parity in the
limit of the infinite extent.
In Sec. \ref{sec:anomaly-from-boundary},
we perform the calculation of the anomaly induced by
the boundary state wave function.
In Sec. \ref{sec:vacuum-polarization},
we also perform the calculation of the vacuum polarization.
Section \ref{sec:discussion} is devoted to summary and discussion.
\newpage
\section{
Would-be vacuum overlap}
\label{sec:would-be-vacuum-overlap}
In this section,
we consider
the five-dimensional Wilson fermion with kink-like mass and
its finite ``time'' evolution in the fifth direction.
In order to describe it,
we introduce the Schr\"odinger functional, which is naturally
formulated by the transfer matrix given by Neuberger and Narayanan.
Through its path-integral representation,
we read off the boundary condition imposed on the fermionic
field and also derive the boundary terms.
We also rewrite the functional in the form factorized
into the determinant of the Dirac operator
over the five-dimensional volume under the derived boundary condition
and the contribution from the boundary terms.
Next we will discuss how to prepare the boundary state wave functions which
implement the Wigner-Brillouin phase choice.
With these wave functions, we give the expression of the ``would-be''
vacuum overlap at finite extent of the fifth dimension.
We also discuss its variation under the gauge transformation.
Finally, we derive the counterpart
{\it in the continuum limit and in the Minkowski space}.
We also specify the regularization in the continuum limit analysis.
\subsection{Boundary condition in the fifth direction
and boundary terms}
\label{subsec:bc-fifth}
The action of the five-dimensional Wilson fermion with kink-like mass
is given by
\begin{eqnarray}
\label{lattice-action}
A
&=&
\sum_{n,s} \left\{
\hskip 4pt
\sum_\mu \frac{1}{2}
\left[ \bar\psi(n,s) (1+\gamma_\mu) U_\mu(n) \psi(n+\hat\mu,s)
+\bar\psi(n+\hat\mu,s)(1-\gamma_\mu) U^\dagger_\mu(n) \psi(n,s)
\right]
\right.
\nonumber\\
&&\hskip 38pt +
\frac{1}{2}
\left[ \bar\psi(n,s) (1+\gamma_5) \psi(n,s+1)
+\bar\psi(n,s+1) (1-\gamma_5) \psi(n,s) \right]
\nonumber\\
&&\hskip 38pt
\left.
+ \Big( m_0 \, sgn(s+\frac{1}{2}) -5 \Big) \bar\psi(n,s) \psi(n,s)
\right\}
\, .
\end{eqnarray}
Here we are considering SU(N) background gauge field in general.
The transfer matrix formulation for it was first
given by Neuberger and Narayanan\cite{olnn}.
Let us assume that
the Fock space is spanned by the operators
$\hat c_{\alpha i}(n)$ and $\hat d_{\alpha i}(n)$
satisfying the following commutation relations,
\begin{equation}
\label{commutation-relation}
\{ \hat c_{\alpha i}(n), \hat c_{\beta}^{\dagger j}(m) \}
= \delta_{nm} \delta_{\alpha\beta} \delta_i^j
\, , \hskip 16pt
\{ \hat d_{\alpha i}(n), \hat d_{\beta }^{\dagger j}(m) \}
= \delta_{nm} \delta_{\alpha\beta} \delta_i^j
\, ,
\end{equation}
\begin{equation}
\label{Fock-vacuum}
\hat c_{\alpha i}(n) \ket{0}=0 \, , \hskip 16pt
\hat d_{\alpha j}(n) \ket{0}=0 \, .
\end{equation}
Note that $\alpha,\beta$ denote spinor indices
and $i,j$ denote indices of the representation
of the SU(N) gauge group.
Then the transfer matrix is given in terms of
$\hat a = ( \hat c , \hat d^\dagger )^t $ and
$\hat a^\dagger = ( \hat c^\dagger , \hat d )$ as,
\begin{equation}
\label{Transfer-matrix}
\hat T_\pm = \exp \left( \hat a^\dagger H_\pm \hat a \right)
\, ,
\end{equation}
with the matrix
\begin{eqnarray}
\label{hamiltonian-matrix}
\exp\left( H_\pm \right)
&\equiv&
\left(\begin{array}{cc}
\frac{1}{B^\pm} & \frac{1}{B^\pm} C \\
\frac{1}{B^\pm}C^\dagger & C^\dagger \frac{1}{B^\pm} C + B^\pm
\end{array}\right)
\, ,
\end{eqnarray}
where
\begin{eqnarray}
\label{Bfunction}
B(n,m,s)
&=&
\left(5 - m_0 \, sgn(s+\frac{1}{2}) \right) \delta_{n,m} \delta_i^j
-\frac{1}{2} \sum_\mu \left(
{U_\mu(n)}_i^j \delta_{n+\hat\mu,m}
+{U^\dagger_\mu(m)}_i^j \delta_{n,m+\hat\mu} \right)
\, ,
\\
C(n,m)
&=& \frac{1}{2} \sum_\mu {\sigma_\mu}_{\alpha \beta}
\left( {U_\mu(n)}_i^j \delta_{n+\hat\mu,m}
-{U^\dagger_\mu(m)}_i^j \delta_{n,m+\hat\mu}
\right)
\equiv \sum_\mu {\sigma_\mu}_{\alpha \beta} \nabla_\mu(n,m)
\, ,
\end{eqnarray}
and $\sigma_\mu \equiv (1,i\sigma_i)$.
$B(n,m)$ can be shown to be positive definite for $ 0 < m_0 < 1$.
We also introduce the operator
\begin{equation}
\label{D-operator}
\hat D_\pm = \exp \left( \hat a^\dagger Q_\pm \hat a \right)
\, ,
\end{equation}
with
\begin{eqnarray}
\label{D-matrix}
\exp\left( Q_\pm \right)
&=&
\left(\begin{array}{cc}
\frac{1}{\sqrt{B^\pm}}
&\frac{1}{\sqrt{B^\pm}}C \\
0& \sqrt{B^\pm}
\end{array}\right)
\, ,
\end{eqnarray}
and we can show
\begin{equation}
\label{hamiltonian-composition}
\exp\left( H_\pm \right)=
\exp\left( Q_\pm \right) ^\dagger \exp\left( Q_\pm \right) \, ,
\hskip 24pt
\hat T_\pm = \hat D_\pm^\dagger \hat D_\pm
\, .
\end{equation}
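The factorization $\exp(H_\pm) = \exp(Q_\pm)^\dagger \exp(Q_\pm)$, i.e.
$\hat T_\pm = \hat D_\pm^\dagger \hat D_\pm$, is a purely algebraic
matrix identity given a Hermitian positive definite $B$ and an arbitrary
$C$. The sketch below checks it with random stand-ins for the lattice
operators; we read the operator products in the Hermitian ordering
($C^\dagger B^{-1}$ in the lower-left block, $C^\dagger B^{-1} C$ in the
lower-right), which is an interpretation of the notation above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# random stand-ins: B Hermitian positive definite (as B^{pm} is for
# 0 < m_0 < 1), C an arbitrary complex matrix (for sigma . nabla)
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = M @ M.conj().T + n * np.eye(n)
C = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

w, V = np.linalg.eigh(B)
Bh = V @ np.diag(np.sqrt(w)) @ V.conj().T      # B^{1/2}
Bhinv = np.linalg.inv(Bh)                      # B^{-1/2}
Binv = np.linalg.inv(B)

expH = np.block([[Binv,               Binv @ C],
                 [C.conj().T @ Binv,  C.conj().T @ Binv @ C + B]])
expQ = np.block([[Bhinv,             Bhinv @ C],
                 [np.zeros((n, n)),  Bh]])

err = np.abs(expQ.conj().T @ expQ - expH).max()   # zero up to rounding
```

Since $\exp(Q_\pm)$ is block upper triangular, this also exhibits
$\hat T_\pm$ as manifestly positive, as used in the overlap construction.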
We start from a finite ``time'' evolution in the fifth direction.
We take the symmetric region $s \in [{\scriptstyle -L-1}, {\scriptstyle L}]$. The evolution
can be described by the Schr\"odinger kernel\cite{luscher,sint}
\begin{equation}
\label{Schrodinger-Kernel}
\bra{ c^\ast_{{\scriptstyle -L-1}},d^\ast_{{\scriptstyle -L-1}} }
D_- \left(T_-\right)^{\scriptstyle L} \left( T_+\right)^{\scriptstyle L} D_+^\dagger
\ket{ c^{ }_{\scriptstyle L},d^{ }_{\scriptstyle L} }
\, ,
\end{equation}
in the coherent state basis,
\begin{eqnarray}
\label{coherent-state}
\ket{ c,d } &=& \exp [ -(c,{\hat c}^\dagger)-(d,{\hat d}^\dagger) ]
\ket{0} \, ,\\
\bra{ c^\ast,d^\ast } &=& \bra{0} \exp [ -({\hat c},c^\ast)-({\hat d},d^\ast) ]
\, .
\end{eqnarray}
Here we are following the notation in Ref. \cite{olnn}:
$(\bar a_s,b_s) \equiv
\sum_{n,\alpha,i} \bar a_\alpha^i(n,s) b_{\alpha i}(n,s)$.
In terms of the path integral, it reads
\begin{eqnarray}
\label{Schrodinger-Kernel-PathInt}
&&
\bra{ c^\ast_{{\scriptstyle -L-1}},d^\ast_{{\scriptstyle -L-1}} }
D_- \left(T_-\right)^{\scriptstyle L} \left( T_+\right)^{\scriptstyle L} D_+^\dagger
\ket{ c_{\scriptstyle L},d_{\scriptstyle L} }
\prod_{0\leq s \leq {\scriptstyle L-1}}(\det B_+)^{2}
\prod_{{\scriptstyle -L} \leq s \leq -1}(\det B_-)^{2}
\nonumber\\
&&
=
\int \prod_{{\scriptstyle -L} \leq s \leq {\scriptstyle L-1}}
[{\cal D} \psi_s {\cal D} \hat\psi_s]
\exp\left\{ -A[{\scriptstyle -L-1},{\scriptstyle L}] \right\}
\nonumber\\
&&
\equiv
Z[\psi_L({\scriptstyle -L-1}),\bar\psi_L({\scriptstyle -L-1});\psi_R({\scriptstyle L}),\bar\psi_R({\scriptstyle L})]
\, .
\end{eqnarray}
The boundary variables are given by
\begin{eqnarray}
\label{boundary-variable}
\psi_R (n,{\scriptstyle L})&=& \frac{1}{\sqrt{B_+}}
\left( \begin{array}{c} c(n,{\scriptstyle L}) \\ 0 \end{array}\right)
\, ,
\hskip 16pt
\bar \psi_R(n,{\scriptstyle L})
= \left( \begin{array}{cc} 0 & -b(n,{\scriptstyle L}) \end{array}\right)
\frac{1}{\sqrt{B_+}}
\, ,
\\
\psi_L(n,{\scriptstyle -L-1} ) &=& \frac{1}{\sqrt{B_-}}
\left( \begin{array}{c} 0 \\ b^\ast(n,{\scriptstyle -L-1}) \end{array}\right)
\, ,
\hskip 16pt
\bar \psi_L(n,{\scriptstyle -L-1}) =
\left( \begin{array}{cc} c^\ast(n,{\scriptstyle -L-1}) & 0 \end{array}\right)
\frac{1}{\sqrt{B_-}}
\, .
\end{eqnarray}
The action and the boundary terms are given by
\begin{equation}
\label{lattice-action-with-boundaryterm}
A[{\scriptstyle -L-1},{\scriptstyle L}]= A+A^B_L+A^B_R
\, ,
\end{equation}
\begin{eqnarray}
\label{lattice-action-chiral-bc}
A
&& \equiv
\sum_{n}\sum_{s={\scriptstyle -L}}^{{\scriptstyle L-1}}
\left\{ \hskip 2pt
\sum_\mu
\bar\psi(n,s) \gamma_\mu \nabla_\mu \psi(n,s) -\bar\psi(n,s) B(n,m) \psi(n,s)
\right.
\nonumber\\
&&\left.
+\frac{1}{2}
\left[ \bar\psi(n,s) (1+\gamma_5) \psi(n,s+1)
+\bar\psi(n,s+1) (1-\gamma_5) \psi(n,s) \right]
\right\}
\Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\, ,\\
\label{left-boundary-term}
A^B_L
&&\equiv
\sum_{n} \left\{
\bar\psi(n,{\scriptstyle -L}) P_L \psi(n,{\scriptstyle -L-1})
+\bar\psi(n,{\scriptstyle -L-1}) P_R \psi(n,{\scriptstyle -L})
\right\}
\nonumber\\
&&\hskip 16pt
+
\sum_{n} \sum_\mu
\bar\psi(n,{\scriptstyle -L-1}) \gamma_\mu \nabla_\mu P_L \psi(n,{\scriptstyle -L-1}) \, ,
\\
\label{right-boundary-term}
A^B_R
&&\equiv
\sum_{n} \left\{
\bar\psi(n,{\scriptstyle L}) P_L \psi(n,{\scriptstyle L-1})
+\bar\psi(n,{\scriptstyle L-1}) P_R \psi(n,{\scriptstyle L})
\right\}
\nonumber\\
&&\hskip 16pt
+
\sum_{n} \sum_\mu
\bar\psi(n,{\scriptstyle L}) \gamma_\mu \nabla_\mu P_R \psi(n,{\scriptstyle L}) \, ,
\end{eqnarray}
where $\Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]}$
stands for the homogeneous boundary condition:
\begin{equation}
\label{chiral-boundary-condition}
P_R \psi(n,{\scriptstyle L}) = P_L \psi(n,{\scriptstyle -L-1})= 0 \, , \hskip 16pt
\bar \psi(n,{\scriptstyle L}) P_L = \bar \psi(n,{\scriptstyle -L-1}) P_R = 0 \, .
\end{equation}
We refer to this boundary condition as the {\it chiral boundary condition}
hereafter.
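The chiral boundary condition~(\ref{chiral-boundary-condition}) rests on
the projector algebra of $P_{R,L} = (1 \pm \gamma_5)/2$: each projector
fixes exactly half of the spinor components at a boundary (Dirichlet)
and leaves the complementary half free. A minimal check, with
$\gamma_5 = {\rm diag}(1,1,-1,-1)$ in an assumed chiral basis:

```python
import numpy as np

g5 = np.diag([1.0, 1.0, -1.0, -1.0])    # gamma_5 in a chiral basis (assumed)
I4 = np.eye(4)
PR = (I4 + g5) / 2                      # right-handed projector P_R
PL = (I4 - g5) / 2                      # left-handed projector P_L

# projector algebra underlying the chiral boundary condition
assert np.allclose(PR @ PR, PR) and np.allclose(PL @ PL, PL)
assert np.allclose(PR @ PL, np.zeros((4, 4)))
assert np.allclose(PR + PL, I4)

# P_R psi(L) = 0 constrains only the upper (right-handed) components;
# the P_L half of psi(L) remains an unconstrained boundary variable
psi = np.arange(1.0, 5.0)               # sample boundary spinor
constrained = PR @ psi                  # components set to zero at s = L
free = PL @ psi                         # components left free
assert np.allclose(constrained + free, psi)
```

This is the split into Dirichlet and Neumann components determined by the
chiral projection that was referred to in the Introduction.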
Since the boundary terms
depending on $\psi_R({\scriptstyle L})$, $\bar \psi_R({\scriptstyle L})$ and
$\psi_L({\scriptstyle -L-1})$, $\bar \psi_L({\scriptstyle -L-1})$ can be regarded as
source terms for the system with the homogeneous
boundary condition,
we obtain the following factorized form:
\begin{eqnarray}
\label{factorized-Kernel}
Z[\psi_L({\scriptstyle -L-1}),\bar\psi_L({\scriptstyle -L-1});\psi_R({\scriptstyle L}),\bar\psi_R({\scriptstyle L})]
&&
= \det \left( K \right)
\Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\exp\left\{ - \left( \bar \Psi, M \Psi \right) \right\}
\, ,
\end{eqnarray}
where
$\Psi(n)=( \psi(n,{\scriptstyle -L-1}), \psi(n,{\scriptstyle L}) )^t $ and
$\bar \Psi(n)=( \bar \psi(n,{\scriptstyle -L-1}), \bar \psi(n,{\scriptstyle L}) )$, and
\begin{eqnarray}
\label{Kinetic-term}
&&K(n,s;m,t)
\nonumber\\
&&=
\textstyle
-\delta_{s,t}
\left\{ \sum_\mu \gamma_\mu \nabla_\mu(n,m)-B(n,m;s)
\right\}
-
\frac{1}{2}
\left[ (1+\gamma_5) \delta_{s+1,t}
+(1-\gamma_5) \delta_{s,t+1} \right]
\, ,
\end{eqnarray}
\begin{eqnarray}
\label{boundary-M-matrix}
&&M(n,m;[{\scriptstyle -L-1},{\scriptstyle L}])
\nonumber\\
&&
=
\left( \begin{array}{cc}
\scriptstyle
P_R S(n,{\scriptstyle -L-1};m,{\scriptstyle -L-1}) P_L + P_R \sum_\mu \gamma_\mu \nabla_\mu(n,m)
&\scriptstyle
P_R S(n,{\scriptstyle -L-1};m,{\scriptstyle L}) P_R \\
\scriptstyle
P_L S(n,{\scriptstyle L};m,{\scriptstyle -L-1}) P_L
& \scriptstyle
P_L S(n,{\scriptstyle L};m,{\scriptstyle L}) P_R + P_L \sum_\mu \gamma_\mu \nabla_\mu(n,m)
\end{array} \right)
\, ,
\end{eqnarray}
and
\begin{equation} \label{inverse-Kinetic-term}
S(n,s;m,t) = K^{-1}(n,s;m,t) \Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\, .
\end{equation}
Note that the determinant and the inverse of $K(n,s;m,t)$
should be evaluated taking into account
the chiral boundary condition~(\ref{chiral-boundary-condition}).
\subsection{Boundary state wave functions and Gauge non-invariance}
\label{sec:boundary wave function}
Next we consider the ``time'' evolution
of a boundary state at $s={\scriptstyle L}$
\begin{equation}
\ket{b_+} \, \mbox{\rm (at $s={\scriptstyle L}$)} \, , \hskip 16pt
\end{equation}
and its transition to another boundary state
at $s={\scriptstyle -L-1}$,
\begin{equation}
\ket{b_-} \, \mbox{\rm (at $s={\scriptstyle -L-1}$)} \, .
\end{equation}
The wave functions of these states, in general,
can be given in the coherent state representation as
\begin{equation}
\bra{ c^\ast,d^\ast } b_+ \rangle
=\bra{\psi_L,\bar \psi_L } b_+ \Big\rangle \, ,
\hskip 16pt
\langle b_- \ket{ c^\ast,d^\ast }
=\Big\langle b_- \ket{\psi_R,\bar \psi_R } \, .
\end{equation}
To reproduce the properties of the chiral determinant
by the vacuum overlap formula, especially the anomaly,
we must carefully place the source of
gauge noninvariance at the boundaries.
This is equivalent to the choice of the wave functions
of the boundary states.
In order to implement a Wigner-Brillouin phase choice,
we regard the boundary states
as states which have evolved (or would evolve)
for a certain period of ``time''
under the Hamiltonian without the gauge interaction.
We take $2{\scriptstyle L}^\prime$ period of ``time''.
Then the boundary state $\ket{b_+}$ can be written as
\begin{equation}
\label{Wigner-Brilluin-phase-impliment}
\bra{\psi_L({\scriptstyle L}),\bar \psi_L({\scriptstyle L}) } b_\pm \Big\rangle
=
\bra{\psi_L({\scriptstyle L}),\bar \psi_L({\scriptstyle L}) } (T_+^0)^{2{\scriptstyle L^\prime}}
\ket{b_\pm^\prime }
\, .
\end{equation}
In the limit $L^\prime \rightarrow \infty$,
we can expect that the ground state of $\hat H_+^0$, which we denote
as $\ket{0_+}$, is projected out for any choice of the boundary state
$\ket{b_+^\prime}$.
Note that this is nothing but the way to introduce
the breaking of gauge symmetry adopted in \cite{shamir,al},
by which the extent of the fifth dimension is infinite
but the gauge field is introduced only in a finite range of the fifth
dimension.
In terms of the path integral,
Eq.~(\ref{Wigner-Brilluin-phase-impliment}) reads
\begin{eqnarray}
\label{gauge-noninvariance-in-boundarystate}
&&
\bra{\psi_L({\scriptstyle L}),\bar \psi_L({\scriptstyle L}) } b_\pm \Big\rangle
\nonumber\\
&&
=
\int [{\cal D} \psi({\scriptstyle L}+2{\scriptstyle L^\prime})
{\cal D} \bar\psi({\scriptstyle L}+2{\scriptstyle L^\prime})]
\exp\left\{ \left( \bar\psi({\scriptstyle L}+2{\scriptstyle L^\prime}), B_+^0
\psi({\scriptstyle L}+2{\scriptstyle L^\prime}) \right) \right\}
\nonumber\\
&&
\hskip 16pt \times
Z^0_+[\psi_L({\scriptstyle L}),\bar \psi_L({\scriptstyle L});
\psi_R({\scriptstyle L}+2{\scriptstyle L^\prime}),\bar \psi_R({\scriptstyle L}+2{\scriptstyle L^\prime})]
\bra{\psi_L({\scriptstyle L}+2{\scriptstyle L^\prime}),\bar \psi_L({\scriptstyle L}+2{\scriptstyle L^\prime}) }
b_\pm^\prime \Big\rangle
\nonumber\\
&&
=
\,
\int [{\cal D} \psi({\scriptstyle L}+2{\scriptstyle L^\prime})
{\cal D} \bar\psi({\scriptstyle L}+2{\scriptstyle L^\prime})]
\exp\left\{ \left( \bar\psi({\scriptstyle L}+2{\scriptstyle L^\prime}), B_+^0
\psi({\scriptstyle L}+2{\scriptstyle L^\prime}) \right) \right\}
\nonumber\\
&&
\hskip 16pt \times
\det \left( K_+^0 \right) \Big\vert_{[{\scriptstyle L},{\scriptstyle L}+2{\scriptstyle L^\prime}]}
\exp\left\{ -\left(
\bar\Psi^\prime, M_+^\prime \Psi^\prime
\right) \right\}
\bra{\psi_L({\scriptstyle L}+2{\scriptstyle L}^\prime),\bar \psi_L({\scriptstyle L}+2{\scriptstyle L}^\prime) }
b_\pm^\prime \Big\rangle
\, ,
\end{eqnarray}
where
$\Psi^\prime(n)=( \psi(n,{\scriptstyle L}), \psi(n,{\scriptstyle L}+2{\scriptstyle L}^\prime) )^t $ and
$\bar \Psi^\prime(n)=( \bar \psi(n,{\scriptstyle L}), \bar \psi(n,{\scriptstyle L}+2{\scriptstyle L^\prime}) ) $,
\
$\Big\vert_{[{\scriptstyle L},{\scriptstyle L}+2{\scriptstyle L^\prime}]}$ stands for the boundary condition
\begin{equation}
\label{outside-boundary-condition}
P_R \psi(n, {\scriptstyle L}+2{\scriptstyle L}^\prime)= P_L \psi(n, {\scriptstyle L})=0 \, , \hskip 16pt
\bar \psi(n, {\scriptstyle L}+2{\scriptstyle L}^\prime) P_L =\bar \psi(n, {\scriptstyle L}) P_R =0
\, ,
\end{equation}
and
\begin{eqnarray}
\label{outside-boundary-M-matrix}
&&M_+^\prime(n,m;{\scriptstyle L},{\scriptstyle L}+2{\scriptstyle L}^\prime)
\nonumber\\
&&
=
\left( \begin{array}{cc}
\scriptstyle
P_R S_+^0(n,{\scriptstyle L};m,{\scriptstyle L}) P_L
+ P_R \sum_\mu \gamma_\mu \nabla_\mu^0(n,m)
&\scriptstyle
P_R S_+^0(n,{\scriptstyle L};m,{\scriptstyle L}+2{\scriptstyle L}^\prime) P_R \\
\scriptstyle
P_L S_+^0(n,{\scriptstyle L}+2{\scriptstyle L}^\prime;m,{\scriptstyle L}) P_L
& \scriptstyle
P_L S_+^0(n,{\scriptstyle L}+2{\scriptstyle L}^\prime;m,{\scriptstyle L}+2{\scriptstyle L}^\prime) P_R
+ P_L \sum_\mu \gamma_\mu \nabla_\mu^0(n,m)
\end{array} \right)
\, ,
\end{eqnarray}
\begin{equation}
\label{inverse-Kinetic-term-rightoutside}
S^0_+(n,s;m,t) = K^{-1}(n,s;m,t) \Big\vert_{[{\scriptstyle L},{\scriptstyle L}+2{\scriptstyle L^\prime}]}
= K^{-1}_+(n,s;m,t) \Big\vert_{[{\scriptstyle L},{\scriptstyle L}+2{\scriptstyle L^\prime}]}
\, .
\end{equation}
The superscript $0$ stands for the quantities in which
the gauge link variables are set to unity.
At finite ${\scriptstyle L}^\prime$, we find the following
choice convenient:
\begin{equation}
\label{choice-of-boundarystate}
\ket{b_+^\prime} \, = \ket{0} \, .
\end{equation}
In this choice,
the dependence of the wave function
on the variables $\psi_L({\scriptstyle L})$, $\bar \psi_L({\scriptstyle L})$
can be given explicitly as
\begin{eqnarray}
\bra{\psi_L({\scriptstyle L}),\bar \psi_L({\scriptstyle L}) }b_+ \rangle
&&
=
c_+
\exp \left\{ -\left(
\bar\psi({\scriptstyle L}),X_+^{{\scriptstyle L}^\prime} \psi({\scriptstyle L})
\right) \right\}
\label{boundary-wave-function-left}
\, ,
\end{eqnarray}
where
\begin{eqnarray}
\label{breaking-source}
&&X_+^{{\scriptstyle L}^\prime}(n,m)
\nonumber\\
&&
=
\scriptstyle
P_R S_+^0(n,{\scriptstyle L};m,{\scriptstyle L}) P_L + P_R\sum_\mu\gamma_\mu \nabla^0_\mu(n,m)
\nonumber\\
&&
+
\scriptstyle
P_R S_+^0({\scriptstyle L};{\scriptstyle L}+2{\scriptstyle L}^\prime) P_R
\left( B_+^0
- P_L S_+^0({\scriptstyle L}+2{\scriptstyle L}^\prime;{\scriptstyle L}+2{\scriptstyle L}^\prime) P_R
- P_L \sum_\mu \gamma_\mu \nabla_\mu^0
\right)^{-1}
P_L S_+^0({\scriptstyle L}+2{\scriptstyle L}^\prime;{\scriptstyle L}) P_L (n,m)
\, ,
\end{eqnarray}
and $c_+$ is a certain constant depending on ${\scriptstyle L}$ and
${\scriptstyle L}+2{\scriptstyle L}^\prime$.
We make a similar choice for the boundary state
at $s={\scriptstyle -L-1}$:
\begin{equation}
\label{choice-of-boundarystate-left}
\bra{b_-^\prime} \, = \bra{0} \, ,
\end{equation}
and we obtain
\begin{eqnarray}
\langle b_- \ket{\psi_R({\scriptstyle -L-1}),\bar \psi_R({\scriptstyle -L-1}) }
&&
=
c_-^\ast
\exp \left\{ -\left( \bar\psi({\scriptstyle -L-1}),X_-^{{\scriptstyle L}^\prime}
\psi({\scriptstyle -L-1}) \right) \right\}
\label{boundary-wave-function-right}
\, ,
\end{eqnarray}
where
\begin{eqnarray}
\label{breaking-source-left}
&&X_-^{{\scriptstyle L}^\prime}(n,m)
\nonumber\\
&&
=
\scriptstyle
P_L S_-^0(n,{\scriptstyle -L-1};m,{\scriptstyle -L-1}) P_R + P_L\sum_\mu\gamma_\mu \nabla^0_\mu(n,m)
\\
&&
+
\scriptstyle
P_L S_-^0({\scriptstyle -L-1}-2{\scriptstyle L}^\prime;{\scriptstyle -L-1}) P_R
\nonumber\\
&&
\scriptstyle
\qquad \qquad
\times
\left( B_-^0
- P_R S_-^0({\scriptstyle -L-1}-2{\scriptstyle L}^\prime;{\scriptstyle -L-1}-2{\scriptstyle L}^\prime) P_L
- P_R \sum_\mu \gamma_\mu \nabla_\mu^0
\right)^{-1}
P_R S_-^0({\scriptstyle -L-1}-2{\scriptstyle L}^\prime;{\scriptstyle -L-1}) P_R (n,m)
\, . \nonumber
\end{eqnarray}
\subsection{Would-be vacuum overlap formula
at finite extent of fifth dimension}
\label{sec:ol_at_finite_volume}
Given the explicit form of the boundary state wave functions,
the transition amplitude can be written as follows.
\begin{eqnarray}
\label{boundary-state-amplitude}
&&
\bra{b_-}D_- \left(T_-\right)^L
\left( T_+\right)^L D_+^\dagger \ket{b_+}
\prod_{0\leq s \leq {\scriptstyle L}}(\det B_+)^{2}
\prod_{{\scriptstyle -L-1} \leq s \leq -1}(\det B_-)^{2}
\nonumber\\
&&
=
\int \prod_{s={\scriptstyle -L-1},{\scriptstyle L}}
[{\cal D} \psi(s) {\cal D} \bar\psi(s)]
\exp\left\{ \sum_{s={\scriptstyle -L-1},{\scriptstyle L}}
\left( \bar\psi_s, B_s \psi_s \right) \right\}
\nonumber\\
&&
\times
\langle b_- \ket{\psi_R({\scriptstyle -L-1}),\bar \psi_R({\scriptstyle -L-1})}
Z[\psi_L,\bar\psi_L ({\scriptstyle -L-1});\psi_R,\bar\psi_R ({\scriptstyle L})]
\bra{\psi_L({\scriptstyle L}),\bar \psi_L({\scriptstyle L}) }b_+ \rangle
\nonumber\\
&&=
\det \left( K \right)
\Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]} \exp \Phi(b_-,b_+)
\, ,
\end{eqnarray}
where the boundary contribution $\Phi(b_-,b_+)$ is given by
\begin{eqnarray}
\exp \Phi(b_-,b_+)
&&
=
\int \prod_{s={\scriptstyle -L-1},{\scriptstyle L}}
[{\cal D} \psi_s {\cal D} \bar\psi_s]
\exp\left\{
\left( \bar \Psi, [B_{-+}-M-X_{-+}^{{\scriptstyle L}^\prime}] \Psi \right)
\right\}
c_+ c_-^\ast
\nonumber\\
&&
\equiv
{\det}' \left( B_{-+}-M-X_{-+}^{{\scriptstyle L}^\prime} \right)
c_+ c_-^\ast
\, ,
\end{eqnarray}
with
\begin{equation}
B_{-+}(n,m)= \left(
\begin{array}{cc} B_-(n,m) & 0 \\ 0 & B_+(n,m)
\end{array}
\right)
\, ,
\end{equation}
and
\begin{equation}
X_{-+}^{{\scriptstyle L}^\prime}(n,m) =
\left(
\begin{array}{cc} X_-^{{\scriptstyle L}^\prime}(n,m) & 0 \\
0 & X_+^{{\scriptstyle L}^\prime}(n,m) \end{array} \right)
\, .
\end{equation}
Note that ${\det}'$ denotes the determinant over the
four-dimensional surfaces at $s={\scriptstyle L}$ and $s={\scriptstyle -L-1}$,
as well as over the spinor indices and the representation
indices of the gauge group.
The transition amplitudes for the five-dimensional
fermions with the positive and negative homogeneous
masses, which are needed for the subtraction scheme
proposed by Neuberger and Narayanan, can be written
in a similar manner.
Then the subtracted transition amplitude at finite
${\scriptstyle L}$ and ${\scriptstyle L}^\prime$ can be written in a factorized form as
\begin{eqnarray}
\label{effective-action-at-finite-volume}
&&
\frac{\displaystyle
\bra{b_-}D_- \left(T_-\right)^L
\left( T_+\right)^L D_+^\dagger \ket{b_+}}
{\displaystyle
\sqrt{ \bra{b_-}D_- \left(T_-\right)^{2L } D_-^\dagger \ket{b_-} }
\sqrt{ \bra{b_+}D_+ \left(T_+\right)^{2L} D_+^\dagger \ket{b_+} }}
\nonumber\\
&&=\frac{\displaystyle \det \left( K \right) }
{\sqrt{ \displaystyle
\det \left( K_- \right) \det \left( K_+ \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\exp\left\{ \Phi(b_-,b_+)
-\frac{1}{2} \Phi_+(b_+,b_+)
-\frac{1}{2} \Phi_-(b_-,b_-) \right\}
\nonumber\\
&&=\frac{\displaystyle \det \left( K \right) }
{\sqrt{ \displaystyle
\det \left( K_- \right) \det \left( K_+ \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\frac{\displaystyle {\det}' \left( B_{-+}-M-X_{-+}^{{\scriptstyle L}^\prime} \right) }
{\sqrt{ \displaystyle
{\det}' \left( B_{--}-M_--X_{--}^{{\scriptstyle L}^\prime} \right)
{\det}' \left( B_{++}-M_+-X_{++}^{{\scriptstyle L}^\prime} \right)
}}
\, .
\end{eqnarray}
Since the phase factor
arising from the constants $c_+$ and $c_-$,
$\frac{c_-^\ast c_+}{\vert c_- \vert \vert c_+ \vert }$,
does not depend on the gauge field and is irrelevant,
we have omitted it.
This is the expression from which we start our analysis
in the continuum limit.
Taking the limit $L^\prime \rightarrow \infty$ first
in the above formula,
we obtain the
would-be vacuum overlap formula at finite extent of the fifth dimension as
\begin{eqnarray}
\label{effective-action-expression}
&&
\exp\left\{ -S_i[U_\mu;{\scriptstyle L}] \right\}
\nonumber\\
&=&\frac{\displaystyle \det \left( K \right) }
{\sqrt{ \displaystyle
\det \left( K_- \right) \det \left( K_+ \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\exp\left\{ \Phi(0_-,0_+)
-\frac{1}{2} \Phi_+(0_+,0_+)
-\frac{1}{2} \Phi_-(0_-,0_-) \right\}
\nonumber\\
&=&
\frac{\displaystyle \det \left( K \right) }
{\sqrt{ \displaystyle
\det \left( K_- \right) \det \left( K_+ \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\frac{\displaystyle {\det}' \left( B_{-+}-M-X_{-+}^\infty \right) }
{\sqrt{ \displaystyle
{\det}' \left( B_{--}-M_--X_{--}^\infty \right)
{\det}' \left( B_{++}-M_+-X_{++}^\infty \right)
}}
\, .
\nonumber\\
\end{eqnarray}
Finally we give the expression for the variation of the effective
action under the gauge transformation.
Under the gauge transformation of the link variables:
\begin{equation}
U_\mu(n) \longrightarrow g(n) U_\mu(n) g^\dagger(n+\hat\mu)
\, ,
\hskip 1cm
g(n) \in SU(N)
\, ,
\end{equation}
we can easily see that
the matrices $K$, $B$ and $M$ transform covariantly.
For example,
\begin{equation}
M(n,m) \longrightarrow g(n) M(n,m) g^\dagger(m)
\, .
\end{equation}
By contrast, $X^\infty$ does not transform covariantly, because
it is built from $B^0$ and $M^0$, which contain no gauge link
variables.
Therefore the variation of the effective action under the
infinitesimal gauge transformation with $g(n)=1+i\omega(n)$ is given by
\begin{eqnarray}
i\delta S_i[U_\mu;{\scriptstyle L}]
&=&
{{\rm Tr}}'
\left\{
\left( B_{-+}-M-X_{-+}^\infty \right)^{-1}
\left( \omega X_{-+}^\infty
- X_{-+}^\infty \omega \right)
\right\}
\nonumber\\
&&-\frac{1}{2}
{{\rm Tr}}'
\left\{
\left( B_{++}-M_+-X_{++}^\infty \right)^{-1}
\left( \omega X_{++}^\infty
- X_{++}^\infty \omega \right)
\right\}
\nonumber\\
&&-\frac{1}{2}
{{\rm Tr}}'
\left\{
\left( B_{--}-M_--X_{--}^\infty \right)^{-1}
\left( \omega X_{--}^\infty
- X_{--}^\infty \omega \right)
\right\}
\, ,
\label{effective-action-variation}
\end{eqnarray}
where ${{\rm Tr}}' $ denotes the trace over the
four-dimensional surfaces at $s={\scriptstyle L}$ and $s={\scriptstyle -L-1}$,
as well as over the spinor indices and the representation
indices of the gauge group.
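The commutator structure above can be traced as follows
(a sketch using only the transformation properties just stated).
Since $B$ and $M$ transform covariantly while $X^\infty$ is inert,
the covariant factors can be pulled out of the determinant:
\begin{eqnarray*}
{\det}' \left( g \left( B-M \right) g^\dagger - X^\infty \right)
&=&
{\det}' \left( B-M- g^\dagger X^\infty g \right)
\nonumber\\
&=&
{\det}' \left( B-M-X^\infty
+ i \left( \omega X^\infty - X^\infty \omega \right)
+ O(\omega^2)
\right)
\, .
\end{eqnarray*}
Expanding each $\ln {\det}'$ to first order in $\omega$ then gives
the trace of the inverse times
$i \left( \omega X^\infty - X^\infty \omega \right)$,
which reproduces the three terms of
Eq.~(\ref{effective-action-variation}), the factors $-\frac{1}{2}$
coming from the square roots.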
\subsection{Continuum limit counterpart}
We now investigate the would-be vacuum
overlap formula at finite extent of the fifth dimension
{\it in the continuum limit} and {\it in Minkowski space}.
We first take the naive continuum limit of the action
with the boundary terms,
Eqs.~(\ref{lattice-action-with-boundaryterm}),
(\ref{lattice-action-chiral-bc}),
(\ref{left-boundary-term}) and
(\ref{right-boundary-term}):
\begin{equation}
\label{continuum-action-with-boundaryterm}
S[{\scriptstyle -L-1},{\scriptstyle L}]= S+S^B_L+S^B_R
\, ,
\end{equation}
\begin{eqnarray}
S &=& \int^\infty_{-\infty}\!d^4x \int^L_{-L}\!ds \, \,
\bar\psi(x,s)
\big\{ i \gamma^\mu \left( \partial_\mu -igT^a A^a_\mu(x) \right)
-[\gamma_5 \partial_s + M (s)] \big\}
\psi(x,s) \Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\, ,
\label{eqn:action-kink}
\\
S^B_L &=& -\int^\infty_{-\infty}\!d^4x
\left\{
\bar\psi(x,{\scriptstyle -L+0}) P_L \psi(x,{\scriptstyle -L})
+\bar\psi(x,{\scriptstyle -L}) P_R \psi(x,{\scriptstyle -L+0})
\right\}
\, ,
\\
S^B_R &=& -\int^\infty_{-\infty}\!d^4x
\left\{
\bar\psi(x,{\scriptstyle L}) P_L \psi(x,{\scriptstyle L-0})
+\bar\psi(x,{\scriptstyle L-0}) P_R \psi(x,{\scriptstyle L})
\right\}
\, ,
\end{eqnarray}
where $\Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]}$
stands for the chiral boundary condition
in the continuum limit:
\begin{equation}
\label{chiral-boundary-condition-continuum-limit}
P_R \psi(x,{\scriptstyle L}) = P_L \psi(x,{\scriptstyle -L})= 0 \, , \hskip 16pt
\bar \psi(x,{\scriptstyle L}) P_L = \bar \psi(x,{\scriptstyle -L}) P_R = 0 \, .
\end{equation}
Note that the kinetic parts in the boundary terms,
Eqs. (\ref{left-boundary-term}) and (\ref{right-boundary-term}),
vanish in the continuum limit because they correspond to
operators of dimension five.
The kink-like mass is defined by
\begin{eqnarray*}
M (s)
&\equiv& \left\{ \begin{array}{ll}
+M & \quad s > 0 \\
-M & \quad s < 0
\end{array} \right. \nonumber\\
&=&M \, \epsilon(s) \nonumber\\
&=&M \, \int\!{d \omega \over 2\pi i }
\biggl[\, {1\over \omega-i 0} \,+\, {1\over \omega+i0} \,
\biggr]
\, e^{i\omega s}.
\end{eqnarray*}
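As a quick check of this integral representation,
closing the $\omega$ contour in the upper (lower) half-plane
for $s>0$ ($s<0$) picks up only the pole of the first (second) term:
\begin{eqnarray*}
s>0 &:& \quad
M \int\!{d \omega \over 2\pi i } \, {e^{i\omega s} \over \omega-i 0}
= +M \, , \\
s<0 &:& \quad
M \int\!{d \omega \over 2\pi i } \, {e^{i\omega s} \over \omega+i 0}
= -M \, ,
\end{eqnarray*}
so the integral indeed equals $M \, \epsilon(s)$.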
We assume that the Latin indices run from 0 to~4
and the Greek ones from 0 to~3.
The gamma matrices are defined as
$\{\gamma^a , \gamma^b\}= 2 \eta^{ab}$
with Minkowskian metric, $\eta^{ab}={\rm diag}(+1,-1,-1,-1,-1)$.
We adopt the following chiral representation:
\begin{equation}
\label{gamma-matrix}
\gamma^\mu=
\left( \begin{array}{cc} 0 & \bar\sigma^\mu \\
\sigma^\mu & 0 \end{array} \right)
\hskip .5cm
\sigma^\mu = ( 1, \sigma_i) \, ,
\bar\sigma^\mu = ( 1, -\sigma_i) \, ,
\end{equation}
and
\begin{equation}
\label{gamma-five}
\gamma^{a=4} = i \gamma_5 = i^2 \gamma^0\gamma^1\gamma^2\gamma^3
=i \left( \begin{array}{cc} 1 & 0\\
0 & -1 \end{array} \right)
\, .
\end{equation}
$T^a$ are the hermitian generators of the $SU(N)$ gauge group in a
certain representation.
We also denote the gauge potential
in the matrix form as $A_\mu(x)=-ig T^a A^a_\mu(x)$.
In the continuum limit,
$B$, $K$, its inverse, $M$, and $X$ are given formally by
\begin{eqnarray}
&& B(x,y)=\delta^4(x-y) \, , \\
&& K(x,s;y,t) =
\left\{
i \gamma^\mu \left( \partial_\mu +A_\mu(x) \right)
-[\gamma_5 \partial_s + M (s)]
\right\} \delta^4 (x-y) \delta (s-t)
\, ,
\label{kinetic-term-continuum-limit}\\
&&S_{F}[A] (x,s;y,t) =
i K^{-1}(x,s;y,t) \Big\vert_{[{\scriptstyle -L},{\scriptstyle L}]}
\, ,
\label{inverse-Kinetic-term-continuum-limit}\\
&&iM(x,y;{\scriptstyle -L},{\scriptstyle L})
=
\left( \begin{array}{cc}
\scriptstyle P_R S_F[A](x,{\scriptstyle -L};y,{\scriptstyle -L}) P_L
&\scriptstyle P_R S_F[A](x,{\scriptstyle -L};y,{\scriptstyle L}) P_R \\
\scriptstyle P_L S_F[A](x,{\scriptstyle L};y,{\scriptstyle -L}) P_L
& \scriptstyle P_L S_F[A](x,{\scriptstyle L};y,{\scriptstyle L}) P_R
\end{array} \right)
\, ,
\label{boundary-M-matrix-continuum-limit}
\end{eqnarray}
and
\begin{eqnarray}
iX_-^{{\scriptstyle L}^\prime}(x,y)
&&=
\scriptstyle
P_L S_{F-}^0(x,{\scriptstyle -L};y,{\scriptstyle -L}) P_R \\
&&
+
\scriptstyle
P_L S_{F-}^0({\scriptstyle -L}-2{\scriptstyle L}^\prime;{\scriptstyle -L}) P_L
\left( 1
- P_R S_{F-}^0({\scriptstyle -L}-2{\scriptstyle L}^\prime;{\scriptstyle -L}-2{\scriptstyle L}^\prime) P_L
\right)^{-1}
P_R S_{F-}^0({\scriptstyle -L}-2{\scriptstyle L}^\prime;{\scriptstyle -L}) P_R (x,y)
\, ,
\label{breaking-source-left-continuum-limit}
\nonumber \\
iX_+^{{\scriptstyle L}^\prime}(x,y)
&&=
\scriptstyle
P_R S_{F+}^0(x,{\scriptstyle L};y,{\scriptstyle L}) P_L \\
&&
+
\scriptstyle
P_R S_{F+}^0({\scriptstyle L}+2{\scriptstyle L}^\prime;{\scriptstyle L}) P_R
\left( 1
- P_L S_{F+}^0({\scriptstyle L}+2{\scriptstyle L}^\prime;{\scriptstyle L}+2{\scriptstyle L}^\prime) P_R
\right)^{-1}
P_L S_{F+}^0({\scriptstyle L}+2{\scriptstyle L}^\prime;{\scriptstyle L}) P_L (x,y)
\, ,
\label{breaking-source-right-continuum-limit}
\nonumber \\
X_{-+}^{{\scriptstyle L}^\prime}(x,y)
&&=
\left(
\begin{array}{cc} X_-^{{\scriptstyle L}^\prime}(x,y) & 0 \\
0 & X_+^{{\scriptstyle L}^\prime}(x,y)
\end{array} \right)
\label{breaking-source-total-continuum-limit}
\, .
\end{eqnarray}
$S_{F\pm}$ are the inverses of the Dirac operators of
the five-dimensional fermions with positive and negative
homogeneous masses, taking into account the
chiral boundary conditions
$\Big\vert_{[{\scriptstyle L}, {\scriptstyle L}+2{\scriptstyle L}^\prime]}$ and
$\Big\vert_{[{\scriptstyle -L}-2{\scriptstyle L}^\prime, {\scriptstyle -L}]}$,
respectively. The superscript $0$ stands for the quantities
in which the gauge interaction is switched off.
Therefore we arrive at the formal expression of the
vacuum overlap formula at finite extent of the fifth dimension
(${\scriptstyle L}^\prime$ is also kept finite) in the continuum limit:
\begin{equation}
\label{overlap-formula-at-finite-fifth-volume-continuum-limit}
\frac{\displaystyle \det \left( K \right) }
{\sqrt{ \displaystyle
\det \left( K_- \right) \det \left( K_+ \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\frac{\displaystyle {\det}' \left( 1-M-X_{-+}^{{\scriptstyle L}^\prime} \right) }
{\sqrt{ \displaystyle
{\det}' \left( 1-M_--X_{--}^{{\scriptstyle L}^\prime} \right)
{\det}' \left( 1-M_+-X_{++}^{{\scriptstyle L}^\prime} \right)
}}
\, .
\end{equation}
This expression is formal because we have not yet specified the
regularization. In our continuum limit analysis, we
adopt {\it dimensional regularization in
the 't~Hooft--Veltman scheme}. That is,
we consider an extended space in which
the four-dimensional space $(\mu=0,1,2,3)$
is extended to $D$ dimensions, while
the fifth dimension is kept intact.
The gamma matrices follow the convention such that
\begin{equation}
\label{gamma-matrix-convention-dimensional-reglarization}
\{ \gamma^\mu , \gamma^\nu \}= 2 \eta^{\mu\nu} \hskip .5cm
(\mu=0, 1, \ldots, D) \, ,
\end{equation}
\begin{equation}
\label{gamma-five-in-dimensional-regularization}
\gamma^{a=4} = i \gamma_5 = i^2 \gamma^0\gamma^1\gamma^2\gamma^3
\, ,\hskip 1cm
[ \gamma^\mu , \gamma^{a=4} ]= 0 \hskip .5cm
(\mu=4, \ldots, D) \, .
\end{equation}
In fact, in the following section
we will find,
in evaluating the contribution of the five-dimensional
determinant,
\begin{equation}
\label{overlap-formula-determinant-contribution}
\frac{\displaystyle \det \left( K \right) }
{\sqrt{ \displaystyle
\det \left( K_- \right) \det \left( K_+ \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\, ,
\end{equation}
that dimensional regularization
cannot maintain the chiral boundary condition.
Furthermore,
we also find that the subtraction by the determinants of
the fermions with the positive and negative homogeneous masses
corresponds precisely to the subtraction by only {\it one} bosonic
Pauli--Villars--Gupta field.
Because of these facts, we obtain a gauge noninvariant result
for the vacuum polarization in four dimensions.
Since the volume contribution is expected to be gauge invariant
in the lattice regularization, our choice of the
dimensional regularization is not adequate in this case.
On the other hand, for the boundary contribution,
\begin{equation}
\label{overlap-formula-boundary-contribution}
\frac{\displaystyle {\det}' \left( 1-M-X_{-+}^{{\scriptstyle L}^\prime} \right) }
{\sqrt{ \displaystyle
{\det}' \left( 1-M_--X_{--}^{{\scriptstyle L}^\prime} \right)
{\det}' \left( 1-M_+-X_{++}^{{\scriptstyle L}^\prime} \right)
}}
\, ,
\end{equation}
once we assume that
{\it dimensional regularization preserves the cluster property},
we can obtain the following results:
the boundary contribution is purely parity-odd, and
its variation under the gauge transformation
gives the consistent form
of the gauge anomaly {\it in four dimensions},
which can be evaluated without any divergence and
actually originates from the source of gauge noninvariance
at the boundary, not from the breaking of the chiral boundary
condition by the dimensional regularization.
In this sense, the continuum limit analysis with
the dimensional regularization cannot give
the whole structure of the vacuum overlap defined in
the lattice regularization.
But we think that we can see rather clearly in what way
the vacuum overlap formula could reproduce the perturbative
properties of the chiral determinant in four dimensions.
\section{
Five-dimensional fermion with kink-like mass \hskip 3cm
in a finite fifth space volume
}
\label{sec:fermion-at-finite-fifth-volume}
In this section,
we develop the theory of
a free five-dimensional fermion in a finite fifth-dimensional volume.
We solve the field equation and obtain the complete set of
solutions. The field operator is defined by the mode expansion
and the propagator is derived.
The Sommerfeld--Watson transformation is introduced, by which
we rearrange the normal modes of the fifth momentum to be common
between the fermion with the kink-like mass
and the fermion with the positive (negative) homogeneous mass.
\subsection{Complete set of solutions}
We solve the free field equations of
the five-dimensional fermions with kink-like mass
and positive (negative) homogeneous mass
under the chiral boundary condition,
which are derived from the actions,
\begin{eqnarray}
S_0 &=& \int^\infty_{-\infty}\!d^4x \int^L_{-L}\!ds \, \,
\bar\psi(x,s)
\left\{ i \gamma^\mu \partial_\mu
-[\gamma_5 \partial_s + M (s)] \right\}
\psi(x,s) \Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\, ,
\\
S_\pm &=& \int^\infty_{-\infty}\!d^4x \int^L_{-L}\!ds \, \,
\bar\psi(x,s)
\left\{ i \gamma^\mu \partial_\mu
-[\gamma_5 \partial_s \pm M \phantom{(s)}] \right\}
\psi(x,s) \Big\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\, ,
\end{eqnarray}
and the chiral boundary condition reads
\begin{equation}
P_R \psi(x,{\scriptstyle L}) = P_L \psi(x,{\scriptstyle -L})= 0 \, , \hskip 16pt
\bar \psi(x,{\scriptstyle L}) P_L = \bar \psi(x,{\scriptstyle -L}) P_R = 0 \, .
\end{equation}
Note that, if
the parity transformation here is defined by
\begin{eqnarray}
\label{parity-transformation}
\psi(x_0,x_i,s) \rightarrow
\psi^\prime (x^\prime_0,x^\prime_i,s^\prime)
\equiv \gamma^0 \psi(x_0,-x_i,-s)
\, ,
\end{eqnarray}
both the action and the chiral boundary condition are
parity invariant for the fermion with the homogeneous
mass. For the fermion with the kink-like mass, the parity
transformation has the effect of flipping the sign of
the mass parameter $M$.
This is also true when the gauge field is introduced
provided that the gauge field transforms as
\begin{eqnarray}
\left( A_0(x_0,x_i), A_i(x_0,x_i) \right) \rightarrow
\left( A^\prime_0(x'_0,x'_i), A^\prime_i(x'_0,x'_i) \right)
\equiv
\left( A_0(x_0,-x_i), -A_i(x_0,-x_i) \right)
\, ,
\end{eqnarray}
under the parity transformation.
We treat
both $S_0$ and $S_+$ in a unified manner.
The suffix $\pm$ in the following denotes the solutions for
$S_0$ and $S_+$, respectively.
The solution for $S_-$ can be obtained by replacing
$M$ with $- M$ in that for $S_+$.
We work in momentum space for all dimensions. $\omega$ denotes
the fifth component of the five-momentum.
\vskip 16pt
\noindent
i) {\it Solution for $s>0$}
The general solution in the region $s>0$ is given as follows:
\begin{equation}
\left( \begin{array}{l}
\bar\sigma^\mu p_\mu \\
-i \omega + M \\
\end{array}
\right) e^{-i\omega s}
\, ,
\end{equation}
where
\begin{equation}
p^\mu p_\mu = \omega^2 + M^2 \, .
\label{dispersion-relation}
\end{equation}
We have written the two independent solutions in
the form of a four-by-two matrix.
Then the solution satisfying the chiral boundary
condition at $s={\scriptstyle L}$ is given by
\begin{equation}
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu \\
-i \omega + M \\
\end{array}
\right) e^{-i\omega (s-L)}
-
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu \\
i \omega + M \\
\end{array}
\right) e^{i\omega (s-L)}
=
2i \left( \begin{array}{l}
\bar \sigma^\mu p_\mu \sin \omega(L-s) \\
- \omega \cos \omega(L-s) + M \sin \omega(L-s) \\
\end{array}
\right) \, ,
\end{equation}
where $\omega > 0 $.
\vskip 16pt
\noindent
ii) {\it Solution for $s<0$ }
Similarly, the general solution in the region $s<0$ is given as follows:
\begin{equation}
\left( \begin{array}{l}
i \omega \mp M \\
\sigma^\mu p_\mu \\
\end{array}
\right) e^{-i\omega s}
\, .
\end{equation}
Then the solution satisfying the chiral boundary
condition at $s={\scriptstyle -L}$ is given by
\begin{equation}
\left( \begin{array}{l}
i \omega \mp M \\
\sigma^\mu p_\mu \\
\end{array}
\right) e^{-i\omega (s+L)}
-
\left( \begin{array}{l}
-i \omega \mp M \\
\sigma^\mu p_\mu \\
\end{array}
\right) e^{i\omega (s+L)}
=
\frac{2}{i }
\left( \begin{array}{l}
- \omega \cos \omega(L+s) \mp M \sin \omega(L+s) \\
\sigma^\mu p_\mu \sin \omega(L+s) \\
\end{array}
\right) \, ,
\end{equation}
where $\omega > 0$.
\vskip 16pt
\noindent
iii) {\it Matching at $s=0$}
The solution should be continuous at $s=0$ and
this condition determines the normal modes of $\omega$.
The general solution satisfying the chiral boundary condition
can be written as follows:
\begin{eqnarray}
\phi(p,\omega;s) & \equiv &
C_{>0}
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu \sin \omega(L-s) \\
- \omega \cos \omega(L-s) + M \sin \omega(L-s) \\
\end{array}
\right) \theta(s) \nonumber\\
&&\mbox{} +
C_{<0}
\left( \begin{array}{l}
- \omega \cos \omega(L+s) \mp M \sin \omega(L+s) \\
\sigma^\mu p_\mu \sin \omega(L+s) \\
\end{array}
\right)
\theta(-s) \, .
\end{eqnarray}
For the two components of $\phi(p,\omega;s)$ above to match at $s=0$,
we must have
\begin{eqnarray}
C_{>0} &=& C \; \sigma^\mu p_\mu \sin \omega L \, , \\
C_{<0} &=& C \; (-\omega \cos \omega L + M \sin \omega L ) \, ,
\end{eqnarray}
and
\begin{equation}
p^2 \sin^2 \omega L
= (\omega \cos \omega L - M \sin \omega L )
(\omega \cos \omega L \pm M \sin \omega L ) \, .
\label{matching-condition}
\end{equation}
\vskip 16pt
\noindent
iv) {\it Spectrum of the normal modes of $\omega$}
{}From the dispersion relation Eq.~(\ref{dispersion-relation})
and the matching condition Eq.~(\ref{matching-condition}),
we can obtain the spectrum of the normal modes of $\omega$.
\begin{mathletters}
\begin{eqnarray}
(\omega^2 + M^2) \sin^2 \omega L
&=& (\omega \cos \omega L - M \sin \omega L )
(\omega \cos \omega L \pm M \sin \omega L ) \, .
\label{mode-equation}
\end{eqnarray}
\end{mathletters}
For the fermion with kink-like mass term
we have
\begin{eqnarray}
\label{kink-mass-mode}
(\omega^2 + M^2)
&=&
M^2 \frac{1}{\cos 2 \omega L } \qquad ( \omega > 0) \, , \\
(-\lambda^2 + M^2)
&=&
M^2 \frac{1}{\cosh 2 \lambda L } \qquad
( \lambda > 0 ; \omega=i\lambda) \, .
\label{light-mode-in-fermion-with-kink-like-mass}
\end{eqnarray}
For the fermion with ordinary positive mass
we have
\begin{eqnarray}
\label{homogeneous-mass-mode}
\omega
&=&
M \tan 2 \omega L \qquad ( \omega > 0) \, ,
\\
\lambda
&=&
M \tanh 2 \lambda L \qquad
( \lambda > 0, \omega =i \lambda; \mbox{only for}\ +M ) \, .
\label{light-mode-in-fermion-with-positive-homogeneous-mass}
\end{eqnarray}
Note that both sets of solutions contain bound modes
whose wave functions behave exponentially.
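The normal-mode equations above are transcendental. As an illustration only (the sample values $M = L = 1$ and the bracketing intervals are assumptions, not taken from the text), they can be solved by elementary bisection:

```python
import math

# Illustrative sample values (an assumption, not fixed by the text): M = L = 1.
M, L = 1.0, 1.0

def bisect(f, a, b, tol=1e-12):
    """Plain bisection; f(a) and f(b) must have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Oscillating modes of the homogeneous-mass fermion: omega = M tan(2 omega L).
# Exactly one root lies on each branch of tan(2 omega L) between successive
# poles at omega = pi/4 + k pi/2 (for L = 1).
f_hom = lambda w: M * math.tan(2.0 * w * L) - w
poles = [math.pi / 4.0 + k * math.pi / 2.0 for k in range(4)]
eps = 1e-9
osc_modes = [bisect(f_hom, poles[k] + eps, poles[k + 1] - eps) for k in range(3)]

# Bound mode of the positive homogeneous mass: lambda = M tanh(2 lambda L).
lam_hom = bisect(lambda l: M * math.tanh(2.0 * l * L) - l, 0.5, 2.0)

# Bound mode of the kink-like mass: (M^2 - lambda^2) cosh(2 lambda L) = M^2.
lam_kink = bisect(lambda l: (M * M - l * l) * math.cosh(2.0 * l * L) - M * M,
                  0.1, 0.99)
```

For the kink-like mass, the bound-mode root approaches $\lambda \to M$ as $L$ grows, for which the dispersion relation gives $p^\mu p_\mu = \omega^2 + M^2 \to 0$, i.e.\ a light mode.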
\vskip 16pt
\noindent
v) {\it Solutions over $[-L,+L]$ }
Taking into account the mode equation
Eq.~(\ref{mode-equation}), the solution over
the entire region can be rewritten as follows:
\begin{eqnarray}
\phi(p,\omega;s)
&=&
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu \sin \omega(L-s) \\
- \omega \cos \omega(L-s) + M \sin \omega(L-s)
\\
\end{array}
\right) \theta(s) \nonumber\\
&&\mbox{} +
\frac{(-\omega \cos \omega L + M \sin \omega L )}
{\sigma^\mu p_\mu \sin \omega L}
\left( \begin{array}{l}
- \omega \cos \omega(L+s) \mp M \sin \omega(L+s) \\
\sigma^\mu p_\mu \sin \omega(L+s) \\
\end{array}
\right)
\theta(-s) \\
&& =
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu \sin \omega(L-s) \\
(-\omega \cot \omega L + M )
\left(
\frac{\omega \cos \omega(L-s) - M \sin \omega(L-s)}
{\omega \cot \omega L - M} \right) \\
\end{array}
\right) \theta(s) \nonumber\\
&&\qquad \qquad \qquad +
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu \left(
\frac{\omega \cos \omega(L+s) \pm M \sin \omega(L+s)}
{\omega \cot \omega L \pm M } \right) \\
(-\omega \cot \omega L + M )\sin \omega(L+s) \\
\end{array}
\right)
\theta(-s) \\
&&=
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu \sin \omega(L-s) \\
(-\omega \cot \omega L + M )
\sin_- \omega(L+s) \\
\end{array}
\right) \theta(s) \nonumber\\
&& \qquad\qquad\qquad +
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu \sin_\pm \omega(L-s) \\
(-\omega \cot \omega L + M )\sin \omega(L+s) \\
\end{array}
\right)
\theta(-s) \\
&&=
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu [\sin \omega(L-s)]_\pm \\
(-\omega \cot \omega L + M )
[\sin \omega(L+s)]_- \\
\end{array}
\right) \, ,
\end{eqnarray}
where we have defined
\begin{eqnarray}
\sin_\pm \omega(L-s)
&\equiv&
\frac{\omega \cos \omega(L+s) \pm M \sin \omega(L+s)}
{\omega \cot \omega L \pm M } \\
&=&
\left\{
\begin{array}{ll}
\frac{\omega \cos \omega(L+s) + M \sin \omega(L+s)}
{\omega \cot \omega L + M }
& \qquad(\omega^2+M^2=M^2 / \cos 2\omega L,\omega>0)
\\
\sin \omega(L-s) & \qquad (\omega=M \tan 2\omega L, \omega > 0) \\
\end{array} \right.
\end{eqnarray}
and we use the following abbreviation for the generalized ``sin''
functions on $[-L,+L]$:
\begin{equation}
[\sin \omega(L-s)]_\pm \equiv
\sin \omega(L-s) \theta(s) + \sin_\pm \omega(L-s) \theta(-s) \, .
\end{equation}
This generalized ``sin'' function satisfies the orthogonality relation:
\begin{equation}
\int^L_{-L}\!ds \,
[\sin \omega(L-s)]_\pm [\sin \omega^\prime (L-s)]_\pm
= N_\pm (\omega) \delta_{\omega\omega^\prime}
\label{generalized-sin-function-orthgonarity}
\, ,
\end{equation}
where the normalization factor $N_\pm(\omega)$ is given by
\begin{eqnarray}
N_\pm(\omega)
&=& \left[
\Big(
\frac{(\omega\cot\omega L \pm M)+
(\omega\cot\omega L - M)}
{(\omega\cot\omega L \pm M)}
\Big)
\frac{1}{2}
\big( L-\frac{\sin 2\omega L}{2\omega} \big)
+\frac{\sin^2\omega L}{(\omega\cot\omega L \pm M)}
\right]
\nonumber\\
&\equiv& n_\pm(\omega) \frac{1}{(\omega\cot\omega L \pm M)}
\, .
\end{eqnarray}
For the fermion with kink-like mass term, it turns out to be
\begin{eqnarray}
N_+(\omega)
&=&
\frac{ \omega\cot\omega L }
{(\omega\cot\omega L + M)}
\left[ L-
\frac{ \sin 4\omega L }
{4 \omega \cos^2 \omega L}
\right]
\, .
\end{eqnarray}
For the fermion with ordinary positive mass term, it reads
\begin{eqnarray}
N_-(\omega)
&=&
\left[ L-\frac{\sin4 \omega L }{4 \omega}
\right]
\, .
\end{eqnarray}
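As an illustrative numerical check (with the assumed sample values $M = L = 1$; none of this is part of the original derivation): for modes of the homogeneous-mass equation $\omega = M \tan 2\omega L$, the generalized ``sin'' reduces to the ordinary $\sin \omega(L-s)$, so the orthogonality relation and the normalization factor $N_-(\omega)$ can be verified by direct quadrature:

```python
import math

# Assumed sample values, not fixed by the text.
M, L = 1.0, 1.0

def bisect(f, a, b, tol=1e-13):
    """Plain bisection; f(a) and f(b) must have opposite signs."""
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# Two normal modes of omega = M tan(2 omega L), one per branch of the tangent.
f = lambda w: M * math.tan(2.0 * w * L) - w
eps = 1e-9
w1 = bisect(f, math.pi / 4.0 + eps, 3.0 * math.pi / 4.0 - eps)
w2 = bisect(f, 3.0 * math.pi / 4.0 + eps, 5.0 * math.pi / 4.0 - eps)

def overlap(wa, wb, n=20001):
    """Composite Simpson approximation of the inner product on [-L, L]."""
    h = 2.0 * L / (n - 1)
    total = 0.0
    for i in range(n):
        s = -L + i * h
        wgt = 1.0 if i in (0, n - 1) else (4.0 if i % 2 == 1 else 2.0)
        total += wgt * math.sin(wa * (L - s)) * math.sin(wb * (L - s))
    return total * h / 3.0

N1 = L - math.sin(4.0 * w1 * L) / (4.0 * w1)   # normalization factor N_-(w1)
off = overlap(w1, w2)                           # should vanish
diag = overlap(w1, w1)                          # should equal N_-(w1)
```

The off-diagonal overlap vanishes identically once both frequencies satisfy the mode equation; the diagonal one reproduces $L - \sin 4\omega L / 4\omega$.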
Orthogonality of the general solutions over
the entire region, $\phi(p,\omega,s)$, is given as follows.
\begin{equation}
\int^L_{-L}\!ds \phi^\dagger(p,\omega,s) \phi(p,\omega^\prime,s)
= 2 p_0 \, \bar\sigma^\mu p_\mu N_\pm(\omega) \delta_{\omega\omega^\prime}
\, .
\end{equation}
This is shown as follows:
\begin{eqnarray*}
&& \int^L_{-L}\!ds \phi^\dagger(p,\omega,s) \phi(p,\omega^\prime,s)
\nonumber\\
&=&
(\bar \sigma^\mu p_\mu)^2
\int^L_{-L} \!ds \,
[\sin \omega(L-s)]_\pm [\sin \omega^\prime (L-s)]_\pm
\nonumber\\
&&
+(-\omega\cot\omega L + M)(-\omega^\prime \cot\omega^\prime L + M)
\int^L_{-L} \!ds \,
[\sin \omega(L-s)]_- [\sin \omega^\prime (L-s)]_-
\nonumber\\
&=&
\left( (\bar \sigma^\mu p_\mu)^2 N_\pm(\omega)
+(-\omega\cot\omega L + M)(-\omega \cot\omega L + M) N_-(\omega)
\right)
\, \delta_{\omega\omega^\prime}
\nonumber\\
&=&
\left( (\bar \sigma^\mu p_\mu)^2
+(\omega\cot\omega L - M)(\omega \cot\omega L \pm M) \right)
\, N_\pm(\omega) \, \delta_{\omega\omega^\prime}
\nonumber\\
&=&
2 p_0 \, \bar\sigma^\mu p_\mu
N_\pm (\omega) \, \delta_{\omega\omega^\prime}
\, .
\end{eqnarray*}
Then the orthonormal positive- and negative-energy wave functions
can be obtained as
\begin{eqnarray}
u(p,\omega;s)
&=&
\frac{\left( 1+\frac{\sigma^\mu p_\mu}{-i\omega+M} \right)}
{\sqrt{2(|p_0|+M)}}
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu [\sin \omega(L-s)]_\pm \\
(-\omega \cot \omega L + M )
[\sin \omega(L+s)]_- \\
\end{array}
\right)
\, ,
\end{eqnarray}
\begin{eqnarray}
v(p,\omega;s)
&\equiv& u(-p,\omega;s) \nonumber\\
&=&
\frac{\left( 1-\frac{\sigma^\mu p_\mu}{-i\omega+M} \right)}
{\sqrt{2(|p_0|+M)}}
\left( \begin{array}{l}
-\bar \sigma^\mu p_\mu [\sin \omega(L-s)]_\pm \\
(-\omega \cot \omega L + M )
[\sin \omega(L+s)]_- \\
\end{array}
\right)
\, .
\end{eqnarray}
Note, for later use, that
\begin{eqnarray}
v(p_0,-\vec p,\omega;s)
&=&
\frac{\left( 1-\frac{\bar\sigma^\mu p_\mu}{-i\omega+M} \right)}
{\sqrt{2(|p_0|+M)}}
\left( \begin{array}{l}
- \sigma^\mu p_\mu [\sin \omega(L-s)]_\pm \\
(-\omega \cot \omega L + M )
[\sin \omega(L+s)]_- \\
\end{array}
\right)
\nonumber\\
&=&
\frac{\left( 1-\frac{\bar\sigma^\mu p_\mu}{-i\omega+M} \right)}
{\sqrt{2(|p_0|+M)}}
\frac{- \sigma^\mu p_\mu}{\omega \cot \omega L \pm M}
\left( \begin{array}{l}
(\omega \cot \omega L \pm M)[\sin \omega(L-s)]_\pm \\
\bar \sigma^\mu p_\mu [\sin \omega(L+s)]_- \\
\end{array}
\right)
\nonumber\\
&=&
\frac{i\omega+M}{\omega \cot \omega L \pm M}
\frac{\left( 1-\frac{\sigma^\mu p_\mu}{i\omega+M} \right)}
{\sqrt{2(|p_0|+M)}}
\left( \begin{array}{l}
(\omega \cot \omega L \pm M)[\sin \omega(L-s)]_\pm \\
\bar \sigma^\mu p_\mu [\sin \omega(L+s)]_- \\
\end{array}
\right)
\, .
\end{eqnarray}
\subsection{Mode expansion and equal-time anticommutation relation}
We define the field operator by the mode expansion as follows.
\begin{eqnarray}
\psi(x,s) \equiv
\int\frac{d^3p}{(2\pi)^3} \sum_\omega \frac{1}{N_\pm(\omega)}
\frac{1}{\sqrt{2p_0}}
\left\{ b(\vec p,\omega) u(p,\omega;s) \, e^{-ipx}
+d^\dagger(\vec p,\omega) v(p,\omega;s) \, e^{+ipx} \right\}
\, ,
\nonumber\\
\label{mode-expansion}
\end{eqnarray}
where the canonical anticommutation relations are assumed as
\begin{eqnarray}
\left\{ b(\vec p,\omega) , b^\dagger(\vec q,\omega^\prime) \right\}
&=& \delta^3(\vec p-\vec q) \delta_{\omega \omega^\prime} \, ,
\nonumber\\
\left\{ d(\vec p,\omega) , d^\dagger(\vec q,\omega^\prime) \right\}
&=& \delta^3(\vec p-\vec q) \delta_{\omega \omega^\prime} \, ,
\end{eqnarray}
and all other anticommutators vanish.
The equal-time anticommutation relation then follows:
\begin{equation}
\label{canonical-comulation-relation}
\left\{ \psi(x,s) , \psi^\dagger (y,t) \right\}\bigg\vert_{x^0=y^0}
=\delta^3(\vec x-\vec y)\Delta_\pm(s,t)
\, ,
\end{equation}
where
$\Delta_\pm(s,t) $ is defined by
\begin{eqnarray}
\Delta_\pm(s,t)
&\equiv&
\sum_\omega\frac{1}{N_\pm(\omega)}
\frac{1}{2p_0}
\left\{
u(p_0,\vec p,\omega;s) u^\dagger(p_0, \vec p,\omega;t)
+ v(p_0,-\vec p,\omega;s) v^\dagger(p_0,-\vec p,\omega;t)
\right\}
\nonumber\\
&=&
\sum_\omega\frac{1}{n_\pm(\omega)}
\left( \begin{array}{cc}
(\omega \cot \omega L \pm M)
[\sin \omega(L-s)]_\pm [\sin \omega(L-t)]_\pm & 0 \\
0 & (\omega \cot \omega L - M)
[\sin \omega(L+s)]_- [\sin \omega(L+t)]_- \\
\end{array}
\right)
\, .
\end{eqnarray}
We have used the following relation to obtain this result:
\begin{eqnarray}
&& \frac{1}{2p_0}
\left\{
u(p_0,\vec p,\omega;s) u^\dagger(p_0,\vec p,\omega;t)
+ v(p_0,-\vec p,\omega;s) v^\dagger(p_0,-\vec p,\omega;t)
\right\}
\nonumber\\
&=&
\frac{1}{2p_0 }
\left( \begin{array}{l}
\sigma^\mu p_\mu [\sin \omega(L-s)]_\pm \\
(-\omega \cot \omega L + M )
[\sin \omega(L+s)]_- \\
\end{array} \right)
\left(\frac{\sigma^\mu p_\mu}{\omega^2+M^2}\right)
\nonumber\\
&&\qquad\qquad\qquad\qquad
\times
\left( \begin{array}{ll}
\bar \sigma^\mu p_\mu [\sin \omega(L-t)]_\pm &
(-\omega \cot \omega L + M ) [\sin \omega(L+t)]_- \\
\end{array} \right)
\nonumber\\
&&\mbox{}+
\frac{\omega \cot \omega L - M}
{\omega \cot \omega L \pm M}
\frac{1}{2p_0}
\left( \begin{array}{l}
(\omega \cot \omega L \pm M)[\sin \omega(L-s)]_\pm \\
\bar\sigma^\mu p_\mu [\sin \omega(L+s)]_- \\
\end{array}
\right)
\left(\frac{\sigma^\mu p_\mu}{\omega^2+M^2}\right)
\nonumber\\
&&\qquad\qquad\qquad\qquad
\times
\left( \begin{array}{ll}
(\omega \cot \omega L \pm M)[\sin \omega(L-t)]_\pm &
\bar\sigma^\mu p_\mu [\sin \omega(L+t)]_- \\
\end{array}
\right)
\nonumber\\
&=&
\left( \begin{array}{ll}
[\sin \omega(L-s)]_\pm [\sin \omega(L-t)]_\pm & 0 \\
0& \frac{\omega \cot \omega L - M}
{\omega \cot \omega L \pm M}
[\sin \omega(L+s)]_- [\sin \omega(L+t)]_- \\
\end{array}
\right) \, .
\end{eqnarray}
\subsection{Propagator}
Once we have defined the field operator,
the two-point Green function can be obtained as follows.
We define the Green function by the time-ordered product as usual:
\begin{equation}
\label{feynman-green-function}
S_{F\pm}(x-y;s,t) \equiv \bra{0}T \psi(x,s) \bar\psi(y,t) \ket{0}
\, .
\end{equation}
Then we have
\begin{eqnarray}
S_{F\pm}(x-y;s,t)
&\equiv&
\int\!\frac{d^4p}{i(2\pi)^4} \sum_\omega
\frac{1}{N_\pm(\omega)}
\frac{e^{-ip(x-y)}}{M^2+\omega^2-p^2-i \varepsilon}
s_\pm(p,\omega;s,t)
\\
&=&
\int\!\frac{d^4p}{i(2\pi)^4} \sum_\omega
\frac{1}{n_\pm(\omega)}
\frac{e^{-ip(x-y)}}{M^2+\omega^2-p^2-i \varepsilon}
(\omega \cot \omega L \pm M) s_\pm(p,\omega;s,t)
\nonumber\\
&=&
\int\!\frac{d^4p}{i(2\pi)^4} e^{-ip(x-y)}
\left\{
P_R \fs p \Delta_{R\pm}(p;s,t)
+ P_L \fs p \Delta_{L-}(p;s,t) \right.
\nonumber\\
&&\qquad\qquad\qquad\qquad\qquad
\left. +P_R B_{RL\pm}(p;s,t) + P_L B_{LR\pm}(p;s,t) \right\}
\, ,
\end{eqnarray}
where
\begin{eqnarray}
s_\pm(p,\omega;s,t)
&=& u(p,\omega;s) \, \bar u(p,\omega;t)
\nonumber\\
&=&
\left( \begin{array}{l}
\bar \sigma^\mu p_\mu [\sin \omega(L-s)]_\pm \\
(-\omega \cot \omega L + M )
[\sin \omega(L+s)]_- \\
\end{array} \right)
\left(\frac{\sigma^\mu p_\mu}{\omega^2+M^2}\right)
\nonumber\\
&&\qquad\qquad\qquad
\times
\left( \begin{array}{ll}
\bar \sigma^\mu p_\mu [\sin \omega(L-t)]_\pm &
(-\omega \cot \omega L + M ) [\sin \omega(L+t)]_- \\
\end{array} \right) \, \gamma^0
\nonumber\\
&=&
P_R \fs p [\sin \omega(L-s)]_\pm [\sin \omega(L-t)]_\pm
\nonumber\\
&& \mbox{} + P_L \fs p [\sin \omega(L+s)]_- [\sin \omega(L+t)]_-
\left( \frac{\omega \cot \omega L - M}
{\omega \cot \omega L \pm M} \right)
\nonumber\\
&&\mbox{}
+P_R (-\omega \cot \omega L + M)
[\sin \omega(L-s)]_\pm [\sin \omega(L+t)]_-
\nonumber\\
&&\mbox{}
+P_L (-\omega \cot \omega L + M)
[\sin \omega(L+s)]_- [\sin \omega(L-t)]_\pm
\, ,
\end{eqnarray}
and
\begin{eqnarray}
\Delta_{R\pm}(p;s,t)
&=&
\sum_\omega
\frac{\left( \omega \cot \omega L \pm M \right)}{n_\pm(\omega)}
\frac{1}
{M^2+\omega^2-p^2-i \varepsilon}
[\sin \omega(L-s)]_\pm [\sin \omega(L-t)]_\pm
\, ,
\label{propagator-right-function}
\\
\Delta_{L-}(p;s,t)
&=&
\sum_\omega
\frac{\left( \omega \cot \omega L - M \right)}{n_\pm(\omega)}
\frac{1}
{M^2+\omega^2-p^2-i \varepsilon}
[\sin \omega(L+s)]_- [\sin \omega(L+t)]_-
\, ,
\label{propagator-left-function}
\\
B_{RL\pm}(p;s,t)
&=&
\sum_\omega
\frac{\left( \omega \cot \omega L \pm M \right)}{n_\pm(\omega)}
\frac{(-\omega \cot \omega L + M)}{M^2+\omega^2-p^2-i \varepsilon}
[\sin \omega(L-s)]_\pm [\sin \omega(L+t)]_-
\, ,
\label{propagator-right-to-left-function}
\\
B_{LR\pm}(p;s,t)
&=& B_{RL\pm}(p;t,s)
\nonumber\\
&=&
\sum_\omega
\frac{\left( \omega \cot \omega L \pm M \right)}{n_\pm(\omega)}
\frac{(-\omega \cot \omega L + M)}{M^2+\omega^2-p^2-i \varepsilon}
[\sin \omega(L+s)]_- [\sin \omega(L-t)]_\pm
\label{propagator-left-to-right-function}
\, .
\end{eqnarray}
\subsection{Sommerfeld-Watson Transformation}
As we have shown in the previous subsection, the normal modes
of $\omega$ for the fermion with kink-like mass differ from those
for the fermion with positive (negative) homogeneous mass.
For the fermion with kink-like mass term, we have
Eq.~(\ref{kink-mass-mode}),
\begin{eqnarray*}
(\omega^2 + M^2)
&=&
M^2 \frac{1}{\cos 2 \omega L } \qquad ( \omega > 0) \, ,
\\
(-\lambda^2 + M^2)
&=&
M^2 \frac{1}{\cosh 2 \lambda L } \qquad
( \lambda > 0 ; \omega=i\lambda) \, .
\end{eqnarray*}
On the other hand, for the
fermion with positive homogeneous mass, we have
Eq.~(\ref{homogeneous-mass-mode}),
\begin{eqnarray*}
\omega
&=&
M \tan 2 \omega L \qquad ( \omega > 0) \, ,
\\
\lambda
&=&
M \tanh 2 \lambda L \qquad
( \lambda > 0, \omega =i \lambda; \mbox{only for}\ +M ) \, .
\end{eqnarray*}
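Both mode equations are transcendental and can be solved numerically. The sketch below (the values $M=1$, $L=1$ are our own illustrative choice, not fixed by the text) brackets the low-lying real modes $\omega>0$ by scanning for sign changes and bisecting; the two spectra differ mode by mode, which is why the rearrangement below is needed.

```python
import math

M, L = 1.0, 1.0  # illustrative values, not fixed by the text

def bisect(f, a, b, tol=1e-12):
    """Bisection root finder; assumes f(a) and f(b) differ in sign."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
        if b - a < tol:
            break
    return 0.5 * (a + b)

def real_modes(f, wmax, n=4000):
    """Collect roots of f on (0, wmax) from sign changes on a grid."""
    grid = [i * wmax / n for i in range(1, n + 1)]
    return [bisect(f, a, b) for a, b in zip(grid, grid[1:])
            if f(a) * f(b) < 0]

# kink-like mass:   (omega^2 + M^2) cos(2 omega L) = M^2
kink = lambda w: (w * w + M * M) * math.cos(2 * w * L) - M * M
# homogeneous +M:   omega = M tan(2 omega L)
homo = lambda w: w * math.cos(2 * w * L) - M * math.sin(2 * w * L)

kink_modes = real_modes(kink, 12.0)
homo_modes = real_modes(homo, 12.0)
print("lowest kink modes:", [round(w, 6) for w in kink_modes[:3]])
print("lowest homogeneous modes:", [round(w, 6) for w in homo_modes[:3]])
```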
Therefore we encounter
the summations over the modes of $\omega$,
\begin{eqnarray*}
\sum_\omega \frac{1}{n_+(\omega)} F_+(\omega)
&=&
\sum_{\omega>0}
\frac{1}{n_+(\omega)} F_+(\omega)
\biggl|_{\omega^2+M^2=M^2/\cos 2\omega L}
\; + \; \frac{1}{n_+(i\lambda)} F_+(i\lambda)
\biggl|_{\scriptstyle -\lambda^2+M^2=M^2/\cosh
2\lambda L}
\, ,
\\
\sum_\omega \frac{1}{n_-(\omega)} F_-(\omega)
&=&
\sum_{\omega>0}
\frac{1}{n_-(\omega)} F_-(\omega)
\biggl|_{\omega=M\tan 2\omega L}
\qquad\quad \; + \; \frac{1}{n_+(i\lambda)} F_-(i\lambda)
\biggl|_{\scriptstyle
\lambda=M\tan 2\lambda L}
\, ,
\end{eqnarray*}
where $F_\pm(\omega)$ are certain functions.
In order to
perform the subtraction at finite fifth-space volume
${\scriptstyle L}$, we need to rearrange the modes so that
the massive modes become common.
To achieve this, we consider the Sommerfeld-Watson
transformation.
\subsubsection{General Case}
Let us consider the function
\begin{equation}
\frac{1}{\sin^2 \omega L \Delta_\pm(\omega)}\equiv
\frac{1}{
\sin^2 \omega L
\left[ (\omega^2 + M^2) -
(\omega \cot \omega L - M )
(\omega \cot \omega L \pm M ) \right] } \, .
\end{equation}
This function has poles at the values of $\omega$
given by the mode equation, Eq.~(\ref{mode-equation}),
\begin{equation}
(\omega^2 + M^2) \sin^2 \omega L
= (\omega \cos \omega L - M \sin \omega L )
(\omega \cos \omega L \pm M \sin \omega L ) \, .
\nonumber
\end{equation}
Since we can show
\begin{eqnarray}
&&
\frac{\partial}{\partial \omega}
\left\{ \sin^2 \omega L \Delta_\pm(\omega) \right\}
\nonumber\\
&&=2 L \cot\omega L \left\{ \sin^2 \omega L \Delta_\pm(\omega) \right\}
\nonumber\\
&&
+ 2 \omega
\left[
\Big(
\frac{ (\omega\cot\omega L \pm M)
+(\omega\cot\omega L - M) }{2} \Big)
\big( L-\frac{\sin 2\omega L}{2\omega} \big)
+\sin^2\omega L \right]
\, ,
\end{eqnarray}
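This derivative can be confirmed by a finite-difference check; the sketch below (illustrative values $M=0.7$, $L=1$, upper sign, all our own assumptions) compares a central difference of $\sin^2\omega L\,\Delta_\pm(\omega)$ against the closed form, whose first term carries the overall factor $L\cot\omega L$:

```python
import math

M, L, sign = 0.7, 1.0, +1  # illustrative values; sign = +1 / -1 picks Delta_+ / Delta_-

def g(w):
    """sin^2(omega L) * Delta_pm(omega)."""
    c = w * math.cos(w * L) / math.sin(w * L)   # omega cot(omega L)
    delta = (w * w + M * M) - (c - M) * (c + sign * M)
    return math.sin(w * L) ** 2 * delta

def g_prime_closed(w):
    """Closed form of d g / d omega; note the factor L in the first term."""
    c = w * math.cos(w * L) / math.sin(w * L)
    avg = 0.5 * ((c + sign * M) + (c - M))
    return (2 * L * (math.cos(w * L) / math.sin(w * L)) * g(w)
            + 2 * w * (avg * (L - math.sin(2 * w * L) / (2 * w))
                       + math.sin(w * L) ** 2))

w, h = 1.3, 1e-6
fd = (g(w + h) - g(w - h)) / (2 * h)   # central finite difference
print(fd, g_prime_closed(w))
```

With the root structure of $\sin^2\omega L\,\Delta_\pm$ this reproduces the residue $1/(2\omega\, n_\pm(\omega))$ quoted next.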
the residue at the pole is given as
\begin{equation}
\frac{1}{2 \omega } \frac{1}{n_\pm(\omega)} \, .
\end{equation}
For the case $\omega=i\lambda$, this expression also holds.
Accordingly, we are led to consider the following integral:
\begin{eqnarray}
I_\pm &\equiv &
\int_C
\!\frac{d\omega}{2\pi i}
\frac{2 \omega }{\sin^2 \omega L \Delta_\pm (\omega)}
F_\pm (\omega) \, ,
\label{integration-for-sommerfeld-watson-transformation}
\end{eqnarray}
with the contour $C$ shown below.
\unitlength 5mm
\begin{picture}(20,20)
\put(1,1){
\begin{picture}(101,0)
\put(12,0){\vector(0,1){16}}
\put(-2,8){\vector(1,0){28}}
\put(24,14){\line(1,0){1.5}}
\put(24,14){\line(0,1){1.5}}
\put(24.8,14.8){\makebox(0,0){$\omega$}}
\put(12.8,12){\circle*{0.3}}
\put(11.3,4){\circle*{0.3}}
\multiput(14,8)(3,0){4}{\circle*{0.3}}
\multiput(10,8)(-3,0){4}{\circle*{0.3}}
\multiput(15.5,8)(3,0){4}{\circle{0.3}}
\multiput(8.5,8)(-3,0){4}{\circle{0.3}}
\thicklines
\put(12,1){\vector(0,1){3.5}}
\put(12,4.5){\line(0,1){3.5}}
\put(12,8){\vector(0,1){3.5}}
\put(12,11.5){\line(0,1){3.5}}
\put(12,9){\oval(20,12)[rt]}
\put(12,7){\oval(20,12)[rb]}
\put(14,8){\oval(2,2)[l]}
\put(22,9){\vector(-1,0){4}}
\put(18,9){\line(-1,0){4}}
\put(14,7){\vector(1,0){4}}
\put(18,7){\line(1,0){4}}
\end{picture}
}
\end{picture}
We assume that $F_\pm$ is an even function of $\omega$,
that it vanishes at the origin so that
the whole integrand has no singularity there,
and that it vanishes at infinity so that
the contour integral at infinity vanishes.
We allow $F_\pm(\omega)$ to have poles, for example,
on the real axis; they are shown as white circles.
$\displaystyle \frac{2\omega}{\sin^2 \omega L \Delta_\pm (\omega)}$
has poles
on the real axis and two additional poles near the imaginary axis.
For sufficiently large $L$, we have
\begin{equation}
\omega=i\lambda \simeq iM + \varepsilon \, .
\end{equation}
Here we take into account the Feynman boundary condition,
that is, the infinitesimal imaginary part of the mass $M$.
These poles are shown as black circles. The integrand also has a pole
at the origin, but it is assumed not to contribute to the integral
because of the zero of $F_\pm$.
Then the above integral
leads to the identity
\begin{eqnarray}
\sum_\omega \frac{1}{n_\pm(\omega)} F_\pm(\omega)
= -\sum_{\omega^\prime}
\frac{2 \omega^\prime }{\sin^2 \omega^\prime L }
\frac{1}{\Delta_\pm (\omega^\prime)} \mbox{Res} F_\pm(\omega^\prime)
\biggl|_{\mbox{poles of $F_\pm$}}
\label{sommerfeld-watson-transformation-first}
\, .
\end{eqnarray}
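As a toy illustration of this pole-sum rearrangement (not the specific $n_\pm$ sums of the text), the same contour-residue argument applied to $\pi\cot\pi z$ yields the classical identity $\sum_{n\ge 1} (n^2\pi^2+a^2)^{-1} = (a\coth a - 1)/(2a^2)$, which is easy to check numerically:

```python
import math

a = 1.0        # illustrative value
N = 200_000    # truncation; the tail falls off like 1/(N pi^2)

# direct sum over the "modes" n pi versus the residue-rearranged closed form
partial = sum(1.0 / (n * n * math.pi ** 2 + a * a) for n in range(1, N + 1))
closed = (a / math.tanh(a) - 1.0) / (2.0 * a * a)
print(partial, closed)
```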
If the functions $F_\pm(\omega)$ possess a common series of poles,
we can rearrange the different series of modes into a common series of
modes. In some cases, several steps of the transformation
are needed to obtain the common series of modes.
\subsubsection{Transformation of the functions
$\Delta_{R\pm}(p;s,t)$ and $\Delta_{L-}(p;s,t)$}
\paragraph{Integrand in $\Delta_{R\pm}$}
$\Delta_{R\pm}$, given by
Eq.~(\ref{propagator-right-function}), has the following integrand:
\begin{eqnarray}
&&F_\pm(\omega)
\nonumber\\
&=&
\frac{1}
{M^2+\omega^2-p^2-i \varepsilon}
[\sin \omega(L-s)]_\pm [\sin \omega(L-t)]_\pm \,
(\omega \cot \omega L \pm M)
\nonumber\\
&=&
\frac{1}
{M^2+\omega^2-p^2-i \varepsilon}
\nonumber\\
&& \times
\left\{ \,
\theta(s)\theta(t) \,
\sin \omega(L-s) \, \sin \omega(L-t) \, (\omega \cot \omega L \pm M)
\right.
\nonumber\\
&&\quad
+\theta(-s)\theta(-t) \,
\frac{
(\omega \cos \omega(L+s) \pm M \sin \omega(L+s) ) \,
(\omega \cos \omega(L+t) \pm M \sin \omega(L+t) ) \,
}{\omega \cot \omega L \pm M}
\nonumber\\
&&\quad
+\theta(s)\theta(-t) \,
\sin \omega(L-s) \, (\omega \cos \omega(L+t) \pm M \sin \omega(L+t) )
\nonumber\\
&&\quad \left.
+\theta(-s)\theta(t) \,
(\omega \cos \omega(L+s) \pm M \sin \omega(L+s) ) \,\sin \omega(L-t)
\right\}
\, .
\end{eqnarray}
\paragraph{Poles}
There are three types of poles in the above $F_\pm(\omega)$.
We summarize the poles and their residues in Table I.
\vskip 8pt
\centerline{Table I \
Poles and Residues in $I_\pm$ for $\Delta_{R\pm}$}
\begin{tabular}{|c|c|c|c|}
\hline
\phantom{aa} singular part in $I_\pm$ \phantom{aa}
&
\phantom{aaaaa} pole \phantom{aaaaaaaaaaa}
& \phantom{a} residue \phantom{aaaaa}
&
${\scriptstyle
\frac{2 \omega}{\sin^2 \omega L \Delta_\pm (\omega) }} $
\phantom{aaaaaa}
\\
\hline
${\scriptstyle
\frac{2 \omega}{\sin^2 \omega L \Delta_\pm (\omega) }} $
& $\sin^2 \omega L \Delta_\pm (\omega)=0 $
& $\frac{1}{n_\pm(\omega)}$
& 1
\\
\hline\hline
$\omega\cot\omega L=\frac{\omega\cos\omega L}{\sin\omega L}$
& $ \sin\omega L=0 $
& $ \frac{\omega}{L}$
& $-\frac{2}{\omega}$
\\
\hline
$\frac{1}{(\omega \cot \omega L \pm M)} $
& $\omega \cot \omega L \pm M=0$
& $-\frac{\sin^2 \omega L}
{\omega \left(L- \frac{\sin2\omega L}{2\omega}\right)}$
& $\frac{2}{\omega}$
\\
\hline
$ \frac{1}{M^2+\omega^2-p^2-i \varepsilon} $
&$iP \equiv i\sqrt{M^2-p^2-i \varepsilon}$
&$\frac{1}{2iP}$
&$\frac{2 iP}{-\sinh^2 PL }\frac{1}{\Delta_\pm (iP)} $
\\
\hline
\end{tabular}
\vskip 16pt
\paragraph{First Stage Transformation}
By the Sommerfeld-Watson transformation given by
Eq.~(\ref{sommerfeld-watson-transformation-first}),
$\Delta_{R\pm}$ can be written as
\noindent
\baselineskip 21pt
\begin{eqnarray}
\Delta_{R\pm}(p;s,t)
&=&
\sum_\omega
\frac{\left( \omega \cot \omega L \pm M \right)}{n_\pm(\omega)}
\frac{1}
{M^2+\omega^2-p^2-i \varepsilon}
[\sin \omega(L-s)]_\pm [\sin \omega(L-t)]_\pm
\nonumber\\
&=&
\theta(s)\theta(t) \,
\sum_{\scriptstyle \sin \omega L }
\frac{2}{L}\,
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \, \sin \omega t
\nonumber\\
&&+
\theta(-s)\theta(-t) \,
\sum_{\scriptstyle \omega \cot \omega L \pm M}
\frac{2 \sin^2 \omega L}
{\omega^2 \left(L- \frac{\sin2\omega L }{2\omega} \right)} \,
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\nonumber\\
&&\qquad
\times (\omega \cos \omega(L+s) \pm M \sin \omega(L+s) ) \,
(\omega \cos \omega(L+t) \pm M \sin \omega(L+t) ) \,
\nonumber\\
&&
- \frac{(P \coth P L \pm M)}{\Delta_\pm (iP)} \,
\frac{ [\sinh P(L-s)]_\pm [\sinh P(L-t)]_\pm } {\sinh^2 PL }
\nonumber\\
&=&
\theta(s)\theta(t) \,
\sum_{\scriptstyle \sin \omega L }
\frac{2}{L}\,
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \, \sin \omega t
\nonumber\\
&&+
\theta(-s)\theta(-t) \,
\sum_{\scriptstyle \omega \cot \omega L \pm M}
\frac{2}{\left(L- \frac{\sin2\omega L }{2\omega} \right)} \,
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \, \sin \omega t
\nonumber\\
&&
- \frac{(P \coth P L \pm M)}{\Delta_\pm (iP)} \,
\frac{ [\sinh P(L-s)]_\pm [\sinh P(L-t)]_\pm } {\sinh^2 PL }
\, ,
\end{eqnarray}
\baselineskip=16pt
where we have used the relation,
\begin{equation}
\frac{\sin \omega L}{\omega}
(\omega \cos \omega(L+s) \pm M \sin \omega(L+s) )
= - \sin \omega s
\, ,
\end{equation}
for $\omega$ satisfying $\omega \cot \omega L \pm M=0$.
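This relation holds only on the modes. It can be confirmed numerically by locating one root of $\omega\cos\omega L \pm M\sin\omega L = 0$ (here the $+$ branch, with illustrative values $M=0.7$, $L=1$ of our own choosing) and evaluating both sides for several $s$:

```python
import math

M, L = 0.7, 1.0   # illustrative values

# one root of omega*cos(omega L) + M*sin(omega L) = 0 in (1.6, 3.1), by bisection
f = lambda w: w * math.cos(w * L) + M * math.sin(w * L)
a, b = 1.6, 3.1
for _ in range(200):
    m = 0.5 * (a + b)
    if f(a) * f(m) <= 0:
        b = m
    else:
        a = m
w = 0.5 * (a + b)

# check (sin(wL)/w) * (w cos w(L+s) + M sin w(L+s)) == -sin(ws) on the mode
for s in (0.25, -0.4, 0.9):
    lhs = (math.sin(w * L) / w) * (w * math.cos(w * (L + s))
                                   + M * math.sin(w * (L + s)))
    print(lhs, -math.sin(w * s))
```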
\paragraph{Second Stage Transformation} We need a further
Sommerfeld-Watson transformation for the part
in which the summation is taken over
the normal modes of $\omega$
given by $\omega \cot \omega L \pm M=0 $,
\begin{equation}
{\sum_\omega}\frac{2 }
{\left(L- \frac{\sin2\omega L }{2\omega} \right) } \;
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \sin \omega t \;
\biggl|_{\omega \cos \omega L \pm M\sin\omega L=0}
\, .
\end{equation}
For this purpose, we consider the following integral.
\begin{equation}
J_\pm \equiv
\int_C
\!\frac{d\omega}{2\pi i}\;
\frac{-2\omega }{\sin \omega L } \;
\frac{1}{\omega \cos \omega L \pm M \sin \omega L } \;
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \sin \omega t
\, .
\end{equation}
There are two types of poles in this case, which we summarize in
Table II.
\vskip 16pt
\centerline{Table II \
Poles and Residues in $J_\pm$ for $\Delta_{R\pm}$}
\begin{tabular}{|c|c|c|c|}
\hline
\phantom{aa} singular part in $J_\pm$ \phantom{aa}
&
\phantom{aaaaaa} pole \phantom{aaaaaaaaaaaa}
&
\phantom{a} residue \phantom{aaaaa}
&
${\scriptstyle
\frac{-2\omega }{\sin^2 \omega L } \;
\frac{1}{\omega \cot \omega L \pm M }
} $
\phantom{aaa}
\\
\hline
${\scriptstyle
\frac{-2\omega }{\sin \omega L } \;
\frac{1}{\omega \cos \omega L \pm M \sin \omega L } \;
} $
& $\omega \cos \omega L \pm M \sin \omega L =0 $
& ${ \scriptstyle
\frac{2}{\left(L- \frac{\sin2\omega L }{2\omega} \right) } }$
& 1
\\
\hline\hline
$\frac{-2\omega }{\sin \omega L } \;
\frac{1}{\omega \cos \omega L \pm M \sin \omega L } $
& $ \sin\omega L=0 $
& $ -\frac{2}{L}$
& 1
\\
\hline
$ \frac{1}{M^2+\omega^2-p^2-i \varepsilon} $
&$iP \equiv i\sqrt{M^2-p^2-i \varepsilon}$
&$\frac{1}{2iP}$
&$\frac{2 iP}{\sinh^2 PL }\frac{1}{P\coth PL \pm M} $
\\
\hline
\end{tabular}
\vskip 16pt
\noindent
By the Sommerfeld-Watson transformation at the second stage,
we obtain
\begin{eqnarray}
&&{\sum_\omega}\frac{2 }
{\left(L- \frac{\sin2\omega L }{2\omega} \right) } \;
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \sin \omega t \,
\biggl|_{\omega \cos \omega L \pm M\sin\omega L=0}
\nonumber\\
&&={\sum_{\sin \omega L}}\frac{2}{L}
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \sin \omega t
+
\frac{1}{P \coth P L \pm M}
\frac{ \sinh Ps \sinh P t}{ \sinh^2 PL}
\, .
\end{eqnarray}
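This rearrangement can be checked numerically. The sketch below (illustrative values of our own choosing, upper sign, and spacelike $p^2<0$ so that $P=\sqrt{M^2-p^2}$ is real and no $i\varepsilon$ is needed) sums the left-hand side over the roots of $\omega\cos\omega L + M\sin\omega L = 0$ and the right-hand side over $\omega = n\pi/L$ plus the hyperbolic boundary term:

```python
import math

M, L, s, t, p2 = 0.7, 1.0, 0.3, 0.6, -0.5   # illustrative values; p2 = p^2 < M^2
P = math.sqrt(M * M - p2)
N = 4000  # number of modes kept on each side

f = lambda w: w * math.cos(w * L) + M * math.sin(w * L)

def root(a, b):
    """Bisect the single sign change of f in (a, b)."""
    for _ in range(100):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

# LHS: modes of omega*cot(omega L) + M = 0, one per interval (n pi/L, (n+1) pi/L)
lhs = 0.0
for n in range(N):
    w = root(n * math.pi / L + 1e-9, (n + 1) * math.pi / L - 1e-9)
    weight = 2.0 / (L - math.sin(2 * w * L) / (2 * w))
    lhs += weight * math.sin(w * s) * math.sin(w * t) / (M * M + w * w - p2)

# RHS: modes of sin(omega L) = 0 plus the boundary term at omega = iP
rhs = sum((2.0 / L) * math.sin(w * s) * math.sin(w * t) / (M * M + w * w - p2)
          for w in (n * math.pi / L for n in range(1, N + 1)))
rhs += (math.sinh(P * s) * math.sinh(P * t)
        / ((P / math.tanh(P * L) + M) * math.sinh(P * L) ** 2))

print(lhs, rhs)
```

Both sides converge to the same Robin-boundary Green function; the small residual difference is the $\sim 1/N$ truncation tail of the two mode sums.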
\paragraph{Final result} The final form of $\Delta_{R\pm}$ is
given by
\begin{eqnarray}
\Delta_{R\pm}(p;s,t)
&=&
\sum_\omega
\frac{\left( \omega \cot \omega L \pm M \right)}{n_\pm(\omega)}
\frac{1}
{M^2+\omega^2-p^2-i \varepsilon}
[\sin \omega(L-s)]_\pm [\sin \omega(L-t)]_\pm
\nonumber\\
&=&
[ \theta(s)\theta(t) + \theta(-s)\theta(-t)] \,
\sum_{\sin\omega L}
\frac{2}{L}\,
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \, \sin \omega t \,
\nonumber\\
&&+
\frac{(P \coth P L \pm M)}{-\Delta_\pm (iP)} \,
\frac{ [\sinh P(L-s)]_\pm [\sinh P(L-t)]_\pm } {\sinh^2 PL }
\nonumber\\
&&+
\theta(-s)\theta(-t) \,
\frac{1}{P \coth P L \pm M}
\frac{ \sinh Ps \sinh P t}{ \sinh^2 PL}
\, .
\label{delta-R-sommerfeld-watson-transformed}
\end{eqnarray}
Similarly we obtain,
\begin{eqnarray}
\Delta_{L-}(p;s,t)
&=&
\sum_\omega
\frac{\left( \omega \cot \omega L - M \right)}{n_\pm(\omega)}
\frac{1}
{M^2+\omega^2-p^2-i \varepsilon}
[\sin \omega(L+s)]_- [\sin \omega(L+t)]_-
\nonumber\\
&=&
[\theta(s)\theta(t) + \theta(-s)\theta(-t)] \,
\sum_{\sin\omega L}
\frac{2}{L}\,
\frac{1}{M^2+\omega^2-p^2-i \varepsilon} \,
\sin \omega s \, \sin \omega t \,
\nonumber\\
&&+
\frac{(P \coth P L - M)}{-\Delta_\pm (iP)} \,
\frac{ [\sinh P(L+s)]_- [\sinh P(L+t)]_- } {\sinh^2 PL }
\nonumber\\
&&+
\theta(s)\theta(t) \,
\frac{1}{P \coth P L -M}
\frac{ \sinh Ps \sinh P t}{ \sinh^2 PL}
\, .
\label{delta-L-sommerfeld-watson-transformed}
\end{eqnarray}
We can perform a similar transformation for
$B_{RL\pm}(p;s,t)$ and $B_{LR\pm}(p;s,t)$.
In this case, however, we need to improve the convergence
of the $\omega$ integration, using the relation
\begin{equation}
\left( \omega \cot \omega L \pm M \right)
(\omega \cot \omega L - M)
= \omega^2+M^2 = \frac{M^2}{ \cos^{(3\mp 1)/2} 2\omega L}
\, .
\label{mass-term-improve-convergence}
\end{equation}
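Both equalities hold only on the respective normal modes. A quick numerical confirmation (illustrative values $M=L=1$, our own choice; first kink mode for the upper sign, first homogeneous mode for the lower sign):

```python
import math

M, L = 1.0, 1.0  # illustrative values

def bisect(f, a, b):
    for _ in range(200):
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m
    return 0.5 * (a + b)

cot = lambda x: math.cos(x) / math.sin(x)

# upper sign: kink mode equation (omega^2 + M^2) cos(2 omega L) = M^2
wk = bisect(lambda w: (w * w + M * M) * math.cos(2 * w * L) - M * M, 2.4, 3.1)
lhs_k = (wk * cot(wk * L) + M) * (wk * cot(wk * L) - M)
print(lhs_k, wk * wk + M * M, M * M / math.cos(2 * wk * L))

# lower sign: homogeneous mode equation omega = M tan(2 omega L)
wh = bisect(lambda w: w * math.cos(2 * w * L) - M * math.sin(2 * w * L), 1.8, 2.3)
lhs_h = (wh * cot(wh * L) - M) ** 2
print(lhs_h, wh * wh + M * M, M * M / math.cos(2 * wh * L) ** 2)
```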
\section{Perturbation Theory at Finite Extent of Fifth Dimension}
\label{sec:perturbation-at-finite-extent-of-fifth-dimension}
In this section,
we formulate the perturbative expansion of
the would-be vacuum overlap based on the theory
of {\it free} five-dimensional fermions
at finite extent of the fifth dimension,
which takes into account the chiral boundary condition.
The expansion can be performed independently for
the five-dimensional volume contribution and
the four-dimensional boundary contribution.
As a subsidiary regularization,
we adopt the dimensional regularization.
As for the volume contribution,
the subtraction can be performed at {\it finite $L$},
thanks to the Sommerfeld-Watson
transformation, in each order of the expansion.
The limit of infinite $L$ can then be evaluated.
As for the boundary contribution,
we first derive the boundary state wave functions by taking the limit
$L^\prime \rightarrow \infty$.
Since the boundary contribution is given by
the correlation between the boundaries, it is expected to be
finite in the limit $L \rightarrow \infty$,
making the subtraction irrelevant.
As long as we do not take into account the breaking
of the chiral boundary condition due to the dimensional regularization,
we can see that this is actually the case and that the cluster property
holds:
the boundary contribution consists of the sum of the contributions
from the two boundaries, and the contribution of
the fermion with kink-like mass can be replaced by that of the fermion
with homogeneous positive (negative) mass.
Then {\it we assume that this cluster property holds even under
the dimensional regularization}.
{}From the cluster property and the parity invariance of the fermion with
the homogeneous mass, we can show that
the boundary contribution is parity-odd in the limit
$L \rightarrow \infty$.
\subsection{Perturbation expansion of the determinant of $K$}
The perturbative expansion of the volume contribution,
Eq.~(\ref {overlap-formula-determinant-contribution}),
\begin{equation}
\frac{\displaystyle \det \left( K \right) }
{\sqrt{ \displaystyle
\det \left( K_- \right) \det \left( K_+ \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\nonumber
\, ,
\end{equation}
can be performed as follows.
\begin{eqnarray}
\label{perurbative-expansion-of-detK}
\ln \det \left( K \right)\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
&=&
\ln \det \left(
\left\{1+i\fs A \, (K^0)^{-1}\big\vert_{[-{\scriptstyle L},{\scriptstyle L}]} \right\}
K^0 \right)\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\\
&=&
\sum_{n=1}^{\infty}\frac{(-i)^n }{n}
{\rm Tr} \left\{ \fs A \, (K^0)^{-1}\big\vert_{[-{\scriptstyle L},{\scriptstyle L}]} \right\}^n
+ \ln \det \left( K^0 \right)\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\\
&=&
\sum_{n=1}^{\infty}\frac{(-)^n}{n}
{\rm Tr} \left\{ \fs A \, S_{F+}\big\vert_{[-{\scriptstyle L},{\scriptstyle L}]} \right\}^n
+\ln \det \left( K^0 \right)\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\, .
\end{eqnarray}
This expansion should be evaluated with the propagator $S_{F+}$
supplemented by the dimensional regularization.
A similar expansion can be performed for the
determinants of $K_\pm$. The subtraction can be
performed at each order of the expansion. Thus we have
\begin{eqnarray}
\label{perurbative-expansion-of-volume-contribution}
&& \ln \left[
\frac{\displaystyle \det \left( K \right) }
{\sqrt{ \displaystyle
\det \left( K_- \right) \det \left( K_+ \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\right]
-\ln \left[
\frac{\displaystyle \det \left( K^0 \right) }
{\sqrt{ \displaystyle
\det \left( K_-^0 \right) \det \left( K_+^0 \right)
}}\bigg\vert_{[-{\scriptstyle L},{\scriptstyle L}]}
\right]
\nonumber\\
&&
=
\sum_{n=1}^{\infty}\frac{(-)^n}{n}
\left[
{\rm Tr} \left\{ \fs A \, S_{F+}\big\vert_{[-{\scriptstyle L},{\scriptstyle L}]} \right\}^n
-\frac{1}{2}{\rm Tr} \left\{ \fs A \, S_{F-}(+M)\big\vert_{[-{\scriptstyle L},{\scriptstyle L}]} \right\}^n
-\frac{1}{2}{\rm Tr} \left\{ \fs A \, S_{F-}(-M)\big\vert_{[-{\scriptstyle L},{\scriptstyle L}]} \right\}^n
\right]
\nonumber\\
&&
\equiv i \Gamma_K[A]
\, .
\end{eqnarray}
After the subtraction,
the limit of infinite $L$ can be evaluated.
An explicit calculation of the vacuum polarization is given
in a later section.
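The algebraic identity behind these expansions, $\ln\det(1+X) = \sum_{n\ge 1}\frac{(-1)^{n+1}}{n}{\rm Tr}\,X^n$, can be checked on a toy $2\times 2$ matrix (the matrix entries below are arbitrary small numbers of our own choosing, so that the series converges):

```python
import math

# toy 2x2 matrix X with small entries so the log series converges
X = [[0.10, -0.05],
     [0.02, 0.08]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

# left side: ln det(1 + X) computed directly
one_plus = [[1 + X[0][0], X[0][1]], [X[1][0], 1 + X[1][1]]]
det = one_plus[0][0] * one_plus[1][1] - one_plus[0][1] * one_plus[1][0]
lhs = math.log(det)

# right side: sum_{n>=1} (-1)^{n+1}/n Tr X^n, truncated
rhs, Xn = 0.0, [[1.0, 0.0], [0.0, 1.0]]
for n in range(1, 30):
    Xn = matmul(Xn, X)
    rhs += (-1) ** (n + 1) / n * trace(Xn)

print(lhs, rhs)
```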
\subsection{Perturbation expansion of the boundary term}
Next we consider the perturbative expansion of the
boundary contribution,
Eq.~(\ref{overlap-formula-boundary-contribution}),
\begin{equation}
\frac{\displaystyle {\det}' \left( 1-M-X_{-+}^{{\scriptstyle L}^\prime} \right) }
{\sqrt{ \displaystyle
{\det}' \left( 1-M_--X_{--}^{{\scriptstyle L}^\prime} \right)
{\det}' \left( 1-M_+-X_{++}^{{\scriptstyle L}^\prime} \right)
}}
\, .
\nonumber
\end{equation}
\subsubsection{Boundary state wave function}
We first derive boundary state wave functions taking the limit
$L^\prime \rightarrow \infty$.
We have derived the propagator of the fermion with
positive and negative homogeneous masses satisfying
the chiral boundary condition $\Big\vert_{[{\scriptstyle -L},{\scriptstyle L}]} $
in the previous section.
Since translational invariance holds for the
fermion with homogeneous mass, we can show
\begin{eqnarray}
&&S_{F-}[+ M](p;s+2{\scriptstyle L^\prime},t+2{\scriptstyle L^\prime}) \bigg\vert_{[{\scriptstyle L},{\scriptstyle L}+2{\scriptstyle L^\prime}]}
\phantom{\scriptstyle --}
= S_{F-}[+ M](p;s,t) \bigg\vert_{[-{\scriptstyle L^\prime},{\scriptstyle L^\prime}]}
\, ,
\\
&& S_{F-}[-M](p;s-2{\scriptstyle L^\prime},t-2{\scriptstyle L^\prime}) \bigg\vert_{[{\scriptstyle -L}-2{\scriptstyle L^\prime},{\scriptstyle -L}]}
= S_{F-}[-M](p;s,t) \bigg\vert_{[-{\scriptstyle L^\prime},{\scriptstyle L^\prime}]}
\, .
\end{eqnarray}
Then, using the previous results,
we can give the explicit forms of the boundary state wave functions,
Eqs.~(\ref{boundary-wave-function-left}) and
(\ref{boundary-wave-function-right}), and of $X_\pm^{\scriptstyle L^\prime}$,
Eqs.~(\ref{breaking-source-left-continuum-limit}),
(\ref{breaking-source-right-continuum-limit}) and
(\ref{breaking-source-total-continuum-limit}).
Actually we have
\begin{eqnarray}
&& iS_{F-}[+M](p;{\scriptstyle L^\prime},{\scriptstyle L^\prime})
\nonumber\\
&=&
P_L \fs p \Delta_{L-}(p;{\scriptstyle L^\prime},{\scriptstyle L^\prime})
\nonumber\\
&=&
P_L \fs p \,
\sum_\omega
\frac{1}{n_-(\omega)}
\frac{1}{M^2+\omega^2-p^2-i \varepsilon}
\frac{\omega^2}{\left( \omega \cot \omega L^\prime - M \right)}
\nonumber\\
&=&
P_L \fs p \,
\frac{1}{-\Delta_- (iP)} \,
\frac{P^2 }{(P \coth P L^\prime - M)}
\frac{1}{\sinh^2 PL^\prime }
+P_L \fs p \,
\frac{1}{P \coth P L' -M}
\, .
\end{eqnarray}
In the last equality, we have used the result of
the Sommerfeld-Watson transformation,
Eq.~(\ref{delta-L-sommerfeld-watson-transformed}).
Similarly, we obtain
\begin{eqnarray}
&& iS_{F-}[+M](p;-{\scriptstyle L^\prime},-{\scriptstyle L^\prime})
\nonumber\\
&=&
P_R \fs p \,
\frac{1}{-\Delta_- (iP)} \,
\frac{P^2 }{(P \coth P L^\prime - M)}
\frac{1}{\sinh^2 PL^\prime }
+P_R \fs p \,
\frac{1}{P \coth P L' -M}
\, .
\end{eqnarray}
\begin{eqnarray}
iS_{F-}[+M](p;{\scriptstyle L^\prime},-{\scriptstyle L^\prime})
&=&
P_L B_{LR-}(p;{\scriptstyle L^\prime},-{\scriptstyle L^\prime})
\nonumber\\
&=&
P_L \,
\sum_\omega
\frac{1}{n_-(\omega)}
\frac{1}{M^2+\omega^2-p^2-i \varepsilon}
(-\omega^2)
\nonumber\\
&=&
P_L \,
\sum_\omega
\frac{1}{n_-(\omega)}
\frac{1}{M^2+\omega^2-p^2-i \varepsilon}
(M^2-\frac{M^2}{ \cos^2 2\omega L'})
\nonumber\\
&=&
P_L \,
\frac{1}{-\sinh^2 PL^\prime \Delta_- (iP)} \,
(M^2-\frac{M^2}{ \cosh^2 2P L^\prime})
\label{tree-boundary-correlation-LR}
\, .
\end{eqnarray}
\begin{eqnarray}
iS_{F-}[+M](p;-{\scriptstyle L^\prime},{\scriptstyle L^\prime})
&=&
P_R \,
\frac{1}{-\sinh^2 PL^\prime \Delta_- (iP)} \,
(M^2-\frac{M^2}{ \cosh^2 2P L^\prime})
\label{tree-boundary-correlation-RL}
\, .
\end{eqnarray}
The calculation of
Eqs.~(\ref{tree-boundary-correlation-LR})
and (\ref{tree-boundary-correlation-RL}) is given
in a later subsection.
Assuming that $p^2 \not =0$, we take the limit
$L^\prime \rightarrow \infty$
and obtain
\begin{eqnarray}
\lim_{{\scriptstyle L^\prime}\rightarrow \infty}
& i S_{F-}[+M](p;{\scriptstyle L^\prime},{\scriptstyle L^\prime})& = P_L \fs p \, \frac{1}{P -M} \, ,\\
\lim_{{\scriptstyle L^\prime}\rightarrow \infty}
&iS_{F-}[+M](p;-{\scriptstyle L^\prime},-{\scriptstyle L^\prime})& = P_R \fs p \, \frac{1}{P -M} \, ,\\
\lim_{{\scriptstyle L^\prime}\rightarrow \infty}
&iS_{F-}[+M](p;{\scriptstyle L^\prime},-{\scriptstyle L^\prime})& = 0 \, ,\\
\lim_{{\scriptstyle L^\prime}\rightarrow \infty}
&iS_{F-}[+M](p;-{\scriptstyle L^\prime},{\scriptstyle L^\prime})& = 0 \, .
\end{eqnarray}
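These limits are easy to confirm numerically for spacelike momenta, where $P=\sqrt{M^2-p^2}>M>0$ (the parameter values below are illustrative assumptions): the correlation across the fifth dimension dies off like $e^{-2PL^\prime}$.

```python
import math

M, p2 = 0.7, -0.5          # illustrative values; p^2 < 0 so P is real
P = math.sqrt(M * M - p2)

def diag(Lp):
    """Coefficient of P_L pslash in i S_{F-}[+M](p; L', L')."""
    coth = 1.0 / math.tanh(P * Lp)
    delta = (M * M - P * P) - (P * coth - M) ** 2   # Delta_-(iP)
    return (P * P / (-delta * (P * coth - M) * math.sinh(P * Lp) ** 2)
            + 1.0 / (P * coth - M))

def off_diag(Lp):
    """Coefficient of P_L in i S_{F-}[+M](p; L', -L')."""
    delta = (M * M - P * P) - (P / math.tanh(P * Lp) - M) ** 2
    return (M * M - M * M / math.cosh(2 * P * Lp) ** 2) / (
        -math.sinh(P * Lp) ** 2 * delta)

for Lp in (5.0, 10.0, 20.0):
    print(Lp, diag(Lp), off_diag(Lp))
print("limit:", 1.0 / (P - M))
```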
Therefore we obtain
\begin{eqnarray}
\label{breaking-source-continuum-limit-explicit-form}
X_{-+}^{\infty}(p)
&=&
\left(
\begin{array}{cc} X_-^{\infty}(p) & 0 \\
0 & X_+^{\infty}(p)
\end{array} \right)
=
\left(
\begin{array}{cc} P_L \fs p \, \frac{-1}{P +M} & 0 \\
0 & P_R \fs p \, \frac{-1}{P -M}
\end{array} \right)
\label{breaking-source-total-continuum-limit-explicit-form}
\, .
\end{eqnarray}
In terms of the wave function, the explicit gauge symmetry
breaking term can be written as follows.
\begin{eqnarray}
&&\langle b_- \ket{\psi_R({\scriptstyle -L}),\bar \psi_R({\scriptstyle -L}) }
\nonumber\\
&&=
c_-^\ast
\exp \left\{
-\int d^4x d^4y \,
\bar\psi_R(x,{\scriptstyle -L})
\int\!\frac{d^4p}{i(2\pi)^4} e^{-ip(x-y)}
\frac{ \fs p }{P +M} \,
\psi_R(y,{\scriptstyle -L}) \right\}
\label{boundary-wave-function-right-explicit-form}
\, ,
\\
&&\bra{\psi_L({\scriptstyle L}),\bar \psi_L({\scriptstyle L}) }b_+ \rangle
\nonumber\\
&&=
c_+
\exp \left\{ -\int d^4x d^4y \,
\bar\psi_L(x,{\scriptstyle L})
\int\!\frac{d^4p}{i(2\pi)^4} e^{-ip(x-y)}
\frac{\fs p }{P -M} \,
\psi_L(y,{\scriptstyle L}) \right\}
\label{boundary-wave-function-left-explicit-form}
\, .
\end{eqnarray}
\subsubsection{Perturbation expansion of the boundary term}
Given the explicit form of the boundary state wave function,
we next consider the perturbative expansion of
$1-M-X_{-+}^\infty$.
$S_{F\pm}[A](x-y;s,t)$ can be expanded as
\begin{equation}
S_{F\pm}[A]
= S_{F\pm}
+ \sum_{n=1}^\infty \left\{ S_{F\pm} \cdot
(-)\fs A \cdot \right\}^n S_{F\pm}
\, ,
\end{equation}
where the following abbreviation is used,
\begin{equation}
S_{F\pm} \cdot
(-)\fs A \cdot S_{F\pm}
\equiv
\int\!d^4z \int^{\scriptstyle L}_{\scriptstyle -L}\!du S_{F\pm}(x-z;s,u) (-) {\fs A}(z)
S_{F\pm}(z-y;u,t)
\, .
\end{equation}
Then we obtain the expansion,
\begin{eqnarray}
1-M-X_{-+}^\infty =
1-M^0-X_{-+}^\infty
+
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
i \sum_{n=1}^\infty \left\{ S_{F\pm} \cdot (-)\fs A \cdot \right\}^n S_{F\pm}
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\, .
\end{eqnarray}
Here we have also introduced the abbreviation for the boundary condition:
\begin{equation}
..P_R \psi(x,s) = P_R \psi(x,{\scriptstyle -L}), \hskip 16pt
\bar \psi(x,s) P_L ..= \bar \psi(x,{\scriptstyle -L}) P_L \, .
\end{equation}
\begin{equation}
.P_L \psi(x,s) = P_L \psi(x,{\scriptstyle L}), \hskip 16pt
\bar \psi(x,s) P_R .= \bar \psi(x,{\scriptstyle L}) P_R \, .
\end{equation}
Then we obtain the perturbative expansion of the
boundary contribution.
\begin{eqnarray}
&&\ln {\det}' \left( 1-M-X_{-+}^\infty \right)
-\ln {\det}' \left( 1-M^0-X_{-+}^\infty \right)
\nonumber\\
&&=\sum_{m=1}^\infty \frac{(-)^m}{m}
{{\rm Tr}}^\prime\left\{
D^{\scriptstyle L}
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
\sum_{n=1}^\infty \left\{ S_{F+} \cdot (-)\fs A \cdot \right\}^n S_{F+}
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\right\}^m
\, ,
\end{eqnarray}
where we denote the inverse of $\left( 1-M^0-X_{-+}^\infty \right)$ as
$D^{\scriptstyle L}$,
\begin{eqnarray}
{D^{\scriptstyle L}}^{-1}(p)&\equiv & \left( 1-M^0-X_{-+}^\infty \right)(p)
\nonumber\\
&=&
1
+\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
i S_{F+}(p;s,t)
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
+
\left(
\begin{array}{cc} P_L \fs p \, \frac{1}{P +M} & 0 \\
0 & P_R \fs p \, \frac{1}{P -M}
\end{array} \right)
\, .
\end{eqnarray}
The contribution of the fermion with positive or negative homogeneous
mass can be expanded in a similar manner.
\begin{eqnarray}
&&\ln {\det}' \left( 1-M_\pm -X_{\pm\pm}^\infty \right)
-\ln {\det}' \left( 1-M^0_\pm -X_{\pm\pm}^\infty \right)
\nonumber\\
&&=
\sum_{m=1}^\infty \frac{(-)^m}{m}
{{\rm Tr}}^\prime\left\{
D^{\scriptstyle L}_\pm
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
i \sum_{n=1}^\infty \left\{ S_{F-}[\pm M] \cdot (-)\fs A \cdot \right\}^n
S_{F-}[\pm M]
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\right\}^m
\, ,
\end{eqnarray}
and
\begin{eqnarray}
{D^{\scriptstyle L}_\pm}^{-1}(p)&\equiv & \left( 1-M^0_\pm-X_{\pm\pm}^\infty \right)
\nonumber\\
&=&
1
+\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
i S_{F-}[\pm M](p;s,t)
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
+
\left(
\begin{array}{cc} P_L \fs p \, \frac{1}{P \mp M} & 0 \\
0 & P_R \fs p \, \frac{1}{P \mp M}
\end{array} \right)
\, .
\end{eqnarray}
These expressions can also be regularized by the dimensional regularization.
\subsubsection{Cluster property and parity}
Next we consider taking the limit $L \rightarrow \infty$.
Since the boundary contribution is given by
the correlations between the boundaries, it is expected to be
finite in the limit $L \rightarrow \infty$,
making the subtraction irrelevant.
As long as we do not take into account the breaking
of the chiral boundary condition due to the dimensional regularization,
we will see that this is actually the case and that the cluster property
holds:
the correlation between the two boundaries vanishes
in the limit $L \rightarrow \infty$,
and the remaining diagonal contribution from each boundary is
equal to that of the fermion with homogeneous mass
(positive or negative according to the sign of the mass at that
boundary).
We will assume that {\it this cluster property holds
even under the dimensional regularization}.
Then, as a result of the cluster property and the parity invariance
of the fermion with the homogeneous mass, we will show that
the boundary contribution is parity-odd in the limit
$L \rightarrow \infty$.
We first consider the leading term in the perturbative
expansion,
\begin{eqnarray}
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
S_{F\pm}(p;s,t)
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\, .
\end{eqnarray}
The diagonal components can be read off from
Eqs.~(\ref{delta-R-sommerfeld-watson-transformed}),
(\ref{delta-L-sommerfeld-watson-transformed}).
\begin{eqnarray}
..P_R i S_{F\pm}(p;s,t) P_L..
&=&
P_R \fs p \, \left\{
\frac{(P \coth P L - M)}{-\Delta_- (iP)} \,
\frac{ P^2}{\sinh^2 PL }
+
\frac{1}{P \coth P L \pm M}
\right\}
\nonumber\\
.P_L i S_{F\pm}(p;s,t) P_R.
&=&
P_L \fs p \, \left\{
\frac{(P \coth P L \pm M)}{-\Delta_\pm (iP)} \,
\frac{ P^2}{\sinh^2 PL }
+
\frac{1}{P \coth P L \pm M}
\right\}
\, .
\end{eqnarray}
Off-diagonal components, which give the correlation
between the boundaries at $s=-L$ and $s=L$,
are given by the following summations.
\begin{eqnarray}
..P_R i S_{F\pm}(p;s,t) P_R.
&=&
-P_R \,
\sum_\omega
\frac{1}{n_\pm(\omega)}
\frac{\omega^2}{M^2+\omega^2-p^2-i \varepsilon}
\nonumber\\
.P_L i S_{F\pm}(p;s,t) P_L..
&=&
-P_L \,
\sum_\omega
\frac{-1}{n_\pm(\omega)}
\frac{\omega^2}{M^2+\omega^2-p^2-i \varepsilon}
\end{eqnarray}
These summations can be performed using
the technique of the Sommerfeld-Watson transformation.
First we need to improve the convergence
in $\omega$, using
Eq. (\ref{mass-term-improve-convergence}).
Then, referring to Table III,
\vskip 16pt
\centerline{Table III \
Poles and Residues in $I_\pm$ for $ ..P_R S_{F\pm} P_L..$ }
\begin{tabular}{|c|c|c|c|}
\hline
\phantom{aa} singular part in $I_\pm$ \phantom{aa}
& \phantom{aaaaa} pole \phantom{aaaaaaaaaaa}
& \phantom{a} residue \phantom{aaaaa}
&
${\scriptstyle
\frac{2 \omega}{\sin^2 \omega L \Delta_\pm (\omega) }} $
\\
\hline
${\scriptstyle
\frac{2 \omega}{\sin^2 \omega L \Delta_\pm (\omega) }} $
& $\sin^2 \omega L \Delta_\pm (\omega)=0 $
& $\frac{1}{n_\pm(\omega)}$
& 1
\\
\hline\hline
${\scriptstyle
\frac{M^2}{ \cos^{(3\mp 1)/2} 2 \omega L}}$
& $ \cos 2 \omega L =0 $
& $ \frac{M^2}{-2L \sin 2 \omega L}\frac{1\pm 1}{2} $
& $\frac{2\omega}{M^2}$
\\
\hline
$ \frac{1}{M^2+\omega^2-p^2-i \varepsilon} $
&$iP \equiv i\sqrt{M^2-p^2-i \varepsilon}$
&$\frac{1}{2iP}$
&$\frac{2 iP}{-\sinh^2 PL }\frac{1}{\Delta_\pm (iP)} $
\\
\hline
\end{tabular}
\vskip 16pt
\noindent
we obtain
\begin{eqnarray}
&&
\sum_\omega
\frac{1}{n_\pm(\omega)}
\frac{\omega^2}{M^2+\omega^2-p^2-i \varepsilon}
\nonumber\\
&&=
\sum_\omega
\frac{1}{n_\pm(\omega)}
\frac{1}{M^2+\omega^2-p^2-i \varepsilon}
\left( \frac{M^2}{ \cos^{(3\mp 1)/2} 2\omega L}-M^2 \right)
\nonumber\\
&&=
\frac{1}{\sinh^2 PL \Delta_\pm (iP)}
\left( \frac{M^2}{ \cosh^{(3\mp 1)/2} 2 P L}-M^2 \right)
+\frac{1\pm 1}{2}
\sum_{\cos 2\omega L } \frac{1}{L} \frac{\omega}{\sin 2\omega L}
\frac{1}{M^2+\omega^2-p^2-i \varepsilon}
\nonumber\\
&&=
\frac{1}{\sinh^2 PL \Delta_\pm (iP)}
\left( \frac{M^2}{ \cosh^{(3\mp 1)/2} 2 P L}-M^2 \right)
+\frac{1\pm 1}{2}
\sum_{n=0}^\infty
\frac{(-)^n (\frac{\pi}{4}+n\frac{\pi}{2})}
{(\frac{\pi}{4}+n\frac{\pi}{2})^2+ L^2(M^2-p^2-i \varepsilon)}
\nonumber\\
&&=
\frac{1}{\sinh^2 PL \Delta_\pm (iP)}
\left( \frac{M^2}{ \cosh^{(3\mp 1)/2} 2 P L}-M^2 \right)
+\frac{1\pm 1}{2}
\frac{1}{\cosh 2PL}
\, .
\end{eqnarray}
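The last equality rests on the alternating series
$\sum_{n\ge 0}(-)^n a_n/(a_n^2+P^2L^2)$ with $a_n=\frac{\pi}{4}+n\frac{\pi}{2}$
summing to $1/\cosh 2PL$ (dropping the $i\varepsilon$).
As an illustrative numerical sketch of this identity only
(the variable {\tt a} stands for the product $PL$):

```python
import math

def sw_alternating_sum(a, n_terms=200_000):
    """Partial sum of sum_{n>=0} (-1)^n a_n / (a_n^2 + a^2),
    with a_n = pi/4 + n*pi/2, as in the last step above (a = P*L)."""
    s = 0.0
    for n in range(n_terms):
        a_n = math.pi / 4 + n * math.pi / 2
        s += (-1) ** n * a_n / (a_n ** 2 + a ** 2)
    return s

a = 1.3                        # stands for the product P*L
lhs = sw_alternating_sum(a)
rhs = 1.0 / math.cosh(2 * a)   # the claimed closed form
assert abs(lhs - rhs) < 1e-4
```

The alternating series converges conditionally, so the partial-sum error is bounded by the first omitted term.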
{}From these results, we can see that
the correlations between the boundaries
have a definite limit when $L\rightarrow \infty$
and we obtain
\begin{eqnarray}
\lim_{{\scriptstyle L}\rightarrow \infty}
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
i S_{F\pm}(p;s,t)
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
=
\left( \begin{array}{cc}
P_R \fs p \frac{1}{P\pm M} & 0 \\
0 & P_L \fs p \frac{1}{P-M} \end{array} \right)
\, .
\label{correlation-between-boundary-leading}
\end{eqnarray}
Thus the correlation between the two boundaries vanishes
and the remaining diagonal contribution from each boundary is
equal to that of the fermion with homogeneous mass
(positive or negative according to the signature of mass at that
boundary).
Next we consider higher order terms,
\begin{eqnarray}
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
i \sum_{n=1}^\infty \left\{ S_{F \pm} \cdot (-)\fs A \cdot \right\}^n
S_{F\pm}
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\end{eqnarray}
When the dimensional regularization is not taken into account,
we can see by explicit calculation
that the cluster property holds
(see Appendix \ref{appendix:cluster-property-of-correlation}):
\begin{eqnarray}
&&
\lim_{ {\scriptstyle L} \rightarrow \infty}
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
i \sum_{n=1}^\infty \left\{ S_{F \pm} \cdot (-)\fs A \cdot \right\}^n
S_{F\pm}
\right]
\left( \begin{array}{cc} P_L .. & P_R . \end{array} \right)
\nonumber\\
&& =
\lim_{{\scriptstyle L} \rightarrow \infty}
\left(
\begin{array}{c}
..P_R \left[
i \sum_{n=1}^\infty \left\{ S_{F-}[\mp M]
\cdot (-)\fs A \cdot \right\}^n S_{F-}[\mp M]
\right] P_L .. \\
0 \end{array} \right.
\nonumber\\
&&
\hskip 3cm
\left.
\begin{array}{c}
0 \\
.P_L
\left[
i \sum_{n=1}^\infty \left\{ S_{F-}[+M]
\cdot (-)\fs A \cdot \right\}^n S_{F-}[+M]
\right] P_R .
\end{array} \right)
\, .
\label{cluster-property-in-boundary-term}
\end{eqnarray}
For $n=1$, we have
\begin{eqnarray}
&& S_{F\pm}(p+k) \cdot \gamma^\mu \cdot S_{F\pm}(p)
\nonumber\\
=&&
\left( P_R (\fs p + \fs k)\Delta_{R\pm}
+ P_L (\fs p + \fs k) \Delta_{L\pm}
+P_R B_{RL\pm}+ P_L B_{LR\pm} \right)
\nonumber\\
&&
\hskip 1cm \cdot \gamma^\mu \cdot
\left( P_R \fs p \Delta_{R\pm}
+ P_L \fs p \Delta_{L\pm}
+P_R B_{RL\pm}+ P_L B_{LR\pm} \right)
\nonumber\\
=&&
P_R
\left\{
(\fs p + \fs k) \gamma^\mu \fs p \,
\Delta_{R\pm} \cdot \Delta_{R\pm}
+\gamma^\mu \, B_{RL\pm} \cdot B_{LR\pm} \right\}
\nonumber\\
&&
+P_L
\left\{
(\fs p + \fs k) \gamma^\mu \fs p \,
\Delta_{L\pm} \cdot \Delta_{L\pm}
+ \gamma^\mu \, B_{LR\pm} \cdot B_{RL\pm} \right\}
\nonumber\\
&&
+P_R
\left\{
(\fs p + \fs k) \gamma^\mu \,
\Delta_{R\pm} \cdot B_{RL\pm}
+\gamma^\mu \fs p \, B_{RL\pm} \cdot \Delta_{L\pm}
\right\}
\nonumber\\
&&
+P_L
\left\{
(\fs p + \fs k) \gamma^\mu
\Delta_{L\pm} \cdot B_{LR\pm}
+ \gamma^\mu \fs p \, B_{LR\pm} \cdot \Delta_{R\pm}
\right\}
\, .
\end{eqnarray}
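The rearrangement above uses only the chiral projector algebra,
$P_{R,L}=(1\pm\gamma_5)/2$, $P_{R,L}^2=P_{R,L}$, $P_R P_L=0$ and
$P_R \gamma^\mu = \gamma^\mu P_L$.
These relations can be checked numerically in an explicit representation
(the chiral representation is chosen here purely for illustration):

```python
import numpy as np

# Pauli matrices and 2x2 blocks
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

# gamma matrices in the chiral representation
g0 = np.block([[Z2, I2], [I2, Z2]])
gi = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
gammas = [g0] + gi
g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]

PR = (np.eye(4) + g5) / 2
PL = (np.eye(4) - g5) / 2

assert np.allclose(PR @ PR, PR) and np.allclose(PL @ PL, PL)
assert np.allclose(PR @ PL, np.zeros((4, 4)))
for g in gammas:
    assert np.allclose(PR @ g, g @ PL)   # P_R gamma^mu = gamma^mu P_L
```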
With the help of the orthogonality of the generalized ``sin'',
\begin{equation}
\int^L_{-L}\!ds \,
[\sin \omega(L-s)]_\pm [\sin \omega^\prime (L-s)]_\pm
= N_\pm (\omega) \delta_{\omega\omega^\prime}
\, ,
\end{equation}
we can first perform the integration over the extra coordinate
at every vertex of the external gauge field.
Then we can see, for example, that
$\Delta_{R\pm} \cdot \Delta_{R\pm}$ has a structure similar
to that of $\Delta_{R\pm}$ with respect to its dependence on
$\omega$ and $s,t$,
and that it satisfies
the same boundary condition:
\begin{eqnarray}
\Delta_{R\pm} \cdot \Delta_{R\pm}
=
\sum_\omega
\frac{\left( \omega \cot \omega L \pm M \right)}{n_\pm(\omega)}
\frac{[\sin \omega(L-s)]_\pm [\sin \omega(L-t)]_\pm }
{[M^2+\omega^2-(p+k)^2-i \varepsilon][M^2+\omega^2-p^2-i \varepsilon]}
\end{eqnarray}
The same is true for $B_{RL\pm} \cdot B_{LR\pm}$, which satisfies
the same boundary condition as $\Delta_{R\pm}$. Let us denote this
similarity as follows.
\begin{eqnarray}
\Delta_{R\pm} \cdot \Delta_{R\pm} \sim
B_{RL\pm} \cdot B_{LR\pm} \sim \Delta_{R\pm}
\, .
\end{eqnarray}
Then we can write
\begin{eqnarray}
\Delta_{L-} \cdot \Delta_{L-} \sim
B_{LR\pm} \cdot B_{RL\pm} \sim \Delta_{L-}
\, .
\end{eqnarray}
\begin{eqnarray}
\Delta_{R\pm} \cdot B_{RL\pm} \sim
B_{RL\pm} \cdot \Delta_{L-} \sim B_{RL\pm}
\, .
\end{eqnarray}
\begin{eqnarray}
\Delta_{L-} \cdot B_{LR\pm} \sim
B_{LR\pm} \cdot \Delta_{R\pm} \sim B_{LR\pm}
\, .
\end{eqnarray}
As expected from the similarity of the structure of the
Green functions,
the correlation functions between the boundaries at the order
$n=1$ can be evaluated in the same way as the leading case
by the Sommerfeld-Watson transformation.
We obtain the result that the correlation between the
two boundaries vanishes in the limit $L \rightarrow \infty$,
\begin{eqnarray}
&&
\lim_{{\scriptstyle L} \rightarrow \infty}
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
i \left\{ S_{F\pm} \cdot (-) \gamma^\mu \cdot \right\} S_{F\pm}
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)(k+p,k)
\nonumber\\
&&
=
(i)
\left( \begin{array}{cc}
P_R V_\pm^\mu(k+p,k) P_L
& 0 \\
0 &
P_L V_-^\mu(k+p,k) P_R
\end{array} \right)
\, ,
\end{eqnarray}
where
\begin{equation}
\label{gauge-boson-vertex-one-point}
V_\pm^\mu(k+p,k) \, (P+K) =
\left[ (\fs k + \fs p) \gamma^\mu \fs k \right]
\frac{1}{[P\pm M][K \pm M]}
+\gamma^\mu
\, .
\end{equation}
$p$ is assumed to be the momentum incoming from the external
gauge boson attached to the vertex $\gamma^\mu$.
$P$ and $K$ are defined as $P=\sqrt{M^2-(k+p)^2-i \varepsilon}$
and $K=\sqrt{M^2-k^2-i \varepsilon}$, respectively.
For $n>1$, the correlation functions between the boundaries can
be evaluated in a similar manner.
We obtain
\begin{eqnarray}
&&
\lim_{ {\scriptstyle L} \rightarrow \infty}
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
i \prod_{i=1}^n
\left\{ S_{F\pm}(k_i) \cdot (-) \gamma^{\nu_i} \cdot \right\}
S_{F\pm}(k_{n+1})
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\nonumber\\
&& =
(i)^n
\left( \begin{array}{cc}
P_R V_\pm^{\nu_1\nu_2\ldots\nu_{n+1}}(k_1,k_2,\ldots,k_{n+1}) P_L
& 0 \\
0 &
P_L V_-^{\nu_1\nu_2\ldots\nu_{n+1}}(k_1,k_2,\ldots,k_{n+1}) P_R
\end{array} \right)
\, ,
\label{cluster-property-in-boundary-term-at-each-order}
\end{eqnarray}
where
\begin{eqnarray}
V_\pm^{\nu_1\nu_2\ldots\nu_{n+1}}(k_1,k_2,\ldots,k_{n+1})
=
\sum_{0\le 2l \le n+1} C_{2l}^n \,
\sum_{i=1}^{n+1}\frac{(M^2-K_i^2)^l}{(K_i \pm M)}
\prod_{j\not=i} \frac{1}{K_j^2-K_i^2}
\, .
\end{eqnarray}
(See Appendix \ref{appendix:cluster-property-of-correlation} for
the definition of $C_{2l}^n$.)
This result shows that the cluster property holds in each order
of the expansion.
We will show the explicit results for $n=2$ and $n=3$ for later use.
For $n=2$,
\begin{eqnarray}
\label{gauge-boson-vertex-two-point}
V_\pm^{\mu \nu} && (k+p,k,k-q) \, (P+K)(K+Q)(Q+P)
\nonumber\\
&&= \left[ (\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu (\fs k - \fs q)\right]
\frac{P+K+Q\pm M}{[P\pm M][K \pm M][Q \pm M]}
\nonumber\\
&&\quad
+
\left[
(\fs k + \fs p) \gamma^\mu \gamma^\nu
+ \gamma^\mu \fs k \gamma^\nu
+ \gamma^\mu \gamma^\nu (\fs k - \fs q) \right]
\, .
\end{eqnarray}
$q$ is assumed to be the momentum incoming from the external
gauge boson attached to the vertex $\gamma^\nu$ and
$Q$ is defined by $Q=\sqrt{M^2-(k-q)^2-i \varepsilon}$.
For $n=3$,
\begin{eqnarray}
\label{gauge-boson-vertex-three-point}
&& V_\pm^{\mu \nu \lambda}(k+p,k,k-q,k-q-r)
\, (P+K)(P+Q)(P+R)(K+Q)(K+R)(Q+R)
\nonumber\\
&&=
\left[ (\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu (\fs k - \fs q)
\gamma^\lambda (\fs k - \fs q - \fs r) \right]
\frac{A_\pm (P,K,Q,R)}
{[P\pm M][K \pm M][Q \pm M][R \pm M]}
\nonumber\\
&&
+ \left[
(\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu \gamma^\lambda
+(\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k-\fs q) \gamma^\lambda
+(\fs k + \fs p) \gamma^\mu \gamma^\nu \gamma^\lambda
(\fs k - \fs q - \fs r)
+ \gamma^\mu \fs k \gamma^\nu (\fs k - \fs q) \gamma^\lambda
\right. \nonumber\\
&&\qquad \left.
+ \gamma^\mu \fs k \gamma^\nu \gamma^\lambda (\fs k - \fs q -\fs r)
+ \gamma^\mu \gamma^\nu (\fs k - \fs q) \gamma^\lambda
(\fs k - \fs q -\fs r)
\right]
(P+K+Q+R)
\nonumber\\
&&
+
\left[\gamma^\mu \gamma^\nu \gamma^\lambda \right]
B (P,K,Q,R)
\, ,
\end{eqnarray}
and
\begin{eqnarray}
A_\pm (P,K,Q,R) &=&
P^2(Q+R+K)+Q^2(P+R+K)+R^2(P+Q+K)+K^2(P+Q+R)
\nonumber\\
&&
+2(QRK+PRK+PQK+PQR)
\nonumber\\
&&
\pm M(P+K+Q+R)^2 + M^2(P+K+Q+R)
\, ,
\\
B (P,K,Q,R) &=& (QRK+PRK+PQK+PQR) + M^2(P+K+Q+R)
\, .
\end{eqnarray}
$r$ is assumed to be the momentum incoming from the external
gauge boson attached to the vertex $\gamma^\lambda$ and
$R$ is defined by $R=\sqrt{M^2-(k-q-r)^2-i \varepsilon}$.
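The functions $A_\pm$ and $B$ are totally symmetric in $(P,K,Q,R)$,
as can be checked symbolically. A short verification sketch
(the upper sign of $A_\pm$ is taken for definiteness):

```python
import sympy as sp
from itertools import permutations

P, K, Q, R, M = sp.symbols('P K Q R M')
S1 = P + K + Q + R
# A_+ (upper sign) and B, transcribed from the expressions above
A = (P**2*(Q + R + K) + Q**2*(P + R + K) + R**2*(P + Q + K)
     + K**2*(P + Q + R)
     + 2*(Q*R*K + P*R*K + P*Q*K + P*Q*R)
     + M*S1**2 + M**2*S1)
B = (Q*R*K + P*R*K + P*Q*K + P*Q*R) + M**2*S1

# invariance under all 24 permutations of (P, K, Q, R)
for perm in permutations((P, K, Q, R)):
    sub = dict(zip((P, K, Q, R), perm))
    assert sp.expand(A.subs(sub, simultaneous=True) - A) == 0
    assert sp.expand(B.subs(sub, simultaneous=True) - B) == 0
```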
If we take into account the dimensional regularization,
since $\gamma^{a=5}$ commutes with the extended part of
the gamma matrices $\gamma^\mu (\mu=5,\ldots, D)$,
the chiral structure of the gamma matrices no longer respects
the boundary condition.
A mismatch occurs between the components on which
different boundary conditions are imposed.
For example, let us consider the component,
\begin{equation}
..P_R
\left[
i \sum_{n=1}^\infty \left\{ S_{F+} \cdot
\gamma^{\mu_i}\cdot \right\}^n S_{F+}
\right] P_R .
\label{mismatched-correlation-R-R}
\end{equation}
Besides the regular part, we encounter the following term at $n=1$.
\begin{eqnarray}
P_R \fs p \gamma^\mu P_L \fs q \, .. \Delta_R \cdot \Delta_L .
\end{eqnarray}
Because the boundary conditions do not match, we cannot use the
orthogonality of the generalized ``sin'' function and
cannot evaluate the correlation straightforwardly.
At $n=2$, we find the following mismatched correlation functions.
\begin{eqnarray}
.. \Delta_R \cdot \Delta_L \cdot B_{RL} .
\hskip 32pt
.. B_{RL} \cdot \Delta_L \cdot B_{RL} . \sim
.. B_{RL} \cdot B_{RL} . \ \, .
\label{mismatch-correlation-function-n=2}
\end{eqnarray}
At $n=3$,
\begin{eqnarray}
&& .. \Delta_R \cdot \Delta_L \cdot \Delta_R \cdot \Delta_L . \\
&&
.. B_{RL} \cdot \Delta_L \cdot \Delta_R \cdot \Delta_L .
\sim .. B_{RL} \cdot \Delta_R \cdot \Delta_L . \\
&&
.. \Delta_R \cdot B_{LR} \cdot
\Delta_R \cdot \Delta_L .
\sim .. \Delta_R \cdot B_{LR} \cdot \Delta_L . \ \, .
\end{eqnarray}
At $n=4$,
\begin{eqnarray}
&& .. \Delta_R \cdot \Delta_L \cdot \Delta_R \cdot \Delta_L
\cdot B_{RL} . \\
&&
.. B_{RL} \cdot \Delta_L \cdot \Delta_R \cdot \Delta_L
\cdot B_{RL} .
\sim .. B_{RL} \cdot \Delta_R \cdot \Delta_L
\cdot B_{RL} . \\
&&
.. \Delta_R \cdot \Delta_L \cdot B_{RL} \cdot
\Delta_L \cdot B_{RL} .
\sim
.. \Delta_R \cdot \Delta_L \cdot B_{RL} \cdot B_{RL} .
\\
&&
.. B_{RL}\cdot \Delta_L \cdot B_{RL} \cdot \Delta_L \cdot B_{RL} .
\sim
.. B_{RL}\cdot B_{RL} \cdot B_{RL} . \ \, .
\end{eqnarray}
Since, by power counting, divergences appear only at
orders $n \le 4$,
it is enough to consider those orders.
Any mismatched correlation function
up to $n=4$ is ``similar'' to one of the above examples.
For the simplest case,
Eq.~(\ref{mismatch-correlation-function-n=2}), we can see that
it actually vanishes in the limit $L \rightarrow \infty$
as follows. From
Eq.~(\ref{delta-R-sommerfeld-watson-transformed}) and
Eq.~(\ref{delta-L-sommerfeld-watson-transformed}),
the correlation between the boundary at $s=-L$ and
that at $s=L$ could emerge from the parts,
\begin{equation}
\frac{(P \coth P L \pm M)}{-\Delta_\pm (iP)} \,
\frac{ [\sinh P(L-s)]_\pm [\sinh P(L-t)]_\pm } {\sinh^2 PL }
\, ,
\end{equation}
in $\Delta_{R\pm}$ or
\begin{equation}
\frac{(P \coth P L - M)}{-\Delta_- (iP)} \,
\frac{ [\sinh P(L+s)]_- [\sinh P(L+t)]_- } {\sinh^2 PL }
\, ,
\end{equation}
in $\Delta_{L\pm}$.
At the boundaries, on the other hand,
the above correlation-mediating parts cannot contribute
because
\begin{eqnarray}
.. \frac{[\sinh P(L-s)]_\pm}{\sinh PL }
&=& \frac{1}{\sinh PL } \frac{P}{P \coth PL \pm M}
\sitarel{\longrightarrow}{{\scriptstyle L} \rightarrow \infty} 0
\nonumber\\
\frac{[\sinh P(L+s)]_-}{\sinh PL } .
&=& \frac{1}{\sinh PL } \frac{P}{P \coth PL - M}
\sitarel{\longrightarrow}{{\scriptstyle L} \rightarrow \infty} 0
\, .
\end{eqnarray}
As a result,
\begin{equation}
.. \Delta_R \cdot \Delta_L .
\sitarel{\longrightarrow}{{\scriptstyle L} \rightarrow \infty} 0
\, .
\end{equation}
We can expect that the other mismatched correlation functions
which appear in Eq.~(\ref{mismatched-correlation-R-R})
also vanish in the limit $L \rightarrow \infty$ for
similar reasons.
We do not enter into the details of the proof here.
{\it We rather assume that the dimensional regularization
preserves the cluster property
Eq.~(\ref {cluster-property-in-boundary-term})}.
Using this cluster property,
we obtain the formula of the perturbative expansion
of the boundary contribution
Eq.~(\ref{overlap-formula-boundary-contribution}) as follows.
\begin{eqnarray}
&&\lim_{{\scriptstyle L} \rightarrow \infty}
\ln \left[{\scriptstyle
\frac
{{\det}' \left( 1-M-X_{-+}^\infty \right) }
{\sqrt{
{\det}' \left( 1-M_--X_{--}^\infty \right)
{\det}' \left( 1-M_+-X_{++}^\infty \right)
}}} \right]
-\lim_{{\scriptstyle L} \rightarrow \infty}
\ln \left[{\scriptstyle
\frac
{{\det}' \left( 1-M^0-X_{-+}^\infty \right) }
{\sqrt{
{\det}' \left( 1-M^0_--X_{--}^\infty \right)
{\det}' \left( 1-M^0_+-X_{++}^\infty \right)
}}} \right]
\nonumber\\
&=&
\phantom{+}\frac{1}{2}
\sum_{m=1}^\infty \frac{(-)^m}{m}
{{\rm Tr}}^\prime\left\{
d^\infty_-
\left(
\lim_{{\scriptstyle L} \rightarrow \infty} \,
..P_R
\left[
i \sum_{n=1}^\infty
\left\{ S_{F-}[-M] \cdot (-)\fs A \cdot \right\}^n S_{F-}[-M]
\right] P_L..
\right)
\right\}^m
\nonumber\\
&&
-\frac{1}{2}
\sum_{m=1}^\infty \frac{(-)^m}{m}
{{\rm Tr}}^\prime\left\{
d^\infty_+
\left(
\lim_{{\scriptstyle L} \rightarrow \infty} \,
..P_R
\left[
i \sum_{n=1}^\infty
\left\{ S_{F-}[+M] \cdot (-)\fs A \cdot \right\}^n S_{F-}[+M]
\right] P_L.. \right)
\right\}^m
\nonumber\\
&&
+\frac{1}{2}
\sum_{m=1}^\infty \frac{(-)^m}{m}
{{\rm Tr}}^\prime\left\{
d^\infty_+
\left(
\lim_{{\scriptstyle L} \rightarrow \infty} \,
.P_L
\left[
i \sum_{n=1}^\infty
\left\{ S_{F-}[+M] \cdot (-)\fs A \cdot \right\}^n S_{F-}[+M]
\right] P_R. \right)
\right\}^m
\nonumber\\
&&
-\frac{1}{2}
\sum_{m=1}^\infty \frac{(-)^m}{m}
{{\rm Tr}}^\prime\left\{
d^\infty_-
\left(
\lim_{{\scriptstyle L} \rightarrow \infty} \,
.P_L
\left[
i \sum_{n=1}^\infty
\left\{ S_{F-}[-M] \cdot (-)\fs A \cdot \right\}^n S_{F-}[-M]
\right] P_R. \right)
\right\}^m
\nonumber\\
&\equiv&
i\Gamma_X[A]
\, ,
\label{perturbative-expansion-of-boundary-contribution}
\end{eqnarray}
where
\begin{eqnarray}
d^\infty_\pm (p) = \frac{ P \mp M - \fs p }{2 P }
\, .
\end{eqnarray}
This is derived by using the result
Eq.~(\ref {correlation-between-boundary-leading}) as follows.
\begin{eqnarray}
{D^\infty}(p)^{-1}&\equiv &
\lim_{{\scriptstyle L} \rightarrow \infty}
\left( 1-M^0-X_{-+}^\infty \right)
\nonumber\\
&=&
1
+
\left(
\begin{array}{cc} P_R \fs p \, \frac{1}{P + M} & 0 \\
0 & P_L \fs p \, \frac{1}{P - M}
\end{array} \right)
+
\left(
\begin{array}{cc} P_L \fs p \, \frac{1}{P + M} & 0 \\
0 & P_R \fs p \, \frac{1}{P - M}
\end{array} \right)
\nonumber\\
&=&
\left(
\begin{array}{cc} 1+ \frac{\fs p }{P + M} & 0 \\
0 & 1+ \frac{\fs p }{P - M}
\end{array} \right)
\equiv
\left(
\begin{array}{cc} {d^\infty_-(p)}^{-1} & 0 \\
0 & {d^\infty_+(p)}^{-1}
\end{array} \right)
\, .
\end{eqnarray}
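The last step uses $\fs p^2 = p^2$ and $P^2 = M^2 - p^2$, so that
$(P \pm M)^2 - p^2 = 2P(P \pm M)$.
The inversion can be checked symbolically, treating $\fs p$ as a single
symbol whose square is $M^2 - P^2$ (a sketch for illustration only):

```python
import sympy as sp

P, M, ps = sp.symbols('P M pslash')   # ps stands for p-slash

# sign=+1 checks d^infty_-, sign=-1 checks d^infty_+
for sign in (+1, -1):
    d = (P + sign * M - ps) / (2 * P)       # d^infty
    d_inv = 1 + ps / (P + sign * M)         # claimed inverse
    prod = sp.expand(d * d_inv).subs(ps**2, M**2 - P**2)
    assert sp.simplify(prod - 1) == 0
```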
As an important consequence of the cluster property,
we can show that the boundary contribution
Eq.~(\ref{overlap-formula-boundary-contribution}) is
purely odd-parity.
Since the fermion system with the homogeneous mass
possesses the parity invariance,
the propagator satisfies the relation,
\begin{eqnarray}
\label{parity-invariance-relation-of-propagator}
S_{F-}[\pm M](x_0-y_0,x_i-y_i,s,t)
=
\gamma^0 S_{F-}[\pm M](x_0-y_0,-x_i+y_i,-s,-t) \gamma^0
\, .
\end{eqnarray}
We can also check explicitly that $d^\infty(x_0-y_0,x_i-y_i)$
satisfies a similar relation. From these properties,
we can easily see that
the boundary contribution given by the above trace formula
is a parity-odd functional of
the external gauge field potential.
Note also that it changes sign when we change the sign
of $M$ to $-M$.
This means that it has
the functional form
\begin{equation}
\Gamma_X[A]= M F_X[A;M^2]
\, .
\end{equation}
This property is useful for reducing the superficial
degree of divergence of the loop integrals in the perturbative
evaluation.
\section{Anomaly from boundary term}
\label{sec:anomaly-from-boundary}
In this section, as an application of the perturbation theory
given in the previous
section \ref{sec:perturbation-at-finite-extent-of-fifth-dimension},
we calculate the variation of the boundary term
under the gauge transformation.
We find that the consistent anomaly
is actually reproduced by the gauge-noninvariant
boundary state wave functions.
This shows that
fixing the phase of the overlap
following the Wigner-Brillouin perturbation theory
is the correct choice.
\subsection{Variation under gauge transformation}
In order to examine the gauge symmetry breaking induced
by the boundary state wave function
Eq.~(\ref{boundary-wave-function-right-explicit-form}) and
Eq.~(\ref{boundary-wave-function-left-explicit-form}), we
consider the variation of the boundary contribution
$\Gamma_X[A]$
under the gauge transformation:
\begin{equation}
\label{gauge-transformtion-infinitesimal}
A_\mu(x) \rightarrow
A_\mu(x)
+ \partial_\mu \omega(x) - \left[ \omega(x), A_\mu(x) \right]
\, , \hskip 16pt
\omega \in su(N) \, .
\end{equation}
First we note the Ward-Takahashi identity, in which
we take into account the breaking of the chiral boundary
condition due to the dimensional regularization,
\begin{eqnarray}
\label{ward-takahashi-identity-under-dimensional-regularization}
&& S_{F-}(k+p) \cdot \fs p \cdot S_{F-}(k) \nonumber\\
&&= S_{F-}(k+p) - S_{F-}(k)
+ \Delta_1 (k+p) \cdot \fs p \cdot \Delta_2 (k)
\, ,
\end{eqnarray}
where we have defined
\begin{eqnarray}
\Delta_1 (k) &\equiv&
-\fs{ \ext{k} }
\left[ P_R \Delta_{R-}(k)-P_L \Delta_{L-}(k) \right]
\, ,
\\
\Delta_2 (k) &\equiv&
\left[ \fs k
\left( \Delta_{R-}(k)-\Delta_{L-}(k) \right)
+\left( B_{RL-}(k)-B_{LR-}(k) \right)
\right]
\, ,
\end{eqnarray}
where $\fs{ \ext{k} }= \sum_{\mu=4}^D \gamma^\mu k_\mu$.
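The first two terms of the identity exhibit the usual telescoping
structure of a Ward-Takahashi identity; the
$\Delta_1 \cdot \fs p \cdot \Delta_2$ term is the correction from the
dimensional regularization.
The telescoping mechanism itself can be illustrated with a scalar
stand-in $S(q)=1/(m-q)$ (an illustration only; the Dirac structure and
conventions of $S_{F-}$ are those of the text):

```python
import sympy as sp

k, p, m = sp.symbols('k p m')
S = lambda q: 1 / (m - q)      # scalar stand-in for the propagator

# insert p = (m - k) - (m - k - p) between the two propagators
lhs = S(k + p) * p * S(k)
rhs = S(k + p) - S(k)          # telescoping form
assert sp.simplify(lhs - rhs) == 0
```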
{}From this identity, we obtain
\begin{eqnarray}
\label{variation-of-propagator-under-dimensional-regularization}
&& \delta_\omega
\left[ i \sum_{n=0}^\infty
\left\{ S_{F-}\cdot (-)\fs A \cdot \right\}^n S_{F-}
\right](x,y)
\nonumber\\
&=&
-\omega(x)
\left[ i \sum_{n=0}^\infty
\left\{ S_{F-}\cdot (-)\fs A \cdot \right\}^n S_{F-}
\right](x,y)
+
\left[ i \sum_{n=0}^\infty
\left\{ S_{F-}\cdot (-)\fs A \cdot \right\}^n S_{F-}
\right](x,y) \,
\omega(y)
\nonumber\\
&&
-\Delta_\omega(x,y)
\, ,
\end{eqnarray}
where $\Delta_\omega(x,y)$ stands for the gauge non-invariant
correction due to the dimensional regularization,
\begin{eqnarray}
&&
\Delta_\omega(x,y)
\nonumber\\
&&=
i\, \Delta_1 \cdot \fs \partial \omega \cdot \Delta_2 (x,y)
\nonumber\\
&&
- i
\left[
S_{F-}\cdot \fs A \cdot \Delta_1
\cdot \fs \partial \omega \cdot \Delta_2 \right]
(x,y)
- i
\left[
\Delta_1 \cdot \fs \partial \omega \cdot \Delta_2
\cdot \fs A \cdot S_{F-}
\right](x,y)
\nonumber\\
&& + \ldots
\end{eqnarray}
\noindent
Then the variation of $1-M-X_{-+}^\infty$ is given by
\begin{eqnarray}
\delta_\omega (1-M-X_{-+}^\infty )
=&&
-(\omega X_{-+}^\infty-X_{-+}^\infty \omega)
-\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
\Delta_\omega
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\nonumber\\
&&
- \left[ \omega , (1-M-X_{-+}^\infty ) \right]
\, .
\end{eqnarray}
Therefore the variation of the determinant of
$1-M-X_{-+}^\infty$ can be written as
\begin{eqnarray}
&&\delta_\omega \ln {\det}'(1-M-X_{-+}^\infty)
\nonumber\\
&&=
{{\rm Tr}}'
\left\{
(-)
\left(
\omega X_{-+}^\infty- X_{-+}^\infty \omega
+
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
\Delta_\omega
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\right)
\right.
\nonumber\\
&&
\left.
\hskip 32pt \times
\sum_{m=0}^\infty (-)^m
\left\{
D^{\scriptstyle L}_+
\left( \begin{array}{c} ..P_R \\
. P_L \end{array} \right)
\left[
i \sum_{n=1}^\infty \left\{ S_{F+}\cdot (-)\fs A \cdot \right\}^n
S_{F+}
\right]
\left( \begin{array}{cc} P_L.. & P_R . \end{array} \right)
\right\}^m
D^{\scriptstyle L}_+
\right\}
\nonumber\\
\end{eqnarray}
The variation of the determinant of
$1-M_\pm-X_{\pm\pm}^\infty$ is obtained in a similar manner.
Taking into account the cluster property
in the limit $L \rightarrow \infty$,
the variation of $\Gamma_X$ is given by
\begin{eqnarray}
&&
i \delta_\omega \Gamma_X[A]
\nonumber\\
&&=\lim_{{\scriptstyle L} \rightarrow \infty}
\delta_\omega
\ln \left[{\scriptstyle
\frac
{{\det}' \left( 1-M-X_{-+}^\infty \right) }
{\sqrt{
{\det}' \left( 1-M_--X_{--}^\infty \right)
{\det}' \left( 1-M_+-X_{++}^\infty \right)
}}} \right]
\nonumber\\
&&=
\frac{1}{2}
{{\rm Tr}}^\prime
\left\{
(-) \left( \omega X^\infty_- - X^\infty_- \omega
+ ..P_R \Delta_\omega P_L .. \right)
\right.
\nonumber\\
&& \hskip 16pt
\left.
\times
\sum_{m=0}^\infty (-)^m
\left\{
d^\infty_-
\left(
\lim_{{\scriptstyle L} \rightarrow \infty}
..P_R
\left[
i \sum_{n=1}^\infty
\left\{ S_{F-}[-M]\cdot (-)\fs A \cdot \right\}^n S_{F-}[-M]
\right]
P_L.. \right)
\right\}^m
d^\infty_-
\right\}
\nonumber\\
&&+
\frac{1}{2}
{{\rm Tr}}^\prime
\left\{
(-) \left( \omega X^\infty_+ - X^\infty_+ \omega
+ .P_L \Delta_\omega P_R . \right)
\right.
\nonumber\\
&& \hskip 16pt
\left.
\times
\sum_{m=0}^\infty (-)^m
\left\{
d^\infty_+
\left(
\lim_{{\scriptstyle L} \rightarrow \infty}
.P_L
\left[
i \sum_{n=1}^\infty
\left\{ S_{F-}[+M]\cdot (-)\fs A \cdot \right\}^n S_{F-}[+M]
\right]
P_R. \right)
\right\}^m
d^\infty_+
\right\}
\nonumber\\
&&-
\frac{1}{2}
{{\rm Tr}}^\prime
\left\{
(-) \left( \omega X^\infty_- - X^\infty_- \omega
+ ..P_R \Delta_\omega P_L .. \right)
\right.
\nonumber\\
&& \hskip 16pt
\left.
\times
\sum_{m=0}^\infty (-)^m
\left\{
d^\infty_-
\left(
\lim_{{\scriptstyle L} \rightarrow \infty}
..P_R
\left[
i \sum_{n=1}^\infty
\left\{ S_{F-}[+M]\cdot (-)\fs A \cdot \right\}^n S_{F-}[+M]
\right]
P_L.. \right)
\right\}^m
d^\infty_-
\right\}
\nonumber\\
&&-
\frac{1}{2}
{{\rm Tr}}^\prime
\left\{
(-) \left( \omega X^\infty_+ - X^\infty_+ \omega
+ .P_L \Delta_\omega P_R . \right)
\right.
\nonumber\\
&& \hskip 16pt
\left.
\times
\sum_{m=0}^\infty (-)^m
\left\{
d^\infty_+
\left(
\lim_{{\scriptstyle L} \rightarrow \infty}
.P_L
\left[
i \sum_{n=1}^\infty
\left\{ S_{F-}[-M]\cdot (-)\fs A \cdot \right\}^n S_{F-}[-M]
\right]
P_R. \right)
\right\}^m
d^\infty_+
\right\}
\, .
\label{variation-of-boundary-contribution}
\end{eqnarray}
\subsection{Consistent anomaly}
Now we perform the calculation of
Eq.~(\ref{variation-of-boundary-contribution}).
We express it in momentum space as follows.
\begin{eqnarray}
\label{anomaly-leading-order-term}
&& \delta_\omega \Gamma_X [A]
\nonumber\\
&&
= \sum_{n=1}^\infty \, (i)^{n+1}
\int\! \frac{d^4 l}{(2\pi)^4}\prod_{i=1}^n \frac{d^4 p_i}{(2\pi)^4}
(2\pi)^4\delta\left( l+\sum_i^n p_i \right)
\Gamma_X^{\nu_1 \nu_2 \ldots \nu_n}
(p_1,\ldots, p_n) \,
{\rm tr}\{ \omega(l) \prod_{i=1}^n A_{\nu_i}(p_i) \}
\, .
\nonumber\\
\end{eqnarray}
\subsubsection{Finiteness of $\Gamma_X$}
Since $\Gamma_X [A]$ is a parity-odd functional of the gauge field
potential, it must involve
the four-dimensional $\epsilon$-tensor
$\epsilon_{\nu_1\nu_2\nu_3 \nu_4} \, (\nu_i=0,1,2,3)$.
It is also proportional to $M$.
The Lorentz indices $\nu_1,\nu_2,\nu_3,\nu_4$ should be
contracted with those of the gauge field potentials $A^\nu$ and
the momenta $p^\mu$.
For $n=1$, we can easily see that such a structure cannot appear.
For $n=2$, we can have the form
\begin{eqnarray}
M \, \epsilon_{\mu \nu \nu_1 \nu_2 } \,
p_1^{\nu_1} p_2^{\nu_2} F(p_1,p_2, M^2 )
\, .
\end{eqnarray}
In this case, the superficial degree of divergence of
$\Gamma_X^{\mu\nu}$ is two by the power counting rule
of the five-dimensional theory.
Since the above structure reduces it by three,
we are left with minus one. This means that
no ultraviolet divergence appears in this term.
This also means that the additional term including $\Delta_\omega$
due to the dimensional regularization does not contribute.
For $n=3$, we can have the term
\begin{eqnarray}
M \, \epsilon_{\mu \nu \rho \nu_i } \,
p_i^{\nu_i} \, F(p_1,p_2,p_3,M^2)
\, .
\end{eqnarray}
In this case, the superficial degree of divergence is also reduced to
minus one and
the additional term due to the dimensional regularization
does not contribute.
For $n=4$, we can have the term
\begin{equation}
M \, \epsilon_{\mu \nu \rho \sigma }
\, .
\end{equation}
The superficial degree of divergence is zero, and it is reduced by one
because of the coefficient $M$, so this term is also finite.
For $n \ge 5$, the terms are finite by the power counting rule.
Therefore, at every order of the expansion, the boundary contribution
is finite and
the additional term due to the dimensional regularization
does not contribute.
Accordingly, in the following calculation, we can omit the terms
due to the dimensional regularization.
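The counting above can be summarized compactly. Assuming the superficial
degree of divergence of $\Gamma_X^{\nu_1\ldots\nu_n}$ is $4-n$
(our reading, consistent with the values two and zero quoted for $n=2$
and $n=4$), the overall factor $M$ and the $4-n$ momenta contracted
with the $\epsilon$-tensor reduce it to $-1$ for $n=2,3,4$
(a sketch of the arithmetic only):

```python
def superficial_degree(n):
    # bare degree of divergence of Gamma_X^{nu_1...nu_n}; our reading: 4 - n
    return 4 - n

def reduced_degree(n):
    # one power removed by the overall factor M, plus one for each of the
    # 4 - n external momenta contracted with the epsilon tensor
    return superficial_degree(n) - 1 - (4 - n)

assert superficial_degree(2) == 2 and superficial_degree(4) == 0
assert all(reduced_degree(n) == -1 for n in (2, 3, 4))
assert all(superficial_degree(n) < 0 for n in range(5, 10))
```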
\subsubsection{First order}
The first order term $(n=1)$ is given by the expression
\begin{eqnarray}
&&\Gamma_X^{\nu}(p) \nonumber\\
&&=\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
P_L \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k}{K+M}
\right) P_R
\right.
\nonumber\\
&& \qquad \qquad \qquad \qquad
\left.
\cdot
\frac{P+M - (\fs k + \fs p)}{2P}
P_R
V_+^\nu(k+p,k)
P_L
\frac{K+M - \fs k}{2K}
\right\}
\nonumber\\
&&+ \frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
i \Delta_1(k+p) \fs p \Delta_2(k)
\right.
\nonumber\\
&& \qquad \qquad \qquad \qquad \qquad
\left.
\cdot
\frac{P+M - (\fs k + \fs p)}{2P}
P_R
V_+^\nu(k+p,k)
P_L
\frac{K+M - \fs k}{2K}
\right\}
\nonumber\\
&&
+ \ldots
\nonumber\\
&&=-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k}{K+M}
\right)
\frac{P+M}{2P}
V_+^\nu(k+p,k)
\frac{K+M}{2K}
\right\}
\nonumber\\
&& \
+\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p}{P-M} - \frac{\fs k}{K-M}
\right)
\frac{P-M}{2P}
V_-^\nu(k+p,k)
\frac{K-M}{2K}
\right\}
\, ,
\end{eqnarray}
where $\ldots$ stands for the contributions from
the second, third and fourth terms
in Eq.~(\ref{variation-of-boundary-contribution}).
$V_+^\nu(k+p,k)$ is defined by
Eq.~(\ref{gauge-boson-vertex-one-point}).
This contribution
actually vanishes because of the trace over gamma matrices.
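The vanishing reflects the fact that
$\mathrm{tr}(\gamma_5\,\gamma^{\mu_1}\cdots\gamma^{\mu_k})=0$ for $k<4$;
a nonvanishing trace first appears with four gamma matrices, where it is
proportional to the $\epsilon$-tensor. A numerical check in an explicit
representation (chiral representation, chosen for illustration):

```python
import numpy as np
from itertools import product

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2, Z2 = np.eye(2, dtype=complex), np.zeros((2, 2), dtype=complex)

g0 = np.block([[Z2, I2], [I2, Z2]])
gi = [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
gammas = [g0] + gi
g5 = 1j * g0 @ gi[0] @ gi[1] @ gi[2]

# traces of gamma5 with fewer than four gammas all vanish
for k in (1, 2, 3):
    for idx in product(range(4), repeat=k):
        m = g5.copy()
        for i in idx:
            m = m @ gammas[i]
        assert abs(np.trace(m)) < 1e-12

# first nonvanishing case: gamma5 with four distinct gammas
assert abs(np.trace(g5 @ g0 @ gi[0] @ gi[1] @ gi[2])) > 1
```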
\subsubsection{Second order}
The second order term $(n=2)$ is given by the following expression.
\begin{eqnarray}
&&\Gamma_X^{\mu\nu}(p,q) \nonumber\\
&=&\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
P_L \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k - \fs q }{Q+M}
\right) P_R
\right.
\nonumber\\
&& \qquad
\left.
\cdot
\frac{P+M - (\fs k + \fs p )}{2P}
P_R V_+^{\mu\nu}(k+p,k,k-q) P_L
\frac{Q+M - \fs k - \fs q }{2Q}
\right\}
\nonumber\\
&&-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
P_L \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k - \fs q }{Q+M}
\right) P_R
\right.
\nonumber\\
&& \qquad \quad\quad
\cdot
\frac{P+M - (\fs k + \fs p)}{2P}
P_R V_+^{\mu}(k+p,k) P_L
\nonumber\\
&& \qquad \quad\quad\quad\quad
\left.
\cdot
\frac{K+M - \fs k}{2K}
P_R V_+^{\nu}(k,k-q) P_L
\frac{Q+M - \fs k - \fs q }{2Q}
\right\}
\nonumber\\
&&
+ \ldots
\label{boundary-contribution-second-order}
\\
&=&
-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p }{P+M} - \frac{\fs k - \fs q }{Q+M} \right)
\frac{P+M }{2P}
V_+^{\mu\nu}(k+p,k,k-q)
\frac{Q+M }{2Q}
\right\}
\label{boundary-contribution-second-order-first-line}
\\
&&
+\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p }{P-M} - \frac{\fs k - \fs q }{Q-M} \right)
\frac{P-M }{2P}
V_-^{\mu\nu}(k+p,k,k-q)
\frac{Q-M }{2Q}
\right\}
\label{boundary-contribution-second-order-second-line}
\\
&&+\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k -\fs q }{Q+M} \right)
\frac{K+M }{2K}
V_+^{\mu}(k+p,k)
\right.
\nonumber\\
&& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\qquad \qquad
\left.
\cdot
\frac{- \fs k }{2K}
V_+^{\nu}(k,k-q)
\frac{Q+M }{2Q}
\right\}
\label{boundary-contribution-second-order-third-line}
\\
&&-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p }{P-M} - \frac{\fs k -\fs q }{Q-M} \right)
\frac{K-M }{2K}
V_-^{\mu}(k+p,k)
\right.
\nonumber\\
&& \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad
\qquad \qquad
\left.
\cdot
\frac{- \fs k}{2K}
V_-^{\nu}(k,k-q)
\frac{Q-M }{2Q}
\right\}
\label{boundary-contribution-second-order-forth-line}
\, ,
\end{eqnarray}
where
$V_+^{\mu\nu}(k+p,k,k-q)$ is defined by
Eq.~(\ref{gauge-boson-vertex-two-point}).
The first line (\ref{boundary-contribution-second-order-first-line})
in the second equality can be evaluated as
\begin{eqnarray}
&& -\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
\frac{1}{4PQ(P+K)(K+Q)(Q+P)}
\nonumber\\
&&\qquad
{\rm tr} \gamma_5 \left\{
(k+p)^2 \gamma^\mu \fs k \gamma^\nu (\fs k -\fs q)
\frac{P+K+Q+M}{(P+M)(K+M)}
\right.
\nonumber\\
&&\qquad
+ \left[
(k+p)^2 \gamma^\mu \gamma^\nu
+(\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu
+(\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k -\fs q)
\right](Q+M)
\nonumber\\
&&\qquad
+ (k-q)^2 (\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu
\frac{P+K+Q+M}{(K+M)(Q+M)}
\nonumber\\
&&\left.\qquad
+ \left[
(k-q)^2 \gamma^\mu \gamma^\nu
+(\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k -\fs q)
+ \gamma^\mu \fs k \gamma^\nu (\fs k -\fs q)
\right](P+M)
\right\}
\, .\end{eqnarray}
Subtracting the contribution with the mass of opposite signature
(\ref{boundary-contribution-second-order-second-line}), we obtain
\begin{eqnarray}
&& -\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
\frac{2M}{4PQ(P+K)(K+Q)(Q+P)}
\nonumber\\
&&\qquad
{\rm tr} \gamma_5 \left\{
\gamma^\mu \fs k \gamma^\nu (\fs k -\fs q)
\frac{(P+K+Q)(P+K)-(PK+M^2)}{(K^2-M^2)}
\right.
\nonumber\\
&&\qquad
+ \left[
(k+p)^2 \gamma^\mu \gamma^\nu
+(\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu
+(\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k -\fs q)
\right]
\nonumber\\
&&\qquad
+ (\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu
\frac{(P+K+Q)(K+Q)-(KQ+M^2)}{(K^2-M^2)}
\nonumber\\
&&\left.\qquad
+ \left[
(k-q)^2 \gamma^\mu \gamma^\nu
+(\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k -\fs q)
+ \gamma^\mu \fs k \gamma^\nu (\fs k -\fs q)
\right]
\right\}
\, .\end{eqnarray}
Assuming that $p,q \ll M$, we expand with respect to
$p$ and $q$. Taking into account the trace properties
of the gamma matrices, we can calculate it as follows.
\begin{eqnarray}
&&
-M \int\! \frac{d^D k}{(2\pi)^D}
\left\{
-{\rm tr}(\gamma_5 \gamma^\mu \fs k \gamma^\nu \fs q)
\frac{1}{8K^2(K^2-M^2)}
\frac{(P+2K)(P+K)-(PK+M^2)}{P(P+K)^2}
\right.
\nonumber\\
&&\qquad\qquad\qquad
+{\rm tr}(\gamma_5 \fs p \gamma^\mu \fs k \gamma^\nu )
\frac{1}{8K^2(K^2-M^2)}
\frac{(Q+2K)(Q+K)-(QK+M^2)}{Q(Q+K)^2}
\nonumber\\
&&\qquad\qquad\qquad
-2 {\rm tr}(\gamma_5 \fs p \gamma^\mu \gamma^\nu \fs q)
\frac{1}{32K^5}
\nonumber\\
&&\left.\qquad\qquad\qquad
-{\rm tr}(\gamma_5 \fs k \gamma^\mu \gamma^\nu \fs q)
\frac{1}{8K^2P(P+K)^2}
+{\rm tr}(\gamma_5 \fs p \gamma^\mu \gamma^\nu \fs k)
\frac{1}{8K^2Q(Q+K)^2}
\right\}
+ {\cal O}\left( \frac{1}{M} \right)
\nonumber\\
&&
=
-M \int\! \frac{d^D k}{(2\pi)^D}
\left\{
-{\rm tr}(\gamma_5 \gamma^\mu \fs k \gamma^\nu \fs q)
\frac{(k \cdot p)}{8K^2(K^2-M^2)}
\left(
\frac{1}{4K^3}+ 2\frac{3}{4K^3}-K \frac{1}{4K^4}-M^2 \frac{1}{2K^5}
\right)
\right.
\nonumber\\
&&\qquad\qquad\qquad\quad
-{\rm tr}(\gamma_5 \fs p \gamma^\mu \fs k \gamma^\nu )
\frac{(k \cdot q)}{8K^2(K^2-M^2)}
\left(
\frac{1}{4K^3}+ 2\frac{3}{4K^3}-K \frac{1}{4K^4}-M^2 \frac{1}{2K^5}
\right)
\nonumber\\
&&\qquad\qquad\qquad\quad
- {\rm tr}(\gamma_5 \fs p \gamma^\mu \gamma^\nu \fs q)
\frac{1}{16K^5}
\nonumber\\
&&\left. \qquad\qquad\qquad\quad
-{\rm tr}(\gamma_5 \fs k \gamma^\mu \gamma^\nu \fs q)
\frac{(k \cdot p)}{8K^2}\frac{1}{2K^5}
-{\rm tr}(\gamma_5 \fs p \gamma^\mu \gamma^\nu \fs k)
\frac{(k \cdot q)}{8K^2}\frac{1}{2K^5}
\right\}
+ {\cal O}\left( \frac{1}{M} \right)
\nonumber\\
&&
= M {\rm tr}(\gamma_5 \gamma^\mu \gamma^\nu \fs p \fs q)
\int\! \frac{d^D k}{(2\pi)^D} \frac{1}{8K^5}
+ {\cal O}\left( \frac{1}{M} \right)
\nonumber\\
&&
=
M {\rm tr}(\gamma_5 \gamma^\mu \gamma^\nu \fs p \fs q)
\, \frac{-i}{16\pi^2}\frac{4}{3}\frac{1}{|M|}
+ {\cal O}\left( \frac{1}{M} \right)
\nonumber\\
&&
=
- \frac{1}{24\pi^2}
\epsilon^{\mu\nu\rho\sigma} p_\rho q_\sigma
+ {\cal O}\left( \frac{1}{M} \right)
\, .
\end{eqnarray}
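The gamma-matrix trace properties invoked in this expansion are the
standard four-dimensional identities (written here in the convention
$\epsilon^{0123}=+1$; the overall sign of the first identity is
convention dependent),
\begin{eqnarray}
{\rm tr}\left( \gamma_5 \gamma^\mu \gamma^\nu \gamma^\rho \gamma^\sigma \right)
= -4i \, \epsilon^{\mu\nu\rho\sigma}
\, , \qquad
{\rm tr}\left( \gamma_5 \right)
= {\rm tr}\left( \gamma_5 \gamma^\mu \gamma^\nu \right)
= 0
\, .
\nonumber
\end{eqnarray}
The vanishing of the traces of $\gamma_5$ with fewer than four gamma
matrices is what eliminates most of the terms in the intermediate
expressions above.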
On the other hand, the third line
(\ref{boundary-contribution-second-order-third-line}) can be
evaluated as
\begin{eqnarray}
&& -\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
\frac{1}{8PKQ(P+K)(K+Q)}
\nonumber\\
&&\qquad
{\rm tr} \gamma_5 \left\{
\left( \gamma^\mu \frac{(k+p)^2 k^2}{(P+M)(K+M)}
+(\fs k + \fs p) \gamma^\mu \fs k
\right)
\left( \fs k \gamma^\nu (\fs k -\fs q) \frac{1}{(K+M)}
+\gamma^\nu(Q+M) \right)
\right.
\nonumber\\
&&\left.\qquad\qquad
+\left( (\fs k + \fs p) \gamma^\mu \fs k \frac{1}{(K+M)}
+\gamma^\mu (P+M) \right)
\left( \gamma^\nu \frac{k^2 (k-q)^2}{(K+M)(Q+M)}
+\fs k \gamma^\nu (\fs k - \fs q)
\right)
\right\}
\nonumber\\
&&=
-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
\frac{1}{8PKQ(P+K)(K+Q)}
\nonumber\\
&&
{\rm tr} \gamma_5 \left\{
\gamma^\mu \fs k \gamma^\nu (\fs k - \fs q)
\frac{(k+p)^2 k^2}{(P+M)(K+M)^2}
+ (\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k - \fs q)
\frac{k^2}{(K+M)}
\right.
\nonumber\\
&& \qquad
+ (\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu (Q+M)
+ \gamma^\mu \fs k \gamma^\nu (\fs k - \fs q)(P+M)
\nonumber\\
&&\left.\qquad
+(\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu
\frac{k^2 (k-q)^2}{(K+M)^2(Q+M)}
+ (\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k - \fs q)
\frac{k^2}{(K+M)}
\right\}
\, .
\nonumber\\
\end{eqnarray}
Subtracting the contribution with the mass of opposite signature
(\ref{boundary-contribution-second-order-forth-line}), we obtain
\begin{eqnarray}
&&
-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
\frac{2M}{8PKQ(P+K)(K+Q)}
\nonumber\\
&&
{\rm tr} \gamma_5 \left\{
-\gamma^\mu \fs k \gamma^\nu (\fs k - \fs q)
\frac{2PK+(K^2+M^2)}{(K^2-M^2)}
+ (\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k - \fs q)
+ (\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu
\right.
\nonumber\\
&&\left.\qquad\quad
-(\fs k + \fs p) \gamma^\mu \fs k \gamma^\nu
\frac{2KQ+(K^2+M^2)}{(K^2-M^2)}
+ (\fs k + \fs p) \gamma^\mu \gamma^\nu (\fs k - \fs q)
+ \gamma^\mu \fs k \gamma^\nu (\fs k - \fs q)
\right\}
\, .
\nonumber\\
\end{eqnarray}
The expansion with respect to $p,q$ leads to the result,
\begin{eqnarray}
&&
-M \int\! \frac{d^D k}{(2\pi)^D}
\left\{
{\rm tr}(\gamma_5 \gamma^\mu \fs k \gamma^\nu \fs q)
\frac{1}{16K^3(K^2-M^2)}\frac{2PK+(K^2+M^2)}{P(P+K)}
\right.
\nonumber\\
&&
\phantom{ -M \int\! \frac{d^D k}{(2\pi)^D} }
-{\rm tr}(\gamma_5 \fs p \gamma^\mu \fs k \gamma^\nu )
\frac{1}{16K^3(K^2-M^2)}\frac{2QK+(K^2+M^2)}{Q(Q+K)}
\nonumber\\
&&
\phantom{ -M \int\! \frac{d^D k}{(2\pi)^D} }
-{\rm tr}(\gamma_5 \fs p \gamma^\mu \gamma^\nu \fs q)
\frac{1}{16K^5}
\nonumber\\
&& \left.
\phantom{ -M \int\! \frac{d^D k}{(2\pi)^D} }
-{\rm tr}(\gamma_5 \fs k \gamma^\mu \gamma^\nu \fs q)
\frac{1}{16K^3}\frac{1}{P(P+K)}
+{\rm tr}(\gamma_5 \fs p \gamma^\mu \gamma^\nu \fs k)
\frac{1}{16K^3}\frac{1}{Q(Q+K)}
\right\} + {\cal O}\left( \frac{1}{M} \right)
\nonumber\\
&&
= -M \int\! \frac{d^D k}{(2\pi)^D}
\left\{
{\rm tr}(\gamma_5 \gamma^\mu \fs k \gamma^\nu \fs q)
\frac{(k\cdot p)}{16K^3(K^2-M^2)}
\left( 2K \frac{1}{4K^3} +(K^2+M^2) \frac{3}{4K^4} \right)
\right.
\nonumber\\
&&
\phantom{ = -M \int\! \frac{d^D k}{(2\pi)^D} }
+{\rm tr}(\gamma_5 \fs p \gamma^\mu \fs k \gamma^\nu )
\frac{(k\cdot q)}{16K^3(K^2-M^2)}
\left( 2K \frac{1}{4K^3} +(K^2+M^2) \frac{3}{4K^4} \right)
\nonumber\\
&&
\phantom{ = -M \int\! \frac{d^D k}{(2\pi)^D} }
-{\rm tr}(\gamma_5 \fs p \gamma^\mu \gamma^\nu \fs q)
\frac{1}{16K^5}
\nonumber\\
&& \left.
\phantom{ = -M \int\! \frac{d^D k}{(2\pi)^D} }
-{\rm tr}(\gamma_5 \fs k \gamma^\mu \gamma^\nu \fs q)
\frac{(k\cdot p)}{16K^3}\frac{3}{4K^4}
-{\rm tr}(\gamma_5 \fs p \gamma^\mu \gamma^\nu \fs k)
\frac{(k\cdot q)}{16K^3}\frac{3}{4K^4}
\right\} + {\cal O}\left( \frac{1}{M} \right)
\nonumber\\
&&
= M {\rm tr}(\gamma_5 \gamma^\mu \gamma^\nu \fs p \fs q)
\int\! \frac{d^D k}{(2\pi)^D}
\left\{
\frac{1}{4 \cdot 16 K^5}
\left( \frac{5}{2} + \frac{3}{2} \frac{M^2}{K^2}
- \frac{3}{2} \frac{k^2}{K^2} -4 \right)
\right\}+ {\cal O}\left( \frac{1}{M} \right)
\nonumber\\
&&
=0 + {\cal O}\left( \frac{1}{M} \right)
\, .
\end{eqnarray}
Therefore, we obtain
\begin{eqnarray}
\Gamma_X^{\mu\nu}(p,q)
= - \frac{1}{24\pi^2}
\epsilon^{\mu\nu\rho\sigma} p_\rho q_\sigma
+ {\cal O}\left( \frac{1}{M} \right)
\, .
\end{eqnarray}
\subsubsection{Third order}
The third order term is given by the following expression.
\begin{eqnarray}
&&\Gamma_X^{\mu\nu\lambda}(p,q,r) \nonumber\\
&=&\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
P_L \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k - \fs q - \fs r}{R+M}
\right) P_R
\right.
\nonumber\\
&& \qquad
\left.
\cdot
\frac{P+M - (\fs k + \fs p )}{2P}
P_R V_+^{\mu\nu\lambda}(k+p,k,k-q,k-q-r) P_L
\frac{R+M - (\fs k - \fs q - \fs r)}{2R}
\right\}
\nonumber\\
&&-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
P_L \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k - \fs q - \fs r}{R+M}
\right) P_R
\right.
\nonumber\\
&& \qquad \quad\quad
\cdot
\frac{P+M - (\fs k + \fs p)}{2P}
P_R V_+^{\mu\nu}(k+p,k,k-q) P_L
\nonumber\\
&& \qquad \quad\quad\quad\quad
\left.
\cdot
\frac{Q+M - (\fs k - \fs q)}{2Q}
P_R V_+^{\lambda}(k-q,k-q-r) P_L
\frac{R+M - (\fs k - \fs q - \fs r)}{2R}
\right\}
\nonumber\\
\nonumber\\
&&-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
P_L \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k - \fs q - \fs r}{R+M}
\right) P_R
\right.
\nonumber\\
&& \qquad \quad\quad
\cdot
\frac{P+M - (\fs k + \fs p)}{2P}
P_R V_+^{\mu}(k+p,k) P_L
\nonumber\\
&& \qquad \quad\quad\quad\quad
\left.
\cdot
\frac{K+M - \fs k }{2K}
P_R V_+^{\nu\lambda}(k,k-q,k-q-r) P_L
\frac{R+M - (\fs k - \fs q - \fs r)}{2R}
\right\}
\nonumber\\
&&+\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
P_L \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k - \fs q - \fs r}{R+M}
\right) P_R
\right.
\nonumber\\
&& \qquad \quad\quad
\cdot
\frac{P+M - (\fs k + \fs p)}{2P}
P_R V_+^{\mu}(k+p,k) P_L
\nonumber\\
&& \qquad \quad\quad\quad
\cdot
\frac{K+M - \fs k}{2K}
P_R V_+^{\nu}(k,k-q) P_L
\nonumber\\
&& \qquad \quad\quad\quad\quad
\left.
\cdot
\frac{Q+M - (\fs k - \fs q)}{2Q}
P_R V_+^{\lambda}(k-q,k-q-r) P_L
\frac{R+M - (\fs k - \fs q - \fs r)}{2R}
\right\}
\nonumber\\
&&
+ \ldots
\label{boundary-contribution-third-order}
\\
&=&
-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p }{P+M} - \frac{\fs k - \fs q - \fs r}{R+M}
\right)
\right.
\nonumber\\
&& \qquad \qquad \qquad \qquad \qquad \qquad
\left.
\cdot
\frac{P+M }{2P}
V_+^{\mu\nu\lambda}(k+p,k,k-q,k-q-r)
\frac{R+M}{2R}
\right\}
\label{boundary-contribution-third-order-term-one}
\\
&&+\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k -\fs q -\fs r}{R+M}
\right)
\frac{K+M }{2K}
V_+^{\mu\nu}(k+p,k,k-q)
\right.
\nonumber\\
&& \qquad \qquad \qquad \qquad \qquad \qquad
\left.
\cdot
\frac{- (\fs k -\fs q)}{2Q}
V_+^{\lambda}(k-q,k-q-r)
\frac{R+M }{2R}
\right\}
\label{boundary-contribution-third-order-term-two-a}
\\
&&+\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p}{P+M} - \frac{\fs k -\fs q -\fs r}{R+M}
\right)
\frac{K+M }{2K}
V_+^{\mu}(k+p,k)
\right.
\nonumber\\
&& \qquad \qquad \qquad \qquad \qquad \qquad
\left.
\cdot
\frac{- \fs k }{2K}
V_+^{\nu\lambda}(k,k-q,k-q-r)
\frac{R+M }{2R}
\right\}
\label{boundary-contribution-third-order-term-two-b}
\\
&&-\frac{1}{2} \int\! \frac{d^D k}{(2\pi)^D}
{\rm tr} \left\{
\gamma_5 \left(
\frac{\fs k + \fs p }{P+M} - \frac{\fs k -\fs q -\fs r}{R+M}
\right)
\frac{K+M }{2K}
V_+^{\mu}(k+p,k)
\right.
\nonumber\\
&& \qquad \qquad \qquad \qquad
\left.
\cdot
\frac{- \fs k}{2K}
V_+^{\nu}(k,k-q)
\frac{-(\fs k - \fs q)}{2Q}
V_+^{\lambda}(k-q,k-q-r)
\frac{R+M }{2R}
\right\}
\label{boundary-contribution-third-order-term-three}
\\
&&
- \left( M \leftrightarrow -M \right)
\label{boundary-contribution-third-order-terms-opposite-mass}
\, .
\end{eqnarray}
The expansion with respect to $p$, $q$ and $r$ in
(\ref{boundary-contribution-third-order-term-one}),
(\ref{boundary-contribution-third-order-term-two-a}),
(\ref{boundary-contribution-third-order-term-two-b}) and
(\ref{boundary-contribution-third-order-term-three})
leads to the result,
\begin{eqnarray}
-\frac{1}{2} \int\! &&\frac{d^D k}{(2\pi)^D}
\frac{1}{2^7 K^7} \,
\nonumber\\
&& {\rm tr} \gamma_5 \left\{
-\gamma^\mu \fs k \gamma^\nu \fs k \gamma^\lambda
(\fs p + \fs q + \fs r)
\left[ 10M \right]
\right.
\nonumber\\
&& \phantom{{\rm tr} \gamma_5}
+\gamma^\mu \gamma^\nu \gamma^\lambda \fs k
\left[ 4k\cdot(p+q+r) M \right]
\nonumber\\
&& \phantom{{\rm tr} \gamma_5}
-\gamma^\mu \gamma^\nu \gamma^\lambda (\fs p + \fs q +\fs r)
\left[ -2MK^2 + 10M^3 \right]
\nonumber\\
&& \phantom{{\rm tr} \gamma_5}
+\gamma^\mu \fs k \gamma^\nu \gamma^\lambda
\left[
-4k\cdot (p+q+r) K
+4k\cdot(p+q+r) \frac{-k^2}{(K+M)}
\right]
\nonumber\\
&& \left. \phantom{{\rm tr} \gamma_5}
-\fs k \gamma^\mu \fs k \gamma^\nu \gamma^\lambda
(\fs q +\fs r)
\left[ 10M \right]
+ \fs p \gamma^\mu \gamma^\nu \fs k \gamma^\lambda \fs k
\left[ 10M \right]
\right\}
+ {\cal O}\left( \frac{1}{M}\right)
\, .
\end{eqnarray}
Subtracting the contribution with the mass of opposite signature
(\ref{boundary-contribution-third-order-terms-opposite-mass}),
we finally obtain
\begin{eqnarray}
&& \Gamma_X^{\mu\nu\lambda}(p,q,r) \nonumber\\
&&
=\frac{1}{2}
{\rm tr} \gamma_5 \gamma^\mu \gamma^\nu \gamma^\lambda
(\fs p + \fs q +\fs r)
\nonumber\\
&& \qquad\qquad\qquad
\times
\int\! \frac{d^D k}{(2\pi)^D}
\frac{1}{2^7 K^7}
\left[ -10 M k^2 - 2M k^2 -4MK^2 + 20M^3 + 2M k^2 -10Mk^2 \right]
\nonumber\\
&&
=\frac{1}{2}
{\rm tr} \gamma_5 \gamma^\mu \gamma^\nu \gamma^\lambda
(\fs p + \fs q +\fs r)
\int\! \frac{d^D k}{(2\pi)^D}
\frac{M}{2^3 K^5}
\nonumber\\
&&
=-\frac{1}{2}
\frac{1}{24\pi^2}
\epsilon^{\mu\nu\lambda\sigma}(p+q+r)_\sigma
+ {\cal O}\left( \frac{1}{M} \right)
\, .
\end{eqnarray}
\subsubsection{Fourth order and higher orders}
As for the fourth order term $(n=4)$, we find that
the leading term vanishes in the expansion with respect to the external
momenta. We then obtain
\begin{equation}
\Gamma_X^{\mu\nu\lambda\rho}(p,q,r,s)
= 0 + {\cal O}\left( \frac{1}{M} \right)
\, .
\end{equation}
The higher order terms $(n \ge 5)$ have negative mass dimension and
are expected to be suppressed by the factor
$\frac{1}{M^{n-4}}$.
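The power counting behind this suppression can be made explicit: each
additional vertex insertion comes with an extra propagator factor of
order $1/K$, so the $n$-th order term behaves schematically as
\begin{eqnarray}
\Gamma_X^{(n)} \sim M \int\! \frac{d^4 k}{(2\pi)^4} \, \frac{1}{K^{n+1}}
\sim \frac{1}{M^{n-4}}
\, .
\nonumber
\end{eqnarray}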
\subsubsection{Final result}
{}From these results,
we obtain
the variation of the boundary term under the gauge transformation
as follows.
\begin{eqnarray}
&& \delta_\omega \Gamma_X [A]
\nonumber\\
&&
=
\int\! \frac{d^4 p}{(2\pi)^4}\frac{d^4 q}{(2\pi)^4}
\left\{ -\frac{i^3 }{24\pi^2}
\epsilon^{\mu\nu\rho\sigma} p_\rho q_\sigma
\right\}
{\rm tr}\{ \omega(-p-q) A_{\mu}(p)A_{\nu}(q) \}
\nonumber\\
&&
+
\int\! \frac{d^4 p}{(2\pi)^4}\frac{d^4 q}{(2\pi)^4}\frac{d^4 r}{(2\pi)^4}
\left\{ - \frac{i^4}{24\pi^2}
\epsilon^{\mu\nu\lambda\sigma} \frac{(p+q+r)_\sigma}{2}
\right\}
{\rm tr}\{ \omega(-p-q-r) A_{\mu}(p)A_{\nu}(q)A_{\lambda}(r) \}
+ {\cal O}\left( \frac{1}{M} \right)
\nonumber\\
&&
=
\frac{i}{24\pi^2}
\int\! d^4 x
\epsilon^{\mu\nu\rho\sigma}
{\rm tr}\left\{ \omega(x)
\left[
\partial_\mu A_{\nu}(x) \partial_\rho A_{\sigma}(x)
+\frac{1}{2}
\partial_\mu \left(A_{\nu}(x)A_{\rho}(x)A_{\sigma}(x) \right)
\right]
\right\}
+ {\cal O}\left( \frac{1}{M} \right)
\, .
\nonumber\\
\end{eqnarray}
We can see that
the consistent anomaly is correctly reproduced by the
Wigner-Brillouin phase fixing procedure.
\section{Vacuum Polarization}
\label{sec:vacuum-polarization}
In this section, as another application of the perturbation theory,
we perform the calculation of the two-point function
(vacuum polarization function) in the expansion of the
five-dimensional determinant contribution,
Eq.~(\ref{perurbative-expansion-of-volume-contribution}).
In momentum space, it is written as follows.
\begin{eqnarray}
i\Gamma_K [A]
= \sum_{n=1}^\infty \, (i)^n
\int\! \prod_{i=1}^n \frac{d^4 p_i}{(2\pi)^4}
(2\pi)^4\delta\left(\sum_{i=1}^n p_i \right)
\Gamma_K^{\nu_1 \nu_2 \ldots \nu_n}
(p_1,\ldots, p_n) \,
{\rm tr}\{ \prod_{i=1}^n A_{\nu_i}(p_i) \}
\, ,
\end{eqnarray}
where
\begin{eqnarray}
\Gamma_K^{\nu_1 \nu_2 \ldots \nu_n}(p_1,\ldots, p_n;L)
&&=\Pi_+^{\nu_1 \nu_2 \ldots \nu_n}(p_1,\ldots, p_n;L)
\nonumber\\
&&
-\frac{1}{2} \Pi_-^{\nu_1 \nu_2 \ldots \nu_n}(p_1,\ldots, p_n;L)[+M]
-\frac{1}{2} \Pi_-^{\nu_1 \nu_2 \ldots \nu_n}(p_1,\ldots, p_n;L)[-M]
\, ,
\end{eqnarray}
\begin{eqnarray}
\Pi_\pm^{\nu_1 \nu_2 \ldots \nu_n}(p_1,\ldots, p_n)
=
\int\!\frac{d^Dk}{i (2\pi)^D}
\int^{+L}_{-L}\prod_{i=1}^n ds_i \,
{\rm Tr}
\left\{
\prod_{i=1}^n \left[ \gamma^{\nu_i}
iS_{F\pm}(k+\sum_{j>i}p_j;s_i,s_{i+1})
\right]
\right\}
\, .
\end{eqnarray}
Note that $s_{n+1}=s_1$.
We will find that neither the contribution from the fermion with the
kink-like mass nor that from the fermion with the homogeneous mass is
chiral by itself.
The fermion with
the {\it positive} homogeneous mass contains the light mode
(massless mode in the limit $L \rightarrow \infty$)
just as the fermion with the kink-like mass does.
On the contrary, the fermion with the {\it negative} homogeneous
mass does not contain such a light mode.
Therefore, after the subtraction, the normalization of the vacuum
polarization correctly becomes one half of that of the massless
Dirac fermion.
Unfortunately, it turns out that the dimensional regularization is
not adequate for the calculation of this volume contribution:
it leads to a gauge-noninvariant term proportional to $M^2$.
Since in the lattice regularization
the volume contribution is expected to be gauge invariant,
this indicates that our choice of the subsidiary regularization
is not appropriate.
\subsection{Expression of Vacuum Polarization}
The two-point function is given explicitly as follows.
\begin{eqnarray}
\Pi_\pm^{\mu\nu}(p;L)
&=&
\int\!\frac{d^Dk}{i (2\pi)^D}
\int^{+L}_{-L}\!\!dsdt \,
{\rm Tr}
\left\{
\gamma^\mu iS_\pm(k+p;s,t)\gamma^\nu iS_\pm(k;t,s)
\right\}
\\
&=&
\int\!\frac{d^Dk}{i (2\pi)^D}
\left\{
{\rm Tr}\left(\gamma_\mu P_R (\fs k+ \fs p) \gamma_\nu P_R \fs k \right)
\int^{+L}_{-L}\!\!dsdt \,
\Delta_{R\pm}(k+p;s,t) \, \Delta_{R\pm}(k;t,s) \,
\right.
\nonumber\\
&&\qquad\qquad\quad
+{\rm Tr}\left(\gamma_\mu P_L (\fs k+ \fs p) \gamma_\nu P_L \fs k \right)
\int^{+L}_{-L}\!\!dsdt \,
\Delta_{L-}(k+p;s,t) \, \Delta_{L-}(k;t,s) \,
\nonumber\\
&&\qquad\qquad\quad +
{\rm Tr}\left(\gamma_\mu P_R \gamma_\nu P_L \right)
\int^{+L}_{-L}\!\!dsdt \,
B_{R\pm}(k+p;s,t) \, B_{L\pm}(k;t,s) \,
\nonumber\\
&&\qquad\qquad\quad
+{\rm Tr}\left(\gamma_\mu P_L \gamma_\nu P_R \right)
\int^{+L}_{-L}\!\!dsdt \,
B_{L\pm}(k+p;s,t) \, B_{R\pm}(k;t,s) \,
\nonumber\\
&&\qquad\qquad\quad
+{\rm Tr}\left(\gamma_\mu P_R (\fs k+ \fs p) \gamma_\nu P_L \fs k \right)
\int^{+L}_{-L}\!\!dsdt \,
\Delta_{R\pm}(k+p;s,t) \, \Delta_{L-}(k;t,s) \,
\nonumber\\
&&\qquad\qquad\quad
\left.
+{\rm Tr}\left(\gamma_\mu P_L (\fs k+ \fs p) \gamma_\nu P_R \fs k \right)
\int^{+L}_{-L}\!\!dsdt \,
\Delta_{L-}(k+p;s,t) \, \Delta_{R\pm}(k;t,s) \,
\right\}
\, .
\end{eqnarray}
Using the orthogonality of the generalized ``sin'' function
Eq.~(\ref{generalized-sin-function-orthgonarity}), we perform
the integration over $s,t$ and obtain,
\begin{eqnarray*}
\phantom{\Pi_\pm^{\mu\nu}(p;L) }
&=&
\int\!\frac{d^Dk}{i (2\pi)^D}
\sum_{\omega(\pm)} \,
\frac{
\left\{
{\rm Tr}\left(\gamma_\mu P_R (\fs k+ \fs p) \gamma_\nu P_R \fs k \right)
+{\rm Tr}\left(\gamma_\mu P_L (\fs k+ \fs p) \gamma_\nu P_L \fs k \right)
\right\}
}
{[M^2+\omega^2-(k+p)^2-i \varepsilon] \,
[M^2+\omega^2-k^2-i \varepsilon]}
\\
&+&
\int\!\frac{d^Dk}{i (2\pi)^D}
\sum_{\omega(\pm)} \,
\frac{(\omega^2+M^2)\, {\rm Tr}\left(\gamma_\mu \gamma_\nu \right) }
{[M^2+\omega^2-(k+p)^2-i \varepsilon] \,
[M^2+\omega^2-k^2-i \varepsilon]}
\\
&+&
\int\!\frac{d^Dk}{i (2\pi)^D}
\left\{
{\rm Tr}\left(\gamma_\mu P_R (\fs k+ \fs p) \gamma_\nu P_L \fs k \right)
\int^{+L}_{-L}\!\!dsdt \,
\Delta_{R\pm}(k+p;s,t) \, \Delta_{L-}(k;t,s) \,
\right.
\\
&&\qquad\qquad\quad
\left.
+{\rm Tr}\left(\gamma_\mu P_L (\fs k+ \fs p) \gamma_\nu P_R \fs k \right)
\int^{+L}_{-L}\!\!dsdt \,
\Delta_{L-}(k+p;s,t) \, \Delta_{R\pm}(k;t,s) \,
\right\}
\\
&=&
\int\!\frac{d^Dk}{i (2\pi)^D}
\sum_{\omega(\pm)} \,
\frac{{\rm Tr}\left(\gamma_\mu (\fs k+ \fs p) \gamma_\nu \fs k \right) }
{[M^2+\omega^2-(k+p)^2-i \varepsilon] \,
[M^2+\omega^2-k^2-i \varepsilon]}
\\
&+&
\int\!\frac{d^Dk}{i (2\pi)^D}
\sum_{\omega(\pm)} \,
\frac{(\omega^2+M^2) \, {\rm Tr}\left(\gamma_\mu \gamma_\nu \right) }
{[M^2+\omega^2-(k+p)^2-i \varepsilon] \,
[M^2+\omega^2-k^2-i \varepsilon]}
\\
&+&
\int\!\frac{d^Dk}{i (2\pi)^D}
\left\{
{\rm Tr}\left(\gamma_\mu P_R (\fs k+ \fs p) \gamma_\nu P_L \fs k \right)
\right.
\nonumber\\
&& \left. \qquad \qquad \qquad \qquad
\cdot \int^{+L}_{-L}\!\!dsdt \,
\Delta_{R\pm}(k+p;s,t) \,
( \Delta_{L-}(k;t,s) -\Delta_{R\pm}(k;t,s) )\,
\right.
\\
&&\qquad\qquad\quad
+{\rm Tr}\left(\gamma_\mu P_L (\fs k+ \fs p) \gamma_\nu P_R \fs k \right)
\nonumber\\
&& \left. \qquad \qquad \qquad \qquad
\cdot \int^{+L}_{-L}\!\!dsdt \,
\Delta_{L-}(k+p;s,t) \,
( \Delta_{R\pm}(k;t,s)-\Delta_{L-}(k;t,s)) \,
\right\}
\, .
\end{eqnarray*}
Therefore we can write the two-point function as follows.
\begin{eqnarray}
\Pi_\pm^{\mu\nu}(p;L)
&=&
\int\!\frac{d^Dk}{i (2\pi)^D} \,
\left\{
{\rm Tr}\left(\gamma_\mu (\fs k+ \fs p) \gamma_\nu \fs k \right)
\right\}
{\cal K}_\pm (k+p,k)
\\
&+&
\int\!\frac{d^Dk}{i (2\pi)^D} \,
\left\{ {\rm Tr}\left(\gamma_\mu \gamma_\nu \right) \right\}
{\cal B}_\pm(k+p,k)
\\
&-& \frac{1}{2}
\int\!\frac{d^Dk}{i (2\pi)^D}
\left\{
{\rm Tr}\left(\gamma_\mu \fs{\ext{k}} \gamma_\nu \fs{\ext{k}} \right)
\right\}
{\cal M}_{\pm}(k+p,k)
\, ,
\end{eqnarray}
where
\begin{eqnarray}
{\cal K}_\pm (k+p,k)
&=&
\int^{+L}_{-L}\!\!dsdt \,
\Delta_{R\pm(L-)}(k+p;s,t) \, \Delta_{R\pm(L-)}(k;t,s)
\\
&=&
\sum_{\omega(\pm)} \,
\frac{1}{[M^2+\omega^2-(k+p)^2-i \varepsilon] \,
[M^2+\omega^2-k^2-i \varepsilon]}
\, ,
\\
{\cal B}_\pm(k+p,k)
&=&
\int^{+L}_{-L}\!\!dsdt \,
B_{\pm}(k+p;s,t) \, B_{\pm}(k;t,s)
\\
&=&
\sum_{\omega(\pm)} \,
\frac{(\omega^2+M^2)}{[M^2+\omega^2-(k+p)^2-i \varepsilon] \,
[M^2+\omega^2-k^2-i \varepsilon]}
\, ,
\\
{\cal M}_{\pm}(k+p,k)
&=&
\int^{+L}_{-L}\!\!dsdt \,
\left( \Delta_{R\pm}(k+p;s,t)- \Delta_{L-}(k+p;s,t) \right) \,
\left( \Delta_{R\pm}(k;t,s) - \Delta_{L-}(k;t,s) \right)
\, .
\nonumber\\
\end{eqnarray}
{}From this expression, we find that each term $\Pi_\pm^{\mu\nu}$
is vector-like and parity-even, and does not have any chiral structure
(even the extra term due to the dimensional regularization).
However both $\Pi_-^{\mu\nu}$ and $\Pi_+^{\mu\nu}[+M]$
contain the contribution of the light mode
(massless mode in the limit $L \rightarrow \infty$)
given by
Eq.~(\ref{light-mode-in-fermion-with-kink-like-mass}) and
Eq.~(\ref{light-mode-in-fermion-with-positive-homogeneous-mass}).
On the contrary, $\Pi_+^{\mu\nu}[-M]$ does not contain such a
light-mode contribution.
As a result, the subtracted two-point function shows the
correct {\it chiral normalization}: one half of the vacuum polarization
of the massless Dirac fermion.
We can see this explicitly in the following calculation.
\subsection{Evaluation of Vacuum Polarization}
By the Sommerfeld-Watson transformation,
it is possible to express ${\cal K}_\pm (k+p,k)$
and ${\cal B}_\pm (k+p,k)$ in terms of the common normal modes.
Then we can perform the subtraction explicitly
at a finite extent of the fifth dimension.
${\cal K}_\pm (k+p,k)$ has the following integrand.
\begin{eqnarray}
F_\pm(\omega)
&=&
\frac{n_\pm(\omega)}
{[ M^2+\omega^2-k^2-i \varepsilon]
[ M^2+\omega^2-(k+p)^2-i \varepsilon]}
\end{eqnarray}
where
\begin{equation}
n_\pm(\omega) =
\left[
\Big(
\frac{ (\omega\cot\omega L \pm M)
+(\omega\cot\omega L - M) }{2} \Big)
\big( L-\frac{\sin 2\omega L}{2\omega} \big)
+\sin^2\omega L \right]
\, .
\end{equation}
The poles and their residues are summarized in
Table IV.
\vskip 16pt
\centerline{Table IV \
Poles and Residues in $I_\pm$ for ${\cal K}_\pm$}
\begin{tabular}{|c|c|c|c|}
\hline
\phantom{aa} singular part in ${\cal K}_\pm$ \phantom{aa}
& \phantom{aaaaa} pole \phantom{aaaaaaaaaaa}
& \phantom{a} residue \phantom{aaaaa}
&
${\scriptstyle
\frac{2 \omega}{\sin^2 \omega L \Delta_\pm (\omega) }} $
\\
\hline
${\scriptstyle
\frac{2 \omega}{\sin^2 \omega L \Delta_\pm (\omega) }} $
& $ \sin^2 \omega L \Delta_\pm (\omega)=0 $
& $ \frac{1}{n_\pm(\omega)}$
& 1
\\
\hline\hline
$\omega\cot\omega L=\frac{\omega\cos\omega L}{\sin\omega L}$
& $ \sin\omega L=0 $
& $ \frac{\omega}{L}$
& $-\frac{2}{\omega}$
\\
\hline
$ \frac{1}{M^2+\omega^2-p^2-i \varepsilon} $
&$iP \equiv i\sqrt{M^2-p^2-i \varepsilon}$
&$\frac{1}{2iP}$
&$\frac{2 iP}{-\sinh^2 PL }\frac{1}{\Delta_\pm (iP)} $
\\
\hline
\end{tabular}
\vskip 16pt
\noindent
By the Sommerfeld-Watson transformation,
${\cal K}_\pm (k+p,k)$ can be rewritten as
\begin{eqnarray}
{\cal K}_\pm (k+p,k)
&=&
\sum_{\sin\omega L=0} \,
\frac{2}{[M^2+\omega^2-(k+p)^2-i \varepsilon] \,
[M^2+\omega^2-k^2-i \varepsilon]}
\nonumber\\
&&
+
\frac{n_\pm(iP)}{\sinh^2 PL \, \Delta_\pm (iP)}
\frac{1}{K^2-P^2}
+
\frac{n_\pm(iK)}{\sinh^2 KL \, \Delta_\pm (iK)}
\frac{1}{P^2-K^2}
\, .
\end{eqnarray}
Similarly we obtain,
\begin{eqnarray}
{\cal B}_\pm (k+p,k)
&=&
\sum_{\sin\omega L=0} \,
\frac{2(\omega^2+M^2)}{[M^2+\omega^2-(k+p)^2-i \varepsilon] \,
[M^2+\omega^2-k^2-i \varepsilon]}
\nonumber\\
&&+
\frac{n_\pm(iP)}{\sinh^2 PL \, \Delta_\pm (iP)}
\frac{M^2-P^2}{K^2-P^2}
+
\frac{n_\pm(iK)}{\sinh^2 KL \, \Delta_\pm (iK)}
\frac{M^2-K^2}{P^2-K^2}
\, .
\end{eqnarray}
{}From these results, we see that
the subtraction can be performed rather simply.
In order to evaluate the remaining terms in the limit
$L \rightarrow \infty$, we need to know the limits
of $\Delta_\pm(iP)$ and $\frac{n_\pm(iP)}{\sinh^2 PL}$.
They are given by
\begin{eqnarray}
\Delta_\pm(iP) &=& M^2-P^2-(P\coth PL \pm M)(P\coth PL -M) \\
&\sitarel{=}{L\rightarrow \infty}&
\left\{ \begin{array}{l}
2(M-P)(M+P) \\
2(M-P)P
\end{array} \right.
\, ,
\end{eqnarray}
\begin{eqnarray}
\frac{n_\pm(iP)}{\sinh^2 PL}
&=&
\frac{
\left[
\Big(
\frac{ (P \coth PL \pm M)
+(P \coth PL - M) }{2} \Big)
\big( L-\frac{\sinh 2PL}{2P} \big)
-\sinh^2 PL \right]
}{\sinh^2 PL}
\\
&\sitarel{=}{L\rightarrow \infty}&
\left\{ \begin{array}{l}
-2 \\
\frac{(-2P+M)}{P}
\end{array} \right.
\, .
\end{eqnarray}
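These limits follow from the elementary asymptotics for
$L \rightarrow \infty$ (with $P>0$): $\coth PL \rightarrow 1$,
$L/\sinh^2 PL \rightarrow 0$ and
$\sinh 2PL/(2P\sinh^2 PL) = \coth PL/P \rightarrow 1/P$.
For example,
\begin{eqnarray}
\frac{n_+(iP)}{\sinh^2 PL}
= P\coth PL \,
\frac{L - \frac{\sinh 2PL}{2P}}{\sinh^2 PL} - 1
\rightarrow
P \left( 0 - \frac{1}{P} \right) - 1 = -2
\, .
\nonumber
\end{eqnarray}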
Then we have
\begin{eqnarray}
\bar{\cal K} (k+p,k)
&\equiv&
{\cal K}_+ (k+p,k)
-\frac{1}{2} \left\{ {\cal K}_- (k+p,k)
+ {\cal K}_- (k+p,k)[M\rightarrow -M] \right\}
\\
&\sitarel{=}{L\rightarrow \infty}&
\frac{1}{2} \left\{
\frac{1}{[P^2-M^2][K^2-M^2]} -\frac{1}{P^2K^2} \right\}
\, .
\end{eqnarray}
\begin{eqnarray}
\bar{\cal B} (k+p,k)
&\equiv&
{\cal B}_+ (k+p,k)
-\frac{1}{2} \left\{ {\cal B}_- (k+p,k)
+ {\cal B}_- (k+p,k)[M\rightarrow -M] \right\}
\\
&\sitarel{=}{L\rightarrow \infty}&
\frac{1}{2} \left\{
-\frac{M^2}{P^2K^2} \right\}
\, .
\end{eqnarray}
Using these results, the two-point function is written as
\begin{eqnarray}
\lim_{{\scriptstyle L} \rightarrow \infty} \Gamma_K^{\mu\nu}(p;L)
&=&
\frac{1}{2}
\int\!\frac{d^Dk}{i (2\pi)^D} \,
\left\{
\frac{{\rm Tr}\left(\gamma_\mu (\fs k+ \fs p) \gamma_\nu \fs k \right)}
{[-(k+p)^2][-k^2]}
-\frac{{\rm Tr}\left(\gamma_\mu (\fs k+ \fs p) \gamma_\nu \fs k \right)
+{\rm Tr}\left(\gamma_\mu \gamma_\nu \right) M^2}{[M^2-(k+p)^2][M^2-k^2]}
\right\}
\nonumber\\
&-& \lim_{{\scriptstyle L} \rightarrow \infty}
\frac{1}{2}
\int\!\frac{d^Dk}{i (2\pi)^D}
\left\{
{\rm Tr}\left(\gamma_\mu \fs{\ext{k}} \gamma_\nu \fs{\ext{k}} \right)
\right\}
{\cal M}_{\pm}(k+p,k)
\, .
\end{eqnarray}
The first term on the r.h.s. is nothing but the contribution of a massless
Dirac fermion
subtracted by {\it one Pauli-Villars-Gupta bosonic spinor field
with mass $M$}, up to the overall factor of one half.
It is gauge invariant (even under the dimensional regularization).
This factor gives the correct normalization of the vacuum
polarization derived from the chiral determinant.
The remaining term is due to the dimensional regularization.
In four dimensions, we can show that it gives a finite term
proportional to $M^2$, which corresponds to a quadratic divergence
in the limit $M \rightarrow \infty$.
It breaks gauge invariance.
The determinant of $K$, however, is expected to be
gauge invariant in the lattice regularization as we can see from
Eq. (\ref{effective-action-variation}).
Of course, this fact is not yet established at the perturbative level.
We need careful investigation of the two-point function
{\it in the continuum limit of the lattice theory}.
As far as the continuum limit theory is concerned,
the above result tells us that
our choice of the dimensional regularization
is not suitable for the calculation of this part of the
determinant of $K$.
This failure of the dimensional regularization aside,
we think that our continuum limit analysis
has clarified the structure of the vacuum overlap formula
by taking the limit from a finite extent of the fifth dimension.
\section{Summary and Discussion}
\label{sec:discussion}
We have formulated the perturbation theory of
the vacuum overlap formula, based on the theory of
the fermion (with kink-like and homogeneous masses)
in a finite extent of the fifth dimension.
The chiral projection entered the boundary condition
of the fermion field in the fifth direction.
Different series of discrete normal modes of the fifth momentum
occurred, and they were rearranged by the Sommerfeld-Watson transformation.
We have assumed that
the dimensional regularization preserves the cluster property.
The gauge non-invariance introduced by the boundary state
wave function actually led to the consistent anomaly.
The normalization of the vacuum polarization is one half of that of the
massless Dirac fermion.
This is because both the fermion with kink-like mass and
the fermion with positive homogeneous mass involve the light
(massless) modes but the fermion with negative homogeneous mass
does not.
We find that the dimensional regularization is not suitable as a
subsidiary regularization: it cannot respect the chiral
boundary condition, and it induces a gauge non-invariant
piece in the vacuum polarization of the four-dimensional theory.
The determinant of $K$, however, is expected to be
gauge invariant in the lattice regularization as we can see from
Eq. (\ref{effective-action-variation}).
Of course, this important point
is not yet established at the perturbative level.
We need careful investigation of the two-point function
{\it in the continuum limit of the lattice theory}.
Finally we make a comment about the case of the two-dimensional theory.
In this case, the subtle point due to the
dimensional regularization does not cause any difficulty.
A
{\it single subtraction by a Pauli-Villars-Gupta bosonic spinor field}
is enough to make the two-point function finite.
We find that
the two-point function from the volume contribution
is gauge invariant and has the correct chiral normalization.
We also find that the boundary term reproduces the consistent anomaly.
Therefore the next desired step is to examine the perturbative
aspect of the vacuum overlap in the lattice regularization
in four dimensions.
We hope that the technique developed in the continuum limit analysis
given here may be also useful in the lattice case.
\acknowledgments
The authors would like to thank T.~Kugo for enlightening discussions,
especially about the Sommerfeld-Watson transformation.
The authors would like to express sincere thanks to S.~Randjbar-Daemi
and J.~Strathdee for informing us about their recent work.
| {'timestamp': '1995-01-27T06:15:14', 'yymm': '9501', 'arxiv_id': 'hep-lat/9501032', 'language': 'en', 'url': 'https://arxiv.org/abs/hep-lat/9501032'} |
\section{Introduction}
The properties of white dwarf stars are relatively simple to measure.
Ideally, a complete spectral energy distribution, from the optical to the
ultraviolet, helps establish the white dwarf atmospheric properties, i.e.,
their effective temperature, surface gravity, and chemical composition,
from which we may deduce their age and mass using theoretical mass-radius
relations \citep[e.g.,][]{woo1995}. These measurements lay the foundations for
a study of the white dwarf cooling history (luminosity function).
The 2dF QSO Redshift Survey (2QZ) is a deep spectroscopic survey of blue
Galactic and extragalactic sources which was conducted at the Anglo-Australian
Telescope (AAT). Over 2000 white dwarfs were discovered during this survey
\citep{cro2004}. \citet{ven2002} obtained atmospheric parameters for many of
these objects. Another survey is the Sloan Digital Sky Survey (SDSS),
which aims to observe 1/4 of the sky in {\it ugriz} photometric bands and
obtain spectroscopy for many of these objects. A catalog of 9316
spectroscopically confirmed white dwarfs from the SDSS 4th data release
was published by \citet{eis2006}.
The Galaxy Evolution Explorer (GALEX) is an orbiting observatory that aims to
observe the sky in the ultraviolet. GALEX provides photometry in two bands,
{\it FUV} (1344-1786 \AA) and {\it NUV} (1771-2831 \AA), and will conduct
several surveys during its mission, including the
All-Sky Imaging Survey (AIS) with a limiting magnitude of 21.5 mag and the
Medium-Imaging Survey (MIS) with a limiting magnitude of 23 mag.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{fig_1.eps}
\caption{The overlap between 2QZ and GALEX (GR2) surveys.\label{fig_coord_comp}}
\end{figure}
We cross-correlated the 2QZ DA white dwarf catalog with the SDSS 5th
data release to obtain {\it ugriz} photometry. Only the north Galactic cap
(NGP) is covered by the SDSS and therefore we were only able to get {\it ugriz}
photometry for 795 objects. We also cross-correlated the 2QZ DA white dwarf
catalog with the GALEX 2nd data release (GR2) to obtain ultraviolet photometry
for approximately 810 objects; however, some of these stars were detected
in only one of the two bands (FUV or NUV). Figure~\ref{fig_coord_comp} shows
the overlap between the 2QZ white dwarf catalog and GR2. Finally we
cross-correlated our SDSS and GALEX samples to obtain 252 stars for which we
have both optical {\it ugriz} and ultraviolet photometry.
\section{Determining the parameters}
\begin{figure}
\includegraphics[width=0.5\textwidth]{fig_2a.eps}
\includegraphics[width=0.5\textwidth]{fig_2b.eps}
\caption{({\it left}) Optical and ultraviolet colors ($g-FUV$ vs. $FUV-NUV$) of 2QZ
white dwarfs compared to synthetic DA white dwarf colors. The full-lined grid
includes the effect of Ly$\alpha$ satellites, and the dotted-lined grid excludes
them. ({\it right}) Optical ($u-g$/$g-r$) colors of 2QZ white dwarfs compared
to synthetic DA white dwarf colors. The effective temperature is indicated in
units of 1000 K and the $\log{g} = 6.0$, $7.0$, $8.0$, and $9.0$ ({\it from
bottom to top}). The main-sequence colors
are also shown.\label{fig_gmf_fmn}}
\end{figure}
We obtained effective temperatures for the 2QZ DA white dwarfs by comparing
the ultraviolet/optical colors ($g-FUV$/$FUV-NUV$) and optical colors
($u-g$/$g-r$) to synthetic DA colors. Figure~\ref{fig_gmf_fmn} shows the
ultraviolet and optical colors of the 2QZ white dwarfs compared to synthetic
DA and main-sequence colors. The optical and ultraviolet photometry of the 2QZ
white dwarfs was corrected for interstellar
extinction using the dust maps of \citet{sch1998} and the extinction
law of \citet{car1989}, where we have assumed $R_V = 3.1$. Note that $R_V$
can vary anywhere between $\sim 2.5$ and $\sim 5.5$, which can change the
extinction in the ultraviolet by a very significant amount \citep{car1989}.
In addition, the true extinction toward a star is possibly only a fraction
of the total extinction in the line of sight.
The synthetic DA colors were calculated using a grid of pure hydrogen LTE
models \citep{kaw2006}. The grid of models extends from $T_{\rm eff} = 7000$ to
84\,000 K and from $\log{g} = 6.0$ to $9.5$.
The spectra include the effect of Ly$\alpha$ satellites \citep{all1992},
although we also calculated spectra that exclude this effect.
Figure~\ref{fig_gmf_fmn} (left) shows the effect of Lyman satellites on UV
colors, which is most prominent in the cooler white dwarfs
($T_{\rm eff} \lesssim 12000$ K). The color diagrams also show the
main-sequence, which was calculated using Kurucz synthetic spectra
\citep{kur1993}. Figure~\ref{fig_gmf_fmn} (right) shows some stars that have
colors corresponding to the main-sequence rather than the DA white dwarf
sequence, therefore some contamination by main-sequence stars is still
present in the 2QZ DA catalog. We found that the combination of both diagrams
helps distinguish between white dwarfs and main-sequence stars.
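To make the procedure concrete, a minimal sketch of such a color-matching fit is given below. The grid values are made up for illustration (the actual analysis uses the synthetic colors computed from the model grid), and real fits interpolate within the grid rather than picking the nearest point:

```python
import numpy as np

# Hypothetical synthetic-color grid: (u-g, g-r) as a function of Teff.
# The numbers are invented, but, like real DA model colors, they are
# monotone in temperature.
teff_grid = np.array([10000.0, 15000.0, 20000.0, 30000.0, 50000.0])
u_g_grid = np.array([0.45, 0.05, -0.15, -0.35, -0.50])
g_r_grid = np.array([0.05, -0.20, -0.35, -0.50, -0.60])

def fit_teff(u_g, g_r):
    """Return the grid temperature whose synthetic colors are closest
    (least squares in color space) to the observed, dereddened colors."""
    chi2 = (u_g - u_g_grid) ** 2 + (g_r - g_r_grid) ** 2
    return float(teff_grid[np.argmin(chi2)])

best = fit_teff(0.04, -0.21)  # colors near the 15000 K grid point
```

Combining two such diagrams amounts to summing their $\chi^2$ contributions, which is what makes white dwarfs and main-sequence stars separable even when one color pair is ambiguous.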
Figure~\ref{fig_temp_lysat} shows a comparison between the
temperatures obtained using the $g-FUV$/$FUV-NUV$ colors and the $u-g$/$g-r$
colors. A relatively good agreement between the temperatures is observed
with a few points which appear to give very low optical temperatures compared
to their UV temperatures. A check of the 2QZ spectra for these objects
reveals them to be DA white dwarfs with cool companions; therefore, the UV
temperatures should be adopted as the white dwarf temperatures.
Figure~\ref{fig_temp_lysat} also shows a comparison of the UV temperatures
determined with grids which include or exclude the effect of Ly$\alpha$
satellites. Temperatures determined using models that exclude the
Ly$\alpha$ satellites are significantly lower than those determined with
models that include them, for $T_{\rm eff} \lesssim 12000$ K.
\begin{figure}
\includegraphics[viewport=0 285 570 570, width=\textwidth]{fig_3.eps}
\caption{({\it Left}) Comparison of the temperatures obtained from the
optical-ultraviolet and optical color diagrams. ({\it Right}) Comparison of
the UV temperatures determined using a grid that includes the Ly$\alpha$
satellites with those determined using the same grid but excluding
them.\label{fig_temp_lysat}}
\end{figure}
\section{Luminosity Function}
We constructed the DA luminosity function using the
2QZ/SDSS5/GR2 and the 2QZ/SDSS5 samples by using
the accessible-volume method \citep[see][and references therein]{boy1989}
and assuming a scale-height of 250 pc. Using the temperatures obtained from
the $g-FUV$/$FUV-NUV$ and $u-g/g-r$ diagrams, we calculated the absolute
magnitudes ($M_V$), assuming $\log{g} =8.0$ and using the mass-radius
relations of \citet{ben1999}. Figure~\ref{fig_lum_GR2} shows the 2QZ/SDSS5
and 2QZ/SDSS5/GR2 luminosity functions compared to the luminosity
functions determined in the PG Survey \citep{fle1986}, the AAT-UVX survey
\citep{boy1989} and a theoretical luminosity function based in the cooling
sequence of \citet{woo1995} and a constant DA birthrate of
$0.5\times 10^{-12}$ pc$^{-3}$ yr$^{-1}$. Both the 2QZ/SDSS5/GR2 and
2QZ/SDSS5 luminosity functions are incomplete at the cool end
($M_V \ge 12.0$ mag). The two main reasons for this are that fewer objects
were selected due to the limiting colors at the cool end, and that the cooler
objects are very faint in the ultraviolet and hence would not be detected
by the GALEX survey. There also appears to be an excess of stars in the
$M_V = 11.5$ bin, in particular in the 2QZ/SDSS5/GR2 sample. Interstellar
reddening could be a contributing factor toward this excess, since our
extinction correction did not take into account the distance to each star;
cooler objects, which tend to be closer, would therefore be over-corrected,
resulting in spuriously higher temperatures.
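A minimal sketch of the accessible-volume estimator follows (hypothetical star list and survey parameters, and without the exponential scale-height weighting used in the actual analysis):

```python
import numpy as np

def lf_one_over_vmax(M_V, m_lim, omega_sr, edges):
    """Toy accessible-volume (1/Vmax) luminosity function.  The real
    analysis additionally weights the volume integral by an exponential
    disk density law with a 250 pc scale height."""
    # Distance (pc) at which a star of absolute magnitude M_V reaches
    # the survey limiting magnitude m_lim: m - M = 5 log10(d / 10 pc)
    d_max = 10.0 ** (0.2 * (m_lim - M_V) + 1.0)
    v_max = omega_sr / 3.0 * d_max**3  # accessible survey-cone volume, pc^3
    phi, _ = np.histogram(M_V, bins=edges, weights=1.0 / v_max)
    return phi  # space density per magnitude bin, pc^-3

M_V = np.array([10.6, 11.1, 11.6, 12.1])      # hypothetical star list
phi = lf_one_over_vmax(M_V, m_lim=21.5, omega_sr=0.01,
                       edges=np.arange(10.0, 13.1, 0.5))
```

Each star contributes $1/V_{\rm max}$ to its magnitude bin, so fainter stars, visible over a smaller volume, carry larger weights.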
\begin{figure}
\centering
\includegraphics[width=0.65\textwidth]{fig_4.eps}
\caption{The DA luminosity function measured using the 2QZ/SDSS/GR2
survey compared to the luminosity functions measured in the AAT-UVX and
Palomar-Green (PG) surveys. The theoretical luminosity function assumes
a DA white dwarf birthrate of $0.5\times 10^{-12}$ pc$^{-3}$ yr$^{-1}$.\label{fig_lum_GR2}}
\end{figure}
\section{Summary and Future Work}
We have presented our initial analysis of DA white dwarfs from the 2QZ survey
using SDSS5 and GR2 photometry. Using these photometric data, we obtained
temperatures and absolute magnitudes, from which we built a luminosity function.
We will extend this analysis to the He-rich sequence of white dwarfs (DO/DB).
We will investigate the effect of interstellar extinction on the
temperature distribution and hence the luminosity function in more detail.
Also, we will examine the effect of heavy elements on the UV/optical
temperature scales.
Many hot white dwarfs show heavy element lines, and the abundances of
these elements can vary by many orders of magnitude.
{\it Acknowledgements:}
This research is supported by NASA/GALEX grant NNG05GL42G.
A.K. is supported by grant GA \v{C}R 205/05/P186.
\section{Introduction}
Amongst the lattice gauge community it has recently become quite
popular to study the distributions of eigenvalues of the Dirac
operator in the presence of the background gauge fields generated in
simulations. There are a variety of motivations for this. First, in
a classic work, Banks and Casher\cite{Banks:1979yr} related the
density of small Dirac eigenvalues to spontaneous chiral symmetry
breaking. Second, lattice discretizations of the Dirac operator based on
the Ginsparg-Wilson relation\cite{Ginsparg:1981bj} have the
corresponding eigenvalues lying on circles in the complex plane. The
validity of various approximations to such an operator can be
qualitatively assessed by looking at the eigenvalues. Third, using
the overlap method\cite{Neuberger:1997fp} to construct a Dirac
operator with good chiral symmetry has difficulties if the starting
Wilson fermion operator has small eigenvalues. This can influence the
selection of simulation parameters, such as the gauge
action.\cite{Aoki:2002vt} Finally, since low eigenvalues impede
conjugate gradient methods, separating out these eigenvalues
explicitly can potentially be useful in developing dynamical
simulation algorithms.\cite{Duncan:1998gq}
Despite this interest in the eigenvalue distributions, there are some
dangers inherent in interpreting the observations. Physical results
come from the full path integral over both the bosonic and fermionic
fields. Doing these integrals one at a time is fine, but trying to
interpret the intermediate results is inherently dangerous. While the
Dirac eigenvalues depend on the given gauge field, it is important to
remember that in a dynamical simulation the gauge field distribution
itself depends on the eigenvalues. This circular behavior gives a
highly non-linear system, and such systems are notoriously hard to
interpret.
Given that this is a joyous occasion, I will present some of these
issues in terms of an amusing set of puzzles arising from naive
interpretations of Dirac eigenvalues on the lattice. The discussion
is meant to be a mixture of thought provoking and confusing. It is
not necessarily particularly deep or new.
\section{The framework}
To get started, I need to establish the context of the discussion. I
consider a generic path integral for a gauge theory
\begin{equation}
Z=\int (dA)(d\psi)(d\overline\psi)\ e^{-S_G(A)+\overline\psi D(A) \psi}.
\end{equation}
Here $A$ and $\psi$ represent the gauge and quark fields,
respectively, $S_G(A)$ is the pure gauge part of the action, and
$D(A)$ represents the Dirac operator in use for the quarks. As the
action is quadratic in the fermion fields, a formal integration gives
\begin{equation}
Z=\int (dA)\ |D(A)|\ e^{-S_G(A)}.
\label{path}
\end{equation}
Working on a finite lattice, $D(A)$ is a finite-dimensional matrix,
and for a given gauge field I can formally consider its eigenvectors
and eigenvalues
\begin{equation}
D(A)\psi_i=\lambda_i\psi_i.
\end{equation}
The determinant appearing in Eq.~(\ref{path}) is the product of these
eigenvalues; so, the path integral takes the form
\begin{equation}
Z=\int (dA)\ e^{-S_G(A)}\ \prod_i \lambda_i.
\end{equation}
Averaging over gauge fields defines the eigenvalue density
\begin{equation}
\rho(x+iy)={1\over {N}Z}\int (dA)\ |D(A)|\ e^{-S_G(A)}
\sum_{i}\delta(x-{\rm Re}\lambda_i(A))\delta(y-{\rm Im}\lambda_i(A)).
\end{equation}
Here $N$ is the dimension of the Dirac operator, including volume,
gauge, spin, and flavor indices.
In situations where the fermion determinant is not positive, $\rho$
can be negative or complex. Nevertheless, I still refer to it as a
density. I will assume that $\rho$ is real; situations where this is
not true, such as with a finite chemical
potential,\cite{Osborn:2005ss} are beyond the scope of this
discussion.
\begin{figure*}
\centering
\includegraphics[width=3in]{eigen0.eps}
\caption{ In the naive continuum picture, all eigenvalues of the Dirac
operator lie along a line parallel to the imaginary axis. In a finite
volume these eigenvalues become discrete. The real eigenvalues divide
into distinct chiralities and define a topological invariant.
\label{continuum}
}
\end{figure*}
At zero chemical potential, all actions used in practice satisfy
``$\gamma_5$ hermiticity''
\begin{equation}
\gamma_5 D \gamma_5=D^\dagger.
\label{hermiticity}
\end{equation}
With this condition all non-real eigenvalues occur in complex
conjugate pairs, implying for the density
\begin{equation}
\rho(z)=\rho(z^*).
\end{equation}
This property will be shared by all the operators considered in the
following discussion.
The quest is to find general statements relating the behavior of the
eigenvalue density to physical properties of the theory. I repeat the
earlier warning; $\rho$ depends on the distribution of gauge fields
$A$ which in turn is weighted by $\rho$ which depends on the
distribution of $A$ \ldots.
\subsection{The continuum}
Of course the continuum theory is only really defined as the limit of
the lattice theory. Nevertheless, it is perhaps useful to recall the
standard picture, where the Dirac operator
$$
D=\gamma_\mu (\partial_\mu+igA_\mu)+m
$$
is the sum of an anti-hermitian piece and the quark mass $m$. All
eigenvalues have the same real part $m$
$$
\rho(x+iy)=\delta(x-m) \tilde\rho(y).
$$ The eigenvalues lie along a line parallel to the imaginary axis,
while the hermiticity condition of Eq.~(\ref{hermiticity}) implies
they occur in complex conjugate pairs.
Restricted to the subspace of real eigenvalues, $\gamma_5$ commutes
with $D$ and thus these eigenvectors can be separated by chirality.
The difference between the number of positive and negative eigenvalues
of $\gamma_5$ in this subspace defines an index related to the
topological structure of the gauge fields.\cite{index} The basic
structure is sketched in Fig.~(\ref{continuum}).
The Banks and Casher argument relates a non-vanishing $\tilde\rho(0)$
to the chiral condensate occurring when the mass goes to zero. I will
say more on this later in the lattice context.
Note that the naive picture suggests a symmetry between positive and
negative mass. Due to anomalies, this is spurious. With an odd
number of flavors, the theory obtained by flipping the signs of all
fermion masses is physically inequivalent to the initial theory.
\begin{figure*}
\centering
\includegraphics[width=2.5in]{eigen1.eps}
\caption{ Free Wilson fermions display an eigenvalue spectrum with a
momentum dependent real part. This removes doublers by giving them a
large effective mass.}
\label{fig2}
\end{figure*}
\subsection{Wilson fermions}
The lattice reveals that the true situation is considerably more
intricate due to the chiral anomaly. Without ultraviolet infinities,
all naive symmetries of the lattice action are true symmetries. Naive
fermions cannot have anomalies, which are cancelled by extra states
referred to as doublers. Wilson fermions\cite{Wilson:1975id} avoid
this issue by giving a large real part to those eigenvalues
corresponding to the doublers. For free Wilson fermions the
eigenvalue structure displays a simple pattern as shown in
Fig.~(\ref{fig2}).
As the gauge fields are turned on, this pattern will fuzz out. An
additional complication is that the operator $D$ is no longer normal,
i.e. $[D,D^\dagger]\ne 0$ and the eigenvectors need not be orthogonal.
The complex eigenvalues are still paired, although, as the gauge
fields vary, complex pairs of eigenvalues can collide and separate
along the real axis. In general, the real eigenvalues will form a
continuous distribution.
As in the continuum, an index can be defined from the spectrum of the
Wilson-Dirac operator. Again, $\gamma_5$ hermiticity allows real
eigenvalues to be sorted by chirality. To remove the contribution of
the doubler eigenvalues, select a point inside the leftmost open
circle of Fig.~(\ref{fig2}). Then define the index of the gauge field
to be the net chirality of all real eigenvalues below that point. For
smooth gauge fields this agrees with the topological winding number
obtained from their interpolation to the continuum. It also
corresponds to the winding number discussed below for the overlap
operator.
\subsection{The overlap}
Wilson fermions have a rather complicated behavior under chiral
transformations. The overlap formalism\cite{Neuberger:1997fp}
simplifies this by first projecting the Wilson matrix $D_W$ onto a
unitary operator
\begin{equation}
V=(D_W D_W^\dagger)^{-1/2} D_W.
\end{equation}
This is to be understood in terms of going to a basis that
diagonalizes $D_W D_W^\dagger$, doing the inversion, and then
returning to the initial basis. In terms of this unitary quantity,
the overlap matrix is
\begin{equation}
D=1+V.
\end{equation}
The projection process is sketched in Fig.~(\ref{fig3}). The mass
used in the starting Wilson operator is taken to a negative value,
selected so that the low momentum states are projected to low
eigenvalues, while the doubler states are driven towards $\lambda\sim
2$.
\begin{figure*}
\centering
\includegraphics[width=5in]{eigen3.eps}
\caption{The overlap operator is constructed by projecting the Wilson
Dirac operator onto a unitary operator.}
\label{fig3}
\end{figure*}
The overlap operator has several nice properties. First, it satisfies
the Ginsparg-Wilson relation,\cite{Ginsparg:1981bj} most succinctly
written as the unitarity of $V$ coupled with its $\gamma_5$
hermiticity
\begin{equation}
\gamma_5 V\gamma_5 V=1.
\end{equation}
As it is constructed from a unitary operator, normality of $D$ is
guaranteed. But, most important, it exhibits a lattice version of an
exact chiral symmetry.\cite{Luscher:1998pq} The fermionic action
$\overline\psi D\psi$ is invariant under the transformation
\begin{eqnarray}
&\psi\rightarrow e^{i\theta\gamma_5}\psi\cr
&\overline\psi\rightarrow \overline\psi e^{i\theta\hat\gamma_5}
\label{symmetry}
\end{eqnarray}
where
\begin{equation}
\hat\gamma_5 =V\gamma_5.
\end{equation}
As with $\gamma_5$, this quantity is Hermitian and its square is unity.
Thus its eigenvalues are all plus or minus unity. The trace
defines an index
\begin{equation}
\nu={1\over 2}{\rm Tr}\hat\gamma_5
\end{equation}
which plays exactly the role of the index in the continuum.
It is important to note that the overlap operator is not unique. Its
precise form depends on the particular initial operator chosen to
project onto the unitary form. Using the Wilson-Dirac operator for
this purpose, the result still depends on the input mass used. From
its historical origins in the domain wall formalism, this quantity is
sometimes called the ``domain wall height.''
Because the overlap is not unique, an ambiguity can remain in
determining the winding number of a given gauge configuration. Issues
arise when $D_W D_W^\dagger$ is not invertible, and for a given gauge
field this can occur at specific values of the projection point.
This problem can be avoided for ``smooth'' gauge fields. Indeed, an
``admissibility condition,''~\cite{Luscher:1981zq,Hernandez:1998et}
requiring all plaquette values to remain sufficiently close to the
identity, removes the ambiguity. Unfortunately this condition is
incompatible with reflection positivity.\cite{Creutz:2004ir} Because
of these issues, it is not known if the topological susceptibility is
in fact a well defined physical observable. On the other hand, as it
is not clear how to measure the susceptibility in a scattering
experiment, there seems to be little reason to care if it is an
observable or not.
\begin{figure*}
\centering
\includegraphics[width=3in]{circles.eps}
\caption{Inverting a complex circle generates another circle.}
\label{circles}
\end{figure*}
\section{A Cheshire chiral condensate}
Now that I have reviewed the basic framework, it is time for a little
fun. I will calculate the chiral condensate in the overlap formalism.
I should warn you that, in the interest of amusing you, I start the
argument in an intentionally deceptive manner.
\subsection{He's here}
I begin with the standard massless overlap theory. I want to
calculate the quantity $\langle\overline\psi\psi\rangle$. Remarkably,
this can be done exactly. I start with
\begin{equation}
\langle\overline\psi\psi\rangle=\langle {\rm Tr} D^{-1}\rangle
=\left\langle \sum_i {1\over \lambda_i}\right\rangle
=\left\langle \sum_i {\rm Re} {1\over \lambda_i}\right\rangle
\end{equation}
where I have used the complex pairing of eigenvalues to cancel the
imaginary parts. At the end, the average is to be taken over
appropriately weighted gauge configurations.
Now the crucial feature of the overlap operator is that its
eigenvalues all lie on a circle in the complex plane. An interesting
property of a general complex circle is that the inverses of all its
points generate another circle, as sketched in Fig.~\ref{circles}.
This process is, however, somewhat singular for the overlap operator
itself since the corresponding circle touches the origin. In this
case the inverted circle has infinite radius, i.e. it degenerates into
a line. For the circle of the overlap operator, with center at $z=1$
and radius 1, the inverse circle is a line with real part 1/2 and
parallel to the imaginary axis. This is sketched in
Fig.~\ref{circles1}.
\begin{figure*}
\centering
\includegraphics[width=3in]{circles1.eps}
\caption{Inverting the overlap operator generates a line with real
part 1/2.}
\label{circles1}
\end{figure*}
This placement of eigenvalues enables an immediate calculation of the
condensate
\begin{equation}
\langle\overline\psi\psi\rangle=
\sum_i {\rm Re} {1\over \lambda_i}=\sum_i {1\over 2}= {N\over 2}.
\end{equation}
Here $N$ is the dimension of the matrix, and includes the expected
volume factor.
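Before pointing out the catch, the geometry itself is easy to confirm numerically (a toy check, not part of the original argument): every point $\lambda = 1 + e^{i\theta}$ on the overlap circle, away from the origin, has ${\rm Re}\,(1/\lambda) = 1/2$.

```python
import numpy as np

# Sample the overlap circle lambda = 1 + e^{i theta}, staying away from
# the origin at theta = pi where 1/lambda diverges.
theta = np.linspace(0.05, 2 * np.pi - 0.05, 200)
theta = theta[np.abs(theta - np.pi) > 0.05]
inv_re = (1.0 / (1.0 + np.exp(1j * theta))).real
```

Algebraically, ${\rm Re}\,[1/(1+e^{i\theta})] = (1+\cos\theta)/(2+2\cos\theta) = 1/2$ identically.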
So the condensate, supposedly a signal for spontaneous chiral symmetry
breaking, does not vanish! But something is fishy, I didn't use any
dynamics. The result also is independent of gauge configuration.
\subsection{He's gone}
So let's get more sophisticated. On the lattice, the chiral symmetry
is more complicated than in the continuum, involving both $\gamma_5$
and $\hat\gamma_5$ in a rather intricate way. In particular, the
operator $\overline\psi\psi$ does not transform in any simple manner
under chiral rotations. A possibly nicer combination is
$\overline\psi(1-D/2)\psi$. If I consider the rotation in
Eq.~(\ref{symmetry}) with $\theta=\pi/2$, this quantity becomes its
negative. But it is also easy to calculate the expectation of this as
well. The second term involves
\begin{equation}
\langle\overline\psi D \psi\rangle={\rm Tr} D^{-1} D={\rm Tr}I=N.
\end{equation}
Putting the two pieces together
\begin{equation}
\langle\overline\psi(1-D/2)\psi\rangle=N/2-N/2=0.
\end{equation}
So, I've lost the chiral condensate that I so easily showed didn't
vanish just a moment ago. Where did it go?
\begin{figure*}
\centering
\includegraphics[width=3in]{circles2.eps}
\caption{As the mass changes sign a pole moves between inside and outside
the overlap circle. This generates a jump in the condensate.}
\label{circles2}
\end{figure*}
\subsection{He's back}
The issue lies in a careless treatment of limits. In finite volume,
$\langle\overline\psi(1-D/2)\psi\rangle$ must vanish just from the
exact lattice chiral symmetry. This vanishing occurs for all gauge
configurations. To proceed, introduce a small mass and take the
volume to infinity first and then the mass to zero. Toward this end,
consider the quantity
\begin{equation}
\langle\overline\psi\psi\rangle=\sum_i {1\over\lambda_i+m}.
\end{equation}
The signal for chiral symmetry breaking is a jump in this quantity as
the mass passes through zero.
As the volume goes to infinity, replace the above sum with a contour
integral around the overlap circle using $z=1+e^{i\theta}$. Up to the
trivial volume factor, I should evaluate
\begin{equation}
i\int_0^{2\pi} d\theta {\rho(\theta)\over 1+e^{i\theta}+m}.
\end{equation}
As the mass passes through zero, the pole at $z=-m$ passes between
lying outside and inside the circle, as sketched in
Fig.~(\ref{circles2}). As it passes through the circle, the residue
of the pole is $\rho(0) = \lim_{\theta\rightarrow 0}\rho(\theta)$.
Thus the integral jumps by $2\pi\rho(0)$. This is the overlap version
of the Banks-Casher relation;\cite{Banks:1979yr} a non-trivial jump in
the condensate is correlated with a non-vanishing $\rho(0)$.
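The jump can be checked with a short numerical experiment (assuming, for illustration, a constant density $\rho(\theta)=\rho_0$). The trapezoidal rule is spectrally accurate here because the integrand is smooth and periodic for $m\neq 0$; as the pole crosses the circle, the magnitude of the integral changes by $2\pi\rho_0/(1+m)$:

```python
import numpy as np

rho0 = 1.0  # assumed constant eigenvalue density on the circle
theta = np.linspace(0.0, 2 * np.pi, 4096, endpoint=False)

def condensate_integral(m):
    # i * \int_0^{2 pi} d theta  rho0 / (1 + e^{i theta} + m),
    # evaluated with the trapezoidal rule on a uniform periodic grid
    integrand = rho0 / (1.0 + np.exp(1j * theta) + m)
    return 1j * (2 * np.pi) * integrand.mean()

m = 0.1
jump = condensate_integral(m) - condensate_integral(-m)
```

A residue calculation confirms the numerics: in terms of $w=e^{i\theta}$ the integral is $\oint dw\,\rho_0/[w(w+1+m)]$, whose two residues cancel when both poles lie inside the unit circle and leave $2\pi i\,\rho_0/(1+m)$ when one lies outside.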
Note that the exact zero modes related to topology are suppressed by
the mass and do not contribute to this jump. For one flavor, however,
the zero modes do give rise to a non-vanishing but smooth contribution
to the condensate.\cite{Damgaard:1999ij} More on this point later.
\section{Another puzzle}
For two flavors of light quarks one expects spontaneous symmetry
breaking. This is the explanation for the light mass of the pion,
which is an approximate Goldstone boson. In the above picture, the
two flavor theory should have a non-vanishing $\rho(0)$.
Now consider the one flavor theory. In this case there should be no
chiral symmetry. The famous $U(1)$ anomaly breaks the naive symmetry.
No massless physical particles are expected when the quark mass
vanishes. Furthermore, simple chiral Lagrangian
arguments\cite{DiVecchia:1980ve,Creutz:2003xu} for multiple flavor
theories indicate that no singularities are expected when just one of
the quarks passes through zero mass. From the above discussion, this
leads to the conclusion that for the one flavor theory $\rho(0)$ must
vanish.
But now consider the original path integral after the fermions are
integrated out. Changing the number of flavors $N_f$ manifests itself
in the power of the determinant
\begin{equation}
\int dA\ |D|^{N_f}\ e^{-S_g(A)}.
\end{equation}
Naively this suggests that as you increase the number of flavors, the
density of low eigenvalues should decrease. But I have just argued
that with two flavors $\rho(0)\ne 0$ but with one flavor $\rho(0)= 0$.
How can it be that increasing the number of flavors actually increases
the density of small eigenvalues?
This is a clear example of how the non-linear nature of the problem
can produce non-intuitive results. The eigenvalue density depends on
the gauge field distribution, but the gauge field distribution depends
on the eigenvalue density. It is not just the low eigenvalues that
are relevant to the issue. Fermionic fields tend to smooth out gauge
fields, and this process involves all scales. Smoother gauge fields
in turn can give more low eigenvalues. Thus high eigenvalues
influence the low ones, and this effect evidently can overcome the
naive suppression from more powers of the determinant.
\section{{\AE}thereal instantons}
Through the index theorem, the topological structure of the gauge
field manifests itself in zero modes of the massless Dirac operator.
Let me again insert a small mass and consider the path integral with
the fermions integrated out
\begin{equation}
Z=\int dA\
e^{-S_g}\
\prod_i (\lambda_i+m).
\end{equation}
If I take the mass to zero, any configurations which contain a zero
eigenmode will have zero weight in the path integral. This suggests
that for the massless theory, I can ignore any instanton effects since
those configurations don't contribute to the path integral.
What is wrong with this argument? The issue is not whether the zero
modes contribute to the path integral, but whether they can contribute
to physical correlation functions. To see how this goes, add some sources
to the path integral
\begin{equation}
Z(\eta,\overline\eta)=\int dA\ d\psi\ d\overline\psi\
e^{-S_g+\overline\psi (D+m) \psi +\overline\psi \eta+ \overline\eta\psi}.
\end{equation}
Differentiation (in the Grassmann sense) with respect to $\eta$ and
$\overline \eta$ gives the fermionic correlation functions.
Now integrate out the fermions
\begin{equation}
Z=\int dA\
e^{-S_g-\overline\eta(D+m)^{-1}\eta}\
\prod_i (\lambda_i+m).
\end{equation}
If I consider a source that overlaps with one of the zero mode
eigenvectors, i.e.
\begin{equation}
(\psi_0,\eta)\ne 0,
\end{equation}
the source contribution introduces a $1/m$ factor. This cancels the
$m$ from the determinant, leaving a finite contribution as $m$ goes to
zero.
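Schematically, keeping only the zero-mode piece of the propagator (a sketch of the cancellation, with the Grassmann integral over the remaining modes left implicit):
\begin{equation}
\prod_i (\lambda_i+m)\;
\frac{(\overline\eta,\psi_0)(\psi_0,\eta)}{m}
= \Bigl[\,m \prod_{\lambda_i\neq 0}(\lambda_i+m)\Bigr]
\frac{(\overline\eta,\psi_0)(\psi_0,\eta)}{m}
\ \longrightarrow\
(\overline\eta,\psi_0)(\psi_0,\eta) \prod_{\lambda_i\neq 0}\lambda_i
\end{equation}
as $m\to 0$, so topologically non-trivial configurations contribute to this correlator even though their weight in $Z$ vanishes.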
With multiple flavors, the determinant will have a mass factor from
each. When several masses are taken to zero together, one will need a
similar factor from the sources for each. This product of source
terms is the famous ``'t Hooft vertex.''\cite{'tHooft:1976up} While it
is correct that instantons do drop out of $Z$, they survive in
correlation functions.
While these issues are well understood theoretically, they can raise
potential difficulties for numerical simulations. The usual numerical
procedure generates gauge configurations weighted as in the partition
function. For a small quark mass, topologically non-trivial
configurations will be suppressed. But in these configurations, large
correlations can appear due to instanton effects. This combination of
small weights with large correlations can give rise to large
statistical errors, thus complicating small mass extrapolations. The
problem will be particularly severe for quantities dominated by
anomaly effects, such as the $\eta^\prime$ mass. A possible strategy
to alleviate this effect is to generate configurations with a modified
weight, perhaps along the lines of multicanonical
algorithms.\cite{Berg:1992qu}
Note that when only one quark mass goes to zero, the 't Hooft vertex
is a quadratic form in the fermion sources. This will give a finite
but smooth contribution to the condensate
$\langle\overline\psi\psi\rangle$. Indeed, this represents a
non-perturbative additive shift to the quark mass. The size of this
shift generally depends on scale and regulator details. Even with the
Ginsparg-Wilson condition, the lattice Dirac operator is not unique,
and there is no proof that two different forms have to give the same
continuum limit for vanishing quark mass. Because of this, the
concept of a single massless quark is not
physical,\cite{Creutz:2004fi} invalidating one popular proposed
solution to the strong CP problem. This ambiguity has been noted for
heavy quarks in a more perturbative context\cite{Bigi:1994em} and is
often referred to as the ``renormalon'' problem. The issue is closely
tied to the problems mentioned earlier in defining the topological
susceptibility.
\section{Summary}
In short, thinking about the eigenvalues of the Dirac operator in the
presence of gauge fields can give some insight, for example the
elegant Banks-Casher picture for chiral symmetry breaking.
Nevertheless, care is necessary because the problem is highly
non-linear. This manifests itself in the non-intuitive example of how
adding flavors enhances rather than suppresses low eigenvalues.
Issues involving zero mode suppression represent one facet of a set of
connected unresolved issues. Are there non-perturbative ambiguities
in quantities such as the topological susceptibility? How essential
are rough gauge fields, i.e. gauge fields on which the winding number
is ambiguous? How do these issues interplay with the quark masses? I
hope the puzzles presented here will stimulate more thought along
these lines.
\section*{Acknowledgments}
This manuscript has been authored under contract number
DE-AC02-98CH10886 with the U.S.~Department of Energy. Accordingly,
the U.S. Government retains a non-exclusive, royalty-free license to
publish or reproduce the published form of this contribution, or allow
others to do so, for U.S.~Government purposes.
\section*{References}
\section{How to set up a proper comparison?}
\label{sec:analysis}
Measuring (small) changes in aerodynamic drag is not trivial, especially in the turbulent regime, regardless of the numerical or experimental nature of the analysis. Studies employ a variety of approaches, and simulations and experiments each present different challenges.
Nowadays, whenever we need to compare the drag of a reference flat surface with that of a rough surface, we are aware of the subtlety of the measurement: of the importance of carefully defining and controlling the Reynolds number of the experiment, of discriminating between internal and external flows, and in general of correctly defining the equivalent ``flat wall'' flow to compare with. In this final Section, we discuss some of these topics, trying to call the reader's attention to the logical steps that should be followed when designing a meaningful experimental or numerical campaign.
\subsection{Measurement of the drag (difference)}
All the available studies measure the drag difference $\Delta Drag$ by separately measuring the drag forces $Drag_{smooth}$ and $Drag_{dimples}$. As recently discussed in ref.\cite{vannesselrooij-etal-2022} in the context of the description of their novel experimental setup devoted to such measurements, various approaches are available. The simplest among them measure the local friction, and as such are unable to yield satisfactory results for the drag, because over a dimpled surface both the friction and the pressure contributions to the drag force depend on position. Hence, in an experiment one has either to measure the drag force with a balance, a challenge by itself owing to the small forces involved, or to deduce the force from the pressure drop across two sections, as done for example by Gatti et al. in \cite{gatti-etal-2015}. With dimples, both approaches have been used. Information about the shear stress was extracted from the boundary layer momentum loss in ref.\cite{lienhart-breuer-koksoy-2008}. Direct measurement of the drag through a force sensor was employed in refs.\cite{veldhuis-vervoort-2009, vannesselrooij-etal-2016, vancampenhout-etal-2018}. This type of measurement may be affected by uncertainty and accuracy problems: the forces are small and blurred by spurious contributions, and the experimental setup must be designed and run with extreme care.
In the case of numerical experiments, only the DNS approach provides the required accuracy: RANS models, constructed and tuned for canonical flows, are incapable of dealing with drag reduction in a quantitatively accurate way. Once DNS is used, two equivalent options are available to compute the drag in internal flows. One possibility is to calculate the (time-averaged) friction drag and pressure drag separately, using their definitions as surface integrals of the relevant force components.
Alternatively, the (time-averaged) pressure drop between inlet and outlet yields the total dissipated power, and thus the total drag. This is feasible both in simulations and experiments. Tay and colleagues \cite{tay-2011, tay-khoo-chew-2015, tay-etal-2016, tay-khoo-chew-2017, tay-lim-2017, tay-lim-2018} in fact compared, via static pressure taps, the mean streamwise pressure gradients of the two flat sections upstream and downstream of the dimpled test section with the mean streamwise pressure gradient within the test section.
Experience accumulated in riblets research, however, tells us that the riblets community obtained its first fully reliable dataset only when D.~Bechert in Berlin developed a purpose-built test rig, the Berlin oil channel \cite{bechert-etal-1992}, in which the measured quantity was directly the drag difference: targeting the quantity of interest, i.e.\ the drag difference under identical flow conditions, instead of relying on the difference between two separately measured drag forces, was key to improving accuracy and reliability.
\subsection{The Reynolds number}
Dynamic similarity is a well-known concept in fluid mechanics, and enables meaningful comparative tests provided the value of the Reynolds number is the same. The true question is to understand {\em which} Reynolds number should be kept the same. The Reynolds number is defined as the product of a velocity scale $U$ and a length scale $L$, divided by the kinematic viscosity $\nu$ of the fluid. While e.g. in an experiment the precise measurement of $\nu$ might be difficult, its meaning is unequivocal. Choosing $U$ and $L$, instead, presents more than one option.
For the velocity scale $U$, dimples do not lead to specific issues. While for a zero-pressure-gradient boundary layer over a flat plate the use of the external velocity $U_\infty$ sounds reasonable, for internal flows like the plane channel flow one has to choose among the bulk velocity $U_b$, the centerline velocity $U_c$ and the friction velocity $u_\tau$. The choice of reference velocity has already been discussed in the context of skin-friction drag reduction \cite{hasegawa-quadrio-frohnapfel-2014}: provided drag reduction is not too large, and the flow is far enough from laminarity, choosing $U$ is not critical and should not be regarded as a major obstacle.
For the length scale $L$, instead, the situation is different, as dimples themselves contain one or more length scales that could be used in the definition of $Re$. For example, to avoid the ambiguity implied by the definition of the origin for the wall-normal coordinate, Van Nesselrooij et al. \cite{vannesselrooij-etal-2016} and Van Campenhout et al. \cite{vancampenhout-etal-2018} for their boundary layer experiments decided to define a Reynolds number based on the diameter of their circular dimple. Naturally, achieving the same $Re$ based on flow velocity and dimple diameter is not enough to guarantee dynamic similarity in two different flows.
\begin{figure}
\includegraphics[width=0.9\textwidth]{re}
\caption{Drag change versus bulk Reynolds number $Re_b$.}
\label{fig:Re}
\end{figure}
By isolating all the data sets for which a value for the bulk Reynolds number $Re_b$ is given (either explicitly or deduced from equivalent information), and putting together the reported drag changes, one obtains the picture reported in figure \ref{fig:Re}. Besides showing both drag reduction and drag increase, drag changes exhibit every possible trend with $Re_b$: increasing, decreasing, constant or nearly constant, and non-monotonic with either a maximum or a minimum at intermediate $Re_b$. Without excluding additional possible causes, this can be attributed to the host of parameters that are not kept identical across the dataset, besides the Reynolds number, and stresses once more the importance of experiments where only one parameter is changed at a time.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{results}
\caption{Present simulations, circular dimples at various sizes and Reynolds numbers with $2690 \le Re_b \le 10450$. Left: drag changes vs dimple depth in inner units. Right: drag changes vs dimple depth in outer units.}
\label{fig:deep-dimples}
\end{figure}
In a turbulent wall flow, the Reynolds number is an essential ingredient to define the proper scaling of important quantities, such as the total drag change. If for example only the dimple depth $d$ is varied, its value can be set in wall units ($d^+$) or in outer units ($d/h$), and, if the Reynolds number is also changed, various combinations of $d^+$ and $d/h$ become possible. It is the flow physics that dictates which scaling works best at collapsing results.
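As a minimal illustration of the interplay between the two scalings (our own sketch, not taken from any cited study), the inner-scaled depth follows from the outer-scaled one through the friction Reynolds number $Re_\tau = u_\tau h/\nu$:

```python
# Relation between inner- and outer-scaled dimple depth (illustrative only;
# d, h and the friction Reynolds number Re_tau = u_tau*h/nu are as defined
# in the text, the function name is ours).
def depth_in_wall_units(d_over_h, re_tau):
    """d+ = d*u_tau/nu = (d/h) * Re_tau."""
    return d_over_h * re_tau

# The same geometric depth d/h corresponds to very different d+ as Re grows:
for re_tau in (180, 590, 1000):
    print(re_tau, depth_in_wall_units(0.25, re_tau))
```

This is why a study that varies $Re$ at fixed geometry necessarily varies $d^+$ and $d/h$ in a coupled way, and only a dedicated campaign can disentangle the two.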
We have performed two sets of DNS simulations (see \S\ref{sec:appendix} for details) to understand the scaling of drag changes induced by circular dimples when only their size is changed while their shape is preserved. We have fixed the values of $d/D$ and $r/R$, varied the depth $d$ (either in inner units, $d^+$, or in outer units, $d/h$), and let all the other parameters vary accordingly, as prescribed by equation \eqref{eq:geom_rel}.
Figure \ref{fig:deep-dimples} plots the results and shows that drag changes (in this specific case, drag increases) appear to follow an outer scaling: all the data points collapse onto a single curve when drag changes are plotted against $d/h$. This is an expected result, as these dimples are rather deep, and thus somehow akin to a large-scale $d$-roughness \cite{jimenez-2004}, where the large cavities basically destroy the near-wall layer, i.e.\ the only region where inner scaling would make sense.
\subsection{The equivalent flat wall}
The comparison between flat and dimpled wall can be set up for internal or external flows. The latter, which may be less convenient in numerical simulations owing to their non-parallel nature, present a significant advantage in this context, since drag and its changes simply have to be computed for the same plate immersed in the same external flow, and a reduced drag force is unequivocally advantageous. For internal flows, however, the non-planar dimpled wall brings up the problem of properly defining the location of the equivalent flat wall and, in general, of setting up the comparison properly.
\begin{figure}
\includegraphics[width=\textwidth]{equiv-channel}
\caption{A dimpled wall and two different, equivalent flat channels. The red/blue lines indicate the dimple profile. Left: the channel height $2h$ goes from the top wall to the dimple tip; right: the channel height $2h$ goes from the top wall to the dimple lowest point.}
\label{fig:equiv-channel}
\end{figure}
As shown schematically in fig.\ref{fig:equiv-channel} for a channel flow, the definition of the reference flat wall impacts the reference length $h$ and, eventually, the value of the Reynolds number of the flow to compare with. The reference wall might be placed on the flat surface among dimples, at the position of lowest elevation in the cavity, at the average height of the dimpled surface, etc., leading to different flow volumes.
To properly account for this effect, let us start from the usual definition of the bulk Reynolds number $Re_b = U_b h / \nu$, where $h$ is a reference length (e.g. half the height of the flat channel) and $\nu$ is the kinematic viscosity. Since the cross-sectional area $A(x)$ of the dimpled channel changes along the streamwise direction, the bulk velocity $U_b$, defined as the average velocity across the section, becomes itself a streamwise-dependent function:
\begin{equation}
U_b(x)= \frac{1}{A(x)}\int_{A(x)} u(\mathbf{x}) \text{d}A .
\end{equation}
We thus replace this definition with a volume average, and define a new bulk velocity $\mathsf{U_b}$ as an average over the volume to obtain a streamwise-independent quantity:
\begin{equation}
\mathsf{U_b} = \frac{1}{V} \int_V u(\mathbf{x}) \text{d}V .
\end{equation}
Note that the two quantities $U_b$ and $\mathsf{U_b}$ coincide for a flat wall. A comparison at the same flow rate requires that the volumetric flow rate
\begin{equation}
Q = \int_{A(x)} u(\mathbf{x}) \text{d}A = \frac{1}{L_x} \int_0^{L_x}\int_{A(x)} u(\mathbf{x})\text{d}A\text{d}x = \frac{1}{L_x} \int_V u(\mathbf{x}) d V = \frac{V}{L_x} \mathsf{U_b}
\end{equation}
is the same for the flat and dimpled channels, provided the streamwise length $L_x$ of the channel is the same. This implies that $V_f \mathsf{U_{b,f}} = V_d \mathsf{U_{b,d}}$, where the subscripts $\cdot_f$ and $\cdot_d$ refer to quantities measured in the flat and dimpled channel respectively. In the end, the bulk velocity in the dimpled channel (and hence the bulk Reynolds number) needs to be rescaled by the volume ratio:
\begin{equation}
\mathsf{U_{b,d}} =\frac{V_f}{V_d} \mathsf{U_{b,f}};
\qquad
Re_{b,d} = \frac{V_f}{V_d} Re_{b,f}.
\end{equation}
The numerical value of $Re_b$ is thus affected by the choice of the equivalent flat channel. For example, the equivalent flat channel might go from the top wall to the lowest point of the dimple, and $Re_{b,d}>Re_{b,f}$. In contrast, if the equivalent channel goes from the top wall to the tip of the dimple, $Re_{b,d}<Re_{b,f}$. The two bulk Reynolds numbers end up being the same only when the volume is preserved in the reference and dimpled channels (i.e. the equivalent flat channel is located at the average dimple height).
If the comparison is carried out by DNS, one conveniently measures the time-averaged value $\overline{f}$ of the spatially uniform volume force $f$ required to maintain a constant flow rate at each time step. This volume force is interpreted as $f = \Delta P / L_x$, where $\Delta P$ is the pressure drop along the channel. The proper measure of the drag change is:
\begin{equation}
\Delta Drag = \frac{ V_d \overline{f_d} - V_f \overline{f_f} }{ V_f \overline{f_f} } = \frac{ (V_d/V_f) \, \overline{f_d} - \overline{f_f}}{\overline{f_f}}.
\end{equation}
Therefore, the change of fluid volume has to be considered also when measuring the drag change in the controlled case.
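The bookkeeping implied by the relations above can be condensed into a few lines (a minimal sketch with arbitrary numbers; function names are ours):

```python
# Volume-ratio corrections for a flat vs. dimpled channel comparison,
# following the relations in the text (illustrative sketch, names are ours).
def corrected_bulk_re(re_b_flat, vol_flat, vol_dimpled):
    """Re_{b,d} = (V_f/V_d) * Re_{b,f} at equal volumetric flow rate."""
    return re_b_flat * vol_flat / vol_dimpled

def drag_change(f_dimpled, f_flat, vol_flat, vol_dimpled):
    """Delta Drag = (V_d*fbar_d - V_f*fbar_f) / (V_f*fbar_f)."""
    return (vol_dimpled * f_dimpled - vol_flat * f_flat) / (vol_flat * f_flat)

# A 2% larger fluid volume with an unchanged mean forcing already amounts
# to a 2% drag increase once the correction is applied:
print(drag_change(1.0, 1.0, vol_flat=1.0, vol_dimpled=1.02))
```

The sketch makes explicit that forgetting the $V_d/V_f$ factor biases $\Delta Drag$ by exactly the volume ratio, which is far from negligible for deep dimples.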
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{DC_difference}
\caption{Drag changes, measured by DNS, for circular dimples with $d/h=0.25$ at $Re_b \approx 2800$. Red/blue bars express drag changes when the equivalent channel defines $2h$ as the distance between the top wall and the top/bottom of the dimple (the color code is the same as in figure \ref{fig:equiv-channel}). Case A: comparison at the same $Re_b$, $\Delta Drag$ computed without accounting for the volume ratio. Case B: as case A, but $\Delta Drag$ is corrected with the volume ratio. Cases C and D are like cases A and B, but the comparison is made at the same flow rate.}
\label{fig:origin}
\end{figure}
Figure \ref{fig:origin} exemplifies the consequences of neglecting these considerations. These are certainly exaggerated by the choice of working with a dimple configuration that causes a large change of drag. However, the relative differences are major; neglecting such considerations would most certainly obscure the true ability of dimples to alter skin-friction drag.
\subsection{The drag reduction metrics}
In closing, we mention a final methodological issue that affects drag reduction measurements for dimples, riblets, and roughness at large: the proper metric to express it. It is customary to express drag reduction as a (percentage) change in the skin-friction coefficient at a given $Re$; unfortunately, the coefficient itself depends on the Reynolds number already in the flat-wall case, thus making percentage changes unsuitable for a robust assessment of the drag change properties of a given surface. The complete information would be the $(\Delta Drag, Re)$ pair. Alternatively, the proper metric for expressing drag reduction is the vertical shift of the logarithmic portion of the mean streamwise velocity profile expressed in viscous units.
This is a known concept for roughness \cite{jimenez-2004} as well as riblets \cite{luchini-1996, spalart-mclean-2011}, and also extends to some active flow control strategies \cite{gatti-quadrio-2016}. As long as the direct effect of the roughness remains confined within the buffer layer of the turbulent flow, it can be translated into an upward shift $\Delta U^+$ of the logarithmic velocity profile in the law of the wall: a positive $\Delta U^+$ corresponds to drag reduction, and a negative $\Delta U^+$ implies drag increase, as for the conventional $k$-type roughness. Part of the trends seen in figure \ref{fig:Re} for drag reduction data are due to Reynolds effects; properly removing them via analytical relations is possible, as done in ref.\cite{gatti-quadrio-2016} for active spanwise forcing, and would contribute to clarifying the situation, by exposing some remaining ``puzzling'' trends with $Re$ (to cite words used in ref.\cite{spalart-etal-2019}).
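A hedged sketch of this metric follows (names and numbers are ours, not from the cited papers; the relation is the standard approximation at matched friction Reynolds number, where the bulk velocity in wall units is $U_b^+ \simeq \sqrt{2/C_f}$):

```python
import math

# Estimate the log-law shift Delta U+ from the smooth- and dimpled-wall
# friction coefficients, at matched friction Reynolds number:
#   Delta U+ ~= sqrt(2/Cf_dimples) - sqrt(2/Cf_smooth)
# Positive Delta U+ means drag reduction. Illustrative approximation only.
def delta_u_plus(cf_smooth, cf_dimples):
    return math.sqrt(2.0 / cf_dimples) - math.sqrt(2.0 / cf_smooth)

# A 5% reduction of Cf (hypothetical numbers) maps to a modest upward shift:
print(delta_u_plus(0.0080, 0.0076))
```

Unlike the raw percentage change, $\Delta U^+$ computed this way is (to first approximation) Reynolds-independent, which is precisely why it is the preferred currency for comparing surfaces across studies.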
\section{Conclusions}
\label{sec:conclusions}
In this review paper we have provided a brief and up-to-date description of what we know and what we don't about the potential of dimples for turbulent skin-friction drag reduction.
While we obviously cannot offer an answer to the still-standing question of whether or not dimples are a suitable technique to reduce turbulent skin-friction drag, it is our hope that this comprehensive overview will at least help the newcomer to frame the problem, quickly identify the key references, and get a glimpse of the complexity of the topic.
While reviewing the state of the art, we have also mentioned some methodological issues that bear a critical importance when attempting to measure drag changes by dimples. Leveraging concepts and procedures (and perhaps facilities altogether) developed over the years for riblets might yield data that are reliable enough to begin understanding the physics behind dimple drag reduction, a necessary and preliminary step to improve their performance.
\section{Introduction}
Reducing the drag generated by a fluid in relative motion with respect to a solid body is at once a fundamental attempt to learn how to interact favorably with turbulence, and a technological challenge with immense potential in many application fields. The interest in turbulent flow control is steadily increasing, owing to massive economic and environmental concerns.
Skin-friction drag is perhaps the most essential manifestation of the dissipative nature of turbulence, and accounts for the total drag in the case of planar walls (as in a channel flow or a zero-incidence flat plate boundary layer). Several techniques are available to reduce friction drag below the level typical of a smooth solid wall; they can be categorized into active (requiring extra energy) and passive ones.
The former typically provide larger savings, but imply extra complexity and cost, so that the ideal technique for friction reduction remains a passive one, often embodied in surface patterns performing better than the planar flat geometry.
The most prominent example of such patterns is riblets \cite{garcia-jimenez-2011-a}. Introduced by NASA in the 1980s, and intensely studied over the subsequent years for their potential in aeronautical applications, riblets consist of streamwise-aligned microgrooves, and have the proven ability to reduce friction drag. The riblets cross-section can be of several shapes, the triangular one being perhaps the most popular, but an essential feature is a very sharp tip. Although new developments \cite{quadrio-etal-2022,cacciatori-etal-2022} hint at a bright future for riblets in aeronautics and suggest lower cost/benefit ratios, riblets are currently still not deployed in commercial transport aircraft, owing to their limited savings \cite{mclean-georgefalvy-sullivan-1987, kurita-etal-2020} and to critical production and maintenance issues, stemming from the microscopic size of riblets and from the requirement of preserving the sharpness of the tip.
A possible alternative to riblets is emerging recently, easier to manufacture and lacking any sharp detail. The pattern to impress on the surface consists of small dimples. Dimples, i.e. small concavities imprinted on a surface, have been extensively studied in the past for their ability to enhance the heat transfer of a surface (see e.g. ref.\cite{leontiev-etal-2017} and references therein). The use of dimples on the surface of bluff bodies (e.g. a golf ball) is well known, and their ability to influence the turbulent boundary layer and the separation on the body is rather well understood \cite{choi-jeon-choi-2006}; the same concept is also being considered in sport-car racing \cite{allarton-etal-2020}. In this paper we concern ourselves with dimples applied to an otherwise flat surface: the goal is to reduce the turbulent skin-friction drag. We limit our review to passive dimples, although active control by dimples has also been proposed \cite{ge-fang-liu-2017}.
The ability of dimples to reduce drag is far less intuitive than their ability to increase heat exchange, and was first considered at the Kurchatov Institute of Atomic Energy \cite{kiknadze-krasnov-chushkin-1984} in Russia, where hemispherical dimples were placed on the surface of a heat exchanger and found to reduce the flow resistance as well.
In subsequent studies by the same group, a drag reduction of about 15--20\% was mentioned \cite{alekseev-etal-1998}, the highest performance reported so far.
In the last two decades, a handful of research groups devoted their efforts to understanding the drag reduction problem by dimples, attempting to come up with a recipe for the best shape and size.
Unfortunately, to date no consensus has been reached on the effectiveness of dimples in reducing friction drag, and on their working mechanism: some authors confirmed drag reduction, others did not.
Measuring -- in the lab, or with a numerical simulation -- the (very small, if any) drag reduction induced by dimples is by no means a trivial task.
A reduction of friction drag would unavoidably be accompanied by an increase of pressure drag, with a net benefit possible only if the former outweighs the latter.
There are just so many design variables to be tested: the geometry of the dimple itself, its size, and the spatial layout and relative arrangement of the dimples on the surface all need to be carefully considered. This is a daunting task as long as no theory, hypothesis on the working mechanism or scaling argument is available to guide the search in such a vast parameter space.
However, it is undeniable that dimples, once proved to work, would provide substantial advantages over riblets, thanks to their simplicity, ease of production, lack of sharp corners and easier maintenance.
The goal of the present contribution is to provide the first comprehensive review of the published information available on dimples for skin-friction drag reduction.
Since the very fact that dimples can actually work is still subject to debate, we will complement the review with a discussion of important procedural aspects that in our view are essential, should one embark on a (numerical or laboratory) experiment to assess the potential for drag reduction.
Such procedures (or, more precisely, their absence) are at the root of the large uncertainty and scatter of the available data, and have hindered so far the answer to such a simple question as: Do dimples actually work to reduce turbulent drag?
The present contribution is structured as follows. \S\ref{sec:overview} provides an overview of the experimental and numerical studies on the drag reduction properties of dimples. \S\ref{sec:parameters} describes the geometrical parameters defining the dimples, whereas \S\ref{sec:physics} reports the two main physical explanations for the working mechanism of drag-reducing dimples. In \S\ref{sec:analysis} we highlight the problem of properly measuring drag reduction, and provide guidelines and recommendations on how to properly compare results among different studies. This review paper is closed by brief concluding remarks in \S\ref{sec:conclusions}. Appendix \ref{sec:appendix} contains details of the DNS simulations that have been carried out for the present study.
In the next Subsection, the concept of dimples is introduced first, together with the notation that will be used later to indicate their geometrical parameters.
\subsection{Characterization of a dimpled surface}
\begin{figure}
\includegraphics[width=1\textwidth]{section}
\caption{Cross-section of the parametric dimple geometry introduced by ref.\cite{chen-chew-khoo-2012} (left) and streamwise shift of the deepest point (right).}
\label{fig:spherical_geom}
\end{figure}
A solid wall covered with dimples is described by several geometric parameters: the dimple shape, the relative spatial arrangement of the dimples and the coverage ratio (ratio between non-planar and total surface). Originally, dimples were conceived as spherical recesses, hence with a circular footprint on the wall. One particular class of circular dimples, introduced by Chen et al. \cite{chen-chew-khoo-2012}, has become quite popular thanks to its parametric nature and represents the starting point of our description. This design is the union of a spherical indentation and a torus, meeting tangentially in a regular way that avoids sharp edges. A cross-section of this dimple, which possesses axial symmetry, is drawn in figure \ref{fig:spherical_geom}.
Four parameters define the geometry of this dimple: the diameter $D$ of the circular section at the wall, the depth $d$ of the spherical cap, the curvature radius $r$ at the edge and the curvature radius $R$ of the spherical cap. These four parameters are not independent, but linked by one analytical relation, so that only three degrees of freedom exist. In fact, geometry dictates that:
\begin{equation}
\frac{D}{2} = \sqrt{ d \left( 2R + 2 r -d \right)}.
\label{eq:geom_rel}
\end{equation}
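This constraint is easy to verify, and to solve for any one parameter given the other three, with a few lines of code (a sketch with arbitrary numbers; function names are ours):

```python
import math

# Geometric constraint of the circular dimple of Chen et al.:
#   D/2 = sqrt( d * (2R + 2r - d) )
# Given any three of (D, d, r, R), the fourth follows. Illustrative sketch.
def diameter(d, r, R):
    """Footprint diameter D from depth d, edge radius r, cap radius R."""
    return 2.0 * math.sqrt(d * (2.0 * R + 2.0 * r - d))

def edge_radius(D, d, R):
    """Solve the constraint for the edge curvature radius r."""
    return ((D / 2.0) ** 2 / d + d - 2.0 * R) / 2.0

# Round trip with arbitrary (hypothetical) dimensions:
D = diameter(d=0.5, r=1.0, R=4.0)
assert abs(edge_radius(D, d=0.5, R=4.0) - 1.0) < 1e-12
```

The round trip makes the three degrees of freedom explicit: fixing $d/D$ and $r/R$, as done in the simulations of \S\ref{sec:analysis}, pins down the whole shape up to a single size parameter.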
Moreover, a handful of studies extended this baseline circular geometry by introducing the additional parameter $s$, which describes the shift along the streamwise direction (downstream for $s>0$, upstream for $s<0$) of the point of maximum depth, which in the baseline geometry lies exactly at the center of the dimple cavity.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{shapes}
\caption{Popular variants of the dimple shape.}
\label{fig:diff_geom}
\end{figure}
It is difficult to overemphasize the importance of a well-defined parametric geometry in the quest for the optimally performing dimple. Although the circular shape is by far the most popular, over the last years a number of alternative dimple shapes have been studied; sketches for the various shapes are drawn in figure \ref{fig:diff_geom}. Some of them derive from a deformation of the circular shape: e.g. the elliptical dimple is the result of a symmetrical stretch of the circular dimple along the streamwise direction. The teardrop dimple has two segments tangent to the circle, preserves the spanwise symmetry and exists in two variants depending on whether the triangle points upstream or downstream. The diamond dimple is the union of the two variants of teardrop and possesses two vertices. Only the triangular dimple differs substantially from the circular shape and --- as for the teardrop dimple --- can have the streamwise-aligned vertex pointing upstream or downstream.
When a single dimple is identically replicated to fully cover the planar surface, the relative spatial arrangement of the dimples is important in determining the overall influence on the flow.
A regular spatial layout of a dimpled surface depends on the distance between two adjacent dimples in both the streamwise and spanwise directions.
Another parameter that is related to the spatial arrangement of dimples is the coverage ratio, that can be defined as the percentage of recessed surface compared to the total surface of the wall.
(The reader will notice an ambiguity, as at the denominator of the coverage ratio one could put either the surface area of the equivalent flat wall, or the wetted area of the dimpled surface. This ambiguity is often ignored, but it is commented upon e.g. in refs.\cite{prass-etal-2019,tay-khoo-chew-2017,ng-etal-2020}.) It is doubtful whether coverage, which is affected by so many parameters, is by itself a useful quantity to describe dimples performance.
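The two possible denominators mentioned above can be made concrete with a small sketch (our own illustration; the wetted area of the recess is approximated here by a spherical cap, which only holds for an idealized circular dimple):

```python
import math

# Two definitions of coverage ratio for N circular dimples of diameter D
# on a plate of planform area A_plate (names and numbers are ours).
def coverage_projected(n, D, A_plate):
    """Recessed footprint area over the flat planform area."""
    return n * math.pi * (D / 2.0) ** 2 / A_plate

def coverage_wetted(n, D, d, A_plate):
    """Recessed wetted area over the total wetted area, with the recess
    approximated as a spherical cap of depth d (cap area = pi*(a^2 + d^2))."""
    cap = math.pi * ((D / 2.0) ** 2 + d ** 2)
    flat = A_plate - n * math.pi * (D / 2.0) ** 2
    return n * cap / (flat + n * cap)

# For shallow dimples (d << D) the two definitions nearly coincide:
print(coverage_projected(10, 1.0, 20.0), coverage_wetted(10, 1.0, 0.05, 20.0))
```

For shallow dimples the discrepancy is small, which may explain why the ambiguity is so often ignored; for deep dimples it is not, and the definition used should always be stated.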
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{layout}
\caption{Most popular dimples layout: staggered (left) and flow aligned (right).}
\label{fig:stag-alig}
\end{figure}
Moreover, dimples can be arranged either irregularly or regularly following a certain pattern. The two most widespread patterns are the staggered and the flow-aligned arrangements. Their definition is not unique. Often, the layout is referred to as staggered when the streamwise projection of one dimple overlaps with the following one, while it is called flow-aligned otherwise (see figure \ref{fig:stag-alig}). However, this definition, that corresponds to the most used arrangement, is not universally accepted. Prass et al. \cite{prass-etal-2019}, indeed, define the staggered arrangement as having the distance in spanwise direction from the centres of two adjacent dimples equal to half the distance between the centres of two non-adjacent dimples. Several additional patterns have been tested, e.g. the hexagonal one.
\section{Do dimples work?}
\label{sec:overview}
In the last two decades a number of dimple-related contributions have appeared: many works claim that drag reduction is possible for certain geometries and flow conditions \cite{veldhuis-vervoort-2009, tay-2011, tay-khoo-chew-2015, vannesselrooij-etal-2016, tay-etal-2016, tay-lim-2017, tay-lim-2018, spalart-etal-2019}, whereas others only report drag increase \cite{tay-khoo-chew-2017, vancampenhout-etal-2018, prass-etal-2019, ng-etal-2020}. Notably, one work \cite{spalart-etal-2019} set out to specifically reproduce the experimental drag reduction results described in ref.\cite{vannesselrooij-etal-2016} with a state-of-the-art combined numerical/experimental study, and failed.
Such an uncertain situation can be traced back to the lack of a generally accepted standard for measuring drag and for comparing different geometries among themselves and with the reference flat wall, since there are unavoidable differences between measuring drag in experiments and in simulations, and between external (e.g. boundary layer) and internal (plane channel) flows.
An additional reason explaining the scatter of available results consists in the still limited understanding of how dimples modify the surrounding flow field.
Knowing the physics involved in drag reduction by dimples would be extremely useful for the optimization of the several parameters involved. A description of the effects of the many geometrical parameters, and of the conjectures on the working mechanism of dimples, is reported later in \S\ref{sec:parameters} and \S\ref{sec:physics} respectively.
We start by presenting an overview of the main results available in the literature, by focusing on the raw drag reduction information.
They are reported in Table \ref{tab:comparison}, which contains entries for the best drag reduction figure that could be extracted from each paper; when multiple dimple shapes are present, they are all considered. Drag change is simply defined here as $\Delta Drag = Drag_{dimples} - Drag_{smooth}$, where $Drag_{smooth}$ and $Drag_{dimples}$ are the (measured or computed) friction drag of the reference flat plate and the total drag of the dimpled plate, respectively. Negative values of $\Delta Drag$ thus correspond to drag reduction. Across the several studies, various definitions of the Reynolds number are used, particularly for internal flows. These have all been converted, whenever possible, to values of the bulk Reynolds number $Re_b$, using the empirical correlation of Dean \cite{dean-1978}. Several other entries are also available in the Table, and will be defined and discussed throughout the text.
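The kind of conversion mentioned above can be sketched as follows, assuming Dean's friction law in the form $C_f = 0.073\,Re_b^{-1/4}$ with $Re_b$ based on the bulk velocity and the full channel height (conventions vary across papers, so this is illustrative only; function names are ours):

```python
# Converting between friction and bulk Reynolds numbers via Dean's
# correlation Cf = 0.073 * Re_b**(-0.25), with Re_b = U_b * 2h / nu.
# Illustrative sketch only: the exact convention must be checked per paper.
def re_tau_from_re_b(re_b):
    cf = 0.073 * re_b ** (-0.25)
    # u_tau/U_b = sqrt(Cf/2), hence Re_tau = u_tau*h/nu = (Re_b/2)*sqrt(Cf/2)
    return 0.5 * re_b * (cf / 2.0) ** 0.5

def re_b_from_re_tau(re_tau, guess=1e4, iters=50):
    """Fixed-point inversion of the relation above (contraction, exponent 1/8)."""
    re_b = guess
    for _ in range(iters):
        cf = 0.073 * re_b ** (-0.25)
        re_b = 2.0 * re_tau / (cf / 2.0) ** 0.5
    return re_b

# e.g. a DNS at Re_tau = 180 corresponds to Re_b of a few thousand:
print(re_b_from_re_tau(180.0))
```

Such conversions are what allow the scattered literature, which reports $Re_D$, $Re_\delta$, $Re_\tau$ or $Re_b$ depending on the study, to be collected on the single axis of figure \ref{fig:Re}.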
\subsection{Experimental studies}
\label{sec:experiments}
The majority of the experimental studies carry out their tests in a wind tunnel and compare the drag measured on a flat plate with the drag measured on a dimpled plate. The flat/dimpled plate lies either on the upper or bottom wall, whereas the other wall of the wind tunnel is smooth. The plate is installed at a certain distance from the entrance section for the flow to become fully developed by the time it reaches the test section. A major difference among the various studies consists in the internal/external character of the flow.
The largest drag reduction, as observed in Table \ref{tab:comparison}, is a whopping 14\% found in the boundary layer experiment by Veldhuis \& Vervoort \cite{veldhuis-vervoort-2009} at the Technical University of Delft. The free-stream velocity was $7.5$~m/s and the dimples were of circular shape. They found the staggered configuration to be more efficient in reducing drag than the flow-aligned one. Other boundary layer experiments carried out later by the same group at TU Delft reported a significantly smaller but still extremely interesting maximum drag reduction of 4\% \cite{vannesselrooij-etal-2016}, obtained at a Reynolds number based on the free-stream velocity $U_\infty$ and the dimple diameter $D$ of $Re_D \approx 40000$, which corresponds to a Reynolds number based on the boundary layer thickness $\delta$ of $Re_\delta = 1500$. In this case, the dimples are relatively large (in physical size), shallow and circular, with a 50\% smaller coverage ratio than in ref.\cite{veldhuis-vervoort-2009}. In a later study \cite{vancampenhout-etal-2018}, they also measured a drag increase of 1\% for shallow dimples with different layouts at $Re_D \approx 63100$. Van Nesselrooij et al. \cite{vannesselrooij-etal-2016} presented what is described in ref.\cite{spalart-etal-2019} as a ``very convincing experimental paper'', studying different dimple configurations and finding that the best one consistently involves sparse (low coverage) and staggered dimples over the entire range of considered Reynolds numbers. They also focused on the importance of the depth of the dimples. When made dimensionless with the dimple diameter, shallower dimples are found to work better for each flow condition; however, when depth is compared to the boundary layer thickness, shallow dimples work better at low $Re$ but deep dimples are better at higher $Re$.
Another group that has provided significant contributions to the dimples research thread is from the National University of Singapore, with Tay and colleagues. They performed experimental studies on a channel flow and reported up to 7.5\% drag reduction in ref.\cite{tay-lim-khoo-2019} for diamond-shaped dimples at a Reynolds number based on the bulk velocity $U_b$ and the channel semi-height $h$ of $Re_b \approx 30000$ and a layout with full coverage. Large drag reductions were measured also with other non-conventional dimple shapes \cite{tay-lim-2018}, such as the upstream-pointing teardrop at 6\%, or the downstream-pointing teardrop at 5\%, in flows with $Re_b \approx 30140$ and $Re_b \approx 22270$, respectively. Conversely, the triangular shape was shown to always lead to drag increase \cite{tay-etal-2016}. Circular dimples were found to be less effective than diamond and teardrop shapes. Drag reductions of up to 2\% \cite{tay-2011} and 2.8\% \cite{tay-khoo-chew-2015} were found at $Re_b \approx 17500$ and $Re_b \approx 32100$ for different physical geometrical parameters of the dimple but with an identical layout and coverage ratio of 90\%. At $Re_b \approx 42850$ a drag reduction of 3.5\% is measured in ref.\cite{tay-lim-2018}. In ref.\cite{tay-khoo-chew-2015} the same physical dimples and flow geometry are compared while varying the coverage ratio, and a dense layout with 90\% coverage is found to perform better than a sparse one with 40\% coverage. They also compare two different dimple depths, measuring a slightly higher drag reduction for deeper dimples. Finally, Tay \& Lim in ref.\cite{tay-lim-2017} experiment with shifting the point of maximum depth within the dimple along the streamwise direction, and measure the best performance of 3.7\% when the shift is $s=0.1D$ in the downstream direction.
\subsection{Numerical simulations}
\label{sec:simulations}
For drag reduction studies, numerical simulations need to resort to high-fidelity approaches, like Direct Numerical Simulation (DNS) and highly resolved Large Eddy Simulation (LES). Obviously, such simulations are not very practical for large-scale parametric studies, especially when the Reynolds number becomes large, since their unit computational cost rapidly increases with $Re$. The need for high-fidelity methods, the computational cost and the requirement to handle non-planar geometries are among the reasons why numerical studies of dimples are fewer than experiments. However, simulations (and DNS in particular, which avoids the need for turbulence modeling) are perfectly suited for such fundamental studies and provide the full information required to understand the working mechanism of dimples, e.g. by breaking up the drag change into its friction and pressure contributions and by yielding a detailed and complete statistical characterization of the turbulent flow.
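As an order-of-magnitude indication of this cost, the classical Kolmogorov-scaling argument (a textbook estimate, not specific to the studies reviewed here) puts the number of grid points of a DNS at $N \sim Re^{9/4}$ and, once the number of time steps is accounted for, the total computational cost at
\begin{equation}
\mathrm{cost} \sim Re^{3},
\end{equation}
so that, roughly, doubling the Reynolds number multiplies the cost of a simulation by a factor of eight.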
Circular dimples in a turbulent channel flow were studied with DNS for the first time in 2008 by Lienhart et al. \cite{lienhart-breuer-koksoy-2008}, who reported a drag increase of 1.99\% at $Re_b \approx 11000$. The same work contains also an experimental study of the same configuration, for which no drag changes were observed.
Ng et al. \cite{ng-etal-2020} at NUS performed one of the most interesting DNS studies, considering a turbulent channel flow at $Re_b=2800$ and examining different dimple geometries. They found that the classic circular dimple increases drag by 6.4\%, an amount that decreases to 4.6\% when the point of maximum depth is shifted downstream by $s=0.1D$. They also studied non-circular dimple shapes, obtaining this time a large drag reduction of 7.4\% for the diamond dimple, 4.9\% for the elliptical dimple and 3.1\% for the upstream-pointing teardrop dimple; the downstream-pointing teardrop dimple, instead, gave 0.1\% drag increase.
Another recent numerical channel flow study is that by Tay and coworkers \cite{tay-khoo-chew-2017}: they ran a Detached Eddy Simulation (in which a baseline LES is complemented with a RANS model for the near-wall region) to replicate their own experimental study described in ref.\cite{tay-khoo-chew-2015}. The DES yielded a 1\% drag increase at $Re_b= 2830$ and predicted drag increase for every case tested, thus failing to confirm the experiments, which measured smaller drag increases and even a slight drag reduction for one particular geometry. The suitability of DES for such drag reduction studies, however, remains dubious.
Prass et al. \cite{prass-etal-2019} published the only work in which an open channel is considered: with a LES they report a drag reduction of 3.6\% at $Re_b \approx 6121$. They also considered two different configurations, finding that flow-aligned dimples perform better than staggered dimples.
There is only one DNS study for the boundary layer, i.e. the already mentioned work by Spalart et al. \cite{spalart-etal-2019}, in which circular dimples at $Re_\delta = 30000$ were considered as the baseline geometry. They additionally studied the effect of the edge radius $r$, and found that with proper smoothing of this edge a drag change of -1.1\% is obtained, which results from a 1.7\% reduction of friction drag, partially offset by a 0.6\% increase of pressure drag.
\newcommand{\specialcell}[2][c]{%
\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
\begin{landscape}
\begin{table}
\caption{Summary of the main parameters and results of the literature. Columns report, in sequential order: 1. the reference and its acronym; 2. the numerical (DNS: direct numerical simulation, LES: large eddy simulation, DES: detached eddy simulation; FVM: finite volume method, SEM: spectral element method) or experimental approach; 3. the flow type: channel flow (CF), half-channel flow (HCF) or boundary layer (BL); 4. the value of the Reynolds number: $Re_b$ for CF and HCF, $Re_\delta$ for BL (other $Re$ definitions for CF are converted to $Re_b$ using Dean's law); 5. the dimple shape: circular (Circ), triangular (Triang), diamond (Diam), elliptical (Ell), teardrop (Tear); upstream-pointing (Up), downstream-pointing (Do); 6. spanwise width $D_z$ and streamwise length $D_x$, expressed as a fraction of the reference length $L$ (the channel half-height $h$ for CF and the boundary layer thickness $\delta$ for BL); for a circular dimple $D_z=D_x$, thus only one value is reported; 7. the dimple depth $d$; 8. the edge curvature radius $r$; 9. the curvature radius $R$ of the spherical cap; 10. the shift $s$ of the point of maximum depth; 11. the coverage ratio; 12. the type of layout: S: staggered, A: aligned, H: hexagonal; 13. the percentage drag change. ``-'' is used when some information required to compute the value is lacking.
\label{tab:comparison}}
\newcolumntype{C}{>{\centering\arraybackslash}X}
\begin{tabularx}{1\linewidth}{CCCCCCCCCCCCC}
\toprule
\textbf{Article} & \specialcell[t]{\textbf{Num/} \\ \textbf{Exp}} &\textbf{Flow} &\textbf{Re} &\textbf{Shape} & \specialcell[t]{$\mathbf{D_z/L}$ \\ $\mathbf{(D_x/L)}$} &$\mathbf{d/D_z \%}$ &$\mathbf{r/D_z}$ &$\mathbf{R/D_z}$ &$\mathbf{s/D_x}$ &\textbf{Cov$\%$} &\textbf{Layout} &$\mathbf{\Delta Drag \%}$ \\
\toprule
\multirow{2}*{\makecell{LBK-08 \\ \cite{lienhart-breuer-koksoy-2008}}} &Exp &CF &$\approx 10000$ &Circ &$0.6$ &$5$ &- &- &$0$ &$22.5$ & A &$0$ \\
&DNS (FVM) &CF &$10935$ &Circ &$0.6$ &$5$ &- &- &$0$ &$22.5$ & A &$+1.99$ \\
\midrule
\multirow{1}*{\makecell{VV-09 \\ \cite{veldhuis-vervoort-2009}}} &Exp &BL &- &Circ &- &$5$ &- &- &$0$ &$60$ &S &$-14$ \\
\\
\midrule
\multirow{1}*{T-11 \cite{tay-2011}} &Exp &CF &$\approx 17500$ &Circ &$5$ &$5$ &$0.84$ &$1.68$ &$0$ &$90$ &S &$-2$ \\
\midrule
\multirow{1}*{\makecell{TKC-15 \\ \cite{tay-khoo-chew-2015}}} &Exp &CF &$\approx 32100$ &Circ &$5$ &$5$ &$4.2$ &$8.45$ &$0$ &$90$ &S &$-2.8$ \\
\midrule
\multirow{1}*{\makecell{VVVS-16 \\ \cite{vannesselrooij-etal-2016}}} &Exp &BL &$\approx 1500$ &Circ &$26.67$ &$2.5$ &$0.5$ &$4.51$ &$0$ &$33.3$ &S &$-4$ \\
\midrule
\multirow{3}*{\makecell{TLKJ-16 \\ \cite{tay-etal-2016}}} &Exp &CF &$\approx 5625$ &Triang Up &$- (4.67)$ &$5$ &- &- &$-0.5$ &- &S &$+4.8$ \\
&Exp &CF &$\approx 5625$ &Triang Do &$- (4.67)$ &$5$ &- &- &$+0.5$ &- &S &$+2.8$ \\
&Exp &CF &$\approx 50350$ &Circ &$5$ &$5$ &- &- &$+0.1$ &$90$ &S &$-3.6$ \\
\midrule
\multirow{1}*{\makecell{TKC-17 \\ \cite{tay-khoo-chew-2017}}} &DES (FVM) &CF &$\approx 2830 $ &Circ &$5$ &$1.5$ &$0.84$ &$1.68$ &$0$ &$90$ &S &$+1$ \\
\bottomrule
\end{tabularx}
\end{table}
\unskip
\end{landscape}
\begin{landscape}
\begin{table}
\newcolumntype{C}{>{\centering\arraybackslash}X}
\begin{tabularx}{1\linewidth}{CCCCCCCCCCCCC}
\toprule
\textbf{Article} & \specialcell[t]{\textbf{Num/} \\ \textbf{Exp}} &\textbf{Flow} &\textbf{Re} &\textbf{Shape} & \specialcell[t]{$\mathbf{D_z/L}$ \\ $\mathbf{(D_x/L)}$} &$\mathbf{d/D_z \%}$ &$\mathbf{r/D_z}$ &$\mathbf{R/D_z}$ &$\mathbf{s/D_x}$ &\textbf{Cov$\%$} &\textbf{Layout} &$\mathbf{\Delta Drag \%}$ \\
\toprule
\multirow{1}*{\makecell{TL-17 \\ \cite{tay-lim-2017}}} &Exp &CF &$\approx 28600 $ &Circ &$5$ &$1.5$ &- &- &$+0.1$ &$90$ &S &$+1$ \\
\midrule
\multirow{3}*{\makecell{TL-18 \\ \cite{tay-lim-2018}}} &Exp &CF &$\approx 42850$ &Circ &$5$ &$5$ &- &- &$0$ &- &S &$-3.5$ \\
&Exp &CF &$\approx 30140$ &Tear Up &$5 (7.5)$ &$5$ &- &- &$+0.17$ &$84$ &S &$-6$ \\
&Exp &CF &$\approx 22270$ &Tear Do &$5 (7.5)$ &$5$ &- &- &$-0.17$ &$84$ &S &$-5$ \\
\midrule
\multirow{1}*{\makecell{VVVVS \\-18 \cite{vancampenhout-etal-2018}}} &Exp &BL &- &Circ &- &$2.5$ &$0.5$ &- &$0$ &- &S/A/H &$+1$ \\
\\
\midrule
\multirow{1}*{\makecell{SSSTPW \\ -19 \cite{spalart-etal-2019}}} &DNS (FVM) &BL &$30000$ &Circ &$1.33$ &- &$2.03$ &- &$0$ &- &S &$-1.1$ \\
\midrule
\multirow{1}*{\makecell{TLK-19 \\ \cite{tay-lim-khoo-2019}}} &Exp &CF &$\approx 30000$ &Diam &$5(10)$ &$5$ &- &- &$0$ &$99$ &S &$-7.5$ \\
\midrule
\multirow{1}*{\makecell{PWFB \\ -19\cite{prass-etal-2019}}} &LES (FVM) &HCF &$\approx 6121$ &Circ &$5.7$ &$2.5$ &$1.5$ &$4.51$ &$0$ &- &S &$-3.6$ \\
\midrule
\multirow{5}*{\makecell{NJLTK \\-20 \cite{ng-etal-2020}}} &DNS (SEM) &CF &$2800$ &Circ &$5$ &$5$ &$0.84$ &$1.68$ &$+0.1$ &$90.7$ &S &$+4.6$ \\
&DNS (SEM) &CF &$2800$ &Ell &$5 (7.5)$ &$5$ &$0.84$ &$1.68$ &$0$ &$90.7$ &S &$-4.9$ \\
&DNS (SEM) &CF &$2800$ &Tear Up &$5 (8.75)$ &$5$ &$0.84$ &$1.68$ &$+0.21$ &$84.4$ &S &$-3.1$ \\
&DNS (SEM) &CF &$2800$ &Tear Do &$5 (8.75)$ &$5$ &$0.84$ &$1.68$ &$-0.21$ &$84.4$ &S &$+0.1$ \\
&DNS (SEM) &CF &$2800$ &Diam &$5 (10)$ &$5$ &$0.84$ &$1.68$ &$0$ &$99.5$ &S &$-7.4$ \\
\bottomrule
\end{tabularx}
\end{table}
\unskip
\end{landscape}
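For reference, the conversion to $Re_b$ mentioned in the caption of Table \ref{tab:comparison} can be carried out via Dean's correlation, assumed here in the form $C_f = 0.073\, Re_H^{-1/4}$ with $Re_H = 2 Re_b$ based on the bulk velocity and the full channel height; since $Re_\tau = Re_b \sqrt{C_f/2}$, one obtains
\begin{equation}
Re_\tau \approx 0.175\, Re_b^{7/8} \qquad \Longleftrightarrow \qquad Re_b \approx \left( \frac{Re_\tau}{0.175} \right)^{8/7},
\end{equation}
which, for example, maps $Re_\tau = 200$ to $Re_b \approx 3100$, close to the pair of values ($Re_\tau=200$, $Re_b=3173$) quoted in Section \ref{sec:physics}.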
\section{How to design dimples?}
\label{sec:parameters}
Systematic studies addressing the influence of each geometric parameter are lacking, so that the optimal configuration for achieving the maximum drag reduction has not been identified yet. This Section describes the little we know, first in terms of the geometrical characteristics of the dimples and then in terms of their arrangement.
\subsection{The shape of the dimple}
\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{depth} \\
\includegraphics[width=0.49\textwidth]{edgeradius-circular}
\includegraphics[width=0.49\textwidth]{radius-circular}
\caption{Drag change obtained with circular dimples versus depth (top), edge curvature radius (bottom left) and radius of the spherical cap (bottom right). Dashed lines connect points for which only the parameter on the abscissa is changed.}
\label{fig:geom_par_dep}
\end{figure}
Figure \ref{fig:geom_par_dep} plots the drag change data measured by several works which adopted the baseline circular geometry. The percentage of drag change is shown as a function of the three independent geometrical parameters $d/D$, $r/D$ and $R/D$, after extracting from each publication the largest drag reduction (or the smallest drag increase). It should be noted that, in general, the various points correspond to simulations or experiments that differ in other, sometimes very important, design parameters. Dashed lines, instead, connect points for which only the parameter on the abscissa is changed.
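These three parameters are not fully independent in the simplest case: for an idealized spherical-cap dimple of diameter $D$ and depth $d$, without edge rounding, elementary geometry gives
\begin{equation}
R = \frac{d^2 + (D/2)^2}{2d}, \qquad \text{i.e.} \qquad \frac{R}{D} = \frac{1}{8}\left(\frac{d}{D}\right)^{-1} + \frac{1}{2}\,\frac{d}{D},
\end{equation}
so that, for example, $d/D = 0.05$ would imply $R/D \approx 2.53$. The smaller values of $R/D$ reported in Table \ref{tab:comparison} for the same $d/D$ are consistent with the rounded edge of radius $r$ replacing part of the spherical cap.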
The influence of $d/D$ on the drag change has been studied by several authors: previous research on heat exchange enhancement suggests the very reasonable idea that this is one of the key factors affecting drag. However, while the optimal $d/D$ is in the range $0.1-0.5$ for best heat exchange \cite{kovalenko-terekhov-khalatov-2010, tay-etal-2014}, several authors report that shallower dimples with $d/D < 0.1$ should be considered for reducing the overall drag, to avoid an excess penalty from the ensuing pressure drag.
Data are extremely scattered and clearly indicate that the drag change over a dimpled surface does not depend on the $d/D$ ratio alone. For example, for $d/D=0.05$ Veldhuis and Vervoort \cite{veldhuis-vervoort-2009} report a drag reduction of almost 15\%, while Tay et al. \cite{tay-khoo-chew-2017} report a drag increase of approximately 6\%. The experimental measurements of ref.\cite{veldhuis-vervoort-2009} are for a turbulent boundary layer over a dimpled surface with a coverage ratio of 60\% at a free-stream velocity in the range $0-29\ m/s$; the results of ref.\cite{tay-khoo-chew-2017} are from a Detached Eddy Simulation of a turbulent channel flow at $Re_b \approx 3000$ with a coverage ratio of 90\%.
It is reassuring, though, to see --- at least in some of the datasets where data points are connected by dashed lines --- a local optimum for intermediate depths, since it is reasonable to expect zero drag changes for $d \to 0$ and an increase of drag as for standard $k$-type roughness for $d \to \infty$.
With the other parameters unchanged, Tay et al. \cite{tay-khoo-chew-2017} and Van Nesselrooij et al. \cite{vannesselrooij-etal-2016} agree on observing a decrease of performance with increasing $d/D$ (in the range $0.015 < d/D < 0.05$), although at a different rate; within the same $d/D$ range, Tay et al. \cite{tay-khoo-chew-2015} and Veldhuis and Vervoort \cite{veldhuis-vervoort-2009} measured a slight increase of drag reduction performance with increasing $d/D$.
The latter study was extended up to $d/D =0.12$, finding that for $d/D>0.05$ the overall drag increases with $d/D$.
The curvature radius $r$ at the edge of the dimple is meant to mitigate the negative effects of pressure drag, by preventing or decreasing flow separation. The second panel of figure \ref{fig:geom_par_dep} shows that also in this case data are highly scattered: for $0.5 \lessapprox r/D \lessapprox 1.5$ the achieved drag change ranges between -4\% \cite{vannesselrooij-etal-2016} and 4.8\% \cite{ng-etal-2020}. The experiments of Tay et al. \cite{tay-khoo-chew-2015} at $Re_b \approx 32100$ show that beyond a certain value, i.e. for $r/D > 4$, the influence of the edge curvature on the drag change becomes minimal. Spalart et al. \cite{spalart-etal-2019} performed DNS of a turbulent boundary layer at Reynolds numbers (based on the boundary layer thickness) of $Re_\delta = 7.5 \times 10^3 $ and $ Re_\delta=3 \times 10^4$, and considered $r/D=0.5$. Their data points are not plotted in figure \ref{fig:geom_par_dep}, since their paper does not contain enough information to quantify $r$. However, they confirmed that smoothing the dimple rim is beneficial.
A scattered picture is also obtained when data are plotted against the $R/D$ ratio, as shown in the third panel of fig.\ref{fig:geom_par_dep}, confirming again that for this configuration a single geometrical parameter is unable to fully characterize the influence of the dimples on the flow.
The experiments of Tay \& Lim \cite{tay-lim-2017} and the numerical simulations of Ng et al. \cite{ng-jaiman-lim-2018, ng-etal-2020} agree on the indication that the downstream shift $s$ is beneficial, for a wide range of Reynolds numbers, with the best effect observed when $s=0.1D$ in the downstream direction. When instead the shift is in the upstream direction, i.e. $s<0$, drag increases rapidly.
It should be mentioned that the Reynolds number of the simulations ($Re_b=2800$) is somewhat lower than the lowest Reynolds number of the experiments ($Re_b \approx 4300$). Tay \& Lim \cite{tay-lim-2017} claim that a $0.2D$ downstream shift is equivalent to the axisymmetric case at $Re_b=7000$ with a drag increase of 1.5\%, because the lower drag obtained by the reduced flow separation at the shallower upstream wall is compensated by the higher drag of the flow impinging on the steeper downstream wall. Ng et al. \cite{ng-jaiman-lim-2018, ng-etal-2020}, who can take advantage of DNS to break down the total drag into friction and pressure contributions, find that friction drag is almost unaffected by a downstream shift, since it does not affect the reattachment point.
When it comes to alternative shapes, triangular dimples were considered by Tay et al. \cite{tay-etal-2016}. In their experiment they machined dimples with the bottom surface sloping up from the deepest point at the triangular vertex towards the base of the triangular depression to meet the flat channel surface, hence producing the negative of a wedge. A larger drag was obtained for both upstream- and downstream-pointing triangles, for the whole range of tested Reynolds numbers, i.e. $ 5180 \le Re_b \le 28600$. Moreover, for the downstream-pointing triangle the drag increase is nearly constant with $Re$, whereas for the upstream-pointing triangle the drag increase grows with $Re$.
Tay \& Lim \cite{tay-lim-2018} studied the teardrop dimple and measured drag reduction for both the upstream- and downstream-pointing teardrops, for $4500 \le Re_b \le 44000$, with the former yielding up to 6\% drag reduction and the latter up to 5\%. Tay et al. \cite{tay-lim-khoo-2019} studied the diamond dimple and measured drag reduction up to 7.5\%. More recently, Ng et al. \cite{ng-etal-2020} compared in a numerical study the circular, elliptical, teardrop and diamond dimples in a turbulent channel flow, reporting drag reduction of 4.9\% for the elliptical dimple, 3.1\% for the upstream-pointing teardrop, and 7.4\% for the diamond dimple.
\subsection{The arrangement of the dimples}
When it comes to the spatial arrangement of dimples on the surface, once the other parameters are fixed, the staggered configuration leads to lower drag compared to the flow-aligned one \cite{veldhuis-vervoort-2009, vannesselrooij-etal-2016, vancampenhout-etal-2018, spalart-etal-2019}, a fact that explains why the staggered configuration is the most adopted one. Van Nesselrooij et al. \cite{vannesselrooij-etal-2016} found 3\% of drag increase for flow-aligned dimples and up to 4\% drag reduction for staggered dimples with the same geometrical parameters, coverage and Reynolds number. Spalart et al. \cite{spalart-etal-2019} found drag increase for both configurations, but the drag increase of the flow-aligned dimples was almost twice that of the staggered dimples.
Lashkov and Samoilova \cite{lashkov-samoilova-2002} and Van Campenhout et al. \cite{vancampenhout-etal-2018} considered the drag change also for other, non-standard arrangements. The former study found a large drag increase (up to 50\%) when a hexagonal dimple layout is used. The latter study showed a constant drag increase of about 1\% for each of the several considered layouts.
The effect of coverage ratio was considered by Tay et al. \cite{tay-2011, tay-khoo-chew-2015}, who compared in a channel flow circular dimples with 40\% and 90\% coverage, and found that higher coverage enhances the (positive or negative) effects of the dimples.
Van Nesselrooij et al. \cite{vannesselrooij-etal-2016} experimentally studied the effect of coverage in a boundary layer. They found that a 90\% coverage yields drag increase for a wide range of $Re$, whereas a 33.3\% coverage always yields drag reduction within the same Reynolds number range. The performance of both layouts is found to improve with increasing $Re$. Spalart et al. \cite{spalart-etal-2019} in their boundary layer DNS compared the two coverage ratios, and observed about 1\% of drag reduction for the lower coverage, and 2\% of drag increase for the higher one.
\section{How do dimples work?}
\label{sec:physics}
The uncertainties on the true effectiveness of dimples in reducing turbulent drag are accompanied, perhaps unsurprisingly, by a limited understanding of the physics involved. Thanks to the several experimental and numerical works carried out so far, some ideas and hypotheses exist, but consensus is lacking. We describe below two prevailing descriptions of how dimples interact with the overlying turbulent flow.
\subsection{Self-organized secondary tornado-like jets}
The first attempt at explaining drag reduction by dimples is due to Kiknadze et al. \cite{kiknadze-gachechiladze-barnaveli-2012}, who based their explanation solely on video records and photographs, even though similar observations had already been put forward in previous numerical \cite{veldhuis-vervoort-2009} and experimental \cite{kovalenko-terekhov-khalatov-2010} studies. According to ref.\cite{kiknadze-gachechiladze-barnaveli-2012}, whose authors are affiliated with the Research and Production Centre “Tornado-Like Jet Technologies” in Moscow, the action of dimples can be explained by a so-called tornado-like jet self-organization. In plain words, this is how the flow organizes itself and develops over the double-curvature concavity of a dimple. The flow coming from an upstream flat portion accelerates at the leading edge of a circular dimple, and lifts off from the surface while trying to follow the curved wall, leading to a reduction of skin-friction drag in the fore half of the dimple. After the streamwise midpoint, the flow converges towards the midline and eventually meets the flat wall past the trailing edge, where the skin friction increases again. Although the skin-friction reduction in the fore half might outweigh the increase in the aft half, the recessed geometry of the dimple introduces an additional pressure drag component: hence, to achieve drag reduction the net reduction of skin-friction drag needs to be larger than the increase of pressure drag. It should be observed, though, that this description does not directly address the origin of drag reduction, but only constitutes an attempt at a simplified description of the local flow modifications induced by the dimple.
If dimples are deep enough, their steep walls make the flow prone to separation on the upstream part of the recess, with creation of spanwise vorticity and recirculation.
The flow reversal has a positive effect on the drag, causing negative skin friction in the first portion of the dimple.
When the flow reattaches, a strong impingement of the flow on the rear slope of the dimple produces a locally high skin friction.
Moreover, flow separation obviously causes a large increase of pressure drag which could cancel out the positive effect of the skin friction drag.
To avoid separation and the consequent increase of pressure drag, more efficient shapes than the classical circular one are used.
Shifting downstream the point of maximum depth of the dimple alters the wall slopes, and affects the total drag by changing pressure drag, whereas the friction drag tends to remain unchanged \cite{ng-etal-2020}.
A (moderate) downstream shift minimizes the negative effects of separation, and offers lower drag than the standard circular geometry.
However, the shift does not significantly affect the location of the reattachment point, except for very large shifts, for which flow reversal may be entirely suppressed, but at the cost of an intense impingement onto the steeper rear wall which negatively affects the drag.
Non-circular dimples induce different drag changes \cite{ng-etal-2020}. Flow separation and flow reversal are not observed for elliptical, upstream-pointing and diamond dimples, leading to a lesser drag compared to the smooth wall. This can be attributed to the gentler upstream slope and to the longer, more streamwise-aligned leading edge.
Other studies which do not report flow reversal even for the circular shape are \cite{vannesselrooij-etal-2016, spalart-etal-2019}; they measure a maximum drag reduction of 4\% and 1.1\%, respectively. Tay et al. \cite{tay-khoo-chew-2017} observe flow separation for circular dimples in the whole range of tested flow conditions for $d/D = 0.05$, but not for $d/D=0.015$; however, they measure drag increase in all the tested cases.
\subsection{Spanwise forcing}
A more recent conjecture on the mechanisms by which dimples attain drag reduction has been put forward independently by the two groups at TU Delft \cite{vannesselrooij-etal-2016} and NUS \cite{tay-khoo-chew-2015}.
Flow visualisations indicate that, near the wall, streamlines coming in straight from a flat surface bend towards the dimple centerline in the upstream portion of the recess, then bend away from it in the downstream portion, thus creating a converging-diverging pattern (see for example \cite{tay-etal-2014}).
Such meandering implies a spanwise velocity distribution with changing sign across the dimple length \cite{vannesselrooij-etal-2016, vancampenhout-etal-2018}, and a consequent alternating streamwise vorticity \cite{tay-khoo-chew-2017} since the spanwise velocity remains confined very near to the wall. Ref. \cite{vannesselrooij-etal-2016} reports an average spanwise velocity of about 2--3\% of the free-stream velocity for a boundary layer; ref.\cite{tay-khoo-chew-2017} measured a maximum spanwise velocity in the range 3.5--8\% of the centerline velocity in the channel. Spalart et al. \cite{spalart-etal-2019} also detected in their DNS study a spanwise motion, although weaker in intensity.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{spanvel}
\caption{Instantaneous spanwise velocity component $w$ on a wall-parallel plane at $y^+= 1.3$ from the flat part of the wall. Lengths and velocities are made dimensionless with $h$ and $U_b$. The velocity field is computed by DNS for a circular dimple, which actually yields drag increase.}
\label{fig:spanwise}
\end{figure}
Figure \ref{fig:spanwise} depicts an instantaneous spanwise velocity field over a circular dimple, taken from one of our DNS of a turbulent channel flow over circular dimples (see Appendix \ref{sec:appendix} for computational and discretization details of our simulations). An alternating spanwise velocity pattern is clearly visible, supporting the idea that the dimple creates a velocity component in the spanwise direction and bends the streamlines into a converging-diverging pattern. The instantaneous values are very large, up to 40\% of the bulk velocity.
The alternate spanwise velocity resembles the spanwise-oscillating wall \cite{jung-mangiavacchi-akhavan-1992}, an active technique for the reduction of turbulent friction drag, where the wall oscillates in time in the spanwise direction. In the oscillating-wall control, the spanwise velocity component at the wall $w_w$ is prescribed as a function of time as:
\begin{equation}
w_w(t) = A \sin \left( \frac{2 \pi}{T} t \right),
\end{equation}
where $A$ is the amplitude of the oscillation and $T$ is its period.
The oscillating wall produces very large reductions of friction drag, although at a significant energy cost. Its detailed performance is determined by the control parameters $A$ and $T$; Quadrio \& Ricco \cite{quadrio-ricco-2004}, after a careful DNS study, identified the link between the values of the parameters and the obtained drag reduction. They found an optimum value of the oscillation period at $T^+ \approx 100$, whereas drag reduction monotonically improves with the amplitude (although the energy cost of the control rises faster, as $A^2$).
Dimples could be considered as a passive implementation of the spanwise-oscillating wall. Van Campenhout et al. \cite{vancampenhout-etal-2018} measured the analogous parameters and defined a period $T$ and a maximum spanwise velocity $w_{max}$ of a fluid particle, averaging over a selected region of the domain. In the oscillating wall, it is known \cite{quadrio-sibilla-2000} that the time-averaged mean spanwise velocity profile coincides with the laminar solution of the Stokes second problem. Ref.\cite{vancampenhout-etal-2018} assumes the same to hold for the flow over dimples, thus deriving an analogous value for the amplitude. For their dimples with $d/D=0.025$, they found $T^+=135$ and $A^+=0.74$. Data from ref.\cite{quadrio-ricco-2004} do not contain information for such small amplitudes, but an extrapolation leads to a drag reduction of about 4\% for this combination of parameters: a value that closely resembles the measurement of 3.8\% from ref.\cite{vannesselrooij-etal-2016}.
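Under this analogy, the laminar solution of Stokes' second problem invoked in ref.\cite{vancampenhout-etal-2018} can be written, in a standard textbook form expressed here in wall units (our rewriting, not taken from the cited papers), as
\begin{equation}
w^+(y^+,t^+) = A^+ \, e^{-\eta} \sin\!\left( \frac{2\pi t^+}{T^+} - \eta \right), \qquad \eta = \frac{y^+}{\delta_s^+}, \qquad \delta_s^+ = \sqrt{\frac{T^+}{\pi}},
\end{equation}
so that for $T^+=135$ the penetration depth of the oscillation is $\delta_s^+ \approx 6.6$: the equivalent spanwise motion would thus remain confined within the viscous sublayer and buffer region, consistent with the observation that the dimple-induced spanwise velocity stays very close to the wall.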
It should be noted, first, that a closer analogy holds between this interpretation of the dimples working mechanism and the spatially modulated spanwise forcing introduced by Viotti et al. \cite{viotti-quadrio-luchini-2009}. However, that paper shows how temporal and spatial oscillations can be easily converted into one another by using a suitable convective velocity scale at the wall. There are, of course, obvious differences between the data collected by Quadrio \& Ricco for a turbulent channel flow at $Re_\tau=200$ or $Re_b=3173$ and the dimple experiments described in refs.\cite{vannesselrooij-etal-2016, vancampenhout-etal-2018} for a boundary layer at $Re_\delta=1226$ (the limited information provided in these references precludes computing the value of the friction Reynolds number).
Another important point to be aware of when drawing such a parallel is that, with the oscillating wall, a minimum spanwise velocity is required for the active technique to produce its effects: this threshold value $A_{th}^+$, which needs to be of the order of the natural fluctuations of spanwise velocity in the near-wall region, is quantified in ref.\cite{quadrio-ricco-2004} as $A_{th}^+=1$, i.e. comparable to or larger than the dimple-induced spanwise velocity as determined in ref.\cite{vancampenhout-etal-2018}. Finally, and most importantly, with a flat wall, even in the presence of spanwise forcing, one is only concerned with friction drag, whereas with dimples both viscous and pressure drag come into play.
\section{Introduction}
\label{intro}
Evidence has been accumulating in recent years that at least a subclass of
Gamma--ray bursts (GRBs), the ones with a long ($\lower.5ex\hbox{\gtsima} 2$ s) burst event, are
associated with deaths of massive stars (e.g. Woosley 1993; Paczy\'nski 1998;
MacFadyen \& Woosley 1999). This evidence was initially based on the
relatively small offset of the GRB location with respect to the center of the
host galaxy (Bloom, Kulkarni \& Djorgovski 2002). Moreover, decisive
supernova features have been observed in the afterglow of a few nearby GRBs
(Galama et al. 1998, Della Valle et al. 2003; Stanek et al. 2003;
Malesani et al. 2004), directly linking long GRBs to massive stars. This also
provides strong observational evidence for the connection of GRBs to star
formation (Djorgovski et al. 1998; Fruchter et al. 1999; Prochaska et
al. 2004). A study on the GRB host galaxies
by Le Floc'h et al. (2003) found that these hosts have very blue colours,
comparable to those of the faint blue star-forming sources at high
redshift. The association of long GRBs with star-forming regions also supports
the idea that a large fraction of the optically-dark GRBs (i.e. GRBs without
an optical afterglow) are due to high (dust) absorption
(Lazzati, Covino \& Ghisellini 2002; Rol et al. 2005; Filliatre et al. 2005).
Together with optical studies, which probe the dust content of the GRB
environment, X--ray studies of the GRB afterglows can give insight on the
ionization status and metal abundances of the matter in the GRB environment.
This can be done by using either emission or absorption features (e.g.,
B\"ottcher et al. 1999; Ghisellini et al. 2002). Although emission features are
more apparent, the cumulative effect of a low-energy cutoff is easier to
detect in the relatively low signal-to-noise spectra of X--ray afterglows.
Moreover, if the absorbing material is located close to the GRB site ($\sim
0.1-10$ pc), it is expected that GRB photons may lead to a progressive
photoionization of the gas, gradually reducing the effect of low energy
absorption (Lazzati \& Perna 2002; Perna \& Lazzati 2002).
X--ray absorption in excess of the Galactic value has been
reported for a handful of GRB afterglows (Owens et al. 1998; Galama \& Wijers
2001; Stratta et al. 2004; De Luca et al. 2005; Gendre, Corsi \& Piro 2005).
Evidence for a decrease of the intrinsic column density with time in the
X--ray prompt emission of some GRBs has also been found (GRB980506, Connors \&
Hueter 1998; GRB980329, Frontera et al. 2000; GRB011211, Frontera et
al. 2004). Lazzati \& Perna (2002) interpreted them as evidence for GRBs
occurring within overdense regions in molecular clouds similar to star
formation globules in our Galaxy.
Stratta et al. (2004) presented a systematic analysis of a sample of 13 bright
afterglows observed with BeppoSAX narrow field instruments. They found a
significant detection of additional intervening material in only two cases
(namely, GRB990123 and GRB010222), but, owing to the limited photon
statistics, they could not exclude that intrinsic X--ray absorption is also
present in the other bursts.
Chandra observations of GRB afterglows have yielded a few detections and
constraints of the presence of intrinsic X--ray absorption (Gendre et
al. 2005). XMM-Newton observed 9 GRB
afterglows (for a review see De Luca et al. 2005 and Gendre et
al. 2005). These are mainly INTEGRAL GRBs, the large majority of which were
discovered close to the Galactic plane (i.e. they are characterized by a
relatively high Galactic column density). From XMM-Newton data one can
gather evidence that at least several GRBs occur in high density regions
within their host galaxies (e.g, De Luca et al. 2005).
Here we investigate the presence of intrinsic absorption in the complete set
of 17 GRBs promptly observed by Swift up to April 15, 2005.
The paper is organized as follows. In section 2 we present the data collected
by Swift and the analysis procedure. In section 3, we derive values and/or
upper limits on the column density in excess of the Galactic value (based on
the maps by Dickey \& Lockman 1990). For 6 GRBs a redshift has also been
determined through spectroscopic observations. For these we investigate the
intrinsic excess of absorption. In section 4 we discuss our results.
Section 5 is dedicated to our conclusions.
\section{Swift data}
\label{data}
The X--Ray Telescope (XRT, Burrows et al. 2005a) on board Swift (Gehrels et
al. 2004) is a focusing X--ray telescope with a 110 cm$^{2}$ effective
area at 1.5 keV, 23 arcmin field of view, 18 arcsec resolution (half-power diameter) and
0.2--10 keV energy range. The first GRB followed by XRT was GRB041223 (Burrows
et al. 2005b). Since then 22 other GRBs were observed by Swift up to April
15, 2005, together with one discovered by HETE II.
Of these 24 GRBs, 19 were observed by XRT and only two of them [GRB050410 and
GRB050117a (see also Hill et al. 2005)] were either not detected or had
too few photons to perform a meaningful spectral analysis.
For eight of them Swift was able to repoint within a
few hundred seconds, whereas the remaining nine were observed at later
times ($>30$ min). In Table 1 we present a log of the XRT observations
used in the present work.
\begin{table}
\label{datalog}
\caption{Observation log.}
\begin{tabular}{ccccc}
\hline
GRB & Start time&Mode& Region$^{\$}$& Exp. time\\
& (s)$^*$ & & & (s)\\
\hline
041223 & 16661 & PC & 0, 20 & 4018 \\
050124 & 11113 & WT & 40x20 & 7351 \\
& & PC & 0, 20 & 11066\\
050126 & 131 & PC & 2, 20 & 278 \\
& & PC & 0, 13 & 8720 \\
050128 & 108 & PC & 5, 20 & 2878 \\
& & PC & 0, 15 & 14316 \\
050215b& 2100 & PC & 0, 10 & 35563 \\
050219a& 92 & WT & 40x20 & 5003 \\
& & PC & 0, 20 & 2570 \\
050219b& 3130 & WT & 40x20 & 15051 \\
& & PC & 0, 20 & 5724 \\
050223 & 2875 & PC & 0, 10 & 2337 \\
050306 &127390 & PC & 0, 10 & 12284 \\
050315 & 83 & WT & 40x20 & 748 \\
& & PC & 7, 40 & 690 \\
& & PC & 0, 20 & 9642 \\
050318 & 3277 & PC & 2, 20 & 395 \\
& & PC & 0, 20 & 3938 \\
050319 & 87 & PC & 3, 40 & 1495 \\
& & PC & 0, 40 & 2511 \\
050326 & 3258 & PC & 1, 16 & 60 \\
& & PC & 0, 15 & 40723\\
050401 & 131 & WT & 40x20 & 2292 \\
& & PC & 3, 20 & 276\\
& & PC & 0, 15 & 2079\\
050406 & 92 & WT & 40x20 & 109 \\
& & PC & 2, 30 & 70 \\
& & PC & 0, 10 & 49050 \\
050408 & 4653 & PC & 0, 11 & 1041 \\
& & PC & 0, 5 & 57532 \\
050412 & 89 & WT & 40x20 & 168 \\
& & PC & 2, 30 & 145 \\
\hline
\end{tabular}
$^*$ Time from the BAT trigger time.
$^{\$}$ Extraction region. In PC mode it is a circular/annular region with the inner
and outer radii (in pixels) reported in the column. In WT mode the region is
always a $40\times20$ pixel box along the readout column, centered on the source.
\end{table}
\begin{table}
\label{values}
\caption{Results of the spectral analysis.}
\begin{tabular}{cccc}
\hline
GRB &$N_H$ Gal.$^a$ & $N_H$ obs.$^b$ & $\chi^2_{\rm red}$ \\
&$10^{20}\rm \ cm^{-2}$ & $10^{20}\rm \ cm^{-2}$ & (dof) \\
\hline
041223 & 10.9 (9.9) [5.6] &$16.8^{+5.2}_{-4.2}$ & 0.8 (26) \\
050124 & 5.2 (2.6) [1.7] &$6.2^{+3.9}_{-2.5}$ & 1.3 (35) \\
050126 & 5.3 (3.2) [2.6] &$4.1^{+2.7}_{-2.6}$ & 0.9 (10) \\
050128 & 4.8 (4.9) [3.8] &$12.5^{+1.4}_{-1.3}$ & 1.3 (105) \\
050215b& 2.1 (2.0) [0.9] &$<3.4$ & 1.0 (4) \\
050219a& 8.5 (10.1) [8.1] &$30.1^{+6.5}_{-5.9}$ & 1.0 (57) \\
050219b& 3.8 (3.0) [1.7] &$23.8^{+4.0}_{-3.7}$ & 1.0 (98) \\
050223 & 7.1 (6.6) [4.4] &$9.5^{+23.8}_{-7.3}$ & 1.2 (3) \\
050306 & 3.1 (2.9) [3.5] &$46.1^{+35.6}_{-28.8}$& 1.0 (3) \\
050315 & 4.3 (3.3) [2.5] &$14.9^{+3.9}_{-2.2}$ & 1.3 (42) \\
050318 & 2.8 (1.8) [0.9] &$4.2^{+1.9}_{-1.5}$ & 0.8 (39) \\
050319 & 1.1 (1.2) [0.5] &$3.0^{+0.9}_{-0.8}$ & 1.3 (29) \\
050326 & 4.5 (3.8) [1.8] &$18.9^{+7.1}_{-6.0}$ & 1.0 (25) \\
050401 & 4.8 (4.4) [3.3] &$21.1^{+2.2}_{-1.8}$ & 1.0 (297) \\
050406 & 2.8 (1.7) [1.1] &$<6.6$ & 1.2 (10) \\
050408 & 1.7 (1.5) [1.3] &$30.7^{+5.5}_{-4.9}$ & 0.9 (45) \\
050412 & 2.2 (1.7) [1.0] &$26.4^{+14.9}_{-12.4}$& 1.4 (11) \\
\hline
\end{tabular}
$^a$ Column density values are from Dickey \& Lockman (1990). Values between
parentheses are from Kalberla et al. (2005) and in square parentheses from
Schlegel, Finkbeiner \& Davis (1998), using the usual conversion
$N_H=5.9\times 10^{21}\,E(B-V)\rm \ cm^{-2}$.
$^b$ The values of the column density have been computed at $z=0$ since we do
not have any knowledge of the GRB redshift for most of them (but see Table \ref{zz}).
\end{table}
GRBs are observed by XRT with different observing modes and source count
rates. These modes were designed to minimize photon pile-up when observing
the highly variable flux of GRB afterglows. The change between observing modes
should occur automatically when the XRT is in the so-called Auto State (for a
thorough description of XRT observing modes see Hill et al. 2004).
Many of these early bursts, however, were observed in Manual State, with the
observing mode held fixed, during the calibration phase (before April 5, 2005). For
these GRBs, observations were often carried out in Photon Counting mode (PC, the usual
mode, providing 2D images), and for some bright bursts the initial data are
piled up.
This effect can be corrected by extracting light curves and spectra from an
annular region around the source center (rather than a simple circular region),
with a hole size that depends on the source brightness. As the afterglow
decays, the pile-up effect becomes negligible and extraction from a circular
region becomes feasible. For bursts observed in Auto State, observations started in
Window Timing (WT) mode (which provides only 1D imaging).
Cross-calibration ensures that the two modes, PC and WT, provide
the same rate (within a few percent) on steady sources.
Here we analyzed the dataset shown in Table 1. All data were processed with
the standard XRT pipeline within
FTOOLS 6.0 ({\tt xrtpipeline} v. 0.8.8) in order to produce screened event
files. WT data were extracted in the 0.5--10 keV energy range, PC data in the
0.2--10 keV range. Standard grade filtering was adopted (0--2 for WT and
0--12 for PC, according to XRT nomenclature, see Burrows et al. 2005a). From these we
extracted spectra using regions selected to avoid pile up and to maximize the
signal to noise (see Table 1). In WT mode we adopted the standard
extraction region of $40\times 20$ pixels along the WT line. In PC we used an
annular region when the inner core of the Point Spread Function (PSF) was
piled-up and circular regions otherwise. The size of the extraction region
depends on the source strength and background level. Appropriate ancillary response
files were generated with the task {\tt xrtmkarf}, accounting for PSF
corrections. The latest response matrices (v.007) were used. The data were
rebinned to have at least 20 counts per energy bin (in some cases with few photons,
energy bins with as few as 10 counts per bin were used).
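The grouping of raw channel counts into bins with a minimum number of counts, as described above, can be sketched as follows. This is a generic illustration only: the helper name and the channel counts are invented and are not part of the actual XRT pipeline.

```python
def group_min_counts(counts, min_counts=20):
    """Group adjacent spectral channels so each bin has >= min_counts.

    Returns a list of (start_channel, end_channel, total_counts) bins;
    a trailing underfilled group is merged into the previous bin.
    """
    bins = []
    start, total = 0, 0
    for ch, c in enumerate(counts):
        total += c
        if total >= min_counts:
            bins.append((start, ch, total))
            start, total = ch + 1, 0
    if total > 0 and bins:  # merge the leftover channels into the last bin
        s, e, t = bins.pop()
        bins.append((s, len(counts) - 1, t + total))
    return bins

# Hypothetical channel counts for a faint source:
spectrum = [3, 8, 12, 1, 0, 25, 4, 4, 9, 7, 2]
grouped = group_min_counts(spectrum)
```

Tools such as `grppha` perform this grouping on real PHA files; the sketch only conveys the idea of adaptive binning for $\chi^2$ fitting.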
The data were fitted with a simple absorbed power law model. We have adopted
the usual photoelectric absorption using Wisconsin (Morrison \& McCammon 1983)
cross-sections ({\tt wabs} model within XSPEC). Normalizations were
left free to vary. Power law photon index and absorbing column densities were
first tied across the different observations and observing modes, but if the
fit was not satisfactory the photon index was allowed to vary between
observations (indicating a spectral evolution). We searched for variations of
the column density with time by allowing the column density to vary across
different observations.
For GRBs with known redshift we also fitted an absorbed power law model with a
fixed Galactic column density component plus a free column density at the redshift
of the GRB. Results are shown in Tables 2 and 3 and in
Fig. \ref{nh}, where GRB total column densities are plotted against the Galactic
column densities.
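The absorbed power-law model used in the fits can be illustrated with a minimal numerical sketch. Note the assumptions: the photoelectric cross-section is crudely approximated here as a single power law $\sigma(E)\propto E^{-2.6}$ with an invented normalization, whereas the actual fits use the tabulated Morrison \& McCammon cross-sections via XSPEC's {\tt wabs} model.

```python
import numpy as np

# Placeholder normalization of the photoelectric cross-section at 1 keV
# (illustrative value only, not the Morrison & McCammon tabulation).
SIGMA_1KEV = 2.4e-22  # cm^2

def absorbed_powerlaw(E_keV, K, photon_index, nH):
    """Absorbed power law in the spirit of XSPEC's wabs*powerlaw:
    F(E) = K * E**-photon_index * exp(-nH * sigma(E)),
    with sigma(E) approximated as sigma_0 * E**-2.6.
    """
    E = np.asarray(E_keV, dtype=float)
    sigma = SIGMA_1KEV * E ** -2.6
    return K * E ** -photon_index * np.exp(-nH * sigma)

E = np.array([0.3, 1.0, 5.0])                   # keV
f_abs = absorbed_powerlaw(E, 1.0, 2.0, 1e21)    # N_H = 1e21 cm^-2
f_unabs = absorbed_powerlaw(E, 1.0, 2.0, 0.0)
# Absorption suppresses the soft band strongly while leaving
# the hard band nearly untouched.
```

This steep energy dependence of $\sigma(E)$ is why the low-energy cutoff is the observable signature of excess absorption.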
\begin{figure}
\centering
\includegraphics[height=8cm,angle=-90]{nh.ps}
\caption{Galactic column density versus column density obtained from spectral
fit of the X--ray afterglow. Open circles are values obtained without any
redshift information. Filled circles indicate values for the six GRBs with
known redshift at the redshift of the host galaxy. Upper limits are also
indicated with filled and open circles, as above. The line marks the
locus of equal values of the Galactic and total column densities
(i.e. no intrinsic absorption).
}
\label{nh}
\end{figure}
\begin{table*}
\label{zz}
\caption{Results of the spectral analysis of GRBs with known redshift.}
\begin{center}
\begin{tabular}{cccccc}
\hline
GRB &Redshift & $N_H$ Gal. & $N_H$ obs.$^*$ & $\chi^2_{\rm red}$ & $N_H$
DLA ($90\%$)\\
& (ref.) &$10^{20}\rm \ cm^{-2}$& $10^{20}\rm \ cm^{-2}$ & (dof) & $10^{20}\rm \ cm^{-2}$ \\
\hline
050126 &1.29 (1) & 5.3 &$<9.4 $ & 1.0 (10) & 0.8 (6.4)\\
050315 &1.95 (2) & 4.3 &$83.7^{+19.7}_{-17.4}$ & 1.4 (43) & 1.4 (5.7)\\
050318 &1.44 (3) & 2.8 &$11.3^{+10.7}_{-8.9}$ & 0.6 (39) & 0.9 (12.9)\\
050319 &3.24 (4) & 1.1 &$38.6^{+18.0}_{-16.0}$& 1.4 (29) & 3.6 (447)\\
050401 &2.90 (5) & 4.8 &$366^{+47}_{-46}$ & 1.1 (294) & 3.0 (272)\\
050408 &1.24 (6) & 1.7 &$134^{+34}_{-28}$ & 1.0 (45) & 0.8 (4.7)\\
\hline
980703 &0.97 & 5.8 &$290^{+71}_{-27}$ & & 0.6 (1.7) \\
990123 &1.60 & 2.1 &$30^{+70}_{-20}$ & & 1.1 (23.1) \\
990510 &1.63 & 9.4 &$160^{+19}_{-13}$ & & 1.1(25.1)\\
000210 &0.85 & 2.5 &$50^{+10}_{-10}$ & & 0.0 (0.0) \\
000214 &0.47 &5.8 &$<2.7$ & & 0.0 (0.0) \\
000926 &2.07 & 2.7 &$40^{+35}_{-25 }$ & & 1.6 (73.9)\\
010222 &1.48 & 1.6 &$120^{+70}_{-60 }$ & & 1.0 (15.1)\\
001025a&1.48 & 6.1 &$66^{+30}_{-30}$ & & 1.0 (15.1) \\
020322 &1.80 & 4.6 &$130^{+20}_{-20}$ & & 1.3 (40.7)\\
020405 &0.70 & 4.3 &$47^{+37}_{-37}$ & & 0.0 (0.0) \\
020813 &1.25 &7.5 &$<36.5$ & & 0.8 (5.0)\\
021004 &2.33 &4.3 &$<34$ & & 2.0 (118)\\
030226 &1.98 & 1.6 &$68^{+41}_{-33}$ & & 1.5 (60.8)\\
030227 &3.90 & 22 &$680^{+18}_{-38}$ & & 5.2 (649)\\
030328 &1.52 &4.3 &$<44.3$ & & 1.0 (17.3)\\
\hline
\end{tabular}
\end{center}
$^*$ Column density values have been computed at the GRB redshift.
Refs.: 1) Berger, Cenko, Kulkarni 2005; 2) Kelson \& Berger 2005; 3) Berger
\& Mulchaey 2005; 4) Fynbo et al. 2005; 5) Fynbo et al. 2005; 6) Berger,
Gladders \& Oemler 2005.
Values in the second part of the table are from Stratta et al. (2004), De Luca
et al. (2005) and Gendre et al. (2005).
\end{table*}
\section{Discussion}
\label{discu}
We have analyzed the X--ray spectra of 17 GRB afterglows observed with Swift
up to April 15, 2005. In at least 10 of them we find significant evidence that
the observed column density is higher than the Galactic value. In
contrast to previous investigations (De Luca et al. 2005; Stratta et
al. 2004; Gendre et al. 2005) based on BeppoSAX, INTEGRAL and HETE II, the
Swift sample has GRBs at low Galactic extinction (all with column densities $\lower.5ex\hbox{\ltsima}
10^{21}\rm \ cm^{-2}$), and therefore probes more effectively the presence of absorption
due to the GRB environment or host galaxy.
The evidence that a large fraction of GRBs is characterized by an absorbing
column density larger than the Galactic one clearly points to a high density
interstellar medium in the proximity of the GRB (in fact with X--rays we
directly probe the GRB line of sight, whereas in the optical the observations
might be contaminated by the host galaxy contribution).
Dense environments in the host galaxy, possibly associated with star forming
regions, provide a further clear signature in favour of the association
of long GRBs to the death of massive stars.
For GRBs characterized by a low number of counts no firm conclusions can be
drawn. Moreover, the effect of an intrinsic column density can be hidden
either by a large Galactic absorber (as often occurs for INTEGRAL GRBs) or by
a large redshift, which shifts the energy scale by $(1+z)$ and dilutes the
effective column density value by $\sim (1+z)^{2.6}$.
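The $(1+z)^{2.6}$ dilution quoted above can be made concrete with a short numerical sketch (the column density value below is purely illustrative):

```python
def observed_equivalent_nH(nH_intrinsic, z):
    """Approximate z = 0 equivalent of an intrinsic column at redshift z.

    An absorber at redshift z acts on photons whose rest-frame energy is
    (1+z) times higher, where the photoelectric cross-section is smaller
    by roughly (1+z)**-2.6, so the apparent column is diluted by that factor.
    """
    return nH_intrinsic / (1.0 + z) ** 2.6

# An intrinsic column of 1e22 cm^-2 at z = 2.9 mimics a local column
# about 34 times smaller:
nH_apparent = observed_equivalent_nH(1e22, 2.9)
```

This is why an intrinsic absorber becomes progressively harder to detect as the GRB redshift grows.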
\begin{figure}
\centering
\includegraphics[height=8cm,angle=-90]{histo.ps}
\caption{Distribution (solid histogram) of intrinsic column density in a sample of
18 GRBs observed by Swift and other satellites (Stratta et al. 2004; De Luca
et al. 2005). This is compared with the expected distribution of column
density for GRBs occurring in Galactic-like molecular clouds (dashed
histogram, from Reichart \& Price 2002).
}
\label{histo}
\end{figure}
We combine our sample of GRBs with known redshift with other intrinsic column densities
available in the literature (Stratta et al. 2004; De Luca et al. 2005; Gendre
et al. 2005), obtaining a sample of 21 GRBs (see Table \ref{zz}). In principle
the absorption excess found in most GRBs might not be local to the host galaxy
but may come from a line-of-sight interloper. This often occurs in optical
studies of quasars with damped Lyman-$\alpha$ absorbers (DLAs). Based on quasar studies
(Wolfe, Gawiser \& Prochaska 2005; P\'eroux et al. 2003), we simulated for each
of our bursts a distribution of lines of sight (10000 trials),
evaluating a mean absorption (weighted as $(1+z)^{2.6}$) and a $90\%$ confidence
value. These values are reported in column six of Table \ref{zz}. The
influence of DLA systems is marginal in our sample, even though there are a few GRBs
in which the observed absorption excess may come from intervening DLAs.
Indeed, such systems have recently been found in a few Swift GRBs
(e.g. GRB050730, with $\log(N_H)=22.3$, Starling et al. 2005, Chen et al. 2005,
and GRB 050401, with $\log(N_H)=22.5$, Watson et al. 2005). However, these are due
to the interstellar medium in the GRB host. One of these GRBs, GRB 050401, is part of our
sample, and indeed we obtained a high value of the intrinsic column
density.
In order to understand the origin of the absorption excess,
we compared the distribution of measured intrinsic column densities with the
distribution expected for bursts occurring in Galactic-like molecular clouds
(Reichart \& Price 2002) and with the one expected for bursts
occurring following a host galaxy mass distribution using the Milky Way as a
model (Vergani et al. 2004). For each of
these two column density distributions we simulated 10000 GRBs and compared,
by means of a Kolmogorov-Smirnov (KS) test, their intrinsic absorption
distribution to the observed distribution.
We found that the observed distribution is inconsistent with
the galaxy column density distribution (Vergani et al. 2004), with a KS null
hypothesis probability of $10^{-11}$, but it is consistent with GRBs
distributed randomly in molecular clouds (KS null hypothesis probability of 0.61).
The simulated distribution in the case of bursts occurring in Galactic-like
molecular clouds and the observed one are plotted in Fig. \ref{histo}. This results
would support an origin of long GRBs within high density regions such as
molecular clouds.
We stress that our sample also contains upper limits and that we are
sensitive to low values of column density, which however are found only in a
small fraction of the total sample.
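The Kolmogorov-Smirnov comparison described above can be sketched numerically. The sketch below is illustrative only: the model distribution is a placeholder log-normal in $\log_{10} N_H$, not the actual Reichart \& Price (2002) or Vergani et al. (2004) distributions, and the "observed" sample is drawn from it rather than taken from Table 3.

```python
import numpy as np

def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the two empirical cumulative distribution functions."""
    a, b = np.sort(sample_a), np.sort(sample_b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

rng = np.random.default_rng(0)
# Placeholder stand-in for a simulated model column-density
# distribution (10000 trials, values in log10 N_H):
model = rng.normal(21.8, 0.6, size=10000)
# A small "observed" sample, here drawn from the same distribution:
observed = rng.normal(21.8, 0.6, size=21)
d = ks_statistic(observed, model)
```

In practice the statistic $d$ is converted into a null-hypothesis probability (e.g. the $0.61$ and $10^{-11}$ values quoted above) via the standard KS distribution, as implemented for instance in `scipy.stats.ks_2samp`.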
\section{Summary and conclusions}
The main goal of the present paper is to investigate the presence of
intrinsic absorption in the X--ray spectra of GRB afterglows.
We analyzed a complete set of 17 afterglows observed by Swift XRT before
April 15, 2005. In 10 of them we found clear
signs of intrinsic absorption, i.e. with a column density higher than the
Galactic value estimated from the maps by Dickey \& Lockman (1990) at $>90\%$
confidence level (and with low probability of contamination from intervening
DLA systems).
For the remaining 7 cases, the statistics are not good enough to draw firm
conclusions. This clearly suggests that long GRBs are associated with high
density regions of the interstellar medium, supporting the idea
that they are related to the deaths of massive stars.
For the 6 GRBs with known redshift, together with 15 from the literature, we
obtain an unbiased view of the intrinsic absorption in the host-galaxy rest
frame. We found a range of $(1-35)\times 10^{21}\rm \ cm^{-2}$ for all
GRBs. This range of values is consistent with the hypothesis that GRBs occur
within giant molecular clouds, spanning a range of column density depending on
their exact location (Reichart \& Price 2002). In our rest frame this column
density is then reduced by a factor $\sim (1+z)^{2.6}$, making it more difficult to
determine the intrinsic column density, especially for distant GRBs or for GRBs
occurring at large Galactic column densities.
Finally, we compared the distribution of GRB column densities with known
redshift with theoretical predictions available in the literature finding good
agreement with the expectation (Reichart \& Price 2002) for bursts occurring
in molecular clouds.
\begin{acknowledgements}
This work is supported at OAB by funding
from ASI on grant number I/R/039/04, at Penn State by NASA contract NAS5-00136
and at the University of Leicester by the PPARC on grant numbers PPA/G/S/00524
and PPA/Z/S/2003/00507. We gratefully acknowledge the contributions of dozens
of members of the Swift team, who helped make this Observatory possible.
\end{acknowledgements}
\section{Introduction}
Measurements of the $g$ factor of low- and middle-$Z$ H- and Li-like ions
\cite{haf00,ver04,stu11,wag13,lin13,stu13,stu14,koel16}
have reached an accuracy of a few parts in $10^{10}$. On the theoretical
side, reaching this accuracy requires the evaluation of various contributions to the $g$-factor value
\cite{blu97,per97,bei00a,bei00b,sha01,sha02b,nef02,yer02,gla02,sha02c,gla04,lee05,pac05,jen09,vol14,sha15,cza16,yer17a,yer17b,zat17,sha17,mal17,gla18,kar18,cza18}.
The comparison of theory and experiment on the $g$ factors of H- and Li-like
silicon has provided the most stringent tests
of bound-state quantum electrodynamics (QED) in the presence of a magnetic field, while
the combination of the experimental and theoretical results on the $g$ factor
of H-like ions with $Z=6,8,14$ has led to
the most precise determination of the electron mass \cite{stu14,zat17}.
The measurement of the isotope shift of the $g$ factor of Li-like
$^{A}$Ca$^{17+}$ with $A=40$ and $A=48$ \cite{koel16} has triggered special
interest in the calculation of the nuclear recoil contributions to the $g$ factor.
The fully relativistic theory of the nuclear recoil effect to the first order in
the electron-to-nucleus mass ratio, $m/M$,
on the $g$ factor
of atoms and ions was formulated in Ref. \cite{sha01}, where it was used to derive
closed formulas for the recoil effect on the $g$ factor of H-like ions to all orders in
$\alpha Z$. These formulas remain also valid for an ion with one electron over
closed shells (see, e.g., Ref. \cite{sha02c}), provided the electron propagators
are redefined for the vacuum with the closed shells included. In that case, in addition
to the one-electron contributions, one obtains two-electron recoil corrections
of the zeroth order in $1/Z$ which can be used to derive effective four-component
recoil operators within the Breit approximation \cite{sha17}.
The one-electron recoil contribution was evaluated numerically to all
orders in $\alpha Z$ for the $1s$ and $2s$ states in Refs. \cite{sha02b} and \cite{sha17},
respectively. The calculations were performed for the point-nucleus case.
For the ground state of Li-like ions, the two-electron recoil contribution
vanishes to the zeroth order in $1/Z$. However, the effective recoil
operator can be used to evaluate the recoil corrections of the first
and higher orders in $1/Z$ within the framework of the Breit approximation.
These calculations were carried out for $Z=3-20$ in Ref. \cite{sha17},
where it was found a large discrepancy of the obtained results
with the previous Breit-approximation calculations based on the two-component
approach \cite{yan01,yan02}. As was shown in Ref. \cite{sha17}, this discrepancy
was caused by omitting some important terms in the calculation scheme
formulated within the two-component approach for $s$ states in Ref. \cite{heg75}.
Later \cite{gla18}, the four-component approach was also used to calculate the recoil effect
within the Breit approximation for middle-$Z$ B-like ions.
Special attention should be paid to probing
the QED nuclear recoil effect in experiments with heavy ions,
which are anticipated in the nearest future
at the Max-Planck-Institut f\"ur Kernphysik in Heidelberg
and at the HITRAP/FAIR facilities in Darmstadt.
This would provide
an opportunity to test QED in the strong-coupling regime
beyond the Furry picture. To this end, in Ref. \cite{mal17}
the nuclear recoil effect on the $g$ factor of
H- and Li-like Pb and U was calculated and it was shown that
the QED recoil contribution can be probed on a few-percent level
in a specific difference of the $g$ factors of heavy H- and Li-like lead.
In the present paper we extend the calculations of the recoil effect on the $g$ factor of Li-like ions
performed in Refs. \cite{sha17,mal17} to the range $Z=10-92$. The one-electron
recoil contribution is calculated in the framework of the rigorous QED approach
with the wave functions which partly account for the screening of the Coulomb
potential by the closed shell electrons. As to the two-electron recoil contribution,
it is evaluated within the Breit approximation to all orders in $1/Z$.
All the calculations also partly account for the nuclear size corrections
to the recoil effect.
The relativistic units ($\hbar=c=1$) are used throughout the paper.
\section{ Basic formulas}
Let us consider a Li-like ion
which is put into a
homogeneous magnetic field, ${\bf A}_{\rm cl}({\bf r})=[
\mbox{\boldmath$\cal H$}
\times {{\bf r}}]/2$ with
$ \mbox{\boldmath$\cal H$}$ directed along the $z$ axis.
To zeroth order in $1/Z$, the $m/M$ nuclear recoil contribution to the $g$ factor
is given by a sum of one- and two-electron contributions. In the case of the ground
$(1s)^2 2s$ state the two-electron contribution of zeroth order in $1/Z$
is equal to zero and one has to consider the one-electron term only.
The one-electron recoil contribution to the $g$ factor
is given by \cite{sha01}
\begin{eqnarray} \label{rec_tot}
\Delta g&=&\frac{1}{\mu_0 m_a}\frac{i}{2\pi M}
\int_{-\infty}^{\infty} d\omega\;
\Biggl[\frac{\partial}{\partial {\cal H}}
\langle \tilde{a}|[p^k-D^k(\omega)+eA_{\rm cl}^k]
\nonumber\\
&&\times\tilde{G}(\omega+\tilde{\varepsilon}_a)
[p^k-D^k(\omega)+eA_{\rm cl}^k]
|\tilde{a}\rangle
\Biggr]_{{\cal H}=0}\,.
\label{06recoilt}
\end{eqnarray}
Here $a$ denotes the one-electron $2s$ state,
$\mu_0$ is the Bohr magneton, $m_a$ is the angular momentum
projection of the state under consideration, $M$ is the nuclear mass,
$p^k=-i\nabla^k$ is the momentum operator,
$D^k(\omega)=-4\pi\alpha Z\alpha^l D^{lk}(\omega)$,
\begin{eqnarray} \label{06photon}
D^{lk}(\omega,{\bf r})&=&-\frac{1}{4\pi}\Bigl\{\frac
{\exp{(i|\omega|r)}}{r}\delta_{lk}\nonumber\\
&&+\nabla^{l}\nabla^{k}
\frac{(\exp{(i|\omega|r)}
-1)}{\omega^{2}r}\Bigr\}\,
\end{eqnarray}
is the transverse part of the photon propagator in the Coulomb
gauge, ${\mbox{\boldmath$\alpha$}}$ is a vector
incorporating the Dirac matrices, and
the summation over the repeated indices is implied.
The tilde sign means that the corresponding quantity
(the wave function, the energy, and the Dirac-Coulomb Green's function
$\tilde{G}(\omega)=\sum_{\tilde{n}}|\tilde{n}\rangle \langle \tilde{n}|[\omega-\tilde{\varepsilon}_n(1-i0)]^{-1}$)
must be calculated in the presence of the
magnetic field.
For the practical calculations, the one-electron contribution
is conveniently
represented by a sum of low-order
and higher-order
terms,
$\Delta g=\Delta g_{\rm L}+\Delta g_{\rm H}$, where
\begin{eqnarray} \label{low}
\Delta g_{\rm L}&=&\frac{1}{\mu_0 {\cal H} m_a}
\frac{1}{M} \langle \delta a|
\Bigr[ {\bf p}^2
-\frac{\alpha Z}{r}\bigr({\mbox{\boldmath$\alpha$}}+\frac{({\mbox{\boldmath$\alpha$}}\cdot{\bf r}){\bf r}}{r^2}\bigr)
\cdot{\bf p}
\Bigr]|a\rangle\nonumber\\
&& -\frac{1}{ m_a}\frac{m}{M}\langle
a|\Bigl([{\bf r} \times {\bf p}]_z -\frac{\alpha Z}{2r}[{\bf r} \times {\mbox{\boldmath$\alpha$}} ]_z\Bigr)|a\rangle\,,
\end{eqnarray}
\begin{eqnarray} \label{high}
\Delta g_{\rm H}&=&\frac{1}{\mu_0 {\cal H} m_a}
\frac{i}{2\pi M} \int_{-\infty}^{\infty} d\omega\;\Bigl\{ \langle
\delta a|\Bigl(D^k(\omega)-\frac{[p^k,V]}{\omega+i0}\Bigr)\nonumber\\
&&\times G(\omega+\varepsilon_a)\Bigl(D^k(\omega)+\frac{[p^k,V]}{\omega+i0}\Bigr)|a\rangle \nonumber\\
&&+\langle a|\Bigl(D^k(\omega)-\frac{[p^k,V]}{\omega+i0}\Bigr)
G(\omega+\varepsilon_a)\nonumber\\
&&\times\Bigl(D^k(\omega)+\frac{[p^k,V]}{\omega+i0}\Bigr) |\delta
a\rangle\nonumber\\ &&+ \langle a|\Bigl(D^k(\omega)-\frac{[p^k,V]}{\omega+i0}\Bigr)
G(\omega+\varepsilon_a)(\delta V-\delta \varepsilon_a)\nonumber\\ &&\times G(\omega+\varepsilon_a)
\Bigl(D^k(\omega)+\frac{[p^k,V]}{\omega+i0}\Bigr)|a\rangle \Bigr\}\,. \label{eq3}
\end{eqnarray}
Here
$V(r)$ is the potential of the nucleus or an effective local potential which
is the sum of the nuclear and screening potentials,
$\delta V({\bf r})=-e{\mbox{\boldmath$\alpha$}} \cdot{\bf A}_{\rm cl}({\bf r})$,
$G(\omega)=\sum_n|n\rangle \langle n|[\omega-\varepsilon_n(1-i0)]^{-1}$ is the Dirac-Coulomb Green's
function, $\delta \varepsilon_a=\langle a|\delta V|a\rangle$, and
$|\delta a\rangle=\sum_n^{\varepsilon_n\ne \varepsilon_a}|n\rangle\langle n|\delta V|a\rangle (\varepsilon_a-\varepsilon_n)^{-1}$.
The low-order term corresponds to the Breit approximation,
while the higher-order term defines the QED one-electron recoil
contribution.
The recoil contributions of the first and higher orders
in $1/Z$ can be evaluated within the Breit approximation with the use of
the four-component recoil operators \cite{sha17}. The total Breit-approximation
recoil contribution can be represented as a sum of two terms. The first
term is obtained as a combined interaction due to $\delta V$ and
the effective recoil Hamiltonian
(see Ref. \cite{sha98} and references therein):
\begin{eqnarray} \label{br1}
H_M=\frac{1}{2M}\sum_{i,k}\Bigl[{\bf p}_i\cdot {\bf p}_k
-\frac{\alpha Z}{r_i}\bigr({\mbox{\boldmath$\alpha$}}_i+\frac{({\mbox{\boldmath$\alpha$}}_i\cdot{\bf r}_i){\bf r}_i}{r_i^2}\bigr)
\cdot{\bf p}_k\Bigr]\,.
\end{eqnarray}
The second term is defined by the magnetic recoil
operator:
\begin{eqnarray} \label{br2}
H_M^{\rm magn}&=&-\mu_0
\frac{m}{M} \mbox{\boldmath$\cal H$} \cdot
\sum_{i, k}\Bigl\{[{\bf r}_i\times {\bf p}_k]\nonumber\\
&&-\frac{\alpha Z}{2r_k}\Bigl[{\bf r}_i\times\Bigl({\mbox{\boldmath$\alpha$}}_k
+\frac{({\mbox{\boldmath$\alpha$}}_k\cdot{\bf r}_k){\bf r}_k}{r_k^2}\Bigr)\Bigr]
\Bigr\}\,.
\end{eqnarray}
To zeroth order in $1/Z$, the one-electron parts of the operators (\ref{br1}) and (\ref{br2})
lead to the low-order contribution defined by Eq. (\ref{low}).
Thus, within the four-component Breit-approximation approach
the $m/M$ recoil effect on the $g$ factor can be evaluated
by adding the operators (\ref{br1}) and (\ref{br2})
to the Dirac-Coulomb-Breit Hamiltonian, which includes
the interaction with the external magnetic field.
\section{Numerical calculations}
Let us first consider the calculations within the Breit approximation.
For these calculations we use the
operators (\ref{br1}), (\ref{br2}), and the standard
Dirac-Coulomb-Breit Hamiltonian:
\begin{eqnarray} \label{dcb}
H^{\rm DCB}=
\Lambda^{(+)}\Bigl[\sum_{i}h_i^{\rm D} +\sum_{i<k}V_{ik}\Bigr]\Lambda^{(+)}\,,
\end{eqnarray}
where the indices $i$ and $k$ enumerate the atomic electrons,
$\Lambda^{(+)}$ is the
projector on the positive-energy states, calculated including
the interaction with the external magnetic field $\delta V$,
$h_i^{\rm D}$ is the one-electron Dirac Hamiltonian
including $\delta V$, and
\begin{eqnarray} \label{br}
V_{ik} &=& V^{\rm C}_{ik}+ V^{\rm B}_{ik}\nonumber \\
&=&\frac{\displaystyle \alpha}{\displaystyle r_{ik}}
-\alpha\Bigl[\frac{\displaystyle{ {{\mbox{\boldmath$\alpha$}}}_i\cdot {{\mbox{\boldmath$\alpha$}}}_k}}
{\displaystyle{ r_{ik}}}+\frac{\displaystyle 1}{\displaystyle 2}
( {{\mbox{\boldmath$\alpha$}}}_i\cdot{{\mbox{\boldmath$\nabla$}}}_i)({ {{\mbox{\boldmath$\alpha$}}}_k\cdot{\mbox{\boldmath$\nabla$}}}_k)
r_{ik}\Bigr]
\,
\end{eqnarray}
is the electron-electron interaction operator within the Breit approximation.
The numerical calculations have been performed using the approach based on
the recursive representation of the perturbation theory \cite{gla17}.
The key advantages of the recursive perturbation approach over the standard one
are its universality and computational efficiency. In Refs. \cite{roz14,var18},
this method was applied to find the higher-order contributions to the Zeeman
and Stark shifts in H-like and B-like atoms. The perfect agreement
of the obtained results with the calculations by other methods was demonstrated.
In Refs. \cite{gla17,mal17a}, the recursive method was used to calculate
the higher-order contributions of the interelectronic interaction in few-electron ions.
The total Breit-approximation recoil contribution for the state under consideration
can be expressed as
\begin{eqnarray} \label{ABC}
\Delta g_{\rm Breit}&=&\frac{m}{M}(\alpha Z)^2 \Bigl[A(\alpha Z)+
\frac{1}{Z} B(\alpha Z)\nonumber\\
&& + \frac{1}{Z^2} C(\alpha Z,Z)\Bigr]\,,
\end{eqnarray}
where the coefficients $A(\alpha Z)$ and $B(\alpha Z)$
define the recoil contributions
of the zeroth and first orders in $1/Z$, respectively, while
$C(\alpha Z,Z)$ incorporates the recoil corrections
of the second and higher orders in $1/Z$.
In this work, in the calculation of $C(\alpha Z,Z)$
we have taken into account the terms of the orders $1/Z^2$, $1/Z^3$, and $1/Z^4$.
The contribution of the terms of higher orders is much smaller than the present numerical uncertainty.
For the point-nucleus case,
the coefficient $A(\alpha Z)$, which is determined by the one-electron
low-order term (\ref{low}),
can be evaluated
analytically \cite{sha01}. In the case of the $2s$ state it is given by
\begin{eqnarray} \label{06shabaeveq112}
A(\alpha Z)= \frac{1}{4}\frac{8(2\gamma+1)}
{3(1+\gamma)\bigl(2\gamma+\sqrt{2(1+\gamma)}\bigr)} \,,
\end{eqnarray}
where $\gamma=\sqrt{1-(\alpha Z)^2}$.
To the leading orders in $\alpha Z$, this yields
\begin{eqnarray} \label{06shabaeveq113}
A(\alpha Z)=\frac{1}{4} + \frac{11}{192}(\alpha Z)^2+ \cdots \,.
\end{eqnarray}
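The closed form and its expansion can be compared directly; the short sketch below evaluates Eqs. (\ref{06shabaeveq112}) and (\ref{06shabaeveq113}) numerically (the value $\alpha \approx 1/137.036$ is used only for illustration):

```python
import math

ALPHA = 1 / 137.035999                     # fine-structure constant

def A_exact(aZ):
    """Point-nucleus 2s coefficient A(alpha Z), Eq. (06shabaeveq112)."""
    g = math.sqrt(1 - aZ**2)               # gamma
    return 0.25 * 8 * (2*g + 1) / (3 * (1 + g) * (2*g + math.sqrt(2 * (1 + g))))

def A_series(aZ):
    """Leading terms of the alpha Z expansion, Eq. (06shabaeveq113)."""
    return 0.25 + (11 / 192) * aZ**2

print(A_exact(10 * ALPHA), A_series(10 * ALPHA))   # ~0.2503, cf. Table ABC_tab
```

At $Z=10$ the exact value reproduces the tabulated $A=0.2503$, and the residual difference from the series is of order $(\alpha Z)^4$, as expected.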
The calculations to all orders in $1/Z$
have been performed with the point-nucleus recoil operators
defined by Eqs. (\ref{br1}) and (\ref{br2}) but with the wave functions
evaluated for extended nuclei. This corresponds to a partial treatment
of the nuclear size corrections to the recoil effect. The Fermi
model of the nuclear charge distribution was used and
the nuclear charge radii were
taken from Ref. \cite{ang13}. The results of the calculations are presented in
Table \ref{ABC_tab}. The indicated uncertainties are due to the numerical
computation errors.
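The construction of the Fermi model can be sketched numerically: given a diffuseness parameter and a target rms charge radius, the half-density radius $c$ follows from a one-dimensional root search. The diffuseness and the $^{208}$Pb rms radius below are typical literature values (cf. Ref. \cite{ang13}) and serve only as an illustration, not as the exact inputs of our calculations.

```python
import math

# Fit the half-density radius c of the Fermi distribution
# rho(r) ~ 1/(1 + exp((r - c)/a)) to a given rms charge radius.
a = 2.3 / (4 * math.log(3))          # fm, from the standard t = 2.3 fm skin thickness
target_rms = 5.5012                  # fm, 208Pb rms charge radius (illustrative)

def rms_radius(c, n=4000, rmax=15.0):
    """rms radius of the Fermi density by simple numerical quadrature."""
    h = rmax / n
    num = den = 0.0
    for i in range(1, n + 1):
        r = i * h
        w = r * r / (1 + math.exp((r - c) / a))   # r^2 * Fermi profile
        num += w * r * r * h
        den += w * h
    return math.sqrt(num / den)

lo, hi = 4.0, 8.0                    # bisection bracket for c (fm)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if rms_radius(mid) < target_rms:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
print(c)                             # half-density radius in fm
```

The result agrees with the familiar estimate $\langle r^2\rangle \approx \tfrac{3}{5}c^2 + \tfrac{7}{5}\pi^2 a^2$, which gives $c \approx 6.6$ fm for these inputs.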
For the point-nucleus case, the higher-order one-electron contribution (\ref{high})
was calculated for the $1s$ and $2s$ states in a wide range of the nuclear charge number
in Refs. \cite{sha02b,sha17}. In the present paper we perform the calculations
for extended nuclei and effective potentials which partly account for the electron-electron
interaction effects.
Our first results for $Z=82,92$
were presented
in Ref. \cite{mal17}, where they were used to search for a possibility
to test QED beyond the Furry picture. In the present paper we extend these calculations to
the range $Z=10-92$.
Since the inclusion of the screening potential into the zeroth-order
Hamiltonian allows one to account for the interelectronic-interaction
effects only partly, we perform the calculations with several
different effective potentials to keep the uncertainty
of the corresponding contribution better under control.
The calculations have been performed for the core-Hartree (CH),
local Dirac-Fock (LDF), and Perdew-Zunger (PZ) effective potentials.
The CH screening potential is derived from the radial charge density of two
$1s$ electrons,
\begin{eqnarray}
V_{\rm CH}(r)=\alpha\int_0^{\infty} dr'\frac{1}{r_>} \rho_{\rm CH}(r'),
\end{eqnarray}
where $r_>={\rm max}(r,r')$,
\begin{eqnarray}
\rho_{\rm CH}(r)=2[G_{1s}^2(r) + F_{1s}^2(r)]\,,\;\;\int_0^{\infty} dr \rho_{\rm CH}(r) =2\,,
\end{eqnarray}
and $G/r$ and $F/r$ are the large and small components of the radial Dirac wave function.
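The core-Hartree construction can be illustrated with a minimal numerical sketch in atomic units, using the nonrelativistic hydrogenic $1s$ orbital as a stand-in for the relativistic components $G$ and $F$; for this density the integral has a known closed form against which the quadrature can be checked.

```python
import math

# Sketch of the core-Hartree potential (atomic units) for the
# nonrelativistic hydrogenic 1s density: rho(r) = 8 Z^3 r^2 exp(-2 Z r),
# normalised to two electrons.
Z = 2.0

def rho(r):
    return 8 * Z**3 * r * r * math.exp(-2 * Z * r)

def v_ch(r, n=20000, rmax=12.0):
    """V_CH(r) = int dr' rho(r') / max(r, r'), by a simple Riemann sum."""
    h = rmax / n
    s = 0.0
    for i in range(1, n + 1):
        rp = i * h
        s += rho(rp) / max(r, rp) * h
    return s

def v_exact(r):
    # closed form for the hydrogenic 1s Hartree potential (2 electrons)
    return 2 * (1 / r - math.exp(-2 * Z * r) * (Z + 1 / r))

r = 0.8
print(v_ch(r), v_exact(r))
```

As expected, the potential interpolates between $\langle 1/r\rangle$-dominated behaviour at small $r$ and the asymptotic $2/r$ of a screened two-electron core at large $r$.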
The LDF potential is constructed
by inversion of the radial Dirac equation with the radial wave functions obtained in the Dirac-Fock
approximation. The corresponding procedure
is described in detail in Ref. \cite{sha05}.
The last potential
applied in our work is the Perdew-Zunger potential \cite{per81}, which
has been widely employed in molecular and
cluster calculations.
In Eq. (\ref{high}), the summation over the intermediate electron states
was performed employing
the finite basis set method. The basis functions were constructed from B-splines \cite{sap96}
within the framework of the dual-kinetic-balance approach \cite{sha04}.
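The spline construction underlying such basis sets can be sketched with the Cox-de Boor recursion; the snippet below builds a clamped cubic basis and verifies the partition-of-unity property. This is only the generic B-spline machinery: the actual basis of Ref. \cite{sha04} additionally imposes the dual kinetic balance between the large and small radial components.

```python
def bspline(i, k, t, x):
    """Cox-de Boor recursion: value at x of B-spline i of order k on knots t."""
    if k == 1:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    val = 0.0
    if t[i + k - 1] > t[i]:       # guard against repeated (clamped) knots
        val += (x - t[i]) / (t[i + k - 1] - t[i]) * bspline(i, k - 1, t, x)
    if t[i + k] > t[i + 1]:
        val += (t[i + k] - x) / (t[i + k] - t[i + 1]) * bspline(i + 1, k - 1, t, x)
    return val

k = 4                                             # order (cubic splines)
t = [0.0] * k + [0.25, 0.5, 0.75] + [1.0] * k     # clamped knot sequence
nspl = len(t) - k                                 # number of basis functions (= 7)
x = 0.37
total = sum(bspline(i, k, t, x) for i in range(nspl))
print(total)        # partition of unity: equals 1 inside the knot interval
```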
The integration over $\omega$
was carried out analytically for the ``Coulomb'' contribution (the term without
the ${\bf D}$ vector)
and numerically for the ``one-transverse'' and ``two-transverse'' photon contributions
(the terms with one and two ${\bf D}$ vectors, respectively)
using Wick's rotation.
The total QED recoil contribution
$\Delta g_{\rm H}$ for the $2s$ state
is conveniently
expressed in terms of the function $P^{(2s)}(\alpha Z)$,
\begin{eqnarray}
\label{P}
\Delta g_{\rm H}^{(2s)}=\frac{m}{M}\frac{(\alpha Z)^5}{8} P^{(2s)}(\alpha Z)\,.
\end{eqnarray}
The numerical results are presented in Table~\ref{H-like}.
For comparison, in the second column we list the point-nucleus results which
were partly presented in Ref. \cite{sha17}.
In Table~\ref{F_tab}, we present the total values of the recoil corrections
to the $g$ factor of the ground $(1s)^2 2s$ state of Li-like ions.
They are expressed in terms of the function $F(\alpha Z)$, defined by
\begin{eqnarray} \label{F}
\Delta g=\frac{m}{M}(\alpha Z)^2 F(\alpha Z)\,.
\end{eqnarray}
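Comparing Eq. (\ref{F}) with Eq. (\ref{P}) shows that the QED recoil part of $F(\alpha Z)$ equals $(\alpha Z)^3 P(\alpha Z)/8$; the sketch below checks this relation against the tabulated values at $Z=20$:

```python
ALPHA = 1 / 137.035999

# Cross-check between the two parameterisations:
#   Delta g = (m/M)(alpha Z)^2 F       [Eq. (F)]
#   Delta g_H = (m/M)(alpha Z)^5/8 P   [Eq. (P)]
# hence F_QED = (alpha Z)^3 / 8 * P.
Z = 20
P_LDF = 5.5395                        # Table H-like (LDF potential)
F_QED = (ALPHA * Z) ** 3 / 8 * P_LDF
print(F_QED)                          # Table F_tab quotes 0.0022(2) at Z = 20
```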
The Breit-approximation recoil contributions are obtained by Eq. (\ref{ABC})
with the coefficients given in Table \ref{ABC_tab}.
The uncertainties include both the error bars presented
in Table \ref{ABC_tab} and the uncertainties due to the
approximate treatment of the nuclear size correction to
the recoil effect. We have assumed that the relative value of the latter uncertainty is equal to the related
contribution to the binding energy which was evaluated within the Breit approximation in Ref. \cite{ale15}.
For the QED recoil contribution we use
the LDF values from Table~\ref{H-like}.
The uncertainty of this term
is estimated as a sum of two contributions.
The first one is due to the approximate treatment of the electron-electron
interaction effect on the QED recoil contribution.
This uncertainty was estimated by performing the calculations of the low-order
(non-QED) one-electron recoil contribution
with the LDF potential and comparing the obtained result with the total Breit recoil value
evaluated above.
The ratio of the obtained difference to the non-QED LDF result was chosen
as the relative uncertainty of the corresponding correction to the
QED recoil contribution. It should be noted that this uncertainty exceeds
the difference between the results obtained for the different
screening potentials presented in Table~\ref{H-like}.
The second contribution to the uncertainty is
caused by the approximate treatment of the nuclear size correction to the recoil effect.
It was estimated in the same way as for the Breit recoil contribution.
As one can see from Table~\ref{F_tab}, for very heavy ions the QED recoil effect
becomes even larger than the Breit recoil contribution.
The total recoil contribution to the $g$ factor
should also include small corrections of orders $\alpha (\alpha Z)^2(m/M)$
and $ (\alpha Z)^2(m/M)^2$ and the related corrections of higher
orders in $\alpha Z$ and in $1/Z$.
To the lowest order in $\alpha Z$ the corresponding one-electron corrections were evaluated
in Refs. \cite{gro71,clo71,pac08,eid10}.
\section{Conclusion}
In this paper we have evaluated the nuclear recoil effect of the first
order in $m/M$ on the ground-state $g$ factor of highly charged Li-like ions.
The Breit-approximation contributions have been calculated to all orders
in $1/Z$ employing the recursive perturbation theory. The one-electron
higher-order (QED) recoil contribution was evaluated to all orders in $\alpha Z$
with the wave functions which partly account for the electron-electron
interaction effects. As a result, the most precise theoretical
predictions for the recoil effect on the $g$ factor of highly charged
Li-like ions are presented.
\section{Acknowledgments}
This work was supported by the Russian Science Foundation (Grant No. 17-12-01097).
\begin{table}
\caption{The Breit-approximation recoil contributions to the $g$ factor of the
$(1s)^2 2s$ state of Li-like ions
expressed in terms of
the coefficients $A(\alpha Z)$, $B(\alpha Z)$, and $C(\alpha Z, Z)$ defined
by Eq. (\ref{ABC}).}
\label{ABC_tab}
\begin{tabular}{cr@{.}lr@{.}lr@{.}lr@{.}lr@{.}l} \hline
$Z$& \multicolumn{2}{c}{$A(\alpha Z)$}
& \multicolumn{2}{c}{$B(\alpha Z)$}
& \multicolumn{2}{c}{$C(\alpha Z, Z)$ }
\\
\hline
10 & 0&2503 & $-$0&5172 & $-$0&236(4) \\
12 & 0&2504 & $-$0&5179 & $-$0&243(3) \\
14 & 0&2506 & $-$0&5187 & $-$0&245(3) \\
16 & 0&2508 & $-$0&5197 & $-$0&248(3) \\
18 & 0&2510 & $-$0&5207 & $-$0&250(2) \\
20 & 0&2512 & $-$0&5219 & $-$0&250(2) \\
24 & 0&2517 & $-$0&5247 & $-$0&250(1) \\
28 & 0&2524 & $-$0&5279 & $-$0&247(1) \\
30 & 0&2527 & $-$0&5297 & $-$0&245(1) \\
32 & 0&2531 & $-$0&5315 & $-$0&243 \\
40 & 0&2548 & $-$0&5402 & $-$0&228 \\
48 & 0&2567 & $-$0&5504 & $-$0&205 \\
50 & 0&2572 & $-$0&5531 & $-$0&198 \\
56 & 0&2588 & $-$0&5618 & $-$0&177 \\
60 & 0&2599 & $-$0&5677 & $-$0&160 \\
64 & 0&2607 & $-$0&5734 & $-$0&141 \\
70 & 0&2616 & $-$0&5813 & $-$0&105 \\
72 & 0&2617 & $-$0&5836 & $-$0&092 \\
80 & 0&2606 & $-$0&5886 & $-$0&037 \\
82 & 0&2597 & $-$0&5881 & $-$0&019 \\
90 & 0&2510 & $-$0&5721 & 0&051 \\
92 & 0&2471 & $-$0&5629 & 0&065 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{The higher-order (QED) recoil contribution to the $2s$ $g$ factor
expressed in terms of the function $P^{(2s)}(\alpha Z)$ defined by
Eq. (\ref{P}).
The indices Coul, CH, LDF and PZ refer to the Coulomb and various screening potentials, see the text.
The indices p.n. and f.n. correspond to the point-like and finite-size nuclear models.}
\label{H-like}
\begin{tabular}{cr@{.}lr@{.}lr@{.}lr@{.}lr@{.}l} \hline
$Z$& \multicolumn{2}{c}{$P^{\rm (p.n.)}_{\rm Coul}(\alpha Z)$}
& \multicolumn{2}{c}{$P^{\rm (f.n.)}_{\rm Coul}(\alpha Z)$}
& \multicolumn{2}{c}{$P^{}_{\rm CH}(\alpha Z)$}
& \multicolumn{2}{c}{$P^{}_{\rm LDF} (\alpha Z)$ }
& \multicolumn{2}{c}{$P^{}_{\rm PZ}(\alpha Z)$}
\\
\hline
10 & 8&8762(1) & 8&9145 & 6&2670 & 6&1840 & 6&6098 \\
12 & 8&1943(1) & 8&2333 & 6&1987 & 6&1326 & 6&4787 \\
14 & 7&6447(1) & 7&6847 & 6&0614 & 6&0069 & 6&2955 \\
16 & 7&1911(1) & 7&2325 & 5&8998 & 5&8539 & 6&0995 \\
18 & 6&8101(1) & 6&8539 & 5&7349 & 5&6953 & 5&9081 \\
20 & 6&4860(1) & 6&5309 & 5&5740 & 5&5395 & 5&7264 \\
24 & 5&9670(1) & 6&0151 & 5&2844 & 5&2571 & 5&4065 \\
28 & 5&5753(1) & 5&6267 & 5&0429 & 5&0205 & 5&1446 \\
30 & 5&4160(1) & 5&4703 & 4&9412 & 4&9208 & 5&0351 \\
32 & 5&2771(1) & 5&3341 & 4&8509 & 4&8322 & 4&9382 \\
40 & 4&8840(1) & 4&9487 & 4&5900 & 4&5760 & 4&6588 \\
48 & 4&6937(1) & 4&7686 & 4&4789 & 4&4680 & 4&5375 \\
50 & 4&6727(1) & 4&7499 & 4&4723 & 4&4619 & 4&5290 \\
56 & 4&6697(1) & 4&7530 & 4&5028 & 4&4940 & 4&5557 \\
60 & 4&7182(1) & 4&8035 & 4&5658 & 4&5578 & 4&6171 \\
64 & 4&8098(2) & 4&8958 & 4&6670 & 4&6596 & 4&7173 \\
70 & 5&039(1) & 5&1144 & 4&8928 & 4&8865 & 4&9429 \\
72 & 5&144(1) & 5&2114 & 4&9908 & 4&9847 & 5&0411 \\
80 & 5&753(3) & 5&7437 & 5&5200 & 5&5152 & 5&5728 \\
82 & 5&965(3) & 5&9188 & 5&6926 & 5&6881 & 5&7464 \\
90 & 7&19(2) & 6&8284 & 6&5850 & 6&5818 & 6&6449 \\
92 & 7&64(2) & 7&1187 & 6&8689 & 6&8662 & 6&9309 \\
\hline
\end{tabular}
\end{table}
\begin{table}
\caption{The Breit, QED, and total recoil contributions to the $g$ factor of the
$(1s)^2 2s$ state of Li-like ions
expressed in terms of
the function $F(\alpha Z)$ defined by Eq. (\ref{F}).
}
\label{F_tab}
\begin{tabular}{cr@{.}lr@{.}lr@{.}lr@{.}lr@{.}l} \hline
$Z$& \multicolumn{2}{c}{$F_{\rm Breit}$}
& \multicolumn{2}{c}{$F_{\rm QED}$}
& \multicolumn{2}{c}{$F_{\rm total}$ }
\\
\hline
10 & 0&1962(1) & 0&0003(1) & 0&1965(1) \\
12 & 0&2056 & 0&0005(1) & 0&2061(1) \\
14 & 0&2123 & 0&0008(1) & 0&2131(1) \\
16 & 0&2173 & 0&0012(1) & 0&2185(1) \\
18 & 0&2213 & 0&0016(2) & 0&2229(2) \\
20 & 0&2245 & 0&0022(2) & 0&2266(2) \\
24 & 0&2294 & 0&0035(3) & 0&2330(3) \\
28 & 0&2332 & 0&0054(3) & 0&2385(3) \\
30 & 0&2348 & 0&0065(4) & 0&2412(4) \\
32 & 0&2362 & 0&0077(4) & 0&2439(4) \\
40 & 0&2411 & 0&0142(6) & 0&2553(6) \\
48 & 0&2452 & 0&0240(8) & 0&2692(8) \\
50 & 0&2461 & 0&0271(9) & 0&2732(9) \\
56 & 0&2487(1) & 0&0383(11) & 0&2871(11) \\
60 & 0&2503(1) & 0&0478(13) & 0&2982(13) \\
64 & 0&2517(2) & 0&0593(15) & 0&3110(16) \\
70 & 0&2533(3) & 0&0814(20) & 0&3347(20) \\
72 & 0&2536(4) & 0&0904(22) & 0&3440(22) \\
80 & 0&2533(10) & 0&1372(30) & 0&3904(32) \\
82 & 0&2525(12) & 0&1523(33) & 0&4048(36) \\
90 & 0&2446(28) & 0&2331(54) & 0&4777(61) \\
92 & 0&2410(35) & 0&2597(65) & 0&5007(73) \\
\hline
\end{tabular}
\end{table}
\section{Introduction}
The star formation history and the gas flows determine the distribution of chemical abundances in galaxies. The existence of radial gradients of chemical abundances in spiral galaxies is a well known fact and is thought to result from the 'inside-out' galaxy formation scenario, in which fewer generations of stars have existed at larger galactocentric distances.\\
Almost no metals were created in the beginning of the Universe. Thus, the difference in metal content between two stellar generations is noticeable and a radial gradient can be traced. On the other hand, most of the helium was created during the Primordial Nucleosynthesis, and the determination of its radial gradient is highly sensitive to errors and uncertainties in the calculation of the He/H ratio.\\
Determining a radial gradient in the Galactic distribution of helium requires a sample of objects with good-quality spectra that covers a wide range of Galactocentric distances. HII regions are suitable for this purpose since they trace the present-day chemical composition of the Galaxy.
\section{Procedure}
\label{procedure}
\begin{figure}[t]
\begin{center}
\hspace{0.25cm}
\includegraphics[height=5.0cm]{mendez-fig1.eps}
\caption{Distribution of HII regions of the sample. The Galactocentric distance is presented in kpc. The yellow dot indicates the position of the Sun (8 kpc).}
\label{hiimap}
\end{center}
\end{figure}
Our sample consists of 37 deep spectra of 23 Galactic HII regions, covering a range of Galactocentric distances between 5 and 17 kpc. In Figure \ref{hiimap} the distribution of the HII regions in the Galactic disk is shown.\\
The data have been obtained with the high-resolution spectrograph UVES at VLT, OSIRIS at GTC and MagE at Magellan Telescope \citep{gr_04_NGC3576, ce_04_ORION, gr_05_S311, gr_06, gr_07, ce_13_NGC2579, ce_16, ce_17}. The spectral resolution and wavelength coverage allowed us to detect several He$\thinspace$I recombination lines in each spectrum in both spin configurations: triplet and singlet. \\
We calculated the abundance of He$^+$ using PyNeb \citep{pyneb_ref} with all the He$\thinspace$I lines detected in the sample. Some lines, such as $\lambda$3889, $\lambda$7065, $\lambda$7281 and $\lambda$9464, were discarded from the analysis because they are strongly affected by line blending, self-absorption, collisions or blending with sky lines. Some other lines were also discarded in particular cases, such as $\lambda$5016 in the Orion Nebula \citep{ce_04_ORION}. The effective recombination coefficients used were those calculated by \cite{porter_2013}, which include corrections for collisional effects. \\
To calculate the total helium abundance, the He$^0$/H$^+$ ratio was estimated using four Ionization Correction Factors (ICFs) based on similarities between the ionization potentials of sulfur and/or oxygen and helium \citep{peimbert_77, kunth_83, peimbert_92, zhang_03}. Using Galactocentric distance estimates from the literature for each region of the sample, we derived the radial distribution of helium, analysing separately the results based on singlet and triplet lines.
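Schematically, the last step amounts to a simple scaling of the ionic abundance; the numbers below are assumed placeholders (not measured values of any region in our sample), and the metal contribution to the mass fraction is neglected:

```python
# The ICF converts the ionic He+/H+ ratio into a total He/H abundance;
# the helium mass fraction Y then follows if metals are neglected.
he_plus = 0.085          # He+/H+ from recombination lines (assumed value)
icf = 1.02               # ionization correction factor (assumed value)
he_h = icf * he_plus     # total He/H number ratio
Y = 4 * he_h / (1 + 4 * he_h)   # mass fraction, ignoring metals
print(he_h, Y)
```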
\section{Results and conclusions }
\label{results}
Estimates of He$^+$/H$^+$ based on triplet lines were comparatively higher, as were their associated dispersions. As an example, Figure \ref{orion} shows the He$^+$/H$^+$ values obtained with every detected He$\thinspace$I line in the Orion Nebula. Figure \ref{lineas} shows the overestimate of He$^+$/H$^+$ in calculations based on the triplet He$\thinspace$I $\lambda$5876 line compared to the average value from singlets.\\
\begin{figure}[h]
\begin{center}
\hspace{0.25cm}
\includegraphics[height=4.5cm]{mendez-fig2.png}
\caption{Abundance of He$^+$ obtained with several recombination lines. Horizontal lines represent the average value for the singlets and triplets. The coloured band indicates the associated uncertainty. The red symbols represent triplets while blue ones represent singlets. Crosses indicate discarded lines.}
\label{orion}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\hspace{0.25cm}
\includegraphics[height=4.5cm]{mendez-fig3.eps}
\caption{Representation of the He$^{+}$/H$^+$ ratio determined for He$\thinspace$I 5876\AA$\thinspace$ with respect to the mean He$^+$/H$^+$ ratio determined from singlets (indicated by the color of the circles) as a function of T$_e$ and $n_e$.}
\label{lineas}
\end{center}
\end{figure}
The resulting radial distribution of helium has a negative slope when singlet lines are used, regardless of the ICF. In the case of triplet lines, the slope depends on the ICF used. Calculations based on triplet lines are affected by self-absorption mechanisms that are not easy to correct. Figure \ref{radial} shows the radial distribution of helium based on singlet lines and the average value when all the ICFs are considered.
\begin{figure}[h]
\begin{center}
\hspace{0.25cm}
\includegraphics[height=4.5cm]{mendez-fig4.eps}
\caption{Radial distribution of He/H calculated with singlet lines and the average value of all ICF used. Red symbols have ICF$\geq$1.2 and were not
used for the linear fit.}
\label{radial}
\end{center}
\end{figure}
Our preliminary results are summarised as follows:
\begin{enumerate}
\item{Almost all triplet lines yield comparatively higher He$^{+}$/H$^{+}$ ratios, and with larger dispersion. Since corrections for collisional processes are already included in the He abundance determinations, this suggests that the effects of self-absorption are not negligible in most triplet lines.}
\item{Discarding triplet lines, the radial distribution of helium has a negative slope regardless of the ICF used. Nevertheless, due to the uncertainty introduced by the ICFs, errors in the slope are still consistent with a flat distribution.}
\item{HII regions associated with Wolf-Rayet or evolved O stars present helium overabundances.}
\end{enumerate}
\acknowledgments CEL and JRG acknowledge support from the project AYA2015-65205-P. JGR acknowledges support from an Advanced Fellowship from the Severo Ochoa excellence program (SEV-2015-0548).
\bibliographystyle{aaabib}
\section{Introduction}
In general, galaxies can be divided into two populations, namely,
star-forming galaxies (hereafter SFGs) and quiescent galaxies with little
star formation (hereafter QGs).
At $z\lesssim 1$, these two populations show different morphological
properties; QGs tend to show centrally concentrated spheroidal shapes
with little disturbed feature,
while many SFGs have a (main) disk with spiral patterns and a spheroidal
bulge (e.g., \citealp{rob94}; \citealp{blu19}).
The evolution of number or stellar mass density of these two populations
over cosmic time suggests that some fraction of SFGs stop their
star formation by some mechanism(s) and then evolved into QGs
(e.g., \citealp{fab07}; \citealp{pen10}).
Such transition from SFG to QG is considered to be one of the most
important processes in galaxy evolution.
While many mechanisms for the quenching of star formation with various decaying
timescales have been proposed so far (e.g., \citealp{dek86}; \citealp{bar96};
\citealp{aba99}; \citealp{bir03}; \citealp{mar09}; \citealp{fab12};
\citealp{spi22}),
it is still unclear which mechanism plays a dominant role
in galaxy evolution and how it depends on
conditions such as galaxy properties, environment, epoch, and so on.
Investigating properties of galaxies in the transition phase
is one of the powerful ways to
reveal the physical mechanisms of quenching.
In this context, post-starburst galaxies (hereafter, PSBs) that experienced
a strong starburst followed by quenching in the recent past
have been considered to be an
important population and studied intensively (see \citealp{fre21}, for recent
review).
PSBs are selected by their strong Balmer absorption lines and no or weak
nebular emission lines such as H$\alpha$, [O{\footnotesize II}], and so on
(e.g., \citealp{zab96}; \citealp{dre99}; \citealp{qui04}).
The strong Balmer absorption lines are caused by a significant contribution
from A-type stars, which indicates high star formation activities
in the recent past, while no significant emission lines suggest
little on-going star formation in the galaxy.
While the fraction of PSBs is relatively small
in the entire galaxy population
($\lesssim $1--2\%,
\citealp{zab96}; \citealp{qui04}; \citealp{bla04}; \citealp{tra04};
\citealp{got07}; \citealp{wild09};
\citealp{yan09}; \citealp{ver10};
\citealp{won12}; \citealp{row18}; \citealp{che19}),
PSBs are expected to stop their star
formation abruptly after the starburst and are therefore considered to be in a
rapid transition phase from SFG to QG.
Several studies suggest that the fraction of PSBs increases with increasing
redshift, and a significant fraction of galaxies at $z\sim 1$
could pass through the PSB phase when they evolved into quiescent
galaxies (\citealp{wild09}; \citealp{whi12};
\citealp{wil16}; \citealp{wil20}).
Morphological properties of PSBs have also been investigated so far,
because they can provide important clues to reveal physical origins
of the starburst and rapid quenching of star formation.
Many previous studies found that a significant fraction of PSBs at low and
intermediate redshifts show asymmetric/disturbed features such as
tidal tails (e.g., \citealp{zab96}; \citealp{bla04}; \citealp{tra04};
\citealp{yam05}; \citealp{yan08}; \citealp{pra09}; \citealp{won12};
\citealp{deu20}; \citealp{wil22}).
The morphological disturbances in many PSBs suggest
that galaxy mergers/interactions could be closely related with the
origin of the PSB phase.
Theoretical studies with numerical simulations predicted that
gas-rich major mergers cause morphological disturbances and gas infall
to the centre of remnants, and then a strong starburst occurs in the central
region (e.g., \citealp{bar91}; \citealp{bar96}; \citealp{bek01}).
Such nuclear starbursts could lead to rapid
quenching of star formation through rapid gas consumption by the burst
and/or gas loss/heating by supernova feedback, AGN outflow,
tidal force, and so on
(e.g., \citealp{bek05}; \citealp{sny11}; \citealp{dav19};
\citealp{spi22}).
On the other hand, relatively massive PSBs with
$M_{\rm star} \gtrsim 10^{10} M_{\odot}$ at $z\lesssim 1$
tend to have centrally concentrated early-type morphologies
(e.g., \citealp{qui04}; \citealp{tra04}; \citealp{yam05};
\citealp{yan08}: \citealp{ver10}; \citealp{pra09}; \citealp{mal18}).
These results suggest that morphological changes from those
with a star-forming disk to spheroidal shapes could rapidly proceed,
if such changes are associated with the transition
through the PSB phase.
Several studies investigated asymmetric/disturbed features in
PSBs as a function of time elapsed after starburst, and found
that the disturbed features such as tidal tails weaken or
disappear on a relatively short timescale of $\sim 0.3$ Gyr
(e.g., \citealp{paw16}; \citealp{saz21}).
The numerical simulations of gas-rich mergers also predict
the similar decay of disturbed features on a timescale of
$\sim $ 0.1--0.5 Gyr (\citealp{lot08}; \citealp{lot10};
\citealp{sny15}; \citealp{paw18}; \citealp{nev19}).
While the asymmetric features over entire galaxies seem to
rapidly weaken after starburst,
observations of CO lines and dust far-infrared emission for
PSBs at low redshifts suggest
that significant molecular gas and dust are sustained even in
those with relatively old ages of $\sim $ 500-600 Myr
after starburst
(\citealp{row15}; \citealp{fre18}; \citealp{li19}).
The molecular gas and dust tend to be concentrated in the central
region of those galaxies (e.g., \citealp{sme18}; \citealp{sme22}),
and dusty disturbed features near the centre
in optical images of PSBs have also been reported
(e.g., \citealp{yam05}; \citealp{yan08}; \citealp{pra09}).
Thus such disturbed/asymmetric features in the central region of
PSBs could continue for longer time than those in
outer regions such as tidal tails.
In this paper, we select PSBs that experienced
high star formation activity several hundred Myr before observation
followed by rapid quenching, by performing SED fitting
with a multi-band ultraviolet (UV) to mid-infrared (MIR) photometric
data set, including optical intermediate bands,
in the COSMOS field \citep{sco07a},
in order to statistically study the morphological properties
of PSBs with relatively
old ages after starburst at $z \sim 0.8$.
We investigate their morphological properties
with non-parametric indices
such as concentration and asymmetry on the
{\it Hubble Space Telescope }/Advanced Camera for Surveys ({\it HST}/ACS) data,
in particular, focusing on concentration of
asymmetric features near the centre of these galaxies.
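For reference, a rotational-asymmetry index of the kind measured in Section 3 can be sketched on a toy image as follows; the centring minimisation and background correction applied in the real measurements are omitted here, and the toy arrays are of course illustrative.

```python
# Minimal sketch of a non-parametric asymmetry index:
# A = sum |I - I_180| / (2 sum |I|), where I_180 is the image rotated
# by 180 degrees about the centre.
def asymmetry(img):
    n = len(img)
    rot = [[img[n - 1 - i][n - 1 - j] for j in range(n)] for i in range(n)]
    num = sum(abs(img[i][j] - rot[i][j]) for i in range(n) for j in range(n))
    den = sum(abs(img[i][j]) for i in range(n) for j in range(n))
    return num / (2 * den)

symmetric = [[1, 2, 1],
             [2, 9, 2],
             [1, 2, 1]]
lopsided = [[1, 2, 5],
            [2, 9, 2],
            [1, 2, 1]]
print(asymmetry(symmetric), asymmetry(lopsided))
```

A point-symmetric light distribution gives $A=0$, while any one-sided feature (a tidal tail, say) raises the index.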
Section 2 describes the data used in this study. In Section 3,
we describe sample selection with the SED fitting and
methods to measure the non-parametric morphological indices,
including newly devised concentration of asymmetric features.
In Section 4, we present the morphological indices of PSBs
and compare them with those of SFGs and QGs. We discuss
our results and their implications in Section 5, and summarise the
results of this study in Section 6.
Throughout this paper,
we assume a flat universe with $\Omega_{\rm matter}=0.3$, $\Omega_{\Lambda}=0.7$,
and $H_{0}=70$ km s$^{-1}$ Mpc$^{-1}$, and magnitudes are given in the AB system.
\section{Data} \label{sec:data}
In this study, we used a multi-wavelength catalogue of COSMOS2020
\citep{wea22} to construct a sample of $z\sim0.8$ galaxies that
experienced a starburst followed by rapid quenching several hundred Myr
before observation.
\citet{wea22} provided multi-band photometry from UV to MIR
wavelengths, namely, $GALEX$ FUV and NUV \citep{zam07}, CFHT/MegaCam $u$ and $u^{*}$ \citep{saw19}, Subaru/HSC $grizy$ \citep{aih19}, Subaru/Suprime-Cam $Bg^{'}Vr^{'}i^{'}z^{'}z^{''}$ \citep{tan07} and 12 intermediate and 2 narrow bands
\citep{tan15}, VISTA/VIRCAM $YJHK_{s}$ and $NB119$ \citep{mcc12}, and
$Spitzer$/IRAC ch1--4 (\citealp{ash13}; \citealp{ste14}; \citealp{ash15}; \citealp{ash18}), for objects in the COSMOS field \citep{sco07a}.
Source detection was performed on an $izYJHK_{s}$-band combined image, and
aperture photometry was done on PSF-matched images with SExtractor
\citep{ber96}.
We used the photometry measured with 3 arcsec diameter apertures
in the $GALEX$ NUV, MegaCam $u$
and $u^{*}$, HSC $grizy$, Suprime-Cam $BVi^{'}z^{''}$ and 12 intermediate,
VIRCAM $YJHK_{s}$, and IRAC ch1--4 bands for objects with $i<24$ from
``CLASSIC'' catalogue of COSMOS2020 in order
to carry out SED fitting analysis and sample selection
described in Section \ref{sec:ana}.
We set the magnitude limit of $i<24$
to investigate their detailed morphology
on {\it HST}/ACS data and
ensure accuracy of photometry, in particular,
that of the intermediate bands in the SED fitting.
We excluded X-ray AGNs from our sample because the SED fitting described
in the next section does not include AGN emission.
The photometric offsets derived from the SED fitting with LePhare
\citep{wea22} were applied to these fluxes.
Furthermore, we similarly estimated additional small photometric offsets
by performing
the SED fitting described in the next section for isolated galaxies
with spectroscopic redshifts from zCOSMOS (\citealp{lil07}; \citealp{lil09})
and LEGA-C (\citealp{van16}; \citealp{van21}) surveys,
and applied them.
We corrected these fluxes for Galactic extinction by using $E(B-V)$ value
of the Milky Way at each object position from the catalogue.
For morphological analysis,
we used COSMOS {\it HST}/ACS $I_{\rm F814W}$-band data version 2.0 \citep{koe07}.
The data have a pixel scale of 0.03 arcsec/pixel and a PSF FWHM
of $\sim 0.1$ arcsec.
The 50\% completeness limit is $I\sim 26$ mag for sources with a
half-light radius of 0.5 arcsec \citep{sco07b}.
We also used the Subaru/Suprime-Cam $i^{'}$-band
data in order to determine pixels of the ACS image which belong to the
target galaxy. The reduced $i^{'}$-band data have a pixel scale of
0.15 arcsec/pixel and a PSF FWHM of $\sim 1.0$ arcsec \citep{tan07}.
\section{Analysis} \label{sec:ana}
\subsection{SED fitting} \label{sec:sed}
In order to select PSBs that experienced a high star formation activity
followed by rapid quenching several hundreds Myr before observation,
we fitted the multi-band photometry of objects with $i<24$ from the
COSMOS2020 catalogue mentioned above with population
synthesis models of GALAXEV \citep{bru03}.
For the purpose, we adopted non-parametric, piece-wise constant function of
star formation history (SFH) where SFR varies among different time intervals
but is constant in each interval,
following previous studies (\citealp{toj07}; \citealp{kel14};
\citealp{lej17}; \citealp{lej19}; \citealp{cha18}).
We divided the look-back time for each galaxy into seven periods, namely,
0--40 Myr, 40--321 Myr, 321--1000 Myr, 1--2 Gyr, 2--4 Gyr, 4--8 Gyr, and
8--12 Gyr before observation, which are similar to those used in
\citet{lej17}. We constructed model SED templates of stars formed in
the different periods by assuming Chabrier IMF
\citep{cha03} and constant SFR in each period.
Figure \ref{fig:temp} shows the model templates of stars formed in the
seven periods for different metallicities.
The model SEDs used in the fitting are based on a linear combination of
the seven templates, and normalisation coefficients for the seven
templates are free parameters. Thus SFHs are expressed as constant SFRs in
the seven time intervals.
In order to search the best-fit values of the normalisation
coefficients that provide the minimum $\chi^{2}$,
we adopted the Non-Negative Least Squares (NNLS) algorithm
\citep{law74} following GASPEX by \citet{mag15},
\begin{figure}
\includegraphics[width=\columnwidth]{fig1.eps}
\caption{The model templates of stars formed in the seven periods of
look-back time. They are constructed from the GALAXEV library under
the assumption of constant SFR in each period.
The linear combination of these templates is fitted
to the observed SEDs. The short-dashed, long-dashed, and solid lines
show those with 0.2, 0.4, and 1.0 $Z_{\odot}$, respectively.
\label{fig:temp}}
\end{figure}
while we used a simple full grid search for redshift, metallicity, and dust
extinction.
The NNLS algorithm basically solves a system of linear equations represented
as a matrix equation, but carries out some iterations to search non-negative
solutions with changing non-active (zero-value) coefficients.
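The role of the non-negative template fit can be illustrated with a toy example. Note that the Lawson-Hanson active-set algorithm \citep{law74} is what is actually used; the sketch below substitutes a simple projected coordinate descent and synthetic templates, so all numbers are illustrative.

```python
# Toy version of the non-negative template fit: SFH coefficients x are
# recovered by minimising ||A x - b||^2 subject to x >= 0, here with a
# simple projected coordinate descent as a stand-in for Lawson-Hanson NNLS.
def nnls_cd(A, b, sweeps=2000):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for j in range(n):
            r = [b[i] - sum(A[i][k] * x[k] for k in range(n)) for i in range(m)]
            num = sum(A[i][j] * r[i] for i in range(m))
            den = sum(A[i][j] ** 2 for i in range(m))
            x[j] = max(0.0, x[j] + num / den)   # exact 1-D step, then project
    return x

# three synthetic "templates" sampled in five bands
t1 = [1.0, 2.0, 3.0, 4.0, 5.0]
t2 = [5.0, 4.0, 3.0, 2.0, 1.0]
t3 = [1.0, 1.0, 4.0, 1.0, 1.0]
A = [[t1[i], t2[i], t3[i]] for i in range(5)]
true_x = [2.0, 0.0, 1.5]
b = [sum(A[i][j] * true_x[j] for j in range(3)) for i in range(5)]
x = nnls_cd(A, b)
print(x)    # ~[2.0, 0.0, 1.5]
```

The non-negativity constraint is what keeps the recovered piecewise-constant SFH physical: a template that does not contribute is pinned at exactly zero instead of taking an unphysical negative weight.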
The templates with three stellar metallicities, namely, 0.2, 0.4, and 1.0
$Z_{\odot}$ were fitted.
If we include templates with 2.5 $Z_{\odot}$, our sample of PSBs is
almost the same and results in this study do not change.
For simplicity, we fixed the metallicity over all the periods
except for the youngest one, 0--40 Myr before observation.
The metallicity of the template of 0--40 Myr was independently chosen
from the same 0.2, 0.4, and 1.0 $Z_{\odot}$.
We added nebular emission only in the youngest template by using PANHIT
(\citealp{maw16}; \citealp{maw20}) because a contribution from
the nebular emission is negligible in templates of the other
older periods.
In PANHIT, the nebular emission is calculated from ionising spectra of
the stellar templates following \citet{ino11}.
We used only nebular continuum and hydrogen recombination lines
from PANHIT,
and recalculated fluxes of the other emission lines, namely
[O{\footnotesize II}]$\lambda\lambda$3727, [O{\footnotesize III}]$\lambda\lambda$4959,5007, [N{\footnotesize II}]$\lambda\lambda$6548,6583,
[S{\footnotesize II}]$\lambda\lambda$6726, and
[S{\footnotesize III}]$\lambda\lambda$9069,9532 by using emission
line ratios of local star-forming galaxies with various gas metallicities
(\citealp{nag06}; \citealp{vil96}), because relatively high
[O{\footnotesize III}]/[O{\footnotesize II}] ratios
in \citet{ino11} model at high metallicity lead to
slight underestimates of
photometric redshift for some fraction of star-forming galaxies
at $z\sim$ 0.5--1.0.
This is because the intermediate bands of Subaru/Suprime-Cam densely
sample wavelengths around the Balmer break at these redshifts, and
the deficit of the model [O{\footnotesize II}] fluxes
in an intermediate band mimics the Balmer break at slightly lower redshifts.
In the calculation of these emission lines,
we assumed the same gas metallicity as the stellar one
(i.e., 0.2, 0.4, and 1.0 $Z_{\odot}$) as in PANHIT.
The fraction of ionising photons that actually ionise the gas,
rather than escaping from the galaxy or being absorbed by dust,
was also varied from 0.1 to 1.0, although this fraction does not
affect the results of this study.
Linear independence among the templates is important for
estimating SFHs of galaxies accurately (e.g., \citealp{mag15}).
In order to select galaxies that experienced a high star formation
activity followed by
rapid quenching within $\sim 1$ Gyr, we chose the intervals of
the periods younger than 1 Gyr so that these templates are
close to orthogonal at the wavelength resolution of the intermediate
bands ($\lambda/\Delta\lambda \sim 20$).
In Figure \ref{fig:temp}, one can see that
variations in SED among different periods are relatively large
for those with young ages of $< 1$ Gyr, while
differences in metallicity affect the SEDs less strongly.
In the appendix, we calculated inner products between
the templates with different periods/metallicities to examine their linear
independence,
and confirmed that the ``angles'' between the templates with different periods
($<$ 1 Gyr) are relatively large, while those of the same period
with different metallicities are nearly parallel.
Thus we expect that SFH at $<$ 1 Gyr can be reproduced relatively well,
while our assumption of the fixed metallicity except for the youngest
period could affect our selection for PSBs if the metallicity significantly
changed between different periods.
On the other hand, the variations in SED among the different
periods and metallicities are much smaller for older ages of $>$
1 Gyr, which is well known as age-metallicity degeneracy \citep{wor94}.
By setting the several periods at $> 1$ Gyr,
we intend to keep flexibility to avoid systematic
effects of a single long period of the old age on the fitting in the
younger periods, while we do not aim to accurately estimate details of
SFH and metallicity at $> 1$ Gyr.
For the dust extinction, we used the Calzetti law \citep{cal00}
and attenuation curves for local star-forming galaxies with different stellar
masses, namely $10^{8.5}$--$10^{9.5} M_{\odot}$, $10^{9.5}$--$10^{10.5} M_{\odot}$,
and $10^{10.5}$--$10^{11.5} M_{\odot}$, from \citet{sal18}.
These four attenuation curves allow us to
cover the observed variation in the 2175\AA\ bump and to reproduce
the correlation between the bump strength and the overall slope
of the attenuation curve \citep{sal18}.
We adopted different ranges of $E(B-V)$ (or $A_{V}$) for these different
attenuation curves, namely, $E(B-V) \leq 1.6$ for the Calzetti law and
$E(B-V) \leq 0.4$ for those from \citet{sal18},
to take into account the observed correlation between
the overall slope and the $V$-band attenuation, i.e., the slope tends to be
steeper at smaller optical depth (\citealp{sal18}; \citealp{sal20}).
The intergalactic medium (IGM) absorption of
\citet{mad95} was applied to the dust-reddened model SEDs at each redshift.
We took the variation of the IGM absorption among lines of sight
into account by adding fractional errors to fluxes in bands
shorter than rest-frame 1216 \AA\ as in \citet{yon22}.
In the fitting, we searched the minimum $\chi^{2}$ at each redshift,
and calculated a redshift likelihood function,
$P(z) \propto \exp(-\frac{\chi^{2}(z)}{2})$,
where $\chi^{2}(z)$ is the minimum $\chi^{2}$ at each redshift.
We adopted the median of the likelihood function as a redshift of each
object (e.g., \citealp{ilb10}).
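The median of the redshift likelihood can be computed from the $\chi^{2}(z)$ curve as in the following sketch (grid and variable names are illustrative):

```python
import numpy as np

def median_photoz(z_grid, chi2_min):
    """Median of the redshift likelihood P(z) ~ exp(-chi2(z)/2),
    where chi2_min is the minimum chi^2 at each redshift of z_grid."""
    # Subtract the global minimum before exponentiating to avoid underflow.
    p = np.exp(-0.5 * (chi2_min - chi2_min.min()))
    cdf = np.cumsum(p) / p.sum()
    # The median is where the cumulative likelihood crosses 0.5.
    return np.interp(0.5, cdf, z_grid)
```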
The photometric data with a central rest-frame wavelength longer than 25000 \AA\
were excluded from the calculation of the minimum $\chi^{2}$ at each redshift,
because our model templates do not include the dust/PAH emission.
We checked the accuracy of the estimated redshifts by using the spectroscopic
redshift catalogues from LEGA-C and zCOSMOS.
At $z_{\rm spec} < 1$, 2587 and 6704 spectroscopic redshifts from LEGA-C and
zCOSMOS were matched to our sample.
The fraction of those with $\Delta z/(1+z_{\rm spec}) > 0.1$ is very small
(1.35\% and 0.45\% for the LEGA-C and zCOSMOS samples, respectively),
and the means and standard deviations of $\Delta z/(1+z_{\rm spec})$ for
galaxies with $\Delta z/(1+z_{\rm spec}) < 0.1$ are $-0.009 \pm 0.011$
and $-0.008 \pm 0.012$ for the LEGA-C and zCOSMOS samples, respectively.
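The accuracy statistics quoted above can be computed with a helper like this (a sketch; here the outlier cut is applied to $|\Delta z|/(1+z_{\rm spec})$, which is an assumption of the sketch):

```python
import numpy as np

def photoz_stats(z_phot, z_spec, outlier_cut=0.1):
    """Outlier fraction, and mean/standard deviation of
    dz = (z_phot - z_spec)/(1 + z_spec) for the non-outliers."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    outlier = np.abs(dz) > outlier_cut
    good = dz[~outlier]
    return outlier.mean(), good.mean(), good.std()
```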
For the PSBs described in the next subsection, 27 spectroscopic redshifts
from the two catalogues are available at $z_{\rm spec} < 1$,
and the accuracy of the estimated
redshifts is almost the same as for the full spec-$z$ sample, with no outlier of
$\Delta z/(1+z_{\rm spec}) > 0.1$.
Such high photometric redshift accuracy is provided by the optical
intermediate-band data, which densely sample wavelengths around
the Balmer/4000\AA\ break, and is consistent with those in
previous studies (\citealp{lai16}; \citealp{wea22}).
Including the attenuation curves from \citet{sal18} with different
slopes and UV bump strengths also contributes
to the accuracy of the photometric redshifts.
If we use only the Calzetti attenuation law with a fixed slope and no
UV bump, the fraction of catastrophic failures increases by $\sim 40$ \%
and the standard deviation of $\Delta z/(1+z_{\rm spec})$ increases by
$\sim 30$ \%.
We carried out Monte Carlo simulations to estimate probability
distributions of derived physical properties such as stellar mass at
the observed epoch, SFR and SSFR in the periods of look-back time.
We added random shifts based on photometric errors to the observed
fluxes and then performed the same SED fitting with the simulated photometries
fixing the redshift to the value mentioned above.
We did 1000 such simulations for each object and adopted the median values
and 68\% confidence intervals as the physical properties and
their uncertainty, respectively.
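Schematically, the Monte Carlo procedure is as follows; `fit_func` stands in for the full SED fit of one photometric realisation and is a hypothetical interface:

```python
import numpy as np

def monte_carlo_property(flux, flux_err, fit_func, n_sim=1000, seed=0):
    """Median and 68% confidence interval of a derived property under
    Gaussian photometric errors. fit_func(flux) -> scalar property."""
    rng = np.random.default_rng(seed)
    # Perturb the photometry and refit for each realisation.
    results = np.array([fit_func(flux + rng.normal(0.0, flux_err))
                        for _ in range(n_sim)])
    lo, med, hi = np.percentile(results, [16, 50, 84])
    return med, (lo, hi)
```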
\begin{figure}
\includegraphics[width=\columnwidth]{fig2.eps}
\caption{ Examples of the SED fitting for galaxies at $z\sim0.8$.
Red circles show the observed fluxes, and vertical and horizontal
error bars represent flux errors and FWHMs of the bands, respectively.
The thick solid line shows the best-fit model SED, while thin lines
represent contributions from stars formed in the different periods.
The colours of these thin lines are the same as Figure \ref{fig:temp}.
The estimated SFHs are shown in the insets, where solid and dashed lines
represent the best-fit SFR and its 68\% confidence interval, respectively.
These three examples show different types of SFHs, namely,
SFG, PSB, and QG (see text for selection criteria).
\label{fig:sedexam}}
\end{figure}
In Figure \ref{fig:sedexam}, we show examples of the SED fitting for
galaxies at $z\sim0.8$ classified into different types of SFHs
in the next section. The contributions from stars formed in
the different periods in
the best-fit models and estimated SFHs are shown in the figure.
\subsection{Sample selection} \label{sec:sample}
\subsubsection{Post-Starburst Galaxies}
We used the physical properties estimated from
the SED fitting in the previous subsection
to select galaxies that experienced active star formation followed by rapid
quenching several hundred Myr before observation.
We set selection criteria with SSFRs in the three youngest periods of
look-back time, namely, SSFR$_{\rm 0-40Myr}$, SSFR$_{\rm 40-321Myr}$, and
SSFR$_{\rm 321-1000Myr}$.
Note that we define these SSFRs as SFRs in these periods divided by
stellar mass at the observed epoch (e.g.,
SSFR$_{\rm 0-40Myr} = $ SFR$_{\rm 0-40Myr}/M_{\rm star, 0}$) to easily compare the SSFRs
among the different periods.
Thus the SSFRs used in this study do not coincide with the exact values of
SFR$/M_{\rm star}$ in these periods.
Our selection criteria are
\begin{equation}
\begin{split}
&{\rm SSFR}_{\rm 321-1000Myr} > 10^{-9.5} \hspace{1mm} {\rm yr}^{-1} \hspace{1mm} \&\\
&{\rm SSFR}_{\rm 40-321Myr} < 10^{-10.5} \hspace{1mm} {\rm yr}^{-1} \hspace{1mm} \&\\
&{\rm SSFR}_{\rm 0-40Myr} < 10^{-10.5} \hspace{1mm} {\rm yr}^{-1}.
\end{split}
\label{eq:psbsel}
\end{equation}
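Applied to arrays of the estimated SSFRs, the criteria of Equation (\ref{eq:psbsel}) read as follows (function name hypothetical):

```python
import numpy as np

def select_psb(ssfr_0_40, ssfr_40_321, ssfr_321_1000):
    """Boolean PSB mask from the three youngest-period SSFRs (yr^-1)."""
    return ((ssfr_321_1000 > 10.0 ** -9.5)
            & (ssfr_40_321 < 10.0 ** -10.5)
            & (ssfr_0_40 < 10.0 ** -10.5))
```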
This selection utilises the characteristic SED shape of stars formed
321--1000 Myr before observation, namely, a strong Balmer break,
a steep and red FUV continuum, a relatively flat NUV continuum, and a relatively
blue continuum at wavelengths longer than the break (Figure \ref{fig:temp}).
We used those galaxies at $0.7<z<0.9$ in this study, because
the intermediate bands densely sample the rest-frame NUV to $B$ band
for these redshifts and enable us to distinguish the characteristic
SED of these stars from SEDs of stars formed in the other periods.
Furthermore, the {\it HST}/ACS $I_{\rm F814W}$ band corresponds to
the rest-frame $B$ band at $z\sim0.8$. Since many morphological studies
of galaxies have been done in the rest-frame $B$ band, we can easily
compare our morphological results with the previous studies.
Since distributions of SSFR in the periods of 0--40 Myr, 40--321 Myr,
and 321--1000 Myr for galaxies with $i<24$ at $0.7<z<0.9$
show a peak at $\sim 10^{-9.5}$ yr$^{-1}$,
our selection picks up those galaxies whose SSFRs were comparable to or
higher than the main sequence of SFGs in 321--1000 Myr
before observation and then decreased at least by an order of magnitude
in the last 321 Myr.
We excluded galaxies with the reduced minimum $\chi^{2} > 5$ from our analysis,
because the estimated SSFRs are unreliable for those objects.
We also discarded objects affected by nearby bright sources
in the {\it HST}/ACS
$I_{\rm F814W}$-band images, and confirmed that none of the remaining galaxies
in our sample is saturated in the $I_{\rm F814W}$-band images.
Finally, there are 17459 galaxies with $i<24$ and reduced $\chi^{2} <5$
at $0.7<z<0.9$ in the {\it HST}/ACS $I_{\rm F814W}$-band data,
and we selected 94 PSBs.
\begin{figure}
\includegraphics[width=\columnwidth]{fig3.eps}
\caption{
EW([OII]) as a function of SSFR$_{\rm 0-40Myr}$ (top) and
SSFR$_{\rm 40-321Myr}$ (bottom) for our sample galaxies with the spectroscopic
measurements by LEGA-C \citep{van21}.
Open circles show all the galaxies with $i<24$ at $0.7<z<0.9$, and solid
circles represent those with SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$
in the top panel and those with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$
in the bottom panel.
Those with SSFR $ < 10^{-13}$ yr$^{-1}$ (SSFR $= 0$
for most cases) are plotted at $10^{-13}$ yr$^{-1}$.
The solid and dashed-dotted lines show the median
values of EW([OII]) in SSFR bins with a width of $\pm$ 0.25 dex
for all the galaxies and those with
SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$ (top panel) or
SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ (bottom panel), respectively.
The short-dashed line shows the fraction of those with
EW([OII]) $> $ 5\AA.
The vertical long-dashed line shows the boundary of SSFR $ = 10^{-10.5}$
yr$^{-1}$, and selected PSBs have lower SSFRs than this value.
\label{fig:OII}}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{fig4.eps}
\caption{
H$\delta_{\rm A}$ as a function of SSFR$_{\rm 321-1000Myr}$
for the sample galaxies
with the spectroscopic measurements by LEGA-C.
Open circles show all the galaxies with $i<24$ at $0.7<z<0.9$, and
solid circles represent those with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$
and SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$.
Those with SSFR$_{\rm 321-1000Myr} < 10^{-13}$ yr$^{-1}$ are plotted at
$10^{-13}$ yr$^{-1}$.
The solid and dashed-dotted lines show the median values of H$\delta_{\rm A}$
in SSFR$_{\rm 321-1000Myr}$ bins with a width of $\pm$ 0.25 dex for
all the galaxies and those with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$
and SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$.
The short-dashed line shows the fraction of those with
H$\delta_{\rm A} > $ 5\AA.
The vertical long-dashed line shows the boundary of
SSFR$_{\rm 321-1000Myr} = 10^{-9.5}$ yr$^{-1}$, and selected PSBs have higher
SSFR$_{\rm 321-1000Myr}$ than this value.
\label{fig:Hd}}
\end{figure}
In order to check the relation between our selection method
and the spectroscopic
selection for PSBs used in previous studies,
we compared the estimated SSFRs in the youngest three periods
with spectroscopic
indices of [O{\footnotesize II}] emission and H$\delta$ absorption lines
from the LEGA-C survey \citep{van21}.
For this purpose, we searched for galaxies with $i<24$ and the
reduced $\chi^{2} < 5$ at $0.7<z<0.9$
in the LEGA-C catalogue, and found 1265 matched objects.
We used EW([O{\footnotesize II}]) and H$\delta_{\rm A}$ indices from
the LEGA-C catalogue. The H$\delta_{\rm A}$ index is corrected for contribution
from the emission line \citep{van21}.
The upper panel of Figure \ref{fig:OII} shows
EW([O{\footnotesize II}]) as a function of SSFR$_{\rm 0-40Myr}$.
One can see that EW([O{\footnotesize II}]) tends to increase with
increasing SSFR$_{\rm 0-40Myr}$, while there is a relatively large scatter at
a given SSFR$_{\rm 0-40Myr}$.
At SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$, the median EW([O{\footnotesize II}])
becomes $\lesssim 5$ \AA, which is often used as one of the spectroscopic
criteria for PSBs, and the fraction of those with
EW([O{\footnotesize II}]) $ > 5$ \AA\ is less than $\sim 0.5$.
Solid circles in the panel show those with
SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$, and therefore
those solid circles at SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$
represent objects whose both SSFR$_{\rm 0-40Myr}$ and SSFR$_{\rm 40-321Myr}$
are lower than $10^{-10.5}$ yr$^{-1}$.
Their median EW([O{\footnotesize II}]) is less than $\sim 4$ \AA\
at SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$.
The bottom panel of Figure \ref{fig:OII} shows
the relation between SSFR$_{\rm 40-321Myr}$ and EW([O{\footnotesize II}]).
A similar trend to that in the upper panel can be seen, although
the median EW([O{\footnotesize II}]) at
SSFR$_{\rm 40-321Myr} \sim 10^{-10.5}$ yr$^{-1}$
is slightly higher ($\sim 7$ \AA).
The median EW([O{\footnotesize II}]) for those galaxies with both
SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ and SSFR$_{\rm 40-321Myr} < 10^{-10.5}$
yr$^{-1}$ is only $\sim 1.3$ \AA, and the fraction of those with
EW([O{\footnotesize II}]) $ > 5$ \AA\ is $\sim 0.24$.
Thus the criteria of low SSFR$_{\rm 0-40Myr}$ and SSFR$_{\rm 40-321Myr}$ can
select those galaxies with low EW([O{\footnotesize II}]), although
some fraction of galaxies with EW([O{\footnotesize II}]) $ > 5$ \AA\ can
be included in our sample.
The relatively high EW([O{\footnotesize II}]) seen in some of those galaxies
could be caused by shocks or weak AGN activity rather than star formation,
as some previous studies reported that a fraction of
``H$\delta$ strong'' galaxies with rapidly declining SFHs
show detectable [O{\footnotesize II}] emission lines
(e.g., \citealp{yan06}; \citealp{paw18}).
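The binned medians plotted in Figures \ref{fig:OII} and \ref{fig:Hd} can be obtained with a helper like this (hypothetical name and interface):

```python
import numpy as np

def binned_median(log_ssfr, index, centres, half_width=0.25):
    """Median of a spectral index in bins of log10(SSFR) of
    +/- half_width dex around each centre; NaN where a bin is empty."""
    med = np.full(len(centres), np.nan)
    for i, c in enumerate(centres):
        sel = np.abs(log_ssfr - c) <= half_width
        if sel.any():
            med[i] = np.median(index[sel])
    return med
```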
\begin{figure*}
\includegraphics[width=2.0\columnwidth]{fig5.eps}
\caption{
{\bf left:} The uncertainty of SSFR$_{\rm 0-40Myr}$ estimated in the SED
fitting for all the sample galaxies with $i<24$ at $0.7<z<0.9$.
Half widths of the 68\% confidence interval of SSFR$_{\rm 0-40Myr}$
are shown as a function of SSFR$_{\rm 0-40Myr}$ itself.
Those with SSFR$_{\rm 0-40Myr} < 10^{-13}$ yr$^{-1}$ are plotted at
$10^{-13}$ yr$^{-1}$.
The dashed line shows the median values in SSFR$_{\rm 0-40Myr}$ bins with
a width of $\pm$ 0.125 dex.
{\bf middle:} The same as the left panel but for SSFR$_{\rm 40-321Myr}$.
While magenta dots show all the galaxies with $i<24$ at $0.7<z<0.9$,
red open circles represent those with
SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$.
The dashed line shows the median values in SSFR$_{\rm 40-321Myr}$ bins for
those with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$.
{\bf right:} The same as the left panel but for SSFR$_{\rm 321-1000Myr}$.
Magenta dots show all the sample galaxies, and red open circles represent
those with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ and
SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$.
The dashed line shows the median values in SSFR$_{\rm 321-1000Myr}$ bins for
those with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ and
SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$.
\label{fig:ssfrerr}}
\end{figure*}
Figure \ref{fig:Hd} shows H$\delta_{\rm A}$ as a function of
SSFR$_{\rm 321-1000Myr}$.
The H$\delta_{\rm A}$ index increases with increasing SSFR$_{\rm 321-1000Myr}$
at SSFR$_{\rm 321-1000Myr} \gtrsim 10^{-10}$ yr$^{-1}$.
The median H$\delta_{\rm A}$ is $\sim 1$ \AA\ at
SSFR$_{\rm 321-1000Myr}\sim 10^{-10}$ yr$^{-1}$ and increases to $\sim 7$\AA\
around SSFR$_{\rm 321-1000Myr} \sim 10^{-9}$ yr$^{-1}$.
A relatively high median H$\delta_{\rm A}$ value at
SSFR$_{\rm 321-1000Myr} = 10^{-13}$ yr$^{-1}$
(SSFR$_{\rm 321-1000Myr} = 0$ for most cases) is caused by those galaxies
with high SSFR$_{\rm 0-40Myr}$ and/or SSFR$_{\rm 40-321Myr}$.
When we limit to those galaxies with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$
and SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$ (solid circles in the figure),
the median H$\delta_{\rm A}$ is
$\sim 1$ \AA\ at SSFR$_{\rm 321-1000Myr} \lesssim 10^{-10.5}$ yr$^{-1}$, and
the rapid increase at SSFR$_{\rm 321-1000Myr} > 10^{-10}$ yr$^{-1}$ remains.
Those solid circles at SSFR$_{\rm 321-1000Myr} > 10^{-9.5}$ yr$^{-1}$ correspond
to selected PSBs.
For PSBs, the median H$\delta_{\rm A}$ is $\sim 6$ \AA, and
14 out of these 19 galaxies have H$\delta_{\rm A} > 5$ \AA,
which is often used as a criterion for PSB or ``H$\delta$ strong'' galaxies.
Therefore, we expect that many of our PSBs satisfy
the spectroscopic selection used in the previous studies, namely,
relatively high H$\delta_{\rm A}$ and low EW([O{\footnotesize II}]),
while our selection picks up those galaxies with quenching
several hundred Myr before observation and probably misses those with
more recent quenching within several tens to a hundred Myr.
In Figure \ref{fig:ssfrerr}, we show half widths of the 68\% confidence
intervals of SSFR$_{\rm 0-40Myr}$, SSFR$_{\rm 40-321Myr}$, and SSFR$_{\rm 321-1000Myr}$
as a function of SSFR itself.
In the middle panel, we separately plot the uncertainty of
SSFR$_{\rm 40-321Myr}$ for those galaxies with
SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ with red circles as well as
all galaxies (magenta dots), because the uncertainty tends to be larger
when a younger population dominates the SED and vice versa.
The uncertainty of SSFR$_{\rm 321-1000Myr}$ for
those with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ and
SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$ is similarly shown as red circles
in the right panel.
In the left panel, the median uncertainty of SSFR$_{\rm 0-40Myr}$
is less than a factor of 2 at
SSFR$_{\rm 0-40Myr} \gtrsim 10^{-10.5}$ yr$^{-1}$,
while the fractional error is larger at lower SSFRs.
While the uncertainty of SSFR$_{\rm 40-321Myr}$ tends to be slightly larger
than that of SSFR$_{\rm 0-40Myr}$ in the middle panel,
galaxies with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$
yr$^{-1}$ show a similar uncertainty at
SSFR$_{\rm 40-321Myr} \gtrsim 10^{-10}$ yr$^{-1}$, where the median value
is less than a factor of 2.
The uncertainty of SSFR$_{\rm 321-1000Myr}$ is larger than those of
SSFR$_{\rm 0-40Myr}$
and SSFR$_{\rm 40-321Myr}$ (magenta dots in the right panel),
which reflects the fact that the SED of an older
stellar population is more easily overwhelmed by those of younger
populations.
Nevertheless, we can constrain SSFR$_{\rm 321-1000Myr}$ within a factor
of 2 at SSFR$_{\rm 321-1000Myr} \gtrsim 10^{-10}$ yr$^{-1}$ for galaxies
with low SSFR$_{\rm 0-40Myr}$ and SSFR$_{\rm 40-321Myr}$ (red circles in the panel).
\begin{figure}
\includegraphics[width=\columnwidth]{fig6.eps}
\caption{
Average probability distributions of SSFR$_{\rm 0-40Myr}$ (dashed),
SSFR$_{\rm 40-321Myr}$ (dashed-dotted), and
SSFR$_{\rm 321-1000Myr}$ (solid) estimated in the SED fitting
for 94 PSBs in our sample.
The distributions for each PSB are calculated from the Monte Carlo
simulations (see text for details),
and those averaged over the 94 PSBs are shown.
The probabilities of SSFR $< 10^{-13}$ yr$^{-1}$ are counted in a bin
at $10^{-13}$ yr$^{-1}$.
\label{fig:pdf}}
\end{figure}
In Figure \ref{fig:pdf}, we show combined probability distributions of
SSFR$_{\rm 0-40Myr}$, SSFR$_{\rm 40-321Myr}$, and SSFR$_{\rm 321-1000Myr}$ for
94 PSBs to check whether these galaxies show significantly declining
SFHs in the recent past even when the uncertainties are considered.
We averaged the probability distributions of the SSFRs
calculated from the Monte Carlo simulations described
in the previous subsection
over these PSBs with equal weight.
One can see that the probability distribution of SSFR$_{\rm 321-1000Myr}$
is clearly higher than those of SSFR$_{\rm 0-40Myr}$ and SSFR$_{\rm 40-321Myr}$.
The probability of SSFR$_{\rm 321-1000Myr} > $ 2--3 $\times 10^{-10}$ yr$^{-1}$
is high,
while that of SSFR$_{\rm 40-321Myr} > 10^{-10}$ yr$^{-1}$ is very low
(SSFR$_{\rm 40-321Myr} = 0$ in most cases).
The distribution of SSFR$_{\rm 0-40Myr}$
has a peak around $2\times10^{-11}$ yr$^{-1}$, and
there is only a very small probability
of SSFR$_{\rm 0-40Myr} > 10^{-10}$ yr$^{-1}$.
Therefore, we expect that most of these PSBs
experienced high star formation activity
321--1000 Myr before observation and then
rapidly decreased their SFRs.
\subsubsection{Star-forming \& Quiescent Galaxies} \label{sec:comps}
\begin{figure}
\includegraphics[width=\columnwidth]{fig7.eps}
\caption{
Rest-frame $U-V$ vs. $V-J$ two-colour diagram for SFGs (cyan dots),
QGs (red open circles), and PSBs (green solid squares) in our sample.
The dashed line shows the colour criteria for QGs by \citet{will09}.
\label{fig:uvj}}
\end{figure}
We also constructed comparison samples of normal SFGs
and QGs in the same redshift range.
From those galaxies with $i<24$ and the reduced minimum $\chi^{2} < 5$ at
$0.7<z<0.9$, we selected those objects with
SSFR$_{\rm 0-40Myr} = 10^{-10}$--$10^{-9}$ yr$^{-1}$
and SSFR$_{\rm 40-321Myr} = 10^{-10}$--$10^{-9}$ yr$^{-1}$ as SFGs.
Since both distributions of SSFR$_{\rm 0-40Myr}$
and SSFR$_{\rm 40-321Myr}$ show a peak around $10^{-9.5}$ yr$^{-1}$,
these galaxies are on and around the main sequence at least in
the last $\sim 300$ Myr.
We did not use SSFR$_{\rm 321-1000Myr}$ in the criteria for SFGs,
because the uncertainty of SSFR$_{\rm 321-1000Myr}$ tends to be
large for those galaxies with relatively high SSFR$_{\rm 0-40Myr}$ and/or
SSFR$_{\rm 40-321Myr}$ (Figure \ref{fig:ssfrerr}).
For QGs, we selected galaxies with low SSFRs within
the last 1 Gyr, namely, SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ \&
SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$ \&
SSFR$_{\rm 321-1000Myr} < 10^{-10.5}$ yr$^{-1}$.
We did not use the SSFRs in periods older than 1 Gyr in the criteria,
because it is difficult to strongly constrain detailed SFHs older than
1 Gyr due to the degeneracy mentioned above (Figure \ref{fig:temp}).
On the other hand, those QGs with a high
SSFR$_{\rm 1-2Gyr}$ could be similar to PSBs in that
they experienced a starburst at a slightly earlier epoch than
PSBs, followed by rapid quenching.
Thus we check results for those galaxies in Section \ref{sec:casfh}.
Finally, we selected 6581 SFGs and 670 QGs
with $i<24$ and the reduced $\chi^{2} < 5$ at $0.7<z<0.9$.
Examples of the SED fitting for these galaxies are shown in Figure
\ref{fig:sedexam}.
In Figure \ref{fig:uvj}, we plot these SFGs, QGs, and
PSBs in the rest-frame $U-V$ vs. $V-J$ two-colour plane
in order to compare our classification with those in previous studies.
The rest-frame colours were estimated from the best-fit model templates
in the SED fitting.
The dashed line in the figure shows the criteria for QGs
by \citet{will09}.
One can see that most QGs and PSBs satisfy
the criteria, while some galaxies show redder $U-V$ and $V-J$ colours
due to relatively large dust extinction.
PSBs tend to have bluer colours than QGs,
which suggests younger stellar ages and more recent quenching of star formation.
If we use the rotated system of coordinates introduced by \citet{bel19} on
the two-colour diagram,
our PSBs show $S_{\rm Q} = $ 1.60--2.55, while QGs have $S_{\rm Q} = $ 2.05--2.65.
The range of $S_{\rm Q}$ for our PSBs is similar to that of the
spectroscopic sample at $z=$ 1.0--1.5 in \citet{bel19}.
Such distribution in the two-colour plane is consistent with those in
previous studies of PSBs
(e.g., \citealp{wil14}; \citealp{wil20}; \citealp{deu20};
\citealp{wu20}).
On the other hand, most SFGs lie outside of the selection
area in the plane, although a small fraction of those galaxies
enter the area near the boundary.
Thus our classification with the SED fitting is consistent
with those in the previous studies, while our PSB selection
could miss those with very recent quenching of star formation, for
example, within $\sim$ 100 Myr.
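For reference, a UVJ quiescent cut of the kind shown in Figure \ref{fig:uvj} might be coded as below; the numerical constants are the commonly quoted \citet{will09}-style values for $0.5<z<1.0$ and should be treated as assumptions of this sketch, not values taken from this work:

```python
import numpy as np

def uvj_quiescent(u_v, v_j):
    """Quiescent-galaxy selection in the rest-frame UVJ plane.
    Constants are assumed (Williams et al. 2009 style, 0.5 < z < 1.0)."""
    return (u_v > 1.3) & (v_j < 1.6) & (u_v > 0.88 * v_j + 0.59)
```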
\subsection{Morphological analysis} \label{sec:morp}
\subsubsection{Preparation}
In this study, we used three non-parametric morphological indices, namely,
concentration $C$, asymmetry $A$, and concentration of asymmetric features
$C_{A}$,
to investigate morphological properties of PSBs.
We measured these indices on the {\it HST}/ACS $I_{\rm F814W}$-band images, while
we defined pixels that belong to the object in the
Subaru/Suprime-Cam $i^{'}$-band data in order to keep consistency with
the object detection and SED analysis carried out with the ground-based data,
and to include discrete features/substructures such as knots, tidal tails,
and so on in the analysis.
We cut out a 12'' $\times$ 12'' region centred on the object coordinates of
the ACS and Suprime-Cam data for each galaxy.
At first, we ran SExtractor version 2.5.0 \citep{ber96} on the $i^{'}$-band
images by using the RMS maps \citep{cap07} to scale detection threshold.
The detection threshold of 0.6 times RMS values from the RMS map
over 25 connected pixels was used.
We aligned segmentation maps output by SExtractor
to the ACS $I_{\rm F814W}$-band images
with a smaller pixel scale, and used these maps to identify pixels
that belong to the object in the $I_{\rm F814W}$-band images.
While we basically used pixels identified by the segmentation map from
the $i^{'}$-band data, we added and excluded some pixels that belong to
objects/substructures detected in the $I_{\rm F814W}$-band images
across the boundary defined by the segmentation map.
For this adjustment, we also ran SExtractor on the $I_{\rm F814W}$-band images
with a detection threshold of 1.2 times local background root mean
square over 12
connected pixels. If more than half of the pixels of a source detected in
the $I_{\rm F814W}$-band data are included in the object region defined
by the $i^{'}$-band segmentation map, we included all pixels of this source
in the analysis, adding any of its pixels that lie outside the object region.
On the other hand, if less than half of the pixels of the source are included,
we masked and excluded all pixels of this source from the analysis
as another object.
We used the adjusted object region to measure the morphological indices
for each galaxy.
We estimated pixel-to-pixel background fluctuation in the 12'' $\times$ 12''
region of the $I_{\rm F814W}$-band data by masking the object region
and pixels that belong to the other objects with the segmentation map.
We then defined pixels higher than 1$\sigma$ value of the background
fluctuation in the object region as the object,
and masked the other pixels below the 1$\sigma$ value in the region.
\subsubsection{Concentration}
Following previous studies such as \citet{ken85} and \citet{ber00},
we measured the concentration index defined as $C = r_{80}/r_{20}$,
where $r_{80}$ and $r_{20}$ are the radii which contain 80\% and 20\% of the
total flux of the object, respectively.
The total flux was estimated as a sum of the
pixels higher than 1$\sigma$ value of the background fluctuation
in the object region.
Since the object region is defined by an extent of the object in
the $i^{'}$-band data, which is often wider than that
in the ACS $I_{\rm F814W}$-band
data, the contribution from background noise higher
than the 1$\sigma$ value is non-negligible.
In order to estimate this contribution, we made sky images by replacing
pixels in the object region with randomly selected ones that do not
belong to any objects in the 12'' $\times$ 12'' field.
All pixels outside the object region in the sky image were masked with
zero value.
We defined pixels higher than 2$\sigma$ in the object image as
object-dominated region, and masked pixels at the same coordinates with
this region in the sky image, because no biased contribution from noise
is expected for those pixels with high object fluxes.
We summed up pixels higher than 1$\sigma$ value in the sky image
except for the object-dominated region, and subtracted this from
the total flux of the object.
We then measured a growth curve with circular apertures centred at
a flux-weighted mean position of the object pixels
to estimate $r_{80}$ and $r_{20}$.
In the measurements of the growth curve, a similar subtraction of
the background contribution was performed with the same sky image.
We made 20 sky images for each object and repeated the measurements of $C$,
and adopted their mean and standard deviation as $C$ and its uncertainty,
respectively.
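A simplified growth-curve measurement of $C$, omitting the background-noise correction and sky realisations described above, could look like this:

```python
import numpy as np

def concentration(image, mask, cx, cy):
    """C = r80/r20 from a circular growth curve about (cx, cy).
    mask flags the object pixels; fluxes are assumed positive."""
    ys, xs = np.nonzero(mask)
    r = np.hypot(xs - cx, ys - cy)
    flux = image[ys, xs]
    order = np.argsort(r)
    # Cumulative flux fraction as a function of radius.
    growth = np.cumsum(flux[order]) / flux.sum()
    r20 = np.interp(0.2, growth, r[order])
    r80 = np.interp(0.8, growth, r[order])
    return r80 / r20
```

For a uniform disc the enclosed flux grows as $r^{2}$, so $C = \sqrt{0.8/0.2} = 2$, which provides a simple sanity check.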
\begin{figure*}
\includegraphics[width=2.1\columnwidth]{fig8.eps}
\caption{
Distributions of the morphological indices, $C$, $A$, and $C_{A}$
for SFGs (blue dashed), QGs (red dotted-dashed), and PSBs (green solid)
in our sample. The light-green solid histogram represents the
contribution from those PSBs with $\log{C_{A}} > 0.8$.
\label{fig:histc}}
\end{figure*}
\subsubsection{Asymmetry}
We used the Asymmetry index defined by previous studies
(\citealp{sch95}; \citealp{abr96}; \citealp{con00}) to
measure rotation asymmetry of our sample galaxies.
The asymmetry index $A$ was calculated by rotating the object image
by 180 degrees and subtracting it from the original image as
\begin{equation}
A = \frac{0.5 \Sigma |I_{\rm O} - I_{180}|}{\Sigma I_{\rm O}},
\label{eq:asy}
\end{equation}
where $I_{\rm O}$ and $I_{180}$ are pixel values in the original image and
the image rotated by 180 degrees, respectively.
We adopted the same definition of the object pixels as in the calculation
of $C$, namely, those higher than 1$\sigma$ value in the object region.
The denominator of Equation (\ref{eq:asy}) is the same as the total flux of
the object described above, and the same correction for the background
contribution was applied.
In the calculation of the numerator, we chose a centre of the rotation
so that a value of $A$ for the object is minimum, following \citet{con00}.
We searched such a centre with a step of 0.5 pixel, namely,
centre of each pixel or boundary between two pixels in X and Y axes
over the object region.
This step size is about six times smaller than the PSF FWHM of
$\sim$ 0.1 arcsec, and we can determine the rotation centre with
sufficiently high accuracy to avoid significant artificial residuals
in a central region due to errors of the centre.
After subtracting the rotated image from the original image,
we masked negative pixel values in the residual image with zero value
(hereafter rotation-subtracted image).
We then summed (positive) fluxes in the object region of
the rotation-subtracted image.
By using the same sky image for the object as used in the calculation of
total flux,
we also estimated the contribution from the background noise for the
rotation-subtracted image as follows.
At first, we retained only pixels higher than the 1$\sigma$ value in the sky
image and replaced the other pixels with zero.
Second, we rotated the masked sky image by 180 degrees and subtracted it from
that before rotation.
We then summed up only positive pixels in the residual sky image except for
those pixels in the object-dominated region, where pixels in the object
residual image are higher than 2$\sigma$ value and the contribution
from asymmetric features of the object dominates.
Finally, we subtracted the estimated background contribution from
the summed flux of the object residual image.
By using 20 random sky images as in the estimate of $C$,
we repeated the calculations described above and
adopted the mean and standard deviation as $A$ and its uncertainty,
respectively.
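The core of the asymmetry measurement can be sketched as follows. This is a minimal illustration, not the exact pipeline: the hypothetical \texttt{asymmetry} function rotates about the fixed array centre only, and omits both the half-pixel centre search and the sky-image background correction described above.

```python
import numpy as np

def asymmetry(image, sigma_sky):
    """Rotation asymmetry A = 0.5 * sum|I_O - I_180| / sum(I_O).

    Simplified sketch: rotates about the array centre only; the actual
    measurement minimises A over rotation centres on a 0.5-pixel grid
    and subtracts the background contribution estimated from sky images.
    """
    # keep only object pixels above the 1-sigma sky level
    obj = np.where(image > sigma_sky, image, 0.0)
    rot = np.rot90(obj, 2)  # 180-degree rotation
    return 0.5 * np.abs(obj - rot).sum() / obj.sum()
```

For a perfectly point-symmetric light distribution the numerator vanishes and $A = 0$, while a strongly one-sided feature drives $A$ towards 1.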
\subsubsection{Concentration of Asymmetric Features}
We devised a new morphological index that measures the central concentration
of asymmetric features of the object, $C_{A}$.
This is a combination of the asymmetry and concentration indices described
above. With $C_{A}$, we aim to detect asymmetric features such as
central disturbances or tidal tails, and distinguish them from those
caused by star-forming regions in steadily star-forming disks.
For example, normal star-forming disk galaxies, which usually show
a central bulge with little asymmetric feature and an extended disk
with star-forming regions at random positions, are expected to
have relatively low concentration of the asymmetric components in
their surface brightness distribution.
While tidal tails tend to occur in outer regions of galaxies and
lead to lower concentration, nuclear starbursts induced by gas inflow
to the centre may cause disturbed features in their central region and
enhance this index.
We used the same rotation-subtracted object
image as in the calculation of $A$.
We similarly estimated radii which contain 80\% and 20\% of total flux
of the rotation-subtracted image, namely, $r_{A,80}$ and $r_{A,20}$,
and calculated the index as $C_{A} = r_{A,80}/r_{A,20}$.
To do this, we measured a growth curve on the rotation-subtracted image
with circular apertures centred at
the rotation centre, for which the minimum value of $A$ was obtained.
The total flux of the residual image was the same as in the calculation of $A$,
where the contribution from the background noise was estimated and subtracted.
In measurements of $r_{A,80}$ and $r_{A,20}$, we similarly corrected for
the contribution from the background noise
with the same rotation-subtracted sky image.
We also repeated the calculations with 20 random sky images and
adopted the mean and standard deviation as $C_{A}$ and its uncertainty,
respectively.
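The growth-curve measurement of $r_{A,80}$ and $r_{A,20}$ on the rotation-subtracted image can be sketched as below. This is a simplified version under stated assumptions: pixels are assigned whole to apertures by their centre distance, the background correction described above is omitted, and the function name is ours.

```python
import numpy as np

def concentration_of_asymmetry(residual, xc, yc):
    """C_A = r_A,80 / r_A,20 from a circular-aperture growth curve
    centred on the rotation centre (xc, yc) of the rotation-subtracted
    image. Background correction is omitted in this sketch."""
    yy, xx = np.indices(residual.shape)
    r = np.hypot(xx - xc, yy - yc)           # pixel distances to the centre
    order = np.argsort(r, axis=None)
    growth = np.cumsum(residual.ravel()[order]) / residual.sum()
    radii = r.ravel()[order]
    r20 = radii[np.searchsorted(growth, 0.2)]  # radius enclosing 20% of flux
    r80 = radii[np.searchsorted(growth, 0.8)]  # radius enclosing 80% of flux
    return r80 / r20
```

A centrally peaked residual yields a high $C_{A}$, while residual flux spread over the outskirts (e.g. tidal tails) pushes $r_{A,20}$ outwards and lowers the index.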
\section{Results} \label{sec:res}
\begin{figure*}
\includegraphics[width=2.1\columnwidth]{fig9.eps}
\caption{
Scatter plots of the morphological indices, $C$, $A$, and $C_{A}$
for SFGs (cyan dot), QGs (red open circle), and PSBs (green solid square)
in our sample.
The error bars for PSBs show the uncertainty of the indices calculated in
Section \ref{sec:morp}, while those for SFGs and QGs are omitted for
clarity.
\label{fig:cacac}}
\end{figure*}
Figure \ref{fig:histc} shows distributions of $C$, $A$, and $C_{A}$
for PSBs with $i<24$ at $0.7<z<0.9$
in the COSMOS field.
For comparison, we also show SFGs and QGs described in
Section \ref{sec:comps}.
Most PSBs have $C\sim$ 3--7 and the $C$ distribution of PSBs is
similar to that of QGs, while the fraction of PSBs with $C \lesssim$ 4
is slightly higher than that of QGs. PSBs show clearly higher $C$ values
than SFGs, most of which have $C\sim$ 2.5--4.5.
The $A$ distribution of PSBs shows a peak around $A\sim 0.09$ and most
PSBs have $A\sim$ 0.06--0.3. QGs show a similar range of $A$ and
a peak at a slightly higher value of
$A \sim$ 0.12. SFGs have a higher distribution of $A$ with a peak around
$A \sim$ 0.15, and the fraction of SFGs with $A<0.1$ is small.
The $C$ and $A$ indices of PSBs are more similar to those of QGs than
to those of SFGs, while PSBs show slightly lower values for both indices
than QGs.
\begin{table*}
\centering
\caption{Median values of the morphological indices for the samples with the different SFHs \label{tab:cacac}}
\begin{tabular}{ccccc}
\hline
sample & $C$ & $A$ & $C_{A}$ & fraction of $\log{C_{A}}>0.8$\\
\hline
PSBs & $4.884\pm0.044$ & $0.1173\pm0.0032$ & $4.755\pm0.173$ & $0.362\pm0.019$\\
QGs & $5.047\pm0.016$ & $0.1228\pm0.0007$ & $3.497\pm0.042$ & $0.158\pm0.007$\\
SFGs & $3.396\pm0.003$ & $0.1681\pm0.0004$ & $2.723\pm0.007$ & $0.021\pm0.001$\\
\hline
\end{tabular}
\end{table*}
\begin{figure*}
\includegraphics[width=2.1\columnwidth]{fig10.eps}
\caption{Examples of {\it HST}/ACS $I_{\rm F814W}$-band images for PSBs.
Each panel is 3'' $\times$ 3'' in size. Two images are
shown for each PSB: the upper panel shows the object image, and the lower
panel shows the rotation-subtracted image. The four sets of two rows
represent different $C_{A}$ values, which decrease from the top to the
bottom set. In each set, galaxies are randomly selected from those with
the corresponding $C_{A}$ values, and are sorted in order of
increasing $A$.
\label{fig:mon}}
\end{figure*}
In contrast to these indices, PSBs clearly tend to show higher $C_{A}$
values than QGs.
While QGs show a peak around $C_{A} \sim 2.5$ and
a small fraction of those with $C_{A} \gtrsim 6$,
PSBs show a peak around $C_{A} \sim 4$ and more than a third of PSBs
have $C_{A} \gtrsim 6$.
SFGs have lower $C_{A}$ than QGs and most of them show $C_{A} < 6$.
We summarise median values of the morphological indices and the fraction
of those with high $C_{A}$ for PSBs, QGs, and SFGs in Table \ref{tab:cacac}.
The errors in Table \ref{tab:cacac} are estimated with Monte Carlo
simulations. In the simulations, we added random shifts
based on the estimated errors (Section \ref{sec:morp})
to the morphological indices of each galaxy in our sample,
and calculated the median values of the
indices and the fraction of those with high $C_{A}$.
We repeated 1000 such simulations and adopted the standard deviations
of these values as the errors.
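The Monte Carlo error estimate for the medians can be sketched as follows; the \texttt{median\_error} helper is hypothetical, and we assume Gaussian scatter at the level of each galaxy's measurement uncertainty.

```python
import numpy as np

def median_error(indices, errors, n_sim=1000, seed=0):
    """Std of the sample median over n_sim realisations in which each
    galaxy's index is perturbed by its Gaussian measurement error."""
    rng = np.random.default_rng(seed)
    indices = np.asarray(indices, dtype=float)
    errors = np.asarray(errors, dtype=float)
    # one row per simulation: each galaxy shifted by its own error
    shifts = rng.normal(size=(n_sim, indices.size)) * errors
    return np.median(indices + shifts, axis=1).std()
```

The same loop, with the weighted fraction of $\log{C_{A}}>0.8$ galaxies in place of the median, gives the errors on the quoted fractions.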
The median value of $C_{A} \sim 4.8$ for PSBs is significantly higher than
those of QGs and SFGs (3.5 and 2.7).
The fraction of PSBs with $\log{C_{A}} > 0.8$ is 36\%, while
those for QGs and SFGs are 16\% and 2\%, respectively.
A significant fraction of PSBs thus show high concentration of their
asymmetric features.
We also performed a Kolmogorov-Smirnov test to statistically examine
the differences between PSBs and QGs.
The probability that the $C_{A}$ distributions of PSBs and QGs are drawn from
the same probability distribution is only 0.003\%, while the probabilities
for $C$ and $A$ are 14.6\% and 15.8\%, respectively.
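A minimal numpy sketch of the two-sample Kolmogorov-Smirnov statistic is given below; it computes only the distance $D$ between the empirical CDFs, while the quoted probabilities are p-values, which additionally require e.g. \texttt{scipy.stats.ks\_2samp}.

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic D: the maximum distance
    between the empirical CDFs of two samples (e.g. the C_A values of
    PSBs and QGs)."""
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    grid = np.concatenate([x, y])          # evaluate both CDFs here
    cdf_x = np.searchsorted(x, grid, side='right') / x.size
    cdf_y = np.searchsorted(y, grid, side='right') / y.size
    return np.abs(cdf_x - cdf_y).max()
```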
In Figure \ref{fig:cacac}, we show relationships among the morphological
indices for PSBs, QGs, and SFGs.
In the $C$ vs. $A$ panel, most PSBs show relatively low $A$ and high $C$,
and their distribution is similar to that of QGs. There is also a
small fraction of PSBs at relatively high $A$ and low $C$,
where most SFGs are located.
In the $C_{A}$ vs. $A$ panel, $A$ tends to decrease with increasing $C_{A}$
for all the samples, while there is some scatter.
Most PSBs with a high value of $C_{A} \gtrsim 6$
show $A \sim $ 0.07--0.15, while
there are a few PSBs with $C_{A} \gtrsim 6$ and $A > 0.15$.
The $A$ distribution of those with $\log{C_{A}} >0.8$ is skewed to
lower values than that of all PSBs (Figure \ref{fig:histc}).
Those QGs with relatively high $C_{A}$ values show similarly low values of $A$.
In the $C_{A}$ vs. $C$ panel, the $C_{A}$ values of
galaxies with $C < 0.35$ are low, and the scatter of $C_{A}$ tends to
increase with increasing $C$.
PSBs with a high value of $C_{A} \gtrsim 6$ have $C \sim $ 3--8,
and their distribution of $C$ is not significantly different from
that of all PSBs.
Even if we limit the sample to those with a very high value of $C_{A}>10$,
they have $C \sim $ 3.5--7, which is similar to that of QGs and of
all PSBs.
Thus PSBs with high $C_{A}$ values tend to show $C$ values similar to,
and $A$ values slightly lower than, those of all PSBs and QGs.
\begin{figure*}
\includegraphics[width=2.1\columnwidth]{fig11.eps}
\caption{
Distributions of stellar mass, $r_{80}$,
mean surface stellar mass density within $r_{80}$ ($\Sigma_{\rm star,80}$),
and that within $r_{20}$ ($\Sigma_{\rm star,20}$) for SFGs, QGs, and PSBs
in our sample. The symbols are the same as Figure \ref{fig:histc}.
Note that the mean surface stellar mass densities are calculated under
the assumption of a constant stellar M/L ratio over the entire galaxy.
\label{fig:msrsig}}
\end{figure*}
In Figure \ref{fig:mon}, we present examples of the ACS $I_{\rm F814W}$-band
object images and the rotation-subtracted residual ones for PSBs with
different $C_{A}$ values.
Many PSBs show early-type morphologies with a significant
bulge, while there are a few PSBs with low surface brightness and/or irregular
morphologies.
One can see that PSBs with high $C_{A}$ values show significant residuals
near their centre in the rotation-subtracted images.
While noise in the object image could cause some residuals, especially
in the central region where the surface brightness tends to be high,
the residuals show extended structures
near the centre rather than pixel-to-pixel fluctuations.
These asymmetric features seem to reflect physical properties in the central
region.
Those PSBs with high $C_{A}$ values
tend to show relatively high surface brightness in the original images
and do not show large and bright asymmetric features such as tidal
tails in their outskirts.
On the other hand, those with low $C_{A}$ values have more extended
asymmetric features in the rotation-subtracted images.
Some of them show lower surface brightness and/or
irregular morphologies in the object images.
\begin{figure*}
\includegraphics[width=2.1\columnwidth]{fig12.eps}
\caption{
The same as Figure \ref{fig:histc} but for PSBs and control samples
of QGs. While the solid line shows PSBs, the dashed, dotted, and
dashed-dotted
lines represent $M_{\rm star}$, $r_{80}$, and $\Sigma_{20}$-matched samples
of QGs, respectively.
\label{fig:simqg}}
\end{figure*}
In Figure \ref{fig:msrsig}, we show $M_{\rm star, 0}$, $r_{80}$, and mean surface
stellar mass densities within $r_{80}$ and $r_{20}$
for all PSBs and those with $\log{C_{A}} > 0.8$.
Note that we assumed that the surface stellar mass density has the same
radial profile as the $I_{\rm F814W}$-band surface brightness, and calculated
the mean surface stellar mass densities as
$\Sigma_{80(20)} = 0.8(0.2)\times M_{\rm star, 0} / (\pi r_{80(20)}^{2})$.
If a galaxy has a centrally concentrated young stellar population,
which is often seen in PSBs as discussed in the next section,
we could overestimate
the surface stellar mass density in the inner region, because the contribution
from the young population decreases the
stellar mass-to-light ratio in the central region.
While both all PSBs and those with high $C_{A}$ show stellar mass
distributions similar to or shifted slightly lower than that of QGs, with
a significant tail over $10^{9}$--$10^{10} M_{\odot}$,
PSBs tend to have smaller sizes ($r_{80}$) than QGs.
The fraction of relatively small galaxies with
$\log{(r_{80}/{\rm kpc})} <0.5$ in all PSBs (those with $\log{C_{A}} > 0.8$)
is 42\% (53\%), which is larger than that of QGs (25\%).
As a result, PSBs with $\log{C_{A}} > 0.8$ show higher $\Sigma_{\rm star, 80}$
values than QGs, while those of the other PSBs are similar to or slightly
lower than those of QGs.
The higher surface stellar mass density of those PSBs with $\log{C_{A}} > 0.8$ than
QGs is more clearly seen in the inner region, i.e., $\Sigma_{\rm star, 20}$,
although we could overestimate these values as mentioned above.
In order to examine whether the morphological properties of PSBs differ
from those of QGs when galaxies with similar
physical properties such as stellar mass, size, and surface mass density
are compared, we constructed control samples of QGs.
By using the distributions of stellar mass in Figure \ref{fig:msrsig} for
PSBs and QGs, we calculated the ratio of fractions, $f_{\rm PSB}/f_{\rm QG}$,
in each stellar mass bin. We then used the ratio in each mass bin as
a weight
to estimate distributions of the morphological indices for
a sample of QGs that effectively has the same stellar mass distribution
as PSBs.
We similarly estimated distributions of the indices for $r_{80}$-matched
and $\Sigma_{20}$-matched samples of QGs.
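The reweighting used to build the control samples can be sketched as follows. The \texttt{matching\_weights} helper is hypothetical: each QG is weighted by $f_{\rm PSB}/f_{\rm QG}$ of its bin in the matching property, so that the weighted QG sample reproduces the PSB distribution of that property.

```python
import numpy as np

def matching_weights(x_qg, x_psb, bins):
    """Per-QG weights f_PSB / f_QG in bins of a matching property
    (e.g. stellar mass), so that the weighted QG sample reproduces
    the PSB distribution of that property."""
    f_psb, _ = np.histogram(x_psb, bins=bins, density=True)
    f_qg, _ = np.histogram(x_qg, bins=bins, density=True)
    # ratio of fractions per bin; zero weight where there are no QGs
    ratio = np.divide(f_psb, f_qg, out=np.zeros_like(f_psb),
                      where=f_qg > 0)
    idx = np.clip(np.digitize(x_qg, bins) - 1, 0, len(ratio) - 1)
    return ratio[idx]
```

Histograms of the morphological indices for the QGs, built with these weights, then serve as the matched-control distributions.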
Figure \ref{fig:simqg} shows comparisons of the morphological indices
between PSBs and the control samples of QGs.
While the $M_{\rm star}$-matched sample shows slightly higher $A$ values,
the control samples basically have distributions similar to
the original sample of QGs.
Even if we use the control samples of QGs, PSBs show significantly higher
$C_{A}$ values than QGs, while their $C$ and $A$ values are similar to
those of QGs.
\section{Discussion}
In this study, we fitted the photometric SEDs of galaxies with
$i<24$ from the COSMOS2020
catalogue with population synthesis models assuming
non-parametric, piece-wise constant SFHs, and selected
94 PSBs with a high SSFR$_{\rm 321-1000Myr}$ ($>10^{-9.5}$ yr$^{-1}$) and low SSFR$_{\rm 40-321Myr}$ and SSFR$_{\rm 0-40Myr}$ ($< 10^{-10.5}$ yr$^{-1}$) at $0.7<z<0.9$.
We measured the morphological indices, namely, $C$, $A$, and $C_{A}$ for
these PSBs on the ACS $I_{\rm F814W}$-band images and compared them with
SFGs whose SSFR$_{\rm 40-321Myr}$ and SSFR$_{\rm 0-40Myr}$ are on the main sequence
($10^{-10}$--$10^{-9}$ yr$^{-1}$) and QGs with low SSFR$_{\rm 321-1000Myr}$, SSFR$_{\rm 40-321Myr}$, and SSFR$_{\rm 0-40Myr}$ values ($< 10^{-10.5}$ yr$^{-1}$).
We found that PSBs show systematically higher $C_{A}$ than QGs and
SFGs, while their $C$ and $A$ are similar to those of QGs.
The fraction of PSBs with $\log{C_{A}} > 0.8$ is 36\%, which is significantly
higher than those of QGs and SFGs (16\% and 2\%).
We first examine the relation between $C_{A}$ and SFHs to confirm how
the $C_{A}$ values are related to physical properties,
and then discuss implications of the results for the evolution of
these galaxies.
\subsection{Relation between $C_{A}$ and SFHs} \label{sec:casfh}
\begin{figure}
\includegraphics[width=\columnwidth]{fig13.eps}
\caption{
$C_{A}$ as a function of SSFR$_{\rm 0-40Myr}$ (top),
SSFR$_{\rm 40-321Myr}$ (middle), and SSFR$_{\rm 321-1000Myr}$ (bottom)
for PSBs and those galaxies that satisfy two out of the three
SSFR criteria for the PSB selection (Equation (\ref{eq:psbsel})).
For example, those with SSFR$_{\rm 321-1000Myr} > 10^{-9.5}$ yr$^{-1}$
and SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$ are plotted in the top
panel, and
those objects at SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ are PSBs.
Those with SSFR $ < 10^{-13}$ yr$^{-1}$ are plotted at $10^{-13}$ yr$^{-1}$.
The solid line shows the median values of $C_{A}$ in SSFR bins with
a width of $\pm$ 0.125 dex, while
the dashed line represents the fraction of those with $\log{C_{A}} > 0.8$.
\label{fig:cass123}}
\end{figure}
We found in the previous section that PSBs show higher $C_{A}$ than QGs
and SFGs.
We here examine the relationship between $C_{A}$ and the SSFRs estimated
in the SED fitting.
The bottom panel of Figure \ref{fig:cass123} shows the $C_{A}$ distribution
of those galaxies with SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ and
SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$ as a function of SSFR$_{\rm 321-1000Myr}$.
Thus PSBs are located at SSFR$_{\rm 321-1000Myr} > 10^{-9.5}$ yr$^{-1}$
in this panel, while QGs are shown at SSFR$_{\rm 321-1000Myr} < 10^{-10.5}$ yr$^{-1}$.
The median value of $C_{A}$ and the fraction of those with $\log{C_{A}}>0.8$
clearly increase with increasing SSFR$_{\rm 321-1000Myr}$, especially at
SSFR$_{\rm 321-1000Myr} \gtrsim 10^{-10}$ yr$^{-1}$, while scatter at a given
SSFR$_{\rm 321-1000Myr}$ is relatively large.
This trend suggests that the higher $C_{A}$ values of PSBs are related to
their past star formation activity several hundred Myr before observation.
We note that those with SSFR$_{\rm 321-1000Myr} \gtrsim 10^{-9}$ yr$^{-1}$ have
slightly lower median values of $C_{A}$ and a lower fraction of objects with
high $C_{A}$. Those galaxies with very high SSFR$_{\rm 321-1000Myr}$ show
relatively high asymmetry of $A \sim $ 0.15--0.40, and their asymmetric
features are widely distributed over the entire galaxy,
which could reduce their $C_{A}$ values.
Since SSFR$_{\rm 321-1000Myr} \sim 10^{-9}$ yr$^{-1}$ means that
two thirds of the observed stellar mass was formed 321--1000 Myr
before observation (see the next subsection for details),
large morphological disturbances over the entire galaxy associated with
such high star formation activity may remain at the observed epoch.
\begin{figure*}
\includegraphics[width=2.1\columnwidth]{fig14.eps}
\caption{
The same as Figure \ref{fig:histc} but for all PSBs and those with
lower SSFR$_{\rm 0-40Myr}$ and SSFR$_{\rm 40-321Myr}$ values.
While the solid line shows all PSBs, the dashed-line (solid histogram)
represents contribution from those PSBs with
SSFR$_{\rm 0-40Myr} < 10^{-12}$ yr$^{-1}$
and SSFR$_{\rm 40-321Myr} < 10^{-12}$ yr$^{-1}$
(SSFR$_{\rm 0-40Myr} < 10^{-13}$ yr$^{-1}$
and SSFR$_{40-321Myr} < 10^{-13}$ yr$^{-1}$).
\label{fig:psblow12}}
\end{figure*}
The top panel of Figure \ref{fig:cass123} shows a similar relationship
between $C_{A}$ and SSFR$_{\rm 0-40Myr}$ for those galaxies with
SSFR$_{\rm 321-1000Myr} > 10^{-9.5}$ yr$^{-1}$ and
SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$.
The median value of $C_{A}$ and the fraction of those with $\log{C_{A}}>0.8$
decrease with increasing SSFR$_{\rm 0-40Myr}$ at
SSFR$_{\rm 0-40Myr} \gtrsim 10^{-11}$ yr$^{-1}$.
The relatively low values of $C_{A}$ for those galaxies
with SSFR$_{\rm 0-40Myr} = $ 10$^{-10}$--10$^{-9}$ yr$^{-1}$ on the main sequence
are probably caused by star-forming regions distributed over their disk,
while slightly higher $C_{A}$ values of those with SSFR$_{\rm 0-40Myr} > 10^{-9}$
yr$^{-1}$ may indicate some disturbances near the centre,
for example, by nuclear starburst.
The decrease of $C_{A}$ with increasing SSFR$_{\rm 0-40Myr}$ at
SSFR$_{\rm 0-40Myr} \lesssim 10^{-10}$ yr$^{-1}$ suggests that
the high $C_{A}$ values seen in PSBs are not closely related to residual
on-going star formation.
The middle panel of Figure \ref{fig:cass123} shows the SSFR$_{\rm 40-321Myr}$
dependence of $C_{A}$, although most of those galaxies with
SSFR$_{\rm 321-1000Myr} > 10^{-9.5}$ yr$^{-1}$ and
SSFR$_{\rm 0-40Myr} < 10^{-10.5}$ yr$^{-1}$ have SSFR$_{\rm 40-321Myr} = 0$ (plotted
at $10^{-13}$ yr$^{-1}$ in the figure).
One can see a similar trend in that those with
SSFR$_{\rm 40-321Myr} = $ 10$^{-10}$--10$^{-9}$ yr$^{-1}$
show relatively low $C_{A}$ values.
\begin{figure}
\includegraphics[width=\columnwidth]{fig15.eps}
\caption{
$C_{A}$ as a function of SSFR$_{\rm 1-2Gyr}$ for QGs, whose
SSFR$_{\rm 0-40Myr}$, SSFR$_{\rm 40-321Myr}$, and SSFR$_{\rm 321-1000Myr}$
are all less than $10^{-10.5}$ yr$^{-1}$.
The solid and dashed lines are the same as Figure \ref{fig:cass123}.
\label{fig:cass4}}
\end{figure}
In Figure \ref{fig:psblow12},
we compare the morphological indices of sub-samples of PSBs with
lower values of SSFR$_{\rm 0-40Myr}$ and SSFR$_{\rm 40-321Myr}$,
namely, less than 10$^{-12}$ yr$^{-1}$ and 10$^{-13}$ yr$^{-1}$,
with those of all PSBs.
Their distributions of the indices, including $C_{A}$, are similar to
those of all PSBs.
This also suggests that residual star formation activity
within the last $\sim 300$ Myr in our PSBs does not seem to affect
their morphological properties significantly.
In Figure \ref{fig:cass4},
we also examined the $C_{A}$ distribution for QGs, which have low SSFRs
within the last 1 Gyr, as a function of SSFR$_{\rm 1-2Gyr}$, because
16\% of these galaxies show $\log{C_{A}}>0.8$.
The median value of $C_{A}$ and the fraction of QGs with $\log{C_{A}}>0.8$
increase with increasing SSFR$_{\rm 1-2Gyr}$ at
SSFR$_{\rm 1-2Gyr} \gtrsim 10^{-9.5}$ yr$^{-1}$, in particular
at SSFR$_{\rm 1-2Gyr} > 10^{-9.0}$ yr$^{-1}$,
although the uncertainty of SSFR$_{\rm 1-2Gyr}$ is relatively large
as mentioned in Section \ref{sec:sample}.
Some of those QGs with a high SSFR$_{\rm 1-2Gyr}$ might have experienced
a strong starburst $\sim$ 1 Gyr before observation followed by
quenching of star formation, and may similarly show significant asymmetric
features near the centre, which lead to high $C_{A}$ values.
Since about 40\% of QGs with $\log{C_{A}}>0.8$ have
SSFR$_{\rm 1-2Gyr} > 10^{-9.5}$ yr$^{-1}$,
some fraction of the high $C_{A}$ values seen in QGs could also be related to
past strong star formation activity.
\subsection {Implications for Quenching of Star Formation in PSBs}
We selected galaxies that experienced high star formation activity
followed by rapid quenching several hundred Myr before observation,
by using the SED fitting with the UV to MIR photometric data including
the optical intermediate-bands data.
We set the selection criteria to pick up those galaxies
whose SFR decreased by an
order of magnitude between 321--1000 Myr and 40--321 Myr before
observation (Equation (\ref{eq:psbsel})).
Since we assumed a constant SFR in each period,
the criterion of SSFR$_{\rm 321-1000Myr} > 10^{-9.5}$ yr$^{-1}$ means that
$\frac{{\rm SFR}_{\rm 321-1000Myr}\times(1000-321)\times10^{6}}{M_{\rm star, 0}}>0.21$,
i.e., more than 20\% of the observed stellar mass had been
formed in the period of 321--1000 Myr before observation.
Thus PSBs in this study should have experienced a strong starburst or
rather high continuous star formation activity in that period.
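The arithmetic behind the 20 per cent figure is simply the threshold SSFR multiplied by the length of the period:

```python
# Mass fraction formed in the 321-1000 Myr bin for a constant SFR at
# the selection threshold SSFR_321-1000Myr = 10^-9.5 yr^-1:
ssfr_threshold = 10.0**-9.5        # yr^-1
duration_yr = (1000 - 321) * 1e6   # 679 Myr expressed in years
mass_fraction = ssfr_threshold * duration_yr
# mass_fraction is about 0.21, i.e. more than ~20% of the observed
# stellar mass must have formed in that period.
```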
Our selection could miss many of the ``H$\delta$ strong'' galaxies
selected by previous studies, because galaxies whose SFRs
declined on a sufficiently short timescale (e.g., less than a few hundred Myr)
could be H$\delta$ strong without such a strong starburst
(e.g., \citealp{leb06}; \citealp{paw18}).
In fact, if we limit our sample
to those with $M_{\rm star, 0} > 10^{10} M_{\odot}$,
which roughly corresponds to the limiting stellar mass for our magnitude
limit of $i<24$ (e.g., \citealp{sat19}),
the fraction and co-moving number density of PSBs become
$\sim$ 1\% (76/7266) and $3.5\times 10^{-5}$ Mpc$^{-3}$, respectively.
These values are smaller than those of spectroscopically selected
(H$\delta$ strong) PSBs at similar redshifts in previous studies
(\citealp{wild09}; \citealp{yan09}; \citealp{ver10}; \citealp{row18}),
although the fraction
and number density of such PSBs depend on stellar mass, environment,
and selection criteria.
Our PSBs seem to have burst strengths and
times elapsed after the burst (several hundred Myr)
similar to those of PSBs selected by photometric
SEDs (\citealp{wil16}; \citealp{wil20}).
While the number density of PSBs with $M_{\rm star, 0} > 10^{10} M_{\odot}$
in our sample is still lower than that reported in \citet{wil16},
those of massive PSBs with $M_{\rm star} \gtrsim 10^{10.8} M_{\odot}$ are similar.
The fraction of PSBs
with $M_{\rm star, 0} \lesssim 10^{10} M_{\odot}$ is small in our sample
(the left panel of Figure \ref{fig:msrsig}),
while \citet{wil16} found that the stellar mass function of those PSBs
at $z=$ 0.5--1.0 shows a steep low-mass slope.
Thus we could miss such relatively low-mass PSBs probably due to the magnitude
limit of $i<24$.
In Figures \ref{fig:histc} and \ref{fig:cacac}, PSBs show relatively high
$C$ and low $A$ values, which are similar to those of QGs.
The high concentration in the surface brightness of PSBs is consistent
with results by previous studies in the local universe
(e.g., \citealp{qui04}; \citealp{yam05}; \citealp{yan08}; \citealp{pra09}).
At intermediate redshifts, several studies also found that massive PSBs
with $M_{\rm star} \gtrsim 10^{10}M_{\odot}$ tend to have early-type morphology
with a high concentration, while there are many low-mass PSBs with
more disky morphology (\citealp{tra04}; \citealp{ver10}; \citealp{mal18}).
On the other hand, many previous studies reported that a significant fraction of
PSBs show asymmetric morphologies with tidal/disturbed features
(e.g., \citealp{zab96}; \citealp{bla04}; \citealp{tra04}; \citealp{yam05};
\citealp{yan08}; \citealp{pra09}; \citealp{won12}; \citealp{deu20}),
which seems to be
inconsistent with the low $A$ values of PSBs in our sample.
The discrepancy may be explained by differences in the time elapsed after
starburst (hereafter, burst age).
\citet{paw16} found that the fraction of local PSBs
with disturbed features decreases
with increasing burst age, and that their $A$ and shape
asymmetry $A_{S}$, which is expected to be more sensitive to faint
tidal/disturbed
features in the outskirts, become similar to
those of normal galaxies at $\sim 0.3$ Gyr after the burst.
\citet{saz21} reported a similar anti-correlation between $A$ and burst
age for CO-detected PSBs in the local universe.
Theoretical studies with numerical simulations
also suggest that such tidal/disturbed features in gas-rich major mergers
weaken with time and disappear within $\sim$ 0.1--0.5 Gyr after
coalescence (\citealp{lot08}; \citealp{lot10}; \citealp{sny15}; \citealp{paw18};
\citealp{nev19}).
Since our selection method picks up PSBs that experienced a
starburst several hundred Myr before observation
followed by quenching, their asymmetric features could have already
weakened or disappeared by the observed epoch.
In this study, we found that PSBs show higher $C_{A}$ values than QGs
and SFGs (Figures \ref{fig:histc} and \ref{fig:cacac}).
The high $C_{A}$ values are caused by the existence of significant
asymmetric features near the centre (Figure \ref{fig:mon}).
Such asymmetric features near the centre of PSBs in the local universe
have been reported by several studies.
\citet{yan08} measured $A$ of 21 PSBs with strong Balmer absorption lines
and weak/no [O{\footnotesize II}] emission with varying aperture sizes,
and found that the $A$ values of most PSBs increase with decreasing aperture
size, which is different from the behaviour of normal spiral galaxies.
\citet{yam05} studied colour maps of 22 PSBs with strong H$\delta$
absorption and no significant H$\alpha$ emission at $z<0.2$, and found that
some of them
show blue asymmetric/clumpy features near the centre in their colour maps.
\citet{pra09} also reported similar irregular colour structures in the
central region for two out of 10 PSBs at $z<0.2$.
\citet{saz21} analysed {\it HST}/WFC3 data of 26 CO-detected
PSBs at $z<0.2$ and
measured their morphological parameters such as
$A$, $A_{S}$, and residual flux fraction (RFF), which is the fraction of
residual flux after subtracting the best-fitting smooth S\'ersic profile.
They found that those with older burst ages tend to show
relatively low $A_{S}$ and high $A$ and RFF, which suggests that
internal disturbances could persist for a longer time than tidal features
in outer regions.
For PSBs with high $C_{A}$ values in our sample, typical $r_{A,20}$ values
are $\sim$ 0.08--0.13 arcsec, which corresponds to $\sim$ 0.6--1.0 kpc
for galaxies at $z\sim 0.8$.
Therefore, the inner asymmetric features of those galaxies
are located within $\sim 1$ kpc from the centre, and the spatial resolution
of $\lesssim $ 1 kpc seems to be required to reveal such high concentration
of the asymmetric features.
The existence of such asymmetric features near the centre
in $\sim$ 36\% of our PSBs suggests that disturbances in the
central region are closely related to the rapid quenching of star formation.
One possible scenario is that such disturbances near the centre are
associated with nuclear starbursts, which occur in galaxy mergers or
accretion events and lead to quenching through rapid gas consumption
and/or gas loss/heating by supernova explosions, AGN outflows, tidal forces,
and so on (e.g., \citealp{bek05}; \citealp{sny11}; \citealp{dav19}).
Several previous studies investigated radial gradients of Balmer
absorption lines and colours, and found that PSBs tend to show
stronger Balmer absorption lines and bluer colours in their inner regions
(\citealp{yam05}; \citealp{yan08}; \citealp{pra13};
\citealp{che19}; \citealp{deu20}).
These results suggest that stronger starbursts occurred
in the central region of these PSBs.
Numerical simulations of gas-rich galaxy mergers also predict that
strong nuclear starbursts lead to the strong Balmer absorption lines and
blue colours in the central region of PSBs
(\citealp{bek05}; \citealp{sny11}; \citealp{zhe20}).
In the nuclear starburst, stars are formed from kinematically disturbed
infalling gas, and the spatial distribution and kinematics of newly formed
stars in the central region could also have disturbed/asymmetric features.
Remaining dust in the central region after the burst could also cause
morphological disturbances in PSBs (e.g., \citealp{yan08}; \citealp{lot08};
\citealp{saz21}; \citealp{sme22}).
Since the fraction of PSBs with high $C_{A}$ in our sample increases with
increasing SSFR$_{\rm 321-1000Myr}$ (Figure \ref{fig:cass123}),
their high $C_{A}$ values could be caused by such a nuclear starburst
several hundred Myr before observation.
Those PSBs with $\log{C_{A}} > 0.8$ have best-fit
$A_{V} \lesssim$ 1.0 mag (median $A_{V} = 0.5$ mag) and
$[3.6] - [4.5] \sim $ 0, which are similar to or slightly lower and
bluer than those of the other PSBs.
While most of them are not heavily obscured by dust over
the entire galaxy,
some of those with high $C_{A}$ show dust-lane like features
along the major axis of their surface brightness distribution in
the rotation-subtracted images (Figure \ref{fig:mon}).
The remaining dust in the central region could cause the asymmetric
features in their morphology.
Since several studies suggest that molecular gas and dust masses
in PSBs at low redshifts
decrease by $\sim $ 1 dex in $\sim$ 500--600 Myr after starburst
(\citealp{row15}; \citealp{fre18}; \citealp{li19}), a significant
fraction of our PSBs, which are expected to have experienced a starburst
several hundred Myr before observation, could have remaining gas
and dust in their central region.
The relatively high $C_{A}$ values of PSBs in our sample may indicate that
disturbances in the stellar and/or dust distribution near the centre
tend to persist for a longer time after a (nuclear) starburst
than asymmetric features in outer regions such as tidal tails.
We found that PSBs, especially those with high $C_{A}$ values,
tend to have smaller sizes and higher surface stellar mass densities than
QGs (Figure \ref{fig:msrsig}).
Similarly small sizes at a given stellar mass for PSBs at low and
intermediate redshifts have been
reported by previous studies (\citealp{mal18}; \citealp{wu18};
\citealp{set22}; \citealp{che22}).
These results are consistent with the scenario in which a nuclear starburst
causes the asymmetric features
near the centre in those PSBs with high $C_{A}$.
The nuclear starburst is expected to increase the stellar mass density
in the central region, which leads to decreases in half-light
and half-mass radii of these galaxies (e.g., \citealp{wu20}).
If this is the case,
the half-light radii of those PSBs will
gradually increase and become similar to those of QGs, because
the flux contribution from the young population in the central region
is expected to decrease as time elapses.
Although the results in this study suggest a relationship between
nuclear starbursts and the quenching of star formation in those PSBs,
we cannot specify why their star formation rapidly declined and
has been suppressed for (at least) a few hundred Myr.
Further observations of these galaxies will allow us
to reveal the quenching process.
NIR and FIR imaging data with similarly high spatial resolution of
$\lesssim 1$ kpc scale taken with JWST and ALMA will enable us to
investigate
details of the asymmetric features near the centre of those PSBs and
their origins. The physical state of molecular gas over the entire galaxy
is also important to understand the quenching mechanism(s).
Observing stellar absorption lines and nebular emission lines
with deep optical (spatially resolved) spectroscopy will allow us to
study the detailed stellar populations and the excitation state of the
ionised gas. Although we excluded AGNs from the sample in this study
because of the difficulty of estimating non-parametric SFHs
with an AGN contribution in the SED fitting, it is interesting to
investigate the relationship between the disturbances in the central region
and AGN activity.
\section{Summary}
In order to investigate the morphological properties of PSBs,
we performed SED fitting
with UV--MIR photometric data, including the optical intermediate bands,
for objects with $i<24$ from the COSMOS2020 catalogue,
and selected 94 PSBs at $0.7<z<0.9$ that experienced
high star formation activity several hundred Myr before observation
(SSFR$_{\rm 321-1000Myr} > 10^{-9.5}$ yr$^{-1}$) followed by quenching
(SSFR$_{\rm 40-321Myr} < 10^{-10.5}$ yr$^{-1}$ and SSFR$_{\rm 0-40Myr} < 10^{-10.5}$
yr$^{-1}$).
We measured the morphological indices, namely, concentration
$C$, asymmetry $A$, and concentration of asymmetric features $C_{A}$,
on the {\it HST}/ACS $I_{\rm F814W}$-band images, and compared them with those
of QGs and SFGs.
Our main results are summarised as follows.
\begin{itemize}
\item PSBs show relatively high concentration (median
$C \sim 4.9$) and low asymmetry (median $A \sim 0.12$),
which are similar to those of QGs rather than SFGs.
Our selection method, which preferentially picks up objects with
relatively old burst ages of several hundred Myr, could lead to
the low asymmetry.
\item PSBs tend to show higher $C_{A}$ values (median $C_{A} \sim 4.8$)
than both QGs and SFGs (median $C_{A} \sim 3.5$ and 2.7, respectively).
The difference in $C_{A}$ between PSBs and QGs remains significant
even when we compare PSBs with $M_{\rm star}$-, $r_{80}$-, or $\Sigma_{20}$-matched
samples of QGs.
The fraction of galaxies with $\log{C_{A}} > 0.8$ in PSBs is $\sim $36\%,
which is much higher than those of QGs and SFGs (16\% and 2\%).
Those PSBs with high $C_{A}$ show remarkable asymmetric features
near the centre,
while they have relatively low overall asymmetry ($A \sim 0.1$).
\item The fraction of those PSBs with $\log{C_{A}} > 0.8$ increases
with increasing SSFR$_{\rm 321-1000Myr}$ and decreasing SSFR$_{\rm 0-40Myr}$,
which indicates that the asymmetric features near the centre are closely
related to the high star formation activity several hundred Myr
before observation rather than to residual on-going star formation.
\item Those PSBs with high $C_{A}$ tend to have higher surface
stellar mass densities, in particular in the central region (e.g.,
$\lesssim 1$ kpc), than both QGs and the other PSBs, while most of them
have similar stellar masses of
$1 \times 10^{10}$--$2 \times 10^{11} M_{\odot}$.
\end{itemize}
These results suggest that a significant fraction of PSBs experienced
a nuclear starburst in the recent past, and that the quenching of
star formation in these galaxies could be related to such active
star formation in the central region.
The high $C_{A}$ values of PSBs may indicate that disturbances near
the centre tend to persist for a longer time than those at the outskirts,
such as tidal tails. If this is the case, $C_{A}$ could be used as a
morphological sign of a past nuclear starburst in galaxies with relatively
old burst ages.
\section*{Acknowledgements}
We thank the anonymous referee for the valuable suggestions and comments.
This research is based in part on data collected at Subaru Telescope,
which is operated by the National Astronomical Observatory of Japan.
We are honoured and grateful for the opportunity of observing the
Universe from Maunakea, which has cultural, historical and natural
significance in Hawaii.
Based on data products from observations made with ESO Telescopes at the La Silla Paranal Observatory under programme IDs 194A.2005 and 1100.A-0949 (The LEGA-C Public Spectroscopy Survey). The LEGA-C project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 683184).
Data analyses were in part carried out on the common-use data analysis
computer system at the Astronomy Data Center (ADC) of the National
Astronomical Observatory of Japan.
\section*{Data Availability}
The COSMOS2020 catalogue is publicly available at
https://cosmos2020.calet.org/.
The COSMOS {\it HST}/ACS $I_{\rm F814W}$-band mosaic data version 2.0
are publicly available via NASA/IPAC Infrared Science Archive at
https://irsa.ipac.caltech.edu/data/COSMOS/images/acs\_mosaic\_2.0/.
The raw data for the ACS mosaic are available via
Mikulski Archive for Space Telescopes at
https://archive.stsci.edu/missions-and-data/hst.
The Subaru/Suprime-Cam $i{'}$-band mosaic reduced data
are also publicly available
at https://irsa.ipac.caltech.edu/data/COSMOS/images/subaru/.
The raw data for the Suprime-Cam mosaic are accessible through
Subaru Telescope Archive System at
https://stars.naoj.org.
The zCOSMOS spectroscopic redshift catalogue is publicly available via
ESO Science Archive Facility at
https://www.eso.org/qi/catalog/show/65.
The LEGA-C catalogue is also publicly available at
https://www.eso.org/qi/catalogQuery/index/379.
\section*{OLD VERSION}
\section*{Old Introduction}
Lastly, many widely used datasets, such as CIFAR10~\cite{cifar10} and ImageNet~\cite{imagenet}, have received semi-supervised versions that aim at advancing methods for few-shot learning. These methods are intended to bring deep learning closer to applications, because labeled data is usually a scarce resource~\cite{review_nlp}.
Arguably, the semi-supervised setting has received less attention for NLP tasks, although it is just as important there as for image classification. For example, recognised named entities are important features in other NLP tasks such as dialogue management~\cite{dialogues_ner}. Therefore, data-efficient learning that allows recognising domain-specific entities is important. The present paper discusses a semi-supervised task definition for NER and proposes several techniques for solving it, adapted from image recognition.
One prominent approach to the task of learning from few examples is metric learning~\cite{metric-learning}: recent examples are Matching Networks~\cite{matching} and Prototypical Networks~\cite{prototypical} for image classification. Other approaches also exist: meta-learning~\cite{few_shot_meta}, memory-augmented neural networks~\cite{few_shot_memory}, and using side information for improving classification~\cite{fusing}.
The focus of the present paper is metric-learning methods, which we apply to NER framed as a few-shot learning task.
Our contributions are the following:
\begin{enumerate}
\item[$\bullet$] we formulate few-shot NER as a semi-supervised learning task;
\item[$\bullet$] we adapt an existing method used in computer vision to the few-shot NER task and compare it with existing models.
\end{enumerate}
\section{Models}
We implemented three variants of prototypical network models for the NER task and compared their performance with that of two baseline models: a bidirectional RNN with a CRF, and a transfer-learning model.
In our experiments we train separate models for each class, so that all models we report can distinguish between a target class $C$ and ``O''. This was done in order to evaluate the performance on individual classes in isolation. This setting mimics the scenario where we acquire a small number of examples of a new class and need to incorporate it into a pre-trained NER model.
\fixme{Moreover, as we show later, our models produce quite a small number of conflicts (cases when one word is assigned more than one class). This means that in real-world scenarios our models can be used simultaneously to label a text with all classes.}
\subsection{Data preparation}
\label{section:data_preparation}
Since each of our models predicts only one class, we need to alter the data to fit this scenario. We separate all the available data into two parts and alter their labellings in different ways. We label the first half of the data only with instances of the target class $C$; all other words receive the label ``O''. Note that since the sentences of this subset are not guaranteed to contain entities of class $C$, some sentences can be ``empty'' (i.e. contain only ``O'' labels). This data is our in-domain set. We use it in two ways:
\begin{enumerate}
\item We sample training data from it. To train a model for a particular class $C$ we need $N$ instances of this class. In order to get them we sample sentences from the in-domain set until we acquire $N$ sentences with at least one entity of class $C$. Therefore, the size of our sample depends on the frequency of $C$ in the data. For example, if $C$ occurs on average in one of three sentences, the expected size of the sample is $3N$. We refer to this data as \textbf{in-domain training}.
\item We reserve a part of the in-domain set for testing. Note that we test all models on the same set of sentences, although their labelling differs depending on the target class $C$. We refer to this data as \textbf{test}.
\end{enumerate}
The second half of the data, in contrast, is labelled with all classes \textit{except} $C$. It is used as training data in some of our models. We further refer to it as \textbf{out-of-domain training}.
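The sampling step in (1) can be sketched as follows. This is a minimal illustration under assumed data structures (each sentence is a (tokens, labels) pair; the function and variable names are ours, not from an actual codebase):

```python
import random

def sample_in_domain_training(sentences, n, seed=0):
    """Shuffle the in-domain set and keep drawing sentences until n of them
    contain at least one entity of the target class (any non-"O" label).
    Sentences without the class are kept as well, so the final sample size
    depends on how frequent the class is."""
    pool = list(sentences)
    random.Random(seed).shuffle(pool)
    sample, hits = [], 0
    for tokens, labels in pool:
        sample.append((tokens, labels))
        if any(lab != "O" for lab in labels):
            hits += 1
            if hits == n:
                break
    return sample
```

If the class occurs in roughly one of three sentences, the expected sample size is about $3N$, matching the estimate above.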
\subsection{RNN Baseline (Base)}
Our baseline NER model was taken from AllenNLP open-source library~\cite{allen}. The model processes sentences in the following way:
\begin{enumerate}
\item words are mapped to pre-trained embeddings (any embeddings, such as GloVe \cite{}, ELMo \cite{}, etc., can be used),
\item additional word embeddings are produced using a character-level trainable Recurrent Neural Network (RNN) with LSTM cells,
\item embeddings produced at stages (1) and (2) are concatenated and used as the input to a bi-directional RNN with LSTM cells. This network processes the whole sentence and creates context-dependent representations of every word,
\item a feed-forward layer converts hidden states of the RNN from stage (3) to logits that correspond to every label,
\item the logits are used as input to a Conditional Random Field (CRF) \cite{} model that outputs the probability distribution of tags for every word in a sentence.
\end{enumerate}
The model is trained by minimising the negative log-likelihood of the true tag sequences. We train the model using only the \textbf{in-domain training} set. It has to be noted that this baseline is quite strong even in our limited-resource setting. However, we found that in our few-shot case the CRF does not improve the performance of the model.
\subsection{Baseline prototypical network (BaseProto)}
The architecture of the prototypical network that we use for the NER task is very similar to that of our baseline model. The main change concerns the feed-forward layer. While in the baseline model it transforms RNN hidden states to logits corresponding to labels, in our prototypical network it maps these hidden states to an $M$-dimensional space. The output of the feed-forward layer is then used to construct prototypes from the support set. These prototypes are used to classify examples from the query set as described in Section \ref{section:model_theory}.
We train this model on the \textbf{in-domain training} data. We divide it into two parts: $N/2$ sentences containing examples of class $C$ are used as the support set, and another $N/2$ sentences with instances of $C$ plus a half of the ``empty'' sentences serve as the query set. We use only a half of the ``empty'' sentences in order to keep the original proportion of instances of class $C$ in the query set.
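The prototype construction and nearest-prototype classification can be sketched as follows. This is a simplified sketch assuming Euclidean distance, as in the original Prototypical Networks; in the real model the inputs are the RNN hidden states mapped to the $M$-dimensional space, and all names here are illustrative:

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """One prototype per label: the mean of the support-set embeddings
    carrying that label."""
    return {lab: np.mean([e for e, l in zip(embeddings, labels) if l == lab], axis=0)
            for lab in set(labels)}

def classify(embedding, prototypes):
    """Assign a query embedding the label of its nearest prototype
    (Euclidean distance)."""
    return min(prototypes, key=lambda lab: np.linalg.norm(embedding - prototypes[lab]))
```

In training, distances to the prototypes would be turned into a softmax over labels and optimised; the hard nearest-prototype rule above is the inference-time view.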
\subsection{Regularised prototypical network (Protonet)}
\label{section:protonet}
The architecture and training procedure of this model are the same as those of the BaseProto model. The only difference is the data we use for training. At each training step we select the training data using one of two scenarios:
\begin{enumerate}
\item training data is taken from the \textbf{in-domain training} set, analogously to the BaseProto model,
\item training data is sampled from the \textbf{out-of-domain training} set. The sampling procedure is the same as the one used for in-domain data: we (i) randomly choose a target class $C'$ and (ii) sample sentences until we encounter $N$ sentences with at least one instance of $C'$. Note that these sentences are labelled only with labels of class $C'$ or ``O''.
At each step we choose scenario (1) with probability $p$ and scenario (2) with probability $(1-p)$.
Therefore, throughout training the network is trained to predict our target class (scenario (1)), but occasionally it sees instances of other classes and constructs prototypes for them (scenario (2)). We suggest that this model can be more efficient than BaseProto, because at training time it is exposed to objects of different classes, so the procedure that maps objects to the prototype space becomes more robust. This is also a way to leverage the out-of-domain training data.
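The two-scenario selection can be sketched as follows (names are illustrative; `in_domain_episode` stands for the BaseProto-style sample described above, and the BIO label format is assumed):

```python
import random

def sample_episode(in_domain_episode, out_of_domain_by_class, p, rng=None):
    """With probability p use the target-class episode (scenario 1);
    otherwise pick a random out-of-domain class C' and relabel its
    sentences so that only C' labels survive, everything else becoming
    "O" (scenario 2)."""
    rng = rng or random.Random()
    if rng.random() < p:
        return in_domain_episode
    c_prime = rng.choice(sorted(out_of_domain_by_class))
    keep = {"B-" + c_prime, "I-" + c_prime}
    return [(tokens, [l if l in keep else "O" for l in labels])
            for tokens, labels in out_of_domain_by_class[c_prime]]
```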
\subsection{Transfer learning baseline (WarmBase)}
We implemented a common transfer learning approach: using knowledge about out-of-domain data to label in-domain samples. The training of this model has two parts:
\begin{enumerate}
\item We train our baseline model (``Base'') using \textbf{out-of-domain training} set.
\item We save all weights of the model except the CRF and the label prediction layer, and train the model again using the \textbf{in-domain training} set.
\end{enumerate}
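The weight transfer in step (2) amounts to filtering the pre-trained parameters before re-training. A framework-agnostic sketch (the parameter dictionary mimics a state dict; the layer-name prefixes are hypothetical and must match the actual model):

```python
def warm_start_weights(pretrained, skip_prefixes=("crf.", "tag_projection.")):
    """Keep every pre-trained weight except those of the CRF and the
    label-prediction layer; the dropped layers are re-initialised and
    learned from scratch on the in-domain set."""
    return {name: w for name, w in pretrained.items()
            if not name.startswith(skip_prefixes)}
```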
\subsection{Transfer learning + prototypical network (WarmProto)}
In addition, we combined the prototypical network with pre-training on out-of-domain data. We first train a Base model on the \textbf{out-of-domain training} set. Then we train a Protonet model as described in Section \ref{section:protonet}, but initialise its weights with the weights of this pre-trained Base model.
\section{Experimental setup}
\subsection{Dataset}
We conduct all our experiments on the Ontonotes dataset~\cite{ontonotes}. It contains 18 classes (plus the ``O'' class). The classes are not evenly distributed: the training set contains over $30{,}000$ instances of some common classes and fewer than 100 instances of some rare classes. The distribution of classes is shown in Figure \ref{fig:ontonotes_stat}. The size of the training set is $150{,}374$ sentences; the size of the validation set is $19{,}206$ sentences.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{ontonotes_stat.png}
\caption{Ontonotes dataset: statistics of class frequencies in the training data.}
\label{fig:ontonotes_stat}
\end{figure}
Like the majority of NER datasets, Ontonotes adopts \textbf{BIO} (\textbf{B}eginning, \textbf{I}nside, and \textbf{O}utside) labelling. It extends the class labels as follows: all class labels except ``O'' are prepended with ``B'' if the corresponding word is the first (or the only) word of an entity, and with ``I'' otherwise.
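As a small illustration of the BIO convention (our own toy example, not taken from the corpus), converting entity spans to BIO tags:

```python
def spans_to_bio(tokens, spans):
    """spans: (start, end, label) token spans, end exclusive;
    returns one BIO tag per token."""
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = "B-" + label
        for i in range(start + 1, end):
            tags[i] = "I-" + label
    return tags

print(spans_to_bio(["New", "York", "is", "big"], [(0, 2, "GPE")]))
# ['B-GPE', 'I-GPE', 'O', 'O']
```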
\subsection{Data preparation: simulated few-shot experiments}
We use the training data of the Ontonotes corpus as the \textbf{out-of-domain training} set (a dataset labelled with all classes except the target class $C$). In practice, we simply remove labels of class $C$ from the training data (i.e. replace them with ``O''). The validation set of Ontonotes is used as in-domain data: we replace all labels except $C$ with ``O''. We sample the \textbf{in-domain training} set from this transformed validation data as described in Section \ref{section:data_preparation}. However, the size of the sample produced by this procedure can vary significantly, in particular for rare classes, which can make the results unstable.
In order to reduce the variation we alter the sampling procedure in the following way. We define a function $pr(C)$ that computes the proportion of sentences containing the target class $C$ in the validation set ($pr(C) \in [0, 1]$). Then we sample $N$ sentences containing instances of class $C$ and $\frac{N \times (1-pr(C))}{pr(C)}$ sentences without class $C$. Thus, we keep the proportion of instances of class $C$ in our \textbf{in-domain training} dataset.
Therefore, we use up to 200 sentences from the Ontonotes validation set for training, and the remaining $19{,}000$ sentences are reserved for testing.
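The proportion-preserving sampling described above can be sketched as follows (illustrative; rounding of the second term is our choice):

```python
import random

def proportional_sample(with_c, without_c, n, pr_c, seed=0):
    """Sample n sentences containing class C and n * (1 - pr_c) / pr_c
    sentences without it, so the sample keeps the corpus proportion
    pr_c of C-bearing sentences."""
    rng = random.Random(seed)
    k = round(n * (1 - pr_c) / pr_c)
    return rng.sample(with_c, n) + rng.sample(without_c, k)
```

For $N = 20$ and $pr(C) = 0.25$, this draws 60 class-free sentences, i.e. a sample of 80 sentences of which a quarter contain $C$.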
\subsection{Design of experiments}
\fixme{this is not finished}
We conducted separate experiments for each of the 18 Ontonotes classes. For each class we ran 4 experiments with different random seeds and report results averaged over them.
We designed separate experiments for the selection of hyper-parameters and the optimal number of training epochs. For this we selected three well-represented classes, ``GPE'' (geopolitical entity), ``Date'', and ``Org'' (organisation), to conduct \textit{validation} experiments on them. We selected training sets as described above and used the test set (consisting of $19{,}000$ sentences) to tune hyper-parameters and to stop the training. For the other classes we did not perform hyper-parameter tuning. Instead, we used the values acquired in the validation experiments with the three validation classes. In these experiments we used the test set only for computing the performance of the trained models.
The motivation for this setup is the following. In many few-shot scenarios researchers report experiments where they train on a small training set and tune the model on a very large validation set. We argue that this scenario is unrealistic, because if we had a large number of labelled examples in a real-world problem, it would be more efficient to use them for training, not for validation. A more realistic scenario is to have a very limited number of labelled sentences. In that case we could still reserve a part of them for validation. However, we argue that this is also inefficient. If we have 20 examples and decide to train on 10 of them and validate on the other 10, this validation will be inaccurate, because 10 examples are not enough to evaluate the performance of a model reliably. Therefore, our evaluation will be very noisy and is likely to result in suboptimal values of hyper-parameters. On the other hand, 10 additional training examples can boost the quality of a model. Figure
\begin{figure*}
\includegraphics[scale=0.5]{figure_exp.png}
\caption{F1-performance of the algorithms after every training epoch.}
\label{fig:}
\end{figure*}
Figure
In all our experiments we set $N$ to 20. This number of examples
In the ``Protonet'' model we set $p$ to 0.5. Therefore, in half of the training steps the model is trained on instances of the target class $C$, and in the other half it is shown instances of some other randomly chosen class.
\section{Trash}
--------------This is just some meaningless text which is needed to keep latex working -----------------------------------
Our method is targeted at a resource-limited setting where some classes are under-represented in the labelled data. We have labelled data for some (but not all) classes and a large number of unlabelled sentences. In order to acquire examples of rare classes we need to label sentences manually. Manual labelling is laborious and time-consuming, so we would like to reduce the number of manually labelled examples needed for training. In our experiments we mimic this setting by sampling a small number of examples of a particular class from the dataset.
\subsection{Dataset}
We conduct all experiments on the Ontonotes dataset~\cite{ontonotes}. It contains 18 classes (apart from ``O'' class): ...
The size of the corpus is... \fixme{add info about the corpus, train/test partition, maybe table with number of instances for each class}
We prepared the dataset in the following way:
\begin{enumerate}
\item In the training set we replaced all labels of class $C$ with ``O''. This is our query set.
\item In the validation set we replaced all labels \textit{except} those of class $C$ with ``O''
\item We sampled sentences from the validation set until finding $N$ sentences with at least one instance of class $C$. This is our support set.
\item We used the rest of validation set for testing.
\end{enumerate}
If stage (3) is performed as described above, the result of the experiment is very unstable, because we sample a different number of sentences each time. In order to alleviate this problem we alter the sampling procedure: we compute the proportion of sentences with instances of class $C$ in advance, and then sample
\section{Experiments}
\subsection{Baseline}
In this experiment we simply apply the model from Section 5.1 to the few-shot NER task. Since this is ordinary supervised learning, we use only bag \#1, because we cannot extract knowledge from bag \#2. We call this experiment ``Base''.
\subsection{Baseline prototypical network}
Here we use the model from Section 5.2.
Using the same dataset as in BM (only bag \#1), we change the training procedure as described below.
At each training step we divide our whole dataset into 2 parts: the first one is used to compute prototypes (the support set), and the network is trained on objects from the second part (the query set).
Suppose we have a bag \#1 for class $C$ and we have labelled 20 sentences, each containing an example of the class.
We use 10 of them, sampled at random, as the support set. We sample the query set from the union of the other 10 sentences and a subset of the empty sentences. \textit{Important note}: we choose the size of the empty subset so as to preserve the true proportion in the query set. By construction of the task, the empty subset should be 2 times smaller than the original set of empty sentences from bag \#1. This holds both in the real-world case and in our research setting, where we computed the proportion in advance.
This training procedure was designed to ensure that the neural architecture we use for computing embeddings is on par with BM. We can thus use all the other data to further improve the embeddings and enforce this kind of data-driven regularisation.
We call this model ``BaseProto''.
\subsection{Regularized prototypical network}
With probability $p$ we sample a batch and make a training step using the same procedure as in 5.2. Otherwise we sample a batch from the big training dataset using the following procedure:
\begin{enumerate}
\item Choose one class at random from the 17 training classes.
\item Draw a batch of sentences that contain this class and annotate only this class; all other tokens are annotated as 'O'.
\item Draw a batch of arbitrary sentences from the training dataset and annotate them in the same way (empty sentences may also be drawn). Note that this preserves the true proportion in the batch.
\item Use the first batch as the support set and the second batch as the query set.
\end{enumerate}
The probability $p$ was chosen as 0.5; it does not seem to affect the result substantially.
We call this model ``Protonet''.
\subsection{Transfer learning baseline}
We also provide results for a common transfer learning procedure, which consists of 2 parts.
First, we train our baseline model using the usual training set, but replace the labels of the test class with 'O'.
Then we save all the weights except the CRF and tag prediction layer and train our BM from Section 4.1 from this initialisation of the weights.
The results of this method are provided in the table.
We call this model ``WarmBase''.
\subsection{Transfer learning + prototypical network}
We found that if we initialise the algorithm from Section 4.3 with the weights obtained after the warming procedure of Section 4.4, we obtain much better results.
We call this model ``WarmProto''.
\section{Results}
\subsection{Validation}
To measure the quality of our models correctly, we carefully constructed a validation procedure that mimics the real-world case. As mentioned above, we prepared 72 few-shot tasks: 4 tasks for each class of the Ontonotes dataset. These tasks are separated into 2 groups: 12 validation tasks and 60 test tasks. The 12 validation tasks are those generated from the classes 'GPE', 'DATE' and 'ORG'. This means the following: we tune the model, the hyper-parameters, etc. based on the F1-performance on the 12 validation tasks. We assume that we do not see the test datasets of the 60 test tasks until the final F1-measurement reported in our table (except for the fact that we compute the proportion of classes in the train dataset based on the test data).
\subsection{Training procedure}
Usually three datasets are used within one task: train, validation and test. The model is trained on the train set, validated on the validation part to stop training and tune hyper-parameters, and tested on the test part.
As mentioned by ~\cite{realeval}, some authors use an unrealistically big validation dataset to test a model on a few-shot task. For example, it does not make sense to train a model using 20 sentences and validate it using 200 sentences, because if 220 sentences are available to us before the final prediction, we can split them into train and validation in another proportion (for example 120/100), which will dramatically raise the performance of the model.
Suppose only a small number of sentences is available before the final prediction. Since we have 3 validation tasks and can use their big test datasets, we can tune hyper-parameters and the model based on these 3 validation tasks. The only thing left is the number of epochs we train our model. There are 2 options: either choose it based on the 3 validation tasks, or split our small dataset into 2 parts and perform early stopping based on one of them.
We constructed the following experiment: we train the algorithms WarmProto, WarmBase and Protonet using the whole train dataset and validate them every epoch. We also train all these algorithms using half of the train dataset and validate them every epoch. The results of this experiment are shown in Figure 3.
As we can see, it does not make sense to use a validation dataset, because our models do not overfit during training. So we decided to choose the number of epochs based on the 3 validation tasks. The procedure is simple: for every epoch from 1 to $n$ and for every validation task, we compute the mean F1 on its test dataset; then we choose the epoch for which this mean F1 is highest. Of course, we perform this procedure for every algorithm separately.
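The epoch-selection rule just described can be written down compactly (a sketch; the dictionary `f1` maps each validation task to its list of per-epoch F1 scores):

```python
def best_epoch(f1):
    """f1[task] is a list of F1 scores, one per epoch; pick the epoch
    (0-based) with the highest F1 averaged over the validation tasks."""
    n_epochs = len(next(iter(f1.values())))
    mean = [sum(scores[e] for scores in f1.values()) / len(f1)
            for e in range(n_epochs)]
    return max(range(n_epochs), key=mean.__getitem__)
```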
\subsection{Final results}
The final results for the 5 models are provided in Table 1. As we can see, even the simplest model, ``Base'', produces adequate results. If we train on the same data using the prototypical network, we get comparable results, shown in the column ``BaseProto''.
If we regularise the prototypical network using the procedure described in 6.3, we get the results called ``Protonet'', which are clearly better than those of the previous 2 models. Adding the 2 models that use transfer learning raises the performance significantly.
*add more after the last 6 classes*
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|l|ccccccc|}
\hline \bf Class name & \bf Base & \bf BaseProto & \bf WarmProtoZero & \bf Protonet & \bf WarmProto & \bf WarmBase & \bf WarmProto-CRF\\
\hline
\multicolumn{8}{|c|}{\textbf{Validation Classes}} \\
\hline
GPE & 69.75 $\pm$ 9.04 & 69.8 $\pm$ 4.16 & 60.1 $\pm$ 5.56 & 78.4 $\pm$ 1.19 & \textbf{82.02} $\pm$ \textbf{0.42} & 75.8 $\pm$ 6.2 & \underline{80.05} $\pm$ \underline{5.4} \\
DATE & 54.42 $\pm$ 3.64 & 50.75 $\pm$ 5.38 & 11.23 $\pm$ 4.57 & 56.55 $\pm$ 4.2 & \underline{64.68} $\pm$ \underline{3.65} & 56.32 $\pm$ 2.32 & \textbf{65.42} $\pm$ \textbf{2.82} \\
ORG & 42.7 $\pm$ 5.54 & 39.1 $\pm$ 7.5 & 17.18 $\pm$ 3.77 & 56.35 $\pm$ 2.86 & \underline{65.22} $\pm$ \underline{2.83} & 63.45 $\pm$ 1.79 & \textbf{69.2} $\pm$ \textbf{1.2} \\
\hline
\multicolumn{8}{|c|}{\textbf{Test Classes}} \\
\hline
EVENT & 32.33 $\pm$ 4.38 & 24.15 $\pm$ 4.38 & 4.85 $\pm$ 1.88 & 33.95 $\pm$ 5.68 & 34.75 $\pm$ 2.56 & \underline{35.15} $\pm$ \underline{4.04} & \textbf{45.2} $\pm$ \textbf{4.4} \\
LOC & 31.75 $\pm$ 9.68 & 24.0 $\pm$ 5.56 & 16.62 $\pm$ 7.18 & 42.88 $\pm$ 2.03 & \underline{49.05} $\pm$ \underline{1.04} & 40.67 $\pm$ 4.85 & \textbf{52.0} $\pm$ \textbf{4.34} \\
FAC & 36.7 $\pm$ 8.15 & 29.83 $\pm$ 5.58 & 6.93 $\pm$ 0.62 & 41.05 $\pm$ 2.74 & 43.52 $\pm$ 3.09 & \underline{45.4} $\pm$ \underline{3.01} & \textbf{56.85} $\pm$ \textbf{1.52} \\
CARDINAL & 54.82 $\pm$ 1.87 & 53.7 $\pm$ 4.81 & 8.12 $\pm$ 7.92 & 64.05 $\pm$ 1.61 & \underline{69.2} $\pm$ \underline{1.51} & 62.98 $\pm$ 3.5 & \textbf{70.43} $\pm$ \textbf{3.43} \\
QUANTITY & 64.3 $\pm$ 5.06 & 61.72 $\pm$ 4.9 & 12.88 $\pm$ 4.13 & 65.05 $\pm$ 8.64 & 67.97 $\pm$ 2.98 & \underline{69.65} $\pm$ \underline{5.8} & \textbf{76.35} $\pm$ \textbf{3.09} \\
NORP & 73.5 $\pm$ 2.3 & 72.1 $\pm$ 6.0 & 39.92 $\pm$ 10.5 & \underline{83.02} $\pm$ \underline{1.42} & \textbf{84.5} $\pm$ \textbf{1.61} & 79.53 $\pm$ 1.32 & 82.4 $\pm$ 1.15 \\
ORDINAL & 68.97 $\pm$ 6.16 & 71.65 $\pm$ 3.31 & 1.93 $\pm$ 3.25 & \textbf{76.08} $\pm$ \textbf{3.55} & 74.7 $\pm$ 4.94 & 69.77 $\pm$ 4.97 & \underline{75.52} $\pm$ \underline{5.11} \\
WORK\_OF\_ART & \underline{30.48} $\pm$ \underline{1.42} & 27.5 $\pm$ 2.93 & 3.4 $\pm$ 2.37 & 28.0 $\pm$ 3.33 & 25.6 $\pm$ 2.86 & 30.2 $\pm$ 1.27 & \textbf{32.25} $\pm$ \textbf{3.11} \\
PERSON & 70.05 $\pm$ 6.7 & 74.1 $\pm$ 5.32 & 38.88 $\pm$ 7.64 & \underline{80.53} $\pm$ \underline{2.15} & 78.8 $\pm$ 0.26 & 78.03 $\pm$ 3.98 & \textbf{82.32} $\pm$ \textbf{2.51} \\
LANGUAGE & \underline{72.4} $\pm$ \underline{5.53} & 70.78 $\pm$ 2.62 & 4.25 $\pm$ 0.42 & 68.75 $\pm$ 6.36 & 52.72 $\pm$ 11.67 & 65.92 $\pm$ 3.52 & \textbf{75.62} $\pm$ \textbf{7.22} \\
LAW & \underline{58.08} $\pm$ \underline{4.9} & 53.12 $\pm$ 4.54 & 2.4 $\pm$ 1.15 & 48.38 $\pm$ 8.0 & 44.35 $\pm$ 4.31 & \textbf{60.13} $\pm$ \textbf{6.08} & 57.72 $\pm$ 7.06 \\
MONEY & 70.12 $\pm$ 5.19 & 66.05 $\pm$ 1.66 & 12.48 $\pm$ 11.92 & 68.4 $\pm$ 6.3 & \underline{72.12} $\pm$ \underline{5.87} & 68.4 $\pm$ 5.08 & \textbf{79.35} $\pm$ \textbf{3.6} \\
PERCENT & 76.88 $\pm$ 2.93 & 75.55 $\pm$ 4.17 & 1.78 $\pm$ 1.87 & 80.18 $\pm$ 4.81 & \underline{85.65} $\pm$ \underline{3.6} & 79.2 $\pm$ 3.76 & \textbf{88.32} $\pm$ \textbf{2.76} \\
PRODUCT & 43.6 $\pm$ 7.21 & \underline{44.35} $\pm$ \underline{3.48} & 3.95 $\pm$ 0.51 & 39.92 $\pm$ 7.22 & 30.07 $\pm$ 12.73 & 43.4 $\pm$ 8.43 & \textbf{49.32} $\pm$ \textbf{2.92} \\
TIME & 35.93 $\pm$ 6.35 & 35.8 $\pm$ 2.61 & 8.6 $\pm$ 3.21 & 50.15 $\pm$ 5.12 & \underline{53.6} $\pm$ \underline{2.5} & 45.62 $\pm$ 5.64 & \textbf{59.8} $\pm$ \textbf{0.76} \\
\hline
\end{tabular}
\end{center}
\caption{\label{baseline_sl} Final results of experiments. $F_1$ score for different models and different classes. Bold means the best score, underlined means the second place.}
\end{table*}
\begin{table*}[t!]
\begin{center}
\begin{tabular}{|l|cccc|}
\hline \bf Class name & \bf WarmBase + BIO & \bf WarmBase + TO & \bf WarmProto + BIO & \bf WarmProto + TO\\
\hline
\multicolumn{5}{|c|}{\textbf{Validation Classes}} \\
\hline
GPE & 75.8 $\pm$ 6.2 & 74.8 $\pm$ 4.16 & \textbf{83.62} $\pm$ \textbf{3.89} & \underline{82.02} $\pm$ \underline{0.42} \\
DATE & 56.32 $\pm$ 2.32 & 58.02 $\pm$ 2.83 & \underline{61.68} $\pm$ \underline{3.38} & \textbf{64.68} $\pm$ \textbf{3.65} \\
ORG & 63.45 $\pm$ 1.79 & 62.17 $\pm$ 2.9 & \underline{63.75} $\pm$ \underline{2.43} & \textbf{65.22} $\pm$ \textbf{2.83} \\
\hline
\multicolumn{5}{|c|}{\textbf{Test Classes}} \\
\hline
EVENT & \underline{35.15} $\pm$ \underline{4.04} & \textbf{35.4} $\pm$ \textbf{6.04} & 33.85 $\pm$ 5.91 & 34.75 $\pm$ 2.56 \\
LOC & 40.67 $\pm$ 4.85 & 40.08 $\pm$ 2.77 & \textbf{49.1} $\pm$ \textbf{2.4} & \underline{49.05} $\pm$ \underline{1.04} \\
FAC & \underline{45.4} $\pm$ \underline{3.01} & 44.88 $\pm$ 5.82 & \textbf{49.88} $\pm$ \textbf{3.39} & 43.52 $\pm$ 3.09 \\
CARDINAL & 62.98 $\pm$ 3.5 & 63.27 $\pm$ 3.66 & \underline{66.12} $\pm$ \underline{0.43} & \textbf{69.2} $\pm$ \textbf{1.51} \\
QUANTITY & \textbf{69.65} $\pm$ \textbf{5.8} & \underline{69.3} $\pm$ \underline{3.41} & 67.07 $\pm$ 5.11 & 67.97 $\pm$ 2.98 \\
NORP & 79.53 $\pm$ 1.32 & 80.75 $\pm$ 2.38 & \textbf{84.52} $\pm$ \textbf{2.79} & \underline{84.5} $\pm$ \underline{1.61} \\
ORDINAL & 69.77 $\pm$ 4.97 & 70.9 $\pm$ 6.34 & \underline{73.05} $\pm$ \underline{7.14} & \textbf{74.7} $\pm$ \textbf{4.94} \\
WORK\_OF\_ART & \textbf{30.2} $\pm$ \textbf{1.27} & \underline{25.78} $\pm$ \underline{4.07} & 23.48 $\pm$ 5.02 & 25.6 $\pm$ 2.86 \\
PERSON & 78.03 $\pm$ 3.98 & 76.0 $\pm$ 3.12 & \textbf{80.42} $\pm$ \textbf{2.13} & \underline{78.8} $\pm$ \underline{0.26} \\
\hline
\end{tabular}
\end{center}
\caption{\label{bio_vs_to} $F_1$ scores for the WarmBase and WarmProto models with different task constructions: with and without BIO-tagging at the training stage.}
\end{table*}
\subsection{BIO-tagging}
We also tested the hypothesis that the BIO output of the model can harm its $F_1$ performance.
Separating B-tags and I-tags into distinct classes may not make sense for two reasons:
\begin{enumerate}
\item The number of I-tags may be too small, since not all entities are longer than one word.
\item Words carrying B-tags and words carrying I-tags may be quite similar, so it may be impossible to distinguish them using prototypes, which themselves have high variance because of the small size of the support set.
\end{enumerate}
As mentioned above, we use chunk-wise $F_1$ score. If the model produces BIO-tags for a sentence, we can extract chunks and compute $F_1$ directly. But even if the model produces only TO-tags (tag/other), we can still compute $F_1$ with the following procedure: if a 'T' follows an 'O', it is virtually treated as 'B'; otherwise it is virtually treated as 'I'. Importantly, we do not change the true chunks in the test dataset; they remain the same. We only change the way the model produces its chunks, so we do not make our task easier. Moreover, the model loses the ability to predict arbitrary chunk sequences: it can no longer predict two chunks that immediately follow one another.
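This conversion and the chunk-wise scoring can be sketched as follows (a minimal illustration; the function names and span representation are ours, not taken from our evaluation code):

```python
def to_tags_to_bio(tags):
    """Convert a T/O tag sequence to BIO: a 'T' that follows an 'O'
    (or starts the sequence) virtually becomes 'B', any other 'T'
    virtually becomes 'I'."""
    bio, prev = [], "O"
    for t in tags:
        bio.append(("B" if prev == "O" else "I") if t == "T" else "O")
        prev = t
    return bio

def extract_chunks(bio):
    """Extract chunks as (start, end) spans, with `end` exclusive."""
    chunks, start = [], None
    for i, t in enumerate(bio):
        if t in ("B", "O"):
            if start is not None:          # close the open chunk
                chunks.append((start, i))
            start = i if t == "B" else None
        # an 'I' tag simply continues the current chunk
    if start is not None:
        chunks.append((start, len(bio)))
    return chunks

def chunk_f1(true_bio, pred_to):
    """Chunk-wise F1 of T/O predictions against true BIO tags."""
    true_chunks = set(extract_chunks(true_bio))
    pred_chunks = set(extract_chunks(to_tags_to_bio(pred_to)))
    tp = len(true_chunks & pred_chunks)
    p = tp / len(pred_chunks) if pred_chunks else 0.0
    r = tp / len(true_chunks) if true_chunks else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0
```

Note that two adjacent true chunks (tags B, B) collapse into a single predicted chunk under this conversion, which is exactly the limitation discussed above.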
When training, we replace all BIO-tags with TO-tags in both the small training dataset and the large training dataset (for WarmProto).
WarmProto + TO showed better results on the validation tasks, so at first TO-tagging seemed clearly superior. However, the results on the test tasks showed that WarmProto + TO does not outperform WarmProto + BIO as much as we expected.
Table 2 reports the results for WarmProto and WarmBase with and without BIO-tagging.
As we can see, BIO-tagging does not significantly affect either algorithm.
\section{Conclusions}
...
\section{Introduction}
Named Entity Recognition (NER) is the task of finding entities, such as names of persons, organizations, locations, etc., in unstructured text. These names can be individual words or phrases in a sentence, so NER is usually interpreted as a sequence labelling task. It is widely used in information extraction frameworks and is one of the core components of goal-oriented dialogue systems~\cite{dialogues_ner}.
When large labelled datasets are available, the task of NER can be solved with very high quality~\cite{ner_bilstm_sota}. Common benchmarks for testing new NER methods are CoNLL-2003~\cite{conll2003} and Ontonotes~\cite{ontonotes} datasets. They both include enough data to train neural architectures in a supervised learning setting.
However, in real-world applications such abundant datasets are usually not available, especially for low-resourced languages. And even a large labelled corpus will inevitably contain rare entities that do not occur enough times to train a neural network to accurately identify them in text.
This urges the need for methods of \textbf{few-shot} NER --- successful identification of entities for which we have an extremely small number of labelled examples. One solution is semi-supervised learning, i.e. methods that yield well-performing models by combining information from a small set of labelled data with large amounts of unlabelled data, which are available for virtually any language. Word embeddings, which are trained in an unsupervised manner and are used in the majority of NLP tasks as the input to a neural network, can be considered one way of incorporating unlabelled data. However, they only provide general (and not always suitable) information about word meaning, whereas we argue that unsupervised data can be used to extract more task-specific information about the structure of the data.
A prominent approach to the task of learning from few examples is metric learning~\cite{metric-learning}. This term denotes techniques that learn a metric to measure the fitness of an object to some class. Metric learning methods, such as matching networks~\cite{matching} and prototypical networks~\cite{prototypical}, showed good results in few-shot learning for image classification. These methods can also be considered semi-supervised learning methods, because they use information about the structure of common objects in order to label the uncommon ones even without seeing many examples. This approach can even be used for zero-shot learning, i.e. instances of a target class do not need to be presented at training time. Therefore, such a model does not need to be re-trained in order to handle new classes. This property is extremely appealing for real-world tasks.
Despite its success in image processing, metric learning has not been widely used in NLP tasks. There, in low-resourced settings researchers more often resort to transfer learning --- use of knowledge from a different domain or language. We apply prototypical networks to the NER task and compare it to commonly used baselines. We test a metric learning technique in a task which often emerges in real-world setting --- identification of instances with extremely small number of labelled examples. We show that although prototypical networks do not succeed in zero-shot NER task, they outperform other models in few-shot case.
The main contributions of the work are the following:
\begin{enumerate}
\item we formulate the few-shot NER task as a semi-supervised learning task,
\item we modify the prototypical network model to enable it to solve the NER task and show that it outperforms a state-of-the-art model in a low-resource setting.
\end{enumerate}
The paper is organized as follows. In Section \ref{sec:related_work} we review the existing approaches to few-shot NER task. In Section \ref{sec:prototypical} we describe the prototypical network model and its adaptation to the NER task. Section \ref{sec:few_shot_ner} defines the task and describes the models that we tested to solve it. Section \ref{sec:experimental_setup} contains the description of our experimental setup. We report and analyze our results in Section \ref{sec:results}, and in Section \ref{sec:conclusions} we conclude and provide the directions for future work.
\section{Related work}
\label{sec:related_work}
NER is a well-established task that has been solved in a variety of ways. Nowadays, as in the majority of other NLP tasks, the state of the art is sequence labelling with Recurrent Neural Networks \cite{ner_bilstm_sota,ner_sota}. However, neural architectures are very sensitive to the size of training data and tend to overfit on small datasets. Hence, the latest research on named entities concentrates on handling low-resourced cases, which often occur in narrow domains or low-resourced languages.
The work by Wang et al.~\cite{transfermed} describes feature transform between domains which allows exploiting a large out-of-domain dataset for NER task. Numerous works describe a similar transition between languages: Dandapat and Way~\cite{ner_embedding_transfer} draw correspondences between entities in different languages using a machine translation system, Xie et al.~\cite{crosslang} map words of two languages into a shared vector space. Both these methods allow ``translating'' a big dataset to a new language. Cotterell and Duh~\cite{ner_crosslang_joint} describe a setting where the performance of a NER model for a low-resourced language is improved by training it jointly with a NER model for its well-resourced cognate.
Besides labelled data of a different domain or language, other sources such as ontologies, knowledge bases or heuristics can be used in limited data settings~\cite{ner_ontologies}. Similarly, Tsai and Salakhutdinov~\cite{fusing} improve the image classification accuracy using side information.
Active learning is also a popular choice to reduce the amount of training data. In~\cite{activelearning} the authors apply active learning to few-shot NER task and succeed in improving the performance despite the fact that neural architectures usually require large number of training examples. A somewhat similar approach is self-learning --- training on examples labelled by a model itself. While it is ineffective in many settings,~\cite{self_learn} shows that it can improve results of few-shot NER task when combined with reinforcement learning.
The most closely related work to ours is research by Ma et al.~\cite{fine-grained} where authors learn embeddings for fine-grained NER task with hierarchical labels. They train a model to map hand-crafted and other features of words to embeddings and use mutual information metric to choose a prototype from sets of words. Analogously to this work, we aim at improving performance of NER models on rare classes. However, we do not limit the model to hierarchical classes. It makes our model more flexible and applicable to ``cold start'' problem (problem of extending data with new classes).
Beyond NLP, there also exist multiple approaches to few-shot learning. The already mentioned metric learning technique~\cite{metric-learning} benefits from structure shared by all objects in a task, and creates a representation that shows their differences relevant to the task. Meta-learning~\cite{few_shot_meta} approach operates at two levels: it learns to solve a task from a small number of examples, and at the top level it learns more general regularities about the data across tasks.
In~\cite{few_shot_memory} the authors demonstrate that memory-augmented neural networks, such as Neural Turing Machines, have a capacity to perform meta-learning with few labelled examples.
To the best of our knowledge, prototypical networks~\cite{prototypical} have not been applied to any NLP tasks before. They have a very attractive capacity of introducing new labels to a model without its retraining. None of models described above can perform such zero-shot learning. Although natural language is indeed different from images for which prototypical networks were originally suggested, we decided to test this model on an NLP task to see if it is possible to transfer this property to the text domain.
\section{Prototypical Networks}
\label{sec:prototypical}
\subsection{Model}
\label{section:model_theory}
Work by Snell et al.~\cite{prototypical} introduces the \textit{prototypical network} --- a model developed for classification in settings where labelled examples are scarce.
This network is trained so that representations of objects returned by its last but one layer are similar for objects that belong to the same class and diverse for objects of different classes. In other words, this network maps objects to a vector space which allows easy separation of objects into meaningful task-specific clusters.
This feature allows assigning a class to an unseen object even if the number of labelled examples of this class is very limited.
The model is trained on two sets of examples: the \textit{support set} and the \textit{query set}. The support set consists of $N$ labelled examples: $S$ = \{$(\textbf{x}_1, y_1)$, ..., $(\textbf{x}_N, y_N)$\}, where each $\textbf{x}_i \in \mathbb{R}^{D}$ is a $D$-dimensional representation of an object and $y_i \in \{1, 2, ..., K\}$ is the label of this object. The query set contains $N'$ labelled objects: $Q$ = \{$(\textbf{x}_1, y_1)$, ..., $(\textbf{x}_{N'}, y_{N'})$\}. Note that this partition is not stable across training steps --- the support and query sets are sampled randomly from the training data at each step.
The training is conducted in two stages:
\begin{enumerate}
\item For each class $k$ we define $S_k$ --- the set of objects from $S$ that belong to this class. We use these sets to compute \textit{prototypes}:
$$ \textbf{c}_k = \frac{1}{|S_k|} \sum_{(\textbf{x}_i, y_i) \in S_k} f_{\theta}(\textbf{x}_i), $$
where function $f_{\theta}: \mathbb{R}^D \to \mathbb{R}^M$ maps the input objects to the $M$-dimensional space which is supposed to keep distances between classes. $f_{\theta}$ is usually implemented as a neural network. Its architecture depends on the properties of objects.
The prototype is the averaged representation of the objects of a particular class, i.e. the centre of the cluster corresponding to this class in the $M$-dimensional space.
\item We classify objects from $Q$. In order to classify an unseen example \textbf{x}, we map it to the $M$-dimensional space using $f_{\theta}$ and then assign it to the class whose prototype is closest to the representation of \textbf{x}. We compute the distance $d(f_{\theta}(\textbf{x}), \textbf{c}_k)$ for every $k$ and denote the similarity of \textbf{x} to class $k$ as $l_k = -d(f_{\theta}(\textbf{x}), \textbf{c}_k)$. Finally, we convert these similarities to a distribution over classes using the $softmax$ function: $softmax(l_1, ..., l_K)$. The model is agnostic about the distance function; following~\cite{prototypical}, we use squared Euclidean distance.
\end{enumerate}
The model is trained by optimising cross-entropy loss:
$$ L(\textbf{y}, \hat{\textbf{y}}) = - \sum_{i=1}^{N'} y_i \log \hat{y}_i, $$
where $\hat{y}_i = softmax(l_1, ..., l_K)$.
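The two training stages can be sketched numerically. In this toy NumPy illustration the encoder $f_\theta$ has already been applied, so we work directly with embedded objects (all names and numbers are illustrative, not from our implementation):

```python
import numpy as np

def prototypes(embedded_support, labels, num_classes):
    """c_k: mean of the embedded support objects of class k."""
    return np.stack([embedded_support[labels == k].mean(axis=0)
                     for k in range(num_classes)])

def classify(embedded_query, protos):
    """l_k = -squared Euclidean distance to prototype k; a softmax over
    the similarities gives a distribution over classes."""
    d2 = ((embedded_query[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    logits = -d2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# two classes forming two clusters in a 2-dimensional embedding space
support = np.array([[0., 0.], [0., 2.], [4., 0.], [4., 2.]])
labels = np.array([0, 0, 1, 1])
protos = prototypes(support, labels, num_classes=2)  # cluster centres
probs = classify(np.array([[1., 1.]]), protos)       # closer to class 0
```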
\subsection{Adaptation to NER}
In order to apply prototypical networks to NER task, we made the following changes to the baseline model described above:
\paragraph{\textbf{Sequential vs independent objects}} An image dataset contains separate images that are not related to each other.
In contrast, in NLP tasks we often need to classify words which are grouped in sequences. Words in a sentence influence each other, and when labelling a word we should take into account the labels of neighbouring words, so considering a word in isolation does not make sense in such a setting. Nevertheless, in the NER task we need to classify separate words, so following the description of the model from the previous section, we should assemble the support set $S$ from pairs ($w_i$, $y_i$), where $w_i$ is a word and $y_i$ is its label. However, this division can break the sentence structure if some words in a sentence are assigned to the support set and others to the query set. In order to prevent such situations we form our support and query sets from whole sentences.
\paragraph{\textbf{Class ``no entity''}} In NER task we have class \textit{O} that is used to denote words which are not named entities. It cannot be interpreted in the same way as other classes, because objects of class \textit{O} do not need to (and should not) be close to each other in a vector space. In order to mitigate this problem we modified our prediction function $softmax(l_1, ..., l_K)$. We replaced the similarity score $l_O$ for the \textit{O} class with a scalar $b_{O}$, and used the following form of softmax: $softmax(l_1, ..., l_{K-1}, b_{O})$. $b_O$ is trained along with parameters $\theta$ of the model. The initial value of $b_O$ is a hyper-parameter.
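A sketch of this modified prediction, assuming for concreteness that the \textit{O} class takes the last index (a toy illustration, not our implementation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_with_o_bias(entity_similarities, b_o):
    """entity_similarities: scores l_1, ..., l_{K-1} for entity classes.
    The O class receives a trainable scalar b_o instead of a
    distance-based score, since O words need not form a cluster."""
    return softmax(np.append(entity_similarities, b_o))
```

During training, the gradient of the loss flows into $b_O$ just as into the network parameters $\theta$.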
\paragraph{\textbf{In-domain and out-of-domain training}}
In the original paper~\cite{prototypical}, prototypical networks were applied in a zero-shot learning setting: the weights of the model are updated during the training phase, but once training is over, instances of test classes are used only for the calculation of prototypes. Given that it is usually easy to obtain a few labelled examples, we modified the original \textit{zero-shot} setting to a \textit{few-shot} setting: we use the small number of available labelled examples of the target class during the training phase. We denote this data as the \textbf{in-domain} training set, and the data for the other classes is referred to as the \textbf{out-of-domain} training set. The \textit{domains} in the traditional NLP sense are the same --- texts come from the same sources and word distributions are similar; here the term refers to the discrepancy between the sets of named entity classes.
\section{Few-shot NER}
\label{sec:few_shot_ner}
\subsection{Task formulation}
NER is a sequence labelling task, where each word in a sentence is assigned either one of entity classes (``Person'', ``Location'', ``Organisation'', etc.) or \textit{O} class if it is not one of the desired entities.
While common classes are usually identified correctly by existing methods, we target particularly the rare classes for which we have only a very limited number of labelled examples. To increase the quality of their identification, we use the information from other classes. We train a separate model for every class in order to see the performance on each of them in isolation. Such a formulation can also be considered a way to tackle the ``cold start'' problem --- adapting a NER model to label entities of a new class given a very small number of labelled examples.
As it was described above, we have two training sets: \textit{out-of-domain} and \textit{in-domain}. Since we simulate the ``cold start'' problem in our experiments, these datasets have the following characteristics. The \textit{out-of-domain} data is quite large and labelled with a number of named entity classes except the target class $C$ --- this is the initially available data. The \textit{in-domain} dataset is very small and contains labels only for the class $C$ --- this is the new data which we acquire afterwards and which we would like to infuse into the model.
In order to train a realistic model we need to keep the frequency of $C$ in our \textit{in-domain} training data similar to the frequency of this class in the general distribution. Therefore, if instances of this class occur on average in one of three sentences, then our \textit{in-domain} training data has to contain sentences with no instances of class $C$ (``empty'' sentences), and their number should be twice as large as the number of sentences with $C$. In practice this can be achieved by sampling sentences from unlabelled data until we obtain the needed number of instances of class $C$.
\subsection{Basic models}
\label{sec:basic_models}
We use two main architectures --- the commonly used RNN baseline and a prototypical network adapted for the NER task. Other models we test use these two models as building blocks.
\paragraph{\textbf{RNN + CRF model}}
As our baseline we use a NER model implemented in AllenNLP open-source library~\cite{allen}. The model processes sentences in the following way:
\begin{enumerate}
\item words are mapped to pre-trained embeddings (any embeddings, such as GloVe~\cite{glove}, ELMo~\cite{elmo}, etc. can be used),
\item additional word embeddings are produced using a character-level trainable Recurrent Neural Network (RNN) with LSTM cells,
\item embeddings produced at stages (1) and (2) are concatenated and used as the input to a bi-directional RNN with LSTM cells; this network processes the whole sentence and creates context-dependent representations of every word,
\item a feed-forward layer converts hidden states of the RNN from stage (3) to logits that correspond to every label,
\item the logits are used as input to a Conditional Random Field (CRF)~\cite{crf} model that outputs the probability distribution of tags for every word in a sentence.
\end{enumerate}
The model is trained by minimizing negative log-likelihood of true tag sequences. It has to be noted that this baseline is quite reasonable even in our limited resource setting.
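As an illustration of step (5), a CRF finds the highest-scoring tag sequence given the per-word logits and learned tag-transition scores. A minimal Viterbi decoder (a generic sketch, not the AllenNLP implementation):

```python
import numpy as np

def viterbi_decode(logits, transitions):
    """logits: (T, K) per-word tag scores; transitions[i, j]: score of
    moving from tag i to tag j. Returns the best-scoring tag sequence."""
    T, K = logits.shape
    score = logits[0].copy()                # best score ending in each tag
    back = np.zeros((T, K), dtype=int)      # backpointers
    for t in range(1, T):
        cand = score[:, None] + transitions + logits[t][None, :]
        back[t] = cand.argmax(axis=0)       # best previous tag per current tag
        score = cand.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(T - 1, 0, -1):           # follow backpointers
        tags.append(int(back[t][tags[-1]]))
    return tags[::-1]
```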
\paragraph{\textbf{Prototypical Network}}
The architecture of the prototypical network that we use for NER task is very similar to the one of our baseline model. The main change concerns the feed-forward layer. While in the baseline model it transforms RNN hidden states to logits corresponding to labels, in our prototypical network it maps these hidden states to the $M$-dimensional space. The output of the feed-forward layer is then used to construct prototypes from the support set. These prototypes are used to classify examples from the query set as described in section \ref{section:model_theory}.
We try variants of this model both with and without the CRF layer.
The architecture of the prototypical network model is provided in Figure \ref{fig:subim1}.
\begin{figure}[h]
\includegraphics[width=1.0\linewidth]{Proto.png}
\caption{Model architecture}
\label{fig:subim1}
\end{figure}
\subsection{Experiments}
We perform experiments with a number of different models: we test different variants of the prototypical network model and compare them with the RNN baseline. In addition, we try a transfer learning scenario and combine it with these models. Here we provide the description of all the models we test.
\paragraph{\textbf{RNN Baseline (Base)}}
This is the baseline RNN model described above. We train it using only \textit{in-domain} training set.
\paragraph{\textbf{Baseline prototypical network (BaseProto)}}
This is the baseline prototypical network model. We train it on the \textit{in-domain} training data, which we divide into two parts. If the \textit{in-domain} set contains $N$ sentences with instances of the target class $C$ and $V$ ``empty'' sentences, we use $N/2$ sentences with instances of $C$ as the support set, while the other $N/2$ such sentences along with $V/2$ ``empty'' sentences serve as the query set. We use only half of the ``empty'' sentences to keep the original frequency of class $C$ in the query set. Note that the partition is renewed at every training iteration.
\paragraph{\textbf{Regularised prototypical network (Protonet)}}
The architecture and training procedure of this model are the same as those of \textit{BaseProto} model. The only difference is the data we use for training. At each training step we select the training data using one of two scenarios:
\begin{enumerate}
\item we use \textit{in-domain} training data, i.e. data labelled with the target class $C$ (this setup is the same as the one we use in \textit{BaseProto}),
\item we change the target class: we (i) randomly select a new target class $C'$ ($C' \neq C$), (ii) sample sentences from \textit{out-of-domain} dataset until we find $N$ instances of $C'$, and (iii) re-label the sampled sentences so that they contain only labels of class $C'$.
\end{enumerate}
At each step we choose the scenario (1) with probability $p$, or scenario (2) with probability $(1-p)$.
Therefore, throughout training the network is trained to predict our target class (scenario (1)), but occasionally it sees instances of some other classes and constructs prototypes for them (scenario (2)). We suggest that this model can be more efficient than BaseProto, because at training time it is exposed to objects of different classes, and the procedure that maps objects to prototype space becomes more robust. This is also a way to leverage out-of-domain training data.
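The data selection can be sketched as follows (the data structures are illustrative: we assume `out_of_domain_by_class` maps each non-target class to sentences already re-labelled for that class):

```python
import random

def sample_training_task(in_domain, out_of_domain_by_class, target, p, rng):
    """Scenario (1) with probability p: train on the target class C.
    Scenario (2) otherwise: pick a random class C' != C and train on
    sentences re-labelled to contain only C' labels."""
    if rng.random() < p:
        return target, in_domain
    other = sorted(c for c in out_of_domain_by_class if c != target)
    c_prime = rng.choice(other)
    return c_prime, out_of_domain_by_class[c_prime]
```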
\paragraph{\textbf{Transfer learning baseline (WarmBase)}}
We test a common transfer learning model --- use of knowledge about out-of-domain data to label in-domain samples. The training of this model is two-part:
\begin{enumerate}
\item We train our baseline RNN+CRF model using \textit{out-of-domain} training set.
\item We save all weights of the model except CRF and label prediction layer, and train this model again using \textit{in-domain} training set.
\end{enumerate}
\paragraph{\textbf{Transfer learning + prototypical network (WarmProto)}}
In addition to that, we combine prototypical network with pre-training on out-of-domain data. We first train a Base model on the \textit{out-of-domain} training set. Then, we train a Protonet model as described above, but initialise its weights with weights of this pre-trained Base model.
\paragraph{\textbf{WarmProto-CRF}}
This is the same prototypical network pre-trained on \textit{out-of-domain} data, but it is extended with a CRF layer on top of logits as described in section \ref{sec:basic_models}.
\paragraph{\textbf{WarmProto for zero-shot training (WarmProtoZero)}}
We train the same WarmProto model, but with the probability $p$ set to 0. In other words, our model does not see instances of the target class at training time; it only learns to produce representations of objects of other classes. Then, at test time, it is given $N$ entities of the target class as the support set, and words in test sentences are assigned either to this class or to the \textit{O} class based on their similarity to its prototype. This is the only zero-shot learning scenario that we test.
\section{Experimental setup}
\label{sec:experimental_setup}
\subsection{Dataset}
We conduct all our experiments on the Ontonotes dataset~\cite{ontonotes}. It contains 18 classes (plus the $O$ class). The classes are not evenly distributed --- the training set contains over $30{,}000$ instances of some common classes and fewer than 100 instances of rare ones. The distribution of classes is shown in Figure \ref{fig:ontonotes_stat}. The training set contains $150{,}374$ sentences and the validation set $19{,}206$ sentences.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{ontonotes_stat.png}
\caption{Ontonotes dataset --- class frequency statistics in the training data.}
\label{fig:ontonotes_stat}
\end{figure}
Like the majority of NER datasets, Ontonotes adopts \textbf{BIO} (\textbf{B}eginning, \textbf{I}nside, and \textbf{O}utside) labelling. It extends class labels as follows: all labels except \textit{O} are prepended with ``B'' if the corresponding word is the first (or the only) word of an entity, or with ``I'' otherwise.
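For instance, a sentence with a two-word ``GPE'' entity would be labelled as below (the sentence is invented for illustration):

```python
tokens = ["The", "New", "York", "office", "opened"]
labels = ["O", "B-GPE", "I-GPE", "O", "O"]

def split_label(label):
    """Split a BIO label into its position prefix and class name."""
    if label == "O":
        return "O", None
    prefix, cls = label.split("-", 1)
    return prefix, cls
```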
\subsection{Data preparation: simulated few-shot experiments}
We use the Ontonotes training data as \textit{out-of-domain} training set (where applicable) and sample \textit{in-domain} examples from the validation set. In our formulation, the \textit{in-domain} data is the data where only instances of a target class $C$ (class we want to predict) are labelled. Conversely, the \textit{out-of-domain} data contains instances of some set of classes, but not of the target class. Therefore, we prepare our data by replacing all labels \textit{B-C} and \textit{I-C} with \textit{O} in the training data, and in the validation data we replace all labels \textit{except} \textit{B-C} and \textit{I-C} with \textit{O}. Note that since we run the experiments for each of 18 Ontonotes classes, we perform this re-labelling for every experiment.
The validation data is still too large for our low-resourced scenario, so we use only a part of it for training. We sample our \textit{in-domain} training data as follows. We randomly select sentences from the re-labelled validation set until we obtain $N$ sentences with at least one instance of the class $C$. Note that sentences of the validation set are not guaranteed to have instances of $C$, so our training data can have some ``empty'' sentences, i.e. sentences where all words are labelled with \textit{O}. This sampling procedure allows keeping the proportion of instances of class \textit{C} close to the one of the general distribution.
In our preliminary experiments we noticed that such a sampling procedure leads to large variation in the final scores, because the size of the \textit{in-domain} training data can vary significantly. In order to reduce this variation we alter the sampling procedure. We define a function $pr(C)$ which computes the proportion of labels of a class $C$ in the validation set ($pr(C) \in [0, 1]$). Then we sample $N$ sentences containing instances of class $C$ and $\frac{N \times (1-pr(C))}{pr(C)}$ sentences without class $C$. Thus, we keep the proportion of instances of class $C$ in our \textit{in-domain} training dataset equal to that of the validation set. We use the same procedure when sampling training examples from the \textit{out-of-domain} data for the \textit{Protonet} model.
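The number of ``empty'' sentences needed to preserve the class proportion follows directly from this formula (a small sketch; the function name is ours):

```python
def n_empty_sentences(n_target, pr_c):
    """Given n_target sentences containing class C and pr_c, the
    proportion of C-sentences in the validation set, return the number
    of 'empty' sentences, N * (1 - pr_c) / pr_c, to sample alongside."""
    return round(n_target * (1 - pr_c) / pr_c)
```

For example, if class $C$ occurs in one of three sentences, 20 target sentences call for 40 ``empty'' ones, matching the earlier two-to-one example.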
\subsection{Design of experiments}
We conduct separate experiments for each of the 18 Ontonotes classes. For each class we conduct 4 experiments with different random seeds and report results averaged over the runs.
We design separate experiments for selecting hyper-parameters and the optimal number of training epochs. For that we selected three well-represented classes --- ``GPE'' (geopolitical entity), ``DATE'', and ``ORG'' (organization) --- and conducted \textit{validation} experiments on them. We selected training sets as described above and used the test set (consisting of $\approx 19{,}000$ sentences) to tune hyper-parameters and to stop the training. For the other classes we did not perform hyper-parameter tuning. Instead, we used the values acquired in the validation experiments with the three validation classes. In these experiments we used the test set only for computing the performance of trained models.
The motivation of such setup is the following. In many few-shot scenarios researchers report experiments where they train on a small training set, and tune the model on a very large validation set. We argue that this scenario is unrealistic, because if we had a large number of labelled examples in a real-world problem, it would be more efficient to use them for training, and not for validation. On the other hand, a more realistic scenario is to have a very limited number of labelled sentences overall. In that case we could still reserve a part of them for validation. However, we argue that this is also inefficient. If we have 20 examples and decide to train on 10 of them and validate on another 10, this validation will be inaccurate, because 10 examples are not enough to evaluate the performance of a model reliably. Therefore, our evaluation will be very noisy and is likely to result in sub-optimal values of hyper-parameters. On the other hand, additional 10 examples can boost the quality of a model, as it can be seen in Figure \ref{fig:10_vs_20}. Therefore, we assume that optimal hyperparameters are the same for all labels, and use the values we found in validation experiments.
\begin{table*}[ht!]
\begin{center}
\begin{tabular}{|l|ccccccc|}
\hline \bf Class name & \bf Base & \bf BaseProto & \bf WarmProtoZero & \bf Protonet & \bf WarmProto & \bf WarmBase & \bf WarmProto-CRF\\
\hline
\multicolumn{8}{|c|}{\textbf{Validation Classes}} \\
\hline
GPE & 69.75 $\pm$ 9.04 & 69.8 $\pm$ 4.16 & 60.1 $\pm$ 5.56 & 78.4 $\pm$ 1.19 & \textbf{83.62} $\pm$ \textbf{3.89} & 75.8 $\pm$ 6.2 & \underline{80.05} $\pm$ \underline{5.4} \\
DATE & 54.42 $\pm$ 3.64 & 50.75 $\pm$ 5.38 & 11.23 $\pm$ 4.57 & 56.55 $\pm$ 4.2 & \underline{61.68} $\pm$ \underline{3.38} & 56.32 $\pm$ 2.32 & \textbf{65.42} $\pm$ \textbf{2.82} \\
ORG & 42.7 $\pm$ 5.54 & 39.1 $\pm$ 7.5 & 17.18 $\pm$ 3.77 & 56.35 $\pm$ 2.86 & \underline{63.75} $\pm$ \underline{2.43} & 63.45 $\pm$ 1.79 & \textbf{69.2} $\pm$ \textbf{1.2} \\
\hline
\multicolumn{8}{|c|}{\textbf{Test Classes}} \\
\hline
EVENT & 32.33 $\pm$ 4.38 & 24.15 $\pm$ 4.38 & 4.85 $\pm$ 1.88 & 33.95 $\pm$ 5.68 & 33.85 $\pm$ 5.91 & \underline{35.15} $\pm$ \underline{4.04} & \textbf{45.2} $\pm$ \textbf{4.4} \\
LOC & 31.75 $\pm$ 9.68 & 24.0 $\pm$ 5.56 & 16.62 $\pm$ 7.18 & 42.88 $\pm$ 2.03 & \underline{49.1} $\pm$ \underline{2.4} & 40.67 $\pm$ 4.85 & \textbf{52.0} $\pm$ \textbf{4.34} \\
FAC & 36.7 $\pm$ 8.15 & 29.83 $\pm$ 5.58 & 6.93 $\pm$ 0.62 & 41.05 $\pm$ 2.74 & \underline{49.88} $\pm$ \underline{3.39} & 45.4 $\pm$ 3.01 & \textbf{56.85} $\pm$ \textbf{1.52} \\
CARDINAL & 54.82 $\pm$ 1.87 & 53.7 $\pm$ 4.81 & 8.12 $\pm$ 7.92 & 64.05 $\pm$ 1.61 & \underline{66.12} $\pm$ \underline{0.43} & 62.98 $\pm$ 3.5 & \textbf{70.43} $\pm$ \textbf{3.43} \\
QUANTITY & 64.3 $\pm$ 5.06 & 61.72 $\pm$ 4.9 & 12.88 $\pm$ 4.13 & 65.05 $\pm$ 8.64 & 67.07 $\pm$ 5.11 & \underline{69.65} $\pm$ \underline{5.8} & \textbf{76.35} $\pm$ \textbf{3.09} \\
NORP & 73.5 $\pm$ 2.3 & 72.1 $\pm$ 6.0 & 39.92 $\pm$ 10.5 & \underline{83.02} $\pm$ \underline{1.42} & \textbf{84.52} $\pm$ \textbf{2.79} & 79.53 $\pm$ 1.32 & 82.4 $\pm$ 1.15 \\
ORDINAL & 68.97 $\pm$ 6.16 & 71.65 $\pm$ 3.31 & 1.93 $\pm$ 3.25 & \textbf{76.08} $\pm$ \textbf{3.55} & 73.05 $\pm$ 7.14 & 69.77 $\pm$ 4.97 & \underline{75.52} $\pm$ \underline{5.11} \\
WORK\_OF\_ART & \underline{30.48} $\pm$ \underline{1.42} & 27.5 $\pm$ 2.93 & 3.4 $\pm$ 2.37 & 28.0 $\pm$ 3.33 & 23.48 $\pm$ 5.02 & 30.2 $\pm$ 1.27 & \textbf{32.25} $\pm$ \textbf{3.11} \\
PERSON & 70.05 $\pm$ 6.7 & 74.1 $\pm$ 5.32 & 38.88 $\pm$ 7.64 & \underline{80.53} $\pm$ \underline{2.15} & 80.42 $\pm$ 2.13 & 78.03 $\pm$ 3.98 & \textbf{82.32} $\pm$ \textbf{2.51} \\
LANGUAGE & \underline{72.4} $\pm$ \underline{5.53} & 70.78 $\pm$ 2.62 & 4.25 $\pm$ 0.42 & 68.75 $\pm$ 6.36 & 48.77 $\pm$ 17.42 & 65.92 $\pm$ 3.52 & \textbf{75.62} $\pm$ \textbf{7.22} \\
LAW & \underline{58.08} $\pm$ \underline{4.9} & 53.12 $\pm$ 4.54 & 2.4 $\pm$ 1.15 & 48.38 $\pm$ 8.0 & 50.15 $\pm$ 7.56 & \textbf{60.13} $\pm$ \textbf{6.08} & 57.72 $\pm$ 7.06 \\
MONEY & 70.12 $\pm$ 5.19 & 66.05 $\pm$ 1.66 & 12.48 $\pm$ 11.92 & 68.4 $\pm$ 6.3 & \underline{73.68} $\pm$ \underline{4.72} & 68.4 $\pm$ 5.08 & \textbf{79.35} $\pm$ \textbf{3.6} \\
PERCENT & 76.88 $\pm$ 2.93 & 75.55 $\pm$ 4.17 & 1.82 $\pm$ 1.81 & 80.18 $\pm$ 4.81 & \underline{85.3} $\pm$ \underline{3.68} & 79.2 $\pm$ 3.76 & \textbf{88.32} $\pm$ \textbf{2.76} \\
PRODUCT & 43.6 $\pm$ 7.21 & \underline{44.35} $\pm$ \underline{3.48} & 3.75 $\pm$ 0.58 & 39.92 $\pm$ 7.22 & 35.1 $\pm$ 9.35 & 43.4 $\pm$ 8.43 & \textbf{49.32} $\pm$ \textbf{2.92} \\
TIME & 35.93 $\pm$ 6.35 & 35.8 $\pm$ 2.61 & 8.02 $\pm$ 3.05 & 50.15 $\pm$ 5.12 & \underline{56.6} $\pm$ \underline{2.28} & 45.62 $\pm$ 5.64 & \textbf{59.8} $\pm$ \textbf{0.76} \\
\hline
\end{tabular}
\end{center}
\caption{Results of the experiments in terms of chunk-based $F_1$-score. Bold numbers denote the best score for a particular class, and underlined numbers the second best. Numbers are averaged across 4 runs, with standard deviations reported.}
\label{table:main_results}
\end{table*}
\begin{figure*}[ht!]
\includegraphics[scale=0.5]{figure_exp.png}
\caption{Performance of models trained on 10 and 20 sentences.}
\label{fig:10_vs_20}
\end{figure*}
\subsection{Model parameters}
In all our experiments we set $N$ (the number of instances of the target class in the \textit{in-domain} training data) to 20. This number of examples is small enough to be labelled by hand easily, and at the same time it produces models of reasonable quality. Figure \ref{fig:10_vs_20} compares the performance of models trained on 10 and 20 examples. We see a significant boost in performance in the latter case. Moreover, in the rightmost plot the learning curve for the smaller dataset goes down after the 40th epoch, which does not happen when the larger dataset is used. This shows that $N=20$ is a reasonable trade-off between model performance and the cost of labelling.
In the \textit{Protonet} model we set $p$ to 0.5. Therefore, the model is trained on instances of the target class $C$ in half of the steps, and in the other half it is shown instances of some other randomly chosen class.
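This alternating scheme can be sketched as follows (an illustrative Python sketch under our reading of the setup, not the authors' training code; all names are hypothetical):

```python
import random

def sample_episode_class(target_class, other_classes, p=0.5, rng=random):
    """Pick the class whose instances form the next training episode.

    With probability p the target class C is used; otherwise a class is
    drawn uniformly from the remaining classes. Illustrative only.
    """
    if rng.random() < p:
        return target_class
    return rng.choice(other_classes)

# Sanity check: roughly half the episodes use the target class.
counts = {"ORG": 0}
rng = random.Random(0)
for _ in range(10000):
    c = sample_episode_class("ORG", ["GPE", "DATE", "PERSON"], p=0.5, rng=rng)
    counts[c] = counts.get(c, 0) + 1
```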
We optimize all models with the Adam optimizer in its {\tt pytorch} implementation. The Base and WarmBase methods use batches of 10 sentences during in-domain training. We train the out-of-domain RNN baseline (the warm-up for the WarmBase and WarmProto* models) with batches of size 32. All models based on the prototypical network use batches of size 100 --- 40 sentences in the support set and 60 in the query set.
We also use L2-regularization with a multiplier of 0.1. All models are evaluated in terms of the chunk-based $F_1$-score for the target class \cite{conll2003}.
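A minimal sketch of how a chunk-based $F_1$-score of this kind can be computed from BIO tag sequences (the actual evaluation follows \cite{conll2003}; this simplified version and its function names are ours):

```python
def extract_chunks(tags):
    """Collect (start, end, type) spans from a BIO-tagged sequence."""
    chunks, start, ctype = [], None, None
    for i, tag in enumerate(tags + ["O"]):          # sentinel flushes the last chunk
        if tag.startswith("B-") or tag == "O" or (
                tag.startswith("I-") and tag[2:] != ctype):
            if start is not None:
                chunks.append((start, i, ctype))
            start, ctype = (i, tag[2:]) if tag[0] in "BI" else (None, None)
    return set(chunks)

def chunk_f1(gold, pred):
    """Chunk-based F1: a predicted entity counts only if the whole span
    and its type match a gold entity, i.e. entities are scored as wholes."""
    g, p = extract_chunks(gold), extract_chunks(pred)
    tp = len(g & p)
    if tp == 0:
        return 0.0
    prec, rec = tp / len(p), tp / len(g)
    return 2 * prec * rec / (prec + rec)
```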
The open-source implementation of the models is available online.\footnote{\url{https://github.com/Fritz449/ProtoNER}}
\section{Results}
\label{sec:results}
\subsection{Performance of models}
We selected hyperparameters in the validation experiment and then used them when training models for the other classes. We use the following values. The initial value of $b_O$ (the logit for the \textit{O} class) is set to $-4$. We use dropout with rate 0.5 in the LSTM cells in all our experiments. The dimensionality $M$ of the embedding space for all models based on the prototypical network is set to 64. For all models we use a learning rate of $3\cdot10^{-3}$.
Table \ref{table:main_results} shows the results of our experiments for all classes and methods. It is clearly seen that 20 sentences are not enough to train the baseline RNN+CRF model. Moreover, we see that the baseline prototypical network (\textit{BaseProto}) performs close to the RNN baseline. This shows that 20 instances of the target class are also not enough to construct a reliable prototype.
On the other hand, if a prototypical network is occasionally exposed to instances of other classes, as it is done in \textit{Protonet} model, then the prototypes it constructs are better at identifying the target class. \textit{Protonet} shows better results than \textit{Base} and \textit{BaseProto} on many classes.
The transfer learning baseline (\textit{WarmBase}) achieves results comparable with those of \textit{Protonet}. This allows us to conclude that information on the structure of objects of other classes is helpful even for the conventional RNN baseline, and that pre-training on out-of-domain data is useful.
The prototypical network pre-trained on out-of-domain data (\textit{WarmProto}) beats \textit{WarmBase} and \textit{Protonet} in more than half of the experiments. Analogously to the transfer learning baseline, it benefits from the use of out-of-domain data. Unfortunately, such a model is not suitable for zero-shot learning --- the \textit{WarmProtoZero} model performs below all other models, including the RNN baseline.
Finally, if we enable the CRF layer of the \textit{WarmProto} model, the performance grows sharply. As we can see, \textit{WarmProto-CRF} beats all other models in almost all experiments.
Thus, the prototypical network is more effective than the RNN baseline in the setting where in-domain data is extremely limited.
\subsection{Influence of BIO labelling}
When such a small number of entities is available, the BIO labelling used in NER datasets can harm the performance of models. First of all, the majority of entities may contain only one word, so the number of \textit{I} tags can be too small if there are only 20 entities overall. This can dramatically decrease the quality of predicting these tags. Another potential problem is that words labelled with \textit{B} and \textit{I} tags can be similar, and a model can have difficulties distinguishing between them using prototypes. Again, this effect can be amplified by the fact that a very small number of instances is used for training, so the prototypes themselves have high variance.
\begin{table*}[ht!]
\begin{center}
\begin{tabular}{|l|cccc|}
\hline \bf Class name & \bf WarmBase + BIO & \bf WarmBase + TO & \bf WarmProto + BIO & \bf WarmProto + TO\\
\hline
\multicolumn{5}{|c|}{\textbf{Validation Classes}} \\
\hline
GPE & 75.8 $\pm$ 6.2 & 74.8 $\pm$ 4.16 & \textbf{83.62} $\pm$ \textbf{3.89} & \underline{82.02} $\pm$ \underline{0.42} \\
DATE & 56.32 $\pm$ 2.32 & 58.02 $\pm$ 2.83 & \underline{61.68} $\pm$ \underline{3.38} & \textbf{64.68} $\pm$ \textbf{3.65} \\
ORG & 63.45 $\pm$ 1.79 & 62.17 $\pm$ 2.9 & \underline{63.75} $\pm$ \underline{2.43} & \textbf{65.22} $\pm$ \textbf{2.83} \\
\hline
\multicolumn{5}{|c|}{\textbf{Test Classes}} \\
\hline
EVENT & \underline{35.15} $\pm$ \underline{4.04} & \textbf{35.4} $\pm$ \textbf{6.04} & 33.85 $\pm$ 5.91 & 34.75 $\pm$ 2.56 \\
LOC & 40.67 $\pm$ 4.85 & 40.08 $\pm$ 2.77 & \textbf{49.1} $\pm$ \textbf{2.4} & \underline{49.05} $\pm$ \underline{1.04} \\
FAC & \underline{45.4} $\pm$ \underline{3.01} & 44.88 $\pm$ 5.82 & \textbf{49.88} $\pm$ \textbf{3.39} & 43.52 $\pm$ 3.09 \\
CARDINAL & 62.98 $\pm$ 3.5 & 63.27 $\pm$ 3.66 & \underline{66.12} $\pm$ \underline{0.43} & \textbf{69.2} $\pm$ \textbf{1.51} \\
QUANTITY & \textbf{69.65} $\pm$ \textbf{5.8} & \underline{69.3} $\pm$ \underline{3.41} & 67.07 $\pm$ 5.11 & 67.97 $\pm$ 2.98 \\
NORP & 79.53 $\pm$ 1.32 & 80.75 $\pm$ 2.38 & \textbf{84.52} $\pm$ \textbf{2.79} & \underline{84.5} $\pm$ \underline{1.61} \\
ORDINAL & 69.77 $\pm$ 4.97 & 70.9 $\pm$ 6.34 & \underline{73.05} $\pm$ \underline{7.14} & \textbf{74.7} $\pm$ \textbf{4.94} \\
WORK\_OF\_ART & \textbf{30.2} $\pm$ \textbf{1.27} & \underline{25.78} $\pm$ \underline{4.07} & 23.48 $\pm$ 5.02 & 25.6 $\pm$ 2.86 \\
PERSON & 78.03 $\pm$ 3.98 & 76.0 $\pm$ 3.12 & \textbf{80.42} $\pm$ \textbf{2.13} & \underline{78.8} $\pm$ \underline{0.26} \\
\hline
\end{tabular}
\end{center}
\caption{$F_1$-scores of the WarmBase and WarmProto models trained on data with and without BIO labelling. Bold numbers denote the best score for a particular class, and underlined numbers the second best. Numbers are averaged across 4 runs, with standard deviations reported.}
\label{table:bio_tagging}
\end{table*}
In order to check whether these problems hamper the performance of our models, we performed another set of experiments. We removed the BIO tagging --- for the target class $C$ we replaced both \textit{B-C} and \textit{I-C} with \textit{C}. This \textbf{TO} (tag/other) labelling reduces sparsity in the training data. We did so for both the in-domain and out-of-domain training sets. The test set remained the same, because the chunk-based $F_1$-score we use for evaluation is not affected by the difference between BIO and TO labelling: it always considers a named entity as a whole.
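The relabelling itself is a one-line transformation; a minimal sketch (our illustration, not the original preprocessing code):

```python
def bio_to_to(tags, target):
    """Collapse B-<target> and I-<target> into the single tag <target>;
    other tags (here just O, since a single class is predicted) pass through."""
    return [target if t in (f"B-{target}", f"I-{target}") else t for t in tags]
```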
Table \ref{table:bio_tagging} shows the results of the WarmBase and WarmProto models trained on BIO-labelled and TO-labelled data.
It turns out that in the majority of cases the differences between the $F_1$-scores of these models are not significant. Therefore, BIO labelling does not substantially affect our models.
\section{Conclusions}
\label{sec:conclusions}
In this work we suggested solving the task of NER with a \textit{metric learning} technique actively used in other machine learning tasks but rarely applied to NLP. We adapted a metric learning method, namely the prototypical network, originally used for image classification, to the analysis of text. It projects all objects into a vector space which preserves distances between classes, so objects of one class are mapped to similar vectors. These mappings form a \textit{prototype} of a class, and at test time we assign new objects to classes by the similarity of an object's representation to the class prototype.
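A minimal sketch of the prototype mechanism described above, assuming token embeddings are already produced by some encoder (illustrative code, not the released implementation):

```python
import numpy as np

def build_prototypes(embeddings, labels):
    """Prototype of a class = mean of the support-set embeddings of that class."""
    return {c: np.mean([e for e, l in zip(embeddings, labels) if l == c], axis=0)
            for c in set(labels)}

def assign(embedding, prototypes):
    """Assign a query object to the class with the nearest prototype
    (squared Euclidean distance, as in prototypical networks)."""
    return min(prototypes, key=lambda c: np.sum((embedding - prototypes[c]) ** 2))
```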
In addition to that, we considered the task of NER in a semi-supervised setting --- we identified our target classes in text using information about words of other classes. We showed that the prototypical network is more effective in such a setting than the state-of-the-art RNN model. Unlike the RNN, the prototypical network is suitable in cases where an extremely small amount of data is available.
According to its original formulation, the prototypical network can be used as a zero-shot learning method, i.e. a method which can assign an object to a particular class without seeing instances of this class at training time. We experimented with the zero-shot setting for NER and showed that prototypical networks can in principle be used for zero-shot text classification, although there is still much room for improvement. We consider this a promising direction for future research.
We saw that prototypical networks show considerably different performance on different classes of named entities. It would be interesting to perform a more thorough qualitative analysis to identify the characteristics of textual data for which this method is more suitable.
Finally, in our current experiments we trained models to predict entities of only a single class. In our future work we would like to check if the good performance of prototypical network scales to multiple classes. We will focus on training a prototypical network that can predict all classes of Ontonotes or another NER dataset at once.
\section{Introduction}
\label{Introduction}
Pruning~\cite{lecun1990optimal,hassibi1993second, han2015deep,heyang} a trained neural network is a common approach to network compression. In particular, for CNNs, channel pruning refers to pruning the filters in the convolutional layers. There are several critical factors in channel pruning.
\textbf{Procedures}. One-shot method~\cite{li2016pruning}: train a network from scratch; use a certain criterion to calculate the filters' \textit{Importance Score} and prune the filters which have a small \textit{Importance Score}; after additional training, the pruned network can recover its accuracy to some extent. Iterative method~\cite{lecun1990optimal,he2018soft,frankle2018the}: unlike one-shot methods, these prune and fine-tune a network alternately.
\textbf{Criteria}. The filters' \textit{Importance Score} can be defined by a given criterion. Based on different ideas, many types of pruning criteria have been proposed, such as Norm-based~\cite{li2016pruning}, Activation-based~\cite{hu2016network,luo2017entropy}, Importance-based~\cite{molchanov2016pruning, molchanov2019importance}, BN-based~\cite{liu2017learning} and so on.
\textbf{Strategy}. Layer-wise pruning: in each layer, we sort the filters by a given criterion and prune those which have a small \textit{Importance Score}. Global pruning: different from layer-wise pruning, global pruning~\cite{liu2017learning,hecap} sorts the filters from all layers by their \textit{Importance Score} and prunes them.
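The difference between the two strategies can be sketched as follows (illustrative code; the function names are ours, and "pruning" here means merely selecting filter indices from precomputed \textit{Importance Score} lists):

```python
def layerwise_prune(scores_per_layer, ratio):
    """Layer-wise: in every layer, drop the `ratio` fraction of filters
    with the smallest Importance Score."""
    pruned = {}
    for layer, scores in scores_per_layer.items():
        k = int(len(scores) * ratio)
        pruned[layer] = sorted(range(len(scores)), key=scores.__getitem__)[:k]
    return pruned

def global_prune(scores_per_layer, ratio):
    """Global: rank all filters of all layers together, then drop the
    smallest `ratio` fraction, so layers may lose unequal numbers of filters."""
    allf = [(s, layer, i) for layer, scores in scores_per_layer.items()
            for i, s in enumerate(scores)]
    allf.sort()
    k = int(len(allf) * ratio)
    pruned = {layer: [] for layer in scores_per_layer}
    for s, layer, i in allf[:k]:
        pruned[layer].append(i)
    return pruned
```

Note that with the same overall ratio, global pruning can remove many more filters from one layer than from another, while layer-wise pruning removes the same fraction everywhere.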
\begin{table}[h]
\centering
\caption{
An example to illustrate the phenomenon that different criteria may select similar sequences of filters for pruning. Taking VGG16~(3$^{\rm rd}$ Conv) and ResNet18~(12$^{\rm th}$ Conv) under the Norm-based criteria as examples, the indices of the pruned filters~(the ranks of the filters' \textit{Importance Score}) are almost the same, which leads to similar pruned structures.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{lllll}
\hline
Criteria & \multicolumn{1}{l}{Model} & \multicolumn{1}{l}{Pruned Filters' Index~(Top 8)}& \multicolumn{1}{l}{Model} & \multicolumn{1}{l}{Pruned Filters' Index~(Top 8)}\\
\hline
$\ell_1$ &ResNet18 &[111, 212, 33, 61, 68, 152, 171, 45] &VGG16 &[102, 28, 9, 88, 66, 109, 86, 45]\\
$\ell_2$ &ResNet18 &[111, 33, 212, 61, 171, 42, 243, 129] &VGG16 &[102, 28, 88, 9, 109, 66, 86, 45]\\
$\mathbf{GM}$ &ResNet18 &[111, 212, 33, 61, 68, 45, 171, 42] &VGG16 &[102, 28, 9, 88, 109, 66, 45, 86]\\
$\mathbf{Fermat}$ &ResNet18 &[111, 212, 33, 61, 45, 171, 42, 68] &VGG16 &[102, 28, 88, 9, 109, 66, 45, 86]\\
\hline
\end{tabular}%
}
\label{filtersort}%
\vspace{-0.4cm}
\end{table}%
\begin{figure*}[t]
\centering
\includegraphics[width=1\textwidth]{app.jpg}
\vspace{-0.7cm}
\caption{Visualization of the Applicability problem, \textit{i.e.,} the histograms of the \textit{Importance Score} measured by different types of pruning criteria~(such as BN\_$\gamma$, Taylor $\ell_2$ and $\ell_2$ norm). The \textit{Importance Score} values in each layer are so close that it is hard for these criteria to distinguish redundant filters well in layer-wise pruning.}
\vspace{-0.2cm}
\label{fig:app}
\end{figure*}
%
In this work, we conduct our investigation on a variety of pruning criteria.
%
As one of the simplest and most effective channel pruning criteria, $\ell_1$ pruning~\cite{li2016pruning} is widely used in practice. The core idea of this criterion is to sort the $\ell_1$ norms of the filters in one layer and then prune the filters with a small $\ell_1$ norm. Similarly, there is $\ell_2$ pruning, which instead leverages the $\ell_2$ norm~\cite{frankle2018the,he2018soft}. $\ell_1$ and $\ell_2$ can be seen as criteria which use the absolute \textit{Importance Score} of filters.
Through a study of the distribution of norms, \cite{heyang} demonstrates that these criteria should satisfy two conditions: (1)~the variance of the norms of the filters cannot be too small; (2)~the minimum norm of the filters should be small enough. Since these two conditions do not always hold, a new criterion considering the relative \textit{Importance Score} of the filters is proposed in~\cite{heyang}. Since this criterion uses the Fermat point~(\textit{i.e.}, the geometric median~\cite{cohen2016geometric}), we call this method $\mathbf{Fermat}$. Due to the high computational cost of the Fermat point, \cite{heyang} further relaxed $\mathbf{Fermat}$ and introduced another criterion, denoted as $\mathbf{GM}$. To define the pruning criteria, let $F_{ij}\in \mathbb{R}^{N_i\times k\times k}$ represent the $j^{\rm th}$ filter of the $i^{\rm th}$ convolutional layer, where $N_i$ is the number of input channels of the $i^{\rm th}$ layer and $k$ denotes the kernel size of the convolutional filter. The $i^{\rm th}$ layer contains $N_{i+1}$ filters. For each criterion, details are shown in Table~\ref{tab:criteria}, where $\mathbf{F}$ denotes the Fermat point of the $F_{ij}$ in Euclidean space. These four pruning criteria are called Norm-based criteria in this paper as they utilize norms in their design.
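The four criteria can be sketched as follows (our illustrative implementation; approximating the Fermat point by Weiszfeld iterations is an assumption on our part, not necessarily the computation used in~\cite{heyang}):

```python
import numpy as np

def importance_scores(filters, criterion):
    """Importance Score per filter for the Norm-based criteria.
    `filters` has shape (N_out, N_in, k, k); each filter is flattened first."""
    F = filters.reshape(len(filters), -1)
    if criterion == "l1":
        return np.abs(F).sum(axis=1)
    if criterion == "l2":
        return np.linalg.norm(F, axis=1)
    if criterion == "gm":
        # Sum of distances to all other filters (self-distance is zero).
        return np.array([np.linalg.norm(F - f, axis=1).sum() for f in F])
    if criterion == "fermat":
        m = F.mean(axis=0)                     # Weiszfeld starting point
        for _ in range(50):
            d = np.linalg.norm(F - m, axis=1) + 1e-12
            m = (F / d[:, None]).sum(axis=0) / (1.0 / d).sum()
        return np.linalg.norm(F - m, axis=1)
    raise ValueError(criterion)
```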
In previous works~\cite{luo2017thinet,han2015deep,ding2019global,dong2017learning,renda2020comparing}, including those proposing the criteria mentioned above, the main concerns commonly consist of (a)~how much the model was compressed; (b)~how much performance was restored; (c)~the inference efficiency of the pruned network; and (d)~the cost of finding the pruned network. However, few works discuss the following two blind spots of pruning criteria:
\begin{wraptable}{r}{6cm}
\vspace{-0.7cm}
\caption{Norm-based pruning criteria. }
\vspace{-0.2cm}
\begin{center}
\begin{small}
\begin{tabular}{ll}
\hline
Criterion & Details of \textit{Importance Score}\\
\hline
$\ell_1$~\cite{li2016pruning} & $||F_{ij}||_1$\\
$\ell_2$~\cite{frankle2018the} & $||F_{ij}||_2$\\
$\mathbf{Fermat}$~\cite{heyang} & $||\mathbf{F} - F_{ij}||_2$\\
$\mathbf{GM}$~\cite{heyang} & $\sum_{k=1}^{N_{i+1}}||F_{ik}-F_{ij}||_2$\\
\hline
\end{tabular}
\end{small}
\end{center}
\label{tab:criteria}
\vspace{-0.4cm}
\end{wraptable}
\textbf{(1)~Similarity: What are the actual differences among these pruning criteria?} Taking VGG16 and ResNet18 on ImageNet as examples, we show the ranks of the filters' \textit{Importance Score} under different criteria in Table~\ref{filtersort}. It is obvious that they have almost the same sequence, leading to similar pruned structures. In this situation, the criteria using the absolute \textit{Importance Score} of filters~($\ell_1$, $\ell_2$) and the criteria using the relative \textit{Importance Score} of filters~($\mathbf{Fermat}$, $\mathbf{GM}$) may not be significantly different.
\textbf{(2)~Applicability: How applicable are these pruning criteria when pruning CNNs?} Consider a toy example w.r.t. the $\ell_2$ criterion. If the $\ell_2$ norms of the filters in one layer are 0.9, 0.8, 0.4 and 0.01, then according to the \textit{smaller-norm-less-informative assumption}~\cite{ye2018rethinking}, it is apparent that we should prune the last filter. However, if the norms are close, such as 0.91, 0.92, 0.93 and 0.92, it is hard to determine which filter should be pruned even though the first one is the smallest. In Fig.~\ref{fig:app}, we demonstrate some real examples, \textit{i.e.,} the visualization of the Applicability problem for different networks and criteria.
In this paper, we provide comprehensive observations and in-depth analysis of these two blind spots. Before that, in Section~\ref{distribution}, we propose an assumption about the parameter distribution of CNNs, called the \textit{Convolution Weight Distribution Assumption}~(CWDA), and use it as a theoretical tool to analyze the two blind spots. We explore the Similarity and Applicability problems of pruning criteria in the following order: (1)~Norm-based criteria~(layer-wise pruning) in Section~\ref{sec:norm}; (2)~other types of criteria~(layer-wise pruning) in Section~\ref{sec:others}; and (3)~different types of criteria~(global pruning) in Section~\ref{sec:global}. Last but not least, we provide further discussion on (i)~the conditions for CWDA to be satisfied and (ii)~how our findings help the community in Section~\ref{Discussion}. In order to focus on the pruning criteria, all pruning experiments are based on the relatively simple one-shot pruning procedure.
The main \textbf{contributions} of this work are two-fold:
\textbf{(1)}~We analyze the Applicability problem and the Similarity of different types of pruning criteria. These two blind spots can guide and motivate researchers to design more reasonable criteria. We also break some stereotypes, showing for example that the results of $\ell_1$ and $\ell_2$ pruning are not always similar.
\textbf{(2)}~We propose and verify an assumption called CWDA, which reveals that well-trained convolutional filters approximately follow a Gaussian-like distribution. Using CWDA, we explain the multiple observations about these two blind spots theoretically.
\section{Weight Distribution Assumption}
\label{distribution}
In this section, we propose and verify an assumption about the parameters distribution of the convolutional filters.
\textbf{(Convolution Weight Distribution Assumption)}~Let $F_{ij}\in \mathbb{R}^{N_i\times k\times k}$ be the $j^{\rm th}$ well-trained filter of the $i^{\rm th}$ convolutional layer. In general\footnote{In Section~\ref{Discussion}, we make further discussion and analysis on the conditions for CWDA to be satisfied.}, in the $i^{\rm th}$ layer, the $F_{ij}~(j=1,2,...,N_{i+1})$ are i.i.d. and follow the distribution:
\begin{equation}
F_{ij} \sim \mathbf{N}(\mathbf{0}, \mathbf{\Sigma}^i_{\text{diag}} + \epsilon\cdot\mathbf{\Sigma}^i_{\text{block}}),
\label{cwda_org}
\end{equation}
where $\mathbf{\Sigma}^i_{\text{block}} = \mathrm{diag}(K_1,K_2,...,K_{N_i})$ is a block diagonal matrix and the diagonal elements of $\mathbf{\Sigma}^i_{\text{block}}$ are 0. $\epsilon$ is a small constant. The values of the off-block-diagonal elements are 0 and $K_l \in R^{k^2\times k^2}, l=1,2,...,N_i$. $\mathbf{\Sigma}^i_{\text{diag}}= \mathrm{diag}(a_1,a_2,...,a_{N_i \times k \times k})$ is a diagonal matrix and the elements of $\mathbf{\Sigma}^i_{\text{diag}}$ are close enough.
This assumption is based on the observation shown in the Fig.~\ref{fig:cwda}. To estimate $\mathbf{\Sigma}^i_{\text{diag}} + \epsilon\cdot\mathbf{\Sigma}^i_{\text{block}}$, we use the correlation matrix $FF^T$ where $F \in \mathbb{R}^{(N_i\times k \times k) \times N_{i+1}}$ denotes all the parameters in $i^{\rm th}$ layer. Taking a convolutional layer of ResNet18 trained on ImageNet as an example, we find that $FF^T$ is a block diagonal matrix. Specifically, each block is a $k^2 \times k^2$ matrix and the off-diagonal elements are close to 0. We visualize the $j^{\rm th}$ filter $F_{ij}\in \mathbb{R}^{N_i\times k \times k}$ in $i^{\rm th}$ layer in Fig.~\ref{fig:cwda}(c), and this phenomenon reveals that the parameters in the same channel of $F_{ij}$ tend to be linearly correlated, and the parameters of any two different channels~(yellow and green channel in Fig.~\ref{fig:cwda}(c)) in $F_{ij}$ only have a low linear correlation.
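The block-diagonal structure of $FF^T$ can be checked numerically as follows (a hedged sketch of the diagnostic, not the code used for the figure):

```python
import numpy as np

def correlation_blockness(weights):
    """Compare the average magnitude of in-block vs off-block entries of
    F F^T, where F stacks the flattened filters column-wise and the blocks
    are k^2 x k^2 (one per input channel)."""
    n_out, n_in, k, _ = weights.shape
    F = weights.reshape(n_out, -1).T              # shape (N_i*k*k, N_{i+1})
    C = np.abs(F @ F.T)
    block = np.kron(np.eye(n_in), np.ones((k * k, k * k)))
    in_block = C[block == 1].mean()
    off_block = C[block == 0].mean()
    return in_block, off_block
```

For a layer exhibiting the structure of Fig.~3, the in-block average should dominate the off-block one.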
\begin{wrapfigure}{r}{7cm}
\includegraphics[width=0.95\linewidth]{cwda.jpg}
\vspace{-0.5cm}
\caption{(a-b)~Visualization of $FF^T$ in ResNet-18 trained on ImageNet dataset. More experiments can be found in Appendix~\ref{app:diag_matrix}. These experiments are based on torchvison model zoo~\cite{pytorch}, which can guarantee the generality and reproducibility. (c)~A convolutional filter. $k$ is the kernel size and $N_i$ denotes the number of input channels.}
\label{fig:cwda}
\vspace{-0.6cm}
\end{wrapfigure}
\subsection{Statistical test for CWDA}
\label{Statistical test}
In fact, CWDA is not easy to verify directly; \textit{e.g.}, for ResNet164 trained on Cifar100, the number of filters in the first stage is only 16, which is too small to estimate the statistics in CWDA accurately. Thus, we consider verifying four \textbf{necessary conditions} of CWDA:
(1)~\textbf{Gaussian.}~Whether the weights of $F_{ij}$ approximately follow a Gaussian-like distribution;
(2)~\textbf{Variance.}~Whether the variance of the diagonal elements of $\Sigma_{\text{diag}}$ is small enough;
(3)~\textbf{Mean.}~Whether the mean of the weights of $F_{ij}$ is close to 0;
(4)~\textbf{The magnitude of $\epsilon$.}~Whether $\epsilon$ is small enough.
The results of the tests are shown in Appendix~\ref{app:Statistical Test}, where we consider a variety of factors for the statistical tests, including different network structure, optimizer, regularization, initialization, dataset, training strategy, and other tasks in computer vision~(\textit{e.g}., semantic segmentation, detection and so on). The test results show that CWDA has a great generality for CNNs.
\section{About the Norm-based criteria}
\label{sec:norm}
We start from the criteria in Table~\ref{tab:criteria}, which are widely cited and compared~\cite{liu2020joint,li2020group,he2020learning,liu2020rethinking,li2020eagleeye}.
\subsection{Similarity}
\label{Experiment and theory}
In this section, we further verify the observation that the Norm-based pruning criteria in Table~\ref{tab:criteria} are highly similar, from two perspectives. Empirically, we conducted a large number of image classification experiments to investigate the similarities. Theoretically, we rigorously prove the similarities of the criteria in Table~\ref{tab:criteria} under layer-wise pruning, assuming CWDA.
\begin{wrapfigure}{r}{7cm}
\centering
\includegraphics[width=1\linewidth]{extra_exp.jpg}
\vspace{-0.35cm}
\caption{Test accuracy of ResNet56 on CIFAR10/100 with different pruning ratios. ``L1 pruned'' and ``L1 tuned'' denote the test accuracy of ResNet56 after $\ell_1$ pruning and after fine-tuning, respectively. If the ratio is 0.5, we prune 50\% of the filters in all layers.}
\label{fig:exp_pruned_tuned2}
\vspace{-0.2cm}
\end{wrapfigure}
\textbf{Empirical Analysis}. (1)~In Fig.~\ref{fig:exp_pruned_tuned2}, we show the test accuracy of ResNet56 after pruning and after fine-tuning under different pruning ratios and datasets. The test accuracy curves of the different pruning criteria at both stages are very close under all pruning ratios. This implies that the networks pruned using different Norm-based criteria are very similar, and hence that there are strong similarities among these pruning criteria. Experiments with other commonly used pruning-ratio configurations can be found in Appendix~\ref{app:cls}.
(2)~In Fig.~\ref{fig:vggmore}, we show the Spearman's rank correlation coefficient\footnote{Sp is a nonparametric measure of ranking correlation; it assesses how well the relationship between two variables can be described by a monotonic function --- in this paper, between the filter ranking sequences of the same layer under two criteria.}~(Sp) between different pruning criteria. The Sp in most convolutional layers is above 0.9, which means the network structures are almost the same after pruning. Note that the Sp in transition layers is relatively small; a transition layer is a layer where the dimensions of the filters change, like the layer between stage 1 and stage 2 of a ResNet. The reason for this phenomenon may be that the layers in these areas are sensitive. It is interesting, but it does not greatly impact the structural similarity of the whole pruned network. Similar observations are shown in Fig.~2 in \cite{ding2019global} and in Fig.~6 and Fig.~10 in \cite{li2016pruning}.
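The Sp between two criteria can be computed directly from the two lists of \textit{Importance Score} in a layer; a minimal tie-free sketch (illustrative --- in practice one would use a library routine):

```python
def spearman(x, y):
    """Spearman's rank correlation: Pearson correlation of the ranks.
    Ties are ignored for simplicity, since the scores here are continuous."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```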
\begin{figure*} [htbp]
\vspace{-0.1cm}
\centering
\includegraphics[width=0.9\linewidth]{sp_network.jpg}
\centering
\caption{Spearman's rank correlation coefficient~(Sp) between different pruning criteria on several networks and datasets~(more experiments can be found in Appendix~\ref{app:sp_network}). }
\vspace{-0.2cm}
\label{fig:vggmore}
\end{figure*}
\textbf{Theoretical Analysis}. Besides the experimental verification, the similarities among the criteria in Table~\ref{tab:criteria} under layer-wise pruning can also be proved theoretically. Let $C_1$ and $C_2$ be two pruning criteria used to calculate the \textit{Importance Score} of all convolutional filters in one layer. If they produce similar ranks of \textit{Importance Score}, we say that $C_1$ and $C_2$ are \textit{approximately monotonic} to each other and write $C_1 \cong C_2$. In Section~\ref{Experiment and theory}, we used the Sp to describe this relationship, but it is hard to analyze theoretically. Therefore, we focus on a stronger condition. Let $\mathbf{X} = (x_1,x_2,...,x_k)$ and $\mathbf{Y} = (y_1,y_2,...,y_k)$ be two given sequences\footnote{Since $\mathbf{X}$ is not a random variable here, $\mathbb{E}(\mathbf{X})$ and $\mathbf{Var}(\mathbf{X})$ denote the average value $\sum_{i=1}^kx_i/k$ and the sample variance $\sum_{i=1}^k(x_i-\mathbb{E}(\mathbf{X}))^2/(k-1)$, respectively.}.
We first normalize their magnitudes, \textit{i.e.}, let $\widehat{\mathbf{X}} = \mathbf{X}/\mathbb{E}(\mathbf{X})$ and $\widehat{\mathbf{Y}} = \mathbf{Y}/\mathbb{E}(\mathbf{Y})$. This operation does not change the ranking sequences of the elements of $\mathbf{X}$ and $\mathbf{Y}$, because $\mathbb{E}(\mathbf{X})$ and $\mathbb{E}(\mathbf{Y})$ are constants, \textit{i.e.}, $\mathbf{\widehat{X}} \cong \mathbf{\widehat{Y}} \Leftrightarrow \mathbf{X} \cong \mathbf{Y}$. After that, if both $\mathbf{Var}(\widehat{\mathbf{X}}/\widehat{\mathbf{Y}})$ and $\mathbf{Var}(\widehat{\mathbf{Y}}/\widehat{\mathbf{X}})$ are small enough, then the Sp between $\mathbf{X}$ and $\mathbf{Y}$ is close to 1, where $\widehat{\mathbf{X}}/\widehat{\mathbf{Y}} = (\widehat{x_1}/\widehat{y_1},..,\widehat{x_k}/\widehat{y_k})$. The reason is that in this situation the ratios $\widehat{\mathbf{X}}/\widehat{\mathbf{Y}}$ and $\widehat{\mathbf{Y}}/\widehat{\mathbf{X}}$ are close to two constants $a,b$: for any $1 \leq i \leq k$, $\widehat{x_i} \approx a\cdot \widehat{y_i}$ and $\widehat{y_i} \approx b\cdot \widehat{x_i}$, so $ab \approx 1$ and $a,b \neq 0$. Therefore, there exists an \textit{approximately monotonic} (linear) mapping from $\widehat{y_i}$ to $\widehat{x_i}$, which makes the Sp between $\mathbf{X}$ and $\mathbf{Y}$ close to 1. With this basic fact, we propose Theorem~\ref{theo:similarity-layer}, which implies that these Norm-based pruning criteria produce almost the same ranks of \textit{Importance Score}.
\begin{theorem}\label{theo:similarity-layer}
Let $n-$dimension random variable $X$ meet CWDA, and the pair of criteria $(C_1,C_2)$ is one of $(\ell_1,\ell_2)$, $(\ell_2,\mathbf{Fermat})$ or $(\mathbf{Fermat},\mathbf{GM})$, we have
\begin{equation}
\mathbf{max}\left\{\mathbf{Var}_{X}\left(\frac{\widehat{C}_2(X)}{\widehat{C}_1(X)}\right),\mathbf{Var}_{X}\left(\frac{\widehat{C}_1(X)}{\widehat{C}_2(X)}\right) \right\}\lesssim B(n),
\end{equation}
\label{theo:bound}
where $\widehat{C}_1(X)$ denotes $C_1(X)/\mathbb{E}(C_1(X))$ and $\widehat{C}_2(X)$ denotes $C_2(X)/\mathbb{E}(C_2(X))$. $B(n)$ denotes the upper bound of the left-hand side, and $B(n) \to 0$ when $n$ is large enough.
\end{theorem}
\begin{proof}
(See Appendix \ref{proof:theorm1}).\qedhere
\end{proof}
Specifically, for the $i^{\rm th}$ convolutional layer of a CNN, since the $F_{ij}\in \mathbb{R}^n$, $j= 1,2,...,N_{i+1}$, meet CWDA and the dimension $n$ is generally large, we obtain $\ell_1 \cong \ell_2$, $\ell_2 \cong \mathbf{Fermat}$ and $\mathbf{Fermat} \cong \mathbf{GM}$ by Theorem~\ref{theo:bound}. Therefore, we have $\ell_1 \cong \ell_2 \cong \mathbf{Fermat} \cong \mathbf{GM}$, which verifies the strong similarities among the criteria shown in Table~\ref{tab:criteria}.
\subsection{Applicability}
\label{Applicability}
In this section, we analyze the Applicability problem of the Norm-based criteria. From Fig.~\ref{fig:app}~(Right), we know that there are cases where the values of the \textit{Importance Score} measured by the $\ell_2$ criterion are very close~(\textit{e.g.}, the distribution looks sharp), which makes the $\ell_2$ criterion unable to distinguish the redundant filters well. This is related to the variance of the \textit{Importance Score}. \cite{heyang} argue that a \textit{small norm deviation}~(a small variance of the \textit{Importance Score}) makes it difficult to find an appropriate threshold to select filters to prune. However, even a large variance does not guarantee that this problem is solved, since the magnitude of the \textit{Importance Score} may be much greater than the variance; we can use the mean of the \textit{Importance Score} to represent this magnitude. Therefore, we consider using a relative variance $\mathbf{Var}_r[C(F_A)]$ to describe the Applicability problem. Let $\mathbb{E}[C(F_A)] > 0$ and
\begin{equation}
\mathbf{Var}_r[C(F_A)] = \mathbf{Var}[C(F_A)]/\mathbb{E}[C(F_A)],
\label{eqn:appli}
\end{equation}
where $C$ is a given pruning criterion and $F_A$ denotes the filters in layer $A$. The criterion $C$ has the Applicability problem in layer $A$ when $\mathbf{Var}_r[C(F_A)]$ is close to 0. We then introduce Proposition~\ref{prop:mean_var_cri}, which estimates the mean and variance of the different criteria when CWDA holds:
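As a concrete illustration (not taken from the paper's code), the relative variance of Eq.~(\ref{eqn:appli}) can be computed directly from a layer's filter weights; the layer shape and weight scale below are hypothetical, with weights drawn in a CWDA-style fashion.

```python
import numpy as np

def importance_l1(filters):
    # l1 criterion: sum of absolute weights per filter
    return np.abs(filters).sum(axis=1)

def relative_variance(scores):
    # Var_r[C(F_A)] = Var[C(F_A)] / E[C(F_A)], as in the equation above
    return scores.var() / scores.mean()

# hypothetical layer: 256 filters, each of dimension 3*3*128
rng = np.random.default_rng(0)
filters = rng.normal(0.0, 0.02, size=(256, 3 * 3 * 128))
print(relative_variance(importance_l1(filters)))  # close to 0: Applicability problem
```

A value near 0 means the scores are nearly indistinguishable relative to their magnitude, which is exactly the Applicability problem described above.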
\begin{proposition}
If the convolutional filters $F_A$ in layer $A$ satisfy CWDA, then we have the following estimates:
\begin{table}[H]
\vspace{-0.4cm}
\begin{center}
\begin{small}
\begin{tabular}{lll}
\hline
Criterion & Mean & Variance\\
\hline
$\ell_1(F_A)$ & $\sqrt{2/\pi}\sigma_Ad_A$& $(1-\frac{2}{\pi})\sigma_A^2d_A$\\
$\ell_2(F_A)$ & $\sqrt{2}\sigma_A\Gamma(\frac{d_A+1}{2})/\Gamma(\frac{d_A}{2})$& $\sigma_A^2/2$\\
$\mathbf{Fermat}(F_A)$ & $\sqrt{2}\sigma_A\Gamma(\frac{d_A+1}{2})/\Gamma(\frac{d_A}{2})$& $\sigma_A^2/2$\\
\hline
\end{tabular}
\end{small}
\end{center}
\label{criteria}
\end{table}
where $d_A$ and $\sigma_A^2$ denote the dimension of $F_A$ and the variance of the weights in layer $A$, respectively.
\label{prop:mean_var_cri}
\vspace{-0.4cm}
\end{proposition}
\begin{proof}
(See Appendix \ref{app:prop}).\qedhere
\vspace{-0.3cm}
\end{proof}
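The closed forms in Proposition~\ref{prop:mean_var_cri} can be sanity-checked by a quick Monte Carlo simulation under CWDA; the weight scale, dimension, and number of samples below are hypothetical choices.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
sigma, d, n = 0.05, 1152, 5000           # hypothetical sigma_A, d_A, filter count
F = rng.normal(0.0, sigma, size=(n, d))  # CWDA: each filter ~ N(0, sigma^2 I_d)

l1 = np.abs(F).sum(axis=1)
l2 = np.linalg.norm(F, axis=1)

# closed forms from Proposition 1 (log-gamma for numerical stability)
mean_l1 = np.sqrt(2 / np.pi) * sigma * d
var_l1 = (1 - 2 / np.pi) * sigma ** 2 * d
mean_l2 = np.sqrt(2) * sigma * np.exp(gammaln((d + 1) / 2) - gammaln(d / 2))
var_l2 = sigma ** 2 / 2

print(abs(l1.mean() - mean_l1) / mean_l1)  # small relative errors
print(abs(l2.mean() - mean_l2) / mean_l2)
```

The empirical means and variances agree with the tabulated estimates up to sampling error (the $\ell_2$ variance $\sigma_A^2/2$ is an asymptotic estimate, so the agreement is approximate).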
Based on Proposition~\ref{prop:mean_var_cri}, we further provide a theoretical analysis for each criterion:
(i)~For $\ell_2(F_A)$. From Proposition~\ref{prop:mean_var_cri}, we can obtain that
\begin{align}
\vspace{-0.3cm}
\mathbf{Var}_r[\ell_2(F_A)]
&= \frac{\sigma_A^2}{2} / [\sqrt{2}\sigma_A\Gamma(\frac{d_A+1}{2})/\Gamma(\frac{d_A}{2})] = O(\sigma_A/g(d_A)),
\label{eqn:rv_l2}
\end{align}
where $g(d_A) = \Gamma(\frac{d_A+1}{2})/\Gamma(\frac{d_A}{2})$ is a monotonically increasing function of $d_A$. From Eq.~(\ref{eqn:rv_l2}), $\mathbf{Var}_r[\ell_2(F_A)]$ depends on $\sigma_A$ and $d_A$: when $\sigma_A$ is small or $d_A$ is large enough, $\mathbf{Var}_r[\ell_2(F_A)]$ tends to 0.
(ii)~For $\mathbf{Fermat}(F_A)$: from the proof in Appendix~\ref{proof:l1vsl2}, the Fermat point $\mathbf{F}$ of $F_A$ and the origin $\mathbf{0}$ approximately coincide, so $||\mathbf{F}-F_A||_2 \approx ||\mathbf{0} - F_A||_2 = ||F_A||_2$. Therefore, the mean and variance of $\mathbf{Fermat}(F_A)$ are the same as those of $\ell_2(F_A)$ in Proposition~\ref{prop:mean_var_cri}, and a similar conclusion holds for the $\mathbf{Fermat}$ criterion: the \textit{Importance Score} values tend to be identical, and it is hard to distinguish the network redundancy when $\sigma_A$ is small or $d_A$ is large enough.
(iii)~For $\ell_1(F_A)$: intuitively, the $\ell_1$ criterion should admit the same conclusion as the $\ell_2$ criterion. However, from Proposition~\ref{prop:mean_var_cri}, we obtain
\begin{align}
\vspace{-0.5cm}
\mathbf{Var}_r[\ell_1(F_A)]
&= (1-\frac{2}{\pi})\sigma_A^2d_A / [\sqrt{2/\pi}\sigma_Ad_A] = \epsilon(\pi)\cdot\sigma_A,
\label{eqn:rv_l1}
\vspace{-0.3cm}
\end{align}
where $\epsilon(\pi)<1$ is a constant depending only on $\pi$. Note that $\mathbf{Var}_r[\ell_1(F_A)]$ depends only on $\sigma_A$, not on the dimension $d_A$. Moreover, for common network structures such as VGG and ResNet, shown in Fig.~\ref{fig:magnitude}~(b) and (d), the dimensions of the filters are usually large. Therefore, compared with $\ell_2$, the $\ell_1$ criterion is less prone to the Applicability problem unless $\sigma_A$ is very small.
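The contrast between Eq.~(\ref{eqn:rv_l2}) and Eq.~(\ref{eqn:rv_l1}) can be made concrete by evaluating both closed forms; the value of $\sigma_A$ and the dimensions below are hypothetical.

```python
import numpy as np
from scipy.special import gammaln

def var_r_l2(sigma, d):
    # (sigma^2/2) / (sqrt(2)*sigma*Gamma((d+1)/2)/Gamma(d/2)): shrinks as d grows
    g = np.exp(gammaln((d + 1) / 2) - gammaln(d / 2))  # Gamma ratio ~ sqrt(d/2)
    return (sigma ** 2 / 2) / (np.sqrt(2) * sigma * g)

def var_r_l1(sigma, d):
    # reduces to eps(pi) * sigma, independent of the dimension d
    return (1 - 2 / np.pi) * sigma ** 2 * d / (np.sqrt(2 / np.pi) * sigma * d)

sigma = 0.05
for d in (64, 576, 4608):
    print(d, round(var_r_l1(sigma, d), 5), var_r_l2(sigma, d))
# var_r_l1 stays constant in d, while var_r_l2 decays roughly like sigma/(2*sqrt(d))
```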
\begin{figure*} [htbp]
\centering
\includegraphics[width=0.92\linewidth]{other_simi.jpg}
\centering
\caption{The Similarity and Applicability problem for different types of pruning criteria in layer-wise or global pruning. }
\label{fig:other_simi}
\end{figure*}
\section{About other types of pruning criteria}
\label{sec:others}
In this section, we study the Similarity and Applicability problems for other types of pruning criteria through numerical experiments, including Activation-based pruning~\cite{hu2016network,luo2017entropy}, Importance-based pruning~\cite{molchanov2016pruning, molchanov2019importance} and BN-based pruning~\cite{liu2017learning}. For each type, we choose two representative criteria: (1)~Norm-based: $\ell_1$ and $\ell_2$; (2)~Importance-based: Taylor $\ell_1$ and Taylor $\ell_2$~\cite{molchanov2016pruning, molchanov2019importance, molchanov2019taylor}; (3)~BN-based: BN\_$\gamma$\footnote{The empirical result for slimming training~\cite{liu2017learning} is shown in Appendix~\ref{slimming}.} and BN\_$\beta$~\cite{liu2017learning}; (4) Activation-based: Entropy~\cite{luo2017entropy} and APoZ~\cite{hu2016network}. The details of these criteria can be found in Appendix \ref{app:other_criteria}.
\textbf{The Similarity for different types of pruning criteria}. In Fig.~\ref{fig:other_simi}~(a-d), we show the Sp between different types of pruning criteria; only values of Sp greater than 0.7 are shown, since Sp $<$ 0.7 indicates that there is no strong similarity between two criteria in the given layer.
According to the Sp shown in Fig.~\ref{fig:other_simi}~(a-d), we obtain the following observations:
(1)~As verified in Section~\ref{Experiment and theory}, $\ell_1$ and $\ell_2$ can maintain a strong similarity in each layer;
(2)~In the layers shown in Fig.~\ref{fig:other_simi}~(a) and Fig.~\ref{fig:other_simi}~(d), the Sp between most pairs of pruning criteria is not large, which indicates that these criteria differ greatly in how they measure the redundancy of convolutional filters. This may lead to a phenomenon where one criterion considers a convolutional filter important while another considers it redundant; a specific example is shown in Appendix~\ref{app:case};
(3)~Intuitively, criteria of the same type should be similar. However, Fig.~\ref{fig:other_simi}~(b) and Fig.~\ref{fig:other_simi}~(c) show that the Sp between Taylor $\ell_1$ and Taylor $\ell_2$ is not large, while Taylor $\ell_2$ has strong similarity with both Norm-based criteria. Moreover, the Sp between BN\_$\gamma$ and each Norm-based criterion exceeds 0.9 in these layers, but is not large in other layers~(Fig.~\ref{fig:other_simi}~(a) and Fig.~\ref{fig:other_simi}~(d)). These phenomena are worthy of further study.
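As a small illustration of how Sp between criteria can be computed for one layer (the layer shape and the stand-in third criterion below are hypothetical, not the paper's exact setup):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.02, size=(128, 3 * 3 * 64))  # hypothetical CWDA-style layer

scores = {
    "l1": np.abs(W).sum(axis=1),
    "l2": np.linalg.norm(W, axis=1),
    # stand-in for an unrelated criterion (e.g., an activation statistic)
    "other": rng.normal(size=128),
}
names = list(scores)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        rho, _ = spearmanr(scores[names[i]], scores[names[j]])
        print(names[i], names[j], round(rho, 3))
# l1 vs l2 gives Sp > 0.7 (strong similarity); the unrelated score gives Sp near 0
```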
\begin{figure*} [htbp]
\vspace{-0.2cm}
\centering
\includegraphics[width=1.0\linewidth]{estimation.jpg}
\centering
\vspace{-0.4cm}
\caption{The magnitude of the \textit{Importance Score} measured by $\ell_1$ and $\ell_2$ criteria. }
\label{fig:magnitude}
\end{figure*}
\textbf{The Applicability for different types of pruning criteria}. According to the analysis in Section~\ref{Applicability}, the Applicability problem depends on the mean and variance of the \textit{Importance Score}. Fig.~\ref{fig:other_simi}~(g-i) shows the \textit{Importance Score} measured by different pruning criteria on each layer of VGG16. Because the magnitudes of the \textit{Importance Score} differ across criteria, values greater than 1 are clipped to 1 for ease of visualization.
First, we analyze the Norm-based criteria. In most layers, the relative variance $\mathbf{Var}_r[\ell_2]$ is much smaller than $\mathbf{Var}_r[\ell_1]$, which means that $\ell_2$ pruning has the Applicability problem in VGG16 while $\ell_1$ does not. This is consistent with our conclusion in Section~\ref{Applicability}.
Next, for the Activation-based criteria, the relative variance $\mathbf{Var}_r$ is large in each layer, which means that these two Activation-based criteria can distinguish the network redundancy well from the measured \textit{Importance Score} of the filters. However, for the Importance-based and BN-based criteria, the relative variance $\mathbf{Var}_r$ is close to 0. According to Section~\ref{Applicability}, these criteria have the Applicability problem, especially in the deeper layers (e.g., from the 6$^{\rm th}$ layer to the last layer).
\begin{wrapfigure}{r}{7cm}
\vspace{-0.3cm}
\includegraphics[width=1\linewidth]{global_situ.jpg}
\vspace{-0.5cm}
\caption{The global pruning simulation for the unpruned network with only two layers.}
\label{fig:exp_pruned_tuned}
\vspace{-0.2cm}
\end{wrapfigure}
\section{About global pruning}\label{sec:global}
Compared with layer-wise pruning, global pruning is more widely used~\cite{liu2018rethinking, molchanov2016pruning,liu2017learning} in current research on channel pruning. Therefore, in this section we also analyze the Similarity and Applicability problems of global pruning.
\textbf{Applicability while using global pruning}. In fact, for global pruning, neither the $\ell_1$ nor the $\ell_2$ criterion is prone to the Applicability problem. From Proposition~\ref{prop:mean_var_cri}, the estimates of the mean of the \textit{Importance Score} in layer $A$ for $\ell_1$ and $\ell_2$ are $\sigma_A\cdot d_A\sqrt{\frac{2}{\pi}}$ and $\sqrt{2}\sigma_A\cdot\Gamma(\frac{d_A+1}{2})/\Gamma(\frac{d_A}{2})$, respectively. Since $\sigma_A$ and $d_A$ differ considerably across layers, as shown in Fig.~\ref{fig:magnitude}~(b) and (d), the variance of the pooled \textit{Importance Score} can be large in this situation.
Fig.~\ref{fig:magnitude}~(a) and (c) show this kind of difference in magnitude across convolutional layers. In addition, from our estimates in Fig.~\ref{fig:magnitude}~(c), this inconsistent magnitude explains another common problem in practical applications of global pruning: ResNet is easily pruned off. Taking ResNet56 in Fig.~\ref{fig:magnitude}~(c) as an example, since the \textit{Importance Score} in the first stage is much smaller than that in the deeper layers, global pruning will preferentially prune the convolutional filters of the first stage. To address this problem, we suggest implementing normalization tricks or establishing a protection mechanism, \textit{e.g.}, a mechanism which ensures that each layer retains at least a certain number of convolutional filters that will not be pruned. Unlike some previous works~\cite{hecap,chin2020towards,wang2019cop}, which make suggestions from qualitative observation, we provide a quantitative view to illustrate that these tricks are necessary.
\textbf{Similarity while using global pruning}. In Fig.~\ref{fig:other_simi}~(e-f), we show the similarity of different types of pruning criteria under global pruning on VGG16 and ResNet56. Compared with the layer-wise results shown in Fig.~\ref{fig:other_simi}~(a-d), the similarities of most pruning criteria are quite different in global pruning. In addition, the same criteria may behave differently on different network structures in global pruning: \textit{e.g.,} in Fig.~\ref{fig:other_simi}~(e) we find $\ell_2 \cong$ Taylor $\ell_2$ and BN$_\gamma \cong \ell_2$, but these observations do not hold in Fig.~\ref{fig:other_simi}~(f). In particular, unlike the result for ResNet56 in Fig.~\ref{fig:other_simi}~(f), the similarity between $\ell_1$ and $\ell_2$ is not as strong as in the layer-wise case. This phenomenon is counter-intuitive.
To understand this phenomenon, we first consider a simple case, \textit{i.e.,} an unpruned network with only two convolutional layers~(layer $A$ and layer $B$). The filters in these two layers are $F_A = (F_A^1,F_A^2,\dots,F_A^n)$ and $F_B = (F_B^1,F_B^2,\dots,F_B^m)$. According to CWDA, for $1\leq i \leq n$ and $1 \leq j \leq m$, $F_A^i$ and $F_B^j$ follow $N(\mathbf{0},\sigma_A^2\mathbf{I}_{d_A})$ and $N(\mathbf{0},\sigma_B^2\mathbf{I}_{d_B})$, respectively. In Fig.~\ref{fig:exp_pruned_tuned}~(e-h), we show the Sp between the \textit{Importance Score} measured by $\ell_1$ and $\ell_2$ pruning under different dimension ratios $d_A/d_B$ and variances $\sigma_A$, $\sigma_B$. Moreover, to analyze this phenomenon concisely, we draw scatter plots in Fig.~\ref{fig:exp_pruned_tuned}~(a-d), where the coordinates of each point are (value of $\ell_1$, value of $\ell_2$). The set of points corresponding to the filters in layer $A$ is called group-$A$. We then introduce Proposition~\ref{prop:slope}.
\begin{proposition}
If the convolutional filters $F_A$ in layer $A$ satisfy CWDA, then $\mathbb{E}[\ell_1(F_A)/\ell_2(F_A)]$ and $\mathbb{E}[\ell_2(F_A)/\ell_1(F_A)]$ depend only on the dimension $d_A$.
\label{prop:slope}
\vspace{-0.3cm}
\end{proposition}
\begin{proof}
(See Appendix \ref{app:prop}).\qedhere
\vspace{-0.4cm}
\end{proof}
Now we analyze the simple case under different situations:
(1)~For $d_A/d_B = 1$. If $\sigma_A^2 = \sigma_B^2$, this is in fact the same situation as layer-wise pruning: from Theorem~\ref{theo:similarity-layer}, group-$A$ and group-$B$ coincide and approximately lie on the same line, resulting in $\ell_1 \cong \ell_2$. If $\sigma_A^2 \not= \sigma_B^2$, group-$A$ and group-$B$ lie on two separate lines; however, these two lines have the same slope by Proposition~\ref{prop:slope}, as shown in Fig.~\ref{fig:exp_pruned_tuned}~(a). For these reasons, we have $\ell_1 \cong \ell_2$ when $d_A/d_B = 1$.
(2)~For $d_A/d_B \not= 1$. In Fig.~\ref{fig:exp_pruned_tuned}~(b-d), there are three main situations for the relative position of group-$A$ and group-$B$. In Fig.~\ref{fig:exp_pruned_tuned}~(b), according to Theorem~\ref{theo:similarity-layer}, the points within group-$A$ and within group-$B$ are each monotonic. Moreover, their \textit{Importance Score} measured by $\ell_1$ and $\ell_2$ do not overlap, which makes $\ell_1$ and $\ell_2$ \textit{approximately monotonic} overall; thus $\ell_1 \cong \ell_2$. However, in Fig.~\ref{fig:exp_pruned_tuned}~(c-d), the Sp is small since the points in the two groups are not jointly monotonic (the \textit{Importance Score} measured by $\ell_1$ or $\ell_2$ has a large overlap). From Proposition~\ref{prop:mean_var_cri} and the approximation $\Gamma(\frac{d_A+1}{2})/\Gamma(\frac{d_A}{2}) \approx \sqrt{d_A/2}$~(Appendix~\ref{proof:l1vsl2}), these two situations can be described as:
\begin{equation}
\sigma_{A} d_{A} \approx \sigma_{B} d_{B}\quad \text{or}\quad \sigma_{A} \sqrt{d_{A}} \approx \sigma_{B} \sqrt{d_{B}},
\label{eqn:overlap}
\end{equation}
where $d_A \not= d_B$. From Eq.~(\ref{eqn:overlap}) we obtain the two red lines shown in Fig.~\ref{fig:exp_pruned_tuned}~(f-h). The area enclosed by these two red lines is consistent with the area where the Sp is relatively small, which supports our analysis. Based on the above, we summarize the conditions for $\ell_1 \cong \ell_2$ in global pruning of two convolutional layers in Table~\ref{tab:twolayer_global}.
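The two-layer simulation above can be sketched as follows; the dimensions, variances, and filter counts are hypothetical choices that realize the separated case and the first overlap condition $\sigma_A d_A \approx \sigma_B d_B$ of Eq.~(\ref{eqn:overlap}).

```python
import numpy as np
from scipy.stats import spearmanr

def global_sp(sigma_a, d_a, sigma_b, d_b, n=200, m=200, seed=0):
    # pool the importance scores of two layers and rank them jointly (global pruning)
    rng = np.random.default_rng(seed)
    fa = rng.normal(0.0, sigma_a, size=(n, d_a))
    fb = rng.normal(0.0, sigma_b, size=(m, d_b))
    l1 = np.concatenate([np.abs(fa).sum(1), np.abs(fb).sum(1)])
    l2 = np.concatenate([np.linalg.norm(fa, axis=1), np.linalg.norm(fb, axis=1)])
    rho, _ = spearmanr(l1, l2)
    return rho

# groups separated in both scores: jointly monotonic, so Sp is high
print(global_sp(0.05, 256, 0.05, 4096))
# overlap: sigma_A*d_A = 0.8*256 = 0.05*4096 = sigma_B*d_B with d_A != d_B, Sp drops
print(global_sp(0.80, 256, 0.05, 4096))
```

In the second call the pooled $\ell_1$ scores of the two groups overlap while the $\ell_2$ scores remain separated, so the joint ranking is no longer monotonic and the Sp drops, as in Fig.~\ref{fig:exp_pruned_tuned}~(c-d).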
\begin{wraptable}{r}{7cm}
\vspace{-0.6cm}
\centering
\small
\caption{The conditions about $\ell_1 \cong \ell_2$ in global pruning for two layers~(layer $A$ and layer $B$)}
\resizebox{0.5\columnwidth}{!}{
\begin{tabular}{rccc|c}
\hline
& \multicolumn{1}{l}{$d_A = d_B$?} & \multicolumn{1}{l}{$\frac{\sigma_{A}}{\sigma_{B}} \approx \frac{d_{B}}{d_{A}} $?} & \multicolumn{1}{l|}{$ \frac{\sigma_{A}}{\sigma_{B}} \approx \frac{\sqrt{d_{B}}}{\sqrt{d_{A}}} $?} & \multicolumn{1}{l}{$\ell_1 \cong \ell_2$?} \\
\hline
(1) &{\color{black}{\Checkmark}} & \textbf{--} & \textbf{--} &{\color{green}{\Checkmark}} \\
(2) &{\color{black}{\XSolidBrush}} &{\color{black}{\XSolidBrush}} &{\color{black}{\XSolidBrush}} &{\color{green}{\Checkmark}} \\
(3) &{\color{black}{\XSolidBrush}} &{\color{black}{\Checkmark}} & \textbf{--} &{\color{red}{\XSolidBrush}} \\
(4) &{\color{black}{\XSolidBrush}} & \textbf{--} &{\color{black}{\Checkmark}} &{\color{red}{\XSolidBrush}} \\
\hline
\end{tabular}%
}
\label{tab:twolayer_global}%
\vspace{-0.2cm}
\end{wraptable}
Next, we return to the situation of real neural networks in Fig.~\ref{fig:other_simi}~(e-f).
(1)~For ResNet56. As shown in Fig.~\ref{fig:magnitude}~(d), the dimensions of the filters within each stage are almost the same, so by Table~\ref{tab:twolayer_global}~(1) the $\ell_1$ and $\ell_2$ pruning results within each stage are similar. Moreover, the magnitudes of the \textit{Importance Score} differ greatly across stages, so by Table~\ref{tab:twolayer_global}~(2) we obtain $\ell_1 \cong \ell_2$ for ResNet56.
(2)~For VGG16. As shown in Fig.~\ref{fig:magnitude}~(a-b), compared with ResNet56, VGG16 has some layers with different dimensions but similar \textit{Importance Score} measured by $\ell_1$ or $\ell_2$, such as ``layer 2'' and ``layer 8'' for the $\ell_2$ criterion in Fig.~\ref{fig:magnitude}~(a). By Table~\ref{tab:twolayer_global}~(3-4), these pairs of layers make the Sp small, which explains why the results of $\ell_1$ and $\ell_2$ pruning are not similar in Fig.~\ref{fig:other_simi}~(e) for VGG16. In Appendix~\ref{app:support}, further experiments show that the Sp in global pruning can be increased by ignoring some of these pairs of layers, which supports our analysis.
\section{Discussion}
\label{Discussion}
\subsection{Why does CWDA sometimes not hold?}
\label{why}
CWDA may not always hold. As shown in Appendix~\ref{app:Statistical Test}, a small number of convolutional filters may not pass all statistical tests. In this section, we try to analyze this phenomenon.
(1)~\textbf{The network is not trained well enough}. The distribution of parameters should be discussed \textbf{only when} the network is trained well. If the network has not converged, it is easy to construct a scenario that does not satisfy CWDA: \textit{e.g.}, for a network with uniform initialization that has been trained for only a few epochs, the distribution of the parameters may still be close to a uniform distribution, which obviously does not satisfy CWDA. A specific example is given in Appendix~\ref{app:other_result}.
(2)~\textbf{The number of filters is insufficient.} In Appendix~\ref{app:Statistical Test}, the layers that cannot pass the statistical tests are mostly those near the front of the network. A common characteristic of these layers is that they have few filters, so the relevant statistics may not be estimated well. Take the second convolutional layer~(64 filters) of VGG16 on CIFAR10 as an example: the filters in this layer cannot pass all the statistical tests, and the Sp in this transition layer is relatively small, as shown in Fig.~\ref{fig:vggmore}. However, in Fig.~\ref{fig:vgg_change}, when we change the number of filters in this layer from 64 to 128 or 256, the Sp increases significantly, and the filters pass all the statistical tests when the number of filters is 256. These observations suggest that the number of filters is a major factor for CWDA to hold.
\begin{figure*} [h]
\centering
\vspace{-0.2cm}
\includegraphics[width=0.92\linewidth]{sp_vgg_2nd.jpg}
\centering
\vspace{-0.2cm}
\caption{The Sp between different pruning criteria on VGG16~(CIFAR10). The number of filters in the second convolutional layers is changed from 64 to 256. The filters in this layer can pass all the statistical tests when the number of filters is 256.}
\vspace{-0.2cm}
\label{fig:vgg_change}
\end{figure*}
\subsection{How our findings help the community?}
\label{how}
(1)~We propose an assumption about the parameter distribution of CNNs, called CWDA, which is an effective theoretical tool for analyzing convolutional filters. In this paper, CWDA successfully explains many phenomena in the Similarity and Applicability of pruning criteria; it also explains why ResNet is easily pruned off in global pruning. Since CWDA passes statistical tests in various situations~(Section~\ref{Statistical test}), we expect that it can also serve as an effective and concise analysis tool in other CNN-related areas, \textbf{not just} the pruning area.
(2)~In this paper, we study the Similarity and Applicability problems of pruning criteria, which can guide and motivate researchers to design more reasonable criteria. For the Applicability problem, we suggest that newly proposed criteria should produce clearly distinguishable \textit{Importance Score} values. For Similarity, as more and more criteria are proposed, they can be combined via ensemble learning to enhance pruning performance~\cite{he2020learning}. In this case, the similarity analysis between criteria in this paper is important, because highly similar criteria cannot bring gains to ensemble learning.
(3)~In the pruning area, $\ell_1$ and $\ell_2$ are usually regarded as the same pruning criterion, which is intuitive. In layer-wise pruning, we indeed prove that $\ell_1$ and $\ell_2$ pruning are almost the same. However, in global pruning, the pruning results of these two criteria are sometimes very different. In addition, compared with the $\ell_1$ criterion, the $\ell_2$ criterion is more prone to the Applicability problem. These counter-intuitive phenomena remind us that we cannot rely on intuition alone when analyzing such problems.
\begin{table}[htbp]
\centering
\caption{The random pruning results of VGGNet with different criteria which have the Applicability problem. The VGG16 and VGG19 are trained on CIFAR100. The unpruned baseline accuracy of VGG16 and VGG19 are 72.99 and 73.42, respectively.}
\resizebox{\columnwidth}{!}{%
\begin{tabular}{cl|cccc|cccc}
\hline
\textbf{Model} & \textbf{criterion} & \textbf{min (r=10\%)} & \textbf{max (r=10\%)} & \textbf{mean (r=10\%)} & \textbf{$\Delta$} & \textbf{min (r=20\%)} & \textbf{max (r=20\%)}& \textbf{mean (r=20\%)} & \textbf{$\Delta$} \\
\hline
\multirow{5}[1]{*}{\begin{sideways}VGG16\end{sideways}} & $\ell_2$ & 71.41 & 72.65 & 71.75 & 1.24 & 71.01 & 72.47 & 71.32 & 1.46 \\
& Taylor $\ell_1$ & 71.67 & 72.34 & 71.89 & 0.67 & 71.32 & 72.32 & 71.45 & 1.01 \\
& Taylor $\ell_2$ & 71.87 & 72.37 & 71.91 & 0.5 & 71.66 & 72.27 & 71.65 & 0.61 \\
& BN$_\gamma$ & 71.09 & 71.66 & 71.36 & 0.57 & 71.02 & 71.57 & 71.12 & 0.55 \\
& BN$_\beta$ & 71.15 & 72.58 & 71.43 & 1.43 & 71.06 & 72.11 & 71.87 & 1.05 \\
\hline
\multirow{5}[2]{*}{\begin{sideways}VGG19\end{sideways}} & $\ell_2$ & 71.99 & 73.15 & 72.26 & 1.16 & 71.11 & 73.02 & 72.15 & 1.91 \\
& Taylor $\ell_1$ & 71.67 & 73.04 & 72.23 & 1.37 & 71.6 & 72.98 & 72.24 & 1.38 \\
& Taylor $\ell_2$ & 72.12 & 72.99 & 72.28 & 0.87 & 72.04 & 72.83 & 72.54 & 0.79 \\
& BN$_\gamma$ & 72.01 & 73.23 & 72.25 & 1.22 & 71.98 & 72.32 & 72.12 & 0.34 \\
& BN$_\beta$ & 72.25 & 73.23 & 72.41 & 0.98 & 72.04 & 72.65 & 72.33 & 0.61 \\
\hline
\end{tabular}%
}
\label{tab:addlabel}%
\end{table}%
(4)~Similar to the setting in Fig.~\ref{fig:other_simi}, we can explore the effect on performance of pruning filters with similar \textit{Importance Score}. First, we find that the criteria ($\ell_2$, Taylor $\ell_1$, Taylor $\ell_2$, BN$_{\gamma}$ and BN$_{\beta}$) exhibit the Applicability problem in most layers of VGGNet (Fig.~\ref{fig:other_simi}). We therefore randomly select 10\% or 20\% of the filters in each layer to prune (uniformly at random, via $U[0,1]$); the selected filters thus have similar \textit{Importance Score} values. Finally, we finetune the pruned model, repeating each experiment 20 times with different random selections. $\Delta$ denotes the difference between the maximum and minimum accuracy (\textit{i.e.,} max acc. $-$ min acc.). Since the \textit{Importance Score} values are very similar, one would expect the performance after pruning and finetuning to be similar across the repeated experiments. However, from the results in Table~\ref{tab:addlabel}, although the \textit{Importance Score} of the pruned filters are very close, the pruning results can still differ substantially (\textit{e.g.,} the $\Delta$ of VGG16 under $\ell_2$ exceeds 1). This suggests that these criteria may not truly represent the importance of convolutional filters, and it is therefore necessary to re-evaluate the correctness of the existing pruning criteria.
\textbf{Acknowledgments}. Z. Huang gratefully acknowledges the technical and writing support from Mingfu Liang~(Northwestern University), Senwei Liang~(Purdue University) and Wei He~(Nanyang Technological University). Moreover, he sincerely thanks Mingfu Liang for offering his self-purchasing GPUs and Qinyi Cai (NetEase, Inc.) for checking part of the proof in this paper. This work was supported in part by the General Research Fund of Hong Kong No.27208720, the National Key R\&D Program of China under Grant No. 2020AAA0109700, the National Science Foundation of China under Grant No.61836012 and 61876224, the National High Level Talents Special
Support Plan (Ten Thousand Talents Program), and GD-NSF (no.2017A030312006).
\section*{Introduction}
In this paper, we study bundles (i.e. locally free coherent sheaves) $\mathscr{E}$ on $\mathbf{P}^n$ such that
\begin{align*}
H^i(\mathscr{E}(t)) = 0,\quad \forall t\in\mathbb{Z}, \quad \forall 1\le i \le n-2. \tag{$\dagger$} \label{Cohomology}
\end{align*}
These include all bundles on $\mathbf{P}^2$ in particular.
Note that (\ref{Cohomology}) is an open condition on a family of bundles by the semicontinuity of cohomologies.
A rank $r$ bundle $\mathscr{E}$ on $\mathbf{P}^n$ satisfying (\ref{Cohomology}) admits a resolution by direct sums of line bundles of the form
\[
0 \to \bigoplus_{i = 1}^{l}\mathscr{O}_{\mathbf{P}^n}(-a_i) \xrightarrow{\varphi} \bigoplus_{i = 1}^{l+r}\mathscr{O}_{\mathbf{P}^n}(-b_i) \to \mathscr{E} \to 0.
\]
A minimal such resolution is unique up to isomorphism, and the integers $\underline{a} = (a_1,\dots, a_l)$ and $\underline{b} = (b_1,\dots, b_{l+r})$ are invariants of $\mathscr{E}$ called the Betti numbers.
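For example, the Euler sequence exhibits the tangent bundle $T_{\mathbf{P}^n}$, which satisfies (\ref{Cohomology}), as such a quotient of sums of line bundles:

```latex
% Euler sequence: a length-one resolution of T_{P^n} by sums of line bundles,
% i.e. the case l = 1, r = n, a_1 = 0, b_1 = ... = b_{n+1} = -1.
0 \to \mathscr{O}_{\mathbf{P}^n} \to \mathscr{O}_{\mathbf{P}^n}(1)^{\oplus(n+1)}
  \to T_{\mathbf{P}^n} \to 0
```

Here the long exact sequence in cohomology gives $H^i(T_{\mathbf{P}^n}(t)) = 0$ for all $t$ and $1 \le i \le n-2$, since line bundles on $\mathbf{P}^n$ have no intermediate cohomology.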
\bigskip
The main results in this paper are the following.
\bigskip
In \Cref{FreeRes}, we classify all Betti numbers of rank $r$ bundles on $\mathbf{P}^n$ satisfying (\ref{Cohomology}), generalizing results from Bohnhorst and Spindler \cite{BS} for the case $r = n$.
Accordingly, we classify all possible Hilbert functions of such bundles, and introduce a compact way to represent and to generate them.
We show that there are only finitely many possible Betti numbers of bundles satisfying (\ref{Cohomology}) with fixed first Chern class and bounded regularity, generalizing the observation of Dionisi and Maggesi \cite{DM} for $r = n = 2$.
We then give examples to show that the semistability of such a bundle is not determined by its Betti numbers in general, in contrast to the case when $r = n$ discussed in \cite{BS}.
In \Cref{Stratification}, we define natural topologies on $\mathcal{VB}_{\P^n}^\dagger(H)$ and $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$, the set of isomorphism classes of bundles on $\mathbf{P}^n$ satisfying (\ref{Cohomology}) with Hilbert function $H$ and with Betti numbers $(\underline{a},\underline{b})$ respectively.
The topologies are induced from the rational varieties of matrices whose ideals of maximal minors have maximal depth.
We show that all Betti numbers of bundles in $\mathcal{VB}_{\P^n}^\dagger(H)$ form a graded lattice under the partial order of canceling common terms.
This lattice is downward closed and infinite in general, where the subposet of Betti numbers up to any given regularity is a finite graded sublattice.
Finally, we describe the stratification of $\mathcal{VB}_{\P^n}^\dagger(H)$ by various subspaces $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$.
We show that the closed strata intersect along another closed stratum, and that they form a graded lattice dual to the lattice of Betti numbers.
An open subset $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$ of $\mathcal{VB}_{\P^n}^\dagger(H)$ is a subscheme of the coarse moduli space $\mathcal{M}(\chi)$ of semistable torsion-free coherent sheaves with Hilbert polynomial $\chi$, similarly for an open subset $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ of $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$.
The same description applies to the stratification of $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$ by $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ on the level of topological spaces.
\bigskip
The study of vector bundles on algebraic varieties is central to algebraic geometry.
In particular, the study of bundles on projective spaces already presents interesting challenges.
We do not attempt to give a survey of the subject here.
Instead, we provide some historical perspectives to motivate the investigations in this paper.
\bigskip
Maruyama \cite{Maruyama1} proved that the coarse moduli space of rank two semistable bundles on a smooth projective surface exists as a quasi-projective scheme.
In the same paper, it was shown that the coarse moduli space $\mathcal{M}_{\mathbf{P}^2}(2,c_1,c_2)^s$ of stable rank two bundles on $\mathbf{P}^2$ with given Chern classes is smooth and irreducible.
Following this development, Barth \cite{Barth} showed that $\mathcal{M}_{\mathbf{P}^2}(2,c_1,c_2)^s$ is connected and rational for $c_1$ even, and Hulek \cite{Hulek} did the same for $c_1$ odd.
Their arguments contained a gap which was pointed out and partially fixed in \cite{Gap} and independently in \cite{ES}.
The existence of the coarse moduli space of semistable torsion-free sheaves of arbitrary rank on a smooth projective variety was finally established by Maruyama \cite{Maruyama}, see \cite{Maruyama2} for another exposition.
Despite the progress in the theory of moduli, there are many basic questions about bundles on projective spaces that are unanswered, see \cite{Problem} for a problem list.
For example, Hartshorne's conjecture \cite{Hartshorne} states that a rank $r$ bundle on $\mathbf{P}^n$ is the direct sum of line bundles when $r<\frac{1}{3}n$.
In particular, the conjecture predicts that rank two bundles on $\mathbf{P}^n$ are split when $n\ge 7$.
On the other hand, the only known example (up to twists and finite pullbacks) of an indecomposable rank two bundle on $\mathbf{P}^4$ and above is the Horrocks-Mumford bundle \cite{Horrocks}.
It is fair to say that bundles of small rank on $\mathbf{P}^n$ remain mysterious.
\bigskip
This paper is motivated by two main objectives in the study of bundles.
(1) To classify certain invariants of bundles on $\mathbf{P}^n$.
To expand on this point, the Hilbert polynomial is an important invariant of a bundle that is constant in a connected flat family, and thus indexes the connected components of the moduli space (of some subclasses of bundles, e.g. semistable).
Note that the Hilbert polynomial can be computed from the Chern polynomial and vice versa.
Thus the classification of Hilbert polynomials is equivalent to the classification of Chern classes.
The Hilbert function eventually agrees with the Hilbert polynomial, and thus provides finer information.
Furthermore, the Hilbert function can be computed from the Betti numbers of a free resolution of (the section module of) a bundle.
Therefore the Betti numbers are even finer invariants of a bundle.
Consequently, the classification of Betti numbers of bundles will lead to a classification of the Hilbert functions and Hilbert polynomials (equivalently Chern classes).
In this paper, we take the first step by classifying the Betti numbers of bundles when their resolutions are short.
It turns out that this condition implies that these bundles have rank greater than the dimension of the ambient projective space, with the exception of direct sums of line bundles.
(2) To provide examples of bundles with given invariants.
In the best scenario, the moduli space or the space of isomorphism classes of bundles with given invariants is unirational, in which case the image of a random point in the projective space will give us a ``random" bundle with given invariants.
For example, Barth's parametrization of $\mathcal{M}_{\mathbf{P}^2}(c_1,c_2)^{s}$ using nets of rank two quadrics \cite{Barth} allows us to produce ``random" rank two bundles on $\mathbf{P}^2$ with given Chern classes.
Here we can see the importance of using finer invariants.
Since $\mathcal{M}_{\mathbf{P}^2}(c_1,c_2)^{s}$ is irreducible, a general bundle produced using Barth's parametrization will be presented by a matrix of linear and quadratic polynomials by the main theorem in \cite{DM}.
Therefore producing a bundle that is presented by a matrix of forms of other degrees, which is special in the moduli $\mathcal{M}_{\mathbf{P}^2}(c_1,c_2)^{s}$, is like looking for a needle in a haystack.
On the other hand, if we stratify $\mathcal{M}_{\mathbf{P}^2}(c_1,c_2)^{s}$ using the finer invariants of Betti numbers $(\underline{a},\underline{b})$, then each piece $\mathcal{M}_{\mathbf{P}^2}(\underline{a},\underline{b})^{s}$ is still unirational and we can thus produce a ``random" bundle that is presented by a matrix of forms of given degrees whenever possible.
\bigskip
The results in this paper are implemented in the Macaulay2 \cite{M2} package \texttt{BundlesOnPn} \cite{MZ}, which generates all Betti numbers of bundles satisfying (\ref{Cohomology}) up to bounded regularity as well as ``random" bundles with given Betti numbers.
\subsection*{Acknowledgement}
The author thanks his advisor David Eisenbud for support and for pointing out that the earlier versions of the results may be generalized.
\section{Free resolutions of bundles}\label{FreeRes}
Throughout, we work over an algebraically closed field $k$.
We fix $R := k[x_0,\dots,x_n]$ to denote the polynomial ring of $\mathbf{P}^n$.
For a coherent sheaf $\mathscr{F}$ on $\mathbf{P}^n$, we write $H^i_*(\mathscr{F})$ for the $R$-module $\bigoplus_{t\in\mathbb{Z}}H^i(\mathscr{F}(t))$.
We write $\mathcal{VB}_{\P^n}^\dagger$ for the set of isomorphism classes of bundles on $\mathbf{P}^n$ satisfying (\ref{Cohomology}).
\bigskip
We start with a standard observation on the relation between the vanishing of lower cohomologies of a coherent sheaf and the projective dimension of its section module.
\begin{proposition}\label{Resolution}
Let $M$ be a finitely generated graded $R$-module.
Then $\op{pdim}_R M \le 1$ iff $M \cong H^0_*(\tilde{M})$ and $H^i_*(\tilde{M}) = 0$ for all $1\le i\le n-2$.
\end{proposition}
\begin{proof}
Let $H^i_m(-)$ denote the $i$-th local cohomology module supported at the homogeneous maximal ideal $m$ of $R$.
There is a four-term exact sequence
\[
0 \to H^0_m(M) \to M \to H^0_*(\tilde{M}) \to H^1_m(M) \to 0
\]
along with isomorphisms $H^{i+1}_m(M) \cong H^i_*(\tilde{M})$ for $1\le i\le n$.
By the vanishing criterion of local cohomology, we have $\op{depth} M = \inf \{i \mid H^i_m(M) \ne 0\}$.
Finally, the Auslander-Buchsbaum formula states that $\op{pdim} M = n+1-\op{depth} M$.
Indeed, $\op{pdim}_R M \le 1$ iff $\op{depth} M \ge n$ iff $H^i_m(M) = 0$ for all $i\le n-1$, which by the exact sequence and the isomorphisms above is equivalent to $M \cong H^0_*(\tilde{M})$ and $H^i_*(\tilde{M}) = 0$ for all $1\le i\le n-2$.
\end{proof}
\begin{definition}
Let $\mathscr{E}$ be a rank $r$ bundle on $\mathbf{P}^n$ satisfying (\ref{Cohomology}).
By \Cref{Resolution}, the $R$-module $H^0_*(\mathscr{E})$ admits a unique (up to isomorphism) minimal graded free $R$-resolution
\begin{align}
0 \to \bigoplus_{i = 1}^l R(-a_i) \xrightarrow{\phi} \bigoplus_{i = 1}^{l+r} R(-b_i) \to H^0_*(\mathscr{E}) \to 0. \tag{$*$}\label{ModuleRes}
\end{align}
We always arrange the numbers $a_1 \le \dots \le a_l$ and $b_1 \le \dots \le b_{l+r}$ in \textbf{ascending} order, and write $\underline{a}$ and $\underline{b}$ for brevity.
We call $(\underline{a},\underline{b})$ \emph{the Betti numbers} of $\mathscr{E}$.
\end{definition}
Note that $\mathscr{E}$ is isomorphic to a direct sum of line bundles iff $H^0_*(\mathscr{E})$ is a free $R$-module iff $l = 0$ and the sequence $\underline{a}$ is empty.
\bigskip
The resolution (\ref{ModuleRes}) of graded $R$-modules sheafifies to a resolution
\begin{align}
0 \to \bigoplus_{i = 1}^l \mathscr{O}_{\mathbf{P}^n}(-a_i) \xrightarrow{\varphi} \bigoplus_{i = 1}^{l+r} \mathscr{O}_{\mathbf{P}^n}(-b_i) \to \mathscr{E} \to 0 \tag{$\star$}\label{SheafRes}
\end{align}
of $\mathscr{E}$ by direct sums of line bundles.
Conversely, a resolution (\ref{SheafRes}) of $\mathscr{E}$ by direct sums of line bundles gives rise to a free resolution (\ref{ModuleRes}) of the $R$-module $H^0_*(\mathscr{E})$ under the functor $H^0_*(-)$.
With this understanding, we shall speak of these two resolutions of modules and sheaves interchangeably.
In particular, the morphism $\varphi$ is called \emph{minimal} iff the corresponding map of $R$-modules $\phi$ is minimal, i.e. $\phi\otimes_R k = 0$.
\bigskip
\subsection{Betti numbers}
In this subsection we classify the Betti numbers of bundles in $\mathcal{VB}_{\P^n}^\dagger$.
\bigskip
For a pair $(\underline{a},\underline{b})$ of finite sequences of integers in ascending order, we write $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ for the set of isomorphism classes of bundles with Betti numbers $(\underline{a},\underline{b})$.
For a sequence of integers $\underline{d} := (d_1, \dots, d_l)$, we define
\[
L(\underline{d}):= \bigoplus_{i = 1}^l R(-d_i) \text{ and } \mathscr{L}(\underline{d}) := \bigoplus_{i = 1}^l \mathscr{O}_{\mathbf{P}^n}(-d_i).
\]
\begin{theorem}\label{Betti}
Let $\underline{a} = (a_1, \dots, a_l)$ and $\underline{b} = (b_1, \dots, b_{l+r})$ be two sequences of integers in ascending order for some $l\ge 0$ and $r > 0$.
The set $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ is nonempty iff $\underline{a}$ is empty or
\begin{align*}
r \ge n \text{ and } a_i > b_{n+i} \text{ for } i = 1,\dots, l. \tag{A}\label{Admissible}
\end{align*}
In this case, the cokernel of $\varphi$ represents the class of a bundle in $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ for a general minimal map $\varphi \in \op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$.
\end{theorem}
This generalizes the results of Bohnhorst and Spindler \cite{BS} for $r = n$.
Likewise, we say a pair of ascending sequences of integers $(\underline{a},\underline{b})$ is \emph{admissible} if $\underline{a}$ is empty or condition (\ref{Admissible}) holds.
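For instance, take $n = 2$, $\underline{a} = (2)$ and $\underline{b} = (0,0,1)$, so that $l = 1$ and $r = 2$. Condition (\ref{Admissible}) holds since $r = n$ and $a_1 = 2 > b_3 = 1$, and a general minimal map $\varphi$ yields a rank two bundle on $\mathbf{P}^2$ with resolution
\[
0 \to \mathscr{O}_{\mathbf{P}^2}(-2) \xrightarrow{\varphi} \mathscr{O}_{\mathbf{P}^2}^{2}\oplus\mathscr{O}_{\mathbf{P}^2}(-1) \to \mathscr{E} \to 0.
\]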
\bigskip
The fact that a bundle $\mathscr{E}$ satisfying (\ref{Cohomology}) that is not a direct sum of line bundles must have rank $r\ge n$ also follows from the Evans-Griffith splitting criterion \cite[Theorem 2.4]{EG}.
\bigskip
In order to prove \Cref{Betti}, we need two lemmas regarding depth of minors of matrices.
\bigskip
Let $S$ denote a noetherian ring and let $\phi:S^p \to S^q$ be a map between two free $S$-modules.
For any integer $r$, the ideal $I_r(\phi)$ of $(r\times r)$-minors of $\phi$ is defined as the image of the map $\wedge^r S^p \otimes_S (\wedge^r S^q)^* \to S$, which is induced by the map $\wedge^r\phi:\wedge^r S^p \to \wedge^r S^q$.
Similarly, let $\varphi: \bigoplus_{i = 1}^p\mathscr{O}_{\mathbf{P}^n_A}(-a_i) \to \bigoplus_{i = 1}^q \mathscr{O}_{\mathbf{P}^n_A}(-b_i)$ be a morphism of sheaves on $\mathbf{P}^n_A$ over a noetherian ring $A$. Set $S:=A[x_0,\dots, x_n]$ and let $\phi: \bigoplus_{i = 1}^p S(-a_i) \to \bigoplus_{i = 1}^q S(-b_i)$ denote the corresponding morphism of graded free $S$-modules given by $H^0_*(\varphi)$.
For any integer $r$, we define $I_r(\varphi) = I_r(\phi)$ as an ideal in $S$.
The depth of a proper ideal $I$ in a noetherian ring $S$ is defined to be the length of a maximal regular sequence in $I$.
The depth of the unit ideal is by convention $+\infty$.
Recall that if $S$ is Cohen-Macaulay, then $\op{depth} I = \op{codim} I$ for every proper ideal $I$.
\begin{lemma}\label{DepthOpen}
Let $A$ be a finitely generated integral domain over $k$, and let $S$ be a finitely generated $A$-algebra.
Suppose $\phi:S^q \to S^p$ is a morphism of free $S$-modules with $p\ge q$.
For a prime $P$ of $A$, let $\phi_P$ denote the morphism $\phi\otimes_A k(P)$ of free modules over the fiber ring $S\otimes_A k(P)$.
For any integer $d$, the set of primes $P$ in $A$ such that $\op{depth} I_q(\phi_P) \ge d$ is open in $\op{Spec} A$.
\end{lemma}
\begin{proof}
Note that $I_q(\phi) = I_q(\phi^*)$.
Let $\mathscr{K}_\bullet(\phi^*)$ be the Eagon-Northcott complex associated to $\phi^*$ as in \cite{EN}.
Note that the formation of the Eagon-Northcott complex is compatible with taking fibers, i.e. $\mathscr{K}_{\bullet}(\phi^*)\otimes_A k(P) = \mathscr{K}_{\bullet}(\phi^*\otimes_A k(P))$.
For each prime ideal $P$ of $A$, we have $\op{depth} I_q(\phi^*_P) \ge d$ iff $\mathscr{K}_\bullet(\phi^*)\otimes_A k(P)$ is exact after position $p-q+1-d$ by the main theorem in \cite{EN}.
The statement of the lemma follows from the general fact that the exactness locus of a family of complexes is open; see EGA IV, 9.4.2 \cite{EGA}.
\end{proof}
\begin{lemma}\label{DepthZeros}
Let $S$ be a standard graded finitely generated $k$-algebra.
Let $\phi:\bigoplus_{i =1}^q S(-a_i) \to \bigoplus_{i = 1}^p S(-b_i)$ be a morphism of graded free $S$-modules with $p\ge q$, and assume that $\phi$ is minimal, i.e. $\phi\otimes_S k = 0$.
Suppose that relative to some bases, the matrix of $\phi$ has a block of zeros of size $u\times v$.
Then $\op{codim} I_q(\phi) \le p-q+1-\inf(u+v,p+1)+\inf(u+v,q)$.
\end{lemma}
\begin{proof}
For the case of generic matrices over a field, this is a result of Giusti-Merle \cite{GM}.
We fix, once and for all, bases of the domain and target of $\phi$, and let $Z\subset \{1,\dots, p\} \times\{1,\dots, q\}$ be the $u\times v$ rectangle where the matrix of $\phi$ has zero entries.
Consider the polynomial ring $A := k\left[\{x_{ij}\}^{1\le i\le p}_ {1\le j\le q}\right]/(x_{ij} \mid (i,j)\in Z)$, which is the coordinate ring of the affine space of $(p\times q)$-matrices with a zero block of size $u\times v$ in position $Z$.
Let $\psi:A(-1)^q\to A^p$ be the morphism given by the generic matrix $(x_{ij})$.
Then $\op{codim} I_q(\psi) = p-q+1-\inf(u+v,p+1)+\inf(u+v,q)$ by \cite[Theorem 1.3]{GM}.
The general case follows from Serre's result on the superheight of prime ideals in a regular local ring.
The map $\phi$ corresponds to a morphism of $k$-algebras $A\to S$, where $x_{ij}$ is sent to the entry of the matrix of $\phi$ relative to the fixed bases.
In particular, note that $I_q(\phi) = SI_q(\psi)$.
Let $m$ and $m'$ denote the homogeneous maximal ideals of $A$ and $S$ respectively.
Since all entries of $\phi$ are in $m'$ by assumption, we have an induced morphism on the localizations $A_m \to S_{m'}$ where $S_{m'}m\subset m'$.
Let $P$ be a prime above $A_mI_q(\psi)$ of the least codimension.
Since $S_{m'}P\subset m'$, Serre's result on the superheight of prime ideals in a regular local ring \cite{Serre} implies that $\op{codim} S_{m'}P \le \op{codim} P$.
Now $I_q(\psi)$ and $SI_q(\psi)$ are homogeneous, and $S_{m'}I_q(\psi)\subset S_{m'}P$, therefore we conclude that
\begin{align*}
\op{codim} SI_q(\psi) & = \op{codim} S_{m'}I_q(\psi) \\
& \le \op{codim} S_{m'}P \\
& \le \op{codim} P\\
&= \op{codim} A_mI_q(\psi) \\
&= \op{codim} I_q(\psi)\\
& = p-q+1-\inf(u+v,p+1)+\inf(u+v,q). \qedhere
\end{align*}
\end{proof}
\bigskip
The following is a simple fact that allows us to translate between bundles and homogeneous matrices whose ideals of maximal minors have maximal depth.
\begin{proposition}\label{Bundle}
Let $\underline{a} = (a_1,\dots, a_l)$ and $\underline{b} = (b_1,\dots, b_{l+r})$ for some $l>0$ and $r\ge 0$.
For a map $\varphi \in \op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$, the cokernel of $\varphi$ is a rank $r$ bundle on $\mathbf{P}^n$ iff $\op{depth} I_l(\varphi) \ge n+1$.
In this case, we have a resolution of $\mathscr{E} :=\op{coker} \varphi$ by direct sums of line bundles
\[
0 \to \mathscr{L}(\underline{a}) \xrightarrow{\varphi} \mathscr{L}(\underline{b}) \to \mathscr{E} \to 0.
\]
\end{proposition}
\begin{proof}
The rank of $\op{coker} \varphi$ is $r$ iff $I_l(\varphi)$ is nonzero iff $\varphi$ is injective at the generic point of $\mathbf{P}^n$ iff $\varphi$ is injective.
The ideal $I_l(\varphi)$ cuts out the points on $\mathbf{P}^n$ where $\op{coker} \varphi$ is not locally free of rank $r$.
Thus $\op{coker} \varphi$ is a rank $r$ bundle iff $I_l(\varphi)$ is the unit ideal or is $m$-primary, where $m$ is the homogeneous maximal ideal of $R$.
Since $I_l(\varphi)$ is homogeneous and $R$ is Cohen-Macaulay of dimension $n+1$, this happens iff $\op{depth} I_l(\varphi) \ge n+1$.
\end{proof}
\begin{proof}[Proof of \Cref{Betti}]
If $\underline{a}$ is empty, then $\mathscr{E} := \mathscr{L}(\underline{b})$ has Betti numbers $(\underline{a},\underline{b})$.
Suppose $\underline{a}$ is nonempty and $(\underline{a},\underline{b})$ satisfies condition (\ref{Admissible}). Consider the minimal map $\varphi:\mathscr{L}(\underline{a}) \to \mathscr{L}(\underline{b})$ given by the following matrix
\[
\begin{blockarray}{ccccc}
& a_1 & \cdots & a_l & \\
\begin{block}{c(ccc)c}
b_1 & x_0^{a_1-b_1} & 0 & 0 & b_1\\
\vdots & \vdots & \ddots & 0 & \vdots \\
\vdots & \vdots & & x_0^{a_l-b_l} & b_l \\
b_{n+1} & x_n^{a_1-b_{n+1}} & & \vdots & \vdots \\
\vdots & 0 & \ddots & \vdots & \vdots \\
\vdots & 0 & 0 & x_n^{a_l-b_{l+n}} & b_{l+n}\\
\vdots & 0 & 0 & 0 & \vdots\\
b_{l+r} & 0 & 0 & 0 & b_{l+r}.\\
\end{block}
\end{blockarray}
\]
Explicitly, the $(k+i,i)$-th entry of the matrix is $x_k^{a_i-b_{k+i}}$ for $0\le k\le n$ and $1\le i\le l$, and all other entries are zero; the exponents are positive by (\ref{Admissible}), so $\varphi$ is minimal.
At a point $p$ of $\mathbf{P}^n$, let $k$ be the least index with $x_k(p)\ne 0$; then the $l\times l$ submatrix of $\varphi$ on the rows $k+1,\dots, k+l$ is lower triangular at $p$ with nonzero diagonal entries $x_k^{a_i-b_{k+i}}(p)$.
Since $\varphi$ thus drops rank nowhere on $\mathbf{P}^n$, we conclude that $\mathscr{E} := \op{coker} \varphi$ is a rank $r$ bundle with a resolution by direct sums of line bundles
\[
0 \to \mathscr{L}(\underline{a}) \xrightarrow{\varphi} \mathscr{L}(\underline{b}) \to \mathscr{E} \to 0
\]
by \Cref{Bundle}.
Since $\varphi$ is minimal, it follows from \Cref{Resolution} that $\mathscr{E} \in \mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$.
Conversely, suppose $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ is nonempty and $\underline{a}$ is nonempty.
Then there is a minimal map $\varphi \in \op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$ where $\op{coker} \varphi$ is a rank $r$ bundle $\mathscr{E}$.
Since $\varphi$ is minimal, it follows that $I_l(\varphi) \subset I_1(\varphi) \subset m$ is a proper ideal.
By \Cref{Bundle}, we have $\op{depth} I_l(\varphi) = n+1$.
By the main theorem in \cite{EN}, we have $\op{depth} I_l(\varphi) \le l+r-l+1 = r+1$.
It follows that we must have $r\ge n$.
Now suppose on the contrary that there is an index $1\le i\le l$ with $a_i \le b_{n+i}$.
The $(n+i,i)$-th entry of the matrix of $\varphi$ has degree $a_i-b_{n+i}\le 0$ and hence must be zero, since $\varphi$ is minimal.
In fact, since $\underline{a}$ and $\underline{b}$ are in ascending order, the matrix of $\varphi$ must contain a block of zeros
of size $(l+r-n-i+1)\times i$, as shown below:
\[
\begin{blockarray}{cccccc}
& a_1 & \cdots & a_i & \cdots & a_l \\
\begin{block}{c(ccccc)}
b_1 & & & & & \\
\vdots & & & & & \\
\vdots & & & & & \\
b_{n+i} & 0 & \cdots & 0 & & \\
\vdots & \vdots & & \vdots & & \\
\vdots & \vdots & & \vdots & & \\
\vdots & \vdots & & \vdots & & \\
b_{l+r} & 0 & \cdots & 0 & & \\
\end{block}
\end{blockarray}.
\]
By \Cref{DepthZeros}, we conclude that
\begin{align*}
\op{depth} I_l(\varphi) & \le l+r-l+1-\inf(l+r-n+1,l+r+1)+\inf(l+r-n+1,l)\\
& = r+1-(l+r-n+1)+l\\
& = n.
\end{align*}
This is a contradiction to the fact that $\op{depth} I_l(\varphi) = n+1$.
Now we prove the last statement.
It is obvious when $\underline{a}$ is empty, so we assume $\underline{a}$ is nonempty.
The set $\op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$ can be identified with the set of closed points of an affine space $\mathbf{A}^N$.
The subset of minimal maps corresponds to a linear subspace $\mathbf{A}^M$.
There is a tautological morphism $\Phi: \bigoplus_{i = 1}^l \mathscr{O}_{\mathbf{P}^n\times \mathbf{A}^M}(-a_i) \to \bigoplus_{i = 1}^{l+r} \mathscr{O}_{\mathbf{P}^n\times \mathbf{A}^M}(-b_i)$, whose fiber $\Phi_P$ at a closed point $P$ of $\mathbf{A}^M$ is the minimal map to which $P$ corresponds.
By \Cref{DepthOpen}, the set $U$ of points in $\mathbf{A}^M$ where $\op{depth} I_l(\Phi_P) \ge n+1$ is open.
Since there is a morphism $\varphi \in \op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$ whose cokernel is a bundle $\mathscr{E} \in \mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$, by \Cref{Bundle} the map $\varphi$ corresponds to a closed point in $U$.
It follows that $U$ is a nonempty open subset of the irreducible space $\mathbf{A}^M$, and hence dense.
\end{proof}
Recall that the category of bundles on $\mathbf{P}^n$ is a Krull-Schmidt category \cite{Atiyah}, i.e. every bundle $\mathscr{E}$ admits a decomposition $\mathscr{E}\cong \mathscr{E}_0\oplus \mathscr{L}$, unique up to isomorphism, where $\mathscr{L}$ is a direct sum of line bundles and $\mathscr{E}_0$ has no line bundle summands.
\begin{corollary}\label{FreeRank}
Let $\mathscr{E} \in \mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ for some $\underline{a}$ nonempty.
If $\mathscr{E} \cong \mathscr{E}_0\oplus \mathscr{L}$ is the Krull-Schmidt decomposition of $\mathscr{E}$, then
$n\le \op{rank} \mathscr{E}_0 \le \max\{j\mid a_l > b_{l+j}\}$.
\end{corollary}
\begin{proof}
Set $s := \max\{j\mid a_l > b_{l+j}\}$ and define $\underline{b}' := b_1,\dots, b_{l+s}$.
Let $\pi:\mathscr{L}(\underline{b}) \to \mathscr{L}(\underline{b}')$ be the coordinate projection.
If $\varphi \in \op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$ is a minimal map whose cokernel is the bundle $\mathscr{E}$, then we claim that $\varphi' := \pi\circ\varphi$ is a minimal map in $\op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}'))$ whose cokernel is a bundle $\mathscr{E}'$.
To see this, observe that since $a_l\le b_{l+i}$ for $s<i\le r$ and $\varphi$ is minimal, the last $(r-s)$ rows of the matrix representing $\varphi$ relative to any bases are zero.
In particular, we have $I_l(\varphi) = I_l(\pi\circ \varphi)$.
By \Cref{Bundle}, the cokernel of $\varphi'$ is a bundle.
It follows from the snake lemma that $\mathscr{E} \cong \mathscr{E}'\oplus \mathscr{L}$, where $\mathscr{L}$ is the kernel of the projection $\pi$.
This shows that $\op{rank} \mathscr{E}_0 \le s$.
Observe that $\mathscr{E}_0$ also satisfies (\ref{Cohomology}) and thus $\op{rank} \mathscr{E}_0 \ge n$ by \Cref{Betti}.
\end{proof}
\bigskip
\subsection{Finiteness} In this subsection, we show that there are only finitely many possible Betti numbers of bundles in $\mathcal{VB}_{\P^n}^\dagger$ with given rank, first Chern class and bounded regularity.
\bigskip
Recall that a coherent sheaf $\mathscr{F}$ on $\mathbf{P}^n$ is said to be $d$-regular if $H^i(\mathscr{F}(d-i)) = 0$ for all $i>0$.
The Castelnuovo-Mumford regularity of $\mathscr{F}$ is the least integer $d$ such that $\mathscr{F}$ is $d$-regular.
By the semicontinuity of cohomologies, being $d$-regular is an open condition for a family of coherent sheaves on $\mathbf{P}^n$.
The notion of regularity also exists for graded $R$-modules.
See \cite{DE} for an exposition.
\bigskip
If $\mathscr{E} \in \mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$, then $\op{reg} \mathscr{E} = \max(b_{l+r}, a_l-1)$.
Since the regularity depends only on the Betti numbers, we define $\op{reg} (\underline{a},\underline{b}) := \max(b_{l+r},a_l-1)$ for any admissible pair $(\underline{a},\underline{b})$.
\begin{proposition}\label{Finite}
There are only finitely many possible Betti numbers $(\underline{a},\underline{b})$ of rank $r$ bundles on $\mathbf{P}^n$ satisfying (\ref{Cohomology}) with fixed first Chern class $c_1$ and regularity $\le d$.
\end{proposition}
\begin{proof}
Since $c_1 = \sum_{i = 1}^l a_i - \sum_{i = 1}^{l+r} b_i$, the statement is evidently true for direct sums of line bundles.
Thus we may consider the case $l>0$.
Since $a_i$ and $b_i$ are bounded above by $d+1$, we only need to show that $l$ is bounded above and $b_1$ is bounded below.
Consider the following inequalities
\begin{align*}
l &\le \sum_{i = 1}^l (a_i-b_{i+n}) \\
& = c_1+\sum_{i = 1}^n b_i +\sum_{i = l+n+1}^{l+r} b_i\\
& \le c_1 + r \cdot d.
\end{align*}
And similarly,
\begin{align*}
b_1 &= -c_1-\sum_{i = 2}^n b_i - \sum_{i = l+n+1}^{l+r} b_i +\sum_{i =1}^l (a_i-b_i)\\
& \ge -c_1 - (r-1)\cdot d + l. \qedhere
\end{align*}
\end{proof}
This generalizes the observation of Dionisi-Maggesi \cite{DM} for the case $n = r = 2$.
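As a numerical illustration, take $n = r = 2$, $c_1 = 0$ and regularity bound $d = 2$. The inequalities in the proof give
\[
l \le c_1 + r\cdot d = 4, \qquad b_1 \ge -c_1-(r-1)\cdot d+l = l-2 \ge -1,
\]
so the possible Betti numbers indeed range over a finite set.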
\bigskip
\subsection{Hilbert functions of bundles}\label{BundleSequence}
In this subsection, we classify the Hilbert functions of bundles in $\mathcal{VB}_{\P^n}^\dagger$.
We introduce an efficient way to represent and generate them.
\bigskip
Recall that the \emph{Hilbert function} of a bundle $\mathscr{E}$ on $\mathbf{P}^n$ is the function $H_\mathscr{E}:\mathbb{Z} \to \mathbb{Z}$ given by $H_\mathscr{E}(t) = \dim_k H^0(\mathscr{E}(t))$. For any function $H:\mathbb{Z}\to \mathbb{Z}$, we define $\mathcal{VB}_{\P^n}^\dagger(H)$ to be the subset of $\mathcal{VB}_{\P^n}^\dagger$ consisting of isomorphism classes of bundles with Hilbert function $H$.
\begin{definition}
The \emph{numerical difference} of a function $H:\mathbb{Z}\to \mathbb{Z}$ is a function $\partial H:\mathbb{Z} \to \mathbb{Z}$ given by $\partial H(t) := H(t)-H(t-1)$.
We inductively define $\partial^{i+1}H := \partial \partial^i H$.
\end{definition}
Note that if $H:\mathbb{Z}\to \mathbb{Z}$ is a function such that $H(t) = 0$ for $t\ll 0$, then $H$ can be recovered from its $i$-th difference $\partial^i H$ for any $i\ge 0$.
\begin{theorem}\label{Hilbert}
A function $H:\mathbb{Z}\to \mathbb{Z}$ is the Hilbert function of a rank $r$ bundle $\mathscr{E}\in \mathcal{VB}_{\P^n}^\dagger$ if and only if
\begin{enumerate}
\item $\partial^nH(t) = 0$ for $t\ll 0$ and $\partial^nH(t) = r$ for $t\gg 0$,
\item $\partial^nH(t+1) < \partial^nH(t)$ implies that $\partial^nH(t+1)\ge n$.
\end{enumerate}
\end{theorem}
\begin{proof} Let $\mu(\underline{d},t)$ denote the number of times an integer $t$ occurs in the sequence $\underline{d}$.
($\Longrightarrow$): Suppose $\mathscr{E}$ is a rank $r$ bundle in $\mathcal{VB}_{\P^n}^\dagger(H)$.
The Grothendieck-Riemann-Roch formula states that
\[
\chi(\mathscr{E}(t)) = \int_{\mathbf{P}^n} \operatorname{ch}(\mathscr{E}(t))\cdot \operatorname{td}(T_{\mathbf{P}^n})\label{RiemannRoch}.
\]
A routine computation shows that the leading term of the Hilbert polynomial $\chi(\mathscr{E}(t))$ is $r\cdot t^n /n!$.
Since the Hilbert function $H$ eventually agrees with the Hilbert polynomial, we see that $\partial^nH(t) = 0$ for $t\ll 0$ and $\partial^n H(t) = r$ for $t\gg 0$.
Let $(\underline{a},\underline{b})$ be the Betti numbers of $\mathscr{E}$.
If $\underline{a}$ is empty, then $\mathscr{E}$ is a direct sum of line bundles and $\partial^nH$ is monotone nondecreasing and thus satisfies both conditions.
We prove the case where $\underline{a}$ is non-empty.
Consider the minimal free resolution
\[
0 \to L(\underline{a}) \to L(\underline{b}) \to H^0_*(\mathscr{E}) \to 0.
\]
A simple calculation shows that the $(n+1)$-st difference of the Hilbert function of $R(-a)$ is the delta function at $a$.
It follows from the minimal resolution that $\partial^{n+1} H(t) = \mu(\underline{b},t)-\mu(\underline{a},t)$.
Suppose $\partial^nH(t+1) < \partial^nH(t)$ for some $t$, then $\partial^{n+1}H(t+1) <0$ and thus $\mu(\underline{a},t+1)>0$.
Let $j$ be the largest index where $a_j = t+1$.
By \Cref{Betti}, we have $a_j>b_{j+n}$ and therefore
\[
\partial^nH(t+1) = \sum_{i\le t+1} \partial^{n+1}H(i) = \sum_{i\le t+1}(\mu(\underline{b},i)-\mu(\underline{a},i)) \ge j+n-j = n.
\]
($\Longleftarrow$): Conversely, suppose $H$ satisfies the conditions of the theorem.
We define the ascending sequences of integers $\underline{\alpha}$ and $\underline{\beta}$ by the property that for all $t\in \mathbb{Z}$,
\[
\mu(\underline{\alpha},t) = \max\{0, \partial^nH(t-1)-\partial^nH(t)\},\quad \mu(\underline{\beta},t) = \max\{0, \partial^n H(t) -\partial^n H(t-1)\}.
\]
By the first condition on $H$, the sequences $\underline{\alpha}$ and $\underline{\beta}$ are finite.
Furthermore, if $\underline{\alpha}$ has length $l$ then $\underline{\beta}$ has length $l+r$.
The second condition on $H$ implies that $\alpha_i \ge \beta_{i+n}$ for all $1\le i\le l$.
Since $\underline{\alpha}$ and $\underline{\beta}$ share no common entries by construction, it follows that $\alpha_i > \beta_{i+n}$ for all $1\le i\le l$.
By \Cref{Betti}, there is a rank $r$ bundle $\mathscr{E}$ on $\mathbf{P}^n$ satisfying (\ref{Cohomology}) with Betti numbers $(\underline{\alpha},\underline{\beta})$.
The Hilbert function of $\mathscr{E}$ is $H$ by the reasoning of the previous direction.
\end{proof}
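To illustrate the construction in the proof, suppose $n = r = 2$ and $\partial^2H(t)$ takes the value $1$ at $t = 0$, the value $3$ at $t = 1$, the value $2$ for $t\ge 2$, and $0$ for $t<0$. Recording the increments of $\partial^2 H$ in $\underline{\beta}$ and the drops in $\underline{\alpha}$ gives
\[
\underline{\alpha} = (2), \qquad \underline{\beta} = (0,1,1),
\]
and the pair is admissible since $\alpha_1 = 2 > \beta_3 = 1$; the drop $\partial^2H(2) = 2 \ge n$ reflects the second condition of the theorem.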
The above theorem suggests that we use the finitely many intermediate values of $\partial^n H$ to encode the infinitely many values of the Hilbert function $H$.
\begin{definition}
A finite sequence of integers $\underline{B} = B_1,\dots, B_m$ for some $m\ge 1$ is called a \emph{bundle sequence of rank $r$} if it satisfies the following
\begin{enumerate}
\item $B_i >0$ for $1\le i\le m$,
\item $B_m = r$ and $B_{m-1} \ne r$,
\item $B_{i+1}<B_i$ implies $B_{i+1} \ge n$.
\end{enumerate}
If $\mathscr{E}$ is a rank $r$ bundle in $\mathcal{VB}_{\P^n}^\dagger(H)$ for some Hilbert function $H$, then we set
\[
s_0 := \inf\{t \mid \partial^n H(t)\ne 0\},\quad s_1 := \sup \{t \mid \partial^n H(t) \ne r\}.
\]
The sequence $\partial^nH(s_0), \partial^nH(s_0+1),\dots, \partial^nH(s_1+1)$ is a bundle sequence of rank $r$ by \Cref{Hilbert}, which we call the \emph{bundle sequence of $H$ and of $\mathscr{E}$}.
\end{definition}
By \Cref{Hilbert}, there is a one-to-one correspondence between the set of Hilbert functions of rank $r$ bundles in $\mathcal{VB}_{\P^n}^\dagger$ \textbf{up to shift} and the set of bundle sequences of rank $r$.
The ambiguity of shift disappears if we deal with normalized bundles.
\begin{definition}
We say a rank $r$ bundle $\mathscr{E}$ on $\mathbf{P}^n$ is \emph{normalized} if $-r < c_1(\mathscr{E}) \le 0$.
Since $c_1(\mathscr{E}(t)) = c_1(\mathscr{E})+r\cdot t$, it follows that every bundle can be normalized after twisting by the line bundle $\mathscr{O}(-\lceil c_1(\mathscr{E})/r\rceil)$.
\end{definition}
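For instance, if $\mathscr{E}$ has rank $3$ and $c_1(\mathscr{E}) = 4$, then twisting by $\mathscr{O}(-\lceil 4/3\rceil) = \mathscr{O}(-2)$ gives
\[
c_1(\mathscr{E}(-2)) = 4 + 3\cdot(-2) = -2,
\]
which lies in the required range $-3 < -2 \le 0$.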
We define the degree of a bundle sequence $\underline{B} = B_1,\dots, B_m$, denoted by $\deg \underline{B}$, to be the sum $B_1+\dots+B_m$.
\begin{proposition}\label{Regularity}
If a normalized rank $r$ bundle $\mathscr{E}\in \mathcal{VB}_{\P^n}^\dagger$ has bundle sequence $\underline{B}$, then $\op{reg} \mathscr{E} \ge \lceil\deg \underline{B} / r \rceil-2$.
\end{proposition}
\begin{proof}
Suppose $\mathscr{E}$ has Betti numbers $(\underline{a},\underline{b})$ and Hilbert function $H$.
We set $c := \max(a_l,b_{l+r})$ and $s_1 := \sup\{t\mid \partial^n H(t) \ne r\}$.
It follows from the short exact sequence
\[
0 \to L(\underline{a}) \to L(\underline{b}) \to H^0_*(\mathscr{E}) \to 0
\]
that $s_1<c$.
We have
\begin{align*}
c_1(\mathscr{E}) & = \sum_{i = 1}^l a_i-\sum_{i = 1}^{l+r} b_i = -\sum_t t \cdot \partial^{n+1}H(t) = -\sum_t t \cdot (\partial^n H(t)-\partial^nH(t-1))\\
& = \sum_{t\le s_1+1} t \cdot \partial^n H(t-1) - \sum_{t\le s_1+1} t \cdot \partial^n H(t)\\
& = \sum_{t \le s_1} \partial^n H(t) - (s_1+1)\cdot r = \deg \underline{B} - (s_1+2)\cdot r \ge \deg \underline{B} -(c+1)\cdot r.
\end{align*}
Since $\mathscr{E}$ is normalized, we must have $c \ge \lceil \deg \underline{B} / r\rceil-1$.
Finally, the regularity of $\mathscr{E}$ is $c$ or $c-1$ depending on whether $b_{l+r} \ge a_l-1$ or not; in either case $\op{reg} \mathscr{E} \ge c-1 \ge \lceil\deg \underline{B} / r \rceil-2$.
\end{proof}
\begin{proposition}\label{Inductive}
If $\underline{B} = B_1,\dots, B_m$ is a bundle sequence of rank $r$ and degree $d$ with $m\ge 2$, then $\underline{B}' = B_2,\dots, B_m$ is a bundle sequence of rank $r$ and degree $d-B_1$.
\end{proposition}
It follows from \Cref{Regularity} and \Cref{Inductive} that we can inductively generate, in the form of bundle sequences, all Hilbert functions of normalized bundles satisfying (\ref{Cohomology}) up to any bounded regularity.
The generation is reduced to a partition problem with constraints.
\begin{example}\label{Ex1}
The following are all bundle sequences of rank $4$ and degree $9$ on $\mathbf{P}^3$:
\[
\{(1^5,4),(1^3,2,4),(1^2,3,4),(1,2^2,4),(2,3,4),(5,4) \}.
\]
Here we use $t^j$ to denote the sequence of $j$ copies of $t$.
\end{example}
\bigskip
\subsection{Semistability} \label{Semistability}
In this subsection, we address the following question: do the Betti numbers determine the semistability of a bundle in $\mathcal{VB}_{\P^n}^\dagger$, and if so, what is the criterion?
\bigskip
Here we use $\mu$-semistability, where $\mu(\mathscr{F}) := c_1(\mathscr{F})/\operatorname{rank}(\mathscr{F})$ for any torsion-free coherent sheaf $\mathscr{F}$ on $\mathbf{P}^n$.
The results are similar for Hilbert polynomial semistability as in \cite{Maruyama}.
\bigskip
For $r<n$, all rank $r$ bundles $\mathscr{E}$ satisfying (\ref{Cohomology}) are direct sums of line bundles by \Cref{Betti}, and none of these is semistable except those of the form $\mathscr{O}(d)^r$.
The main result in \cite{BS} states that if $\mathscr{E}$ satisfies (\ref{Cohomology}) and has rank $r = n$, then $\mathscr{E}$ is semistable iff its Betti numbers $(\underline{a},\underline{b})$ satisfy $b_1\ge \mu(\mathscr{E}) = (\sum_{i = 1}^l a_i-\sum_{i = 1}^{l+n} b_i)/n$.
The latter condition is obviously necessary.
\bigskip
The following example demonstrates that for $r>n$, the semistability of a bundle in $\mathcal{VB}_{\P^n}^\dagger$ is not determined by its Betti numbers in general.
\begin{example}
For any $r>n$, consider $(\underline{a},\underline{b})$ where
\[
a_1 = 2,\quad b_i = \left\{\begin{array}{l l} 0 & 1\le i < r\\
1 & r\le i \le r+1.\end{array}\right.
\]
Let $\varphi$ and $\psi$ be two maps in $\op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$ defined by the matrices
\[
(0,\dots, x_0^2,\dots,x_{n-1}^2, x_n, 0)^T,\quad (0,\dots, x_0^2,\dots, x_{n-2}^2, x_{n-1}, x_n)^T
\]
respectively.
Then $\mathscr{E}_1 := \op{coker} \varphi$ and $\mathscr{E}_2 := \op{coker} \psi$ are rank $r$ bundles satisfying (\ref{Cohomology}) with Betti numbers $(\underline{a},\underline{b})$ by \Cref{Bundle}.
Furthermore, it is easy to see that $\mathscr{E}_1 \cong \mathscr{E}_1'\oplus \mathscr{O}(-1) \oplus \mathscr{O}^{r-n-1}$ and $\mathscr{E}_2 \cong \mathscr{E}_2'\oplus \mathscr{O}^{r-n}$ for some rank $n$ bundles $\mathscr{E}_1'$ and $\mathscr{E}_2'$ respectively.
Since $\mu(\mathscr{E}_1) = \mu(\mathscr{E}_2) = 0$, it is clear that $\mathscr{E}_1$ is not semistable.
On the other hand, the bundle $\mathscr{E}_2'$ is semistable by the criterion for the case $r = n$ stated above.
Since both $\mathscr{E}_2'$ and $\mathscr{O}^{r-n}$ are semistable bundles with $\mu = 0$, it follows that so is $\mathscr{E}_2$.
\end{example}
\bigskip
The main reason to discuss semistability is that we might hope for a coarse moduli structure on the set $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$.
However, the above example illustrates the difficulty.
In \Cref{Moduli} we will define a topology on $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$, where the semistable bundles form an open subspace $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$.
The space $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ supports the structure of a subscheme of $\mathcal{M}(\chi)$, the coarse moduli space of semistable torsion-free sheaves with Hilbert polynomial $\chi$, whose existence is established by Maruyama \cite{Maruyama}.
\bigskip
\section{The Betti number stratification}\label{Stratification}
The set $\mathcal{VB}_{\P^n}^\dagger$ is the disjoint union of $\mathcal{VB}_{\P^n}^\dagger(H)$ for all possible Hilbert functions $H$ which are classified by \Cref{Hilbert}.
In this section we define a natural topology on $\mathcal{VB}_{\P^n}^\dagger(H)$ and study how $\mathcal{VB}_{\P^n}^\dagger(H)$ is stratified by bundles with different Betti numbers.
In the following, we fix a Hilbert function $H$ satisfying the conditions of \Cref{Hilbert}.
\bigskip
\subsection{The graded lattice of Betti numbers}\label{Lattice}
In this subsection we show that all possible Betti numbers of bundles in $\mathcal{VB}_{\P^n}^\dagger(H)$ form a graded lattice, such that those with bounded regularity form a finite sublattice.
\begin{definition}
We define $\mathcal{B}\kern -.5pt etti(H)$ to be the set of Betti numbers $(\underline{a},\underline{b})$ of bundles in $\mathcal{VB}_{\P^n}^\dagger(H)$.
There is a grading $\mathcal{B}\kern -.5pt etti(H) = \bigsqcup_q \mathcal{B}\kern -.5pt etti^q(H)$, where
\[
\mathcal{B}\kern -.5pt etti^q(H) := \{(\underline{a},\underline{b})\in \mathcal{B}\kern -.5pt etti(H)\mid \text{$\underline{a}$ and $\underline{b}$ have exactly $q$ entries in common}\}.
\]
\end{definition}
We remark that $\mathcal{B}\kern -.5pt etti(H)$ is infinite in general without restrictions on regularity.
This is because the Hilbert function $H$ bounds the regularity only from below (see \Cref{Regularity}), not from above, as the following example demonstrates.
\begin{example}
Let $(\underline{a},\underline{b})\in \mathcal{B}\kern -.5pt etti(H)$.
Choose an arbitrarily large integer $c$, regarded as a singleton sequence; then the pair $(\underline{a},\underline{b})+c$ is admissible by \Cref{Betti}.
Note that any bundle with these Betti numbers has a line bundle summand by \Cref{FreeRank}.
\end{example}
\begin{proposition}
There is a unique element in $\mathcal{B}\kern -.5pt etti^0(H)$, which we denote by $(\underline{\alpha},\underline{\beta})$.
\end{proposition}
\begin{proof}
The construction of an element in $\mathcal{B}\kern -.5pt etti^0(H)$ is given in the proof of \Cref{Hilbert}.
Recall from the proof of \Cref{Hilbert} that $\partial^{n+1}H(t)= \mu(\underline{\alpha},t)-\mu(\underline{\beta},t)$.
The uniqueness of $(\underline{\alpha},\underline{\beta})$ follows from the fact that either $\mu(\underline{\alpha},t) = 0$ or $\mu(\underline{\beta},t) = 0$ by assumption.
\end{proof}
\bigskip
We now define a partial order on all pairs of increasing sequences of integers.
\begin{definition}
Let $\underline{a},\underline{b},\underline{c}$ be three finite sequences of integers in ascending order.
The sum $\underline{a}+\underline{c}$ is defined to be the sequence obtained by appending $\underline{c}$ to $\underline{a}$ and sorting in ascending order.
It is clear that this operation is associative.
We define $(\underline{a},\underline{b})+\underline{c}$ to be the pair $(\underline{a}+\underline{c}, \underline{b}+\underline{c})$.
If $(\underline{a}',\underline{b}') = (\underline{a},\underline{b})+\underline{c}$ for some $\underline{c}$, then we say $(\underline{a},\underline{b})$ is a \emph{generalization} of $(\underline{a}',\underline{b}')$ and write $(\underline{a},\underline{b})\preceq (\underline{a}',\underline{b}')$.
\end{definition}
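To illustrate these operations on arbitrarily chosen sequences (not tied to any particular bundle), consider the following.
\begin{example}
Let $\underline{a} = (0,2)$, $\underline{b} = (-1,1,3)$ and $\underline{c} = (1)$.
Then $\underline{a}+\underline{c} = (0,1,2)$ and $\underline{b}+\underline{c} = (-1,1,1,3)$, so $(\underline{a},\underline{b})+\underline{c} = ((0,1,2),(-1,1,1,3))$ and $(\underline{a},\underline{b}) \preceq ((0,1,2),(-1,1,1,3))$.
\end{example}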
\bigskip
A direct consequence of \Cref{Betti} is that admissibility is stable under generalization.
\begin{lemma}\label{Generalization}
If $(\underline{a},\underline{b})\preceq (\underline{a}',\underline{b}')$ and $(\underline{a}',\underline{b}')$ is admissible, then so is $(\underline{a},\underline{b})$.
\end{lemma}
\begin{proof}
By induction, it suffices to prove the case where $\underline{a}'$ and $\underline{b}'$ have a common entry $c$ at indices $p$ and $q$ respectively, and $(\underline{a},\underline{b})$ is obtained from $(\underline{a}',\underline{b}')$ by removing $a'_p$ and $b'_q$.
We may assume that $p$ and $q$ are the largest indices with $a'_p = c$ and $b'_q = c$ respectively.
For $i < p$, we have $a_i = a'_i$.
Moreover, $q > n+p$ by \Cref{Betti}, so $i+n < q$ and hence $b_{i+n} = b'_{i+n}$ for $i<p$.
Therefore $a_i = a'_i > b'_{i+n} = b_{i+n}$ for $i<p$.
For $i>p$, we have $a_{i-1} = a'_i > c$.
In this case, either $i+n \le q$, in which case $b_{i+n-1} \le c < a_{i-1}$;
or $i+n>q$, and $b_{i+n-1} = b'_{i+n}$ thus $b_{i+n-1} < a_{i-1}$.
We conclude that $(\underline{a},\underline{b})$ is also admissible.
\end{proof}
\begin{corollary}\label{Minimum}
Every $(\underline{a},\underline{b})$ in $\mathcal{B}\kern -.5pt etti(H)$ is of the form $(\underline{\alpha},\underline{\beta})+\underline{c}$ for some $\underline{c}$.
\end{corollary}
\bigskip
The main theorem of this subsection is the following.
\begin{theorem}\label{GradedLattice}
The set $\mathcal{B}\kern -.5pt etti(H)$ has the structure of a graded lattice given by the partial order $\preceq$ and the grading $\mathcal{B}\kern -.5pt etti(H) = \bigsqcup_q \mathcal{B}\kern -.5pt etti^q(H)$.
\end{theorem}
For the clarity of the proof, we establish the existence of the join in several steps.
\begin{lemma}\label{Diamond}
If $c$ and $d$ are two distinct integers (considered as singleton sequences) such that both $(\underline{a},\underline{b})+c$ and $(\underline{a},\underline{b})+d$ are admissible, then so is $(\underline{a},\underline{b})+c+d$.
\end{lemma}
\begin{proof}
The lemma is simple, but the notation may make it appear more complicated than it is.
Nonetheless, we include a proof here for the sake of completeness.
For an ascending sequence $\underline{d}$ and an integer $t$, let $p(\underline{d},t)$ denote the largest index $i$ where $d_i = t$.
We may assume $c<d$, and write $(\underline{a}',\underline{b}') := (\underline{a},\underline{b})+c$, $(\underline{a}'',\underline{b}'') := (\underline{a},\underline{b})+d$ and $(\underline{a}''',\underline{b}''') := (\underline{a},\underline{b})+c+d$.
Since $(\underline{a}',\underline{b}')$ is admissible, we have $p(\underline{a}',c)<p(\underline{b}',c)-n$.
Since $c<d$, it follows that $p(\underline{a}''',c) = p(\underline{a}',c)$ and $p(\underline{b}''',c) = p(\underline{b}',c)$. We conclude that $p(\underline{a}''',c) < p(\underline{b}''',c)-n$.
Since $(\underline{a}'',\underline{b}'')$ is admissible, we have $p(\underline{a}'',d)<p(\underline{b}'',d)-n$.
Since $c<d$, it follows that $p(\underline{a}''',d) = p(\underline{a}'',d)+1$ and $p(\underline{b}''',d) = p(\underline{b}'',d)+1$.
We conclude that $p(\underline{a}''',d) < p(\underline{b}''',d)-n$.
Finally, we show that $(\underline{a}''',\underline{b}''')$ is admissible.
For $i<p(\underline{a}''',d)$, we have $i+n<p(\underline{b}''',d)$ and thus $a'''_i = a_i' > b_{i+n}' = b'''_{i+n}$.
For $p(\underline{b}''',c)-n < i$, we have $p(\underline{a}''',c)<i$ and thus $a'''_i = a_{i-1}'' > b_{i+n-1}'' = b'''_{i+n}$.
For $p(\underline{a}''',d)\le i \le p(\underline{b}''',c)-n$, we have $a_i \ge d > c \ge b_{i+n}$.
\end{proof}
\begin{lemma}\label{Ladder}
If $\underline{c}$ is an integer sequence and $d$ is an integer (considered as a singleton sequence) not appearing in $\underline{c}$, such that both $(\underline{a},\underline{b})+\underline{c}$ and $(\underline{a},\underline{b})+d$ are admissible, then so is $(\underline{a},\underline{b})+\underline{c}+d$.
\end{lemma}
\begin{proof}
By \Cref{Generalization} and \Cref{Diamond}, the pair $(\underline{a},\underline{b})+c_1+d$ is admissible.
Applying \Cref{Diamond} again with $(\underline{a},\underline{b})+c_1$ in place of $(\underline{a},\underline{b})$, we see that $(\underline{a},\underline{b})+c_1+c_2+d$ is admissible.
By induction it follows that $(\underline{a},\underline{b})+\underline{c}+d$ is admissible.
\end{proof}
\begin{proof}[Proof of \Cref{GradedLattice}]
If $(\underline{a}',\underline{b}')\in \mathcal{B}\kern -.5pt etti^i(H)$ and $(\underline{a},\underline{b})\in \mathcal{B}\kern -.5pt etti^j(H)$ such that $(\underline{a}',\underline{b}') =(\underline{a},\underline{b})+\underline{c}$ for some $\underline{c}$, then obviously $i\ge j$.
The cover relations in $\mathcal{B}\kern -.5pt etti(H)$ are given exactly by adding singleton sequences.
It follows that $(\mathcal{B}\kern -.5pt etti(H),\preceq)$ is a graded poset.
Suppose $(\underline{a},\underline{b})$ and $(\underline{a}',\underline{b}')$ are in $\mathcal{B}\kern -.5pt etti(H)$.
By \Cref{Minimum}, there are sequences $\underline{c}$ and $\underline{c}'$ such that $(\underline{a},\underline{b}) = (\underline{\alpha},\underline{\beta})+\underline{c}$ and $(\underline{a}',\underline{b}') = (\underline{\alpha},\underline{\beta})+\underline{c}'$.
We define $\min(\underline{c},\underline{c}')$ to be the ascending integer sequence in which an integer $t$ occurs $\min(\mu(\underline{c},t),\mu(\underline{c}',t))$ times, and similarly for $\max(\underline{c},\underline{c}')$.
Clearly $(\underline{\alpha},\underline{\beta})+\min(\underline{c},\underline{c}') \preceq (\underline{\alpha},\underline{\beta})+\underline{c} = (\underline{a},\underline{b})$, and thus it is admissible by \Cref{Generalization}.
It follows that $(\underline{\alpha},\underline{\beta})+\min(\underline{c},\underline{c}')$ is the meet of $(\underline{a},\underline{b})$ and $(\underline{a}',\underline{b}')$ in $\mathcal{B}\kern -.5pt etti(H)$.
We claim that $(\underline{\alpha},\underline{\beta})+\max(\underline{c},\underline{c}')$ is admissible, and thus it is the join of $(\underline{a},\underline{b})$ and $(\underline{a}',\underline{b}')$ in $\mathcal{B}\kern -.5pt etti(H)$.
To see this, we may replace $(\underline{\alpha},\underline{\beta})$ by $(\underline{\alpha},\underline{\beta})+\min(\underline{c},\underline{c}')$ and assume that $\underline{c}$ and $\underline{c}'$ have no common entries, so that $\max(\underline{c},\underline{c}') = \underline{c}+\underline{c}'$.
By \Cref{Ladder}, the pair $(\underline{\alpha},\underline{\beta})+\underline{c}+c_1'$ is admissible.
Applying \Cref{Ladder} again with $(\underline{\alpha},\underline{\beta})+c_1'$ in place of $(\underline{\alpha},\underline{\beta})$, we conclude that $(\underline{\alpha},\underline{\beta})+c_1'+\underline{c}+c_2'$ is admissible.
By induction, it follows that $(\underline{\alpha},\underline{\beta})+\underline{c}+\underline{c}'$ is admissible.
\end{proof}
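To illustrate the lattice operations concretely (assuming the relevant pairs lie in $\mathcal{B}\kern -.5pt etti(H)$), consider the following.
\begin{example}
Suppose $(\underline{a},\underline{b}) = (\underline{\alpha},\underline{\beta})+(0,1)$ and $(\underline{a}',\underline{b}') = (\underline{\alpha},\underline{\beta})+(1,2)$ both lie in $\mathcal{B}\kern -.5pt etti(H)$.
Then $\min((0,1),(1,2)) = (1)$ and $\max((0,1),(1,2)) = (0,1,2)$, so the meet of the two pairs is $(\underline{\alpha},\underline{\beta})+(1)$ and their join is $(\underline{\alpha},\underline{\beta})+(0,1,2)$.
\end{example}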
For any integer $d$, let $\mathcal{B}\kern -.5pt etti(H)_{\le d}$ denote the subset of Betti numbers of bundles that are $d$-regular.
The set $\mathcal{B}\kern -.5pt etti(H)_{\le d}$ inherits a grading $\bigsqcup_{q\ge 0} \mathcal{B}\kern -.5pt etti^q(H)_{\le d}$, where $\mathcal{B}\kern -.5pt etti^q(H)_{\le d} := \mathcal{B}\kern -.5pt etti^q(H)\cap \mathcal{B}\kern -.5pt etti(H)_{\le d}$.
\begin{corollary}\label{Sublattice}
For any integer $d$, the set $\mathcal{B}\kern -.5pt etti(H)_{\le d}$ is a finite graded lattice isomorphic to the lattice of subsequences of some sequence $\underline{c}$.
\end{corollary}
\begin{proof}
If $(\underline{a},\underline{b})\preceq (\underline{a}',\underline{b}')$, then $\op{reg} (\underline{a},\underline{b})\le \op{reg} (\underline{a}',\underline{b}')$.
If $(\underline{a}'',\underline{b}'')$ is the join of $(\underline{a},\underline{b})$ and $(\underline{a}',\underline{b}')$ in $\mathcal{B}\kern -.5pt etti(H)$, then the regularity of $(\underline{a}'',\underline{b}'')$ is the maximum of those of $(\underline{a},\underline{b})$ and $(\underline{a}',\underline{b}')$ by the construction in the proof of \Cref{GradedLattice}.
It follows that $\mathcal{B}\kern -.5pt etti(H)_{\le d}$ is a graded lattice.
The finiteness of $\mathcal{B}\kern -.5pt etti(H)_{\le d}$ follows from \Cref{Finite}.
Thus there is a maximum element of the form $(\underline{\alpha},\underline{\beta})+\underline{c}$ for some sequence $\underline{c}$. By \Cref{Generalization}, we see that
\[
\mathcal{B}\kern -.5pt etti^q(H)_{\le d} = \{(\underline{\alpha},\underline{\beta})+\underline{c}'\mid \underline{c}' \text{ is a subsequence of $\underline{c}$ of length $q$}\}. \qedhere
\]
\end{proof}
\begin{example}
Let $H$ be the Hilbert function of a normalized bundle on $\mathbf{P}^3$ with bundle sequence $(5,4)$.
With the same notation as in \Cref{Ex1}, the minimal element of $\mathcal{B}\kern -.5pt etti(H)$ is given by $\underline{\alpha} = (0)$ and $\underline{\beta} = (-1^5)$.
The maximum element of $\mathcal{B}\kern -.5pt etti(H)_{\le 2}$ is $(\underline{\alpha},\underline{\beta})+\underline{c}$, where $\underline{c} = (0,1,2)$.
In particular,
\[
\mathcal{B}\kern -.5pt etti^q(H)_{\le 2} = \{(\underline{\alpha},\underline{\beta})+\underline{c}'\mid \underline{c}' \text{ is a subsequence of $(0,1,2)$ of length $q$}\}
\]
and $\mathcal{B}\kern -.5pt etti(H)_{\le 2}$ is isomorphic to the lattice of subsequences of $(0,1,2)$.
\end{example}
\bigskip
\subsection{The stratification}\label{Moduli}
In this subsection, we define a natural topology on $\mathcal{VB}_{\P^n}^\dagger(H)$.
We then describe the stratification of $\mathcal{VB}_{\P^n}^\dagger(H)$ by locally closed subspaces $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$.
\bigskip
\begin{definition}
Let $(\underline{a},\underline{b})\in \mathcal{B}\kern -.5pt etti(H)$.
Let $\mathbf{A}(\underline{a},\underline{b})$ denote the affine space associated to the vector space $\op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$.
The minimal maps form an affine subspace $\mathbf{A}^0(\underline{a},\underline{b})$ of $\mathbf{A}(\underline{a},\underline{b})$.
We define the subsets of maps whose ideals of maximal minors have maximal depth:
\[
\mathbf{M}(\underline{a},\underline{b}) := \{\varphi\in \mathbf{A}(\underline{a},\underline{b})\mid \op{depth} I_l(\varphi) \ge n+1\},
\]
\[
\mathbf{M}^0(\underline{a},\underline{b}) := \{\varphi\in \mathbf{A}^0(\underline{a},\underline{b}) \mid \op{depth} I_l(\varphi) \ge n+1\}.
\]
As in the proof of \Cref{Betti}, the subsets $\mathbf{M}(\underline{a},\underline{b})$ and $\mathbf{M}^0(\underline{a},\underline{b})$ are open subvarieties of $\mathbf{A}(\underline{a},\underline{b})$ and $\mathbf{A}^0(\underline{a},\underline{b})$ respectively.
For $\mathbf{A} = \mathbf{A}(\underline{a},\underline{b})$ and $\mathbf{A}^0(\underline{a},\underline{b})$, the tautological morphism
\[
\Phi: \bigoplus_{i = 1}^l \mathscr{O}_{\mathbf{P}^n_\mathbf{A}}(-a_i) \to \bigoplus_{i = 1}^{l+r} \mathscr{O}_{\mathbf{P}^n_\mathbf{A}}(-b_i)
\]
gives a tautological family of sheaves $\mathscr{E} := \op{coker} \Phi$ over $\mathbf{A}$, which pulls back to a family of bundles $\mathscr{E}(\underline{a},\underline{b})$ and $\mathscr{E}^0(\underline{a},\underline{b})$ satisfying (\ref{Cohomology}) over $\mathbf{M}(\underline{a},\underline{b})$ and $\mathbf{M}^0(\underline{a},\underline{b})$ respectively by \Cref{Bundle}.
\end{definition}
Let $G(\underline{a},\underline{b})$ denote the algebraic group $\op{Aut}(\mathscr{L}(\underline{a}))\times \op{Aut}(\mathscr{L}(\underline{b}))$.
The natural action $\rho: G(\underline{a},\underline{b})\times \op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b})) \to \op{Hom}(\mathscr{L}(\underline{a}),\mathscr{L}(\underline{b}))$ given by $((f,g), \varphi) \mapsto g \circ \varphi \circ f^{-1}$
is a morphism of algebraic varieties.
The action $\rho$ leaves the subspace of minimal maps invariant.
Since a change of basis does not change the ideal of maximal minors, it follows that the open subvarieties $\mathbf{M}(\underline{a},\underline{b})$ and $\mathbf{M}^0(\underline{a},\underline{b})$ are stable under the $G(\underline{a},\underline{b})$-action.
\begin{lemma}\label{Orbit}
Two maps $\varphi, \psi \in \mathbf{M}(\underline{a},\underline{b})$ are in the same $G(\underline{a},\underline{b})$-orbit if and only if $\op{coker} \varphi \cong \op{coker} \psi$.
\end{lemma}
\begin{proof}
Clearly if $\varphi, \psi$ are in the same $G(\underline{a},\underline{b})$-orbit then $\op{coker} \varphi \cong \op{coker} \psi$.
Conversely, let $\mathscr{E} := \op{coker} \varphi$ and $\mathscr{E}' := \op{coker} \psi$, and suppose $\mathscr{E} \cong \mathscr{E}'$.
Then the induced isomorphism of $R$-modules $H^0_*(\mathscr{E}) \cong H^0_*(\mathscr{E}')$ lifts to an isomorphism of free resolutions
\[
\begin{tikzcd}
0 \arrow[r] & L(\underline{a})\arrow[d,"f" ',"\cong"] \arrow[r,"\varphi"] & L(\underline{b}) \arrow[d,"g" ', "\cong"] \arrow[r] & H^0_*(\mathscr{E}) \arrow[d,"\cong"] \arrow[r] & 0\\
0 \arrow[r] & L(\underline{a}) \arrow[r,"\psi"] & L(\underline{b}) \arrow[r] & H^0_*(\mathscr{E}') \arrow[r] & 0.
\end{tikzcd}
\]
It follows that $\varphi$ and $\psi$ are in the same $G(\underline{a},\underline{b})$-orbit.
\end{proof}
\Cref{Bundle} and \Cref{Orbit} imply that the set $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ supports the structure of the quotient topological space $\mathbf{M}^0(\underline{a},\underline{b}) / G(\underline{a},\underline{b})$.
Similarly, we let $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{\preceq}$ denote the subset of $\mathcal{VB}_{\P^n}^\dagger$ consisting of isomorphism classes of bundles $\mathscr{E}$ that admit a (not necessarily minimal) free resolution of the form
\[
0 \to L(\underline{a}) \to L(\underline{b}) \to H^0_*(\mathscr{E}) \to 0.
\]
Then \Cref{Orbit} also implies that the set $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{\preceq}$ supports the structure of the quotient topological space $\mathbf{M}(\underline{a},\underline{b}) / G(\underline{a},\underline{b})$.
Clearly the inclusion of sets $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b}) \subseteq \mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{\preceq}$ is an inclusion of topological spaces.
\bigskip
\begin{lemma}\label{Subspace}
If $(\underline{a},\underline{b}) \preceq (\underline{a}',\underline{b}')$ in $\mathcal{B}\kern -.5pt etti(H)$, then $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{\preceq}$ is a subspace of $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')^{\preceq}$.
In particular, $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ is a subspace of $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')^{\preceq}$.
\end{lemma}
\begin{proof}
Let $(\underline{a}',\underline{b}') = (\underline{a},\underline{b})+\underline{c}$ for some $\underline{c}$.
Consider an injective morphism $\iota: \mathbf{M}(\underline{a},\underline{b}) \to \mathbf{M}(\underline{a}',\underline{b}')$ given by $\varphi \mapsto \varphi \oplus \operatorname{Id}_{\mathscr{L}(\underline{c})}$.
It is not hard to see that the ideal of maximal minors does not change under this map, and thus $\iota$ is well-defined.
Suppose $\varphi, \psi$ are two morphisms in $\mathbf{M}(\underline{a},\underline{b})$ such that $\varphi \oplus \operatorname{Id}_{\mathscr{L}(\underline{c})}$ and $\psi \oplus \operatorname{Id}_{\mathscr{L}(\underline{c})}$ are in the same $G(\underline{a}',\underline{b}')$-orbit.
It follows that $\op{coker} (\varphi \oplus \operatorname{Id}_{\mathscr{L}(\underline{c})}) \cong \op{coker} (\psi \oplus \operatorname{Id}_{\mathscr{L}(\underline{c})})$.
Since $\op{coker} \varphi \cong \op{coker} (\varphi \oplus \operatorname{Id}_{\mathscr{L}(\underline{c})})$ and $\op{coker} \psi \cong \op{coker} (\psi \oplus \operatorname{Id}_{\mathscr{L}(\underline{c})})$, we conclude that $\op{coker} \varphi \cong \op{coker} \psi$.
It follows from \Cref{Orbit} that $\varphi$ and $\psi$ are in the same $G(\underline{a},\underline{b})$-orbit.
This shows that the composition
\[
\mathbf{M}(\underline{a},\underline{b})\to \mathbf{M}(\underline{a}',\underline{b}') \to \mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')^{\preceq}
\]
induces an injection of topological spaces on the quotient
$\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{\preceq} \hookrightarrow \mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')^{\preceq}$.
\end{proof}
\bigskip
For each integer $d$, the set $\mathcal{B}\kern -.5pt etti(H)_{\le d}$ is a lattice by \Cref{Sublattice} and thus has a maximum element $(\underline{a}',\underline{b}')$.
It follows from \Cref{Subspace} that every $d$-regular bundle $\mathscr{E}$ in $\mathcal{VB}_{\P^n}^\dagger(H)$ admits a (not necessarily minimal) free resolution of the form
\[
0 \to L(\underline{a}') \to L(\underline{b}') \to H^0_*(\mathscr{E}) \to 0.
\]
Let $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ be the subspace of $\mathcal{VB}_{\P^n}^\dagger(H)$ consisting of isomorphism classes of $d$-regular bundles.
Then by \Cref{Orbit}, the set $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ supports the structure of the quotient topological space $\mathbf{M}(\underline{a}',\underline{b}')/G(\underline{a}',\underline{b}')$.
It follows from \Cref{Subspace} and the construction above that if $d<d'$, then $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is a subspace of $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d'}$.
Finally, we define a topology on $\mathcal{VB}_{\P^n}^\dagger(H)$ by
\[
\mathcal{VB}_{\P^n}^\dagger(H) = \varinjlim_d \mathcal{VB}_{\P^n}^\dagger(H)_{\le d}.
\]
\begin{proposition}\label{Dense}
For each integer $d$, the subspace $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is open in $\mathcal{VB}_{\P^n}^\dagger(H)$.
\end{proposition}
\begin{proof}
We need to show that $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is open in $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d'}$ for every $d'\ge d$.
Let $(\underline{a}',\underline{b}')$ be the maximum element in $\mathcal{B}\kern -.5pt etti(H)_{\le d'}$, and consider the quotient map $\pi:\mathbf{M}(\underline{a}',\underline{b}') \to \mathcal{VB}_{\P^n}^\dagger(H)_{\le d'}$.
By the semicontinuity of cohomology, the fibers of the tautological family $\mathscr{E}(\underline{a}',\underline{b}')$ are $d$-regular exactly over an open subset of $\mathbf{M}(\underline{a}',\underline{b}')$.
This open subset is stable under the $G(\underline{a}',\underline{b}')$-action, and $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is its image under $\pi$; hence $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is an open subspace of $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d'}$.
\end{proof}
\begin{proposition}
The topological space $\mathcal{VB}_{\P^n}^\dagger(H)$ is irreducible and unirational.
\end{proposition}
\begin{proof}
For $d\gg 0$, the subspace $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is dense in $\mathcal{VB}_{\P^n}^\dagger(H)$.
Since $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is the quotient of $\mathbf{M}(\underline{a}',\underline{b}')$, where $(\underline{a}',\underline{b}')$ is the maximum element of $\mathcal{B}\kern -.5pt etti(H)_{\le d}$, it follows that $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is irreducible and unirational, and so is $\mathcal{VB}_{\P^n}^\dagger(H)$.
\end{proof}
\bigskip
The main result of this subsection is the following.
\begin{theorem}\label{Main}
The closed strata $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})}$ in $\mathcal{VB}_{\P^n}^\dagger(H)$ form a graded lattice dual to $\mathcal{B}\kern -.5pt etti(H)$ under the partial order of inclusion.
Furthermore, the intersection of two closed strata $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})}$ and $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')}$ is again a closed stratum $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a}'',\underline{b}'')}$, where $(\underline{a}'',\underline{b}'')$ is the join of $(\underline{a},\underline{b})$ and $(\underline{a}',\underline{b}')$ in the lattice $\mathcal{B}\kern -.5pt etti(H)$.
\end{theorem}
The proof of the theorem requires several standard lemmas on the behavior of resolutions in families with constant Hilbert function.
We include proofs here for lack of an appropriate reference.
\begin{lemma}\label{Limit}
Let $\mathscr{E}' \in \mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')$ and suppose $(\underline{a},\underline{b})\preceq (\underline{a}',\underline{b}')$.
Then there is a family of bundles $\mathscr{E}$ on $\mathbf{P}^n$ over a dense open set $U\subset \mathbf{A}^1$ containing the origin $0\in \mathbf{A}^1$, such that $\mathscr{E}_0\cong \mathscr{E}'$ and $\mathscr{E}_t\in \mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ for any closed point $0\ne t\in U$.
\end{lemma}
\begin{proof}
Suppose $(\underline{a}',\underline{b}') = (\underline{a},\underline{b})+\underline{c}$.
By \Cref{Generalization}, the pair $(\underline{a},\underline{b})$ is admissible.
Let $\psi \in \mathbf{M}^0(\underline{a}',\underline{b}')$ be a minimal presentation of $\mathscr{E}'$, and let $\varphi \in \mathbf{M}^0(\underline{a},\underline{b})$ be a minimal presentation of a bundle $\mathscr{E}$.
Set $\varphi' = \varphi\oplus \operatorname{Id}_{\mathscr{L}(\underline{c})}$ and consider the morphism $\Phi: \mathscr{L}(\underline{a}')\times \mathbf{A}^1 \to \mathscr{L}(\underline{b}')\times \mathbf{A}^1$ whose fiber over a closed point $t\in \mathbf{A}^1$ is given by $\Phi_t := \psi+t\cdot \varphi'$.
By \Cref{DepthOpen}, we have $\Phi_t \in \mathbf{M}(\underline{a}',\underline{b}')$ for all closed points $t$ in a dense open set $U\subset \mathbf{A}^1$ containing $0$.
This shows that $\op{coker} \Phi_t \in \mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{\preceq}$ for $t\in U$.
We show that in fact $\op{coker} \Phi_t \in \mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ for all $0\ne t\in U$.
Let $t\ne 0$ be any closed point of $U$.
Since $\psi$ is minimal and $\varphi'$ restricts to an isomorphism on the summand $\mathscr{L}(\underline{c})$, the map $\Phi_t$ splits off the summand $\mathscr{L}(\underline{c}) \xrightarrow{t} \mathscr{L}(\underline{c})$.
Since $\varphi$ is minimal, Nakayama's lemma shows that $\Phi_t$ splits off no summand beyond $\mathscr{L}(\underline{c})$.
It follows that the free resolution
\[
0 \to L(\underline{a}') \xrightarrow{\Phi_t} L(\underline{b}') \to H^0_*(\mathscr{E}_t) \to 0
\]
contains a minimal one of the form
\[
0 \to L(\underline{a}) \to L(\underline{b}) \to H^0_*(\mathscr{E}_t) \to 0. \qedhere
\]
\end{proof}
\begin{lemma}\label{Limit2}
Let $\mathscr{E}$ be a family of bundles on $\mathbf{P}^n$ satisfying (\ref{Cohomology}) parametrized by a variety $T$, such that all fibers have the same Hilbert function $H$.
Then general fibers have the same Betti numbers $(\underline{a},\underline{b})$, where $(\underline{a},\underline{b})\preceq (\underline{a}',\underline{b}')$ for the Betti numbers $(\underline{a}',\underline{b}')$ of any fiber $\mathscr{E}_t$.
\end{lemma}
\begin{proof}
Let $t\in T$ be a closed point.
We may base change to $\op{Spec} \mathscr{O}_{T,t}$ and reduce to the case where $T$ is a local domain.
Let $m$ be the maximal ideal of $T$ with residue field $k$, and set $R_T := T[x_0,\dots,x_n]$ and $R := k[x_0,\dots,x_n]$.
The module $E:= \bigoplus_{l\in\mathbb{Z}} H^0(\mathscr{E}(l))$ is finitely generated over $R_T$ since $\mathscr{E}$ is a bundle.
Since the fibers over $T$ have the same Hilbert functions, it follows that $E$ is flat over $T$.
If $\bigoplus_{i = 1}^{l+r} R(-b'_i) \xrightarrow{d} E\otimes_T k$ is a minimal system of generators, then by Nakayama's lemma over generalized local rings, it lifts to a system of generators
$\bigoplus_{i = 1}^{l+r} R_T(-b'_i) \xrightarrow{d_T} E$.
Since $E$ is flat over $T$, so is $\ker d_T$ and thus $(\ker d_T) \otimes_T k \cong \ker d$.
Applying this procedure again, we find a free resolution of $E$
\[
F_\bullet:\quad 0 \to \bigoplus_{i = 1}^l R_T(-a'_i) \to \bigoplus_{i = 1}^{l+r} R_T(-b'_i) \to E \to 0
\]
that specializes to a minimal free resolution of $E\otimes_T k$.
It follows that $F_\bullet \otimes_T k(T)$ is a free resolution of the generic fiber which contains a minimal free resolution of the form
\[
0\to \bigoplus_{i = 1}^j R_T(-a_i)\otimes_T k(T) \to \bigoplus_{i = 1}^{j+r} R_T(-b_i)\otimes_T k(T)\to E\otimes_T k(T) \to 0.
\]
We conclude that the general fibers $\mathscr{E}_t$ have the Betti numbers $(\underline{a},\underline{b})\preceq (\underline{a}',\underline{b}')$.
\end{proof}
\begin{lemma}\label{Dual}
For $(\underline{a},\underline{b}),(\underline{a}',\underline{b}')\in \mathcal{B}\kern -.5pt etti(H)$ the following are equivalent.
\begin{enumerate}
\item $(\underline{a},\underline{b}) \preceq (\underline{a}',\underline{b}')$,
\item $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})} \supseteq \mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')$,
\item $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')\cap \overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})} \ne \varnothing$,
\item $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})\subseteq \mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')^{\preceq}$.
\end{enumerate}
Here all closures are taken within $\mathcal{VB}_{\P^n}^\dagger(H)$.
\end{lemma}
\begin{proof}
(1) $\Longrightarrow$ (2): Suppose $(\underline{a}',\underline{b}') =(\underline{a},\underline{b})+\underline{c}$.
Let $\varphi \in \mathbf{M}^0(\underline{a},\underline{b})$ and $\psi \in \mathbf{M}^0(\underline{a}',\underline{b}')$.
Consider the line $\Phi: \mathbf{A}^1 \hookrightarrow \mathbf{A}(\underline{a}',\underline{b}')$ defined by $\Phi(t) := \psi+ t\cdot \varphi'$, where $\varphi' = \varphi \oplus \operatorname{Id}_{\mathscr{L}(\underline{c})}$.
There is an open set $U\subset \mathbf{A}^1$ containing $0$ such that $\Phi(t)$ lies in $\mathbf{M}(\underline{a}',\underline{b}')$ for all closed points $t\in U$.
By \Cref{Limit}, the image of $\Phi(t)$ in the quotient $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')^{\preceq}$ lies in $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ for $0 \ne t\in U$.
It follows that the image of $\psi$ in $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')$ is contained in the closure of $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ inside the space $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')^{\preceq}$.
Since $\psi$ represents an arbitrary point of $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')$, we conclude that $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')$ is contained in the closure of $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ in $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')^{\preceq}$,
and therefore the same is true inside $\mathcal{VB}_{\P^n}^\dagger(H)$.
(2) $\Longrightarrow$ (3) is trivial.
(1) $\Longrightarrow$ (4) is proven in \Cref{Subspace}.
(3) $\Longrightarrow$ (1): Let $d := \max(\op{reg} (\underline{a},\underline{b}),\op{reg}(\underline{a}',\underline{b}'))$.
Let $(\underline{a}'',\underline{b}'')$ denote the maximum element of $\mathcal{B}\kern -.5pt etti(H)_{\le d}$.
Let $\pi:\mathbf{M}(\underline{a}'',\underline{b}'') \to \mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ be the quotient map and set $V$ to be the preimage of $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})}$ under $\pi$, endowed with the structure of a (reduced) subvariety of $\mathbf{M}(\underline{a}'',\underline{b}'')$.
Let $\mathscr{E}$ be the pullback of the tautological family of bundles $\mathscr{E}(\underline{a}'',\underline{b}'')$ on $\mathbf{M}(\underline{a}'',\underline{b}'')$ to $V$.
Since $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ is dense in $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})}$, it follows that the fiber $\mathscr{E}_v$ over a general point $v\in V$ has Betti numbers $(\underline{a},\underline{b})$.
If $p$ is a point in $\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')$ that is in the closure of $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$, and $q$ is a point in $\pi^{-1}(p)$, then $q\in V$ and $\mathscr{E}_q$ has Betti numbers $(\underline{a}',\underline{b}')$.
Finally, an application of \Cref{Limit2} to the family $\mathscr{E}$ gives $(\underline{a},\underline{b})\preceq (\underline{a}',\underline{b}')$.
(4) $\Longrightarrow$ (1): If $\mathscr{E}$ is a bundle with a free resolution of the form
\[
0 \to L(\underline{a}') \to L(\underline{b}') \to H^0_*(\mathscr{E}) \to 0,
\]
then it contains as a summand the minimal free resolution of $\mathscr{E}$
\[
0 \to L(\underline{a}) \to L(\underline{b}) \to H^0_*(\mathscr{E}) \to 0
\]
with a direct complement of the form
\[
0 \to L(\underline{c}) \xrightarrow{\sim} L(\underline{c}) \to 0
\]
where necessarily $(\underline{a}',\underline{b}') = (\underline{a},\underline{b})+\underline{c}$.
It follows that $(\underline{a},\underline{b})\preceq (\underline{a}',\underline{b}')$.
\end{proof}
\begin{proof}[Proof of \Cref{Main}]
The first statement follows directly from \Cref{Dual}.
For the same reason, it is clear that $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a}'',\underline{b}'')}$ is contained in the intersection of $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})}$ and $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')}$.
Let $p$ be a closed point in the intersection of $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})}$ and $\overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a}',\underline{b}')}$.
Then $p\in \mathcal{VB}_{\P^n}^\dagger(\underline{c},\underline{d})$ for some $(\underline{c},\underline{d})\in \mathcal{B}\kern -.5pt etti(H)$, since $\mathcal{VB}_{\P^n}^\dagger(H)$ is the disjoint union of these subspaces.
By \Cref{Dual}, it follows that $(\underline{a},\underline{b})\preceq (\underline{c},\underline{d})$ and $(\underline{a}',\underline{b}')\preceq(\underline{c},\underline{d})$.
Since $(\underline{a}'',\underline{b}'') \preceq (\underline{c},\underline{d})$ by the definition of join, another application of \Cref{Dual} shows that $p\in \overline{\mathcal{VB}_{\P^n}^\dagger(\underline{a}'',\underline{b}'')}$.
\end{proof}
\bigskip
Last but not least, we discuss the semistable case where the description of the stratification holds within the coarse moduli space.
\bigskip
By \cite[Theorem 4.2]{Maruyama2}, semistability is an open condition for a family of torsion-free sheaves.
Furthermore, the set of semistable torsion-free sheaves with a given Hilbert polynomial $\chi$ is bounded in the sense of Maruyama, and these sheaves thus have bounded regularity by \cite[Theorem 3.11]{Maruyama2}.
Let $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$ and $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ denote the subsets of isomorphism classes of semistable bundles in $\mathcal{VB}_{\P^n}^\dagger(H)$ and $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$, respectively.
It follows that $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$ and all $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ are contained in $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ for some large enough integer $d$.
Since $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ is open in $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})$ and $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$ is open in $\mathcal{VB}_{\P^n}^\dagger(H)$ by reasoning similar to that in \Cref{Dense}, it follows that the stratification of $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$ by $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ has the same description as given in \Cref{Main}.
Let $\mathbf{M}(\chi)$ denote the coarse moduli space of semistable sheaves on $\mathbf{P}^n$ with Hilbert polynomial $\chi$.
We show that the spaces $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$ and $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ are subschemes of $\mathbf{M}(\chi)$.
Let $\mathbf{M}^0(\underline{a},\underline{b})^{ss}$ denote the open subscheme of $\mathbf{M}^0(\underline{a},\underline{b})$ over which the fibers of the tautological family of bundles $\mathscr{E}^0(\underline{a},\underline{b})$ are semistable.
By the defining property of the coarse moduli space, there is a map $p_0:\mathbf{M}^0(\underline{a},\underline{b})^{ss} \to \mathbf{M}(\chi)$ inducing the family of semistable bundles.
By \Cref{Orbit}, the isomorphism classes of the fibers are exactly given by the $G(\underline{a},\underline{b})$-orbits.
Therefore $\mathcal{VB}_{\P^n}^\dagger(\underline{a},\underline{b})^{ss}$ is a subscheme of $\mathbf{M}(\chi)$, namely the image of $p_0$.
Similarly, the space $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ is also a subscheme of $\mathbf{M}(\chi)$.
Since $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$ is an open subspace of $\mathcal{VB}_{\P^n}^\dagger(H)_{\le d}$ for some $d\gg 0$, the same is true for $\mathcal{VB}_{\P^n}^\dagger(H)^{ss}$.
\section{\label{sec:level1}Introduction}
The concept of $T=0$ quantum phase transitions has emerged as an overarching theme in strongly correlated electron physics \cite{Hertz, Chakravarty, Millis, Chubukov, Goldman, Sachdev, Sachdev2, Si, Sachdev3}. The nature of quantum fluctuations near the quantum critical point, however, remains enigmatic \cite{Lonzarich}. How well does quantum criticality account for finite temperature properties? How high in temperature does the effect of the quantum critical point persist?\cite{Lonzarich, Chakravarty2} Do quantum fluctuations remain strong enough at elevated temperatures to account for the mechanism of exotic superconductivity in copper oxides, iron pnictides, and heavy-fermion systems? The dearth of appropriate model materials for rigorously solvable Hamiltonians has not permitted experimentalists to address these fundamental questions concretely, even for the transverse field Ising chain (TFIC) \cite{Pfeuty}, a celebrated textbook example of quantum criticality \cite{Sachdev}. Very recently, the Ising chain material CoNb$_2$O$_6$ \cite{Maartense, Scharf, Ishikawa, Heid, Kobayashi, ESR, ESR2} was proposed to be an ideal model system of the TFIC based on neutron scattering measurements in transverse magnetic fields \cite{Coldea}, paving a new avenue to investigate the finite temperature effects on quantum fluctuations in the vicinity of a quantum critical point (QCP).
The TFIC Hamiltonian is deceptively simple \cite{Pfeuty, Sachdev}:
\begin{equation}
H = -J \sum_{i}(\sigma_{i}^{z}\sigma_{i+1}^{z} + g \sigma_{i}^{x}),
\end{equation} where $J$ ($>0$ for ferromagnetic Ising chains in CoNb$_2$O$_6$) represents the nearest-neighbor spin-spin exchange interaction, $\sigma_{i}^{z(x)}$ is the {\it z}({\it x})-component of the Pauli matrix at the {\it i}-th site, and the dimensionless coupling constant $g$ is related to the transverse magnetic field $h_{\perp}$ applied along the {\it x}-axis as $g = h_{\perp}/h_{\perp}^{c}$, where $h_{\perp}^{c}$ is the critical field ($h_{\perp}^{c} = 5.25 \pm 0.15$ Tesla in CoNb$_2$O$_6$, as shown below). Since $\sigma_{i}^{z}$ and $\sigma_{i}^{x}$ do not commute, the classical Ising Hamiltonian for $g=0$ becomes the quantum TFIC Hamiltonian for $g >0$. The QCP is located at $g=1$, where the applied field is tuned precisely at $h_{\perp}^{c}$; a magnetic field greater than $h_{\perp}^{c}$ coerces the magnetic moments along its direction and transforms the $T=0$ ferromagnetic ground state to a paramagnetic state. See Fig.\ 1 for the generic theoretical phase diagram of the TFIC \cite{Sachdev, Young}. In spite of its apparent simplicity, the TFIC served as the foundational model for quantum Monte Carlo simulations \cite{Suzuki}, and continues to attract attention in quantum information theory \cite{Waterloo}.
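The spectral gap $\Delta = 2J|1-g|$ invoked later in the text can be checked directly from Eq.~(1) by exact diagonalization of a short open chain. The following sketch (plain NumPy, with an illustrative chain length $N=8$ and couplings chosen by us, not taken from the experiment) builds the $2^N$-dimensional matrix and compares the many-body gap on the two sides of the QCP; finite-size corrections of order 10--20\% remain at this small $N$.

```python
import numpy as np

def tfic_hamiltonian(n, J=1.0, g=1.0):
    """Dense matrix of H = -J * sum_i (sz_i sz_{i+1} + g sx_i) on an open chain."""
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])

    def site_op(op, i):
        # Kronecker product placing `op` on site i of an n-site chain.
        out = np.array([[1.]])
        for j in range(n):
            out = np.kron(out, op if j == i else np.eye(2))
        return out

    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * site_op(sz, i) @ site_op(sz, i + 1)
    for i in range(n):
        H -= J * g * site_op(sx, i)
    return H

def gap(n, J, g):
    E = np.linalg.eigvalsh(tfic_hamiltonian(n, J, g))
    return E[1] - E[0]

# Paramagnetic side (g = 2): gap approaches 2J(g - 1) up to finite-size terms.
# Ferromagnetic side (g = 0.2): the two lowest levels are quasi-degenerate
# domain states, split only by an exponentially small amount.
print(gap(8, 1.0, 2.0))   # somewhat above the thermodynamic-limit value 2
print(gap(8, 1.0, 0.2))   # exponentially small splitting
```

The same script evaluated near $g=1$ shows the gap closing as $1/N$, the finite-size signature of the QCP.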
A major advantage of working with the TFIC as a model system for testing the fundamental ideas of quantum phase transitions is that, in the absence of a transverse magnetic field ($g=0$), the thermodynamic properties of the Ising chain can be rigorously solved at arbitrary temperatures \cite{Textbook}. Even in a finite transverse field ($g>0$), the TFIC is well understood at $T=0$ \cite{Pfeuty, Suzuki, IndianBook}, and QC (Quantum Critical) scaling theory extended the $T=0$ results to finite temperatures \cite{Sachdev, Young}.
We show the crystal structure of CoNb$_2$O$_6$ in Fig.\ 2 \cite{Structure}. All the pictorial images of the crystal structure in this paper were drawn using VESTA \cite{Momma}. The Co-O-Co chains propagate along the c-axis, and the easy axis of the Co moments lies within the ac-plane \cite{Scharf,Ishikawa}. The ferromagnetic super-exchange interaction between the nearest-neighbor Co ions is estimated to be $J =17\sim 23$ K, based on ESR \cite{ESR} and neutron scattering \cite{Coldea} measurements. From the disappearance of magnetic Bragg peaks in the transverse magnetic field applied along the b-axis, the three-dimensional (3D) critical field was estimated to be $h_{\perp}^{c, 3D} = 5.5$ Tesla \cite{Coldea, Wheeler}. The inter-chain couplings between adjacent Co chains are antiferromagnetic \cite{Scharf, Coldea}, weaker than $J$ by an order of magnitude \cite{ESR, Coldea}, and frustrated \cite{Scharf, Balents}. This means that the 3D magnetic long range order induced by inter-chain interactions, which tends to mask the effects of the one dimensional (1D) QCP of the individual Ising chains, is suppressed; the 3D ordering temperature is as low as $T_{c}^{3D}=2.9$ K even in $h_{\perp} = 0$ \cite{Scharf,Ishikawa}. Combined with the modest $J$, Ising chains in CoNb$_2$O$_6$ are ideal for testing the TFIC Hamiltonian, but were overlooked for three decades.
In what follows, we will report $^{93}$Nb NMR (Nuclear Magnetic Resonance) investigation of quantum spin fluctuations in CoNb$_2$O$_6$. NMR is a powerful low energy probe, and good at probing the physical properties near QCP's \cite{Imai1993, Imai2, Imai3, Grenoble, BoseEinstein, Reyes, Ning, Nakai, Zheng}. We will map the evolution of low energy quantum fluctuations of Co spins near the QCP, by taking advantage of the hyperfine interactions between Co electron spins and $^{93}$Nb nuclear spins. We will experimentally verify the phase diagram of the TFIC in Fig.\ 1 above $T=0$ for the first time, and demonstrate that the effect of the QCP persists at finite temperatures as high as $T \sim 0.4 J$.
\section{\label{sec:level1}Experimental}
We grew the CoNb$_{2}$O$_{6}$ single crystal from a stoichiometric mixture of cobalt and niobium oxides using a floating zone furnace. We assessed the surface quality and oriented the crystal utilizing Laue x-ray diffractometry. Once the material was sectioned into oriented slices along the a, b and c crystallographic directions, these were individually scanned with the Laue diffractometer and showed a uniform, single-crystalline structure. A small section of the single crystal was ground into a powder and analyzed using powder x-ray diffraction which showed only single phase cobalt niobate in the crystal within instrument resolution. The features present in the SQUID magnetometry data shown in Fig.\ 2(d) matched previously published data on this material \cite{Scharf}.
For NMR measurements, we cut a piece of single crystal with the approximate dimensions of $4 \times 2 \times 5$~mm$^{3}$. We glued the crystal to a sturdy sample holder made of machinable aluminum-oxide (MACOR ceramic) with a thickness of $\sim 3$ mm to ensure that the crystal orientation did not change at low temperatures. We found that the strong magnetic torque applied to the crystal by the external magnetic field could easily bend sample holders made of soft materials such as plexiglass or plastic, and introduce noticeable systematic errors below $\sim 10$~K.
We observed $^{93}$Nb NMR in a broad range of temperature from 2~K ($\sim 0.1 J$) up to 295~K. We show the typical $^{93}$Nb NMR spectrum in the inset of Fig.\ 3. Since the $^{93}$Nb nuclear spin is $I=9/2$, we observed 4 pairs of satellite transitions split by a quadrupole frequency $\nu_{Q}^{b} = 1.9$ MHz, in addition to the large central peak arising from the $I_{z} = +\frac{1}{2}$ to $-\frac{1}{2}$ transition. In the main panel of Fig.\ 3, we also show the temperature dependence of the central transition in $h_{\perp} = 5.3$ Tesla applied along the b-axis.
We measured the $^{93}$Nb longitudinal relaxation rate $1/T_{1}$ by applying an inversion $\pi$ pulse prior to the $\pi/2 - \pi$ spin echo sequence, and monitoring the recovery of the spin echo intensity $M(t)$ as a function of the delay time $t$. The typical width of the $\pi/2$ pulse was $\sim 1\ \mu$s. We fit these recovery curves to the solutions of the rate equation \cite{Narath}:
\begin{equation}
M(t) = M(\infty) - A \sum_{j=1}^{9} a_{j} e^{-b_{j} t/T_{1}},
\end{equation}
with three free parameters: $M(\infty)$, $A$, and $1/T_{1}$. By solving the coupled rate equations for $I = \frac{9}{2}$ under the appropriate initial condition, one can calculate and fix the coefficients as $(a_{1}, a_{2}, a_{3}, a_{4}, a_{5}, a_{6}, a_{7}, a_{8}, a_{9}) = (0.653, 0, 0.215, 0, 0.092, 0, 0.034, 0, 0.06)$ for the central transition and (0.001, 0.0112, 0.0538, 0.1485, 0.2564, 0.2797, 0.1828, 0.0606, 0.0061) for the $I_{z} = \pm \frac{7}{2}$ to $I_{z} = \pm \frac{9}{2}$ fourth satellite transitions, while $(b_{1}, b_{2}, b_{3}, b_{4}, b_{5}, b_{6}, b_{7}, b_{8}, b_{9}) = (45, 36, 28,21,15, 10, 6, 3,1)$ for both cases \cite{Narath}.
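The three-parameter fit of Eq.~(2) can be sketched numerically. The script below generates a synthetic inversion-recovery curve for the central transition using the fixed coefficients quoted above and refits it with \texttt{scipy.optimize.curve\_fit}; the input $T_{1}=5$~ms and the noise level are illustrative assumptions, so this is a consistency check of the fitting procedure rather than a reproduction of the measured data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fixed normal-mode coefficients for the I = 9/2 central transition (Narath).
a = np.array([0.653, 0, 0.215, 0, 0.092, 0, 0.034, 0, 0.06])
b = np.array([45, 36, 28, 21, 15, 10, 6, 3, 1], dtype=float)

def recovery(t, M_inf, A, T1):
    """M(t) = M(inf) - A * sum_j a_j exp(-b_j t / T1)."""
    return M_inf - A * (a[:, None] * np.exp(-np.outer(b, t) / T1)).sum(axis=0)

# Synthetic inversion-recovery data with an assumed T1 = 5 ms plus noise.
rng = np.random.default_rng(0)
t = np.logspace(-5, -1, 40)                       # delay times (s)
M = recovery(t, 1.0, 1.9, 5e-3) + rng.normal(0, 0.005, t.size)

popt, _ = curve_fit(recovery, t, M, p0=[1.0, 2.0, 3e-3])
print(popt[2])   # recovered T1, close to the input 5e-3 s
```

Swapping in the fourth-satellite coefficients listed above changes only the arrays `a` and `b`, which is what makes the satellite fits more robust when the fast normal modes of the central line decay within the pulse width.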
An example of the signal recovery of the central transition observed at 130\ K in $h_{\perp} = 3$ Tesla is shown in Fig.\ 4, in comparison to that observed for a fourth satellite transition on the higher frequency side. Our results in Fig.\ 4 confirm that the best fit values of $1/T_{1}$ agree within $\sim 2$ \% between the central and satellite transitions. The central transition is the strongest among all 9 peaks as shown in the inset of Fig.\ 3, and hence most advantageous in terms of the signal intensity. When the relaxation rate exceeds $1/T_{1} \sim 2 \times 10^{3}$ s$^{-1}$, however, accurate measurements of $1/T_{1}$ using the central transition become increasingly difficult because the recovery curve $M(t)$ is dominated by two extremely fast normal modes, $0.653\ e^{-45t/T_{1}} + 0.215\ e^{-28t/T_{1}}$; the signal intensity, $M(t)$, begins to recover at a time scale comparable to the inversion pulse width. Accordingly, measurements of $1/T_{1}$ using the fourth satellite transition become more advantageous in the low temperature, low field regime, because its recovery curve is dominated by slower normal modes, $0.256\ e^{-15t/T_{1}} + 0.279\ e^{-10t/T_{1}}$. We present an additional example of the $1/T_{1}$ measurement using the fourth satellite at 2 K and $h_{\perp}=5.2$\ Tesla in Fig.\ 4.
\section{\label{sec:level1}Results and discussions}
\subsection{\label{sec:level2}$T$ and $h_{\perp}$ dependences of $1/T_{1}$}
In Fig.\ 5, we summarize the $T$ and $h_{\perp}$ dependences of $1/T_{1}$. Notice that $1/T_{1}$ varies by more than three orders of magnitude between $h_{\perp} = 3$ and 9 T. Quite generally, $1/T_{1}$ probes the wave vector ${\bf k}$-integral within the first Brillouin zone of the dynamical spin structure factor $S({\bf k}, \omega_{n})$ at the NMR frequency $\omega_{n}/2 \pi$ ($\sim 50$ MHz):
\begin{equation}
1/T_{1} = \sum_{\bf k} |a_{hf}|^{2} S({\bf k}, \omega_{n}),
\label{2}
\end{equation}
where $a_{hf}$ is the hyperfine coupling between the observed nuclear spin and Pauli matrices. In essence, $1/T_{1}$ measures the strength of Co spin fluctuations at the time scale set by the NMR frequency.
Our $1/T_{1}$ data in Fig.\ 5 exhibits two distinct field regimes at low temperatures, because the spin excitation spectrum changes its character across $h_{\perp}^{c}$, as summarized in Fig.\ 6. Below $h_{\perp}^{c} \sim 5.3$ Tesla, $1/T_{1}$ diverges gradually toward $T=0$, signaling the critical slowing down of Co spin fluctuations in the RC (Renormalized Classical \cite{Chakravarty}) regime of Fig.\ 1 toward the $T=0$ ferromagnetic ground state of each individual Ising chain. In other words, the spectral weight of the Co spin-spin correlation function grows at the quasi-elastic peak located at $k=0$ in Fig.\ 6(a) below $h_{\perp}^{c} \sim 5.3$ Tesla. The Co spin-spin correlation length $\xi$ along the chain grows as $\xi \sim \exp(+\Delta/T)$ in the RC regime \cite{Sachdev}, where $\Delta$ is the gap in the spin excitation spectrum as defined in Fig.\ 6(a). Accordingly, we expect $1/T_{1} \sim \exp(+\Delta/T)$ for $T \ll \Delta$. We summarize the details of the theoretical expressions of $1/T_{1}$ for the TFIC in Appendix A.
In contrast, $1/T_{1}$ observed above $h_{\perp}^{c} \sim 5.3$ Tesla saturates and then decreases with temperature. We recall that the $T=0$ ground state remains paramagnetic in the QD (Quantum Disordered) regime above $h_{\perp}^{c}$, as shown in Fig.\ 1, and hence there is no quasi-elastic mode of spin excitations in Fig.\ 6(b). The latter implies that $1/T_{1}$ in the QD regime is dominated by the thermal activation of spin excitations across the gap, $|\Delta|$. Therefore we expect $1/T_{1} \sim \exp(-|\Delta|/T)$ for $T \ll |\Delta|$. We have thus identified the 1D QCP (one-dimensional quantum critical point) of each individual Ising chain at $h_{\perp}^{c} \sim 5.3$ Tesla.
\subsection{\label{sec:level2}Estimation of the Spin Excitation Gap $\Delta$}
In Fig.\ 7(a), we present the exponential fit of $1/T_{1} \sim \exp(\Delta/T)$ with $\Delta$ as a free parameter. We summarize the $h_{\perp}$ dependence of $\Delta$ in Fig.\ 7(b). The fitting range barely satisfies $T < |\Delta|$ near $h_{\perp} \sim 5.3$ Tesla, limiting the accuracy of our estimation of $\Delta$. To improve the accuracy, we constructed the scaling plots of $T^{+0.75}/T_{1}$ as a function of $\Delta/T$ in Fig.\ 8. We first estimated the magnitude of $\Delta$ from Fig.\ 7(a). Subsequently, for the field range between 5.0 and 6.7 T, we made slight adjustments to the magnitude of $\Delta$ to improve the scaling collapse in Fig.\ 8. The final results of $\Delta$ thus estimated from Fig.\ 8 are presented in Fig.\ 7(b) using $\blacktriangle$. We note that this procedure changes the estimated value of $\Delta$ only by a few K.
Remarkably, we found that $\Delta$ varies linearly with $h_{\perp}$. This linear behavior is precisely what we expect from the theoretical prediction for the nearest-neighbor quantum Ising chain, $\Delta = 2J(1 - h_{\perp}/h_{\perp}^{c})$ \cite{Sachdev}. From the intercept of the linear fit with the horizontal axes, we estimate $h_{\perp}^{c} = 5.25 \pm 0.15$ Tesla. This 1D critical field observed by our NMR measurements agrees very well with the earlier observation of the saturation of the so-called E8 golden ratio \cite{Coldea}. From the intercept of the linear fit with the vertical axis, we also estimate $J =17.5^{+2.5}_{-1.5}$ K, in excellent agreement with earlier reports based on ESR \cite{ESR} and neutron scattering \cite{Coldea}.
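The extraction of $h_{\perp}^{c}$ and $J$ from the two intercepts of the linear fit can be made explicit with a short least-squares sketch. The field grid and scatter below are invented for illustration; only the functional form $\Delta = 2J(1 - h_{\perp}/h_{\perp}^{c})$ and the quoted best-fit values $J = 17.5$~K and $h_{\perp}^{c} = 5.25$~T are taken from the text.

```python
import numpy as np

J_true, hc_true = 17.5, 5.25   # K and Tesla, best-fit values quoted in the text
h = np.array([3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0])        # transverse fields (T)
delta = 2 * J_true * (1 - h / hc_true)                    # Delta(h) = 2J(1 - h/h_c)
delta += np.random.default_rng(1).normal(0, 0.3, h.size)  # mock scatter (K)

slope, intercept = np.polyfit(h, delta, 1)
J_fit = intercept / 2          # vertical intercept: Delta(0) = 2J
hc_fit = -intercept / slope    # horizontal intercept: Delta(h_c) = 0
print(J_fit, hc_fit)           # close to 17.5 K and 5.25 T
```

Note that the sign change of $\Delta$ across $h_{\perp}^{c}$ (positive in the RC regime, negative in the QD regime) is what lets a single linear fit determine both intercepts.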
\subsection{\label{sec:level2}Phase Diagram of the TFIC in CoNb$_2$O$_6$}
We present the color plot of $1/T_{1}$ in Fig.\ 9. Also shown in Fig.\ 9 are the crossover temperatures, $\Delta$ and $|\Delta|$, based on the linear fit in Fig.\ 7(b). Our color plot successfully captures the crossovers from the QC regime to the RC and QD regimes. We are the first to verify the theoretical $T-h_{\perp}$ phase diagram in Fig.\ 1 at finite temperatures, $T > 0$, using an actual material.
\subsection{\label{sec:level2}Quantum Criticality of the TFIC at Finite Temperatures}
Having established the phase diagram of the TFIC in CoNb$_2$O$_6$, we are ready to test the finite temperature properties of the QC regime located between the RC and QD regimes. At the 1D critical field $h_{\perp}^{c}$, we applied QC scaling to eq.\ (3) and obtained
\begin{equation}
1/T_{1} = 2.13~ |a_{hf}|^{2} J^{-0.25}T^{-0.75},
\label{3}
\end{equation}
for the nearest-neighbor TFIC (see eq.\ (A7) below for the details). We determined the hyperfine form factor $|a_{hf}|^{2}$ based on the $^{93}$Nb NMR frequency shift measurements, and used eq.\ (4) to estimate $1/T_{1} = (4.2 \sim 8.4)\times 10^{3}~T^{-0.75}$ s$^{-1}$ at finite temperatures above the QCP {\it without any adjustable parameters}. We refer readers to Appendix B for the details of the data analysis. This parameter-free prediction is in excellent quantitative agreement with our experimental finding, $1/T_{1} \sim 6.2 \times10^{3}~T^{-0.75}$ s$^{-1}$ as shown by a solid line in Fig.\ 5 through the data points observed at 5.2\ T. Thus the QC scaling theory accounts for the low frequency spin dynamics of the TFIC above $T=0$ at a quantitative level.
It is equally important to realize that $1/T_{1}$ data exhibits the expected power-law behavior, $1/T_{1} \sim T^{-0.75}$, up to $\sim 7$ K, which corresponds to $T \sim 0.4J$. Our finding therefore addresses an important and unresolved question that has been facing the strongly correlated electrons community for years: {\it How high in temperature does the effect of the QCP persist?} For the TFIC, the quantum fluctuations originating from the zero temperature QCP persist up to as high as $T \sim 0.4J$. Our experimental finding is consistent with the earlier theoretical report that the QC scaling holds up to $T\sim 0.5J$ for the TFIC \cite{Chakravarty2}.
\section{\label{sec:level1}Summary and conclusions}
Using the quasi one-dimensional Co chains in CoNb$_2$O$_6$, we experimentally tested the quantum criticality of the transverse field Ising chain (TFIC) at finite temperatures above $T=0$ for the first time. Based on the measurements of the $^{93}$Nb longitudinal relaxation rate $1/T_{1}$, we identified the distinct behaviors of low-frequency spin fluctuations in the Renormalized Classical (RC), Quantum Critical (QC), and Quantum Disordered (QD) scaling regimes of the TFIC, and constructed the $T-h_{\perp}$ phase diagram of the TFIC in Fig.\ 9. We observed no evidence for a crossover into the 3D regime in the temperature and field range of our concern. We also reported the transverse field ($h_{\perp}$) dependence of the spin excitation gap parameter $\Delta$ in Fig.\ 7(b); our results exhibit a linear dependence on $h_{\perp}$, in agreement with the theoretical prediction for the nearest-neighbor TFIC. Our $1/T_{1}$ data observed for the QC regime near $h_{\perp}^{c} \approx 5.25$ T exhibit the expected mild power law divergence, $1/T_{1} \sim T^{-0.75}$ toward the quantum critical point at $T=0$. Furthermore, the parameter-free prediction based on quantum critical scaling reproduces the magnitude of $1/T_{1}$ within $\sim \pm 36$ \%. Our results in Fig.\ 5 establish that the quantum critical behavior persists to as high as $T \sim 0.4 J$. To the best of our knowledge, this is the first example of the quantitative test of the finite temperature effects on quantum criticality for model Hamiltonians with a rigorously solvable ground state.
We mark the upper bound of the QC scaling regime, $T \sim 0.4 J$, in Fig.\ 9 with a horizontal arrow. Such a robust quantum criticality observed at finite temperatures above the QCP is in stark contrast with the case of thermally induced {\it classical} phase transitions; the critical region of the latter generally narrows as the phase transition temperature approaches zero, and eventually diminishes at $T=0$ \cite{Lonzarich}. Many authors have constructed analogous color plots for different parameters (such as the electrical resistivity) for a variety of strongly correlated electron systems, including copper-oxide and iron-pnictide high $T_c$ superconductors and heavy-fermion systems \cite{Sachdev3, Si}. The aim of these authors was to build a circumstantial case that quantum fluctuations persist at finite temperatures far above the QCP. The overall similarity between our Fig.\ 9 and the case of high $T_c$ cuprates and other exotic superconductors gives us hope that quantum fluctuations may indeed account for the mechanism of exotic superconductivity.\\
Note Added: After the initial submission of this work, a theoretical prediction was made for the temperature dependence of $1/T_{1}$ under the presence of an internal longitudinal magnetic field in the three-dimensionally ordered state \cite{Wu}. The three-dimensional effects \cite{Balents, Wu}, however, are beyond the scope of the present work.
\begin{acknowledgments}
T.I. and S.S. thank A. P. Young, Y. Itoh, B. Gaulin, M. P. Gelfand, S.-S. Lee, T. Sakai, and H. Nojiri for helpful communications. The work at McMaster was supported by NSERC and CIFAR. S.S. acknowledges the financial support from NSF DMR-1103860.\\
\end{acknowledgments}
\begin{document}
\title{Circumventing Heisenberg's uncertainty principle in atom interferometry tests
of the equivalence principle}
\author{Albert Roura}
\affiliation{Institut f\"ur Quantenphysik, Universit\"at Ulm, Albert-Einstein-Allee 11,
89081 Ulm, Germany}
\date{\today}
\begin{abstract}
Atom interferometry tests of universality of free fall based on the differential measurement of two different atomic species provide a useful complement to those based on macroscopic masses. However, when striving for the highest possible sensitivities, gravity gradients pose a serious challenge.
Indeed, the relative initial position and velocity for the two species need to be controlled with extremely high accuracy, which can be rather demanding in practice and whose verification may require rather long integration times.
Furthermore, in highly sensitive configurations gravity gradients lead to a drastic loss of contrast. These difficulties can be mitigated by employing wave packets with narrower position and momentum widths, but this is ultimately limited by Heisenberg's uncertainty principle.
We present a novel scheme that simultaneously overcomes the loss of contrast and the initial co-location problem. In doing so, it circumvents the fundamental limitations due to Heisenberg's uncertainty principle and eases the experimental realization by relaxing the requirements on initial co-location by several orders of magnitude.
\end{abstract}
\pacs{}
\maketitle
\section{Introduction}
\label{sec:introduction}
The equivalence principle is a cornerstone of general relativity and Einstein's key inspirational principle in his quest for a relativistic theory of gravitational phenomena. Experiments searching for small violations of the principle are being pursued in earnest \cite{will14} since they could provide evidence for violations of Lorentz invariance \cite{kostelecky11a} or for dilaton models inspired by string theory \cite{damour10}, and they could offer invaluable hints of a long sought underlying fundamental theory for gravitation and particle physics.
A central aspect which has been tested to high precision is the universality of free fall (UFF) for test masses. Indeed, torsion balance experiments have reached sensitivities at the $10^{-13}\, g$ level \cite{schlamminger08} and it is hoped that this can be improved by two orders of magnitude in a forthcoming satellite mission \cite{touboul12}.
An interesting alternative that has been receiving increasing attention in recent years is to perform tests of UFF with quantum systems and, more specifically, using atom interferometry. Instead of macroscopic test masses, experiments of this kind compare the gravitational acceleration experienced by atoms of different atomic species \cite{fray04,bonnin13,schlippert14,zhou15}. They offer a valuable complement to traditional tests with macroscopic objects because a wide range of new elements with rather different properties can be employed, so that better bounds on models parameterizing violations of the equivalence principle can be achieved even with lower sensitivities to differential accelerations \cite{hohensee13b,schlippert14}.
Furthermore, given the different kind of systematics involved, they could help to gain confidence in eventual evidence for violations of UFF.
By using neutral atoms prepared in magnetically insensitive states and an appropriate shielding of the interferometry region, one can greatly suppress the effect of spurious forces acting on the atoms, which constitute excellent inertial references \cite{borde89,kasevich91,peters01}.
State-of-the-art gravimeters based on atom interferometry can reach a precision of the order of $10^{-9}\, g$ in one second \cite{hu13} and are mainly limited by the vibrations of the retro-reflecting mirror. When performing simultaneous differential interferometry measurements for both species and sharing the retro-reflecting mirror (as sketched in Fig.~\ref{fig:AI_sketch}), common-mode rejection techniques can be exploited to suppress the effects of vibration noise and enable higher sensitivities for the measurement of differential accelerations \cite{fixler07,hogan08,varoquaux09,bonnin13,rosi14}.
Thus, although tests of UFF based on atom interferometry have reached sensitivities up to $10^{-8}\, g$ so far, there are already plans for future space missions that aim for sensitivities of $10^{-15}\, g$ \cite{aguilera14} by exploiting the longer interferometer times available in space and the fact that the sensitivity scales quadratically with the interferometer time.
\begin{figure}[h]
\begin{center}
\includegraphics[width=2.8cm]{figure1.pdf}
\end{center}
\caption{Sketch of an atom interferometry set-up for differential acceleration measurements of two different atomic species. The various laser beams driving the diffraction processes for both species share a common retro-reflection mirror so that vibration noise is highly suppressed in the differential phase-shift measurement.}
\label{fig:AI_sketch}
\end{figure}
As shown below, however, when targeting sufficiently high sensitivities, gravity gradients become a great challenge for this kind of experiments: they lead to a drastic loss of contrast in the interference signal and impose the need to control the initial co-location of the two atomic species (i.e.\ their relative position and velocity) with very high accuracy.
In this paper we will present a novel scheme that simultaneously overcomes both difficulties.
\section{Challenges due to gravity gradients in tests of universality of free fall}
\label{sec:challenges}
In order to analyze the effects of gravity gradients, we will make use of a convenient description of the state evolution in a light-pulse AI developed in Ref.~\cite{roura14} and summarized in Appendix~\ref{sec:state_evolution}.
Within this approach the evolution of the interfering wave packets along each branch of the interferometer is described in terms of centered states $|\psi_\text{c} (t) \rangle$ that characterize their expansion and shape evolution, as well as displacement operators that account for their motion and whose argument $\boldsymbol{\chi} (t) = \big(\boldsymbol{\mathcal{R}}(t), \boldsymbol{\mathcal{P}}(t) \big)^\text{T}$ corresponds to the central position and momentum of the wave packet, which are given by classical phase-space trajectories including the kicks from the laser pulses.
The state at the exit port~I (analogous results hold for the exit port~II) then takes the form
\begin{equation}
|\psi_\text{I} (t)\rangle =
\frac{1}{2} \Big[ e^{i \Phi_1} \hat{\mathcal{D}}(\boldsymbol{\chi}_1) |\psi_\text{c} (t) \rangle
+ e^{i \Phi_2} \hat{\mathcal{D}}(\boldsymbol{\chi}_2) |\psi_\text{c} (t) \rangle \Big]
\label{eq:port_I},
\end{equation}
and gives rise to the following oscillations in the fraction of atoms detected in that port as a function of the phase shift $\delta\phi$ between the interferometer branches:
\begin{equation}
\frac{N_\text{I}}{N_\text{I} + N_\text{II}}
= \big\langle \psi_\text{I} (t) \big| \psi_\text{I} (t) \big\rangle
= \frac{1}{2} \big(1 + C \cos \delta\phi \big)
\label{eq:fraction_I}.
\end{equation}
The positive quantity $C$, known as the contrast, characterizes the amplitude of these oscillations and is given by
\begin{equation}
C = \Big| \big\langle \psi_\text{c} (t) \big| \hat{\mathcal{D}}(\delta\boldsymbol{\chi})
\big| \psi_\text{c} (t) \big\rangle \Big| \leq 1
\label{eq:contrast},
\end{equation}
which takes its maximum value when the relative displacement $\delta\boldsymbol{\chi} = \boldsymbol{\chi}_2 - \boldsymbol{\chi}_1$ between the interfering wave packets vanishes.
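For a minimum-uncertainty Gaussian centered state the overlap in Eq.~(3) has the closed form $C = \exp\!\big[-\delta x^2/8\sigma_x^2 - \delta p^2 \sigma_x^2/2\hbar^2\big]$, a standard coherent-state result stated here as a check rather than taken from the text; it makes explicit that the contrast decays once the relative displacement exceeds the wave-packet widths. The sketch below compares a direct numerical overlap integral with this closed form, in units $\hbar = \sigma_x = 1$.

```python
import numpy as np

# Contrast C = |<psi| D(dx, dp) |psi>| for a Gaussian wave packet, with
# hbar = 1 and position width sigma_x = 1. Up to a global phase the
# displacement operator acts as (D psi)(x) = exp(i dp x) psi(x - dx).
sx = 1.0
x = np.linspace(-30.0, 30.0, 60001)
dgrid = x[1] - x[0]
norm = (2 * np.pi * sx**2) ** -0.25
psi = norm * np.exp(-x**2 / (4 * sx**2))

def contrast(dx, dp):
    displaced = np.exp(1j * dp * x) * norm * np.exp(-(x - dx)**2 / (4 * sx**2))
    return abs(np.sum(psi * displaced) * dgrid)   # |overlap integral|

for dx, dp in [(0.0, 0.0), (2.0, 0.0), (0.0, 1.5), (1.0, 1.0)]:
    closed_form = np.exp(-dx**2 / (8 * sx**2) - dp**2 * sx**2 / 2)
    print(dx, dp, contrast(dx, dp), closed_form)  # the two columns agree
```

Since $\sigma_p = \hbar/2\sigma_x$ for such a state, shrinking one width to protect the contrast necessarily inflates the other, which is the Heisenberg limitation discussed below.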
The full result for the phase shift is provided in Appendix~\ref{sec:state_evolution}, but for the present discussion it is sufficient to focus on the role of uniform forces (including inertial ones) and the dependence, caused by gravity gradients, on the central position and velocity ($\mathbf{r}_0$ and $\mathbf{v}_0$) of the initial wave packet:
\begin{equation}
\delta\phi = \mathbf{k}_\text{eff}^\text{T}\, (\mathbf{g} + \mathbf{a}')\, T^2
+ \mathbf{k}_\text{eff}^\text{T}\, \big(\Gamma\, T^2\big) \,
(\mathbf{r}_0 + \mathbf{v}_0 T)
+ \ldots
\label{eq:phase_shift0},
\end{equation}
where for simplicity we have assumed time-independent accelerations and gravity gradients and kept only the contributions to lowest order in $(\Gamma\, T^2)$. We have separated the acceleration $\mathbf{g}$ caused by the gravitational field from the acceleration $\mathbf{a}'$ caused by any other forces including inertial ones and accounting also for the vibrations of the retro-reflecting mirror. On the other hand, the gravity gradient tensor $\Gamma$, defined in Eq.~\eqref{eq:Gamma}, characterizes the deviations from a uniform gravitational field.
\subsection{Initial co-location}
\label{sec:challenges_co-location}
When performing a test of the UFF based on a simultaneous differential measurement with two different atomic species (labeled here $A$ and $B$) the relevant quantity is the phase-shift difference
\begin{align}
\delta\phi^A - \delta\phi^B \approx&\
\mathbf{k}_\text{eff}^\text{T}\, (\mathbf{g}_A - \mathbf{g}_B)\, T^2
+ \mathbf{k}_\text{eff}^\text{T}\, \big(\Gamma\, T^2\big) \, (\mathbf{r}_0^A - \mathbf{r}_0^B)
\nonumber \\
&+\mathbf{k}_\text{eff}^\text{T}\,\big(\Gamma\, T^2\big)\,(\mathbf{v}_0^A - \mathbf{v}_0^B)\,T
\label{eq:co-location}.
\end{align}
Here we have assumed that $\mathbf{k}_\text{eff}$ and $T$ are the same for both species and neglected any contributions to the residual accelerations $\mathbf{a}'$ which are not common to both species. We have made these simplifying assumptions to make the presentation here less cumbersome and to highlight the essential points, but such restrictions are lifted in the remaining sections, so that the strategies and results presented there are more generally applicable.
As seen from Eq.~\eqref{eq:co-location}, in the presence of gravity gradients small differences in the central position (and velocity) of the initial wave packets for the two atomic species can mimic the effects of a violation of UFF.
In principle, preparing wave packets with very well defined central position and momentum does not suffer from limitations associated with Heisenberg's uncertainty principle, which only affects their position and momentum widths. However, the required degree of control on those quantities is rather demanding in practice: for example, testing the UFF at the level of
$10^{-15} g$
entails controlling the relative initial position and velocity with accuracies better than a few nm and several hundred pm/s respectively.
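These requirements follow directly from comparing the gravity-gradient term in Eq.~\eqref{eq:co-location} with the target signal. A minimal numerical sketch, assuming Earth's gravity gradient $\Gamma_{zz} \approx 3\times10^{-6}\,\text{s}^{-2}$ and an illustrative interrogation time $T = 5\,\text{s}$ (both values are assumptions for this estimate):

```python
# Order-of-magnitude estimate of the initial co-location requirements
# implied by Eq. (co-location). All numerical inputs are illustrative.
g = 9.81          # m/s^2, local gravitational acceleration
Gamma_zz = 3e-6   # 1/s^2, Earth's gravity gradient (assumed)
T = 5.0           # s, interrogation time (assumed)
eta = 1e-15       # targeted UFF sensitivity in units of g

# k^T (Gamma T^2) dr < k^T (eta g) T^2  =>  dr < eta g / Gamma_zz
dr_max = eta * g / Gamma_zz   # relative-position requirement (m)
dv_max = dr_max / T           # relative-velocity requirement (m/s)
print(f"dr < {dr_max * 1e9:.1f} nm, dv < {dv_max * 1e12:.0f} pm/s")
```

This reproduces the ``few nm'' and ``several hundred pm/s'' figures quoted above.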
In fact, this systematic effect, often known in this context as the \emph{initial co-location} problem, is one of the biggest challenges faced by this kind of experiment.
Furthermore, verifying that the stringent requirements on initial co-location are fulfilled by measuring the relative position and velocity of the two species under the same experimental conditions as in the differential interferometry measurements (otherwise one could introduce additional systematic effects) is itself limited by Heisenberg's uncertainty principle, which implies
\begin{equation}
n\, N\, \sigma_x\, \sigma_p \geq \hbar / 2 \,
\label{eq:HUP},
\end{equation}
where $\sigma_x$ and $\sigma_p$ are the precisions for the measurement of the central position and momentum that can be achieved after a given integration time, $N$ is the number of atoms in each atom cloud and $n$ is the number of experimental runs, given by the integration time multiplied by the repetition rate.
Thus, unless the number of atoms in each atom cloud is rather high, the required integration time to achieve the desired accuracy in these checks of systematics may be comparable to or even exceed the entire lifetime of the mission, which has been raised as an objection against the use of atoms (rather than macroscopic test masses) as inertial references for high-precision tests of UFF \cite{nobili15}.
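Rearranging Eq.~\eqref{eq:HUP} for the number of runs gives a feel for the integration times involved; the atom number, repetition rate, atomic species, and target precisions below are purely illustrative assumptions:

```python
# Minimum number of runs n from Eq. (HUP); all inputs are illustrative.
hbar = 1.054571817e-34       # J s
sigma_x = 3.3e-9             # m, target precision on the central position
m_atom = 1.443e-25           # kg, 87Rb mass (assumed species)
sigma_p = m_atom * 6.5e-10   # kg m/s, target precision on central momentum
N = 1e5                      # atoms per cloud (assumed)
rep_rate = 0.1               # Hz, repetition rate (assumed)

n_min = (hbar / 2) / (N * sigma_x * sigma_p)
print(f"n >= {n_min:.0f} runs, i.e. >= {n_min / rep_rate / 3600:.1f} h")
```

Since the required time scales as $1/N$, a low atom number quickly makes these verification measurements prohibitive.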
\subsection{Loss of contrast}
\label{sec:challenges_contrast}
In order to understand qualitatively the motion of the atomic wave packets along the branches of the interferometer in the presence of a gravity gradient, it is useful to consider a freely falling frame where the central trajectories for the two branches after the first beam-splitter pulse are symmetric with respect to the spatial origin of coordinates at $z=0$, as depicted in Fig.~\ref{fig:gg_trajectories}. It then becomes clear that the tidal forces associated with the gravity gradient tend to open up these trajectories and give rise to \comment{an \emph{open interferometer} with} non-vanishing relative position and momentum displacements at the exit ports. They are quantitatively obtained in \comment{Appendix~\ref{sec:state_evolution}} and are given by
\begin{equation}
\delta \boldsymbol{\mathcal{R}} = \left(\Gamma\, T^2 \right) \mathbf{v}_\text{rec} \, T \, ,
\quad
\delta \boldsymbol{\mathcal{P}} = (\Gamma\, T^2) \, m\, \mathbf{v}_\text{rec}
\label{eq:deltaR_deltaP} \, .
\end{equation}
The existence of a non-vanishing relative displacement $\delta\boldsymbol{\chi} \neq 0$ between the interfering wave packets, which no longer exhibit full quantum overlap, gives rise to a loss of contrast, as reflected by Eq.~\eqref{eq:contrast}.
\begin{figure}[h]
\begin{center}
\def8.5cm{8.5cm}
\input{figure2_pdf.tex}
\end{center}
\caption{Central trajectories for a Mach-Zehnder interferometer in the presence of gravity gradients as seen in a suitable freely falling frame. The tidal forces tend to open up the trajectories compared to the case without gravity gradients (dashed lines).
The momentum transfers from the laser pulses are also indicated for the different branches.
}
\label{fig:gg_trajectories}
\end{figure}
In terms of the Wigner function \cite{hillary84}, defined in Eq.~\eqref{eq:wigner_def}, the expression for the contrast takes the following suggestive form, which is also valid for mixed states~\cite{roura14}:
\begin{equation}
C = \left | \, \int d^3x \int d^3p \, W(\mathbf{x},\mathbf{p};t) \,
e^{-\frac{i}{\hbar} \delta \boldsymbol{\chi}^\text{T} J \, \boldsymbol{\xi}} \, \right |
\label{eq:contrast_wigner} ,
\end{equation}
where we introduced the phase-space vector $\boldsymbol{\xi} = (\mathbf{x}, \mathbf{p})^\text{T}$ and the symplectic form $J$ defined in Eq.~\eqref{eq:symplectic_form}.
The loss of contrast can be understood as a consequence of the oscillatory factor ``washing out'' the result of the integral; conversely, when the oscillations are negligible, the contrast tends to unity, as dictated by the normalization of the Wigner function.
The example of Gaussian states, whose Wigner function is given by Eq.~\eqref{eq:wigner_gaussian}, is particularly illustrative. Substituting into Eq.~\eqref{eq:contrast_wigner}, one obtains the following result for the contrast:
\begin{equation}
C = \exp \Big[ -\frac{1}{2 \hbar^2}\, \delta \boldsymbol{\chi}_0^\text{T} J^\text{T} \Sigma \,
J \, \delta \boldsymbol{\chi}_0 \Big]
\label{eq:contrast_gaussian},
\end{equation}
where $\Sigma$ is the covariance matrix of the initial state, detailed in Eq.~\eqref{eq:covariance}.
In turn, $\delta \boldsymbol{\chi}_0 \equiv \mathcal{T}^{-1} (t,t_0) \, \delta \boldsymbol{\chi}$ is related to $\delta \boldsymbol{\chi}$ through the transition matrix $\mathcal{T} (t,t_0)$ as explained in Appendix~\ref{sec:classical_trajectories} and well approximated by Eqs.~\eqref{eq:deltaR_free}-\eqref{eq:deltaP_free} for gravity gradients.
It is clear from Eq.~\eqref{eq:contrast_gaussian}
that the loss of contrast caused by gravity gradients can be reduced by simultaneously considering smaller position and momentum spreads, $\Sigma_{xx}$ and $\Sigma_{pp}$.
In fact, such a conclusion is not specific to Gaussian states and holds in general. Indeed, as the size of the main support of the Wigner function decreases, the wash-out effect from the oscillatory factor becomes less and less important.
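For a concrete one-dimensional illustration, one can evaluate Eqs.~\eqref{eq:deltaR_deltaP} and \eqref{eq:contrast_gaussian} for a minimum-uncertainty Gaussian state with diagonal covariance, approximating $\delta\boldsymbol{\chi}_0 \approx \delta\boldsymbol{\chi}$ to lowest order in $(\Gamma T^2)$; the species, recoil velocity, wave-packet width, and timing below are assumed example values:

```python
import numpy as np

# 1D evaluation of Eqs. (deltaR_deltaP) and (contrast_gaussian);
# all numerical inputs are illustrative assumptions.
hbar = 1.054571817e-34
m = 1.443e-25        # kg, 87Rb (assumed)
v_rec = 1.2e-2       # m/s, two-photon recoil velocity (assumed)
Gamma_zz, T = 3e-6, 5.0

dR = Gamma_zz * T**2 * v_rec * T   # relative position displacement
dP = Gamma_zz * T**2 * m * v_rec   # relative momentum displacement

# Minimum-uncertainty Gaussian with diagonal covariance:
# chi^T J^T Sigma J chi = sigma_x^2 dP^2 + sigma_p^2 dR^2
sigma_x = 1e-6                     # m (assumed width)
sigma_p = hbar / (2 * sigma_x)
C = np.exp(-((sigma_x * dP)**2 + (sigma_p * dR)**2) / (2 * hbar**2))
print(f"dR = {dR * 1e6:.1f} um, contrast C = {C:.2f}")
```

Even for a pure minimum-uncertainty state, the contrast degrades markedly once $\delta\mathcal{R}$ becomes comparable to $\hbar/\sigma_p$.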
Simultaneously decreasing $\Sigma_{xx}$ and $\Sigma_{pp}$ is, however, ultimately limited by Heisenberg's uncertainty principle. Moreover, achieving a sufficiently narrow momentum distribution can sometimes be rather demanding in practice even before the limit due to Heisenberg's uncertainty principle is reached. In order to alleviate these difficulties, an easily implementable mitigation strategy based on a small adjustment $\delta T / T \sim (\Gamma_{zz}\, T^2)$ of the timing for the last pulse was proposed in Ref.~\cite{roura14}. The key idea is that with a suitable choice of $\delta T$ one can change $\delta \boldsymbol{\mathcal{R}}$ (while keeping $\delta \boldsymbol{\mathcal{P}}$ essentially unchanged) so that the phase-space vector $(J\, \delta \boldsymbol{\chi})$ becomes aligned with the Wigner function and the wash-out effect of the oscillatory factor in Eq.~\eqref{eq:contrast_wigner} is minimized. (This is actually reminiscent of the use of squeezed states to beat the standard quantum limit in optical interferometers; rather than squeezing the state, here the observable is modified to achieve a similar effect.)
The strategy, which is briefly summarized in \comment{Appendix~\ref{sec:contrast_wigner_gaussian}}, was shown to be very effective for parameter ranges like those considered, for example, for the STE-QUEST mission \cite{aguilera14}. Nevertheless, it would eventually face significant limitations in future plans for further increasing by several orders of magnitude the sensitivity of UFF tests based on atom interferometry \cite{dimopoulos07}. Furthermore, it is somewhat less effective when applied to thermal clouds (even for ultracold atoms close to quantum degeneracy but with a negligible condensate fraction) rather than Bose-Einstein condensates%
\footnote{\comment{These limitations also apply to an alternative approach based on extracting the phase shift from the spatial location of the maxima within the fringe pattern that arises in the density profile at the exit ports of an open interferometer \cite{muentinga13,sugarbaker13}.}}.
In the next section we present a novel scheme which is not afflicted by these shortcomings and simultaneously overcomes both the initial co-location problem and the loss of contrast.
\section{Simultaneously overcoming loss of contrast and the initial co-location problem}
\label{sec:co-location}
As shown by Eq.~\eqref{eq:phase_shift3}, the contributions to the phase shift $\delta\phi$ that depend on the initial values of the central position and momentum, characterized by the phase-space vector $\boldsymbol{\chi}_0 = (\mathbf{r}_0, m \mathbf{v}_0)^\text{T}$, can be written in the following revealing form:
\begin{equation}
\delta\phi = - \frac{1}{\hbar}\, \delta \boldsymbol{\chi}^\text{T}(t)\, J \, \mathcal{T}(t,t_0) \, \boldsymbol{\chi}_0 + \ldots
\label{eq:phase_shift_initial},
\end{equation}
where $\mathcal{T}(t,t_0)$ is the transition matrix defined right after Eq.~\eqref{eq:full_solution}.
The next insight is then to realize that through a suitable adjustment of the laser wavelength for the second pulse, one can have a vanishing final displacement $\delta\boldsymbol{\chi} = 0$.
Indeed, this can be achieved by changing the effective momentum transfer associated with the second pulse to $\hbar\, \big( \mathbf{k}_\text{eff} + \Delta \mathbf{k}_\text{eff} \big)$ with
\begin{equation}
\Delta \mathbf{k}_\text{eff} = \big( \Gamma\, T^2 / 2 \big)\, \mathbf{k}_\text{eff}
\label{eq:Delta_k},
\end{equation}
where we neglected corrections of higher order in $\big( \Gamma\, T^2 \big)$ for simplicity [the exact result can be obtained using Eq.~\eqref{eq:transition_exact}]. This can be easily understood in a suitable freely falling frame where the central trajectories for the two branches of the interferometer are symmetric with respect to $z=0$, as shown in Fig.~\ref{fig:mitigation_trajectories}. The momentum transfer associated with the second pulse is chosen so that the trajectory after the pulse corresponds to the time reversal of the trajectory before it. Since the gravity gradient leads to the curvature of the trajectories in the space-time diagram as depicted in Fig.~\ref{fig:mitigation_trajectories}, an increase of the momentum transfer given by Eq.~\eqref{eq:Delta_k} is necessary.
\begin{figure}[h]
\begin{center}
\def8.5cm{8.5cm}
\input{figure3_pdf.tex}
\end{center}
\caption{Central trajectories for the same situation depicted in Fig.~\ref{fig:gg_trajectories} but with a suitable adjustment of the momentum transfer from the second laser pulse so that a closed interferometer, with vanishing relative displacement between the interfering wave packets in each port, is recovered.}
\label{fig:mitigation_trajectories}
\end{figure}
From Eqs.~\eqref{eq:contrast} and \eqref{eq:phase_shift_initial} it is clear that by achieving $\delta\boldsymbol{\chi} = 0$, this scheme simultaneously takes care of the loss of contrast caused by gravity gradients as well as the stringent requirements on the initial co-location of the two atomic species.
Indeed, thanks to the absence of a relative displacement between the interfering wave packets full contrast is recovered.
Furthermore, there is also an intuitive explanation for the solution of the initial co-location problem which is connected with the laser phases.
Because of the additional $\Delta \mathbf{k}_\text{eff}$ for the second pulse, in this new scheme the total momentum transfer to the two branches is unbalanced, i.e.\ one no longer has $\sum_j \delta \mathbf{k}_\text{eff}^{(j)} = 0$, where $\hbar\, \delta \mathbf{k}_\text{eff}^{(j)}$ corresponds to the difference between the momenta transferred to the two branches by the $j$-th pulse. This implies that the total contribution from the laser phases exhibits the following dependence on the initial values of the central position and velocity of the atomic wave packet:
\begin{align}
\delta\phi_\text{laser} \, \to \, &
\sum_j \delta \mathbf{k}_\text{eff}^{(j)} \cdot (\mathbf{r}_0 + \mathbf{v}_0 T) =
-2\, \Delta \mathbf{k}_\text{eff}^\text{T}\, (\mathbf{r}_0 + \mathbf{v}_0 T) \nonumber \\
& \qquad\qquad\qquad = -\mathbf{k}_\text{eff}^\text{T}\, \big(\Gamma T^2\big) \,
(\mathbf{r}_0 + \mathbf{v}_0 T)
\label{eq:laser_phase} ,
\end{align}
where we made use of the choice of $\Delta \mathbf{k}_\text{eff}$ specified above in the second equality.
The key point is that the dependence on the initial position and velocity of the contribution from the laser phases compensates the effect of gravity gradients because the right-hand side of Eq.~\eqref{eq:laser_phase} exactly cancels the terms depending on the initial conditions in Eq.~\eqref{eq:phase_shift0}.
On the other hand, the fact that $\sum_j \delta \mathbf{k}_\text{eff}^{(j)} \neq 0$ also implies that the phase shift depends on the initial position (and velocity) of the retro-reflecting mirror, which is not known or controlled with very high precision.
More specifically, it gives rise to the following contribution to the phase shift:
\begin{equation}
\delta\phi_\text{laser} \, \to \, -\, 2\, \Delta\mathbf{k}_\text{eff} \cdot \mathbf{r}_\text{mirror}
= -2\, \mathbf{k}_\text{eff}^\text{T}\, \big(\Gamma\, \mathbf{r}_\text{mirror}\big) \, T^2
\label{eq:mirror_position} ,
\end{equation}
where $\mathbf{r}_\text{mirror}$ corresponds to the position of the retro-reflecting mirror at the time the second pulse is applied. The effect of this contribution can be post-corrected for a known mirror position $\mathbf{r}_\text{mirror}$. However, an uncertainty $\Delta\mathbf{r}_\text{mirror}$ on the position of the mirror leaves a residual contribution to the phase shift after post-correction corresponding to the expression in Eq.~\eqref{eq:mirror_position} but with $\mathbf{r}_\text{mirror}$ replaced by $\Delta\mathbf{r}_\text{mirror}$.
Fortunately, such dependence on $\Delta\mathbf{r}_\text{mirror}$ drops out (to sufficiently high degree) in the differential measurement. Indeed, when performing a differential measurement between species $A$ and $B$, the dependence on the position uncertainty $\Delta\mathbf{r}_\text{mirror}$ reduces to
\begin{equation}
\delta\phi_\text{laser}^A - \delta\phi_\text{laser}^B \, \to \, -2\, \big( \mathbf{k}_A\, T_A^2
- \mathbf{k}_B\, T_B^2 \big)^\text{T}
\big(\Gamma\, \Delta\mathbf{r}_\text{mirror}\big)
\label{eq:mirror_position_diff} ,
\end{equation}
which can be made small enough in exactly the same way as the contribution to the differential measurement of residual accelerations common to both species.
For example, given the gravity gradient on Earth's surface, one has $\big| \Gamma\, \Delta\mathbf{r}_\text{mirror} \big| \sim 3\times10^{-13}\, g $ for an uncertainty $| \Delta \mathbf{r}_\text{mirror} | = 1\, \mu\text{m}$ in the position of the mirror. This means, for example, that if one can suppress residual accelerations $a' \lesssim 10^{-12}\, g$ through common-mode rejection, any contribution due to the unknown position of the mirror will be suppressed provided that this can be determined with an uncertainty $| \Delta \mathbf{r}_\text{mirror} | \lesssim 1\, \mu \text{m}$.
In fact, typical plans for high-precision tests of UFF with AIs are designed to reject much higher residual accelerations, so that the requirements on $| \Delta \mathbf{r}_\text{mirror} |$ can be further relaxed by several orders of magnitude compared to this example.
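The suppression estimate quoted above is easy to verify numerically (Earth's gravity gradient and the $1\,\mu\text{m}$ mirror-position uncertainty from the text):

```python
# Spurious differential acceleration from the mirror-position uncertainty,
# cf. Eq. (mirror_position_diff); inputs are the values quoted in the text.
g = 9.81
Gamma_zz = 3e-6       # 1/s^2, Earth's gravity gradient
dr_mirror = 1e-6      # m, uncertainty in the mirror position
a_spurious = Gamma_zz * dr_mirror
print(f"|Gamma dr_mirror| ~ {a_spurious / g:.1e} g")
```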
So far we have assumed perfect knowledge of the gravity gradients, which would allow perfect compensation of their effects (up to the uncertainty in the mirror position) by using the scheme described above with $\Delta \mathbf{k}_\text{eff}$ given by Eq.~\eqref{eq:Delta_k}. In practice, however, the gravity gradient tensor $\Gamma$ employed in Eqs.~\eqref{eq:Delta_k}-\eqref{eq:mirror_position_diff} needs to be determined by simulations of the mass distribution, direct gradiometry measurements or a combination of both.
Since this can only be done with some finite accuracy $\Delta\Gamma$ characterizing the difference between the actual gravity gradients and the tensor $\Gamma$ determined by those means, the final displacement $\delta\boldsymbol{\chi}$ will take some non-vanishing residual value with $\Delta\Gamma$ appearing instead of $\Gamma$ in Eqs.~\eqref{eq:deltaR_deltaP}. Consequently, the dependence of the phase shift on the initial conditions appearing in Eq.~\eqref{eq:phase_shift_initial} will not be completely eliminated, but the co-location requirements for the differential measurement will be substantially relaxed by a factor of order $\|\Delta\Gamma\| / \|\Gamma\|$. Hence, determining for instance the gravity gradient tensor $\Gamma$ with a relative accuracy $\|\Delta\Gamma\| / \|\Gamma\| \lesssim 10^{-3}$ leads to a relaxation of the initial co-location requirements by 3 orders of magnitude.
We conclude this section by briefly discussing how feasible it is to implement the required momentum change for the second pulse, which is given by Eq.~\eqref{eq:Delta_k} and amounts to a relative frequency change $\Delta\nu / \nu = (\Gamma_{zz}\, T^2)$ if the gravity gradient is \comment{aligned} with $\mathbf{k}_\text{eff}$. Given Earth's gravity gradient $\Gamma_{zz} \approx 3 \times 10^{-6}\, \text{s}^{-2}$ and a moderate interferometer time $2\,T = 2\,\text{s}$, this corresponds to a frequency change $\Delta\nu \approx 1\, \text{GHz}$, which can be easily implemented with acousto-optical modulators (AOMs). This single-photon frequency change will be the same even if one has a large momentum transfer through higher-order Bragg diffraction, a sequence of multiple $\pi$ pulses or a combination of both. On the other hand, for a longer interferometer time $2\,T = 10\,\text{s}$ a single-photon frequency change $\Delta\nu \approx 25\, \text{GHz}$ would be necessary. Such a frequency change requires a more sophisticated set-up \comment{(e.g.\ two phase-locked lasers)}. Moreover, it gives rise to a substantially larger detuning of the single-photon transition, which requires a higher laser intensity in order to have a comparable Rabi frequency.
(The detuning could be approximately halved by going from red- to blue-detuned transitions when changing from $\mathbf{k}_\text{eff}$ to $\mathbf{k}_\text{eff} + \Delta \mathbf{k}_\text{eff}$.)
Thus, the new scheme seems somewhat easier to implement directly in set-ups where higher sensitivity is achieved through large momentum transfer rather than very long interferometer times.
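As a quick consistency check of the frequency changes quoted above, one can use $\Delta\nu/\nu = \Gamma_{zz}\, T^2$ with an assumed laser wavelength of $780\,\text{nm}$ (the Rb D2 line):

```python
# Single-photon frequency change Delta_nu = (Gamma_zz T^2) nu;
# the 780 nm wavelength is an assumption (Rb D2 line).
c = 2.998e8           # m/s
nu = c / 780e-9       # Hz
Gamma_zz = 3e-6       # 1/s^2

dnu_2s = Gamma_zz * (2.0 / 2)**2 * nu    # 2T = 2 s
dnu_10s = Gamma_zz * (10.0 / 2)**2 * nu  # 2T = 10 s
print(f"2T=2s: {dnu_2s / 1e9:.1f} GHz; 2T=10s: {dnu_10s / 1e9:.0f} GHz")
```

This reproduces the $\sim 1\,\text{GHz}$ figure for $2T = 2\,\text{s}$ and gives a few tens of GHz for $2T = 10\,\text{s}$, of the same order as the $\sim 25\,\text{GHz}$ quoted above (the exact value depends on the species and wavelength).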
\section{Conclusion}
\label{sec:conclusion}
Gravity gradients pose a great challenge for high-precision tests of UFF based on atom interferometry. Indeed, for sufficiently large values of the effective momentum transfer or the interferometer time, gravity gradients lead to a drastic loss of contrast and impose a serious limitation on the highest sensitivity that can be achieved.
Furthermore, they also imply stringent requirements on the initial co-location of the wave packets for the two species since the effects of a non-uniform gravitational field could otherwise mimic a violation of UFF.
Although there is in principle no fundamental limitation on the precision with which the central position and momentum of the wave packets for both species can be determined, in practice the requirements can be rather challenging. Moreover, the time needed to verify that such requirements are fulfilled can be comparable to the entire mission lifetime.
The situation concerning the loss of contrast and the time needed for verification of the systematics associated with initial co-location can be improved by considering smaller position and momentum widths for the initial state, but this is ultimately limited by Heisenberg's uncertainty principle.
In this paper we have presented a novel scheme that simultaneously overcomes the loss of contrast and the initial co-location problem. In doing so, it circumvents the limitations due to Heisenberg's uncertainty principle on the highest sensitivities that can be achieved and eases the experimental realization by relaxing the requirements on initial co-location by several orders of magnitude.
The key idea is that by slightly changing the wavelength of the second laser pulse, the momentum transfer to the two interferometer branches becomes unbalanced and this implies that the total phase-shift contribution from the laser phases depends on the initial values of the central position and momentum of the atomic wave packets. In fact, with a suitable choice of the change of wavelength this can exactly compensate the analogous contribution caused by the gravity gradients.
Furthermore, this choice automatically gives rise to a closed interferometer (vanishing relative displacement between the interfering wave packets) with no loss of contrast.
The results and discussions presented here can also be applied directly to experiments performed under microgravity conditions (this would actually correspond to the freely falling frame considered in Figs.~\ref{fig:gg_trajectories} and \ref{fig:mitigation_trajectories}). In that case employing a retro-reflection set-up naturally leads to double diffraction \cite{leveque09,giese13} and \comment{the associated} symmetric interferometers, which have a number of advantages, such as immunity to a number of systematic effects and noise sources (including laser phase noise). Only one detail needs then to be changed in Figs.~\ref{fig:gg_trajectories} and \ref{fig:mitigation_trajectories}: the central velocity of the initial wave packet vanishes and there is an additional third exit port with vanishing central velocity.
Interestingly, the double diffraction scheme can be generalized to experiments performed in a laboratory under normal gravity conditions by adding a third laser frequency per species \cite{malossi10,zhou15}, so that one can still benefit from many of the advantages associated with double diffraction (including also the cancellation of the photon-recoil term quadratic in $\mathbf{k}_\text{eff}$ mentioned in Appendix~\ref{sec:phase_shift}).
Rotations, which were not explicitly considered here, also lead to open interferometers, loss of contrast and dependence of the phase shift on the central velocity of the initial wave packet. These effects can be mitigated by employing satellites with sufficiently small angular velocity, the use of a tip-tilt mirror that \comment{compensates rotations} \cite{hogan08,lan12,dickerson13} or a combination of both. Moreover, by combining it with the use of such a tip-tilt mirror, the scheme presented here can be extended to the case of non-aligned gravity gradients, where the direction of $\mathbf{k}_\text{eff}$ does not coincide with a principal axis of the tensor $\Gamma$.
Finally, it is worth pointing out that \comment{our method} can also be applied to differential measurements involving two (or more) spatially separated atom interferometers interrogated by common laser beams, a configuration employed for gradiometry measurements \cite{snadden98,rosi15}. Indeed, a scheme completely analogous to that presented here could be exploited to mitigate the loss of contrast for each single interferometer (for sufficiently high $\mathbf{k}_\text{eff}$ or interferometer time) as well as the additional loss of contrast in the differential measurement due to the coupling of static gravity gradients to initial position and velocity jitter from shot to shot, which can become particularly important when considering long baselines between the interferometers. This would be relevant for gravitational antennas capable of monitoring very precisely changes in the local gravitational field, such as the MIGA facility \cite{geiger15} currently under construction, and with interesting applications to geophysics and hydrology. It may also be relevant for gravitational antennas with very long baselines which have been proposed for the detection of low-frequency gravitational waves \cite{dimopoulos08b,graham13,hogan15}.
\begin{acknowledgments}
This work was supported by the German Space Agency (DLR) with funds provided by the Federal Ministry of Economics and Technology (BMWi) under Grant No.~50WM1556 (QUANTUS IV).
It is a pleasure to thank Wolfgang Zeller, Stephan Kleinert and Wolfgang Schleich for collaboration in related earlier work.
\end{acknowledgments}
\bibliographystyle{apsrev4-1}
\section{Feature}
\subsection{Detector \& Descriptor}
In our method, we implemented three kinds of features: pure DISK\cite{tyszkiewicz2020disk}, pure SuperPoint\cite{superpoint_paper}, and a mix of SuperPoint and DISK features.
For SuperPoint, we additionally applied test-time homographic adaptation with 100 iterations to detect keypoints, and in some cases, we added a trained convolutional autoencoder to halve the feature dimension.
The Non-Maximum Suppression (NMS) window size and keypoint threshold were estimated analytically from the image size so as to achieve a roughly uniform keypoint distribution across the image. These values were then fine-tuned around their initial estimates to obtain the best validation-set performance for each feature and dataset.
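The estimation and suppression steps can be sketched as follows; the grid-based NMS and the one-keypoint-per-window heuristic are our illustrative reconstruction, not the exact implementation:

```python
def nms_window(img_h, img_w, target_kpts):
    """Heuristic initial estimate: one keypoint per window on average,
    assuming a roughly uniform keypoint distribution over the image."""
    return int((img_h * img_w / target_kpts) ** 0.5)

def grid_nms(points, scores, win):
    """Keep only the highest-scoring keypoint within each win x win cell."""
    best = {}
    for (x, y), s in zip(points, scores):
        cell = (int(x) // win, int(y) // win)
        if cell not in best or s > best[cell][1]:
            best[cell] = ((x, y), s)
    return [p for p, _ in best.values()]

# e.g. for a 480x640 image and a budget of ~2000 keypoints:
win0 = nms_window(480, 640, 2000)   # initial estimate, then fine-tuned
kept = grid_nms([(10, 10), (12, 11), (200, 300)], [0.9, 0.5, 0.7], 8)
```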
\subsection{Pyramid Extraction}
We observed large-scale variation between images in some sets, such as the lizard in PragueParks. This troubled our matcher, since the same feature may look very different at different scales. To address this, we extracted features at three scales and then concatenated those associated with the same keypoint.
Besides scale, orientation can also be a problem. Analogously to the scale case, we extracted features in seven orientations via affine transformations (rotations of 90 degrees to the left and right) and perspective transformations (tilts of 45 degrees horizontally left/right or vertically up/down), and then concatenated them.
To save computational resources, we only applied the scale and/or orientation pyramids under certain conditions, which are elaborated in the Appendix.
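The pyramid extraction can be sketched as below; the `describe` function is a placeholder standing in for SuperPoint/DISK, naive average-pooling stands in for proper rescaling, and `np.rot90` stands in for the actual affine/perspective warps:

```python
import numpy as np

def downscale2(img):
    """2x average-pooling downscale (stand-in for proper resizing)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def describe(img):
    """Placeholder global descriptor; a real pipeline would run
    SuperPoint/DISK here and sample at the keypoint location."""
    return np.array([img.mean(), img.std(), img.max(), img.min()])

def pyramid_descriptor(img):
    """Concatenate descriptors of the same content across scaled and
    rotated views, so the matcher sees all variants jointly."""
    views = [img, downscale2(img), downscale2(downscale2(img))]  # 3 scales
    views += [np.rot90(img, k) for k in (1, 3)]  # +/-90 deg orientations
    return np.concatenate([describe(v) for v in views])

img = np.arange(64, dtype=float).reshape(8, 8)
feat = pyramid_descriptor(img)   # 5 views x 4 dims = 20-dim descriptor
```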
\subsection{Pre-Process \& Post-Process}
\textbf{Masking}
For the Phototourism and GoogleUrban datasets, some dynamic objects occur frequently, introducing unreliable and unrepeatable keypoints into our solution. To overcome this, we segmented the scene and masked out several classes, including Person, Sky, Car, Bus, and Bicycle. Except for the class Person, which was segmented by Mask-RCNN\cite{Maskrcnn} trained on COCO, all other classes were removed using PSPNet\cite{zhao2017pspnet} trained on ADE20K.
While these segmentation networks worked well under most circumstances, they performed poorly at distinguishing sculptures from humans; thus we additionally trained a binary classifier to tell pedestrians apart from sculptures.
Moreover, to preserve building details when masking was enabled, we eroded the masked area with a 5x5 kernel for the sky and a 3x3 kernel for the other classes.
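The erosion step amounts to standard binary morphology (in practice one would call `cv2.erode`); a dependency-free sketch with an odd-sized all-ones kernel:

```python
import numpy as np

def binary_erode(mask, k):
    """Erode a boolean mask with a k x k all-ones structuring element
    (k odd); equivalent to cv2.erode on a binary mask."""
    h, w = mask.shape
    pad = k // 2
    padded = np.pad(mask, pad, constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + h, dx:dx + w]
    return out

mask = np.zeros((7, 7), dtype=bool)
mask[1:6, 1:6] = True                 # 5x5 masked region
eroded = binary_erode(mask, 3)        # 3x3 kernel, as for non-sky classes
```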
\\
\textbf{Refinement}
We applied an argsoftmax function for keypoint refinement with a radius of 2.
\\
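A plausible implementation of the argsoftmax refinement is a softmax-weighted centroid of the detector score map in a radius-2 window around the integer keypoint; the temperature parameter below is our assumption:

```python
import numpy as np

def argsoftmax_refine(score_map, kpt, radius=2, temp=1.0):
    """Sub-pixel refinement: softmax-weighted centroid of the score map
    in a (2*radius+1)^2 window around the integer keypoint (y, x)."""
    y, x = kpt
    patch = score_map[y - radius:y + radius + 1, x - radius:x + radius + 1]
    w = np.exp(patch / temp)
    w /= w.sum()
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    return y + (w * ys).sum(), x + (w * xs).sum()

score = np.zeros((9, 9))
score[4, 5] = 5.0                        # peak offset by one pixel in x
ry, rx = argsoftmax_refine(score, (4, 4), radius=2)
```

The refined coordinate is pulled toward the true peak while staying robust to noise in the surrounding scores.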
\section{Matching}
We trained SuperGlue\cite{sarlin20superglue} together with its official feature extractor SuperPoint\cite{superpoint_paper} end-to-end on the MegaDepth dataset with the IMW2021 competition test set removed. More specifically, we split the original SuperPoint\cite{superpoint_paper} into two networks: the first was fixed with the official weights to extract keypoints from the image, while the other was fine-tuned to provide descriptors. However, we found that this adjustment improved model performance only slightly, since SuperGlue\cite{sarlin20superglue} can already match the given points well.
Furthermore, we took advantage of the recent DISK\cite{tyszkiewicz2020disk} feature, which is built upon a simple CNN backbone, instead of SuperPoint\cite{superpoint_paper}. For the SuperGlue\cite{sarlin20superglue} matcher compatible with DISK\cite{tyszkiewicz2020disk}, we trained only the matcher part, directly using the official DISK\cite{tyszkiewicz2020disk} weights without any fine-tuning. We used DISK\cite{tyszkiewicz2020disk} to extract $2,048$ points per image in the training phase and $8,000$ points in the testing phase. As shown in Table~\ref{table:1 superglue}, this improved the AUC significantly. The evaluation followed the methodology of the SuperGlue\cite{sarlin20superglue} paper.
We trained both an indoor and an outdoor version of the SuperGlue matcher compatible with either SuperPoint or DISK, but the indoor one did not improve performance; thus we stuck with the outdoor weights for this competition.
\begin{table}[ht!]
\resizebox{\columnwidth}{!}{%
\begin{tabular}{|l|c|c|c|}
\hline
& \multicolumn{3}{c|}{Exact AUC} \\ \hline
& 5° & 10° & 20° \\ \hline
SuperPoint + SuperGlue (Official) & 38.72 & 59.13 & 75.81 \\ \hline
SuperPoint + SuperGlue (Our trained, outdoor) & 38.88 & 59.27 & 75.71 \\ \hline
DISK + SuperGlue (Outdoor) & 42.72 & 62.54 & 97.68 \\ \hline
\end{tabular}}
\caption{SuperGlue Weights Evaluation}
\label{table:1 superglue}
\end{table}
\textbf{Guided Pyramid Matching} Only when the number of matches found by SuperGlue was less than 100 did we apply SuperGlue matching to the pyramid extraction results, i.e.\ multiple scales and/or multiple orientations. We would then either combine the matches across scales (\textit{ALL}) or trust the view with the largest number of matches (\textit{MAX}).
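The \textit{ALL}/\textit{MAX} combination rules can be sketched as follows; matches are (query\_idx, train\_idx) pairs, and the first-wins de-duplication rule in \textit{ALL} is our assumption:

```python
def combine_pyramid_matches(per_view_matches, mode="MAX"):
    """MAX: keep the view with the most matches; ALL: union over views,
    de-duplicated on the query keypoint index (first occurrence wins)."""
    if mode == "MAX":
        return max(per_view_matches, key=len)
    seen, out = set(), []
    for matches in per_view_matches:
        for q, t in matches:
            if q not in seen:
                seen.add(q)
                out.append((q, t))
    return out

views = [[(0, 1), (1, 2)], [(0, 3), (2, 4), (3, 5)]]
best = combine_pyramid_matches(views, "MAX")    # the 3-match view
merged = combine_pyramid_matches(views, "ALL")  # union, q=0 kept once
```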
\section{Outlier Rejection \& Adaptive F/H}
Based on our experiments, DegenSAC\cite{Mishkin2015MODS} outperformed other outlier rejection methods in most cases, so we applied it to find the fundamental matrix and, under certain circumstances, the homography simultaneously.
Inspired by ORB-SLAM~\cite{ORBSLAM3_2020}, a different transformation matrix should be selected for different scenes and then decomposed to recover the pose. Although this process is fixed in the official backend, considering different transformation matrices still helps us filter matching outliers. In the GoogleUrban dataset, for example, many correct matches lie on the flat buildings along the street, but many wrong matches lie on the ground or on the road's median strip. Because of the weak constraint of the F matrix, some wrong matches can still pass the filter (their distance to the epipolar line is less than the threshold), whereas if the H matrix is selected, only the correct matches on the street-side flat buildings are retained. Even if the pose is later decomposed from the F matrix in the official pipeline, the accuracy can still be improved, since some wrong matches are removed by the adaptive F/H strategy. For the implementation, we follow ORB-SLAM~\cite{ORBSLAM3_2020}: we compute both the F matrix and the H matrix, together with their scores SF and SH, which count the correspondences whose symmetric transfer error is below a threshold. After obtaining SF and SH, we compute RH = SH/(SH + SF) and select the H matrix if RH is greater than 0.45; otherwise, we select the F matrix. We call this the adaptive F/H policy. In the Pragueparks dataset there are few such cases, so applying adaptive F/H brought no obvious performance improvement. In the Phototourism dataset, most of the non-planar matches are correct, so applying adaptive F/H removed correct non-planar matches and thus worsened the accuracy.
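The selection rule can be sketched as follows, assuming the two scores have already been computed; the function and parameter names are our own illustration, not the official code.

```python
def select_model(score_h, score_f, rh_threshold=0.45):
    """Adaptive F/H policy: decide which model's inliers to trust.

    score_h, score_f: scores SH and SF of the homography H and the
    fundamental matrix F (counts of correspondences whose symmetric
    transfer error is below a threshold).
    Returns "H" if RH = SH / (SH + SF) exceeds the threshold,
    otherwise "F".  (Illustrative sketch, not the official code.)
    """
    rh = score_h / (score_h + score_f)
    return "H" if rh > rh_threshold else "F"
```

For example, with SH = 90 and SF = 10 we get RH = 0.9, so the homography is selected.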
\section{Conclusion}
This report presented the Megvii-3D team's strategies for the CVPR 2021 IMW competition, for both the unlimited-keypoints category and the restricted one. \\
\textbf{Limitations of our method}
From our analysis of the corner cases, we realized that a simple feature-matcher-filter solution cannot handle all conditions without multi-scale and/or multi-orientation matching augmentation. We observed image pairs with a scale difference of more than a factor of 3, more than 45 degrees of rotation, or a large perspective transformation; a single-level solution cannot work them out by any means.
While those cases are rare, they are still possible in real-life applications: for example, when you are using an AR map guide and make a sharp turn-around, or when you suddenly fall down, the visual localizer is likely to fail. However, we argue that those extreme cases could be handled by compensation from other types of data or sensors, such as an IMU, GPS, or a QR-code positioning system. On the other hand, the performance of current feature matchers might already be saturated, since the convolutional neural network block has its own limitations: a limited receptive field, difficulty in breaking the spatial relationships between pixels, an inability to model shape deformations (though we have DCNs now), etc.
Although many excellent researchers will continue to improve the accuracy, robustness, and generalization of feature extraction and matching, it is doubtful whether stereo matching plays a dominant role in optimizing the performance of the whole visual localization system.
\section*{Appendix: Details about each Submission}
\input{table}
\end{landscape}
\section{Method and Technical Details}
We evaluated the following configurations:
\begin{itemize}
\item SuperPoint 2k + SuperGlue
\item SuperPoint 4k + SuperGlue
\end{itemize}
\section{Dataset and Pre-trained Models}
We used the MegaDepth dataset, with the validation and test sets removed, to train our model from scratch.
\section{Pipeline}
\begin{figure}[ht!]
\centering
\includegraphics[width=8.5cm]{pipeline.png}
\caption{Method Pipeline}
\label{fig:pipeline}
\end{figure}
\input{1_feature}
\input{2_matching}
\input{3_ransac}
\input{4_conclusion}
{\small
\bibliographystyle{IEEEtran}
\section{Introduction}
Graphs in this paper are simple and undirected.
We use $V(G)$, $E(G)$, and $F(G)$, respectively, to represent the vertices, edges, and faces of a graph $G$.
A \emph{coloring} of a graph $G$ is a function $c$ that assigns an element $c(v)$ to each vertex $v \in V(G)$.
A \emph{proper coloring} is a coloring such that $c(u) \neq c(v)$ whenever $uv \in E(G)$.
Vizing~\cite{V76} and, independently, Erd\H{o}s, Rubin, and Taylor~\cite{ERT76} introduced list coloring, a generalization of proper coloring.
A \emph{list assignment} $L$ gives each vertex $v$ a set of available colors $L(v)$.
A graph is \emph{$L$-colorable} if it has a proper coloring $c$ with $c(v) \in L(v)$ for every vertex $v$.
A graph is \emph{$k$-choosable} (or \emph{$k$-list-colorable}) if it is $L$-colorable whenever $|L(v)| \ge k$ for each $v \in V(G)$.
The \emph{choosability} (or \emph{list chromatic number}) $\chi_\ell(G)$ of a graph $G$ is the least $k$ such that $G$ is $k$-choosable; the analogue for coloring is the \emph{chromatic number} $\chi(G)$.
In the case that $L(v) = [k]$ for each $v \in V(G)$, where $[k]$ denotes the set of integers $\{1, 2, \dots, k\}$, any $L$-coloring of $G$ is also a \emph{proper $k$-coloring}.
Thus we always have $\chi_\ell(G) \ge \chi(G)$.
More recently, \Dv{} and Postle~\cite{DP18} introduced the following notion of correspondence coloring, which has since become known as DP-coloring and which generalizes choosability.
\begin{definition} Let $G$ be a simple graph with $n$ vertices and let $L$ be a list assignment for $G$.
For each $v \in V(G)$, let $L_v = \{v\} \times L(v)$.
For each edge $uv \in E(G)$, let $M_{uv}$ be a matching (possibly empty) between the sets $L_u$ and $L_v$, and let $\M_L = \{M_{uv} : uv \in E(G)\}$, called the \emph{matching assignment}.
Let $G_L$ be a graph that satisfies the following conditions:
\begin{itemize}
\item $V\left(G_L\right) = \cup_{v \in V(G)} L_v$,
\item for each $v \in V(G)$, the set $L_v$ is a clique in $G_L$,
\item if $uv \in E(G)$, then the edges between $L_u$ and $L_v$ are exactly those of $M_{uv}$, and
\item if $uv \notin E(G)$, then there are no edges between $L_u$ and $L_v$.
\end{itemize}
We say that $G$ has an $\M_L$-coloring if $G_L$ contains an independent set of size $n$.
The graph $G$ is \emph{DP-$k$-colorable} if, for any list assignment $L$ with $|L(v)| = k$ for each $v \in V(G)$, the graph is $\M_L$-colorable for every matching assignment $\M_L$.
The least $k$ such that $G$ is DP-$k$-colorable is the \emph{DP-chromatic number} of $G$, denoted $\chi_{DP}(G)$.
\end{definition}
We generally identify the elements of $L_v$ with those of $L(v)$ and refer to the elements as colors.
We will often assume without loss of generality that $L(v) = [k]$ for all $v \in V(G)$, as the existence of an independent set in $G_L$ depends only on the matching assignment $\M_L$.
Suppose $G$ is $\M_L$-colorable and $I$ is an independent set of size $n$ in $G_L$.
Then $|I \cap L_v| = 1$ and we refer to the element $i \in I \cap L_v$ as the color given to $v$.
If $L(v) = [k]$ for each $v \in V(G)$ and $M_{uv} = \{(u, i) (v, i) : i \in [k]\}$ for each $uv \in E(G)$, then an $\M_L$-coloring is exactly a proper $k$-coloring.
Additionally, DP-coloring also generalizes $k$-choosability, even with the restriction that $L(v) = [k]$ for each $v \in V(G)$.
To see this, consider a list assignment $L'$ with $|L'(v)| = k$ for all $v \in V(G)$.
For each vertex $v \in V(G)$, there exists a bijection from the elements of $L'(v)$ to $[k]$, and we simply let $M_{uv}$ be the matching between the colors of $u$ and $v$ that correspond to equal elements of $L'(u)$ and $L'(v)$.
Accounting for relabeling, an $\M_L$-coloring is equivalent to an $L'$-coloring.
Thus, any DP-$k$-colorable graph must be $k$-choosable, and so $\chi_{DP}(G) \ge \chi_\ell(G)$ for all graphs $G$.
One difficulty in the study of list coloring is that some techniques useful in solving coloring problems, such as identification of vertices, are not feasible in the list coloring setting.
DP-coloring can be used to apply these coloring techniques in some situations.
In the paper introducing DP-coloring, \Dv{} and Postle~\cite{DP18} use identification to prove that planar graphs without cycles of length 4 to 8 are 3-choosable.
However, they impose conditions on the matching assignment $\M_L$, and their proof does not give the analogous result that such graphs are DP-3-colorable.
In their paper, \Dv{} and Postle note that DP-coloring is strictly more difficult than list coloring, in the sense that $\chi_{DP}(G) > \chi_\ell(G)$ for some graphs $G$.
In particular, they showed that cycles of even length are 2-choosable, but they are not DP-2-colorable.
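As a concrete illustration (our own sketch, not part of any proof in this paper), this can be verified for $C_4$ by brute force: with identity matchings the cover graph admits an independent set of size $4$ (equivalently, a proper $2$-coloring), but with one ``twisted'' matching it does not.

```python
from itertools import product

# DP-2-coloring of the 4-cycle C_4 (vertices 0..3), lists L(v) = {0, 1}.
# Edges (0,1), (1,2), (2,3) carry identity matchings; the edge (3,0)
# may instead carry the "twisted" matching (3, i) -- (0, 1 - i).
def conflict(cu, cv, twisted):
    # True if (u, cu) and (v, cv) are joined by a matching edge in G_L.
    return (cu != cv) if twisted else (cu == cv)

def has_ml_coloring(twist_last_edge):
    edges = [(0, 1, False), (1, 2, False), (2, 3, False),
             (3, 0, twist_last_edge)]
    # An independent set of size 4 picks one color per vertex with
    # no matching edge between any two chosen cover vertices.
    return any(
        not any(conflict(c[u], c[v], t) for u, v, t in edges)
        for c in product([0, 1], repeat=4)
    )

# Identity matchings encode proper 2-coloring, so C_4 is colorable;
# one twisted matching defeats all 16 assignments: chi_DP(C_4) > 2.
```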
In addition, while Alon and Tarsi~\cite{AT92} showed that planar bipartite graphs are 3-choosable, Bernshteyn and Kostochka~\cite{BK18+} provide a bipartite planar graph $G$ with $\chi_{DP}(G) = 4$.
These differences, particularly for even cycles, result in difficulties in extending results from list-coloring to DP-coloring.
However, some proofs for list-coloring do extend to DP-coloring.
For example, \Dv{} and Postle note that Thomassen's proofs~\cite{T94,T95} that $\chi_\ell(G) \le 5$ for planar graphs and $\chi_\ell(G) \le 3$ for planar graphs with no 3-cycles or 4-cycles immediately extend to DP-coloring.
There has been considerable recent interest in extending results for choosability of planar graphs to DP-coloring.
Liu and Li~\cite{LL18+}, Sittitrai and Nakprasit~\cite{SN18+}, Kim and Yu~\cite{KY18+}, and Kim and Ozeki~\cite{KO18+} all extend results on 4-choosability of planar graphs to DP-4-coloring.
Yin and Yu~\cite{YY18+} extend results for 3-choosability to DP-3-coloring, in some cases for a larger class of graphs than the analogous choosability result.
Among other results showing that known conditions for 3-choosability also guarantee DP-3-colorability, Liu, Loeb, Yin, and Yu~\cite{LLYY18+} show that planar graphs with no $\{4, 5, 6, 9\}$-cycles or with no $\{4, 5, 7, 9\}$-cycles are DP-3-colorable.
In this paper, we extend the three previous results for 3-choosability of planar graphs stated in Theorem~\ref{thm1}.
\begin{thm}\label{thm1}
A planar graph $G$ is $3$-choosable if one of the following conditions holds:
\begin{itemize}
\item $G$ contains no $\{4,6,7,9\}$-cycles. (Wang, Lu, and Chen~\cite{WLC08})
\item $G$ contains no $\{4,6,8,9\}$-cycles. (Shen and Wang~\cite{SW07})
\item $G$ contains no $\{4,7,8,9\}$-cycles. (Wang and Shen~\cite{WS11})
\end{itemize}
\end{thm}
Our main result is the following.
\begin{thm}\label{awesome}
If $a$ and $b$ are distinct values from $\{6, 7, 8\}$, then every planar graph without cycles of lengths $\{4, a, b, 9\}$ is DP-3-colorable.
\end{thm}
Our proofs use the discharging method, which uses strong induction. We say a structure is \emph{reducible} if it cannot appear in a minimal counterexample $G$. The proofs of the results in Theorem~\ref{thm1} rely on the fact that an even cycle with all vertices of degree 3 is reducible. Such a structure is not necessarily reducible in the setting of DP-coloring. In Section~\ref{lemma}, we use the lemma about ``near-$(k-1)$-degenerate'' subgraphs from~\cite{LLYY18+} which fills a similar role in our reducible structures.
In Section~\ref{lemma}, we also provide our reducible structures and a lemma about how much charge can be given by large faces in our subsequent discharging arguments.
Section~\ref{sec:threeproofs} provides the proofs for Theorem~\ref{awesome}.
We use different initial charges from the ones in~\cite{LLYY18+}, and provide a new unified set of discharging rules for all three cases.
\section{Lemmas and a brief discussion of the discharging.}\label{lemma}
A \emph{$k$-vertex} (resp.\ \emph{$k^+$-vertex}, \emph{$k^-$-vertex}) is a vertex of degree $k$ (resp.\ at least $k$, at most $k$). The \emph{length} of a face is the number of vertices on its boundary, counted with repetition.
A face with length $k$ (resp., at least $k$, at most $k$) is a $k$-face (resp., $k^+$-face, $k^-$-face).
We may also refer to an $(\ell_1, \ell_2, \ldots, \ell_k)$-face, which is a $k$-face $f = v_1 v_2 \ldots v_k$ with facial walk $v_1, v_2, \ldots, v_k$ such that $d(v_i) = \ell_i$.
An $(\ell_1, \ell_2, \ldots, \ell_k)$-path and $(\ell_1, \ell_2)$-edge are defined similarly, and we may replace $\ell_i$ with $\ell_i^+$ to indicate $d(v_i) \ge \ell_i$.
A $3$-vertex is {\em{triangular}} if it is incident to a $3$-face.
\begin{lemma}\label{minimum}
Let $G$ be a smallest graph (with respect to the number of vertices) that is not DP-$k$-colorable. Then $\delta(G)\ge k$.
\end{lemma}
\begin{proof}
Suppose there is a vertex $v$ with $d(v) < k$.
Any $\mathcal{M}_L$-coloring of $G - v$, which exists by the minimality of $G$, can be extended to $G$, since the colors selected for the neighbors of $v$ forbid at most $d(v) < k = |L(v)|$ elements of $L_v$; this contradicts the choice of $G$.
\end{proof}
Let $H$ be a subgraph of $G$.
For each vertex $v \in V(H)$, let $A(v)$ be the set of vertices (of $G_L$) in $L_v$ that are not matched with vertices in $\bigcup_{u \in V(G - H)} L_u$.
One may think of $A(v)$ as the colors available at $v$ after coloring $G - H$.
\begin{lemma}\label{near-2-degenerate}\cite{LLYY18+}
Let $k \ge 3$ and $H$ be a subgraph of $G$. If the vertices of $H$ can be ordered as $v_1, v_2, \ldots, v_{\ell}$ such that the following hold
\begin{itemize}
\item[(1)] $v_1v_{\ell}\in E(G)$, and $|A(v_1)|>|A(v_{\ell})|\ge 1$,
\item[(2)] $d(v_{\ell})\le k$ and $v_{\ell}$ has at least one neighbor in $G-H$,
\item[(3)] for each $2\le i\le \ell-1$, $v_i$ has at most $k-1$ neighbors in $G[\{v_1, \ldots, v_{i-1}\}]\cup (G-H)$,
\end{itemize}
then a DP-$k$-coloring of $G-H$ can be extended to a DP-$k$-coloring of $G$.
\end{lemma}
For the remainder of this paper, we will let $G$ denote a minimal counterexample to Theorem~\ref{awesome}.
That is, $G$ is a planar graph with no $\{4, a, b, 9\}$-cycles, where $a, b \in \{6, 7, 8\}$ are distinct, such that $G$ is not DP-3-colorable, but any planar graph on fewer than $|V(G)|$ vertices with no $\{4, a, b, 9\}$-cycles is DP-3-colorable.
We now use Lemma~\ref{near-2-degenerate} to provide some reducible configurations we will need in Section~\ref{sec:threeproofs}.
\begin{lemma}\label{lem:reducible}
The graph $G$ does not contain any of the following subgraphs:
\end{lemma}
\begin{figure}[h]
\includegraphics[scale=0.18]{2Bad5Faces.pdf}
\includegraphics[scale=0.17]{5Face3Vertices.pdf}\\
\includegraphics[scale=0.20]{Good10FaceNoSR4Vertex.pdf}\quad\quad
\includegraphics[scale=0.18]{10Face5Face3Vertices.pdf} \quad\quad\includegraphics[scale=0.18]{10Face3Face3Vertices.pdf}
\end{figure}
\begin{proof}
Let $H$ be the subgraph of $G$ consisting of the labeled vertices, and order the vertices according to their labels.
It is straightforward to verify that all labeled vertices must be distinct, since otherwise cycles of forbidden lengths are created.
From Lemma~\ref{near-2-degenerate}, it follows that a DP-$3$-coloring of $G - H$ can be extended to a DP-$3$-coloring of $G$.
\end{proof}
We use balanced discharging and assign an initial charge of $\mu(x) = d(x) - 4$ to each $x \in V(G) \cup F(G)$.
Let $\mu^*(x)$ be the final charge after the discharging procedure.
From Euler's formula, we have
\begin{equation}\label{eq}
\sum_{x \in V(G) \cup F(G)}(d(x) - 4) = -8.
\end{equation}
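This identity can be verified directly from Euler's formula $|V(G)| - |E(G)| + |F(G)| = 2$ together with the handshake identities $\sum_{v \in V(G)} d(v) = \sum_{f \in F(G)} d(f) = 2|E(G)|$:
\begin{align*}
\sum_{x \in V(G) \cup F(G)}(d(x) - 4) &= 2|E(G)| + 2|E(G)| - 4\big(|V(G)| + |F(G)|\big) \\
&= -4\big(|V(G)| - |E(G)| + |F(G)|\big) = -8.
\end{align*}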
We will move charge around and argue in Section~\ref{sec:threeproofs} that each vertex and face ends with non-negative final charge.
This contradiction to (\ref{eq}) will prove our conclusion.
\section{Proof of Theorem~\ref{awesome}} \label{sec:threeproofs}
We say a $10^+$-face $f$ is \emph{good} to a $10$-face $f'$ (and $f'$ is \emph{poor} to $f$) if $f'$ is incident to ten $3$-vertices, and $f$ and $f'$ share a $(3, 3)$-edge such that each end of the shared edge is an end of a $(3, 4^+, 3^+, \dots)$-path of $f$, where the second vertex of the path is incident to either a $(3, 3, 4^+)$-face or a $(3, 3, 3, 3, 4^+)$-face adjacent to both $f$ and $f'$.
A $4^+$-vertex $v$ is {\em{poor}}, {\em{semi-rich}}, or {\em{rich}} to a $10^+$-face $f$ if $v$ is incident to two $3$-faces, one $3$-face, or no $3$-faces adjacent to $f$, respectively.
Moreover, we call a semi-rich $4^+$-vertex $v$ {\em{special}} if $v$ is on a $10^+$-face $f$ such that $v$ is rich to $f$, and we call a $5$-face {\em{bad}} if it is incident to five $3$-vertices and adjacent to two $5$-faces.
The following are our discharging rules:
\begin{enumerate}[(R1)]
\item Each $3$-face gets $\frac{1}{3}$ from each adjacent $5^+$-face, and each $5^+$-face gets $\frac{1}{5}$ from each incident $5^+$-vertex.
\item Each $10^+$-face gets $\frac{1}{6}$ from each incident special semi-rich $4^+$-vertex, and gives $\frac{1}{3}$ to each incident rich $4$-vertex which is on a $3$-face.
\item If $f$ is good to $f'$, then $f$ gives $\frac{1}{6}$ to $f'$.
\item[(R4a)] If $G$ contains no $\{4, 7, 8, 9\}$-cycles, then: each $5$-face that shares a $(3,3)$-edge with a $3$-face gets $\frac{1}{3}$ from each adjacent $10^+$-face; each $5$-face that shares a $(3,4^+)$-edge with a $3$-face gets $\frac{1}{3}$ from the $10^+$-face incident to the $3$-vertex of the $(3,4^+)$-edge; and each $5$-face sends $1$ to each incident triangular $3$-vertex and $\frac{1}{3}$ to each adjacent $3$-face, then distributes its remaining charge evenly among its adjacent $10$-faces. Each $3$-vertex gets its remaining needed charge evenly from its incident $6^+$-faces.
\item[(R4b)] If $G$ contains no $\{4, 6, a, 9\}$-cycles for $a \in \{7, 8\}$, then each 3-vertex gets $1$ evenly from its incident $5^+$-faces, each $5$-face gets $\frac{1}{6}$ from each adjacent $7^+$-face, and each bad $5$-face gets an additional $\frac{1}{12}$ from each adjacent $5$-face.
Following this, if a $5$-face has positive charge, then it distributes its surplus charge evenly among its adjacent $5$-faces.
\end{enumerate}
\begin{lemma}
Each vertex and each $8^-$-face has non-negative final charge.
\end{lemma}
\begin{proof}
Note that if a graph $G$ contains no $\{4, 7, 8, 9\}$-cycles, then each $3$-vertex must be incident to at least one $6^+$-face.
By (R4a) and (R4b), the final charge of each $3$-vertex is $0$.
Note that by (R2), a $4$-vertex $v$ sends out charge only if it is special, in which case $v$ is rich to a $10^+$-face and semi-rich to at most two $10^+$-faces.
So if $d(v) = 4$, then $\mu^*(v) \ge (4 - 4) - \frac{1}{6} \cdot 2 + \frac{1}{3} = 0$.
Now let $v$ be a $5^+$-vertex.
Then by (R1) and (R2), $v$ sends out at most $\frac{1}{5}$ to each of its incident faces, so $\mu^*(v) \ge (d(v) - 4) - \frac{1}{5}d(v) \ge 0$.
Hence all vertices end with non-negative charge.
Let $f$ be an $8^-$-face in $G$.
If $d(f) = 3$, then $f$ is adjacent to three $5^+$-faces.
So by (R1), $\mu^*(f) \ge 0$.
For the remaining $8^-$-faces, we consider the consequences of rules (R4a) and (R4b) as separate cases.
{\bf Case 1: $G$ has no $\{4,7,8,9\}$-cycles.}
Then the only $8^-$-faces of $G$ left to consider are $5$- and $6$-faces.
Note that a $5$-face cannot be adjacent to a $6$-face.
If $d(f) = 6$, then $f$ is adjacent to no $3$-face and by (R4a) sends at most $\frac{1}{3}$ to each incident vertex.
So $\mu^*(f) \ge (6 - 4) - \frac{1}{3} \cdot 6 = 0$.
If $d(f) = 5$, then $f$ is adjacent to at most one $3$-face and at least four $10^+$-faces.
If $f$ is not adjacent to any $3$-face, then it only needs to send $1$ evenly to its adjacent 10-faces by (R4a), so $\mu^*(f) \ge 0$.
If $f$ shares a $(3,3)$-edge with a $3$-face, then by (R1) and (R4a), $f$ sends $\frac{1}{3}$ to its adjacent $3$-face and $1$ to each incident triangular $3$-vertex, and $f$ gets $\frac{1}{3}$ from each adjacent $10^+$-face.
So $\mu^*(f) \ge (5 - 4) - \frac{1}{3} - 1 \cdot 2 + \frac{1}{3} \cdot 4 = 0$.
If $f$ shares a $(3,4^+)$-edge with a $3$-face, then by (R1) and (R4a), $f$ sends $\frac{1}{3}$ to its adjacent $3$-face and $1$ to its incident triangular $3$-vertex, and $f$ gets $\frac{1}{3}$ from one $10^+$-face.
So $\mu^*(f) \ge (5 - 4) - \frac{1}{3} - 1 + \frac{1}{3} = 0$.
If $f$ shares a $(4^+, 4^+)$-edge with a $3$-face, then by (R1) and (R4a), $f$ sends $\frac{1}{3}$ to its adjacent $3$-face and sends the remaining charge evenly to adjacent $10$-faces.
Thus, $\mu^*(f)\ge0$.
{\bf Case 2: $G$ has no $\{4, 6, a, 9\}$-cycles for $a \in \{7, 8\}$.}
Let $f$ be an $8$-face.
Then $G$ contains no $\{4, 6, 7, 9\}$-cycles, so $f$ is not adjacent to any 3-face.
Thus, by (R4b), $f$ gives $\frac{1}{3}$ to each incident 3-vertex and $\frac{1}{6}$ to each adjacent 5-face.
So $\mu^*(f) \ge (8 - 4) - \frac{1}{3} \cdot 8 - \frac{1}{6} \cdot 8 = 0$.
Let $f$ be a $7$-face.
Then $G$ contains no $\{4, 6, 8, 9\}$-cycles, so $f$ is again not adjacent to any 3-face.
By (R1) and (R4b), $f$ gives $\frac{1}{3}$ to each incident 3-vertex and $\frac{1}{6}$ to each adjacent 5-face. Note that no two $5$-faces are adjacent.
So $\mu^*(f) \ge (7 - 4) - \max\big\{\frac{1}{3} \cdot 7 + \frac{1}{6} \cdot 3, \frac{1}{3} \cdot 6 + \frac{1}{6} \cdot 4, \frac{1}{3} \cdot 5 + \frac{1}{6} \cdot 7 \big\} > 0$.
It remains to consider $5$-faces.
So let $f$ be a $5$-face.
Let $r_5(f)$ and $s_3(f)$ be the number of adjacent 5-faces and incident 3-vertices of $f$, respectively.
Since $G$ contains no $6$-cycles, $f$ cannot be adjacent to a $3$-face.
If $f$ is a bad $5$-face, then $s_3(f) = 5$ and $r_5(f) = 2$.
By Lemma~\ref{lem:reducible}$(i)$, $f$ is not adjacent to another bad $5$-face.
Then by (R4b), $f$ gives $\frac{1}{3}$ to each incident $3$-vertex, and $f$ gets $\frac{1}{6}$ from each adjacent $7^+$-face and $\frac{1}{12}$ from each adjacent $5$-face.
So $\mu^*(f) \ge (5 - 4) - \frac{1}{3} \cdot 5 + \frac{1}{12} \cdot 2 + \frac{1}{6} \cdot 3 = 0$.
Thus we may assume that $f$ is not bad.
By (R4b),
\[ \mu^*(f) \ge (5 - 4) - \frac{1}{3} s_3(f) + \frac{1}{6} (5 - r_5(f)) - \frac{1}{12} b_5(f) = \frac{1}{6} \left(11 - 2s_3(f) - r_5(f) - \frac{1}{2}b_5(f)\right), \]
where $b_5(f)$ is the number of bad $5$-faces adjacent to $f$.
Clearly, $b_5(f)\le r_5(f)$.
Note that no $3$-vertex can be incident to three 5-faces since $G$ contains no $9$-cycles.
If $s_3(f) \le 2$, then $r_5(f) \le 5$ and $b_5(f) \le 1$.
So $\mu^*(f) \ge \frac{1}{6} \left(11 - 2 \cdot 2 - 5 - \frac{1}{2}\right) > 0$.
If $s_3(f) = 3$, then $r_5(f) \le 3$ and $b_5(f) \le 1$.
Thus $\mu^*(f) \ge \frac{1}{6} \left(11 - 2 \cdot 3 - 3 - \frac{1}{2}\right) > 0$.
If $s_3(f) = 5$, then $r_5(f) \le 1$ since $f$ is not bad.
By Lemma~\ref{lem:reducible}$(i)$, $b_5(f) = 0$, so $\mu^*(f) \ge \frac{1}{6}(11 - 2 \cdot 5 - 1) = 0$.
Lastly, if $s_3(f) = 4$, then $r_5(f) \le 3$.
So $\mu^*(f) \ge \frac{1}{6}\left(11 - 2 \cdot 4 - r_5(f) - \frac{1}{2}b_5(f)\right)$, and $\mu^*(f) < 0$ only if $r_5(f) = 3$ and $b_5(f) = 1$, in which case $\mu^*(f) \ge -\frac{1}{12}$.
Let $v$ be the $4^+$-vertex incident to $f$.
If $d(v) \ge 5$, then by (R1), $f$ gets $\frac{1}{5} > \frac{1}{12}$ from $v$ and ends with non-negative charge.
If $d(v) = 4$, then by Lemma~\ref{lem:reducible}$(ii)$, at least one of the $5$-faces adjacent to $f$ and incident to $v$, say $f'$, has at least two $4^+$-vertices.
Note that $r_5(f') \le 4$.
By (R4b), $f'$ can send
\begin{align*}
&\frac{1}{r_5(f')} \cdot \frac{11 - 2s_3(f') - r_5(f') - \frac{1}{2} b_5(f')}{6} \\
&\ge \max\left\{\frac{1}{4} \cdot \frac{11 - 2 \cdot 2 - 4}{6},\ \frac{1}{3} \cdot \frac{11 - 2 \cdot 3 - 3 - \frac{1}{2}}{6},\ \frac{1}{2} \cdot \frac{11 - 2 \cdot 3 - 2 - \frac{1}{2}}{6},\ \frac{11 - 2 \cdot 3 - 1}{6}\right\}\\
&> \frac{1}{12}
\end{align*}
to each of its adjacent $5$-faces, which includes $f$.
Hence $\mu^*(f) \ge 0$.
\end{proof}
We now only need to verify that $10^+$-faces end with non-negative charge.
Let $f$ be a $10^+$-face.
Let $P$ be a maximal path (or possibly a cycle) along $f$ such that every edge of $P$ is adjacent to a $5^-$-face.
Let $\mathcal{P}$ be a collection of all such paths $P$ along $f$.
By construction, the paths of $\mathcal{P}$ are disjoint.
Let $t_i$ denote the number of paths of $\mathcal{P}$ with $i$ vertices for $i \ge 2$, and let $t_1$ denote the number of vertices incident to $f$ not contained in any path $P \in \mathcal{P}$.
Then $\sum_{i \ge 1} i \cdot t_i = d(f)$.
We will use the following two lemmas to simplify our final analysis.
\begin{lemma}\label{12+faces}
A $10^+$-face $f$ can afford to give out at least \begin{equation}\label{eq-general}
\frac{1}{3} \sum_{i \ge 1} i \cdot t_i + \frac{2}{3} \sum_{i \ge 2} t_i + \frac{1}{3}t_1 + \frac{1}{3} \sum_{i \ge 3} (i - 2)t_i - \frac{x}{3},
\end{equation}
where $x = 0$ if $d(f) \ge 12$, $x = 1$ if $d(f) = 11$, and $x = 2$ if $d(f) = 10$.
\end{lemma}
\begin{proof}
From $\sum_{i \ge 1} i \cdot t_i = t_1 + 2 \sum_{i \ge 2} t_i + \sum_{i \ge 3} (i - 2)t_i$, we have
\[ 2 \sum_{i \ge 2} t_i = \sum_{i \ge 1} i \cdot t_i - t_1 - \sum_{i \ge 3}(i - 2)t_i. \]
Therefore,
\begin{align*}
&\frac{1}{3} \sum_{i \ge 1} i \cdot t_i + \frac{2}{3} \sum_{i \ge 2} t_i + \frac{1}{3} t_1 + \frac{1}{3} \sum_{i \ge 3} (i - 2)t_i - \frac{x}{3} \\
&= \frac{1}{3} \sum_{i \ge 1} i \cdot t_i + \frac{1}{3} \left(\sum_{i \ge 1} i \cdot t_i - t_1 - \sum_{i \ge 3}(i - 2)t_i\right) + \frac{1}{3} t_1 + \frac{1}{3} \sum_{i \ge 3} (i - 2)t_i - \frac{x}{3} \\
&= \frac{2}{3} \sum_{i \ge 1} i \cdot t_i - \frac{x}{3} \\
&= \frac{2}{3}d(f) - \frac{x}{3}.
\end{align*}
So $\mu^*(f) \ge (d(f) - 4) - \frac{2}{3}d(f) + \frac{x}{3} = \frac{1}{3}(d(f) - 12 + x) \ge 0$ when $d(f) \ge 10$.
\end{proof}
\begin{lemma}\label{lem:eq-10+}
Each $10^+$-face $f$ needs to send out at most
\begin{equation}\label{eq-10+}
\frac{1}{3} \sum_{i \ge 1} i \cdot t_i + \frac{2}{3} \sum_{i \ge 2} t_i + \frac{1}{6} \sum_{i \ge 6}(i - 5)t_i + \frac{1}{6} \sum_{i \ge 3}t_i.
\end{equation}
\end{lemma}
\begin{proof}
Note that $f$ gives at most $\frac{1}{3}$ to each vertex not on any path of $\mathcal{P}$ by (R2), (R4a) or (R4b).
Now we show that $f$ needs to send out at most $\sum_{i \ge 2} \frac{1}{3}(i + 2)t_i + \frac{1}{6} \sum_{i \ge 6} (i - 5)t_i$ to the $5^-$-faces and vertices along the paths of $\mathcal{P}$.
In the case of forbidding $\{4, a, 8, 9\}$-cycles with $a \in \{6, 7\}$, $f$ gives at most $\frac{1}{2}$ to each endpoint and $\frac{1}{3}$ to each adjacent $5^-$-face along an $i$-path.
So $f$ gives $\frac{1}{2} \cdot 2 + \frac{1}{3}(i - 1) = \frac{1}{3}(i + 2)$ to each $i$-path.
In the case of forbidding $\{4, 6, 7, 9\}$-cycles, $f$ may instead need to give at most $\frac{1}{3}$ to each vertex and $\frac{1}{6}$ to each adjacent $5$-face along an $i$-path when all vertices of the path are $3$-vertices and all adjacent faces are $5$-faces.
Then $f$ gives $\frac{1}{3} i + \frac{1}{6}(i - 1) = \frac{1}{2} i - \frac{1}{6}$ to each $i$-path of this form.
In addition, $\frac{1}{2} i - \frac{1}{6} > \frac{1}{3}(i + 2)$ only if $i \ge 6$.
So in any case, $f$ needs to send to each $i$-path at most $\sum_{i \ge 2} \frac{1}{3}(i + 2)t_i + \sum_{i \ge 6} \frac{1}{6}(i - 5)t_i$.
By (R3), $f$ may need to send out an additional $\frac{1}{6} \sum_{i \ge 3}t_i$ to poor $10$-faces.
Note that if $f$ sends charge over a $2$-path to a poor face, then the $2$-path has a $4^+$-vertex as an endpoint, and $f$ sends at most $\frac{1}{2} + \frac{1}{3} + \frac{1}{6} \le \frac{1}{3}(2 + 2)$ across this path, so the $\frac{1}{6}$ sent to the poor face is already accounted for in the above formula.
Therefore $f$ sends at most
\[ \frac{1}{3} t_1 + \sum_{i \ge 2} \frac{1}{3}(i + 2)t_i + \sum_{i \ge 6} \frac{1}{6}(i - 5)t_i + \frac{1}{6} \sum_{i \ge 3} t_i \]
to its incident vertices and adjacent faces, from which ~(\ref{eq-10+}) follows.
\end{proof}
Assume that $\mu^*(f) < 0$.
Let $s(f)$ be the number of semi-rich $4$-vertices and $5^+$-vertices on $f$.
Note that each semi-rich $4$-vertex on $f$ saves at least $\frac{1}{3} - \frac{1}{6} = \frac{1}{6}$ and by (R2) each $5^+$-vertex on $f$ gives $\frac{1}{5}$ to $f$.
Then by Lemmas~\ref{12+faces} and~\ref{lem:eq-10+},
\[ \frac{1}{6}\left(2t_1 + 2\sum_{i \ge 3} (i - 2)t_i - 2x\right) < \frac{1}{6} \left(\sum_{i \ge 6} (i - 5)t_i + \sum_{i \ge 3} t_i - s(f)\right), \]
which implies that
\[ 2x > s(f) + 2t_1 + \sum_{i \ge 3} (2i - 5)t_i - \sum_{i \ge 6} (i - 5)t_i \ge s(f) + 2t_1 + t_3 + 3t_4 + 5 \sum_{i \ge 5} t_i. \]
Clearly, $x > 0$, so $d(f) \le 11$.
Recall that $f$ gives at most $\frac{1}{3}(i + 2)$ across any $i$-path with $i \le 5$, and note that this is only possible when the ends of the path are $3$-vertices requiring $\frac{1}{2}$ from $f$, and each face adjacent to $f$ along the path is a $5^-$-face requiring $\frac{1}{3}$ from $f$.
Let $d(f) = 11$.
Then $x = 1$.
So $t_1 = 0$, $t_3 \le 1$ and $t_i = 0$ for $i \ge 4$.
By parity, $t_3 = 1$ and $t_2 = 4$, so $s(f) = 0$.
In this case, $f$ is not good to any $10$-face, so we have $\mu^*(f) \ge (11 - 4) - \frac{5}{3} - 4 \cdot \frac{4}{3} = 0$.
Let $d(f) = 10$.
Then $x = 2$, and $t_1 \le 1$ and $t_i = 0$ for $i \ge 5$.
Let $t_1 = 1$.
Then $t_3 \le 1$ and $t_4 = 0$.
By parity, $t_3 = 1$, and $t_2 = 3$.
It follows that $s(f) = 0$.
In this case, $f$ is not good to any $10$-face, so $\mu^*(f) \ge (10 - 4) - \frac{5}{3} - 3 \cdot \frac{4}{3} - \frac{1}{3} = 0$.
Thus we may assume $t_1 = 0$.
Then $s(f) + t_3 + 3t_4 \le 3$.
By parity, we have three primary cases: $t_4 = 1$ and $t_2 = 3$, or $t_3 = t_2 = 2$, or $t_2 = 5$.
In the first case, $s(f) = 0$, so $f$ is not good to any $10$-face, and $\mu^*(f) \ge 6 - 2 - 3 \cdot \frac{4}{3} = 0$.
In the second case, $s(f) \le 1$, and $f$ is good to at most two $10$-faces.
If $s(f) = 0$, then by Lemma \ref{lem:reducible}$(iii)$, $f$ cannot be good to a $10$-face, so $\mu^*(f) \ge 6 - 2 \cdot \frac{5}{3} - 2 \cdot \frac{4}{3} = 0$.
So let $s(f) = 1$.
If $f$ is good to at most one $10$-face, then $\mu^*(f) \ge 6 - 2 \cdot \frac{5}{3} - 2 \cdot \frac{4}{3} - \frac{1}{6} + \frac{1}{6} = 0$, where the final $\frac{1}{6}$ is the minimum $f$ saves or receives from the vertex counted by $s(f)$.
If $f$ is good to two $10$-faces, then the $4^+$-vertex counted by $s(f)$ must be the end of a $2$-path, so $f$ saves at least $\frac{1}{3}$ from this vertex and $\mu^*(f) \ge 6 - 2 \cdot \frac{5}{3} - 2 \cdot \frac{4}{3} - 2 \cdot \frac{1}{6} + \frac{1}{3} = 0$.
In the last case, $\mu^*(f) \ge 6 - \frac{4}{3} \cdot 5 = -\frac{2}{3}$.
We first assume that $G$ contains no $\{4, 7, 8, 9\}$-cycles.
Note that by (R4a), $f$ gives no charge to adjacent special $5$-faces, where a $5$-face is special if it does not share a $(3,3)$-edge with a $3$-face.
Thus we may assume that $f$ is adjacent to at most one special $5$-face, for otherwise, $f$ saves at least $\frac{2}{3}$ and ends with non-negative charge.
If $f$ is incident to at least two $4^+$-vertices, then $f$ saves at least $2\left(\frac{1}{2} - \frac{1}{6}\right) = \frac{2}{3}$, where the $\frac{1}{6}$ is because $f$ may now be good to adjacent $10$-faces.
If $f$ is incident to one $4^+$-vertex, then $f$ saves at least $\frac{1}{2} + \frac{1}{6} = \frac{2}{3}$ since the $4^+$-vertex must be rich to a $10^+$-face adjacent to $f$, and $f$ cannot be good to any $10$-face.
We may thus assume that $f$ is incident to ten $3$-vertices.
By Lemma~\ref{lem:reducible}$(iv)$ and $(v)$, each $5^-$-face adjacent to $f$ must contain a $4^+$-vertex.
This implies that all $3$-faces adjacent to $f$ are $(3,3,4^+)$-faces, and each adjacent $5$-face other than at most one special $5$-face contains a $4^+$-vertex and shares a $(3,3)$-edge with a $3$-face.
It follows that $f$ is poor to at least three $10^+$-faces if $f$ contains a special $5$-face, and is poor to five $10^+$-faces if $f$ contains no special $5$-faces.
Therefore, by (R2) and (R4a), $f$ receives $\min\left\{\frac{1}{3} + \frac{1}{6} \cdot 3, \frac{1}{6} \cdot 5\right\} > \frac{2}{3}$ from adjacent $10^+$-faces it is poor to, so $f$ ends with non-negative charge.
Now we assume that $G$ contains no $\{4, 6, a, 9\}$-cycles for $a \in \{7, 8\}$.
If $f$ is adjacent to a $5$-face, then $f$ gives at most $\frac{1}{3} \cdot 2 + \frac{1}{6} = \frac{5}{6}$ across this $2$-path.
Thus we may assume that $f$ is adjacent to at most one $5$-face, for otherwise, $f$ saves $2\left(\frac{4}{3} - \frac{5}{6}\right) > \frac{2}{3}$ and ends with non-negative charge.
If $f$ is adjacent to exactly one $5$-face and at least one $4^+$-vertex, then $f$ saves at least $\left(\frac{4}{3} - \frac{5}{6}\right) + \left(\frac{1}{3} - \frac{1}{6}\right) = \frac{2}{3}$.
If $f$ is not adjacent to any $5$-faces, then $f$ saves at least $2\left(\frac{1}{2} - \frac{1}{6}\right) = \frac{2}{3}$ if $f$ is incident to at least two $4^+$-vertices, and at least $\frac{1}{2} + \frac{1}{6} = \frac{2}{3}$ if $f$ is incident to exactly one $4^+$-vertex, where in the latter case the $4^+$-vertex must be a special semi-rich $4$-vertex which gives $\frac{1}{6}$ to $f$ by (R2).
We may therefore assume that all vertices incident to $f$ are $3$-vertices.
By Lemma~\ref{lem:reducible}$(v)$, all $3$-faces adjacent to $f$ are $(3,3,4^+)$-faces.
Thus $f$ is poor to five $10^+$-faces if $f$ is not adjacent to a $5$-face, and $f$ is poor to three $10^+$-faces otherwise.
Therefore $f$ gets $\frac{1}{6}$ from each $10^+$-face good to $f$ and saves $\frac{4}{3} - \frac{5}{6}$ if it is adjacent to a $5$-face, for a total of at least $\min\left\{\frac{1}{6} \cdot 5, \left(\frac{4}{3} - \frac{5}{6}\right) + \frac{1}{6} \cdot 3 \right\} > \frac{2}{3}$.
Therefore in all cases, $f$ ends with non-negative charge.
\section{Final remarks}
We remark that we are as yet unable to prove that planar graphs without $\{4, 5, 8, 9\}$-cycles are DP-3-colorable. While some of our lemmas may be useful in such a proof (in particular, Lemmas~\ref{near-2-degenerate} and~\ref{12+faces}), this case is considerably more difficult than any of the three cases of Theorem~\ref{awesome}, and a unified proof of all four cases does not seem possible. We also remark that \Dv{} and Postle showed that planar graphs without cycles of lengths $4$ to $8$ are ``weakly'' DP-3-colorable; whether such graphs are DP-$3$-colorable remains open.
\section{Introduction}
CTAs ({\em Commodity Trading Advisors}) or managed-futures accounts are a subset of asset managers with over \$341bn of assets under management \cite{BarclayHedge} as of Q2 2017. The predominant strategy which CTAs employ is trend-following. Meanwhile, bank structuring desks have devised a variety of {\em risk-premia} or {\em styles} strategies (including momentum, mean-reversion, carry, value, etc.), estimated to correspond to between approximately \$150bn \cite{Miller} and \$200bn \cite{Allenbridge} of assets under management. High-frequency trading firms (HFTs) and e-trading desks in investment banks, responsible for over 80\% of trade volume in equities and a large (but, owing to its OTC nature, undocumented) share of the FX market \cite{HFT}, are known to make use of many strategies which are effectively short-term mean-reversion strategies. In spite of this relatively large and recently growing industry, a careful analysis of the statistical properties of these strategies, including their optimisation, has only been undertaken in relatively limited contexts.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{histogram_timeseries_SocGene.png}
\caption{{\bf SocGen Trend Followers Index:} daily and monthly return profiles}
\end{figure}
The corresponding statistics for the SG Trend index are given in the table below and, noise aside, show that skewness and excess kurtosis are largely positive for CTAs.
\begin{table}[h!]
\centering
\caption{\bf Soc Gen Trend Index, Daily and Monthly Statistics}
\begin{tabular}{l|cc}
$\;$ & \textbf{Daily} & \textbf{Monthly}\\
\hline
Ann Avg Return (\%) & 5.695 & 5.752 \\
Volatility (\%)& 13.283&14.088\\
Sharpe Ratio& 0.429 & 0.408\\
Skewness & -0.448& 0.186\\
Exc Kurtosis & 3.845 & 0.807\\
\end{tabular}
\label{tab:table1}
\end{table}
The algorithmic trading strategies we consider are time-series strategies, often divided into mean-reverting or reversal strategies, trend-following or momentum strategies, and value strategies (also sometimes known as mean-reversion).\footnote[1]{Other common strategies include carry and short-gamma or short-vol. Unlike mean-reversion, momentum, and value, these do not rely on the specifics of the auto-correlation function.} Each such time-series strategy is a form of signal processing.
In more standard signal processing, the major interest is in the de-noised or smoothed signals and their properties. In algorithmic trading, the interest is instead in the relationship between statistics like the moving average or some other form of smoothed historic returns (unfortunately, usually termed the {\sl signal}) and the unknown future returns. We show that when we consider both to be random variables, it is actually the interaction between these so-called signals and future returns which determines the strategy's behaviour.
Equities, and in particular the S\&P 500 (SPX), are known to mean-revert over short horizons (e.g., shorter than 1m, typically on the order of 5-10 days), to trend over longer horizons (i.e., 3m-18m), and to mean-revert again over even longer horizons (i.e., 2y-5y), as has been well established in the quantitative equities literature following the study of \cite{Jegadeesh} and the work of \cite{FamaFrench}. This distinct behaviour, with reversal on a short scale, trend on an intermediate scale and reversion on a long scale, is frequently observed across a large number of asset classes, and strategies can be designed to take advantage of the behaviour of asset prices on each time-scale.
Our initial goal is to find a signal, $X_t$ usually a linear function of historic log-(excess) returns $\{R_t\}$ which can be used as a dynamic weight for allocating to the underlying asset on a regular basis. We assume log-price $P_t=\sum_1^t R_{k}$. Examples of commonly used signals for macro-traders (CTAs, and other trend followers) include:
\begin{itemize}
\item Simple Moving Average (SMA): $$X_t=\frac{1}{T}\sum_1^T R_{t-k}$$
\item Exponentially-Weighted Moving Average (EWMA): $$X_t=c(\lambda)\sum_{k=1}^\infty \lambda^k R_{t-k}$$
\item Holt-Winters (HW, or double exponential smoothing) with or without seasonals, Damped HW
\item Difference between current price and moving average:\footnote[2]{We note that if we replace $P$ by $\log(P)$ and $R_t=\log(P_t)-\log(P_{t-1})$, this filter amounts to $X_t=\sum \frac{T-k}{T} R_{t-k}$, i.e., a {\em triangular} filter on returns, which bears some similarity to EWMA on returns.}
$$X_t= P_{t-1} - {1\over T}\sum_1^T P_{t-k}$$
\item Forecasts from ARMA(p, q) models: $$X_t = \phi_1 R_{t-1} + ... + \phi_p R_{t-p} + \theta_1 \varepsilon_{t-1} + ... + \theta_q \varepsilon_{t-q} $$
\item Differences between SMAs: $$X_t= {1\over {T_1}}\sum_1^{T_1} P_{t-k} - {1\over {T_2}}\sum_1^{T_2} P_{t-j}$$
\item Differences between EWMAs: $$X_t=c(\lambda_1)\sum \lambda_1^k R_{t-k}-c(\lambda_2)\sum \lambda_2^k R_{t-k}$$
\end{itemize}
and variations using volatility or variance weighting such as z-scores (SMAs or EWMAs weighted by a simple or weighted standard deviation, see \cite{HarveyVol}), and transformations of each of the signals listed above (e.g. allocations depending on sigmoids of moving averages, reverse sigmoids, Winsorised signals, etc.). Other signals commonly used in equity algorithmic trading include economic and corporate releases, and sentiment as derived from unstructured datasets such as news releases.
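Several of the signals above can be implemented directly as filters on past returns. Below is a minimal sketch (Python; the function names, the truncation length $K$ and the normalisation $c(\lambda)=(1-\lambda)/\lambda$, chosen so the weights sum to approximately one, are our own illustrative choices) producing a lagged SMA signal, a lagged EWMA signal, and the one-period strategy returns $S_t = X_t R_t$:

```python
import numpy as np

def sma_signal(r, T):
    """Simple moving average of the T previous returns, lagged one period:
    x_t = (1/T) * sum_{k=1..T} r_{t-k}; entries without a full window are NaN."""
    x = np.full_like(r, np.nan, dtype=float)
    for t in range(T, len(r)):
        x[t] = r[t - T:t].mean()
    return x

def ewma_signal(r, lam, K=500):
    """Truncated EWMA signal x_t = c(lam) * sum_{k=1..K} lam^k r_{t-k},
    with c(lam) = (1 - lam)/lam so the weights sum to ~1."""
    w = (1.0 - lam) * lam ** np.arange(K)   # weight on lag k+1 is c(lam)*lam^(k+1)
    x = np.full_like(r, np.nan, dtype=float)
    for t in range(1, len(r)):
        hist = r[max(0, t - K):t][::-1]     # r_{t-1}, r_{t-2}, ...
        x[t] = w[:len(hist)] @ hist
    return x

rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01        # toy i.i.d. daily returns
x = sma_signal(r, T=20)
s = x * r                                   # one-period strategy returns S_t = X_t * R_t
```

The signal uses only lagged information, so there is no look-ahead; production implementations would vectorise the loops, but the explicit form matches the formulas above.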
The returns from algorithmic trading strategies are well documented (see, e.g., \cite{ValueMomentum}, \cite{Baltas}, \cite{Hurstetal} and \cite{Lemperiere}).
Although practitioners have used many methods to derive signals (see, e.g., \cite{BruderTrend} for a compendium), many of these methods are equally good (or bad), and it makes
little practical difference whether one uses ARMA, EWMA or SMA as the starting point for strategy design (see, e.g., \cite{Levine-Pedersen}).
In this paper, we only touch on normalised signals (e.g., {\em z-scores}) and strategy returns, leaving their discussion for a subsequent study. We meanwhile note that the spirit of this paper's results carry through for the case of normalized signals and strategy returns.
Frequently, exponential smoothers have been the effective {\em best} models in various economic forecasting competitions (see, e.g., the results of the first three M-competitions \cite{Makridakis}), showing perhaps that their simplicity bestows a certain robustness, and their original intuition was sound even if the statistical foundation took a significant time to catch up. In fact, EWMA and HW can both be justified as state-space models (see \cite{Hyndman}), and this formulation brings with it a host of benefits from mere intellectual satisfaction to statistical hypothesis tests, change-point tests, and a metric for goodness-of-fit. Exponential smoothing with multiplicative or additive seasonals and dampened weighted slopes are used to successfully forecast a significant number of economic time-series (e.g., inventories, employment, monetary aggregates). EWMA (and the related (S)MA), and HW remain some of the most commonly used filtering methods for CTAs and HFT shops.
In the case of returns which are normal with fixed autocorrelation function (ACF), i.e., those which are covariance stationary, signals created from linear combinations of historic returns are indeed normal random variables which are jointly normal with returns. External datasets (e.g., unstructured data, corporate releases) are less likely to contain normally distributed variables, although there is an argument for asymptotic normality. Regardless, our approach is to assume normality of both returns and signals as a starting point for further analysis.
While there is significant need for further study, there have nonetheless been a number of empirical and theoretical results of note in this area. Fung and Hsieh were the first to look at the empirical properties of momentum strategies \cite{Fung-Hsieh}, noting (without any theoretical foundation) the resemblance of strategy returns to straddle pay-offs.\footnote[3]{Or as they claimed, the returns of trend following resemble those of an extremely exotic option (which is not actually traded), daily-traded ``look-back straddles.''}
Potters and Bouchaud \cite{Potters-Bouchaud} studied the significant positive skewness of trend-following returns, showing that for successful strategies the median profitability of trades is negative. The empirical returns of dynamic strategies are far from normal: single strategies commonly exhibit skewness in the range $[1.3,1.7]$ and kurtosis in the range $[8.8,15.3]$ (see \cite{Hoffman}).
Bruder and Gaussel \cite{Bruder} and \cite{Hamdan} (see Appendix 2 for a superlative use of SDE-based methods for analyzing a wide variety of dynamic strategies) used SDEs to study the power-option-like behaviour of pay-offs. Martin and Zou considered general but IID discrete-time distributions (see \cite{Martin-Zou} and \cite{Martin-Bana}) to study the {\sl term-structure} of skewness over various horizons and the effects of certain non-linear transforms on the term structure of return distributions. More recently, Bouchaud et al \cite{Bouchaud} considered more general discrete-time distributions to study the convexity of pay-offs and the effective dependence of returns on long-term vs short-term variance. Other studies have focused predominantly on the empirical behaviour of returns, the relationship to macro-financial conditions, the persistence of trend-following returns, and the benefits of their inclusion in broader portfolios.
In the larger portion of the theoretical studies, the assumptions have been minimal in order to consider more general return distributions. Due to their generality, the derived results are somewhat more restrictive. Rather than opting for the most general, we choose more specific distributional assumptions, in the hope that we can obtain broader, possibly more practical results. Aside from this current study, the authors have extended this work further to consider the endemic problem of over-fitting (see \cite{Firoozye2}), proposing total least squares with covariance penalties as a means of model-selection, showing their outperformance to standard methods, using OLS with AIC.
In this paper, we consider underlying assets with stationary Gaussian returns and a fixed auto-correlation function (i.e., they are a discrete Gaussian process). While we make no
defence of the realism of using normal returns, we find that normality
can be exploited in order to ensure we understand how the returns of linear
and non-linear strategies should work in theory and to further the understanding of the interaction between properties of returns and of the {\sl signals} as a basis for the development and analysis of dynamic strategies in practice.
Given a purely-random mean-zero covariance-stationary discrete-time Gaussian process for returns, the signals listed above, whether a EWMA or an ARMA forecast, can be expressed as convolution filters of past returns, i.e., our signal $X_t$ can be expressed as
$$ X_t = \sum_{k\geq 1} \phi(k) R_{t-k}$$
This is an example of a time-invariant linear filter of a Gaussian process. If we restrict our attention to those filters for which are square summable, $\sum_1^\infty\phi(k)^2<\infty$, then it is well-known that the resulting filtered series is also Gaussian and jointly Gaussian with $R_t$.
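The joint law of a filtered signal and the next return is fully determined by the filter and the ACF. A small sketch (Python; the AR(1) autocovariance and the truncated EWMA filter are illustrative choices of ours, not assumptions of the text) computes the implied signal/return correlation via $\mathrm{Cov}(X_t,R_t)=\sum_k \phi(k)\gamma(k)$ and $\mathrm{Var}(X_t)=\sum_{j,k}\phi(j)\phi(k)\gamma(|j-k|)$:

```python
import numpy as np

def signal_return_corr(phi, acov):
    """Correlation of X_t = sum_k phi[k] * R_{t-k-1} with R_t, given the
    autocovariance function acov[0..] of a stationary Gaussian process."""
    K = len(phi)
    # Cov(X_t, R_t): the lag between R_{t-k-1} and R_t is k+1
    cov_xr = sum(phi[k] * acov[k + 1] for k in range(K))
    # Var(X_t): lags |j - k| between the filtered terms
    var_x = sum(
        phi[j] * phi[k] * acov[abs(j - k)] for j in range(K) for k in range(K)
    )
    return cov_xr / np.sqrt(var_x * acov[0])

# Illustrative: AR(1)-style returns with gamma(k) = a^k, EWMA filter on lags 1..K.
a, lam, K = 0.1, 0.9, 200
acov = a ** np.arange(K + 1)
phi = (1 - lam) * lam ** np.arange(K)
rho = signal_return_corr(phi, acov)
sharpe = rho / np.sqrt(1 + rho ** 2)   # one-period Sharpe from the text's formula
```

For white-noise returns the correlation collapses to zero, matching the intuition that no linear filter predicts an unpredictable series.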
Our underlying premise is that the important distribution to consider for the analysis of dynamic strategies is a product of Gaussians (rather than a single Gaussian, as would usually apply in the asymptotic analysis of asset returns). This product measure can be justified on many levels, and we discuss large-sample approximations in the appendix.
The resulting measure which determines the success of the strategy is the correlation between the returns and the signals, a measure which, in the context of measuring an active manager's skill is known as the {\em information coefficient} or IC as given in the {\em Fundamental Law of Active Management} detailed in \cite{GrinoldKahn}. While there is a large body of literature on the IC and its relationship to information ratios, (see for example \cite{Lee} for formulas similar to equation (\ref{Eqn:Sharpe})), the derivations, resulting formulae and conclusions differ significantly.
We should also mention the work on random matrix theory by Potters and Bouchaud (\cite{PBRandMat}), which touches on many of the topics we consider in this paper. In particular, their analysis of returns as products of Gaussians or t-distributions is very close to our own. While many of the emphases are once again different to ours, we believe the general area of Random Matrix Theory to be a fruitful approach to trading strategies.
The primary tool we use to derive results is Isserlis' theorem \cite{Isserlis} or Wick's theorem (as it is known in the context of particle physics \cite{Wick}). This relates products and powers of multivariate normal random variables to their means and covariances.
Wick's theorem has been applied in areas from particle physics, to quantum field theory to stock returns and there are some recent efforts to extend to non-Gaussian distributions (see, e.g., \cite{Michalowicz} for Gaussian-mixture and \cite{Kan} for products of quadratic forms and elliptic distributions), and it has been applied to continuous processes via the central limit theorem (see \cite{FCLTWick}).
We have used these theorems in the context of dynamic (algorithmic) trading strategies to find expressions for the first four moments of strategy returns in closed-form.
While it is not necessarily the aim of all scientific studies of trading strategies to find closed-form expressions, the ease with which we can describe strategy returns makes this direction relatively appealing and allows for a number of future extensions.
The body of the paper treats a single asset over a single period. For a normal signal, we show there is a universal bound on the one-period Sharpe ratio, skewness and kurtosis. We also explain the role of {\em total} or {\em orthogonal least squares} as an alternative to OLS for strategy optimisation.
We look at the corresponding refinements to measures of Sharpe ratio standard error for these dynamic strategies, improving on the large-sample theory based standard errors in more common use. We also introduce standard errors on skewness and kurtosis, which are distinct from those for Gaussian returns and present some basic results about multiple assets and diversification. Finally, we discuss the role of product measures, more pertinent to the study of dynamic strategies than simple Gaussian measures.
In the appendices, we present closed-form solutions to Sharpe ratios in the case of non-zero means. We also discuss extensions to our optimisations in the presence of transaction costs. We touch on the extension to multiple periods as well. As we mentioned, further extensions to over-fitting by the use of covariance penalties (akin to Mallow's $C_p$ or AIC/BIC) have been presented separately in \cite{Firoozye2}.
\section{Single period linear strategies}
We consider the (log) returns of a single asset, $R_{t}\sim\mathscr{N}(0,\sigma_R^{2})$, with auto-covariance function $\gamma(k)=E[R_t R_{t-k}]$ at lag $k$ and corresponding auto-correlation function (ACF) $c(k)=\gamma(k)/\gamma(0)$.
Our main aim is to work with strategies based on linear portfolio weights (or {\sl signals})
$X_{t}=\Sigma_{1}^{\infty}a_{k}R_{t-k}$ for coefficients $a_{k}$ generating the corresponding dynamic strategy returns $S_t=X_t\cdot R_t$ (here, and always, the signal, $X_t$ is assumed to only have appropriately lagged information).
Example strategy weights include exponentially weighted moving averages $a_{k}\propto\lambda^{k}$, simple moving averages $a_k = \frac{1}{T} \mathbbm{1}_{[1,\ldots,T]}$, forecasts from ARMA models, etc. Most importantly, the portfolio weights $X$ are normal and {\em jointly} normal with returns $R$. In Appendix \ref{Section:ConvolutionFilters}, we show that for a wide set of signals discussed in the Introduction, when applied to Gaussian returns, the signal and returns are jointly Gaussian.
We restrict our attention to return distributions over a single period. In the case of many momentum strategies, this period can be one day, if not longer. For higher-frequency intra-day strategies, this period can be much shorter. The pertinent concern is that the horizon (i.e., one period) is the same horizon over which the rebalancing of strategy weights is done. If weights are rebalanced every five minutes, then the single period should be five minutes. This is a necessary assumption in order to ensure the joint normality of (as yet indeterminate) signals and future returns. Moreover, this assumption will give some context to our results, which imply a maximal Sharpe ratio, maximal skewness and maximal kurtosis for dynamic linear strategies.
We are interested in characterizing the moments of the strategy's unconditional returns, the corresponding standard errors on estimated quantities, and means of optimising various non-dimensional measures of returns such as the Sharpe ratio via the use of non-linear transformations of signals. Our goal is to look at unconditional properties of the strategy. It is important to avoid foresight in strategy design and this directly impacts the conditional properties of strategies (e.g., conditional densities involve conditioning on the currently observed signal to determine properties of the returns, which are just Gaussian). In the context of our study, we are concerned with one-period ahead returns of the unconditional returns distribution of our strategy, where both the signals and the returns are unobserved, and the resulting distributions (in our case, the product of two normals) are much richer and more realistic -- for the interested reader, we have added a more detailed discussion of our framework in Appendix \ref{set-up}.
\subsection{Properties of linear strategies}
Given the joint normality of the signal and the returns, we can explicitly
characterise the one-period strategy returns (see \cite{Exact}). To allow for greater extendibility, we prefer to only consider the moments of the resulting distributions. These can be characterized easily using Isserlis' theorem \cite{Isserlis}, which gives all moments for any multivariate normal random variable in terms of the mean and variance. We also refer to \cite{Haldane} who meticulously produces both non-central and central moments for powers and products of Gaussians. While this is a routine application of Isserlis' theorem, the algebra can be tedious, so we quote the results.
\begin{theorem}[Isserlis (1918)]
If $X \sim \mathscr{N}(0,\Sigma)$,then
$$E[X_1 X_2\cdots X_{2n}] = \sum_{p}\prod_{\{i,j\}\in p} E[X_i X_j]$$
and
$$E[X_1 X_2\cdots X_{2n-1}] = 0,$$
where the sum runs over all $(2n)!/(2^n n!)$ partitions $p$ of $\{X_1,X_2,\ldots, X_{2n}\}$ into unordered pairs $\{X_i, X_j\}$.
\end{theorem}
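The pair-partition sum can be made concrete. A small sketch (Python; the helper names are ours) enumerates the $(2n)!/(2^n n!)$ pairings recursively and evaluates the Isserlis sum for a given covariance matrix:

```python
from math import prod

def pairings(items):
    """Yield all perfect matchings (partitions into unordered pairs)."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    for i, partner in enumerate(rest):
        remaining = rest[:i] + rest[i + 1:]
        for sub in pairings(remaining):
            yield [(first, partner)] + sub

def isserlis(cov, idx):
    """E[X_{i_1} ... X_{i_m}] for a zero-mean Gaussian with covariance cov."""
    if len(idx) % 2:         # odd moments vanish
        return 0.0
    return sum(prod(cov[i][j] for i, j in p) for p in pairings(list(idx)))
```

For instance, `isserlis([[1.0]], (0, 0, 0, 0))` recovers $E[x^4]=3$ for a standard normal, and with a $2\times 2$ correlation matrix the function reproduces $E[x^2y^2]=1+2\rho^2$ from the theorem that follows.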
Haldane's paper provides a large number of moment results for powers and products of correlated normals; we quote those relevant here.
\begin{theorem}[Haldane (1942)]
\label{thm:Haldane}
If $x,y\sim \mathscr{N}(0,1)$ with correlation $\rho$ then
\begin{eqnarray*}
E[xy] =& \rho\\
E[x^2y^2]=&1+2\rho^2\\
E[x^3y^3]=&3\rho(3+2\rho^2)\\
E[x^4y^4] =&3(3+24\rho^2+8\rho^4)
\end{eqnarray*}
and thus the central moments of $xy$ are
\begin{eqnarray}
\label{eqn:centered_moments1}
\mu_1 =& \rho\\
\label{eqn:centered_moments2}
\mu_2=&1+\rho^2 \\
\label{eqn:centered_moments3}
\mu_3=&2\rho(3+\rho^2)\\
\label{eqn:centered_moments4}
\mu_4= &3(3+14\rho^2+3\rho^4)
\end{eqnarray}
\end{theorem}
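As a quick numerical sanity check on Theorem~\ref{thm:Haldane}, the sketch below (Python; the function name is ours) evaluates the central moments and compares the two limiting cases: $\rho=0$, a product of independent standard normals, and $\rho=1$, where $xy=x^2\sim\chi^2_1$ with central moments $1, 2, 8, 60$:

```python
def product_moments(rho):
    """Central moments mu_1..mu_4 of xy for standard bivariate normals
    with correlation rho (equations above)."""
    mu1 = rho
    mu2 = 1 + rho ** 2
    mu3 = 2 * rho * (3 + rho ** 2)
    mu4 = 3 * (3 + 14 * rho ** 2 + 3 * rho ** 4)
    return mu1, mu2, mu3, mu4
```

Both limits agree with the known distributions, which is a useful consistency check on the algebra.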
From these one-period moments (and a simple scaling argument giving the dependence on $\sigma(x)$ and $\sigma(y)$), we can characterise the Sharpe ratio, skewness, etc., and can also define objective functions in order to determine some sense of optimality for a given strategy.
\begin{theorem}[Linear Gaussian]
\label{thm:Linear}
For single asset returns and a one period strategy, $\ensuremath{R_{t}\sim\mathscr{N}(0,\sigma_R^{2})}$ and $X_t\sim\mathscr{N}(0,\sigma_X^2)$ jointly normal with correlation $\rho$, the Sharpe ratio is given by
\begin{equation} \SR = {\rho \over \sqrt{1+\rho^2}},
\label{Eqn:Sharpe}
\end{equation} the skewness is given as
\begin{equation}
\gamma_3=\frac{2\rho(3+\rho^{2})}{(1+\rho^{2})^{\frac{3}{2}}}, \label{Eqn:Skew}
\end{equation}
and the kurtosis is given by
\begin{equation}
\gamma_4 = \frac{3(3+14\rho^2+3\rho^4)}{(1+\rho^2)^2}
\label{Eqn:Kurt}
\end{equation}
\end{theorem}
In the appendix, we extend equations (\ref{Eqn:Sharpe}) and (\ref{Eqn:Skew}) to the case of non-zero means.
\begin{proof}
A simple application of Theorem \ref{thm:Haldane} gives the first two moments of our strategy $S_t = X_t \cdot R_t$:
$\mu_1=E[S_t]=E[X\cdot R] = \sigma_X\sigma_R \rho$ and $\mu_2=\mathrm{Var}[S_t]= \sigma_X^2\sigma_R^2(\rho^2+1)$.
Thus we can derive the following result for the Sharpe ratio,
\begin{eqnarray*}
\SR=& {\mu_1\over \mu_2^{1/2}} \\
=& {\sigma_X\sigma_R\rho\over \sigma_X\sigma_R \sqrt{\rho^2+1}}\\
=& {\rho\over \sqrt{\rho^2+1}}
\end{eqnarray*}
Moreover, we can see that the skewness,
\begin{eqnarray*}
\gamma_3 =& {\mu_3\over \mu_2^{3/2}}\\
=& {2\rho (3+\rho^2)\over (1+\rho^2)^{3/2}}
\end{eqnarray*}
Finally, the kurtosis is given by
\begin{eqnarray*}
\gamma_4 =& {\mu_4\over \mu_2^{2}}\\
=&{3(3+14\rho^2+3\rho^4)\over (1+\rho^2)^2}
\end{eqnarray*}
\end{proof}
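The dimensionless statistics of Theorem~\ref{thm:Linear} are easy to tabulate. A minimal sketch (Python; the function name is ours) evaluates equations (\ref{Eqn:Sharpe})--(\ref{Eqn:Kurt}) and confirms the limiting values at $\rho=0$ and $\rho=1$:

```python
import numpy as np

def strategy_stats(rho):
    """One-period Sharpe ratio, skewness and kurtosis of S = X * R
    for jointly normal signal and return with correlation rho."""
    sr = rho / np.sqrt(1 + rho ** 2)
    g3 = 2 * rho * (3 + rho ** 2) / (1 + rho ** 2) ** 1.5
    g4 = 3 * (3 + 14 * rho ** 2 + 3 * rho ** 4) / (1 + rho ** 2) ** 2
    return sr, g3, g4
```

On a grid of positive correlations all three statistics increase with $\rho$, consistent with the discussion in the text.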
If we restrict our attention to positive correlations, all three dimensionless statistics are monotonically increasing in $\rho$. Consequently, strategies that maximize one of these statistics will maximize the others, although the impact of correlation upon Sharpe ratio, skewness and kurtosis is different. We illustrate the cross-dependencies in the following charts, depicting the relationships between the variables. In figure \ref{fig:SharpeSkew}, the shaded blue histograms correspond to correlation ranges ($\{[-1,-0.5],[-0.5,0],[0,0.5],\ [0.5,1]\}$). We note that a uniform distribution in correlations maps into a higher likelihood of extreme Sharpe ratios and an even higher likelihood of extreme skewness and kurtosis.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{box_pairwise_allmetrics.png}
\caption{{\bf Correlation, Sharpe ratio, Skewness, and Kurtosis pairwise relationship.} A uniform distribution in correlation is bucketed into four ranges $\{[-1,-0.5],[-0.5,0],[0,0.5],\ [0.5,1]\}$ as depicted in the bar charts in shades of blue. After transforming the correlation into SR, $\gamma_3$ and $\gamma_4$ the frequencies are no longer uniform.} \label{fig:SharpeSkew}
\end{figure}
Skewness ranges over $[-2^{3/2},2^{3/2}]\approx[-2.8,2.8]$. Unlike the Sharpe ratio,
skewness's dependence on correlation tends to flatten: to achieve
90\% of peak skewness one needs only a correlation of 0.60, while
90\% of peak Sharpe requires a correlation of 0.85. Kurtosis is an even function of $\rho$ and varies from a minimum of 9 to a maximum of 15. In practice, correlations will largely be close to zero, and the resulting skewness and kurtosis significantly smaller than the maximal values.
Although we analyse the moments of the strategy $S_t=X_t R_t$, the full product density is actually known in closed form (see appendix \ref{Section:FullDist}, \cite{Exact} and \cite{DistCorrel}). It is clear that the distribution of the strategy is {\sl leptokurtic} even when it is not predictive (when the correlation is exactly zero, the strategy has a kurtosis of $9$). In the limit as $\rho\rightarrow 1$, the strategy's density approaches that of a non-central $\chi^2$, an effective {\sl best-case} density when considering the design of optimal linear dynamic strategies.
An optimised strategy with sufficient lags (and a means of ensuring parsimony) may be able to capture both mean-reversion and trend, resulting in yet higher correlations. Annualised Sharpe ratios between 0.5 and 1.5 (i.e., correlations between 3\% and 9\%) are most common for single-asset strategies in this relatively low-frequency regime.
\subsection{Optimisation: Maximal Correlation, Total least squares}
Many algorithmic traders will attest to how problematic strategy optimisation is, given endless concerns about over-fitting. While these concerns are legitimate, the na\"{\i}ve use of strategies {\sl pulled out of thin air}, with no explicit optimisation (relying instead on {\em eye-balling} strategies or loosely targeting Sharpe ratios, effectively an informal mental optimisation exercise), is equally problematic. Practical considerations abound, and real-world returns are neither Gaussian nor stationary. We argue nonetheless that optimisation against a well-specified utility function is a starting point that prevents strategies from being merely untested heuristics. Unlike discretionary traders' heuristics (or {\em rules of thumb}), which have their
place as a means of dealing with uncertainty (see for example \cite{Gigerenzer}),
heuristic quantitative trading strategies run the risk of being entirely arbitrary, or subject to a large number of human biases, in marked contrast to the moniker {\em quantitative} investment strategies.
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{EWMA_SP_SR_RMSE_Rho.png}
\caption{{\bf EWMA Strategy Sharpe Ratio vs $\alpha$, MSE and correlation} for S\&P 500 reversal strategies} \label{fig:EWMA_Sharpe_Corr}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\linewidth]{Holt_SP_SR_nRMSE_Rho.png}
\caption{{\bf Holt-Winters Strategy Sharpe Ratio vs MSE and correlation} for S\&P 500 Reversal Strategies} \label{fig:HW_Sharpe_Corr}
\end{figure}
Where optimisation is used, the most common method is to minimize the mean-squared error (MSE) of the forecast. Our results show that if the objective is to maximize the Sharpe ratio, we must instead maximize the correlation between the signal and future returns, rather than minimize the $\mathcal{L}^2$ norm between them (or maximize the likelihood).
We can see in figures \ref{fig:EWMA_Sharpe_Corr} and \ref{fig:HW_Sharpe_Corr}, a depiction of fits of strategies applied to S\&P 500 using EWMA and HW filters for a variety of parameters. The relationship between MSE and Sharpe ratio is not monotone in MSE for the EWMA filter as we see in figure \ref{fig:EWMA_Sharpe_Corr}, while it is much closer to being linear in the case of the relationship between correlation and Sharpe. For the case of HW (with two parameters), in figure \ref{fig:HW_Sharpe_Corr} any given MSE can lead to a non-unique Sharpe ratio, sometimes with a very broad range, leading us to conclude that the optimization is poorly posed. The relationship of correlation to Sharpe is obviously closer to being linear, with higher correlations almost always leading to higher Sharpe ratios.
In the case of a one-dimensional forecasting problem with (unconstrained) linear signals, optimizing the correlation amounts to using what is known as {\sl total least squares regression (TLS)} or {\sl orthogonal distance regression}, a form of principal components regression (see, e.g., \cite{Golub} and \cite{Markovsky}). In the multivariate case, it would be more closely related to {\sl canonical correlation analysis} (CCA).
Unlike OLS, where the dependent variable is assumed to be measured with error and the independent variables are assumed to be measured without error, in total least squares regression, both dependent and independent variables are assumed to be measured with error, and the objective function compensates for this by minimizing the sum squared of orthogonal distances to the fitted hyperplane. This is a simple form of errors-in-variables (EIV) regression and has been studied since the late 1870s, and is most closely related to principal components analysis. For $k$ regressors, the TLS fit will produce weights which are orthogonal to the first $k-1$ principal components.
So, if we consider the signal $X=Z\beta$ to be a linear combination of features, with $Z$ a $T\times k$ matrix of $k$ features observed over $T$ periods, then we note that
$$\hat{\beta}^{OLS} = (Z'Z)^{-1}Z'R$$ but
$$\hat{\beta}^{TLS} = (Z'Z-\sigma_{k+1}^2 I)^{-1}Z'R$$ where $\sigma_{k+1}$ is the smallest singular value of the $T\times (k+1)$-dimensional matrix $\tilde{X}=[R,Z]$ (i.e., the concatenation of the features and the returns, see, e.g., \cite{Rahman-Yu}\footnote[4]{A more common method for extracting TLS estimates is via a PCA of the concatenation matrix $\tilde{X}$, where $\hat{\beta}^{TLS}$ is chosen to cancel the least significant principal component.}). It is well known that, for the case of OLS, the smooth or hat matrix $\hat{R} = M R$ is given by
$$M^{OLS} = Z(Z'Z)^{-1} Z'$$
with $\tr(M^{OLS})=k$, the number of features.
In contrast,
$$M^{TLS}=Z(Z'Z-\sigma_{k+1}^2 I)^{-1}Z'$$
and effectively has a greater number of degrees of freedom than that of OLS, i.e., $$\tr(M^{TLS})\geq \tr(M^{OLS})$$ with equality only when there is complete collinearity\footnote[5]{In this case, it is also known that $\tr(M)=\tr(L)$ where $L= (Z'Z-\sigma_{k+1}^2 I)^{-1}Z'Z$ and we know that the singular values of $\sigma(L)=\{{\lambda_i^2}/{(\lambda_i^2-\sigma_{k+1}^2)}\}$ where $\lambda_i$ are the singular values of $Z$ (or correspondingly, $\lambda_i^2$ are the singular values of $Z'Z$), and $\lambda_1\geq\cdots\geq\lambda_k>0$ (\cite{Leyang}). By the Wilkinson interlacing theorem, $\lambda_k\geq\sigma_{k+1}\geq0$ (see \cite{Rahman-Yu}). Consequently,
$$\tr(M^{TLS})=\sum_i\frac{\lambda_i^2}{(\lambda_i^2-\sigma_{k+1}^2)}\geq k=\tr(M^{OLS})$$
with equality iff $\sigma_{k+1}^2=0$ (i.e., when $R^2=100\%$, in which case OLS and TLS coincide). In other words, $\tr(M^{TLS})\geq\tr(M^{OLS})$.
}
For this reason, many people see TLS as an {\em anti-regularisation} method, and it may produce a less stable response to outliers (see, for example, \cite{Zhang}). Consequently, there is an extensive literature on {\em regularised} TLS, typically using a weighted ridge-regression (or Tikhonov) penalty (see the discussion in \cite{Zhang} for more detail). The stability of TLS in out-of-sample performance is an issue we broach in our study of over-fitting penalties (see \cite{Firoozye2}).
While maximizing correlation rather than minimizing the MSE seems a very minor change in the objective function, the formulas differ from those of standard OLS. The end result is a linear fit which takes into account the errors in the underlying conditioning information. We believe the difference should be of relatively little consequence when the features are appropriately normalized, as is the case for univariate time-series estimation, although some authors have suggested that TLS is not appropriate for prediction (see, e.g., \cite{Fuller}, section 1.6.3). When we seek to maximize the Sharpe ratio of a strategy, however, the objective should {\em not be} prediction, but rather optimal weight choice.
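The TLS estimator above can be sketched numerically. The following is a minimal illustration on synthetic data (the data-generating process and all parameter values are our own assumptions, not from the text); it also checks the degrees-of-freedom inequality $\tr(M^{TLS})\geq\tr(M^{OLS})=k$ via $\tr(M)=\tr(L)$ from the footnote.

```python
import numpy as np

rng = np.random.default_rng(0)
T, k = 500, 3
Z = rng.standard_normal((T, k))                    # features (signals)
beta_true = np.array([0.5, -0.2, 0.1])             # illustrative coefficients
R = Z @ beta_true + 0.1 * rng.standard_normal(T)   # returns with small noise

# OLS: minimize prediction error
beta_ols = np.linalg.lstsq(Z, R, rcond=None)[0]

# TLS: beta = (Z'Z - sigma_{k+1}^2 I)^{-1} Z'R, with sigma_{k+1} the
# smallest singular value of the concatenation [R, Z]
s_min = np.linalg.svd(np.column_stack([R, Z]), compute_uv=False)[-1]
beta_tls = np.linalg.solve(Z.T @ Z - s_min**2 * np.eye(k), Z.T @ R)

# Effective degrees of freedom: tr(M_TLS) = tr(L) >= tr(M_OLS) = k
tr_tls = np.trace(np.linalg.solve(Z.T @ Z - s_min**2 * np.eye(k), Z.T @ Z))
```

With a small observation noise both estimates land near the generating coefficients, while `tr_tls` sits slightly above $k$, consistent with the anti-regularisation discussion below.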
\subsection{Maximal Sharpe ratios, Maximal Skewness, Minimal Kurtosis}
Surprisingly, there appears to be a maximal Sharpe ratio for linear strategies. In the case of normal signals and normal returns, the maximal Sharpe ratio is that of a non-central $\chi^2$ distribution and the resulting maximal statistics are
\begin{eqnarray*}
\SR^{max}&=&\frac{\sqrt{2}}{2}\approx 0.707\\
\gamma_3^{max} &=& 2\sqrt{2}\approx 2.828\\
\gamma_4^{max} &=& 15.000
\end{eqnarray*}
While the estimate for the Sharpe ratio may seem surprisingly low, we note that it is for a single period, i.e., one single rebalancing. For a daily rebalanced strategy, if we na{\"i}vely annualize the Sharpe ratio (by a factor of $\sqrt{252}$), we get a maximal Sharpe ratio of approximately $\SR^{max}\approx 11.225$, a level generally well beyond what is attained in practice. The statistics $\gamma^{max}_3$ and $\gamma^{max}_4$ do not scale when annualized, but are still large irrespective of the time horizon.
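The maximal statistics admit a quick numeric check. At the maximizing correlation $\rho=1$ with unit variances, the strategy return $S=XR=R^2$ is a (central) $\chi^2(1)$ variable, whose mean $1$, variance $2$, skewness $\sqrt{8}$ and kurtosis $15$ reproduce the values above; the $\chi^2(1)$ identification at this extreme is our reading of the non-central $\chi^2$ statement in the text.

```python
import math

# S = X*R = R^2 ~ chi-squared(1) at rho = 1: mean 1, variance 2
sr_max = 1 / math.sqrt(2)                # maximal one-period Sharpe ratio
skew_max = math.sqrt(8)                  # chi2(1) skewness = 2*sqrt(2)
kurt_max = 3 + 12                        # chi2(1) kurtosis = 15
sr_annualized = sr_max * math.sqrt(252)  # naive daily-to-annual scaling
```

This recovers $\SR^{max}\approx 0.707$, $\gamma_3^{max}\approx 2.828$, $\gamma_4^{max}=15$ and the annualized bound of roughly $11.22$.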
We note that our assumption of normality could easily be relaxed by considering non-linear transforms of the signals $X$, with the end result that the maximal Sharpe ratio bounds are relaxed. While this is beyond the scope of the current paper, we note that simple non-linear strategies, going long one unit if the signal is above a threshold $k$ and short one unit if it is below $-k$, i.e., $f_k(X)=\mathbbm{1}_{X>k}-\mathbbm{1}_{X<-k}$, can be shown to have arbitrarily large Sharpe ratios, depending on the choice of threshold $k$. The probability of initiating such an arbitrarily high Sharpe ratio trade likewise decreases to negligible levels. Thus, stationary returns with a small non-zero autocorrelation can lead to violations of Hansen-Jagannathan (or {\em good deal}) bounds.
Noticeable as well from these formulas is that, while the Sharpe ratio and skewness may change sign, kurtosis is always bounded below and takes a minimum value of $9$ (i.e., an excess kurtosis of $6$). Normality of the resulting strategy returns is therefore not a good underlying assumption, since the theoretical value of the Jarque-Bera statistic satisfies
\begin{eqnarray*}
JB(n) &=& \frac{n-k+1}{6}\Bigl(\gamma_3^2 + \frac{(\gamma_4-3)^2}{4}\Bigr)\\
&\geq& \frac{(n-k+1)}{6}\cdot\frac{36}{4}\\
& =& 1.5(n-k+1)
\end{eqnarray*}
and the JB statistic is asymptotically $\chi^2(2)$ under normality (i.e., normality is rejected at the $99\%$ level when $JB>9.210$). Theoretically, we would therefore need only a relatively small sample to reject normality.
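The lower bound makes the required sample size explicit. The following sketch (with $k=1$ as an illustrative parameter count) finds the smallest $n$ for which the bound $1.5(n-k+1)$ exceeds the $\chi^2(2)$ critical value:

```python
# JB >= 1.5*(n - k + 1) for any correlated-normal strategy, since even at
# rho = 0 the excess kurtosis is 6. Smallest sample rejecting normality at
# the 99% level (chi2(2) critical value 9.210):
k = 1
chi2_crit = 9.210
n_min = next(n for n in range(k, 1000) if 1.5 * (n - k + 1) > chi2_crit)
# n_min == 7 when k == 1
```

So with a single fitted parameter, seven observations already suffice in theory.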
\section{Refined Standard Errors}
\label{section:Stderr}
Given that we have closed-form estimates of a number of relevant statistics for dynamic linear strategies, it makes sense to consider the effects of estimation error upon quantities such as the Sharpe ratio. Many analysts and traders who consider dynamic strategies in practice will consider altering the strategies on an ongoing basis, and are typically in a quandary over whether the observed change in Sharpe ratio or skewness, when they make changes to their strategies, are in fact statistically significant.
\subsection{Standard Errors for Sharpe Ratios}
While there are formulas for the standard errors of Sharpe ratios of generic assets, these are not specific to Sharpe ratios generated by dynamic trading strategies, and consequently there is some scope for refining them.
We refer to \cite{Pav} for an exhaustive overview of the mechanics of Sharpe ratios, and in particular, Section 1.4, quoting many of the known results about standard errors. Specifically, we look to \cite{Lo} for large-sample estimates of standard errors for Sharpe ratios of generic assets, given the asymptotic normality of returns. For a sample of size $T$ and IID returns, he obtains the large-sample distribution,
$$\widehat{\SR} \sim \mathscr{N}\left(\SR,\stderr_{\Lo}^2\right),$$ so a standard error $\stderr_{\Lo} = \sqrt{(1+\frac{1}{2} \SR^2)/T}$, which he suggests approximating by $\sqrt{(1+\frac{1}{2} \widehat{\SR}^2)/T}$.
While Lo's estimates may be appropriate for generic assets, for Sharpe ratios derived from dynamic strategies, we have a somewhat more refined characterisation of the variability of the estimated Sharpe ratios. With correlated Gaussian signals and returns, we derive the following result
\begin{corollary}[Stderrs] For returns $R_t\sim \mathscr{N}(0,\sigma_R^2)$ and signal
$X_t\sim \mathscr{N}(0,\sigma_X^2)$ with correlation $\rho$, and sample size $T$, the standard errors are given by
\begin{eqnarray}
\label{eqn:stderr_sharpe}
\stderr_{\implied} &=&\frac{1}{(\hat{\rho}^2+1)^{3/2}}\sqrt{\frac{1-\hat{\rho}^2}{T-2}}\\
\label{eqn:stderr_sharpe2}
&\approx& (1-\widehat{\SR}^2)\sqrt{\frac{1-2\widehat{\SR}^2}{T-2}}
\end{eqnarray}
for $|\widehat{\SR}|<\sqrt{2}/2$.
\end{corollary}
\begin{proof}
As is well known, for a bivariate Gaussian process of sample size $T$, the distribution for the sample ({\em Pearson}) correlation is given by
\begin{equation}\hat{\rho} \sim f_\rho(\hat{\rho})=
\frac{(T-2)(1-\rho^2)^{(T-1)/2}(1-\hat{\rho}^2)^{(T-4)/2}}{\pi}
\int_0^\infty\frac{dw}{(\cosh(w)-\rho\hat{\rho})^{T-1}}\label{Eqn:rhohat}\end{equation}
The standard errors which approximate those in equation (\ref{Eqn:rhohat}) for $\hat{\rho}$ are
$$\stderr_{\rho}=\sqrt{\frac{1-\hat{\rho}^2}{T-2}}$$
(attributed to Sheppard, and used by Pearson; see, e.g., \cite{Hald}). Taken together with the results of Theorem \ref{thm:Linear}, we apply the delta method to find that the resulting standard error for our plug-in estimate of the Sharpe ratio,
$\widehat{\SR}=\frac{\hat{\rho}}{\sqrt{\hat{\rho}^2+1}}$, is given by
\begin{eqnarray*}
\stderr_{\implied}&=&
\frac{\partial \widehat{\SR}}{\partial \hat{\rho}}\cdot \stderr_\rho\\
&=&\frac{1}{(\hat{\rho}^2+1)^{3/2}}\sqrt{\frac{1-\hat{\rho}^2}{T-2}}.
\end{eqnarray*}
This gives us equation (\ref{eqn:stderr_sharpe}). Solving for $\hat{\rho}$ in terms of $\widehat{\SR}$ yields equation (\ref{eqn:stderr_sharpe2}).
\end{proof}
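The corollary's two forms of the implied standard error, and the comparison with Lo's formula, can be checked numerically; the sample values $\hat{\rho}=0.2$, $T=252$ below are illustrative choices of ours.

```python
import math

def stderr_implied(rho_hat, T):
    # Delta-method standard error for the plug-in Sharpe estimate
    return math.sqrt((1 - rho_hat**2) / (T - 2)) / (rho_hat**2 + 1) ** 1.5

def stderr_lo(sr_hat, T):
    # Lo's large-sample standard error for a generic asset
    return math.sqrt((1 + 0.5 * sr_hat**2) / T)

rho, T = 0.2, 252
sr = rho / math.sqrt(rho**2 + 1)
se_implied = stderr_implied(rho, T)
# Equivalent form written in terms of the Sharpe ratio itself
se_form2 = (1 - sr**2) * math.sqrt((1 - 2 * sr**2) / (T - 2))
se_lo = stderr_lo(sr, T)
```

The two expressions agree to machine precision, and for these parameters the implied standard error is tighter than Lo's.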
We note that, although Lo's standard errors are very near our estimates for large sample sizes, the sampling distribution from our estimates is much more concentrated than $\mathscr{N}(0,\stderr_{\Lo}^2)$, potentially leading to tighter confidence intervals at the 99\% or higher confidence levels. We can see in figure (\ref{fig:ViolinPlot}) that the tail of the distribution given by Lo is much fatter than ours.
\begin{figure}[h!]
\centering
\subfloat{\includegraphics[width=0.33\linewidth]{implied_lo_confint_lineplot_252.png}}
\subfloat{\includegraphics[width=0.33\linewidth]{implied_lo_confint_lineplot_756.png}}
\subfloat{\includegraphics[width=0.33\linewidth]{implied_lo_confint_lineplot_1260.png}}
\caption{{\bf Sharpe ratio and Confidence Interval Comparisons, based on different sample sizes.} We note that the {\em implied} confidence intervals are within Lo's, although primarily for larger predictive power.} \label{Lo-Us} \label{fig:CI1yr}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=\textwidth]{implied_lo_confint_violinplot.png}
\caption{{\bf Sharpe ratios full distribution} While the $95^{th}$ percentile shows close agreement between Lo's large-sample standard errors and {\em implied} standard errors, the distribution of {\em implied} is far more fat-tailed.}
\label{fig:ViolinPlot}
\end{figure}
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.33\linewidth]{implied_lo_mertens_252.png}}
\subfloat{\includegraphics[width=0.33\linewidth]{implied_lo_mertens_756.png}}
\subfloat{\includegraphics[width=0.33\linewidth]{implied_lo_mertens_1260.png}}
\caption{{\bf Standard errors based on different sample sizes and formulas.} Ignoring parameter uncertainty, Mertens' adjustment to Lo's standard errors makes them nearly as tight as the {\em implied} ones. In practice, parameter uncertainty hurts the performance.
\label{fig:StdErrMertensLoImplied}}
\end{figure}
Mertens gives a refinement of Lo's result (\cite{Mertens}) by including adjustments for skewness and excess kurtosis:
\begin{equation} \stderr^2_{\Mertens} = \frac{1}{T}\bigl(1+\frac{1}{2}\widehat{\SR}^2-\gamma_3\cdot\widehat{\SR}+\frac{\gamma_4-3}{4}\cdot\widehat{\SR}^2\bigr).
\label{Eqn:Mertens} \end{equation}
If we use our plug-in estimates for skewness and excess kurtosis (i.e., from equations (\ref{Eqn:Skew}) and (\ref{Eqn:Kurt})) in equation (\ref{Eqn:Mertens}), we obtain a modestly tighter estimate of the standard error than Lo's. For most smaller-amplitude correlations, this estimate comes very close to our estimate of the standard error (see figure (\ref{fig:StdErrMertensLoImplied})), and for small $N$ and low correlations, Lo's standard errors are in fact tighter than ours. For large correlations, our standard errors are significantly tighter, while for large sample sizes there is little difference between them. Using our estimates for $\gamma_3$ and $\gamma_4$, Mertens' approximation is always tighter than Lo's; in particular, for correlations $|\rho|<0.5$, Mertens' approximation appears almost identical to our own. Irrespective, we argue in section \ref{Section:Gauss vs Product} that our standard errors are more appropriate for dynamic strategies whenever there is any significant difference between the measures.
\subsection{Standard Errors for Higher Moments}
Using exactly the same procedure, we can easily derive standard errors for both skewness and kurtosis. For classical confidence intervals, we consider \cite{Joanes} and \cite{Cramer}, which apply to Gaussian (and non-Gaussian) distributions, noting that \cite{Lo} is a broader result on the large-sample limits of Sharpe ratios. We are concerned with Pearson skewness and kurtosis, i.e.,
\begin{eqnarray*}
\gamma_3 &=& \frac{\mu_3}{\mu_2^{3/2}}\\
\gamma_4 &=& \frac{\mu_4}{\mu_2^2}
\end{eqnarray*}
although it is not hard to consider other definitions of skewness and kurtosis using unbiased estimators of the moments as are given in \cite{Joanes}, in this case originally from \cite{Cramer}.
Given these definitions, under the assumption of normality for the underlying returns (or correspondingly, using large-sample limits) where the sample size is $T$, standard errors are given as
\begin{eqnarray*}
\stderr_{\gamma_3} &= \sqrt{\frac{6(T-2)}{(T+1)(T+3)}}\\
\stderr_{\gamma_4} &= \sqrt{\frac{24 T (T-2)(T-3)}{(T+1)^2(T+3)(T+5)}}
\end{eqnarray*}
In the case of dynamic strategies, using our assumption of normal signal and normal returns, we are able to derive the following:
\begin{corollary}[Higher moment standard errors]
\label{higherStderr}
For returns $R_t\sim \mathscr{N}(0,\sigma_R^2)$ and signal
$X_t\sim \mathscr{N}(0,\sigma_X^2)$ with correlation $\rho$, and sample size $T$, the standard errors are given by\footnote[6]{While $\rho$ can be eliminated from these expressions by inverting either $\gamma_3$ or $\gamma_4$, unlike the case of the standard errors of the Sharpe ratio, the resulting expressions are too complicated to be useful.}
\begin{eqnarray*}
\stderr_{\gamma_3} &=-\frac{6(\hat{\rho}^2-1)}{(\hat{\rho}^2+1)^{5/2}}\cdot\sqrt{\frac{1-\hat{\rho}^2}{T-2}}
\end{eqnarray*}
and
\begin{eqnarray*}
\stderr_{\gamma_4} &=-\frac{48\hat{\rho}(\hat{\rho}^2-1)}{(\hat{\rho}^2+1)^{3}}\cdot\sqrt{\frac{1-\hat{\rho}^2}{T-2}}
\end{eqnarray*}
for $|\hat{\rho}|<1$.
\end{corollary}
We rely on the delta method, recognizing that
$\stderr_{\gamma_k} =\partial{\gamma_k}/\partial{\rho}\cdot \stderr_{\rho}$ for $k=3,4$, together with the following easily calculated derivatives:
\begin{eqnarray}
\label{Eqn:SkewKurtStderr}
\frac{\partial \gamma_3}{\partial \rho} & = -\frac{6(\rho^2-1)}{(\rho^2+1)^{5/2}}\\
\frac{\partial \gamma_4}{\partial \rho} & = -\frac{48\rho(\rho^2-1)}{(\rho^2+1)^{3}}
\end{eqnarray}
As we can tell from the formulas in corollary (\ref{higherStderr}), the derived standard errors for both skewness and kurtosis collapse to zero when $|\rho|=1$.
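The closed-form derivatives can be verified by finite differences. In the sketch below, $\gamma_3(\rho)$ is the skewness formula from the text and $\gamma_4(\rho)$ is written explicitly in $\rho$, reconstructed by us from the $N=1$ moment formulas of the multiple-assets section (the paper's equation (\ref{Eqn:Kurt}) is the authoritative form).

```python
def gamma3(r):
    # Strategy skewness as a function of signal-return correlation
    return 2 * r * (r**2 + 3) / (r**2 + 1) ** 1.5

def gamma4(r):
    # Strategy kurtosis, reconstructed from the N = 1 moments
    return (9 * r**4 + 42 * r**2 + 9) / (r**2 + 1) ** 2

def d_gamma3(r):
    return -6 * (r**2 - 1) / (r**2 + 1) ** 2.5

def d_gamma4(r):
    return -48 * r * (r**2 - 1) / (r**2 + 1) ** 3

# Central finite differences should agree with the closed forms
h = 1e-6
checks = []
for r in (0.1, 0.3, 0.7):
    checks.append((
        (gamma3(r + h) - gamma3(r - h)) / (2 * h) - d_gamma3(r),
        (gamma4(r + h) - gamma4(r - h)) / (2 * h) - d_gamma4(r),
    ))
```

Both derivatives also vanish at $\rho=1$, consistent with the collapsing standard errors noted above; note $\gamma_4(0)=9$ and $\gamma_4(1)=15$, matching the minimal and maximal kurtosis.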
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.33\linewidth]{implied_gaussian_confint_skew_252.png}}
\subfloat{\includegraphics[width=0.33\linewidth]{implied_gaussian_confint_skew_756.png}}
\subfloat{\includegraphics[width=0.33\linewidth]{implied_gaussian_confint_skew_1260.png}}
\caption{{\bf Standard errors for skewness for different sample sizes, implied vs Gaussian} Implied standard errors, especially for skewness are generally larger than those for normal distributions. We argue that the implied standard errors are more appropriate for dynamic strategies.
\label{fig:StdErrSkewGauss} \label{fig:SkewCI1yr} }
\end{figure}
\begin{figure}
\centering
\subfloat{\includegraphics[width=0.33\linewidth]{implied_gaussian_confint_kurt_252.png}}
\subfloat{\includegraphics[width=0.33\linewidth]{implied_gaussian_confint_kurt_756.png}}
\subfloat{\includegraphics[width=0.33\linewidth]{implied_gaussian_confint_kurt_1260.png}}
\caption{{\bf Standard errors for kurtosis for different sample sizes, implied vs Gaussian} Implied kurtosis standard errors are sometimes larger and sometimes tighter than the Gaussian case. We argue that the {\em implied} standard errors are more appropriate for dynamic strategies.} \label{fig:StdErrKurtGauss} \label{fig:KurtCI1yr}
\end{figure}
While we can solve for $\rho$ in terms of $\gamma_k$ for $k=3,4$, the formulas are not easy to present (especially for kurtosis) and we believe that the statement, in terms of correlation is easier to use.
We note that, unlike the argument for using our refined standard errors over those presented in \cite{Lo}, the rationale for using the skewness and kurtosis standard errors presented in equations (\ref{Eqn:SkewKurtStderr}) is that returns are, for most practical purposes, not close to normal, and the product of two normals is more relevant for dynamic strategies. We elaborate on this in Section \ref{Section:Gauss vs Product}.
\section{Multiple assets}
We consider whether there is a diversification benefit from adding more independent {\sl bets} to our portfolio, and to what extent we can benefit from this. For context, we note that portfolios of dynamic strategies can behave very differently from single strategies. For instance, Hoffman and Kaminski (\cite{Hoffman}) have noted that while single strategies can have skewness ranging over $[1.3,1.7]$ and kurtosis over $[8.8,15.3]$, portfolio skewness can be as low as $0.1$.
We first consider $N$ independent returns as an $N$-vector, $R_t\sim {\mathscr N}(0,\sigma^2 I)$, assumed to have the same variance, and devise signals $X_t \sim {\mathscr N}(0,\gamma^2 I)$. The inner product $X_t\cdot R_t$ has a density $\psi$ whose moment generating function is given by \cite{Simons}:
\[
M_N(t)=(1-2t\sigma\gamma\rho-\sigma^{2}\gamma^{2}t^{2}(1-\rho^{2}))^{-N/2}.
\]
From this we can easily derive the first four (raw) moments:
\begin{eqnarray*}
\mu_{1} &=& N \sigma\gamma\rho\\
\mu_{2} &=& N\sigma^{2}\gamma^{2}((N+1)\rho^{2}+1)\\
\mu_{3} &=& N (N+2)\sigma^{3}\gamma^{3}\rho((N+1)\rho^{2}+3)\\
\mu_{4} &=& \sigma^{4}\gamma^{4} \Bigl( (N+6)(N+4)(N+2) N \rho^4+3 (N+2)N (1-\rho^2)^2 + \\
& & \quad 6(N+4)(N+2)N\rho^2(1-\rho^2) \Bigr)
\end{eqnarray*}
Setting $\sigma=\gamma=1$, this leads to the central moments
$$\mu_2^c = N(\rho^2+1)$$
and
$$\mu_3^c = 2 N \rho(\rho^2+3).$$
From these we derive the Sharpe ratio:
$$\SR = \frac{\sqrt{N} \rho} { \sqrt{\rho^2 +1}}$$
Maximizing the SR over $\rho$ leads to a maximum of $\sqrt{N}\cdot\frac{\sqrt{2}}{2}$, clearly showing the benefit of diversification when measuring the Sharpe ratio.
The skewness is
$$ \gamma_3 = \frac{1}{\sqrt{N}} \frac{2 \rho(\rho^2+3)} { ( \rho^2+1)^{3/2}} $$
and at the maximal Sharpe ratio the corresponding skewness,
$$ \gamma_3^{\max} = {8N \over (2N)^{3/2}} = \frac{2\sqrt{2}}{\sqrt{N}},$$
shows a reduction on the order of $1/\sqrt{N}$ in the total number of (orthogonal) assets. This is as expected for large, diverse portfolios: in the limit, a simple application of the central limit theorem should give us asymptotic normality. Effectively, introducing more purely orthogonal assets increases the Sharpe ratio, but decreases the (relatively desirable) positive skewness.
If we have multiple possibly correlated assets and multiple, possibly correlated signals, we assert that an optimal strategy would be to perform {\em canonical correlation analysis} (CCA),
\footnote[7]{Canonical correlation (from \cite{Hotelling}; see, for example, \cite{Rencher}) is defined by first finding the vectors $w_1$ and $v_1$ with $|w_1|=|v_1|=1$ such that $\rho(w_1\cdot R, v_1\cdot X)$ is maximized. The resulting correlation is the {\em canonical correlation}. The {\em canonical variates} are defined by finding subsequent unit vectors $w_k$ and $v_k$ such that $\rho(w_k\cdot R, w_j\cdot R)=\delta_{kj}$, $\rho(v_k\cdot X, v_j\cdot X)=\delta_{kj}$, and $\rho(w_k\cdot R,v_k\cdot X )$ is maximized, leading to $\rho(w_k\cdot R,v_j\cdot X )=r_k\delta_{kj}$. The solution is via a generalized eigenvalue problem
\begin{eqnarray*}
\Sigma_{RR}^{-1}\Sigma_{RX}\Sigma_{XX}^{-1}\Sigma_{XR}w_k&=& r_k^2 w_k\\
\Sigma_{XX}^{-1}\Sigma_{XR}\Sigma_{RR}^{-1}\Sigma_{RX}v_k &=& r_k^2 v_k
\end{eqnarray*}
where $\Sigma$ is the partitioned correlation matrix of $(R,X)$ and the canonical correlates $w_k$ and $v_k$ are the eigenvectors with the same eigenvalues $r_k$. The corresponding portfolios of {\em canonical strategies}, $S_k^{CCA}\equiv (v_k\cdot X)(w_k\cdot R)$, each have returns and variances as characterised by equations (\ref{eqn:centered_moments1}) and (\ref{eqn:centered_moments2}), with corresponding correlations $r_k$ (i.e., with Sharpe ratios given by $\SR[S_k]=r_k/\sqrt{r_k^2+1}$) and, due to their independence, can easily be weighted to optimize the portfolio Sharpe ratio.
The method of weighting the canonical strategies is, of course, similar to a risk-parity portfolio, due to the independence of the strategy returns.
We assert that this method gives the maximal Sharpe ratio for the linear combination of signals and returns, although we leave this proof to a subsequent paper.} resulting in a set of decorrelated strategies (using a linear combination of signals to weight a portfolio of assets). The resulting strategies are decorrelated but with unequal returns and variances. Many results of this section would apply after scaling the portfolio returns. The end result could easily be optimized using simple mean-variance analysis (reweighting the returns on the independent strategies). We leave the details for another study.
While our optimizer is unlikely to be in use among CTAs, it is still notable that widely diversified CTAs (irrespective of underlying asset correlations) appear to have decent Sharpe ratios but relatively lower positive skewness, much in line with the discussion of this section. Our simple results here about the final Sharpe ratio and skewness depend, of course, on the independence of the underlying assets and of the signals themselves, which must only be correlated with their respective asset returns. While this is not an altogether natural setting, it is suggestive of the gains that can be made by introducing purely orthogonal sources of risk, or perhaps by orthogonalizing (or attempting to orthogonalize) asset returns prior to forming signals, later recombining them into a portfolio; this may lead to far more desirable portfolio properties than finding strategies on multiple non-orthogonalized assets.
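The CCA construction described in the footnote can be sketched numerically. The following is a minimal implementation on synthetic data of our own devising (two signals, two assets, only one informative pairing); it solves the generalized eigenproblem by forming $\Sigma_{RR}^{-1}\Sigma_{RX}\Sigma_{XX}^{-1}\Sigma_{XR}$ directly:

```python
import numpy as np

rng = np.random.default_rng(7)

def canonical_correlations(R, X):
    """Sample canonical correlations between returns R and signals X
    (rows are observations), via the generalized eigenvalue problem."""
    Rc, Xc = R - R.mean(0), X - X.mean(0)
    n = len(R) - 1
    Srr, Sxx, Srx = Rc.T @ Rc / n, Xc.T @ Xc / n, Rc.T @ Xc / n
    M = np.linalg.solve(Srr, Srx) @ np.linalg.solve(Sxx, Srx.T)
    r2 = np.sort(np.linalg.eigvals(M).real)[::-1]   # eigenvalues are r_k^2
    return np.sqrt(np.clip(r2, 0.0, 1.0))

T = 100_000
X = rng.standard_normal((T, 2))
R = np.column_stack([
    0.6 * X[:, 0] + 0.8 * rng.standard_normal(T),  # correlation 0.6 with X[:, 0]
    rng.standard_normal(T),                        # pure noise asset
])
cc = canonical_correlations(R, X)
```

The leading canonical correlation recovers the planted value of $0.6$ and the second is near zero, so only the first canonical strategy would carry a meaningful Sharpe ratio $r_1/\sqrt{r_1^2+1}$.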
\section{Gaussian Returns vs Products of Gaussians Returns}
\label{Section:Gauss vs Product}
While we believe that the assumption of Gaussian returns (and Gaussian signal) is a simplification, we also believe this is far more realistic than the assumption of Gaussian returns for a dynamic strategy. Throughout this paper we consider Gaussian (log) returns $R\sim\mathscr{N}(0,\sigma_R^2)$ and Gaussian signal $X\sim\mathscr{N}(0,\sigma_X^2)$ which together are jointly Gaussian, and together form components of the dynamic strategy $S_t= X_t R_t$, whose properties we study.
To be clear, our signal has no foresight and is fully known as of time $t$, while the return $R_t$ accrues from $t$ to $t+\delta t$. All expectations calculated are unconditional or, equivalently, can be thought of as conditioned on information prior to $t$. Consequently, both the signal and the return are random variables.
Were we to consider expectations conditional on $t$, then the resulting strategy returns $S_t$ would be trivially Gaussian. In the unconditional case, the resulting returns are far more interesting and relevant.
CTA returns are known to generally be positively skewed and highly kurtotic over the relevant horizons we are concerned with (i.e., daily, weekly, monthly), as has been noted by \cite{Potters-Bouchaud}, \cite{Hoffman} and others. If we measure far longer-horizon returns, asymptotic theory should show that favourable qualities like skewness may disappear.
Consequently, even though we make many comparisons to results stemming from either asymptotic theory (e.g., \cite{Lo}) or using exact normality, this comparison does not, in fact, compare like-for-like. Clearly \cite{Lo} is appropriate for large-samples, as is possible under conditions when the central limit theorem (CLT) holds, e.g., with weak-dependence, summing returns over increasingly longer horizons, or in the case of a large cross-sectional dimension with increasing numbers of decorrelated assets. For dynamic strategies, asymptotic normality should be expected for large numbers of decorrelated dynamic strategies as well as for long-horizon (e.g., annual or longer, non-overlapping) returns for single dynamic strategies.
Consequently, we believe our standard error results are more appropriate for hypothesis testing on statistics for dynamic strategies. We discuss a strategy for establishing product measures as large-sample limits in appendix \ref{Section:FullDist}, although asymptotics are beyond the scope of this current study.
\section{Conclusion}
Fully systematic dynamic strategies are used by a large portion of the asset management industry as well as by many non-institutional participants. Meanwhile, they are only partly understood.
Many funds and strategies (especially investment-bank {\em smart-beta} or {\em styles-based} products) involve investment in strategies which are not optimised in any sense. Strategies which are paid via index swaps are greatly limited in their adaptability, often leading to highly suboptimal end results. While some very significant results on the theoretical properties of these dynamic strategies have been derived, there is still much more work left to do. Because most academic literature in this area considers more general distributions, there has been no firm foundation on which to build and extend these results.
It is hoped that this paper forms a foundational approach to the study of dynamic strategies and {\em how} to optimize them. We make efforts to understand their properties without claiming to understand {\em why} they work (i.e., why there are stable ACFs in the first place). Given that most asset returns are known to have non-trivial autocorrelations, we can establish many results.
In particular, we have derived a number of results merely by applying well-known techniques to dynamic strategies, e.g.:
\begin{itemize}
\item Strategy returns can be shown to be positively skewed and leptokurtic.
\item Sharpe ratios can be characterized, as can skewness and kurtosis.
\item The standard errors for Sharpe, skewness and kurtosis can be derived.
\item Strategies designed to optimise Sharpe ratios should be based on TLS rather than minimizing prediction error.
\item Gains from adding orthogonal assets/risks can be quantified.
\end{itemize}
Some of these items are empirically well known, but others are genuinely new. Meanwhile, we have extended our results to the derivation of over-fitting penalties akin to Mallows' $C_p$ or AIC, which can be used for model selection and to predict likely out-of-sample Sharpe ratios from in-sample fits (see \cite{Firoozye2}).
Our study is incomplete. We believe that there is a good deal of interesting work to be done in areas such as:
\begin{itemize}
\item optimal linear strategies incorporating transaction costs.
\item optimal linear strategies relaxing normality.
\item normalized linear signals (e.g., z-scores) and optimal non-linear functions of z-scores.\footnote[8]{We note that normalized signals applied to normalized returns series can be represented as the product of two Student t-distributions, which is also relatively well-studied \cite{Multi-T, Joarder} and the results are qualitatively very similar to those which we have produced in this study. However, the more commonly used strategy of applying normalized signals to returns, with the resulting strategies then vol-scaled, cannot be derived as a trivial application of well-known results}
\item non-linear strategies which are optimised to specific utility functions, possibly incorporating smoothness constraints, especially when relaxing normality.
\item local optimality when relaxing stationarity.
\item good-deal bounds in the presence of auto-correlated assets with possible non-stationarity or structural breaks.
\end{itemize}
We note that our assumptions were never meant to be completely realistic: stationary returns with fixed ACF and Gaussian innovations can only work in theory, not in reality. Many quantitative traders design strategies to overcome the challenges of dealing with real-world data issues and the issues of over-fitting.
We nonetheless present them as a good starting point for further analysis, hoping to use this work as the basis for further exploration and to put the general study of dynamic strategies onto a more firm theoretical footing.
Some of our findings should be of note to practitioners. In particular, the use of OLS and other forecast-error-minimizing methods is not necessarily optimal, depending on the problem at hand; total least squares or other correlation-maximizing methods such as CCA may be more efficient. High Sharpe ratios and positive skewness are often quoted as rationales for entering into strategies, and strategies are changed with the rationale of increasing these measures. The significance of any such change depends on confidence intervals or standard errors, and we have derived these specifically for dynamic trading strategies. Kurtosis is not studied as often, but as we show, all dynamic strategies should be leptokurtic, and this is an important attribute of these strategies. Other results, such as over-fitting penalties and optimal non-linear strategies, we save for later papers. With a more solid theoretical footing serving as a sort of {\em rule of thumb} for the development, optimisation, selection and alteration of dynamic strategies, we hope that there is room to improve strategy design.
\vspace{30pt}
\section*{Acknowledgements}
N.\ Firoozye would like to give his wholehearted love and appreciation to Fauziah, for hanging on, when the paper was always {\em almost done.} I am hoping the wait is finally over.
Adriano Soares Koshiyama would like to acknowledge the funding for his PhD studies provided by the Brazilian Research Council (CNPq) through the Science Without Borders program.
The authors would also like to thank Brian Healy and Marco Avellaneda for the many suggestions and encouragement. Finally, were it not for the product design method as practised by Nomura's QIS team, the authors would never have been inspired to pursue a mathematical approach to this topic.
\section{Introduction and results}
Let $p$ be a positive number. We consider random variables with $p$-{\it sub-exponential tail decay}, i.e., random variables $X$ for which there exist two positive constants $c,C$ such that
$$
\mathbb{P}(|X|\ge t)\le c\exp\big(-(t/C)^p\big)
$$
for all $t\ge 0$. We will call such random variables $p$-{\it sub-exponential}.
\begin{exa}
The exponentially distributed random variable $X\sim Exp(1)$ has exponential tail decay, that is,
$\mathbb{P}(X\ge t)= \exp(-t)$ for $t\ge 0$.
It is an example of a random variable with $1$-sub-exponential tail decay ($c=C=1$).
Consider a random variable $Y_p=\theta X^{1/p}$ for some $p,\theta>0$. Observe that for $t\ge 0$
$$
\mathbb{P}(Y_p\ge t)=\mathbb{P}\big(\theta X^{1/p}\ge t\big)=\mathbb{P}\big(X\ge (t/\theta)^{p}\big)=\exp\big(-(t/\theta)^{p}\big).
$$
The random variable $Y_p$ has $p$-sub-exponential tail decay with $c=1$ and $C=\theta$. Let us note that $Y_p$ has the Weibull distribution with shape parameter $p$ and scale parameter $\theta$. One can say that random variables with Weibull distributions form model examples of r.v.s with $p$-sub-exponential tail decay.
\end{exa}
Since random variables with the Poisson and geometric distributions are known to have $1$-sub-exponential tail decay, in a similar way as above we can form further families of $p$-sub-exponential random variables for any $p>0$.
A more interesting case occurs when we start with Gaussian distributions.
\begin{exa}
Let $g$ denote a random variable with the standard normal distribution. It is known that the tails of such variables satisfy the estimate
$$
\mathbb{P}(|g|\ge t)\le\exp(-t^2/2),
$$
for $t\ge 0$ (see for instance \cite[Prop.2.2.1]{Dudley}). Defining now $Y_p=\theta|g|^{2/p}$, by the above estimate, we get
$$
\mathbb{P}(Y_p\ge t)=\mathbb{P}\big(\theta|g|^{2/p}\ge t\big)=\mathbb{P}\big(|g|\ge (t/\theta)^{p/2}\big)\le\exp\Big(-\big[t/(2^{1/p}\theta)\big]^p\Big).
$$
In other words, we obtain another family of r.v.s with $p$-sub-exponential tail decay; $c=1$ and $C=2^{1/p}\theta$.
\end{exa}
Define now a symmetric random variable $g_p$ such that $|g_p|=|g|^{2/p}$. One can calculate that
its density function has the form
$$
f_{p}(x)=\frac{p}{2\sqrt{2\pi}}|x|^{p/2-1}e^{-|x|^{p}/2}.
$$
Let us emphasize that for $p=2$ we get the density of the standard normal distribution.
Observe that $\mathbb{E}g_p=0$ and $\mathbb{E}|g_p|^p=\mathbb{E}g^2=1$.
For any $p>0$ we will call the random variable $g_p$ the {\it standard $p$-normal} ($p$-{\it gaussian})
and write $g_p\sim\mathcal{N}_p(0,1)$, where the first parameter denotes the mean value and the second the absolute $p$-th moment of $g_p$.
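The normalizations $\int f_p=1$ and $\mathbb{E}|g_p|^p=1$ can be verified numerically. The sketch below uses a simple midpoint rule with the illustrative choice $p=3$ (any $p\ge 2$ avoids the integrable singularity of the density at the origin):

```python
import math

def f_p(x, p):
    """Density of the standard p-normal g_p; reduces to N(0,1) at p = 2."""
    return (p / (2 * math.sqrt(2 * math.pi))) \
        * abs(x) ** (p / 2 - 1) * math.exp(-abs(x) ** p / 2)

# Midpoint rule over [0, L], doubled by symmetry
p, h, L = 3.0, 1e-3, 10.0
xs = [h * (i + 0.5) for i in range(int(L / h))]
mass = 2 * h * sum(f_p(x, p) for x in xs)               # total probability
pth_moment = 2 * h * sum(x**p * f_p(x, p) for x in xs)  # E|g_p|^p
```

Both integrals come out equal to $1$ to within the quadrature error, consistent with the substitution $v=|x|^{p/2}$ which reduces each to a Gaussian integral.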
The $p$-sub-exponential random variables can be characterized by finiteness of
$\psi_p$-norms defined as follows
$$
\|X\|_{\psi_p}:=\inf \big\{K>0:\; \mathbb{E}\exp(|X/K|^p)\le 2\big\};
$$
according to the standard convention $\inf\emptyset=\infty$.
We will call the above functional the $\psi_p$-norm, but let us emphasize that only for $p\ge 1$ is it a proper norm. For $0<p<1$ it is a so-called quasi-norm: it does not satisfy the triangle inequality (see Appendix A in \cite{GSS} for more details).
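For the Weibull example above, the $\psi_p$-norm can be computed in closed form, since $\mathbb{E}\exp(|Y_p/K|^p)=\mathbb{E}\exp((\theta/K)^p X)=(1-(\theta/K)^p)^{-1}$ for $K>\theta$; this equals $2$ exactly at $K=2^{1/p}\theta$, so $\|Y_p\|_{\psi_p}=2^{1/p}\theta$. A sketch with illustrative parameters:

```python
import math

# Y_p = theta * X**(1/p), X ~ Exp(1); the exponential moment is the
# Exp(1) MGF evaluated at (theta/K)^p.
p, theta = 1.5, 2.0

def exp_moment(K):
    r = (theta / K) ** p
    return 1.0 / (1.0 - r) if r < 1 else math.inf

K_star = 2 ** (1 / p) * theta   # the psi_p-norm of Y_p
```

The map $K\mapsto\mathbb{E}\exp(|Y_p/K|^p)$ is decreasing, crosses the level $2$ exactly at $K_{\star}$, and is infinite for $K\le\theta$.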
Let us note that $p$-sub-exponential random variables $X$ satisfy the following $p$-sub-exponential tail bound:
$$
\mathbb{P}(|X|\ge t) \le 2\exp\Big(-(t/\|X\|_{\psi_p})^p\Big);
$$
see for instance \cite[Lem.2.1]{Zaj}.
For $x=(x_i)_{i=1}^n\in \mathbb{R}^n$ and $p\ge 1$, let $|x|_p$ denote the $p$-norm of $x$, i.e., $|x|_p=(\sum_{i=1}^n|x_i|^p)^{1/p}$. For a random variable $X$, by $\|X\|_{L^p}$ we will denote the Lebesgue norm of $X$, i.e., $\|X\|_{L^p}=(\mathbb{E}|X|^p)^{1/p}$.
From now on let $X=(X_i)_{i=1}^n$ denote a random vector with real coordinates.
We will be interested in the concentration of the norm $|X|_p$ around $\||X|_p\|_{L^p}=(\sum_{i=1}^n\mathbb{E}|X_i|^p)^{1/p}$ in spaces of $p$-sub-exponential random variables.
In other words we will be interested in an estimate of the norm $\||X|_p-\||X|_p\|_{L^p}\|_{\psi_p}$.
I owe the first result of this type to the anonymous reviewer of the previous version of this paper, to whom I hereby express my thanks.
\begin{pro}
\label{KwaP}
Let $p\ge 1$ and $X=(X_1,...,X_n)\in\mathbb{R}^n$ be a random vector with
independent $p$-sub-exponential coordinates.
Then
$$
\||X|_p-\||X|_p\|_{L^p}\|_{\psi_p}\le n^{1/(2p)}C^{1/p} K_p,
$$
where
$K_p=\max_{1\le i\le n}\|X_i\|_{\psi_p}$ and $C$ is some universal constant.
\end{pro}
Let us emphasize that, for $p\ge 2$, we can remove the factor $n^{1/(2p)}$ on the right-hand side, but under the additional assumption that the $p$-th moments of the coordinates coincide, i.e., $\mathbb{E}|X_1|^p=\mathbb{E}|X_i|^p$, $i=2,3,...,n$. Let us note that then
$\||X|_p\|_{L^p}=n^{1/p}\|X_1\|_{L^p}$.
The main theorem of this paper is the following.
\begin{thm}
\label{mthm}
Let $p\ge 2$ and $X=(X_1,...,X_n)\in\mathbb{R}^n$ be a random vector with independent $p$-sub-exponential coordinates $X_i$ that satisfy $\mathbb{E}|X_1|^p=\mathbb{E}|X_i|^p$, $i=2,3,...,n$. Then
$$
\||X|_p-n^{1/p}\|X_1\|_{L^p}\|_{\psi_p}\le 6^{1/p}C\Big(\frac{K_p}{\|X_1\|_{L^p}}\Big)^{p-1}K_p,
$$
where $K_p:=\max_{1\le i\le n}\|X_i\|_{\psi_p}$ and $C$ is a universal constant.
\end{thm}
\begin{rem}
The above theorem generalizes the concentration result for the Euclidean norm of random vectors with independent sub-gaussian coordinates
(see Vershynin \cite[Th.3.1.1]{Ver}) to $p$-norms of vectors
with $p$-sub-exponential coordinates, for $p\ge 2$.
\end{rem}
Norms are Lipschitz functions on the corresponding normed spaces. Concentration of Lipschitz functions on Gauss space is one of the basic examples of the concentration of measure phenomenon; see e.g. \cite[Th. 5.6]{BLM}, \cite[Cor. 3.2.6]{A-AGM} or, in a general form, \cite[Th. 5.3]{Led}.
In the Gaussian setting one investigates Lipschitz functions on $\mathbb{R}^n$ with respect to the Euclidean norm; in our approach we investigate the $p$-norms. Because Orlicz spaces are Banach lattices, we can immediately formulate some form of concentration for Lipschitz functions $f$ with respect to $p$-norms of random vectors with $p$-sub-exponential coordinates, i.e.,
$$
\big\|f(X)-\|f\|_{Lip}\||X|_p\|_{L^p}\big\|_{\psi_p}\le\|f\|_{Lip}\big\||X|_p-\||X|_p\|_{L^p}\big\|_{\psi_p},
$$
where $\|f\|_{Lip}=\sup_{x\neq y}\frac{|f(x)-f(y)|}{|x-y|_p}$.
Let us note that in our approach the distribution of $f(X)$ does not concentrate around a median of $f$ or the mean of $f(X)$ but around the value
$\|f\|_{Lip}\||X|_p\|_{L^p}$.
Before we proceed to the proofs of our results (Section \ref{sec3}) we first describe more precisely spaces of $p$-sub-exponential random variables (Section \ref{sec2}).
\section{Spaces of $p$-sub-exponential random variables}
\label{sec2}
The $p$-sub-exponential random variables are characterized by the following lemma, whose proof, for $p\ge 1$, can be found in \cite[Lem.2.1]{Zaj}. Let us emphasize that this proof is valid for any positive $p$.
\begin{lem}
\label{charlem}
Let $X$ be a random variable and $p> 0$. There exist positive constants $K,L,M$
such that
the following conditions are equivalent:\\
1. $\mathbb{E}\exp(|X/K|^p)\le 2$\;\;\;$(K\ge \|X\|_{\psi_p})$;\\
2. $\mathbb{P}(|X|\ge t) \le 2\exp(-(t/L)^p)$ for all $t \ge 0$;\\
3. $\mathbb{E}|X|^\alpha\le 2M^\alpha\Gamma\big(\frac{\alpha}{p}+1\big)$ for all $\alpha>0$.
\end{lem}
\begin{rem}
\label{rem1}
The definition of $\psi_p$-norm is based on condition 1. Let us notice that if condition 2 is satisfied with some constant $L$ then
$\|X\|_{\psi_p}\le 3^{1/p}L$ (compare \cite[Rem.2.2]{Zaj}).
\end{rem}
Let $L_0$ denote the space of all random variables defined on a given probability space. By $L_{\psi_p}$ we will denote the space of random variables with finite $\psi_p$-norm:
$$
L_{\psi_p}:=\{X\in L_0:\;\|X\|_{\psi_p}<\infty\}.
$$
For $\psi_p$-norms one can formulate the following
\begin{lem}
\label{norms}
Let $p,r>0$ and $X\in L_{\psi_{pr}}$ then $|X|^p\in L_{\psi_{r}}$ and
$\||X|^p\|_{\psi_{r}}=\|X\|^p_{\psi_{pr}}$.
\end{lem}
\begin{proof}
Let $K=\|X\|_{\psi_{pr}}>0$. Then
$$
2=\mathbb{E}\exp(|X/K|^{pr})=\mathbb{E}\exp\big(\big||X|^p/K^p\big|^{r}\big),
$$
which is equivalent to the conclusion of the lemma.
\end{proof}
Let us emphasize that if we know the moment generating function of a given random variable $|X|$, then we can calculate the $\psi_p$-norm of $|X|^{1/p}$.
\begin{exa}
Let $X\sim Exp(1)$. The moment generating function of $X$ equals $\mathbb{E}\exp(tX)=1/(1-t)$ for $t<1$. Let us observe that
$$
\mathbb{E}\exp(X/K)=\frac{1}{1-1/K}\le 2
$$
if $K\ge 2$. It means that $\|X\|_{\psi_1}=2$. In consequence, Weibull distributed random variables, with shape parameter $p$ and scale parameter $\theta$, have the $\psi_p$-norms:
$$
\|\theta X^{1/p}\|_{\psi_p}=\theta\|X\|_{\psi_1}^{1/p}=\theta 2^{1/p}.
$$
Let us note that starting with the moment generating function of $g^2$ of the form $(1-2t)^{-1/2}$ ($t<1/2$), similarly as above, one can calculate that
$\|g^2\|_{\psi_1}=\|g\|_{\psi_2}^2=8/3$ and $\|g_p\|_{\psi_p}=(8/3)^{1/p}$.
\end{exa}
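Both values can be recovered numerically: since the relevant expectations have closed forms, one can solve $\mathbb{E}\exp(|X|/K)=2$ for $K$ by bisection. A sketch (the helper name is ours):

```python
def solve_for_two(expectation, lo, hi, tol=1e-10):
    # bisection: expectation(K) is decreasing in K; find K with expectation(K) = 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if expectation(mid) > 2:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# E exp(X/K) = 1/(1 - 1/K) for X ~ Exp(1), K > 1
k_exp = solve_for_two(lambda K: 1 / (1 - 1 / K), 1.001, 10.0)
# E exp(g^2/K) = (1 - 2/K)^(-1/2) for g ~ N(0,1), K > 2
k_gsq = solve_for_two(lambda K: (1 - 2 / K) ** -0.5, 2.001, 10.0)
print(k_exp, k_gsq)  # ~2 and ~8/3
```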
Let us notice that, by Jensen's inequality, for $p\ge 1$ the $\psi_p$-norm of the expected value of a $p$-sub-exponential random variable does not exceed the $\psi_p$-norm of the random variable itself, since
$$
2=\mathbb{E}\exp\Big(\big|X/\|X\|_{\psi_p}\big|^p\Big)\ge\exp\Big(\big|\mathbb{E}X/\|X\|_{\psi_p}\big|^p\Big)
=\mathbb{E}\exp\Big(\big|\mathbb{E}X/\|X\|_{\psi_p}\big|^p\Big),
$$
which means that $\|\mathbb{E}X\|_{\psi_p}\le \|X\|_{\psi_p}$. In consequence, for a $p$-sub-exponential random variable, we have
\begin{equation}
\label{cent}
\|X-\mathbb{E}X\|_{\psi_p}\le 2\|X\|_{\psi_p}\quad (p\ge 1).
\end{equation}
$1$-sub-exponential (simply sub-exponential) random variables will play a special role in our considerations. A sub-exponential random variable $X$ with mean zero can be defined by the finiteness of the $\tau_{\varphi_1}$-norm, i.e.,
$$
\tau_{\varphi_1}(X)=\inf\{K>0:\;\ln\mathbb{E}\exp(tX)\le\varphi_\infty(Kt)\;{\rm for\ all}\;t\in\mathbb{R}\}<\infty;
$$
where $\varphi_\infty(x)=x^2/2$ for $|x|\le 1$ and $\varphi_\infty(x)=\infty$ otherwise;
see the definition of $\tau_{\varphi_p}$-norm in \cite{Zaj}, compare Vershynin \cite[Prop.2.7.1]{Ver}. Let us emphasize that the norms $\|\cdot\|_{\psi_1}$ and
$\tau_{\varphi_1}(\cdot)$ are equivalent on the space of centered sub-exponential random variables (compare \cite[Th.2.7]{Zaj}).
\begin{exa}
\label{przy1}
If $X$ is an exponentially distributed random variable with parameter $1$,
then $\mathbb{E}X=1$.
Let us note that the cumulant generating function of $X-\mathbb{E}X=X-1$ equals
$
C_{X-1}(t)=\ln\mathbb{E}\exp\big(t(X-1)\big)=-t-\ln(1-t),\quad t<1.
$
Since $C_{X-1}(0)=0$ and $C_{X-1}^\prime(0)=0$, by the Taylor formula, we get
\begin{equation}
\label{tau}
C_{X-1}(t)=\frac{1}{2}C^{\prime\prime}_{X-1}(\theta_tt)t^2\quad (|t|<1)
\end{equation}
for some $\theta_t\in(0,1)$. Let us notice that $C_{X-1}^{\prime\prime}(t)=1/(1-t)^2$, which is an increasing function for $|t|<1$. Let us observe now that
$\varphi_\infty(Kt)=K^2t^2/2$ if $|t|\le 1/K$ and $\infty$ otherwise. By (\ref{tau}), the infimal $K$ for which
$C_{X-1}(t)\le \varphi_\infty(Kt)$ satisfies the equation $C_{X-1}^{\prime\prime}(1/K)=K^2$. This means that the solution $K$ of the equation
$$
\frac{1}{(1-1/K)^2}=K^2
$$
is the $\tau_{\varphi_1}$-norm of $(X-1)$. Solving this equation we get
$\tau_{\varphi_1}(X-1)=2$.
\end{exa}
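The root $K=2$ of the equation above can also be confirmed numerically, e.g. by bisection on $f(K)=K^2-1/(1-1/K)^2$, which changes sign exactly once on $(1,\infty)$:

```python
def bisect_root(f, lo, hi, tol=1e-12):
    # assumes f(lo) < 0 < f(hi) and a single sign change in between
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# f(K) = K^2 - 1/(1 - 1/K)^2 is negative just above K = 1 and positive for large K
tau_norm = bisect_root(lambda K: K * K - 1 / (1 - 1 / K) ** 2, 1.1, 10.0)
print(tau_norm)  # ~2
```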
In the following lemma it is shown that sub-exponential random variables possess the {\it approximate rotation invariance property}.
\begin{lem}
Let $X_1,...,X_n$ be independent sub-exponential random variables. Then
$$
\tau_{\varphi_1}^2\Big(\sum_{i=1}^n(X_i-\mathbb{E}X_i)\Big)\le \sum_{i=1}^n\tau_{\varphi_1}^2(X_i-\mathbb{E}X_i).
$$
\end{lem}
\begin{proof}
Denote $\tau_{\varphi_1}(X_i-\mathbb{E}X_i)$ by $K_i$, $i=1,...n$. For independent centered sub-exponential r.v.s we have
\begin{eqnarray}
\label{estinf}
\mathbb{E}\exp\Big(t\sum_{i=1}^n(X_i-\mathbb{E}X_i)\Big) & = & \prod_{i=1}^n\mathbb{E}\exp\big(t(X_i-\mathbb{E}X_i)\big)\nonumber\\
\; & \le & \prod_{i=1}^n\exp \varphi_\infty(K_it)=\exp\Big(\sum_{i=1}^n\varphi_\infty(K_it)\Big).
\end{eqnarray}
Observe that
$$
\sum_{i=1}^n\varphi_\infty(K_it)=
\left\{
\begin{array}{ccl}
\frac{1}{2}(\sum_{i=1}^nK_i^2)t^2 & {\rm if} & t\le 1/\max_iK_i,\\
\infty & {\rm otherwise.} & \;
\end{array}
\right.
$$
Since $\max_iK_i\le \sqrt{\sum_{i=1}^nK_i^2}$, we get
$$
\sum_{i=1}^n\varphi_\infty(K_it)\le \varphi_\infty\Big(\Big(\sum_{i=1}^nK_i^2\Big)^{1/2}t\Big).
$$
By the above, the estimate (\ref{estinf}) and the definition of $\tau_{\varphi_1}$-norm we obtain that
$$
\tau_{\varphi_1}\Big(\sum_{i=1}^n(X_i-\mathbb{E}X_i)\Big)\le \Big(\sum_{i=1}^n\tau_{\varphi_1}^2(X_i-\mathbb{E}X_i)\Big)^{1/2}.
$$
\end{proof}
\begin{rem}
Let us note that if $X_i$ are sub-exponential then $|X_i|$ are sub-exponential too.
The above lemma implies that
\begin{equation}
\label{first}
\tau_{\varphi_1}\Big(\sum_{i=1}^n|X_i|-\sum_{i=1}^n\mathbb{E}|X_i|\Big)=\tau_{\varphi_1}\Big(|X|_1-\||X|_1\|_{L^1}\Big)
\le\sqrt{n}\max_{1\le i\le n}\tau_{\varphi_1}\Big(|X_i|-\mathbb{E}|X_i|\Big).
\end{equation}
\end{rem}
In the following example it is shown that the factor $\sqrt{n}$ on the right-hand side is necessary.
\begin{exa}
Let $X_i\sim Exp(1)$, $i=1,...,n$, be independent random variables. Note that the cumulant generating function of their centered sum equals $nC_{X-1}$
($X\sim Exp(1)$), i.e.,
$$
\ln\mathbb{E}\exp[t(\sum_{i=1}^nX_i-n)]=nC_{X-1}(t)=-nt-n\ln(1-t).
$$
As in Example \ref{przy1} we get
$$
nC_{X-1}(t)=\frac{n}{2}C^{\prime\prime}_{X-1}(\theta_tt)t^2\quad (|t|<1)
$$
and the $\tau_{\varphi_1}$-norm of the centered sum of $X_i$ equals $\sqrt{n}+1\sim \sqrt{n}$.
\end{exa}
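The claimed value $\sqrt{n}+1$ solves $n/(1-1/K)^2=K^2$, which can again be checked by bisection; a sketch for $n=9$ (expected root $K=1+\sqrt{9}=4$):

```python
def bisect_root(f, lo, hi, tol=1e-12):
    # assumes f(lo) < 0 < f(hi) and a single sign change in between
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n = 9
# root of K^2 - n/(1 - 1/K)^2; expected at K = 1 + sqrt(n)
k_n = bisect_root(lambda K: K * K - n / (1 - 1 / K) ** 2, 1.1, 50.0)
print(k_n)  # ~4
```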
Because the norms $\tau_{\varphi_1}(\cdot)$ and $\|\cdot\|_{\psi_1}$ are equivalent on the space of centered sub-exponential random variables and
$\||X_i|-\mathbb{E}|X_i|\|_{\psi_1}\le 2 \|X_i\|_{\psi_1}$, $i=1,...,n$, we can rewrite inequality (\ref{first}) in the form
\begin{equation}
\label{first1}
\||X|_1-\||X|_1\|_{L^1}\|_{\psi_1}\le C\sqrt{n}K_1,
\end{equation}
where $K_1:=\max_{1\le i\le n}\|X_i\|_{\psi_1}$ and $C$ is a universal constant.
The proof of the following proposition is similar to the proof of the upper bound in large deviation theory (see for instance \cite[5.11(4)Theorem. Large deviation]{GrimStir}), but with one difference: instead of the cumulant generating function of a given random variable we use its upper estimate by the function $\varphi_\infty$ and, in consequence, the convex conjugate $\varphi_\infty^\ast=\varphi_1$ in the tail estimate (see \cite[Lem. 2.6]{Zaj}), where
$$
\varphi_1(x)=
\left\{
\begin{array}{ccl}
\frac{1}{2}x^2 & {\rm if} & |x|\le 1,\\
|x|-\frac{1}{2} & {\rm if} & |x|>1.
\end{array}
\right.
$$
\begin{pro}
\label{Bern}
Let $X_i$, $i=1,...,n$, be independent sub-exponential random variables.
Then
$$
\mathbb{P}\Big(\Big|\frac{1}{n}\sum_{i=1}^n (X_i-\mathbb{E}X_i)\Big|\ge t\Big)\le 2\exp\Big(-n\varphi_1\Big(\frac{t}{2C_1K}\Big)\Big),
$$
where $K:=\max_{1\le i\le n}\|X_i\|_{\psi_1}$ and $C_1$ is the universal constant such that $\tau_{\varphi_1}(\cdot)\le C_1\|\cdot\|_{\psi_1}$.
\end{pro}
\begin{proof}
The moment generating function of $\frac{1}{n}\sum_{i=1}^n X_i$ can be estimated as follows
$$
\mathbb{E}\exp\Big(u\frac{1}{n}\sum_{i=1}^n (X_i-\mathbb{E}X_i)\Big)=\prod_{i=1}^n\mathbb{E}\exp\Big(u\frac{1}{n} (X_i-\mathbb{E}X_i)\Big)
\le \prod_{i=1}^n\exp\Big(\varphi_\infty \Big(\frac{1}{n}\tau_{\varphi_1}((X_i-\mathbb{E}X_i))u\Big)\Big)
$$
$$
\le \prod_{i=1}^n\exp\Big(\varphi_\infty \Big(\frac{1}{n}C_1\| X_i-\mathbb{E}X_i\|_{\psi_1}u\Big)\Big)
\le\exp\Big(n\varphi_\infty \Big(\frac{2}{n}C_1Ku\Big)\Big).
$$
The convex conjugate of the function $f(u):=n\varphi_\infty(\frac{2}{n}C_1Ku)$ equals
\begin{eqnarray*}
f^\ast(t)&=&\sup_{u\in\mathbb{R}}\Big\{tu-n\varphi_\infty\Big(\frac{2}{n}C_1Ku\Big)\Big\}=\sup_{u>0}\Big\{tu-n\varphi_\infty\Big(\frac{2}{n}C_1Ku\Big)\Big\}\\
\; &=& n\sup_{u>0}\Big\{\frac{t}{2C_1K}\frac{2C_1Ku}{n}-\varphi_\infty\Big(\frac{2}{n}C_1Ku\Big)\Big\}=
n\sup_{v>0}\Big\{\frac{t}{2C_1K}v-\varphi_\infty(v)\Big\}\\
\; & = & n\varphi_1(t/2C_1K);
\end{eqnarray*}
the second equality holds since $\varphi_\infty$ is an even function, the fourth one by substituting $v=\frac{2}{n}C_1Ku$, and the last one by the definition of the convex conjugate for even functions and the equality $\varphi_\infty^\ast=\varphi_1$.
Similarly as in \cite[Lem. 2.4.3]{BulKoz} (formally $f$ and $f^\ast$ are not $N$-functions, but the proof is the same for these functions as well), we get
$$
\mathbb{P}\Big(\Big|\frac{1}{n}\sum_{i=1}^n (X_i-\mathbb{E}X_i)\Big|\ge t\Big)\le 2\exp\Big(-n\varphi_1\Big(\frac{t}{2C_1K}\Big)\Big).
$$
\end{proof}
\begin{rem}
Let us emphasize that, since
$$
\varphi_1\big(t/(2C_1K)\big)\ge \frac{1}{2}\min\big\{t^2/(4C_1^2K^2),t/(2C_1K)\big\},
$$
the above estimate implies a form of Bernstein's inequality for averages:
$$
\mathbb{P}\Big(\Big|\frac{1}{n}\sum_{i=1}^n (X_i-\mathbb{E}X_i)\Big|\ge t\Big)\le 2\exp\Big(-\frac{n}{2}\min\Big\{\frac{t^2}{4C_1^2K^2},\frac{t}{2C_1K}\Big\}\Big);
$$
compare Vershynin \cite[Cor.2.8.3]{Ver}.
\end{rem}
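The pointwise inequality $\varphi_1(x)\ge\frac{1}{2}\min\{x^2,x\}$ for $x\ge 0$, used above, is easy to verify on a grid (for $x\le 1$ it holds with equality):

```python
def phi_1(x):
    # the convex conjugate of phi_infinity, as defined in the text
    x = abs(x)
    return 0.5 * x * x if x <= 1 else x - 0.5

grid = [i / 100 for i in range(0, 1000)]
ok = all(phi_1(x) >= 0.5 * min(x * x, x) - 1e-12 for x in grid)
print(ok)  # True
```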
\section{Proofs of the results}
\label{sec3}
{\it Proof of Proposition \ref{KwaP}.}
Because the function $a\mapsto a^{1/p}$ is concave on the nonnegative half-line of real numbers, the following inequality
\begin{equation}
\label{triK}
\big|a-b\big|\ge\big|a^{1/p}-b^{1/p}\big|^p
\end{equation}
holds for any $a,b\ge 0$.
If $X_i$, $i=1,...,n$, are $p$-sub-exponential random variables, then the $|X_i|^p$ are sub-exponential ones.
Let $Y_i$ denote $|X_i|^p$ and let $Y$ be the vector $(Y_i)_{i=1}^n$. By Lemma \ref{norms} we have $\|Y_i\|_{\psi_1}=\|X_i\|_{\psi_p}^p$. Moreover $|Y|_1=|X|_p^p$ and
$\||Y|_1\|_{L^1}=\||X|_p\|_{L^p}^p$. Substituting in (\ref{first1}) $Y$ instead of $X$ we get
$$
\||Y|_1-\||Y|_1\|_{L^1}\|_{\psi_1}=\||X|_p^p-\||X|_p\|_{L^p}^p\|_{\psi_1}\le C\sqrt{n}K_p^p,
$$
where $K_p:=\max_{1\le i \le n}\|X_i\|_{\psi_p}$ and $C$ is a universal constant.
By the definition of $\psi_1$-norm and inequality (\ref{triK}) with $a=|X|_p^p$ and $b=
\||X|_p\|_{L^p}^p$ we obtain
$$
2\ge\mathbb{E}\exp\Big(\frac{\big||X|_p^p-\||X|_p\|_{L^p}^p\big|}{C\sqrt{n}K_p^p}\Big)\ge
\mathbb{E}\exp\Big(\frac{\big||X|_p-\||X|_p\|_{L^p}\big|^{p}}{\big[(C\sqrt{n})^{1/p}K_p\big]^p}\Big),
$$
which means that
$$
\big\||X|_p-\||X|_p\|_{L^p}\big\|_{\psi_p}\le (C\sqrt{n})^{1/p}K_p.
$$
This finishes the proof of Proposition \ref{KwaP}.
The structure of the proof of Theorem \ref{mthm} is similar to the proof in Vershynin \cite[Th. 3.1.1]{Ver} but, apart from Proposition \ref{Bern} and Lemma \ref{charlem}, we also use the following two technical lemmas.
\begin{lem}
\label{xalfa}
Let $x,\delta\ge 0$ and $p\ge 1$. If $|x-1|\ge \delta$ then $|x^p-1|\ge\max\{\delta,\delta^p\}$.
\end{lem}
\begin{proof}
Under the above assumption on $x$ and $p$ we have: $|x^p-1|\ge |x-1|$. It means that if $|x-1|\ge\delta$ then $|x^p-1|\ge\delta$. For $0\le\delta\le 1$ we have $\delta^p\le\delta$. In consequence
$|x^p-1|\ge\max\{\delta,\delta^p\}$ for $0\le\delta\le 1$.
Suppose now that $\delta>1$. The condition $|x-1|\ge \delta$ is equivalent to
$x\ge\delta+1$ (if $x\ge 1$) or $x\le 1-\delta$ (if $0\le x\le 1$). Let us observe that the second possibility cannot occur for $\delta>1$ and $x\ge 0$. The first one gives $x^p\ge (\delta+1)^p\ge\delta^p+1$ (for $p\ge 1$), which is equivalent to $x^p-1\ge\delta^p$ for $x\ge 1$. Summing up, we get
$|x^p-1|\ge\max\{\delta,\delta^p\}$ for $x,\delta\ge 0$ and $p\ge 1$.
\end{proof}
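Lemma \ref{xalfa} can be stress-tested numerically over a grid of $x$, $\delta$ and several exponents $p\ge 1$; a sketch:

```python
def check_lemma(p, xs, deltas, eps=1e-12):
    # whenever |x - 1| >= delta, verify |x^p - 1| >= max(delta, delta^p)
    for x in xs:
        for d in deltas:
            if abs(x - 1) >= d and abs(x ** p - 1) < max(d, d ** p) - eps:
                return False
    return True

xs = [i / 20 for i in range(0, 101)]      # x in [0, 5]
deltas = [i / 20 for i in range(0, 81)]   # delta in [0, 4]
ok = all(check_lemma(p, xs, deltas) for p in (1, 1.5, 2, 3))
print(ok)  # True
```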
\begin{lem}
\label{lem2}
If $p\ge 2$ then $\varphi_1(\max\{\gamma,\gamma^p\})\ge \frac{1}{2}\gamma^p$ for $\gamma\ge 0$.
\end{lem}
\begin{proof}
By the definition of $\varphi_1$ we have
$$
\varphi_1(\max\{\gamma,\gamma^p\})=
\left\{
\begin{array}{ccl}
\frac{1}{2}\gamma^2 & {\rm if} & 0\le\gamma\le 1,\\
\gamma^p-\frac{1}{2} & {\rm if} & 1<\gamma.
\end{array}
\right.
$$
If $0\le\gamma\le 1$ then $\varphi_1(\max\{\gamma,\gamma^p\})=\frac{1}{2}\gamma^2\ge \frac{1}{2}\gamma^p$ for $p\ge 2$.\\
If $1<\gamma$ then the inequality $\varphi_1(\max\{\gamma,\gamma^p\})=\gamma^p-\frac{1}{2}>\frac{1}{2}\gamma^p$ also holds.
\end{proof}
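A similar grid check of Lemma \ref{lem2} for several $p\ge 2$:

```python
def phi_1(x):
    # the convex conjugate of phi_infinity, as defined in the text
    x = abs(x)
    return 0.5 * x * x if x <= 1 else x - 0.5

def lemma_holds(p, gammas, eps=1e-12):
    # phi_1(max(gamma, gamma^p)) >= gamma^p / 2 for p >= 2
    return all(phi_1(max(g, g ** p)) >= 0.5 * g ** p - eps for g in gammas)

gammas = [i / 50 for i in range(0, 301)]  # gamma in [0, 6]
ok = all(lemma_holds(p, gammas) for p in (2, 2.5, 3, 5))
print(ok)  # True
```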
{\it Proof of Theorem \ref{mthm}}.
Let us observe that the expression
$$
\frac{1}{n\|X_1\|_{L^p}^p}|X|_p^p-1=\frac{1}{n}\sum_{i=1}^n\Big(\frac{|X_i|^p}{\|X_1\|_{L^p}^p}-1\Big)
$$
is the sum of independent and centered sub-exponential random variables. Moreover, by condition (\ref{cent}) and Lemma \ref{norms}, we have
$$
\Big\|\frac{|X_i|^p}{\|X_1\|_{L^p}^p}-1\Big\|_{\psi_1}\le \frac{2\||X_i|^p\|_{\psi_1}}{\|X_1\|_{L^p}^p}=\frac{2\|X_i\|^p_{\psi_p}}{\|X_1\|_{L^p}^p}\le \frac{2K^p_p}{\|X_1\|_{L^p}^p}.
$$
Now, by virtue of Lemma \ref{xalfa} and Proposition \ref{Bern}, we get
\begin{eqnarray}
\label{est1}
\mathbb{P}\Big(\Big|\frac{1}{n^{1/p}\|X_1\|_{L^p}}|X|_p-1\Big|\ge \delta\Big)&\le&
\mathbb{P}\Big(\Big|\frac{1}{n\|X_1\|_{L^p}^p}|X|_p^p-1\Big|\ge \max\{\delta,\delta^p\}\Big)\nonumber\\
\; &\le& 2\exp\Big(-n\varphi_1\Big(\frac{\|X_1\|_{L^p}^p\max\{\delta,\delta^p\}}{2C_1 K^p_p}\Big)\Big),
\end{eqnarray}
for any $\delta\ge 0$; in the estimates below, $C$ denotes any constant with $C\ge 2C_1$.
The inequality
$$
2\ge\mathbb{E}\exp\Big(\Big(\frac{|X_i|}{\|X_i\|_{\psi_p}}\Big)^p\Big)\ge 1+\mathbb{E}\Big(\frac{|X_i|}{\|X_i\|_{\psi_p}}\Big)^p
$$
implies that $\|X_i\|_{\psi_p}^p\ge \mathbb{E}|X_i|^p=\|X_1\|_{L^p}^p$, $i=1,...,n$, and, in consequence,
$K_p^p\ge \|X_1\|_{L^p}^p$.
Since $C_1>1$, we have that $\|X_1\|_{L^p}^p/(2C_1 K^p_p)$ is less than $1$.
Under this condition we have
$$
\frac{\|X_1\|_{L^p}^p\max\{\delta,\delta^p\}}{2C_1 K^p_p}\ge \max\Big\{\frac{\|X_1\|_{L^p}^p\delta}{2C_1K_p^p},\Big(\frac{\|X_1\|_{L^p}^p\delta}{2C_1K_p^p}
\Big)^p\Big\}.
$$
By the definition of $\varphi_1$ and Lemma \ref{lem2} with $\gamma= \|X_1\|_{L^p}^p\delta/(2C_1K_p^p)$ we get
$$
\varphi_1\Big(\frac{\|X_1\|_{L^p}^p\max\{\delta,\delta^p\}}{2C_1 K^p_p}\Big)\ge \varphi_1\Big(\max\Big\{\frac{\|X_1\|_{L^p}^p\delta}{CK_p^p},\Big(\frac{\|X_1\|_{L^p}^p\delta}{CK_p^p}\Big)^p\Big\}\Big)\ge\frac{1}{2}
\Big(\frac{\|X_1\|_{L^p}^p\delta}{CK_p^p}\Big)^p.
$$
Rearranging (\ref{est1}) and applying the above estimate we obtain the following
\begin{eqnarray*}
\mathbb{P}\Big(\Big||X|_p-n^{1/p}\|X_1\|_{L^p}\Big|\ge n^{1/p}\|X_1\|_{L^p}\delta\Big)&=&
\mathbb{P}\Big(\Big|\frac{1}{n^{1/p}\|X_1\|_{L^p}}|X|_p-1\Big|\ge \delta\Big)\\
\; &\le& 2\exp\Big(-n\frac{1}{2}
\Big(\frac{\|X_1\|_{L^p}^p\delta}{CK_p^p}\Big)^p\Big)\\
\; &= & 2\exp\Big(-\Big(\frac{n^{1/p}\|X_1\|_{L^p}^p\delta}{2^{1/p}CK_p^p}\Big)^p\Big).
\end{eqnarray*}
Changing variables to $t=n^{1/p}\|X_1\|_{L^p}\delta$, we get the following $p$-sub-exponential tail decay
$$
\mathbb{P}\Big(\Big||X|_p-n^{1/p}\|X_1\|_{L^p}\Big|\ge t\Big)\le 2\exp\Big(-\Big(\frac{\|X_1\|_{L^p}^{p-1} t}{2^{1/p}CK_p^p}\Big)^p\Big).
$$
By Lemma \ref{charlem} and Remark \ref{rem1} we obtain
$$
\big\||X|_p-n^{1/p}\|X_1\|_{L^p}\big\|_{\psi_p}\le 6^{1/p}C\Big(\frac{K_p}{\|X_1\|_{L^p}}\Big)^{p-1}K_p,\quad{\rm for}\; p\ge 2,
$$
which finishes the proof of Theorem \ref{mthm}.
\begin{exa}
Let
$
{\bf g}_p=(g_{p,1},...,g_{p,n})
$
be a random vector with independent standard $p$-normal coordinates ($g_{p,i}\sim\mathcal{N}_p(0,1)$).
Recall that $\|g_{p,i}\|_{L^p}=1$ and $\|g_{p,i}\|_{\psi_p}= (8/3)^{1/p}$, for $i=1,...,n$. Thus $K_p^p=8/3$. By
Theorem \ref{mthm} we get
$$
\||{\bf g}_p|_p-n^{1/p}\|_{\psi_p}
\le \frac{8}{3}C6^{1/p}\quad{\rm for}\; p\ge 2.
$$
\end{exa}
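Since $|g_{p,i}|^p=g_i^2$ for a standard normal $g_i$, we have $|{\bf g}_p|_p=\big(\sum_{i=1}^n g_i^2\big)^{1/p}$, so the concentration of $|{\bf g}_p|_p$ around $n^{1/p}$ is easy to observe by simulation; a sketch with the standard library generator:

```python
import random

random.seed(0)
n = 10000
# |g_p|_p^p equals the sum of g_i^2, because |g_{p,i}|^p = |g_i|^2
chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n))

for p in (2, 3, 4):
    deviation = chi2 ** (1.0 / p) - n ** (1.0 / p)
    # the deviation stays of constant order (and shrinks with p),
    # although n^(1/p) itself grows with n
    print(p, deviation)
```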
\begin{rem}
Many problems dealing with sub-gaussian and sub-exponential random variables may be considered in the spaces of $p$-sub-exponential random variables for any positive $p$. In the paper by G\"otze et al. \cite{GSS} one can find generalizations and applications of some concentration inequalities for polynomials of such variables in the case $0<p\le 1$. In this paper we focus our attention on the concentration of norms of random vectors with independent $p$-sub-exponential coordinates.
\end{rem}
\section{Introduction}
\label{sec:1}
Molecular biology has become a high-volume information science. This rapid transformation has taken place over the past two decades and has been chiefly enabled by two technological advances: (\emph{i}) affordable and accessible high-throughput sequencing platforms, sequence diagnostic platforms and proteomic platforms, and (\emph{ii}) affordable and accessible computing platforms for managing and analyzing these data.
It is estimated that sequence data accumulates at the rate of 100 exabases per day (1 exabase~$=10^{18}$ bases) \cite{Stephens2015Big}. However, the available sequence data are of limited use without understanding their biological implications. Therefore, the development of computational methods that provide clues about functional roles of biological macromolecules is of primary importance.
Many function prediction methods have been developed over the past two decades \cite{Friedberg2006Automated, Rentzsch2009Protein}. Some are based on sequence alignments to proteins for which the function has been experimentally established \cite{Martin2004, Engelhardt2005, Clark2011}, yet others exploit other types of data such as protein structure \cite{Pazos2004, Pal2005}, protein and gene expression data \cite{Huttenhower2006}, macromolecular interactions \cite{Letovsky2003, Nabieva2005}, scientific literature \cite{Camon2005}, or a combination of several data types \cite{Troyanskaya2003, Sokolov2010, Cozzetto2013Protein}.
Typically, each new method is trained and evaluated on different data. Therefore, establishing best practices in method development and evaluating the accuracy of these methods in a standardized and unbiased setting is important. To help choose an appropriate method for a particular task, scientists often form community challenges for evaluating methods \cite{Costello2013}. The scope of these community challenges extends beyond testing methods: they have been successful in invigorating their respective fields of research by building communities and producing new ideas and collaborations (e.g. \cite{Kryshtafovych2014CASP10}).
In this chapter we discuss a community-wide effort whose goal is to help understand the state of affairs in computational protein function prediction and drive the field forward. We are holding a series of challenges which we named the Critical Assessment of Functional Annotation, or CAFA. CAFA was first held in 2010-2011 (CAFA1) and included 23 groups from 14 countries who entered 54 computational function prediction methods that were assessed for their accuracy. To the best of our knowledge, this was the first large-scale effort to provide insights into the strengths and weaknesses of protein function prediction software in the bioinformatics community. CAFA2 was held in 2013-2014, and more than doubled the number of groups (56) and participating methods (126). Although several repetitions of the CAFA challenge would likely give an accurate trajectory of the field, there are valuable lessons already learned from the two CAFA efforts.
For further reading on CAFA1, the results were reported in full in \cite{Radivojac2013LargeScale}. As of this time, the results of CAFA2 are still unpublished and will be reported in the near future. The preprint of the paper is available on arXiv \cite{Jiang2015}.
\section{Organization of the CAFA challenge}
We begin our explanation of CAFA by describing the participants. The CAFA challenge generally involves the following groups: the organizers, the assessors, the biocurators, the steering committee, and the predictors (Figure~\ref{fig:cafa_time}A).
\begin{figure}
\sidecaption
\includegraphics[width=\textwidth]{pics/Organization.pdf}
\caption{\textbf{The organizational structure of the CAFA experiment.} (A) Five groups of participants in the experiment together with their main roles. Organizers, assessors and biocurators cannot participate as predictors. (B) Timeline of the experiment.}
\label{fig:cafa_time}
\end{figure}
The main role of the organizers is to run CAFA smoothly and efficiently. They advertise the challenge to recruit predictors, coordinate activities with the assessors, report to the steering committee, establish the set of challenges and types of evaluation, and run the CAFA web site and social networks. The organizers also compile CAFA data and coordinate the publication process.
The assessors develop assessment rules, write and maintain assessment software, collect the submitted prediction data, assess the data, and present the evaluations to the community. The assessors work together with the organizers and the steering committee on standardizing submission formats and developing assessment rules. The biocurators joined the experiment during CAFA2: they provide additional functional annotations that may be particularly interesting for the challenge. The steering committee members are in regular contact with the organizers and assessors. They provide advice and guidance that ensures the quality and integrity of the experiment. Finally, the largest group, the predictors, consists of research groups who develop methods for protein function prediction and submit their predictions for evaluation. The organizers, assessors and biocurators are not allowed to officially evaluate their own methods in CAFA.
CAFA is run as a timed-challenge (Figure~\ref{fig:cafa_time}B). At time $t_0$, a large number of experimentally unannotated proteins are made public by the organizers and the predictors are given several months, until time $t_1$, to upload their predictions to the CAFA server. At time $t_1$ the experiment enters a waiting period of at least several months, during which the experimental annotations are allowed to accumulate in databases such as Swiss-Prot \cite{Bairoch2005} and UniProt-GOA \cite{Huntley2015}. These newly accumulated annotations are collected at time $t_2$ and are expected to provide experimental annotations for a subset of original proteins. The performance of participating methods is then analyzed between time points $t_2$ and $t_3$ and presented to the community at time $t_3$. It is important to mention that unlike some machine learning challenges, CAFA organizers do not provide training data that is required to be used. CAFA, thus, evaluates a combination of biological knowledge, the ability to collect and curate training data and the ability to develop advanced computational methodology.
We have previously described some of the principles that guide us in organizing CAFA \cite{Friedberg2015}. It is important to mention that CAFA is associated with the Automated Function Prediction Special Interest Group (Function-SIG) that is regularly held at the Intelligent Systems for Molecular Biology (ISMB) conference \cite{Wass2014Automated}. These meetings provide a forum for exchanging ideas and communicating research among the participants. Function-SIG also serves as the venue at which CAFA results are initially presented and where the feedback from the community is sought.
\section{The Gene Ontology provides the functional repertoire for CAFA}
\label{sec:go-info}
Computational function prediction methods have been reviewed extensively \cite{Friedberg2006Automated, Rentzsch2009Protein} and are also discussed in the chapter by Cozzetto \& Jones. Briefly, a function prediction method can be described as a classifier: an algorithm that is tasked with correctly assigning biological function to a given protein. This task, however, is arbitrarily difficult unless the function comes from a finite, preferably small, set of functional terms. Thus, given an unannotated protein sequence and a set of available functional terms, a predictor is tasked with associating terms to a protein, giving a score (ideally, a probability) to each association.
The Gene Ontology (GO) \cite{Ashburner2000Gene} is a natural choice when looking for a standardized, controlled vocabulary for functional annotation. GO's high adoption rate in the protein annotation community helped ensure CAFA's attractiveness, as many groups were already developing function prediction methods based on GO, or could migrate their methods to GO as the ontology of choice. A second consideration is GO's ongoing maintenance: GO is continuously maintained by the Gene Ontology Consortium, edited and expanded based on ongoing discoveries related to the function of biological macromolecules.
One useful characteristic of the basic GO is that its directed acyclic graph structure can be used to quantify the information provided by the annotation; for details on the GO structure see the chapter by Munoz-Torres \textit{et al.} Intuitively, this can be explained as follows: the annotation term ``Nucleic acid binding" is less specific than ``DNA binding" and, therefore, is less informative (or has a lower \textit{information content}). (A more precise definition of information content and its use in GO can be found in \cite{Lord2003Semantic, Schnoes2013Biases}.) The following question arises: if we know that the protein is annotated with the term ``Nucleic acid binding", how can we quantify the additional information provided by the term ``DNA binding" or incorrect information provided by the term ``RNA binding"? The hierarchical nature of GO is therefore important in determining proper metrics for annotation accuracy. The way this is done will be discussed in Section \ref{sec:assessing_quality}.
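To make the notion of information content concrete, a common choice is $\mathrm{IC}(t)=-\log_2 \Pr(t)$, with $\Pr(t)$ estimated from term frequencies in an annotation corpus; the difference of IC between a child and its parent then quantifies the additional information. A toy sketch (the frequencies below are invented for illustration):

```python
import math

# hypothetical term frequencies among annotated proteins (illustrative only)
freq = {
    "nucleic acid binding": 0.20,
    "DNA binding": 0.08,
    "RNA binding": 0.05,
}

ic = {term: -math.log2(f) for term, f in freq.items()}

# additional information carried by "DNA binding" over its parent term;
# since every DNA-binding protein also binds nucleic acids, this equals
# -log2 P(child | parent)
extra = ic["DNA binding"] - ic["nucleic acid binding"]
print(ic, extra)
```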
When annotating a protein with one or more GO terms, the association of each GO term with the protein should be described using an Evidence Code (EC), indicating how the annotation is supported. For example, the Experimental Evidence code (EXP) is used in an annotation to indicate that an experimental assay has been located in the literature, whose results indicate a gene product's function. Other experimental evidence codes include Inferred by Expression Pattern (IEP), Inferred from Genetic Interaction (IGI), and Inferred from Direct Assay (IDA), among others. Computational evidence codes include lines of evidence that were generated by computational analysis, such as orthology (ISO), genomic context (IGC), or identification of key residues (IKR). Evidence codes are not intended to be a measure of trust in the annotation, but rather a measure of provenance for the annotation itself. However, annotations with experimental evidence are regarded as more reliable than computational ones, having a provenance stemming from experimental verification. In CAFA, we treat proteins annotated with experimental evidence codes as a ``gold standard'' for the purpose of assessing predictions, as explained in the next section. The computational evidence codes are treated as predictions.
From the point of view of a computational challenge, it is important to emphasize that the hierarchical nature of the GO graph leads to the property of \emph{consistency} in functional annotation. Consistency means that when annotating a protein with a given GO term, it is automatically annotated with all the ancestors of that term. For example, a valid prediction cannot include ``DNA binding" but exclude ``Nucleic acid binding" from the ontology because DNA binding implies nucleic acid binding. We say that a prediction is not consistent if it includes a child term, but excludes its parent. In fact, the UniProt resource and other databases do not even list these parent terms from a protein's experimental annotation. If a protein is annotated with several terms, a valid complete annotation will automatically include all parent terms of the given terms, propagated to the root(s) of the ontology. The result is that a protein's annotation can be seen as a consistent sub-graph of GO. Since any computational method effectively chooses one of a vast number of possible consistent sub-graphs as its prediction, the sheer size of the functional repertoire suggests that function prediction is non-trivial.
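The consistency requirement, that annotating a term implies annotating all of its ancestors, amounts to a graph closure over the is-a relation. A minimal sketch over a toy GO fragment (the edge set below is a simplified illustration, not the real ontology):

```python
# toy is-a edges: term -> set of direct parents (illustrative fragment)
PARENTS = {
    "molecular_function": set(),
    "binding": {"molecular_function"},
    "nucleic acid binding": {"binding"},
    "DNA binding": {"nucleic acid binding"},
    "RNA binding": {"nucleic acid binding"},
}

def propagate(terms):
    """Return the consistent annotation: the input terms plus all ancestors."""
    closed, stack = set(), list(terms)
    while stack:
        term = stack.pop()
        if term not in closed:
            closed.add(term)
            stack.extend(PARENTS[term])
    return closed

annotation = propagate({"DNA binding"})
print(sorted(annotation))
```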
\section{Comparing the performance of prediction methods}
\label{sec:performance}
In the CAFA challenge, we ask the participants to associate a large number of proteins with GO terms and provide a probability score for each such association. Having associated a set of GO sub-graphs with a given confidence, the next step is to assess how accurate these predictions are. This involves: ($i$) establishing standards of truth and ($ii$) establishing a set of assessment metrics.
\subsection{Establishing standards of truth}
The main challenge in establishing a standard-of-truth set for testing function prediction methods is to find a large set of correctly annotated proteins whose functions were, until recently, unknown. An obvious choice would be to ask experimental scientists to provide these data from their labs. However, scientists prefer to keep the time between discovery and publication as brief as possible, which means that there is only a small window in which new experimental annotations are not widely known and can be used for assessment. Furthermore, each experimental group has its own ``data sequestration window,'' making it hard to establish a common time for all data providers to sequester their data. Finally, to establish a good statistical baseline for assessing prediction method performance, a large number of prediction targets are needed, which is problematic since most laboratories research one or only a few proteins each. High-throughput experiments, on the other hand, provide a large number of annotations, but those tend to concentrate on only a few functions and generally provide annotations that have a lower information content \cite{Schnoes2013Biases}.
Given these constraints, we decided that CAFA would not initially rely on direct communication between the CAFA organizers and experimental scientists to provide new functional data. Instead, CAFA relies primarily on established biocuration activities around the world: we use annotation databases to conduct CAFA as a time-based challenge. To do so, we exploit the fact that protein annotation databases grow over time: many proteins that at a given time $t_1$ lack experimentally verified annotations later gain them, as biocurators add these data to the databases. The subset of proteins that were not experimentally annotated at $t_1$ but gained experimental annotations at $t_2$ is the one we use as a test set during assessment (Figure~\ref{fig:cafa_time}B). In CAFA1 we reviewed the growth of Swiss-Prot over time and chose 50,000 \textit{target proteins} that had no experimental annotation in the Molecular Function or Biological Process ontologies of GO. At $t_2$, out of those 50,000 targets we identified 866 \textit{benchmark proteins}; i.e., targets that gained experimental annotation in the Molecular Function and/or Biological Process ontologies. While a benchmark set of 866 proteins constitutes only 1.7\% of the number of original targets, it is a large enough set for assessing the performance of prediction methods. To conclude, exploiting the history of the Swiss-Prot database enabled its use as the source of standard-of-truth data for CAFA. In CAFA2, we also considered experimental annotations from UniProt-GOA \cite{Huntley2015} and established 3,681 benchmark proteins out of 100,000 targets (3.7\%).
One criticism of a time-based challenge is that when assessing predictions, we still may not have a full knowledge of a protein's function. A protein may have gained experimental validation for function $f_1$, but it may also have another function, say $f_2$, associated with it, which has not been experimentally validated by the time $t_2$. A method predicting $f_2$ may be judged to have made a false-positive prediction, even though it is correct (only we do not know it yet). This problem, known as the ``incomplete knowledge problem'' or the ``open world problem'' \cite{Dessimoz2013}
is discussed in detail in the chapter by Skunca \textit{et al}. Although the incomplete-knowledge problem may affect the accuracy of time-based evaluations, its actual impact in CAFA has not been substantial. There are several reasons for this, including the robustness of the evaluation metrics used in CAFA and the fact that newly added terms may be unexpected and thus harder to predict. The influence of incomplete data, and the conditions under which it can affect a time-based challenge, were investigated in \cite{Jiang2014Impact}. Another criticism of CAFA is that experimental functional annotations are biased: some terms occur much more frequently than others for reasons unrelated to biology. There are two chief reasons for this bias. First, high-throughput assays typically assign shallow terms to proteins, yet their sheer throughput means they can dominate the experimentally verified annotations in the databases. Second, biomedical research is driven by specific areas of human health, resulting in over-representation of health-related functions \cite{Schnoes2013Biases}. Unfortunately, CAFA1 and CAFA2 could not guarantee unbiased evaluation. However, we will expand the challenge in CAFA3 to collect genome-wide experimental evidence for several biological terms, which will enable unbiased evaluation on those specific terms.
\subsection{Assessment metrics}
\label{sec:assessing_quality}
When assessing the prediction quality of different methods, two questions come to mind. First, what makes a good prediction? Second, how can one score and rank prediction methods? There is no simple answer to either of these questions. As GO comprises three ontologies that deal with different aspects of biological function, methods should be ranked separately with respect to how well they perform in the Molecular Function, Biological Process, and Cellular Component ontologies. Some methods are trained to predict only a subset of any given GO graph. For example, they may only provide predictions of DNA-binding proteins or of mitochondrial-targeted proteins. Furthermore, some methods are trained only on a single species or a subset of species (say, eukaryotes), or using specific types of data such as protein structure, and it does not make sense to test them on benchmark sets for which they were not trained. To address this issue, CAFA scored methods both on overall performance and on specific subsets of proteins taken from humans and model organisms, including \textit{Mus musculus}, \textit{Rattus norvegicus}, \textit{Arabidopsis thaliana}, \textit{Drosophila melanogaster}, \textit{Caenorhabditis elegans}, \textit{Saccharomyces cerevisiae}, \textit{Dictyostelium discoideum}, and \textit{Escherichia coli}. In CAFA2, we extended this evaluation to also assess the methods only on benchmark proteins on which they made predictions; i.e., the methods were not penalized for omitting any benchmark protein.
One way to view function prediction is as an information retrieval problem, where the most relevant functional terms should be correctly retrieved from GO and properly assigned to the amino-acid sequence at hand. Since each term in the ontology implies some or all of its ancestors,\footnote{Some types of edges in Gene Ontology violate the transitivity property (consistency assumption), but they are not frequent.} a function prediction program's task is to assign the best consistent sub-graph of the ontology to each new protein and output a prediction score for this sub-graph and/or each predicted term. An intuitive scoring mechanism for this type of problem is to treat each term independently and provide the precision-recall curve. We chose this evaluation as our main evaluation in CAFA1 and CAFA2.
Let us provide more detail. Consider a single protein on which evaluation is carried out, but keep in mind that CAFA eventually averages all metrics over the set of benchmark proteins. Let now $T$ be a set of experimentally-determined nodes and $P$ a non-empty set of predicted nodes in the ontology for the given protein. Precision ($pr$) and recall ($rc$) are defined as
$$
pr(P,T)=\frac{|P\cap T|}{|P|}; \qquad rc(P,T)=\frac{|P\cap T|}{|T|},
$$
\noindent where $|P|$ is the number of predicted terms, $|T|$ is the number of experimentally-determined terms, and $|P \cap T|$ is the number of terms appearing in both $P$ and $T$; see Figure~\ref{fig:cafa_metrics} for an illustrative example of this measure. Usually, however, methods will associate scores with each predicted term and then a set of terms $P$ will be established by defining a score threshold $t$; i.e., all predicted terms with scores greater than $t$ will constitute the set $P$. By varying the decision threshold $t \in [0, 1]$, the precision and recall of each method can be plotted as a curve $(pr(t), rc(t))_t$, where one axis is the precision and the other the recall; see Figure~\ref{fig:prec-rec-curves} for an illustration of pr-rc curves and \cite{Radivojac2013LargeScale} for pr-rc curves in CAFA1. To compile the precision-recall information into a single number that would allow easy comparison between methods, we used the maximum harmonic mean of precision and recall anywhere on the curve, or the maximum $F_1$-measure which we call $F_{\max}$
\[
F_{\max} = \underset{t}{\max}\left \{ 2\times\frac{pr(t)\times rc(t)}{pr(t)+rc(t)} \right \},
\]
\noindent where we modified $pr(t)$ and $rc(t)$ to reflect the dependency on $t$. It is worth pointing out that the F-measure used in CAFA places equal emphasis on precision and recall as it is unclear which of the two should be weighted more. One alternative to $F_1$ would be the use of a combined measure that weighs precision over recall, which reflects the preference of many biologists for few answers with a high fraction of correctly predicted terms (high precision) over many answers with a lower fraction of correct predictions (high recall); the rationale for this tradeoff is illustrated in Figure~\ref{fig:prec-rec-curves}. However, preferring precision over recall in a hierarchical setting can steer methods to focus on shallow (less informative) terms in the ontology and thus be of limited use. At the same time, putting more emphasis on recall may lead to overprediction, a situation in which many or most of the predicted terms are incorrect. For this reason, we decided to equally weight precision and recall. Additional metrics within the precision-recall framework have been considered, though not implemented yet.
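The thresholded precision-recall sweep and $F_{\max}$ defined above can be sketched as follows. This is an illustrative sketch, not the official CAFA assessment code; the term names and scores are hypothetical, and both $P$ and $T$ are assumed to be already propagated to consistent sub-graphs:

```python
# Per-protein precision, recall and F_max, following the definitions in the
# text: pr = |P ∩ T| / |P|, rc = |P ∩ T| / |T|, with P obtained by
# thresholding per-term scores at each t.

def precision_recall(P, T):
    if not P:
        return None, 0.0          # precision undefined for an empty prediction
    overlap = len(P & T)
    return overlap / len(P), overlap / len(T)

def f_max(scored_prediction, T):
    """scored_prediction: dict term -> score in [0, 1]; T: true term set."""
    best = 0.0
    for t in sorted(set(scored_prediction.values())):
        P = {term for term, s in scored_prediction.items() if s >= t}
        pr, rc = precision_recall(P, T)
        if pr and pr + rc > 0:
            best = max(best, 2 * pr * rc / (pr + rc))
    return best

# Hypothetical example mirroring the figure: |P| = 5, |T| = 3, overlap 2.
pr, rc = precision_recall({"a", "b", "c", "d", "e"}, {"a", "b", "f"})
```

CAFA averages these quantities over all benchmark proteins; the sketch above handles a single protein.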
\begin{figure}[t]
\sidecaption
\includegraphics[width=\textwidth]{pics/metrics_new.pdf}
\caption{\textbf{CAFA assessment metrics.} (A) Red nodes are the predicted terms $P$ for a particular decision threshold in a hypothetical ontology and (B) blue nodes are the true, experimentally determined terms $T$. The circled terms represent the overlap between the predicted subgraph and the true subgraph. There are two nodes (circled) in the intersection of $P$ and $T$, whereas $|P|=5$ and $|T|=3$. This sets the prediction's precision at $\frac{2}{5}=0.4$ and recall at $\frac{2}{3}=0.667$, with $F_1= 2\times \frac{0.4\times 0.667}{0.4+0.667} = 0.5$. The remaining uncertainty ($ru$) is the information content of the uncircled blue node in panel B, while the misinformation ($mi$) is the total information content of the uncircled red nodes in panel A. An information content of any node $v$ is calculated from a representative database as $-\log \Pr(v|\textrm{Pa}(v))$; i.e., the probability that the node is present in a protein's annotation given that all its parents are also present in its annotation.}
\label{fig:cafa_metrics}
\end{figure}
\begin{figure}[t]
\sidecaption
\includegraphics[width=\textwidth]{pics/curves_new.pdf}
\caption{\textbf{Precision-recall curves and remaining uncertainty-misinformation curves.} This figure illustrates the need for multiple assessment metrics, and understanding the context in which the metrics are used. (A) two pr-rc curves corresponding to two prediction methods $M_1$ and $M_2$. The point on each curve that gives $F_{\max}$ is marked as a circle. Although the two methods have a similar performance according to $F_{\max}$, method $M_1$ achieves its best performance at high recall values, whereas method $M_2$ achieves its best performance at high precision values. (B) two ru-mi curves corresponding to the same two prediction methods with marked points where the minimum semantic distance is achieved. Although the two methods have similar performance in the pr-rc space, method $M_1$ outperforms $M_2$ in ru-mi space. Note, however, that the performance in ru-mi space depends on the frequencies of occurrence of every term in the database. Thus, two methods may score differently in their $S_{\min}$ when the reference database changes over time, or using a different database.
}
\label{fig:prec-rec-curves}
\end{figure}
Precision and recall are useful because they are easy to interpret: a precision of $\frac{1}{2}$ means that one half of all predicted terms are correct, while a recall of $\frac{1}{3}$ means that a third of the experimental terms have been recovered by the predictor. Unfortunately, precision-recall curves and $F_1$, while simple and interpretable measures for evaluating ontology-based predictions, are limited because they ignore the hierarchical nature of the ontology and dependencies among terms. They also do not directly capture the information content of the predicted terms.
Assessment metrics that take into account the information-content of the terms were developed in the past \cite{Lord2003Semantic, Lord2003Investigating, Pesquita2009Semantic}, and are also detailed in the chapter by Pesquita. In CAFA2 we used an information-theoretic measure in which each term is assigned a probability that is dependent on the probabilities of its direct parents. These probabilities are calculated from the frequencies of the terms in the database used to generate the CAFA targets. The entire ontology graph, thus, can be seen as a simple Bayesian network \cite{Clark2013Informationtheoretic}. Using this representation, two information-theoretic analogs of precision and recall can be constructed. We refer to these quantities as \emph{misinformation} ($mi$), the information content attributed to the nodes in the predicted graph that are incorrect, and \emph{remaining uncertainty} ($ru$), the information content of all nodes that belong to the true annotation but not the predicted annotation. More formally, if $T$ is a set of experimentally-determined nodes and $P$ a set of predicted nodes in the ontology, then
$$
ru(P, T)=-\sum_{v\in T-P}\log \Pr(v|\textrm{Pa}(v)); \quad
mi(P, T)=-\sum_{v\in P-T}\log \Pr(v|\textrm{Pa}(v)),
$$
\noindent where $\textrm{Pa}(v)$ is the set of parent terms of the node $v$ in the ontology (Figure~\ref{fig:cafa_metrics}). A single performance measure to rank methods, the minimum semantic distance $S_{\min}$, is the minimum distance from the origin to the curve $(ru(t), mi(t))_t$. It is defined as
\[
S_{\min} = \underset{t}{\min}\left \{ (ru^k(t) + mi^k(t)) ^{\frac{1}{k}} \right \},
\]
\noindent where $k \geq 1$. We typically choose $k=2$, in which case $S_{\min}$ is the minimum Euclidean distance between the ru-mi curve and the origin of the coordinate system (Figure~\ref{fig:prec-rec-curves}B). The ru-mi plots and the $S_{\min}$ metric compare the true and predicted annotation graphs while assigning additional weight to high-information nodes. In that manner, predictions with a higher information content receive larger weights. The semantic distance has been a useful measure in CAFA2 as it properly accounts for term dependencies in the ontology. However, this approach also has limitations in that it relies on an assumed Bayesian network as a generative model of protein function, as well as on the available databases of protein functional annotations, whose term frequencies change over time. While the latter limitation can be remedied by more robust estimation of term frequencies in a large set of organisms, the performance accuracies in this setting are generally less comparable across two different CAFA experiments than in the precision-recall setting.
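A minimal sketch of $ru$, $mi$ and $S_{\min}$ with $k=2$, assuming the per-term information contents $-\log_2 \Pr(v|\textrm{Pa}(v))$ have already been estimated from a reference database (the values and term names below are hypothetical):

```python
# Remaining uncertainty (ru), misinformation (mi) and the semantic distance
# S_min, following the definitions in the text. IC maps each ontology term v
# to -log2 Pr(v | Pa(v)); the numbers here are illustrative assumptions.

IC = {"a": 1.0, "b": 2.0, "c": 3.0, "d": 0.5}

def ru_mi(P, T):
    ru = sum(IC[v] for v in T - P)   # true terms the prediction missed
    mi = sum(IC[v] for v in P - T)   # predicted terms that are wrong
    return ru, mi

def s_min(thresholded_predictions, T, k=2):
    """thresholded_predictions: one predicted term set per decision threshold."""
    return min((ru ** k + mi ** k) ** (1.0 / k)
               for ru, mi in (ru_mi(P, T) for P in thresholded_predictions))

T = {"a", "b"}
# Two operating points: overpredict {a, b, c} versus underpredict {a}.
dist = s_min([{"a", "b", "c"}, {"a"}], T)
```

At the first operating point $ru=0$, $mi=3$; at the second $ru=2$, $mi=0$, so the curve's closest approach to the origin is $2$.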
\section{Discussion}
Critical assessment challenges have been successfully adopted in a number of fields due to several factors. First, there is the recognition that improvements to methods are indeed necessary. Second, there is the ability of the community to mobilize enough of its members to engage in a challenge. Mobilizing a community is not a trivial task, as groups have their own research priorities and only limited resources to pursue them, which may deter them from undertaking the time-consuming and competitive effort a challenge poses. At the same time, there are quite a few incentives to join a community challenge. Having one's method tested objectively by a third party can establish credibility, point out flaws, and suggest improvements. Engaging with other groups may lead to collaborations and other opportunities. Finally, the promise of doing well in a challenge can be a strong incentive, heralding a group's excellence in its field. Since the assessment metrics are crucial to how teams are ranked, considerable effort goes into creating multiple metrics and describing exactly what they measure. Good challenge organizers try to be attentive to the requests of the participants and to let the rules of the challenge evolve with the needs of the community. Above all, there must be an understanding that a challenge's ultimate goal is to improve methodologies and that it takes several rounds of the challenge to see results.
The first two CAFA challenges helped clarify that protein function prediction is a vibrant field, but also one of the most challenging tasks in computational biology. For example, CAFA provided evidence that the available function prediction algorithms outperform a straightforward use of sequence alignments in function transfer. The performance of methods in the Molecular Function category has consistently been reliable and also showed progress over time (unpublished results from CAFA2). On the other hand, the performance in the Biological Process or Cellular Component ontologies has not yet met expectations. One of the reasons for this may be that the terms in these ontologies are less predictable using amino acid sequence data and instead would rely more on high-quality systems data; e.g., see \cite{Costanzo2010}. The challenge has also helped clarify the problems of evaluation, both in terms of evaluating over consistent sub-graphs in the ontology but also in the presence of incomplete and biased molecular data. Finally, although it is still early, some best practices in the field are beginning to emerge. Exploiting multiple types of data is typically advantageous, although we have observed that both machine learning expertise and good biological insights tend to result in strong performance. Overall, while the methods in the Molecular Function ontology seem to be maturing, in part because of the strong signal in sequence data, the methods in the Biological Process and Cellular Component ontologies still appear to be in the early stages of development. With the help of better data over time, we expect significant improvements in these categories in the future CAFA experiments.
Overall, CAFA generated a strong positive response to the call for both challenge rounds, with the number of participants substantially growing between CAFA1 (102 participants) and CAFA2 (147). This indicates that there exists significant interest in developing computational protein function prediction methods, in understanding how well they perform, and in improving their performance. In CAFA2 we preserved the experiment rules, ontologies and metrics we used in CAFA1, but also added new ones to better capture the capabilities of different methods. The CAFA3 experiment will further improve evaluation by facilitating unbiased evaluation for several select functional terms.
More rounds of CAFA are needed to know whether computational methods will improve as a direct result of this challenge. But given the community's growth and sustained interest, we believe that CAFA is a welcome addition to the community of protein function annotators.
\section{Acknowledgements}
We thank Kymberleigh Pagel and Naihui Zhou for helpful discussions. This work was partially supported by NSF grants DBI-1458359 and DBI-1458477.
\bibliographystyle{spmpsci}
\section{Introduction}
A deeper understanding of the low temperature macroscopic phases of a many-body system
can only be achieved with the aid of a quantum theory.
Of particular interest is the modeling and description of how the system goes
from one phase to another, a process known as ``phase transition''.
In ordinary phase transitions,
which occur at finite temperatures, the
phase change is driven by thermal fluctuations. For example, if we heat a magnet
we arrive at a temperature (Curie temperature) above which it loses its magnetism. In other
words, the magnet initially in the ferromagnetic phase, where all spins are aligned,
changes to the paramagnetic phase, where the magnetic moments are in a disordered state.
However, a phase transition can also occur at or near absolute zero temperature ($T=0$), where
thermal fluctuations are negligible. This process is called
a quantum phase transition (QPT)\cite{sac99} and is attained
varying a tuning parameter in the
Hamiltonian (e.g. an external magnetic field) while keeping the temperature
fixed. When the tuning parameter reaches a particular
value, the so-called critical point (CP), the ground state of the system's Hamiltonian changes
drastically, which is reflected in an abrupt modification of the macroscopic properties of the system.
In this scenario, quantum fluctuations, which loosely speaking are governed by the
Heisenberg uncertainty principle, are responsible
for the phase transition.
Although it is impossible to
achieve $T=0$ due to the third law of thermodynamics,
the effects of QPTs can be observed at finite temperatures
whenever the de Broglie wavelength is greater
than the correlation length of thermal fluctuations\cite{sac99}.
Important examples of QPTs are
the paramagnetic-ferromagnetic transition in some metals\cite{row10},
the superconductor-insulator transition\cite{dol10},
and superfluid-Mott insulator transition\cite{gre02}.
The ground state of a many-body system at $T=0$ near a CP is often described by a
non-trivial wave function due to the long-range correlations among the system's constituents.
In Ref.~\refcite{pres00} Preskill argued that quantum entanglement
could be responsible for these correlations and therefore the methods developed by quantum information
theory (QIT) could be useful in studying the critical behavior of many-body systems. Besides,
new protocols for quantum computation and communication could be formulated based on such systems.
Along these lines many theoretical tools (in particular entanglement quantifiers)
originally developed to tackle QIT problems
were employed to determine the CPs of QPTs at $T=0$ \cite{Nie99}.
Later L.-A. Wu et al.\cite{lidar} proved a general connection between non-analyticities in
bipartite entanglement measures and QPTs while in Ref.~\refcite{oliveira}
this connection was extended to multipartite entanglement.
Recently another quantity, namely, quantum discord (QD), has attracted the attention of the
quantum information community. QD goes beyond the concept of entanglement and captures
in a certain sense the ``quantumness'' of the correlations between two parts of a system.
It was built by noting the fact that two classically
equivalent versions of the mutual information are inequivalent in the quantum domain \cite{zurek,vedral}.
Using QD one can show that quantum entanglement does not describe all quantum correlations existing
in a correlated quantum state. In other words, it is possible to create quantum correlations
other than entanglement between two quantum systems via local operations and classical communication (LOCC).
The first study showing a possible connection between QD and QPT was done by
R. Dillenschneider\cite{Dil08} in the context of spin chains at $T=0$,
where a CP associated to a QPT was well characterized by QD.
Subsequent studies have confirmed the usefulness of QD in
describing other types of QPTs at zero temperature\cite{disT0}.
Moreover, it is important to note
that such quantum informational approaches to spotlight CPs of QPTs do not
require the knowledge of an order parameter (a macroscopic quantity
that changes abruptly during the QPT); only the extremal values or
the behavior of the derivatives of either the entanglement or QD is sufficient.
The previous theoretical studies, however, were restricted to the zero temperature
regime, which is experimentally unattainable; the third law of thermodynamics
dictates that it is impossible to drive a system to $T=0$ by a finite number of
thermodynamic operations.
Due to this limitation it is therefore not straightforward to directly compare
those theoretical results with experimental data obtained at finite $T$.
In order to overcome this problem
one has to study the behavior of
quantum correlations in a system at thermal equilibrium,
which is described by the canonical ensemble
$\rho_T=e^{-H/kT}/Z$, with $H$ being the system's Hamiltonian, $k$ the Boltzmann's
constant and $Z=\mbox{Tr}\left(e^{-H/kT}\right)$ the partition function.
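As a concrete illustration of the canonical ensemble $\rho_T$, one can compute the thermal state of two spins numerically; the isotropic Heisenberg Hamiltonian, units ($k=1$, $J=1$) and temperature below are chosen only for illustration:

```python
import numpy as np

# Sketch: Gibbs state rho_T = exp(-H/kT)/Z for two spin-1/2 particles with an
# illustrative Heisenberg coupling H = Sx.Sx + Sy.Sy + Sz.Sz (k = 1, J = 1).

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

H = sum(np.kron(s, s) for s in (sx, sy, sz))

def thermal_state(H, T):
    # Diagonalize once and exponentiate the eigenvalues; shifting by the
    # ground-state energy keeps the exponentials numerically stable at low T.
    w, V = np.linalg.eigh(H)
    boltz = np.exp(-(w - w.min()) / T)
    Z = boltz.sum()
    return (V * (boltz / Z)) @ V.conj().T   # V diag(p) V^dagger

rho = thermal_state(H, T=0.5)
```

The resulting `rho` is Hermitian, positive and of unit trace, as any density matrix must be; quantities such as thermal entanglement or TQD are then computed from it.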
{\it Thermal entanglement}, i.e. the entanglement computed for states described
by $\rho_T$, was first studied by M. C. Arnesen {\it et al.}\cite{Ved01} for
a finite unidimensional Heisenberg chain. Other interesting works followed this one,
where thermal entanglement has been considered for other Hamiltonians, both
for finite\cite{Kam02,Rig03} and infinite chains\cite{ltte}. See Ref. \refcite{reviews}
for extensive reviews on entanglement and QPT.
However, the focus of the aforementioned works was not the study of the ability of
thermal entanglement to point out CPs when $T>0$.
In Ref.~\refcite{Wer10}, two of us introduced the analog of thermal entanglement,
namely, {\it thermal quantum discord} (TQD), and we studied the behavior of this quantity
in a system consisting of two spins described by the $XYZ$ model in the presence of an
external magnetic field. In this work it was observed for the first time that
TQD could be able to signal a QPT at \textit{finite} $T$.
In order to fully explore the previous possibility, highlighted by solving
a simple two-body problem\cite{Wer10}, we tackled
the XXZ Hamiltonian in the thermodynamic
limit for several values of $T>0$\cite{werPRL}. Working with infinite chains,
we were able to show for the first time that TQD retains its ability to
detect the CPs associated with the QPTs of the XXZ model even when $T\neq 0$,
whereas entanglement and other thermodynamic quantities perform worse. Moreover,
we showed that, as $T$ increases, these quantities lose their
CP-detection ability faster than TQD does. Later\cite{werPRA}
we generalized those results considering $(i)$ the XXZ Hamiltonian and
$(ii)$ the XY Hamiltonian both in the presence
of an external magnetic field. For these two models it was shown that
among the usual quantities employed to detect CPs, TQD was the best suited to
properly estimate them when $T>0$.
Our goal in this paper is to present a short but self-contained review of
our aforementioned results
about TQD and its application as a CP detector of QPTs.
To this end we structure this paper as follows.
In Sec. 2 we present a brief review about quantum correlations
where QD and the entanglement of formation take on a prominent role.
In Sec. 3 we review the behavior of TQD in the context of simple two-qubit models.
We then move on to the analysis of the ability of quantum correlations,
in particular TQD, to spotlight the CPs of QPTs when the system is
at $T>0$ and in the thermodynamic limit. In this context we study the
XXZ and XY models with and without an external magnetic field.
Finally, we conclude and discuss future directions in Sec. 4.
\section{Quantum Correlations}
The superposition principle of quantum mechanics is directly related to
the existence of (quantum) correlations that are not seen
in classical objects. This principle together with the tensorial nature of
combining different quantum systems (Hilbert spaces) leads to entanglement,
which implies intriguing correlations among
the many constituents of a composite system that puzzle our classical minds.
It is worth mentioning that this tensorial way of describing a composite quantum
system in terms of its parts is not a truism. Indeed, the principle of
superposition is also present in classical physics, for example in the classical theory
of electromagnetism. However, in classical physics this tensorial nature for combining systems is missing,
which helps in understanding why ``weird'' quantum effects such as non-locality\cite{EPR} are not
seen in a classical world.
During many decades after the birth of quantum mechanics
quantum correlations were thought to be necessarily linked to non-locality,
or more quantitatively, to the violation of a Bell-like inequality\cite{Bel64}.
The non-violation of a Bell-like inequality implies that
the correlations among the parts of a composite system
can be described by a local realistic theory.
This fact led many to call a state
not violating any Bell-like inequality a classical state.
This situation changed with the seminal work of R. F. Werner\cite{Wer89}, who showed
that there are mixed entangled states that do not violate any Bell-like inequality.
Therefore, according to the Bell/non-locality paradigm these states should be considered
examples of classical states although possessing entanglement.
This state of affairs was unsatisfactory and the notion of classical states was
expanded. A classical (non-entangled) state was then defined as any state
that can be created only by local operations on the subsystems and classical communication
among its many parts (LOCC)\cite{nielsen,Hor09}. For a bipartite system described by the density
operator $\rho_{AB}$, the states created via LOCC (separable states) can
be generally written as
\begin{eqnarray}\label{sstate}
\rho_{AB}=\sum_jp_j\rho_j^A\otimes\rho_j^B,
\end{eqnarray}
where $p_j\geq 0$, $\sum_jp_j=1$, and $\rho_j^{A,B}$ are legitimate density matrices.
If a quantum state cannot be written as (\ref{sstate})
then it is an entangled state.
At this point one may wonder if this is a definitive characterization of a classical state.
Or one may ask: Isn't there any ``quantumness'' in the correlations for some sort of
separable (non-entangled) quantum states? Can we go beyond the entanglement paradigm?
As observed in refs.~\refcite{zurek,vedral} there exist some states written as (\ref{sstate})
that possess non-classical features. This fact led the authors of
refs.~\refcite{zurek,vedral,ved10b} to push further our definition of classical states.
Now, instead of eq. (\ref{sstate}), we call a bipartite
state classical if it can be written as
\begin{eqnarray}
\rho_{AB}=\sum_{jk}p_{jk}|j \rangle_A \langle j|\otimes |k\rangle_B\langle k|,
\label{two}
\end{eqnarray}
where $|j\rangle_A$ and $|k\rangle_B$ span two sets of orthonormal states.
States described by (\ref{two}) are a subset of those described by (\ref{sstate})
and they are built via mixtures of locally distinguishable states\cite{ved10b}.
Intuitively, classical states are those where the superposition principle does
not manifest itself either on the level of different Hilbert spaces (zero entanglement)
or on the level of single Hilbert spaces (no Schr\"odinger cat states leading to a mixture
of non-orthogonal states). Such states have null QD.
Let us be more quantitative and define QD for a bipartite quantum
state \cite{zurek,vedral} divided into parts $A$ and $B$.
In the paradigm of classical information theory\cite{nielsen}
the total correlation between $A$ and $B$ is quantified by the mutual information (MI),
\begin{eqnarray}\label{mi1}
\mathcal{I}_1(A:B)= \mathcal{H}(A)+\mathcal{H}(B)-\mathcal{H}(A,B),
\end{eqnarray}
where $\mathcal{H}(X)=-\sum_xp_x\log_2p_x$ is the Shannon entropy with $p_x$
the probability distribution of the random variable $X$. The conditional
probability for classical variables $p_{a|b}$ is defined by Bayes' rule,
that is, $p_{a|b}=p_{a,b}/p_{b}$, with $p_{a,b}$ denoting the joint probability distribution
of variables $a$ and $b$. Using the conditional probability one can show that
$\mathcal{H}(A,B) = \mathcal{H}(A|B) + \mathcal{H}(B)$, where $\mathcal{H}(A|B)=-\sum_{a,b}p_{a,b}\log_2(p_{a|b})$.
This last result allows one to write MI,
eq.~(\ref{mi1}), as
\begin{eqnarray}\label{mi2}
\mathcal{I}_2(A:B)= \mathcal{H}(A)-\mathcal{H}(A|B).
\end{eqnarray}
Note that $\mathcal{H}(A|B)\geq0$
is the conditional entropy, which quantifies
how much uncertainty is left on average about $A$ when one knows $B$.
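As an illustration of these classical definitions, the sketch below evaluates $\mathcal{I}_1$ and $\mathcal{I}_2$ for a hypothetical joint distribution of two perfectly correlated bits and confirms that the two expressions coincide (the distribution is our illustrative choice, not taken from the text):

```python
import numpy as np

def shannon(p):
    """H(X) = -sum_x p_x log2 p_x, skipping zero entries."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint distribution of two perfectly correlated bits:
p_ab = np.array([[0.5, 0.0],
                 [0.0, 0.5]])
p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)

mi1 = shannon(p_a) + shannon(p_b) - shannon(p_ab)   # eq. (mi1)
cond = shannon(p_ab) - shannon(p_b)                 # H(A|B) = H(A,B) - H(B)
mi2 = shannon(p_a) - cond                           # eq. (mi2)
print(mi1, mi2)  # 1.0 1.0
```

For perfectly correlated bits knowing $B$ removes all uncertainty about $A$, so $\mathcal{H}(A|B)=0$ and both expressions give one bit of mutual information.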
The quantum version of eq. (\ref{mi1}), denoted by $\mathcal{I}^q_1(A:B)$,
can be obtained by replacing the Shannon entropy with the von Neumann entropy
$\mathcal{S}(X)=\mathcal{S}(\rho_X)=-\mbox{Tr}\left(\rho_X\log_2\rho_X\right)$,
where $\rho_X$ is the density operator of the system $X=A,B$. On the other
hand, a quantum version of eq. (\ref{mi2}) is not so straightforward because
Bayes' rule is not always valid for quantum systems\cite{brule}.
For instance, this rule is violated for a pure entangled state. Indeed,
one can show that if we naively extend the classical conditional entropy
to quantum systems as $\mathcal{S}(A|B)\equiv\mathcal{S}(A,B)-\mathcal{S}(B)$,
it becomes negative for pure entangled states. QD is built in a way to
circumvent this limitation.
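A minimal numerical check of this fact: for the Bell state $\left(\left|01\right\rangle-\left|10\right\rangle\right)/\sqrt{2}$ the naive conditional entropy $\mathcal{S}(A,B)-\mathcal{S}(B)$ comes out negative. A Python sketch (the state choice is ours):

```python
import numpy as np

def von_neumann(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

# Bell state |psi> = (|01> - |10>)/sqrt(2): pure and maximally entangled.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_ab = np.outer(psi, psi)

# Partial trace over A (tensor indices ordered as a, b, a', b'):
rho_b = rho_ab.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

S_ab = von_neumann(rho_ab)        # 0: pure state
S_b = von_neumann(rho_b)          # 1: maximally mixed qubit
print(round(S_ab - S_b, 6))       # -1.0: the naive S(A|B) is negative
```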
In order to build a meaningful quantum version of the conditional entropy
$\mathcal{H}(A|B)$ it is necessary to take into account the fact that knowledge about
system B is related to measurements on $B$.
And now, differently from the classical case, a measurement in the quantum domain
can be performed in many non-equivalent ways (with different sets of projectors, for instance).
If a general quantum measurement, i.e. a POVM (positive operator valued measure)\cite{nielsen},
$\{M_b\}$ is performed
on the quantum state $\rho_{AB}$, then after the measurement the state is
described by $\sum_bM_b\rho_{AB}M_b^\dagger$.
The probability of the outcome $b$ of $B$ is $p_b=\mbox{Tr}\left[M_b\rho_{AB}M_b^\dagger\right]$ and
the conditional state of $A$ in this case is
$\rho_{A|b}=\left(M_b\rho_{AB}M_b^\dagger\right)/p_b$. Thus, the conditional
entropy with respect to the POVM $\{M_b\}$ is $\mathcal{S}(A|\{M_b\})\equiv\sum_bp_b\mathcal{S}(\rho_{A|b})$.
Therefore, in order to quantify the uncertainty left on $A$ after a measurement on $B$
one has to minimize over all POVMs. This leads to the following definition
of the quantum conditional entropy\cite{zurek,vedral}:
\begin{eqnarray}\label{qce}
\mathcal{S}_q(A|B)\equiv\min_{\left\{M_b\right\}}\mathcal{S}(A|\{M_b\}).
\label{cond}
\end{eqnarray}
At this point, the quantum version of the mutual information (\ref{mi2}),
denoted by $\mathcal{I}^q_2(A:B)$, is obtained replacing the classical
conditional entropy $\mathcal{H}(A|B)$ by its quantum analog $\mathcal{S}_q(A|B)$,
which does not assume negative values. The {\it quantum discord} is
defined as the difference between these two versions of the quantum mutual
information\cite{zurek,vedral}:
\begin{eqnarray}\label{qd}
D(A|B)&\equiv&\mathcal{I}^q_1(A:B)-\mathcal{I}^q_2(A:B)\nonumber\\
&=&\mathcal{S}_q(A|B)-\mathcal{S}(A|B).
\end{eqnarray}
Note that QD is not necessarily a symmetric quantity, because the
conditional entropy (\ref{qce}) depends on the system in which the
measurement is performed. However, for the density operators studied
in this paper QD is always symmetric. Furthermore, while for mixed
states there exist states with null entanglement and QD $>0$, for pure
states QD is essentially equivalent to entanglement. In other words,
only for mixed states can there be quantum correlations other than entanglement.
As demonstrated recently\cite{ved10}, a quantum state $\rho_{AB}$ has $D(A|B)=0$ if,
and only if, it can be written as
$\rho_{AB}=\sum_jp_j\rho_j^A\otimes\left|\psi_j^B\right\rangle\left\langle \psi_j^B\right|$,
with $\sum_jp_j=1$ and $\{\left|\psi_j^B\right\rangle\}$ a set of orthogonal states.
This result shows the importance of the superposition principle
to explain the origin of the quantum correlations. It is due to
this principle that one can generate a set $\{\left|\psi_j^B\right\rangle\}$
of non-orthogonal states leading to states with nonzero QD.
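The zero-discord characterization above can be probed numerically: a state of the form $\sum_jp_j\rho_j^A\otimes|\psi_j^B\rangle\langle\psi_j^B|$ with orthogonal $\{|\psi_j^B\rangle\}$ is left unchanged by a projective measurement of $B$ in that basis, which is the hallmark of $D(A|B)=0$. The weights and single-qubit states below are illustrative choices:

```python
import numpy as np

# A classical-quantum state rho = sum_j p_j rho_j^A (x) |psi_j><psi_j|_B
# with orthogonal |psi_j>. Weights and A-states are illustrative.
p = [0.3, 0.7]
rho_A = [np.array([[1, 0], [0, 0]], dtype=complex),
         np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)]
basis = [np.array([1, 0], dtype=complex),
         np.array([0, 1], dtype=complex)]

rho = sum(pj * np.kron(ra, np.outer(b, b.conj()))
          for pj, ra, b in zip(p, rho_A, basis))

# Measuring B in the {|psi_j>} basis leaves rho invariant: D(A|B) = 0.
projs = [np.kron(np.eye(2), np.outer(b, b.conj())) for b in basis]
measured = sum(P @ rho @ P for P in projs)
print(np.allclose(rho, measured))  # True
```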
For arbitrary $N\times M$-dimensional bipartite states the computation of QD involves a complicated
minimization procedure whose origin can be traced back to the evaluation of
the conditional entropy $\mathcal{S}_q(A|B)$, eq. (\ref{cond}).
In general one must then rely on numerical procedures to get QD
and it is not even known whether a general efficient algorithm exists.
For two-qubit systems, however, the minimization over generalized measurements
can be replaced by the minimization over projective measurements
(von Neumann measurements)\cite{minDIS}. In this case the minimization
procedure can be efficiently implemented numerically\cite{Dnum} for arbitrary
two-qubit states and some analytical results can be achieved for a restricted
class of states\cite{Danalitico}. In this work we will be dealing with density
matrices $\rho$ in the X-form, that is, $\rho_{12}=\rho_{13}= \rho_{24}=\rho_{34}=0$.
Moreover, in our models $\rho_{22}=\rho_{33}$ and all matrix elements are real, making the numerical
evaluation of QD simple and fast.
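A brute-force sketch of this numerical evaluation for two qubits: minimize $\mathcal{S}(A|\{M_b\})$ over a grid of von Neumann measurements on $B$, which suffices for two-qubit states. This is a coarse-grid illustration, not the optimized routines of the cited works:

```python
import numpy as np

def vn(rho):
    """von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log2(w))

def ptrace(rho, keep):
    """Partial trace of a two-qubit state; keep='A' or 'B'."""
    r = rho.reshape(2, 2, 2, 2)
    return r.trace(axis1=1, axis2=3) if keep == 'A' else r.trace(axis1=0, axis2=2)

def discord(rho, n_grid=60):
    """D(A|B): minimize S(A|{M_b}) over projective measurements on B,
    then subtract the naive conditional entropy S(A,B) - S(B)."""
    s_cond_naive = vn(rho) - vn(ptrace(rho, 'B'))
    best = np.inf
    for th in np.linspace(0.0, np.pi, n_grid):
        for ph in np.linspace(0.0, 2*np.pi, n_grid):
            v = np.array([np.cos(th/2), np.exp(1j*ph)*np.sin(th/2)])
            P0 = np.outer(v, v.conj())
            s = 0.0
            for P in (P0, np.eye(2) - P0):
                M = np.kron(np.eye(2), P)
                sub = M @ rho @ M
                p = np.trace(sub).real
                if p > 1e-12:
                    s += p * vn(ptrace(sub / p, 'A'))
            best = min(best, s)
    return best - s_cond_naive

# Bell state: D = 1; classical mixture of |00>,|11>: D = 0.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
bell = np.outer(psi, psi)
cl = np.diag([0.5, 0.0, 0.0, 0.5]).astype(complex)
d_bell, d_cl = discord(bell), discord(cl)
print(round(d_bell, 3), round(d_cl, 3))  # 1.0 0.0
```

For X-form matrices with the symmetries quoted above, the minimum is typically reached on a one-parameter family of projectors, which is what makes the evaluation "simple and fast" in practice.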
To close this section, we introduce the measure of entanglement
used in this paper, the {\it Entanglement of Formation} (EoF)\cite{Woo98}.
EoF quantifies, at least for pure states,
how many singlets are needed per copy of $\rho_{AB}$
to prepare many copies of $\rho_{AB}$ using only LOCC. For an $X$-form
density matrix we have
\begin{eqnarray}\label{eof}
EoF(\rho_{AB})&=&-g\log_2g-(1-g)\log_2(1-g),
\end{eqnarray}
with $g=(1+\sqrt{1-C^2})/2$ and the concurrence\cite{Woo98}
given by $C=2\max\left\{0,\Lambda_1,\Lambda_2 \right\}$,
where $\Lambda_1=|\rho_{14}|-\sqrt{\rho_{22}\rho_{33}}$ and
$\Lambda_2=|\rho_{23}|-\sqrt{\rho_{11}\rho_{44}}$.
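These formulas translate directly into code. As a sanity check, a sketch evaluating EoF for the Bell state $\left(\left|01\right\rangle-\left|10\right\rangle\right)/\sqrt{2}$, which should carry one ebit:

```python
import numpy as np

def eof_x_state(rho):
    """EoF of an X-form two-qubit density matrix via the concurrence (eq. eof)."""
    L1 = abs(rho[0, 3]) - np.sqrt(rho[1, 1].real * rho[2, 2].real)
    L2 = abs(rho[1, 2]) - np.sqrt(rho[0, 0].real * rho[3, 3].real)
    C = 2 * max(0.0, L1, L2)
    if C == 0.0:
        return 0.0
    g = (1 + np.sqrt(1 - C**2)) / 2
    return -g*np.log2(g) - (1 - g)*np.log2(1 - g)

# Sanity check: the Bell state (|01> - |10>)/sqrt(2) carries one ebit.
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2)
rho_bell = np.outer(psi, psi)
print(round(eof_x_state(rho_bell), 6))  # 1.0
```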
\section{Results and Discussions}
\subsection{Two interacting spins}
In this section we consider a two
spin system described by the XYZ model with an external magnetic
field acting on both spins\cite{Wer10}.
The Hamiltonian of this model is
\begin{eqnarray}\label{hxyz}
H_{XYZ}=\frac{J_x}{4}\sigma^x_1\sigma^x_{2}+\frac{J_y}{4}\sigma^y_1\sigma^y_{2}+\frac{J_z}{4}\sigma^z_1\sigma^z_{2}
+ \frac{B}{2}\left(\sigma^z_1+\sigma^z_2\right),
\end{eqnarray}
where $\sigma_j^\alpha$ ($\alpha=x,y,z$) are the
usual Pauli matrices acting on the $j$-th site and
we have assumed $\hbar=1$. As mentioned above, the
density matrix describing a system in equilibrium with
a thermal reservoir at temperature $T$ is $\rho_T=e^{-H/kT}/Z$,
where $Z=\mbox{Tr}\left(e^{-H/kT}\right)$ is the partition function.
Therefore, the thermal state for the Hamiltonian (\ref{hxyz}) assumes the following form
\begin{equation}
\rho = \frac{1}{Z} \left(
\begin{array}{cccc}
A_{11} & 0 & 0 & A_{12}\\
0 & B_{11} & B_{12} & 0 \\
0 & B_{12} & B_{11} & 0 \\
A_{12} & 0 & 0 & A_{22} \\
\end{array}
\right), \label{rho2}
\end{equation}
with $A_{11}=\mathrm{e}^{-\alpha}\left(\cosh(\beta)-4B\sinh(\beta)/\eta\right)$,
$A_{12}=-\Delta\,\mathrm{e}^{-\alpha}\sinh(\beta)/\eta$,
$A_{22}=\mathrm{e}^{-\alpha}\left(\cosh(\beta)+4B\sinh(\beta)/\eta\right)$,
$B_{11}=\mathrm{e}^{\alpha}\cosh(\gamma)$, $B_{12}=-\mathrm{e}^{\alpha}\sinh(\gamma)$,
and $Z = 2\left( \exp{(-\alpha)}\cosh(\beta) + \exp{(\alpha)}\cosh(\gamma) \right)$,
where $\Delta = J_{x} - J_{y}$, $\Sigma = J_{x} + J_{y}$,
$\eta = \sqrt{\Delta^{2} + 16B^2}$, $\alpha = J_{z}/(4kT)$,
$\beta = \eta/(4kT)$, and $\gamma = \Sigma/(4kT)$.
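These closed-form entries can be cross-checked by exponentiating the Hamiltonian directly. A sketch, with illustrative parameter values:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def thermal_state(Jx, Jy, Jz, B, kT):
    """rho_T = exp(-H/kT)/Z for the two-spin XYZ Hamiltonian (eq. hxyz)."""
    H = (Jx/4)*np.kron(sx, sx) + (Jy/4)*np.kron(sy, sy) \
        + (Jz/4)*np.kron(sz, sz) + (B/2)*(np.kron(sz, I2) + np.kron(I2, sz))
    w, v = np.linalg.eigh(H)
    boltz = np.exp(-w / kT)
    return (v * boltz) @ v.conj().T / boltz.sum(), boltz.sum()

# Illustrative parameters (Jz = 0 gives the XY model with transverse field):
Jx, Jy, Jz, B, kT = 2.6, 1.4, 0.0, 1.1, 0.5
rho, Z = thermal_state(Jx, Jy, Jz, B, kT)

# Cross-check against the closed-form partition function of the text:
Delta, Sigma = Jx - Jy, Jx + Jy
eta = np.sqrt(Delta**2 + 16*B**2)
a, b, g = Jz/(4*kT), eta/(4*kT), Sigma/(4*kT)
Z_exact = 2*(np.exp(-a)*np.cosh(b) + np.exp(a)*np.cosh(g))
print(abs(Z - Z_exact) < 1e-9)  # True
```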
The first important result appears in the absence of an external
field. As shown in Ref.~\refcite{Rig03}, when $B=0$, the entanglement
does not increase with increasing temperature. On the other
hand, as can be seen in Fig.~\ref{fig1} (panels $a$ and $b$) for the
XXZ model $\left(J_x=J_y=J \quad\mbox{and}\quad J_z\neq0 \right)$,
TQD begins at a non-null value at $T=0$ and first increases with $T$
before eventually decreasing,
while EoF is always zero\cite{Rig03}.
Note that such an effect is observed
for different configurations of the coupling constants.
\begin{figure}
\centerline{\psfig{file=fig1b.eps,width=4.00in}}
\caption{TQD for a two spin system as a function of the absolute temperature $kT$ for
the $XXZ$ model with $B=0$. (a) Here $J_z=-0.5$ and $J$ $=$ $0.1$ (solid line),
$0.2$ (dashed line), $0.3$ (dotted line), $0.4$ (dash-dotted line);
(b) Now we fix $J=0.4$ and $J_z=-0.8$ (solid line), $-0.7$ (dashed line),
$-0.6$ (dotted line), $-0.5$ (dash-dotted line). Here and in the
following graphics all quantities are dimensionless.}
\label{fig1}
\end{figure}
The possibility that
TQD can point out a QPT even when the system is at finite temperature emerged from
our study of the XXX model for two spins\cite{Wer10}. The XXX model
is obtained from Hamiltonian (\ref{hxyz}) by setting $J_x=J_y=J_z=J$.
When $J\rightarrow\infty$ the density operator (\ref{rho2}) will be
the Bell state $\rho=\left|\psi\right\rangle\left\langle\psi\right|$,
with $\left|\psi\right\rangle=\frac{1}{\sqrt{2}}\left(\left|01\right\rangle-\left|10\right\rangle\right)$,
for any $T$. For the opposite limit $J\rightarrow-\infty$ the density
operator is the mixed state $\rho=\frac{1}{3}\left(\left|00\right\rangle\left\langle
00\right|+\left|11\right\rangle\left\langle11\right|+\left|\phi\right\rangle\left\langle \phi\right|\right)$
with $\left|\phi\right\rangle=\frac{1}{\sqrt{2}}\left(\left|01\right\rangle +\left|10\right\rangle\right)$.
In this case EoF is zero, while TQD assumes the value $1/3$.
Furthermore, as shown in Fig.~\ref{fig2}, EoF is zero in the
ferromagnetic region $(J<0)$ and non-zero in the antiferromagnetic
region $(J>0)$ only when $T=0$.
\begin{figure}
\centerline{\psfig{file=fig2b.eps,width=4.00in}}
\caption{TQD (a) and EoF (b) for a two spin system as functions of the coupling $J$
for $kT=0.05$ (solid line), $0.1$ (dashed line), $0.5$ (dotted line),
$1.0$ (dash-dotted line). Both plots for the XXX model with $B=0$.}
\label{fig2}
\end{figure}
For $T>0$ EoF becomes non-zero
only for $J>J_c(T)=kT\ln(3)$. On the other hand, TQD vanishes only at the
trivial point $J=0$, even at finite $T$.
Although we are considering here only two spins, this result
suggests that TQD may possibly signal a QPT for $T>0$.
Let us now analyze the case where $B\neq 0$ and focus on
the XY model in a transverse magnetic field ($J_x$, $J_y\neq 0$, and $J_z=0$).
As noted in Ref.~\refcite{Rig03}
EoF shows a sudden death and then a revival (see Fig.~\ref{fig3}, panel b).
\begin{figure}
\centerline{\psfig{file=fig3b.eps,width=4.00in}}
\caption{TQD (a) and EoF (b) for a two spin system as functions of the absolute temperature
$kT$ for the XY model with transverse magnetic field $B$. Here $J_x=2.6$
and $J_y=1.4$. The values for $B$ are $1.1$ (solid line),
$2.0$ (dashed line), and $2.5$ (dotted line).}
\label{fig3}
\end{figure}
However, TQD does not suddenly disappear as Fig.~\ref{fig3}, panel a,
depicts. Actually, TQD decreases with $T$ to a non-null value
and after a critical temperature $T_c$
it starts increasing again. This effect is called {\it regrowth}\cite{Wer10}.
Although the regrowth of EoF with temperature is not observed for
two-spin systems, we showed in Ref.~\refcite{werPRA} that such interesting
behavior is possible in the thermodynamic limit. Also, if we carefully look at
Fig.~\ref{fig3} the distinctive aspects of these two types of quantum correlations
become more evident. For example, comparing panels a and b we see
regions where TQD increases while EoF decreases. Finally, in a very interesting
and recent work, X. Rong \textit{et al.}\cite{Ron12} experimentally verified some of the
predictions reviewed here and reported in refs.~\refcite{Wer10,werPRL,werPRA}, namely,
the sudden change of TQD at finite temperatures as the
anisotropy parameter of a two spin XXZ Hamiltonian is varied.
\subsection{XXZ Model}
Turning our attention to infinite chains ($L\rightarrow \infty$), let us start
working with the one-dimensional anisotropic spin-$1/2$
XXZ model subjected to a magnetic field in the $z$-direction.
Its Hamiltonian is
\begin{eqnarray}\label{hxxz}
H_{xxz}&=&J\sum_{j=1}^L\left(\sigma_j^x\sigma_{j+1}^x+\sigma_j^y\sigma_{j+1}^y
+\Delta\sigma_j^z\sigma_{j+1}^z\right) - \frac{h}{2}\sum_{j=1}^L\sigma_j^z,
\end{eqnarray}
where $\Delta$ is the anisotropy parameter, $h$ is the external
magnetic field, and $J$ is the exchange constant ($J=1$). We have
assumed periodic boundary conditions $\left(\sigma_{L+1}^\alpha=\sigma_1^\alpha\right)$.
The nearest neighbor two spin state is obtained by
tracing out all but the first two spins, $\rho_{1,2}=\mbox{Tr}_{L-2}(\rho)$,
where $\rho=\exp{\left(-\beta H_{xxz}\right)}/Z$. The Hamiltonian (\ref{hxxz})
exhibits both translational invariance and $U(1)$ invariance $\left(\left[H_{xxz},
\sum_{j=1}^L\sigma_j^z\right]=0\right)$ leading to the following nearest neighbor
two spin state
\begin{eqnarray}\label{rho}
\rho_{1,2} = \frac{1}{4}\left(
\begin{array}{cccc}
\rho_{11} & 0 & 0 & 0\\
0 & \rho_{22} & \rho_{23} & 0 \\
0 & \rho_{23} & \rho_{22} & 0 \\
0 & 0 & 0 & \rho_{44} \\
\end{array}
\right),
\end{eqnarray}
where
\begin{eqnarray}\label{rhoe}
\rho_{11} &=& 1+2\left\langle \sigma^z\right\rangle+\med{\sigma_1^z\sigma_2^z},\nonumber\\
\rho_{22} &=& 1-\med{\sigma_1^z\sigma_2^z},\\
\rho_{44} &=& 1-2\left\langle \sigma^z\right\rangle+\med{\sigma_1^z\sigma_2^z},\nonumber\\
\rho_{23} &=& 2\med{\sigma_1^x\sigma_2^x}\nonumber.
\end{eqnarray}
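Given numerical values of the magnetization and correlators, the state (\ref{rho}) is assembled as below. The input values are illustrative placeholders, not the Bethe-ansatz results:

```python
import numpy as np

def rho_nn(mz, zz, xx):
    """Nearest-neighbour state (eq. rho) from <s^z>, <s^z_1 s^z_2>, <s^x_1 s^x_2>."""
    r11 = 1 + 2*mz + zz
    r22 = 1 - zz
    r44 = 1 - 2*mz + zz
    r23 = 2*xx
    return 0.25 * np.array([[r11, 0,   0,   0],
                            [0,   r22, r23, 0],
                            [0,   r23, r22, 0],
                            [0,   0,   0,   r44]])

# Illustrative inputs (placeholders, not derived from the free energy):
rho = rho_nn(mz=0.0, zz=-0.5, xx=-0.25)
print(np.trace(rho), np.linalg.eigvalsh(rho).min() >= -1e-12)  # 1.0 True
```

Note that $\mbox{Tr}\,\rho_{1,2}=\frac{1}{4}(\rho_{11}+2\rho_{22}+\rho_{44})=1$ identically, so only positivity constrains the admissible correlator values.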
The magnetization and the two-point correlations above are obtained in terms
of the derivatives of the free energy\cite{NLIE}, $f=-\frac{1}{\beta}
\lim_{L\rightarrow \infty} \frac{\ln{Z}}{L}$,
\begin{eqnarray*}
\med{\sigma^z}&=& -2\partial_h f/J, \\
\med{\sigma_j^{z}\sigma_{j+1}^{z}}&=&\partial_{\Delta}f/J, \\
\med{\sigma_j^{x}\sigma_{j+1}^{x}}&=&\frac{u-\Delta \partial_{\Delta}f+
h \med{\sigma^z}}{2J}, \\
\med{\sigma_j^{z}\sigma_{j+1}^{z}}&=&\med{\sigma_j^{x}\sigma_{j+1}^{x}}=
\frac{u + h \med{\sigma^z}}{3J}, ~~ \Delta=1,
\end{eqnarray*}
where $u=\partial_{\beta} (\beta f)$ is the internal energy. We explained
in detail the procedure to determine the free energy $f$ in Ref.~\refcite{werPRA};
it is not a simple task, involving a combination of sophisticated
analytical and numerical computations. The CPs of the XXZ
model depend on the value of the magnetic field $h$\cite{GAUDIN}.
One of them, called $\Delta_{inf}$, is an infinite-order QPT
determined by solving the following equation,
\begin{eqnarray}\label{cpinf}
h=4J\sinh(\eta)\sum_{n=-\infty}^\infty\frac{(-1)^n}{\cosh(n\eta)},
\end{eqnarray}
with $\eta = \cosh^{-1}(\Delta_{inf})$. The other CP, called $\Delta_1$, is
a first-order QPT given by
\begin{eqnarray}\label{cp1}
\Delta_1 = \frac{h}{4J}-1.
\end{eqnarray}
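Eq. (\ref{cpinf}) can be inverted numerically for $\Delta_{inf}$ at a given field. A bisection sketch, truncating the infinite sum at a finite $n$ and taking $J=1$:

```python
import numpy as np

def sech(x):
    """Overflow-safe 1/cosh(x)."""
    a = np.abs(x)
    return 2*np.exp(-a) / (1 + np.exp(-2*a))

def field_of_eta(eta, J=1.0, nmax=200):
    """RHS of eq. (cpinf), truncating the sum at |n| = nmax."""
    n = np.arange(1, nmax + 1)
    s = 1.0 + 2.0*np.sum((-1.0)**n * sech(n*eta))
    return 4.0*J*np.sinh(eta)*s

def delta_inf(h, J=1.0):
    """Invert eq. (cpinf) for Delta_inf = cosh(eta) by bisection."""
    lo, hi = 1e-6, 20.0
    for _ in range(100):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if field_of_eta(mid, J) < h else (lo, mid)
    return np.cosh(0.5*(lo + hi))

h, J = 12.0, 1.0
# Delta_inf close to 4.88 (cf. the text below); Delta_1 = 2.0 from eq. (cp1).
print(delta_inf(h, J), h/(4*J) - 1)
```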
The behavior of TQD and EoF for the XXZ model at finite $T$
and $h=0$ was initially studied in ref.~\refcite{werPRL}. For $h=0$
the XXZ model has two CPs\cite{TAKAHASHI}. At $\Delta_{inf}=1$
the ground state changes from an XY-like phase ($-1<\Delta<1$)
to an Ising-like antiferromagnetic phase ($\Delta>1$).
At $\Delta_1=-1$ it changes from a ferromagnetic phase ($\Delta<-1$) to
the critical antiferromagnetic phase ($-1<\Delta<1$). In ref.~\refcite{werPRL}
we analyzed the behavior of TQD and EoF only for $\Delta>0$.
Here we extend those results by computing the correlation functions
for the remaining values of the anisotropy parameter, namely, $\Delta<0$.
In Fig.~\ref{fig4} we plot
TQD (panel a) and EoF (panel c) as a function of $\Delta$ for
different values of $kT$ and $h=0$.
\begin{figure}
\centerline{\psfig{file=fig4.eps,width=4.00in}}
\caption{TQD (panels a and b) and EoF (panels c and d) as functions
of the tuning parameter $\Delta$ for $h=0$ (panels a and c) and $h=12$
(panels b and d). From top to bottom in the region where
$\Delta_1<\Delta<\Delta_{inf}$, $kT=0,0.1,0.5,1.0,2.0$.}
\label{fig4}
\end{figure}
For $T=0$ we can see that both
TQD and EoF are able to detect the CPs. TQD is discontinuous
at $\Delta_1$ while at $\Delta_{inf}$ the first-order derivative of
TQD presents a discontinuity. Furthermore, EoF is zero for
$\Delta<\Delta_1$ and non-zero for $\Delta>\Delta_1$ reaching a
maximum value at $\Delta=\Delta_{inf}$. However, as the temperature
increases the maximum value of EoF is shifted to the right. Besides,
EoF becomes zero also for $\Delta>\Delta_1$ as we increase $T$.
On the other hand, the first-order derivative of TQD is still discontinuous at
$\Delta_{inf}=1$ for finite $T$. We can also observe that TQD
increases for $\Delta<-1$ as $T$ increases while
its first-order derivative diverges at the CP $\Delta=-1$. As mentioned
in Ref.~\refcite{werPRL} the cusp-like behavior at the CPs $\Delta_{inf}=1$
and $\Delta_1=-1$ is due to an exchange in the set of projectors
that minimizes the quantum conditional entropy (\ref{qce}).
The study of the XXZ model was further explored in
Ref.~\refcite{werPRA}, with the addition of an external
field. The effects of the magnetic field $h$ on the
quantum correlations are exemplified in
Fig.~\ref{fig4} (panels b and d), where we set $h=12$.
The values of the CPs for $h=12$ are calculated employing
Eqs. (\ref{cpinf}) and (\ref{cp1}), resulting in
$\Delta_{inf}\approx4.88$ and $\Delta_1=2$. Again,
for $T=0$ the CPs are detected by both TQD and EoF
and, differently from the case $h=0$, the behavior of
these two quantities is quite similar. Both quantities
are zero for $\Delta<2$ and non-zero for $\Delta>2$, with
their first-order derivatives diverging at the CP $\Delta_1=2$.
However, the infinite-order QPT is no longer characterized by a
global maximum of TQD or EoF, but by a discontinuity in their first-order
derivatives. Note also that TQD presents a cusp-like behavior
between the CPs. This behavior is once again related to the minimization
procedure of the quantum conditional entropy and, so far, it is
not associated with any known QPT for this model.
It is important to mention that entanglement measures may
also have a discontinuity and/or a divergence in their
derivatives that are not related to a QPT\cite{yang}.
\begin{figure}
\centerline{\psfig{file=fig5.eps,width=4.0in}}
\caption{First-order (panel a) and second-order (panel b) derivatives
of TQD as functions of $\Delta$ for the XXZ model with $h=12$ and
for $kT = 0.02$ (solid line), $kT=0.1$ (dashed line), and $kT = 0.5$ (dotted line).
The derivatives plotted here are normalized, that is, each curve was
divided by the maximum value of the respective derivative. The
maximum of the first-order and second-order derivatives of TQD
are very close to the CPs $\Delta_1=2$
and $\Delta_{inf}\approx4.88$, respectively.}
\label{fig5}
\end{figure}
Now we move on to the cases where $T>0$. When $T$ increases the curves
of both TQD and EoF become smoother and broader, with well-defined derivatives
at the CPs. Besides, the cusp-like behavior of TQD previously mentioned tends
to disappear, while the maxima of both TQD and EoF decrease\cite{werPRA}.
We noted in Ref.~\refcite{werPRA} that some features of the derivatives
of these quantities remain for a finite, but not too high temperature.
To illustrate this fact, we plotted in Fig.~\ref{fig5} the first-order
(panel a) and second-order (panel b) derivatives of TQD with respect
to the anisotropy parameter $\Delta$ for $h=12$ and $kT=0.02, 0.1, 0.5$.
To plot the curves for different temperatures in the same graph we
normalized the derivatives of TQD, that is, for each $T$ we plotted
the derivative of TQD divided by the maximum value of the respective
derivative. For $T=0$ the divergence in the first-order derivative of both
TQD and EoF spotlights the CP $\Delta_1$ while the CP $\Delta_{inf}$ is
characterized by a divergence in the second-order derivative.
As can be seen in Fig.~\ref{fig5}, although the divergence at the
CPs disappears as $T$ increases, the derivatives reach their
maximum values around the CPs. We used these maximum values to
estimate the CPs at finite temperatures. The same analysis was
applied to estimate the CPs using EoF instead of TQD.
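The estimation procedure amounts to locating the extremum of a finite-difference derivative on a grid of the tuning parameter. A toy sketch, where the curve is an illustrative stand-in with a sharp crossover near $\Delta=2$, not the actual discord data:

```python
import numpy as np

def estimate_cp(delta, q):
    """Estimate a CP as the extremum of the first finite-difference derivative."""
    d1 = np.gradient(q, delta)
    return delta[np.argmax(np.abs(d1))]

# Toy stand-in for a thermally broadened quantity q(Delta):
delta = np.linspace(0.0, 5.0, 2001)
q = np.tanh(5*(delta - 2.0))
print(estimate_cp(delta, q))  # prints a value at/near 2.0
```

The same recipe applies to the second derivative when the zero-temperature signature is a discontinuity rather than a divergence of the first derivative.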
In Ref.~\refcite{werPRA}
we compared the ability of TQD, EoF, and pairwise correlations
($\med{\sigma_1^z\sigma_2^z}$ and $\med{\sigma_1^x\sigma_2^x}$)
to point out the CPs for $T>0$ and we showed that TQD is the
best candidate to estimate the CPs.
\begin{figure}
\centerline{\psfig{file=fig6.eps,width=3.4in}}
\caption{The difference between the correct CP and the
CP estimated by TQD (square), EoF (circle), $\med{\sigma_1^x\sigma_2^x}$ (up arrow),
and $\med{\sigma_1^z\sigma_2^z}$ (down arrow) as a function of $kT$.
In (a) and (b) we have $h=6$ with a first (a) and an infinite-order
(b) CP; in (c) and (d) we have $h=12$ with a first (c) and an infinite-order
(d) CP. $\Delta_c$ denotes the correct value of CP while $\Delta_e$
denotes the value of CP estimated by the extremum values of the
derivatives of the quantities involved.}
\label{fig6}
\end{figure}
To illustrate such result
we compared in Fig.~\ref{fig6} the difference between the correct
CP $\Delta_c$ and the CP estimated by our method $\Delta_e$ for
$h=6$ and $h=12$. One can see in this figure that from zero to
$kT\approx 1$ the CPs estimated by TQD are closer to the correct ones
than the estimated CPs coming from other quantities.
\subsection{XY Model}
The Hamiltonian of the one-dimensional XY model in a transverse field is given by
\begin{eqnarray}
\label{HXY}
H_{xy}&=& - \frac{\lambda}{2} \sum_{j=1}^L
\left[(1+\gamma)\sigma^x_j\sigma^x_{j+1}
+(1-\gamma)\sigma^y_j\sigma^y_{j+1}\right]
-\sum_{j=1}^L\sigma^z_j,
\end{eqnarray}
where $\lambda$ is the strength of the inverse of
the external transverse magnetic field and $\gamma$
is the anisotropy parameter. The transverse Ising model
is obtained for $\gamma=\pm1$ while $\gamma=0$ corresponds
to the XX model in a transverse field\cite{LSM}.
At $\lambda_c=1$ the XY model undergoes a second-order
QPT (Ising transition\cite{isingQPT}) that separates a
ferromagnetic ordered phase from a quantum paramagnetic
phase. Another second order QPT is observed for $\lambda>1$
at the CP $\gamma_c=0$ (anisotropy transition\cite{LSM,anisQPT}).
This transition is driven by the
anisotropy parameter $\gamma$ and separates a ferromagnet
ordered along the $x$ direction and a ferromagnet ordered
along the $y$ direction. These two transitions are of the same
order but belong to different universality classes \cite{LSM,anisQPT}.
The XY Hamiltonian is $Z_2$-symmetric and can be exactly
diagonalized \cite{LSM} in the thermodynamic limit $L\rightarrow\infty$.
Due to translational invariance the two spin density operator $\rho_{i,j}$
for spins $i$ and $j$ at thermal equilibrium is \cite{osborne}
\begin{eqnarray}\label{doXY}
\rho_{0,k}&=&\frac{1}{4}\left[I_{0,k}
+\left\langle \sigma^z\right\rangle\left(\sigma^z_0+\sigma^z_k\right)\right]
+ \frac{1}{4} \sum_{\alpha=x,y,z}
\left\langle \sigma^\alpha_0\sigma^\alpha_k\right\rangle \sigma^\alpha_0\sigma^\alpha_k,
\end{eqnarray}
where $k=\left|j-i\right|$ and $I_{0,k}$ is the identity operator of dimension four.
The transverse magnetization $\left\langle \sigma^z_k\right\rangle$
$=$ $\left\langle \sigma^z\right\rangle$ is
\begin{eqnarray}\label{tmag}
\left\langle \sigma^z\right\rangle=-\int_0^\pi
(1+\lambda\cos{\phi})\tanh{(\beta\omega_\phi)}\frac{d\phi}{2\pi\omega_\phi},
\end{eqnarray}
with $\omega_\phi=\sqrt{(\gamma\lambda\sin{\phi})^2+(1+\lambda\cos{\phi})^2}/2$.
The two-point correlation functions are given by
\begin{eqnarray}\label{tpcf}
\left\langle \sigma^x_0\sigma^x_k\right\rangle &=& \left|
\begin{array}{cccc}
G_{-1} & G_{-2} & \cdots & G_{-k}\\
G_{0} & G_{-1} & \cdots & G_{-k+1} \\
\vdots & \vdots & \ddots & \vdots \\
G_{k-2} & G_{k-3} & \cdots & G_{-1} \\
\end{array}
\right|,\\
\left\langle \sigma^y_0\sigma^y_k\right\rangle &=& \left|
\begin{array}{cccc}
G_{1} & G_{0} & \cdots & G_{-k+2}\\
G_{2} & G_{1} & \cdots & G_{-k+3} \\
\vdots & \vdots & \ddots & \vdots \\
G_{k} & G_{k-1} & \cdots & G_{1} \\
\end{array}
\right|,\\
\left\langle \sigma^z_0\sigma^z_k\right\rangle &=&
\left\langle \sigma^z\right\rangle^2 - G_k G_{-k},
\end{eqnarray}
where
\begin{eqnarray*}
G_k&=&\int_0^\pi \tanh{(\beta\omega_\phi)}\cos{(k\phi)}(1+\lambda\cos{\phi})
\frac{d\phi}{2\pi\omega_\phi}\\
&-&\gamma\lambda\int_0^\pi \tanh{(\beta\omega_\phi)} \sin{(k\phi)\sin{\phi}}
\frac{d\phi}{2\pi\omega_\phi}.
\end{eqnarray*}
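These integrals are straightforward to evaluate by quadrature. The sketch below computes $\left\langle\sigma^z\right\rangle$ and $G_n$ on a $\phi$ grid and assembles $\left\langle\sigma^x_0\sigma^x_k\right\rangle$ as the Toeplitz determinant above; grid sizes are our choice:

```python
import numpy as np

def trap(y, x):
    """Simple trapezoidal rule."""
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2

def xy_correlators(lam, gamma, beta, k, n_phi=4001):
    """<sigma^z> and <s^x_0 s^x_k> for the XY chain (eqs. tmag, tpcf)."""
    phi = np.linspace(0.0, np.pi, n_phi)
    w = np.sqrt((gamma*lam*np.sin(phi))**2 + (1 + lam*np.cos(phi))**2) / 2
    t = np.tanh(beta*w) / (2*np.pi*w)

    mz = -trap((1 + lam*np.cos(phi))*t, phi)

    def G(n):
        return (trap(np.cos(n*phi)*(1 + lam*np.cos(phi))*t, phi)
                - gamma*lam*trap(np.sin(n*phi)*np.sin(phi)*t, phi))

    # <s^x_0 s^x_k>: k x k Toeplitz determinant with entries G_{i-j-1}
    xx = np.linalg.det(np.array([[G(i - j - 1) for j in range(k)]
                                 for i in range(k)]))
    return mz, xx

# Limiting check: at lam = 0 eq. (tmag) reduces to <sigma^z> = -tanh(beta/2).
mz0, _ = xy_correlators(lam=0.0, gamma=1.0, beta=2.0, k=1)
print(abs(mz0 + np.tanh(1.0)) < 1e-8)  # True
```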
The relation between TQD and QPT for
the Ising model (XY model with $\gamma=1$) at $T=0$ was
investigated initially by Dillenschneider\cite{Dil08} for
first and second nearest-neighbors. More general results were
obtained in Ref.~\refcite{Sar10} where TQD and EoF from
first to fourth nearest-neighbors were computed for different
values of $\gamma$. This study at $T=0$ showed that while EoF between
far neighbors becomes zero, QD is not null and detects the QPT. The
effects of the symmetry breaking process on entanglement and QD for the XY
and the XXZ models were discussed in Ref.~\refcite{Ami10}, where the
low temperature regime was taken into account. In Ref.~\refcite{werPRA}
we compared the ability of TQD and EoF for first and second nearest-neighbors
to detect the CPs for the XY model at finite temperature.
The behavior of TQD and EoF for first nearest-neighbors as a
function of $\lambda$ for $kT=0.01,0.1,0.5$ and $\gamma=0,0.5,1.0$
can be seen in Fig.~\ref{fig7}. First, note that TQD is more robust
to temperature increase than EoF. For $kT=0.5$ TQD is
always non-zero while EoF is zero or close to zero for almost all
$\lambda$ (see the blue/solid curves in Fig.~\ref{fig7}).
\begin{figure}
\centerline{\psfig{file=fig7.eps,width=4.0in}}
\caption{(a)-(c) TQD and (d)-(f) EoF as functions of
$\lambda$ for $kT=0.01$ (black/dashed line), $kT=0.1$ (red/dotted line) and
$kT=0.5$ (blue/solid line) for nearest-neighbors.
We use three values of $\gamma$ as shown in the graphs.}
\label{fig7}
\end{figure}
As shown in Ref.~\refcite{werPRA}, for second nearest-neighbors the
situation is more drastic since EoF is always zero for $kT=0.5$. Now,
to estimate the CPs at finite $T$ we used the same procedure adopted for
the XXZ model. If the first-order derivative of TQD or EoF diverges at
$T=0$ then the CP is pointed out by a local maximum or minimum at $T>0$;
if the first-order derivative is discontinuous at $T=0$ then we look for a
local maximum or minimum in the second derivative for $T>0$. These extreme
values act as indicators of QPTs. The CPs estimated with such
method are denoted by $\lambda_e$ while the correct CPs are denoted
by $\lambda_c$. The differences between $\lambda_c$ and $\lambda_e$ as a
function of $kT$ for $\gamma=0,0.5,1.0$ are plotted in Fig.~\ref{fig8}. We can
see in this figure that TQD provides a better estimate of the CP $\lambda_c=1$
than EoF.
\begin{figure}
\centerline{\psfig{file=fig8.eps,width=4.25in}}
\caption{The difference between the correct CP, $\lambda_c$, and, $\lambda_e$,
the CP estimated either by TQD (square) or EoF (circle) as a function
of $kT$ for (a) $\gamma=0$, (b) $\gamma=0.5$, and (c) $\gamma=1.0$.
Note that both curves coincide at panel (b).}
\label{fig8}
\end{figure}
For $\gamma=0.5$ EoF and TQD give almost the same estimation of the CP
and for $\gamma=1.0$ TQD is better than EoF with predictions differing at
the second decimal place. For $\gamma = 0$, TQD outperforms
EoF already in the first decimal place. Moreover, for $\gamma=0$ TQD
is able to correctly estimate the CP for higher temperatures than
for $\gamma=0.5$ and $\gamma=1.0$.
So far we have studied a QPT driven by the magnetic field. However,
for $\lambda>1$ the XY model undergoes a QPT driven by the
anisotropy parameter $\gamma$, whose critical point is $\gamma_c=0$.
To study such transition we fixed $\lambda=1.5$. In Fig.~\ref{fig9}
we plotted TQD and EoF for the first-neighbors as
functions of $\gamma$ and for $kT = 0.001, 0.1, 0.5, 1.0,$ and $2.0$.
Note that the maxima of both TQD and EoF are reached at the CP $\gamma_c=0$.
However, only TQD has a cusp-like behavior at the CP.
\begin{figure}
\centerline{\psfig{file=fig9b.eps,width=4.00in}}
\caption{(a) TQD and (b) EoF for the first nearest-neighbors as
functions of $\gamma$. From top to bottom $kT=0.001, 0.1, 0.5, 1.0$, and
$2.0$. Here we fixed $\lambda = 1.5$ and the CP is $\gamma_c=0$.}
\label{fig9}
\end{figure}
This
pattern of TQD (a maximum with a cusp-like behavior) persists up to $kT=2.0$.
On the other hand, the maximum of EoF at the CP can only be
seen for $kT<1$, since above this temperature EoF becomes zero.
In Ref.~\refcite{werPRA} we computed TQD and EoF for second-neighbors.
In this case TQD is able to detect the CP even for values
near $kT=1.0$ while EoF is nonzero only for $kT\lesssim 0.1$.
\section{Conclusions}
In this article we presented a review of our studies about the
behavior of quantum correlations in the context of spin chains at
finite temperatures. The main goal of this paper was to analyze
the ability of quantum correlations to pinpoint the critical points
associated with quantum phase transitions assuming
the system's temperature is greater than absolute zero. The two
measures of quantum correlations studied here were
quantum discord and entanglement, with the former producing the
best results.
We first reviewed a work of two of us about a simple but illustrative
two spin-1/2 system described by
the XYZ model with an external magnetic field\cite{Wer10}. We showed many surprising
results about the behavior of thermal quantum discord. For example,
differently from entanglement, thermal quantum discord can increase with temperature in
the absence of an external field even for such a small system; moreover,
there are situations where thermal quantum discord increases while
entanglement decreases with temperature.
Furthermore, for the
XXX model we observed for the first time that quantum discord could
be a good candidate to signal a critical point at finite $T$.
To check whether quantum discord is indeed a good critical point detector
for temperatures higher than absolute zero, we analyzed its behavior
for an infinite chain described by the XXZ model
without\cite{werPRL} and with\cite{werPRA}
an external magnetic field and in equilibrium with a
thermal reservoir at temperature $T$. Here we also extended our previous results by
computing the quantum correlations between the first nearest-neighbor spins
for the whole range of the anisotropy parameter (positive and negative values).
In this way we were able to describe the two critical points of the XXZ model in the
thermodynamic limit and the behavior of quantum discord near them.
The results presented here and in refs.~\refcite{werPRL,werPRA}
showed that quantum discord is by far the best critical
point detector for $T>0$ among all quantities tested (entanglement,
entropy, specific heat, magnetic susceptibility, and the two-point correlation
functions).
Another model considered in this work was the XY model in a transverse
magnetic field\cite{werPRA}. Again, we computed the quantum correlations between the first
nearest-neighbors assuming the system in equilibrium with a thermal
reservoir at temperature $T$. This model has two second-order quantum
phase transitions, namely, an Ising transition and an anisotropy transition.
For the Ising transition we observed that the critical point is better
estimated by quantum discord. For the anisotropy transition both quantities,
entanglement and discord, provide an excellent estimate for the critical
point at low temperatures.
However, since for increasing temperatures quantum discord is more robust
than entanglement, the former was able to
spotlight the quantum critical point for a wider range of temperatures,
even for values of temperature where entanglement was absent.
In conclusion, we showed that for the spin models studied here and in
Refs.~\refcite{werPRL,werPRA}
quantum discord was the best quantum critical point estimator among all
quantities tested when the system is at a finite temperature.
It is also important to mention that the knowledge of the order
parameter was not needed to estimate the critical points. Therefore,
our results indicate that
quantum correlations - mainly quantum discord - are important
tools to study quantum phase transitions in realistic scenarios,
where the temperature is always above absolute zero.
\section*{Acknowledgements}
TW and GR thank the Brazilian agency CNPq (National Council for Scientific and Technological
Development) for funding and GAPR thanks CNPq and FAPESP (State of S\~ao Paulo Research
Foundation) for funding. GR also thanks CNPq/FAPESP
for financial support through the National Institute of Science and Technology
for Quantum Information.
\section{Introduction}
Khovanov constructed in \cite{Khovanov:FunctorValuedInvariant} a
family of rings $H^n$, for $n\geq 0$, which is a categorification
of \textrm{Inv}($n$), the space of $U_q(sl_2)$-invariants in $V^{\otimes
2n}.$ These rings lead to an invariant of (even) tangles which to
a tangle assigns a complex of $(H^n,H^m)$-bimodules, up to chain
homotopy equivalence. Khovanov and the author
\cite{CK:Subquotients} built subquotients of $H^n$ and used them
to categorify the action of tangles on $V^{\otimes n}$. The same
rings were also introduced by Stroppel \cite{S:PerverseSheaves}.
In this paper, we extend the construction of these arc rings
$A^{n-k,k}$ and give a categorification of level two
representations of $U_q(sl_N)$. In section $2$ we review the
definition of the arc rings $A^{n-k,k}$ and construct the rings
$A_n^{k,l}$ with two platforms of arbitrary sizes $k$ and $l$. We
show that they lead to a tangle invariant which is functorial
under tangle cobordisms. Then in section $3$ we compute the
centers of $A_n^{k,l}$ and relate them to the cohomology rings of
Springer varieties. Finally, in section $4$, we categorify level
two representations of $U_q(sl_N)$ using the rings $A_n^{k,0}$.
Fix a level two representation $V$ of $U_q(sl_N)$ with the highest
weight $\omega_{s}+\omega_{k+s}$. There is a decomposition of $V$
into weight spaces $V= \oplusop{\mu} V_{\mu}$. A weight $\mu$ is
called admissible if it appears in the weight space decomposition
of $V$. Denote by $\mathcal{C}$ the direct sum of categories of
$A^{k,0}_{m(\mu)}$-modules over admissible $\mu$, where $m(\mu)$
is a non-negative integer depending only on $\mu$. The
Grothendieck group of the category of $A^{k,0}_{m(\mu)}$-modules
is naturally isomorphic to $V_{\mu}$. The exact functors
$\mathcal{E}_i,\ \mathcal{F}_i$ introduced by Khovanov and
Huerfano in \cite{HK:LevelTwo} naturally extend to exact functors
on $\mathcal{C}$ which categorify the actions of $E_i,F_i\in
U_q(sl_N)$ on $V$.
{\bf Acknowledgements:} I would like to thank Mikhail Khovanov for
suggesting this problem and for many helpful conversations and
suggestions. I am also very grateful to Robin Kirby for his
kindness, guidance and support.
\section{Generalization of the arc ring $A^{n-k,k}$}
\subsection{Arc ring $A^{n-k,k}$}
We first recall the definition of $H^n$ from
\cite{Khovanov:FunctorValuedInvariant}. Let $\mathcal{A}$ be a free
abelian group of rank $2$ spanned by $\mathbf{1}$ and $X$ with $\mathbf{1}$ in
degree $-1$ and $X$ in degree $1$. Assign to $\mathcal{A}$ a
$2$-dimensional TQFT $\mathcal{F}$ which associates $\mathcal{A}^{\otimes k}$ to a
disjoint union of $k$ circles. To the ``pants'' cobordism
corresponding to merging of two circles into one, $\mathcal{F}$ associates
the multiplication $m: \mathcal{A} \otimes \mathcal{A} \rightarrow \mathcal{A}$
\begin{equation}
\mathbf{1}^2=\mathbf{1}, \hspace{0.1in} \mathbf{1} X= X\mathbf{1} = X, \hspace{0.1in} X^2=0. \label{m}
\end{equation}
To the ``inverse pants'' cobordism corresponding to splitting of
one circle into two, $\mathcal{F}$ associates the comultiplication $
\Delta: \mathcal{A} \rightarrow \mathcal{A} \otimes\mathcal{A}$
\begin{equation}
\hspace{0.1in}\Delta(\mathbf{1}) = \mathbf{1} \otimes X + X\otimes \mathbf{1} ,
\hspace{0.1in} \Delta(X) = X\otimes
X.\label{Delta}
\end{equation}
To the ``cup'' and ``cap'' cobordisms corresponding to the birth and death of a circle, $\mathcal{F}$ associates the unit map $\iota: \mathbb{Z}
\rightarrow \mathcal{A}$ and the trace map $\varepsilon: \mathcal{A} \rightarrow
\mathbb{Z}$, respectively:
\begin{equation*}
\iota(1)=\mathbf{1},\hspace{0.1in} \varepsilon(\mathbf{1})=0, \hspace{0.1in}\varepsilon(X)=1.
\end{equation*}
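The maps $m$, $\Delta$, $\iota$, $\varepsilon$ make $\mathcal{A}$ a Frobenius algebra, which is what makes $\mathcal{F}$ a $2$-dimensional TQFT. As an illustrative sketch (ours, not part of the paper), the following Python snippet encodes $m$, $\Delta$ and $\varepsilon$ on the basis $\{\mathbf{1}, X\}$ and verifies the Frobenius identity $\Delta\circ m = (m\otimes \mathrm{id})\circ(\mathrm{id}\otimes\Delta)$ on every basis vector of $\mathcal{A}\otimes\mathcal{A}$.

```python
from itertools import product

# Basis of A: '1' and 'X'; elements of A and of tensor powers of A are
# dicts {basis_key: integer coefficient}, with tuples as tensor keys.

def mult(u, v):                      # m: A (x) A -> A, eq. (1)
    if u == '1': return {v: 1}
    if v == '1': return {u: 1}
    return {}                        # X^2 = 0

def comult(u):                       # Delta: A -> A (x) A, eq. (2)
    if u == '1': return {('1', 'X'): 1, ('X', '1'): 1}
    return {('X', 'X'): 1}           # Delta(X) = X (x) X

def counit(u):                       # epsilon: A -> Z
    return 0 if u == '1' else 1

# Frobenius condition: Delta o m = (m (x) id) o (id (x) Delta),
# checked on each basis vector u (x) v of A (x) A.
for u, v in product('1X', repeat=2):
    lhs = {}
    for w, c in mult(u, v).items():
        for key, d in comult(w).items():
            lhs[key] = lhs.get(key, 0) + c * d
    rhs = {}
    for (p, q), c in comult(v).items():
        for w, d in mult(u, p).items():
            rhs[(w, q)] = rhs.get((w, q), 0) + c * d
    assert lhs == rhs
```
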
Let $B^n$ be the set of crossingless matchings of $2n$ points. For
$a$, $b\in B^n$ denote by $W(b)$ the reflection of $b$ about the
horizontal axis, and by $W(b)a$ the closed $1$-manifold obtained
by closing $W(b)$ and $a$ along their boundaries.
$\mathcal{F}(W(b)a)$ is a graded abelian group isomorphic to $\mathcal{A}^{\otimes
I}$, where $I$ is the set of circles in $W(b)a$, see
figure~\ref{Wba.figure}.
\begin{figure}[ht!]
\begin{center}
\psfrag{b}{\small$b$}\psfrag{a}{\small$a$}\psfrag{W(b)}{\small$W(b)$}\psfrag{W(b)a}{\small$W(b)a$}
\psfrag{f}{\small$\mathcal{F}$}\psfrag{a2}{\small$\mathcal{A}^{\otimes 2}$}
\epsfig{figure=wba.eps} \caption{Gluing in $B^3$.}
\label{Wba.figure}
\end{center}
\end{figure}
For $a$, $b\in B^n$ let
\begin{equation*}
{_b(H^n)_a} \stackrel{\mbox{\scriptsize{def}}}{=} \mathcal{F}(W(b)a)\{n\},
\end{equation*}
and define $H^n$ as the direct sum
\begin{equation*}
H^n\stackrel{\mbox{\scriptsize{def}}}{=} \oplusop{a,b} \hspace{0.05in} {_b(H^n)_a},
\end{equation*}
where $\{n\}$ denotes the operation of shifting the grading up by $n$.
Multiplication maps in $H^n$ are defined as follows. We set $xy=0$
if $x\in {_b(H^n)_a}$, $y\in {_c(H^n)_d}$ and $c\neq a$.
Multiplication maps
\begin{equation*}
{_b(H^n)_a} \otimes {_a(H^n)_c} \rightarrow {_b(H^n)_c}
\end{equation*}
are given by homomorphisms of abelian groups
\begin{equation*}
\mathcal{F}(W(b)a) \otimes \mathcal{F}(W(a)c) \rightarrow \mathcal{F}(W(b)c),
\end{equation*}
which are induced by the ``minimal'' cobordism from $W(b)aW(a)c$
to $W(b)c$, see figure~\ref{contraction.figure}.
\begin{figure}[ht!]
\begin{center}
\psfrag{a}{\tiny$a$}\psfrag{c}{\tiny$c$}\psfrag{W(b)}{\tiny$W(b)$}\psfrag{W(a)}{\tiny$W(a)$}
\psfrag{f}{\tiny$\mathcal{F}$}\psfrag{a1}{\tiny$\mathcal{A}$}\psfrag{a4}{\tiny$\mathcal{A}^{\otimes
4}$}\psfrag{m}{\tiny$m\circ(id\otimes m)\circ(id^2\otimes m)$}
\epsfig{figure=contraction.eps} \caption{Multiplication in $H^n$.}
\label{contraction.figure}
\end{center}
\end{figure}
Now we recall the definition of the subquotients of $H^n$ from
\cite{CK:Subquotients}. For each $n\geq 0$ and $0\leq k \leq n$,
define $B^{n-k,k}$ to be the subset of $B^n$ where there are no
matchings among the first $n-k$ points and among the last $k$
points. Figure~\ref{b21.figure} shows $B^{1,2}$.
\begin{figure}[ht!]
\begin{center}
\epsfig{figure=b21.eps} \caption{The $3$ elements in $B^{1,2}$.}
\label{b21.figure}
\end{center}
\end{figure}
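Both $B^n$ and $B^{n-k,k}$ are easy to enumerate by brute force. The following Python sketch (our illustration; the encoding of a matching as a list of arc endpoint pairs is our own convention) lists crossingless matchings and confirms that $|B^n|$ is the $n$-th Catalan number and that $|B^{1,2}|=3$, as in figure~\ref{b21.figure}.

```python
def matchings(points):
    """All crossingless perfect matchings of an ordered list of points,
    returned as lists of arcs (p, q) with p < q."""
    if not points:
        return [[]]
    result = []
    for i in range(1, len(points), 2):   # partner of points[0]
        for inner in matchings(points[1:i]):
            for outer in matchings(points[i + 1:]):
                result.append([(points[0], points[i])] + inner + outer)
    return result

def B(n, k, l):
    """B_n^{k,l}: matchings of n+k+l points (labelled 0..n+k+l-1) with
    no arc contained in the first k points or in the last l points."""
    total = n + k + l
    assert total % 2 == 0 and abs(k - l) <= n   # (n,k,l) must be coherent
    return [m for m in matchings(list(range(total)))
            if all(b >= k and a < total - l for a, b in m)]

assert len(matchings(list(range(6)))) == 5   # |B^3| = C_3 = 5
assert len(B(3, 1, 2)) == 3                  # B^{1,2} = B_3^{1,2}, as in the figure
assert len(B(4, 0, 0)) == 2                  # B_4^{0,0} = B^2, C_2 = 2
```
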
We put two ``platforms'', one on the first $n-k$ points and one on
the last $k$ points to indicate that these endpoints are special.
The $n$ points lying in between the two platforms are called ``free points''.
Define $\widetilde{A}^{n-k,k}$ by
\begin{equation}
\widetilde{A}^{n-k,k} \stackrel{\mbox{\scriptsize{def}}}{=} \oplusop{a,b\in B^{n-k,k}}
\hspace{0.05in} \mathcal{F}(W(b)a)\{n\}. \label{DefTildeA.equation}
\end{equation}
$\widetilde{A}^{n-k,k}$ sits inside $H^n$ as a graded subring
which inherits its multiplication from $H^n$.
For $a,b\in B^{n-k,k}$, call a circle in $W(b)a$ type I if it is
disjoint from the platforms, type II if it intersects at least one
platform and intersects each platform at most once, and type III if
it intersects one of the platforms at least twice (see
figure~\ref{3types.figure}). An intersection point between a
circle and a platform is called a ``mark''.
\begin{figure}[ht!]
\begin{center}
\psfrag{1}{\tiny I} \psfrag{2}{\tiny II} \psfrag{3}{\tiny III}
\epsfig{figure=3types.eps} \caption{$3$ types of circles.}
\label{3types.figure}
\end{center}
\end{figure}
Ring $\widetilde{A}^{n-k,k}$ has a two-sided graded ideal
$I^{n-k,k}\subset \widetilde{A}^{n-k,k}$. Ring $A^{n-k,k}$ is
defined as the quotient of $\widetilde{A}^{n-k,k}$ by the ideal
$I^{n-k,k}$
\begin{equation}
A^{n-k,k} \stackrel{\mbox{\scriptsize{def}}}{=} \widetilde{A}^{n-k,k} / I^{n-k,k}. \label{DefA.equation}
\end{equation}
$A^{n-k,k}$ naturally decomposes into a direct sum of graded
abelian groups
\begin{equation*}
A^{n-k,k} = \oplusop{a,b\in B^{n-k,k}} \hspace{0.05in}
_a(A^{n-k,k})_b.
\end{equation*}
By taking the direct product over all $0 \leq k \leq n$, we
collect the rings $A^{n-k,k}$ together into a graded ring $A^{n}$
$$A^n\stackrel{\mbox{\scriptsize{def}}}{=} \prod_{0\leq k\leq n}A^{n-k,k}.$$
As a graded abelian group, $A^n$ is the direct sum of $A^{n-k,k}$,
over $0\leq k\leq n$.
See \cite{Khovanov:FunctorValuedInvariant} and
\cite{CK:Subquotients} for more details on $H^n$ and its
subquotients.
\subsection{Generalization of $A^{n-k,k}$}
We call the triple $(n,k,l)$ coherent if $|k-l|\leq n$ and
$n+k+l\equiv 0\ \ \mathrm{(mod\ 2)}$. For each coherent triple
$(n,k,l)$ define $B_n^{k,l}$ to be the subset of $B^{(n+k+l)/2}$
where there are no matchings among the first $k$ points and among
the last $l$ points. Put one platform on the first $k$ points and
one on the last $l$ points. Note that $B_{2n}^{0,0}=B^n$ and
$B_n^{n-k,k}=B^{n-k,k}$.
Define $\widetilde{A}_n^{k,l}$ by
\begin{equation}
\widetilde{A}_n^{k,l} \stackrel{\mbox{\scriptsize{def}}}{=} \oplusop{a,b\in B_n^{k,l}}
\hspace{0.05in} \mathcal{F}(W(b)a)\{\frac{n+k+l}{2}\}. \label{DefTildeAnkl.equation}
\end{equation}
Just like $\widetilde{A}^{n-k,k}$, $\widetilde{A}_n^{k,l}$ is a
graded subring of $H^n$ and inherits its multiplication from
$H^n$. The ideal $I_n^{k,l}\subset \widetilde{A}_n^{k,l}$ is
defined exactly as that of $\widetilde{A}^{n-k,k}$. For $a,b\in
B_n^{k,l}$, if $W(b)a$ contains at least one type III circle, set
${_b(I_n^{k,l})_a}=\mathcal{F}(W(b)a)\{\frac{n+k+l}{2}\}$. If $W(b)a$ contains only
circles of type I and type II, we write $\mathcal{F}(W(b)a)=\mathcal{A}^{\otimes
i}\otimes \mathcal{A}^{\otimes j}$ in which type II circles correspond to
the first $i$ tensor factors, and define ${_b(I_n^{k,l})_a}$ as
the span of
\begin{equation*}
y_1 \otimes \cdots \otimes y_{t-1} \otimes X \otimes y_{t+1}
\otimes \cdots \otimes y_{i+j} \in \mathcal{A}^{\otimes i}\otimes
\mathcal{A}^{\otimes j} \cong \mathcal{F}(W(b)a),
\end{equation*}
where $1\leq t\leq i$ and $y_s\in \{\mathbf{1}, X\}$. By taking the
direct sum over all $a,b\in B_n^{k,l}$ we get a subgroup of
$\widetilde{A}_n^{k,l}$
\begin{equation*}
I_n^{k,l} \stackrel{\mbox{\scriptsize{def}}}{=} \oplusop{a,b\in B_n^{k,l}}\hspace{0.05in}
{_b(I_n^{k,l})_a}.
\end{equation*}
It's easy to show that $I_n^{k,l}$ is a two-sided graded ideal of
$\widetilde{A}_n^{k,l}$. Ring $A_n^{k,l}$ is defined as the
quotient of $\widetilde{A}_n^{k,l}$ by the ideal $I_n^{k,l}$
\begin{equation}
A_n^{k,l} \stackrel{\mbox{\scriptsize{def}}}{=} \widetilde{A}_n^{k,l} / I_n^{k,l}. \label{DefAnkl.equation}
\end{equation}
If $W(b)a$ contains a type III circle then ${_a(A_n^{k,l})_b}=0$.
Otherwise, the group ${_a(A_n^{k,l})_b}$ is free abelian of rank
$2^{\mathrm{\#\ of\ type\ I\ circles}}$. Assuming that $W(a)b$
contains $m$ circles, of which the first $i$ are of type
II, ${_a(A_n^{k,l})_b}$ has a basis of the form
\begin{equation*}
\mathbf{1} \otimes \cdots \otimes \mathbf{1} \otimes a_{i+1} \otimes \cdots
\otimes a_{m},
\end{equation*}
where $a_s\in \{\mathbf{1}, X\}$ for all $i+1\leq s\leq m$.
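As a quick sanity check on this rank formula, the Python sketch below (ours, not the paper's; matchings are encoded as lists of arc endpoint pairs on points $0,\dots,n+k+l-1$) glues two matchings, classifies the resulting circles by their platform marks, and returns the rank of ${_a(A_n^{k,l})_b}$. The three matchings used in the example are the elements of $B^{1,2}=B_3^{1,2}$.

```python
def rank_ab(a, b, k, l, total):
    """Rank of _a(A_n^{k,l})_b for matchings a, b of `total` = n+k+l points.
    Circles of W(b)a are the components of the union of the two matchings;
    a type III circle (>= 2 marks on one platform) kills the group, and
    each type I circle (no marks) contributes a free factor of A."""
    parent = list(range(total))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for p, q in list(a) + list(b):
        parent[find(p)] = find(q)
    comps = {}
    for p in range(total):
        comps.setdefault(find(p), []).append(p)
    rank = 1
    for pts in comps.values():
        left = sum(1 for p in pts if p < k)            # marks on left platform
        right = sum(1 for p in pts if p >= total - l)  # marks on right platform
        if left >= 2 or right >= 2:
            return 0      # type III circle
        if left == 0 and right == 0:
            rank *= 2     # type I circle
    return rank           # type II circles contribute rank 1

# The three elements of B_3^{1,2}, points labelled 0..5:
a1 = [(0, 1), (2, 5), (3, 4)]
a2 = [(0, 5), (1, 2), (3, 4)]
a3 = [(0, 5), (1, 4), (2, 3)]
assert rank_ab(a2, a2, 1, 2, 6) == 2   # one type I circle
assert rank_ab(a3, a1, 1, 2, 6) == 0   # W(a1)a3 has a type III circle
```
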
There is a natural decomposition of $A_n^{k,l}$ into a direct sum
of graded abelian groups
\begin{equation*}
A_n^{k,l} = \oplusop{a,b\in B_n^{k,l}} \hspace{0.05in}
_a(A_n^{k,l})_b,
\end{equation*}
where $$_a(A_n^{k,l})_b = \mathcal{F}(W(a)b)\{\frac{n+k+l}{2}\} /
{_a(I_n^{k,l})_b}.$$ Let $P_n^{k,l}(a)$, or simply $P(a)$, for $a\in
B_n^{k,l}$, be the left $A_n^{k,l}$-module given by
$$P(a)=\oplusop{b\in B_n^{k,l}} {_b(A_n^{k,l})_a}.$$
$A_n^{k,l}$ decomposes into a direct sum of left
$A_n^{k,l}$-modules
$$A_n^{k,l}=\oplusop{a\in B_n^{k,l}}P(a).$$
$P(a)$ is left projective since it is a direct summand of the free
module $A_n^{k,l}$. Actually, any indecomposable left projective
$A_n^{k,l}$-module is isomorphic to $P(a)\{s\}$ for some $a\in
B_n^{k,l}$ and $s\in\mathbb{Z}$.
Here are some basic facts about the ring $A_n^{k,l}$:
\begin{itemize}
\item $A_n^{k,l}\cong A_n^{l,k}$. Reflecting a diagram in
$B_n^{k,l}$ about a vertical axis produces a diagram in
$B_n^{l,k}$. It leads to an isomorphism of sets $B_n^{k,l}\cong
B_n^{l,k}$ which induces an isomorphism of rings
$\widetilde{A}_n^{k,l}\cong \widetilde{A}_n^{l,k}$ and of the
quotient rings $A_n^{k,l}\cong A_n^{l,k}$.
\item The minimal idempotents in $A_n^{k,l}$ are $1_a\stackrel{\mbox{\scriptsize{def}}}{=}
\mathbf{1}^{\otimes (n+k+l)/2}\in {_a(A_n^{k,l})_a}$. The unit element
$1$ of $A_n^{k,l}$ is the sum of $1_a$ over all $a \in B_n^{k,l}$:
$1 \stackrel{\mbox{\scriptsize{def}}}{=} \sum_{a\in B_n^{k,l}} 1_a.$
\item $A_n^{k,l}$ sits inside $A_{n}^{k+1,l+1}$ as a subring. This
inclusion stabilizes when $k+l>n$. In particular we have:
$A_n^{k,n-k}\cong A_n^{k+1,n-k+1}\cong A_n^{k+2,n-k+2}\cong
\cdots$
\end{itemize}
\begin{prop}
The rings $A_n^{0,l}$ are symmetric and, therefore, Frobenius over
$\mathbb{Z}$.
\end{prop}
The proof is similar to \cite{Khovanov:FunctorValuedInvariant}
proposition $32$.
\subsection{Flat tangles and bimodules}
Denote by $\widehat{B}^m_n$ the space of flat tangles with $m$ top
endpoints and $n$ bottom endpoints. For simplicity we assume that
the top and bottom endpoints lie on $\mathbb{R}\times \{1\}$ and
$\mathbb{R}\times \{0\}$, and have integer coordinates
$1,2,\cdots,m$ and $1,2,\cdots,n$ respectively.
Figure~\ref{FlattangleExample.figure} shows two elements in
$\widehat{B}^4_6$.
\begin{figure}[ht!]
\begin{center}
\epsfig{figure=flattangleexample.eps} \caption{Two flat tangles
in $\widehat{B}^4_6$.} \label{FlattangleExample.figure}
\end{center}
\end{figure}
To a flat tangle $T\in \widehat{B}^m_n$ we would like to assign a
bimodule over algebras $A_n^{k,l}$ and $A_m^{s,t}$ where both
$(n,k,l)$ and $(m,s,t)$ are coherent triples and $k-l=s-t$. Define
a graded $(\widetilde{A}_m^{s,t},\widetilde{A}_n^{k,l})$-bimodule
$\widetilde{\mathcal{F}}(T)$ by
\begin{equation*}
\widetilde{\mathcal{F}}(T) = \oplusop{b\in B_n^{k,l},c\in B_m^{s,t}} \hspace{0.05in}{_c\widetilde{\mathcal{F}}(T)_b},
\end{equation*}
where
\begin{equation*} \label{def-bimod}
{_c}\widetilde{\mathcal{F}}(T)_b \stackrel{\mbox{\scriptsize{def}}}{=} \mathcal{F}( W(c) T b)
\{\frac{n+k+l}{2}\}.
\end{equation*}
The plane diagram $W(c) T b$ is not a union of circles if $k\neq
s$. In that case we close it in the obvious way before applying
the functor $\mathcal{F}$ (see figure~\ref{close.figure}). The left action
$\widetilde{A}_m^{s,t} \times \widetilde{\mathcal{F}}(T) \rightarrow
\widetilde{\mathcal{F}}(T)$ comes from maps
\begin{equation*}
\mathcal{F}(W(a)c) \times {_c\widetilde{\mathcal{F}}(T)_b} \rightarrow
{_a\widetilde{\mathcal{F}}(T)_b},
\end{equation*}
and the right action $\widetilde{\mathcal{F}}(T) \times
\widetilde{A}_n^{k,l} \rightarrow \widetilde{\mathcal{F}}(T)$ comes from
maps
\begin{equation*}
{_c\widetilde{\mathcal{F}}(T)_b} \times \mathcal{F}(W(b)a)\rightarrow
{_c\widetilde{\mathcal{F}}(T)_a}.
\end{equation*}
Both maps are induced by the obvious ``minimal cobordism'' (see
figure~\ref{contraction.figure}).
\begin{figure}[ht!]
\begin{center}
\psfrag{a}{\small $c\in B_2^{0,2}$} \psfrag{t}{\small $T\in
\widehat{B}_6^2$} \psfrag{b}{\small $b\in
B_6^{1,3}$}\psfrag{atb}{\small $W(c)Tb$} \psfrag{c}{\small closure
of $W(c)Tb$} \psfrag{v}{\small Vertical lines added}
\epsfig{figure=close.eps} \caption{Closing $W(c)Tb$.} \label{close.figure}
\end{center}
\end{figure}
Now define a subgroup ${_bI(T)_a}$ of ${_b\widetilde{\mathcal{F}}(T)_a}$
as follows. Set ${_bI(T)_a}={_b\widetilde{\mathcal{F}}(T)_a}$ if $W(b)Ta$
contains a type III arc. Otherwise, assuming that
$\mathcal{F}(W(b)Ta)\cong \mathcal{A}^{\otimes r}$ in which type II circles
correspond to the first $i$ tensor factors, set ${_bI(T)_a}$ to be
the span of
\begin{equation*}
u_1 \otimes \cdots \otimes u_{j-1} \otimes X \otimes u_{j+1}
\otimes \cdots \otimes u_{r} \in \mathcal{F}(W(b)Ta) \cong \mathcal{A}^{\otimes
r},
\end{equation*}
where $1\leq j\leq i$ and $u_e\in \{\mathbf{1}, X\}$ for each $1\leq e\leq r$, $e\neq j$. By taking the direct sum we get a subgroup
\begin{equation*}
I(T) \stackrel{\mbox{\scriptsize{def}}}{=} \oplusop{a \in B_m^{s,t}, b\in B_n^{k,l}}\hspace{0.05in}
{_aI(T)_b}.
\end{equation*}
$I(T)$ is in fact a subbimodule of $\widetilde{\mathcal{F}}(T)$ and we can
define $\mathcal{F}(T)$ to be the quotient bimodule
\begin{equation*}
\mathcal{F}(T) \stackrel{\mbox{\scriptsize{def}}}{=} \widetilde{\mathcal{F}}(T) / I(T).
\end{equation*}
It's easy to show that the action of $I_n^{k,l}$ on $\mathcal{F}(T)$ is
trivial (see \cite{CK:Subquotients}), thus the
$(\widetilde{A}_m^{s,t},\widetilde{A}_n^{k,l})$-bimodule structure
on $\mathcal{F}(T)$ descends to an $(A_m^{s,t},A_n^{k,l})$-bimodule
structure.
\begin{prop} An isotopy between $T_1,T_2\in \widehat{B}^{m}_{n}$ induces an isomorphism
of bimodules $\mathcal{F}(T_1) \cong \mathcal{F}(T_2).$ Two isotopies between $T_1$ and $T_2$
induce equal isomorphisms iff the bijections from circle components
of $T_1$ to circle components of $T_2$ induced by the two isotopies coincide.
\end{prop}
The proof is similar to that in
\cite{Khovanov:FunctorValuedInvariant}.
Cobordisms between flat tangles induce bimodule maps (see
figure~\ref{Cobordism.figure}):
\begin{prop} Let $T_1,T_2\in \widehat{B}^{m}_{n}$ and let $S$ be a cobordism between $T_1$ and $T_2$.
Then $S$ induces a degree $\frac{n+m}{2}-\chi(S)$ homomorphism of
$(A_m^{s,t},A_n^{k,l})$-bimodules
\begin{equation*}
\mathcal{F}(S): \mathcal{F}(T_1) \to \mathcal{F}(T_2),
\end{equation*}
where $\chi(S)$ is the Euler characteristic of $S$.
\end{prop}
\begin{proof} It follows from the definition that
$\widetilde{\mathcal{F}}(T_1)= \oplusop{a,b} \mathcal{F}(W(b)T_1 a) \{
\frac{n+k+l}{2}\}$ and $\widetilde{\mathcal{F}}(T_2)= \oplusop{a,b}
\mathcal{F}(W(b)T_2 a) \{ \frac{n+k+l}{2}\}$, where the sum is over all
$a\in B_n^{k,l}$ and $b\in B_m^{s,t}$. The surface $S$ induces a
homogeneous map of graded abelian groups $\mathcal{F}(W(b)T_1 a) \to
\mathcal{F}(W(b)T_2 a)$. Summing over all $a$ and $b$ we get a
homomorphism of
$(\widetilde{A}_m^{s,t},\widetilde{A}_n^{k,l})$-bimodules
$\widetilde{\mathcal{F}}(S): \widetilde{\mathcal{F}}(T_1) \to
\widetilde{\mathcal{F}}(T_2)$. The grading assertion follows from the fact that $\chi(S')=\chi(S)-\frac{n+m}{2}$.
It's easy to show that $\widetilde{\mathcal{F}}(S)$ takes $I(T_1)$ into
$I(T_2)$. See \cite{CK:Subquotients} for details.
\end{proof}
\begin{figure}[ht!]
\begin{center}
\psfrag{c}{\small cobordism}\psfrag{a1}{\small
$a$}\psfrag{b1}{\small $b_1$}\psfrag{b2}{\small
$b_2$}\psfrag{b3}{\small $b_3$}\psfrag{t1}{\small
$T_1$}\psfrag{t2}{\small $T_2$}\psfrag{e1}{\tiny $1\otimes 1
\otimes \mathbf{1} \mapsto 1\otimes 1$}\psfrag{e2}{\tiny $1\otimes 1
\otimes X\mapsto 0$}\psfrag{e3}{\tiny $1\otimes 1\mapsto X \otimes
1\otimes 1$}\psfrag{e4}{\tiny $0\mapsto 0$}\psfrag{w1}{\tiny
$W(a)T_1 b_1$}\psfrag{w2}{\tiny $W(a)T_1 b_2$}\psfrag{w3}{\tiny
$W(a)T_1 b_3$}\psfrag{w4}{\tiny $W(a)T_2 b_1$}\psfrag{w5}{\tiny
$W(a)T_2 b_2$}\psfrag{w6}{\tiny $W(a)T_2 b_3$}\psfrag{f}{\small
$\mathcal{F}$}\psfrag{a}{\small $\mathcal{A} \{1\}$}\psfrag{z}{\small $\mathbb{Z}
\{1\}$}\psfrag{0}{\small $0$}\psfrag{1}{\tiny $1$}
\psfrag{x}{\small $1$}\psfrag{y}{\small $0$}
\epsfig{figure=cobordism.eps} \caption{Cobordism induces bimodule map} \label{Cobordism.figure}
\end{center}
\end{figure}
\begin{prop} Isotopic (rel boundary) surfaces induce equal bimodule maps.
\end{prop}
\begin{prop} Let $T_1,T_2,T_3\in \widehat{B}^{m}_{n}$ and $S_1$,
$S_2$ be cobordisms from $T_1$ to $T_2$ and from $T_2$ to $T_3$
respectively. Then $\mathcal{F}(S_2)\mathcal{F}(S_1)=\mathcal{F}(S_2\circ S_1)$.
\end{prop}
Proofs of the above two propositions are similar to those in
\cite{Khovanov:FunctorValuedInvariant}.
Two coherent triples $(n,k,l)$ and $(m,s,t)$ are called
\emph{compatible} if either $k+l=n$, $s+t=m$, $t=l+\frac{m-n}{2}$,
or $k=s$, $l=t$. Also, a $(A_m^{s,t},A_n^{k,l})$-bimodule is
called compatible if $(n,k,l)$ and $(m,s,t)$ are compatible.
\begin{prop} \label{projectivity.proposition}
Let $T\in\widehat{B}_n^m$. The bimodule $\mathcal{F}(T)$ is projective as a
left $A_m^{s,t}$-module and as a right $A_n^{k,l}$-module whenever
$(n,k,l)$ and $(m,s,t)$ are compatible.
\end{prop}
\begin{proof} Ignore all the grading shifts. The bimodule $\mathcal{F}(T)$
is isomorphic, as a left $A_m^{s,t}$-module, to the direct sum
$\oplus_{a\in B_n^{k,l}} \mathcal{F}(Ta)$. To prove $\mathcal{F}(T)$ is left
projective it suffices to prove that $\mathcal{F}(Ta)$ is left projective
for all $a\in B_n^{k,l}$. Fix any $a\in B_n^{k,l}$. In general,
$Ta$ is a union of circles and arcs. With all circles removed,
$Ta$ is isotopic to some $a'\in B_m^{k,l}$ (see
figure~\ref{deform.figure}).
\begin{figure}[ht!]
\begin{center}
\psfrag{a}{\small $a\in B_6^{3,1}$} \psfrag{T}{\small $T\in
\widehat{B}_6^4$} \psfrag{Ta}{\small $Ta$}\psfrag{a'}{\small
$a'\in B_4^{3,1}$} \psfrag{rm}{\small
Isotopy}\psfrag{iso}{$\cong$}
\epsfig{figure=deformtangle.eps} \caption{Deformation of $Ta$.} \label{deform.figure}
\end{center}
\end{figure}
Case $1$: $k=s$ and $l=t$. In this case, assuming there are $c$ circles in $Ta$,
$$\mathcal{F}(Ta)=(\oplusop{b\in B_m^{s,t}} \mathcal{F}(W(b)Ta))\otimes {\mathcal{A}}^{\otimes c}\cong (\oplusop{b\in B_m^{s,t}}\mathcal{F}(W(b)a'))\otimes {\mathcal{A}}^{\otimes c}.$$
By definition $\oplusop{b\in B_m^{s,t}}\mathcal{F}(W(b)a')=
P_m^{s,t}(a')$, therefore $\mathcal{F}(Ta)$ is left projective.
Case $2$: $k+l=n$, $s+t=m$, $t=l+\frac{m-n}{2}$. Without loss of
generality we assume that $m\leq n$. Let $\Theta=\frac{n-m}{2}$.
The case $\Theta=0$ is proved in case $1$. Suppose the statement
is true when $\Theta\leq d$. Consider any $T\in \widehat{B}_n^m$
and $a\in B_n^{k,l}$ such that $\frac{n-m}{2}=d+1$. There exists
at least one ``cap'' in $T$ which connects two bottom endpoints
since $n>m$. Pick a cap $c$ which has no other bottom endpoints of
$T$ between its two feet. After gluing $a$ to $T$, either there is
an arc in $a$ connecting the two platforms, or both feet of $c$ are
connected to the platforms since $k+l=n$. Therefore there is
always an arc connecting the two platforms in $Ta$. By definition
of the arc ring, the two far ends of the two platforms are then
connected by an arc $e$. When closing the graph $W(b)Ta$ for some
$b\in B_m^{s,t}$ we need to add $d+1$ arcs since $t<l$ (see
figure~\ref{close.figure}). Denote the topmost added arc by $f$.
The arcs $e$ and $f$ form a type II circle $g$ which encloses the
rest of $W(b)Ta$. We can remove $g$ from $W(b)Ta$ for any $b\in
B_m^{s,t}$ since it contributes nothing to $\mathcal{F}(W(b)Ta)$, and then
reduce to the case $\Theta=d$. The proposition follows by
induction.
\end{proof}
\begin{prop} Let $T_1\in \widehat{B}^{p}_{n}$, $T_2\in
\widehat{B}^{m}_{p}$, $\mathcal{F}(T_1)$ a compatible
$(A_{p}^{q,r},A_{n}^{k,l})$-bimodule, and $\mathcal{F}(T_2)$ a compatible
$(A_{m}^{s,t},A_{p}^{q,r})$-bimodule. Then there is a canonical
isomorphism of $(A_{m}^{s,t},A_{n}^{k,l})$-bimodules
\begin{equation*}
\mathcal{F}(T_2 T_1)\cong \mathcal{F}(T_2) \otimes_{A_p^{q,r}}\mathcal{F}(T_1).
\end{equation*}
\end{prop}
\begin{proof} It follows from proposition
\ref{projectivity.proposition} that $W(a)T_2$ is a projective
right $A_p^{q,r}$-module and $T_1 b$ is a projective left
$A_p^{q,r}$-module for $a\in B_m^{s,t}$ and $b\in B_n^{k,l}$. The
proof in \cite{Khovanov:FunctorValuedInvariant} theorem $1$
therefore works in our case without any changes.
\end{proof}
{\bf{Remark:}} Compatible triples fall into two different types. A
pair of coherent triples $(n,k,l)$ and $(m,s,t)$ are called
``T-compatible'' if $k+l=n$, $s+t=m$, $t=l+\frac{m-n}{2}$, and
``F-compatible'' if $k=s$, $l=t$. Similarly, call a
$(A_n^{k,l},A_m^{s,t})$-bimodule T-compatible (F-compatible) if
$(n,k,l)$ and $(m,s,t)$ are T-compatible (F-compatible). If two
flat tangles $T_1$ and $T_2$ belong to the same type, $T_2 T_1$ is
then compatible and also belongs to that type. Therefore we can
compose as many flat tangles as we want within the same type.
However, if $T_1$ and $T_2$ belong to different types their
composition $T_2T_1$ may not be compatible.
Now consider only F-compatible triples and bimodules. For each $n$
such that $(n,k,l)$ is coherent, denote by $A_n^{k,l}$-mod the
category of finitely-generated graded left $A_n^{k,l}$-modules and
module maps. For each $T\in \widehat{B}_n^m$, tensoring with the
$(A_m^{k,l},A_n^{k,l})$-bimodule $\mathcal{F}(T)$ is an exact functor from
$A_n^{k,l}$-mod to $A_m^{k,l}$-mod. A cobordism $S$ between two
flat tangles $T_1,T_2\in \widehat{B}_n^m$ induces a homomorphism
$\mathcal{F}(S)$ of $(A_m^{k,l},A_n^{k,l})$-bimodules. The following
proposition is a summary of this section.
\begin{prop} Bimodules $\mathcal{F}(T)$ and homomorphisms $\mathcal{F}(S)$
assemble into a $2$-functor from the $2$-category of flat tangle
cobordisms to the $2$-category of natural transformations between
exact functors between $A_n^{k,l}$-mod.
\end{prop}
\subsection{Tangles and complexes of bimodules}
An $(m,n)$-tangle $L$ is a proper embedding of
$\frac{n+m}{2}$ oriented arcs and a finite number of oriented circles into
$\mathbb{R}^2 \times [0,1]$ such that the boundary points of arcs map to
\begin{equation*}
\{1, 2, . . ., n\}\times \{0\} \times\{0\}, \{1, 2, . . .,
m\}\times \{0\} \times\{1\}.
\end{equation*}
A plane diagram of a tangle is a generic projection of the tangle
onto $\mathbb{R}\times [0,1]$.
Fix $k$ and $l$ throughout the rest of this section. We would like
to define a tangle invariant using the rings $A_n^{k,l}$. The
construction follows the same line as in \cite{CK:Subquotients}.
The sizes of the platforms don't matter. We will state the results
here for completeness and refer readers to \cite{CK:Subquotients}
and \cite{Khovanov:FunctorValuedInvariant} for details.
Fix a diagram $D$ of an oriented $(m,n)$-tangle $L$. We define
the complex of $(A_m^{k,l}, A_n^{k,l})$-bimodules $\mathcal{F}(D)$
associated to $D$ inductively as follows.
\begin{itemize}
\item If $D$ has no crossings (and is therefore a flat tangle),
$\overline{\mathcal{F}}(D)$ is just the complex
$$0\rightarrow \mathcal{F}(D)\rightarrow 0,$$
where $\mathcal{F}(D)$, sitting in cohomological degree zero, is the
bimodule associated to the flat tangle $D$.
\item If $D$ has one crossing, consider the complex
$\overline{\mathcal{F}}(D)$ of $(A_m^{k,l},A_n^{k,l})$-bimodules
\begin{equation*}
0 \to \mathcal{F}(D(0)) \stackrel{\mbox{\scriptsize{$\partial$}}}{\rightarrow}
\mathcal{F}(D(1))\{-1\} \to
0 \label{complex.equation}
\end{equation*}
where $D(i), i=0,1$ denotes the $i$-smoothing of the crossing,
$\partial$ is induced by the ``saddle'' cobordism (see
figure~\ref{smoothing.figure}), and $\mathcal{F}(D(0))$ sits in the
cohomological degree zero.
\item To a diagram $D$ with $t+1$ crossings we associate, inductively, the total
complex $\overline{\mathcal{F}}(D)$ of the bicomplex
\begin{equation*}
0 \to \overline{\mathcal{F}}(D(c_0)) \stackrel{\mbox{\scriptsize{$\partial$}}}{\rightarrow}
\overline{\mathcal{F}}(D(c_1))\{-1\} \to 0 \label{complex1.equation}
\end{equation*}
where $D(c_i), i=0,1$ denotes the $i$-smoothing of a chosen crossing $c$
of $D$, so that $D(c_0)$ and $D(c_1)$ are diagrams with $t$ crossings.
\item Finally, define $\mathcal{F}(D)$ to be $\overline{\mathcal{F}}(D)$ shifted
by $[x(D)]\{ 2x(D)-y(D)\}$, where $x(D)$ counts the number of
negative crossings and $y(D)$ counts the number of positive
crossings (see figure~\ref{smoothing.figure}).
\end{itemize}
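The signs in the total complex follow the usual hypercube-of-smoothings convention: the edge that flips the $i$-th coordinate of a smoothing vertex from $0$ to $1$ carries the sign $(-1)^{\#\{1\text{'s before }i\}}$. The sketch below is ours (the paper's sign convention may be stated differently, but any such choice works); it builds this signed differential abstractly for $t$ crossings and checks that $\partial^2=0$.

```python
from itertools import product

def total_differential(t):
    """Signed edges of the smoothing hypercube {0,1}^t: flipping the i-th
    coordinate from 0 to 1 carries sign (-1)^(number of 1s before i).
    Returned as a dict {(source_vertex, target_vertex): sign}."""
    d = {}
    for v in product((0, 1), repeat=t):
        for i in range(t):
            if v[i] == 0:
                w = v[:i] + (1,) + v[i + 1:]
                d[(v, w)] = (-1) ** sum(v[:i])
    return d

def square(d):
    """Coefficients of the composite d o d, indexed by (source, target)."""
    dd = {}
    for (v, w), s1 in d.items():
        for (w2, u), s2 in d.items():
            if w == w2:
                dd[(v, u)] = dd.get((v, u), 0) + s1 * s2
    return dd

# With these signs every square face of the cube anticommutes, so d^2 = 0
# and the cube of smoothings collapses to a genuine total complex.
assert all(c == 0 for c in square(total_differential(3)).values())
```
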
\begin{figure}[ht!]
\begin{center}
\psfrag{n}{Negative} \psfrag{p}{Positive} \psfrag{0}{\tiny
$0$-smoothing} \psfrag{1}{\tiny $1$-smoothing} \psfrag{s}{\tiny
Saddle cobordism}
\epsfig{figure=resolution.eps} \caption{Two smoothings of a crossing.} \label{smoothing.figure}
\end{center}
\end{figure}
Figure~\ref{cube.figure} shows a complex of bimodules associated
to a $(2,2)$-tangle. Each arrow is induced by the saddle cobordism
and the sign on each arrow indicates the sign of each map in the
total complex.
\begin{figure}[ht!]
\begin{center}
\psfrag{pd}{\tiny $+$}\psfrag{nd}{\tiny
$-$}\psfrag{s1}{\scriptsize
$\{1\}$}\psfrag{s2}{\scriptsize$\{2\}$}\psfrag{s3}{\scriptsize$\{3\}$}
\psfrag{F}{$\mathcal{F}$}\psfrag{d}{$\oplus$}\psfrag{e}{$:$}\psfrag{p}{$\partial$}
\psfrag{h}{\scriptsize Homological
grading:}\psfrag{-3}{\scriptsize$-3$}\psfrag{-2}{\scriptsize$-2$}\psfrag{-1}{\scriptsize$-1$}\psfrag{0}{\scriptsize$0$}
\epsfig{figure=cube.eps}\caption{A total complex associated to a $(2,2)$-tangle.} \label{cube.figure}
\end{center}
\end{figure}
\begin{theorem} If $D_1$ and $D_2$ are two diagrams of an oriented
$(m,n)$-tangle $L$, the complexes $\mathcal{F}(D_1)$ and $\mathcal{F}(D_2)$ of
graded $(A_{m}^{k,l},A_{n}^{k,l})$-bimodules are chain homotopy
equivalent.
\end{theorem}
The following proposition is a special case of the more general
theorem \ref{center.theorem} in section $3$.
\begin{prop} \label{DegreeZeroCentral.proposition}
The only invertible degree zero central elements in $A_n^{k,l}$
are $\pm 1$
\begin{equation*}
Z^*_0(A_n^{k,l})\cong \{\pm 1\}.
\end{equation*}
\end{prop}
We now extend our invariant to oriented tangle cobordisms. Let $S$
be a movie presentation of a cobordism between two
$(m,n)$-tangles. $S$ is thus a sequence of Reidemeister moves and
handle moves. Each consecutive pair of frames corresponds to a
homomorphism which is an isomorphism for each Reidemeister move,
and is induced by $\iota$, $\varepsilon$, $m$, or $\Delta$ for
each handle move. The composition of these homomorphisms gives us
a homomorphism $$\mathcal{F}(S): \mathcal{F}(D) \longrightarrow \mathcal{F}(D'),$$ where
$D$ and $D'$ are the first and the last frame in the movie $S.$ It
follows from proposition \ref{DegreeZeroCentral.proposition} that
$\mathcal{F}(S)=\pm \mathcal{F}(S')$ if $S$ and $S'$ are two different
presentations of the same cobordism. To summarize, we have the
following:
\begin{prop} Complexes $\mathcal{F}(T)$ of bimodules and homomorphisms
$\pm \mathcal{F}(S)$ assigned to diagrams of tangle cobordisms assemble
into a projective $2$-functor from the $2$-category of oriented
tangle cobordisms to the $2$-category of natural transformations
between exact functors between homotopy categories of complexes of
graded $A_n^{k,l}$-modules.
\end{prop}
{\bf{Remark:}} The projective Grothendieck group
$K_p(A_n^{k,l}-\mbox{gmod})$ of the category of finitely-generated
graded projective $A_n^{k,l}$-modules is a free
$\mathbb{Z}[q,q^{-1}]$-module with a basis $[P_n^{k,l}(a)],$ $a\in
B_n^{k,l}.$ There is a natural way to identify
$K_p(A_n^{k,l}-\mbox{gmod})$ with a $\mathbb{Z}[q,q^{-1}]$-lattice
of $\mbox{Hom}(V_k\otimes V_l, V^{\otimes n})$
\begin{equation}\label{GrothendieckGroup.equation}
K_p(A_n^{k,l}-\mbox{gmod})\otimes_{\mathbb{Z}[q,q^{-1}]}\mathbb{C}\cong
\mbox{Hom}(V_k\otimes V_l, V^{\otimes n}),
\end{equation}
where $V$ is the fundamental $2$-dimensional representation of
$U_q(sl_2)$ and $V_i$ is the irreducible $(i+1)$-dimensional
representation of $U_q(sl_2)$. Under this isomorphism the basis
$[P_n^{k,l}(a)]$ goes to the dual canonical basis of
$$\mbox{Hom}(V_k\otimes V_l, V^{\otimes n})\cong
\mbox{Inv}(V_k^{*}\otimes V_l^{*}\otimes V^{\otimes
n})\cong\mbox{Inv}(V_k\otimes V_l\otimes V^{\otimes n}).$$ Denote
by $\mathcal{K}(\mathcal{W})$ the category of bounded complexes of
objects of an abelian category $\mathcal{W}$ up to chain
homotopies. For each $(m,n)$-tangle $T$, it follows from
proposition~\ref{projectivity.proposition} that the complex of
bimodules $\mathcal{F}(T)$ consists of right projective bimodules.
Therefore the tensor product with $\mathcal{F}(T)$ is an exact functor
from $\mathcal{K}(A_n^{k,l}-\mbox{gmod})$ to
$\mathcal{K}(A_m^{k,l}-\mbox{gmod})$ which induces a homomorphism
$[\mathcal{F}(T)]$ of $\mathbb{Z}[q,q^{-1}]$-modules
$$ K_p(A_n^{k,l}-\mbox{gmod}) \longrightarrow K_p(A_m^{k,l}-\mbox{gmod}).$$
Direct computation shows that, under the
isomorphism~(\ref{GrothendieckGroup.equation}), these homomorphisms
give the standard action of tangles on $\mbox{Inv}(V_k\otimes V_l\otimes
V^{\otimes n})$.
\section{The center of $A_n^{k,l}$}
We prove in this section that the center of the ring $A_n^{k,l}$
is isomorphic to the cohomology ring of
$\mathcal{B}_{\sigma_1,\sigma_2}$. Recall that
$\mathcal{B}_{\sigma_1,\sigma_2}$ denotes the Springer variety of
complete flags in $\mathbb{C}^n$ stabilized by a fixed nilpotent
operator with two Jordan blocks of sizes $\sigma_1$ and $\sigma_2$
respectively. Following Khovanov's construction in
\cite{Khovanov:Crossingless}, we introduce the space
$\widetilde{S}$ and use it as a bridge to link the center of
$A_n^{k,l}$ and the cohomology rings of Springer varieties.
Without loss of generality, we assume throughout this section that
$n\geq m$, $n+m\equiv 0$ mod $2$, and $0\leq l-k \leq n$ (note
that $A_n^{k,l}$ is trivial if $l-k>n$). The proofs in this
section rely heavily on \cite{Khovanov:Crossingless}.
\begin{theorem}\label{center.theorem}
The center of $A_n^{k,l}$ is isomorphic to the cohomology ring of
$\mathcal{B}_{\sigma_1,\sigma_2}$
$$Z(A_n^{k,l})\cong H^*(\mathcal{B}_{\sigma_1,\sigma_2}),$$
where $\sigma_1=\frac{n+l-k}{2}$ and $\sigma_2=\frac{n-l+k}{2}$.
\end{theorem}
Denote by $S$ the $2$-sphere $S^2$ and let $p$ be the north pole
of $S$. Let $S^{\times n}$ be the direct product of $n$ spheres
$$S^{\times n}\stackrel{\mbox{\scriptsize{def}}}{=} \underbrace{S\times S\times \cdots \times S}_{n}. $$
Label the $n$ free points of $B_n^{k,l}$ by $1,2,\cdots,n$ from
left to right. For each $a\in B_n^{k,l}$ define a submanifold
$S_a\subset S^{\times n}$ consisting of sequences $(x_1,\cdots, x_n)$,
$x_i\in S$, such that $x_i=x_j$ whenever $(i,j)$ is a type I arc
in $a$, and $x_s=p$ if $s$ is connected to a platform. Let
$\widetilde{S}_n^{k,l}$ be the subspace of $S^{\times n}$ which is
the union of all $S_a$
$$\widetilde{S}_n^{k,l}\stackrel{\mbox{\scriptsize{def}}}{=} \bigcup_{a\in B_n^{k,l}} S_a.$$
When there is no confusion we write $\widetilde{S}$ instead of
$\widetilde{S}_n^{k,l}$.
Note that the cohomology ring of $S_a$ is isomorphic to the ring
$_a(A_n^{k,l})_a$, and the cohomology ring of $S_a \cap S_b$,
viewed as an abelian group, is isomorphic to $_a(A_n^{k,l})_b$. These
observations lead to the following:
\begin{theorem}\label{centerS.theorem}
The center of $A_n^{k,l}$ is isomorphic to the cohomology ring of
$\widetilde{S}$
$$Z(A_n^{k,l})\cong H^*(\widetilde{S},\mathbb{Z}).$$
\end{theorem}
\begin{proof} Denote by $H(Y)$ the cohomology ring of the space
$Y$ with integer coefficients. As noted above, we have
$H(S_a)\cong {_a(A_n^{k,l})_a}$ and $H(S_a \cap S_b)\cong
{_a(A_n^{k,l})_b}$. The second isomorphism allows us to make
$_a(A_n^{k,l})_b$ into a ring with unit $1\stackrel{\mbox{\scriptsize{def}}}{=} 1^{s} \in
\mathcal{F}(W(b)a)\cong \mathcal{A}^{\otimes s}$.
We have natural ring homomorphisms induced by inclusions
$$\psi_{a;a,b}:H(S_a)\rightarrow H(S_a \cap S_b), \hspace{0.2 in} \psi_{b;a,b}:H(S_b)\rightarrow H(S_a \cap S_b),$$
and also
$$\gamma_{a;a,b}:{_a(A_n^{k,l})_a}\rightarrow {_a(A_n^{k,l})_b}, \hspace{0.2 in} \gamma_{b;a,b}:{_b(A_n^{k,l})_b}\rightarrow {_a(A_n^{k,l})_b},$$
which are given by $x\mapsto x {_a 1_b}$ and $x\mapsto {_a 1_b}x$.
Assembling all these together, we get a commutative diagram of ring
homomorphisms
\begin{equation*}
\begin{CD}
H(\widetilde{S}) @>\tau>> \mathrm{Eq}(\psi) @>>> {\mathop{\prod}\limits_{a}}H(S_a) @>{\psi}>>
{\mathop{\prod}\limits_{a\not= b}} H(S_a\cap S_b) \\
&& @VV{\cong}V @VV{\cong}V @VV{\cong}V \\
Z(A_n^{k,l})@>\cong>> \mathrm{Eq}(\gamma) @>>> {\mathop{\prod}\limits_{a}}
{_a(A_n^{k,l})_a} @>{\gamma}>>
{\mathop{\prod}\limits_{a\not= b}} {_a(A_n^{k,l})_b}
\end{CD}
\end{equation*}
where
\begin{equation*}
\psi = \sum_{a\not= b}(\psi_{a;a,b}+ \psi_{b;a,b})\hspace{0.1in}
\mathrm{and} \hspace{0.1in}
\gamma = \sum_{a\not= b}(\gamma_{a;a,b}+ \gamma_{b;a,b}).
\end{equation*}
$\mathrm{Eq}(\alpha)$ is the \emph{equalizer} of the map $\alpha$
(see \cite{Khovanov:Crossingless}). For example,
$\mathrm{Eq}(\psi)$ is the subring of ${\mathop{\prod}\limits_{a}}
H(S_a)$ consisting of tuples $(h_a)_a$ such that, for all $a\neq b$,
the images of $h_a$ and $h_b$ in $H(S_a\cap S_b)$
under $\psi$ are equal.
For any $x\in A_n^{k,l}$, write $x$ as $\sum_{a,b\in B_n^{k,l}}
{_a x_b}$. If $x$ is central then ${_a x_b}=1_a x 1_b = x 1_a 1_b$,
which vanishes whenever $a\neq b$ since the idempotents $1_a$ are
mutually orthogonal. So $x=\sum_{a} {_a
x_a}$ is central if and only if $({_a x_a})({_a
1_b})({_b x_b})$, which means $Z(A_n^{k,l})\cong
\mathrm{Eq}(\gamma)$. The ring homomorphism
$H(\widetilde{S})\rightarrow {\mathop{\prod}\limits_{a}} H(S_a)$
factors through $\mathrm{Eq}(\psi)$. To prove
theorem~\ref{centerS.theorem} it suffices to show that $\tau$ is an
isomorphism.\\
For $a,b\in B_n^{k,l}$ write $a\rightarrow b$ if $b$ is obtained
from $a$ by a ``horizontal merging'' of two arcs (see
figure~\ref{merge_hv.figure}).
\begin{figure}[ht!]
\begin{center}
\psfrag{h}{\small Horizontal merging}\psfrag{v}{\small Vertical
merging}\epsfig{figure=merge_hv.eps} \caption{Horizontal and
vertical mergings of two arcs.} \label{merge_hv.figure}
\end{center}
\end{figure}
Introduce a partial order on $B_n^{k,l}$ by setting $a\prec b$ if
there is a chain of arrows $a\rightarrow a_1\rightarrow \cdots
\rightarrow a_m\rightarrow b$. Extend this partial order
arbitrarily to a total order $<$ on $B_n^{k,l}$. See
figure~\ref{ArrowRelations.figure} for arrow relations and
ordering of $B_5^{0,1}$ ($a_i<a_j$ iff $i<j$).
\begin{figure}[ht!]
\begin{center}
\psfrag{a1}{\tiny $a_1$}\psfrag{a2}{\tiny $a_2$}\psfrag{a3}{\tiny
$a_3$}\psfrag{a4}{\tiny $a_4$}\psfrag{a5}{\tiny $a_5$}
\epsfig{figure=b501.eps} \caption{Arrow relations and ordering of
$B_5^{0,1}$.} \label{ArrowRelations.figure}
\end{center}
\end{figure}
We would like to construct a cell decomposition of $S_a$.
Associate a decorated graph $\Gamma$ to $a\in B_n^{0,m}$ as
follows (see figure~\ref{graphofa.figure} for an example):
\begin{itemize}
\item Each type I arc $x_i$ in $a$ corresponds to a ``hollow''
vertex $i$ in $\widetilde{\Gamma}$.
\item Each type II arc $x_j$ in $a$ corresponds to a ``solid''
vertex $j$ in $\widetilde{\Gamma}$.
\item Two vertices $i,j$ in $\widetilde{\Gamma}$ are connected by
an edge iff the result of merging $x_i$ and $x_j$ ``vertically''
still lies in $B_n^{0,m}$.
\item $\Gamma$ is obtained from $\widetilde{\Gamma}$ by
contracting all edges with two ``solid'' ends.
\item Mark a vertex in each connected component of $\Gamma$
without ``solid'' vertices.
\end{itemize}
\begin{figure}[ht!]
\begin{center}
\psfrag{m}{\tiny $m$} \psfrag{a}{$a$}
\psfrag{G1}{$\widetilde{\Gamma}$} \psfrag{G2}{$\Gamma$}
\epsfig{figure=graphofa.eps} \caption{An element $a\in
\widetilde{B}_{10}^{0,2}$ and its associated graph $\Gamma$.}
\label{graphofa.figure}
\end{center}
\end{figure}
Let $E$ be the set of edges, $M$ be the set of marked points, and
$I$ be the set of vertices in $\Gamma$. For each $J\subset
(E\sqcup M)$ let $c(J)$ be the subset of $S^{\times I}$ consisting
of points $\{ y_i\}_{i\in I}, y_i\in S$ such that
\[
\begin{array}{ll}
y_i = y_j & \mbox{if $(i,j)\in J$}, \\
y_i \not= y_j & \mbox{if $(i,j)\notin J$}, \\
y_i = p & \mbox{if $i\in M\cap J$}, \\
y_i = p & \mbox{if $i$ is ``solid''}, \\
y_i \not= p & \mbox{if $i\in M, i\notin J$}.
\end{array}
\]
Clearly, $S^{\times I}= \sqcup_{J} c(J)$ and $c(J)$ is an open
cell of dimension $2(|I|-|J|)$. We thus obtain a decomposition of
$S_a$ into even dimensional cells.
\begin{lemma}\label{intersection.lemma}
$S_{<a}\cap S_a = (\cup_{b\rightarrow a} S_b)\cap S_a$, where $S_{< a}=\bigcup_{b< a}S_b$.
\end{lemma}
It follows from lemma \ref{intersection.lemma} and the above
construction that
\begin{lemma} \label{partition.lemma}
The cell decomposition constructed above restricts to a cell
decomposition of $S_a\backslash S_{<a}$, which is a union of cells
$c(J)$ such that $J\cap E= \emptyset$.
\end{lemma}
We thus obtain a \emph{cell partition} of $\widetilde{S}$ by
adding cells in $S_a \backslash S_{<a}$ following the total order.
Since there are only even-dimensional cells in the partition, the
rank of $H(\widetilde{S})$ is equal to the number of cells.
By induction on $a$ with respect to the total order $<$ we
get the following (see \cite{Khovanov:Crossingless}):
\begin{prop} \label{prop-exact}
$S_{\leq a}$ has cohomology in even degrees only and the following
sequence is exact
\begin{equation}\label{equation-exact}
0 \rightarrow H(S_{\le a}) \stackrel{\phi}{\rightarrow} \oplusop{b\le a} H(S_b)
\stackrel{\psi^-}{\rightarrow}
\oplusop{b< c\le a} H(S_b\cap S_c),
\end{equation}
where $\phi$ is induced by inclusions $S_b\subset S_{\le a},$
while
\begin{equation*}
\psi^- \stackrel{\mbox{\scriptsize{def}}}{=} \sum_{b<c\le a} (\psi_{b,c}-\psi_{c,b}),
\end{equation*}
where
\begin{equation*}
\psi_{b,c}: H(S_b) \rightarrow H(S_b\cap S_c)
\end{equation*}
is induced by the inclusion $(S_b\cap S_c )\subset S_b.$
\end{prop}
When $a$ is maximal with respect to the total order $<$, the exact
sequence (\ref{equation-exact}) becomes
\begin{equation*}
0 \rightarrow H(\widetilde{S}) \stackrel{\phi}{\rightarrow} \oplusop{b} H(S_b)
\stackrel{\psi^-}{\rightarrow}
\oplusop{b< c} H(S_b\cap S_c),
\end{equation*}
which means that $H(\widetilde{S})$ is isomorphic to the equalizer
of $\psi$.
\end{proof}
\begin{lemma}\label{oneplatform.lemma}
The center of $A_n^{k,l}$ is isomorphic to the center of
$A_n^{0,l-k}$
$$Z(A_n^{k,l})\cong Z(A_n^{0,l-k}).$$
\end{lemma}
\begin{proof} It follows from the definition of $\widetilde{S}$ that
$$\widetilde{S}\cong \bigcup_{a\in \widetilde{B}_n^{k,l}} S_a,$$
where $\widetilde{B}_n^{k,l}\subset B_n^{k,l}$ is the set of
crossingless matchings with all points on the left platform
connected to the right platform.
\begin{figure}[h!]
\begin{center}
\psfrag{bt}{$\widetilde{B}_3^{2,3}$:}
\psfrag{b}{$B_3^{0,1}$:}\epsfig{figure=bt312.eps}
\caption{Removing bottommost type II arcs.} \label{bt312.figure}
\end{center}
\end{figure}
On the other hand, since the bottommost type II arcs contribute
nothing, we have (see figure~\ref{bt312.figure})
$$\bigcup_{a\in\widetilde{B}_n^{k,l}}S_a\cong \bigcup_{a\in B_n^{0,l-k}}S_a.$$
Theorem~\ref{centerS.theorem} and the above observations imply that
$$Z(A_n^{k,l})\cong H(\widetilde{S}_n^{k,l})\cong H(\widetilde{S}_n^{0,l-k})\cong Z(A_n^{0,l-k}).$$
\end{proof}
\begin{prop}\label{dimension.lemma}
$\widetilde{S}_n^{0,m}$ has {\scriptsize
$\sbinom{n}{\frac{n-m}{2}}$} cells in the partition constructed
above.
\end{prop}
\begin{proof} The proposition is equivalent to the statement that
$\widetilde{S}_{2s-k}^{0,k}$ has {\scriptsize
$\sbinom{2s-k}{s-k}$} cells. Fix the total number of points $2s$
(including marked and free). We induct on the size $k$ of the
right platform.
The base case $k=0$ is proved in \cite{Khovanov:Crossingless}, lemma
4.1. Assuming the statement is true up to $k$, it suffices to prove
that extending the size of the platform by $1$ eliminates {\scriptsize
$\sbinom{2s-k}{s-k}$} - {\scriptsize $\sbinom{2s-k-1}{s-k-1}$}
cells in $\widetilde{S}_{2s-k}^{0,k}$. If we label the $2s$ points
by $1,2,3,\cdots, 2s$ from right to left, the eliminated cells are
exactly those in $S_a$ where $a$ has a type II arc connecting $k$
and $k+1$ (see figure~\ref{vanishedcells.figure}). Denote the set
of those $a$ by $a(k,k+1)$.
\begin{figure}[ht!]
\begin{center}
\psfrag{1}{\tiny $1$}\psfrag{k-1}{\tiny $k-1$}\psfrag{k}{\tiny
$k$}\psfrag{k+1}{\tiny $k+1$}\psfrag{ik-1}{\tiny
$i_{k-1}$}\psfrag{i1}{\tiny $i_1$}\psfrag{i1+1}{\tiny
$i_1+1$}\psfrag{2s}{\tiny $2s$}\psfrag{cd}{\tiny
$\cdots$}\psfrag{B}{\tiny
$B_{2s-i_1}^{0,0}$}\psfrag{in}{\begin{rotate}{-90}$\in$\end{rotate}}
\epsfig{figure=vanishedcells.eps} \caption{A generic element in
$a(k,k+1)$.} \label{vanishedcells.figure}
\end{center}
\end{figure}
We thus get the following formula:
\begin{equation*}
\vartheta(\bigcup_{a\in a(k,k+1)} S_{a})=
\sum_{i=0}^{s-k}(i+1)C_i [(\frac{1-\sqrt{1-4x}}{2x})^{k-1}]_{i},
\end{equation*}
where $\vartheta(X)$ denotes the number of cells in $X$, $C_i$
denotes the $i$-th Catalan number, and $[f(x)]_i$ is the
coefficient of $x^i$ in the Maclaurin expansion of $f(x)$. Recall
that $\frac{1-\sqrt{1-4x}}{2x}$ is the generating function of
$\{C_i\}$. The factor $(i+1)C_i$ corresponds to the number of
cells outside the bottommost type II arc and
$[(\frac{1-\sqrt{1-4x}}{2x})^{k-1}]_{i}$ is equal to the total
number of crossingless matchings inside the bottommost type II
arc. Note that the arcs inside the bottommost type II arc
contribute nothing to the cell structure, see
figure~\ref{graphofa.figure}. After simplifying the above formula
we get
\begin{equation*}
\vartheta(\bigcup_{a\in a(k,k+1)}
S_{a})=[\frac{1}{\sqrt{1-4x}}(\frac{1-\sqrt{1-4x}}{2x})^{k-1}]_{s-k}.
\end{equation*}
By induction on $s$ and $k$ it is easy to prove that
\begin{equation*}
[\frac{1}{\sqrt{1-4x}}(\frac{1-\sqrt{1-4x}}{2x})^{k-1}]_{s-k} =
{\scriptsize \sbinom{2s-k}{s-k}} - {\scriptsize
\sbinom{2s-k-1}{s-k-1}}
\end{equation*}
and the proposition follows.
\end{proof}
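Let us note that the last identity can also be obtained without induction. Writing $C(x)=\frac{1-\sqrt{1-4x}}{2x}$ for the Catalan generating function, one has the classical identity
\begin{equation*}
\frac{1}{\sqrt{1-4x}}\,C(x)^{m}=\sum_{i\geq 0}{\scriptsize \sbinom{2i+m}{i}}\,x^i,
\end{equation*}
so with $m=k-1$ the coefficient of $x^{s-k}$ equals {\scriptsize $\sbinom{2s-k-1}{s-k}$}, which agrees with {\scriptsize $\sbinom{2s-k}{s-k}$}$\,-\,${\scriptsize $\sbinom{2s-k-1}{s-k-1}$} by Pascal's rule.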
\begin{prop}\label{prop.ConciniProcesi} \cite{ConciniProcesi}
The cohomology ring of $\mathcal{B}_{\frac{n+m}{2},\frac{n-m}{2}}$
has dimension ${\scriptsize \sbinom{n}{\frac{n-m}{2}}}$ and is
isomorphic to the quotient ring of $R=\mathbb{Z}[X_1,\dots, X_{n}]$ by the ideal
$R_1$ generated by $e_k(I)$ for all $k+|I|=n+1$, $X_I$ for all
$|I|=\frac{n-m}{2}+1$, and $X_i^2$ for $i\in [1,n]$, where
\begin{equation*}
I\subset \{1,2,\cdots,n\},\ \ \ \ X_I=\prod_{i\in I}X_i,\ \ \ \ e_k(I) = \sum_{|J|=k, J\subset I} X_J.
\end{equation*}
\end{prop}
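As a simple illustration of this presentation, take $n=2$ and $m=0$. Then the nilpotent operator is zero, $\mathcal{B}_{1,1}$ is the full flag variety of $\mathbb{C}^2$, i.e.\ $\mathbb{CP}^1$, and the relations reduce to $X_1^2=X_2^2=0$, $X_1X_2=0$, and $e_1(\{1,2\})=X_1+X_2=0$, so that
\begin{equation*}
R/R_1\cong \mathbb{Z}[X_1]/(X_1^2)\cong H^*(\mathbb{CP}^1),
\end{equation*}
which indeed has rank {\scriptsize $\sbinom{2}{1}$}$\,=2$.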
We now prove the main theorem of this section.
\emph{Proof of theorem~\ref{center.theorem}.} It follows from
theorem~\ref{centerS.theorem} and lemma~\ref{oneplatform.lemma}
that to prove theorem~\ref{center.theorem} it suffices to show
that $H(\widetilde{S}_n^{0,m})\cong
H(\mathcal{B}_{\frac{n+m}{2},\frac{n-m}{2}})$. Denote by $X$ a
generator of $H^2(S)$. We have the following maps
\begin{equation*}
\begin{CD}
\widetilde{S}_n^{0,m} @>\iota>> S^{\times n} @>{\psi_i}>>
S,
\end{CD}
\end{equation*}
where $\iota$ is the inclusion and $\psi_i$ is the projection onto the $i$-th component.
Define $X_i\in H^2(\widetilde{S}_n^{0,m})$ to be the signed pullback of $X$ under the map
$\psi_i\circ\iota$
\begin{equation*}
X_i\stackrel{\mbox{\scriptsize{def}}}{=} (-1)^i \iota^* \circ \psi_i^* (X).
\end{equation*}
It is clear that the classes $\{X_i\}$ generate $H(\widetilde{S}_n^{0,m})$. It
follows from proposition~\ref{prop.ConciniProcesi} and
proposition~\ref{dimension.lemma} that to prove the theorem we only need
to verify the following relations:
\begin{eqnarray}
X_i^2 & = & 0, \hspace{0.2in} i\in [1,n]; \\
X_I & = & 0, \hspace{0.2in} |I|=\frac{n-m}{2}+1;\\
e_k(I) & = & 0, \hspace{0.2in} k+|I|=n+1. \label{relation.equation}
\end{eqnarray}
The first two relations are obvious. Consider the map $i^*_a:
H(\widetilde{S}_n^{0,m})\rightarrow H(S_a)$ induced by the
inclusion $i_a: S_a \hookrightarrow \widetilde{S}_n^{0,m}$. Since
$\sum_{a} i_a^*: H(\widetilde{S}_n^{0,m})\rightarrow
\oplus_{a}H(S_a)$
is injective, (\ref{relation.equation}) will follow once we verify that
\begin{equation}\label{relation1.equation}
\sum_{|J|=k, J\subset I} i_a^*(X_J)=0,\hspace{0.2in} k+|I|=n+1
\end{equation}
for all $a\in B_n^{0,m}$.
Fix any $a\in B_n^{0,m}$ and $I\subset \{1,2,\cdots,n\}$. Since
$n-|I|=k-1$, there are at most $k-1$ type I arcs that intersect $I$
in exactly one point. Therefore, for each
$J\subset I$ such that $|J|=k$,
$J$ must either contain an end point of a
type II arc or contain an end point of a type I arc $(p_1,p_2)$
such that $\{p_1,p_2\}\subset I$. If $J$ contains an end point of a
type II arc then $i_a^*(X_J)=0$. For a type I arc $(p_1,p_2)$,
because of the term $(-1)^i$ in the definition of $X_i$ and the
fact that $p_1+p_2$ is odd, we have $i_a^* (X_{p_1} X_{p_2})=0$
and $i_a^*(X_{p_1} + X_{p_2})=0$. Therefore
\begin{equation*}
\sum_{J\subset I, |J|=k, \{p_1,p_2\}\cap J \neq \emptyset} i_a^* (X_J)
=0.
\end{equation*}
For the remaining terms in the summation in
(\ref{relation1.equation}), pick another type I arc and repeat the
above process. After finitely many reductions we obtain
relation (\ref{relation1.equation}).\hfill$\square$
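To illustrate the argument in the smallest case, take $n=2$, $m=0$. Then $B_2^{0,0}$ consists of the single matching $a$ with one type I arc $(1,2)$, and $\widetilde{S}_2^{0,0}=S_a$ is the diagonal $\{(x,x)\}\subset S\times S$, a copy of $S$. Since $\psi_1\circ\iota=\psi_2\circ\iota$ on the diagonal, the signs in the definition of $X_i$ give $X_2=-X_1$, hence $X_1+X_2=0$ and $X_1X_2=-X_1^2=0$, so all three families of relations hold and
$$Z(A_2^{0,0})\cong H^*(S)\cong \mathbb{Z}[X_1]/(X_1^2).$$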
\section{Categorification of level two representations of quantum $sl_N$}
\subsection{Level two representations of quantum $sl_N$}
Let $W$, $\wedge^2 W,\cdots$, $\wedge^{N-1}W$ be the irreducible
representations of $U_q(sl_N)$ with highest weights $\omega_1,\
\omega_2,\ \cdots,\ \omega_{N-1}$ respectively, where
$\omega_i=L_1+\cdots+L_{i}$ and the $L_j$'s are the fundamental
weights. A level two representation $V$ of $U_q(sl_N)$ is an
irreducible representation with the highest weight
$\lambda=\omega_s+\omega_{s+k}$ for some $k\geq 0$. Fix $V$ for
the rest of this paper. $V$ decomposes into weight spaces $V=
\oplusop{\mu} V_{\mu}$. Following \cite{HK:LevelTwo}, we call
$\mu$ admissible if $\mu$ appears in the weight space
decomposition of $V$. A weight $\mu$ is admissible if and only if
it can be written as the sum $\mu_1 L_1+\mu_2 L_2+\cdots +\mu_N
L_N$ such that
\begin{itemize}
\item $0\leq \mu_i \leq 2$ for all $1\leq i\leq N$,
\item $\sum_{i=1}^{N}\mu_i=2s+k$,
\item $\mu_1+\cdots+\mu_i\leq \lambda_1+\cdots+\lambda_i$ for
all $1\leq i\leq N$,
\end{itemize}
where $\lambda_i$ is the coefficient of $L_i$ in the decomposition
$$\lambda=\omega_s+\omega_{s+k}=(L_1+\cdots+L_s)+(L_1+\cdots+L_{s+k}).$$
For each admissible weight $\mu$ let $m(\mu)$ be the number
of $1$'s in the sequence $(\mu_1,\dots,\mu_N)$. The dimension of
$V_{\mu}$ is then determined by $m(\mu)$.
Recall that $U_q(sl_N)$ is defined to be the algebra generated by
$E_i$, $F_i$, $K_i$, and $K_i^{-1}$ for $1\leq i \leq N-1$ with
relations
\begin{equation} \label{q-rel}
\begin{array}{l}
K_i K_i^{-1} = 1 = K_i^{-1} K_i, \\
K_i K_j = K_j K_i, \\
K_i E_j = q^{c_{i,j}} E_j K_i, \\
K_i F_j = q^{-c_{i,j}} F_j K_i, \\
E_i F_j - F_j E_i = \delta_{i,j} \frac{K_i - K_i^{-1}}{q-q^{-1}}, \\
E_i E_j = E_j E_i \ \ \mathrm{if} \ \ |i-j|>1, \\
F_i F_j = F_j F_i \ \ \mathrm{if} \ \ |i-j|>1, \\
E_i^2 E_{i\pm 1} - (q+q^{-1}) E_i E_{i\pm 1}E_i + E_{i\pm 1}E_i^2 = 0, \\
F_i^2 F_{i\pm 1} - (q+q^{-1}) F_i F_{i\pm 1}F_i + F_{i\pm 1}F_i^2 = 0.
\end{array}
\end{equation}
$E_i$ acts on $V$ by sending weight space $V_\mu$ to
$V_{\mu+\epsilon_i}$ and $F_i$ maps $V_\mu$ to
$V_{\mu-\epsilon_i}$, where
$\epsilon_i=(\underbrace{0,\cdots,0}_{i-1},1,-1,0,\cdots,0)$.
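With the standard convention that $K_i$ acts on the weight space $V_\mu$ by the scalar $q^{\mu_i-\mu_{i+1}}$ (which matches the grading shift in the functor $\mathcal{K}_i$ defined below), the relation $K_iE_j=q^{c_{i,j}}E_jK_i$ can be checked directly on a weight vector $v\in V_\mu$:
\begin{equation*}
K_iE_j v = q^{(\mu+\epsilon_j)_i-(\mu+\epsilon_j)_{i+1}}E_j v
= q^{(\epsilon_j)_i-(\epsilon_j)_{i+1}}\,E_jK_i v = q^{c_{i,j}}E_jK_i v,
\end{equation*}
since $(\epsilon_j)_i-(\epsilon_j)_{i+1}$ equals $2$ if $i=j$, $-1$ if $i=j\pm 1$, and $0$ otherwise.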
\subsection{Semi-standard tableaux and arc rings}
We give in this section an explicit bijection between
semi-standard tableaux and crossingless matchings with one
platform. First recall the definition of semi-standard tableaux.
For any $\lambda=(\lambda_1,\cdots,\lambda_{N-1},0)$ in the weight
lattice of $U_q(sl_N)$, there exists an irreducible representation
$V_{\lambda}$ with the highest weight $\lambda$. A weight
$\mu=(\mu_1,\cdots,\mu_N)$ appears in the weight space
decomposition of $V_{\lambda}$ if and only if $\mu$ is admissible. The
dimension of the weight space $V_{\lambda}(\mu)$ equals the
number of ways one can fill the Young diagram corresponding to
$\lambda$ with $\mu_1$ $1$'s, $\mu_2$ $2$'s, $\cdots$, and $\mu_N$
$N$'s in such a way that each column is strictly increasing and
each row is non-decreasing. Each such filling is called a
semi-standard tableau (see figure~\ref{YoungTableau.figure}).
\begin{figure}[ht!]
\begin{center}
\psfrag{lambda}{\small $\lambda=(2,2,1,0)$}\psfrag{mu}{\small
$\mu=(1,1,2,1)$}\psfrag{1}{\small $1$}\psfrag{2}{\small
$2$}\psfrag{3}{\small $3$}\psfrag{4}{\small $4$}
\epsfig{figure=youngtablauxexample.eps}\caption{Semi-standard tableaux.} \label{YoungTableau.figure}
\end{center}
\end{figure}
The Young diagram $Y_{\lambda}$ corresponding to the level two
representation $V$ with the highest weight
$\lambda=\omega_s+\omega_{s+k}$ has two columns of length $s+k$
and $s$ respectively. Fix an admissible weight $\mu$. Let
$M_{\mu}$ be the set of $i$'s such that $\mu_{i}=1$
$$M_{\mu}\stackrel{\mbox{\scriptsize{def}}}{=} \{1\leq i\leq N|\mu_i=1\},$$
and $N_{\mu}$ be the set of $i$'s such that $\mu_{i}=2$
$$N_{\mu}\stackrel{\mbox{\scriptsize{def}}}{=} \{1\leq i\leq N|\mu_i=2\}.$$
Note that $|M_{\mu}|=m(\mu)$. Let $T_{\mu}$ be the set of
semi-standard tableaux of $Y_{\lambda}$ corresponding to $\mu$.
For each semi-standard tableau $T^i_{\mu}\in T_{\mu}$, let
$T^i_{\mu}(r)$ and $T^i_{\mu}(l)$ be the sets of numbers in the
right and left columns of $T^i_{\mu}$, respectively. Write $M_{\mu}$
as an ordered sequence $\{a_1,a_2,\cdots,a_{m(\mu)}\}$. Assume
that $\{a_{i_1},a_{i_2},\cdots,a_{i_t}\}=M_{\mu}\bigcap
T^i_{\mu}(r)$. Consider all integer points $\{1,2,3,\cdots\}$
lying on the x-axis. Put a platform on the x-axis to the left of
all points (see figure~\ref{MatchingExample.figure}). First draw
an arc in the lower half plane connecting $a_{i_1}$ with the first
point in $M_{\mu}$ to its left which is not connected to any
point. That point always exists and lies in $T^i_{\mu}(l)$ since
$T^i_{\mu}$ is semi-standard. Repeat the above step for $a_{i_2},a_{i_3},
\cdots$ in order until each point in $M_{\mu}\bigcap T^i_{\mu}(r)$
is connected to some point. Finally, connect the remaining free
points in $M_{\mu}\bigcap T^i_{\mu}(l)$ to the platform by arcs in
the unique way that no two arcs intersect.
\begin{figure}[ht!]
\begin{center}
\psfrag{1}{\small 1}\psfrag{2}{\small 2}\psfrag{3}{\small
3}\psfrag{4}{\small 4}\psfrag{5}{\small 5}\psfrag{6}{\small
6}\psfrag{7}{\small 7}\psfrag{8}{\small 8}\psfrag{9}{\small
9}\psfrag{cd}{\small $\cdots$}\psfrag{l}{\small $M_{\mu}\bigcap
T^i_{\mu}(l)=\{1,2,3,6,7\}$}\psfrag{r}{\small $M_{\mu}\bigcap
T^i_{\mu}(r)=\{4,5,8\}$}
\epsfig{figure=matchingexample.eps}\caption{An element in $B_8^{2,0}(M_{\mu})$.} \label{MatchingExample.figure}
\end{center}
\end{figure}
The resulting graph is a crossingless matching among the points in $M_{\mu}$
with one platform. Denote by $B_{m(\mu)}^{k,0}(M_{\mu})$ the set of all such elements.
Note that $B_{m(\mu)}^{k,0}(M_{\mu})\cong B_{m(\mu)}^{k,0}$. The map from
$T_{\mu}$ to $B_{m(\mu)}^{k,0}(M_{\mu})$ is denoted by $\varphi_{\mu}$.
Conversely, for any $a\in B_{m(\mu)}^{k,0}(M_{\mu})$ with $t$ type
I arcs $c_1,c_2,\cdots,c_t$, let $R_a$ be the set of right end
points of all $c_i$. A semi-standard tableau of $Y_{\lambda}$ is
constructed by putting $R_a \bigcup N_{\mu}$ into the right column
and $(M_{\mu} \backslash R_a)\bigcup N_{\mu}$ into the left column
(both in increasing order). The map from $B_{m(\mu)}^{k,0}$ to
$T_{\mu}$ is denoted by $\psi_{\mu}$. It is easy to verify that
$\psi_{\mu}$ is indeed the inverse of $\varphi_{\mu}$. Thus we
have a bijection between $T_{\mu}$ and $B_{m(\mu)}^{k,0}$
\begin{equation}\label{bijection.equation}
T_{\mu} \leftrightmaps{50}{\mbox{\scriptsize{$\psi_{\mu}$}}}
{\mbox{\scriptsize{$\varphi_{\mu}$}}} B_{m(\mu)}^{k,0}.
\end{equation}
See figure~\ref{YoungTableauBijection.figure} for an example of
this bijection where $V$ is the level two representation of
$U_q(sl_5)$.
\begin{figure}
\begin{center}
\psfrag{lambda}{\small $\lambda=(2,2,1,0,0)$}\psfrag{yd}{\small
Young diagram}\psfrag{mu1}{\small
$\mu=(1,1,1,1,1)$}\psfrag{mu2}{\small
$\mu=(2,1,1,1,0)$}\psfrag{1}{\small $1$}\psfrag{2}{\small
$2$}\psfrag{3}{\small $3$}\psfrag{4}{\small $4$}\psfrag{5}{\small
$5$}\psfrag{phi}{\small $\ \ \varphi$}\psfrag{psi}{\small $\ \
\psi$}\psfrag{relabel}{\small relabel} \psfrag{p1}{\small
$1$}\psfrag{p2}{\small $2$}\psfrag{p3}{\small
$3$}\psfrag{p4}{\small $4$}\psfrag{p5}{\small
$5$}\psfrag{mmu1}{\small
$M_{\mu}=\{1,2,3,4,5\}$}\psfrag{mmu2}{\small $M_{\mu}=\{2,3,4\}$}
\epsfig{figure=youngtablauxbijection.eps}
\caption{Bijection between semi-standard tableaux and crossingless matchings with one platform.}
\label{YoungTableauBijection.figure}
\end{center}
\end{figure}
\subsection{Category $\mathcal{C}$ and exact functors}
Starting with $B_{m(\mu)}^{k,0}(M_{\mu})$, we repeat the definition
of $A_n^{k,l}$ and get $A_{m(\mu)}^{k,0}(M_{\mu})$, or simply
$A_{\mu}$. Note that $A_{\mu}\cong \mathbb{Z}$ when $m(\mu)=0$.
For an admissible weight $\mu$ define $\mathcal{C}_{\mu}$ to be
the category of finitely generated graded left $A_{\mu}$-modules.
By taking direct sum over all admissible $\mu$ we collect those
$\mathcal{C}_{\mu}$ into a single category $\mathcal{C}$
$$\mathcal{C}\stackrel{\mbox{\scriptsize{def}}}{=} \oplusop{\mu} \mathcal{C}_{\mu}.$$
Note that when $k=0$ our category $\mathcal{C}_{\mu}$ is the same
as $\mathcal{C}(\mu)$ in \cite{HK:LevelTwo}.
The functors $\mathcal{E}_i$, $\mathcal{F}_i$, and $\mathcal{K}_i$
defined by Khovanov and Huerfano naturally extend to our category
$\mathcal{C}$. Recall that $\mathcal{E}_i:\mathcal{C}\rightarrow
\mathcal{C}$ is defined to be the sum over all admissible $\mu$ of
the functors $\mathcal{E}_i^{\mu}: \mathcal{C}_{\mu}\rightarrow
\mathcal{C}_{\mu+\epsilon_i}$. If $\mu+\epsilon_i$ is not
admissible, $\mathcal{E}_i^{\mu}$ is the zero functor. Otherwise,
define $\mathcal{E}_i^{\mu}$ to be tensoring with the
$(A_{\mu+\epsilon_i},A_{\mu})$-bimodule $\mathcal{F}(T_{i}^{\mu})$ where
$T_{i}^{\mu}$ is the simplest flat tangle with bottom end points
corresponding to $\mu$ and top end points corresponding to
$\mu+\epsilon_i$. Figure~\ref{FunctorExample.figure} shows an
example of the functor $\mathcal{E}_i^{\mu}$.
\begin{figure}[ht!]
\begin{center}
\psfrag{1}{\small $1$}\psfrag{2}{\small $2$}\psfrag{0}{\small $0$}
\psfrag{mu1}{\small $\mu_1=(1,1,0,1,2,1)$}\psfrag{mu2}{\small
$\mu_1+\epsilon_3=(1,1,1,0,2,1)$}\psfrag{mu3}{\small
$\mu_2=(1,2,1,1,1,1)$}\psfrag{mu4}{\small
$\mu_2+\epsilon_3=(1,2,2,0,1,1)$}\psfrag{e3}{\small
$\varepsilon_3=(0,0,1,-1,0,0)$}\psfrag{E3}{\small
$\mathcal{E}_3^{\mu_1}=\mathcal{F}$}\psfrag{E4}{\small
$\mathcal{E}_3^{\mu_2}=\mathcal{F}$}\psfrag{p}{\LARGE
$)$}\psfrag{l}{\LARGE $($}
\epsfig{figure=functorexample.eps}\caption{Examples of the functor $\mathcal{E}_i^{\mu}$.} \label{FunctorExample.figure}
\end{center}
\end{figure}
The definition of $\mathcal{F}_i$ is similar. See
\cite{HK:LevelTwo} for details. Define $\mathcal{K}_i$ to be the
functor which shifts the grading of $M\in \mathcal{C}_{\mu}$ up by
$\mu_i-\mu_{i+1}$
$$\mathcal{K}_i(M)\stackrel{\mbox{\scriptsize{def}}}{=} M\{\mu_i-\mu_{i+1}\}.$$
\begin{prop} \label{prop1} There are functor isomorphisms
\begin{equation}
\label{functor-isom}
\begin{array}{l}
\mathcal{K}_{i} \mathcal{K}^{-1}_{i} \cong \mathrm{Id}
\cong \mathcal{K}^{-1}_{i} \mathcal{K}_{i}, \\
\mathcal{K}_{i} \mathcal{K}_{j} \cong \mathcal{K}_{j} \mathcal{K}_{i}, \\
\mathcal{K}_{i} \mc{E}_{j} \cong
\mc{E}_{j}\mathcal{K}_{i}\{c_{i,j}\}, \\
\mathcal{K}_{i} \mathcal{F}_{j} \cong
\mathcal{F}_{j} \mathcal{K}_{i}\{-c_{i,j}\}, \\
\mc{E}_{i} \mathcal{F}_{j} \cong \mathcal{F}_{j}\mc{E}_{i}
\hspace{0.1in} \mbox{ if } \hspace{0.1in} i \not= j, \\
\mc{E}_{i} \mc{E}_{j} \cong \mc{E}_{j}\mc{E}_{i}
\hspace{0.1in}\mbox{ if }\hspace{0.1in} |i-j|> 1, \\
\mathcal{F}_{i} \mathcal{F}_{j} \cong \mathcal{F}_{j}\mathcal{F}_{i}
\hspace{0.1in}\mbox{ if }\hspace{0.1in} |i-j|> 1, \\
\mc{E}_{i}^2 \mc{E}_{j} \oplus
\mc{E}_{j} \mc{E}_{i}^2 \cong
\mc{E}_{i} \mc{E}_{j} \mc{E}_{i}\{ 1\} \oplus\mc{E}_{i} \mc{E}_{j} \mc{E}_{i}\{ -1\}
\hspace{0.1in} \mbox{ if } \hspace{0.1in}
j = i \pm 1, \\
\mathcal{F}_{i}^2 \mathcal{F}_{j} \oplus \mathcal{F}_{j} \mathcal{F}_{i}^2 \cong \mathcal{F}_{i}
\mathcal{F}_{j} \mathcal{F}_{i} \{ 1\} \oplus\mathcal{F}_{i} \mathcal{F}_{j} \mathcal{F}_{i} \{ -1\}
\hspace{0.1in} \mbox{ if } \hspace{0.1in}
j = i \pm 1,
\end{array}
\end{equation}
where
$c_{i,j} = {\left\{ \begin{array}{ll} 2 & \mathrm{ if }\hspace{0.1in}
j = i, \\
-1 & \mathrm{ if } \hspace{0.1in} j = i\pm 1, \\
0 & \mathrm{ if} \hspace{0.1in} |j-i|>1. \end{array} \right. }$
\end{prop}
\begin{prop}
\label{prop2} For any admissible $\mu$ there are isomorphisms of
functors in the category $\mc{C}_\mu$
\begin{equation}
\label{more-fn}
\begin{array}{l}
\mc{E}_i \mathcal{F}_i \cong \mathcal{F}_i \mc{E}_i \oplus \mathrm{Id}\{1\}\oplus \mathrm{Id} \{ -1\}
\hspace{0.1in} \mbox{ if } \hspace{0.1in} (\mu_i,\mu_{i+1})=(2,0), \\
\mc{E}_i \mathcal{F}_i \cong \mathcal{F}_i \mc{E}_i \oplus \mathrm{Id}
\hspace{0.1in} \mbox{ if } \hspace{0.1in} \mu_i - \mu_{i+1} =1, \\
\mc{E}_i \mathcal{F}_i \cong \mathcal{F}_i \mc{E}_i
\hspace{0.1in} \mbox{ if } \hspace{0.1in} \mu_i = \mu_{i+1}, \\
\mc{E}_i \mathcal{F}_i \oplus \mathrm{Id} \cong \mathcal{F}_i \mc{E}_i
\hspace{0.1in} \mbox{ if } \hspace{0.1in} \mu_i - \mu_{i+1}=-1, \\
\mc{E}_i \mathcal{F}_i \oplus \mathrm{Id}\{1\}\oplus \mathrm{Id} \{ -1\} \cong \mathcal{F}_i \mc{E}_i
\hspace{0.1in} \mbox{ if } \hspace{0.1in} (\mu_i,\mu_{i+1})=(0,2).
\end{array}
\end{equation}
\end{prop}
\begin{prop}
The functor $\mc{E}_i$ is left adjoint to $\mathcal{F}_i \mathcal{K}_i^{-1}\{1\}$,
the functor $\mathcal{F}_i$ is left adjoint to $\mc{E}_i \mathcal{K}_i\{1\}$, and
$\mathcal{K}_i$ is left adjoint to $\mathcal{K}_i^{-1}$.
\end{prop}
The above three propositions are from \cite{HK:LevelTwo}. They
work in our case without any modifications since the actions
happen away from the platform.
The Grothendieck group of $\mc{C}$ is a $\mathbb{Z}[q,q^{-1}]$-module
where grading shifts correspond to multiplication by $q$. The
functors $\mc{E}_i$, $\mathcal{F}_i$, and $\mathcal{K}_i$ are exact and commute with
grading shift action $\{1\}$. Exactness follows from the left and
right projectivity of the bimodule $\mathcal{F}(T)$ for a flat tangle $T$,
established in section 2. On the Grothendieck group level $\mc{E}_i$, $\mathcal{F}_i$, and
$\mathcal{K}_i$ descend to $\mathbb{Z}[q,q^{-1}]$-linear endomorphisms
$[\mc{E}_i]$, $[\mathcal{F}_i]$, and $[\mathcal{K}_i]$ respectively. Functor
isomorphisms in proposition \ref{prop1} and proposition
\ref{prop2} correspond to the quantum group relations (\ref{q-rel})
in $K(\mc{C})$. So we can view $K(\mc{C})$ as a $U_q(sl_N)$-module. It
follows from the bijection (\ref{bijection.equation}) that
$K(\mc{C})$ is isomorphic to $V$ as a $U_q(sl_N)$-module:
\begin{prop}
The Grothendieck group of $\mc{C}$ is isomorphic to the irreducible
representation of $U_q(sl_N)$ with the highest weight
$\omega_s+\omega_{s+k}$
$$K(\mc{C})\otimes_{\mathbb{Z}[q,q^{-1}]}\mathbb{C}\cong V.$$
\end{prop}
\section{Introduction}
The last two decades have seen a substantial amount of work aimed at realizing powers of some operator $L$ in terms of a suitable extension. When the operator $L$ is second-order and in divergence form, for instance, the extension turns out to be a differential operator, and classical PDE tools allow one to obtain (or recover) several results on the operator $L^s$ (for $0<s<1$), such as regularity estimates and fine properties of solutions of an associated PDE. Functions of $L$ are of course multipliers in the sense of harmonic analysis, and connections can be made with well-known results such as the H\"ormander-Mikhlin theorems. On the other hand, powers of $L$ form a subclass of generators of L\'evy processes, and some results which can be proved via probabilistic techniques can be recovered through PDE ones.
We now describe more precisely what we mean by extension. A classical result about the square root of the Laplace operator is the following: if $u(x,y)$ is a harmonic function in the upper half-space $\mathbb{R}^n\times \mathbb{R}^+$ with boundary value $f(x) = u(x,0)$, then under certain conditions on $u$, we have
$$ -\sqrt{-\Delta}f(x) = \frac{\partial}{\partial y} u(x,0).$$
For $0<s<1$, the fractional Laplacian of a function $f:\mathbb{R}^n\rightarrow \mathbb{R}$ is defined via Fourier transform on the space of tempered distributions as
$$
\widehat{(-\Delta)^s}f(\xi) = |\xi|^{2s}\widehat{f(\xi)}.
$$
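As an informal sanity check of this multiplier definition, one can discretize on a periodic grid (a stand-in for $\mathbb{R}^n$, with integer frequencies) and verify that $|\xi|^{2s}$ acts diagonally in Fourier space; the grid size, test frequency and exponent below are arbitrary illustrative choices:

```python
import numpy as np

n, s, k = 256, 0.5, 3                      # grid points, power, test frequency
x = 2 * np.pi * np.arange(n) / n
f = np.sin(k * x)                          # eigenfunction of -d^2/dx^2 with eigenvalue k^2

xi = np.fft.fftfreq(n, d=1.0 / n)          # integer frequencies on the torus
frac_f = np.fft.ifft(np.abs(xi)**(2 * s) * np.fft.fft(f)).real

# (-Delta)^s sin(kx) = k^(2s) sin(kx): the multiplier acts frequency by frequency.
assert np.allclose(frac_f, k**(2 * s) * f, atol=1e-10)
```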
As L. Caffarelli and L. Silvestre showed in \cite{Caffarelli_2007}, for a function $f:\mathbb{R}^n\rightarrow\mathbb{R}$, a solution $u$ of the extension problem
\begin{align*}
\begin{cases}
\Delta_x u + \frac{1-2s}{y}u_y + u_{yy} = 0\\
u(x,0) = f(x)
\end{cases}
\end{align*}
is given by a Poisson formula, and one has the Dirichlet-to-Neumann relation
$$
-(-\Delta)^s f(x) = C \lim_{ y \rightarrow 0^+} y^{1-2s}u_y(x,y)
$$
where the constant $C$ depends on the dimension $n$ and on $s$. This allows one to realize some powers of the Euclidean Laplacian in terms of traces of a differential operator in the upper half-space. Written in divergence form, the previous equation involves the weight $y^{1-2s}$, which belongs to the Muckenhoupt class $A_2$. As a consequence, one can apply the theory developed in \cite{2019arXiv191105619B, FKS,FJK,FJK2} to derive a Harnack inequality, a boundary Harnack principle and other results for equations of the type $(-\Delta)^s f(x)=0$.
In \cite{stingatorrea}, P. R. Stinga and J. L. Torrea developed a framework for a general non-negative self-adjoint operator $L$ on $L^2(\Omega, d\eta)$, where $\Omega$ is an open set in $\mathbb{R}^n$ and $d\eta$ a positive measure. Remarkably, they rely on the spectral theorem and semigroup theory instead of the Fourier transform, which requires much less structure on the ambient space than in the Euclidean setting. As a direct application of their framework, one obtains the previously described realizations of $L^s$ on Riemannian and sub-Riemannian manifolds under classical geometric assumptions, such as polynomial volume growth for instance.
M. Kwasnicki and J. Mucha
\cite{Kwasnicki2018}
discussed the extension problem for complete Bernstein functions of the Laplacian, beyond fractional powers. Their argument is again based on the Fourier transform, together with Krein's spectral theory of strings. Krein's theory provides a one-to-one correspondence between non-negative locally integrable measures (Krein's strings) and complete Bernstein functions.
One of the goals of the present paper is to consider the extension problem on rather general Dirichlet spaces. This allows us to give a unified treatment of the previously known results, but also to deal with new operators $L$ which were not considered before. Invoking results from the literature for the extended PDE, one then gets new results about the regularity of solutions of some PDEs, in the same spirit as the ones described above. It is important to notice here that Dirichlet spaces are the natural metric spaces on which such an extension theory holds, up to additional assumptions on the space, of course, if one wants to get further results on the solutions of specific equations. We refer the reader to \cite{DirichletFormsandSymmetricMarkovProcesses} for an extensive study of the theory of Dirichlet spaces. In this general framework the Fourier transform is not always available, and as a consequence we adopt the semigroup-based strategy of \cite{stingatorrea}.
Due to the versatile nature of the semigroup approach of Stinga and Torrea, one needs very few assumptions on the Dirichlet space to realize the powers of the generator $L$ as the Dirichlet-to-Neumann operator of a suitable extension. On the contrary, obtaining fine properties of the solutions (existence included, actually) of the equation on the extension requires much more from the underlying Dirichlet space.
The framework we adopt here covers the following geometries:
\begin{itemize}
\item Complete Riemannian manifolds with non-negative Ricci curvature or, more generally, RCD$(0,\infty)$ spaces in the sense of Ambrosio-Gigli-Savar\'e \cite{ambrosio2014}.
\item Carnot groups and other complete sub-Riemannian manifolds satisfying a generalized curvature dimension inequality (see \cite{BAUDOIN20122646,BK}).
\item Doubling metric measure spaces that support a $2$-Poincar\'e inequality with respect to the upper gradient
structure of Heinonen and Koskela (see~\cite{heinonen_koskela_shanmugalingam_tyson_2015,KOSKELA20142437,Koskela2012}).
\item Metric graphs with bounded geometry (see \cite{Haeseler}).
\item Abstract Wiener spaces, which are Dirichlet spaces (see \cite{bogachev}).
\end{itemize}
The previous items concern mainly the extension property. In some cases it was already known to hold, as in the Euclidean case with a positive measure \cite{Caffarelli_2007,stingatorrea}, Heisenberg groups \cite{FF}, Riemannian manifolds with curvature assumptions \cite{stingatorrea}, abstract Wiener spaces and Gauss spaces \cite{NPS1,NPS2}, as well as some variations of these settings, such as bounded domains.
\begin{remark}
When this paper was finished, we became aware that S. Eriksson-Bique, G. Giovannardi, R. Korte, N. Shanmugalingam and G. Speight obtained similar results in the context of metric measure spaces endowed with an upper gradient structure. In this setting, which is more favorable than ours, they could obtain further regularity properties of the PDEs under consideration.
\end{remark}
\section{Preliminaries on Dirichlet spaces}\label{Dirichlet spaces}
Here we provide a general introduction to Dirichlet spaces. We refer to the book \cite{DirichletFormsandSymmetricMarkovProcesses}
for more details.
Let $(X,d)$ be a locally compact metric space equipped with a Radon measure $\mu$ supported on $X$. Let $(\mathcal{E},\mathcal{F} = \mathcal{D}(\mathcal{E}))$ be a densely defined, symmetric bilinear form on ${L^2(X;\ \mu)}$. Note that
$$(u,v)_\mathcal{F} = (u,v)_{{L^2(X;\ \mu)}} + \mathcal{E}(u,v)$$
is an inner product on $\mathcal{F}$.
This inner product induces the following norm on $\mathcal{F}$:
$$\N{u}_{\mathcal{F}} = \brak{\mathcal{E}(u,u) + \N{u}_{{L^2(X;\ \mu)}}^2}^{1/2}.$$
We say $\mathcal{E}$ is $closed$ if $\mathcal{F}$ is complete with respect to the norm $\N{\cdot}_\mathcal{F}$.
When $\mathcal{E}$ is closed, we say it is $Markovian$ if
$$
u \in \mathcal{F}, \,v \text{ is a normal contraction of } u \Rightarrow v \in \mathcal{F}, \,\mathcal{E}(v,v) \leq \mathcal{E}(u,u).$$
Here a function $v$ is called a $normal\ contraction$ of a function $u$, if
$$
\abs{v(x)-v(y)} \leq \abs{u(x)-u(y)},\ \forall x,y\in X, \ \abs{v(x)}\leq \abs{u(x)},\ \forall x \in X.
$$
\begin{defi}
We say $(\mathcal{E},\mathcal{F} = \mathcal{D}(\mathcal{E}))$ is a Dirichlet form on $L^2(X,\mu)$, if $\mathcal{E}$ is a densely defined, closed, symmetric and Markovian bilinear form on $L^2(X,\mu)$.
\end{defi}
The following correspondence result (see \cite[Theorem 1.3.1]{DirichletFormsandSymmetricMarkovProcesses}) allows us to define the generator of a Dirichlet form.
\begin{defi}
There is a one-to-one correspondence between the family of closed symmetric forms $\mathcal{E}$ on a Hilbert space $H$ and the family of non-positive definite self-adjoint operators $L$ on $H$. The correspondence is determined by
\begin{align*}
\begin{cases}
D\brak{\mathcal{E}} = D\brak{\sqrt{-L}}\\
\mathcal{E}(u,v) = (\sqrt{-L}u,\sqrt{-L}v)
\end{cases}
\end{align*}
The operator $L$ is called the generator of the form $\mathcal{E}$.
\end{defi}
In the classical Euclidean case \cite{Caffarelli_2007}, $L$ is the Laplacian operator, and $\mathcal{E}(u,v) = \int_{\mathbb{R}^n} \nabla u \cdot \nabla v \,dx$.
In the following sections we focus on the generators of Dirichlet forms. Let us consider a non-positive definite self-adjoint operator $L$ on a Hilbert space $H$; in our case, $H$ will be ${L^2(X;\ \mu)}$. By the spectral theorem, there exists a unique spectral family $dE(\lambda)$ such that
$$ -L = \int_0^\infty \lambda dE(\lambda).$$
This formula is understood in the sense that, for any functions $f,g\in D\brak{L}$, we have
$$\ang{-Lf, g} = \int_0^\infty \lambda dE_{f,g}(\lambda).$$
In particular, for any non-negative continuous function $\phi$ on $[0,\infty)$, we can define
\begin{align}
\begin{cases}
\phi(-L) = \int_0^\infty \phi(\lambda)dE(\lambda),\\
D\brak{\phi(-L)} = \crl{u\in H : \int _0 ^\infty \phi(\lambda)^2 dE_{u,u}(\lambda)<\infty}.
\end{cases}
\end{align}
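In finite dimensions this spectral calculus is simply evaluation of $\phi$ on the eigenvalues. As an informal numerical sketch of this point (in Python; the matrix, a small graph Laplacian, and the exponent are illustrative choices not taken from the text):

```python
import numpy as np

# A non-positive definite generator L: minus the graph Laplacian of a path on 4 nodes.
A = np.diag([1.0, 2.0, 2.0, 1.0]) - (np.eye(4, k=1) + np.eye(4, k=-1))
L = -A  # L is self-adjoint and non-positive definite

# Spectral theorem in finite dimensions: -L = Q diag(lam) Q^T with lam >= 0,
# so phi(-L) = Q diag(phi(lam)) Q^T.
lam, Q = np.linalg.eigh(-L)
lam = np.clip(lam, 0.0, None)  # guard tiny negative round-off in the zero eigenvalue

def phi_of_minus_L(phi):
    return Q @ np.diag(phi(lam)) @ Q.T

s = 0.5
frac = phi_of_minus_L(lambda x: x**s)
# Sanity check: ((-L)^(1/2))^2 recovers -L.
assert np.allclose(frac @ frac, -L)
```

This is only a finite-dimensional analogue: in the paper the integral $\int_0^\infty \phi(\lambda)\,dE(\lambda)$ replaces the finite sum over eigenvalues.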
\section{Extension theorem on Dirichlet spaces}\label{frac power}
Let $L$ be a non-positive symmetric operator defined on $D(L)$ generating the Dirichlet form $\mathcal E$.
The heat semigroup associated to $L$ will be denoted by $P_t$. We assume that \underline{$L$ has no spectral gap}.
\subsection{Fractional powers}
Here we consider $(-L)^s$, the fractional power of $L$, where $ 0 < s < 1$. It can be defined by the spectral theorem,
$$ (-L)^s = \int_0^\infty \lambda^s dE(\lambda).$$
Similarly to the result of \cite{stingatorrea}, we can realize this operator through an extension to the higher-dimensional space $X\times \mathbb{R}$, as shown in the following lemmas. We first compute powers of the generator $L$.
The spectral theorem yields that for $f \in \mathcal{D}((-L)^s)$, with $s \in (0,1)$
\[
(-L)^s f =\frac{1}{\Gamma(-s)} \int_0^{+\infty} (P_t f -f) \frac{dt}{t^{1+s}}.
\]
If $f \in C(X) \cap \mathcal{D}(-L)$ and $Lf \in L^\infty(X,\mu)$, this expression can be interpreted pointwise everywhere. Indeed, one has
\[
P_t f(x) -f(x)=\int_0^t L P_s f(x) ds=\int_0^t P_s L f(x) ds
\]
so that
\[
\left| P_t f(x) -f(x) \right| \le t \| Lf \|_\infty
\]
Thus,
\begin{align*}
\int_0^{+\infty} | P_t f(x) -f(x)| \frac{dt}{t^{1+s}}&= \int_0^{1} | P_t f(x) -f(x)| \frac{dt}{t^{1+s}}+ \int_{1}^{+\infty} | P_t f(x) -f(x)| \frac{dt}{t^{1+s}} \\
& \le C_1 \| Lf \|_\infty + C_2 \| f \|_2
\end{align*}
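On the spectral side, the formula above reduces to the classical subordination identity $\lambda^s = \frac{1}{\Gamma(-s)}\int_0^{+\infty}(e^{-t\lambda}-1)\frac{dt}{t^{1+s}}$ (one spectral component, with $P_t$ replaced by $e^{-t\lambda}$). A quick numerical check in Python (the values of $\lambda$ and $s$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

lam, s = 2.0, 0.4

# Integrand of the subordination formula; near t = 0 it behaves like -lam * t^(-s),
# which is integrable for 0 < s < 1.
integrand = lambda t: (np.exp(-t * lam) - 1.0) * t**(-1.0 - s)
integral, _ = quad(integrand, 0.0, np.inf)

# Gamma(-s) is negative for 0 < s < 1, so the prefactor flips the sign of the negative integral.
assert np.isclose(integral / gamma(-s), lam**s, rtol=1e-4)
```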
Note that we also have
\[
P_t f(x) -f(x)=\int_X p_t(x,y) (f(y)-f(x)) d\mu(y)
\]
So,
\[
(-L)^s f (x) =\int_X K(x,y) (f(y)-f(x)) d\mu(y)
\]
with
\[
K(x,y)=\int_0^{+\infty} p_t(x,y) \frac{dt}{t^{1+s}}
\]
On Dirichlet spaces endowed with a doubling measure and a $2$-Poincar\'e inequality, the following Gaussian bounds for the heat kernel hold
\[
p_t (x,y) \simeq C\, \frac{e^{-c d(x,y)^2/t}}{\mu(B(x,\sqrt{t}))}
\]
Therefore, if we assume maximal volume growth, i.e.
\[
\mu(B(x,r)) \ge C r^n
\]
we get
\[
K(x,y) \simeq \frac{1}{d(x,y)^{n+2s}}.
\]
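The exponent $n+2s$ comes from the substitution $u = c\,d(x,y)^2/t$ in the time integral, which gives $\int_0^{+\infty} t^{-n/2-1-s}e^{-cd^2/t}\,dt = \Gamma(n/2+s)(cd^2)^{-n/2-s}$. A numerical sanity check of this identity (the dimension, constants and $s$ below are placeholder values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

n, s, c, d = 3, 0.5, 0.25, 1.7    # dimension, power, Gaussian constant, distance

# K(d) ~ int_0^inf t^(-n/2) e^{-c d^2 / t} dt / t^(1+s) under maximal volume growth mu(B(x,r)) >= C r^n.
integral, _ = quad(lambda t: t**(-n / 2 - 1 - s) * np.exp(-c * d**2 / t), 0.0, np.inf)

closed_form = gamma(n / 2 + s) * (c * d**2)**(-n / 2 - s)
assert np.isclose(integral, closed_form, rtol=1e-6)
```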
Along with $L$ and, for any $-1<a<1$ we consider the Bessel operator
\begin{equation}\label{Ba}
\mathcal B_a = \frac{\partial^2 }{\partial y^2} + \frac{a}{y} \frac{\partial }{\partial y},
\end{equation}
on the whole line $\mathbb{R}$ endowed with the measure $d\nu_a(y) = |y|^a dy$.
Note that the correspondence between the weight $w(y) = |y|^a$ and $\psi(\lambda) = \lambda ^ s$ is explained via Krein's theory in Section \ref{psiL}.
Now let us consider the space $X_a = X \times \mathbb{R}$ with the measure $d\mu \otimes d\nu_a$. We will also denote by $X_a^+ = X \times (0,\infty)$, and by $X_a^- = X \times (-\infty,0)$.
\begin{lem}\label{PoiF}
Let $f \in \mathcal{D}((-L)^s)$. Then the $L^2$-weak solution of the extension equation
\begin{equation}\label{extL}
\begin{cases}
L_a U = (L + \mathcal B_a) U = 0\ \ \ \ \ \ \ \text{in}\ X_a^+,
\\
U(\cdot ,0) = f ,
\end{cases}
\end{equation}
where $a = 1- 2s$, is given by the following function
\[
U (\cdot ,y)=\frac{1}{\Gamma(s)} \int_0^{+\infty} (P_t (-L)^s f) e^{-\frac{y^2}{4t}} \frac{dt}{t^{1-s}}
\]
Moreover, we have the Poisson formula
\[
U (\cdot ,y)= \frac{y^{2s}}{2^{2s}\Gamma(s) }\int_0^{+\infty} (P_t f) e^{-\frac{y^2}{4t}} \frac{dt}{t^{1+s}}
\]
Here the function $U$ is called the $s$-harmonic extension of $f$.
\end{lem}
\begin{proof}
1.\ We first show that for all $y>0$, $U(\cdot, y) \in L^2(X;\mu)$, and for all $g \in L^2(X;\mu)$,
$$ \left< U(\cdot \ , y), \ g(\cdot)\right>_{L^2(X;\ \mu)} = \frac{1}{\Gamma(s)}\int_0^\infty \left<P_t(-L)^sf, g\right>_{L^2(X;\mu)} e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}.$$
For each $R>0$, we define
$$ U_R(\cdot\ ,y) = \frac{1}{\Gamma(s)}\int_0^R P_t(-L)^sf \, e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}.$$
Since $f \in \mathcal{D}((-L)^s)$, we have $P_t(-L)^sf \in L^2(X;\ \mu)$, hence by Bochner's theorem the integral defining $U_R$ is well-defined in $L^2(X;\ \mu)$. Hence
\begin{align*}
\left< U_R(\cdot \ , y), \ g(\cdot)\right>_{L^2(X;\ \mu)} & = \frac{1}{\Gamma(s)}\int_0^R \left<P_t(-L)^sf, g\right>_{L^2(X;\mu)} e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}\\
& = \frac{1}{\Gamma(s)}\int_0^R \int_0^\infty
e^{-t\lambda} \lambda^s dE_{f,g}(\lambda)
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}\\
& = \frac{1}{\Gamma(s)}\int_0^\infty \int_0^R e^{-t\lambda} (t\lambda)^s
e^{-\frac{y^2}{4t}}\frac{dt}{t} dE_{f,g}(\lambda)\\
& = \frac{1}{\Gamma(s)}\int_0^\infty \int_0^{R\lambda} e^{-r} r^s
e^{-\frac{y^2 \lambda }{4r}}\frac{dr}{r} dE_{f,g}(\lambda).\\
\end{align*}
The interchange of the order of integration is justified by absolute integrability, and the last equality follows from the change of variable $r = t\lambda$. Hence we have
\begin{align*}
\left|\left<U_R(\cdot, y), \ g(\cdot)\right>_{L^2(X;\ \mu)} \right| &\leq
\frac{1}{\Gamma(s)}\int_0^\infty \int_0^\infty e^{-r} r^s
\frac{dr}{r} d\left|E_{f,g}(\lambda)\right|\\
& = \frac{1}{\Gamma(s)} \int_0^\infty e^{-r} r^s
\frac{dr}{r} \int_0^\infty d\left|E_{f,g}(\lambda)\right|\\
& \leq \N{f}_{L^2(X; \mu)} \N{g}_{L^2(X; \mu)}.
\end{align*}
Therefore, for each fixed $y >0$, $U_R(\cdot, y)$ is in $L^2(X; \mu)$ and
$$ \N{U_R(\cdot, y)}_{L^2(X; \mu)} \leq \N{f}_{L^2(X; \mu)}.$$
By a similar computation, for $R_2 > R_1 > 0$,
\begin{align*}
\abs{\ang{U_{R_1}(\cdot, y), g} - \ang{U_{R_2}(\cdot, y), g}}
& \leq
\frac{1}{\Gamma(s)} \int_0^\infty e^{-r} r^s
\frac{dr}{r} \int_{R_1}^{R_2} d\left|E_{f,g}(\lambda)\right| \to \ 0
\end{align*}
as $R_1, R_2 \to \infty$. Hence $\crl{U_{R^j}(\cdot, \ y)}_{j \in \mathbb{N}}$ is a Cauchy sequence in $L^2(X; \ \mu)$ for any $R^j \to \infty$, and it converges weakly in $L^2(X;\ \mu)$ to the function $U(\cdot, \ y)$ defined earlier. Moreover, by the dominated convergence theorem,
\begin{align*}
\left< U(\cdot \ , y), \ g(\cdot)\right>_{L^2(X;\ \mu)} &=
\lim_{R^j \to \infty} \left< U_{R^j}(\cdot \ , y), \ g(\cdot)\right>_{L^2(X;\ \mu)}\\
&=
\lim_{R^j \to \infty}
\frac{1}{\Gamma(s)}\int_0^\infty \int_0^{R^j} e^{-t\lambda} (t\lambda)^s
e^{-\frac{y^2}{4t}}\frac{dt}{t} dE_{f,g}(\lambda)\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty \int_0^\infty e^{-t\lambda} (t\lambda)^s
e^{-\frac{y^2}{4t}}\frac{dt}{t} dE_{f,g}(\lambda)\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty \int_0^\infty e^{-t\lambda} \lambda^s dE_{f,g}(\lambda)
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\left<P_t(-L)^sf, g\right>_{L^2(X;\mu)}
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}.
\end{align*}
Hence we get the desired formula.
2.\ Next we show that $U(\cdot, y)\in \text{Dom}(L)$, that is,
$$
\lim_{r\to 0^+} \ang{
\frac{e^{rL} U(\cdot, \ y) - U(\cdot, \ y)}{r}, \ g
}_{L^2(X;\ \mu)}
$$
exists for all $g\in {L^2(X;\ \mu)}.$
Since $P_r = e^{rL}$ is self-adjoint,
\begin{align*}
\ang{P_r U(\cdot, \ y),\ g}_{{L^2(X;\ \mu)}} &= \ang{ U(\cdot, \ y),\ P_r g}_{{L^2(X;\ \mu)}}\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\left<e^{tL}(-L)^sf, e^{rL}g\right>_{L^2(X;\mu)}
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\left<e^{(t+r)L}(-L)^sf, g\right>_{L^2(X;\mu)}
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}.\\
\end{align*}
That implies
\begin{align*}
\ang{\frac{e^{rL} U(\cdot, \ y) - U(\cdot, \ y)}{r}, \ g
}_{L^2(X;\ \mu)} &=
\frac{1}{\Gamma(s)}\int_0^\infty
\ang{
\frac{e^{(r+t)L}(-L)^s f-e^{tL}(-L)^s f}{r}, g
}_{L^2(X;\mu)}
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\int_0^\infty
\frac{e^{-(r+t)\lambda}\lambda^s-e^{-t\lambda}\lambda^s}{r}
dE_{f,g}(\lambda)
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\int_0^\infty
\frac{e^{-(r+t)\lambda}\lambda^s-e^{-t\lambda}\lambda^s}{r}
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}
dE_{f,g}(\lambda)\\
&\underrightarrow{r \to 0^+}\
\frac{1}{\Gamma(s)}\int_0^\infty
\int_0^\infty
(-\lambda e^{-t\lambda})\lambda^s
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}
dE_{f,g}(\lambda)\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\ang{
Le^{tL}(-L)^s f,\ g
}_{L^2(X;\ \mu)}
e^{-\frac{y^2}{4t}}\frac{dt}{t^{1-s}}.\\
\end{align*}
3. The boundary condition holds. Using the result of step 1 and a change of variables, we obtain that for all
$g \in L^2(X;\ \mu)$,
\begin{align*}
\ang{U(\cdot, y), g(\cdot)}_{L^2(X;\ \mu)} = &
\frac{1}{\Gamma(s)}\int_0^\infty \int_0^\infty
e^{-r}r^s
e^{-\frac{y^2 \lambda}{4r}}\frac{dr}{r}
dE_{f,g}(\lambda)\\
\underrightarrow{y\to 0^+}&\
\frac{1}{\Gamma(s)}\int_0^\infty \int_0^\infty
e^{-r}r^s
\frac{dr}{r}
dE_{f,g}(\lambda)\\
=&
\ang{f,\ g}_{L^2(X;\ \mu)}.
\end{align*}
4. We are now left to show that $U$ satisfies the equation (\ref{extL}).
For all $g \in L^2(X;\ \mu)$,
\begin{align*}
\lim_{h\to 0^+}\ang{
\frac{U(\cdot, \ y + h) - U(\cdot, \ y)}{h},\ g(\cdot)
}_{L^2(X;\ \mu)}
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\ang{e^{tL}(-L)^s f, \ g}_{L^2(X;\ \mu)}
\partial_y(e^{-\frac{y^2}{4t}})
\frac{dt}{t^{1-s}}\\
&=
\ang{
\frac{1}{\Gamma(s)}\int_0^\infty
e^{tL}(-L)^s f \cdot
\partial_y(e^{-\frac{y^2}{4t}})
\frac{dt}{t^{1-s}},\ g
}_{L^2(X;\ \mu)}
\end{align*}
The first equality follows from the dominated convergence theorem, and the second holds by checking the integrability as in step 1. Hence
$$ U_y(x, y) = \frac{-1}{\Gamma(s)}\int_0^\infty
e^{tL}(-L)^s f(x) \cdot
\frac{
ye^{-\frac{y^2}{4t}}
}{2t}
\frac{dt}{t^{1-s}}.$$
Also, we can have $U_{yy}$ by similar computation,
$$ U_{yy}(x, y) = \frac{1}{\Gamma(s)}\int_0^\infty
e^{tL}(-L)^s f(x) \cdot
\brak{
\frac{y^2}{4t^2} - \frac{1}{2t}
}
e^{-\frac{y^2}{4t}}
\frac{dt}{t^{1-s}}.$$
Hence for all $g \in L^2(X;\ \mu)$,
\begin{align*}
\ang{\mathcal B_a U,\ g} &= \ang{U_{yy} + \frac{1-2s}{y} U_y,\ g}\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\ang{e^{tL}(-L)^s f, \ g}_{L^2(X;\ \mu)}
\brak{
\frac{y^2}{4t^2}-\frac{1}{2t}+\frac{1-2s}{y}\brak{-\frac{y}{2t}}}
e^{-\frac{y^2}{4t}}
\frac{dt}{t^{1-s}}\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\ang{P_t(-L)^s f, \ g}_{L^2(X;\ \mu)}
\brak{
\frac{y^2}{4t^2}+\frac{s-1}{t}
}
e^{-\frac{y^2}{4t}}
\frac{1}{t^{1-s}}dt\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\ang{P_t(-L)^s f, \ g}_{L^2(X;\ \mu)}
\partial_t\brak{
e^{-\frac{y^2}{4t}}
\frac{1}{t^{1-s}}}dt
\end{align*}
An integration by parts then yields
\begin{align*}
\ang{\mathcal B_a U,\ g}
&=
-\frac{1}{\Gamma(s)}\int_0^\infty
\partial_t
\edg{
\int_0^\infty
e^{-t\lambda} \lambda^s dE_{f,g}(\lambda)
}
e^{-\frac{y^2}{4t}}
\frac{dt}{t^{1-s}}\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty
\int_0^\infty
\lambda
e^{-t\lambda} \lambda^s
e^{-\frac{y^2}{4t}}
\frac{dt}{t^{1-s}}
dE_{f,g}(\lambda)
\\
&=
\ang{
L\ \frac{1}{\Gamma(s)}\int_0^\infty
e^{tL}((-L)^sf)\,
e^{-\frac{y^2}{4t}}
\frac{dt}{t^{1-s}}, g}_{L^2(X;\ \mu)} = \ang{LU(\cdot, y),g(\cdot)}_{L^2(X;\ \mu)}.
\end{align*}
5. We are left to prove the Poisson formula. By the change of variable $t = \frac{y^2}{4r\lambda}$, we get
\begin{align*}
\ang{
U(\cdot, y), g(\cdot)
}_{L^2(X;\ \mu)} &=
\frac{1}{\Gamma(s)}\int_0^\infty \int_0^\infty
e^{-t\lambda} (t\lambda) ^s
e^{-\frac{y^2}{4t}}
\,\frac{dt}{t}\,dE_{f,g}(\lambda)\\
&=
\frac{1}{\Gamma(s)}\int_0^\infty \int_0^\infty
e^{-\frac{y^2}{4r}} \brak{\frac{y^2}{4r}} ^s e^{-r\lambda}
\,\frac{dr}{r}\,dE_{f,g}(\lambda)\\
&=
\frac{y^{2s}}{4^s \Gamma(s)}\int_0^\infty
\ang{e^{rL}f,g}_{L^2(X;\ \mu)}
e^{-\frac{y^2}{4r}}
\,\frac{dr}{r^{1+s}}\\
&=
\ang{\frac{y^{2s}}
{4^s \Gamma(s)}\int_0^\infty
e^{rL}f\,
e^{-\frac{y^2}{4r}}
\,\frac{dr}{r^{1+s}}
,g}_{L^2(X;\ \mu)}.
\end{align*}
The last equality follows from Bochner's theorem.
\end{proof}
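For a single spectral value $\lambda$ (replacing $P_t f$ by $e^{-t\lambda}$), the Poisson formula has the closed form $U(\lambda,y) = \frac{2^{1-s}}{\Gamma(s)}(y\sqrt\lambda)^s K_s(y\sqrt\lambda)$ in terms of the modified Bessel function $K_s$, and $U(\lambda,y)\to 1$ as $y\to 0^+$. An informal numerical check of this scalar reduction ($\lambda$, $s$ and $y$ are arbitrary test values):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, kv

lam, s, y = 2.0, 0.3, 0.7

# Scalar Poisson formula: one spectral component, P_t f replaced by e^{-t*lam}.
poisson = (y**(2 * s) / (4**s * gamma(s))) * quad(
    lambda t: np.exp(-t * lam) * np.exp(-y**2 / (4 * t)) * t**(-1 - s), 0.0, np.inf
)[0]

# Closed form via the modified Bessel function K_s (standard Laplace-type integral).
z = y * np.sqrt(lam)
bessel = 2**(1 - s) / gamma(s) * z**s * kv(s, z)

assert np.isclose(poisson, bessel, rtol=1e-5)
```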
Here $U$ solves the equation (\ref{extL}) in $X_a^+$ with Dirichlet boundary condition $U(\cdot, 0) = f(\cdot)$. The value of $(-L)^sf$ can then be recovered from a weighted Neumann boundary condition on $U$.
\begin{lem}\label{dn}
Let $f \in \mathcal{D}((-L)^s)$. One can recover $(-L)^sf $ by the following weighted Dirichlet-to-Neumann relation:
\begin{equation}
(-L)^s f =
- \frac{2^{2s-1} \Gamma(s)}{\Gamma(1-s)} \underset{y\to 0^+}{\lim} y^a \frac{\partial U}{\partial y}(\cdot,y) ,
\end{equation}
where, as above, $a = 1-2s$, and the identity holds in $L^2$.
\end{lem}
\begin{proof}
By the previous computation, for all $g \in L^2(X;\mu)$
\begin{align*}
\ang{
y^a U_y(\cdot, y), g(\cdot)
}_{L^2(X;\ \mu)} &=
-\frac{1}{\Gamma(s)}\int_0^\infty
\ang{e^{tL}(-L)^s f, \ g}_{L^2(X;\ \mu)}
y^a\frac{y}{2t}
e^{-\frac{y^2}{4t}}
\frac{dt}{t^{1-s}}
\end{align*}
Changing the variable $t = \frac{y^2}{4r}$, we get
\begin{align*}
\ang{
y^a U_y(\cdot, y), g(\cdot)
}_{L^2(X;\ \mu)} =&
\frac{-1}{\Gamma(s)}\int_0^\infty \int_0^\infty
e^{-t\lambda}\lambda^s
\frac{y^{2-2s}}{2t}
e^{-\frac{y^2}{4t}}
dE_{f,g}(\lambda)
\frac{dt}{t^{1-s}}\\
=&
\frac{-1}{\Gamma(s)}\int_0^\infty \int_0^\infty
e^{-\frac{y^2\lambda}{4r}}\lambda^s
dE_{f,g}(\lambda)
\frac{2e^{-r}}{(4r)^s}dr\\
\underrightarrow{y \to 0^+}&
\frac{-1}{\Gamma(s)}\int_0^\infty \int_0^\infty
\lambda^s
dE_{f,g}(\lambda)
\frac{2e^{-r}}{(4r)^s}dr\\
=&
\frac{-1}{\Gamma(s)}2^{1-2s} \int_0^\infty
r^{-s}e^{-r}dr \cdot \ang{(-L)^sf, g}_{L^2(X;\ \mu)}\\
=&
-\frac{2^{1-2s}\Gamma(1-s)}{\Gamma(s)}\ang{(-L)^sf, g}_{L^2(X;\ \mu)}.
\end{align*}
\end{proof}
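In the scalar reduction of the lemma one can check the limit numerically: with $U(\lambda,y) = \frac{2^{1-s}}{\Gamma(s)}(y\sqrt\lambda)^s K_s(y\sqrt\lambda)$ one should recover $\lambda^s$ from a finite-difference approximation of $y^a U_y$ near $y=0$. An informal sketch (the values of $\lambda$, $s$, $y$ and $h$ are arbitrary):

```python
import numpy as np
from scipy.special import gamma, kv

lam, s = 2.0, 0.3
a = 1 - 2 * s

def U(y):
    # Scalar extension: one spectral component of the Poisson formula.
    z = y * np.sqrt(lam)
    return 2**(1 - s) / gamma(s) * z**s * kv(s, z)

y, h = 1e-3, 1e-6
Uy = (U(y + h) - U(y - h)) / (2 * h)          # central difference for U_y

# Weighted Dirichlet-to-Neumann relation, scalar version of Lemma dn.
dtn = -(2**(2 * s - 1) * gamma(s) / gamma(1 - s)) * y**a * Uy
assert np.isclose(dtn, lam**s, rtol=1e-3)
```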
\subsection{General weights and Functions of L}\label{psiL}
In this section, we generalize the results of the previous section to certain functions of the generator $L$ which are not necessarily fractional powers. We first state the following theorem (see \cite{Kwasnicki2018}).
\begin{thm}
Let $A$ be a Krein's string, i.e. a non-negative locally integrable function on $[0,{+\infty})$. Then for every $\lambda \geq 0$, there exists a unique non-increasing function $R(z,\lambda)$ on $[0,{+\infty})$ which solves
\begin{equation}
\begin{cases}
R_{zz}(z,\lambda) = \lambda A(z) R(z,\lambda),\ \text{for } z>0\\
R(0,\lambda) = 1,\ \text{for all } \lambda>0\\
\lim_{z\rightarrow {+\infty}}R(z,\lambda) \geq 0.
\end{cases}
\end{equation}
(with the second derivative understood in the weak sense). Furthermore, the expression
$$
\psi(\lambda) = -R_z(0,\lambda)
$$
defines a complete Bernstein function $\psi$, and the correspondence between $A(z)$ and $\psi(\lambda)$ is one-to-one.
\end{thm}
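The simplest string is $A \equiv 1$, for which $R(z,\lambda) = e^{-z\sqrt\lambda}$ and $\psi(\lambda) = \sqrt\lambda$, a complete Bernstein function. As an informal numerical illustration, $\psi$ can be recovered by integrating the string ODE and reading off $-R_z(0,\lambda)$; the shooting scheme, truncation and $\lambda$ below are arbitrary choices:

```python
import numpy as np
from scipy.integrate import solve_ivp

lam, Z = 2.0, 10.0   # spectral parameter and truncation of [0, +infinity)

def shoot(m):
    # Integrate R'' = lam * A(z) * R with A = 1, R(0) = 1, R'(0) = -m on [0, Z].
    sol = solve_ivp(lambda z, u: [u[1], lam * u[0]], (0.0, Z), [1.0, -m],
                    rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# Bisection on the initial slope: too small a slope makes R blow up to +infinity,
# too large a slope makes R go negative; the critical slope gives the decaying solution.
lo, hi = 0.0, 10.0
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0:
        lo = mid
    else:
        hi = mid

psi = 0.5 * (lo + hi)           # psi(lam) = -R_z(0, lam)
assert np.isclose(psi, np.sqrt(lam), atol=1e-2)
```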
We now consider the following extension problem on $X\times\mathbb{R}^+$
\begin{equation} \label{extALv}
\begin{cases}
A(z)Lv(x,z) + v_{zz}(x,z) = 0, \ \text{ in } X\times \mathbb{R}^+,\\
v(x,0) = f(x),\ \text{ in } X.
\end{cases}
\end{equation}
By the change of variable $z = \sigma(y)$, where $\sigma(y) = \int_0^y \frac{1}{w(r)}dr$, we have $A(z) = A(\sigma(y)) = (w(y))^2$, and we recover the equation
\begin{equation}
\begin{cases}
Lu(x,y) +\frac{1}{w(y)}\frac{\partial}{\partial y}\brak{w(y)\frac{\partial}{\partial y} u(x, y)} =
Lu(x,y) +\frac{w'(y)}{w(y)}u_y(x, y) + u_{yy}(x,y) =
0,\ \text{ in }\ X\times \mathbb{R}^+,
\\
u(x ,0) = f(x) \ \text{ in } X.
\end{cases}
\end{equation}
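For the power weight $w(y) = y^a$ of the previous section this change of variables is explicit: $\sigma(y) = y^{1-a}/(1-a)$, so $y(z) = ((1-a)z)^{1/(1-a)}$ and $A(z) = ((1-a)z)^{2a/(1-a)}$. An informal numerical check that the transformed scalar solution $v(z) = U(y(z))$ satisfies $v_{zz} = \lambda A(z) v$ (here $U$ is the scalar Bessel-type extension of one spectral component, and the test values are arbitrary):

```python
import numpy as np
from scipy.special import gamma, kv

s, lam = 0.3, 2.0
a = 1 - 2 * s

def U(y):
    # Scalar extension solving U_yy + (a/y) U_y = lam * U with U(0) = 1.
    z = y * np.sqrt(lam)
    return 2**(1 - s) / gamma(s) * z**s * kv(s, z)

y_of_z = lambda z: ((1 - a) * z)**(1 / (1 - a))       # inverse of sigma
A = lambda z: ((1 - a) * z)**(2 * a / (1 - a))        # Krein string for w(y) = y^a
v = lambda z: U(y_of_z(z))

z0, h = 0.8, 1e-3
v_zz = (v(z0 + h) - 2 * v(z0) + v(z0 - h)) / h**2     # second central difference
assert np.isclose(v_zz, lam * A(z0) * v(z0), rtol=1e-4)
```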
Just as in the previous case, we have the following representation formula for the solution of the equation:
\begin{equation}\label{vrf}
v(x,z) = R(z,-L)f(x)
\end{equation}
\begin{thm}
For $f\in D(\psi(-L))$, formula \eqref{vrf} gives an $L^2$-weak solution of the equation \eqref{extALv}. In particular, for all $g \in L^2(X,\mu)$, the following equation holds
\begin{equation}\label{weaksol}
\ang{A(z)Lv(x,z) + v_{zz}(x,z),g(x)} = 0.
\end{equation}
Moreover, the Dirichlet-to-Neumann condition holds weakly:
\begin{equation}
\psi(-L)f(x) = -\lim_{z\rightarrow 0}v_z(x,z).
\end{equation}
\end{thm}
\begin{proof}
By the spectral theorem, we can write
\begin{align*}
\ang{A(z)L v(x,z)+v_{zz}(x,z),g(x)} &= \ang{A(z) L R(z,-L)f(x) + R_{zz}(z,-L)f(x) ,g(x)}\\
&=\int_0^{+\infty} \edg{A(z) \lambda R(z,\lambda) + R_{zz}(z,\lambda)} dE_{f,g}(\lambda)\\
&=0.
\end{align*}
\begin{align*}
\ang{v(x,0),g(x)} &= \ang{R(0,-L)f(x),g(x)}= \int_0^{+\infty} R(0,\lambda) dE_{f,g}(\lambda)= \int_0^{+\infty} 1 dE_{f,g}(\lambda)= \ang{f(x),g(x)}.
\end{align*}
\begin{align*}
\ang{v_z(x,0),g(x)} &= \ang{R_z(0,-L)f(x),g(x)}= \int_0^{+\infty} R_z(0,\lambda) dE_{f,g}(\lambda)= \int_0^{+\infty} -\psi(\lambda) dE_{f,g}(\lambda)= -\ang{\psi(-L)f(x),g(x)}.
\end{align*}
\end{proof}
Given the existence and uniqueness of $R$, one can define $G$ so that $R(z,\cdot)/\psi$ is the Laplace transform of $G(z,\cdot)$, i.e.
$$
R(z,\lambda) = \int_0^{+\infty} e^{-t\lambda} G(z,t) \psi(\lambda) dt.
$$
Then we can write
$$
v(x,z) = \int_0^{+\infty} e^{tL} \psi(-L)f(x) G(z,t) dt.
$$
where $G(z,t)$ is precisely the heat kernel of the string equation
$$
G_{zz}(z,t) = A(z)G_t(z,t).$$
\section{Applications to Harnack inequalities}
In this section, we prove the Harnack inequality for solutions of the equation $$
(-L)^s f = 0.$$
\subsection{Harmonic functions on Dirichlet spaces}
We denote by $C_c(X)$ the space of all continuous functions with compact support in $X$ and $C_0(X)$ its closure with respect to the supremum norm. A $core$ of $\mathcal{E}$ is a subset $\mathcal{C}$ of $\mathcal{F} \cap C_0(X)$ such that $\mathcal{C}$ is dense in $\mathcal{F}$ with the norm $\N{\cdot}_\mathcal{F}$ and dense in $C_0(X)$ with the supremum norm.
\begin{defi}
A Dirichlet form $\mathcal{E}$ is called $regular$ if it admits a core.
\end{defi}
\begin{defi}
A Dirichlet form $\mathcal{E}$ is called strongly local if, for any $u,v\in\mathcal{F}$ with compact support such that $v$ is constant on a neighbourhood of the support of $u$, one has $\mathcal{E}(u,v) = 0$.
\end{defi}
Throughout this section, we assume that $(\mathcal{E},\mathcal{F})$ is a strongly local regular Dirichlet form on ${L^2(X;\ \mu)}$. Since $\mathcal{E}$ is regular, the following definition is valid.
\begin{defi}
Suppose $\mathcal{E}$ is a regular Dirichlet form, for every $u,v\in \mathcal F\cap L^{\infty}(X)$, the energy measure $\Gamma (u,v)$ is defined through the formula
\[
\int_X\phi\, d\Gamma(u,v)=\frac{1}{2}[\mathcal{E}(\phi u,v)+\mathcal{E}(\phi v,u)-\mathcal{E}(\phi, uv)], \quad \phi\in \mathcal F \cap C_c(X).
\]
\end{defi}
Note that $\Gamma(u,v)$ can be extended to all $u,v\in \mathcal F$ by truncation (see \cite[Theorem 4.3.11]{SymmetricMarkovProcessesTimeChangeandBoundaryTheory}). According to Beurling and Deny~\cite{beurling1958}, one then has, for $u,v\in \mathcal{F}$,
\[
\mathcal E(u,v)=\int_X d\Gamma(u,v)
\]
and $\Gamma(u,v)$ is a signed Radon measure often called the energy measure.
If $U \subset X$ is an open set, we define
\[
\mathcal{F}_{loc}(U)=\left\{ f \in L^2_{loc}(U), \text{ for every relatively compact } V \subset U , \, \exists f^* \in \mathcal{F}, \, f^*_{\mid V}=f_{\mid V}, \, \mu \, a.e. \right\}
\]
For $f,g \in \mathcal{F}_{loc}(U)$, one can define $\Gamma (f,g)$ locally by $\Gamma (f,g)_{\mid V}=\Gamma( f^*_{\mid V},g^*_{\mid V}) $.
\begin{defi}
Let $U \subset X$ be an open set. A function $f \in \mathcal{F}_{loc}(U)$ is called harmonic in $U$ if for every function $h \in \mathcal{F}$ whose essential support is included in $U$, one has
\[
\mathcal{E} (f, h)=0.
\]
\end{defi}
\subsection{Elliptic Harnack inequality for harmonic functions}
In this section we recall some known results about Harnack inequalities for harmonic functions. The main assumptions are the volume doubling property and the existence of suitable heat kernel estimates.
\begin{defi}\label{VD}
We say that the metric measure space $(X,d,\mu)$ satisfies the volume doubling property if
there exists a constant $C>0$ such that for every $x\in X$ and $r>0$,
\[
\mu(B(x,2r))\le C\, \mu(B(x,r)).
\]
\end{defi}
\begin{defi}\label{PI}
We say that $(X,\mathcal{E})$ satisfies the 2-Poincar\'e inequality if there exist constants $C$, $\lambda >1$, such that for any ball $B$ in $X$ and $u\in \mathcal{F}$, we have
\begin{equation}
\frac{1}{\mu(B)}\int_B \abs{u-u_B} d\mu \leq C \,\text{rad}\,(B) \brak{\frac{1}{\mu(\lambda B)}\int_{\lambda B} d\Gamma(u,u)}^{1/2}
\end{equation}
\end{defi}
We have then the following well-known result (see \cite{heinonen_koskela_shanmugalingam_tyson_2015}).
\begin{thm}\label{Harnack elliptic}
Assume that $(X,d,\mu,\mathcal{E})$ satisfies the doubling condition and the 2-Poincar\'e inequality. There exist a constant $C>0$ and $\delta \in (0,1)$ such that for any ball $B(x,R)\subset X$ and any non-negative function $u \in \mathcal{F}$ which is harmonic on $B(x,R)$
\[
\sup_{z \in B(x,\delta R)} u(z) \le C \inf_{z \in B(x,\delta R)} u(z),
\]
where by $\sup$ and $\inf$ we mean the essential supremum and essential infimum.
\end{thm}
\subsection{Properties of the extended Dirichlet space}
Associated to $\mathcal E$ we can define the bilinear form $\mathcal{E}_a$ on the space $X_a = X \times \mathbb{R}$ with domain $\mathcal{F}_a$,
$$\mathcal{E}_a(u,v) = \int_\mathbb{R}\mathcal{E}(u,v)d\nu_a + \int_{X_a} u_y\cdot v_y \ d \nu_a d\mu,$$
$$\mathcal{F}_a = \crl{
u \in L^2(X_a, d\mu \times d\nu_a),\ \mathcal{E}_a(u,u) < \infty
}.$$
As before, we define the norm
$$
\N{u}_{\mathcal{E}_a}^2 = \N{u}_{L^2(X_a)} ^2 + \mathcal{E}_a(u,u).
$$
\begin{prop}\label{prop:Ea}
$(X_a, d\mu\times d\nu_a, \mathcal{E}_a, \mathcal{F}_a)$ is a strongly local and regular Dirichlet Space, where $-1<a<1$.
\end{prop}
\begin{proof}
The Markovian property and strong locality are easily verified, and the density of $\mathcal{F}_a$ follows from the regularity. We are left to show that
\begin{enumerate}
\item $\mathcal{E}_a$ is closed, which is equivalent to saying that $(\mathcal{F}_a, \N{\cdot}_{\mathcal{E}_a})$ is a Banach space.
\item $\mathcal{E}_a$ is regular.
\end{enumerate}
1.\ For the closedness, given a Cauchy sequence $\crl{u_n}$ in $\mathcal{F}_a$, we want to show that there exists $u\in \mathcal{F}_a$ such that $u_n\rightarrow u$ in $\mathcal{F}_a$. Notice that
\begin{align*}
\N{u_n-u_m}_{\mathcal{F}_a}^2 &= \int_\mathbb{R} \int_X |u_n-u_m|^2 d\mu d\nu_a + \int_\mathbb{R} \mathcal{E}(u_n-u_m,u_n-u_m) d\nu_a + \int_X\int_\mathbb{R} |\partial_y u_n-\partial_yu_m|^2 d\nu_a d\mu\\
&= \int_\mathbb{R} \N{u_n-u_m}_\mathcal{E}^2 d\nu_a +\int_X\int_\mathbb{R} |\partial_y u_n-\partial_yu_m|^2 d\nu_a d\mu.
\end{align*}
Since $\mathcal{F}$ is a Banach space, $u_n$ can be viewed as a function from $\mathbb{R}$ to $\mathcal{F}$. In particular, $\crl{u_n}$ is a Cauchy sequence in $L^2(\mathbb{R},\mathcal{F};\ \nu_a)$, so there exists $u$ in $L^2(\mathbb{R},\mathcal{F};\ \nu_a)$ such that $u_n\rightarrow u $. Note that we also have $u_n\rightarrow u $ in ${L^2(X_a)}$.
Since $\crl{\partial_y u_n}$ is a Cauchy sequence in ${L^2(X_a)}$, there exists $u^y \in {L^2(X_a)}$ such that $\partial_y u_n \rightarrow u^y$. We are left to show that $\partial_y u = u^y$. Recall that a Bochner integrable function $h \in L^2(\mathbb{R}, L^2(X);\nu_a)$ has weak derivative $\partial_y h$ if, for any $\phi \in C_c^\infty(\mathbb{R})$,
$$
\int_\mathbb{R} h \phi' d\nu_a = -\int_\mathbb{R} \partial_y h \phi d\nu_a
$$
The equality holds in the sense of $L^2(X)$. Then for any $\phi(y) \in C_c^\infty(\mathbb{R})$ and $\xi(x) \in C_c(X)$, notice that $\xi(x)\phi(y) \in {L^2(X_a)}$,
\begin{align*}
\int_X \int _\mathbb{R} u(x,y)\xi(x)\phi'(y) d\nu_a d\mu
&=
\lim_{n\rightarrow \infty}\int_X \int _\mathbb{R} u_n(x,y)\xi(x)\phi'(y) d\nu_a d\mu\\
&=
\lim_{n\rightarrow \infty}\int_X \int _\mathbb{R} \partial _y u_n(x,y)\xi(x)\phi(y) d\nu_a d\mu\\
&=
\int_X \int _\mathbb{R} u^y(x,y)\xi(x)\phi(y) d\nu_a d\mu\\
\end{align*}
The limits follow from $u_n\rightarrow u$ and $\partial_y u_n \rightarrow u^y$ in ${L^2(X_a)}$. Since $u(x,y),u^y(x,y) \in {L^2(X_a)}$, we have
$$\int_\mathbb{R} u(x,y)\phi'(y) d\nu_a = \int _\mathbb{R} u^y(x,y) \phi(y) d\nu_a\ a.e. \text{ in } X.$$
Then by definition, $\partial_y u(x,y) = u^y(x,y)$.
2.\ To show $\mathcal{E}_a$ is regular, we claim that $\mathcal{C}_a = \mathcal{C} \otimes H^1(\mathbb{R}) \subset \mathcal{F}_a$ is a core of $\mathcal{E}_a$, where $\mathcal{C}$ is the core of $\mathcal{E}$ and $H^1(\mathbb{R})$ is the Sobolev space over $\mathbb{R}$. Given a function $f(x,y) \in \mathcal{F}_a$, suppose for any $\varphi(x) \in \mathcal{C}$ and $\psi(y) \in H^1(\mathbb{R})$,
\begin{align*}
(f(x,y), \varphi(x)\psi(y))_{\mathcal{F}_a}=0
\end{align*}
Notice that
\begin{align*}
(f(x,y),\varphi(x)\psi(y))_{\mathcal{F}_a}&= \int_{X_a} f(x,y)\varphi(x)\psi(y) d\nu_a d\mu +
\int_{\mathbb{R}} \mathcal{E}(f(x,y)\varphi(x))\psi(y) d\nu_a \\
&\ \ \ +
\int_{X_a} \varphi(x)\partial_y f(x,y) \psi'(y) d\nu_ad\mu\\
&= \ang{\ang{f(x,y), \varphi(x)}_\mathcal{F}, \psi(y)}_{H^1(\mathbb{R})}.
\end{align*}
Then by the density of $\mathcal{C}$, we have $f \equiv 0$ a.e. with respect to $\mu\otimes \nu_a$ on $X_a$. Hence $\mathcal{C}_a$ is a core of $\mathcal{E}_a$.
\end{proof}
\begin{thm}\label{ext:VD}
Suppose $(X,\mu)$ has the volume doubling property; then so does $(X_a, \mu_a)$. Here
$X_a = X\times \mathbb{R}$ and $ d\mu_a = d\mu\times d\nu_a$, where $d\nu_a(y) = \abs{y}^ady.$
\end{thm}
\begin{proof}
The doubling property of $(\mathbb{R}, d\nu_a)$ follows, for instance, from \cite{FKS}.
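For balls centered at the origin, the doubling constant of $\nu_a$ can be computed directly (a side computation for illustration, assuming $a>-1$ so that $\nu_a$ is locally finite; general centers require the argument in the cited reference):

```latex
\nu_a\big([-2r,2r]\big)
  = 2\int_0^{2r} y^a\,dy
  = \frac{2\,(2r)^{a+1}}{a+1}
  = 2^{a+1}\cdot\frac{2\,r^{a+1}}{a+1}
  = 2^{a+1}\,\nu_a\big([-r,r]\big).
```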
Let $B_a$ be a ball in $X_a$ centered at $(x_0, y_0)$ of radius $R$, and let $2B_a$ be the concentric ball of radius $2R$. Denote the projections of $B_a$ onto $X$ and $\mathbb{R}$ by $B$ and $I$ respectively, and set $D = B\times I$. It is clear that $2B_a \subset 2D$, and one can find $\lambda < 1$ small enough that $\lambda D \subset B_a$. Then
\begin{align*}
\mu_a(2B_a) \leq \mu_a(2D) = \mu(2B)\nu_a(2I)\leq C \mu(\lambda B)\nu_a(\lambda I) = C\mu_a(\lambda D)\leq C\mu_a(B_a).
\end{align*}
\end{proof}
\begin{thm}\label{ext:PI}
Suppose the space $(X,\mu)$ satisfies the Poincar\'{e} inequality, i.e., there exist constants $C$ and $\lambda >1$ such that for any ball $B$ in $X$ and any $u\in \mathcal{F}$, we have
\begin{equation}
\frac{1}{\mu(B)}\int_B \abs{u-u_B} d\mu \leq C \,\text{rad}\,(B) \brak{\frac{1}{\mu(\lambda B)}\int_{\lambda B} d\Gamma(u,u)}^{1/2}
\end{equation}
Then the space $(X_a, \mu_a)$ satisfies it as well; in particular, there exist constants $\tilde{C}$ and $\tilde{\lambda} >1$ such that for any ball $B_a$ in $X_a$ and any $u\in \mathcal{F}_a$, we have
\begin{equation}\label{poincare}
\frac{1}{\mu_a(B_a)}\int_{B_a} \abs{u-u_{B_a}} d\mu_a \leq \tilde{C} \,\text{rad}\,(B_a) \brak{\frac{1}{\mu_a(\tilde{\lambda} B_a)}\int_{\tilde{\lambda} B_a} d\Gamma_a(u,u)}^{1/2}
\end{equation}
\end{thm}
\begin{proof}
First we claim that the following statement is equivalent to \eqref{poincare}:
There exist constants $C$ and $\lambda >1$ such that for any ball $B_a$ in $X_a$ and any $u\in \mathcal{F}_a$, there is a constant $c = c(u,B_a)$ with
\begin{equation}\label{poincarec}
\frac{1}{\mu_a(B_a)}\int_{B_a} \abs{u-c} d\mu_a \leq C \,\text{rad}\,(B_a) \brak{\frac{1}{\mu_a(\lambda B_a)}\int_{\lambda B_a} d\Gamma_a(u,u)}^{1/2}
\end{equation}
It is clear that \eqref{poincare}$\implies$\eqref{poincarec}. For the converse, suppose \eqref{poincarec} holds. Notice that
\begin{align*}
\frac{1}{\mu_a(B_a)}\int_{B_a} \abs{c-u_{B_a}} d\mu_a
&= \abs{c-\frac{1}{\mu_a(B_a)}\int_{B_a} u \,d\mu_a}\\
&\leq \frac{1}{\mu_a(B_a)}\int_{B_a} \abs{u-c} d\mu_a.
\end{align*}
Then \eqref{poincare} follows from the triangle inequality and \eqref{poincarec} itself.
We now have the freedom to choose the constant $c$. Let $B$ and $I$ be the projections of $B_a$ onto $X$ and $\mathbb{R}$ respectively, and let $D = B\times I$. Denote $$u_B(y) = \frac{1}{\mu(B)}\int_B u(x,y) d\mu(x),$$ and $u_D = \frac{1}{\mu_a(D)}\int_D u \,d\mu_a = (u_B)_I$. Take $c = u_D$, and consider for now the region $D$; the passage to the ball $B_a$ is carried out at the end of the proof.
\begin{align*}
\frac{1}{\mu_a(D)}\int_{D} \abs{u-u_D} d\mu_a
&\leq \frac{1}{\mu(B)}\frac{1}{\nu_a(I)}
\int_B\int_I \abs{u(x,y)-u_B(y)} + \abs{u_B(y)-u_D} d\mu(x) d\nu_a(y).
\end{align*}
For the first part,
\begin{align*}
\frac{1}{\nu_a(I)}\int_I
\frac{1}{\mu(B)}\int_B \abs{u(x,y)-u_B(y)} d\mu(x)d\nu_a(y) &\leq C \,\text{rad}\,(B)\frac{1}{\nu_a(I)}\int_I
\brak{\frac{1}{\mu(\lambda B)}\int_{\lambda B} d\Gamma(u,u)}^{1/2}d\nu_a(y)\\
&\leq C \,\text{rad}\,(B)
\frac{1}{\nu_a(I)}
\brak{\frac{1}{\mu(\lambda B)}\int_I\int_{\lambda B} d\Gamma(u,u) d\nu_a(y)}^{1/2} \sqrt{\nu_a(I)}\\
&\leq \tilde{C} \,\text{rad}\,(B)
\brak{\frac{1}{\mu_a(\lambda D)}\int_{\lambda D} d\Gamma(u,u) d\nu_a(y)}^{1/2}
\end{align*}
where the second inequality follows from H\"{o}lder's inequality and the last from the doubling property.
Similarly,
\begin{align*}
\frac{1}{\mu(B)}\frac{1}{\nu_a(I)}
\int_B\int_I \abs{u_B(y)-u_D} d\mu(x) d\nu_a(y) \leq
\tilde{C} \,\text{rad}\,(B)
\brak{\frac{1}{\mu_a(\lambda D)}\int_{\lambda D} \abs{\frac{\partial u(x,y)}{\partial y}}^2 d\nu_a(y)d\mu(x)}^{1/2}
\end{align*}
Notice that
\begin{align}\label{Gammaa}
d\Gamma_a(u,u) = d\Gamma(u,u)d\nu_a(y) + \abs{\frac{\partial u(x,y)}{\partial y}}^2 d\nu_a(y)d\mu(x),
\end{align}
and we obtain the Poincar\'{e} inequality on the region $D$. Applying the doubling property with $\tilde{\lambda}$ large enough that $\tilde{\lambda}B_a$ covers $\lambda D$, we have
\begin{equation*}
\frac{1}{\mu_a(B_a)}\int_{B_a} \abs{u-u_D} d\mu_a \leq C \,\text{rad}\,(B_a) \brak{\frac{1}{\mu_a(\lambda B_a)}\int_{\lambda B_a} d\Gamma_a(u,u)}^{1/2}
\end{equation*}
which is exactly statement \eqref{poincarec} with $c = u_D$. Hence \eqref{poincare} follows from the claim at the beginning.
\end{proof}
\subsection{Harnack inequalities for $(-L)^s$}
From now on, we assume that $(X,\mathcal{E})$ is doubling and satisfies the 2-Poincar\'e inequality. As in \cite{Caffarelli_2007}, let us first consider the following extension lemma.
\begin{lem}\label{evenext}
Let $f\ge 0$, $f \in D((-L)^s)$, be a solution to $(-L)^s f = 0$ in $B(x,R)$. Then the function $\tilde U(x,y) = U(x,|y|)$, where $U$ solves the extension problem (\ref{extL}), is harmonic in $B(x,R) \times \mathbb{R}$.
\end{lem}
\begin{proof}
By a density argument, it is enough to show that for all continuous $h \in \mathcal{F}_a$ whose compact support is included in $B(x,R) \times \mathbb{R}$,
$$
\mathcal{E}_a(\tilde{U}, h) = \int_{X_a} (-L\tilde{U} \cdot h + \tilde{U}_y\cdot h_y)d\mu d\nu_a = 0.
$$
Fix $h$, and let $B=B(x,R) \times (-M,M)$ be a box containing $\text{supp} \, h$. For small $\epsilon > 0$, we can write
\begin{align*}
\int_{X_a} (L\tilde{U} \cdot h - \tilde{U}_y\cdot h_y)\ d\mu d\nu_a &=
\int_{B \cap \crl{\abs{y} \geq \epsilon}} (L\tilde{U} \cdot h - \tilde{U}_y\cdot h_y)\ d\mu d\nu_a +
\int_{B \cap \crl{\abs{y} < \epsilon}} (L\tilde{U} \cdot h - \tilde{U}_y\cdot h_y)\ d\mu d\nu_a \\
&:= I + J.
\end{align*}
For part $I$, the integration region is away from the hyperplane $\crl{y=0}$, so we can integrate by parts and use the fact that $\tilde{U}$ is a weak solution of equation (\ref{extL}); for part $J$, we will show that $J \to 0$ as $\epsilon \to 0$.
\begin{align*}
I = \int_{B \cap \crl{\abs{y} \geq \epsilon}} (L\tilde{U} \cdot h - \tilde{U}_y\cdot h_y)\ d\mu d\nu_a = \int_{B \cap \crl{\abs{y} \geq \epsilon}}
L\tilde{U} \cdot h
\ d\mu d\nu_a -
\int_{B \cap \crl{\abs{y} \geq \epsilon}}
\tilde{U}_y\cdot h_y
\ d\mu d\nu_a.
\end{align*}
Integration by parts on the second term yields,
\begin{align*}
\int_{B \cap \crl{\abs{y} \geq \epsilon}}
\tilde{U}_y\cdot h_y
\ d\mu d\nu_a
&=
\int_{B \cap \crl{\abs{y} \geq \epsilon}}
\tilde{U}_y\cdot h_y \cdot \abs{y}^a
\ d\mu dy
\\
&=
\int_{B \cap \crl{\abs{y} \geq \epsilon}}
\partial_y\brak{\abs{y}^a\tilde{U}_y } h
\ d\mu d\nu_a
-
\int_{\partial\brak{{B \cap \crl{\abs{y} \geq \epsilon}}}}
\abs{y}^a\tilde{U}_y h
\ dS\\
&=
\int_{B \cap \crl{\abs{y} \geq \epsilon}}
\mathcal{B}_a \tilde{U} h
\ d\mu d\nu_a
-
2\int_{{B \cap \crl{\abs{y} = \epsilon}}}
\abs{y}^a\tilde{U}_y h
\ dS.
\end{align*}
The last equality follows from $h|_{\partial{B}} = 0$. Since $\tilde{U}$ is the even extension of the solution to the equation (\ref{extL}), we have
\begin{align*}
I = \int_{B \cap \crl{\abs{y} \geq \epsilon}} (L\tilde{U} - \mathcal{B}_a\tilde{U})h\ d\mu d\nu_a +
2\int_{{B \cap \crl{\abs{y} = \epsilon}}}
\abs{y}^a\tilde{U}_y h
\ dS = 2\int_{{B \cap \crl{\abs{y} = \epsilon}}}
\abs{y}^a\tilde{U}_y h
\ dS.
\end{align*}
Then $\lim_{\epsilon\to 0}I = 0$ follows from the assumption that $(-L)^s f = 0$ and Lemma \ref{dn}.
By the proof of Lemma \ref{PoiF}, $L\tilde{U}$ and $\tilde{U}_y$ are locally integrable, hence $\lim_{\epsilon\to 0}J = 0$. Since the identity holds for arbitrary $\epsilon >0$, we obtain the desired result.
\end{proof}
We are ready to prove the Harnack's inequality for solutions of $(-L)^sf=0$.
\begin{thm}(Harnack's inequality)\label{Harnack_frac}
Assume that $(X,d,\mu,\mathcal{E})$ satisfies the doubling condition and the 2-Poincar\'e inequality. There exist constants $C>0$ and $\delta \in (0,1)$ such that for any ball $B(x,R)\subset X$ and any non-negative function $f \in D((-L)^s)$ satisfying $(-L)^sf=0$ on $B(x,R)$,
\[
\sup_{z \in B(x,\delta R)} f(z) \le C \inf_{z \in B(x,\delta R)} f(z).
\]
\end{thm}
\begin{proof}
We first note that the extended Dirichlet space $X_a$ satisfies the conclusion of Theorem \ref{Harnack elliptic}.
Applying the extension Lemma \ref{evenext}, we obtain a function $\tilde{U}$ on the extended space $X_a$ which is harmonic in $B(x,R) \times \mathbb{R}$.
There exists a ball $\tilde{B}$ in $X_a$ such that $$\tilde{B} \cap (X\times \{0\}) = B(x,R)\times \{ 0 \}.$$
Since $\tilde{U}(\cdot, 0) = f$, we have
$$
\sup_{z\in B(x,\delta R) } f(z) = \sup_{z \in B(x,\delta R)} \tilde{U}(z,0) \leq \sup_{\delta \tilde{B}}\, \tilde{U} $$
and
$$
\inf_{ \delta \tilde{B}} \, \tilde{U} \leq \inf_{z \in B(x,\delta R) } \tilde{U}(z, 0) = \inf_{z \in B(x,\delta R) } f(z).$$
Moreover, $\tilde{U}$ is non-negative since $f$ is. Applying the Harnack inequality to $\tilde{U}$, we get the desired result.
\end{proof}
\subsection{Boundary Harnack principles}\label{BdyHarnack}
We will establish the boundary Harnack principles for $(-L)^s$ by applying the results of \cite{lierl2014}. We first recall some prerequisite definitions and assumptions about non-symmetric Dirichlet forms; note that a simpler version holds in the symmetric case.
\begin{definition}
Let $(\mathcal{E},\mathcal{F})$ be a local, regular Dirichlet form on $L^2(X, \mu)$. Then $\mathcal{E}^{sym}(u,v) = (1/2)(\mathcal{E}(u,v) + \mathcal{E}(v,u))$ is its symmetric part and $\mathcal{E}^{skew}(u,v) = (1/2)(\mathcal{E}(u,v) - \mathcal{E}(v,u))$ is its skew-symmetric part.
\end{definition}
\begin{prop}
The symmetric part $\mathcal{E}^{sym}$ of a local, regular Dirichlet form can be written uniquely as
$$
\mathcal{E}^{sym}(f,g) = \mathcal{E}^s(f,g) + \int f g d \kappa,\text{ for all } f,g\in \mathcal{F},
$$
where $\mathcal{E}^s$ is strongly local and $\kappa$ is a positive Radon measure. The second term is also called the killing part.
\end{prop}
With respect to $\mathcal{E}$ we can define the following \emph{intrinsic metric} $d_{\mathcal{E}}$ on $X$ by
\begin{equation}\label{eq:intrinsicmetric}
d_{\mathcal{E}}(x,y)=\sup\{u(x)-u(y)\, :\, u\in\mathcal{F}\cap C_0(X)\text{ and } d\Gamma(u,u)\le d\mu\}.
\end{equation}
Here the condition $d\Gamma(u,u)\le d\mu$ means that $\Gamma(u,u)$ is absolutely continuous with
respect to $\mu$ with Radon-Nikodym derivative bounded by $1$.
The term ``intrinsic metric'' is potentially misleading because in
general there is no reason why $d_{\mathcal{E}}$ is a metric on $X$ (it could be infinite for a given pair of points $x,y$
or zero for some distinct pair of points), however in this paper we will work in a standard setting in which it is a metric.
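For orientation, consider the standard example (not part of the present setting): the classical Dirichlet form $\mathcal{E}(u,u)=\int_{\mathbb{R}^n}|\nabla u|^2\,dx$ on $\mathbb{R}^n$ has energy measure $d\Gamma(u,u)=|\nabla u|^2\,dx$, so the constraint $d\Gamma(u,u)\le d\mu$ says precisely that $u$ is $1$-Lipschitz, and the intrinsic metric recovers the Euclidean distance:

```latex
d_{\mathcal{E}}(x,y)
  = \sup\{\,u(x)-u(y) \,:\, \|\nabla u\|_{\infty}\le 1\,\}
  = |x-y|.
```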
The following definition is from \cite{LenzStollmannVeselic} and references therein, and is based on the classical papers of K. T. Sturm
\cite{St-II,St-III,St-I}.
In preparation for the following theorem, we fix a symmetric, strongly local, regular Dirichlet form $(\hat \mathcal{E}, \mathcal{F})$ on $L^2(X, \mu)$ with energy measure $\hat \Gamma$. Let $Y$ be an open subset of $X$, and let $(\mathcal{E}, D(\mathcal{E}))$ be another (possibly non-symmetric) local bilinear form on $L^2(X, \mu)$. We need the following two assumptions.
\begin{assump}
\begin{enumerate}
\item $(\mathcal{E}, D(\mathcal{E}))$ is a local, regular Dirichlet form. Its domain $D(\mathcal{E})$ is the same as the domain of the form $(\hat \mathcal{E},\mathcal{F})$, that is, $D(\mathcal{E}) = \mathcal{F}$. Let $C_0$ be the constant in the sector condition for $(\mathcal{E}, \mathcal{F})$, i.e.
$$ |\mathcal{E}^{skew}(u, v)| = |(1/2)(\mathcal{E}(u,v) - \mathcal{E}(v,u))| \leq C_0 (\mathcal{E}_1(u,u))^{1/2}(\mathcal{E}_1(v,v))^{1/2},
$$
for all $u, v \in \mathcal{F}$, where $\mathcal{E}_1(f,g) = \mathcal{E}(f,g) + \int_X fg\, d\mu.$
\item There is a constant $C_1 \in (0, \infty)$ so that for all $f,g\in \mathcal{F}_{loc}(Y)$ with $f^2\in\mathcal{F}_{c}(Y)$,
$$ C_1^{-1} \int f^2 d\hat \Gamma (g,g) \leq \int f^2 d\Gamma(g,g) \leq C_1 \int f^2 d\hat \Gamma(g,g).$$
\item There are constants $C_2, C_3\in[0, \infty)$ so that for all $f \in \mathcal{F}_{loc}(Y)$ with $f^2 \in \mathcal{F}_c(Y)$,
$$
\int f^2 d\kappa \leq 2\brak{\int f^2 d\mu}^{1/2}\brak{C_2 \int d\hat \Gamma(f,f) + C_3\int f^2 d\mu}^{1/2}
$$
\item There are constants $C_4, C_5\in[0, \infty)$ so that for all $f \in \mathcal{F}_{loc}(Y)\cap L_{loc}^\infty(Y), g\in\mathcal{F}_c(Y)\cap L^\infty(Y),$
$$
|\mathcal{E}^{skew}(f, fg^2) | \leq 2\brak{\int f^2d\hat \Gamma(g,g)}^{1/2}
\brak{C_4\int g^2 d\hat \Gamma(f,f) + C_5 \int f^2g^2d\mu}^{1/2}
$$
\end{enumerate}
\end{assump}
\begin{assump}
There are constants $C_6, C_7 \in [0, \infty)$ such that
\begin{align*}
|\mathcal{E}^{skew}(f, f^{-1}g^2) | \leq &2\brak{\int d\hat \Gamma(g,g)}^{1/2}
\brak{C_6 \int g^2 d\hat \Gamma(\log f, \log f)}^{1/2}\\
&+
2\brak{\int d\hat \Gamma(g,g) + \int g^2 d\hat \Gamma(\log f, \log f)}^{1/2}
\brak{C_7\int g^2 d\mu}^{1/2}
\end{align*}
\end{assump}
Secondly, we introduce the notion of (inner) uniformity. Let $\Omega \subset X$ be open and connected. Recall that the \emph{inner metric} on $\Omega$ is defined as
$$
d_\Omega(x,y) = \inf \crl{ \text{length}(\gamma)\ |\ \gamma:[0,1]\rightarrow \Omega \text{ continuous, } \gamma(0) = x, \gamma(1) = y },
$$
and $\tilde \Omega$ is the completion of $\Omega$ with respect to $d_\Omega$. For an open set $B\subset \Omega$, let $\partial_{\tilde \Omega} B = \bar{B}^{d_\Omega}\backslash B$ be the boundary of $B$ with respect to its completion for the metric $d_\Omega$. If $x$ is a point in $\Omega$, denote by $\delta_\Omega(x) = d(x, X\backslash \Omega)$ the distance from $x$ to the boundary of $\Omega$.
\begin{defi}
Let $\gamma:[\alpha, \beta] \rightarrow \Omega$ be a rectifiable curve in $\Omega$ and let $c\in (0,1), C\in(1,\infty)$. We call $\gamma$ a \emph{$(c,C)$-uniform} curve in $\Omega$ if
$$
\delta_\Omega(\gamma(t)) \geq c\cdot \min\crl{d(\gamma(\alpha), \gamma(t)), d(\gamma(t), \gamma(\beta))}, \text{ for all } t\in[\alpha,\beta],
$$
and if
$$\text{length}(\gamma) \leq C\cdot d(\gamma(\alpha), \gamma(\beta)).$$
The domain $\Omega$ is called \emph{$(c, C)$-uniform} if any two points in $\Omega$ can be joined by a $(c, C)$-uniform curve in $\Omega$.
\end{defi}
In the following discussion, we suppose $\Omega$ is a $(c_u, C_u)$-inner uniform domain in $(X, d)$.
We are now ready to state Theorem 4.2 of \cite{lierl2014}.
\begin{thm}\label{BH:ref}
Let $(X, \mu, \hat\mathcal{E}, \mathcal{F})$ be a strongly local regular symmetric Dirichlet space and $Y$ be an open subset of $X$.
Suppose the volume doubling property (Definition \ref{VD}) and the Poincar\'{e} inequality (Definition \ref{PI}) hold, together with the following two properties.
\begin{align}
\text{\parbox{.85\textwidth}{The intrinsic distance $d$ is finite everywhere, continuous, and defines the original topology of $X$.}}\tag{A1}\\
\text{\parbox{.85\textwidth}{For any ball $B(x, 2r) \subset Y$, $B(x,r)$ is relatively compact. }}\tag{A2-Y}
\end{align}
Suppose $(\mathcal{E}, \mathcal{F})$ is another Dirichlet form satisfying Assumptions 1 and 2. Let $\Omega \subset Y$ be a bounded inner uniform domain in $(X, d)$. There exists a constant $A_1\in (1, \infty)$ such that for any $\xi \in \partial_{\tilde \Omega}\Omega$ with $R_\xi >0$, any
$$
0 < r < R \leq \inf\{R_{\xi'}: \xi'\in B_{\tilde\Omega}(\xi, 7R_\xi)\backslash \Omega\},
$$
and any two non-negative weak solutions $u,\ v$ of $L u = 0$, where $L$ is the generator of $\mathcal{E}$, in $Y' = B_\Omega(\xi, 12 C_\Omega r)\backslash \Omega$ with weak Dirichlet boundary condition along $B_{\tilde \Omega}(\xi, 12 C_\Omega r)\backslash \Omega$, we have
$$
\frac{u(x)}{u(x')}\leq A_1 \frac{v(x)}{v(x')},$$
for all $x, x'\in B_\Omega(\xi, r)$. The constant $A_1$ depends only on the volume doubling constant, Poincar\'{e} constant, the constants $C_0-C_7$ which give control over the skew-symmetric part and the killing part of the Dirichlet form, the inner uniformity constants $c_u$, $C_u$ and an upper bound on $C_8R^2$.
\end{thm}
\begin{comment}
\begin{defi}
A strongly local regular Dirichlet space is called strictly local if $d_{\mathcal{E}}$ is a metric on $X$ and the topology induced by $d_{\mathcal{E}}$ coincides with the topology on $X$.
\end{defi}
\end{comment}
We wish to prove the boundary Harnack principles on the extended space $(X_a, d\mu_a, \mathcal{E}_a, \mathcal{F}_a)$. We shall see that the following property of the intrinsic metric is the only thing left to check.
Let $X_a = X \times \mathbb{R}$ and define the natural product distance by
\begin{align}\label{eq:da}
d_a(z,w)^2 = d (z_x, w_x)^2+ |z_y - w_y|^2,
\end{align}
where $z,w\in X_a$; here, for a point $z \in X_a$, we write $z = (z_x, z_y)$ with $z_x \in X$ and $z_y \in \mathbb{R}$. The intrinsic distance for the Dirichlet space $(X_a, d\mu_a, \mathcal{E}_a, \mathcal{F}_a)$ is defined as
\begin{align}\label{eq:dEa}
d_{\mathcal{E}_a}(z,w)=\sup\{u(z)-u(w)\, :\, u\in\mathcal{F}_a\cap C_0(X_a)\text{ and } d\Gamma_a(u,u)\le d\mu_a\}.
\end{align}
Recall the definition for $d\Gamma_a$ in \eqref{Gammaa}.
\begin{lem}\label{lem:dEa}
The intrinsic metric $d_{\mathcal{E}_a}$ \eqref{eq:dEa} is equivalent to the natural product metric $d_a$ \eqref{eq:da}.
\end{lem}
\begin{proof}
First we prove $d_{\mathcal{E}_a}(z,w) \gtrsim d_a(z,w)$ for all $z, w \in X_a$. Write $z = (z_x, z_y)$ and $w = (w_x, w_y)$, where $z_x, w_x \in X$ and $z_y, w_y\in \mathbb{R}$. For any $0<\delta<1$, there exists $f \in \mathcal{F}^{loc}(X)\cap C(X)$ with $d\Gamma(f,f)\leq d\mu$ such that
$$
f(z_x) - f(w_x) \geq \delta d_{\mathcal{E}}(z_x, w_x).
$$
Let $F(z) = \frac{1}{2}(f(z_x) + z_y)$. It is clear that $F\in \mathcal{F}_a^{loc}(X_a) \cap C(X_a)$ and $d\Gamma_a(F,F)\leq d\mu_a$. We have
\begin{align*}
d_{\mathcal{E}_a}(z,w)
&\geq \frac{1}{2}
\brak{f(z_x) - f(w_x)} + \frac{1}{2}\brak{z_y - w_y}
\geq
\frac{\delta}{2} d_\mathcal{E}(z_x,w_x) + \frac{1}{2}\brak{z_y - w_y}\\
&\geq
\frac{c \delta }{2} d(z_x,w_x) + \frac{1}{2}\brak{z_y - w_y},
\end{align*}
where $c$ is the constant in the equivalence of $d$ and $d_\mathcal{E}$. Similarly, for $\tilde F(z) = \frac{1}{2}(f(z_x) - z_y)$, we have
$
d_{\mathcal{E}_a}(z,w) \geq
\frac{c \delta }{2} d(z_x,w_x) + \frac{1}{2}\brak{w_y - z_y}.
$
Hence there exists a constant $C>0$ such that
\begin{align*}
d_{\mathcal{E}_a}(z,w) \geq \frac{c \delta }{2} d(z_x,w_x) + \frac{1}{2}\abs{z_y - w_y} \geq C \sqrt{d(z_x, w_x)^2 + \abs{z_y - w_y}^2} = C d_a(z, w).
\end{align*}
Next we prove the other direction. Fix $0<\delta<1$. For any $z, w \in X_a$, there exists $F\in \mathcal{F}_a^{loc}(X_a)\cap C(X_a)$ with $d\Gamma_a(F, F)\leq d\mu_a$ such that
$$ F(z) - F(w) \geq \delta d_{\mathcal{E}_a}(z, w).$$
We wish to prove $d_a(z,w)\gtrsim F(z) - F(w)$. Write $$F(z) - F(w) = F(z_x, z_y) - F(w_x, z_y) + F(w_x, z_y) - F(w_x, w_y).$$
Let $f(x) = F(x, z_y)$; it is clear that $f\in\mathcal{F}^{loc}(X)\cap C(X)$ and $\frac{d\Gamma(f,f)}{d\mu}(x)\leq \frac{d\Gamma(F,F)}{d\mu}(x, z_y)\leq 1$. Hence
$$d_\mathcal{E}(z_x, w_x)\geq f(z_x) - f(w_x) = F(z_x, z_y) - F(w_x ,z_y).$$
Let $g(y) =F(w_x, y)$, and notice that $\abs{g'(y)} = \abs{F_y(w_x, y)}\leq 1.$ Then we have
$$ \abs{z_y - w_y} \geq F(w_x, z_y) - F(w_x, w_y).$$
Thus $d_a(z,w)\geq \tilde C d_{\mathcal{E}_a}(z,w)$ for some constant $\tilde C$.
\end{proof}
\begin{comment}
Since the Dirichlet form $\mathcal{E}_a$ is sysmetric and strictly local, we can apply the Theorem 4.2 in \cite{lierl2014}, and the Boundary Harnack principles hold on the extended space $X_a$. By the extension lemma \ref{evenext} and the similar steps in \ref{Harnack_frac}, we have the following theorem for fractional powers of $-L$.
\end{comment}
Now we are ready to prove the boundary Harnack principle for weak solutions of $(-L)^s u = 0$.
\begin{thm}(Boundary Harnack principles)
Suppose $(X,d,\mu,\mathcal{E})$ is a symmetric, strongly local Dirichlet space with generator $L$, satisfying the doubling condition and the 2-Poincar\'e inequality. Let $\Omega \subset X$ be a bounded inner uniform domain. There exists a constant $C>0$ such that for any $\xi \in \partial_{\tilde\Omega}\Omega$ with $R_\xi>0$ and any
$$
0 < r < R \leq \inf\{R_{\xi'}: \xi'\in B_{\tilde\Omega}(\xi, 7R_\xi)\backslash \Omega\},
$$
and any two non-negative weak solutions $u,\ v$ of $(-L)^s u = 0$ in $Y' = B_\Omega(\xi, 12 C_\Omega r)\backslash \Omega$, we have
$$
\frac{u(x)}{u(x')}\leq C \frac{v(x)}{v(x')},$$
for all $x, x'\in B_\Omega(\xi, r)$. The constant $C$ depends only on the volume doubling constant, Poincar\'{e} constant and the inner uniformity constants $c_u, C_u$.
\end{thm}
\begin{proof}
We apply Theorem \ref{BH:ref}. First, we take both $\mathcal{E}$ and $\hat \mathcal{E}$ in Theorem \ref{BH:ref} to be $\mathcal{E}_a$, which is symmetric, strongly local and regular by Proposition \ref{prop:Ea}; hence Assumptions 1 and 2 are automatically satisfied. Secondly, since we assume the volume doubling property and the Poincar\'{e} inequality for the underlying form $(\mathcal{E}, \mathcal{F})$, they extend to the form $(\mathcal{E}_a, \mathcal{F}_a)$ by Theorems \ref{ext:VD} and \ref{ext:PI}. We also take the open set $Y$ in Theorem \ref{BH:ref} to be the entire extended space $X_a$; then conditions (A1) and (A2-Y) are satisfied by Lemma \ref{lem:dEa}. This establishes a boundary Harnack principle on the extended space $X_a$ for $\mathcal{E}_a$. Restricting to boundary values on the space $X$, as in the proof of Theorem \ref{Harnack_frac}, finishes the proof.
\end{proof}
\section*{Acknowledgments}
F.B. is partially funded by NSF grant DMS-1901315. Q.L. would like to thank the University of Connecticut for its hospitality during the preparation of this work. There is no data associated to the present article.
\bibliographystyle{plain}
\subsection{Limitations of using GCN for scoring clusters}
\label{ssec:leconv-use}
GCN from Eq. \eqref{eq:gcn} can be viewed as an operator which first computes a \textit{pre-score} $\hat{\phi^{\prime}} = XW$ for each node, followed by a weighted average over neighbors and a non-linearity. If the pre-score of some node is very high, it can increase the scores of its neighbors, which inherently biases the pooling operator towards selecting clusters in a local neighborhood instead of sampling clusters which represent the whole graph. Thus, selecting the clusters which correspond to local extrema of the pre-score function would potentially allow us to sample representative clusters from all parts of the graph.
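The bias can be seen in a toy computation (our sketch with a hypothetical path graph and hand-picked pre-scores, not code from the paper): averaging pre-scores over neighborhoods inflates the scores of nodes adjacent to a high pre-score node, so a top-$k$ selection concentrates in one region.

```python
# Toy illustration (ours, not the paper's code): a path graph 0-1-2-3-4 where
# only node 0 has a large pre-score.  The GCN-style step averages pre-scores
# over each node's neighborhood (with self-loops), so node 0's high pre-score
# leaks into node 1 and the top-2 selection concentrates around node 0.
neighbors = {0: [0, 1], 1: [0, 1, 2], 2: [1, 2, 3], 3: [2, 3, 4], 4: [3, 4]}
pre_score = [10.0, 1.0, 1.0, 1.0, 1.0]

score = [sum(pre_score[j] for j in nbrs) / len(nbrs)
         for _, nbrs in sorted(neighbors.items())]

print(score)   # [5.5, 4.0, 1.0, 1.0, 1.0]
top2 = sorted(range(5), key=lambda i: -score[i])[:2]
print(top2)    # [0, 1]: the two selected nodes are adjacent
```

A scoring rule sensitive to local extrema would instead favor nodes that dominate their own neighborhoods, spreading the selection over the graph.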
\begin{theorem}
\label{thm:gcn-score}
Let $\mathcal{G}$ be a graph with non-negative adjacency matrix $A$, i.e., $A_{i, j}\geq0$. Consider any function $f(X, A): \mathbb{R}^{N \times d} \times \mathbb{R}^{N \times N} \rightarrow \mathbb{R}^{N \times 1}$ which depends on the difference between a node and its neighbors after a linear transformation $W \in \mathbb{R}^{d \times 1}$. For example:
\begin{equation}
\nonumber
f_{i} = \sigma(\alpha_{i}x_{i}W + \sum_{j \in \mathcal{N}(i)} \beta_{i,j} (x_{i}W - x_{j}W))
\end{equation}
where $f_{i}, \alpha_{i}, \beta_{i,j} \in \mathbb{R}$ and $ x_{i} \in \mathbb{R}^{d}$.
\begin{enumerate}[label=\alph*)]
\item If fitness value $\Phi = GCN(X, A)$ then $\Phi$ cannot learn f.
\item If fitness value $\Phi = LEConv(X, A)$ then $\Phi$ can learn f.
\end{enumerate}
\end{theorem}
\begin{proof}
See Appendix Sec. \ref{ssec:gconv-proof} for proof.
\end{proof}
\noindent Motivated by the above analysis, we propose to use LEConv (Eq. \ref{eq:leconv}) for scoring clusters. LEConv can learn to score clusters by considering both their global and local importance, through the use of self-loops and the ability to learn functions of local extrema.
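A minimal sketch of LEConv-style scoring (our reading of Eq. \ref{eq:leconv} in plain Python with hypothetical toy data; not the authors' implementation). With $W_1 = W_2 = W_3$ it instantiates the function $f$ of Theorem \ref{thm:gcn-score} with $\alpha_i = 1$ and $\beta_{i,j} = A_{i,j}$:

```python
import math

def leconv_score(i, X, A, w1, w2, w3):
    """Sketch of LEConv scoring for one node (our reading, not the authors'
    code), with vector features and a scalar output:
    phi_i = tanh( <x_i, w1> + sum_j A[i][j] * (<x_i, w2> - <x_j, w3>) )."""
    dot = lambda x, w: sum(a * b for a, b in zip(x, w))
    local = sum(A[i][j] * (dot(X[i], w2) - dot(X[j], w3)) for j in range(len(X)))
    return math.tanh(dot(X[i], w1) + local)

# Hypothetical toy data: 3 nodes with 2-d features on a path graph 0-1-2.
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
w1 = w2 = w3 = [0.5, -0.5]
phi = [leconv_score(i, X, A, w1, w2, w3) for i in range(3)]
print(phi)  # one fitness score per node, in (-1, 1)
```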
\subsection{Graph Connectivity}
Here, we analyze ASAP from the aspect of edge connectivity in the pooled graph. When considering $h$-hop neighborhood for clustering, both ASAP and DiffPool have $RF^{edge} = 2h+1$ because they use Eq. \eqref{eq:stas} to define the edge connectivity. On the other hand, both TopK and SAGPool have $RF^{edge} = h$. A larger edge receptive field implies that the pooled graph has better connectivity which is important for the flow of information in the subsequent GCN layers.
\begin{theorem}
\label{thm:star-graph}
Let the input graph $\mathcal{G}$ be a tree of any possible structure with $N$ nodes. Let $k^{*}$ be the lower bound on the sampling ratio $k$ which ensures the existence of at least one edge in the pooled graph, irrespective of the structure of $\mathcal{G}$ and the location of the selected nodes. For TopK or SAGPool, $k^{*} \rightarrow 1$, whereas for ASAP, $k^{*} \rightarrow 0.5$ as $N \rightarrow \infty$.
\begin{proof}
See Appendix Sec. \ref{ssec:graph-connect-proof} for proof.
\end{proof}
\end{theorem}
\noindent Theorem \ref{thm:star-graph} suggests that ASAP can achieve a similar degree of connectivity as SAGPool or TopK for a much smaller sampling ratio $k$. For a tree with no prior information about its structure, ASAP would need to sample only half of the clusters whereas TopK and SAGPool would need to sample almost all the nodes, making TopK and SAGPool inefficient for such graphs. In general, independent of any combination of nodes selected, ASAP will have better connectivity due to its larger receptive field. Please refer to Appendix Sec. \ref{ssec:graph-connect-proof} for a similar analysis on path graph and more details.
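The effect can be checked by brute force on a small path graph (our toy experiment; the theorem's limits concern trees as $N\to\infty$, while the numbers below are for $N=9$ only): we look for the smallest number of selected nodes that guarantees an edge in the pooled graph when two selected nodes are connected iff they are within $RF^{edge}$ hops of each other.

```python
from itertools import combinations

def min_nodes_for_edge(N, rf):
    # Smallest m such that EVERY m-subset of the path 0-1-...-(N-1) contains
    # two nodes within rf hops of each other (hop distance on a path is |i-j|),
    # i.e. such that the pooled graph is guaranteed to keep at least one edge.
    for m in range(2, N + 1):
        if all(any(abs(i - j) <= rf for i, j in combinations(sel, 2))
               for sel in combinations(range(N), m)):
            return m
    return N

N = 9
print(min_nodes_for_edge(N, rf=1))  # 6: TopK/SAGPool-style (RF^edge = h = 1)
print(min_nodes_for_edge(N, rf=3))  # 4: ASAP-style (RF^edge = 2h + 1 = 3)
```

The larger edge receptive field guarantees connectivity with noticeably fewer selected nodes, consistent with the smaller $k^{*}$ above.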
\subsection{Graph Permutation Equivariance}
\begin{proposition}
ASAP is a graph permutation equivariant pooling operator.
\end{proposition}
\begin{proof}
See Appendix Sec. \ref{ssec:perm-eq-proof} for proof.
\end{proof}
\section{Hyperparameter Tuning}
\label{ssec:hyper-tune}
For all our experiments, the Adam \cite{adam} optimizer is used. $10$-fold cross-validation is used, with $80\%$ of the data for training and $10\%$ each for validation and test. Models were trained for $100$ epochs with a learning rate decay of $0.5$ after every $50$ epochs. The ranges of the hyperparameter search are provided in Table \ref{tab:hyper-tune}. The model with the best validation accuracy was selected for testing. Our code is based on the Pytorch Geometric library \cite{bhai}.
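The learning-rate schedule described above can be sketched as a step decay (our reading of the setup, not the released code; the initial rate $0.01$ is one of the searched values in Table \ref{tab:hyper-tune}):

```python
# Sketch of the schedule described above (our reading, not the released code):
# a step decay that halves the learning rate every 50 epochs over 100 epochs.
def lr_at_epoch(epoch, lr0=0.01, decay=0.5, step=50):
    return lr0 * decay ** (epoch // step)

print(lr_at_epoch(0))    # 0.01
print(lr_at_epoch(49))   # 0.01
print(lr_at_epoch(50))   # 0.005
print(lr_at_epoch(99))   # 0.005
```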
\begin{table}[tbh!]
\begin{center}
\begin{small}
\begin{tabular}{lcl}
\toprule
\textbf{Hyperparameter} & & \textbf{Range} \\
\cmidrule{1-3}
Hidden dimension & & $\{16, 32, 64, 128\}$ \\
Learning rate & & $\{0.01, 0.001\}$ \\
Dropout & & $\{0, 0.1, 0.2, 0.3, 0.4, 0.5\}$\\
L2 regularization & & $5e^{-4}$ \\
\bottomrule
\end{tabular}
\caption{\label{tab:hyper-tune} Hyperparameter tuning Summary.}
\end{small}
\end{center}
\end{table}
\section{Details of Hierarchical Pooling Setup}
\label{ssec:global-pool}
For hierarchical pooling, we follow SAGPool \cite{sag} and use three layers of GCN, each followed by a pooling layer. After each pooling step, the graph is summarized using a readout function which is a concatenation of the \textit{mean} and \textit{max} of the node representations (similar to SAGPool). The summaries are then added and passed through a network of fully-connected layers separated by dropout layers to predict the class.
\section{Details of Global Pooling Setup}
The global pooling architecture is the same as the hierarchical architecture, with the only difference that pooling is done only after all GCN layers. We do not use readout functions, as global pooling does not require them. To be comparable with other models, we restrict the feature dimension of the pooling output to be no more than $256$. For global pooling layers, the search ranges for hidden dimension and learning rate were the same as for ASAP.
\begin{table}[tbh!]
\begin{center}
\begin{small}
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcl}
\toprule
\textbf{Method} & & \textbf{Range} \\
\cmidrule{1-3}
Set2Set & & processing-step $\in \{5, 10\}$ \\
Global-Attention & & transform $\in \{True, False\}$ \\
SortPool & & $K$ is chosen such that output of pooling $\leq 256$\\
\bottomrule
\end{tabular}
}
\caption{\label{tab:statistics}Global Pooling Hyperparameter Tuning Summary.}
\end{small}
\end{center}
\vskip -0.1in
\end{table}
\section{Similarities between pooling in CNN and ASAP}
\label{ssec:cnn-asap}
In CNN, pooling methods (e.g., mean pool and max pool) have two hyperparameters: kernel size and stride. The kernel size decides the number of pixels being considered for computing each new pixel value in the next layer. The stride decides the fraction of new pixels being sampled, thereby controlling the size of the image in the next layer. In ASAP, $RF^{node}$ determines the neighborhood radius of clusters and $k$ decides the sampling ratio. This makes $RF^{node}$ and $k$ analogous to the kernel size and stride of CNN pooling respectively. There are, however, some key differences. In CNN, a given kernel size corresponds to a fixed number of pixels around a central pixel, whereas in ASAP, the number of nodes being considered is variable although the neighborhood $RF^{node}$ is constant. In CNN, the stride samples uniformly from the new pixels, whereas in ASAP, the model has the flexibility to attend to different parts of the graph and sample accordingly.
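The analogy can be made concrete with a toy 1-D max-pool (our illustration, not the paper's code): the kernel size plays the role of $RF^{node}$ (how many inputs feed each output) and the stride plays the role of the sampling ratio $k$ (how many outputs survive).

```python
# Toy 1-D max-pool (ours, for the analogy only): `kernel` controls the
# receptive field of each output and `stride` controls how many outputs
# are produced from the input sequence.
def max_pool_1d(xs, kernel, stride):
    return [max(xs[i:i + kernel]) for i in range(0, len(xs) - kernel + 1, stride)]

xs = [1, 3, 2, 5, 4, 6, 0, 7]
print(max_pool_1d(xs, kernel=2, stride=2))  # [3, 5, 6, 7]
print(max_pool_1d(xs, kernel=3, stride=2))  # [3, 5, 6]  (wider receptive field)
```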
\section{Ablation on pooling ratio $k$}
\label{ssec:ablation-k}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{images/sampling-ratio-k.pdf}
\caption{\label{fig:sampling-ratio-k} Validation Accuracy vs sampling ratio $k$ on NCI1 dataset.}
\end{figure}
Intuitively, higher $k$ will lead to more information retention. Hence, we expect an increase in performance with increasing $k$. This is empirically observed in Fig. \ref{fig:sampling-ratio-k}. However, as $k$ increases, the computational resources required by the model also increase because a relatively larger pooled graph gets propagated to the later layers. Hence, there is a trade-off between performance and computational requirement while deciding on the pooling ratio $k$.
\section{Proof of Theorem 1}
\label{ssec:gconv-proof}
\textbf{Theorem 1.}
\textit{
Let $\mathcal{G}$ be a graph with non-negative adjacency matrix $A$, i.e., $A_{i, j}\geq0$. Consider any function $f(X, A): \mathbb{R}^{N \times d} \times \mathbb{R}^{N \times N} \rightarrow \mathbb{R}^{N \times 1}$ which depends on the difference between a node and its neighbors after a linear transformation $W \in \mathbb{R}^{d \times 1}$. For example:}
\begin{equation}
\nonumber
f_{i} = \sigma(\alpha_{i}x_{i}W + \sum_{j \in \mathcal{N}(i)} \beta_{i,j} (x_{i}W - x_{j}W))
\end{equation}
\textit{
where $f_{i}, \alpha_{i}, \beta_{i,j} \in \mathbb{R}$ and $ x_{i} \in \mathbb{R}^{d}$.
}
\begin{enumerate}[label=\alph*)]
\item \textit{If fitness value $\phi = GCN(X, A)$ then $\phi$ cannot learn f.}
\item \textit{If fitness value $\phi = LEConv(X, A)$ then $\phi$ can learn f.}
\end{enumerate}
\begin{proof}
For GCN, $\phi_{i} = \sigma(\hspace{1mm} \sum_{j \in \mathcal{N}(x_{i}) \cup \{i\}} A_{i, j} x_{j}W)$ where $W$ is a learnable matrix. Since $A_{i, j} \geq 0$, $\phi_{i}$ cannot have a term of the form $\beta_{i,j} (x_{i}W - x_{j}W)$ which proves the first part of the theorem. We prove the second part by showing that LEConv can learn the following function $f$:
\begin{equation}
\label{eq:to-prove-f}
f_{i} = \sigma(\alpha_{i}x_{i}W + \sum_{j \in \mathcal{N}(i)} \beta_{i,j} (x_{i}W - x_{j}W))
\end{equation}
LEConv formulation is defined as:
\begin{equation}
\label{eq:club}
\phi_{i} = \sigma(x_{i} W_{1} + \sum_{j \in \mathcal{N}(i)} A_{i,j} (x_{i} W_{2} - x_{j} W_{3}))
\end{equation}
where $W_{1}$, $W_{2}$ and $W_{3}$ are learnable matrices. For $W_{3} = W_{2} = W_{1} = W$, $\alpha_{i} = 1$ and $\beta_{i,j} = A_{i,j}$, we find that Eq. \eqref{eq:club} is equal to Eq. \eqref{eq:to-prove-f}.
\end{proof}
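As a quick sanity check of the second part, the reduction of LEConv to $f$ under the stated parameter choices can be verified numerically. The sketch below uses NumPy with an identity $\sigma$ and keeps $d$-dimensional outputs for simplicity; the graph and weights are arbitrary illustrative values, not part of the proof.

```python
import numpy as np

# Numerical check: with W1 = W2 = W3 = W, alpha_i = 1 and beta_{i,j} = A_{i,j},
# the LEConv output (Eq. club) coincides with f (Eq. to-prove-f).
rng = np.random.default_rng(0)
N, d = 4, 3
X = rng.normal(size=(N, d))
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
W = rng.normal(size=(d, d))

def f(X, A, W):
    # f_i = x_i W + sum_j A_{i,j} (x_i W - x_j W), with identity sigma
    XW = X @ W
    return XW + A.sum(axis=1, keepdims=True) * XW - A @ XW

def leconv(X, A, W1, W2, W3):
    # phi_i = x_i W1 + sum_{j in N(i)} A_{i,j} (x_i W2 - x_j W3)
    return X @ W1 + A.sum(axis=1, keepdims=True) * (X @ W2) - A @ (X @ W3)

assert np.allclose(f(X, A, W), leconv(X, A, W, W, W))
```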
\section{Graph Connectivity}
\subsection{\Large Proof of Theorem 2}
\label{ssec:graph-connect-proof}
\begin{definition}
For a graph $\mathcal{G}$, we define optimum-nodes $n^{*}_{h}(\mathcal{G})$ as the maximum number of nodes that can be selected such that the selected nodes are at least $h$ hops away from each other.
\end{definition}
\begin{definition}
For a given number of nodes $N$, we define optimum-tree $\mathcal{T}^{*}_{N}$ as the tree which has maximum optimum-nodes $n^{*}_{h}(\mathcal{T}_{N})$ among all possible trees $\mathcal{T}_{N}$ with $N$ nodes.
\end{definition}
\begin{lemma}
\label{lem:atmost-one}
Let $\mathcal{T}^{*}_{N}$ be an optimum-tree of $N$ vertices and $\mathcal{T}^{*}_{N-1}$ be an optimum-tree with $N-1$ vertices. The optimum-nodes of $\mathcal{T}^{*}_{N}$ and $\mathcal{T}^{*}_{N-1}$ differ by at most one, i.e., $0 \le n^{*}_{h}(\mathcal{T}^{*}_{N}) - n^{*}_{h}(\mathcal{T}^{*}_{N-1}) \le 1$.
\begin{proof}
Consider $\mathcal{T}^{*}_{N}$, which has $N$ nodes.
We can remove any one of the leaf nodes of $\mathcal{T}^{*}_{N}$ to obtain a tree $\mathcal{T}_{N-1}$ with $N-1$ nodes.
If the removed leaf was one of the $n^{*}_{h}(\mathcal{T}^{*}_{N})$ selected nodes, then $n^{*}_{h}(\mathcal{T}_{N-1})$ would become $n^{*}_{h}(\mathcal{T}^{*}_{N}) - 1$. If any other node was removed, then, being a leaf, it does not lie on the shortest path between any of the $n^{*}_{h}(\mathcal{T}_{N-1})$ selected nodes. This implies that the optimum-nodes of $\mathcal{T}_{N-1}$ is at least $n^{*}_{h}(\mathcal{T}^{*}_{N})-1$, i.e.,
\begin{equation}
\label{eq:n-1-range}
n^{*}_{h}(\mathcal{T}^{*}_{N})-1 \le n^{*}_{h}(\mathcal{T}_{N-1}) \le n^{*}_{h}(\mathcal{T}^{*}_{N})
\end{equation}
Since $\mathcal{T}^{*}_{N-1}$ is the optimum-tree with $N-1$ nodes, we know that:
\begin{equation}
\label{eq:optimum}
n^{*}_{h}(\mathcal{T}_{N-1}) \le n^{*}_{h}(\mathcal{T}^{*}_{N-1})
\end{equation}
Using Eq. \eqref{eq:n-1-range} and \eqref{eq:optimum}
we can write:
\begin{equation}
\nonumber
n^{*}_{h}(\mathcal{T}^{*}_{N}) - n^{*}_{h}(\mathcal{T}^{*}_{N-1}) \le 1
\end{equation}
which proves our lemma.
\end{proof}
\end{lemma}
\begin{lemma}
\label{lem:sub-graph-optimality}
Let $\mathcal{T}^{*}_{N}$ be an optimum-tree of $N$ vertices and $\mathcal{T}^{*}_{N-1}$ be an optimum-tree of $N-1$ vertices. $\mathcal{T}^{*}_{N-1}$ is an induced subgraph of $\mathcal{T}^{*}_{N}$.
\begin{proof}
Let us choose a node to remove from $\mathcal{T}^{*}_{N}$, joining its neighboring nodes, to obtain a tree $\mathcal{T}_{N-1}$ with $N-1$ nodes, with the objective of keeping $n^{*}_{h}(\mathcal{T}_{N-1})$ as large as possible. To do so, we can only remove a leaf node from $\mathcal{T}^{*}_{N}$: removing a non-leaf node can shorten the shortest paths between multiple pairs of nodes, whereas removing a leaf node only shortens the paths ending at the new leaf at that position. This ensures the least reduction in optimum-nodes for $\mathcal{T}_{N-1}$.
Removing a leaf node implies that $n^{*}_{h}(\mathcal{T}_{N-1})$ cannot be less than $n^{*}_{h}(\mathcal{T}^{*}_{N})-1$, as it affects only the paths involving that particular leaf node. Using Lemma \ref{lem:atmost-one}, we see that $\mathcal{T}_{N-1}$ is equivalent to $\mathcal{T}^{*}_{N-1}$, i.e., $\mathcal{T}_{N-1}$ is one of the possible optimum-trees with $N-1$ nodes. Since $\mathcal{T}_{N-1}$ was formed by removing a leaf node from $\mathcal{T}^{*}_{N}$, we find that $\mathcal{T}^{*}_{N-1}$ is indeed an induced subgraph of $\mathcal{T}^{*}_{N}$.
\end{proof}
\end{lemma}
\begin{figure}[t]
\centering
\subfloat[Starlike tree]{
\includegraphics[width=0.22\textwidth]{images/starlike.pdf}
\label{starlike}
}
\subfloat[Path Graph]{
\includegraphics[width=0.22\textwidth]{images/path.pdf}
\label{path}
}
\caption{ (a) Balanced Starlike tree with height 2. (b) Path Graph}
\label{fig:graphs}
\end{figure}
\begin{definition}
A starlike tree is a tree having at most one node (root) with degree greater than two \cite{starlike-tree}. We consider a starlike tree with height $\lceil h/2 \rceil$ to be \textit{balanced} if there is at most one leaf at a height less than $\lceil h/2 \rceil$ while the rest are all at a height of $\lceil h/2 \rceil$ from the root. Figure \ref{fig:graphs}(a) depicts an example of a balanced starlike tree with $h=2$.
\end{definition}
\begin{definition}
A path graph is a graph whose nodes can be placed on a straight line. Exactly two nodes in a path graph have degree one, while the rest have degree two. Figure \ref{fig:graphs}(b) shows an example of a path graph \cite{path-graph}.
\end{definition}
\begin{lemma}
\label{lem:starlike-n}
For a balanced starlike tree with height $h/2$, where $h$ is even, $n^{*}_{h}(\mathcal{T}_{N}) = \Bigl \lfloor \frac{N-1}{h/2} \Bigr \rfloor$, attained when the leaves are selected.
\end{lemma}
\begin{lemma}
\label{lem:star}
Among all the possible trees $\mathcal{T}_{N}$ which have $N$ vertices, the maximum $n^{*}_{h}(\mathcal{T}_{N})$ achievable is $ \Bigl \lfloor \frac{N-1}{h/2} \Bigr \rfloor$, which is obtained when the tree is a balanced starlike tree with height $h/2$, for even $h$.
\begin{proof}
We prove the lemma by induction. The base case is a path graph $\mathcal{T}_{h+1}$ with $h+1$ nodes, a trivial case of a starlike tree, as it has exactly $2$ nodes which are $h$ hops away. From the formula $\Bigl \lfloor \frac{N-1}{h/2} \Bigr \rfloor$, we get $n^{*}_{h}(\mathcal{T}_{h+1}) = 2$, which verifies the base case.
Assume that the lemma holds for $N-1$, i.e., a balanced starlike tree with height $h/2$ achieves the maximum $n^{*}_{h}(\mathcal{T}_{N-1})$ among all trees $\mathcal{T}_{N-1}$ with $N-1$ vertices. Consider $\mathcal{T}^{*}_{N}$ to be the optimum-tree for $N$ nodes. From Lemma \ref{lem:sub-graph-optimality}, we know that $\mathcal{T}^{*}_{N-1}$ is an induced subgraph of $\mathcal{T}^{*}_{N}$. This means that $\mathcal{T}^{*}_{N}$ can be obtained by adding a node to $\mathcal{T}^{*}_{N-1}$. Since we are constructing $\mathcal{T}^{*}_{N}$, we need to add the node to $\mathcal{T}^{*}_{N-1}$ such that the maximum number of nodes which are at least $h$ hops apart can be selected. There are three possible structures for the tree $\mathcal{T}^{*}_{N-1}$, depending on the minimum height among all its branches: (a) the minimum height is less than $h/2-1$, (b) the minimum height is equal to $h/2-1$, and (c) the minimum height is equal to $h/2$. Although case (a) is not possible, since we assumed $\mathcal{T}^{*}_{N-1}$ to be a balanced starlike tree, we consider it for the sake of completeness. For case (a), no matter where we add the node, $n^{*}_{h}(\mathcal{T}^{*}_{N})$ will not increase. However, we should add the node to the leaf of the branch with the least height, as this allows the new leaf of that branch to be chosen in case the number of nodes in the tree is increased to some $N^{'}>N$ such that the height of that branch becomes $h/2$. For case (b), we should add the node to the leaf of the branch with the least height so that its height becomes $h/2$ and the new leaf of that branch gets selected. For case (c), no matter where we add the node, $n^{*}_{h}(\mathcal{T}^{*}_{N})$ will not increase. Unlike case (a), we should add the new node to the root so as to start a new branch whose leaf could be selected if that branch grows to height $h/2$ for some $N^{'}>N$.
For all three cases, $\mathcal{T}^{*}_{N}$ is a balanced starlike tree, as the new node is added either to the leaf of a branch, if the minimum height of a branch is less than $h/2$, or to the root, if the minimum height of the branches is $h/2$. Hence, by induction, the lemma is proved.
\end{proof}
\end{lemma}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{images/tree-graph.pdf}
\caption{\label{fig:tree-graph} Minimum sampling ratio $k^{*}$ vs $N$ for trees}
\end{figure}
\textbf{Theorem 2.}
\textit{
Let the input graph $\mathcal{T}$ be a tree of any possible structure with $N$ nodes. Let $k^{*}$ be the lower bound on sampling ratio $k$ to ensure the existence of at least one edge in the pooled graph irrespective of the structure of $\mathcal{T}$ and the location of the selected nodes. For TopK or SAGPool, $k^{*} \rightarrow 1$ whereas for ASAP, $k^{*} \rightarrow 0.5$ as $N \rightarrow \infty$.
}
\begin{proof}
From Lemmas \ref{lem:star} and \ref{lem:starlike-n}, we know that among all the possible trees $\mathcal{T}_{N}$ which have $N$ vertices, the maximum $n^{*}_{h}(\mathcal{T}_{N})$ achievable is $ \Bigl \lfloor \frac{N-1}{h/2} \Bigr \rfloor$. Using the pigeon-hole principle, we can show that for a pooling method with edge receptive field $RF^{edge}$, if the number of sampled clusters is greater than this maximum evaluated at $h = RF^{edge}+1$, then there will always be an edge in the pooled graph irrespective of the position of the selected clusters:
\begin{equation}
\begin{split}
\label{eq:star-graph}
\lceil k^{*}N \rceil &> \Bigl \lfloor \frac{N-1}{\frac{ RF^{edge}+1}{2}} \Bigr \rfloor \\
k^{*}N &> \Bigl \lfloor \frac{N-1}{\frac{RF^{edge}+1}{2}} \Bigr \rfloor \\
k^{*}N + 1 &> \frac{N-1}{\frac{RF^{edge}+1}{2}}
\end{split}
\end{equation}
Let us consider 1-hop neighborhood for pooling, i.e., $h = 1$. Substituting $RF^{edge} = h$ in Eq. \eqref{eq:star-graph} for TopK and SAGPool we get:
\begin{equation}
\nonumber
k^{*} > 1 - \frac{2}{N}
\end{equation}
and as $N \rightarrow \infty$ we obtain $k^{*} \rightarrow 1$. Substituting $RF^{edge} = 2h+1$ in Eq. \eqref{eq:star-graph} for ASAP, we get:
\begin{equation}
\nonumber
k^{*} > \frac{1}{2} - \frac{3}{2N}
\end{equation}
and as $N \rightarrow \infty$ we obtain $k^{*} \rightarrow 0.5$.
\end{proof}
\subsection{\Large Similar Analysis for Path Graph}
\label{ssec:proof-3}
\begin{lemma}
\label{lem:path-graph-n}
For a path graph $\mathcal{G}_{path}$ with $N$ nodes, $n^{*}_{h}(\mathcal{G}_{path}) = \lceil \frac{N}{h} \rceil$.
\end{lemma}
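This count can be checked computationally for small cases: on a path with nodes $0,\dots,N-1$, greedily taking every $h$-th node yields a maximum selection of nodes that are pairwise at least $h$ hops apart, and its size matches $\lceil N/h \rceil$. The sketch below is an illustrative check, not part of the proof.

```python
import math

# Greedy selection on a path graph with nodes 0..N-1: taking nodes
# 0, h, 2h, ... gives a maximum set of nodes pairwise >= h hops apart.
def n_star_path(N, h):
    return len(range(0, N, h))

# The greedy count floor((N-1)/h) + 1 equals ceil(N/h) for all N, h >= 1.
for N in range(1, 60):
    for h in range(1, 12):
        assert n_star_path(N, h) == math.ceil(N / h)
```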
\begin{lemma}
\label{lem:path-graph}
Consider the input graph to be a path graph $\mathcal{G}_{path}$ with $N$ nodes. To ensure that a pooling operator with $RF^{edge}$ and sampling ratio $k$ has at least one edge in the pooled graph, irrespective of the location of the selected clusters, we have the following inequality on $k$: $k \geq \frac{1}{RF^{edge}+1} + \frac{1}{N}$.
\end{lemma}
\begin{proof}
From Lemma \ref{lem:path-graph-n} applied with $h = RF^{edge}+1$, we know that $n^{*}_{RF^{edge}+1}(\mathcal{G}_{path}) = \lceil \frac{N}{RF^{edge}+1} \rceil$. Using the pigeon-hole principle, we can show that for a pooling method with $RF^{edge}$, if the number of sampled clusters is greater than $n^{*}_{RF^{edge}+1}(\mathcal{G}_{path})$, then there will always be an edge in the pooled graph irrespective of the position of the selected clusters:
\begin{equation}
\label{eq:path-graph-pigeon}
\begin{split}
\lceil kN \rceil &> \Bigl \lceil \frac{N}{RF^{edge}+1} \Bigr \rceil \\
kN &> \Bigl \lceil \frac{N}{RF^{edge}+1} \Bigr \rceil \\
kN &\geq \frac{N}{RF^{edge}+1} + 1
\end{split}
\end{equation}
From Eq. \eqref{eq:path-graph-pigeon}, we get $k \geq (\frac{1}{RF^{edge}+1} + \frac{1}{N})$ which completes the proof.
\end{proof}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.4\textwidth]{images/path-graph.pdf}
\caption{\label{fig:path-graph} Minimum sampling ratio $k^{*}$ vs $N$ for Path-Graph}
\end{figure}
\textbf{Theorem 3.}
\textit{
Consider the input graph to be a path graph with $N$ nodes. Let $k^{*}$ be the lower bound on sampling ratio $k$ to ensure the existence of at least one edge in the pooled graph. For TopK or SAGPool, $k^{*} \rightarrow 0.5$ as $N \rightarrow \infty$ whereas for ASAP, $k^{*} \rightarrow 0.25$ as $N \rightarrow \infty$.
}
\begin{proof}
From Lemma \ref{lem:path-graph}, we get $k^{*} = \frac{1}{RF^{edge}+1} + \frac{1}{N}$. Substituting $RF^{edge} = 1$ for TopK and SAGPool ($h = 1$), we get $k^{*} = \lim_{N \rightarrow \infty} (\frac{1}{2} + \frac{1}{N})$, i.e., $k^{*} \rightarrow 0.5$. Substituting $RF^{edge} = 3$ for ASAP ($2h+1$ with $h = 1$), we get $k^{*} = \lim_{N \rightarrow \infty} (\frac{1}{4} + \frac{1}{N})$, i.e., $k^{*} \rightarrow 0.25$.
\end{proof}
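The limits in Theorem 3 can be made concrete with a short numeric sketch; the substitutions $RF^{edge}=1$ (TopK/SAGPool with $h=1$) and $RF^{edge}=3$ (ASAP, $2h+1$ with $h=1$) are taken from the proof above.

```python
# Lower bound on the sampling ratio for a path graph with N nodes:
# k* = 1/(RF_edge + 1) + 1/N (path-graph lemma above).
def k_star(rf_edge, N):
    return 1.0 / (rf_edge + 1) + 1.0 / N

# TopK/SAGPool (RF_edge = 1) approaches 0.5; ASAP (RF_edge = 3) approaches 0.25.
assert abs(k_star(1, 10**6) - 0.5) < 1e-5
assert abs(k_star(3, 10**6) - 0.25) < 1e-5
```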
\subsection{Graph Connectivity via $k^{th}$ Graph Power}
\label{kth-power}
To minimize the possibility of nodes becoming isolated in the pooled graph, TopK employs the $k^{th}$ graph power, i.e., $\hat{A}^{k}$ instead of $\hat{A}$. This increases the density of the graph before pooling. While using the $k^{th}$ graph power, TopK can connect two nodes which are at most $k$ hops away, whereas ASAP in this setting can connect nodes up to $k+2h$ hops away in the original graph. As $h\geq1$, ASAP will always have better connectivity for a given $k^{th}$ graph power.
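A minimal sketch of the graph-power operation described above, using boolean reachability on an illustrative path graph; the helper below is an assumption of this appendix, not code from the ASAP implementation.

```python
import numpy as np

# k-th graph power: connect every pair of nodes that are at most k hops
# apart in the original graph (computed via thresholded matrix powers).
def graph_power(A, k):
    reach = ((A + np.eye(len(A))) > 0).astype(float)  # 1 hop + self-loops
    out = reach.copy()
    for _ in range(k - 1):
        out = ((out @ reach) > 0).astype(float)       # extend by one hop
    np.fill_diagonal(out, 0.0)
    return out

# Path graph with 5 nodes: 0-1-2-3-4.
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)
A2 = graph_power(A, 2)
assert A2[0, 2] == 1.0 and A2[0, 3] == 0.0  # 2 hops linked, 3 hops not
```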
\section{Graph Permutation Equivariance}
\label{ssec:perm-eq-proof}
Given a permutation matrix $P \in \{0, 1\}^{n \times n}$ and a function $f(X, A)$ depending on graph with node feature matrix $X$ and adjacency matrix $A$, graph permutation is defined as $f(PX, PAP^{T})$, node permutation is defined as $f(PX, A)$ and edge permutation is defined as $f(X, PAP^{T})$.
Graph pooling operations should produce pooled graphs which are isomorphic after graph permutation, i.e., they need to be graph permutation equivariant or invariant. We show that ASAP has the property of being graph permutation equivariant.\\
\textbf{Proposition 1.}
\textit{ASAP is a graph permutation equivariant pooling operator.}
\begin{proof}
Since $S$ is computed by an attention mechanism which attends to all edges in the graph, we have:
\begin{equation}
\label{eq:graph-eq-S}
S \rightarrow PSP^{T}
\end{equation}
Selecting the top $\lceil kN \rceil$ clusters, denoted by indices $i$, changes $\hat{S}$ as:
\begin{equation}
\label{eq:graph-eq-shat}
\hat{S} \rightarrow P\hat{S}(P[i, i])^{T}
\end{equation}
Using Eq. \eqref{eq:graph-eq-shat} and
$\hat{S} = S(:, \hat{i}), X^{p} = \hat{X}^{c}(\hat{i})$, we can write:
\begin{equation}
\label{eq:graph-Xp}
X^{p} \rightarrow P[i, i]X^{p}
\end{equation}
Since $A^{p} = \hat{S}^{T} \hat{A}^c \hat{S}$ and $\hat{S} = S(:, \hat{i}), X^{p} = \hat{X}^{c}(\hat{i})$, we get:
\begin{equation}
\label{eq:graph-Ap}
A^{p} \rightarrow P[i, i]A^{p}(P[i, i])^{T}
\end{equation}
From Eq. \eqref{eq:graph-Xp} and Eq. \eqref{eq:graph-Ap}, we see that graph permutation does not change the output features; it only permutes their order. The pooled graph obtained from the permuted input is therefore isomorphic to the pooled graph of the unpermuted input.
\end{proof}
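This behavior can be checked on a toy example at the level of top-$k$ selection. The score $\phi = Ax$ below is only an illustrative stand-in for the fitness computation, and the features are scalars for simplicity; the point is that permuting the input yields a pooled graph isomorphic to the original pooled graph.

```python
import numpy as np

# Toy check of permutation equivariance for a TopK-style selection step.
rng = np.random.default_rng(1)
N, k = 6, 3
x = rng.normal(size=N)
A = rng.random((N, N))
A = (A + A.T) / 2.0
np.fill_diagonal(A, 0.0)
P = np.eye(N)[rng.permutation(N)]    # random permutation matrix

def pool(x, A, k):
    phi = A @ x                       # stand-in fitness score
    idx = np.sort(np.argsort(-phi)[:k])
    return x[idx], A[np.ix_(idx, idx)]

xp1, Ap1 = pool(x, A, k)
xp2, Ap2 = pool(P @ x, P @ A @ P.T, k)

# Relabel both pooled graphs by sorting node values: they then coincide.
o1, o2 = np.argsort(xp1), np.argsort(xp2)
assert np.allclose(xp1[o1], xp2[o2])
assert np.allclose(Ap1[np.ix_(o1, o1)], Ap2[np.ix_(o2, o2)])
```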
\section{Pseudo Code}
\label{sec:pseudocode}
Algorithm \ref{alg:ASAP} is a pseudo code of ASAP. The Master2Token working is explained in Algorithm \ref{alg:M2T}.
\begin{algorithm}[!ht]
\caption{ASAP algorithm}
\label{alg:ASAP}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}\SetKwInOut{Intermediate}{Intermediate}
\Input{~Graph $G(\mathcal{V},\mathcal{E})$; Node features $X$; Weighted adjacency matrix $A$; Master2Token attention function $\textsc{Master2Token}$; Local Extrema Convolution operator $\textsc{LEConv}$; pooling ratio $k$; Top-k selection operator \textsc{TopK}; non-linearity $\sigma$ }
\Intermediate{~Clustered graph $G^c(\mathcal{V},\mathcal{E})$ with node features $X^c$ and weighted adjacency matrix $A^c$; Cluster assignment matrix $S$; Cluster fitness vector $\Phi$}
\Output{~Pooled graph $G^p$ with node features $X^p$ and weighted adjacency matrix $A^p$}
\BlankLine
$X^c, S \leftarrow \textsc{Master2Token}(X, A)$\;
$A^c \leftarrow A$\;
$\Phi \leftarrow \textsc{LEConv}(X^c, A^c)$\;
$\hat{X^c} \leftarrow \Phi \odot X^c $\;
$\hat{i} \leftarrow \textsc{TopK}(\Phi, k)$\;
$\hat{S} \leftarrow S(:,\hat{i})$\;
$X^p \leftarrow \hat{X^c}(\hat{i},:)$\;
$A^p \leftarrow \hat{S}^{T} \hat{A}^c \hat{S}$
\end{algorithm}
\makeatletter
\newcommand{\removelatexerror}{\let\@latex@error\@gobble}
\makeatother
\newcommand{\myalgorithm}{%
\begingroup
\removelatexerror
\begin{algorithm*}[H]
\caption{\textsc{Master2Token} algorithm}
\label{alg:M2T}
\SetKwInOut{Input}{Input}\SetKwInOut{Output}{Output}
\Input{~Graph $G(\mathcal{V},\mathcal{E})$; Node features $X$; Weighted adjacency matrix $A$; Graph Convolution operator $\textsc{GCN}$; Weight matrix $W$, weight vector $\Vec{w}$; softmax function $softmax$; Cluster neighborhood function $c_{h}$ }
\Output{~Clustered graph node features $X^c$; Cluster assignment matrix $S$}
\BlankLine
$X' \leftarrow \textsc{GCN}(X, A)$\;
\For{$i=1...|\mathcal{V}|$}
{
$x_{i}^{c} \leftarrow \Vec{0}$\;
$m_i \leftarrow \max_{j \in c_{h}(v_{i})}(x_{j}')$\;
\For{$j \in c_{h}(v_{i})$}
{
$\alpha_{i, j} \leftarrow softmax(\Vec{w}^{T}\sigma(W m_{i} \mathbin\Vert x_{j}'))$\;
$S_{i, j} \leftarrow \alpha_{i, j}$\;
$x_{i}^{c} \leftarrow x_{i}^{c} + \alpha_{i,j} x_{j}$\;
}
}
\end{algorithm*}
\endgroup}
\myalgorithm
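For readers who prefer a linear-algebra view, the sketch below mirrors the data flow of Algorithm \ref{alg:ASAP} in NumPy with $h = 1$. Uniform cluster membership and a toy fitness score are simplified stand-ins for the learned Master2Token and LEConv modules; only the sequence of operations matches the pseudo code.

```python
import numpy as np

# NumPy sketch of Algorithm 1's data flow (h = 1), with stand-in modules.
def asap_pool_sketch(X, A, k):
    N = len(X)
    M = (A > 0).astype(float) + np.eye(N)      # closed 1-hop neighborhoods
    S = M / M.sum(axis=0, keepdims=True)       # column i: membership of cluster i
    Xc = S.T @ X                               # cluster representations
    phi = np.tanh(A @ Xc).sum(axis=1)          # toy fitness (stand-in for LEConv)
    idx = np.sort(np.argsort(-phi)[: int(np.ceil(k * N))])
    S_hat = S[:, idx]                          # keep selected clusters' columns
    Xp = phi[idx, None] * Xc[idx]              # gate features by fitness
    Ap = S_hat.T @ A @ S_hat                   # soft inter-cluster edge weights
    return Xp, Ap

X = np.arange(8, dtype=float).reshape(4, 2)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
Xp, Ap = asap_pool_sketch(X, A, 0.5)
assert Xp.shape == (2, 2) and Ap.shape == (2, 2)
```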
\subsection{Comparison with other pooling methods}
\label{sec:discussion_comp}
\subsubsection{DiffPool}
DiffPool and ASAP both aggregate nodes to form a cluster. While ASAP only considers nodes within the $h$-hop neighborhood of a node $x_{i}$ (medoid) as a cluster, DiffPool considers the entire graph. As a result, in DiffPool, two nodes that are disconnected or far apart in the graph can be assigned to similar clusters if the nodes and their neighbors have similar features. Since this type of cluster formation is undesirable for a pooling operator \cite{diffpool}, DiffPool utilizes an auxiliary link prediction objective during training specifically to prevent far away nodes from being clustered together. ASAP needs no such additional regularization because it enforces locality while clustering. DiffPool's soft cluster assignment matrix $S$ is calculated from all nodes to all clusters, making $S$ a dense matrix. Computing and storing it does not scale easily to large graphs. ASAP, due to its local clustering over the $h$-hop neighborhood, generates a sparse assignment matrix while retaining the hierarchical clustering properties of DiffPool. Further, for each pooling layer, DiffPool has to predetermine the number of clusters it picks, which is fixed irrespective of the input graph size. Since ASAP selects the top $k$ fraction of nodes in the current graph, it inherently takes the size of the input graph into consideration.
\subsubsection{TopK \& SAGPool}
While TopK completely ignores the graph structure during pooling, SAGPool modifies the TopK formulation by incorporating the graph structure through a GCN network for computing node scores $\phi$. To enforce sparsity, both TopK and SAGPool avoid computing the cluster assignment matrix $S$ proposed by DiffPool. Instead of grouping multiple nodes to form a cluster in the pooled graph, they \textit{drop} nodes from the original graph based on a score \cite{topk2}, which can lead to loss of node and edge information. Thus, they fail to leverage the overall graph structure while creating the clusters. In contrast, ASAP captures the rich graph structure while aggregating nodes to form clusters in the pooled graph. TopK and SAGPool sample edges from the original graph to define the edge connectivity of the pooled graph. Therefore, they need to sample nodes from a local neighborhood to avoid isolated nodes in the pooled graph. Maintaining graph connectivity prevents these pooling operations from sampling representative nodes from the entire graph. The pooled graph in ASAP has better edge connectivity than TopK and SAGPool because soft edge weights are computed between clusters using up to three-hop connections in the original graph. Also, the use of LEConv instead of GCN for computing fitness values $\phi$ further allows ASAP to sample representative clusters from local neighborhoods over the entire graph.
\subsection{Comparison of Self-Attention variants}
\label{ssec:m2t_compare}
\subsubsection{Source2Token \& Token2Token}
T2T models the membership of a node by generating a query based only on the medoid of the cluster. Graph Attention Network (GAT) \cite{gat} is an example of T2T attention in graphs. S2T finds the importance of each node for a global task. As shown in Eq. \ref{eq:s2t-add}, since a query vector is not used for calculating the attention scores, S2T inherently assigns the same membership score to a node for all the possible clusters that node can belong to. Hence, both S2T and T2T mechanisms fail to effectively utilize the intra-cluster information while calculating a node's cluster membership. On the other hand, M2T uses a master function $f_{m}$ to generate a query vector which depends on all the entities within the cluster and hence is a more representative formulation. To understand this, consider the following scenario. If in a given cluster, a non-medoid node is removed, then the un-normalized membership scores for the rest of the nodes will remain unaffected in S2T and T2T framework whereas the change will reflect in the scores calculated using M2T mechanism. Also, from Table \ref{tab:attention}, we find that M2T performs better than S2T and T2T attention showing that M2T is better suited for global tasks like pooling.
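The removal scenario above can be made concrete with a small deterministic sketch. Identity weights, a ones vector, and an identity $\sigma$ are illustrative simplifications; the master function is the element-wise max over the cluster, as in Algorithm \ref{alg:M2T}.

```python
import numpy as np

# Dropping a non-medoid node leaves S2T's un-normalized scores for the
# remaining nodes unchanged, but shifts M2T's scores via the master vector.
d = 3
cluster = np.array([[0.0, 0.0, 0.0],   # node 0: the medoid
                    [1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 5.0]])  # non-medoid node to be dropped
W = np.eye(d)                          # illustrative weights
w = np.ones(2 * d)

def s2t(nodes):
    # query-free score: depends on each candidate alone
    return np.array([w[:d] @ (W @ x) for x in nodes])

def m2t(nodes):
    m = nodes.max(axis=0)              # master summarizes the whole cluster
    return np.array([w @ np.concatenate([W @ m, x]) for x in nodes])

full, reduced = cluster, cluster[:-1]
assert np.allclose(s2t(full)[:-1], s2t(reduced))      # S2T: unchanged
assert not np.allclose(m2t(full)[:-1], m2t(reduced))  # M2T: shifted
```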
\section{Experimental Setup}
In our experiments, we use $5$ graph classification benchmarks and compare ASAP with multiple pooling methods. Below, we describe the dataset statistics, the baselines used for comparison, and our evaluation setup in detail.
\subsection{Datasets}
We demonstrate the effectiveness of our approach on $5$ graph classification datasets. D\&D \cite{dd1,dd2-proteins} and PROTEINS \cite{dd2-proteins,proteins} are datasets containing proteins as graphs. NCI1 \cite{nci1} and NCI109 are datasets for anticancer activity classification. FRANKENSTEIN \cite{frankenstein} contains molecules as graphs for mutagen classification. Please refer to Table \ref{tab:stats} for the dataset statistics.
\begin{table}[!tbh]
\centering
\resizebox{1\columnwidth}{!}{
\begin{tabular}{lcccc}
\toprule
\textbf{Dataset} & $\text{G}_{avg}$ & $\text{C}_{avg}$ & $\text{V}_{avg}$ & $\text{E}_{avg}$ \\
\midrule
D\&D & 1178 & 2 & 284.32 & 715.66 \\
PROTEINS & 1113 & 2 & 39.06 & 72.82 \\
NCI1 & 4110 & 2 & 29.87 & 32.30 \\
NCI109 & 4127 & 2 & 29.68 & 32.13 \\
FRANKENSTEIN & 4337 & 2 & 16.90 & 17.88 \\
\bottomrule
\end{tabular}
}
\caption{\label{tab:stats} Statistics of the graph datasets. $\text{G}_{avg}$, $\text{C}_{avg}$, $\text{V}_{avg}$ and $\text{E}_{avg}$ denote the number of graphs, the number of classes, the average number of nodes and the average number of edges respectively.}
\end{table}
\subsection{Baselines}
We compare ASAP with previous state-of-the-art hierarchical pooling operators DiffPool \cite{diffpool}, TopK \cite{topk} and SAGPool \cite{sag}. For comparison with global pooling, we choose Set2Set \cite{set2set}, Global-Attention \cite{glob-att} and SortPool \cite{sortpool}.
\subsection{Training \& Evaluation Setup}
We use a similar architecture as defined in \cite{topk2,sag}, which is depicted in Fig. \ref{fig:asap}(f). For ASAP, we choose $k = 0.5$ and $h = 1$ to be consistent with the baselines.\footnote{Please refer to Appendix Sec. \ref{ssec:hyper-tune} for further details on hyperparameter tuning and Appendix Sec. \ref{ssec:ablation-k} for an ablation on $k$.} Following SAGPool \cite{sag}, we conduct our experiments using $10$-fold cross-validation and report the average accuracy over $20$ random seeds.\footnote{Source code for ASAP can be found at: \url{https://github.com/malllabiisc/ASAP}}
\begin{table}[!tbh]
\small
\centering
\begin{tabular}{lcc}
\toprule
Aggregation type & FITNESS & CLUSTER \\
\midrule
None & - & - \\
Only cluster & - & \ding{51} \\
Both & \ding{51} & \ding{51} \\
\bottomrule
\end{tabular}
\caption{\label{tab:aggr_types} Different aggregation types as mentioned in Sec \ref{sec:ablation_aggr}.}
\end{table}
\section{Results}
In this section, we attempt to answer the following questions:
\begin{description}
\item[Q1] How does ASAP perform compared to other pooling methods at the task of graph classification? (Sec. \ref{sec:results})
\item[Q2] Is cluster formation by M2T attention based node aggregation beneficial during pooling? (Sec. \ref{sec:ablation_attn})
\item[Q3] Is LEConv better suited as a cluster fitness scoring function compared to vanilla GCN? (Sec. \ref{sec:ablation_fitness})
\item[Q4] How helpful is the computation of inter-cluster soft edge weights instead of sampling edges from the input graph? (Sec. \ref{sec:ablation_edge})
\end{description}
\subsection{Performance Comparison}
\label{sec:results}
We compare the performance of ASAP with baseline methods on $5$ graph classification tasks. The results are shown in Table \ref{tab:comparison}. All the numbers for hierarchical pooling (DiffPool, TopK and SAGPool) are taken from \cite{sag}. For global pooling (Set2Set, Global-Attention and SortPool), we modify the architectural setup to make them comparable with the hierarchical variants.\footnote{Please refer to Appendix Sec. \ref{ssec:global-pool} for more details.} We observe that ASAP consistently outperforms all the baselines on all $5$ datasets. We note that ASAP has an average improvement of $4\%$ and $3.5\%$ over the previous state-of-the-art hierarchical (SAGPool) and global (SortPool) pooling methods respectively. We also observe that, compared to other hierarchical methods, ASAP has a smaller variance in performance, which suggests that the training of ASAP is more stable.
\subsection{Effect of Node Aggregation}
\label{sec:ablation_aggr}
Here, we evaluate the improvement in performance due to our proposed technique of aggregating nodes to form a cluster. There are two aspects involved during the creation of clusters for a pooled graph:
\begin{itemize}
\item FITNESS: calculating fitness scores for individual nodes. Scores can be calculated either by using only the medoid or by aggregating neighborhood information.
\item CLUSTER: generating a representation for the new cluster node. Cluster representation can either be the medoid's representation or some feature aggregation of the neighborhood around the medoid.
\end{itemize}
\noindent We test three types of aggregation methods: `None', `Only cluster' and `Both', as described in Table \ref{tab:aggr_types}. As shown in Table \ref{tab:aggregation}, we observe that our proposed node aggregation helps improve the performance of ASAP.
\begin{table}[!tbh]
\centering
\begin{tabular}{lcc}
\toprule
Aggregation & \textsc{FRANKENSTEIN} & \textsc{NCI1} \\
\midrule
None & 67.4 $\pm$0.6 & 69.9 $\pm$ 2.5\\
Only cluster & 67.5 $\pm$0.5 & 70.6 $\pm$ 1.8\\
Both & $\mathbf{67.8 \pm 0.6}$ & $\mathbf{70.7 \pm 2.3}$ \\
\bottomrule
\end{tabular}
\caption{\label{tab:aggregation} Performance comparison of different aggregation methods on the validation data of FRANKENSTEIN and NCI1.}
\end{table}
\begin{table}[!tbh]
\centering
\begin{tabular}{lcc}
\toprule
Attention & \textsc{FRANKENSTEIN} & \textsc{NCI1} \\
\midrule
T2T & 67.6 $\pm$ 0.5 & 70.3 $\pm$ 2.0 \\
S2T & 67.7 $\pm$ 0.5 & 69.9 $\pm$ 2.0 \\
M2T & $\mathbf{67.8 \pm 0.6}$ & $\mathbf{70.7 \pm 2.3}$ \\
\bottomrule
\end{tabular}
\caption{\label{tab:attention} Effect of different attention framework on pooling evaluated on validation data of FRANKENSTEIN and NCI1. Please refer to Sec. \ref{sec:ablation_attn} for more details.}
\end{table}
\subsection{Effect of M2T Attention}
\label{sec:ablation_attn}
We compare our M2T attention framework with the previously proposed S2T and T2T attention techniques. The results are shown in Table \ref{tab:attention}. We find that M2T attention is indeed better than the rest on NCI1 and comparable on FRANKENSTEIN.
\begin{table}[!ht]
\centering
\begin{tabular}{lcc}
\toprule
Fitness function & \textsc{FRANKENSTEIN} & \textsc{NCI1} \\
\midrule
GCN & 62.7$\pm$0.3 & 65.4$\pm$2.5 \\
Basic-LEConv & 63.1$\pm$0.7 & 69.8$\pm$1.9 \\
LEConv & \textbf{67.8$\pm$0.6} & \textbf{70.7$\pm$2.3} \\
\bottomrule
\end{tabular}
\caption{\label{tab:leconv} Performance comparison of different fitness scoring functions on validation data of FRANKENSTEIN and NCI1. Refer to Sec. \ref{sec:ablation_fitness} for details.}
\end{table}
\subsection{Effect of LEConv as a fitness scoring function}
\label{sec:ablation_fitness}
In this section, we analyze the impact of LEConv as a fitness scoring function in ASAP. We use two baselines - GCN (Eq. \ref{eq:gcn}) and Basic-LEConv, which computes $\phi_{i} = \sigma(x_{i}W + \sum_{j \in \mathcal{N}(i)} A_{i, j} (x_{i}W-x_{j}W))$. In Table \ref{tab:leconv}, we can see that Basic-LEConv and LEConv perform significantly better than GCN because of their ability to model functions of local extrema. Further, we observe that LEConv performs better than Basic-LEConv, as it has three different linear transformations compared to only one in the latter. This allows LEConv to learn more complicated scoring functions that are better suited for the final task. Hence, our analysis in Theorem \ref{thm:gcn-score} is empirically validated.
\subsection{Effect of computing Soft edge weights}
\label{sec:ablation_edge}
We evaluate the importance of calculating edge weights for the pooled graph as defined in Eq. \ref{eq:stas}. We use the best model configuration found in the above ablation analysis and then add the computation of soft edge weights between clusters. We observe a significant drop in performance when the edge weights are not computed. This demonstrates the necessity of capturing the edge information while pooling graphs.
\begin{table}[!tbh]
\centering
\begin{tabular}{ccc}
\toprule
Soft edge weights & \textsc{FRANKENSTEIN} & \textsc{NCI1} \\
\midrule
Absent & 67.8 $\pm$ 0.6 & 70.7 $\pm$ 2.3 \\
Present & $\mathbf{68.3 \pm 0.5}$ & $\mathbf{73.4 \pm 0.4}$\\
\bottomrule
\end{tabular}
\caption{\label{tab:stas} Effect of calculating soft edge weights on pooling for validation data of FRANKENSTEIN and NCI1. Please refer to Sec. \ref{sec:ablation_edge} for more details.}
\end{table}
\section{Introduction}
\input{introduction.tex}
\footnotetext{Medoids are representatives of a cluster. They are similar to centroids but are strictly members of the cluster.}
\section{Related Work}
\input{related-works.tex}
\section{Preliminaries}
\input{preliminaries.tex}
\section{ASAP: Proposed Method}
\input{proposed-method.tex}
\section{Theoretical Analysis}
\input{analysis.tex}
\input{experiments.tex}
\section{Discussion}
\input{discussion.tex}
\section{Conclusion}
\input{conclusion.tex}
\section{Acknowledgements}
We would like to thank the developers of Pytorch\_Geometric \cite{bhai}, which allows quick implementation of geometric deep learning models. We also thank Matthias Fey for actively maintaining the library and quickly responding to our queries on GitHub.
\bibliographystyle{aaai}
{\fontsize{9.0pt}{10.0pt} \selectfont
\subsection{Problem Statement}
Consider a graph $G(\mathcal{V}, \mathcal{E}, X)$ with $N = |\mathcal{V}|$ nodes and $|\mathcal{E}|$ edges. Each node $v_{i} \in \mathcal{V}$ has a $d$-dimensional feature representation denoted by $x_i$. $X \in \mathbb{R}^{N \times d}$ denotes the node feature matrix and $A \in \mathbb{R}^{N \times N}$ represents the weighted adjacency matrix. The graph $G$ also has a label $y$ associated with it. Given a dataset $D = \{(G_{1}, y_{1}),(G_{2}, y_{2}),...\}$, the task of graph classification is to learn a mapping $f:\mathcal{G} \rightarrow \mathcal{Y}$, where $\mathcal{G}$ is the set of input graphs and $\mathcal{Y}$ is the set of labels associated with each graph. A pooled graph is denoted by $G^{p}(\mathcal{V}^{p}, \mathcal{E}^{p}, X^p)$ with node feature matrix $X^{p}$ and adjacency matrix $A^{p}$.
\subsection{Graph Convolution Networks} We use Graph Convolution Network (GCN) \cite{gcn} for extracting discriminative features for graph classification. GCN is defined as:
\begin{equation}
\label{eq:gcn}
X^{(l+1)} = \sigma(\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} X^{(l)} W^{(l)}),
\end{equation}
where $\hat{A} = A + I$ for self-loops, $\hat{D} = \sum_{j}\hat{A}_{i,j}$ and $W^{(l)} \in \mathbb{R}^{d \times f}$ is a learnable matrix for any layer $l$. We use the initial node feature matrix wherever provided, i.e., $X^{(0)} = X$.
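To make Eq.~\eqref{eq:gcn} concrete, here is a minimal pure-Python sketch of a single GCN layer on dense lists. The function name \texttt{gcn\_layer}, the ReLU default, and the list-of-lists representation are our illustrative choices, not part of the original implementation:

```python
import math

def gcn_layer(A, X, W, sigma=lambda v: max(v, 0.0)):
    """One GCN layer: sigma(D^-1/2 (A+I) D^-1/2 X W) on dense lists."""
    n, d, f = len(A), len(X[0]), len(W[0])
    # A_hat = A + I adds self-loops
    A_hat = [[A[i][j] + (1.0 if i == j else 0.0) for j in range(n)] for i in range(n)]
    # D_hat is diagonal with D_hat[i][i] = sum_j A_hat[i][j]
    d_inv_sqrt = [1.0 / math.sqrt(sum(row)) for row in A_hat]
    # Symmetric normalization: D^-1/2 A_hat D^-1/2
    A_norm = [[d_inv_sqrt[i] * A_hat[i][j] * d_inv_sqrt[j] for j in range(n)] for i in range(n)]
    # Aggregate neighbour features, then apply the learnable transform W and sigma
    H = [[sum(A_norm[i][k] * X[k][c] for k in range(n)) for c in range(d)] for i in range(n)]
    return [[sigma(sum(H[i][c] * W[c][o] for c in range(d))) for o in range(f)] for i in range(n)]
```

On a two-node graph with a single edge and identity weights, the normalized propagation averages the two self-looped features.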
\subsection{Self-Attention}
\label{sec:self_attn}
Self-attention is used to find the dependency of an input on itself \cite{SA-cheng,vaswani}. An alignment score $\alpha_{i,j}$ is computed to map the importance of candidates $c_{j}$ on target query $q_{i}$. In self-attention, target query $q_{i}$ and candidates $c_{j}$ are obtained from input entities $\boldsymbol{h}=\{h_{1},...,h_{n}\}$. Self-attention can be categorized as Token2Token and Source2Token based on the choice of target query $q$ \cite{disan}.
\subsubsection{Token2Token (T2T)} selects both the target and candidates from the input set $\boldsymbol{h}$. In the context of additive attention \cite{bahdanau}, $\alpha_{i,j}$ is computed as:
\begin{equation}
\label{eq:t2t-add}
\alpha_{i, j} = softmax(\Vec{v}^{T}\sigma(W h_{i} \mathbin\Vert W h_{j})),
\end{equation}
where $\mathbin\Vert$ is the concatenation operator.
\subsubsection{Source2Token (S2T)} finds the importance of each candidate to a specific global task which cannot be represented by any single entity. $\alpha_{i,j}$ is computed by dropping the target query term. Eq. \eqref{eq:t2t-add} changes to the following:
\begin{equation}
\label{eq:s2t-add}
\alpha_{i,j} = softmax(\Vec{v}^{T}\sigma(W h_{j})).
\end{equation}
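As an illustration of Eqs.~\eqref{eq:t2t-add} and \eqref{eq:s2t-add}, the following sketch computes additive T2T and S2T scores for scalar entities; the helper names and the scalar simplification (so that $\Vec{v}$ has one weight per half of the concatenation) are our own assumptions:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def t2t_row(h, W, v, i, sigma=math.tanh):
    """Token2Token: scores of target query h_i over all candidates h_j,
    via v^T sigma(W h_i || W h_j) with scalar entities."""
    return softmax([v[0] * sigma(W * h[i]) + v[1] * sigma(W * h[j]) for j in range(len(h))])

def s2t(h, W, v, sigma=math.tanh):
    """Source2Token: the target-query term is dropped."""
    return softmax([v[1] * sigma(W * hj) for hj in h])
```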
\subsection{Receptive Field}
We extend the concept of receptive field $RF$ from pooling operations in CNN to GNN\footnote{Please refer to Appendix Sec. \ref{ssec:cnn-asap} for more details on similarity between pooling methods in CNN and ASAP.}. We define $RF^{node}$ of a pooling operator as the number of hops needed to cover all the nodes in the neighborhood that influence the representation of a particular output node. Similarly, $RF^{edge}$ of a pooling operator is defined as the number of hops needed to cover all the edges in the neighborhood that affect the representation of an edge in the pooled graph $\mathcal{G}^{p}$.
\subsection{Cluster Assignment}
\label{ssec:cluster-assignment}
Initially, we consider each node $v_{i}$ in the graph as a \textit{medoid} of a cluster $c_{h}(v_{i})$ such that each cluster can represent only the local neighbors $\mathcal{N}$ within a fixed radius of $h$ hops i.e., $c_{h}(v_{i}) = \mathcal{N}_h(v_{i})$. This effectively means that $RF^{node}=h$ for ASAP. This helps the clusters to effectively capture the information present in the graph sub-structure.
\noindent Let $x_{i}^{c}$ be the feature representation of a cluster $c_{h}(v_{i})$ centered at $v_i$. We define $G^{c}(\mathcal{V}, \mathcal{E}, X^c)$ as the graph with node feature matrix $X^{c} \in \mathbb{R}^{N \times d}$ and adjacency matrix $A^{c} = A$. We denote the cluster assignment matrix by $S \in \mathbb{R}^{N \times N}$, where $S_{i,j}$ represents the membership of node $v_{i} \in \mathcal{V}$ in cluster $c_{h}(v_{j})$. By employing such local clustering \cite{satu}, we can maintain sparsity of the cluster assignment matrix $S$ similar to that of the original graph adjacency matrix $A$, i.e., the space complexity of both $S$ and $A$ is $\mathcal{O}(|\mathcal{E}|)$.
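The local clustering described above can be sketched as a breadth-first search of radius $h$ from each medoid; the helper names below are illustrative, not part of the original implementation:

```python
from collections import deque

def h_hop_cluster(adj, medoid, h):
    """Support of cluster c_h(v_i): all nodes within h hops of the medoid (BFS)."""
    dist = {medoid: 0}
    queue = deque([medoid])
    while queue:
        u = queue.popleft()
        if dist[u] == h:
            continue  # do not expand beyond radius h
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def assignment_support(adj, h):
    """Sparsity pattern of S: S[i][j] can be nonzero only if v_j lies in c_h(v_i)."""
    return {i: h_hop_cluster(adj, i, h) for i in adj}
```

For a path graph on four nodes with $h=1$, the support of $S$ has $N + 2|\mathcal{E}|$ nonzero entries, in line with the $\mathcal{O}(|\mathcal{E}|)$ bound.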
\subsection{Cluster Formation using Master2Token}
\label{ssec:node-aggregation}
Given a cluster $c_{h}(v_{i})$, we learn the cluster assignment matrix $S$ through a self-attention mechanism. The task here is to learn the overall representation of the cluster $c_{h}(v_{i})$ by attending to the relevant nodes in it. We observe that both T2T and S2T attention mechanisms described in Sec. \ref{sec:self_attn} do not utilize any intra-cluster information. Hence, we propose a new variant of self-attention called \textbf{Master2Token (M2T)}. We further motivate the need for M2T framework later in Sec. \ref{ssec:m2t_compare}. In M2T framework, we first create a master query $m_{i} \in \mathbb{R}^{d}$ which is representative of all the nodes within a cluster:
\begin{equation}
m_{i} = f_{m}(\{x_j' \mid v_j \in c_h(v_i)\}),
\end{equation}
where $x_j'$ is obtained after passing $x_j$ through a separate GCN to capture structural information in the cluster $c_{h}(v_{i})$\footnote{If $x_j$ is used as is, then interchanging any two nodes in a cluster will not affect the final output, which is undesirable.}. $f_{m}$ is a master function which combines and transforms the feature representations of $v_{j}\in c_{h}(v_{i})$ to find $m_i$. In this work we experiment with the $max$ master function, defined as:
\begin{equation}
m_i = \max_{v_{j} \in c_{h}(v_{i})}(x_{j}').
\end{equation}
This master query $m_{i}$ attends to all the constituent nodes $v_{j} \in c_{h}(v_{i})$ using additive attention:
\begin{equation}
\label{eq:m2t-add}
\alpha_{i, j} = softmax(\Vec{w}^{T}\sigma(W m_{i} \mathbin\Vert x_{j}')),
\end{equation}
where $\Vec{w}$ and $W$ are a learnable vector and matrix, respectively. The calculated attention score $\alpha_{i,j}$ signifies the membership strength of node $v_{j}$ in cluster $c_{h}(v_{i})$. Hence, we use this score to define the cluster assignment matrix discussed above, i.e., $S_{i,j} = \alpha_{i,j}$. The cluster representation $x_{i}^{c}$ for ${c_{h}(v_i)}$ is computed as follows:
\begin{equation}
\label{eq:cluster_repr}
x_{i}^{c} = \sum_{j=1}^{|c_{h}(v_{i})|} \alpha_{i,j} x_{j}.
\end{equation}
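A compact sketch of the M2T computation for a single cluster (master query via the $max$ master function, additive attention, weighted sum as cluster representation) might look as follows; dimensions and names are illustrative assumptions:

```python
import math

def softmax(scores):
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def m2t_cluster(x_prime, x, w, W, sigma=math.tanh):
    """One cluster: m = elementwise max of the x'_j, additive attention of m
    against each member, then the alpha-weighted sum of the x_j."""
    d = len(x_prime[0])
    # Master query via the max master function
    m = [max(xp[k] for xp in x_prime) for k in range(d)]
    def score(xp):
        cat = m + xp  # m || x'_j
        hidden = [sigma(sum(W[r][c] * cat[c] for c in range(2 * d))) for r in range(len(W))]
        return sum(w[r] * hidden[r] for r in range(len(w)))  # w^T sigma(W (m || x'_j))
    alpha = softmax([score(xp) for xp in x_prime])
    x_c = [sum(alpha[j] * x[j][k] for j in range(len(x))) for k in range(d)]
    return x_c, alpha
```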
\subsection{Cluster Selection using LEConv}
\label{ssec:cluster-selection}
Similar to TopK \cite{topk}, we sample clusters based on a cluster fitness score $\phi_{i}$ calculated for each cluster in the graph $G^c$ using a fitness function $f_{\phi}$. For a given pooling ratio $k \in (0, 1]$, the top $\lceil kN\rceil$ clusters are selected and included in the pooled graph $G^p$. To compute the fitness scores, we introduce \textbf{Local Extrema Convolution (LEConv)}, a graph convolution method which can capture local extremum information. In Sec. \ref{ssec:leconv-use} we motivate the choice of LEConv's formulation and contrast it with the standard GCN formulation. LEConv is used to compute $\phi$ as follows:
\begin{equation}
\label{eq:leconv}
\phi_{i} = \sigma(x_{i}^c W_{1} + \sum_{j \in \mathcal{N}(i)} A_{i,j}^c (x_{i}^c W_{2} - x_{j}^c W_{3})),
\end{equation}
where $\mathcal{N}(i)$ denotes the neighborhood of the $i^{th}$ node in $G^c$, $W_{1}, W_{2}, W_{3}$ are learnable parameters, and $\sigma(.)$ is an activation function. The fitness vector ${\Phi} = [\phi_{1}, \phi_{2},\ldots,\phi_{N}]^{T}$ is multiplied elementwise with the cluster feature matrix $X^{c}$ to make $f_{\phi}$ learnable:
\begin{equation}
\nonumber
\hat{X^{c}} = {\Phi} \odot X^{c},
\end{equation}
where $\odot$ is the broadcasted Hadamard product. The function $\text{TOP}_k(.)$ ranks the fitness scores and gives the indices $\hat{i}$ of the top $\lceil kN \rceil$ selected clusters in $G^{c}$ as follows:
\begin{equation}
\nonumber
\hat{i} = \text{TOP}_k(\hat{X^{c}}, \lceil kN \rceil).
\end{equation}
The pooled graph $G^p$ is formed by selecting these top $\lceil kN \rceil$ clusters. The pruned cluster assignment matrix $\hat{S} \in \mathbb{R}^{N \times \lceil kN \rceil}$ and the node feature matrix $X^{p} \in \mathbb{R}^{\lceil kN \rceil \times d}$ are given by:
\begin{equation}
\label{eq:x-pool}
\hat{S} = S(:, \hat{i}), \hspace{1cm} X^{p} = \hat{X}^{c}(\hat{i}, :)
\end{equation}
where $\hat{i}$ is used for index slicing.
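The scoring-and-selection step can be sketched as follows for scalar cluster features (so the LEConv weights are scalars); the helper names are our own, not ASAP's implementation:

```python
import math

def leconv_fitness(A, x, W1, W2, W3, sigma=math.tanh):
    """phi_i = sigma(x_i*W1 + sum_j A_ij (x_i*W2 - x_j*W3)) for scalar features."""
    n = len(A)
    return [sigma(x[i] * W1 + sum(A[i][j] * (x[i] * W2 - x[j] * W3) for j in range(n)))
            for i in range(n)]

def select_clusters(phi, x, k):
    """Gate features by fitness and keep the ceil(k*N) highest-scoring clusters."""
    n = len(phi)
    x_hat = [phi[i] * x[i] for i in range(n)]           # X_hat = Phi ⊙ X^c
    idx = sorted(range(n), key=lambda i: phi[i], reverse=True)[:math.ceil(k * n)]
    return idx, [x_hat[i] for i in idx]                 # index slicing with i_hat
```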
\begin{table*}[tbh]
\centering
\resizebox{\textwidth}{!}{
\begin{tabular}{lccccc}
\toprule
Method & \textsc{D\&D} & \textsc{PROTEINS} & \textsc{NCI1} & \textsc{NCI109} & \textsc{FRANKENSTEIN}\\
\midrule
\textsc{Set2Set} \cite{set2set} & 71.60 $\pm$ 0.87 & 72.16 $\pm$ 0.43 & 66.97 $\pm$ 0.74 & 61.04 $\pm$ 2.69 & 61.46 $\pm$ 0.47\\
\textsc{Global-Attention} \cite{glob-att} & 71.38 $\pm$ 0.78 & 71.87 $\pm$ 0.60 & 69.00 $\pm$ 0.49 & 67.87 $\pm$ 0.40 & 61.31 $\pm$ 0.41\\
\textsc{SortPool} \cite{sortpool} & 71.87 $\pm$ 0.96 & 73.91 $\pm$ 0.72 & 68.74 $\pm$ 1.07 & 68.59 $\pm$ 0.67 & 63.44 $\pm$ 0.65\\
\midrule
\textsc{Diffpool} \cite{diffpool} & 66.95 $\pm$ 2.41 & 68.20 $\pm$ 2.02 & 62.32 $\pm$ 1.90 & 61.98 $\pm$ 1.98 & 60.60 $\pm$ 1.62\\
\textsc{TopK} \cite{topk} & 75.01 $\pm$ 0.86 & 71.10 $\pm$ 0.90 & 67.02 $\pm$ 2.25 & 66.12 $\pm$ 1.60 & 61.46 $\pm$ 0.84\\
\textsc{SAGPool} \cite{sag} & 76.45 $\pm$ 0.97 & 71.86 $\pm$ 0.97 & 67.45 $\pm$ 1.11 & 67.86 $\pm$ 1.41 & 61.73 $\pm$ 0.76\\
\midrule
\textsc{ASAP} (Ours) & $\mathbf{76.87 \pm 0.7}$ & $\mathbf{74.19 \pm 0.79}$ & $\mathbf{71.48 \pm 0.42}$ & $\mathbf{70.07 \pm 0.55}$ & $\mathbf{66.26 \pm 0.47}$\\
\bottomrule
\end{tabular}
}
\caption{\label{tab:comparison} Comparison of ASAP with previous global and hierarchical pooling. Average accuracy and standard deviation are reported over 20 random seeds. We observe that ASAP consistently outperforms all the baselines on all the datasets. Please refer to Sec. \ref{sec:results} for more details.}
\end{table*}
\subsection{Maintaining Graph Connectivity}
\label{ssec:graph-connectivity}
Following \cite{diffpool}, once the clusters have been sampled, we find the new adjacency matrix $A^{p}$ for the pooled graph $G^p$ using $\hat{A}^c$ and $\hat{S}$ in the following manner:
\begin{equation}
\label{eq:stas}
A^{p} = \hat{S}^{T} \hat{A}^c \hat{S}
\end{equation}
where $\hat{A}^c = A^c + I$. Equivalently, we can see that $A^{p}_{i,j} = \sum_{k,l} \hat{S}_{k,i} \hat{A}^c_{k,l} \hat{S}_{l,j}$. This formulation ensures that any two clusters $i$ and $j$ in $G^{p}$ are connected if there is any common node in the clusters $c_{h}(v_{i})$ and $c_{h}(v_{j})$ or if any of the constituent nodes in the clusters are neighbors in the original graph $G$ (Fig. \ref{fig:asap}(d)). Hence, the strength of the connection between clusters is determined by both the membership of the constituent nodes through $\hat{S}$ and the edge weights $A^c$. Note that $\hat{S}$ is a sparse matrix by formulation and hence the above operation can be implemented efficiently.
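Eq.~\eqref{eq:stas} is a plain matrix triple product. A dense sketch (illustrative; the actual implementation exploits the sparsity of $\hat{S}$) makes the connectivity property easy to check:

```python
def pooled_adjacency(S, A_hat):
    """A^p = S^T A_hat S on dense lists:
    A^p[i][j] = sum_{k,l} S[k][i] * A_hat[k][l] * S[l][j]."""
    n, m = len(S), len(S[0])
    return [[sum(S[k][i] * A_hat[k][l] * S[l][j] for k in range(n) for l in range(n))
             for j in range(m)] for i in range(m)]
```

For a three-node path with self-loops and two clusters sharing the middle node, the shared membership yields a positive edge weight between the two pooled clusters.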
\subsection{Graph Neural Networks}
Various formulations of GNNs have been proposed, using both spectral and non-spectral approaches. Spectral methods \cite{spectral-gcn-1,spectral-gcn-2} aim at defining the convolution operation via the Fourier transform and the graph Laplacian. These methods do not directly generalize to graphs with a different structure \cite{spec-not-general}. Non-spectral methods \cite{deterministic-graph-cluster-1,gcn,gin,monet,gConv} define convolution through a local neighborhood around nodes in the graph. They are faster than spectral methods and generalize easily to other graphs. GNNs can also be viewed as a \textit{message passing} algorithm where nodes iteratively aggregate messages from neighboring nodes through edges \cite{message_passing}.
\subsection{Pooling}
Pooling layers overcome GNN's inability to aggregate nodes hierarchically. Earlier pooling methods focused on deterministic graph clustering algorithms \cite{deterministic-graph-cluster-1,deterministic-graph-cluster-2,deterministic-graph-cluster-3}. \citeauthor{diffpool} introduced the first differentiable pooling operator, which outperformed the previous deterministic methods. Since then, new data-driven pooling methods have been proposed, both spectral \cite{eigen-pool,graclus} and non-spectral \cite{diffpool,topk}. Spectral methods aim at capturing the graph topology using eigen-decomposition algorithms. However, due to the higher computational requirements of spectral graph techniques, they do not scale easily to large graphs. Hence, we focus on non-spectral methods.
Pooling methods can further be divided into global and hierarchical pooling layers. Global pooling methods summarize the entire graph in a single step. Set2Set \cite{set2set} finds the importance of each node in the graph through iterative content-based attention. Global-Attention \cite{glob-att} uses an attention mechanism to aggregate nodes in the graph. SortPool \cite{sortpool} summarizes the graph by concatenating a few nodes after sorting them based on their features. Hierarchical pooling is used to capture the topological information of graphs. \textbf{DiffPool} forms a fixed number of clusters by aggregating nodes. It uses a GNN to compute a dense soft assignment matrix, making it infeasible for large graphs.
\begin{table}[!tbh]
\renewcommand{\arraystretch}{1.20}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{lcccc}
\toprule
\textbf{Property} & \textbf{DiffPool} & \textbf{TopK} & \textbf{SAGPool} & \textbf{ASAP} \\
\cmidrule{1-5}
Sparse & & \ding{51} & \ding{51} & \ding{51} \\
Node Aggregation & \ding{51} & & & \ding{51} \\
Soft Edge Weights & \ding{51} & & & \ding{51} \\
Variable number of clusters & & \ding{51} & \ding{51} & \ding{51} \\
\bottomrule
\end{tabular}
}
\caption{\label{tab:property} Properties desired in hierarchical pooling methods.}
\end{table}
\noindent \textbf{TopK} scores nodes based on a learnable projection vector and samples a fraction of the high-scoring nodes. It avoids node aggregation and the computation of a soft assignment matrix in order to maintain sparsity in graph operations. \textbf{SAGPool} improves upon TopK by using a GNN to take the graph structure into account while scoring nodes. Since TopK and SAGPool neither aggregate nodes nor compute soft edge weights, they are unable to preserve node and edge information effectively.
To address these limitations, we propose ASAP, which has all the desirable properties of hierarchical pooling without compromising on sparsity in graph operations. Please see Table~\ref{tab:property} for an overall comparison of hierarchical pooling methods. A further comparison of hierarchical architectures is presented in Sec. \ref{sec:discussion_comp}.
\section{Conclusions}
\vspace{-.5\baselineskip}
We have presented an abstract learning framework for synthesis that encompasses several
existing techniques that use learning or counter-example guided inductive synthesis to
create objects that satisfy a specification.
We were motivated by abstract interpretation~\cite{cc77} and how it
gives a general framework and notation for verification; our formalism is an attempt at such a generalization
for learning-based synthesis. The conditions we have proposed that the abstract concept spaces, hypotheses spaces,
and sample spaces need to satisfy to define a learning-based synthesis domain seem to be cogent
and general in forming a vocabulary for such approaches. We have also addressed various strategies
for convergent synthesis that generalize and extend existing techniques (again, in a similar
vein as to how widening and narrowing in abstract interpretation give recipes for building convergent
algorithms to compute fixed-points). We believe that the notation and general theorems herein
will bring more clarity, understanding, and reuse of learners in synthesis algorithms.
\subsection{Transfinite convergence}\label{sec:transfinite_convergence}
From Lemma~\ref{lem:progress-honesty} one can conclude that the
transfinite sequence of hypotheses constructed by the learner
converges to a target set.
\begin{theorem} \label{the:transfinite-convergence}
Let $\mathcal{S}$ be a complete sample lattice, $\mathcal{T}$ be realizable,
$\lambda$ be a consistent learner, and $\tau$ be a teacher. Then
there exists an ordinal $\alpha$ such that
$\lambda(S_{\tau,\lambda}^\alpha) \in \mathcal{T}$.
\end{theorem}
\begin{proof}
Let $\alpha$ be an ordinal with cardinality bigger than $|\mathcal{H}|$
(bigger than $|\mathcal{S}|$ also works). If
$\lambda(S_{\tau,\lambda}^\beta) \not\in \mathcal{T}$ for all $\beta
< \alpha$, then Lemma~\ref{lem:progress-honesty}~(a) implies that all
$\lambda(S_{\tau,\lambda}^\beta)$ for $\beta < \alpha$ are
pairwise different, which contradicts the cardinality assumption. \qed
\end{proof}
The above theorem ratifies the
choice of our definitions, and the proof (relying on Lemma~\ref{lem:progress-honesty}) crucially uses all aspects of our
definitions (the honesty and progress properties of the teacher, the condition imposed on $\kappa$ in an ALF, the notion of consistent
learners, etc.).
Convergence in finite time is clearly the more desirable notion, and we
propose tactics for designing learners that converge in
finite time.
For an ALF instance $(\mathcal{A},\mathcal{T})$, we say that a learner
$\lambda$ \emph{converges for a teacher $\tau$} if there is an $n
\in \mathbb{N}$ such that $\lambda(S_{\tau,\lambda}^n) \in \mathcal{T}$,
which means that $\lambda$ produces a target hypothesis after $n$
steps. We say that $\lambda$ converges if it converges for every
teacher. We say that $\lambda$ converges from a sample $S$ in case
the learning process starts from a sample $S \not= \bot_\mathrm{s}$ (i.e., if
$S_{\tau,\lambda}^0 = S$).
\subsubsection{Finite hypothesis spaces}\label{sec:finite_hypothesis_space}
We first note that if the hypothesis space (or the concept
space) is finite, then any consistent learner converges: by Lemma~\ref{lem:progress-honesty}, the learner always
makes progress, and hence never proposes two hypotheses that
correspond to the same concept. Consequently,
the learner only produces a finite number of hypotheses before finding one
that is in the target (or declaring that no such hypothesis exists).
There are several synthesis engines using learning that use finite hypothesis spaces. For example, Houdini~\cite{houdini} is a learner of \emph{conjunctions} over a fixed finite set of predicates and, hence, has a finite hypothesis space. Learning decision trees over purely Boolean attributes (not numerical)~\cite{DBLP:books/mk/Quinlan93} is also convergent because of finite hypothesis spaces, and this extends to the ICE learning model as well~\cite{ICEML}. Invariant generation for arrays and lists using \emph{elastic QDAs}~\cite{CAVQDA} also uses a convergence argument that relies on a finite hypothesis space.
\subsubsection{Occam Learners}\label{sec:occam_learner}
We now discuss the most robust strategy we know for convergence, based on the Occam's razor principle. Occam's razor advocates parsimony or simplicity~\cite{sep_simplicity}:
the simplest concept/theory that explains a set of observations is preferable, as a virtue in itself.
There are several learning algorithms that use parsimony as a learning bias in machine learning
(e.g., \emph{pruning} in decision-tree learning~\cite{mitchell}), though the general applicability of Occam's razor
in machine learning as a sound means to generalize is debatable~\cite{DBLP:journals/datamine/Domingos99}.
We now show that in \emph{iterative} learning, following Occam's principle leads
to convergence in finite time. However, it is not \emph{simplicity} itself that technically causes
convergence, but the existence of \emph{some} ordering of concepts that biases the learning.
Enumerative learners are a good example of this. In enumerative learning, the
learner enumerates hypotheses in some order, and always conjectures the first
consistent hypothesis. In an iterative learning-based synthesis
setting, such a learner always converges on some target concept, if one exists, in finite time.
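A generic enumerative learner can be sketched as follows. The teacher protocol (returning \texttt{None} to confirm a target hypothesis and a new example otherwise) and the \texttt{max\_rounds} bound are our own illustrative choices:

```python
def enumerative_learner(hypotheses, consistent, teacher, max_rounds=1000):
    """Always conjecture the first enumerated hypothesis consistent with the sample."""
    sample = []
    for _ in range(max_rounds):
        # First hypothesis (in enumeration order) consistent with all examples so far
        H = next(h for h in hypotheses if all(consistent(h, ex) for ex in sample))
        feedback = teacher(H)        # None: H is in the target set
        if feedback is None:
            return H
        sample.append(feedback)      # otherwise: a new example, joined to the sample
    raise RuntimeError("no convergence within max_rounds")
```

For instance, with thresholds $0,\ldots,9$ as hypotheses, a target set $\{t \mid t \ge 7\}$, and a teacher that answers each too-small conjecture $t$ with the example $t+1$, the learner converges to $7$.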
Requiring a total order of the hypotheses is in some situations too
strict. If, for example, the hypothesis space consists of
deterministic finite automata (DFAs), we could build a learner that
always produces a DFA with the smallest possible number of states that
is consistent with the given sample. However, the relation
$\preceq$ that compares DFAs w.r.t.\ their number of states is not
an ordering because there are different DFAs with the same number of
states.
In order to capture such situations, we work with a \emph{total
quasi-order} $\preceq$ on $\mathcal{H}$ instead of a total
order. A quasi-order (also called preorder) is a transitive and
reflexive relation. The relation being total means that $H \preceq
H'$ or $H' \preceq H$ for all $H,H' \in \mathcal{H}$. The difference
to an order relation is that $H \preceq H'$ and $H' \preceq H$
can hold in a quasi-order, even if $H \not= H'$.
In analogy to enumerations, we require that each hypothesis has only
finitely many hypotheses ``before'' it w.r.t.\ $\preceq$, as
expressed in the following definition.
\begin{definition}
A \emph{complexity ordering} is a total quasi-order $\preceq$ such
that for each $x \in \mathcal{H}$ the set $\{y \in \mathcal{H} \mid y \preceq
x\}$ is finite.
\end{definition}
The example of comparing DFAs with respect to their number of states is such a
complexity ordering.
\begin{definition}
A consistent learner that always constructs a smallest hypothesis with
respect to a complexity ordering $\preceq$ on $\mathcal{H}$ is called an
\emph{$\preceq$-Occam learner}.
\end{definition}
\begin{example} \label{ex:complexity-ordering}
Consider $\mathcal{H} = \mathcal{C}$ to be the interval domain over the integers
consisting of all intervals of the form $[l,r]$, where $l,r \in
\mathbb{Z} \cup \{-\infty,\infty\}$ and $l \le r$. We define $[l,r]
\preceq [l',r']$ if either $[l,r] = [-\infty,\infty]$ or
$\max\{|x| \mid x \in \{l,r\} \cap \mathbb{Z}\} \le \max\{|x| \mid
x \in \{l',r'\} \cap \mathbb{Z}\}$. For example, $[-4,\infty]
\preceq [1,7]$ because $4 \le 7$.
This ordering $\preceq$ satisfies the property that for each
interval $[l,r]$ the set $\{[l',r'] \mid [l',r'] \preceq [l,r]\}$
is finite (because there are only finitely many intervals using
integer constants with a bounded absolute value). A standard
positive/negative sample $S= (P,N)$ with $P,N
\subseteq \mathbb{N}$ is consistent with all intervals that
contain the elements from $P$ and do not contain an element from $N$.
A learner that maps $S$ to an interval that uses integers with the
smallest possible absolute value (while being consistent with $S$) is
an $\preceq$-Occam learner. For example, such a learner would map
the sample $(P=\{-2,5\},N=\{-8\})$ to the interval
$[-2,\infty]$. \hfill{\tikz \draw (0, 0) -| +(1ex, 1ex);}
\end{example}
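The interval learner of Example~\ref{ex:complexity-ordering} can be sketched as a search through complexity levels; the encoding of $\pm\infty$ as floats and the tie-breaking within a level are illustrative assumptions:

```python
import itertools

INF = float("inf")

def consistent(l, r, P, N):
    """Interval [l, r] contains all positives and no negatives."""
    return all(l <= p <= r for p in P) and not any(l <= q <= r for q in N)

def occam_interval_learner(P, N):
    """Return a consistent interval whose finite endpoints have the smallest
    possible absolute value; [-inf, inf] is the unique minimum of the ordering."""
    if consistent(-INF, INF, P, N):
        return (-INF, INF)
    for c in itertools.count(0):                 # complexity levels, in order
        ends = [-INF, INF] + list(range(-c, c + 1))
        for l in ends:
            for r in ends:
                finite = [e for e in (l, r) if abs(e) != INF]
                if l <= r and finite and max(abs(e) for e in finite) == c \
                        and consistent(l, r, P, N):
                    return (l, r)
```

On the sample of the example, $(P=\{-2,5\}, N=\{-8\})$, this learner returns $[-2,\infty]$.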
The next theorem shows that $\preceq$-Occam learners ensure
convergence in finite time.
\begin{theorem} \label{the:occam-learner}
If $\mathcal{T}$ is realizable and $\lambda$ is a $\preceq$-Occam
learner, then $\lambda$ converges. Furthermore, the
learner converges to a $\preceq$-minimal target element.
\end{theorem}
\begin{proof}
Pick any target element $T \in \mathcal{H}$, which exists because
$\mathcal{T}$ is realizable. Since $\tau$ is honest,
$T \in \kappa(S_{\tau,\lambda}^n)$ for all $n$ by
Lemma~\ref{lem:progress-honesty}(b). Thus, on the iterated sample
sequence, a $\preceq$-Occam learner never constructs an element
which is strictly above $T$ w.r.t.\ $\preceq$. Since there are only
finitely many hypotheses that are not strictly above $T$, and since
the learner always makes progress according to
Lemma~\ref{lem:progress-honesty}, it converges to a target element in
finitely many steps, which itself does not have any other target
elements below, and thus is $\preceq$-minimal. \qed
\end{proof}
There are several existing algorithms in the literature that use such
orderings to ensure convergence. Several enumeration-based solvers are
convergent because of the ordering of enumeration (e.g., the
generic enumerative solver for SyGuS
problems~\cite{DBLP:conf/fmcad/AlurBJMRSSSTU13,DBLP:series/natosec/AlurBDF0JKMMRSSSSTU15}).
The invariant-generation ranging over conditional linear arithmetic
expressions described in~\cite{DBLP:conf/cav/0001LMN14} ensures
convergence using a total quasi-order based on the number of
conditionals and the values of the coefficients. The learner uses
templates to restrict the number of conditionals and a
constraint-solver to find small coefficients for linear constraints.
\subsubsection{Convergence using Tractable Well Founded Quasi-Orders}
The third strategy for convergence in finite time that we propose is based on well
founded quasi-orders, or simply well-quasi-orders.
Interestingly, we know of no existing learning algorithms in the literature
that use this recipe for convergence (a technique of similar flavor is used in \cite{Blum92}). We exhibit in this section a learning
algorithm for intervals and for conjunctions of inequalities of numerical attributes based on this recipe.
A salient feature of this recipe is that the convergence actually uses the
samples returned by the teacher in order to converge (the first two
recipes articulated above, on the other hand, would even guarantee convergence if the teacher
just replies yes/no when asked whether the hypothesis is in the target set).
A binary relation $\preceq$ over some set $X$ is a well-quasi-order if it is transitive and
reflexive, and for each infinite sequence $x_0,x_1,x_2, \ldots$ there
are indices $i < j$ such that $x_i \preceq x_j$. In other words, there
are no infinite descending chains and no infinite anti-chains for
$\preceq$.
\begin{definition}
Let $(\mathcal{A}, \mathcal{T})$ be an ALF instance with
$\mathcal{A} = (\mathcal{C}, \mathcal{H}, (\mathcal{S},\sqsubseteq_\mathrm{s},\sqcup,\bot_\mathrm{s}) , \gamma, \kappa)$. A subset of hypotheses $\mathcal{W} \subseteq \mathcal{H}$ is called
\emph{wqo-tractable} if
\begin{enumerate}[nosep]
\item there is a well-quasi-order $\preceq_{\mathcal{W}}$ on
$\mathcal{W}$, and
\item for each realizable sample $S \in \mathcal{S}$ with
$\kappa_\mathcal{H}(S) \subseteq \mathcal{W}$, there is some
$\preceq_\mathcal{W}$-maximal hypothesis in $\mathcal{W}$ that is consistent with $S$.
\end{enumerate}
\end{definition}
\begin{example} \label{ex:wqo-tractable}
Consider again the example of intervals over $\mathbb{Z} \cup
\{-\infty, \infty\}$ with samples of the form $S = (P,N)$ (see
Example~\ref{ex:complexity-ordering}). Let $p \in \mathbb{Z}$ be a point
and let $\mathcal{I}_p$ be the set of all intervals that contain the
point $p$. Then, $\mathcal{I}_p$ is wqo-tractable with the standard
inclusion relation for intervals, defined by $[\ell,r] \subseteq [\ell',r']$
iff $\ell \ge \ell'$ and $r \le r'$. Restricted to intervals that
contain the point $p$, this is the product of two well-founded orders
on the sets $\{x \in \mathbb{Z} \mid x \le p\}$ and $\{x \in \mathbb{Z} \mid x \ge
p\}$, and as such is itself well-founded \cite[Theorem~2.3]{Higman52}.
%
Furthermore, for each realizable sample $(P,N)$, there is a unique
maximal interval over $\mathbb{Z} \cup \{-\infty,\infty\}$ that
contains $P$ and excludes $N$. Hence, the two conditions of
wqo-tractability are satisfied.
(Note that this ordering on the set of \emph{all} intervals is not a well-quasi-order;
the sequence $[-\infty, 0], [-\infty, -1], [-\infty, -2], \ldots$ witnesses this.) \hfill{\tikz \draw (0, 0) -| +(1ex, 1ex);}
\end{example}
On a wqo-tractable $\mathcal{W} \subseteq \mathcal{H}$ a learner can ensure
convergence by always proposing a maximal consistent hypothesis, as
stated in the following lemma.
\begin{lemma} \label{lem:wqo-tractable}
Let $\mathcal{T}$ be realizable, $\mathcal{W} \subseteq \mathcal{H}$ be wqo-tractable
with well-quasi-order $\preceq_\mathcal{W}$, and $S$ be a sample such that
$\kappa_\mathcal{H}(S) \subseteq \mathcal{W}$. Then, there exists a learner that
converges from the sample $S$.
\end{lemma}
\begin{proof}
For any sample $S'$ with $S \sqsubseteq_\mathrm{s} S'$, the set
$\kappa_\mathcal{H}(S')$ of hypotheses consistent with $S'$ is a
subset of $\mathcal{W}$. Therefore, there is some $\preceq_\mathcal{W}$-maximal element in
$\mathcal{W}$ that is consistent with $S'$. The strategy of the learner is to
return such a maximal hypothesis. Assume, for the sake of
contradiction, that such a learner does not converge from $S$ for some
teacher $\tau$. Let $H_0,H_1, \ldots$ be the infinite sequence of
hypotheses produced by $\lambda$ and $\tau$ starting from $S$,
and let $S_0,S_1,S_2, \ldots$ be the corresponding sequence of samples
(with $S_0 = S$). The well-quasi-ordering property of $\preceq_\mathcal{W}$ implies that
there are $i < j$ with $H_i \preceq_\mathcal{W} H_j$. However, $S_i \sqsubseteq_\mathrm{s} S_j$
because $S_j$ is obtained from $S_i$ by joining answers of the
teacher. Therefore, $H_j$ is also consistent with $S_i$
(Remark~\ref{rem:consistency-monotonic}). This contradicts the
choice of $H_i$ as a maximal hypothesis that is consistent with $S_i$. \qed
\end{proof}
As shown in Example~\ref{ex:wqo-tractable}, for each $p \in
\mathbb{Z}$, the set $\mathcal{I}_p$ of intervals containing $p$ is
wqo-tractable. Using this, we can build a convergent learner starting
from the empty sample $\bot_\mathrm{s}$. First, the learner proposes the empty interval;
the teacher must either confirm that this is a target or return a positive example, that is, a point $p$
that is contained in every target interval. Hence, the set of hypotheses
consistent with this sample is wqo-tractable and the learner can
converge from here on as stated in Lemma~\ref{lem:wqo-tractable}.
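This convergent strategy for intervals can be sketched as follows; the teacher interface (returning \texttt{None} for acceptance or a labeled counterexample point) is an illustrative assumption:

```python
INF = float("inf")

def maximal_interval(P, N):
    """Unique maximal interval containing all of P and excluding all of N
    (assumes P nonempty and the sample realizable)."""
    lo, hi = min(P), max(P)
    below = [q for q in N if q < lo]
    above = [q for q in N if q > hi]
    l = max(below) + 1 if below else -INF
    r = min(above) - 1 if above else INF
    return (l, r)

def learn(teacher):
    """Propose the empty interval first; afterwards, always a maximal consistent one."""
    P, N = [], []
    hypothesis = None                  # None encodes the empty interval
    while True:
        feedback = teacher(hypothesis)
        if feedback is None:           # teacher confirms a target hypothesis
            return hypothesis
        kind, point = feedback         # ("pos", p) or ("neg", p) counterexample
        (P if kind == "pos" else N).append(point)
        hypothesis = maximal_interval(P, N)
```

For a teacher whose targets are the intervals containing $3$ and excluding $5$, the learner converges in three rounds to $[-\infty, 4]$.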
In general, the strategy for the learner is to force in one step a sample $S$ such that the set $\kappa_\mathcal{H}(S) =
\mathcal{I}_p$ is wqo-tractable. This is generalized in the following
definition.
\begin{definition}
We say that an \emph{ALF is wqo-tractable} if there is a finite set
$\{H_1, \ldots, H_n\}$ of hypotheses such that $\kappa_\mathcal{H}(S)$
is wqo-tractable for all samples $S$ that are inconsistent with all
$H_i$, that is, $\kappa_\mathcal{H}(S) \cap
\{H_1, \ldots, H_n\} = \emptyset$.
\end{definition}
As explained above, the interval
ALF is wqo-tractable with the set $\{H_1, \ldots, H_n\}$ consisting
only of the empty interval.
Combining all the previous observations, we obtain convergence for
wqo-tractable ALFs.
\begin{theorem} \label{the:wqo-learner}
For every ALF instance $(\mathcal{A},\mathcal{T})$ such that $\mathcal{A}$ is wqo-tractable
and $\mathcal{T}$ is realizable, there is a convergent learner.
\end{theorem}
\begin{proof}
A convergent learner can be built as follows. Let $\{H_1, \ldots,
H_n\}$ be the finite set of hypotheses from the definition of
wqo-tractability of an ALF.
\begin{itemize}
\item As long as the current sample is consistent with some $H_i$,
propose such an $H_i$.
\item Otherwise, the current sample $S$ is such that
$\kappa_\mathcal{H}(S)$ is wqo-tractable, and thus the learner
can apply the strategy from Lemma~\ref{lem:wqo-tractable}. \qed
\end{itemize}
\end{proof}
\paragraph{A convergent learner for conjunctive linear inequality
constraints.} We have illustrated wqo-tractability for intervals in
Example~\ref{ex:wqo-tractable}. We finish this section by showing that
this generalizes to higher dimensions, that is, to the domain of
$n$-dimensional hyperrectangles in $(\mathbb{Z} \cup
\{-\infty,\infty\})^n$, which form the hypothesis space in this
example. Each such hyperrectangle is a product of $n$ intervals over
$\mathbb{Z} \cup \{-\infty,\infty\}$.
Note that hyperrectangles can, e.g., be used to model
conjunctive linear inequality constraints over a set $f_1, \ldots, f_n:
\mathbb{Z}^d \rightarrow \mathbb{Z}$ of numerical
attributes.
The sample space depends on the type of target specification that we
are interested in. We consider here the typical sample space of
positive and negative samples (however, the reasoning below also works
for other sample spaces, e.g., ICE sample spaces that additionally
include implications). So, samples are of the form $S = (P,N)$,
where $P,N$ are sets of points in $\mathbb{Z}^n$ interpreted as
positive and negative examples (as for intervals, see
Example~\ref{ex:complexity-ordering}).
The following lemma provides the ingredients for building a convergent
learner based on wqo-tractability.
\begin{lemma}\label{lem:wqo-hyperrectangle}
\begin{enumerate}
\item For each realizable sample $S = (P,N)$, there are maximal
hyperrectangles that are consistent with $S$ (possibly more than one).
\item For each $p \in \mathbb{Z}^n$, the set $\mathcal{R}_p$ of
hyperrectangles containing $p$ is well-quasi-ordered by inclusion.
\end{enumerate}
\end{lemma}
\begin{proof}
For the first claim, note that for each increasing chain $R_0
\subseteq R_1 \subseteq \cdots$ of hyperrectangles that are all
consistent with $S$, the union $R := \bigcup_{i \ge 0} R_i$ is also a
hyperrectangle that is consistent with $S$. More precisely, if $R_i =
[l_1^i,r_1^i] \times \cdots \times [l_n^i,r_n^i]$, then $R = [l_1,r_1]
\times \cdots \times [l_n,r_n]$ with $l_j = \inf\{l_j^i \mid i \ge 0\}$
and $r_j = \sup\{r_j^i \mid i \ge 0\}$.
Furthermore, if the chain is strictly increasing, then there exists
$j$ such that $l_j = -\infty$ and all $l_j^i \not= -\infty$, or $r_j =
\infty$ and all $r_j^i \not= \infty$. Hence, if $R$ itself can be
extended again into an infinite strictly increasing chain, then the
union of this chain will contain an additional $\infty$ or
$-\infty$. This can happen at most $2n$ times before reaching the
hyperrectangle containing all points, which is certainly
maximal. Thus, there has to be a maximal hyperrectangle consistent
with $S$.
We now prove the second claim. For a point $p = (p_1, \ldots, p_n)$,
the set $\mathcal{R}_p$ is the product $\mathcal{R}_p =
\mathcal{I}_{p_1} \times \cdots \times \mathcal{I}_{p_n}$ of the sets
of intervals containing the points $p_i$. Furthermore, the inclusion
order for hyperrectangles is the $n$-fold product of the inclusion
order for intervals. Thus, the inclusion order on $\mathcal{R}_p$ is a
well-quasi-order because it is a product of well-quasi-orders
\cite{Higman52}. \qed
\end{proof}
We conclude that the following type of learner is convergent: for
the empty sample, propose the empty hyperrectangle; for every
non-empty sample $S$, propose a maximal hyperrectangle consistent with
$S$.
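To make this learner concrete, the following is a hypothetical Python sketch (not from the paper) of computing a maximal consistent hyperrectangle for a non-empty realizable sample: start from the bounding box of the positive points and greedily push each face outward, stopping just short of the nearest blocking negative point, or extending to infinity if no negative point blocks that face.

```python
import math

def consistent(rect, P, N):
    """rect is consistent with sample (P, N) iff it contains every
    positive point and no negative point."""
    inside = lambda p: all(lo <= x <= hi for x, (lo, hi) in zip(p, rect))
    return all(inside(p) for p in P) and not any(inside(q) for q in N)

def maximal_rectangle(P, N):
    """Return a maximal hyperrectangle consistent with (P, N), assuming
    P is non-empty and the sample is realizable. A face's extension is
    blocked only by negatives lying beyond it in that dimension but
    inside the rectangle in all other dimensions; widening later faces
    only adds blockers, so one greedy pass yields a maximal rectangle."""
    n = len(P[0])
    rect = [[min(p[i] for p in P), max(p[i] for p in P)] for i in range(n)]
    for i in range(n):
        for side, direction in ((0, -1), (1, +1)):
            blockers = [q[i] for q in N
                        if all(rect[j][0] <= q[j] <= rect[j][1]
                               for j in range(n) if j != i)
                        and (q[i] < rect[i][0] if direction < 0
                             else q[i] > rect[i][1])]
            if not blockers:
                rect[i][side] = -math.inf if direction < 0 else math.inf
            elif direction < 0:
                rect[i][side] = max(blockers) + 1  # just past nearest blocker
            else:
                rect[i][side] = min(blockers) - 1
    return [tuple(b) for b in rect]
```

The per-face extension mirrors the interval case of Example~\ref{ex:wqo-tractable}, applied independently in each dimension.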
\section{Synthesis Problems Modeled as ALFs}\label{sec:examples}
In this section, we list a host of existing synthesis problems and algorithms that can be seen as ALFs. Specifically, we consider examples from the areas of program verification and program synthesis.
We encourage the reader to look up the referenced algorithms to better understand their mapping into our framework.
Moreover, we present new ALF-based techniques that use learning to compute fixed-points in the setting of abstract interpretation.
\subsection{Program Verification} \label{subsec:program_verification}
While program verification itself does not directly relate to
synthesis, most program verification techniques require some form of
help from the programmer before the analysis can be
automated. Consequently, synthesizing objects that replace manual help
has been an area of active research. We here focus on \emph{learning
loop invariants}. Given adequate
invariants (in terms of pre/post-conditions, loop invariants, etc.), the rest of the verification process can often be completely
automated~\cite{floyd,hoare} using logical constraint-solvers~\cite{z3,cvc4}.
For the purposes of this article, let us consider while-programs with a single loop. Given a pre- and post-condition, assertions, and contracts for functions called, the problem is to find a loop invariant that proves the post-condition and assertions (assuming the program is correct).
\subsubsection{Invariant synthesis using the ICE learning model}
Given a program with a single loop whose loop invariant we want to synthesize, there are \emph{many} inductive invariants that prove
the assertions in the program correct---such an invariant is characterized by the following three properties:
\begin{enumerate*}
\item that it includes the states when the loop is entered for the first time,
\item that it excludes the states that immediately exit the loop, reach the end of the program, and do not satisfy the post-condition, and
\item that it is inductive (i.e., from any state satisfying the invariant, if we execute the loop body once, the resulting state is also in the invariant).
\end{enumerate*}
The teacher knows these properties, and must reply to conjectured hypotheses of the learner using these properties. Violations of properties (a) and (b) are usually easy to check using a constraint solver, and result in a \emph{positive} and a \emph{negative} concrete configuration as a sample, respectively. However, when inductiveness fails, the obvious counterexample is a \emph{pair} of configurations $(x,y)$, where $x$ is in the hypothesis but $y$ is not, and where the program state $x$ evolves to the state $y$ across one execution of the loop body.
The work by Garg et~al.~\cite{DBLP:conf/cav/0001LMN14} hence proposes what they call the \emph{ICE model} (for implication counterexamples), where the learner learns from positive, negative, and implication counterexamples. The authors' claim is that without implication counterexamples, the teacher is stuck when presented with a hypothesis that satisfies all the properties of being an invariant save the inductiveness property.
From the described components we build an ALF $\mathcal{A}_{\mathrm{ICE}} =
(\mathcal{C}, \mathcal{H}, \gamma, \mathcal{S}, \kappa)$, where
$\mathcal{C}$ is the set of all subsets of program configurations, the hypothesis space $\mathcal{H}$ is the language used to describe the invariant, and the sample space is defined as follows:
\begin{itemize}
\item A sample is of the form $S = (P, N, I)$, where $P,N$ are sets of
program configurations (interpreted as positive and negative
examples), and $I$ is a set of pairs of program configurations
(interpreted as implications).
\item A set $C \in \mathcal{C}$ of program configurations is consistent
with $(P, N, I)$ if $P \subseteq C$, $N \cap C = \emptyset$, and if
$(c,c') \in I$ and $c \in C$, then also $c' \in C$.
\item The order on samples is defined by component-wise set inclusion (i.e., $(P,N,I) \sqsubseteq_\mathrm{s} (P',N',I')$ if $P \subseteq P'$, $N \subseteq N'$, and $I \subseteq I'$).
\item The join is the component-wise union, and $\bot_\mathrm{s} = (\emptyset,\emptyset,\emptyset)$.
\end{itemize}
Since this sample space contains implications in addition to the standard positive and negative examples, we refer to it as an \emph{ICE sample space}.
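As an illustration, the consistency relation and the join of this ICE sample space can be transcribed directly (a hypothetical sketch, with program configurations modeled as hashable Python values):

```python
def ice_consistent(C, sample):
    """A set C of program configurations is consistent with an ICE
    sample (P, N, I) iff P is contained in C, N is disjoint from C,
    and C is closed under the implication pairs in I."""
    P, N, I = sample
    return (P <= C
            and not (N & C)
            and all(c2 in C for (c1, c2) in I if c1 in C))

def join(s1, s2):
    """Component-wise union; the bottom sample is three empty sets."""
    return tuple(a | b for a, b in zip(s1, s2))
```

Note that an implication pair $(c, c')$ with $c \notin C$ imposes no constraint on $C$, matching the definition above.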
We can now show that there is a teacher for these ALF instances, because
a teacher can refute any hypothesis made by a learner with a positive, negative,
or implication counterexample, depending on which property of invariants is violated.
\begin{proposition} \label{pro:ice-teacher}
There is a teacher for ALF instances of the form $(\mathcal{A}_{\mathrm{ICE}},
\mathcal{T}_{\mathit{Inv}})$.
\end{proposition}
Furthermore, we can show that having only positive and negative samples precludes the existence of teachers. In fact, we can show that if $\mathcal{C} = 2^D$ (for a domain $D$) and the sample space $\mathcal{S}$ consists of only positive and negative examples in $D$, then a target set $\mathcal{T}$ has a teacher only if it is defined in terms of excluding a set $B$ and including a set $G$.
\begin{lemma}
Let $C=H=2^D$, $\gamma = id$, $S = \{(P, N) \mid P, N \subseteq D \}$, and
$\kappa((P,N)) = \{ R \subseteq D \mid P \subseteq R \wedge R \cap N = \emptyset \}$.
Let $\mathcal{T} \subseteq C$ be a target. If there exists a teacher for $\mathcal{T}$,
then there must exist sets $B,G \subseteq D$ such that
$\mathcal{T} = \{ R \subseteq D \mid B \cap R = \emptyset \text{ and } G \subseteq R\}$.
\end{lemma}
\begin{proof}
Assume that there exists a teacher for the target set $\mathcal{T}$, and let $G$ and $B$ be the union of all positive examples and the union of all negative examples, respectively, that the teacher returns. We claim that $\mathcal{T} = \{ R \subseteq D \mid B \cap R = \emptyset \text{ and } G \subseteq R\}$. First, let $R \in \mathcal{T}$, and assume towards a contradiction that $B \cap R \not = \emptyset$ or $G \not \subseteq R$. If $B \cap R \not = \emptyset$, then there is some $b \in R$ that was returned as a negative counterexample for some hypothesis. Since $R \in \mathcal{T}$ is not consistent with this negative example $b$, this contradicts the requirement that the teacher is honest. Similarly, if $G \not \subseteq R$, then there is some $g \in G \setminus R$ that was returned as a positive counterexample, which contradicts the teacher's honesty. Conversely, if $R \notin \mathcal{T}$, then the teacher refutes the hypothesis $R$ with some counterexample: a positive example $g \notin R$ (so $G \not\subseteq R$) or a negative example $b \in R$ (so $B \cap R \not= \emptyset$). Hence, $R$ does not belong to the set on the right-hand side, which proves the claim.
\end{proof}
The above proves that positive and negative samples are not sufficient for learning invariants,
as invariants cannot be defined as all sets that exclude a set of states and include a set of states.\par\vskip\baselineskip
There are several ICE-based invariant synthesis formalisms that we can capture. First, Garg et~al.~\cite{DBLP:conf/cav/0001LMN14} have considered arithmetic invariants over a set of integer variables $x_1, \ldots, x_\ell$ of the form $\bigvee_{i=1}^n \bigwedge_{j=1}^{m_i} \bigl( \sum_{k=1}^\ell a_k^{i,j} x_k \leq c^{i,j} \bigr)$, $a_k^{i, j} \in \{{-1}, 0, 1\}$, where the learner is implemented using a constraint solver that finds smallest invariants that fit the sample. This is accurately modeled in our framework, as in the ICE formulation above, with the hypothesis space being the set of all formulas of this form. The fact that Garg et~al.'s learner produces smallest invariants makes it an Occam learner in the sense of Section~\ref{sec:occam_learner} and, hence, it converges in finite time.
The approach proposed by Sharma and Aiken~\cite{DBLP:conf/cav/0001A14}, {\sc C2I}, is also an ICE-learner, except that the learner uses \emph{stochastic search} based on a Metropolis--Hastings MCMC (Markov chain Monte Carlo) algorithm, which again can be seen as an ALF.
We can also see the work by Garg et~al.~\cite{CAVQDA} on synthesizing quantified invariants for linear data structures such as lists and arrays as ALFs. This framework can infer quantified invariants of the form
\[ \forall y_1, y_2 \colon (y_1 \leq y_2 \leq i) \Rightarrow a[y_1] \leq a[y_2]. \]
However, Garg et~al.\ do not represent sets of configurations by means of logical formulas (as shown above) but use an automata-theoretic approach, where a special class of automata, called \emph{quantified data automata (QDAs)}, represent such logical invariants; hence, these QDAs form the hypothesis space in the ALF. The sample space there is also unusual: a sample (modeling a program configuration consisting of arrays or lists) is a \emph{set of valuation words}, where each such word encodes the information about the array (or list) for specially quantified pointer variables pointing into the heap, and where data-formulas state conditions on the keys stored at these locations.
ALFs can also capture the ICE-framework described by Neider~\cite{daniel_phd}, where invariants are learned in the context of \emph{regular model-checking}~\cite{DBLP:conf/cav/BouajjaniJNT00}. In regular model checking, program configurations are captured using (finite---but unbounded---or even infinite) words, and sets of configurations are captured using finite automata. Consequently, the hypothesis space is the set of all DFAs (over an a~priori chosen, fixed alphabet), and the sample space is an ICE-sample consisting of configurations modeled as words. The learner proposed by Neider constructs consistent DFAs of minimal size and, hence, is an Occam learner that converges in finite time (cf.\ Section~\ref{sec:occam_learner}).
\par\vskip\baselineskip
We now turn to two other invariant-generation frameworks that skirt the ICE model.
\subsubsection{Houdini}
The Houdini algorithm~\cite{houdini} is a learning algorithm for synthesizing invariants that avoids the difficulties of learning from ICE samples.
Given a finite set of \emph{predicates} $P$, Houdini learns an invariant that is expressible as a conjunction of some subset
of predicates (note that the hypothesis space is finite but exponential in the number of predicates).
Houdini learns an invariant in time \emph{polynomial} in the number of predicates (and in a linear number of rounds) and
is implemented in the Boogie program verifier~\cite{boogie}. It is widely used
(for example, in verifying device drivers~\cite{DBLP:conf/sigsoft/LalQ14,DBLP:conf/cav/LalQL12} and in race detection in GPU kernels~\cite{DBLP:conf/oopsla/BettsCDQT12}).
The setup here can be modeled as an ALF: we take the concept space $\mathcal{C}$ to be all subsets of program configurations, and the hypothesis space $\mathcal{H}$ to be the set of all conjunctions of subsets of predicates in $P$, with the map $\gamma$ mapping each conjunctive formula in $\mathcal{H}$ to the set of all configurations that satisfy the predicates mentioned in the conjunction. We take the sample space to be the ICE sample space, where each sample is a valuation $v$ over $P$ (indicating which predicates are satisfied) and where implication counterexamples are pairs of valuations.
The Houdini learning algorithm itself is the classical conjunctive learning algorithm for positive and negative samples (see \cite{KearnsV94}), but its mechanics are such that it works for ICE samples as well. More precisely, the Houdini algorithm always creates the \emph{semantically smallest} formula that satisfies the sample (it hence starts with a conjunction of all predicates, and in each round ``knocks off'' predicates that are violated by positive samples returned). Since Houdini always returns the semantically-smallest conjunction of predicates, it will never receive a negative counterexample
(assuming the program is correct and has a conjunctive invariant over $P$). Furthermore, for an implication counterexample $(v,v')$, the algorithm knows that since it proposed the semantically smallest conjunction of predicates, $v$ cannot be made negative; hence it treats $v'$ as a positive counterexample. Houdini converges since the hypothesis space is finite (matching the first recipe for convergence we outlined in Section~\ref{sec:finite_hypothesis_space}); in fact, it converges in linear number of rounds since in each round at least one predicate is removed from the hypothesized invariant.
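The round structure just described can be sketched as follows (a hypothetical reconstruction of the Houdini loop, not the Boogie implementation; valuations are modeled as sets of the predicates that hold in them):

```python
def houdini(predicates, teacher):
    """Conjunctive ICE learner: start with the conjunction of all
    predicates and drop those falsified by positive examples. Since the
    hypothesis is always the semantically smallest conjunction, negative
    counterexamples never occur, and for an implication (v, v') the
    consequent v' is treated as positive. Terminates after at most
    |predicates| + 1 rounds, since each failed round drops a predicate."""
    conj = set(predicates)
    while True:
        kind, ex = teacher(conj)
        if kind == 'ok':
            return conj
        if kind == 'implication':
            _, ex = ex                 # keep the consequent as positive
        conj &= ex                     # drop predicates not holding in ex

# A toy teacher whose target invariant is the conjunction of a and b:
def toy_teacher(conj):
    positives = [{'a', 'b', 'c'}]
    implications = [({'a', 'b', 'c'}, {'a', 'b'})]
    for p in positives:
        if not conj <= p:
            return ('positive', p)
    for v, v2 in implications:
        if conj <= v and not conj <= v2:
            return ('implication', (v, v2))
    return ('ok', None)
```

In the toy run, the implication counterexample forces the learner to drop the predicate $c$, after which the hypothesis is accepted.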
\subsubsection{Learning invariants in regular model-checking using witnesses}
The learning-to-verify project reported in~\cite{DBLP:conf/icfem/VardhanSVA04,DBLP:conf/fsttcs/VardhanSVA04,DBLP:conf/tacas/VardhanSVA05,DBLP:conf/kbse/VardhanV05} leverages machine learning to the verification of infinite state systems that result from processes with FIFO queues,
but skirts the ICE model using a different idea.
The key idea is to view the identification of the reachable states of such a system as a machine learning problem instead of computing this set iteratively (which, in general, requires techniques such as acceleration or widening).
In particular, we consider the work of Vardhan et~al.~\cite{DBLP:conf/icfem/VardhanSVA04} and show that this is an instantiation of our abstract learning framework. The key idea in this work is to represent configurations as traces through the system and to add a notion of \emph{witness} to this description, resulting in so-called annotated traces. The teacher, when receiving a set of annotated traces, can actually check whether the configurations are reachable based on the witnesses (a witness can be, say, the length of the execution or the execution itself) and, consequently, the model allows learning from such traces directly.
Indeed, this approach can be modeled by an ALF: the concept space consists of subsets of configurations, the target space consists of the set of reachable configurations, the hypothesis space consists of automata over annotated traces, and the sample space consists of positive and negatively labeled annotated traces.
\subsection{Synthesis of Fixpoints in Abstract Domains}\label{subsec:abstract}
\input{ice-for-abstract-interpretation}
\subsection{Program Synthesis}
In this section, we study several examples of learning-based program synthesis, which include synthesizing program expressions,
expressions to be plugged into program sketches, snippets of programs, etc., and show how they can be modeled as ALFs.
\subsubsection{End-user synthesis from examples: Flashfill}
One application of synthesis is to help end-users program using examples. A prime example of this is \textsc{Flashfill} by Gulwani et~al.~\cite{DBLP:conf/popl/Gulwani11}, where the authors show how string-manipulation macros can be synthesized from user-given input-output examples in the context of Microsoft Excel spreadsheets. Flashfill can be seen as an ALF: the concept space consists of all functions from strings to strings, the hypothesis space consists of all string-manipulation macros, and the sample space consists of sets of input-output examples for such functions. The consistency relation $\kappa$ maps each sample to all functions that agree with the sample. The role of the teacher is played by the \emph{user}: the user has some function in mind and gives new input-output examples whenever the learner returns a hypothesis that she is not satisfied with. The learning algorithm here is based on version-space algebras (which, intuitively, compactly represent \emph{all} possible macros of limited size that are consistent with the sample) and in each round proposes a simple macro from this collection.
\subsubsection{Completing sketches and the SyGuS solvers}
The sketch-based synthesis approach~\cite{armando_thesis} is another prominent synthesis application, where programmers write partial programs with holes and a system automatically synthesizes expressions or programs for these holes so that a specification (expressed using input-output pairs or logical assertions) is satisfied. The key idea here is that given a sketch with a specification, we need expressions for the holes such that \emph{for every possible input}, the specification holds. This roughly has the form
$\exists \vec{e}. \forall \vec{x} \psi (\vec{e}, \vec{x})$, where $\vec{e}$ are the expressions to synthesize and
$\vec{x}$ are the inputs to the program.
The Sketch system works by
\begin{enumerate*}
\item unfolding loops a finite number of times, hence, bounding the length of executions, and
\item encoding the choice of expressions $\vec{e}$ to be synthesized using bits (typically using templates and representing integers by a small number of bits).
\end{enumerate*}
For the synthesis step, the Sketch system implements a CEGIS (counterexample-guided inductive synthesis) technique using SAT solving, whose underlying idea is to learn the expressions from examples using only a SAT solver. The CEGIS technique works in rounds: the learner proposes hypothesis expressions and the teacher checks whether
$\forall \vec{x} \psi (\vec{e}, \vec{x})$ holds (using SAT queries) and if not, returns a valuation for $\vec{x}$ as a counterexample. Subsequently, the learner asks, again using a SAT query, whether there exists a valuation for the bits encoding the expressions such that $\psi (\vec{e}, \vec{x})$ holds for every valuation of $\vec{x}$ returned by the teacher thus far; the resulting expressions are the hypotheses for the next round. Note that the use of samples avoids quantifier alternation both in the teacher and the learner.
The above system can be modeled as an ALF. The concept space consists of tuples of functions modeling the various expressions to synthesize, the hypothesis space is the set of expressions (or their bit encodings), the map $\gamma$ gives meaning to these expressions (or encodings), and the sample space can be seen as the set of \emph{grounded formulae} of the form $\psi(\vec{e}, \vec{v})$ where the variables $\vec{x}$ have been substituted with a concrete valuation. The relation $\kappa$ maps such a sample to the set of all expressions $\vec{f}$ such that the formulas in the sample all evaluate to true if $\vec{f}$ is substituted for $\vec{e}$. The Sketch learner can be seen as a learner in this ALF framework that uses calls to a SAT solver to find hypothesis expressions consistent with the sample.
Since expressions are encoded by a finite number of bits, the hypothesis space is finite, and the Sketch learner converges in finite time (cf.\ Section~\ref{sec:finite_hypothesis_space}).
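The CEGIS loop described above can be sketched generically; this is a hypothetical illustration in which brute-force search over a finite candidate space stands in for the SAT queries of the actual Sketch system:

```python
def cegis(candidates, inputs, phi):
    """Learner/teacher loop over a finite hypothesis space.
    candidates: possible expressions e; inputs: possible valuations x;
    phi(e, x): the (decidable) specification psi(e, x)."""
    samples = []                       # counterexample inputs so far
    while True:
        # Learner: any candidate consistent with all grounded samples
        # (in Sketch, this step is a SAT query over the bit encoding).
        e = next(c for c in candidates if all(phi(c, v) for v in samples))
        # Teacher: search for an input falsifying the hypothesis.
        cex = next((x for x in inputs if not phi(e, x)), None)
        if cex is None:
            return e                   # hypothesis meets the specification
        samples.append(cex)
```

Each counterexample eliminates the current hypothesis from future rounds, so over a finite candidate space the loop terminates whenever a correct candidate exists.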
The SyGuS format~\cite{DBLP:conf/fmcad/AlurBJMRSSSTU13} is a competition format for synthesis, and extends the Sketch-based formalism above to SMT theories, with an emphasis on syntactic restrictions for expressions. More precisely, SyGuS specifications are parameterized over a background theory $T$, and an instance is a pair $(G, \psi(\vec{f}))$ where $G$ is a grammar that imposes syntactic restrictions for functions (or expressions) written using symbols of the background theory, and $\psi$ is a formula, again in the theory $T$, including function symbols $\vec{f}$; the functions $\vec{f}$ are
typed according to domains of $T$. The goal is to find functions $\vec{g}$ for the symbols $\vec{f}$ in the syntax $G$ such that $\psi$ holds. The competition version further restricts $\psi$ to be of the form $\forall \vec{x} \psi'(\vec{f}, \vec{x})$ where $\psi'$ is a quantifier-free formula in a decidable SMT theory---this way, given a hypothesis for the functions $\vec{f}$, the problem of checking whether the functions meet the specification is decidable.
There have been several solvers developed for SyGuS (cf.\ the first SyGuS competition~\cite{DBLP:conf/fmcad/AlurBJMRSSSTU13,DBLP:series/natosec/AlurBDF0JKMMRSSSSTU15}), and all of them are in fact learning-based (i.e., CEGIS) techniques. In particular, three solvers have been proposed: an enumerative solver, a constraint-based solver, and a stochastic solver. All these solvers can be seen as ALF instances: the concept space consists of all possible tuples of functions over the appropriate domains and the hypothesis space is the set of all functions allowed by the \emph{syntax} of the problem (with the natural $\gamma$ relation giving its semantics). All three solvers work by generating a tuple of functions such that $\forall \vec{x} \psi'(\vec{f}, \vec{x})$ holds for all valuations of $\vec{x}$ given by the teacher thus far.
The enumerative solver enumerates functions until it reaches one that satisfies all samples, the stochastic solver searches the space of functions randomly, guided by a measure of how many samples are satisfied, until it finds one that satisfies all samples, and the constraint-based solver queries a constraint solver for instantiations of template functions so that the specification is satisfied on the sample valuations.
Both the enumerative and the constraint-based solver are Occam learners and, hence, converge in finite time.
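As an illustration of why size-ordered enumeration yields an Occam learner, here is a toy enumerative solver (hypothetical; the grammar $E ::= x \mid 0 \mid 1 \mid (E + E)$ stands in for a SyGuS grammar, and the specification is checked on a finite set of inputs):

```python
from itertools import product

def exprs(size):
    """All terms of the grammar E ::= x | 0 | 1 | (E + E)
    with exactly `size` AST nodes, represented as nested tuples."""
    if size == 1:
        yield from ('x', 0, 1)
    else:
        for left_size in range(1, size - 1):
            for l, r in product(exprs(left_size), exprs(size - 1 - left_size)):
                yield ('+', l, r)

def ev(e, x):
    """Evaluate a grammar term at input x."""
    if e == 'x':
        return x
    if isinstance(e, int):
        return e
    return ev(e[1], x) + ev(e[2], x)

def enumerative_solver(phi, inputs, max_size=9):
    """CEGIS whose learner proposes a smallest term consistent with the
    collected samples; the size-ordered search is what makes this an
    Occam learner. Assumes a solution of size <= max_size exists."""
    samples = []
    while True:
        e = next(t for size in range(1, max_size + 1) for t in exprs(size)
                 if all(phi(ev(t, v), v) for v in samples))
        cex = next((x for x in inputs if not phi(ev(e, x), x)), None)
        if cex is None:
            return e
        samples.append(cex)
```

For instance, asked for a term computing $x + 2$, the solver first proposes the size-one term $x$, receives a counterexample, and then finds the smallest consistent term $x + (1 + 1)$.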
Note that the learners \emph{know} $\psi$ in this scenario. However, we can model SyGuS as ALFs by taking the sample space to be grounded formulas $\psi'(\vec{f}, \vec{v})$ consisting of the specification with particular values $\vec{v}$ substituted for $\vec{x}$. The learners can now be seen as learning from these samples, without knowledge of $\psi$ (similar to the modeling of Sketch above).
We would like to emphasize that this embedding of SyGuS as an ALF clearly showcases the difference between different synthesis approaches (as mentioned in the introduction). For example, invariant generation can be done using learning either by means of ICE samples (see Section~\ref{subsec:program_verification}) or modeled as a SyGuS problem. However, it turns out that the sample spaces (and, hence, the learners) in the two approaches are \emph{very different}! In ICE-based learning, samples are only single configurations (labeled positive or negative) or pairs of configurations, while in a SyGuS encoding, the samples are grounded formulas that encode the entire program body, including instantiations of the universally quantified variables that represent intermediate states in the execution of the loop.
\subsubsection{Machine-learning based approaches to synthesis}
One can also use \emph{passive} machine-learning algorithms that synthesize hypotheses from samples to build a synthesis engine (along with an appropriate teacher that can furnish such samples). Recent work by Garg et~al.~\cite{ICEML} proposes an algorithm for synthesizing invariants in the ICE framework using machine learning classifiers (decision trees) that can be viewed as an ALF.
\subsubsection{Synthesizing guarded affine functions}
Recent work~\cite{alchemist} explores the synthesis of \emph{guarded affine functions} from a sample space that consists of information of the form $f(\vec{s})=t$, where $\vec{s}$ is a tuple of integers and $t$ is an integer. The learner here uses a combination of computational geometry techniques and decision tree learning, and can also be modeled as an ALF. Notice that this sample space precisely matches the sample space for deobfuscation problems
(where the teacher can return counterexamples of this form---see the example at the end of Section~\ref{sec:alf} on page~\pageref{ex:deobfuscation}---using the program being deobfuscated). Consequently, the
learner in Alchemist~\cite{alchemist} can be used for deobfuscating programs that compute guarded affine functions from tuples of integer inputs
to integers (like the ``multiply by 45'' example in~\cite{DBLP:conf/icse/JhaGST10}).
\subsubsection{Other synthesis engines}
There are several algorithms that are self-described as CEGIS frameworks, and, hence, can be modeled using ALFs. Examples include synthesizing loop-free programs~\cite{DBLP:conf/pldi/GulwaniJTV11}, synthesizing synchronizing code for concurrent programs~\cite{scheduling} (in this work, the sample space consists of abstract concurrent partially-ordered traces), work on using synthesis to \emph{mine specifications}~\cite{DBLP:conf/hybrid/JinDDS13}, synthesizing bit-manipulating programs and deobfuscating programs~\cite{DBLP:conf/icse/JhaGST10} (here, the use of a separate I/O-oracle can be modeled as the teacher returning the output of the program together with a counterexample input), superoptimization~\cite{DBLP:conf/asplos/Schkufza0A13}, deductive program repair~\cite{DBLP:conf/cav/KneussKK15}, synthesis of recursive functional programs over unbounded domains~\cite{DBLP:conf/oopsla/KneussKKS13}, as well as synthesis of protocols using enumerative CEGIS techniques~\cite{DBLP:conf/pldi/UdupaRDMMA13}.
\section{Introduction}
\input{intro}
\section{Abstract Learning Frameworks for Synthesis} \label{sec:alf}
\input{alf}
\section{Convergence of iterative learning} \label{sec:convergence}
\input{convergence}
\input{examples}
\section{Variations and Limitations of the Framework} \label{app:variations}
\input{variations}
\input{conclusions}
\vspace{-.25\baselineskip}
\paragraph{\normalfont\bfseries Acknowledgements:} This work was partially supported by NSF Expeditions
in Computing ExCAPE Award \#1138994.
\vspace{-.5\baselineskip}
\enlargethispage{\baselineskip}
\bibliographystyle{splncs03}
\subsection{Omitting the Concept Space}
We believe that, for a clean modeling of a synthesis problem, one
should specify the concept space $\mathcal{C}$. This makes it possible to
compare different synthesis approaches that work with different
representations of hypotheses and maybe different types of samples
over the same underlying concept space.
However, for the actual learning process, the concept space
itself is not of great importance because the learner proposes elements from
the hypothesis space, and the teacher returns an element from the
sample space. The concept space only serves as a semantic space that gives
meaning to hypotheses (via the concretization function $\gamma$),
and to the samples (via the consistency relation $\kappa$).
Therefore, it is possible to omit the concept space from an ALF, and
to directly specify the consistency of samples with hypotheses. Such a
reduced ALF would then be of the form $\mathcal{A} = (\mathcal{H}, \mathcal{S},
\kappa)$ with a function $\kappa: \mathcal{S} \rightarrow
2^\mathcal{H}$. In the original framework, this corresponds to the
function $\kappa_\mathcal{H}$ defined by
$\kappa_\mathcal{H}(S) = \gamma^{-1}\kappa(S)$.
To create ALF instances, the target specification is also directly
given as a subset of the hypothesis space $\mathcal{T}
\subseteq \mathcal{H}$. All the other definitions can be adapted directly
to this framework.
\subsection{Limitations}
The ALF framework we develop in this paper is not meant to capture every
existing method that uses learning from samples. There are several synthesis techniques
that use grey-box techniques (a combination of black-box learning from samples and utilizing
the specification of the target directly in some way) or use query models (where they query the
teacher for various aspects of the target set).
For instance, there are active iterative learning scenarios
in which the learner can ask other types of questions to the teacher than just proposing
hypotheses that are then accepted or refuted by the teacher. One
prominent scenario of this kind is Angluin's active learning of DFAs
\cite{Angluin87}, where the learner can ask \emph{membership queries}
and \emph{equivalence queries}. (The equivalence queries correspond to
proposing a hypothesis, as in our framework, which is then refuted
with a counterexample if it is not correct.)
Such learning scenarios for synthesis are used, for example, in
\cite{AlurCMN05} for the synthesis of interface specifications for Java classes,
and in \cite{PasareanuGBCB08} for
automatically synthesizing assumptions for assume-guarantee reasoning.
Our framework does not have a mechanism for directly modeling such
queries.
The ALF framework that we have presented
is intentionally simple: it captures and cleanly models emerging synthesis
procedures in the literature in which the learner only proposes hypotheses and learns
from the samples that the teacher provides to show that the
hypothesis is wrong. The learner in our framework, being a completely passive learner
(as opposed to an active learner), can also be implemented by the variety of scalable passive
machine-learning algorithms in vogue~\cite{mitchell}. A clean extension of ALFs to query
settings and grey-box settings would be an interesting future direction to pursue.
\subsection{#1}\setcounter{theorem}{0} \setcounter{equation}{0}
\par\noindent}
\renewcommand{\theequation}{\arabic{subsection}.\arabic{equation}}
\renewcommand{\thesubsection}{\arabic{subsection}}
\newtheorem{theorem}{Theorem}
\renewcommand{\thetheorem}{\arabic{subsection}.\arabic{theorem}}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{corr}[theorem]{Corollary}
\newtheorem{prop}[theorem]{Prop}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{deff}[theorem]{Definition}
\newtheorem{remark}[theorem]{Remark}
\newcommand{\bth}{\begin{theorem}}
\newcommand{\ble}{\begin{lemma}}
\newcommand{\bcor}{\begin{corr}}
\newcommand{\ltrt}{{L^2({\Bbb R}^3)}}
\newcommand{\ltoo}{{L^2({\mathbb{R}}^3\backslash\mathcal{K})}}
\newcommand{\bdeff}{\begin{deff}}
\newcommand{\lirt}{{L^\infty({\Bbb R}^3)}}
\newcommand{\lioo}{{L^{\infty}({\mathbb{R}}^3 \backslash \mathcal{K})}}
\newcommand{\bprop}{\begin{proposition}}
\newcommand{\ele}{\end{lemma}}
\newcommand{\ecor}{\end{corr}}
\newcommand{\edeff}{\end{deff}}
\newcommand{\ii}{i}
\newcommand{\eprop}{\end{proposition}}
\newcommand{\rlnu}{{R_{\lambda}^{\nu}}}
\newcommand{\trlnu}{{\tilde{R}_{\lambda}^{\nu}}}
\newcommand{\tlnu}{{T_{\lambda}^{\nu}}}
\newcommand{\ttlnu}{{\tilde{T}_{\lambda}^{\nu}}}
\newcommand{\slnu}{{S_{\lambda}^{\nu}}}
\newcommand{\slnut}{{}^t\!\slnu}
\newcommand{\cd}{\, \cdot\, }
\newcommand{\mlnu}{{m_{\lambda}^{\nu}}}
\newcommand{\psilnu}{\psi_{\lambda}^{\nu}}
\newcommand{\xilnu}{\xi_{\lambda}^{\nu}}
\newcommand{\nlnu}{N_{\lambda}^{\nu}}
\newcommand{\nl}{N_{\lambda}}
\newcommand{\Rn}{{\mathbb R}^n}
\newcommand{\jump}{{}}
\newcommand{\la}{\lambda}
\newcommand{\st}{{\Bbb R}^{1+3}_+}
\newcommand{\eps}{\varepsilon}
\newcommand{\e}{\varepsilon}
\renewcommand{\l}{\lambda}
\newcommand{\loc}{{\text{\rm loc}}}
\newcommand{\comp}{{\text{\rm comp}}}
\newcommand{\Coi}{C^\infty_0}
\newcommand{\supp}{\text{supp }}
\renewcommand{\Pi}{\varPi}
\renewcommand{\Re}{\mathrm{Re}\,}
\renewcommand{\Im}{\mathrm{Im}\,}
\renewcommand{\epsilon}{\varepsilon}
\newcommand{\sgn}{{\text {sgn}}}
\newcommand{\Gmid}{\Gamma_{\text{mid}}}
\newcommand{\Rt}{{\Bbb R}^2}
\newcommand{\Mdel}{{{\cal M}}^\alpha}
\newcommand{\dist}{{{\rm dist}}}
\newcommand{\Adel}{{{\cal A}}_\delta}
\newcommand{\Kob}{{\cal K}}
\newcommand{\Dia}{\overline{\Bbb E}^{1+3}}
\newcommand{\Diap}{\overline{\Bbb E}^{1+3}_+}
\newcommand{\Cyl}{{\Bbb E}^{1+3}}
\newcommand{\Cylp}{{\Bbb E}^{1+3}_+}
\newcommand{\Penrose}{{\cal P}}
\newcommand{\Rplus}{{\Bbb R}_+}
\newcommand{\parital}{\partial}
\newcommand{\tidle}{\tilde}
\newcommand{\grad}{\text{grad}\,}
\newcommand{\ob}{{\mathcal K}}
\newcommand{\R}{{\mathbb R}}
\newcommand{\1}{{\rm 1\hspace*{-0.4ex}\rule{0.1ex}{1.52ex}\hspace*{0.2ex}}}
\newcommand{\T}{{\mathbf T}}
\newcommand{\E}{{\mathbf E}}
\newcommand{\Aut}{\text{Aut}(p)}
\newcommand{\subheading}[1]{{\bf #1}}
\begin{document}
\subjclass[2000]{Primary, 35F99; Secondary 35L20, 42C99}
\keywords{Eigenfunction estimates, negative curvature}
\title[Improved restriction estimates for nonpositive curvature manifolds]{An improvement on eigenfunction restriction estimates for compact boundaryless Riemannian manifolds with nonpositive sectional curvature}
\thanks{The author would like to thank her advisor, Christopher Sogge, cordially for his generous help and unlimited patience.}
\author{Xuehua Chen}
\address{Johns Hopkins University, Baltimore, MD}
\email{xchen@math.jhu.edu}
\maketitle
\begin{abstract}
Let $(M,g)$ be an $n$-dimensional compact boundaryless Riemannian manifold with nonpositive sectional curvature. We give improved estimates for the $L^p$ norms of the restrictions of eigenfunctions to smooth submanifolds of dimension $k$, for $p>\dfrac{2n}{n-1}$ when $k=n-1$ and $p>2$ when $k\leq n-2$, compared to the general results of Burq, G\'erard and Tzvetkov \cite{burq}. Earlier, B\'erard \cite{Berard} gave the same improvement in the case $p=\infty$, for compact Riemannian manifolds without conjugate points when $n=2$, and with nonpositive sectional curvature when $n\geq3$ and $k=n-1$. In this paper, we first prove the improved estimates for $n=2$, that is, for the $L^p$ norms of the restrictions of eigenfunctions to geodesics. Our proof uses the fact that the exponential map from any point $x\in M$ is a universal covering map from $\mathbb{R}^2\backsimeq T_{x}M$ to $M$, which allows us to lift the calculations to the universal cover $(\mathbb{R}^2,\tilde{g})$, where $\tilde{g}$ is the pullback of $g$ via the exponential map. We then prove the main estimates using the Hadamard parametrix for the wave equation on $(\mathbb{R}^2,\tilde{g})$, stationary phase estimates, and the fact that the principal coefficient of the Hadamard parametrix is bounded, by observations of Sogge and Zelditch in \cite{SZ}. The improved estimates also hold for $n\geq 3$, with $p>\frac{4k}{n-1}$, and we then obtain the full result by interpolation.
\end{abstract}
\newsection{Introduction}
Let $(M,g)$ be a compact, smooth $n$-dimensional boundaryless Riemannian manifold with nonpositive sectional curvature. Denote by $\Delta_g$ the Laplace operator associated with the metric $g$, and by $d_g(x,y)$ the geodesic distance between $x$ and $y$ with respect to $g$. There exist $\lambda\geq0$ and $\phi_\lambda\in L^2(M)$ such that $-\Delta_g\phi_\lambda=\lambda^2\phi_\lambda$, and we call $\phi_\lambda$ an eigenfunction corresponding to the eigenvalue $\lambda$. Let $\{e_j(x)\}_{j\in\mathbb{N}}$ be an $L^2(M)$-orthonormal basis of eigenfunctions of $\sqrt{-\Delta_g}$, with eigenvalues $\{\lambda_j\}_{j\in\mathbb{N}}$, and let $\{E_j\}_{j\in\mathbb{N}}$ be the projections onto the $j$-th eigenspaces, restricted to a smooth submanifold $\Sigma\subset M$, i.e. $E_jf(x)=e_j(x)\int_M e_j(y)f(y)dy$, for any $f\in L^2(M)$, $x\in\Sigma$. We may consider only positive $\lambda$'s, as we are interested in the asymptotic behavior of the eigenfunction projections. Our main theorem is the following.
\bth\label{theorem1}
Let $(M,g)$ be a compact smooth $n$-dimensional boundaryless Riemannian manifold with nonpositive curvature, and let $\Sigma$ be a $k$-dimensional smooth submanifold of $M$. Let $\{E_j\}_{j\in\mathbb{N}}$ be the projections onto the $j$-th eigenspaces, restricted to $\Sigma$. Given any $f\in L^2(M)$, we have the following estimates:
When $k=n-1$,
\begin{equation}\label{1.1}
||\sum_{|\lambda_j-\lambda|\leq(\log\lambda)^{-1}}E_jf||_{L^p(\Sigma)}\lesssim\frac{\lambda^{\delta(p)}}{(\log \lambda)^{\frac{1}{2}}}||f||_{L^2(M)},\ \ \ \forall p>\dfrac{2n}{n-1};
\end{equation}
When $k\leq n-2$,
\begin{equation}\label{1.2}
||\sum_{|\lambda_j-\lambda|\leq(\log\lambda)^{-1}}E_jf||_{L^p(\Sigma)}\lesssim\frac{\lambda^{\delta(p)}}{(\log \lambda)^{\frac{1}{2}}}||f||_{L^2(M)},\ \ \ \forall p>2,
\end{equation}
where $\delta(p)=\frac{n-1}{2}-\frac{k}{p}$.
\end{theorem}
Note that we may assume that $(M,g)$ is also simply connected in the proof.
The following corollary is an immediate consequence of this theorem.
\begin{corr}
Let $(M,g)$ be a compact smooth $n$-dimensional boundaryless Riemannian manifold with nonpositive curvature, and let $\Sigma$ be a $k$-dimensional smooth submanifold of $M$. For any eigenfunction $\phi_\lambda$ of $\Delta_g$ such that $-\Delta_g\phi_\lambda=\lambda^2\phi_\lambda$, we have the following estimates:
When $k=n-1$,
\begin{equation}\label{1.3}
||\phi_\lambda||_{L^p(\Sigma)}\lesssim\frac{\lambda^{\delta(p)}}{(\log \lambda)^{\frac{1}{2}}}||\phi_\lambda||_{L^2(M)},\ \ \ \forall p>\dfrac{2n}{n-1};
\end{equation}
When $k\leq n-2$,
\begin{equation}\label{1.4}
||\phi_\lambda||_{L^p(\Sigma)}\lesssim\frac{\lambda^{\delta(p)}}{(\log \lambda)^{\frac{1}{2}}}||\phi_\lambda||_{L^2(M)},\ \ \ \forall p>2,
\end{equation}
where $\delta(p)=\frac{n-1}{2}-\frac{k}{p}$.
\end{corr}
In \cite{rez}, Reznikov achieved weaker estimates for hyperbolic surfaces, which inspired this current line of research. In \cite{burq}, Theorem 3, Burq, G\'erard and Tzvetkov showed that given any $k$-dimensional submanifold $\Sigma$ of an $n$-dimensional compact boundaryless manifold $M$, for any $p>\dfrac{2n}{n-1}$ when $k=n-1$ and for any $p>2$ when $k\leq n-2$, one has
\begin{equation}\label{1.5}
||\phi_\lambda||_{L^p(\Sigma)}\lesssim\lambda^{\delta(p)}||\phi_\lambda||_{L^2(M)},
\end{equation}
while for $p=\frac{2n}{n-1}$ when $k=n-1$ and for $p=2$ when $k=n-2$ one has
\begin{equation}\label{1.6}
||\phi_\lambda||_{L^p(\Sigma)}\lesssim\lambda^{\delta(p)}(\log\lambda)^{\frac{1}{2}}||\phi_\lambda||_{L^2(M)}.
\end{equation}
Later on, Hu improved the result at one end point in \cite{Hu}, so that one has \eqref{1.5} for $p=\frac{2n}{n-1}$ when $k=n-1$. It is plausible that one can also improve the result at the other end point, where $p=2$ and $k=n-2$, so that \eqref{1.5} holds there as well. Our Theorem \ref{theorem4.1} improves \eqref{1.5} by a factor of $(\log\lambda)^{-\frac{1}{2}}$ for $p\geq2$ for certain small $k$ (see the remark after Theorem \ref{theorem4.1}).
Note that their proof of Theorem 3 in \cite{burq} indicates that for any $f\in L^2(M)$,
\begin{equation}\label{1.7}
||\sum_{|\lambda_j-\lambda|<1}E_jf||_{L^p(\Sigma)}\lesssim\lambda^{\delta(p)}||f||_{L^2(M)},
\end{equation}
for any $p\geq\dfrac{2n}{n-1}$ when $k=n-1$ and $p\geq2$ when $k\leq n-2$, except that there is an extra $(\log\lambda)^{\frac{1}{2}}$ on the right hand side when $p=2$ and $k=n-2$. In the proof, they constructed the operator $\chi_\lambda=\chi(\sqrt{-\Delta_g}-\lambda)$ from $L^2(M)$ to $L^p(\Sigma)$, where $\chi\in\mathcal{S}(\mathbb{R})$ satisfies $\chi(0)=1$, and showed that $\chi_\lambda(\chi_\lambda)^*$ is an operator from $L^{p'}(\Sigma)$ to $L^{p}(\Sigma)$ with norm $O(\lambda^{2\delta(p)})$. This implies that there exists an $\epsilon>0$ such that \begin{equation}\label{1.8}
||\sum_{|\lambda_j-\lambda|<\epsilon}E_jf||_{L^p(\Sigma)}\lesssim\lambda^{\delta(p)}||f||_{L^2(M)}.
\end{equation}
To see why \eqref{1.8} holds, consider the dual form of
\begin{equation}
||\chi(\lambda-\sqrt{-\Delta_g})f||_{L^p(\Sigma)}\lesssim\lambda^{\delta(p)}||f||_{L^2(M)},
\end{equation}
which says
\begin{equation}\label{1.9}
||\sum_j\chi(\lambda-\lambda_j)E_j^*g||_{L^2(M)}\lesssim\lambda^{\delta(p)}||g||_{L^p(\Sigma)},
\end{equation}
where $E_j^*$ is the adjoint of $E_j$, so that $E_j^*g(x)=e_j(x)\int_\Sigma e_j(y)g(y)dy$, for any $g\in L^2(\Sigma)$ and $x\in M$. Since we assumed that $\chi(0)=1$, there exists an $\epsilon>0$ such that $\chi(t)>\frac{1}{2}$ when $|t|<\epsilon$. Therefore, the square of the left hand side of \eqref{1.9} is
\begin{equation}
\sum_{|\lambda-\lambda_j|<\epsilon}||\chi(\lambda-\lambda_j)E_j^*g||_{L^2(M)}^2+\sum_{|\lambda-\lambda_j|>\epsilon}||\chi(\lambda-\lambda_j)E_j^*g||_{L^2(M)}^2\geq\frac{1}{4}\sum_{|\lambda-\lambda_j|<\epsilon}||E_j^*g||_{L^2(M)}^2.
\end{equation}
That means
\begin{equation}
||\sum_{|\lambda-\lambda_j|<\epsilon}E_j^*g||_{L^2(M)}\lesssim\lambda^{\delta(p)}||g||_{L^p(\Sigma)},
\end{equation}
which is the dual version of \eqref{1.8}.
If we divide the interval $(\lambda-1,\lambda+1)$ into $\frac{1}{\epsilon}$ sub-intervals of length $2\epsilon$ and apply the last estimate $\frac{1}{\epsilon}$ times, we get \eqref{1.7}. From this point of view, our estimates \eqref{1.1} and \eqref{1.2} are equivalent to estimates for
\begin{equation}
||\sum_{|\lambda_j-\lambda|<\epsilon\log^{-1}\lambda} E_j||_{L^2(M)\rightarrow L^p(\Sigma)},
\end{equation}
for some number $\epsilon>0$, which is equivalent to estimating
\begin{equation}
||\chi(T(\lambda-\sqrt{-\Delta_g}))||_{L^2(M)\rightarrow L^p(\Sigma)},
\end{equation}
for $T\approx\log\lambda$.
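For the reader's convenience, the orthogonality computation above can be repeated verbatim at scale $1/T$: since the $e_j$ are orthonormal in $L^2(M)$, with $\epsilon$ as before (so that $\chi(t)>\frac{1}{2}$ for $|t|<\epsilon$), we have
\begin{equation*}
\Bigl\|\sum_{j}\chi(T(\lambda-\lambda_j))E_j^*g\Bigr\|_{L^2(M)}^2=\sum_{j}\chi(T(\lambda-\lambda_j))^2\,\|E_j^*g\|_{L^2(M)}^2\geq\frac{1}{4}\sum_{|\lambda-\lambda_j|<\epsilon/T}\|E_j^*g\|_{L^2(M)}^2,
\end{equation*}
so bounding the operator norm of $\chi(T(\lambda-\sqrt{-\Delta_g}))$ controls the spectral cluster of width $\epsilon/T$.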
The estimates \eqref{1.5} and \eqref{1.6} are sharp when
1. $k\leq n-2$, $M$ is the standard sphere $\mathbb{S}^n$ and $\Sigma$ is any submanifold of dimension $k$; or
2. $k=n-1$ and $2\leq p\leq\dfrac{2n}{n-1}$, $M$ is the standard sphere $\mathbb{S}^n$ and $\Sigma$ is any hypersurface containing a piece of geodesic.
It is natural to try to improve these estimates on Riemannian manifolds with nonpositive curvature. Recently, Sogge and Zelditch in \cite{SZ} showed that for any 2-dimensional compact boundaryless Riemannian manifold with nonpositive curvature one has
\begin{equation}\label{1.10}
\sup_{\gamma\in\Pi}||\phi_\lambda||_{L^{p}(\gamma)}/||\phi_\lambda||_{L^2(M)}=o(\lambda^{\frac{1}{4}}),\ \ \ \text{for}\ 2\leq p<4,
\end{equation}
where $\Pi$ denotes the space of all unit-length geodesics in $M$.
\eqref{1.7} is sharp for general compact manifolds, in the sense that the scale of the spectral projection is fixed (see the proof in \cite{burq}). If we are allowed to consider a smaller scale of spectral projection, then our Theorem \ref{theorem1} improves \eqref{1.7} by a factor of $\sqrt{\log\lambda}$, with the extra assumption that $M$ has nonpositive curvature. The corollary is an improvement of \eqref{1.5}. Note that \eqref{1.3} and \eqref{1.10} improve \eqref{1.5} for the whole range of $p$ in dimension 2 except for $p=4$.
Theorem \ref{theorem1} is related to certain $L^p$-estimates for eigenfunctions. For example, for 2-dimensional Riemannian manifolds, Sogge showed in \cite{Sokakeya} that
\begin{equation}
||\phi_\lambda||_{L^p(M)}/||\phi_\lambda||_{L^2(M)}=o(\lambda^{\frac{1}{2}(\frac{1}{2}-\frac{1}{p})})
\end{equation}
for some $2<p<6$ if and only if
\begin{equation}
\sup_{\gamma\in\Pi}||\phi_\lambda||_{L^2(\gamma)}/||\phi_\lambda||_{L^2(M)}=o(\lambda^{\frac{1}{4}}).
\end{equation}
This indicates a relation between the restriction theorem and the $L^p$-estimates for eigenfunctions proved by Sogge in \cite{soggeest}, which state that for any compact Riemannian manifold of dimension $n$, one has
\begin{equation}
||\phi_\lambda||_{L^p(M)}\lesssim\lambda^{\frac{n-1}{2}(\frac{1}{2}-\frac{1}{p})}||\phi_\lambda||_{L^2(M)},\ \ \ \text{for}\ 2\leq p\leq\dfrac{2(n+1)}{n-1},
\end{equation}
and
\begin{equation}\label{1.12}
||\phi_\lambda||_{L^p(M)}\lesssim\lambda^{n(\frac{1}{2}-\frac{1}{p})-\frac{1}{2}}||\phi_\lambda||_{L^2(M)},\ \ \ \text{for}\ \dfrac{2(n+1)}{n-1}\leq p\leq\infty.
\end{equation}
There have been several results showing that \eqref{1.12} can be improved for $p>\dfrac{2(n+1)}{n-1}$ (see \cite{stz} and \cite{soggezelditch}) to bounds of the form $||\phi_\lambda||_{L^p(M)}/||\phi_\lambda||_{L^2(M)}=o(\lambda^{n(\frac{1}{2}-\frac{1}{p})-\frac{1}{2}})$ for fixed $p>6$. Recently, Hassell and Tacey \cite{HT}, following B\'erard's \cite{Berard} estimate for $p=\infty$, showed that for fixed $p>6$, this ratio is $O(\lambda^{n(\frac{1}{2}-\frac{1}{p})-\frac{1}{2}}/\sqrt{\log\lambda})$ on Riemannian manifolds with constant negative curvature, which inspired our work.
\newsection{Setup of the proof of the improved restriction theorem}
Let us first analyze the situation for any dimension $n$, which we will use in Section 4.
Take a real-valued function $\chi\in\mathcal{S}(\mathbb{R})$ such that $\chi(0)=1$ and $\hat{\chi}(t)=0$ if $|t|\geq\frac{1}{2}$.
Let $\rho=\chi^2$; then $\hat{\rho}(t)=0$ if $|t|\geq1$. Here, $\hat{\chi}$ denotes the Fourier transform of $\chi$, and we use the same notation throughout.
For a number $T$, to be determined later, which is approximately $\log\lambda$, we have $\chi(T(\lambda-\sqrt{-\Delta_g}))\phi_\lambda=\phi_\lambda$. The theorem is proved if we can show that for any $f\in L^2(M)$,
\begin{equation}\label{2.1}
||\chi^\lambda_Tf||_{L^p(\Sigma)}\lesssim\frac{\lambda^{\delta(p)}}{(\log\lambda)^\frac{1}{2}}||f||_{L^2(M)},
\end{equation}
where $\chi^\lambda_T=\chi(T(\lambda-\sqrt{-\Delta_g}))$ is an operator from $L^2(M)$ to $L^p(\Sigma)$.
This is equivalent to showing that for any $g\in L^{p'}(\Sigma)$,
\begin{equation}\label{2.2}
||\chi^\lambda_T(\chi^\lambda_T)^* g||_{L^p(\Sigma)}\lesssim\frac{\lambda^{2\delta(p)}}{\log\lambda}||g||_{L^{p'}(\Sigma)},
\end{equation}
where $p'$ is the conjugate exponent of $p$, i.e. $\frac{1}{p}+\frac{1}{p'}=1$, and $(\chi_T^\lambda)^*$ is the adjoint of $\chi_T^\lambda$, which maps $L^{p'}(\Sigma)$ into $L^2(M)$.
If $\{e_j(x)\}_{j\in\mathbb{N}}$ is an $L^2(M)$-orthonormal basis of eigenfunctions of $\sqrt{-\Delta_g}$, with eigenvalues $\{\lambda_j\}_{j\in\mathbb{N}}$, and $\{E_j\}_{j\in\mathbb{N}}$ are the projections onto the $j$-th eigenspaces restricted to $\Sigma$, then $I|_\Sigma=\sum_{j\in\mathbb{N}}E_j$, and $\sqrt{-\Delta_g}|_\Sigma=\sum_{j\in\mathbb{N}}\lambda_jE_j$. If we set $\rho^\lambda_T=\rho(T(\lambda-\sqrt{-\Delta_g})): L^2(M)\rightarrow L^p(\Sigma)$, then the kernel of $\chi_T^\lambda(\chi_T^\lambda)^*$ is the kernel of $\rho_T^\lambda$ restricted to $\Sigma\times\Sigma$. This can be seen from
\begin{equation}
\chi^\lambda_T f(x)=\sum_{j\in\mathbb{N}}\chi(T(\lambda-\lambda_j))e_j(x)\int_Me_j(y)f(y)dy,\ \ \forall f\in L^2(M),
\end{equation}
and
\begin{equation}
(\chi^\lambda_T)^* g(x)=\sum_{j\in\mathbb{N}}\chi(T(\lambda-\lambda_j))e_j(x)\int_\Sigma e_j(y)g(y)dy,\ \ \forall g\in L^{p'}(\Sigma).
\end{equation}
Therefore,
\begin{equation}
\begin{split}
\chi^\lambda_T(\chi^\lambda_T)^* g(x) & =\sum_{i,j\in\mathbb{N}}\chi(T(\lambda-\lambda_i))\chi(T(\lambda-\lambda_j))e_j(x)\int_Me_j(y)e_i(y)\int_\Sigma e_i(z)g(z)dzdy\\
& =\sum_{j\in\mathbb{N}}\chi(T(\lambda-\lambda_j))^2e_j(x)\int_\Sigma e_j(z)g(z)dz\\
& =\sum_{j\in\mathbb{N}}\rho(T(\lambda-\lambda_j))e_j(x)\int_\Sigma e_j(z)g(z)dz.
\end{split}
\end{equation}
On the other hand,
\begin{equation}
\begin{split}
\rho^\lambda_T & =\sum_{j\in\mathbb{N}}\rho(T(\lambda-\lambda_j))E_j \\
& =\sum_{j\in\mathbb{N}}\frac{1}{2\pi}\int_{-1}^1\hat{\rho}(t)e^{it[T(\lambda-\lambda_j)]}E_jdt\\
& =\sum_{j\in\mathbb{N}}\frac{1}{2\pi T}\int_{-T}^T\hat{\rho}(\frac{t}{T})e^{it(\lambda-\lambda_j)}E_jdt\\
& =\frac{1}{2\pi T}\int_{-T}^T\hat{\rho}(\frac{t}{T})e^{it(\lambda-\sqrt{-\Delta_g})}dt\\
& =\frac{1}{\pi T}\int_{-T}^T\hat{\rho}(\frac{t}{T})\cos(t\sqrt{-\Delta_g})e^{it\lambda}dt-\rho(T(\lambda+\sqrt{-\Delta_g})).
\end{split}
\end{equation}
Here, $\rho(T(\lambda+\sqrt{-\Delta_g}))$ is an operator whose kernel is $O(\lambda^{-N})$, for any $N\in\mathbb{N}$, so that we only have to estimate the first term. We are not going to emphasize the restriction to $\Sigma$ until we get to the point when we take the $L^p$ norm on $\Sigma$.
Denote the kernel of $\cos (t\sqrt{-\Delta_g})$ as $\cos(t\sqrt{-\Delta_g})(x,y)$, for $x,y\in M$, then $\forall g\in L^{p'}(\Sigma)$,
\begin{equation}\label{2.3}
\chi^\lambda_T(\chi^\lambda_T)^* g(x)=\frac{1}{\pi T}\int_\Sigma\int_{-T}^T\hat{\rho}(\frac{t}{T})\cos(t\sqrt{-\Delta_g})(x,y)e^{it\lambda}g(y)dtdy+O(1).
\end{equation}
Take the $L^p(\Sigma)$ norm on both sides,
\begin{equation}\label{2.4}
||\chi^\lambda_T(\chi^\lambda_T)^*g||_{L^p(\Sigma)}\leq\frac{1}{\pi T}(\int_\Sigma|\int_\Sigma\int_{-T}^T\hat{\rho}(\frac{t}{T})\cos(t\sqrt{-\Delta_g})(x,y)e^{it\lambda}g(y)dtdy|^pdx)^{1/p}+O(1).
\end{equation}
We are going to use Young's inequality (see \cite{soggebook}), with $\frac{1}{r}=1-[(1-\frac{1}{p})-\frac{1}{p}]=\frac{2}{p}$, and
\begin{equation}\label{2.5}
K(x,y)=\frac{1}{\pi T}\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_g})(x,y)e^{it\lambda}dt.
\end{equation}
Denote $K$ as the operator with the kernel $K(x,y)$ from now on.\footnote{The definition of $K(x,y)$ may be changed in this paper, but we always call $K$ the corresponding operator with the kernel $K(x,y)$.}
Since $K(x,y)$ is symmetric in $x$ and $y$, once we have
\begin{equation}\label{2.6}
\sup_{x\in\Sigma}||K(x,\cdot)||_{L^r(\Sigma)}\lesssim \frac{\lambda^{2\delta(p)}}{\log\lambda},
\end{equation}
where $r=p/2$, then by Young's inequality, the theorem is proved.
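For the reader's convenience, we recall a standard form of Young's inequality (cf. \cite{soggebook}): if $1+\frac{1}{p}=\frac{1}{r}+\frac{1}{p'}$, then
\begin{equation*}
\Bigl\|\int_\Sigma K(\cdot,y)g(y)\,dy\Bigr\|_{L^p(\Sigma)}\leq\max\Bigl(\sup_{x\in\Sigma}\|K(x,\cdot)\|_{L^r(\Sigma)},\ \sup_{y\in\Sigma}\|K(\cdot,y)\|_{L^r(\Sigma)}\Bigr)\,\|g\|_{L^{p'}(\Sigma)}.
\end{equation*}
Since $K(x,y)$ is symmetric, the two suprema coincide, and the relation $1+\frac{1}{p}=\frac{1}{r}+\frac{1}{p'}$ is exactly $\frac{1}{r}=\frac{2}{p}$ used above.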
We can use the same argument as in \cite{SZ} to lift the manifold to $\mathbb{R}^n$. As stated in Theorem IV.1.3 in \cite{IS}, since $(M,g)$ has nonpositive curvature, fixing a point $x\in\Sigma$, there exists a universal covering map $p=\exp_{x}:\mathbb{R}^n\rightarrow M$. In this way, $(M,g)$ is lifted to $(\mathbb{R}^n,\tilde{g})$, with the metric $\tilde{g}=(\exp_x)^*g$ being the pullback of $g$ via $\exp_{x}$; $\tilde{g}$ is a complete Riemannian metric on $\mathbb{R}^n$. We call an automorphism $\alpha: \mathbb{R}^n\rightarrow\mathbb{R}^n$ of $(\mathbb{R}^n,\tilde{g})$ a deck transformation if
\begin{equation}\nonumber
p\circ\alpha=p,
\end{equation}
in which case we write $\alpha\in\Aut$. If $\tilde{x}\in\mathbb{R}^n$ and $\alpha\in\Aut$, we call $\alpha(\tilde{x})$ the translate of $\tilde{x}$ by $\alpha$, and we call a simply connected set $D\subset\mathbb{R}^n$ a fundamental domain of our universal covering $p$ if every point in $\mathbb{R}^n$ is the translate of exactly one point in $D$. We can then think of our submanifold $\Sigma$ both as one in $(M,g)$ and as one of the same form in the fundamental domain. Likewise, a function $f(x)$ on $M$ is uniquely identified with a function $f_D(\tilde{x})$ on $D$ if we set $f_D(\tilde{x})=f(x)$, where $\tilde{x}$ is the unique point in $D\cap p^{-1}(x)$. Using $f_D$ we can define a ``periodic extension'' $\tilde{f}$ of $f$ to $\mathbb{R}^n$ by defining $\tilde{f}(\tilde{y})$ to be equal to $f_D(\tilde{x})$ if $\tilde{x}=\tilde{y}$ modulo $\Aut$, i.e. if $(\tilde{x},\alpha)\in D\times\Aut$ is the unique pair such that $\tilde{y}=\alpha(\tilde{x})$.
In this setting, we shall exploit the relationship between solutions
of the wave equation on $(M,g)$ of the form
\begin{equation}\label{2.20}
\begin{cases}
(\partial^2_t-\Delta_g)u(t,x)=0, \quad (t,x)\in {\mathbb R}_+\times M
\\
u(0,\cd)=f, \, \, \partial_t u(0,\cd)=0,
\end{cases}
\end{equation}
and certain ones on $({\mathbb R}^n,\tilde g)$
\begin{equation}\label{2.21}
\begin{cases}
(\partial^2_t-\Delta_{\tilde g})\tilde u(t,\tilde x)=0, \quad (t,\tilde x)\in {\mathbb R}_+\times
\mathbb{R}^n
\\
\tilde u(0,\cd)=\tilde f, \, \, \partial_t \tilde u(0,\cd)=0.
\end{cases}
\end{equation}
If $(f(x),0)$ is the Cauchy data in \eqref{2.20} and $(\tilde{f}(\tilde{x}),0)$ is the periodic extension to $(\mathbb{R}^n,\tilde{g})$, then the solution $\tilde{u}(t,\tilde{x})$ to \eqref{2.21} must be a periodic function of $\tilde{x}$ since $\tilde{g}$ is the pullback of $g$ via $p$ and $p\circ\alpha=p$. As a result, we have that the solution to \eqref{2.20} must satisfy $u(t,x)=\tilde{u}(t,\tilde{x})$ if $\tilde{x}\in D$ and $p(\tilde{x})=x$. Thus, periodic solutions to \eqref{2.21} correspond uniquely to solutions of \eqref{2.20}. Note that $u(t,x)=\bigl(\cos (t\sqrt{-\Delta_g})f\bigr)(x)$ is the solution of \eqref{2.20}, so that
\begin{equation}
\cos(t\sqrt{-\Delta_g})(x,y)=\sum_{\alpha\in\Aut}\cos (t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y})),
\end{equation}
if $\tilde{x}$ and $\tilde{y}$ are the unique points in $D$ for which $p(\tilde{x})=x$ and $p(\tilde{y})=y$.
\newsection{Proof of the improved restriction theorem, for $n=2$}
While we can prove Theorem \ref{theorem1} for any dimension $n$, we first prove the case $n=2$ separately, as it is the simplest case and does not involve interpolation or various sub-dimensions. Here is what it says.
\begin{theorem}\label{theorem2}
Let $(M,g)$ be a compact smooth boundaryless Riemannian surface with nonpositive curvature, and let $\gamma$ be a smooth curve of finite length. Then for any $f\in L^2(M)$, we have the following estimate:
\begin{equation}\label{3.1}
||\sum_{|\lambda_j-\lambda|<(\log\lambda)^{-1}}E_j f||_{L^p(\gamma)}\lesssim\frac{\lambda^{\frac{1}{2}-\frac{1}{p}}}{(\log \lambda)^{\frac{1}{2}}}||f||_{L^2(M)},\ \ \ \forall p>4.
\end{equation}
\end{theorem}
We will prove Theorem~\ref{theorem2} by the end of this section. By a partition of unity, we may fix $x$ to be the midpoint of $\gamma$, and parametrize $\gamma$ by its arc length centered at $x$ so that
\begin{equation}\label{2.22}
\gamma=\gamma[-1,1]\ \ \ \text{and}\ \ \ \gamma(0)=x,
\end{equation}
and we may assume that the geodesic distance between any $x$ and $y\in\gamma$ is comparable to the arc length between them on $\gamma$.
We need to estimate the $L^r(\gamma)$ norm of
\begin{equation}\label{2.7}
\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_g})(x,y)e^{it\lambda}dt=\sum_{\alpha\in \Aut}\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y}))e^{it\lambda}dt.
\end{equation}
We shall prove the following estimates.
Up to an error of $O(\lambda^{-1})\exp(O(d_{\tilde{g}}(\tilde{x},\tilde{y})))+O(e^{dT})$ or $O(\lambda^{-1})\exp(O(d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))))+O(e^{dT})$, respectively,
\begin{equation}\label{2.9}
\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\tilde{y})e^{it\lambda}dt=O(\lambda)\ \ \ \textit{when}\ \ \ d_{\tilde{g}}(\tilde{x},\tilde{y})<\frac{1}{\lambda},
\end{equation}
\begin{equation}\label{2.10}\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\tilde{y})e^{it\lambda}dt=O((\frac{\lambda}{d_{\tilde{g}}(\tilde{x},\tilde{y})})^{1/2})\ \ \ \textit{when}\ \ \ d_{\tilde{g}}(\tilde{x},\tilde{y})\geq\frac{1}{\lambda},
\end{equation}
\begin{equation}\label{2.11}
\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y}))e^{it\lambda}dt=O((\frac{\lambda}{d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))})^\frac{1}{2})\ \ \ \textit{when}\ \ \ \alpha\neq\text{Id}.
\end{equation}
To prove the above estimates, we need the following lemma.
\begin{lemma}\label{lemma4.3}
Assume that $w(\tilde{x},\tilde{x}')$ is a smooth function from $\mathbb{R}^n\times\mathbb{R}^n$ to $\mathbb{R}^n$, and $\Theta\in\mathbb{S}^{n-1}$, then
\begin{equation}\label{4.2}
\int_{\mathbb{S}^{n-1}}e^{iw(\tilde{x},\tilde{x}')\cdot\Theta}d\Theta=\sqrt{2\pi}^{n-1}\sum_{\pm}\dfrac{e^{\pm i|w(\tilde{x},\tilde{x}')|}}{|w(\tilde{x},\tilde{x}')|^{\frac{n-1}{2}}}+O(|w(\tilde{x},\tilde{x}')|^{-\frac{n-1}{2}-1}),
\end{equation}
when $|w(\tilde{x},\tilde{x}')|\geq1$.
\end{lemma}
The proof can be found in Chapter 1 in \cite{soggebook}.
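In the two-dimensional case used in this section, \eqref{4.2} specializes to
\begin{equation*}
\int_{\mathbb{S}^{1}}e^{iw(\tilde{x},\tilde{x}')\cdot\Theta}d\Theta=\sqrt{2\pi}\sum_{\pm}\dfrac{e^{\pm i|w(\tilde{x},\tilde{x}')|}}{|w(\tilde{x},\tilde{x}')|^{\frac{1}{2}}}+O(|w(\tilde{x},\tilde{x}')|^{-\frac{3}{2}}),\qquad |w(\tilde{x},\tilde{x}')|\geq1.
\end{equation*}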
Let us return to estimating the kernel $K(x,y)$. Applying the Hadamard parametrix,
\begin{equation}\label{2.14}
\cos(t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y}))=\dfrac{w_0(\tilde{x},\alpha(\tilde{y}))}{(2\pi)^n}\sum_{\pm}\int_{\mathbb{R}^n}e^{i\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\xi\pm
it|\xi|}d\xi+\sum_{\nu=1}^N w_\nu(\tilde{x},\alpha(\tilde{y}))\mathcal{E}_\nu(t,d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))+R_N(t,\tilde{x},\alpha(\tilde{y})),
\end{equation}
where
$|\Phi(\tilde{x},\alpha(\tilde{y}))|=d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))$, $\mathcal{E}_\nu,\nu=1,2,3,...$ are defined recursively by
$2\mathcal{E}_\nu(t,r)=-t\int_0^t\mathcal{E}_{\nu-1}(s,r)ds$, where $\mathcal{E}_0(t,x)=(2\pi)^{-n}\int_{\mathbb{R}^n}e^{ix\cdot\xi}\cos(t|\xi|)d\xi$\footnote{Since $\mathcal{E}_\nu(t,x)$ is radially symmetric in $x$, we write $\mathcal{E}_\nu(t,x)=\mathcal{E}_\nu(t,|x|)$.}, and $w_\nu(\tilde{x},\alpha(\tilde{y}))$ equals some constant times $u_\nu(\tilde{x},\alpha(\tilde{y}))$, which satisfies:
\begin{equation}\label{3.2}
\begin{cases}
u_0(\tilde{x},\alpha(\tilde{y}))=\Theta^{-\frac{1}{2}}(\alpha(\tilde{y}))
\\
u_{\nu+1}(\tilde{x},\alpha(\tilde{y}))=\Theta(\alpha(\tilde{y}))\int_0^1s^\nu\Theta^{\frac{1}{2}}(\tilde{x}_s)\Delta_{\tilde{g}}u_\nu(\tilde{x},\tilde{x}_s)ds,\ \ \ \nu\geq0.
\end{cases}
\end{equation}
where $\Theta(\alpha(\tilde{y}))=(\det g_{ij}(\alpha(\tilde{y})))^{\frac{1}{2}}$, and $(\tilde{x}_s)_{s\in[0,1]}$ is the minimizing geodesic from $\tilde{x}$ to $\alpha(\tilde{y})$ parametrized proportionally to arc length.
(see \cite{Berard} and \cite{SZ}).
First note that for $N\geq n+\frac{3}{2}$, by using the energy estimates (see \cite{let} Theorem 3.1.5), one can show that $|R_N(t,\tilde{x},\alpha(\tilde{y}))|=O(e^{dt})$, for some constant $d>0$, so that it is small compared to the first $N$ terms.
\begin{theorem}\label{theorem3.1}
Let $(M,g)$ be an $n$-dimensional compact Riemannian manifold with nonpositive curvature, and let $(\mathbb{R}^n,\tilde{g})$ be the universal covering of $(M,g)$. Then if $N\geq n+\frac{3}{2}$, in local coordinates,
\begin{equation}
(\cos t\sqrt{-\Delta_{\tilde{g}}})f(\tilde{x})=\int K_N(t,\tilde{x};\tilde{y})f(\tilde{y})dV_{\tilde{g}}(\tilde{y})+\int R_N(t,\tilde{x};\tilde{y})f(\tilde{y})dV_{\tilde{g}}(\tilde{y}),
\end{equation}
where
\begin{equation}
K_N(t,\tilde{x};\tilde{y})=\sum_{\nu=0}^N w_\nu(\tilde{x},\tilde{y})\mathcal{E}_\nu(t,d_{\tilde{g}}(\tilde{x},\tilde{y})),
\end{equation}
with the remainder kernel $R_N$ satisfying
\begin{equation}\label{3.5}
|R_N(t,\tilde{x};\tilde{y})|=O(e^{dt}),
\end{equation}
for some number $d>0$.
\end{theorem}
This comes from Equation (42) in \cite{Berard}. The proof can be found in \cite{Berard}.
By this theorem,
\begin{equation}\label{3.14}
\int_{-T}^T|R_N(t,\tilde{x},\alpha(\tilde{y}))|dt\leq C\int_0^Te^{dt}dt=O(e^{dT}).
\end{equation}
Moreover, we have the following estimates for the $\mathcal{E}_\nu(t,r)$.
\begin{theorem}For $\nu=0,1,2,...$ and $\mathcal{E}_\nu(t,r)$ defined above, we have
\begin{equation}\label{3.22}|\int \hat{\rho}(t)e^{it\lambda}\mathcal{E}_\nu(t,r)dt|=O(\lambda^{n-1-2\nu}),\ \ \ \lambda\geq1.
\end{equation}
\end{theorem}
\begin{proof}
Recall that
\begin{equation}
\mathcal{E}_0(t,r)=\dfrac{H(t)}{(2\pi)^n}\int_{\mathbb{R}^n}e^{i\Phi(\tilde{x},\tilde{y})\cdot\xi}\cos t|\xi|d\xi,
\end{equation}
so that
\begin{equation}
\begin{split}
|\int \hat{\rho}(t)e^{it\lambda}\mathcal{E}_0(t,r)dt| & =|\frac{1}{2(2\pi)^n}\int\int_{\mathbb{R}^n}\hat{\rho}(t)e^{it(\lambda\pm|\xi|)+i\Phi(\tilde{x},\tilde{y})\cdot\xi}d\xi dt|\\
& \approx |\int_{\mathbb{R}^n}[\rho(\lambda+|\xi|)+\rho(\lambda-|\xi|)]e^{i\Phi(\tilde{x},\tilde{y})\cdot\xi}d\xi|\\
& \leq \int_{\mathbb{R}^n}|\rho(\lambda+|\xi|)|+|\rho(\lambda-|\xi|)|d\xi\\
& =O(\lambda^{n-1}).
\end{split}
\end{equation}
Using the relation $\dfrac{\partial \mathcal{E}_\nu}{\partial t}=\frac{t}{2}\mathcal{E}_{\nu-1}$ from the definition of $\mathcal{E}_\nu$ and integrating by parts, we get that for any $\nu=1,2,3,...$,
\begin{equation}
\int \hat{\rho}(t)e^{it\lambda}\mathcal{E}_\nu(t,r)dt=O(\lambda^{n-1-2\nu}).
\end{equation}
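Concretely, one step of this integration by parts reads (the boundary terms vanish since $\hat{\rho}$ is compactly supported, and we use the relation $\partial_t\mathcal{E}_\nu=\frac{t}{2}\mathcal{E}_{\nu-1}$ stated above):
\begin{equation*}
\int \hat{\rho}(t)e^{it\lambda}\mathcal{E}_\nu(t,r)dt=-\frac{1}{i\lambda}\int e^{it\lambda}\Bigl(\hat{\rho}'(t)\mathcal{E}_\nu(t,r)+\frac{t}{2}\hat{\rho}(t)\mathcal{E}_{\nu-1}(t,r)\Bigr)dt,
\end{equation*}
and iterating this, together with the bound already established for $\mathcal{E}_{\nu-1}$, yields the claimed gain in powers of $\lambda$.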
\end{proof}
The following theorem, due to B\'erard \cite{Berard}, concerns the size of the coefficients $u_k(\tilde{x},\tilde{y})$.
\begin{theorem}\label{theorem3.3}
Let $(M,g)$ be a compact $n$-dimensional Riemannian manifold and let $\sigma$ be its sectional curvature (hence, there is a number $\Gamma$ such that $-\Gamma^2\leq\sigma$). Assume that either
1. $n=2$, and $M$ does not have conjugate points;
or
2. $-\Gamma^2\leq\sigma\leq0$, i.e. $M$ has nonpositive sectional curvature.
Let $(\mathbb{R}^n,\tilde{g})$ be the universal covering of $(M,g)$, and let $\tilde{u}_\nu$, $\nu=0,1,2,...$ be defined by the relations \eqref{3.2}, then for any integers $l$ and $\nu$
\begin{equation}
\Delta_{\tilde{g}}^l\tilde{u}_\nu(\tilde{x},\tilde{y})=O(\exp(O(d_{\tilde{g}}(\tilde{x},\tilde{y})))).
\end{equation}
\end{theorem}
The proof can be found in \cite{Berard} Appendix: Growth of the Functions $u_k(x,y)$.
Since $w_\nu(\tilde{x},\alpha(\tilde{y}))$ is a constant times $\tilde{u}_\nu(\tilde{x},\alpha(\tilde{y}))$, this theorem tells us that $|w_\nu(\tilde{x},\alpha(\tilde{y}))|=O(\exp(c_\nu d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))))$, for some constant $c_\nu$ depending on $\nu$.
Moreover, if we set $\psi(t)=\hat{\rho}(\frac{t}{T})$ and let $\tilde{\psi}$ be the inverse Fourier transform of $\psi$, then $\tilde{\psi}\in\mathcal{S}(\mathbb{R})$ satisfies
\begin{equation}
|\tilde{\psi}(t)|\leq T(1+T|t|)^{-N},\ \ \ \text{for all}\ N\in\mathbb{N}.
\end{equation}
Therefore,
\begin{equation}\label{3.31}
\begin{split}
&\sum_{\nu=1}^N |w_\nu(\tilde{x},\alpha(\tilde{y}))\int_{-T}^T\hat{\rho}(\frac{t}{T})e^{it\lambda}\mathcal{E}_\nu(t,d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))dt|\\
=&\sum_{\nu=1}^N O(T(T\lambda)^{n-1-2\nu}\exp(c_\nu d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))))\\
=&O(T^{n-2}\lambda^{n-3}\exp(C_N d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))),
\end{split}
\end{equation}
for some $C_N$ depending on $c_1, c_2,..., c_{N-1}$.
All in all, taking $n=2$, and disregarding the integral of the remainder kernel,
\begin{equation}\label{3.32}
\begin{split}
&|\int_{-T}^T\hat{\rho}(\frac{t}{T})\cos(t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y}))e^{it\lambda}dt|\\
=&|\int_{-T}^T\hat{\rho}(\frac{t}{T})\dfrac{w_0(\tilde{x},\alpha(\tilde{y}))}{4\pi^2}\sum_{\pm}\int_{\mathbb{R}^2}e^{i\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\xi\pm
it|\xi|}e^{it\lambda}d\xi dt|+O(\lambda^{-1}\exp(C_N d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))).
\end{split}
\end{equation}
On the other hand, $w_0(\tilde{x},\tilde{y})$ satisfies a better estimate. By applying G\"unther's comparison theorem \cite{Gu}, under the assumption of nonpositive curvature, one can show that $|w_0(\tilde{x},\tilde{y})|=O(1)$. The proof is given by Sogge and Zelditch in \cite{SZ} for $n=2$; let us consider the case of general dimension $n$. In the geodesic polar coordinates we are
using, $t\Theta$, $t>0$, $\Theta\in \mathbb{S}^{n-1}$, for $(\mathbb{R}^n, \tilde g)$, the metric $\tilde g$ takes the form
\begin{equation}
ds^2=dt^2+\mathcal{A}^2(t,\Theta)\, d\Theta^2,
\end{equation}
where we may assume that $\mathcal{A}(t,\Theta)>0$ for $t>0$. Consequently, the volume element in these coordinates is given by
\begin{equation}\label{3.7}
dV_{\tilde g}(t,\Theta)=\mathcal{A}(t,\Theta)\, dt d\Theta,
\end{equation}
and by G\"unther's \cite{Gu} comparison theorem if the curvature of $(M,g)$, which is the same as that of $(\mathbb{R}^n, \tilde g)$ is nonpositive, we have
\begin{equation}
\mathcal{A}(t,\Theta)\ge t^{n-1},
\end{equation}
where $t^{n-1}$ is the volume density of Euclidean space.
While in geodesic normal coordinates about $x$, we have $$w_0(x,y)=\bigl(\, \text{det } g_{ij}(y)\, \bigr)^{-\frac14},$$ (see \cite{Berard}, \cite{Had} or \S2.4 in \cite{let}). If $y$ has geodesic polar coordinates $(t,\Theta)$ about $x$, then $t=d_{\tilde g}(x,y)$, so that $w_0(x,y)=\sqrt{t^{n-1}/{\mathcal A}(t,\Theta)}\leq1$.
Therefore,
\begin{equation}\label{2.18}
\begin{split}
|\sum_{\pm}\int_{\mathbb{R}^2}\int_{-T}^Te^{i\Phi(\tilde{x},\tilde{y})\cdot\xi\pm it|\xi|+it\lambda}\hat{\rho}(\frac{t}{T})dtd\xi|= & |\int_{\mathbb{R}^2}e^{i\Phi(\tilde{x},\tilde{y})\cdot\xi}(\tilde{\psi}(\lambda+|\xi|)+\tilde{\psi}(\lambda-|\xi|))d\xi| \\
\leq & \int_{\mathbb{R}^2}|\tilde{\psi}(\lambda+|\xi|)|d\xi+\int_{\mathbb{R}^2}|\tilde{\psi}(\lambda-|\xi|)|d\xi.
\end{split}
\end{equation}
Note that $\tilde{\psi}(\lambda+|\xi|)=O(T(1+\lambda+|\xi|)^{-N})$, for any $N\in\mathbb{N}$, so $\int_{\mathbb{R}^2}\tilde{\psi}(\lambda+|\xi|)d\xi$ can be arbitrarily small, while $\tilde{\psi}(\lambda-|\xi|)=O(T(1+T|\lambda-|\xi||)^{-N})$, for any $N\in\mathbb{N}$, so that $\int_{\mathbb{R}^2}|\tilde{\psi}(\lambda-|\xi|)|d\xi\lesssim T\int_{\lambda-1\leq|\xi|\leq\lambda+1}(1+T|\lambda-|\xi||)^{-N}d\xi=O(\lambda)$, provided that $\lambda\geq1$. So
\begin{equation}\label{2.19}
\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\tilde{y})e^{it\lambda}dt=O(\lambda)+O(\lambda^{-1}\exp(C_N d_{\tilde{g}}(\tilde{x},\tilde{y}))),
\end{equation}
disregarding the integral of the remainder kernel.
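For completeness, the $O(\lambda)$ bound used above can be spelled out; this is a routine computation (the substitution $s=r-\lambda$ is ours):
\begin{equation*}
\int_{\mathbb{R}^2}|\tilde{\psi}(\lambda-|\xi|)|\,d\xi
\lesssim T\int_0^\infty(1+T|\lambda-r|)^{-N}r\,dr
\leq T\int_{\mathbb{R}}(1+T|s|)^{-N}(\lambda+|s|)\,ds
=O(\lambda)+O(T^{-1})=O(\lambda),
\end{equation*}
since, after the change of variables $u=Ts$, we have $T\int_{\mathbb{R}}(1+T|s|)^{-N}ds=O(1)$ and $T\int_{\mathbb{R}}(1+T|s|)^{-N}|s|\,ds=O(T^{-1})$.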
However, this estimate can be improved when $d_{\tilde{g}}(\tilde{x},\tilde{y})\geq\frac{1}{\lambda}$.
As we can see, the main term of
\begin{equation}
\cos(t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\tilde{y})=\dfrac{w_0(\tilde{x},\tilde{y})}{4\pi^2}\sum_{\pm}\int_{\mathbb{R}^2}e^{i\Phi(\tilde{x},\tilde{y})\cdot\xi\pm
it|\xi|}d\xi+\sum_{\nu=1}^N w_\nu(\tilde{x},\tilde{y})\mathcal{E}_\nu(t,d_{\tilde{g}}(\tilde{x},\tilde{y}))+R_N(t,\tilde{x},\tilde{y})
\end{equation}
comes from the first term, and the corresponding term in $\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\tilde{y})e^{it\lambda}dt$ is bounded by
\begin{equation}
C|\sum_\pm\int_{-T}^T\int_{\mathbb{R}^2}\hat{\rho}(\frac{t}{T})e^{i\Phi(\tilde{x},\tilde{y})\cdot\xi\pm it|\xi|}e^{it\lambda}dtd\xi|= C|\sum_\pm\int_{-T}^T\int_0^\infty\int_0^{2\pi}\hat{\rho}(\frac{t}{T})e^{ir\Phi(\tilde{x},\tilde{y})\cdot\Theta\pm itr+it\lambda}rdtdrd\theta|.
\end{equation}
Integrating with respect to $t$ first, the quantity above is bounded by a constant times
\begin{equation}
\sum_\pm\int_0^\infty\int_0^{2\pi}\tilde{\psi}(\lambda\pm r)e^{ir\Phi(\tilde{x},\tilde{y})\cdot\Theta}rd\theta dr.
\end{equation}
Because $\tilde{\psi}(\lambda\pm r)\lesssim T(1+T|\lambda\pm r|)^{-N}$ for any $N>0$, the term with $\tilde{\psi}(\lambda+r)$ in the sum is $O(1)$, while the other term with $\tilde{\psi}(\lambda-r)$ is significant only when $r$ is comparable to $\lambda$, say, $c_1\lambda<r<c_2\lambda$ for some constants $c_1$ and $c_2$. In this case, as we assumed that $d_{\tilde{g}}(\tilde{x},\tilde{y})\geq\frac{1}{\lambda}$, we can also assume that $d_{\tilde{g}}(\tilde{x},\tilde{y})\gtrsim\frac{1}{r}$.\par
By Lemma \ref{lemma4.3}, $\int_0^{2\pi}e^{iw\cdot\Theta}d\theta=\sqrt{2\pi}\, |w|^{-1/2}\sum_{\pm}e^{\pm i|w|}+O(|w|^{-3/2})$ for $|w|\geq1$, where $w=r\Phi(\tilde{x},\tilde{y})$. Integrating in $\theta$, the above quantity is then controlled by
\begin{equation}
\begin{split}
& |\sum_\pm\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)|rd_{\tilde{g}}(\tilde{x},\tilde{y})|^{-1/2}e^{\pm ird_{\tilde{g}}(\tilde{x},\tilde{y})}rdr+\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)|rd_{\tilde{g}}(\tilde{x},\tilde{y})|^{-3/2}rdr|\\
\leq & d_{\tilde{g}}(\tilde{x},\tilde{y})^{-1/2}\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)r^{1/2}dr+d_{\tilde{g}}(\tilde{x},\tilde{y})^{-3/2}\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)r^{-1/2}dr\\
= & d_{\tilde{g}}(\tilde{x},\tilde{y})^{-1/2}O(\lambda^{1/2})+O(d_{\tilde{g}}(\tilde{x},\tilde{y})^{-1})\\
= & O((\frac{\lambda}{d_{\tilde{g}}(\tilde{x},\tilde{y})})^{1/2}).
\end{split}
\end{equation}
Note that these two equalities are still valid when $c_1$ and $c_2$ are changed to 0 and $\infty$.
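The two equalities just mentioned rest on the following elementary bounds, which we record for completeness:
\begin{equation*}
\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)\,r^{\pm\frac12}\,dr
\lesssim \lambda^{\pm\frac12}\, T\int_{\mathbb{R}}(1+T|s|)^{-N}\,ds
=O(\lambda^{\pm\frac12}),
\end{equation*}
since $r\approx\lambda$ on the range of integration. The second term is then $d_{\tilde{g}}(\tilde{x},\tilde{y})^{-3/2}O(\lambda^{-1/2})=O(d_{\tilde{g}}(\tilde{x},\tilde{y})^{-1})$, because $\lambda\, d_{\tilde{g}}(\tilde{x},\tilde{y})\geq1$.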
Therefore, when $d_{\tilde{g}}(\tilde{x},\tilde{y})\geq\frac{1}{\lambda}$,
\begin{equation}
|\dfrac{w_0(\tilde{x},\tilde{y})}{4\pi^2}\sum_{\pm}\int_{-T}^T\int_{\mathbb{R}^2}\hat{\rho}(\frac{t}{T})e^{i\Phi(\tilde{x},\tilde{y})\cdot\xi\pm
it|\xi|}e^{it\lambda}d\xi dt|=O((\frac{\lambda}{d_{\tilde{g}}(\tilde{x},\tilde{y})})^{\frac{1}{2}}).
\end{equation}
Now we have finished the estimates for $\alpha=Id$. For $\alpha\neq Id$, note that we can find a positive constant $C_p$, depending on the universal covering map $p$ of the manifold $M$, such that
\begin{equation}\label{3.37}
d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))>C_p,
\end{equation}
for all $\alpha\in\Aut$ different from $Id$. The constant $C_p$ comes from the following fact: if we assume that the injectivity radius of $M$ is greater than some number, say 1, and that $x$ is the center of a geodesic ball of radius one contained in $M$, then we can choose the fundamental domain $D$ such that $x$ is at least some distance, say $C_p>1$, away from any translate $\alpha(D)$ of $D$, for any $\alpha\in\Aut$ other than the identity. Therefore, we may use the previous estimates for $d_{\tilde{g}}(\tilde{x},\tilde{y})\geq\frac{1}{\lambda}$, assuming $\lambda$ is larger than $\frac{1}{C_p}$. Using the Hadamard parametrix (see \cite{SZ}) as before, and estimating only the main term,
\begin{equation}\label{3.13}
\begin{split}
& |\int_{-T}^T\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y}))e^{it\lambda}dt|\\
\lesssim & |(2\pi)^{-2}\int_{\mathbb{R}^2}\int_{-T}^T\hat{\rho}(\frac{t}{T})e^{i\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\xi}\cos(t|\xi|)e^{it\lambda}dt|\\
\lesssim & \sum_{\pm}|\int_0^{2\pi}\int_0^\infty\int^T_{-T}e^{ir\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\Theta\pm itr+it\lambda}\hat{\rho}(\frac{t}{T})rdtdrd\theta|\\
\lesssim & \sum_\pm|\int_0^\infty\int_0^{2\pi}\tilde{\psi}(\lambda\pm r)e^{ir\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\Theta}rd\theta dr|\\
\lesssim & \sum_\pm\int_0^\infty|\tilde{\psi}(\lambda-r)||rd_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))|^{-\frac{1}{2}}rdr+\int_0^\infty|\tilde{\psi}(\lambda-r)||rd_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))|^{-\frac{3}{2}}rdr+O(1)\\
= & O\Big(\big(\dfrac{\lambda}{d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))}\big)^{\frac{1}{2}}\Big).
\end{split}
\end{equation}
Now we have shown all the estimates \eqref{2.9}, \eqref{2.10}, and \eqref{2.11}. In total, $K(x,y)$ is
\begin{equation}\label{3.33}O(\dfrac{1}{T}(\dfrac{\lambda}{\lambda^{-1}+d_{\tilde{g}}(\tilde{x},\tilde{y})})^{\frac{1}{2}})+\sum_{Id\neq\alpha\in \Aut}[O(\dfrac{1}{T}(\dfrac{\lambda}{d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))})^{1/2})+O(\dfrac{e^{ET}}{T})],
\end{equation}
where $E=\max\{C_N,d\}+1$.
Note that, by the finite propagation speed of the wave operator $\partial^2_t-\Delta_{\tilde{g}}$, we have $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))\leq T$ on the support of $\cos(t\sqrt{-\Delta_{\tilde g}})(\tilde{x},\alpha(\tilde{y}))$. Since $M$ is a compact manifold with nonpositive curvature, the number of $\alpha$'s such that $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))\leq T$ is at most $e^{cT}$\footnote{The number of $\alpha$'s such that $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))\leq T$ is also bounded below by $e^{c'T}$ for some constant $c'$ depending on the curvature of the manifold, by G\"unther's and Bishop's comparison theorems in \cite{IS} (see also \cite{SZ}).}, for some constant $c$ depending on the curvature, by the Bishop comparison theorem (see \cite{IS}, \cite{SZ}).
We take the $L^r(\gamma)$ norm of each individual term first; then, by Minkowski's inequality, $||K(x,\cdot)||_{L^r(\gamma[-1,1])}$ is bounded by the sum of these norms. Also note that we may consider the geodesic distance to be comparable to the arc length of the geodesic.
The first term is simple: it is controlled by a constant times
\begin{equation}
\dfrac{1}{T}(\int_0^1(\dfrac{\lambda}{\lambda^{-1}+\tau})^{\frac{r}{2}}d\tau)^{1/r}=O(\dfrac{\lambda^{\frac{p-2}{p}}}{T}).
\end{equation}
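To verify the right-hand side, one can split the integral on the left at $\tau=\lambda^{-1}$ (here $r=\frac p2>2$, since $p>4$); this elementary computation is spelled out for the reader's convenience:
\begin{equation*}
\int_0^1\Big(\frac{\lambda}{\lambda^{-1}+\tau}\Big)^{\frac r2}d\tau
\leq\int_0^{\lambda^{-1}}\lambda^{r}\,d\tau+\int_{\lambda^{-1}}^1\Big(\frac{\lambda}{\tau}\Big)^{\frac r2}d\tau
=O(\lambda^{r-1}),
\end{equation*}
so that taking the $r$-th root gives $O(\lambda^{1-\frac1r})=O(\lambda^{\frac{p-2}{p}})$ with $r=\frac p2$.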
Accounting for the number of such $\alpha$'s, the second term is bounded by a constant times
\begin{equation}
e^{cT}\cdot\dfrac{\lambda^{\frac{1}{2}}}{T}(\int_0^1(\frac{1}{C_p})^{\frac{r}{2}}d\tau)^{\frac{1}{r}}=O(e^{cT}\dfrac{\lambda^{\frac{1}{2}}}{T}).
\end{equation}
Therefore,
\begin{equation}\label{3.17}
\begin{split}
||K(x,\cdot)||_{L^r(\gamma[-1,1])}= & O(\dfrac{\lambda^{\frac{p-2}{p}}}{T})+O(e^{cT}\frac{\lambda^{\frac{1}{2}}}{T})+O(\dfrac{e^{(c+E)T}}{T})\\
= & I+II+III.
\end{split}
\end{equation}
Now take $T=\beta\log\lambda$, where $\beta\leq\dfrac{p-4}{2(c+E)p}$. (Note that we can assume that $c\neq0$; otherwise, there is only one $\alpha$ under consideration, namely $\alpha=Id$.) Then
\begin{equation}
I=II=O(\dfrac{\lambda^{\frac{p-2}{p}}}{\log\lambda}),
\end{equation}
and
\begin{equation}
III=o(\dfrac{\lambda^{\frac{p-2}{p}}}{\log\lambda}).
\end{equation}
Summing up, we get that
\begin{equation}
||K(x,\cdot)||_{L^r(\gamma[-1,1])}=O\big(\frac{\lambda^{\frac{p-2}{p}}}{\log\lambda}\big).
\end{equation}
Now applying Young's inequality with $r=\frac{p}{2}$, we get that $$\forall f\in L^{p'}(\gamma),\ ||\chi^\lambda_T (\chi^\lambda_T)^* f||_{L^p(\gamma)}\lesssim\frac{(1+\lambda)^{1-\frac{2}{p}}}{\log\lambda}||f||_{L^{p'}(\gamma)}.$$ Therefore, Theorem~\ref{theorem2} is proved.
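For reference, the form of Young's inequality used in the last step is the standard kernel bound: for an integral operator with kernel $K$,
\begin{equation*}
\Big\|\int K(\cdot,y)f(y)\,dy\Big\|_{L^{q}}\leq A\,\|f\|_{L^{p_0}},\qquad
1+\frac1q=\frac1r+\frac1{p_0},
\end{equation*}
where $A=\max\big(\sup_x\|K(x,\cdot)\|_{L^r},\,\sup_y\|K(\cdot,y)\|_{L^r}\big)$. Taking $q=p$ and $p_0=p'$ forces $\frac1r=1+\frac1p-\frac1{p'}=\frac2p$, i.e., $r=\frac p2$.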
\newsection{Higher dimensions, $n\geq3$}
Now we move on to the case $n\geq3$. We would like to prove Theorem \ref{theorem1} directly for the full range of $p$, but the method of the last section only yields it under the condition $p>\frac{4k}{n-1}$. While we only need $p=\infty$ later, for the interpolation that gives the full version of Theorem \ref{theorem1}, we prove as much as we can for the moment.
\begin{theorem}\label{theorem4.1}
Let $(M,g)$ be a compact smooth $n$-dimensional boundaryless Riemannian manifold with nonpositive curvature, and let $\Sigma$ be a $k$-dimensional compact smooth submanifold of $M$. Then for any $f\in L^2(M)$, we have the following estimate
\begin{equation}\label{4.1}
||\sum_{|\lambda_j-\lambda|\leq(\log\lambda)^{-1}}E_jf||_{L^p(\Sigma)}\lesssim\frac{\lambda^{\delta(p)}}{(\log \lambda)^{\frac{1}{2}}}||f||_{L^2(M)},\ \ \ \forall p>\frac{4k}{n-1},
\end{equation}
where
\begin{equation}
\delta(p)=\frac{n-1}{2}-\frac{k}{p}.
\end{equation}
\end{theorem}
\begin{remark}
Note that although this estimate does not cover the full range $p>2$ for general $k<n$, we do get the complete range $p\geq2$ when $k$ and $n$ satisfy $\frac{4k}{n-1}<2$. That is, we get the improvement for all $p\geq2$ when $k=1$, $n>3$; $k=2$, $n>5$; etc.
\end{remark}
For $n\geq3$, for the sake of using interpolation later, we need to insert a bump function\footnote{We do not need the bump function if we simply want to prove Theorem \ref{theorem4.1}.}. Take $\varphi\in C_0^\infty(\mathbb{R})$ such that $\varphi(t)=1$ when $|t|\leq\frac{1}{2}$ and $\varphi(t)=0$ when $|t|>1$. Then we only have to consider the following kernel\footnote{This kernel is different from the one in \eqref{2.5}.}
\begin{equation}
K(x,y)=\frac{1}{\pi T}\int_{-T}^T(1-\varphi(t))\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_g})(x,y)e^{it\lambda}dt,
\end{equation}
whose integrand is non-zero only when $|t|>\frac{1}{2}$. In the following discussion, we will sometimes only show estimates for $K(x,y)$ for $t>\frac{1}{2}$, as the part with $t<-\frac{1}{2}$ can be handled similarly.
The reason why we only consider the above kernel $K(x,y)$ is the following lemma.
\begin{lemma}\label{lemma4.4}
Let $\varphi\in C_0^\infty(\mathbb{R})$ be such that $\varphi(t)=1$ when $|t|\leq\frac{1}{2}$ and $\varphi(t)=0$ when $|t|>1$, and let
\begin{equation}
\tilde{K}(x,y)=\frac{1}{\pi T}\int_{-1}^1\varphi(t)\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_g})(x,y)e^{it\lambda}dt,
\end{equation}
then
\begin{equation}
\sup_x||\tilde{K}(x,\cdot)||_{L^r(\Sigma)}=O(\dfrac{\lambda^{2\delta(p)}}{\log\lambda}).
\end{equation}
\end{lemma}
We will postpone the proof to the end of this section.
Now we are ready to prove Theorem \ref{theorem4.1}; the argument is essentially the same as in the lower-dimensional case, and what we need to show is \eqref{2.6}. By a partition of unity, we may choose some point $x\in\Sigma$ and assume that $\Sigma$ lies within a geodesic ball of radius 1 centered at $x$; in geodesic normal coordinates centered at $x$, parametrize $\Sigma$ as
\begin{equation}\nonumber
\Sigma=\{(t,\Theta)\,|\,y=\exp_x(t\Theta)\in\Sigma,\,t\in[-1,1],\,\Theta\in\mathbb{S}^{k-1}\}.
\end{equation}
Applying the Hadamard parametrix, for any $\alpha\in\Aut$,
\begin{equation}\label{4.9}
\cos(t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y}))=\dfrac{w_0(\tilde{x},\alpha(\tilde{y}))}{(2\pi)^n}\sum_{\pm}\int_{\mathbb{R}^n}e^{i\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\xi\pm
it|\xi|}d\xi+\sum_{\nu=1}^N w_\nu(\tilde{x},\alpha(\tilde{y}))\mathcal{E}_\nu(t,d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))+R_N(t,\tilde{x},\alpha(\tilde{y})),
\end{equation}
where
$|\Phi(\tilde{x},\alpha(\tilde{y}))|=d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))$, and $\mathcal{E}_\nu,\nu=1,2,3,...$ are those described in Section 3.
By Theorem \ref{theorem3.3},
\begin{equation}
\int_{-T}^T|R_N(t,\tilde{x},\alpha(\tilde{y}))|dt\lesssim\int_0^Te^{dt}dt=O(e^{dT}).
\end{equation}
Moreover, by \eqref{3.22}, for $\nu=1,2,3,...$,
\begin{equation}|\int_{-T}^T(1-\varphi(t))\hat{\rho}(\frac{t}{T})e^{it\lambda}\mathcal{E}_\nu(t,d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))dt|=O(T(T\lambda)^{n-1-2\nu}).
\end{equation}
Since $|w_\nu(\tilde{x},\alpha(\tilde{y}))|=O(\exp(c_\nu d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))))$ by \cite{Berard}, for some constant $c_\nu$ depending on $\nu$,
\begin{equation}
\begin{split}
&\sum_{\nu=1}^N |w_\nu(\tilde{x},\alpha(\tilde{y}))\int_{-T}^T(1-\varphi(t))\hat{\rho}(\frac{t}{T})e^{it\lambda}\mathcal{E}_\nu(t,d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))dt|\\
=&\sum_{\nu=1}^N O(T(T\lambda)^{n-1-2\nu}\exp(c_\nu d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))))\\
=&O(T^{n-2}\lambda^{n-3}\exp(C_N d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))),
\end{split}
\end{equation}
for some $C_N$ depending on $c_1, c_2,..., c_{N-1}$.
All in all, disregarding the integral of the remainder kernel,
\begin{multline}
|\int_{-T}^T(1-\varphi(t))\hat{\rho}(\frac{t}{T})\cos(t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y}))e^{it\lambda}dt|\\
=|\int_{-T}^T(1-\varphi(t))\hat{\rho}(\frac{t}{T})\dfrac{w_0(\tilde{x},\alpha(\tilde{y}))}{(2\pi)^n}\sum_{\pm}\int_{\mathbb{R}^n}e^{i\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\xi\pm
it|\xi|}e^{it\lambda}d\xi dt|+O(T^{n-2}\lambda^{n-3}\exp(C_N d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))).
\end{multline}
On the other hand, $|w_0(\tilde{x},\alpha(\tilde{y}))|=O(1)$ (see \cite{SZ}), by applying G\"unther's comparison theorem in \cite{Gu}. We next estimate
\begin{equation}\label{4.13}
|\sum_{\pm}\int_{\mathbb{R}^n}\int_{-T}^T(1-\varphi(t))e^{i\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\xi\pm it|\xi|+it\lambda}\hat{\rho}(\frac{t}{T})dtd\xi|,
\end{equation}
where, by the stationary phase estimates in \cite{soggebook}, we may assume as before that $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))>\frac{1}{2}$.
Denote $\psi(t)=(1-\varphi(t))\hat{\rho}(\frac{t}{T})$, and let $\tilde{\psi}$ be the inverse Fourier transform of $\psi$.
Again we have, $\tilde{\psi}(\lambda+|\xi|)=O(T(1+\lambda+|\xi|)^{-N})$, for any $N\in\mathbb{N}$, so $\int_{\mathbb{R}^n}\tilde{\psi}(\lambda+|\xi|)d\xi$ can be arbitrarily small, while $\tilde{\psi}(\lambda-|\xi|)=O(T(1+T|\lambda-|\xi||)^{-N})$.
Integrating \eqref{4.13} with respect to $t$ first, it is bounded by a constant times
\begin{equation}
\sum_\pm\int_0^\infty\int_{\mathbb{S}^{n-1}}\tilde{\psi}(\lambda\pm r)e^{ir\Phi(\tilde{x},\alpha(\tilde{y}))\cdot\Theta}r^{n-1}d\Theta dr.
\end{equation}
Because $\tilde{\psi}(\lambda\pm r)\lesssim T(1+T|\lambda\pm r|)^{-N}$ for any $N>0$, the term with $\tilde{\psi}(\lambda+r)$ in the sum is $O(1)$, while the other term with $\tilde{\psi}(\lambda-r)$ is significant only when $r$ is comparable to $\lambda$, say, $c_1\lambda<r<c_2\lambda$ for some constants $c_1$ and $c_2$. In this case, as we assumed that $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))>\frac{1}{2}$, we can also assume that $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))\gtrsim\frac{1}{r}$ for large $\lambda$.
By Lemma \ref{lemma4.3}, $\int_{\mathbb{S}^{n-1}}e^{iw\cdot\Theta}d\Theta=(2\pi)^{\frac{n-1}{2}} |w|^{-\frac{n-1}{2}}\sum_{\pm}e^{\pm i|w|}+O(|w|^{-\frac{n+1}{2}})$ for $|w|\geq1$, where $w=r\Phi(\tilde{x},\alpha(\tilde{y}))$. Integrating in $\Theta$, the above quantity is then controlled by
\begin{equation}
\begin{split}
& |\sum_\pm\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)|rd_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))|^{-\frac{n-1}{2}}e^{\pm ird_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))}r^{n-1}dr+\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)|rd_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))|^{-\frac{n+1}{2}}r^{n-1}dr|\\
\leq & d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))^{-\frac{n-1}{2}}\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)r^{\frac{n-1}{2}}dr+d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))^{-\frac{n+1}{2}}\int_{c_1\lambda}^{c_2\lambda}\tilde{\psi}(\lambda-r)r^{\frac{n-3}{2}}dr\\
= & O((\frac{\lambda}{d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))})^{\frac{n-1}{2}}).
\end{split}
\end{equation}
Therefore, disregarding the integral of the remainder kernel,
\begin{equation}
\int_{-T}^T(1-\varphi(t))\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_{\tilde{g}}})(\tilde{x},\alpha(\tilde{y}))e^{it\lambda}dt=O((\frac{\lambda}{d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))})^{\frac{n-1}{2}})+O(T^{n-2}\lambda^{n-3}\exp(C_N d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y})))).
\end{equation}
Now $K(x,y)$ is
\begin{equation}\label{4.16}
\sum_{\alpha\in \Aut}[O(\frac{1}{T}(\dfrac{\lambda}{d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))})^{\frac{n-1}{2}})+O(\dfrac{e^{ET}}{T})],
\end{equation}
where $E=\max\{C_N,d\}+1$.
Here we still have that the number of $\alpha$'s such that $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))\leq T$ is at most $e^{cT}$, for some constant $c$ depending on the curvature, and that there exists a constant $C_p$ such that $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))>C_p$ for any $\alpha\in\Aut$ different from the identity.
Now we take the $L^r(\Sigma)$ norm of each individual term. By \eqref{3.37}, and accounting for the number of such $\alpha$'s, the first one is bounded by a constant times
\begin{equation}
\dfrac{e^{cT}\lambda^{\frac{n-1}{2}}}{T}(\int_0^1C_p^{-\frac{n-1}{2}\cdot r}\tau^{k-1}d\tau)^{\frac{1}{r}}=O(\dfrac{e^{cT}\lambda^{\frac{n-1}{2}}}{T}).
\end{equation}
Therefore,
\begin{equation}
\begin{split}
||K(x,\cdot)||_{L^r(\Sigma)}
= &O(\dfrac{e^{cT}\lambda^{\frac{n-1}{2}}}{T})+O(\dfrac{e^{(c+E)T}}{T})\\
= & I+II.
\end{split}
\end{equation}
Now take $T=\beta\log\lambda$, where $\beta=\dfrac{\frac{n-1}{2}-\frac{2k}{p}-\delta}{c+E}$, with $\delta$ satisfying $0<\delta<\frac{n-1}{2}-\frac{2k}{p}$. Note that $\frac{n-1}{2}-\frac{2k}{p}>0$ when $p>\frac{4k}{n-1}$. Then
\begin{equation}
I=O(\dfrac{\lambda^{\beta c+\frac{n-1}{2}}}{\log\lambda})=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{2k}{p}-\delta+\frac{n-1}{2}}}{\log\lambda})=o(\dfrac{\lambda^{n-1-\frac{2k}{p}}}{\log\lambda}),
\end{equation}
and
\begin{equation}
II=O(\dfrac{\lambda^{\beta(c+E)}}{\log\lambda})=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{2k}{p}-\delta}}{\log\lambda})=o(\frac{\lambda^{n-1-\frac{2k}{p}}}{\log\lambda}).
\end{equation}
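The exponent bookkeeping behind these two bounds is simply the following (recalling $e^{T}=\lambda^{\beta}$ and $\beta(c+E)=\frac{n-1}{2}-\frac{2k}{p}-\delta$):
\begin{equation*}
\beta c+\frac{n-1}{2}\;\leq\;\beta(c+E)+\frac{n-1}{2}
\;=\;(n-1)-\frac{2k}{p}-\delta\;<\;(n-1)-\frac{2k}{p},
\end{equation*}
so that $I=e^{cT}\lambda^{\frac{n-1}{2}}/T\lesssim\lambda^{(n-1)-\frac{2k}{p}-\delta}/\log\lambda$ and $II=e^{(c+E)T}/T=\lambda^{\frac{n-1}{2}-\frac{2k}{p}-\delta}/(\beta\log\lambda)$, both of which are $o(\lambda^{n-1-\frac{2k}{p}}/\log\lambda)$.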
Summing up, we get that
\begin{equation}
||K(x,\cdot)||_{L^r(\Sigma)}=o\big(\dfrac{\lambda^{n-1-\frac{2k}{p}}}{\log\lambda}\big).
\end{equation}
Now applying Young's inequality with $r=\frac{p}{2}$, together with the estimate in Lemma \ref{lemma4.4}, we have
\begin{equation}
\forall f\in L^{p'}(\Sigma),||\chi^\lambda_T (\chi^\lambda_T)^* f||_{L^p(\Sigma)}\lesssim\frac{\lambda^{n-1-\frac{2k}{p}}}{\log\lambda}||f||_{L^{p'}(\Sigma)}.
\end{equation}
Therefore, Theorem~\ref{theorem4.1} is proved.
\begin{proof}[Proof of Lemma \ref{lemma4.4}]
With an approach similar to the previous discussions, we can show that $\tilde{K}(x,y)$ is
\begin{equation}\label{4.30}O(\dfrac{1}{T}(\dfrac{\lambda}{\lambda^{-1}+d_{\tilde{g}}(\tilde{x},\tilde{y})})^{\frac{n-1}{2}})
+\sum_{Id\neq\alpha\in \Aut}[O(\dfrac{1}{T}(\dfrac{\lambda}{d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))})^{\frac{n-1}{2}})+O(e^{ET})],
\end{equation}
where $E=\max\{C_N,d\}+1$.
Note that $|t|\leq1$ when $\varphi(t)\neq0$, and the number of $\alpha$'s such that $d_{\tilde{g}}(\tilde{x},\alpha(\tilde{y}))\leq1$ is at most $e^c$, so that
\begin{equation}
||\tilde{K}(x,\cdot)||_{L^r(\Sigma)}=O(\dfrac{\lambda^{2\delta(p)}}{\log\lambda}),
\end{equation}
if we take $T=\log\lambda$ and calculate as before.
\end{proof}
\newsection{Proof of the main theorem in all dimensions}
To show Theorem \ref{theorem1}, we need to use interpolation. Recall that
\begin{equation}
\begin{split}
K(x,y)= & \frac{1}{\pi T}\int_{-T}^T(1-\varphi(t))\hat{\rho}(\frac{t}{T})(\cos t\sqrt{-\Delta_g})(x,y)e^{it\lambda}dt\\
= & \frac{1}{2\pi T}\int_{-T}^T(1-\varphi(t))\hat{\rho}(\frac{t}{T})(e^{it\sqrt{-\Delta_g}}+e^{-it\sqrt{-\Delta_g}})(x,y)e^{it\lambda}dt
\end{split}
\end{equation}
is the kernel of the operator
\begin{equation}
\begin{split}
& \dfrac{1}{2\pi T}[\sum_j\tilde{\psi}(\lambda-\lambda_j)E_j+\sum_j\tilde{\psi}(\lambda+\lambda_j)E_j]\\
= & \dfrac{1}{2\pi T}[\sum_j\tilde{\psi}(\lambda-\lambda_j)E_j]+O(1)\\
= & \dfrac{1}{2\pi T}\tilde{\psi}(\lambda-\sqrt{-\Delta_g})+O(1),
\end{split}
\end{equation}
where $\tilde{\psi}(t)$ is the inverse Fourier transform of $(1-\varphi(t))\hat{\rho}(\frac{t}{T})$ so that $|\tilde{\psi}(t)|\leq T(1+|t|)^{-N}$ for any $N\in\mathbb{N}$.
We have the following estimate for $\tilde{\psi}(\lambda-\sqrt{-\Delta_g})$.
\bth
For $k\neq n-2$,
\begin{equation}\label{4.28}
||\tilde{\psi}(\lambda-P)g||_{L^2(\Sigma)}\lesssim T\lambda^{2\delta(2)}||g||_{L^2(\Sigma)},\ \ \text{for any}\ g\in L^2(\Sigma),
\end{equation}
and for $k=n-2$,
\begin{equation}
||\tilde{\psi}(\lambda-P)g||_{L^2(\Sigma)}\lesssim T\lambda^{2\delta(2)}\log\lambda||g||_{L^2(\Sigma)},\ \ \text{for any}\ g\in L^2(\Sigma),
\end{equation}
where $P=\sqrt{-\Delta_g}$.
\end{theorem}
\begin{proof}
Recall the proof of the corresponding restriction theorem in \cite{burq}: there it is shown that for $\chi\in\mathcal{S}(\mathbb{R})$, defining
\begin{equation}
\chi_\lambda=\chi(\sqrt{-\Delta_g}-\lambda)=\sum_j\chi(\lambda_j-\lambda)E_j,
\end{equation}
we have
\begin{equation}
||\chi_\lambda||_{L^2(M)\rightarrow L^2(\Sigma)}=O(\lambda^{\delta(2)}),
\end{equation}
for $k\neq n-2$,
and
\begin{equation}
||\chi_\lambda||_{L^2(M)\rightarrow L^2(\Sigma)}=O(\lambda^{\delta(2)}(\log\lambda)^{\frac{1}{2}}),
\end{equation}
for $k=n-2$.
Now consider $\tilde{\psi}(\lambda-P)$ as $S\tilde{S}^*$, where
\begin{equation}
S=\sum_j(1+|\lambda_j-\lambda|)^{-M}E_j
\end{equation}
and
\begin{equation}
\tilde{S}=\sum_j(1+|\lambda_j-\lambda|)^M\tilde{\psi}(\lambda_j-\lambda)E_j,
\end{equation}
where $M$ is some large number.
Recalling that $|\tilde{\psi}(\tau)|\leq T(1+|\tau|)^{-N}$ for any $N\in\mathbb{N}$, we then have
\begin{equation}
|(1+|\lambda_j-\lambda|)^M\tilde{\psi}(\lambda_j-\lambda)|\leq T(1+|\lambda_j-\lambda|)^{-N}
\end{equation}
for any $N$.
By \eqref{1.7}, which we deduced from the proof of Theorem 3 in \cite{burq}, for a given $\lambda$,
\begin{equation}
||\sum_{\lambda_j\in(\lambda-1,\lambda+1)}E_j||_{L^2(M)\rightarrow L^2(\Sigma)}=O(\lambda^{\delta(2)}),\ \ \ \ \text{if}\ k\neq n-2
\end{equation}
and
\begin{equation}
||\sum_{\lambda_j\in(\lambda-1,\lambda+1)}E_j||_{L^2(M)\rightarrow L^2(\Sigma)}=O(\lambda^{\delta(2)}(\log\lambda)^{\frac{1}{2}}),\ \ \ \ \ \text{if}\ k=n-2
\end{equation}
so that for any $f\in L^2(M)$,
\begin{equation}
\begin{split}
&||\sum_j(1+|\lambda_j-\lambda|)^{-M}E_j f||_{L^2(\Sigma)}\\
\leq &||\sum_{\lambda_j\in(\lambda-1,\lambda+1)}E_jf||_{L^2(\Sigma)}+||\sum_{\lambda_j\not\in(\lambda-1,\lambda+1)}(1+|\lambda_j-\lambda|)^{-M}E_jf||_{L^2(\Sigma)}\\
\lesssim &\begin{cases}\lambda^{\delta(2)}||f||_{L^2(M)}+\sum_{\lambda_j\not\in(\lambda-1,\lambda+1)}(1+|\lambda_j-\lambda|)^{-M}||E_jf||_{L^2(\Sigma)},\ &\mbox{if}\ k\neq n-2,\\
\lambda^{\delta(2)}(\log\lambda)^{\frac{1}{2}}||f||_{L^2(M)}+\sum_{\lambda_j\not\in(\lambda-1,\lambda+1)}(1+|\lambda_j-\lambda|)^{-M}||E_jf||_{L^2(\Sigma)},\ &\mbox{if}\ k= n-2.
\end{cases}
\end{split}
\end{equation}
As
\begin{equation}
\begin{split}
&\sum_{\lambda_j\not\in(\lambda-1,\lambda+1)}(1+|\lambda_j-\lambda|)^{-M}||E_jf||_{L^2(\Sigma)}\\
\leq
&\begin{cases}\sum_{\lambda_j\not\in(\lambda-1,\lambda+1)}\lambda_j^{\delta(2)}(1+|\lambda_j-\lambda|)^{-M}||E_jf||_{L^2(M)},\ &\mbox{if}\ k\neq n-2,\\
\sum_{\lambda_j\not\in(\lambda-1,\lambda+1)}\lambda_j^{\delta(2)}(\log\lambda_j)^{\frac{1}{2}}(1+|\lambda_j-\lambda|)^{-M}||E_jf||_{L^2(M)},\ &\mbox{if}\ k=n-2,
\end{cases}
\end{split}
\end{equation}
which is controlled by the main term when $M$ is sufficiently large, we obtain
\begin{equation}
||\sum_j(1+|\lambda_j-\lambda|)^{-M}E_j f||_{L^2(\Sigma)}\lesssim \begin{cases}\lambda^{\delta(2)}||f||_{L^2(M)},\ &\mbox{if}\ k\neq n-2,\\
\lambda^{\delta(2)}(\log\lambda)^{\frac{1}{2}}||f||_{L^2(M)},\ &\mbox{if}\ k= n-2.
\end{cases}
\end{equation}
Similarly, we have
\begin{equation}
||\sum_j(1+|\lambda_j-\lambda|^{M})\tilde{\psi}(\lambda_j-\lambda)E_j f||_{L^2(\Sigma)}\lesssim \begin{cases}T\lambda^{\delta(2)}||f||_{L^2(M)},\ &\mbox{if}\ k\neq n-2,\\
T\lambda^{\delta(2)}(\log\lambda)^{\frac{1}{2}}||f||_{L^2(M)},\ &\mbox{if}\ k= n-2.
\end{cases}
\end{equation}
Therefore,
\begin{equation}
\begin{split}
||\tilde{\psi}(\lambda-P)g||_{L^2(\Sigma)}= & ||S\tilde{S}^*g||_{L^2(\Sigma)}\\
\leq & ||S||_{L^2(M)\rightarrow L^2(\Sigma)}||\tilde{S}^*||_{L^2(\Sigma)\rightarrow L^2(M)}||g||_{L^2(\Sigma)}\\
= & ||S||_{L^2(M)\rightarrow L^2(\Sigma)}||\tilde{S}||_{L^2(M)\rightarrow L^2(\Sigma)}||g||_{L^2(\Sigma)}\\
\lesssim & \begin{cases}T\lambda^{2\delta(2)}||g||_{L^2(\Sigma)},\ \ \ \ &\mbox{if}\ k\neq n-2,\\
T\lambda^{2\delta(2)}\log\lambda||g||_{L^2(\Sigma)},\ \ \ \ &\mbox{if}\ k=n-2.
\end{cases}
\end{split}
\end{equation}
\end{proof}
Now we may finish the proof of Theorem \ref{theorem1}.
Recall that $K$ denotes the operator whose kernel is $K(x,y)$. The above theorem tells us that
\begin{equation}
||K||_{L^2(\Sigma)\rightarrow L^2(\Sigma)}\leq \begin{cases}O(\lambda^{2\delta(2)}), &\text{for}\ k\neq n-2;\\
O(\lambda^{2\delta(2)}\log\lambda), &\text{for}\ k=n-2.
\end{cases}
\end{equation}
Interpolating this with
\begin{equation}
||K||_{L^1(\Sigma)\rightarrow L^\infty(\Sigma)}=O(\dfrac{e^{cT}\lambda^{\frac{n-1}{2}}}{T})
\end{equation}
or
\begin{equation}
||K||_{L^1(\Sigma)\rightarrow L^\infty(\Sigma)}=O(e^{cT}\lambda^{\frac{n-1}{2}})
\end{equation}
respectively, by the proof of Theorem \ref{theorem4.1}, we get that for any $p$, if $k\neq n-2$,
\begin{equation}
||K||_{L^{p'}(\Sigma)\rightarrow L^p(\Sigma)}=O(\dfrac{\lambda^{\frac{n-1}{2}(1-\frac{2}{p})}e^{cT(1-\frac{2}{p})}\lambda^{2\delta(2)\cdot\frac{2}{p}}}{T^{1-\frac{2}{p}}})=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{n-1}{p}+\frac{4\delta(2)}{p}}e^{cT(1-\frac{2}{p})}}{T^{1-\frac{2}{p}}}),
\end{equation}
and for $k=n-2$,
\begin{equation}
||K||_{L^{p'}(\Sigma)\rightarrow L^p(\Sigma)}=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{n-1}{p}+\frac{4\delta(2)}{p}}e^{cT(1-\frac{2}{p})}T^{\frac{2}{p}}}{T^{1-\frac{2}{p}}})=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{n-1}{p}+\frac{4\delta(2)}{p}}e^{cT(1-\frac{2}{p})}}{T^{1-\frac{4}{p}}}).
\end{equation}
If $k=n-1$, then $\delta(2)=\frac{1}{4}$, and
\begin{equation}
||K||_{L^{p'}(\Sigma)\rightarrow L^p(\Sigma)}=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{n-2}{p}}e^{cT(1-\frac{2}{p})}}{T^{1-\frac{2}{p}}}).
\end{equation}
Since $\frac{n-1}{2}-\frac{n-2}{p}<2\delta(p)$ if $p>\frac{2n}{n-1}$, say, $\frac{n-1}{2}-\frac{n-2}{p}+\delta<2\delta(p)$ for some small number $\delta>0$, then taking $\beta=\frac{\delta}{c(1-\frac{2}{p})}$, and $T=\beta\log\lambda$, we have
\begin{equation}
||K||_{L^{p'}(\Sigma)\rightarrow L^p(\Sigma)}=O(\dfrac{\lambda^{2\delta(p)-\delta}}{T^{1-\frac{2}{p}}})=O(\dfrac{\lambda^{2\delta(p)-\delta}}{(\log\lambda)^{1-\frac{2}{p}}})= o(\dfrac{\lambda^{2\delta(p)}}{\log\lambda}),
\end{equation}
which indicates Theorem \ref{theorem1}.
If $k=n-2$,
\begin{equation}
||K||_{L^{p'}(\Sigma)\rightarrow L^p(\Sigma)}=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{n-1}{p}+\frac{4\delta(2)}{p}}e^{cT(1-\frac{2}{p})}}{T^{1-\frac{4}{p}}})=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{n-1}{p}+\frac{2}{p}}e^{cT(1-\frac{2}{p})}}{T^{1-\frac{4}{p}}}).
\end{equation}
Now since $\frac{n-1}{2}-\frac{n-1}{p}+\frac{2}{p}<(n-1)-\frac{2(n-2)}{p}$ when $p>2$, we can take $\delta>0$ such that $\frac{n-1}{2}-\frac{n-1}{p}+\frac{2}{p}+\delta<(n-1)-\frac{2(n-2)}{p}$, and take $\beta=\frac{\delta}{c(1-\frac{2}{p})}$, $T=\beta\log\lambda$, then
\begin{equation}
||K||_{L^{p'}(\Sigma)\rightarrow L^p(\Sigma)}=O(\dfrac{\lambda^{2\delta(p)-\delta}}{(\log\lambda)^{1-\frac{4}{p}}})=o(\dfrac{\lambda^{2\delta(p)}}{\log\lambda}),
\end{equation}
which is what we need.
If $k\leq n-3$, $\delta(2)=\frac{n-1}{2}-\frac{k}{2}$, then
\begin{equation}
||K||_{L^{p'}(\Sigma)\rightarrow L^p(\Sigma)}=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{n-1}{p}+\frac{4\delta(2)}{p}}e^{cT(1-\frac{2}{p})}}{T^{1-\frac{2}{p}}})=O(\dfrac{\lambda^{\frac{n-1}{2}-\frac{n-1}{p}+\frac{2(n-1)-2k}{p}}e^{cT(1-\frac{2}{p})}}{T^{1-\frac{2}{p}}}).
\end{equation}
Since $\frac{n-1}{2}-\frac{n-1}{p}+\frac{2(n-1)-2k}{p}<(n-1)-\frac{2k}{p}=2\delta(p)$ for $p>2$, we can take $\delta>0$ such that $\frac{n-1}{2}-\frac{n-1}{p}+\frac{2(n-1)-2k}{p}+\delta<(n-1)-\frac{2k}{p}$, and take $\beta=\frac{\delta}{c(1-\frac{2}{p})}$, $T=\beta\log\lambda$, then
\begin{equation}
||K||_{L^{p'}(\Sigma)\rightarrow L^p(\Sigma)}=O(\dfrac{\lambda^{2\delta(p)-\delta}}{(\log\lambda)^{1-\frac{2}{p}}})=o(\frac{\lambda^{2\delta(p)}}{\log\lambda}),
\end{equation}
which finishes Theorem \ref{theorem1}.
\section{Introduction} \label{sec:intro}
\acresetall
The Crab pulsar and its \ac{pwn} are among the most studied objects in the Galaxy.
The central pulsar has a period of 33\,ms and large spin-down power, $\dot{E}\sub{SD}\simeq5\times10^{38}~\mathrm{erg\,s^{-1}}$ \citep{Hester2008}.
Almost all of $\dot{E}\sub{SD}$ is carried away by an ultra-relativistic wind mainly composed of electron-positron pairs (hereafter electrons).
The electrons are accelerated and randomized at the termination shock, which is located $\sim$0.14~pc from the pulsar \citep{Weisskopf2000}.
Downstream of the termination shock, the interaction of the accelerated electrons with the magnetic and photon fields results in the production of broadband non-thermal radiation spanning radio to multi-TeV energies.
While the synchrotron emission provides the dominant contribution from radio to GeV energies, the emission produced through \ac{ic} scattering is responsible for the gamma rays detected above $\sim$1 GeV.
The broadband spectrum of the Crab \ac{pwn} is consistent with a 1-D hybrid kinetic--\ac{mhd} approach, in which radiative models account for the advective transport of particles, radiative and adiabatic cooling, and spatial distributions of magnetic and photon fields in the \ac{pwn} \citep{Kennel1984b,Atoyan1996}.
These models allow one to reproduce the broadband spectral energy distribution, assuming a very low pulsar wind magnetization, $\sigma$.
This requirement resulted in the formulation of the so-called $\sigma$ problem.
It represents a mismatch of the wind magnetization at the light cylinder, which is expected to be very high $(\sigma\gg1)$, and the one inferred with 1D MHD models for \acp{pwn} $(\sigma \sim10^{-3})$.
In addition, 1D MHD models do not allow one to study the morphology seen in the central part of the Crab PWN, namely the jet, torus, and wisps \citep[e.g.,][]{Hester2008}.
2D MHD models, which adopt an anisotropic pulsar wind, can consistently explain the jet-torus morphology, as shown in theoretical studies \citep{2002AstL...28..373B,2002MNRAS.329L..34L} and in 2D numerical simulations \citep{2004MNRAS.349..779K,2006A&A...453..621D}.
The properties of wisps, e.g., their emergence and velocity, can also be explained by such 2D MHD simulations \citep[see, e.g., ][]{2008A&A...485..337V,Camus2009, 2015MNRAS.449.3149O}.
Finally, the 3D MHD simulations successfully reproduce the morphological structures and provide a possible solution for the $\sigma$ problem due to a significant magnetic field dispersion inside the PWN \citep{2014MNRAS.438..278P, 2016JPlPh..82f6301O}.
Recently, spaceborne gamma-ray telescopes ({\textit{AGILE} and {\itshape Fermi} LAT\xspace}) revealed that \(\sim100\)~MeV emission from the Crab PWN displays day-scale variability \citep{2011Sci...331..736T, Crab_flare_1st}.
Such short variability implies that this emission is produced through the synchrotron channel by electrons with \(\sim\)PeV energies. The synchrotron cooling time is given by
\begin{equation}
t\sub{SYN}=100 \left(\frac{\varepsilon}{100~\mathrm{MeV}}\right)^{-1/2}\left(\frac{B}{100~\mathrm{\upmu G}}\right)^{-3/2}\rm\, days\,,
\end{equation}
where $\varepsilon$ and $B$ are the mean energy of emitted photons and the magnetic field, respectively.
The rapid variability of flares requires a very strong magnetic field, $\gtrsim 1~\mathrm{mG}$ \citep[e.g.,][]{flare_review}, which significantly exceeds the average magnetic field in the nebula, $B=100-300~\rm \mu G$ \citep[see][and references therein]{2020MNRAS.491.3217K}.
Recent 3D MHD simulations indicate that such a strong magnetic field can exist at the base of the plume \citep[see, e.g.,][]{2014MNRAS.438..278P, 2016JPlPh..82f6301O}.
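As a quick numerical illustration (not part of the original analysis), the cooling-time relation above can be evaluated for the two regimes discussed in the text; the photon energies and field strengths below are representative choices from the quoted ranges, not fitted quantities:

```python
def t_syn_days(eps_mev, b_ug):
    """Synchrotron cooling time in days:
    t_SYN = 100 (eps/100 MeV)^(-1/2) (B/100 uG)^(-3/2) days."""
    return 100.0 * (eps_mev / 100.0) ** -0.5 * (b_ug / 100.0) ** -1.5

# Average nebular field (~100-300 uG): cooling takes months.
t_nebula = t_syn_days(100.0, 100.0)   # 100 days by construction

# Flare-like conditions (~1 mG field, few-hundred-MeV photons):
# cooling in ~1.6 days, consistent with day-scale flare variability.
t_flare = t_syn_days(400.0, 1000.0)
```

The steep dependence on $B$ is what forces the inferred flare-zone field to be an order of magnitude above the nebular average.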
Another important argument for production of flares under very special conditions comes from the peak energy. The
spectral energy distributions of the 2011 April and 2013 March flares are characterized by cut-off energies of $375\pm 26$ MeV and
$484^{+409}_{-166}$ MeV, respectively \citep{Rolf2012,Mayer2013}. Thus, the spectra extend beyond the maximum synchrotron peak energy, $\sim$236 MeV, attainable in ideal-\ac{mhd} configurations \citep[the synchrotron burn-off limit, see, e.g.,][]{2002PhRvD..66b3005A,Arons2012}. The synchrotron peak frequency can exceed this limit if (i) particles are accelerated in the non-\ac{mhd} regime \citep[e.g., via magnetic reconnection, see][]{2012ApJ...754L..33C}, (ii) the emission is produced in a relativistically moving outflow \citep{2004MNRAS.349..779K}, or (iii) small-scale magnetic turbulence is present \citep{2013ApJ...774...61K}. All these possibilities\footnote{The magnetic turbulence on the scale required for the jitter regime might be suppressed by Landau damping in PWNe \citep[see, e.g.,][]{2019arXiv190507975H}, thus this possibility seems less feasible \citep[see, however,][]{2012MNRAS.421L..67B}.} imply production of flares under very special circumstances, which differ strongly from the typical conditions expected in the Crab \ac{pwn}. The physical conditions at the production site of the stationary/slowly varying synchrotron gamma-ray emission are less constrained, and this radiation component can be generated under the same conditions as the dominant optical-to-X-ray emission, which in particular implies a magnetic field of \(\sim0.1\rm\,mG\) \citep[see, e.g.,][]{Kennel1984b,Aharonian1998,2014MNRAS.438..278P}.
The synchrotron gamma rays show variability on all resolvable time scales \citep{Rolf2012}.
The current detection of flares is based on automated processing of the LAT data \citep{Atwood2009}. Such analyses include radiation from both the Crab pulsar and the \ac{pwn}.
The phase-averaged photon flux above 100\,MeV is dominated by the Crab pulsar emission with $\sim 2\times10^{-6}~\mathrm{photon\,cm^{-2}\,s^{-1}}$ \citep[e.g.,][]{Crab_pulsar2010}.
Analysis using only off-pulse phase data allows us to study the \ac{pwn} synchrotron gamma-ray emission free from the Crab pulsar emission.
While an off-pulse analysis loses some photon statistics due to the reduced exposure time, it minimizes the systematic uncertainties caused by estimates of the Crab pulsar flux.
This allows even small variations of the \ac{pwn} synchrotron emission to be investigated reliably.
Detections of small-flux-scale flares might give a clue to deepen our understanding of the physical phenomena responsible for the flaring emission.
Here, we present a systematic search for gamma-ray flares using 7.4 years of data obtained by the Large Area Telescope (LAT) on board {\textit{Fermi}}, specifically analyzing the off-pulse phases of the Crab pulsar.
This paper is organized as follows.
In Section \ref{section2}, we describe the analysis procedure and the results of the long-term (7.4 years) and shorter term (30 days, 5 days, and 1.5 days) analysis.
In Section \ref{section3}, we discuss the physical interpretation of the ``small flares.'' The conclusion is in Section \ref{section4}.
\newpage
\section{Observation and data analysis} \label{section2}
\subsection{Observation and data reduction}\label{section2.1}
{\it Fermi}-LAT is an $e^{\pm}$ pair-production detector covering the energy range from 20\,MeV to $>$300\,GeV. LAT is composed of a converter/tracker made of layers of silicon strips and tungsten to convert incident photons and measure their directions,
a calorimeter made of CsI scintillator to determine the energy of gamma rays,
and an anti-coincidence detector to reject background charged particles \citep{Atwood2009}.
LAT has a large effective area ($>$8200 $\mathrm{cm^{2}}$) and a wide field of view ($\sim$2.4 sr).
The point spread function (PSF) of the LAT, which improves with increasing energy, is about 0.9 deg at 1 GeV and 0.1 deg at 10 GeV.
We analyzed the data from MJD 54686 (2008 August 8) to 57349 (2015 November 11) in the energy range between 100 MeV and 500 GeV.
We adopted the low-energy threshold of 100 MeV to reduce systematic errors, especially originating from energy dispersion, although our choice leads to smaller photon statistics than previous studies \citep{Rolf2012,Mayer2013}, which used a lower energy threshold of 70\,MeV.
The data analysis was performed using the Science Tools package (v11r05p3) distributed by the Fermi Science Support Center (FSSC) following the standard procedure\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/}} with the P8R2$\_$SOURCE$\_$V6 instrument response functions.
Spectral parameters were estimated by maximum likelihood fitting using {\tt{gtlike}}, implemented in the Science Tools.
We examined the detection significance of gamma-ray signals from sources by means of the test statistic (TS) based on the likelihood ratio test \citep{Mattox}.
Our analyses are composed of two parts: the longer-term (7.4 years) scale for the baseline state, and the shorter-term (30 days, 5 days, and 1.5 days) scale for flare states.
For the longer-term analysis, the events were extracted within a 21.2 $\times$ 21.2 degree region of interest (RoI) centered on the Crab PWN position (RA: 83.6331 deg, Dec: 22.0199 deg).
After the standard quality cut (DATA$\_$QUAL$>$0)$\&\&$(LAT$\_$CONFIG==1), events with zenith angles above 90 deg were excluded to reduce gamma-ray contamination from the Earth limb.
The data when the Crab PWN was within 5 deg of the Sun were also excluded.
The data were analyzed by the binned maximum likelihood method.
The background model includes sources within 18 degrees of the Crab PWN as listed in the Preliminary LAT 8-year Point Source List (FL8Y)\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/access/lat/fl8y/}}.
The radius of 18 degrees completely encloses the RoI.
Two point sources, FL8Y J0535.9+2205 and FL8Y J0531.1+2200, which are located near the Crab PWN ($<$ 0.8 deg) and are likely spurious, were excluded\footnote{The fourth Fermi Large Area Telescope source catalog \citep{4fgl} has recently been published and does not include these two sources.}.
The supernova remnants IC 443 and S 147 are included as spatially extended templates.
Both normalizations and spectral parameters of the sources which are located within 5 degrees of the Crab PWN are set free while the parameters of all other sources are fixed at the FL8Y values.
The Galactic diffuse emission model ``gll$\_$iem$\_$v06.fits'' and the isotropic diffuse emission one ``iso$\_$P8R2$\_$SOURCE$\_$V6$\_$v06.txt'' are both included in the background model.
A normalization factor and photon index for the Galactic diffuse emission model and a normalization factor for the isotropic diffuse emission model are set free.
Sources with TS$<1$ were removed after the first iteration of the maximum likelihood fit, and then we fitted the data again.
We excluded the known flare periods reported in \cite{Mayer_thesis} and \cite{Rudy2015} to determine the baseline state in our long-term analysis.
We defined the flare periods as the two weeks before and after the peak flare times.
The flare periods which were excluded in our long-term analysis are summarized in Table \ref{reported_flare}.
In this paper, we refer to those flares as ``reported flares''.
\begin{table}[b]
\caption{Periods of ``reported flares''. These periods were excluded in our long-term analysis to determine the baseline state of the source.}
\label{reported_flare}
\begin{center}
\begin{tabular}{l|c|c}
Flare name& MJD& Reference\\\hline\hline
2009 February&54855 - 54883& \cite{Mayer_thesis}\\
2010 September&55446 - 55474& \cite{Mayer_thesis}\\
2011 April&55653 - 55681& \cite{Mayer_thesis}\\
2012 July&56098 - 56126& \cite{Mayer_thesis}\\
2013 March&56343 - 56371& \cite{Mayer_thesis}\\
2013 October\tablenotemark{\rm \dag}&56568 - 56608& \cite{Rudy2015}\\
2014 August\tablenotemark{\rm \dag}&56869 - 56902& \cite{Rudy2015}\\\hline
\end{tabular}
\tablenotetext{\rm \dag}{The light curve shows a two-peak structure (\cite{Rudy2015}).}
\end{center}
\end{table}
For the shorter-term analysis, the events were extracted within a 15-degree
acceptance cone centered on the location of the Crab PWN,
and the gamma-ray fluxes and spectra were determined by the unbinned maximum likelihood method.
The background model is the same one as used in the long-term analysis, but parameters are fixed by the results from our long-term analysis, except for the isotropic diffuse emission, whose normalization remains free.
{\it Fermi}-LAT cannot spatially distinguish gamma rays originating from the Crab pulsar and PWN due to its large PSF.
Thus, the off-pulse window of the Crab pulsar is needed to obtain an accurate spectrum of the Crab PWN.
For this purpose, we used the \textsc{Tempo2} package\footnote{\url{http://www.atnf.csiro.au/research/pulsar/tempo2/index.php?n=Main.HomePage}} \citep{tempo2} for phase gating analysis. The ephemeris data were prepared following the methods outlined in \cite{Kerr2015}.
The ephemeris data for the Crab pulsar cover MJD 54686 to 57349, and the phase interval 0.56--0.88 was chosen to suppress effects from the pulsar.
All subsequent analysis was performed using the off-pulse data of the Crab pulsar following \citet{Crab_flare_1st}.
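The gating step itself amounts to a simple mask on the assigned pulse phases; a minimal sketch with a hypothetical `phases` array (in the real analysis, phases are assigned to each photon with \textsc{Tempo2} ephemerides):

```python
import numpy as np

OFF_PULSE = (0.56, 0.88)  # off-pulse window of the Crab pulsar (see text)

rng = np.random.default_rng(0)
phases = rng.uniform(0.0, 1.0, size=10000)  # hypothetical photon pulse phases

# Keep only photons falling in the off-pulse window
mask = (phases >= OFF_PULSE[0]) & (phases < OFF_PULSE[1])
off_pulse_photons = phases[mask]

# The window covers 32% of the rotation, so exposures and fluxes
# must be corrected by this fraction.
window_fraction = OFF_PULSE[1] - OFF_PULSE[0]
```

For a phase-uniform background, roughly 32\% of the photons survive, which is the statistics penalty mentioned above.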
\subsection{Spectral model}
The spectrum of the Crab PWN has two components in the LAT energy band.
One is a synchrotron component, which has a soft spectrum, and the other is an IC component which dominates above 1 GeV.
We assume a power-law (PL) spectrum for the synchrotron component and a log-parabola (LP) spectrum for the IC component:
\begin{equation}\label{crab_stationary}
\frac{dN}{dE_{\gamma}}=N\sub{0,SYN}\left(\frac{E_{\gamma}}{100~\mathrm{MeV}}\right)^{-\Gamma\sub{0,SYN}} + N\sub{0,IC}\left(\frac{E_{\gamma}}{1~\mathrm{GeV}}\right)^{-(\alpha_{0}+\beta_{0}\ln{(E_{\gamma}/1~\mathrm{GeV})})}
\end{equation}
We performed a maximum likelihood analysis of data between MJD 54686 and 57349 excluding the ``reported flares'' as defined in Sec \ref{section2.1}, and obtained the spectral values $\Gamma\sub{0,SYN}=4.27\pm 0.08$, $\alpha_{0}=1.50\pm0.04$, $\beta_{0}=0.05\pm0.01$.
The photon flux above 100 MeV is $F\sub{ph,SYN} = (6.31\pm0.23)\times10^{-7}~\mathrm{photon~cm^{-2}~s^{-1}}$ for the synchrotron component and $F\sub{ph,IC} = (1.09\pm0.08)\times10^{-7}~\mathrm{photon~cm^{-2}~s^{-1}}$ for the IC component, whereas the energy flux is $F\sub{e,SYN} = (1.45\pm0.06)\times10^{-10}~\mathrm{erg~cm^{-2}~s^{-1}}$ and $F\sub{e,IC} = (5.46\pm0.28)\times10^{-10}~\mathrm{erg~cm^{-2}~s^{-1}}$.
In the following sections, we refer to these values as the baseline values.
Figure \ref{sed_pwn} presents the baseline spectral energy distribution of the Crab PWN.
The spectral points were obtained by dividing the 100 MeV to 500 GeV range into 15 logarithmically spaced energy bins.
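As a consistency check (not performed in the text), the quoted synchrotron photon and energy fluxes can be related analytically for a pure power law above the 100 MeV threshold; the sketch below also reproduces the 15 log-spaced bin edges:

```python
import numpy as np

E0 = 100.0            # threshold energy [MeV]
GAMMA = 4.27          # baseline synchrotron photon index
F_PH = 6.31e-7        # photon flux > 100 MeV [ph cm^-2 s^-1]
MEV_TO_ERG = 1.602e-6

# For dN/dE = N0 (E/E0)^-Gamma, the photon flux above E0 is N0*E0/(Gamma-1),
# so the prefactor follows from the quoted photon flux:
N0 = F_PH * (GAMMA - 1.0) / E0            # [ph cm^-2 s^-1 MeV^-1]

# Energy flux above E0 is N0*E0^2/(Gamma-2):
F_E = N0 * E0**2 / (GAMMA - 2.0) * MEV_TO_ERG   # [erg cm^-2 s^-1]

# Spectral points were computed on 15 log-spaced bins over 100 MeV - 500 GeV:
bin_edges = np.logspace(np.log10(100.0), np.log10(5.0e5), 16)  # MeV
```

The recovered energy flux, $\approx1.46\times10^{-10}~\mathrm{erg~cm^{-2}~s^{-1}}$, agrees with the quoted $F\sub{e,SYN}$ within its uncertainty.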
\begin{figure}[hbp]
\begin{center}
\includegraphics[width=95mm]{sed_pwn_base_line_bin15_rev.pdf}
\caption{Spectral energy distribution of the Crab PWN averaged over 7.4 years of \textit{Fermi}-LAT observations, excluding the ``reported flares'' listed in Table \ref{reported_flare}. Only the data of the off-pulse phase (phase interval: [0.56-0.88]) were used.
The dashed black line represents the best-fitted model of the Crab PWN (synchrotron component $+$ Inverse Compton component) described by Eq. (\ref{crab_stationary}).}
\label{sed_pwn}
\end{center}
\end{figure}
\subsection{Temporal analysis}
\subsubsection{Day-scale analysis}\label{day_scale_ana}
We derived 5-day binned light curves (LCs) of the synchrotron and IC components for the energy range 100\,MeV--500\,GeV, based on Eq. (\ref{crab_stationary}). We then used the $\chi^2$ test to examine variability.
The 5-day scale was chosen to match observed flare durations of a few days to $\sim$1 week \citep{flare_review}.
We treated the normalization of the synchrotron component, the IC component and the isotropic diffuse emission as free parameters while the others were fixed by the baseline values.
For the $\chi^2$ test, we excluded the data bins with TS$<2$ and the bins overlapping with the reported flares listed in Table \ref{reported_flare}.
The $\chi^2$ is defined as follows:
\begin{equation}
\chi^2\sub{SYN/IC} = \sum_{i}\frac{(F\sub{i, SYN/IC}-F\sub{base, SYN/IC})^2}{F\sub{i, err,SYN/IC}^2}
\end{equation}
where $F\sub{i, SYN/IC}$ and $F\sub{i,err, SYN/IC}$ are the values of the flux and error of the synchrotron and IC component in each LC bin, respectively.
The derived $\chi^2/\mathrm{d.o.f.}$ values are $\chi^2\sub{SYN}/\mathrm{d.o.f.}=1230.46/432$ and $\chi^2\sub{IC}/\mathrm{d.o.f.}=511.07/459$.
The synchrotron component is highly variable, while the IC component is less variable.
Previous LAT analyses \citep{Crab_flare_1st, Rolf2012, Mayer2013} and ground-based imaging air Cherenkov telescope observations \citep{HESS_flare, VERITAS_flare, MAGIC_flare} have reported no variability of the IC component.
Thus in the following analysis, we assume the spectral parameters of the IC component ($N\sub{0,IC},~\alpha_{0} \mathrm{~and~}\beta_{0}$) to be fixed at the baseline values.
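The $\chi^2$ values above translate directly into chance probabilities for a constant flux; a sketch using `scipy.stats` (the $\chi^2$ values and degrees of freedom are those quoted in the text):

```python
from scipy.stats import chi2

# Probability that a constant flux produces the observed chi^2 by chance
p_syn = chi2.sf(1230.46, 432)   # synchrotron component
p_ic = chi2.sf(511.07, 459)     # IC component

# p_syn is vanishingly small (strong variability), while p_ic is at the
# few-percent level, i.e. the IC component is consistent with a constant
# flux to within the sensitivity of this test.
```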
The 5-day LC of the synchrotron component appears in Figure \ref{lc_5day}.
\begin{figure*}[b]
\begin{center}
\includegraphics[width=125mm]{LC_5days_sync-norm-index_nowave_TS0srccut_ts2_rev2.pdf}
\caption{5-day gamma-ray light curve (100 MeV -- 500 GeV integral photon flux) of the Crab synchrotron component from 2008 August to 2015 November.
The green lines show the times of the reported flares listed in Table~\ref{reported_flare}.
The center times of the small flares from this work are indicated in blue lines as listed in Table \ref{flare_time}.}
\label{lc_5day}
\end{center}
\end{figure*}
To identify flare activity of the synchrotron component of the Crab PWN emission, we modeled the emission as a superposition of three components: steady synchrotron, IC, and an additional flare component, similar to \cite{Crab_flare_1st}.
The flare component is modeled by a PL with free normalization and photon index parameters, while the steady synchrotron and IC components are fixed to the baseline values.
We define ``small flares'' as those whose TS for the flare component exceeds 29.
This choice corresponds to a significance of $\sim 5\sigma$ with 2 degrees of freedom (pre-trial) or $\sim 3.7\sigma$ considering trials for the 525 LC bins.
There is not a unique method to define ``a Crab flare'' since the Crab PWN emission is variable on all observed time scales \citep{Rolf2012}.
A TS-based criterion is affected by differences in exposure among individual time bins and might overlook a high-flux state in a bin with short exposure.
On the other hand, the TS value allows us to probe the significance of variations even at small flux scales.
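The TS threshold of 29 can be translated into the quoted significances; a sketch assuming a $\chi^2$ distribution with 2 degrees of freedom, a two-sided Gaussian convention, and 525 independent trials (the conventions are assumptions of this sketch):

```python
from scipy.stats import chi2, norm

TS = 29.0
N_TRIALS = 525  # number of 5-day LC bins

p_pre = chi2.sf(TS, 2)                    # pre-trial chance probability
sigma_pre = norm.isf(p_pre / 2.0)         # ~5 sigma

p_post = 1.0 - (1.0 - p_pre) ** N_TRIALS  # trials-corrected probability
sigma_post = norm.isf(p_post / 2.0)       # ~3.7 sigma
```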
The times (MJD) of the centers of the peak 5-day LC bins and the TS of detected ``small flares'' are summarized in Table {\ref{flare_time}}.
These center times of small flares are also shown in Figure \ref{lc_5day} as blue lines.
All ``reported flares'' listed in Table \ref{reported_flare} satisfy the criterion of TS$>$29.
\begin{table}[tb]
\caption{Detected ``small flares'' in the 5-day binned light curve.}
\label{flare_time}
\begin{center}
\begin{tabular}{c|c|c}
Name& Bin midpoint (MJD)& TS (significance\tablenotemark{ i})\\\hline\hline
small flare 1&54779&32.6 (4.1$\sigma$)\\
small flare 2\tablenotemark{ii}&54984&32.4 (4.1$\sigma$)\\
small flare 3&55299&34.4 (4.3$\sigma$)\\
small flare 4\tablenotemark{iii}&55994&37.3 (4.6$\sigma$)\\
small flare 5&56174&78.2 (7.8$\sigma$)\\
small flare 6 (a), (b), (c)&56409, 56419, 56429&59.4, 30.2, 30.9 (6.5$\sigma$, 3.8$\sigma$, 3.9$\sigma$)\\
small flare 7 (a), (b)\tablenotemark{iv}& 56724, 56734&93.4, 66.0 (8.7$\sigma$, 7.1$\sigma$)\\\hline
\end{tabular}
\tablenotetext{i}{Corresponding significance with 2 degrees of freedom with 525 trials.}
\tablenotetext{ii}{Indicated as the minor flare in \citet{Striani2013}.}
\tablenotetext{iii}{Indicated as the ``wave'' in \citet{Striani2013}.}
\tablenotetext{iv}{ATel \#5971 \citep{Atel2014MAr}.}
\end{center}
\end{table}
In order to analyze the detailed structures of the ``small flares'' and ``reported flares'', we made 1.5-day binned LCs centered on the small-flare times listed in Table~\ref{flare_time}, spanning one month for small flares 1--5 and 7 and 50 days for small flare 6, as well as over the durations of the ``reported flares'' listed in Table~\ref{reported_flare}.
The 1.5-day time bin was chosen rather than a 1-day bin to retain significant detections of synchrotron \ac{pwn} emission even at the baseline state while still allowing the resolution of flare structure.
In the same manner as for the 5-day binned LC in Figure \ref{lc_5day}, the Crab PWN is modeled by Eq. (\ref{crab_stationary}), with $N\sub{0,SYN}$, $\Gamma\sub{0,SYN}$, and the normalization of the isotropic diffuse emission set free.
Figure \ref{1.5days LC} represents the 1.5-day binned LCs in the energy range 100 MeV to 500 GeV during ``small-flare'' and ``reported-flare'' periods.
\begin{figure*}[tbp]
\begin{center}
\includegraphics[width=160mm]{LC_1_5days_fit_allflare_BBana_allflare_excludeSF6.pdf}
\caption{1.5-day binned gamma-ray light curve (100 MeV -- 500 GeV) of the Crab synchrotron component during each ``small flare'' and ``reported flare''.
The vertical error bars in data points represent 1$\sigma$ statistical errors. The down arrows indicate 95$\%$ confidence level upper limits.
The dashed blue lines and red solid lines represent the best fitted time profiles defined by Eq. (\ref{lc_profile}) and the Bayesian Blocks, respectively.}
\label{1.5days LC}
\end{center}
\end{figure*}
The LCs are fitted by the following function to characterize the time profiles of both ``small flares'' and ``reported flares'':
\begin{equation}\label{lc_profile}
F(t)=F\sub{b} + \sum_i^{N}\frac{F\sub{i,0}}{e^{-(t-t\sub{i,0})/\tau\sub{i,rise}}+e^{(t-t\sub{i,0})/\tau\sub{i,decay}}}
\end{equation}
where $N$ is the number of flares in each flare window, $F_\mathrm{b}$ is an assumed constant level underlying a flare, $F\sub{i,0}$ is the amplitude of a flare, $t\sub{i,0}$ approximately describes the peak time (it corresponds to the actual maximum only for symmetric flares), while $\tau\sub{i,rise}$ and $\tau\sub{i,decay}$ describe the characteristic rise and decay times.
Note that $F_\mathrm{b}$ does not represent the global baseline level, $F\sub{e,SYN}$, but a local synchrotron level, which reflects the variability of the synchrotron component (see Figure~\ref{lc_5day}).
The time of the maximum of a flare ($t\sub{peak}$) can be expressed using the parameters in Eq. (\ref{lc_profile}) as:
\begin{equation}\label{peak time}
t\sub{peak}=t_0 +\frac{\tau\sub{rise}\tau\sub{decay}}{\tau\sub{rise}+\tau\sub{decay}}\ln\left(\frac{\tau\sub{decay}}{\tau\sub{rise}}\right)
\end{equation}
This formula is often used to characterize flare activities of blazars \citep[e.g.,][]{hayashida2015}.
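The peak-time expression follows from setting $dF/dt=0$ for a single flare term; a quick numerical sanity check (the rise and decay times below are arbitrary illustrative values, not fitted ones):

```python
import numpy as np

def flare_profile(t, f0, t0, tau_r, tau_d):
    """Single-flare term of the time-profile function."""
    return f0 / (np.exp(-(t - t0) / tau_r) + np.exp((t - t0) / tau_d))

tau_r, tau_d, t0 = 0.5, 2.0, 0.0   # illustrative rise/decay times [days]

# Locate the maximum on a fine grid
t = np.linspace(-10.0, 10.0, 200001)
t_peak_numeric = t[np.argmax(flare_profile(t, 1.0, t0, tau_r, tau_d))]

# Analytic peak time from the formula above
t_peak = t0 + tau_r * tau_d / (tau_r + tau_d) * np.log(tau_d / tau_r)
```

The numerical and analytic peak times agree to the grid resolution.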
We applied the Bayesian Block (BB) algorithm \citep{2013ApJ...764..167S} to the 1.5-day binned LCs to determine the number of flares.
The BB procedure allows one to obtain the optimal piecewise representations of the LCs.
The calculated piecewise model provides local gamma-ray variabilities \citep[see, e.g.,][]{2019ApJ...877...39M}.
Since small flares 6 (a), (b), and (c) are not represented by the piecewise models, we exclude these ``small flares'' from the following analysis.
The rise time of the 2013 October flare is assumed to be 1 day because of the lack of significant data points to determine it.
The 2014 August flare is not fitted well by Eq. (\ref{lc_profile}), perhaps because of statistical fluctuations or because the flare is a superposition of short and weak flares that cannot be resolved by the {\textit{Fermi}}-LAT.
Consequently, we do not estimate the characteristic time scales of the 2014 August flare.
The fitting results are summarized in Table~{\ref{flare_summary}}, and the best fitted profiles and the optimal piecewise models are overlaid in Figure \ref{1.5days LC} as blue dashed lines and red lines, respectively.
\begin{table*}[tbp]
\caption{Fitting results for the 1.5-day light curves}
\label{flare_summary}
\begin{center}
\begin{tabular}{c|c|c|c|c|c}\hline\hline
flare id& $F\sub{b}$ & $F_0$ & $\tau\sub{rise}$ & $\tau\sub{decay}$ & $t_{0}$\\
& [$\times10^{-10} ~\mathrm{erg}~\mathrm{cm^{-2}}~\mathrm{s^{-1}}$] & [$\times10^{-10} ~\mathrm{erg}~\mathrm{cm^{-2}}~\mathrm{s^{-1}}$] & [day] & [day] & [MJD]\\\hline
small flare 1&1.4$~\pm~$0.2&5.6$~\pm~$2.0&0.3$~\pm~$0.4&3.3$~\pm~$1.7&54777.4$~\pm~$1.0\\
small flare 2&1.5$~\pm~$0.4&9.0$~\pm~$2.6&0.7$~\pm~$0.5&3.2$~\pm~$1.3&54980.2$~\pm~$0.7\\
small flare 3&1.1$~\pm~$0.4&10.6$~\pm~$2.8&1.1$~\pm~$1.0&1.3$~\pm~$0.9&55299.1$~\pm~$1.6\\
small flare 4&1.3$~\pm~$0.4&10.6$~\pm~$3.1&0.6$~\pm~$0.4&2.7$~\pm~$1.1&55991.4$~\pm~$0.6\\
small flare 5&2.0$~\pm~$0.3&12.5$~\pm~$4.2&0.4$~\pm~$0.2&1.8$~\pm~$0.8&56172.7$~\pm~$0.5\\
small flare 7 (a)&1.3$~\pm~$0.5\tablenotemark{ \dag}&24.6$~\pm~$5.8&1.6$~\pm~$0.6&0.5$~\pm~$0.1&56726.4$~\pm~$0.4\\
small flare 7 (b)&1.3$~\pm~$0.5\tablenotemark{ \dag}&18.2$~\pm~$6.3&1.7$~\pm~$0.7&0.6$~\pm~$0.3&56735.3$~\pm~$0.6\\\hline\hline
2009 February&2.5$~\pm~$0.3&16.3$~\pm~$3.8&2.3$~\pm~$0.9&0.7$~\pm~$0.3&54869.5$~\pm~$0.6\\
2010 September&1.6$~\pm~$0.3&30.0$~\pm~$6.6&1.7$~\pm~$0.6&1.0$~\pm~$0.3&55460.0$~\pm~$0.6\\
2011 April&1.8$~\pm~$0.3&142.7$~\pm~$10.8&1.6$~\pm~$0.1&0.6$~\pm~$0.1&55668.0$~\pm~$0.1\\
2012 July&1.8$~\pm~$0.6&11.2$~\pm~$3.5&3.3$~\pm~$1.4&0.3$~\pm~$1.2&56113.6$~\pm~$0.8\\
2013 March&2.3$~\pm~$0.5&34.2$~\pm~$2.0&2.4$~\pm~$0.4&3.6$~\pm~$0.6&56356.8$~\pm~$0.6\\
2013 October (a)&2.1$~\pm~$0.3\tablenotemark{\dag} & 23.5$~\pm~$3.6&2.1$~\pm~$0.5&0.7$~\pm~$0.2&56583.4$~\pm~$0.3\\
2013 October (b)&2.1$~\pm~$0.3\tablenotemark{\dag}&27.0$~\pm~$4.8&1.0 (fixed)&1.5$~\pm~$0.5&56594.5$~\pm~$0.6\\\hline
\end{tabular}
\tablenotetext{\dag}{Fitted by the same $F\sub{b}$ because of the same flare window.}
\end{center}
\vspace{5mm}
\end{table*}
\begin{figure}[bp]
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=95mm]{Corr_1_5days_peakflux_index_roi15_phindexrange_TS0srccut_uplim0_8_rev2_excludeSF6.pdf}
\end{center}
\end{minipage}
\begin{minipage}{0.5\hsize}
\begin{center}
\includegraphics[width=95mm]{fluence_ph_index_phindexrange_TS0srccut_uplim0_8_abs_excludeSF6.pdf}
\end{center}
\end{minipage}
\caption{(a): Photon index vs. peak energy flux (100 MeV - 500 GeV) for the Crab flares.
Black data points: ``small flares''. Red points: ``reported flares''.
Green data point: baseline value.
(b): Photon index vs. fluence for the Crab flares.
The fluence is defined as integrated emission over a period between $t\sub{peak} -\tau\sub{rise}$ to $t\sub{peak} +\tau\sub{decay}$.
Black data points: ``small flares''. Red points: ``reported flares''.
The 2014 August flare is not shown in this plot because it has a complex time profile
(note that the 2013 October (b) flare is excluded because its rise time is not determined).
}
\label{lc_corr}
\end{figure}
We compare energy fluxes and photon indices at the flare peaks in Figure \ref{lc_corr}-(a), and fluences and photon indices in Figure \ref{lc_corr}-(b) for individual flares. The flare peak is defined as the highest flux bin in each panel of Figure \ref{1.5days LC}, and the energy fluxes and photon indices were taken from the results of corresponding bins.
The flare fluences were defined as integrals over the period $t\sub{peak} -\tau\sub{rise} \leq t \leq t\sub{peak} +\tau\sub{decay}$, based on the best fitted time profile model listed in Table \ref{flare_summary}.
The fluence and photon index were calculated with {\tt{gtlike}} from data resampled over those integration periods.
The fitting results are summarized in Table \ref{flare_spec}.
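The fluence integration can be reproduced schematically from the fitted profile; the sketch below uses the small flare 1 parameters from Table \ref{flare_summary} and integrates only the flare term (ignoring the local constant level $F\sub{b}$ is an assumption of this sketch):

```python
import numpy as np
from scipy.integrate import quad

# Small flare 1 best-fit parameters (Table of 1.5-day LC fits)
F0, TAU_R, TAU_D, T0 = 5.6e-10, 0.3, 3.3, 0.0   # erg cm^-2 s^-1, days
DAY = 86400.0  # seconds per day

def flare_term(t):
    # Flare term of the time-profile function, t in days
    return F0 / (np.exp(-(t - T0) / TAU_R) + np.exp((t - T0) / TAU_D))

# Integration window: [t_peak - tau_rise, t_peak + tau_decay]
t_peak = T0 + TAU_R * TAU_D / (TAU_R + TAU_D) * np.log(TAU_D / TAU_R)
integral_days, _ = quad(flare_term, t_peak - TAU_R, t_peak + TAU_D)
fluence = integral_days * DAY   # erg cm^-2
```

In the paper the fluence and photon index are instead re-fitted with {\tt{gtlike}} over the same window; this sketch only illustrates the definition of the integration period.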
Figure \ref{lc_corr}-(a) indicates that the flares have higher energy fluxes and harder photon indices than the time-averaged values. This feature suggests that both the ``small flares'' and the ``reported flares'' have origins different from the stationary emission.
The relation between the flux and photon index suggests a ``harder when brighter'' trend, which could imply electron spectra with higher cut-off energies or stronger magnetic fields for the flaring states.
In the fluence vs. photon index plot (Figure \ref{lc_corr}-(b)),
two flares clearly show high fluences.
One corresponds to the 2011 April flare, with the largest energy flux, $\sim 85\times10^{-10}$ erg~cm$^{-2}$ s$^{-1}$,
and the other is the 2013 March flare, with the second largest flux and the longest flare duration (see Table \ref{flare_summary})\footnote{In the 2013 March flare, rapid variability on a $\sim$5-hour time scale has been reported using an orbit-binned ($\sim$90 minutes) LC \citep{Mayer2013}. Our analysis is based on the 1.5-day binned LC and focuses on more global features of the flare.}.
Apart from those two flares, the ``reported flares'' and the ``small flares'' show similar results: the same range of photon index, and the harder/brighter correlation.
The rise time of the 2013 October flare (b) is not determined, therefore it is excluded in Figure \ref{lc_corr}-(b).
To examine possible spectral curvature, we applied not only a power-law model, but also a power law with an exponential cut-off model for the Crab synchrotron component. Five flares (small flare 7 (b), 2011 April flare, 2013 March flare, 2013 October flare (a) and October flare (b)) show significant curvature ($-2\Delta L >9$)\footnote{$-2\Delta L = -2\log(L0/L1)$, where $L0$ and $L1$ are the maximum likelihood estimated for the null (a simple power-law model) and alternative (a power law with an exponential cut-off model) hypothesis, respectively.} and those results are listed in Table \ref{flare_spec}.
\begin{table*}[tb]
\caption{Spectral fitting results of each flare}
\label{flare_spec}
\begin{center}
\begin{tabular}{c|c|c|c|c|c}\hline\hline
flare id&flare duration & averaged energy flux & photon index & cut-off energy & $-2\Delta L$ \\
&[day]& [$\times10^{-10} ~\mathrm{erg}~\mathrm{cm^{-2}}~\mathrm{s^{-1}}$] & & [MeV] &\\\hline
small flare 1&3.5 $\pm$ 1.7&4.3$~\pm~$0.6& 4.0$~\pm~$0.3&--&--\\
small flare 2&3.9 $\pm$ 1.4&5.6$~\pm~$0.8& 3.4$~\pm~$0.3&--&--\\
small flare 3&2.4 $\pm$ 1.3&5.0$~\pm~$1.0& 3.4$~\pm~$0.4&--&--\\
small flare 4&3.3 $\pm$ 1.1&7.1$~\pm~$1.2& 2.9$~\pm~$0.2&--&--\\
small flare 5&2.2 $\pm$ 0.8&8.3$~\pm~$1.2& 3.3$~\pm~$0.2&--&--\\
small flare 7 (a)&2.2$~\pm~$0.6&11.6 $\pm$ 1.6& 3.0 $\pm$ 0.2&--&--\\
small flare 7 (b)&2.3$~\pm~$0.8&11.4 $\pm$ 2.6& 2.4 $\pm$ 0.2&--&--\\
2009 February&3.0 $\pm$ 0.9&9.4$~\pm~$1.1& 3.8$~\pm~$0.3&--&--\\
2010 September&2.6 $\pm$ 0.6&17.4$~\pm~$3.0& 2.4$~\pm~$0.1&--&--\\
2011 April&2.2 $\pm$ 0.2&60.4$~\pm~$3.8& 2.37$~\pm~$0.04&--&--\\
2012 July&3.6 $\pm$ 1.8&7.3$~\pm~$1.1& 2.9$~\pm~$0.2&--&--\\
2013 March&6.0 $\pm$ 0.7&17.9$~\pm~$0.7& 3.2$~\pm~$0.1&--&--\\
2013 October (a)&2.7$~\pm~$0.5&12.3 $\pm$ 1.2& 3.3 $\pm$ 0.2&--&--\\
2013 October (b)&2.5\tablenotemark{\dag}&15.6 $\pm$ 1.8& 3.1 $\pm$ 0.2&--&--\\\hline
small flare 7 (b)&--&8.6$~\pm~$ 1.3&0.6$~\pm~$0.6&284$~\pm~$101&12.8\\
2011 April&--&45.5$~\pm~$1.8& 1.6$~\pm~$0.1&688$~\pm~$115&71.2\\
2013 March&--&17.0$~\pm~$0.6& 2.2$~\pm~$0.2&257$~\pm~$ 67&20.5\\
2013 October (a)&--&11.7$~\pm~$ 1.1&1.4$~\pm~$0.8&135$~\pm~$68&9.1\\
2013 October (b)&--&14.5$~\pm~$ 1.5&0.6$~\pm~$0.9&115$~\pm~$50&13.6\\\hline
\end{tabular}
\tablenotetext{\dag}{The rise time, $\tau\sub{rise}$, is fixed at 1 day.}
\tablecomments{The upper section presents the fitting results using a power-law model. The lower section shows the results using a power law with an exponential cut-off model for the flares in which significant curvature ($-2\Delta L>9$) was observed.
$\Delta L$ denotes the difference of the log-likelihood with respect to a single power-law fit.
}
\end{center}
\end{table*}
\clearpage
\subsubsection{Month-scale analysis}
The Crab PWN synchrotron emission in the gamma-ray band shows variability not only on a day scale but also on a month scale \citep{Crab_flare_1st}.
One of the most prominent variable morphological features of the Crab PWN is known as the ``inner knot" \citep{Hester1995}.
The ``inner knot'' lies about 0.55--0.75 arcsec to the southeast of the Crab pulsar.
This structure is interpreted as Doppler-boosted emission from downstream of the oblique termination shock.
The bulk of the synchrotron gamma rays may originate from the ``inner knot" according to MHD simulations \citep{Komissarov2011}.
In addition, the ``inner knot" is a promising production site for the flares, as the Doppler boosting can relax the theoretical constraints imposed by the Crab flares, such as exceeding the maximum cut-off energies under the ideal-MHD configuration. In this case the gamma-ray flux level and location of the ``inner knot'' can change coherently.
\citet{Rudy2015} compared the gamma-ray flux and the knot-pulsar separation and found no significant correlation.
In their analysis, the contribution of the Crab pulsar emission was not excluded, making it difficult to measure the weaker flux variations of the Crab PWN.
To study a possible correlation between the month-scale variation of the Crab synchrotron emission and the knot-pulsar separation, we made a 30-day binned gamma-ray flux LC of the Crab synchrotron component through off-pulse analysis.
Off-pulse analysis of the Crab pulsar is necessary to investigate the month-scale variability because the variability amplitude is rather small and the variation can be hidden by the Crab pulsar emission.
The 30-day binned LC between MJD 55796 and 57206 is shown in Figure \ref{flux_knot} (black points and left axis).
The analysis procedure is the same as for the 5-day binned LC shown in Figure \ref{lc_5day};
the Crab PWN is modeled by Eq. (\ref{crab_stationary}) and the normalization, photon index ($N\sub{0,SYN}$ and $\Gamma\sub{0,SYN}$) of the Crab synchrotron component and the normalization of the isotropic diffuse emission are set free while the other sources are fixed to the baseline values.
We excluded data points that overlapped with any of the flares,
because we focus on the smaller flux variations in the longer-time-scale data.
Data points are interpolated by a cubic spline (black solid line).
The knot-pulsar separation observed by {\textit {Hubble Space Telescope}} ({\textit {HST}}) \citep{Rudy2015}\footnote{We choose the {\textit {Hubble Space Telescope}} data because the simultaneous Keck data may have unaccounted systematic errors~ \citep{Rudy2015}.} is shown in Figure \ref{flux_knot} as red points (scaled to the right axis).
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=150mm]{flux_innerknot_30days_excludeSF6.pdf}
\caption{30-day binned gamma-ray flux light curve (100 MeV -- 500 GeV) of the Crab synchrotron component and the separation distance between the Crab pulsar and the inner knot.
Black data points: gamma-ray energy fluxes.
Red data points: separation distances between the Crab pulsar and the inner knot, measured by the {\textit {Hubble Space Telescope}} \citep{Rudy2015}.
The small flares are shown by the blue lines, and the green lines show the reported flares.
}
\label{flux_knot}
\end{center}
\end{figure*}
Figure~\ref{corr_gamma_knot} compares the knot separations from the {\textit{HST}} observations with the corresponding gamma-ray energy fluxes based on the 30-day binned LC of Figure~\ref{flux_knot}.
The gamma-ray fluxes are visibly higher when the knot separation is larger.
Although this comparison is highly suggestive, the observations of the inner knot are relatively sparse compared to the various time scales of the gamma-ray flux. More evenly sampled observations by optical telescopes would be necessary to establish a firm correlation between the gamma-ray flux and the knot-pulsar separation.
\begin{figure}[tb]
\begin{center}
\includegraphics[width=100mm]{corr_eflux_knot_lcbin_withoutflare_excludeSF6.pdf}
\caption{Gamma-ray energy flux (100 MeV -- 500 GeV) vs. knot-pulsar separation.
The gamma-ray fluxes were averaged with 30-day bins and the knot-pulsar separation was observed by {\textit {HST}} within each 30-day bin.
The data for the knot-pulsar separation are from \cite{Rudy2015}.}
\label{corr_gamma_knot}
\end{center}
\end{figure}
\section{Discussion}\label{section3}
\subsection{Flare statistics}
In Sec.~\ref{day_scale_ana}, we obtained the fluences, characteristic time scales, and photon indices for all 17 flares, including both the ``reported flares'' and the ``small flares''.
The relatively large number of detections allows us to discuss the features of the flares on a statistical basis.
As seen in Figure \ref{lc_corr}-(b), many of the ``small flares'' have fluences smaller than
$2\times10^{-4}~\mathrm{erg\,cm^{-2}}$. This result suggests that even weaker flares may exist, although it is
difficult to resolve such flares with current instruments. Superposition of unresolved weak flares may contribute to
the underlying variability of the LC seen in Figure \ref{lc_5day}. Because of the fast variability, the flares should be produced
through the synchrotron channel, and only electrons with the highest attainable energies may provide a significant
contribution in the GeV energy band. If the spectral analysis of a flare
allows us to determine the cut-off energy, we can use the single-particle spectrum to estimate the energy
of the emitting particles, \(E_e\). The single-particle synchrotron spectrum is described by the following approximate
expression:
\begin{equation}\label{asymptotic_form_0}
F_{\rm sp}(\omega) \propto \left(\frac\omega{\omega_c}\right)^{1/3}\exp\left(-\frac\omega{\omega_c}\right) \,.
\end{equation}
The critical frequency is defined as $\omega_c \equiv (3E^2_eeB)/(2m_e^3c^5)$, where
$E_e$, $e$, $m_e$, $c$, and $B$ are the electron energy, the elementary charge, the electron mass, the speed of light, and
the magnetic field strength, respectively \citep{Rybicki}.
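As a quick numerical illustration (not part of the analysis pipeline of this work), the critical synchrotron energy $\hbar\omega_c$ can be evaluated directly in CGS units; the 4 PeV electron energy and 100 $\upmu$G field below are illustrative fiducial values:

```python
# Sketch: evaluate hbar*omega_c = 3*e*B*E_e^2*hbar/(2*m_e^3*c^5) in CGS units.
# The 4 PeV / 100 uG inputs are illustrative fiducial values, not fit results.
e_cgs = 4.803e-10   # elementary charge [esu]
m_e   = 9.109e-28   # electron mass [g]
c     = 2.998e10    # speed of light [cm/s]
hbar  = 1.055e-27   # reduced Planck constant [erg s]
erg_per_eV = 1.602e-12

def cutoff_energy_MeV(E_e_PeV, B_uG):
    """Critical synchrotron photon energy hbar*omega_c in MeV."""
    E_e = E_e_PeV * 1e15 * erg_per_eV   # electron energy [erg]
    B = B_uG * 1e-6                     # magnetic field [G]
    omega_c = 3.0 * E_e**2 * e_cgs * B / (2.0 * m_e**3 * c**5)
    return hbar * omega_c / erg_per_eV / 1e6

# A ~4 PeV electron in a ~100 uG field radiates near 100 MeV,
# i.e. in the Fermi-LAT band where the flares are observed.
E_c = cutoff_energy_MeV(4.0, 100.0)
print(E_c)
```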
For the typical spectra revealed at GeV energies, detection of a cut-off in the {\it Fermi}-LAT band implies that the emitting particles are accelerated by a very efficient acceleration mechanism or are produced in a relativistically moving plasma.
Presently, it is broadly accepted that the magnetic field reconnection scenario provides the most comprehensive approach for the interpretation of the flaring emission \cite[see, e.g.,][]{2012ApJ...754L..33C,flare_review}. We therefore present some basic estimates in the framework of this scenario \cite[see][for a more detailed consideration]{2005MNRAS.358..113L}. We assume that the reconnection proceeds in the relativistic regime, and we ignore some processes, e.g., the impact of the guiding magnetic field, which potentially might be very important. A detailed consideration of magnetic reconnection is beyond the scope of this paper.
If we assume the flares originate from magnetic reconnection events, then some key aspects of the flare emission are determined by the geometry of the reconnection region.
The maximum energy that an electron can gain in the reconnection region can be estimated as
\begin{equation}
E\sub{e, max} = eB_0L \,.
\end{equation}
Here $B_0$ and $L$ are the strength of the reconnecting magnetic field and the length of the layer, respectively. It is assumed here that
the electric field in the reconnection region equals the strength of the initial magnetic field.
The length of the reconnection region, \(L\), determines the flare rise time, \(\tau\sub{rise}\simeq L/c\).
The total magnetic energy that is dissipated in the region is given by
\begin{equation}
W_B = \frac{cB\sub{0}^2}{4\pi}aL\frac{L}{c} = \frac{a_L L^3B\sub{0}^2}{4\pi}\,,
\end{equation}
where $a=a_L L$ represents the width of the reconnection region (i.e., its length in the direction perpendicular to the electric field).
The emission produced by particles accelerated beyond the ideal MHD limit might be highly anisotropic \citep{2007ApJ...655..980D,2012ApJ...754L..33C},
as electrons may lose energy over just a fraction of the gyro radius and photons are emitted within a narrow beaming cone.
The beaming solid angle is determined by $\Delta \Omega =\mathrm{min}[ \pi(ct\sub{cool}/r\sub{g})^2, \,4\pi]\,$,
where $t\sub{cool}\propto E\sub{e}^{-1}B^{-2}$ is the synchrotron cooling time and $r\sub{g}\propto E\sub{e}B^{-1}$ is the gyro radius of the electron.
The values of the solid angle describe two distinct regimes: $\Delta\Omega=\pi(ct\sub{cool}/r\sub{g})^2$ corresponds to the strong beaming case whereas $\Delta\Omega=4\pi$ represents the isotropic radiation case.
The observed fluences can be written as \(F= \xi W_B/\left(D^2\Delta \Omega\right)\), which yields
\begin{equation}\label{beam}
F\sub{beaming} \propto B\sub{0}^{-5/2}(\hbar\omega_c)^{7/2}
\end{equation}
for the strong beaming case and
\begin{equation}\label{iso}
F\sub{iso} \propto B\sub{0}^{-5/2}(\hbar\omega_c)^{3/2}
\end{equation}
for the isotropic case. Here $\xi$ and $D$ represent the radiation efficiency and the distance to the Crab \ac{pwn}, respectively.
The fluence $F\sub{SYN}$ vs. the critical photon frequency is shown in Figure~\ref{fluence_omega}.
The dependence between these quantities was fitted with a power-law function:
\begin{equation}\label{pl}
\hbar \omega_c \propto F\sub{SYN}^{p}\,.
\end{equation}
We include here only the flares that allow the detection of the cut-off energies (i.e., small flare 7 (b), 2011 April flare, 2013 March flare and 2013 October flare (a))\footnote{We do not consider 2013 October flare (b) whose rise time is not determined.}.
For these data we obtained \(p=0.74\pm0.32\) as the best-fit value.
The best-fit approximation is shown in Figure \ref{fluence_omega} as a blue dashed line.
\begin{figure}[btp]
\begin{center}
\includegraphics[width=100mm]{fluence_omega_abs_rev_20191203_onlyexpcut.pdf}
\caption{Fluence vs. cut-off energy for each flare in which a significant spectral cut-off was observed.
Black data point: small flare 7 (b). Red data points: 2011 April flare, 2013 March flare, and 2013 October flare (a).
The critical synchrotron energy ($\hbar\omega_c$) is defined by the observed cut-off energy.
The dashed blue line shows the best-fit power-law function for these data points.
}
\label{fluence_omega}
\end{center}
\end{figure}
The dependence of the fluence on the critical synchrotron energy expected in the anisotropic regime, \(p=0.286\), does not seem to agree with the best-fit approximation, unless the magnetic field changes strongly between the flares. However, the isotropic regime, \(p=0.66\), might be consistent with the observed dependence for the same field strength in all flares, although the error bars are significant.
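The two model exponents quoted above follow directly from Eqs.~(\ref{beam}) and (\ref{iso}); a minimal arithmetic check against the fitted $p=0.74\pm0.32$:

```python
# F_beaming ∝ omega_c^{7/2} => omega_c ∝ F^{2/7};  F_iso ∝ omega_c^{3/2} => omega_c ∝ F^{2/3}
p_beaming = 2.0 / 7.0      # ≈ 0.286 (strong-beaming regime)
p_iso = 2.0 / 3.0          # ≈ 0.667 (isotropic regime)
p_fit, p_err = 0.74, 0.32  # best-fit value quoted in the text

# The beaming exponent lies outside the 1-sigma interval of the fit,
# while the isotropic exponent lies inside it.
beam_ok = abs(p_fit - p_beaming) <= p_err
iso_ok = abs(p_fit - p_iso) <= p_err
print(beam_ok, iso_ok)
```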
This might be taken as a hint for the reconnection origin of the variable emission, but more detailed consideration makes this possibility less feasible. In particular, Eq. (\ref{iso}) allows us to obtain estimates for the strength of the magnetic field as
\begin{equation}\label{iso_B}
B\sub{iso} \simeq 400 ~a_L^{\frac{2}{5}}\xi^{\frac{2}{5}}\left(\frac{D}{2~\mathrm{kpc}}\right)^{-\frac{4}{5}}\left(\frac{\hbar\omega\sub{c}}{300~\mathrm{MeV}}\right)^{\frac{3}{5}} \left(\frac{F\sub{SYN}}{6\times10^{-4}~\mathrm{erg~cm^{-2}}}\right)^{-\frac{2}{5}}\,\mathrm{\upmu G}\,.
\end{equation}
As shown below, the detected rise and decay time scales require a significantly stronger magnetic field.
This discrepancy implies that magnetic field reconnection cannot be readily taken as the ultimate explanation for the origin of the variable GeV emission detected from the Crab \ac{pwn}, suggesting that other phenomena, e.g., Doppler boosting, play an important role \cite[see, e.g.,][]{Rolf2012}.\\
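For reference, Eq.~(\ref{iso_B}) is easy to evaluate directly; the sketch below assumes $a_L=\xi=1$ and the fiducial values shown in the equation:

```python
def B_iso_uG(a_L=1.0, xi=1.0, D_kpc=2.0, E_c_MeV=300.0, F_syn=6e-4):
    """Magnetic field [uG] from Eq. (iso_B); all defaults are the fiducial values."""
    return (400.0 * a_L**0.4 * xi**0.4 * (D_kpc / 2.0)**-0.8
            * (E_c_MeV / 300.0)**0.6 * (F_syn / 6e-4)**-0.4)

B_fid = B_iso_uG()            # 400 uG at the fiducial parameters
print(B_fid, B_fid < 1000.0)  # well below the ~1 mG required by the time scales
```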
Reconstruction of the spectrum of the emitting particles for the flares that do not show a well-defined cut-off energy is less straightforward.
Let us assume that the electron spectrum is described by a power-law with an exponential cut-off: \(dN_e/dE_e\propto E^{-\alpha} {\exp}\left(-E_e/E_{\rm max}\right)\). The modeling of the emission from the Crab PWN implies that the high-energy part of the electron spectrum is characterized by \(\alpha\simeq3.23\) and the cut-off energy is in the PeV band \(E_{\rm max}\simeq3\rm \,PeV\) \citep[as obtained by][in the framework of the constant B-field model]{Meyer2010}.
The total synchrotron spectrum is then
\begin{equation}\label{asymptotic_form_1}
F(\omega) \propto \int dE_e' E'_e{}^{-\alpha} {\exp}\left(-E'_e/E_{\rm max}\right)\left(\frac\omega{\omega_c}\right)^{1/3}\exp\left(-\frac\omega{\omega_c}\right) \,.
\end{equation}
For the spectra that do not allow determination of the cut-off frequency, one should expect \(\hbar\omega_c(E_{\rm max})\lesssim 100\rm\,MeV\) or \(\hbar\omega_c(E_{\rm max})\gg100\rm\,MeV\); the latter possibility is, however, robustly excluded by synchrotron radiation theory and the broadband spectrum measured with {{\it Fermi}}--LAT\footnote{The observed cut-off energy between the {\it{COMPTEL}} and {{\it Fermi}}--LAT bands was $97 \pm 12$ MeV \citep{Crab_pulsar2010}.}.
In the regime \(\omega \geq \omega_c(E_{\rm max})\), the above integral can be computed with the {\it steepest descent} method yielding
\begin{equation}\label{asymptotic_form}
F(\omega) \propto E_{\rm max}^{-\alpha+1}\left(\frac{\omega}{\omega_c(E_{\rm max})}\right)^{-\frac{6\alpha-5}{18}}\exp\left(-\frac3{2^{2/3}}\left(\frac\omega{\omega_c(E_{\rm max})}\right)^{1/3}\right) \,,
\end{equation}
where the dependence on \(E_{\rm max}\) and \(B\) is kept.
The photon index at \(\omega\) can be obtained as
\begin{equation}
\Gamma\sub{SYN} \simeq 1-\frac{\frac{dF(\omega/\omega_c(E_{\rm max}))}{d\omega}}{F(\omega/\omega_c(E_{\rm max}))}\omega \,,
\end{equation}
which can be derived analytically:
\begin{equation}\label{omega_phindex}
\Gamma\sub{SYN}=\frac{6\alpha+13}{18}+\frac1{2^{\nicefrac23}}\left(\frac{\omega}{\omega_c(E_{\rm max})}\right)^{\nicefrac13}\,.
\end{equation}
The variable gamma-ray emission might originate from the distribution of non-thermal electrons that are responsible for the broad-band emission from the Crab PWN, for example if the strength of the magnetic field fluctuates or the electron cut-off energy changes. To probe these possibilities we set \(\alpha=3.23\) and study the relation between the flare flux and photon index. If the variation is caused by a change of the magnetic field (i.e., \(E_{\rm max}=\rm const\)), then
\begin{equation}\label{eq:eflux_gamma_1}
F(\omega)\propto \left(\Gamma\sub{SYN}-1.8\right)^{-2.4}\exp\left(-3\left(\Gamma\sub{SYN}-1.8\right)\right)\,.
\end{equation}
If the variability of the gamma-ray emission is caused by a change of the electron cut-off energy, then there is an additional factor in the expression that determines the flux level:
\begin{equation}\label{eq:eflux_gamma_2}
F(\omega)\propto \left(\Gamma\sub{SYN}-1.8\right)^{0.95}\exp\left(-3\left(\Gamma\sub{SYN}-1.8\right)\right)\,.
\end{equation}
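The numerical constants in Eqs.~(\ref{eq:eflux_gamma_1}) and (\ref{eq:eflux_gamma_2}) follow from Eqs.~(\ref{asymptotic_form}) and (\ref{omega_phindex}) with $\alpha=3.23$; a short arithmetic check:

```python
# With x = (omega/omega_c)^{1/3}, Eq. (omega_phindex) gives
# Gamma_SYN - (6*alpha+13)/18 = 2^{-2/3} * x,
# so the exponential -3*2^{-2/3}*x in Eq. (asymptotic_form) becomes -3*(Gamma_SYN - const).
alpha = 3.23
const = (6.0 * alpha + 13.0) / 18.0   # ≈ 1.80, the offset in both equations
# B varying, E_max fixed: (omega/omega_c)^{-(6a-5)/18} = x^{-(6a-5)/6}
exp_B = -(6.0 * alpha - 5.0) / 6.0    # ≈ -2.40, prefactor exponent in Eq. (eq:eflux_gamma_1)
# E_max varying at fixed omega: E_max^{1-alpha} contributes x^{3(alpha-1)/2}
exp_Emax = 3.0 * (alpha - 1.0) / 2.0 + exp_B   # ≈ 0.95, exponent in Eq. (eq:eflux_gamma_2)
print(round(const, 2), round(exp_B, 2), round(exp_Emax, 2))
```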
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=100mm]{eflux_gamma_rev_20191206_both_flare_excludeSF6.pdf}
\caption{
Energy flux vs. photon index for the Crab flares for which no significant spectral cut-off was detected.
Black data points: ``small flares". Red data points: ``reported flares".
Two lines show the best-fit approximations of the data points by Eq.~(\ref{eq:eflux_gamma_1}) ($E_{\rm max}=\mathrm{const}$: blue line) and Eq.~(\ref{eq:eflux_gamma_2}) ($B=\mathrm{const}$: orange line).
}
\label{eflux_gamma}
\end{center}
\end{figure}
The plot of the energy flux and photon index for the ``small flares" and the ``reported flares" is shown in Figure~\ref{eflux_gamma}.
From the figure it can be seen that the data points seem to be inconsistent with Eq.~(\ref{eq:eflux_gamma_1}), which corresponds to the case when
the flare is generated by a changing magnetic field. Although Eq.~(\ref{eq:eflux_gamma_2}) agrees better with the data,
the discrepancy is still very significant. This suggests that a one-zone model is inadequate for the study of the observed variability, and a more detailed model is required. In particular, the properties at the flare production site might be different from the typical conditions inferred for the nebula, implying the need for a multi-zone configuration. In what follows we try to infer the conditions at the flare production site using a simple synchrotron model.
\subsection{The origin of small flares}
All ``small flares'' except for one episode allow us to determine the rise and decay time-scales based on the 1.5-day binned light curves. Here, we discuss constraints on the magnetic field strength assuming a synchrotron origin of the emission.
For the sake of simplicity we adopt a model that assumes a homogeneous magnetic field in the flare production site, which does not move relativistically with respect to the observer. Thus the rise time is associated with the particle acceleration time-scale:
\begin{equation}
\tau\sub{acc} \simeq 10\eta\left(\frac{E\sub{e}}{1\rm\,PeV}\right)\left(\frac{B}{100\rm\,\upmu G}\right)^{-1} \rm \, days\,,
\end{equation}
where \(\eta\), $E\sub{e}$ and $B$ are the acceleration efficiency (\(\eta\rightarrow1\) for particle acceleration by reconnection), the electron energy and the magnetic field strength, respectively.
Since variability is observed for \(\sim100\rm \, MeV\) emission, we obtain
\begin{equation}
E\sub{e}\simeq 4 \left(\frac{B}{100\rm\,\upmu G}\right)^{-1/2}\left(\frac{\varepsilon}{100\rm\,MeV}\right)^{1/2}\rm\,PeV\,,
\end{equation}
where $\varepsilon$ is synchrotron photon energy.
This yields an acceleration time of
\begin{equation}\label{eq:acc}
\tau\sub{acc} \simeq 40\eta \left(\frac{B}{100\rm\,\upmu G}\right)^{-3/2}\left(\frac{\varepsilon}{100\rm\,MeV}\right)^{1/2} \rm\,days\,,
\end{equation}
which translates to the following limitation on the magnetic field strength:
\begin{equation}\label{eq:acc_b}
B \simeq 1\eta^{2/3} \left(\frac{\tau\sub{acc}}{1\rm\, day}\right)^{-2/3}\left(\frac{\varepsilon}{100\rm\,MeV}\right)^{1/3}\rm\, mG\,.
\end{equation}
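Eq.~(\ref{eq:acc_b}) is simply the inversion of Eq.~(\ref{eq:acc}); evaluating it numerically:

```python
def B_acc_mG(tau_acc_days=1.0, eta=1.0, eps_MeV=100.0):
    """Field [mG] required by Eq. (eq:acc) for a given acceleration time."""
    # Invert tau_acc = 40*eta*(B/100uG)^{-3/2}*(eps/100MeV)^{1/2} days;
    # the 0.1 factor converts the 100 uG normalization to mG.
    return 0.1 * (40.0 * eta / tau_acc_days)**(2.0 / 3.0) * (eps_MeV / 100.0)**(1.0 / 3.0)

B_req = B_acc_mG()  # ~1.2 mG, i.e. the ~1 mG scale quoted in Eq. (eq:acc_b)
print(B_req)
```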
The observational requirement of \(\tau\sub{acc}\approx1\) day implies an extremely strong magnetic field or an acceleration with efficiency exceeding the ideal \ac{mhd} limit: \(\eta<1\).
The cooling time, however, which is associated with the decay time (\(\approx2\) days), does not depend on the acceleration efficiency and allows us to obtain the following constraint for the magnetic field
\begin{equation}\label{eq:cool}
B \simeq 1\left(\frac{\tau\sub{cool}}{2\rm\,days}\right)^{-2/3}\left(\frac{\varepsilon}{100\rm\,MeV}\right)^{-1/3} \rm\, mG\,.
\end{equation}
This estimate is valid for the isotropic radiation regime, which, in particular, implies negligible Doppler boosting and homogeneity of the magnetic field. These assumptions are most likely violated for the strong flares \cite[see, e.g.,][]{Rolf2012}. We nevertheless adopt these assumptions here as we aim to test the consistency of the ``small flares'' with these assumptions {\it ex adverso}.
Given that (i) both the rise and decay time-scales are short, and (ii) the dependence on the photon energy in Eqs.~(\ref{eq:acc}) and (\ref{eq:cool}) is inverse, the magnetic field in the production region needs to be very strong, approaching the \(\rm mG\) value. This estimate seems to be inconsistent with Eq.~(\ref{iso_B}), posing certain difficulties for the reconnection scenario.
In \acp{pwn} magnetic fields can provide an important contribution to the local pressure:
\begin{equation}
P\sub{B}=\frac{B^2}{8\pi}=4\times10^{-10} \left(\frac{B}{100\rm\,\upmu G}\right)^{2}\rm\,erg\,cm^{-3}\,.
\end{equation}
On the other hand, \acp{pwn} are expected to be nearly isobaric systems; thus the characteristic pressure in the Crab \ac{pwn} can be determined by the location of the pulsar wind termination shock:
\begin{equation}
P\sub{PWN} \simeq \frac{2}{3}\times\frac{\dot{E}\sub{SD}}{4\pi R\sub{TS}^2c} \simeq 5\times 10^{-9}\left(\frac{\dot{E}\sub{SD}}{5\times10^{38}~\mathrm{erg~s^{-1}}}\right)\left(\frac{R\sub{TS}}{0.14~\mathrm{pc}}\right)^{-2}~\mathrm{erg\,cm^{-3}}\,,
\end{equation}
where $\dot{E}\sub{SD}$ and $R\sub{TS}$ are the spin-down power of the central pulsar and the location of the termination shock, respectively.
Such total pressure implies the magnetic field of $B\lesssim400\rm\,\upmu G$, which appears to be significantly weaker than the strength required for acceleration and cooling of the particles responsible for the ``small flares.''
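Both pressure estimates above are straightforward to reproduce; the sketch below re-derives $P\sub{PWN}$ and the corresponding field limit in CGS units:

```python
import math

pc = 3.086e18      # cm
c = 2.998e10       # cm/s
E_dot = 5e38       # spin-down power [erg/s]
R_TS = 0.14 * pc   # termination-shock radius [cm]

# P_PWN = (2/3) * E_dot / (4*pi*R_TS^2*c), as in the text
P_PWN = (2.0 / 3.0) * E_dot / (4.0 * math.pi * R_TS**2 * c)  # ~5e-9 erg/cm^3
# Field whose magnetic pressure B^2/(8*pi) matches P_PWN
B_max_uG = math.sqrt(8.0 * math.pi * P_PWN) * 1e6

print(P_PWN, B_max_uG)   # ~350 uG, i.e. the B <~ 400 uG quoted above
```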
Strong local variations of the pressure/magnetic field at the termination shock seen in the 3D MHD simulations \citep{2014MNRAS.438..278P, 2016JPlPh..82f6301O} may mitigate the discrepancy between the estimated magnetic field and the one required to accelerate and cool the particles responsible for the ``small flares."
This result implies that the production of small flares under conditions typical for the nebula is difficult. One of the common approaches to relax the constraints imposed by the fast variability is to involve a relativistically moving production site \citep{Komissarov2011,Lyutikov2016}.
In this case, the apparent variability time-scales are shortened:
\begin{equation}
\tau=\frac{\tau'}\delta\,,
\end{equation}
where \(\tau'\) and \(\delta\) are the plasma co-moving frame time scale and the Doppler boosting factor, respectively.
In addition, the emission frequency changes as well, so the co-moving frame photon energy is \(\varepsilon'=\varepsilon/\delta\). Thus, one obtains that the requirements for magnetic field strength, Eqs.~(\ref{eq:acc_b}) and (\ref{eq:cool}) are relaxed by a factor \(\delta\) and \(\delta^\frac13\), respectively.
The maximum magnetic field strength consistent with the nebular pressure, $B=400\rm\,\upmu G$, gives the following lower limit for the bulk Lorentz factors from Eqs.~(\ref{eq:acc_b}) and (\ref{eq:cool}):
\begin{equation}
\delta > 3\eta^{2/3}\left(\frac{\tau\sub{acc}}{1\rm\,day}\right)^{-2/3}\left(\frac{\varepsilon}{100\rm\,MeV}\right)^{1/3}\left(\frac{B}{400\rm\,\upmu G}\right)^{-1}
\end{equation}
\begin{equation}
\delta > 30\left(\frac{\tau\sub{cool}}{2\rm\, days}\right)^{-2}\left(\frac{\varepsilon}{100\rm\,MeV}\right)^{-1}\left(\frac{B}{400\rm\,\upmu G}\right)^{-3}\rm .
\end{equation}
The existence of such high bulk Lorentz factors in the termination shock downstream region seems to be challenging but probably cannot be excluded from first principles. For example, the bulk Lorentz factor of a weakly magnetized flow passing through an inclined relativistic shock depends only on the inclination angle \(\alpha\): \(\gamma= 3/(\sqrt{8}\sin\alpha)\) \citep[see, e.g.,][]{2002AstL...28..373B}. Thus, the required bulk Lorentz factor, \(\gamma\simeq15\), can be achieved if the pulsar wind velocity makes an angle of \(\alpha\sim5^\circ\) with the termination shock.
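A quick evaluation of the oblique-shock relation (with the caveat that the quoted $\alpha\sim5^\circ$ is a round number):

```python
import math

def gamma_downstream(alpha_deg):
    """Bulk Lorentz factor gamma = 3/(sqrt(8)*sin(alpha)) behind an inclined shock."""
    return 3.0 / (math.sqrt(8.0) * math.sin(math.radians(alpha_deg)))

# gamma ≈ 15 is reached near alpha ≈ 4 deg, while alpha = 5 deg gives gamma ≈ 12,
# consistent with the quoted alpha ~ 5 deg to rounding.
g4, g5 = gamma_downstream(4.0), gamma_downstream(5.0)
print(round(g4, 1), round(g5, 1))
```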
Detailed \ac{mhd} simulations are required to verify the feasibility of producing variable GeV emission in a relativistically moving plasma.
Such \ac{mhd} simulations should account for a realistic magnetic field and plasma internal energy in the flow that is formed at the inclined termination shock. These parameters are required to define the spectrum of non-thermal particles in the flow and, in turn, to compute the synchrotron radiation \citep{2014MNRAS.438..278P,Lyutikov2016}.
As discussed in the literature \citep[see,
e.g.,][]{Komissarov2011}, the ``inner knot" may be associated with the part of the post-shock flow that is characterized by
a large Doppler factor, and a significant fraction of \(100\rm\,MeV\) gamma rays might be produced in this region.
Some spectral features seen in the hard X-ray band can be interpreted as hints supporting this hypothesis.
The spectral energy distribution of the non-thermal emission and X-ray morphology revealed with the {\textit {Chandra X-ray Observatory}} are consistent
with \ac{mhd} models that assume that the non-thermal particles are accelerated at the pulsar wind termination shock
and are advected with the nebular \ac{mhd} flow.
However, there is no observational evidence indicating that the PeV particles that are responsible for the synchrotron gamma-ray emission belong to the same distribution.
The COMPTEL data suggest a spectral hump around $\sim$ 1 MeV \citep{Crab_comptel}.
In addition, \textit{NuSTAR} found a spectral break with $\Delta \Gamma\sim0.25$ at $\sim 9$ keV in the torus region \citep{Nustar}, and SPI on {\textit{INTEGRAL}} also detected a spectral steepening above 150 keV \citep{Crab_SPI}.
These observational results may imply the existence of another spectral component above $\sim 10$ keV that is produced by a different population of non-thermal particles than the optical and soft X-ray emission \citep{Aharonian1998,2019MNRAS.489.2403L}.
To study the possible relation of the \(100\rm\,MeV\) gamma-ray emission to the inner knot, we constructed the 30-day binned gamma-ray LC of the Crab PWN synchrotron component, i.e., the nebular emission with the flares and
``small flares'' excluded. We found an apparent correlation between this LC and the pulsar to inner-knot separation, as shown in
Figure~\ref{flux_knot}. This tendency is, however, inconsistent with the prediction of the \ac{mhd} model of
\citet{Komissarov2011}. Further observations of the ``inner knot'' can test this relation.
\\
\section{Conclusions}\label{section4}
We performed a systematic search for gamma-ray flares from the Crab PWN using 7.4 years of data from the
\textit{Fermi} LAT. Our analysis used the off-pulse window of the Crab pulsar. In addition to the flares
reported in the literature (``reported flares''), seven lower-intensity flares (``small flares'') are found in this work.
The synchrotron component of the Crab PWN shows highly variable emission, indicating that the ``small flares'' also originate from synchrotron radiation.
The ``small flares'' differ from the stationary synchrotron component not only by larger fluxes but also by harder photon indices.
We determined the characteristic rise and decay time scales for the ``small flares'' based on 1.5-day
light curves and found that the ``small flares'' are characterized by day-scale variability.
Apart from two exceptionally bright flares in 2011 April and 2013 March, the ``small flares'' and the ``reported
flares'' show the same range of photon index with a harder-when-brighter trend. We tested the distribution of the flare
parameters against predictions from a simple reconnection model. Although the dependence of the emission fluence on the
observed cut-off energy appeared to be consistent with the model prediction based on the isotropic emission region for some flares, the implied magnetic
field appeared to be too weak to reproduce the observed variability. In order to explain the short time
variability, a strong magnetic field, $\approx1~\mathrm{mG}$, is required. Such a high magnetic field at the production
site of the ``small flares'' implies a magnetic pressure significantly exceeding the value that is anticipated from the
position of the pulsar wind termination shock. This challenges the conventional view on the origin of the
\(100\rm\,MeV\) gamma-ray emission from the Crab PWN.
The requirement of the strong magnetic field can be relaxed by assuming that the production site of the ``small flares'' moves relativistically with respect to the observer.
In this case, rather high Doppler boosting factors, \(\delta\gtrsim10\), are required.
Such Doppler factors are attainable in the termination shock downstream if the pulsar wind passes through an inclined shock, which makes an angle of \(\alpha\sim5^\circ\) with the wind velocity.
The magnetic field and plasma internal energy in the flow also depend on $\alpha$; thus more detailed \ac{mhd} simulations are required to verify the possibility of the production of the ``small flares'' by relativistically moving plasma.
\acknowledgments
{The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support
from a number of agencies and institutes that have supported both the
development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the
Department of Energy in the United States, the Commissariat \`a l'Energie Atomique
and the Centre National de la Recherche Scientifique / Institut National de Physique
Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana
and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research
Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and
the K.~A.~Wallenberg Foundation, the Swedish Research Council and the
Swedish National Space Board in Sweden.
Additional support for science analysis during the operations phase is gratefully
acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre
National d'\'Etudes Spatiales in France. This work performed in part under DOE
Contract DE-AC02-76SF00515.}
M.A. is supported by the RIKEN Junior Research Associate Program.
This work was supported by KAKENHI Grant Numbers 18H03722 and 18H05463 (Y.U.).
\section{Boundary ribbon operators with $\Xi(R,K)^\star$}\label{app:ribbon_ops}
\begin{definition}\rm\label{def:Y_ribbon}
Let $\xi$ be a ribbon, $r \in R$ and $k \in K$. Then $Y^{r \otimes \delta_k}_{\xi}$ acts on a direct triangle $\tau$ as
\[\tikzfig{Y_action_direct},\]
and on a dual triangle $\tau^*$ as
\[\tikzfig{Y_action_dual}.\]
Concatenation of ribbons is given by
\[Y^{r \otimes \delta_k}_{\xi'\circ\xi} = Y^{(r \otimes \delta_k)_2}_{\xi'}\circ Y^{(r \otimes \delta_k)_1}_{\xi} = \sum_{x\in K} Y^{(x^{-1}\rightharpoonup r) \otimes \delta_{x^{-1}k}}_{\xi'}\circ Y^{r\otimes\delta_x}_{\xi},\]
where we see the comultiplication $\Delta(r \otimes \delta_k)$ of $\Xi(R,K)^*$. Here, $\Xi(R,K)^*$ is a coquasi-Hopf algebra, and so has coassociative comultiplication (it is the multiplication which is only quasi-associative). Therefore, we can concatenate the triangles making up the ribbon in any order, and the concatenation above uniquely defines $Y^{r\otimes\delta_k}_{\xi}$ for any ribbon $\xi$.
\end{definition}
Let $s_0 = (v_0,p_0)$ and $s_1 = (v_1,p_1)$ be the sites at the start and end of a triangle. The direct triangle operators satisfy
\[k'{\triangleright}_{v_0}\circ Y^{r\otimes \delta_k}_{\tau} =Y^{r\otimes \delta_{k'k}}_{\tau}\circ k'{\triangleright}_{v_0},\quad k'{\triangleright}_{v_1}\circ Y^{r\otimes\delta_k}_\tau = Y^{r\otimes\delta_{k'k^{-1}}}_\tau\circ k'{\triangleright}_{v_1}\]
and
\[[\delta_{r'}{\triangleright}_{s_i},Y^{r\otimes\delta_k}_{\tau}]= 0\]
for $i\in \{0,1\}$.
For the dual triangle operators, we have
\[k'{\triangleright}_{v_i}\circ \sum_k Y^{r\otimes\delta_k}_{\tau^*} = Y^{(k'{\triangleright} r)\otimes\delta_k}_{\tau^*}\circ k'{\triangleright}_{v_i}\]
again for $i\in \{0,1\}$. However, there do not appear to be similar commutation relations for the actions of $\mathbb{C}(R)$ on faces of dual triangle operators. In addition, in the bulk, one can reconstruct the vertex and face actions using suitable ribbons \cite{Bom,CowMa} because of the duality between $\mathbb{C}(G)$ and $\mathbb{C} G$; this is not true in general for $\mathbb{C}(R)$ and $\mathbb{C} K$.
\begin{example}\label{ex:Yrib}\rm
Given the ribbon $\xi$ on the lattice below, we see that $Y^{r\otimes \delta_k}_{\xi}$ acts only along the ribbon and trivially elsewhere. We have
\[\tikzfig{Y_action_ribbon}\]
if $g^2,g^4,g^6(g^7)^{-1}\in K$, and $0$ otherwise, and
\begin{align*}
&y^1 = (rx^1)^{-1}\\
&y^2 = ((g^2)^{-1}rx^2)^{-1}\\
&y^3 = ((g^2g^4)^{-1}rx^3)^{-1}\\
&y^4 = ((g^2g^4g^6(g^7)^{-1})^{-1}rx^3)^{-1}
\end{align*}
One can check this using Definition~\ref{def:Y_ribbon}.
\end{example}
It is claimed in \cite{CCW} that these ribbon operators obey equivariance properties with the site actions of $\Xi(R,K)$
similar to those of the bulk ribbon operators, but we could not reproduce these properties. Precisely, we find that when such a ribbon is `open' in the sense of \cite{Kit, Bom, CowMa}, an intermediate site $s_2$ on the ribbon $\xi$ between the endpoints $s_0,s_1$ does \textit{not} satisfy
\[\Lambda_{\mathbb{C} K}{\triangleright}_{s_2}\circ Y^{r\otimes \delta_k}_{\xi} = Y^{r\otimes \delta_k}_{\xi}\circ \Lambda_{\mathbb{C} K}{\triangleright}_{s_2}.\]
in general, nor the corresponding relation for $\Lambda_{\mathbb{C}(R)}{\triangleright}_{s_2}$.
\section{Measurements and nonabelian lattice surgery}\label{app:measurements}
In Section~\ref{sec:surgery}, we described nonabelian lattice surgery for a general underlying group algebra $\mathbb{C} G$, but for simplicity of exposition we assumed that the projectors $A(v)$ and $B(p)$ could be applied deterministically. In practice, we can only make a measurement, which will only sometimes yield the desired projectors. As the splits are easier, we discuss how to handle these first, beginning with the rough split. We demonstrate on the same example as previously:
\[\tikzfig{rough_split_calc}\]
\[\tikzfig{rough_split_calc2}\]
where we have measured the edge to be deleted in the $\mathbb{C} G$ basis. The measurement outcome $n$ determines which corrections to make. The last arrow denotes corrections made using ribbon operators. These corrections are all unitary, and if the measurement outcome is $e$ then no corrections are required at all. The generalisation to larger patches is straightforward but requires keeping track of multiple different outcomes.
Next, we discuss how to handle the smooth split. In this case, we measure the edges to be deleted in the Fourier basis; that is, we measure the self-adjoint operator $\sum_{\pi} p_{\pi} P_{\pi}{\triangleright}$ at a particular edge, where
\[P_{\pi} := P_{e,\pi} = {{\rm dim}(W_\pi)\over |G|}\sum_{g\in G} {\rm Tr}_\pi(g^{-1}) g\]
from Section~\ref{sec:lattice} acts by the left regular representation. Thus, for a smooth split, we have the initial state $|e\>_L$:
\[\tikzfig{smooth_split_calc1}\]
\[\tikzfig{smooth_split_calc2}\]
\[\tikzfig{smooth_split_calc3}\]
and afterwards we still have coefficients from the irreps of $\mathbb{C} G$. In the case when $\pi = 1$, we are done. Otherwise, we have detected quasiparticles of type $(e,\pi)$ and $(e,\pi')$ at two vertices. In this case, we appeal to e.g. \cite{BKKK, Cirac}, which claim that one can modify these quasiparticles deterministically using ribbon operators and quantum circuitry. The procedure should be similar to initialising a fresh patch in the zero logical state, but we do not give any details ourselves. Then we have the desired result.
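As a sanity check of the projector formula (a toy illustration, not part of the surgery protocol), one can verify numerically for the abelian example $G=\mathbb{Z}_3$, whose irreps are the characters $\chi_k(g)=e^{2\pi i kg/3}$, that the operators $P_\pi$ acting by the left regular representation are orthogonal idempotents resolving the identity:

```python
import numpy as np

n = 3  # G = Z_n; all irreps are 1-dimensional characters chi_k(g) = w^{k g}
w = np.exp(2j * np.pi / n)

def L_reg(g):
    """Left regular representation: L_g |h> = |g + h mod n>."""
    M = np.zeros((n, n))
    for h in range(n):
        M[(g + h) % n, h] = 1.0
    return M

# P_pi = (dim/|G|) * sum_g Tr_pi(g^{-1}) L_g, with Tr_k(g^{-1}) = w^{-k g}
P = [sum(w**(-k * g) * L_reg(g) for g in range(n)) / n for k in range(n)]

idempotent = all(np.allclose(Pk @ Pk, Pk) for Pk in P)
orthogonal = np.allclose(P[0] @ P[1], np.zeros((n, n)))
complete = np.allclose(sum(P), np.eye(n))
print(idempotent, orthogonal, complete)   # all True by character orthogonality
```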
For merges, we start with a smooth merge, as again all outcomes are in the group basis. Recall that after generating fresh copies of $\mathbb{C} G$ in the states $\sum_{m\in G} m$, we have
\[\tikzfig{smooth_merge_project}\]
We then measure at sites which include the top and bottom faces, giving:
\[\tikzfig{smooth_merge_measure_1}\]
for some conjugacy classes ${\hbox{{$\mathcal C$}}}, {\hbox{{$\mathcal C$}}}'$. There are no factors of $\pi$ as the edges around each vertex already satisfy $A(v)|\psi\> = |\psi\>$. When ${\hbox{{$\mathcal C$}}} = {\hbox{{$\mathcal C$}}}' = \{e\}$, we may proceed, but otherwise we require a way of deterministically eliminating the quasiparticles detected at the top and bottom faces. Appealing to e.g. \cite{BKKK, Cirac} as earlier, we assume that this may be done, but do not give details. Alternatively one could try to `switch reference frames' in the manner of Pauli frames with qubit codes \cite{HFDM}, and redefine the Hamiltonian. The former method gives
\[\tikzfig{smooth_merge_measure_2}\]
Lastly, we measure the inner face, yielding
\[\tikzfig{smooth_merge_measure_3}\]
so $|j\>_L\otimes |k\>_L \mapsto \sum_{s\in {\hbox{{$\mathcal C$}}}''} \delta_{js,k} |js\>_L$, which is a direct generalisation of the result for $G = \mathbb{Z}_n$ in \cite{Cow2}: we now sum over the conjugacy class ${\hbox{{$\mathcal C$}}}''$, whereas in the $\mathbb{Z}_n$ case all conjugacy classes are singletons.
The rough merge works similarly, where instead of having quasiparticles of type $({\hbox{{$\mathcal C$}}},1)$ appearing at faces, we have quasiparticles of type $(e,\pi)$ at vertices.
\section{Introduction}
The Kitaev model is defined for a finite group $G$ \cite{Kit} with quasiparticles given by representations of the quantum double $D(G)$, and their dynamics described by intertwiners. In quantum computing, the quasiparticles correspond to measurement outcomes at sites on a lattice, and their dynamics correspond to linear maps on the data, with the aim of performing fault-tolerant quantum computation. The lattice can be any ciliated ribbon graph embedded on a surface \cite{Meu}, although throughout we will assume a square lattice on the plane for convenience. The Kitaev model generalises with $G$ replaced by a finite-dimensional semisimple Hopf algebra, and aspects of it work for a general finite-dimensional Hopf algebra. We refer to \cite{CowMa} for details of the relevant algebraic aspects of this theory, which applies in the bulk of the Kitaev model. We now extend this work with a study of the algebraic structure that underlies an approach to the treatment of boundaries.
The treatment of boundaries here originates in a more categorical point of view. In the original Kitaev model the relevant category that defines the `topological order' in condensed matter terms\cite{LK} is the category ${}_{D(G)}\mathcal{M}$ of $D(G)$-modules, which one can think of as an instance of the `dual' or `centre' $\hbox{{$\mathcal Z$}}({\hbox{{$\mathcal C$}}})$ construction\cite{Ma:rep}, where ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$ is the category of $G$-graded vector spaces. Levin-Wen `string-net' models \cite{LW} are a sort of generalisation of Kitaev models specified now by a unitary fusion category $\mathcal{C}$ with topological order $\hbox{{$\mathcal Z$}}(\mathcal{C})$, meaning that at every site on the lattice one has an object in $\hbox{{$\mathcal Z$}}(\mathcal{C})$, and now on a trivalent lattice. Computations correspond to morphisms in the same category.
A so-called gapped boundary condition of a string-net model preserves a finite energy gap between the vacuum and the lowest excited state(s), which is independent of system size. Such boundary conditions are defined by module categories of the fusion category ${\hbox{{$\mathcal C$}}}$. By definition, a (right) ${\hbox{{$\mathcal C$}}}$-module means\cite{Os,KK} a category ${\hbox{{$\mathcal V$}}}$ equipped with a bifunctor ${\hbox{{$\mathcal V$}}} \times {\hbox{{$\mathcal C$}}} \rightarrow {\hbox{{$\mathcal V$}}}$ obeying coherence equations which are a polarised version of the properties of $\mathop{{\otimes}}: {\hbox{{$\mathcal C$}}}\times{\hbox{{$\mathcal C$}}}\to {\hbox{{$\mathcal C$}}}$ (in the same way that a right module of an algebra obeys a polarised version of the axioms for the product). One can also see a string-net model as a discretised quantum field theory \cite{Kir2, Meu}, and indeed boundaries of a conformal field theory can also be similarly defined by module categories \cite{FS}. For our purposes, we care about \textit{indecomposable} module categories, that is module categories which are not equivalent to a direct sum of other module categories. Excitations on the boundary with condition $\mathcal{V}$ are then given by functors $F \in \mathrm{End}_{\hbox{{$\mathcal C$}}}(\mathcal{V})$ that commute with the ${\hbox{{$\mathcal C$}}}$ action\cite{KK}, beyond the vacuum state which is the identity functor $\mathrm{id}_{\mathcal{V}}$. More than just the boundary conditions above, we care about these excitations, and so $\mathrm{End}_{\hbox{{$\mathcal C$}}}(\mathcal{V})$ is the category of interest.
The Kitaev model is not exactly a string-net model (the lattice in our case will not even be trivalent) but closely related. In particular, it can be shown that indecomposable module categories for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, the category of $G$-graded vector spaces, are\cite{Os2} classified by subgroups $K\subseteq G$ and cocycles $\alpha\in H^2(K,\mathbb{C}^\times)$. We will stick to the trivial $\alpha$ case here, and the upshot is that the boundary conditions in the regular Kitaev model should be given by ${\hbox{{$\mathcal V$}}}={}_K\hbox{{$\mathcal M$}}^G$ the $G$-graded $K$-modules where $x\in K$ itself has grade $|x|=x\in G$. Then the excitations are governed by objects of $\mathrm{End}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}}) \simeq {}_K\hbox{{$\mathcal M$}}_K^G$, the category of $G$-graded bimodules over $K$. This is necessarily equivalent, by Tannaka-Krein reconstruction\cite{Ma:tan} to the category of modules ${}_{\Xi(R,K)}\mathcal{M}$ of a certain quasi-Hopf algebra $\Xi(R,K)$. Here $R\subseteq G$ is a choice of transversal so that every element of $G$ factorises uniquely as $RK$, but the algebra of $\Xi(R,K)$ depends only on the choice of subgroup $K$ and not on the transversal $R$. This is the algebra which we use to define measurement protocols on the boundaries of the Kitaev model. One also has that $\hbox{{$\mathcal Z$}}({}_\Xi\hbox{{$\mathcal M$}})\simeq\hbox{{$\mathcal Z$}}(\hbox{{$\mathcal M$}}^G)\simeq{}_{D(G)}\hbox{{$\mathcal M$}}$ as braided monoidal categories.
Categorical aspects will be deferred to Section~\ref{sec:cat_just}, our main focus prior to that being on a full understanding of the algebra $\Xi$, its properties and aspects of the physics. In fact, lattice boundaries of Kitaev models based on subgroups have been defined and characterised previously, see \cite{BSW, Bom}, with \cite{CCW} giving an overview for computational purposes, and we build on these works. We begin in Section~\ref{sec:bulk} with a recap of the algebras and actions involved in the bulk of the lattice model, then in Section~\ref{sec:gap} we accommodate the boundary conditions in a manner which works with features important for quantum computation, such as sites, quasiparticle projectors and ribbon operators. These sections mostly cover well-trodden ground, although we correct errors and clarify some algebraic subtleties which appear to have gone unnoticed in previous works. In particular, we obtain formulae for the decomposition of bulk irreducible representations of $D(G)$ into $\Xi$-representations which we believe to be new. Key to our results here is an observation that in fact $\Xi(R,K)\subseteq D(G)$ as algebras, which gives a much more direct route than previously to an adjunction between $\Xi(R,K)$-modules and $D(G)$-modules describing how excitations pass between the bulk and boundary. This is important for the physical picture\cite{CCW} and previously was attributed to an adjunction between ${}_{D(G)}\hbox{{$\mathcal M$}}$ and ${}_K\hbox{{$\mathcal M$}}_K^G$ in \cite{PS2}.
In Section~\ref{sec:patches}, as an application of our explicit description of boundaries, we generalise the quantum computational model called \textit{lattice surgery} \cite{HFDM,Cow2} to the nonabelian group case. We find that for every finite group $G$ one can simulate the group algebra $\mathbb{C} G$ and its dual $\mathbb{C}(G)$ on a lattice patch with `rough' and `smooth' boundaries. This is an alternative model of fault-tolerant computation to the well-known method of braiding anyons or defects \cite{Kit,FMMC}, although we do not know whether there are choices of group such that lattice surgery is natively universal without state distillation.
In Section~\ref{sec:quasi}, we look at $\Xi(R,K)$ as a quasi-Hopf algebra in somewhat more detail than we have found elsewhere. As well as the quasi-bialgebra structure, we provide and verify the antipode for any choice of transversal $R$ for which right-inversion is bijective. This case is in line with \cite{Nat}, but we will also consider antipodes more generally. We then show that an obvious $*$-algebra structure on $\Xi$ meets all the axioms of a strong $*$-quasi-Hopf algebra in the sense of \cite{BegMa:bar} coming out of the theory of bar categories. The key ingredient here is a somewhat nontrivial map that relates the complex conjugate of the $\Xi$-module $V\mathop{{\otimes}} W$ to those of $W$ and $V$. We also give an extended series of examples, including one related to the octonions.
Lastly, in Section~\ref{sec:cat_just}, we connect the algebraic notions up to the abstract description of boundary conditions via module categories and use this to obtain more results about $\Xi(R,K)$. We first calculate the relevant categorical equivalence ${}_K\hbox{{$\mathcal M$}}_K^G \simeq {}_{\Xi(R,K)}\mathcal{M}$ concretely, deriving the quasi-bialgebra structure of $\Xi(R,K)$ precisely such that this works.
Since the left hand side is independent of $R$, we deduce by Tannaka-Krein arguments that changing $R$ changes $\Xi(R,K)$ by a Drinfeld cochain twist and we find this cochain as a main result of the section. This is important as Drinfeld twists do not change the category of modules up to equivalence, so such aspects of the physics do not depend on $R$. Twisting arguments then imply that we have an antipode more generally for any $R$. We also look at ${\hbox{{$\mathcal V$}}} = {}_K\hbox{{$\mathcal M$}}^G$ as a module category for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$. Section~\ref{sec:rem} provides some concluding remarks relating to generalisations of the boundaries to models based on other Hopf algebras \cite{BMCA}.
\subsection*{Acknowledgements}
The first author thanks Stefano Gogioso for useful discussions regarding nonabelian lattice surgery as a model for computation. Thanks also to Paddy Gray \& Kathryn Pennel for their hospitality while some of this paper was written and to Simon Harrison for the Wolfson Harrison UK Research Council Quantum Foundation Scholarship, which made this work possible. The second author was on sabbatical at Cambridge Quantum Computing and we thank members of the team there.
\section{Preliminaries: recap of the Kitaev model in the bulk}\label{sec:bulk}
We begin with the model in the bulk. This is largely a recap of e.g. \cite{Kit, CowMa}.
\subsection{Quantum double}\label{sec:double}Let $G$ be a finite group with identity $e$, then $\mathbb{C} G$ is the group Hopf algebra with basis $G$. Multiplication is extended linearly, and $\mathbb{C} G$ has comultiplication $\Delta h = h \otimes h$ and counit ${\epsilon} h = 1$ on basis elements $h\in G$. The antipode is given by $Sh = h^{-1}$. $\mathbb{C} G$ is a Hopf $*$-algebra with $h^* = h^{-1}$ extended antilinearly. Its dual Hopf algebra $\mathbb{C}(G)$ of functions on $G$ has basis of $\delta$-functions $\{\delta_g\}$ with $\Delta\delta_g=\sum_h \delta_h\mathop{{\otimes}}\delta_{h^{-1}g}$, ${\epsilon} \delta_g=\delta_{g,e}$ and $S\delta_g=\delta_{g^{-1}}$ for the Hopf algebra structure, and $\delta_g^* = \delta_{g}$ for all $g\in G$. The normalised integral elements \textit{in} $\mathbb{C} G$ and $\mathbb{C}(G)$ are
\[ \Lambda_{\mathbb{C} G}={1\over |G|}\sum_{h\in G} h\in \mathbb{C} G,\quad \Lambda_{\mathbb{C}(G)}=\delta_e\in \mathbb{C}(G).\]
The integrals \textit{on} $\mathbb{C} G$ and $\mathbb{C}(G)$ are
\[ \int h = \delta_{h,e}, \quad \int \delta_g = 1\]
normalised so that $\int 1 = 1$ for $\mathbb{C} G$ and $\int 1 = |G|$ for $\mathbb{C}(G)$.
For the Drinfeld double we have $D(G)=\mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$ as in \cite{Ma:book}, with $\mathbb{C} G$ and $\mathbb{C}(G)$ sub-Hopf algebras and the cross relations $ h\delta_g =\delta_{hgh^{-1}} h$ (a semidirect product). The Hopf algebra antipode is $S(\delta_gh)=\delta_{h^{-1}g^{-1}h} h^{-1}$, and over $\mathbb{C}$ we have a Hopf $*$-algebra with $(\delta_g h)^* = \delta_{h^{-1}gh} h^{-1}$. There is also a quasitriangular structure which in subalgebra notation is
\begin{equation}\label{RDG} \hbox{{$\mathcal R$}}=\sum_{h\in G} \delta_h\mathop{{\otimes}} h\in D(G) \otimes D(G).\end{equation}
If we want to be totally explicit we can build $D(G)$ on either the vector space $\mathbb{C}(G)\mathop{{\otimes}} \mathbb{C} G$ or on the vector space $\mathbb{C} G\mathop{{\otimes}}\mathbb{C}(G)$. In fact the latter is more natural but we follow the conventions in \cite{Ma:book,CowMa} and use the former. Then one can say the above more explicitly as \[(\delta_g\mathop{{\otimes}} h)(\delta_f\mathop{{\otimes}} k)=\delta_g\delta_{hfh^{-1}}\mathop{{\otimes}} hk=\delta_{g,hfh^{-1}}\delta_g\mathop{{\otimes}} hk,\quad S(\delta_g\mathop{{\otimes}} h)=\delta_{h^{-1}g^{-1}h} \mathop{{\otimes}} h^{-1}\]
etc. for the operations on the underlying vector space.
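For concreteness, the product above can be implemented directly for a small group; the following sketch is our own (all helper names are ours, not from the paper) and verifies the cross relation $h\delta_g=\delta_{hgh^{-1}}h$ for $G=S_3$ realised as permutations of $\{0,1,2\}$:

```python
# A minimal sketch of our own: the product of D(G) on the basis
# {delta_g (x) h} for G = S_3.  We check the cross relation
# h delta_g = delta_{h g h^{-1}} h numerically.
from itertools import permutations

G = list(permutations(range(3)))                   # S_3
def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):    return tuple(sorted(range(3), key=lambda i: a[i]))
e = (0, 1, 2)

def dg_mul(A, B):
    """Multiply dicts {(g,h): coeff} on the basis delta_g (x) h using
       (delta_g (x) h)(delta_f (x) k) = delta_{g, h f h^{-1}} delta_g (x) hk."""
    out = {}
    for (g, h), ca in A.items():
        for (f, k), cb in B.items():
            if g == mul(mul(h, f), inv(h)):
                key = (g, mul(h, k))
                out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c}

def delta(g): return {(g, e): 1}                   # delta_g = delta_g (x) e
def grp(h):   return {(g, h): 1 for g in G}        # h = sum_g delta_g (x) h

cross = all(dg_mul(grp(h), delta(g)) ==
            dg_mul(delta(mul(mul(h, g), inv(h))), grp(h))
            for g in G for h in G)
print(cross)  # True
```

The same dictionary representation extends to arbitrary elements of $D(G)$ by linearity.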
As a semidirect product, irreducible representations of $D(G)$ are given by standard theory as labelled by pairs $({\hbox{{$\mathcal C$}}},\pi)$ consisting of an orbit under the action (i.e. by a conjugacy class ${\hbox{{$\mathcal C$}}}\subset G$ in this case) and an irrep $\pi$ of the isotropy subgroup, in our case
\[ G^{c_0}=\{n\in G\ |\ nc_0 n^{-1}=c_0\}\]
of a fixed element $c_0\in{\hbox{{$\mathcal C$}}}$, i.e. the centraliser $C_G(c_0)$. The choice of $c_0$ does not change the isotropy group up to isomorphism but does change how it sits inside $G$. We also fix data $q_c\in G$ for each $c\in {\hbox{{$\mathcal C$}}}$ such that $c=q_cc_0q_c^{-1}$ with $q_{c_0}=e$ and define from this a cocycle $\zeta_c(h)=q^{-1}_{hch^{-1}}hq_c$ as a map $\zeta: {\hbox{{$\mathcal C$}}}\times G\to G^{c_0}$. The associated irreducible representation is then
\[ W_{{\hbox{{$\mathcal C$}}},\pi}=\mathbb{C} {\hbox{{$\mathcal C$}}}\mathop{{\otimes}} W_\pi,\quad \delta_g.(c\mathop{{\otimes}} w)=\delta_{g,c}c\mathop{{\otimes}} w,\quad h.(c\mathop{{\otimes}} w)=hch^{-1}\mathop{{\otimes}} \zeta_c(h).w \]
for all $w\in W_\pi$, the carrier space of $\pi$. This constructs all irreps of $D(G)$ and, over $\mathbb{C}$, these are unitary in a Hopf $*$-algebra sense if $\pi$ is unitary. Moreover, $D(G)$ is semisimple and hence has a block decomposition $D(G){\cong}\oplus_{{\hbox{{$\mathcal C$}}},\pi} \mathrm{ End}(W_{{\hbox{{$\mathcal C$}}},\pi})$ given by a complete orthogonal set of self-adjoint central idempotents
\begin{equation}\label{Dproj}P_{({\hbox{{$\mathcal C$}}},\pi)}={{\rm dim}(W_\pi)\over |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}}\sum_{n\in G^{c_0}}\mathrm{ Tr}_\pi(n^{-1})\delta_{c}\mathop{{\otimes}} q_c nq_c^{-1}.\end{equation}
We refer to \cite{CowMa} for more details and proofs. Acting on a state, this will become a projection operator that determines if a quasiparticle of type ${\hbox{{$\mathcal C$}}},\pi$ is present. Chargeons are quasiparticles with ${\hbox{{$\mathcal C$}}}=\{e\}$ and $\pi$ an irrep of $G$, and fluxions are quasiparticles with ${\hbox{{$\mathcal C$}}}$ a conjugacy class and $\pi=1$, the trivial representation.
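As a numerical sanity check of our own (not from the paper), the projector formula simplifies in the abelian case $G=\mathbb{Z}_n$: every conjugacy class is a singleton $\{c\}$, $G^{c_0}=G$, and $P_{(c,k)}={1\over n}\sum_m \omega^{-km}\,\delta_c\mathop{{\otimes}} m$ with $\omega=e^{2\pi\imath/n}$. The sketch below (helper names ours) verifies idempotency and completeness:

```python
# Our own check: projectors of D(Z_n) in the abelian specialisation of the
# formula above.  We verify P^2 = P for each (c,k) and that the projectors
# sum to the identity sum_g delta_g (x) 0.
import cmath

n = 6
w = cmath.exp(2j * cmath.pi / n)

def proj(c, k):
    return {(c, m): w ** (-k * m) / n for m in range(n)}

def dg_mul(A, B):
    # abelian case: (delta_g (x) h)(delta_f (x) m) = delta_{g,f} delta_g (x) h+m
    out = {}
    for (g, h), ca in A.items():
        for (f, m), cb in B.items():
            if g == f:
                key = (g, (h + m) % n)
                out[key] = out.get(key, 0) + ca * cb
    return out

def close(A, B):
    return all(abs(A.get(x, 0) - B.get(x, 0)) < 1e-9 for x in set(A) | set(B))

idempotent = all(close(dg_mul(proj(c, k), proj(c, k)), proj(c, k))
                 for c in range(n) for k in range(n))

total = {}
for c in range(n):
    for k in range(n):
        for key, val in proj(c, k).items():
            total[key] = total.get(key, 0) + val
identity = {(g, 0): 1 for g in range(n)}           # sum_g delta_g (x) e
print(idempotent, close(total, identity))  # True True
```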
\subsection{Bulk lattice model}\label{sec:lattice}
Having established the prerequisite algebra, we move on to the lattice model itself. This first part is largely a recap of \cite{Kit, CowMa} and we use the notations of the latter. Let $\Sigma = \Sigma(V, E, P)$ be a square lattice viewed as a directed graph with its usual (cartesian) orientation, vertices $V$, directed edges $E$ and faces $P$. The Hilbert space $\hbox{{$\mathcal H$}}$ will be a tensor product of vector spaces with one copy of $\mathbb{C} G$ at each arrow in $E$. We have group elements for the basis of each copy. Next, to each adjacent pair of vertex $v$ and face $p$ we associate a site $s = (v, p)$, or equivalently a line (the `cilium') from $p$ to $v$. We then define an action of $\mathbb{C} G$ and $\mathbb{C}(G)$ at each site by
\[ \includegraphics[scale=0.7]{Gaction.pdf}\]
Here $h\in \mathbb{C} G$, $a\in \mathbb{C}(G)$ and $g^1,\cdots,g^4$ denote independent elements of $G$ (not powers). Observe that the vertex action is invariant under the location of $p$ relative to its adjacent $v$, so the red dashed line has been omitted.
\begin{lemma}\label{lemDGrep} \cite{Kit,CowMa} $h{\triangleright}$ and $a{\triangleright}$ for all $h\in G$ and $a\in \mathbb{C}(G)$ define a representation of $D(G)$ on $\hbox{{$\mathcal H$}}$ associated to each site $(v,p)$.
\end{lemma}
We next define
\[ A(v):=\Lambda_{\mathbb{C} G}{\triangleright}={1\over |G|}\sum_{h\in G}h{\triangleright},\quad B(p):=\Lambda_{\mathbb{C}(G)}{\triangleright}=\delta_e{\triangleright}\]
where $\delta_{e}(g^1g^2g^3g^4)=1$ iff $g^1g^2g^3g^4=e$, which holds iff $(g^4)^{-1}=g^1g^2g^3$, i.e. iff $g^4g^1g^2g^3=e$. Hence $\delta_{e}(g^1g^2g^3g^4)=\delta_{e}(g^4g^1g^2g^3)$ is invariant under cyclic rotations, so $\Lambda_{\mathbb{C}(G)}{\triangleright}$ computed at site $(v,p)$ does not depend on the location of $v$ on the boundary of $p$. Moreover,
\[ A(v)B(p)=|G|^{-1}\sum_h h\delta_e{\triangleright}=|G|^{-1}\sum_h \delta_{heh^{-1}}h{\triangleright}=|G|^{-1}\sum_h \delta_{e}h{\triangleright}=B(p)A(v)\]
if $v$ is a vertex on the boundary of $p$ by Lemma~\ref{lemDGrep}, and more trivially if not. We also have the rest of
\[ A(v)^2=A(v),\quad B(p)^2=B(p),\quad [A(v),A(v')]=[B(p),B(p')]=[A(v),B(p)]=0\]
for all $v\ne v'$ and $p\ne p'$, as easily checked. We then define the Hamiltonian
\[ H=\sum_v (1-A(v)) + \sum_p (1-B(p))\]
and the space of vacuum states
\[ \hbox{{$\mathcal H$}}_{\rm vac}=\{|\psi\>\in\hbox{{$\mathcal H$}}\ |\ A(v)|\psi\>=B(p)|\psi\>=|\psi\>,\quad \forall v,p\}.\]
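For intuition, when $G=\mathbb{Z}_2$ this Hamiltonian is the toric code, and on a torus $\hbox{{$\mathcal H$}}_{\rm vac}$ is 4-dimensional regardless of lattice size. The following numerical sketch is our own (not from the paper; all function names are ours) and checks this on the smallest case, a $2\times2$ periodic lattice with 8 edges:

```python
# Our own sketch: G = Z_2 on a 2x2 torus (8 qubit edges).  We build the
# commuting projectors A(v) (X on the star of v) and B(p) (Z around p) and
# compute dim H_vac as the trace of their product.
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def op_on(pauli, edges, n=8):
    """Tensor product acting as `pauli` on the given edge factors."""
    mats = [I2] * n
    for ed in edges:
        mats[ed] = pauli
    return reduce(np.kron, mats)

# edge indexing on the torus: horizontal edge h(i,j), vertical edge v(i,j)
def h(i, j): return 2 * ((i % 2) * 2 + (j % 2))
def v(i, j): return 2 * ((i % 2) * 2 + (j % 2)) + 1

projs = []
for i in range(2):
    for j in range(2):
        star = [h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)]  # edges at vertex
        plaq = [h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)]  # edges around face
        projs.append((np.eye(256) + op_on(X, star)) / 2)     # A(v)
        projs.append((np.eye(256) + op_on(Z, plaq)) / 2)     # B(p)

dim_vac = round(np.trace(reduce(np.matmul, projs)).real)
print(dim_vac)  # 4
```

Since the projectors commute, the trace of their product is exactly $\dim\hbox{{$\mathcal H$}}_{\rm vac}$.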
Quasiparticles in Kitaev models are labelled by representations of $D(G)$ occupying a given site $(v,p)$, which take the system out of the vacuum. Detection of a quasiparticle is via a {\em projective measurement} of the operator $\sum_{{\hbox{{$\mathcal C$}}}, \pi} p_{{\hbox{{$\mathcal C$}}},\pi} P_{\mathcal{C}, \pi}$ acting at each site on the lattice for distinct coefficients $p_{{\hbox{{$\mathcal C$}}},\pi} \in \mathbb{R}$. By definition, this is a process which yields the classical value $p_{{\hbox{{$\mathcal C$}}},\pi}$ with a probability given by the likelihood of the state prior to the measurement being in the subspace in the image of $P_{\mathcal{C},\pi}$, and in so doing performs the corresponding action of the projector $P_{\mathcal{C}, \pi}$ at the site. The projector $P_{e,1}$ corresponds to the vacuum quasiparticle.
In computing terms, this system of measurements encodes a logical Hilbert subspace, which we will always take to be the vacuum space $\hbox{{$\mathcal H$}}_{\rm vac}$, within the larger physical Hilbert space given by the lattice; this subspace is dependent on the topology of the surface that the lattice is embedded in, but not the size of the lattice. For example, there is a convenient closed-form expression for the dimension of $\hbox{{$\mathcal H$}}_{\rm vac}$ when $\Sigma$ occupies a closed, orientable surface \cite{Cui}. Computation can then be performed on states in the logical subspace in a fault-tolerant manner, with unwanted excitations constituting detectable errors.
In the interest of brevity, we forgo a detailed exposition of such measurements, ribbon operators and fault-tolerant quantum computation on the lattice. The interested reader can learn about these in e.g. \cite{Kit,Bom,CCW,CowMa}. We do give a brief recap of ribbon operators, although without much rigour, as these will be useful later.
\begin{definition}\rm \label{def:ribbon}
A ribbon $\xi$ is a strip of face width that connects two sites $s_0 = (v_0,p_0)$ and $s_1 = (v_1,p_1)$ on the lattice. A ribbon operator $F^{h,g}_\xi$ acts on the vector spaces associated to the edges along the path of the ribbon, as shown in Fig~\ref{figribbon}. We call this basis of ribbon operators labelled by $h$ and $g$ the \textit{group basis}.
\end{definition}
\begin{figure}
\[ \includegraphics[scale=0.8]{Fig1.pdf}\]
\caption{\label{figribbon} Example of a ribbon operator for a ribbon $\xi$ from $s_0=(v_0,p_0)$ to $s_1=(v_1,p_1)$.}
\end{figure}
\begin{lemma}\label{lem:concat}
If $\xi'$ is a ribbon concatenated with $\xi$, then the associated ribbon operators in the group basis satisfy
\[F_{\xi'\circ\xi}^{h,g}=\sum_{f\in G}F_{\xi'}^{f^{-1}hf,f^{-1}g}\circ F_\xi^{h,f}, \quad F^{h,g}_\xi \circ F^{h',g'}_\xi=\delta_{g,g'}F_\xi^{hh',g}.\]
\end{lemma}
The first identity shows the role of the comultiplication of $D(G)^*$,
\[\Delta(h\delta_g) = \sum_{f\in G} h\delta_f\otimes f^{-1}hf\delta_{f^{-1}g}.\]
using subalgebra notation, while the second identity implies that
\[(F_\xi^{h,g})^\dagger = F_\xi^{h^{-1},g}.\]
\begin{lemma}\label{ribcom}\cite{Kit} Let $\xi$ be a ribbon with the orientation as shown in Figure~\ref{figribbon} between sites $s_0=(v_0,p_0)$ and $s_1=(v_1,p_1)$. Then
\[ [F_\xi^{h,g},f{\triangleright}_v]=0,\quad [F_\xi^{h,g},\delta_e{\triangleright}_p]=0,\]
for all $v \notin \{v_0, v_1\}$ and $p \notin \{p_0, p_1\}$.
\[ f{\triangleright}_{s_0}\circ F_\xi^{h,g}=F_\xi^{fhf^{-1},fg} \circ f{\triangleright}_{s_0},\quad \delta_f{\triangleright}_{s_0}\circ F_\xi^{h,g}=F_\xi^{h,g} \circ\delta_{h^{-1}f}{\triangleright}_{s_0},\]
\[ f{\triangleright}_{s_1}\circ F_\xi^{h,g}=F_\xi^{h,gf^{-1}} \circ f{\triangleright}_{s_1},\quad \delta_f{\triangleright}_{s_1}\circ F_\xi^{h,g}=F_\xi^{h,g}\circ \delta_{fg^{-1}hg}{\triangleright}_{s_1}\]
for all ribbons where $s_0,s_1$ are disjoint, i.e. when $s_0$ and $s_1$ share neither vertices nor faces. The subscript notation $f{\triangleright}_v$ means the local action of $f\in \mathbb{C} G$ at vertex $v$, and the dual for $\delta_f{\triangleright}_s$ at a site $s$.
\end{lemma}
We call the above lemma the \textit{equivariance property} of ribbon operators. Such ribbon operators may be deformed according to a sort of discrete isotopy, so long as the endpoints remain the same. We formalised ribbon operators as left and right module maps in \cite{CowMa}, but skim over any further details here. The physical interpretation of ribbon operators is that they create, move and annihilate quasiparticles.
\begin{lemma}\cite{Kit}\label{lem:ribs_only}
Let $s_0$, $s_1$ be two sites on the lattice. The only operators in ${\rm End}(\hbox{{$\mathcal H$}})$ which change the states at these sites, and therefore create quasiparticles and change the distribution of measurement outcomes, but leave the state in vacuum elsewhere, are ribbon operators.
\end{lemma}
This lemma is somewhat hard to prove rigorously but a proof was sketched in \cite{CowMa}. Next, there is an alternate basis for these ribbon operators in which the physical interpretation becomes more obvious. The \textit{quasiparticle basis} has elements
\begin{equation}F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,v} = \sum_{n\in G^{c_0}} \pi(n^{-1})_{ji} F_\xi^{c, q_c n q_d^{-1}},\end{equation}
where ${\hbox{{$\mathcal C$}}}$ is a conjugacy class, $\pi$ is an irrep of the associated isotropy subgroup $G^{c_0}$ and $u = (c,i)$, $v = (d,j)$ label basis elements of $W_{{\hbox{{$\mathcal C$}}},\pi}$ in which $c,d \in {\hbox{{$\mathcal C$}}}$ and $i,j$ label a basis of $W_\pi$. This amounts to a nonabelian Fourier transform of the space of ribbons (that is, the Peter-Weyl isomorphism of $D(G)$) and has inverse
\begin{equation}F_\xi^{h,g} = \sum_{{\hbox{{$\mathcal C$}}},\pi\in \hat{G^{c_0}}}\sum_{c\in{\hbox{{$\mathcal C$}}}}\delta_{h,gcg^{-1}} {{\rm dim}(W_\pi)\over |G^{c_0}|}\sum_{i,j = 1}^{{\rm dim}(W_\pi)}\pi(q^{-1}_{gcg^{-1}}g q_c)_{ij}F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;a,b},\end{equation}
where $a = (gcg^{-1},i)$ and $b=(c,j)$. This reduces in the chargeon sector to the special cases
\begin{equation}\label{chargeon_ribbons}F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n}\end{equation}
and
\begin{equation}F_\xi^{e,g} = \sum_{\pi\in \hat{G}}{{\rm dim}(W_\pi)\over |G|}\sum_{i,j = 1}^{{\rm dim}(W_\pi)}\pi(g)_{ij}F_\xi^{'e,\pi;i,j}.\end{equation}
Meanwhile, in the fluxion sector we have
\begin{equation}\label{fluxion_ribbons}F_\xi^{'{\hbox{{$\mathcal C$}}},1;c,d}=\sum_{n\in G^{c_0}}F_\xi^{c,q_c nq_d^{-1}}\end{equation}
but there is no inverse in the fluxion sector. This is because the chargeon sector corresponds to the irreps of $\mathbb{C} G$, itself a semisimple algebra; the fluxion sector has no such correspondence.
If $G$ is abelian then the irreps $\pi$ are 1-dimensional and we do not have to worry about the indices for the basis of $W_\pi$; this then looks like a more usual Fourier transform.
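For nonabelian $G$ the inversion can still be checked numerically. The following sketch is our own: it uses $G=S_3$ with the 2-dimensional irrep $\pi(u)=\sigma_3$, $\pi(v)=(\sqrt{3}\sigma_1-\sigma_3)/2$, and includes the Peter--Weyl normalisation ${\rm dim}(W_\pi)/|G|$ in the inverse transform, which reduces the composite of the two transforms to $\sum_\pi {{\rm dim}(W_\pi)\over|G|}\mathrm{Tr}(\pi(g)\pi(n^{-1}))=\delta_{g,n}$:

```python
# Our own check of the chargeon-sector Fourier inversion for G = S_3.
import numpy as np
from itertools import permutations, product

def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):    return tuple(sorted(range(3), key=lambda i: a[i]))
e = (0, 1, 2); u = (1, 0, 2); v = (0, 2, 1)
G = list(permutations(range(3)))

s1 = np.array([[0., 1.], [1., 0.]]); s3 = np.diag([1., -1.])
gen = {u: s3, v: (np.sqrt(3) * s1 - s3) / 2}     # pi(u), pi(v) as in the text

def rep2(g):
    """2-dim irrep: evaluate on g by writing g as a word in u, v."""
    for k in range(4):
        for word in product([u, v], repeat=k):
            el, M = e, np.eye(2)
            for w in word:
                el, M = mul(el, w), M @ gen[w]
            if el == g:
                return M

def sgn(g):
    return 1 if sum(g[i] > g[j] for i in range(3) for j in range(i + 1, 3)) % 2 == 0 else -1

irreps = [(1, lambda g: np.eye(1)),              # trivial
          (1, lambda g: sgn(g) * np.eye(1)),     # sign
          (2, rep2)]                             # 2-dimensional

ok = all(abs(sum(d / 6 * np.trace(rho(g) @ rho(inv(n))) for d, rho in irreps)
             - (1 if g == n else 0)) < 1e-9
         for g in G for n in G)
print(ok)  # True
```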
\begin{lemma}\label{lem:quasi_basis}
If $\xi'$ is a ribbon concatenated with $\xi$, then the associated ribbon operators in the quasiparticle basis satisfy
\[ F_{\xi'\circ\xi}^{'{\hbox{{$\mathcal C$}}},\pi;u,v}=\sum_w F_{\xi'}^{'{\hbox{{$\mathcal C$}}},\pi;w,v}\circ F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,w}\]
and are such that the nonabelian Fourier transform takes convolution to multiplication and vice versa, as it does in the abelian case.
\end{lemma}
In particular, we have the \textit{ribbon trace operators}, $W^{{\hbox{{$\mathcal C$}}},\pi}_\xi := \sum_u F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,u}$. Such ribbon trace operators create exactly quasiparticles of the type ${\hbox{{$\mathcal C$}}},\pi$ from the vacuum, meaning that
\[P_{({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>}{\triangleleft}_{s_1}P_{({\hbox{{$\mathcal C$}}},\pi)}.\]
We refer to \cite{CowMa} for more details and proofs of the above.
\begin{example}\rm \label{exDS3} Our go-to example for our expositions will be $G=S_3$ generated by transpositions $u=(12), v=(23)$ with $w=(13)=uvu=vuv$. There are then 8 irreducible representations of $D(S_3)$ according to the choices ${\hbox{{$\mathcal C$}}}_0=\{e\}$, ${\hbox{{$\mathcal C$}}}_1=\{u,v,w\}$, ${\hbox{{$\mathcal C$}}}_2=\{uv,vu\}$ for which we pick representatives $c_0=e$, $q_e=e$, $c_1=u$, $q_u=e$, $q_v=w$, $q_w=v$ and $c_2=uv$ with $q_{uv}=e,q_{vu}=v$ (with the $c_i$ in the role of $c_0$ in the general theory). Here $G^{c_0}=S_3$ with 3 representations $\pi=$ trivial, sign and $W_2$ the 2-dimensional one given by (say) $\pi(u)=\sigma_3, \pi(v)=(\sqrt{3}\sigma_1-\sigma_3)/2$, $G^{c_1}=\{e,u\}=\mathbb{Z}_2$ with $\pi(u)=\pm1$ and $G^{c_2}=\{e,uv,vu\}=\mathbb{Z}_3$ with $\pi(uv)=1,\omega,\omega^2$ for $\omega=e^{2\pi\imath\over 3}$. See \cite{CowMa} for details and calculations of the associated projectors and some $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$ operators.
\end{example}
\section{Gapped Boundaries}\label{sec:gap}
While $D(G)$ is the relevant algebra for the bulk of the model, our focus is on the boundaries. For these, we require a different class of algebras.
\subsection{The boundary subalgebra $\Xi(R,K)$}\label{sec:xi}
Let $K\subseteq G$ be a subgroup of a finite group $G$ and $G/K=\{gK\ |\ g\in G\}$ be the set of left cosets. It is not necessary in this section, but convenient, to fix a representative $r$ for each coset and let $R\subseteq G$ be the set of these, so there is a bijection between $R$ and $G/K$ whereby $r\leftrightarrow rK$. We assume that $e\in R$ and call such a subset (or section of the map $G\to G/K$) a {\em transversal}. Every element of $G$ factorises uniquely as $rx$ for $r\in R$ and $x\in K$, giving a coordinatisation of $G$ which we will use. Next, as we quotiented by $K$ from the right, we still have an action of $K$ from the left on $G/K$, which we denote ${\triangleright}$. By the above bijection, this equivalently means an action ${\triangleright}:K\times R\to R$ on $R$ which in terms of the factorisation is determined by $xry=(x{\triangleright} r)y'$, where we refactorise $xry$ in the form $RK$ for some $y'\in K$. There is much more information in this factorisation, as we will see in Section~\ref{sec:quasi}, but this action is all we need for now. Also note that we have chosen to work with left cosets so as to be consistent with the literature \cite{CCW,BSW}, but one could equally choose a right coset factorisation to build a class of algebras similar to those in \cite{KM2}. We consider the algebra $\mathbb{C}(G/K){>\!\!\!\triangleleft} \mathbb{C} K$ as the cross product by the above action. Using our coordinatisation, this becomes the following algebra.
\begin{definition}\label{defXi} $\Xi(R,K)=\mathbb{C}(R){>\!\!\!\triangleleft} \mathbb{C} K$ is generated by $\mathbb{C}(R)$ and $\mathbb{C} K$ with cross relations $x\delta_r=\delta_{x{\triangleright} r} x$. Over $\mathbb{C}$, this is a $*$-algebra with $(\delta_r x)^*=x^{-1}\delta_r=\delta_{x^{-1}{\triangleright} r}x^{-1}$.
\end{definition}
If we choose a different transversal $R$ then the algebra does not change up to an isomorphism which maps the $\delta$-functions between the corresponding choices of representative. Of relevance to the applications, we also have:
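The coordinatisation above can be computed directly for a small example. The following sketch is ours (all helper names hypothetical), with $G=S_3$ as permutations of $\{0,1,2\}$ and $K=\{e,u\}$ for a transposition $u$:

```python
# Our own illustration: build a transversal R of G/K with e in R, check the
# unique factorisation G = RK, and verify that x r = (x |> r) x' defines an
# action of K on R.
from itertools import permutations

def mul(a, b): return tuple(a[b[i]] for i in range(3))
e = (0, 1, 2); u = (1, 0, 2)
G = list(permutations(range(3)))
K = [e, u]

R = []                                   # greedy transversal; e comes first
for g in G:
    if all(mul(r, x) != g for r in R for x in K):
        R.append(g)

fact = {mul(r, x): (r, x) for r in R for x in K}
assert len(fact) == len(G)               # unique factorisation g = r x

def act(x, r):                           # x |> r, read off from x r = (x |> r) x'
    return fact[mul(x, r)][0]

is_action = all(act(mul(x, y), r) == act(x, act(y, r))
                for x in K for y in K for r in R)
print(len(R), R[0] == e, is_action)  # 3 True True
```

Here `fact` is exactly the bijection $R\times K\to G$, and `act` reads off the $R$-part after refactorising.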
\begin{lemma} $\Xi(R,K)$ has the `integral element'
\[\Lambda:=\Lambda_{\mathbb{C}(R)} \otimes \Lambda_{\mathbb{C} K} = \delta_e \frac{1}{|K|}\sum_{x\in K}x\]
characterised by $\xi\Lambda={\epsilon}(\xi)\Lambda=\Lambda\xi$ for all $\xi\in \Xi$, and ${\epsilon}(\Lambda)=1$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad } We check that
\begin{align*}
\xi\Lambda& = (\delta_s y)(\delta_e\frac{1}{|K|}\sum_{x\in K}x) = \delta_{s,y{\triangleright} e}\delta_s\frac{1}{|K|}\sum_{x\in K}yx= \delta_{s,e}\delta_e \frac{1}{|K|}\sum_{x\in K}x\\
&= {\epsilon}(\xi)\Lambda = \frac{1}{|K|}\sum_{x\in K}\delta_{e,x{\triangleright} s}\delta_e xy = \frac{1}{|K|}\sum_{x\in K}\delta_{e,s}\delta_e x = \Lambda\xi.
\end{align*}
And clearly, ${\epsilon}(\Lambda) = \delta_{e,e} {|K|\over |K|} = 1$.
\endproof
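The absorption property can also be verified numerically; the following sketch of ours continues the $S_3$, $K=\{e,u\}$ example (helper names ours) and checks $\xi\Lambda={\epsilon}(\xi)\Lambda=\Lambda\xi$ on every basis element $\xi=\delta_r x$:

```python
# Our own numerical check of the integral property of Lambda in Xi(R,K),
# with product (delta_r (x) x)(delta_s (x) y) = delta_{r, x|>s} delta_r (x) xy.
from itertools import permutations
from fractions import Fraction

def mul(a, b): return tuple(a[b[i]] for i in range(3))
e = (0, 1, 2); u = (1, 0, 2)
G = list(permutations(range(3)))
K = [e, u]
R = []
for g in G:
    if all(mul(r, x) != g for r in R for x in K):
        R.append(g)
fact = {mul(r, x): (r, x) for r in R for x in K}
def act(x, r): return fact[mul(x, r)][0]

def xi_mul(A, B):
    """Multiply dicts {(r,x): coeff} in Xi(R,K)."""
    out = {}
    for (r, x), ca in A.items():
        for (s, y), cb in B.items():
            if r == act(x, s):
                key = (r, mul(x, y))
                out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c}

Lam = {(e, x): Fraction(1, len(K)) for x in K}   # delta_e (x) (1/|K|) sum_x x

# eps(delta_r x) = delta_{r,e}, so both products should give Lam iff r = e
ok = all(xi_mul({(r, x): 1}, Lam) == (Lam if r == e else {}) and
         xi_mul(Lam, {(r, x): 1}) == (Lam if r == e else {})
         for r in R for x in K)
print(ok)  # True
```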
As a cross product algebra, we can take the same approach as with $D(G)$ to the classification of its irreps:
\begin{lemma} Irreps of $\Xi(R,K)$ are classified by pairs $(\hbox{{$\mathcal O$}},\rho)$ where $\hbox{{$\mathcal O$}}\subseteq R$ is an orbit under the action ${\triangleright}$ and $\rho$ is an irrep of the isotropy group $K^{r_0}:=\{x\in K\ |\ x{\triangleright} r_0=r_0\}$. Here we fix a base point $r_0\in \hbox{{$\mathcal O$}}$ as well as $\kappa: \hbox{{$\mathcal O$}}\to K $ a choice of lift such that
\[ \kappa_r{\triangleright} r_0 = r,\quad\forall r\in \hbox{{$\mathcal O$}},\quad \kappa_{r_0}=e.\]
Then
\[ V_{\hbox{{$\mathcal O$}},\rho}=\mathbb{C} \hbox{{$\mathcal O$}}\mathop{{\otimes}} V_\rho,\quad \delta_r(s\mathop{{\otimes}} v)=\delta_{r,s}s\mathop{{\otimes}} v,\quad x.(s\mathop{{\otimes}} v)=x{\triangleright} s\mathop{{\otimes}}\zeta_s(x).v,\quad \zeta_s(x)=\kappa^{-1}_{x{\triangleright} s}x\kappa_s\]
for $v\in V_\rho$, the carrier space for $\rho$, and
\[ \zeta: \hbox{{$\mathcal O$}}\times K\to K^{r_0},\quad \zeta_r(x)=\kappa_{x{\triangleright} r}^{-1}x\kappa_r.\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad } One can check that $\zeta_r(x)$ lives in $K^{r_0}$,
\[ \zeta_r(x){\triangleright} r_0=(\kappa_{x{\triangleright} r}^{-1}x\kappa_r){\triangleright} r_0=\kappa_{x{\triangleright} r}^{-1}{\triangleright}(x{\triangleright} r)=\kappa_{x{\triangleright} r}^{-1}{\triangleright}(\kappa_{x{\triangleright} r}{\triangleright} r_0)=r_0\]
and the cocycle property
\[ \zeta_r(xy)=\kappa^{-1}_{x{\triangleright} y{\triangleright} r}x \kappa_{y{\triangleright} r}\kappa^{-1}_{y{\triangleright} r}y \kappa_r=\zeta_{y{\triangleright} r}(x)\zeta_r(y),\]
from which it is easy to see that $V_{\hbox{{$\mathcal O$}},\rho}$ is a representation,
\[ x.(y.(s\mathop{{\otimes}} v))=x.(y{\triangleright} s\mathop{{\otimes}} \zeta_s(y). v)=x{\triangleright}(y{\triangleright} s)\mathop{{\otimes}}\zeta_{y{\triangleright} s}(x)\zeta_s(y).v=xy{\triangleright} s\mathop{{\otimes}}\zeta_s(xy).v=(xy).(s\mathop{{\otimes}} v),\]
\[ x.(\delta_r.(s\mathop{{\otimes}} v))=\delta_{r,s}x{\triangleright} s\mathop{{\otimes}} \zeta_s(x). v= \delta_{x{\triangleright} r,x{\triangleright} s}x{\triangleright} s\mathop{{\otimes}}\zeta_s(x).v=\delta_{x{\triangleright} r}.(x.(s\mathop{{\otimes}} v)).\]
One can show that the $V_{\hbox{{$\mathcal O$}},\rho}$ are irreducible and do not depend, up to isomorphism, on the choice of $r_0$ or $\kappa_r$.\endproof
In the $*$-algebra case as here, we obtain a unitary representation if $\rho$ is unitary. One can also show that all irreps can be obtained this way. In fact, the algebra $\Xi(R,K)$ is semisimple and has a block associated to each $V_{\hbox{{$\mathcal O$}},\rho}$.
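The classification data above can be checked by machine in small cases. The following Python sketch is a verification aid of our own (the encoding of permutations as tuples and the composition convention, with the group product realised as composition, are our choices and not part of the formal development). It computes the orbits, isotropy groups, lifts $\kappa$ and cocycle $\zeta$ for $G=S_3$, $K=\{e,u\}$, $R=\{e,uv,vu\}$, the running example used later in Example~\ref{exS3n}, and verifies the cocycle property numerically.

```python
from itertools import permutations

# Permutations of {0,1,2} as tuples; compose(f, g) applies g first, then f,
# so the group product fg is realised as composition.
def compose(f, g):
    return tuple(f[g[i]] for i in range(3))

def inverse(f):
    return tuple(sorted(range(3), key=lambda i: f[i]))

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)   # u=(12), v=(23)
uv, vu = compose(u, v), compose(v, u)
K = [e, u]
R = [e, uv, vu]                              # transversal: G = RK uniquely

def factor(g):
    """Unique factorisation g = r k with r in R, k in K."""
    for r in R:
        k = compose(inverse(r), g)
        if k in K:
            return r, k

def act(x, r):                               # x▷r from xr = (x▷r)(x◁r)
    return factor(compose(x, r))[0]

# Orbits of K on R and isotropy groups K^r
orbits = {frozenset(act(x, r) for x in K) for r in R}
iso = {r: [x for x in K if act(x, r) == r] for r in R}
assert orbits == {frozenset({e}), frozenset({uv, vu})}
assert iso[e] == K and iso[uv] == [e]

# Lifts kappa_r for the orbit O_1 = {uv, vu} with base point r0 = uv
r0, kappa = uv, {uv: e, vu: u}               # since u▷(uv) = vu
def zeta(r, x):                              # zeta_r(x) = kappa_{x▷r}^{-1} x kappa_r
    return compose(inverse(kappa[act(x, r)]), compose(x, kappa[r]))

for r in kappa:
    for x in K:
        assert act(zeta(r, x), r0) == r0     # zeta lands in K^{r0}
        for y in K:                          # cocycle: zeta_r(xy) = zeta_{y▷r}(x) zeta_r(y)
            assert zeta(r, compose(x, y)) == compose(zeta(act(y, r), x), zeta(r, y))
```

Here $K^{r_0}$ for the nontrivial orbit is trivial, so all the $\zeta$ values are the identity, consistent with the choice $\kappa_{vu}=u$.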
\begin{lemma}\label{Xiproj} $\Xi(R,K)$ has a complete orthogonal set of central idempotents
\[ P_{(\hbox{{$\mathcal O$}},\rho)}={\dim V_\rho\over |K^{r_0}|}\sum_{r\in\hbox{{$\mathcal O$}}}\sum_{n\in K^{r_0}} \mathrm{ Tr}_{\rho}(n^{-1})\delta_r\mathop{{\otimes}} \kappa_r n \kappa_r^{-1}.\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad } The proofs are similar to those for $D(G)$ in \cite{CowMa}. That we have a projection is
\begin{align*}P_{(\hbox{{$\mathcal O$}},\rho)}^2&={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,n\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(n^{-1})\sum_{r,s\in \hbox{{$\mathcal O$}}}(\delta_r\mathop{{\otimes}} \kappa_rm\kappa_r^{-1})(\delta_s\mathop{{\otimes}}\kappa_sn\kappa_s^{-1})\\
&={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,n\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(n^{-1})\sum_{r,s\in \hbox{{$\mathcal O$}}}\delta_r\delta_{r,s}\mathop{{\otimes}} \kappa_rm\kappa_r^{-1}\kappa_s n\kappa_s^{-1}\\
&={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,m'\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(m m'{}^{-1})\sum_{r\in \hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} \kappa_rm'\kappa_r^{-1}= P_{(\hbox{{$\mathcal O$}},\rho)}
\end{align*}
where we used $r=\kappa_r m\kappa_r^{-1}{\triangleright} s$ iff $s=\kappa_r m^{-1}\kappa_r^{-1}{\triangleright} r=\kappa_r m^{-1}{\triangleright} r_0=\kappa_r{\triangleright} r_0=r$. We then changed variables to $m'=mn$ and used the orthogonality formula for characters on $K^{r_0}$. A similar computation shows that distinct projectors are orthogonal. The sum of the projectors is 1 since
\begin{align*}\sum_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}=\sum_{\hbox{{$\mathcal O$}},\, r\in \hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} \kappa_r\sum_{\rho\in \hat{K^{r_0}}} \left({\dim V_\rho\over |K^{r_0}|}\sum_{n\in K^{r_0}} \mathrm{ Tr}_{\rho}(n^{-1}) n\right) \kappa_r^{-1}=\sum_{\hbox{{$\mathcal O$}},r\in\hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} 1=1,
\end{align*}
where the bracketed expression is the projector $P_\rho$ for $\rho$ in the group algebra of $K^{r_0}$, and these sum to 1 by the Peter-Weyl decomposition of the latter. \endproof
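Lemma~\ref{Xiproj} is also easy to verify by machine in the smallest nontrivial case. The following sketch (ours; the encoding of $\Xi(R,K)$ by its structure constants on the basis $\delta_r\mathop{{\otimes}} x$, with product $(\delta_r\mathop{{\otimes}} x)(\delta_s\mathop{{\otimes}} y)=\delta_{r,x{\triangleright} s}\,\delta_r\mathop{{\otimes}} xy$, is a convenience of ours) builds the 6-dimensional algebra for $G=S_3$, $K=\{e,u\}$, $R=\{e,uv,vu\}$ and checks that the three idempotents $P_{(\hbox{{$\mathcal O$}},\rho)}$ are orthogonal projections summing to $1$.

```python
from fractions import Fraction

def compose(f, g): return tuple(f[g[i]] for i in range(3))
def inverse(f): return tuple(sorted(range(3), key=lambda i: f[i]))

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu = compose(u, v), compose(v, u)
K, R = [e, u], [e, uv, vu]

def act(x, r):                     # x▷r via unique factorisation xr = (x▷r)(x◁r)
    g = compose(x, r)
    return next(s for s in R if compose(inverse(s), g) in K)

# Elements of Xi(R,K) as dicts {(r, x): coefficient} on the basis δ_r ⊗ x
def mult(a, b):
    out = {}
    for (r, x), ca in a.items():
        for (s, y), cb in b.items():
            if r == act(x, s):     # relation x δ_s = δ_{x▷s} x
                key = (r, compose(x, y))
                out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c}

one = {(r, e): Fraction(1) for r in R}

# P_{(O0, ±1)} = (1/2) Σ_n Tr_ρ(n^{-1}) δ_e ⊗ n  and  P_{(O1, 1)} = Σ_{r∈O1} δ_r ⊗ e
P = [
    {(e, e): Fraction(1, 2), (e, u): Fraction(1, 2)},
    {(e, e): Fraction(1, 2), (e, u): Fraction(-1, 2)},
    {(uv, e): Fraction(1), (vu, e): Fraction(1)},
]

for i, p in enumerate(P):
    assert mult(p, p) == p                            # idempotent
    for q in P[i + 1:]:
        assert mult(p, q) == {} and mult(q, p) == {}  # mutually orthogonal
total = {}
for p in P:
    for k, c in p.items():
        total[k] = total.get(k, 0) + c
assert {k: c for k, c in total.items() if c} == one   # complete
```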
\begin{remark}\rm
In the previous literature, the irreps have been described using double cosets and representatives thereof \cite{CCW}. In fact a double coset in ${}_KG_K$ is an orbit for the left action of $K$ on $G/K$ and hence has the form $\hbox{{$\mathcal O$}} K$ corresponding to an orbit $\hbox{{$\mathcal O$}}\subset R$ in our approach. We will say more about this later, in Proposition~\ref{prop:mon_equiv}.
\end{remark}
An important question for the physics is how representations in the bulk relate to those on the boundary. This is mediated by functors in the two directions, and we now give a direct approach to this question as follows.
\begin{proposition}\label{Xisub} There is an inclusion of algebras $i:\Xi(R,K)\hookrightarrow D(G)$
\[ i(x)=x,\quad i(\delta_r)=\sum_{x\in K} \delta_{rx}.\]
The pull-back or restriction of a $D(G)$-module $W$ to a $\Xi$-module $i^*(W)$ is simply for $\xi\in \Xi$ to act by $i(\xi)$. Going the other way, the induction functor sends a $\Xi$-module $V$ to a $D(G)$-module $D(G)\mathop{{\otimes}}_\Xi V$, where $\xi\in \Xi$ right acts on $D(G)$ by right multiplication by $i(\xi)$. These two functors are adjoint.
\end{proposition}
{\noindent {\bfseries Proof:}\quad } We just need to check that $i$ respects the relations of $\Xi$. Thus,
\begin{align*} i(\delta_r)i(\delta_s)&=\sum_{x,y\in K}\delta_{rx}\delta_{sy}=\sum_{x\in K}\delta_{r,s}\delta_{rx}=i(\delta_r\delta_s),
\\ i(x)i(\delta_r)&=\sum_{y\in K}x\delta_{ry}=\sum_{y\in K}\delta_{xryx^{-1}}x=\sum_{y\in K}\delta_{(x{\triangleright} r)x'yx^{-1}}x=\sum_{y'\in K}\delta_{(x{\triangleright} r)y'}x=i(\delta_{x{\triangleright} r} x),\end{align*}
as required. For the first line, we used the unique factorisation $G=RK$ to break down the $\delta$-functions. For the second line, we use this in the form $xr=(x{\triangleright} r)x'$ for some $x'\in K$ and then changed variables from $y$ to $y'=x'yx^{-1}$. The rest follows as for any algebra inclusion. \endproof
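The relations checked in the proof can also be confirmed numerically inside $D(S_3)$. The sketch below (our own aid; conventions as before, with $D(G)$ encoded on the basis $\delta_g\mathop{{\otimes}} h$ with product $(\delta_g\mathop{{\otimes}} h)(\delta_f\mathop{{\otimes}} k)=\delta_{g,hfh^{-1}}\delta_g\mathop{{\otimes}} hk$) verifies that $i$ respects the relations of $\Xi$ for $G=S_3$, $K=\{e,u\}$, $R=\{e,uv,vu\}$.

```python
from itertools import permutations

def compose(f, g): return tuple(f[g[i]] for i in range(3))
def inverse(f): return tuple(sorted(range(3), key=lambda i: f[i]))

G = list(permutations(range(3)))
e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu = compose(u, v), compose(v, u)
K, R = [e, u], [e, uv, vu]

def act(x, r):                       # x▷r via unique factorisation G = RK
    g = compose(x, r)
    return next(s for s in R if compose(inverse(s), g) in K)

# D(G) elements as dicts {(g, h): coeff} on the basis δ_g ⊗ h
def mult(a, b):
    out = {}
    for (g, h), ca in a.items():
        for (f, k), cb in b.items():
            if g == compose(h, compose(f, inverse(h))):   # δ_{g, hfh^{-1}}
                key = (g, compose(h, k))
                out[key] = out.get(key, 0) + ca * cb
    return {key: c for key, c in out.items() if c}

def i_delta(r):                      # i(δ_r) = Σ_{x∈K} δ_{rx} ⊗ e
    return {(compose(r, x), e): 1 for x in K}

def i_k(x):                          # i(x) = Σ_{g∈G} δ_g ⊗ x
    return {(g, x): 1 for g in G}

for r in R:
    for s in R:                      # i(δ_r) i(δ_s) = δ_{r,s} i(δ_r)
        assert mult(i_delta(r), i_delta(s)) == (i_delta(r) if r == s else {})
    for x in K:                      # i(x) i(δ_r) = i(δ_{x▷r}) i(x)
        assert mult(i_k(x), i_delta(r)) == mult(i_delta(act(x, r)), i_k(x))
```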
In fact, $\Xi$ is a quasi-bialgebra and, at least when $(\ )^R$ is bijective, a quasi-Hopf algebra, as we see in Section~\ref{sec:quasi}. In the latter case, it has a quantum double $D(\Xi)$ which contains $\Xi$ as a sub-quasi-Hopf algebra. Moreover, it can be shown that $D(\Xi)$ is a `Drinfeld cochain twist' of $D(G)$, which implies it has the same algebra as $D(G)$. We do not need the details, but this is the abstract reason for the above inclusion. (An explicit proof of this twisting result in the usual Hopf algebra case, with $R$ a group, is in \cite{BGM}.) Meanwhile, the statement that the two functors in Proposition~\ref{Xisub} are adjoint is that
\[ \hom_{D(G)}(D(G)\mathop{{\otimes}}_\Xi V,W)=\hom_\Xi(V, i^*(W))\]
for all $\Xi$-modules $V$ and all $D(G)$-modules $W$. These functors do not take irreps to irreps and of particular interest are the multiplicities for the decompositions back into irreps, i.e. if $V_i, W_a$ are respective irreps and $D(G)\mathop{{\otimes}}_\Xi V_i=\oplus_{a} n^i{}_a W_a$ then
\[ {\rm dim}(\hom_{D(G)}(D(G)\mathop{{\otimes}}_\Xi V_i,W_a))={\rm dim}(\hom_\Xi(V_i,i^*(W_a)))\]
and hence $i^*(W_a)=\oplus_i n^i_a V_i$. This explains one of the observations in \cite{CCW}. It remains to give a formula for these multiplicities, but here we were not able to reproduce the formula in \cite{CCW}. Our approach goes via a general lemma as follows.
\begin{lemma}\label{lemfrobn} Let $i:A\hookrightarrow B$ be an inclusion of finite-dimensional semisimple algebras and $\int$ the unique symmetric special Frobenius linear form on $B$ such that $\int 1=1$. Let $V_i$ be an irrep of $A$ and $W_a$ an irrep of $B$. Then the multiplicity of $V_i$ in the pull-back $i^*(W_a)$ (which is the same as the multiplicity of $W_a$ in $B\mathop{{\otimes}}_A V_i$) is given by
\[ n^i{}_a={\dim(B)\over\dim(V_i)\dim(W_a)}\int i(P_i)P_a,\]
where $P_i\in A$ and $P_a\in B$ are the associated central idempotents. Moreover, $i(P_i)P_a =0$ if and only if $n^i_a = 0$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad } Recall that a linear map $\int:B\to \mathbb{C}$ is Frobenius if the bilinear form $(b,c):=\int bc$ is nondegenerate, and is symmetric if this bilinear form is symmetric. Also, let $g=g^1\mathop{{\otimes}} g^2\in B\mathop{{\otimes}} B$ (in a notation with the sum of such terms understood) be the associated `metric' such that $(\int b g^1 )g^2=b=g^1\int g^2b$ for all $b$ (it is the inverse matrix in a basis of the algebra). We say that the Frobenius form is special if the algebra product $\cdot$ obeys $\cdot(g)=1$. It is well-known that there is a unique symmetric special Frobenius form up to scale, given by the trace in the left regular representation, see \cite{MaRie:spi} for a recent study. In our case, over $\mathbb{C}$, we also know that a finite-dimensional semisimple algebra $B$ is a direct sum of matrix algebras ${\rm End}(W_a)$ associated to the irreps $W_a$ of $B$. Then
\begin{align*} \int i(P_i)P_a&={1\over\dim(B)}\sum_{\alpha,\beta}\<f^\alpha\mathop{{\otimes}} e_\beta,i(P_i)P_a (e_\alpha\mathop{{\otimes}} f^\beta)\>\\
&={1\over\dim(B)}\sum_{\alpha}\dim(W_a)\<f^\alpha, i(P_i)e_\alpha\>={\dim(W_a)\dim(V_i)\over\dim(B)} n^i{}_a,
\end{align*}
where $\{e_\alpha\}$ is a basis of $W_a$ and $\{f^\beta\}$ is a dual basis, and $P_a$ acts as the identity on $\mathrm{ End}(W_a)$ and zero on the other blocks. We then used that if $i^*(W_a)=\oplus_i {n^i{}_a}V_i$ as $A$-modules, then $i(P_i)$ just picks out the $V_i$ components where $P_i$ acts as the identity.
For the last part, the forward direction is immediate given the first part of the lemma. For the other direction, suppose
$n^i_a = 0$ so that $i^*(W_a)=\oplus_j n^j_aV_j$ with $j\ne a$ running over the other irreps of $A$. Now, we can view $P_{a}\in W_{a}\mathop{{\otimes}} W_{a}^*$ (as the identity element) and left multiplication by $i(P_i)$ is the same as $P_i$ acting on $P_{a}$ viewed as an element of $i^*(W_{a})\mathop{{\otimes}} W_{a}^*$, which is therefore zero.\endproof
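Lemma~\ref{lemfrobn} can be tested independently on a small example. The following sketch (ours, with our own normalisations) takes $A=\mathbb{C}\mathbb{Z}_2\subset B=\mathbb{C} S_3$, where on a group algebra the normalised symmetric special Frobenius form is $\int \sum_g c_g g = c_e$, and recovers via the formula the known multiplicities of $S_3$ irreps restricted to $\mathbb{Z}_2$.

```python
from fractions import Fraction
from itertools import permutations

def compose(f, g): return tuple(f[g[i]] for i in range(3))

G = list(permutations(range(3)))
e, u = (0, 1, 2), (1, 0, 2)

def sign(g):                      # parity of a permutation of 3 points
    return 1 if g in [(0, 1, 2), (1, 2, 0), (2, 0, 1)] else -1

chars = {                         # character table of S_3 (real-valued)
    'triv': lambda g: 1,
    'sign': sign,
    'W2':   lambda g: 2 if g == e else (0 if sign(g) == -1 else -1),
}
dims = {'triv': 1, 'sign': 1, 'W2': 2}

def mult_ga(a, b):                # product in the group algebra C[S_3]
    out = {}
    for g, ca in a.items():
        for h, cb in b.items():
            k = compose(g, h)
            out[k] = out.get(k, 0) + ca * cb
    return out

def P_B(a):                       # central idempotent (dim_a/|G|) Σ χ_a(g^{-1}) g
    return {g: Fraction(dims[a] * chars[a](g), 6) for g in G}

def P_A(rho):                     # (1 + ρ(u) u)/2 for ρ(u) = ±1, included in C[S_3]
    return {e: Fraction(1, 2), u: Fraction(rho, 2)}

def n(rho, a):                    # multiplicity via the Frobenius formula
    integral = mult_ga(P_A(rho), P_B(a)).get(e, 0)   # ∫ Σ c_g g = c_e
    return Fraction(6, dims[a]) * integral           # dim(B)=6, dim(V_i)=1

assert [n(+1, x) for x in ('triv', 'sign', 'W2')] == [1, 0, 1]
assert [n(-1, x) for x in ('triv', 'sign', 'W2')] == [0, 1, 1]
```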
We apply Lemma~\ref{lemfrobn} in our case of $A=\Xi$ and $B=D(G)$, where \[ \dim(V_i)=|\hbox{{$\mathcal O$}}|\dim(V_\rho), \quad \dim(W_a)=|{\hbox{{$\mathcal C$}}}|\dim(W_\pi)\]
with $i=(\hbox{{$\mathcal O$}},\rho)$ as described above and $a=({\hbox{{$\mathcal C$}}},\pi)$ as described in Section~\ref{sec:bulk}.
\begin{proposition}\label{nformula} For the inclusion $i:\Xi\hookrightarrow D(G)$ in Proposition~\ref{Xisub}, the multiplicities for restriction and induction as above are given by
\[ n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}= {|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|} \sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop
r^{-1}c\in K}} |K^{r,c}|\sum_{\tau\in \hat{K^{r,c}} } n_{\tau,\tilde\rho|_{K^{r,c}}} n_{\tau, \tilde\pi|_{K^{r,c}}},\quad K^{r,c}=K^r\cap G^c,\]
where $\tilde \pi(m)=\pi(q_c^{-1}mq_c)$ and $\tilde\rho(m)=\rho(\kappa_r^{-1}m\kappa_r)$ are the corresponding representations of $G^c$ and $K^r$ respectively, decomposing as $K^{r,c}$-representations as
\[ \tilde\rho|_{K^{r,c}}{\cong}\oplus_\tau n_{\tau,\tilde\rho|_{K^{r,c}}}\tau,\quad \tilde\pi|_{K^{r,c}}{\cong}\oplus_\tau n_{\tau,\tilde\pi|_{K^{r,c}}}\tau.\]
\end{proposition}
{\noindent {\bfseries Proof:}\quad } We include the projector from Lemma~\ref{Xiproj} as
\[ i(P_{(\hbox{{$\mathcal O$}},\rho)})={{\rm dim}(V_\rho)\over |K^{r_0}|}\sum_{r\in \hbox{{$\mathcal O$}}, x\in K}\sum_{m\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\delta_{rx}\mathop{{\otimes}} \kappa_r m\kappa_r^{-1}\]
and multiply this by $P_{({\hbox{{$\mathcal C$}}},\pi)}$ from (\ref{Dproj}). In the latter, we write $c=sy$ for the factorisation of $c$. Then when we multiply these out, for $(\delta_{rx}\mathop{{\otimes}} \kappa_r m \kappa_r^{-1})(\delta_{c}\mathop{{\otimes}} q_c n q_c^{-1})$ we will need $\kappa_r m\kappa_r^{-1}{\triangleright} s=r$ or equivalently $s=\kappa_r m^{-1}\kappa_r^{-1}{\triangleright} r=r$ so we are actually summing not over $c$ but over $y\in K$ such that $ry\in {\hbox{{$\mathcal C$}}}$. Also then $x$ is uniquely determined in terms of $y$.
Hence
\[ i(P_{(\hbox{{$\mathcal O$}},\rho)})P_{({\hbox{{$\mathcal C$}}},\pi)}={{\rm dim}(V_\rho){\rm dim}(W_\pi)\over |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}, n\in G^{c_0}}\sum_{r\in \hbox{{$\mathcal O$}}, y\in K | ry\in{\hbox{{$\mathcal C$}}}} \mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\pi(n^{-1}) \delta_{rx}\mathop{{\otimes}} \kappa_r m\kappa_r^{-1} q_c nq_c^{-1}.\]
Now we apply the integral of $D(G)$, $\int\delta_g\mathop{{\otimes}} h=\delta_{h,e}$, which requires
\[ n=q_c^{-1}\kappa_r m^{-1}\kappa_r^{-1}q_c\]
and $x=y$ for $n\in G^{c_0}$ given that $c=ry$. We refer to this condition on $y$ as $(\star)$. Remembering that $\int$ is normalised so that $\int 1=|G|$, we have from the lemma
\begin{align*}n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}&={|G|\over\dim(V_i)\dim(W_a)}\int i(P_{(\hbox{{$\mathcal O$}},\rho)})P_{({\hbox{{$\mathcal C$}}},\pi)}\\
&={|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}}\sum_{{r\in \hbox{{$\mathcal O$}}, y\in K\atop (\star), ry\in{\hbox{{$\mathcal C$}}}}} \mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\pi(q_{ry}^{-1}\kappa_r m\kappa_r^{-1}q_{ry}) \\
&={|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|}\sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop
r^{-1}c\in K}}\sum_{m'\in K^r\cap G^c} \mathrm{ Tr}_\rho(\kappa_r^{-1}m'{}^{-1}\kappa_r)\mathrm{ Tr}_\pi(q_{c}^{-1} m' q_{c}),
\end{align*}
where we compute in $G$ and view $(\star)$ as $m':=\kappa_r m \kappa_r^{-1}\in G^c$. We then use the group orthogonality formula
\[ \sum_{m\in K^{r,c}}\mathrm{ Tr}_{\tau}(m^{-1})\mathrm{ Tr}_{\tau'}(m)=\delta_{\tau,\tau'}|K^{r,c}| \]
for any irreps $\tau,\tau'$ of the group
\[ K^{r,c}:=K^r\cap G^c=\{x\in K\ |\ x{\triangleright} r=r,\quad x c x^{-1}=c\}\]
to obtain the formula stated. \endproof
This simplifies in four (overlapping) special cases as follows.
\noindent{(i) $V_i$ trivial: }
\[ n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G|\over |{\hbox{{$\mathcal C$}}}||K||G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap K}\sum_{m\in K\cap G^c}\mathrm{ Tr}_\pi(q_c^{-1}mq_c)={|G| \over |{\hbox{{$\mathcal C$}}}| |K||G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap K} |K^c| n_{1,\tilde\pi}\]
as $\rho=1$ implies $\tilde\rho=1$ and forces $\tau=1$. Here $K^c$ is the centraliser of $c$ in $K$. If $n_{1,\tilde\pi}$ is independent of the choice of $c$ then we can simplify this further as
\[ n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G| |({\hbox{{$\mathcal C$}}}\cap K)/K|\over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|} n_{1,\pi|_{K^{c_0}}}\]
using the orbit-counting lemma, where $K$ acts on ${\hbox{{$\mathcal C$}}}\cap K$ by conjugation.
\noindent{(ii) $W_a$ trivial:}
\[ n^{(\hbox{{$\mathcal O$}},\rho)}_{(\{e\},1)}= {|G|\over |\hbox{{$\mathcal O$}}||K^{r_0}||G|}\sum_{r\in \hbox{{$\mathcal O$}}\cap K}\sum_{m\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})=\begin{cases} 1 & {\rm if\ }\hbox{{$\mathcal O$}}, \rho\ {\rm trivial}\\ 0 & {\rm else}\end{cases} \]
as $\hbox{{$\mathcal O$}}\cap K=\{e\}$ if $\hbox{{$\mathcal O$}}=\{e\}$ (but is otherwise empty) and in this case only $r=e$ contributes. This is consistent with the fact that if $W_a$ is the trivial representation of $D(G)$ then its pull back is also trivial and hence contains only the trivial representation of $\Xi$.
\noindent{(iii) Fluxion sector:}
\[ n^{(\hbox{{$\mathcal O$}},1)}_{({\hbox{{$\mathcal C$}}},1)}= {|G|\over |\hbox{{$\mathcal O$}}||{\hbox{{$\mathcal C$}}}||K^{r_0}| |G^{c_0}|} \sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop
r^{-1}c\in K}} |K^r\cap G^c|.\]
\noindent{(iv) Chargeon sector: }
\[ n^{(\{e\},\rho)}_{(\{e\},\pi)}= n_{\rho, \pi|_{K}},\]
where $\rho,\pi$ are arbitrary irreps of $K,G$ respectively and only $r=c=e$ are allowed so $K^{r,c}=K$, and then only $\tau=\rho$ contributes.
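The fluxion-sector formula (iii) is straightforward to evaluate by machine. The following sketch (our own verification aid; the permutation encoding and names are ours) computes $n^{(\hbox{{$\mathcal O$}},1)}_{({\hbox{{$\mathcal C$}}},1)}$ for all orbit/class pairs with $G=S_3$, $K=\{e,u\}$, $R=\{e,uv,vu\}$, the data of Example~\ref{exS3n} below.

```python
from itertools import permutations

def compose(f, g): return tuple(f[g[i]] for i in range(3))
def inverse(f): return tuple(sorted(range(3), key=lambda i: f[i]))

G = list(permutations(range(3)))
e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu = compose(u, v), compose(v, u)
w = compose(u, compose(v, u))                  # the transposition (13)
K, R = [e, u], [e, uv, vu]

def act(x, r):                                 # x▷r via unique factorisation G = RK
    g = compose(x, r)
    return next(s for s in R if compose(inverse(s), g) in K)

def conj(h, g): return compose(h, compose(g, inverse(h)))

orbits = [[e], [uv, vu]]                       # K-orbits in R
classes = [[e], [u, v, w], [uv, vu]]           # conjugacy classes of S_3

def n_flux(O, C):                              # formula (iii), exact integer arithmetic
    r0, c0 = O[0], C[0]
    Kr0 = [x for x in K if act(x, r0) == r0]
    Gc0 = [g for g in G if conj(g, c0) == c0]
    total = 0
    for r in O:
        Kr = [x for x in K if act(x, r) == r]
        for c in C:
            if compose(inverse(r), c) in K:    # condition r^{-1} c ∈ K
                Gc = [g for g in G if conj(g, c) == c]
                total += len([x for x in Kr if x in Gc])   # |K^r ∩ G^c|
    num = len(G) * total
    den = len(O) * len(C) * len(Kr0) * len(Gc0)
    assert num % den == 0
    return num // den

table = [[n_flux(O, C) for C in classes] for O in orbits]
assert table == [[1, 1, 0], [0, 1, 1]]
```

The output reproduces the $({\hbox{{$\mathcal C$}}},1)$ columns of the rows $(\hbox{{$\mathcal O$}}_0,1)$ and $(\hbox{{$\mathcal O$}}_1,1)$ in the table of Example~\ref{exS3n}.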
\begin{example}\label{exS3n}\rm (i) We take $G=S_3$, $K=\{e,u\}=\mathbb{Z}_2$, where $u=(12)$. Here $G/K$ consists of
\[ G/K=\{\{e, u\}, \{w, uv\}, \{v, vu\}\}\]
and our standard choice of $R$ will be $R=\{e,uv, vu\}$, where we take one from each coset (but any other transversal will have the same irreps and their decompositions). This leads to 3 irreps of $\Xi(R,K)$ as follows. In $R$, we have two orbits $\hbox{{$\mathcal O$}}_0=\{e\}$, $\hbox{{$\mathcal O$}}_1=\{uv,vu\}$ and we choose representatives $r_0=e,\kappa_e=e$, $r_1=uv, \kappa_{uv}=e, \kappa_{vu}=u$ since $u{\triangleright} (uv)=vu$ for the two cases (here $r_1$ was denoted $r_0$ in the general theory and is the choice for $\hbox{{$\mathcal O$}}_1$). We also have $u{\triangleright}(vu)=uv$. Note that it happens that these orbits are also conjugacy classes but this is an accident of $S_3$ and not true for $S_4$. We have $K^{r_0}=K=\mathbb{Z}_2$ with representations $\rho(u)=\pm1$ and $K^{r_1}=\{e\}$ with only the trivial representation.
(ii) For $D(S_3)$, we have the 8 irreps in Example~\ref{exDS3} and hence there is a $3\times 8$ table of the $\{n^i{}_a\}$. We can easily compute some of the special cases from the above. For example, the trivial $\pi$ restricted to $K$ is $\rho=1$, the sign representation restricted to $K$ is the $\rho=-1$ representation, and $W_2$ restricted to $K$ is $1\oplus -1$, which gives the upper left $2\times 3$ submatrix for the chargeon sector. Another 6 entries (four of them new) are given by the fluxion formula. We also have ${\hbox{{$\mathcal C$}}}_2\cap K=\emptyset$, so the ${\hbox{{$\mathcal C$}}}_2$ entries of the first two rows vanish, since the sum in Proposition~\ref{nformula} is then empty. For ${\hbox{{$\mathcal C$}}}_1,\pm1$ in the first row, we have ${\hbox{{$\mathcal C$}}}_1\cap K=\{u\}$ with trivial action of $K$, so just one orbit. This gives a nontrivial result in the $+1$ case and $0$ in the $-1$ case. The story for ${\hbox{{$\mathcal C$}}}_1,\pm1$ in the second row follows the same derivation, but needs $\tau=-1$ and hence $\pi=-1$ for the nonzero case.
In the third row with ${\hbox{{$\mathcal C$}}}_2,\pi$, we have $K^{r}=\{e\}$, so $K^{r,c}=\{e\}$ and we only have $\tau=1=\rho$, as well as $\tilde\pi=1$ independently of $\pi$ as this is 1-dimensional. So both $n$ factors in the formula in Proposition~\ref{nformula} are 1. In the sum over $r,c$, we need $c=r$, so we sum over 2 possibilities, giving a nontrivial result as shown. For ${\hbox{{$\mathcal C$}}}_1,\pi$, the first part goes the same way and $c$ is similarly determined from $r$, so again two contributions in the sum, giving the answer shown independently of $\pi$. Finally, for ${\hbox{{$\mathcal C$}}}_0,\pi$ we have $r\in\{uv,vu\}$ and $c=e$, and can never meet the condition $r^{-1}c\in K$. So these entries are all $0$. Thus, Proposition~\ref{nformula} in this example tells us:
\[\begin{array}{c|c|c|c|c|c|c|c|c} n^i{}_a & {\hbox{{$\mathcal C$}}}_0,1 & {\hbox{{$\mathcal C$}}}_0,{\rm sign} & {\hbox{{$\mathcal C$}}}_0,W_2 & {\hbox{{$\mathcal C$}}}_1, 1& {\hbox{{$\mathcal C$}}}_1,-1 & {\hbox{{$\mathcal C$}}}_2,1& {\hbox{{$\mathcal C$}}}_2,\omega & {\hbox{{$\mathcal C$}}}_2,\omega^2\\
\hline
\hbox{{$\mathcal O$}}_0,1&1 & 0 & 1 &1 & 0& 0 &0 &0 \\
\hline
\hbox{{$\mathcal O$}}_0,-1&0 & 1&1& 0& 1&0 &0 & 0\\
\hline
\hbox{{$\mathcal O$}}_1,1&0 &0&0 & 1& 1 &1 &1 & 1
\end{array}\]
One can check for consistency that for each $W_a$, $\dim(W_a)$ is the sum of the dimensions of the $V_i$ that it contains, which determines one row from the other two.
\end{example}
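The dimension consistency check just mentioned can be encoded directly. A short sketch (ours) records the table together with $\dim V_i=|\hbox{{$\mathcal O$}}|\dim(V_\rho)$ and $\dim W_a=|{\hbox{{$\mathcal C$}}}|\dim(W_\pi)$ and verifies the count for every column:

```python
# Rows: irreps of Xi(R,K); columns: irreps of D(S_3), in the order of the table
n = [
    [1, 0, 1, 1, 0, 0, 0, 0],
    [0, 1, 1, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 1, 1],
]
dim_V = [1, 1, 2]                   # dim V_i = |O| dim(V_rho)
dim_W = [1, 1, 2, 3, 3, 2, 2, 2]    # dim W_a = |C| dim(W_pi)

# Each D(G) irrep decomposes over Xi with matching total dimension
for a in range(8):
    assert dim_W[a] == sum(n[i][a] * dim_V[i] for i in range(3))
```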
\subsection{Boundary lattice model}\label{sec:boundary_lat}
Consider a vertex on the lattice $\Sigma$. Fixing a subgroup $K \subseteq G$, we define an action of $\mathbb{C} K$ on $\hbox{{$\mathcal H$}}$ by
\begin{equation}\label{actXi0}\tikzfig{CaK_vertex_action}\end{equation}
One can see that this is an action as it is a tensor product of representations on each edge, or simply because it is the restriction to $K$ of the vertex action of $G$ in the bulk. Next, we define the action of $\mathbb{C} (R)$ at a face relative to a cilium,
\begin{equation}\label{actXi}\tikzfig{CGK_face_action}\end{equation}
with a clockwise rotation. That this is indeed an action is also easy to check explicitly, recalling that either $rK = r'K$ when $r= r'$ or $rK \cap r'K = \emptyset$ otherwise, for any $r, r'\in R$. These actions define a representation of $\Xi(R,K)$, which is just the bulk $D(G)$ action restricted to $\Xi(R,K)\subseteq D(G)$ by the inclusion in Proposition~\ref{Xisub}. This says that $x\in K$ acts as in $G$ and $\mathbb{C}(R)$ acts on faces by the $\mathbb{C}(G)$ action after sending $\delta_r \mapsto \sum_{a\in rK}\delta_a$. To connect the above representation to the topic at hand, we now define what we mean by a boundary.
\subsubsection{Smooth boundaries}
Consider the lattice in the half-plane for simplicity,
\[\tikzfig{smooth_halfplane}\]
where each solid black arrow still carries a copy of $\mathbb{C} G$ and the ellipses indicate that the lattice extends infinitely. The boundary runs along the left hand side and we refer to the rest of the lattice as the `bulk'. The grey dashed edges and vertices indicate empty space, and the lattice meets this boundary with faces; we therefore call it a `smooth' boundary. There is a site $s_0$ indicated at the boundary.
There is an action of $\mathbb{C} K$ at the boundary vertex associated to $s_0$, identical to the action of $\mathbb{C} K$ defined above but with the left edge undefined. Similarly, there is an action of $\mathbb{C}(R)$ at the face associated to $s_0$. However, this is more complicated, as the face has three edges undefined and the action must be defined slightly differently from in the bulk:
\[\tikzfig{smooth_face_action}\]
\[\tikzfig{smooth_face_actionB}\]
where the action is given a superscript ${\triangleright}^b$ to differentiate it from the actions in the bulk. In the first case, we follow the same clockwise rotation rule but skip over the undefined values on the grey edges; in the second case we instead go round anticlockwise. Which rule applies is determined by whether the cilium is attached to the top or the bottom of the edge. It is easy to check that this defines a representation of $\Xi(R,K)$ on $\hbox{{$\mathcal H$}}$ associated to each smooth boundary site, such as $s_0$, and that the actions of $\mathbb{C}(R)$ have been chosen precisely so that this holds. A similar principle applies to ${\triangleright}^b$ for other orientations of the boundary.
The integral actions at a boundary vertex $v$ and at a face $s_0=(v,p)$ of a smooth boundary are then
\[ A^b_1(v):=\Lambda_{\mathbb{C} K}{\triangleright}^b_v = {1\over |K|}\sum_k k{\triangleright}^b_v,\quad B^b_1(p):=\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{p} = \delta_e{\triangleright}^b_{p},\]
where the superscript $b$ and subscript $1$ label that these are at a smooth boundary. We have the convenient property that
\[\tikzfig{smooth_face_integral}\]
so both the vertex and face integral actions at a smooth boundary depend only on the vertex and face respectively, not on the precise cilium, just as for the integral actions in the bulk.
\begin{remark}\rm
There is similarly an action of $\mathbb{C}(G) {>\!\!\!\triangleleft} \mathbb{C} K \subseteq D(G)$ on $\hbox{{$\mathcal H$}}$ at each site in the next layer into the bulk, where the site has the vertex at the boundary but an internal face. We mention this for completeness, and because using this fact it is easy to show that
\[A_1^b(v)B(p) = B(p)A_1^b(v),\]
where $B(p)$ is the usual integral action in the bulk.
\end{remark}
\begin{remark}\rm
In \cite{BSW} it is claimed that one can similarly introduce actions at smooth boundaries defined not only by $R$ and $K$ but also a 2-cocycle $\alpha$. This makes some sense categorically, as the module categories of $\hbox{{$\mathcal M$}}^G$ may also include such a 2-cocycle, which enters by way of a \textit{twisted} group algebra $\mathbb{C}_\alpha K$ \cite{Os2}. However, in Figure 6 of \cite{BSW} one can see that when the cocycle $\alpha$ is introduced all edges on the boundary are assumed to be copies of $\mathbb{C} K$, rather than $\mathbb{C} G$. On closer inspection, it is evident that this means that the action on faces of $\delta_e\in\mathbb{C}(R)$ will always yield 1, and the action of any other basis element of $\mathbb{C}(R)$ will yield 0. Similarly, the action on vertices is defined to still be an action of $\mathbb{C} K$, not $\mathbb{C}_\alpha K$. Thus, the excitations on this boundary are restricted to only the representations of $\mathbb{C} K$, without either $\mathbb{C}(R)$ or $\alpha$ appearing, which appears to defeat the purpose of the definition. It is not obvious to us that a cocycle can be included along these lines in a consistent manner.
\end{remark}
In quantum computational terms, in addition to the measurements in the bulk we now measure the operator $\sum_{\hbox{{$\mathcal O$}},\rho}p_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b$ for distinct coefficients $p_{\hbox{{$\mathcal O$}},\rho} \in \mathbb{R}$ at all sites along the boundary.
\subsubsection{Rough boundaries}
We now consider the half-plane lattice with a different kind of boundary,
\[\tikzfig{rough_halfplane}\]
This time, there is an action of $\mathbb{C} K$ at the exterior vertex and an action of $\mathbb{C}(R)$ at the face at the boundary with an edge undefined. Again, the former is just the usual action of $\mathbb{C} K$ with three edges undefined, but the action of $\mathbb{C}(R)$ requires more care and is defined as
\[\tikzfig{rough_face_action}\]
\[\tikzfig{rough_face_actionB}\]
\[\tikzfig{rough_face_actionC}\]
\[\tikzfig{rough_face_actionD}\]
All but the second action are just clockwise rotations as in the bulk, but with the greyed-out edge missing from the $\delta$-function. The second action goes counterclockwise in order to have an associated representation of $\Xi(R,K)$ at the bottom left. We have similar actions for other orientations of the lattice.
\begin{remark}\rm Although one can check that one has a representation of $\Xi(R,K)$ at each site using these actions and the action of $\mathbb{C} K$ defined before, the last two actions require $g_1$ and $g_2$, respectively $g_1$ and $g_3$, to appear on opposite sides of the $\delta$-function. This means that there is no way to get $\delta_e{\triangleright}^b$ to always be invariant under the choice of site in the face. Indeed, we have not been able to reproduce the implicit claim in \cite{CCW} that $\delta_e{\triangleright}^b$ at a rough boundary can be defined in a way that depends only on the face.
\end{remark}
The integral actions at a boundary vertex $v$ and at a site $s_0=(v,p)$ of a rough boundary are then
\[ A_2^b(v):=\Lambda_{\mathbb{C} K}{\triangleright}^b_v = {1\over |K|}\sum_k k{\triangleright}^b_v,\quad B_2^b(v,p):=\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{s_0} = \delta_e{\triangleright}_{s_0}^b, \]
where the superscript $b$ and subscript $2$ label that these are at a rough boundary. In computational terms, we measure the operator $\sum_{\hbox{{$\mathcal O$}},\rho}p_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b$ at each site along the boundary, as with smooth boundaries.
Unlike the smooth boundary case, there is not an action of, say, $\mathbb{C} (R){>\!\!\!\triangleleft} \mathbb{C} G$ at each site in the next layer into the bulk, with a boundary face but interior vertex. In particular, we do not have $B_2^b(v,p)A(v) = A(v)B_2^b(v,p)$ in general, but we can still consistently define a Hamiltonian. When the action at $v$ is restricted to $\mathbb{C} K$ we recover an action of $\Xi(R,K)$ again.
As with the bulk, the Hamiltonian incorporating the boundaries uses the actions of the integrals. We can accommodate both rough and smooth boundaries into the Hamiltonian. Let $V,P$ be the set of vertices and faces in the bulk, $S_1$ the set of all sites $(v,p)$ at smooth boundaries, and $S_2$ the same for rough boundaries. Then
\begin{align*}H&=\sum_{v_i\in V} (1-A(v_i)) + \sum_{p_i\in P} (1-B(p_i)) \\
&\quad + \sum_{s_{b_1} \in S_1} \left((1 - A_1^b(s_{b_1})) + (1 - B_1^b(s_{b_1}))\right) + \sum_{s_{b_2} \in S_2} \left((1 - A_2^b(s_{b_2})) + (1 - B_2^b(s_{b_2}))\right).\end{align*}
We can pick out two vacuum states immediately:
\begin{equation}\label{eq:vac1}|{\rm vac}_1\> := \prod_{v_i,s_{b_1},s_{b_2}}A(v_i)A_1^b(s_{b_1})A_2^b(s_{b_2})\bigotimes_E e\end{equation}
and
\begin{equation}\label{eq:vac2}|{\rm vac}_2\> := \prod_{p_i,s_{b_1},s_{b_2}}B(p_i)B_1^b(s_{b_1})B_2^b(s_{b_2})\bigotimes_E \sum_{g \in G} g\end{equation}
where the tensor product runs over all edges in the lattice.
\begin{remark}\rm
There is no need for two different boundaries to correspond to the same subgroup $K$, and the Hamiltonian can be defined accordingly. This principle is necessary when performing quantum computation by braiding `defects', i.e. finite holes in the lattice, on the toric code \cite{FMMC}, and also for the lattice surgery in Section~\ref{sec:patches}. We do not write out this Hamiltonian in all its generality here, but its form is obvious.
\end{remark}
\subsection{Quasiparticle condensation}
Quasiparticles on the boundary correspond to irreps of $\Xi(R,K)$. It is immediate from Section~\ref{sec:xi} that when $\hbox{{$\mathcal O$}} = \{e\}$, we must have $r_0 = e, K^{r_0} = K$. We may choose the trivial representation of $K$ and then we have $P_{e,1} = \Lambda_{\mathbb{C}(R)} \otimes \Lambda_{\mathbb{C} K}$. We say that this particular measurement outcome corresponds to the absence of nontrivial quasiparticles, as the states yielding this outcome are precisely the locally vacuum states with respect to the Hamiltonian. This set of quasiparticles on the boundary will not in general be the same as quasiparticles defined in the bulk, as ${}_{\Xi(R,K)}\mathcal{M} \not\simeq {}_{D(G)}\mathcal{M}$ for all nontrivial $G$.
Quasiparticles in the bulk can be created from a vacuum and moved using ribbon operators \cite{Kit}, where the ribbon operators are seen as left and right module maps $D(G)^* \rightarrow \mathrm{End}(\hbox{{$\mathcal H$}})$, see \cite{CowMa}. Following \cite{CCW}, we could similarly define a different set of ribbon operators for the boundary excitations, which use $\Xi(R,K)^*$ rather than $D(G)^*$. However, these have limited utility. For completeness we cover them in Appendix~\ref{app:ribbon_ops}. Instead, for our purposes we will keep using the normal ribbon operators.
These bulk ribbon operators extend to the boundaries, still using Definition~\ref{def:ribbon}, so long as none of the edges involved in the definition are greyed-out. When a ribbon operator ends at a boundary site $s$, we are no longer concerned with equivariance with respect to the actions of $\mathbb{C}(G)$ and $\mathbb{C} G$ at $s$, as in Lemma~\ref{ribcom}; instead, we should calculate equivariance with respect to the actions of $\mathbb{C}(R)$ and $\mathbb{C} K$. We will study the matter in more depth in Section~\ref{sec:quasi}, but note for now that if $s,t\in R$ then unique factorisation means that $st=(s\cdot t)\tau(s,t)$ for unique elements $s\cdot t\in R$ and $\tau(s,t)\in K$. Similarly, if $y\in K$ and $r\in R$ then unique factorisation $yr=(y{\triangleright} r)(y{\triangleleft} r)$ defines $y{\triangleleft} r$, to be studied later.
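The data $s\cdot t$, $\tau(s,t)$, $y{\triangleright} r$ and $y{\triangleleft} r$ are all computed from the same unique factorisation. The following sketch (ours; conventions as in our earlier aids) illustrates this for $G=S_3$, $K=\{e,u\}$, but now with the alternative transversal $R'=\{e,w,v\}$, for which the cocycle $\tau$ is nontrivial (for our standard choice $R=\{e,uv,vu\}$ it is trivial, since that $R$ is itself a subgroup).

```python
def compose(f, g): return tuple(f[g[i]] for i in range(3))
def inverse(f): return tuple(sorted(range(3), key=lambda i: f[i]))

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
w = compose(u, compose(v, u))      # the transposition (13)
K = [e, u]
R = [e, w, v]                      # another transversal of G/K = {{e,u},{uv,w},{vu,v}}

def factor(g):                     # unique g = r k with r in R, k in K
    for r in R:
        k = compose(inverse(r), g)
        if k in K:
            return r, k

def dot_tau(s, t):                 # (s·t, tau(s,t)) from st = (s·t) tau(s,t)
    return factor(compose(s, t))

def tri_pair(y, r):                # (y▷r, y◁r) from yr = (y▷r)(y◁r)
    return factor(compose(y, r))

# With this transversal, tau(w, v) = u is a nontrivial element of K
assert dot_tau(w, v) == (v, u)
assert dot_tau(v, v) == (e, e)
assert tri_pair(u, w) == (v, u)    # u▷w = v, u◁w = u
```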
\begin{lemma}\label{boundary_ribcom}
Let $\xi$ be an open ribbon from $s_0$ to $s_1$, where $s_0$ is located at a smooth boundary, for example:
\[\tikzfig{smooth_halfplane_ribbon_short}\]
and where $\xi$ begins at the specified orientation in the example, leading from $s_0$ into the bulk, rather than running along the boundary. Then
\[x{\triangleright}^b_{s_0}\circ F_\xi^{h,g}=F_\xi^{xhx^{-1},xg} \circ x{\triangleright}^b_{s_0};\quad \delta_r{\triangleright}^b_{s_0}\circ F_\xi^{h,g}=F_\xi^{h,g} \circ\delta_{s\cdot(y{\triangleright} r)}{\triangleright}^b_{s_0}\]
$\forall x\in K, r\in R, h,g\in G$, and where $h^{-1}=sy$, with $s\in R$ and $y\in K$, is the unique factorisation of $h^{-1}$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
The first is just the vertex action of $\mathbb{C} G$ restricted to $\mathbb{C} K$, with an edge greyed-out which does not influence the result. For the second, expand $\delta_r{\triangleright}^b_{s_0}$ and verify explicitly:
\[\tikzfig{smooth_halfplane_ribbon_shortA1}\]
\[\tikzfig{smooth_halfplane_ribbon_shortA2}\]
where we see $(s\cdot(y{\triangleright} r))K = s(y{\triangleright} r)\tau(s,y{\triangleright} r)^{-1}K = s(y{\triangleright} r)K = s(y{\triangleright} r)(y{\triangleleft} r)K = syrK = h^{-1}rK$. We check the other site as well:
\[\tikzfig{smooth_halfplane_ribbon_shortB1}\]
\[\tikzfig{smooth_halfplane_ribbon_shortB2}\]
\endproof
\begin{remark}\rm
One might be surprised that the equivariance property holds for the latter case when $s_0$ is attached to the vertex at the bottom of the face, as in this case $\delta_r{\triangleright}^b_{s_0}$ confers a $\delta$-function in the counterclockwise direction, different from the bulk. This is because the well-known equivariance properties in the bulk \cite{Kit} are not wholly correct, depending on orientation, as pointed out in \cite[Section~3.3]{YCC}. We accounted for this by specifying an orientation in Lemma~\ref{ribcom}.
\end{remark}
\begin{remark}\rm\label{rem:rough_ribbon}
We have a similar situation for a rough boundary, although we found only one orientation for which the same equivariance property holds, namely:
\[\tikzfig{rough_halfplane_ribbon}\]
In the reverse orientation, where the ribbon crosses downwards instead, equivariance is similar but introduces an antipode. For other orientations we do not find an equivariance property at all.
\end{remark}
As with the bulk, we can define an excitation space using a ribbon between the two endpoints $s_0$, $s_1$, although more care must be taken in the definition.
\begin{lemma}\label{Ts0s1}
Let ${|{\rm vac}\>}$ be a vacuum state on a half-plane $\Sigma$, where there is one smooth boundary beyond which there are no more edges. Let $\xi$ be a ribbon between two endpoints $s_0, s_1$ where $s_0 = \{v_0,p_0\}$ is on the boundary and $s_1 = \{v_1,p_1\}$ is in the bulk, such that $\xi$ interacts with the boundary only once, when crossing from $s_0$ into the bulk; it cannot cross back and forth multiple times. Let $|\psi^{h,g}\>:=F_\xi^{h,g}{|{\rm vac}\>}$, and $\hbox{{$\mathcal T$}}_{\xi}(s_0,s_1)$ be the space with basis $|\psi^{h,g}\>$.
(1) $|\psi^{h,g}\>$ is independent of the choice of ribbon through the bulk between fixed sites $s_0, s_1$, so long as the ribbon still only interacts with the boundary at the chosen location.
(2) $\hbox{{$\mathcal T$}}_\xi(s_0,s_1)\subset\hbox{{$\mathcal H$}}$ inherits actions at disjoint sites $s_0, s_1$,
\[ x{\triangleright}^b_{s_0}|\psi^{h,g}\>=|\psi^{ xhx^{-1},xg}\>,\quad \delta_r{\triangleright}^b_{s_0}|\psi^{h,g}\>=\delta_{rK,hK}|\psi^{h,g}\>\]
\[ f{\triangleright}_{s_1}|\psi^{h,g}\>=|\psi^{h,gf^{-1}}\>,\quad \delta_f{\triangleright}_{s_1}|\psi^{h,g}\>=\delta_{f,g^{-1}h^{-1}g}|\psi^{h,g}\>\]
where we use the isomorphism $|\psi^{h,g}\>\mapsto \delta_hg$ to see the action at $s_0$ as a representation of $\Xi(R,K)$ on $D(G)$. In particular it is the restriction of the left regular representation of $D(G)$ to $\Xi(R,K)$, with inclusion map $i$ from Lemma~\ref{Xisub}. The action at $s_1$ is the right regular representation of $D(G)$, as in the bulk.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
(1) is the same as the proof in \cite[Prop.3.10]{CowMa}, with the exception that if the ribbon $\xi'$ crosses the boundary multiple times it will incur an additional energy penalty from the Hamiltonian for each crossing, and thus $\hbox{{$\mathcal T$}}_{\xi'}(s_0,s_1) \neq \hbox{{$\mathcal T$}}_{\xi}(s_0,s_1)$ in general.
(2) This follows by the commutation rules in Lemma~\ref{boundary_ribcom} and Lemma~\ref{ribcom} respectively, using
\[x{\triangleright}^b_{s_0}{|{\rm vac}\>} = \delta_e{\triangleright}^b_{s_0}{|{\rm vac}\>} = {|{\rm vac}\>}; \quad f{\triangleright}_{s_1}{|{\rm vac}\>} = \delta_e{\triangleright}_{s_1}{|{\rm vac}\>} = {|{\rm vac}\>}\]
$\forall x\in K, f \in G$. For the hardest case we have
\begin{align*}\delta_r{\triangleright}^b_{s_0}F_\xi^{h,g}{|{\rm vac}\>} &= F_\xi^{h,g} \circ\delta_{s\cdot(y{\triangleright} r)}{\triangleright}^b_{s_0}{|{\rm vac}\>}\\
&= F_\xi^{h,g}\delta_{s\cdot(y{\triangleright} r)K,K}{|{\rm vac}\>}\\ &= F_\xi^{h,g}\delta_{rK,hK}{|{\rm vac}\>}.
\end{align*}
For the restriction of the action at $s_0$ to $\Xi(R,K)$, we have that
\[\delta_r\cdot\delta_hg = \delta_{rK,hK}\delta_hg = \sum_{a\in rK}\delta_{a,h}\delta_hg=i(\delta_r)\delta_hg\]
and $x\cdot \delta_hg = x\delta_hg = i(x)\delta_hg$.
\endproof
In the bulk, the excitation space $\hbox{{$\mathcal L$}}(s_0,s_1)$ is totally independent of the ribbon $\xi$ \cite{Kit,CowMa}, but we do not know of a similar property for $\hbox{{$\mathcal T$}}_\xi(s_0,s_1)$ when interacting with the boundary without the restrictions stated.
We explained in Section~\ref{sec:xi} how representations of $D(G)$ at sites in the bulk relate to those of $\Xi(R,K)$ in the boundary by functors in both directions. Physically, if we apply ribbon trace operators, that is operators of the form $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$, to the vacuum, then in the bulk we create exactly quasiparticles of types $({\hbox{{$\mathcal C$}}},\pi)$ and $({\hbox{{$\mathcal C$}}}^*,\pi^*)$ at the two ends. Now let us include a boundary.
\begin{definition}Given an irrep of $D(G)$ provided by $({\hbox{{$\mathcal C$}}},\pi)$, we define the {\em boundary projection} $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}\in \Xi(R,K)$ by
\[ P_{i^*({\hbox{{$\mathcal C$}}},\pi)}=\sum_{(\hbox{{$\mathcal O$}},\rho)\ |\ n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}\ne 0} P_{(\hbox{{$\mathcal O$}},\rho)}\]
i.e. we sum over the projectors of all the types of irreps of $\Xi(R,K)$ contained in the restriction of the given $D(G)$ irrep.
\end{definition}
It is clear that $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}$ is a projection as a sum of orthogonal projections.
\begin{proposition}\label{prop:boundary_traces}
Let $\xi$ be an open ribbon extending from an external site $s_0$ on a smooth boundary with associated algebra $\Xi(R,K)$ to a site $s_1$ in the bulk, for example:
\[\tikzfig{smooth_halfplane_ribbon}\]
Then
\[P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = 0\quad {\rm iff} \quad n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)} = 0.\]
In addition, we have
\[P_{i^*({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}^b_{s_0} W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} {\triangleleft}_{s_1} P_{({\hbox{{$\mathcal C$}}},\pi)},\]
where we see the left action at $s_1$ of $P_{({\hbox{{$\mathcal C$}}}^*,\pi^*)}$ as a right action using the antipode.
\end{proposition}
{\noindent {\bfseries Proof:}\quad }
Under the isomorphism in Lemma~\ref{Ts0s1} we have that $W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} \mapsto P_{({\hbox{{$\mathcal C$}}},\pi)} \in D(G)$. For the first part we therefore have
\[P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} \mapsto i(P_{(\hbox{{$\mathcal O$}},\rho)}) P_{({\hbox{{$\mathcal C$}}},\pi)}\]
so the result follows from the last part of Lemma~\ref{lemfrobn}. Since the sum of projectors over the irreps of $\Xi$ is 1, this then implies the second part:
\[W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = \sum_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = P_{i^*({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>}.\]
The action at $s_1$ is the same as for bulk ribbon operators.
\endproof
The physical interpretation is that application of a ribbon trace operator $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$ to a vacuum state creates a quasiparticle at $s_0$ of all the types contained in $i^*({\hbox{{$\mathcal C$}}},\pi)$, while still creating one of type $({\hbox{{$\mathcal C$}}}^*,\pi^*)$ at $s_1$; this is called the \textit{condensation} of $({{\hbox{{$\mathcal C$}}},\pi})$ at the boundary. While we used a smooth boundary in this example, the proposition applies equally to rough boundaries with the specified orientation in Remark~\ref{rem:rough_ribbon} by similar arguments.
\begin{example}\rm
In the bulk, we take the $D(S_3)$ model. Then by Example~\ref{exDS3}, we have exactly 8 irreps in the bulk. At the boundary, we take $K=\{e,u\} = \mathbb{Z}_2$ with $R = \{e,uv,vu\}$. As per the table in Example~\ref{exS3n} and Proposition~\ref{prop:boundary_traces} above, we then have for example that
\[(P_{\hbox{{$\mathcal O$}}_0,-1}+P_{\hbox{{$\mathcal O$}}_1,1}){\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} {\triangleleft}_{s_1}P_{{\hbox{{$\mathcal C$}}}_1,-1}.\]
We can see this explicitly. Recall that
\[\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{s_0}{|{\rm vac}\>} = \Lambda_{\mathbb{C} K}{\triangleright}^b_{s_0}{|{\rm vac}\>} = {|{\rm vac}\>}.\]
All other vertex and face actions give 0 by orthogonality. Then,
\[P_{\hbox{{$\mathcal O$}}_0,-1} = {1\over 2}\delta_e \mathop{{\otimes}} (e-u); \quad P_{\hbox{{$\mathcal O$}}_1, 1} = (\delta_{uv} + \delta_{vu})\mathop{{\otimes}} e\]
and
\[W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1} = \sum_{c\in \{u,v,w\}}F_\xi^{c,e}-F_\xi^{c,c}\]
by Lemmas~\ref{Xiproj} and \ref{lem:quasi_basis} respectively. For convenience, we break the calculation up into two parts, one for each projector. Throughout, we will make use of Lemma~\ref{boundary_ribcom} to commute projectors through ribbon operators. First, we have that
\begin{align*}
&P_{\hbox{{$\mathcal O$}}_0,-1}{\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = {1\over 2}(\delta_e \mathop{{\otimes}} (e - u)){\triangleright}^b_{s_0} \sum_{c\in \{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c}){|{\rm vac}\>}\\
&= {1\over 2}\delta_e{\triangleright}^b_{s_0}[\sum_{c\in\{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c})-(F_\xi^{u,u}-F_\xi^{e,u}+F_\xi^{v,u}-F_\xi^{v,uv}+F_\xi^{w,u}-F_\xi^{w,vu})]{|{\rm vac}\>}\\
&= {1\over 2}[(F_\xi^{u,e}-F_\xi^{u,u})\delta_e{\triangleright}^b_{s_0}+(F_\xi^{v,e}-F_\xi^{v,v})\delta_{vu}{\triangleright}^b_{s_0}+(F_\xi^{w,e}-F_\xi^{w,w})\delta_{uv}{\triangleright}^b_{s_0}\\
&+ (F^{u,e}_\xi-F^{u,u}_\xi)\delta_e{\triangleright}^b_{s_0} + (F^{v,uv}_\xi-F^{v,u}_\xi)\delta_{vu}{\triangleright}^b_{s_0} + (F^{w,vu}_\xi-F^{w,u}_\xi)\delta_{uv}{\triangleright}^b_{s_0}]{|{\rm vac}\>}\\
&= (F_\xi^{u,e}-F_\xi^{u,u}){|{\rm vac}\>}
\end{align*}
where we used the fact that $u = eu, v=vuu, w=uvu$ to factorise these elements in terms of $R,K$. Second,
\begin{align*}
P_{\hbox{{$\mathcal O$}}_1,1}{\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} &= ((\delta_{uv} + \delta_{vu})\mathop{{\otimes}} e){\triangleright}^b_{s_0}\sum_{c\in \{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c}){|{\rm vac}\>}\\
&= (F_\xi^{v,e}-F_\xi^{v,v}+F_\xi^{w,e}-F_\xi^{w,w})(\delta_e\mathop{{\otimes}} e){\triangleright}^b_{s_0}{|{\rm vac}\>}\\
&= (F_\xi^{v,e}-F_\xi^{v,v}+F_\xi^{w,e}-F_\xi^{w,w}){|{\rm vac}\>}.
\end{align*}
The result follows immediately. All other boundary projections of $D(S_3)$ ribbon trace operators can be worked out in a similar way.
\end{example}
\begin{remark}\rm
Proposition~\ref{prop:boundary_traces} does not tell us exactly how \textit{all} ribbon operators in the quasiparticle basis are detected at the boundary, only the ribbon trace operators. We do not know a similar general formula for all ribbon operators.
\end{remark}
Now, consider a lattice in the plane with two boundaries, namely to the left and right,
\[\tikzfig{smooth_twobounds}\]
Recall that a lattice on an infinite plane admits a single ground state ${|{\rm vac}\>}$, as explained in \cite{CowMa}. However, in the present case we may also be able to use ribbon operators in the quasiparticle basis extending from one boundary, at $s_0$ say, to the other, at $s_1$ say, such that no quasiparticles are detected at either end. These ribbon operators do not form a closed, contractible loop, as all undetectable ones do in the bulk; the corresponding states $|\psi\>$ are ground states and the vacuum has increased degeneracy. We can similarly induce additional degeneracy of excited states. This justifies the term \textit{gapped boundaries}: the boundaries give rise to additional states with energies that are `gapped', that is, they have a finite energy difference $\Delta$ (which may be zero) independently of the width of the lattice.
\section{Patches}\label{sec:patches}
For any nontrivial group $G$, there are always at least two distinct choices of boundary conditions, namely $K=\{e\}$ and $K=G$. In these cases we necessarily have $R=G$ and $R=\{e\}$ respectively.
Considering $K=\{e\}$ on a smooth boundary, we can calculate that $A^b_1(v) = \mathrm{id}$ and $B^b_1(s)g = \delta_{e,g} g$, for $g$ an element corresponding to the single edge associated with the boundary site $s$. This means that after performing the measurements at a boundary, these edges are totally constrained and not part of the large entangled state incorporating the rest of the lattice, and hence do not contribute to the model whatsoever. If we remove these edges then we are left with a rough boundary, in which all edges participate, and therefore we may consider the $K=\{e\}$ case to imply a rough boundary. A similar argument applies for $K=G$ when considered on a rough boundary, which has $A^b_2(v)g = A(v)g = {1\over |G|}\sum_k kg = {1\over |G|}\sum_k k$ for an edge with state $g$ and $B^b_2(s) = \mathrm{id}$. $K=G$ therefore naturally corresponds instead to a smooth boundary, as otherwise the outer edges are totally constrained by the projectors. From now on, we will accordingly use smooth to refer always to the $K=G$ condition, and rough for $K=\{e\}$.
These boundary conditions are convenient because the condensation of bulk excitations to the vacuum at a boundary can be partially worked out in the group basis. For $K=\{e\}$, it is easy to see that the ribbon operators which are undetected at the boundary (and therefore leave the system in a vacuum state) are exactly those of the form $F_\xi^{e,g}$, for all $g\in G$, as any nontrivial $h$ in $F_\xi^{h,g}$ will be detected by the boundary face projectors. This can also be worked out representation-theoretically using Proposition~\ref{nformula}.
\begin{lemma}\label{lem:rough_functor}
Let $K=\{e\}$. Then the multiplicity of an irrep $({\hbox{{$\mathcal C$}}},\pi)$ of $D(G)$ with respect to the trivial representation of $\Xi(G,\{e\})$ is
\[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)} = \delta_{{\hbox{{$\mathcal C$}}},\{e\}}{\rm dim}(W_\pi)\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
Applying Proposition~\ref{nformula} in the case where $V_i$ is trivial, we start with
\[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G| \over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap \{e\}} |\{e\}^c| n_{1,\tilde\pi}\]
where ${\hbox{{$\mathcal C$}}}\cap \{e\} = \{e\}$ iff ${\hbox{{$\mathcal C$}}}=\{e\}$, or otherwise $\emptyset$. Also, $\tilde\pi = \oplus_{{\rm dim}(W_\pi)} (\{e\},1)$, and if ${\hbox{{$\mathcal C$}}} = \{e\}$ then $|G^{c_0}| = |G|$.
\endproof
The factor of ${\rm dim}(W_\pi)$ in the r.h.s. implies that there are no other terms in the decomposition of $i^*(\{e\},\pi)$. In physical terms, this means that the trace ribbon operators $W^{e,\pi}_\xi$ are the only undetectable trace ribbon operators, and any ribbon operators which do not lie in the block associated to $(e,\pi)$ are detectable. In fact, in this case we have a further property which is that all ribbon operators in the chargeon sector are undetectable, as by equation~(\ref{chargeon_ribbons}) chargeon sector ribbon operators are Fourier isomorphic to those of the form $F_\xi^{e,g}$ in the group basis.
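Explicitly, in the notation used later in Corollary~\ref{cor:matrix_basis}, the chargeon sector ribbon operators are
\[ F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n},\]
manifestly linear combinations of the undetectable operators $F_\xi^{e,n}$.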
In the more general case of a rough boundary with an arbitrary choice of $\Xi(R,K)$, the orientation of the ribbon is important for the representation-theoretic argument. When $K=\{e\}$, one can check that for $F^{e,g}_\xi$ the rough boundary version of Lemma~\ref{Ts0s1} applies regardless of orientation.
The $K=G$ case is slightly more complicated:
\begin{lemma}\label{lem:smooth_functor}
Let $K=G$. Then the multiplicity of an irrep $({\hbox{{$\mathcal C$}}},\pi)$ of $D(G)$ with respect to the trivial representation of $\Xi(\{e\},G)$ is
\[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)} = \delta_{\pi,1}\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
We start with
\[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={1 \over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}} |G^c| n_{1,\tilde\pi}.\]
Now, $K^{r,c} = G^c$ and so $\tilde\pi = \pi$, giving $n_{1,\tilde\pi} = \delta_{1,\pi}$. Then $\sum_{c\in{\hbox{{$\mathcal C$}}}}|G^c| = |{\hbox{{$\mathcal C$}}}||G^{c_0}|$.
\endproof
This means that the only undetectable ribbon operators between smooth boundaries are those in the fluxion sector, i.e. those with associated irrep $({\hbox{{$\mathcal C$}}}, 1)$. However, there is no factor of $|{\hbox{{$\mathcal C$}}}|$ on the r.h.s., and so the decomposition of $i^*({\hbox{{$\mathcal C$}}},1)$ will generally have additional terms other than just $(\{e\},1)$ in ${}_{\Xi(\{e\},G)}\hbox{{$\mathcal M$}}$. As a consequence, a fluxion trace ribbon operator $W^{{\hbox{{$\mathcal C$}}},1}_\zeta$ between smooth boundaries is undetectable iff its associated conjugacy class is a singleton, say ${\hbox{{$\mathcal C$}}}= \{c_0\}$, and thus $c_0 \in Z(G)$, the centre of $G$.
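For example, in the $D(S_3)$ model the centre is trivial, $Z(S_3)=\{e\}$, so the only undetectable fluxion trace ribbon operator between smooth boundaries is the trivial one with ${\hbox{{$\mathcal C$}}}=\{e\}$; by contrast, for abelian $G$ every conjugacy class is a singleton and all fluxion trace ribbon operators between smooth boundaries are undetectable.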
\begin{definition}\rm
A \textit{patch} is a finite rectangular lattice segment with two opposite smooth sides, each equipped with boundary conditions $K=G$, and two opposite rough sides, each equipped with boundary conditions $K=\{e\}$, for example:
\[\tikzfig{patch}\]
\end{definition}
One can alternatively define patches with additional sides, such as in \cite{Lit1}, or with other boundary conditions which depend on another subgroup $K$ and transversal $R$, but we find this definition convenient. Note that our definition does not put conditions on the size of the lattice; the above diagram is just a conveniently small and yet nontrivial example.
We would like to characterise the vacuum space $\hbox{{$\mathcal H$}}_{\rm vac}$ of the patch. To do this, let us begin with $|{\rm vac}_1\>$ from equation~(\ref{eq:vac1}), and denote $|e\>_L := |{\rm vac}_1\>$. This is the \textit{logical zero state} of the patch. We will use this as a reference state to calculate other states in $\hbox{{$\mathcal H$}}_{\rm vac}$.
Now, for any other state $|\psi\>$ in $\hbox{{$\mathcal H$}}_{\rm vac}$, there must exist some linear map $D \in {\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$ such that $D|e\>_L = |\psi\>$, and thus if we can characterise the algebra of linear maps ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$, we automatically characterise $\hbox{{$\mathcal H$}}_{\rm vac}$. To help with this, we have the following useful property:
\begin{lemma}\label{lem:rib_move}
Let $F_\xi^{e,g}$ be a ribbon operator for some $g\in G$, with $\xi$ extending from the top rough boundary to the bottom rough boundary. Then the endpoints of $\xi$ may be moved along the rough boundaries with $K=\{e\}$ boundary conditions while leaving the action on any vacuum state invariant.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
We explain this on an example patch with initial state $|\psi\> \in \hbox{{$\mathcal H$}}_{\rm vac}$ and a ribbon $\xi$,
\[\tikzfig{bigger_patch}\]
\[\tikzfig{bigger_patch2}\]
using the fact that $a = cb$ and $m = lk$ by the definition of $\hbox{{$\mathcal H$}}_{\rm vac}$ for the second equality. Thus, we see that the ribbon through the bulk may be deformed as usual. As the only new component of the proof concerned the endpoints, we see that this property holds regardless of the size of the patch.
\endproof
One can calculate in particular that $F_\xi^{e,g}|e\>_L = \delta_{e,g}|e\>_L$, which we will prove more generally later. The undetectable ribbon operators between the smooth boundaries are of the form
\[W^{{\hbox{{$\mathcal C$}}},1}_\zeta = \sum_{n\in G} F_\zeta^{c_0,n}\]
when ${\hbox{{$\mathcal C$}}} = \{c_0\}$ by Lemma~\ref{lem:smooth_functor}, hence $G^{c_0} = G$. Technically, this lemma only tells us the ribbon trace operators which are undetectable, but in the present case none of the individual component operators are undetectable, only the trace operators. There are thus exactly $|Z(G)|$ orthogonal undetectable ribbon operators between smooth boundaries. These do not play an important role, but we describe them to characterise the operator algebra on $\hbox{{$\mathcal H$}}_{\rm vac}$. They obey a similar rule as Lemma~\ref{lem:rib_move}, which one can check in the same way.
In addition to the ribbon operators between sides, we also have undetectable ribbon operators between corners on the lattice. These corners connect smooth and rough boundaries, and thus careful application of specific ribbon operators can avoid detection from either face or vertex measurements,
\[\tikzfig{corner_ribbons}\]
where one can check that these do indeed leave the system in a vacuum using familiar arguments about $B(p)$ and $A(v)$. We could equally define such operators extending from either left corner to either right corner, and they obey the discrete isotopy laws as in the bulk. If we apply $F_\xi^{h,g}$ for any $g\neq e$ then we have $F_\xi^{h,g}|\psi\> =0$ for any $|\psi\>\in \hbox{{$\mathcal H$}}_{\rm vac}$, and so these are the only ribbon operators of this form.
\begin{remark}\rm
Corners of boundaries are algebraically interesting themselves, and can be used for quantum computation, but for brevity we skim over them. See e.g. \cite{Bom2,Brown} for details.
\end{remark}
These corner to corner, left to right and top to bottom ribbon operators span ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$, the linear maps which leave the system in vacuum. Due to Lemma~\ref{lem:ribs_only}, all other linear maps must decompose into ribbon operators, and these are the only ribbon operators in ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$ up to linearity.
As a consequence, we have well-defined patch states $|h\>_L := \sum_gF^{h,g}_\xi|e\>_L$ for each $h\in G$, where $\xi$ is any ribbon extending from the bottom left corner to the bottom right corner. Now, working explicitly on the small patch below, we have
\[\tikzfig{wee_patch}\]
to start with, then:
\[\tikzfig{wee_patch2}\]
It is easy to see that we may always write $|h\>_L$ in this manner, for an arbitrary size of patch. Now, ribbon operators which are undetectable when $\xi$ extends from bottom to top are those of the form $F_\xi^{e,g}$, for example
\[\tikzfig{wee_patch3}\]
and so $F_\xi^{e,g}|h\>_L = \delta_{g,h}|h\>_L$, where again if we take a larger patch all additional terms will clearly cancel. Lastly, undetectable ribbon operators for a ribbon $\zeta$ extending from left to right are exactly those of the form $\sum_{n\in G} F_\zeta^{c_0,n}$ for any $c_0 \in Z(G)$. One can check that $|c_0 h\>_L = \sum_{n\in G} F_\zeta^{c_0,n} |h\>_L$, thus these give us no new states in $\hbox{{$\mathcal H$}}_{\rm vac}$.
\begin{lemma}\label{lem:patch_dimension}
For a patch with the $D(G)$ model in the bulk, ${\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}) = |G|$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
By the above characterisation of undetectable ribbon operators, the states $\{|h\>_L\}_{h\in G}$ span $\hbox{{$\mathcal H$}}_{\rm vac}$. The result then follows from the adjointness of ribbon operators, which means that the states $\{|h\>_L\}_{h\in G}$ are orthogonal.
\endproof
We can also work out that for $|{\rm vac}_2\>$ from equation~(\ref{eq:vac2}), we have $|{\rm vac}_2\> = \sum_h |h\>_L$. More generally:
\begin{corollary}\label{cor:matrix_basis}
$\hbox{{$\mathcal H$}}_{\rm vac}$ has an alternative basis with states $|\pi;i,j\>_L$, where $\pi$ is an irreducible representation of $G$ and $i,j$ are indices such that $0\leq i,j<{\rm dim}(V_\pi)$. We call this the quasiparticle basis of the patch.
\end{corollary}
{\noindent {\bfseries Proof:}\quad }
First, use the nonabelian Fourier transform on the ribbon operators $F_\xi^{e,g}$, so we have $F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n}$. If we start from the reference state $|1;0,0\>_L := \sum_h |h\>_L = |{\rm vac}_2\>$ and apply these operators with $\xi$ from bottom to top of the patch then we get
\[|\pi;i,j\>_L = F_\xi^{'e,\pi;i,j}|1;0,0\>_L = \sum_{n\in G}\pi(n^{-1})_{ji} |n\>_L\]
which are orthogonal. Now, as $\sum_{\pi\in \hat{G}}{\rm dim}(V_\pi)^2 = |G|$ and we know ${\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}) = |G|$ by the previous Lemma~\ref{lem:patch_dimension}, $\{|\pi;i,j\>_L\}_{\pi,i,j}$ forms a basis of $\hbox{{$\mathcal H$}}_{\rm vac}$.
\endproof
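For example, for $G=S_3$ the irreps have dimensions $1,1,2$, so that
\[ 1^2+1^2+2^2 = 6 = |S_3| = {\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}),\]
and the quasiparticle basis consists of the trivial and sign states $|1;0,0\>_L$, $|{\rm sgn};0,0\>_L$ together with four states $|\pi;i,j\>_L$, $i,j\in\{0,1\}$, for the $2$-dimensional irrep $\pi$.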
\begin{remark}\rm
Kitaev models are designed in general to be fault tolerant. The minimum number of component Hilbert spaces, that is copies of $\mathbb{C} G$ on edges, for which simultaneous errors can undetectably change the logical state and cause errors in the computation is called the `code distance' $d$ in the language of quantum codes. For the standard method of computation using nonabelian anyons \cite{Kit}, data is encoded using excited states, which are states with nontrivial quasiparticles at certain sites. The code distance can then be extremely small, and constant in the size of the lattice, as the smallest errors need only take the form of ribbon operators winding round a single quasiparticle at a site. This is no longer the case when encoding data in vacuum states on patches, as the only logical operators are specific ribbon operators extending from top to bottom, left to right or corner to corner. The code distance, and hence error resilience, of any vacuum state of the patch therefore increases linearly with the width of the patch as it is scaled, i.e. with the square root of the number $n$ of component Hilbert spaces in the patch, so that $n\sim d^2$.
\end{remark}
\subsection{Nonabelian lattice surgery}\label{sec:surgery}
Lattice surgery was invented as a method of fault-tolerant computation with the qubit, i.e. $\mathbb{C}\mathbb{Z}_2$, surface code \cite{HFDM}. The first author generalised it to qudit models using $\mathbb{C}\mathbb{Z}_d$ in \cite{Cow2}, and gave a fresh perspective on lattice surgery as `simulating' the Hopf algebras $\mathbb{C}\mathbb{Z}_d$ and $\mathbb{C}(\mathbb{Z}_d)$ on the logical space $\hbox{{$\mathcal H$}}_{\rm vac}$ of a patch. In this section, we prove that lattice surgery generalises to arbitrary finite group models, and `simulates' $\mathbb{C} G$ and $\mathbb{C}(G)$ in a similar way. Throughout, we assume that the projectors $A(v)$ and $B(p)$ may be performed deterministically for simplicity. In Appendix~\ref{app:measurements} we discuss the added complication that in practice we may only perform measurements which yield projections nondeterministically.
\begin{remark}\rm
When proving the linear maps that nonabelian lattice surgeries yield, we will use specific examples, but the arguments clearly hold generally. For convenience, we will also tend to omit normalising scalar factors, which do not impact the calculations as the maps are $\mathbb{C}$-linear.
\end{remark}
Let us begin with a large rectangular patch. We now remove a line of edges from left to right by projecting each one onto $e$:
\[\tikzfig{split2}\]
We call this a \textit{rough split}, as we create two new rough boundaries. We no longer apply $A(v)$ to the vertices which have had attached edges removed. If we start with a small patch in the state $|l\>_L$ for some $l\in G$ then we can explicitly calculate the linear map:
\[\tikzfig{rough_split_project}\]
where we have separated the two patches afterwards for clarity, showing that they have two separate vacuum spaces. We then have that the last expression is
\[\tikzfig{rough_split_project2}\]
Observe the factors of $g$ in particular. The state is therefore now $\sum_g |g^{-1}\>_L\otimes |gl\>_L$, where the l.h.s. of the tensor product is the Hilbert space corresponding to the top patch, and the r.h.s. to the bottom. A change of variables gives $\sum_g |g\>_L\otimes |g^{-1}l\>_L$, the outcome of comultiplication of $\mathbb{C}(G)$ on the logical state $|l\>_L$ of the original patch.
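Indeed, under the identification $|h\>_L\mapsto \delta_h$ this matches the comultiplication of $\mathbb{C}(G)$ term by term:
\[ \Delta\delta_l = \sum_{ab=l}\delta_a\mathop{{\otimes}}\delta_b = \sum_{g\in G}\delta_g\mathop{{\otimes}}\delta_{g^{-1}l}.\]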
Similarly, we can measure out a line of edges from bottom to top, for example
\[\tikzfig{split1}\]
We call this a \textit{smooth split}, as we create two new smooth boundaries. Each deleted edge is projected into the state ${1\over|G|}\sum_g g$. We also cease measurement of the faces which have had edges removed, and so we end up with two adjacent but disjoint patches. Working on a small example, we start with $|e\>_L$:
\[\tikzfig{smooth_split_project}\]
where in the last step we have taken $b\mapsto jc$, $g\mapsto kh$ from the $\delta$-functions and then a change of variables $j\mapsto jc^{-1}$, $k\mapsto kh^{-1}$ in the summation. Thus, we have ended with two disjoint patches, each in state $|e\>_L$. One can see that this works for any $|h\>_L$ in exactly the same way, and so the smooth split linear map is $|h\>_L \mapsto |h\>_L\otimes|h\>_L$, the comultiplication of $\mathbb{C} G$.
The opposite of splits are merges, whereby we take two disjoint patches and introduce edges to bring them together to a single patch. For the rough merge below, say we start with the basis states $|k\>_L$ and $|j\>_L$ on the bottom and top. First, we introduce an additional joining edge in the state $e$.
\[\tikzfig{merge1}\]
This state $|\psi\>$ automatically satisfies $B(p)|\psi\> = |\psi\>$ everywhere, but it does not satisfy the conditions on vertices, so we apply $A(v)$ to the two vertices adjacent to the new edge. This gives
\[\tikzfig{rough_merge_project}\]
which by performing repeated changes of variables yields
\[\tikzfig{rough_merge_project2}\]
Thus the rough merge yields the map $|j\>_L\otimes|k\>_L\mapsto|jk\>_L$, the multiplication of $\mathbb{C} G$, where again the tensor factors are in order from top to bottom.
Similarly, we perform a smooth merge with the states $|j\>_L, |k\>_L$ as
\[\tikzfig{merg2}\]
We introduce a pair of edges connecting the two patches, each in the state $\sum_m m$.
\[\tikzfig{smooth_merge_project}\]
The resultant patch automatically satisfies the conditions relating to $A(v)$, but we must apply $B(p)$ to the freshly created faces to acquire a state in $\hbox{{$\mathcal H$}}_{\rm vac}$, giving
\[\tikzfig{smooth_merge_project2}\]
where the $B(p)$ applications introduced the $\delta$-functions
\[\delta_{e}(bf^{-1}m^{-1}),\quad \delta_{e}(dh^{-1}n^{-1}),\quad \delta_e(dj^{-1}b^{-1}bf^{-1}fkh^{-1}hd^{-1}) = \delta_e(j^{-1}k).\]
In summary, the linear map on logical states is evidently $|j\>_L\otimes |k\>_L \mapsto \delta_{j,k}|j\>_L$, the multiplication of $\mathbb{C}(G)$.
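Collecting the four operations, on logical states we have
\[ \text{rough split} = \Delta_{\mathbb{C}(G)},\qquad \text{smooth split} = \Delta_{\mathbb{C} G},\qquad \text{rough merge} = \cdot_{\mathbb{C} G},\qquad \text{smooth merge} = \cdot_{\mathbb{C}(G)},\]
under the identifications $|h\>_L\leftrightarrow h\in \mathbb{C} G$ and $|h\>_L\leftrightarrow \delta_h\in\mathbb{C}(G)$ respectively.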
The units of $\mathbb{C} G$ and $\mathbb{C}(G)$ are given by the states $|e\>_L$ and $|1;0,0\>_L$ respectively. The counits are given by the maps $|g\>_L \mapsto 1$ and $|g\>_L\mapsto \delta_{g,e}$ respectively. The logical antipode $S_L$ is given by applying the antipode to each edge individually, i.e. inverting all group elements. For example:
\[\tikzfig{antipode_1A}\]
This state is now no longer in the original $\hbox{{$\mathcal H$}}_{\rm vac}$, so to compensate we must modify the lattice. We flip all arrows in the lattice -- this is only a conceptual flip, and does not require any physical modification:
\[\tikzfig{antipode_1B}\]
This amounts to exchanging left and right regular representations, and redefining the Hamiltonian accordingly. In the resultant new vacuum space, the state is now $|g^{-1}\>_L = F_\xi^{e,g^{-1}}|e\>_L$, with $\xi$ running from the bottom left corner to bottom right as previously.
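On logical states the effect is $S_L|g\>_L = |g^{-1}\>_L$, which agrees with the antipodes of both structures: $S(g)=g^{-1}$ in $\mathbb{C} G$ and $S(\delta_g)=\delta_{g^{-1}}$ in $\mathbb{C}(G)$.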
\begin{remark}\rm
This trick of redefining the vacuum space is employed in \cite{HFDM} to perform logical Hadamards, although in their case the lattice is rotated by $\pi/2$, and the edges are directionless as the model is restricted to $\mathbb{C}\mathbb{Z}_2$.
\end{remark}
Thus, we have all the ingredients of the Hopf algebras $\mathbb{C} G$ and $\mathbb{C}(G)$ on the same vector space $\hbox{{$\mathcal H$}}_{\rm vac}$. For applications, one should like to know which quantum computations can be performed using these algebras (ignoring the subtlety with nondeterministic projectors). Recall that a quantum computer is called approximately universal if for any target unitary $U$ and desired accuracy ${\epsilon}\in\mathbb{R}$, the computer can perform a unitary $V$ such that $||V-U||\leq{\epsilon}$, i.e. the operator norm error of $V$ from $U$ is no greater than ${\epsilon}$.
We believe that when the computer is equipped with just the states $\{|h\>_L\}_{h\in G}$ and the maps from lattice surgery above then one cannot achieve approximately universal computation \cite{Stef}, but leave the proof to a further paper. If we also have access to all matrix algebra states $|\pi;i,j\>_L$ as defined in Corollary~\ref{cor:matrix_basis}, we do not know whether the model of computation is then universal for some choice of $G$, and we do not know whether these states can be prepared efficiently. In fact, how these states are defined depends on a choice of basis for each irrep, so whether it is universal may depend not only on the choice of $G$ but also on the choices of basis. This is an interesting question for future work.
\section{Quasi-Hopf algebra structure of $\Xi(R,K)$}\label{sec:quasi}
We now return to our boundary algebra $\Xi$. It is known that $\Xi$ has a great deal more structure, which we give more explicitly in this section than we have seen elsewhere. This structure generalises a well-known bicrossproduct Hopf algebra construction for when a finite group $G$ factorises as $G=RK$ into two subgroups $R,K$. Then each acts on the set of the other to form a {\em matched pair of actions} ${\triangleright},{\triangleleft}$ and we use ${\triangleright}$ to make a cross product algebra $\mathbb{C} K{\triangleright\!\!\!<} \mathbb{C}(R)$ (which has the same form as our algebra $\Xi$ except that we have chosen to flip the tensor factors) and ${\triangleleft}$ to make a cross product coalgebra $\mathbb{C} K{>\!\!\blacktriangleleft} \mathbb{C}(R)$. These fit together to form a bicrossproduct Hopf algebra $\mathbb{C} K{\triangleright\!\blacktriangleleft} \mathbb{C}(R)$. This construction has been used in the Lie group version to construct quantum Poincar\'e groups for quantum spacetimes\cite{Ma:book}.
The more general case, where we are just given a subgroup $K\subseteq G$ and a choice of transversal $R$ with the group identity $e\in R$, was considered in \cite{Be}. As we noted, we still have unique factorisation $G=RK$ but in general $R$ need not be a group. We can still follow the same steps. First of all, unique factorisation entails that $R\cap K=\{e\}$. It also implies maps
\[{\triangleright} : K\times R \rightarrow R,\quad {\triangleleft}: K\times R\rightarrow K,\quad \cdot : R\times R \rightarrow R,\quad \tau: R \times R \rightarrow K\]
defined by
\[xr = (x{\triangleright} r)(x{\triangleleft} r),\quad rs = r\cdot s \tau(r,s)\]
for all $x\in K$ and $r,s\in R$, but this time these inherit the properties
\begin{align} (xy) {\triangleright} r &= x {\triangleright} (y {\triangleright} r), \quad e {\triangleright} r = r,\nonumber\\ \label{lax}
x {\triangleright} (r\cdot s)&=(x {\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} s),\quad x {\triangleright} e = e,\end{align}
\begin{align}
(x{\triangleleft} r){\triangleleft} s &= \tau(x{\triangleright} r, (x{\triangleleft} r){\triangleright} s)^{-1}\left(x{\triangleleft} (r\cdot s)\right)\tau(r,s),\quad
x {\triangleleft} e = x,\nonumber\\ \label{rax}
(xy) {\triangleleft} r &= (x{\triangleleft} (y{\triangleright} r))(y{\triangleleft} r),\quad e{\triangleleft} r = e,\end{align}
\begin{align}
\tau(r, s\cdot t)\tau(s,t)& = \tau\left(r\cdot s,\tau(r,s){\triangleright} t\right)(\tau(r,s){\triangleleft} t),\quad \tau(e,r) = \tau(r,e) = e,\nonumber\\ \label{tax}
r\cdot(s\cdot t) &= (r\cdot s)\cdot(\tau(r,s){\triangleright} t),\quad r\cdot e=e\cdot r=r\end{align}
for all $x,y\in K$ and $r,s,t\in R$. We see from (\ref{lax}) that ${\triangleright}$ is indeed an action (we have been using it in preceding sections) but ${\triangleleft}$ in (\ref{rax}) is an action only up to $\tau$ (termed a `quasiaction' in \cite{KM2}). Both ${\triangleright},{\triangleleft}$ `act' almost by automorphisms, but with a back-reaction by the other just as for a matched pair of groups. Meanwhile, we see from (\ref{tax}) that $\cdot$ is associative only up to $\tau$ and that $\tau$ itself obeys a kind of cocycle condition.
Clearly, $R$ is a subgroup via $\cdot$ if and only if $\tau(r,s)=e$ for all $r,s$, and in this case we already see that $\Xi(R,K)$ is a bicrossproduct Hopf algebra, with the only difference being that we prefer to build it on the flipped tensor factors. More generally, \cite{Be} showed that there is still a natural monoidal category associated to this data but with nontrivial associators. By Tannaka-Krein reconstruction, this corresponds to $\Xi$ as a quasi-bialgebra, which in some cases is a quasi-Hopf algebra\cite{Nat}. Here we will give these latter structures explicitly and in maximum generality compared to the literature (but still needing a restriction on $R$ for the antipode to be in a regular form). We will also show that the obvious $*$-algebra structure makes $\Xi$ a $*$-quasi-Hopf algebra in an appropriate sense under restrictions on $R$. These aspects are new, but more importantly, we give direct proofs at an algebraic level rather than categorical arguments, which we believe are essential for detailed calculations. Related works on similar algebras and coset decompositions include \cite{PS,KM1} in addition to \cite{Be,Nat,KM2}.
\begin{lemma}\cite{Be,Nat,KM2}
$(R,\cdot)$ has the same unique identity $e$ as $G$ and has the left division property, i.e. for all $t, s\in R$, there is a unique solution $r\in R$ to the equation $s\cdot r = t$ (one writes $r = s\backslash t$). In particular, we let $r^R$ denote the unique solution to $r\cdot r^R=e$, which we call a right inverse.\end{lemma}
This means that $(R,\cdot)$ is a left loop (a left quasigroup with identity). The left division property says precisely that the multiplication table for $(R,\cdot)$ contains each element of $R$ exactly once in each row; in particular, there is one instance of $e$ in each row. One can recover $G$ knowing $(R,\cdot)$, $K$ and the data ${\triangleright},{\triangleleft},\tau$\cite[Prop.~3.4]{KM2}. Note that the parallel property of a left inverse $(\ )^L$ need not hold.
\begin{definition} We say that $R$ is {\em regular} if $(\ )^R$ is bijective.
\end{definition}
$R$ is regular iff it has both left and right inverses, and this holds iff it satisfies $RK=KR$ by \cite[Prop.~3.5]{KM2}. If there is also right division then we have a loop (a quasigroup with identity) and under further conditions\cite[Prop.~3.6]{KM2} we have $r^L=r^R$ and a 2-sided inverse property quasigroup. The case of regular $R$ is studied in \cite{Nat}, but this excludes some interesting choices of $R$ and we do not always assume it. Throughout, we will specify when $R$ is required to be regular for results to hold. Finally, if $R$ obeys the further condition $x{\triangleright}(s\cdot t)=(x{\triangleright} s){\triangleright} t$ in \cite{KM2} then $\Xi$ is a Hopf quasigroup in the sense introduced in \cite{KM1}. This is even more restrictive, but will apply to our octonions-related example. Here we just give the choices of transversal for our running $S_3$ example.
\begin{example}\label{exS3R}\rm $G=S_3$ with $K=\{e,u\}$ has four choices of transversal $R$ meeting our requirement that $e\in R$. Namely
\begin{enumerate}
\item $R=\{e,uv,vu\}$ (our standard choice) {\em is a subgroup} $R=\mathbb{Z}_3$, so it is associative and there is 2-sided division and a 2-sided inverse. We also have $u{\triangleright}(uv)=vu, u{\triangleright} (vu)=uv$ but ${\triangleleft},\tau$ trivial.
\item $R=\{e,w,v\}$ which is {\em not a subgroup} and indeed $\tau(v,w)=\tau(w,v)=u$ (and all others are necessarily $e$). There is an action $u{\triangleright} v=w, u{\triangleright} w=v$ but ${\triangleleft}$ is still trivial. For example,
\begin{align*} vw&=wu \Rightarrow v\cdot w=w,\ \tau(v,w)=u;\quad wv=vu \Rightarrow w\cdot v=v,\ \tau(w,v)=u\\
uv&=wu \Rightarrow u{\triangleright} v=w,\ u{\triangleleft} v=u;\quad uw=vu \Rightarrow u{\triangleright} w=v,\ u{\triangleleft} w=u. \end{align*}
This has left division/right inverses as it must but {\em not right division} as $e\cdot w=v\cdot w=w$ and $e\cdot v=w\cdot v=v$. We also have $v\cdot v=w\cdot w=e$ and $(\ )^R$ is bijective so this {\em is regular}.
\item $R=\{e,uv, v\}$ which is {\em not a subgroup} and $\tau,{\triangleright},{\triangleleft}$ are all nontrivial with
\begin{align*} \tau(uv,uv)&=\tau(v,uv)=\tau(uv,v)=u,\quad \tau(v,v)=e,\\
v\cdot v&=e,\quad v\cdot uv=uv,\quad uv\cdot v=e,\quad uv\cdot uv=v,\\
u{\triangleright} v&=uv,\quad u{\triangleright} (uv)=v,\quad u{\triangleleft} v=e,\quad u{\triangleleft} uv=e\end{align*}
and all other cases determined from the properties of $e$. Here $v^R=v$ and $(uv)^R=v$ so this is {\em not regular}.
\item $R=\{e,w,vu\}$ which is analogous to the preceding case, so {\em not a subgroup}, $\tau,{\triangleright},{\triangleleft}$ all nontrivial and {\em not regular}.
\end{enumerate}
\end{example}
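All of the data in this example is determined mechanically by unique factorisation, so it can be checked by computer. The following Python sketch is illustrative only and not part of the formal development; the names \texttt{mul}, \texttt{act}, \texttt{back}, \texttt{dot}, \texttt{tau}, \texttt{regular} are our own. It realises $S_3$ as permutations with $u=(12)$, $v=(23)$, recovers ${\triangleright},{\triangleleft},\cdot,\tau$ from $xr=(x{\triangleright} r)(x{\triangleleft} r)$ and $rs=(r\cdot s)\tau(r,s)$, and confirms the values and (non)regularity stated in each case, as well as the characterisation of regularity by $RK=KR$ noted above.

```python
# S3 as permutations of (1,2,3): p[i-1] is the image of i, with (f*g)(i) = f(g(i))
def mul(f, g): return tuple(f[g[i] - 1] for i in range(3))

e, u, v = (1, 2, 3), (2, 1, 3), (1, 3, 2)    # u = (12), v = (23)
w, uv, vu = mul(u, mul(v, u)), mul(u, v), mul(v, u)
K = [e, u]

def data(R):
    """Recover act = x▷r, back = x◁r, dot = r·s, tau = τ(r,s) from G = RK."""
    def factor(g):
        hits = [(r, k) for r in R for k in K if mul(r, k) == g]
        assert len(hits) == 1                 # unique factorisation G = RK
        return hits[0]
    return (lambda x, r: factor(mul(x, r))[0], lambda x, r: factor(mul(x, r))[1],
            lambda r, s: factor(mul(r, s))[0], lambda r, s: factor(mul(r, s))[1])

def regular(R):                               # is ( )^R bijective?
    dot = data(R)[2]
    return len({next(s for s in R if dot(r, s) == e) for r in R}) == len(R)

# choice 1: the subgroup Z3; ◁ and τ trivial
act, back, dot, tau = data([e, uv, vu])
assert act(u, uv) == vu and act(u, vu) == uv
assert all(tau(r, s) == e and back(u, r) == u for r in [e, uv, vu] for s in [e, uv, vu])

# choice 2: R = {e,w,v}; τ(v,w) = τ(w,v) = u, ◁ trivial, regular
act, back, dot, tau = data([e, w, v])
assert dot(v, w) == w and tau(v, w) == u and dot(w, v) == v and tau(w, v) == u
assert act(u, v) == w and back(u, v) == u and act(u, w) == v and back(u, w) == u
assert dot(v, v) == dot(w, w) == e and regular([e, w, v])

# choice 3: R = {e,uv,v}; ▷, ◁, τ all nontrivial, not regular
act, back, dot, tau = data([e, uv, v])
assert tau(uv, uv) == tau(v, uv) == tau(uv, v) == u and tau(v, v) == e
assert dot(v, v) == e and dot(v, uv) == uv and dot(uv, v) == e and dot(uv, uv) == v
assert act(u, v) == uv and act(u, uv) == v and back(u, v) == back(u, uv) == e
assert not regular([e, uv, v])

# regularity coincides with RK = KR as subsets of G [KM2, Prop. 3.5]
for R in ([e, uv, vu], [e, w, v], [e, uv, v], [e, w, vu]):
    assert regular(R) == (len({mul(x, r) for x in K for r in R}) == 6)
```

The same script confirms that choice 4 behaves like choice 3, as asserted above.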
We will also need the following useful lemma in some of our proofs.
\begin{lemma}\label{leminv}\cite{KM2} For any transversal $R$ with $e\in R$, we have
\begin{enumerate}
\item $(x{\triangleleft} r)^{-1}=x^{-1}{\triangleleft}(x{\triangleright} r)$.
\item $(x{\triangleright} r)^R=(x{\triangleleft} r){\triangleright} r^R$.
\item $\tau(r,r^R)^{-1}{\triangleleft} r=\tau(r^R,r^{RR})^{-1}$.
\item $\tau(r,r^R)^{-1}{\triangleright} r=r^{RR}$.
\end{enumerate}
for all $x\in K, r\in R$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad } The first two items are elementary from the matched pair axioms. For (1), we use $e=(x^{-1}x){\triangleleft} r=(x^{-1}{\triangleleft}(x{\triangleright} r))(x{\triangleleft} r)$ and for (2) $e=x{\triangleright}(r\cdot r^R)=(x{\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} r^R)$. The other two items are a left-right reversal of \cite[Lem.~3.2]{KM2} but given here for completeness. For (3), using (4) for the second equality, \begin{align*} e&=(\tau(r,r^R)\tau(r,r^R)^{-1}){\triangleleft} r=(\tau(r,r^R){\triangleleft} (\tau(r,r^R)^{-1}{\triangleright} r))(\tau(r,r^R)^{-1}{\triangleleft} r)\\
&=(\tau(r,r^R){\triangleleft} r^{RR})(\tau(r,r^R)^{-1}{\triangleleft} r)\end{align*}
which we combine with
\[ \tau(r^R,r^{RR})=\tau(r, r^R\cdot r^{RR})\tau(r^R,r^{RR})=\tau(r\cdot r^R, \tau(r,r^R){\triangleright} r^{RR})(\tau(r,r^R){\triangleleft} r^{RR})=\tau(r,r^R){\triangleleft} r^{RR}\]
by the cocycle property. For (4), $\tau(r,r^R){\triangleright} r^{RR}=(r\cdot r^R)\cdot(\tau(r,r^R){\triangleright} r^{RR})=r\cdot (r^R\cdot r^{RR})=r$
by one of the matched pair conditions, which is (4) on applying $\tau(r,r^R)^{-1}{\triangleright}$. \endproof
Using this lemma, it is not hard to prove, cf.\ \cite[Prop.~3.3]{KM2}, that
\begin{equation}\label{leftdiv}s\backslash t=s^R\cdot(\tau(s,s^R)^{-1}{\triangleright} t);\quad s\cdot(s\backslash t)=s\backslash(s\cdot t)=t,\end{equation}
which can also be useful in calculations.
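For the non-regular transversal $R=\{e,uv,v\}$ of Example~\ref{exS3R}, where ${\triangleleft}$ and $\tau$ are both nontrivial, the identities of Lemma~\ref{leminv} and the left division formula (\ref{leftdiv}) can be confirmed by direct machine computation. The following Python sketch is illustrative only; the names are our own, with \texttt{rinv} for $(\ )^R$ and \texttt{ldiv} for $\backslash$.

```python
# S3 as permutations of (1,2,3); (f*g)(i) = f(g(i))
def mul(f, g): return tuple(f[g[i] - 1] for i in range(3))
def inv(f):    return tuple(sorted(range(1, 4), key=lambda i: f[i - 1]))

e, u, v = (1, 2, 3), (2, 1, 3), (1, 3, 2)
uv = mul(u, v)
K, R = [e, u], [e, uv, v]        # transversal with τ and ◁ both nontrivial

def factor(g):                   # unique g = r k with r in R, k in K
    return next((r, k) for r in R for k in K if mul(r, k) == g)

act  = lambda x, r: factor(mul(x, r))[0]    # x ▷ r
back = lambda x, r: factor(mul(x, r))[1]    # x ◁ r
dot  = lambda r, s: factor(mul(r, s))[0]    # r · s
tau  = lambda r, s: factor(mul(r, s))[1]    # τ(r,s)
rinv = lambda r: next(s for s in R if dot(r, s) == e)     # r^R
ldiv = lambda s, t: next(r for r in R if dot(s, r) == t)  # s \ t

for x in K:
    for r in R:
        t = tau(r, rinv(r))
        # (1) (x◁r)^{-1} = x^{-1}◁(x▷r)   and   (2) (x▷r)^R = (x◁r)▷r^R
        assert inv(back(x, r)) == back(inv(x), act(x, r))
        assert rinv(act(x, r)) == act(back(x, r), rinv(r))
        # (3) τ(r,r^R)^{-1}◁r = τ(r^R,r^RR)^{-1}   and   (4) τ(r,r^R)^{-1}▷r = r^RR
        assert back(inv(t), r) == inv(tau(rinv(r), rinv(rinv(r))))
        assert act(inv(t), r) == rinv(rinv(r))

# left division formula: s\t = s^R · (τ(s,s^R)^{-1} ▷ t)
for s in R:
    for t in R:
        assert ldiv(s, t) == dot(rinv(s), act(inv(tau(s, rinv(s))), t))
        assert dot(s, ldiv(s, t)) == t and ldiv(s, dot(s, t)) == t
```

Note that this transversal has $(uv)^R=v^R=v$, so the checks exercise the lemma away from the regular case.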
\subsection{$\Xi(R,K)$ as a quasi-bialgebra}
We recall that a quasi-bialgebra is a unital algebra $H$, a coproduct $\Delta:H\to H\mathop{{\otimes}} H$ which is an algebra map but is no longer required to be coassociative, and ${\epsilon}:H\to \mathbb{C}$ a counit for $\Delta$ in the usual sense $(\mathrm{id}\mathop{{\otimes}}{\epsilon})\Delta=({\epsilon}\mathop{{\otimes}}\mathrm{id})\Delta=\mathrm{id}$. Instead, we have a weaker form of coassociativity\cite{Dri,Ma:book}
\[ (\mathrm{id}\mathop{{\otimes}}\Delta)\Delta=\phi((\Delta\mathop{{\otimes}}\mathrm{id})\Delta(\ ))\phi^{-1}\]
for an invertible element $\phi\in H^{\mathop{{\otimes}} 3}$ obeying the 3-cocycle identity
\[ (1\mathop{{\otimes}}\phi)((\mathrm{id}\mathop{{\otimes}}\Delta\mathop{{\otimes}}\mathrm{id})\phi)(\phi\mathop{{\otimes}} 1)=((\mathrm{id}\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\Delta)\phi)(\Delta\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\mathrm{id})\phi,\quad (\mathrm{id}\mathop{{\otimes}}{\epsilon}\mathop{{\otimes}}\mathrm{id})\phi=1\mathop{{\otimes}} 1\]
(it follows that ${\epsilon}$ in the other positions also gives $1\mathop{{\otimes}} 1$).
In our case, we already know that $\Xi(R,K)$ is a unital algebra.
\begin{lemma}\label{Xibialg} $\Xi(R,K)$ is a quasi-bialgebra with
\[ \Delta x=\sum_{s\in R}x\delta_s \mathop{{\otimes}} x{\triangleleft} s, \quad \Delta \delta_r = \sum_{s,t\in R} \delta_{s\cdot t,r}\delta_{s}\otimes \delta_{t},\quad {\epsilon} x=1,\quad {\epsilon} \delta_r=\delta_{r,e}\]
for all $x\in K, r\in R$, and
\[ \phi=\sum_{r,s\in R} \delta_r \otimes \delta_s \otimes \tau(r,s)^{-1},\quad \phi^{-1} = \sum_{r,s\in R} \delta_r\otimes \delta_s \otimes \tau(r,s).\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
This follows by reconstruction arguments, but it is useful to check directly,
\begin{align*}
(\Delta x)(\Delta y)&=\sum_{s,r}(x\delta_s\mathop{{\otimes}} x{\triangleleft} s)(y\delta_r\mathop{{\otimes}} y{\triangleleft} r)=\sum_{s,r}x\delta_sy\delta_r\mathop{{\otimes}} ( x{\triangleleft} s)( y{\triangleleft} r)\\
&=\sum_{r,s}xy\delta_{y^{-1}{\triangleright} s}\delta_r\mathop{{\otimes}} (x{\triangleleft} s)(y{\triangleleft} r)=\sum_r xy \delta_r\mathop{{\otimes}} (x{\triangleleft}(y{\triangleright} r))(y{\triangleleft} r)=\Delta(xy)
\end{align*}
as $s=y{\triangleright} r$ and using the formula for $(xy){\triangleleft} r$ at the end. Also,
\begin{align*}
\Delta(\delta_{x{\triangleright} s}x)&=(\Delta\delta_{x{\triangleright} s})(\Delta x)=\sum_{r,\, p\cdot t=x{\triangleright} s}\delta _p x\delta_r\mathop{{\otimes}} \delta_t (x{\triangleleft} r)\\
&=\sum_{r,\, p\cdot t=x{\triangleright} s}x\delta_{x^{-1}{\triangleright} p}\delta_r\mathop{{\otimes}} (x{\triangleleft} r)\delta_{(x{\triangleleft} r)^{-1}{\triangleright} t}=\sum_{(x{\triangleright} r)\cdot t=x{\triangleright} s}x \delta_r\mathop{{\otimes}} (x{\triangleleft} r)\delta_{(x{\triangleleft} r)^{-1}{\triangleright} t}\\
&=\sum_{(x{\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} t')=x{\triangleright} s}x \delta_r\mathop{{\otimes}} (x{\triangleleft} r)\delta_{t'}=\sum_{r\cdot t'=s}x\delta_r\mathop{{\otimes}} (x{\triangleleft} r)\delta_{t'}=(\Delta x)(\Delta \delta _s)=\Delta(x\delta_s)
\end{align*}
using the formula for $x{\triangleright}(r\cdot t')$. This says that the coproducts stated are compatible with the algebra cross relations. Similarly, one can check that
\begin{align*}
(\sum_{p,r}\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}} &\tau(p,r))((\mathrm{id}\mathop{{\otimes}}\Delta )\Delta x)=\sum_{p,r,s,t}(\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}} \tau(p,r))(x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}} (x{\triangleleft} s){\triangleleft} t)\\
&=\sum_{p,r,s,t}\delta_px\delta_s\mathop{{\otimes}}\delta_r(x{\triangleleft} s)\delta_t\mathop{{\otimes}} \tau(p,r)((x{\triangleleft} s){\triangleleft} t)\\
&=\sum_{s,t}x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}\tau(x{\triangleright} s,(x{\triangleleft} s){\triangleright} t)((x{\triangleleft} s){\triangleleft} t)\\
&=\sum_{s,t}x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}(x{\triangleleft}(s\cdot t))\tau(s,t)\\
&=\sum_{p,r,s,t}(x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}(x{\triangleleft}(s\cdot t)))(\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}}\tau(p,r))\\
&=( (\Delta\mathop{{\otimes}}\mathrm{id})\Delta x ) (\sum_{p,r}\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}}\tau(p,r))
\end{align*}
as $p=x{\triangleright} s$ and $r=(x{\triangleleft} s){\triangleright} t$ and using the formula for $(x{\triangleleft} s){\triangleleft} t$. For the remaining cocycle relations, we have
\begin{align*}
(\mathrm{id}\mathop{{\otimes}}{\epsilon}\mathop{{\otimes}}\mathrm{id})\phi = \sum_{r,s}\delta_{s,e}\delta_r\mathop{{\otimes}}\tau(r,s)^{-1} = \sum_r\delta_r\mathop{{\otimes}} 1 = 1\mathop{{\otimes}} 1
\end{align*}
and
\[ (1\mathop{{\otimes}}\phi)((\mathrm{id}\mathop{{\otimes}}\Delta\mathop{{\otimes}}\mathrm{id})\phi)(\phi\mathop{{\otimes}} 1)=\sum_{r,s,t}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}} \delta_t\tau(r,s)^{-1}\mathop{{\otimes}}\tau(s,t)^{-1}\tau(r,s\cdot t)^{-1}\]
after multiplying out $\delta$-functions and renaming variables. Using the value of $\Delta\tau(r,s)^{-1}$ and similarly multiplying out, we obtain on the other side
\begin{align*} ((\mathrm{id}\mathop{{\otimes}}&\mathrm{id}\mathop{{\otimes}}\Delta)\phi)(\Delta\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\mathrm{id})\phi=\sum_{r,s,t}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\tau(r,s)^{-1}\delta_t\mathop{{\otimes}}(\tau(r,s)^{-1}{\triangleleft} t)\tau(r\cdot s,t)^{-1}\\
&=\sum_{r,s,t'}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\delta_{t'}\tau(r,s)^{-1}\mathop{{\otimes}}(\tau(r,s)^{-1}{\triangleleft} (\tau(r,s){\triangleright} t'))\tau(r\cdot s,\tau(r,s){\triangleright} t')^{-1}\\
&=\sum_{r,s,t'}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\delta_{t'}\tau(r,s)^{-1}\mathop{{\otimes}}(\tau(r,s){\triangleleft} t')^{-1}\tau(r\cdot s,\tau(r,s){\triangleright} t')^{-1},
\end{align*}
where we change summation to $t'=\tau(r,s){\triangleright} t$ then use Lemma~\ref{leminv}. Renaming $t'$ to $t$, the two sides are equal in view of the cocycle identity for $\tau$. Thus, we have a quasi-bialgebra with $\phi$ as stated.
\endproof
We can also write the coproduct (and the other structures) more explicitly.
\begin{remark}\rm (1) If we want to write the coproduct on $\Xi$ explicitly as a vector space, the above becomes
\[ \Delta(\delta_r\mathop{{\otimes}} x)=\sum_{s\cdot t=r}\delta_s\mathop{{\otimes}} x\mathop{{\otimes}}\delta_t\mathop{{\otimes}} (x^{-1}{\triangleleft} s)^{-1},\quad {\epsilon}(\delta_r\mathop{{\otimes}} x)=\delta_{r,e}\]
which is ugly due to our decision to build it on $\mathbb{C}(R)\mathop{{\otimes}}\mathbb{C} K$. (2) If we built it on the other order then we could have $\Xi=\mathbb{C} K{\triangleright\!\!\!<} \mathbb{C}(R)$ as an algebra, where we have a right action
\[ (f{\triangleleft} x)(r)= f(x{\triangleright} r);\quad \delta_r{\triangleleft} x=\delta_{x^{-1}{\triangleright} r}\]
on $f\in \mathbb{C}(R)$. Now make a right-handed cross product
\[ (x\mathop{{\otimes}} \delta_r)(y\mathop{{\otimes}} \delta_s)= xy\mathop{{\otimes}} (\delta_r{\triangleleft} y)\delta_s=xy\mathop{{\otimes}}\delta_s\delta_{r,y{\triangleright} s}\]
which has cross relations $\delta_r y=y\delta_{y^{-1}{\triangleright} r}$. These are the same relations as before. So this is the same algebra, just with the basis $\{x\delta_r\}$ prioritised instead of the other way around. This time, we have
\[ \Delta (x\mathop{{\otimes}}\delta_r)=\sum_{s\cdot t=r} x\mathop{{\otimes}}\delta_s\mathop{{\otimes}} x{\triangleleft} s\mathop{{\otimes}}\delta_t.\]
We do not do this, in order to be compatible with the most common form of $D(G)$ as $\mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$ as in \cite{CowMa}.
\end{remark}
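As a sanity check on Lemma~\ref{Xibialg} and on the explicit form of the coproduct, one can realise $\Xi(R,K)$ concretely for $G=S_3$, $K=\{e,u\}$ and the non-regular transversal $R=\{e,uv,v\}$, with elements of $\Xi^{\mathop{{\otimes}} n}$ stored as dictionaries over the basis $\{\delta_r\mathop{{\otimes}} x\}$. The following Python sketch is illustrative only (all names are ours); it verifies by brute force that $\Delta$ is an algebra map, that ${\epsilon}$ is a counit, that $\phi$ implements quasi-coassociativity and that $\phi$ obeys the 3-cocycle identity.

```python
from itertools import product

# S3 as permutations of (1,2,3); (f*g)(i) = f(g(i))
def mul(f, g): return tuple(f[g[i] - 1] for i in range(3))
def inv(f):    return tuple(sorted(range(1, 4), key=lambda i: f[i - 1]))

e, u, v = (1, 2, 3), (2, 1, 3), (1, 3, 2)
K, R = [e, u], [e, mul(u, v), v]              # non-regular transversal of S3

def factor(g):                                # unique factorisation g = r k
    return next((r, k) for r in R for k in K if mul(r, k) == g)
act  = lambda x, r: factor(mul(x, r))[0]      # x ▷ r
back = lambda x, r: factor(mul(x, r))[1]      # x ◁ r
dot  = lambda r, s: factor(mul(r, s))[0]      # r · s
tau  = lambda r, s: factor(mul(r, s))[1]      # τ(r,s)

basis = list(product(R, K))                   # basis δ_r ⊗ x of Ξ(R,K)

def tmul(a, b):                               # product in Ξ^{⊗n}; keys are n-tuples of basis labels
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            key = []
            for (r, x), (s, y) in zip(ka, kb):
                if r != act(x, s):            # (δ_r⊗x)(δ_s⊗y) = δ_{r,x▷s} δ_r⊗xy
                    break
                key.append((r, mul(x, y)))
            else:
                k = tuple(key)
                out[k] = out.get(k, 0) + ca * cb
    return {k: c for k, c in out.items() if c}

def delta_b(r, x):                            # Δ(δ_r⊗x) expanded on the basis
    return {((s, x), (t, inv(back(inv(x), s)))): 1
            for s in R for t in R if dot(s, t) == r}

def dmap(a, pos):                             # apply Δ to tensor slot `pos`
    out = {}
    for key, c in a.items():
        for dk, dc in delta_b(*key[pos]).items():
            k = key[:pos] + dk + key[pos + 1:]
            out[k] = out.get(k, 0) + c * dc
    return out

def eps_slot(a, pos):                         # apply ε(δ_r⊗x) = δ_{r,e} to slot `pos`
    out = {}
    for key, c in a.items():
        if key[pos][0] == e:
            k = key[:pos] + key[pos + 1:]
            out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

phi = {((r, e), (s, e), (t, inv(tau(r, s)))): 1
       for r in R for s in R for t in R}      # φ = Σ δ_r⊗δ_s⊗τ(r,s)^{-1}

# Δ is an algebra map and Δ(1) = 1⊗1
one = {((r, e),): 1 for r in R}
assert dmap(one, 0) == {((s, e), (t, e)): 1 for s in R for t in R}
for a in basis:
    for b in basis:
        A, B = {(a,): 1}, {(b,): 1}
        assert dmap(tmul(A, B), 0) == tmul(dmap(A, 0), dmap(B, 0))

# counit, and quasi-coassociativity in the form φ((Δ⊗id)Δ a) = ((id⊗Δ)Δ a)φ
for b in basis:
    A = {(b,): 1}
    D = dmap(A, 0)
    assert eps_slot(D, 0) == A and eps_slot(D, 1) == A
    assert tmul(phi, dmap(D, 0)) == tmul(dmap(D, 1), phi)

# 3-cocycle identity for φ
one_phi = {((a, e),) + k: c for a in R for k, c in phi.items()}
phi_one = {k + ((a, e),): c for a in R for k, c in phi.items()}
assert tmul(tmul(one_phi, dmap(phi, 1)), phi_one) == tmul(dmap(phi, 2), dmap(phi, 0))
```

Since this transversal is not regular, the check also illustrates that the quasi-bialgebra structure itself requires no regularity assumption.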
\subsection{$\Xi(R,K)$ as a quasi-Hopf algebra}
A quasi-bialgebra is a quasi-Hopf algebra if there are elements $\alpha,\beta\in H$ and an antialgebra map $S:H\to H$ such that\cite{Dri,Ma:book}
\[(S \xi_1)\alpha\xi_2={\epsilon}(\xi)\alpha,\quad \xi_1\beta S\xi_2={\epsilon}(\xi)\beta,\quad \phi^1\beta(S\phi^2)\alpha\phi^3=1,\quad (S\phi^{-1})\alpha\phi^{-2}\beta S\phi^{-3}=1\]
where $\Delta\xi=\xi_1\mathop{{\otimes}}\xi_2$, $\phi=\phi^1\mathop{{\otimes}}\phi^2\mathop{{\otimes}}\phi^3$ with inverse $\phi^{-1}\mathop{{\otimes}}\phi^{-2}\mathop{{\otimes}}\phi^{-3}$ is a compact notation (sums of such terms to be understood). It is usual to assume $S$ is bijective but we do not require this. The $\alpha,\beta, S$ are not unique and can be changed to $S'=U(S\ ) U^{-1}, \alpha'=U\alpha, \beta'=\beta U^{-1}$ for any invertible $U$. In particular, if $\alpha$ is invertible then we can transform to a standard form replacing it by $1$. For the purposes of this paper, we therefore call the case of $\alpha$ invertible a (left) {\em regular antipode}.
\begin{proposition}\label{standardS} If $(\ )^R$ is bijective, $\Xi(R,K)$ is a quasi-Hopf algebra with regular antipode
\[ S(\delta_r\mathop{{\otimes}} x)=\delta_{(x^{-1}{\triangleright} r)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} r,\quad \alpha=\sum_{r\in R}\delta_r\mathop{{\otimes}} 1,\quad \beta=\sum_r\delta_r\mathop{{\otimes}} \tau(r,r^R).\]
Equivalently in subalgebra terms,
\[ S\delta_r=\delta_{r^R},\quad Sx=\sum_{s\in R}(x^{-1}{\triangleleft} s)\delta_{s^R} ,\quad \alpha=1,\quad \beta=\sum_{r\in R}\delta_r\tau(r,r^R).\]
\end{proposition}
{\noindent {\bfseries Proof:}\quad }
For the axioms involving $\phi$, we have
\begin{align*}\phi^1\beta&(S \phi^2)\alpha\phi^3=\sum_{s,t,r}(\delta_s\mathop{{\otimes}} 1)(\delta_r\mathop{{\otimes}} \tau(r,r^R))(\delta_{t^R}\mathop{{\otimes}}\tau(s,t)^{-1})\\
&=\sum_{s,t}(\delta_s\mathop{{\otimes}}\tau(s,s^R))(\delta_{t^R}\mathop{{\otimes}} \tau(s,t)^{-1})=\sum_{s,t}\delta_s\delta_{s,\tau(s,s^R){\triangleright} t^R}\mathop{{\otimes}}\tau(s,s^R)\tau(s,t)^{-1}\\
&=\sum_{s^R\cdot t^R=e}\delta_s\mathop{{\otimes}} \tau(s,s^R)\tau(s,t)^{-1}=1,
\end{align*}
where we used $s\cdot(s^R\cdot t^R)=(s\cdot s^R)\cdot(\tau(s,s^R){\triangleright} t^R)=\tau(s,s^R){\triangleright} t^R$. So $s=\tau(s,s^R){\triangleright} t^R$ holds iff $s^R\cdot t^R=e$ by left division. In the sum, we can take $t=s^R$, which contributes $\delta_s\mathop{{\otimes}} e$; indeed $s^R\cdot(s^R)^R=e$ and by left division $(s^R)^R$ is the unique element with this property, so $t$ is unique provided $(\ )^R$ is injective.
\begin{align*}
S(\phi^{-1})\alpha&\phi^{-2}\beta S(\phi^{-3}) = \sum_{s,t,u,v}(\delta_{s^R}\otimes 1)(\delta_t\otimes 1)(\delta_u\otimes\tau(u,u^R))(\delta_{(\tau(s,t)^{-1}{\triangleright} v)^R}\otimes (\tau(s,t)^{-1}{\triangleleft} v))\\
&= \sum_{s,v}(\delta_{s^R}\otimes\tau(s^R,s^R{}^R))(\delta_{(\tau(s,s^R)^{-1}{\triangleright} v)^R}\otimes \tau(s,s^R)^{-1}{\triangleleft} v).
\end{align*}
Upon multiplication, we will have a $\delta$-function dictating that
\[s^R = \tau(s^R,s^R{}^R){\triangleright} (\tau(s,s^R)^{-1}{\triangleright} v)^R,\]
so we can use the fact that
\begin{align*}s\cdot s^R = e &= s\cdot(\tau(s^R,s^R{}^R){\triangleright} (\tau(s,s^R)^{-1}{\triangleright} v)^R)\\ &= s\cdot(s^R\cdot(s^R{}^R\cdot (\tau(s,s^R)^{-1}{\triangleright} v)^R))\\
&= \tau(s,s^R){\triangleright} (s^R{}^R\cdot(\tau(s,s^R)^{-1}{\triangleright} v)^R),
\end{align*}
where we use similar identities to before. Therefore $s^R{}^R\cdot (\tau(s,s^R)^{-1}{\triangleright} v)^R = e$, so $(\tau(s,s^R)^{-1}{\triangleright} v)^R = s^R{}^R{}^R$. When $(\ )^R$ is injective, this gives us $v = \tau(s,s^R){\triangleright} s^R{}^R$. Returning to our original calculation we have that our previous expression is
\begin{align*}
\cdots &= \sum_s \delta_{s^R}\otimes \tau(s^R,s^R{}^R)(\tau(s,s^R)^{-1}{\triangleleft} (\tau(s,s^R){\triangleright} s^R{}^R))\\
&= \sum_s \delta_{s^R}\otimes \tau(s^R,s^R{}^R)(\tau(s,s^R){\triangleleft} s^R{}^R)^{-1} = \sum_s \delta_{s^R}\otimes 1 = 1.
\end{align*}
We now prove the antipode axiom involving $\alpha$,
\begin{align*}
(S(\delta_s \otimes& x)_1)(\delta_s \otimes x)_2 = \sum_{r\cdot t = s}(\delta_{(x^{-1}{\triangleright} r)^R}\otimes (x^{-1}{\triangleleft} r))(\delta_t\otimes (x^{-1}{\triangleleft} r)^{-1})\\
&= \sum_{r\cdot t = s}\delta_{(x^{-1}{\triangleright} r)^R, (x^{-1}{\triangleleft} r){\triangleright} t}\delta_{(x^{-1}{\triangleright} r)^R}\otimes 1 = \delta_{e,s}\sum_r \delta_{(x^{-1}{\triangleright} r)^R}\otimes 1 = {\epsilon}(\delta_s\otimes x)1.
\end{align*}
The condition from the $\delta$-functions is
\[ (x^{-1}{\triangleright} r)^R=(x^{-1}{\triangleleft} r){\triangleright} t\]
which by uniqueness of right inverses holds iff
\[ e=(x^{-1}{\triangleright} r)\cdot (x^{-1}{\triangleleft} r){\triangleright} t=x^{-1}{\triangleright}(r\cdot t)\]
which holds iff $r\cdot t=e$, so $t=r^R$. As we also need $r\cdot t=s$, this becomes $\delta_{s,e}$ as required.
We now prove the axiom involving $\beta$, starting with
\begin{align*}(\delta_s\otimes& x)_1 \beta S((\delta_s\otimes x)_2) = \sum_{r\cdot t=s, p}(\delta_r\mathop{{\otimes}} x)(\delta_p\mathop{{\otimes}}\tau(p,p^R))S(\delta_t\mathop{{\otimes}} (x^{-1}{\triangleleft} r)^{-1})\\
&=\sum_{r\cdot t=s, p}(\delta_r\delta_{r,x{\triangleright} p}\mathop{{\otimes}} x\tau(p,p^R))(\delta_{((x^{-1}{\triangleleft} r){\triangleright} t)^R}\mathop{{\otimes}} (x^{-1}{\triangleleft} r){\triangleleft} t)\\
&=\sum_{r\cdot t=s}(\delta_r\mathop{{\otimes}} x\tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R))(\delta_{((x^{-1}{\triangleleft} r){\triangleright} t)^R}\mathop{{\otimes}} (x^{-1}{\triangleleft} r){\triangleleft} t).
\end{align*}
When we multiply this out, we will need from the product of $\delta$-functions that
\[ \tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)^{-1}{\triangleright} (x^{-1}{\triangleright} r)=((x^{-1}{\triangleleft} r){\triangleright} t)^R,\]
but note that $\tau(q,q{}^R)^{-1}{\triangleright} q=q^R{}^R$ from Lemma~\ref{leminv}. So the condition from the $\delta$-functions is
\[ (x^{-1}{\triangleright} r)^R{}^R=((x^{-1}{\triangleleft} r){\triangleright} t)^R,\]
so
\[ (x^{-1}{\triangleright} r)^R=(x^{-1}{\triangleleft} r){\triangleright} t\]
when $(\ )^R$ is injective. By uniqueness of right inverses, this holds iff
\[ e=(x^{-1}{\triangleright} r)\cdot ((x^{-1}{\triangleleft} r){\triangleright} t)=x^{-1}{\triangleright}(r\cdot t),\]
where the last equality is from the matched pair conditions. This holds iff $r\cdot t=e$, that is, $t=r^R$. This also means in the sum that we need $s=e$.
Hence, when we multiply out our expression so far, we have
\[\cdots=\delta_{s,e}\sum_r\delta_r\mathop{{\otimes}} x\tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)(x^{-1}{\triangleleft} r){\triangleleft} r^R=\delta_{s,e}\sum_r\delta_r\mathop{{\otimes}}\tau(r,r^R)=\delta_{s,e}\beta,\]
as required, where we used
\[ x\tau( x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)(x^{-1}{\triangleleft} r){\triangleleft} r^R=\tau(r,r^R)\]
by the matched pair conditions. The subalgebra form of $Sx$ is the same using the commutation relations and Lemma~\ref{leminv} to reorder.
It remains to check that
\begin{align*}S(\delta_s&\mathop{{\otimes}} y)S(\delta_r\mathop{{\otimes}} x)=(\delta_{(y^{-1}{\triangleright} s)^R}\mathop{{\otimes}} y^{-1}{\triangleleft} s)(\delta_{(x^{-1}{\triangleright} r)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} r)\\
&=\delta_{r,x{\triangleright} s}\delta_{(y^{-1}{\triangleright} s)^R}\mathop{{\otimes}} (y^{-1}{\triangleleft} s)(x^{-1}{\triangleleft} r)=\delta_{r,x{\triangleright} s}\delta_{(y^{-1}x^{-1}{\triangleright} r)^R}\mathop{{\otimes}}( y^{-1}{\triangleleft}(x^{-1}{\triangleright} r))(x^{-1}{\triangleleft} r)\\
&=S(\delta_r\delta_{r,x{\triangleright} s}\mathop{{\otimes}} xy)=S((\delta_r\mathop{{\otimes}} x)(\delta_s\mathop{{\otimes}} y)),
\end{align*}
where the product of $\delta$-functions requires $(y^{-1}{\triangleright} s)^R=( y^{-1}{\triangleleft} s){\triangleright} (x^{-1}{\triangleright} r)^R$, which is equivalent to $s^R=(x^{-1}{\triangleright} r)^R$ using Lemma~\ref{leminv}. This imposes $\delta_{r,x{\triangleright} s}$. We then replace $s=x^{-1}{\triangleright} r$ and recognise the answer using the matched pair identities.
\endproof
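For the regular transversal $R=\{e,w,v\}$ of Example~\ref{exS3R} (where $\tau(r,r^R)=e$, so that $\beta=1$ as well as $\alpha=1$), the antipode of Proposition~\ref{standardS} can be confirmed by direct computation. The following Python sketch, in the same spirit as before (illustrative only, our own names), checks that $S$ is an antialgebra map and that all four quasi-Hopf antipode axioms hold.

```python
from itertools import product

# S3 as permutations of (1,2,3); (f*g)(i) = f(g(i))
def mul(f, g): return tuple(f[g[i] - 1] for i in range(3))
def inv(f):    return tuple(sorted(range(1, 4), key=lambda i: f[i - 1]))

e, u, v = (1, 2, 3), (2, 1, 3), (1, 3, 2)
w = mul(u, mul(v, u))
K, R = [e, u], [e, w, v]                      # regular transversal: ( )^R is bijective

def factor(g):                                # unique factorisation g = r k
    return next((r, k) for r in R for k in K if mul(r, k) == g)
act  = lambda x, r: factor(mul(x, r))[0]
back = lambda x, r: factor(mul(x, r))[1]
dot  = lambda r, s: factor(mul(r, s))[0]
tau  = lambda r, s: factor(mul(r, s))[1]
rinv = lambda r: next(s for s in R if dot(r, s) == e)   # r^R

basis = list(product(R, K))
unit  = {((r, e),): 1 for r in R}             # 1 = α in Ξ(R,K)

def add(a, b):                                # sum of elements stored as dicts
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c}

def tmul(a, b):                               # product in Ξ^{⊗n}
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            key = []
            for (r, x), (s, y) in zip(ka, kb):
                if r != act(x, s): break
                key.append((r, mul(x, y)))
            else:
                out[tuple(key)] = out.get(tuple(key), 0) + ca * cb
    return {k: c for k, c in out.items() if c}

def S(a):                                     # S(δ_r⊗x) = δ_{(x^{-1}▷r)^R} ⊗ x^{-1}◁r
    out = {}
    for ((r, x),), c in a.items():
        k = ((rinv(act(inv(x), r)), back(inv(x), r)),)
        out[k] = out.get(k, 0) + c
    return out

def delta(r, x):                              # Δ(δ_r⊗x), as in the quasi-bialgebra lemma
    return {((s, x), (t, inv(back(inv(x), s)))): 1
            for s in R for t in R if dot(s, t) == r}

beta = {((r, tau(r, rinv(r))),): 1 for r in R}
assert beta == unit                           # β = 1 for this transversal

# S is an antialgebra map
for a in basis:
    for b in basis:
        A, B = {(a,): 1}, {(b,): 1}
        assert S(tmul(A, B)) == tmul(S(B), S(A))

# (Sξ1) α ξ2 = ε(ξ) α  and  ξ1 β (Sξ2) = ε(ξ) β, with α = 1
for (r, x) in basis:
    lhs1, lhs2 = {}, {}
    for ((s, y), (t, z)), c in delta(r, x).items():
        xi1, xi2 = {((s, y),): c}, {((t, z),): 1}
        lhs1 = add(lhs1, tmul(S(xi1), xi2))
        lhs2 = add(lhs2, tmul(tmul(xi1, beta), S(xi2)))
    assert lhs1 == (unit if r == e else {})
    assert lhs2 == (beta if r == e else {})

# φ^1 β (Sφ^2) α φ^3 = 1  and  S(φ^{-1}) α φ^{-2} β S(φ^{-3}) = 1
acc1, acc2 = {}, {}
for r in R:
    for s in R:
        for t in R:
            f1, f2 = {((r, e),): 1}, {((s, e),): 1}
            acc1 = add(acc1, tmul(tmul(tmul(f1, beta), S(f2)), {((t, inv(tau(r, s))),): 1}))
            acc2 = add(acc2, tmul(tmul(tmul(S(f1), f2), beta), S({((t, tau(r, s)),): 1})))
assert acc1 == unit and acc2 == unit
```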
\subsection{$\Xi(R,K)$ as a $*$-quasi-Hopf algebra}
The correct notion of a $*$-quasi-Hopf algebra $H$ is not part of Drinfeld's theory, but a natural approach is to add further structure so
as to make the monoidal category of modules a bar category in the sense of \cite{BegMa:bar}. If $H$ is at least a quasi-bialgebra, the additional structure we need, fixing a typo in \cite[Def.~3.16]{BegMa:bar}, is the first three of:
\begin{enumerate}\item An antilinear algebra map $\theta:H\to H$.
\item An invertible element $\gamma\in H$ such that $\theta(\gamma)=\gamma$ and $\theta^2=\gamma(\ )\gamma^{-1}$.
\item An invertible element $\hbox{{$\mathcal G$}}\in H\mathop{{\otimes}} H$ such that
\begin{equation}\label{*GDelta}\Delta\theta =\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op}(\ ))\hbox{{$\mathcal G$}},\quad ({\epsilon}\mathop{{\otimes}}\mathrm{id})(\hbox{{$\mathcal G$}})=(\mathrm{id}\mathop{{\otimes}}{\epsilon})(\hbox{{$\mathcal G$}})=1,\end{equation}
\begin{equation}\label{*Gphi} (\theta\mathop{{\otimes}}\theta\mathop{{\otimes}}\theta)(\phi_{321})(1\mathop{{\otimes}}\hbox{{$\mathcal G$}})((\mathrm{id}\mathop{{\otimes}}\Delta)\hbox{{$\mathcal G$}})\phi=(\hbox{{$\mathcal G$}}\mathop{{\otimes}} 1)((\Delta\mathop{{\otimes}}\mathrm{id})\hbox{{$\mathcal G$}}).\end{equation}
\item We say the $*$-quasi-bialgebra is {\em strong} if
\begin{equation}\label{*Gstrong} (\gamma\mathop{{\otimes}}\gamma)\Delta\gamma^{-1}=((\theta\mathop{{\otimes}}\theta)(\hbox{{$\mathcal G$}}_{21}))\hbox{{$\mathcal G$}}.\end{equation}
\end{enumerate}
Note that if we have a quasi-Hopf algebra then $S$ is antimultiplicative, so an antimultiplicative antilinear map $*$ gives an antilinear algebra map $\theta=*\circ S$. However, $S$ is not unique and it appears that specifying $\theta$ directly is more canonical.
\begin{lemma} Let $(\ )^R$ be bijective. Then $\Xi$ has an antilinear algebra automorphism $\theta$ such that
\[ \theta(x)=\sum_s x{\triangleleft} s\, \delta_{s^R},\quad \theta(\delta_s)=\delta_{s^R},\]
\[\theta^2=\gamma(\ )\gamma^{-1};\quad \gamma=\sum_s\tau(s,s^R)^{-1}\delta_s,\quad\theta(\gamma)=\gamma.\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad } We compute,
\[ \theta(\delta_s\delta_t)=\delta_{s,t}\delta_{s^R}=\delta_{s^R,t^R}\delta_{s^R}=\theta(\delta_s)\theta(\delta_t)\]
\[\theta(x)\theta(y)=\sum_{s,t}x{\triangleleft} s\,\delta_{s^R}\, y{\triangleleft} t\,\delta_{t^R}=\sum_{t}(x{\triangleleft} (y{\triangleright} t))( y{\triangleleft} t)\delta_{t^R}=\sum_t (xy){\triangleleft} t\,\delta_{t^R}=\theta(xy)\]
where imagining commuting $\delta_{s^R}$ to the left fixes $s^R=(y{\triangleleft} t){\triangleright} t^R=(y{\triangleright} t)^R$, i.e. $s=y{\triangleright} t$, to obtain the 2nd equality. We also have
\[ \theta(x\delta_s)=\sum_tx{\triangleleft} t\delta_{t^R}\delta_{s^R}=x{\triangleleft} s\delta_{s^R}=\delta_{(x{\triangleleft} s){\triangleright} s^R}x{\triangleleft} s=\delta_{(x{\triangleright} s)^R}x{\triangleleft} s\]
\[ \theta(\delta_{x{\triangleright} s}x)=\sum_t\delta_{(x{\triangleright} s)^R}x{\triangleleft} t\,\delta_{t^R}=\sum_t\delta_{(x{\triangleright} s)^R}\delta_{(x{\triangleleft} t){\triangleright} t^R}(x{\triangleleft} t)=\sum_t\delta_{(x{\triangleright} s)^R}\delta_{(x{\triangleright} t)^R}(x{\triangleleft} t),\]
which agrees with the preceding expression since the product of $\delta$-functions forces $t=s$. Next
\[ \gamma^{-1}=\sum_s \tau(s,s^R)\delta_{s^{RR}}=\sum_s \delta_s \tau(s,s^R),\]
where we recall from previous calculations that $\tau(s,s^R){\triangleright} s^{RR}=s$. Then
\begin{align*}\theta^2(x)&=\sum_s\theta(x{\triangleleft} s\,\delta_{s^R})=\sum_{s,t}(x{\triangleleft} s){\triangleleft} t\,\delta_{t^R}\delta_{s^{RR}}=\sum_s (x{\triangleleft} s){\triangleleft} s^R\,\delta_{s^{RR}}\\
&=\sum_s \tau(x{\triangleright} s,(x{\triangleright} s)^R)^{-1}x\tau(s,s^R)\delta_{s^{RR}}=\sum_{s,t}\tau(t,t^R)^{-1}\delta_{t} x\tau(s,s^R)\delta_{s^{RR}}\\
&=\sum_{s,t}\delta_{t^{RR}}\tau(t,t^R)^{-1}x\tau(s,s^R)\delta_{s^{RR}}=\gamma x\gamma^{-1}&\end{align*}
where for the 6th equality if we were to commute $\delta_{s^{RR}}$ to the left, this would fix $t=x\tau(s,s^R){\triangleright} s^{RR}=x{\triangleright} s$. We then use $\tau(t,t^R)^{-1}{\triangleright} t=t^{RR}$ and recognise the answer. We also check that
\begin{align*}\gamma\delta_s\gamma^{-1}&= \tau(s,s^R)^{-1}\delta_s\tau(s,s^R)=\delta_{s^{RR}}=\theta^2(\delta_s),\\
\theta(\gamma) &= \sum_{s,t}\tau(s,s^R)^{-1}{\triangleleft} t\delta_{t^R}\delta_{s^R}=\sum_s\tau(s,s^R)^{-1}{\triangleleft} s\delta_{s^R}=\sum_s\tau(s^R,s^{RR})^{-1}\delta_{s^R}=\gamma\end{align*}
using Lemma~\ref{leminv}.
\endproof
Next, we find $\hbox{{$\mathcal G$}}$ obeying the conditions above.
\begin{lemma} If $(\ )^R$ is bijective then equation (\ref{*GDelta}) holds with
\[ \hbox{{$\mathcal G$}}=\sum_{s,t} \delta_{t^R}\tau(s,t)^{-1}\mathop{{\otimes}} \delta_{s^R}\tau(t,t^R) (\tau(s,t){\triangleleft} t^R)^{-1}, \]
\[\hbox{{$\mathcal G$}}^{-1}=\sum_{s,t} \tau(s,t)\delta_{t^R}\mathop{{\otimes}} (\tau(s,t){\triangleleft} t^R)\tau(t,t^R)^{-1} \delta_{s^R}.\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad } The proof that $\hbox{{$\mathcal G$}},\hbox{{$\mathcal G$}}^{-1}$ are indeed mutually inverse is straightforward on matching the $\delta$-functions to fix the summation variables in $\hbox{{$\mathcal G$}}^{-1}$ in terms of those in $\hbox{{$\mathcal G$}}$. This then comes down to proving that the map
$(s,t)\to (p,q):=(\tau(s,t){\triangleright} t^R, \tau'(s,t){\triangleright} s^R)$, with $\tau'$ as defined below, is injective. Indeed, the map $(p,q)\mapsto (p,p\cdot q)$ is injective by left division, so it is enough to prove that
\[ (s,t)\mapsto (p,p\cdot q)=(\tau(s,t){\triangleright} t^R, \tau(s,t){\triangleright}(t^R\cdot\tau(t,t^R)^{-1}{\triangleright} s^R))=((s\cdot t)\backslash s,(s\cdot t)^R)\]
is injective. We used $(s\cdot t)\cdot \tau(s,t){\triangleright} t^R=s\cdot(t\cdot t^R)=s$ by quasi-associativity to recognise $p$, recognised $t^R\cdot\tau(t,t^R)^{-1}{\triangleright} s^R=t\backslash s^R$ from (\ref{leftdiv}) and then
\[ (s\cdot t)\cdot \tau(s,t){\triangleright} (t\backslash s^R)=s\cdot(t\cdot(t\backslash s^R))=s\cdot s^R=e\]
to recognise $p\cdot q$. That the desired map is injective is then immediate by $(\ )^R$ injective and elementary properties of division.
We use similar methods in the other proofs. Thus, writing
\[ \tau'(s,t):=(\tau(s,t){\triangleleft} t^R)\tau(t,t^R)^{-1}=\tau(s\cdot t, \tau(s,t){\triangleright} t^R)^{-1}\]
for brevity, we have
\begin{align*}\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op} \delta_r)&=\hbox{{$\mathcal G$}}^{-1}\sum_{p\cdot q=r}(\delta_{q^R}\mathop{{\otimes}}\delta_{p^R})=\sum_{s\cdot t=r}\tau(s,t)\delta_{t^R}\mathop{{\otimes}}\tau'(s,t)\delta_{s^R},\\
(\Delta\theta(\delta_r))\hbox{{$\mathcal G$}}^{-1}&=\sum_{p\cdot q=r^R}(\delta_p\mathop{{\otimes}}\delta_q)\hbox{{$\mathcal G$}}^{-1}=\sum_{p\cdot q=r^R} \tau(s,t)\delta_{t^R}\mathop{{\otimes}}\tau'(s,t)\delta_{s^R},
\end{align*}
where in the second line, commuting the $\delta_{t^R}$ and $\delta_{s^R}$ to the left sets $p=\tau(s,t){\triangleright} t^R$, $q=\tau'(s,t){\triangleright} s^R$ as studied above. Hence $p\cdot q=r^R$ in the sum is the same as $s\cdot t=r$, so the two sides are equal and we have proven
(\ref{*GDelta}) on $\delta_r$. Similarly,
\begin{align*}\hbox{{$\mathcal G$}}^{-1}&(\theta\mathop{{\otimes}}\theta)(\Delta^{op} x)\\
&=\sum_{p,q,s,t} \left(\tau(p,q)\delta_{q^R}\mathop{{\otimes}} (\tau(p,q){\triangleleft} q^R)\tau(q,q^R)^{-1} \delta_{p^R} \right)\left((x{\triangleleft} s){\triangleleft} t\, \delta_{t^R}\mathop{{\otimes}}\delta_{(x{\triangleright} s)^R}x{\triangleleft} s\right)\\
&=\sum_{s,t}(x{\triangleleft} s\cdot t)\tau(s,t)\delta_{t^R}\mathop{{\otimes}} \tau(x{\triangleright}(s\cdot t),(x{\triangleleft} s\cdot t)\tau(s,t){\triangleright} t^R)^{-1}(x{\triangleleft} s)\delta_{s^R}
\end{align*}
where we first note that for the $\delta$-functions to connect, we need
\[ p=x{\triangleright} s,\quad ((x{\triangleleft} s){\triangleleft} t){\triangleright} t^R=q^R,\]
which is equivalent to $q=(x{\triangleleft} s){\triangleright} t$ since $e=(x{\triangleleft} s){\triangleright} (t\cdot t^R)=((x{\triangleleft} s){\triangleright} t)\cdot(( (x{\triangleleft} s){\triangleleft} t){\triangleright} t^R)$. In this case
\[ \tau(p,q)((x{\triangleleft} s){\triangleleft} t)=\tau(x{\triangleright} s, (x{\triangleleft} s){\triangleright} t)((x{\triangleleft} s){\triangleleft} t)=(x{\triangleleft} s\cdot t)\tau(s,t)\]
by the cocycle axiom. Similarly, $(x{\triangleleft} s)^{-1}{\triangleright}(x{\triangleright} s)^R=s^R$ by Lemma~\ref{leminv} gives us $\delta_{s^R}$. For its coefficient, note that $p\cdot q=(x{\triangleright} s)\cdot((x{\triangleleft} s){\triangleright} t)=x{\triangleright}(s\cdot t)$ so that, using the other form of $\tau'(p,q)$, we obtain
\[ \tau(p\cdot q,\tau(p,q){\triangleright} q^R)^{-1}(x{\triangleleft} s)=\tau(x{\triangleright}(s\cdot t),\tau(p,q)((x{\triangleleft} s){\triangleleft} t){\triangleright} t^R)^{-1}(x{\triangleleft} s) \]
and we use our previous calculation to put this in terms of $s,t$. On the other side, we have
\begin{align*}
(\Delta\theta(x))&\hbox{{$\mathcal G$}}^{-1}= \sum_t\Delta(x{\triangleleft} t\, \delta_{t^R} )\hbox{{$\mathcal G$}}^{-1}\\
&=\sum_{p,q,s\cdot r=t^R}x{\triangleleft} t\, \delta_s\tau(p,q)\delta_{q^R}\mathop{{\otimes}} (x{\triangleleft} t){\triangleleft} r\, \delta_r \tau(p\cdot q,\tau(p,q){\triangleright} q^R)^{-1}\delta_{p^R}\\
&=\sum_{p,q}x{\triangleleft}(p\cdot q)\, \tau(p,q)\delta_{q^R}\mathop{{\otimes}} (x{\triangleleft} p\cdot q){\triangleleft} s\, \tau(p\cdot q,s)^{-1}\delta_{p^R},
\end{align*}
where, for the $\delta$-functions to connect, we need
\[ s=\tau(p,q){\triangleright} q^R,\quad r=\tau'(p,q){\triangleright} p^R.\]
The map $(p,q)\mapsto (s,r)$ has the same structure as the one we studied above, but applied now to $p,q$ in place of $s,t$. It follows that $s\cdot r=(p\cdot q)^R$ and hence this being equal to $t^R$ is equivalent to $p\cdot q=t$. Taking this for the value of $t$, we obtain the second expression for $(\Delta\theta(x))\hbox{{$\mathcal G$}}^{-1}$.
We now use the identity for $(x{\triangleleft} p\cdot q){\triangleleft} s $ and $(p\cdot q)\cdot \tau(p,q){\triangleright} q^R=p\cdot(q\cdot q^R)=p$ to obtain the same result as we obtained for $\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op} x)$, upon renaming $s,t$ there to $p,q$. The proofs of (\ref{*Gphi}) and (\ref{*Gstrong}) are similarly quite involved, but are omitted given that it is known that the category of modules is a strong bar category. \endproof
\begin{corollary}\label{corstar} For $(\ )^R$ bijective and the standard antipode in Proposition~\ref{standardS}, we have a $*$-quasi Hopf algebra with $\theta=* S$, where $x^*=x^{-1},\delta_s^*=\delta_s$ is the standard $*$-algebra structure on $\Xi$ as a cross product and $\gamma,\hbox{{$\mathcal G$}}$ are as above.
\end{corollary}{\noindent {\bfseries Proof:}\quad } We check that
\[*Sx=*(\sum_s \delta_{(x^{-1}{\triangleright} s)^R}x^{-1}{\triangleleft} s)=\sum_s (x^{-1}{\triangleleft} s)^{-1}\delta_{(x^{-1}{\triangleright} s)^R}=\sum_{s'}x{\triangleleft} s'\delta_{s'{}^R}=\theta(x),\] where $s'=x^{-1}{\triangleright} s$ and we used Lemma~\ref{leminv}. \endproof
The key property of any quasi-bialgebra is that its category of modules is monoidal with associator $\phi_{V,W,U}: (V\mathop{{\otimes}} W)\mathop{{\otimes}} U\to V\mathop{{\otimes}} (W\mathop{{\otimes}} U)$ given by the action of $\phi$. In the $*$-quasi case, this becomes a bar category as follows\cite{BegMa:bar}. First, there is a functor ${\rm bar}$ from the category to itself which sends a module $V$ to a `conjugate', $\bar V$. In our case, this has the same set and abelian group structure as $V$ but $\lambda.\bar v=\overline{\bar\lambda v}$ for all $\lambda\in \mathbb{C}$, i.e. a conjugate action of the field, where we write $v\in V$ as $\bar v$ when viewed in $\bar V$. Similarly,
\[ \xi.\bar v=\overline{\theta(\xi).v}\]
for all $\xi\in \Xi(R,K)$. On morphisms $\psi:V\to W$, we define $\bar\psi:\bar V\to \bar W$ by $\bar \psi(\bar v)=\overline{\psi(v)}$. Next, there is a natural isomorphism $\Upsilon: {\rm bar}\circ\mathop{{\otimes}} \Rightarrow \mathop{{\otimes}}^{op}\circ({\rm bar}\times{\rm bar})$, given in our case for all modules $V,W$ by
\[ \Upsilon_{V,W}:\overline{V\mathop{{\otimes}} W}{\cong} \bar W\mathop{{\otimes}} \bar V, \quad \Upsilon_{V,W}(\overline{v\mathop{{\otimes}} w})=\overline{ \hbox{{$\mathcal G$}}^2.w}\mathop{{\otimes}}\overline{\hbox{{$\mathcal G$}}^1.v}\]
and making a hexagon identity with the associator, namely
\[ (\mathrm{id}\mathop{{\otimes}}\Upsilon_{V,W})\circ\Upsilon_{V\mathop{{\otimes}} W, U}=\phi_{\bar U,\bar V,\bar W}\circ(\Upsilon_{W,U}\mathop{{\otimes}}\mathrm{id})\circ\Upsilon_{V,W\mathop{{\otimes}} U}\circ \overline{\phi_{V,W,U}}.\]
We also have a natural isomorphism ${\rm bb}:\mathrm{id}\Rightarrow {\rm bar}\circ{\rm bar}$, given in our case for all modules $V$ by
\[ {\rm bb}_V:V\to \overline{\overline V},\quad {\rm bb}_V(v)=\overline{\overline{\gamma.v}}\]
and obeying $\overline{{\rm bb}_V}={\rm bb}_{\bar V}$. In our case, we have a strong bar category, which means also
\[ \Upsilon_{\bar W,\bar V}\circ\overline{\Upsilon_{V,W}}\circ {\rm bb}_{V\mathop{{\otimes}} W}={\rm bb}_V\mathop{{\otimes}}{\rm bb}_W.\]
Finally, a bar category has some conditions on the unit object $\underline 1$, which in our case is the trivial representation with these automatic. That $G=RK$ leads to a strong bar category is in \cite[Prop.~3.21]{BegMa:bar} but without the underlying $*$-quasi-Hopf algebra structure as found above.
\begin{example}\label{exS3quasi} \rm {\sl (i) $\Xi(R,K)$ for $S_2\subset S_3$ with its standard transversal.} As an algebra, this is generated by $\mathbb{Z}_2$, i.e. by an element $u$ with $u^2=e$, together with $\delta_{0},\delta_{1},\delta_{2}$,
the $\delta$-functions on the points of $R=\{e,uv,vu\}$. The relations are that the $\delta_i$ are orthogonal and sum to $1$, and the cross relations
\[ \delta_0u=u\delta_0,\quad \delta_1u=u\delta_2,\quad \delta_2u=u\delta_1.\]
The dot product is the additive group $\mathbb{Z}_3$, i.e. addition mod 3. The coproducts etc are
\[ \Delta \delta_i=\sum_{j+k=i}\delta_j\mathop{{\otimes}}\delta_k,\quad \Delta u=u\mathop{{\otimes}} u,\quad \phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1\]
with addition mod 3. The cocycle and right action are trivial and the dot product is that of $\mathbb{Z}_3$ as a subgroup generated by $uv$. This gives an ordinary cross product Hopf algebra $\Xi=\mathbb{C}(\mathbb{Z}_3){>\!\!\!\triangleleft}\mathbb{C} \mathbb{Z}_2$. Here $S\delta_i=\delta_{-i}$ and $S u=u$. For the $*$-structure, the cocycle is trivial so $\gamma=1$ and $\hbox{{$\mathcal G$}}=1\mathop{{\otimes}} 1$ and we have an ordinary Hopf $*$-algebra.
{\sl (ii) $\Xi(R,K)$ for $S_2\subset S_3$ with its second transversal.} For this $R$, the dot product is specified by $e$ the identity and $v\cdot w=w$, $w\cdot v=v$. The algebra has relations
\[ \delta_e u=u\delta_e,\quad \delta_v u=u\delta_w,\quad \delta_w u=u\delta_v\]
and the quasi-Hopf algebra coproducts etc. are
\[ \Delta \delta_e=\delta_e\mathop{{\otimes}} \delta_e+\delta_v\mathop{{\otimes}}\delta_v+\delta_w\mathop{{\otimes}}\delta_w,\quad
\Delta \delta_v=\delta_e\mathop{{\otimes}} \delta_v+\delta_v\mathop{{\otimes}}\delta_e+\delta_w\mathop{{\otimes}}\delta_v,\]
\[
\Delta \delta_w=\delta_e\mathop{{\otimes}} \delta_w+\delta_w\mathop{{\otimes}}\delta_e+\delta_v\mathop{{\otimes}}\delta_w,\quad \Delta u=u\mathop{{\otimes}} u,\]
\[ \phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1+(\delta_v \mathop{{\otimes}}\delta_w+\delta_w\mathop{{\otimes}}\delta_v )\mathop{{\otimes}} (u-1)=\phi^{-1}.\]
The antipode is
\[ S\delta_s=\delta_{s^R}=\delta_s,\quad S u=\sum_{s}\delta_{(u{\triangleright} s)^R}u=u,\quad \alpha=1,\quad \beta=\sum_s \delta_s\mathop{{\otimes}}\tau(s,s)=1\]
from the antipode lemma, since the map $(\ )^R$ happens to be injective and indeed acts as the identity. In this case, we see that $\Xi(R,K)$ is nontrivially a quasi-Hopf algebra. Only $\tau(v,w)=\tau(w,v)=u$ are nontrivial, hence for the $*$-quasi Hopf algebra structure, we have
\[ \gamma=1,\quad \hbox{{$\mathcal G$}}=1\mathop{{\otimes}} 1+(\delta_v\mathop{{\otimes}}\delta_w+\delta_w\mathop{{\otimes}}\delta_v)(u\mathop{{\otimes}} u-1\mathop{{\otimes}} 1)\]
with $\theta=*S$ acting as the identity on our basis, $\theta(x)=x$ and $\theta(\delta_s)=\delta_s$. \end{example}
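Since this example is fully explicit, its cocycle data can be verified by machine. The following Python sketch (ours, not part of the text) assumes the concrete identification $u=(12)$, $v=(13)$, $w=(23)$ in $S_3$, with permutations as tuples of images composed as functions; it checks the transversal property and the stated values of $\tau$ for both transversals.

```python
# Numerical sanity check (ours) of the cocycle data in this example, assuming
# u=(12), v=(13), w=(23) in S_3, permutations as tuples, (a*b)(i) = a(b(i)).
from itertools import product

def mul(a, b):
    return tuple(a[b[i]] for i in range(3))

e, u, v, w = (0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1)
uv, vu = mul(u, v), mul(v, u)
K = [e, u]
G = [mul(r, x) for r in (e, uv, vu) for x in K]

def factorise(g, R):
    # unique factorisation g = r x with r in R, x in K (transversal property)
    out = [(r, x) for r in R for x in K if mul(r, x) == g]
    assert len(out) == 1
    return out[0]

def tau(s, t, R):
    # cocycle defined by s t = (s . t) tau(s, t)
    return factorise(mul(s, t), R)[1]

R1, R2 = [e, uv, vu], [e, v, w]      # the two transversals
for R in (R1, R2):
    for g in G:
        factorise(g, R)              # both really are transversals

# (i) standard transversal: the cocycle is trivial (R1 is the Z_3 subgroup)
assert all(tau(s, t, R1) == e for s, t in product(R1, R1))

# (ii) second transversal: only tau(v,w) = tau(w,v) = u are nontrivial
assert tau(v, w, R2) == u and tau(w, v, R2) == u
assert all(tau(s, t, R2) == e
           for s, t in product(R2, R2) if {s, t} != {v, w})
```

In particular, the check confirms $v\cdot w=w$, $w\cdot v=v$ and that $(\ )^R$ is the identity on $R_2$, as used above.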
We also note that the algebras $\Xi(R,K)$ here are manifestly isomorphic for the two $R$, but the coproducts are different, so the tensor products of representations are different, although they turn out to be isomorphic. The set of irreps does not change either, but how we construct them can look different. We will see in the next section that this is part of a monoidal equivalence of categories.
\begin{example}\rm $S_2\subset S_3$ with its 2nd transversal. Here $R$ has two orbits: (a) ${\hbox{{$\mathcal C$}}}=\{e\}$ with $r_0=e, K^{r_0}=K$ with two 1-diml irreps $V_\rho$ as $\rho$=trivial and $\rho={\rm sign}$, and hence two irreps of $\Xi(R,K)$; (b) ${\hbox{{$\mathcal C$}}}=\{w,v\}$ with $r_0=v$ or $r_0=w$, both with $K^{r_0}=\{e\}$ and hence only $\rho$ trivial, leading to one 2-dimensional irrep of $\Xi(R,K)$. So, altogether, there are again three irreps of $\Xi(R,K)$:
\begin{align*} V_{(\{e\},\rho)}:& \quad \delta_r.1 =\delta_{r,e},\quad u.1 =\pm 1,\\
V_{(\{w,v\},1)}:& \quad \delta_r. v=\delta_{r,v}v,\quad \delta_r. w=\delta_{r,w}w,\quad u.v= w,\quad u.w=v
\end{align*}
acting on $\mathbb{C}$ and on the span of $v,w$ respectively. These irreps are equivalent to what we had in Example~\ref{exS3n} when computing irreps from the standard $R$.
\end{example}
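The 2-dimensional irrep above can be checked directly against the algebra relations of Example~\ref{exS3quasi}(ii). A small Python sketch (ours; basis ordered $(v,w)$, so $u$ acts by the swap matrix and the $\delta_r$ by projectors, with $\delta_e$ acting as zero since no basis vector has grade $e$):

```python
# Check (ours) that the 2-dimensional irrep really represents the relations
#   delta_e u = u delta_e,  delta_v u = u delta_w,  delta_w u = u delta_v.

def mm(A, B):  # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
U = [[0, 1], [1, 0]]        # u.v = w, u.w = v (basis order v, w)
d_e = [[0, 0], [0, 0]]      # delta_e acts as 0 on this irrep
d_v = [[1, 0], [0, 0]]      # projector onto the grade-v line
d_w = [[0, 0], [0, 1]]      # projector onto the grade-w line

assert mm(U, U) == I                # u^2 = e
assert mm(d_e, U) == mm(U, d_e)     # delta_e u = u delta_e
assert mm(d_v, U) == mm(U, d_w)     # delta_v u = u delta_w
assert mm(d_w, U) == mm(U, d_v)     # delta_w u = u delta_v
# the delta's are orthogonal idempotents summing to the identity here
assert mm(d_v, d_w) == d_e and mm(d_v, d_v) == d_v
```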
\section{Categorical justification and twisting theorem}\label{sec:cat_just}
We have shown that the boundaries can be defined using the action of the algebra $\Xi(R,K)$ and that one can perform novel methods of fault-tolerant quantum computation using these boundaries. The full story, however, involves the quasi-Hopf algebra structure verified in the preceding section and now we would like to connect back up to the category theory behind this.
\subsection{$G$-graded $K$-bimodules.} We start by proving the equivalence ${}_{\Xi(R,K)}\hbox{{$\mathcal M$}} \simeq {}_K\hbox{{$\mathcal M$}}_K^G$ explicitly and use it to derive the coproduct studied in Section~\ref{sec:quasi}. Although this equivalence is known\cite{PS}, we believe this to be a new and more direct derivation.
\begin{lemma} If $V_\rho$ is a $K^{r_0}$-module and $V_{\hbox{{$\mathcal O$}},\rho}$ the associated $\Xi(R,K)$ irrep, then
\[ \tilde V_{\hbox{{$\mathcal O$}},\rho}= V_{\hbox{{$\mathcal O$}},\rho}\mathop{{\otimes}} \mathbb{C} K,\quad x.(r\mathop{{\otimes}} v\mathop{{\otimes}} z).y=x{\triangleright} r\mathop{{\otimes}}\zeta_r(x).v\mathop{{\otimes}} (x{\triangleleft} r)zy,\quad |r\mathop{{\otimes}} v\mathop{{\otimes}} z|=rz\]
is a $G$-graded $K$-bimodule. Here $r\in \hbox{{$\mathcal O$}}$ and $v\in V_\rho$ in the construction of $V_{\hbox{{$\mathcal O$}},\rho}$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad } That this is a $G$-graded right $K$-module commuting with the left action of $K$ is trivial. That the left action is indeed an action and respects the grading follows from
\begin{align*}x.(y.(r\mathop{{\otimes}} v\mathop{{\otimes}} z))&=x.(y{\triangleright} r\mathop{{\otimes}} \zeta_r(y).v\mathop{{\otimes}} (y{\triangleleft} r)z)= xy{\triangleright} r\mathop{{\otimes}} \zeta_r(xy).v\mathop{{\otimes}} (x{\triangleleft}(y{\triangleright} r))(y{\triangleleft} r)z\\
&=xy{\triangleright} r\mathop{{\otimes}} \zeta_r(xy).v\mathop{{\otimes}} ((xy){\triangleleft} r)z\end{align*}
and
\[ |x.(r\mathop{{\otimes}} v\mathop{{\otimes}} z).y|=(x{\triangleright} r) (x{\triangleleft} r)zy= xrzy=x|r\mathop{{\otimes}} v \mathop{{\otimes}} z|y.\]
\endproof
\begin{remark}\rm Recall that we can also think more abstractly of $\Xi=\mathbb{C}(G/K){>\!\!\!\triangleleft} \mathbb{C} K$ rather than using a transversal. In these terms, a representation of $\Xi(R,K)$ as an $R$-graded $K$-module $V$ such that $|x.v|=x{\triangleright} |v|$ now becomes a $G/K$-graded $K$-module such that $|x.v|=x|v|$, where $|v|\in G/K$ and we multiply from the left by $x\in K$. Moreover, the role of an orbit $\hbox{{$\mathcal O$}}$ above is played by a double coset $T=\hbox{{$\mathcal O$}} K\in {}_KG_K$. In these terms, the role of the isotropy group $K^{r_0}$ is played by
\[ K^{r_T}:=K\cap r_T K r_T^{-1}, \]
where $r_T$ is any representative of the same double coset. One can take $r_T=r_0$ but we can also choose it more freely.
Then an irrep is given by a double coset $T$ and an irreducible representation $\rho_T$ of $K^{r_T}$. If we denote by $V_{\rho_T}$ the carrier space for this then the associated irrep of $\mathbb{C}(G/K){>\!\!\!\triangleleft}\mathbb{C} K$ is $V_{T,\rho_T}=\mathbb{C} K\mathop{{\otimes}}_{K^{r_T}}V_{\rho_T}$ which is manifestly a $K$-module and we give it the $G/K$-grading by $|x\mathop{{\otimes}}_{K^{r_T}} v|=xK$. The construction in the last lemma is then equivalent to
\[ \tilde V_{T,\rho_T}=\mathbb{C} K\mathop{{\otimes}}_{K^{r_T}} V_{\rho_T}\mathop{{\otimes}}\mathbb{C} K,\quad |x\mathop{{\otimes}}_{K^{r_T}} v\mathop{{\otimes}} z|=xz\]
as manifestly a $G$-graded $K$-bimodule. This is an equivalent point of view, but we prefer our more explicit one based on $R$, hence details are omitted.
\end{remark}
Also note that the category ${}_K\hbox{{$\mathcal M$}}_K^G$ of $G$-graded $K$-bimodules has an obvious monoidal structure inherited from that of $K$-bimodules, where we tensor product over $\mathbb{C} K$. Here $|w\mathop{{\otimes}}_{\mathbb{C} K} w'|=|w||w'|$ in $G$ is well-defined and $x.(w\mathop{{\otimes}}_{\mathbb{C} K}w').y=x.w\mathop{{\otimes}}_{\mathbb{C} K} w'.y$ has degree $x|w||w'|y=x|w\mathop{{\otimes}}_{\mathbb{C} K}w'|y$ as required.
\begin{proposition} \label{prop:mon_equiv}
We let $R$ be a transversal and $W=V\mathop{{\otimes}} \mathbb{C} K$ made into a $G$-graded $K$-bimodule by
\[ x.(v\mathop{{\otimes}} z).y=x.v\mathop{{\otimes}} (x{\triangleleft}|v|)zy, \quad |v\mathop{{\otimes}} z|= |v|z\in G,\]
where now we view $|v|\in R$ as the chosen representative of $|v|\in G/K$. This gives a functor $F:{}_\Xi\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ which is a monoidal equivalence for a suitable quasi-bialgebra structure on $\Xi(R,K)$. The latter depends on $R$ since $F$ depends on $R$.
\end{proposition}
{\noindent {\bfseries Proof:}\quad } We define $F(V)$ as stated, which is clearly a right module that commutes with the left action, and the latter is a module structure as
\[ x.(y.(v\mathop{{\otimes}} z))=x.(y.v\mathop{{\otimes}} (y{\triangleleft} |v|)z)=xy.v\mathop{{\otimes}} (x{\triangleleft} (y{\triangleright} |v|))(y{\triangleleft} |v|)z=(xy).(v\mathop{{\otimes}} z)\]
using the matched pair axiom for $(xy){\triangleleft} |v|$. We also check that $|x.(v\mathop{{\otimes}} z).y|=|x.v|zy=(x{\triangleright} |v|)(x{\triangleleft} |v|)zy=x|v|zy=x|v\mathop{{\otimes}} z|y$. Hence, we have a $G$-graded $K$-bimodule. Conversely, if $W$ is a $G$-graded $K$-bimodule, we let
\[ V=\{w\in W\ |\ |w|\in R\},\quad x.v=xv(x{\triangleleft} |v|)^{-1},\quad \delta_r.v=\delta_{r,|v|}v,\]
where $v$ on the right is viewed in $W$ and we use the $K$-bimodule structure. This is arranged so that $x.v$ on the left lives in $V$. Indeed, $|x.v|=x|v|(x{\triangleleft} |v|)^{-1}=x{\triangleright} |v|$ and $x.(y.v)=xyv(y{\triangleleft} |v|)^{-1}(x{\triangleleft}(y{\triangleright} |v|))^{-1}=xyv((xy){\triangleleft} |v|)^{-1}$ by the matched pair condition, as required for a representation of $\Xi(R,K)$. One can check that this is inverse to the other direction. Thus, given $W=\oplus_{rx\in G}W_{rx}=\oplus_{x\in K} W_{Rx}$, where we let $W_{Rx}=\oplus_{r\in R}W_{rx}$, the right action by $x\in K$ gives an isomorphism $W_{Rx}{\cong} V\mathop{{\otimes}} x$ as vector spaces and hence recovers $W=V\mathop{{\otimes}}\mathbb{C} K$. This clearly has the correct right $K$-action and from the left $x.(v\mathop{{\otimes}} z)=xv(x{\triangleleft}|v|)^{-1}\mathop{{\otimes}} (x{\triangleleft}|v|)z$, which under the identification maps to $xv(x{\triangleleft}|v|)^{-1} (x{\triangleleft}|v|)z=xvz\in W$ as required given that $v\mathop{{\otimes}} z$ maps to $vz$ in $W$.
Now, if $V,V'$ are $\Xi(R,K)$ modules then as vector spaces,
\[ F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')=(V\mathop{{\otimes}} \mathbb{C} K)\mathop{{\otimes}}_{\mathbb{C} K} (V'\mathop{{\otimes}} \mathbb{C} K)=V\mathop{{\otimes}} V'\mathop{{\otimes}} \mathbb{C} K{\buildrel f_{V,V'}\over{\cong}}F(V\mathop{{\otimes}} V')\]
by the obvious identifications except that in the last step we allow ourselves the possibility of a nontrivial isomorphism as vector spaces. For the actions on the two sides,
\[ x.(v\mathop{{\otimes}} v'\mathop{{\otimes}} z).y=x.(v\mathop{{\otimes}} v')\mathop{{\otimes}} (x{\triangleleft} |v\mathop{{\otimes}} v'|)zy= x.v\mathop{{\otimes}} (x{\triangleleft} |v|).v'\mathop{{\otimes}} ((x{\triangleleft}|v|){\triangleleft}|v'|)zy,\]
where on the right, we have $x.(v\mathop{{\otimes}} 1)=x.v \mathop{{\otimes}} x{\triangleleft}|v|$ and then take $x{\triangleleft}|v|$ via the $\mathop{{\otimes}}_{\mathbb{C} K}$ to act on $v'\mathop{{\otimes}} z$ as per our identification. Comparing the $x$ action on the $V\mathop{{\otimes}} V'$ factor, we need
\[\Delta x=\sum_{r\in R}x\delta_r\mathop{{\otimes}} x{\triangleleft} r= \sum_{r\in R}\delta_{x{\triangleright} r}\mathop{{\otimes}} x \mathop{{\otimes}} 1\mathop{{\otimes}} x{\triangleleft} r\]
as a modified coproduct without requiring a nontrivial $f_{V,V'}$ for this to work. The first expression is viewed in $\Xi(R,K)^{\mathop{{\otimes}} 2}$ and the second is on the underlying vector space. Likewise, looking at the grading of $F(V\mathop{{\otimes}} V')$ and comparing with the grading of $F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')$, we need to define $|v\mathop{{\otimes}} v'|=|v|\cdot|v'|\in R$ and use $|v|\cdot|v'|\tau(|v|,|v'|)=|v||v'|$ to match the degree on the left hand side. This amounts to the coproduct of $\delta_r$ in $\Xi(R,K)$,
\[ \Delta\delta_r=\sum_{s\cdot t=r}\delta_s\mathop{{\otimes}}\delta_t=\sum_{s\cdot t=r} \delta_s\mathop{{\otimes}} 1\mathop{{\otimes}} \delta_t \mathop{{\otimes}} 1\]
{\em and} a further isomorphism
\[ f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)= v\mathop{{\otimes}} v'\mathop{{\otimes}}\tau(|v|,|v'|)z\] on the underlying vector space.
After applying this, the degree of this element is $|v\mathop{{\otimes}} v'|\tau(|v|,|v'|)z=|v||v'|z=|v\mathop{{\otimes}} 1||v'\mathop{{\otimes}} z|$, which is the degree on the original $F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')$ side. Now we show that $f_{V,V'}$ respects associators on each side of $F$. Taking the associator on the $\Xi(R,K)$-module side as
\[ \phi_{V,V',V''}:(V\mathop{{\otimes}} V')\mathop{{\otimes}} V''\to V\mathop{{\otimes}}(V'\mathop{{\otimes}} V''),\quad \phi_{V,V',V''}((v\mathop{{\otimes}} v')\mathop{{\otimes}} v'')=\phi^1.v\mathop{{\otimes}} (\phi^2.v'\mathop{{\otimes}}\phi^3.v'')\]
and $\phi$ trivial on the $G$-graded $K$-bimodule side, for $F$ to be monoidal with the stated $f_{V,V'}$ etc, we need
\begin{align*}
F(\phi_{V,V',V''})&f_{V\mathop{{\otimes}} V',V''}f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} z)\\
&=F(\phi_{V,V',V''})f_{V\mathop{{\otimes}} V',V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}} (\tau(|v|,|v'|){\triangleleft}|v''|)z)\\
&=F(\phi_{V,V',V''})(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}}\tau(|v|\cdot|v'|,\tau(|v|,|v'|){\triangleright} |v''|)(\tau(|v|,|v'|){\triangleleft}|v''|)z)\\
&=F(\phi_{V,V',V''})(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}} \tau(|v|,|v'|\cdot|v''|)\tau(|v'|,|v''|)z),\\
f_{V,V'\mathop{{\otimes}} V''}&f_{V',V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} z)=f_{V,V'\mathop{{\otimes}} V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} \tau(|v'|,|v''|)z) \\
&=v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}}\tau(|v|,|v'\mathop{{\otimes}} v''|)\tau(|v'|,|v''|)z =v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}}\tau(|v|,|v'|\cdot|v''|)\tau(|v'|,|v''|)z,\end{align*}
where for the first equality we moved $\tau(|v|,|v'|)$ in the output of $f_{V,V'}$ via $\mathop{{\otimes}}_{\mathbb{C} K}$ to act on the $v''$. We used
the cocycle property of $\tau$ for the 3rd equality. Comparing results, we need
\[ \phi_{V,V',V''}((v\mathop{{\otimes}} v')\mathop{{\otimes}} v'')=v\mathop{{\otimes}}( v'\mathop{{\otimes}} \tau(|v|,|v'|)^{-1}.v''),\quad \phi=\sum_{s,t\in R}(\delta_s\mathop{{\otimes}} 1)\mathop{{\otimes}}(\delta_t\mathop{{\otimes}} 1)\mathop{{\otimes}} (1\mathop{{\otimes}} \tau(s,t)^{-1}).\]
Note that we can write
\[ f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)=(\sum_{s,t\in R}(\delta_s\mathop{{\otimes}} 1)\mathop{{\otimes}}(\delta_t\mathop{{\otimes}} 1)\mathop{{\otimes}} \tau(s,t)).(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)\]
but we are not saying that $\phi$ is a coboundary since this is not given by the action of an element of $\Xi(R,K)^{\mathop{{\otimes}} 2}$.
\endproof
This derives the quasi-bialgebra structure on $\Xi(R,K)$ used in Section~\ref{sec:quasi}, but now so as to obtain an
equivalence of categories.
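The cocycle property of $\tau$ used for the 3rd equality in the proof above, together with quasi-associativity of the dot product, can be verified mechanically in a concrete case. The following sketch (ours, for $S_2\subset S_3$ with its second transversal, under the assumed identification $u=(12)$, $v=(13)$, $w=(23)$) checks both identities over all triples:

```python
# Illustrative check (ours) of the identities behind the associator
# computation:   (s.t).(tau(s,t)|>r) = s.(t.r)   (quasi-associativity)
#   tau(s.t, tau(s,t)|>r)(tau(s,t)<|r) = tau(s, t.r) tau(t, r)  (cocycle),
# for S_2 in S_3 with its second transversal, assuming u=(12), v=(13), w=(23).
from itertools import product

def mul(a, b):  # compose permutations given as tuples of images
    return tuple(a[b[i]] for i in range(3))

e, u, v, w = (0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1)
K, R = [e, u], [e, v, w]                        # second transversal

def factorise(g):
    # unique factorisation g = r x with r in R, x in K
    return next((r, x) for r in R for x in K if mul(r, x) == g)

dot = lambda s, t: factorise(mul(s, t))[0]      # s . t
tau = lambda s, t: factorise(mul(s, t))[1]      # cocycle
act = lambda x, r: factorise(mul(x, r))[0]      # x |> r
ract = lambda x, r: factorise(mul(x, r))[1]     # x <| r

for s, t, r in product(R, R, R):
    # quasi-associativity of the dot product
    assert dot(dot(s, t), act(tau(s, t), r)) == dot(s, dot(t, r))
    # cocycle property of tau
    assert mul(tau(dot(s, t), act(tau(s, t), r)), ract(tau(s, t), r)) \
        == mul(tau(s, dot(t, r)), tau(t, r))
```

Both identities follow for any transversal from associativity in $G$; the sketch simply confirms them on this example.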
\subsection{Drinfeld twists induced by change of transversal}
We recall that if $H$ is a quasi-Hopf algebra and $\chi\in H\mathop{{\otimes}} H$ is a {\em cochain} in the sense of being invertible with $(\mathrm{id}\mathop{{\otimes}}
{\epsilon})\chi=({\epsilon}\mathop{{\otimes}}\mathrm{id})\chi=1$, then its {\em Drinfeld twist} $\bar H$ is another quasi-Hopf algebra
\[ \bar\Delta=\chi^{-1}\Delta(\ )\chi,\quad \bar\phi=\chi_{23}^{-1}((\mathrm{id}\mathop{{\otimes}}\Delta)\chi^{-1})\phi ((\Delta\mathop{{\otimes}}\mathrm{id})\chi)\chi_{12},\quad \bar{\epsilon}={\epsilon}\]
\[ \bar S=S,\quad\bar\alpha=(S\chi^1)\alpha\chi^2,\quad \bar\beta=(\chi^{-1})^1\beta S(\chi^{-1})^2\]
where $\chi=\chi^1\mathop{{\otimes}}\chi^2$ with a sum of such terms understood and we use the same notation for $\chi^{-1}$, see \cite[Thm.~2.4.2]{Ma:book}, but note that our $\chi$ is denoted $F^{-1}$ there. In categorical terms, this twist
corresponds to a monoidal equivalence $G:{}_{H}\hbox{{$\mathcal M$}}\to {}_{H^\chi}\hbox{{$\mathcal M$}}$ which is the identity on objects and morphisms but has a nontrivial natural transformation
\[ g_{V,V'}:G(V)\bar\mathop{{\otimes}} G(V'){\cong} G(V\mathop{{\otimes}} V'),\quad g_{V,V'}(v\mathop{{\otimes}} v')= \chi^1.v\mathop{{\otimes}}\chi^2.v'.\]
The next theorem follows by the above reconstruction arguments, but here we check it directly. The logic is that for different $R,\bar R$ the categories of modules are both monoidally equivalent to ${}_K\hbox{{$\mathcal M$}}_K^G$ and hence monoidally equivalent to each other, but not in a manner that is compatible with the forgetful functor to Vect. Hence these should be related by a cochain twist.
\begin{theorem}\label{thmtwist} Let $R,\bar R$ be two transversals with $\bar r\in\bar R$ representing the same coset as $r\in R$. Then $\Xi(\bar R,K)$ is a cochain twist of $\Xi(R,K)$ at least as quasi-bialgebras (and as quasi-Hopf algebras if one of them is). The Drinfeld cochain is $\chi=\sum_{r\in R}(\delta_r\mathop{{\otimes}} 1)\mathop{{\otimes}} (1\mathop{{\otimes}} r^{-1}\bar r)$. \end{theorem}
{\noindent {\bfseries Proof:}\quad } Let $R,\bar R$ be two transversals. Then for each $r\in R$, the coset $rK$ has a unique representative $\bar r\in\bar R$. Hence $\bar r= r c_r$ for some function $c:R\to K$ determined by the two transversals as $c_r=r^{-1}\bar r$ in $G$. One can show that the cocycle matched pairs are related by
\[ x\bar{\triangleright} \bar r=(x{\triangleright} r)c_{x{\triangleright} r},\quad x\bar{\triangleleft} \bar r= c_{x{\triangleright} r}^{-1}(x{\triangleleft} r)c_r\]
among other identities. On using
\begin{align*} \bar s\bar t=sc_s tc_t=s (c_s{\triangleright} t)(c_s{\triangleleft} t)c_t&= (s\cdot c_s{\triangleright} t)\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t\\&=\overline{ s\cdot (c_s{\triangleright} t)}c_{s\cdot c_s{\triangleright} t}^{-1}\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t\end{align*}
and factorising using $\bar R$, we see that
\begin{equation}\label{taucond} \bar s\, \bar\cdot\, \bar t= \overline{s\cdot c_s{\triangleright} t},\quad\bar\tau(\bar s,\bar t)=c_{s\cdot c_s{\triangleright} t}^{-1}\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t.\end{equation}
We will construct a monoidal functor $G:{}_{\Xi(R,K)}\hbox{{$\mathcal M$}}\to {}_{\Xi(\bar R,K)}\hbox{{$\mathcal M$}}$ with $g_{V,V'}(v\mathop{{\otimes}} v')= \chi^1.v\mathop{{\otimes}}\chi^2.v'$ for a suitable $\chi\in \Xi(R,K)^{\mathop{{\otimes}} 2}$. First, let $F:{}_{\Xi(R,K)}\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ be the monoidal functor above with natural isomorphism $f_{V,V'}$ and $\bar F:{}_{\Xi(\bar R,K)}\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ the parallel for $\Xi(\bar R,K)$ with isomorphism $\bar f_{V,V'}$. Then
\[ C:F\to \bar F\circ G,\quad C_V:F(V)=V\mathop{{\otimes}}\mathbb{C} K\to V\mathop{{\otimes}} \mathbb{C} K=\bar FG(V),\quad C_V(v\mathop{{\otimes}} z)=v\mathop{{\otimes}} c_{|v|}^{-1}z\]
is a natural isomorphism. To check this, on the right hand side we have, denoting the $\bar R$-grading by $||\ ||$, the $G$-grading and $K$-bimodule structure
\begin{align*} |C_V(v\mathop{{\otimes}} z)|&= |v\mathop{{\otimes}} c_{|v|}^{-1}z|= ||v||c_{|v|}^{-1}z=|v|z=|v\mathop{{\otimes}} z|,\\
x.C_V(v\mathop{{\otimes}} z).y&=x.(v\mathop{{\otimes}} c_{|v|}^{-1}z).y=x.v\mathop{{\otimes}} (x\bar{\triangleleft} ||v||)c_{|v|}^{-1}zy=x.v \mathop{{\otimes}} c_{x{\triangleright} |v|}^{-1} (x{\triangleleft} |v|)zy\\
&= C_V(x.(v\mathop{{\otimes}} z).y).\end{align*}
We want these two functors not only to be naturally isomorphic, but for the isomorphism to respect that they are both monoidal functors. Here $\bar F\circ G$ has the natural isomorphism
\[ \bar f^g_{V,V'}= \bar F(g_{V,V'})\circ \bar f_{G(V),G(V')}\]
by which it is a monoidal functor.
The condition for a natural isomorphism $C$ between monoidal functors to be monoidal is that $C$ behaves in the obvious way on
tensor product objects via the natural isomorphisms associated to each monoidal functor. In our case, this means
\[ \bar f^g_{V,V'}\circ (C_{V}\mathop{{\otimes}} C_{V'}) = C_{V\mathop{{\otimes}} V'}\circ f_{V,V'}: F(V)\mathop{{\otimes}} F(V')\to \bar F G(V\mathop{{\otimes}} V').\]
Putting in the specific form of these maps, the right hand side is
\[C_{V\mathop{{\otimes}} V'}\circ f_{V,V'}(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K v'\mathop{{\otimes}} z)=C_{V\mathop{{\otimes}} V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|)z)=v\mathop{{\otimes}} v'\mathop{{\otimes}} c^{-1}_{|v\mathop{{\otimes}} v'|}\tau(|v|,|v'|)z,\]
while the left hand side is
\begin{align*}\bar f^g_{V,V'}\circ (C_{V}\mathop{{\otimes}} C_{V'})&(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K v'\mathop{{\otimes}} z)=\bar f^g_{V,V'}(v\mathop{{\otimes}} c^{-1}_{|v|}\mathop{{\otimes}}_K v'\mathop{{\otimes}} c^{-1}_{|v'|}z)\\
&=\bar f^g_{V,V'}(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K c^{-1}_{|v|}.v'\mathop{{\otimes}} (c^{-1}_{|v|}\bar{\triangleright} ||v'||)c^{-1}_{|v'|}z)\\
&=\bar F(g_{V,V'})(v\mathop{{\otimes}} c^{-1}_{|v|}.v'\mathop{{\otimes}} \bar\tau(||v||,||c^{-1}_{|v|}.v'||)(c^{-1}_{|v|}\bar{\triangleright}||v'||)c^{-1}_{|v'|}z)\\
&=\bar F(g_{V,V'})(v\mathop{{\otimes}} c^{-1}_{|v|}.v'\mathop{{\otimes}} c^{-1}_{|v\mathop{{\otimes}} v'|}\tau(|v|,|v'|)z),
\end{align*}
using the second of (\ref{taucond}) and $|v\mathop{{\otimes}} v'|=|v|\cdot|v'|$. We also used $\bar f^g_{V,V'}=\bar F(g_{V,V'})\bar f_{G(V),G(V')}:\bar FG(V)\mathop{{\otimes}} \bar FG(V')\to \bar FG(V\mathop{{\otimes}} V')$. Comparing, we need $\bar F(g_{V,V'})$ to be the action of the element
\[ \chi=\sum_{r\in R} \delta_r\mathop{{\otimes}} c_r\in \Xi(R,K)^{\mathop{{\otimes}} 2}.\]
It follows from the arguments, but one can also check directly, that $\phi$ indeed twists as stated to $\bar\phi$ when these are given by Lemma~\ref{Xibialg}, again using (\ref{taucond}). \endproof
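Since both transversals for $S_2\subset S_3$ are explicit, the relations between the two matched pairs and the twisting relations (\ref{taucond}) can be verified numerically. A Python sketch (ours, under the assumed identification $u=(12)$, $v=(13)$, $w=(23)$, with permutations as tuples of images composed as functions):

```python
# Consistency check (ours) of the relations in this proof, for the two
# transversals of S_2 in S_3, assuming u=(12), v=(13), w=(23).
def mul(a, b):
    return tuple(a[b[i]] for i in range(3))

def inv(a):
    out = [0, 0, 0]
    for i in range(3):
        out[a[i]] = i
    return tuple(out)

e, u, v, w = (0, 1, 2), (1, 0, 2), (2, 1, 0), (0, 2, 1)
uv, vu = mul(u, v), mul(v, u)
K, R, Rbar = [e, u], [e, uv, vu], [e, w, v]
bar = {e: e, uv: w, vu: v}                 # same cosets: bar(r) = r c_r
c = {r: mul(inv(r), bar[r]) for r in R}    # c_r = r^{-1} bar(r), lands in K

def fact(g, T):  # unique factorisation g = t x with t in T, x in K
    return next((t, x) for t in T for x in K if mul(t, x) == g)

act = lambda x, r: fact(mul(x, r), R)[0]   # x |> r   (w.r.t. R)
ract = lambda x, r: fact(mul(x, r), R)[1]  # x <| r
dot = lambda s, t, T: fact(mul(s, t), T)[0]
tau = lambda s, t, T: fact(mul(s, t), T)[1]

assert c[e] == e and c[uv] == u and c[vu] == u   # matches c_e=e, c_uv=c_vu=u

# relations between the two matched pairs
for x in K:
    for r in R:
        assert dot(x, bar[r], Rbar) == bar[act(x, r)]     # x bar|> bar(r)
        assert tau(x, bar[r], Rbar) == \
            mul(mul(inv(c[act(x, r)]), ract(x, r)), c[r])  # x bar<| bar(r)

# the two relations (taucond)
for s in R:
    for t in R:
        st = dot(s, act(c[s], t), R)                       # s . (c_s |> t)
        assert dot(bar[s], bar[t], Rbar) == bar[st]
        assert tau(bar[s], bar[t], Rbar) == \
            mul(mul(mul(inv(c[st]), tau(s, act(c[s], t), R)),
                    ract(c[s], t)), c[t])
```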
The twisting of a quasi-Hopf algebra is again a quasi-Hopf algebra. Hence, we have:
\begin{corollary}\label{twistant} If $R$ has $(\ )^R$ bijective giving a quasi-Hopf algebra with regular antipode $S,\alpha=1,\beta$ as in Proposition~\ref{standardS} and $\bar R$ is another transversal then $\Xi(\bar R,K)$ in the twisting form of Theorem~\ref{thmtwist} has an antipode
\[ \bar S=S,\quad \bar \alpha=\sum_r \delta_{r^R} c_r ,\quad \bar \beta =\sum_r \delta_r \tau(r,r^R)(c_r^{-1}{\triangleleft} r^R)^{-1} . \]
This is a regular antipode if $(\ )^R$ for $\bar R$ is also bijective (i.e. $\bar\alpha$ is then invertible and can be transformed back to standard form to make it 1).\end{corollary}
{\noindent {\bfseries Proof:}\quad } We work with the initial quasi-Hopf algebra $\Xi(R,K)$ and ${\triangleright},{\triangleleft},\tau$ refer to this but note that $\Xi(\bar R,K)$ is the same algebra when $\delta_r$ is identified with the corresponding $\delta_{\bar r}$. Then
\begin{align*}\bar \alpha&=(S\chi^{1})\chi^{2}=\sum_r (S\delta_r)\, c_r=\sum_r\delta_{r^R}c_r\end{align*}
using the formula for $S\delta_r=\delta_{r^R}$ in Proposition~\ref{standardS}. Similarly,
$\chi^{-1}=\sum_r \delta_r\mathop{{\otimes}} c_r^{-1}$ and we use $S,\beta$ from the above lemma, where
\[ S (1\mathop{{\otimes}} x)= \sum_s \delta_{(x^{-1}{\triangleright} s)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} s=\sum_t\delta_{t^R}\mathop{{\otimes}} x^{-1}{\triangleleft}(x{\triangleright} t)=\sum_t\delta_{t^R}\mathop{{\otimes}} (x{\triangleleft} t)^{-1}.\]
Then
\begin{align*} \bar \beta &=\chi^{-1}\beta S\chi^{-2}=\sum_{r,s,t}\delta_r\delta_s\tau(s,s^R)\delta_{t^R}(c_r^{-1}{\triangleleft} t)^{-1}\\
&=\sum_{r,t} \delta_r\tau(r,r^R)\delta_{t^R}(c_r^{-1}{\triangleleft} t)^{-1}=\sum_{r,t}\delta_r\delta_{\tau(r,r^R){\triangleright} t^R}\tau(r,r^R) (c_r^{-1}{\triangleleft} t)^{-1}.\end{align*}
Commuting the $\delta$-functions to the left requires $r=\tau(r,r^R){\triangleright} t^R$ or $r^{RR}=\tau(r,r^R)^{-1}{\triangleright} r= t^R$ so $t=r^R$ under our assumptions, giving the answer stated.
If $(\ )^R$ is bijective then $\bar\alpha^{-1}=\sum_r c_r^{-1}\delta_{r^R}=\sum_r \delta_{c_r^{-1}{\triangleright} r^R}c_r^{-1}$ provides the left inverse. On the other side, we need $c_r^{-1}{\triangleright} r^R= c_s^{-1}{\triangleright} s^R$ iff $r=s$. This is true if $(\ )^{R}$ for $\bar R$ is also bijective. That is because, if we write $(\ )^{\bar R}$ for the right inverse with respect to $\bar R$, one can show by comparing the factorisations that
\[ \bar s^{\bar R}=\overline{c_s^{-1}{\triangleright} s^R},\quad \overline{s^R}=c_s\bar{\triangleright} \bar s^{\bar R}\]
and we use the first of these. \endproof
\begin{example}\rm With reference to the list of transversals for $S_2\subset S_3$, we have four quasi-Hopf algebras of which two were already computed in Example~\ref{exS3quasi}.
{\sl (i) 2nd transversal as twist of the first.} Here $\bar\Xi$ is generated by $\mathbb{Z}_2$ as $u$ again and $\delta_{\bar r}$ with $\bar R=\{e,w,v\}$. We have the same cosets represented by these with $\bar e=e$, $\overline{uv}=w$ and $\overline{vu}=v$, which means $c_e=e, c_{vu}=u, c_{uv}=u$. To compare the algebras in the two cases, we identify $\delta_0=\delta_e,\delta_1=\delta_w, \delta_2=\delta_v$ as delta-functions on $G/K$ (rather than on $G$) in order to identify the algebras of $\bar\Xi$ and $\Xi$. The cochain from Theorem~\ref{thmtwist} is \[ \chi=\delta_e\mathop{{\otimes}} e+(\delta_{vu}+\delta_{uv})\mathop{{\otimes}} u=\delta_0\mathop{{\otimes}} 1+ (\delta_1+\delta_2)\mathop{{\otimes}} u=\delta_0\mathop{{\otimes}} 1+ (1-\delta_0)\mathop{{\otimes}} u \]
as an element of $\Xi\mathop{{\otimes}}\Xi$. One can check that this conjugates the two coproducts as claimed. We also have
\[ \chi^2=1\mathop{{\otimes}} 1,\quad ({\epsilon}\mathop{{\otimes}}\mathrm{id})\chi=(\mathrm{id}\mathop{{\otimes}}{\epsilon})\chi=1.\]
We spot check (\ref{taucond}), for example $v\bar\cdot w=\overline{vu}\, \bar\cdot\, \overline{uv}=\overline{uv}=\overline{vuvu}=\overline{vu( u{\triangleright} (uv))}$, as it had to be. We should therefore find that
\[((\Delta\mathop{{\otimes}}\mathrm{id})\chi)\chi_{12}=((\mathrm{id}\mathop{{\otimes}}\Delta)\chi)\chi_{23}\bar\phi. \]
We have checked directly that this indeed holds. Next, the antipode of the first transversal should twist to
\[ \bar S=S,\quad \bar\alpha=\delta_e c_e+\delta_{uv}c_{vu}+\delta_{vu}c_{uv}=\delta_e(e-u)+u=\delta_e c_e+\delta_{vu}c_{vu}+\delta_{uv}c_{uv}=\bar\beta\]
by Corollary~\ref{twistant} for twisting the antipode. Here, $U=\bar\alpha^{-1}=\bar\beta = U^{-1}$ and $\bar S'=U(S\ )U^{-1}$ with $\bar\alpha'=\bar\beta'=1$ should also be an antipode. We can check this:
\[U u = (\delta_0(e-u)+u)u = \delta_0(u-e)+e = u(\delta_{u^{-1}{\triangleright} 0}(e-u)+u) = u U\]
so $\bar S' u = UuU^{-1} = u$, and
\[\bar S' \delta_1 = U(S\delta_1)U= U\delta_2 U = (\delta_0(e-u)+u)\delta_2(\delta_0(e-u)+u) = \delta_1.\]
\bigskip
{\sl (ii) 3rd transversal as a twist of the first.} A mixed up choice is $\bar R=\{e,uv,v\}$ which is not a subgroup so $\tau$ is nontrivial. One has
\[ \tau(uv,uv)=\tau(v,uv)=\tau(uv,v)=u,\quad \tau(v,v)=e,\quad v.v=e,\quad v.uv=uv,\quad uv.v=e,\quad uv.uv=v,\]
\[ u{\triangleright} v=uv,\quad u{\triangleright} (uv)=v,\quad u{\triangleleft} v=e,\quad u{\triangleleft} uv=e\]
and all other cases implied from the properties of $e$. Here $v^R=v$ and $(uv)^R=v$. These are with respect to $\bar R$, but note that twisting calculations will take place with respect to $R$.
Writing $\delta_0=\delta_e,\delta_1=\delta_{uv},\delta_2=\delta_v$ we have the same algebra as before (as we had to) and now the coproduct etc.,
\[ \bar\Delta u=u\mathop{{\otimes}} 1+\delta_0u\mathop{{\otimes}} (u-1),\quad \bar\Delta\delta_0=\delta_0\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_2+\delta_1\mathop{{\otimes}}\delta_2 \]
\[
\bar\Delta\delta_1=\delta_0\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_1,\quad \bar\Delta\delta_2=\delta_0\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_1,\]
\[ \bar\phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1+ (\delta_1\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_1)(u-1)=\bar\phi^{-1}\]
for the quasibialgebra. We used the $\tau,{\triangleright},{\triangleleft},\cdot$ for $\bar R$ for these direct calculations.
Now we consider twisting with
\[ c_0=e,\quad c_1=(uv)^{-1}uv=1,\quad c_2=v^{-1}vu=u,\quad \chi=1\mathop{{\otimes}} 1+ \delta_2\mathop{{\otimes}} (u-1)=\chi^{-1}\]
and check twisting the coproducts
\[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(u\mathop{{\otimes}} u)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=u\mathop{{\otimes}} 1+\delta_0u\mathop{{\otimes}}(u-1)=\bar\Delta u, \]
\[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_1)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_0,\]
\[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_2)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_1,\]
\[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_1)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_2.\]
One can also check that (\ref{taucond}) hold, e.g. for the first half,
\[ \bar 2=\bar 1\bar\cdot\bar 1=\overline{1+c_1{\triangleright} 1}=\overline{1+1},\quad \bar 0=\bar 1\bar\cdot\bar 2=\overline{1+c_1{\triangleright} 2}=\overline{1+2},\]
\[ \bar 1=\bar2\bar\cdot\bar 1=\overline{2+c_2{\triangleright} 1}=\overline{2+2},\quad \bar 0=\bar2\bar\cdot\bar 2=\overline{2+c_2{\triangleright} 2}=\overline{2+1}\]
as it must.
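The coproduct-twisting checks above can also be done by machine in the 6-dimensional algebra, using the basis $\delta_i u^a$ with relations $u\delta_j=\delta_{u{\triangleright} j}u$, where $u{\triangleright}$ fixes the label 0 and swaps 1 and 2. The following Python sketch (our own sparse-dictionary encoding, not part of the development) verifies $\chi^2=1\mathop{{\otimes}} 1$ and all four twisted coproducts against the barred formulas stated earlier:

```python
ACT = {0: 0, 1: 2, 2: 1}  # u |> on the coset labels

def mulb(x, y):  # product of two basis elements delta_i u^a, as a sparse dict
    (i, a), (j, b) = x, y
    jj = j if a == 0 else ACT[j]
    return {(i, (a + b) % 2): 1} if i == jj else {}

def add(*es):
    out = {}
    for e in es:
        for k, v in e.items():
            out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v}

def neg(e):
    return {k: -v for k, v in e.items()}

def tens(x, y):
    return {(p, q): c1 * c2 for p, c1 in x.items() for q, c2 in y.items()}

def mul2(A, B):  # multiply elements of Xi (x) Xi
    out = {}
    for (x1, x2), ca in A.items():
        for (y1, y2), cb in B.items():
            for z1, c1 in mulb(x1, y1).items():
                for z2, c2 in mulb(x2, y2).items():
                    out[(z1, z2)] = out.get((z1, z2), 0) + ca * cb * c1 * c2
    return {k: v for k, v in out.items() if v}

one = {(i, 0): 1 for i in range(3)}
U = {(i, 1): 1 for i in range(3)}
d = [{(i, 0): 1} for i in range(3)]
d0u = {(0, 1): 1}  # delta_0 u

chi = add(tens(one, one), tens(d[2], add(U, neg(one))))
assert mul2(chi, chi) == tens(one, one)  # chi is an involution

def conj(M):
    return mul2(mul2(chi, M), chi)

# first-transversal (Z_3) coproducts conjugate to the barred coproducts
assert conj(tens(U, U)) == add(tens(U, one), tens(d0u, add(U, neg(one))))
assert conj(add(tens(d[0], d[0]), tens(d[1], d[2]), tens(d[2], d[1]))) == \
       add(tens(d[0], d[0]), tens(d[2], d[2]), tens(d[1], d[2]))
assert conj(add(tens(d[0], d[1]), tens(d[1], d[0]), tens(d[2], d[2]))) == \
       add(tens(d[0], d[1]), tens(d[1], d[0]), tens(d[2], d[1]))
assert conj(add(tens(d[0], d[2]), tens(d[2], d[0]), tens(d[1], d[1]))) == \
       add(tens(d[0], d[2]), tens(d[2], d[0]), tens(d[1], d[1]))
print("twisted coproducts verified")
```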
Now we apply the twisting of antipodes in Corollary~\ref{twistant}, remembering to do calculations now with $R$ where $\tau,{\triangleleft}$ are trivial, to get
\[ \bar S=S,\quad \bar\alpha=\delta_0+\delta_1c_2+\delta_2c_1=1+\delta_1(u-1),\quad \bar\beta=\delta_0+\delta_2c_2+\delta_1c_1=1+\delta_2(u-1),\]
which obey $\bar\alpha^2=\bar\alpha$ and $\bar\beta^2=\bar\beta$ and are therefore not (left or right) invertible. Hence we cannot transform either to 1 by any $U$; there is an antipode, but it is not a regular one. One can check that the antipode indeed works:
\begin{align*}(Su)\alpha+ (Su) (S\delta_0)\alpha(u-1)&=u(1+\delta_1(u-1))+\delta_0 u(1+\delta_1(u-1))(u-1)\\
&=u+\delta_2(1-u)+\delta_0(1-u)=u+(1-\delta_1)(1-u)=\alpha\\
u\beta+\delta_0u\beta S(u-1)&=u(1+\delta_2(u-1))+\delta_0 u(1+\delta_2(u-1))(u-1)\\
&=u+\delta_1(1-u)+\delta_0(1-u)=u+(1-\delta_2)(1-u)=\beta
\end{align*}
\begin{align*} (S\delta_0)\alpha\delta_0&+(S\delta_2)\alpha\delta_2+(S\delta_1)\alpha\delta_2=\delta_0(1+\delta_1(u-1))\delta_0+(1-\delta_0)(1+\delta_1(u-1))\delta_2\\
&=\delta_0+(1-\delta_0)\delta_2+\delta_1(\delta_1 u-\delta_2)=\delta_0+\delta_2+\delta_1u=\alpha\\
\delta_0\beta S\delta_0&+\delta_2\beta S\delta_2+\delta_1\beta S\delta_2=\delta_0(1+\delta_2(u-1))\delta_0+(1-\delta_0)(1+\delta_2(u-1))\delta_1\\
&=\delta_0+(1-\delta_0)\delta_1+(1-\delta_0)\delta_2(u-1)\delta_1=\delta_0+\delta_1+\delta_2(\delta_2u-\delta_1)=\beta
\end{align*}
and more simply on $\delta_1,\delta_2$.
The fourth transversal has a similar pattern to the 3rd, so we do not list its coproduct etc. explicitly.
\end{example}
In general, there will be many different choices of transversal. For $S_{n-1}\subset S_n$, the first two transversals for $S_2\subset S_3$ generalise as follows, giving a Hopf algebra and a strictly quasi-Hopf algebra respectively.
\begin{example}\rm {\sl (i) First transversal.} Here $R=\mathbb{Z}_n$ is a subgroup with $i=0,1,\cdots,n-1$ mod $n$ corresponding to the elements $(12\cdots n)^i$. Neither subgroup
is normal for $n\ge 4$, so both actions are nontrivial but $\tau$ is trivial. This expresses $S_n$ as a double cross product $\mathbb{Z}_n{\bowtie} S_{n-1}$ (with trivial $\tau$) and the matched pair of actions
\[ \sigma{\triangleright} i=\sigma(i),\quad (\sigma{\triangleleft} i)(j)=\sigma(i+j)-\sigma(i)\]
for $i,j=1,\cdots,n-1$, where we add and subtract mod $n$ but view the results in the range $1,\cdots, n$. This was actually found by twisting from the 2nd transversal below, but we can check it directly as follows. First,
\[\sigma (1\cdots n)^i= (\sigma{\triangleright} i)(\sigma{\triangleleft} i)=(12\cdots n)^{\sigma(i)}\left((1\cdots n)^{-\sigma(i)}\sigma(12\cdots n)^i\right)\]
and we check that the second factor sends $n\to i\to \sigma(i) \to n$, hence fixes $n$ and lies in $S_{n-1}$. It follows by the known fact of unique factorisation into these subgroups that this factor is $\sigma{\triangleleft} i$. Its action on $j=1,\cdots, n-1$ is
\[ (\sigma{\triangleleft} i)(j)=(12\cdots n)^{-\sigma(i)}\sigma(12\cdots n)^i(j)=\begin{cases} n-\sigma(i) & i+j=n\\ \sigma(i+j)-\sigma(i) & i+j\ne n\end{cases}=\sigma(i+j)-\sigma(i),\]
where $\sigma(i+j)\ne \sigma(i)$ as $i+j\ne i$ and $\sigma(n)=n$ as $\sigma\in S_{n-1}$. Since the two factors are subgroups, it also follows that these are indeed a matched pair of actions. We can also check the matched pair axioms directly. Clearly, ${\triangleright}$ is an action and
\[ \sigma(i)+ (\sigma{\triangleleft} i)(j)=\sigma(i)+\sigma(i+j)-\sigma(i)=\sigma{\triangleright}(i+j)\] for $i,j\in\mathbb{Z}_n$. On the other side,
\begin{align*}( (\sigma{\triangleleft} i){\triangleleft} j)(k)&=(\sigma{\triangleleft} i)(j+k)-(\sigma{\triangleleft} i)(j)=\sigma(i+(j+k))-\sigma(i)-\sigma(i+j)+\sigma(i)\\
&=\sigma((i+j)+k)-\sigma(i+j)=(\sigma{\triangleleft}(i+j))(k),\\
((\sigma{\triangleleft}(\tau{\triangleright} i))(\tau{\triangleleft} i))(j)&=(\sigma{\triangleleft}\tau(i))(\tau(i+j)-\tau(i))=\sigma(\tau(i)+\tau(i+j)-\tau(i)) -\sigma(\tau(i))\\
&= \sigma(\tau(i+j))-\sigma(\tau(i))=((\sigma\tau){\triangleleft} i)(j)\end{align*}
for $i,j\in \mathbb{Z}_n$ and $k=1,\cdots,n-1$.
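The factorisation $\sigma(12\cdots n)^i=(12\cdots n)^{\sigma(i)}(\sigma{\triangleleft} i)$ and the stated formula for $\sigma{\triangleleft} i$ can be verified exhaustively for small $n$. A minimal Python sketch (our own encoding, with permutations as 1-based dicts, composition $(fg)(x)=f(g(x))$ and $n=5$ for illustration):

```python
import itertools

n = 5  # illustrative size (S_4 inside S_5)

def compose(f, g):  # (fg)(x) = f(g(x))
    return {x: f[g[x]] for x in g}

ident = {x: x for x in range(1, n + 1)}
cycle = {x: x % n + 1 for x in range(1, n + 1)}  # the n-cycle (12...n)

def power(p, k):
    r = ident
    for _ in range(k % n):
        r = compose(p, r)
    return r

# check sigma (12...n)^i = (12...n)^{sigma(i)} (sigma <| i) with
# (sigma <| i)(j) = sigma(i+j) - sigma(i) mod n, for all sigma in S_{n-1}
for st in itertools.permutations(range(1, n)):
    sigma = {x: st[x - 1] for x in range(1, n)}
    sigma[n] = n                      # sigma in S_{n-1} fixes n
    for i in range(1, n):
        lhs = compose(sigma, power(cycle, i))
        tri = compose(power(cycle, -sigma[i]), lhs)   # candidate sigma <| i
        assert tri[n] == n            # lies in S_{n-1}
        for j in range(1, n):
            assert tri[j] == (sigma[(i + j - 1) % n + 1] - sigma[i]) % n
print("matched pair verified for n =", n)
```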
This gives $ \mathbb{C} S_{n-1}{\triangleright\!\blacktriangleleft}\mathbb{C}(\mathbb{Z}_n)$ as a natural bicrossproduct Hopf algebra which we identify with $\Xi$ (which we prefer to build on the other tensor product order). From Lemma~\ref{Xibialg} and Proposition~\ref{standardS}, this is spanned by products of $\delta_i$ for $i=0,\cdots,n-1$ as our labelling of $R=\mathbb{Z}_n$ and $\sigma\in S_{n-1}=K$, with cross relations $\sigma\delta_i=\delta_{\sigma(i)}\sigma$, $\sigma\delta_0=\delta_0\sigma$, and coproduct etc.,
\[ \Delta \delta_i=\sum_{j\in \mathbb{Z}_n}\delta_j\mathop{{\otimes}}\delta_{i-j},\quad \Delta\sigma=\sigma\delta_0\mathop{{\otimes}}\sigma+\sum_{i=1}^{n-1}\sigma\delta_i\mathop{{\otimes}}(\sigma{\triangleleft} i),\quad {\epsilon}\delta_i=\delta_{i,0},\quad{\epsilon}\sigma=1,\]
\[ S\delta_i=\delta_{-i},\quad S\sigma=\sigma^{-1}\delta_0+\sum_{i=1}^{n-1}(\sigma^{-1}{\triangleleft} i)\delta_{-i},\]
where $\sigma{\triangleleft} i$ is as above for $i=1,\cdots,n-1$. This is a usual Hopf $*$-algebra with $\delta_i^*=\delta_i$ and $\sigma^*=\sigma^{-1}$ according to Corollary~\ref{corstar}.
\medskip
{\sl (ii) 2nd transversal.} Here $R=\{e, (1\, n),(2\, n),\cdots,(n-1\, n)\}$, which has nontrivial ${\triangleright}$ in which $S_{n-1}$ permutes the 2-cycles according to the $i$ label, but trivial ${\triangleleft}$ since
\[ \sigma(i\, n)=(\sigma(i)\, n)\sigma,\quad \sigma{\triangleright} (i\ n)=(\sigma(i)\, n)\]
for all $i=1,\cdots,n-1$ and $\sigma\in S_{n-1}$. It has nontrivial $\tau$ as
\[ (i\, n )(j\, n)=(j\, n)(i\, j)\Rightarrow (i\, n)\cdot (j\, n)=(j\, n),\quad \tau((i\, n),(j\, n))=(i\, j)\]
for $i\ne j$, and we see that $\cdot$ has left but not right cancellation (one can solve $r\cdot x=t$ uniquely for $x$, but not in general $x\cdot r=t$). We also have $(i\,n)\cdot(i\,n)=e$ and $\tau((i\,n),(i\,n))=e$ so that $(\ )^R$ is the identity map, hence $R$ is regular.
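The displayed identities $(i\, n)(j\, n)=(j\, n)(i\, j)$ and $\sigma(i\, n)=(\sigma(i)\, n)\sigma$ behind this transversal can be brute-force checked; a small Python sketch (our own encoding, 1-based dicts, $(fg)(x)=f(g(x))$, $n=5$ for illustration):

```python
from itertools import permutations

n = 5  # illustrative size; the identities hold for any n

def t(a, b):  # transposition (a b) on {1,...,n}
    p = {x: x for x in range(1, n + 1)}
    p[a], p[b] = b, a
    return p

def mul(f, g):  # (fg)(x) = f(g(x))
    return {x: f[g[x]] for x in g}

# (i n)(j n) = (j n)(i j) for i != j
for i in range(1, n):
    for j in range(1, n):
        if i != j:
            assert mul(t(i, n), t(j, n)) == mul(t(j, n), t(i, j))

# sigma (i n) = (sigma(i) n) sigma for sigma in S_{n-1}
for st in permutations(range(1, n)):
    sigma = {x: st[x - 1] for x in range(1, n)}
    sigma[n] = n
    for i in range(1, n):
        assert mul(sigma, t(i, n)) == mul(t(sigma[i], n), sigma)
print("ok")
```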
This transversal gives a cross-product quasi-Hopf algebra $\Xi=\mathbb{C} S_{n-1}{\triangleright\!\!\!<}_\tau \mathbb{C}(R)$ where $R$ is a left quasigroup (i.e. unital and with left cancellation), except that we prefer to write it with the tensor factors in the other order. From Lemma~\ref{Xibialg} and Proposition~\ref{standardS}, this is
spanned by products of $\delta_i$ and $\sigma\in S_{n-1}$, where $\delta_0$ is the delta-function at $e\in R$ and $\delta_i$ the delta-function at $(i\, n)$ for $i=1,\cdots,n-1$. The algebra has the same cross relations $\sigma\delta_i=\delta_{\sigma(i)}\sigma$ for $i=1,\cdots,n-1$ as before, but now
the tensor coproduct etc., and nontrivial associator
\[\Delta\delta_0=\sum_{i=0}^{n-1}\delta_i\mathop{{\otimes}}\delta_i,\quad \Delta\delta_i=1\mathop{{\otimes}}\delta_i+\delta_i\mathop{{\otimes}}\delta_0,\quad \Delta \sigma=\sigma\mathop{{\otimes}}\sigma,\quad {\epsilon}\delta_i=\delta_{i,0},\quad{\epsilon}\sigma=1,\]
\[ S\delta_i=\delta_{i},\quad S\sigma=\sigma^{-1},\quad \alpha=\beta=1,\]
\[\phi=(1\mathop{{\otimes}}\delta_0+\delta_0\mathop{{\otimes}}(1-\delta_0)+\sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}}\delta_i)\mathop{{\otimes}} 1+ \sum_{i,j=1\atop i\ne j}^{n-1}\delta_i\mathop{{\otimes}}\delta_j\mathop{{\otimes}} (ij).\]
This is a $*$-quasi Hopf algebra with the same $*$ as before but now nontrivial
\[ \gamma=1,\quad \hbox{{$\mathcal G$}}=1\mathop{{\otimes}}\delta_0+\delta_0\mathop{{\otimes}}(1-\delta_0)+\sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}}\delta_i+ \sum_{i,j=1\atop i\ne j}^{n-1}\delta_i(ij)\mathop{{\otimes}}\delta_j(ij)\]
from Corollary~\ref{corstar}.
\medskip{\sl (iii) Twisting between the above two transversals.} We denote the first transversal $R=\mathbb{Z}_n$, where $i$ is identified with $(12\cdots n)^i$, and we denote the 2nd transversal by $\bar R$ with corresponding elements $\bar i=(i\ n)$. Then
\[ c_i=(12\cdots n)^{-i}(i\ n)\in S_{n-1},\quad c_i(j)=\begin{cases} n-i & j=i\\ j-i & else \end{cases}\]
for $i,j=1,\cdots,n-1$. If we use the stated ${\triangleright}$ for the first transversal then one can check that the first half of (\ref{taucond}) holds,
\[ \overline{i+c_i{\triangleright} i}=\overline{i+n-i}=e=\bar i\bar\cdot \bar i,\quad \overline{i+c_i{\triangleright} j}=\overline{i+j-i}=\bar j=\bar i\bar\cdot \bar j\]
as it must. We can also check that the actions are indeed related by twisting. Thus,
\[ \sigma{\triangleleft}\bar i=c_{\sigma{\triangleright} i}^{-1}(\sigma{\triangleleft} i)c_i=(\sigma(i),n)(12\cdots n)^{\sigma(i)}(\sigma{\triangleleft} i)(12\cdots n)^{-i}(i,n)=(\sigma(i),n)\sigma(i,n)=\sigma\]
\[ \sigma\bar{\triangleright} \bar i=(\sigma{\triangleright} i)c_{\sigma{\triangleright} i}=(12\cdots n)^{\sigma(i)}(12\cdots n)^{-\sigma(i)}(\sigma(i),n)=(\sigma(i),n),\]
where we did the computation with $\mathbb{Z}_n$ viewed in $S_n$.
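The membership $c_i\in S_{n-1}$, the stated formula for $c_i(j)$ and the twisted action $\sigma\bar{\triangleleft}\bar i=c_{\sigma{\triangleright} i}^{-1}(\sigma{\triangleleft} i)c_i=\sigma$ can likewise be machine-checked; a short Python sketch (our own encoding, $n=5$ for illustration):

```python
import itertools

n = 5  # illustrative size; the formulas are claimed for general n

def compose(f, g):  # (fg)(x) = f(g(x)); permutations as 1-based dicts
    return {x: f[g[x]] for x in g}

def power(p, k):
    r = {x: x for x in range(1, n + 1)}
    for _ in range(k % n):
        r = compose(p, r)
    return r

def t(a, b):  # transposition (a b)
    p = {x: x for x in range(1, n + 1)}
    p[a], p[b] = b, a
    return p

cycle = {x: x % n + 1 for x in range(1, n + 1)}  # (12...n)

for i in range(1, n):
    c = compose(power(cycle, -i), t(i, n))  # c_i = (12...n)^{-i} (i n)
    assert c[n] == n                        # c_i lies in S_{n-1}
    for j in range(1, n):
        assert c[j] == (n - i if j == i else (j - i) % n)

# c_{sigma(i)}^{-1} (sigma <| i) c_i = sigma, i.e. the twisted <| is trivial
for st in itertools.permutations(range(1, n)):
    sigma = {x: st[x - 1] for x in range(1, n)}
    sigma[n] = n
    for i in range(1, n):
        tri = compose(power(cycle, -sigma[i]), compose(sigma, power(cycle, i)))
        ci = compose(power(cycle, -i), t(i, n))
        cs = compose(power(cycle, -sigma[i]), t(sigma[i], n))
        cs_inv = {v: k for k, v in cs.items()}
        assert compose(cs_inv, compose(tri, ci)) == sigma
print("twisting data verified")
```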
It follows that the Hopf algebra from case (i) cochain twists to the simpler quasi-Hopf algebra in case (ii). The required cochain from Theorem~\ref{thmtwist} is
\[ \chi=\delta_0\mathop{{\otimes}} 1+ \sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}} (12\cdots n)^{-i}(in).\]
\end{example}
The above example is somewhat analogous to Drinfeld's $U_q(g)$, which as Hopf algebras are cochain twists of $U(g)$ viewed as a quasi-Hopf algebra. We conclude with the promised example related to the octonions. This is a version of \cite[Example~4.6]{KM2}, but with left and right swapped and some cleaned up conventions.
\begin{example}\rm
We let $G=Cl_3{>\!\!\!\triangleleft} \mathbb{Z}_2^3$, where $Cl_3$ is generated by $1,-1$ and $e_{i}$, $i=1,2,3$, with relations
\[ (-1)^2=1,\quad (-1)e_i=e_i(-1),\quad e_i^2=-1,\quad e_i e_j=-e_j e_i \]
for $i\ne j$ and the usual combination rules for the product of signs. Its elements can be enumerated as $\pm e_{\vec a}$ where $\vec{a}\in \mathbb{Z}_2^3$ is viewed in the additive group of 3-vectors with entries in the field $\mathbb{F}_2=\{0,1\}$ of order 2
and
\[ e_{\vec a}=e_1^{a_1}e_2^{a_2}e_3^{a_3},\quad e_{\vec a} e_{\vec b}=e_{\vec a+\vec b}(-1)^{\sum_{i\ge j}a_ib_j}. \]
This is the twisted group ring description of the 3-dimensional Clifford algebra over $\mathbb{R}$ in \cite{AlbMa}, but now restricted to coefficients $0,\pm1$ to give a group of order 16. As an example,
\[ e_{011}e_{101}=e_2e_3 e_1e_3=e_1e_2e_3^2=-e_1e_2=-e_{110}=-e_{011+101}\]
with the sign given by the formula.
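The sign rule $e_{\vec a} e_{\vec b}=e_{\vec a+\vec b}(-1)^{\sum_{i\ge j}a_ib_j}$ can be verified mechanically by normal-ordering words in the $e_i$ using only the relations $e_ie_j=-e_je_i$ and $e_i^2=-1$; a minimal Python sketch (our own check, not part of the development):

```python
from itertools import product

def normal_order(word):
    # reduce a word in e_1,e_2,e_3 using e_i e_j = -e_j e_i (i != j), e_i^2 = -1
    word, sign = list(word), 1
    i = 0
    while i < len(word) - 1:
        if word[i] == word[i + 1]:
            del word[i:i + 2]; sign = -sign; i = max(i - 1, 0)
        elif word[i] > word[i + 1]:
            word[i], word[i + 1] = word[i + 1], word[i]; sign = -sign; i = max(i - 1, 0)
        else:
            i += 1
    return sign, tuple(word)

def e_word(a):  # e_a = e_1^{a_1} e_2^{a_2} e_3^{a_3} as a sorted word
    return tuple(i + 1 for i in range(3) if a[i])

for a in product((0, 1), repeat=3):
    for b in product((0, 1), repeat=3):
        sign, word = normal_order(e_word(a) + e_word(b))
        assert word == e_word(tuple((x + y) % 2 for x, y in zip(a, b)))
        assert sign == (-1) ** sum(a[i] * b[j]
                                   for i in range(3) for j in range(3) if i >= j)
print("Clifford sign formula verified")
```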
We similarly write the elements of $K=\mathbb{Z}_2^3$ multiplicatively as $g^{\vec a}=g_1^{a_1}g_2^{a_2}g_3^{a_3}$ labelled by 3-vectors with values in $\mathbb{F}_2$. The generators $g_i$ commute and obey $g_i^2=e$. The group product becomes vector addition, and the cross relations are
\[ (-1)g_i=g_i(-1),\quad e_i g_i= -g_i e_i,\quad e_i g_j=g_j e_i\]
for $i\ne j$. This implies that $G$ has order 128.
(i) If we take $R=Cl_3$ itself then this will be a subgroup and we will have for $\Xi(R,K)$ an ordinary Hopf $*$-algebra as a semidirect product $\mathbb{C} \mathbb{Z}_2^3{\triangleright\!\!\!<} \mathbb{C}(Cl_3)$ except that we build it on the opposite tensor product.
(ii) Instead, we take as representatives the eight elements again labelled by 3-vectors over $\mathbb{F}_2$,
\[ r_{000}=1,\quad r_{001}=e_3,\quad r_{010}=e_2,\quad r_{011}=e_2e_3g_1\]
\[ r_{100}=e_1,\quad r_{101}=e_1e_3 g_2,\quad r_{110}=e_1e_2g_3,\quad r_{111}=e_1e_2e_3 g_1g_2g_3 \]
and their negations, as a version of \cite[Example~4.6]{KM2}. This can be written compactly as
\[ r_{\vec a}=e_{\vec a}g_1^{a_2 a_3}g_2^{a_1a_3}g_3^{a_1a_2}.\]
\begin{proposition}\cite{KM2} This choice of transversal makes $(R,\cdot)$ the octonion two-sided inverse property quasigroup $G_{\O}$ in the Albuquerque-Majid description of the octonions \cite{AlbMa},
\[ r_{\vec a}\cdot r_{\vec b}=(-1)^{f(\vec a,\vec b)} r_{\vec a+\vec b},\quad f(\vec a,\vec b)=\sum_{i\ge j}a_ib_j+ a_1a_2b_3+ a_1b_2a_3+b_1a_2a_3 \]
with the product on signed elements behaving as if bilinear. The action ${\triangleleft}$ is trivial, and the left action and cocycle $\tau$ are
\[ g^{\vec a}{\triangleright} r_{\vec b}=(-1)^{\vec a\cdot \vec b}r_{\vec b},\quad \tau(r_{\vec a},r_{\vec b})=g^{\vec a\times\vec b}=g_1^{a_2 b_3+a_3 b_2}g_2^{a_3 b_1+a_1b_3} g_3^{a_1b_2+a_2b_1}\]
with the action extended with signs as if linearly and $\tau$ independent of signs in either argument.
\end{proposition}
{\noindent {\bfseries Proof:}\quad } We check in the group
\begin{align*} r_{\vec a}r_{\vec b}&=e_{\vec a}g_1^{a_2 a_3}g_2^{a_1a_3}g_3^{a_1a_2}e_{\vec b}g_1^{b_2 b_3}g_2^{b_1b_3}g_3^{b_1b_2}\\
&=e_{\vec a}e_{\vec b}(-1)^{b_1a_2a_3+b_2a_1a_3+b_3a_1a_2} g_1^{a_2a_3+b_2b_3}g_2^{a_1a_3+b_1b_3}g_3^{a_1a_2+b_1b_2}\\
&=(-1)^{f(a,b)}r_{\vec a+\vec b}g_1^{a_2a_3+b_2b_3-(a_2+b_2)(a_3+b_3)}g_2^{a_1a_3+b_1b_3-(a_1+b_1)(a_3+b_3)}g_3^{a_1a_2+b_1b_2-(a_1+b_1)(a_2+b_2)}\\
&=(-1)^{f(a,b)}r_{\vec a+\vec b}g_1^{a_2b_3+b_2a_3} g_2^{a_1b_3+b_1a_3}g_3^{a_1b_2+b_1a_2},
\end{align*}
from which we read off $\cdot$ and $\tau$. For the second equality, we moved the $g_i$ to the right using the commutation rules in $G$. For the third equality we used the product in $Cl_3$ in our description above and then converted $e_{\vec a+\vec b}$ to $r_{\vec a+\vec b}$. \endproof
The product of the quasigroup $G_\O$ here is the same as the octonions product as an algebra over $\mathbb{R}$ in the description of \cite{AlbMa}, restricted to elements of the form $\pm r_{\vec a}$. The cocycle-associativity property of $(R,\cdot)$ says
\[ r_{\vec a}\cdot(r_{\vec b}\cdot r_{\vec c})=(r_{\vec a}\cdot r_{\vec b})\cdot\tau(\vec a,\vec b){\triangleright} r_{\vec c}=(r_{\vec a}\cdot r_{\vec b})\cdot r_{\vec c} (-1)^{(\vec a\times\vec b)\cdot\vec c}\]
giving -1 exactly when the 3 vectors are linearly independent as 3-vectors over $\mathbb{F}_2$. One also has $r_{\vec a}\cdot r_{\vec b}=\pm r_{\vec b}\cdot r_{\vec a}$ with $-1$ exactly when the two vectors are linearly independent, which means both nonzero and not equal, and $r_{\vec a} \cdot r_{\vec a}=\pm1 $ with $-1$ exactly when the one vector is linearly independent, i.e. not zero. (These are exactly the quasiassociativity, quasicommutativity and norm properties of the octonions algebra in the description of \cite{AlbMa}.) The 2-sided inverse is
\[ r_{\vec a}^{-1}=(-1)^{n(\vec a)}r_{\vec a},\quad n(0)=0,\quad n(\vec a)=1,\quad \forall \vec a\ne 0\]
with the inversion operation extended as usual with respect to signs.
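The quasiassociativity, quasicommutativity and norm properties just stated are finite checks over $\mathbb{F}_2^3$ and can be verified by brute force; a Python sketch (our own verification, not part of the development):

```python
from itertools import product

def f(a, b):  # cocycle for the octonion product r_a . r_b = (-1)^{f(a,b)} r_{a+b}
    bilin = sum(a[i] * b[j] for i in range(3) for j in range(3) if i >= j)
    cubic = a[0]*a[1]*b[2] + a[0]*b[1]*a[2] + b[0]*a[1]*a[2]
    return (bilin + cubic) % 2

def vadd(a, b):
    return tuple((x + y) % 2 for x, y in zip(a, b))

def cross_dot(a, b, c):  # (a x b) . c over F_2
    return (a[0]*(b[1]*c[2] + b[2]*c[1]) + a[1]*(b[0]*c[2] + b[2]*c[0])
            + a[2]*(b[0]*c[1] + b[1]*c[0])) % 2

def indep(*vs):  # linear independence over F_2
    return all(tuple(sum(k * v[i] for k, v in zip(ks, vs)) % 2
                     for i in range(3)) != (0, 0, 0)
               for ks in product((0, 1), repeat=len(vs)) if any(ks))

V = list(product((0, 1), repeat=3))
for a in V:
    assert (f(a, a) == 1) == (a != (0, 0, 0))                 # r_a . r_a = -1 iff a != 0
    for b in V:
        assert ((f(a, b) + f(b, a)) % 2 == 1) == indep(a, b)  # quasicommutativity
        for c in V:
            assoc = (f(b, c) + f(a, vadd(b, c)) + f(a, b) + f(vadd(a, b), c)) % 2
            assert assoc == cross_dot(a, b, c)                # associator sign
            assert (assoc == 1) == indep(a, b, c)             # -1 iff independent
print("octonion quasigroup identities verified")
```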
The quasi-Hopf algebra $\Xi(R,K)$ is spanned by $\delta_{(\pm,\vec a)}$ labelled by the points of $R$ and products of the $g_i$ with the relations $g^{\vec a}\delta_{(\pm, \vec b)}=\delta_{(\pm (-1)^{\vec a\cdot\vec b},\vec b)} g^{\vec a}$ and tensor coproduct etc.,
\[ \Delta \delta_{(\pm, \vec a)}=\sum_{(\pm', \vec b)}\delta_{(\pm' ,\vec b)}\mathop{{\otimes}}\delta_{(\pm\pm'(-1)^{n(\vec b)},\vec a+\vec b)},\quad \Delta g^{\vec a}=g^{\vec a}\mathop{{\otimes}} g^{\vec a},\quad {\epsilon}\delta_{(\pm,\vec a)}=\delta_{\vec a,0}\delta_{\pm,+},\quad {\epsilon} g^{\vec a}=1,\]
\[S\delta_{(\pm,\vec a)}=\delta_{(\pm(-1)^{n(\vec a)},\vec a)},\quad S g^{\vec a}=g^{\vec a},\quad\alpha=\beta=1,\quad \phi=\sum_{(\pm, \vec a),(\pm',\vec{b})} \delta_{(\pm,\vec a)}\mathop{{\otimes}}\delta_{(\pm',\vec{b})}\mathop{{\otimes}} g^{\vec a\times\vec b}\]
and from Corollary~\ref{corstar} is a $*$-quasi-Hopf algebra with $*$ the identity on $\delta_{(\pm,\vec a)},g^{\vec a}$ and
\[ \gamma=1,\quad \hbox{{$\mathcal G$}}=\sum_{(\pm, \vec a),(\pm',\vec{b})} \delta_{(\pm,\vec a)}g^{\vec a\times\vec b}
\mathop{{\otimes}}\delta_{(\pm',\vec{b})}g^{\vec a\times\vec b}.\]
The general form here is not unlike our $S_n$ example.
\end{example}
\subsection{Module categories context}
This section does not contain anything new beyond \cite{Os2,EGNO}, but completes the categorical picture that connects our algebra $\Xi(R,K)$ to the more general context of module categories, adapted to our notations.
Our first observation is that if $\mathop{{\otimes}}: {\hbox{{$\mathcal C$}}}\times {\hbox{{$\mathcal V$}}}\to {\hbox{{$\mathcal V$}}}$ is a left action of a monoidal category ${\hbox{{$\mathcal C$}}}$ on a category ${\hbox{{$\mathcal V$}}}$ (one says that ${\hbox{{$\mathcal V$}}}$ is a left ${\hbox{{$\mathcal C$}}}$-module) then one can check that this is the same thing as a monoidal functor $F:{\hbox{{$\mathcal C$}}}\to \mathrm{ End}({\hbox{{$\mathcal V$}}})$ where the set ${\rm End}({\hbox{{$\mathcal V$}}})$ of endofunctors can be viewed as a strict monoidal category with monoidal product the endofunctor composition $\circ$. Here ${\rm End}({\hbox{{$\mathcal V$}}})$ has monoidal unit $\mathrm{id}_{\hbox{{$\mathcal V$}}}$ and its morphisms are natural transformations between endofunctors. $F$ just sends an object $X\in {\hbox{{$\mathcal C$}}}$ to $X\mathop{{\otimes}}(\ )$ as a monoidal functor from ${\hbox{{$\mathcal V$}}}$ to ${\hbox{{$\mathcal V$}}}$. A monoidal functor comes with natural isomorphisms $\{f_{X,Y}\}$ and these are given tautologically by
\[ f_{X,Y}(V): F(X)\circ F(Y)(V)=X\mathop{{\otimes}} (Y\mathop{{\otimes}} V)\cong (X\mathop{{\otimes}} Y)\mathop{{\otimes}} V= F(X\mathop{{\otimes}} Y)(V)\]
as part of the monoidal action. Conversely, if given a functor $F$, we define $X\mathop{{\otimes}} V=F(X)V$ and extend the monoidal associativity of ${\hbox{{$\mathcal C$}}}$ to mixed objects using $f_{X,Y}$ to define $X\mathop{{\otimes}} (Y\mathop{{\otimes}} V)= F(X)\circ F(Y)V{\cong} F(X\mathop{{\otimes}} Y)V= (X\mathop{{\otimes}} Y)\mathop{{\otimes}} V$. The notion of a left module category is a categorification of the bijection between an algebra action $\cdot: A \mathop{{\otimes}} V\rightarrow V$ and a representation as an algebra map $A \rightarrow {\rm End}(V)$. There is an equally good notion of a right ${\hbox{{$\mathcal C$}}}$-module category extending $\mathop{{\otimes}}$ to ${\hbox{{$\mathcal V$}}}\times{\hbox{{$\mathcal C$}}}\to {\hbox{{$\mathcal V$}}}$. In the same way as one uses $\cdot$ for both the algebra product and the module action, it is convenient to use $\mathop{{\otimes}}$ for both in the categorified version. Similarly for the right module version.
Another general observation is that if ${\hbox{{$\mathcal V$}}}$ is a ${\hbox{{$\mathcal C$}}}$-module category for a monoidal category ${\hbox{{$\mathcal C$}}}$ then ${\rm Fun}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})$, the (left exact) functors from ${\hbox{{$\mathcal V$}}}$ to itself that are compatible with the action of ${\hbox{{$\mathcal C$}}}$, is another monoidal category. This is denoted ${\hbox{{$\mathcal C$}}}^*_{{\hbox{{$\mathcal V$}}}}$ in \cite{EGNO}, but should not be confused with the dual of a monoidal functor which was one of the origins\cite{Ma:rep} of the centre $\hbox{{$\mathcal Z$}}({\hbox{{$\mathcal C$}}})$ construction as a special case. Also note that if $A\in {\hbox{{$\mathcal C$}}}$ is an algebra in the category then ${\hbox{{$\mathcal V$}}}={}_A{\hbox{{$\mathcal C$}}}$, the left modules of $A$ in the category, is a {\em right} ${\hbox{{$\mathcal C$}}}$-module category. If $V$ is an $A$-module then we define $V\mathop{{\otimes}} X$ as the tensor product in ${\hbox{{$\mathcal C$}}}$ equipped with an $A$-action from the left on the first factor. Moreover, for certain `nice' right module categories ${\hbox{{$\mathcal V$}}}$, there exists a suitable algebra $A\in {\hbox{{$\mathcal C$}}}$ such that ${\hbox{{$\mathcal V$}}}\simeq {}_A{\hbox{{$\mathcal C$}}}$, see \cite{Os2}\cite[Thm~7.10.1]{EGNO} in other conventions. For such module categories, ${\rm Fun}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})\simeq {}_A{\hbox{{$\mathcal C$}}}_A$ the category of $A$-$A$-bimodules in ${\hbox{{$\mathcal C$}}}$. Here, if given an $A$-$A$-bimodule $E$ in ${\hbox{{$\mathcal C$}}}$, the corresponding endofunctor is given by $E\mathop{{\otimes}}_A(\ )$, where we require ${\hbox{{$\mathcal C$}}}$ to be Abelian so that we can define $\mathop{{\otimes}}_A$. 
This turns $V\in {}_A{\hbox{{$\mathcal C$}}}$ into another $A$-module in ${\hbox{{$\mathcal C$}}}$ and $E\mathop{{\otimes}}_A(V\mathop{{\otimes}} X){\cong} (E\mathop{{\otimes}}_A V)\mathop{{\otimes}} X$, so the construction commutes with the right ${\hbox{{$\mathcal C$}}}$-action.
Before we explain how these abstract ideas lead to ${}_K\hbox{{$\mathcal M$}}^G_K$, a more `obvious' case is the study of left module categories for ${\hbox{{$\mathcal C$}}} = {}_G\hbox{{$\mathcal M$}}$. If $K\subseteq G$ is a subgroup, we set ${\hbox{{$\mathcal V$}}} = {}_K\hbox{{$\mathcal M$}}$ for $i: K\subseteq G$. The functor ${\hbox{{$\mathcal C$}}}\to \mathrm{ End}({\hbox{{$\mathcal V$}}})$ just sends $X\in {\hbox{{$\mathcal C$}}}$ to $i^*(X)\mathop{{\otimes}}(\ )$ as a functor on ${\hbox{{$\mathcal V$}}}$, or more simply ${\hbox{{$\mathcal V$}}}$ is a left ${\hbox{{$\mathcal C$}}}$-module by $X\mathop{{\otimes}} V=i^*(X)\mathop{{\otimes}} V$. More generally\cite{Os2}\cite[Example~7.4.9]{EGNO}, one can include a cocycle $\alpha\in H^2(K,\mathbb{C}^\times)$ since we are only interested in monoidal equivalence, and this data $(K,\alpha)$ parametrises all indecomposable left ${}_G\hbox{{$\mathcal M$}}$-module categories. Moreover, here $\mathrm{ End}({\hbox{{$\mathcal V$}}})\simeq {}_K\hbox{{$\mathcal M$}}_K$, the category of $K$-bimodules, where a bimodule $E$ acts by $E\mathop{{\otimes}}_{\mathbb{C} K}(\ )$. So the data we need for a ${}_G\hbox{{$\mathcal M$}}$-module category is a monoidal functor ${}_G\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K$. This is of potential interest but is not the construction we were looking for.
Rather, we are interested in right module categories of ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, the category of $G$-graded vector spaces. It turns out that these are classified by the exact same data $(K,\alpha)$ (this is related to the fact that the $\hbox{{$\mathcal M$}}^G,{}_G\hbox{{$\mathcal M$}}$ have the same centre) but the construction is different. Thus, if $K\subseteq G$ is a subgroup, we consider $A=\mathbb{C} K$ regarded as an algebra in ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$ by $|x|=x$ viewed in $G$. One can also twist this by a cocycle $\alpha$, but here we stick to the trivial case. Then ${\hbox{{$\mathcal V$}}}={}_A{\hbox{{$\mathcal C$}}}={}_K\hbox{{$\mathcal M$}}^G$, the category of $G$-graded left $K$-modules, is a right ${\hbox{{$\mathcal C$}}}$-module category. Explicitly, if $X\in {\hbox{{$\mathcal C$}}}$ is a $G$-graded vector space and $V\in{\hbox{{$\mathcal V$}}}$ a $G$-graded left $K$-module then
\[ V\mathop{{\otimes}} X,\quad x.(v\mathop{{\otimes}} w)=x.v\mathop{{\otimes}} w,\quad |v\mathop{{\otimes}} w|=|v||w|,\quad \forall\ v\in V,\ w\in X\]
is another $G$-graded left $K$-module. Finally, by the general theory, there is an associated monoidal category
\[ {\hbox{{$\mathcal C$}}}^*_{\hbox{{$\mathcal V$}}}:={\rm Fun}_{{\hbox{{$\mathcal C$}}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})\simeq {}_K\hbox{{$\mathcal M$}}^G_K\simeq {}_{\Xi(R,K)}\hbox{{$\mathcal M$}},\]
which is the desired category to describe quasiparticles on boundaries in \cite{KK}. Conversely, if ${\hbox{{$\mathcal V$}}}$ is an indecomposable right ${\hbox{{$\mathcal C$}}}$-module category for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, it is explained in \cite{Os2}\cite[Example~7.4.10]{EGNO} (in other conventions) that the set of indecomposable objects has a transitive action of $G$ and hence can be identified with $G/K$ for some subgroup $K\subseteq G$. This can be used to put the module category up to equivalence in the above form (with some cocycle $\alpha$).
\section{Concluding remarks}\label{sec:rem}
We have given a detailed account of the algebra behind the treatment of boundaries in the Kitaev model based on subgroups $K$ of a finite group $G$. New results include the quasi-bialgebra $\Xi(R,K)$ in full generality, a more direct derivation from the category ${}_K\hbox{{$\mathcal M$}}^G_K$ that connects to the module category point of view, a theorem that $\Xi(R,K)$ changes by a Drinfeld twist as $R$ changes, and a $*$-quasi-Hopf algebra structure that ensures nice properties for the category of representations (these form a strong bar category). On the computer science side, we edged towards how one might use these ideas in quantum computations and detect quasiparticles across ribbons where one end is on a boundary. We also gave new decomposition formulae relating representations of $D(G)$ in the bulk to those of $\Xi(R,K)$ in the boundary.
Both the algebraic and the computer science aspects can be taken much further. The case treated here of trivial cocycle $\alpha$ is already complicated enough, but the ideas do extend to include such cocycles and should similarly be worked out. Whereas most of the abstract literature on
such matters is at the conceptual level where we work up to categorical equivalence, we set out to give constructions more explicitly, which we believe is essential for concrete calculations and should also be relevant to the physics. For example, much of the literature on anyons is devoted to so-called $F$-moves which express the associativity isomorphisms even though, by Mac Lane's theorem, monoidal categories are equivalent to strict ones. On the physics side, the covariance properties of ribbon operators also involve the coproduct and hence how they are realised depends on the choice of $R$. The same applies to how $*$ interacts with tensor products, which would be relevant to the unitarity properties of composite systems. Of interest, for example, should be the case of a lattice divided into two parts $A,B$ with a boundary between them and how the entropy of states in the total space relates to those in the subsystems. This is an idea of considerable interest in quantum gravity, but the latter has certain parallels with quantum computing and could be explored concretely using the results of the paper. We also would like to expand further the concrete use of patches and lattice surgery, as we considered only the cases of boundaries with $K=\{e\}$ and $K=G$, and only a square geometry. Additionally, it would be useful to know under what conditions the model gives universal quantum computation. While there are broadly similar such ideas in the physics literature, e.g., \cite{CCW}, we believe our fully explicit treatment will help to take these forward.
Further on the algebra side, the Kitaev model generalises easily to replace $G$ by a finite-dimensional semi-simple Hopf algebra, with some aspects also in the non-semisimple case\cite{CowMa}. The same applies easily enough to at least a quasi-bialgebra associated to an inclusion $L\subseteq H$ of finite-dimensional Hopf algebras\cite{PS3} and to the corresponding module category picture. Ultimately here, it is the nonsemisimple case that is of interest as such Hopf algebras (e.g. of the form of reduced quantum groups $u_q(g)$) generate the categories where anyons as well as TQFT topological invariants live. It is also known that by promoting the finite group input of the Kitaev model to a more general weak Hopf algebra, one can obtain any unitary fusion category in the role of ${\hbox{{$\mathcal C$}}}$\cite{Chang}. There remains a lot of work, therefore, to properly connect these theories to computer science and in particular to established methods for quantum circuits. A step here could be braided ZX-calculus\cite{Ma:fro}, although precisely how remains to be developed. These are some directions for further work.
\section*{Data availability statement}
Data sharing is not applicable to this article as no new data were created or analysed in this study.
\input{appendix}
\end{document}
\section{Boundary ribbon operators with $\Xi(R,K)^\star$}\label{app:ribbon_ops}
\begin{definition}\rm\label{def:Y_ribbon}
Let $\xi$ be a ribbon, $r \in R$ and $k \in K$. Then $Y^{r \otimes \delta_k}_{\xi}$ acts on a direct triangle $\tau$ as
\[\tikzfig{Y_action_direct},\]
and on a dual triangle $\tau^*$ as
\[\tikzfig{Y_action_dual}.\]
Concatenation of ribbons is given by
\[Y^{r \otimes \delta_k}_{\xi'\circ\xi} = Y^{(r \otimes \delta_k)_2}_{\xi'}\circ Y^{(r \otimes \delta_k)_1}_{\xi} = \sum_{x\in K} Y^{(x^{-1}\rightharpoonup r) \otimes \delta_{x^{-1}k}}_{\xi'}\circ Y^{r\otimes\delta_x}_{\xi},\]
where we see the comultiplication $\Delta(r \otimes \delta_k)$ of $\Xi(R,K)^*$. Here, $\Xi(R,K)^*$ is a coquasi-Hopf algebra, and so has coassociative comultiplication (it is the multiplication which is only quasi-associative). Therefore, we can concatenate the triangles making up the ribbon in any order, and the concatenation above uniquely defines $Y^{r\otimes\delta_k}_{\xi}$ for any ribbon $\xi$.
\end{definition}
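The comultiplication above uses the action of $K$ on the transversal $R$ coming from the unique factorisation $G=RK$. As a concrete illustration, the following is a minimal Python sketch (not from the paper; the permutation conventions and the choice $G=S_3$, $K=\{e,u\}$, $R=\{e,v,w\}$ are our own assumptions, in the notation of Example~\ref{exDS3}) verifying the unique factorisation and that the induced map really is an action of $K$ on $R$.

```python
# Sketch: unique factorisation G = RK and the induced K-action on R,
# for G = S3, K = {e, u}, R = {e, v, w}. The permutation composition
# convention (apply right factor first) is our own assumption.
from itertools import product

def mul(a, b):          # compose permutations: (a*b)(i) = a[b[i]]
    return tuple(a[b[i]] for i in range(3))

e, u, v, w = (0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0)
uv, vu = mul(u, v), mul(v, u)
G = [e, u, v, w, uv, vu]
K = [e, u]              # subgroup generated by the transposition u
R = [e, v, w]           # left transversal: every g factorises as rk

def factorise(g):
    """Return the unique (r, k) in R x K with g = r*k."""
    out = [(r, k) for r in R for k in K if mul(r, k) == g]
    assert len(out) == 1, "factorisation must be unique"
    return out[0]

def act(x, r):
    """The K-action on R: the R-part of x*r."""
    return factorise(mul(x, r))[0]

# every group element factorises uniquely, and K acts on R
for g in G:
    r, k = factorise(g)
    assert mul(r, k) == g
for x, y, r in product(K, K, R):
    assert act(mul(x, y), r) == act(x, act(y, r))
print("G = RK factorisation and K-action on R verified")
```

The action here is just the action of $K$ on the coset space $G/K$ transported to $R$, which is why the action axiom holds for any choice of transversal.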
Let $s_0 = (v_0,p_0)$ and $s_1 = (v_1,p_1)$ be the sites at the start and end of a triangle. The direct triangle operators satisfy
\[k'{\triangleright}_{v_0}\circ Y^{r\otimes \delta_k}_{\tau} =Y^{r\otimes \delta_{k'k}}_{\tau}\circ k'{\triangleright}_{v_0},\quad k'{\triangleright}_{v_1}\circ Y^{r\otimes\delta_k}_\tau = Y^{r\otimes\delta_{k'k^{-1}}}_\tau\circ k'{\triangleright}_{v_1}\]
and
\[[\delta_{r'}{\triangleright}_{s_i},Y^{r\otimes\delta_k}_{\tau}]= 0\]
for $i\in \{0,1\}$.
For the dual triangle operators, we have
\[k'{\triangleright}_{v_i}\circ \sum_k Y^{r\otimes\delta_k}_{\tau^*} = Y^{(k'{\triangleright} r)\otimes\delta_k}_{\tau^*}\circ k'{\triangleright}_{v_i}\]
again for $i\in \{0,1\}$. However, there do not appear to be similar commutation relations for the actions of $\mathbb{C}(R)$ on faces of dual triangle operators. In addition, in the bulk, one can reconstruct the vertex and face actions using suitable ribbons \cite{Bom,CowMa} because of the duality between $\mathbb{C}(G)$ and $\mathbb{C} G$; this is not true in general for $\mathbb{C}(R)$ and $\mathbb{C} K$.
\begin{example}\label{ex:Yrib}\rm
Given the ribbon $\xi$ on the lattice below, we see that $Y^{r\otimes \delta_k}_{\xi}$ acts only along the ribbon and trivially elsewhere. We have
\[\tikzfig{Y_action_ribbon}\]
if $g^2,g^4,g^6(g^7)^{-1}\in K$, and $0$ otherwise, and
\begin{align*}
&y^1 = (rx^1)^{-1}\\
&y^2 = ((g^2)^{-1}rx^2)^{-1}\\
&y^3 = ((g^2g^4)^{-1}rx^3)^{-1}\\
&y^4 = ((g^2g^4g^6(g^7)^{-1})^{-1}rx^3)^{-1}.
\end{align*}
One can check this using Definition~\ref{def:Y_ribbon}.
\end{example}
It is claimed in \cite{CCW} that these ribbon operators obey equivariance properties with respect to the site actions of $\Xi(R,K)$ similar to those of the bulk ribbon operators, but we could not reproduce these properties. Precisely, we find that when such ribbons are `open' in the sense of \cite{Kit, Bom, CowMa}, an intermediate site $s_2$ on a ribbon $\xi$ between the endpoints $s_0,s_1$ does \textit{not} satisfy
\[\Lambda_{\mathbb{C} K}{\triangleright}_{s_2}\circ Y^{r\otimes \delta_k}_{\xi} = Y^{r\otimes \delta_k}_{\xi}\circ \Lambda_{\mathbb{C} K}{\triangleright}_{s_2}\]
in general, nor the corresponding relation for $\Lambda_{\mathbb{C}(R)}{\triangleright}_{s_2}$.
\section{Measurements and nonabelian lattice surgery}\label{app:measurements}
In Section~\ref{sec:surgery}, we described nonabelian lattice surgery for a general underlying group algebra $\mathbb{C} G$, but for simplicity of exposition we assumed that the projectors $A(v)$ and $B(p)$ could be applied deterministically. In practice, we can only make a measurement, which will only sometimes yield the desired projectors. As the splits are easier, we discuss how to handle these first, beginning with the rough split. We demonstrate on the same example as previously:
\[\tikzfig{rough_split_calc}\]
\[\tikzfig{rough_split_calc2}\]
where we have measured the edge to be deleted in the $\mathbb{C} G$ basis. The measurement outcome $n$ determines which corrections to make. The last arrow indicates corrections made using ribbon operators. These corrections are all unitary, and if the measurement outcome is $e$ then no corrections are required at all. The generalisation to larger patches is straightforward, but requires keeping track of multiple different outcomes.
Next, we discuss how to handle the smooth split. In this case, we measure the edges to be deleted in the Fourier basis, that is, we measure the self-adjoint operator $\sum_{\pi} p_{\pi} P_{\pi}{\triangleright}$ at a particular edge, where
\[P_{\pi} := P_{e,\pi} = {{\rm dim}(W_\pi)\over |G|}\sum_{g\in G} {\rm Tr}_\pi(g^{-1}) g\]
from Section~\ref{sec:lattice} acts by the left regular representation. Thus, for a smooth split, we have the initial state $|e\>_L$:
\[\tikzfig{smooth_split_calc1}\]
\[\tikzfig{smooth_split_calc2}\]
\[\tikzfig{smooth_split_calc3}\]
and afterwards we still have coefficients from the irreps of $\mathbb{C} G$. In the case when $\pi = 1$, we are done. Otherwise, we have detected quasiparticles of type $(e,\pi)$ and $(e,\pi')$ at two vertices. In this case, we appeal to e.g. \cite{BKKK, Cirac}, which claim that one can modify these quasiparticles deterministically using ribbon operators and quantum circuitry. The procedure should be similar to initialising a fresh patch in the zero logical state, but we do not give any details ourselves. Then we have the desired result.
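To make the measured projectors concrete, here is a small Python check (our own illustration, not from the paper, with the $S_3$ characters hard-coded and exact rational arithmetic) that the $P_\pi$ defined above form a complete set of orthogonal central idempotents in $\mathbb{C} G$ for $G=S_3$, so the outcomes $\pi$ of the smooth-split measurement are mutually exclusive and exhaustive.

```python
# Sketch: P_pi = (dim(W_pi)/|G|) sum_g Tr_pi(g^-1) g computed in C[S3].
# Characters are hard-coded; exact arithmetic via Fraction. These should
# be orthogonal idempotents summing to the identity of the group algebra.
from itertools import permutations
from fractions import Fraction

def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    out = [0] * 3
    for i, ai in enumerate(a): out[ai] = i
    return tuple(out)
def parity(p):                         # +1 on even, -1 on odd permutations
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]: s = -s
    return s

G = list(permutations(range(3)))
idx = {g: i for i, g in enumerate(G)}

def conv(a, b):                        # product in the group algebra C[G]
    c = [Fraction(0)] * 6
    for i, gi in enumerate(G):
        for j, gj in enumerate(G):
            c[idx[mul(gi, gj)]] += a[i] * b[j]
    return c

chars = [(1, lambda g: 1),             # trivial
         (1, parity),                  # sign
         (2, lambda g: sum(g[i] == i for i in range(3)) - 1)]  # 2-dim
projs = [[Fraction(d, 6) * chi(inv(g)) for g in G] for d, chi in chars]

for i, P in enumerate(projs):
    for j, Q in enumerate(projs):
        assert conv(P, Q) == (P if i == j else [Fraction(0)] * 6)
ident = [sum(P[k] for P in projs) for k in range(6)]
assert ident == [Fraction(1 if g == (0, 1, 2) else 0) for g in G]
print("P_pi: complete orthogonal idempotents in C[S3]")
```

Note that $P_1=\Lambda_{\mathbb{C} G}$, consistent with the vacuum outcome being the trivial representation.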
For merges, we start with a smooth merge, as again all outcomes are in the group basis. Recall that after generating fresh copies of $\mathbb{C} G$ in the states $\sum_{m\in G} m$, we have
\[\tikzfig{smooth_merge_project}\]
We then measure at sites which include the top and bottom faces, giving:
\[\tikzfig{smooth_merge_measure_1}\]
for some conjugacy classes ${\hbox{{$\mathcal C$}}}, {\hbox{{$\mathcal C$}}}'$. There are no factors of $\pi$ as the edges around each vertex already satisfy $A(v)|\psi\> = |\psi\>$. When ${\hbox{{$\mathcal C$}}} = {\hbox{{$\mathcal C$}}}' = \{e\}$, we may proceed, but otherwise we require a way of deterministically eliminating the quasiparticles detected at the top and bottom faces. Appealing to e.g. \cite{BKKK, Cirac} as earlier, we assume that this may be done, but do not give details. Alternatively one could try to `switch reference frames' in the manner of Pauli frames with qubit codes \cite{HFDM}, and redefine the Hamiltonian. The former method gives
\[\tikzfig{smooth_merge_measure_2}\]
Lastly, we measure the inner face, yielding
\[\tikzfig{smooth_merge_measure_3}\]
so $|j\>_L\otimes |k\>_L \mapsto \sum_{s\in {\hbox{{$\mathcal C$}}}''} \delta_{js,k} |js\>_L$, which is a direct generalisation of the result for $G = \mathbb{Z}_n$ in \cite{Cow2}; now we sum over the conjugacy class ${\hbox{{$\mathcal C$}}}''$, whereas in the $\mathbb{Z}_n$ case all conjugacy classes are singletons.
The rough merge works similarly, where instead of having quasiparticles of type $({\hbox{{$\mathcal C$}}},1)$ appearing at faces, we have quasiparticles of type $(e,\pi)$ at vertices.
\section{Introduction}
The Kitaev model is defined for a finite group $G$ \cite{Kit} with quasiparticles given by representations of the quantum double $D(G)$, and their dynamics described by intertwiners. In quantum computing, the quasiparticles correspond to measurement outcomes at sites on a lattice, and their dynamics correspond to linear maps on the data, with the aim of performing fault-tolerant quantum computation. The lattice can be any ciliated ribbon graph embedded on a surface \cite{Meu}, although throughout we will assume a square lattice on the plane for convenience. The Kitaev model generalises to replace $G$ by a finite-dimensional semisimple Hopf algebra, with some aspects also working for a general finite-dimensional Hopf algebra. We refer to \cite{CowMa} for details of the relevant algebraic aspects of this theory, which applies in the bulk of the Kitaev model. We now extend this work with a study of the algebraic structure that underlies an approach to the treatment of boundaries.
The treatment of boundaries here originates in a more categorical point of view. In the original Kitaev model the relevant category that defines the `topological order' in condensed matter terms\cite{LK} is the category ${}_{D(G)}\mathcal{M}$ of $D(G)$-modules, which one can think of as an instance of the `dual' or `centre' $\hbox{{$\mathcal Z$}}({\hbox{{$\mathcal C$}}})$ construction\cite{Ma:rep}, where ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$ is the category of $G$-graded vector spaces. Levin-Wen `string-net' models \cite{LW} are a sort of generalisation of Kitaev models specified now by a unitary fusion category $\mathcal{C}$ with topological order $\hbox{{$\mathcal Z$}}(\mathcal{C})$, meaning that at every site on the lattice one has an object in $\hbox{{$\mathcal Z$}}(\mathcal{C})$, and now on a trivalent lattice. Computations correspond to morphisms in the same category.
A so-called gapped boundary condition of a string-net model preserves a finite energy gap between the vacuum and the lowest excited state(s), which is independent of system size. Such boundary conditions are defined by module categories of the fusion category ${\hbox{{$\mathcal C$}}}$. By definition, a (right) ${\hbox{{$\mathcal C$}}}$-module means\cite{Os,KK} a category ${\hbox{{$\mathcal V$}}}$ equipped with a bifunctor ${\hbox{{$\mathcal V$}}} \times {\hbox{{$\mathcal C$}}} \rightarrow {\hbox{{$\mathcal V$}}}$ obeying coherence equations which are a polarised version of the properties of $\mathop{{\otimes}}: {\hbox{{$\mathcal C$}}}\times{\hbox{{$\mathcal C$}}}\to {\hbox{{$\mathcal C$}}}$ (in the same way that a right module of an algebra obeys a polarised version of the axioms for the product). One can also see a string-net model as a discretised quantum field theory \cite{Kir2, Meu}, and indeed boundaries of a conformal field theory can also be similarly defined by module categories \cite{FS}. For our purposes, we care about \textit{indecomposable} module categories, that is module categories which are not equivalent to a direct sum of other module categories. Excitations on the boundary with condition $\mathcal{V}$ are then given by functors $F \in \mathrm{End}_{\hbox{{$\mathcal C$}}}(\mathcal{V})$ that commute with the ${\hbox{{$\mathcal C$}}}$ action\cite{KK}, beyond the vacuum state which is the identity functor $\mathrm{id}_{\mathcal{V}}$. More than just the boundary conditions above, we care about these excitations, and so $\mathrm{End}_{\hbox{{$\mathcal C$}}}(\mathcal{V})$ is the category of interest.
The Kitaev model is not exactly a string-net model (the lattice in our case will not even be trivalent) but closely related. In particular, it can be shown that indecomposable module categories for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, the category of $G$-graded vector spaces, are\cite{Os2} classified by subgroups $K\subseteq G$ and cocycles $\alpha\in H^2(K,\mathbb{C}^\times)$. We will stick to the trivial $\alpha$ case here, and the upshot is that the boundary conditions in the regular Kitaev model should be given by ${\hbox{{$\mathcal V$}}}={}_K\hbox{{$\mathcal M$}}^G$ the $G$-graded $K$-modules where $x\in K$ itself has grade $|x|=x\in G$. Then the excitations are governed by objects of $\mathrm{End}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}}) \simeq {}_K\hbox{{$\mathcal M$}}_K^G$, the category of $G$-graded bimodules over $K$. This is necessarily equivalent, by Tannaka-Krein reconstruction\cite{Ma:tan} to the category of modules ${}_{\Xi(R,K)}\mathcal{M}$ of a certain quasi-Hopf algebra $\Xi(R,K)$. Here $R\subseteq G$ is a choice of transversal so that every element of $G$ factorises uniquely as $RK$, but the algebra of $\Xi(R,K)$ depends only on the choice of subgroup $K$ and not on the transversal $R$. This is the algebra which we use to define measurement protocols on the boundaries of the Kitaev model. One also has that $\hbox{{$\mathcal Z$}}({}_\Xi\hbox{{$\mathcal M$}})\simeq\hbox{{$\mathcal Z$}}(\hbox{{$\mathcal M$}}^G)\simeq{}_{D(G)}\hbox{{$\mathcal M$}}$ as braided monoidal categories.
Categorical aspects will be deferred to Section~\ref{sec:cat_just}, our main focus prior to that being on a full understanding of the algebra $\Xi$, its properties and aspects of the physics. In fact, lattice boundaries of Kitaev models based on subgroups have been defined and characterised previously, see \cite{BSW, Bom}, with \cite{CCW} giving an overview for computational purposes, and we build on these works. We begin in Section~\ref{sec:bulk} with a recap of the algebras and actions involved in the bulk of the lattice model, then in Section~\ref{sec:gap} we accommodate the boundary conditions in a manner which works with features important for quantum computation, such as sites, quasiparticle projectors and ribbon operators. These sections mostly cover well-trodden ground, although we correct errors and clarify some algebraic subtleties which appear to have gone unnoticed in previous works. In particular, we obtain formulae for the decomposition of bulk irreducible representations of $D(G)$ into $\Xi$-representations which we believe to be new. Key to our results here is an observation that in fact $\Xi(R,K)\subseteq D(G)$ as algebras, which gives a much more direct route than previously to an adjunction between $\Xi(R,K)$-modules and $D(G)$-modules describing how excitations pass between the bulk and boundary. This is important for the physical picture\cite{CCW} and previously was attributed to an adjunction between ${}_{D(G)}\hbox{{$\mathcal M$}}$ and ${}_K\hbox{{$\mathcal M$}}_K^G$ in \cite{PS2}.
In Section~\ref{sec:patches}, as an application of our explicit description of boundaries, we generalise the quantum computational model called \textit{lattice surgery} \cite{HFDM,Cow2} to the nonabelian group case. We find that for every finite group $G$ one can simulate the group algebra $\mathbb{C} G$ and its dual $\mathbb{C}(G)$ on a lattice patch with `rough' and `smooth' boundaries. This is an alternative model of fault-tolerant computation to the well-known method of braiding anyons or defects \cite{Kit,FMMC}, although we do not know whether there are choices of group such that lattice surgery is natively universal without state distillation.
In Section~\ref{sec:quasi}, we look at $\Xi(R,K)$ as a quasi-Hopf algebra in somewhat more detail than we have found elsewhere. As well as the quasi-bialgebra structure, we provide and verify the antipode for any choice of transversal $R$ for which right-inversion is bijective. This case is in line with \cite{Nat}, but we will also consider antipodes more generally. We then show that an obvious $*$-algebra structure on $\Xi$ meets all the axioms of a strong $*$-quasi-Hopf algebra in the sense of \cite{BegMa:bar} coming out of the theory of bar categories. The key ingredient here is a somewhat nontrivial map that relates the complex conjugate of the $\Xi$-module $V\mathop{{\otimes}} W$ to the complex conjugates of $W$ and $V$. We also give an extended series of examples, including one related to the octonions.
Lastly, in Section~\ref{sec:cat_just}, we connect the algebraic notions up to the abstract description of boundary conditions via module categories and use this to obtain more results about $\Xi(R,K)$. We first calculate the relevant categorical equivalence ${}_K\hbox{{$\mathcal M$}}_K^G \simeq {}_{\Xi(R,K)}\mathcal{M}$ concretely, deriving the quasi-bialgebra structure of $\Xi(R,K)$ precisely such that this works.
Since the left hand side is independent of $R$, we deduce by Tannaka-Krein arguments that changing $R$ changes $\Xi(R,K)$ by a Drinfeld cochain twist and we find this cochain as a main result of the section. This is important as Drinfeld twists do not change the category of modules up to equivalence, so such aspects of the physics do not depend on $R$. Twisting arguments then imply that we have an antipode more generally for any $R$. We also look at ${\hbox{{$\mathcal V$}}} = {}_K\hbox{{$\mathcal M$}}^G$ as a module category for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$. Section~\ref{sec:rem} provides some concluding remarks relating to generalisations of the boundaries to models based on other Hopf algebras \cite{BMCA}.
\subsection*{Acknowledgements}
The first author thanks Stefano Gogioso for useful discussions regarding nonabelian lattice surgery as a model for computation. Thanks also to Paddy Gray \& Kathryn Pennel for their hospitality while some of this paper was written and to Simon Harrison for the Wolfson Harrison UK Research Council Quantum Foundation Scholarship, which made this work possible. The second author was on sabbatical at Cambridge Quantum Computing and we thank members of the team there.
\section{Preliminaries: recap of the Kitaev model in the bulk}\label{sec:bulk}
We begin with the model in the bulk. This is largely a recap of e.g. \cite{Kit, CowMa}.
\subsection{Quantum double}\label{sec:double}Let $G$ be a finite group with identity $e$, then $\mathbb{C} G$ is the group Hopf algebra with basis $G$. Multiplication is extended linearly, and $\mathbb{C} G$ has comultiplication $\Delta h = h \otimes h$ and counit ${\epsilon} h = 1$ on basis elements $h\in G$. The antipode is given by $Sh = h^{-1}$. $\mathbb{C} G$ is a Hopf $*$-algebra with $h^* = h^{-1}$ extended antilinearly. Its dual Hopf algebra $\mathbb{C}(G)$ of functions on $G$ has basis of $\delta$-functions $\{\delta_g\}$ with $\Delta\delta_g=\sum_h \delta_h\mathop{{\otimes}}\delta_{h^{-1}g}$, ${\epsilon} \delta_g=\delta_{g,e}$ and $S\delta_g=\delta_{g^{-1}}$ for the Hopf algebra structure, and $\delta_g^* = \delta_{g}$ for all $g\in G$. The normalised integral elements \textit{in} $\mathbb{C} G$ and $\mathbb{C}(G)$ are
\[ \Lambda_{\mathbb{C} G}={1\over |G|}\sum_{h\in G} h\in \mathbb{C} G,\quad \Lambda_{\mathbb{C}(G)}=\delta_e\in \mathbb{C}(G).\]
The integrals \textit{on} $\mathbb{C} G$ and $\mathbb{C}(G)$ are
\[ \int h = \delta_{h,e}, \quad \int \delta_g = 1\]
normalised so that $\int 1 = 1$ for $\mathbb{C} G$ and $\int 1 = |G|$ for $\mathbb{C}(G)$.
For the Drinfeld double we have $D(G)=\mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$ as in \cite{Ma:book}, with $\mathbb{C} G$ and $\mathbb{C}(G)$ sub-Hopf algebras and the cross relations $ h\delta_g =\delta_{hgh^{-1}} h$ (a semidirect product). The Hopf algebra antipode is $S(\delta_gh)=\delta_{h^{-1}g^{-1}h} h^{-1}$, and over $\mathbb{C}$ we have a Hopf $*$-algebra with $(\delta_g h)^* = \delta_{h^{-1}gh} h^{-1}$. There is also a quasitriangular structure which in subalgebra notation is
\begin{equation}\label{RDG} \hbox{{$\mathcal R$}}=\sum_{h\in G} \delta_h\mathop{{\otimes}} h\in D(G) \otimes D(G).\end{equation}
If we want to be totally explicit we can build $D(G)$ on either the vector space $\mathbb{C}(G)\mathop{{\otimes}} \mathbb{C} G$ or on the vector space $\mathbb{C} G\mathop{{\otimes}}\mathbb{C}(G)$. In fact the latter is more natural but we follow the conventions in \cite{Ma:book,CowMa} and use the former. Then one can say the above more explicitly as \[(\delta_g\mathop{{\otimes}} h)(\delta_f\mathop{{\otimes}} k)=\delta_g\delta_{hfh^{-1}}\mathop{{\otimes}} hk=\delta_{g,hfh^{-1}}\delta_g\mathop{{\otimes}} hk,\quad S(\delta_g\mathop{{\otimes}} h)=\delta_{h^{-1}g^{-1}h} \mathop{{\otimes}} h^{-1}\]
etc. for the operations on the underlying vector space.
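As a sanity check of these conventions, the following Python sketch (ours, with $S_3$ as permutations and the basis $\delta_g\mathop{{\otimes}} h$ encoded as pairs) implements the product and antipode of $D(S_3)$ exactly as displayed above and verifies associativity and that $S$ is an anti-algebra map.

```python
# Sketch: D(S3) on basis (delta_g (x) h), with the product and antipode
# as displayed above; None encodes the zero element. We check
# associativity, that S reverses products, and that S is involutive here.
from itertools import permutations, product

def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    out = [0] * 3
    for i, ai in enumerate(a): out[ai] = i
    return tuple(out)

G = list(permutations(range(3)))

def dot(x, y):
    """(delta_g (x) h)(delta_f (x) k) = delta_{g,hfh^-1} delta_g (x) hk."""
    (g, h), (f, k) = x, y
    return (g, mul(h, k)) if g == mul(mul(h, f), inv(h)) else None

def S(x):
    g, h = x                       # S(delta_g (x) h) = delta_{h^-1g^-1h} (x) h^-1
    return (mul(mul(inv(h), inv(g)), h), inv(h))

B = list(product(G, G))            # the 36 basis elements of D(S3)
for a, b, c in product(B, B, B):
    ab, bc = dot(a, b), dot(b, c)
    lhs = dot(ab, c) if ab else None
    rhs = dot(a, bc) if bc else None
    assert lhs == rhs              # associativity
for a, b in product(B, B):
    ab = dot(a, b)
    assert (S(ab) if ab else None) == dot(S(b), S(a))  # anti-homomorphism
    assert S(S(a)) == a            # S^2 = id for D(G)
print("D(S3): associativity and antipode checks pass")
```

The check runs over all $36^3$ basis triples, which is instantaneous at this size.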
Since $D(G)$ is a semidirect product, standard theory gives its irreducible representations as labelled by pairs $({\hbox{{$\mathcal C$}}},\pi)$ consisting of an orbit under the action (i.e. a conjugacy class ${\hbox{{$\mathcal C$}}}\subset G$ in this case) and an irrep $\pi$ of the isotropy subgroup, in our case
\[ G^{c_0}=\{n\in G\ |\ nc_0 n^{-1}=c_0\}\]
of a fixed element $c_0\in{\hbox{{$\mathcal C$}}}$, i.e. the centraliser $C_G(c_0)$. The choice of $c_0$ does not change the isotropy group up to isomorphism but does change how it sits inside $G$. We also fix data $q_c\in G$ for each $c\in {\hbox{{$\mathcal C$}}}$ such that $c=q_cc_0q_c^{-1}$ with $q_{c_0}=e$ and define from this a cocycle $\zeta_c(h)=q^{-1}_{hch^{-1}}hq_c$ as a map $\zeta: {\hbox{{$\mathcal C$}}}\times G\to G^{c_0}$. The associated irreducible representation is then
\[ W_{{\hbox{{$\mathcal C$}}},\pi}=\mathbb{C} {\hbox{{$\mathcal C$}}}\mathop{{\otimes}} W_\pi,\quad \delta_g.(c\mathop{{\otimes}} w)=\delta_{g,c}c\mathop{{\otimes}} w,\quad h.(c\mathop{{\otimes}} w)=hch^{-1}\mathop{{\otimes}} \zeta_c(h).w \]
for all $w\in W_\pi$, the carrier space of $\pi$. This constructs all irreps of $D(G)$ and, over $\mathbb{C}$, these are unitary in a Hopf $*$-algebra sense if $\pi$ is unitary. Moreover, $D(G)$ is semisimple and hence has a block decomposition $D(G){\cong}\oplus_{{\hbox{{$\mathcal C$}}},\pi} \mathrm{ End}(W_{{\hbox{{$\mathcal C$}}},\pi})$ given by a complete orthogonal set of self-adjoint central idempotents
\begin{equation}\label{Dproj}P_{({\hbox{{$\mathcal C$}}},\pi)}={{\rm dim}(W_\pi)\over |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}}\sum_{n\in G^{c_0}}\mathrm{ Tr}_\pi(n^{-1})\delta_{c}\mathop{{\otimes}} q_c nq_c^{-1}.\end{equation}
We refer to \cite{CowMa} for more details and proofs. Acting on a state, this will become a projection operator that determines if a quasiparticle of type ${\hbox{{$\mathcal C$}}},\pi$ is present. Chargeons are quasiparticles with ${\hbox{{$\mathcal C$}}}=\{e\}$ and $\pi$ an irrep of $G$, and fluxions are quasiparticles with ${\hbox{{$\mathcal C$}}}$ a conjugacy class and $\pi=1$, the trivial representation.
\subsection{Bulk lattice model}\label{sec:lattice}
Having established the prerequisite algebra, we move on to the lattice model itself. This first part is largely a recap of \cite{Kit, CowMa} and we use the notations of the latter. Let $\Sigma = \Sigma(V, E, P)$ be a square lattice viewed as a directed graph with its usual (cartesian) orientation, vertices $V$, directed edges $E$ and faces $P$. The Hilbert space $\hbox{{$\mathcal H$}}$ will be a tensor product of vector spaces with one copy of $\mathbb{C} G$ at each arrow in $E$. We have group elements for the basis of each copy. Next, to each adjacent pair of vertex $v$ and face $p$ we associate a site $s = (v, p)$, or equivalently a line (the `cilium') from $p$ to $v$. We then define an action of $\mathbb{C} G$ and $\mathbb{C}(G)$ at each site by
\[ \includegraphics[scale=0.7]{Gaction.pdf}\]
Here $h\in \mathbb{C} G$, $a\in \mathbb{C}(G)$ and $g^1,\cdots,g^4$ denote independent elements of $G$ (not powers). Observe that the vertex action is invariant under the location of $p$ relative to its adjacent $v$, so the red dashed line has been omitted.
\begin{lemma}\label{lemDGrep} \cite{Kit,CowMa} $h{\triangleright}$ and $a{\triangleright}$ for all $h\in G$ and $a\in \mathbb{C}(G)$ define a representation of $D(G)$ on $\hbox{{$\mathcal H$}}$ associated to each site $(v,p)$.
\end{lemma}
We next define
\[ A(v):=\Lambda_{\mathbb{C} G}{\triangleright}={1\over |G|}\sum_{h\in G}h{\triangleright},\quad B(p):=\Lambda_{\mathbb{C}(G)}{\triangleright}=\delta_e{\triangleright}\]
where $\delta_{e}(g^1g^2g^3g^4)=1$ iff $g^1g^2g^3g^4=e$, equivalently $(g^4)^{-1}=g^1g^2g^3$, equivalently $g^4g^1g^2g^3=e$. Hence $\delta_{e}(g^1g^2g^3g^4)=\delta_{e}(g^4g^1g^2g^3)$ is invariant under cyclic rotations, so $\Lambda_{\mathbb{C}(G)}{\triangleright}$ computed at site $(v,p)$ does not depend on the location of $v$ on the boundary of $p$. Moreover,
\[ A(v)B(p)=|G|^{-1}\sum_h h\delta_e{\triangleright}=|G|^{-1}\sum_h \delta_{heh^{-1}}h{\triangleright}=|G|^{-1}\sum_h \delta_{e}h{\triangleright}=B(p)A(v)\]
if $v$ is a vertex on the boundary of $p$ by Lemma~\ref{lemDGrep}, and more trivially if not. We also have the rest of
\[ A(v)^2=A(v),\quad B(p)^2=B(p),\quad [A(v),A(v')]=[B(p),B(p')]=[A(v),B(p)]=0\]
for all $v\ne v'$ and $p\ne p'$, as easily checked. We then define the Hamiltonian
\[ H=\sum_v (1-A(v)) + \sum_p (1-B(p))\]
and the space of vacuum states
\[ \hbox{{$\mathcal H$}}_{\rm vac}=\{|\psi\>\in\hbox{{$\mathcal H$}}\ |\ A(v)|\psi\>=B(p)|\psi\>=|\psi\>,\quad \forall v,p\}.\]
Quasiparticles in Kitaev models are labelled by representations of $D(G)$ occupying a given site $(v,p)$, which take the system out of the vacuum. Detection of a quasiparticle is via a {\em projective measurement} of the operator $\sum_{{\hbox{{$\mathcal C$}}}, \pi} p_{{\hbox{{$\mathcal C$}}},\pi} P_{\mathcal{C}, \pi}$ acting at each site on the lattice for distinct coefficients $p_{{\hbox{{$\mathcal C$}}},\pi} \in \mathbb{R}$. By definition, this is a process which yields the classical value $p_{{\hbox{{$\mathcal C$}}},\pi}$ with a probability given by the likelihood of the state prior to the measurement being in the subspace in the image of $P_{\mathcal{C},\pi}$, and in so doing performs the corresponding action of the projector $P_{\mathcal{C}, \pi}$ at the site. The projector $P_{e,1}$ corresponds to the vacuum quasiparticle.
In computing terms, this system of measurements encodes a logical Hilbert subspace, which we will always take to be the vacuum space $\hbox{{$\mathcal H$}}_{\rm vac}$, within the larger physical Hilbert space given by the lattice; this subspace is dependent on the topology of the surface that the lattice is embedded in, but not the size of the lattice. For example, there is a convenient closed-form expression for the dimension of $\hbox{{$\mathcal H$}}_{\rm vac}$ when $\Sigma$ occupies a closed, orientable surface \cite{Cui}. Computation can then be performed on states in the logical subspace in a fault-tolerant manner, with unwanted excitations constituting detectable errors.
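For the simplest case $G=\mathbb{Z}_2$ (the toric code) on the torus, the vacuum degeneracy is well known to be $4$. The following Python sketch (our own; the edge-indexing conventions are assumptions, and $\mathbb{Z}_2$ is hard-coded) computes $\dim\hbox{{$\mathcal H$}}_{\rm vac}={\rm Tr}\prod_v{1+A(v)\over 2}\prod_p{1+B(p)\over 2}$ on a $2\times 2$ periodic lattice: expanding the product, a term of edge bit-flips and edge signs is traceless unless both patterns cancel, so the trace counts relations among the stabiliser masks.

```python
# Sketch: dim(H_vac) for G = Z2 (toric code) on a 2x2 periodic lattice.
# tr(prod (1+A)/2 (1+B)/2) = (#subsets of A-masks XORing to 0)
#                          * (#subsets of B-masks XORing to 0).
# Expected answer on the torus: 4.
from itertools import product

L = 2
def h(i, j): return (i % L) * L + (j % L)          # horizontal edges 0..3
def v(i, j): return L * L + (i % L) * L + (j % L)  # vertical edges 4..7

A_masks = [sum(1 << e for e in {h(i, j), h(i, j - 1), v(i, j), v(i - 1, j)})
           for i in range(L) for j in range(L)]    # vertex (X-type) stars
B_masks = [sum(1 << e for e in {h(i, j), h(i + 1, j), v(i, j), v(i, j + 1)})
           for i in range(L) for j in range(L)]    # face (Z-type) plaquettes

def xor_count(masks):
    """Number of subsets XOR-ing to zero, i.e. 2^(#relations)."""
    total = 0
    for sel in product([0, 1], repeat=len(masks)):
        acc = 0
        for s, m in zip(sel, masks):
            if s:
                acc ^= m
        total += (acc == 0)
    return total

dim_vac = xor_count(A_masks) * xor_count(B_masks)
print("dim H_vac on the torus:", dim_vac)   # 2 * 2 = 4
```

The two relations found are the usual ones: the product of all vertex operators and the product of all face operators are each the identity, giving $2^{8-6}=4$.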
In the interest of brevity, we forgo a detailed exposition of such measurements, ribbon operators and fault-tolerant quantum computation on the lattice. The interested reader can learn about these in e.g. \cite{Kit,Bom,CCW,CowMa}. We do give a brief recap of ribbon operators, although without much rigour, as these will be useful later.
\begin{definition}\rm \label{def:ribbon}
A ribbon $\xi$ is a strip of face width that connects two sites $s_0 = (v_0,p_0)$ and $s_1 = (v_1,p_1)$ on the lattice. A ribbon operator $F^{h,g}_\xi$ acts on the vector spaces associated to the edges along the path of the ribbon, as shown in Fig~\ref{figribbon}. We call this basis of ribbon operators labelled by $h$ and $g$ the \textit{group basis}.
\end{definition}
\begin{figure}
\[ \includegraphics[scale=0.8]{Fig1.pdf}\]
\caption{\label{figribbon} Example of a ribbon operator for a ribbon $\xi$ from $s_0=(v_0,p_0)$ to $s_1=(v_1,p_1)$.}
\end{figure}
\begin{lemma}\label{lem:concat}
If $\xi'$ is a ribbon concatenated with $\xi$, then the associated ribbon operators in the group basis satisfy
\[F_{\xi'\circ\xi}^{h,g}=\sum_{f\in G}F_{\xi'}^{f^{-1}hf,f^{-1}g}\circ F_\xi^{h,f}, \quad F^{h,g}_\xi \circ F^{h',g'}_\xi=\delta_{g,g'}F_\xi^{hh',g}.\]
\end{lemma}
The first identity shows the role of the comultiplication of $D(G)^*$,
\[\Delta(h\delta_g) = \sum_{f\in G} h\delta_f\otimes f^{-1}hf\delta_{f^{-1}g}.\]
using subalgebra notation, while the second identity implies that
\[(F_\xi^{h,g})^\dagger = F_\xi^{h^{-1},g}.\]
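Coassociativity of this coproduct is what allows the triangles of a ribbon to be concatenated in any order. The following Python sketch (ours, for $G=S_3$, with the basis elements $h\delta_g$ of $D(G)^*$ encoded as pairs) checks $(\Delta\mathop{{\otimes}}{\rm id})\Delta=({\rm id}\mathop{{\otimes}}\Delta)\Delta$ directly on the basis.

```python
# Sketch: coassociativity of Delta(h delta_g) = sum_f h delta_f (x)
# f^-1 h f delta_{f^-1 g} on D(S3)^*, with h delta_g encoded as (h, g).
from itertools import permutations
from collections import Counter

def mul(a, b): return tuple(a[b[i]] for i in range(3))
def inv(a):
    out = [0] * 3
    for i, ai in enumerate(a): out[ai] = i
    return tuple(out)

G = list(permutations(range(3)))

def Delta(h, g):
    """Coproduct as a Counter over pairs of basis elements."""
    return Counter({((h, f), (mul(mul(inv(f), h), f), mul(inv(f), g))): 1
                    for f in G})

for h in G:
    for g in G:
        lhs, rhs = Counter(), Counter()
        for ((h1, g1), (h2, g2)), c in Delta(h, g).items():
            for (x, y), d in Delta(h1, g1).items():     # (Delta (x) id)
                lhs[(x, y, (h2, g2))] += c * d
            for (x, y), d in Delta(h2, g2).items():     # (id (x) Delta)
                rhs[((h1, g1), x, y)] += c * d
        assert lhs == rhs
print("Delta on D(S3)^* is coassociative")
```

This is the bulk analogue of the coassociativity argument used for the boundary operators $Y^{r\otimes\delta_k}_\xi$ in Appendix~\ref{app:ribbon_ops}.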
\begin{lemma}\label{ribcom}\cite{Kit} Let $\xi$ be a ribbon with the orientation as shown in Figure~\ref{figribbon} between sites $s_0=(v_0,p_0)$ and $s_1=(v_1,p_1)$. Then
\[ [F_\xi^{h,g},f{\triangleright}_v]=0,\quad [F_\xi^{h,g},\delta_e{\triangleright}_p]=0,\]
for all $v \notin \{v_0, v_1\}$ and $p \notin \{p_0, p_1\}$.
\[ f{\triangleright}_{s_0}\circ F_\xi^{h,g}=F_\xi^{fhf^{-1},fg} \circ f{\triangleright}_{s_0},\quad \delta_f{\triangleright}_{s_0}\circ F_\xi^{h,g}=F_\xi^{h,g} \circ\delta_{h^{-1}f}{\triangleright}_{s_0},\]
\[ f{\triangleright}_{s_1}\circ F_\xi^{h,g}=F_\xi^{h,gf^{-1}} \circ f{\triangleright}_{s_1},\quad \delta_f{\triangleright}_{s_1}\circ F_\xi^{h,g}=F_\xi^{h,g}\circ \delta_{fg^{-1}hg}{\triangleright}_{s_1}\]
for all ribbons where $s_0,s_1$ are disjoint, i.e. when $s_0$ and $s_1$ share neither vertices nor faces. The subscript notation $f{\triangleright}_v$ means the local action of $f\in \mathbb{C} G$ at vertex $v$, and dually for $\delta_f{\triangleright}_s$ at a site $s$.
\end{lemma}
We call the above lemma the \textit{equivariance property} of ribbon operators. Such ribbon operators may be deformed according to a sort of discrete isotopy, so long as the endpoints remain the same. We formalised ribbon operators as left and right module maps in \cite{CowMa}, but skim over any further details here. The physical interpretation of ribbon operators is that they create, move and annihilate quasiparticles.
\begin{lemma}\cite{Kit}\label{lem:ribs_only}
Let $s_0$, $s_1$ be two sites on the lattice. The only operators in ${\rm End}(\hbox{{$\mathcal H$}})$ which change the states at these sites, and therefore create quasiparticles and change the distribution of measurement outcomes, but leave the state in vacuum elsewhere, are ribbon operators.
\end{lemma}
This lemma is somewhat hard to prove rigorously but a proof was sketched in \cite{CowMa}. Next, there is an alternate basis for these ribbon operators in which the physical interpretation becomes more obvious. The \textit{quasiparticle basis} has elements
\begin{equation}F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,v} = \sum_{n\in G^{c_0}} \pi(n^{-1})_{ji} F_\xi^{c, q_c n q_d^{-1}},\end{equation}
where ${\hbox{{$\mathcal C$}}}$ is a conjugacy class, $\pi$ is an irrep of the associated isotropy subgroup $G^{c_0}$ and $u = (c,i)$, $v = (d,j)$ label basis elements of $W_{{\hbox{{$\mathcal C$}}},\pi}$ in which $c,d \in {\hbox{{$\mathcal C$}}}$ and $i,j$ label a basis of $W_\pi$. This amounts to a nonabelian Fourier transform of the space of ribbons (that is, the Peter-Weyl isomorphism of $D(G)$) and has inverse
\begin{equation}F_\xi^{h,g} = \sum_{{\hbox{{$\mathcal C$}}},\pi\in \hat{G^{c_0}}}\sum_{c\in{\hbox{{$\mathcal C$}}}}\delta_{h,gcg^{-1}} \sum_{i,j = 0}^{{\rm dim}(W_\pi)}\pi(q^{-1}_{gcg^{-1}}g q_c)_{ij}F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;a,b},\end{equation}
where $a = (gcg^{-1},i)$ and $b=(c,j)$. This reduces in the chargeon sector to the special cases
\begin{equation}\label{chargeon_ribbons}F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n}\end{equation}
and
\begin{equation}F_\xi^{e,g} = \sum_{\pi\in \hat{G}}\sum_{i,j = 0}^{{\rm dim}(W_\pi)}\pi(g)_{ij}F_\xi^{'e,\pi;i,j}.\end{equation}
Meanwhile, in the fluxion sector we have
\begin{equation}\label{fluxion_ribbons}F_\xi^{'{\hbox{{$\mathcal C$}}},1;c,d}=\sum_{n\in G^{c_0}}F_\xi^{c,q_c nq_d^{-1}}\end{equation}
but there is no inverse in the fluxion sector. This is because the chargeon sector corresponds to the irreps of $\mathbb{C} G$, itself a semisimple algebra; the fluxion sector has no such correspondence.
If $G$ is Abelian then all the $\pi$ are 1-dimensional and we do not have to worry about the indices for the basis of $W_\pi$; the transform then looks like a more familiar Fourier transform.
\begin{lemma}\label{lem:quasi_basis}
If $\xi'$ is a ribbon concatenated with $\xi$, then the associated ribbon operators in the quasiparticle basis satisfy
\[ F_{\xi'\circ\xi}^{'{\hbox{{$\mathcal C$}}},\pi;u,v}=\sum_w F_{\xi'}^{'{\hbox{{$\mathcal C$}}},\pi;w,v}\circ F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,w}\]
and are such that the nonabelian Fourier transform takes convolution to multiplication and vice versa, as it does in the abelian case.
\end{lemma}
In particular, we have the \textit{ribbon trace operators}, $W^{{\hbox{{$\mathcal C$}}},\pi}_\xi := \sum_u F_\xi^{'{\hbox{{$\mathcal C$}}},\pi;u,u}$. Such ribbon trace operators create precisely quasiparticles of type ${\hbox{{$\mathcal C$}}},\pi$ from the vacuum, in the sense that
\[P_{({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>}{\triangleleft}_{s_1}P_{({\hbox{{$\mathcal C$}}},\pi)}.\]
We refer to \cite{CowMa} for more details and proofs of the above.
\begin{example}\rm \label{exDS3} Our go-to example for our expositions will be $G=S_3$ generated by transpositions $u=(12), v=(23)$ with $w=(13)=uvu=vuv$. There are then 8 irreducible representations of $D(S_3)$ according to the choices ${\hbox{{$\mathcal C$}}}_0=\{e\}$, ${\hbox{{$\mathcal C$}}}_1=\{u,v,w\}$, ${\hbox{{$\mathcal C$}}}_2=\{uv,vu\}$ for which we pick representatives $c_0=e$, $q_e=e$, $c_1=u$, $q_u=e$, $q_v=w$, $q_w=v$ and $c_2=uv$ with $q_{uv}=e,q_{vu}=v$ (with the $c_i$ in the role of $c_0$ in the general theory). Here $G^{c_0}=S_3$ with 3 representations $\pi=$ trivial, sign and $W_2$ the 2-dimensional one given by (say) $\pi(u)=\sigma_3, \pi(v)=(\sqrt{3}\sigma_1-\sigma_3)/2$, $G^{c_1}=\{e,u\}=\mathbb{Z}_2$ with $\pi(u)=\pm1$ and $G^{c_2}=\{e,uv,vu\}=\mathbb{Z}_3$ with $\pi(uv)=1,\omega,\omega^2$ for $\omega=e^{2\pi\imath\over 3}$. See \cite{CowMa} for details and calculations of the associated projectors and some $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$ operators.
\end{example}
\section{Gapped Boundaries}\label{sec:gap}
While $D(G)$ is the relevant algebra for the bulk of the model, our focus is on the boundaries. For these, we require a different class of algebras.
\subsection{The boundary subalgebra $\Xi(R,K)$}\label{sec:xi}
Let $K\subseteq G$ be a subgroup of a finite group $G$ and $G/K=\{gK\ |\ g\in G\}$ be the set of left cosets. It is not necessary in this section, but convenient, to fix a representative $r$ for each coset and let $R\subseteq G$ be the set of these, so there is a bijection between $R$ and $G/K$ whereby $r\leftrightarrow rK$. We assume that $e\in R$ and call such a subset (or section of the map $G\to G/K$) a {\em transversal}. Every element of $G$ factorises uniquely as $rx$ for $r\in R$ and $x\in K$, giving a coordinatisation of $G$ which we will use. Next, as we quotiented by $K$ from the right, we still have an action of $K$ from the left on $G/K$, which we denote ${\triangleright}$. By the above bijection, this equivalently means an action ${\triangleright}:K\times R\to R$ on $R$ which in terms of the factorisation is determined by $xry=(x{\triangleright} r)y'$, where we refactorise $xry$ in the form $RK$ for some $y'\in K$. There is much more information in this factorisation, as we will see in Section~\ref{sec:quasi}, but this action is all we need for now. Also note that we have chosen to work with left cosets so as to be consistent with the literature \cite{CCW,BSW}, but one could equally choose a right coset factorisation to build a class of algebras similar to those in \cite{KM2}. We consider the algebra $\mathbb{C}(G/K){>\!\!\!\triangleleft} \mathbb{C} K$ as the cross product by the above action. Using our coordinatisation, this becomes the following algebra.
\begin{definition}\label{defXi} $\Xi(R,K)=\mathbb{C}(R){>\!\!\!\triangleleft} \mathbb{C} K$ is generated by $\mathbb{C}(R)$ and $\mathbb{C} K$ with cross relations $x\delta_r=\delta_{x{\triangleright} r} x$. Over $\mathbb{C}$, this is a $*$-algebra with $(\delta_r x)^*=x^{-1}\delta_r=\delta_{x^{-1}{\triangleright} r}x^{-1}$.
\end{definition}
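As a minimal numerical sketch (ours, not part of the text), one can check the unique factorisation $G=RK$ and the induced action ${\triangleright}$ for the data $G=S_3$, $K=\{e,u\}$, $R=\{e,uv,vu\}$ used in Example~\ref{exS3n} below, with permutations represented as tuples:

```python
# Sketch: unique factorisation G = RK and the action x |> r for G = S3,
# K = {e,u}, R = {e,uv,vu}. Permutations p are tuples with p[i] the image of i.
def mul(p, q):                      # group product (p*q)(i) = p(q(i))
    return tuple(p[i] for i in q)

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu, w = mul(u, v), mul(v, u), mul(u, mul(v, u))
G = [e, u, v, w, uv, vu]
K = [e, u]                          # the subgroup Z2
R = [e, uv, vu]                     # a transversal of G/K

# every g in G factorises uniquely as g = r x with r in R, x in K
fact = {mul(r, x): (r, x) for r in R for x in K}
assert len(fact) == len(G)          # uniqueness: no products coincide

def act(x, r):                      # x |> r defined by  x r = (x |> r) x'
    return fact[mul(x, r)][0]

assert act(u, uv) == vu and act(u, vu) == uv   # matches u |> (uv) = vu in the text
# action axioms: e |> r = r and x |> (y |> r) = (xy) |> r
assert all(act(e, r) == r for r in R)
assert all(act(x, act(y, r)) == act(mul(x, y), r)
           for x in K for y in K for r in R)
print("unique factorisation and action verified")
```

The dictionary `fact` plays the role of the coordinatisation $G\cong R\times K$; any other transversal gives an isomorphic picture.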
If we choose a different transversal $R$ then the algebra is unchanged up to an isomorphism which maps the $\delta$-functions between the corresponding choices of representative. Of relevance to the applications, we also have:
\begin{lemma} $\Xi(R,K)$ has the `integral element'
\[\Lambda:=\Lambda_{\mathbb{C}(R)} \otimes \Lambda_{\mathbb{C} K} = \delta_e \frac{1}{|K|}\sum_{x\in K}x\]
characterised by $\xi\Lambda={\epsilon}(\xi)\Lambda=\Lambda\xi$ for all $\xi\in \Xi$, and ${\epsilon}(\Lambda)=1$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad } We check that
\begin{align*}
\xi\Lambda& = (\delta_s y)(\delta_e\frac{1}{|K|}\sum_{x\in K}x) = \delta_{s,y{\triangleright} e}\delta_s\frac{1}{|K|}\sum_{x\in K}yx= \delta_{s,e}\delta_e \frac{1}{|K|}\sum_{x\in K}x\\
&= {\epsilon}(\xi)\Lambda = \frac{1}{|K|}\sum_{x\in K}\delta_{e,x{\triangleright} s}\delta_e xy = \frac{1}{|K|}\sum_{x\in K}\delta_{s,e}\delta_e x = \Lambda\xi.
\end{align*}
And clearly, ${\epsilon}(\Lambda) = \delta_{e,e} {|K|\over |K|} = 1$.
\endproof
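The integral property can also be verified numerically. The following sketch (an illustration under the same $S_3$ data as Example~\ref{exS3n}, not from the text) implements the product of $\Xi(R,K)$ on the basis $\{\delta_r x\}$ and checks $\xi\Lambda={\epsilon}(\xi)\Lambda=\Lambda\xi$ on all basis elements:

```python
# Sketch: the two-sided integral Lambda = delta_e (1/|K|) sum_x x in Xi(R,K)
# for G = S3, K = {e,u}, R = {e,uv,vu}. Elements are dicts (r, x) -> coeff.
def mul(p, q): return tuple(p[i] for i in q)
e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu = mul(u, v), mul(v, u)
K, R = [e, u], [e, uv, vu]
fact = {mul(r, x): (r, x) for r in R for x in K}
def act(x, r): return fact[mul(x, r)][0]

def xmult(A, B):                    # (d_r x)(d_s y) = [r = x |> s] d_r xy
    out = {}
    for (r, x), a in A.items():
        for (s, y), b in B.items():
            if r == act(x, s):
                k = (r, mul(x, y)); out[k] = out.get(k, 0) + a * b
    return out

Lam = {(e, x): 1 / len(K) for x in K}        # Lambda = delta_e (1/|K|) sum_x x

def close(A, B):                    # compare two elements up to float error
    return all(abs(A.get(k, 0) - B.get(k, 0)) < 1e-12 for k in set(A) | set(B))

for s in R:
    for y in K:
        xi = {(s, y): 1.0}
        eps = 1.0 if s == e else 0.0         # counit: eps(d_s y) = [s = e]
        target = {k: eps * c for k, c in Lam.items()}
        assert close(xmult(xi, Lam), target)
        assert close(xmult(Lam, xi), target)
print("integral property verified")
```

Since ${\epsilon}(\Lambda)=1$, the same code confirms that $\Lambda$ is idempotent, $\Lambda^2=\Lambda$.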
As a cross product algebra, we can take the same approach as with $D(G)$ to the classification of its irreps:
\begin{lemma} Irreps of $\Xi(R,K)$ are classified by pairs $(\hbox{{$\mathcal O$}},\rho)$ where $\hbox{{$\mathcal O$}}\subseteq R$ is an orbit under the action ${\triangleright}$ and $\rho$ is an irrep of the isotropy group $K^{r_0}:=\{x\in K\ |\ x{\triangleright} r_0=r_0\}$. Here we fix a base point $r_0\in \hbox{{$\mathcal O$}}$ as well as $\kappa: \hbox{{$\mathcal O$}}\to K $ a choice of lift such that
\[ \kappa_r{\triangleright} r_0 = r,\quad\forall r\in \hbox{{$\mathcal O$}},\quad \kappa_{r_0}=e.\]
Then
\[ V_{\hbox{{$\mathcal O$}},\rho}=\mathbb{C} \hbox{{$\mathcal O$}}\mathop{{\otimes}} V_\rho,\quad \delta_r(s\mathop{{\otimes}} v)=\delta_{r,s}s\mathop{{\otimes}} v,\quad x.(s\mathop{{\otimes}} v)=x{\triangleright} s\mathop{{\otimes}}\zeta_s(x).v,\quad \zeta_s(x)=\kappa^{-1}_{x{\triangleright} s}x\kappa_s\]
for $v\in V_\rho$, the carrier space for $\rho$, and
\[ \zeta: \hbox{{$\mathcal O$}}\times K\to K^{r_0},\quad \zeta_r(x)=\kappa_{x{\triangleright} r}^{-1}x\kappa_r.\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad } One can check that $\zeta_r(x)$ lives in $K^{r_0}$,
\[ \zeta_r(x){\triangleright} r_0=(\kappa_{x{\triangleright} r}^{-1}x\kappa_r){\triangleright} r_0=\kappa_{x{\triangleright} r}^{-1}{\triangleright}(x{\triangleright} r)=\kappa_{x{\triangleright} r}^{-1}{\triangleright}(\kappa_{x{\triangleright} r}{\triangleright} r_0)=r_0\]
and the cocycle property
\[ \zeta_r(xy)=\kappa^{-1}_{x{\triangleright} y{\triangleright} r}x \kappa_{y{\triangleright} r}\kappa^{-1}_{y{\triangleright} r}y \kappa_r=\zeta_{y{\triangleright} r}(x)\zeta_r(y),\]
from which it is easy to see that $V_{\hbox{{$\mathcal O$}},\rho}$ is a representation,
\[ x.(y.(s\mathop{{\otimes}} v))=x.(y{\triangleright} s\mathop{{\otimes}} \zeta_s(y). v)=x{\triangleright}(y{\triangleright} s)\mathop{{\otimes}}\zeta_{y{\triangleright} s}(x)\zeta_s(y).v=xy{\triangleright} s\mathop{{\otimes}}\zeta_s(xy).v=(xy).(s\mathop{{\otimes}} v),\]
\[ x.(\delta_r.(s\mathop{{\otimes}} v))=\delta_{r,s}x{\triangleright} s\mathop{{\otimes}} \zeta_s(x). v= \delta_{x{\triangleright} r,x{\triangleright} s}x{\triangleright} s\mathop{{\otimes}}\zeta_s(x).v=\delta_{x{\triangleright} r}.(x.(s\mathop{{\otimes}} v)).\]
One can show that the $V_{\hbox{{$\mathcal O$}},\rho}$ are irreducible and do not depend, up to isomorphism, on the choice of $r_0$ or $\kappa_r$.\endproof
In the $*$-algebra case as here, we obtain a unitary representation if $\rho$ is unitary. One can also show that all irreps can be obtained this way. In fact, the algebra $\Xi(R,K)$ is semisimple and has a block associated to each $V_{\hbox{{$\mathcal O$}},\rho}$.
\begin{lemma}\label{Xiproj} $\Xi(R,K)$ has a complete orthogonal set of central idempotents
\[ P_{(\hbox{{$\mathcal O$}},\rho)}={\dim V_\rho\over |K^{r_0}|}\sum_{r\in\hbox{{$\mathcal O$}}}\sum_{n\in K^{r_0}} \mathrm{ Tr}_{\rho}(n^{-1})\delta_r\mathop{{\otimes}} \kappa_r n \kappa_r^{-1}.\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad } The proofs are similar to those for $D(G)$ in \cite{CowMa}. That each $P_{(\hbox{{$\mathcal O$}},\rho)}$ is a projection is seen from
\begin{align*}P_{(\hbox{{$\mathcal O$}},\rho)}^2&={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,n\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(n^{-1})\sum_{r,s\in \hbox{{$\mathcal O$}}}(\delta_r\mathop{{\otimes}} \kappa_rm\kappa_r^{-1})(\delta_s\mathop{{\otimes}}\kappa_sn\kappa_s^{-1})\\
&={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,n\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(n^{-1})\sum_{r,s\in \hbox{{$\mathcal O$}}}\delta_r\delta_{r,s}\mathop{{\otimes}} \kappa_rm\kappa_r^{-1}\kappa_s n\kappa_s^{-1}\\
&={\dim(V_\rho)^2\over |K^{r_0}|^2}\sum_{m,m'\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\rho(m m'{}^{-1})\sum_{r\in \hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} \kappa_rm'\kappa_r^{-1}= P_{(\hbox{{$\mathcal O$}},\rho)}
\end{align*}
where we used $r=\kappa_r m\kappa_r^{-1}{\triangleright} s$ iff $s=\kappa_r m^{-1}\kappa_r^{-1}{\triangleright} r=\kappa_r m^{-1}{\triangleright} r_0=\kappa_r{\triangleright} r_0=r$. We then changed $mn=m'$ as a new variable and used the orthogonality formula for characters on $K^{r_0}$. A similar computation shows that distinct projectors are orthogonal. The sum of the projectors is 1 since
\begin{align*}\sum_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}=\sum_{\hbox{{$\mathcal O$}}, r\in \hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} \kappa_r\sum_{\rho\in \hat{K^{r_0}}} \left({\dim V_\rho\over |K^{r_0}|}\sum_{n\in K^{r_0}} \mathrm{ Tr}_{\rho}(n^{-1}) n\right) \kappa_r^{-1}=\sum_{\hbox{{$\mathcal O$}},r\in\hbox{{$\mathcal O$}}}\delta_r\mathop{{\otimes}} 1=1,
\end{align*}
where the bracketed expression is the projector $P_\rho$ for $\rho$ in the group algebra of $K^{r_0}$, and these sum to 1 by the Peter-Weyl decomposition of the latter. \endproof
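For the running $S_3$ example (the data of Example~\ref{exS3n}), the three projectors of Lemma~\ref{Xiproj} can be written out explicitly and the lemma checked directly. The following sketch (ours, for illustration) verifies idempotence, mutual orthogonality and that they sum to the identity:

```python
# Sketch: the central idempotents of Xi(R,K) for G = S3, K = {e,u}, R = {e,uv,vu}.
# Orbits: O0 = {e} with isotropy K (reps +/-1), O1 = {uv,vu} with trivial isotropy.
def mul(p, q): return tuple(p[i] for i in q)
e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu = mul(u, v), mul(v, u)
K, R = [e, u], [e, uv, vu]
fact = {mul(r, x): (r, x) for r in R for x in K}
def act(x, r): return fact[mul(x, r)][0]

def xmult(A, B):                    # product in Xi: (d_r x)(d_s y) = [r = x |> s] d_r xy
    out = {}
    for (r, x), a in A.items():
        for (s, y), b in B.items():
            if r == act(x, s):
                k = (r, mul(x, y)); out[k] = out.get(k, 0) + a * b
    return out

P = [{(e, e): .5, (e, u): .5},      # (O0, trivial)
     {(e, e): .5, (e, u): -.5},     # (O0, sign)
     {(uv, e): 1.0, (vu, e): 1.0}]  # (O1, trivial), isotropy group {e}

def close(A, B):
    return all(abs(A.get(k, 0) - B.get(k, 0)) < 1e-12 for k in set(A) | set(B))

assert all(close(xmult(p, p), p) for p in P)                       # idempotents
assert all(close(xmult(P[i], P[j]), {})
           for i in range(3) for j in range(3) if i != j)          # orthogonal
total = {}
for p in P:
    for k, c in p.items(): total[k] = total.get(k, 0) + c
assert close(total, {(r, e): 1.0 for r in R})                      # sum to 1
print("projectors verified")
```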
\begin{remark}\rm
In the previous literature, the irreps have been described using double cosets and representatives thereof \cite{CCW}. In fact a double coset in ${}_KG_K$ is an orbit for the left action of $K$ on $G/K$ and hence has the form $\hbox{{$\mathcal O$}} K$ corresponding to an orbit $\hbox{{$\mathcal O$}}\subset R$ in our approach. We will say more about this later, in Proposition~\ref{prop:mon_equiv}.
\end{remark}
An important question for the physics is how representations in the bulk relate to those at the boundary. This is afforded by functors in the two directions, which we now describe directly as follows.
\begin{proposition}\label{Xisub} There is an inclusion of algebras $i:\Xi(R,K)\hookrightarrow D(G)$
\[ i(x)=x,\quad i(\delta_r)=\sum_{x\in K} \delta_{rx}.\]
The pull-back or restriction of a $D(G)$-module $W$ to a $\Xi$-module $i^*(W)$ is simply for $\xi\in \Xi$ to act by $i(\xi)$. Going the other way, the induction functor sends a $\Xi$-module $V$ to a $D(G)$-module $D(G)\mathop{{\otimes}}_\Xi V$, where $\xi\in \Xi$ right acts on $D(G)$ by right multiplication by $i(\xi)$. These two functors are adjoint.
\end{proposition}
{\noindent {\bfseries Proof:}\quad } We just need to check that $i$ respects the relations of $\Xi$. Thus,
\begin{align*} i(\delta_r)i(\delta_s)&=\sum_{x,y\in K}\delta_{rx}\delta_{sy}=\sum_{x\in K}\delta_{r,s}\delta_{rx}=i(\delta_r\delta_s),
\\ i(x)i(\delta_r)&=\sum_{y\in K}x\delta_{ry}=\sum_{y\in K}\delta_{xryx^{-1}}x=\sum_{y\in K}\delta_{(x{\triangleright} r)x'yx^{-1}}x=\sum_{y'\in K}\delta_{(x{\triangleright} r)y'}x=i(\delta_{x{\triangleright} r} x),\end{align*}
as required. For the first line, we used the unique factorisation $G=RK$ to break down the $\delta$-functions. For the second line, we use this in the form $xr=(x{\triangleright} r)x'$ for some $x'\in K$ and then changed variables from $y$ to $y'=x'yx^{-1}$. The rest follows as for any algebra inclusion. \endproof
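The computation in this proof can also be confirmed numerically. The sketch below (an illustration for the $S_3$ data of Example~\ref{exS3n}, not from the text) implements the $D(G)$ product and checks that the images $i(x), i(\delta_r)$ satisfy the cross relation of $\Xi(R,K)$ and that the $i(\delta_r)$ remain orthogonal idempotents:

```python
# Sketch: the inclusion i(x) = x, i(delta_r) = sum_{y in K} delta_{ry} of
# Xi(R,K) into D(S3). D(G) elements are dicts (g, h) -> coeff with product
# (d_g h)(d_f k) = [g = h f h^-1] d_g hk.
def mul(p, q): return tuple(p[i] for i in q)
def inv(p):
    r = [0, 0, 0]
    for i, j in enumerate(p): r[j] = i
    return tuple(r)

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu, w = mul(u, v), mul(v, u), mul(u, mul(v, u))
G, K, R = [e, u, v, w, uv, vu], [e, u], [e, uv, vu]
fact = {mul(r, x): (r, x) for r in R for x in K}
def act(x, r): return fact[mul(x, r)][0]

def dmult(A, B):
    out = {}
    for (g, h), a in A.items():
        for (f, k), b in B.items():
            if g == mul(mul(h, f), inv(h)):
                key = (g, mul(h, k)); out[key] = out.get(key, 0) + a * b
    return out

def i_delta(r): return {(mul(r, y), e): 1 for y in K}   # i(delta_r)
def i_grp(x):  return {(g, x): 1 for g in G}            # i(x) = 1 (x) in D(G)

# cross relation x delta_r = delta_{x |> r} x survives the inclusion
for x in K:
    for r in R:
        assert dmult(i_grp(x), i_delta(r)) == dmult(i_delta(act(x, r)), i_grp(x))
# the i(delta_r) are orthogonal idempotents, by unique factorisation G = RK
for r in R:
    for s in R:
        assert dmult(i_delta(r), i_delta(s)) == (i_delta(r) if r == s else {})
print("inclusion respects the relations")
```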
In fact, $\Xi$ is a quasi-bialgebra and, at least when $(\ )^R$ is bijective, a quasi-Hopf algebra, as we will see in Section~\ref{sec:quasi}. In the latter case, it has a quantum double $D(\Xi)$ which contains $\Xi$ as a sub-quasi-Hopf algebra. Moreover, it can be shown that $D(\Xi)$ is a `Drinfeld cochain twist' of $D(G)$, which implies it has the same algebra structure as $D(G)$. We do not need the details, but this is the abstract reason for the above inclusion. (An explicit proof of this twisting result in the usual Hopf algebra case with $R$ a group is in \cite{BGM}.) Meanwhile, the statement that the two functors in the lemma are adjoint is that
\[ \hom_{D(G)}(D(G)\mathop{{\otimes}}_\Xi V,W)=\hom_\Xi(V, i^*(W))\]
for all $\Xi$-modules $V$ and all $D(G)$-modules $W$. These functors do not take irreps to irreps and of particular interest are the multiplicities for the decompositions back into irreps, i.e. if $V_i, W_a$ are respective irreps and $D(G)\mathop{{\otimes}}_\Xi V_i=\oplus_{a} n^i{}_a W_a$ then
\[ {\rm dim}(\hom_{D(G)}(D(G)\mathop{{\otimes}}_\Xi V_i,W_a))={\rm dim}(\hom_\Xi(V_i,i^*(W_a)))\]
and hence $i^*(W_a)=\oplus_i n^i_a V_i$. This explains one of the observations in \cite{CCW}. It remains to give a formula for these multiplicities, but here we were not able to reproduce the formula in \cite{CCW}. Our approach goes via a general lemma as follows.
\begin{lemma}\label{lemfrobn} Let $i:A\hookrightarrow B$ be an inclusion of finite-dimensional semisimple algebras and $\int$ the unique symmetric special Frobenius linear form on $B$ such that $\int 1=1$. Let $V_i$ be an irrep of $A$ and $W_a$ an irrep of $B$. Then the multiplicity of $V_i$ in the pull-back $i^*(W_a)$ (which is the same as the multiplicity of $W_a$ in $B\mathop{{\otimes}}_A V_i$) is given by
\[ n^i{}_a={\dim(B)\over\dim(V_i)\dim(W_a)}\int i(P_i)P_a,\]
where $P_i\in A$ and $P_a\in B$ are the associated central idempotents. Moreover, $i(P_i)P_a =0$ if and only if $n^i_a = 0$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad } Recall that a linear map $\int:B\to \mathbb{C}$ is Frobenius if the bilinear form $(b,c):=\int bc$ is nondegenerate, and is symmetric if this bilinear form is symmetric. Also, let $g=g^1\mathop{{\otimes}} g^2\in B\mathop{{\otimes}} B$ (in a notation with the sum of such terms understood) be the associated `metric' such that $(\int b g^1 )g^2=b=g^1\int g^2b$ for all $b$ (it is the inverse matrix in a basis of the algebra). We say that the Frobenius form is special if the algebra product $\cdot$ obeys $\cdot(g)=1$. It is well-known that there is a unique symmetric special Frobenius form up to scale, given by the trace in the left regular representation, see \cite{MaRie:spi} for a recent study. In our case, over $\mathbb{C}$, we also know that a finite-dimensional semisimple algebra $B$ is a direct sum of matrix algebras ${\rm End}(W_a)$ associated to the irreps $W_a$ of $B$. Then
\begin{align*} \int i(P_i)P_a&={1\over\dim(B)}\sum_{\alpha,\beta}\<f^\alpha\mathop{{\otimes}} e_\beta,i(P_i)P_a (e_\alpha\mathop{{\otimes}} f^\beta)\>\\
&={1\over\dim(B)}\sum_{\alpha}\dim(W_a)\<f^\alpha, i(P_i)e_\alpha\>={\dim(W_a)\dim(V_i)\over\dim(B)} n^i{}_a.
\end{align*}
where $\{e_\alpha\}$ is a basis of $W_a$ and $\{f^\beta\}$ is a dual basis, and $P_a$ acts as the identity on $\mathrm{ End}(W_a)$ and zero on the other blocks. We then used that if $i^*(W_a)=\oplus_i {n^i{}_a}V_i$ as $A$-modules, then $i(P_i)$ just picks out the $V_i$ components where $P_i$ acts as the identity.
For the last part, the forward direction is immediate given the first part of the lemma. For the other direction, suppose
$n^i_a = 0$ so that $i^*(W_a)=\oplus_j n^j_aV_j$ with $j\ne a$ running over the other irreps of $A$. Now, we can view $P_{a}\in W_{a}\mathop{{\otimes}} W_{a}^*$ (as the identity element) and left multiplication by $i(P_i)$ is the same as $P_i$ acting on $P_{a}$ viewed as an element of $i^*(W_{a})\mathop{{\otimes}} W_{a}^*$, which is therefore zero.\endproof
We apply Lemma~\ref{lemfrobn} in our case of $A=\Xi$ and $B=D(G)$, where \[ \dim(V_i)=|\hbox{{$\mathcal O$}}|\dim(V_\rho), \quad \dim(W_a)=|{\hbox{{$\mathcal C$}}}|\dim(W_\pi)\]
with $i=(\hbox{{$\mathcal O$}},\rho)$ as described above and $a=({\hbox{{$\mathcal C$}}},\pi)$ as described in Section~\ref{sec:bulk}.
\begin{proposition}\label{nformula} For the inclusion $i:\Xi\hookrightarrow D(G)$ in Proposition~\ref{Xisub}, the multiplicities for restriction and induction as above are given by
\[ n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}= {|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|} \sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop
r^{-1}c\in K}} |K^{r,c}|\sum_{\tau\in \hat{K^{r,c}} } n_{\tau,\tilde\rho|_{K^{r,c}}} n_{\tau, \tilde\pi|_{K^{r,c}}},\quad K^{r,c}=K^r\cap G^c,\]
where $\tilde \pi(m)=\pi(q_c^{-1}mq_c)$ and $\tilde\rho(m)=\rho(\kappa_r^{-1}m\kappa_r)$ are the corresponding representations of $G^c$ and $K^r$ respectively, decomposing as $K^{r,c}$-representations as
\[ \tilde\rho|_{K^{r,c}}{\cong}\oplus_\tau n_{\tau,\tilde\rho|_{K^{r,c}}}\tau,\quad \tilde\pi|_{K^{r,c}}{\cong}\oplus_\tau n_{\tau,\tilde\pi|_{K^{r,c}}}\tau.\]
\end{proposition}
{\noindent {\bfseries Proof:}\quad } We include the projector from Lemma~\ref{Xiproj} as
\[ i(P_{(\hbox{{$\mathcal O$}},\rho)})={{\rm dim}(V_\rho)\over |K^{r_0}|}\sum_{r\in \hbox{{$\mathcal O$}}, x\in K}\sum_{m\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})\delta_{rx}\mathop{{\otimes}} \kappa_r m\kappa_r^{-1}\]
and multiply this by $P_{({\hbox{{$\mathcal C$}}},\pi)}$ from (\ref{Dproj}). In the latter, we write $c=sy$ for the factorisation of $c$. Then when we multiply these out, for $(\delta_{rx}\mathop{{\otimes}} \kappa_r m \kappa_r^{-1})(\delta_{c}\mathop{{\otimes}} q_c n q_c^{-1})$ we will need $\kappa_r m\kappa_r^{-1}{\triangleright} s=r$ or equivalently $s=\kappa_r m^{-1}\kappa_r^{-1}{\triangleright} r=r$ so we are actually summing not over $c$ but over $y\in K$ such that $ry\in {\hbox{{$\mathcal C$}}}$. Also then $x$ is uniquely determined in terms of $y$.
Hence
\[ i(P_{(\hbox{{$\mathcal O$}},\rho)})P_{({\hbox{{$\mathcal C$}}},\pi)}={{\rm dim}(V_\rho){\rm dim}(W_\pi)\over |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}, n\in G^{c_0}}\sum_{r\in \hbox{{$\mathcal O$}}, y\in K | ry\in{\hbox{{$\mathcal C$}}}} \mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\pi(n^{-1}) \delta_{rx}\mathop{{\otimes}} \kappa_r m\kappa_r^{-1} q_c nq_c^{-1}\]
Now we apply the integral of $D(G)$, $\int\delta_g\mathop{{\otimes}} h=\delta_{h,e}$ which requires
\[ n=q_c^{-1}\kappa_r m^{-1}\kappa_r^{-1}q_c\]
and $x=y$ for $n\in G^{c_0}$ given that $c=ry$. We refer to this condition on $y$ as $(\star)$. Remembering that $\int$ is normalised so that $\int 1=|G|$, we have from the lemma
\begin{align*}n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}&={|G|\over\dim(V_i)\dim(W_a)}\int i(P_{(\hbox{{$\mathcal O$}},\rho)})P_{({\hbox{{$\mathcal C$}}},\pi)}\\
&={|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}}\sum_{{r\in \hbox{{$\mathcal O$}}, y\in K\atop (\star), ry\in{\hbox{{$\mathcal C$}}}}} \mathrm{ Tr}_\rho(m^{-1})\mathrm{ Tr}_\pi(q_{ry}^{-1}\kappa_r m\kappa_r^{-1}q_{ry}) \\
&={|G|\over |\hbox{{$\mathcal O$}}| |{\hbox{{$\mathcal C$}}}| |K^{r_0}| |G^{c_0}|}\sum_{m\in K^{r_0}}\sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop
r^{-1}c\in K}}\sum_{m'\in K^r\cap G^c} \mathrm{ Tr}_\rho(\kappa_r^{-1}m'{}^{-1}\kappa_r)\mathrm{ Tr}_\pi(q_{c}^{-1} m q_{c}),
\end{align*}
where we compute in $G$ and view $(\star)$ as $m':=\kappa_r m \kappa_r^{-1}\in G^c$. We then use the group orthogonality formula
\[ \sum_{m\in K^{r,c}}\mathrm{ Tr}_{\tau}(m^{-1})\mathrm{ Tr}_{\tau'}(m)=\delta_{\tau,\tau'}|K^{r,c}| \]
for any irreps $\tau,\tau'$ of the group
\[ K^{r,c}:=K^r\cap G^c=\{x\in K\ |\ x{\triangleright} r=r,\quad x c x^{-1}=c\}\]
to obtain the formula stated. \endproof
This simplifies in four (overlapping) special cases as follows.
\noindent{(i) $V_i$ trivial: }
\[ n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G|\over |{\hbox{{$\mathcal C$}}}||K||G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap K}\sum_{m\in K\cap G^c}\mathrm{ Tr}_\pi(q_c^{-1}mq_c)={|G| \over |{\hbox{{$\mathcal C$}}}| |K||G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap K} |K^c| n_{1,\tilde\pi}\]
as $\rho=1$ implies $\tilde\rho=1$ and forces $\tau=1$. Here $K^c$ is the centraliser of $c\in K$. If $n_{1,\tilde\pi}$ is independent of the choice of $c$ then we can simplify this further
as
\[ n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G| |({\hbox{{$\mathcal C$}}}\cap K)/K|\over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|} n_{1,\pi|_{K^{c_0}}}\]
using the orbit-counting lemma, where $K$ acts on ${\hbox{{$\mathcal C$}}}\cap K$ by conjugation.
\noindent{(ii) $W_a$ trivial:}
\[ n^{(\hbox{{$\mathcal O$}},\rho)}_{(\{e\},1)}= {|G|\over |\hbox{{$\mathcal O$}}||K^{r_0}||G|}\sum_{r\in \hbox{{$\mathcal O$}}\cap K}\sum_{m\in K^{r_0}}\mathrm{ Tr}_\rho(m^{-1})=\begin{cases} 1 & {\rm if\ }\hbox{{$\mathcal O$}}, \rho\ {\rm trivial}\\ 0 & {\rm else}\end{cases} \]
as $\hbox{{$\mathcal O$}}\cap K=\{e\}$ if $\hbox{{$\mathcal O$}}=\{e\}$ (but is otherwise empty) and in this case only $r=e$ contributes. This is consistent with the fact that if $W_a$ is the trivial representation of $D(G)$ then its pull back is also trivial and hence contains only the trivial representation of $\Xi$.
\noindent{(iii) Fluxion sector:}
\[ n^{(\hbox{{$\mathcal O$}},1)}_{({\hbox{{$\mathcal C$}}},1)}= {|G|\over |\hbox{{$\mathcal O$}}||{\hbox{{$\mathcal C$}}}||K^{r_0}| |G^{c_0}|} \sum_{{r\in \hbox{{$\mathcal O$}}, c\in {\hbox{{$\mathcal C$}}}\atop
r^{-1}c\in K}} |K^r\cap G^c|.\]
\noindent{(iv) Chargeon sector: }
\[ n^{(\{e\},\rho)}_{(\{e\},\pi)}= n_{\rho, \pi|_{K}},\]
where $\rho,\pi$ are arbitrary irreps of $K,G$ respectively and only $r=c=e$ are allowed so $K^{r,c}=K$, and then only $\tau=\rho$ contributes.
\begin{example}\label{exS3n}\rm (i) We take $G=S_3$, $K=\{e,u\}=\mathbb{Z}_2$, where $u=(12)$. Here $G/K$ consists of
\[ G/K=\{\{e, u\}, \{w, uv\}, \{v, vu\}\}\]
and our standard choice of $R$ will be $R=\{e,uv, vu\}$, where we take one from each coset (but any other transversal will have the same irreps and their decompositions). This leads to 3 irreps of $\Xi(R,K)$ as follows. In $R$, we have two orbits $\hbox{{$\mathcal O$}}_0=\{e\}$, $\hbox{{$\mathcal O$}}_1=\{uv,vu\}$ and we choose representatives $r_0=e,\kappa_e=e$, $r_1=uv, \kappa_{uv}=e, \kappa_{vu}=u$ since $u{\triangleright} (uv)=vu$ for the two cases (here $r_1$ was denoted $r_0$ in the general theory and is the choice for $\hbox{{$\mathcal O$}}_1$). We also have $u{\triangleright}(vu)=uv$. Note that it happens that these orbits are also conjugacy classes but this is an accident of $S_3$ and not true for $S_4$. We have $K^{r_0}=K=\mathbb{Z}_2$ with representations $\rho(u)=\pm1$ and $K^{r_1}=\{e\}$ with only the trivial representation.
(ii) For $D(S_3)$, we have the 8 irreps in Example~\ref{exDS3} and hence there is a $3\times 8$ table of the $\{n^i{}_a\}$. We can easily compute some of the special cases from the above. For example, the trivial $\pi$ restricted to $K$ is $\rho=1$, the sign representation restricted to $K$ is the $\rho=-1$ representation, the $W_2$ restricted to $K$ is $1\oplus -1$, which gives the upper left $2\times 3$ submatrix for the chargeon sector. Another 6 entries (four new ones) are given from the fluxion formula. We also have ${\hbox{{$\mathcal C$}}}_2\cap K=\emptyset$ so that the latter part of the first two rows is zero by our first special case formula. For ${\hbox{{$\mathcal C$}}}_1,\pm1$ in the first row, we have ${\hbox{{$\mathcal C$}}}_1\cap K=\{u\}$ with trivial action of $K$, so just one orbit. This gives us a nontrivial result in the $+1$ case and 0 in the $-1$ case. The story for ${\hbox{{$\mathcal C$}}}_1,\pm1$ in the second row follows the same derivation, but needs $\tau=-1$ and hence $\pi=-1$ for the nonzero case.
In the third row with ${\hbox{{$\mathcal C$}}}_2,\pi$, we have $K^{r}=\{e\}$ so $K^{r,c}=\{e\}$ and we only have $\tau=1=\rho$ as well as $\tilde\pi=1$ independently of $\pi$ as this is 1-dimensional. So both $n$ factors in the formula in Proposition~\ref{nformula} are 1. In the sum over $r,c$, we need $c=r$ so we sum over 2 possibilities, giving a nontrivial result as shown. For ${\hbox{{$\mathcal C$}}}_1,\pi$, the first part goes the same way and we similarly have $c$ determined from $r$, so again two contributions in the sum, giving the answer shown independently of $\pi$. Finally, for ${\hbox{{$\mathcal C$}}}_0,\pi$ we have $r\in\{uv,vu\}$ and $c=e$, and can never meet the condition $r^{-1}c\in K$. So these entries are all $0$. Thus, Proposition~\ref{nformula} in this example tells us:
\[\begin{array}{c|c|c|c|c|c|c|c|c} n^i{}_a & {\hbox{{$\mathcal C$}}}_0,1 & {\hbox{{$\mathcal C$}}}_0,{\rm sign} & {\hbox{{$\mathcal C$}}}_0,W_2 & {\hbox{{$\mathcal C$}}}_1, 1& {\hbox{{$\mathcal C$}}}_1,-1 & {\hbox{{$\mathcal C$}}}_2,1& {\hbox{{$\mathcal C$}}}_2,\omega & {\hbox{{$\mathcal C$}}}_2,\omega^2\\
\hline\
\hbox{{$\mathcal O$}}_0,1&1 & 0 & 1 &1 & 0& 0 &0 &0 \\
\hline
\hbox{{$\mathcal O$}}_0,-1&0 & 1&1& 0& 1&0 &0 & 0\\
\hline
\hbox{{$\mathcal O$}}_1,1&0 &0&0 & 1& 1 &1 &1 & 1
\end{array}\]
One can check for consistency that for each $W_a$, $\dim(W_a)$ is the sum of the dimensions of the $V_i$ that it contains, which determines one row from the other two.
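As a further cross-check (ours, not part of the text), the whole table can be recomputed numerically via Lemma~\ref{lemfrobn}: build the projectors of $\Xi(R,K)$ and $D(G)$ from character values, include the former into $D(G)$, multiply, and apply $\int\delta_g\mathop{{\otimes}} h=\delta_{h,e}$. A sketch in Python, with $S_3$ permutations as tuples:

```python
# Sketch: multiplicity table n^i_a for Xi(R,K) in D(S3), G = S3, K = {e,u},
# R = {e,uv,vu}, computed as n = |G| Int( i(P_i) P_a ) / (dim V_i dim W_a).
import cmath

def mul(p, q): return tuple(p[i] for i in q)
def inv(p):
    r = [0, 0, 0]
    for i, j in enumerate(p): r[j] = i
    return tuple(r)

e, u, v, w = (0, 1, 2), (1, 0, 2), (0, 2, 1), (2, 1, 0)
uv, vu = mul(u, v), mul(v, u)
G = [e, u, v, w, uv, vu]
K, R = [e, u], [e, uv, vu]
w3 = cmath.exp(2j * cmath.pi / 3)

# characters (name, dim, values) of the isotropy groups that appear
chars_S3 = [('triv', 1, {g: 1 for g in G}),
            ('sign', 1, {e: 1, u: -1, v: -1, w: -1, uv: 1, vu: 1}),
            ('W2',   2, {e: 2, u: 0, v: 0, w: 0, uv: -1, vu: -1})]
chars_Z2 = [('+', 1, {e: 1, u: 1}), ('-', 1, {e: 1, u: -1})]
chars_Z3 = [(k, 1, {e: 1, uv: w3**k, vu: w3**(2 * k)}) for k in range(3)]

# D(S3) irreps: (class C, isotropy G^{c0}, q-map, dim, character), 8 in total
DG = [(C, Gc, q, d, chi)
      for C, Gc, q, chars in [([e], G, {e: e}, chars_S3),
                              ([u, v, w], [e, u], {u: e, v: w, w: v}, chars_Z2),
                              ([uv, vu], [e, uv, vu], {uv: e, vu: v}, chars_Z3)]
      for _, d, chi in chars]

# Xi(R,K) irreps: (orbit O, isotropy K^{r0}, kappa-map, dim, character)
XI = [([e], K, {e: e}, 1, {e: 1, u: 1}),
      ([e], K, {e: e}, 1, {e: 1, u: -1}),
      ([uv, vu], [e], {uv: e, vu: u}, 1, {e: 1})]

def dmult(A, B):            # D(G) product: (d_g h)(d_f k) = [g = hfh^-1] d_g hk
    out = {}
    for (g, h), a in A.items():
        for (f, k), b in B.items():
            if g == mul(mul(h, f), inv(h)):
                key = (g, mul(h, k)); out[key] = out.get(key, 0) + a * b
    return out

table = []
for O, Kr, kap, drho, chr_ in XI:
    iP = {}                 # include P_(O,rho) via delta_r -> sum_y delta_{ry}
    for r in O:
        for n in Kr:
            coeff = drho / len(Kr) * chr_[inv(n)]
            h = mul(mul(kap[r], n), inv(kap[r]))
            for y in K:
                key = (mul(r, y), h); iP[key] = iP.get(key, 0) + coeff
    row = []
    for C, Gc, q, dpi, chp in DG:
        P = {}              # projector P_(C,pi) of D(G)
        for cc in C:
            for n in Gc:
                key = (cc, mul(mul(q[cc], n), inv(q[cc])))
                P[key] = P.get(key, 0) + dpi / len(Gc) * chp[inv(n)]
        integral = sum(a for (g, h), a in dmult(iP, P).items() if h == e)
        val = len(G) * integral / (len(O) * drho * len(C) * dpi)
        row.append(round(complex(val).real))
    table.append(row)

print(table)
```

The output reproduces the $3\times 8$ table above, row by row.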
\end{example}
\subsection{Boundary lattice model}\label{sec:boundary_lat}
Consider a vertex on the lattice $\Sigma$. Fixing a subgroup $K \subseteq G$, we define an action of $\mathbb{C} K$ on $\hbox{{$\mathcal H$}}$ by
\begin{equation}\label{actXi0}\tikzfig{CaK_vertex_action}\end{equation}
One can see that this is an action as it is a tensor product of representations on each edge, or simply because it is the restriction to $K$ of the vertex action of $G$ in the bulk. Next, we define the action of $\mathbb{C} (R)$ at a face relative to a cilium,
\begin{equation}\label{actXi}\tikzfig{CGK_face_action}\end{equation}
with a clockwise rotation. That this is indeed an action is also easy to check explicitly, recalling that either $rK = r'K$ when $r= r'$ or $rK \cap r'K = \emptyset$ otherwise, for any $r, r'\in R$. These actions define a representation of $\Xi(R,K)$, which is just the bulk $D(G)$ action restricted to $\Xi(R,K)\subseteq D(G)$ by the inclusion in Proposition~\ref{Xisub}. This says that $x\in K$ acts as in $G$ and $\mathbb{C}(R)$ acts on faces by the $\mathbb{C}(G)$ action after sending $\delta_r \mapsto \sum_{a\in rK}\delta_a$. To connect the above representation to the topic at hand, we now define what we mean by a boundary.
\subsubsection{Smooth boundaries}
Consider the lattice in the half-plane for simplicity,
\[\tikzfig{smooth_halfplane}\]
where each solid black arrow still carries a copy of $\mathbb{C} G$ and the ellipses indicate that the lattice extends infinitely. The boundary runs along the left hand side and we refer to the rest of the lattice as the `bulk'. The grey dashed edges and vertices indicate empty space; the lattice meets the boundary with faces, and we call this case a `smooth' boundary. There is a site $s_0$ indicated at the boundary.
There is an action of $\mathbb{C} K$ at the boundary vertex associated to $s_0$, identical to the action of $\mathbb{C} K$ defined above but with the left edge undefined. Similarly, there is an action of $\mathbb{C}(R)$ at the face associated to $s_0$. However, this is more complicated, as the face has three edges undefined and the action must be defined slightly differently from in the bulk:
\[\tikzfig{smooth_face_action}\]
\[\tikzfig{smooth_face_actionB}\]
where the action is given a superscript ${\triangleright}^b$ to differentiate it from the actions in the bulk. In the first case, we follow the same clockwise rotation rule but skip over the undefined values on the grey edges; in the second case we instead go round anticlockwise. Which rule applies is determined by whether the cilium is attached to the top or the bottom of the boundary edge. It is easy to check that this defines a representation of $\Xi(R,K)$ on $\hbox{{$\mathcal H$}}$ associated to each smooth boundary site, such as $s_0$, and that the actions of $\mathbb{C}(R)$ have been chosen such that this holds. A similar principle holds for ${\triangleright}^b$ in other orientations of the boundary.
The integral actions at a boundary vertex $v$ and at a face $s_0=(v,p)$ of a smooth boundary are then
\[ A^b_1(v):=\Lambda_{\mathbb{C} K}{\triangleright}^b_v = {1\over |K|}\sum_k k{\triangleright}^b_v,\quad B^b_1(p):=\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{p} = \delta_e{\triangleright}^b_{p},\]
where the superscript $b$ and subscript $1$ label that these are at a smooth boundary. We have the convenient property that
\[\tikzfig{smooth_face_integral}\]
so both the vertex and face integral actions at a smooth boundary each depend only on the vertex and face respectively, not on the precise cilium, just as for the integral actions in the bulk.
\begin{remark}\rm
There is similarly an action of $\mathbb{C}(G) {>\!\!\!\triangleleft} \mathbb{C} K \subseteq D(G)$ on $\hbox{{$\mathcal H$}}$ at each site in the next layer into the bulk, where the site has the vertex at the boundary but an internal face. We mention this for completeness, and because using this fact it is easy to show that
\[A_1^b(v)B(p) = B(p)A_1^b(v),\]
where $B(p)$ is the usual integral action in the bulk.
\end{remark}
\begin{remark}\rm
In \cite{BSW} it is claimed that one can similarly introduce actions at smooth boundaries defined not only by $R$ and $K$ but also a 2-cocycle $\alpha$. This makes some sense categorically, as the module categories of $\hbox{{$\mathcal M$}}^G$ may also include such a 2-cocycle, which enters by way of a \textit{twisted} group algebra $\mathbb{C}_\alpha K$ \cite{Os2}. However, in Figure 6 of \cite{BSW} one can see that when the cocycle $\alpha$ is introduced all edges on the boundary are assumed to be copies of $\mathbb{C} K$, rather than $\mathbb{C} G$. On closer inspection, it is evident that this means that the action on faces of $\delta_e\in\mathbb{C}(R)$ will always yield 1, and the action of any other basis element of $\mathbb{C}(R)$ will yield 0. Similarly, the action on vertices is defined to still be an action of $\mathbb{C} K$, not $\mathbb{C}_\alpha K$. Thus, the excitations on this boundary are restricted to only the representations of $\mathbb{C} K$, without either $\mathbb{C}(R)$ or $\alpha$ appearing, which appears to defeat the purpose of the definition. It is not obvious to us that a cocycle can be included along these lines in a consistent manner.
\end{remark}
In quantum computational terms, in addition to the measurements in the bulk we now measure the operator $\sum_{\hbox{{$\mathcal O$}},\rho}p_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b$ for distinct coefficients $p_{\hbox{{$\mathcal O$}},\rho} \in \mathbb{R}$ at all sites along the boundary.
\subsubsection{Rough boundaries}
We now consider the half-plane lattice with a different kind of boundary,
\[\tikzfig{rough_halfplane}\]
This time, there is an action of $\mathbb{C} K$ at the exterior vertex and an action of $\mathbb{C}(R)$ at the face at the boundary with an edge undefined. Again, the former is just the usual action of $\mathbb{C} K$ with three edges undefined, but the action of $\mathbb{C}(R)$ requires more care and is defined as
\[\tikzfig{rough_face_action}\]
\[\tikzfig{rough_face_actionB}\]
\[\tikzfig{rough_face_actionC}\]
\[\tikzfig{rough_face_actionD}\]
All but the second action are just clockwise rotations as in the bulk, but with the greyed-out edge missing from the $\delta$-function. The second action goes anticlockwise in order to have an associated representation of $\Xi(R,K)$ at the bottom left. We have similar actions for other orientations of the lattice.
\begin{remark}\rm Although one can check that one has a representation of $\Xi(R,K)$ at each site using these actions and the action of $\mathbb{C} K$ defined before, this requires $g_1$ and $g_2$ on opposite sides of the $\delta$-function, and $g_1$ and $g_3$ on opposite sides, respectively for the last two actions. This means that there is no way to get $\delta_e{\triangleright}^b$ to always be invariant under choice of site in the face. Indeed, we have not been able to reproduce the implicit claim in \cite{CCW} that $\delta_e{\triangleright}^b$ at a rough boundary can be defined in a way that depends only on the face.
\end{remark}
The integral actions at a boundary vertex $v$ and at a site $s_0=(v,p)$ of a rough boundary are then
\[ A_2^b(v):=\Lambda_{\mathbb{C} K}{\triangleright}^b_v = {1\over |K|}\sum_k k{\triangleright}^b_v,\quad B_2^b(v,p):=\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{s_0} = \delta_e{\triangleright}_{s_0}^b \]
where the superscript $b$ and subscript $2$ label that these are at a rough boundary. In computational terms, we measure the operator $\sum_{\hbox{{$\mathcal O$}},\rho}p_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b$ at each site along the boundary, as with smooth boundaries.
Unlike the smooth boundary case, there is no action of, say, $\mathbb{C} (R){>\!\!\!\triangleleft} \mathbb{C} G$ at each site in the next layer into the bulk, with a boundary face but interior vertex. In particular, we do not have $B_2^b(v,p)A(v) = A(v)B_2^b(v,p)$ in general, but we can still consistently define a Hamiltonian. When the action at $v$ is restricted to $\mathbb{C} K$ we recover an action of $\Xi(R,K)$ again.
As with the bulk, the Hamiltonian incorporating the boundaries uses the actions of the integrals. We can accommodate both rough and smooth boundaries into the Hamiltonian. Let $V,P$ be the set of vertices and faces in the bulk, $S_1$ the set of all sites $(v,p)$ at smooth boundaries, and $S_2$ the same for rough boundaries. Then
\begin{align*}H&=\sum_{v_i\in V} (1-A(v_i)) + \sum_{p_i\in P} (1-B(p_i)) \\
&\quad + \sum_{s_{b_1} \in S_1} \left((1 - A_1^b(s_{b_1})) + (1 - B_1^b(s_{b_1}))\right) + \sum_{s_{b_2} \in S_2} \left((1 - A_2^b(s_{b_2})) + (1 - B_2^b(s_{b_2}))\right).\end{align*}
We can pick out two vacuum states immediately:
\begin{equation}\label{eq:vac1}|{\rm vac}_1\> := \prod_{v_i,s_{b_1},s_{b_2}}A(v_i)A_1^b(s_{b_1})A_2^b(s_{b_2})\bigotimes_E e\end{equation}
and
\begin{equation}\label{eq:vac2}|{\rm vac}_2\> := \prod_{p_i,s_{b_1},s_{b_2}}B(p_i)B_1^b(s_{b_1})B_2^b(s_{b_2})\bigotimes_E \sum_{g \in G} g\end{equation}
where the tensor product runs over all edges in the lattice.
\begin{remark}\rm
There is no need for two different boundaries to correspond to the same subgroup $K$, and the Hamiltonian can be defined accordingly. This principle is necessary when performing quantum computation by braiding `defects', i.e. finite holes in the lattice, on the toric code \cite{FMMC}, and also for the lattice surgery in Section~\ref{sec:patches}. We do not write out this Hamiltonian in all its generality here, but its form is obvious.
\end{remark}
\subsection{Quasiparticle condensation}
Quasiparticles on the boundary correspond to irreps of $\Xi(R,K)$. It is immediate from Section~\ref{sec:xi} that when $\hbox{{$\mathcal O$}} = \{e\}$, we must have $r_0 = e, K^{r_0} = K$. We may choose the trivial representation of $K$ and then we have $P_{e,1} = \Lambda_{\mathbb{C}(R)} \otimes \Lambda_{\mathbb{C} K}$. We say that this particular measurement outcome corresponds to the absence of nontrivial quasiparticles, as the states yielding this outcome are precisely the locally vacuum states with respect to the Hamiltonian. This set of quasiparticles on the boundary will not in general be the same as quasiparticles defined in the bulk, as ${}_{\Xi(R,K)}\mathcal{M} \not\simeq {}_{D(G)}\mathcal{M}$ for all nontrivial $G$.
Quasiparticles in the bulk can be created from a vacuum and moved using ribbon operators \cite{Kit}, where the ribbon operators are seen as left and right module maps $D(G)^* \rightarrow \mathrm{End}(\hbox{{$\mathcal H$}})$, see \cite{CowMa}. Following \cite{CCW}, we could similarly define a different set of ribbon operators for the boundary excitations, which use $\Xi(R,K)^*$ rather than $D(G)^*$. However, these have limited utility. For completeness we cover them in Appendix~\ref{app:ribbon_ops}. Instead, for our purposes we will keep using the normal ribbon operators.
Such normal ribbon operators can extend to boundaries, still using Definition~\ref{def:ribbon}, so long as none of the edges involved in the definition are greyed-out. When a ribbon operator ends at a boundary site $s$, we are not concerned with equivariance with respect to the actions of $\mathbb{C}(G)$ and $\mathbb{C} G$ at $s$, as in Lemma~\ref{ribcom}. Instead we should calculate equivariance with respect to the actions of $\mathbb{C}(R)$ and $\mathbb{C} K$. We will study the matter in more depth in Section~\ref{sec:quasi}, but note that if $s,t\in R$ then unique factorisation means that $st=(s\cdot t)\tau(s,t)$ for unique elements $s\cdot t\in R$ and $\tau(s,t)\in K$. Similarly, if $y\in K$ and $r\in R$ then unique factorisation $yr=(y{\triangleright} r)(y{\triangleleft} r)$ defines $y{\triangleright} r\in R$ and $y{\triangleleft} r\in K$, to be studied later.
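The unique factorisation data $(s\cdot t,\ \tau(s,t))$ and $(y{\triangleright} r,\ y{\triangleleft} r)$ can be checked concretely on a small example. The following is an illustrative sketch (our own code, not part of the model) for $G=S_3$ with $K=\{e,u\}$ and $R=\{e,uv,vu\}$, the running example used later in the text; all function names are our own choices.

```python
# Illustrative sketch: unique factorisation G = RK and the induced data
# s.t, tau(s,t), y|>r, y<|r, for G = S3, K = {e,u}, R = {e,uv,vu}.
# Group elements are permutations of {0,1,2}.

def mul(p, q):
    """Compose permutations: mul(p, q)(i) = p(q(i))."""
    return tuple(p[i] for i in q)

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu = mul(u, v), mul(v, u)   # the two 3-cycles
w = mul(uv, u)                  # the third transposition, w = uvu
G = [e, u, v, w, uv, vu]
K = [e, u]                      # subgroup isomorphic to Z_2
R = [e, uv, vu]                 # transversal of K in G

def factorise(g):
    """The unique pair (r, k) in R x K with g = rk."""
    matches = [(r, k) for r in R for k in K if mul(r, k) == g]
    assert len(matches) == 1    # uniqueness of the factorisation G = RK
    return matches[0]

def dot_tau(s, t):
    """st = (s.t) tau(s,t): returns (s.t, tau(s,t))."""
    return factorise(mul(s, t))

def triangle(y, r):
    """yr = (y|>r)(y<|r): returns (y|>r, y<|r)."""
    return factorise(mul(y, r))
```

For instance, `factorise` recovers the factorisations $u=eu$, $v=vu\,u$ and $w=uv\,u$ that are used in the explicit $D(S_3)$ calculation below.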
\begin{lemma}\label{boundary_ribcom}
Let $\xi$ be an open ribbon from $s_0$ to $s_1$, where $s_0$ is located at a smooth boundary, for example:
\[\tikzfig{smooth_halfplane_ribbon_short}\]
and where $\xi$ begins at the specified orientation in the example, leading from $s_0$ into the bulk, rather than running along the boundary. Then
\[x{\triangleright}^b_{s_0}\circ F_\xi^{h,g}=F_\xi^{xhx^{-1},xg} \circ x{\triangleright}^b_{s_0};\quad \delta_r{\triangleright}^b_{s_0}\circ F_\xi^{h,g}=F_\xi^{h,g} \circ\delta_{s\cdot(y{\triangleright} r)}{\triangleright}^b_{s_0}\]
$\forall x\in K, r\in R, h,g\in G$, where $h^{-1}=sy$ is the unique factorisation of $h^{-1}$ with $s\in R$, $y\in K$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
The first is just the vertex action of $\mathbb{C} G$ restricted to $\mathbb{C} K$, with an edge greyed-out which does not influence the result. For the second, expand $\delta_r{\triangleright}^b_{s_0}$ and verify explicitly:
\[\tikzfig{smooth_halfplane_ribbon_shortA1}\]
\[\tikzfig{smooth_halfplane_ribbon_shortA2}\]
where we see $(s\cdot(y{\triangleright} r))K = s(y{\triangleright} r)\tau(s,y{\triangleright} r)^{-1}K = s(y{\triangleright} r)K = s(y{\triangleright} r)(y{\triangleleft} r)K = syrK = h^{-1}rK$. We check the other site as well:
\[\tikzfig{smooth_halfplane_ribbon_shortB1}\]
\[\tikzfig{smooth_halfplane_ribbon_shortB2}\]
\endproof
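The key coset identity in the proof, $(s\cdot(y{\triangleright} r))K = h^{-1}rK$ where $h^{-1}=sy$, can be verified exhaustively on the running $S_3$ example. The following is our own numerical sketch, with $K=\{e,u\}$ and $R=\{e,uv,vu\}$ as before.

```python
# Exhaustive check of (s.(y|>r))K = h^{-1} r K for all h in G, r in R,
# where h^{-1} = sy is the unique factorisation, for G = S3.

def mul(p, q):
    return tuple(p[i] for i in q)

def inv(p):
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu = mul(u, v), mul(v, u)
G = [e, u, v, mul(uv, u), uv, vu]
K = [e, u]
R = [e, uv, vu]

def factorise(g):
    """The unique (r, k) in R x K with g = rk."""
    return next((r, k) for r in R for k in K if mul(r, k) == g)

for h in G:
    s, y = factorise(inv(h))
    for r in R:
        y_tr_r, _ = factorise(mul(y, r))      # y |> r
        s_dot, _ = factorise(mul(s, y_tr_r))  # s . (y |> r)
        left = {mul(s_dot, k) for k in K}     # the coset (s.(y|>r))K
        right = {mul(mul(inv(h), r), k) for k in K}  # the coset h^{-1} r K
        assert left == right
```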
\begin{remark}\rm
One might be surprised that the equivariance property holds for the latter case when $s_0$ is attached to the vertex at the bottom of the face, as in this case $\delta_r{\triangleright}^b_{s_0}$ confers a $\delta$-function in the counterclockwise direction, different from the bulk. This is because the well-known equivariance properties in the bulk \cite{Kit} are not wholly correct, depending on orientation, as pointed out in \cite[Section~3.3]{YCC}. We accounted for this by specifying an orientation in Lemma~\ref{ribcom}.
\end{remark}
\begin{remark}\rm\label{rem:rough_ribbon}
We have a similar situation for a rough boundary, albeit we found only one orientation for which the same equivariance property holds, which is:
\[\tikzfig{rough_halfplane_ribbon}\]
In the reverse orientation, where the ribbon crosses downwards instead, equivariance is similar but with the introduction of an antipode. For other orientations we do not find an equivariance property at all.
\end{remark}
As with the bulk, we can define an excitation space using a ribbon between the two endpoints $s_0$, $s_1$, although more care must be taken in the definition.
\begin{lemma}\label{Ts0s1}
Let ${|{\rm vac}\>}$ be a vacuum state on a half-plane $\Sigma$, where there is one smooth boundary beyond which there are no more edges. Let $\xi$ be a ribbon between two endpoints $s_0, s_1$ where $s_0 = \{v_0,p_0\}$ is on the boundary and $s_1 = \{v_1,p_1\}$ is in the bulk, such that $\xi$ interacts with the boundary only once, when crossing from $s_0$ into the bulk; it cannot cross back and forth multiple times. Let $|\psi^{h,g}\>:=F_\xi^{h,g}{|{\rm vac}\>}$, and $\hbox{{$\mathcal T$}}_{\xi}(s_0,s_1)$ be the space with basis $|\psi^{h,g}\>$.
(1) $|\psi^{h,g}\>$ is independent of the choice of ribbon through the bulk between fixed sites $s_0, s_1$, so long as the ribbon still only interacts with the boundary at the chosen location.
(2) $\hbox{{$\mathcal T$}}_\xi(s_0,s_1)\subset\hbox{{$\mathcal H$}}$ inherits actions at disjoint sites $s_0, s_1$,
\[ x{\triangleright}^b_{s_0}|\psi^{h,g}\>=|\psi^{ xhx^{-1},xg}\>,\quad \delta_r{\triangleright}^b_{s_0}|\psi^{h,g}\>=\delta_{rK,hK}|\psi^{h,g}\>\]
\[ f{\triangleright}_{s_1}|\psi^{h,g}\>=|\psi^{h,gf^{-1}}\>,\quad \delta_f{\triangleright}_{s_1}|\psi^{h,g}\>=\delta_{f,g^{-1}h^{-1}g}|\psi^{h,g}\>\]
where we use the isomorphism $|\psi^{h,g}\>\mapsto \delta_hg$ to see the action at $s_0$ as a representation of $\Xi(R,K)$ on $D(G)$. In particular it is the restriction of the left regular representation of $D(G)$ to $\Xi(R,K)$, with inclusion map $i$ from Lemma~\ref{Xisub}. The action at $s_1$ is the right regular representation of $D(G)$, as in the bulk.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
(1) is the same as the proof in \cite[Prop.3.10]{CowMa}, with the exception that if the ribbon $\xi'$ crosses the boundary multiple times it will incur an additional energy penalty from the Hamiltonian for each crossing, and thus $\hbox{{$\mathcal T$}}_{\xi'}(s_0,s_1) \neq \hbox{{$\mathcal T$}}_{\xi}(s_0,s_1)$ in general.
(2) This follows by the commutation rules in Lemma~\ref{boundary_ribcom} and Lemma~\ref{ribcom} respectively, using
\[x{\triangleright}^b_{s_0}{|{\rm vac}\>} = \delta_e{\triangleright}^b_{s_0}{|{\rm vac}\>} = {|{\rm vac}\>}; \quad f{\triangleright}_{s_1}{|{\rm vac}\>} = \delta_e{\triangleright}_{s_1}{|{\rm vac}\>} = {|{\rm vac}\>}\]
$\forall x\in K, f \in G$. For the hardest case we have
\begin{align*}\delta_r{\triangleright}^b_{s_0}F_\xi^{h,g}{|{\rm vac}\>} &= F_\xi^{h,g} \circ\delta_{s\cdot(y{\triangleright} r)}{\triangleright}^b_{s_0}{|{\rm vac}\>}\\
&= F_\xi^{h,g}\delta_{s\cdot(y{\triangleright} r)K,K}{|{\rm vac}\>}\\ &= F_\xi^{h,g}\delta_{rK,hK}{|{\rm vac}\>}.
\end{align*}
For the restriction of the action at $s_0$ to $\Xi(R,K)$, we have that
\[\delta_r\cdot\delta_hg = \delta_{rK,hK}\delta_hg = \sum_{a\in rK}\delta_{a,h}\delta_hg=i(\delta_r)\delta_hg.\]
and $x\cdot \delta_hg = x\delta_hg = i(x)\delta_hg$.
\endproof
In the bulk, the excitation space $\hbox{{$\mathcal L$}}(s_0,s_1)$ is totally independent of the ribbon $\xi$ \cite{Kit,CowMa}, but we do not know of a similar property for $\hbox{{$\mathcal T$}}_\xi(s_0,s_1)$ when interacting with the boundary without the restrictions stated.
We explained in Section~\ref{sec:xi} how representations of $D(G)$ at sites in the bulk relate to those of $\Xi(R,K)$ in the boundary by functors in both directions. Physically, if we apply ribbon trace operators, that is operators of the form $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$, to the vacuum, then in the bulk we create exactly a quasiparticle of type $({\hbox{{$\mathcal C$}}},\pi)$ and $({\hbox{{$\mathcal C$}}}^*,\pi^*)$ at either end. Now let us include a boundary.
\begin{definition}Given an irrep of $D(G)$ provided by $({\hbox{{$\mathcal C$}}},\pi)$, we define the {\em boundary projection} $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}\in \Xi(R,K)$ by
\[ P_{i^*({\hbox{{$\mathcal C$}}},\pi)}=\sum_{(\hbox{{$\mathcal O$}},\rho)\ |\ n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)}\ne 0} P_{(\hbox{{$\mathcal O$}},\rho)}\]
i.e. we sum over the projectors of all the types of irreps of $\Xi(R,K)$ contained in the restriction of the given $D(G)$ irrep.
\end{definition}
It is clear that $P_{i^*({\hbox{{$\mathcal C$}}},\pi)}$ is a projection as a sum of orthogonal projections.
\begin{proposition}\label{prop:boundary_traces}
Let $\xi$ be an open ribbon extending from an external site $s_0$ on a smooth boundary with associated algebra $\Xi(R,K)$ to a site $s_1$ in the bulk, for example:
\[\tikzfig{smooth_halfplane_ribbon}\]
Then
\[P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = 0\quad {\rm iff} \quad n^{(\hbox{{$\mathcal O$}},\rho)}_{({\hbox{{$\mathcal C$}}},\pi)} = 0.\]
In addition, we have
\[P_{i^*({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}^b_{s_0} W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} {\triangleleft}_{s_1} P_{({\hbox{{$\mathcal C$}}},\pi)},\]
where we see the left action at $s_1$ of $P_{({\hbox{{$\mathcal C$}}}^*,\pi^*)}$ as a right action using the antipode.
\end{proposition}
{\noindent {\bfseries Proof:}\quad }
Under the isomorphism in Lemma~\ref{Ts0s1} we have that $W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} \mapsto P_{({\hbox{{$\mathcal C$}}},\pi)} \in D(G)$. For the first part we therefore have
\[P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} \mapsto i(P_{(\hbox{{$\mathcal O$}},\rho)}) P_{({\hbox{{$\mathcal C$}}},\pi)}\]
so the result follows from the last part of Lemma~\ref{lemfrobn}. Since the sum of projectors over the irreps of $\Xi$ is 1, this then implies the second part:
\[W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = \sum_{\hbox{{$\mathcal O$}},\rho}P_{(\hbox{{$\mathcal O$}},\rho)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>} = P_{i^*({\hbox{{$\mathcal C$}}},\pi)}{\triangleright}^b_{s_0}W^{{\hbox{{$\mathcal C$}}},\pi}_\xi{|{\rm vac}\>}.\]
The action at $s_1$ is the same as for bulk ribbon operators.
\endproof
The physical interpretation is that application of a ribbon trace operator $W_\xi^{{\hbox{{$\mathcal C$}}},\pi}$ to a vacuum state creates a quasiparticle at $s_0$ of all the types contained in $i^*({\hbox{{$\mathcal C$}}},\pi)$, while still creating one of type $({\hbox{{$\mathcal C$}}}^*,\pi^*)$ at $s_1$; this is called the \textit{condensation} of $({{\hbox{{$\mathcal C$}}},\pi})$ at the boundary. While we used a smooth boundary in this example, the proposition applies equally to rough boundaries with the specified orientation in Remark~\ref{rem:rough_ribbon} by similar arguments.
\begin{example}\rm
In the bulk, we take the $D(S_3)$ model. Then by Example~\ref{exDS3}, we have exactly 8 irreps in the bulk. At the boundary, we take $K=\{e,u\} = \mathbb{Z}_2$ with $R = \{e,uv,vu\}$. As per the table in Example~\ref{exS3n} and Proposition~\ref{prop:boundary_traces} above, we then have for example that
\[(P_{\hbox{{$\mathcal O$}}_0,-1}+P_{\hbox{{$\mathcal O$}}_1,1}){\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} {\triangleleft}_{s_1}P_{{\hbox{{$\mathcal C$}}}_1,-1}.\]
We can see this explicitly. Recall that
\[\Lambda_{\mathbb{C}(R)}{\triangleright}^b_{s_0}{|{\rm vac}\>} = \Lambda_{\mathbb{C} K}{\triangleright}^b_{s_0}{|{\rm vac}\>} = {|{\rm vac}\>}.\]
All other vertex and face actions give 0 by orthogonality. Then,
\[P_{\hbox{{$\mathcal O$}}_0,-1} = {1\over 2}\delta_e \mathop{{\otimes}} (e-u); \quad P_{\hbox{{$\mathcal O$}}_1, 1} = (\delta_{uv} + \delta_{vu})\mathop{{\otimes}} e\]
and
\[W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1} = \sum_{c\in \{u,v,w\}}F_\xi^{c,e}-F_\xi^{c,c}\]
by Lemmas~\ref{Xiproj} and \ref{lem:quasi_basis} respectively. For convenience, we break the calculation up into two parts, one for each projector. Throughout, we will make use of Lemma~\ref{boundary_ribcom} to commute projectors through ribbon operators. First, we have that
\begin{align*}
&P_{\hbox{{$\mathcal O$}}_0,-1}{\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} = {1\over 2}(\delta_e \mathop{{\otimes}} (e - u)){\triangleright}^b_{s_0} \sum_{c\in \{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c}){|{\rm vac}\>}\\
&= {1\over 2}\delta_e{\triangleright}^b_{s_0}[\sum_{c\in\{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c})-(F_\xi^{u,u}-F_\xi^{e,u}+F_\xi^{v,u}-F_\xi^{v,uv}+F_\xi^{w,u}-F_\xi^{w,vu})]{|{\rm vac}\>}\\
&= {1\over 2}[(F_\xi^{u,e}-F_\xi^{u,u})\delta_e{\triangleright}^b_{s_0}+(F_\xi^{v,e}-F_\xi^{v,v})\delta_{vu}{\triangleright}^b_{s_0}+(F_\xi^{w,e}-F_\xi^{w,w})\delta_{uv}{\triangleright}^b_{s_0}\\
&+ (F^{u,e}_\xi-F^{u,u}_\xi)\delta_e{\triangleright}^b_{s_0} + (F^{v,uv}_\xi-F^{v,u}_\xi)\delta_{vu}{\triangleright}^b_{s_0} + (F^{w,vu}_\xi-F^{w,u}_\xi)\delta_{uv}{\triangleright}^b_{s_0}]{|{\rm vac}\>}\\
&= (F_\xi^{u,e}-F_\xi^{u,u}){|{\rm vac}\>}
\end{align*}
where we used the fact that $u = eu, v=vuu, w=uvu$ to factorise these elements in terms of $R,K$. Second,
\begin{align*}
P_{\hbox{{$\mathcal O$}}_1,1}{\triangleright}^b_{s_0}W_\xi^{{\hbox{{$\mathcal C$}}}_1,-1}{|{\rm vac}\>} &= ((\delta_{uv} + \delta_{vu})\mathop{{\otimes}} e){\triangleright}^b_{s_0}\sum_{c\in \{u,v,w\}}(F_\xi^{c,e}-F_\xi^{c,c}){|{\rm vac}\>}\\
&= (F_\xi^{v,e}-F_\xi^{v,v}+F_\xi^{w,e}-F_\xi^{w,w})(\delta_e\mathop{{\otimes}} e){\triangleright}^b_{s_0}{|{\rm vac}\>}\\
&= (F_\xi^{v,e}-F_\xi^{v,v}+F_\xi^{w,e}-F_\xi^{w,w}){|{\rm vac}\>}.
\end{align*}
The result follows immediately. All other boundary projections of $D(S_3)$ ribbon trace operators can be worked out in a similar way.
\end{example}
\begin{remark}\rm
Proposition~\ref{prop:boundary_traces} does not tell us exactly how \textit{all} ribbon operators in the quasiparticle basis are detected at the boundary, only the ribbon trace operators. We do not know a similar general formula for all ribbon operators.
\end{remark}
Now, consider a lattice in the plane with two boundaries, namely to the left and right,
\[\tikzfig{smooth_twobounds}\]
Recall that a lattice on an infinite plane admits a single ground state ${|{\rm vac}\>}$ as explained in \cite{CowMa}. However, in the present case, we may also be able to use ribbon operators in the quasiparticle basis extending from one boundary, at $s_0$ say, to the other, at $s_1$ say, such that no quasiparticles are detected at either end. These ribbon operators do not form a closed, contractible loop, as all undetectable ones do in the bulk; the corresponding states $|\psi\>$ are ground states and the vacuum has increased degeneracy. We can similarly induce additional degeneracy of excited states. This justifies the term \textit{gapped boundaries}, as the boundaries give rise to additional states with energies that are `gapped'; that is, they have a finite energy difference $\Delta$ (which may be zero) independently of the width of the lattice.
\section{Patches}\label{sec:patches}
For any nontrivial group $G$, there are always at least two distinct choices of boundary conditions, namely with $K=\{e\}$ and $K=G$ respectively. In these cases, we necessarily have $R=G$ and $R=\{e\}$ respectively.
Considering $K=\{e\}$ on a smooth boundary, we can calculate that $A^b_1(v) = \mathrm{id}$ and $B^b_1(s)g = \delta_{e,g} g$, for $g$ an element corresponding to the single edge associated with the boundary site $s$. This means that after performing the measurements at a boundary, these edges are totally constrained and not part of the large entangled state incorporating the rest of the lattice, and hence do not contribute to the model whatsoever. If we remove these edges then we are left with a rough boundary, in which all edges participate, and therefore we may consider the $K=\{e\}$ case to imply a rough boundary. A similar argument applies for $K=G$ when considered on a rough boundary, which has $A^b_2(v)g = A(v)g = {1\over |G|}\sum_k kg = {1\over |G|}\sum_k k$ for an edge with state $g$ and $B^b_2(s) = \mathrm{id}$. $K=G$ therefore naturally corresponds instead to a smooth boundary, as otherwise the outer edges are totally constrained by the projectors. From now on, we will accordingly use smooth to refer always to the $K=G$ condition, and rough for $K=\{e\}$.
These boundary conditions are convenient because the condensation of bulk excitations to the vacuum at a boundary can be partially worked out in the group basis. For $K=\{e\}$, it is easy to see that the ribbon operators which are undetected at the boundary (and therefore leave the system in a vacuum state) are exactly those of the form $F_\xi^{e,g}$, for all $g\in G$, as any nontrivial $h$ in $F_\xi^{h,g}$ will be detected by the boundary face projectors. This can also be worked out representation-theoretically using Proposition~\ref{nformula}.
\begin{lemma}\label{lem:rough_functor}
Let $K=\{e\}$. Then the multiplicity of an irrep $({\hbox{{$\mathcal C$}}},\pi)$ of $D(G)$ with respect to the trivial representation of $\Xi(G,\{e\})$ is
\[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)} = \delta_{{\hbox{{$\mathcal C$}}},\{e\}}{\rm dim}(W_\pi)\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
Applying Proposition~\ref{nformula} in the case where $V_i$ is trivial, we start with
\[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={|G| \over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}\cap \{e\}} |\{e\}^c| n_{1,\tilde\pi}\]
where ${\hbox{{$\mathcal C$}}}\cap \{e\} = \{e\}$ iff ${\hbox{{$\mathcal C$}}}=\{e\}$, or otherwise $\emptyset$. Also, $\tilde\pi = \oplus_{{\rm dim}(W_\pi)} (\{e\},1)$, and if ${\hbox{{$\mathcal C$}}} = \{e\}$ then $|G^{c_0}| = |G|$.
\endproof
The factor of ${\rm dim}(W_\pi)$ in the r.h.s. implies that there are no other terms in the decomposition of $i^*(\{e\},\pi)$. In physical terms, this means that the trace ribbon operators $W^{e,\pi}_\xi$ are the only undetectable trace ribbon operators, and any ribbon operators which do not lie in the block associated to $(e,\pi)$ are detectable. In fact, in this case we have a further property which is that all ribbon operators in the chargeon sector are undetectable, as by equation~(\ref{chargeon_ribbons}) chargeon sector ribbon operators are Fourier isomorphic to those of the form $F_\xi^{e,g}$ in the group basis.
In the more general case of a rough boundary for an arbitrary choice of $\Xi(R,K)$, the orientation of the ribbon is important for using the representation-theoretic argument. When $K=\{e\}$, one can check that for $F^{e,g}_\xi$ the rough boundary version of Lemma~\ref{Ts0s1} applies regardless of orientation.
The $K=G$ case is slightly more complicated:
\begin{lemma}\label{lem:smooth_functor}
Let $K=G$. Then the multiplicity of an irrep $({\hbox{{$\mathcal C$}}},\pi)$ of $D(G)$ with respect to the trivial representation of $\Xi(\{e\},G)$ is
\[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)} = \delta_{\pi,1}\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
We start with
\[n^{(\{e\},1)}_{({\hbox{{$\mathcal C$}}},\pi)}={1 \over |{\hbox{{$\mathcal C$}}}| |G^{c_0}|}\sum_{c\in {\hbox{{$\mathcal C$}}}} |G^c| n_{1,\tilde\pi}.\]
Now, $K^{r,c} = G^c$ and so $\tilde\pi = \pi$, giving $n_{1,\tilde\pi} = \delta_{1,\pi}$. Then $\sum_{c\in{\hbox{{$\mathcal C$}}}}|G^c| = |{\hbox{{$\mathcal C$}}}||G^{c_0}|$.
\endproof
This means that the only undetectable ribbon operators between smooth boundaries are those in the fluxion sector, i.e. those with associated irrep $({\hbox{{$\mathcal C$}}}, 1)$. However, there is no factor of $|{\hbox{{$\mathcal C$}}}|$ on the r.h.s. and so the decomposition of $i^*({\hbox{{$\mathcal C$}}},1)$ will generally have additional terms other than just $(\{e\},1)$ in ${}_{\Xi(\{e\},G)}\hbox{{$\mathcal M$}}$. As a consequence, a fluxion trace ribbon operator $W^{{\hbox{{$\mathcal C$}}},1}_\zeta$ between smooth boundaries is undetectable iff its associated conjugacy class is a singleton, say ${\hbox{{$\mathcal C$}}}= \{c_0\}$, and thus $c_0 \in Z(G)$, the centre of $G$.
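The criterion above rests on the standard fact that the singleton conjugacy classes of $G$ are exactly the elements of the centre $Z(G)$. A quick illustrative check (our own sketch, not the paper's code) for $S_3$, which has trivial centre, and the dihedral group $D_4$, whose centre has order 2:

```python
# Singleton conjugacy classes coincide with the centre Z(G),
# checked for S3 and D4 realised as permutation groups.

def mul(p, q):
    return tuple(p[i] for i in q)

def inv(p):
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

def generate(gens):
    """Closure of a generating set of permutations."""
    G = {tuple(range(len(gens[0])))}
    frontier = set(gens)
    while frontier:
        G |= frontier
        frontier = {mul(a, b) for a in G for b in G} - G
    return G

def classes(G):
    seen, cls = set(), []
    for g in G:
        if g in seen:
            continue
        c = {mul(mul(h, g), inv(h)) for h in G}
        seen |= c
        cls.append(c)
    return cls

def centre(G):
    return {g for g in G if all(mul(g, h) == mul(h, g) for h in G)}

S3 = generate([(1, 0, 2), (0, 2, 1)])       # two transpositions
D4 = generate([(1, 2, 3, 0), (3, 2, 1, 0)]) # rotation and reflection

for G in (S3, D4):
    singletons = {next(iter(c)) for c in classes(G) if len(c) == 1}
    assert singletons == centre(G)
```

In particular $|Z(S_3)|=1$, so between smooth $D(S_3)$ boundaries only the trivial fluxion trace ribbon operator is undetectable.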
\begin{definition}\rm
A \textit{patch} is a finite rectangular lattice segment with two opposite smooth sides, each equipped with boundary conditions $K=G$, and two opposite rough sides, each equipped with boundary conditions $K=\{e\}$, for example:
\[\tikzfig{patch}\]
\end{definition}
One can alternatively define patches with additional sides, such as in \cite{Lit1}, or with other boundary conditions which depend on another subgroup $K$ and transversal $R$, but we find this definition convenient. Note that our definition does not put conditions on the size of the lattice; the above diagram is just a conveniently small and yet nontrivial example.
We would like to characterise the vacuum space $\hbox{{$\mathcal H$}}_{\rm vac}$ of the patch. To do this, let us begin with $|{\rm vac}_1\>$ from equation~(\ref{eq:vac1}), and denote $|e\>_L := |{\rm vac}_1\>$. This is the \textit{logical zero state} of the patch. We will use this as a reference state to calculate other states in $\hbox{{$\mathcal H$}}_{\rm vac}$.
Now, for any other state $|\psi\>$ in $\hbox{{$\mathcal H$}}_{\rm vac}$, there must exist some linear map $D \in {\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$ such that $D|e\>_L = |\psi\>$, and thus if we can characterise the algebra of linear maps ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$, we automatically characterise $\hbox{{$\mathcal H$}}_{\rm vac}$. To help with this, we have the following useful property:
\begin{lemma}\label{lem:rib_move}
Let $F_\xi^{e,g}$ be a ribbon operator for some $g\in G$, with $\xi$ extending from the top rough boundary to the bottom rough boundary. Then the endpoints of $\xi$ may be moved along the rough boundaries with $K=\{e\}$ boundary conditions while leaving the action invariant on any vacuum state.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
We explain this on an example patch with initial state $|\psi\> \in \hbox{{$\mathcal H$}}_{\rm vac}$ and a ribbon $\xi$,
\[\tikzfig{bigger_patch}\]
\[\tikzfig{bigger_patch2}\]
using the fact that $a = cb$ and $m = lk$ by the definition of $\hbox{{$\mathcal H$}}_{\rm vac}$ for the second equality. Thus, we see that the ribbon through the bulk may be deformed as usual. As the only new component of the proof concerned the endpoints, we see that this property holds regardless of the size of the patch.
\endproof
One can calculate in particular that $F_\xi^{e,g}|e\>_L = \delta_{e,g}|e\>_L$, which we will prove more generally later. The undetectable ribbon operators between the smooth boundaries are of the form
\[W^{{\hbox{{$\mathcal C$}}},1}_\zeta = \sum_{n\in G} F_\zeta^{c_0,n}\]
when ${\hbox{{$\mathcal C$}}} = \{c_0\}$ by Lemma~\ref{lem:smooth_functor}, hence $G^{c_0} = G$. Technically, this lemma only tells us the ribbon trace operators which are undetectable, but in the present case none of the individual component operators are undetectable, only the trace operators. There are thus exactly $|Z(G)|$ orthogonal undetectable ribbon operators between smooth boundaries. These do not play an important role, but we describe them to characterise the operator algebra on $\hbox{{$\mathcal H$}}_{\rm vac}$. They obey a similar rule as Lemma~\ref{lem:rib_move}, which one can check in the same way.
In addition to the ribbon operators between sides, we also have undetectable ribbon operators between corners on the lattice. These corners connect smooth and rough boundaries, and thus careful application of specific ribbon operators can avoid detection from either face or vertex measurements,
\[\tikzfig{corner_ribbons}\]
where one can check that these do indeed leave the system in a vacuum using familiar arguments about $B(p)$ and $A(v)$. We could equally define such operators extending from either left corner to either right corner, and they obey the discrete isotopy laws as in the bulk. If we apply $F_\xi^{h,g}$ for any $g\neq e$ then we have $F_\xi^{h,g}|\psi\> =0$ for any $|\psi\>\in \hbox{{$\mathcal H$}}_{\rm vac}$, and so these are the only ribbon operators of this form.
\begin{remark}\rm
Corners of boundaries are algebraically interesting themselves, and can be used for quantum computation, but for brevity we skim over them. See e.g. \cite{Bom2,Brown} for details.
\end{remark}
These corner to corner, left to right and top to bottom ribbon operators span ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$, the linear maps which leave the system in vacuum. Due to Lemma~\ref{lem:ribs_only}, all other linear maps must decompose into ribbon operators, and these are the only ribbon operators in ${\rm End}(\hbox{{$\mathcal H$}}_{\rm vac})$ up to linearity.
As a consequence, we have well-defined patch states $|h\>_L := \sum_gF^{h,g}_\xi|e\>_L$ for each $h\in G$, where $\xi$ is any ribbon extending from the bottom left corner to the right. Now, working explicitly on the small patch below, we have
\[\tikzfig{wee_patch}\]
to start with, then:
\[\tikzfig{wee_patch2}\]
It is easy to see that we may always write $|h\>_L$ in this manner, for an arbitrary size of patch. Now, ribbon operators which are undetectable when $\xi$ extends from bottom to top are those of the form $F_\xi^{e,g}$, for example
\[\tikzfig{wee_patch3}\]
and so $F_\xi^{e,g}|h\>_L = \delta_{g,h}|h\>_L$, where again if we take a larger patch all additional terms will clearly cancel. Lastly, undetectable ribbon operators for a ribbon $\zeta$ extending from left to right are exactly those of the form $\sum_{n\in G} F_\zeta^{c_0,n}$ for any $c_0 \in Z(G)$. One can check that $|c_0 h\>_L = \sum_{n\in G} F_\zeta^{c_0,n} |h\>_L$, thus these give us no new states in $\hbox{{$\mathcal H$}}_{\rm vac}$.
\begin{lemma}\label{lem:patch_dimension}
For a patch with the $D(G)$ model in the bulk, ${\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}) = |G|$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
By the above characterisation of undetectable ribbon operators, the states $\{|h\>_L\}_{h\in G}$ span $\hbox{{$\mathcal H$}}_{\rm vac}$. The result then follows from the adjointness of ribbon operators, which means that the states $\{|h\>_L\}_{h\in G}$ are orthogonal.
\endproof
We can also work out that for $|{\rm vac}_2\>$ from equation~(\ref{eq:vac2}), we have $|{\rm vac}_2\> = \sum_h |h\>_L$. More generally:
\begin{corollary}\label{cor:matrix_basis}
$\hbox{{$\mathcal H$}}_{\rm vac}$ has an alternative basis with states $|\pi;i,j\>_L$, where $\pi$ is an irreducible representation of $G$ and $i,j$ are indices such that $0\leq i,j<{\rm dim}(V_\pi)$. We call this the quasiparticle basis of the patch.
\end{corollary}
{\noindent {\bfseries Proof:}\quad }
First, use the nonabelian Fourier transform on the ribbon operators $F_\xi^{e,g}$, so we have $F_\xi^{'e,\pi;i,j} = \sum_{n\in G}\pi(n^{-1})_{ji}F_\xi^{e,n}$. If we start from the reference state $|1;0,0\>_L := \sum_h |h\>_L = |{\rm vac}_2\>$ and apply these operators with $\xi$ from bottom to top of the patch then we get
\[|\pi;i,j\>_L = F_\xi^{'e,\pi;i,j}|1;0,0\>_L = \sum_{n\in G}\pi(n^{-1})_{ji} |n\>_L\]
which are orthogonal. Now, as $\sum_{\pi\in \hat{G}}{\rm dim}(V_\pi)^2 = |G|$ and we know ${\rm dim}(\hbox{{$\mathcal H$}}_{\rm vac}) = |G|$ by the previous Lemma~\ref{lem:patch_dimension}, $\{|\pi;i,j\>_L\}_{\pi,i,j}$ forms a basis of $\hbox{{$\mathcal H$}}_{\rm vac}$.
\endproof
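Both ingredients of this proof, the orthogonality of the vectors $(\pi(n^{-1})_{ji})_{n\in G}$ by Schur orthogonality and the dimension count $\sum_{\pi}{\rm dim}(V_\pi)^2=|G|$, can be illustrated numerically for $G=S_3$. The sketch below uses our own encoding of the three irreps of $S_3$ (trivial, sign, and the 2-dimensional irrep with the 3-cycle $uv$ acting as ${\rm diag}(\omega,\omega^2)$, $\omega=e^{2\pi i/3}$).

```python
# Schur orthogonality check for G = S3: the |G| = 6 vectors with components
# pi(n^{-1})_{ji} are pairwise orthogonal, matching the quasiparticle basis.
import cmath

def mul(p, q):
    return tuple(p[i] for i in q)

def inv(p):
    out = [0] * len(p)
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

e, u, v = (0, 1, 2), (1, 0, 2), (0, 2, 1)
uv, vu = mul(u, v), mul(v, u)
w = mul(uv, u)
G = [e, u, v, w, uv, vu]

omega = cmath.exp(2j * cmath.pi / 3)

def sgn(p):
    """Sign of a permutation via inversion count."""
    s = 1
    for i in range(3):
        for j in range(i + 1, 3):
            if p[i] > p[j]:
                s = -s
    return s

# 2-dim irrep: uv -> diag(omega, omega^2), u -> swap; rest by multiplicativity
std = {e: [[1, 0], [0, 1]],
       uv: [[omega, 0], [0, omega**2]],
       vu: [[omega**2, 0], [0, omega]],
       u: [[0, 1], [1, 0]],
       v: [[0, omega**2], [omega, 0]],   # v = vu.u
       w: [[0, omega], [omega**2, 0]]}   # w = uv.u

reps = {'triv': {g: [[1]] for g in G},
        'sign': {g: [[sgn(g)]] for g in G},
        'std': std}

vectors = []
for rho in reps.values():
    d = len(rho[e])
    for i in range(d):
        for j in range(d):
            vectors.append([rho[inv(n)][j][i] for n in G])

assert len(vectors) == 6   # sum over irreps of dim^2 equals |G|
for a in range(6):
    for b in range(6):
        ip = sum(complex(x).conjugate() * y
                 for x, y in zip(vectors[a], vectors[b]))
        if a != b:
            assert abs(ip) < 1e-9   # distinct basis states are orthogonal
        else:
            assert abs(ip) > 1e-9
```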
\begin{remark}\rm
Kitaev models are designed in general to be fault tolerant. The minimum number of component Hilbert spaces, that is copies of $\mathbb{C} G$ on edges, for which simultaneous errors will undetectably change the logical state and cause errors in the computation is called the `code distance' $d$ in the language of quantum codes. For the standard method of computation using nonabelian anyons \cite{Kit}, data is encoded using excited states, which are states with nontrivial quasiparticles at certain sites. The code distance can then be extremely small, and constant in the size of the lattice, as the smallest errors need only take the form of ribbon operators winding round a single quasiparticle at a site. This is no longer the case when encoding data in vacuum states on patches, as the only logical operators are specific ribbon operators extending from top to bottom, left to right or corner to corner. The code distance, and hence error resilience, of any vacuum state of the patch therefore increases linearly with the width of the patch as it is scaled, i.e. as the square root of the number $n$ of component Hilbert spaces in the patch, so that $n\sim d^2$.
\end{remark}
\subsection{Nonabelian lattice surgery}\label{sec:surgery}
Lattice surgery was invented as a method of fault-tolerant computation with the qubit, i.e. $\mathbb{C}\mathbb{Z}_2$, surface code \cite{HFDM}. The first author generalised it to qudit models using $\mathbb{C}\mathbb{Z}_d$ in \cite{Cow2}, and gave a fresh perspective on lattice surgery as `simulating' the Hopf algebras $\mathbb{C}\mathbb{Z}_d$ and $\mathbb{C}(\mathbb{Z}_d)$ on the logical space $\hbox{{$\mathcal H$}}_{\rm vac}$ of a patch. In this section, we prove that lattice surgery generalises to arbitrary finite group models, and `simulates' $\mathbb{C} G$ and $\mathbb{C}(G)$ in a similar way. Throughout, we assume that the projectors $A(v)$ and $B(p)$ may be performed deterministically for simplicity. In Appendix~\ref{app:measurements} we discuss the added complication that in practice we may only perform measurements which yield projections nondeterministically.
\begin{remark}\rm
When deriving the linear maps that nonabelian lattice surgeries yield, we will use specific examples, but the arguments clearly hold generally. For convenience, we will also tend to omit normalising scalar factors, which do not affect the calculations as the maps are $\mathbb{C}$-linear.
\end{remark}
Let us begin with a large rectangular patch. We now remove a line of edges from left to right by projecting each one onto $e$:
\[\tikzfig{split2}\]
We call this a \textit{rough split}, as we create two new rough boundaries. We no longer apply $A(v)$ to the vertices which have had attached edges removed. If we start with a small patch in the state $|l\>_L$ for some $l\in G$ then we can calculate the linear map explicitly:
\[\tikzfig{rough_split_project}\]
where we have separated the two patches afterwards for clarity, showing that they have two separate vacuum spaces. We then have that the last expression is
\[\tikzfig{rough_split_project2}\]
Observe the factors of $g$ in particular. The state is therefore now $\sum_g |g^{-1}\>_L\otimes |gl\>_L$, where the l.h.s. of the tensor product is the Hilbert space corresponding to the top patch, and the r.h.s. to the bottom. A change of variables gives $\sum_g |g\>_L\otimes |g^{-1}l\>_L$, the outcome of comultiplication of $\mathbb{C}(G)$ on the logical state $|l\>_L$ of the original patch.
Similarly, we can measure out a line of edges from bottom to top, for example
\[\tikzfig{split1}\]
We call this a \textit{smooth split}, as we create two new smooth boundaries. Each deleted edge is projected into the state ${1\over|G|}\sum_g g$. We also cease measurement of the faces which have had edges removed, and so we end up with two adjacent but disjoint patches. Working on a small example, we start with $|e\>_L$:
\[\tikzfig{smooth_split_project}\]
where in the last step we have taken $b\mapsto jc$, $g\mapsto kh$ from the $\delta$-functions and then a change of variables $j\mapsto jc^{-1}$, $k\mapsto kh^{-1}$ in the summation. Thus, we have ended with two disjoint patches, each in state $|e\>_L$. One can see that this works for any $|h\>_L$ in exactly the same way, and so the smooth split linear map is $|h\>_L \mapsto |h\>_L\otimes|h\>_L$, the comultiplication of $\mathbb{C} G$.
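The two split maps can also be checked directly at the level of logical states, independently of the lattice pictures. The minimal Python sketch below is our own illustration (the dict encoding of states in $\mathbb{C} G$, and $G=S_3$, are choices of the example): it confirms that the rough split has support exactly on pairs multiplying to $l$, i.e. is the comultiplication of $\mathbb{C}(G)$, while the smooth split is the group-like comultiplication of $\mathbb{C} G$.

```python
import itertools

# S3 as permutation tuples; logical states |g>_L are basis vectors of C[G]
G = sorted(itertools.permutations(range(3)))
mult = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(a.index(i) for i in range(3))

def rough_split(state):
    """|l> -> sum_g |g> (x) |g^{-1} l>; keys are pairs (top, bottom)."""
    out = {}
    for l, c in state.items():
        for g in G:
            k = (g, mult(inv(g), l))
            out[k] = out.get(k, 0) + c
    return out

def smooth_split(state):
    """|h> -> |h> (x) |h>."""
    return {(h, h): c for h, c in state.items()}

l = G[3]  # an arbitrary group element
# rough split = comultiplication of C(G): support is exactly {(a,b) : ab = l}
assert all(mult(a, b) == l for (a, b) in rough_split({l: 1}))
assert len(rough_split({l: 1})) == len(G)
# smooth split = comultiplication of CG: group-like on the basis
assert smooth_split({l: 1}) == {(l, l): 1}
```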
The opposite of splits are merges, whereby we take two disjoint patches and introduce edges to bring them together to a single patch. For the rough merge below, say we start with the basis states $|k\>_L$ and $|j\>_L$ on the bottom and top. First, we introduce an additional joining edge in the state $e$.
\[\tikzfig{merge1}\]
This state $|\psi\>$ automatically satisfies $B(p)|\psi\> = |\psi\>$ everywhere. But it does not satisfy the conditions at the vertices, so we apply $A(v)$ to the two vertices adjacent to the new edge. We then have
\[\tikzfig{rough_merge_project}\]
which by performing repeated changes of variables yields
\[\tikzfig{rough_merge_project2}\]
Thus the rough merge yields the map $|j\>_L\otimes|k\>_L\mapsto|jk\>_L$, the multiplication of $\mathbb{C} G$, where again the tensor factors are in order from top to bottom.
Similarly, we perform a smooth merge with the states $|j\>_L, |k\>_L$ as
\[\tikzfig{merg2}\]
We introduce a pair of edges connecting the two patches, each in the state $\sum_m m$.
\[\tikzfig{smooth_merge_project}\]
The resultant patch automatically satisfies the conditions relating to $A(v)$, but we must apply $B(p)$ to the freshly created faces to acquire a state in $\hbox{{$\mathcal H$}}_{\rm vac}$, giving
\[\tikzfig{smooth_merge_project2}\]
where the $B(p)$ applications introduced the $\delta$-functions
\[\delta_{e}(bf^{-1}m^{-1}),\quad \delta_{e}(dh^{-1}n^{-1}),\quad \delta_e(dj^{-1}b^{-1}bf^{-1}fkh^{-1}hd^{-1}) = \delta_e(j^{-1}k).\]
In summary, the linear map on logical states is evidently $|j\>_L\otimes |k\>_L \mapsto \delta_{j,k}|j\>_L$, the multiplication of $\mathbb{C}(G)$.
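The merge maps can be sketched in the same way. The following Python fragment (again our own illustration, with $G=S_3$) implements both merges and checks that, up to the omitted scalar factors, a smooth merge undoes a smooth split and a rough merge undoes a rough split.

```python
import itertools

G = sorted(itertools.permutations(range(3)))
mult = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(a.index(i) for i in range(3))

def rough_merge(state2):
    """|j> (x) |k> -> |jk>: multiplication of CG."""
    out = {}
    for (j, k), c in state2.items():
        out[mult(j, k)] = out.get(mult(j, k), 0) + c
    return out

def smooth_merge(state2):
    """|j> (x) |k> -> delta_{j,k} |j>: multiplication of C(G)."""
    out = {}
    for (j, k), c in state2.items():
        if j == k:
            out[j] = out.get(j, 0) + c
    return out

rough_split = lambda s: {(g, mult(inv(g), l)): c for l, c in s.items() for g in G}
smooth_split = lambda s: {(h, h): c for h, c in s.items()}

h = G[4]
# smooth merge undoes a smooth split exactly
assert smooth_merge(smooth_split({h: 1})) == {h: 1}
# rough merge undoes a rough split up to the omitted scalar |G|
assert rough_merge(rough_split({h: 1})) == {h: len(G)}
```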
The units of $\mathbb{C} G$ and $\mathbb{C}(G)$ are given by the states $|e\>_L$ and $|1;0,0\>_L$ respectively. The counits are given by the maps $|g\>_L \mapsto 1$ and $|g\>_L\mapsto \delta_{g,e}$ respectively. The logical antipode $S_L$ is given by applying the antipode to each edge individually, i.e. inverting all group elements. For example:
\[\tikzfig{antipode_1A}\]
This state is now no longer in the original $\hbox{{$\mathcal H$}}_{\rm vac}$, so to compensate we must modify the lattice. We flip all arrows in the lattice -- this is only a conceptual flip, and does not require any physical modification:
\[\tikzfig{antipode_1B}\]
This amounts to exchanging left and right regular representations, and redefining the Hamiltonian accordingly. In the resultant new vacuum space, the state is now $|g^{-1}\>_L = F_\xi^{e,g^{-1}}|e\>_L$, with $\xi$ running from the bottom left corner to bottom right as previously.
\begin{remark}\rm
This trick of redefining the vacuum space is employed in \cite{HFDM} to perform logical Hadamards, although in their case the lattice is rotated by $\pi/2$, and the edges are directionless as the model is restricted to $\mathbb{C}\mathbb{Z}_2$.
\end{remark}
Thus, we have all the ingredients of the Hopf algebras $\mathbb{C} G$ and $\mathbb{C}(G)$ on the same vector space $\hbox{{$\mathcal H$}}_{\rm vac}$. For applications, one would like to know which quantum computations can be performed using these algebras (ignoring the subtlety with nondeterministic projectors). Recall that a quantum computer is called approximately universal if for any target unitary $U$ and desired accuracy ${\epsilon}\in\mathbb{R}$, the computer can perform a unitary $V$ such that $||V-U||\leq{\epsilon}$, i.e. the operator norm error of $V$ from $U$ is no greater than ${\epsilon}$.
We believe that when the computer is equipped with just the states $\{|h\>_L\}_{h\in G}$ and the maps from lattice surgery above then one cannot achieve approximately universal computation \cite{Stef}, but leave the proof to a further paper. If we also have access to all matrix algebra states $|\pi;i,j\>_L$ as defined in Corollary~\ref{cor:matrix_basis}, we do not know whether the model of computation is then universal for some choice of $G$, and we do not know whether these states can be prepared efficiently. In fact, how these states are defined depends on a choice of basis for each irrep, so whether it is universal may depend not only on the choice of $G$ but also choices of basis. This is an interesting question for future work.
\section{Quasi-Hopf algebra structure of $\Xi(R,K)$}\label{sec:quasi}
We now return to our boundary algebra $\Xi$. It is known that $\Xi$ has a great deal more structure, which we give more explicitly in this section than we have seen elsewhere. This structure generalises a well-known bicrossproduct Hopf algebra construction for when a finite group $G$ factorises as $G=RK$ into two subgroups $R,K$. Then each acts on the set of the other to form a {\em matched pair of actions} ${\triangleright},{\triangleleft}$ and we use ${\triangleright}$ to make a cross product algebra $\mathbb{C} K{\triangleright\!\!\!<} \mathbb{C}(R)$ (which has the same form as our algebra $\Xi$ except that we have chosen to flip the tensor factors) and ${\triangleleft}$ to make a cross product coalgebra $\mathbb{C} K{>\!\!\blacktriangleleft} \mathbb{C}(R)$. These fit together to form a bicrossproduct Hopf algebra $\mathbb{C} K{\triangleright\!\blacktriangleleft} \mathbb{C}(R)$. This construction has been used in the Lie group version to construct quantum Poincar\'e groups for quantum spacetimes\cite{Ma:book}.
The more general case, where we are just given a subgroup $K\subseteq G$ and a choice of transversal $R$ with the group identity $e\in R$, was considered in \cite{Be}. As we noted, we still have unique factorisation $G=RK$ but in general $R$ need not be a group. We can still follow the same steps. First of all, unique factorisation entails that $R\cap K=\{e\}$. It also implies maps
\[{\triangleright} : K\times R \rightarrow R,\quad {\triangleleft}: K\times R\rightarrow K,\quad \cdot : R\times R \rightarrow R,\quad \tau: R \times R \rightarrow K\]
defined by
\[xr = (x{\triangleright} r)(x{\triangleleft} r),\quad rs = (r\cdot s) \tau(r,s)\]
for all $x\in K$ and $r,s\in R$, but this time these inherit the properties
\begin{align} (xy) {\triangleright} r &= x {\triangleright} (y {\triangleright} r), \quad e {\triangleright} r = r,\nonumber\\ \label{lax}
x {\triangleright} (r\cdot s)&=(x {\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} s),\quad x {\triangleright} e = e,\end{align}
\begin{align}
(x{\triangleleft} r){\triangleleft} s &= \tau\left(x{\triangleright} r, (x{\triangleleft} r){\triangleright} s\right)^{-1}\left(x{\triangleleft} (r\cdot s)\right)\tau(r,s),\quad
x {\triangleleft} e = x,\nonumber\\ \label{rax}
(xy) {\triangleleft} r &= (x{\triangleleft} (y{\triangleright} r))(y{\triangleleft} r),\quad e{\triangleleft} r = e,\end{align}
\begin{align}
\tau(r, s\cdot t)\tau(s,t)& = \tau\left(r\cdot s,\tau(r,s){\triangleright} t\right)(\tau(r,s){\triangleleft} t),\quad \tau(e,r) = \tau(r,e) = e,\nonumber\\ \label{tax}
r\cdot(s\cdot t) &= (r\cdot s)\cdot(\tau(r,s){\triangleright} t),\quad r\cdot e=e\cdot r=r\end{align}
for all $x,y\in K$ and $r,s,t\in R$. We see from (\ref{lax}) that ${\triangleright}$ is indeed an action (we have been using it in preceding sections) but ${\triangleleft}$ in (\ref{rax}) is an action only up to $\tau$ (termed in \cite{KM2} a `quasiaction'). Both ${\triangleright},{\triangleleft}$ `act' almost by automorphisms, but with a back-reaction by the other, just as for a matched pair of groups. Meanwhile, we see from (\ref{tax}) that $\cdot$ is associative only up to $\tau$, and $\tau$ itself obeys a kind of cocycle condition.
Clearly, $R$ is a subgroup via $\cdot$ if and only if $\tau(r,s)=e$ for all $r,s$, and in this case we already see that $\Xi(R,K)$ is a bicrossproduct Hopf algebra, with the only difference being that we prefer to build it on the flipped tensor factors. More generally, \cite{Be} showed that there is still a natural monoidal category associated to this data but with nontrivial associators. By Tannaka-Krein reconstruction, this corresponds to $\Xi$ being a quasi-bialgebra, which in some cases is a quasi-Hopf algebra\cite{Nat}. Here we will give these latter structures explicitly and in maximum generality compared to the literature (but still needing a restriction on $R$ for the antipode to be in a regular form). We will also show that the obvious $*$-algebra structure makes $\Xi$ a $*$-quasi-Hopf algebra in an appropriate sense under restrictions on $R$. These aspects are new but, more importantly, we give direct proofs at an algebraic level rather than categorical arguments, which we believe are essential for detailed calculations. Related works on similar algebras and coset decompositions include \cite{PS,KM1} in addition to \cite{Be,Nat,KM2}.
\begin{lemma}\cite{Be,Nat,KM2}
$(R,\cdot)$ has the same unique identity $e$ as $G$ and has the left division property, i.e. for all $t, s\in R$, there is a unique solution $r\in R$ to the equation $s\cdot r = t$ (one writes $r = s\backslash t$). In particular, we let $r^R$ denote the unique solution to $r\cdot r^R=e$, which we call a right inverse.\end{lemma}
This means that $(R,\cdot)$ is a left loop (a left quasigroup with identity). The multiplication table for $(R,\cdot)$ has one of each element of $R$ in each row, which is the left division property. In particular, there is one instance of $e$ in each row. One can recover $G$ knowing $(R,\cdot)$, $K$ and the data ${\triangleright},{\triangleleft},\tau$\cite[Prop.3.4]{KM2}. Note that a parallel property of left inverse $(\ )^L$ need not be present.
\begin{definition} We say that $R$ is {\em regular} if $(\ )^R$ is bijective.
\end{definition}
$R$ is regular iff it has both left and right inverses, and this holds iff it satisfies $RK=KR$ by \cite[Prop.~3.5]{KM2}. If there is also right division then we have a loop (a quasigroup with identity) and under further conditions\cite[Prop.~3.6]{KM2} we have $r^L=r^R$ and a 2-sided inverse property quasigroup. The case of regular $R$ is studied in \cite{Nat} but this excludes some interesting choices of $R$ and we do not always assume it. Throughout, we will specify when $R$ is required to be regular for results to hold. Finally, if $R$ obeys the further condition $x{\triangleright}(s\cdot t)=(x{\triangleright} s){\triangleright} t$ in \cite{KM2} then $\Xi$ is a Hopf quasigroup in the sense introduced in \cite{KM1}. This is even more restrictive but will apply to our octonions-related example. Here we just give the choices for our go-to case of $S_3$.
\begin{example}\label{exS3R}\rm $G=S_3$ with $K=\{e,u\}$ has four choices of transversal $R$ meeting our requirement that $e\in R$. Namely
\begin{enumerate}
\item $R=\{e,uv,vu\}$ (our standard choice) {\em is a subgroup} $R=\mathbb{Z}_3$, so it is associative and there is 2-sided division and a 2-sided inverse. We also have $u{\triangleright}(uv)=vu, u{\triangleright} (vu)=uv$ but ${\triangleleft},\tau$ trivial.
\item $R=\{e,w,v\}$ which is {\em not a subgroup} and indeed $\tau(v,w)=\tau(w,v)=u$ (and all others are necessarily $e$). There is an action $u{\triangleright} v=w, u{\triangleright} w=v$ but ${\triangleleft}$ is still trivial. For example,
\begin{align*} vw&=wu \Rightarrow v\cdot w=w,\ \tau(v,w)=u;\quad wv=vu \Rightarrow w\cdot v=v,\ \tau(w,v)=u\\
uv&=wu \Rightarrow u{\triangleright} v=w,\ u{\triangleleft} v=u;\quad uw=vu \Rightarrow u{\triangleright} w=v,\ u{\triangleleft} w=u. \end{align*}
This has left division/right inverses as it must but {\em not right division} as $e\cdot w=v\cdot w=w$ and $e\cdot v=w\cdot v=v$. We also have $v\cdot v=w\cdot w=e$ and $(\ )^R$ is bijective so this {\em is regular}.
\item $R=\{e,uv, v\}$ which is {\em not a subgroup} and $\tau,{\triangleright},{\triangleleft}$ are all nontrivial with
\begin{align*} \tau(uv,uv)&=\tau(v,uv)=\tau(uv,v)=u,\quad \tau(v,v)=e,\\
v\cdot v&=e,\quad v\cdot uv=uv,\quad uv\cdot v=e,\quad uv\cdot uv=v,\\
u{\triangleright} v&=uv,\quad u{\triangleright} (uv)=v,\quad u{\triangleleft} v=e,\quad u{\triangleleft} uv=e\end{align*}
and all other cases determined from the properties of $e$. Here $v^R=v$ and $(uv)^R=v$ so this is {\em not regular}.
\item $R=\{e,w,vu\}$ which is analogous to the preceding case, so {\em not a subgroup}, $\tau,{\triangleright},{\triangleleft}$ all nontrivial and {\em not regular}.
\end{enumerate}
\end{example}
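The four cases above are small enough to verify by machine. The following Python sketch (our own check; the permutation encoding of $S_3$ is an assumption of the example) computes ${\triangleright},{\triangleleft},\cdot,\tau$ from the factorisation $g=rk$ and confirms the values stated in cases (2) and (3), including regularity and its failure.

```python
import itertools

# S3 with u, v, w the transpositions (our encoding); K = {e,u}
e, u, v, w = (0,1,2), (1,0,2), (0,2,1), (2,1,0)
mult = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(a.index(i) for i in range(3))
uv, vu = mult(u, v), mult(v, u)
K = [e, u]

def make(R):
    """Return (tri, tl, dot, tau) for transversal R via g = rk factorisation."""
    def factor(g):
        [(r, k)] = [(r, k) for r in R for k in K if mult(r, k) == g]
        return r, k
    tri = lambda x, r: factor(mult(x, r))[0]   # x |> r
    tl  = lambda x, r: factor(mult(x, r))[1]   # x <| r
    dot = lambda r, s: factor(mult(r, s))[0]   # r . s
    tau = lambda r, s: factor(mult(r, s))[1]
    return tri, tl, dot, tau

# case 2: R = {e,w,v}, not a subgroup but regular
R = [e, w, v]
tri, tl, dot, tau = make(R)
assert tau(v, w) == u and tau(w, v) == u
assert tri(u, v) == w and tri(u, w) == v and tl(u, v) == u   # <| trivial
rinv = {r: next(s for s in R if dot(r, s) == e) for r in R}  # right inverses
assert sorted(rinv.values()) == sorted(R)                    # bijective: regular
# identity r.(s.t) = (r.s).(tau(r,s) |> t) from the cocycle axioms
assert all(dot(r, dot(s, t)) == dot(dot(r, s), tri(tau(r, s), t))
           for r in R for s in R for t in R)

# case 3: R = {e,uv,v}, not regular: v^R = (uv)^R = v
R2 = [e, uv, v]
tri2, tl2, dot2, tau2 = make(R2)
assert dot2(v, v) == e and dot2(uv, v) == e                  # ( )^R not injective
assert tri2(u, v) == uv and tl2(u, v) == e
```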
We will also need the following useful lemma in some of our proofs.
\begin{lemma}\label{leminv}\cite{KM2} For any transversal $R$ with $e\in R$, we have
\begin{enumerate}
\item $(x{\triangleleft} r)^{-1}=x^{-1}{\triangleleft}(x{\triangleright} r)$.
\item $(x{\triangleright} r)^R=(x{\triangleleft} r){\triangleright} r^R$.
\item $\tau(r,r^R)^{-1}{\triangleleft} r=\tau(r^R,r^{RR})^{-1}$.
\item $\tau(r,r^R)^{-1}{\triangleright} r=r^R{}^R$.
\end{enumerate}
for all $x\in K, r\in R$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad } The first two items are elementary from the matched pair axioms. For (1), we use $e=(x^{-1}x){\triangleleft} r=(x^{-1}{\triangleleft}(x{\triangleright} r))(x{\triangleleft} r)$ and for (2) $e=x{\triangleright}(r\cdot r^R)=(x{\triangleright} r)\cdot((x{\triangleleft} r){\triangleright} r^R)$. The other two items are a left-right reversal of \cite[Lem.~3.2]{KM2} but given here for completeness. For (3), using (4), \begin{align*} e&=(\tau(r,r^R)\tau(r,r^R)^{-1}){\triangleleft} r=(\tau(r,r^R){\triangleleft} (\tau(r,r^R)^{-1}{\triangleright} r))(\tau(r,r^R)^{-1}{\triangleleft} r)\\
&=(\tau(r,r^R){\triangleleft} r^{RR})(\tau(r,r^R)^{-1}{\triangleleft} r)\end{align*}
which we combine with
\[ \tau(r^R,r^{RR})=\tau(r\cdot r^R,r^{RR})\tau(r^R,r^{RR})=\tau(r\cdot r^R, \tau(r,r^R){\triangleright} r^{RR})(\tau(r,r^R){\triangleleft} r^{RR})=\tau(r,r^R){\triangleleft} r^{RR}\]
by the cocycle property. For (4), since ${\triangleright}$ is an action, the claim is equivalent to $\tau(r,r^R){\triangleright} r^R{}^R=r$, which holds as $\tau(r,r^R){\triangleright} r^R{}^R=(r\cdot r^R)\cdot(\tau(r,r^R){\triangleright} r^R{}^R)=r\cdot (r^R\cdot r^R{}^R)=r$
by one of the matched pair conditions. \endproof
Using this lemma, it is not hard to prove, cf.\ \cite[Prop.~3.3]{KM2}, that
\begin{equation}\label{leftdiv}s\backslash t=s^R\cdot(\tau(s,s^R)^{-1}{\triangleright} t);\quad s\cdot(s\backslash t)=s\backslash(s\cdot t)=t,\end{equation}
which can also be useful in calculations.
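All four identities of Lemma~\ref{leminv}, and (\ref{leftdiv}), can be confirmed numerically for the $S_3$ transversals above. The sketch below is our own check, reusing the same permutation encoding, and runs over both a regular and a non-regular transversal.

```python
import itertools

# S3 setup (our encoding); verify the Lemma over both non-subgroup
# transversals for K = {e,u}
e, u, v, w = (0,1,2), (1,0,2), (0,2,1), (2,1,0)
mult = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(a.index(i) for i in range(3))
uv = mult(u, v)
K = [e, u]

for R in ([e, w, v], [e, uv, v]):
    def factor(g):
        [(r, k)] = [(r, k) for r in R for k in K if mult(r, k) == g]
        return r, k
    tri = lambda x, r: factor(mult(x, r))[0]   # x |> r
    tl  = lambda x, r: factor(mult(x, r))[1]   # x <| r
    dot = lambda r, s: factor(mult(r, s))[0]   # r . s
    tau = lambda r, s: factor(mult(r, s))[1]
    rR  = lambda r: next(s for s in R if dot(r, s) == e)       # right inverse
    bs  = lambda s, t: dot(rR(s), tri(inv(tau(s, rR(s))), t))  # s \ t
    for x in K:
        for r in R:
            assert inv(tl(x, r)) == tl(inv(x), tri(x, r))           # (1)
            assert rR(tri(x, r)) == tri(tl(x, r), rR(r))            # (2)
    for r in R:
        assert tl(inv(tau(r, rR(r))), r) == inv(tau(rR(r), rR(rR(r))))  # (3)
        assert tri(inv(tau(r, rR(r))), r) == rR(rR(r))              # (4)
    for s in R:
        for t in R:
            assert dot(s, bs(s, t)) == t and bs(s, dot(s, t)) == t  # (leftdiv)
```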
\subsection{$\Xi(R,K)$ as a quasi-bialgebra}
We recall that a quasi-bialgebra is a unital algebra $H$, a coproduct $\Delta:H\to H\mathop{{\otimes}} H$ which is an algebra map but is no longer required to be coassociative, and ${\epsilon}:H\to \mathbb{C}$ a counit for $\Delta$ in the usual sense $(\mathrm{id}\mathop{{\otimes}}{\epsilon})\Delta=({\epsilon}\mathop{{\otimes}}\mathrm{id})\Delta=\mathrm{id}$. Instead, we have a weaker form of coassociativity\cite{Dri,Ma:book}
\[ (\mathrm{id}\mathop{{\otimes}}\Delta)\Delta=\phi((\Delta\mathop{{\otimes}}\mathrm{id})\Delta(\ ))\phi^{-1}\]
for an invertible element $\phi\in H^{\mathop{{\otimes}} 3}$ obeying the 3-cocycle identity
\[ (1\mathop{{\otimes}}\phi)((\mathrm{id}\mathop{{\otimes}}\Delta\mathop{{\otimes}}\mathrm{id})\phi)(\phi\mathop{{\otimes}} 1)=((\mathrm{id}\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\Delta)\phi)(\Delta\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\mathrm{id})\phi,\quad (\mathrm{id}\mathop{{\otimes}}{\epsilon}\mathop{{\otimes}}\mathrm{id})\phi=1\mathop{{\otimes}} 1\]
(it follows that ${\epsilon}$ in the other positions also gives $1\mathop{{\otimes}} 1$).
In our case, we already know that $\Xi(R,K)$ is a unital algebra.
\begin{lemma}\label{Xibialg} $\Xi(R,K)$ is a quasi-bialgebra with
\[ \Delta x=\sum_{s\in R}x\delta_s \mathop{{\otimes}} x{\triangleleft} s, \quad \Delta \delta_r = \sum_{s,t\in R} \delta_{s\cdot t,r}\delta_{s}\otimes \delta_{t},\quad {\epsilon} x=1,\quad {\epsilon} \delta_r=\delta_{r,e}\]
for all $x\in K, r\in R$, and
\[ \phi=\sum_{r,s\in R} \delta_r \otimes \delta_s \otimes \tau(r,s)^{-1},\quad \phi^{-1} = \sum_{r,s\in R} \delta_r\otimes \delta_s \otimes \tau(r,s).\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad }
This follows by reconstruction arguments, but it is useful to check directly,
\begin{align*}
(\Delta x)(\Delta y)&=\sum_{s,r}(x\delta_s\mathop{{\otimes}} x{\triangleleft} s)(y\delta_r\mathop{{\otimes}} y{\triangleleft} r)=\sum_{s,r}x\delta_sy\delta_r\mathop{{\otimes}} ( x{\triangleleft} s)( y{\triangleleft} r)\\
&=\sum_{r,s}xy\delta_{y^{-1}{\triangleright} s}\delta_r\mathop{{\otimes}} (x{\triangleleft} s)(y{\triangleleft} r)=\sum_r xy \delta_r\mathop{{\otimes}} (x{\triangleleft}(y{\triangleright} r))(y{\triangleleft} r)=\Delta(xy)
\end{align*}
as $s=y{\triangleright} r$ and using the formula for $(xy){\triangleleft} r$ at the end. Also,
\begin{align*}
\Delta(\delta_{x{\triangleright} s}x)&=(\Delta\delta_{x{\triangleright} s})(\Delta x)=\sum_{r, p.t=x{\triangleright} s}\delta _p x\delta_r\mathop{{\otimes}} \delta_t x{\triangleleft} r\\
&=\sum_{r, p.t=x{\triangleright} s}x\delta_{x^{-1}{\triangleright} p}\delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{(x{\triangleleft} r)^{-1}{\triangleright} t}=\sum_{(x{\triangleright} r).t=x{\triangleright} s}x \delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{(x{\triangleleft} r)^{-1}{\triangleright} t}\\
&=\sum_{(x{\triangleright} r).((x{\triangleleft} r){\triangleright} t')=x{\triangleright} s}x \delta_r\mathop{{\otimes}} x{\triangleleft} r\delta_{t'}=\sum_{r\cdot t'=s}x\delta_r\mathop{{\otimes}} (x{\triangleleft} r)\delta_{t'}=(\Delta x)(\Delta \delta _s)=\Delta(x\delta_s)
\end{align*}
using the formula for $x{\triangleright}(r\cdot t')$. This says that the coproducts stated are compatible with the algebra cross relations. Similarly, one can check that
\begin{align*}
(\sum_{p,r}\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}} &\tau(p,r))((\mathrm{id}\mathop{{\otimes}}\Delta )\Delta x)=\sum_{p,r,s,t}(\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}} \tau(p,r))(x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}} (x{\triangleleft} s){\triangleleft} t)\\
&=\sum_{p,r,s,t}\delta_px\delta_s\mathop{{\otimes}}\delta_r(x{\triangleleft} s)\delta_t\mathop{{\otimes}} \tau(p,r)((x{\triangleleft} s){\triangleleft} t)\\
&=\sum_{s,t}x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}\tau(x{\triangleright} s,(x{\triangleleft} s){\triangleright} t)((x{\triangleleft} s){\triangleleft} t)\\
&=\sum_{s,t}x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}}(x{\triangleleft}(s\cdot t))\tau(s,t)\\
&=\sum_{p,r,s,t}(x\delta_s\mathop{{\otimes}} (x{\triangleleft} s)\delta_t\mathop{{\otimes}} x{\triangleleft}(s\cdot t))(\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}}\tau(p,r))\\
&=( (\Delta\mathop{{\otimes}}\mathrm{id})\Delta x ) (\sum_{p,r}\delta_p\mathop{{\otimes}}\delta_r\mathop{{\otimes}}\tau(p,r))
\end{align*}
as $p=x{\triangleright} s$ and $r=(x{\triangleleft} s){\triangleright} t$ and using the formula for $(x{\triangleleft} s){\triangleleft} t$. For the remaining cocycle relations, we have
\begin{align*}
(\mathrm{id}\mathop{{\otimes}}{\epsilon}\mathop{{\otimes}}\mathrm{id})\phi = \sum_{r,s}\delta_{s,e}\delta_r\mathop{{\otimes}}\tau(r,s)^{-1} = \sum_r\delta_r\mathop{{\otimes}} 1 = 1\mathop{{\otimes}} 1
\end{align*}
and
\[ (1\mathop{{\otimes}}\phi)((\mathrm{id}\mathop{{\otimes}}\Delta\mathop{{\otimes}}\mathrm{id})\phi)(\phi\mathop{{\otimes}} 1)=\sum_{r,s,t}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}} \delta_t\tau(r,s)^{-1}\mathop{{\otimes}}\tau(s,t)^{-1}\tau(r,s\cdot t)\]
after multiplying out $\delta$-functions and renaming variables. Using the value of $\Delta\tau(r,s)^{-1}$ and similarly multiplying out, we obtain on the other side
\begin{align*} ((\mathrm{id}\mathop{{\otimes}}&\mathrm{id}\mathop{{\otimes}}\Delta)\phi)(\Delta\mathop{{\otimes}}\mathrm{id}\mathop{{\otimes}}\mathrm{id})\phi=\sum_{r,s,t}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\tau(r,s)^{-1}\delta_t\mathop{{\otimes}}(\tau(r,s)^{-1}{\triangleleft} t)\tau(r\cdot s,t)^{-1}\\
&=\sum_{r,s,t'}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\delta_{t'}\tau(r,s)^{-1}\mathop{{\otimes}}(\tau(r,s)^{-1}{\triangleleft} (\tau(r,s){\triangleright} t'))\tau(r\cdot s,\tau(r,s){\triangleright} t')^{-1}\\
&=\sum_{r,s,t'}\delta_r\mathop{{\otimes}}\delta_s\mathop{{\otimes}}\delta_{t'}\tau(r,s)^{-1}\mathop{{\otimes}}(\tau(r,s){\triangleleft} t')^{-1}\tau(r\cdot s,\tau(r,s){\triangleright} t')^{-1},
\end{align*}
where we change summation to $t'=\tau(r,s){\triangleright} t$ then use Lemma~\ref{leminv}. Renaming $t'$ to $t$, the two sides are equal in view of the cocycle identity for $\tau$. Thus, we have a quasi-bialgebra with $\phi$ as stated.
\endproof
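Since $R$ and $K$ are small in our running example, Lemma~\ref{Xibialg} can also be verified by direct computation. The Python sketch below (our own check, same $S_3$ encoding as before) realises $\Xi(R,K)$ for $K=\{e,u\}$, $R=\{e,w,v\}$ on the basis $\{\delta_r\mathop{{\otimes}} x\}$ and confirms that $\Delta$ is an algebra map and that coassociativity holds up to conjugation by $\phi$.

```python
import itertools

# Xi(R,K) for G=S3, K={e,u}, R={e,w,v} (our encoding); basis delta_r (x) x ~ (r,x)
e, u, v, w = (0,1,2), (1,0,2), (0,2,1), (2,1,0)
mult = lambda a, b: tuple(a[b[i]] for i in range(3))
inv = lambda a: tuple(a.index(i) for i in range(3))
K, R = [e, u], [e, w, v]

def factor(g):
    [(r, k)] = [(r, k) for r in R for k in K if mult(r, k) == g]
    return r, k
tri = lambda x, r: factor(mult(x, r))[0]
tl  = lambda x, r: factor(mult(x, r))[1]
dot = lambda r, s: factor(mult(r, s))[0]
tau = lambda r, s: factor(mult(r, s))[1]

def mul(A, B):
    """Product on tensor powers of Xi; keys are tuples of (r,x) basis pairs."""
    out = {}
    for ka, ca in A.items():
        for kb, cb in B.items():
            key = []
            for (r, x), (s, y) in zip(ka, kb):
                if r != tri(x, s):          # (d_r x)(d_s y) = d_{r,x|>s} d_r xy
                    break
                key.append((r, mult(x, y)))
            else:
                key = tuple(key)
                out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c}

def D(A, pos=0):
    """Delta on factor pos: d_r x -> sum_{s.t=r} d_s x (x) d_t (x^{-1}<|s)^{-1}."""
    out = {}
    for key, c in A.items():
        r, x = key[pos]
        for s in R:
            for t in R:
                if dot(s, t) == r:
                    k = key[:pos] + ((s, x), (t, inv(tl(inv(x), s)))) + key[pos+1:]
                    out[k] = out.get(k, 0) + c
    return out

basis = [((r, x),) for r in R for x in K]
unit = lambda n: {k: 1 for k in itertools.product(*[[(r, e) for r in R]]*n)}
phi     = {((r, e), (s, e), (t, inv(tau(r, s)))): 1 for r in R for s in R for t in R}
phi_inv = {((r, e), (s, e), (t, tau(r, s))): 1 for r in R for s in R for t in R}

assert mul(phi, phi_inv) == unit(3)
for a in basis:
    for b in basis:   # Delta is an algebra map
        assert D(mul({a: 1}, {b: 1})) == mul(D({a: 1}), D({b: 1}))
for z in basis:       # (id (x) D)D = phi ((D (x) id)D) phi^{-1}
    assert mul(D(D({z: 1}), pos=1), phi) == mul(phi, D(D({z: 1}), pos=0))
```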
We can also write the coproduct (and the other structures) more explicitly.
\begin{remark}\rm (1) If we want to write the coproduct on $\Xi$ explicitly as a vector space, the above becomes
\[ \Delta(\delta_r\mathop{{\otimes}} x)=\sum_{s\cdot t=r}\delta_s\mathop{{\otimes}} x\mathop{{\otimes}}\delta_t\mathop{{\otimes}} (x^{-1}{\triangleleft} s)^{-1},\quad {\epsilon}(\delta_r\mathop{{\otimes}} x)=\delta_{r,e}\]
which is ugly due to our decision to build it on $\mathbb{C}(R)\mathop{{\otimes}}\mathbb{C} K$. (2) If we built it on the other order then we could have $\Xi=\mathbb{C} K{\triangleright\!\!\!<} \mathbb{C}(R)$ as an algebra, where we have a right action
\[ (f{\triangleright} x)(r)= f(x{\triangleright} r);\quad \delta_r{\triangleright} x=\delta_{x^{-1}{\triangleright} r}\]
on $f\in \mathbb{C}(R)$. Now make a right handed cross product
\[ (x\mathop{{\otimes}} \delta_r)(y\mathop{{\otimes}} \delta_s)= xy\mathop{{\otimes}} (\delta_r{\triangleright} y)\delta_s=xy\mathop{{\otimes}}\delta_s\delta_{r,y{\triangleright} s}\]
which has cross relations $\delta_r y=y\delta_{y^{-1}{\triangleright} r}$. These are the same relations as before. So this is the same algebra, just we prioritise a basis $\{x\delta_r\}$ instead of the other way around. This time, we have
\[ \Delta (x\mathop{{\otimes}}\delta_r)=\sum_{s\cdot t=r} x\mathop{{\otimes}}\delta_s\mathop{{\otimes}} x{\triangleleft} s\mathop{{\otimes}}\delta_t.\]
We do not do this in order to be compatible with the most common form of $D(G)$ as $\mathbb{C}(G){>\!\!\!\triangleleft} \mathbb{C} G$ as in \cite{CowMa}.
\end{remark}
\subsection{$\Xi(R,K)$ as a quasi-Hopf algebra}
A quasi-bialgebra is a quasi-Hopf algebra if there are elements $\alpha,\beta\in H$ and an antialgebra map $S:H\to H$ such that\cite{Dri,Ma:book}
\[(S \xi_1)\alpha\xi_2={\epsilon}(\xi)\alpha,\quad \xi_1\beta S\xi_2={\epsilon}(\xi)\beta,\quad \phi^1\beta(S\phi^2)\alpha\phi^3=1,\quad (S\phi^{-1})\alpha\phi^{-2}\beta S\phi^{-3}=1\]
where $\Delta\xi=\xi_1\mathop{{\otimes}}\xi_2$, $\phi=\phi^1\mathop{{\otimes}}\phi^2\mathop{{\otimes}}\phi^3$ with inverse $\phi^{-1}\mathop{{\otimes}}\phi^{-2}\mathop{{\otimes}}\phi^{-3}$ is a compact notation (sums of such terms to be understood). It is usual to assume $S$ is bijective but we do not require this. The $\alpha,\beta, S$ are not unique and can be changed to $S'=U(S\ ) U^{-1}, \alpha'=U\alpha, \beta'=\beta U^{-1}$ for any invertible $U$. In particular, if $\alpha$ is invertible then we can transform to a standard form replacing it by $1$. For the purposes of this paper, we therefore call the case of $\alpha$ invertible a (left) {\em regular antipode}.
\begin{proposition}\label{standardS} If $(\ )^R$ is bijective, $\Xi(R,K)$ is a quasi-Hopf algebra with regular antipode
\[ S(\delta_r\mathop{{\otimes}} x)=\delta_{(x^{-1}{\triangleright} r)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} r,\quad \alpha=\sum_{r\in R}\delta_r\mathop{{\otimes}} 1,\quad \beta=\sum_r\delta_r\mathop{{\otimes}} \tau(r,r^R).\]
Equivalently in subalgebra terms,
\[ S\delta_r=\delta_{r^R},\quad Sx=\sum_{s\in R}(x^{-1}{\triangleleft} s)\delta_{s^R} ,\quad \alpha=1,\quad \beta=\sum_{r\in R}\delta_r\tau(r,r^R).\]
\end{proposition}
{\noindent {\bfseries Proof:}\quad }
For the axioms involving $\phi$, we have
\begin{align*}\phi^1\beta&(S \phi^2)\alpha\phi^3=\sum_{s,t,r}(\delta_s\mathop{{\otimes}} 1)(\delta_r\mathop{{\otimes}} \tau(r,r^R))(\delta_{t^R}\mathop{{\otimes}}\tau(s,t)^{-1})\\
&=\sum_{s,t}(\delta_s\mathop{{\otimes}}\tau(s,s^R))(\delta_{t^R}\mathop{{\otimes}} \tau(s,t)^{-1})=\sum_{s,t}\delta_s\delta_{s,\tau(s,s^R){\triangleright} t^R}\mathop{{\otimes}}\tau(s,s^R)\tau(s,t)^{-1}\\
&=\sum_{s^R.t^R=e}\delta_s\mathop{{\otimes}} \tau(s,s^R)\tau(s,t)^{-1}=1,
\end{align*}
where we used $s.(s^R.t^R)=(s.s^R).\tau(s,s^R){\triangleright} t^R=\tau(s,s^R){\triangleright} t^R$. So $s=\tau(s,s^R){\triangleright} t^R$ holds iff $s^R.t^R=e$ by left cancellation. In the sum we can take $t=s^R$, which contributes $\delta_s\mathop{{\otimes}} e$ as $s^R.(s^R)^R=e$; there is a unique element $t^R$ which solves this, and hence a unique $t$, since $(\ )^R$ is injective by assumption. Similarly,
\begin{align*}
S(\phi^{-1})\alpha&\phi^{-2}\beta S(\phi^{-3}) = \sum_{s,t,u,v}(\delta_{s^R}\otimes 1)(\delta_t\otimes 1)(\delta_u\otimes\tau(u,u^R))(\delta_{(\tau(s,t)^{-1}{\triangleright} v)^R}\otimes (\tau(s,t)^{-1}{\triangleleft} v))\\
&= \sum_{s,v}(\delta_{s^R}\otimes\tau(s^R,s^R{}^R))(\delta_{(\tau(s,s^R)^{-1}{\triangleright} v)^R}\otimes \tau(s,s^R)^{-1}{\triangleleft} v).
\end{align*}
Upon multiplication, we will have a $\delta$-function dictating that
\[s^R = \tau(s^R,s^R{}^R){\triangleright} (\tau(s,s^R)^{-1}{\triangleright} v)^R,\]
so we can use the fact that
\begin{align*}s\cdot s^R = e &= s\cdot(\tau(s^R,s^R{}^R){\triangleright} (\tau(s,s^R)^{-1}{\triangleright} v)^R)\\ &= s\cdot(s^R\cdot(s^R{}^R\cdot (\tau(s,s^R)^{-1}{\triangleright} v)^R))\\
&= \tau(s,s^R){\triangleright} (s^R{}^R\cdot(\tau(s,s^R)^{-1}{\triangleright} v)^R),
\end{align*}
where we use similar identities to before. Therefore $s^R{}^R\cdot (\tau(s,s^R)^{-1}{\triangleright} v)^R = e$, so $(\tau(s,s^R)^{-1}{\triangleright} v)^R = s^R{}^R{}^R$. When $(\ )^R$ is injective, this gives us $v = \tau(s,s^R){\triangleright} s^R{}^R$. Returning to our original calculation we have that our previous expression is
\begin{align*}
\cdots &= \sum_s \delta_{s^R}\otimes \tau(s^R,s^R{}^R)(\tau(s,s^R)^{-1}{\triangleleft} (\tau(s,s^R){\triangleright} s^R{}^R))\\
&= \sum_s \delta_{s^R}\otimes \tau(s^R,s^R{}^R)(\tau(s,s^R){\triangleleft} s^R{}^R)^{-1} = \sum_s \delta_{s^R}\otimes 1 = 1,
\end{align*}
using Lemma~\ref{leminv} and that $(\ )^R$ is surjective.
We now prove the antipode axiom involving $\alpha$,
\begin{align*}
(S(\delta_s \otimes& x)_1)(\delta_s \otimes x)_2 = \sum_{r\cdot t = s}(\delta_{(x^{-1}{\triangleright} r)^R}\otimes (x^{-1}{\triangleleft} r))(\delta_t\otimes (x^{-1}{\triangleleft} r)^{-1})\\
&= \sum_{r\cdot t = s}\delta_{(x^{-1}{\triangleright} r)^R, (x^{-1}{\triangleleft} r){\triangleright} t}\delta_{(x^{-1}{\triangleright} r)^R}\otimes 1 = \delta_{e,s}\sum_r \delta_{(x^{-1}{\triangleright} r)^R}\otimes 1 = {\epsilon}(\delta_s\otimes x)1.
\end{align*}
The condition from the $\delta$-functions is
\[ (x^{-1}{\triangleright} r)^R=(x^{-1}{\triangleleft} r){\triangleright} t\]
which by uniqueness of right inverses holds iff
\[ e=(x^{-1}{\triangleright} r)\cdot (x^{-1}{\triangleleft} r){\triangleright} t=x^{-1}{\triangleright}(r\cdot t)\]
which is iff $r.t=e$, so $t=r^R$. As we also need $r.t=s$, this becomes $\delta_{s,e}$ as required.
We now prove the axiom involving $\beta$, starting with
\begin{align*}(\delta_s\otimes& x)_1 \beta S((\delta_s\otimes x)_2) = \sum_{r\cdot t=s, p}(\delta_r\mathop{{\otimes}} x)(\delta_p\mathop{{\otimes}}\tau(p,p^R))S(\delta_t\mathop{{\otimes}} (x^{-1}{\triangleleft} r)^{-1})\\
&=\sum_{r\cdot t=s, p}(\delta_r\delta_{r,x{\triangleright} p}\mathop{{\otimes}} x\tau(p,p^R))(\delta_{((x^{-1}{\triangleleft} r){\triangleright} t)^R}\mathop{{\otimes}} (x^{-1}{\triangleleft} r){\triangleleft} t)\\
&=\sum_{r\cdot t=s}(\delta_r\mathop{{\otimes}} x\tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R))(\delta_{((x^{-1}{\triangleleft} r){\triangleright} t)^R}\mathop{{\otimes}} (x^{-1}{\triangleleft} r){\triangleleft} t).
\end{align*}
When we multiply this out, we will need from the product of $\delta$-functions that
\[ \tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)^{-1}{\triangleright} (x^{-1}{\triangleright} r)=((x^{-1}{\triangleleft} r){\triangleright} t)^R,\]
but note that $\tau(q,q{}^R)^{-1}{\triangleright} q=q^R{}^R$ from Lemma~\ref{leminv}. So the condition from the $\delta$-functions is
\[ (x^{-1}{\triangleright} r)^R{}^R=((x^{-1}{\triangleleft} r){\triangleright} t)^R,\]
so
\[ (x^{-1}{\triangleright} r)^R=(x^{-1}{\triangleleft} r){\triangleright} t\]
when $(\ )^R$ is injective. By uniqueness of right inverses, this holds iff
\[ e=(x^{-1}{\triangleright} r)\cdot ((x^{-1}{\triangleleft} r){\triangleright} t)=x^{-1}{\triangleright}(r\cdot t),\]
where the last equality is from the matched pair conditions. This holds iff $r\cdot t=e$, that is, $t=r^R$. This also means in the sum that we need $s=e$.
Hence, when we multiply out our expression so far, we have
\[\cdots=\delta_{s,e}\sum_r\delta_r\mathop{{\otimes}} x\tau(x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)(x^{-1}{\triangleleft} r){\triangleleft} r^R=\delta_{s,e}\sum_r\delta_r\mathop{{\otimes}}\tau(r,r^R)=\delta_{s,e}\beta,\]
as required, where we used
\[ x\tau( x^{-1}{\triangleright} r,(x^{-1}{\triangleright} r)^R)(x^{-1}{\triangleleft} r){\triangleleft} r^R=\tau(r,r^R)\]
by the matched pair conditions. The subalgebra form of $Sx$ is the same using the commutation relations and Lemma~\ref{leminv} to reorder.
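As an independent sanity check of this identity (not needed for the proof), one can verify it numerically for $K=S_2\subset G=S_3$ with the transversal $R=\{e,v,w\}$ treated in Example~\ref{exS3quasi} below. The following Python sketch is an illustration only: it encodes permutations of $\{0,1,2\}$ as tuples with composition $(pq)(i)=p(q(i))$ and extracts ${\triangleright},{\triangleleft},\cdot,\tau$ from the unique factorisation $g=rk$ in $G=RK$.

```python
# Verify x τ(x⁻¹▷r, (x⁻¹▷r)^R) ((x⁻¹◁r) ◁ r^R) = τ(r, r^R) for all x in K,
# r in R, in the case K = S_2 ⊂ G = S_3 with transversal R = {e, v, w}.
from itertools import product

def mul(p, q):                 # composition of permutations: (p∘q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    out = [0, 0, 0]
    for i, pi in enumerate(p):
        out[pi] = i
    return tuple(out)

e, u = (0, 1, 2), (1, 0, 2)    # K = {e, u}, u a transposition
v, w = (2, 1, 0), (0, 2, 1)    # the other two transpositions
K, R = [e, u], [e, v, w]

def factor(g):                 # unique factorisation g = r k, r ∈ R, k ∈ K
    for r, k in product(R, K):
        if mul(r, k) == g:
            return r, k

tri = {(x, r): factor(mul(x, r))[0] for x in K for r in R}   # x ▷ r ∈ R
lri = {(x, r): factor(mul(x, r))[1] for x in K for r in R}   # x ◁ r ∈ K
dot = {(r, s): factor(mul(r, s))[0] for r in R for s in R}   # r · s ∈ R
tau = {(r, s): factor(mul(r, s))[1] for r in R for s in R}   # τ(r, s) ∈ K
rR = {r: next(s for s in R if dot[r, s] == e) for r in R}    # right inverse r^R

for x, r in product(K, R):
    xi = inv(x)
    p, k = tri[xi, r], lri[xi, r]          # x⁻¹ ▷ r and x⁻¹ ◁ r
    lhs = mul(mul(x, tau[p, rR[p]]), lri[k, rR[r]])
    assert lhs == tau[r, rR[r]]
```

Here $(\ )^R$ happens to be the identity on this $R$, and the check runs over all six pairs $(x,r)$; the same script adapts easily to other small examples of $G=RK$.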
It remains to check that
\begin{align*}S(\delta_s&\mathop{{\otimes}} y)S(\delta_r\mathop{{\otimes}} x)=(\delta_{(y^{-1}{\triangleright} s)^R}\mathop{{\otimes}} y^{-1}{\triangleleft} s)(\delta_{(x^{-1}{\triangleright} r)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} r)\\
&=\delta_{r,x{\triangleright} s}\delta_{(y^{-1}{\triangleright} s)^R}\mathop{{\otimes}} (y^{-1}{\triangleleft} s)(x^{-1}{\triangleleft} r)=\delta_{r,x{\triangleright} s}\delta_{(y^{-1}x^{-1}{\triangleright} r)^R}\mathop{{\otimes}}( y^{-1}{\triangleleft}(x^{-1}{\triangleright} r))(x^{-1}{\triangleleft} r)\\
&=S(\delta_r\delta_{r,x{\triangleright} s}\mathop{{\otimes}} xy)=S((\delta_r\mathop{{\otimes}} x)(\delta_s\mathop{{\otimes}} y)),
\end{align*}
where the product of $\delta$-functions requires $(y^{-1}{\triangleright} s)^R=( y^{-1}{\triangleleft} s){\triangleright} (x^{-1}{\triangleright} r)^R$, which is equivalent to $s^R=(x^{-1}{\triangleright} r)^R$ using Lemma~\ref{leminv}. This imposes $\delta_{r,x{\triangleright} s}$. We then replace $s=x^{-1}{\triangleright} r$ and recognise the answer using the matched pair identities.
\endproof
\subsection{$\Xi(R,K)$ as a $*$-quasi-Hopf algebra}
The notion of a $*$-quasi-Hopf algebra $H$ is not part of Drinfeld's theory, but a natural approach is to require further structure so
as to make the monoidal category of modules a bar category in the sense of \cite{BegMa:bar}. If $H$ is at least a quasi-bialgebra, the additional structure we need, fixing a typo in \cite[Def.~3.16]{BegMa:bar}, consists of the first three of:
\begin{enumerate}\item An antilinear algebra map $\theta:H\to H$.
\item An invertible element $\gamma\in H$ such that $\theta(\gamma)=\gamma$ and $\theta^2=\gamma(\ )\gamma^{-1}$.
\item An invertible element $\hbox{{$\mathcal G$}}\in H\mathop{{\otimes}} H$ such that
\begin{equation}\label{*GDelta}\Delta\theta =\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op}(\ ))\hbox{{$\mathcal G$}},\quad ({\epsilon}\mathop{{\otimes}}\mathrm{id})(\hbox{{$\mathcal G$}})=(\mathrm{id}\mathop{{\otimes}}{\epsilon})(\hbox{{$\mathcal G$}})=1,\end{equation}
\begin{equation}\label{*Gphi} (\theta\mathop{{\otimes}}\theta\mathop{{\otimes}}\theta)(\phi_{321})(1\mathop{{\otimes}}\hbox{{$\mathcal G$}})((\mathrm{id}\mathop{{\otimes}}\Delta)\hbox{{$\mathcal G$}})\phi=(\hbox{{$\mathcal G$}}\mathop{{\otimes}} 1)((\Delta\mathop{{\otimes}}\mathrm{id})\hbox{{$\mathcal G$}}).\end{equation}
\item We say the $*$-quasi bialgebra is strong if
\begin{equation}\label{*Gstrong} (\gamma\mathop{{\otimes}}\gamma)\Delta\gamma^{-1}=((\theta\mathop{{\otimes}}\theta)(\hbox{{$\mathcal G$}}_{21}))\hbox{{$\mathcal G$}}.\end{equation}
\end{enumerate}
Note that if we have a quasi-Hopf algebra then $S$ is antimultiplicative, so an antimultiplicative antilinear map $*$ on the algebra is equivalent to an antilinear algebra map $\theta=*S$. However, $S$ is not unique and it appears that specifying $\theta$ directly is more canonical.
\begin{lemma} Let $(\ )^R$ be bijective. Then $\Xi$ has an antilinear algebra automorphism $\theta$ such that
\[ \theta(x)=\sum_s x{\triangleleft} s\, \delta_{s^R},\quad \theta(\delta_s)=\delta_{s^R},\]
\[\theta^2=\gamma(\ )\gamma^{-1};\quad \gamma=\sum_s\tau(s,s^R)^{-1}\delta_s,\quad\theta(\gamma)=\gamma.\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad } We compute,
\[ \theta(\delta_s\delta_t)=\delta_{s,t}\delta_{s^R}=\delta_{s^R,t^R}\delta_{s^R}=\theta(\delta_s)\theta(\delta_t)\]
\[\theta(x)\theta(y)=\sum_{s,t}x{\triangleleft} s\,\delta_{s^R}\, y{\triangleleft} t\,\delta_{t^R}=\sum_{t}(x{\triangleleft} (y{\triangleright} t))\, y{\triangleleft} t\,\delta_{t^R}=\sum_t (xy){\triangleleft} t\,\delta_{t^R}=\theta(xy),\]
where commuting $\delta_{s^R}$ to the right fixes $s^R=(y{\triangleleft} t){\triangleright} t^R=(y{\triangleright} t)^R$, i.e. $s=y{\triangleright} t$, for the 2nd equality, and the matched pair conditions give the 3rd. We also have
\[ \theta(x\delta_s)=\sum_tx{\triangleleft} t\delta_{t^R}\delta_{s^R}=x{\triangleleft} s\delta_{s^R}=\delta_{(x{\triangleleft} s){\triangleright} s^R}x{\triangleleft} s=\delta_{(x{\triangleright} s)^R}x{\triangleleft} s\]
\[ \theta(\delta_{x{\triangleright} s}x)=\sum_t\delta_{(x{\triangleright} s)^R}x{\triangleleft} t\,\delta_{t^R}=\sum_t\delta_{(x{\triangleright} s)^R}\delta_{(x{\triangleleft} t){\triangleright} t^R}\,x{\triangleleft} t=\sum_t\delta_{(x{\triangleright} s)^R}\delta_{(x{\triangleright} t)^R}\,
x{\triangleleft} t,\]
which is the same as it needs $t=s$. Next
\[ \gamma^{-1}=\sum_s \tau(s,s^R)\delta_{s^{RR}}=\sum_s \delta_s \tau(s,s^R),\]
where we recall from previous calculations that $\tau(s,s^R){\triangleright} s^{RR}=s$. Then
\begin{align*}\theta^2(x)&=\sum_s\theta(x{\triangleleft} s\,\delta_{s^R})=\sum_{s,t}(x{\triangleleft} s){\triangleleft} t\,\delta_{t^R}\delta_{s^{RR}}=\sum_s(x{\triangleleft} s){\triangleleft} s^R\,\delta_{s^{RR}}\\
&=\sum_s \tau(x{\triangleright} s,(x{\triangleright} s)^R)^{-1}x\tau(s,s^R)\delta_{s^{RR}}=\sum_{s,t}\tau(t,t^R)^{-1}\delta_{t}\, x\tau(s,s^R)\delta_{s^{RR}}\\
&=\sum_{s,t}\delta_{t^{RR}}\tau(t,t^R)^{-1}x\tau(s,s^R)\delta_{s^{RR}}=\gamma x\gamma^{-1}\end{align*}
where the 2nd equality uses $\theta(\delta_{s^R})=\delta_{s^{RR}}$, the 3rd fixes $t=s^R$ from the product of $\delta$-functions, and the 4th is by the matched pair conditions as before. For the 5th equality, if we were to commute $\delta_{s^{RR}}$ to the left, this would fix $t=x\tau(s,s^R){\triangleright} s^{RR}=x{\triangleright} s$. We then use $\tau(t,t^R)^{-1}{\triangleright} t=t^{RR}$ and recognise the answer. We also check that
\begin{align*}\gamma\delta_s\gamma^{-1}&= \tau(s,s^R)^{-1}\delta_s\tau(s,s^R)=\delta_{s^{RR}}=\theta^2(\delta_s),\\
\theta(\gamma) &= \sum_{s,t}\tau(s,s^R)^{-1}{\triangleleft} t\delta_{t^R}\delta_{s^R}=\sum_s\tau(s,s^R)^{-1}{\triangleleft} s\delta_{s^R}=\sum_s\tau(s^R,s^{RR})^{-1}\delta_{s^R}=\gamma\end{align*}
using Lemma~\ref{leminv}.
\endproof
Next, we find $\hbox{{$\mathcal G$}}$ obeying the conditions above.
\begin{lemma} If $(\ )^R$ is bijective then equation (\ref{*GDelta}) holds with
\[ \hbox{{$\mathcal G$}}=\sum_{s,t} \delta_{t^R}\tau(s,t)^{-1}\mathop{{\otimes}} \delta_{s^R}\tau(t,t^R) (\tau(s,t){\triangleleft} t^R)^{-1}, \]
\[\hbox{{$\mathcal G$}}^{-1}=\sum_{s,t} \tau(s,t)\delta_{t^R}\mathop{{\otimes}} (\tau(s,t){\triangleleft} t^R)\tau(t,t^R)^{-1} \delta_{s^R}.\]
\end{lemma}
{\noindent {\bfseries Proof:}\quad } The proof that $\hbox{{$\mathcal G$}},\hbox{{$\mathcal G$}}^{-1}$ are indeed inverse is straightforward on matching the $\delta$-functions to fix the summation variables in $\hbox{{$\mathcal G$}}^{-1}$ in terms of $\hbox{{$\mathcal G$}}$. This then comes down to proving that the map
$(s,t)\to (p,q):=(\tau(s,t){\triangleright} t^R, \tau'(s,t){\triangleright} s^R)$ is injective, where $\tau'$ is as defined below. Indeed, the map $(p,q)\mapsto (p,p\cdot q)$ is injective by left division, so it is enough to prove that
\[ (s,t)\mapsto (p,p\cdot q)=(\tau(s,t){\triangleright} t^R, \tau(s,t){\triangleright}(t^R\cdot\tau(t,t^R)^{-1}{\triangleright} s^R))=((s\cdot t)\backslash s,(s\cdot t)^R)\]
is injective. We used $(s\cdot t)\cdot \tau(s,t){\triangleright} t^R=s\cdot(t\cdot t^R)=s$ by quasi-associativity to recognise $p$, recognised $t^R\cdot\tau(t,t^R)^{-1}{\triangleright} s^R=t\backslash s^R$ from (\ref{leftdiv}) and then
\[ (s\cdot t)\cdot \tau(s,t){\triangleright} (t\backslash s^R)=s\cdot(t\cdot(t\backslash s^R))=s\cdot s^R=e\]
to recognise $p\cdot q$. That the desired map is injective is then immediate by $(\ )^R$ injective and elementary properties of division.
We use similar methods in the other proofs. Thus, writing
\[ \tau'(s,t):=(\tau(s,t){\triangleleft} t^R)\tau(t,t^R)^{-1}=\tau(s\cdot t, \tau(s,t){\triangleright} t^R)^{-1}\]
for brevity, we have
\begin{align*}\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op} \delta_r)&=\hbox{{$\mathcal G$}}^{-1}\sum_{p\cdot q=r}(\delta_{q^R}\mathop{{\otimes}}\delta_{p^R})=\sum_{s\cdot t=r}\tau(s,t)\delta_{t^R}\mathop{{\otimes}}\tau'(s,t)\delta_{s^R},\\
(\Delta\theta(\delta_r))\hbox{{$\mathcal G$}}^{-1}&=\sum_{p\cdot q=r^R}(\delta_p\mathop{{\otimes}}\delta_q)\hbox{{$\mathcal G$}}^{-1}=\sum_{s,t:\, p\cdot q=r^R} \tau(s,t)\delta_{t^R}\mathop{{\otimes}}\tau'(s,t)\delta_{s^R},
\end{align*}
where in the second line, commuting the $\delta_{t^R}$ and $\delta_{s^R}$ to the left sets $p=\tau(s,t){\triangleright} t^R$, $q=\tau'(s,t){\triangleright} s^R$ as studied above. Hence $p\cdot q=r^R$ in the sum is the same as $s\cdot t=r$, so the two sides are equal and we have proven
(\ref{*GDelta}) on $\delta_r$. Similarly,
\begin{align*}\hbox{{$\mathcal G$}}^{-1}&(\theta\mathop{{\otimes}}\theta)(\Delta^{op} x)\\
&=\sum_{p,q,s,t} \left(\tau(p,q)\delta_{q^R}\mathop{{\otimes}} (\tau(p,q){\triangleleft} q^R)\tau(q,q^R)^{-1} \delta_{p^R} \right)\left((x{\triangleleft} s){\triangleleft} t\, \delta_{t^R}\mathop{{\otimes}}\delta_{(x{\triangleright} s)^R}x{\triangleleft} s\right)\\
&=\sum_{s,t}(x{\triangleleft} s\cdot t)\tau(s,t)\delta_{t^R}\mathop{{\otimes}} \tau(x{\triangleright}(s\cdot t),(x{\triangleleft} s\cdot t)\tau(s,t){\triangleright} t^R)^{-1}(x{\triangleleft} s)\delta_{s^R}
\end{align*}
where we first note that for the $\delta$-functions to connect, we need
\[ p=x{\triangleright} s,\quad ((x{\triangleleft} s){\triangleleft} t){\triangleright} t^R=q^R,\]
which is equivalent to $q=(x{\triangleleft} s){\triangleright} t$ since $e=(x{\triangleleft} s){\triangleright} (t\cdot t^R)=((x{\triangleleft} s){\triangleright} t)\cdot(( (x{\triangleleft} s){\triangleleft} t){\triangleright} t^R)$. In this case
\[ \tau(p,q)((x{\triangleleft} s){\triangleleft} t)=\tau(x{\triangleright} s, (x{\triangleleft} s){\triangleright} t)((x{\triangleleft} s){\triangleleft} t)=(x{\triangleleft} s\cdot t)\tau(s,t)\]
by the cocycle axiom. Similarly, $(x{\triangleleft} s)^{-1}{\triangleright}(x{\triangleright} s)^R=s^R$ by Lemma~\ref{leminv} gives us $\delta_{s^R}$. For its coefficient, note that $p\cdot q=(x{\triangleright} s)\cdot((x{\triangleleft} s){\triangleright} t)=x{\triangleright}(s\cdot t)$ so that, using the other form of $\tau'(p,q)$, we obtain
\[ \tau(p\cdot q,\tau(p,q){\triangleright} q^R)^{-1}(x{\triangleleft} s)=\tau(x{\triangleright}(s\cdot t),\tau(p,q)((x{\triangleleft} s){\triangleleft} t){\triangleright} t^R)^{-1}(x{\triangleleft} s) \]
and we use our previous calculation to put this in terms of $s,t$. On the other side, we have
\begin{align*}
(\Delta\theta(x))&\hbox{{$\mathcal G$}}^{-1}= \sum_t\Delta(x{\triangleleft} t\, \delta_{t^R} )\hbox{{$\mathcal G$}}^{-1}\\
&=\sum_{p,q,t,\,s\cdot r=t^R}x{\triangleleft} t\, \delta_s\tau(p,q)\delta_{q^R}\mathop{{\otimes}} (x{\triangleleft} t){\triangleleft} s\, \delta_r \tau(p\cdot q,\tau(p,q){\triangleright} q^R)^{-1}\delta_{p^R}\\
&=\sum_{p,q}x{\triangleleft}(p\cdot q)\, \tau(p,q)\delta_{q^R}\mathop{{\otimes}} (x{\triangleleft} p\cdot q){\triangleleft} s\, \tau(p\cdot q,s)^{-1}\delta_{p^R},
\end{align*}
where, for the $\delta$-functions to connect, we need
\[ s=\tau(p,q){\triangleright} q^R,\quad r=\tau'(p,q){\triangleright} p^R.\]
The map $(p,q)\mapsto (s,r)$ has the same structure as the one we studied above, but applied now to $p,q$ in place of $s,t$. It follows that $s\cdot r=(p\cdot q)^R$, and hence this being equal to $t^R$ is equivalent to $p\cdot q=t$. Taking this for the value of $t$, we obtain the second expression for $(\Delta\theta(x))\hbox{{$\mathcal G$}}^{-1}$.
We now use the identity for $(x{\triangleleft} p\cdot q){\triangleleft} s $ and $(p\cdot q)\cdot \tau(p,q){\triangleright} q^R=p\cdot(q\cdot q^R)=p$ to obtain the same as we obtained for $\hbox{{$\mathcal G$}}^{-1}(\theta\mathop{{\otimes}}\theta)(\Delta^{op} x)$ on $x$, upon renaming $s,t$ there to $p,q$. The proofs of (\ref{*Gphi}), (\ref{*Gstrong}) are similarly quite involved, but omitted given that it is known that the category of modules is a strong bar category. \endproof
\begin{corollary}\label{corstar} For $(\ )^R$ bijective and the standard antipode in Proposition~\ref{standardS}, we have a $*$-quasi-Hopf algebra with $\theta=* S$, where $x^*=x^{-1},\delta_s^*=\delta_s$ is the standard $*$-algebra structure on $\Xi$ as a cross product and $\gamma,\hbox{{$\mathcal G$}}$ are as above.
\end{corollary}{\noindent {\bfseries Proof:}\quad } We check that
\[*Sx=*(\sum_s \delta_{(x^{-1}{\triangleright} s)^R}x^{-1}{\triangleleft} s)=\sum_s (x^{-1}{\triangleleft} s)^{-1}\delta_{(x^{-1}{\triangleright} s)^R}=\sum_{s'}x{\triangleleft} s'\delta_{s'{}^R}=\theta(x),\] where $s'=x^{-1}{\triangleright} s$ and we used Lemma~\ref{leminv}. \endproof
The key property of any quasi-bialgebra is that its category of modules is monoidal with associator $\phi_{V,W,U}: (V\mathop{{\otimes}} W)\mathop{{\otimes}} U\to V\mathop{{\otimes}} (W\mathop{{\otimes}} U)$ given by the action of $\phi$. In the $*$-quasi case, this becomes a bar category as follows \cite{BegMa:bar}. First, there is a functor ${\rm bar}$ from the category to itself, which sends a module $V$ to a `conjugate', $\bar V$. In our case, this has the same set and abelian group structure as $V$ but $\lambda.\bar v=\overline{\bar\lambda v}$ for all $\lambda\in \mathbb{C}$, i.e. a conjugate action of the field, where we write $v\in V$ as $\bar v$ when viewed in $\bar V$. Similarly,
\[ \xi.\bar v=\overline{\theta(\xi).v}\]
for all $\xi\in \Xi(R,K)$. On morphisms $\psi:V\to W$, we define $\bar\psi:\bar V\to \bar W$ by $\bar \psi(\bar v)=\overline{\psi(v)}$. Next, there is a natural isomorphism $\Upsilon: {\rm bar}\circ\mathop{{\otimes}} \Rightarrow \mathop{{\otimes}}^{op}\circ({\rm bar}\times{\rm bar})$, given in our case for all modules $V,W$ by
\[ \Upsilon_{V,W}:\overline{V\mathop{{\otimes}} W}{\cong} \bar W\mathop{{\otimes}} \bar V, \quad \Upsilon_{V,W}(\overline{v\mathop{{\otimes}} w})=\overline{ \hbox{{$\mathcal G$}}^2.w}\mathop{{\otimes}}\overline{\hbox{{$\mathcal G$}}^1.v}\]
and making a hexagon identity with the associator, namely
\[ (\mathrm{id}\mathop{{\otimes}}\Upsilon_{V,W})\circ\Upsilon_{V\mathop{{\otimes}} W, U}=\phi_{\bar U,\bar V,\bar W}\circ(\Upsilon_{W,U}\mathop{{\otimes}}\mathrm{id})\circ\Upsilon_{V,W\mathop{{\otimes}} U}\circ \overline{\phi_{V,W,U}}.\]
We also have a natural isomorphism ${\rm bb}:\mathrm{id}\Rightarrow {\rm bar}\circ{\rm bar}$, given in our case for all modules $V$ by
\[ {\rm bb}_V:V\to \overline{\overline V},\quad {\rm bb}_V(v)=\overline{\overline{\gamma.v}}\]
and obeying $\overline{{\rm bb}_V}={\rm bb}_{\bar V}$. In our case, we have a strong bar category, which means also
\[ \Upsilon_{\bar W,\bar V}\circ\overline{\Upsilon_{V,W}}\circ {\rm bb}_{V\mathop{{\otimes}} W}={\rm bb}_V\mathop{{\otimes}}{\rm bb}_W.\]
Finally, a bar category has some conditions on the unit object $\underline 1$, which in our case is the trivial representation, for which these hold automatically. That $G=RK$ leads to a strong bar category is in \cite[Prop.~3.21]{BegMa:bar}, but without the underlying $*$-quasi-Hopf algebra structure as found above.
\begin{example}\label{exS3quasi} \rm {\sl (i) $\Xi(R,K)$ for $S_2\subset S_3$ with its standard transversal.} As an algebra, this is generated by $\mathbb{Z}_2$, which means by an element $u$ with $u^2=e$, and by the $\delta$-functions $\delta_{0},\delta_{1},\delta_{2}$
on the points of $R=\{e,uv,vu\}$. The relations are that the $\delta_i$ are orthogonal and add to $1$, with cross relations
\[ \delta_0u=u\delta_0,\quad \delta_1u=u\delta_2,\quad \delta_2u=u\delta_1.\]
The dot product is the additive group $\mathbb{Z}_3$, i.e. addition mod 3. The coproducts etc are
\[ \Delta \delta_i=\sum_{j+k=i}\delta_j\mathop{{\otimes}}\delta_k,\quad \Delta u=u\mathop{{\otimes}} u,\quad \phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1\]
with addition mod 3. The cocycle and right action are trivial, since the dot product makes $R$ the subgroup $\mathbb{Z}_3$ generated by $uv$. This gives an ordinary cross product Hopf algebra $\Xi=\mathbb{C}(\mathbb{Z}_3){>\!\!\!\triangleleft}\mathbb{C} \mathbb{Z}_2$. Here $S\delta_i=\delta_{-i}$ and $S u=u$. For the $*$-structure, the cocycle is trivial so $\gamma=1$ and $\hbox{{$\mathcal G$}}=1\mathop{{\otimes}} 1$, and we have an ordinary Hopf $*$-algebra.
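As a quick concrete check (illustrative only, not part of the formal development), one can realise $u$ as a transposition in $S_3$ and confirm by machine that the right action and cocycle for the standard transversal are indeed trivial. A Python sketch, with permutations of $\{0,1,2\}$ as tuples composed by $(pq)(i)=p(q(i))$:

```python
# K = S_2 = {e, u} inside G = S_3 with the standard transversal R = {e, uv, vu}:
# check that R is the subgroup Z_3 (so τ is trivial) and that ◁ is trivial.
from itertools import product

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
u = (1, 0, 2)                       # the transposition generating K
v = (2, 1, 0)                       # another transposition, so uv, vu are 3-cycles
K = [e, u]
R = [e, mul(u, v), mul(v, u)]       # standard transversal {e, uv, vu}

def factor(g):                      # unique factorisation g = r k, r ∈ R, k ∈ K
    for r, k in product(R, K):
        if mul(r, k) == g:
            return r, k

for x, r in product(K, R):          # x r = (x▷r)(x◁r)
    assert factor(mul(x, r))[1] == x            # x◁r = x: trivial right action

for r, s in product(R, R):          # r s = (r·s) τ(r,s)
    assert factor(mul(r, s)) == (mul(r, s), e)  # τ trivial, dot = Z_3 group law

assert factor(mul(u, R[1]))[0] == R[2]          # u▷ swaps uv, vu: δ_1 u = u δ_2
```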
{\sl (ii) $\Xi(R,K)$ for $S_2\subset S_3$ with its second transversal.} For this $R$, the dot product is specified by $e$ the identity and $v\cdot w=w$, $w\cdot v=v$. The algebra has relations
\[ \delta_e u=u\delta_e,\quad \delta_v u=u\delta_w,\quad \delta_w u=u\delta_v\]
and the quasi-Hopf algebra coproducts etc. are
\[ \Delta \delta_e=\delta_e\mathop{{\otimes}} \delta_e+\delta_v\mathop{{\otimes}}\delta_v+\delta_w\mathop{{\otimes}}\delta_w,\quad
\Delta \delta_v=\delta_e\mathop{{\otimes}} \delta_v+\delta_v\mathop{{\otimes}}\delta_e+\delta_w\mathop{{\otimes}}\delta_v,\]
\[
\Delta \delta_w=\delta_e\mathop{{\otimes}} \delta_w+\delta_w\mathop{{\otimes}}\delta_e+\delta_v\mathop{{\otimes}}\delta_w,\quad \Delta u=u\mathop{{\otimes}} u,\]
\[ \phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1+(\delta_v \mathop{{\otimes}}\delta_w+\delta_w\mathop{{\otimes}}\delta_v )\mathop{{\otimes}} (u-1)=\phi^{-1}.\]
The antipode is
\[ S\delta_s=\delta_{s^R}=\delta_s,\quad S u=\sum_{s}\delta_{(u{\triangleright} s)^R}u=u,\quad \alpha=1,\quad \beta=\sum_s \delta_s\mathop{{\otimes}}\tau(s,s)=1\]
from the antipode lemma, since the map $(\ )^R$ happens to be injective and indeed acts as the identity. In this case, we see that $\Xi(R,K)$ is nontrivially a quasi-Hopf algebra. Only $\tau(v,w)=\tau(w,v)=u$ are nontrivial, hence for the $*$-quasi Hopf algebra structure, we have
\[ \gamma=1,\quad \hbox{{$\mathcal G$}}=1\mathop{{\otimes}} 1+(\delta_v\mathop{{\otimes}}\delta_w+\delta_w\mathop{{\otimes}}\delta_v)(u\mathop{{\otimes}} u-1\mathop{{\otimes}} 1)\]
with $\theta=*S$ acting as the identity on our basis, $\theta(x)=x$ and $\theta(\delta_s)=\delta_s$. \end{example}
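The structure constants just stated can likewise be confirmed by machine. In the following Python sketch (an illustration, with $u$ a transposition, $v,w$ the other two transpositions of $S_3$, and permutations of $\{0,1,2\}$ as tuples composed by $(pq)(i)=p(q(i))$), the factorisation $G=RK$ reproduces the relations, dot products, the cocycle values $\tau(v,w)=\tau(w,v)=u$, and $(\ )^R=\mathrm{id}$:

```python
# Second transversal R = {e, v, w} for S_2 ⊂ S_3: recover ▷, ◁, ·, τ from
# the unique factorisation g = r k and compare with the stated values.
from itertools import product

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

e, u = (0, 1, 2), (1, 0, 2)
v, w = (2, 1, 0), (0, 2, 1)
K, R = [e, u], [e, v, w]

def factor(g):
    for r, k in product(R, K):
        if mul(r, k) == g:
            return r, k

tri = {(x, r): factor(mul(x, r))[0] for x in K for r in R}   # x▷r
lri = {(x, r): factor(mul(x, r))[1] for x in K for r in R}   # x◁r
dot = {(r, s): factor(mul(r, s))[0] for r in R for s in R}   # r·s
tau = {(r, s): factor(mul(r, s))[1] for r in R for s in R}   # τ(r,s)

assert tri[u, v] == w and tri[u, w] == v     # δ_v u = u δ_w, δ_w u = u δ_v
assert all(lri[x, r] == x for x in K for r in R)   # right action trivial
assert dot[v, w] == w and dot[w, v] == v     # v·w = w, w·v = v
assert tau[v, w] == u and tau[w, v] == u     # the only nontrivial cocycle values
assert all(dot[r, r] == e for r in R)        # r·r = e, so ( )^R = id
assert all(tau[r, r] == e for r in R)        # hence β = Σ_s δ_s⊗τ(s,s) = 1
```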
We also note that the algebras $\Xi(R,K)$ here are manifestly isomorphic for the two $R$, but the coproducts are different, so the tensor products of representations are different, although they turn out to be isomorphic. The set of irreps does not change either, but how we construct them can look different. We will see in the next section that this is part of a monoidal equivalence of categories.
\begin{example}\rm $S_2\subset S_3$ with its second transversal. Here $R$ has two orbits: (a) ${\hbox{{$\mathcal C$}}}=\{e\}$ with $r_0=e$, $K^{r_0}=K$ with two 1-dimensional irreps $V_\rho$ for $\rho$ trivial and $\rho={\rm sign}$, and hence two irreps of $\Xi(R,K)$; (b) ${\hbox{{$\mathcal C$}}}=\{w,v\}$ with $r_0=v$ or $r_0=w$, both with $K^{r_0}=\{e\}$ and hence only $\rho$ trivial, leading to one 2-dimensional irrep of $\Xi(R,K)$. So, altogether, there are again three irreps of $\Xi(R,K)$:
\begin{align*} V_{(\{e\},\rho)}:& \quad \delta_r.1 =\delta_{r,e},\quad u.1 =\pm 1,\\
V_{(\{w,v\},1)}:& \quad \delta_r. v=\delta_{r,v}v,\quad \delta_r. w=\delta_{r,w}w,\quad u.v= w,\quad u.w=v
\end{align*}
acting on $\mathbb{C}$ and on the span of $v,w$ respectively. These irreps are equivalent to what we had in Example~\ref{exS3n} when computing irreps from the standard $R$.
\end{example}
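Returning to Example~\ref{exS3quasi}(ii), the claim $\phi^{-1}=\phi$ can also be confirmed by direct computation in the $6$-dimensional algebra, and the same computation shows that the stated $\hbox{{$\mathcal G$}}$ squares to $1\mathop{{\otimes}} 1$, so $\hbox{{$\mathcal G$}}^{-1}=\hbox{{$\mathcal G$}}$ there. The following Python sketch (our own encoding, not the paper's formalism) models $\Xi(R,K)$ on the basis $\delta_r\mathop{{\otimes}} x$, keyed by pairs $(r,x)$, with product $(\delta_r\mathop{{\otimes}} x)(\delta_s\mathop{{\otimes}} y)=\delta_{r,x{\triangleright} s}\,\delta_r\mathop{{\otimes}} xy$:

```python
# The 6-dimensional algebra Ξ(R,K) of Example (ii): elements are dicts mapping
# tuples of basis pairs (r, x) to coefficients, so rank-n dicts model Ξ^{⊗n}
# with componentwise product. We check φ² = 1⊗1⊗1 and 𝒢² = 1⊗1.

def mul(p, q):                      # permutation composition on {0,1,2}
    return tuple(p[q[i]] for i in range(3))

e, u = (0, 1, 2), (1, 0, 2)
v, w = (2, 1, 0), (0, 2, 1)
R = [e, v, w]
tri = {(e, e): e, (e, v): v, (e, w): w,     # x▷r for K = {e, u}
       (u, e): e, (u, v): w, (u, w): v}

def add(a, b):
    out = dict(a)
    for k, c in b.items():
        out[k] = out.get(k, 0) + c
    return {k: c for k, c in out.items() if c != 0}

def neg(a):
    return {k: -c for k, c in a.items()}

def tens(a, b):                     # tensor product: concatenate basis keys
    return {ka + kb: ca * cb for ka, ca in a.items() for kb, cb in b.items()}

def ximul(a, b):                    # product in Ξ^{⊗n}, componentwise
    out = {}
    for ka, ca in a.items():
        for kb, cb in b.items():
            term = []
            for (r, x), (s, y) in zip(ka, kb):
                if r != tri[x, s]:  # (δ_r⊗x)(δ_s⊗y) = δ_{r,x▷s} δ_r⊗xy
                    break
                term.append((r, mul(x, y)))
            else:
                key = tuple(term)
                out[key] = out.get(key, 0) + ca * cb
    return {k: c for k, c in out.items() if c != 0}

one = {((r, e),): 1 for r in R}     # the unit Σ_r δ_r⊗e of Ξ
U = {((r, u),): 1 for r in R}       # the element u of Ξ
d = lambda r: {((r, e),): 1}        # δ_r

A = add(tens(d(v), d(w)), tens(d(w), d(v)))
G = add(tens(one, one), ximul(A, add(tens(U, U), neg(tens(one, one)))))
phi = add(tens(tens(one, one), one), tens(A, add(U, neg(one))))

assert ximul(phi, phi) == tens(tens(one, one), one)   # φ⁻¹ = φ as stated
assert ximul(G, G) == tens(one, one)                  # 𝒢⁻¹ = 𝒢
```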
\section{Categorical justification and twisting theorem}\label{sec:cat_just}
We have shown that the boundaries can be defined using the action of the algebra $\Xi(R,K)$ and that one can perform novel methods of fault-tolerant quantum computation using these boundaries. The full story, however, involves the quasi-Hopf algebra structure verified in the preceding section, and we now connect this back to the category theory behind it.
\subsection{$G$-graded $K$-bimodules} We start by proving the equivalence ${}_{\Xi(R,K)}\hbox{{$\mathcal M$}} \simeq {}_K\hbox{{$\mathcal M$}}_K^G$ explicitly and use it to derive the coproduct studied in Section~\ref{sec:quasi}. Although this equivalence is known \cite{PS}, we believe this to be a new and more direct derivation.
\begin{lemma} If $V_\rho$ is a $K^{r_0}$-module and $V_{\hbox{{$\mathcal O$}},\rho}$ the associated $\Xi(R,K)$ irrep, then
\[ \tilde V_{\hbox{{$\mathcal O$}},\rho}= V_{\hbox{{$\mathcal O$}},\rho}\mathop{{\otimes}} \mathbb{C} K,\quad x.(r\mathop{{\otimes}} v\mathop{{\otimes}} z).y=x{\triangleright} r\mathop{{\otimes}}\zeta_r(x).v\mathop{{\otimes}} (x{\triangleleft} r)zy,\quad |r\mathop{{\otimes}} v\mathop{{\otimes}} z|=rz\]
is a $G$-graded $K$-bimodule. Here $r\in \hbox{{$\mathcal O$}}$ and $v\in V_\rho$ in the construction of $V_{\hbox{{$\mathcal O$}},\rho}$.
\end{lemma}
{\noindent {\bfseries Proof:}\quad } That this is a $G$-graded right $K$-module commuting with the left action of $K$ is trivial. That the left action is indeed an action and respects the $G$-grading follows from
\begin{align*}x.(y.(r\mathop{{\otimes}} v\mathop{{\otimes}} z))&=x.(y{\triangleright} r\mathop{{\otimes}} \zeta_r(y).v\mathop{{\otimes}} (y{\triangleleft} r)z)= xy{\triangleright} r\mathop{{\otimes}} \zeta_r(xy).v\mathop{{\otimes}} (x{\triangleleft}(y{\triangleright} r))(y{\triangleleft} r)z\\
&=xy{\triangleright} r\mathop{{\otimes}} \zeta_r(xy).v\mathop{{\otimes}} ((xy){\triangleleft} r)z\end{align*}
and
\[ |x.(r\mathop{{\otimes}} v\mathop{{\otimes}} z).y|=(x{\triangleright} r) (x{\triangleleft} r)zy= xrzy=x|r\mathop{{\otimes}} v \mathop{{\otimes}} z|y.\]
\endproof
\begin{remark}\rm Recall that we can also think more abstractly of $\Xi=\mathbb{C}(G/K){>\!\!\!\triangleleft} \mathbb{C} K$ rather than using a transversal. In these terms, a representation of $\Xi(R,K)$, i.e. an $R$-graded $K$-module $V$ such that $|x.v|=x{\triangleright} |v|$, now becomes a $G/K$-graded $K$-module such that $|x.v|=x|v|$, where $|v|\in G/K$ and we multiply from the left by $x\in K$. Moreover, the role of an orbit $\hbox{{$\mathcal O$}}$ above is played by a double coset $T=\hbox{{$\mathcal O$}} K\in {}_KG_K$. In these terms, the role of the isotropy group $K^{r_0}$ is played by
\[ K^{r_T}:=K\cap r_T K r_T^{-1}, \]
where $r_T$ is any representative of the same double coset. One can take $r_T=r_0$, but we can also choose it more freely.
Then an irrep is given by a double coset $T$ and an irreducible representation $\rho_T$ of $K^{r_T}$. If we denote by $V_{\rho_T}$ the carrier space for this then the associated irrep of $\mathbb{C}(G/K){>\!\!\!\triangleleft}\mathbb{C} K$ is $V_{T,\rho_T}=\mathbb{C} K\mathop{{\otimes}}_{K^{r_T}}V_{\rho_T}$ which is manifestly a $K$-module and we give it the $G/K$-grading by $|x\mathop{{\otimes}}_{K^{r_T}} v|=xK$. The construction in the last lemma is then equivalent to
\[ \tilde V_{T,\rho_T}=\mathbb{C} K\mathop{{\otimes}}_{K^{r_T}} V_{\rho_T}\mathop{{\otimes}}\mathbb{C} K,\quad |x\mathop{{\otimes}}_{K^{r_T}} v\mathop{{\otimes}} z|=xz\]
as manifestly a $G$-graded $K$-bimodule. This is an equivalent point of view, but we prefer our more explicit one based on $R$, hence details are omitted.
\end{remark}
Also note that the category ${}_K\hbox{{$\mathcal M$}}_K^G$ of $G$-graded $K$-bimodules has an obvious monoidal structure inherited from that of $K$-bimodules, where we take the tensor product over $\mathbb{C} K$. Here $|w\mathop{{\otimes}}_{\mathbb{C} K} w'|=|w||w'|$ in $G$ is well-defined and $x.(w\mathop{{\otimes}}_{\mathbb{C} K}w').y=x.w\mathop{{\otimes}}_{\mathbb{C} K} w'.y$ has degree $x|w||w'|y=x|w\mathop{{\otimes}}_{\mathbb{C} K}w'|y$ as required.
\begin{proposition} \label{prop:mon_equiv}
We let $R$ be a transversal and, for a $\Xi(R,K)$-module $V$, let $W=V\mathop{{\otimes}} \mathbb{C} K$ be made into a $G$-graded $K$-bimodule by
\[ x.(v\mathop{{\otimes}} z).y=x.v\mathop{{\otimes}} (x{\triangleleft}|v|)zy, \quad |v\mathop{{\otimes}} z|= |v|z\in G,\]
where now we view $|v|\in R$ as the chosen representative of $|v|\in G/K$. This gives a functor $F:{}_\Xi\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ which is a monoidal equivalence for a suitable quasibialgebra structure on $\Xi(R,K)$. The latter depends on $R$ since $F$ depends on $R$.
\end{proposition}
{\noindent {\bfseries Proof:}\quad } We define $F(V)$ as stated, which is clearly a right module that commutes with the left action, and the latter is a module structure as
\[ x.(y.(v\mathop{{\otimes}} z))=x.(y.v\mathop{{\otimes}} (y{\triangleleft} |v|)z)=xy.v\mathop{{\otimes}} (x{\triangleleft} (y{\triangleright} |v|))(y{\triangleleft} |v|)z=(xy).(v\mathop{{\otimes}} z)\]
using the matched pair axiom for $(xy){\triangleleft} |v|$. We also check that $|x.(v\mathop{{\otimes}} z).y|=|x.v|zy=(x{\triangleright} |v|)(x{\triangleleft} |v|)zy=x|v|zy=x|v\mathop{{\otimes}} z|y$. Hence, we have a $G$-graded $K$-bimodule. Conversely, if $W$ is a $G$-graded $K$-bimodule, we let
\[ V=\{w\in W\ |\ |w|\in R\},\quad x.v=xv(x{\triangleleft} |v|)^{-1},\quad \delta_r.v=\delta_{r,|v|}v,\]
where $v$ on the right is viewed in $W$ and we use the $K$-bimodule structure. This is arranged so that $x.v$ on the left lives in $V$. Indeed, $|x.v|=x|v|(x{\triangleleft} |v|)^{-1}=x{\triangleright} |v|$ and $x.(y.v)=xyv(y{\triangleleft} |v|)^{-1}(x{\triangleleft}(y{\triangleright} |v|))^{-1}=xyv((xy){\triangleleft} |v|)^{-1}$ by the matched pair condition, as required for a representation of $\Xi(R,K)$. One can check that this is inverse to the other direction. Thus, given $W=\oplus_{rx\in G}W_{rx}=\oplus_{x\in K} W_{Rx}$, where we let $W_{Rx}=\oplus_{r\in R}W_{rx}$, the right action by $x\in K$ gives an isomorphism $W_{Rx}{\cong} V\mathop{{\otimes}} x$ as vector spaces and hence recovers $W=V\mathop{{\otimes}}\mathbb{C} K$. This clearly has the correct right $K$-action and from the left $x.(v\mathop{{\otimes}} z)=xv(x{\triangleleft}|v|)^{-1}\mathop{{\otimes}} (x{\triangleleft}|v|)z$, which under the identification maps to $xv(x{\triangleleft}|v|)^{-1} (x{\triangleleft}|v|)z=xvz\in W$ as required given that $v\mathop{{\otimes}} z$ maps to $vz$ in $W$.
Now, if $V,V'$ are $\Xi(R,K)$ modules then as vector spaces,
\[ F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')=(V\mathop{{\otimes}} \mathbb{C} K)\mathop{{\otimes}}_{\mathbb{C} K} (V'\mathop{{\otimes}} \mathbb{C} K)=V\mathop{{\otimes}} V'\mathop{{\otimes}} \mathbb{C} K{\buildrel f_{V,V'}\over{\cong}}F(V\mathop{{\otimes}} V')\]
by the obvious identifications except that in the last step we allow ourselves the possibility of a nontrivial isomorphism as vector spaces. For the actions on the two sides,
\[ x.(v\mathop{{\otimes}} v'\mathop{{\otimes}} z).y=x.(v\mathop{{\otimes}} v')\mathop{{\otimes}} (x{\triangleleft} |v\mathop{{\otimes}} v'|)zy= x.v\mathop{{\otimes}} (x{\triangleleft} |v|).v'\mathop{{\otimes}} ((x{\triangleleft}|v|){\triangleleft}|v'|)zy,\]
where on the right, we have $x.(v\mathop{{\otimes}} 1)=x.v \mathop{{\otimes}} x{\triangleleft}|v|$ and then take $x{\triangleleft}|v|$ via the $\mathop{{\otimes}}_{\mathbb{C} K}$ to act on $v'\mathop{{\otimes}} z$ as per our identification. Comparing the $x$ action on the $V\mathop{{\otimes}} V'$ factor, we need
\[\Delta x=\sum_{r\in R}x\delta_r\mathop{{\otimes}} x{\triangleleft} r= \sum_{r\in R}\delta_{x{\triangleright} r}\mathop{{\otimes}} x \mathop{{\otimes}} 1\mathop{{\otimes}} x{\triangleleft} r\]
as a modified coproduct without requiring a nontrivial $f_{V,V'}$ for this to work. The first expression is viewed in $\Xi(R,K)^{\mathop{{\otimes}} 2}$ and the second is on the underlying vector space. Likewise, looking at the grading of $F(V\mathop{{\otimes}} V')$ and comparing with the grading of $F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')$, we need to define $|v\mathop{{\otimes}} v'|=|v|\cdot|v'|\in R$ and use $|v|\cdot|v'|\tau(|v|,|v'|)=|v||v'|$ to match the degree on the left hand side. This amounts to the coproduct of $\delta_r$ in $\Xi(R,K)$,
\[ \Delta\delta_r=\sum_{s\cdot t=r}\delta_s\mathop{{\otimes}}\delta_t=\sum_{s\cdot t=r} \delta_s\mathop{{\otimes}} 1\mathop{{\otimes}} \delta_t \mathop{{\otimes}} 1\]
{\em and} a further isomorphism
\[ f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)= v\mathop{{\otimes}} v'\mathop{{\otimes}}\tau(|v|,|v'|)z\] on the underlying vector space.
After applying this, the degree of this element is $|v\mathop{{\otimes}} v'|\tau(|v|,|v'|)z=|v||v'|z=|v\mathop{{\otimes}} 1||v'\mathop{{\otimes}} z|$, which is the degree on the original $F(V)\mathop{{\otimes}}_{\mathbb{C} K}F(V')$ side. Now we show that $f_{V,V'}$ respects associators on each side of $F$. Taking the associator on the $\Xi(R,K)$-module side as
\[ \phi_{V,V',V''}:(V\mathop{{\otimes}} V')\mathop{{\otimes}} V''\to V\mathop{{\otimes}}(V'\mathop{{\otimes}} V''),\quad \phi_{V,V',V''}((v\mathop{{\otimes}} v')\mathop{{\otimes}} v'')=\phi^1.v\mathop{{\otimes}} (\phi^2.v'\mathop{{\otimes}}\phi^3.v'')\]
and $\phi$ trivial on the $G$-graded $K$-bimodule side, for $F$ to be monoidal with the stated $f_{V,V'}$ etc, we need
\begin{align*}
F(\phi_{V,V',V''})&f_{V\mathop{{\otimes}} V',V''}f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} z)\\
&=F(\phi_{V,V',V''})f_{V\mathop{{\otimes}} V',V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}} (\tau(|v|,|v'|){\triangleleft}|v''|)z)\\
&=F(\phi_{V,V',V''})(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}}\tau(|v|\cdot|v'|,\tau(|v|,|v'|){\triangleright} |v''|)(\tau(|v|,|v'|){\triangleleft}|v''|)z)\\
&=F(\phi_{V,V',V''})(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|).v''\mathop{{\otimes}} \tau(|v|,|v'|\cdot|v''|)\tau(|v'|,|v''|)z),\\
f_{V,V'\mathop{{\otimes}} V''}&f_{V',V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} z)=f_{V,V'\mathop{{\otimes}} V''}(v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}} \tau(|v'|,|v''|)z) \\
&=v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}}\tau(|v|,|v'\mathop{{\otimes}} v''|)\tau(|v'|,|v''|)z =v\mathop{{\otimes}} v'\mathop{{\otimes}} v''\mathop{{\otimes}}\tau(|v|,|v'|.|v''|)\tau(|v'|,|v''|)z,\end{align*}
where for the first equality we moved $\tau(|v|,|v'|)$ in the output of $f_{V,V'}$ via $\mathop{{\otimes}}_{\mathbb{C} K}$ to act on the $v''$. We used
the cocycle property of $\tau$ for the 3rd equality. Comparing results, we need
\[ \phi_{V,V',V''}((v\mathop{{\otimes}} v')\mathop{{\otimes}} v'')=v\mathop{{\otimes}}( v'\mathop{{\otimes}} \tau(|v|,|v'|)^{-1}.v''),\quad \phi=\sum_{s,t\in R}(\delta_s\mathop{{\otimes}} 1)\mathop{{\otimes}}(\delta_t\mathop{{\otimes}} 1)\mathop{{\otimes}} (1\mathop{{\otimes}} \tau(s,t)^{-1}).\]
Note that we can write
\[ f_{V,V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)=(\sum_{s,t\in R}(\delta_s\mathop{{\otimes}} 1)\mathop{{\otimes}}(\delta_t\mathop{{\otimes}} 1)\mathop{{\otimes}} \tau(s,t)).(v\mathop{{\otimes}} v'\mathop{{\otimes}} z)\]
but we are not saying that $\phi$ is a coboundary, since $f_{V,V'}$ here is not given by the action of an element of $\Xi(R,K)^{\mathop{{\otimes}} 2}$.
\endproof
This derives the quasi-bialgebra structure on $\Xi(R,K)$ used in Section~\ref{sec:quasi}, but now in such a way as to obtain an
equivalence of categories.
\subsection{Drinfeld twists induced by change of transversal}
We recall that if $H$ is a quasi-Hopf algebra and $\chi\in H\mathop{{\otimes}} H$ is a {\em cochain} in the sense of invertible and $(\mathrm{id}\mathop{{\otimes}}
{\epsilon})\chi=({\epsilon}\mathop{{\otimes}}\mathrm{id})\chi=1$, then its {\em Drinfeld twist} $\bar H$ is another quasi-Hopf algebra with
\[ \bar\Delta=\chi^{-1}\Delta(\ )\chi,\quad \bar\phi=\chi_{23}^{-1}((\mathrm{id}\mathop{{\otimes}}\Delta)\chi^{-1})\phi ((\Delta\mathop{{\otimes}}\mathrm{id})\chi)\chi_{12},\quad \bar{\epsilon}={\epsilon},\]
\[ \bar S=S,\quad\bar\alpha=(S\chi^1)\alpha\chi^2,\quad \bar\beta=(\chi^{-1})^1\beta S(\chi^{-1})^2,\]
where $\chi=\chi^1\mathop{{\otimes}}\chi^2$ with a sum of such terms understood and we use the same notation for $\chi^{-1}$, see \cite[Thm.~2.4.2]{Ma:book}, but note that our $\chi$ is denoted $F^{-1}$ there. In categorical terms, this twist
corresponds to a monoidal equivalence $G:{}_{H}\hbox{{$\mathcal M$}}\to {}_{H^\chi}\hbox{{$\mathcal M$}}$ which is the identity on objects and morphisms but has a nontrivial natural transformation
\[ g_{V,V'}:G(V)\bar\mathop{{\otimes}} G(V'){\cong} G(V\mathop{{\otimes}} V'),\quad g_{V,V'}(v\mathop{{\otimes}} v')= \chi^1.v\mathop{{\otimes}}\chi^2.v'.\]
The next theorem follows by the above reconstruction arguments, but here we check it directly. The logic is that for different $R,\bar R$ the categories of modules are both monoidally equivalent to ${}_K\hbox{{$\mathcal M$}}_K^G$, and hence to each other, but not in a manner that is compatible with the forgetful functor to ${\rm Vect}$. Hence they should be related by a cochain twist.
\begin{theorem}\label{thmtwist} Let $R,\bar R$ be two transversals with $\bar r\in\bar R$ representing the same coset as $r\in R$. Then $\Xi(\bar R,K)$ is a cochain twist of $\Xi(R,K)$ at least as quasi-bialgebras (and as quasi-Hopf algebras if one of them is). The Drinfeld cochain is $\chi=\sum_{r\in R}(\delta_r\mathop{{\otimes}} 1)\mathop{{\otimes}} (1\mathop{{\otimes}} r^{-1}\bar r)$. \end{theorem}
{\noindent {\bfseries Proof:}\quad } Let $R,\bar R$ be two transversals. Then for each $r\in R$, the coset $rK$ has a unique representative $\bar r\in\bar R$. Hence $\bar r= r c_r$ for some function $c:R\to K$ determined by the two transversals as $c_r=r^{-1}\bar r$ in $G$. One can show that the cocycle matched pair data are related by
\[ x\bar{\triangleright} \bar r=(x{\triangleright} r)c_{x{\triangleright} r},\quad x\bar{\triangleleft} \bar r= c_{x{\triangleright} r}^{-1}(x{\triangleleft} r)c_r\]
among other identities. On using
\begin{align*} \bar s\bar t=sc_s tc_t=s (c_s{\triangleright} t)(c_s{\triangleleft} t)c_t&= (s\cdot c_s{\triangleright} t)\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t\\&=\overline{ s\cdot (c_s{\triangleright} t)}c_{s\cdot c_s{\triangleright} t}^{-1}\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t\end{align*}
and factorising using $\bar R$, we see that
\begin{equation}\label{taucond} \bar s\, \bar\cdot\, \bar t= \overline{s\cdot c_s{\triangleright} t},\quad\bar\tau(\bar s,\bar t)=c_{s\cdot c_s{\triangleright} t}^{-1}\tau(s, c_s{\triangleright} t)(c_s{\triangleleft} t)c_t.\end{equation}
We will construct a monoidal functor $G:{}_{\Xi(R,K)}\hbox{{$\mathcal M$}}\to {}_{\Xi(\bar R,K)}\hbox{{$\mathcal M$}}$ with $g_{V,V'}(v\mathop{{\otimes}} v')= \chi^1.v\mathop{{\otimes}}\chi^2.v'$ for a suitable $\chi\in \Xi(R,K)^{\mathop{{\otimes}} 2}$. First, let $F:{}_{\Xi(R,K)}\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ be the monoidal functor above with natural isomorphism $f_{V,V'}$ and $\bar F:{}_{\Xi(\bar R,K)}\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K^G$ the parallel for $\Xi(\bar R,K)$ with isomorphism $\bar f_{V,V'}$. Then
\[ C:F\to \bar F\circ G,\quad C_V:F(V)=V\mathop{{\otimes}}\mathbb{C} K\to V\mathop{{\otimes}} \mathbb{C} K=\bar FG(V),\quad C_V(v\mathop{{\otimes}} z)=v\mathop{{\otimes}} c_{|v|}^{-1}z\]
is a natural isomorphism. To check this, on the right hand side we have, denoting the $\bar R$-grading by $||\ ||$, the $G$-grading and $K$-bimodule structure
\begin{align*} |C_V(v\mathop{{\otimes}} z)|&= |v\mathop{{\otimes}} c_{|v|}^{-1}z|= ||v||c_{|v|}^{-1}z=|v|z=|v\mathop{{\otimes}} z|,\\
x.C_V(v\mathop{{\otimes}} z).y&=x.(v\mathop{{\otimes}} c_{|v|}^{-1}z).y=x.v\mathop{{\otimes}} (x\bar{\triangleleft} ||v||)c_{|v|}^{-1}zy=x.v \mathop{{\otimes}} c_{x{\triangleright} |v|}^{-1} (x{\triangleleft} |v|)zy\\
&= C_V(x.(v\mathop{{\otimes}} z).y).\end{align*}
We want these two functors not only to be naturally isomorphic, but for the isomorphism to respect that they are both monoidal functors. Here $\bar F\circ G$ has the natural isomorphism
\[ \bar f^g_{V,V'}= \bar F(g_{V,V'})\circ \bar f_{G(V),G(V')}\]
by which it is a monoidal functor.
The natural compatibility condition on a natural isomorphism $C$ between monoidal functors is that $C$ behaves in the obvious way on
tensor product objects via the natural isomorphisms associated to each monoidal functor. In our case, this means
\[ \bar f^g_{V,V'}\circ (C_{V}\mathop{{\otimes}} C_{V'}) = C_{V\mathop{{\otimes}} V'}\circ f_{V,V'}: F(V)\mathop{{\otimes}} F(V')\to \bar F G(V\mathop{{\otimes}} V').\]
Putting in the specific form of these maps, the right hand side is
\[C_{V\mathop{{\otimes}} V'}\circ f_{V,V'}(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K v'\mathop{{\otimes}} z)=C_{V\mathop{{\otimes}} V'}(v\mathop{{\otimes}} v'\mathop{{\otimes}} \tau(|v|,|v'|)z)=v\mathop{{\otimes}} v'\mathop{{\otimes}} c^{-1}_{|v\mathop{{\otimes}} v'|}\tau(|v|,|v'|)z,\]
while the left hand side is
\begin{align*}\bar f^g_{V,V'}\circ (C_{V}\mathop{{\otimes}} C_{V'})&(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K v'\mathop{{\otimes}} z)=\bar f^g_{V,V'}(v\mathop{{\otimes}} c^{-1}_{|v|}\mathop{{\otimes}}_K v'\mathop{{\otimes}} c^{-1}_{|v'|}z)\\
&=\bar f^g_{V,V'}(v\mathop{{\otimes}} 1\mathop{{\otimes}}_K c^{-1}_{|v|}.v'\mathop{{\otimes}} (c^{-1}_{|v|}\bar{\triangleright} ||v'||)c^{-1}_{|v'|}z)\\
&=\bar F(g_{V,V'})(v\mathop{{\otimes}} c^{-1}_{|v|}.v'\mathop{{\otimes}} \bar\tau(||v||,||c^{-1}_{|v|}.v'||)(c^{-1}_{|v|}\bar{\triangleright}||v'||)c^{-1}_{|v'|}z)\\
&=\bar F(g_{V,V'})(v\mathop{{\otimes}} c^{-1}_{|v|}.v'\mathop{{\otimes}} c^{-1}_{|v\mathop{{\otimes}} v'|}\tau(|v|,|v'|)z)
\end{align*}
using the second of (\ref{taucond}) and $|v\mathop{{\otimes}} v'|=|v|\cdot|v'|$. We also used $\bar f^g_{V,V'}=\bar F(g_{V,V'})\bar f_{G(V),G(V')}:\bar FG(V)\mathop{{\otimes}} \bar FG(V')\to \bar FG(V\mathop{{\otimes}} V')$. Comparing, we need $\bar F(g_{V,V'})$ to be the action of the element
\[ \chi=\sum_{r\in R} \delta_r\mathop{{\otimes}} c_r\in \Xi(R,K)^{\mathop{{\otimes}} 2}.\]
It follows from the arguments, but one can also check directly, that $\phi$ indeed twists as stated to $\bar\phi$ when these are given by Lemma~\ref{Xibialg}, again using (\ref{taucond}). \endproof
The twist of a quasi-Hopf algebra is again a quasi-Hopf algebra. Hence, we have:
\begin{corollary}\label{twistant} If $R$ has $(\ )^R$ bijective giving a quasi-Hopf algebra with regular antipode $S,\alpha=1,\beta$ as in Proposition~\ref{standardS} and $\bar R$ is another transversal then $\Xi(\bar R,K)$ in the twisting form of Theorem~\ref{thmtwist} has an antipode
\[ \bar S=S,\quad \bar \alpha=\sum_r \delta_{r^R} c_r ,\quad \bar \beta =\sum_r \delta_r \tau(r,r^R)(c_r^{-1}{\triangleleft} r^R)^{-1} . \]
This is a regular antipode if $(\ )^R$ for $\bar R$ is also bijective (i.e. $\bar\alpha$ is then invertible and can be transformed back to standard form to make it 1).\end{corollary}
{\noindent {\bfseries Proof:}\quad } We work with the initial quasi-Hopf algebra $\Xi(R,K)$ and ${\triangleright},{\triangleleft},\tau$ refer to this but note that $\Xi(\bar R,K)$ is the same algebra when $\delta_r$ is identified with the corresponding $\delta_{\bar r}$. Then
\begin{align*}\bar \alpha&=(S\chi^{1})\chi^{2}=\sum_r (S\delta_r) c_r=\sum_r\delta_{r^R}c_r\end{align*}
using the formula for $S\delta_r=\delta_{r^R}$ in Proposition~\ref{standardS}. Similarly,
$\chi^{-1}=\sum_r \delta_r\mathop{{\otimes}} c_r^{-1}$ and we use $S,\beta$ from Proposition~\ref{standardS}, where
\[ S (1\mathop{{\otimes}} x)= \sum_s \delta_{(x^{-1}{\triangleright} s)^R}\mathop{{\otimes}} x^{-1}{\triangleleft} s=\sum_t\delta_{t^R}\mathop{{\otimes}} x^{-1}{\triangleleft}(x{\triangleright} t)=\sum_t\delta_{t^R}\mathop{{\otimes}} (x{\triangleleft} t)^{-1}.\]
Then
\begin{align*} \bar \beta &=(\chi^{-1})^1\beta S(\chi^{-1})^2=\sum_{r,s,t}\delta_r\delta_s\tau(s,s^R)\delta_{t^R}(c_r^{-1}{\triangleleft} t)^{-1}\\
&=\sum_{r,t} \delta_r\tau(r,r^R)\delta_{t^R}(c_r^{-1}{\triangleleft} t)^{-1}=\sum_{r,t}\delta_r\delta_{\tau(r,r^R){\triangleright} t^R}\tau(r,r^R) (c_r^{-1}{\triangleleft} t)^{-1}.\end{align*}
Commuting the $\delta$-functions to the left requires $r=\tau(r,r^R){\triangleright} t^R$ or $r^{RR}=\tau(r,r^R)^{-1}{\triangleright} r= t^R$ so $t=r^R$ under our assumptions, giving the answer stated.
If $(\ )^R$ is bijective then $\bar\alpha^{-1}=\sum_r c_r^{-1}\delta_{r^R}=\sum_r \delta_{c_r^{-1}{\triangleright} r^R}c_r^{-1}$ provides the left inverse. On the other side, we need $c_r^{-1}{\triangleright} r^R= c_s^{-1}{\triangleright} s^R$ iff $r=s$. This is true if $(\ )^{R}$ for $\bar R$ is also bijective. That is because, if we write $(\ )^{\bar R}$ for the right inverse with respect to $\bar R$, one can show by comparing the factorisations that
\[ \bar s^{\bar R}=\overline{c_s^{-1}{\triangleright} s^R},\quad \overline{s^R}=c_s\bar{\triangleright} \bar s^{\bar R}\]
and we use the first of these. \endproof
\begin{example}\rm With reference to the list of transversals for $S_2\subset S_3$, we have four quasi-Hopf algebras of which two were already computed in Example~\ref{exS3quasi}.
{\sl (i) 2nd transversal as twist of the first.} Here $\bar\Xi$ is generated by $\mathbb{Z}_2$ as $u$ again and $\delta_{\bar r}$ with $\bar R=\{e,w,v\}$. We have the same cosets represented by these with $\bar e=e$, $\overline{uv}=w$ and $\overline{vu}=v$, which means $c_e=e, c_{vu}=u, c_{uv}=u$. To compare the algebras in the two cases, we identify $\delta_0=\delta_e,\delta_1=\delta_w, \delta_2=\delta_v$ as delta-functions on $G/K$ (rather than on $G$) in order to identify the algebras of $\bar\Xi$ and $\Xi$. The cochain from Theorem~\ref{thmtwist} is \[ \chi=\delta_e\mathop{{\otimes}} e+(\delta_{vu}+\delta_{uv})\mathop{{\otimes}} u=\delta_0\mathop{{\otimes}} 1+ (\delta_1+\delta_2)\mathop{{\otimes}} u=\delta_0\mathop{{\otimes}} 1+ (1-\delta_0)\mathop{{\otimes}} u \]
as an element of $\Xi\mathop{{\otimes}}\Xi$. One can check that this conjugates the two coproducts as claimed. We also have
\[ \chi^2=1\mathop{{\otimes}} 1,\quad ({\epsilon}\mathop{{\otimes}}\mathrm{id})\chi=(\mathrm{id}\mathop{{\otimes}}{\epsilon})\chi=1.\]
We spot check (\ref{taucond}), for example $v\bar\cdot w=\overline{vu}\, \bar\cdot\, \overline{uv}=\overline{uv}=\overline{vuvu}=\overline{vu( u{\triangleright} (uv))}$, as it had to be. We should therefore find that
\[((\Delta\mathop{{\otimes}}\mathrm{id})\chi)\chi_{12}=((\mathrm{id}\mathop{{\otimes}}\Delta)\chi)\chi_{23}\bar\phi. \]
We have checked directly that this indeed holds. Next, the antipode of the first transversal should twist to
\[ \bar S=S,\quad \bar\alpha=\delta_e c_e+\delta_{uv}c_{vu}+\delta_{vu}c_{uv}=\delta_e(e-u)+u=\delta_e c_e+\delta_{vu}c_{vu}+\delta_{uv}c_{uv}=\bar\beta\]
by Corollary~\ref{twistant} for twisting the antipode. Here, $U=\bar\alpha^{-1}=\bar\beta = U^{-1}$ and $\bar S'=U(S\ )U^{-1}$ with $\bar\alpha'=\bar\beta'=1$ should also be an antipode. We can check this:
\[U u = (\delta_0(e-u)+u)u = \delta_0(u-e)+e = u(\delta_{u^{-1}{\triangleright} 0}(e-u)+u) = u U\]
so $\bar S' u = UuU^{-1} = u$, and
\[\bar S' \delta_1 = U(S\delta_1)U= U\delta_2 U = (\delta_0(e-u)+u)\delta_2(\delta_0(e-u)+u) = \delta_1.\]
\bigskip
{\sl (ii) 3rd transversal as a twist of the first.} A mixed up choice is $\bar R=\{e,uv,v\}$ which is not a subgroup so $\tau$ is nontrivial. One has
\[ \tau(uv,uv)=\tau(v,uv)=\tau(uv,v)=u,\quad \tau(v,v)=e,\quad v.v=e,\quad v.uv=uv,\quad uv.v=e,\quad uv.uv=v,\]
\[ u{\triangleright} v=uv,\quad u{\triangleright} (uv)=v,\quad u{\triangleleft} v=e,\quad u{\triangleleft} uv=e\]
and all other cases implied from the properties of $e$. Here $v^R=v$ and $(uv)^R=v$. These are with respect to $\bar R$, but note that twisting calculations will take place with respect to $R$.
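As an illustrative aside (not part of the original text), these multiplication-table claims can be verified mechanically by factorising products in $S_3$ over the transversal. A minimal Python sketch, with permutations composed as functions on the points $\{0,1,2\}$ so that $u=(12)$, $v=(23)$ and $uv=(123)$:

```python
def mul(p, q):                      # composition (pq)(x) = p(q(x)) on points {0,1,2}
    return tuple(p[q[i]] for i in range(3))

e  = (0, 1, 2)
u  = (1, 0, 2)                      # u = (12), generating K = S_2
v  = (0, 2, 1)                      # v = (23)
uv = mul(u, v)                      # uv = (123)
R  = [e, uv, v]                     # the third transversal {e, uv, v}
K  = [e, u]

def factorise(g):                   # unique factorisation g = r k with r in R, k in K
    for r in R:
        for k in K:
            if mul(r, k) == g:
                return r, k

def dot(s, t):  return factorise(mul(s, t))[0]   # s . t
def tau(s, t):  return factorise(mul(s, t))[1]   # cocycle tau(s, t)
def act(x, r):  return factorise(mul(x, r))      # x r = (x |> r)(x <| r)

assert dot(uv, uv) == v  and tau(uv, uv) == u
assert dot(v, uv)  == uv and tau(v, uv)  == u
assert dot(uv, v)  == e  and tau(uv, v)  == u
assert dot(v, v)   == e  and tau(v, v)   == e
assert act(u, v)  == (uv, e)        # u |> v = uv,   u <| v = e
assert act(u, uv) == (v, e)         # u |> uv = v,   u <| uv = e
```

Here `factorise` implements the unique factorisation $G=RK$, from which $\cdot$, $\tau$, ${\triangleright}$, ${\triangleleft}$ are read off exactly as in the text.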
Writing $\delta_0=\delta_e,\delta_1=\delta_{uv},\delta_2=\delta_v$ we have the same algebra as before (as we had to) and now the coproduct etc.,
\[ \bar\Delta u=u\mathop{{\otimes}} 1+\delta_0u\mathop{{\otimes}} (u-1),\quad \bar\Delta\delta_0=\delta_0\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_2+\delta_1\mathop{{\otimes}}\delta_2 \]
\[
\bar\Delta\delta_1=\delta_0\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_1,\quad \bar\Delta\delta_2=\delta_0\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_1,\]
\[ \bar\phi=1\mathop{{\otimes}} 1\mathop{{\otimes}} 1+ (\delta_1\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_1)\mathop{{\otimes}}(u-1)=\bar\phi^{-1}\]
for the quasibialgebra. We used the $\tau,{\triangleright},{\triangleleft},\cdot$ for $\bar R$ for these direct calculations.
Now we consider twisting with
\[ c_0=e,\quad c_1=(uv)^{-1}uv=e,\quad c_2=(vu)^{-1}v=u,\quad \chi=1\mathop{{\otimes}} 1+ \delta_2\mathop{{\otimes}} (u-1)=\chi^{-1}\]
and check twisting the coproducts
\[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(u\mathop{{\otimes}} u)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=u\mathop{{\otimes}} 1+\delta_0u\mathop{{\otimes}}(u-1)=\bar\Delta u, \]
\[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_1)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_0,\]
\[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_1+\delta_1\mathop{{\otimes}}\delta_0+\delta_2\mathop{{\otimes}}\delta_2)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_1,\]
\[ (1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))(\delta_0\mathop{{\otimes}}\delta_2+\delta_2\mathop{{\otimes}}\delta_0+\delta_1\mathop{{\otimes}}\delta_1)(1\mathop{{\otimes}} 1+\delta_2\mathop{{\otimes}} (u-1))=\bar\Delta\delta_2.\]
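Since $\Xi$ is only 6-dimensional here, the conjugation of the coproducts by $\chi$ can also be checked by brute force. The following Python sketch (an illustrative aside) realises the algebra on the basis $\delta_i u^a$ with product $(\delta_i u^a)(\delta_j u^b)=\delta_{i,u^a{\triangleright} j}\,\delta_i u^{a+b}$, where $u{\triangleright}$ swaps the labels $1,2$, and verifies $\chi(\Delta x)\chi^{-1}=\bar\Delta x$ on generators, with $\bar\Delta$ as displayed earlier in this example:

```python
def swap(a, j):                       # u^a |> j on the coset labels {0,1,2}
    return j if a == 0 or j == 0 else 3 - j

def mul1(x, y):                       # (delta_i u^a)(delta_j u^b) in Xi, or None if zero
    (i, a), (j, b) = x, y
    return (i, (a + b) % 2) if i == swap(a, j) else None

def mul(X, Y):                        # product in Xi (x) Xi; elements are {basis: coeff}
    Z = {}
    for (x1, x2), cx in X.items():
        for (y1, y2), cy in Y.items():
            z1, z2 = mul1(x1, y1), mul1(x2, y2)
            if z1 is not None and z2 is not None:
                Z[z1, z2] = Z.get((z1, z2), 0) + cx * cy
    return {k: c for k, c in Z.items() if c}

def elem(terms):                      # dict from a list of (coeff, basis, basis) terms
    Z = {}
    for c, x, y in terms:
        Z[x, y] = Z.get((x, y), 0) + c
    return {k: c for k, c in Z.items() if c}

I3 = range(3)
# chi = 1 (x) 1 + delta_2 (x) (u - 1), with 1 = sum_i delta_i and u = sum_i delta_i u
chi = elem([(1, (i, 0), (j, 0)) for i in I3 for j in I3]
           + [(1, (2, 0), (j, 1)) for j in I3] + [(-1, (2, 0), (j, 0)) for j in I3])
# Delta u = u (x) u (first transversal); bar-Delta u = u (x) 1 + delta_0 u (x) (u - 1)
Du = elem([(1, (i, 1), (j, 1)) for i in I3 for j in I3])
Dbar_u = elem([(1, (i, 1), (j, 0)) for i in I3 for j in I3]
              + [(1, (0, 1), (j, 1)) for j in I3] + [(-1, (0, 1), (j, 0)) for j in I3])
assert mul(mul(chi, Du), chi) == Dbar_u           # chi (Delta u) chi^{-1} = bar-Delta u
# likewise for the delta_i, with bar-Delta as in the display for this transversal
Delta = {k: elem([(1, (j, 0), ((k - j) % 3, 0)) for j in I3]) for k in I3}
Dbar = {0: elem([(1, (0, 0), (0, 0)), (1, (2, 0), (2, 0)), (1, (1, 0), (2, 0))]),
        1: elem([(1, (0, 0), (1, 0)), (1, (1, 0), (0, 0)), (1, (2, 0), (1, 0))]),
        2: elem([(1, (0, 0), (2, 0)), (1, (2, 0), (0, 0)), (1, (1, 0), (1, 0))])}
for k in I3:
    assert mul(mul(chi, Delta[k]), chi) == Dbar[k]
```

Note that $\chi=\chi^{-1}$ here, so conjugating by $\chi$ on both sides is the Drinfeld twist formula.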
One can also check that (\ref{taucond}) hold, e.g. for the first half,
\[ \bar 2=\bar 1\bar\cdot\bar 1=\overline{1+c_1{\triangleright} 1}=\overline{1+1},\quad \bar 0=\bar 1\bar\cdot\bar 2=\overline{1+c_1{\triangleright} 2}=\overline{1+2},\]
\[ \bar 1=\bar2\bar\cdot\bar 1=\overline{2+c_2{\triangleright} 1}=\overline{2+2},\quad \bar 0=\bar2\bar\cdot\bar 2=\overline{2+c_2{\triangleright} 2}=\overline{2+1}\]
as it must.
Now we apply the twisting of antipodes in Corollary~\ref{twistant}, remembering to do calculations now with $R$ where $\tau,{\triangleleft}$ are trivial, to get
\[ \bar S=S,\quad \bar\alpha=\delta_0+\delta_1c_2+\delta_2c_1=1+\delta_1(u-1),\quad \bar\beta=\delta_0+\delta_2c_2+\delta_1c_1=1+\delta_2(u-1),\]
which obey $\bar\alpha^2=\bar\alpha$ and $\bar\beta^2=\bar\beta$ and are therefore not (left or right) invertible. Hence, we cannot set either equal to 1 by $U$ and there is an antipode, but it is not regular. One can check the antipode indeed works:
\begin{align*}(Su)\bar\alpha+ (Su) (S\delta_0)\bar\alpha(u-1)&=u(1+\delta_1(u-1))+\delta_0 u(1+\delta_1(u-1))(u-1)\\
&=u+\delta_2(1-u)+\delta_0(1-u)=u+(1-\delta_1)(1-u)=\bar\alpha\\
u\bar\beta+\delta_0u\bar\beta S(u-1)&=u(1+\delta_2(u-1))+\delta_0 u(1+\delta_2(u-1))(u-1)\\
&=u+\delta_1(1-u)+\delta_0(1-u)=u+(1-\delta_2)(1-u)=\bar\beta
\end{align*}
\begin{align*} (S\delta_0)\bar\alpha\delta_0&+(S\delta_2)\bar\alpha\delta_2+(S\delta_1)\bar\alpha\delta_2=\delta_0(1+\delta_1(u-1))\delta_0+(1-\delta_0)(1+\delta_1(u-1))\delta_2\\
&=\delta_0+(1-\delta_0)\delta_2+\delta_1(\delta_1 u-\delta_2)=\delta_0+\delta_2+\delta_1u=\bar\alpha\\
\delta_0\bar\beta S\delta_0&+\delta_2\bar\beta S\delta_2+\delta_1\bar\beta S\delta_2=\delta_0(1+\delta_2(u-1))\delta_0+(1-\delta_0)(1+\delta_2(u-1))\delta_1\\
&=\delta_0+(1-\delta_0)\delta_1+(1-\delta_0)\delta_2(u-1)\delta_1=\delta_0+\delta_1+\delta_2(\delta_2u-\delta_1)=\bar\beta
\end{align*}
and more simply on $\delta_1,\delta_2$.
The fourth transversal has a similar pattern to the 3rd, so we do not list its coproduct etc. explicitly.
\end{example}
In general, there will be many different choices of transversal. For $S_{n-1}\subset S_n$, the first two transversals for $S_2\subset S_3$ generalise as follows, giving a Hopf algebra and a strictly quasi-Hopf algebra respectively.
\begin{example}\rm {\sl (i) First transversal.} Here $R=\mathbb{Z}_n$ is a subgroup with $i=0,1,\cdots,n-1$ mod $n$ corresponding to the elements $(12\cdots n)^i$. Neither subgroup
is normal for $n\ge 4$, so both actions are nontrivial but $\tau$ is trivial. This expresses $S_n$ as a double cross product $\mathbb{Z}_n{\bowtie} S_{n-1}$ (with trivial $\tau$) and the matched pair of actions
\[ \sigma{\triangleright} i=\sigma(i),\quad (\sigma{\triangleleft} i)(j)=\sigma(i+j)-\sigma(i)\]
for $i,j=1,\cdots,n-1$, where we add and subtract mod $n$ but view the results in the range $1,\cdots, n$. This was actually found by twisting from the 2nd transversal below, but we can check it directly as follows. First,
\[\sigma (12\cdots n)^i= (\sigma{\triangleright} i)(\sigma{\triangleleft} i)=(12\cdots n)^{\sigma(i)}\left((12\cdots n)^{-\sigma(i)}\sigma(12\cdots n)^i\right)\]
and we check that the second factor sends $n\to i\to \sigma(i) \to n$, hence fixes $n$ and lies in $S_{n-1}$. It follows by the known fact of unique factorisation into these subgroups that this factor is $\sigma{\triangleleft} i$. Its action on $j=1,\cdots, n-1$ is
\[ (\sigma{\triangleleft} i)(j)=(12\cdots n)^{-\sigma(i)}\sigma(12\cdots n)^i(j)=\begin{cases} n-\sigma(i) & i+j=n\\ \sigma(i+j)-\sigma(i) & i+j\ne n\end{cases}=\sigma(i+j)-\sigma(i),\]
where $\sigma(i+j)\ne \sigma(i)$ as $i+j\ne i$ and $\sigma(n)=n$ as $\sigma\in S_{n-1}$. It also follows since the two factors are subgroups that these are indeed a matched pair of actions. We can also check the matched pair axioms directly. Clearly, ${\triangleright}$ is an action and
\[ \sigma(i)+ (\sigma{\triangleleft} i)(j)=\sigma(i)+\sigma(i+j)-\sigma(i)=\sigma{\triangleright}(i+j)\] for $i,j\in\mathbb{Z}_n$. On the other side,
\begin{align*}( (\sigma{\triangleleft} i){\triangleleft} j)(k)&=(\sigma{\triangleleft} i)(j+k)-(\sigma{\triangleleft} i)(j)=\sigma(i+(j+k))-\sigma(i)-\sigma(i+j)+\sigma(i)\\
&=\sigma((i+j)+k)-\sigma(i+j)=(\sigma{\triangleleft}(i+j))(k),\\
((\sigma{\triangleleft}(\tau{\triangleright} i))(\tau{\triangleleft} i))(j)&=(\sigma{\triangleleft}\tau(i))(\tau(i+j)-\tau(i))=\sigma(\tau(i)+\tau(i+j)-\tau(i)) -\sigma(\tau(i))\\
&= \sigma(\tau(i+j))-\sigma(\tau(i))=((\sigma\tau){\triangleleft} i)(j)\end{align*}
for $i,j\in \mathbb{Z}_n$ and $k\in 1,\cdots,n-1$.
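The matched pair just verified can also be confirmed numerically for small $n$. The following Python sketch (an illustrative aside, shown for $n=4$) checks the factorisation $\sigma(12\cdots n)^i=(12\cdots n)^{\sigma{\triangleright} i}(\sigma{\triangleleft} i)$ with the stated formulas, for all $\sigma\in S_{n-1}$ and $i\in\mathbb{Z}_n$:

```python
from itertools import permutations

n = 4                                      # check S_3 inside S_4; any n >= 3 works
def mul(p, q):                             # (pq)(x) = p(q(x)); p maps the point x to p[x-1]
    return tuple(p[q[x - 1] - 1] for x in range(1, n + 1))
def power(p, m):
    r = tuple(range(1, n + 1))
    for _ in range(m % n):
        r = mul(r, p)
    return r
def rep(m):                                # residue class of m represented in {1,...,n}
    return (m - 1) % n + 1

c = tuple(list(range(2, n + 1)) + [1])     # the n-cycle (12...n)
for s in permutations(range(1, n)):        # sigma in S_{n-1}, extended by sigma(n) = n
    sigma = s + (n,)
    for i in range(n):                     # i in Z_n, with 0 represented by n
        tri = sigma[rep(i) - 1] % n        # sigma |> i = sigma(i) mod n
        # sigma <| i from the formula (sigma <| i)(j) = sigma(i+j) - sigma(i) mod n
        tl = tuple(rep(sigma[rep(i + j) - 1] - sigma[rep(i) - 1])
                   for j in range(1, n)) + (n,)
        assert sorted(tl) == list(range(1, n + 1))       # sigma <| i lies in S_{n-1}
        assert mul(sigma, power(c, i)) == mul(power(c, tri), tl)
```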
This gives $ \mathbb{C} S_{n-1}{\triangleright\!\blacktriangleleft}\mathbb{C}(\mathbb{Z}_n)$ as a natural bicrossproduct Hopf algebra which we identify with $\Xi$ (which we prefer to build on the other tensor product order). From Lemma~\ref{Xibialg} and Proposition~\ref{standardS}, this is spanned by products of $\delta_i$ for $i=0,\cdots n-1$ as our labelling of $R=\mathbb{Z}_n$ and $\sigma\in S_{n-1}=K$, with cross relations $\sigma\delta_i=\delta_{\sigma(i)}\sigma$, $\sigma\delta_0=\delta_0\sigma$, and coproduct etc.,
\[ \Delta \delta_i=\sum_{j\in \mathbb{Z}_n}\delta_j\mathop{{\otimes}}\delta_{i-j},\quad \Delta\sigma=\sigma\delta_0\mathop{{\otimes}}\sigma+\sum_{i=1}^{n-1}\sigma\delta_i\mathop{{\otimes}}(\sigma{\triangleleft} i),\quad {\epsilon}\delta_i=\delta_{i,0},\quad{\epsilon}\sigma=1,\]
\[ S\delta_i=\delta_{-i},\quad S\sigma=\sigma^{-1}\delta_0+\sum_{i=1}^{n-1}(\sigma^{-1}{\triangleleft} i)\delta_{-i},\]
where $\sigma{\triangleleft} i$ is as above for $i=1,\cdots,n-1$. This is a usual Hopf $*$-algebra with $\delta_i^*=\delta_i$ and $\sigma^*=\sigma^{-1}$ according to Corollary~\ref{corstar}.
\medskip
{\sl (ii) 2nd transversal.} Here $R=\{e, (1\, n),(2\, n),\cdots,(n-1\, n)\}$, which has nontrivial ${\triangleright}$ in which $S_{n-1}$ permutes the 2-cycles according to the $i$ label, but now trivial ${\triangleleft}$ since
\[ \sigma(i\, n)=(\sigma(i)\, n)\sigma,\quad \sigma{\triangleright} (i\ n)=(\sigma(i)\, n)\]
for all $i=1,\cdots,n-1$ and $\sigma\in S_{n-1}$. It has nontrivial $\tau$ as
\[ (i\, n )(j\, n)=(j\, n)(i\, j)\Rightarrow (i\, n)\cdot (j\, n)=(j\, n),\quad \tau((i\, n),(j\, n))=(ij)\]
for $i\ne j$, and we see that one can divide uniquely on the left but not on the right (equivalently, $\cdot$ has left but not right cancellation). We also have $(i\, n)\cdot(i\, n)=e$ and $\tau((i\, n),(i\, n))=e$ so that $(\ )^R$ is the identity map, hence $R$ is regular.
This transversal gives a cross-product quasi-Hopf algebra $\Xi=\mathbb{C} S_{n-1}{\triangleright\!\!\!<}_\tau \mathbb{C}(R)$ where $R$ is a left quasigroup (i.e. unital and with left cancellation), except that we prefer to write it with the tensor factors in the other order. From Lemma~\ref{Xibialg} and Proposition~\ref{standardS}, this is
spanned by products of $\delta_i$ and $\sigma\in S_{n-1}$, where $\delta_0$ is the delta function at $e\in R$ and $\delta_i$ at $(i,n)$ for $i=1,\cdots,n-1$. The cross relations have the same algebra $\sigma\delta_i=\delta_{\sigma(i)}\sigma$ for $i=1,\cdots,n-1$ as before but now
the tensor coproduct etc., and nontrivial associator
\[\Delta\delta_0=\sum_{i=0}^{n-1}\delta_i\mathop{{\otimes}}\delta_i,\quad \Delta\delta_i=(1-\delta_i)\mathop{{\otimes}}\delta_i+\delta_i\mathop{{\otimes}}\delta_0,\quad \Delta \sigma=\sigma\mathop{{\otimes}}\sigma,\quad {\epsilon}\delta_i=\delta_{i,0},\quad{\epsilon}\sigma=1,\]
\[ S\delta_i=\delta_{i},\quad S\sigma=\sigma^{-1},\quad \alpha=\beta=1,\]
\[\phi=(1\mathop{{\otimes}}\delta_0+\delta_0\mathop{{\otimes}}(1-\delta_0)+\sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}}\delta_i)\mathop{{\otimes}} 1+ \sum_{i,j=1\atop i\ne j}^{n-1}\delta_i\mathop{{\otimes}}\delta_j\mathop{{\otimes}} (ij).\]
This is a $*$-quasi-Hopf algebra with the same $*$ as before but now nontrivial
\[ \gamma=1,\quad \hbox{{$\mathcal G$}}=1\mathop{{\otimes}}\delta_0+\delta_0\mathop{{\otimes}}(1-\delta_0)+\sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}}\delta_i+ \sum_{i,j=1\atop i\ne j}^{n-1}\delta_i(ij)\mathop{{\otimes}}\delta_j(ij)\]
from Corollary~\ref{corstar}.
\medskip{\sl (iii) Twisting between the above two transversals.} We denote the first transversal $R=\mathbb{Z}_n$, where $i$ is identified with $(12\cdots n)^i$, and we denote the 2nd transversal by $\bar R$ with corresponding elements $\bar i=(i\ n)$. Then
\[ c_i=(12\cdots n)^{-i}(i\ n)\in S_{n-1},\quad c_i(j)=\begin{cases} n-i & j=i\\ j-i & else \end{cases}\]
for $i,j=1,\cdots,n-1$. If we use the stated ${\triangleright}$ for the first transversal then one can check that the first half of (\ref{taucond}) holds,
\[ \overline{i+c_i{\triangleright} i}=\overline{i+n-i}=e=\bar i\bar\cdot \bar i,\quad \overline{i+c_i{\triangleright} j}=\overline{i+j-i}=\bar j=\bar i\bar\cdot \bar j\]
as it must. We can also check that the actions are indeed related by twisting. Thus,
\[ \sigma{\triangleleft}\bar i=c_{\sigma{\triangleright} i}^{-1}(\sigma{\triangleleft} i)c_i=(\sigma(i),n)(12\cdots n)^{\sigma(i)}(\sigma{\triangleleft} i)(12\cdots n)^{-i}(i,n)=(\sigma(i),n)\sigma(i,n)=\sigma\]
\[ \sigma\bar{\triangleright} \bar i=(\sigma{\triangleright} i)c_{\sigma{\triangleright} i}=(12\cdots n)^{\sigma(i)}(12\cdots n)^{-\sigma(i)}(\sigma(i),n)=(\sigma(i),n),\]
where we did the computation with $\mathbb{Z}_n$ viewed in $S_n$.
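Again one can confirm the cochain data numerically for small $n$. The following Python sketch (an illustrative aside, for $n=4$) checks that $c_i=(12\cdots n)^{-i}(i\,n)$ lies in $S_{n-1}$ with the stated values $c_i(j)$, and the commutation rule $\sigma(i\, n)=(\sigma(i)\, n)\sigma$ used above:

```python
from itertools import permutations

n = 4                                       # check in S_4; any n >= 3 works
def mul(p, q):  return tuple(p[q[x - 1] - 1] for x in range(1, n + 1))
def inv(p):
    out = [0] * n
    for x in range(1, n + 1):
        out[p[x - 1] - 1] = x
    return tuple(out)
def power(p, m):
    r = tuple(range(1, n + 1))
    for _ in range(m % n):
        r = mul(r, p)
    return r
def rep(m):  return (m - 1) % n + 1         # residue of m in {1,...,n}
def transp(i):                              # the transposition (i n)
    t = list(range(1, n + 1))
    t[i - 1], t[n - 1] = n, i
    return tuple(t)

c = tuple(list(range(2, n + 1)) + [1])      # the n-cycle (12...n)
for i in range(1, n):
    ci = mul(power(inv(c), i), transp(i))   # c_i = (12...n)^{-i} (i n)
    assert ci[n - 1] == n                   # c_i lies in S_{n-1}
    for j in range(1, n):
        assert ci[j - 1] == (n - i if j == i else rep(j - i))
for s in permutations(range(1, n)):         # sigma in S_{n-1}, with sigma(n) = n
    sigma = s + (n,)
    for i in range(1, n):
        assert mul(sigma, transp(i)) == mul(transp(sigma[i - 1]), sigma)
```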
It follows that the Hopf algebra in case (i) cochain twists to the simpler quasi-Hopf algebra in case (ii). The required cochain from Theorem~\ref{thmtwist} is
\[ \chi=\delta_0\mathop{{\otimes}} 1+ \sum_{i=1}^{n-1}\delta_i\mathop{{\otimes}} (12\cdots n)^{-i}(in).\]
\end{example}
The above example is a little similar to the way that the Drinfeld $U_q(g)$ are Hopf algebras which are cochain twists of $U(g)$ viewed as a quasi-Hopf algebra. We conclude with the promised example related to the octonions. This is a version of \cite[Example~4.6]{KM2}, but with left and right swapped and some cleaned up conventions.
\begin{example}\rm
We let $G=Cl_3{>\!\!\!\triangleleft} \mathbb{Z}_2^3$, where $Cl_3$ is generated by $1,-1$ and $e_{i}$, $i=1,2,3$, with relations
\[ (-1)^2=1,\quad (-1)e_i=e_i(-1),\quad e_i^2=-1,\quad e_i e_j=-e_j e_i \]
for $i\ne j$ and the usual combination rules for the product of signs. Its elements can be enumerated as $\pm e_{\vec a}$ where $\vec{a}\in \mathbb{Z}_2^3$ is viewed in the additive group of 3-vectors with entries in the field $\mathbb{F}_2=\{0,1\}$ of order 2
and
\[ e_{\vec a}=e_1^{a_1}e_2^{a_2}e_3^{a_3},\quad e_{\vec a} e_{\vec b}=e_{\vec a+\vec b}(-1)^{\sum_{i\ge j}a_ib_j}. \]
This is the twisted group ring description of the 3-dimensional Clifford algebra over $\mathbb{R}$ in \cite{AlbMa}, but now restricted to coefficients $0,\pm1$ to give a group of order 16. For example,
\[ e_{011}e_{101}=e_2e_3 e_1e_3=e_1e_2e_3^2=-e_1e_2=-e_{110}=-e_{011+101}\]
with the sign given by the formula.
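The sign rule can be verified exhaustively by reordering generator words using only the relations $e_ie_j=-e_je_i$ ($i\ne j$) and $e_i^2=-1$. A small Python sketch (an illustrative aside; generators are indexed $0,1,2$ for $e_1,e_2,e_3$):

```python
from itertools import product

def sign_formula(a, b):             # (-1)^{sum_{i>=j} a_i b_j}, vectors 0-indexed
    return (-1) ** sum(a[i] * b[j] for i in range(3) for j in range(3) if i >= j)

def reduce_word(word):              # multiply out a word in the generators e_0, e_1, e_2
    sign, out = 1, []
    for g in word:                  # insert g, keeping the reduced word sorted
        k = len(out)
        while k > 0 and out[k - 1] > g:
            sign = -sign            # e_i e_j = -e_j e_i for i != j
            k -= 1
        if k > 0 and out[k - 1] == g:
            sign = -sign            # e_i^2 = -1
            out.pop(k - 1)
        else:
            out.insert(k, g)
    return sign, tuple(out)

for a, b in product(product((0, 1), repeat=3), repeat=2):
    word = [i for i in range(3) if a[i]] + [i for i in range(3) if b[i]]
    sign, out = reduce_word(word)
    assert sign == sign_formula(a, b)                       # the stated sign
    assert out == tuple(i for i in range(3) if (a[i] + b[i]) % 2)   # e_{a+b}
```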
We similarly write the elements of $K=\mathbb{Z}_2^3$ multiplicatively as $g^{\vec a}=g_1^{a_1}g_2^{a_2}g_3^{a_3}$ labelled by 3-vectors with values in $\mathbb{F}_2$. The generators $g_i$ commute and obey $g_i^2=e$, so the group product becomes vector addition, and the cross relations are
\[ (-1)g_i=g_i(-1),\quad e_i g_i= -g_i e_i,\quad e_i g_j=g_j e_i\]
for $i\ne j$. This implies that $G$ has order 128.
(i) If we take $R=Cl_3$ itself then this will be a subgroup and we will have for $\Xi(R,K)$ an ordinary Hopf $*$-algebra as a semidirect product $\mathbb{C} \mathbb{Z}_2^3{\triangleright\!\!\!<} \mathbb{C}(Cl_3)$ except that we build it on the opposite tensor product.
(ii) Instead, we take as representatives the eight elements again labelled by 3-vectors over $\mathbb{F}_2$,
\[ r_{000}=1,\quad r_{001}=e_3,\quad r_{010}=e_2,\quad r_{011}=e_2e_3g_1\]
\[ r_{100}=e_1,\quad r_{101}=e_1e_3 g_2,\quad r_{110}=e_1e_2g_3,\quad r_{111}=e_1e_2e_3 g_1g_2g_3 \]
and their negations, as a version of \cite[Example~4.6]{KM2}. This can be written compactly as
\[ r_{\vec a}=e_{\vec a}g_1^{a_2 a_3}g_2^{a_1a_3}g_3^{a_1a_2}\]
\begin{proposition}\cite{KM2} This choice of transversal makes $(R,\cdot)$ the octonion two-sided inverse property quasigroup $G_{\O}$ in the Albuquerque-Majid description of the octonions \cite{AlbMa},
\[ r_{\vec a}\cdot r_{\vec b}=(-1)^{f(\vec a,\vec b)} r_{\vec a+\vec b},\quad f(\vec a,\vec b)=\sum_{i\ge j}a_ib_j+ a_1a_2b_3+ a_1b_2a_3+b_1a_2a_3 \]
with the product on signed elements behaving as if bilinear. The action ${\triangleleft}$ is trivial, while the left action and the cocycle $\tau$ are
\[ g^{\vec a}{\triangleright} r_{\vec b}=(-1)^{\vec a\cdot \vec b}r_{\vec b},\quad \tau(r_{\vec a},r_{\vec b})=g^{\vec a\times\vec b}=g_1^{a_2 b_3+a_3 b_2}g_2^{a_3 b_1+a_1b_3} g_3^{a_1b_2+a_2b_1}\]
with the action extended with signs as if linearly and $\tau$ independent of signs in either argument.
\end{proposition}
{\noindent {\bfseries Proof:}\quad } We check in the group
\begin{align*} r_{\vec a}r_{\vec b}&=e_{\vec a}g_1^{a_2 a_3}g_2^{a_1a_3}g_3^{a_1a_2}e_{\vec b}g_1^{b_2 b_3}g_2^{b_1b_3}g_3^{b_1b_2}\\
&=e_{\vec a}e_{\vec b}(-1)^{b_1a_2a_3+b_2a_1a_3+b_3a_1a_2} g_1^{a_2a_3+b_2b_3}g_2^{a_1a_3+b_1b_3}g_3^{a_1a_2+b_1b_2}\\
&=(-1)^{f(a,b)}r_{\vec a+\vec b}g_1^{a_2a_3+b_2b_3-(a_2+b_2)(a_3+b_3)}g_2^{a_1a_3+b_1b_3-(a_1+b_1)(a_3+b_3)}g_3^{a_1a_2+b_1b_2-(a_1+b_1)(a_2+b_2)}\\
&=(-1)^{f(a,b)}r_{\vec a+\vec b}g_1^{a_2b_3+b_2a_3} g_2^{a_1b_3+b_1a_3}g_3^{a_1b_2+b_1a_2},
\end{align*}
from which we read off $\cdot$ and $\tau$. For the second equality, we moved the $g_i$ to the right using the commutation rules in $G$. For the third equality we used the product in $Cl_3$ in our description above and then converted $e_{\vec a+\vec b}$ to $r_{\vec a+\vec b}$. \endproof
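As an illustrative aside, the factorisation $r_{\vec a}r_{\vec b}=(-1)^{f(\vec a,\vec b)}r_{\vec a+\vec b}g^{\vec a\times\vec b}$ used in the proof can be verified over all 64 pairs by modelling the group $G$ directly, with a triple $(s,\vec a,\vec c)$ standing for $s\,e_{\vec a}g^{\vec c}$:

```python
from itertools import product

V = list(product((0, 1), repeat=3))
def add(a, b):    return tuple((x + y) % 2 for x, y in zip(a, b))
def dot(a, b):    return sum(x * y for x, y in zip(a, b)) % 2
def cross(a, b):  return ((a[1]*b[2] + a[2]*b[1]) % 2,
                          (a[2]*b[0] + a[0]*b[2]) % 2,
                          (a[0]*b[1] + a[1]*b[0]) % 2)

def gmul(x, y):
    # product in G: (s, a, c)(t, b, d) for s e_a g^c and t e_b g^d, using
    # g^c e_b = (-1)^{c.b} e_b g^c and e_a e_b = (-1)^{sum_{i>=j} a_i b_j} e_{a+b}
    s, a, c = x
    t, b, d = y
    sign = s * t * (-1) ** (dot(c, b)
                            + sum(a[i] * b[j] for i in range(3) for j in range(3) if i >= j))
    return (sign, add(a, b), add(c, d))

def r(a):         # transversal element r_a = e_a g_1^{a2 a3} g_2^{a1 a3} g_3^{a1 a2}
    return (1, a, (a[1] * a[2], a[0] * a[2], a[0] * a[1]))

def f(a, b):
    return (sum(a[i] * b[j] for i in range(3) for j in range(3) if i >= j)
            + a[0]*a[1]*b[2] + a[0]*b[1]*a[2] + b[0]*a[1]*a[2]) % 2

for a, b in product(V, repeat=2):
    sign, evec, gvec = gmul(r(a), r(b))
    rab = r(add(a, b))
    # r_a r_b = (-1)^{f(a,b)} r_{a+b} g^{a x b} in G
    assert sign == (-1) ** f(a, b) and evec == rab[1] and gvec == add(rab[2], cross(a, b))
```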
The product of the quasigroup $G_\O$ here is the same as the octonions product as an algebra over $\mathbb{R}$ in the description of \cite{AlbMa}, restricted to elements of the form $\pm r_{\vec a}$. The cocycle-associativity property of $(R,\cdot)$ says
\[ r_{\vec a}\cdot(r_{\vec b}\cdot r_{\vec c})=(r_{\vec a}\cdot r_{\vec b})\cdot\tau(\vec a,\vec b){\triangleright} r_{\vec c}=(r_{\vec a}\cdot r_{\vec b})\cdot r_{\vec c} (-1)^{(\vec a\times\vec b)\cdot\vec c}\]
giving -1 exactly when the 3 vectors are linearly independent as 3-vectors over $\mathbb{F}_2$. One also has $r_{\vec a}\cdot r_{\vec b}=\pm r_{\vec b}\cdot r_{\vec a}$ with $-1$ exactly when the two vectors are linearly independent, which means both nonzero and not equal, and $r_{\vec a} \cdot r_{\vec a}=\pm1 $ with $-1$ exactly when the one vector is linearly independent, i.e. not zero. (These are exactly the quasiassociativity, quasicommutativity and norm properties of the octonions algebra in the description of \cite{AlbMa}.) The 2-sided inverse is
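The characterisation of the sign via linear independence is just the statement that $(\vec a\times\vec b)\cdot\vec c=\det(\vec a,\vec b,\vec c)$ over $\mathbb{F}_2$. An illustrative Python check over all $512$ triples, cross-checked against the count $7\cdot6\cdot4=168$ of ordered bases of $\mathbb{F}_2^3$:

```python
from itertools import product

V = list(product((0, 1), repeat=3))
def dot(a, b):    return sum(x * y for x, y in zip(a, b)) % 2
def cross(a, b):  return ((a[1]*b[2] + a[2]*b[1]) % 2,
                          (a[2]*b[0] + a[0]*b[2]) % 2,
                          (a[0]*b[1] + a[1]*b[0]) % 2)

def independent(vs):              # linear independence over F_2, by elimination
    pivots = {}                   # pivot bit position -> reduced row (as a bitmask)
    for vv in vs:
        r = int("".join(map(str, vv)), 2)
        for p, b in pivots.items():
            if r >> p & 1:
                r ^= b
        if r == 0:
            return False
        pivots[r.bit_length() - 1] = r
    return True

count = 0
for a, b, c in product(V, repeat=3):
    assert dot(cross(a, b), c) == (1 if independent([a, b, c]) else 0)
    count += independent([a, b, c])
assert count == 7 * 6 * 4         # ordered bases of F_2^3, i.e. |GL_3(F_2)| = 168
```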
\[ r_{\vec a}^{-1}=(-1)^{n(\vec a)}r_{\vec a},\quad n(0)=0,\quad n(\vec a)=1,\quad \forall \vec a\ne 0\]
with the inversion operation extended as usual with respect to signs.
The quasi-Hopf algebra $\Xi(R,K)$ is spanned by $\delta_{(\pm,\vec a)}$ labelled by the points of $R$ and products of the $g_i$ with the relations $g^{\vec a}\delta_{(\pm, \vec b)}=\delta_{(\pm (-1)^{\vec a\cdot\vec b},\vec b)} g^{\vec a}$ and tensor coproduct etc.,
\[ \Delta \delta_{(\pm, \vec a)}=\sum_{(\pm', \vec b)}\delta_{(\pm' ,\vec b)}\mathop{{\otimes}}\delta_{(\pm\pm'(-1)^{n(\vec b)+f(\vec b,\vec a)},\vec a+\vec b)},\quad \Delta g^{\vec a}=g^{\vec a}\mathop{{\otimes}} g^{\vec a},\quad {\epsilon}\delta_{(\pm,\vec a)}=\delta_{\vec a,0}\delta_{\pm,+},\quad {\epsilon} g^{\vec a}=1,\]
\[S\delta_{(\pm,\vec a)}=\delta_{(\pm(-1)^{n(\vec a)},\vec a)},\quad S g^{\vec a}=g^{\vec a},\quad\alpha=\beta=1,\quad \phi=\sum_{(\pm, \vec a),(\pm',\vec{b})} \delta_{(\pm,\vec a)}\mathop{{\otimes}}\delta_{(\pm',\vec{b})}\mathop{{\otimes}} g^{\vec a\times\vec b}\]
and from Corollary~\ref{corstar} is a $*$-quasi-Hopf algebra with $*$ the identity on $\delta_{(\pm,\vec a)},g^{\vec a}$ and
\[ \gamma=1,\quad \hbox{{$\mathcal G$}}=\sum_{(\pm, \vec a),(\pm',\vec{b})} \delta_{(\pm,\vec a)}g^{\vec a\times\vec b}
\mathop{{\otimes}}\delta_{(\pm',\vec{b})}g^{\vec a\times\vec b}.\]
The general form here is not unlike our $S_n$ example.
\end{example}
\subsection{Module categories context}
This section does not contain anything new beyond \cite{Os2,EGNO}, but completes the categorical picture that connects our algebra $\Xi(R,K)$ to the more general context of module categories, adapted to our notations.
Our first observation is that if $\mathop{{\otimes}}: {\hbox{{$\mathcal C$}}}\times {\hbox{{$\mathcal V$}}}\to {\hbox{{$\mathcal V$}}}$ is a left action of a monoidal category ${\hbox{{$\mathcal C$}}}$ on a category ${\hbox{{$\mathcal V$}}}$ (one says that ${\hbox{{$\mathcal V$}}}$ is a left ${\hbox{{$\mathcal C$}}}$-module) then one can check that this is the same thing as a monoidal functor $F:{\hbox{{$\mathcal C$}}}\to \mathrm{ End}({\hbox{{$\mathcal V$}}})$, where ${\rm End}({\hbox{{$\mathcal V$}}})$ denotes the endofunctors of ${\hbox{{$\mathcal V$}}}$ viewed as a strict monoidal category with monoidal product the composition $\circ$ of endofunctors. Here ${\rm End}({\hbox{{$\mathcal V$}}})$ has monoidal unit $\mathrm{id}_{\hbox{{$\mathcal V$}}}$ and its morphisms are natural transformations between endofunctors. $F$ just sends an object $X\in {\hbox{{$\mathcal C$}}}$ to the endofunctor $X\mathop{{\otimes}}(\ )$ of ${\hbox{{$\mathcal V$}}}$. A monoidal functor comes with natural isomorphisms $\{f_{X,Y}\}$ and these are given tautologically by
\[ f_{X,Y}(V): F(X)\circ F(Y)(V)=X\mathop{{\otimes}} (Y\mathop{{\otimes}} V)\cong (X\mathop{{\otimes}} Y)\mathop{{\otimes}} V= F(X\mathop{{\otimes}} Y)(V)\]
as part of the monoidal action. Conversely, if given a functor $F$, we define $X\mathop{{\otimes}} V=F(X)V$ and extend the monoidal associativity of ${\hbox{{$\mathcal C$}}}$ to mixed objects using $f_{X,Y}$ to define $X\mathop{{\otimes}} (Y\mathop{{\otimes}} V)= F(X)\circ F(Y)V{\cong} F(X\mathop{{\otimes}} Y)V= (X\mathop{{\otimes}} Y)\mathop{{\otimes}} V$. The notion of a left module category is a categorification of the bijection between an algebra action $\cdot: A \mathop{{\otimes}} V\rightarrow V$ and a representation as an algebra map $A \rightarrow {\rm End}(V)$. There is an equally good notion of a right ${\hbox{{$\mathcal C$}}}$-module category extending $\mathop{{\otimes}}$ to ${\hbox{{$\mathcal V$}}}\times{\hbox{{$\mathcal C$}}}\to {\hbox{{$\mathcal V$}}}$. In the same way as one uses $\cdot$ for both the algebra product and the module action, it is convenient to use $\mathop{{\otimes}}$ for both in the categorified version. Similarly for the right module version.
Another general observation is that if ${\hbox{{$\mathcal V$}}}$ is a ${\hbox{{$\mathcal C$}}}$-module category for a monoidal category ${\hbox{{$\mathcal C$}}}$ then ${\rm Fun}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})$, the (left exact) functors from ${\hbox{{$\mathcal V$}}}$ to itself that are compatible with the action of ${\hbox{{$\mathcal C$}}}$, is another monoidal category. This is denoted ${\hbox{{$\mathcal C$}}}^*_{{\hbox{{$\mathcal V$}}}}$ in \cite{EGNO}, but should not be confused with the dual of a monoidal functor which was one of the origins\cite{Ma:rep} of the centre $\hbox{{$\mathcal Z$}}({\hbox{{$\mathcal C$}}})$ construction as a special case. Also note that if $A\in {\hbox{{$\mathcal C$}}}$ is an algebra in the category then ${\hbox{{$\mathcal V$}}}={}_A{\hbox{{$\mathcal C$}}}$, the left modules of $A$ in the category, is a {\em right} ${\hbox{{$\mathcal C$}}}$-module category. If $V$ is an $A$-module then we define $V\mathop{{\otimes}} X$ as the tensor product in ${\hbox{{$\mathcal C$}}}$ equipped with an $A$-action from the left on the first factor. Moreover, for certain `nice' right module categories ${\hbox{{$\mathcal V$}}}$, there exists a suitable algebra $A\in {\hbox{{$\mathcal C$}}}$ such that ${\hbox{{$\mathcal V$}}}\simeq {}_A{\hbox{{$\mathcal C$}}}$, see \cite{Os2}\cite[Thm~7.10.1]{EGNO} in other conventions. For such module categories, ${\rm Fun}_{\hbox{{$\mathcal C$}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})\simeq {}_A{\hbox{{$\mathcal C$}}}_A$ the category of $A$-$A$-bimodules in ${\hbox{{$\mathcal C$}}}$. Here, if given an $A$-$A$-bimodule $E$ in ${\hbox{{$\mathcal C$}}}$, the corresponding endofunctor is given by $E\mathop{{\otimes}}_A(\ )$, where we require ${\hbox{{$\mathcal C$}}}$ to be Abelian so that we can define $\mathop{{\otimes}}_A$. 
This turns $V\in {}_A{\hbox{{$\mathcal C$}}}$ into another $A$-module in ${\hbox{{$\mathcal C$}}}$ and $E\mathop{{\otimes}}_A(V\mathop{{\otimes}} X){\cong} (E\mathop{{\otimes}}_A V)\mathop{{\otimes}} X$, so the construction commutes with the right ${\hbox{{$\mathcal C$}}}$-action.
Before we explain how these abstract ideas lead to ${}_K\hbox{{$\mathcal M$}}^G_K$, a more `obvious' case is the study of left module categories for ${\hbox{{$\mathcal C$}}} = {}_G\hbox{{$\mathcal M$}}$. If $K\subseteq G$ is a subgroup, we set ${\hbox{{$\mathcal V$}}} = {}_K\hbox{{$\mathcal M$}}$ for $i: K\subseteq G$. The functor ${\hbox{{$\mathcal C$}}}\to \mathrm{ End}({\hbox{{$\mathcal V$}}})$ just sends $X\in {\hbox{{$\mathcal C$}}}$ to $i^*(X)\mathop{{\otimes}}(\ )$ as a functor on ${\hbox{{$\mathcal V$}}}$, or more simply ${\hbox{{$\mathcal V$}}}$ is a left ${\hbox{{$\mathcal C$}}}$-module by $X\mathop{{\otimes}} V=i^*(X)\mathop{{\otimes}} V$. More generally\cite{Os2}\cite[Example~7.4.9]{EGNO}, one can include a cocycle $\alpha\in H^2(K,\mathbb{C}^\times)$ since we are only interested in monoidal equivalence, and this data $(K,\alpha)$ parametrises all indecomposable left ${}_G\hbox{{$\mathcal M$}}$-module categories. Moreover, here $\mathrm{ End}({\hbox{{$\mathcal V$}}})\simeq {}_K\hbox{{$\mathcal M$}}_K$, the category of $K$-bimodules, where a bimodule $E$ acts by $E\mathop{{\otimes}}_{\mathbb{C} K}(\ )$. So the data we need for a ${}_G\hbox{{$\mathcal M$}}$-module category is a monoidal functor ${}_G\hbox{{$\mathcal M$}}\to {}_K\hbox{{$\mathcal M$}}_K$. This is of potential interest but is not the construction we were looking for.
Rather, we are interested in right module categories of ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, the category of $G$-graded vector spaces. It turns out that these are classified by the exact same data $(K,\alpha)$ (this is related to the fact that the $\hbox{{$\mathcal M$}}^G,{}_G\hbox{{$\mathcal M$}}$ have the same centre) but the construction is different. Thus, if $K\subseteq G$ is a subgroup, we consider $A=\mathbb{C} K$ regarded as an algebra in ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$ by $|x|=x$ viewed in $G$. One can also twist this by a cocycle $\alpha$, but here we stick to the trivial case. Then ${\hbox{{$\mathcal V$}}}={}_A{\hbox{{$\mathcal C$}}}={}_K\hbox{{$\mathcal M$}}^G$, the category of $G$-graded left $K$-modules, is a right ${\hbox{{$\mathcal C$}}}$-module category. Explicitly, if $X\in {\hbox{{$\mathcal C$}}}$ is a $G$-graded vector space and $V\in{\hbox{{$\mathcal V$}}}$ a $G$-graded left $K$-module then
\[ V\mathop{{\otimes}} X,\quad x.(v\mathop{{\otimes}} w)=x.v\mathop{{\otimes}} w,\quad |v\mathop{{\otimes}} w|=|v||w|,\quad \forall\ v\in V,\ w\in X\]
is another $G$-graded left $K$-module. Finally, by the general theory, there is an associated monoidal category
\[ {\hbox{{$\mathcal C$}}}^*_{\hbox{{$\mathcal V$}}}:={\rm Fun}_{{\hbox{{$\mathcal C$}}}}({\hbox{{$\mathcal V$}}},{\hbox{{$\mathcal V$}}})\simeq {}_K\hbox{{$\mathcal M$}}^G_K\simeq {}_{\Xi(R,K)}\hbox{{$\mathcal M$}},\]
which is the desired category to describe quasiparticles on boundaries in \cite{KK}. Conversely, if ${\hbox{{$\mathcal V$}}}$ is an indecomposable right ${\hbox{{$\mathcal C$}}}$-module category for ${\hbox{{$\mathcal C$}}}=\hbox{{$\mathcal M$}}^G$, it is explained in \cite{Os2}\cite[Example~7.4.10]{EGNO} (in other conventions) that the set of indecomposable objects has a transitive action of $G$ and hence can be identified with $G/K$ for some subgroup $K\subseteq G$. This can be used to put the module category up to equivalence in the above form (with some cocycle $\alpha$).
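As a consistency check in the above conventions (this verification is ours rather than quoted from the references), note that the $\mathbb{C} K$-action on the first tensor factor preserves the grading: the module structure map is a morphism in $\hbox{{$\mathcal M$}}^G$ with $|x|=x$, so $|x.v|=x|v|$ and hence
\[ |x.(v\mathop{{\otimes}} w)|=|x.v|\,|w|=x|v||w|=x\,|v\mathop{{\otimes}} w|,\qquad \forall\ x\in K,\]
confirming that $V\mathop{{\otimes}} X$ is again an object of ${}_K\hbox{{$\mathcal M$}}^G$.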
\section{Concluding remarks}\label{sec:rem}
We have given a detailed account of the algebra behind the treatment of boundaries in the Kitaev model based on subgroups $K$ of a finite group $G$. New results include the quasi-bialgebra $\Xi(R,K)$ in full generality, a more direct derivation from the category ${}_K\hbox{{$\mathcal M$}}^G_K$ that connects to the module category point of view, a theorem that $\Xi(R,K)$ changes by a Drinfeld twist as $R$ changes, and a $*$-quasi-Hopf algebra structure that ensures nice properties for the category of representations (these form a strong bar category). On the computer science side, we edged towards how one might use these ideas in quantum computations and detect quasiparticles across ribbons where one end is on a boundary. We also gave new decomposition formulae relating representations of $D(G)$ in the bulk to those of $\Xi(R,K)$ on the boundary.
Both the algebraic and the computer science aspects can be taken much further. The case treated here of trivial cocycle $\alpha$ is already complicated enough, but the ideas do extend to include nontrivial cocycles and should similarly be worked out. Whereas most of the abstract literature on
such matters is at the conceptual level where we work up to categorical equivalence, we set out to give constructions more explicitly, which we believe is essential for concrete calculations and should also be relevant to the physics. For example, much of the literature on anyons is devoted to so-called $F$-moves which express the associativity isomorphisms even though, by Mac Lane's theorem, monoidal categories are equivalent to strict ones. On the physics side, the covariance properties of ribbon operators also involve the coproduct and hence how they are realised depends on the choice of $R$. The same applies to how $*$ interacts with tensor products, which would be relevant to the unitarity properties of composite systems. Of interest, for example, should be the case of a lattice divided into two parts $A,B$ with a boundary between them and how the entropy of states in the total space relates to that in the subsystems. This is an idea of considerable interest in quantum gravity, which has certain parallels with quantum computing and could be explored concretely using the results of the paper. We would also like to expand further the concrete use of patches and lattice surgery, as we considered only the cases of boundaries with $K=\{e\}$ and $K=G$, and only a square geometry. Additionally, it would be useful to know under what conditions the model gives universal quantum computation. While there are broadly similar ideas in the physics literature, e.g., \cite{CCW}, we believe our fully explicit treatment will help to take these forward.
Further on the algebra side, the Kitaev model generalises easily to replace $G$ by a finite-dimensional semi-simple Hopf algebra, with some aspects also in the non-semisimple case\cite{CowMa}. The same applies easily enough to at least a quasi-bialgebra associated to an inclusion $L\subseteq H$ of finite-dimensional Hopf algebras\cite{PS3} and to the corresponding module category picture. Ultimately here, it is the nonsemisimple case that is of interest as such Hopf algebras (e.g. of the form of reduced quantum groups $u_q(g)$) generate the categories where anyons as well as TQFT topological invariants live. It is also known that by promoting the finite group input of the Kitaev model to a more general weak Hopf algebra, one can obtain any unitary fusion category in the role of ${\hbox{{$\mathcal C$}}}$\cite{Chang}. There remains a lot of work, therefore, to properly connect these theories to computer science and in particular to established methods for quantum circuits. A step here could be braided ZX-calculus\cite{Ma:fro}, although precisely how remains to be developed. These are some directions for further work.
\section*{Data availability statement}
Data sharing is not applicable to this article as no new data were created or analysed in this study.
\input{appendix}
\end{document}
\section{Introduction}
The study of the beneficial role of disorder in a broad range of biological, physical, and chemical phenomena, has become a fundamental research topic in complex systems dynamics.
A seminal work in the field was the introduction of stochastic resonance (SR) in the early eighties, initially proposed to explain the occurrence of Earth ice ages~\cite{Benzi-1981a} and later on studied by numerous authors across various disciplines~\cite{Nicolis-1981a,Nicolis-1982a,Benzi-1983a,McNamara-1988a,Gammaitoni-1989a,Matteucci-1989a,Jung-1991a,Gammaitoni-1991a,Mori-2002a,Rao-2002a}. SR can happen when a nonlinear system is driven simultaneously by a periodic external forcing and noise, resulting in an amplification of the system response to the external signal~\cite{Heinsalu-2009a}.
Some of the subsequent studies showed that significant noise-driven effects, analogous to SR, can be observed also without periodicity of the external signal~\cite{Gang-1993a}, and even in the absence of any external signal, as in self-induced stochastic resonance~\cite{MURATOV2005227,Marius-2017} and coherence resonance~\cite{Pikovsky-1997a}.
Coherence resonance (CR) is an ordered response of a nonlinear excitable system to an optimal noise amplitude, resulting in regular pulses.
Beyond its effects on a single nonlinear unit, the role of noise was also extensively studied from the standpoint of its ability to improve synchronization in coupled oscillator networks, again both in the absence and in the presence of an external forcing~\cite{Jung-1995a,Liu-1999a,Busch-2003a}.
Broadly speaking, one may say that the first twenty years of research in this field focused, to a large degree, on the effects of noise in bistable or excitable systems~\cite{Gammaitoni-1998a,McDonnell-2009a}, comprising either several or just one element.
At the beginning of the new century, it was found that somewhat analogous effects to those of noise can be produced in networks of coupled oscillators through the heterogeneity of the oscillator population~\cite{Cartwright-2000a,Tessone-2006a}.
This led to the introduction of diversity-induced resonance (DIR), which denotes the amplification of a network response to an external signal, driven by the heterogeneity of network elements~\cite{Tessone-2006a,Toral-2009a,Chen-2009a,Wu-2010a,Wu-2010b,Patriarca-2012a,Tessone-2013a,Grace-2014a,Patriarca-2015a,Liang-2020a}. Just like SR, also DIR can occur both in the presence and in the absence of an external forcing.
In the latter case it has been named diversity-induced coherence~\cite{KAMAL-2015a}.
It should be clear from the above that SR and CR can occur even in systems made of a single element, therefore they are not intrinsically collective phenomena. Instead, by definition, DIR represents a collective disorder effect driven by population heterogeneity.
Most of the previous literature has emphasized either the analogies between SR and DIR~\cite{Tessone-2006a,Tessone-2007a}, considering them as two faces of the same medal, or the possibility to enhance resonance induced by noise thanks to diversity optimization (or vice versa)~\cite{degliesposti-2001a,Li-2012a,Li-2014a,Gassel-2007a}.
Relatively little work~\cite{Zhou-2001a,Glatt-2008a} has been devoted to the implications of the above-mentioned intrinsic difference between the two phenomena, which has not yet been fully analyzed. In this paper, by systematically studying the prototypical model of a heterogeneous network of pancreatic $\beta$-cells~\cite{Cartwright-2000a,Scialla-2021a} with the addition of noise, we provide insights into the different mechanisms by which diversity and noise can have markedly distinct effects on network dynamics.
\section{Model}
\label{model}
As a paradigmatic example of a system of coupled nonlinear units, we investigate an excitable network of FitzHugh-Nagumo elements.
Individual elements of such network are described by the dimensionless FitzHugh-Nagumo equations~\cite{Fitzhugh-1960a,FitzHugh-1961a,Nagumo-1962a,Cartwright-2000a,Scialla-2021a}:
\begin{eqnarray}
\dot{x} &=& a \left( x - x^3/3 + y \right) \, ,
\label{eq_FN1a}
\\
\dot{y} &=& - \left( x + by - J \right)/a .
\label{eq_FN1b}
\end{eqnarray}
When modelling the behavior of a $\beta$-cell, the variable $x(t)$ represents the fast relaxing membrane potential, while $y(t)$ is a recovery variable mimicking the slow potassium channel gating.
Depending on the value of $J$, the unit will be in an oscillatory state if $|J|< \varepsilon$ or in an excitable state if $|J| \ge \varepsilon$, where
\begin{equation}
\label{epsilon}
\varepsilon = \frac{3 a^2 - 2 a^2 b -b^2}{3 a^3} \sqrt{a^2 - b} \, .
\end{equation}
In addition to determining the width of the oscillatory interval $ (-\varepsilon,+\varepsilon)$, parameters $a$ and $b$ define the oscillation waveform and period.
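For concreteness, Eq.~(\ref{epsilon}) is straightforward to evaluate numerically. The short Python sketch below (the function names are ours) classifies a single unit for the parameter values $a=60$, $b=1.45$ used in the simulations later in this paper:

```python
import numpy as np

def epsilon(a, b):
    # Half-width of the intrinsic oscillatory interval for J, Eq. (3)
    return (3 * a**2 - 2 * a**2 * b - b**2) / (3 * a**3) * np.sqrt(a**2 - b)

def regime(J, eps):
    # A unit is oscillatory for |J| < eps and excitable for |J| >= eps
    return "oscillatory" if abs(J) < eps else "excitable"

eps = epsilon(60.0, 1.45)  # ~0.033 for these parameter values
```

With these values a unit at $J=0$ is oscillatory, while one at $J=1$ is deep in the excitable regime.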
Moving from the description of a single element to that of a heterogeneous 3D network of $N$ FitzHugh-Nagumo units, we assume a cubic lattice topology.
This implies that each element is coupled to its six nearest neighbors via a coupling term $C_{ij} (x_j - x_i)$, where $i$ and $j$ are indexes that identify an element $i$ and one of its coupled neighbors $j$, and $C_{ij}$ is the interaction strength.
Notice that the choice of a cubic lattice topology is consistent with what is known about the architecture of $\beta$-cell clusters, where each cell is surrounded on average by 6-7 neighbor cells~\cite{PERSAUD-2014a,Nasteska-2018a}.
We make the simplifying assumption that the value of the coupling constants is the same for each network element, $C_{ij} \equiv C$ for any $i,j$. Since we want to study the interplay between diversity and noise, we also add a noise term $\xi_i(t)$ to the first equation.
Then the corresponding FitzHugh-Nagumo equations for the $i$th element of the network are~\cite{Cartwright-2000a,Scialla-2021a}:
\begin{eqnarray}
\dot{x_i} &=& a \!\! \left[x_i \! - \! \frac{{x_i}^3}{3} \! + \! y_i + \! C\!\! \sum_{j \in \{ n \}_i} \!\! (x_j \! - \! x_i) \! + \! \xi_i (t)\! \right] \!\! , \label{FNEN1} \\
\dot{y_i} &=& -(x_i + by_i - J_i)/a \, . \label{FNEN2}
\end{eqnarray}
The sum over $j$ in Eq.~(\ref{FNEN1}) is limited to the set $\{n\}_i$ of the $n=6$ neighbors coupled to the $i$th oscillator.
The $J_i$ parameters in Eq.~(\ref{FNEN2}) are different for each network element and are used to introduce diversity; the $i$th element will be in an oscillatory state if $|J_i|< \varepsilon$ or in an excitable state if $|J_i| \ge \varepsilon$.
The $J_i$ values are drawn from a Gaussian distribution with standard deviation $\sigma_d$, mean value $J_\mathrm{av}$,
and are randomly assigned to network elements.
The standard deviation $\sigma_d$ will be used in what follows as a measure of oscillator population diversity, while the mean value $J_\mathrm{av}$ expresses how far the whole population is from the oscillatory range $ (-\varepsilon,+\varepsilon)$.
The term $\xi_i(t)$ in Eq.~(\ref{FNEN1}) is a Gaussian noise with zero mean, standard deviation $\sigma_n$, and correlation function $\langle \xi_i(t) \xi_j(t') \rangle = {\sigma_n}^2 \delta_{ij} \delta (t-t')$, meaning that $\xi_i(t)$ and $\xi_j(t)$ ($i \ne j$) are statistically independent of each other. The standard deviation $\sigma_n$ will be used in this work as a measure of noise applied to each network element.
The reason why we add $\xi_i(t)$ to the first equation for the fast variable is that this maximizes the effects of noise, making it easier to study its combination with diversity.
Introducing $\xi_i(t)$ into the second equation would result in a minimal impact of noise on network synchronization~\cite{degliesposti-2001a}, due to the slower dynamics of the refractory variable.
Noise effects would be mostly averaged out to zero by time integration and coupling, as we will show below with some numerical simulations.
The above model, which to our knowledge is studied here for the first time in the version we propose, can be used to mimic various excitable biological systems, such as pancreatic $\beta$-cell clusters and some types of neurons~\cite{Poznanski-1997a,Cartwright-2000a,Andreu-2000a,degliesposti-2001a,VRAGOVIC-2006a,Scialla-2021a}.
In what follows we use the model to analyze the combined effect of diversity and noise, acting together on the same network.
In particular, we are interested in potential synergies or antagonisms, as well as in a possible hierarchy between the two sources of disorder, which in spite of some analogies have fundamentally different synchronization mechanisms.
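To illustrate how Eqs.~(\ref{FNEN1}) and (\ref{FNEN2}) can be integrated in practice, here is a minimal Euler--Maruyama sketch in Python. The lattice size, time step, number of steps, initial conditions and the periodic boundary conditions are our illustrative choices, not specified by the model itself:

```python
import numpy as np

rng = np.random.default_rng(0)

a, b, C = 60.0, 1.45, 0.15          # parameter values used in the paper
L = 5                                # 5x5x5 lattice (the paper uses 10x10x10)
N = L**3
J_av, sigma_d, sigma_n = 0.0, 0.5, 0.25
dt, steps = 1e-4, 2000

J = rng.normal(J_av, sigma_d, size=N)   # quenched diversity of Eq. (5)
x = rng.normal(0.0, 0.1, size=N)        # initial conditions (our choice)
y = rng.normal(0.0, 0.1, size=N)

def neighbour_sum(v):
    # Sum over the 6 nearest neighbours on a cubic lattice; np.roll
    # implies periodic boundaries, an assumption made here for brevity.
    g = v.reshape(L, L, L)
    s = np.zeros_like(g)
    for axis in range(3):
        s += np.roll(g, 1, axis=axis) + np.roll(g, -1, axis=axis)
    return s.reshape(N)

for _ in range(steps):
    coupling = C * (neighbour_sum(x) - 6.0 * x)
    dW = rng.normal(0.0, 1.0, size=N) * np.sqrt(dt)   # Wiener increments
    x = x + a * (x - x**3 / 3 + y + coupling) * dt + a * sigma_n * dW
    y = y - (x + b * y - J) / a * dt
```

Sweeping $\sigma_d$ and $\sigma_n$ in an outer loop, and evaluating the global oscillatory activity of Eq.~(\ref{FNENG7}) on the stored $x_i(t)$, reproduces the kind of surfaces discussed in the results section.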
\section{Qualitative theoretical analysis}
\label{analysis}
The white noise $\xi_i(t)$ in Eq.~(\ref{FNEN1}) can represent a randomly fluctuating external current, which is able to shift the nullcline of the $x$ variable up or down and, therefore, to instantaneously change the position of the equilibrium point of each oscillator.
Depending on the extent of the shift and on the value of $J_i$, this may result in a switch from a stable to an unstable equilibrium (or vice versa), corresponding to a transition from a resting to a spiking state of the oscillator (or vice versa).
This mechanism can be further illustrated, following Ref.~\cite{Tessone-2006a}, by introducing the global variables $X(t) = N^{-1} \sum_{i=1}^N x_i(t)$ and $Y(t) = N^{-1} \sum_{i=1}^N y_i(t)$.
We then define $\delta_i$ as the difference between $x_i$ and $X$, i.e., $x_i \equiv X + \delta_i$,
and introduce $M = N^{-1} \sum_{i=1}^N {\delta_i}^2$~\cite{Tessone-2006a,Desai-1978a}.
Therefore, $M$ will increase when diversity increases.
By averaging Eqs.~(\ref{FNEN1}) and (\ref{FNEN2}) over all $N$ network elements,
we obtain the equations for the global variables $X$, $Y$:
\begin{eqnarray}
\dot{X} &=& a \left[X(1-M) - X^3 /3 + Y + \xi_G (t) \right] , \label{FNENG1} \\
\dot{Y} &=& -(X + bY - J_\mathrm{av})/a \, . \label{FNENG2}
\end{eqnarray}
Here noise effects are represented by a global white noise term $\xi_G(t) = N^{-1} \sum_i \xi_i(t)$ with zero mean
and correlation function $\langle \xi_G(t) \xi_G(t') \rangle = N^{-1} \sigma_n^2 \delta (t-t')$.
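The $N^{-1}$ scaling of the global-noise correlation is easy to verify numerically; in the short Python check below (the sample sizes are our choice), the empirical standard deviation of the site-averaged noise reproduces $\sigma_n/\sqrt{N}$:

```python
import numpy as np

rng = np.random.default_rng(1)
N, sigma_n, samples = 1000, 1.0, 4000

# Each row is one realisation of N independent site noises xi_i;
# averaging over sites gives one sample of the global noise xi_G.
xi_G = rng.normal(0.0, sigma_n, size=(samples, N)).mean(axis=1)

print(xi_G.std())  # close to sigma_n / sqrt(N) = 0.0316...
```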
It is instructive to observe the different impact of diversity and noise on the nullclines of Eqs.~(\ref{FNENG1}) and (\ref{FNENG2}).
A change in diversity, i.e., in the standard deviation of the $J_i$ distribution, causes a change in $M$,
which affects the shape of the cubic nullcline by changing the coefficient of the linear term $X$ (see Fig.~\ref{nullclines}, panels (a) and (b)).
This indicates that diversity can have a significant effect on overall network dynamics, independently of whether the mean value $J_\mathrm{av}$ is inside or outside the intrinsic oscillatory range $(-\varepsilon,+\varepsilon)$.
On the other hand, the global noise term $\xi_G(t)$ can only cause rigid shifts, positive or negative, of the cubic nullcline along the vertical axis, as a consequence of its instantaneous fluctuations (compare the dashed and solid lines in Fig.~\ref{nullclines}).
This suggests that noise is unlikely to play a constructive role when diversity is optimized (i.e., in the conditions corresponding to a DIR) and $J_\mathrm{av}=0$.
In this situation, noise will likely act as a perturbation of the system, which is already in an intrinsically oscillatory and resonant state.
Let us now consider what happens when $J_\mathrm{av} \neq 0$ (see Fig.~\ref{nullclines}-(c)).
In this case, the constant term $J_\mathrm{av}$ determines a rigid shift, upwards or downwards, of the second nullcline.
This can significantly change the position of the equilibrium point of the system dynamics, turning the network from oscillatory into excitable.
In these conditions, noise can play a synergistic role with diversity, by causing instantaneous rigid shifts of the cubic nullcline that counterbalance the effect of $J_\mathrm{av}$, thus triggering global network oscillations.
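This nullcline analysis can be made quantitative in a few lines of Python (our sketch, with $b=1.45$ as in the simulations below): the equilibria of the global dynamics are the real roots of the cubic obtained by equating the two nullclines, and one can track how they move with $M$, $J_\mathrm{av}$, and an instantaneous noise value $\xi$:

```python
import numpy as np

b = 1.45

def equilibria(J_av, M=0.0, xi=0.0):
    # Intersections of the (noise-shifted) cubic nullcline
    #   Y = X**3/3 - (1 - M)*X - xi
    # with the linear nullcline
    #   Y = (J_av - X)/b,
    # i.e. real roots of X**3/3 + (1/b - (1 - M))*X - (xi + J_av/b) = 0.
    coeffs = [1.0 / 3.0, 0.0, 1.0 / b - (1.0 - M), -(xi + J_av / b)]
    roots = np.roots(coeffs)
    return np.sort(roots[np.abs(roots.imag) < 1e-7].real)
```

For $J_\mathrm{av}=0$ and $M=0$ the origin is an equilibrium, while $J_\mathrm{av}=1$ leaves a single equilibrium near $X\approx 1.5$; a nonzero $\xi$ rigidly shifts the cubic nullcline and correspondingly moves the equilibrium, which is the mechanism discussed above.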
It should be noted that we did not add a periodic driving force to our system equations, of the type $A \sin(\Omega t)$.
As mentioned in the introduction, this term is not necessary to observe either stochastic or diversity-induced resonance effects and its presence would introduce the additional constraint of matching two time scales, i.e., the driving force period and the oscillation period of the FitzHugh-Nagumo elements constituting the network.
\section{Numerical results and discussion}
In order to quantitatively study the combined effect of diversity and noise, we numerically solve the FitzHugh-Nagumo Eqs.~(\ref{FNEN1}) and (\ref{FNEN2}) for a network of $10^3$ elements with the above-mentioned topology and the following system parameters: $a=60$, $b=1.45$, $C=0.15$.
The selected values of $a$ and $b$ generate a waveform and period similar to those of bursting oscillations of pancreatic $\beta$-cells~\cite{Scialla-2021a}.
We set the coupling constant $C=0.15$, since, as we verified in an earlier work~\cite{Scialla-2021a}, the oscillatory response of the system is substantially unchanged by further increasing $C$ beyond $C=0.15$.
We run simulations for a range of diversity values $\sigma_d$ (from $\sigma_d =0$ to $\sigma_d =2.5$) and, at the same time, for a range of white noise standard deviation values $\sigma_n$ (from $\sigma_n =0$ to $\sigma_n =5.0$).
We repeat this for each of the following diversity distribution mean values: $J_\mathrm{av} = 0, \pm 0.5, \pm 1$.
For each simulation, corresponding to a set of $\sigma_d$, $\sigma_n$, and $J_\mathrm{av}$ values, we quantify the network synchronization efficiency by computing the global oscillatory activity $\rho$~\cite{Cartwright-2000a,Scialla-2021a},
\begin{equation}
\rho = N^{-1} \sqrt{\left \langle [S(t) -\bar{S}]^2 \right \rangle} \, ,
\label{FNENG7}
\end{equation}
where $N = 10^3$ is the total number of oscillators, $S(t) = \sum_i x_i(t)$, and $\bar{S} = \langle S(t) \rangle$, with $\langle \dots \rangle$ denoting a time average.
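In code, the estimator of Eq.~(\ref{FNENG7}) amounts to a few lines (our Python sketch, where \texttt{x\_t} collects the simulated time series $x_i(t)$):

```python
import numpy as np

def global_activity(x_t):
    # x_t: array of shape (timesteps, N) with x_i(t) for each oscillator
    N = x_t.shape[1]
    S = x_t.sum(axis=1)                     # S(t) = sum_i x_i(t)
    return np.sqrt(np.mean((S - S.mean()) ** 2)) / N
```

Fully synchronized sinusoidal units give $\rho$ equal to the standard deviation of a single unit, while units with random phases largely cancel in $S(t)$ and yield a much smaller $\rho$.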
The results for the global oscillatory activity $\rho$ are plotted versus $\sigma_d$ and $\sigma_n$, generating five three-dimensional surfaces corresponding to each of the above listed $J_\mathrm{av}$ values.
\begin{figure}[!t]
\includegraphics[width=8.8cm]{fig-6_nullclinesfin.jpg}
\caption{Nullclines of Eqs.~(\ref{FNENG1}) and (\ref{FNENG2}) for different values of $J_\mathrm{av}$ and $M$.
A comparison between panels (a) and (b) shows the effect of $M$ on the shape of the cubic nullcline.
The area delimited by the dashed curves above and below the cubic nullcline in each panel illustrates the effect of instantaneous shifts caused by noise with an amplitude of up to $\pm 1$.}
\label{nullclines}
\end{figure}
\begin{figure}[!t]
\includegraphics[width=8.0cm]{fig-1_mu0_s0.25.pdf}
\caption{Global oscillatory activity $\rho$, defined in Eq.~(\ref{FNENG7}), as a function of diversity ($\sigma_d$) and noise ($\sigma_n$), for $J_\mathrm{av}=0$. The full blue dot highlights the global surface maximum, which is coincident with the DIR maximum. The empty red dot corresponds to the noise-induced resonance maximum.}
\label{tevol-b0}
\end{figure}
In the $J_\mathrm{av}=0$ regime (Fig.~\ref{tevol-b0}), where a relatively high fraction or all of the network elements are inside the intrinsic oscillatory range, simulation results show that both diversity and noise are able to generate a resonance on their own.
If we move along the diversity axis ($\sigma_n=0$, no noise) or along the noise axis ($\sigma_d=0$, no diversity), we observe in both cases a resonance maximum that is about 20-25\% higher than the $\rho$ value corresponding to the origin.
In addition, the two sources of disorder seem to act independently of one another, showing no evidence of a synergy. As a matter of fact, the global maximum of the surface coincides with the diversity-induced resonance maximum, therefore it occurs on the diversity axis, i.e., at $\sigma_d=0.5$, $\sigma_n=0$.
If, from this global maximum (shown as a full blue dot in Fig.~\ref{tevol-b0}), we move in any direction towards the middle of the surface, i.e., if we add noise, there is no gain in terms of collective oscillatory activity. This is consistent with the predictions of our qualitative theoretical analysis, indicating that noise is unlikely to play a constructive role for $J_\mathrm{av}=0$, when diversity is optimized.
\begin{figure}[!t]
\includegraphics[width=8.0cm]{fig-2_mu-1_s0.25.pdf}
\vspace{0.5cm}
\includegraphics[width=8.0cm]{fig-3_mu1_s0.25.pdf}
\caption{Global oscillatory activity $\rho$, defined in Eq.~(\ref{FNENG7}), as a function of diversity ($\sigma_d$) and noise ($\sigma_n$), for $J_\mathrm{av} = -1$ (panel (a)) and $J_\mathrm{av} = +1$ (panel (b)).
The full blue and empty green/red dots in each panel highlight the global surface maximum, the DIR maximum and the noise-induced resonance maximum, respectively.}
\label{fig-2}
\end{figure}
Moving to the opposite end of the $J_\mathrm{av}$ range, i.e., $J_\mathrm{av} = \pm1$ (Fig.~\ref{fig-2}), we observe a very different situation. In this regime most network elements are outside the intrinsic oscillatory range, either below ($J_\mathrm{av} =-1$) or above it ($J_\mathrm{av} =+1$).
Here the addition of noise to diversity always results in a significant increase of the network oscillatory activity.
In line with the theoretical analysis based on global system variables, this can be explained by considering that, in the case of $J_\mathrm{av} =-1$, most network elements are below the excitation threshold, i.e., in an excitable state, and can be pushed up into the oscillatory range by an instantaneous injection of positive external current, deriving from sufficiently large noise fluctuations with positive sign.
Vice versa, in the case of $J_\mathrm{av} =+1$, most elements are above the upper limit of the intrinsic oscillatory range, i.e., in an excitation block state, and can be pushed down into the oscillatory range by an instantaneous injection of negative external current, deriving from negative noise fluctuations with sufficiently large modulus.
In both cases, the addition of noise on top of diversity causes a synergistic effect and a remarkable network synchronization improvement: for instance, the network oscillatory activity for $J_\mathrm{av} =+1$ rises by almost $50\%$, if we compare the DIR maximum ($\rho \approx 515$, empty green dot in Fig.~\ref{fig-2}, panel (b)),
to the global maximum of the $\rho$ surface ($\rho \approx 738$, full blue dot in Fig.~\ref{fig-2}, panel (b)) resulting from the combination of diversity and noise effects.
It is also apparent from the data that, in this regime, noise is more efficient than diversity, as shown by the significantly higher noise-induced resonance maxima along the noise axis (empty red dots in Fig.~\ref{fig-2}, panel (a) and (b)), compared to their equivalents along the diversity axis (empty green dots in Fig.~\ref{fig-2}, panel (a) and (b)).
It is worth noting that the position of the DIR gets shifted towards higher values of $\sigma_d$ going from $J_\mathrm{av} = 0$ to $J_\mathrm{av} = \pm 1$.
The DIR maximum is at $\sigma_d =0.5$ for $J_\mathrm{av} =0$, versus $\sigma_d =1$ for both $J_\mathrm{av} =-1$ and $J_\mathrm{av} =+1$ (empty green dots in Fig.~\ref{fig-2}, panels (a) and (b)).
However, when we combine together noise and diversity, the position of the global maximum goes back to the same optimal diversity value found for $J_\mathrm{av} =0$
(full blue dots in Fig.~\ref{fig-2}, panels (a) and (b)).
The mechanism of this effect is that noise stochastically ``throws'' network elements towards the oscillatory range, and it does so with respect to an average position on the $J$ axis that is determined, for each element, by its $J_i$ coefficient, deriving from the diversity distribution. When this mechanism reaches the highest efficiency, i.e., at the global maximum of the surface, the optimal diversity for $J_\mathrm{av}= \pm 1$ tends to be equal to that for $J_\mathrm{av}=0$.
We may conclude that, in the $J_\mathrm{av} = \pm 1$ regime, there is a strong synergy between diversity and stochastic effects, which significantly broadens the range of resonant states of the network versus what can be observed when either source of disorder is applied individually.
Finally, in the intermediate regime corresponding to $J_\mathrm{av} = \pm 0.5$ (Fig.~\ref{fig-4}), we observe an in-between situation, with various regions of the $\rho$ surface where the combination of diversity and noise produces a synergy and an extension of the resonant range of the network.
For example, at $J_\mathrm{av} =-0.5$, there are no network oscillations for diversity values $\sigma_d =0.0$ and $\sigma_d =0.25$, whereas, with the addition of noise, resonant states are observed in both cases, starting from $\sigma_n =1$ and $\sigma_n =0.5$, respectively.
We point out that, also in this regime, the global maximum of the $\rho$ surface
due to the combined diversity- and noise-induced resonance occurs, for $J_\mathrm{av} = -0.5$, at $\sigma_d =0.5$ (full blue dot in Fig.~\ref{fig-4}, panel (a)) and is shifted to smaller values with respect to the DIR maximum in the absence of noise ($\sigma_d =0.75$, empty green dot in Fig.~\ref{fig-4}, panel (a)).
Therefore, the mechanism described in the previous paragraph, regarding the tendency of the optimal diversity value to be equal to that for $J_\mathrm{av} =0$, is at play here as well.
\begin{figure}[!t]
\includegraphics[width=8.0cm]{fig-4_mu-0.5_s0.25.pdf}
\vspace{0.5cm}
\includegraphics[width=8.0cm]{fig-5_mu0.5_s0.25.pdf}
\caption{Global oscillatory activity $\rho$, defined in Eq.~(\ref{FNENG7}), as a function of diversity ($\sigma_d$) and noise ($\sigma_n$), for $J_\mathrm{av} = -0.5$ (panel (a)) and $J_\mathrm{av} = +0.5$ (panel (b)). The full blue and empty green/red dots in each panel highlight the global surface maximum, the DIR maximum and the noise-induced resonance maximum, respectively.}
\label{fig-4}
\end{figure}
In order to confirm the rationale for our choice of adding the noise term $\xi_i(t)$ into the first FitzHugh-Nagumo equation, Eq.~(\ref{FNEN1}), we also performed some simulations where $\xi_i(t)$ was added instead into the second equation, Eq.~(\ref{FNEN2}).
We did this for $J_\mathrm{av} =0$ and $J_\mathrm{av} =-1$.
As expected, the results reported in Fig.~\ref{fig-7} show that in this case the effect of noise is negligible and the network dynamics is entirely determined by diversity~\cite{degliesposti-2001a}.
\begin{figure}[!t]
\includegraphics[width=8.0cm]{fig-7_mu0_s0.25_noise-sotto.pdf}
\vspace{0.5cm}
\includegraphics[width=8.0cm]{fig-7_mu-1_s0.25_noise-sotto.pdf}
\caption{Global oscillatory activity $\rho$, defined in Eq.~(\ref{FNENG7}), as a function of diversity ($\sigma_d$) and noise ($\sigma_n$), for $J_\mathrm{av} = 0$ (panel (a)) and $J_\mathrm{av} = -1$ (panel (b)), when noise is added into the second equation~(\ref{FNEN2}).}
\label{fig-7}
\end{figure}
\section{Conclusions}
Our theoretical and numerical analysis shows that, while there are some analogies between diversity- and noise-induced network synchronization, the two effects are substantially different and interact with each other differently, depending on the distance of the mean value of the diversity distribution from the intrinsic oscillatory range of the network elements.
Specifically, when the diversity distribution is centered around the intrinsic oscillatory range ($J_\mathrm{av} =0$), diversity and noise act independently of one another and there is no indication of a synergy. On the other hand, when the mean value of the diversity distribution is far away from the intrinsic oscillatory range ($J_\mathrm{av} = \pm1$), there is a clear synergy between the two sources of disorder, which leads to a major improvement in network synchronization. In addition, in this regime, noise can improve network synchronization more effectively than diversity. This provides useful indications on the relative importance of the two effects in different network configurations, and on when one or the other can consequently be neglected.
Another important finding is that the optimal diversity value of the network is the same in all regimes, if noise is taken into account.
In other words, when noise effects are added, the amount of diversity that maximizes collective oscillatory efficiency seems to be an intrinsic property of the network, independent of $J_\mathrm{av}$.
The fact that diversity and noise are not equivalent sources of disorder, but have distinct effects on network dynamics, may have implications for biological systems.
Our results suggest that different network configurations can lead to a hierarchy between the two sources of disorder. This may have driven biological systems to exploit diversity and noise to different degrees during their evolution, depending on their specific nature and on the types of signals that trigger their activity.
\begin{acknowledgments}
The authors acknowledge support from the Estonian Research Council through Grant PRG1059
and the ERDF (European Regional Development Fund) CoE (Center of Excellence) program through Grant TK133.
The authors would like to thank Alessandro Loppini for helpful discussions.
\end{acknowledgments}
\section{Preliminary and basic results}
\par\medskip
\par\medskip
We begin this section by recalling the definition and notation
about $C^*$-algebraic bundles.
First of all, we refer the reader to [7, VIII.2.2] for the notion
of a Banach algebraic bundle $\cal B$ over a locally compact group $G$.
Let $B$ be the bundle space of $\cal B$ and $\pi$ be the bundle
projection.
Let $B_t = \pi^{-1}(t)$ be the fiber over $t\in G$.
It is clear that $B_e$ is a $C^*$-algebra if $\cal B$ is a $C^*$-algebraic
bundle (see [7, VIII.16.2] for a definition).
We will use the material from [7, VIII] implicitly.
Following the notation of [7], we denote by $\cal L(B)$ the set of all
continuous cross-sections on $\cal B$ with compact support.
Moreover, for any $f\in \cal L(B)$, ${\rm supp}(f)$ is the closed support
of $f$.
Furthermore, let $({\cal L}_p(\mu; {\cal B}), \|\ \|_p)$ be the normed
space as defined in [7, II.15.7] (where $\mu$ is the left Haar
measure on $G$).
For simplicity, we will denote ${\cal L}_p(\mu; \cal B)$ by
${\cal L}_p(\cal B)$.
By [7, II.15.9], $\cal L(B)$ is dense in ${\cal L}_p(\cal B)$.
We also need the theory of operator-valued integration from [7, II];
in particular, we draw the reader's attention to
[7, II.5.7] and [7, II.\S 16].
\par\medskip
Throughout this paper, $\cal B$ is a $C^*$-algebraic bundle
over a locally compact group $G$ with bundle space $B$ and bundle
projection $\pi$.
Denote by $C^*(\cal B)$ the cross-sectional $C^*$-algebra of
$\cal B$ (see [7, VIII.17.2]).
We recall from [7, VIII.5.8] that there exists a canonical map $m$
from the bundle space $B$ to the set of multipliers of ${\cal L}_1(\cal B)$
(or $C^*(\cal B)$).
\par\medskip
\begin{lemma} \label{1.4}
The map $m$ from $B$ to $M(C^*(\cal B))$ is {\it faithful} in the sense that
if $m_a = m_b$, then either $a=b$ or $a=0_r$ and $b=0_s$ ($r,s\in G$).
\end{lemma}
\noindent {\bf Proof}:
Suppose that $\pi(a) = \pi(b) = r$.
Then $m_{a-b} = 0$ implies that $a-b = 0_r$ (since $\cal B$ has a strong
approximate unit and enough continuous cross-sections).
Suppose that $\pi(a)=r\neq s=\pi(b)$.
Then there exists a neighbourhood $V$ of $e$ such that
$rV\cap sV = \emptyset$.
For any $f\in \cal L(B)$, $a(f(r^{-1}t)) = m_a(f)(t) = m_b(f)(t) =
b(f(s^{-1}t))$.
Now let $\{ b_i \}$ be a strong approximate unit of $\cal B$ and let $\{ f_i \}$ be
elements of $\cal L(B)$ such that ${\rm supp}(f_i)\subseteq V$ and $f_i(e) = b_i$.
Evaluating the above equality at $t=r$ gives $ab_i = a(f_i(e)) = b(f_i(s^{-1}r)) = 0$
(note that $s^{-1}r\notin V$, since otherwise $r\in rV\cap sV$), and hence $a=0_r$.
Similarly, $b=0_s$.
\par\medskip
From now on, we will identify $B_r$ ($r\in G$) with its image
in $M(C^*({\cal B}))$.
\par\medskip
Let ${\cal B}\times G$ be the $C^*$-algebraic bundle over $G\times G$
with the Cartesian product $B\times G$ as its bundle space such that
the bundle projection $\pi'$ is given by $\pi'(b,t) = (\pi(b),t)$ ($b\in
B$; $t\in G$).
It is not hard to see that any non-degenerate representation $T'$ of
${\cal B}\times G$ is of the form $T'_{(b,t)} = T_bu_t$ ($b\in B$; $t\in G$)
for a non-degenerate representation $T$ of ${\cal B}$ and a unitary
representation $u$ of $G$ with commuting ranges.
This gives the following lemma.
\par\medskip
\begin{lemma} \label{1.5}
$C^*({\cal B}\times G) = C^*({\cal B})\otimes_{\max}C^*(G)$.
\end{lemma}
Consider the map $\delta_{\cal B}$ from $B_r$ to
$M(C^*({\cal B})\otimes_{\max}C^*(G))$ given by $\delta_{\cal B} (b) =
b\otimes \Delta_r$ where $\Delta_r$ is the canonical image of $r$ in
$M(C^*(G))$.
Denote again by $\delta_{\cal B}$ the integral form of $\delta_{\cal
B}$.
Then we have the following equalities.
\begin{eqnarray*}
\delta_{\cal B}(f)(1\otimes k)(g\otimes l)(r,s) & = &
\int_G f(t)g(t^{-1}r)[\int_G k(u)l(u^{-1}t^{-1}s)du]dt\\
& = & \int_G\int_G f(t)g(t^{-1}r)k(t^{-1}v)l(v^{-1}s) dvdt
\end{eqnarray*}
for any $f,g\in \cal L(B)$ and $k,l\in K(G)$.
If we set $(f\bullet k)(r,s)=f(r)k(r^{-1}s)$, then
$\delta_{\cal B}(f)(1\otimes k)(g\otimes l)(r,s) =
(f\bullet k)(g\otimes l)(r,s)$.
It is not hard to see that $\delta_{\cal B}$ is a full coaction
(note that $\delta_{\cal B}$ extends to a representation of
${\cal L}_1({\cal B})$ and hence of $C^*({\cal B})$).
\par\medskip
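In fact, on the generators the coaction identity is immediate (a direct
verification, recorded here for convenience): for $b\in B_r$, since
$\delta_G(\Delta_r) = \Delta_r\otimes\Delta_r$, one has
$$(\delta_{\cal B}\otimes {\rm id})\delta_{\cal B}(b)
\ =\ b\otimes \Delta_r\otimes \Delta_r
\ =\ ({\rm id}\otimes \delta_G)\delta_{\cal B}(b).$$
\par\medskip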
\begin{lemma} \label{1.6}
The set $N = \{ \sum_{i=1}^n f_i\bullet k_i:f_i\in {\cal L(B)},
k_i\in K(G)\}$ is dense in $C^*({\cal B}\times G)$.
Consequently, $\delta_{\cal B}$ is a non-degenerate full coaction.
\end{lemma}
\noindent {\bf Proof}:
It is sufficient to show that for any $f\in \cal L(B)$ and $k\in K(G)$,
$f\otimes k$ can be approximated by elements in $N$ with respect
to the ${\cal L}_1$-norm.
Let $M$ and $K$ be the closed supports of $f$ and $k$ respectively.
Since $k$ is uniformly continuous, for a given $\epsilon>0$, there exists
a neighbourhood $V$ of $e$ such that $\|k(u)-k(v)\| <
\epsilon/(\mu(M)\cdot \mu(MK)\cdot \sup_{t\in M}\|f(t)\|)$ if
$u^{-1}v\in V$ (where $\mu$ is the left Haar measure of $G$).
Since $M$ is compact, there exist $r_1,..., r_n$ in $M$ such that
$\cup_{i=1}^n r_iV$ covers $M$.
Let $k_i(s) = k(r_i^{-1}s)$ and let $g_1,...,g_n$ be the partition of
unity subordinate to $\{ r_iV\}_{i=1}^n$.
Then
\begin{eqnarray*}
\| f\bullet k(r,s) - \sum_{i=1}^n (g_if\otimes k_i)(r,s)\| & \leq &
\|f(r)\| \cdot \mid \sum_{i=1}^n g_i(r)k(r^{-1}s) -
\sum_{i=1}^n g_i(r)k(r_i^{-1}s)\mid \\
& \leq & \|f(r)\|\cdot (\ \sum_{i=1}^n\mid g_i(r)\mid\cdot
\mid k(r^{-1}s) - k(r_i^{-1}s) \mid\ ).
\end{eqnarray*}
As $g_i(r)\neq 0$ if and only if $r\in r_iV$, we have $\int_{MK}\int_M
\|f\bullet k(r,s) - \sum_{i=1}^n (g_if\otimes k_i)(r,s)\| drds \leq
\epsilon$.
This proves the lemma.
\par\medskip
The following lemma is about general coactions of $C^*(G)$.
It implies, in particular, that $\delta_{\cal B}$ is injective.
Note that the trivial representation of $G$ on $\mathbb{C}$
induces a $*$-homomorphism $\cal E$ from $C^*(G)$ to $\mathbb{C}$
which is a coidentity in the sense that $({\cal E}\otimes {\rm id})
\delta_G = {\rm id} = ({\rm id}\otimes {\cal E})\delta_G$ (where $\delta_G$ is
the comultiplication on $C^*(G)$).
\par\medskip
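Note that on the canonical generators the coidentity property is immediate:
since ${\cal E}(\Delta_t) = 1$ and $\delta_G(\Delta_t) = \Delta_t\otimes\Delta_t$
($t\in G$), one has
$$({\cal E}\otimes {\rm id})\delta_G(\Delta_t)\ =\ \Delta_t\ =\
({\rm id}\otimes {\cal E})\delta_G(\Delta_t).$$
\par\medskip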
\begin{lemma} \label{1.1}
Let $\epsilon$ be a coaction on $A$ by $C^*(G)$.
Suppose that $\epsilon_{\cal E} = ({\rm id}\otimes {\cal E})\epsilon$ and
$A_{\cal E} = \epsilon_{\cal E} (A)$.
\par\noindent
(a) If $\epsilon$ is non-degenerate, then it is automatically
injective.
\par\noindent
(b) $A = A_{\cal E}\oplus \ker (\epsilon)$ (as Banach space).
\end{lemma}
\noindent {\bf Proof}:
(a) We first note that $\epsilon_{\cal E}$ is a
$*$-homomorphism from $A$ to itself and so $A_{\cal E}$ is
a $C^*$-subalgebra of $A$.
It is clear that $\epsilon$ is injective on $A_{\cal E}$ and
we want to show that $A=A_{\cal E}$.
For any $a\in A$ and any $s\in C^*(G)$ such that ${\cal E}(s)=1$,
$a\otimes s$ can be approximated by elements of the form
$\sum \epsilon(b_i)(1\otimes t_i)$ (as $\epsilon$ is non-degenerate).
Therefore, $\sum \epsilon_{\cal E}(b_i){\cal E}(t_i) =
({\rm id}\otimes {\cal E})(\sum \epsilon(b_i)(1\otimes t_i))$
converges to $a$.
\par\noindent
(b) Note that for any $a\in A$, $\epsilon(a-\epsilon_{\cal E}(a)) = 0$.
Since $\epsilon_{\cal E}$ is a projection on $A$, $A=A_{\cal E}\oplus
\ker (\epsilon)$.
\par\medskip
The above lemma actually holds for a general Hopf $C^*$-algebra with
a co-identity instead of $C^*(G)$ (for a brief review of Hopf
$C^*$-algebras, please see e.g. [5] or [10]).
\par\medskip
\begin{remark}
By [5, 7.15], if $\Gamma$ is a discrete amenable group, then any
injective coaction of $C^*_r(\Gamma)$ is automatically non-degenerate.
More generally, the arguments in [5, \S7] actually show that for
any discrete group $G$, any injective coaction of $C^*(G)$ is
non-degenerate.
Hence, a coaction of $C^*(G)$ is injective if and only if it is
non-degenerate (when $G$ is discrete).
\end{remark}
We end this section with the following technical lemma.
\par\medskip
\begin{lemma} \label{1.3}
Let $A$ be a $C^*$-algebra and $E$ be a Hilbert $A$-module.
Suppose that $({\cal H},\pi)$ is a faithful representation of $A$.
Then
\par\noindent
(a) $\|x\| = \sup \{\|x\otimes_\pi \xi\|: \|\xi\|\leq 1\}$ for any $x\in E$;
\par\noindent
(b) the canonical map from ${\cal L}(E)$ to ${\cal L}(E\otimes_\pi {\cal H})$
(which sends $a$ to $a\otimes 1$) is injective.
\end{lemma}
\noindent {\bf Proof}:
Part (a) follows from a direct computation, and part (b) is a consequence of part (a).
\par\medskip
\par\medskip
\par\medskip
\section{Reduced cross-sectional $C^*$-algebras}
\par\medskip
\par\medskip
In this section, we will define the reduced cross-sectional
$C^*$-algebras for $C^*$-algebraic bundles and show that they
carry canonical reduced coactions.
The intuitive idea is to consider the representation of
${\cal L}_1({\cal B})$ as bounded operators on ${\cal L}_2({\cal B})$.
However, since ${\cal L}_2({\cal B})$ is not a Hilbert $C^*$-module, it
seems unlikely that we can get a $C^*$-algebra out of this
representation.
Instead, we will consider a slightly different version of
``${\cal L}_2({\cal B})$'' which is a Hilbert $B_e$-module.
The difficulty then is to show that the representation is well
defined and bounded.
This can be proved directly by a rather heavy analytical argument, but we
will instead use Lemma \ref{2.8} to do the trick.
We will also define the interesting notion of proper $C^*$-algebraic
bundles which will be needed in the next section.
\par\medskip
\begin{lemma} \label{2.1}
Consider the map $\langle \ ,\ \rangle _e$ from $\cal L(B)\times \cal L(B)$
to $B_e$ defined by
$$\langle f,g\rangle _e = \int_G f(t)^*g(t) dt$$
for all $f,g\in \cal L(B)$.
Then $\langle \ ,\ \rangle _e$ is a $B_e$-valued inner product on
$\cal L(B)$.
\end{lemma}
\noindent {\bf Proof}:
It is easily seen that $\langle \ ,\ \rangle_e$ is a well defined
$B_e$-valued pre-inner product.
Moreover, for any $f\in \cal L(B)$, $\langle f,f\rangle _e = 0$
if and only if $\int_G \varphi(f(t)^*f(t))dt = 0$
for all $\varphi\in (B_e)^*_+$, which implies that
$f(t)^*f(t) = 0$ for all $t\in G$, i.e. $f=0$.
\par\medskip
\begin{definition} \label{2.2}
The completion of $\cal L(B)$ with respect to the $B_e$-valued inner
product in Lemma \ref{2.1} is a Hilbert $B_e$-module and is denoted by
\mbox{$(L^2_e({\cal B}), \|\cdot\|_e)$}.
\end{definition}
It is clear that $\|\langle f,g\rangle _e\|\leq \|f\|_2\|g\|_2$
(by [7, II.5.4] and H\"older's inequality).
Hence there is a continuous map $J$ from ${\cal L}_2(\cal B)$ to
$L^2_e({\cal B})$ with dense range.
In fact, it is not hard to see that ${\cal L}_2(\cal B)$ is a right
Banach $B_e$-module and $J$ is a module map.
\par\medskip
Throughout this paper, $T$ is a non-degenerate *-representation
of $\cal B$ on a Hilbert space ${\cal H}$ and $\phi$ is the
restriction of $T$ on $B_e$.
Moreover, $\mu_T$ is the representation of $C^*({\cal B})$ on $\cal H$
induced by $T$.
By [7, VIII.9.4], $\phi$ is a non-degenerate representation of $B_e$.
\par\medskip
\begin{lemma} \label{2.4}
There exists an isometry
$$V:L^2_e({\cal B})\otimes_\phi {\cal H}\rightarrow L^2(G; {\cal H}),$$
such that for all $f\in \cal L(B)$, $\xi\in\cal H$, and $t\in G$ one has
$$V(f\otimes \xi)(t) = T_{f(t)}\xi.$$
\end{lemma}
\noindent {\bf Proof}:
It is easy to check that the map $V$ defined as above is inner
product preserving and hence extends to the required map.
\par\medskip
One technical difficulty in the study of reduced cross-sectional
$C^*$-algebras is that $V$ is not necessarily surjective.
\par\medskip
\begin{example} \label{2.6}
(a) If $\cal B$ is saturated, then $V$ is surjective.
In fact, let $K= V(L^2_e({\cal B})\otimes_\phi {\cal H})$ and
$\Theta$ be an element in the complement of $K$.
For any $g\in \cal L(B)$ and $\eta\in {\cal H}$,
$\int_G \langle T_{g(r)}\eta, \Theta(r)\rangle \ dr = 0$ which implies
that $\int_G T_{g(r)}^*\Theta(r)\ dr = 0$.
Now for any $f\in \cal L(B)$, we have
$$(\mu_T\otimes \lambda_G)\delta_{\cal B}(f)(\Theta)(t) =
\int_G T_{f(s)}\Theta(s^{-1}t)\ ds =
\int_G T_{f(tr^{-1})}\Theta(r)\Delta(r)^{-1}\ dr.$$
Moreover, for any $b\in B_{t^{-1}}$, let
$g(r) = \Delta(r)^{-1}f(tr^{-1})^*b^*$.
Then $g\in \cal L(B)$ and
$$T_b (\mu_T\otimes \lambda_G)
\delta_{\cal B}(f)(\Theta)(t) = \int_G T_{g(r)}^*\Theta(r)\ dr = 0$$
for any $b\in B_{t^{-1}}$ (by the above equality).
Since $\cal B$ is saturated and the restriction $\phi$ of $T$ is
non-degenerate, $(\mu_T\otimes \lambda_G)\delta_{\cal B}(f)(\Theta)=0$
for any $f\in \cal L(B)$.
Thus, $\Theta=0$ (because $(\mu_T\otimes \lambda_G)\circ\delta_{\cal B}$
is non-degenerate).
\par\noindent
(b) Let $\cal B$ be the trivial bundle over a discrete group $G$
(i.e. $B_e = \mathbb{C}$ and $B_t = (0)$ if $t\neq e$).
Then $L^2_e({\cal B})\otimes_\phi {\cal H} \cong {\cal H}$ is a proper
subspace of $L^2(G; {\cal H})$.
\end{example}
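To make Example \ref{2.6}(b) explicit: since $B_t = (0)$ for $t\neq e$,
a cross-section of $\cal B$ is determined by its value at $e$, so that
${\cal L(B)}\cong \mathbb{C}$ and $\langle f,g\rangle_e = \overline{f(e)}g(e)$.
Hence $L^2_e({\cal B})\cong \mathbb{C}$, and the image of $V$ consists of the
elements of $\ell^2(G;{\cal H})$ supported at $e$, which form a proper subspace
whenever $G$ is non-trivial.
\par\medskip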
For any $b\in B$, let $\hat T_b$ be the map from $L^2_e({\cal B})$
to itself defined by
$\hat T_b(f) = b\cdot f$ for any $f\in \cal L(B)$ (where $b\cdot f(t) =
bf(\pi(b)^{-1}t)$).
It does not seem easy to show directly that $\hat T_b$ is bounded.
Instead, we will consider the corresponding representation of
${\cal L}_1({\cal B})$ and show that it is well defined.
\par\medskip
For any $f\in {\cal L}({\cal B})$, define a map $\lambda_{\cal B} (f)$ from ${\cal
L}({\cal B})$ to itself by $\lambda_{\cal B} (f)(g) = f\ast g$ ($g\in {\cal
L}({\cal B})$).
We would like to show that this map is bounded and induces a bounded
representation of ${\cal L}_1({\cal B})$.
In order to prove this, we will first consider a map
$\tilde\lambda_{\cal B}(f)$ from ${\cal L}({\cal B})\otimes_{\rm alg} {\cal H}$
to itself given by
$\tilde \lambda_{\cal B}(f)(g\otimes \xi) = f\ast g\otimes \xi$ ($g\in {\cal
L}({\cal B})$; $\xi\in \cal H$).
In the following, we will not distinguish ${\cal L}({\cal B})\otimes_{\rm alg}
{\cal H}$ and its image in $L_e^2({\cal B})\otimes_\phi{\cal H}$.
\par\medskip
\begin{lemma} \label{2.8}
For any $f\in \cal L(B)$, $\tilde \lambda_{\cal B}(f)$ extends to a
bounded linear operator on $L_e^2({\cal B})\otimes_\phi{\cal H}$ such that
$\mu_{\lambda,T} (f)\circ V = V\circ
(\tilde \lambda_{\cal B}(f))$
(where $\mu_{\lambda,T}$ is the composition:
$C^*({\cal B})\stackrel{\delta_{\cal B}}{\longrightarrow} C^*({\cal B})\otimes_{\rm max} C^*(G)
\stackrel{\mu_T\otimes \lambda_G}{\longrightarrow} {\cal B}({\cal H}\otimes
L^2(G))$).
\end{lemma}
\noindent {\bf Proof:}
For any $g\in \cal L(B)$, $\xi\in \cal H$ and $s\in G$, we have,
\begin{eqnarray*}
\mu_{\lambda,T}(f)V(g\otimes \xi)(s)
&=& \int_G (T_{f(t)}\otimes \lambda_t) V(g\otimes \xi)(s) dt
\ \ =\ \ \int_G T_{f(t)}T_{g(t^{-1}s)}\xi dt\\
&=& T_{f\ast g(s)}\xi
\ \ =\ \ V(f\ast g\otimes \xi)(s)
\ \ =\ \ V(\tilde \lambda_{\cal B}(f)(g\otimes\xi))(s).
\end{eqnarray*}
Since $V$ is an isometry, $\tilde \lambda_{\cal B}(f)$ extends
to a bounded linear operator on $L_e^2({\cal B})\otimes_\phi{\cal H}$ and
satisfies the required equality.
\par\medskip
Now by considering the representation $T$ for which $\phi$ is injective
and using Lemmas \ref{1.3}(a) and \ref{2.4}, $\lambda_{\cal B}(f)$ extends
to a bounded linear map from $L_e^2({\cal B})$ to itself.
It is not hard to show that $\langle f\ast g, h\rangle_e =
\langle g,f^*\ast h\rangle_e$ (for any $g,h\in {\cal L}({\cal B})$).
Hence $\lambda_{\cal B}(f)\in {\cal L}(L_e^2({\cal B}))$.
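The identity above can be verified directly: recalling that the involution on
${\cal L(B)}$ is given by $f^*(t) = \Delta(t)^{-1}f(t^{-1})^*$, the substitutions
$t\mapsto t^{-1}$ and $s\mapsto t^{-1}s$ yield
\begin{eqnarray*}
\langle g, f^*\ast h\rangle_e
& = & \int_G\int_G g(s)^*\Delta(t)^{-1}f(t^{-1})^*h(t^{-1}s)\ dtds
\ \ =\ \ \int_G\int_G g(s)^*f(t)^*h(ts)\ dtds\\
& = & \int_G\int_G g(t^{-1}s)^*f(t)^*h(s)\ dtds
\ \ =\ \ \langle f\ast g, h\rangle_e.
\end{eqnarray*}
\par\medskip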
Moreover, we have the following proposition.
\par\medskip
\begin{proposition} \label{2.10}
The map $\lambda_{\cal B}$ from ${\cal L}_1(\cal B)$ to
${\cal L}(L^2_e(\cal B))$ given by $\lambda_{\cal B} (f)(g) = f\ast g$
($f,g\in \cal L(B)$) is a well defined norm decreasing non-degenerate
*-homomorphism such that $ \mu_{\lambda,T}(f)\circ V = V\circ (\lambda_{\cal B}(f)
\otimes_\phi 1)$ ($f\in \cal L(B)$).
\end{proposition}
\begin{definition} \label{2.11}
(a) $\lambda_{\cal B}$ is called the {\it reduced representation of $C^*({\cal B})$}
and $C^*_r({\cal B}) = \lambda_{\cal B}(C^*(\cal B))$ is called the
{\it reduced cross-sectional $C^*$-algebra of $\cal B$}.
\par\noindent
(b) $\cal B$ is said to be {\it amenable} if $\lambda_{\cal B}$ is
injective.
\end{definition}
\begin{example} \label{2.12}
Suppose that $\cal B$ is the semi-direct product bundle corresponding to
an action $\alpha$ of $G$ on a $C^*$-algebra $A$.
Then $C^*({\cal B}) = A\times_\alpha G$ and $C^*_r({\cal B}) =
A\times_{\alpha, r} G$.
\end{example}
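Recall that in this case the bundle has fibers $B_t = A\times \{t\}$, with
operations
$$(a,t)(b,s) = (a\,\alpha_t(b), ts), \qquad (a,t)^* = (\alpha_{t^{-1}}(a^*), t^{-1}),$$
so that ${\cal L(B)}$ may be identified with $C_c(G;A)$ and the convolution
becomes the usual crossed product convolution
$$(f\ast g)(s) = \int_G f(t)\,\alpha_t(g(t^{-1}s))\ dt.$$
\par\medskip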
As in the case of full cross-sectional $C^*$-algebras,
we can define non-degenerate reduced coactions on
reduced cross-sectional $C^*$-algebras.
First of all, let us consider (as in the case of reduced
group $C^*$-algebras) an operator $W$ from
${\cal L(B}\times G)$ to itself defined by $W(F)(r,s) =
F(r,r^{-1}s)$ ($F\in {\cal L(B}\times G)$).
Note that for any $f\in\cal L(B)$ and $k\in K(G)$, $W(f\otimes k) =
f\bullet k$ (where $f\bullet k$ is defined in the
paragraph before Lemma \ref{1.6}) and that
$L^2_e({\cal B}\times G) = L^2_e({\cal B})\otimes L^2(G)$ as
Hilbert $B_e$-modules.
\par\medskip
\begin{lemma} \label{2.13}
$W$ is a unitary in ${\cal L}(L^2_e({\cal B})\otimes L^2(G))$.
\end{lemma}
\par
\noindent {\bf Proof}:
For any $f,g\in \cal L(B)$ and $k,l\in K(G)$, we have the following
equality:
\begin{eqnarray*}
\langle W(f\otimes k), W(g\otimes l)\rangle
& = & \int_G\int_G (f\bullet k)(r,s)^*(g\bullet l)(r,s) dsdr\\
& = & \int_G\int_G f(r)^*\overline{k(r^{-1}s)} g(r)l(r^{-1}s) drds\\
& = & \int_G\int_G f(r)^*g(r)\overline{k(t)}l(t) dtdr
\quad = \quad \langle f\otimes k, g\otimes l\rangle.
\end{eqnarray*}
Hence $W$ is continuous and extends to an operator on
$L^2_e({\cal B})\otimes L^2(G)$.
Moreover, if we define $W^*$ by $W^*(f\otimes k)(r,s) = f(r)k(rs)$,
then $W^*$ is the adjoint of $W$ and $WW^* = 1 = W^*W$.
\par\medskip
As in [12], we can define a *-homomorphism $\delta^r_{\cal B}$
from $C^*_r(\cal B)$ to ${\cal L}(L^2_e({\cal B})\otimes L^2(G))$
by $\delta^r_{\cal B}(x) = W(x\otimes 1)W^*$
($x\in C^*_r(\cal B)$).
Moreover, for any $b\in B\subseteq M(C^*(\cal B))$ (see Lemma
\ref{1.4}), $\delta^r_{\cal B}(\lambda_{\cal B}(b)) =
\lambda_{\cal B}(b)\otimes \lambda_{\pi(b)}$
(where $\lambda_t$ is the canonical image of $t$ in $M(C^*_r(G))$).
\par\medskip
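The latter identity can be checked on elementary tensors: for $b\in B_p$,
$f\in {\cal L(B)}$ and $k\in K(G)$, using $W^*(f\otimes k)(r,s) = f(r)k(rs)$,
$$W(\lambda_{\cal B}(b)\otimes 1)W^*(f\otimes k)(r,s)
\ =\ b\,f(p^{-1}r)\,k(p^{-1}s)
\ =\ \bigl((b\cdot f)\otimes \lambda_p k\bigr)(r,s).$$
\par\medskip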
\begin{proposition} \label{2.14}
The map $\delta^r_{\cal B}$ defined above is an injective non-degenerate
coaction on $C^*_r(\cal B)$ by $C^*_r(G)$.
\end{proposition}
\noindent {\bf Proof}:
It is clear that $\delta^r_{\cal B}$ is an injective *-homomorphism.
Moreover,
$(\lambda_{\cal B}\otimes\lambda_G)\circ\delta_{\cal B} =
\delta^r_{\cal B}\circ\lambda_{\cal B}$
which implies that $\delta^r_{\cal B}$ is a non-degenerate coaction
(see Lemma \ref{1.6}).
\par\medskip
There is an alternative natural way to define ``reduced''
cross-sectional $C^*$-algebra (similar to the corresponding
situation of full and reduced crossed products):
$C^*_R({\cal B}) := C^*({\cal B})/\ker (\epsilon_{\cal B})$
(where $\epsilon_{\cal B}$ is the composition: $C^*({\cal B})\stackrel
{\delta_{\cal B}}{\longrightarrow}C^*({\cal B})\otimes_{\rm max}C^*(G)
\stackrel{{\rm id}\otimes \lambda_G}{\longrightarrow}C^*({\cal B})\otimes C^*_r(G)$).
\par\medskip
\begin{remark} \label{2.b}
(a) It is clear that $\mu_{\lambda,T} = (\mu_T\otimes \lambda_G)\circ\delta_{\cal B}$
(see Lemma \ref{2.8}) induces a representation of $C^*_R({\cal B})$ on
$L^2(G; {\cal H})$.
If $\mu_T$ is faithful, then this induced representation is also
faithful and $C^*_R({\cal B})$ can be identified with the image of $\mu_{\lambda,T}$.
\par\noindent
(b) If $\mu_T$ is faithful, then so is $\phi$ and
$\lambda_{\cal B}\otimes_\phi 1$
is a faithful representation of $C^*_r({\cal B})$ (by Lemma \ref{1.3}(b)).
Therefore, part (a) and Proposition \ref{2.10} imply that $C^*_r({\cal B})$ is a
quotient of $C^*_R({\cal B})$.
\end{remark}
In [12, 3.2(1)], it was proved that these two reduced cross-sectional
$C^*$-algebras coincide in the case of semi-direct product bundles.
The corresponding result in the case of $C^*$-algebraic bundles
over discrete groups was proved implicitly in [6, 4.3].
In the following we shall see that it is true in general.
\par\medskip
The idea is to define a map $\varphi$ from $C_r^*(\cal B)$ to
${\cal L}(L^2(G; {\cal H}))$ such that $\varphi\circ\lambda_{\cal B} =
\mu_{\lambda,T}$ (see Remark \ref{2.b}(a)).
As noted above, the difficulty is that $V$ may not be surjective and,
by Proposition \ref{2.10}, $\lambda_{\cal B}\otimes_\phi 1$ may only be a proper
subrepresentation of $\mu_{\lambda,T}$ (see Example \ref{2.6}(b)).
However, we may ``move it around'' filling out the whole
representation space for $\mu_{\lambda,T}$ using the right regular
representation $\rho$ of $G$ (on $L^2(G)$):
$\rho_r(g)(s)=\Delta(r)^{1/2}g(sr)$ ($g\in L^2(G)$; $r,s\in G$) where
$\Delta$ is the modular function for $G$.
\par\medskip
\begin{lemma} \label{2.a}
For each $r\in G$,
\par\noindent
(a) The unitary operator $\rho_r\otimes 1$ on $L^2(G)\otimes {\cal H}=
L^2(G;{\cal H})$ lies in the commutant of $\mu_{\lambda,T} (C^*(\cal B))$.
\par\noindent
(b) Consider the isometry
$$
V^r: L^2_e({\cal B})\otimes_\phi{\cal H} \to L^2(G;{\cal H}),
$$
given by $V^r = (\rho_r\otimes 1) V$.
Then for all $a\in C^*({\cal B})$ one
has $V^r (\lambda_{\cal B}(a)\otimes 1) = \mu_{\lambda,T}(a) V^r$.
\par\noindent
(c) Let $K_r$ be the range of\/ $V^r$.
Then $K_r$ is invariant under $\mu_{\lambda,T}$ and the restriction of $\mu_{\lambda,T}$
to $K_r$ is equivalent to $\lambda_{\cal B}\otimes 1$.
\end{lemma}
\noindent {\bf Proof:}
It is clear that $\rho_r\otimes 1$ commutes with $\mu_{\lambda,T}(b_t) =
\lambda_t\otimes T_{b_t}$ for any $b_t\in B_t$ (see Lemma \ref{1.4}).
It then follows that $\rho_r\otimes 1$ also commutes with the
range of the integrated form of $\mu_{\lambda,T}$, whence (a).
Part (b) follows immediately from (a) and
Proposition \ref{2.10}. Finally, (c) follows from (b).
\par\medskip
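The commutation in part (a) is the familiar one between the left and right
regular representations: for $g\in L^2(G)$ and $r,t,s\in G$,
$$(\rho_r\lambda_t g)(s)\ =\ \Delta(r)^{1/2}g(t^{-1}sr)\ =\ (\lambda_t\rho_r g)(s).$$
\par\medskip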
Our next result is intended to show that the $K_r$'s do indeed fill
out the whole of $L^2(G;{\cal H})$.
\par\medskip
\begin{proposition} \label{2.c}
The linear span of\/ $\bigcup_{r\in G} K_r$ is dense in $L^2(G;{\cal H})$.
\end{proposition}
\noindent {\bf Proof:}
Let
$$
\Gamma = {\rm span}\{V^r( f\otimes \eta): r\in G,\ f\in {\cal L}({\cal B}),
\ \eta\in{\cal H}\}.
$$
Since for any $t\in G$,
$$
V^r( f\otimes \eta)(t) =
(\rho_r\otimes 1)V( f\otimes \eta)(t) =
\Delta(r)^{1/2}V(f\otimes \eta)(tr) =
\Delta(r)^{1/2}T_{f(tr)}\eta,
$$
and since we are taking $f$ in ${\cal L}({\cal B})$ above, it is easy to see
that $\Gamma$ is a subset of $C_c(G,{\cal H})$.
Our strategy will be to use [7, II.15.10]
(on the Banach bundle ${\cal H}\times G$ over $G$)
for which we must prove that:
\par
\noindent (I) If $f$ is a continuous complex function on $G$ and
$\zeta\in\Gamma$, then the pointwise product $f\zeta$ is in $\Gamma$;
\par
\noindent (II) For each $t\in G$, the set $\{\zeta(t):\zeta\in \Gamma\}$
is dense in ${\cal H}$.
\par\noindent
The proof of (I) is elementary in view of the fact that ${\cal L}({\cal B})$ is
closed under pointwise multiplication by continuous scalar-valued
functions [7, II.13.14].
In order to prove (II), let $\xi\in{\cal H}$ have the form
$\xi=T_b\eta=\phi(b)\eta$, where $b\in B_e$ and $\eta\in{\cal H}$.
By [7, II.13.19], let $f\in {\cal L}({\cal B})$ be such that $f(e)=b$.
It follows that
$\zeta_r:=V^r(f\otimes \eta)$ is in $\Gamma$ for all $r$.
Also note that,
setting $r=t^{-1}$, we have
$$
\zeta_{t^{-1}}(t) =
\Delta(t)^{-1/2}T_{ f(e)}\eta =
\Delta(t)^{-1/2}T_{b}\eta =
\Delta(t)^{-1/2}\xi.
$$
This shows that $\xi\in\{\zeta(t):\zeta\in \Gamma\}$.
Since the set of such $\xi$'s is dense in ${\cal H}$ (because
$\phi$ is non-degenerate by assumption), we have that (II) is proven.
As already indicated, it now follows from [7, II.15.10] that
$\Gamma$ is dense in $L^2(G;{\cal H})$. Since $\Gamma$ is contained in the
linear span of $\bigcup_{r\in G} K_r$, the conclusion follows.
\par\medskip
We can now obtain the desired result.
\par\medskip
\begin{theorem} \label{2.d}
For all $a\in C^*({\cal B})$ one has that $\|\mu_{\lambda,T}(a)\| \leq
\|\lambda_{\cal B}(a)\|$.
Consequently, $C^*_R({\cal B})=C^*_r({\cal B})$.
\end{theorem}
\noindent {\bf Proof:}
We first claim that for all $a\in C^*({\cal B})$ one has
that
$$
\lambda_{\cal B}(a) = 0 \quad \Longrightarrow \quad \mu_{\lambda,T}(a) = 0.
$$
Suppose that $\lambda_{\cal B}(a) = 0$.
Then for each $r\in G$ we have by Lemma \ref{2.a}(b) that
$$
\mu_{\lambda,T}(a) V^r = V^r (\lambda_{\cal B}(a)\otimes 1) =0.
$$
Therefore $\mu_{\lambda,T}(a)$ vanishes on the range $K_r$ of $V^r$.
By Proposition \ref{2.c} it follows that $\mu_{\lambda,T}(a)=0$, thus proving our
claim.
Now define a map
$$
\varphi : C^*_r({\cal B}) \longrightarrow {\cal B}(L^2(G;{\cal H}))
$$
by $\varphi (\lambda_{\cal B}(a)) := \mu_{\lambda,T}(a)$, for all $a$ in $C^*({\cal B})$.
By the claim above we have that $\varphi$ is well defined.
Also, it is easy to see that $\varphi$ is a *-homomorphism.
It follows that $\varphi$ is contractive and hence that for all
$a$ in $C^*({\cal B})$
$$
\|\mu_{\lambda,T}(a)\| =
\|\varphi (\lambda_{\cal B}(a))\| \leq
\|\lambda_{\cal B}(a)\|.
$$
For the final statement, we note that if $\mu_T$ is faithful, then
the map $\varphi$ defined above is the inverse of the quotient map
from $C^*_R({\cal B})$ to $C^*_r({\cal B})$ given in Remark \ref{2.b}(b).
\par\medskip
The following generalises [11, 7.7.5] to the context of
$C^*$-algebraic bundles:
\par\medskip
\begin{corollary} \label{2.e}
Let $T:{\cal B}\rightarrow {\cal L}({\cal H})$ be a non-degenerate
$*$-representation of the $C^*$-algebraic bundle ${\cal B}$
and let $\mu_{\lambda,T}$ be the representation of ${\cal B}$ on $L^2(G;{\cal H})$ given by
$\mu_{\lambda,T}(b_t) = \lambda_t\otimes T_{b_t}$, for $t\in G$, and $b_t\in B_t$.
Then $\mu_{\lambda,T}$ is a well defined representation and induces a
representation of $C^*({\cal B})$ (again denoted by $\mu_{\lambda,T}$).
In this case, $\mu_{\lambda,T}$ factors through $C^*_r({\cal B})$.
Moreover, if $\phi = T\!\mid_{B_e}$ is faithful, the
representation of $C^*_r({\cal B})$ arising from this factorisation
is also faithful.
\end{corollary}
\noindent {\bf Proof:}
By Remark \ref{2.b}(a), $\mu_{\lambda,T}$ factors through a representation of
$C^*_R({\cal B})=C^*_r({\cal B})$ (Theorem \ref{2.d}).
Now if $\phi$ is faithful, then by Theorem \ref{2.d}, Lemmas \ref{2.10}
and \ref{1.3}(a), $\|\mu_{\lambda,T}(a)\| = \|\lambda_{\cal B}(a)\|$.
This proves the second statement.
\par\medskip
\par\medskip
\par\medskip
\section{The approximation property of $C^*$-algebraic bundles}
\par\medskip
\par\medskip
From now on, we assume that $\mu_T$ (see the paragraph
before Lemma \ref{2.4}) is faithful.
Moreover, we will not distinguish ${\cal L}({\cal B})$ and its image in $C^*({\cal B})$.
\par\medskip
The material in this section is similar to the discrete case in [6].
Let ${\cal B}_e$ be the $C^*$-algebraic bundle $B_e\times G$ over $G$.
We will first define a map from $L^2_e({\cal B}_e)\times
C^*_r({\cal B})\times L^2_e({\cal B}_e)$ to $C^*(\cal B)$.
For any $\alpha\in {\cal L(B}_e)$, let $V_\alpha$ be a map
from ${\cal H}$ to $L^2(G; {\cal H})$ given by
$$V_\alpha(\xi)(s) = \phi(\alpha(s))\xi$$
($\xi\in {\cal H}$; $s\in G$).
It is clear that $V_\alpha$ is continuous and $\|V_\alpha\|\leq \|\alpha\|$.
Moreover, we have $V_\alpha^*(\Theta) =
\int_G \phi(\alpha(r)^*)\Theta(r)\ dr$
($\alpha\in {\cal L(B}_e)$; $\Theta\in L^2(G)\otimes {\cal H} =
L^2(G;{\cal H})$) and $\|V_\alpha^*\|\leq \|\alpha\|$.
Thus, for any $\alpha, \beta\in L^2_e({\cal B}_e)$, we obtain a continuous
linear map $\Psi_{\alpha,\beta}$ from ${\cal L}(L^2(G)\otimes {\cal H})$
to ${\cal L}({\cal H})$ defined by
$$\Psi_{\alpha,\beta}(x) = V_\alpha^*xV_\beta$$
with $\|\Psi_{\alpha,\beta}\|\leq \|\alpha\|\|\beta\|$.
Recall from Remark \ref{2.b}(a) and Theorem \ref{2.d} that $C^*_r(\cal B)$
is isomorphic to the image of $C^*(\cal B)$ in ${\cal L}(L^2(G)\otimes
{\cal H})$ under $\mu_{\lambda,T} = (\mu_T\otimes \lambda_G)\circ \delta_{\cal B}$.
\par\medskip
\begin{lemma} \label{3.1}
Let $\alpha, \beta\in {\cal L(B}_e)$ and $f\in \cal L(B)$.
Then $\Psi_{\alpha,\beta}(\mu_{\lambda,T}(f)) =
\alpha\cdot f\cdot \beta$ where $\alpha\cdot f\cdot \beta\in {\cal L(B)}$
is defined by $\alpha\cdot f\cdot \beta(s) =
\int_G \alpha(t)^*f(s)\beta(s^{-1}t)\ dt$.
\end{lemma}
\noindent {\bf Proof}:
For any $\xi \in {\cal H}$, we have
\begin{eqnarray*}
\Psi_{\alpha,\beta}(\mu_{\lambda,T}(f))\xi & = &
\int_G \phi(\alpha(t)^*)(\mu_{\lambda,T}(f)V_\beta\xi)(t)\ dt\\
& = & \int_G\int_G \phi(\alpha^*(t))T_{f(s)}\phi(\beta(s^{-1}t))
\xi\ ds dt\\
& = & \int_G T_{(\alpha\cdot f\cdot \beta)(s)}\xi\ ds.
\end{eqnarray*}
\par\medskip
Hence we have a map from $L^2_e({\cal B}_e)\times
C^*_r({\cal B})\times L^2_e({\cal B}_e)$ to $C^*(\cal B)$ such
that $\|\alpha\cdot x\cdot \beta\|\leq \|\alpha\|\|x\|\|\beta\|$.
Next, we will show that this map sends
$L^2_e({\cal B}_e)\times \mu_{\lambda,T}({\cal L(B}))\times
L^2_e({\cal B}_e)$ to $\cal L(B)$.
\par\medskip
\begin{lemma} \label{3.2}
For any $\alpha, \beta\in L^2_e({\cal B}_e)$ and $f\in \cal L(B)$,
$\alpha\cdot f\cdot \beta\in \cal L(B)$.
\end{lemma}
\noindent {\bf Proof}:
If $\alpha', \beta' \in {\cal L(B}_e)$, then
\begin{eqnarray*}
\|(\alpha'\cdot f\cdot \beta')(s)\| & = &
\sup \{\mid \langle \eta,\int_G T_{\alpha'(t)^*f(s)\beta'(s^{-1}t)}
\xi\ dt\rangle \mid: \|\eta\|\leq 1; \|\xi\|\leq 1\}\\
& \leq & \sup \{ \|f(s)\| (\int_G \|\phi(\alpha'(t))\eta\|^2\ dt)^{1/2}
(\int_G \|\phi(\beta'(t))\xi\|^2\ dt)^{1/2}: \|\eta\|\leq 1; \|\xi\|\leq
1\}\\
& = & \|f(s)\|\|\alpha'\|\|\beta'\|.
\end{eqnarray*}
Let $(\alpha_n)$ and $(\beta_n)$ be sequences of elements in ${\cal L(B}_e)$
that converge to $\alpha$ and $\beta$ respectively.
Then $(\alpha_n\cdot f\cdot \beta_n)(s)$ converges to an element
$g(s)\in B_s$.
Moreover, since $f$ is of compact support and the convergence is uniform,
$g\in \cal L(B)$ and ${\rm supp}(g)\subseteq {\rm supp}(f)$.
In fact, this convergence actually takes place in
${\cal L}_1({\cal B})$ and hence in $C^*(\cal B)$.
Therefore $\Psi_{\alpha,\beta}(\mu_{\lambda,T}(f))=\mu_T(\alpha\cdot f\cdot \beta)$.
\par\medskip
\begin{remark} \label{3.3}
The proof of the above lemma also shows that $\Psi_{\alpha,\beta}$
sends the image of $B_t$ in $M(C^*_r(\cal B))$ to the image
of $B_t$ in $M(C^*(\cal B))$.
Hence $\Psi_{\alpha,\beta}$ induces a map $\Phi_{\alpha,\beta}$ from $B$
to $B$ which preserves fibers.
\end{remark}
\begin{definition} \label{3.4}
Let $\{\Phi_i\}$ be a net of maps from $B$ to itself such that
they preserve fibers and are linear on each fiber.
\par\noindent
(a) $\{\Phi_i\}$ is said to be {\it converging to $1$ uniformly on compact slices
of $B$} if for any $f\in {\cal L(B})$ and any $\epsilon > 0$, there
exists $i_0$ such that for any $i\geq i_0$, $\|\Phi_i(b)-b\|<\epsilon$
for any $b\in f(G)$ ($f(G)$ is called a {\it compact slice} of $B$).
\par\noindent
(b) $\{\Phi_i\}$ is said to be {\it converging to
$1$ uniformly on compact-bounded subsets of $B$} if for any compact
subset $K$ of $G$ and any $\epsilon > 0$, there exists $i_0$ such that
for any $i\geq i_0$, $\|\Phi_i(b)-b\|<\epsilon$ if
$\pi(b)\in K$ and $\|b\|\leq 1$.
\end{definition}
\begin{lemma} \label{3.5}
Let $\{\Phi_i\}$ be a net as in Definition \ref{3.4}.
Then each of the following conditions is stronger than the next one.
\begin{enumerate}
\item[i.] $\{\Phi_i\}$ converges to $1$ uniformly on
compact-bounded subsets of $B$.
\item[ii.] $\{\Phi_i\}$ converges to $1$ uniformly on
compact slices of $B$.
\item[iii.] For any $f\in \cal L(B)$, the net $\Phi_i\circ f$ converges
to $f$ in ${\cal L}_1(\cal B)$.
\end{enumerate}
\end{lemma}
\par\noindent {\bf Proof:}
Since every element in ${\cal L(B})$ has compact support and is bounded,
it is clear that (i) implies (ii).
On the other hand, it is straightforward that (ii) implies (iii).
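For the record, the estimate behind the implication from (ii) to (iii) is the following routine verification, written out here for convenience:

```latex
% For i \geq i_0 as in Definition 3.4(a), uniform convergence on the
% compact slice f(G) controls the L^1-norm:
\|\Phi_i\circ f - f\|_1 \;=\; \int_G \|\Phi_i(f(s))-f(s)\|\ ds
\;\leq\; \epsilon\,\lambda\bigl({\rm supp}(f)\bigr),
```

where $\lambda$ is a left Haar measure on $G$; the integrand vanishes off ${\rm supp}(f)$ since each $\Phi_i$ is linear on the fibers.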
\par\medskip
Following the idea of [6], we define the approximation property
of $\cal B$.
\par\medskip
\begin{definition} \label{3.6}
(a) Let $\cal B$ be a $C^*$-algebraic bundle.
For $M>0$, $\cal B$ is said to have the {\it $M$-approximation property}
(respectively, {\it strong $M$-approximation property}) if
there exist nets $(\alpha_i)$ and $(\beta_i)$ in ${\cal L(B}_e)$ such that
\begin{enumerate}
\item[i.] $\sup_i \|\alpha_i\|\|\beta_i\| \leq M$;
\item[ii.] the map $\Phi_i = \Phi_{\alpha_i,\beta_i}$ (see Remark \ref{3.3})
converges to $1$ uniformly on compact slices of $B$ (respectively,
uniformly on compact-bounded subsets of $B$).
\end{enumerate}
$\cal B$ is said to have the (respectively, {\it strong})
{\it approximation property} if it has
the (respectively, strong) $M$-approximation property
for some $M > 0$.
\par\noindent
(b) We will use the terms ({\it strong}) {\it positive
$M$-approximation property} and ({\it strong}) {\it positive
approximation property} if we can choose $\beta_i = \alpha_i$
in part (a).
\end{definition}
Because of Remark \ref{3.7}(b) below as well as [11, 7.3.8], we believe that
the above is the weakest condition one can think of to ensure the
amenability of the $C^*$-algebraic bundle.
\par\medskip
\begin{remark} \label{3.7}
(a) Since any compact subset of a discrete group is finite and any
$C^*$-algebraic bundle has enough cross-sections,
the approximation property defined in [6] is the
the same as the positive 1-approximation property defined above.
\par\noindent
(b) It is easy to see that the amenability of $G$ implies
the positive 1-approximation property of $\cal B$ (note that the
positive 1-approximation property is similar to the condition in
[11, 7.3.8]).
In fact, let $\xi_i$ be the net given by [11, 7.3.8] and let
$\eta_i(t) = \overline{\xi_i(t)}$.
If $u_j$ is an approximate unit of $B_e$ (which is also a strong
approximate unit of $\cal B$ by [7, VIII.16.3]),
then the net $\alpha_{i,j} = \beta_{i,j} = \eta_i u_j$
will satisfy the required property.
\par\noindent
(c) We can also formulate the approximation property as follows:
there exists $M>0$ such that for any compact slice $S$ of $B$ and any
$\epsilon > 0$, there exist $\alpha,\beta\in L^2_e({\cal B}_e)$ with
$$\|\alpha\|\|\beta\|\leq M\qquad {\rm and} \qquad
\|\alpha\cdot b\cdot \beta - b\| < \epsilon$$
if $b\in S$.
In fact, we can replace $L^2_e({\cal B}_e)$ by ${\cal L(B}_e)$ and
consider the directed set $D= \{ (K,\epsilon): K$ is a compact subset
of $G$ and $\epsilon > 0 \}$.
For any $d=(K,\epsilon)\in D$, we take $\alpha_d$ and $\beta_d$
satisfying the above condition.
These are the required nets.
\end{remark}
We can now prove the main results of this section.
\par\medskip
\begin{proposition} \label{3.8}
If $\cal B$ has the approximation property, then the coaction
$\epsilon_{\cal B} = ({\rm id}\otimes \lambda_G)\circ\delta_{\cal B}$ is injective.
\end{proposition}
\noindent {\bf Proof}:
Let $\Phi_i = \Phi_{\alpha_i,\beta_i}$ be the map from $B$ to itself
as given by Definition \ref{3.6}(a)(ii) and $\Psi_i = \Psi_{\alpha_i,\beta_i}$.
Let $J_i=\Psi_i\circ\mu_{\lambda,T}$.
By Lemma \ref{3.2}, for any $f\in\cal L(B)$, $J_i(f)\in {\cal L}({\cal B})$
(note that we regard ${\cal L}({\cal B})\subseteq C^*({\cal B})$) and $J_i(f)(s) =
\Phi_i(f(s))$ ($s\in G$).
Since $\Phi_i\circ f$ converges to $f$ in ${\cal L}_1(\cal B)$
(by Lemma \ref{3.5}), $J_i(f)$ converges to $f$ in $C^*(\cal B)$.
Now because $\|J_i\|\leq \|\Psi_i\|\leq \sup_i\|\alpha_i\|
\|\beta_i\|\leq M$, we know that $J_i(x)$ converges to $x$ for all
$x\in C^*(\cal B)$ and $\epsilon_{\cal B}$ is injective.
\par\medskip
Note that if $G$ is amenable, we can also obtain directly from
Lemma \ref{1.1}(a) that $\epsilon_{\cal B}$ is injective.
\par\medskip
\begin{theorem} \label{3.9}
Let $\cal B$ be a $C^*$-algebraic bundle having the
approximation property (in particular, if $G$ is amenable).
Then $\cal B$ is amenable.
\end{theorem}
\noindent {\bf Proof:}
Proposition \ref{3.8} implies that $C_R^*({\cal B}) = C^*({\cal B})$
(see the paragraph before Remark \ref{2.b}).
Now the amenability of ${\cal B}$ clearly follows from Theorem \ref{2.d}.
\par\medskip
\par\medskip
\par\medskip
\section{Two special cases}
\par\medskip
\par\medskip
\noindent {\em I. Semi-direct product bundles and nuclearity of
crossed products.}
\par\medskip
Let $A$ be a $C^*$-algebra with action $\alpha$ by a locally compact
group $G$.
Let ${\cal B}$ be the semi-direct product bundle of $\alpha$.
\par\medskip
\begin{remark} \label{4.1}
$\cal B$ has the (respectively, strong) $M$-approximation property
if there exist nets $\{\gamma_i\}_{i\in I}$ and
$\{\theta_i\}_{i\in I}$ in $K(G;A)$ such that
$$\|\int_G \gamma_i(r)^*\gamma_i(r)\
dr\|\cdot\|\int_G \theta_i(r)^*\theta_i(r)\ dr\| \leq M^2$$
and for any $f\in K(G;A)$ (respectively, for any compact subset
$K$ of $G$),
$\int_G \gamma_i(r)^*a\alpha_t (\theta_i(t^{-1}r))\ dr$ converges to
$a\in A$ uniformly for $(t,a)$ in the graph of $f$ (respectively,
uniformly for $t\in K$ and $\|a\|\leq 1$).
\end{remark}
\begin{definition} \label{4.2}
An action $\alpha$ is said to have the (respectively, strong)
{\it ($M$-)approximation property}
(respectively, $\alpha$ is said to be {\it weakly amenable}) if
the $C^*$-algebraic bundle $\cal B$ associated with $\alpha$ has the
(respectively, strong) ($M$-)approximation property
(respectively, ${\cal B}$ is amenable).
\end{definition}
Let $G$ and $H$ be two locally compact groups.
Let $A$ and $B$ be $C^*$-algebras with actions $\alpha$ and $\beta$ by
$G$ and $H$ respectively.
Suppose that $\tau = \alpha\otimes \beta$ is the product action on
$A\otimes B$ by $G\times H$.
\par\medskip
\begin{lemma} \label{4.3}
With the notation as above, if $A$ is nuclear and both $\alpha$ and
$\beta$ have the approximation property, then $(A\otimes B)
\times_\tau(G\times H)=(A\otimes B)\times_{\tau,r}(G\times H)$.
\end{lemma}
\noindent {\bf Proof}:
Let $\cal B$, $\cal D$ and $\cal F$ be the semi-direct product bundles of
$\alpha$, $\beta$ and $\tau$ respectively.
Then $C^*_r({\cal F})=C^*_r({\cal B})\otimes C^*_r({\cal D})$
(by Example \ref{2.12}).
Moreover, since $A$ is nuclear, $C^*({\cal F}) =
C^*({\cal B})\otimes_{\max} C^*({\cal D})$ (by Example \ref{2.12} and
[10, 3.2]).
It is not hard to see that the coaction, $\delta_{\cal F}$, on
$C^*(\cal F)$ is the tensor product of the coactions on
$C^*(\cal B)$ and $C^*(\cal D)$.
Suppose that $C^*(\cal F)$ is a $C^*$-subalgebra of $\cal L(H)$.
Consider as in Section 2, the composition:
$$\mu_{\cal F}: C^*({\cal F})\stackrel{\delta_{\cal F}}
{\longrightarrow} C^*({\cal F})\otimes_{\rm max} C^*(G\times H)
\stackrel{{\rm id}\otimes \lambda_{G\times H}}{\longrightarrow}
{\cal L} ({\cal H}\otimes L^2(G\times H))$$
and identify its image with $C^*_R({\cal F})=C^*_r({\cal F})$
(see Remark \ref{2.b}(a)).
We also consider similarly the maps $\mu_{\cal B}$ and $\mu_{\cal D}$
from $C^*(\cal B)$ and $C^*(\cal D)$ to ${\cal L}({\cal H}\otimes
L^2(G))$ and ${\cal L}({\cal H}\otimes L^2(H))$ respectively.
Now for any $f\in \cal L(B)$ and $g\in \cal L(D)$,
we have $\mu_{\cal F}(f\otimes g) = (\mu_{\cal B}(f))_{12}
(\mu_{\cal D}(g))_{13}\in {\cal L}({\cal H}\otimes L^2(G)\otimes L^2(H))$.
As in Section 3, we define, for any $k\in {\cal L(B}_e)$ and
$l\in {\cal L(D}_e)$, an operator $V_{k\otimes l}$ from $\cal H$ to
${\cal H}\otimes L^2(G\times H)$ by
$V_{k\otimes l}\zeta (r,s) = k(r)(l(s)\zeta)$
($r\in G$; $s\in H$; $\zeta\in {\cal H}$).
It is not hard to see that $V_{k\otimes l}(\zeta) =
(V_k\otimes 1)V_l(\zeta)$ and
$$V_{k\otimes l}^*\mu_{\cal F}
(f\otimes g)V_{k'\otimes l'} = (V_k^*\mu_{\cal B}(f)V_{k'})
(V_l^*\mu_{\cal D}(g)V_{l'})\in \cal L(H)$$
(note that $B_r$
commutes with $D_s$ in ${\cal L}(\cal H)$).
Now let $k_i, k'_i\in \cal L(B)$ and $l_j, l'_j\in \cal L(D)$ be
the nets that give the corresponding approximation property on
$\cal B$ and $\cal D$ respectively.
Then $V_{k_i}^*\mu_{\cal B}(f)V_{k'_i}$ converges to $f$ in
$C^*(\cal B)$ and $V_{l_j}^*\mu_{\cal D}(g)V_{l'_j}$
converges to $g$ in $C^*(\cal D)$.
Hence $J_{i,j}(z) =
V_{k_i\otimes l_j}^*\mu_{\cal F}(z)V_{k'_i\otimes l'_j}$
converges to $z$ in $C^*(\cal F)$
for all $z\in {\cal L(B)}\otimes_{alg} {\cal L(D)}$.
Since $\|J_{i,j}\|$ is uniformly bounded, $\mu_{\cal F}$ is
injective and $C^*({\cal F}) = C^*_r({\cal F})$.
\par\medskip
An interesting consequence of this lemma is the nuclearity
of the crossed products of group actions with the approximation
property (which is a generalisation of the case of actions
of amenable groups).
Note that in the case of discrete groups, this was also proved by
Abadie in [1].
\par\medskip
\begin{theorem} \label{4.4}
Let $A$ be a $C^*$-algebra and $\alpha$ be an action on $A$ by a
locally compact group $G$.
If $A$ is nuclear and $\alpha$ has the approximation property,
then $A\times_{\alpha} G = A\times_{\alpha , r} G$
is also nuclear.
\end{theorem}
\noindent {\bf Proof}:
By Lemma \ref{4.3} (or Theorem \ref{3.9}),
$A\times_{\alpha} G = A\times_{\alpha , r} G$.
For any $C^*$-algebra $B$, let $\beta$ be the trivial action
on $B$ by the trivial group $\{e\}$.
Then $(A\times_\alpha G)\otimes_{\max} B =
(A\otimes_{\max}B)\times_{\alpha\otimes\beta} G =
(A\otimes B)\times_{\alpha\otimes\beta, r} G =
(A\times_{\alpha , r} G)\otimes B$ (by Lemma \ref{4.3} again).
\par\medskip
One application of Theorem \ref{4.4} is to relate the amenability of
Anantharaman-Delaroche (see [4, 4.1]) to the approximation property
in the case when $A$ is nuclear and $G$ is discrete.
The following corollary clearly follows from this theorem and [4, 4.5].
\par\medskip
\begin{corollary} \label{4.5}
Let $A$ be a nuclear $C^*$-algebra with an action $\alpha$ by
a discrete group $G$.
If $\alpha$ has the approximation property,
then $\alpha$ is amenable in the sense of Anantharaman-Delaroche.
\end{corollary}
We don't know if the two properties coincide in general.
However, we can have a more direct and transparent comparison of
them in the case of commutative $C^*$-algebras and show that they
are the same.
Furthermore, they also coincide in the case of finite
dimensional $C^*$-algebras.
\par\medskip
\begin{corollary} \label{4.a}
Let $A$ be a $C^*$-algebra with an action $\alpha$ by a discrete group $G$.
\par\noindent
(a) If $A$ is commutative, the following are equivalent:
\begin{enumerate}
\item[i.] $\alpha$ is amenable in the sense of Anantharaman-Delaroche;
\item[ii.] $\alpha$ has the positive 1-approximation property;
\item[iii.] $\alpha$ has the approximation property.
\end{enumerate}
\par\noindent
(b) If $A$ is unital and commutative or if $A$ is finite
dimensional, then (i)-(iii) are also equivalent to the
following conditions:
\begin{enumerate}
\item[iv.] $\alpha$ has the strong positive 1-approximation property;
\item[v.] $\alpha$ has the strong approximation property.
\end{enumerate}
\end{corollary}
\noindent {\bf Proof:}
(a) By [4, 4.9(h')], $\alpha$ is amenable in the sense of
Anantharaman-Delaroche if and only if
there exists a net $\{\gamma_i\}$ in $K(G;A)$ such that
$\|\sum_{r\in G} \gamma_i(r)^*\gamma_i(r)\|\leq 1$
and $\sum_{r\in G} \gamma_i(r)^*\alpha_t(\gamma_i(t^{-1}r))$ converges
to $1$ strictly for any $t\in G$.
This is exactly the original definition of the approximation property
given in [1].
Hence condition (i) is equivalent to condition (ii) (see Remark
\ref{3.7}(a)).
Now part (a) follows from Corollary \ref{4.5}.
\par\noindent
(b) Suppose that $A$ is both unital and commutative.
Let $\alpha$ satisfy condition (i) and $\{\gamma_i\}$ be the net as
given in the proof of part (a) above.
As $A$ is unital, the strict convergence and the norm convergence are
equivalent.
Moreover, as $G$ is discrete, any compact subset $K$ of $G$ is finite.
These, together with the commutativity of $A$, imply that
$\sum_{r\in G} \gamma_i(r)^*\alpha_t(\gamma_i(t^{-1}r))$ converges to 1
strictly for any $t\in G$ if and only if
$\sum_{r\in G} \gamma_i(r)^*a\alpha_t(\gamma_i(t^{-1}r))$ converges to
$a\in A$ uniformly for $t\in K$ and $\|a\|\leq 1$.
Thus, by Remark \ref{4.1}, we have the equivalence of (i) and (iv)
in the case of commutative unital $C^*$-algebras and the equivalence
of (i)-(v) follows from Lemma \ref{3.5} and Corollary \ref{4.5}.
Now suppose that $A$ is a finite dimensional $C^*$-algebra (but not
necessarily commutative).
By [4, 4.1] and [4, 3.3(b)], $\alpha$ satisfies condition (i) if and
only if there exists a net $\{\gamma_i\}$ in $K(G; Z(A))$
(where $Z(A)$ is the centre of $A=A^{**}$) such that
$\|\sum_{r\in G} \gamma_i(r)^*\gamma_i(r)\|\leq 1$
and for any $t\in G$, $\sum_{r\in G} \gamma_i(r)^*\alpha_t
(\gamma_i(t^{-1}r))$ converges to $1$ weakly (and hence converges
to $1$ in norm as $A$ is finite dimensional).
Let $K$ be any compact (and hence finite) subset of $G$.
Since $\alpha_t(\gamma_i(t^{-1}r))\in Z(A)$,
$\sum_{r\in G} \gamma_i(r)^*a \alpha_t(\gamma_i(t^{-1}r))$
converges to $a\in A$ uniformly for $t\in K$ and $\|a\|\leq 1$.
This shows that $\alpha$ satisfies condition (iv)
(see Remark \ref{4.1}).
The equivalence follows again from Lemma \ref{3.5} and
Corollary \ref{4.5}.
\par\medskip
Because of the above results, we believe that the approximation property
is a good candidate for the notion of amenability
of actions of locally compact groups on general $C^*$-algebras.
\par\medskip
\par\medskip
\noindent {\em II. Discrete groups: $G$-gradings and coactions.}
\par\medskip
Let $G$ be a discrete group and let $D$ be a $C^*$-algebra with a
$G$-grading (i.e. $D=\overline{\oplus_{r\in G} D_r}$ such that
$D_r\cdot D_s \subseteq D_{rs}$ and $D_r^*\subseteq D_{r^{-1}}$).
Then there exists a canonical $C^*$-algebraic bundle structure
(over $G$) on $D$.
We denote this bundle by $\cal D$.
Now by [6, \S 3], $D$ is a quotient of $C^*(\cal D)$.
Moreover, if the grading is topological in the sense that there exists a
continuous conditional expectation from $D$ to $D_e$ (see [6, 3.4]),
then $C^*_r({\cal D})$ is a quotient of $D$ (see [6, 3.3]).
Hence by [12, 3.2(1)] (or [10, 2.17]), there is an induced
non-degenerate coaction on $D$ by $C^*_r(G)$
which defines the given grading.
Now the proofs of [9, 2.6] and [6, 3.3], together with the above
observation, imply the following equivalence.
\par\medskip
\begin{proposition} \label{4.6}
Let $G$ be a discrete group and $D$ be a $C^*$-algebra.
Then a $G$-grading $D=\overline{\oplus_{r\in G} D_r}$ is topological
if and only if it is induced by a non-degenerate coaction
of $C^*_r(G)$ on $D$.
\end{proposition}
\begin{corollary} \label{4.7}
Let $D$ be a $C^*$-algebra with a non-degenerate coaction $\epsilon$
by $C^*_r(G)$.
Then it can be ``lifted'' to a full coaction, i.e. there exist a
$C^*$-algebra $A$ with a full coaction $\epsilon_A$ by
$G$ and a quotient map $q$ from $A$ to $D$ such that
$\epsilon\circ q = (q\otimes \lambda_G)\circ\epsilon_A$.
\end{corollary}
In fact, if $\cal D$ is the bundle as defined above, then we can take
$A=C^*({\cal D})$ and $\epsilon_A = \delta_{\cal D}$.
\par\medskip
\par\medskip
\par\medskip
\noindent {\bf References}
\par\medskip
\noindent
[1] F. Abadie, Tensor products of Fell bundles over discrete groups,
preprint (funct-an/9712006), Universidade de S\~{a}o Paulo, 1997.
\par\noindent
[2] C. Anantharaman-Delaroche, Action moyennable d'un groupe
localement compact sur une alg\`ebre de von Neumann,
Math. Scand. 45 (1979), 289-304.
\par\noindent
[3] C. Anantharaman-Delaroche, Action moyennable d'un groupe
localement compact sur une alg\`ebre de von Neumann II,
Math. Scand. 50 (1982), 251-268.
\par\noindent
[4] C. Anantharaman-Delaroche, Syst\`emes dynamiques non
commutatifs et moyennabilit\'e, Math. Ann. 279 (1987), 297-315.
\par\noindent
[5] S. Baaj and G. Skandalis, $C^{*}$-alg\`ebres de Hopf et
th\'eorie de Kasparov \'equivariante, $K$-theory 2 (1989), 683-721.
\par\noindent
[6] R. Exel, Amenability for Fell bundles, J. Reine Angew. Math.,
492 (1997), 41--73.
\par\noindent
[7] J. M. G. Fell and R. S. Doran, {\it Representations of *-algebras,
locally compact groups, and Banach *-algebraic bundles vol. 1 and 2},
Academic Press, 1988.
\par\noindent
[8] K. Jensen and K. Thomsen, {\it Elements of $KK$-Theory},
Birkh\"auser, 1991.
\par\noindent
[9] C. K. Ng, Discrete coactions on $C^*$-algebras,
J. Austral. Math. Soc. (Series A), 60 (1996), 118-127.
\par\noindent
[10] C. K. Ng, Coactions and crossed products of Hopf
$C^{*}$-algebras, Proc. London. Math. Soc. (3), 72 (1996), 638-656.
\par\noindent
[11] G. K. Pedersen, {\it $C^*$-algebras and their automorphism groups},
Academic Press, 1979.
\par\noindent
[12] I. Raeburn, On crossed products by coactions and their
representation theory, Proc. London Math. Soc. (3), 64 (1992), 625-652.
\par\noindent
[13] M. A. Rieffel, Induced representations of $C^*$-algebras,
Adv. Math. 13 (1974), 176--257.
\par\noindent
\par
\medskip
\noindent Departamento de Matem\'{a}tica, Universidade Federal de Santa
Catarina, 88010-970 Florian\'{o}polis SC, Brazil.
\par
\noindent $E$-mail address: exel@mtm.ufsc.br
\par\medskip
\noindent Mathematical Institute, Oxford University, 24-29 St. Giles, Oxford
OX1 3LB, United Kingdom.
\par
\noindent $E$-mail address: ng@maths.ox.ac.uk
\par
\end{document}
\section{Introduction}
The search for periodicities in time- or space-dependent signals is a topic
of the utmost relevance in many fields of research, from geology
(stratigraphy, seismology, etc.; Brescia et al. 1996) to astronomy
(Barone et al. 1994), where it finds wide application in the study of light
curves of variable stars, AGNs, etc.
The more sensitive instrumentation and observational techniques become, the
more frequently we find variable signals in time domain that previously were
believed to be constant. Searching for possible periodicities in these signal
curves is then a natural consequence, if not an important issue. One of
the most relevant problems concerning the techniques of periodic signal
analysis is the way in which data are collected: many time series are
collected at an uneven sampling rate. We face two types of problems, related
either to the unknown fundamental period of the data or to their unknown
multiple periodicities. Typical cases, for instance in astronomy, are the
determination of the fundamental period of eclipsing binaries from both light
and radial velocity curves, and the determination of multiple periodicities
in light curves of pulsating stars. The difficulty arising from unevenly spaced
data is rather obvious and many attempts have been made to solve the problem
in a more or less satisfactory way. In this paper we propose a new way
to approach the problem, based on neural networks, which seems to work
satisfactorily well and to overcome most of the problems encountered
when dealing with unevenly sampled data.
\section{Spectral analysis and unevenly spaced data}
\subsection{Introduction}
In what follows, we assume $x$ to be a physical variable measured at
discrete times $t_{i}$. ${x(t_{i})}$ can be written as the sum of the signal
$x_{s}$ and random errors $R$: $x_{i}=x(t_{i})=x_{s}(t_{i})+R(t_{i})$. The
problem we are dealing with is how to estimate fundamental frequencies which
may be present in the signal $x_{s}(t_{i})$
(Deeming 1975; Kay 1988; Marple 1987).
If $x$ is measured at uniform time steps (even sampling)
(Horne \& Baliunas 1986; Scargle 1982), there are many tools based on
Fourier analysis that effectively solve the problem
(Kay 1988; Marple 1987; Oppenheim \& Schafer 1965). These methods, however,
are usually unreliable for unevenly sampled data. For instance, the typical
approach of resampling the data into an evenly sampled sequence, through
interpolation, introduces a strong amplification of the noise which affects
the effectiveness of all Fourier based techniques which are strongly
dependent on the noise level
(Horowitz 1974).
There are other techniques used in specific areas
(Ferraz-Mello 1981; Lomb 1976); however, none of them directly faces the
problem, so they are not truly reliable. The most widely used tool for
periodicity analysis of evenly or unevenly sampled signals is the
Periodogram
(Lomb 1976; Scargle 1982); therefore we will refer to it to evaluate our
system.
\subsection{Periodogram and its variations}
The Periodogram (P), is an estimator of the signal energy in the frequency
domain
(Deeming 1975; Kay 1988; Marple 1987; Oppenheim \& Schafer 1965). It has
been extensively applied to unevenly spaced light curves of pulsating stars,
but there are difficulties in its use, especially concerning aliasing
effects.
\subsubsection{Scargle's Periodogram}
This tool is a variation of the classical P. It was introduced by J.D.
Scargle
(Scargle 1982) for these reasons: 1) data from instrumental sampling are
often not equally spaced; 2) due to P inconsistency
(Kay 1988; Marple 1987; Oppenheim \& Schafer 1965), we must introduce a
selection criterion for signal components.
In fact, in the case of even sampling, the classical P has a simple
statistical distribution: it is exponentially distributed for Gaussian noise.
In the uneven sampling case the distribution becomes very complex. However,
Scargle's P has the same distribution as in the even case
(Scargle 1982). Its definition is:
\begin{eqnarray} \label{PScargle}
P_x(f) & = & \frac{1}{2} \frac{[\sum_{n=0}^{N-1} x(n)\cos 2\pi f(t_n-\tau)]^2}
{\sum_{n=0}^{N-1} \cos^2 2\pi f(t_n-\tau)} + \nonumber \\
& & \frac{[\sum_{n=0}^{N-1} x(n)\sin
2\pi f(t_n-\tau)]^2}{\sum_{n=0}^{N-1} \sin^2 2\pi f(t_n-\tau)}
\end{eqnarray}
\noindent
where
\[
\tau=\frac{1}{4\pi f}\frac{\sum_{n=0}^{N-1} \sin 4\pi ft_n}{\sum_{n=0}^{N-1}
\cos 4\pi ft_n}
\]
\smallskip
\noindent
and $\tau $ is a shift variable on the time axis, $f$ is the
frequency, $\{x\left( n\right) ,t_{n}\}$ is the observation series.
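Equation (\ref{PScargle}) translates directly into code. The following NumPy sketch is our own illustrative implementation (function and variable names are ours, not part of any released tool):

```python
import numpy as np

def scargle_periodogram(t, x, freqs):
    """Scargle (1982) periodogram P_x(f) for unevenly sampled data.

    t     : sample times t_n
    x     : observations x(n)
    freqs : ordinary frequencies f (> 0) at which to evaluate P_x(f)
    """
    x = x - x.mean()                      # work with a zero-mean series
    P = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        w = 2.0 * np.pi * f
        # the offset tau makes the estimator invariant under time shifts
        tau = np.arctan2(np.sum(np.sin(2.0 * w * t)),
                         np.sum(np.cos(2.0 * w * t))) / (2.0 * w)
        c = np.cos(w * (t - tau))
        s = np.sin(w * (t - tau))
        P[i] = 0.5 * (np.sum(x * c) ** 2 / np.sum(c ** 2) +
                      np.sum(x * s) ** 2 / np.sum(s ** 2))
    return P
```

A peak of $P_x(f)$ then marks a candidate periodicity at the corresponding frequency.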
\subsubsection{Lomb's Periodogram}
This tool is another variation of the classical P and is similar to
Scargle's P. It was introduced by Lomb
(Lomb 1976); we used the {\it Numerical Recipes in C} implementation
(Numerical Recipes in C 1988-1992).
Suppose we have $N$ points $x(n)$ and compute their mean and variance:
\begin{equation}
\bar{x}=\frac{1}{N}\sum_{n=1}^{N}x(n)\qquad \qquad \sigma ^{2}=\frac{1}{N-1}
\sum_{n=1}^{N}\left( x(n)-\bar{x}\right) ^{2}. \label{eqc2.9}
\end{equation}
Then the normalised Lomb's P (power spectrum as a function of the angular
frequency $\omega \equiv 2\pi f>0$) is defined as follows:
\begin{eqnarray}
P_{N}(\omega ) & = & \frac{1}{2\sigma^{2}}\left[ \frac{[\sum_{n=0}^{N-1}\left(
x(n)-\bar{x}\right) \cos \omega (t_{n}-\tau )]^{2}}{\sum_{n=0}^{N-1}
\cos^{2}\omega (t_{n}-\tau )} \right] + \nonumber \\
& & + \frac{1}{2\sigma^{2}} \left[ \frac{[\sum_{n=0}^{N-1}\left( x(n)-
\bar{x}\right) \sin \omega (t_{n}-\tau )]^{2}}{\sum_{n=0}^{N-1}\sin ^{2}\omega
(t_{n}-\tau )} \right] \label{PLomb}
\end{eqnarray}
\noindent
where $\tau $ is defined by the equation
\[
\tan \left( 2\omega \tau \right) =\frac{\sum_{n=0}^{N-1}\sin 2\omega t_{n}}
{\sum_{n=0}^{N-1}\cos 2\omega t_{n}}
\]
\smallskip
\noindent
and $\tau $ is an offset, $\omega $ is the frequency, $\{x\left(n\right),
t_{n}\}$ is the observation series. The horizontal lines in the figures 19, 22,
25, 27, 32 and 34 correspond to the practical significance levels, as indicated
in (Numerical Recipes in C 1988-1992).
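For equation (\ref{PLomb}), SciPy ships a ready-made routine, `scipy.signal.lombscargle`, which takes angular frequencies. A minimal usage sketch, assuming SciPy is available (the signal parameters below are invented for illustration):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(42)
t = np.sort(rng.uniform(0.0, 100.0, 300))    # uneven sampling times
y = np.sin(2 * np.pi * 0.25 * t)             # signal with true f = 0.25
w = 2 * np.pi * np.linspace(0.05, 0.5, 400)  # angular frequency grid

pgram = lombscargle(t, y - y.mean(), w)      # Lomb periodogram P_N(omega)
f_best = w[np.argmax(pgram)] / (2 * np.pi)   # frequency of the highest peak
```

The highest peak of the periodogram recovers the underlying frequency even though the sampling is uneven.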
\subsection{Modern spectral analysis}
Frequency estimation of narrow band signals in Gaussian noise is a problem
related to many fields
(Kay 1988; Marple 1987). Since the classical methods of Fourier analysis
suffer from statistical and resolution problems, newer techniques based
on the analysis of the eigenvectors of the signal autocorrelation matrix were
introduced
(Kay 1988; Marple 1987).
\subsubsection{Spectral analysis with eigenvectors}
Let us assume we have a signal with $p$ sinusoidal components (narrow band).
The $p$ sinusoids are modelled as a stationary ergodic signal, which is
possible only if the phases are assumed to be independent random variables
uniformly distributed in $[0,2\pi)$
(Kay 1988; Marple 1987). To estimate the frequencies we exploit the
properties of the signal autocorrelation matrix (a.m.)
(Kay 1988; Marple 1987). The a.m. is the sum of the signal and the noise
matrices; the $p$ principal eigenvectors of the signal matrix allow the
estimation of the frequencies, and they coincide with the $p$ principal
eigenvectors of the total matrix.
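As a concrete (and standard) realisation of this idea, the MUSIC pseudospectrum estimates frequencies from the noise-subspace eigenvectors of an autocorrelation matrix built from sliding windows. The sketch below is our own illustration of the technique, not the estimator used later in the paper:

```python
import numpy as np

def music_pseudospectrum(x, p, m, freqs):
    """MUSIC-style frequency estimation for a real signal with p sinusoids.

    x     : evenly sampled signal
    p     : number of sinusoids (2*p complex exponentials)
    m     : autocorrelation matrix dimension (m > 2*p)
    freqs : normalised frequencies (cycles/sample) to scan
    """
    # estimate the m x m autocorrelation matrix from sliding windows
    N = len(x)
    X = np.array([x[i:i + m] for i in range(N - m + 1)])
    R = X.T @ X / X.shape[0]
    # eigenvectors with the smallest eigenvalues span the noise subspace
    vals, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    En = vecs[:, :m - 2 * p]                  # noise-subspace eigenvectors
    spectrum = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        a = np.exp(2j * np.pi * f * np.arange(m))       # steering vector
        # peaks where a(f) is (nearly) orthogonal to the noise subspace
        spectrum[i] = 1.0 / (np.linalg.norm(En.conj().T @ a) ** 2 + 1e-12)
    return spectrum
```

Peaks of the pseudospectrum occur where the steering vector lies (almost) in the signal subspace, i.e. at the sinusoid frequencies.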
\section{PCA Neural Nets\label{section3}}
\subsection{Introduction}
Principal Component analysis (PCA) is a widely used technique in data
analysis. Mathematically, it is defined as follows: let ${\bf C}=E({\bf x}
{\bf x}^{T})$ be the covariance matrix of L-dimensional zero mean input data
vectors ${\bf x}$. The {\em i}-th principal component of ${\bf x}$ is
defined as ${\bf x}^{T}{\bf c}(i)$, where ${\bf c}(i)$ is the normalized
eigenvector of {\bf C} corresponding to the {\em i}-th largest eigenvalue
$\lambda (i)$. The subspace spanned by the principal eigenvectors
${\bf c}(1),\ldots ,{\bf c}(M)$ $(M<L)$ is called the PCA subspace (of
dimensionality M)
(Oja et al. 1991; Oja et al. 1996). PCA can be realized neurally in
various ways
(Baldi \& Hornik 1989; Jutten \& Herault 1991; Oja 1982; Oja et al. 1991;
Plumbley 1993; Sanger 1989). The PCA neural network used by us is a one
layer feedforward neural network which is able to extract the principal
components of the stream of input vectors. Typically, Hebbian type learning
rules are used, based on the one unit learning algorithm originally proposed
by Oja
(Oja 1982). Many different versions and extensions of this basic algorithm
have been proposed during the recent years; see
(Karhunen \& Joutsensalo 1994; Karhunen \& Joutsensalo 1995; Oja et al.
1996; Sanger 1989).
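The one-unit Oja rule at the root of these networks fits in a few lines. The following NumPy sketch is our own illustration (the learning rate and iteration count are our choices, not values from the paper); it extracts the first principal component:

```python
import numpy as np

def oja_first_component(X, mu=0.01, epochs=20, seed=0):
    """One-unit Oja rule: w converges to the principal eigenvector
    of the covariance matrix of the (zero-mean) rows of X."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in X:
            y = w @ x                      # neuron response
            w += mu * y * (x - y * w)      # Hebbian term with decay
    return w / np.linalg.norm(w)
```

The decay term $-\mu y^2 w$ keeps $\|w\|$ close to one, so no explicit normalisation step is needed during learning.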
\subsection{Linear, robust, nonlinear PCA Neural Nets}
The structure of the PCA neural network can be summarised as follows
(Karhunen \& Joutsensalo 1994; Karhunen \& Joutsensalo 1995; Oja et al.
1996; Sanger 1989): there is one input layer and one forward layer of
neurons fully connected to the inputs; during the learning phase there are
feedback links among the neurons, which classify the network structure as either
hierarchical or symmetric. After the learning phase the network becomes
purely feedforward. The hierarchical case leads to the well known GHA
algorithm
(Karhunen \& Joutsensalo 1995; Sanger 1989); in the symmetric case we have
the Oja's subspace network
(Oja 1982).
PCA neural algorithms can be derived from optimisation problems, such as
variance maximization and representation error minimisation
(Karhunen \& Joutsensalo 1994; Karhunen \& Joutsensalo 1995), thus obtaining
nonlinear algorithms (and the corresponding neural networks). These neural networks
have the same architecture of the linear ones: either hierarchical or
symmetric. These learning algorithms can be further classified into robust
PCA algorithms and nonlinear PCA algorithms. We call a PCA algorithm robust
when the objective function grows less than quadratically
(Karhunen \& Joutsensalo 1994; Karhunen \& Joutsensalo 1995). The nonlinear
learning function appears at selected places only. In nonlinear PCA
algorithms all the outputs of the neurons are nonlinear function of the
responses.
\subsubsection{Robust PCA algorithms}
In the robust generalization of variance maximisation, the objective
function $f(t)$ is assumed to be a valid cost function
(Karhunen \& Joutsensalo 1994; Karhunen \& Joutsensalo 1995), such as
$\ln\cosh(t)$ and $|t|$. This leads to the algorithm:
\begin{eqnarray} \label{eq34}
{\bf w}_{k+1}(i) &=&{\bf w}_{k}(i)+\mu _{k}g(y_{k}(i)){\bf e}_{k}(i), \\
\qquad {\bf e}_{k}(i) &=&{\bf x}_{k}-\sum_{j=1}^{I(i)}y_{k}(j){\bf w}_{k}(j)
\nonumber
\end{eqnarray}
In the hierarchical case we have $I(i)=i$. In the symmetric case $I(i)=M$,
the error vector ${\bf e}_k(i)$ becomes the same ${\bf e}_k$ for all the
neurons, and equation (\ref{eq34}) can be compactly written as:
\begin{eqnarray} \label{eq35}
{\bf W}_{k+1}={\bf W}_k+\mu{\bf e}_kg({\bf y}_k^T)
\end{eqnarray}
\noindent
where ${\bf y}={\bf W}^T_k{\bf x}$ is the instantaneous vector of neuron
responses. The learning function $g$, derivative of $f$, is applied
separately to each component of the argument vector.
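In the symmetric case, equation (\ref{eq35}) amounts to one outer-product update per input vector. A hedged sketch with $g = \tanh$ (one admissible robust learning function; all parameter values here are our own choices):

```python
import numpy as np

def robust_pca_symmetric(X, M, mu=0.005, epochs=50, seed=1):
    """Symmetric robust PCA rule W <- W + mu * e * g(y)^T  (eq. 35),
    with g = tanh; the columns of W converge to a basis of the
    M-dimensional principal subspace of the data."""
    rng = np.random.default_rng(seed)
    L = X.shape[1]
    W = np.linalg.qr(rng.standard_normal((L, M)))[0]   # orthonormal start
    for _ in range(epochs):
        for x in X:
            y = W.T @ x                   # neuron responses
            e = x - W @ y                 # common error vector (symmetric case)
            W += mu * np.outer(e, np.tanh(y))
    return W
```

Note that in the symmetric case only the spanned subspace is recovered, not the individual eigenvectors.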
The robust generalisation of the representation error problem
(Karhunen \& Joutsensalo 1994; Karhunen \& Joutsensalo 1995), with
$f(t)\le t^2$, leads to the stochastic gradient algorithm:
\begin{eqnarray} \label{eq37}
{\bf w}_{k+1}(i) & = & {\bf w}_k(i)+\mu ({\bf w}_k(i)^Tg({\bf e}_k(i))
{\bf x}_k + \\
& + & {\bf x}_k^T{\bf w}_k(i)g({\bf e}_k(i))) \nonumber
\end{eqnarray}
\noindent
This algorithm can be again considered in both hierarchical and symmetric
cases. In the symmetric case $I(i)=M$, the error vector is the same
$({\bf e}_k)$ for all the weights ${\bf w}_k$. In the hierarchical case
$I(i)=i$, equation (\ref{eq37}) gives the robust counterparts of the principal
eigenvectors ${\bf c}(i)$.
\subsubsection{Approximated Algorithms}
The first update term ${\bf w}_k(i)^Tg({\bf e}_k(i)){\bf x}_k$ in equation
(\ref{eq37}) is proportional to the same vector ${\bf x}_k$ for all weights
${\bf w}_k(i)$. Furthermore, we can assume that the error vector ${\bf e}_k$
should be relatively small after the initial convergence. Hence, we can
neglect the first term in equation (\ref{eq37}), which leads to:
\begin{equation} \label{eq39}
{\bf w}_{k+1}(i)={\bf w}_k(i)+\mu {\bf x}_k^T y_k(i)g({\bf e}_k(i))
\end{equation}
\subsubsection{Nonlinear PCA Algorithms}
Let us consider now the nonlinear extensions of PCA algorithms. We can
obtain them in a heuristic way by requiring all neuron outputs to be always
nonlinear in equation (\ref{eq34})
(Karhunen \& Joutsensalo 1994; Karhunen \& Joutsensalo 1995). This leads to:
\begin{eqnarray}
{\bf w}_{k+1}(i) & = & {\bf w}_{k}(i)+\mu g(y_{k}(i)){\bf b}_{k}(i),
\label{eq310} \\
\qquad {\bf b}_{k}(i) &=& {\bf x}_{k}-\sum_{j=1}^{I(i)}g(y_{k}(j))
{\bf w}_{k}(j)\quad \forall i=1,\ldots ,p \nonumber
\end{eqnarray}
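The hierarchical case of equation (\ref{eq310}) can be sketched the same way; the key difference is that unit $i$ sees the input deflated by the $g$-weighted contributions of units $1,\ldots,i$ (again $g=\tanh$, and all parameter values are our own choices):

```python
import numpy as np

def nonlinear_pca_hierarchical(X, M, mu=0.02, epochs=100, seed=2):
    """Hierarchical nonlinear PCA rule (eq. 310): unit i sees the input
    deflated by the g-weighted contributions of units 1..i."""
    rng = np.random.default_rng(seed)
    L = X.shape[1]
    W = np.linalg.qr(rng.standard_normal((L, M)))[0]
    for _ in range(epochs):
        for x in X:
            W0 = W.copy()                  # eq. (310) uses the weights at step k
            gy = np.tanh(W0.T @ x)         # g applied to all responses
            for i in range(M):
                # b_k(i) = x - sum_{j<=i} g(y_k(j)) w_k(j)
                b = x - W0[:, :i + 1] @ gy[:i + 1]
                W[:, i] += mu * gy[i] * b
    return W
```

Unlike the symmetric rule, the hierarchical deflation makes the individual units converge (up to sign and scale) to the individual robust principal directions.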
\section{Independent Component Analysis}
Independent Component Analysis (ICA) is a useful extension of PCA that was
developed in the context of source or signal separation applications
(Oja et al. 1996): instead of requiring that the coefficients of a linear
expansion of data vectors are uncorrelated, in ICA they must be mutually
independent or as independent as possible. This implies that second order
moments are not sufficient, but higher order statistics are needed in
determining ICA. This provides a more meaningful representation of data
than PCA. In current ICA methods based on PCA neural networks, the following
data model is usually assumed. The $L$-dimensional $k$-th data vector
${\bf x}_{k}$ is of the form
(Oja et al. 1996):
\begin{equation}
{\bf x}_{k}={\bf As}_{k}+{\bf n}_{k}=\sum_{i=1}^{M}s_{k}(i){\bf a}(i)+
{\bf n}_{k} \label{eq311}
\end{equation}
where in the M-vector ${\bf s}_{k}=[s_{k}(1),\ldots ,s_{k}(M)]^{T}$,
$s_{k}(i)$ denotes the $i$-th independent component (source signal) at time
$k$, ${\bf A}=[{\bf a}(1),\ldots ,{\bf a}(M)]$ is a $L\times M$ {\em mixing
matrix} whose columns ${\bf a}(i)$ are the basis vectors of ICA, and
${\bf n}_{k}$ denotes noise.
The source separation problem is now to find an $M\times L$ separating
matrix ${\bf B}$ such that the $M$-vector ${\bf y}_{k}={\bf Bx}_{k}$ is an
estimate ${\bf y}_{k}={\bf \hat{s}}_{k}$ of the original independent source
signals
(Oja et al. 1996).
\subsection{Whitening}
Whitening is a linear transformation ${\bf A}$ such that, given a matrix
${\bf C}$, we have ${\bf ACA}^{T}={\bf D}$ where ${\bf D}$ is a diagonal
matrix with positive elements
(Kay 1988; Marple 1987).
Several separation algorithms exploit the fact that if the data vectors
${\bf x}_{k}$ are first pre-processed by whitening (i.e.
$E({\bf x}_{k}{\bf x}_{k}^{T})={\bf I}$, with $E(\cdot)$ denoting the
expectation), then the separating matrix ${\bf B}$ becomes orthogonal
(${\bf BB}^{T}={\bf I}$; see Oja et al. 1996).
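One common way to build such a whitening transformation (a sketch; the text does not prescribe a particular construction) is through the eigendecomposition of the sample covariance matrix:

```python
import numpy as np

def whiten(X):
    """Whiten the columns of X (one data vector per column) so that the
    sample covariance of the result is the identity matrix."""
    Xc = X - X.mean(axis=1, keepdims=True)       # remove the mean
    C = Xc @ Xc.T / Xc.shape[1]                  # sample covariance E(x x^T)
    d, E = np.linalg.eigh(C)                     # C = E diag(d) E^T
    V = np.diag(1.0 / np.sqrt(d)) @ E.T          # whitening matrix
    return V @ Xc, V
```

After this pre-processing, the separating matrix can be searched for among orthogonal matrices only.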
Approximating contrast functions which are maximised for a separating matrix
have been introduced because the involved probability densities are unknown
(Oja et al. 1996).
It can be shown that, for prewhitened input vectors, the simpler contrast
function given by the sum of kurtoses is maximised by a separating matrix
${\bf B}$
(Oja et al. 1996).
However, we found in our experiments that whitening did not work as well as
we expected, because the frequencies estimated with the neural estimator
(n.e.) on prewhitened signals were not very accurate.
We can either pre-process the signal by whitening it and then apply the
n.e., or apply the whitening, separate the signal into independent
components with the nonlinear neural algorithm of equation (\ref{eq310}),
and then apply the n.e. to each of these components to estimate the single
frequencies separately.
The first method gives results comparable to or worse than the n.e. without
whitening. The second one gives worse results and is very expensive: when we
used whitening in our n.e., the results were worse and more time consuming
than those obtained with the standard n.e. (i.e. without whitening the
signal). Experimental results are given in the following sections. For these
reasons whitening is not a suitable technique to improve our n.e.\ .
\section{The neural network based estimator system}
The process for periodicity analysis can be divided into the following steps:
\par\noindent
{\bf - Preprocessing}
\par\noindent
We first interpolate the data with a simple linear fitting, then
calculate and subtract the average pattern to obtain a zero-mean process
(Karhunen \& Joutsensalo 1994; Karhunen \& Joutsensalo 1995).
\par\noindent
{\bf - Neural computing}
\par\noindent
The fundamental learning parameters are:
\par\noindent
{\bf 1)} the initial weight matrix;
\par\noindent
{\bf 2)} the number of neurons, i.e. the number of principal eigenvectors
we need, which is equal to twice the number of signal periodicities (for
real signals);
\par\noindent
{\bf 3)} $\epsilon $, i.e. the threshold parameter for convergence;
\par\noindent
{\bf 4)} $\alpha $, the nonlinear learning function parameter;
\par\noindent
{\bf 5)} $\mu $, the learning rate.
\par\noindent
We initialise the weight matrix ${\bf W}$ by assigning the classical small
random values. Alternatively, we can use the first patterns of the signal as
the columns of the matrix: experimental results show that the latter
technique speeds up the convergence of our neural estimator (n.e.). However,
it cannot be used with anomalously shaped signals, such as stratigraphic
geological signals.
Experimental results show that $\alpha $ can be fixed to $1$, $5$, $10$ or
$20$, even if for symmetric networks a smaller value of $\alpha $ is
preferable for convergence reasons. Moreover, the learning rate $\mu $ can
be decreased during the learning phase, but in our experiments we fixed it
between $0.05$ and $0.0001$.
We use a simple criterion to decide whether the neural network has reached
convergence: we calculate the distance between the weight matrix at step
$k+1$, ${\bf W}_{k+1}$, and the matrix at the previous step, ${\bf W}_k$;
if this distance is less than a fixed error threshold $\epsilon$, we stop
the learning process.
We finally have the following general algorithm in which STEP 4 is one of
the neural learning algorithms seen above in section \ref{section3}:
\begin{itemize}
\itemindent=-0.5cm
\item[]{\bf STEP 1} Initialise the weight vectors ${\bf w}_{0}(i)\quad
\forall i=1,\ldots ,p$ with small random values, or with orthonormalised
signal patterns. Initialise the learning threshold $\epsilon $, the learning
rate $\mu $. Reset pattern counter $k=0$.
\item[]{\bf STEP 2} Input the $k$-th pattern ${\bf x}_{k}=[x(k),\ldots,
x(k+N-1)]$ where $N$ is the number of input components.
\item[]{\bf STEP 3} Calculate the output of each neuron: $y(i) =
{\bf w}^{T}(i){\bf x}_{k}\qquad \forall i=1,\ldots, p$.
\item[]{\bf STEP 4} Modify the weights ${\bf w}_{k+1}(i)={\bf w}_{k}(i)+\mu
_{k}g(y_{k}(i)){\bf e}_{k}(i)\qquad \forall i=1, \ldots, p$.
\item[]{\bf STEP 5} Calculate \begin{equation}
testnorma=\sqrt{\sum_{j=1}^{p}\sum_{i=1}^{N}
({\bf w}_{k+1}(ij)-{\bf w}_{k}(ij))^{2}}. \end{equation}
\item[]{\bf STEP 6} Convergence test: if $(testnorma < \epsilon )$ then goto
{\bf STEP 8}.
\item[]{\bf STEP 7} $k=k+1$. Goto {\bf STEP 2}.
\item[]{\bf STEP 8} End.
\end{itemize}
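The STEPs above can be sketched in Python as follows. This is a minimal illustration, not the authors' code: STEP 4 is written here with the hierarchical robust rule and the learning function $g_{2}(t)=sgn(t)\log (1+\alpha |t|)$, and the parameter values are arbitrary.

```python
import numpy as np

def train_neural_estimator(signal, p, N, mu=1e-3, alpha=10.0, eps=1e-3,
                           max_iter=5000):
    """Train p weight vectors of length N on sliding patterns of the signal."""
    rng = np.random.default_rng(0)
    g = lambda t: np.sign(t) * np.log1p(alpha * np.abs(t))  # g2 learning function
    W = 0.01 * rng.standard_normal((N, p))   # STEP 1: small random initialisation
    n_patterns = len(signal) - N + 1
    k = 0
    for _ in range(max_iter):
        start = k % n_patterns
        x = signal[start:start + N]          # STEP 2: k-th pattern
        y = W.T @ x                          # STEP 3: neuron outputs
        W_new = W.copy()
        for i in range(p):                   # STEP 4: hierarchical robust update
            e = x - W[:, :i + 1] @ y[:i + 1]
            W_new[:, i] = W[:, i] + mu * g(y[i]) * e
        testnorma = np.linalg.norm(W_new - W)  # STEP 5
        W = W_new
        if testnorma < eps:                  # STEP 6: convergence test
            break
        k += 1                               # STEP 7
    return W                                 # STEP 8
```

After training, the columns of `W` approximate the principal eigenvectors and are passed to the frequency estimator described next.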
\par\noindent
{\bf - Frequency estimator}
\par\noindent
We exploit the frequency estimator {\em MUltiple SIgnal Classification}
(MUSIC), which takes as input the weight matrix columns after the learning.
The estimated signal frequencies are obtained as the peak locations of the
following function
(Kay 1988; Marple 1987):
\begin{equation}
P_{MUSIC} = \log \left( \frac{1}{1-\sum_{i=1}^M |{\bf e}_f^H{\bf w}(i)|^2}
\right)
\end{equation}
where ${\bf w}(i)$ is the $i$-th weight vector after learning, and
${\bf e}_f^H$ is the pure sinusoidal vector
${\bf e}_f^H=[1,e^{j2\pi f},\ldots,e^{j2\pi f(L-1)}]^H$.
When $f$ is the frequency of the $i$-th sinusoidal component, $f=f_i$, we
have ${\bf e}_f = {\bf e}_{f_i}$ and $P_{MUSIC} \to \infty$. In practice we
obtain a peak at, or very close to, each component frequency. The estimates
correspond to the highest peaks.
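For illustration, the function above can be evaluated on a frequency grid as follows (a sketch; it assumes the steering vectors are normalised to unit length and clamps the denominator to keep the logarithm finite, two choices not stated in the text):

```python
import numpy as np

def music_spectrum(W, freqs):
    """Evaluate P_MUSIC(f) = log(1 / (1 - sum_i |e_f^H w(i)|^2)) on a grid.

    W : (N, p) learned weight matrix with (approximately) orthonormal columns.
    """
    N = W.shape[0]
    n = np.arange(N)
    P = []
    for f in freqs:
        e = np.exp(2j * np.pi * f * n) / np.sqrt(N)   # unit-norm sinusoidal vector
        proj = np.sum(np.abs(e.conj() @ W) ** 2)      # projection on weight subspace
        P.append(np.log(1.0 / max(1.0 - proj, 1e-12)))
    return np.array(P)
```

The estimated frequencies are then read off as the locations of the highest peaks.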
\section{MUSIC and the Cramer-Rao Lower Bound}
In this section we show the relation between the MUSIC estimator and the
Cramer-Rao bound, following the notation and the conditions proposed by
Stoica and Nehorai in their paper (Stoica and Nehorai 1990).
\subsection{The model}
The problem under consideration is to determine the parameters of the
following model:
\begin{equation}
{\bf y}(t)=A({\bf \theta }){\bf x}(t)+{\bf e}(t) \label{eq*.1}
\end{equation}
\noindent
where $\left\{ {\bf y}(t)\right\} \in C^{m\times 1}$ are the vectors of the
observed data, $\left\{ {\bf x}(t)\right\} \in C^{n\times 1}$ are the
unknown vectors and ${\bf e}(t)\in C^{m\times 1}$ is the additive noise; the
matrix $A(\theta )\in C^{m\times n}$ and the vector $\theta $ are given by
\begin{equation}
A({\bf \theta })=\left[ {\bf a}\left( \omega _{1}\right) ...{\bf a}\left(
\omega _{n}\right) \right] ;\qquad \qquad {\bf \theta }=\left[ \omega
_{1}...\omega _{n}\right] \label{eq*.2}
\end{equation}
\noindent
where ${\bf a}\left( \omega \right) $ varies with the application. Our aim
is to estimate the unknown parameters ${\bf \theta }$. The dimension $n$
of ${\bf x}(t)$ is supposed to be known a priori, and the estimate of the
parameters of ${\bf x}(t)$ is easy once ${\bf \theta }$ is known.
We now reformulate MUSIC following the above notation. The MUSIC estimate
is given by the positions of the $n$ smallest values of the following
function:
\begin{equation}
f\left( \omega \right) ={\bf a}^{\ast }\left( \omega \right)
\hat{G}\hat{G}^{\ast }{\bf a}\left( \omega \right)
={\bf a}^{\ast }\left( \omega \right)
\left[ I-\hat{S}\hat{S}^{\ast }\right] {\bf a}\left( \omega \right)
\label{eq*.3}
\end{equation}
From equation (\ref{eq*.3}) we can define the estimation error of a given
parameter: $\left\{ \hat{\omega}_{i}-\omega _{i}\right\} $ has (for large
$N$) an asymptotic Gaussian distribution with zero mean and the following
covariance matrix:
\begin{equation}
C_{MU}=\frac{\sigma }{2n}\left( H\circ I \right)^{-1}
Re \left\{ H\circ
\left( A^{\ast }UA\right)^{T} \right\} \left( H\circ I\right)^{-1}
\label{eq*.4}
\end{equation}
\noindent
where $Re \left( x\right) $ is the real part of $x$,
\begin{equation}
H=D^{\ast }GG^{\ast }D=D^{\ast }\left[ I-A\left( A^{\ast }A\right)
^{-1}A^{\ast }\right] D \label{eq*.5}
\end{equation}
\noindent
and where $U$ is implicitly defined by:
\begin{equation}
A^{\ast }UA=P^{-1}+\sigma P^{-1}\left( A^{\ast }A\right) ^{-1}P^{-1}
\label{eq*.6}
\end{equation}
\noindent
where $P$ is the covariance matrix of $x\left( t\right) $. The elements of
the diagonal of the matrix $C_{MU}$ are the variances of the estimation
error. On the other hand, the Cramer-Rao lower bound of the covariance
matrix of every estimator of ${\bf \theta }$, for large $N$, is given by:
\begin{equation}
C_{CR}=\frac{\sigma }{2n}\left\{ Re \left[ H\circ P^{T}\right]
\right\}^{-1}. \label{eq*.7}
\end{equation}
\noindent
Therefore, under the condition that $P$ is diagonal, the statistical
efficiency can be defined as:
\begin{equation}
\left[ C_{MU}\right] _{ii}\geq \left[ C_{CR}\right] _{ii} \label{eq*.8}
\end{equation}
\noindent
where equality is reached as $m$ increases if and only if
\begin{equation}
{\bf a}^{\ast }\left( \omega \right) {\bf a}\left( \omega \right)
\longrightarrow \infty \qquad \qquad \mbox{as}
\quad m\longrightarrow \infty .
\label{eq*.9}
\end{equation}
\noindent
For $P$ non-diagonal, $\left[ C_{MU}\right] _{ii}>\left[ C_{CR}\right]_{ii}$.
\noindent
To adapt the model used in spectral analysis
\begin{equation}
{\bf y}(k)=\sum_{i=1}^{p}A_{i}e^{j\omega _{i}k}+{\bf e}(k)\qquad \qquad
k=1,2,...,M \label{eq*.10}
\end{equation}
\noindent
where $M$ is the total number of samples, to equation (\ref{eq*.3}), we make
the following transformations, after fixing an integer $m>p$:
\begin{eqnarray}
{\bf y}(t) &=&\left[ y_{t}\quad ...\quad y_{t+m-1}\right] \nonumber \\
{\bf a}\left( \omega \right) &=&\left[ 1\quad e^{j\omega }\quad \ldots
\quad e^{j\omega \left( m-1\right) }\right] \label{eq*.23} \\
{\bf x}(t) &=&\left[ A_{1}e^{j\omega _{1}t}\quad ...\quad A_{n}e^{j\omega
_{n}t}\right] \qquad t=1,...,M-m+1 \nonumber
\end{eqnarray}
\noindent
In this way our model satisfies the conditions of Stoica and Nehorai (1990).
Moreover, equations (\ref{eq*.23}) depend on the choice of $m$, which
influences the minimum error variance.
\subsection{Comparison between PCA-MUSIC and the Cramer-Rao lower bound}
In this subsection we compare the n.e. method with the Cramer-Rao lower
bound, by varying the frequencies distance, the parameters $M$ and $m$ and
the noise variance.
The experiments show that, for fixed $M$ and $m$ and varying (white
Gaussian) noise variance, the n.e. estimate is more accurate for small
values of the noise variance, as shown in figures 1-3. For small $\Delta
\omega $, the estimate variance is far from the bound. By increasing $m$ the
estimate improves, but there is a sensitivity to the noise (figures 4-6). By
varying $M$, the estimator shows a sensitivity to the number of points and
to $m$ (figures 7-8). In fact, with a sufficiently large number of points we
reach the bound, as illustrated in figures 9-10.
\begin{figure}
\includegraphics[width=8cm , height=7cm]{ds1489f1.ps}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=5$, $\sigma=0.5$ and
$M=50$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f2.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=5$, $\sigma=0.001$
and $M=50$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f3.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=5$, $\sigma=0.0001$ and
$M=50$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f4.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=20$, $\sigma=0.5$ and
$M=50$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f5.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=20$, $\sigma=0.01$ and
$M=50$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f6.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=20$, $\sigma=0.0001$ and
$M=50$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f7.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=5$, $\sigma=0.01$ and
$M=20$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f8.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=5$, $\sigma=0.01$ and
$M=20$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f9.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=20$, $\sigma=0.01$ and
$M=100$.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f10.ps}
}}
\caption[]{CRB and standard deviation of n.e. estimates; abscissa is
the distance between the frequencies $\omega_{2}$ and $\omega_{1}$. \\
CRB (down); standard deviation of n.e. (up) with $m=50$, $\sigma=0.001$ and
$M=100$.}
\end{figure}
Therefore, the n.e. estimate depends both on the increase of $m$ and on the
number of points in the input sequence. By increasing the number of points,
we improve the estimate and the error approaches the Cramer-Rao bound. On
the other hand, for very small noise variances, the estimate reaches a very
good performance. Finally, we see that in all the experiments shown in the
figures we reach the bound with a good approximation, and we can conclude
that the n.e.\ method is statistically efficient.
\section{Experimental results}
\subsection{Introduction}
In this section we show the performance of the neural-based estimator system
on artificial and real data. The artificial data are generated following the
literature
(Kay 1988; Marple 1987) and are noisy sinusoidal mixtures. They are used to
select the neural models for the next phases and to compare the n.e. with
the P, using Monte Carlo methods to generate the samples. Real data,
instead, come from astrophysics: the real signals are light curves of
Cepheids and a light curve in the Johnson system.
In sections \ref{realistic data} and \ref{real data}, we use an extension of
MUSIC which directly handles unevenly sampled data, without the
interpolation step of the previous algorithm, in the following way:
\begin{equation}
P_{MUSIC}^{\prime }=\frac{1}{M-\sum_{i=1}^{p}|{\bf e}_{f}^{H}{\bf w}(i)|^{2}}
\end{equation}
\noindent
where $p$ is the number of frequencies, ${\bf w}(i)$ is the $i$-th weight
vector of the PCA neural network after the learning, and ${\bf e}_{f}^{H}$
is the sinusoidal vector ${\bf e}_{f}^{H}=[1,e^{j2\pi ft_{0}},\ldots
,e^{j2\pi ft_{(L-1)}}]^{H}$, where $\left\{ t_{0},t_{1},\ldots ,t_{\left(
L-1\right) }\right\} $ are the first $L$ components of the temporal
coordinates of the uneven signal.
Furthermore, to optimise the performance of the PCA neural networks, we stop
the learning process when
$\sum_{i=1}^{p}|{\bf e}_{f}^{H}{\bf w}(i)|^{2}>M\quad \forall f$, thus
avoiding overfitting problems.
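A sketch of this modified estimator follows (assumed details, not fixed by the text: the steering vectors are built directly on the uneven time stamps, $M$ is passed in as the constant appearing in the denominator, and the denominator is clamped to stay positive):

```python
import numpy as np

def music_uneven(W, times, freqs, M):
    """P'_MUSIC(f) = 1 / (M - sum_i |e_f^H w(i)|^2), with steering vectors
    e_f built on the actual (uneven) temporal coordinates times[0..L-1].

    W : (L, p) weight matrix; times : (L,) uneven sampling instants.
    """
    P = []
    for f in freqs:
        e = np.exp(2j * np.pi * f * times)        # sinusoidal vector on uneven grid
        proj = np.sum(np.abs(e.conj() @ W) ** 2)  # projection on weight subspace
        P.append(1.0 / max(M - proj, 1e-12))
    return np.array(P)
```

As before, the estimated frequencies correspond to the highest peaks of the returned spectrum.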
\subsection{Models selection\label{model selection}}
In this section we use synthetic data to select the neural networks used in
the subsequent experiments. In this case, the uneven sampling is obtained by
randomly deleting a fixed number of points from the synthetic sinusoid
mixtures: this is a widely used technique in the specialised literature
(Horne \& Baliunas 1986).
The experiments are organised as follows. First of all, we use synthetic
unevenly sampled signals to compare the different neural algorithms in the
neural estimator (n.e.) with Scargle's P.
For this type of experiment, we perform a statistical test using five
synthetic signals. Each one is composed of the sum of five sinusoids with
frequencies randomly chosen in $[0,0.5]$ and phases randomly chosen in
$[0,2\pi ]$
(Kay 1988; Karhunen \& Joutsensalo 1994; Marple 1987), added to white random
noise of fixed variance. We take $200$ samples of each signal and randomly
discard $50\%$ of them ($100$ points), obtaining an uneven sampling
(Horne \& Baliunas 1986). In this way we have several degrees of randomness:
frequencies, phases, noise, deleted points.
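The construction of one such test signal can be sketched as follows (frequency, phase and deletion choices are randomised exactly as described above; the random seed is an arbitrary assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200)                                  # 200 evenly spaced samples
freqs = rng.uniform(0.0, 0.5, 5)                    # 5 random frequencies in [0, 0.5]
phases = rng.uniform(0.0, 2 * np.pi, 5)             # 5 random phases in [0, 2*pi]
signal = sum(np.sin(2 * np.pi * f * t + ph) for f, ph in zip(freqs, phases))
signal += np.sqrt(0.5) * rng.standard_normal(t.size)    # white noise, variance 0.5
keep = np.sort(rng.choice(t.size, 100, replace=False))  # discard 50% of the points
t_uneven, x_uneven = t[keep], signal[keep]              # unevenly sampled signal
```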
After this, we interpolate the signal and evaluate the P and the n.e. system
with the following neural algorithms: the robust algorithm of equation
(\ref{eq34}) in the hierarchical and symmetric cases, and the nonlinear
algorithm of equation (\ref{eq310}) in the hierarchical and symmetric cases.
Each of these is used with two nonlinear learning functions:
$g_1(t)=\tanh(\alpha t)$ and $g_2(t)=sgn(t)\log (1+\alpha|t|)$. Therefore we
have eight different neural algorithms to evaluate.
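The two learning functions can be written directly as (a small sketch; the default $\alpha$ is just the value used in most of our runs):

```python
import numpy as np

def g1(t, alpha=10.0):
    """g1(t) = tanh(alpha * t): bounded, sigmoid-like learning function."""
    return np.tanh(alpha * t)

def g2(t, alpha=10.0):
    """g2(t) = sgn(t) * log(1 + alpha * |t|): odd, logarithmically growing."""
    return np.sign(t) * np.log1p(alpha * np.abs(t))
```

Both functions are odd and compress large outputs, which is what gives the update rules their robustness against outliers.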
We chose these algorithms after several experiments involving all the neural
algorithms presented in section \ref{section3}, with several learning
functions, in which we verified that the behaviour of the algorithms and
learning functions cited above was equal to or better than that of the
others. We therefore restricted the range of algorithms to better show the
most relevant features of the test.
We evaluated the average differences between target and estimated
frequencies. This was repeated for the five signals; then, for each
algorithm, we averaged the single results over the five signals. The smaller
these averages were, the greater the accuracy. We also calculated the
average number of epochs and the CPU time needed for convergence, and
compared them with the behaviour of the P.
Common signal parameters are: number of frequencies $=5$, noise variance
$=0.5$, number of sampled points $=200$, number of deleted points $=100$.
\par\noindent
Signal 1: frequencies=$0.03, 0.19, 0.25, 0.33, 0.46 \quad 1/s $
\par\noindent
Signal 2: frequencies=$0.02, 0.11, 0.20, 0.33, 0.41 \quad 1/s $
\par\noindent
Signal 3: frequencies=$0.34, 0.29, 0.48, 0.42, 0.04 \quad 1/s $
\par\noindent
Signal 4: frequencies=$0.32, 0.20, 0.45, 0.38, 0.13 \quad 1/s $
\par\noindent
Signal 5: frequencies=$0.02, 0.37, 0.16, 0.49, 0.31 \quad 1/s $
\par\noindent
Neural parameters: $\alpha =10.0$; $\mu =0.0001$; $\epsilon =0.001$; number
of points in each pattern $N=110$ (these values are used for almost all the
neural algorithms; however, a few of them require a slight variation of some
parameters to achieve convergence).
\par\noindent
Scargle parameters: tapering $=30\%$, $p_0=0.01$.
\par\noindent
Results are shown in Table~\ref{table61}:
\par\noindent
\begin{table*}[tbp]
\caption[]{Performance evaluation of n.e. algorithms and P on
synthetic signals.}
\label{table61}
\begin{flushleft}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
& \multicolumn{6}{|c|}{} & \multicolumn{2}{|c|}{} \\
& \multicolumn{6}{|c|}{average normalised differences} &
\multicolumn{2}{|c|}{} \\
& \multicolumn{6}{|c|}{} & \multicolumn{2}{|c|}{} \\ \hline
& & & & & & & & \\
Algorithm & sig1 & sig2 & sig3 & sig4 & sig5 & TOT & average & average \\
& & & & & & & n. epochs & time \\
& & & & & & & & \\ \hline\hline
& & & & & & & & \\
1. eq.(\ref{eq34}) hierarc.+$g1$ & 0.000 & 0.002 & 0.004 & 0.000 & 0.004 &
0.0020 & 898.4 & 189.2 s \\
2. eq.(\ref{eq34}) hierarc.+$g2$ & 0.000 & 0.002 & 0.004 & 0.000 & 0.004 &
0.0020 & 667.2 & 105.2 s \\
3. eq.(\ref{eq310}) hierarc.+$g1$ & 0.000 & 0.002 & 0.005 & 0.000 & 0.004 &
0.0022 & 5616.2 & 1367.4 s \\
4. eq.(\ref{eq310}) hierarc.+$g2$ & 0.000 & 0.002 & 0.005 & 0.000 & 0.004 &
0.0022 & 3428.4 & 1033.4 s \\
5. eq.(\ref{eq34}) symmetr.+$g1$ & 0.000 & 0.002 & 0.002 & 0.000 & 0.004 &
0.0016 & 814.0 & 100.2 s \\
6. eq.(\ref{eq34}) symmetr.+$g2$ & 0.000 & 0.002 & 0.004 & 0.002 & 0.004 &
0.0024 & 855.2 & 124.4 s \\
7. eq.(\ref{eq310}) symmetr.+$g1$ & 0.000 & 0.002 & 0.004 & 0.002 & 0.004 &
0.0024 & 6858.2 & 1185 s \\
8. eq.(\ref{eq310}) symmetr.+$g2$ & 0.000 & 0.002 & 0.004 & 0.002 & 0.004 &
0.0024 & 3121.8 & 675.8 s \\
Periodogram & 0.004 & 0.000 & 0.002 & 0.004 & 0.004 & 0.0028 & & 22.2 s \\
& & & & & & & & \\ \hline
\end{tabular}
\end{flushleft}
\end{table*}
A few words are needed about the differences in behaviour among the neural
algorithms elicited by the experiments. Nonlinear algorithms are more
complex than robust ones; they converge relatively more slowly, with a
higher probability of being caught in local minima, so their estimates are
sometimes unreliable. We therefore restrict our choice to robust models.
Moreover, symmetric models require more effort than hierarchical ones in
finding the right parameters to achieve convergence. Their performance,
however, is comparable.
From Table~\ref{table61} we can see that the best neural algorithm for our
aim is n.5 (equation (\ref{eq34}) in the symmetric case with learning
function $g_1(t)=\tanh(\alpha t)$).
However, this algorithm requires much more effort in finding the right
parameters for convergence than algorithm n.2 from the same table
(equation (\ref{eq34}) in the hierarchical case with learning function
$g_{2}(t)=sgn(t)\log (1+\alpha |t|)$), whose performance is comparable.
For this reason, the neural algorithm presented in the following experiments
is algorithm n.2.
As an example, figures 11-13 show the estimate results of the n.e.
algorithm and of the P on signal n.1.
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f11.ps}
}}
\caption[]{Synthetic Signal.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f12.ps}
}}
\caption[]{P estimate.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f13.ps}
}}
\caption[]{n.e. estimate.}
\end{figure}
We now present the results of the whitening pre-processing on one synthetic
signal (figures 14-16), comparing this technique with the standard n.e.\ .
\par\noindent
Signal frequencies=$0.1, 0.15, 0.2, 0.25, 0.3 \quad 1/s $
\par\noindent
Neural network estimates with whitening: $0.1, 0.15, 0.2, 0.25, 0.3 \quad
1/s $
\par\noindent
Neural network estimates without whitening: $0.1, 0.15, 0.2, 0.25, 0.3 \quad
1/s $
From this and other experiments we saw that, when we used whitening in our
n.e., the results were worse and more time consuming than those obtained
with the standard n.e. (i.e. without whitening the signal). For these
reasons whitening is not a suitable technique to improve our n.e.\ .
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f14.ps}
}}
\caption[]{Synthetic Signal.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f15.ps}
}}
\caption[]{n.e. estimate without whitening.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f16.ps}
}}
\caption[]{n.e. estimate with whitening.}
\end{figure}
\subsection{ Comparison of the n.e. with the Lomb's Periodogram\label%
{realistic data}}
Here we present a set of synthetic signals generated by randomly varying the
noise variance, the phase and the deleted points with Monte Carlo methods.
The signal is a sinusoid ($0.5\cos (2\pi 0.1t+\phi )+R(t)$) with frequency
$0.1$ $Hz$ and random phase $\phi$, where $R(t)$ is zero-mean Gaussian
random noise; it is composed of $100$ points. We follow Horne \& Baliunas
(1986) for the choice of the signals.
We generated two different series of samples depending on the number of
deleted points: the first one with $50$ deleted points, the second one with
$80$ deleted points. We made 100 experiments for each variance value. The
results are shown in Table~\ref{table2} and Table~\ref{table3}, and compared
with Lomb's P,
which works better than Scargle's P with unevenly spaced data, since it
introduces confidence intervals that are useful to identify the accepted
peaks.
The results show that the two techniques obtain comparable performance.
\par\noindent
\begin{table*}[tbp]
\caption[]{Synthetic signal with 50 deleted points,
frequency interval $\left[ \frac{2\pi}{T}, \frac{\pi N_{o}}{T} \right]$,
MSE = Mean Square Error,
$T$ = total period $(X_{max} - X_{min})$ and $N_{o}$ = total number of
points.}
\label{table2}
\begin{flushleft}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& & \multicolumn{3}{|c|}{Lomb's P} & \multicolumn{3}{|c|}{n.e.} \\
\hline
Error Variance $\sigma^{2}$ & S.N.R. $\xi = \frac{X_{o}}{2\sigma^{2}}$ &
Mean & Variance & MSE & Mean & Variance & MSE \\
\hline
0.75 & 0.2 & 0.1627 & 0.0140 & 0.0178 & 0.1472 & 0.0116 & 0.0131 \\
0.5 & 0.5 & 0.1036 & 0.0013 & 0.0013 & 0.1020 & 3.0630 e$^{-4}$ &
3.0725 e$^{-4}$ \\
0.1 & 12.5 & 0.1000 & 1.0227 e$^{-8}$ & 1.0226 e$^{-8}$ & 0.1000 &
6.1016 e$^{-8}$ & 6.2055 e$^{-8}$ \\
0.001 & 1250 & 0.1000 & 2.905 e$^{-9}$ & 2.3139 e$^{-9}$ & 0.1000 &
3.8130 e$^{-32}$ & 0.00000 \\
\hline
\end{tabular}
\end{flushleft}
\end{table*}
\par\noindent
\begin{table*}[tbp]
\caption[]{Synthetic signal with 80 deleted points,
frequency interval $\left[ \frac{2\pi}{T}, \frac{\pi N_{o}}{T} \right]$,
MSE = Mean Square Error,
$T$ = total period $(X_{max} - X_{min})$ and $N_{o}$ = total number of
points.}
\label{table3}
\begin{flushleft}
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
& & \multicolumn{3}{|c|}{Lomb's P} & \multicolumn{3}{|c|}{n.e.} \\
\hline
Error Variance $\sigma^{2}$ & S.N.R. $\xi = \frac{X_{o}}{2\sigma^{2}}$ &
Mean & Variance & MSE & Mean & Variance & MSE \\
\hline
0.75 & 0.2 & 0.2323 & 0.0205 & 0.0378 & 0.2055 & 0.0228 & 0.0337 \\
0.5 & 0.5 & 0.2000 & 0.0190 & 0.0288 & 0.2034 & 0.0245 & 0.0349 \\
0.1 & 12.5 & 0.1000 & 2.37 e$^{-7}$ & 2.3648 e$^{-7}$ & 0.1004 &
1.8437 e$^{-5}$ & 1.8435 e$^{-5}$ \\
0.001 & 1250 & 0.1000 & 8.6517 e$^{-8}$ & 8.5931 e$^{-8}$ & 0.1000 &
4.7259 e$^{-8}$ & 4.7259 e$^{-8}$ \\
\hline
\end{tabular}
\end{flushleft}
\end{table*}
\subsection{ Real data\label{real data}}
The first real signal is related to the Cepheid SU Cygni (Fernie 1979). The
sequence was obtained with the photometric technique UBVRI and the sampling
was made from June to December 1977. The light curve is composed of 21
samples in the V band and has a period of $3.8^{d}$, as shown in figure 17.
In this case, the parameters of the n.e. are: $N=10$, $p=2$, $\alpha =20$,
$\mu =0.001$. The frequency estimation interval is
$\left[ 0(1/JD),0.5(1/JD)\right]$. The estimated frequency is $0.26$ (1/JD),
in agreement with Lomb's P, but without showing any spurious peak (see
figures 18 and 19).
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f17.ps}
}}
\caption[]{Light curve of SU Cygni.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f18.ps}
}}
\caption[]{n.e. estimate of SU Cygni.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f19.ps}
}}
\caption[]{Lomb's P estimate of SU Cygni.}
\end{figure}
The second real signal is related to the Cepheid U Aql (Moffet and Barnes
1980). The sequence was obtained with the photometric technique BVRI and the
sampling was made from April 1977 to December 1979. The light curve is
composed of 39 samples in the V band and has a period of $7.01^{d}$, as
shown in figure 20. In this case, the parameters of the n.e. are: $N=20$,
$p=2$, $\alpha =5$, $\mu =0.001$. The frequency estimation interval is
$\left[ 0(1/JD),0.5(1/JD)\right]$. The estimated frequency is $0.1425$
(1/JD), in agreement with Lomb's P, but without showing any spurious peak
(see figures 21 and 22).
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f20.ps}
}}
\caption[]{Light curve of U Aql.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f21.ps}
}}
\caption[]{n.e. estimate of U Aql.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f22.ps}
}}
\caption[]{Lomb's P estimate of U Aql.}
\end{figure}
The third real signal is related to the Cepheid X Cygni (Moffet and Barnes
1980). The sequence was obtained with the photometric technique BVRI and the
sampling was made from April 1977 to December 1979. The light curve is
composed of 120 samples in the V band and has a period of $16.38^{d}$, as
shown in figure 23. In this case, the parameters of the n.e. are: $N=70$,
$p=2$, $\alpha =5$, $\mu =0.001$. The frequency estimation interval is
$\left[ 0(1/JD),0.5(1/JD)\right]$. The estimated frequency is $0.061$
(1/JD), in agreement with Lomb's P, but without showing any spurious peak
(see figures 24 and 25).
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f23.ps}
}}
\caption[]{Light curve of X Cygni.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f24.ps}
}}
\caption[]{n.e. estimate of X Cygni.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f25.ps}
}}
\caption[]{Lomb's P estimate of X Cygni.}
\end{figure}
The fourth real signal is related to the Cepheid T Mon (Moffet and Barnes
1980). The sequence was obtained with the photometric technique BVRI and the
sampling was made from April 1977 to December 1979. The light curve is
composed of $24$ samples in the V band and has a period of $27.02^{d}$, as
shown in figure 26. In this case, the parameters of the n.e. are: $N=10$,
$p=2$, $\alpha =5$, $\mu =0.001$. The frequency estimation interval is
$\left[ 0(1/JD),0.5(1/JD)\right]$. The estimated frequency is $0.037$
(1/JD) (see figure 28).
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f26.ps}
}}
\caption[]{Light curve of T Mon.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f27.ps}
}}
\caption[]{Lomb's P estimate of T Mon.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f28.ps}
}}
\caption[]{n.e. estimate of T Mon.}
\end{figure}
Lomb's P does not work in this case because there are many peaks, at least
two of which are greater than the threshold of the most accurate confidence
interval (see figure 27).
The fifth real signal we used for the test phase is a light curve in the
Johnson system (Binnendijk 1960) for the eclipsing binary U Peg (see figures
29 and 30). This system was observed photoelectrically at the effective
wavelengths 5300~\AA\ and 4420~\AA\ with the 28-inch reflecting telescope of
the Flower and Cook Observatory during October and November 1958.
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f29.ps}
}}
\caption[]{Light curve of U Peg.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f30.ps}
}}
\caption[]{Light curve of U Peg (first window).}
\end{figure}
We made several experiments with the n.e. and found a dependence of the
frequency estimate on the number of elements per input pattern. The optimal
experimental parameters for the n.e. are: $N=300$, $\alpha =5$, $\mu
=0.001$. The period found by the n.e. is expressed in $JD$ and is not in
agreement with the results cited in the literature (Binnendijk 1960),
(Rigterink 1972), (Zhai et al. 1984), (Lu 1985) and (Zhai et al. 1988). The
fundamental frequency is $5.4$ $1/JD$ (see figure 31) instead of $2.7$
$1/JD$: we obtain a frequency twice the observed one. Lomb's P has some
high peaks, as in the previous experiments, and its estimated frequency is
likewise twice the observed one (see figure 32).
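The factor-of-two ambiguity is a generic property of eclipsing-binary light curves: with two minima of similar depth per orbit, the dominant Fourier component sits at twice the orbital frequency. A toy illustration (all numbers, eclipse depths and widths are invented for the sketch, not the U Peg data):

```python
import numpy as np

# Toy eclipsing-binary light curve: two similar dips per orbital cycle,
# at phases 0.0 (primary eclipse) and 0.5 (secondary eclipse).
f_orb = 2.7                                     # orbital frequency (arbitrary units)
t = np.linspace(0.0, 10.0, 4096, endpoint=False)
phase = (f_orb * t) % 1.0
dist0 = np.minimum(phase, 1.0 - phase)          # phase distance to primary minimum
flux = (1.0
        - 0.40 * np.exp(-(dist0 / 0.05) ** 2)           # primary eclipse
        - 0.35 * np.exp(-((phase - 0.5) / 0.05) ** 2))  # secondary eclipse

# Dominant spectral component lands near 2 * f_orb, not f_orb.
spec = np.abs(np.fft.rfft(flux - flux.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])
f_dom = freqs[1:][np.argmax(spec[1:])]
```

The component at $f_{orb}$ is proportional to the difference of the two eclipse depths, while the one at $2f_{orb}$ is proportional to their sum, which is why any periodogram-style estimator tends to lock onto the doubled frequency.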
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f31.ps}
}}
\caption[]{n.e. estimate of U Peg.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f32.ps}
}}
\caption[]{Lomb's P of U Peg.}
\end{figure}
\section{Conclusions}
We have developed and tested a new method for the spectral analysis of
unevenly sampled signals, based on three phases: preprocessing, extraction
of the principal eigenvectors, and estimation of the signal frequencies.
These are carried out, respectively, by input normalization, nonlinear PCA
neural networks, and the Multiple Signal Classification algorithm. First of
all, we have shown that neural networks are a valid tool for spectral
analysis.
Above all, however, what is really important is that neural networks, as
realised in our neural estimator system, represent a new tool to face and
solve a problem tied to data acquisition in many scientific fields: the
uneven sampling scheme.
Experimental results have shown the validity of our method as an alternative
to the Periodogram, and in general to classical spectral analysis, mainly in
the presence of few input data, little a priori information and high error
probability. Moreover, for unevenly sampled data, our system offers great
advantages with respect to P. First, it provides a simple and direct way to
solve the problem, as shown in all the experiments with synthetic signals
and real Cepheid signals. Second, it is insensitive to the frequency
interval: for example, if we expand the interval for the SU Cygni light
curve, our system finds the correct frequency while Lomb's P finds many
spurious frequencies, some of them greater than the confidence threshold,
as shown in figures 33 and 34.
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f33.ps}
}}
\caption[]{n.e. estimate of SU Cygni with enlarged window.}
\end{figure}
\begin{figure}
\vbox{
\hbox{
\includegraphics[width=8cm , height=7cm]{ds1489f34.ps}
}}
\caption[]{Lomb's P of SU Cygni with enlarged window.}
\end{figure}
Furthermore, when we have a multifrequency signal, we can use our system
even if we do not know the number of frequencies: we can detect one
frequency at a time and continue the processing after cancelling the
detected periodicity by IIR filtering.
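The detect-cancel-repeat loop can be sketched as follows. For simplicity the toy signal is evenly sampled and the detection step is a plain FFT peak search rather than the neural estimator; the two frequencies and the notch quality factor are illustrative choices.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

fs = 10.0                                   # sampling rate (arbitrary units)
t = np.arange(0.0, 100.0, 1.0 / fs)
y = np.sin(2 * np.pi * 1.3 * t) + 0.5 * np.sin(2 * np.pi * 2.1 * t)

def dominant_freq(x):
    """Frequency of the strongest nonzero spectral peak."""
    spec = np.abs(np.fft.rfft(x))
    f = np.fft.rfftfreq(x.size, d=1.0 / fs)
    return f[1:][np.argmax(spec[1:])]

f1 = dominant_freq(y)                       # strongest periodicity first (~1.3)
b, a = iirnotch(f1, Q=30.0, fs=fs)          # narrow IIR notch at f1
y_res = filtfilt(b, a, y)                   # cancel the detected periodicity
f2 = dominant_freq(y_res)                   # next periodicity (~2.1)
```

Zero-phase filtering with `filtfilt` avoids introducing a phase distortion that would bias the next detection step.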
A point worth noting is that both our method and Lomb's P fail to find the
right frequency in the case of the eclipsing binary. Taking into account
the morphology of an eclipsing light curve, with its two minima, this need
not be of concern, because in practical cases the important thing is to
have a first guess of the orbital frequency. Further refinement will be
possible through careful planning of the observations. In any case, we are
studying this point in order to find a method that solves the problem.
\section*{Acknowledgments}
The authors would like to thank Dr. M. Rasile for the experiments related
to the model selection, and an anonymous referee whose comments helped the
authors to improve the paper.
\section{INTRODUCTION}
\newpage
New physical phenomena such as high-$T_c$ superconductivity,
colossal magnetoresistance and dilute magnetism occur when
strongly correlated materials are doped with hole charges. These
phenomena may be caused by a strongly correlated Mott first-order
metal-insulator transition (MIT) with a change in on-site Coulomb
interaction without the accompanying structural phase transition
(SPT) \cite{Mott}. The Mott MIT is still under intense debate,
even though many scientists have tried to clarify the MIT
mechanism \cite{Imada}. We expect that studies on the MIT may
provide a decisive clue in understanding these new phenomena.
VO$_2$ (paramagnetic $V^{4+}$, 3$d^1$) has monoclinic M$_1$,
transient triclinic T, monoclinic M$_2$, and rutile R phases
\cite{Pouget}. The monoclinic phase has two electronic structures:
one half of the V chains of the R phase pair without twisting,
while the other half twist but do not pair
\cite{Pouget,Rice,Tomakuni}. The M$_2$ phase, consisting of
equally spaced V chains, was defined as a monoclinic Mott-Hubbard
insulator phase and the M$_1$ phase, insulating monoclinic phase,
may be a superposition of two M$_2$-type lattice distortions
\cite{Pouget,Rice}. The first-order MIT was considered to be due
to a change in atom position \cite{Pouget,Rice}. More recently,
Laad $et~al.$ calculated, using the local-density approximation +
dynamical mean field multiorbital iterated-perturbation theory
scheme, that the MIT from R to M$_1$ phases was accompanied by a
large spectral weight transfer due to changes in the orbital
occupations \cite{Laad}. This supported the Mott-Hubbard picture
of the MIT in VO$_2$, where the Peierls instability arises
subsequent to the MIT \cite{Kim-2,Kim-3}.
For SPT in VO$_2$, in contrast, it has also been proposed that the
MIT near 68$^{\circ}$C is the Peierls transition caused by
electron-phonon interaction. For example, it has been argued that
VO$_2$ is an ordinary band (or Peierls like) insulator on the
basis of the $d_{II}$-bonding combination of the 3$d^1$ electron,
resulting in a Peierls-like band gap \cite{Goodenough}. The same
conclusion was also reached by band structure calculations based
on the local density approximation \cite{Wentzcovitch}, an
orbital-assisted MIT model \cite{Haverkort}, a structure-driven
MIT \cite{Cavalleri-1}, and experimental measurements of a
structural bottleneck with a response time of 80 fs
\cite{Cavalleri-2}. This has been also supported by other models
including electron-electron interaction and electron-phonon
interaction \cite{Biermann,Okazaki}.
This controversy over the electronic structure of VO$_2$ is largely due to
the preconception that the MIT and the SPT occur simultaneously, even
though there may only be a causal relation between the SPT and the MIT.
We theoretically developed the hole-driven MIT theory (an
extension of the Brinkman-Rice picture \cite{Brinkman}) with a
divergence for an inhomogeneous system \cite{Kim-1,Kim-4}, and
have reported the MIT with an abrupt first-order jump in
current-voltage measurements below 68$^{\circ}$C, the SPT
temperature \cite{Kim-2,Kim-3,Chae,Kim-4}. We have demonstrated
that the Mott MIT occurs when the valence band is doped with hole
charges of a very low critical density \cite{Kim-2,Kim-1,Kim-4}.
In this letter, we report time-resolved pump-probe measurements
for VO$_2$ and demonstrate the causality between the MIT and the
SPT by analyzing coherent phonon oscillations that reveal
different active modes across the critical temperature. We also
simultaneously measure the temperature dependence of the
resistance and crystalline structure to confirm the optical
results. To our knowledge, this letter is the first to report a
simultaneous analysis of the MIT and the SPT of VO$_2$. We define a new
$\bf m$onoclinic and $\bf c$orrelated $\bf m$etal (MCM) phase
between the MIT and the SPT, which is different from Pouget's M$_2$
definition \cite{Pouget,Rice}, on the grounds that the MIT occurs
without an intermediate step. Photo-assisted temperature
excitation measurements that were used to induce a new MCM phase
as evidence of the Mott transition are also presented. The origin
of the MCM phase is discussed on the basis of the hole-driven MIT
theory \cite{Brinkman,Kim-1,Kim-4}. The use of this theory is
valid because the increase in conductivity near 68$^{\circ}$C was
due to inhomogeneity \cite{Choi}, and inhomogeneity in VO$_2$ films
has been observed \cite{Kim-2,Kim-3,Chae,Kim-4}.
High quality VO$_2$ films were deposited on both-side polished
Al$_2$O$_3$(1010) substrates by the sol-gel method \cite{Chae-2}.
The thickness of the films is approximately 100 nm. The
crystalline structure of the films was measured by x-ray
diffraction (XRD). For coherent phonon measurements, time-resolved
transmissive pump-probe experiments were performed using a
Ti:sapphire laser which generated 20 fs pulses with a 92 MHz
repetition rate centered on a wavelength of 780 nm. The diameter
of the focused laser beam was about 30${\mu}$m.
\begin{figure}
\vspace{-0.6cm}
\centerline{\epsfysize=13cm\epsfxsize=9cm\epsfbox{MCM-Fig1.eps}}
\vspace{-0.2cm} \caption{(color online) (a) The temperature
dependence of coherent phonon oscillations measured in a time
domain at a pump power of 100mW. (b) The temperature dependence of
the fast Fourier transformed (FFT) spectra taken from the coherent
phonon oscillations in Fig. 1(a). The peaks at 4.5 and 6 THz
appear simultaneously. (c) The temperature dependence of the sum
of the monoclinic A$_g$ peak spectral intensities measured at pump
powers of 30 and 100mW. (d) The temperature dependence of FFT
spectra of coherent phonon oscillations measured in the forward
and backward directions at 30mW pump power near $T_{SPT}$.}
\end{figure}
The coherent phonon measurement has many advantages over
conventional continuous wave spectroscopy, such as the
amplification effect of coherent phonons and a very low background
level in the low-wavenumber range \cite{Yee}. Figure 1(a)
shows the temperature dependence of coherent phonon oscillations
measured at a pump laser power of 100mW. The enlarged oscillation
trace measured at 62$^{\circ}$C is shown in the inset. The
oscillation traces below 54$^{\circ}$C are clear, while for
temperatures of 54$^{\circ}$C and above the oscillation amplitude
is weakened since the MIT has already occurred.
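The step from the time-domain traces of Fig. 1(a) to the spectra of Fig. 1(b) is a standard Fourier transform of a damped oscillation; a minimal synthetic sketch is given below. The 5.8 THz value mimics the monoclinic A$_g$ mode discussed next, while the damping time and sampling step are illustrative assumptions, not measured values.

```python
import numpy as np

# Synthetic coherent-phonon trace: damped cosine at 5.8 THz (time in ps).
dt = 0.01                                  # 10 fs sampling step
t = np.arange(0.0, 10.0, dt)
trace = np.exp(-t / 2.0) * np.cos(2 * np.pi * 5.8 * t)

# FFT magnitude spectrum, analogous to the peaks shown in Fig. 1(b).
spec = np.abs(np.fft.rfft(trace))
freqs_THz = np.fft.rfftfreq(t.size, d=dt)
f_peak = freqs_THz[np.argmax(spec)]        # peak near 5.8 THz
```

The exponential damping turns the spectral line into a Lorentzian of width set by the dephasing time, which is why the experimental phonon peaks have a finite width.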
Figure 1(b) shows the temperature dependence of coherent phonon
peaks obtained by taking a fast Fourier transform (FFT) of the
time-domain oscillations in Fig. 1(a). The intense monoclinic
A$_g$ peak near 5.8 THz decreases in intensity as the temperature
increases. The A$_g$ peak finally disappears at 58$^{\circ}$C. A
new broad peak near 4.5 THz (150 cm$^{-1}$) and a sharp peak at
6.0 THz (202 cm$^{-1}$) appear at and above 58$^{\circ}$C, as
denoted by the arrows in Fig. 1(b). The intense peak at 6.0 THz is
identified as the B$_{1g}$ (208 cm$^{-1}$) Raman active mode of
the R phase \cite{Srivastava}, but the broad peak near 4.5 THz
(150 cm$^{-1}$) is not assignable, because this mode is excluded
from allowable Raman active modes of the rutile structure
\cite{Srivastava,Schilbe}. We suggest that this peak belongs to an
active mode of the R phase because it appears at the same time as
the B$_{1g}$ mode. Moreover, it is obvious that the large decrease
of the A$_g$ peak is attributed not to the SPT but rather to the
MIT approximately between 50 and 58$^{\circ}$C.
Figure 1(c) shows the temperature dependence of the sum of the
A$_g$ peak spectral intensity centered at 5.8 THz from FFT spectra
in Fig. 1(b). These measurements were performed with pump powers
of 30 mW and 100 mW, respectively. The temperature dependence of
the spectral intensity shows a similar trend irrespective of pump
power. The hysteresis curve at 30 mW is denoted by circles
(heating: forward) and triangles (cooling: backward). The circle
curve with the intensity drop near 68$^{\circ}$C is similar to the
resistance curve of VO$_2$. This result indicates that the heating
effect due to a focused laser beam at 30mW is negligible.
Conversely, the transition temperature observed for a pump power
of 100 mW (black filled squares) is $\sim$12$^{\circ}$C lower than
that observed for the 30 mW case (Fig. 1(c)). This is due to local
heating by the laser spot \cite{Laser}. Thus, the nominal temperature of
58$^{\circ}$C in Fig. 1(b) likely corresponds to a true temperature of
$T_{SPT}$=70$^{\circ}$C.
Figure 1(d) shows the coherent phonon spectra obtained for forward
(heating) and backward directions at 30 mW. Remarkably, the
coherent phonon peak at 4.5 THz appears at 77$^{\circ}$C (left
side in Fig. 1(d)). The phonon peaks near 71$^{\circ}$C indicate
an intermediate state for the SPT. The phonon peaks at
74$^{\circ}$C (not displayed in the figure) showed the same
behavior as those at 71$^{\circ}$C. The SPT temperature is
regarded as 77$^{\circ}$C. Thus, the MCM phase is in a temperature
range from 62 to 77$^{\circ}$C (Fig. 1(c) and 1(d)). Although a
temperature increase due to the low pump power is insignificant,
the laser can still excite holes in the film \cite{Rini}. The
photo-induced holes cause the transition to the MCM phase, which
will be discussed in a following section. In the backward
direction of the hysteresis curve (denoted by triangles in Fig.
1(c) and right side in Fig. 1(d)), the peak at 4.5 THz is
recovered at 65$^{\circ}$C and the intermediate structure also
appears at 59$^{\circ}$C. The corresponding $T_{SPT}$ in the
cooling cycle is then regarded as being 65$^{\circ}$C. Such a
decreased $T_{SPT}$ (77$\rightarrow$65$^{\circ}$C) is due to a
residual heating effect from the high temperature increase up to
80$^{\circ}$C. For the cooling process, the MCM phase is likely
present in a temperature range between 56 and 62$^{\circ}$C.
\begin{figure}
\vspace{-0.1cm}
\centerline{\epsfysize=11cm\epsfxsize=8.4cm\epsfbox{MCM-Fig2.eps}}
\vspace{0.2cm} \caption{(color online) (a) The temperature
dependence of the resistance simultaneously measured by x-ray
diffraction. The inset cited from reference 6 shows the
temperature dependence of carriers near the MIT. Red bullets
indicate holes. (b) The temperature dependence of the XRD data.}
\end{figure}
In order to confirm the above optical results, we simultaneously
measured the temperature dependence of the resistance and XRD
pattern for the VO$_2$ (521)/Al$_2$O$_3$(1010) films. The
temperature range shown in Fig. 2(a) corresponds to the
temperatures of the XRD measurement in Fig. 2(b) and is performed
at 1$^{\circ}$C intervals. At 56$^{\circ}$C, the resistance begins
to deviate from the linear fit denoted by line A. $T_{MIT}$ can be
regarded as 56$^{\circ}$C. The (521) plane (2${\theta}$
=65.42$^{\circ}$) in Fig. 2(b) has a single peak at 55$^{\circ}$C.
A shoulder on the (521) plane peak appears at 61$^{\circ}$C, as
indicated by dot-line B in Fig. 2 (b). The peaks indicated by line
B denote an intermediate structure and not the R metal phase. The
peak related to the (002) plane appears at 65$^{\circ}$C and
represents the R metal phase, as indicated by dot-line C in Fig. 2
(b). It is likely that the SPT begins at 65$^{\circ}$C and is
continuous with temperature. This is the same result as in
previous work \cite{Leroux}. As shown in Fig. 2(a), the resistance
above 65$^{\circ}$C is as small as about 100${\Omega}$, four orders
of magnitude lower than the resistance of the insulating phase.
These results clearly show that $T_{MIT}$ differs from $T_{SPT}$
and that the MCM phase exists between 56 and 65$^{\circ}$C as an
intermediate state.
Furthermore, the inset in Fig. 2(a) shows a Hall measurement in
which a change of carriers from holes (red bullets) to electrons
(black bullet) is observed near 60$^{\circ}$C. This indicates that
the type of carrier in the metal phase is an electron. This shows
that the MIT has already occurred at 60$^{\circ}$C lower than
$T_{SPT}$, and that hole carriers drive the first-order MIT
\cite{Kim-2}. It should be noted that the hole-driven MIT is quite
different from the well-known original Mott transition idea where
a first-order MIT from a metal to an insulator occurs when
electron carriers in the metal are reduced to a critical density.
This is caused by a reduction of the screened long range Coulomb
potential energy. However, there is no change of carrier type in
Mott's idea.
A femtosecond X-ray study for the SPT showed a slow response time
(as long as 1 ps) with an intermediate structure displaying peaks
at 13.75$^{\circ}$ and 13.8$^{\circ}$ \cite{Cavalleri-1}. In this
study the metal phase peak at 13.8$^{\circ}$ appeared after times
longer than 300 fs after a 50 fs x-ray irradiation. This indicated
that the SPT is continuous. As for the MIT, the rate of change in
the reflectivity due to the increased free carriers was observed
to be as short as 80 fs at the most \cite{Cavalleri-2,Rini}. It
was suggested that this shorter response time might be due to a
structural bottleneck since the authors of the works had assumed
the concurrence of the MIT with the SPT
\cite{Cavalleri-1,Cavalleri-2}. The response time difference
between 300 fs and 80 fs supports the idea that the MIT and the
SPT cannot occur simultaneously. This is consistent with the
results shown in the present work.
We will now explain the origin of the MCM phases in Fig. 1. The
critical hole density required to induce the MIT in VO$_2$ was
theoretically predicted by the hole-driven MIT theory
\cite{Kim-1,Kim-4}, and has been known as n$_c\approx$0.018$\%$
\cite{Kim-2}. The critical hole density is given by
n$_c$(T,Photo)=n(T)+n(Photo) where n$_c$(T,Photo) is the hole
density excited by both light and temperature, and n(T) and
n(Photo) are the hole carrier densities excited by temperature and
light, respectively. External excitations by temperature,
pressure, chemical doping and light generate the transition to the
MCM phase according to the MIT condition. If there is no external
excitation, only n(T) can induce the MCM phase.
This scenario is based on the hole-driven MIT theory, which
explains the breakdown of the critical on-site Coulomb energy by
hole doping of a low concentration into the valence band of
the Mott insulator \cite{Kim-1,Kim-4}. It predicts that the MIT
can be switched on or off by the doping or de-doping of the
valence band with a low concentration of holes \cite{Kim-4}.
Recently there have been several experimental reports confirming
that the abrupt MIT is induced by holes
\cite{Kim-2,Cavalleri-2,Basov,Lee}. We have experimentally
demonstrated that the first-order MIT occurs with the doping of
the valence band to a critical hole density \cite{Kim-2}. The MIT
induced by temperature is identical to the abrupt first-order MIT
observed by an external electric field \cite{Kim-2}, since MITs
can be driven only by holes, irrespective of the method of
excitation. Moreover, the MIT differs from the transition driven
by the SPT, which had led to the belief that VO$_2$ is a Peierls
insulator \cite{Goodenough,Wentzcovitch,Haverkort,Cavalleri-2}.
The first-order MIT is quite different from the Mott-Hubbard
continuous MIT in which the density of states on the Fermi surface
gradually decreases as the on-site Coulomb potential increases
\cite{Zhang}.
The MCM phase is supposed to have a maximum conductivity, related
to the maximum effective mass, $m^{\ast}$, near the MIT as
${\sigma\propto}(m^{\ast}/m)^2$ \cite{Kim-2,Kim-1,Kim-4}. The
maximum effective mass can be regarded as a diverging true
effective mass in the Brinkman-Rice picture \cite{Brinkman}. It
was suggested that the metal phase is correlated, based on I-V
measurements \cite{Kim-4} and optical measurements
\cite{Qazilbash}. The monoclinic T phase instead of M$_2$ in
VO$_2$ \cite{Pouget,Rice,Tomakuni} can be classified as a
correlated paramagnetic Mott insulator with the equally spaced V-V
chain, on the basis of the jump between the T and MCM phases. The
T phase is defined as a semiconductor or insulator phase before
the transition from insulator to metal occurs near 340 K.
In conclusion, the first-order MIT in VO$_2$ is driven not by the SPT but
by hole carriers, and it occurs between the T and MCM phases. The
monoclinic and correlated metal phase can be regarded as a
non-equilibrium state because the MCM phase exists at the
divergence in the hole-driven MIT theory and the Brinkman-Rice
picture. The characteristics of the MCM phase need to be studied
in more depth.
We acknowledge Prof. D. N. Basov and Dr. M. M. Qazilbash for
valuable comments. This research was performed under the High Risk High
Return project of ETRI.
\section{Introduction}
Understanding and modelling climate extremes such as floods, heatwaves and heavy precipitation is of paramount importance because they often lead to severe impacts on our socio-economic system \citep{IPCC2012}. Many impacts are associated with compounding drivers, for instance multiple hazards occurring at the same time, at the same or different locations, and affecting the same system \citep{leonard2014compound,zscheischler2018future}. Ignoring potential dependencies between multiple hazards can lead to severe misspecification of the associated risk \citep{zscheischler2017dependence,Hillier2020}. However, estimating dependencies between extremes is a challenging task, requiring large amounts of data and/or suitable approaches to model the phenomena of interest in a computationally feasible time.
A particularly challenging class of events are spatially compounding events \citep{Zscheischler2020}. Spatially compounding events occur when multiple connected locations are affected by the same or different hazards within a limited time window, thereby causing an impact. The compounding is established via a system capable of spatial integration, which accumulates hazard impacts in spatially distant locations. For instance, the global food systems are vulnerable to multiple co-occurring droughts and heatwaves in key crop-producing regions of the world \citep{Anderson2019,Mehrabi2019}. Similarly, spatially extensive floods \citep{Jongman2014} or sequences of cyclones \citep{Raymond2020} can deplete emergency response funds. Climate extremes can be correlated over very large distances due to teleconnections \citep{Boers2019} but modelling such dependencies is challenging.
One approach to tackle the challenge of spatially correlated extremes is to create a lot of data by running very long simulations with state-of-the-art climate models, which have the physical spatial dependencies built into them. For instance, over the recent years for a number of climate models large ensembles have been created \citep{deser2020insights}. However, these simulations typically have rather coarse resolution, are usually not stationary in time and are very expensive to run.
Extreme value theory provides mathematically justified models for the tail region of a multivariate distribution $(X_1,\dots, X_d)$, $d\geq 2$. This enables the extrapolation beyond the range of the data and the accurate estimation of the small probabilities of rare (multivariate) events. Statistical methods building on this theory are popular in a broad range of domains such as meteorology \citep{le2018dependence}, climate science \citep{naveau2020statistical} and finance \citep{poon2003extreme}. Applications have so far been limited to low-dimensional settings for several reasons.
On the one hand, even for moderately large dimensions $d$, the fitting and simulation of parametric models is computationally intensive because it requires computing complex likelihoods \citep{dom2016a}. On the other hand, the extremal dependence structures in applications are difficult to model and the existing approaches are either simplistic or over-parametrised. In spatial applications, for instance, Euclidean distance is used to parametrise the isotropic dependence of stationary max-stable random fields \citep{bla2011}. Most real world datasets that cover larger spatial domains do, however, feature non-stationarities in space that cannot be captured by the stationarity assumed in current geostatistical models; see \cite{eng2021} for a review on recent extreme value methods in higher dimensions.
Methods from machine learning are well-suited for complex and non-stationary data sets. Their loss functions are, however, typically designed with the purpose of predicting well in the bulk of the distribution. It is therefore a difficult problem to construct approaches with an accurate performance outside the range of the training data. In prediction problems, one possibility is to adapt the loss function to make the algorithm more sensitive to extreme values, as done for instance in quantile regression forests \citep{meinshausen2006quantile} or extreme gradient boosting \citep{velthoen2021gradient} for the prediction of high conditional quantiles. \citet{jalalzai2018binary} discuss this problem for classification in extreme regions and propose a new notion of asymptotic risk that helps to define classifiers with good generalization capacity beyond the range of the observations in the predictor space.
Rather than predicting an unknown response, in this work we are interested in a generative model that is able to learn a high-dimensional distribution $(X_1,\dots, X_d)$ both in the bulk and in the extremes. We concentrate on Generative Adversarial Networks (GANs), which are known to be competitive models for multivariate density estimation. While classical applications of GANs are often in the field of image analysis \citep{zhang2017stackgan,karras2017progressive,choi2018stargan}, they have been used much more broadly in domains such as finance \citep{efimov2020using}, fraud detection \citep{zheng2019one}, speech recognition \citep{sahu2019modeling} and medicine \citep{schlegl2017unsupervised}.
There are two different aspects that characterize the extremal properties of multivariate data: the univariate tail behaviour of each margin and the extremal dependence between the largest observations.
For GANs, it has recently been shown \citep{wie2019, hus2021} that the marginal distributions of the generated samples are either bounded or light-tailed if the input noise is uniform or Gaussian, respectively. \citet{hus2021} propose to use heavy-tailed input to overcome this issue.
\cite{bhatia2020exgan} use conditional GANs to perform importance sampling of extreme scenarios.
Here we concentrate on modeling the extremal properties of both the marginal distributions and the dependence structure in a realistic way, even for high-dimensional and spatially non-stationary data. Our model combines the asymptotic theory of extremes with the flexibility of GANs to overcome the limitations of classical statistical approaches, which are restricted to low-dimensional and stationary data. We use a stationary 2000-year climate simulation to test and validate our approach.
\section{Methodology}
\label{sec:meth}
In the climate community there is a great need for methods able to efficiently provide an empirical description of the climate system, including extreme events, starting from as few ensemble runs as possible \citep{castruccio2019reproducing,deser2020insights}. Emulation techniques that tackle this challenge have been proposed in the last few years, usually focusing on the spatial correlation properties of the phenomena of interest \citep{mckinnon2017observational,mckinnon2018internal,link2019fldgen,Brunner2021}. In this work, our aim is to build an emulator specifically designed for extreme events that is able to reproduce the spatial tail dependencies of the data and to extrapolate outside the range of the training samples. Our method does not require discarding any simulations and is therefore highly efficient. In this section we first recall some background on extreme value theory and generative adversarial networks, and then describe our algorithm, called evtGAN. Similar to a copula approach \citep[e.g.][]{Brunner2021}, it relies on the idea of disentangling the modeling of the marginal distributions and the extremal dependence structure, in order to use extreme value theory for the former and a flexible machine learning technique, the generative adversarial network, for the latter.
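A minimal sketch of this margins/dependence split, assuming `scipy` for the GEV margins: the GAN itself is replaced here by the probability-integral transform that produces its training data, and the block maxima are simulated rather than taken from a climate model. Note that `scipy`'s shape parameter follows the convention $c=-\xi$.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
n, d = 400, 3
common = rng.gumbel(size=(n, 1))
maxima = common + 0.3 * rng.gumbel(size=(n, d))   # correlated block maxima

# Step 1: fit a GEV distribution to each margin separately.
params = [genextreme.fit(maxima[:, j]) for j in range(d)]

# Step 2: map each margin to the uniform scale; a generative model of the
# dependence structure would be trained on u, and its samples mapped back
# with genextreme.ppf(., *params[j]) to recover realistic margins.
u = np.column_stack(
    [genextreme.cdf(maxima[:, j], *params[j]) for j in range(d)]
)
```

The point of the split is that extrapolation beyond the sample range is delegated entirely to the fitted parametric margins, while the dependence model only has to work on the bounded uniform scale.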
\subsection{Extreme value theory}
\label{sec:evt}
Extreme value theory provides mathematically justified methods to analyze the tail of a $d$-dimensional random vector $\mathbf X = (X_1,\dots, X_d)$. This branch of statistical theory has been widely applied in climate science, for example to study droughts \citep{burke2010extreme}, floods \citep{asadi2018optimal}, heatwaves \citep{tanarhte2015heat} and extreme rainfall \citep{le2018dependence}. We use extreme value theory in conjunction with generative adversarial networks to develop a flexible way of simulating rare climate events.
For multivariate data, there are two different aspects that determine the quality of an extrapolation method: the univariate tail behaviour of each margin and the extremal dependence between the largest observations. We explain how to measure and model these based on componentwise maxima.
Let $\mathbf X_i = (X_{i1}, \dots, X_{id})$, $i=1,\dots, n$, be a sample of $\mathbf X$ and let
$\mathbf M_n = (M_{n1}, \dots, M_{nd})$, where $M_{nj} = \max(X_{1j}, \dots, X_{nj})$ is the maximum over the $j$th margin.
\subsubsection{Univariate theory}
The Fisher--Tippett--Gnedenko theorem \citep[e.g.,][]{coles2001introduction} describes the limit behavior of the univariate maximum $M_{nj}$. It states that if there exist sequences $a_{nj}\in\mathbb R$, $b_{nj} > 0$ such that
\begin{align}\label{GEV_conv}
\lim_{n\to\infty} \mathbb P\left(\frac{M_{nj}-a_{nj}}{b_{nj}} \le z\right) = G_j(z), \qquad n\rightarrow \infty,
\end{align}
where $G_j$ is the distribution function of a non-degenerate random variable $Z_j$, then $G_j$ belongs to the class of generalized extreme value (GEV) distributions with
$$ G_j(z)=\exp \left[ -\left\{ 1+\xi_j\left(\frac{z-\mu_j}{\sigma_j}\right)\right\}_+^{-1/\xi_j} \right], \quad z\in \mathbb R, $$
where $x_+$ denotes the positive part of a real number $x$, and $\xi_j \in \mathbb R$, $\mu_j \in \mathbb R$ and $\sigma_j>0$ are the shape, location and scale parameters, respectively. The shape parameter is the most important one, since it indicates whether the $j$th margin is heavy-tailed ($\xi_j > 0$), light-tailed ($\xi_j = 0$) or has a finite upper end-point ($\xi_j < 0$).
It can be shown that under mild conditions on the distribution of the margin $X_j$, appropriate sequences exist for \eqref{GEV_conv} to hold. The above theorem therefore suggests fitting a generalized extreme value distribution $\widehat G_j$ to maxima taken over blocks of the same length, which is common practice when modelling yearly maxima. The shape, location and scale parameters can then be estimated in different ways, including moment-based estimators \citep{hosking1985algorithm}, Bayesian methods \citep{yoon2010full} and maximum likelihood estimation \citep{hosking1985estimation}. We use the latter in this work. This allows extrapolation in the direction of the $j$th margin, since the size of a $T$-year event can be approximated by the $(1-1/T)$-quantile of the distribution $\widehat G_j$, even if $T$ is larger than the length of the data record.
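This extrapolation step can be illustrated with simulated maxima (not data from the paper) and `scipy.stats.genextreme`; recall that `scipy` parametrises the shape as $c=-\xi$.

```python
import numpy as np
from scipy.stats import genextreme

# Fit a GEV to simulated yearly maxima and read off a 100-year event.
rng = np.random.default_rng(2)
yearly_max = rng.gumbel(loc=10.0, scale=2.0, size=500)   # stand-in record

c, loc, scale = genextreme.fit(yearly_max)               # MLE; xi = -c
r100 = genextreme.ppf(1.0 - 1.0 / 100.0, c, loc=loc, scale=scale)
```

Here `r100` approximates the size of the 100-year event from the fitted tail; for the true Gumbel(10, 2) used in this toy setup, the exact value is about 19.2, and the fitted shape should come out close to zero.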
\subsubsection{Multivariate theory}
For multivariate data, the correct extrapolation of the tail not only depends on correct models for the marginal extremes, but also whether the dependence between marginally large values is well captured.
For two components $X_i$ and $X_j$ with limits $Z_i$ and $Z_j$ of the corresponding maxima in \eqref{GEV_conv}, this dependence is summarized by the extremal correlation \citep{sch2003}
\begin{equation*}
\chi_{ij} = \lim_{q\to1} \mathbb P(F_i(X_i) > q \mid F_j(X_j) > q) \in [0,1].
\end{equation*}
The larger $\chi_{ij}$, the stronger the dependence in the extremes between the two components, that is, between $Z_i$ and $Z_j$. If
$\chi_{ij}>0$ we speak of asymptotic dependence, and otherwise of asymptotic independence. While the extremal correlation is a useful summary statistic, it does not reflect the complete extremal dependence structure between $X_i$ and $X_j$. Under asymptotic dependence, the latter can be characterized by the so-called spectral distribution \citep{deh1977}.
Let $\tilde X_j = -1/\log F_j(X_j)$, $j=1,\dots, d$, be the data normalized to standard Fr\'echet margins. The spectral distribution describes the extremal angle of $(\tilde X_i, \tilde X_j)$, given that the radius exceeds a high threshold, that is,
$$H(x) = \lim_{u\to \infty} \mathbb P( \tilde X_i / (\tilde X_i + \tilde X_j) \leq x \mid \tilde X_i + \tilde X_j > u ), \quad x \in [0,1].$$
Under strong extremal dependence, the spectral distribution centers around $1/2$; under weak extremal dependence, it has mass close to the boundary points $0$ and $1$.
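An empirical version of $H$ can be obtained directly from this definition: rank-transform both margins to standard Fr\'echet scale, keep the pairs whose radius exceeds a high quantile, and record their angles. The following is an illustrative sketch of ours, not code from the paper.

```python
# Empirical spectral-distribution sample: angles w = x_i / (x_i + x_j) of
# Frechet-scale observations whose radius exceeds a high quantile.
import numpy as np

def spectral_angles(x, y, q=0.95):
    n = len(x)
    # Rank transform to (0, 1), then to standard Frechet margins.
    u = (np.argsort(np.argsort(x)) + 1) / (n + 1)
    v = (np.argsort(np.argsort(y)) + 1) / (n + 1)
    fx, fy = -1 / np.log(u), -1 / np.log(v)
    r = fx + fy
    keep = r > np.quantile(r, q)     # radius exceeds its q-quantile
    return fx[keep] / r[keep]

rng = np.random.default_rng(2)
z = rng.normal(size=5000)
x = z + 0.05 * rng.normal(size=5000)
y = z + 0.05 * rng.normal(size=5000)
angles = spectral_angles(x, y)
# Strong dependence: the angles concentrate around 1/2.
print(np.mean(np.abs(angles - 0.5) < 0.25))
```

A histogram or kernel density estimate of these angles is precisely what is compared against the fitted models in the results below.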
In multivariate extreme value theory, a popular way to ensure a correct extrapolation of tail dependence under asymptotic dependence is by modeling the joint distribution $\mathbf Z = (Z_1,\dots, Z_d)$ of the limits in \eqref{GEV_conv} as a max-stable distribution with multivariate distribution function
\begin{equation}\label{mevd}
\mathbb P(G_1(Z_1)\leq z_1,\dots,G_d(Z_d)\leq z_d) = \exp\{-\ell(-\log z_1,\dots,-\log z_d)\}, \quad z_1,\dots,z_d \in (0,1],
\end{equation}
where $\ell$ is the so-called stable tail dependence function \citep[e.g.,][]{de2007extreme}. This is a copula approach in the sense that the right-hand side is independent of the original marginal distributions $G_j$ and only describes the dependence structure. In practice, the $G_j$ are replaced by estimates $\widehat G_j$ that are fitted to the data first. The right-hand side of~\eqref{mevd} is also called an extreme value copula \citep[e.g.,][]{seg2010}.
A natural extension of max-stable distributions to the spatial setting is given by max-stable processes $\{Z(t) : t \in \mathbb R^d\}$ \citep[e.g.,][]{davison2012statistical}, where the maxima $Z_i = Z(t_i)$ are observed at spatial locations $t_i$, $i=1,\dots, d$. Models of this type have been widely applied in many different areas such as flood risk \citep{eng2014b}, heat waves \citep{eng2017a} and extreme precipitation \citep{buishand2008spatial}.
A popular parametric model for a max-stable process $Z$ is the Brown--Resnick process \citep{kab2009, eng2014}. This model class is parameterized by a variogram function $\gamma$ on $\mathbb R^d$. In this case, each bivariate distribution is in the so-called H\"usler--Reiss family \citep{hue1989}, and the corresponding dependence parameter is the variogram function evaluated at the distance between the two locations,
$$\lambda_{ij} = \gamma( \| t_i - t_j \|),\qquad i,j =1,\dots, d.$$
One can show that the extremal correlation coefficient for this model is given by $\chi_{ij} = 2 - 2\Phi(\sqrt{\lambda_{ij}} /2)$, where $\Phi$ is the standard normal distribution function. A popular parametric family is the class of fractal variograms that can be written as $\gamma_{\alpha, s}(h) = h^\alpha/s$, $\alpha \in (0,2]$, $s>0$.
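The relation between the variogram and the extremal correlation is straightforward to evaluate numerically; the sketch below uses illustrative values $\alpha = 1$ and $s = 1$ rather than fitted parameters.

```python
# Extremal correlation implied by a Brown--Resnick model with fractal
# variogram gamma(h) = h^alpha / s:  chi(h) = 2 - 2 * Phi(sqrt(gamma(h)) / 2).
import math

def chi_brown_resnick(h, alpha=1.0, s=1.0):
    gamma = h ** alpha / s
    # Standard normal cdf Phi via the error function.
    Phi = 0.5 * (1 + math.erf(math.sqrt(gamma) / 2 / math.sqrt(2)))
    return 2 - 2 * Phi

# Dependence decays monotonically with the distance between two locations:
print([round(chi_brown_resnick(h), 3) for h in (0.1, 1.0, 10.0)])
```

At $h = 0$ the formula gives $\chi = 1$ (complete dependence), and $\chi \to 0$ as $h \to \infty$, but note that $\chi$ stays strictly positive at every finite distance, a point that becomes relevant in the discussion of asymptotic independence below.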
While these methods enjoy many interesting properties, they are often quite complex to fit \citep{dom2016a} and impose strong assumptions on the spatial phenomena of interest, such as spatial stationarity, isotropic behaviour and asymptotic dependence between all pairs of locations.
\subsection{Generative adversarial network}
Generative adversarial networks \citep{goodfellow2014generative} are attracting great interest thanks to their ability to learn complex multivariate distributions that are possibly supported on lower-dimensional manifolds. These methods are generative since they allow the generation of new realistic samples from the learnt distribution that can be very different from the training samples.
Given observations $\mathbf U_1, \dots, \mathbf U_n$ from some $d$-dimensional unknown target distribution $p_{data}$,
generative adversarial networks can be seen as a game opposing two agents, a discriminator $D$ and a generator $G$.
The generator $G$ takes as input a random vector $\mathbf Y= (Y_1,\dots, Y_m)$ from an $m$-dimensional latent space and transforms it to a new sample $\mathbf U^* = G(\mathbf Y)$. The components $Y_j$, $j=1,\dots, m$, are independent and typically have a standard uniform or Gaussian distribution.
The discriminator $D$ has to decide whether a new sample $\mathbf U^*$ is fake, that is, generated by $G$ and following the distribution $p_G$, or whether it comes from the real observations with distribution $p_{data}$. The discriminator expresses its guess as a value between 0 and 1 corresponding to its predicted probability of the sample coming from the real data distribution $p_{data}$. Both the generator and the discriminator improve during the game, in which $D$ is trained to maximize the probability of correctly labelling samples from both sources, and $G$ is trained to minimize $D$'s performance and thus learns to generate more realistic samples.
Mathematically, the optimization problem is a two-player minimax game with cross-entropy objective function \citep[see,][]{goodfellow2014generative}:
\begin{equation}
\min\limits_{G}\max\limits_{D} \mathbb{E}_{\mathbf U \sim p_{data}} [\log(D(\mathbf U))] + \mathbb{E}_{\mathbf Y\sim p_{\mathbf Y}} [\log(1-D(G(\mathbf Y)))], \label{eq:goodfel}
\end{equation}
where $\mathbf Y$ is a random vector sampled from a latent space. In equilibrium, the optimal generator satisfies $p_G = p_{data}$ \citep{goodfellow2014generative}.
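The equilibrium can be checked numerically: for the optimal discriminator $D^*(u) = p_{data}(u)/(p_{data}(u) + p_G(u))$, the objective equals $-2\log 2$ exactly when $p_G = p_{data}$ and exceeds this value otherwise. The toy Gaussian setting below is an assumption for illustration only.

```python
# Monte Carlo check of the minimax equilibrium: with the optimal
# discriminator D*(u) = p_data(u) / (p_data(u) + p_G(u)), the objective
# equals -2*log(2) when p_G = p_data.
# Toy assumption: p_data = N(0, 1) and p_G = N(mu_fake, 1).
import math
import random

def phi(x, mu=0.0):
    """Density of N(mu, 1)."""
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def objective(mu_fake, n=100_000, seed=0):
    """Monte Carlo estimate of E[log D*(U)] + E[log(1 - D*(G(Y)))]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.gauss(0.0, 1.0)        # real sample from p_data
        g = rng.gauss(mu_fake, 1.0)    # generated sample from p_G
        d_u = phi(u) / (phi(u) + phi(u, mu_fake))
        d_g = phi(g) / (phi(g) + phi(g, mu_fake))
        total += math.log(d_u) + math.log(1 - d_g)
    return total / n

# p_G = p_data gives -2 log 2; a mismatched generator raises the value,
# since the discriminator can then separate the two sources.
print(objective(0.0), objective(2.0), -2 * math.log(2))
```

The gap between the two values is twice the Jensen--Shannon divergence between $p_{data}$ and $p_G$, which is what the generator implicitly minimizes.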
In practice, the discriminator and the generator are modeled through feed-forward neural networks. While the equilibrium with $p_G = p_{data}$ is only guaranteed thanks to the convex-concave property of the non-parametric formulation~\eqref{eq:goodfel}, the results remain very good as long as suitable adjustments to the losses, the training algorithm and the overall architecture are made to improve the stability and convergence of training.
The architecture of the generator $G$ and the discriminator $D$ can be specifically designed for the structure of the data under consideration. In this direction, \citet{radford2015unsupervised} introduce the use of convolutional neural networks in GANs, yielding the deep convolutional generative adversarial network (DCGAN). It is considered the preferred standard for image data and, more generally, for any high-dimensional vector of observations with a complex underlying dependence structure, showing superior performance in representation and recognition tasks.
\subsection{evtGAN}
Let $\mathbf Z_1, \dots, \mathbf Z_n$ be independent observations obtained as approximations of componentwise maxima as described in Section~\ref{sec:evt}.
The evtGAN algorithm (Algorithm~\ref{alg:evt}) below follows a copula approach where marginal distributions and dependence structure are treated separately. In particular, this allows us to impose different amounts of parametric assumptions on margins and dependence.
It is known that classical GANs trained with bounded or light-tailed noise input distributions in the latent space will also generate bounded or light-tailed samples, respectively \citep[see][]{wie2019, hus2021}. This means that they are not well suited for extrapolating in a realistic way outside the range of the data. For this reason, we suppose that the approximation of the margins by generalized extreme value distributions $G_j$ as in \eqref{GEV_conv} holds, which allows for extrapolation beyond the data range.
On the other hand, we do not make explicit use of the assumption of a multivariate max-stable distribution as in \eqref{mevd}. This has two reasons. First, while the univariate limit \eqref{GEV_conv} holds under very weak assumptions, the multivariate max-stability as in \eqref{mevd} requires much stronger assumptions \citep[see][]{res2008} and may be too restrictive for cases with asymptotic independence \citep[e.g.,][]{wadsworth2012dependence}. Second, even if the data follow a multivariate max-stable distribution \eqref{mevd}, the probabilistic structure is difficult to impose on a flexible machine learning model such as a GAN.
\begin{algorithm}
\caption{evtGAN}
\textbf{Input:} Observations $\mathbf Z_i = (Z_{i1}, \dots, Z_{id})$, $i=1,\dots, n$.
\begin{algorithmic}[1]
\STATE For $j=1,\dots, d$, fit a GEV distribution $\widehat G_j$ to the data $Z_{1j},\dots, Z_{nj}$ with estimated parameters $(\hat \mu_j, \hat \sigma_j, \hat \xi_j)$.
\STATE Normalize all margins empirically to a standard uniform distribution to obtain pseudo-observations
$$ \mathbf{U}_i = (\widehat F_1(Z_{i1}), \dots, \widehat F_d(Z_{id})), \quad i=1,\dots, n,$$
where $\widehat F_j$ is the empirical distribution function of the $Z_{1j},\dots, Z_{nj}$.
\STATE Train a DCGAN $G$ on the normalized data $\mathbf{U}_1,\dots, \mathbf{U}_n$ based on the loss in equation \eqref{eq:goodfel}.
\STATE Generate $n^*$ new data points $\mathbf{U}_1^*,\dots, \mathbf{U}_{n^*}^*$ from $G$ with uniform margins.
\STATE Normalize back to the scale of the original observations
$$\mathbf{Z}_i^* = (\widehat G_1^{-1}(U_{i1}^*), \dots, \widehat G_d^{-1}(U_{id}^*)),\quad i=1,\dots, n^*.$$
\end{algorithmic}
\textbf{Output:} Set of new generated observations $\mathbf Z^*_i = (Z^*_{i1}, \dots, Z^*_{id})$, $i=1,\dots, n^*$.
\label{alg:evt}
\end{algorithm}
The margins are first fitted by a parametric GEV distribution (line 1 in Algorithm~\ref{alg:evt}). They are then normalized to (approximate) pseudo-observations by applying the empirical distribution functions to each margin (line 2). This standardization to uniform margins stabilizes the fitting of the GAN. Alternatively, we could use the fitted GEV distributions for normalization, but this seems to give slightly worse results in practice.
The pseudo-observations contain the extremal dependence structure of the original observations. Since this dependence structure is very complex in general and our focus is to reproduce gridded spatial fields, we do not rely on restrictive parametric assumptions but rather learn it by a highly flexible DCGAN (line 3). From the fitted model we can efficiently simulate any number of new pseudo-observations that have the correct extremal dependence (line 4). Finally, we transform the generated pseudo-observations back to the original scale so that the new samples also have realistic marginal properties (line 5).
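The marginal steps of Algorithm~\ref{alg:evt} can be sketched in a few lines. The sketch below is a scaffold under stated assumptions (SciPy, toy Gumbel data), with the DCGAN of lines 3--4 replaced by a bootstrap placeholder rather than a trained generator.

```python
# Scaffold of the marginal steps of the evtGAN algorithm (lines 1, 2 and 5).
import numpy as np
from scipy.stats import genextreme, rankdata

rng = np.random.default_rng(3)
n, d = 50, 4
Z = rng.gumbel(loc=20.0, scale=3.0, size=(n, d))   # toy block maxima

# Line 1: fit a GEV (shape, loc, scale) to each margin.
fits = [genextreme.fit(Z[:, j]) for j in range(d)]

# Line 2: empirical probability integral transform to pseudo-observations.
U = np.column_stack([rankdata(Z[:, j]) / (n + 1) for j in range(d)])

# Lines 3-4 placeholder: a trained DCGAN would generate new samples with
# uniform margins; here we simply bootstrap rows of U for illustration.
n_star = 200
U_star = U[rng.integers(0, n, size=n_star)]

# Line 5: map back to the original scale via the fitted GEV quantiles.
Z_star = np.column_stack([
    genextreme.ppf(U_star[:, j], *fits[j]) for j in range(d)
])
print(Z_star.shape)
```

With a real generator in place of the bootstrap, the output samples can exceed the range of the training data, because the back-transformation uses the fitted parametric quantile functions rather than the empirical ones.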
\section{Application}
\label{sec:appl}
We assess the ability of the proposed method to correctly model the spatial tail dependence structure between climate extremes. Because observational records are relatively short and contain temporal trends, here we rely on 2000 years of climate model output that is representative of present-day weather over western Europe. In particular, we apply our approach to summer temperature and winter precipitation maxima, yielding 2000 and 1999 maxima for temperature and precipitation, respectively. We then compare the performance of our algorithm to the Brown--Resnick model, a state-of-the-art statistical model for spatial extremes.
In order to retrieve an unbiased estimate of the error that each method makes, we divide the data into a training set of 50 observations, and the rest is taken as a test set. Both methods are fitted using only the training dataset, and the evaluation is made considering the information contained in the test set as ground truth. When not stated otherwise, the results from the Brown--Resnick model are analytical, while the results from evtGAN are obtained by simulating $10'000$ data points.
\subsection{Data}
The model experiment, which uses large ensemble simulations with the EC-Earth global climate model \citep[v2.3,][]{hazeleger2012} and has been used in climate impact studies \citep{van2019meteorological,kempen2021,Tschumi2021,Vogel2021,vanderwiel2021}, was originally designed to investigate the influence of natural variability and climate extremes. A detailed description of these climate simulations is provided in \citet{vanderWiel2019}; here we only provide a short overview.
The large ensemble contains 2000~years of daily weather data, representative of the present-day climate. Present-day was defined by the observed value of global mean surface temperature (GMST) over the period 2011--2015 \citep[HadCRUT4 data,][]{morice2012}; the 5-year time slice in long transient model simulations (forced by historical and Representative Concentration Pathway (RCP) 8.5 scenarios) with the same GMST was simulated repeatedly. To create the large ensemble, at the start of the time slice, twenty-five ensemble members were branched off from the sixteen transient runs. Each ensemble member was integrated for the five-year time slice period. Differences between ensemble members were forced by choosing different seeds in the atmospheric stochastic perturbations \citep{buizza1999}. This resulted in a total of $16 \times 25 \times 5 = 2000$~years of meteorological data, at T159 horizontal resolution (approximately 1$^\circ$), among which we selected temperature (Kelvin) and precipitation (metres per day) for this paper.
We choose an area that is big enough to be relevant for climate applications while being small enough to ensure a fair comparison of our approach with the classical statistical approach to modelling spatial tail dependence, the Brown--Resnick process. Our analysis thus focuses on a total of $18\times 22$ grid points covering most of western Europe.
For that area we compute for each grid point the summer temperature maxima and winter precipitation maxima.
\subsection{Architecture and hyper-parameters}
In our application, we consider the non-heuristic saturating loss \citep{goodfellow2014generative} for the generator, and the standard empirical cross-entropy loss for the discriminator.
Training is done iteratively, where the discriminator is allowed to train two times longer than the generator.
We incorporate an exponential moving average scheme \citep{gidel2018variational} into the training algorithm, as it has demonstrated greater stability, far better convergence and improved results.
Since we focus on reproducing gridded spatial fields, we make use of the DCGAN model \citep{radford2015unsupervised} as an architectural design to take advantage of convolution \citep{fukushima1980neocognitron}.
We use classical regularisation tools such as drop-out \citep{srivastava2014dropout}, batch-normalisation \citep{ioffe2015batch} and the Adam optimizer \citep{kingma2014adam} with a learning rate of $2\times10^{-4}$ and batch size of $100$ to train the neural network, and a total of $30'000$ training epochs.
The value of the exponential moving average parameter in the training algorithm is set to $\alpha=0.9$. The values of the hyper-parameters are the results of an extensive heuristic tuning.
\subsection{Results}
A particularity of evtGAN is that it decomposes the full high-dimensional distribution of the data into its marginal distributions and their dependence structure and processes them separately. We thus first report estimated key parameters of the marginal distributions.
For temperature, the location parameter $\mu$ is typically higher over land than over oceans (Fig.~\ref{fig:grid}a). Furthermore, there is a trend towards lower values of $\mu$ for more northern regions, illustrating the well-known latitudinal temperature gradient from south to north. The scale parameter $\sigma$, representing the width of the distribution, is also higher over land than over the ocean, with a fairly homogeneous distribution over land (Fig.~\ref{fig:grid}b). The shape parameter $\xi$ is well below 0 for most regions except some areas in the Atlantic north of Spain and in the Mediterranean at the north African coast, indicating a bounded tail (Fig.~\ref{fig:grid}c).
\begin{figure}
\centering
\includegraphics[width=.81\textwidth]{New_figures/grid.pdf}
\caption{The generalized extreme value distribution parameters estimated for each grid point for temperature (a-c) and precipitation (d-f) extremes. On the left (a, d) the location parameter $\mu$, in the middle (b, e) the scale parameter $\sigma$ and on the right (c, f) the shape parameter $\xi$.}
\label{fig:grid}
\end{figure}
For precipitation, the location parameter $\mu$ is more similar between land and ocean compared to temperature but is a little more heterogeneous in space (Fig.~\ref{fig:grid}d), illustrating orographic effects on precipitation extremes. For the scale parameter $\sigma$ there is also no clear land-sea contrast, but areas with markedly higher values (corresponding to higher interannual variability in extremes) occur over the western part of the Iberian peninsula and along a coastal band at the northern Mediterranean coast (Fig.~\ref{fig:grid}e). The shape parameter $\xi$ is spatially quite heterogeneous, with values ranging from -0.15 to 0.25 (Fig.~\ref{fig:grid}f). Overall, the parameters of the extreme value distributions of precipitation extremes show much higher spatial heterogeneity than those for temperature extremes, suggesting that it might be more difficult to learn a model that represents well all tail dependencies for the entire region.
Next we look at examples of bivariate scatterplots of simulated temperature extremes from the different approaches for three cases with varying tail dependence (Fig.~\ref{fig:bivtemp}). The three rows correspond to three pairs of grid points with weak, mild and strong tail dependence, respectively. The train sets (Fig.~\ref{fig:bivtemp}a, f, k) illustrate the distributions from which the models are tasked to learn while the test sets (Fig.~\ref{fig:bivtemp}b, g, l) are used for validation. As can be clearly seen, extremes are much less likely to co-occur between locations that are characterized by weak tail dependence (Fig.~\ref{fig:bivtemp}b) compared to locations that are characterized by strong tail dependence (Fig.~\ref{fig:bivtemp}l). Purely based on visual judgement, it seems that evtGAN is able to characterize the different tail dependencies relatively well and can simulate samples outside of the range of the train set (Fig.~\ref{fig:bivtemp}c, h, m) whereas DCGAN only simulates distributions bounded to the range of the train set (Fig.~\ref{fig:bivtemp}d, i, n) and Brown--Resnick tends to overestimate tail dependence in cases where tail dependence is weak (Fig.~\ref{fig:bivtemp}e, j).
\begin{figure}[h]
\includegraphics[width=\textwidth]{New_figures/biv_temp.pdf}
\caption{Bivariate plots of temperature extremes based on train and test set and simulated through the different presented methods. Shown are selected pairs of locations with varying tail dependence. Columns from left to right: train, test, evtGAN, DCGAN and Brown--Resnick. From top to bottom: weak tail dependence (a-e), mild tail dependence (f-j), strong tail dependence (k-o)}
\label{fig:bivtemp}
\end{figure}
The corresponding figure for precipitation extremes is shown in Fig.~\ref{fig:bivrain}. Conclusions are similar as for temperature extremes. The evtGAN simulates distributions with different tail dependencies and samples that are far outside the range of the train set (Fig.~\ref{fig:bivrain}c, h, m). DCGAN simulates distributions that are bounded to the train set range (Fig.~\ref{fig:bivrain}d, i, n). On the other hand, Brown--Resnick overestimates tail dependence in the case of mild tail dependence (Fig.~\ref{fig:bivrain}j), and underestimates tail dependence when tail dependence is strong (Fig.~\ref{fig:bivrain}o).
\begin{figure}[h]
\includegraphics[width=\textwidth]{New_figures/biv_rain.pdf}
\caption{Bivariate plots of precipitation extremes based on train and test set and simulated through the different presented methods. Shown are selected pairs of locations with varying tail dependence. Columns from left to right: train, test, evtGAN, DCGAN and Brown--Resnick. From top to bottom: weak tail dependence (a-e), mild tail dependence (f-j), strong tail dependence (k-o)}
\label{fig:bivrain}
\end{figure}
A scatterplot of bivariate extremal correlations between 100 randomly selected locations estimated from the train set, evtGAN and Brown--Resnick, respectively, against estimates based on the test set (1950 samples) is shown in Fig.~\ref{fig:eval}. The estimates derived directly from the train sets (Fig.~\ref{fig:eval}a, d) are the benchmark, and by design better performance is not possible. Clearly, pairs of locations with stronger tail dependence are much more likely for temperature (Fig.~\ref{fig:eval}a) than for precipitation (Fig.~\ref{fig:eval}d), confirming the impression obtained from the spatial homogeneity of the parameters in the extreme value distributions (Fig.~\ref{fig:grid}). Furthermore, evtGAN seems to better capture the observed relationship. Brown--Resnick has difficulties in particular with pairs that have weak or no tail dependence (extremal correlation equals 0, lower left in the figures), which is consistent with Fig.~\ref{fig:bivtemp} and Fig.~\ref{fig:bivrain}.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{New_figures/qqplots50.pdf}
\caption{Scatter plots of the extremal correlations for temperature extremes (a-c) and precipitation extremes (d-f) between $100$ randomly selected locations. The x-axes always show the estimates based on the test set. Estimates on the y-axes are based on the train set (a, d), on the output of evtGAN (b, e) and on the Brown--Resnick model (c, f)}
\label{fig:eval}
\end{figure}
To further explore the behaviour of evtGAN and Brown--Resnick in dealing with different levels of dependence, Fig.~\ref{fig:spectral} shows a comparison of the estimated spectral distributions for the same three pairs of grid points characterized by weak (a, d), mild (b, e) and strong tail dependence (c, f), respectively. The magenta bars show the estimates based on the 1950 samples in the test set. The estimates of evtGAN are very close to the ground truth for all cases, i.e., weak, mild and strong dependence (red lines in Fig.~\ref{fig:spectral}), except for some bias in the case of mild dependence in precipitation extremes (Fig.~\ref{fig:spectral}e). In contrast, the performance of the Brown--Resnick model (blue lines in Fig.~\ref{fig:spectral}) is much more variable. It captures relatively well the two pairs with weak tail dependence (Fig.~\ref{fig:spectral}a, d) and does a decent job for strong dependence in temperature extremes (Fig.~\ref{fig:spectral}c) but fails completely for the remaining three cases (Fig.~\ref{fig:spectral}b, e, f). Furthermore, in contrast to Brown--Resnick, evtGAN is able to represent asymmetric dependence structures, as is evident from Fig.~\ref{fig:spectral}.
\begin{figure}[h]
\centering
\includegraphics[width=0.91\linewidth]{New_figures/spec50.pdf}
\caption{Spectral distributions for a threshold of $0.95$ for selected pairs of locations with varying tail dependence for temperature (a-c) and precipitation (d-f). (a, d) weak tail dependence, (b, e) mild tail dependence, (c, f) strong tail dependence. In red the kernel density estimate of the evtGAN, in blue the Brown--Resnick model and in magenta bars the ground truth}
\label{fig:spectral}
\end{figure}
We finally present a sensitivity analysis of the performance of evtGAN for different realistic sample sizes, namely 30, 50 and 100, while learning for up to 30,000 epochs (Fig.~\ref{fig:conv}). Overall, the error tends to decrease the longer the models learn, and it does not seem to matter whether we evaluate the performance on the train set or the test set (the difference between the black and red lines in Fig.~\ref{fig:conv} is small). The results for the best performance with respect to the train sets (minima in the red lines, $\min(C_{tr})$, in Fig.~\ref{fig:conv}c, d) are shown in the figures presented in the results section above. An improvement with increasing sample size is clearly visible (compare Fig.~\ref{fig:conv}a, b with Fig.~\ref{fig:conv}e, f). Furthermore, the model errors for temperature extremes are smaller than those for precipitation extremes (left versus right column in Fig.~\ref{fig:conv}).
\begin{figure}[h]
\centering
\includegraphics[width=.91\textwidth]{New_figures/accuracy.pdf}
\caption{Model performance versus training epochs for different sample sizes in evtGAN for temperature (a, c, e) and precipitation extremes (b, d, f). A number of observations equal to $n_{test}=2000-n_{train}$ was sampled from evtGAN. The mean $l^2$ norms for train (black) and test set (red) are defined as $C_{te}= ||\chi_{evtG}-\chi_{te}||_2$, $C_{tr}= ||\chi_{evtG}-\chi_{tr}||_2$, where $\chi_{evtG}$, $\chi_{te}$ and $\chi_{tr}$ denote the vectors of extremal correlations calculated on the samples from evtGAN, the test and the train set, respectively. The epochs at which the minima are reached in the train set are marked with vertical bars}
\label{fig:conv}
\end{figure}
\section{Discussion}
The use of evtGAN combines the best of two worlds: correct extrapolation based on extreme value theory on the one hand, and flexible dependence modeling through GANs on the other hand.
GANs are an excellent tool to model complex dependencies in high-dimensional spaces. However, they are typically not tailored to model extremes in the marginals well. Indeed, for a standard DCGAN implementation where the marginals are not estimated by GEV distributions but empirically transformed, Figs.~\ref{fig:bivtemp}d, i, n and Figs.~\ref{fig:bivrain}d, i, n show that the generated samples are bounded by the range of the training sample. For an accurate extrapolation that resembles the marginal distributions of the test set, extreme value theory is required (panels c, e, h, j, m and o of Figs.~\ref{fig:bivtemp} and \ref{fig:bivrain}).
On the other hand, classical methods of spatial extreme value theory such as the Brown--Resnick process have accurate extrapolation properties for the marginal distributions. However, for an application to a spatially heterogeneous data set on a large domain (Fig.~\ref{fig:grid}), their parametric assumptions may be too restrictive. Indeed, Figs.~\ref{fig:eval}c and f show a clear bias of the Brown--Resnick model in terms of bivariate extremal correlations, which is particularly visible for pairs with weak extremal dependence. Another indication of this bias can be seen in Figs.~\ref{fig:bivrain}j and o, where the distributions of the Brown--Resnick samples differ strongly from the test set distributions. The evtGAN does not make prior assumptions on spatial stationarity or isotropy, and it therefore does not exhibit such a bias (points in Figs.~\ref{fig:eval}b and e are centered around the diagonal). This is particularly noteworthy since modeling complex non-stationarities for extremes is very difficult with parametric models \citep{hus2016,engelke2020graphical}.
Considering the fitted bivariate distributions of evtGAN and Brown--Resnick underlines this point. The spectral distributions of the Brown--Resnick model are restricted to a parametric class of functions, which, for instance, are symmetric about $1/2$. The blue lines in Fig.~\ref{fig:spectral} show that this is too restrictive for our data since the strength of dependence is not correctly modelled (Fig.~\ref{fig:spectral}b) or the asymmetry is not captured (Fig.~\ref{fig:spectral}c, f). The evtGAN (red lines), on the other hand, can model weak and strong dependence, and it even adapts to possible asymmetries. This is also evident from the scatterplots in Fig.~\ref{fig:bivtemp} and Fig.~\ref{fig:bivrain}, where the shapes of the Brown--Resnick samples are restricted to a parametric sub-class of distributions, the so-called H\"usler--Reiss family \citep{hue1989}.
A further restriction of classical spatial max-stable processes is the fact that they are limited to modeling asymptotic dependence. For precipitation, it can be seen in Fig.~\ref{fig:eval}d-f that most of the test extremal correlations are close to zero, indicating asymptotic independence. While the evtGAN is able to capture this fairly well (Fig.~\ref{fig:eval}e), the Brown--Resnick model always has positive extremal correlations, explaining the bias in the bottom left corner of Fig.~\ref{fig:eval}f. A spatial asymptotically independent model \citep[e.g.,][]{wadsworth2012dependence} would be a possible remedy for this, but such processes would still suffer from the limitations in terms of non-stationarity and asymmetry described above.
Overall, evtGAN tends to perform better in capturing dependencies between temperature extremes than precipitation extremes (Fig.~\ref{fig:conv}). This is likely related to the fact that extremes in temperature are more spatially coherent \citep{Keggenhoff2014} (Fig.~\ref{fig:grid}).
\section{Conclusions}
\label{sec:conc}
Understanding and evaluating the risk associated with extreme events is of primary importance for our society, as recently emphasized in the 6th Assessment Report of the Intergovernmental Panel on Climate Change \citep{Seneviratne2021}. Extreme event analysis and impact assessments are often limited by the sample size available in observations. Furthermore, simulations with complex climate models are very expensive. Here we combine a machine learning approach with extreme value theory to model complex spatial dependencies between extreme events in temperature and precipitation across Europe based on a limited sample size. We demonstrate that this hybrid approach outperforms the typical approach in multivariate extreme value theory and can represent well the marginal distributions and extremal dependencies across spatially distributed climate extremes. The approach can easily be adapted to other types of extremes and used to create the large sample sizes that are often required for climate risk assessments.
\paragraph{Impact Statement}
Spatially co-occurring climate extremes such as heavy precipitation events or temperature extremes can have devastating impacts on human and natural systems. Modelling complex spatial dependencies between climate extremes in different locations is notoriously difficult and traditional approaches from the field of extreme value theory are relatively inflexible. We show that combining extreme value theory with a deep learning architecture (Generative Adversarial Networks) can well represent complex spatial dependencies between extremes. Hence, instead of running expensive climate models, the approach can be used to sample many instances of spatially co-occurring extremes with realistic dependence structure, which may be used for climate risk modelling and stress testing of climate-sensitive systems.
\paragraph{Funding Statement}
This work was funded by the Swiss National Science Foundation (grant nos. 179876, 189908 and 186858) and the Helmholtz Initiative and Networking Fund (Young Investigator Group COMPOUNDX; grant agreement no. VH-NG-1537).
\paragraph{Competing Interests}
None
\paragraph{Data Availability Statement} The temperature and precipitation maxima used in this study are available at https://doi.org/10.5281/zenodo.5554105.
\paragraph{Author Contributions}
Conceptualization: Y.B.; E.V.; S.E. Methodology: Y.B; E.V.; S.E. Formal analysis: Y.B. Software: Y.B.; E.V. Data curation: K.v.d.W. Data visualisation: Y.B. Supervision: J.Z.; S.E. Writing original draft: J.Z.; E.V.; S.E. All authors approved the final submitted draft.
\bibliographystyle{apalike}
\section{Accuracy}\label{apx:accuracy}
\begin{figure*}[t!]
\subfigure[VGGNet ($B=64$)]{
\includegraphics[width=.48\hsize]{imgs/{VGG-10-64-0.01-accuracy}.pdf}
}
\subfigure[VGGNet ($B=4096$)]{
\includegraphics[width=.48\hsize]{imgs/{VGG-10-4096-0.1-accuracy}.pdf}
}
\subfigure[NIN ($B=64$)]{
\includegraphics[width=.48\hsize]{imgs/{NIN-64-0.1-accuracy}.pdf}
}
\subfigure[NIN ($B=4096$)]{
\includegraphics[width=.48\hsize]{imgs/NIN-4096-1-accuracy.pdf}
}
\subfigure[DenseNet on STL-10 ($B=64$)]{
\includegraphics[width=.48\hsize]{imgs/{DenseNet-STL-64-0.01-accuracy}.pdf}
}
\subfigure[DenseNet on STL-10 ($B=4096$)]{
\includegraphics[width=.48\hsize]{imgs/{DenseNet-STL-4096-0.1-accuracy}.pdf}
}
\caption{Accuracy. Solid, dashed, and dotted lines indicate testing accuracy, training accuracy, and the generalization gap, respectively.}\label{fig:accuracy-appendix}
\end{figure*}
The training curves for the VGGNet and NIN models and for the DenseNet model on the STL-10 dataset are shown in Figure~\ref{fig:accuracy-appendix}.
We can observe that, in every setting, \textsf{spectral} shows the smallest generalization gap or the best test accuracy, which demonstrates that \textsf{spectral} can effectively reduce the generalization gap without suppressing the model complexity significantly.
\section{Sensitivity to the perturbation of the input}\label{apx:sensitivity}
\begin{figure*}
\centering
\subfigure[VGGNet (Train)]{
\includegraphics[width=.475\hsize]{./imgs/scattered-VGG-10-train-crop.pdf}
}
\subfigure[VGGNet (Test)]{
\includegraphics[width=.475\hsize]{./imgs/scattered-VGG-10-test-crop.pdf}
}
\subfigure[NIN (Train)]{
\includegraphics[width=.475\hsize]{./imgs/scattered-NIN-train-crop.pdf}
}
\subfigure[NIN (Test)]{
\includegraphics[width=.475\hsize]{./imgs/scattered-NIN-test-crop.pdf}
}
\subfigure[DenseNet on STL-10 (Train)]{
\includegraphics[width=.475\hsize]{./imgs/scattered-DenseNet-STL-train-crop.pdf}
}
\subfigure[DenseNet on STL-10 (Test)]{
\includegraphics[width=.475\hsize]{./imgs/scattered-DenseNet-STL-test-crop.pdf}
}
\caption{Relation between the generalization gap and the $\ell_2$-norm of the gradient. The solid and dashed lines indicate the results for the small-batch and large-batch regimes, respectively.}\label{fig:grad-transition-appendix}
\end{figure*}
Figure~\ref{fig:grad-transition-appendix} shows the transition of the $\ell_2$-norm of the gradient of the loss function, defined with the training data or test data, with respect to the input.
For every setting, the $\ell_2$-norm of the gradient with respect to the test data is well-correlated with the generalization gap.
On the contrary, for the VGGNet and NIN models, the $\ell_2$-norm of the gradient with respect to the training data does not predict the generalization gap well.
\section{Conclusions}\label{sec:conclusions}
In this work, we hypothesized that a high sensitivity to the perturbation of the input degrades the performance of a trained model on test data.
In order to reduce the sensitivity to the perturbation of the test data, we proposed the spectral norm regularization method, and confirmed through experiments that it exhibits better generalizability than other baseline methods.
Experimental comparison with other methods indicated that the insensitivity to the perturbation of the test data plays a crucial role in determining the generalizability.
There are several interesting future directions to pursue.
It is known that weight decay can be seen as the regularization term in MAP estimation arising from a Gaussian prior on the model parameters.
Is it possible to understand spectral norm regularization as arising from some prior as well?
We also need a theoretical understanding of the effect of spectral norm regularization on generalization.
It is known that, in some ideal cases, weight decay improves generalizability by preventing neural networks from fitting noises~\cite{Krogh:1991uo}.
Can we extend this argument to spectral norm regularization?
\section{Experiments}\label{sec:experiments}
In this section, we experimentally demonstrate the effectiveness of spectral norm regularization on classification tasks over other regularization techniques, and confirm that the insensitivity to test data perturbation is an important factor for generalization.
All the training methods discussed here are based on stochastic gradient descent (SGD).
We consider two regimes for the choice of the mini-batch size.
In the small-batch regime, we set the mini-batch size to $B=64$, and in the large-batch regime, we set it to $B=4096$.
In our experiments, we compared the following four problems:
\begin{itemize}
\itemsep=0pt
\item Vanilla problem (\textsf{vanilla}):
As a vanilla problem, we considered empirical risk minimization without any regularization, that is, $\mathop{\text{minimize}}_{\Theta} \frac{1}{K}\sum_{i=1}^K L(f_\Theta({\bm x}_i),{\bm y}_i)$, where $L$ is the cross entropy.
\item Weight decay (\textsf{decay}): We considered the problem~\eqref{eq:weight-decay}, where $L$ is the cross entropy.
We selected the regularization factor $\lambda=10^{-4}$.
\item Adversarial training (\textsf{adversarial}): We considered the problem~\eqref{eq:adversarial}, where $L$ is the cross entropy.
We selected $\alpha=0.5$ and $\epsilon=1$, as suggested in~\cite{Goodfellow:2015tl}.
\item Spectral norm regularization (\textsf{spectral}): We considered the problem~\eqref{eq:spectral}, where $L$ is the cross entropy.
We selected the regularization factor $\lambda =0.01$.
\end{itemize}
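For concreteness, the three penalty-based objectives above can be sketched as plain Python functions (a hypothetical numpy illustration; the function names are ours, and \textsf{adversarial} is omitted because it perturbs inputs rather than adding a penalty term):

```python
import numpy as np

def spectral_norm(W):
    # largest singular value of W
    return np.linalg.svd(W, compute_uv=False)[0]

def vanilla_loss(data_loss, weights):
    # empirical risk only, no regularization
    return data_loss

def weight_decay_loss(data_loss, weights, lam=1e-4):
    # adds (lambda/2) * sum of squared Frobenius norms
    return data_loss + 0.5 * lam * sum(np.sum(W ** 2) for W in weights)

def spectral_loss(data_loss, weights, lam=0.01):
    # adds (lambda/2) * sum of squared spectral norms
    return data_loss + 0.5 * lam * sum(spectral_norm(W) ** 2 for W in weights)
```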
We trained models using Nesterov's accelerated gradient descent~\cite{Bengio:2013fn} with momentum 0.9.
We decreased the learning rate by a factor of $1/10$ after half and after three quarters of the training process had passed.
We optimized the hyper-parameters through a grid search and selected those that showed a reasonably good performance for every choice of neural network and mini-batch size.
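The step schedule described above can be written as a small helper (a hypothetical sketch; the epoch bookkeeping is ours):

```python
def step_lr(initial_lr, epoch, total_epochs):
    """Divide the learning rate by 10 at 50% and again at 75% of training."""
    if epoch >= 0.75 * total_epochs:
        return initial_lr / 100
    if epoch >= 0.5 * total_epochs:
        return initial_lr / 10
    return initial_lr
```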
In our experiments, we used the following four settings on the model and dataset.
\begin{itemize}
\itemsep=0pt
\item The VGG network (VGGNet for short)~\cite{Simonyan:2014ws} on the CIFAR-10 dataset~\cite{Krizhevsky:2009tr}.
\item The network in network (NIN) model ~\cite{Lin:2014wb} on the CIFAR-100 dataset~\cite{Krizhevsky:2009tr}.
\item The densely connected convolutional network (DenseNet) ~\cite{Huang:2016wa} having a depth of 40 on the CIFAR-100 dataset.
\item DenseNet having a depth of 22 on the STL-10 dataset~\cite{Coates:2011wo}.
The depth was decreased due to memory constraints.
\end{itemize}
We preprocessed all the datasets using global contrast normalization.
We further applied data augmentation with cropping and horizontal flip on STL-10 because the number of training data samples is only 5,000, which is small considering the large mini-batch size of 4096.
The learning rate of SGD was initialized to $0.1$ in the small-batch regime and $1.0$ in the large-batch regime for the NIN and DenseNet models on the CIFAR-100 dataset, and was initialized to $0.01$ in the small-batch regime and $0.1$ in the large-batch regime for the VGGNet and DenseNet models on the CIFAR-10 dataset.
\subsection{Accuracy and Generalization Gap}\label{subsec:accuracy}
\begin{table}
\centering
\caption{Test accuracy and generalization gap (best values in bold).}\label{tab:accuracy}
{
\small
\tabcolsep=1mm
\begin{tabular}{|c|r||rrrr||r|rrrr|}
\hline
& & \multicolumn{4}{c||}{Test accuracy} & \multicolumn{5}{c|}{Generalization gap} \\
Model & $B$ & \textsf{vanilla} & \textsf{decay} & \textsf{adver.} & \textsf{spectral} & \multicolumn{1}{c}{$\alpha$} & \textsf{vanilla} & \textsf{decay} & \textsf{adver.} & \textsf{spectral} \\ \hline \hline
\multirow{2}{*}{VGGNet} & 64 & 0.898 & 0.897 & 0.884 & \textbf{0.904} & 0.88 & 0.079 & 0.074 & 0.109 & \textbf{0.068} \\
& 4096 & 0.858 & 0.863 & 0.870 & \textbf{0.885} & 0.85 & 0.092 & 0.064 & 0.064 & \textbf{0.045} \\ \hline
\multirow{2}{*}{NIN} & 64 & 0.626 & \textbf{0.672} & 0.627 & 0.669 & 0.62 & 0.231 & 0.120 & 0.253 & \textbf{0.090} \\
& 4096 & 0.597 & 0.618 & 0.607 & \textbf{0.640} & 0.59 & 0.205 & 0.119 & 0.196 & \textbf{0.090} \\ \hline
DenseNet & 64 & 0.675 & \textbf{0.718} & 0.675 & 0.709 & 0.67 & 0.317 & 0.080 & 0.299 & \textbf{0.095}\\
(CIFAR100) & 4096 & 0.639 & 0.671 & 0.649 & \textbf{0.697} & 0.63 & 0.235 & 0.111 & 0.110 & \textbf{0.051} \\ \hline
DenseNet & 64 & 0.724 & 0.723 & 0.707 & \textbf{0.735} & 0.70 & \textbf{0.063} & 0.073 & 0.069 & 0.068 \\
(STL-10) & 4096 & 0.686 & 0.689 & 0.676 & \textbf{0.697} & 0.67 & 0.096 & 0.057 & \textbf{0.015} & 0.042 \\
\hline
\end{tabular}
}
\end{table}
\begin{figure*}[t!]
\subfigure[$B=64$]{
\includegraphics[width=.48\hsize]{imgs/{DenseNet-64-0.1-accuracy}.pdf}
}
\subfigure[$B=4096$]{
\includegraphics[width=.48\hsize]{imgs/DenseNet-4096-1-accuracy.pdf}
}
\caption{Accuracy of the DenseNet model on the CIFAR-100 dataset. The solid, dashed, and dotted lines indicate the test accuracy, training accuracy, and generalization gap, respectively.}\label{fig:accuracy-DenseNet}
\end{figure*}
First, we look at the test accuracy obtained by each method, which is summarized in the left columns of Table~\ref{tab:accuracy}.
In the small-batch regime, \textsf{decay} and \textsf{spectral} show better test accuracies than the other two methods.
In the large-batch regime, \textsf{spectral} clearly achieves the best test accuracy for every model.
Although the test accuracy decreases as the mini-batch size increases, as reported in~\cite{Keskar:2017tz}, the decrease in the test accuracy of \textsf{spectral} is less significant than those of the other three methods.
Next, we look at the generalization gap.
We define the generalization gap at a threshold $\alpha$ as the minimum difference between the training and test accuracies when the test accuracy exceeds $\alpha$.
The generalization gap of each method is summarized in the right columns of Table~\ref{tab:accuracy}.
For each setting, we selected the threshold $\alpha$ so that every method achieves a test accuracy that (slightly) exceeds this threshold.
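This definition of the gap can be sketched as follows (a hypothetical helper operating on per-epoch accuracy lists):

```python
def generalization_gap(train_acc, test_acc, alpha):
    """Minimum train-test accuracy difference over epochs whose
    test accuracy exceeds the threshold alpha."""
    gaps = [tr - te for tr, te in zip(train_acc, test_acc) if te > alpha]
    return min(gaps) if gaps else None
```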
For each of the settings, except for the DenseNet model on the STL-10 dataset, \textsf{spectral} clearly shows the smallest generalization gap followed by \textsf{decay}, which validates the effectiveness of \textsf{spectral}.
Figure~\ref{fig:accuracy-DenseNet} shows the training curve of the DenseNet model on the CIFAR-100 dataset.
The results for other settings are given in Appendix~\ref{apx:accuracy}.
As we can observe, in both the small-batch and large-batch regimes, \textsf{spectral} shows the smallest generalization gaps.
The generalization gap of \textsf{decay} increases as the mini-batch size increases, whereas that of \textsf{spectral} does not increase significantly.
Investigating the reason behind this phenomenon is an interesting direction for future work.
The choice of the mini-batch size and the training method matters little for achieving a good training accuracy; in every setting, it exceeds 95\%.
However, these choices strongly affect the test accuracy and the generalization gap.
To summarize, \textsf{spectral} consistently achieves a small generalization gap and shows the best test accuracy, especially in the large-batch regime.
In the subsequent sections, we investigate which property of a trained model determines its generalization gap.
\subsection{Sensitivity to the perturbation of the input}
\begin{figure}
\centering
\subfigure[Train]{
\label{fig:grad-transition-train}
\includegraphics[width=.475\hsize]{./imgs/scattered-DenseNet-train-crop.pdf}
}
\subfigure[Test]{
\label{fig:grad-transition-test}
\includegraphics[width=.475\hsize]{./imgs/scattered-DenseNet-test-crop.pdf}
}
\caption{Relation between the generalization gap and the $\ell_2$-norm of the gradient of the DenseNet model on the CIFAR-100 dataset. The solid and dashed lines indicate the results for the small-batch and large-batch regimes, respectively.}\label{fig:grad-transition}
\end{figure}
To understand the relation between the sensitivity to the input perturbation and the generalization gap of the trained model, we look at the gradient of the loss function, defined with the training data or the test data, with respect to the input.
Figure~\ref{fig:grad-transition} shows the transition of the $\ell_2$-norm when training the DenseNet model on the CIFAR-100 dataset.
The results for other settings are given in Appendix~\ref{apx:sensitivity}.
We can observe in Figure~\ref{fig:grad-transition-test} that the $\ell_2$-norm of the gradient with respect to the test data is well correlated with the generalization gap.
In particular, the $\ell_2$-norm of the gradient gradually increases as training proceeds, which matches the behavior of the generalization gap, which also increases as training proceeds.
On the contrary, as shown in Figure~\ref{fig:grad-transition-train}, although the $\ell_2$-norm of the gradient gradually decreases as training proceeds, the generalization gap actually increases.
Hence, the $\ell_2$-norm of the gradient with respect to the training data does not predict the generalization gap well.
These results motivate us to reduce the $\ell_2$-norm of the gradient with respect to the test data to obtain a smaller generalization gap.
As we do not know the test data a priori, the spectral norm regularization method is reasonable to achieve this goal because it reduces the gradient at any point.
The superiority of \textsf{spectral} to \textsf{decay} can be elucidated from this experiment.
In order to achieve a better test accuracy, we must make the training accuracy high and the generalization gap small.
To achieve the latter, we have to penalize the $\ell_2$-norm of the gradient with respect to the test data.
To this end, \textsf{decay} suppresses all the weights, which decreases model complexity, and hence, we cannot fit the model to the training data well.
On the other hand, \textsf{spectral} only suppresses the spectral norm, and hence, we can achieve a greater model complexity than \textsf{decay} and fit the model to the training data better.
\subsection{Maximum eigenvalue of the Hessian with respect to the model parameters}
In~\cite{Keskar:2017tz}, it is claimed that the maximum eigenvalue of the Hessian of the loss function defined with the training data predicts the generalization gap well.
To confirm this claim, we computed the maximum eigenvalue of the DenseNet model trained on the CIFAR-100 dataset, shown in Figure~\ref{fig:hessian}.
As it is computationally expensive to compute the Hessian, we approximated its maximum eigenvalue by using the power iteration method because we can calculate the Hessian-vector product without explicitly calculating the Hessian~\cite{Martens:2010vo}.
We also computed the maximum eigenvalue of the Hessian of the loss function defined with the test data.
We can observe that, for \textsf{vanilla}, larger eigenvalues (in both the training and test data) are obtained if the mini-batch size is increased, which confirms the claim of~\cite{Keskar:2017tz}.
However, the models trained with regularization tend to have larger eigenvalues, although they generalize better.
In particular, the models trained by \textsf{spectral} have the largest eigenvalues, although they have the best generalizability as we have seen in Section~\ref{subsec:accuracy}.
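The eigenvalue approximation used here relies only on Hessian-vector products. A minimal numpy sketch of this power iteration (shown on an explicit symmetric matrix instead of an actual network Hessian; helper name and defaults are ours):

```python
import numpy as np

def max_eigenvalue(hvp, dim, iters=100, seed=0):
    """Estimate the largest eigenvalue of a symmetric positive semi-definite
    operator given only a Hessian-vector product function `hvp`."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    for _ in range(iters):
        v = hvp(v)                 # one power-iteration step
        v /= np.linalg.norm(v)     # keep the iterate unit-norm
    return v @ hvp(v)              # Rayleigh quotient at the converged vector
```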
\begin{figure}
\centering
\begin{minipage}{.45\hsize}
\subfigure[Train]{
\includegraphics[width=0.46\hsize]{./imgs/hessian-train.pdf}
}
\subfigure[Test]{
\includegraphics[width=0.46\hsize]{./imgs/hessian-test.pdf}
}
\caption{Maximum eigenvalue of the Hessian of the DenseNet on CIFAR100}\label{fig:hessian}
\end{minipage}
\quad
\begin{minipage}{.45\hsize}
\includegraphics[width=\hsize]{imgs/spectral-DenseNet-200-trans3_conv-crop.pdf}
\caption{Singular values of weight matrices in the DenseNet on CIFAR100. Solid and dashed lines indicate the results for the small-batch and large-batch regimes, respectively.}\label{fig:spectrum}
\end{minipage}
\end{figure}
This phenomenon can be understood as follows:
if a model is trained without regularization, the Frobenius and spectral norms of its weight matrices are already large, and a small perturbation of the parameters barely changes these norms in relative terms.
By contrast, if a model is trained with regularization, these norms are small, and the same perturbation can change them relatively much more, which may cause a significant change in the output and hence in the loss.
To summarize, this experiment indicates that the maximum eigenvalue of the Hessian of the loss function is not a suggestive measure for predicting generalizability.
\subsection{Singular values of weight matrices}
Finally, we look at the singular values of weight matrices in the models trained by each method.
Figure~\ref{fig:spectrum} shows the singular values of a weight matrix taken from the DenseNet model on the CIFAR-100 dataset.
The matrix is selected arbitrarily because all matrices showed similar spectra.
We can observe that the spectrum of \textsf{vanilla} is highly skewed, and \textsf{adversarial} and \textsf{decay} shrink the spectrum while maintaining the skewed shape.
In contrast, the spectrum of \textsf{spectral} is flat.
This behavior is as expected because the spectral norm regularization tries to reduce the largest singular value.
Because the maximum singular value obtained by \textsf{spectral} is low, we obtain less sensitivity to the perturbation of the input.
\section{Introduction}
Deep learning has been successfully applied to several machine learning tasks including visual object classification~\cite{he2016identity,Krizhevsky:2012wl}, speech recognition~\cite{Hinton:2012is}, and natural language processing~\cite{Collobert:2008kg, jozefowicz2016exploring}.
A well-known method of training deep neural networks is stochastic gradient descent (SGD).
SGD can reach a local minimum with high probability over the selection of the starting point~\cite{Lee:2016vd}, and all local minima attain similar loss values~\cite{Choromanska:2015ui,Dauphin:2014tn,Kawaguchi:2016ub}.
However, the performance on test data, that is, \emph{generalizability}, can be significantly different among these local minima.
Despite the success of deep learning applications, our understanding of generalizability remains limited, although progress has been made~\cite{Keskar:2017tz,Zhang:2017te}.
Understanding the generalizability of deep learning is not only an interesting research topic but also important from a practical point of view.
For example, suppose that we are training a deep neural network using SGD and want to parallelize the process using multiple GPUs or nodes to accelerate the training.
A well-known method for achieving this is synchronous SGD~\cite{das2016distributed, chen2016revisiting}, which requires a large minibatch to effectively exploit parallel computation on multiple GPUs or nodes.
However, it is reported that models trained through synchronous SGD with large minibatches exhibit poor generalization~\cite{Keskar:2017tz}, and a method is required to resolve this problem.
In this study, we consider the generalizability of deep learning from the perspective of sensitivity to input perturbation.
Intuitively, if a trained model is insensitive or sensitive to the perturbation of an input, then the model is confident or not confident about the output, respectively.
As the performance on test data is important, models that are insensitive to the perturbation of \emph{test} data are required.
Note that adversarial training~\cite{Szegedy:2014vw,Goodfellow:2015tl} is designed to achieve insensitivity to the perturbation of \emph{training} data, and it is not always effective for achieving insensitivity to the perturbation of test data.
To obtain insensitivity to the perturbation of test data, we propose a simple and effective regularization method, referred to as spectral norm regularization.
As the name suggests, spectral norm regularization prevents the weight matrices used in neural networks from having large spectral norms.
Through this, even though the test data are not known in advance, a trained model is ensured to exhibit low sensitivity to their perturbation.
Using several real-world datasets, we experimentally confirm that models trained using spectral norm regularization exhibit better generalizability than models trained using other baseline methods.
It is claimed in~\cite{Keskar:2017tz} that the maximum eigenvalue of the Hessian predicts the generalizability of a trained model.
However, we show that the insensitivity to the perturbation of test data is a more important factor for predicting generalizability, which further motivates the use of spectral norm regularization.
Finally, we show that spectral norm regularization effectively reduces the spectral norms of weight matrices.
The rest of this paper is organized as follows:
We review related works in Section~\ref{sec:related}.
In Section~\ref{sec:method}, we explain spectral norm regularization and compare it with other regularization techniques.
Experimental results are provided in Section~\ref{sec:experiments}, and conclusions are stated in Section~\ref{sec:conclusions}.
\section*{Acknowledgement}
The authors thank Takuya Akiba and Seiya Tokui for helpful discussions.
\bibliographystyle{abbrvurl}
{
\small
\section{Spectral Norm Regularization}\label{sec:method}
In this section, we explain spectral norm regularization and how it reduces the sensitivity to test data perturbation.
\subsection{General idea}\label{subsec:intuition}
We consider feed-forward neural networks as a simple example to explain the intuition behind spectral norm regularization.
A feed-forward neural network can be represented as cascaded computations, ${\bm x}^\ell = f^\ell(W^\ell{\bm x}^{\ell-1} + {\bm b}^\ell)$ for $\ell = 1,\ldots,L$ for some $L$, where ${\bm x}^{\ell-1} \in \mathbb{R}^{n_{\ell-1}}$ is the input feature of the $\ell$-th layer, $f^\ell:\mathbb{R}^{n_\ell} \to \mathbb{R}^{n_\ell}$ is a (non-linear) activation function, and $W^\ell \in \mathbb{R}^{n_{\ell} \times n_{\ell-1}}$ and ${\bm b}^\ell \in \mathbb{R}^{n_\ell}$ are the layer-wise weight matrix and bias vector, respectively.
For a set of parameters, $\Theta = \set{W^\ell,{\bm b}^\ell}_{\ell=1}^L$, we denote by $f_\Theta:\mathbb{R}^{n_0} \to \mathbb{R}^{n_L}$ the function defined as $f_\Theta({\bm x}^0) = {\bm x}^L$.
Given training data, $({\bm x}_i,{\bm y}_i)_{i=1}^K$, where ${\bm x}_i \in \mathbb{R}^{n_0}$ and ${\bm y}_i \in \mathbb{R}^{n_L}$, the loss function is defined as $\frac{1}{K}\sum_{i=1}^K L(f_\Theta({\bm x}_i),{\bm y}_i)$, where $L$ is frequently selected to be cross entropy and the squared $\ell_2$-distance for classification and regression tasks, respectively.
The model parameter to be learned is $\Theta$.
Let us consider how we can obtain a model that is insensitive to the perturbation of the input.
Our goal is to obtain a model, $\Theta$, such that the $\ell_2$-norm of $f_\Theta({\bm x} + {\bm \xi}) - f_\Theta({\bm x})$ is small, where ${\bm x} \in \mathbb{R}^{n_0}$ is an arbitrary vector and ${\bm \xi} \in \mathbb{R}^{n_0}$ is a perturbation vector with a small $\ell_2$-norm.
A key observation is that most practically used neural networks exhibit nonlinearity only because they use piecewise linear functions, such as ReLU, maxout~\cite{Goodfellow:2013tf}, and maxpooling~\cite{Ranzato:2007eb}, as activation functions.
In such a case, function $f_\Theta$ is a piecewise linear function.
Hence, if we consider a small neighborhood of ${\bm x}$, we can regard $f_\Theta$ as a linear function. In other words, we can represent it by an affine map, ${\bm x} \mapsto W_{\Theta,{\bm x}}{\bm x} + {\bm b}_{\Theta,{\bm x}}$, using a matrix, $W_{\Theta,{\bm x}} \in \mathbb{R}^{n_L\times n_0}$, and a vector, ${\bm b}_{\Theta,{\bm x}} \in \mathbb{R}^{n_L}$, which depend on $\Theta$ and ${\bm x}$.
Then, for a small perturbation, ${\bm \xi} \in \mathbb{R}^{n_0}$, we have
\begin{align*}
\frac{\|f_\Theta({\bm x} + {\bm \xi}) - f_\Theta({\bm x})\|_2}{\|{\bm \xi}\|_2}
= \frac{\|(W_{\Theta,{\bm x}}({\bm x} + {\bm \xi}) + {\bm b}_{\Theta,{\bm x}}) - (W_{\Theta,{\bm x}}{\bm x} + {\bm b}_{\Theta,{\bm x}}) \|_2 }{\|{\bm \xi}\|_2}
= \frac{\|W_{\Theta,{\bm x}}{\bm \xi}\|_2}{\|{\bm \xi}\|_2} \leq \sigma(W_{\Theta,{\bm x}}),
\end{align*}
where $\sigma(W_{\Theta,{\bm x}})$ is the spectral norm of $W_{\Theta,{\bm x}}$.
The \emph{spectral norm} of a matrix $A \in \mathbb{R}^{m \times n}$ is defined as
\[
\sigma(A) = \max_{{\bm \xi} \in \mathbb{R}^n,{\bm \xi} \neq {\bm 0}}\frac{\|A {\bm \xi}\|_2}{\|{\bm \xi}\|_2},
\]
which corresponds to the largest singular value of $A$.
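As a quick numerical illustration of this definition (a hypothetical numpy snippet), the largest singular value upper-bounds the amplification ratio $\|A{\bm \xi}\|_2/\|{\bm \xi}\|_2$ over arbitrary directions:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# spectral norm = largest singular value
sigma = np.linalg.svd(A, compute_uv=False)[0]

# agrees with the maximisation definition, checked on random directions
xs = rng.standard_normal((3, 1000))
ratios = np.linalg.norm(A @ xs, axis=0) / np.linalg.norm(xs, axis=0)
assert ratios.max() <= sigma + 1e-9
```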
Hence, the function $f_\Theta$ is insensitive to the perturbation of ${\bm x}$ if the spectral norm of $W_{\Theta,{\bm x}}$ is small.
The abovementioned argument suggests that model parameter $\Theta$ should be trained so that the spectral norm of $W_{\Theta,{\bm x}}$ is small for any ${\bm x}$.
To further investigate the property of $W_{\Theta,{\bm x}}$, let us assume that each activation function, $f^\ell$, is an element-wise ReLU (the argument can be easily generalized to other piecewise linear functions).
Note that, for a given input, $f^\ell$ acts as a diagonal matrix, $D^\ell_{\Theta,{\bm x}} \in \mathbb{R}^{n_\ell \times n_{\ell}}$, where a diagonal element is equal to one if the corresponding element of the pre-activation $W^\ell{\bm x}^{\ell-1} + {\bm b}^\ell$ is positive, and zero otherwise.
Then, we can rewrite $W_{\Theta,{\bm x}}$ as $W_{\Theta,{\bm x}} = D^{L}_{\Theta,{\bm x}}W^{L}D^{L-1}_{\Theta,{\bm x}} W^{L-1} \cdots D^1_{\Theta,{\bm x}} W^1$.
Note that $\sigma(D^\ell_{\Theta,{\bm x}}) \leq 1$ for every $\ell \in \set{1,\ldots,L}$.
Hence, we have
\[
\sigma(W_{\Theta,{\bm x}})
\leq
\sigma(D^{L}_{\Theta,{\bm x}})\sigma(W^{L})\sigma(D^{L-1}_{\Theta,{\bm x}}) \sigma(W^{L-1}) \cdots \sigma(D^{1}_{\Theta,{\bm x}}) \sigma(W^1)
\leq
\prod_{\ell=1}^L \sigma(W^\ell).
\]
It follows that, to bound the spectral norm of $W_{\Theta,{\bm x}}$, it suffices to bound the spectral norm of $W^\ell$ for each $\ell \in \set{1,\ldots,L}$.
This motivates us to consider spectral norm regularization, which is described in the next section.
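The product bound above can be checked numerically for a two-layer case with an input-dependent ReLU mask $D$ (a hypothetical numpy snippet; weights and input are random, bias omitted for brevity):

```python
import numpy as np

rng = np.random.default_rng(1)
W1 = rng.standard_normal((5, 4))
W2 = rng.standard_normal((3, 5))
x = rng.standard_normal(4)

# ReLU acts as a 0/1 diagonal matrix D that depends on the input
D = np.diag((W1 @ x > 0).astype(float))

spec = lambda M: np.linalg.norm(M, 2)  # spectral norm
assert spec(W2 @ D @ W1) <= spec(W2) * spec(W1) + 1e-9
```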
\subsection{Details of spectral norm regularization}\label{subsec:proposed}
In this subsection, we explain spectral norm regularization.
The notations are the same as those used in Section~\ref{subsec:intuition}.
To bound the spectral norm of each weight matrix, $W^\ell$, we consider the following empirical risk minimization problem:
\begin{align}
\mathop{\text{minimize}}_{\Theta} \frac{1}{K}\sum_{i=1}^K L(f_\Theta({\bm x}_i),{\bm y}_i) + \frac{\lambda}{2} \sum_{\ell=1}^L \sigma(W^\ell)^2,
\label{eq:spectral}
\end{align}
where $\lambda \in \mathbb{R}_+$ is a regularization factor.
We refer to the second term as the \emph{spectral norm regularizer}. It decreases the spectral norms of the weight matrices.
When performing SGD, we need to calculate the gradient of the spectral norm regularizer.
To this end, let us consider the gradient of $\sigma(W^\ell)^2/2$ for a particular $\ell \in \set{1,2,\ldots,L}$.
Let $\sigma_1 = \sigma(W^\ell)$ and $\sigma_2$ be the first and second singular values, respectively.
If $\sigma_1 > \sigma_2$, then the gradient of $\sigma(W^\ell)^2/2$ is $\sigma_1 {\bm u}_1{\bm v}_1^\top$, where ${\bm u}_1$ and ${\bm v}_1$ are the first left and right singular vectors, respectively.
If $\sigma_1 = \sigma_2$, then $\sigma(W^\ell)^2$ is not differentiable.
However, for practical purposes, we can assume that this case never occurs because numerical errors prevent $\sigma_1 $ and $\sigma_2$ from being exactly equal.
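The closed-form gradient $\sigma_1 {\bm u}_1{\bm v}_1^\top$ can be verified against finite differences (a hypothetical numpy sanity check, not part of the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((4, 3))
U, S, Vt = np.linalg.svd(W, full_matrices=False)
analytic = S[0] * np.outer(U[:, 0], Vt[0, :])   # sigma_1 u_1 v_1^T

# central finite-difference gradient of sigma(W)^2 / 2
f = lambda M: np.linalg.svd(M, compute_uv=False)[0] ** 2 / 2
eps = 1e-6
numeric = np.zeros_like(W)
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp = W.copy(); Wp[i, j] += eps
        Wm = W.copy(); Wm[i, j] -= eps
        numeric[i, j] = (f(Wp) - f(Wm)) / (2 * eps)

assert np.allclose(analytic, numeric, atol=1e-4)
```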
As it is computationally expensive to compute $\sigma_1$, ${\bm u}_1$, and ${\bm v}_1$, we approximate them using the power iteration method.
Starting with a randomly initialized ${\bm v} \in \mathbb{R}^{n_{\ell-1}}$, we iteratively perform the following procedure a sufficient number of times: ${\bm u} \leftarrow W^\ell {\bm v}$, $\sigma \leftarrow \|{\bm u}\|_2/\|{\bm v}\|_2$, and ${\bm v} \leftarrow (W^\ell)^\top {\bm u}$ (note that $\sigma$ is computed before ${\bm v}$ is updated, so that the ratio approximates the largest singular value).
Then, $\sigma$, ${\bm u}$, and ${\bm v}$ converge to $\sigma_1$, ${\bm u}_1$, and ${\bm v}_1$, respectively (if $\sigma_1 > \sigma_2$).
To approximate $\sigma_1$, ${\bm u}_1$, and ${\bm v}_1$ in the next iteration of SGD, we can reuse ${\bm v}$ as the initial vector.
In our experiments, which are explained in Section~\ref{sec:experiments}, we performed only one iteration because it was adequate for obtaining a sufficiently good approximation.
A pseudocode is provided in Algorithm~\ref{alg:spectral}.
\begin{algorithm}[t!]
\caption{SGD with spectral norm regularization}\label{alg:spectral}
\begin{algorithmic}[1]
\For{$\ell=1$ to $L$}
\State ${\bm v}^\ell \leftarrow$ a random Gaussian vector.
\EndFor
\For{each iteration of SGD}
\State Consider a minibatch, $\set{({\bm x}_{i_1},y_{i_1}),\ldots,({\bm x}_{i_k},y_{i_k})}$, from training data.
\State Compute the gradient of $\frac{1}{k}\sum_{i=1}^k L(f_{\Theta}({\bm x}_{i_j}),y_{i_j})$ with respect to $\Theta$.
\For{$\ell=1$ to $L$}
\For{a sufficient number of times} \Comment{One iteration was adequate in our experiments}
\State ${\bm u}^\ell \leftarrow W^\ell {\bm v}^\ell$, $\sigma^\ell \leftarrow \|{\bm u}^\ell\|/\|{\bm v}^\ell\|$, ${\bm v}^\ell \leftarrow (W^\ell)^\top {\bm u}^\ell$
\EndFor
\State Add $\lambda \sigma^\ell {\bm u}^\ell ({\bm v}^\ell)^\top$ to the gradient of $W^\ell$.
\EndFor
\State Update $\Theta$ using the gradient.
\EndFor
\end{algorithmic}
\end{algorithm}
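A minimal numpy sketch of this power iteration (a hypothetical helper, not the authors' code; it keeps ${\bm v}$ unit-norm so that the iterate can be reused across SGD steps without numerical blow-up):

```python
import numpy as np

def power_iteration(W, v, n_iters=1):
    """Approximate the largest singular value of W and its singular
    vectors by the power method, starting from a unit-norm vector v."""
    for _ in range(n_iters):
        u = W @ v
        sigma = np.linalg.norm(u)      # since ||v||_2 = 1
        v = W.T @ u
        v = v / np.linalg.norm(v)      # renormalise for reuse
    return sigma, u / np.linalg.norm(u), v
```

With the returned unit vectors, the regularizer gradient contribution is $\lambda\,\sigma\,{\bm u}{\bm v}^\top$, matching the closed-form gradient of $\sigma(W)^2/2$ scaled by $\lambda$.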
\paragraph{Convolutions.}
We describe how to handle convolutions because they are used widely in recent applications of deep neural networks.
Consider a convolutional layer with $a$ input channels, $b$ output channels, and a $k_w \times k_h$-sized kernel.
This implies that the convolution has $abk_wk_h$ parameters.
Note that a value in an output channel is determined using $ak_wk_h$ values in the input channels.
Hence, we align the parameters as a matrix of size $b \times ak_wk_h$ and apply the abovementioned power iteration method to the matrix to calculate its spectral norm and gradient.
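The reshaping can be sketched as follows (a hypothetical numpy snippet with made-up kernel sizes). Note that the spectral norm of the reshaped matrix is a convenient proxy; it is not identical to the operator norm of the convolution viewed as a linear map on whole feature maps.

```python
import numpy as np

# hypothetical conv kernel: b output channels, a input channels, k_h x k_w
b, a, kh, kw = 8, 3, 3, 3
kernel = np.random.default_rng(3).standard_normal((b, a, kh, kw))

# align the a*k_h*k_w inputs of each output value as matrix columns
W = kernel.reshape(b, a * kh * kw)
sigma = np.linalg.svd(W, compute_uv=False)[0]
```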
\subsection{Comparison with other regularization methods}
We now compare the spectral norm regularization with other regularization techniques.
\paragraph{Weight decay.}
\emph{Weight decay}, or the \emph{Frobenius norm regularization}, is a well-known regularization technique for deep learning.
It considers the following problem:
\begin{align}
\mathop{\text{minimize}}_{\Theta} \frac{1}{K}\sum_{i=1}^K L(f_\Theta({\bm x}_i),{\bm y}_i) + \frac{\lambda}{2}\sum_{\ell=1}^L\|W^\ell\|_F^2,
\label{eq:weight-decay}
\end{align}
where $\lambda \in \mathbb{R}_+$ is a regularization factor.
We note that $\|W^\ell\|_F^2 = \sum_{i=1}^{\min\set{n_{\ell-1},n_\ell}}\sigma_i(W^\ell)^2$, where $\sigma_i(W^\ell)$ is the $i$-th singular value of $W^\ell$.
Hence, the Frobenius norm regularization reduces the sum of squared singular values.
Although this can be effective for training models that are insensitive to input perturbation, the trained model may lose important information about the input because each trained weight matrix, $W^\ell$, acts as an operator that shrinks the input in all directions.
In contrast, spectral norm regularization focuses only on the first singular value, and each trained weight matrix, $W^\ell$, does not shrink significantly in the directions orthogonal to the first right singular vector.
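The identity $\|W\|_F^2 = \sum_i \sigma_i(W)^2$, and hence the fact that the spectral penalty lower-bounds the Frobenius penalty, is easy to verify numerically (a hypothetical numpy snippet):

```python
import numpy as np

W = np.random.default_rng(4).standard_normal((4, 6))
s = np.linalg.svd(W, compute_uv=False)

frob_sq = np.sum(W ** 2)                      # ||W||_F^2
assert np.isclose(frob_sq, np.sum(s ** 2))    # equals sum of squared singular values
assert s[0] ** 2 <= frob_sq                   # spectral penalty is a lower bound
```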
\paragraph{Adversarial training.}
Adversarial training~\cite{Goodfellow:2015tl} considers the following problem:
\begin{align}
\mathop{\text{minimize}}_{\Theta} \alpha \cdot \frac{1}{K}\sum_{i=1}^K L(f_\Theta({\bm x}_i),{\bm y}_i) + (1-\alpha) \cdot \frac{1}{K}\sum_{i=1}^K L(f_\Theta({\bm x}_i + {\bm \eta}_i), {\bm y}_i)\label{eq:adversarial},
\end{align}
where
\[
{\bm \eta}_i = \epsilon \cdot \frac{{\bm g}_i}{\|{\bm g}_i\|_2} \quad \text{and} \quad {\bm g}_i = \nabla_{{\bm x}} L(f_{\Theta}({\bm x}),{\bm y}_i)|_{{\bm x} = {\bm x}_i},
\]
and $\alpha$ and $\epsilon \in \mathbb{R}_+$ are hyperparameters.
It considers the perturbation toward the direction that increases the loss function the most.
Hence, a trained model is insensitive to the adversarial perturbation of training data.
In contrast, spectral norm regularization automatically trains a model that is insensitive to the perturbation of training data and test data.
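The perturbation ${\bm \eta}_i$ used in Eq.~\eqref{eq:adversarial} can be sketched as a one-line helper (hypothetical numpy snippet; the gradient ${\bm g}_i$ is assumed to be given):

```python
import numpy as np

def adversarial_perturbation(grad_x, eps=1.0):
    """eta_i = eps * g_i / ||g_i||_2: a step of length eps in the
    direction that increases the loss the most."""
    return eps * grad_x / np.linalg.norm(grad_x)
```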
\paragraph{Jacobian regularization.}
Another method of reducing the sensitivity of a model against input perturbation is penalizing the $\ell_2$-norm of the derivative of the output with respect to the input.
Let us denote the Jacobian matrix of ${\bm y}$ with respect to ${\bm x}$ by $\partial {\bm y} / \partial {\bm x}$.
The \emph{Jacobian regularization} considers the following regularization term:
\begin{align*}
\frac{1}{K}\sum_{i=1}^K \left\|\frac{\partial {\bm y}}{\partial {\bm x}}\Big|_{{\bm x}={\bm x}_i}\right\|_F^2.
\end{align*}
The Jacobian regularization promotes the smoothness of a model against input perturbation.
However, this regularization is impractical because calculating the derivative of a Jacobian with respect to parameters is computationally expensive.
To resolve the issue, Gu~\emph{et~al.}~\cite{gu2014towards} proposed an alternative method that regularizes layer-wise Jacobians:
\[
\frac{1}{K}\sum_{i=1}^K \sum_{\ell=1}^{L} \left\|\frac{\partial {\bm x}^{\ell}}{\partial {\bm x}^{\ell-1}}\Big|_{{\bm x}^{\ell-1}={\bm x}^{\ell-1}_i}\right\|_F^2,
\]
where ${\bm x}^\ell_i$ is the input to the $\ell$-th layer calculated using ${\bm x}_i$.
Note that, if we neglect the effect of activation functions between layers, this regularization coincides with weight decay.
Hence, we exclude this regularization from our experiments.
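The observation above can be checked directly for purely linear layers: the Jacobian of $x^\ell = W^\ell x^{\ell-1}$ is just $W^\ell$, so the layer-wise penalty reduces to the sum of squared Frobenius norms of the weights, i.e.\ weight decay. A minimal numpy sketch (the two layer sizes are arbitrary assumptions):

```python
import numpy as np

def layerwise_jacobian_penalty(weights):
    """Sum over layers of ||d x^l / d x^(l-1)||_F^2 for linear layers,
    where the Jacobian of x^l = W^l x^(l-1) is W^l itself."""
    return sum(np.linalg.norm(W, 'fro') ** 2 for W in weights)

def weight_decay(weights):
    return sum(np.sum(W ** 2) for W in weights)

rng = np.random.default_rng(0)
weights = [rng.standard_normal((3, 4)), rng.standard_normal((2, 3))]
# For linear layers (activations ignored), the two penalties coincide,
# as noted in the text.
```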
\section{Related Work}\label{sec:related}
A conventional way to understand the generalizability of a trained model is through the notion of the flatness/sharpness of a local minimum~\cite{Hochreiter:1997wf}.
A local minimum is (informally) referred to as \emph{flat} if its loss value does not increase significantly when it is perturbed; otherwise, it is referred to as \emph{sharp}.
In general, the high sensitivity of a training function at a sharp local minimizer negatively affects the generalizability of the trained model.
In~\cite{Hochreiter:1997wf}, this is explained in more detail through the minimum description length theory, which states that statistical models that require fewer bits to describe generalize better~\cite{Rissanen:1983fr}.
It is known that SGD with a large minibatch size leads to a model that does not generalize well~\cite{lecun2012efficient}.
In~\cite{Keskar:2017tz}, this problem was studied through the flatness/sharpness of the obtained local minima.
The authors formulated a flat local minimum as one at which all eigenvalues of the Hessian are small (note that all eigenvalues are non-negative at a local minimum) and, using a proxy measure, experimentally showed that SGD with a smaller minibatch size tends to converge to a flatter minimum.
The notion of flat/sharp local minima considers the sensitivity of a loss function against the perturbation of model parameters.
However, it is natural to consider the sensitivity of a loss function against the perturbation of input data, as we discuss in this paper.
In~\cite{Szegedy:2014vw}, the perturbation to training data that increases the loss function the most is considered, and the resulting perturbed training data are referred to as \emph{adversarial examples}.
It is reported in~\cite{Goodfellow:2015tl} that training using adversarial examples improves test accuracy.
Recently,~\cite{Zhang:2017te} showed that the classical notions of Rademacher complexity and the VC dimension are not adequate for understanding the generalizability of deep neural networks.
Note that the spectral norm of a matrix is equal to its largest singular value.
Singular values have attracted attention in the context of training recurrent neural networks (RNN).
In~\cite{Arjovsky:2016tb,Wisdom:2016vk}, it is shown that by restricting the weight matrices in an RNN to be unitary or orthogonal, that is, matrices whose singular values all equal one, the problem of vanishing and exploding gradients can be prevented and better performance can be obtained.
\section{Introduction} \label{intro}
The association of quasars with galaxies at similar redshifts allows one to use
quasars as signposts for locating galaxies and galaxy clusters at high redshift.
Although radio-quiet quasars (RQQs) are very rarely found in clusters at
z$\lesssim$0.7, a significant fraction of radio-loud quasars (RLQs) with
0.4$<$z$<$0.7 are located in clusters of galaxies
(e.g., \cite{yg87} (YG87), \cite{eyg91} (EYG91), \cite{ey94}).
The population of
quasars located in such rich clusters is seen to evolve 5--6 times faster than
their counterparts in poor environments (EYG91, \cite{ye93}): at z$\lesssim$0.4
RLQs are almost never found in rich clusters, at z$\sim$0.4--0.55 only faint
RLQs can be found in them, and at z$\gtrsim$0.55 both luminous and faint RLQs
are found there (cf. Figure 1 of \cite{ye93}). This evolution can be
extrapolated to include the very faint optical AGN activity seen in some radio
galaxies in low-redshift rich clusters (e.g. \cite{dry90}).
The environments of the population of Fanaroff-Riley class II (FR~II)
`classical double' powerful radio galaxies (PRGs)
evolve with redshift as well (\cite{pp88,ymp89,hl91,as93}).
RLQs also have FR~II morphologies, and in the unification model (\cite{ant93})
PRGs and RLQs are the same class of objects seen at different orientations,
in which case their large-scale environments should be statistically identical.
One scenario which can explain these observations is that the physical
conditions in RLQ and PRG host cluster cores have undergone substantial changes
which have caused the high-z FR~II RLQs and PRGs in clusters to fade at optical
wavelengths on a dynamical time scale and evolve into low-z optically
faint FR~I radio galaxies.
In the context of this evolution, the results of Tsai \& Buote (1996) are quite
interesting. They conclude from hydrodynamical
simulations that in an $\Omega$=1 CDM universe the formation rate of clusters
from (unvirialized) background matter was large at z$\gtrsim$0.6, but dropped
off sharply at z$\sim$0.6 and remained constant and small at z$<$0.6, where new
clusters form primarily by mergers of preexisting, virialized smaller clusters.
This is very similar to the scenario suggested by Yee \& Ellingson (1993)
to explain the steep decline in the optical luminosities of the RLQ population
in rich clusters at z$\lesssim$0.6.
The RLQ luminosity function is consistent with a constant cluster birth rate
providing a continual supply of RLQ formation sites at z$>$0.6, in agreement
with the Tsai \& Buote simulations if clusters formed by mergers of
preexisting subclusters are unfavorable sites for RLQ formation.
This agreement is intriguing but possibly coincidental, as other simulations
(\cite{rlt92,co94}) yield different cluster formation rates with redshift.
Many mechanisms have been put forth as explanations for the quasar
environment-evolution link: galaxy-galaxy interactions and mergers
(\cite{hut84}), cooling flows (\cite{fc90}), and/or an intra-cluster medium
(ICM) of different density at high redshift (\cite{sp81}, \cite{bm88}).
These different models have considerably different implications for X-ray
observations of quasar host clusters, as discussed in Hall et~al. (1995;
hereafter Paper~I) and in \S\ref{disc}. We are imaging quasars known to lie in
rich clusters with the {\sl ROSAT} High Resolution Imager (HRI; \cite{zom90})
to help discriminate among these scenarios for the evolution of powerful
AGN in clusters and the role played by the ICM in that evolution.
Paper~I presented upper limits for the first two quasars we studied.
In this paper we add new data on one lower-redshift quasar,
archival data on another, and data from the literature on a third.
We then discuss the data in comparison to FR~II radio galaxy host
clusters and optically and X-ray selected clusters.
We restrict our discussion primarily to PRGs and quasars with unambiguous FR~II
morphologies believed to be located at the centers of their host clusters.
Unless otherwise noted, we take H$_{\rm o}$=50~h$_{50}$~km~s$^{-1}$~Mpc$^{-1}$, q$_{\rm o}$=1/2, and $\Lambda$=0.
\section{Observations and Analysis} \label{obsanal}
Table 1
details the observations and results
discussed in this paper, Paper~I, and Crawford \& Fabian (1995).
\begin{table}
\dummytable\label{tbl-1}
\end{table}
The three objects newly added to our sample are discussed in detail in this
section.
\subsection{IRAS~09104$+$4109} \label{9104}
IRAS~09104$+$4109\ is a very IR-luminous `buried' radio-quiet quasar (\cite{hw93}; HW93)
at the center of a rich, flattened cluster at z=0.442 (\cite{kle88}).
It is a radio source intermediate between FR~I and FR~II in
radio power and morphology (HW93).
HW93 show that IRAS~09104$+$4109\ is powered by a hidden AGN based on
the detection of broad Mg~II $\lambda$2798 and strong wavelength-dependent
polarization.
We classify IRAS~09104$+$4109\ as a RQQ based on its k-corrected 5~GHz to
(estimated) unobscured B luminosity ratio R$^*$ (\cite{sto92}).
Using data from HW93, we find R$^*$=0.89--4.27,
placing it at the high end of the range found for RQQs. Also,
its position in the [O~{\sc iii}]-P$_{5\rm GHz}$ plane (\cite{raw94}) clearly
shows it to be a RQQ, albeit an extreme one (similar to H~1821$+$643), even after
accounting for its steep radio spectral slope and correcting its large
[O~{\sc iii}] luminosity for non-nuclear emission (\cite{cv96}).
We estimated the richness B$_{\rm gq}$\ of the host cluster CL~09104+4109 by using data
from Kleinmann et~al. (1988) to find N$_{\rm 0.5}$ (\cite{bah81}) and then
converting to B$_{\rm gq}$\ using the empirical relation B$_{\rm gq}$=34$\times$N$_{\rm 0.5}$
(\cite{hl91}). We find B$_{\rm gq}$=1210$_{-269}^{+316}$, equivalent to Abell richness
2. The host galaxy is a cD possibly in the midst of
cannibalizing several smaller galaxies (\cite{soi96}).
CL~09104+4109 has been detected by the \sl ROSAT\rm\ HRI (\cite{fc95}; hereafter
FC95; see also \cite{cv96}). It is one of the most X-ray luminous clusters
known and shows
evidence for a cooling flow in the form of an excess central emission component
above the best-fit King model.
An apparent deficit in the central X-ray emission is also observed,
possibly caused by H{\sc i} absorption in a cooling flow,
or by displacement of the ICM by the radio jets
or a mass outflow from the center of the host galaxy (\cite{cv96}).
FC95 calculate L$_{\rm X}$=2.9$\pm$0.25~10$^{45}$~ergs~s$^{-1}$
in the observed \sl ROSAT\rm\ band, and kT=11.4$^{+\infty}_{-3.2}$~keV from a fit to
an ASCA spectrum of the object. Using a kT=11.4~keV thermal bremsstrahlung
spectrum redshifted to z=0.442, we calculate
a rest-frame 0.1--2.4~keV luminosity 3.03$\pm$0.26~10$^{45}$~ergs~s$^{-1}$.
Although the quasar and its cD host are at the center of the X-ray emission,
they may not lie at the optical center of the cluster (\cite{kle88}).
\subsection{3C~206} \label{3C206}
The first observation we present
is of the radio-loud quasar (RLQ) 3C~206 (z=0.1976).
3C~206 resides in a flattened cluster of Abell richness class 1 which has a
lower velocity dispersion than is typical for such clusters (\cite{ell89}).
The cluster (which we designate CL~3C~206) is very centrally concentrated,
with a best-fit optical core radius of 35~kpc (\cite{ell89}).
3C~206 is unusual in that it is the only radio-loud quasar at z$<$0.4 known to
reside in a cluster of Abell richness 0 or greater.
The host galaxy of 3C~206 is an elliptical in
the approximate optical center of the host cluster,
but the galaxy is $\gtrsim$1~mag fainter than expected for a first-rank
cluster galaxy (\cite{ell89,hut87}).
The host galaxy is slightly redder than a typical RLQ host galaxy but slightly
bluer than a normal elliptical (\cite{ell89}).
\subsubsection{Analysis of EINSTEIN HRI Observations of 3C~206}
Our `observation' of 3C~206 consists of a 61~ksec archival \sl EINSTEIN\rm\ High Resolution
Imager (HRI) image. No extended emission is obvious in the image.
Since our non-detection data analysis techniques were discussed in detail in
Paper~I, we give only an overview here. Our modeling requires
binned radial profiles for the object, the HRI Point Response Function (PRF),
and for PRF-convolved $\beta$=2/3 King model clusters of
r$_{\rm core}$=125 and 250~kpc at the quasar redshift for several different cosmologies.
After background subtraction,
the PRF was normalized to the object counts in the innermost bin and subtracted,
leaving a radial profile consisting of any excess counts above the
profile expected for an object of the observed central intensity.
The object's radial profile, the fitted PRF and background, and the
PRF subtracted (but not background subtracted) residual profile are plotted in
Figure \ref{206fit}.
The residual is exaggerated in this log-log plot; note that the apparent excess
emission is of the same scale as the PRF and that the residual is negative
between 15-40\arcsec, where cluster emission should be most prominent.
Since the PRF fit is sufficient, we derive a cts~s$^{-1}$\ value for a 3$\sigma$
upper limit cluster as described in Paper I.
This upper limit on cluster cts~s$^{-1}$\ within 8\arcmin\ was corrected for deadtime
and vignetting through comparison with the \sl EINSTEIN\rm\ HRI source catalog (available
through the Einstein On-Line Service of the SAO).
We measured the counts of the quasar and the next brightest source in the 3C~206
field in exactly the same manner as the HRI source catalog,
and found the archive
count rates to still be a factor of 1.112 higher than ours. Although this deadtime
plus vignetting correction factor is somewhat higher than might be expected,
we adopt it to be conservative.
To convert our limit from cts~s$^{-1}$\ to L$_{\rm X}$,
we first convert to the emitted flux in the \sl EINSTEIN\rm\ passband
corrected for Galactic absorption of log~N$_{\rm H}$=20.75 (\cite{elw89}).
We assume a Raymond \& Smith (1977) plasma spectrum with temperature from
\begin{equation}
\rm{\beta=\mu m_p \sigma_v^2/kT} \label{A}
\end{equation}
where $\mu$ is the mean molecular weight of the cluster gas (0.63 for solar
abundance) and $\sigma_v$ is the cluster velocity dispersion.
Assuming $\beta$=2/3 gives kT=2.5$\pm$1.1~keV
using the observed $\sigma_v$=500$\pm$110 km~s$^{-1}$.
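Equation~\ref{A} with $\beta$=2/3 can be evaluated directly; a short Python check (constants in SI units) reproduces the kT$\simeq$2.5~keV quoted above for $\sigma_v$=500~km~s$^{-1}$:

```python
M_P = 1.6726e-27   # proton mass [kg]
KEV = 1.602e-16    # 1 keV [J]

def cluster_kt_kev(sigma_v_kms, beta=2.0 / 3.0, mu=0.63):
    """kT = mu m_p sigma_v^2 / beta  (Eq. A solved for kT), in keV."""
    sigma = sigma_v_kms * 1e3  # km/s -> m/s
    return mu * M_P * sigma ** 2 / beta / KEV

# sigma_v = 500 km/s  ->  kT ~ 2.5 keV, as adopted for CL 3C 206.
```

The same relation applied to $\sigma_v$=1046~km~s$^{-1}$ (CL~1821+643, \S\ref{1821}) gives kT$\simeq$10.8~keV.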
Using the \sl EINSTEIN\rm\ Users Manual (\cite{h84}), we find a conversion factor
1.3~10$^{-13}$~ergs~cm$^{-2}$~count$^{-1}$ for this N(H{\sc i})\ and kT.
For intercomparison of our targets we convert to the rest-frame
\sl ROSAT\rm\ passband (0.1-2.4~keV).
Conversions were calculated using redshifted thermal bremsstrahlung
spectra and the effective areas as a function of energy for the \sl EINSTEIN\rm\ and
\sl ROSAT\rm\ HRIs given in Henry \& Henriksen (1986; hereafter HH86) and David et~al. (1995).
Next we convert this rest-frame 0.1-2.4~keV band flux F to a luminosity
L$_{\rm X}$ using F=L$_{\rm X}$/4$\pi$D$_{\rm L}^2$, where D$_{\rm L}$
is the quasar's luminosity distance in the assumed cosmology:
\begin{equation}
\rm{D_L={{{2cz}\over{H_o(G+1)}}\left( {1 + {{z}\over{G+1}}}\right) }} \label{B}
\end{equation}
where G=$\sqrt{1+2zq_{\rm o}}$ (\cite{sg86}).
(Note that the equations for {\rm D$_{\rm L}$} given in Paper~I are incorrect.)
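Equation~\ref{B} is straightforward to evaluate; as a check, for q$_{\rm o}$=1/2 it reduces algebraically to the familiar Mattig form D$_{\rm L}$=(2c/H$_{\rm o}$)(1+z$-\sqrt{1+z}$). A Python sketch:

```python
import math

C_KMS = 299792.458  # speed of light [km/s]

def lum_dist_mpc(z, H0=50.0, q0=0.5):
    """Luminosity distance (Eq. B): D_L = (2cz / (H0 (G+1))) (1 + z/(G+1)),
    with G = sqrt(1 + 2 z q0); returns Mpc for H0 in km/s/Mpc."""
    G = math.sqrt(1.0 + 2.0 * z * q0)
    return (2.0 * C_KMS * z / (H0 * (G + 1.0))) * (1.0 + z / (G + 1.0))

# For q0 = 1/2 this equals (2c/H0) * (1 + z - sqrt(1+z)).
```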
Finally, we correct the cluster luminosity for emission beyond r=8\arcmin.
The resulting upper limits, for different cosmologies and core radii,
are given in Table 2.
\begin{table}
\dummytable\label{tbl-2}
\end{table}
For kT=2.5~keV and r$_{\rm core}$=125~kpc
our CL~3C~206 3$\sigma$ upper limit is 1.63~10$^{44}$~ergs~s$^{-1}$.
Also given in Table 2 are our upper limits from Paper~I, now corrected to
the rest-frame 0.1-2.4~keV band.
\subsection{H~1821$+$643} \label{1821}
H~1821$+$643\ (z=0.297) is an IR-luminous X-ray selected radio-quiet quasar (RQQ)
residing in a giant elliptical galaxy
in a rich (Abell richness class $\sim$2) cluster at low redshift.
(H~1821$+$643\ and IRAS~09104$+$4109\ have the richest quasar host clusters known at any redshift.)
H~1821$+$643\ has been detected and
studied in the radio despite being radio-quiet (\cite{lrh92,bl95,pap95,blu96}).
It has a core, a lobe, and two small jets (\cite{bl95,blu96}).
We calculate $\sigma_v$=1046$\pm$108 km~s$^{-1}$ for CL~1821+643 (in its rest
frame) using 26 members from Schneider et~al. (1992) and Le Brun, Bergeron
\& Boiss\'e (1995).
The host galaxy is bright, large, red, and featureless, but slightly
asymmetrical and offset from the nucleus by 1-2\arcsec\ (\cite{hn91b}).
All the galaxy's measured parameters are consistent with it being
a cD at the center of the cluster.
\subsubsection{Analysis of ROSAT HRI Observations of H~1821$+$643} \label{1821obs}
Our \sl ROSAT\rm\ HRI observation of H~1821$+$643\ is shown in Figure \ref{1821img},
binned into 1\arcsec\ pixels.
Both the quasar and the nearby white dwarf central star of
the planetary nebula Kohoutek 1-16 (K1-16) are easily detected, along with
obvious extended emission from the quasar host cluster. The X-ray emission
from K1-16 shows no signs of being resolved on our HRI image. Isophote fitting
from r=10\arcsec--70\arcsec\ on an adaptively smoothed (\cite{wbc95})
image showed that the cluster has an ellipticity of $\sim$0.1 at all radii.
The cluster isophotes' center is, within the errors,
the same as the quasar position for all r$<$70\arcsec.
To decrease the FWHM and ellipticity of the PRF in our data, we subdivided the
image by exposure time, centroided, and restacked (cf. Morse 1994).
We also added a 1460~sec archival HRI image of the field at this stage.
We then extracted the quasar and white dwarf radial profiles using annuli of
1\arcsec\ width on the corrected, unbinned, unsmoothed HRI image, excluding
data within r=22\farcs5 of all objects detected by the standard processing,
and fitted the radial profile of emission surrounding the quasar.
\subsubsection{H~1821$+$643\ Radial Profile} \label{1821rp}
We initially assume a three-component radial profile:
a constant background, a \sl ROSAT\rm\ HRI PRF, and a King surface brightness cluster
(${\rm S(r) \propto [1 + (r/r_{core})^2]^{-(3\beta-0.5)}}$)
convolved with the \sl ROSAT\rm\ HRI PRF.
(Because of the complexity in fitting a non-analytic PRF-convolved King profile
to the data, we fit a simple King profile instead; simulations indicate $<$5\%
systematic error in this procedure, which we account for in our results.)
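The King surface-brightness law used in these fits is simple to write down; a sketch (the normalization S$_0$ and radii below are illustrative):

```python
def king_profile(r, s0, r_core, beta=2.0 / 3.0):
    """S(r) = S0 [1 + (r/r_core)^2]^-(3 beta - 0.5); beta = 2/3 gives
    the familiar S(r) ~ [1 + (r/r_core)^2]^-1.5 surface-brightness law."""
    return s0 * (1.0 + (r / r_core) ** 2) ** -(3.0 * beta - 0.5)

# At r = r_core the beta = 2/3 profile falls to 2^-1.5 ~ 0.354 of S0.
```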
The parameters in our model are
the background level,
the normalization, core radius, and $\beta$ of the King profile,
the PRF normalization,
the widths $\sigma_1$ and $\sigma_2$ of the two gaussians that comprise the PRF
(see Paper~I and David et~al. (1995); hereafter D95),
and the relative normalizations of the PRF gaussians.
We kept the normalization and scale length of the exponential PRF component
fixed at the standard values in all models.
We used {\sc nfit1d} in {\sc IRAF}\footnote{The
Image Reduction and Analysis Facility
is distributed by National Optical Astronomy Observatories,
operated by the Association of Universities for Research in Astronomy, Inc.,
under contract to the National Science Foundation.}
to fit our model to the observed radial profile.
We used $\beta$=2/3 and the standard normalization of the two PRF gaussians;
all other parameters were allowed to vary.
The solution (plotted in Figure 3a)
was good (reduced $\chi^2$=0.8612)
but it gave a value of $\sigma_2$=4\farcs58$\pm$0\farcs05, noticeably above the
maximum of 4\farcs1 measured for long PRF characterization observations
(\cite{hri95}).
We attempted to produce an adequate fit more in line with the known PRF
properties by allowing all the parameters to vary, but no fit gave
a smaller $\sigma_2$.
To see if our broad PRF result was robust, we fitted the PRF of the white dwarf
using data within r=23\arcsec.
The best fit (reduced $\chi^2$=0.698) had $\sigma_1$=2\farcs03$\pm$0\farcs18
and $\sigma_2$=4\farcs08$\pm$0\farcs20, consistent with the standard values
and the range of observed values (D95).
So, either the PRF in our image is best given
by the fit to the quasar, not the white dwarf, and is slightly broader than
measured in the PRF characterization observations, or the PRF is best given by
the fit to the white dwarf and there is a barely resolved component to the
cluster emission which broadens the fitted quasar radial profile.
Such a component would most likely be due to a cooling flow.
To test the hypothesis that a single King profile is inadequate to
describe the cluster emission,
we fitted a simple gaussian along with a point source and a $\beta$=2/3
King profile. The best fit (plotted in Figure 3b) was a considerable improvement
(F-test probability $<$0.5\% of occurring by chance) and had
$\sigma_1$=2\farcs34$\pm$0\farcs04 and $\sigma_2$=3\farcs91$\pm$0\farcs20,
smaller than in the fit without the extra gaussian and consistent with the WD
fit and the range observed in PRF characterization observations. However,
the amplitude of the gaussian component is constrained to only $\pm$32\%.
Thus the total cluster emission is better described by a King profile plus a
barely resolved component than a King model alone, but the amplitude of the
barely resolved component is considerably uncertain.
As a check, we fitted a King model and gaussian to the original, uncorrected
image, and found that the parameters for both components were identical within
the errors to the values determined from the corrected image.
The King component of the cluster's total flux is 3523$\pm$498 counts.
The best-fit additional gaussian component has 2139$\pm$713 counts,
for a total of 5662$\pm$870 cluster counts, integrated to infinity.
\subsubsection{CL~1821+643 Physical Parameters} \label{1821pp}
Several steps must be taken to convert from cts~s$^{-1}$\ to L$_{\rm X}$.
We take Galactic log~N(H{\sc i})=20.58 (\cite{ls95}) and assume a Raymond-Smith
spectrum with observed kT=5~keV, the highest value tabulated in D95;
a higher value would increase the estimated L$_{\rm X}$\ only a little,
since \sl ROSAT\rm\ has little effective area above 2~keV.
We find the energy-to-counts conversion factor as described in Paper~I and
divide this factor (0.223) into our cts~s$^{-1}$\ limit to obtain the energy flux
in units of 10$^{-11}$ ergs~cm$^{-2}$~s$^{-1}$.
We then convert to L$_{\rm X}$ in the rest-frame 0.1-2.4~keV band.
We measure a rest-frame 0.1-2.4~keV luminosity of
3.74$\pm$0.57 h$_{50}^{-2}$~10$^{45}$~ergs~s$^{-1}$ for CL~1821+643
with 2.33$\pm$0.33 and 1.41$\pm$0.47 h$_{50}^{-2}$~10$^{45}$~ergs~s$^{-1}$
from the King model and cooling flow components respectively.
The values of L$_{\rm X}$\ for different cosmologies are tabulated in Table 2.
This luminous cluster complicates the interpretation of previous X-ray
observations of this field (Appendix \ref{yaq}).
The detection of ICM X-ray emission allows us to calculate the central
electron number density n$_{\rm e,0}$\ of the cluster
if the emission follows a King model, and to put a lower limit on it
in the case of a cooling flow.
For $\beta$=2/3, equation (3) of Henry \& Henriksen (1986; HH86) becomes:
\begin{equation}
\rm{I(0; E_1 , E_2)=1.91 \times 10^{-3} n_{e,0}^2 r_{c} \sqrt{kT}~[\gamma(0.7,E_1/kT) - \gamma(0.7,E_2/kT)]~~ergs~s^{-1}~cm^{-2}~sr^{-1}} \label{D}
\end{equation}
where
r$_{\rm c}$ is the cluster core radius in kpc,
n$_{\rm e,0}$\ is the central electron number density of the cluster in cm$^{-3}$,
kT is the cluster temperature in keV,
and $\gamma$(a,z) is the incomplete gamma function
$\int_z^\infty$~x$^{a-1}$~e$^{-x}$~dx.
(The order of the gamma function terms is incorrectly reversed in HH86).
I(0;E$_{\rm 1}$,E$_{\rm 2}$) is the cluster's central X-ray surface brightness
(at the cluster) in the band E$_{\rm 1}$ to E$_{\rm 2}$,
and E$_{\rm 1}$ and E$_{\rm 2}$ are the energies (in keV) corresponding to the
lower and upper limits, respectively, of the instrumental passband
{\it at the object's redshift}.
For \sl ROSAT\rm, E$_{\rm 1}$=0.1(1+z)~keV and E$_{\rm 2}$=2.4(1+z)~keV.
I(0;E$_{\rm 1}$,E$_{\rm 2}$) can be related to I$_{\rm obs}$, the observed
central surface brightness in the (E$_{\rm 1}$,E$_{\rm 2}$) band
in ergs~cm$^{-2}$~s$^{-1}$~arcsec$^{-2}$ as follows (see also \cite{bw93}).
For $\beta$=2/3, the total cluster X-ray luminosity in the
(E$_{\rm 1}$,E$_{\rm 2}$) band is easily found by
integrating the surface brightness either at the source or at the observer.
Equating the two, we have:
\begin{equation}
{\rm L_X(E_1 , E_2) = I(0; E_1 , E_2)~4\pi~2\pi r_{core}^2 = I_{obs}~4\pi d_L^2~2\pi\theta_c^2} \label{E}
\end{equation}
where r$_{\rm core}$ and d$_{\rm L}$ are in cm
and $\theta_c$ is the angular size corresponding to r$_{\rm core}$\ at the object's
redshift. This yields
\begin{equation}
{\rm I(0; E_1 , E_2) = I_{obs}~d_L^2~\theta_c^2/r_{core}^2} \label{F}
\end{equation}
Also, if the cluster T is unknown but $\sigma_v$ is, we can use Eq.~\ref{A}
for $\beta$=2/3 to replace $\sqrt{\rm kT}$ in Eq.~\ref{D}:
\begin{equation}
\rm{I(0; E_1 , E_2)=5.86 \times 10^{-6} n_{e,0}^2 r_{c} \sigma_v~[\gamma(0.7,E_1/kT) - \gamma(0.7,E_2/kT)]~~ergs~s^{-1}~cm^{-2}~sr^{-1}} \label{G}
\end{equation}
where $\sigma_v$ is in km~s$^{-1}$.
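Solving Eq.~\ref{G} for n$_{\rm e,0}$ requires the incomplete gamma function; a self-contained sketch using simple midpoint quadrature (the input surface brightness below is a placeholder, not a measured value):

```python
import math

def upper_inc_gamma(a, z, upper=60.0, n=100000):
    """gamma(a, z) = integral_z^inf x^(a-1) e^-x dx, by midpoint
    quadrature (the integrand is negligible beyond x ~ 60)."""
    if z >= upper:
        return 0.0
    h = (upper - z) / n
    return h * sum((z + (i + 0.5) * h) ** (a - 1.0) *
                   math.exp(-(z + (i + 0.5) * h)) for i in range(n))

def n_e0_from_sb(I0, r_c_kpc, sigma_v_kms, kT_kev, z):
    """Invert Eq. G: I0 = 5.86e-6 n_e0^2 r_c sigma_v [gamma(0.7, E1/kT)
    - gamma(0.7, E2/kT)], with E1 = 0.1(1+z), E2 = 2.4(1+z) keV."""
    bracket = (upper_inc_gamma(0.7, 0.1 * (1 + z) / kT_kev) -
               upper_inc_gamma(0.7, 2.4 * (1 + z) / kT_kev))
    return math.sqrt(I0 / (5.86e-6 * r_c_kpc * sigma_v_kms * bracket))
```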
For H~1821$+$643, we find
n$_{\rm e,0}$=0.015$\pm$0.002 h$_{50}^{1/2}$ cm$^{-3}$ for the King model component
and a lower limit of n$_{\rm e,0}$=0.081$\pm$0.022 h$_{50}^{1/2}$ cm$^{-3}$ for the
cooling flow component using the central surface brightness of our gaussian fit.
(Strictly speaking a deprojection analysis is required to calculate n$_{\rm e,0}$\ in the
case of a cooling flow, and the densities will still be underestimated because
the central regions are unresolved,
but our estimates should be accurate lower limits.)
With an estimate for n$_{\rm e,0}$, we can estimate t$_{\rm cool}$, the cooling time for gas in
the center of the cluster, using equation (5.23) of Sarazin (1988):
\begin{equation}
{\rm t_{cool}=2.89~10^7~n_{e,0}^{-1}~\sqrt{T}~years} \label{H}
\end{equation}
where T (estimated from $\sigma_{\rm v}$\ using Eq.~\ref{A} if necessary) is in keV,
n$_{\rm e,0}$\ is in cm$^{-3}$, and we have used the relation
n$_{\rm p}$=0.82n$_{\rm e,0}$\ for completely ionized H-He gas. We find
t$_{\rm cool}$$<$6.4$\pm$1.2 h$_{50}^{-1/2}$~Gyr (since n$_{\rm e,0}$$\propto$h$_{50}^{1/2}$),
which is less than the age of the universe at z=0.297,
8.8~h$_{50}^{-1}$~Gyr (10.1~h$_{50}^{-1}$~Gyr for q$_{\rm o}$=0),
for all reasonable H$_{\rm o}$.
Thus CL~1821+643 meets the standard criteria for
the presence of a central cooling flow.
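Equation~\ref{H} can be checked numerically; with the King-component density n$_{\rm e,0}$=0.015~cm$^{-3}$ and kT$\simeq$10.8~keV (our estimate from Eq.~\ref{A} with $\sigma_v$=1046~km~s$^{-1}$), it reproduces the quoted t$_{\rm cool}\simeq$6.4~Gyr:

```python
import math

def cooling_time_gyr(n_e0, kT_kev):
    """t_cool = 2.89e7 n_e0^-1 sqrt(T) yr (Sarazin 1988, Eq. 5.23),
    returned in Gyr; n_e0 in cm^-3, T in keV."""
    return 2.89e7 * math.sqrt(kT_kev) / n_e0 / 1e9
```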
The mass cooling rate \.{M}$_{\rm cool}$\ can be found from
\begin{equation}
{\rm L_{cool} = 2.4\ 10^{43}\ T_{\rm keV}\ \dot{M}_{\rm cool,100}\ ergs\ s^{-1}} \label{I}
\end{equation}
where L$_{\rm cool}$ is the total luminosity of the cooling gas,
T$_{\rm keV}$ its initial temperature,
and \.{M}$_{\rm cool,100}$ the mass deposition rate
in 100 h$_{50}^{-2}$ M$_{\sun}$ yr$^{-1}$ (\cite{fab86}).
We find \.{M}$_{\rm cool}$=1120$\pm$440 h$_{50}^{-2}$ M$_{\sun}$~yr$^{-1}$ for H~1821$+$643. This
is likely a lower limit since we have not used the bolometric L$_{\rm cool}$.
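Inverting Eq.~\ref{I} for \.{M}$_{\rm cool}$ is a one-liner; taking the cooling-flow component luminosity 1.41~10$^{45}$~ergs~s$^{-1}$ and kT=5~keV (the temperature choice is our assumption) gives $\sim$1180~M$_{\sun}$~yr$^{-1}$, consistent within the errors with the quoted value:

```python
def mdot_cool_msun_yr(L_cool_erg_s, T_kev):
    """Mass deposition rate from L_cool = 2.4e43 T_keV Mdot_100 (Eq. I),
    where Mdot_100 is in units of 100 M_sun/yr; returns M_sun/yr."""
    return 100.0 * L_cool_erg_s / (2.4e43 * T_kev)
```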
\section{Discussion} \label{disc}
\subsection{Physical Parameters of Quasar Host Clusters} \label{phys}
Only five quasar host clusters have observations deep enough to put
interesting limits on their X-ray emission.
The two detections and three upper limits are listed in Table 2.
We note that the three upper limits are for the host clusters of RLQs
with P$_{\rm 20cm}$$>$10$^{26}$ W/Hz and the detections are of
the host clusters of two RQQs with P$_{\rm 20cm}$$\sim$10$^{25}$ W/Hz,
two of the richest quasar host clusters known at any redshift,
which are two of the most X-ray luminous clusters known.
These two luminous RQQ host clusters have dense ICM
(cf. Table 1, \S\ref{1821pp}).
For CL~09104+4109,
using the best-fit FC95 King model
r$_{\rm core}$=30\arcsec, kT=11.4$\pm$3.2~keV, and an extrapolated central
surface brightness of 0.3--4.0~counts~arcmin$^{-2}$~s$^{-1}$
(see Fig. 3 of FC95), we obtain from Eq.~\ref{D} a lower limit for
n$_{\rm e,0}$\ (electron density at the cluster center)
in the range 0.027--0.097 h$_{50}^{1/2}$ cm$^{-3}$.
This is consistent with the value of $\sim$0.038 h$_{50}^{1/2}$ cm$^{-3}$ at
r$<$50~kpc obtained from the deprojection analysis of Crawford \& Vanderriest
(1996), but we note once again that these values are lower limits for the true
central densities, because of the cooling flows.
Cooling flow gas should have roughly r$^{-1}$ density and pressure profiles
(\cite{fab94}, p. 299), and the X-ray images do not resolve the innermost
regions where the density should be highest.
In addition, the apparent central X-ray deficit in
IRAS~09104$+$4109\ may alter conditions in the cluster center.
For comparison, Abell clusters typically have
n$_{\rm e,0}$=0.001-0.010 h$_{50}^{1/2}$~cm$^{-3}$ (\cite{jf84}),
and the host clusters of the radio galaxies Cygnus~A and 3C~295 have
n$_{\rm e,0}$=0.057$\pm$0.016
and $>$0.026$_{-0.009}^{+0.018}$ h$_{50}^{1/2}$ cm$^{-3}$ respectively
(\cite{cph94}, HH86).
We can also constrain n$_{\rm e,0}$, t$_{\rm cool}$, and \.{M}$_{\rm cool}$\ for the three RLQ host clusters.
Using our upper limit surface brightnesses for r$_{\rm core}$=125~kpc and assuming
kT=5~keV (kT=2.5~keV for 3C~206), we obtain the limits shown in Table 1.
The limiting central surface brightnesses and density limits are
lower and the cooling times longer for r$_{\rm core}$$>$125~kpc and/or H$_{\rm o}$$>$50,
but can be shorter if the gas is abnormally cool or centrally concentrated
(e.g. in galaxy size halos or clusters with r$_{\rm core}$$<$125~kpc).
For 3C~206, the cooling time at the center of a putative cluster right at our
3$\sigma$\ upper limit
is less than the age of the universe at z=0.1976 for plausible cosmologies.
For 3C~263 and PKS~2352-342, if q$_{\rm o}$=0.5 the age of the universe at their
redshifts is several Gyr shorter than the cooling times of their host clusters
and thus no cooling flows are possible, but cooling flows are possible if q$_{\rm o}$=0.
If we assume q$_{\rm o}$=0 and make a maximal assumption of clusters just at
our 3$\sigma$\ upper limits with 50\% of their emission from cooling flow
components, we obtain the limits on \.{M}$_{\rm cool}$\ shown in Table 1.
We discuss the implications of these values later, in \S\ref{cf}.
Another interesting constraint on some of the clusters' line-of-sight
properties can be made.
H~1821$+$643\ has significant flux below 912~\AA\ (\cite{kol91,lee93,kri96}); thus,
the cooling flow in CL~1821+643 does not produce a Lyman limit and must have
intrinsic N(H{\sc i})$\leq$10$^{17}$~cm$^{-2}$ along the line of sight. This is also
the case for CL~3C~263 (\cite{cra91}) but not for CL~09104+4109, which has
intrinsic N(H{\sc i})=2.5$_{-1.1}^{+1.8}$ 10$^{21}$~cm$^{-2}$ (FC95).
This latter value is typical for nearby cooling flows
(\cite{aea95}, but cf. Laor 1996);
thus, the cooling flow in CL~1821+643 and any putative cooling flow in CL~3C~263
have unusually low intrinsic N(H{\sc i})\ along our line of sight.
Either the cooling flows have low overall N(H{\sc i}), or,
more likely, the ionizing radiation from the two quasars is
confined to a cone (including our line of sight) within which cooling
gas is reionized, as suggested by Bremer, Fabian \& Crawford (1996).
\subsection{Comparison of Observations and Models} \label{compare}
In this section we compare our observations
to three models which have been proposed to explain the evolution of AGN cluster
environments. We introduce each model, discuss data on key predictions, point
out problems, and discuss implications and possible solutions to the problems.
\subsubsection{The Cooling Flow Model} \label{cf}
The cooling flow model (\cite{fab86,fc90}) is not a model for quasar formation
in cooling flows, but rather a model for how dense cooling flows can fuel
AGN located within them in a self-sustaining manner. However, if it is to
explain the evolution of RLQs and PRGs in clusters at z$<$0.6,
such objects must preferentially be found in cooling flow clusters for some reason,
perhaps because radio galaxies in clusters have higher radio luminosity
due to radio lobe confinement (\cite{ba96}).
This model is supported by the work of Bremer et~al. (1992, and references
therein), who find that extended line-emitting gas around z$\lesssim$1 RLQs
is so common that it must be long-lived and therefore confined.
If a hot ICM confines the gas, the required pressure is such
that the ICM should have cooling flows of 100-1000~M$_{\sun}$/yr.
Fabian \& Crawford (1990) outline a model where luminous quasars at z$>$1
are surrounded by dense cooling flows in subclusters.
They show that an AGN of luminosity L in
dense (P=nT$\sim$10$^8$ K/cm$^3$) gas at the virial temperature of the central
cluster galaxy (T$\sim$10$^7$~K)
will Compton-cool the gas within a radius R$\propto$L$^{1/2}$ for a
mass accretion rate (proportional to this volume) of \.{M}$\propto$L$^{3/2}$.
Since L$\propto$\.{M}c$^2$, this Compton-cooled
accretion flow will grow by positive feedback until L=L$_{\rm Edd}$,
as long as Compton cooling dominates bremsstrahlung for L$<$L$_{\rm Edd}$,
which occurs only in high-P environments.
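The feedback loop above can be collected schematically (our paraphrase of the Fabian \& Crawford (1990) scalings, not their exact equations):

```latex
% Schematic Compton-feedback loop: the cooled region has R \propto L^{1/2},
% the accretion rate follows the cooled volume, and L tracks \.{M}:
\begin{displaymath}
\dot{M} \propto R^{3} \propto L^{3/2}, \qquad
L \propto \dot{M}c^{2}
\;\Longrightarrow\;
L_{\rm new} \propto L_{\rm old}^{3/2} ,
\end{displaymath}
```

so a luminosity above the feedback threshold (in suitable units) runs away until it is capped at L$_{\rm Edd}$.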
AGN so powered will last until a major subcluster merger
or until the supply of dense cooling gas is exhausted.
Assuming the most luminous objects form in the densest regions, the observed
optical fading of the RLQ/PRG population in clusters below z$\sim$0.6 can be
explained by major subcluster mergers (which occur earlier in the richest
environments) disrupting the Compton-cooling mechanism, leading to a precipitous
drop in the quasar luminosity.
Thus this model is also intriguingly consistent
with the simulations of Tsai \& Buote (1996) discussed in \S\ref{intro}.
The cooling flow model can be tested in detail for 3C~263.
Crawford et~al. (1991), hereafter C91, predict \.{M}$_{\rm cool}$=100-1000~M$_{\sun}$/yr\ for
CL~3C~263 from extended emission-line gas observations.
We predict at most \.{M}$_{\rm cool}$$<$202~M$_{\sun}$/yr\ for CL~3C~263, and then only if q$_{\rm o}$=0.
For q$_{\rm o}$=0.5, to have t$_{\rm cool}$\ less than the age of the universe at its
redshift and \.{M}$_{\rm cool}$=100~M$_{\sun}$/yr, CL~3C~263 must have
either r$_{\rm core}$$<$62~kpc and cooling flow L$_{\rm X,44}$=1.2, or T$<$1.3~keV and cooling
flow L$_{\rm X,44}$=0.3, where L$_{\rm X,44}$\ is X-ray luminosity in units of 10$^{44}$~erg~s$^{-1}$.
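The sense of these trade-offs follows from the standard bremsstrahlung cooling-time estimate for cluster gas (an illustrative textbook form, not necessarily the exact expression used in our fits):

```latex
\begin{displaymath}
t_{\rm cool} \simeq 8.5\times10^{10}\,{\rm yr}\,
\left(\frac{n_{\rm e}}{10^{-3}\,{\rm cm^{-3}}}\right)^{-1}
\left(\frac{T}{10^{8}\,{\rm K}}\right)^{1/2} ,
\end{displaymath}
```

so at fixed L$_{\rm X}$, a smaller r$_{\rm core}$\ (higher central n$_{\rm e}$) or a lower T shortens t$_{\rm cool}$, which is why the constraints take the form given above.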
In addition, the minimum energy pressure 100~kpc
from the quasar given by C91 can be produced by the ICM only if there is
a cluster with kT=5~keV and r$_{\rm core}$=125~kpc right at our upper limit luminosity.
However, matching the pressure at $<$100~kpc inferred by C91 from
any of their models based on observed [O{\sc iii}]/[O{\sc ii}] line ratios and
various assumptions for the quasar's ionizing spectrum requires an additional
compact cooling flow component or a cluster with r$_{\rm core}$$\lesssim$100~kpc.
Thus for CL~3C~263 to match the predictions of the cooling flow model as given
in C91, it cannot be much fainter than our upper limit and must have
r$_{\rm core}$$\lesssim$60--100~kpc and/or an unusually low kT for its L$_{\rm X}$.
Crawford \& Fabian (1989) and Fabian (1992) point out, however, that since
collapsed structures at high z have shallower potentials, the gas in them will
have a lower kT, and since they are denser, ``more compact objects
than present-day clusters may be appropriate sites for remote cooling flows.''
The cooling flows in our two RQQ host clusters may very well have central ($<$1~kpc) pressures $\sim$10$^8$~K/cm$^3$, and thus be explained by the cooling flow
model's Compton feedback mechanism, again with the caveat for IRAS~09104$+$4109\ that
the apparent central X-ray deficit may indicate unusual conditions.
But if pressures do reach 10$^8$ K/cm$^3$ at $<$1~kpc in massive
(\.{M}$\gtrsim$500~M$_{\sun}$/yr) cooling flows at low redshift (z$\lesssim$0.4),
this model must explain why surveys of powerful AGN at such redshifts
very rarely find them in massive cooling flow clusters.
Also, if our three RLQ host clusters have cooling flows at all,
they would have \.{M}$\lesssim$200~M$_{\sun}$/yr.
Central pressures $\sim$10$^7$~K/cm$^3$ have been estimated for cooling flows
of this strength at low redshift (\cite{heck89}).
Thus the central pressures in these RLQ host clusters are not likely to be
consistent with the cooling flow model
because their central pressures are an order of magnitude lower
than required for the Compton-cooling feedback mechanism of Fabian \& Crawford
(1990) to successfully power the AGN.
{\it This evidence suggests that the cooling flow model cannot be the sole
explanation for the evolution of powerful AGN in clusters.}
However, current data are insufficient to completely rule the model out as the
sole explanation, since our three RLQ host clusters which seem to lack dense
cooling flows {\it might} be powered in the manner predicted by this model
{\it if} 1) cooling flows are in more compact and cooler clusters at high redshift
or 2) there is something unusual about these objects which has caused the
density of hot gas in their innermost regions to increase by at least an order
of magnitude above the density predicted by X-ray data (cf. \S\ref{spec}).
As for powerful FR~II radio galaxies (PRGs), at low redshift they are extremely
rarely found in clusters (which preferentially host FR~I radio galaxies),
and most of those that are in clusters are not at the cluster centers
(\cite{lo95,wd96}), which may in itself argue against the cooling flow model.
The only two clusters with central FR~II galaxies in which cooling flows could
have been definitively detected with observations of the S/N and spatial
resolution made to date are CL~3C~295 and CL~CygA, both of which have cooling
flows of $\sim$200~M$_{\sun}$/yr\ (see references in legend to Fig. \ref{LvB}).
The next best candidate for a FR~II radio galaxy at the center of
a cooling flow is B3~1333+412 in A1763 at z=0.189 (\cite{vb82}).
For CL~3C~295, existing observations do not rule out high central pressures.
For CL~CygA, Reynolds \& Fabian (1996) find P=2.5~10$^6$~K/cm$^3$ at 15 kpc.
This extrapolates to $\sim$4~10$^7$~K/cm$^3$ at r=1~kpc assuming
P$\propto$r$^{-1}$ (\cite{fab94}).
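The inward extrapolation can be verified with one line of arithmetic (an illustrative check; the inputs are the values quoted above):

```python
# Extrapolate the CL CygA pressure inward assuming P proportional to 1/r.
P_15 = 2.5e6               # K cm^-3 at r = 15 kpc (Reynolds & Fabian 1996)
P_1 = P_15 * (15.0 / 1.0)  # pressure at r = 1 kpc under P ~ r^-1
print(f"P(1 kpc) ~ {P_1:.2e} K cm^-3")
```

This gives 3.75~10$^7$~K/cm$^3$, i.e. the $\sim$4~10$^7$~K/cm$^3$ quoted above.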
So it is possible these two PRGs have central densities and pressures sufficient
to support the Compton feedback quasar fueling mechanism, but it is also
possible that some other process is needed to create the required high densities.
The finding of Wan \& Daly (1996) that FR~II host clusters at z$\leq$0.6 are
typically X-ray underluminous (i.e. cooler or less dense than average clusters)
may also be a problem for this model (cf. Fig. \ref{LvB}).
The central densities they give for these
clusters translate into cooling times longer than the age of the universe for
all clusters in their sample except CL~CygA.
But Wellman, Daly \& Wan (1996a, 1996b), using radio bridge
parameters of a sample of FR~II radio galaxies at z=0.5--1.8,
derive somewhat larger surrounding densities and temperatures,
consistent with present day ICM, and
cooling times short enough to form cooling flows in some cases (cf. \S\ref{icm}).
Thus the cooling flow model for the evolution of FR~II AGN environments is
unlikely to be the {\it sole} explanation for this evolution. H~1821$+$643\ and IRAS~09104$+$4109\
are easily explained by this model, but all other host clusters we have
discussed may harbor cooling flows as dense as required by the model only if
1) cooling flows are found in cooler and denser clusters at z$\gtrsim$0.4,
or 2) some other mechanism has boosted their central densities
high enough to engage the fueling mechanism proposed in the model.
(However, Cygnus~A and 3C~206 need not follow this model for it to explain the
evolution of FR~II AGN environments at z$\gtrsim$0.4.)
One possible mechanism for increasing central densities is strong interactions
or mergers, discussed in \S\ref{spec}.
\subsubsection{The Low ICM Density Model} \label{icm}
The low-ICM-density model (\cite{sp81}, EYG91, \cite{ye93}) predicts that
quasars are preferentially located in host clusters with low-density ICM
($\lesssim$10$^{-4}$ cm$^{-3}$) where ram pressure stripping
is inefficient and gas remains in galaxies as a possible AGN fuel source.
This model is consistent with
findings that radio sources at z$\sim$0.5 have radio morphologies
uncorrelated with the richnesses of their environs (\cite{rse95,hut96}),
implying that at z$\sim$0.5 the ICM/IGM in optically rich environments is not
consistently denser than in poor ones.
Similarly, Wan \& Daly (1996)
find that at z$<$0.35 clusters with FR~II sources tend to be less
X-ray luminous (less dense and/or cooler) than those without.
FR~II host clusters at z$\sim$0.5 are consistent with being underluminous
as well, based on comparison of inferred radio bridge pressures to those in
the z$<$0.35 sample.
However, this model is inconsistent with
Wellman, Daly \& Wan (1996a, 1996b), who find from radio bridge studies
that at z=0.5--1.8 large FR~II 3C radio galaxies may be surrounded by gas with
densities and temperatures similar to nearby clusters.
These different results may be explained by the different radio powers
of the objects in each sample (cf. \cite{ba96}).
Also, some high-z radio galaxies are observed to have large
rotation measures which at low z are caused only by dense ICM (\cite{car97}).
This model predicts that powerful AGN host clusters are
X-ray underluminous for their optical richnesses.
In Figure \ref{LvB} we plot cluster richness B$_{\rm gc}$ vs. rest-frame
soft X-ray luminosity L$_{\rm X}$(0.1-2.4~keV) to look for this trend.
B$_{\rm gc}$
is a linear measure of richness.
Quasar host clusters are plotted as filled squares,
with upper limits assuming r$_{\rm core}$=125~kpc.
Open squares are a \={z}$\sim$0.3 subsample of X-ray
selected EMSS clusters studied by the CNOC group (\cite{car96}),
radio galaxies are filled triangles, and other symbols are objects from the
literature
(see figure legend for references).
The dotted line is the best-fit relation to the CNOC/EMSS data.
Compared to both the CNOC sample and other clusters from the literature,
H~1821$+$643\ and IRAS~09104$+$4109\ are X-ray overluminous for their optical richnesses
and 3C~206, 3C~263, and PKS~2352$-$342\ are either normal or underluminous,
consistent with this model.
Note, however, that several z$>$0.5 optically selected clusters (half-filled
squares; see figure legend for references) lie at the low-L$_{\rm X}$\ end of the
literature range; thus, our two high-z RLQ host clusters might be normal or even
overluminous compared to high-z optically selected clusters of similar richness.
Clusters with central FR~II radio galaxies (filled triangles)
are either normal or underluminous for their richnesses.
The L$_{\rm X}$\ for 3C~382 is probably contaminated by the central engine,
as an archival WFPC2 image reveals a bright, unresolved source in the
center of the host galaxy.
Thus the only overluminous PRG host clusters are those of 3C~295 and Cygnus~A,
but these objects do present immediate problems for this model.
Perley \& Taylor (1991) argue convincingly that 3C~295 is $<$10 Myr old, based
on its observed size and the ram pressure needed to confine the hot spots.
Similar calculations for Cygnus~A
yield an age of $\sim$30-40~Myr (\cite{car91,bla96}).
Multiple-generation AGN models predict characteristic lifetimes of
$\sim$100~Myr and single-generation models even longer ones (\cite{cp88}),
so these truly are young AGN.
If the low-ICM-density model is correct and radio sources should not form in
dense clusters, these
clusters must have grown dense only after the radio sources formed.
However, the shortest timescale on which the ICM might
significantly evolve is the cluster-core sound crossing time, $\sim$100 Myr, so
it appears some strong radio sources have formed while immersed in dense ICM.
The existence of these two high ICM density RQQ host clusters
is a problem for the low ICM density model if it is to be a
universal explanation for the presence and evolution of quasars in clusters.
But as RQQs in clusters are rare, they may very well have different
evolutionary histories than RLQs in clusters. One possibility is that the RQQs
formed as RLQs when the clusters were less dense and have been
continuously active ever since.
Their evolution into RQQs sometime after formation
might have been due to spin-down of the black hole
(\cite{bec94}) or the increasing density of their environments
interfering with jet production (\cite{ree82}).
In this scenario,
even if we assume the ICM density doubles on a dynamical timescale, a rate
ten times faster than simulations predict (\cite{evr90})
but probably still consistent with data on high-z optically selected
clusters (\cite{cas94}), H~1821$+$643\ and IRAS~09104$+$4109\ must be quite old ($\sim$10$^9$~yr)
to have formed in clusters with even moderately low ICM densities
(n$_{\rm e,0}$$\lesssim$10$^{-3}$~cm$^{-3}$).
If these two RLQ host clusters are typical of z$\sim$0.7 RLQ host clusters,
then with this assumed rapid evolution of the ICM density it is possible that
these clusters could evolve to be as luminous as the two RQQ host clusters by
z$\sim$0.30--0.44, consistent with the suggestion that these RQQs were once RLQs.
Another constraint comes from the mass of the central black hole in H~1821$+$643,
which is estimated at M$_{\rm BH}$=3~10$^9$ M$_{\sun}$ (\cite{kol93}).
If H~1821$+$643\ has accreted continuously at the Eddington limit with a 10\%
efficiency for conversion of accreted matter into energy, it would have
reached this M$_{\rm BH}$ after 0.9~Gyr, just consistent with the age necessary
for formation in a low ICM density cluster.
If the efficiency were any less, the black hole would have reached its estimated
mass more quickly and the quasar would have to be younger. If the accretion
rate were any less, the quasar would not likely be as luminous as it is.
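The growth-time argument can be sketched numerically. This is an illustrative calculation, not the authors' own: the 10\% efficiency and final mass are from the text, while the stellar-mass seed and the precise form of the Salpeter e-folding time are our assumptions.

```python
import math

# Physical constants (cgs)
G = 6.674e-8          # gravitational constant
m_p = 1.673e-24       # proton mass, g
c = 2.998e10          # speed of light, cm/s
sigma_T = 6.652e-25   # Thomson cross-section, cm^2
M_sun = 1.989e33      # solar mass, g
yr = 3.156e7          # year, s

eps = 0.10            # radiative efficiency (value used in the text)
M_seed = 10.0 * M_sun # assumed stellar-mass seed (our assumption)
M_final = 3e9 * M_sun # estimated M_BH of H 1821+643 (Kolman et al.)

# Eddington-limited growth: dM/dt = (1 - eps) * L_Edd / (eps * c^2),
# so M(t) = M_seed * exp(t / t_S) with Salpeter e-folding time t_S:
t_S = (eps / (1.0 - eps)) * sigma_T * c / (4.0 * math.pi * G * m_p)
t_grow = t_S * math.log(M_final / M_seed)

print(f"e-folding time: {t_S / yr / 1e6:.0f} Myr")
print(f"growth time:    {t_grow / yr / 1e9:.2f} Gyr")
```

With these assumptions the e-folding time is $\sim$50~Myr and the growth time $\sim$1~Gyr, consistent with the $\sim$0.9~Gyr figure above (small differences reflect the assumed seed mass and whether the $(1-\epsilon)$ factor is retained).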
Thus while these two RQQs could be very old continuously active quasars which
formed as RLQs in moderately low ICM density clusters,
the required rate of ICM density increase is extremely large,
the different timescales involved agree for only a small range of parameters,
and the requirement for continuous fueling of these rather luminous quasars at
the Eddington rate for $\sim$1~Gyr is a difficult one to fulfill without
invoking interactions which allow gas to flow into the center of the host
galaxies.
This scenario does make the testable prediction that if any remnant radio lobes
exist around these objects, they should be very old.
The major drawback of this scenario is that it is not applicable to the two PRGs
(Cygnus~A and 3C~295), since those AGN are very young. One explanation which
might be applicable to all four objects in high ICM density clusters is that they
were created recently when their host galaxies underwent strong interactions.
We defer discussion of this possibility to \S\ref{spec}.
In any case, {\it the existence of these four objects in high ICM density
clusters is sufficient to rule out the low-ICM-density model as the sole
explanation for the presence of powerful AGN in clusters,}
even though current data do not rule out
low density ICM being present in {\it most} powerful AGN host clusters.
\subsubsection{The Low-$\sigma_v$ Interaction/Merger Model} \label{sigv}
The low-$\sigma_{\rm v}$\ interaction/merger model (\cite{hut84}, EYG91) predicts that
quasars will be preferentially found in unvirialized, low velocity dispersion
($\sigma_v$) clusters where the low-$\Delta$v encounters needed for
strong interactions and mergers are relatively common (\cite{af80}).
Ellingson, Green \& Yee (1991) showed that the composite
$\sigma_v$ of quasar host clusters is
lower than for comparably rich Abell clusters, consistent with this model.
To test this model, in Figure \ref{DvB}
we plot cluster richness B$_{\rm gq}$\ vs. cluster velocity dispersion $\sigma_v$.
Compared to the CNOC and literature data, 3C~206 has a slightly low $\sigma_{\rm v}$\ for its
richness while H~1821$+$643\ is normal.
The $\sigma_v$ and B$_{\rm gq}$\ of 3C~206 are representative of the ensemble quasar
host cluster of Ellingson, Green \& Yee (1991).
The host clusters of the PRGs Cygnus~A and 3C~295 are outliers.
Smail et~al. (1997) give $\sigma_{\rm v}$=1670~km~s$^{-1}$\ (21 objects) for CL~3C~295,
with a $\pm$250~km~s$^{-1}$\ uncertainty (Smail, personal communication).
The redshift histogram of CL~3C~295
shows no evidence for subclustering or field contamination (\cite{dg92}).
The $\sigma_v$ of CL~CygA, based on only five galaxies,
is almost certainly an overestimate (\cite{ss82}).
Thus the few available measurements of quasar and FR~II radio galaxy host
cluster velocity dispersions are not particularly supportive of this model,
although some measurements may suffer from field contamination (\cite{sma97}).
Of the two RQQs in our sample, no $\sigma_{\rm v}$\ measurement exists for CL~09104+4109,
and CL~1821+643 has a normal or high $\sigma_v$ for its richness.
Since the cluster $\sigma_{\rm v}$\ evolves on the dynamical timescale during formation,
we can make the same arguments about the age of H~1821$+$643\
as we did in the previous discussion for the low-ICM-density model, namely that
these RQQs could be old AGN which formed as RLQs when the cluster had a lower $\sigma_{\rm v}$.
But for
CL~3C~295 and CL~CygA, even if we assume $\sigma_v$$\sim$850~km~s$^{-1}$
for consistency with X-ray data (\cite{hh86,car91}) their velocity dispersions
would still be normal or high for their richnesses, and these AGN are too
young to have formed when their clusters had lower $\sigma_{\rm v}$.
As we discuss in the next section, however, there is another possible
explanation for these exceptions to the low-$\sigma_{\rm v}$\ model which may preserve
the model's viability.
\subsection{Which Model(s) Are Correct?} \label{spec}
None of the three simple models purporting to explain the presence and evolution
of powerful
AGN in cluster centers seem able to explain all the observations at first glance.
The low ICM density model cannot account for AGN in clusters with dense ICM
(3C~295 \& Cygnus~A, and H~1821$+$643\ \& IRAS~09104$+$4109\ unless they are very old; cf. \S\ref{icm}),
but is consistent with our nondetection of ICM emission from RLQ host clusters.
Some other recent radio and X-ray work supports this model
(\cite{rse95,hut96,wd96}), but some does not
(Wellman, Daly \& Wan 1996a, 1996b; \cite{cf96b}).
The cooling flow model requires very strong cooling flows, and thus has
difficulty accounting for host clusters without luminous X-ray emission
or with relatively weak cooling flows,
but can explain the X-ray luminous host clusters of H~1821$+$643\ and IRAS~09104$+$4109.
And the low-$\sigma_v$ model has difficulty explaining 3C~295, H~1821$+$643,
and Cygnus~A, whose host clusters have apparently high $\sigma_v$'s,
although it is supported by the scarce data on RLQ host clusters (\cite{eyg91}).
The evidence suggests that neither the cooling flow nor the low-ICM-density
models can be the sole explanation for the presence and evolution of powerful
AGN in clusters.
Strictly speaking, the low-$\sigma_{\rm v}$\ model cannot be the sole explanation either,
since some powerful AGN reside in high-$\sigma_v$ clusters.
However, even in high-$\sigma_v$ clusters, the low-$\Delta$v interactions or
mergers required by the low-$\sigma_{\rm v}$\ model can still occur, albeit rarely.
{\it We suggest the possibility that AGN are produced
in clusters solely by a strong interaction of their host galaxy with another
galaxy in the cluster.} (We define a strong interaction as an interaction
and/or merger which results in considerable gas flow into the central regions
of the post-interaction AGN host galaxy.)
This would naturally favor host clusters with low $\sigma_{\rm v}$\ (and possibly low ICM
density), since strong interactions with gas-containing galaxies are more common
in such clusters, but again, such interactions can occur in any cluster
(\cite{dm94}) as well as in the field where most quasars exist.
If this strong interaction scenario is correct,
the distribution of AGN host cluster velocity dispersions should be biased to
low values, but need not consist exclusively of low-$\sigma_{\rm v}$\ clusters.
A large sample of AGN host cluster $\sigma_{\rm v}$'s will be needed to test that prediction.
A more easily testable prediction is that all host galaxies of quasars in
clusters should show evidence of interaction with another galaxy.
It is also possible that strong interactions are not the sole formation process
for AGN in clusters, and that the cooling flow
model operates in some cases.
Strong interactions may also allow the cooling flow model to operate in clusters
it would not otherwise be able to, by providing a mechanism for increasing the
ICM densities at the center of the host galaxies sufficiently high to switch on
the Compton-feedback fueling mechanism.
X-ray observations of additional powerful AGN in clusters will determine how
prevalent the cooling flow model can be, and how necessary an additional
mechanism for increasing the central densities is.
Observations of the host galaxies of such quasars will determine how often
mergers might provide that mechanism.
We now consider whether there is evidence for or against this strong
interaction scenario in the far-IR and optical properties of the host galaxies
of the AGN we have discussed (cf. Table 1).
\paragraph{Far-IR Properties:} \label{farir}
{\bf RQQs:} Both IRAS~09104$+$4109\ and H~1821$+$643\ are luminous far-IR sources (Table 1),
with a 60$\mu$m to optical luminosity ratio at least 2.5 times greater than
any of our three RLQs.
This excess far-IR emission above what is expected for quasars of
their luminosity can plausibly be attributed to an excess of gas and
dust in the RQQ host galaxies resulting from a recent strong interaction.
The excess IR luminosity is too strong to be attributed to gas and dust in the
cooling flow (\cite{bmo90}).
{\bf RLQs:}
The far-IR luminosity of 3C~206 is almost two orders of magnitude lower than
the two RQQs.
Both 3C~263 and PKS~2352$-$342\ cannot be ruled out as being IR-luminous, although at
most they would still be an order of magnitude less luminous than the two RQQs.
Thus any interactions in which these RLQs were involved must have been much less
gas-rich than those in which the RQQs were involved.
{\bf PRGs:} 3C~295 has log~L$_{\rm 60\mu m}$$<$12.06~L$_{\sun}$ (\cite{gmn88}),
and Cygnus~A has log~L$_{\rm 60\mu m}$=11.72~L$_{\sun}$,
luminosities roughly an order of magnitude lower than those of the two RQQs.
The far-IR luminosity could still be produced by interaction-induced
starbursts, but it could also be reprocessed AGN emission.
Since it is impossible to disentangle the two, the far-IR luminosities for
these two radio galaxies are inconclusive.
We note that CO(1-0) observations have been made of Cygnus~A
(\cite{ba94,eva96}) and CO(3-2) observations of IRAS~09104$+$4109\ (\cite{eva96}).
Neither object was detected, but it is unclear how to
extrapolate these cold gas mass limits to total gas mass limits,
particularly if the gas is very near the central engine
(or if it is immersed in a dense, hot cooling flow),
as Barvainis \& Antonucci point out.
\paragraph{Optical Properties:} \label{opt}
{\bf RQQs:} The host galaxy of H~1821$+$643\ is a featureless red cD galaxy which is
slightly asymmetrical and offset from the nucleus by 1-2\arcsec\ (\cite{hn91b}).
Hutchings \& Neff (1991a) subjectively classify the galaxy as
undergoing a weak interaction of somewhat old age.
The host galaxy of IRAS~09104$+$4109\ is a cD galaxy possibly in the midst of cannibalizing
several smaller galaxies (\cite{soi96}).
Hutchings \& Neff (1991a) subjectively classify it as
undergoing only a somewhat weak interaction of moderate age.
Thus the optical evidence for mergers or strong interactions in these two RQQ
host galaxies is suggestive but not strong.
{\bf RLQs:} A 600~sec archival WFPC2 image of 3C~206 shows strong evidence for
interaction of its elliptical host galaxy with several smaller galaxies.
The host galaxy's isophotes are slightly asymmetrically extended to the WNW,
and two knots of emission, possibly galaxies being swallowed, appear within the
host galaxy 1.5\arcsec\ SE and 0.5\arcsec\ W of the quasar.
Projection effects cannot be ruled out,
but the chances of such close projections occurring are quite small.
A third galaxy, 4.25\arcsec\ SSW of the quasar, shows a faint nucleus inside a
distorted envelope of low surface brightness emission, consistent with being
tidally disrupted by the quasar host.
A 280~sec archival WFPC2 image of 3C~263 shows five faint knots of emission
within 5\arcsec\ of the quasar
and a very faint, possibly asymmetrical, underlying envelope of emission.
Better data are needed to determine the galaxy's morphology,
as the current data do not rule out e.g. a luminous spiral galaxy host
(which would however be unprecedented for a RLQ).
No information is available on the host galaxy of PKS~2352$-$342.
{\bf PRGs:} An archival WFPC2 image of 3C~295 shows that its host galaxy is
definitely disturbed, with distorted, asymmetrical isophotes and a partial
shell or plume of emission.
Optical evidence for interaction in Cygnus~A (\cite{sh89,srl94,j96})
is less obvious but still strong:
a secondary IR peak 1\arcsec\ north of the nucleus,
substantial dust in the inner regions of the galaxy, counter-rotating gas
structures and evidence for star formation in the nuclear region,
and twisted isophotes (which might not indicate interaction, however;
cf. Smith and Heckman 1989).
Thus the 3C~295 host galaxy has almost certainly undergone a merger or strong
interaction which could have begun any time within the last $\sim$Gyr, and
Cygnus~A
probably also has been disturbed (as suggested by Stockton, Ridgway \& Lilly
1994), but by a less disruptive or less recent event.
\bigskip
Thus while the
evidence is not conclusive except in the case of 3C~295, it is on the whole
supportive of a scenario where these AGN host galaxies have undergone strong
interactions or mergers. Specifically, {\it in every case where data are available,
some evidence for interaction is seen.}
This lends support to our suggestion that strong interactions may be the sole
mechanism needed to explain the presence and evolution of powerful AGN in clusters.
However, the observations do not rule out the validity of the cooling flow model
for at least some objects.
While the cooling flow model need not be applicable to the host clusters of the
low-z AGN Cygnus~A, 3C~206, and H~1821$+$643\ for it to explain the evolution of FR~II
AGN environments at z$\gtrsim$0.4, it should apply to the others.
In IRAS~09104$+$4109\ and possibly 3C~295, the central ICM densities may reach the values
required by the cooling flow model without need for an additional mechanism.
But in any case, the z$\gtrsim$0.4 AGN host galaxies show evidence for having
undergone strong interaction(s) capable of boosting the central densities
sufficiently high for the Compton-cooling feedback mechanism to occur.
(Average densities within the central $\sim$100~pc of up to 2900~cm$^{-3}$\ have been
inferred from CO observations of interacting or merging gas-rich galaxies (\cite{sco91}).)
Also, cooling flows may be preferentially located in more compact and
cooler clusters at z$\gtrsim$0.4, which would make their detection more
difficult in our data.
In summary,
we suggest that strong interactions with gas-containing galaxies may be
the only mechanism needed to explain the presence and evolution of powerful
AGN in clusters.
This suggestion is consistent with the far-IR and optical properties of the host
galaxies of the AGN discussed in this paper, despite the rarity of such encounters in the high-$\sigma_{\rm v}$, high ICM density cluster environments of some of those AGN.
However, the cooling flow model cannot be ruled out for at least some objects.
The data most needed to help determine the relative importance of
each process
are X-ray imaging, optical imaging, and $\sigma_{\rm v}$\ measurements of powerful AGN
host clusters.
The strong interaction scenario
predicts that
the host galaxies of all AGN in clusters should show signs of interaction,
and that the host clusters will rarely have high velocity dispersions
or high X-ray luminosities and ICM densities.
The cooling flow model predicts that FR~II AGN host clusters at high z
should have cooling flows, but not necessarily at low z.
\section{Conclusions} \label{finale}
We report a limit of 1.63~10$^{44}$~erg~s$^{-1}$ on the rest-frame 0.1-2.4~keV
X-ray luminosity of the host cluster of the RLQ 3C~206
(assuming r$_{\rm core}$=125~kpc and kT=2.5~keV)
and a detection of L$_{\rm X}$=3.74$\pm$0.57~10$^{44}$~erg~s$^{-1}$ for the host
cluster of the RQQ H~1821$+$643 (H$_{\rm o}$=50, q$_{\rm o}$=0.5 for both values).
CL~1821+643 is one of the most X-ray luminous clusters known,
overluminous for its optical richness (also the case for IRAS~09104$+$4109), and it has
a cooling flow of \.{M}$_{\rm cool}$=1120$\pm$440~h$_{50}^{-2}$~M$_{\sun}$~yr$^{-1}$.
Its existence complicates interpretation of X-ray spectra of this
field (Appendix \ref{yaq}). In particular, the observed
Fe~K$\alpha$ emission is probably solely due to the cluster,
and either the quasar is relatively X-ray quiet for its optical luminosity
or the cluster has a relatively low temperature for its luminosity.
We combine our data with the recent observation of X-ray emission from the
host cluster of the buried RQQ IRAS~09104$+$4109\ (\cite{fc95}), our previous upper limits
for two RLQs at z$\sim$0.7 (\cite{paper1}), and literature data on FR~II radio
galaxies.
We compare this dataset to the predictions of three simple models for the
presence and evolution of powerful AGN in clusters:
the cooling flow model, the low-ICM-density model, and the low-$\sigma_{\rm v}$\ model.
In the cooling flow model (\S\ref{cf}),
FR~II AGN host clusters at z$\gtrsim$0.4 have dense cooling flows.
Cooling flows have been detected in a few PRG and RQQ host
clusters (Cygnus~A, H~1821$+$643, IRAS~09104$+$4109, 3C~295).
However, three RLQ host clusters (PKS~2352$-$342, 3C~263, 3C~206)
have \.{M}$_{\rm cool}$$\lesssim$200~M$_{\sun}$/yr, unless cooling
flows are preferentially found in cooler, denser clusters at high z
or some mechanism besides the cooling flow has increased the central densities
in those clusters to create the high central pressures required by this model.
Strong interactions with gas-containing galaxies could be that mechanism.
Nevertheless, it is likely that the cooling flow model is not the {\it sole}
explanation for the presence and evolution of powerful AGN in clusters.
In the low-ICM-density model (\S\ref{icm}), FR~II AGN form in low-density ICM
clusters and are destroyed as the ICM density increases.
The three RLQs in our sample have host clusters consistent with this model, but
the two FR~II PRGs and the two RQQs have high-density host clusters overluminous
for their optical richnesses.
This is consistent with recent radio and X-ray studies of radio sources in
different environments at z$\sim$0.5 which show no evidence for dense ICM in the
majority of powerful AGN host clusters at that redshift
(\cite{rse95,hut96,wd96}), but not with radio work around z$\sim$1 PRGs which
infers gas densities and temperatures similar to nearby clusters (Wellman, Daly
\& Wan 1996a, 1996b; \cite{car97}), or X-ray work which detects extended emission with
L$_{\rm X}$$\sim$10$^{44}$~erg~s$^{-1}$ around z$>$1 radio galaxies
(\cite{sd95}, Crawford \& Fabian 1996a, 1996b).
These data show that the low-ICM-density model cannot be the only mechanism
behind the presence and evolution of powerful AGN in clusters.
Nonetheless, it is possible that most powerful AGN host clusters have low ICM
densities, and that the exceptions are either old AGN which originally formed
when the cluster ICM was less dense, or rare cases of strong interactions with
galaxies which retained some of their gas in high-ICM density environments.
In the low-$\sigma_{\rm v}$\ interaction/merger model (\S\ref{sigv}), FR~II AGN
are preferentially found in clusters with low velocity dispersions, where the
strong interactions which can create powerful AGN are more common.
Only a handful of $\sigma_{\rm v}$\ measurements for powerful AGN host clusters exist.
The measurements of CL~3C~206 and the composite quasar host cluster of EYG91
support this model, those of CL~3C~295 and CL~1821+643 do not, and
CL~CygA lacks an accurate $\sigma_{\rm v}$\ determination.
More data are needed to be definitive.
We suggest that strong interactions with gas-containing galaxies may be
the only mechanism needed to explain the presence and evolution of powerful
AGN in clusters.
The far-IR and optical properties of the host galaxies of the AGN discussed in
this paper are consistent with this strong interaction scenario (\S\ref{farir}),
despite the rarity of such encounters in the high-$\sigma_{\rm v}$, high ICM density cluster
environments of some of those AGN.
However, the cooling flow model cannot be ruled out for at least some objects.
The relative importance of strong interactions and cooling flows
can be determined by
testing the predictions of the models with future observations.
The cooling flow model predicts that FR~II AGN at z$\gtrsim$0.4 will be found
in dense cooling flow clusters and that if the cooling flows do not provide the
necessary high central pressures for the Compton-cooling feedback mechanism to
work, there should be evidence for an additional process which has increased the
pressure, such as a strong interaction with a gas-containing galaxy.
The strong interaction scenario predicts that the host galaxies of all AGN in
clusters should show signs of interaction,
and that the host clusters will rarely have high velocity dispersions
or high X-ray luminosities and ICM densities.
Unlike the cooling flow model, the strong interaction scenario has the advantage
that it is applicable to FR~II AGN in all environments, not just clusters.
To definitively rule out some of the models we have considered and to advance
our understanding of the relationships between powerful AGN and their host
clusters, the following data will be needed:
1) more X-ray observations of FR~II AGN in clusters, especially those for which
extended emission-line regions have been studied by Bremer et~al. (1992) and
others, to ascertain whether these AGN host clusters are more likely
to have cooling flows or low-density ICM ({\sl ROSAT} HRI data on 3C~215 and
3C~254 received after this paper was submitted do not show luminous cluster
X-ray emission);
2) accurate measurements of $\sigma_{\rm v}$\ (and B$_{\rm gq}$\ where necessary) for FR~II AGN host
clusters, to test the low-$\sigma_{\rm v}$\ model;
3) more detailed studies and modelling of the host galaxy properties of
AGN in clusters,
to rigorously test our strong interaction scenario for their origins;
4) optical and X-ray studies of the rare RQQs in rich clusters, to determine
the properties of their host clusters and why they are not radio-loud AGN;
5) searches for extended low-frequency emission from remnant radio lobes
around H~1821$+$643\ and IRAS~09104$+$4109, to test the idea that these RQQs were once RLQs.
\acknowledgements
We thank Julio Navarro and the referee for their helpful comments
and the {\sl ROSAT} AO6 TAC for their support of this project and for pointing out
the existence of the archival {\sl EINSTEIN} observation of 3C~206 to us.
This research has made use of
data obtained through the High Energy Astrophysics Science Archive Research
Center Online Service, provided by the NASA-Goddard Space Flight Center;
data from the NRAO VLA Sky Survey obtained through the
Astronomy Digital Image Library, a service of the
National Center for Supercomputing Applications;
data from operations made with the NASA/ESA Hubble Space Telescope,
obtained from the data archive at the Space Telescope Science Institute,
which is operated by the Association of Universities for Research in Astronomy,
Inc., under NASA contract NAS 5-26555;
and data from the IRAS archive at the Infrared Processing and Analysis Center
and the NASA/IPAC Extragalactic Database (NED), which are operated by the Jet
Propulsion Laboratory, California Institute of Technology, under contract
to NASA.
PBH acknowledges support from an NSF Graduate Fellowship
and from NASA funding for analysis of {\sl ROSAT} observations.
\section{Introduction}
Interplay between current and magnetization is an essential issue underpinning the field of spintronics~\cite{ref:spintronics}, which consists of two reciprocal problems: control of the current through a magnetization with a known configuration, and its converse, \emph{i.e.}, control of magnetization dynamics via an applied current. In ferromagnetic (FM) materials with a spin texture $\bm{m}(\bm{r},t)$ that varies slowly over space and time, these issues can be solved by assuming that conduction electron spins always follow the background texture profile, known as the adiabatic approximation~\cite{ref:BerryPhase, ref:Adiabaticity}. The microscopic basis underlying adiabaticity is the strong exchange coupling $H=-J\bm{\sigma}\cdot\bm{m}(\bm{r},t)$ between conduction electron spins and local magnetic moments, through which spin mistracking with the background incurs a large energy penalty and becomes highly unfavorable~\cite{ref:Shengyuan}.
Under the adiabatic approximation, the current--magnetization interaction is recast into an emergent electrodynamics, in which the reciprocal influence boils down to a simple electromagnetic problem. Specifically, by diagonalizing the local exchange Hamiltonian via a local unitary transformation, fictitious electric and magnetic fields emerge in the orbital dynamics~\cite{ref:Shengyuan, ref:Volovik, ref:SMF, ref:Karin}
\begin{align}
E_i&=\frac12\bm{m}\cdot(\partial_t\bm{m}\times\partial_i\bm{m})=\frac12\sin\theta(\partial_t\theta\partial_i\phi-\partial_i\theta\partial_t\phi), \notag \\
B_i&=-\frac14\varepsilon_{ijk}\bm{m}\cdot(\partial_j\bm{m}\times\partial_k\bm{m})=-\frac12\varepsilon_{ijk}\sin\theta\partial_j\theta\partial_k\phi, \notag
\end{align}
where $\theta(\bm{r},t)$ and $\phi(\bm{r},t)$ are spherical angles specifying the direction of $\bm{m}$. As a consequence, the influence of the background texture is represented by an effective Lorentz force $\bm{F}=s\hbar(\bm{E}+\dot{\bm{r}}\times\bm{B})$ exerted on conduction electrons, where $s=+1\ (-1)$ denotes the spin-up (-down) band. The electric and magnetic components of the Lorentz force are responsible for the spin motive force~\cite{ref:Shengyuan, ref:SMF} and the topological Hall effect,~\cite{ref:THE} respectively. In turn, the back-reaction of the Lorentz force provides an interpretation of the current-induced spin torque exerted on the magnetic texture.~\cite{ref:ZhangShoucheng, ref:Tserkovnyak, ref:STT} In formal language, adiabaticity induces an effective gauge interaction $\mathcal{L}_{\mathrm{int}}=j_\mu\mathcal{A}_\mu$, where the current $j_\mu$ acquires a gauge charge according to $s=\pm1$, and $\mathcal{A}_\mu=\mathcal{A}_\mu(\bm{m},\partial\bm{m})$ is the effective electromagnetic potential representing the space-time dependence of the texture. Variation with respect to the current, $\delta\mathcal{L}_{\mathrm{int}}/\delta j_\mu$, yields the effective Lorentz force, and variation with respect to the magnetization, $\delta\mathcal{L}_{\mathrm{int}}/\delta\bm{m}$, produces the spin-transfer torque. Thus the current--magnetization interaction is reciprocal.
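As a quick consistency check of the two equivalent expressions for $E_i$ above, the following sketch (a hypothetical smooth one-dimensional texture, not from the paper; plain Python with central finite differences) evaluates both forms at a sample space-time point:

```python
import math

# Hypothetical smooth texture: spherical angles as functions of (x, t).
def theta(x, t):
    return 0.8 + 0.3 * math.sin(x - 0.5 * t)

def phi(x, t):
    return 0.4 * x + 0.7 * t

def m(x, t):
    # unit vector m = (sin th cos ph, sin th sin ph, cos th)
    th, ph = theta(x, t), phi(x, t)
    return (math.sin(th) * math.cos(ph),
            math.sin(th) * math.sin(ph),
            math.cos(th))

def d(f, x, t, dx=0.0, dt=0.0, h=1e-5):
    # central finite difference along the (dx, dt) direction
    a, b = f(x + h * dx, t + h * dt), f(x - h * dx, t - h * dt)
    if isinstance(a, tuple):
        return tuple((p - q) / (2 * h) for p, q in zip(a, b))
    return (a - b) / (2 * h)

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

def E_vector_form(x, t):
    # E_x = (1/2) m . (dt m x dx m)
    return 0.5 * dot(m(x, t), cross(d(m, x, t, dt=1.0), d(m, x, t, dx=1.0)))

def E_angle_form(x, t):
    # E_x = (1/2) sin(theta) (dt theta dx phi - dx theta dt phi)
    return 0.5 * math.sin(theta(x, t)) * (
        d(theta, x, t, dt=1.0) * d(phi, x, t, dx=1.0)
        - d(theta, x, t, dx=1.0) * d(phi, x, t, dt=1.0))
```

The two forms agree at machine precision up to discretization error, as the identity $\bm{m}\cdot(\partial_t\bm{m}\times\partial_i\bm{m})=\sin\theta(\partial_t\theta\partial_i\phi-\partial_i\theta\partial_t\phi)$ requires.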
However, the above picture apparently fails in antiferromagnetic (AFM) materials, where neighboring magnetic moments are antiparallel: conduction electrons cannot adjust their spins to local moments that change orientation on the atomic scale. Nevertheless, the staggered order parameter $\bm{n}=(\bm{M}_A-\bm{M}_B)/2M_s$ can be slowly varying over space-time, where $\bm{M}_A$ and $\bm{M}_B$ are the alternating local moments and $M_s$ denotes their magnitude. A natural question is whether a slowly varying staggered order still renders the dynamics of conduction electrons adiabatic in some other sense. This knowledge is needed for studying spin transport in AFM materials, especially in the quest for a current--magnetization interaction analogous to that in FM materials.
In spite of recent theoretical~\cite{ref:AFMTheory} and experimental~\cite{ref:AFMExperiment} progress, this problem has never been addressed. Yet AFM materials are believed to be promising candidates for new thrusts of spintronics,~\cite{ref:AFMSpintronics} partly due to their tiny anisotropy, robustness against external magnetic perturbations, and absence of demagnetization, which bring clear advantages for experimental control. In this paper, we develop the effective electron dynamics in a smooth AFM texture by applying the non-Abelian Berry phase theory~\cite{ref:Dimi,ref:NABerryPhase,ref:Dalibard} to energy bands that are doubly degenerate. The physics of adiabaticity in AFM materials is found to be an internal dynamics between degenerate bands, which can be attributed to an SU(2) Berry curvature. When translated into spin dynamics, adiabaticity no longer implies spin alignment with the background but a totally new evolution principle [Eq.~\eqref{eq:ds}]. Aside from the spin dynamics, the orbital motion of conduction electrons is coupled to two different gauge fields: one leads to the non-Abelian generalization of the effective Lorentz force; the other results in an anomalous velocity that is truly new and unique to AFM systems. Through comparisons with FM materials, this paper provides a general framework for how a given AFM texture affects the dynamics of conduction electrons. The other side of the story, \emph{i.e.}, the back-reaction of the current on the AFM texture, will appear in a forthcoming publication.
The paper is organized as follows. In Sec.~II, the general formalism is presented where iso-spin is introduced. In Sec.~III, our central results~\eqref{eq:EOMtotal} are derived, followed by discussions on non-Abelian Berry phase and monopole, spin and orbital dynamics of conduction electrons, and comparisons of AFM electron dynamics with its FM counterparts. In Sec.~IV, two examples are provided, and the paper is summarized in Sec.~V. Mathematical derivations are included in the Appendixes.
\section{Formalism}
Consider an AFM system on a bipartite lattice with local magnetic moments labeled by alternating $\bm{M}_A$ and $\bm{M}_B$. The spin of a conduction electron couples to the local moments through the exchange interaction $J(\bm{M}/M_s)\cdot\bm{\sigma}$, where $\bm{\sigma}$ denotes the spin operator of the conduction electron and $\bm{M}$ flips sign between neighboring $A$ and $B$ sublattice sites. Although neighboring moments are antiparallel, the staggered order parameter $\bm{n}=(\bm{M}_A-\bm{M}_B)/2M_s$ usually varies slowly over space and time, and we can treat it as a continuous function $\bm{n}(\bm{r},t)$. Accordingly, the conduction electron is described by a nearest-neighbor tight-binding Hamiltonian locally defined around $\bm{n}(\bm{r},t)$:
\begin{equation}
\mathcal{H}(\bm{n}(\bm{r},t))=
\begin{bmatrix}
-J\bm{n}\cdot\bm{\sigma} \ \ & \gamma(\bm{k})\\
\gamma^*(\bm{k}) & J\bm{n}\cdot\bm{\sigma}
\end{bmatrix}
\end{equation}
where $\gamma(\bm{k})=-t\sum_{\bm{\delta}}e^{i\bm{k}\cdot\bm{\delta}}$ is the hopping term with $\bm{\delta}$ connecting nearest neighboring $A-B$ sites (we set $\hbar=1$). In general $J$ can be negative, but we assume a positive $J$ throughout this paper.
The local band structure is easily solved as $\pm\varepsilon(\bm{k})$ with $\varepsilon(\bm{k})=\sqrt{J^2+|\gamma(\bm{k})|^2}$, and in the adiabatic limit we neglect transitions between $\varepsilon$ and $-\varepsilon$. Each of the two bands is doubly degenerate, and without loss of generality we focus on the lower band $-\varepsilon$, with the two sub-bands labeled by $A$ and $B$ and wave functions $|\psi_a\rangle=e^{i\bm{k}\cdot\bm{r}}|u_a\rangle$ and $|\psi_b\rangle=e^{i\bm{k}\cdot\bm{r}}|u_b\rangle$. The Bloch waves $|u_a\rangle=|A(\bm{k})\rangle|\!\uparrow(\bm{r},t)\rangle$ and $|u_b\rangle=|B(\bm{k})\rangle|\!\downarrow(\bm{r},t)\rangle$ maintain local periodicity around $(\bm{r},t)$, where
\begin{align}
|\!\uparrow\!(\bm{r},t)\rangle=\!
\begin{bmatrix}
e^{-i\frac{\phi}2}\cos\frac{\theta}2\\
e^{i\frac{\phi}2}\sin\frac{\theta}2
\end{bmatrix};\
|\!\downarrow\!(\bm{r},t)\rangle=\!
\begin{bmatrix}
-e^{-i\frac{\phi}2}\sin\frac{\theta}2\\
e^{i\frac{\phi}2}\cos\frac{\theta}2
\end{bmatrix}
\label{eq:wavefunctions}
\end{align}
are local spin wave functions with $\theta=\theta(\bm{r},t)$ and $\phi=\phi(\bm{r},t)$ being spherical angles specifying the orientation of $\bm{n}(\bm{r},t)$. The periodic parts are spinors in the pseudo-spin space furnished by the $A-B$ sublattices,
\begin{subequations}
\begin{align}
|A(\bm{k})\rangle=\frac{[\varepsilon(\bm{k})+J,\ |\gamma(\bm{k})|]^\mathrm{T}}{\sqrt{(\varepsilon(\bm{k})+J)^2+|\gamma(\bm{k})|^2}}, \\ |B(\bm{k})\rangle=\frac{[\varepsilon(\bm{k})-J,\ |\gamma(\bm{k})|]^\mathrm{T}}{\sqrt{(\varepsilon(\bm{k})-J)^2+|\gamma(\bm{k})|^2}},
\end{align}
\end{subequations}
which exhibit opposite spatial patterns schematically illustrated in Fig.~\ref{Fig:SDW}. While $\langle \psi_a|\psi_b\rangle=0$ due to the orthogonality of local spin eigenstates, $\langle A(\bm{k})|B(\bm{k})\rangle$ does not vanish, and we define this overlap as
\begin{equation}
\xi(\bm{k})\!=\!\langle A(\bm{\bm{k}})|B(\bm{k})\rangle\!=\!\frac{|\gamma(\bm{k})|}{\sqrt{J^2+|\gamma(\bm{k})|^2}}\!=\!\frac{\sqrt{\varepsilon^2-J^2}}{\varepsilon}, \label{eq:overlap}
\end{equation}
which is a key parameter in our theory and satisfies $\xi<1$. It reaches its maximum at the Brillouin zone (BZ) center and vanishes at the BZ boundary. From Eq.~\eqref{eq:overlap} we see that $\xi(\bm{k})$ is a system parameter determined by the band structure; it is a constant of motion since energy conservation $\dot{\varepsilon}=0$ requires $\dot{\xi}=0$. If $J$ tends to infinity, the overlap $\xi(\bm{k})$ vanishes and the two sub-bands are effectively decoupled, so the system reduces to a simple combination of two independent FM subsystems.
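For a concrete feel for $\xi(\bm{k})$, the sketch below evaluates Eq.~\eqref{eq:overlap} for a hypothetical square-lattice realization, $\gamma(\bm{k})=-2t(\cos k_x+\cos k_y)$ (nearest neighbors $\bm{\delta}=(\pm1,0),(0,\pm1)$ in units of the lattice constant):

```python
import math

def gamma_abs(kx, ky, t=1.0):
    # |gamma(k)| for a hypothetical square lattice:
    # gamma(k) = -t * sum_delta exp(i k.delta) = -2t (cos kx + cos ky)
    return abs(-2.0 * t * (math.cos(kx) + math.cos(ky)))

def xi(kx, ky, J=1.0, t=1.0):
    # Eq. (overlap): xi = |gamma| / sqrt(J^2 + |gamma|^2)
    g = gamma_abs(kx, ky, t)
    return g / math.sqrt(J * J + g * g)
```

With $t=J=1$ this gives $\xi=4/\sqrt{17}\approx0.97$ at the BZ center and $\xi=0$ wherever $\cos k_x+\cos k_y=0$, consistent with $\xi<1$ everywhere.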
\begin{figure}[t]
\centering
\includegraphics[width= 0.92\linewidth]{SDW.eps}
\caption{A schematic view of Bloch waves in the lower band. Sub-band $A$ means a local spin-up electron has a larger probability on the $A$ sites and a smaller probability on the $B$ sites; sub-band $B$ means the opposite. The two are degenerate in energy, and their wave functions have a finite overlap depending on the ratio $J/\varepsilon$.}{\label{Fig:SDW}}
\end{figure}
We adopt the semiclassical approach to construct the effective electron dynamics,~\cite{ref:Dimi} in which an individual electron is described by a wave packet $|W\rangle=\int\mathrm{d}\bm{k}\,w(\bm{k}-\bm{k}_c)[c_a|\psi_a\rangle +c_b|\psi_b\rangle]$, where $\int\mathrm{d}\bm{k}\,\bm{k}|w(\bm{k}-\bm{k}_c)|^2=\bm{k}_c$ gives the center-of-mass momentum and $\langle W|\bm{r}|W\rangle=\bm{r}_c$ the center-of-mass position. The coefficients $c_a$ and $c_b$ reflect the relative contributions of the two sub-bands, with $|c_a|^2+|c_b|^2=1$. Since the band is now degenerate, the non-Abelian formalism~\cite{ref:Adiabaticity,ref:Dimi,ref:NABerryPhase,ref:Dalibard} must be invoked (see Appendix A), in which the dynamics between the $A$ and $B$ sub-bands introduces an internal degree of freedom represented by the isospin vector
\begin{align}
\bm{\mathcal{C}}&=\{c_1, c_2, c_3\} \notag\\
&=\{2\mathrm{Re}(c_ac_b^*), -2\mathrm{Im}(c_ac_b^*), |c_a|^2-|c_b|^2\}.
\end{align}
The electron dynamics is characterized by the equations of motion of the three variables $\bm{k}_c$, $\bm{r}_c$, and $\bm{\mathcal{C}}$, which can be obtained from variational principles with the effective Lagrangian $\mathcal{L}=\langle W|(i\frac{\partial}{\partial t}-\mathcal{H})|W\rangle$.~\cite{ref:Adiabaticity,ref:Dimi} The detailed derivations are presented in Appendix A; here we only write down the results,
\begin{subequations}
\label{eq:EOM}
\begin{align}
\dot{\bm{\mathcal{C}}}&=2\bm{\mathcal{C}}\times(\bm{\mathcal{A}}^r_\mu\dot{r}_\mu +\bm{\mathcal{A}}^k_\mu\dot{k}_\mu), \label{eq:EOMeta}\\
\dot{k}_\mu& = \partial^r_\mu\varepsilon +\bm{\mathcal{C}}\cdot[\bm{\Omega}^{rr}_{\mu\nu}\dot{r}_\nu+\bm{\Omega}^{rk}_{\mu\nu}\dot{k}_\nu], \quad \label{eq:EOMk}\\
\dot{r}_\mu& = -\partial^k_\mu\varepsilon -\bm{\mathcal{C}}\cdot[\bm{\Omega}^{kr}_{\mu\nu}\dot{r}_\nu+\bm{\Omega}^{kk}_{\mu\nu}\dot{k}_\nu], \label{eq:EOMr}
\end{align}
\end{subequations}
where $r_\mu=(t,\bm{r}_c)$, but $k_\mu=(0,\bm{k}_c)$ has no temporal component. In Eqs.~\eqref{eq:EOM} the $\cdot$ and $\times$ denote scalar and cross products in the isospin vector space. The Berry curvatures $\bm{\Omega}$ are obtained from the gauge potentials $\bm{\mathcal{A}}$ defined on the $A$ and $B$ sub-bands, for instance,
\begin{subequations}
\begin{align}
&[\bm{\mathcal{A}}^r_\mu\cdot\bm{\tau}]_{ij}=i\langle u_i|\partial^r_\mu|u_j\rangle, \label{eq:Berrypotential}\\
&\bm{\Omega}^{rr}_{\mu\nu}=\partial^r_\mu \bm{\mathcal{A}}^r_\nu-\partial^r_\nu\bm{\mathcal{A}}^r_\mu +2\bm{\mathcal{A}}^r_\mu\times\bm{\mathcal{A}}^r_\nu, \label{eq:rrcurv}
\end{align}
\end{subequations}
where $\bm{\tau}$ is a vector of Pauli matrices representing the isospin and $i,j$ run between $a,b$. Other components of the Berry curvatures are explained in Appendix A.
\section{Electron Dynamics}
Equipped with Eqs.~\eqref{eq:EOM}, we are now ready to derive the dynamics of an individual electron. Note that the isospin vector $\bm{\mathcal{C}}$ itself is not gauge invariant, in the sense that different choices of spin wave functions result in different $\bm{\mathcal{C}}$. Therefore, in deriving the electron dynamics we need to relate $\bm{\mathcal{C}}$ to the real spin defined by $\bm{s}=\langle W|\bm{\sigma}|W\rangle$ (in units of $\frac12$), which is a physical variable and fully gauge invariant. The detailed derivations are lengthy and are left for Appendix B; the final results are quite simple and elegant:
\begin{subequations}
\label{eq:EOMtotal}
\begin{align}
\dot{\bm{s}} &= (1-\xi^2)(\bm{s}\cdot\bm{n})\dot{\bm{n}}, \label{eq:ds}\\
\dot{\bm{k}} &= -\frac12\bm{n}\cdot(\nabla\bm{n}\times\dot{\bm{s}}), \label{eq:dk}\\
\dot{\bm{r}} &= -\partial_{\bm{k}}\varepsilon -\frac12(\bm{s}\times\bm{n})\cdot\dot{\bm{n}}\ \partial_{\bm{k}}\ln\xi,\qquad \label{eq:dr}
\end{align}
\end{subequations}
where $\dot{\bm{n}}=\partial_t\bm{n}+(\dot{\bm{r}}\cdot\nabla)\bm{n}$, and we have omitted the subscript $c$ of $\bm{r}_c$ and $\bm{k}_c$ for convenience in the following discussions. Equations~\eqref{eq:EOMtotal} are the fundamental equations of motion of a conduction electron in an AFM material with slowly varying texture, represented by the joint evolution of the three variables $(\bm{s}, \bm{k}, \bm{r})$. An essential feature distinguishing the AFM electron dynamics from its FM counterpart lies in Eq.~\eqref{eq:ds}, from which we see that the real spin $\bm{s}$ does not follow the background order parameter $\bm{n}$ in the adiabatic limit.
\emph{Spin dynamics}. The motion of $\bm{s}$ can be decomposed into a superposition of two motions: one strictly follows $\bm{n}$ (for stationary $\bm{\mathcal{C}}$) and the other represents mistracking with $\bm{n}$ (for dynamical $\bm{\mathcal{C}}$), where the latter originates from dynamics between the $A$ and $B$ sub-bands and is unique to AFM materials (see examples in Sec.~IV). It is worth emphasizing that the mistracking between $\bm{s}$ and $\bm{n}$ has nothing to do with any non-adiabatic process, but is entirely due to the non-Abelian nature of the problem. The overall spin evolution can be attributed to the accumulation of a SU(2) non-Abelian Berry phase $\mathcal{P}\exp[-i\int\bm{\mathcal{A}}^r_\mu\cdot\bm{\tau}\mathrm{d}r_\mu]$ along the electron trajectory~\cite{ref:NABerryPhase}. As compared with its U(1) counterpart in FM materials~\cite{ref:Adiabaticity,ref:Shengyuan,ref:Volovik,ref:SMF,ref:Karin,ref:THE,ref:ZhangShoucheng,ref:Tserkovnyak,ref:STT}, which can be regarded as the magnetic flux of a Dirac monopole located at the center of the sphere spanned by $\bm{m}$, the SU(2) Berry phase here can be related to a 't~Hooft-Polyakov monopole in the parameter space. Detailed discussions on the monopole can be found in Appendix C, and the key point here is that a recent proposal of artificial 't~Hooft-Polyakov monopole~\cite{ref:BPSmonopole} can be realized in our AFM texture systems.
\begin{figure}[t]
\centering
\includegraphics[width=0.95\linewidth]{Spin.eps}
\caption{(Color online) Left panel: the isospin vector $\bm{\mathcal{C}}$ (blue arrow) in the local frame: $\bm{\mathcal{C}}=c_1\bm{\theta}+c_2\bm{\phi}+c_3\bm{n}$, where $\bm{\theta}$ and $\bm{\phi}$ are spherical unit vectors. Right panel: In our particular gauge, $\bm{\mathcal{C}}$ (blue) is coplanar with $\bm{n}$ and $\bm{s}$ (red). The tip of $\bm{\mathcal{C}}$ moves on a unit sphere, whereas tip of $\bm{s}$ is constrained on the ellipsoid whose semi-major axis is $\bm{n}$ and semi-minor axis having length $\xi$. \label{Fig:spinfootball}}
\end{figure}
Since $|\!\uparrow\!(\bm{r},t)\rangle$ and $|\!\downarrow\!(\bm{r},t)\rangle$ form local spin bases with quantization axis being $\bm{n}(\bm{r},t)$, the isospin $\bm{\mathcal{C}}$ can be pictured as a vector in the local frame moving with $\bm{n}(\bm{r}, t)$. In Appendix B, we have shown that in the particular gauge marked by $\chi=0$ (see Eq.~\eqref{eq:etavssApp}),
\begin{align}
\bm{\mathcal{C}}=c_1\bm{\theta}+c_2\bm{\phi}+c_3\bm{n}=\frac1\xi(s_1\bm{\theta}+s_2\bm{\phi})+s_3\bm{n}, \label{eq:etavss}
\end{align}
where $\{\bm{\theta}, \bm{\phi}, \bm{n}\}$ form local bases associated with the local order parameter $\bm{n}(\bm{r},t)$ (Fig.~\ref{Fig:spinfootball}, left). Equation~\eqref{eq:etavss} indicates two important properties: (1) $\bm{\mathcal{C}}$ is coplanar with $\bm{n}$ and $\bm{s}$, which is specific to the particular gauge $\chi=0$; (2) while the isospin vector is constrained on the unit sphere $c_1^2+c_2^2+c_3^2=1$, the physical spin satisfies $\frac{s_1^2+s_2^2}{\xi^2}+s_3^2=1$, which constrains the tip of $\bm{s}$ on a prolate spheroid with semi-major axis along $\bm{n}(\bm{r},t)$ and semi-minor axes of length $\xi$ on its equator (Fig.~\ref{Fig:spinfootball}, right). The latter property is gauge independent and can be justified directly from Eq.~\eqref{eq:ds} without using Eq.~\eqref{eq:etavss} (see Appendix D); written in gauge-invariant form, it becomes
\begin{equation}
(\bm{s}\cdot\bm{n})^2+\frac{(\bm{s}\times\bm{n})^2}{\xi^2}=s_3^2+\frac{s_1^2+s_2^2}{\xi^2}=1. \label{eq:ellipsoid}
\end{equation}
For arbitrary gauges with $\chi\neq0$, it is easy to show that we always have $s_3=c_3$ and $s_1^2+s_2^2=\xi^2(c_1^2+c_2^2)$, but the angles between $s_{1,2}$ and $c_{1,2}$ will be different, \emph{i.e.}, $\bm{\mathcal{C}}$ will not be coplanar with $\bm{n}$ and $\bm{s}$.
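The invariance of the spheroid constraint under Eq.~\eqref{eq:ds} can also be checked numerically. The sketch below (hypothetical parameter values $\xi=0.6$, cone half-angle $\alpha=1$, frequency $\omega=1$; not from the paper) integrates Eq.~\eqref{eq:ds} with a fourth-order Runge-Kutta scheme for $\bm{n}(t)$ precessing on a cone, and monitors the left-hand side of Eq.~\eqref{eq:ellipsoid}:

```python
import math

XI = 0.6       # hypothetical overlap parameter xi
ALPHA = 1.0    # hypothetical cone half-angle of n(t)
OMEGA = 1.0    # hypothetical precession frequency of n(t)

def n(t):
    # staggered order parameter precessing on a cone around the z axis
    return (math.sin(ALPHA) * math.cos(OMEGA * t),
            math.sin(ALPHA) * math.sin(OMEGA * t),
            math.cos(ALPHA))

def ndot(t):
    return (-OMEGA * math.sin(ALPHA) * math.sin(OMEGA * t),
            OMEGA * math.sin(ALPHA) * math.cos(OMEGA * t),
            0.0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def rhs(t, s):
    # Eq. (ds): s' = (1 - xi^2)(s . n) n'
    c = (1.0 - XI * XI) * dot(s, n(t))
    return tuple(c * v for v in ndot(t))

def rk4_step(t, s, h):
    k1 = rhs(t, s)
    k2 = rhs(t + h / 2, tuple(a + h / 2 * b for a, b in zip(s, k1)))
    k3 = rhs(t + h / 2, tuple(a + h / 2 * b for a, b in zip(s, k2)))
    k4 = rhs(t + h, tuple(a + h * b for a, b in zip(s, k3)))
    return tuple(a + h / 6 * (p + 2 * q + 2 * r + w)
                 for a, p, q, r, w in zip(s, k1, k2, k3, k4))

def invariant(t, s):
    # left-hand side of Eq. (ellipsoid): (s.n)^2 + |s x n|^2 / xi^2,
    # using |s x n|^2 = |s|^2 - (s.n)^2 for unit n
    sn = dot(s, n(t))
    return sn * sn + (dot(s, s) - sn * sn) / (XI * XI)

def evolve(T=20.0, steps=4000):
    # integrate Eq. (ds) starting from s aligned with n(0)
    t, s, h = 0.0, n(0.0), T / steps
    for _ in range(steps):
        s = rk4_step(t, s, h)
        t += h
    return t, s
```

The invariant stays pinned to 1 along the trajectory, confirming that the tip of $\bm{s}$ remains on the moving spheroid even as $\bm{s}$ mistracks $\bm{n}$.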
Now the physical picture of adiabatic spin evolution is clear: as the background order parameter $\bm{n}(\bm{r},t)$ moves slowly in space-time, the prolate spheroid moves with it. The motion of the physical spin $\bm{s}$ is a superposition of the relative motion on the spheroid and the motion of the spheroid itself. The overall motion of $\bm{s}$ described by Eq.~\eqref{eq:ds} is purely geometrical, as $\mathrm{d}t$ can be eliminated on both sides; as a result, a given path of $\bm{n}$ uniquely determines a path of $\bm{s}$ on the spheroid, independent of the Hamiltonian. Associated with this geometric motion, an SU(2) Berry phase is accumulated by the electron wave function, which can be regarded as the (non-Abelian) gauge flux of a 't~Hooft-Polyakov monopole at the center of the unit sphere spanned by $\bm{n}$ (Table~\ref{Tab:comparison}).
A further remark: it may seem surprising that the magnitude of $\bm{s}$ varies on the spheroid since $\xi\le1$; how can the physical spin have a nonconstant magnitude? We answer this question by studying the \emph{reduced} density matrix for the spin degree of freedom. It is a $2\times2$ matrix and can be written as $\rho_s=\frac12(1+\bm{a}\cdot\bm{\sigma})$, so the expectation value of the physical spin is $\bm{s}=\mathrm{Tr}[\rho_s\bm{\sigma}]=\bm{a}$. Since $s^2\le1$, we have $a^2\le1$, and it follows that $\mathrm{Tr}\rho_s^2\le\mathrm{Tr}\rho_s$, which means the electron is \emph{effectively} in a mixed spin state. This can be attributed to the entanglement of the spin and sublattice degrees of freedom: specifically, because $s_3=c_3$, we are able to infer the spin projection along $\bm{n}$ by measuring the probability difference on neighboring $A$-$B$ sites (and vice versa). The entanglement provides us with partial information on the spin orientation from knowledge of the sublattice, which destroys full coherence of the spin state.
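For concreteness, the purity deficit can be written out in one line, using only $|\bm{a}|=s\le1$:
\begin{equation*}
\mathrm{Tr}\,\rho_s^2=\tfrac14\mathrm{Tr}\bigl[(1+\bm{a}\cdot\bm{\sigma})^2\bigr]=\tfrac12\bigl(1+a^2\bigr)\le1=\mathrm{Tr}\,\rho_s,
\end{equation*}
with equality only at the poles of the spheroid, where $\bm{s}=\pm\bm{n}$ and $a=1$.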
\emph{Orbital dynamics}. In correspondence with the novel spin dynamics, the orbital dynamics of an individual electron also becomes non-trivial. By substituting Eq.~\eqref{eq:ds} into~\eqref{eq:dk} we get (see also Appendix B),
\begin{align}
\dot{\bm{k}}&=(1-\xi^2)(\bm{s}\cdot\bm{n})(\bm{E}+\dot{\bm{r}}\times\bm{B}), \label{eq:SMF} \\
\bm{E}&=\frac12\sin\theta(\partial_t\theta\nabla\phi-\nabla\theta\partial_t\phi), \label{eq:E} \\ \bm{B}&=-\frac12\sin\theta(\nabla\theta\times\nabla\phi). \label{eq:B}
\end{align}
The $\bm{E}$ and $\bm{B}$ fields here are the same as their FM counterparts, where they are responsible for the spin motive force~\cite{ref:SMF} and the topological Hall effect,~\cite{ref:THE} respectively. Also, as in FM systems, it is easy to check that Eqs.~\eqref{eq:E} and \eqref{eq:B} satisfy Faraday's relation $\nabla\times\bm{E}+\frac{\partial\bm{B}}{\partial t}=0$.
However, quite different from the FM case, the gauge charge $\bm{s}\cdot\bm{n}$ in Eq.~\eqref{eq:SMF} is not just a constant but involves the internal dynamics. In other words, the orbital motion is accompanied by a time-dependent gauge charge, which has to be determined by solving the coupled equations~\eqref{eq:EOMtotal} all together. Moreover, the factor $\xi^2$ results from the non-commutative term $2\bm{\mathcal{A}}^r_\mu\times\bm{\mathcal{A}}^r_\nu$ in Eq.~\eqref{eq:rrcurv}; it also reflects the coupling between the spin and orbital dynamics. The parameter $\xi\in(0,1)$ plays a key role here: in the $\xi\rightarrow1$ limit, $1-\xi^2$ vanishes, and from Eqs.~\eqref{eq:ds} and~\eqref{eq:SMF} we get the null results $\dot{\bm{s}}=0$ and $\dot{\bm{k}}=0$. In the other limit, $\xi\rightarrow0$, the solution of Eq.~\eqref{eq:ds} reduces to $\bm{s}=\pm\bm{n}$ for the initial condition $\bm{s}(0)=\pm\bm{n}(0)$, and Eq.~\eqref{eq:SMF} reduces to the FM Lorentz force equation, whereby the system loses its manifest non-Abelian character and behaves as two decoupled FM subsystems. Note that in real AFM materials both the $A$ and $B$ sub-bands host majority carriers, but they are subject to effective Lorentz forces of opposite directions, which may lead to non-trivial spin transport.
Furthermore, the real-space dynamics governed by Eq.~\eqref{eq:dr} exhibits spin-orbit coupling through the anomalous velocity term $\frac12(\bm{s}\times\bm{n})\cdot\dot{\bm{n}}\ \partial_{\bm{k}}\ln\xi$. It is along the same direction as $\partial_{\bm{k}}\varepsilon$, so Eq.~\eqref{eq:dr} amounts to a modified group velocity. We stress that this term is unique to AFM textures and has nothing to do with the anomalous velocity studied in FM or quantum Hall systems~\cite{ref:Adiabaticity}. It originates from the $\bm{\Omega}^{kr}_{\mu\nu}$ curvature that couples real space with the BZ, the importance of which has been overlooked before. For comparison, we summarize the fundamental electron dynamics of FM and AFM textures in Table~\ref{Tab:comparison}.
\begin{table}[]
\begin{tabular}{l|l}
\hline\hline
FM spin texture & AFM spin texture \B\\
\hline
$\bm{s}=\bm{n}$ & $\ \dot{\bm{s}}=(1-\xi^2)(\bm{s}\cdot\bm{n})\dot{\bm{n}}$\\
U(1) Abelian Berry Phase: & SU(2) non-Abelian Berry Phase:\\
$\gamma(\Gamma)=\oint_{\Gamma}\mathcal{A}_\mu\mathrm{d}r_\mu$ & $U(\Gamma)=\mathcal{P}\exp[-i\oint_{\Gamma}\bm{\mathcal{A}}^r_\mu\cdot\bm{\tau}\mathrm{d}r_\mu]$\\
Dirac monopole & 't Hooft-Polyakov monopole\\
\hline
$\dot{\bm{k}}=\bm{E}+\dot{\bm{r}}\times\bm{B} \qquad\qquad\ \ \ $ & $\ \dot{\bm{k}}=(1-\xi^2)(\bm{s}\cdot\bm{n})(\bm{E}+\dot{\bm{r}}\times\bm{B})$\\
$\dot{\bm{r}}=-\partial_{\bm{k}}\varepsilon$ \B & $\ \dot{\bm{r}}=-\partial_{\bm{k}}\varepsilon -\frac12(\bm{s}\times\bm{n})\cdot\dot{\bm{n}}\ \partial_{\bm{k}}\ln\xi$\\
\hline\hline
\end{tabular}
\caption{Comparison of effective electron dynamics in FM and AFM textures. In the FM case, the spin dynamics is trivial; along a closed path $\Gamma$ the electron acquires a U(1) Berry phase, which is the magnetic flux of a Dirac monopole; a Lorentz force results in the orbital motion. In the AFM case, the spin dynamics is non-trivial due to the mixing of degenerate sub-bands through an SU(2) non-Abelian Berry phase, which is the gauge flux generated by a 't Hooft-Polyakov monopole; the orbital dynamics is subject to a spin-dependent Lorentz force and an anomalous velocity.}{\label{Tab:comparison}}
\end{table}
A final remark on the theory: in real materials with impurities, our fundamental equations~\eqref{eq:EOMtotal} are valid so long as the spin coherence length is at least as large as the typical width of the texture. While this condition is well satisfied in FM materials, its validity in AFM materials awaits experimental verification. At extremely low temperatures, spin-flip scattering is dominated by magnetic impurities, which can be made negligibly small in clean samples. Besides, spin-independent scattering processes (\emph{e.g.}, electron-phonon scattering) do not destroy our essential conclusions if $\dot{\bm{r}}$ is understood as the drift velocity of the carriers. We mention that AFM spintronics is an emerging field where very little is known. While it shares some similarities with the established field of FM spintronics, it is not always correct to copy ideas from FM systems; the adiabatic electron dynamics studied in this paper is one example.
\section{Examples}
First, consider a spiraling AFM texture sandwiched between two ferromagnetic layers (see Fig.~\ref{Fig:DW}). This magnetic structure has been realized in Co/FeMn/Py trilayers in a recent experiment~\cite{ref:SpiralingSpin}, where the FM order of the Co layer is nearly fixed but that of Py can be rotated by an external magnetic field. The AFM order is dragged into a spiral by the exchange bias effect at the AFM/FM interfaces. The layer thickness of FeMn is roughly $10\sim20\ \mathrm{nm}$ and can be made even larger, which far exceeds the lattice constant, so the adiabatic approximation is valid; meanwhile, the typical spin coherence length is larger than the layer thickness at low temperatures, so the spin evolution is governed by Eq.~\eqref{eq:ds}.
\begin{figure}[]
\centering
\includegraphics[width=0.88\linewidth]{Trilayer.eps}
\caption{(Color online) Left: FM/AFM/FM trilayer with opposite FM orientations on the two sides. The black double arrows represent the $A$-$B$ sublattices of the AFM layer, which is dragged into a spiraling texture by exchange bias at the interfaces. Right: incoming electrons enter only the $A$ sub-band due to the upper FM polarizer; the outgoing electrons partially occupy the $B$ sub-band depending on the value of $\xi$.}{\label{Fig:DW}}
\end{figure}
When an electron flows from top to bottom under an applied current, the top FM layer polarizes its spin so that it enters the $A$ sub-band across the interface. According to Eq.~\eqref{eq:ds}, the physical spin orientation of the electron after passing through the AFM layer is rotated by $\Pi=\pi-\arctan[\xi\tan\xi\pi]$ if $\xi<\frac12$, and by $\Pi=-\arctan[\xi\tan\xi\pi]$ if $\xi>\frac12$. This is a topological result that depends only on the initial and final directions of $\bm{n}$ but is \textit{independent} of the details of the texture profile. When $\xi\rightarrow0$, $\Pi$ reduces to $\pi$, which means the electron spin follows $\bm{n}$ and remains in the $A$ sub-band, so it flows into the bottom FM layer with a lower resistance; in the $\xi\rightarrow 1$ limit, $\Pi$ vanishes and the electron completely evolves into the $B$ sub-band, thus experiencing a higher resistance. For an arbitrary $\xi$ and an arbitrary total rotation of the spiral, denoted by $\Phi$, the electron partially evolves into the $B$ sub-band with the wave function $\cos(\xi\Phi/2)|\psi_a\rangle+ i\sin(\xi\Phi/2)|\psi_b\rangle$, so the total resistance is
\begin{equation}
\rho=\rho_0+\frac12\Delta\rho[1-\cos(\xi\Phi)],
\label{eq:MR}
\end{equation}
where $\rho_0$ is the intrinsic resistance of the AFM texture itself, which depends monotonically but only weakly on $\Phi$, and $\Delta\rho$ represents the magnetoresistance of the spin valve, which is determined by material details of the two FM layers and is independent of $\Phi$. If $\Phi$ is increased beyond $\pi$, $\rho$ reaches a maximum at $\Phi_m=\pi/\xi$ and then decreases. The resistance maximum, if observed, serves as an experimental verification of Eq.~\eqref{eq:ds}. Moreover, measuring $\Phi_m$ also enables us to determine $\xi$ without calculating the band structure.
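As a quick numerical illustration of Eq.~\eqref{eq:MR}, the following Python sketch scans the resistance over $\Phi$ and recovers $\xi$ from the position of the maximum at $\Phi_m=\pi/\xi$. The values of $\rho_0$, $\Delta\rho$, and $\xi$ are illustrative, not material parameters, and $\rho_0$ is treated as constant here although it depends weakly on $\Phi$.

```python
import math

# Sketch of Eq. (MR): rho = rho0 + (1/2) * drho * [1 - cos(xi * Phi)].
# rho0 and drho are illustrative placeholders, not material parameters.
def resistance(phi, xi, rho0=1.0, drho=0.2):
    return rho0 + 0.5 * drho * (1.0 - math.cos(xi * phi))

xi = 0.4
phis = [3.0 * math.pi * n / 2000 for n in range(2001)]
rhos = [resistance(p, xi) for p in phis]

# The first resistance maximum sits at Phi_m = pi/xi, so measuring Phi_m
# yields xi = pi/Phi_m without any band-structure calculation.
phi_m = phis[rhos.index(max(rhos))]
xi_inferred = math.pi / phi_m
```

On this grid the maximum is found at $\Phi_m\approx\pi/0.4\approx7.85$, so the inferred $\xi$ reproduces the input value to within the grid resolution.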
We remark that the above results survive in the presence of diffusive processes as long as spin-flip scattering is ignored. The reason is that spin-independent scattering only deflects the $\bm{k}$-space orbit, whereas the $\bm{s}$ dynamics is determined by the variation of $\bm{n}$, which is blind to $\bm{k}$ in one dimension. In addition, FeMn is a non-collinear antiferromagnet with more than two sub-lattices. To test our theory unambiguously, one can replace FeMn by the collinear IrMn, which is feasible with current techniques~\cite{ref:private}. Moreover, we are aware of an experiment~\cite{ref:Mnlayer} in which the spiraling AFM texture exhibits spatially \textit{periodic} patterns; this provides a better way of realizing large values of $\Phi$.
\begin{figure}[]
\centering
\includegraphics[height=0.23\textheight]{Rings.eps}
\caption{(Color online) Spin evolution for three different $\xi$'s when $\bm{n}(t)$ moves around a cone at a constant angle $\theta$ from the $z$ axis. Upper panels: the tip of $\bm{s}$ respects two constraints: it stays both on the cone's bottom plane (small gray slab) and on the spheroid described by Eq.~\eqref{eq:ellipsoid} (blue ellipsoid); thus the vector $\bm{s}$ is confined between two cones with different semiangles. Lower panels: orbits of the tip from a bird's-eye view. The topology of the orbits separates into two classes (left and right) by the critical case (middle), where the inner cone's semiangle shrinks to zero. The orbits are not necessarily commensurate with $\bm{n}$.} {\label{Fig:Rings}}
\end{figure}
Consider a second example in which $\bm{n}(t)$ varies around a cone of constant semiangle $\theta$ in the laboratory frame, which can be realized in a spin wave (see Fig.~\ref{Fig:Rings}). According to Eq.~\eqref{eq:ds}, $\mathrm{d}s_z=0$ because $\mathrm{d}n_z=0$; thus the tip of $\bm{s}$ must stay in the bottom plane of the cone. On the other hand, we learn from Eq.~\eqref{eq:ellipsoid} that the tip is constrained to the spheroid that moves with the instantaneous $\bm{n}(t)$. Therefore, the actual orbit traversed by the tip is contained in the intersection of the two constraints. Through some straightforward geometric analysis, we find that $\bm{s}$ is bounded between the $\bm{n}$ cone and an inner cone whose semiangle depends on $\xi$. Figure~\ref{Fig:Rings} depicts the actual orbits of $\bm{s}$ for three different $\xi$'s: they all exhibit precession and nutation, which can easily be read off from the bird's-eye view. Remarkably, the motion of $\bm{s}$ falls into two topologically distinct classes separated by the critical condition
\begin{align}
\xi_c^2=\frac{\cos^2\theta}{(1+\cos^2\theta)}.
\end{align}
In a real spin wave, $\theta$ is nearly zero, thus $\xi_c\approx1/\sqrt{2}$. For real materials, we expect $t\le J$, thus from Eq.~\eqref{eq:overlap} we know that for a partially filled band, $\xi$ is always smaller than $1/\sqrt{2}$. Therefore, the $\xi<\xi_c$ phase is more realistic.
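A one-line numerical check of the critical condition and its spin-wave limit; the cone angles below are illustrative.

```python
import math

# Critical coupling separating the two orbit topologies:
# xi_c^2 = cos^2(theta) / (1 + cos^2(theta)).
def xi_critical(theta):
    c2 = math.cos(theta) ** 2
    return math.sqrt(c2 / (1.0 + c2))

# Spin-wave limit theta -> 0 gives xi_c -> 1/sqrt(2) ~ 0.707;
# xi_c decreases monotonically as the cone opens up.
xi_c0 = xi_critical(0.0)
xi_c45 = xi_critical(math.pi / 4)
```

Since $\xi<1/\sqrt{2}$ for a partially filled band with $t\le J$, the computed $\xi_c\approx0.707$ confirms that the $\xi<\xi_c$ phase is the realistic one.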
\section{Conclusions}
In this paper, we find that a slowly varying AFM texture gives rise to adiabatic dynamics of the conduction electrons, described by three coupled equations of motion [Eqs.~\eqref{eq:EOMtotal}]. Quite differently from the FM case, adiabaticity in AFM materials does not imply strict alignment between the conduction electron spins and the profile of the background texture. Instead, the adiabatic spin evolution is a superposition of a motion following the background order and a motion on a prolate spheroid attached to the local order, where the latter originates from the internal dynamics between degenerate bands. The overall motion of the spin is still geometric; it can be attributed to the accumulation of an SU(2) non-Abelian Berry phase originating from the gauge flux of an effective 't~Hooft-Polyakov monopole in the parameter space.
The corresponding orbital dynamics shares some similarities with FM materials in that the $\bm{k}$-space dynamics can be described by an effective Lorentz force equation. However, two prominent differences in the orbital dynamics distinguish an AFM system from its FM counterpart: first, the gauge charge is dynamical rather than constant, by which spin and orbital motions no longer separate; second, the group velocity is renormalized by a spin-dependent anomalous velocity, which is quite different from what has been studied before.
Our theory lays the foundation for charge and spin transport in AFM texture systems, and will be applied to real materials in the future. The validity of our theory needs to be tested experimentally, since available data on AFM spintronic materials are scarce. Theoretically, this paper solves only the first half of the whole story; the other half, \emph{i.e.}, the converse effect regarding the back-reaction of the current on the background AFM order, will appear in a forthcoming paper.
\section{Acknowledgements}
We are grateful to Dr. Yizhuang You and Prof. Biao Wu for numerous helpful discussions on detailed calculations, and to Dr. Karin Everschor for insightful comments. We thank Prof. Maxim Tsoi and Prof. Jianwang Cai for discussions on possible experimental observations. We also thank J.~Zhou, S.~A.~Yang, A.~H.~MacDonald, X.~Li, Y.~Gao, and D.~Xiao for comments from diverse perspectives. This work is supported by DOE-DMSE, NBRPC, NSFC, and the Welch Foundation.
| {'timestamp': '2012-12-21T02:00:48', 'yymm': '1211', 'arxiv_id': '1211.0782', 'language': 'en', 'url': 'https://arxiv.org/abs/1211.0782'} |
\section*{ Acknowledgement}
{The paper is supported by RFBR
grants 11-02-00242 and 12-02-00613;
the work of A.R.
is supported by Jefferson Science Associates, LLC under U.S. DOE Contract No. DE-AC05-06OR23177.
}
| {'timestamp': '2012-11-07T02:01:27', 'yymm': '1211', 'arxiv_id': '1211.1133', 'language': 'en', 'url': 'https://arxiv.org/abs/1211.1133'} |
\section{Introduction}
\hspace{1.0cm}As recently pointed out by Gamboa et al \cite{1}, the cornerstone of cosmology is the cosmological principle, which rests on the fact that
spacetime is Lorentz invariant. Nevertheless, physical problems such as primordial magnetic fields (PMF)
\cite{2}, matter-antimatter asymmetry, and dark energy (matter) lead us to rethink the Lorentz-invariant Einsteinian relativity dogma. One way to address this issue
has recently been proposed by Kostelecky et al \cite{3}, who consider that LV could be associated with an alternative
gravity theory called torsion theory \cite{4}. An important point to stress is that here, as in most string-inspired
Kalb-Ramond theories, torsion propagates in vacuum instead of being treated as a contact interaction as in Einstein-Cartan gravitation
\cite{5}.
In this paper we shall be concerned with some examples where LV is not only present in spacetime but can also be used to place
limits on spacetime torsion through its manifestations in dynamo effects, GRBs, and the cosmic microwave background (CMB),
with an interesting analogy to electromagnetic (EM) waves in magnetised plasmas \cite{6}. Faraday rotation, which is so
important in measuring magnetic fields \cite{7}, is used to place constraints on the LV torsion vector. An important point is that here one adopts a perturbative approach to quantum electrodynamics
(QED) effects, and not the non-perturbative one used by Enqvist et al \cite{8}. The idea of using this EM analogy for GRB
ultraviolet LV was used by Kahniashvili et al \cite{9} in Minkowski spacetime without torsion. One of the main differences
between their results and ours is that here a torsion wave frequency is added to the EM wave frequency, and the two approaches coincide only in the very-low-frequency
limit of the torsion waves. In this first paper one takes into account this low-frequency limit and leaves the high-frequency torsion-wave limit
to future work. One also considers here a linearised approach to dispersion and rotation measures. The paper is
organised as follows: in Section II the de Sabbata-Gasperini formulation of Riemann-Cartan (RC) Maxwell
vacuum electrodynamics, with photon-torsion semi-minimal coupling, is reviewed. In Section III the analogy with EM waves in magnetised plasma is considered and
the CMB torsion limit is placed in the low-frequency torsion-wave limit. In Section IV galactic magnetic dynamo seeds are used to place limits on LV through
torsion modes. In Section V conclusions and discussions are presented.
\section{Perturbative QED in Minkowski torsioned spacetime}
Throughout the paper, second-order effects in torsion
shall be neglected in the electrodynamics, including the LV terms due to the three-dimensional torsion vector $\textbf{Q}$. In this section we consider a
simple cosmological application concerning electrodynamics in a vacuum QED
spacetime background. The perturbative approach to electrodynamics leads to the following set of equations
\begin{equation}
{\partial}_{i}F^{ji}=4{\pi}J^{j}+\frac{2{\alpha}}{3{\pi}}{\epsilon}^{jklm}Q_{l}{F_{kj}}
\label{1}
\end{equation}
\begin{equation}
{\partial}_{[i}F_{jk]}=0
\label{2}
\end{equation}
where ${\partial}_{i}$ is the partial derivative, $\alpha$ is the electromagnetic fine-structure constant, and $F^{ij}={\partial}^{i}A^{j}-{\partial}^{j}A^{i}$
is the electromagnetic field tensor non-minimally coupled to torsion gravity. Here $A^{i}$ is the electromagnetic
vector potential, $(i,k=0,1,2,3)$, and $Q_{l}$ represents the torsion four-vector. In three-dimensional notation the above generalised Maxwell equations read
\begin{equation}
{\nabla}.{\textbf{E}}=4{\pi}{\rho}+\frac{4{\alpha}}{3{\pi}}\textbf{Q}.\textbf{B}
\label{3}
\end{equation}
\begin{equation}
{\nabla}.\textbf{B}=0
\label{4}
\end{equation}
\begin{equation}
{\nabla}{\times}\textbf{E}= -\frac{{\partial}{\textbf{B}}}{{\partial}t}
\label{5}
\end{equation}
\begin{equation}
{\nabla}\times{\textbf{B}}=\frac{4{\alpha}}{3{\pi}}\textbf{E}{\times}\textbf{Q}+\frac{{\partial}\textbf{E}}{{\partial}t}
\label{6}
\end{equation}
After some algebraic manipulation on these generalised Maxwell equations one obtains the EM wave equations
\begin{equation}
{\nabla}^{2}\textbf{E} -\frac{{\partial}^{2}{\textbf{E}}}{{\partial}t^{2}}+
\frac{4{\alpha}}{3{\pi}}[\textbf{Q}{\times}\frac{{\partial}\textbf{E}}{{\partial}t}-\textbf{E}{\times}\frac{{\partial}\textbf{Q}}{{\partial}t}]
=0\label{7}
\end{equation}
\begin{equation}
{\nabla}^{2}\textbf{B} -\frac{{\partial}^{2}{\textbf{B}}}{{\partial}t^{2}}-
\frac{16{\alpha}}{3}{\rho}\textbf{Q}-(\frac{4{\alpha}}{3{\pi}})^{2}\textbf{Q}(\textbf{Q}.\textbf{B})=0
\label{8}
\end{equation}
By Fourier analyzing the first expression, i.e., substituting ${\partial}_{t}\rightarrow{i{\omega}}$ and
${\nabla}\rightarrow{-ik}$, one obtains from expression (\ref{7}) the following expressions
\begin{equation}
[({\omega}^{2}-k^{2}){\delta}_{ab}-\frac{4{\alpha}}{3{\pi}}i({\omega}_{1}+{\omega}){\epsilon}_{acb}Q^{c}]E^{b}=0
\label{9}
\end{equation}
\begin{equation}
ik_{a}E_{a}=\frac{4{\alpha}}{3{\pi}}Q_{c}E_{c}
\label{10}
\end{equation}
where $(a,b=1,2,3)$, ${\omega}$ is the EM wave frequency, and ${\omega}_{1}$ is the torsion wave frequency. Here we have also chosen the charge density
${\rho}=0$, since we are adopting vacuum QED. The dispersion relation is given by
\begin{equation}
{\omega}^{2}\mp({\omega}_{1}+{\omega})kQ-k^{2}[1\mp{\gamma}]=0
\label{11}
\end{equation}
where ${\gamma}(k)$ is the photon-spin-sign-dependent term on the LHS of equation (\ref{11}), which accounts for the
phenomenological LV of an energy-dependent photon speed. Now, by considering the analogy to EM waves in a magnetised plasma with an index
of refraction $n=\frac{k}{\omega}$, one obtains
\begin{equation}
n_{L,R}=({\epsilon}_{1}\pm{\epsilon}_{2})^{\frac{1}{2}}
\label{12}
\end{equation}
where ${\epsilon}$ is the electric permittivity. From the dispersion relation above one obtains
the permittivities
\begin{equation}
{\epsilon}_{1}=\frac{1}{(1\pm{\gamma}(k))}
\label{13}
\end{equation}
\begin{equation}
{\epsilon}_{2}=-\frac{({\omega}_{1}+{\omega})Q}{(1\pm{\gamma}(k))}\approx{({\omega}+{\omega}_{1})}Q
\label{14}
\end{equation}
In the approximation of low torsion frequency, ${\omega}_{1}\ll{\omega}$, this term can be dropped in the last expression and the torsion
vector reduces to the $\textbf{g}$ LV vector used by Kahniashvili et al. From these expressions one obtains the refraction index
\begin{equation}
{n}_{L,R}=(1\pm{\omega}Q\pm{\gamma}(k))^{\frac{1}{2}}
\label{15}
\end{equation}
By making the approximation ${\gamma}\ll Q{\omega}$ the refractive index reduces to
\begin{equation}
{n}_{L,R}\approx{(1\pm{\omega}Q)^{\frac{1}{2}}}=\frac{k}{\omega}
\label{16}
\end{equation}
Now the dispersion measure and rotation measure (RM) of the GRBs depend on the photon travel distance ${\Delta}l$ and are expressed as
\begin{equation}
{\Delta}t_{L,R}={\Delta}l(1-\frac{{\partial}k_{L,R}}{{\partial}{\omega}})
\label{17}
\end{equation}
\begin{equation}
{\Delta}{\phi}=\frac{1}{2}(k_{L}-k_{R}){\Delta}l
\label{18}
\end{equation}
where ${\phi}$ is the polarization plane rotation of the electric field describing the Faraday rotation.
These expressions can be written in terms of torsion by
\begin{equation}
{\Delta}t_{L,R}=\pm{\Delta}l{\omega}Q
\label{19}
\end{equation}
\begin{equation}
{\Delta}{\phi}\approx{\frac{1}{2}{\omega}^{2}Q{\Delta}l}
\label{20}
\end{equation}
Therefore, when the photon spin is damped by the torsion wave, a Faraday rotation of
${\Delta}{\phi}\approx{10^{-2}\ \mathrm{rad}}$ allows one to estimate the torsion as
\begin{equation}
Q_{CMB}\approx{{10}^{-18}GeV}
\label{21}
\end{equation}
thus establishing new limits on LV from torsion, distinct from those of Kostelecky et al.
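The order of magnitude of this bound can be sketched by inverting expression (\ref{20}), $Q\approx 2{\Delta}{\phi}/({\omega}^{2}{\Delta}l)$, in natural units. The photon energy and travel distance in the snippet below are illustrative CMB-scale round numbers chosen by us, not values quoted in the text.

```python
# Inverting Eq. (20): Q ~ 2*dphi / (omega^2 * dl), in natural units
# (hbar = c = 1), so Q carries units of GeV when omega is in GeV and
# dl in GeV^-1.
def torsion_from_faraday(dphi, omega_gev, dl_inv_gev):
    """Torsion magnitude Q in GeV from a measured rotation dphi (rad)."""
    return 2.0 * dphi / (omega_gev ** 2 * dl_inv_gev)

dphi = 1e-2    # rad, the Faraday rotation quoted in the text
omega = 1e-13  # GeV, a CMB-scale photon energy (our assumption)
dl = 1e42      # GeV^-1, a cosmological path length (our assumption)
Q = torsion_from_faraday(dphi, omega, dl)
```

With these assumed inputs $Q$ comes out at the $10^{-18}\ \mathrm{GeV}$ order quoted in expression (\ref{21}).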
\section{Galactic dynamo seeds constraints to LV in spacetime with torsion}
Recently, a more complicated approach to placing constraints on LV from galactic dynamo magnetic seed fields appeared in
the literature \cite{3,4}. Here, following the perturbative method above and using the magnetic field equation, one obtains a much simpler
and more straightforward way of placing limits on LV from torsion and galactic dynamo seeds. Performing the Fourier transform of the
magnetic field equation yields
\begin{equation}
[({\omega}^{2}-k^{2}){\delta}_{ab}+\frac{16{\alpha}^{2}}{9{\pi}^{2}}Q_{a}Q_{b}]B^{b}=0
\label{22}
\end{equation}
which yields the following dispersion relation
\begin{equation}
{\omega}^{2}= k^{2}+\frac{16{\alpha}^{2}}{9{\pi}^{2}}Q^{2}
\label{23}
\end{equation}
Actually, ${\omega}$ coincides with the dynamo growth rate ${\gamma}_{0}$ under the ansatz
\begin{equation}
B(t)= B_{0}e^{{\gamma}t}
\label{24}
\end{equation}
From the dispersion relation one may conclude that, in order for torsion to contribute to dynamo action, it must
be comparable with the large-scale coherence, which is the inverse of the wave vector $k$, according to
\begin{equation}
k^{2}\approx{\frac{16{\alpha}^{2}}{9{\pi}^{2}}Q^{2}}
\label{25}
\end{equation}
For today's coherence scales, torsion would be extremely weak, of the order of $Q\approx{10^{-21}\ cm^{-1}}$.
This is exactly the estimate obtained by Laemmerzahl \cite{11} on the basis of the Earth-laboratory Hughes-Drever experiment. It is interesting to note that Kostelecky et al
have also obtained LV bounds from table-top experiments in Earth laboratories.
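The weak-torsion estimate above follows from expression (\ref{25}), $k\approx(4{\alpha}/3{\pi})Q$ with $k$ the inverse coherence length. In the sketch below the coherence length is an illustrative assumption of order $0.1$ Mpc, chosen by us rather than quoted in the text.

```python
import math

ALPHA = 1.0 / 137.036  # fine-structure constant

# From Eq. (25): k ~ (4*alpha/(3*pi)) * Q with k = 1/L the inverse
# coherence length, hence Q ~ 3*pi / (4*alpha*L).
def torsion_from_coherence(coherence_cm):
    return 3.0 * math.pi / (4.0 * ALPHA * coherence_cm)

L = 3.0e23                     # cm, ~0.1 Mpc (illustrative assumption)
Q = torsion_from_coherence(L)  # ~1e-21 cm^-1, the order quoted above
```

With this assumed coherence length, $Q$ lands at the $10^{-21}\ cm^{-1}$ order of the Laemmerzahl estimate.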
\section{Discussion and conclusions}
The investigation of Faraday rotation has proved very important in the high-energy astronomy of magnetic fields in the
universe. Here one uses Faraday rotation to establish limits on LV in terms of torsion, as established by Kostelecky
et al with torsion taken as a constant vector. Here, in contrast, the torsion vector is not constant, though LV is still attainable. The methods used here were previously
investigated by Kahniashvili et al in the context of GRBs in torsionless Minkowski spacetime.
The dynamo plasma limit is obtained from the dispersion relation, where torsion can be expressed in terms of the coherence
scale of magnetic fields. Quantum effects are obtained here from perturbative QED, instead of the non-perturbative primordial magnetic fields obtained by Enqvist.
\section*{Acknowledgements}
I would like to express my gratitude to Tina Kahniashvili and Rodion Stepanov for helpful discussions on the
subject of this paper. I also thank Tanmay Vachaspati for his kind invitation to the Primordial Magnetic Fields 2011 meeting held at Arizona State
University last April. Thanks are due to F.~Hehl for useful correspondence on torsion waves. I also benefited from discussions with participants such as Bharat Ratra. Financial support from CNPq and the University of the State of Rio de Janeiro (UERJ) is gratefully acknowledged.
| {'timestamp': '2011-05-10T02:03:32', 'yymm': '1105', 'arxiv_id': '1105.1573', 'language': 'en', 'url': 'https://arxiv.org/abs/1105.1573'} |
\section{Introduction}
\label{sec:intro}
The current success of deep learning models is showing how modern artificial intelligence systems can manage supervised machine learning tasks with growing accuracy. However, when the level of supervision decreases, all the limitations of the existing data-hungry approaches become evident. For many applications, large amounts of supervised data are not readily available; moreover, collecting and manually annotating such data may be difficult or very costly. Different sub-fields of computer vision, such as \emph{domain adaptation} \cite{csurka} and \emph{self-supervised learning} \cite{doersch2015unsupervised}, aim at designing new learning solutions to compensate for this lack of supervision. Domain adaptation focuses on leveraging a fully supervised data-rich source domain to learn a classification model that performs well on a different but related unlabeled target domain.
Traditional domain adaptation methods assume that the target contains exactly the same set of labels of the source (\emph{closed-set} scenario). In recent years, this constraint has been relaxed in favor of the more realistic \emph{open-set} scenario where the target also contains samples drawn from unknown classes. In this case, it becomes important to identify and isolate the unknown class samples before reducing the domain shift to avoid negative transfer. Self-supervised learning focuses on training models on pretext tasks, such as image colorization or rotation prediction, using unlabeled data to then transfer the acquired high-level knowledge to new tasks with scarce supervision.
Recent literature has highlighted how self-supervision can be used for domain adaptation: jointly solving a pretext self-supervised task together with the main supervised problem leads to learning robust cross-domain features and supports generalization \cite{xu2019self-supervised,Carlucci_2019_CVPR}. Other works have also shown that the output of self-supervised models can be used in anomaly detection to discriminate normal and anomalous data \cite{golan2018deep,Bergman2020Classification-Based}. However, these works only tackle binary problems (normal and anomalous class) and deal with a single domain.
In this paper, we propose for the first time to use the inherent properties of self-supervision both for cross-domain robustness and for novelty detection to solve \emph{Open-Set Domain Adaptation} (OSDA). To this purpose, we propose a two-stage method called \emph{Rotation-based Open Set} (ROS) that is illustrated in Figure \ref{fig:scheme}. In the first stage, we separate the known and unknown target samples by training the model on a modified version of the rotation task that consists in predicting the relative rotation between a reference image and the rotated counterpart. In the second stage, we reduce the domain shift between the source domain and the known target domain using, once again, the rotation task. Finally we obtain a classifier that predicts each target sample as either belonging to one of the known classes or rejects it as unknown.
While evaluating ROS on the two popular benchmarks \emph{Office-31}~\cite{saenko2010adapting} and \emph{Office-Home}~\cite{venkateswara2017deep}, we expose the reproducibility problem of existing OSDA approaches and assess them with a new evaluation metric that better represents the performance of open set methods.
\noindent \textbf{We can summarize the contributions of our work as follows}:
\begin{enumerate}
\item we introduce a novel OSDA method that exploits rotation recognition to tackle both known/unknown target separation and domain alignment;
\item we define a new OSDA metric that properly accounts for both known class recognition and unknown rejection;
\item we present an extensive experimental benchmark against existing OSDA methods with two conclusions: (a) we put under the spotlight the urgent need of a rigorous experimental validation to guarantee result reproducibility; (b) our ROS defines the new state-of-the-art on two benchmark datasets.
\end{enumerate}
\noindent
A Pytorch implementation of our method, together with instructions to replicate our experiments, is available at \url{https://github.com/silvia1993/ROS} .
\begin{figure*}[t!]
\centering
\includegraphics[width=\linewidth]{img/teaser.pdf}
\caption{Schematic illustration of our Rotation-based Open Set (\textbf{ROS}). Stage I: the source dataset $\mathcal{D}_s$ is used to train the encoder $E$, the semantic classifier $C_1$, and the multi-rotation classifier $R_1$ to perform known/unknown separation. $C_1$ is trained using the features of the original image, while $R_1$ is trained using the concatenated features of the original and rotated image. After convergence, the prediction of $R_1$ on the target dataset $\mathcal{D}_t$ is used to generate a normality score that defines how the target samples are split into a known target dataset $\mathcal{D}_t^{knw}$ and an unknown target dataset $\mathcal{D}_t^{unk}$. Stage II: $E$, the semantic+unknown classifier $C_2$ and the rotation classifier $R_2$ are trained to align the source and target distributions and to recognize the known classes while rejecting the unknowns. $C_2$ is trained using the original images from $\mathcal{D}_s$ and $\mathcal{D}_t^{unk}$, while $R_2$ is trained using the concatenated features of the original and rotated known target samples.}
\label{fig:scheme}
\end{figure*}
\section{Related Work}
\label{sec:related}
\textbf{Self-supervised learning} applies the techniques of supervised learning on problems where external supervision is not available. The idea is to manipulate the data to generate the supervision for an artificial task that is helpful to learn useful feature representations. Examples of self-supervised tasks in computer vision include predicting the relative position of image patches~\cite{doersch2015unsupervised,noroozi2016}, colorizing a gray-scale image~\cite{zhang2016colorful,Larsson_2017_CVPR}, and inpainting a removed patch~\cite{pathakCVPR16context}. Arguably, one of the most effective self-supervised tasks is rotation recognition~\cite{gidaris2018unsupervised} that consists in rotating the input images by multiples of $90^{\circ}$ and training the network to predict the rotation angle of each image. This pretext task has been successfully used in a variety of applications including anomaly detection~\cite{golan2018deep} and closed-set domain adaptation~\cite{xu2019self-supervised}.
\textbf{Anomaly detection}, also known as outlier or novelty detection, aims at learning a model from a set of \emph{normal} samples to be able to detect out-of-distribution (\emph{anomalous}) instances. The research literature in this area is wide with three main kind of approaches. \emph{Distribution-based} methods~\cite{zimek2012survey,rKDE,deepanomaly,zong2018deep} model the distribution of the available normal data so that the anomalous samples can be recognized as those with a low likelihood under the learned probability function. \emph{Reconstruction-based} methods~\cite{Eskin2002,rPCA,Xia_2015_ICCV,autoenc,schlegl2017unsupervised} learn to reconstruct the normal samples from an embedding or a set of basis functions. Anomalous data are then recognized by having a larger reconstruction error with respect to normal samples. \emph{Discriminative} methods~\cite{OSVM,ruff18a,HendrycksG17,liang2018enhancing} train a classifier on the normal data and use its predictions to distinguish between normal and anomalous samples.
\textbf{Closed-set domain adaptation} (CSDA) accounts for the difference between source and target data by considering them as drawn from two different marginal distributions. The literature of DA can be divided into three groups based on the strategy used to reduce the domain shift. \emph{Discrepancy-based} methods~\cite{Long:2015,sun2016return,Xu_2019_ICCV} define a metric to measure the distance between source and target data in feature space. This metric is minimized while training the network to reduce the domain shift. \emph{Adversarial} methods~\cite{ganin2016domain,tzeng2017adversarial,russo2018from} aim at training a domain discriminator and a generator network in an adversarial fashion so that the generator converges to a solution that makes the source and target data indistinguishable for the domain discriminator. \emph{Self-supervised} methods~\cite{ghifary2016deep,bousmalis2016domain,Carlucci_2019_CVPR} train a network to solve an auxiliary self-supervised task on the target (and source) data, in addition to the main task, to learn robust cross-domain representations.
\textbf{Open Set Domain Adaptation} (OSDA) is a more realistic version of CSDA, where the source and target distribution do not contain the same categories. The term ``OSDA" was first introduced by Busto and Gall~\cite{Busto_2017_ICCV} that considered the setting where each domain contains, in addition to the shared categories, a set of private categories. The currently accepted definition of OSDA was introduced by Saito~\emph{et al}\onedot~\cite{saito2018open} that considered the target as containing all the source categories and additional set of private categories that should be considered \emph{unknown}. To date, only a handful of papers tackled this problem. \emph{Open Set Back-Propagation}~(OSBP)~\cite{saito2018open} is an adversarial method that consists in training a classifier to obtain a large boundary between source and target samples whereas the feature generator is trained to make the target samples far from the boundary. \emph{Separate To Adapt}~(STA)~\cite{liu2019separate} is an approach based on two stages. First, a multi-binary classifier trained on the source is used to estimate the similarity of target samples to the source. Then, target data with extreme high and low similarity are re-used to separate known and unknown classes while the features across domains are aligned through adversarial adaptation. \emph{Attract or Distract}~(AoD)~\cite{feng2019attract} starts with a mild alignment with a procedure similar to~\cite{saito2018open} and refines the decision by using metric learning to reduce the intra-class distance in known classes and push the unknown class away from the known classes. \emph{Universal Adaptation Network}~(UAN)\footnote{UAN is originally proposed for the universal domain adaptation setting that is a superset of OSDA, so it can also be used in the context of this paper.}~\cite{you2019universal} uses a pair of domain discriminators to both generate a sample-level transferability weight and to promote
the adaptation in the automatically discovered common label set. Differently from all existing OSDA methods, \textbf{our approach abandons adversarial training in favor of self-supervision. Indeed, we show that rotation recognition can be used, with tailored adjustments, both to separate known and unknown target samples and to align the known source and target distributions}\footnote{See Appendix E for a discussion on the use of other self-supervised tasks.}.
\section{Method}
\label{sec:method}
\subsection{Problem formulation}
\label{subsec:formulation}
Let us denote with $\mathcal{D}_s=\{(\boldsymbol{x}_j^s,y_j^s)\}_{j=1}^{N_s} \sim p_s$ the labeled source dataset drawn from distribution $p_s$ and $\mathcal{D}_t=\{\boldsymbol{x}^t_j\}_{j=1}^{N_t} \sim p_t$ the unlabeled target dataset drawn from distribution $p_t$. In OSDA, the source domain is associated with a set of \emph{known} classes $y^s \in \{1,\ldots, |\mathcal{C}_s |\}$ that are shared with the target domain $\mathcal{C}_s\subset\mathcal{C}_t$, but the target covers also a set $\mathcal{C}_{t \setminus s}$ of additional classes, which are considered \emph{unknown}. As in CSDA, it holds that $p_s\neq p_t$ and we further have that $p_s\neq p_t^{\mathcal{C}_s}$, where $p_t^{\mathcal{C}_s}$ denotes the distribution of the target domain belonging to the shared label space $\mathcal{C}_s$. Therefore, in OSDA we face both a domain gap ($p_s\neq p_t^{\mathcal{C}_s}$) and a category gap ($\mathcal{C}_s \neq \mathcal{C}_t$). OSDA approaches aim at assigning the target samples to either one of the $|\mathcal{C}_s |$ shared classes or to reject them as \emph{unknown} using only annotated source samples, with the unlabeled target samples available transductively. An important measure characterizing a given OSDA problem is the \emph{openness} that relates the size of the source and target class set. For a dataset pair $(\mathcal{D}_s, \mathcal{D}_t )$, following the definition of~\cite{bendale2016towards}, the openness $\mathbb{O}$ is measured as $\mathbb{O}=1-\frac{|\mathcal{C}_s |}{|\mathcal{C}_t |}$. In CSDA $\mathbb{O} = 0$, while in OSDA $\mathbb{O} > 0$.
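As a minimal illustration of the openness measure, the snippet below evaluates $\mathbb{O}=1-|\mathcal{C}_s|/|\mathcal{C}_t|$; the 10-shared/11-private split used as an example is a hypothetical configuration in the spirit of common Office-31 open-set protocols, not a claim about this paper's experiments.

```python
# Openness O = 1 - |C_s| / |C_t|; O = 0 recovers the closed-set setting.
def openness(num_source_classes, num_target_classes):
    return 1.0 - num_source_classes / num_target_classes

o_closed = openness(10, 10)  # CSDA: source and target share all classes
o_open = openness(10, 21)    # e.g. 10 shared + 11 target-private classes
```

For the 10/21 example, $\mathbb{O}\approx0.52$, i.e. roughly half of the target label space is unknown to the source.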
\subsection{Overview}
\label{subsec:overview}
When designing a method for OSDA, we face two main challenges: \emph{negative transfer} and \emph{known/unknown separation}. Negative transfer occurs when the whole source and target distribution are forcefully matched, thus also the unknown target samples are mistakenly aligned with source data. To avoid this issue, cross-domain adaptation should focus only on the shared $\mathcal{C}_s$ classes, closing the gap between $p^{\mathcal{C}_s}_t$ and $p_s$.
This leads to the challenge of known/unknown separation: recognizing each target sample as either belonging to one of the shared classes $\mathcal{C}_s$ (known) or to one of the target private classes $\mathcal{C}_{t \setminus s}$ (unknown). Following these observations, we structure our approach in two stages: (i) we separate the target samples into known and unknown, and (ii) we align the target samples predicted as known with the source samples (see Figure \ref{fig:scheme}). The first stage is formulated as an anomaly detection problem where the unknown samples are considered as anomalies. The second stage is formulated as a CSDA problem between source and the known target distribution. Inspired by recent advances in anomaly detection and CSDA~\cite{xu2019self-supervised,golan2018deep}, we solve both stages using the power of self-supervision. More specifically, we use two variations of the rotation classification task to compute a normality score for the known/unknown separation of the target samples and to reduce the domain gap.
\subsection{Rotation recognition for open set domain adaptation}
Let us denote with $rot90(\boldsymbol{x},i)$ the function that rotates clockwise a 2D image $\boldsymbol{x}$ by $i\times90^{\circ}$. Rotation recognition is a self-supervised task that consists in rotating a given image $x$ by a random $i \in [1,4]$ and using a CNN to predict $i$ from the rotated image $\tilde{\boldsymbol{x}}=rot90(\boldsymbol{x},i)$.
We indicate with $|r|=4$ the cardinality of the label space for this classification task.
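A minimal sketch of the standard task on raw arrays (NumPy's rot90 rotates counter-clockwise, so a clockwise rotation by i quarter turns uses k = -i):

```python
import numpy as np

def rot90_cw(x, i):
    """The rot90(x, i) of the text: rotate a 2D image clockwise by i * 90 degrees.
    np.rot90 rotates counter-clockwise, hence k = -i."""
    return np.rot90(x, k=-i)

def rotation_sample(x, rng):
    """One self-supervised training pair: the rotated image and its label i."""
    i = int(rng.integers(1, 5))      # i in {1, 2, 3, 4}; i = 4 is a full turn
    return rot90_cw(x, i), i

rng = np.random.default_rng(0)
x = np.arange(9).reshape(3, 3)
x_rot, label = rotation_sample(x, rng)
assert np.array_equal(rot90_cw(x, 4), x)   # four quarter turns = identity
```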
In order to effectively apply rotation recognition to OSDA, we introduce the following variations.
\textit{Relative rotation:}
Consider the images in Figure~\ref{fig:relative_rotation}. Inferring by how much each image has been rotated without looking at its original (non-rotated) version is an ill-posed problem, since the pens, like all the other object classes, do not appear with a consistent orientation in the dataset. On the other hand, looking at both the original and the rotated image to infer the relative rotation between them is well-defined. Following this logic, we modify the standard rotation classification task~\cite{gidaris2018unsupervised} by introducing the original image as an anchor. We then train the rotation classifier to predict the rotation angle given the concatenated features of both the original (anchor) and the rotated image. As indicated by Figure \ref{fig:relative_rotation2}, the proposed relative rotation has the further effect of boosting the discriminative power of the learned features: it guides the network to focus more on specific shape details rather than on confusing texture information across different object classes.
\begin{figure}[t!]
\begin{minipage}[b]{0.42\textwidth}
\centering
\includegraphics[width=\textwidth]{img/relativerot.pdf}
\caption{\label{fig:relative_rotation} Are you able to infer the rotation degree of the rotated images without looking at the respective original one?}
\end{minipage}
\hfill
\begin{minipage}[b]{0.52\textwidth}
\centering
\includegraphics[width=\textwidth]{img/discriminativerot3.pdf}
\caption{\label{fig:relative_rotation2} The objects on the left may be confused. The relative rotation guides the network to focus on discriminative shape information}
\end{minipage}\vspace{-2mm}
\end{figure}
\textit{Multi-rotation classification:} The standard setting of anomaly detection considers samples from one semantic category as the normal class and samples from other semantic categories as anomalies. Rotation recognition has been successfully applied to this setting, but it suffers when including multiple semantic categories in the normal class~\cite{golan2018deep}. This is the case when coping with the known/unknown separation of OSDA, where we have all the $|\mathcal{C}_s|$ semantic categories as known data.
To overcome this problem, we propose a simple solution: we extend rotation recognition from a $4$-class problem to a $(4\times |\mathcal{C}_s|)$-class problem, where the set of classes represents the combination of semantic and rotation labels. For example, if we rotate an image of category $y^s=2$ by $i=3$, its label for the multi-rotation classification task is $z^s=(y^s\times 4)+i=11$.
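The label combination and its inverse are simple integer arithmetic; this sketch assumes 0-based class and rotation indices, matching the example above:

```python
def multi_rotation_label(y, i, num_rotations=4):
    """Combine semantic label y and rotation index i into z = y * 4 + i."""
    return y * num_rotations + i

def split_label(z, num_rotations=4):
    """Recover (y, i) from the combined multi-rotation label z."""
    return divmod(z, num_rotations)

assert multi_rotation_label(2, 3) == 11   # the example from the text
assert split_label(11) == (2, 3)
```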
In Appendix E, we discuss the specific merits of the multi-rotation classification task with further experimental evidence.
In the following, we indicate with $\boldsymbol{y},\boldsymbol{z}$ the one-hot vectors respectively for the class and multi-rotation labels.
\subsection{Stage I: known/unknown separation}
\label{subsec:stage1}
To distinguish between the known and unknown samples of $\mathcal{D}_t$, we train a CNN on the multi-rotation classification task using $\tilde{\mathcal{D}}_s=\{(\boldsymbol{x}^s_j, \tilde{\boldsymbol{x}}^s_j, z^s_j)\}^{4\times N_s}_{j=1}$. The network is composed of an encoder $E$ and two heads: a multi-rotation classifier $R_1$ and a semantic label classifier $C_1$.
The rotation prediction is computed on the stacked features of the original and rotated image produced by the encoder $\hat{\boldsymbol{z}}^s=\text{softmax}\big(R_1([E(\boldsymbol{x}^s),E(\tilde{\boldsymbol{x}}^s)])\big)$, while the semantic prediction is computed only from the original image features as $\hat{\boldsymbol{y}}^s=\text{softmax}\big(C_1(E(\boldsymbol{x}^s))\big)$. The network is trained to minimize the objective function $\mathcal{L}_1 = \mathcal{L}_{C_1} + \mathcal{L}_{R_1}$, where the semantic loss $\mathcal{L}_{C_1}$ is defined as a cross-entropy and the multi-rotation loss $\mathcal{L}_{R_1}$ combines cross-entropy and center loss \cite{centerWenZL016}. More precisely,\vspace{-1mm}
\begin{align}
\mathcal{L}_{C_1} &= -\sum_{j \in \mathcal{D}_s} \boldsymbol{y}^s_j \cdot \log(\hat{\boldsymbol{y}}^s_j),\\
\mathcal{L}_{R_1} &= \sum_{j \in \tilde{\mathcal{D}}_s} -\lambda_{1,1} \boldsymbol{z}^s_j \cdot \log(\hat{\boldsymbol{z}}^s_j) + \lambda_{1,2} ||\boldsymbol{v}^s_j - \gamma(\boldsymbol{z}^s_j)||^2_2, \label{eq:centerloss}\vspace{-6mm}
\end{align}
where $||.||_2$ indicates the $l_2$-norm operator, $\boldsymbol{v}_j$ indicates the output of the penultimate layer of $R_1$ and $\gamma(\boldsymbol{z}_j)$ indicates the corresponding centroid of the class associated with $\boldsymbol{v}_j$. By using the center loss we further encourage the network to minimize the intra-class variations while keeping the features of different classes far apart. This supports the subsequent use of the rotation classifier output as a metric to detect unknown category samples.
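A NumPy sketch of the center-loss term above on a toy batch of penultimate-layer features (the feature values and centroids are illustrative; in practice the per-class centroids are estimated and updated during training):

```python
import numpy as np

def center_loss(features, labels, centroids):
    """Sum over the batch of || v_j - gamma(z_j) ||_2^2 (second term of L_R1)."""
    diffs = features - centroids[labels]   # subtract each sample's class centroid
    return float(np.sum(diffs ** 2))

# Illustrative batch: 3 features of dimension 2, multi-rotation labels in {0, 1}.
feats = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
labels = np.array([0, 0, 1])
cents = np.array([[1.0, 0.0], [0.0, 1.0]])
loss = center_loss(feats, labels, cents)   # only the second sample contributes
```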
Once the training is complete, we use $E$ and $R_1$ to compute the \emph{normality score} $\mathcal{N} \in [0,1]$ for each target sample, with large $\mathcal{N}$ values indicating normal (known) samples and vice-versa.
We start from the network predictions on all the relative rotation variants of a target sample, $\hat{\boldsymbol{z}}_i^t=\text{softmax}\big(R_1([E(\boldsymbol{x}^t),E(\tilde{\boldsymbol{x}}_i^t)])\big)$, and their related normalized entropy $H(\hat{\boldsymbol{z}}_i^t)= -\hat{\boldsymbol{z}}_i^t \cdot \log(\hat{\boldsymbol{z}}_i^t)/\log|\mathcal{C}_s|$, with $i=1,\ldots,|r|$. We indicate with $[\hat{\boldsymbol{z}}^t]_m$ the $m$-th component of the $\hat{\boldsymbol{z}}^t$ vector.
The full expression of the normality score is:
\begin{equation}
\mathcal{N}(\boldsymbol{x}^t) = \max \Bigg\{ \max_{k=1,\ldots,|\mathcal{C}_s|}\bigg(\sum_{i=1}^{|r|}[\hat{\boldsymbol{z}}_i^t]_{k\times|r|+i}\bigg) , \bigg(1-\frac{1}{|r|}\sum_{i=1}^{|r|}H(\hat{\boldsymbol{z}}_i^t)\bigg)\Bigg\}~.
\label{eq:normalityscore}
\end{equation}
In words, this formula is a function of the ability of the network to correctly predict the semantic class and orientation of a target sample (first term in the braces, \emph{Rotation Score}) as well as of its confidence evaluated on the basis of the prediction entropy (second term, \emph{Entropy Score}). We maximize over these two components with the aim of taking the most reliable metric in each case.
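The score computation can be sketched as follows. Note that this sketch additionally averages the rotation score over the $|r|$ variants so that both terms lie in $[0,1]$, consistently with the range stated above (an assumption of this sketch), and the probability values used in the example are illustrative:

```python
import numpy as np

def normality_score(probs, num_classes, num_rot=4):
    """Normality score of one target sample from its |r| = num_rot predictions.

    probs has shape (num_rot, num_rot * num_classes): row i is the softmax
    output for the sample rotated by i * 90 degrees (0-based indices)."""
    # Rotation score: for each candidate class k, collect the probability mass
    # that rotation variant i gets the combined label (k, i); take the best k.
    rotation_score = max(
        sum(probs[i, k * num_rot + i] for i in range(num_rot)) / num_rot
        for k in range(num_classes)
    )
    # Entropy score: 1 - mean normalized entropy of the |r| predictions.
    entropies = [-(p * np.log(p + 1e-12)).sum() / np.log(num_classes)
                 for p in probs]
    entropy_score = 1.0 - float(np.mean(entropies))
    return max(rotation_score, entropy_score)

# Perfectly confident predictions for class k = 0 and each rotation i:
confident = np.zeros((4, 8))
for i in range(4):
    confident[i, i] = 1.0            # column 0 * 4 + i
```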
Finally, the normality score is used to separate the target dataset into a known target dataset $\mathcal{D}_t^{knw}$ and an unknown target dataset $\mathcal{D}_t^{unk}$. The distinction is made directly through the data statistics, using the average of the normality score over the whole target $\bar{\mathcal{N}}=\frac{1}{N_t}\sum_{j=1}^{N_t}\mathcal{N}(\boldsymbol{x}^t_j)$,
without the need to introduce any further parameter:
\begin{equation}
\begin{cases}
\boldsymbol{x}^t \in \mathcal{D}_t^{knw} & \quad \text{if} \quad \mathcal{N}(\boldsymbol{x}^t) > \bar{\mathcal{N}} \\
\boldsymbol{x}^t \in \mathcal{D}_t^{unk} & \quad \text{if} \quad \mathcal{N}(\boldsymbol{x}^t) \leq \bar{\mathcal{N}}~.
\end{cases}
\end{equation}
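A sketch of this parameter-free split (the scores are illustrative):

```python
import numpy as np

def split_known_unknown(scores):
    """Parameter-free split: compare each normality score to the target mean."""
    scores = np.asarray(scores, dtype=float)
    threshold = scores.mean()        # average normality score over the target
    known = np.where(scores > threshold)[0]
    unknown = np.where(scores <= threshold)[0]
    return known, unknown

known, unknown = split_known_unknown([0.9, 0.8, 0.2, 0.1])
```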
It is worth mentioning that only $R_1$ is directly involved in computing the normality score, while $C_1$ is only trained for regularization purposes and as a warm up for the following stage. For a detailed pseudo-code on how to compute $\mathcal{N}$ and generate $\mathcal{D}_t^{knw}$ and $\mathcal{D}_t^{unk}$, please refer to Appendix G.
\subsection{Stage II: domain alignment}
\label{subsec:stage2}
Once the target unknown samples have been identified, the scenario gets closer to that of standard CSDA. On the one hand, we can use $\mathcal{D}_t^{knw}$ to close the domain gap without the risk of negative transfer and, on the other hand, we can exploit $\mathcal{D}_t^{unk}$ to extend the original semantic classifier, making it able to recognize the unknown category.
Similarly to Stage I, the network is composed of an encoder $E$ and two heads: a rotation classifier $R_2$ and a semantic label classifier $C_2$. The encoder is inherited from the previous stage. The heads also leverage on the previous training phase but have two key differences with respect to Stage I:
(1) $C_1$ has a $|\mathcal{C}_s|$-dimensional output, while $C_2$ has a $(|\mathcal{C}_s|+1)$-dimensional output because of the addition of the unknown class; (2) $R_1$ is a multi-rotation classifier with a $(4\times|\mathcal{C}_s|)$-dimensional output, while $R_2$ is a rotation classifier with a $4$-dimensional output.
The rotation prediction is computed as $\hat{\boldsymbol{q}}=\text{softmax}\big(R_2([E(\boldsymbol{x}),E(\tilde{\boldsymbol{x}})])\big)$, while the semantic prediction is $\hat{\boldsymbol{g}}=\text{softmax}\big(C_2(E(\boldsymbol{x}))\big)$.
The network is trained to minimize the objective function $\mathcal{L}_2 = \mathcal{L}_{C_{2}} + \mathcal{L}_{R_2}$, where $\mathcal{L}_{C_{2}}$ combines the supervised cross-entropy and the unsupervised entropy loss for the classification task, while $\mathcal{L}_{R_2}$ is defined as a cross-entropy for the rotation task.
The unsupervised entropy loss is used to involve also the unlabeled target samples recognized as known in the semantic classification process. This loss encourages the decision boundary to pass through low-density areas.
More precisely,
\begin{align}
\mathcal{L}_{C_{2}} &= -\sum_{j \in \{\mathcal{D}_{s}\cup \mathcal{D}_t^{unk}\}} \boldsymbol{g}_j \cdot \log(\hat{\boldsymbol{g}}_j)
-\lambda_{2,1}\sum_{j \in \mathcal{D}_t^{knw}} \hat{\boldsymbol{g}}_j \cdot \log(\hat{\boldsymbol{g}}_j),\\
\mathcal{L}_{R_2} &= -\lambda_{2,2}\sum_{j \in {\mathcal{D}}_t^{knw}} \boldsymbol{q}_j \cdot \log(\hat{\boldsymbol{q}}_j)~.
\end{align}
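The entropy term can be sketched numerically as follows (the probability values are illustrative):

```python
import numpy as np

def entropy_loss(probs):
    """Mean Shannon entropy of the predictions, -sum_c p_c log p_c averaged over
    samples; minimizing it pushes decision boundaries into low-density areas."""
    probs = np.asarray(probs, dtype=float)
    return float((-probs * np.log(probs + 1e-12)).sum(axis=1).mean())

confident = [[0.98, 0.01, 0.01]]
uncertain = [[0.34, 0.33, 0.33]]
assert entropy_loss(confident) < entropy_loss(uncertain)
```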
Once the training is complete, $R_2$ is discarded and the target labels are simply predicted as $c_j^t=C_2(E(\boldsymbol{x}_j^t))$ for all $j=1, \ldots, N_t$.
\section{On reproducibility and open set metrics}
\label{sec:metrics}
OSDA is a young field of research first introduced in 2017. As it is gaining momentum, it is crucial to guarantee the \emph{reproducibility} of the proposed methods and have a valid \emph{metric} to properly evaluate them.
\emph{Reproducibility:} In recent years, the machine learning community has become painfully aware of a reproducibility crisis~\cite{henderson2018deep,dodge2019show,lucic2018gans}. Replicating the results of state-of-the-art deep learning models is seldom straightforward due to a combination of non-deterministic factors in standard benchmark environments and incomplete reporting by the authors. Although the problem is far from being solved, several efforts have been made to promote reproducibility through checklists~\cite{checklist}, challenges~\cite{challenge} and by encouraging authors to submit their code. On our side, we contribute by re-running the state-of-the-art methods for OSDA and comparing them with the results reported in the papers (see Section~\ref{sec:expers}). Our results are produced using the original public implementations together with the parameters reported in the papers and, in some cases, repeated communications with the authors. We believe that this practice, as opposed to simply copying the results reported in the papers, can be of great value to the community.
\emph{Open set metrics:} The usual metrics adopted to evaluate OSDA are the average class accuracy over the known classes \emph{OS$^*$}, and the accuracy of the unknown class \emph{UNK}. They are generally combined in \emph{OS}$=\frac{|\mathcal{C}_s |}{|\mathcal{C}_s | + 1} \times$\emph{OS$^*$}$+ \frac{1}{|\mathcal{C}_s | + 1} \times$\emph{UNK} as a measure of the overall performance.
However, we argue (and we already demonstrated in \cite{LOGHMANI2020198}) that treating the unknown as an additional class does not provide an appropriate metric.
As an example, let us consider an algorithm that is not designed to deal with unknown classes (\emph{UNK}=$0.0\%$) but has perfect accuracy over $10$ known classes (\emph{OS$^*$}=$100.0\%$). Although this algorithm is not suitable for open set scenarios because it completely disregards false positives,
it presents a high score of \emph{OS}=$90.9\%$. With increasing number of known classes, this effect on OS becomes even more acute, making the role of \emph{UNK} negligible. For this reason, we propose a new metric defined as the harmonic mean of \emph{OS$^*$} and \emph{UNK}, \emph{HOS}~$= 2 \frac{\text{\emph{OS}$^*$} \times \text{\emph{UNK}}}{\text{\emph{OS$^*$}} + \text{\emph{UNK}}}$. Differently from \emph{OS}, \emph{HOS} provides a high score only if the algorithm performs well both on known and on unknown samples, independently of $|\mathcal{C}_s |$. Using a harmonic mean instead of a simple average penalizes large gaps between \emph{OS$^*$} and \emph{UNK}.
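Both metrics reduce to a few lines of arithmetic; the example below reproduces the 10-class case discussed above:

```python
def os_score(os_star, unk, num_known):
    """OS: weighted average that treats unknown as one extra class."""
    return (num_known * os_star + unk) / (num_known + 1)

def hos_score(os_star, unk):
    """HOS: harmonic mean of OS* and UNK; collapses to 0 if either is 0."""
    if os_star + unk == 0:
        return 0.0
    return 2 * os_star * unk / (os_star + unk)

# A classifier that ignores the unknown class entirely still scores OS = 90.9...
assert round(os_score(100.0, 0.0, num_known=10), 1) == 90.9
# ...while HOS correctly drops to 0.
assert hos_score(100.0, 0.0) == 0.0
```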
\section{Experiments}
\label{sec:expers}
\subsection{Setup: Baselines, Datasets}
We validate ROS with a thorough experimental analysis on two widely used benchmark datasets, Office-31 and Office-Home.
\emph{Office-31}~\cite{saenko2010adapting} consists of three domains, Webcam (W), Amazon (A) and Dslr (D), each containing $31$ object categories. We follow the setting proposed in~\cite{saito2018open}, where the first $10$ classes in alphabetic order are considered known and the last $11$ classes are considered unknown.
\emph{Office-Home}~\cite{venkateswara2017deep} consists of four domains, Product (Pr), Art (Ar), Real World (Rw) and Clipart (Cl), each containing $65$ object categories.
Unless otherwise specified, we follow the setting proposed in~\cite{liu2019separate}, where the first $25$ classes in alphabetic order are considered known classes and the remaining $40$ classes are considered unknown. Both the number of categories and the large domain gaps make this dataset much more challenging than Office-31.
We compare ROS against the state-of-the-art methods STA~\cite{liu2019separate}, OSBP~\cite{saito2018open}, UAN~\cite{you2019universal} and AoD~\cite{feng2019attract}, already described in Section~\ref{sec:related}. For each of them, we run experiments using the official code provided by the authors, with the exact parameters declared in the relative paper. The only exception is AoD, whose code had not been released at the time of writing; we therefore report the values presented in the original work.
We also highlight that STA presents a practical issue related to the similarity score used to separate known and unknown categories. Its formulation is based on the \emph{max} operator according to Equation (2) in \cite{liu2019separate}, but appears instead to be based on \emph{sum} in the implementation code. For the sake of completeness, we consider both variants (STA\textsubscript{sum}, STA\textsubscript{max}) in our analysis. All the results presented in this section, both for ROS and for the baseline methods, are the average over three independent experimental runs. We do not cherry-pick the best out of several trials, but run only the three experiments we report.
\subsection{Implementation Details}
Following standard practice, we evaluate the performance of ROS on Office-31 using two different backbones, {ResNet-50} \cite{he2016deep} and {VGGNet} \cite{simonyan2014very}, both pre-trained on ImageNet \cite{deng2009imagenet}, while we focus on ResNet-50 for Office-Home. The hyper-parameter values are the same regardless of the backbone and the dataset used. In particular, in both Stage I and Stage II of ROS the batch size is set to $32$ with a learning rate of $0.0003$, which decreases during training following an inverse decay scheduling. For all layers trained from scratch, we set the learning rate $10$ times higher than for the pre-trained ones. We use SGD, setting the weight decay to $0.0005$ and the momentum to $0.9$. In both stages, the weight of the self-supervised task is set to three times that of the semantic classification task, thus $\lambda_{1,1}= \lambda_{2,2}=3$. In Stage I, the weight of the center loss is $\lambda_{1,2}=0.1$ and in Stage II the weight of the entropy loss is $\lambda_{2,1}=0.1$. The network trained in Stage I is used as the starting point for Stage II.
To take into consideration the extra category, in Stage II we set the learning rate of the new unknown class to twice that of the known classes (already learned in Stage I).
More implementation details and a sensitivity analysis of the hyper-parameters are provided in Appendix A and D.
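As a sketch, the learning-rate schedule described above might look as follows; the exact inverse-decay form and its constants (gamma = 10, power = 0.75) are assumptions borrowed from common domain-adaptation implementations, not values stated here:

```python
def inverse_decay_lr(base_lr, progress, gamma=10.0, power=0.75):
    """Inverse decay: lr shrinks smoothly as training progress goes from 0 to 1.
    The gamma/power constants are assumed, not taken from the paper."""
    return base_lr / (1.0 + gamma * progress) ** power

base = 0.0003                       # batch size 32, SGD with momentum 0.9
lrs = [inverse_decay_lr(base, p / 100) for p in range(101)]
assert lrs[0] == base               # starts at the base learning rate
assert all(a >= b for a, b in zip(lrs, lrs[1:]))   # monotonically decreasing
scratch_lr = 10 * base              # layers trained from scratch use 10x the lr
```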
\subsection{Results}
\label{subsec:results}
\paragraph{How does our method compare to the state-of-the-art?}
Table~\ref{tab:office31Resnet50} and \ref{tab:officehomeResnet50} show the average results over three runs on each of the domain shifts, respectively of Office-31 and Office-Home.
To discuss the results, we focus on the HOS metric since it is a synthesis of OS* and UNK, as discussed in Section \ref{sec:metrics}. Overall, ROS
outperforms the state-of-the-art on a total of $13$ out of $18$ domain shifts and presents the highest average performance on both Office-31 and Office-Home.
The HOS improvement reaches up to $2.2\%$ compared to the second best method, OSBP. Specifically, ROS has a large gain over STA, regardless of its specific max or sum implementation, while UAN is not a challenging competitor due to its low performance on the unknown class. We can compare against AoD only when using VGGNet for Office-31: we report the original results in gray in Table \ref{tab:office31Resnet50}, with the HOS value confirming our advantage.
A more in-depth analysis indicates that the advantage of ROS is largely due to its ability to separate known and unknown samples.
Indeed, while our average OS* is similar to that of the competing methods, our average UNK is significantly higher. This characteristic is also visible qualitatively by looking at the t-SNE visualizations in Figure~\ref{fig:tsne} where we focus on the comparison against the second best method OSBP. Here the features for the known (red) and unknown (blue) target data appear more confused than for ROS.
\begin{table}[t!]
\centering
\caption{Accuracy (\%) averaged over three runs of each method on Office-31 dataset using ResNet-50 and VGGNet as backbones}
\resizebox{\textwidth}{!}{
\begin{tabular}{l@{~~~~~}cc ccc ccc ccc ccc ccc ccc ccc}
\hline
& \multicolumn{21}{c}{\textbf{Office-31}} \\
\hline
& \multicolumn{21}{c}{\textbf{ResNet-50}} \\
\hline
& &\multicolumn{3}{c|}{A $\rightarrow$ W } & \multicolumn{3}{c|}{A $\rightarrow$ D } & \multicolumn{3}{c|}{D $\rightarrow$ W } & \multicolumn{3}{c|}{W $\rightarrow$ D} & \multicolumn{3}{c|}{D $\rightarrow$ A } & \multicolumn{3}{c|}{W $\rightarrow$ A } & \multicolumn{3}{c}{\textbf{Avg.} }
\\
& & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c}{\textbf{\underline{HOS}}} \\
\hline
\multicolumn{1}{c}{STA\textsubscript{sum}}& \multirow{2}{*}{\cite{liu2019separate}} & 92.1 & 58.0 & \multicolumn{1}{c|}{71.0} & 95.4 & 45.5 & \multicolumn{1}{c|}{61.6} & 97.1 & 49.7 & \multicolumn{1}{c|}{65.5} & 96.6 & 48.5 & \multicolumn{1}{c|}{64.4} & 94.1 & 55.0 & \multicolumn{1}{c|}{69.4} & 92.1 & 46.2 & \multicolumn{1}{c|}{60.9} & 94.6 & 50.5 & \multicolumn{1}{c}{65.5$\pm$0.3} \\
\multicolumn{1}{c}{STA\textsubscript{max}}& & 86.7 & 67.6 & \multicolumn{1}{c|}{75.9} & 91.0 & 63.9 & \multicolumn{1}{c|}{75.0} & 94.1 & 55.5 & \multicolumn{1}{c|}{69.8} & 84.9 & 67.8 & \multicolumn{1}{c|}{75.2} & 83.1 & 65.9 & \multicolumn{1}{c|}{73.2} & 66.2 & 68.0 & \multicolumn{1}{c|}{66.1} & 84.3 & 64.8 & \multicolumn{1}{c}{72.5$\pm$0.8}\\
\multicolumn{1}{c}{OSBP} \cite{saito2018open} & & 86.8 & 79.2 & \multicolumn{1}{c|}{\textbf{82.7}} & 90.5 & 75.5 & \multicolumn{1}{c|}{\textbf{82.4}} & 97.7 & 96.7 & \multicolumn{1}{c|}{\textbf{97.2}} & 99.1 & 84.2 & \multicolumn{1}{c|}{91.1} & 76.1 & 72.3 & \multicolumn{1}{c|}{75.1} & 73.0 & 74.4 & \multicolumn{1}{c|}{73.7} & 87.2 & 80.4 & \multicolumn{1}{c}{83.7$\pm$0.4} \\
\multicolumn{1}{c}{UAN} \cite{you2019universal}& & 95.5 & 31.0 & \multicolumn{1}{c|}{46.8} & 95.6 & 24.4 & \multicolumn{1}{c|}{38.9} & 99.8 & 52.5 & \multicolumn{1}{c|}{68.8} & 81.5 & 41.4 & \multicolumn{1}{c|}{53.0} & 93.5 & 53.4 & \multicolumn{1}{c|}{68.0} & 94.1 & 38.8 & \multicolumn{1}{c|}{54.9} & 93.4 & 40.3 & \multicolumn{1}{c}{55.1$\pm$1.4} \\
\multicolumn{1}{c}{\textbf{ROS}} & & 88.4 & 76.7 & \multicolumn{1}{c|}{82.1} & 87.5 & 77.8 & \multicolumn{1}{c|}{\textbf{82.4}} & 99.3 & 93.0 & \multicolumn{1}{c|}{96.0} & 100.0 & 99.4 & \multicolumn{1}{c|}{\textbf{99.7}} & 74.8 & 81.2 & \multicolumn{1}{c|}{\textbf{77.9}} & 69.7 & 86.6 & \multicolumn{1}{c|}{\textbf{77.2}} & 86.6 & 85.8 & \multicolumn{1}{c}{\textbf{85.9$\pm$0.2}} \\
\hline\hline
& \multicolumn{21}{c}{\textbf{VGGNet}} \\
\hline
& &\multicolumn{3}{c|}{A $\rightarrow$ W } & \multicolumn{3}{c|}{A $\rightarrow$ D } & \multicolumn{3}{c|}{D $\rightarrow$ W } & \multicolumn{3}{c|}{W $\rightarrow$ D} & \multicolumn{3}{c|}{D $\rightarrow$ A } & \multicolumn{3}{c|}{W $\rightarrow$ A } & \multicolumn{3}{c}{\textbf{Avg.} }
\\
& & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c}{\textbf{\underline{HOS}}} \\
\hline
\multicolumn{1}{c}{OSBP} \cite{saito2018open}& & 79.4 & 75.8 & \multicolumn{1}{c|}{77.5} & 87.9 & 75.2 & \multicolumn{1}{c|}{81.0} & 96.8 & 93.4 & \multicolumn{1}{c|}{\textbf{95.0}} & 98.9 & 84.2 & \multicolumn{1}{c|}{91.0} & 74.4 & 82.4 & \multicolumn{1}{c|}{\textbf{78.2}} & 69.7 & 76.4 & \multicolumn{1}{c|}{72.9} & 84.5 & 81.2 & \multicolumn{1}{c}{82.6$\pm$0.8} \\
\multicolumn{1}{c}{\textbf{ROS}} & & 80.3 & 81.7 & \multicolumn{1}{c|}{\textbf{81.0}} & 81.8 & 76.5 & \multicolumn{1}{c|}{79.0} & 99.5 & 89.9 & \multicolumn{1}{c|}{94.4} & 99.3 & 100.0 & \multicolumn{1}{c|}{\textbf{99.7}} & 76.7 & 79.6 & \multicolumn{1}{c|}{78.1} & 62.2 & 91.6 & \multicolumn{1}{c|}{\textbf{74.1}} & 83.3 & 86.5 & \multicolumn{1}{c}{\textbf{84.4$\pm$0.2}} \\
\hline
\multicolumn{1}{c}{\textcolor{gray}{AoD \cite{feng2019attract}}}& & \textcolor{gray}{87.7} & \textcolor{gray}{73.4} & \multicolumn{1}{c|}{\textcolor{gray}{79.9}} & \textcolor{gray}{92.0} & \textcolor{gray}{71.1} & \multicolumn{1}{c|}{\textbf{\textcolor{gray}{79.3}}} & \textcolor{gray}{99.8} & \textcolor{gray}{78.9} & \multicolumn{1}{c|}{\textcolor{gray}{88.1}} & \textcolor{gray}{99.3} & \textcolor{gray}{87.2} & \multicolumn{1}{c|}{\textcolor{gray}{92.9}} & \textcolor{gray}{88.4} & \textcolor{gray}{13.6} & \multicolumn{1}{c|}{\textcolor{gray}{23.6}} & \textcolor{gray}{82.6} & \textcolor{gray}{57.3} & \multicolumn{1}{c|}{\textcolor{gray}{67.7}} & \textcolor{gray}{91.6} & \textcolor{gray}{63.6} & \multicolumn{1}{c}{\textcolor{gray}{71.9}} \\
\hline
\end{tabular}
}
\label{tab:office31Resnet50}
\end{table}
\begin{table}[t]
\centering
\caption{Accuracy (\%) averaged over three runs of each method on Office-Home dataset using ResNet-50 as backbone}
\resizebox{\textwidth}{!}{
\begin{tabular}{l@{~~~~~~~}c ccc ccc ccc ccc ccc ccc ccc}
\hline
& \multicolumn{18}{c}{~~~~~~~~~~~\textbf{Office-Home}} \\
\hline
& & & \multicolumn{3}{c|}{Pr $\rightarrow$ Rw } & \multicolumn{3}{c|}{Pr $\rightarrow$ Cl } & \multicolumn{3}{c|}{Pr $\rightarrow$ Ar } &
\multicolumn{3}{c|}{Ar $\rightarrow$ Pr } & \multicolumn{3}{c|}{Ar $\rightarrow$ Rw } & \multicolumn{3}{c}{Ar $\rightarrow$ Cl } \\
& & & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c}{\textbf{\underline{HOS}}} \\
\hline
\multicolumn{1}{c}{STA\textsubscript{sum}} & \multirow{2}{*}{\cite{liu2019separate}} & & 78.1 & 63.3 & \multicolumn{1}{c|}{69.7} & 44.7 & 71.5 & \multicolumn{1}{c|}{55.0} & 55.4 & 73.7 & \multicolumn{1}{c|}{63.1} & 68.7 & 59.7 & \multicolumn{1}{c|}{63.7} & 81.1 & 50.5 & \multicolumn{1}{c|}{62.1} & 50.8 & 63.4 & \multicolumn{1}{c}{56.3} \\
\multicolumn{1}{c}{STA\textsubscript{max}}& & & 76.2 & 64.3 & \multicolumn{1}{c|}{69.5} & 44.2 & 67.1 & \multicolumn{1}{c|}{53.2} & 54.2 & 72.4 & \multicolumn{1}{c|}{61.9} & 68.0 & 48.4 & \multicolumn{1}{c|}{54.0} & 78.6 & 60.4 & \multicolumn{1}{c|}{68.3} & 46.0 & 72.3 & \multicolumn{1}{c}{55.8}\\
\multicolumn{1}{c}{OSBP} \cite{saito2018open}& & & 76.2 & 71.7 & \multicolumn{1}{c|}{73.9} & 44.5 & 66.3 & \multicolumn{1}{c|}{53.2} & 59.1 & 68.1 & \multicolumn{1}{c|}{\textbf{63.2}} & 71.8 & 59.8 & \multicolumn{1}{c|}{65.2} & 79.3 & 67.5 & \multicolumn{1}{c|}{72.9} & 50.2 & 61.1 & \multicolumn{1}{c}{55.1}\\
\multicolumn{1}{c}{UAN} \cite{you2019universal}& & & 84.0 & 0.1 & \multicolumn{1}{c|}{0.2} & 59.1 & 0.0 & \multicolumn{1}{c|}{0.0} & 73.7 & 0.0 & \multicolumn{1}{c|}{0.0} & 81.1 & 0.0 & \multicolumn{1}{c|}{0.0} & 88.2 & 0.1 & \multicolumn{1}{c|}{0.2} & 62.4 & 0.0 & \multicolumn{1}{c}{0.0} \\
\multicolumn{1}{c}{\textbf{ROS}} & & & 70.8 & 78.4 & \multicolumn{1}{c|}{\textbf{74.4}} & 46.5 & 71.2 & \multicolumn{1}{c|}{\textbf{56.3}} & 57.3 & 64.3 & \multicolumn{1}{c|}{60.6} & 68.4 & 70.3 & \multicolumn{1}{c|}{\textbf{69.3}} & 75.8 & 77.2 & \multicolumn{1}{c|}{\textbf{76.5}} & 50.6 & 74.1 & \multicolumn{1}{c}{\textbf{60.1}} \\
\hline\hline
& \multicolumn{3}{c|}{Rw $\rightarrow$ Ar } & \multicolumn{3}{c|}{Rw $\rightarrow$ Pr } & \multicolumn{3}{c|}{Rw $\rightarrow$ Cl }
& \multicolumn{3}{c|}{Cl $\rightarrow$ Rw } & \multicolumn{3}{c|}{Cl $\rightarrow$ Ar } & \multicolumn{3}{c}{Cl $\rightarrow$ Pr } & \multicolumn{3}{c}{\textbf{Avg.}} \\
& OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}}& OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c|}{\textbf{\underline{HOS}}} & OS* & UNK & \multicolumn{1}{c}{\textbf{\underline{HOS}}} \\
\hline
\multicolumn{1}{c}{STA\textsubscript{sum}} & 67.9 & 62.3 & \multicolumn{1}{c|}{65.0} & 77.9 & 58.0 & \multicolumn{1}{c|}{66.4} & 51.4 & 57.9 & \multicolumn{1}{c|}{54.2} & 69.8 & 63.2 & \multicolumn{1}{c|}{66.3} & 53.0 & 63.9 & \multicolumn{1}{c|}{57.9} & 61.4 & 63.5 & \multicolumn{1}{c|}{62.5} & 63.4 & 62.6 & \multicolumn{1}{c}{61.9$\pm$2.1} \\
\multicolumn{1}{c}{STA\textsubscript{max}} & 67.5 & 66.7 & \multicolumn{1}{c|}{67.1} & 77.1 & 55.4 & \multicolumn{1}{c|}{64.5} & 49.9 & 61.1 & \multicolumn{1}{c|}{54.5} & 67.0 & 66.7 & \multicolumn{1}{c|}{66.8} & 51.4 & 65.0 & \multicolumn{1}{c|}{57.4} & 61.8 & 59.1 & \multicolumn{1}{c|}{60.4} & 61.8 & 63.3 & \multicolumn{1}{c}{61.1$\pm$0.3} \\
\multicolumn{1}{c}{OSBP} & 66.1 & 67.3 & \multicolumn{1}{c|}{66.7} & 76.3 & 68.6 & \multicolumn{1}{c|}{72.3} & 48.0 & 63.0 & \multicolumn{1}{c|}{54.5} & 72.0 & 69.2 & \multicolumn{1}{c|}{\textbf{70.6}} & 59.4 & 70.3 & \multicolumn{1}{c|}{\textbf{64.3}} & 67.0 & 62.7 & \multicolumn{1}{c|}{64.7} & 64.1 & 66.3 & \multicolumn{1}{c}{64.7$\pm$0.2} \\
\multicolumn{1}{c}{UAN} & 77.5 & 0.1 & \multicolumn{1}{c|}{0.2} & 85.0 & 0.1 & \multicolumn{1}{c|}{0.1} & 66.2 & 0.0 & \multicolumn{1}{c|}{0.0} & 80.6 & 0.1 & \multicolumn{1}{c|}{0.2} & 70.5 & 0.0 & \multicolumn{1}{c|}{0.0} & 74.0 & 0.1 & \multicolumn{1}{c|}{0.2} & 75.2 & 0.0 & \multicolumn{1}{c}{0.1$\pm$0.0} \\
\multicolumn{1}{c}{\textbf{ROS}} & 67.0 & 70.8 & \multicolumn{1}{c|}{\textbf{68.8}} & 72.0 & 80.0 & \multicolumn{1}{c|}{\textbf{75.7}} & 51.5 & 73.0 & \multicolumn{1}{c|}{\textbf{60.4}} & 65.3 & 72.2 & \multicolumn{1}{c|}{68.6} & 53.6 & 65.5 & \multicolumn{1}{c|}{58.9} & 59.8 & 71.6 & \multicolumn{1}{c|}{\textbf{65.2}} & 61.6 & 72.4 & \multicolumn{1}{c}{\textbf{66.2$\pm$ 0.3}}\\
\hline
\end{tabular}
}
\label{tab:officehomeResnet50}
\end{table}
\begin{figure}
\centering
\includegraphics[width=0.85\linewidth]{img/tsne.pdf}
\caption{t-SNE visualization of the target features for the
W$\rightarrow${A} domain shift from Office-31. Red and blue points are respectively features of known and unknown classes}
\label{fig:tsne}
\end{figure}
\paragraph{Is it possible to reproduce the reported results of the state-of-the-art?}
By analyzing the published OSDA papers, we noticed some inconsistencies in the reported results. For example, some of the results from OSBP differ between the pre-print~\cite{saito2018open-arxiv} and the published~\cite{saito2018open} version, although they present the same description of method and hyper-parameters. Also, AoD~\cite{feng2019attract} compares against the pre-print results of OSBP, while omitting the results of STA. To resolve these ambiguities and gain a better perspective on the current state-of-the-art methods, in Table~\ref{tab:reproducibility} we compare the results on Office-31 reported in previous works with the results obtained by running their code. For this analysis we focus on OS since it is the only metric reported for some of the methods. The comparison shows that, despite using the original implementation and the information provided by the authors, the OS obtained by re-running the experiments is between $1.3\%$ and $4.9\%$ lower than the originally published results. The significance of this gap calls for greater attention in providing all the relevant information for reproducing the experimental results. A more extensive reproducibility study is provided in Appendix B.
\begin{table}[t]
\centering
\caption{Reported vs reproduced OS accuracy (\%) averaged over three runs}
\resizebox{0.8\textwidth}{!}{
\begin{tabular}{l@{~~~} cccccc ccc}
\hline
\multicolumn{9}{c}{\textbf{Reproducibility Study}} \\
\hline
\multicolumn{6}{c|}{Office-31 (ResNet-50)} & \multicolumn{3}{c}{Office-31 (VGGNet)}\\
\hline
\multicolumn{3}{c|}{STA\textsubscript{sum}} & \multicolumn{3}{c|}{UAN} & \multicolumn{3}{c}{OSBP}\\
OS\textsubscript{reported} & OS\textsubscript{ours} & \multicolumn{1}{c|}{gap} & OS\textsubscript{reported} & OS\textsubscript{ours} & \multicolumn{1}{c|}{gap} & OS\textsubscript{reported} & OS\textsubscript{ours} & \multicolumn{1}{c}{gap} \\
\hline
\multicolumn{1}{c}{92.9} & \multicolumn{1}{c}{90.6$\pm$1.8} & \multicolumn{1}{c|}{\textbf{2.3}} & \multicolumn{1}{c}{89.2} & \multicolumn{1}{c}{87.9$\pm$0.03} & \multicolumn{1}{c|}{\textbf{1.3}} & \multicolumn{1}{c}{89.1} & \multicolumn{1}{c}{84.2 $\pm$0.4} & \multicolumn{1}{c}{\textbf{4.9}}\\
\hline
\end{tabular}
}
\label{tab:reproducibility}
\end{table}
\paragraph{Why is it important to use the HOS metric?}
The most glaring example of why OS is not an appropriate metric for OSDA is provided by the results of UAN. In fact, when computing OS from the average (OS*,UNK) in Table~\ref{tab:office31Resnet50} and~\ref{tab:officehomeResnet50}, we can see that UAN has OS=$72.5\%$ for Office-Home and OS=$91.4\%$ for Office-31. This is mostly reflective of the ability of UAN in recognizing the known classes (OS*), but it completely disregards its (in)ability to identify the unknown samples (UNK). For example, for most domain shifts in Office-Home, UAN does not assign (almost) any samples to the unknown class, resulting in UNK=$0.0\%$. On the other hand, HOS better reflects the open set scenario and assumes a high value only when OS* and UNK are both high.
\paragraph{Is rotation recognition effective for known/unknown separation in OSDA?}
To better understand the effectiveness of rotation recognition for known/unknown separation, we measure the performance of our Stage I and compare it to the Stage I of STA. Indeed, STA also has a similar two-stage structure, but uses a multi-binary classifier instead of a multi-rotation classifier to separate known and unknown target samples. To assess the performance, we compute the \textit{area under the receiver operating characteristic curve} (AUC-ROC) over the normality scores $\mathcal{N}$ on Office-31. Table~\ref{tab:ablationstep1} shows that the AUC-ROC of ROS ($91.5$) is significantly higher than that of the multi-binary classifier used by STA ($79.9$). Table~\ref{tab:ablationstep1} also shows the performance of Stage I when alternatively removing the center loss (No Center Loss) from Equation (\ref{eq:centerloss}) ($\lambda_{1,2}=0$) and the anchor image (No Anchor) when training $R_1$, thus passing from the relative rotation to the more standard absolute rotation. In both cases, the performance significantly drops compared to our full method, but still outperforms the multi-binary classifier of STA.
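The AUC-ROC evaluation can be made concrete with a small rank-based implementation. The normality scores and known/unknown labels below are synthetic stand-ins, not the actual Office-31 values:

```python
import numpy as np

def auc_roc(scores, is_known):
    """AUC-ROC of a normality score for separating known (label 1)
    from unknown (label 0) target samples, via the rank statistic."""
    scores = np.asarray(scores, float)
    is_known = np.asarray(is_known, bool)
    pos, neg = scores[is_known], scores[~is_known]
    # Probability that a random known sample scores higher than a random
    # unknown one (ties count 0.5) -- equal to the area under the ROC curve.
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

# Synthetic normality scores: known samples tend to score higher.
rng = np.random.default_rng(0)
known = rng.normal(0.7, 0.10, 500)
unknown = rng.normal(0.4, 0.15, 500)
scores = np.concatenate([known, unknown])
labels = np.concatenate([np.ones(500), np.zeros(500)])
print(round(float(auc_roc(scores, labels)), 3))  # close to 1 for well-separated scores
```

A perfect separator scores 1.0, a random one 0.5, which is why the gap between $91.5$ and $79.9$ is meaningful.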
\begin{table}[t!]
\centering
\caption{Ablation Analysis on Stage I and Stage II}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{l@{~} c c c c c c c}
\hline
\multicolumn{8}{c}{\textbf{Ablation Study}}\\
\hline
\multicolumn{1}{c}{\multirow{1}{*}{\textbf{STAGE I} (AUC-ROC)}} & \multicolumn{1}{c|}{A $\rightarrow$ W }
& \multicolumn{1}{c|}{A $\rightarrow$ D }
& \multicolumn{1}{c|}{D $\rightarrow$ W }
& \multicolumn{1}{c|}{W $\rightarrow$ D}
& \multicolumn{1}{c|}{D $\rightarrow$ A }
& \multicolumn{1}{c|}{W $\rightarrow$ A }
& \textbf{Avg.} \\
\hline
\textbf{ROS} & \multicolumn{1}{c|}{90.1} & \multicolumn{1}{c|}{88.1} & \multicolumn{1}{c|}{99.4} & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{87.5} & \multicolumn{1}{c|}{83.8} &\textbf{91.5}\\
Multi-Binary (from STA \cite{liu2019separate}) & \multicolumn{1}{c|}{83.2} & \multicolumn{1}{c|}{84.1} & \multicolumn{1}{c|}{86.8} & \multicolumn{1}{c|}{72.0} & \multicolumn{1}{c|}{75.7} & \multicolumn{1}{c|}{78.3} & 79.9\\
ROS - No Center loss & \multicolumn{1}{c|}{88.8} & \multicolumn{1}{c|}{83.2} & \multicolumn{1}{c|}{98.8} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{84.7} & \multicolumn{1}{c|}{84.5} & 89.9\\
ROS - No Anchor & \multicolumn{1}{c|}{84.5} & \multicolumn{1}{c|}{84.9} & \multicolumn{1}{c|}{99.1} & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{87.6} & \multicolumn{1}{c|}{86.2} & 90.4\\
ROS - No Rot. Score & \multicolumn{1}{c|}{86.3} & \multicolumn{1}{c|}{82.7} & \multicolumn{1}{c|}{99.5} & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{86.3} & \multicolumn{1}{c|}{82.9} & 89.6\\
ROS - No Ent. Score & \multicolumn{1}{c|}{80.7} & \multicolumn{1}{c|}{78.7} & \multicolumn{1}{c|}{99.7} & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{86.6} & \multicolumn{1}{c|}{84.4} & 88.3\\
ROS - No Center loss, No Anchor & \multicolumn{1}{c|}{76.5} & \multicolumn{1}{c|}{79.1} & \multicolumn{1}{c|}{98.3} & \multicolumn{1}{c|}{99.7} & \multicolumn{1}{c|}{85.2} & \multicolumn{1}{c|}{83.5} & 87.1\\
ROS - No Rot. Score, No Anchor & \multicolumn{1}{c|}{83.9} & \multicolumn{1}{c|}{84.6} & \multicolumn{1}{c|}{99.4} & \multicolumn{1}{c|}{99.9} & \multicolumn{1}{c|}{84.7} & \multicolumn{1}{c|}{84.9} & 89.6\\
ROS - No Ent. Score, No Anchor & \multicolumn{1}{c|}{80.1} & \multicolumn{1}{c|}{81.0} & \multicolumn{1}{c|}{99.5} & \multicolumn{1}{c|}{99.7} & \multicolumn{1}{c|}{84.3} & \multicolumn{1}{c|}{83.3} & 87.9\\
ROS - No Rot. Score, No Center loss & \multicolumn{1}{c|}{80.9} & \multicolumn{1}{c|}{81.6} & \multicolumn{1}{c|}{98.9} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{85.6} & \multicolumn{1}{c|}{83.3} & 88.3\\
ROS - No Ent. Score, No Center loss & \multicolumn{1}{c|}{76.4} & \multicolumn{1}{c|}{79.8} & \multicolumn{1}{c|}{99.0} & \multicolumn{1}{c|}{98.3} & \multicolumn{1}{c|}{84.4} & \multicolumn{1}{c|}{84.3} & 87.0\\
ROS - No Ent. Score, No Center loss, No Anchor & \multicolumn{1}{c|}{78.6} & \multicolumn{1}{c|}{80.4} & \multicolumn{1}{c|}{99.0} & \multicolumn{1}{c|}{98.9} & \multicolumn{1}{c|}{86.2} & \multicolumn{1}{c|}{83.2} & 87.7\\
ROS - No Rot. Score, No Center loss, No Anchor & \multicolumn{1}{c|}{78.7} & \multicolumn{1}{c|}{82.2} & \multicolumn{1}{c|}{98.3} & \multicolumn{1}{c|}{99.8} & \multicolumn{1}{c|}{85.0} & \multicolumn{1}{c|}{82.6} & 87.8\\
\hline\hline
\multicolumn{1}{c}{\multirow{1}{*}{\textbf{STAGE II} (HOS)}} & \multicolumn{1}{c|}{A $\rightarrow$ W } & \multicolumn{1}{c|}{A $\rightarrow$ D } & \multicolumn{1}{c|}{D $\rightarrow$ W } & \multicolumn{1}{c|}{W $\rightarrow$ D} & \multicolumn{1}{c|}{D $\rightarrow$ A } & \multicolumn{1}{c|}{W $\rightarrow$ A } & \multicolumn{1}{c}{\textbf{Avg.}}
\\
\hline
\textbf{ROS} & \multicolumn{1}{c|}{82.1} & \multicolumn{1}{c|}{82.4} & \multicolumn{1}{c|}{96.0} & \multicolumn{1}{c|}{99.7} & \multicolumn{1}{c|}{77.9} & \multicolumn{1}{c|}{77.2} & \textbf{85.9} \\
ROS Stage I - GRL \cite{ganin2016domain} Stage II & \multicolumn{1}{c|}{83.5} & \multicolumn{1}{c|}{80.9} & \multicolumn{1}{c|}{97.1} & \multicolumn{1}{c|}{99.4} & \multicolumn{1}{c|}{77.3} & \multicolumn{1}{c|}{72.6} & 85.1\\
ROS Stage I - No Anchor in Stage II & \multicolumn{1}{c|}{80.0} & \multicolumn{1}{c|}{82.3} & \multicolumn{1}{c|}{94.5} & \multicolumn{1}{c|}{99.2} & \multicolumn{1}{c|}{76.9} & \multicolumn{1}{c|}{76.6} & 84.9\\
ROS Stage I - No Anchor, No Entropy in Stage II & \multicolumn{1}{c|}{80.1} & \multicolumn{1}{c|}{84.4} & \multicolumn{1}{c|}{97.0} & \multicolumn{1}{c|}{99.2} & \multicolumn{1}{c|}{76.5} & \multicolumn{1}{c|}{72.9} & 85.0\\
\hline
\end{tabular}
}
\label{tab:ablationstep1}
\end{table}
\paragraph{Why is the normality score defined the way it is?}
As defined in Equation (\ref{eq:normalityscore}), our normality score is a function of the rotation score and entropy score. The rotation score is based on the ability of $R_1$ to predict the rotation of the target samples, while the entropy score is based on the confidence of such predictions. Table~\ref{tab:ablationstep1} shows the results of Stage I when alternatively discarding either the rotation score (No Rot. Score) or the information of the entropy score (No Ent. Score). In both cases the AUC-ROC significantly decreases compared to the full version, justifying our choice.
\paragraph{Is rotation recognition effective for domain alignment in OSDA?}
While rotation classification has already been used for CSDA~\cite{xu2019self-supervised}, its application in OSDA, where the shared target distribution could be noisy (\emph{i.e.} contain unknown samples), has not been studied. On the other hand, GRL~\cite{ganin2016domain} is used, under different forms, by all existing OSDA methods. We compare rotation recognition and GRL in this context by evaluating the performance of our Stage II when replacing $R_2$ with a domain discriminator. Table~\ref{tab:ablationstep1} shows that rotation recognition performs on par with GRL, if not slightly better. Moreover, we also evaluate the role of the relative rotation in Stage II: the results in the last row of Table~\ref{tab:ablationstep1} confirm that it improves over the standard absolute rotation (No Anchor in Stage II) even when the rotation classifier is used as a cross-domain adaptation strategy. Finally, the cosine distance between the source and the target domain without adaptation in Stage II ($0.188$) and with our full method ($0.109$) confirms that rotation recognition is indeed helpful in reducing the domain gap.
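As a sketch of how such a cosine distance can be measured, the snippet below compares the mean feature vectors of two domains; the random features are placeholders for actual network activations:

```python
import numpy as np

def domain_cosine_distance(source_feats, target_feats):
    """Cosine distance between the mean source and mean target feature,
    a coarse proxy for the residual domain gap after alignment."""
    mu_s = np.asarray(source_feats, float).mean(axis=0)
    mu_t = np.asarray(target_feats, float).mean(axis=0)
    cos = mu_s @ mu_t / (np.linalg.norm(mu_s) * np.linalg.norm(mu_t))
    return 1.0 - cos

rng = np.random.default_rng(1)
base = rng.normal(size=(1, 256))
source = base + 0.05 * rng.normal(size=(200, 256))
aligned = base + 0.06 * rng.normal(size=(200, 256))        # small residual gap
shifted = base + 0.8 + 0.06 * rng.normal(size=(200, 256))  # clear domain shift
print(domain_cosine_distance(source, aligned)
      < domain_cosine_distance(source, shifted))  # True: alignment shrinks the distance
```

A smaller value (as with $0.109$ vs $0.188$ above) indicates that the two domains occupy closer regions of the feature space.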
\begin{figure}[t!]
\centering
\includegraphics[width=0.28\linewidth]{img/os.pdf}
\includegraphics[width=0.28\linewidth]{img/UNK.pdf}
\includegraphics[width=0.28\linewidth]{img/HOS.pdf}
\caption{Accuracy (\%) averaged over the three openness configurations.}
\label{fig:decreaseknownclass}
\end{figure}
\paragraph{Is our method effective on problems with a high degree of openness?}
The standard open set setting adopted so far presents a relatively balanced number of shared and private target classes, with openness close to $0.5$. Specifically, it is $\mathbb{O}=1-\frac{10}{21}=0.52$ for Office-31 and $\mathbb{O}=1-\frac{25}{65}=0.62$ for Office-Home. In real-world problems, we can expect the number of unknown target classes to largely exceed the number of known classes, with openness approaching $1$. We investigate this setting using Office-Home and, starting from the classes sorted with ID from 0 to 64 in alphabetic order, we define the following settings with increasing openness:
\textbf{25} known classes $\mathbb{O}=0.62$, ID:\{0-24, 25-49, 40-64\},
\textbf{10} known classes $\mathbb{O}=0.85$, ID:\{0-9, 10-19, 20-29\},
\textbf{5} known classes $\mathbb{O}=0.92$, ID:\{0-4, 5-9, 10-14\}.
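The openness values above follow directly from the definition $\mathbb{O}=1-\frac{\text{known}}{\text{total}}$; a minimal check (class counts taken from the text):

```python
def openness(num_shared, num_target):
    """O = 1 - (shared classes) / (total target classes):
    the fraction of target classes that are unknown."""
    return 1 - num_shared / num_target

print(round(openness(10, 21), 2))  # Office-31 standard setting: 0.52
print(round(openness(25, 65), 2))  # Office-Home, 25 known classes: 0.62
print(round(openness(10, 65), 2))  # Office-Home, 10 known classes: 0.85
print(round(openness(5, 65), 2))   # Office-Home,  5 known classes: 0.92
```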
Figure~\ref{fig:decreaseknownclass} shows that the performance of our best competitors, STA and OSBP, deteriorates with larger $\mathbb{O}$ due to their inability to recognize the unknown samples. On the other hand, ROS maintains a consistent performance.
\section{Discussion and conclusions}
\label{sec:conclusions}
In this paper, we present ROS: a novel method that tackles OSDA by using the self-supervised task of predicting image rotation. We show that, with simple variations of the rotation prediction task, we can first separate the target samples into known and unknown, and then align the target samples predicted as known with the source samples. Additionally, we propose HOS: a new OSDA metric defined as the harmonic mean between the accuracy of recognizing the known classes and rejecting the unknown samples. HOS overcomes the drawbacks of the current metric OS where the contribution of the unknown classes vanishes with increasing number of known classes.
We evaluate the performance of ROS
on the standard Office-31 and Office-Home benchmarks, showing that it
outperforms the competing methods.
In addition, when tested on
settings with increasing openness, ROS is the only method that maintains a steady performance. HOS reveals to be crucial in this evaluation to correctly assess the performance of the methods on both known and unknown samples.
Finally, the failure in reproducing the reported results of existing methods exposes an important issue in OSDA that echoes the current reproducibility crisis in machine learning. We hope that our contributions can help lay a more solid foundation for the field.\\
\noindent\textbf{Acknowledgements} This work was partially funded by the ERC grant 637076 RoboExNovo (SB), by the H2020 ACROSSING project grant 676157 (MRL) and took advantage of the NVIDIA GPU Academic Hardware Grant (TT).
\clearpage
\section*{Appendix}
\section{Selected Work on Intonational Meaning}
Introductions to speech research and acoustic phonetics can be found
in \cite{Ladefoged62,Denes73,Fry79}. For survey-level introductions to
studies of English intonation, see
\cite{Halliday67b,Crystal69,Crystal75,Cruttenden86,Couper-Kuhlen86}.
\cite{Ladd80,Bolinger86,Bolinger89} also represent good introductions
to studies of intonational meaning. Works oriented toward particular
aspects of intonation theory and models of intonation, also primarily
for English are
\cite{Bolinger51,Liberman75,Garding83,Hirst83,Pierrehumbert80,Vaissiere83,Liberman84,Ladd88,tHart90}.
\nocite{Ladd83a,Ladd83b,Ladd83c,Ladd86,Pike45,O'Connor61}
\nocite{Buxton83,Edwards88}
An idea of some of the topics and methods of research on intonational
representation and intonational meaning can
be gained from \cite{Cutler83a,Gibbon84}.
\nocite{Duncan77}
The meaning of particular intonational contours has been studied by
\cite{Liberman74,Sag75,Ladd77,Ladd78,Bing79b,Ladd80,Bouton82,Ward85b}.
Attempts to determine the relative contribution of intonational
contours and other intonational features to utterance interpretation
include \cite{Ladd85,Pierrehumbert90,Hirschberg91b}.
For intonational studies of adult intonation in speech directed to
children, see the overview in
\cite{Fernald91b}. For experimental work on infants' perception of
phrasing see \cite{Hirsh-Pasek85,Ratner86}, and for children's
turn-taking devices, see
\cite{Ervin-Tripp79}. \cite{Morgan87} studies prosodic and
morphological cues to language acquisition.
\nocite{Fernald91a}
There is a considerable literature on the relationship between
intonational phrasing and syntactic phenomena. General and
theoretical work includes \cite{Downing70,Bresnan71,Selkirk84a,Cooper80}.
Empirical work on acoustic correlates of intonation boundaries
includes experimental work (production and perception) such as
\cite{Grosjean79,O'Malley73,Lehiste73,Klatt75a,Lehiste76,Cooper77,Streeter78,Wales79,Lehiste80a,Gee83,Umeda82,Beach91}.
\nocite{Martin70,Grosjean87}
Corpus-based research includes \cite{Quirk64,Altenberg87}. Work in
computational linguistics proposes parsing strategies including
prosodic components \cite{Marcus85,Marcus90,Steedman90}.
Text-to-speech applications inspire some of the work on intonational
boundary predictions \cite{Gee83,Altenberg87,Bachenko90}. Recognition
applications have motivated recent work
\cite{Ostendorf90,Wang92}.
Also see \cite{Wilkenfeld81} for research on prosody and orthography.
\nocite{Silva-Corvalan83,Bruce90}
\nocite{Taft84,O'Shaughnessy89}
\nocite{Wang91a}
\nocite{Misono90,Sugito90}
Early debates on intonational prominence (stress or pitch accent) are
summarized in
\cite{Bolinger58,Crystal69,Liberman77,Ladd80,Bolinger86}. More recent
contributions include \cite{Beckman86c,Pierrehumbert87b}. Constraints
on sentence (nuclear) stress are discussed in
\cite{Cutler77b,Erteschik83,Schmerling76,Schmerling74,Bardovi83a}.
Despite Bolinger's seminal article on the unpredictability of accent
\cite{Bolinger72a}, attempts to predict accent from other features of
the uttered text include \cite{Altenberg87,Hirschberg90}. A number of
authors have examined the relationship between accent and various
characterizations of information status: Work on the focal domains of
accent and the representation and interpretation of intonational focus
and presupposition includes
\cite{Lakoff71,Schmerling71,Jackendoff72,Ball77,Wilson79,Enkvist79,Gussenhoven83a,Culicover83,Rooth85,Rochemont90,Rooth91,Horne85,Horne87,Baart87,Dirksen92,Zacharski92}.
Topic/comment, given/new, theme/rheme distinctions are discussed with
respect to accent by
\cite{Schmerling75b,Bardovi83b,GBrown83a,Gundel78,Lehman77,Fuchs80,Chafe76,Nooteboom82,Fuchs84,Terken84,Terken85,Terken87,Fowler87,Horne91a,Horne91b}.
\nocite{Allerton79,Kruyt85}
And contrastive stress is examined by
\cite{Bolinger61,Harries-Delisle78,Couper-Kuhlen84}.
Others have looked at the interpretation of accent with particular
attention to anaphora
\cite{Gleitman61,Akmajian70,Williams80,Lujan85,Hirschberg91a}.
See \cite{Ladd87a,Sproat90} for discussions of the phrasing and
accenting of complex nominals.
\nocite{Cantrall69,Cantrall75}
\nocite{Hirschberg90a}
\nocite{Wilson79}
\nocite{Pierrehumbert89}
\nocite{Liberman77}
For general discussion of the intonational characteristics of longer
discourses,
see \cite{Brazil80,GBrown80,GBrown83b}. Halliday \cite{Halliday76}
also has insightful comments on intonation and discourse cohesion.
For work on intonation and discourse structure, see
\cite{Yule80,Silverman86,Avesani88,Ayers92,Hirschberg92b,Grosz92}, and see
\cite{Kumpf87} for discussion of
pitch phenomena and stories. \cite{Hirschberg87a,Litman90} investigate
the intonation of cue phrases.
\cite{Butterworth75,Butterworth77,Schegloff79b} investigate
intonational characteristics of speech-only communication channels.
\nocite{Goodwin81}
\nocite{Kutik83}
\nocite{Rees76}
\nocite{Hirschberg86} \nocite{Gazdar80b}
\nocite{Gumperz77}
A good overview of work in speech synthesis is \cite{Klatt87}.
Current work in the area can be sampled in \cite{Autrans90}.
\nocite{Olive85,Anderson84,JHouse90a}
Some experimental message-to-speech systems are described in
\cite{Young79,LWitten77,Danlos86,Davis88,JHouse90b}.
A good introduction to past and recent work on speech recognition can
be found in \cite{Waibel90a}. More specialized work on the use of
prosody in recognition includes
\cite{Lea75,Lea79,Pierrehumbert83,Waibel79,Ljolje86,Ljolje87a,Waibel88,DHouse89,Silverman92a}.
\nocite{Pierrehumbert81,Hirschberg89}
\nocite{Silverman87}
\nocite{Hemphill90,Price88}
\nocite{Talkin89} \nocite{Nash80} \nocite{Carlson83} \nocite{Cutler83b}
\nocite{Prince84} \nocite{Lehiste80b} \nocite{DHouse90a}
\nocite{Roe91}
\nocite{Cutler77a}
\nocite{Scherer84}
\nocite{Taylor87}
\nocite{Bolinger82}
\nocite{Oakeshott84}
\nocite{Cruz-Ferreira83}
\nocite{Stockwell71}
\nocite{Beckman86a}
\nocite{Beckman86b}
\nocite{Fry58}
\section{Introduction}
Di-Higgs production will continue to be one of the most important physics targets at the Large Hadron Collider (LHC) and beyond, since its observation leads to a measurement of the tri-Higgs coupling and will provide a test of whether it matches the Standard Model (SM) prediction~\cite{Glover:1987nx,Eboli:1987dy,Dawson:1998py,Baur:2002qd,Baur:2003gp,Shao:2013bz,Grigo:2013rya,deFlorian:2013jea,Degrassi:2016vss,Borowka:2016ypz,Ferrera:2016prr}.
Since its production in the SM is suppressed by destructive interference with the top-quark box-diagram contribution, a sizable di-Higgs production rate directly implies a new physics signature~\cite{Baglio:2012np}.
It is important to examine in what kinds of models the di-Higgs signal is enhanced.
Indeed, the enhancement has been pointed out in the models with two Higgs doublets~\cite{Craig:2013hca,Baglio:2014nea,Aad:2014yja,Hespel:2014sla,Barger:2014qva,Lu:2015qqa,Dorsch:2016tab,Kling:2016opi,Bian:2016awe}, type-I\hspace{-.1em}I seesaw~\cite{Han:2015sca},
light colored scalars~\cite{Kribs:2012kz},
heavy quarks~\cite{Dawson:2012mk},
effective operators~\cite{Pierce:2006dh,Kanemura:2008ub,Dolan:2012rv,Nishiwaki:2013cma,Chen:2014xra,Liu:2014rba,Slawinska:2014vpa,Goertz:2014qta,Azatov:2015oxa,Lu:2015jza,Carvalho:2015ttv,deFlorian:2016uhr,Gorbahn:2016uoy,Carvalho:2016rys,Cao:2016zob},
dilaton~\cite{Dolan:2012ac},
strongly interacting light Higgs and minimal composite Higgs~\cite{Contino:2010mh,Grober:2010yv,Contino:2012xk,Grober:2016wmf},
little Higgs~\cite{Liu:2004pv,Dib:2005re,Wang:2007zx},
twin Higgs~\cite{Craig:2015pha},
Higgs portal interactions~\cite{Dolan:2012ac,Chen:2014ask,Robens:2015gla,Martin-Lozano:2015dja,Falkowski:2015iwa,Buttazzo:2015bka,Dawson:2015haa,Robens:2016xkb,Dupuis:2016fda,Banerjee:2016nzb},
supersymmetric partners~\cite{Plehn:1996wb,Djouadi:1999rca,Baur:2003gp,Cao:2013si,Han:2013sga,Bhattacherjee:2014bca,Cao:2014kya,Djouadi:2015jea,Batell:2015zla,Wu:2015nba,Batell:2015koa,Costa:2015llh,Agostini:2016vze,Hammad:2016trm,Biswas:2016ffy},
and Kaluza-Klein graviton~\cite{Khachatryan:2016sey}.
Other related issues are discussed in Refs.~\cite{Asakawa:2010xj,Papaefstathiou:2012qe,Klute:2013cx,Goertz:2013kp,Barr:2013tda,Chen:2013emb,Papaefstathiou:2015iba,vonBuddenbrock:2015ema,Cao:2015oaa,Cao:2015oxx,Huang:2015izx,Behr:2015oqq,Cao:2016udb,Kang:2016wqi,vonBuddenbrock:2016rmr,Fichet:2016xvs,Fichet:2016xpw,Huang:2017jws}.
The triple Higgs productions at the LHC and the future circular collider (FCC) are also discussed in Refs.~\cite{Plehn:2005nk,Maltoni:2014eza,Papaefstathiou:2015paa}.
In this paper, we study a class of models in which the di-Higgs process is enhanced by the resonant production of an extra neutral scalar particle. Its production is radiatively induced by the gluon fusion via a loop of new colored particles. Its tree-level decay is due to the mixing with the SM Higgs boson. As concrete examples of new colored particles that can decay into SM ones in order not to spoil cosmology, we examine the top/bottom partner, as in the dilaton model, and the colored scalars that are triplet (leptoquark), sextet (diquark), and octet (coloron).
We are also motivated by the anomalous result reported by the ATLAS Collaboration: the 2.4\,$\sigma$ excess in the search of di-Higgs signal using $b\bar b$ and $\gamma\gamma$ final states with the $m_{(b\bar{b})(\gamma\gamma)} (=m_{hh})$ invariant mass at around 300\,\text{GeV}~\cite{Aad:2014yja}.
The excess in $m_{(\gamma\gamma)}$ distribution is right at the SM Higgs mass on top of both the lower and higher mass-side-band background events.
The requested signal cross section is roughly 90 times larger than what is expected in the SM.
Thus the enhancement, if from new physics, should be dramatically generated via e.g.\ a new resonance at $300\,\text{GeV}$.
This paper is organized as follows. In Sec.~\ref{model section}, we present the model.
In Sec.~\ref{signal section}, we show how the di-Higgs event is enhanced.
In Sec.~\ref{constraint}, we examine the constraints on the model from the latest results from the ongoing LHC experiment.
In Sec.~\ref{2.4sigma}, we present a possible explanation for the 2.4\,$\sigma$ excess.
In Sec.~\ref{summary section}, we summarize our result and provide discussion.
In Appendix~\ref{general model}, we show how the effective interaction between the new scalar and Higgs is obtained from the original Lagrangian.
In Appendix~\ref{Z2 model section}, we give a parallel discussion for the $Z_2$ model.
In Appendix~\ref{colored scalar section}, we spell out the possible Yukawa interactions between the colored scalar and the SM fields.
\section{Model}\label{model section}
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.35\textwidth]{fig_feynman.pdf}
\caption{Di-Higgs ($hh$) production mediated by $ s$.}
\label{Di-Higgs figure}
\end{center}
\end{figure}
We consider a class of models in which the di-Higgs ($hh$) production is enhanced by the schematic diagram depicted in Fig.~\ref{Di-Higgs figure}, where
$s$ denotes the new neutral scalar and
the blob generically represents an effective coupling of $s$ to the pair of gluons via the loop of the extra heavy colored particles.
We assume that $h$ and $s$ are lighter and heavier mass eigenstates obtained from the mixing of the neutral component of the $SU(2)_L$-doublet $H$ and a real singlet $S$ that couples to the extra colored particles:
\al{
H^0 &= {v+h\cos\theta+ s\sin\theta\over\sqrt{2}},
\label{H0 written}\\
S &= f-h\sin\theta+ s\cos\theta,
\label{S written}
}
where $\theta$ is the mixing angle and $v$ and $f$ denote the vacuum expectation values (VEVs):
\al{
\left\langle H^0\right\rangle
&= {v\over\sqrt{2}}, &
\left\langle S\right\rangle
&= f,
\label{VEVs}
}
with $v\simeq246\,\text{GeV}$ and $m_h=125\,\text{GeV}$.
We phenomenologically parametrize the effective $shh$ interaction as
\al{
\Delta\mc L
&= -{\mu_\text{eff}\sin\theta\over2} s h^2,
\label{shh coupling}
}
where $\mu_\text{eff}$ is a real parameter of mass dimension unity, whose explicit form in terms of original Lagrangian parameters is given in Appendix~\ref{general model}.
We note that the parameter $\mu_\text{eff}$ is a purely phenomenological interface between the experiment and the underlying theory in order to allow a simpler phenomenological expression for the tree-level branching ratios; see Sec.~\ref{tree decay section}.
We note also that the $\theta$-dependent $\mu_\text{eff}\fn{\theta}$ goes to a $\theta$-independent constant in the small mixing limit $\theta^2\ll1$; see Appendix~\ref{general model} for detailed discussion.
In Sec.~\ref{constraint}, it will indeed turn out that only the small, but non-zero, mixing region is allowed in order to be consistent with the signal-strength data of the 125\,GeV Higgs at the LHC.
The extra colored particle that runs in the loop, which has been generically represented by the blob in Fig.~\ref{Di-Higgs figure}, can be anything that couples to~$S$.
It should be sufficiently heavy to evade the LHC direct searches, and it should decay into SM particles in order not to affect the cosmological evolution.
In this paper, we consider the following two possibilities: a Dirac fermion that mixes with either top or bottom quark and a scalar that decays via a new Yukawa interaction with the SM fermions. For simplicity, we assume that the new colored particles are singlet under the SU(2)$_L$ in both cases.
In Table~\ref{table of fields}, we list the colored particles of our consideration. The higher rank representations of $SU(3)_C$ for the colored scalars are terminated at $\bs 8$ in order to avoid Yukawa operators of too high a dimension.\footnote{
The ultraviolet completion of the higher dimensional operator requires other new colored particles.
We assume that their contributions are subdominant.
E.g.\ they do not contribute to the effective $ggs$ vertex if they do not have a direct coupling to $S$.
}
The triplet $\phi_{\bs 3}$ is nothing but the leptoquark. It is worth noting that the leptoquark with $Y=-1/3$ may account for $R_{D^{(*)}}$, $R_K$, and $\paren{g-2}_\mu$ anomalies simultaneously~\cite{Bauer:2015knc}.
\begin{table}[tp]
\centering
\begin{tabular}{|c|ccc|cccc|c}
\hline
&\multicolumn{3}{c|}{Dirac spinor}&\multicolumn{4}{c|}{complex scalar}\\
field & $T$ & $B$ & \dots & $\phi_{\bs 3}$ & $\phi_{\bs 6}$ & $\phi_{\bs 8}$ &\dots \\
\hline
$SU(3)_C$ & \bs 3 & \bs 3 & \dots & \bs 3 & \bs 6 & \bs 8 &\dots\\
$Q$ & $2\ov3$ & $-{1\ov3}$ & \dots & $-{1\ov3}$, $-{4\ov3}$ & ${1\ov3}$, $-{2\ov3}$, ${4\ov3}$ & 0, $-1$&\dots\\
$\Delta b_g$ & ${2\ov3}$ & ${2\ov3}$ & \dots & ${1\ov6}$ & ${5\ov6}$ & $1$ & \dots\\
$\Delta b_\gamma$
& ${16\ov9}$ & ${4\ov9}$ & \dots &
${1\ov9}$, $16\ov9$ & $2\ov9$, $8\ov9$, $32\ov9$& 0, ${8\ov3}$&\dots\\
$\eta$ & & $y_FN_F{v\over M_F}$ & & \multicolumn{4}{c|}{$\kappa_\phi N_\phi{fv\over M_\phi^2}$}\\
\hline
\end{tabular}
\caption{Colored particles that may run in the loop represented by the blob in Fig.~\ref{Di-Higgs figure}, and their possible parameters. We assume that they are $SU(2)_L$ singlets. The electromagnetic charge $Q$ is fixed to allow a mixing with either top or bottom quark for the Dirac spinor and a Yukawa coupling with a pair of SM fermions for the complex scalar; see Appendix~\ref{colored scalar section}. In the last row, $F$ stands for $T$ or $B$.}
\label{table of fields}
\end{table}
\subsection{Tree-level decay}\label{tree decay section}
The scalar $s$ may dominantly decay into di-Higgs at the tree level due to the coupling~\eqref{shh coupling}:
\al{
\Gamma\fn{s\to hh}
&
= {\mu_\text{eff}^2\over32\pi m_s}\sqrt{1-{4m_h^2\over m_s^2}}\sin^2\theta.
\label{s_to_hh}
}
For $m_s>2m_Z$, the partial decay rate into the pair of vector bosons $s\to VV$ with $V=W,Z$ are
\al{
\Gamma\fn{s\to VV}
&= {m_s^3\over32\pi v^2}\delta_V\sqrt{1-4x_V}\paren{1-4x_V+12x_V^2}\,\sin^2\theta,
}
where $\delta_Z=1$, $\delta_W=2$, and $x_V=m_V^2/m_s^2$; see e.g.\ Ref.~\cite{Djouadi:2005gi}.
Similarly for $m_s>2m_t$, the partial decay width into a top-quark pair is
\al{
\Gamma\fn{s\to t\bar t}
&= {N_cm_sm_t^2\over8\pi v^2}\paren{1-{4m_t^2\over m_s^2}}^{3/2}\sin^2\theta.
}
Note that the tree-level branching ratios become independent of $\theta$ thanks to the parametrization~\eqref{shh coupling}.
The total decay width $\Gamma_\text{total}$ is the sum of the above rates at the tree level.
In the small mixing limit $\theta^2\ll1$, the tree-level decay width becomes small and the loop level decay, which is described in Sec.~\ref{loop decay}, can be comparable to it.
The diphoton constraint is severe in this parameter region, as will be discussed in Sec.~\ref{constraint}.
In Fig.~\ref{BR figure}, we plot the tree-level branching ratios in the $\mu_\text{eff}$ vs $m_s$ plane. Note that the $\theta$-dependence drops out of the tree-level branching ratios when we use $\mu_\text{eff}$ as a phenomenological input parameter as in Eq.~\eqref{shh coupling} because then all the decay channels have the same $\theta$ dependence $\propto\sin^2\theta$.
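This $\theta$-independence can be verified numerically. Below is a minimal Python sketch of the tree-level partial widths above; the choices $m_s=400\,\text{GeV}$ and $\mu_\text{eff}=300\,\text{GeV}$ are purely illustrative.

```python
import math

V = 246.0                                    # Higgs VEV [GeV]
MH, MZ, MW, MT = 125.0, 91.19, 80.38, 173.0  # h, Z, W, top masses [GeV]

def width_hh(ms, mu_eff, s2):
    # Gamma(s -> hh) = mu_eff^2 sin^2(theta) / (32 pi m_s) * sqrt(1 - 4 m_h^2/m_s^2)
    return mu_eff**2 * s2 / (32 * math.pi * ms) * math.sqrt(1 - 4 * MH**2 / ms**2)

def width_vv(ms, mv, delta, s2):
    # Gamma(s -> VV), delta_Z = 1, delta_W = 2
    x = mv**2 / ms**2
    return (ms**3 / (32 * math.pi * V**2) * delta
            * math.sqrt(1 - 4 * x) * (1 - 4 * x + 12 * x**2) * s2)

def width_tt(ms, s2):
    # Gamma(s -> t tbar), N_c = 3
    return 3 * ms * MT**2 / (8 * math.pi * V**2) * (1 - 4 * MT**2 / ms**2)**1.5 * s2

def branching_ratios(ms, mu_eff, s2):
    widths = {'hh': width_hh(ms, mu_eff, s2),
              'ZZ': width_vv(ms, MZ, 1, s2),
              'WW': width_vv(ms, MW, 2, s2)}
    if ms > 2 * MT:
        widths['tt'] = width_tt(ms, s2)
    total = sum(widths.values())
    return {k: w / total for k, w in widths.items()}

# sin^2(theta) multiplies every channel, so it cancels in the ratios:
br1 = branching_ratios(400.0, 300.0, s2=0.01)
br2 = branching_ratios(400.0, 300.0, s2=1.0)
print(all(math.isclose(br1[k], br2[k]) for k in br1))  # True
```

Every partial width carries the same overall $\sin^2\theta$, so the branching ratios depend only on $m_s$ and $\mu_\text{eff}$, as stated above.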
\begin{figure}[tp]
\centering
\includegraphics[width=0.4\textwidth]{fig_BRshh.pdf}
\includegraphics[width=0.4\textwidth]{fig_BRstt.pdf}\smallskip\\
\includegraphics[width=0.4\textwidth]{fig_BRsZZ.pdf}
\includegraphics[width=0.4\textwidth]{fig_BRsWW.pdf}
\caption{Tree-level branching ratio for the decay of $s$ in the $\mu_\text{eff}$ vs $m_s$ plane.}\label{BR figure}
\end{figure}
\subsection{Effective coupling to photons and gluons}
We first consider the vector-like top-partner $T$ as the colored particle running in the loop that is represented as the blob in Fig.~\ref{Di-Higgs figure}.
The bottom-partner $B$ can be treated in the same manner, as well as the colored scalars.
The mass of the top partner is given as
\al{
M_T &= m_T+y_T f,
}
where $m_T$ and $y_T$ are the vector-like mass of $T$ and the Yukawa coupling between $T$ and $S$, respectively.
The top-partner $T$ mixes with the SM top quark.
We note that the limit $m_T\to 0$ corresponds to an effective dilaton model.\footnote{
The particular dilaton model in Ref.~\cite{Abe:2012eu} corresponds to the identification of the lighter 125\,GeV scalar to be an $S$-like one, contrary to this paper.
}
Given the kinetic term of gluon that is non-canonically normalized,
\al{
\mathcal L_\text{eff}
&= -{1\over 4g_s^2}G_{\mu\nu}^aG^{a\mu\nu},
}
the effective coupling after integrating out the top and $T$ can be obtained by the replacement $\left\langle S\right\rangle\to S$ and $\left\langle H^0\right\rangle\to H^0$ in the running coupling; see e.g.\ Refs.~\cite{Carena:2012xa,Abe:2012eu}:
\al{
{1\over g_s^2}
&\longrightarrow
{1\over g_s^2}-{2\over\paren{4\pi}^2}\paren{ b_g^\text{top}{h\cos\theta+ s\sin\theta\over v}+\Delta b_g\,y_T{-h\sin\theta+ s\cos\theta\over M_T}},
\label{replacement}
}
where $b_g^\text{top}$ and $\Delta b_g$ are the contributions of top and $T$ to the beta function, respectively.
To use this formula, we need to assume that the new colored particles are slightly heavier than the neutral scalar.
For a Dirac spinor in the fundamental representation, $b_g^\text{top}=\Delta b_g={1\over2}\times{4\over3}={2\ov3}$.
The resultant effective interactions for the canonically normalized gauge fields are
\al{
\mathcal L_\text{eff}^{hgg}
&=
{\alpha_s\over8\pi v}\paren{
b_g^\text{top} \cos\theta - \Delta b_g\eta\sin\theta
}
h\,G_{\mu\nu}^aG^{a\mu\nu},
\label{hgg effective}\\
\mathcal L_\text{eff}^{ s gg}
&=
{\alpha_s\over8\pi v}\paren{
\Delta b_g\eta\cos\theta + b_g^\text{top}\sin\theta
}
s\,G_{\mu\nu}^aG^{a\mu\nu},\\
\mathcal L_\text{eff}^{h\gamma\gamma}
&=
{\alpha\over8\pi v}\paren{
b_\gamma^\text{SM}\cos\theta-\Delta b_\gamma\,\eta\sin\theta
}
hF_{\mu\nu}F^{\mu\nu},\\
\mathcal L_\text{eff}^{ s\gamma\gamma}
&=
{\alpha\over8\pi v}\paren{
\Delta b_\gamma\,\eta\cos\theta+b_\gamma^\text{SM}\sin\theta
}
s F_{\mu\nu}F^{\mu\nu},
\label{s gam gam eff}
}
where $F_{\mu\nu}$ is the (canonically normalized) field strength tensor of the photon, $\alpha_s$ and $\alpha$ denote the chromodynamic and electromagnetic fine structure constants, respectively,
$N_c=3$, $b_\gamma^\text{SM}\simeq-6.5$, and
\al{
\eta
&= y_TN_T{v\over M_T},
\label{EqEta}
}
with $N_T$ being the number of $T$ introduced. The values $\Delta b_g={1\ov2}\times{4\ov3}={2\ov3}$ and $\Delta b_\gamma=N_cQ_T^2\times{4\ov3}={16\ov9}$ are listed in Table~\ref{table of fields}.
The bottom partner $B$ can be treated in exactly the same way. According to Table~\ref{table of fields}, $\Delta b_\gamma$ becomes one fourth of the above.
For the colored scalar $\phi$, its diagonal mass is given as
\al{
M_\phi^2
&= m_\phi^2+{\kappa_\phi\over2} \left\langle S\right\rangle^2,
}
where we have assumed the $Z_2$ symmetry $S\to-S$ for simplicity;
$m_\phi$ is the original diagonal mass in the Lagrangian; and
$\kappa_\phi$ is the quartic coupling between $S$ and $\phi$.\footnote{
A three-point interaction between the neutral and the colored scalar can also be introduced.
If the signs of the three- and four-point couplings are opposite, $\eta$ can be enhanced in some parameter region.
}
The possible values of the electromagnetic charge of $\phi$ are
$Q=-1/3$ and $-4/3$ for the leptoquark $\phi_{\bs 3}$;
$Q=1/3$, $-2/3$, and $4/3$ for the color-sextet $\phi_{\bs 6}$; and
$Q=0$ and $-1$ for the color-octet $\phi_{\bs 8}$; see Appendix~\ref{colored scalar section}.
Correspondingly the values of $\Delta b_g$ are ${1\over2}\times{1\over3}={1\ov6}$, $\paren{{N_c\ov2}+1}\times{1\ov3}={5\ov6}$, and $N_c\times{1\ov3}=1$, and $\Delta b_\gamma$ are $Q^2$, $2Q^2$, and ${8\ov3}Q^2$.
Again the effective interactions are obtained as in Eqs.~\eqref{hgg effective}--\eqref{s gam gam eff} from the replacement~\eqref{replacement} with the substitution $y_T/M_T\to \kappa_\phi f/M_\phi^2$, where $f$ is the VEV of $S$; see Eq.~\eqref{VEVs}.
Note that the expression for $\eta$ is now $\eta=\kappa_\phi N_\phi{fv/M_\phi^2}$, where $N_\phi$ is the number of $\phi$ introduced.
We list all these parameters in Table~\ref{table of fields}.
\subsection{Loop-level decay}\label{loop decay}
No direct coupling to the gauge bosons is allowed for the singlet scalar $S$, and the tree-level decay of $s$ into a pair of gauge bosons proceeds only via the mixing with the SM Higgs boson. Therefore the decays of $s$ to $gg$ and $\gamma\gamma$ are only radiatively generated.
Given the effective operators from the loop of heavy colored particles
\al{
\mc L_\text{eff}
&= -{\alpha_s b_g\over4\pi v} s G^a_{\mu\nu}G^{a\mu\nu}
-{\alpha b_\gamma\over4\pi v} s F_{\mu\nu}F^{\mu\nu},
}
the partial decay widths are
\al{
\Gamma\fn{ s\to gg}
&= \paren{\alpha_s b_g\over4\pi v}^2{2m_ s^3\over\pi}, &
\Gamma\fn{ s\to\gamma\gamma}
&= \paren{\alpha b_\gamma\over4\pi v}^2{m_ s^3\over4\pi},
}
where the factor 8 difference comes from the number of degrees of freedom of gluons in the final state.
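The factor of 8 can be confirmed directly; the short Python sketch below (an editorial cross-check, with an arbitrary mass since it cancels in the ratio) compares the two widths at identical prefactors.

```python
import math

# Ratio of the two partial widths with identical prefactors:
# Gamma(s->gg) ~ 2 m^3/pi vs Gamma(s->aa) ~ m^3/(4 pi),
# so the ratio is (2/pi)/(1/(4 pi)) = 8, the eight gluon color states.
m_s = 1.0  # arbitrary; cancels in the ratio
gamma_gg = 2 * m_s**3 / math.pi
gamma_aa = m_s**3 / (4 * math.pi)
print(round(gamma_gg / gamma_aa, 6))  # -> 8.0
```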
Concretely,
\al{
b_g
&= -{1\over2}\paren{\Delta b_g\,\eta\cos\theta+ b_g^\text{top}\sin\theta}, \label{bg given}\\
b_\gamma
&= -{1\ov2}\paren{\Delta b_\gamma\,\eta\cos\theta+b_\gamma^\text{SM}\sin\theta}.
}
If we go beyond the scope of this paper and allow the particles in the loop to be charged under $SU(2)_L$, then the loop contribution to the decay channels to $Z\gamma$, $ZZ$ and $W^+W^-$ might also become significant; see e.g.\ Ref.~\cite{Kim:2015vba}.
\section{Production of singlet scalar at hadron colliders}\label{signal section}
We calculate the production cross section of $s$ via the gluon fusion with the narrow width approximation:\footnote{
The colored particles running in the blob in Fig.~\ref{Di-Higgs figure} might also have a direct coupling with the quarks in the proton, and possibly change the production cross section of $s$ if it is extremely large. In this paper we assume that this is not the case.
}
\al{
\hat\sigma\fn{gg\to s}
&= {\pi^2\over8m_ s}\Gamma\fn{ s\to gg}\delta\fn{\hat s-m_ s^2}
= \sigma_s m_ s^2\delta\fn{\hat s-m_ s^2},
\label{narrow width limit}
}
where
\al{
\sigma_s
&:= {\pi^2\over8m_ s^3}\Gamma\fn{ s\to gg}
= \paren{\alpha_s b_g\over4\pi v}^2{\pi\over4}
= 36.5\,\text{fb}\times\sqbr{b_g\over-1/3}^2\sqbr{\alpha_s\over0.1}^2.
\label{sigma_s}
}
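The numerical prefactor in Eq.~\eqref{sigma_s} can be cross-checked with a few lines of Python (a sketch assuming $v=246$\,GeV and the standard conversion $1\,\text{GeV}^{-2}\simeq0.3894$\,mb, neither of which is stated explicitly above); it reproduces the quoted value at the few-percent level.

```python
import math

# sigma_s = (alpha_s b_g / (4 pi v))^2 * pi/4 at the reference point
# b_g = -1/3, alpha_s = 0.1; v = 246 GeV and the GeV^-2 -> fb conversion
# are standard inputs assumed here, not taken from the text.
alpha_s, b_g, v = 0.1, -1.0 / 3.0, 246.0   # v in GeV
GEV2_TO_FB = 0.3894e12                     # 1 GeV^-2 = 0.3894 mb = 3.894e11 fb

sigma_s_gev = (alpha_s * b_g / (4 * math.pi * v))**2 * math.pi / 4
sigma_s_fb = sigma_s_gev * GEV2_TO_FB
print(round(sigma_s_fb, 1))  # ~35.5 fb, consistent with the quoted 36.5 fb
```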
Convolving with the gluon parton distribution function~(PDF) of the proton, $g(x,\mu_F)$, we obtain
\al{
\sigma\fn{pp\to s}
&= \sigma_s m_ s^2\int_0^1\text{d} x_1\int_0^1\text{d} x_2\,g\fn{x_1,\mu_F}\,g\fn{x_2,\mu_F}\,\delta\fn{x_1x_2s-m_ s^2}
= \sigma_s\tau{\text{d}\mc L^{gg}\over\text{d}\tau},
\label{singlet production}
}
where $\tau:=m_s^2/s$ and
\al{
{\text{d}\mc L^{gg}\over\text{d}\tau}
&= \int_\tau^1{\text{d} x\over x}g\fn{x,\mu_F}g\fn{\tau/x,\mu_F}
= \int_{\ln\sqrt{\tau}}^{\ln{1\over\sqrt{\tau}}}\text{d} y\,
g\fn{\sqrt{\tau}e^y,\sqrt{\tau s}}\,g\fn{\sqrt{\tau}e^{-y},\sqrt{\tau s}},
}
is the luminosity function, in which the factorization scale $\mu_F$ is taken to be $\mu_F=\sqrt{\tau s}$.\footnote{
Notational abuse of $s$ for the singlet scalar field and for the Mandelstam variable of $pp$ scattering should be understood.
}
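The equality of the two integral representations of the luminosity function (in $x$ and in $y=\ln(x/\sqrt{\tau})$) can be checked numerically with a toy function in place of the gluon PDF; the form $g(x)=(1-x)^5/x$ below is an arbitrary illustration, not a fitted PDF.

```python
import math

# Numerical cross-check of the change of variables x -> y = ln(x/sqrt(tau))
# in the gluon-luminosity integral, using a toy g(x) (assumption for
# illustration only).
def g(x):
    return (1 - x)**5 / x

def lum_x(tau, n=100000):
    # dL/dtau = int_tau^1 (dx/x) g(x) g(tau/x), midpoint rule
    h = (1 - tau) / n
    total = 0.0
    for i in range(n):
        x = tau + (i + 0.5) * h
        total += g(x) * g(tau / x) / x * h
    return total

def lum_y(tau, n=100000):
    # the same integral over y in [ln sqrt(tau), ln(1/sqrt(tau))]
    a = 0.5 * math.log(tau)
    h = -2 * a / n
    s = math.sqrt(tau)
    total = 0.0
    for i in range(n):
        y = a + (i + 0.5) * h
        total += g(s * math.exp(y)) * g(s * math.exp(-y)) * h
    return total

lx, ly = lum_x(0.01), lum_y(0.01)
print(abs(lx - ly) / lx < 1e-4)  # -> True: the two forms agree
```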
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.7\textwidth]{fig_production.pdf}
\end{center}
\caption{
Production cross section $\sigma\fn{pp\to s}$ for $\ab{b_g}=\frac{\Delta b_g}{2}\frac{v}{m_s}$ with $\Delta b_g={2\ov3}$ (top/bottom partner).
The result for other particles can be obtained by the simple scaling $\sigma\fn{pp\to s}\propto\paren{\Delta b_g}^2$;
see Eq.~\eqref{sigma_s} with Eq.~\eqref{bg given} and Table~\ref{table of fields}.
The $K$-factor is not included in this plot.
}\label{prospect}
\end{figure}
Using the leading order CTEQ6L~\cite{Nadolsky:2008zw} PDF, we plot in Fig.~\ref{prospect} the production cross section $\sigma\fn{pp\to s}$ as a function of $m_s$ for a phenomenological benchmark setting $\ab{b_g}={\Delta b_g\ov2}{v\over m_s}$ with $\Delta b_g={2\ov3}$ (top/bottom partner).
Other particles just scale as $\sigma\fn{pp\to s}\propto\paren{\Delta b_g}^2$.
The value $\sqrt{s}=14\,\text{TeV}$ is motivated by the High-Luminosity LHC;
28\,TeV and 33\,TeV by the High-Energy LHC (HE-LHC); and
75\,TeV and 100\,TeV by the Future Circular Collider (FCC)~\cite{FCC,Benedikt:1742294,Ball}.
We see that typically the top/bottom partner models give a cross section $\sigma\fn{pp\to s}\gtrsim1\,\text{fb}$, which could be accessed by a luminosity of $\mc O\fn{\text{ab}^{-1}}$, for the scalar mass $m_s\lesssim1.3\,\text{TeV}$, 2\,TeV, and 4\,TeV at the LHC, HE-LHC, and FCC, respectively.
Several comments are in order:
\begin{itemize}
\item Our setting corresponds to putting $M_T = y_T N_T m_s$ in Eq.~\eqref{EqEta}
in order to reflect the naive scaling of $\eta\sim v/f$ with $f\sim m_s$; recall that we need $M_T\gtrsim m_s$ to justify integrating out the top partner to write down the effective interactions~\eqref{hgg effective}--\eqref{s gam gam eff}.
\item Here we have used the leading order parton distribution function. The higher order corrections may be approximated by multiplying by an overall factor $K$, the so-called $K$-factor, which takes the value $K\simeq1.6$ for the SM Higgs production at the LHC; see e.g.\ Ref.~\cite{Djouadi:2005gi}.
\item \label{SM box} The SM cross section for $pp\to hh$ is of the order of 10\,fb and $10^3\,\text{fb}$ for $\sqrt{s}=8\,\text{TeV}$ and 100\,TeV, respectively~\cite{Baglio:2012np}. We are interested in the on-shell production of $s$, and the non-resonant SM background can be discriminated by kinematical cuts. The detailed study is beyond the scope of this paper and will be presented elsewhere.
\item When we consider the new resonance with a narrow width~\eqref{narrow width limit}, we can neglect the box contribution from the extra colored particles as the box contribution gets a suppression factor\footnote{
In the SM, the $gg\to hh$ cross section takes the following form at the leading order~\cite{Baglio:2012np}:
\als{
\hat\sigma^\text{SM}_\text{LO}\fn{gg\to hh}
&= \int_{\hat t_-}^{\hat t_+}\text{d}\hat t{G_\text{F}^2\alpha_s^2\over256\paren{2\pi}^3}\sqbr{
\ab{{\mu_{hhh}v\over\paren{\hat s-m_h^2}+im_h\Gamma_h}F^\text{SM}_\bigtriangleup+F^\text{SM}_\Box}^2+\ab{G^\text{SM}_\Box}^2
},
}
where $G_\text{F}$ is the Fermi constant; $\mu_{hhh}=3m_h^2/v$ is the $hhh$ coupling in the SM; and $F^\text{SM}_\bigtriangleup$, $F^\text{SM}_\Box$, and $G^\text{SM}_\Box$ are the triangular and box form factors, approaching $F^\text{SM}_\bigtriangleup\to2/3$, $F^\text{SM}_\Box\to-2/3$, and $G^\text{SM}_\Box\to0$ in the large top-quark-mass limit. A large cancellation takes place between $F^\text{SM}_\bigtriangleup$ and $F^\text{SM}_\Box$ as is well known.
For the \emph{on-shell} resonance production of $s$, on the other hand, the triangle contribution from the fermion loops dominates over the box loop contribution: The new triangle contribution for $s$ can be well approximated by replacing the expression for the SM as
\als{
\mu_{hhh}
&\to \mu_{\rm eff}\sin\theta, &
\,m_h
&\to m_s, &
\Gamma_h
&\to \Gamma_s, &
F^\text{SM}_\bigtriangleup
&\to \Delta b_g \eta \cos \theta + b_g^{\rm top} \sin \theta,
}
and the new box contribution of the top partner can be obtained from that of the SM-top quark with the multiplicative factor
\als{
{N_Ty_T^2\sin^2\theta\over y_t^2/2}\,{y_T^2f^2\over M_T^2}.
}
Finally, taking the ratio of the size of the box contribution and the triangle contribution with $\Delta b_g=2/3$ and $\eta=y_TN_Tv/M_T\sim N_Tv/M_T$, $y_T \sim y_t$, and
$m_s \Gamma_s
\sim \mu_\text{eff}^2\sin^2\theta/32\pi$, we get the result in Eq.~\eqref{box suppression}.
}
\begin{align}
{\mu_\text{eff}M_T\ov32\pi v^2}\sin^3\theta
&\sim 10^{-4}\sqbr{\mu_\text{eff}\ov1\,\text{TeV}}\sqbr{M_T\ov1\,\text{TeV}}\sqbr{\sin\theta\ov0.1}^3 \ll 1.
\label{box suppression}
\end{align}
\end{itemize}
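The order-of-magnitude estimate in Eq.~\eqref{box suppression} is easy to verify numerically (a Python sketch; $v=246$\,GeV is an assumed input):

```python
import math

# mu_eff * M_T / (32 pi v^2) * sin^3(theta) at the reference point
# mu_eff = M_T = 1 TeV, sin(theta) = 0.1, with v = 246 GeV (assumed).
v = 246.0                                   # GeV
mu_eff, M_T, sin_theta = 1000.0, 1000.0, 0.1
suppression = mu_eff * M_T / (32 * math.pi * v**2) * sin_theta**3
print(f"{suppression:.1e}")  # -> 1.6e-04, of order 10^-4 as quoted
```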
\section{LHC constraints}\label{constraint}
We examine LHC constraints on the model for various $m_s$.
That is, we examine the constraints from the 125\,GeV Higgs signal strengths, from the $s\to ZZ\to4l$ search, from the $s\to\gamma\gamma$ search, and from the direct searches for the colored particles running in the blob in Fig.~\ref{Di-Higgs figure}.
\subsection{Bound from Higgs signal strength}
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.3\textwidth]{fig_top_partner.pdf}
\includegraphics[width=0.3\textwidth]{fig_bottom_partner.pdf}
\includegraphics[width=0.3\textwidth]{fig_leptoquark1ov3.pdf}\medskip\\
\includegraphics[width=0.3\textwidth]{fig_leptoquark4ov3.pdf}
\includegraphics[width=0.3\textwidth]{fig_6_1ov3.pdf}
\includegraphics[width=0.3\textwidth]{fig_6_2ov3.pdf}\medskip\\
\includegraphics[width=0.3\textwidth]{fig_6_4ov3.pdf}
\includegraphics[width=0.3\textwidth]{fig_8_0.pdf}
\includegraphics[width=0.3\textwidth]{fig_8_1.pdf}
\end{center}
\caption{2$\sigma$-excluded regions from the signal strength of 125\,GeV Higgs are shaded.
The color represents the contribution from each channel; see Fig.~\ref{each} for details.}\label{combined}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.24\textwidth]{fig_ggFgamgam.pdf}
\includegraphics[width=0.24\textwidth]{fig_ggFZZ.pdf}
\includegraphics[width=0.24\textwidth]{fig_ggFWW.pdf}
\includegraphics[width=0.24\textwidth]{fig_ggFtautau.pdf}\\
\includegraphics[width=0.24\textwidth]{fig_VBFgamgam.pdf}
\includegraphics[width=0.24\textwidth]{fig_VBFWW.pdf}
\includegraphics[width=0.24\textwidth]{fig_VBFtautau.pdf}
\vspace{-5mm}
\end{center}
\caption{
The 2$\sigma$-excluded regions from the signal strength of 125\,GeV Higgs.
The top-partner parameters are chosen as an illustration to present the contribution from each channel.
}
\label{each}
\end{figure}
We first examine the bound on $\theta$ and $\eta$ from the Higgs signal strengths in various channels.
The ``partial signal strength'' for the Higgs production becomes
\al{
\mu_\text{ggF}
&= \paren{\cos\theta-{\Delta b_g\over b_g^\text{top}}\eta\sin\theta}^2,&
\mu_\text{VBF}
= \mu_\text{VH}
= \mu_\text{ttH}
&= \cos^2\theta,
}
where ggF, VBF, VH, and ttH denote the gluon fusion, vector-boson fusion, associated production with a vector boson, and associated production with a pair of top quarks, respectively; see e.g.\ Ref.~\cite{Khachatryan:2016vau} for details.
Similarly, the partial signal strength for the Higgs decay is
\al{
\mu_{h\to\gamma\gamma}
&= \paren{\cos\theta-{\Delta b_\gamma\over b_\gamma^\text{SM}}\eta\sin\theta}^2
\paren{\frac{\Gamma_h}{\Gamma_h^\text{SM}}}^{-1},\\
\mu_{h\to gg}
&= \mu_\text{ggF} \paren{\frac{\Gamma_h}{\Gamma_h^\text{SM}}}^{-1},\\
\mu_{h\to f\bar f,WW,ZZ}
&= \cos^2\theta \paren{\frac{\Gamma_h}{\Gamma_h^\text{SM}}}^{-1},
}
where the ratio of the total widths is given by
\al{
\paren{\frac{\Gamma_h}{\Gamma_h^\text{SM}}}
&= \text{Br}_{h \to \text{SM others}}^{\text{SM}} \cos^2\theta
+\text{Br}_{h \to \gamma\gamma}^{\text{SM}} \paren{\cos\theta-{\Delta b_\gamma\over b_\gamma^\text{SM}}\eta\sin\theta}^2
+\text{Br}_{h \to gg}^{\text{SM}} \, \mu_\text{ggF},
}
with $\text{Br}_{h \to \text{SM others}}^{\text{SM}}=0.913$, $\text{Br}_{h \to \gamma\gamma}^{\text{SM}}=0.002$ and $\text{Br}_{h \to gg}^{\text{SM}}=0.085$.
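For orientation, these expressions can be evaluated at an illustrative top-partner point; the values $\theta=0.1$ and $\eta=0.5$ in the sketch below are assumptions for this example, not fits.

```python
import math

# Partial signal strengths and total-width ratio for a top partner:
# Delta b_g = b_g^top = 2/3, Delta b_gamma = 16/9, b_gamma^SM = -6.5.
theta, eta = 0.1, 0.5               # illustrative values (assumptions)
db_g, b_g_top = 2 / 3, 2 / 3
db_gam, b_gam_sm = 16 / 9, -6.5
c, s = math.cos(theta), math.sin(theta)

mu_ggf = (c - db_g / b_g_top * eta * s)**2
gam_factor = (c - db_gam / b_gam_sm * eta * s)**2
width_ratio = 0.913 * c**2 + 0.002 * gam_factor + 0.085 * mu_ggf
print(round(mu_ggf, 3), round(width_ratio, 3))  # -> 0.893 0.982
```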
We compare these values with the corresponding constraints given in Ref.~\cite{Khachatryan:2016vau}.
Results are shown in Fig.~\ref{combined} for the matter contents summarized in Table~\ref{table of fields}.
We note that the region near $\theta \simeq 0$ is always allowed by the signal strength constraints, though it is excluded by the di-photon search as we will see.
\subsection{Bound from $s\to ZZ\to4l$}\label{ZZ4l constraint}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{fig_ZZ4lconstraint.pdf}
\vspace{-5mm}
\end{center}
\caption{
The 2$\sigma$-excluded regions from the $s\to ZZ\to4l$ bound in the $\mu_\text{eff}$ vs $m_s$ plane. The color is changed in increments of 0.1; the weakest bound shown corresponds to $b_g=0.2$.
The $K$-factor is set to $K=1.6$.
}
\label{ZZ4lconstraint}
\end{figure}
One of the strongest constraints on the model comes from the heavy Higgs search in the four lepton final state at $\sqrt{s}=13\,\text{TeV}$ at ATLAS~\cite{ATLAS-CONF-2016-079}.
Experimentally, an upper bound is put on the cross section $\sigma\fn{pp \to s \to ZZ\to 4l}$, with $l=e,\mu$, for each $m_s$.
Its theoretical cross section is obtained by multiplying the production cross section~\eqref{singlet production} by the branching ratio $\BR\fn{s\to ZZ}=\Gamma\fn{s\to ZZ}/\Gamma\fn{s\to\text{all}}$ and $\paren{\BR_\text{SM}\fn{Z\to ee,\mu\mu}}^2\simeq\paren{6.73\%}^2$; see Sec.~\ref{tree decay section}.
In Fig.~\ref{ZZ4lconstraint}, we plot the $2\sigma$-excluded regions in the $\mu_\text{eff}$ vs $m_s$ plane, varying $b_g$ from 0 to 1 in steps of 0.2; the weakest bound appearing in the plane corresponds to $b_g=0.2$.
The $K$-factor is set to $K=1.6$.
The experimental bound
becomes milder for large $\mu_\text{eff}$ because the di-Higgs channel dominates the decay of the neutral scalar.
The large fluctuation of the bound is due to the statistical fluctuation of the original experimental constraint.
We note that though we have focused on the strongest constraint at the low $m_s$ region, the other decay channels of $WW\to l\nu qq$ and of $ZZ\to\nu\nu qq$ and $ll\nu\nu$ may also become significant at the high mass region $m_s\gtrsim 700\,\text{GeV}$.
\subsection{Bound from $s\to\gamma\gamma$}\label{general diphoton bound}
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.45\textwidth]{fig_diphoton_sinth001.pdf}
\includegraphics[width=0.45\textwidth]{fig_diphoton_sinth003.pdf}\bigskip\\
\includegraphics[width=0.45\textwidth]{fig_diphoton_sinth005.pdf}
\includegraphics[width=0.45\textwidth]{fig_diphoton_sinth01.pdf}
\end{center}
\caption{
The 2$\sigma$-excluded regions from $s\to \gamma\gamma$ bound in the $\mu_\text{eff}$ vs $m_s$ plane for various $\sin\theta$.
The color is changed in increments of 0.2.
The $K$-factor is set to $K=1.6$.
\label{diphoton mu vs ms}
}
\end{figure}
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.496\textwidth]{fig_diphoton_mu1TeV.pdf}
\includegraphics[width=0.496\textwidth]{fig_diphoton_varying_mu.pdf}
\end{center}
\caption{
The 2$\sigma$-excluded regions from $s\to \gamma\gamma$ bound in the $\sin\theta$ vs $\eta$ plane for various $m_s$ with $\mu_\text{eff}=$ 1 TeV and $\sqrt{3}m_s^2/v$.
The color is changed in increments of 300\,GeV.
The $K$-factor is set to $K=1.6$.
\label{diphoton sintheta vs eta}
}
\end{figure}
A strong constraint comes from the heavy Higgs search in the di-photon final state at $\sqrt{s}=13\,\text{TeV}$ at ATLAS~\cite{ATLAS-CONF-2016-059}.
Experimentally, an upper bound is put on the cross section $\sigma\fn{pp \to s \to \gamma\gamma}$ for each $m_s$.
Its theoretical cross section is obtained by multiplying the production cross section~\eqref{singlet production} by the branching ratio $\BR\fn{s\to \gamma\gamma}=\Gamma\fn{s\to\gamma\gamma}/\Gamma\fn{s\to\text{all}}$; see Sec.~\ref{tree decay section}.
Since this constraint is strong in the small mixing region, where the loop-level decay is comparable to the tree-level decay, we include the loop-level decay channels into $\Gamma\fn{s\to\text{all}}$ for this analysis; see Sec.~\ref{loop decay}.
In Fig.~\ref{diphoton mu vs ms}, we plot the $2\sigma$-excluded regions in the $\mu_\text{eff}$ vs $m_s$ plane for $\sin\theta=0.01$, 0.03, 0.05, and 0.1, varying $b_gb_\gamma$ from 0 to 2 in steps of 0.2.
The $K$-factor is set to $K=1.6$.
For $\sin\theta = 0.01$, a broad region is excluded already for $b_g b_\gamma=0.4$.
On the other hand, the experimental bound is negligibly weak in the case of $\sin\theta = 0.1$.
The large fluctuation of the bound is due to the statistical fluctuation of the original experimental constraint.
In Fig.~\ref{diphoton sintheta vs eta}, we plot the same $2\sigma$-excluded regions on the $\sin\theta$ vs $\eta$ plane for $m_s=300\,\text{GeV}$, $600\,\text{GeV}$, $900\,\text{GeV}$, $1200\,\text{GeV}$, and $1500\,\text{GeV}$.
In the left and right panels, we set $\mu_\text{eff}=1\,\text{TeV}$ and $\mu_\text{eff}=\sqrt{3}m_s^2/v$.
The latter corresponds to $\Gamma\fn{s\to hh}=\sum_{V=W,Z}\Gamma\fn{s\to VV}$, which is chosen such that there are sizable di-Higgs events and that $\mu_\text{eff}$ is not too large.
The $K$-factor is set to $K=1.6$.
We emphasize that the small mixing limit $\sin\theta\to 0$ is always excluded by the di-photon channel in contrast to the other bounds, though it cannot be seen in Fig.~\ref{diphoton sintheta vs eta} in the small $\eta$ region due to the resolution.
The bound from $s\to Z\gamma$ is weaker and we do not present the result here.
\subsection{Bound from direct search for colored particles}
We first review the mass bounds on the extra colored particles.
For the $SU(2)_L$ singlets $T$ and $B$~\cite{Aad:2015kqa,ATLAS-CONF-2016-101},
\al{
M_T,M_B\gtrsim800\,\text{GeV}.
}
The mass bounds for the leptoquark $\phi_{\bs 3}$, diquark $\phi_{\bs 6}$, and coloron $\phi_{\bs 8}$ are given in Refs.~\cite{Aaboud:2016qeg,Khachatryan:2016jqo}, \cite{Chivukula:2015zma}, and \cite{Sirunyan:2016iap} as
\al{
m_{\phi_{\bf 3}}
&\gtrsim0.7\text{--}1.1\,\text{TeV},&
m_{\phi_{\bf 6}}
&\gtrsim 7 \,\text{TeV},&
m_{\phi_{\bf 8}}
&\gtrsim 5.5 \,\text{TeV},
}
respectively, depending on the possible decay channels.
For the top-partner bound $M_T\gtrsim800\,\text{GeV}$ with $\theta\simeq0$, we get $\eta\lesssim 0.3y_TN_T$. Therefore, we need a rather large Yukawa coupling $y_T\simeq 2.2$ for $N_T=1$ in order to account for Eq.~\eqref{excess} via Eq.~\eqref{pp to s cross section}.\footnote{
Strictly speaking, the bound on $M_T$ slightly changes when $N_T\geq2$, and hence the bound for $y_TN_T$ could be modified accordingly.
}
The same argument applies for the bottom partner since it has the same $\Delta b_g=2/3$.
Similarly for a colored scalar with $M_\phi\gtrsim0.7$, 1.1, 5.5, and 7\,TeV, we get $\eta\lesssim \kappa_\phi N_\phi{f\ov2\,\text{TeV}}$, $\kappa_\phi N_\phi{f\ov4.9\,\text{TeV}}$, $\kappa_\phi N_\phi{f\over 123 \,\text{TeV}}$, and $\kappa_\phi N_\phi{f\over 200 \,\text{TeV}}$, respectively.
For $\theta \simeq 0$, the value of $b_g$ is suppressed or enhanced by extra factors ${1\over6}/{2\over3}=1/4$, ${5\ov6}/{2\ov3}={5\ov4}$, and $1/{2\over3}=3/2$, respectively, compared to the top partner. Therefore, from Eq.~\eqref{eta required}, we need $\kappa_\phi N_\phi f\gtrsim5$--13\,TeV, {106}\,TeV, and {54}\,TeV for $\phi_{\bs 3}$, $\phi_{\bs 6}$, and $\phi_{\bs 8}$, respectively, in order to account for the 2.4$\sigma$ excess at $\theta^2\ll1$.
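The quoted thresholds on $\kappa_\phi N_\phi f$ follow from requiring $\eta=\kappa_\phi N_\phi f v/M_\phi^2$ to reach the values in Eq.~\eqref{eta required}; a short numerical check (Python sketch, with $v=246$\,GeV assumed):

```python
# kappa_phi N_phi f = eta_required * M_phi^2 / v, converted to TeV
v = 246.0  # GeV (assumed)
cases = {                    # name: (M_phi in GeV, required eta)
    "LQ 0.7 TeV":     (700.0, 2.6),
    "LQ 1.1 TeV":     (1100.0, 2.6),
    "diquark 7 TeV":  (7000.0, 0.53),
    "coloron 5.5 TeV": (5500.0, 0.44),
}
thresholds = {name: eta * M**2 / v / 1000.0 for name, (M, eta) in cases.items()}
for name, kNf in thresholds.items():
    print(f"{name}: kappa N f > {kNf:.0f} TeV")
# leptoquark: 5 and 13 TeV; diquark: 106 TeV; coloron: 54 TeV, as quoted
```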
\section{Accounting for 2.4$\sigma$ excess of $b\bar b\gamma\gamma$ by $m_s=300\,\text{GeV}$}\label{2.4sigma}
It has been reported by the ATLAS Collaboration that there exists a 2.4$\sigma$ excess of $hh$-like events in the $b\bar b\gamma\gamma$ final state~\cite{Aad:2014yja}.
This corresponds to the extra contribution to the SM cross section\footnote{
At $\sqrt{s}=8\,\text{TeV}$, $\sigma_\text{SM}\fn{pp\to hh}=9.2\,\text{fb}$.
The expected numbers of events are $1.3\pm0.5$, $0.17\pm0.04$, and $0.04$ for the non-$h$ background, the single-$h$ production, and the SM $hh$ process, respectively.
Since the observed number of events is 5, the excess is $5-1.3-0.17\simeq3.5$, which is $3.5/0.04=87.5$ times larger than the SM $hh$ expectation.
Therefore, the excess corresponds to $9.2\,\text{fb}\times87.5\simeq0.8\,\text{pb}$.
}
\al{
\sigma\fn{pp\to hh}_\text{extra,\,8\,\text{TeV}}
&\simeq 0.8\,\text{pb}.
\label{excess}
}
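The arithmetic in the footnote leading to Eq.~\eqref{excess} can be reproduced in a few lines:

```python
# Observed events minus expected non-h and single-h backgrounds,
# normalized to the expected SM hh yield; inputs from the footnote above.
observed, bkg_non_h, bkg_single_h, sm_hh = 5.0, 1.3, 0.17, 0.04
excess_events = observed - bkg_non_h - bkg_single_h   # 3.53, quoted as 3.5
ratio_to_sm = excess_events / sm_hh                   # ~88 x the SM hh rate
sigma_extra_pb = 9.2e-3 * ratio_to_sm                 # sigma_SM(pp->hh) = 9.2 fb
print(round(sigma_extra_pb, 2))  # -> 0.81, i.e. ~0.8 pb
```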
In Fig.~\ref{BR300}, we plot the branching ratios at $m_s=300\,\text{GeV}$ as functions of $\mu_\text{eff}$.
\subsection{Signal}
\begin{figure}[tp]
\begin{center}
\includegraphics[width=0.4\textwidth]{fig_BR300.pdf}
\end{center}
\caption{Branching ratios $\BR\fn{s\to hh}$ and $\BR\fn{s\to ZZ}$ at $m_s=300\,\text{GeV}$ as functions of $\mu_\text{eff}$.
}\label{BR300}
\end{figure}
With $m_ s=300\,\text{GeV}$, we get the luminosity functions
\al{
\left.\tau{\text{d}\mc L^{gg}\over\text{d}\tau}\right|_{m_s=300\,\text{GeV}}
&\simeq \begin{cases}
17.2 & (\sqrt{s}=8\,\text{TeV}),\\
54.5 ~(64.2)& (\sqrt{s}=13~ (14)\,\text{TeV}),\\
263 ~(357)& (\sqrt{s}=28~ (33) \,\text{TeV}),\\
2310~(1470) & (\sqrt{s}=100~(75)\,\text{TeV}).
\end{cases}
\label{luminosity function}
}
That is,
\al{
\sigma\fn{pp\to s}_{m_s=300\,\text{GeV}}
&\simeq
\sqbr{b_g\over-1/3}^2\sqbr{\alpha_s\over0.1}^2\sqbr{K\over1.6}\times
\begin{cases}
1.0\,\text{pb} & (\sqrt{s}=8\,\text{TeV}),\\
3.2~(3.8)\,\text{pb} & (\sqrt{s}=13~(14)\,\text{TeV}),\\
15~(18)\,\text{pb} & (\sqrt{s}=28~(33)\,\text{TeV}),\\
130~(83)\,\text{pb} & (\sqrt{s}=100~(75)\,\text{TeV}).
\end{cases}
\label{pp to s cross section}
}
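Equation~\eqref{pp to s cross section} is the product of $\sigma_s\simeq36.5$\,fb from Eq.~\eqref{sigma_s}, the luminosity functions~\eqref{luminosity function}, and $K=1.6$; the sketch below reproduces the quoted values to within the 10--15\% level expected from rounding of the inputs.

```python
# sigma(pp->s) = K * sigma_s * tau dL/dtau at m_s = 300 GeV, in pb
sigma_s_fb, K = 36.5, 1.6
lum = {8: 17.2, 13: 54.5, 14: 64.2, 28: 263.0, 33: 357.0, 75: 1470.0, 100: 2310.0}
sigma_pb = {E: K * sigma_s_fb * L / 1000.0 for E, L in lum.items()}
print({E: round(s, 1) for E, s in sigma_pb.items()})
# -> {8: 1.0, 13: 3.2, 14: 3.7, 28: 15.4, 33: 20.8, 75: 85.8, 100: 134.9}
```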
\begin{figure}
\begin{center}
\includegraphics[width=0.24\textwidth]{fig_partner300.pdf}
\includegraphics[width=0.24\textwidth]{fig_leptoquark300.pdf}
\includegraphics[width=0.24\textwidth]{fig_diquark300.pdf}
\includegraphics[width=0.24\textwidth]{fig_coloron300.pdf}
\end{center}
\caption{In each panel, the line corresponds to the preferred contour to explain the 2.4$\sigma$ excess at $m_s=300\,\text{GeV}$, and the shaded region is excluded at the 95\% C.L.\ by $\sigma\fn{ZZ\to4l}_{13\,\text{TeV}}$.
The $K$-factor is set to be $K=1.6$.
The region $10^{-4}\lesssim\theta^2\ll1$ is assumed.
Note that the plotted range of $\eta$ on the horizontal axis differs panel by panel.
}\label{excess fig}
\end{figure}
In Fig.~\ref{excess fig}, we plot the preferred contour to explain the 2.4$\sigma$ excess at $m_s=300\,\text{GeV}$, where the shaded region is excluded at the 95\% C.L.\ by the $\sigma\fn{pp \to s \to ZZ\to4l}_{13\,\text{TeV}}$ constraint that has been discussed in Sec.~\ref{ZZ4l constraint}. We have assumed the $K$-factor $K=1.6$.
We see that at the benchmark point $\theta\simeq0$, the lowest and highest possible values of $\mu_\text{eff}$ and $\eta$ are, respectively,
\al{
\mu_\text{eff}
&\gtrsim 800\,\text{GeV}, &
\eta
&\lesssim \begin{cases}
0.66 & \text{top/bottom partner,}\\
2.6 & \text{leptoquark,}\\
0.53 & \text{diquark,}\\
0.44 & \text{coloron,}
\end{cases}
\label{eta required}
}
in order to account for the cross section~\eqref{excess}.
The ratios of the upper bounds on $\eta$ follow from the scaling $\sigma\fn{pp\to s\to hh}\propto\paren{\Delta b_g\,\eta}^2$; that is, the bound on $\eta$ scales as $\paren{\Delta b_g}^{-1}$.
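Since the signal rate at $\theta\simeq0$ scales as $\paren{\Delta b_g\,\eta}^2$, the products $\Delta b_g\times\eta_\text{max}$ should be nearly particle independent; a quick check with the numbers above:

```python
# Delta b_g from the table of fields times the eta upper bounds quoted above
db_g    = {"top/bottom partner": 2/3, "leptoquark": 1/6,
           "diquark": 5/6, "coloron": 1.0}
eta_max = {"top/bottom partner": 0.66, "leptoquark": 2.6,
           "diquark": 0.53, "coloron": 0.44}
products = {k: db_g[k] * eta_max[k] for k in db_g}
print({k: round(p, 2) for k, p in products.items()})
# -> all entries ~0.44: the bound on eta indeed scales as 1/(Delta b_g)
```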
\subsection{Constraints}
When $m_s = 300\,\text{GeV}$, the $95\%$ C.L.\ upper bound at $\sqrt{s} = 13\,\text{TeV}$ is $\sigma\fn{s(\text{ggF})\to ZZ\to 4l}_{13\,\text{TeV}}\lesssim 0.8\,\text{fb}$~\cite{ATLAS-CONF-2016-079}; see also Fig.~\ref{ZZ4lconstraint}. The corresponding excluded region is plotted in Fig.~\ref{excess fig}.
Currently, the strongest direct constraint on the di-Higgs resonance at $m_s=300\,\text{GeV}$ comes from the $\sqrt{s}=8\,\text{TeV}$ data in the $b\bar b\gamma\gamma$ final state at CMS~\cite{diHiggsCMS} and in $b\bar b\tau\tau$ at ATLAS~\cite{diHiggsATLAS}:
\al{
{\sigma\fn{pp\to s\to hh}}_\text{8\,\text{TeV}}
< \begin{cases}
1.1\,\text{pb} & \text{($b\bar b\gamma\gamma$ at CMS)},\\
1.7\,\text{pb} & \text{($b\bar b\tau\tau$ at ATLAS)},
\end{cases}
\label{8TeV bound}
}
at the 95\% C.L.
The preferred value~\eqref{excess} is still within this limit.
We note that the current limits for the $m_s=300\,\text{GeV}$ resonance search at $\sqrt{s}=13\,\text{TeV}$ are from the $b\bar b\gamma\gamma$ final state at ATLAS~\cite{diHiggsATLAS} and from $b\bar bbb$ at CMS~\cite{diHiggsCMS}:
\al{
{\sigma\fn{pp\to s\to hh}}_\text{13\,\text{TeV}}
< \begin{cases}
5.5\,\text{pb} & \text{($b\bar b\gamma\gamma$ at ATLAS)},\\
11\,\text{pb} & \text{($b\bar bbb$ at CMS)},
\end{cases}
}
at the 95\% C.L.
Rescaling by the ratio of the gluon luminosities, this translates into the $\sqrt{s}=8\,\text{TeV}$ cross sections:
\al{
{\sigma\fn{pp\to s\to hh}}_\text{8\,\text{TeV}}
< \begin{cases}
1.7\,\text{pb} & \text{($b\bar b\gamma\gamma$ at ATLAS)},\\
3.5\,\text{pb} & \text{($b\bar bbb$ at CMS)}.
\end{cases}
}
This is weaker than the direct 8\,TeV bound~\eqref{8TeV bound}.
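The 8\,TeV equivalents quoted above follow from rescaling the 13\,TeV limits by the ratio of the gluon luminosities at $m_s=300$\,GeV in Eq.~\eqref{luminosity function}:

```python
# sigma_8 / sigma_13 = (tau dL/dtau at 8 TeV) / (same at 13 TeV), m_s = 300 GeV
ratio = 17.2 / 54.5
limits_8tev = [round(5.5 * ratio, 1), round(11 * ratio, 1)]
print(limits_8tev)  # -> [1.7, 3.5]
```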
The branching ratio for $s\to\gamma\gamma$ is\footnote{
The power of $m_s$ dependence is valid in the limit $m_s\gg 2m_h$.
}
\al{
\BR\fn{s\to\gamma\gamma}
&\sim
2.3\times10^{-3}\sqbr{\alpha\over1/129}^2\sqbr{\mu_\text{eff}\over800\,\text{GeV}}^{-2}\sqbr{b_\gamma\over-8/9}^2\sqbr{m_s\over300\,\text{GeV}}^4\sqbr{\sin\theta\over0.01}^{-2}.
}
We see that the loop suppressed decay into diphoton is negligible compared to the tree-level decay via the interaction~\eqref{shh coupling}.
For $m_s=300$\,GeV, the cross section at $\sqrt{s}=13\,\text{TeV}$ is
\al{
\sigma\fn{pp\to s \to \gamma\gamma}_\text{13\,\text{TeV}}
\sim7.4\,\text{fb}
\sqbr{b_g\over-1/3}^2\sqbr{b_\gamma\over-8/9}^2
\sqbr{\alpha_s\over0.1}^2\sqbr{\alpha\over1/129}^2\sqbr{\mu_\text{eff}\over800\,\text{GeV}}^{-2}\sqbr{\sin\theta\over0.01}^{-2}.
}
We see that the loop-suppressed $\Gamma\fn{s\to\gamma\gamma}$ becomes the same order as $\Gamma\fn{s\to hh}$ when $\theta\lesssim 10^{-3}$ and that the region $\theta\lesssim 10^{-2}$ is excluded by the diphoton search, $\sigma\fn{pp\to s \to \gamma\gamma}_\text{13\,\text{TeV}}\lesssim10\,\text{fb}$~\cite{ATLAS-CONF-2016-059}, for a typical set of parameters that explains the 300\,GeV excess; see also Sec.~\ref{general diphoton bound}.
We comment on the case where the neutral scalar is charged under the $Z_2$ symmetry, $S\to-S$, or is extended to a complex scalar charged under an extra U(1), $S\to e^{i\varphi}S$.
In such a model,
the effective coupling in the small mixing limit becomes
\begin{align}
\mu_\text{eff} &\sim \frac{m_sf}{v}\lesssim {m_s\over\eta};
\label{Z2 relation}
\end{align}
see Appendix~\ref{Z2 model section}.
That is, for a given $m_s$, there is an upper bound on the product $\mu_\text{eff}\,\eta$: $\mu_\text{eff}\,\eta\lesssim m_s$.
On the other hand, the production cross section and the di-Higgs decay rate of $s$ are proportional to $\eta^2$ and $\mu_\text{eff}^2$, and hence there is a preferred value of $\mu_\text{eff}\,\eta$ in order to account for the 2.4\,$\sigma$ excess by $m_s=300\,\text{GeV}$; see Fig.~\ref{excess fig}. In the $Z_2$ model and the $U(1)$ model, this preferred value exceeds the above upper bound.
That is, they cannot account for the excess.
A more rigorous proof can be found in Appendix~\ref{Z2 model section}.
On the other hand, a singlet scalar that does not respect additional symmetry does not obey this relation~\eqref{Z2 relation}.
For this reason, a singlet scalar without the $Z_2$ symmetry is in general advantageous for enhancing the di-Higgs signal, and can explain the excess at $m_s=300\,\text{GeV}$.
\section{Summary and discussion}\label{summary section}
We have studied a class of models in which the di-Higgs production is enhanced by the $s$-channel resonance of a neutral scalar that couples to a pair of gluons through loops of heavy colored fermions or scalars.
As such a colored particle, we have considered two types of possibilities:
\begin{itemize}
\item the vector-like fermionic partner of the top or bottom quark, with which the neutral scalar may be identified as the dilaton in a quasi-conformal sector,
\item the colored scalar, which is either a triplet (leptoquark), a sextet (diquark), or an octet (coloron).
\end{itemize}
We have presented the future prospects for the enhanced di-Higgs production at the LHC and beyond. Typically, the top/bottom partner models give a cross section $\sigma\fn{pp\to s}\gtrsim1\,\text{fb}$, which could be accessed by a luminosity of $\mc O\fn{\text{ab}^{-1}}$, for the scalar mass $m_s\lesssim1.3\,\text{TeV}$, 2\,TeV, and 4\,TeV at the LHC, HE-LHC, and FCC, respectively.
We have examined the constraints from the direct searches for the di-Higgs signal and for a heavy colored particle, as well as the Higgs signal strengths in various production and decay channels.
Typically, the small and large mixing regions are excluded by the diphoton resonance search and by the Higgs signal strength bounds, respectively.
The region of small $\mu_\text{eff}$ is excluded by the diphoton search as well as by the $s\to ZZ\to 4l$ channel.
We have also shown a possible explanation of the 2.4$\sigma$ excess of the di-Higgs signal in the $b\bar b\gamma\gamma$ final state reported by the ATLAS experiment.
We have shown that the $Z_2$ model explained in Appendix~\ref{Z2 model section} cannot account for the excess, while the general model in Appendix~\ref{general model} can.
A typical benchmark point which evades all the bounds and can explain the excess is
\al{
\mu_\text{eff}
&\sim 1\,\text{TeV}, &
\eta
&\sim \begin{cases}
0.6 & \text{top/bottom partner,}\\
2.4 & \text{leptoquark,}\\
0.5 & \text{diquark,}\\
0.4 & \text{coloron,}
\end{cases} &
\sin\theta
&\sim 0.1.
}
For the top/bottom partner $T,B$, the Yukawa coupling required to explain the 2.4$\sigma$ excess is rather large, $y_FN_F\gtrsim 2.2$, where $N_F$ is the number of $F=T,B$ introduced. For the colored scalar $\phi$, the required values of the neutral scalar VEV, $f=\left\langle S\right\rangle$, are
\al{
f\kappa_\phi N_\phi\gtrsim \begin{cases}
5\text{--}13\,\text{TeV} & \text{leptoquark, depending on possible decay channels},\\
{106} \,\text{TeV} & \text{diquark},\\
{54} \,\text{TeV} & \text{coloron},
\end{cases}
}
where $\kappa_\phi$ and $N_\phi$ are the quartic coupling between the colored and neutral scalars and the number of colored scalars introduced, respectively.
In this paper, we have restricted ourselves to the case where the colored particles running in the blob in Fig.~\ref{Di-Higgs figure} are $SU(2)_L$ singlets. Cases for doublets, triplets, etc., which could be richer in phenomenology, will be presented elsewhere.
We have assumed $M_F, M_\phi\gtrsim m_s$ to justify integrating out the colored particle. It would be worth including loop functions to extend the region of study toward $M_F, M_\phi\lesssim m_s$.
A full collider simulation of this model for the HL-LHC and FCC would be worth studying. The theoretical underpinning of this type of neutral scalar assisted by colored fermions/scalars, such as the dilaton model and the leptoquark model with spontaneous $B-L$ symmetry breaking, is also worth pursuing.
\subsection*{Acknowledgement}
The work of K. Nakamura\ and K.O.\ is partially supported by the JSPS KAKENHI Grant No.~26800156 and Nos.~15K05053, 23104009, respectively.
S.C.P. and Y.Y. are supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (No. 2016R1A2B2016112).
K. Nishiwaki would like to thank David London and the Group of Particle Physics of Universit\'e de Montr\'eal for the kind hospitality at the final stage of this work.
\section{Introduction}
We shall be concerned with the time-harmonic electromagnetic (EM) wave scattering.
Let
\begin{equation}\label{eq:plane waves}
E^i(x)=pe^{ik x\cdot d},\quad H^i(x)=\frac{1}{i k} \nabla\wedge E^i(x),\quad x\in\mathbb{R}^3\,,
\end{equation}
be a pair of time-harmonic EM plane waves, where $E^i$ and $H^i$ are, respectively, the electric and magnetic fields, and $k\in\mathbb{R}_+$, $d\in\mathbb{S}^2$, $p\in\mathbb{R}^3$ with $p\perp d$ are, respectively, the wave number, incident direction and polarization vector. In the homogeneous background space $\mathbb{R}^3$, where the EM medium is characterized by the electric permittivity $\varepsilon_0=1$, magnetic permeability $\mu_0=1$ and conductivity $\sigma_0=0$, the plane waves $(E^i, H^i)$ propagate indefinitely. If an EM inhomogeneity is presented in the homogeneous space, the propagation of the plane waves will be perturbed, leading to the so-called {\it scattering}. Throughout, we assume that the EM inhomogeneity is compactly supported in a bounded Lipschitz domain $\Omega\subset \mathbb{R}^3$ with $\mathbb{R}^3\backslash\overline{\Omega}$ connected. The inhomogeneity is referred to as a {\it scatterer}, and it is also characterized by the EM medium parameters including the electric permittivity $\varepsilon(x)$, the magnetic permeability $\mu(x)$ and the conductivity $\sigma(x)$. The medium parameters $\varepsilon(x)$, $\mu(x)$ and $\sigma(x)$ for $x\in\Omega$ are assumed to be $C^2$-smooth functions with $\varepsilon(x), \mu(x)>0$ and $\sigma(x)\geq 0$. The propagation of the total EM fields $(E,H)\in\mathbb{C}^3\wedge\mathbb{C}^3$ in the medium is governed by the Maxwell equations
\begin{equation}\label{eq:pp1}
\nabla\wedge E(x)-ik\mu(x) H(x)=0,\quad \nabla\wedge H(x)+(ik\varepsilon(x)-\sigma(x)) E(x)=0,\quad x\in\Omega,
\end{equation}
whereas in the background space it is governed by
\begin{equation}\label{eq:pp2}
\nabla\wedge E(x)-ik H(x)=0,\quad \nabla\wedge H(x)+ik E(x)=0,\quad x\in\mathbb{R}^3\backslash\overline{\Omega}.
\end{equation}
The total EM wave fields outside the inhomogeneity, namely in $\mathbb{R}^3\backslash \overline{\Omega}$, are composed of two parts: the incident wave fields $E^i, H^i$ and the scattered wave fields $E^+, H^+$. That is, we have
\begin{equation}\label{eq:total fields}
E(x)=E^i(x)+E^+(x),\quad H(x)=H^i(x)+H^+(x),\quad x\in\mathbb{R}^3\backslash\overline{\Omega}.
\end{equation}
The scattered EM fields are radiating, characterized by the Silver-M\"uller radiation condition
\[
\displaystyle{\lim_{|x|\rightarrow+\infty}|x|\left| (\nabla\wedge E^+)(x)\wedge\frac{x}{|x|}- ik E^+(x) \right|=0},
\]
which holds uniformly in all directions $\hat{x}:=x/|x|\in\mathbb{S}^2$, $x\in\mathbb{R}^3 \backslash \{0\}$. In the extreme situation where the conductivity of the inhomogeneity goes to infinity, the scatterer becomes perfectly conducting and the EM fields cannot penetrate inside $\Omega$. Moreover, the tangential component of the total electric field vanishes on the boundary of the scatterer, namely,
\begin{equation}
\nu\wedge E=0\quad \mbox{on\ \ $\partial\Omega$},
\end{equation}
where $\nu$ is the outward unit normal vector to $\partial\Omega$. In the perfectly conducting case, the scatterer is usually referred to as a {\it PEC obstacle}.
In summary, let us consider the scattering due to an inhomogeneous EM medium $(M; \varepsilon, \mu, \sigma)$ and a PEC obstacle $O$, and denote the combined scatterer $\Omega:=M\cup O$. $M$ and $O$ are assumed to be bounded Lipschitz domains with $\overline{M}\cap\overline{O}=\emptyset$ and $\mathbb{R}^3\backslash(\overline{M}\cup\overline{O})$ connected. The EM scattering is governed by the following Maxwell system
\begin{equation}\label{eq:Maxwell general}
\begin{cases}
\displaystyle{\nabla\wedge E-ik \bigg(1+(\mu-1)\chi_{M} \bigg) H=0}\ & \mbox{in\ \ $\mathbb{R}^3\backslash\overline{O}$},\\
\displaystyle{\nabla\wedge H+\bigg(ik(1+(\varepsilon-1)\chi_{M})- \sigma\chi_{M} \bigg)E=0}\ & \mbox{in\ \ $\mathbb{R}^3\backslash\overline{O}$},\\
E^-=E|_{M},\ E^+=(E-E^i)|_{\mathbb{R}^3\backslash\overline{M\cup O}},\\
H^-=H|_M,\ H^+=(H-H^i)|_{\mathbb{R}^3\backslash\overline{M\cup O}}, \\
\nu\wedge E^+=-\nu\wedge E^i\hspace*{1cm}\mbox{on \ \ $\partial O$},\\
\displaystyle{\lim_{|x|\rightarrow+\infty}|x|\left| (\nabla\wedge E^+)(x)\wedge\frac{x}{|x|}- ik E^+(x) \right|=0}.
\end{cases}
\end{equation}
We refer to \cite{Lei,Ned} for the well-posedness study of the forward scattering problem \eqref{eq:Maxwell general}. There exists a unique pair of solutions $(E, H)\in H_{loc}(\text{curl}; \mathbb{R}^3\backslash\overline{O})\wedge H_{loc}(\text{curl}; \mathbb{R}^3\backslash\overline{O})$ to the system \eqref{eq:Maxwell general}. Moreover, $E^+$ admits the following asymptotic expansion (cf.\!\cite{CK})
\begin{equation}\label{eq:farfield}
E^+(x)=\frac{e^{ik|x|}}{|x|} A\left(\frac{x}{|x|}; (M;\varepsilon,\mu, \sigma), O, d,p,k\right)+\mathcal{O}\left(\frac{1}{|x|^2}\right),
\quad \mathrm{as}~~ |x|\rightarrow +\infty\,,
\end{equation}
where $A(\hat{x}; (M;\varepsilon,\mu, \sigma), O, d, p, k)$ (written $A(\hat{x})$ or $A(\hat{x}; d, p, k)$ for short, depending on which dependence we wish to emphasize) is known as the electric far-field pattern. Since $A(\hat{x})$ is real analytic on the unit sphere $\mathbb{S}^2$, if it is known on any open subset of $\mathbb{S}^2$, then it is known on all of $\mathbb{S}^2$ by analytic continuation.
The inverse problem that we shall consider is to recover the medium inclusion $(M;\varepsilon,\mu,\sigma)$ and/or the PEC obstacle $O$ from the knowledge of $A(\hat{x};d,p,k)$. In the physical situation, the inhomogeneous EM medium and the obstacle are the unknown/inaccessible target objects. One sends detecting EM plane waves and collects the corresponding scattered data produced by the underlying scatterer, from which one infers knowledge of the target objects. This inverse scattering problem is of critical importance in many areas of science and technology, such as radar and sonar, geophysical exploration, medical imaging and non-destructive testing, to name just a few (cf.\!\cite{AK1,AK2,CK,Mar,Uhl}). If one introduces an operator $\mathcal{F}$ which maps the EM scatterer to the corresponding far-field pattern, the inverse scattering problem can be formulated as the following operator equation
\begin{equation}\label{eq:operator equation}
\mathcal{F}((M;\varepsilon,\mu,\sigma),O)=A(\hat{x};d,p,k),\quad \hat{x},d\in\mathbb{S}^2,\ p\in\mathbb{R}^3,\ k\in\mathbb{R}_+.
\end{equation}
It is widely known that the operator equation~\eqref{eq:operator equation} is non-linear and ill-posed (cf.\!\cite{CK}).
In this work, we are mainly concerned with numerical reconstruction algorithms for the aforementioned inverse scattering problem. There are many results in the literature and various imaging schemes have been developed; see, e.g., \cite{AILP,AK1,CCM,CK,IJZ,KG,Pot,SZC,UZ,Zha} and the references therein. It is remarked that most existing schemes involve inversions, and in order to tackle the ill-posedness, regularizations are always utilized. For the present study, we are particularly interested in the reconstruction by making use of a single far-field measurement, namely $A(\hat{x};d,p,k)$ for all $\hat{x}\in\mathbb{S}^2$ but fixed $d\in\mathbb{S}^2, p\in\mathbb{R}^3$ and $k\in\mathbb{R}_+$. Here, we note that in \eqref{eq:operator equation}, the unknown scatterer depends on a 3D parameter $x\in\Omega$, whereas the far-field pattern depends on a 2D parameter $\hat{x}\in\mathbb{S}^2$; hence, much less information is available for the proposed reconstruction scheme. The inverse electromagnetic scattering problem with minimum measurement data is extremely challenging, with very limited theoretical and computational progress in the literature (cf.\!\cite{CK,Isa,LZr}). Furthermore, we shall conduct our study in a very general and complex environment. The target scatterer may consist of an unknown number of components, and each component could be either an inhomogeneous medium inclusion or a PEC obstacle.
Two imaging schemes using a single measurement were recently proposed in \cite{LiLiuShangSun}, namely Schemes S and R, for locating multiple EM scatterer components, respectively, of small size and regular size compared to the wavelength of the incident EM plane waves. The schemes rely on certain new indicator functions, which can be directly calculated from the measured far-field data. In calculating the indicator functions, there are no inversions or regularizations involved, and hence the proposed schemes are shown to be very efficient and robust to noisy data. However, Scheme R, the locating scheme for regular-size scatterers, requires {\it a priori} knowledge of the possible shapes, orientations and sizes of the underlying scatterer components. It is our first goal to extend Scheme R in \cite{LiLiuShangSun} to an improved Scheme AR by relaxing the requirement on orientations and sizes. This is achieved in light of the idea that the admissible reference space can be augmented by more data carrying information about orientations and sizes. Next, based on the newly developed Scheme AR and Scheme S in \cite{LiLiuShangSun}, in a certain generic practical setting, we develop a novel imaging procedure (Scheme M) for locating multiple multi-scale EM scatterers, which include both regular-size and small-size components. A novel local re-sampling technique is proposed and plays a key role in tackling the challenging multi-scale reconstruction in Scheme M. Furthermore, for the multi-scale locating scheme, if one additional set of far-field data is used, a more robust and accurate reconstruction can be achieved. To the best of our knowledge, this is the first reconstruction scheme in the literature for recovering multi-scale EM scatterers using so little scattering information. For all the proposed imaging schemes, we provide rigorous mathematical justifications.
We also conduct systematical numerical experiments to demonstrate the effectiveness and the promising features of the schemes.
The rest of the paper is organized as follows. Section~\ref{sect:2} is devoted to the description of multi-scale EM scatterers and the two locating schemes in \cite{LiLiuShangSun}. In Section~\ref{sect:3}, we develop techniques on relaxing the requirement on knowledge of the orientation and size for locating regular-size scatterers. In Section~\ref{sect:4}, we present the imaging schemes of locating multiple multi-scale scatterers. Finally, in Section~\ref{sect:5}, numerical experiments are given to demonstrate the effectiveness and the promising features of the proposed imaging schemes.
\section{Multi-scale EM scatterers and two locating schemes}\label{sect:2}
Throughout the rest of the paper, we assume that $k\sim 1$. That is, the wavelength of the EM plane waves is given by $\lambda=2\pi/k\sim 1$ and hence the size of a scatterer can be expressed in terms of its Euclidean diameter.
\subsection{Scheme S}
We first introduce the class of small scatterers for our study. Let $l_s\in\mathbb{N}$ and $D_j$, $1\leq j\leq l_s$ be bounded Lipschitz domains in $\mathbb{R}^3$. It is assumed that all $D_j$'s are simply connected and contain the origin. For $\rho\in\mathbb{R}_+$, we let $\rho D_j:=\{\rho x; x\in D_j\}$ and set
\begin{equation}\label{eq:small component}
\Omega_j^{(s)}=z_j+\rho D_j,\quad z_j\in\mathbb{R}^3,\ \ 1\leq j\leq l_s.
\end{equation}
Each $\Omega_j^{(s)}$ is referred to as a scatterer component and its content is endowed with $\varepsilon_j, \mu_j$ and $\sigma_j$. The parameter $\rho\in\mathbb{R}_+$ represents the relative size of the scatterer (or, more precisely, each of its components). The scatterer components $(\Omega_j^{(s)}; \varepsilon_j, \mu_j, \sigma_j)$, $1\leq j\leq l_s$, are assumed to satisfy:~i).~if for some $j$, $0\leq \sigma_j<+\infty$, then $\varepsilon_j, \mu_j$ and $\sigma_j$ are all real valued $C^2$-smooth functions in the closure of ${\Omega_j^{(s)}}$; ii).~in the case of i), the following condition is satisfied, $|\varepsilon_j(x)-1|+|\mu_j(x)-1|+|\sigma_j(x)|>c_0>0$ for all $x\in\Omega_j^{(s)}$ and some positive constant $c_0$; iii).~if for some $j$, $\sigma_j=+\infty$, then disregarding the parameters $\varepsilon_j$ and $\mu_j$, $\Omega_j^{(s)}$ is regarded as a PEC obstacle. Condition ii) means that if $(\Omega_j^{(s)}; \varepsilon_j, \mu_j, \sigma_j)$ is a medium component, then it represents a genuine inhomogeneity relative to the homogeneous background space. We set
\begin{equation}\label{eq:small scatterer}
\Omega^{(s)}:=\bigcup_{j=1}^{l_s} \Omega_j^{(s)}\quad \mbox{and}\quad (\Omega^{(s)};\varepsilon,\mu,\sigma):=\bigcup_{j=1}^{l_s} (\Omega_j^{(s)}; \varepsilon_j,\mu_j,\sigma_j),
\end{equation}
and make the following qualitative assumption,
\begin{equation}\label{eq:qualitative assumptions}
\rho\ll 1\qquad \mbox{and}\qquad \mbox{dist}(z_j, z_{j'})\gg 1\quad \mbox{for\ $j\neq j'$, $1\leq j, j'\leq l_s$}.
\end{equation}
The assumption \eqref{eq:qualitative assumptions} implies that compared to the wavelength of the incident plane waves, the relative size of each scatterer component is small and if there are multiple components, they are sparsely distributed. It is numerically shown in \cite{LiLiuShangSun} that if the relative size is smaller than half a wavelength and the distance between two different components is bigger than half a wavelength, the scheme developed there works well for locating the multiple components of $\Omega^{(s)}$.
Let $0\leq l_s'\leq l_s$ be such that when $1\leq j\leq l_s'$, $\sigma_j=+\infty$, and when $l_s'+1\leq j\leq l_s$, $0\leq \sigma_j<+\infty$. That is, if $1\leq j\leq l_s'$, $\Omega_j^{(s)}$ is a PEC obstacle component, whereas if $l_s'+1\leq j\leq l_s$, $(\Omega_j^{(s)}; \varepsilon_j, \mu_j, \sigma_j)$ is a medium component. If $l_s'=0$, then all the components of the small scatterer $\Omega^{(s)}$ are of medium type and if $l_s'=l_s$, then all the components are PEC obstacles. The EM scattering corresponding to $\Omega^{(s)}$ due to a single pair of incident waves $(E^i, H^i)$ is governed by \eqref{eq:Maxwell general} with $O=\bigcup_{j=1}^{l_s'} \Omega_j^{(s)}$ and $(M; \varepsilon, \mu, \sigma)=\bigcup_{j=l_s'+1}^{l_s} (\Omega_j^{(s)}; \varepsilon_j, \mu_j, \sigma_j)$. We denote the electric far-field pattern by $A(\hat{x}; \Omega^{(s)})$.
In order to locate the multiple components of $\Omega^{(s)}$ in \eqref{eq:small scatterer}, the following indicator function is introduced in \cite{LiLiuShangSun},
\begin{equation}\label{eq:indicator function}
\begin{split}
I_s(z):=\frac{1}{\|A(\hat x;\Omega^{(s)})\|^2_{T^2(\mathbb{S}^2)}}&\sum_{m=-1,0,1}\bigg( {\bigg|\left\langle A(\hat x;\Omega^{(s)}), e^{ik (d-\hat x)\cdot z}\, U_1^m(\hat x) \right\rangle_{T^2(\mathbb{S}^2)}\bigg|^2}\\
& +{\bigg|\left\langle A(\hat x;\Omega^{(s)}), e^{i k (d-\hat x)\cdot z}\, V_1^m(\hat x) \right\rangle_{T^2(\mathbb{S}^2)}\bigg|^2} \bigg),\ \ z\in\mathbb{R}^3,
\end{split}
\end{equation}
where
\[
T^2(\mathbb{S}^2):=\{\mathbf{a}\in L^2(\mathbb{S}^2)^3\ |\ \hat{x}\cdot \mathbf{a}(\hat{x})=0\ \ \mbox{for a.e. $\hat{x}\in\mathbb{S}^2$}\}
\]
and
\begin{equation*}
U_n^m(\hat{x}):=\frac{1}{\sqrt{n(n+1)}}\text{Grad}\, Y_n^m(\hat{x}),\ \
V_n^m(\hat x):=\hat x\wedge U_n^m(\hat x),
\ n\in\mathbb{N},\ \ m=-n,\cdots,n,
\end{equation*}
with $Y_n^m(\hat x)$, $m=-n,\ldots,n$ the spherical harmonics of order $n\geq 0$ (cf.\!\cite{CK}). It is shown in \cite{LiLiuShangSun} that $z_j$ (cf.\!~\eqref{eq:small component}), $1\leq j\leq l_s$, is a local maximum point for $I_s(z)$. Based on such indicating behavior, the following scheme is proposed in \cite{LiLiuShangSun} for locating the multiple components of the small scatterer $\Omega^{(s)}$.
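The indicator in \eqref{eq:indicator function} only involves quadratures over $\mathbb{S}^2$ and is therefore straightforward to evaluate numerically. The following minimal Python sketch (our own illustration, not part of \cite{LiLiuShangSun}; grid sizes and conventions are our choices) assembles the order-one vector spherical harmonics $U_1^m$, $V_1^m$ and the $T^2(\mathbb{S}^2)$ inner product on a product quadrature grid.

```python
import numpy as np

def sphere_quadrature(n_theta=40, n_phi=80):
    # Product quadrature on S^2: Gauss-Legendre in t = cos(theta),
    # trapezoid (uniform) in phi -- exact for the low orders used here.
    t, wt = np.polynomial.legendre.leggauss(n_theta)
    phi = 2*np.pi*np.arange(n_phi)/n_phi
    ct, ph = np.meshgrid(t, phi, indexing="ij")
    st = np.sqrt(1.0 - ct**2)
    w = np.outer(wt, np.full(n_phi, 2*np.pi/n_phi))
    # unit radial vector and the tangent frame (theta-hat, phi-hat)
    xhat = np.stack([st*np.cos(ph), st*np.sin(ph), ct], axis=-1)
    that = np.stack([ct*np.cos(ph), ct*np.sin(ph), -st], axis=-1)
    phat = np.stack([-np.sin(ph), np.cos(ph), np.zeros_like(ph)], axis=-1)
    return xhat, that, phat, w, ct, st, ph

def U1m(m, that, phat, ct, st, ph):
    # U_1^m = Grad Y_1^m / sqrt(2), expressed in the (theta-hat, phi-hat)
    # frame, with the Condon-Shortley convention Y_1^{+-1} = -+ c sin(theta) e^{+-i phi}.
    if m == 0:
        dth = -np.sqrt(3/(4*np.pi))*st            # d(Y_1^0)/d(theta)
        dph = np.zeros_like(st)
    else:
        c = np.sqrt(3/(8*np.pi))
        e = np.exp(1j*m*ph)
        dth = -m*c*ct*e                           # d(Y_1^m)/d(theta)
        dph = -1j*c*e                             # (1/sin(theta)) d(Y_1^m)/d(phi)
    return (dth[..., None]*that + dph[..., None]*phat)/np.sqrt(2)

def t2_inner(a, b, w):
    # <a, b>_{T^2(S^2)} by quadrature
    return np.sum(w*np.sum(a*np.conj(b), axis=-1))
```

With these ingredients, $I_s(z)$ is obtained by multiplying a sampled far-field pattern by the phase factor $e^{ik(d-\hat{x})\cdot z}$ and taking the inner products above against $U_1^m$ and $V_1^m=\hat{x}\wedge U_1^m$.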
\medskip
\hrule
\medskip
\noindent {\bf Algorithm: Locating Scheme S}
\medskip
\hrule
\begin{enumerate}[1)]
\item For an unknown EM scatterer $\Omega^{(s)}$ in \eqref{eq:small scatterer}, collect the far-field data by sending a
single pair of detecting EM plane waves specified in \eqref{eq:plane waves}.
\item Select a sampling region with a mesh $\mathcal{T}_h$ containing $\Omega^{(s)}$.
\item For each point $z\in \mathcal{T}_h$, calculate $I_s(z). $
\item Locate all the significant local maxima of $I_s(z)$ on $\mathcal{T}_h$, which represent the locations of the scatterer components.
\end{enumerate}
\hrule
\medskip
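Step 4) of the scheme above reduces to peak extraction on the sampling mesh. A minimal sketch of such an extraction (our own illustration; the $3\times 3\times 3$ neighborhood and the significance threshold are arbitrary choices):

```python
import numpy as np

def significant_local_maxima(I, threshold=0.5):
    """Return the grid indices of strict local maxima of the 3D indicator
    array I that exceed `threshold` times its global maximum."""
    peaks = []
    cut = threshold*I.max()
    nx, ny, nz = I.shape
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            for k in range(1, nz - 1):
                v = I[i, j, k]
                if v < cut:
                    continue
                nbhd = I[i-1:i+2, j-1:j+2, k-1:k+2]
                # strict maximum over the 3x3x3 neighborhood
                if v >= nbhd.max() and np.count_nonzero(nbhd == v) == 1:
                    peaks.append((i, j, k))
    return peaks
```

The returned indices correspond to the sampling points $z\in\mathcal{T}_h$ taken as the reconstructed component locations.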
\subsection{Scheme R}
Next, we consider the locating of multiple obstacles of regular size. For this locating scheme, one must require that the following generic uniqueness result holds for the inverse scattering problem. Let $O_1$ and $O_2$ be two PEC obstacles, both assumed to be bounded simply connected Lipschitz domains in $\mathbb{R}^3$ containing the origin. Then
\begin{equation}\label{eq:uniqueness}
\text{$A(\hat{x}; O_1)=A(\hat{x}; O_2)$ if and only if $O_1=O_2$. }
\end{equation}
This result implies that by using a single far-field measurement, one can uniquely determine an obstacle. It is widely believed that such a uniqueness result holds, but only limited progress has been achieved in the literature; see, e.g., \cite{L,LZr,LYZ}. Throughout the present study, we shall assume that such a generic uniqueness result holds true.
We now briefly recall {\it Scheme R} in \cite{LiLiuShangSun} for locating multiple regular-size obstacles. Let $l_r\in\mathbb{N}$ and let $G_j$, $1\leq j\leq l_r$ be bounded simply connected Lipschitz domains containing the origin in $\mathbb{R}^3$. Set
\begin{equation}\label{eq:regular component}
\Omega_j^{(r)}=z_j+G_j,\quad z_j\in\mathbb{R}^3,\ \ 1\leq j\leq l_r.
\end{equation}
Each $\Omega_j^{(r)}$ denotes a PEC obstacle located at the position $z_j\in\mathbb{R}^3$. It is required that
\begin{equation}\label{eq:regular condition}
\text{diam}(\Omega_j^{(r)})=\text{diam}(G_j)\sim 1,\ 1\leq j\leq l_r;\ \ \ L=\min_{1\leq j, j'\leq l_r, j\neq j'}\text{dist}(z_j, z_{j'})\gg 1.
\end{equation}
Furthermore, there exists an admissible reference obstacle space
\begin{equation}\label{eq:reference space}
\mathscr{S}:=\{\Sigma_j\}_{j=1}^{l'},
\end{equation}
where each $\Sigma_j\subset\mathbb{R}^3$ is a bounded simply connected Lipschitz domain that contains the origin and
\begin{equation}\label{eq:assumption 1}
\Sigma_j\neq \Sigma_{j'},\quad \mbox{for}\ \ j\neq j',\ 1\leq j, j'\leq l',
\end{equation}
such that
\begin{equation}\label{eq:shape known}
G_j\in\mathscr{S},\quad j=1,2,\ldots,l_r.
\end{equation}
The admissible class $\mathscr{S}$ is required to be known in advance, and by reordering if necessary, it is assumed that
\begin{equation}\label{eq:assumption2}
\| A(\hat{x};\Sigma_{j}) \|_{T^2(\mathbb{S}^2)}\geq \| A(\hat x;\Sigma_{j+1}) \|_{T^2(\mathbb{S}^2)},\quad j=1,2,\ldots,l'-1.
\end{equation}
Let
\begin{equation}\label{eq:regular scatterer}
\Omega^{(r)}:=\bigcup_{j=1}^{l_r} \Omega_j^{(r)}.
\end{equation}
Then $\Omega^{(r)}$ denotes the regular-size scatterer for our current study, which may consist of multiple obstacle components. The second condition in \eqref{eq:regular condition} means that the components are sparsely distributed. It is numerically observed in \cite{LiLiuShangSun} that if the distance is larger than a few wavelengths, then Scheme R works effectively. The assumption \eqref{eq:shape known} indicates that certain {\it a priori} knowledge of the target scatterer is required. It is remarked that $l_r$ is not necessarily the same as $l'$. Define $l'$ indicator functions as follows,
\begin{equation}\label{eq:indicator regular}
I^j_r(z)=\frac{\bigg| \langle A(\hat x;\Omega^{(r)}), e^{ik(d-\hat x)\cdot z} A(\hat x; \Sigma_j) \rangle_{T^2(\mathbb{S}^2)} \bigg|}{\| A(\hat x; \Sigma_j) \|^2_{T^2(\mathbb{S}^2)}},\quad j=1,2,\ldots, l',\quad z\in\mathbb{R}^3.
\end{equation}
The following indicating behavior of $I^j_r(z)$'s is proved in \cite{LiLiuShangSun} and summarized below.
\begin{thm}\label{thm:main2}
Consider the indicator function $I_r^1(z)$ introduced in \eqref{eq:indicator regular}. Suppose there exists $J_0\subset\{1,2,\ldots,l_r\}$ such that for $j_0\in J_0$, $G_{j_0}=\Sigma_1$, whereas $G_j\neq \Sigma_1$ for $j\in \{1,2,\ldots,l_r\}\backslash J_0$. Then for each $z_j$, $j=1,2,\ldots,l_r$, there exists an open neighborhood of $z_j$, $neigh(z_j)$, such that
\begin{enumerate}
\item[(i).]~if $j\in J_0$, then
\begin{equation}\label{eq:further ind 1}
\widetilde{I}_r^1(z):=|I_r^1(z)-1|\leq \mathcal{O}\left( \frac 1 L \right ),\quad z\in neigh(z_{j}),
\end{equation}
and moreover, $z_{j}$ is a local minimum point for $\widetilde{I}_r^1(z)$;
\item[(ii).]~if $j\in \{1,2,\ldots,l_r\}\backslash J_0$, then there exists $\epsilon_0>0$ such that
\begin{equation}\label{eq:further ind 2}
\widetilde{I}_r^1(z):=|I_r^1(z)-1|\geq \epsilon_0+\mathcal{O}\left( \frac 1 L \right ),\quad z\in neigh(z_j).
\end{equation}
\end{enumerate}
\end{thm}
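Behind Theorem~\ref{thm:main2} lies the standard translation relation for far-field patterns, $A(\hat{x}; z+\Sigma, d, p, k)=e^{ik(d-\hat{x})\cdot z}A(\hat{x}; \Sigma, d, p, k)$, which by the Cauchy-Schwarz inequality forces $I_r^1(z)$ to attain the value one exactly at the true location in the single-component case. The following Python sketch checks this with purely synthetic data (the "reference pattern" below is a random tangential field, not a physical far field; the grid and parameters are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Quadrature grid on S^2 (midpoint in theta, uniform in phi -- crude but
# sufficient for this illustration).
n_t, n_p = 60, 120
theta = (np.arange(n_t) + 0.5)*np.pi/n_t
phi = 2*np.pi*np.arange(n_p)/n_p
T, P = np.meshgrid(theta, phi, indexing="ij")
w = np.sin(T)*(np.pi/n_t)*(2*np.pi/n_p)
xhat = np.stack([np.sin(T)*np.cos(P), np.sin(T)*np.sin(P), np.cos(T)], axis=-1)

k = 1.0
d = np.array([0.0, 0.0, 1.0])       # incident direction

def inner(a, b):
    return np.sum(w*np.sum(a*np.conj(b), axis=-1))

def I_r(A_obs, A_ref, z):
    # the indicator of \eqref{eq:indicator regular} for one reference pattern
    phase = np.exp(1j*k*np.sum((d - xhat)*z, axis=-1))
    return abs(inner(A_obs, phase[..., None]*A_ref))/inner(A_ref, A_ref).real

# synthetic tangential "reference far-field pattern" (random, not physical)
A_ref = rng.standard_normal((n_t, n_p, 3)) + 1j*rng.standard_normal((n_t, n_p, 3))
A_ref -= np.sum(A_ref*xhat, axis=-1, keepdims=True)*xhat

# far field of the same component translated to z0 (translation relation)
z0 = np.array([2.0, -1.0, 0.5])
A_obs = np.exp(1j*k*np.sum((d - xhat)*z0, axis=-1))[..., None]*A_ref
```

Here $I_r(A_{\rm obs},A_{\rm ref},z_0)=1$ identically, while sampling points a few wavelengths away yield markedly smaller values.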
Based on Theorem~\ref{thm:main2}, the {\it Scheme R} for locating the multiple components in $\Omega^{(r)}$ can be successively formulated as follows.
\medskip
\hrule
\medskip
\noindent {\bf Algorithm: Locating Scheme R}
\medskip
\hrule
\begin{enumerate}[1)]
\item For an unknown EM scatterer $\Omega^{(r)}$ in \eqref{eq:regular scatterer}, collect the far-field data by sending a single pair of detecting EM plane waves specified in \eqref{eq:plane waves}.
\item Select a sampling region with a mesh $\mathcal{T}_h$ containing $\Omega^{(r)}$.
\item Collect in advance the far-field patterns associated with the admissible reference scatterer space $\mathscr{S}$
in \eqref{eq:reference space}, and reorder $\mathscr{S}$ if necessary to make it satisfy
\eqref{eq:assumption2}, and also verify the generic assumption \eqref{eq:uniqueness}.
\item Set $j=1$.
\item For each point $z\in \mathcal{T}_h$, calculate $I_r^j(z)$ (or $\widetilde{I}_r^j(z)=|I_r^j(z)-1|$).
\item Locate all those significant local maxima of $I_r^j(z)$ such that $I_r^j(z)\sim 1$ (or the minima of $\widetilde{I}_r^j(z)$ on $\mathcal{T}_h$ such that $\widetilde{I}_r^j(z)\ll 1$), where scatterer components of the form $z+\Sigma_j$ are located.
\item Trim all those $z+\Sigma_j$ found in 6) from $\mathcal{T}_h$.
\item If $\mathcal{T}_h=\emptyset$ or $j=l'$, then Stop; otherwise, set $j=j+1$, and go to 5).
\end{enumerate}
\hrule
\medskip
\begin{rem}\label{rem:MediumObstacle}
By \eqref{eq:uniqueness} and \eqref{eq:assumption 1}, it is readily seen that
\begin{equation}\label{eq:rerell}
A(\hat{x}; \Sigma_j)\neq A(\hat{x}; \Sigma_{j'}),\quad j\neq j',\ 1\leq j, j'\leq l'.
\end{equation}
\eqref{eq:rerell} plays a critical role in justifying the indicating behavior of $I_r^j(z)$ in Theorem~\ref{thm:main2}. Nevertheless, since the reference space
\eqref{eq:reference space} is given, one can verify \eqref{eq:rerell} in advance. On the other hand, one can also include inhomogeneous medium components in the admissible reference space provided that the relation \eqref{eq:rerell} is satisfied. For an inhomogeneous medium component in $\mathscr{S}$, its content is required to be known in advance; see Remark~\ref{rem:MediumObstacle2} below.
\end{rem}
{\it Scheme R} could find important practical applications, e.g., in radar technology for locating an unknown group of aircrafts, where one has {\it a priori} knowledge of the possible models of the target airplanes. However, we note here some important practical situations that Scheme R does not cover. Indeed, in Scheme R, it is required that each component, say $\Omega_1^{(r)}$, is a translation of the reference obstacle $\Sigma_1$, namely $\Omega_1^{(r)}=z+\Sigma_1$. This means that, in addition to the shape of the obstacle component $\Omega_1^{(r)}$, one must also know its orientation and size in advance (two concepts to be mathematically specified in Section~\ref{sect:3}). In radar technology, this means that in addition to the model of each aircraft, one must also know in which direction the aircraft is heading. Clearly, this limits the applicability of the locating scheme. In the next section, we shall propose strategies to relax the requirements on orientation and size. Furthermore, we shall consider the locating of multiple multi-scale scatterers, which may include, at the same time, both small-size and regular-size components. To that end, we introduce the multiple multi-scale scatterer for our subsequent study
\begin{equation}\label{eq:multiscale scatterer}
\Omega^{(m)}:=\Omega^{(s)}\cup\Omega^{(r)},
\end{equation}
where $\Omega^{(s)}$ and $\Omega^{(r)}$ are, respectively, given in \eqref{eq:small scatterer} and \eqref{eq:regular scatterer}.
\section{Scheme R with augmented reference spaces}\label{sect:3}
In this section, we propose an enhanced version of Scheme R with augmented reference spaces to image a regular-size scatterer with multiple components of different shapes, orientations and sizes. This goal is achieved by collecting additional reference far-field data for a set of a priori known components, in particular data associated with their possible orientations and sizes.
Let $\Pi_{\theta,\phi,\psi}$ denote the 3D rotation whose Euler angles are $\theta,\phi$ and $\psi$ with the $x_1-x_2-x_3$ convention for $x=(x_1,x_2,x_3)\in\mathbb{R}^3$. That is, $\Pi_{\theta,\phi,\psi} x=U(\theta,\phi,\psi) x$, where $U\in SO(3)$ is given by
\begin{equation}\label{eq:U}
\!\!\!\!\!U=\begin{pmatrix}
\cos\theta\cos\psi & -\cos\phi\sin\psi+\sin\phi\sin\theta\cos\psi & \sin\phi\sin\psi+\cos\phi\sin\theta\cos\psi\\
\cos\theta\sin\psi & \cos\phi\cos\psi+\sin\phi\sin\theta\sin\psi & -\sin\phi\cos\psi+\cos\phi\sin\theta\sin\psi \\
-\sin\theta & \sin\phi\cos\theta & \cos\phi\cos\theta
\end{pmatrix}
\end{equation}
with $0\leq \theta,\phi\leq 2\pi$ and $0\leq \psi\leq \pi$. In the sequel, we suppose there exist triplets $(\theta_j,\phi_j,\psi_j)$, $j=1,2,\ldots, l_r$ such that
\begin{equation}\label{eq:regular scatterer 2}
\Omega_j^{(r)}=z_j+\Pi_{\theta_j,\phi_j,\psi_j} G_j,
\end{equation}
where $G_j\in\mathscr{S}$, with $\mathscr{S}$ defined in \eqref{eq:reference space}.
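As a sanity check on \eqref{eq:U}, the rotation can be assembled from elementary rotations about the coordinate axes; reading the $x_1$-$x_2$-$x_3$ convention as the composition $R_{x_3}(\psi)R_{x_2}(\theta)R_{x_1}(\phi)$ (our interpretation) reproduces, in particular, the third row of \eqref{eq:U} and yields a matrix in $SO(3)$:

```python
import numpy as np

def Rx(a):
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

def Ry(a):
    return np.array([[ np.cos(a), 0, np.sin(a)],
                     [0, 1, 0],
                     [-np.sin(a), 0, np.cos(a)]])

def Rz(a):
    return np.array([[np.cos(a), -np.sin(a), 0],
                     [np.sin(a),  np.cos(a), 0],
                     [0, 0, 1]])

def euler_U(theta, phi, psi):
    # Composition R_z(psi) R_y(theta) R_x(phi): our reading of the
    # x1-x2-x3 Euler convention behind \eqref{eq:U}.
    return Rz(psi) @ Ry(theta) @ Rx(phi)
```

Orthogonality $U^TU=I$ and $\det U=1$ can then be confirmed numerically for arbitrary angle triplets.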
Now, we let
\begin{equation}\label{eq:rot2}
\Omega^{(r)}=\bigcup_{j=1}^{l_r} (z_j+\Pi_{\theta_j,\phi_j,\psi_j} G_j):=\bigcup_{j=1}^{l_r} (z_j+\widetilde G_j)
\end{equation}
denote the regular-size target scatterer for our current study. Compared to the regular-size scatterer in \eqref{eq:regular scatterer} considered in \cite{LiLiuShangSun} (cf. \eqref{eq:regular component}--\eqref{eq:regular scatterer}), the scatterer introduced in \eqref{eq:rot2} possesses the new feature that each component is allowed to be rotated. In the sequel, the Euler angles $(\theta_j,\phi_j,\psi_j)$ will be referred to as the {\it orientation} of the scatterer component $\Omega_j^{(r)}$ in \eqref{eq:regular scatterer 2}.
Next, we also introduce a scaling/dilation operator $\Lambda_{\tau_j}$, $\tau_j\in\mathbb{R}_+$, and for $\Omega_j^{(r)}=z_j+G_j$, $G_j\in\mathscr{S}$, we set
\begin{equation}\label{eq:sca1}
\Omega_j^{(r)}:=z_j+\Lambda_{\tau_j} G_j,
\end{equation}
where $\Lambda_{\tau_j} G_j:=\{{\tau_j} x\,; \ x\in G_j\}$. Now, for a sequence $\{\tau_j\}_{j=1}^{l_r}\subset\mathbb{R}_+$,
we set
\begin{equation}\label{eq:sca2}
\Omega^{(r)}=\bigcup_{j=1}^{l_r} (z_j+\Lambda_{\tau_j} G_j).
\end{equation}
We shall call $\tau_j$ the {\it size} or {\it scale} of the component $\Omega_j^{(r)}$ relative to the reference one $G_j$.
For our subsequent study, we shall consider locating a regular-size scatterer whose components may be both rotated and scaled,
\begin{equation}\label{eq:regular scatterer 4}
\Omega^{(r)}=\bigcup_{j=1}^{l_r} (z_j+\Pi_{\theta_j,\phi_j,\psi_j}\Lambda_{\tau_j} G_j):=\bigcup_{j=1}^{l_r} (z_j+\widehat G_j).
\end{equation}
Compared to the scatterer in \eqref{eq:regular scatterer} considered in \cite{LiLiuShangSun}, each component of the scatterer introduced in \eqref{eq:regular scatterer 4} is allowed to be both rotated and scaled.
To that end, we first establish how the far-field pattern transforms when the underlying scatterer is rotated or scaled.
\begin{prop}\label{prop:rot1}
Let $G$ be a bounded simply connected Lipschitz domain containing the origin, which represents a PEC obstacle. Then, we have that
\begin{equation}\label{eq:rot3}
A(\hat{x}; \Pi_{\theta,\phi,\psi}G, d, p, k)=UA(U^T\hat{x}; G, U^Td, U^Tp, k),
\end{equation}
where $U=U(\theta,\phi,\psi)$ is the rotation matrix corresponding to $\Pi_{\theta,\phi,\psi}$; and
\begin{equation}\label{eq:sca3}
A(\hat{x}; \Lambda_\tau G, d, p, k)=\tau A(\hat{x}; G, d, p, k\tau).
\end{equation}
\end{prop}
\begin{proof}
Let $(E, H)\in H_{loc}(\text{curl}; \mathbb{R}^3\backslash\overline{\Pi_{\theta,\phi,\psi} G})\wedge H_{loc}(\text{curl}; \mathbb{R}^3\backslash\overline{\Pi_{\theta,\phi,\psi} G})$ be the pair of solutions to the following Maxwell system
\begin{equation}\label{eq:eee1}
\begin{split}
& \nabla\wedge E-ik H=0,\qquad\qquad \nabla\wedge H+ikE=0\quad \mbox{in\ \ $\mathbb{R}^3\backslash\overline{\Pi_{\theta,\phi,\psi}G}$ },\\
& \nu\wedge E=0\quad\mbox{on\ \ $\partial (\Pi_{\theta,\phi,\psi}G)$},\quad\, \ E=E^i+E^+\quad \mbox{in\ \ $\mathbb{R}^3\backslash\overline{\Pi_{\theta,\phi,\psi}G}$ },\\
& {\lim_{|x|\rightarrow+\infty}|x|\left| (\nabla\wedge E^+)(x)\wedge\frac{x}{|x|}- ik E^+(x) \right|=0},
\end{split}
\end{equation}
where $E^i(x)=p e^{ikx\cdot d}$ and $\nu$ is the outward unit normal vector to $\partial(\Pi_{\theta,\phi,\psi} G)$.
Set
\begin{equation}\label{eq:ttt1}
\begin{split}
\widetilde{E}=&\Pi_{\theta,\phi,\psi}^* E:=\Pi_{\theta,\phi,\psi}^{-1}\circ E\circ\Pi_{\theta,\phi,
\psi}=U^TE\circ U\\
\widetilde{H}=&\Pi_{\theta,\phi,\psi}^* H:=\Pi_{\theta,\phi,\psi}^{-1}\circ H\circ\Pi_{\theta,\phi,
\psi}=U^TH\circ U
\end{split}\qquad \mbox{in\ \ $\mathbb{R}^3\backslash\overline{G}$},
\end{equation}
and
\begin{equation}\label{eq:ttt2}
\widetilde{E}^i(x):=(U^Tp) e^{ik x\cdot (U^Td)}.
\end{equation}
Then, by the transformation properties of Maxwell's equations (see, e.g., \cite{LiuZhou}), it is straightforward to verify that
\begin{equation}\label{eq:ttt3}
\begin{split}
&\nabla\wedge\widetilde{E}-ik\widetilde{H}=0,\quad \nabla\wedge\widetilde{H}+ik\widetilde{E}=0\quad\mbox{in\ \ $\mathbb{R}^3\backslash \overline{G}$},\\
& \widetilde\nu\wedge\widetilde E=0\quad\mbox{on\ \ $\partial G$},\qquad \widetilde{E}=\widetilde{E}^i+\widetilde{E}^+\quad \mbox{in\ \ $\mathbb{R}^3\backslash\overline{G}$},\\
& {\lim_{|x|\rightarrow+\infty}|x|\left| (\nabla\wedge \widetilde{E}^+)(x)\wedge\frac{x}{|x|}- ik \widetilde{E}^+(x) \right|=0},
\end{split}
\end{equation}
where $\widetilde\nu$ is the outward unit normal vector to $\partial G$. Clearly, $A(\hat{x}; \Pi_{\theta,\phi,\psi} G)$ can be read off from the large-$|x|$ asymptotics of $E(x)$ in \eqref{eq:eee1},
\begin{equation}\label{eq:ffnn1}
E(x)=p e^{ikx\cdot d}+\frac{e^{ik|x|}}{|x|} A\left(\frac{x}{|x|}; \Pi_{\theta,\phi,\psi} G,d,p,k\right)+\mathcal{O}\left(\frac{1}{|x|^2}\right).
\end{equation}
Hence, by \eqref{eq:ttt1} and \eqref{eq:ffnn1}, we have
\begin{equation}\label{eq:ffnn2}
\begin{split}
\widetilde E(x)=&U^TE(Ux)\\
=&U^Tpe^{ik Ux\cdot d}+\frac{e^{ik|Ux|}}{|Ux|} U^T A\left(\frac{Ux}{|Ux|}; \Pi_{\theta,\phi,\psi} G, d, p, k\right)+\mathcal{O}\left(\frac{1}{|Ux|^2}\right)\\
=&U^Tp e^{ikx\cdot U^Td}+\frac{e^{ik|x|}}{|x|} U^TA(U\hat{x}; \Pi_{\theta,\phi,\psi}G, d, p, k)+\mathcal{O}\left(\frac{1}{|x|^2}\right).
\end{split}
\end{equation}
By \eqref{eq:ttt3} and \eqref{eq:ffnn2}, one can readily see that
\[
U^TA(U\hat{x};\Pi_{\theta,\phi,\psi} G, d, p, k)=A(\hat{x}; G, \widetilde{E}^i)=A(\hat{x}; G, U^Td,U^Tp,k),
\]
which immediately implies \eqref{eq:rot3}.
In a completely similar manner, one can show \eqref{eq:sca3}. The proof is complete.
\end{proof}
Proposition~\ref{prop:rot1} suggests that in order to locate a scatterer $\Omega^{(r)}$ in \eqref{eq:rot2} by using Scheme R, one can make use of the multi-polarization and multi-incident-direction far-field data, namely $A(\hat{x}; p, d, k)$ for all $p\in\mathbb{R}^3$, $d\in\mathbb{S}^2$ and a fixed $k\in\mathbb{R}_+$. On the other hand, in order to still make use of a single far-field measurement for the locating, one can augment the reference space $\mathscr{S}$ by letting
\begin{equation}\label{eq:aug1}
\widetilde{\mathscr{S}}=\Pi_{\theta,\phi,\psi}\mathscr{S}:=\{\Pi_{\theta,\phi,\psi}\Sigma_j\}_{j=1}^{l'},\quad (\theta,\phi,\psi)\in [0,2\pi]^2\times [0,\pi].
\end{equation}
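In practice the augmentation \eqref{eq:aug1} is realized by sampling the Euler angles on a finite grid, as formalized next. A small illustrative enumeration of such a sampled dictionary (the reference shapes are kept as abstract indices here, and the grid resolutions are arbitrary choices):

```python
import itertools
import numpy as np

def augmented_reference_space(l_prime, n_angles):
    """Enumerate (shape index, Euler-angle triplet) pairs forming a discrete
    augmented reference space; n_angles = (n_theta, n_phi, n_psi) controls
    the angular mesh. Shapes themselves are abstract indices here."""
    n_t, n_f, n_s = n_angles
    thetas = np.linspace(0.0, 2*np.pi, n_t, endpoint=False)
    phis = np.linspace(0.0, 2*np.pi, n_f, endpoint=False)
    psis = np.linspace(0.0, np.pi, n_s)
    angles = list(itertools.product(thetas, phis, psis))
    space = [(j, ang) for j in range(l_prime) for ang in angles]
    return space, len(angles)
```

The cardinality of the resulting dictionary is the number of reference shapes times the number of sampled orientations, mirroring the count $\widetilde{l}_h=l'\times N_h$ below.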
Furthermore, from a practical viewpoint, we introduce a discrete approximation of $\widetilde{\mathscr{S}}$ and set
\begin{equation}\label{eq:discrete aug1}
\widetilde{\mathscr{S}}_h:=\{\Pi_{\theta^h,\phi^h,\psi^h}\Sigma_j\}_{j=1}^{l'}=\{\widetilde{\Sigma}_j\}_{j=1}^{\widetilde{l}_h},
\end{equation}
where $(\theta^h,\phi^h,\psi^h)$ runs over an equidistributed set of sample points in $[0,2\pi]^2\times[0,\pi]$ with angular mesh-size $h\in\mathbb{R}_+$ and cardinality $N_h$, and $\widetilde{l}_h:=l'\times N_h$. By reordering if necessary, we assume that the non-increasing relation \eqref{eq:assumption2} also holds for these components. Next, based on the same single set of far-field measurement data as in Scheme S, one can calculate $\widetilde{l}_h$ indicator functions according to \eqref{eq:indicator regular}, but with the reference scatterers taken from $\widetilde{\mathscr{S}}_h$. We denote the $\widetilde{l}_h$ indicator functions by $I_{h}^j(z)$, $1\leq j\leq \widetilde{l}_h$. Then, we have
\begin{thm}\label{thm:main23}
Consider the multiple scatterers introduced in \eqref{eq:rot2} and the indicator function $I_h^1(z)$ introduced above. Let $\widetilde\Sigma_1\in\widetilde{\mathscr{S}}_h$ be such that
\begin{equation}\label{eq:main231}
\widetilde\Sigma_1=\Pi_{\theta_1^h,\phi_1^h,\psi_1^h}\Sigma_{m_0}\quad \mbox{with}\quad \Sigma_{m_0}\in\mathscr{S}.
\end{equation}
Suppose there exists $J_0\subset\{1,2,\ldots,l_r\}$ such that for $j_0\in J_0$,
\[
\Omega_{j_0}^{(r)}=z_{j_0}+\widetilde{G}_{j_0}=z_{j_0}+\Pi_{\theta_{j_0},\phi_{j_0},\psi_{j_0}} G_{j_0}
\]
with
\begin{equation}\label{eq:cond1}
G_{j_0}=\Sigma_{m_0}\quad\mbox{and}\quad \|(\theta_{j_0},\phi_{j_0}, \psi_{j_0})-(\theta_1^h,\phi_1^h,\psi_1^h)\|_{l^\infty}=\mathcal{O}(h);
\end{equation}
whereas for the other components $\Omega_{j}^{(r)}$, $j\in\{1,2,\ldots,l_r\}\backslash J_0$, at least one of the two conditions in \eqref{eq:cond1} is violated. Then for each $z_j$, $j=1,2,\ldots,l_r$, there exists an open neighborhood of $z_j$, $neigh(z_j)$, such that
\begin{enumerate}
\item[(i).]~if $j\in J_0$, then
\begin{equation}\label{eq:main232}
\widetilde{I}_h^1(z):=|I_h^1(z)-1|\leq \mathcal{O}\left( \frac 1 L+h \right ),\quad z\in neigh(z_{j}),
\end{equation}
and moreover, $z_{j}$ is a local minimum point for $\widetilde{I}_h^1(z)$;
\item[(ii).]~if $j\in \{1,2,\ldots,l_r\}\backslash J_0$, then there exists $\epsilon_0>0$ such that
\begin{equation}\label{eq:main233}
\widetilde{I}_h^1(z):=|I_h^1(z)-1|\geq \epsilon_0+\mathcal{O}\left( \frac 1 L+h \right ),\quad z\in neigh(z_j).
\end{equation}
\end{enumerate}
\end{thm}
\begin{proof}
Let
\[
\widetilde{\Gamma}_1:=\Pi_{\theta_{j_0},\phi_{j_0},\psi_{j_0}}\Sigma_{m_0},
\]
and
\[
H^1_r(z)=\frac{\bigg| \langle A(\hat x;\Omega^{(r)}), e^{ik(d-\hat x)\cdot z} A(\hat x; \widetilde\Gamma_1) \rangle_{T^2(\mathbb{S}^2)} \bigg|}{\| A(\hat x; \widetilde\Gamma_1) \|^2_{T^2(\mathbb{S}^2)}},\quad z\in\mathbb{R}^3.
\]
By an argument completely similar to the proof of Theorem~2.1 in \cite{LiLiuShangSun}, one can show that $H_r^1(z)$ possesses the two indicating behaviors given in \eqref{eq:further ind 1} and \eqref{eq:further ind 2}. Next, by Proposition~\ref{prop:rot1}, we have
\begin{equation}\label{eq:pA}
A(\hat{x}; \widetilde\Gamma_1)=A(\hat{x}; \Pi_{\theta_{j_0},\phi_{j_0},\psi_{j_0}}\Sigma_{m_0})=U_0A(U_0^T\hat{x}; \Sigma_{m_0}, U_0^Tp, U_0^Td, k),
\end{equation}
and
\begin{equation}\label{eq:pB}
A(\hat{x}; \widetilde\Sigma_1)=A(\hat{x}; \Pi_{\theta_1^h,\phi_1^h,\psi_1^h}\Sigma_{m_0})=U_hA(U_h^T\hat{x}; \Sigma_{m_0}, U_h^Tp, U_h^Td, k),
\end{equation}
where $U_0$ and $U_h$ are the rotation matrices corresponding to $\Pi_{\theta_{j_0},\phi_{j_0},\psi_{j_0}}$ and $\Pi_{\theta_1^h,\phi_1^h,\psi_1^h}$, respectively. By the second assumption in \eqref{eq:cond1}, it is straightforward to show that
\begin{equation}\label{eq:pp3}
\|A(\hat{x}; \widetilde\Gamma_1)-A(\hat{x};\widetilde\Sigma_1)\|_{T^2(\mathbb{S}^2)}=\mathcal{O}(h).
\end{equation}
Finally, by \eqref{eq:pp3}, one has by direct verification that
\begin{equation}\label{eq:pp4}
|I_h^1(z)-H_r^1(z)|=\mathcal{O}(h),\quad z\in neigh(z_j),\quad j=1,2,\ldots, l_r.
\end{equation}
It is remarked that the estimate in \eqref{eq:pp4} is independent of $neigh(z_j)$, $j=1,\ldots, l_r$.
By \eqref{eq:pp4} and the indicating behaviors of $H_r^1(z)$, one immediately has \eqref{eq:main232} and \eqref{eq:main233}.
\end{proof}
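To illustrate the structure of the indicator functions used above, here is a toy discrete sketch in Python: the $T^2(\mathbb{S}^2)$ inner product is replaced by a weighted quadrature sum over observation directions, and the vector-valued far-field patterns by complex scalar samples. All names and the quadrature rule are illustrative assumptions, not the paper's implementation:

```python
import cmath

def indicator(A_obs, A_ref, weights, xhats, d, k, z):
    """Toy discrete analogue of the indicator I(z):
    |<A_obs, e^{ik(d - xhat).z} A_ref>| / ||A_ref||^2,
    with the T^2(S^2) inner product replaced by a quadrature sum."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    # test function: e^{ik(d - xhat) . z} times the reference pattern
    test = [cmath.exp(1j * k * dot([di - xi for di, xi in zip(d, x)], z)) * r
            for x, r in zip(xhats, A_ref)]
    num = abs(sum(w * a * t.conjugate() for w, a, t in zip(weights, A_obs, test)))
    den = sum(w * abs(r) ** 2 for w, r in zip(weights, A_ref))
    return num / den
```

If `A_obs` is exactly a phase-shifted copy of `A_ref`, the indicator attains the value $1$ at the true shift $z$, mirroring the indicating behavior stated in the theorem.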
Based on Theorem~\ref{thm:main23}, we propose the following enhanced locating scheme for locating the multiple components of $\Omega^{(r)}$ in \eqref{eq:rot2}.
\medskip
\hrule
\medskip
\noindent {\bf Algorithm: Locating Scheme AR}
\medskip
\hrule
\medskip
This scheme is the same as Scheme R in Section~\ref{sect:2}, with Steps~3), 5) and 7), respectively, modified as follows.
\smallskip
\noindent 3)~Augment the reference space $\mathscr{S}$ to be $\widetilde{\mathscr{S}}_h$ in \eqref{eq:discrete aug1}, and reorder the elements in $\widetilde{\mathscr{S}}_h$ such that
\begin{equation}\label{eq:agu 1assumption2}
\| A(\hat{x};\widetilde{\Sigma}_{j}) \|_{T^2(\mathbb{S}^2)}\geq \| A(\hat x; \widetilde{\Sigma}_{j+1}) \|_{T^2(\mathbb{S}^2)},\quad j=1,2,\ldots,\widetilde{l}_h-1.
\end{equation}
\smallskip
\noindent 5) Replace $I_r^j(z)$ by $I_h^j(z)$.
\smallskip
\noindent 7) Trim all those $z+\widetilde\Sigma_j$ found in Step 6) from $\mathcal{T}_h$.
\medskip
\hrule
\medskip
\medskip
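The reordering in Step 3) is a plain sort of the augmented reference far-field patterns by their (discrete) $T^2(\mathbb{S}^2)$ norms; a minimal sketch, assuming scalar far-field samples and generic quadrature weights:

```python
def reorder_by_norm(far_fields, weights):
    """Step 3) of Scheme AR: order the augmented reference far-field patterns so
    that their discrete T^2(S^2) norms are non-increasing."""
    # squared quadrature norm of one sampled pattern
    norm_sq = lambda A: sum(w * abs(a) ** 2 for w, a in zip(weights, A))
    return sorted(far_fields, key=norm_sq, reverse=True)
```

The sorted list then satisfies $\|A_j\|\geq\|A_{j+1}\|$ for consecutive entries, as required in \eqref{eq:agu 1assumption2}.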
\begin{rem}\label{rem:ar1}
We remark that in Scheme AR, if certain {a priori} information is available about the possible range of the orientations of the scatterer components, it is sufficient for the augmented reference space $\widetilde{\mathscr{S}}_h$ to cover that range only. Clearly, Scheme AR can not only locate the multiple components of $\Omega^{(r)}$ in \eqref{eq:rot2}, but can also recover the orientation of each scatterer component.
\end{rem}
\begin{rem}\label{rem:MediumObstacle2}
Similar to Remark~\ref{rem:MediumObstacle}, our Scheme AR can be extended to include inhomogeneous medium components as long as the relation \eqref{eq:rerell} holds for the reference scatterers in $\widetilde{\mathscr{S}}_h$. Indeed, in our numerical experiments in Section~\ref{sect:5}, we consider the case that the reference scatterers are composed of two inhomogeneous media, $(\Sigma_j; \varepsilon_j, \mu_j, \sigma_j)$, $j=1,2$, with $\varepsilon_j$, $\mu_j$ and $\sigma_j$ all constants known in advance. In this case, by following the same argument, Proposition~\ref{prop:rot1} remains valid, which in turn guarantees that Theorem~\ref{thm:main23} remains valid as well. Furthermore, we would like to emphasize that Scheme AR could be straightforwardly extended to work in a much more general setting where both inhomogeneous medium components with variable contents and PEC obstacles are present in the reference space, as long as the generic relation \eqref{eq:rerell} is satisfied.
\end{rem}
In an analogous manner, for a scatterer described in \eqref{eq:regular scatterer 4}, Scheme AR can be modified so that the reference space is further augmented by the sizes of the
components to be
\begin{equation}\label{eq:discrete aug2}
\widetilde{\mathscr{S}}_h:=\{\widetilde{\Sigma}_j\}_{j=1}^{\widetilde{l}_{h,m}}
=\cup_{h,m}\{\Pi_{\theta^h,\phi^h,\psi^h}\Lambda_{\tau^m}\Sigma_j\}_{j=1}^{l'},
\end{equation}
where $\tau^m$ runs over a uniform discretization of an interval $[s_1, s_2]$ with cardinality $N_m$, or over some other discrete distribution depending on the availability of certain {a priori} information on the relative sizes, and ${\widetilde{l}_{h,m}}=l'\times N_h\times N_m$. Here, $s_1, s_2$ are positive numbers such that $[s_1,s_2]$ contains the scales/sizes of all the scatterer components. With such an augmented reference space, Scheme AR can be used to locate the multiple components and also to recover both the orientations and the relative sizes of the scatterer $\Omega^{(r)}$ in \eqref{eq:regular scatterer 4}.
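Combinatorially, the rotation- and scale-augmented reference space in \eqref{eq:discrete aug2} is just a product index set; a small illustrative sketch (the shape labels and grids below are hypothetical placeholders):

```python
import itertools

def augmented_reference_labels(shapes, angle_grid, scale_grid):
    """Index set of the rotation- and scale-augmented reference space: one entry
    per (base shape, Euler-angle triple, scale), hence l' * N_h * N_m in total."""
    return list(itertools.product(shapes, angle_grid, scale_grid))

labels = augmented_reference_labels(
    ["B", "P", "K"],                       # l' = 3 base shapes
    [(0.0, 0.0, 0.0), (1.57, 0.0, 0.0)],   # N_h = 2 angle triples
    [0.5, 1.0, 2.0])                       # N_m = 3 scales
```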
\section{Locating multiple multi-scale scatterers}\label{sect:4}
In this section, we shall consider locating a multi-scale scatterer $\Omega^{(m)}$ as described in \eqref{eq:multiscale scatterer} with multiple components. In addition to the requirements imposed on the small component $\Omega^{(s)}$ and the regular-size component $\Omega^{(r)}$ in Section~\ref{sect:2}, we shall further assume that
\begin{equation}\label{eq:dist multiscale}
L_m:=\text{dist}(\Omega^{(s)}, \Omega^{(r)})\gg 1.
\end{equation}
By Lemmas~3.1 and 3.2 in \cite{LiLiuShangSun}, one has, respectively,
\begin{equation}\label{eq:mm1}
A(\hat{x}; \Omega^{(m)}, k)=A(\hat{x}; \Omega^{(s)}, k)+A(\hat{x}; \Omega^{(r)}, k)+\mathcal{O}\left(L_m^{-1}\right)\,,
\end{equation}
\begin{equation}\label{eq:mm2}
A(\hat{x}; \Omega^{(s)}, k)=\mathcal{O}((k\rho)^3).
\end{equation}
That is, if $k\sim 1$, the scattering information from the regular-size component $\Omega^{(r)}$ is dominant in the far-field pattern $A(\hat{x}; \Omega^{(m)})$, and the scattering contribution from the small component $\Omega^{(s)}$ can be taken as a small perturbation. Hence, a primitive way to locate the components of $\Omega^{(m)}$ proceeds in two stages as follows. First, using the single far-field pattern $A(\hat{x}; \Omega^{(m)})$ as the measurement data, one utilizes Scheme AR to locate the components of the regular-size scatterer $\Omega^{(r)}$. After the recovery of $\Omega^{(r)}$, its far-field pattern $A(\hat{x}; \Omega^{(r)})$ becomes known. By subtracting $A(\hat{x};\Omega^{(r)})$ from $A(\hat{x};\Omega^{(m)})$, one then obtains $A(\hat{x}; \Omega^{(s)})$ (approximately). Finally, by applying Scheme S with the far-field data $A(\hat{x};\Omega^{(s)})$, one can locate the components of $\Omega^{(s)}$. However, if the size contrast between $\Omega^{(r)}$ and $\Omega^{(s)}$ is too large, the scattering information of $\Omega^{(s)}$ will be hidden in the
noisy far-field data of $\Omega^{(r)}$. Hence, in order for the above two-stage scheme to work in locating $\Omega^{(m)}$, the size contrast between $\Omega^{(s)}$ and $\Omega^{(r)}$ cannot be excessively large. But in that case, the scattering effect of $\Omega^{(s)}$ constitutes a significant part of $A(\hat{x};\Omega^{(m)})$, which deteriorates the first-stage recovery and, consequently, the second-stage recovery as well. In order to overcome this dilemma in the multi-scale locating, we shall develop a subtle local re-sampling technique.
\medskip
\hrule
\medskip
\noindent {\bf Algorithm: Locating Scheme M}
\medskip
\hrule
\begin{enumerate}[1)]
\item Collect a single far-field measurement $A(\hat{x}; \Omega^{(m)}, k)$ corresponding to the multi-scale scatterer $\Omega^{(m)}$.
\item Select a sampling region with a mesh $\mathcal{T}_h$ containing $\Omega^{(m)}$.
\item Suppose that
\[
\Omega^{(r)}=\bigcup_{j=1}^{l_r} ( z_{j}+\widetilde{\Sigma}_j ),\quad \widetilde{\Sigma}_j\in \widetilde{\mathscr{S}}_h,
\]
as described in \eqref{eq:regular scatterer 4} of Section~\ref{sect:3}.
Using $A(\hat{x}; \Omega^{(m)}, k)$ as the measurement data, one determines by Scheme AR the rough locations $\widetilde{z}_j\in\mathcal{T}_h$, $j=1,2,\ldots, l_r$, as well as the shape
and orientation of each scatterer component. Here $\widetilde{z}_j$, $j=1,2,\ldots, l_r$, are approximations to the exact position points $z_j$, $j=1,2,\ldots, l_r$.
\item Apply the \emph{local re-sampling technique} following the next sub-steps to update $\widetilde{z}_j$'s and to locate the components of the small-size scatterer $\Omega^{(s)}$.
\begin{enumerate}
\item[a)] For each point $\widetilde{z}_j$ found in Step~3), one generates a finer local mesh $\mathcal{Q}_{h'}(\widetilde{z}_j)$ around $\widetilde{z}_j$.
\item[b)] For one set of sampling points, $\hat{z}_j\in\mathcal{Q}_{h'}(\widetilde{z}_j)$, $j=1,2,\ldots, l_r$, one calculates
\begin{equation}\label{eq:re1}
\widetilde{A}(\hat{x}; k)
=A(\hat{x}; \Omega^{(m)}, k)-\sum_{j=1}^{l_r} e^{ik(d-\hat{x})\cdot \hat{z}_j} A(\hat{x}; \widetilde{\Sigma}_j, k).
\end{equation}
\item[c)] Using $\widetilde{A}(\hat{x}; k)$ in Step b) as the measurement data, one applies Scheme S to locate the significant local maximum points on $\mathcal{T}_h\backslash\cup_{j=1}^{l_r} \mathcal{Q}_{h'}(\widetilde{z}_j)$ of the corresponding indicator function.
\item[d)] Repeat Steps b) and c) by all the possible sets of sampling points from $\mathcal{Q}_{h'}(\widetilde{z}_j)$, $j=1,2,\ldots l_r$. The clustered local maximum points on $\mathcal{T}_h\backslash\cup_{j=1}^{l_r} \mathcal{Q}_{h'}(\widetilde{z}_j)$ are the positions corresponding to the scatterer components of $\Omega^{(s)}$.
\item[e)] One updates the $\widetilde{z}_j$'s to be those sampling points $\hat{z}_j$'s which generate the clustered local maximum points in Step d).
\end{enumerate}
\end{enumerate}
\hrule
\medskip
\medskip
We note that in \eqref{eq:re1}, if the re-sampling points $\hat{z}_j$'s are the exact position points, namely $\hat{z}_j=z_j$, $j=1,2,\ldots, l_r$, then
\[
\sum_{j=1}^{l_r} e^{ik(d-\hat{x})\cdot \hat{z}_j} A(\hat{x}; \widetilde{\Sigma}_j, k)=A(\hat{x}; \Omega^{(r)}, k).
\]
This, together with \eqref{eq:mm1}, implies that $\widetilde{A}(\hat{x}; k)$ calculated according to \eqref{eq:re1} is an approximation to $A(\hat{x}; \Omega^{(s)}, k)$.
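The subtraction in \eqref{eq:re1} admits a direct discrete implementation; a minimal sketch, assuming scalar far-field samples at observation directions `xhats` (all names are illustrative):

```python
import cmath

def subtract_regular_part(A_total, A_refs, trial_positions, xhats, d, k):
    """Local re-sampling subtraction: remove the anticipated far field of the
    regular-size components, phase-shifted to the trial positions, from A_total."""
    dot = lambda a, b: sum(ai * bi for ai, bi in zip(a, b))
    residual = list(A_total)
    for A_ref, z in zip(A_refs, trial_positions):
        for i, x in enumerate(xhats):
            # phase factor e^{ik(d - xhat) . z} translating the reference pattern
            phase = cmath.exp(1j * k * dot([di - xi for di, xi in zip(d, x)], z))
            residual[i] -= phase * A_ref[i]
    return residual
```

When the trial positions coincide with the exact ones, the residual vanishes up to the $\mathcal{O}(L_m^{-1})$ term in \eqref{eq:mm1}, which is exactly the basis of the re-sampling test.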
Next, we propose an enhanced Scheme M making use of two far-field measurements, which could provide a more robust and accurate locating of the multi-scale scatterer $\Omega^{(m)}$. Indeed, we assume that in $\Omega^{(m)}$, the diameters of the multiple components of $\Omega^{(r)}$ are around $d_1$, whereas the diameters of the multiple components of $\Omega^{(s)}$ are around $d_2$, such that $d_1/d_2$ is relatively large. We choose two wave numbers $k_1$ and $k_2$ such that for $\lambda_1=2\pi/k_1$ and $\lambda_2=2\pi/k_2$, $\lambda_1>d_1$ with $\lambda_1\sim d_1$, and $d_2<\lambda_2<d_1$ with $\lambda_2/d_2$ relatively large. Then, in $A(\hat{x}; \Omega^{(m)}, k_1)$, according to \eqref{eq:mm1} and \eqref{eq:mm2}, $A(\hat{x}; \Omega^{(r)}, k_1)$ is more significant, which enables Scheme AR to locate $\Omega^{(r)}$ more accurately. On the other hand, according to \eqref{eq:mm2}, $A(\hat{x}; \Omega^{(m)}, k_2)$ clearly carries more scattering information of $\Omega^{(s)}$ than $A(\hat{x};\Omega^{(m)}, k_1)$. Hence, after the locating of $\Omega^{(r)}$ by using $A(\hat{x}; \Omega^{(m)}, k_1)$, one can use $A(\hat{x}; \Omega^{(m)}, k_2)$ as the measurement data for the second stage in Scheme M to yield a more accurate reconstruction of $\Omega^{(s)}$. In summary, the enhanced Scheme M with two far-field measurements can be formulated as follows.
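The wavelength constraints on $k_1$ and $k_2$ can be encoded as a simple feasibility check; the constants `c1` and `c2` below are hypothetical tuning parameters, not values from the paper:

```python
import math

def pick_wave_numbers(d1, d2, c1=1.5, c2=3.0):
    """Hypothetical rule of thumb (not from the paper): lambda1 = c1*d1, so that
    lambda1 > d1 with lambda1 ~ d1, and lambda2 = c2*d2, so that
    d2 < lambda2 < d1 with lambda2/d2 relatively large."""
    lam1, lam2 = c1 * d1, c2 * d2
    if not (lam1 > d1 and d2 < lam2 < d1):
        raise ValueError("d1/d2 not large enough for this choice of c1, c2")
    return 2 * math.pi / lam1, 2 * math.pi / lam2
```

For instance, with $d_1=6$ and $d_2=1$ this yields $\lambda_1=9$ and $\lambda_2=3$, i.e. a long wave for the regular-size components and a shorter one for the small ones.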
\medskip
\hrule
\medskip
\noindent {\bf Algorithm: Enhanced Locating Scheme M}
\medskip
\hrule
\begin{enumerate}[1)]
\item Collect two far-field measurements $A(\hat{x}; \Omega^{(m)}, k_1)$ and $A(\hat{x}; \Omega^{(m)}, k_2)$ corresponding to the multi-scale scatterer $\Omega^{(m)}$.
\item Use $A(\hat{x}; \Omega^{(m)}, k_1)$ as the measurement data for the first stage in Scheme M, namely Steps 2) and 3).
\item Use $A(\hat{x}; \Omega^{(m)}, k_2)$ as the measurement data for the second stage in Scheme M, namely Step 4).
\item Apply the local re-sampling technique following the next sub-steps of Step 4) in Scheme M to update $\widetilde{z}_j$'s and to locate the components of the small-size scatterer $\Omega^{(s)}$. Particularly, \eqref{eq:re1} is modified to be
\begin{equation}\label{eq:re1n}
\widetilde{A}(\hat{x}; k_2)
=A(\hat{x}; \Omega^{(m)}, k_2)-\sum_{j=1}^{l_r} e^{ik_2(d-\hat{x})\cdot \hat{z}_j} A(\hat{x}; \widetilde{\Sigma}_j, k_2).
\end{equation}
\end{enumerate}
\hrule
\medskip
\medskip
\section{Numerical experiments and discussions}\label{sect:5}
In this section, we present some numerical results to illustrate the salient features of our new schemes using the augmented far-field data set, as well as the ability of the novel Scheme M with the local re-sampling technique to image multiple multi-scale scatterers.
Three geometries will be considered for the scatterer components in our numerical experiments. They are bodies of revolution obtained by rotating the following 2D shapes in the $x$-$y$ plane around the $x$-axis:
\begin{eqnarray*}
\mathbf{Circle:} &\quad & \{ (x,y) : x=\cos(s), \ y=\sin(s), \ 0\le s\le 2\pi \},\\
\mathbf{Peanut:} &\quad & \{ (x,y) : x=\sqrt{3 \cos^2 (s) + 1}\cos(s), \ y=\sqrt{3 \cos^2 (s) + 1}\sin(s), \ 0\le s\le 2\pi \},\\
\mathbf{Kite:} &\quad & \{ (x,y) : x=\cos(s)+0.65\cos (2s)-0.65, \ y=1.5\sin(s), \ 0\le s\le 2\pi \}.
\end{eqnarray*}
In the sequel, they are denoted by \textbf{B}, \textbf{P} and \textbf{K}, respectively, for short.
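For reference, the three generating curves can be sampled directly from their parametrizations; a minimal sketch (the sampling density `n` is arbitrary):

```python
import math

def generating_curve(shape, n=360):
    """Sample the 2D generating curves of the reference shapes B (circle),
    P (peanut) and K (kite); the 3D scatterers are the bodies of revolution
    obtained by rotating these curves about the x-axis."""
    pts = []
    for i in range(n):
        s = 2.0 * math.pi * i / n
        if shape == "B":
            x, y = math.cos(s), math.sin(s)
        elif shape == "P":
            r = math.sqrt(3.0 * math.cos(s) ** 2 + 1.0)
            x, y = r * math.cos(s), r * math.sin(s)
        elif shape == "K":
            x = math.cos(s) + 0.65 * math.cos(2.0 * s) - 0.65
            y = 1.5 * math.sin(s)
        else:
            raise ValueError(shape)
        pts.append((x, y))
    return pts
```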
The candidate data set $\widetilde{\mathscr{S}}_h$ includes far-field data of all three reference components \textbf{B}, \textbf{P} and \textbf{K}, and is further lexicographically augmented by a collection of a priori known orientations and sizes. More precisely, the augmented data set is obtained by rotating \textbf{P} and \textbf{K} in the $x$-$y$ plane every $\pi/4$ radians\footnote{There are only four different orientations for \textbf{P} due to its symmetry.} as shown in Figs.~\ref{fig:Scatterer-shape-peanut} and \ref{fig:Scatterer-shape-kite}, respectively, and by scaling \textbf{B}, \textbf{P} and \textbf{K} by factors of one fifth, one half, one, two and five.
\begin{figure}[b]
\hfill{}\includegraphics[width=0.23\textwidth]{true_peanut_thetaindex_0}\hfill{}\includegraphics[width=0.23\textwidth]{true_peanut_thetaindex_1}\hfill{}\includegraphics[width=0.23\textwidth]{true_peanut_thetaindex_2}\hfill{}\includegraphics[width=0.23\textwidth]{true_peanut_thetaindex_3}\hfill{}
\caption{\label{fig:Scatterer-shape-peanut}Scatterer component Peanut with four orientations.}
\end{figure}
\begin{figure}[pt]
\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_thetaindex_0}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_thetaindex_1}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_thetaindex_2}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_thetaindex_3}\hfill{}
\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_thetaindex_4}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_thetaindex_5}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_thetaindex_6}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_thetaindex_7}\hfill{}
\caption{\label{fig:Scatterer-shape-kite}Scatterer component Kite with eight orientations.}
\end{figure}
In the examples below, as assumed earlier,
we set $\varepsilon_0=\mu_0=1$ and $\sigma_0=0$ outside
the scatterer, and hence the wavelength is unitary in the homogeneous background. Unless otherwise specified, all
the scatterer components are either PEC conductors or
inhomogeneous media with all other parameters the same as those in the homogeneous
background except $\varepsilon=4$. Our near-field data are obtained by solving the Maxwell system \eqref{eq:Maxwell general} using the quadratic
$H(\mathrm{curl})$-conforming edge element discretization in a spherical domain centered at the origin and containing all the scatterer components. The computational domain is enclosed
by a PML layer to damp the reflections. A local adaptive
refinement scheme within the inhomogeneous scatterer is adopted to
enhance the resolution of the scattered wave. The far-field data
are approximated by the integral equation representation \cite[p.~181, Theorem~3.1]{PiS02} using the spherical Lebedev quadrature (cf.~\cite{Leb99}).
We refine the mesh successively
until the maximum relative error between the far-field
data on successive meshes is below $0.1\%$. The far-field patterns on the finest mesh
are used as the exact data. The electric far-field patterns $A(\hat{x},\Omega)$, $\Omega = \Omega^{(r)}$ or $\Omega^{(m)}$, are observed at 590
Lebedev quadrature points distributed on the unit sphere
$\mathbb{S}^{2}$ (cf.\!\cite{Leb99} and references therein). The exact far-field data
$A(\hat{x},\Omega)$ are corrupted point-wise by the formula
\begin{equation}
A_{\delta}(\hat{x},\Omega)=A(\hat{x},\Omega)+\delta\zeta_1\underset{\hat{x}}{\max}|A(\hat{x},\Omega)|\exp(i2\pi
\zeta_2)\,,
\end{equation}
where $\delta$ refers to the relative noise level, and both $\zeta_1$ and $\zeta_2$ follow the
uniform distribution ranging from $-1$ to $1$. The values of the indicator functions have been
normalized between $0$ and $1$ to highlight the positions
identified.
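The point-wise corruption formula above translates directly into code; a minimal sketch, assuming scalar far-field samples (the seeded `random.Random(0)` default is only for reproducibility):

```python
import cmath
import math
import random

def corrupt_far_field(A, delta, rng=None):
    """Point-wise noise model: A + delta*zeta1*max|A|*exp(i*2*pi*zeta2),
    with zeta1 and zeta2 drawn uniformly from [-1, 1]."""
    rng = rng or random.Random(0)  # seeded default, for reproducibility only
    a_max = max(abs(a) for a in A)
    return [a + delta * rng.uniform(-1.0, 1.0) * a_max
              * cmath.exp(1j * 2.0 * math.pi * rng.uniform(-1.0, 1.0))
            for a in A]
```

By construction, the point-wise perturbation is bounded by $\delta\max_{\hat{x}}|A|$, so $\delta$ indeed acts as the relative noise level.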
Some experimental settings are defined as follows. In our tests,
we shall always take the incident direction $d=(1,0,0)^T$ and the polarization $p=(0,0,1)^T$.
In all our tests, the noise level is $3\%$.
To improve the accuracy and robustness of the imaging results using Scheme AR and Enhanced Scheme M,
we adopt two fully augmented data sets associated with two detecting EM waves with properly chosen wave numbers,
which will be specified later.
Two inverse scattering benchmark problems are considered here.
The first one, \textbf{PK}, is to image two regular-size scatterer components of kite and peanut shape, respectively. In this case, we reconstruct the scatterer components with correct orientations and sizes from the augmented data set using Scheme AR. The second example, \textbf{KB}, is to image a combined scatterer consisting of multiple multi-scale components: a kite \textbf{K} enlarged by a factor of two and a relatively small ball \textbf{B} scaled to one half of the unit one.
The size ratio between the two components is about six.
\subsection{Scheme AR}
\paragraph*{Example PK.}
In this example, we try to locate with Scheme AR a kite component \textbf{K} located at $(2,2,2)$ with azimuthal angle $\pi/4$,
and a peanut component \textbf{P} located at $(-2,-2,-2)$ with azimuthal angle $3\pi/4$, as shown in Fig.~\ref{fig:True-scatterer}(a),
with its projections on the $x$-$y$, $y$-$z$ and $z$-$x$ planes shown in Fig.~\ref{fig:True-scatterer}(b)-(d), respectively.
\begin{figure}[bp]
\hfill{}\includegraphics[width=0.23\textwidth]{true_p__k_thetaindex_0_theta1index_0}\hfill{}\includegraphics[width=0.23\textwidth]{true_p__k_thetaindex_0_theta1index_0_xy}\hfill{}\includegraphics[width=0.23\textwidth]{true_p__k_thetaindex_0_theta1index_0_yz}\hfill{}\includegraphics[width=0.23\textwidth]{true_p__k_thetaindex_0_theta1index_0_zx}\hfill{}
\hfill{}(a)\hfill{}\hfill{}(b)\hfill{}\hfill{}(c)\hfill{}\hfill{}(d)\hfill{}
\caption{\label{fig:True-scatterer}True scatterer for Example \textbf{PK}. }
\end{figure}
As remarked earlier, we choose the two scatterer components to be inhomogeneous media. There are two considerations for such a choice. First, we developed Scheme AR in Section~\ref{sect:3} mainly for locating PEC obstacles, but we also gave the extension to locate medium components if the generic situation described in Remark~\ref{rem:MediumObstacle2} is fulfilled. Second, we would like to illustrate the wide applicability of Scheme AR, and we refer to \cite{LiLiuShangSun} for numerical results on recovering multiple PEC obstacles by Scheme R. We implement Scheme AR in a two-stage imaging procedure as follows:
\paragraph{Scheme S.}
We first set $k=1$, which amounts to sending a detecting EM wave with a wavelength at least twice as large as each scatterer component. With the collected far-field data, we implement Scheme S to determine how many components are to be recovered and to locate the rough positions of those scatterer components.
The imaging result at this coarse stage is shown in Fig.~\ref{fig:Final-image-of-Coarse}, indicated by
the characteristic behavior of the function $I_s^j(z)$ (cf.~\eqref{eq:indicator function}) in Scheme S.
Note that no reference spaces are needed up to this stage.
It can be observed that the indicator function achieves local maxima in the region where
there exists a scatterer component, either kite or peanut. The rough position of the peanut is highlighted in Fig.~\ref{fig:Final-image-of-Coarse}(a),
which indicates a possible scatterer component somewhere around the highlighted region. In Fig.~\ref{fig:Final-image-of-Coarse}(b), we see that the rough position of the kite can also be found.
But its dimmer brightness, as shown in Fig.~\ref{fig:Final-image-of-Coarse}(b), tells us that one cannot figure out its shape and size at this stage.
Then we enclose the suspicious regions
in a stack of cubes, as in Figs.~\ref{fig:Final-image-of-Coarse}(c)
and (d). The computation of the next stage, i.e., Scheme
AR, is performed only on these cubes, which are shown exclusively in Figs.~\ref{fig:Final-image-of-Coarse}(e)
and (f). It is emphasized that this preprocessing stage can be skipped, and one can directly implement Scheme AR as described in the next stage to locate the kite \textbf{K} and the peanut \textbf{P}. However, by performing this preprocessing stage, the computational cost of Scheme AR can be significantly reduced, and its robustness and resolution can be enhanced.
\begin{figure}
\hfill{}\includegraphics[width=0.4\textwidth]{k__p_indicator_stage1_1}\hfill{}\includegraphics[width=0.4\textwidth]{k__p_indicator_stage1_2}\hfill{}
\hfill{}(a)\hfill{}\hfill{}(b)\hfill{}
\hfill{}\includegraphics[width=0.4\textwidth]{k__p_indicator_stage1_surf1spi}\hfill{}\includegraphics[width=0.4\textwidth]{k__p_indicator_stage1_surf2spi}\hfill{}
\hfill{}(c)\hfill{}\hfill{}(d)\hfill{}
\hfill{}\includegraphics[width=0.4\textwidth]{anycube_k__p_nzoom1}\hfill{}\includegraphics[width=0.4\textwidth]{anycube_k__p_nzoom2}\hfill{}
\hfill{}(e)\hfill{}\hfill{}(f)\hfill{}
\caption{\label{fig:Final-image-of-Coarse}Identification in the coarse/preprocessing stage in Example \textbf{PK}.}
\end{figure}
\paragraph{Scheme AR.}
In this stage, we take $k=5$. With the collected far-field data, we implement Scheme AR to determine the location, shape, orientation and size of each scatterer component.
When we use the far-field data of the reference peanut with $3\pi/4$ azimuthal angle and unitary scale as the test data
in the indicator function $I_r^j(z)$ (cf.~\eqref{eq:indicator regular}),
the distribution of the indicator
function is shown in Fig.~\ref{fig:Fine-Stage-Identification-Peanut}(a). Then we take the maximum of the indicator values and find a
much more precise location $(-2.1,\ -2.1,\ -2.1)$ of the peanut, as in Fig.~\ref{fig:Fine-Stage-Identification-Peanut}(b).
Based on that position, we plot the shape, orientation and size carried by the far-field data employed,
and show the imaging result in Fig.~\ref{fig:Fine-Stage-Identification-Peanut}(c).
Its projections on the orthogonal cut planes through its location are shown in Fig.~\ref{fig:Fine-Stage-Identification-Peanut}(d)-(f).
It can be concluded that the identified position is quite accurate.
After excluding the peanut component, we apply Scheme AR to
the local mesh around the kite component. When
the far-field data of the reference kite with $\pi/4$ azimuthal angle and unitary scale are adopted in the indicator function $I_r^j(z)$
(cf.~\eqref{eq:indicator regular}), the value distribution of
the indicator function is shown in Fig.~\ref{fig:Fine-Stage-Identification-Kite}(a). Then we take the maximum of the indicator values and find
the location $(2.2,\ 2.2,\ 2.2)$ of the kite, as in Fig.~\ref{fig:Fine-Stage-Identification-Kite}(b).
As before, we plot the exact shape, orientation and size, and show three orthogonal cut planes through the identified location
in Fig.~\ref{fig:Fine-Stage-Identification-Kite}(c)-(f).
The identified location is very close to the exact position of the kite.
\begin{figure}
\hfill{}\includegraphics[width=0.32\textwidth]{Bk__lb_indicator_stage2kth_2_surf1}\hfill{}
\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_kth1_ex1offnew}\hfill{}\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_kth1_ex1onnew}\hfill{}
\hfill{}\!\!\!\!\!\!\!\!\!\!(a)\hfill{}~~~~~~~~~~(b)\hfill{}~~~~~~~(c)\hfill{}\hfill{}
\hfill{}\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_xy_kth1_ex1onnew}\hfill{}\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_yz_kth1_ex1onnew}\hfill{}\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_zx_kth1_ex1onnew}\hfill{}
\hfill{}\!\!\!\!\!\!\!\!\!\!(d)\hfill{}~~~~~~~~~~(e)\hfill{}~~~~~~~(f)\hfill{}\hfill{}
\caption{\label{fig:Fine-Stage-Identification-Peanut} Fine stage identification of the Peanut component in Example \textbf{PK}:
(a) the multi-slice plot of the indicator function; (b) rough position
obtained by taking the maximum of the indicator function; (c) the reconstructed component after the determination of the orientation of the peanut; (d)-(f) projections of the reconstruction in (c).}
\end{figure}
\begin{figure}
\hfill{}\includegraphics[width=0.32\textwidth]{Bk__lb_indicator_stage2kth_1_surf2}\hfill{}\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_kth2_ex1offnew}\hfill{}\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_kth2_ex1onnew}\hfill{}
\hfill{}\!\!\!\!\!\!\!\!\!\!(a)\hfill{}~~~~~~~~~~(b)\hfill{}~~~~~~~(c)\hfill{}\hfill{}
\hfill{}\includegraphics[width=0.30\textwidth]{MRTS_fine_L4_stage2final_xy_kth2_ex1onnew}\hfill{}\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_yz_kth2_ex1onnew}\hfill{}\includegraphics[width=0.32\textwidth]{MRTS_fine_L4_stage2final_zx_kth2_ex1onnew}\hfill{}
\hfill{}\!\!\!\!\!\!\!\!\!\!(d)\hfill{}~~~~~~~~~~(e)\hfill{}~~~~~~~(f)\hfill{}\hfill{}
\caption{\label{fig:Fine-Stage-Identification-Kite} Fine-stage identification of the kite component in Example \textbf{PK}: (a) the multi-slice plot of the indicator function; (b) rough position by taking the
maximum of the indicator function; (c) the reconstructed component after the determination of the orientation and size of the kite; (d)-(f) projections of the reconstruction in (c).}
\end{figure}
\subsection{Enhanced Scheme M}
\paragraph*{Example KB.} In this example we try to locate multiple multi-scale scattering components using Enhanced Scheme M.
The exact scatterer is composed of a kite-shaped scatterer enlarged by two times from the reference one and a ball scatterer scaled by a half from the unit one.
The kite is chosen to be a PEC obstacle, whereas the ball is an inhomogeneous medium.
The exact scatterer is shown in Fig.~\ref{fig:True-scatterer-mlts}, where the 3D kite-shaped component is located at
$(0,\,0,\,-4)$ and the ball component is located
at $(0,\,0,\,9)$ with radius a half unit.
\begin{figure}[bp]
\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_mlts}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_mlts_xy}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_mlts_yz}\hfill{}\includegraphics[width=0.23\textwidth]{true_kite_mlts_xz}\hfill{}
\hfill{}(a)\hfill{}\hfill{}(b)\hfill{}\hfill{}(c)\hfill{}\hfill{}(d)\hfill{}
\caption{\label{fig:True-scatterer-mlts}True scatterer of Example \textbf{KB}. }
\end{figure}
Now we employ Enhanced Scheme M to detect the unknown scatterers by applying
Scheme AR first and then Scheme S with the local re-sampling technique. In the first stage, for Scheme AR, the far-field data are collected by illuminating the scatterer with an incident EM wave with $k=\pi$. In the second stage, for Scheme S, the far-field data are collected by illuminating the scatterer with a detecting EM wave with $k=2\pi/5$.
For $k=\pi$, we enrich our augmented reference space $\widetilde{\mathscr{S}}_h$ by the far-field data corresponding to each reference component with different orientations and sizes, sampled at $590$ Lebedev quadrature points on the unit sphere.
\paragraph{Scheme AR.}
We first apply Scheme AR to the multi-scale scatterers. When
the far-field data of the reference kite with vanishing azimuthal angle and double size is adopted in the indicator function $I_r^j(z)$
(cf.~\eqref{eq:indicator regular}),
the local maximum behavior of the indicator function is shown in Fig.~\ref{fig:Fine-Stage-Identificationt-mlts}(a).
Using Scheme AR, we obtain a rough position of the kite component by taking
the coordinates at which the indicator function achieves its maximum, namely $(0,\,0,\,-4.3056)$, as shown in Fig.~\ref{fig:Fine-Stage-Identificationt-mlts}(a).
Its shape, orientation and size are determined from the information carried by the far-field
data and plotted in Fig.~\ref{fig:Fine-Stage-Identificationt-mlts}(b), where we reverse the $x$-axis for ease of visualization.
\paragraph*{Local re-sampling technique.}
The position detected by Scheme AR in the previous step is only an approximation to the position of the kite component due
to the noise. In order to implement the local re-sampling technique, we set a local searching
region around the obtained position point $(0,\,0,\,-4.3056)$. In this test, we choose a stack of $10\times 10\times 10$ cubes centered at $(0,\,0,\,-4.3056)$
with total side length $1$, namely within the precision of half a wavelength, as shown in Fig.~\ref{fig:Fine-Stage-Identificationt-mlts}(c)
and (d). Then we subtract the far-field pattern associated with the regular-size component from the total one following \eqref{eq:re1}, testing every sampling node of the
cubic mesh.
\paragraph{Scheme S.}
The rest of the job is to follow Step 4) in Enhanced Scheme M and test every suspicious point among the cubic grid points shown in Fig.~\ref{fig:Fine-Stage-Identificationt-mlts}(c). Fig.~\ref{fig:Recogonize little scatterer} shows a gradual evolution as we
move the sampling grid point from the nearly correct $z_{0}=(0,\,0,\,-4.0056)$
to a perturbed position $z_{0}=(0,\,0,\,-4.1056)$, which helps us not only update the position of the regular-size \textbf{K} component to $z_{0}=(0,\,0,\,-4.0056)$ but also determine the location of the small-size \textbf{B} component. From this example, we see that the identified position of the small ball component
is no longer available if the position of the regular-size component is slightly perturbed. For the current test, the tolerance of the perturbation is within $0.05$.
Hence, a nice by-product of the local re-sampling technique is that it significantly improves the estimated position of the regular-size component.
The operation in this stage is computationally very cheap since only a few local grid points are involved and the
re-sampling procedure only computes the inner product of the subtracted far-field data with the test data in \eqref{eq:indicator function}. Moreover, the
efficiency can be further improved by implementing the algorithm in parallel.
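The grid construction and re-sampling loop described above can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: \texttt{indicator} is a hypothetical callable standing in for the inner product of the subtracted far-field data with the test data in \eqref{eq:indicator function}.

```python
import numpy as np

def local_resampling(center, indicator, side_length=1.0, n=10):
    """Scan an n-by-n-by-n cubic mesh of total side length `side_length`
    centered at `center` and return the node maximizing the indicator.
    `indicator` is a placeholder callable z -> float; in the paper it is
    the inner product of the subtracted far-field data with the test data."""
    offsets = np.linspace(-side_length / 2.0, side_length / 2.0, n)
    # all n^3 searching nodes of the cubic mesh
    nodes = np.array([(center[0] + dx, center[1] + dy, center[2] + dz)
                      for dx in offsets for dy in offsets for dz in offsets])
    values = np.array([indicator(z) for z in nodes])
    return nodes[np.argmax(values)]
```

With a mock indicator peaked at the true position, the routine returns the grid node nearest to it, mimicking the update from the rough estimate $(0,\,0,\,-4.3056)$ toward $(0,\,0,\,-4.0056)$.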
\begin{figure}
\hfill{}\includegraphics[width=0.3\textwidth]{Bk__lb_indicator_stage2}\hfill{}\includegraphics[width=0.3\textwidth]{Bk__lb_indicator_stage2real}\hfill{}
\hfill{}(a)\hfill{}~~~~~~~~(b)\hfill{}
\hfill{}\includegraphics[width=0.3\textwidth]{Bk__lb_indicator_stage2spi}\hfill{}\includegraphics[width=0.3\textwidth]{anycube_stage2}\hfill{}
\hfill{}(c)\hfill{}~~~~~~~~(d)\hfill{}
\caption{\label{fig:Fine-Stage-Identificationt-mlts} Locating by Scheme AR in Example \textbf{KB}: (a) the multi-slice plot of the indicator function by Scheme AR; (b) the reconstructed component after the determination of
the orientation and size of the kite; (c) a multi-slice plot with re-sampling cubes;
(d) the isolated re-sampling cubes without the background multi-slice plot.}
\end{figure}
\begin{figure}
\hfill{}\includegraphics[width=0.23\textwidth]{Bk__lb_indicator_stage3}\hfill{}\includegraphics[width=0.23\textwidth]{Bk__lb_indicator_stage3fail}\hfill{}
\hfill{}(a)\hfill{}\hfill{}(b)\hfill{}
\caption{\label{fig:Recogonize little scatterer} Locating the small ball scatterer component in Example \textbf{KB}.
The multi-slice plots of the indicator function (a) when $z_{0}$ is sufficiently near its actual position ($z_{0}=(0,\,0,\,-4.0056)$), and
(b) when $z_{0}$ is away from its actual position ($z_{0}=(0,\,0,\,-4.1056)$).}
\end{figure}
\section{Conclusion}
In this paper we have developed several variants of the one-shot method proposed in \cite{LiLiuShangSun}. The methods can be used for the efficient numerical reconstruction of multiple multi-scale scatterers in inverse electromagnetic scattering problems. They are based on the local maximum behaviors of the indicator functions, aided by a candidate set of a priori known far-field data. Rigorous mathematical justifications are provided and several benchmark examples are presented to illustrate the efficiency of the schemes.
The local re-sampling technique is shown to be an effective a posteriori position-fine-tuning method, which requires rough position information from a preprocessing stage of Scheme AR. The local re-sampling technique adds only a small amount of computational overhead, but helps calibrate the positions of the regular-size scatterers and determine the locations of the small-size scatterers.
The present approaches can be extended in several directions, including the use of limited-view measurement data. The extension to time-dependent measurement data would be nontrivial and poses interesting challenges for further investigation. Finally, it would be worthwhile to consider different noise backgrounds such as Gaussian and impulsive noise.
\section{Introduction}\label{secintro}
The main purpose of this article is to give an in-depth examination
of admissible partitions generated by nonabelian subalgebras.
As pointed out earlier~\cite{SuTsai1}, a nonabelian subalgebra
is likewise eligible to create a bi-subalgebra partition
and a commutator partition over a unitary Lie algebra.
An elegant duality relation between these two kinds of partitions is
revealed and expounded in detail.
Importantly, the two partitions dual to each other
merge into one quotient algebra partition when the associated bi-subalgebra
is abelian.
It is asserted that all Cartan decompositions of the three types are acquirable recursively within the quotient algebra partition of the highest rank.
Moreover, the structure of the quotient algebra partition
is universal to classical and exceptional Lie algebras.
\section{Bi-Subalgebras of $su(2^p)$\label{secbisubalginsu}}
Writing its generators in the $s$-representation unveils
a simple group structure embedded in the Lie algebra $su(2^p)$,
{\em cf.} Appendix~B in~\cite{Su}.
\vspace{6pt}
\begin{lemma}\label{lemisosu2^p}
The set of spinor generators of $su(2^p)$ forms an abelian group isomorphic to $Z^{2p}_2$
under the bi-addition $\diamond$:
$\forall\hspace{2pt}{\cal S}^{\zeta}_{\alpha},{\cal S}^{\eta}_{\beta}\in{su(2^p)}$,
${\cal S}^{\zeta}_{\alpha}\diamond{\cal S}^{\eta}_{\beta}\equiv{\cal S}^{\zeta+\eta}_{\alpha+\beta}\in{su(2^p)}$.
\end{lemma}
\vspace{3pt}
\begin{proof}
This lemma is easily asserted by mapping each spinor generator
${\cal S}^{\zeta}_{\alpha}$ one-to-one
to the concatenated string $\zeta\circ\alpha\in Z^{2p}_2$, and {\em vice versa}.
\end{proof}
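The isomorphism of Lemma~\ref{lemisosu2^p} can be made concrete in a few lines. The following is an illustrative sketch (not part of the formal development) in which a spinor generator ${\cal S}^{\zeta}_{\alpha}$ is encoded as the pair of $p$-bit tuples $(\zeta,\alpha)$ and the bi-addition $\diamond$ becomes bitwise addition over $Z_2$.

```python
from itertools import product

p = 2  # spinor generators of su(2^p), encoded as pairs of p-bit tuples

def bi_add(s1, s2):
    """Bi-addition: S^zeta_alpha <> S^eta_beta = S^{zeta+eta}_{alpha+beta},
    i.e. bitwise addition of the concatenated strings zeta o alpha in Z_2^{2p}."""
    (z1, a1), (z2, a2) = s1, s2
    return (tuple((x + y) % 2 for x, y in zip(z1, z2)),
            tuple((x + y) % 2 for x, y in zip(a1, a2)))

bits = [tuple(b) for b in product((0, 1), repeat=p)]
generators = [(z, a) for z in bits for a in bits]  # all 2^{2p} generators
```

The group axioms of $Z^{2p}_2$ (closure, identity ${\cal S}^{\bf 0}_{\bf 0}$, and every element being its own inverse) can then be checked by direct enumeration.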
\vspace{6pt}
With this structure, the concept of a bi-subalgebra introduced in~\cite{SuTsai1}
is naturally generalised from a Cartan subalgebra to the whole algebra $su(2^p)$.
\vspace{6pt}
\begin{defn}\label{defbisubalgsu}
A set of spinor generators of $su(2^p)$ is called a bi-subalgebra
of $su(2^p)$, denoted as $\mathcal{B}_{su}$, if
${\cal S}^{\zeta+\eta}_{\alpha+\beta}\in\mathcal{B}_{su}$,
$\forall\hspace{2pt}{\cal S}^{\zeta}_{\alpha},{\cal S}^{\eta}_{\beta}\in\mathcal{B}_{su}$.
\end{defn}
\vspace{6pt}
Intentionally the notation $\mathcal{B}_{su}$ distinguishes bi-subalgebras of $su(2^p)$
from those of a Cartan subalgebra~\cite{SuTsai1,SuTsai2}.
Similarly, these bi-subalgebras can be {\em ordered} by {\em the degree of maximality}.
\vspace{6pt}
\begin{defn}\label{defmaxbisubalgsu}
A bi-subalgebra $\mathcal{B}_{su}$ is maximal in $su(2^p)$
if ${\cal S}^{\zeta+\eta}_{\alpha+\beta}\in\mathcal{B}_{su}$,
$\forall\hspace{2pt}{\cal S}^{\zeta}_{\alpha},{\cal S}^{\eta}_{\beta}\in\mathcal{B}^c_{su}={su(2^p)}-\mathcal{B}_{su}$.
\end{defn}
\vspace{6pt}
The ordering is recursive.
\vspace{6pt}
\begin{defn}\label{defrthbisubinsu}
Denoted as $\mathcal{B}^{[r]}_{su}$, an $r$-th maximal bi-subalgebra of $su(2^p)$
is a proper maximal bi-subalgebra of an $(r-1)$-th maximal bi-subalgebra $\mathcal{B}^{[r-1]}_{su}\subset{su(2^p)}$,
$1<r\leq 2p$.
\end{defn}
\vspace{6pt}
The order $r$ of such bi-subalgebras ranges from $0$ to $2p$,
starting with the zeroth maximal bi-subalgebra $\mathcal{B}^{[0]}_{su}=su(2^p)$ and
ending at the identity
$\mathcal{B}^{[2p]}_{su}=\{{\cal S}^{\bf 0}_{\hspace{.01cm}{\bf 0}}\}$.
Notice that every bi-subalgebra of order less than $p$ is nonabelian,
while one of order $r\geq p$ is not necessarily abelian.
The total number of spinors in a bi-subalgebra is obtained
via the isomorphism between $su(2^p)$ and $Z^{2p}_2$.
\vspace{6pt}
\begin{lemma}\label{lemnumbi-subinsu}
A bi-subalgebra $\mathcal{B}_{su}$ of $su(2^p)$ is an $r$-th
maximal bi-subalgebra in $su(2^p)$ if and only if it consists of a number $2^{2p-r}$
of spinor generators.
\end{lemma}
\vspace{6pt}
Most of the assertions regarding bi-subalgebras of a Cartan subalgebra
$\mathfrak{C}\subset{su(2^p)}$
remain valid for those of an $r$-th maximal bi-subalgebra
$\mathcal{B}^{[r]}_{su}\subset{su(2^p)}$, as in the following.
Their proofs are omitted, because
each can be carried out through the same procedure
as that of the corresponding assertion in~\cite{SuTsai1,SuTsai2}, simply replacing
$\mathfrak{C}$ with $\mathcal{B}^{[r]}_{su}$.
We begin with the feature that
a third member is obtainable from every two maximal bi-subalgebras.
\vspace{6pt}
\begin{lemma}\label{lem3rdmaxbisub}
Derived from every two maximal bi-subalgebras $\mathcal{B}_{su,1}$ and $\mathcal{B}_{su,2}$ of
an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}$ in $su(2^p)$, the subspace
$\mathcal{B}=(\mathcal{B}_{su,1}\cap\mathcal{B}_{su,2})\cup(\mathcal{B}^c_{su,1}\cap\mathcal{B}^c_{su,2})$
is also a maximal bi-subalgebra of $\mathcal{B}^{[r]}_{su}$,
here $\mathcal{B}^c_{su,1}=\mathcal{B}^{[r]}_{su}-\mathcal{B}_{su,1}$ and
$\mathcal{B}^c_{su,2}=\mathcal{B}^{[r]}_{su}-\mathcal{B}_{su,2}$.
\end{lemma}
\vspace{6pt}
An abelian group associated to $\mathcal{B}^{[r]}_{su}$ is needed for the purpose of partitioning.
\vspace{6pt}
\begin{lemma}\label{lemabeGinsu}
Given an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}$ of $su(2^p)$,
the set $\mathcal{G}(\mathcal{B}^{[r]}_{su})=
\{\mathcal{B}_{su,i}:
\mathcal{B}_{su,i}\text{ is a maximal bi-subalgebra of }\mathcal{B}^{[r]}_{su},\hspace{2pt}0\leq i<2^{2p-r}\}$
forms an abelian group isomorphic to $Z^{2p-r}_2$ under the
$\sqcap$-operation, that is,
$\forall\hspace{2pt}\mathcal{B}_{su,i},\mathcal{B}_{su,j}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$,
$\mathcal{B}_{su,i}\sqcap\mathcal{B}_{su,j}=(\mathcal{B}_{su,i}\cap\mathcal{B}_{su,j})\cup(\mathcal{B}^c_{su,i}\cap\mathcal{B}^c_{su,j})\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$,
where $\mathcal{B}^c_{su,i}=\mathcal{B}^{[r]}_{su}-\mathcal{B}_{su,i}$, $\mathcal{B}^c_{su,j}=\mathcal{B}^{[r]}_{su}-\mathcal{B}_{su,j}$
and $\mathcal{B}_{su,0}=\mathcal{B}^{[r]}_{su}$ is the group identity, $0\leq i,j<2^{2p-r}$.
\end{lemma}
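Lemma~\ref{lemabeGinsu} can be illustrated numerically in the string picture: identify $\mathcal{B}^{[r]}_{su}$ with $Z^{m}_2$, $m=2p-r$, and realize each maximal bi-subalgebra as the kernel of a linear functional $v$ over $Z_2$; the $\sqcap$-operation then corresponds to adding the functionals. This is a minimal sketch under these identifications, not part of the formal argument.

```python
from itertools import product

m = 4  # m = 2p - r; identify B^[r]_su with Z_2^m via the string isomorphism
space = [tuple(x) for x in product((0, 1), repeat=m)]

def kernel(v):
    """The maximal bi-subalgebra cut out by the functional v over Z_2,
    {x : v.x = 0 (mod 2)}; v = 0 yields B^[r]_su itself, the group identity."""
    return frozenset(x for x in space
                     if sum(a * b for a, b in zip(v, x)) % 2 == 0)

def meet(B1, B2):
    """The sqcap-operation (B1 cap B2) cup (B1^c cap B2^c),
    with complements taken within B^[r]_su."""
    full = frozenset(space)
    return (B1 & B2) | ((full - B1) & (full - B2))
```

In this picture $\ker v_i\sqcap\ker v_j=\ker(v_i+v_j)$, so the $2^m$ kernels form an abelian group isomorphic to $Z^m_2$, matching the labelling $\mathcal{B}_{su,i}\sqcap\mathcal{B}_{su,j}=\mathcal{B}_{su,i+j}$ used later.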
\vspace{6pt}
Every spinor commutes with a unique maximal
bi-subalgebra $\mathcal{B}_{su}$ of $\mathcal{B}^{[r]}_{su}$
but with no part of its complement.
\vspace{6pt}
\begin{lemma}\label{lemspinorcommmax}
For an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}$ of
$su(2^p)$ and a spinor generator ${\cal S}^{\zeta}_{\alpha}\in{su(2^p)}$, $p>1$,
there exists a unique maximal bi-subalgebra $\mathcal{B}_{su}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$
of $\mathcal{B}^{[r]}_{su}$ such that $[{\cal S}^{\zeta}_{\alpha},\mathcal{B}_{su}]=0$
and $[{\cal S}^{\zeta}_{\alpha},{\cal S}^{\eta}_{\beta}]\neq 0$
for all ${\cal S}^{\eta}_{\beta}\in\mathcal{B}^c_{su}
=\mathcal{B}^{[r]}_{su}-\mathcal{B}_{su}$.
\end{lemma}
\vspace{6pt}
\begin{lemma}\label{lemspinorinBcomm}
If a spinor ${\cal S}^{\zeta}_{\alpha}\in{su(2^p)}$ commutes
with a maximal bi-subalgebra $\mathcal{B}_{su}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$
of an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}\subset{su(2^p)}$,
i.e., $[{\cal S}^{\zeta}_{\alpha},\mathcal{B}_{su}]=0$,
then any generator ${\cal S}^{\eta}_{\beta}\in\mathcal{B}^{[r]}_{su}$
commuting with ${\cal S}^{\zeta}_{\alpha}$
must be in $\mathcal{B}_{su}$.
\end{lemma}
\vspace{6pt}
Based on the {\em commutator rule} $[{\cal W},\mathcal{B}_{su}]=0$,
the subspace ${\cal W}\subset{su(2^p)}$ spanned by
generators commuting with a bi-subalgebra $\mathcal{B}_{su}$ is
defined to be {\em the commutator subspace determined by} $\mathcal{B}_{su}$
and denoted as ${\cal W}(\mathcal{B}_{su})$ whenever necessary.
Any two such subspaces are disjoint.
\vspace{6pt}
\begin{lemma}\label{lemdisjointComm}
Two commutator subspaces ${\cal W}_1$ and ${\cal W}_2\subset{su(2^p)}$
respectively determined by two maximal bi-subalgebras $\mathcal{B}_{su,1}$
and $\mathcal{B}_{su,2}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$
of an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}$ of
$su(2^p)$, i.e., $[{\cal W}_1,\mathcal{B}_{su,1}]=0$
and $[{\cal W}_2,\mathcal{B}_{su,2}]=0$,
share the null intersection ${\cal W}_1\cap{\cal W}_2=\{0\}$.
\end{lemma}
\vspace{6pt}
Thanks to Lemmas~\ref{lemabeGinsu} through~\ref{lemdisjointComm},
a partition of $su(2^p)$ is affirmed.
\vspace{6pt}
\begin{thm}\label{thmCommPar}
Via the commutator rule, the group $\mathcal{G}(\mathcal{B}^{[r]}_{su})$ comprising the
maximal bi-subalgebras of an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}$
determines a partition of the Lie algebra $su(2^p)$.
\end{thm}
\vspace{6pt}
This partition, consisting of $2^{2p-r}$ commutator subspaces in total,
is termed
{\em the commutator partition of order $r$ generated by} $\mathcal{B}^{[r]}_{su}$
and
denoted as $\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$.
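The commutator partition can be observed directly in a small case. The sketch below is illustrative only: it uses the symplectic criterion for commuting spinor generators (two generators ${\cal S}^{\zeta}_{\alpha}$ and ${\cal S}^{\eta}_{\beta}$ commute iff $\zeta\cdot\beta+\eta\cdot\alpha=0$ over $Z_2$, the identity appearing in the proof of Lemma~\ref{lemcommBiofBrsu}), and the chosen $\mathcal{B}^{[2]}_{su}$ is one arbitrary example.

```python
from itertools import product

p, r = 2, 2
bits = [tuple(b) for b in product((0, 1), repeat=p)]
spinors = [(z, a) for z in bits for a in bits]
dot = lambda u, v: sum(x * y for x, y in zip(u, v)) % 2

def commute(s1, s2):
    """Symplectic criterion: S^zeta_alpha and S^eta_beta commute
    iff zeta.beta + eta.alpha = 0 (mod 2)."""
    (z1, a1), (z2, a2) = s1, s2
    return (dot(z1, a2) + dot(z2, a1)) % 2 == 0

# an illustrative r-th maximal bi-subalgebra (r = 2): generators with alpha = 0
B_r = [(z, (0, 0)) for z in bits]

# group each spinor by the subset of B_r it commutes with (the commutator rule)
classes = {}
for s in spinors:
    key = frozenset(b for b in B_r if commute(s, b))
    classes.setdefault(key, []).append(s)
```

The $2^{2p-r}=4$ classes are exactly the commutator subspaces: each key is a maximal bi-subalgebra of $\mathcal{B}^{[r]}_{su}$ (or $\mathcal{B}^{[r]}_{su}$ itself), and the classes partition all $2^{2p}$ generators.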
The partition has a group structure too.
\vspace{6pt}
\begin{lemma}\label{lemcommsubclose}
For two commutator subspaces ${\cal W}_1$ and ${\cal W}_2$
respectively determined by the maximal bi-subalgebras $\mathcal{B}_{su,1}$
and $\mathcal{B}_{su,2}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$
of an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}$
of $su(2^p)$, the closure holds that,
$\forall\hspace{2pt}{\cal S}^{\zeta}_{\alpha}\in{\cal W}_1$ and ${\cal S}^{\eta}_{\beta}\in{\cal W}_2$,
the bi-additive generator ${\cal S}^{\zeta+\eta}_{\alpha+\beta}$
belongs to the commutator subspace ${\cal W}=[{\cal W}_1,{\cal W}_2]$
determined by the bi-subalgebra
$\mathcal{B}_{su,1}\sqcap\mathcal{B}_{su,2}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$,
i.e., $[{\cal S}^{\zeta+\eta}_{\alpha+\beta},\mathcal{B}_{su,1}\sqcap\mathcal{B}_{su,2}]=0$
and thus $[{\cal W},\mathcal{B}_{su,1}\sqcap\mathcal{B}_{su,2}]=0$.
\end{lemma}
\vspace{6pt}
For partitions generated by an $r$-th maximal bi-subalgebra, group isomorphisms analogous to
those of Corollary~2 in~\cite{SuTsai1} are preserved.
\vspace{6pt}
\begin{cor}\label{corisoBrsu}
Given an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}$
of the Lie algebra $su(2^p)$, there exists the isomorphism relation
$\{\mathcal{P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}
\cong\mathcal{G}(\mathcal{B}^{[r]}_{su})
\cong\{{\cal S}^{\zeta}_{\alpha}:{\cal S}^{\zeta}_{\alpha}\in\mathcal{B}^{[r]}_{su}\}
\cong{Z^{2p-r}_2}$ for the sets of all commutator subspaces in $su(2^p)$,
of all maximal bi-subalgebras, of all spinor generators in $\mathcal{B}^{[r]}_{su}$
and of all $(2p-r)$-digit strings under the commutator, the
$\sqcap$, the bi-addition $\diamond$ operations and the bit-wise
addition respectively.
\end{cor}
\vspace{6pt}
Due to this isomorphism relation,
it is convenient to label every maximal bi-subalgebra
in $\mathcal{G}(\mathcal{B}^{[r]}_{su})$
by a $(2p-r)$-digit binary string such that
$\mathcal{B}_{su,i}\sqcap\mathcal{B}_{su,j}=\mathcal{B}_{su,i+j}$
for $i,j\in{Z^{2p-r}_2}$ with the designation
$\mathcal{B}^{[r]}_{su}=\mathcal{B}_{su,\mathbf{0}}$.
To every commutator partition, there corresponds a
{\em dual partition} yielded by a coset rule.
\vspace{6pt}
\begin{thm}\label{thmbisubpar}
An $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}\equiv\mathcal{B}^{[r,\mathbf{0}]}_{su}\subset{su(2^p)}$
can generate a partition in $su(2^p)$ consisting of $2^r$ disjoint subspaces $\mathcal{B}^{[r,i]}_{su}$
via the coset rule of partition,
namely $su(2^p)=\bigcup_{i\in{Z^r_2}}\mathcal{B}^{[r,i]}_{su}$
complying with the condition $\forall\hspace{2pt}{\cal S}^{\zeta}_{\alpha}\in\mathcal{B}^{[r,i]}_{su}$,
${\cal S}^{\eta}_{\beta}\in\mathcal{B}^{[r,j]}_{su}$,
$\exists !\hspace{3pt}\mathcal{B}^{[r,l]}_{su}$ such that
${\cal S}^{\zeta+\eta}_{\alpha+\beta}\in\mathcal{B}^{[r,l]}_{su}$
with $i+j=l$, $\forall\hspace{2pt}i,j,l\in{Z^r_2}$.
\end{thm}
\vspace{6pt}
Under the operation of bi-addition every subspace $\mathcal{B}^{[r,i]}_{su}$ is
a coset of the ``subgroup'' $\mathcal{B}^{[r]}_{su}$ in $su(2^p)$
and is thus named {\em a coset subspace of} $\mathcal{B}^{[r]}_{su}$.
Denoted as $\{\mathcal{P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$,
the partition comprising these $2^r$ cosets is known as
{\em the bi-subalgebra partition of order $r$ generated by} $\mathcal{B}^{[r]}_{su}$.
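Theorem~\ref{thmbisubpar} likewise admits a direct check in the string picture: the coset subspaces are the cosets of $\mathcal{B}^{[r]}_{su}$ under bi-addition. The following is a minimal sketch with one arbitrary illustrative choice of $\mathcal{B}^{[2]}_{su}$ in $su(2^2)$.

```python
from itertools import product

p, r = 2, 2
bits = [tuple(b) for b in product((0, 1), repeat=p)]
spinors = [(z, a) for z in bits for a in bits]

def bi_add(s1, s2):
    """Bitwise addition of concatenated strings zeta o alpha over Z_2."""
    (z1, a1), (z2, a2) = s1, s2
    return (tuple((x + y) % 2 for x, y in zip(z1, z2)),
            tuple((x + y) % 2 for x, y in zip(a1, a2)))

# an illustrative r-th maximal bi-subalgebra (r = 2): generators with alpha = 0
B_r = [(z, (0, 0)) for z in bits]

def cosets(B, elems):
    """Partition the generators into cosets of the 'subgroup' B
    under bi-addition (the coset rule of partition)."""
    parts, seen = [], set()
    for s in elems:
        if s not in seen:
            coset = frozenset(bi_add(s, b) for b in B)
            parts.append(coset)
            seen |= coset
    return parts
```

The $2^r=4$ cosets are disjoint, cover all $2^{2p}$ generators, and obey the coset rule: bi-adding elements of two fixed cosets always lands in one single coset.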
\section{Partition Duality\label{secPARDUAL}}
The partitions over the Lie algebra $su(2^p)$ generated by the commutator
and the coset rules introduced in the last section are in general distinct.
However, as shown immediately below, the two partitions
enjoy an elegant duality relation.
\vspace{6pt}
\begin{lemma}\label{lemcommBiofBrsu}
The subspace spanned by spinors commuting with an $r$-th maximal
bi-subalgebra $\mathcal{B}^{[r]}_{su}$ of $su(2^p)$, $0\leq r\leq 2p$,
forms a $(2p-r)$-th maximal bi-subalgebra of $su(2^p)$.
\end{lemma}
\vspace{3pt}
\begin{proof}
Suppose $V\subset{su(2^p)}$ is the subspace spanned by spinors
commuting with $\mathcal{B}^{[r]}_{su}$.
For any spinors ${\cal S}^{\zeta_1}_{\alpha_1},{\cal S}^{\zeta_2}_{\alpha_2}\in{V}$,
the bi-additive ${\cal S}^{\zeta_1+\zeta_2}_{\alpha_1+\alpha_2}$
commutes with $\mathcal{B}^{[r]}_{su}$ owing to the vanishing
commutators
$[{\cal S}^{\zeta_1}_{\alpha_1},\mathcal{B}^{[r]}_{su}]=[{\cal S}^{\zeta_2}_{\alpha_2},\mathcal{B}^{[r]}_{su}]=0$.
Thus ${\cal S}^{\zeta_1+\zeta_2}_{\alpha_1+\alpha_2}\in{V}$
and the subspace $V\equiv\mathcal{B}$ is a bi-subalgebra.
By Lemma~\ref{lemnumbi-subinsu}, the fact that $\mathcal{B}$
is a $(2p-r)$-th maximal bi-subalgebra of $su(2^p)$
can be affirmed by counting the number of
spinors commuting with $\mathcal{B}^{[r]}_{su}$.
Let the subalgebra $\mathcal{B}^{[r]}_{su}$ be
spanned by $2p-r$ independent generators
$\{{\cal S}^{\eta_1}_{\beta_1},{\cal S}^{\eta_2}_{\beta_2},\cdots,{\cal S}^{\eta_{2p-r}}_{\beta_{2p-r}}\}$.
It is easy to derive that a number $2^r$ of spinors ${\cal S}^{\zeta}_{\alpha}\in{su(2^p)}$
satisfy the identities
$\zeta\cdot\beta_t+\eta_t\cdot\alpha=0$ for all $1\leq t\leq 2p-r$.
Hence $\mathcal{B}$ is composed of in total $2^r$ spinor generators and
is a $(2p-r)$-th maximal bi-subalgebra of $su(2^p)$.
\end{proof}
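The counting step in the proof above can be reproduced by brute force for small $p$: the spinors commuting with every generator of $\mathcal{B}^{[r]}_{su}$ are exactly the solutions of $\zeta\cdot\beta_t+\eta_t\cdot\alpha=0$, and there are $2^r$ of them. A sketch with one arbitrary illustrative choice of $\mathcal{B}^{[2]}_{su}$ in $su(2^2)$ (which here happens to be abelian and self-commuting, the Cartan-subalgebra case):

```python
from itertools import product

p, r = 2, 2
bits = [tuple(b) for b in product((0, 1), repeat=p)]
spinors = [(z, a) for z in bits for a in bits]
dot = lambda u, v: sum(x * y for x, y in zip(u, v)) % 2

def commute(s1, s2):
    """zeta.beta + eta.alpha = 0 (mod 2), the identity used in the proof."""
    (z1, a1), (z2, a2) = s1, s2
    return (dot(z1, a2) + dot(z2, a1)) % 2 == 0

# an illustrative r-th maximal bi-subalgebra (r = 2): generators with alpha = 0
B_r = [(z, (0, 0)) for z in bits]

# the subspace V of spinors commuting with the whole of B^[r]_su
V = [s for s in spinors if all(commute(s, b) for b in B_r)]
```

$V$ has $2^r$ elements and is closed under bi-addition, confirming that it is the $(2p-r)$-th maximal bi-subalgebra of Lemma~\ref{lemcommBiofBrsu}.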
\vspace{6pt}
That is to say, there exists a {\em unique}
$(2p-r)$-th maximal bi-subalgebra $\mathcal{B}^{[2p-r]}_{su}$
commuting with a given $r$-th maximal bi-subalgebra
$\mathcal{B}^{[r]}_{su}\subset{su(2^p)}$.
The following two lemmas pave the way to the duality relation.
\vspace{6pt}
\begin{lemma}\label{lemcommsubcoset}
A commutator subspace of the commutator partition
$\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$
generated by an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}\subset{su(2^p)}$
is a coset subspace of the bi-subalgebra partition
$\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[2p-r]}_{su})\}$
generated by a $(2p-r)$-th maximal bi-subalgebra
$\mathcal{B}^{[2p-r]}_{su}\subset{su(2^p)}$
as long as
$[\mathcal{B}^{[r]}_{su},\mathcal{B}^{[2p-r]}_{su}]=0$, $0\leq r\leq 2p$.
\end{lemma}
\vspace{3pt}
\begin{proof}
This lemma is an implication of Lemma~\ref{lemcommsubclose}
by relating the commutator subspaces of
$\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$
in terms of bitwise addition of index strings.
Specifically for two arbitrary commutator subspaces
${\cal W}(\mathcal{B}_{su,i})$ and ${\cal W}(\mathcal{B}_{su,j})$
determined by two maximal bi-subalgebras
$\mathcal{B}_{su,i}$ and $\mathcal{B}_{su,j}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$
respectively,
the bi-additive ${\cal S}^{\zeta+\eta}_{\alpha+\beta}$
of any two spinors
${\cal S}^{\zeta}_{\alpha}\in{\cal W}(\mathcal{B}_{su,i})$
and
${\cal S}^{\eta}_{\beta}\in{\cal W}(\mathcal{B}_{su,j})$
belongs to ${\cal W}(\mathcal{B}_{su,i+j}=\mathcal{B}_{su,i}\sqcap\mathcal{B}_{su,j})$,
$i,j\in{Z^{2p-r}_2}$.
By this index rule,
the subalgebra
$\mathcal{B}^{[2p-r]}_{su}=\mathcal{B}_{su,\mathbf{0}}$
is designated to be ${\cal W}(\mathcal{B}_{su,\mathbf{0}})$.
\end{proof}
\vspace{6pt}
\begin{lemma}\label{lemcosetcommwith}
A coset subspace of the bi-subalgebra partition $\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$
generated by an $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}\subset{su(2^p)}$
is a commutator subspace of the commutator partition $\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[2p-r]}_{su})\}$
generated by a $(2p-r)$-th maximal bi-subalgebra
$\mathcal{B}^{[2p-r]}_{su}\subset{su(2^p)}$
as long as
$[\mathcal{B}^{[r]}_{su},\mathcal{B}^{[2p-r]}_{su}]=0$,
$0\leq r \leq 2p$.
\end{lemma}
\vspace{3pt}
\begin{proof}
This lemma is asserted by showing that,
for any coset subspace
$\mathcal{B}^{[r,i]}_{su}\in\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$,
there exists a unique maximal bi-subalgebra
$\mathcal{B}_{su,i}$
of $\mathcal{B}^{[2p-r]}_{su}$ such that
$\mathcal{B}^{[r,i]}_{su}$ commutes with $\mathcal{B}_{su,i}$
but with no part of the complement
${\mathcal{B}}^c_{su,i}
=\mathcal{B}^{[2p-r]}_{su}-\mathcal{B}_{su,i}$,
here $\mathcal{B}^{[r]}_{su}=\mathcal{B}^{[r,\mathbf{0}]}_{su}$
and $i\in{Z^r_2}$.
With an arbitrary choice ${\cal S}^{\zeta_0}_{\alpha_0}\in\mathcal{B}^{[r,i]}_{su}$,
every generator of $\mathcal{B}^{[r,i]}_{su}$ can be written as the bi-additive
${\cal S}^{\zeta_0+\xi}_{\alpha_0+\gamma}$ of ${\cal S}^{\zeta_0}_{\alpha_0}$
and a spinor ${\cal S}^{\xi}_{\gamma}\in\mathcal{B}^{[r]}_{su}$,
{\em cf}. Theorem~\ref{thmbisubpar}.
Due to the vanishing commutator
$[\mathcal{B}^{[r]}_{su},\mathcal{B}^{[2p-r]}_{su}]=0$,
a spinor ${\cal S}^{\eta}_{\beta}\in\mathcal{B}^{[2p-r]}_{su}$
commutes with the whole bi-subalgebra $\mathcal{B}^{[r,i]}_{su}$
if $[{\cal S}^{\zeta_0}_{\alpha_0},{\cal S}^{\eta}_{\beta}]=0$
or with no subset of it if $[{\cal S}^{\zeta_0}_{\alpha_0},{\cal S}^{\eta}_{\beta}]\neq 0$.
There exists at least one generator of $\mathcal{B}^{[2p-r]}_{su}$
commuting with $\mathcal{B}^{[r,i]}_{su}$, otherwise
a contradiction occurs.
Let $\mathcal{V}\subset\mathcal{B}^{[2p-r]}_{su}$
be the subspace spanned by all spinors
commuting with $\mathcal{B}^{[r,i]}_{su}$.
The bi-additive ${\cal S}^{\eta_1+\eta_2}_{\beta_1+\beta_2}$
of any ${\cal S}^{\eta_1}_{\beta_1}$ and ${\cal S}^{\eta_2}_{\beta_2}\in\mathcal{V}$
must commute with $\mathcal{B}^{[r,i]}_{su}$ because of
the vanishing commutators
$[{\cal S}^{\eta_1}_{\beta_1},\mathcal{B}^{[r,i]}_{su}]=[{\cal S}^{\eta_2}_{\beta_2},\mathcal{B}^{[r,i]}_{su}]=0$.
Further by the nonvanishing commutators
$[{\cal S}^{\zeta_0}_{\alpha_0},{\cal S}^{\hat{\eta}_1}_{\hat{\beta}_1}]\neq 0$
and $[{\cal S}^{\zeta_0}_{\alpha_0},{\cal S}^{\hat{\eta}_2}_{\hat{\beta}_2}]\neq 0$
for any pair ${\cal S}^{\hat{\eta}_1}_{\hat{\beta}_1}$
and
${\cal S}^{\hat{\eta}_2}_{\hat{\beta}_2}\in\mathcal{V}^c=\mathcal{B}^{[2p-r]}_{su}-\mathcal{V}$,
it follows that
the bi-additive ${\cal S}^{\hat{\eta}_1+\hat{\eta}_2}_{\hat{\beta}_1+\hat{\beta}_2}$
commutes with ${\cal S}^{\zeta_0}_{\alpha_0}$, and hence
$[{\cal S}^{\hat{\eta}_1+\hat{\eta}_2}_{\hat{\beta}_1+\hat{\beta}_2},\mathcal{B}^{[r,i]}_{su}]=0$.
This validates $\mathcal{V}\equiv\mathcal{B}_{su,i}$ to be a maximal bi-subalgebra
of $\mathcal{B}^{[2p-r]}_{su}$.
The uniqueness of $\mathcal{B}_{su,i}$ is shown by contradiction.
Suppose there were two maximal bi-subalgebras
$\mathcal{B}_{su,1}$ and $\mathcal{B}_{su,2}$ both commuting with $\mathcal{B}^{[r,i]}_{su}$,
and at least one generator
${\cal S}^{\eta_3}_{\beta_3}\in\mathcal{B}_{su,1}$
with
${\cal S}^{\eta_3}_{\beta_3}\in\mathcal{B}^{[2p-r]}_{su}-\mathcal{B}_{su,2}$.
An inconsistency arises that
$[{\cal S}^{\eta_3}_{\beta_3},\mathcal{B}^{[r,i]}_{su}]=0$
as well as
$[{\cal S}^{\eta_3}_{\beta_3},\mathcal{B}^{[r,i]}_{su}]\neq 0$.
The lemma is thus affirmed.
\end{proof}
\vspace{6pt}
A {\em partition duality} hidden in the Lie algebra $su(2^p)$
is thus disclosed.
\vspace{6pt}
\begin{thm}\label{thmBiCommBrsu}
In the Lie algebra $su(2^p)$,
the commutator partition of order $r$
$\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$
generated by an $r$-th maximal bi-subalgebra
$\mathcal{B}^{[r]}_{su}\subset{su(2^p)}$
is the bi-subalgebra partition of order $2p-r$
$\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[2p-r]}_{su})\}$
generated by a $(2p-r)$-th maximal bi-subalgebra
$\mathcal{B}^{[2p-r]}_{su}\subset{su(2^p)}$
provided the two bi-subalgebras
$\mathcal{B}^{[r]}_{su}$ and $\mathcal{B}^{[2p-r]}_{su}$
commute, $0\leq r\leq 2p$.
\end{thm}
\vspace{6pt}
The partitions
$\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$ and $\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$
become identical when
$\mathcal{B}^{[r]}_{su}$ is a Cartan subalgebra,
for a Cartan subalgebra
is a $p$-th maximal bi-subalgebra of $su(2^p)$
and is the maximal subspace commuting with itself.
\vspace{6pt}
\begin{cor}\label{coroBiComCartan}
The commutator partition and the bi-subalgebra partition generated by a same
Cartan subalgebra are identical.
\end{cor}
\vspace{6pt}
An important linkage to be examined is that
both a bi-subalgebra partition
$\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$
and a commutator partition
$\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$
can return to a {\em Quotient Algebra Partition (QAP)}
if the generating bi-subalgebra $\mathcal{B}^{[r]}_{su}$
is abelian.
The partition conversions mainly rely upon the fact that
an abelian $r$-th maximal bi-subalgebra $\mathcal{B}^{[r]}_{su}$ is
an $(r-p)$-th maximal bi-subalgebra
of a Cartan subalgebra $\mathfrak{C}\subset{su(2^p)}$, $p\leq r\leq 2p$.
With $\mathcal{B}^{[r]}_{su}=\mathfrak{B}^{[r-p]}$ being a bi-subalgebra of $\mathfrak{C}$,
each coset subspace of $\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$
is also a {\em partitioned conjugate-pair subspace}
${\cal W}(\mathfrak{B},\mathfrak{B}^{[r-p]};s)$
of the doublet $(\mathfrak{B},\mathfrak{B}^{[r-p]})$,
for $s\in{Z^{r-p}_2}$
and $\mathfrak{B}\in\mathcal{G}(\mathfrak{C})$ being a maximal bi-subalgebra of $\mathfrak{C}$,
{\em cf.} Lemma~5 of~\cite{SuTsai2}.
This equivalence
requires the existence
of a unique maximal bi-subalgebra $\mathfrak{B}$
commuting with
the former subspace.
\vspace{6pt}
\begin{lemma}\label{lemcosubconjupair}
A coset subspace of the bi-subalgebra partition
$\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$
generated by an abelian $r$-th maximal bi-subalgebra
$\mathcal{B}^{[r]}_{su}\subset{su(2^p)}$, $p\leq r\leq 2p$,
uniquely commutes with a maximal bi-subalgebra
of a Cartan subalgebra $\mathfrak{C}\supset\mathcal{B}^{[r]}_{su}$.
\end{lemma}
\vspace{2pt}
\begin{proof}
Every spinor generator of $su(2^p)$ must commute with a unique maximal bi-subalgebra
of $\mathfrak{C}$ by Lemma~3 in~\cite{SuTsai1}.
Let ${\cal S}^{\hspace{.5pt}\zeta}_{\alpha}$
be an arbitrary generator in a coset subspace
$\mathcal{B}^{[r,i]}_{su}\in\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$
and commute with a maximal bi-subalgebra
$\mathfrak{B}\in\mathcal{G}(\mathfrak{C})$ of $\mathfrak{C}$,
here $i\in{Z^r_2}$ and $\mathcal{B}^{[r]}_{su}=\mathcal{B}^{[r,\mathbf{0}]}_{su}$.
According to Theorem~\ref{thmbisubpar},
each generator in $\mathcal{B}^{[r,i]}_{su}$ other than ${\cal S}^{\hspace{.5pt}\zeta}_{\alpha}$
can be written as a bi-additive ${\cal S}^{\hspace{.5pt}\zeta+\eta}_{\alpha+\beta}$
with a generator ${\cal S}^{\hspace{.2pt}\eta}_{\beta}\in\mathcal{B}^{[r]}_{su}$,
${\cal S}^{\hspace{.2pt}\eta}_{\beta}\neq{\cal S}^{\mathbf{0}}_{\hspace{.4pt}\mathbf{0}}$.
Owing to the vanishing commutator $[\mathcal{B}^{[r]}_{su},\mathfrak{B}]=0$
as $\mathcal{B}^{[r]}_{su}\subset\mathfrak{C}$
and by Lemma~1 in~\cite{SuTsai1}, it follows that
${\cal S}^{\hspace{.5pt}\zeta+\eta}_{\alpha+\beta}$ commutes with $\mathfrak{B}$.
The uniqueness of such a maximal bi-subalgebra $\mathfrak{B}$
can be confirmed by Lemma~3 in~\cite{SuTsai2}.
\end{proof}
\vspace{6pt}
With the application of the {\em coset rule of bisection}~\cite{SuTsai1,SuTsai2},
the subspace ${\cal W}(\mathfrak{B},\mathfrak{B}^{[r-p]};s)$ further divides into
two {\em conditioned subspaces} $W(\mathfrak{B},\mathfrak{B}^{[r-p]};s)$
and $\hat{W}(\mathfrak{B},\mathfrak{B}^{[r-p]};s)$,
which are respectively abelian and a member subspace of
the partition $\{{\cal P}_{\mathcal{Q}}(\mathcal{B}^{[r]}_{su})\}$.
The return from a bi-subalgebra partition to a quotient-algebra partition is thus achieved.
\vspace{6pt}
\begin{cor}\label{coroBiPartoQA}
Imposed with the coset rule of bisection, the bi-subalgebra partition $\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$
generated by an $r$-th maximal bi-subalgebra
$\mathcal{B}^{[r]}_{su}\subset su(2^p)$
recovers to be a quotient algebra partition
$\{{\cal P}_{\mathcal{Q}}(\mathcal{B}^{[r]}_{su})\}$
when $\mathcal{B}^{[r]}_{su}$ is abelian.
\end{cor}
\vspace{6pt}
In advance of converting to a quotient-algebra partition,
a commutator partition
$\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$
transits to the bi-subalgebra partition
$\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$
by resorting to the {\em coset rule of partition}~\cite{SuTsai2}.
\vspace{6pt}
\begin{lemma}\label{lemcommsubdivide}
Refined by applying the coset rule of partition,
the commutator partition $\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$
generated by an $r$-th maximal bi-subalgebra
$\mathcal{B}^{[r]}_{su}\subset su(2^p)$
changes into the bi-subalgebra partition
$\{{\cal P}_{\mathcal{B}}(\mathcal{B}^{[r]}_{su})\}$.
\end{lemma}
\vspace{3pt}
\begin{proof}
Specifically,
every commutator subspace ${\cal W}(\mathcal{B}_{su,i})$ of
$\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$
can divide into a number $2^{2r-2p}$ of
subspaces ${\cal W}(\mathcal{B}_{su,i};s)$
respecting the coset rule of partition:
${\cal S}^{\zeta+\eta}_{\alpha+\beta}\in{\cal W}(\mathcal{B}_{su,i+j};s+t)$
for every pair
${\cal S}^{\zeta}_{\alpha}\in{\cal W}(\mathcal{B}_{su,i};s)$ and
${\cal S}^{\eta}_{\beta}\in{\cal W}(\mathcal{B}_{su,j};t)$
with ${\cal W}(\mathcal{B}_{su,\mathbf{0}};\mathbf{0})=\mathcal{B}^{[r]}_{su}$,
here $\mathcal{B}_{su,i},\mathcal{B}_{su,j}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$,
$i,j\in{Z^{2p-r}_2}$, and $s,t\in{Z^{2r-2p}_2}$.
This division is validated following the same procedures in Lemmas~1, 5 and~15
of~\cite{SuTsai2}
except replacing
the bi-subalgebra $\mathfrak{B}^{[r]}$ of $\mathfrak{C}$ by
$\mathcal{B}^{[r]}_{su}$ and
the Cartan subalgebra
$\mathfrak{C}\subset{su(2^p)}$ by a $(2p-r)$-th maximal bi-subalgebra
$\mathcal{B}^{[2p-r]}_{su}\subset{su(2^p)}$.
Here, $\mathcal{B}^{[2p-r]}_{su}$ is the unique bi-subalgebra
commuting with $\mathcal{B}^{[r]}_{su}$, {\em cf.} Lemma~\ref{lemcommBiofBrsu}, and
the latter is a $(2r-2p)$-th maximal bi-subalgebra of the former.
With the above substitution,
Lemmas~1 and~5 of~\cite{SuTsai2} affirm
the coset rule on partitioned subspaces
${\cal W}(\mathcal{B}_{su,i};s)$, $s\in{Z^{2r-2p}_2}$, of a same maximal bi-subalgebra
$\mathcal{B}_{su,i}\in\mathcal{G}(\mathcal{B}^{[r]}_{su})$.
While Lemma~15 of~\cite{SuTsai2} accounts for
the rule relating partitioned subspaces
associated to different maximal bi-subalgebras,
recalling the labelling choice
$\mathcal{B}_{su,i}\sqcap\mathcal{B}_{su,j}=\mathcal{B}_{su,i+j}$.
\end{proof}
\vspace{6pt}
Notice that this lemma is valid whether or not the bi-subalgebra
$\mathcal{B}^{[r]}_{su}$ is abelian.
After a required bisection on each divided subspace, the conversion is complete.
\vspace{6pt}
\begin{cor}\label{coroCommPartoQA}
Imposed with the coset rules of partition and bisection,
the commutator partition $\{{\cal P}_{\mathcal{C}}(\mathcal{B}^{[r]}_{su})\}$
generated by an $r$-th maximal bi-subalgebra
$\mathcal{B}^{[r]}_{su}\subset su(2^p)$
recovers to be a quotient algebra partition
$\{{\cal P}_{\mathcal{Q}}(\mathcal{B}^{[r]}_{su})\}$
when $\mathcal{B}^{[r]}_{su}$ is abelian.
\end{cor}
\vspace{6pt}
Recall that the imposition of the two coset rules is orderless~\cite{SuTsai2}.
It is concluded that a bi-subalgebra partition and
a commutator partition are two notions sharing a duality relation, and both
can respectively return to a quotient-algebra partition when the generating
bi-subalgebra is abelian.
A quotient-algebra partition
is endowed with an {\em abelian group structure}.
Recall Theorem~3 in~\cite{SuTsai2} that,
within the quotient-algebra partition of rank $r$
$\{{\cal P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$ generated by an $r$-th maximal bi-subalgebra
$\mathfrak{B}^{[r]}$ of a Cartan subalgebra
$\mathfrak{C}\subset{su(N)}$,
$0\leq r\leq p$,
the conditioned subspaces ${W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)$
satisfy the {\em quaternion condition of closure of rank $r$},
for all $\epsilon,\sigma\in{Z_2}$, $i,j\in{Z^r_2}$
and $\mathfrak{B},\mathfrak{B}'\in\mathcal{G}(\mathfrak{C})$,
\begin{align}\label{eqgenWcommrankr}
[{W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i), {W}^{\sigma}(\mathfrak{B}',\mathfrak{B}^{[r]};j)]\subset
{W}^{\epsilon+\sigma}(\mathfrak{B}\sqcap\mathfrak{B}',\mathfrak{B}^{[r]};i+j),
\end{align}
here ${W}^{0}(\mathfrak{C},\mathfrak{B}^{[r]};\mathbf{0})=\{0\}$
and
${W}^{1}(\mathfrak{C},\mathfrak{B}^{[r]};\mathbf{0})=\mathfrak{C}$.
Moreover,
an operation $\circledcirc$ called {\em tri-addition} is then defined in
$\{{\cal P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$,
{\em cf.} Corollary~2 in~\cite{SuTsai2}:
${W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)\circledcirc{W}^{\sigma}(\mathfrak{B}',\mathfrak{B}^{[r]};j)
={W}^{\epsilon+\sigma}(\mathfrak{B}\sqcap\mathfrak{B}',\mathfrak{B}^{[r]};i+j)\in\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$
for every pair
${W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)$
and
${W}^{\sigma}(\mathfrak{B}',\mathfrak{B}^{[r]};j)
\in\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$.
Under this operation,
the set of subspaces in
$\{{\cal P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$ is isomorphic
to the additive group $Z^{p+r+1}_2$.
The expositions in subsequent sections will make essential use of
this group structure.
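Since every conditioned subspace is fixed by its labels, the group structure above admits a quick numerical check. The following is a minimal sketch under an assumed encoding: a subspace ${W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)$ is represented by the tuple $(\epsilon,\beta,i)$, where $\beta\in{Z^p_2}$ is an illustrative binary label for the maximal bi-subalgebra $\mathfrak{B}$; the tri-addition then acts as componentwise addition mod $2$, realizing the additive group $Z^{p+r+1}_2$.

```python
# Minimal sketch (assumed labelling): a conditioned subspace
# W^eps(B, B^[r]; i) is encoded by the tuple (eps, beta, i), where
# beta in Z_2^p is an illustrative binary label for the maximal
# bi-subalgebra B.  Tri-addition acts componentwise mod 2 (XOR),
# so the labels form the additive group Z_2^(p+r+1).
from itertools import product

p, r = 2, 1  # small illustrative sizes

def tri_add(u, v):
    """Tri-addition on labels: componentwise XOR of (eps, beta, i)."""
    (e1, b1, i1), (e2, b2, i2) = u, v
    return (e1 ^ e2,
            tuple(x ^ y for x, y in zip(b1, b2)),
            tuple(x ^ y for x, y in zip(i1, i2)))

labels = [(e, b, i)
          for e in (0, 1)
          for b in product((0, 1), repeat=p)
          for i in product((0, 1), repeat=r)]

zero = (0, (0,) * p, (0,) * r)          # the identity label
assert len(labels) == 2 ** (p + r + 1)  # order of Z_2^(p+r+1)
for u in labels:
    assert tri_add(u, zero) == u        # identity element
    assert tri_add(u, u) == zero        # every element is self-inverse
```

The two loop assertions confirm the identity element and self-inverseness, the defining features of an elementary abelian $2$-group.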
\section{Merging and Detaching a Co-Quotient Algebra\label{secmergedetach}}
One of the major subjects of this series of articles~\cite{Su,SuTsai1,SuTsai2}
is to establish a scheme that comprehensively produces admissible decompositions of unitary Lie algebras.
Relying upon the framework of the quotient-algebra partition, such a scheme
eases the systematic generation of Cartan decompositions.
A Cartan decomposition
$su(N)=\mathfrak{t}\oplus\mathfrak{p}$ is the direct sum
of a subalgebra $\mathfrak{t}$ and a vector subspace $\mathfrak{p}$
satisfying the decomposition condition
$[\mathfrak{t},\mathfrak{t}]\subset\mathfrak{t}$,
$[\mathfrak{t},\mathfrak{p}]\subset\mathfrak{p}$,
$[\mathfrak{p},\mathfrak{p}]\subset\mathfrak{t}$,
and
$\text{Tr}(\mathfrak{t}\hspace{.5pt}\mathfrak{p})=0$.
The success of the decomposition production rests on a key recognition of the scheme:
the subalgebra $\mathfrak{t}$ of a decomposition
$\mathfrak{t}\oplus\mathfrak{p}$ is a proper maximal subgroup
of a quotient-algebra partition under the operation of tri-addition~\cite{SuTsai2}.
Thanks to the group-structured nature of quotient-algebra partitions,
a terse law to decide the type of a decomposition has been concluded~\cite{SuTsai2}:
a Cartan decomposition $su(N)=\mathfrak{t}\oplus\mathfrak{p}$
is of type {\bf AI} if the maximal
abelian subalgebra of $\mathfrak{p}$ is a Cartan subalgebra $\mathfrak{C}\subset{su(N)}$,
of type {\bf AII} if the maximal
abelian subalgebra of $\mathfrak{p}$ is a proper maximal bi-subalgebra $\mathfrak{B}$ of $\mathfrak{C}$,
or of type {\bf AIII} if the maximal
abelian subalgebra of $\mathfrak{p}$ is a complement
$\mathfrak{B}^c=\mathfrak{C}-\mathfrak{B}$.
The arrangements of quotient and co-quotient algebras facilitate
schematic manipulations of a quotient-algebra partition~\cite{Su,SuTsai1,SuTsai2}.
In the quotient-algebra partition of rank $r$
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$
generated by an $r$-th maximal bi-subalgebra
$\mathfrak{B}^{[r]}$ of a Cartan subalgebra $\mathfrak{C}\subset{su(N)}$,
for $0\leq r\leq p$ and $2^{p-1}<N\leq 2^p$,
there are determined the quotient algebra $\{{\cal Q}(\mathfrak{B}^{[r]})\}$
given by the center subalgebra $\mathfrak{B}^{[r]}$ and
a set of co-quotient algebras, each of which
$\{\mathcal{Q}({W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i))\}$
is given by a non-null conditioned subspace
${W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)\in\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$
other than $\mathfrak{B}^{[r]}$,
where $\epsilon\in{Z_2}$, $i\in{Z^r_2}$ and
$\mathfrak{B}\in\mathcal{G}(\mathfrak{C})$ is a maximal bi-subalgebra of $\mathfrak{C}$.
Two kinds of co-quotient algebras are admitted,
according to whether the center subalgebra
${W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)$ is a
{\em degrade} conditioned subspace, when
$\mathfrak{B}\supset\mathfrak{B}^{[r]}$,
or a {\em regular} one, when $\mathfrak{B}\nsupseteq\mathfrak{B}^{[r]}$~\cite{SuTsai2}.
The two kinds exhibit distinct properties as detailed in this section:
the former allows a procedure of {\em merging} conjugate pairs and
the latter can convert to a refined version
by {\em detaching} the pairs.
\subsection{Merging a Co-Quotient Algebra\label{subsecmerge}}
To illustrate the merging procedure, the quotient algebra of rank
$r$ $\{{\cal Q}(\mathfrak{B}^{[r]};2^{p+r}-1)\}$
given by $\mathfrak{B}^{[r]}\subset\mathfrak{C}$
and the co-quotient algebra of rank $r$
$\{{\cal Q}(\mathfrak{B}^{[r,l]};2^{p+r}-2^{2r-2})\}$ given by
a coset $\mathfrak{B}^{[r,l]}$ of $\mathfrak{B}^{[r]}$ in $\mathfrak{C}$,
$l\in{Z^r_2-\{\mathbf{0}\}}$,
are respectively displayed in Figs.~\ref{figmergQA}(a) and~\ref{figmergQA}(b).
There is no loss of generality in taking the coset $\mathfrak{B}^{[r,l]}$
as the center subalgebra of the co-quotient algebra,
for a non-null degrade conditioned subspace
${W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)$ with $\mathfrak{B}\supset\mathfrak{B}^{[r]}$
is a coset of $\mathfrak{B}^{[r]}$ in a Cartan subalgebra
$\mathfrak{C}^{\ast}$ being a superset of both
$\mathfrak{B}^{[r]}$ and
${W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)$.
As depicted in Fig.~\ref{figmergQA},
the maximal bi-subalgebras $\mathfrak{B}_m$ and $\mathfrak{B}_n\in\mathcal{G}(\mathfrak{C})$
keep the closure identity
$\mathfrak{B}_1=\mathfrak{B}_m\sqcap\mathfrak{B}_n$, $1<m,n<2^p$,
and the relations hold for subspace indices
$s+\hat{s}=i+\hat{i}=j+\hat{j}=l$ and $s=i+j=\hat{i}+\hat{j}$,
$i,\hat{i},j,\hat{j},s,\hat{s}\in{Z^r_2}$.
In addition, $\mathfrak{B}_1$ is a superset of $\mathfrak{B}^{[r]}$
but $\mathfrak{B}_1\nsupseteq\mathfrak{B}^{[r,l]}$.
The merging procedure is conducted in $\{{\cal Q}(\mathfrak{B}^{[r,l]};2^{p+r}-2^{2r-2})\}$
by pairwise combining conditioned subspaces that commute.
Two options of merged co-quotient algebras
are then rendered, the one via {\em parallel} merging as shown
in Fig.~\ref{figmergQA}(c) and
the other via {\em crossing} merging in Fig.~\ref{figmergQA}(d).
Based on the original structure in Fig.~\ref{figmergQA}(a) or (b),
it is plain to verify that
the commutation relation of Eq.~\ref{eqgenWcommrankr} is preserved in these two algebras.
As a consequence of the merging, there arise two choices of $(r-1)$-th maximal bi-subalgebras,
$\mathfrak{B}^{[r-1]}=\mathfrak{B}^{[r]}\cup{W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};s)$
of a Cartan subalgebra $\mathfrak{C}_{merg}$
and
$\hat{\mathfrak{B}}^{[r-1]}=\mathfrak{B}^{[r]}\cup\hat{W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};\hat{s})$
of another Cartan subalgebra $\hat{\mathfrak{C}}_{merg}$,
for ${W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};s)$ and
$\hat{W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};\hat{s})$ each being a conditioned
subspace of the doublet $(\mathfrak{B}_1,\mathfrak{B}^{[r]})$.
The next crucial step is to assert that each abelian subspace thus merged is a conditioned subspace
in the quotient-algebra partition of rank $r-1$
$\{\mathcal{P}_{\mathcal{Q}}(\widetilde{\mathfrak{B}}^{[r-1]})\}$
given by
$\widetilde{\mathfrak{B}}^{[r-1]}=\mathfrak{B}^{[r-1]}\subset\mathfrak{C}^{\hspace{0.5pt}\ast}=\mathfrak{C}_{merg}$
or by
$\widetilde{\mathfrak{B}}^{[r-1]}=\hat{\mathfrak{B}}^{[r-1]}\subset\mathfrak{C}^{\hspace{0.5pt}\ast}=\hat{\mathfrak{C}}_{merg}$.
Without loss of generality, let the merged subspace
$W=W(\mathfrak{B}_m,\mathfrak{B}^{[r]};i)\cup W(\mathfrak{B}_n,\mathfrak{B}^{[r]};j)$
in Fig.~\ref{figmergQA}(c) be an example.
There are two requirements for $W$ to be a conditioned subspace in the quotient-algebra
partition $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r-1]})\}$:
$W$ must commute
with a maximal bi-subalgebra $\mathfrak{B}$ of $\mathfrak{C}_{merg}$ and,
$\forall\hspace{2pt}{\cal S}^{\zeta}_{\alpha},{\cal S}^{\eta}_{\beta}\in{W}$,
${\cal S}^{\zeta+\eta}_{\alpha+\beta}\in\mathfrak{B}\cap\mathfrak{B}^{[r-1]}$,
{\em cf.} Definition~2 in~\cite{SuTsai2}.
The subspace $W$ is abelian due to the commutation relation
of Eq.~\ref{eqgenWcommrankr}:
$[{W}(\mathfrak{B}_m,\mathfrak{B}^{[r]};i), {W}(\mathfrak{B}_n,\mathfrak{B}^{[r]};j)]\subset
\hat{W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};s)=\{0\}$.
For any two generators ${\cal S}^{\zeta}_{\alpha}$ and ${\cal S}^{\eta}_{\beta}\in{W}$,
either both lie in $W(\mathfrak{B}_m,\mathfrak{B}^{[r]};i)$
(or both in $W(\mathfrak{B}_n,\mathfrak{B}^{[r]};j)$), or
${\cal S}^{\zeta}_{\alpha}\in W(\mathfrak{B}_m,\mathfrak{B}^{[r]};i)$ and
${\cal S}^{\eta}_{\beta}\in W(\mathfrak{B}_n,\mathfrak{B}^{[r]};j)$.
By definition, the bi-additive generator ${\cal S}^{\zeta+\eta}_{\alpha+\beta}$ is in
$\mathfrak{B}^{[r]}\subset\mathfrak{C}_{merg}$ in the first case.
In the second case, the generator ${\cal S}^{\zeta+\eta}_{\alpha+\beta}$
is contained in the subspace
${W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};s)\subset\mathfrak{C}_{merg}$ owing to
Corollary~3 in~\cite{SuTsai2}.
Thus the generator ${\cal S}^{\zeta+\eta}_{\alpha+\beta}$ must
belong to $\mathfrak{B}^{[r-1]}$.
Since the generator ${\cal S}^{\zeta+\eta}_{\alpha+\beta}\in\mathfrak{C}_{merg}$ commutes
not only with ${\cal S}^{\zeta}_{\alpha}$ and ${\cal S}^{\eta}_{\beta}\in W$
but also with the whole of $W$, $W$ being abelian,
there exists a non-null subspace $V$ in $\mathfrak{C}_{merg}$
commuting with $W$.
By Lemma~4 in~\cite{SuTsai1}, such a non-null subspace must be a maximal
bi-subalgebra $V\equiv\mathfrak{B}$, {\em i.e.}, $[W,\mathfrak{B}]=0$,
for $\mathfrak{C}_{merg}$ is a Cartan subalgebra.
Meanwhile, ${\cal S}^{\zeta+\eta}_{\alpha+\beta}$ is in $\mathfrak{B}$
by Lemma~14 in~\cite{SuTsai1}.
This establishes the assertion that $W$
is a conditioned subspace of the doublet $(\mathfrak{B},\mathfrak{B}^{[r-1]})$
in the quotient-algebra partition $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r-1]})\}$.
Similarly, each of the other merged subspaces in Fig.~\ref{figmergQA}(c)
is a conditioned subspace in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r-1]})\}$.
By the same token, each subspace in Fig.~\ref{figmergQA}(d) is a
conditioned subspace in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r-1]})\}$.
That is to say, the partition $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r-1]})\}$
admits co-quotient algebras of both ranks $r$ and $r-1$.
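The pairing that underlies the merging can be traced at the level of subspace labels. In the hedged sketch below, a conditioned subspace is reduced to the pair $(\beta_m,i)$, the intersection $\mathfrak{B}_m\sqcap\mathfrak{B}_n=\mathfrak{B}_1$ is modelled by the XOR rule $\beta_m\oplus\beta_n=\beta_1$, and the merged partner of $(\beta_m,i)$ is the unique $(\beta_n,j)$ with $i+j=s$; the particular values of $\beta_1$ and $s$ are illustrative assumptions.

```python
# Hedged sketch of the merging bookkeeping.  A conditioned subspace is
# reduced to the label pair (b, i); the intersection B_m ⊓ B_n = B_1 is
# modelled by b_m XOR b_n = b1 and the center index by i XOR j = s.
# Both b1 and s are illustrative assumptions.
from itertools import product

p, r = 2, 2
b1 = (0, 1)   # assumed label of the bi-subalgebra B_1
s  = (1, 0)   # assumed center index of the co-quotient algebra

def xor(u, v):
    return tuple(x ^ y for x, y in zip(u, v))

pairs = []
for b_m, i in product(product((0, 1), repeat=p), product((0, 1), repeat=r)):
    # the unique merging partner of (b_m, i) in the co-quotient algebra
    b_n, j = xor(b_m, b1), xor(i, s)
    if (b_m, i) < (b_n, j):            # record each unordered pair once
        pairs.append(((b_m, i), (b_n, j)))

# every label joins exactly one merged pair
seen = [lab for pr in pairs for lab in pr]
assert len(seen) == len(set(seen)) == 2 ** (p + r)
```

Because $\beta_1\neq\mathbf{0}$, no label is its own partner, so the conditioned subspaces combine pairwise without remainder, mirroring the pairwise combination of commuting subspaces above.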
\subsection{Detaching a Co-Quotient Algebra\label{subsecdetach}}
The exposition of the detaching starts with,
respectively in Figs.~\ref{figdetQA}(a) and~\ref{figdetQA}(b),
the quotient algebra of rank
$r$ $\{{\cal Q}(\mathfrak{B}^{[r]};2^{p+r}-1)\}$ given by $\mathfrak{B}^{[r]}\subset\mathfrak{C}$
and the co-quotient algebra of rank $r$
$\{{\cal Q}({W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};s);2^{p+r}-1)\}$ given by
a regular conditioned subspace ${W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};s)$,
$\mathfrak{B}_1\nsupseteq\mathfrak{B}^{[r]}$ and $s\in{Z^r_2}$.
The maximal bi-subalgebras $\mathfrak{B}_m$ and $\mathfrak{B}_n\in\mathcal{G}(\mathfrak{C})$
in Fig.~\ref{figdetQA}
hold the closure identity
$\mathfrak{B}_1=\mathfrak{B}_m\sqcap\mathfrak{B}_n$, $1<m,n<2^p$,
and the identity $s=i+j$ relates the indices $i,j,s\in{Z^r_2}$.
Notice that the intersection
$\mathfrak{B}^{[r+1]}=\mathfrak{B}_1\cap\mathfrak{B}^{[r]}$ is an
$(r+1)$-th maximal bi-subalgebra of $\mathfrak{C}$ for $\mathfrak{B}_1\nsupseteq\mathfrak{B}^{[r]}$.
The detaching procedure is performed in the co-quotient algebra
$\{{\cal Q}({W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};s))\}$
by bisecting each conditioned subspace
in Fig.~\ref{figdetQA}(a) or~\ref{figdetQA}(b).
In order to validate the two refined versions of co-quotient algebras,
obtained via {\em parallel} and {\em crossing} detachings
as in Figs.~\ref{figdetQA}(c) and (d),
it must be affirmed that the subspaces in these two
structures respect the commutation relation of Eq.~\ref{eqgenWcommrankr}
and are each a conditioned subspace in the
quotient-algebra partition of rank $r+1$
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r+1]})\}$ given by $\mathfrak{B}^{[r+1]}$.
Take the subspace ${W}(\mathfrak{B}_m,\mathfrak{B}^{[r]};i)$
in Fig.~\ref{figdetQA}(b) as an example.
This subspace is divided into two halves
${W}(\mathfrak{B}_m,\mathfrak{B}^{[r+1]};0\circ i)$
and ${W}(\mathfrak{B}_m,\mathfrak{B}^{[r+1]};1\circ i)$
following the coset rule:
$\forall\hspace{2pt}{\cal S}^{\zeta}_{\alpha},{\cal S}^{\eta}_{\beta}
\in{W}(\mathfrak{B}_m,\mathfrak{B}^{[r+1]};0\circ i)$
($\in{W}(\mathfrak{B}_m,\mathfrak{B}^{[r+1]};1\circ i)$),
${\cal S}^{\zeta+\eta}_{\alpha+\beta}\in\mathfrak{B}_m\cap\mathfrak{B}^{[r+1]}$;
here the subspace index $0\circ s$
is the concatenation of the digit $0$ and the string $s\in{Z^r_2}$
and $\hat{W}(\mathfrak{B}_1,\mathfrak{B}^{[r+1]};0\circ s)=\{0\}$.
Neither of the subspaces is null if
$\mathfrak{B}_m\nsupseteq\mathfrak{B}^{[r+1]}$,
whereas one of them is null if
$\mathfrak{B}_m\supset\mathfrak{B}^{[r+1]}$.
Both subspaces evidently commute with
$\mathfrak{B}_m\subset\mathfrak{C}$ and each is a conditioned subspace of
the doublet $(\mathfrak{B}_m,\mathfrak{B}^{[r+1]})$ by Definition~2 in~\cite{SuTsai2}.
Hence each of the two refined subspaces is a conditioned subspace of
the quotient-algebra partition of rank $r+1$
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r+1]})\}$.
Likewise, each of the other subspaces in Figs.~\ref{figdetQA}(c) and (d)
is a conditioned subspace of $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r+1]})\}$.
Remark that the center subalgebra {\em splits} into a non-null
${W}(\mathfrak{B}_1,\mathfrak{B}^{[r+1]};0\circ s)$
and a null subspace
$\hat{W}(\mathfrak{B}_1,\mathfrak{B}^{[r+1]};0\circ s)=\{0\}$
due to the inclusion $\mathfrak{B}_1\supset\mathfrak{B}^{[r+1]}$.
Therefore, besides one of rank $r$,
a regular conditioned subspace
${W}(\mathfrak{B}_1,\mathfrak{B}^{[r]};s)$
can generate a co-quotient algebra of rank $r+1$ through the detaching procedure,
$1\leq r<p$.
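The bisection by the coset rule can be imitated on generator labels. In this sketch a generator ${\cal S}^{\zeta}_{\alpha}$ is the pair $(\zeta,\alpha)\in Z^p_2\times Z^p_2$, bi-additivity is componentwise XOR, and an assumed $Z_2$-linear functional `phi` plays the role of the membership test ${\cal S}^{\zeta+\eta}_{\alpha+\beta}\in\mathfrak{B}_m\cap\mathfrak{B}^{[r+1]}$; the particular set `W` and functional are illustrative only.

```python
# Hedged sketch of the detaching bisection.  A generator S^zeta_alpha is
# the pair (zeta, alpha) in Z_2^p x Z_2^p, bi-additivity is XOR, and an
# assumed Z_2-linear functional phi stands in for the membership test
# S^{zeta+eta}_{alpha+beta} in B_m ∩ B^[r+1]; the set W and phi below
# are illustrative only.
p, r = 2, 1
W = [((0, 1), (1, 0)), ((0, 1), (1, 1)),
     ((1, 1), (1, 0)), ((1, 1), (1, 1))]   # assumed generator labels

def phi(g):
    zeta, alpha = g
    return zeta[0] ^ alpha[1]               # an assumed linear functional

def add(g, h):                              # bi-additive sum of generators
    return (tuple(a ^ b for a, b in zip(g[0], h[0])),
            tuple(a ^ b for a, b in zip(g[1], h[1])))

i = (1,)                                    # original rank-r subspace index
halves = {0: [], 1: []}
for g in W:                                 # bisect W by the level sets of phi
    halves[phi(g)].append(g)

# within each half, pairwise sums lie in ker(phi), as the coset rule demands
for k in (0, 1):
    for g in halves[k]:
        for h in halves[k]:
            assert phi(add(g, h)) == 0

new_indices = {0: (0,) + i, 1: (1,) + i}    # concatenated indices 0∘i and 1∘i
```

Since `phi` is linear over $Z_2$, two labels land in the same half exactly when their sum lies in its kernel, which is the group-level content of the coset rule used in the detaching.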
\section{More on Cartan Decompositions of Type AIII\label{secgenAIII}}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0} \noindent
Supplementary to the earlier discussions of Cartan decompositions of the three types~\cite{SuTsai2},
this section focuses on type {\bf AIII} of $su(m+n)$ with $m\neq n$.
The {\em intrinsic} Cartan decomposition
$su(m+n)=\hat{\mathfrak{t}}_{\hspace{1pt}\rm III}\oplus\hat{\mathfrak{p}}_{\hspace{1pt}\rm III}$
of type {\bf AIII} for $m\geq n$, as designated in~\cite{Helgason,SuTsai1},
is composed of the subalgebra
$\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}=c\oplus su(m)\oplus su(n)
=\{\left(\begin{array}{cc}
A&0\\
0&B
\end{array}\right):A\in{u(m)},B\in{u(n)}\text{ and }{\text{Tr}(A+B)=0}\}$ and
the subspace
$\hat{\mathfrak{p}}_{\hspace{1pt}{\rm III}}=
\{\left(\begin{array}{cc}
0&{\cal Z}\\
-{\cal Z}^{\dagger}&0
\end{array}\right):{\cal Z} \in M_{m\times n}(\mathbb{C})\}$;
note $[c,\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}]=0$.
The subalgebra $\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}$ is spanned by
$m^2+n^2-1$ generators and
the set $\hat{\mathfrak{a}}_{\hspace{1pt}{\rm III}}=\{i\ket{k}\bra{k+m}-i\ket{k+m}\bra{k}:1\leq k\leq n\}$
is one choice of maximal abelian subalgebra in $\hat{\mathfrak{p}}_{\hspace{1pt}{\rm III}}$.
As always, only the decompositions of $su(2^p)$ are considered,
since those of $su(N)$ for $2^{p-1}<N<2^p$
are acquirable by applying the removing process~\cite{Su,SuTsai1,SuTsai2}.
The decompositions of $su(m+n)$ with $m=n$
have been addressed in Theorem~3 in~\cite{SuTsai2}.
In examining other instances of $m>n$, the $\lambda$-generators
are a better choice as the generating set of the algebra.
The set consists of the off-diagonal matrices $\lambda_{kl}=\ket{k}\bra{l}+\ket{l}\bra{k}$ and
$\hat{\lambda}_{kl}=-i\ket{k}\bra{l}+i\ket{l}\bra{k}$ and
the diagonal ones $d_{kl}=\ket{k}\bra{k}-\ket{l}\bra{l}$, $1\leq k,l\leq N$,
which satisfy the commutator relations listed in Eqs.~A.1 to A.6 in Appendix~A of~\cite{Su}.
To generate decompositions of type {\bf AIII} in $su(m+n=2^p)$ for $m>n$,
it is essential to perform an appropriate
{\em division} over the conditioned subspaces of a quotient-algebra
partition of {\em rank zero}.
The exposition is confined to
the {\em intrinsic} quotient-algebra partition of this rank
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}$
given by the {\em intrinsic} Cartan subalgebra
$\mathfrak{C}_{[\mathbf{0}]}=\{{\cal S}^{\nu_0}_{\hspace{.5pt}\mathbf{0}}:\forall\hspace{2pt}\nu_0\in{Z^p_2}\}
\subset{su(2^p)}$.
This suffices because any other decomposition of the same type can be mapped to a decomposition determined
in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}$
via a conjugate transformation~\cite{Su,SuTsai1,SuTsai2}.
The partition $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}$
comprises the following conditioned subspaces, $\gamma\in{Z^p_2}-\{\mathbf{0}\}$,
\begin{align}\label{eqintrW}
&{W}^0(\mathfrak{B}_{\mathbf{0}})=\{0\},\notag\\
&{W}^1(\mathfrak{B}_{\mathbf{0}})=\mathfrak{C}_{[\mathbf{0}]}
=\{{\cal S}^{\nu}_{\mathbf{0}}:\forall\hspace{2pt}\nu\in{Z^p_2}\}
=\{d_{ij}:\forall\hspace{2pt}0<i,j\leq 2^p\},\notag\\
&{W}^0(\mathfrak{B}_{\gamma})
=\{{\cal S}^{\hat{\xi}}_{\gamma}:\forall\hspace{2pt}\hat{\xi}\in{Z^p_2},\hat{\xi}\cdot\gamma=1\}\notag\\
&=\{\hat{\lambda}_{ij}:1\leq i,j\leq 2^p,
\hspace{2pt}i-1=\omega,j-1=\tau,\omega+\tau=\gamma\text{ for }\omega,\tau\in{Z^p_2}\},\text{ and}\notag\\
&{W}^1(\mathfrak{B}_{\gamma})
=\{{\cal S}^{\xi}_{\gamma}:\forall\hspace{2pt}\xi\in{Z^p_2},\xi\cdot\gamma=0\}\notag\\
&=\{\lambda_{ij}:1\leq i,j\leq 2^p,
\hspace{2pt}i-1=\omega,j-1=\tau,\omega+\tau=\gamma\text{ for }\omega,\tau\in{Z^p_2}\},
\end{align}
where $\mathfrak{B}_{\gamma}=\{{\cal S}^{\mu}_{\mathbf{0}}:\forall\hspace{2pt}\mu\in{Z^p_2}\text{ and }\mu\cdot\gamma=0\}$
is a maximal bi-subalgebra of
$\mathfrak{C}_{[\mathbf{0}]}=\mathfrak{B}_{\mathbf{0}}$.
Thanks to the commutation relation of Eq.~\ref{eqgenWcommrankr} aforesaid
and by Corollary~2 in~\cite{SuTsai2},
these conditioned subspaces form an {\em abelian group}
obeying the closure under the {\em tri-addition}
$\circledcirc$: for all
$\alpha,\beta\in{Z^p_2}$ and $\epsilon,\sigma\in{Z_2}$,
\begin{align}\label{eqtriaddinEp4}
{W}^{\epsilon}(\mathfrak{B}_{\alpha})\circledcirc{W}^{\sigma}(\mathfrak{B}_{\beta})
={W}^{\epsilon+\sigma}(\mathfrak{B}_{\alpha+\beta}).
\end{align}
The division cuts each conditioned subspace
${W}^{\epsilon}(\mathfrak{B}_{\alpha})$ into two subspaces
${W}^{\epsilon}(\mathfrak{B}_{\alpha};\kappa)$ additionally tagged with
an index $\kappa\in{Z_2}$.
Let $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}_{div}$
denote the partition constituted by these
divided conditioned subspaces.
The key of the subspace division is to preserve the same group structure in this
version of partition.
In
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}_{div}$,
the subspace
${W}^0(\mathfrak{B}_{\mathbf{0}})={W}^0(\mathfrak{B}_{\mathbf{0}};0)\cup{W}^0(\mathfrak{B}_{\mathbf{0}};1)$
is divided into two null subspaces and
the subspace ${W}^1(\mathfrak{B}_{\mathbf{0}})$
into the null ${W}^1(\mathfrak{B}_{\mathbf{0}};1)=\{0\}$
and ${W}^1(\mathfrak{B}_{\mathbf{0}};0)=\mathfrak{C}_{[\mathbf{0}]}$.
As to regular conditioned subspaces, each ${W}^\epsilon(\mathfrak{B}_{\gamma\neq\mathbf{0}})$
splits into two non-null subspaces ${W}^\epsilon(\mathfrak{B}_{\gamma\neq\mathbf{0}};0)$
and ${W}^\epsilon(\mathfrak{B}_{\gamma\neq\mathbf{0}};1)$
by a cut over the generating set depending on the integer $m$ of $su(m+n)$.
Specifically, the divided subspace ${W}^0(\mathfrak{B}_{\gamma\neq\mathbf{0}};0)$
(or ${W}^1(\mathfrak{B}_{\gamma\neq\mathbf{0}};0)$)
contains the generators $\hat{\lambda}_{kl}$ (or $\lambda_{kl}$)
with both subscripts on the same side of $m$,
{\em i.e.}, $0<k<l\leq m$ or $m<k<l\leq 2^p$,
while ${W}^0(\mathfrak{B}_{\gamma\neq\mathbf{0}};1)$
(or ${W}^1(\mathfrak{B}_{\gamma\neq\mathbf{0}};1)$)
carries $\hat{\lambda}_{k'l'}$ (or $\lambda_{k'l'}$)
with $0<k'\leq m<l'\leq 2^p$.
Thus, the general forms of the divided subspaces can be written as follows,
$\gamma\in{Z^p_2}-\{\mathbf{0}\}$,
\begin{align}\label{eqQAPrefined}
&{W}^0(\mathfrak{B}_{\mathbf{0}};0)
={W}^0(\mathfrak{B}_{\mathbf{0}};1)=\{0\},\notag\\
&{W}^1(\mathfrak{B}_{\mathbf{0}};0)=\mathfrak{C}_{[\mathbf{0}]},\hspace{2pt}
{W}^1(\mathfrak{B}_{\mathbf{0}};1)=\{0\},\notag\\
&{W}^0(\mathfrak{B}_{\gamma};0)
=\{\hat{\lambda}_{kl}:\forall\hspace{2pt}0<k<l\leq m\text{ or }m<k<l\leq 2^p,\omega=k-1,\tau=l-1\text{ and }\omega+\tau=\gamma,\omega,\tau\in{Z^p_2}\},\notag\\
&{W}^0(\mathfrak{B}_{\gamma};1)
=\{\hat{\lambda}_{k'l'}:\forall\hspace{2pt}0<k'\leq m<l'\leq 2^p,\omega'=k'-1,\tau'=l'-1\text{ and }\omega'+\tau'=\gamma,\omega',\tau'\in{Z^p_2}\},\notag\\
&{W}^1(\mathfrak{B}_{\gamma};0)
=\{\lambda_{kl}:\forall\hspace{2pt}0<k<l\leq m\text{ or }m<k<l\leq 2^p,\omega=k-1,\tau=l-1\text{ and }\omega+\tau=\gamma,\omega,\tau\in{Z^p_2}\},\text{ and}\notag\\
&{W}^1(\mathfrak{B}_{\gamma};1)
=\{\lambda_{k'l'}:\forall\hspace{2pt}0<k'\leq m<l'\leq 2^p,\omega'=k'-1,\tau'=l'-1\text{ and }\omega'+\tau'=\gamma,\omega',\tau'\in{Z^p_2}\}.
\end{align}
It is easy to validate the tri-addition closure for these subspaces in terms of the identity
\begin{align}\label{eqtriaddrefsub}
{W}^{\epsilon}(\mathfrak{B}_{\alpha};\kappa)\circledcirc{W}^{\sigma}(\mathfrak{B}_{\beta};\kappa')
={W}^{\epsilon+\sigma}(\mathfrak{B}_{\alpha+\beta};\kappa+\kappa'),
\end{align}
for $\epsilon,\sigma,\kappa,\kappa'\in{Z_2}$ and
$\alpha,\beta\in{Z^p_2}$,
which establishes the demanded structure of an abelian group
in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}_{div}$.
Based on this group structure,
generating Cartan decompositions
becomes routine by taking advantage of the assertion
in Lemma~20 of~\cite{SuTsai2}.
That is, the subalgebra $\mathfrak{t}$ of a decomposition $\mathfrak{t}\oplus\mathfrak{p}$
is a {\em proper maximal subgroup} of $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}_{div}$
under the tri-addition.
Since the Cartan subalgebra $\mathfrak{C}_{[\mathbf{0}]}={W}^1(\mathfrak{B}_{\mathbf{0}};0)$
lies either in $\mathfrak{t}$ or in $\mathfrak{p}$
of a decomposition $\mathfrak{t}\oplus\mathfrak{p}$,
two types of Cartan decompositions arise in the partition
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}_{div}$, {\em cf.} Corollary~5 in~\cite{SuTsai2}:
a type {\bf AI} when $\mathfrak{C}_{[\mathbf{0}]}\subset\mathfrak{p}$
or a type {\bf AIII} when
$\mathfrak{C}_{[\mathbf{0}]}\subset\mathfrak{t}$.
In the former case, the subalgebra $\mathfrak{t}_{\rm I}$ of
a type-{\bf AI} decomposition $\mathfrak{t}_{\rm I}\oplus\mathfrak{p}_{\rm I}$
coincides with the algebra $so(2^p)$ up to a conjugate transformation.
Being a proper maximal subgroup of $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}_{div}$,
the set $\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}$ comprising
all divided subspaces with the subspace index $\kappa=0$,
{\em i.e.},
\begin{align}\label{eqA3tp}
\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}=\{{W}^{\epsilon}(\mathfrak{B}_{\alpha};0):
\forall\hspace{2pt}\epsilon\in{Z_2}\text{ and }\alpha\in{Z^p_2}\},
\end{align}
is the subalgebra $\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}=c\oplus{su(m)}\oplus{su(n)}$
of the {\em intrinsic} type-{\bf AIII} decomposition
$\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}\oplus\hat{\mathfrak{p}}_{\hspace{1pt}{\rm III}}$.
The subspace $\hat{\mathfrak{a}}_{\hspace{1pt}{\rm III}}={W}^0(\mathfrak{B}_{10\cdots 0};1)$
is one choice of maximal abelian subalgebra of the complement
$\hat{\mathfrak{p}}_{\hspace{1pt}{\rm III}}
=\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}_{div}-\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}$.
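The dimension count behind the division can be checked by brute force. The sketch below assumes the same-block reading of the cut, namely that $\kappa=0$ collects the off-diagonal generators whose two indices lie on the same side of $m$ and $\kappa=1$ those straddling $m$; this is the reading consistent with the stated dimension $m^2+n^2-1$ of $\hat{\mathfrak{t}}_{\hspace{1pt}{\rm III}}$, and the split $(m,n)$ used is an assumed example.

```python
# Brute-force dimension check of the division, assuming the same-block
# rule: kappa = 0 collects off-diagonal generators lambda_{kl},
# hat-lambda_{kl} whose indices lie on the same side of m, and kappa = 1
# those straddling m.  The split (m, n) below is an assumed example.
p = 3
N = 2 ** p
m, n = 5, 3            # assumed split with m > n and m + n = N

kappa0 = kappa1 = 0
for k in range(1, N + 1):
    for l in range(k + 1, N + 1):
        if l <= m or k > m:   # both indices in the same diagonal block
            kappa0 += 2       # counts lambda_{kl} and hat-lambda_{kl}
        else:                 # k <= m < l: the off-block case
            kappa1 += 2

dim_cartan = N - 1            # diagonal generators d_{kl} span C_[0]
assert kappa0 + dim_cartan == m * m + n * n - 1   # dim of t_III
assert kappa1 == 2 * m * n                        # dim of p_III
```

Together with the Cartan subalgebra, the $\kappa=0$ generators exhaust $\dim(c\oplus su(m)\oplus su(n))$, while the $\kappa=1$ generators span the off-block subspace of dimension $2mn$.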
As a final remark, besides the intrinsic decomposition, non-intrinsic decompositions of type {\bf AIII} of $su(m'+n')$
with integer pairs $(m',n')=(m-2l,n+2l)$, $0\leq l\leq m-2^{p-1}$,
are also attainable in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{C}_{[\mathbf{0}]})\}_{div}$.
Refer to Figs.~\ref{figsu8tp5+3}, \ref{figsu8tp6+2} and \ref{figsu8tp7+1} for
example demonstrations of $su(8)$.
\section{Recursive $\mathfrak{t}$-$\mathfrak{p}$ Decomposition\label{secRecurtpD}}
As stated in~\cite{SuTsai2},
a quotient algebra partition
$\{ \mathcal{P}_{{\cal Q}}(\mathfrak{B}^{[r]}) \}$
of rank $r$ admits a $\mathfrak{t}$-$\mathfrak{p}$ decomposition
of $1$st level obeying the criterion that, under the tri-addition,
the subalgebra $\mathfrak{t}$ is a proper maximal subgroup of
$\{ \mathcal{P}_{{\cal Q}}(\mathfrak{B}^{[r]}) \}$
and the subspace $\mathfrak{p}$
the coset of $\mathfrak{t}$.
This criterion is also satisfied by decompositions of higher levels.
\vspace{6pt}
\begin{defn}\label{defnl-thleveltpD}
In a quotient algebra partition $\{ \mathcal{P}_{{\cal Q}}(\mathfrak{B}^{[r]}) \}$,
a $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the $l$-th-level
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$
is conducted on the subalgebra $\mathfrak{t}_{[l-1]}=\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$
of a $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the $(l-1)$-th level
$\mathfrak{t}_{[l-1]}\oplus\mathfrak{p}_{[l-1]}$, where
the subalgebra $\mathfrak{t}_{[l]}$ is a proper maximal subgroup of $\mathfrak{t}_{[l-1]}$
and the subspace
$\mathfrak{p}_{[l]}=\mathfrak{t}_{[l-1]}-\mathfrak{t}_{[l]}$
the coset of $\mathfrak{t}_{[l]}$
in $\mathfrak{t}_{[l-1]}$,
the partition $\{ \mathcal{P}_{{\cal Q}}(\mathfrak{B}^{[r]}) \}=\mathfrak{t}_{[0]}\oplus\mathfrak{p}_{[0]}$
being the decomposition of $0$th level with
$\mathfrak{p}_{[0]}=\{0\}$.
\end{defn}
\vspace{6pt}
The following covering relation is essential to
the type decision of a decomposition.
\vspace{6pt}
\begin{lemma}\label{lemLvlthin1st}
For a $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the $l$-th
level $\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ in $su(N)$,
$2^{p-1}<N\leq 2^p$ and $1< l\leq p+r+1$,
there exists a $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the
$1$st level $\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$
endowed with the coverings
$\mathfrak{t}_{[1]}\supset\mathfrak{t}_{[l]}$
and $\mathfrak{p}_{[1]}\supset\mathfrak{p}_{[l]}$.
\end{lemma}
\vspace{2pt}
\begin{proof}
Assume the subalgebra $\mathfrak{t}_{[l]}$
is an $l$-th maximal subgroup of the quotient-algebra partition
of rank $r$
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}
=\{{W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i):\forall\hspace{2pt}
\mathfrak{B}\in\mathcal{G}(\mathfrak{C}),\epsilon\in{Z_2},i\in{Z^r_2}\}$
generated by an $r$-th maximal bi-subalgebra $\mathfrak{B}^{[r]}$
of a Cartan subalgebra $\mathfrak{C}\subset{su(N)}$ under the tri-addition,
referring to Definition~\ref{defnl-thleveltpD}.
The construction of a required $1$st-level $\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$
decomposition relies upon
a chosen set
${\cal M}_{l-1}=\{{W}^{\epsilon_s}(\mathfrak{B}_s,\mathfrak{B}^{[r]};i_s):1\leq s\leq l-1\}$
comprising a number $l-1$ of conditioned subspaces in
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}-\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$.
This set is independent in the sense that
no subspace in ${\cal M}_{l-1}$ is the tri-additive of any two other members in the set.
In addition, the subspaces in ${\cal M}_{l-1}$ meet the disjoint criterion that the two tri-additives
${W}^{\epsilon_s+\epsilon_{s'}}(\mathfrak{B}_s\sqcap\mathfrak{B}_{s'},\mathfrak{B}^{[r]};i_s+i_{s'})$
and
${W}^{\epsilon_t+\epsilon_{t'}}(\mathfrak{B}_t\sqcap\mathfrak{B}_{t'},\mathfrak{B}^{[r]};i_t+i_{t'})$
belong to two distinct cosets of $\mathfrak{t}_{[l]}$
for four arbitrary members
${W}^{\epsilon_s}(\mathfrak{B}_s,\mathfrak{B}^{[r]};i_s)$,
${W}^{\epsilon_{s'}}(\mathfrak{B}_{s'},\mathfrak{B}^{[r]};i_{s'})$,
${W}^{\epsilon_t}(\mathfrak{B}_t,\mathfrak{B}^{[r]};i_t)$ and
${W}^{\epsilon_{t'}}(\mathfrak{B}_{t'},\mathfrak{B}^{[r]};i_{t'})\in{\cal M}_{l-1}$.
Let the subalgebra $\mathfrak{t}_{[1]}=Span\{\mathfrak{t}_{[l]},{\cal M}_{l-1}\}$
be spanned by $\mathfrak{t}_{[l]}$ and ${\cal M}_{l-1}$ and
the subspace $\mathfrak{p}_{[1]}=Span\{\mathfrak{p}_{[l]},{\cal M}_{l-1}\}$
by $\mathfrak{p}_{[l]}$ and ${\cal M}_{l-1}$.
Thereupon the composition $\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$
is a $1$st-level $\mathfrak{t}$-$\mathfrak{p}$ decomposition as demanded
possessing the inclusions $\mathfrak{t}_{[1]}\supset\mathfrak{t}_{[l]}$
and $\mathfrak{p}_{[1]}\supset\mathfrak{p}_{[l]}$.
The explicit construction is recursive and delivered level by level as follows.
Initiated with a set
${\cal M}_1=\{{W}^{\epsilon_1}(\mathfrak{B}_1,\mathfrak{B}^{[r]};i_1)\}$
of one arbitrary conditioned subspace
in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}-\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$,
the composition $\mathfrak{t}_{[l-1]}\oplus\mathfrak{p}_{[l-1]}$
of the subalgebra
$\mathfrak{t}_{[l-1]}=Span\{\mathfrak{t}_{[l]},{\cal M}_1\}
=\{{W}^{\epsilon+\epsilon_1}(\mathfrak{B}\sqcap\mathfrak{B}_1,\mathfrak{B}^{[r]};i+i_1):
\forall\hspace{2pt}{W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)\subset\mathfrak{t}_{[l]}\}$
and the coset
$\mathfrak{p}_{[l-1]}=Span\{\mathfrak{p}_{[l]},{\cal M}_1\}
=\{{W}^{\epsilon'+\epsilon_1}(\mathfrak{B}'\sqcap\mathfrak{B}_1,\mathfrak{B}^{[r]};i'+i_1):
\forall\hspace{2pt}{W}^{\epsilon'}(\mathfrak{B}',\mathfrak{B}^{[r]};i')\subset\mathfrak{p}_{[l]}\}$
of $\mathfrak{t}_{[l-1]}$
under the tri-addition is an $(l-1)$-th level $\mathfrak{t}$-$\mathfrak{p}$
decomposition satisfying $\mathfrak{t}_{[l-1]}\supset\mathfrak{t}_{[l]}$
and $\mathfrak{p}_{[l-1]}\supset\mathfrak{p}_{[l]}$.
With another choice of conditioned subspace ${W}^{\epsilon_2}(\mathfrak{B}_2,\mathfrak{B}^{[r]};i_2)$
in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}-\mathfrak{t}_{[l-1]}\oplus\mathfrak{p}_{[l-1]}$
to form the set
${\cal M}_2=\{{W}^{\epsilon_s}(\mathfrak{B}_s,\mathfrak{B}^{[r]};i_s):s=1,2\}$,
the composition $\mathfrak{t}_{[l-2]}\oplus\mathfrak{p}_{[l-2]}$ of
the subalgebra
$\mathfrak{t}_{[l-2]}=Span\{\mathfrak{t}_{[l]},{\cal M}_2\}
=\{{W}^{\epsilon+\epsilon_2}(\mathfrak{B}\sqcap\mathfrak{B}_2,\mathfrak{B}^{[r]};i+i_2):
\forall\hspace{2pt}{W}^{\epsilon}(\mathfrak{B},\mathfrak{B}^{[r]};i)\subset\mathfrak{t}_{[l-1]}\}$
and the coset
$\mathfrak{p}_{[l-2]}=Span\{\mathfrak{p}_{[l]},{\cal M}_2\}
=\{{W}^{\epsilon'+\epsilon_2}(\mathfrak{B}'\sqcap\mathfrak{B}_2,\mathfrak{B}^{[r]};i'+i_2):
\forall\hspace{2pt}{W}^{\epsilon'}(\mathfrak{B}',\mathfrak{B}^{[r]};i')\subset\mathfrak{p}_{[l-1]}\}$
of $\mathfrak{t}_{[l-2]}$
is an $(l-2)$-th-level decomposition realizing the inclusions
$\mathfrak{t}_{[l-2]}\supset\mathfrak{t}_{[l]}$
and $\mathfrak{p}_{[l-2]}\supset\mathfrak{p}_{[l]}$.
The procedure of constructing such $\mathfrak{t}$-$\mathfrak{p}$ decompositions
completes when a set ${\cal M}_{l-1}$ is yielded
and a required $1$st-level decomposition is obtained.
Notice that it is plain to verify the compliance of a $\mathfrak{t}$-$\mathfrak{p}$ decomposition
$\mathfrak{t}_{[k]}\oplus\mathfrak{p}_{[k]}$ so achieved at every level, $1\leq k<l$,
with the decomposition condition
$[\mathfrak{t}_{[k]},\mathfrak{t}_{[k]}]\subset\mathfrak{t}_{[k]}$,
$[\mathfrak{t}_{[k]},\mathfrak{p}_{[k]}]\subset\mathfrak{p}_{[k]}$,
$[\mathfrak{p}_{[k]},\mathfrak{p}_{[k]}]\subset\mathfrak{t}_{[k]}$
and
${\rm Tr}\{\mathfrak{t}_{[k]}\hspace{.5pt}\mathfrak{p}_{[k]}\}=0$.
The proof ends.
\end{proof}
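The recursive adjunction in the proof admits a small model. Here the partition is again the label group $Z^n_2$; starting from an $l$-th-level pair $(\mathfrak{t},\mathfrak{p})$, each step adjoins one label $w$ outside $\mathfrak{t}\cup\mathfrak{p}$ and doubles both parts, $\mathfrak{t}\mapsto\mathfrak{t}\cup(w+\mathfrak{t})$ and $\mathfrak{p}\mapsto\mathfrak{p}\cup(w+\mathfrak{p})$; after $l-1$ steps $\mathfrak{t}$ becomes a proper maximal subgroup. The concrete sets below are illustrative assumptions.

```python
# Hedged model of the recursive construction in the proof: inside the
# label group Z_2^n, start from an l-th-level pair (t, p) and, at each
# step, adjoin one label w outside t ∪ p, doubling both parts.  After
# l - 1 steps t is a proper maximal subgroup, i.e., a 1st-level pair.
from itertools import product

n, l = 4, 3
group = set(product((0, 1), repeat=n))

def xor(u, v):
    return tuple(a ^ b for a, b in zip(u, v))

# assumed l-th-level pair: t of codimension l and its coset p = v0 + t
t = {v for v in group if all(v[k] == 0 for k in range(l))}
v0 = (1,) + (0,) * (n - 1)
p = {xor(v0, u) for u in t}

for _ in range(l - 1):
    w = next(g for g in group if g not in t and g not in p)  # a chosen M_s
    t = t | {xor(w, u) for u in t}   # t grows to Span{t, w}
    p = p | {xor(w, u) for u in p}   # p grows to Span{p, w}

assert len(t) == len(group) // 2     # t_[1] is a proper maximal subgroup
assert p == {xor(v0, u) for u in t}  # p_[1] is the coset of t_[1]
```

Each adjunction preserves the invariant $\mathfrak{p}=v_0+\mathfrak{t}$, so the final pair is a 1st-level decomposition containing the initial one, as the lemma asserts.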
\vspace{6pt}
\vspace{6pt}
\begin{lemma}\label{LvlsupLvlMaxAbelSubAlg}
For a 1st-level $\mathfrak{t}$-$\mathfrak{p}$ decomposition
$\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$ and an $l$-th-level decomposition
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ having the inclusions
$\mathfrak{t}_{[1]}\supset\mathfrak{t}_{[l]}$ and $\mathfrak{p}_{[1]}\supset\mathfrak{p}_{[l]}$,
the four occasions hold for the two maximal abelian subalgebras
${\cal A}_{[1]}$ of $\mathfrak{p}_{[1]}$ and ${\cal A}_{[l]}$ of $\mathfrak{p}_{[l]}$
that
${\cal A}_{[1]}$ is a Cartan subalgebra $\mathfrak{C}$ if so is ${\cal A}_{[l]}$,
${\cal A}_{[1]}$ is $\mathfrak{C}$ or a $1$st maximal
bi-subalgebra $\mathfrak{B}^{[1]}$ of $\mathfrak{C}$ if ${\cal A}_{[l]}$ is
an $r$-th maximal bi-subalgebra of $\mathfrak{B}^{[r]}\subset\mathfrak{B}^{[1]}\subset\mathfrak{C}$ with $r\geq 1$,
${\cal A}_{[1]}$ is $\mathfrak{C}$ or the coset $\mathfrak{B}^{[1,1]}$ of $\mathfrak{B}^{[1]}$ in $\mathfrak{C}$
if ${\cal A}_{[l]}$ is $\mathfrak{B}^{[1,1]}$,
and finally ${\cal A}_{[1]}$ is $\mathfrak{C}$, $\mathfrak{B}^{[1]}$ or $\mathfrak{B}^{[1,1]}$ if
${\cal A}_{[l]}$ is a coset $\mathfrak{B}^{[r',i]}$ of an
$r'$-th maximal bi-subalgebra $\mathfrak{B}^{[r']}\subset\mathfrak{B}^{[1]}\subset\mathfrak{C}$
with $r'\geq 2$ and $i\neq \mathbf{0}$.
\end{lemma}
\vspace{2pt}
\begin{proof}
Suppose that
$\mathfrak{t}_{[l]}$ is a subgroup and $\mathfrak{p}_{[l]}$ a
coset of $\mathfrak{t}_{[l]}$ of the rank-$s$ quotient-algebra partition
$\{{\cal P}_{\mathcal{Q}}(\mathfrak{B}^{[s]})\}$ generated by
an $s$-th maximal bi-subalgebra $\mathfrak{B}^{[s]}$ of a Cartan subalgebra $\mathfrak{C}'$
under the tri-addition, $0\leq s\leq p$,
{\em cf.} Definition~\ref{defnl-thleveltpD}.
Based on the proof of Lemma~\ref{lemLvlthin1st},
a set of $l-1$ independent conditioned subspaces ${\cal M}=\{W^{\epsilon_t}(\mathfrak{B}_t,\mathfrak{B}^{[s]};j_t):1\leq t\leq l-1\}$
can be chosen from $\{{\cal P}_{\mathcal{Q}}(\mathfrak{B}^{[s]})\}-\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$
to form the $1$st-level decomposition
$\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$ with
$\mathfrak{t}_{[1]}=Span\{\mathfrak{t}_{[l]},{\cal M}\}$ and
$\mathfrak{p}_{[1]}=Span\{\mathfrak{p}_{[l]},{\cal M}\}$.
The proof is divided into two cases: either ${\cal A}_{[l]}$
is a bi-subalgebra, namely $\mathfrak{C}$ or $\mathfrak{B}^{[r]}$
with $r\geq 1$, or ${\cal A}_{[l]}$
is a coset, namely $\mathfrak{B}^{[1,1]}$ or $\mathfrak{B}^{[r',i]}$
with $r'\geq 2$ and $i\neq \mathbf{0}$.
In the first case, if ${\cal A}_{[l]}$ is the Cartan subalgebra $\mathfrak{C}$,
the $l-1$ subspaces of ${\cal M}$ are chosen arbitrarily from
$\{{\cal P}_{\mathcal{Q}}(\mathfrak{B}^{[s]})\}-\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$
to form $\mathfrak{p}_{[1]}$.
Here, the maximal abelian subalgebra ${\cal A}_{[1]}$ must be a
Cartan subalgebra $\mathfrak{C}'$ identical to $\mathfrak{C}$
or to another Cartan subalgebra
by composing $\mathfrak{B}^{[s]}$ and other
conditioned subspaces of $\mathfrak{p}_{[1]}$ as in the proof of Lemmas~28 and~29
in~\cite{SuTsai2}.
If ${\cal A}_{[l]}$ is the $r$-th maximal bi-subalgebra $\mathfrak{B}^{[r]}$,
two cases for the independent set ${\cal M}$ should be considered.
One case is to choose ${\cal M}$ consisting of the $p-s$ subspaces ${W}^{\epsilon_m}(\mathfrak{B}_m,\mathfrak{B}^{[s]};j_m)$
obeying the condition
$\mathfrak{B}_m\cap\mathfrak{B}^{[r]}=\mathfrak{B}^{[r]}$
for $1\leq m\leq p-s$ and the remaining subspaces following
$\mathfrak{B}_n\cap\mathfrak{B}^{[r]}\neq\mathfrak{B}^{[r]}$ for $p-s<n\leq l-1$.
The other case is to let ${\cal M}$ contain $p-s-1$ subspaces
satisfying $\mathfrak{B}_t\cap\mathfrak{B}^{[r]}=\mathfrak{B}^{[r]}$
for $1\leq t\leq p-s-1$,
the remaining subspaces obeying $\mathfrak{B}_t\cap\mathfrak{B}^{[r]}\neq\mathfrak{B}^{[r]}$.
One thus derives that the maximal abelian subalgebra ${\cal A}_{[1]}\subset\mathfrak{p}_{[1]}$ is a
Cartan subalgebra $\mathfrak{C}'$ in the first case and is a 1st
maximal bi-subalgebra $\mathfrak{B}'^{[1]}$ of $\mathfrak{C}'$
in the latter case.
Likewise, $\mathfrak{C}'$ (or $\mathfrak{B}'^{[1]}$)
is equal to $\mathfrak{C}$ (or $\mathfrak{B}^{[1]}$), or to
another Cartan subalgebra (or another 1st maximal bi-subalgebra)
by composing $\mathfrak{B}^{[s]}$ with other
conditioned subspaces in $\mathfrak{p}_{[1]}$, referring to the proofs of Lemmas~28 and~29 in~\cite{SuTsai2}.
In the second case, if ${\cal A}_{[l]}$ is the coset $\mathfrak{B}^{[1,1]}$
of $\mathfrak{B}^{[1]}$, the maximal abelian subalgebra ${\cal A}_{[1]}$
is also the coset $\mathfrak{B}^{[1,1]}$ by an argument similar to that for
${\cal A}_{[l]}=\mathfrak{B}^{[r]}$ in the first case,
except that the bi-subalgebra
$B^{[s]}=W^1(\mathfrak{B}^{[1]},\mathfrak{B}^{[s]};\mathbf{0})$
is required to belong to ${\cal M}$ and hence $B^{[s]}\subset\mathfrak{t}_{[1]}$.
On the other hand, ${\cal A}_{[1]}$
is the Cartan subalgebra $\mathfrak{C}'$ when
$B^{[s]}$ is not in ${\cal M}$ but lies in $\mathfrak{p}_{[1]}$ to form $\mathfrak{C}'$,
again by an argument similar to the first case.
Finally, if ${\cal A}_{[l]}$ is a coset $\mathfrak{B}^{[r',i]}$
of $\mathfrak{B}^{[r']}$ with $r'\geq 2$ and $i\neq\mathbf{0}$,
the maximal abelian subalgebra ${\cal A}_{[1]}$ is the coset $\mathfrak{B}^{[1,1]}$
when the bi-subalgebra
$B^{[s]}=W^{\epsilon_1}(\mathfrak{B}_1,\mathfrak{B}^{[s]};j_1)$
belongs to ${\cal M}$ and thus $B^{[s]}\subset\mathfrak{t}_{[1]}$,
again by an argument similar to the first case.
Otherwise, ${\cal A}_{[1]}$
is a Cartan subalgebra $\mathfrak{C}'$ (or a 1st maximal bi-subalgebra $\mathfrak{B}'^{[1]}$) when
$B^{[s]}$ is not in ${\cal M}$ but lies in $\mathfrak{p}_{[1]}$ to form $\mathfrak{C}'$ (or $\mathfrak{B}'^{[1]}$),
by the same reasoning.
\end{proof}
\vspace{6pt}
This consequence will be applied to decide the type of an
$l$-th-level decomposition.
To factorize a unitary action $U\in{SU(N)}$,
$2^{p-1}<N\leq 2^p$,
it is necessary to prepare level by level a series of Cartan decompositions
and to decide the {\em type} of the decomposition at each level.
The type of a $1$st-level $\mathfrak{t}$-$\mathfrak{p}$ decomposition
$su(N)=\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$,
as stated in Theorem~4 of~\cite{SuTsai2},
is decided by the maximal abelian subalgebra $\mathfrak{a}$ of $\mathfrak{p}_{[1]}$
according to a concise law:
the decomposition $su(N)=\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$
is a Cartan decomposition of type {\bf AI} if
$\mathfrak{a}$ is a Cartan subalgebra $\mathfrak{C}\subset{su(N)}$,
of type {\bf AII} if
$\mathfrak{a}$ is a proper maximal bi-subalgebra $\mathfrak{B}$ of $\mathfrak{C}$,
or of type {\bf AIII} if
$\mathfrak{a}$ is a complement
$\mathfrak{B}^c=\mathfrak{C}-\mathfrak{B}$.
An extended version of this law applicable to succeeding levels is the focus of the
discussion that follows.
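As a quick dimension check of this law (our own illustration, relying only on the standard dimensions of the symmetric spaces involved), take $su(4)$ with $p=2$, where a Cartan subalgebra $\mathfrak{C}$ has dimension $2^p-1=3$:
\begin{align*}
&\text{\bf AI:}\quad \mathfrak{t}_{[1]}=so(4),\ \dim\mathfrak{t}_{[1]}=6,\quad \mathfrak{a}=\mathfrak{C},\ \dim\mathfrak{a}=3;\\
&\text{\bf AII:}\quad \mathfrak{t}_{[1]}=sp(2),\ \dim\mathfrak{t}_{[1]}=10,\quad \mathfrak{a}=\mathfrak{B},\ \dim\mathfrak{a}=1;\\
&\text{\bf AIII:}\quad \mathfrak{t}_{[1]}=c\otimes su(2)\otimes su(2),\ \dim\mathfrak{t}_{[1]}=7,\quad \mathfrak{a}=\mathfrak{B}^c,\ \dim\mathfrak{a}=2.
\end{align*}
In each case $\dim\mathfrak{a}$ coincides with the rank of the respective symmetric space
$SU(4)/SO(4)$, $SU(4)/Sp(2)$ or $SU(4)/S(U(2)\times U(2))$, as the law requires.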
Granted by the {\em KAK theorem}~\cite{Knapp},
a unitary action $U\in{SU(2^p)}$ can be factorized into the form
$U=K_{[1],0}AK_{[1],1}$ due to a Cartan decomposition
$su(2^p)=\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$.
Herein the factor $A=e^{i\mathbf{a}_{[0]}}$ is the exponential mapping of a vector
$\mathbf{a}_{[0]}$ of a maximal abelian subalgebra $\mathfrak{a}$ in the subspace $\mathfrak{p}_{[1]}$
and $K_{[1],s}=e^{i\mathbf{t}_s}$ is contributed by a vector $\mathbf{t}_s$
of the subalgebra $\mathfrak{t}_{[1]}$ for $s\in{Z_2}$.
The calculation of the factorization is in practice realized only on the occasion
of the {\em intrinsic} Cartan decomposition
$\hat{\mathfrak{t}}_{[1]}\oplus\hat{\mathfrak{p}}_{[1]}$~\cite{Su,SuTsai1,SuTsai2}.
For other occasions,
the decomposition $\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$
must first be mapped to the intrinsic by a conjugate transformation
fulfilling
$\hat{\mathfrak{t}}_{[1]}=Q\mathfrak{t}_{[1]}Q^\dag$
and
$\hat{\mathfrak{p}}_{[1]}=Q\mathfrak{p}_{[1]}Q^\dag$ via an operator
$Q\in{SU(2^p)}$, which is explicitly written in Appendix~2 of~\cite{SuTsai2}.
Let the calculation proceed on the modified factorization
$\hat{U}=QUQ^{\dag}=\hat{K}_{[1],0}\hat{A}\hat{K}_{[1],1}$;
the desired factorization is then recovered through the reverse mapping $U=Q^{\dag}\hat{U}Q$
with $\hat{A}=QAQ^{\dag}$ and
$\hat{K}_{[1],s}=QK_{[1],s}Q^{\dag}$, $s\in{Z_2}$.
The matrix computation of the factorization
$\hat{U}=\hat{K}_{[1],0}\hat{A}\hat{K}_{[1],1}$ due to the
intrinsic decomposition
$\hat{\mathfrak{t}}_{[1]}\oplus\hat{\mathfrak{p}}_{[1]}$
is carried out within the {\em bilinear constraint}
$\hat{K}_{[1],s}M\hat{K}^t_{[1],s}=M$ (or $\hat{K}_{[1],s}M\hat{K}^{\dag}_{[1],s}=M$),
where the {\em metric} $M$ is
a $2^p\times 2^p$ {\em invertible real symmetric} matrix
and the factor $\hat{A}$ is also {\em real} and {\em symmetric}~\cite{Helgason,Knapp,RG,GVL}.
One type of intrinsic Cartan decomposition
$\hat{\mathfrak{t}}_{[1]}\oplus\hat{\mathfrak{p}}_{[1]}$ is associated with
one distinct metric $M$.
When the metric $M=M_{\hspace{.5pt}\mathbf{I}}=I={\cal S}^{\mathbf{0}}_{\mathbf{0}}$ is the identity matrix,
the factorization $\hat{U}=\hat{K}_{[1],0}\hat{A}\hat{K}_{[1],1}$ is
the well-known {\em Singular Value Decomposition (SVD)} requiring
computation over the relations
$\hat{K}_{[1],s}\hat{K}^t_{[1],s}=I$ for $s\in{Z_2}$, whose factor $\hat{A}$
is the exponential mapping of a vector in the maximal abelian subalgebra
$\hat{\mathfrak{a}}=\{{{\cal S}^{\zeta}_{\mathbf{0}}:\forall\hspace{2pt}\zeta\in{Z^p_2}}\}\subset\hat{\mathfrak{p}}_{[1]}$
consisting of all diagonal generators of $su(2^p)$.
The intrinsic Cartan decomposition $\hat{\mathfrak{t}}_{[1]}\oplus\hat{\mathfrak{p}}_{[1]}$
causing SVD
is of type {\bf AI} with the subalgebra $\hat{\mathfrak{t}}_{[1]}=\hat{\mathfrak{t}}_{\mathbf{I}}=so(2^p)$~\cite{SuTsai2}.
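In the smallest instance $p=1$ (a sketch of ours, not drawn from the cited references), the type-{\bf AI} constraint reduces to the familiar statement that every $\hat{U}\in{SU(2)}$ factorizes into a planar rotation, a diagonal phase and another planar rotation:
\begin{equation*}
\hat{U}=\hat{K}_{[1],0}\hat{A}\hat{K}_{[1],1},\qquad
\hat{K}_{[1],s}=\begin{pmatrix}\cos\theta_s&\sin\theta_s\\-\sin\theta_s&\cos\theta_s\end{pmatrix},\qquad
\hat{A}=\begin{pmatrix}e^{i\phi}&0\\0&e^{-i\phi}\end{pmatrix},
\end{equation*}
where each $\hat{K}_{[1],s}\in{SO(2)}$ obeys $\hat{K}_{[1],s}\hat{K}^t_{[1],s}=I$ and
$\hat{A}=e^{i\phi\sigma_z}$ is generated by the single diagonal generator $\sigma_z$
spanning $\hat{\mathfrak{a}}$; the parameter count $1+1+1=3=\dim su(2)$ confirms the factorization.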
The KAK factorization
is called the {\em Symplectic Decomposition (SpD)}
when $M=M_{\hspace{.5pt}\mathbf{II}}=J_{2^{p-1}}={\cal S}^{\zeta_0}_{\alpha_0}$,
where the strings $\zeta_0=\alpha_0=10\cdots 0\in{Z^p_2}$
have a single nonzero bit at the leftmost digit.
The components $\hat{K}_{[1],s}$ comply with the constraint
$\hat{K}_{[1],s}J_{2^{p-1}}\hat{K}^t_{[1],s}J^t_{2^{p-1}}=I$
and the factor $\hat{A}$ is evolved by a vector of the maximal abelian subalgebra
$\hat{\mathfrak{a}}=\{{\cal S}^{\eta}_{\mathbf{0}}:
\forall\hspace{2pt}\eta\in{Z^p_2},\eta\cdot\alpha_0=0\}\subset\hat{\mathfrak{p}}_{[1]}$.
The corresponding intrinsic decomposition
is of type {\bf AII} with the subalgebra $\hat{\mathfrak{t}}_{\mathbf{II}}=sp(2^{p-1})$.
Finally, the intrinsic type-{\bf AIII} decomposition,
bearing the subalgebra $\hat{\mathfrak{t}}_{\mathbf{III}}=c\otimes{su(2^{p-1})}\otimes{su(2^{p-1})}$
of the center $c=\{{\cal S}^{\zeta_0}_{\mathbf{0}}\}$,
leads to the factorization known as the {\em Cosine-Sine Decomposition (CSD)}, which is
associated with
the metric $M=M_{\hspace{.5pt}\mathbf{III}}=I_{2^{p-1},2^{p-1}}={\cal S}^{\zeta_0}_{\mathbf{0}}$.
Its component computation is guided by the identity
$\hat{K}_{[1],s}I_{2^{p-1},2^{p-1}}\hat{K}^\dag_{[1],s}I_{2^{p-1},2^{p-1}}=I$
and the factor $\hat{A}$ is the contribution of a vector in the maximal abelian subalgebra
$\hat{\mathfrak{a}}=\{{\cal S}^{\zeta}_{\alpha_0}:\forall\hspace{2pt}\zeta\in{Z^p_2}\}\subset\hat{\mathfrak{p}}_{[1]}$.
For convenience in the expositions below, the
symbol {\bf A$\Omega$} will stand for the type index {\bf AI}, {\bf AII} or {\bf AIII}.
The action $U$ admits further factorizations owing to decompositions of higher levels.
As a result of decomposing the subalgebra
$\mathfrak{t}_{[1]}=\mathfrak{t}_{[2]}\oplus\mathfrak{p}_{[2]}$
into a $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the $2$nd level
by the {\em KAK} theorem again,
the component $K_{[1],s}$ is factorized into the form
$K_{[1],s}=K_{[2],s_0}A_{[1],s}K_{[2],s_1}$,
where $s\in{Z_2}$ and $s_\epsilon=s\circ\epsilon$ denotes the
concatenation of $s$ and $\epsilon\in{Z_2}$.
Likewise, the factor $A_{[1],s}$ is the exponential evolution of a vector
of a maximal abelian subalgebra in the subspace $\mathfrak{p}_{[2]}$,
and $K_{[2],s_\epsilon}$ is contributed by a vector of the subalgebra $\mathfrak{t}_{[2]}$.
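Substituting these second-level factors into the first-level factorization, the expansion (a straightforward consequence of the two formulas above) reads
\begin{equation*}
U=K_{[1],0}\,A\,K_{[1],1}
 =K_{[2],00}\,A_{[1],0}\,K_{[2],01}\,A\,K_{[2],10}\,A_{[1],1}\,K_{[2],11},
\end{equation*}
so that, continuing recursively, the factorization after $l$ levels comprises $2^l$ components
$K_{[l],s}$ with $s\in{Z^l_2}$, interleaved with $2^l-1$ abelian factors.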
By Lemma~\ref{lemLvlthin1st} there must exist a $1$st-level
$\mathfrak{t}$-$\mathfrak{p}$ decomposition $\mathfrak{t}_{\Omega_2}\oplus\mathfrak{p}_{\Omega_2}$
such that
$\mathfrak{t}_{\Omega_2}\supset\mathfrak{t}_{[2]}$
and
$\mathfrak{p}_{\Omega_2}\supset\mathfrak{p}_{[2]}$.
Suppose
$\mathfrak{t}_{\Omega_2}\oplus\mathfrak{p}_{\Omega_2}$ is of type {\bf A$\Omega_2$}.
Similar to the case at the $1$st level,
the key step of the KAK factorization is carried out only
according to the {\em intrinsic} Cartan decomposition.
The calculation is conducted on the modified version
$\hat{K}_{[1],s}=\hat{K}_{[2],s_0}\hat{A}_{[1],s}\hat{K}_{[2],s_1}$
under the constraint
$\hat{K}_{[2],s_0}M_{\Omega_2}\hat{K}^t_{[2],s_1}=M_{\Omega_2}$
(or $\hat{K}_{[2],s_0}M_{\Omega_2}\hat{K}^{\dag}_{[2],s_1}=M_{\Omega_2}$)
with the associated metric $M_{\Omega_2}$.
After the calculation completes, each factorization component at the 2nd level is delivered by the reverse mapping
$K_{[1],s}=Q^{\dag}_{\Omega_2}\hat{K}_{[1],s}Q_{\Omega_2}$;
here $Q_{\Omega_2}$ is the operator
mapping $\mathfrak{t}_{\Omega_2}\oplus\mathfrak{p}_{\Omega_2}$ into
the intrinsic
$\hat{\mathfrak{t}}_{\Omega_2}\oplus\hat{\mathfrak{p}}_{\Omega_2}$~\cite{SuTsai2}
via the conjugate transformation
$\hat{A}_{[1],s}=Q_{\Omega_2}A_{[1],s}Q^{\dag}_{\Omega_2}$
and
$\hat{K}_{[2],s_\epsilon}=Q_{\Omega_2}K_{[2],s_\epsilon}Q^{\dag}_{\Omega_2}$.
Based on the allowed factorization of this choice, it is legitimate
to label the $2$nd-level decomposition
$\mathfrak{t}_{[2]}\oplus\mathfrak{p}_{[2]}$ as a type {\bf A$\Omega_2$}.
When the decomposition goes forward to the $l$-th level,
$l>2$,
the major task to accomplish is the recursive factorization
$K_{[l-1],s}=K_{[l],s_0}A_{[l-1],s}K_{[l],s_1}$
owing to a $\mathfrak{t}$-$\mathfrak{p}$ decomposition
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ of this level,
$s\in{Z^{l-1}_2}$ and $s_\epsilon=s\circ \epsilon$ for $\epsilon\in{Z_2}$.
Similarly, there exists a $1$st-level decomposition
$\mathfrak{t}_{\Omega_l}\oplus\mathfrak{p}_{\Omega_l}$
supporting the coverings
$\mathfrak{t}_{\Omega_l}\supset\mathfrak{t}_{[l]}$
and
$\mathfrak{p}_{\Omega_l}\supset\mathfrak{p}_{[l]}$.
Suppose $\mathfrak{t}_{\Omega_l}\oplus\mathfrak{p}_{\Omega_l}$
is of type {\bf A$\Omega_l$} and
let the factorization proceed
exactly following the same recipe employed at the $2$nd level.
The decomposition
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ is then
considered of type {\bf A$\Omega_l$},
although the choice is not unique.
Furnished with the factorization recipe above,
the type of a decomposition can be
identified with that of a $1$st-level decomposition supporting the subspace coverings.
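Collecting the steps above, the recipe at each level $l$ can be condensed into a single relation (our restatement of the procedure just described):
\begin{equation*}
K_{[l-1],s}=Q^{\dag}_{\Omega_l}\hat{K}_{[l],s_0}\hat{A}_{[l-1],s}\hat{K}_{[l],s_1}Q_{\Omega_l},
\qquad s\in{Z^{l-1}_2},
\end{equation*}
in which the hatted factors are computed under the bilinear constraint of the metric
$M_{\Omega_l}$ associated with the intrinsic decomposition
$\hat{\mathfrak{t}}_{\Omega_l}\oplus\hat{\mathfrak{p}}_{\Omega_l}$.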
\vspace{6pt}
\begin{defn}\label{defnl-thCDtype}
A $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the $l$-th level $\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$
in the Lie algebra ${su(N)}$, $2^{p-1}<N\leq 2^p$ and $1< l\leq p$,
is a Cartan decomposition of type {\bf A$\Omega$},
if there exists a $1$st-level $\mathfrak{t}$-$\mathfrak{p}$ decomposition
$su(N)=\mathfrak{t}_{\Omega}\oplus\mathfrak{p}_{\Omega}$ of the same type {\bf A$\Omega$}
bearing the inclusions $\mathfrak{t}_{[l]}\subseteq\mathfrak{t}_{\Omega}$
and $\mathfrak{p}_{[l]}\subseteq\mathfrak{p}_{\Omega}$;
the type {\bf A$\Omega$} refers to the type {\bf AI}, {\bf AII} or {\bf AIII}.
\end{defn}
\vspace{6pt}
In other words, the type assigned to the subalgebra $\mathfrak{t}_{[l]}$
is given by
a choice of a $1$st maximal subgroup $\mathfrak{t}_{\Omega}$
respecting the condition that the former subalgebra is
an $(l-1)$-th maximal subgroup of the latter under the tri-addition.
When necessary,
a type once chosen can be preserved throughout a series of decompositions.
\vspace{6pt}
\begin{lemma}\label{lemlto1stlevelCDtype}
For every $l$-th-level $\mathfrak{t}$-$\mathfrak{p}$
decomposition $\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ of type {\bf A$\Omega$}
in $su(N)$, $l>1$,
there exists a series of $\mathfrak{t}$-$\mathfrak{p}$ decompositions
$\mathfrak{t}_{[k]}\oplus\mathfrak{p}_{[k]}$ of the same type {\bf A$\Omega$},
from the $1$st to the $l$-th level,
respecting the condition that every
$\mathfrak{t}_{[k]}$ is a proper maximal subgroup of $\mathfrak{t}_{[k-1]}$
under the tri-addition
and $\mathfrak{p}_{[k]}\subset\mathfrak{p}_{[k-1]}$
for all $1<k\leq l$.
\end{lemma}
\vspace{3pt}
\begin{proof}
Since $\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ is a type {\bf A$\Omega$},
there exists a $1$st-level decomposition
$\mathfrak{t}_{\Omega}\oplus\mathfrak{p}_{\Omega}$ of the same type
holding the coverings
$\mathfrak{t}_{\Omega}\supset\mathfrak{t}_{[l]}$ and $\mathfrak{p}_{\Omega}\supset\mathfrak{p}_{[l]}$
according to Definition~\ref{defnl-thCDtype}.
Such a series of $\mathfrak{t}$-$\mathfrak{p}$
decompositions $\mathfrak{t}_{[k]}\oplus\mathfrak{p}_{[k]}$,
$1<k\leq l$, can be constructed
by a recursion similar to that shown in the proof
of Lemma~\ref{lemLvlthin1st}.
By taking an arbitrary conditioned subspace
${W}^{\epsilon_1}(\mathfrak{B}_1,\mathfrak{B}^{[r]};i_1)$ in
$\mathfrak{t}_{\Omega}-\mathfrak{t}_{[l]}$ for
a coset
$\mathfrak{p}'_{[l]}=\{{W}^{\epsilon_1+\epsilon}(\mathfrak{B}_1\sqcap\mathfrak{B},\mathfrak{B}^{[r]};i_1+i):
\forall\hspace{2pt}{W}^\epsilon(\mathfrak{B},\mathfrak{B}^{[r]};i)\in\mathfrak{t}_{[l]}\}$
of $\mathfrak{t}_{[l]}$ under the tri-addition,
the composition $\mathfrak{t}_{[l-1]}=\mathfrak{t}_{[l]}\oplus\mathfrak{p}'_{[l]}$
forms an $(l-1)$-th maximal subgroup of the given quotient-algebra partition
and belongs to $\mathfrak{t}_{\Omega}$.
In addition, with the subspace
$\mathfrak{p}''_{[l]}=\{{W}^{\epsilon_1+\sigma}(\mathfrak{B}_1\sqcap\mathfrak{B}^\dag,\mathfrak{B}^{[r]};i_1+j):
\forall\hspace{2pt}{W}^\sigma(\mathfrak{B}^\dag,\mathfrak{B}^{[r]};j)\in\mathfrak{p}_{[l]}\}$
being another coset of $\mathfrak{t}_{[l]}$ under the tri-addition,
the composition $\mathfrak{p}_{[l-1]}=\mathfrak{p}_{[l]}\oplus\mathfrak{p}''_{[l]}$
is a coset of $\mathfrak{t}_{[l-1]}$ and
a subset of $\mathfrak{p}_{\Omega}$, recalling
a rule of decomposition condition
$[\mathfrak{t}_{\Omega},\mathfrak{p}_{\Omega}]\subset\mathfrak{p}_{\Omega}$.
Thus, the composition $\mathfrak{t}_{[l-1]}\oplus\mathfrak{p}_{[l-1]}$ is of type {\bf A$\Omega$}.
Performing this procedure recursively then produces
a series of decompositions
$\mathfrak{t}_{[k]}\oplus\mathfrak{p}_{[k]}$ of the same type
as demanded for $1<k\leq l$.
\end{proof}
\vspace{6pt}
By Lemmas~\ref{lemLvlthin1st} and~\ref{LvlsupLvlMaxAbelSubAlg},
when the maximal abelian subalgebra of the subspace $\mathfrak{p}_{[l]}$
of an $l$-th-level $\mathfrak{t}$-$\mathfrak{p}$ decomposition
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ is given,
there exists a $1$st-level decomposition
$\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$ obeying the
inclusions
$\mathfrak{t}_{[1]}\supset\mathfrak{t}_{[l]}$ and
$\mathfrak{p}_{[1]}\supset\mathfrak{p}_{[l]}$.
Thus, together with Definition~\ref{defnl-thCDtype},
it is not difficult to extend the rule of type decision to
decompositions of levels beyond the $1$st.
\vspace{6pt}
\begin{thm}\label{thm3typesoflthtp}
For an $l$-th-level $\mathfrak{t}$-$\mathfrak{p}$ decomposition
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ of a rank-$r$ quotient-algebra partition over $su(N)$,
$2^{p-1}<N\leq 2^p$, $0\leq r\leq p$ and $1< l\leq p+r$,
the decomposition is a Cartan decomposition of type {\bf AI} if the maximal abelian subalgebra ${\cal A}$
of the subspace $\mathfrak{p}_{[l]}$
is a Cartan subalgebra $\mathfrak{C}\subset{su(N)}$,
is of type {\bf AI} or {\bf AIII} if ${\cal A}$ is the coset $\mathfrak{B}^{[1,1]}$
of a $1$st maximal bi-subalgebra $\mathfrak{B}^{[1]}$ in $\mathfrak{C}$,
is of type {\bf AI} or {\bf AII} if ${\cal A}$ is an $r'$-th
maximal bi-subalgebra $\mathfrak{B}^{[r']}$ of $\mathfrak{C}$ for $1\leq r'\leq p$,
or is of one of the three types {\bf AI}, {\bf AII} and {\bf AIII} if ${\cal A}$ is
a coset of an $r''$-th maximal bi-subalgebra $\mathfrak{B}^{[r'']}$ in $\mathfrak{C}$ for $1<r''\leq p$.
\end{thm}
\vspace{2pt}
\begin{proof}
This theorem is a direct consequence of
Lemmas~\ref{lemLvlthin1st} and~\ref{LvlsupLvlMaxAbelSubAlg} as well as
Definition~\ref{defnl-thCDtype}.
\end{proof}
\vspace{6pt}
This implies a nonunique freedom in choosing the type
at each decomposition level when factorizing a unitary action.
A series of $\mathfrak{t}$-$\mathfrak{p}$ decompositions in strict accord with
the abelian-group structure of a quotient-algebra partition
and halting at an abelian subalgebra establishes
a {\em $\mathfrak{t}$-$\mathfrak{p}$ decomposition sequence}.
\vspace{6pt}
\begin{defn}\label{defntpseq}
Denoted as
$seq_{\mathfrak{t}\mathfrak{p}}(\mathfrak{t}_{[M]})$,
a $\mathfrak{t}$-$\mathfrak{p}$ decomposition sequence of length $M$
consists of a series of $\mathfrak{t}$-$\mathfrak{p}$ decompositions
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$ from the $1$st to the $M$-th level
in a quotient-algebra partition of the Lie algebra $su(N)$,
$2^{p-1}<N\leq 2^p$ and $1\leq l\leq M$,
where $su(N)=\mathfrak{t}_{[0]}=\mathfrak{t}_{[1]}\oplus\mathfrak{p}_{[1]}$ and
$\mathfrak{p}_{[0]}=\{0\}$,
the subalgebra $\mathfrak{t}_{[l]}$ is a proper maximal subgroup of
$\mathfrak{t}_{[l-1]}$ under the tri-addition
with the complement
$\mathfrak{p}_{[l]}=\mathfrak{t}_{[l-1]}-\mathfrak{t}_{[l]}$,
every $\mathfrak{t}_{[l']}$ for $1\leq l'<M$ is nonabelian
and only the subalgebra $\mathfrak{t}_{[M]}$ at the final level is abelian.
\end{defn}
\vspace{6pt}
The length of a decomposition sequence is bounded below by the exponent $p$
of the algebra's dimension and above by the sum of $p$ and the rank $r$ of the partition.
\vspace{6pt}
\begin{lemma}\label{lemtplength}
A quotient-algebra partition of rank $r$ of the Lie algebra $su(N)$,
$2^{p-1}<N \leq 2^p$ and $0\leq r\leq p$,
admits $\mathfrak{t}$-$\mathfrak{p}$ decomposition
sequences of lengths ranging from $p$ to $p+r$.
\end{lemma}
\vspace{3pt}
\begin{proof}
It will first be shown that there exists no decomposition sequence of length shorter than
$p$; namely,
every subalgebra $\mathfrak{t}_{[k]}$ of a decomposition
$\mathfrak{t}_{[k]}\oplus\mathfrak{p}_{[k]}$
at the $k$-th level for $1\leq k<p$
is nonabelian.
In the quotient-algebra partition of rank $r$
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}
=\{{W}^\epsilon(\mathfrak{B},\mathfrak{B}^{[r]};i):\forall\hspace{2pt}
\mathfrak{B}\in\mathcal{G}(\mathfrak{C}),\epsilon\in{Z_2}\text{ and }i\in{Z^r_2}\}$
generated by an $r$-th maximal bi-subalgebra $\mathfrak{B}^{[r]}$ of a Cartan
subalgebra $\mathfrak{C}\subset{su(N)}$, consider
a maximal abelian subalgebra
$\mathfrak{C'}=\bigcup_{i\in{Z^r_2}}{W}^1(\mathfrak{C},\mathfrak{B}^{[r]};i)$
formed by
a number $2^{r}$ of non-null conditioned subspaces.
It follows that
the subalgebra
$\mathfrak{t}_{[p]}
=\{{W}^0(\mathfrak{C},\mathfrak{B}^{[r]};i),{W}^1(\mathfrak{C},\mathfrak{B}^{[r]};i):\forall\hspace{2pt}i\in{Z^r_2}\}$
is the smallest subgroup of $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$
under the tri-addition that is abelian and is a superset of $\mathfrak{C'}$.
Thus, every subalgebra $\mathfrak{t}_{[k]}$ of a sequence
in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$
as $1\leq k<p$ is nonabelian
and per Definition~\ref{defntpseq} there is no
decomposition sequence of length shorter than $p$.
However, there always exist decomposition sequences
of lengths ranging from $p$ to $p+r$.
Let the construction begin with a sequence of length $p+r$.
In $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$,
the subalgebra
$\mathfrak{t}_{[p+r]}
=\{{W}^0(\mathfrak{C},\mathfrak{B}^{[r]};\mathbf{0}),{W}^\epsilon(\mathfrak{B},\mathfrak{B}^{[r]};i)\}$
comprising the group identity ${W}^0(\mathfrak{C},\mathfrak{B}^{[r]};\mathbf{0})=\{0\}$
and a non-null subspace
${W}^\epsilon(\mathfrak{B},\mathfrak{B}^{[r]};i)$
is a smallest abelian subgroup under the tri-addition.
With another choice of non-null conditioned subspace
${W}^\sigma(\mathfrak{B}_1,\mathfrak{B}^{[r]};j)$,
the subspace
$\mathfrak{p}_{[p+r]}=
\{{W}^{\sigma+\tau}(\mathfrak{B}_1\sqcap\mathfrak{B}',\mathfrak{B}^{[r]};j+s):
\forall\hspace{2pt}{W}^{\tau}(\mathfrak{B}',\mathfrak{B}^{[r]};s)\in\mathfrak{t}_{[p+r]}\}$
corresponds to a coset of $\mathfrak{t}_{[p+r]}$.
Their composition
$\mathfrak{t}_{[p+r-1]}=\mathfrak{t}_{[p+r]}\oplus\mathfrak{p}_{[p+r]}$
is nonabelian provided the two subspaces ${W}^\sigma(\mathfrak{B}_1,\mathfrak{B}^{[r]};j)$
and $\mathfrak{t}_{[p+r]}$ do not commute.
Consequently every subalgebra $\mathfrak{t}_{[k_1]}\supset\mathfrak{t}_{[p+r-1]}$
for $1\leq k_1<p+r-1$ is nonabelian too.
Hereby, one
$\mathfrak{t}$-$\mathfrak{p}$ decomposition sequence of length
$p+r$ with $\mathfrak{t}_{[p+r]}\oplus\mathfrak{p}_{[p+r]}$ at
the final level is obtained.
A decomposition sequence of length $p+r-1$ is created by undertaking similar steps.
Given an abelian subalgebra $\mathfrak{t}_{[p+r]}$,
the composition
$\mathfrak{t}_{[p+r-1]}=\mathfrak{t}_{[p+r]}\oplus\mathfrak{p}^{\dag}_{[p+r]}$
of the subalgebra $\mathfrak{t}_{[p+r]}$
and its coset
$\mathfrak{p}^{\dag}_{[p+r]}=
\{{W}^{\nu+\tau}(\mathfrak{B}_2\sqcap\mathfrak{B}',\mathfrak{B}^{[r]};m+s):
\forall\hspace{2pt}{W}^{\tau}(\mathfrak{B}',\mathfrak{B}^{[r]};s)\in\mathfrak{t}_{[p+r]}\}$
is abelian by picking some conditioned subspace
${W}^{\nu}(\mathfrak{B}_2,\mathfrak{B}^{[r]};m)$ commuting with $\mathfrak{t}_{[p+r]}$.
Then, a nonabelian subalgebra
$\mathfrak{t}_{[p+r-2]}=\mathfrak{t}_{[p+r-1]}\oplus\mathfrak{p}_{[p+r-1]}$
with $\mathfrak{p}_{[p+r-1]}$ being a coset of $\mathfrak{t}_{[p+r-1]}$
is rendered as long as
$[\mathfrak{t}_{[p+r-1]},\mathfrak{p}_{[p+r-1]}]\neq 0$.
By the same token, due to the nonabelianness of every subalgebra $\mathfrak{t}_{[k_2]}\supset\mathfrak{t}_{[p+r-2]}$
at the $k_2$-th level as $1\leq k_2<p+r-2$,
a decomposition sequence of length $p+r-1$ is thus built.
Recursively, this procedure generates
decomposition sequences of lengths from $p+r$ down to $p$.
The construction terminates at the length
$p$, because, as aforesaid, there always exist abelian subalgebras $\mathfrak{t}_{[p]}$
but every subalgebra $\mathfrak{t}_{[k]}$ in a sequence for $1\leq k<p$ is nonabelian.
\end{proof}
\vspace{6pt}
Thus, the Lie algebra $su(2^p)$ admits decomposition sequences
from the shortest length $p$ to the longest $2p$.
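A bookkeeping remark of ours, consistent with the counts used in the proof above: the rank-$r$ partition over $su(2^p)$ comprises $2^{\hspace{.5pt}p+r+1}$ conditioned subspaces in total, and every level of a sequence halves the subgroup, so, counting conditioned subspaces,
\begin{equation*}
|\mathfrak{t}_{[l]}|=2^{\hspace{.5pt}p+r+1-l}.
\end{equation*}
A sequence of length $M$ therefore halts at an abelian subalgebra of
$2^{\hspace{.5pt}p+r+1-M}$ subspaces, from $2^{\hspace{.5pt}r+1}$ at $M=p$ down to $2$ at $M=p+r$,
matching the two extremes constructed in the proof of Lemma~\ref{lemtplength}.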
Based on Theorem~5 in~\cite{SuTsai2},
a $1$st-level $\mathfrak{t}$-$\mathfrak{p}$ decomposition determined in a quotient-algebra partition of rank $r$
can be reconstructed in a partition of rank higher than $r$.
Such a reconstruction is applicable to a $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the $l$-th level,
since a quotient-algebra partition can be refined into a quotient-algebra partition of a higher rank.
\vspace{6pt}
\begin{lemma}\label{leminclul-thCD}
A $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the $l$-th level decided in the
quotient-algebra partition of rank $r$
generated by an $r$-th maximal bi-subalgebra $\mathfrak{B}^{[r]}$
of a Cartan subalgebra
$\mathfrak{C}\subset{su(N)}$, $2^{p-1}<N\leq 2^p$,
is a decomposition of the same level
in the partition of a higher rank
generated by an $r'$-th maximal bi-subalgebra
$\mathfrak{B}^{[r']}\subset\mathfrak{B}^{[r]}$ of $\mathfrak{C}$, $0\leq r<r'\leq p$ and $1\leq l\leq p+r+1$.
\end{lemma}
\vspace{2pt}
\begin{proof}
The quotient algebra partition
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$
generated by
$\mathfrak{B}^{[r]}$
allows a further partitioning into the partition of a higher rank $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r']})\}$
generated by
$\mathfrak{B}^{[r']}$.
The partitioning is obtained by the fact
that every conditioned subspace ${W}^\epsilon(\mathfrak{B},\mathfrak{B}^{[r]};i)$
in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$
can split into
$2^{r'-r}$ conditioned subspaces ${W}^\epsilon(\mathfrak{B},\mathfrak{B}^{[r']};i')$
in $\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r']})\}$,
where $\mathfrak{B}\in\mathcal{G}(\mathfrak{C})$, $i\in{Z^r_2}$ and $i'\in{Z^{r'}_2}$.
Since the subalgebra $\mathfrak{t}_{[l]}$ and the subspace $\mathfrak{p}_{[l]}$
of an $l$-th-level decomposition
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$
both comprise the conditioned subspaces in
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r]})\}$,
the decomposition $\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}$
is simply a $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the same level consisting
of the refined conditioned subspaces in
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[r']})\}$.
\end{proof}
\vspace{6pt}
As an implication of Lemma~\ref{leminclul-thCD},
a $\mathfrak{t}$-$\mathfrak{p}$ decomposition sequence decided in
a quotient-algebra partition of rank $r$ is reconstructed in that of a higher rank.
\vspace{6pt}
\begin{thm}\label{thmincluRecCD}
A $\mathfrak{t}$-$\mathfrak{p}$ decomposition sequence
in the quotient-algebra partition generated by an $r$-th maximal bi-subalgebra $\mathfrak{B}^{[r]}$
of a Cartan subalgebra $\mathfrak{C}$ in the Lie algebra ${su(N)}$,
$2^{p-1}<N\leq 2^p$,
can also be
constructed in the partition of a higher rank
generated by an $r'$-th maximal bi-subalgebra
$\mathfrak{B}^{[r']}\subset\mathfrak{B}^{[r]}$ of $\mathfrak{C}$, $0\leq r<r'\leq p$.
\end{thm}
\vspace{3pt}
\begin{proof}
Let $seq_{\mathfrak{t}\mathfrak{p}}(\mathfrak{t}_{[M]})=\{\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]};l=1,2,\cdots,M\}$
be a $\mathfrak{t}$-$\mathfrak{p}$ decomposition sequence of length $M$
determined in the quotient-algebra partition generated by $\mathfrak{B}^{[r]}$.
Since the decomposition of the $l$-th level
$\mathfrak{t}_{[l]}\oplus\mathfrak{p}_{[l]}\in{seq_{\mathfrak{t}\mathfrak{p}}(\mathfrak{t}_{[M]})}$
is a $\mathfrak{t}$-$\mathfrak{p}$ decomposition of the same level in the partition
generated by $\mathfrak{B}^{[r']}\subset\mathfrak{B}^{[r]}$
according to Lemma~\ref{leminclul-thCD},
the sequence ${seq_{\mathfrak{t}\mathfrak{p}}(\mathfrak{t}_{[M]})}$
is rebuildable in a more refined partition.
\end{proof}
\vspace{6pt}
Recovering a decomposition sequence in a higher-rank quotient-algebra partition
is a direct consequence of the above theorem.
\vspace{6pt}
\begin{cor}\label{corincluSeqDec}
A decomposition sequence determined in the quotient-algebra partition generated by an $r$-th maximal
bi-subalgebra $\mathfrak{B}^{[r]}$ of a Cartan subalgebra
$\mathfrak{C}\subset{su(N)}$, $2^{p-1}<N\leq 2^p$,
is recoverable in the partition generated by an
$r'$-th maximal bi-subalgebra
$\mathfrak{B}^{[r']}\subset\mathfrak{B}^{[r]}$ of $\mathfrak{C}$,
$0\leq r<r'\leq p$.
\end{cor}
\vspace{6pt}
It is apparent that all decomposition sequences
are obtainable in the quotient-algebra partition of the highest rank.
\vspace{6pt}
\begin{cor}\label{corcomplte}
The complete set of decomposition sequences of the Lie algebra
$su(N)$, $2^{p-1}<N\leq 2^p$,
is produced in the
quotient-algebra partition of the highest rank $p$
$\{\mathcal{P}_{\mathcal{Q}}(\mathfrak{B}^{[p]})\}$ generated
by the $p$-th maximal bi-subalgebra $\mathfrak{B}^{[p]}=\{{\cal S}^{\bf 0}_{\hspace{.01cm}{\bf 0}}\}$
of a Cartan subalgebra $\mathfrak{C}\subset{su(N)}$.
\end{cor}
\vspace{6pt}
Therefore, all admissible factorizations of a unitary action are derivable from the complete set of decomposition sequences.
\nonumsection{References} \vspace*{-10pt} \noindent
\section{Introduction}
The recent discovery of the Higgs particle at the LHC and Tevatron
has reconfirmed that the electroweak theory of Weinberg and
Salam provides the true unification of electromagnetic and weak
interactions \cite{LHC,tev}. Indeed the discovery of the Higgs
particle has been claimed to be the ``final'' test of the standard
model. This, however, might be a premature claim. The real final
test should come from the discovery of the electroweak monopole,
because the standard model predicts it \cite{plb97,yang}. In fact
the existence of the monopole topology in the standard model tells us
that the discovery of the monopole must be the topological test
of the standard model.
In this sense it is timely that the latest MoEDAL detector
(``The Magnificent Seventh'') at the LHC is actively searching for
such a monopole \cite{pin,ijmpa14}. To detect the electroweak monopole
experimentally, however, it is important to estimate the
monopole mass in advance. {\it The purpose of this paper is
to estimate the mass of the electroweak monopole. We show
that the monopole mass is expected to be around 4 to 7 TeV.}
Ever since Dirac \cite{Dirac} introduced the concept of
the magnetic monopole, monopoles have remained a fascinating
subject. The Abelian monopole has been generalized to the non-Abelian
monopoles by Wu and Yang \cite{wu,prl80} who showed that the pure
$SU(2)$ gauge theory allows a point-like monopole, and by 't Hooft
and Polyakov \cite{Hooft,prasad} who have constructed a finite energy
monopole solution in Georgi-Glashow model as a topological soliton.
Moreover, the monopole in grand unification has been constructed by
Dokos and Tomaras \cite{dokos}.
In the interesting case of the electroweak theory of Weinberg and
Salam, however, it has been asserted that there exists no topological
monopole of physical interest \cite{vach}. The basis for this
``non-existence theorem'' is, of course, that with the spontaneous
symmetry breaking the quotient space $SU(2) \times U(1)_Y/U(1)_{\rm em}$
allows no non-trivial second homotopy. This has led many people to
believe that there is no monopole in Weinberg-Salam model.
This claim, however, has been shown to be false. If the electroweak
unification of Weinberg and Salam is correct, the standard model
must have a monopole which generalizes the Dirac monopole. Moreover,
it has been shown that the standard model has a new type of monopole
and dyon solutions \cite{plb97}. This was based on the observation
that the Weinberg-Salam model, with the $U(1)_Y$, could be viewed
as a gauged $CP^1$ model in which the (normalized) Higgs doublet
plays the role of the $CP^1$ field. So the Weinberg-Salam model
does have exactly the same nontrivial second homotopy as the
Georgi-Glashow model which allows topological monopoles.
Once this is understood, one could proceed to construct the desired
monopole and dyon solutions in the Weinberg-Salam model. Originally
the electroweak monopole and dyon solutions were obtained by numerical
integration. But a mathematically rigorous existence proof has been
established which endorses the numerical results, and the solutions
are now referred to as Cho-Maison monopole and dyon \cite{yang}.
It should be emphasized that the Cho-Maison monopole is completely
different from the ``electroweak monopole'' derived from Nambu's
electroweak string. In his continued search for string-like
objects in physics, Nambu demonstrated the existence of a rotating
dumbbell made of a monopole anti-monopole pair connected by a
neutral string of $Z$-boson flux (actually the $SU(2)$ flux) in
the Weinberg-Salam model \cite{nambu}. Taking advantage of Nambu's
pioneering work, others claimed to have discovered another type of
electroweak monopole, simply by making the string infinitely long
and moving the anti-monopole to infinity \cite{vacha}. This
``electroweak monopole'', however, must carry a fractional magnetic
charge and cannot be isolated with finite energy. Moreover, it
lacks the spherical symmetry which is manifest in the Cho-Maison
monopole \cite{plb97}.
The existence of the electroweak monopole makes the experimental
confirmation of the monopole an urgent issue. Until recently the
experimental search for monopoles has focused on the Dirac
monopole \cite{cab}. But the electroweak unification of Maxwell's
theory requires a modification of the Dirac monopole, and this
modification changes the Dirac monopole to the Cho-Maison monopole.
This means that the monopole which should exist in the real world is
not likely to be the Dirac monopole but the electroweak monopole.
To detect the electroweak monopole experimentally, it is important
to estimate the mass of the monopole theoretically. Unfortunately the
Cho-Maison monopole carries an infinite energy at the classical level,
so that the monopole mass is not determined. This is because it can
be viewed as a hybrid between the Dirac monopole and the 't Hooft-Polyakov
monopole, so that it has a $U(1)_{\rm em}$ point singularity at the center
even though the $SU(2)$ part is completely regular.
{\em A priori} there is nothing wrong with this. Classically
the electron has an infinite electric energy but a finite mass.
But for the experimental search for the monopole we need a
solid idea about the monopole mass. In this paper we show
how to predict the mass of the electroweak monopole. Based
on the dimensional argument we first show that the monopole
mass should be of the order of $1/\alpha$ times the W-boson
mass, or around 10 TeV. To back this up we adopt the scaling
argument to predict the mass to be around 4 TeV. Finally,
we show how the quantum correction could regularize
the point singularity of the Cho-Maison dyon, and construct
finite energy electroweak dyon solutions introducing the effective
action of the standard model. Our result suggests that the
electroweak monopole with the mass around 4 to 7 TeV could
exist, which implies that there is a very good chance that the
MoEDAL at the present LHC can detect the electroweak
monopole.
The paper is organized as follows. In Section II we review the
Cho-Maison dyon for later use. In Section III we provide
two arguments, the dimensional and scaling arguments, which
indicate that the mass of the electroweak monopole could be
around 4 to 10 TeV. In Section IV we discuss the Abelian
decomposition and gauge independent Abelianization of
the Weinberg-Salam model and the Georgi-Glashow model to help
us regularize the Cho-Maison monopole. In Section
V we discuss two different methods to regularize the Cho-Maison
dyon with the quantum correction which modifies the coupling
constants at short distance, and construct finite energy dyon
solutions which support the scaling argument. In Section VI
we discuss another way to make the Cho-Maison dyon regular,
by enlarging the gauge group $SU(2)\times U(1)_Y$ to
$SU(2)\times SU(2)_Y$. Finally in Section VII we discuss
the physical implications of our results.
\section{Cho-Maison Dyon in Weinberg-Salam Model: A Review}
Before we construct a finite energy dyon solution in the electroweak
theory we must understand how one can obtain the infinite energy
Cho-Maison dyon solution first. Let us start with the Lagrangian
which describes (the bosonic sector of) the Weinberg-Salam theory
\begin{gather}
{\cal L} = -|{\cal D}_{\mu} \phi|^2
-\displaystyle\frac{\lambda}{2}\big(\phi^\dagger \phi
-\displaystyle\frac{\mu^2}{\lambda} \big)^2-\displaystyle\frac{1}{4} \vec F_{\mu\nu}^2
-\displaystyle\frac{1}{4} G_{\mu\nu}^2, \nonumber\\
{\cal D}_{\mu} \phi=\big(\partial_{\mu}
-i\displaystyle\frac{g}{2} \vec \tau \cdot \vec A_{\mu}
- i\displaystyle\frac{g'}{2}B_{\mu}\big) \phi \nonumber\\
= \big(D_\mu - i\displaystyle\frac{g'}{2}B_{\mu}\big) \phi,
\label{wslag1}
\end{gather}
where $\phi$ is the Higgs doublet, $\vec F_{\mu\nu}$ and $G_{\mu\nu}$
are the gauge field strengths of $SU(2)$ and $U(1)_Y$ with the
potentials $\vec A_{\mu}$ and $B_{\mu}$, and $g$ and $g'$
are the corresponding coupling constants. Notice that $D_{\mu}$
describes the covariant derivative of the $SU(2)$ subgroup only.
With
\begin{eqnarray}
\phi = \dfrac{1}{\sqrt{2}}\rho~\xi,~~~(\xi^{\dagger} \xi = 1),
\end{eqnarray}
where $\rho$ and $\xi$ are the Higgs field and unit doublet,
we have
\begin{gather}
{\cal L}=-\displaystyle\frac{1}{2}{(\partial_{\mu}\rho)}^2
- \displaystyle\frac{\rho^2}{2} {|{\cal D}_{\mu} \xi |}^2
-\displaystyle\frac{\lambda}{8}\big(\rho^2-\displaystyle\frac{2\mu^2}{\lambda}\big)^2 \nonumber\\
-\displaystyle\frac{1}{4}{\vec F}_{\mu\nu}^2 -\displaystyle\frac{1}{4} G_{\mu\nu}^2.
\end{gather}
Notice that the $U(1)_Y$ coupling of $\xi$ makes
the theory a gauge theory of $CP^1$ field \cite{plb97}.
From (\ref{wslag1}) one has the following equations of motion
\begin{gather}
\partial^2\rho= |{\cal D}_\mu \xi |^2 \rho
+\displaystyle\frac{\lambda}{2}\big (\rho^2
- \displaystyle\frac {2\mu^2}{\lambda}\big) \rho, \nonumber\\
{\cal D}^2 \xi=-2 \dfrac{\partial_\mu \rho}{\rho}
{\cal D}_\mu \xi +\big[\xi^\dagger {\cal D}^2\xi
+2\dfrac{\partial_\mu \rho}{\rho}
(\xi^\dagger {\cal D}_\mu \xi)\big] \xi, \nonumber\\
D_{\mu} \vec F_{\mu\nu}=i \displaystyle\frac{g}{2}\rho^2 \big[\xi^{\dagger}
\vec \tau( {\cal D}_{\nu} \xi )
-({\cal D}_{\nu} \xi)^{\dagger} \vec \tau \xi \big], \nonumber\\
\partial_{\mu} G_{\mu\nu}
=i\displaystyle\frac{g'}{2}\rho^2 \big[\xi^{\dagger} ({\cal D}_{\nu} \xi)
- ({\cal D}_{\nu} \xi)^{\dagger} \xi \big].
\end{gather}
Now we choose the following ansatz in the spherical coordinates
$(t,r,\theta,\varphi)$
\begin{gather}
\rho =\rho(r),
~~~\xi=i\left(\begin{array}{cc} \sin (\theta/2)~e^{-i\varphi}\\
- \cos(\theta/2) \end{array} \right), \nonumber\\
\vec A_{\mu}= \displaystyle\frac{1}{g} A(r)\partial_{\mu}t~\hat r
+\displaystyle\frac{1}{g}(f(r)-1)~\hat r \times \partial_{\mu} \hat r, \nonumber\\
B_{\mu} =\displaystyle\frac{1}{g'} B(r) \partial_{\mu}t
-\displaystyle\frac{1}{g'}(1-\cos\theta) \partial_{\mu} \varphi.
\label{ans1}
\end{gather}
Notice that $\xi^{\dagger} \vec \tau ~\xi = -\hat r$. Moreover,
${\vec A}_\mu$ describes the Wu-Yang monopole when $A(r)=f(r)=0$.
So the ansatz is spherically symmetric. Of course, $\xi$ and
$B_{\mu}$ have an apparent string singularity along the negative
$z$-axis, but this singularity is a pure gauge artifact which
can easily be removed by making the $U(1)_Y$ bundle
non-trivial. So the above ansatz is the most general
spherically symmetric ansatz for an electroweak dyon.
Here we emphasize the importance of the non-trivial nature of
$U(1)_Y$ gauge symmetry to make the ansatz spherically symmetric.
Without the extra $U(1)_Y$ the Higgs doublet does not allow a
spherically symmetric ansatz. This is because the spherical
symmetry for the gauge field involves the embedding of the radial
isotropy group $SO(2)$ into the gauge group that requires the
Higgs field to be invariant under the $U(1)$ subgroup of $SU(2)$.
This is possible with a Higgs triplet, but not with a Higgs
doublet \cite{Forg}. In fact, in the absence of the $U(1)_Y$
degrees of freedom, the above ansatz describes the $SU(2)$
sphaleron which is not spherically symmetric \cite{manton}.
To see this, one might try to remove the string in $\xi$ with
the $U(1)$ subgroup of $SU(2)$. But this $U(1)$ will necessarily
change $\hat r$ and thus violate the spherical symmetry. This
means that there is no $SU(2)$ gauge transformation which can
remove the string in $\xi$ and at the same time keep the spherical
symmetry intact. The situation changes with the inclusion of the
$U(1)_Y$ in the standard model, which naturally makes $\xi$
a $CP^1$ field \cite{plb97}. This allows the spherical symmetry
for the Higgs doublet.
To understand the physical content of the ansatz we perform
the following gauge transformation on (\ref{ans1})
\begin{gather}
\xi \rightarrow U \xi = \left(\begin{array}{cc} 0 \\ 1
\end{array} \right), \nonumber\\
U=i\left( \begin{array}{cc}
\cos (\theta/2)& \sin(\theta/2)e^{-i\varphi} \\
-\sin(\theta/2) e^{i\varphi} & \cos(\theta/2)
\end{array} \right),
\label{gauge}
\end{gather}
and find that in this unitary gauge we have
\begin{gather}
\hat r \rightarrow \left( \begin{array}{c} 0\\ 0\\
1 \end{array} \right), \nonumber\\
\vec A_\mu \rightarrow \displaystyle\frac{1}{g} \left( \begin{array}{c}
-f(r)(\sin\varphi\partial_\mu\theta
+\sin\theta\cos\varphi \partial_\mu\varphi) \\
f(r)(\cos\varphi\partial_\mu \theta
-\sin\theta\sin\varphi\partial_\mu\varphi) \\
A(r)\partial_\mu t -(1-\cos\theta)\partial_\mu\varphi
\end{array} \right).
\label{unitary}
\end{gather}
So introducing the electromagnetic and neutral $Z$-boson potentials
$A_\mu^{\rm (em)}$ and $Z_\mu$ with the Weinberg angle $\theta_{\rm w}$
\begin{gather}
\left( \begin{array}{cc} A_\mu^{\rm (em)} \\ Z_{\mu}
\end{array} \right)
= \left(\begin{array}{cc}
\cos\theta_{\rm w} & \sin\theta_{\rm w}\\
-\sin\theta_{\rm w} & \cos\theta_{\rm w}
\end{array} \right)
\left( \begin{array}{cc} B_{\mu} \\ A^3_{\mu}
\end{array} \right) \nonumber\\
= \displaystyle\frac{1}{\sqrt{g^2 + g'^2}} \left(\begin{array}{cc}
g & g' \\ -g' & g \end{array} \right)
\left( \begin{array}{cc} B_{\mu} \\ A^3_{\mu}
\end{array} \right),
\label{wein}
\end{gather}
we can express the ansatz (\ref{ans1}) in terms of the physical
fields
\begin{gather}
W_{\mu} = \displaystyle\frac{1}{\sqrt{2}}(A_\mu^1 + i A_\mu^2)
=\dfrac{i}{g}\displaystyle\frac{f(r)}{\sqrt2}e^{i\varphi}
(\partial_\mu \theta +i \sin\theta \partial_\mu \varphi), \nonumber\\
A_{\mu}^{\rm (em)} = e\big( \displaystyle\frac{1}{g^2}A(r)
+ \displaystyle\frac{1}{g'^2} B(r) \big) \partial_{\mu}t \nonumber\\
-\displaystyle\frac{1}{e}(1-\cos\theta) \partial_{\mu} \varphi, \nonumber \\
Z_{\mu} = \displaystyle\frac{e}{gg'}\big(A(r)-B(r)\big) \partial_{\mu}t,
\label{ans2}
\end{gather}
where $W_\mu$ is the $W$-boson and $e$ is the electric charge
\begin{eqnarray*}
e=\displaystyle\frac{gg'}{\sqrt{g^2+g'^2}}=g\sin\theta_{\rm w}=g'\cos\theta_{\rm w}.
\end{eqnarray*}
This clearly shows that the ansatz is for the electroweak dyon.
The spherically symmetric ansatz reduces the equations of
motion to
\begin{gather}
\ddot{\rho}+\displaystyle\frac{2}{r} \dot{\rho}-\displaystyle\frac{f^2}{2r^2}\rho
=-\displaystyle\frac{1}{4}(A-B)^2\rho +\displaystyle\frac {\lambda}{2}\big(\rho^2
-\displaystyle\frac{2\mu^2}{\lambda}\big)\rho , \nonumber\\
\ddot{f}-\displaystyle\frac{f^2-1}{r^2}f=\big(\displaystyle\frac{g^2}{4}\rho^2
- A^2\big)f, \nonumber\\
\ddot{A}+\displaystyle\frac{2}{r}\dot{A}-\displaystyle\frac{2f^2}{r^2}A
=\displaystyle\frac{g^2}{4}\rho^2(A-B), \nonumber \\
\ddot{B} +\displaystyle\frac{2}{r} \dot{B}
=-\displaystyle\frac{g'^2}{4} \rho^2 (A-B).
\label{cmeq}
\end{gather}
Obviously this has a trivial solution
\begin{eqnarray}
\rho=\rho_0=\sqrt{2\mu^2/\lambda},~~~f=0,
~~~A=B=0,
\end{eqnarray}
which describes the point monopole in Weinberg-Salam model
\begin{eqnarray}
A_\mu^{\rm (em)}=-\displaystyle\frac{1}{e}(1-\cos \theta) \partial_\mu \varphi.
\end{eqnarray}
This monopole has two remarkable features. First, it is the
electroweak generalization of the Dirac monopole, but not
the Dirac monopole itself: it carries the magnetic charge $4\pi/e$,
not $2\pi/e$ \cite{plb97}. Second, this monopole naturally admits a
non-trivial dressing of weak bosons. Indeed, with the non-trivial
dressing, the monopole becomes the Cho-Maison dyon.
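To make the magnetic charge $4\pi/e$ explicit, one can integrate the flux of the field strength $F_{\theta\varphi}=\partial_\theta A^{\rm (em)}_\varphi$ of the point monopole over a sphere. The following minimal numerical sketch is ours, not the authors' code; the value of $e$ is an assumed illustration obtained from $g\simeq 0.65$ and $\sin^2\theta_{\rm w}=0.2312$.

```python
import math

# Illustrative check: the point monopole potential
# A_phi = -(1 - cos(theta))/e has F_{theta,phi} = -sin(theta)/e,
# and its flux through any sphere equals the magnetic charge 4*pi/e.
g, sin2w = 0.65, 0.2312            # assumed illustrative values
e = g*math.sqrt(sin2w)             # e = g*sin(theta_w)

N = 2000
flux = 0.0
for i in range(N):
    theta = (i + 0.5)*math.pi/N              # midpoint rule in theta
    F = -math.sin(theta)/e                   # F_{theta,phi}
    flux += F*(math.pi/N)*(2.0*math.pi)      # integrate over theta and phi

q_m = abs(flux)                    # magnetic charge, numerically 4*pi/e
```

The integral reproduces $q_m=4\pi/e$, twice the Dirac charge $2\pi/e$, independently of the radius of the sphere.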
To see this let us choose the following boundary condition
\begin{eqnarray}
&\rho(0)=0,~~f(0)=1,~~A(0)=0,~~B(0)=b_0, \nonumber\\
&\rho(\infty)=\rho_0,~f(\infty)=0,~A(\infty)=B(\infty)=A_0.
\label{bc0}
\end{eqnarray}
Then we can show that the equation (\ref{cmeq}) admits
a family of solutions labeled by the real parameter $A_0$
lying in the range \cite{plb97,yang}
\begin{eqnarray}
0 \leq A_0 < {\rm min} ~\Big(e\rho_0,~\displaystyle\frac{g}{2}\rho_0\Big).
\label{boundA}
\end{eqnarray}
In this case all four functions
$f(r),~\rho(r),~A(r)$, and $B(r)$ must be positive for $r>0$, and
$A(r)/g^2+B(r)/g'^2$ and $B(r)$ become increasing functions of
$r$. So we have $0 \leq b_0 \leq A_0$. Furthermore, we have
$B(r)\ge A(r)\ge 0$ over the whole range, and $B(r)$ must approach
$A(r)$ with an exponential damping. Notice that, with the
experimental fact $\sin^2\theta_{\rm w}=0.2312$, (\ref{boundA})
can be written as $0 \leq A_0 < e\rho_0$.
With the boundary condition (\ref{bc0}) we can integrate
(\ref{cmeq}). For example, with $A=B=0$, we have the
Cho-Maison monopole. In general, with $A_0\ne0$, we find
the Cho-Maison dyon \cite{plb97}.
Near the origin the dyon solution has the following behavior,
\begin{eqnarray}
&\rho \simeq \alpha_1 r^{\delta_-},
~~~~~f \simeq 1+ \beta_1 r^2, \nonumber \\
&A \simeq a_1 r,~~~~~B \simeq b_0 + b_1 r^{2\delta_+},
\label{origin}
\end{eqnarray}
where $\delta_{\pm} =(\sqrt{3} \pm 1)/2$.
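The exponents $\delta_\pm$ follow from the indicial equation of the $\rho$ equation near the origin: with $f\simeq 1$ and the mass terms subleading, $\ddot\rho+(2/r)\dot\rho-\rho/2r^2=0$ gives $\delta^2+\delta-1/2=0$. A quick numerical confirmation (our sketch, not from the paper):

```python
import numpy as np

# Indicial equation for rho ~ r^delta near the origin (with f ~ 1):
# delta*(delta-1) + 2*delta - 1/2 = 0, i.e. delta^2 + delta - 1/2 = 0.
roots = np.roots([1.0, 1.0, -0.5])
delta = max(roots)                        # the regular (positive) root

delta_minus = (np.sqrt(3.0) - 1.0)/2.0    # the paper's delta_-
```

The positive root coincides with $\delta_-=(\sqrt3-1)/2\simeq0.366$, so the Higgs field vanishes at the origin with a mild fractional power.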
Asymptotically it has the following behavior,
\begin{eqnarray}
&\rho \simeq \rho_0 +\rho_1\dfrac{\exp(-\sqrt{2}\mu r)}{r},
~~~f \simeq f_1 \exp(-\omega r), \nonumber\\
&A \simeq A_0 +\dfrac{A_1}{r},
~~~B \simeq A +B_1 \dfrac{\exp(-\nu r)}{r},
\label{infty}
\end{eqnarray}
where $\omega=\sqrt{(g\rho_0)^2/4 -A_0^2}$,
and $\nu=\sqrt{(g^2 +g'^2)}\rho_0/2$.
The physical meaning of the asymptotic behavior
should be clear. Obviously $\rho$, $f$, and $A-B$ represent
the Higgs boson, $W$-boson, and $Z$-boson, whose masses
are given by $M_H=\sqrt{2}\mu=\sqrt{\lambda}\rho_0$,
$M_W=g\rho_0/2$, and $M_Z=\sqrt{g^2+g'^2}\rho_0/2$.
So (\ref{infty}) tells us that $M_H$, $\sqrt{1-(A_0/M_W)^2}~M_W$,
and $M_Z$ determine the exponential damping of the Higgs boson,
$W$-boson, and $Z$-boson to their vacuum expectation values
asymptotically. Notice that it is $\sqrt{1-(A_0/M_W)^2}~M_W$,
not $M_W$, which determines the exponential damping of the
$W$-boson. This tells us that the electric potential of the dyon
slows down the exponential damping of the $W$-boson, which is
reasonable.
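For the numerical estimates below it is convenient to extract the couplings from the measured masses. This is our illustrative sketch, not the authors' code: $\rho_0\simeq 246$ GeV is the assumed vacuum expectation value, and since $\sin^2\theta_{\rm w}=0.2312$ is the effective value used in the paper, the implied $M_Z$ comes out slightly above the measured one.

```python
import math

# Illustrative extraction of couplings from masses (assumed inputs).
M_W, M_H = 80.4, 125.0       # GeV, as quoted in the paper
sin2w = 0.2312               # effective weak mixing angle used in the paper
rho0 = 246.22                # GeV, Higgs vacuum expectation value (assumed)

g   = 2.0*M_W/rho0                       # from M_W = g*rho0/2
gp  = g*math.sqrt(sin2w/(1.0 - sin2w))   # from tan(theta_w) = g'/g
e   = g*math.sqrt(sin2w)                 # e = g*sin(theta_w)
lam = (M_H/rho0)**2                      # from M_H = sqrt(lambda)*rho0
M_Z = math.sqrt(g**2 + gp**2)*rho0/2.0   # implied Z-boson mass
alpha_inv = 4.0*math.pi/e**2             # 4*pi/e^2, roughly 1/alpha
```

This gives $g\simeq 0.65$, $g'\simeq 0.36$, $\lambda\simeq 0.26$, and $4\pi/e^2\simeq 127$, the combination that controls the monopole mass estimates in Section III.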
The dyon has the following electromagnetic charges
\begin{eqnarray}
&q_e=-\dfrac{8\pi}{e}\sin^2\theta_{\rm w} \displaystyle{\int}_0^\infty f^2 A dr
=\dfrac{4\pi}{e} A_1, \nonumber\\
&q_m = \dfrac{4\pi}{e}.
\label{eq:Charge}
\end{eqnarray}
Also, the asymptotic condition (\ref{infty}) assures
that the dyon does not carry any neutral charge,
\begin{eqnarray}
&Z_e =-\dfrac{4\pi e}{gg'}\big[ r^2 (\dot{A}-\dot{B})\big]
\Big|_{r=\infty} =0,\nonumber\\
&Z_m = 0.
\label{neutral}
\end{eqnarray}
Furthermore, notice that the dyon equation (\ref{cmeq})
is invariant under the reflection
\begin{eqnarray}
A \rightarrow -A,~~~~~~B\rightarrow -B.
\label{ref}
\end{eqnarray}
This means that, for a given magnetic charge,
there are always two dyon solutions
which carry opposite electric charges $\pm q_e$.
Clearly the sign of
the electric charge of the dyon is determined by
the sign of the boundary value $A_0$.
We can also have the anti-monopole or in general anti-dyon
solution, the charge conjugate state of the dyon, which
has the magnetic charge $q_m=-4\pi/e$ with the following
ansatz
\begin{gather}
\rho' =\rho(r),
~~~\xi'=-i\left(\begin{array}{cc} \sin (\theta/2)~e^{+i\varphi}\\
- \cos(\theta/2) \end{array} \right), \nonumber\\
\vec A'_{\mu}= -\displaystyle\frac{1}{g} A(r)\partial_{\mu}t~\hat r'
+\displaystyle\frac{1}{g}(f(r)-1)~\hat r' \times \partial_{\mu} \hat r', \nonumber\\
B'_{\mu} =-\displaystyle\frac{1}{g'} B(r) \partial_{\mu}t
+\displaystyle\frac{1}{g'}(1-\cos\theta) \partial_{\mu} \varphi, \nonumber\\
\hat r'=-\xi'^{\dagger} \vec \tau ~\xi'
=(\sin \theta \cos \varphi,-\sin \theta \sin \varphi,\cos \theta).
\label{antid1}
\end{gather}
Notice that the ansatz is basically the complex conjugation
of the dyon ansatz.
To understand the meaning of the anti-dyon ansatz notice that
in the unitary gauge
\begin{gather}
\xi' \rightarrow U' \xi' = \left(\begin{array}{cc} 0 \\ 1
\end{array} \right),
\nonumber\\
U'=-i\left( \begin{array}{cc}
\cos (\theta/2)& \sin(\theta/2)e^{i\varphi} \\
-\sin(\theta/2) e^{-i\varphi} & \cos(\theta/2)
\end{array} \right),
\end{gather}
we have
\begin{gather}
{\vec A}'_\mu \rightarrow \displaystyle\frac{1}{g} \left( \begin{array}{c}
f(r)(\sin\varphi\partial_\mu\theta
+\sin\theta\cos\varphi \partial_\mu\varphi) \\
f(r)(\cos\varphi\partial_\mu \theta
-\sin\theta\sin\varphi\partial_\mu\varphi) \\
-A(r)\partial_\mu t +(1-\cos\theta)\partial_\mu\varphi
\end{array} \right).
\end{gather}
So in terms of the physical fields the ansatz (\ref{antid1})
is expressed by
\begin{eqnarray}
&W'_{\mu}=\dfrac{i}{g}\dfrac{f(r)}{\sqrt2}e^{-i\varphi}
(\partial_\mu \theta -i \sin\theta \partial_\mu \varphi)
=-W_{\mu}^*, \nonumber\\
&A_{\mu}^{ \rm (em)} = -e\big( \dfrac{1}{g^2}A(r)
+ \dfrac{1}{g'^2} B(r) \big) \partial_{\mu}t \nonumber\\
&+\dfrac{1}{e}(1-\cos\theta) \partial_{\mu} \varphi, \nonumber \\
&Z'_{\mu} = -\dfrac{e}{gg'}\big(A(r)-B(r)\big) \partial_{\mu}t
=-Z_\mu.
\label{antid2}
\end{eqnarray}
This clearly shows that the electric and magnetic charges
of the ansatz (\ref{antid1}) are the opposite of the dyon ansatz,
which confirms that the ansatz indeed describes the anti-dyon.
With the ansatz (\ref{antid1}) we have exactly the same
equation (\ref{cmeq}) for the anti-dyon. This assures
that the standard model has the anti-dyon as well as
the dyon.
The above discussion tells us that the W and Z boson parts of
the anti-dyon solution are basically the complex conjugates
of the dyon solution. This, of course, is natural from
the physical point of view. On the other hand there is
one minor point to be clarified here. Since the topological
charge of the monopole is given by the second homotopy
defined by $\hat r=-\xi^\dagger \vec \tau \xi$, one might
expect that $\hat r'$ defined by the anti-dyon ansatz
$\xi'=\xi^*$ must be $-\hat r$. But this is not so, and
we have to explain why.
To understand this, notice that we can change $\hat r'$
to $-\hat r$ by an $SU(2)$ gauge transformation, the
$\pi$-rotation around the $y$-axis. With this gauge
transformation the ansatz (\ref{antid1}) changes to
\begin{gather}
\xi' \rightarrow i\left(\begin{array}{cc} \cos(\theta/2) \\
\sin (\theta/2)~e^{+i\varphi} \end{array} \right),
~~~\hat r' \rightarrow -\hat r, \nonumber\\
\vec A_{\mu} \rightarrow -\displaystyle\frac{1}{g} A(r)\partial_{\mu}t~\hat r
+\displaystyle\frac{1}{g}(f(r)-1)~\hat r \times \partial_{\mu} \hat r.
\label{antid3}
\end{gather}
This tells us that the monopole topology defined by $\hat r'$ is
the same as that of $\hat r$.
Since the Cho-Maison solution is obtained numerically one
might like to have a mathematically rigorous existence proof
of the Cho-Maison dyon. The existence proof is non-trivial,
because the equation of motion (\ref{cmeq}) is not the
Euler-Lagrange equation of the positive definite energy
(\ref{cme}), but that of the indefinite action
\begin{gather}
{\cal L}=-4\pi \int\limits_0^\infty dr
\bigg\{\displaystyle\frac{1}{2}(r\dot\rho)^2
+\displaystyle\frac{\lambda r^2}{8}\big(\rho^2-\rho_0^2\big)^2 \nonumber\\
+\displaystyle\frac{1}{4} f^2\rho^2+ \frac1{g^2} \big(\dot f^2
-\displaystyle\frac{1}{2}(r\dot A)^2- f^2 A^2 \big)-\displaystyle\frac{1}{2g'^2}(r\dot B)^2 \nonumber\\
-\displaystyle\frac{r^2}{8} (B-A)^2 \rho^2 +\displaystyle\frac{1}{2 r^2}\big(\displaystyle\frac{1}{g'^2}
+\frac1{g^2}(f^2-1)^2\big) \bigg\}.
\label{cmlag}
\end{gather}
Fortunately the existence proof has been established by
Yang \cite{yang}.
Before we leave this section it is worthwhile to address the
important question again: Does the standard model predict
the monopole? Notice that the Dirac monopole in electrodynamics
is optional: it can exist only when the $U(1)_{\rm em}$ is
non-trivial, but there is no reason why this has to be so.
If so, why can't the electroweak monopole be optional?
As we have pointed out, the non-trivial $U(1)_Y$ is crucial
for the existence of the monopole in the standard model.
So the question here is why the $U(1)_Y$ must be non-trivial.
To see why, notice that in the standard model $U(1)_{\rm em}$
comes from two $U(1)$, the $U(1)$ subgroup of $SU(2)$ and
$U(1)_Y$, and it is well known that the $U(1)$ subgroup of $SU(2)$
is non-trivial. Now, to obtain the electroweak monopole we have
to make the linear combination of two monopoles, that of the
$U(1)$ subgroup of $SU(2)$ and $U(1)_Y$. This must be clear
from (\ref{wein}).
In this case the mathematical consistency requires the two
potentials $A_\mu^3$ and $B_\mu$ (and two $U(1)$) to have
the same structure, in particular the same topology. But we
already know that $A_\mu^3$ is non-trivial. So $B_\mu$,
and the corresponding $U(1)_Y$, has to be non-trivial. In
other words, requiring $U(1)_Y$ to be trivial is inconsistent
(i.e., in contradiction with the self-consistency) in the
standard model. This tells us that, unlike in Maxwell's theory,
the $U(1)_{\rm em}$ in the standard model must be non-trivial.
This assures that the standard model must have the monopole.
But ultimately this question has to be answered by the experiment.
So the discovery of the monopole must be the topological test
of the standard model, which has never been done before. This
is why MoEDAL is so important.
\section{Mass of the Electroweak Monopole}
To detect the electroweak monopole experimentally, we have to
have a firm idea of the mass of the monopole. Unfortunately,
at the classical level we cannot estimate the mass of the
Cho-Maison monopole, because it has a point singularity at the
center which makes the total energy infinite.
Indeed the ansatz (\ref{ans1}) gives the following energy
\begin{gather}
E=E_0 +E_1, \nonumber \\
E_0=4\pi\int_0^\infty \displaystyle\frac{dr}{2 r^2}
\bigg\{\displaystyle\frac{1}{g'^2}+ \frac1{g^2}(f^2-1)^2\bigg\}, \nonumber\\
E_1=4\pi \int_0^\infty dr \bigg\{\frac12 (r\dot\rho)^2
+\frac1{g^2} \big(\dot f^2 +\displaystyle\frac{1}{2}(r\dot A)^2 \nonumber\\
+ f^2 A^2 \big)+\displaystyle\frac{1}{2g'^2}(r\dot B)^2
+\displaystyle\frac{\lambda r^2}{8}\big(\rho^2-\rho_0^2 \big)^2 \nonumber\\
+\frac14 f^2\rho^2
+\displaystyle\frac{r^2}{8} (B-A)^2 \rho^2 \bigg\}.
\label{cme}
\end{gather}
The boundary condition (\ref{bc0}) guarantees that
$E_1$ is finite. As for $E_0$ we can minimize it with
the boundary condition $f(0)=1$, but even with this
$E_0$ becomes infinite. Of course the origin of this
infinite energy is obvious: it is precisely the
magnetic singularity of $B_\mu$ at the origin.
This means that one cannot predict the mass of the dyon.
Physically it remains arbitrary.
To estimate the monopole mass theoretically, we have to
regularize the point singularity of the Cho-Maison dyon. One
might try to do that by introducing the gravitational interaction,
in which case the mass is fixed by the asymptotic behavior of
the gravitational potential. But the magnetic charge of the
monopole is not likely to change the character of the singularity,
so that asymptotically the leading order of the gravitational
potential becomes of the Reissner-Nordstr{\"o}m type \cite{bais}.
This implies that the gravitational interaction may not help us
estimate the monopole mass.
To make the energy of the Cho-Maison monopole finite,
notice that the origin of the infinite energy is the first
term $1/g'^2$ in $E_0$ in (\ref{cme}). A simple way to make
this term finite is to introduce a UV-cutoff which removes
this divergence. This type of cutoff could naturally come
from the quantum correction of the coupling constants. In
fact, since the quantum correction changes $g'$ to the
running coupling $\bar g'$, $E_0$ can become finite
if $\bar g'$ diverges at short distance.
We will discuss how such quantum correction could take place
later, but before doing that we present two arguments, the
dimensional argument and the scaling argument, which
could give us a rough estimate of the monopole mass.
\subsection{Dimensional argument}
To obtain an order estimate of the monopole mass it is important
to realize that, roughly speaking, the monopole mass comes from
the Higgs mechanism which generates the mass of the W-boson.
This can easily be seen in the 't Hooft-Polyakov monopole in
the Georgi-Glashow model
\begin{eqnarray}
&{\cal L}_{GG} =-\dfrac{1}{4} \vec{F}_{\mu\nu}^2
-\dfrac{1}{2}(D_\mu \vec{\Phi} )^2-\dfrac{\lambda}{4}
\big(\vec \Phi^2-\dfrac{\mu^2}{\lambda}\big)^2,
\label{ggl}
\end{eqnarray}
where $\vec \Phi$ is the Higgs triplet. Here the monopole ansatz
is given by
\begin{eqnarray}
&\vec \Phi=\rho~\hat r,~~~{\vec A}_\mu= \vec C_\mu+ {\vec W}_\mu, \nonumber\\
&\vec C_\mu= -\dfrac{1}{g} \hat r \times \partial_\mu \hat r,
~~~{\vec W}_\mu=-f \vec C_\mu,
\label{wu}
\end{eqnarray}
where $\vec C_\mu$ represents the Wu-Yang monopole
potential \cite{wu,prd80}. Notice that the W-boson
part of the monopole is given by the Wu-Yang potential,
except for the overall amplitude $f$.
With this we clearly have
\begin{eqnarray}
|D_\mu \Phi|^2=(\partial_\mu \rho)^2
+ g^2 \rho^2 f^2 (\vec C_\mu)^2.
\end{eqnarray}
So, when the Higgs field has a non-vanishing vacuum expectation
value, $\vec C_\mu$ acquires a mass (with $f\simeq 1$). This,
of course, is the Higgs mechanism which generates the W-boson
mass. The only difference is that here the W-boson is expressed
by the Wu-Yang potential and the Higgs coupling becomes magnetic
($\vec C_\mu$ contains the extra factor $1/g$).
A similar mechanism works for the Weinberg-Salam model. Here again
${\vec A}_\mu$ (with $A=B=0$) of the ansatz (\ref{ans1}) is identical to
(\ref{wu}), and we have
\begin{gather}
D_\mu \xi=i\big(g f~\vec C_\mu+(1-\cos \theta) \partial_\mu \varphi~{\hat r} \big)
\cdot \displaystyle\frac{\vec \tau}{2}~\xi, \nonumber\\
|{\cal D}_\mu \xi|^2=|D_\mu \xi|^2- |\xi^\dagger D_\mu \xi|^2 \nonumber\\
-(\xi^\dagger D_\mu\xi-i \displaystyle\frac{g'}{2} B_\mu)^2
=\frac14 g^2 f^2 (\vec C_\mu)^2, \nonumber\\
|{\cal D}_\mu \phi|^2=\frac12 (\partial_\mu \rho)^2
+\frac12 \rho^2 |{\cal D}_\mu \xi|^2 \nonumber\\
=\frac12 (\partial_\mu \rho)^2+\frac18 g^2 \rho^2 f^2 (\vec C_\mu)^2.
\end{gather}
This (with $f\simeq 1$) tells us that the electroweak monopole
acquires its mass through the same Higgs mechanism which generates
the W-boson mass.
Once this is understood, we can use the dimensional argument
to predict the monopole energy. Since the monopole mass term
in the Lagrangian contributes to the monopole energy in the
classical solution we may expect
\begin{eqnarray}
E \simeq C \times \dfrac{4\pi}{e^2} M_W,
~~~C\simeq 1.
\label{mmass}
\end{eqnarray}
This implies that the monopole mass should be about $1/\alpha$
times the electroweak scale, around 10 TeV. But this
is only an order estimate. We still have to estimate $C$.
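As a quick numerical illustration of (\ref{mmass}) with $C=1$ (our sketch; the choice of the low-energy fine structure constant is an assumption, and with the running coupling $\alpha(M_Z)\simeq 1/128$ the estimate drops to roughly 10.3 TeV):

```python
import math

# Order-of-magnitude estimate E ~ (4*pi/e^2)*M_W = M_W/alpha with C = 1.
M_W = 80.4e-3                # W-boson mass in TeV
alpha = 1.0/137.036          # low-energy fine structure constant (assumed choice)
E = M_W/alpha                # monopole mass scale in TeV
```

This gives $E\simeq 11$ TeV, consistent with the quoted ``around 10 TeV''.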
\subsection{Scaling argument}
We can use Derrick's scaling argument to estimate the
constant $C$ in (\ref{mmass}), assuming the existence of
a finite energy monopole solution. If a finite energy monopole
does exist, the action principle tells us that it should be stable
under the rescaling of its field configuration. So consider such
a monopole configuration and let
\begin{gather}
K_A = \displaystyle{\int} d^3x ~\dfrac{1}{4} \vec{F}_{ij}^2,
\quad K_B=\displaystyle{\int} d^3x ~\dfrac{1}{4} B_{ij}^2 \nonumber\\
K_\phi=\displaystyle{\int} d^3 x ~|{\cal D}_i \phi|^2, \nonumber\\
V_\phi=\displaystyle{\int} d^3x ~\dfrac{\lambda}{2}\big( |\phi|^2
-\dfrac{\mu^2}{\lambda} \big)^2.
\end{gather}
With the ansatz (\ref{ans1}) we have (with $A=B=0$)
\begin{gather}
K_A= \displaystyle\frac{4\pi}{g^2} \int_0^\infty
\Big\{\dot{f}^2 + \displaystyle\frac{(f^2-1)^2}{2r^2} \Big\} dr, \nonumber\\
K_B=\displaystyle\frac{2\pi}{g'^2}\int\limits_0^\infty \displaystyle\frac{1}{r^2}dr,
~~~K_\phi= 2\pi \int_0^\infty (r\dot{\rho})^2 dr, \nonumber \\
V_\phi=\displaystyle\frac{\pi}{2} \int_0^\infty \lambda r^2
\big(\rho^2 -\rho_0^2 \big)^2 dr.
\end{gather}
Notice that $K_B$ makes the monopole energy infinite.
Now, consider the spatial scale transformation
\begin{eqnarray}
\vec x \longrightarrow \lambda \vec x.
\label{scale}
\end{eqnarray}
Under this we have
\begin{eqnarray}
&{\vec A}_k(\vec x) \rightarrow \lambda {\vec A}_k(\lambda \vec x),
~~~B_k(\vec x)\rightarrow \lambda B_k(\lambda \vec x), \nonumber\\
&\phi (\vec x) \rightarrow \phi (\lambda \vec x),
\end{eqnarray}
so that
\begin{eqnarray}
&K_A \longrightarrow \lambda K_A,
~~~K_B \longrightarrow \lambda K_B, \nonumber\\
&K_\phi \longrightarrow \lambda^{-1} K_\phi,
~~~V_\phi \longrightarrow \lambda^{-3} V_\phi.
\end{eqnarray}
With this we have the following requirement for the stable
monopole configuration
\begin{eqnarray}
K_A+K_B=K_\phi+3V_\phi.
\label{derrick}
\end{eqnarray}
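Explicitly, under the rescaling (\ref{scale}) the energy of the
trial configuration becomes
\begin{eqnarray}
E(\lambda)=\lambda (K_A+K_B)+\lambda^{-1} K_\phi
+\lambda^{-3} V_\phi, \nonumber
\end{eqnarray}
so the stationarity condition $dE/d\lambda \big|_{\lambda=1}
=(K_A+K_B)-K_\phi-3V_\phi=0$ for a stable configuration
reproduces (\ref{derrick}).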
From this we can estimate the finite value of $K_B$.
Now, for the Cho-Maison monopole we have (with
$M_W \simeq 80.4~{\rm GeV}$, $M_H \simeq 125~{\rm GeV}$,
and $\sin^2\theta_{\rm w}=0.2312$)
\begin{eqnarray}
&K_A \simeq 0.1904 \times\dfrac{4\pi}{e^2}{M_W},
~~~K_\phi \simeq 0.1577 \times\dfrac{4\pi}{e^2}{M_W}, \nonumber \\
&V_\phi \simeq 0.0111 \times\dfrac{4\pi}{e^2}{M_W}.
\end{eqnarray}
This, with (\ref{derrick}), tells us that
\begin{eqnarray}
K_B \simeq 0.0006 \times \dfrac{4\pi}{e^2} M_W.
\end{eqnarray}
From this we estimate the energy of the monopole to be
\begin{eqnarray}
E \simeq 0.3598 \times \dfrac{4\pi}{e^2} M_W \simeq 3.96~{\rm TeV}.
\end{eqnarray}
This strongly endorses the dimensional argument. In particular, it
tells us that an electroweak monopole with a mass of
a few TeV could be possible.
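The arithmetic behind this estimate can be reproduced directly from (\ref{derrick}) and the quoted integrals. This is our cross-check, not the authors' code; the conversion to TeV assumes $4\pi/e^2\simeq 1/\alpha$ with the low-energy fine structure constant.

```python
import math

# Cross-check of the scaling estimate, in units of (4*pi/e^2)*M_W.
K_A, K_phi, V_phi = 0.1904, 0.1577, 0.0111   # values quoted in the text
K_B = K_phi + 3.0*V_phi - K_A                # Derrick: K_A + K_B = K_phi + 3*V_phi
E = K_A + K_B + K_phi + V_phi                # total energy in these units

alpha = 1.0/137.036                          # low-energy value (assumed)
M_W = 80.4                                   # GeV
E_TeV = E*M_W/alpha/1000.0                   # convert to TeV
```

The Derrick relation yields $K_B\simeq 0.0006$ and $E\simeq 0.3598$ in units of $(4\pi/e^2)M_W$, i.e. about 3.96 TeV, matching the quoted numbers.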
The important question now is to show how the quantum correction
could actually make the energy of the Cho-Maison monopole finite.
To do that we have to understand the structure of the electroweak
theory, in particular the Abelian decomposition of the electroweak
theory. So we review the gauge independent Abelian decomposition
of the standard model first.
\section{Abelian Decomposition of the Electroweak Theory}
Consider the Yang-Mills theory
\begin{eqnarray}
{\cal L}_{YM} =-\dfrac{1}{4} \vec F_{\mu\nu}^2.
\end{eqnarray}
The best way to make the Abelian decomposition is to introduce
a unit $SU(2)$ triplet $\hat n$ which selects the Abelian
direction at each space-time point, and impose the isometry
on the gauge potential which determines the restricted potential
$\hat A_\mu$ \cite{prd80,prl81}
\begin{eqnarray}
&D_\mu {\hat n}=0, \nonumber\\
&\vec A_\mu\rightarrow \hat A_\mu
=A_\mu {\hat n} -\dfrac{1}{g} {\hat n}\times\partial_\mu{\hat n}
=A_\mu {\hat n}+\vec C_\mu, \nonumber\\
&A_\mu={\hat n} \cdot {\vec A}_\mu,
~~~\vec C_\mu=-\dfrac{1}{g} {\hat n}\times\partial_\mu{\hat n} .
\label{chocon}
\end{eqnarray}
Notice that the restricted potential is precisely the connection
which leaves ${\hat n}$ invariant under parallel transport. The restricted
potential is called the Cho connection or the Cho-Duan-Ge (CDG)
connection \cite{fadd,shab,zucc}.
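The defining isometry $D_\mu \hat n = 0$ of (\ref{chocon}) can be verified explicitly; the sketch below (sympy) takes $\hat n=\hat r$ and an arbitrary Abelian component $A$:

```python
# Symbolic check that the restricted potential leaves n-hat invariant:
# D_mu n = d_mu n + g (A n + C_mu) x n = 0, with
# C_mu = -(1/g) n x d_mu n and n = r-hat(theta, phi).
import sympy as sp

th, ph, g, A = sp.symbols('theta phi g A', positive=True)
n = sp.Matrix([sp.sin(th)*sp.cos(ph), sp.sin(th)*sp.sin(ph), sp.cos(th)])

for x in (th, ph):                 # angular derivatives of n-hat
    dn = n.diff(x)
    C = -n.cross(dn)/g             # vec C_mu
    A_hat = A*n + C                # restricted potential
    D_n = (dn + g*A_hat.cross(n)).applyfunc(sp.simplify)
    assert D_n == sp.zeros(3, 1)
```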
With this we obtain the gauge independent Abelian decomposition
of the $SU(2)$ gauge potential adding the valence potential
$\vec W_\mu$ which was excluded by the isometry \cite{prd80,prl81}
\begin{gather}
\vec{A}_\mu = \hat A_\mu + {\vec W}_\mu,
~~~(\hat{n}\cdot\vec{W}_\mu=0).
\label{chodecom}
\end{gather}
The Abelian decomposition has recently been referred
to as the Cho (also Cho-Duan-Ge or Cho-Faddeev-Niemi)
decomposition \cite{fadd,shab,zucc}.
Under the infinitesimal gauge transformation
\begin{eqnarray}
\delta {\hat n} = - \vec \alpha \times {\hat n},
~~~~\delta {\vec A}_\mu = \displaystyle\frac{1}{g} D_\mu \vec \alpha,
\end{eqnarray}
we have
\begin{gather}
\delta A_\mu = \displaystyle\frac{1}{g} {\hat n} \cdot \partial_\mu {\vec \alpha},
\quad \delta \hat A_\mu = \displaystyle\frac{1}{g} {\hat D}_\mu {\vec \alpha}, \nonumber\\
\delta {\vec W}_\mu = -{\vec \alpha} \times {\vec W}_\mu.
\label{gt1}
\end{gather}
This tells us that $\hat A_\mu$ by itself describes an $SU(2)$
connection which enjoys the full $SU(2)$ gauge degrees of freedom.
Furthermore the valence potential $\vec W_\mu$ forms a gauge
covariant vector field under the gauge transformation. But
what is really remarkable is that the decomposition is gauge
independent. Once $\hat n$ is chosen, the decomposition follows
automatically, regardless of the choice of gauge.
Notice that $\hat{A}_\mu$ has a dual structure,
\begin{gather}
\hat{F}_{\mu\nu}
= \partial_\mu \hat A_\nu-\partial_\nu \hat A_\mu
+ g \hat A_\mu \times \hat A_\nu
= (F_{\mu\nu}+ H_{\mu\nu})\hat{n}, \nonumber \\
F_{\mu\nu} =\partial_\mu A_\nu-\partial_\nu A_\mu, \nonumber\\
H_{\mu\nu} = -\displaystyle\frac{1}{g} \hat{n}\cdot(\partial_\mu
\hat{n}\times\partial_\nu\hat{n}).
\end{gather}
Moreover, $H_{\mu\nu}$ always admits a potential because
it satisfies the Bianchi identity. In fact, replacing ${\hat n}$
with a $CP^1$ field $\xi$ (with ${\hat n}=-\xi^{\dagger} \vec \tau ~\xi$)
we have
\begin{eqnarray}
&H_{\mu\nu} = \partial_\mu \tilde C_\nu-\partial_\nu \tilde C_\mu
= \dfrac{2i}{g} (\partial_\mu \xi^{\dagger}
\partial_\nu \xi - \partial_\nu \xi^{\dagger} \partial_\mu \xi), \nonumber\\
&\tilde C_\mu = \dfrac{2i}{g} \xi^{\dagger} \partial_\mu \xi
=\dfrac{i}{g} \big(\xi^{\dagger} \partial_\mu \xi
- \partial_\mu \xi^{\dagger} \xi \big).
\end{eqnarray}
Of course $\tilde C_\mu$ is determined uniquely up to the $U(1)$
gauge freedom which leaves ${\hat n}$ invariant. To understand the
meaning of $\tilde C_\mu$, notice that with $\hat n=\hat r$ we
have
\begin{eqnarray}
\tilde C_\mu = \displaystyle\frac{1}{g} (1-\cos\theta) \partial_\mu \varphi.
\end{eqnarray}
This is nothing but the Abelian monopole potential, and the
corresponding non-Abelian monopole potential is given by
the Wu-Yang monopole potential $\vec C_\mu$ \cite{wu,prl80}.
This justifies calling $A_\mu$ and $\tilde C_\mu$ the electric
and magnetic potentials.
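Indeed, the field strength of $\tilde C_\mu$ carries the unit monopole flux $4\pi/g$; a quick symbolic check:

```python
# The Abelian monopole potential tilde-C has only a phi-component,
# (1 - cos theta)/g; its field strength H_{theta phi} = sin(theta)/g
# integrates to the unit monopole flux 4 pi / g over the sphere.
import sympy as sp

th, ph, g = sp.symbols('theta phi g', positive=True)
C_phi = (1 - sp.cos(th))/g
H_th_ph = sp.diff(C_phi, th)
flux = sp.integrate(H_th_ph, (th, 0, sp.pi), (ph, 0, 2*sp.pi))
assert sp.simplify(flux - 4*sp.pi/g) == 0
```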
The above analysis tells that $\hat{A}_\mu$ retains all essential
topological characteristics of the original non-Abelian potential.
First, $\hat{n}$ defines $\pi_2(S^2)$ which describes the non-Abelian
monopoles. Second, it characterizes the Hopf invariant
$\pi_3(S^2)\simeq\pi_3(S^3)$ which describes the topologically
distinct vacua \cite{bpst,plb79}. Moreover, it provides the gauge
independent separation of the monopole field from the generic
non-Abelian gauge potential.
With the decomposition (\ref{chodecom}), we have
\begin{eqnarray}
\vec{F}_{\mu\nu}
=\hat F_{\mu \nu} + {\hat D} _\mu {\vec W}_\nu - {\hat D}_\nu
{\vec W}_\mu + g{\vec W}_\mu \times {\vec W}_\nu,
\end{eqnarray}
so that the Yang-Mills Lagrangian is expressed as
\begin{gather}
{\cal L}_{YM} =-\displaystyle\frac{1}{4} {\hat F}_{\mu\nu}^2
-\displaystyle\frac{1}{4}({\hat D}_\mu{\vec W}_\nu-{\hat D}_\nu{\vec W}_\mu)^2 \nonumber\\
-\displaystyle\frac{g}{2}{\hat F}_{\mu\nu} \cdot ({\vec W}_\mu \times {\vec W}_\nu)
-\displaystyle\frac{g^2}{4} ({\vec W}_\mu \times {\vec W}_\nu)^2.
\end{gather}
This shows that the Yang-Mills theory can be viewed as
a restricted gauge theory made of the restricted potential,
which has the valence gluons as its source \cite{prd80,prl81}.
An important advantage of the decomposition (\ref{chodecom}) is
that it can actually Abelianize (or more precisely ``dualize'')
the non-Abelian gauge theory gauge independently \cite{prd80,prl81}.
To see this, let $({\hat n}_1,~{\hat n}_2,~{\hat n})$ be a right-handed orthonormal
basis of $SU(2)$ space and let
\begin{gather}
\vec{W}_\mu =W^1_\mu ~\hat{n}_1 + W^2_\mu ~\hat{n}_2,
\nonumber\\
(W^1_\mu = \hat {n}_1 \cdot \vec W_\mu,~~~W^2_\mu =
\hat {n}_2 \cdot \vec W_\mu).
\nonumber
\end{gather}
With this we have
\begin{eqnarray}
&\hat{D}_\mu \vec{W}_\nu =\Big[\partial_\mu W^1_\nu
-g(A_\mu+ \tilde C_\mu)W^2_\nu \Big]\hat n_1 \nonumber\\
&+\Big[\partial_\mu W^2_\nu
+ g (A_\mu+ \tilde C_\mu)W^1_\nu \Big]\hat{n}_2,
\end{eqnarray}
so that with
\begin{eqnarray}
{\cal A}_\mu = A_\mu+ \tilde C_\mu,
\quad
W_\mu = \displaystyle\frac{1}{\sqrt{2}} ( W^1_\mu + i W^2_\mu ), \nonumber
\end{eqnarray}
we can express the Lagrangian explicitly in terms of the dual
potential ${\cal A}_\mu$ and the complex vector field $W_\mu$,
\begin{eqnarray}
\label{eq:Abelian}
{\cal L}_{YM} = -\displaystyle\frac{1}{4} {\cal F}_{\mu\nu}^2
-\displaystyle\frac{1}{2}|\hat{D}_\mu{W}_\nu-\hat{D}_\nu{W}_\mu|^2 \nonumber\\
+ ig {\cal F}_{\mu\nu} W_\mu^*W_\nu
+ \displaystyle\frac{g^2}{4}(W_\mu^* W_\nu - W_\nu^* W_\mu)^2,
\end{eqnarray}
where ${\cal F}_{\mu\nu} = F_{\mu\nu} + H_{\mu\nu}$ and
$\hat{D}_\mu=\partial_\mu + ig {\cal A}_\mu$. This shows
that we can indeed Abelianize the non-Abelian theory
with our decomposition.
Notice that in the Abelian formalism the Abelian potential
${\cal A}_\mu$ has the extra magnetic potential $\tilde C_\mu$.
In other words, it is given by the sum of the electric and magnetic
potentials $A_\mu+\tilde C_\mu$. Clearly $\tilde C_\mu$
represents the topological degrees of freedom of the non-Abelian
symmetry which do not show up in the naive Abelianization that
one obtains by fixing the gauge \cite{prd80,prl81}.
Furthermore, this Abelianization is gauge independent,
because here we have never fixed the gauge to obtain
this Abelian formalism. So one might ask how the non-Abelian
gauge symmetry is realized in this Abelian formalism. To
discuss this let
\begin{gather}
\vec \alpha = \alpha_1~{\hat n}_1 + \alpha_2~{\hat n}_2 + \theta~\hat n,
\quad
\alpha = \displaystyle\frac{1}{\sqrt 2} (\alpha_1 + i ~\alpha_2),
\nonumber\\
\vec C_\mu = - \displaystyle\frac {1}{g} {\hat n} \times \partial_\mu {\hat n}
= - C^1_\mu {\hat n}_1 - C^2_\mu {\hat n}_2,
\nonumber\\
C_\mu = \displaystyle\frac{1}{\sqrt 2} (C^1_\mu + i ~ C^2_\mu).
\end{gather}
Certainly the Lagrangian (\ref{eq:Abelian}) is invariant under
the active (classical) gauge transformation (\ref{gt1}) described
by
\begin{gather}
\delta A_\mu = \displaystyle\frac{1}{g} \partial_\mu \theta
- i (C_\mu^* \alpha - C_\mu \alpha^*),
\nonumber\\
\delta \tilde C_\mu = - \delta A_\mu,
\quad
\delta W_\mu = 0.
\label{eq:active}
\end{gather}
But it has another gauge invariance, the invariance under the
following passive (quantum) gauge transformation
\begin{gather}
\delta A_\mu = \displaystyle\frac{1}{g} \partial_\mu \theta
-i (W_\mu^* \alpha - W_\mu \alpha^*),
\nonumber\\
\delta \tilde C_\mu = 0,
\quad
\delta W_\mu = \displaystyle\frac{1}{g} {\hat D}_\mu \alpha - i \theta W_\mu.
\label{eq:passive}
\end{gather}
Clearly this passive gauge transformation assures the desired
non-Abelian gauge symmetry for the Abelian formalism.
This tells us that the Abelian theory not only retains
the original gauge symmetry but actually has an enlarged
gauge symmetry, with both active and passive transformations.
The reason for this extra (quantum) gauge symmetry is that
the Abelian decomposition automatically puts the theory in
the background field formalism which doubles the gauge
symmetry \cite{dewitt}. This is because in this decomposition
we can view the restricted and valence potentials as the
classical and quantum potentials, so that we have freedom to
assign the gauge symmetry either to the classical field
or to the quantum field. This is why we have the extra gauge
symmetry.
The Abelian decomposition has played a crucial role in QCD
to demonstrate the Abelian dominance and the monopole condensation
in color confinement \cite{prd00,prd13,kondo}. This is because
it separates not only the Abelian potential but also the monopole
potential gauge independently.
Now, consider the Georgi-Glashow model (\ref{ggl}). With
\begin{eqnarray}
\vec{\Phi} = \rho~\hat{n},
~~~{\vec A}_\mu=\hat A_\mu +\vec W_\mu,
\end{eqnarray}
we have the Abelian decomposition,
\begin{eqnarray}
&{\cal L}_{GG} =-\dfrac{1}{2} (\partial_\mu \rho)^2
-\dfrac{g^2}{2} {\rho}^2 ({\vec W}_\mu)^2-\dfrac{\lambda}{4}
\big(\rho^2 -\dfrac{\mu^2}{\lambda}\big)^2 \nonumber\\
&-\dfrac{1}{4} {\hat F}_{\mu\nu}^2
-\dfrac{1}{4}({\hat D}_\mu{\vec W}_\nu-{\hat D}_\nu{\vec W}_\mu)^2 \nonumber\\
&-\dfrac{g}{2}{\hat F}_{\mu\nu} \cdot ({\vec W}_\mu \times {\vec W}_\nu)
-\dfrac{g^2}{4} ({\vec W}_\mu \times {\vec W}_\nu)^2.
\label{gglag1}
\end{eqnarray}
With this we can Abelianize it gauge independently,
\begin{gather}
{\cal L}_{GG}= -\displaystyle\frac{1}{2} (\partial_\mu \rho)^2
- g^2 {\rho}^2 |W_\mu |^2-\displaystyle\frac{\lambda}{4}\big(\rho^2
-\displaystyle\frac{\mu^2}{\lambda}\big)^2 \nonumber\\
- \displaystyle\frac{1}{4} {\cal F}_{\mu\nu}^2
-\displaystyle\frac{1}{2} |{\hat D}_\mu W_\nu-{\hat D}_\nu W_\mu|^2
+ ig {\cal F}_{\mu\nu} W_\mu^*W_\nu \nonumber\\
+ \displaystyle\frac{g^2}{4}(W_\mu^* W_\nu - W_\nu^* W_\mu)^2.
\label{gglag2}
\end{gather}
This clearly shows that the theory can be viewed as a
(non-trivial) Abelian gauge theory which has a charged
vector field as a source.
The Abelianized Lagrangian looks very much like the
Georgi-Glashow Lagrangian written in the unitary gauge.
But we emphasize that this is the gauge independent
Abelianization which has the full (quantum) $SU(2)$
gauge symmetry.
Obviously we can apply the same Abelian decomposition
to the Weinberg-Salam theory
\begin{eqnarray}
&{\cal L}=-\dfrac{1}{2}{(\partial_{\mu}\rho)}^2
-\dfrac{\rho^2}{2} {|{\cal \hat D}_{\mu} \xi |}^2
-\dfrac{\lambda}{8}(\rho^2-\rho_0^2)^2 \nonumber\\
&-\dfrac{1}{4} {\hat F}_{\mu\nu}^2
-\dfrac{1}{4} G_{\mu\nu}^2
-\dfrac{1}{4}({\hat D}_\mu{\vec W}_\nu-{\hat D}_\nu{\vec W}_\mu)^2
-\dfrac{g^2}{8}\rho^2 ({\vec W}_\mu)^2 \nonumber\\
&-\dfrac{g}{2}{\hat F}_{\mu\nu} \cdot ({\vec W}_\mu \times {\vec W}_\nu)
-\dfrac{g^2}{4} ({\vec W}_\mu \times {\vec W}_\nu)^2, \nonumber\\
&{\cal \hat D}_\mu=\partial_\mu
-i\dfrac{g}{2} \vec{\tau}\cdot\hat {A}_\mu-i\dfrac{g'}{2}B_\mu.
\label{wslag2}
\end{eqnarray}
Moreover, with
\begin{gather}
\left( \begin{array}{cc} A_\mu^{\rm (em)} \\ Z_{\mu}
\end{array} \right)
= \displaystyle\frac{1}{\sqrt{g^2 + g'^2}} \left(\begin{array}{cc} g & g' \\
-g' & g
\end{array} \right)
\left( \begin{array}{cc}
B_{\mu} \\ {\cal A}_{\mu}
\end{array} \right),
\label{mixing}
\end{gather}
we can Abelianize it gauge independently
\begin{gather}
{\cal L}= -\displaystyle\frac{1}{2}(\partial_\mu \rho)^2
-\displaystyle\frac{\lambda}{8}\big(\rho^2-\rho_0^2 \big)^2 \nonumber\\
-\displaystyle\frac{1}{4} {F_{\mu\nu}^{\rm (em)}}^2
-\displaystyle\frac{1}{4} Z_{\mu\nu}^2-\displaystyle\frac{g^2}{4}\rho^2 |W_\mu|^2
-\displaystyle\frac{g^2+g'^2}{8} \rho^2 Z_\mu^2 \nonumber\\
-\displaystyle\frac{1}{2}|(D_\mu^{\rm (em)} W_\nu - D_\nu^{\rm (em)} W_\mu)
+ ie \displaystyle\frac{g}{g'} (Z_\mu W_\nu - Z_\nu W_\mu)|^2 \nonumber\\
+ie F_{\mu\nu}^{\rm (em)} W_\mu^* W_\nu
+ie \displaystyle\frac{g}{g'} Z_{\mu\nu} W_\mu^* W_\nu \nonumber\\
+ \displaystyle\frac{g^2}{4}(W_\mu^* W_\nu - W_\nu^* W_\mu)^2,
\label{wslag3}
\end{gather}
where $D_\mu^{\rm (em)}=\partial_\mu+ieA_\mu^{\rm (em)}$.
Again we emphasize that this is not the Weinberg-Salam Lagrangian
in the unitary gauge. This is the gauge independent Abelianization
which has the extra quantum (passive) non-Abelian gauge degrees of
freedom. This can easily be understood by comparing (\ref{mixing})
with (\ref{wein}). Certainly (\ref{mixing}) is gauge independent, while
(\ref{wein}) applies to the unitary gauge.
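One elementary but useful property of (\ref{mixing}) is that it is an orthogonal rotation in the $(B_\mu,{\cal A}_\mu)$ space, so it preserves the Abelian kinetic terms; a quick check (sympy, with `gprime` standing for $g'$):

```python
# The rotation of (mixing) is orthogonal for any couplings g, g',
# so the kinetic terms of the two Abelian fields are preserved.
import sympy as sp

g, gp = sp.symbols('g gprime', positive=True)
M = sp.Matrix([[g, gp], [-gp, g]]) / sp.sqrt(g**2 + gp**2)
assert (M * M.T).applyfunc(sp.simplify) == sp.eye(2)
```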
This provides an important piece of information. In the absence
of the electromagnetic interaction (i.e., with $A_\mu^{\rm (em)}
= W_\mu = 0$) the Weinberg-Salam model describes a spontaneously
broken $U(1)_Z$ gauge theory,
\begin{gather}
{\cal L} = -\displaystyle\frac{1}{2}(\partial_\mu \rho)^2
-\displaystyle\frac{\lambda}{8}\big(\rho^2-\rho_0^2\big)^2 \nonumber\\
-\displaystyle\frac{1}{4} Z_{\mu\nu}^2-\displaystyle\frac{g^2+g'^2}{8} \rho^2 Z_\mu^2,
\end{gather}
which is nothing but the Ginzburg-Landau theory of superconductivity.
Furthermore, here $M_H$ and $M_Z$ correspond to the coherence length
(of the Higgs field) and the penetration length (of the magnetic
field made of $Z$-field). So, when $M_H > M_Z$ (or $M_H < M_Z$),
the theory describes a type II (or type I) superconductivity,
which is well known to admit the Abrikosov-Nielsen-Olesen
vortex solution. This confirms the existence of Nambu's string
in the Weinberg-Salam model. What Nambu showed was that he could make
the string finite by attaching the fractionally charged monopole
anti-monopole pair to this string \cite{nambu}.
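As a minimal numerical illustration of the criterion (assuming the measured values $M_H\simeq 125$ GeV and $M_Z\simeq 91.2$ GeV):

```python
# M_H sets the coherence scale and M_Z the penetration scale of the
# U(1)_Z "superconductor"; M_H > M_Z puts the vacuum in the type II
# regime, which supports Abrikosov-Nielsen-Olesen vortices.
M_H, M_Z = 125.0, 91.2   # GeV (assumed experimental values)
regime = "type II" if M_H > M_Z else "type I"
print(regime)
```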
\section{Comparison with Julia-Zee Dyon}
The Cho-Maison dyon looks very much like the well-known
Julia-Zee dyon in the Georgi-Glashow model. Both can be
viewed as the Wu-Yang monopole dressed by the weak boson(s).
However, there is a crucial difference. The Julia-Zee
dyon is completely regular and has a finite energy, while
the Cho-Maison dyon has a point singularity at the center
which makes the energy infinite.
So, to regularize the Cho-Maison dyon it is important
to understand the difference between the two dyons.
To do that notice that, in the absence of the
$Z$-boson, (\ref{wslag3}) reduces to
\begin{eqnarray}
&{\cal L}= -\dfrac{1}{2}(\partial_\mu \rho)^2
-\dfrac{\lambda}{8}\big(\rho^2-\rho_0^2\big)^2
-\dfrac{g^2}{4}\rho^2 |W_\mu|^2 \nonumber\\
&-\dfrac{1}{4} {F_{\mu\nu}^{\rm (em)}}^2
-\dfrac{1}{2}|D_\mu^{\rm (em)} W_\nu-D_\nu^{\rm (em)} W_\mu|^2 \nonumber\\
&+ie F_{\mu\nu}^{\rm (em)} W_\mu^* W_\nu
+ \dfrac{g^2}{4}(W_\mu^* W_\nu - W_\nu^* W_\mu)^2.
\label{wslag4}
\end{eqnarray}
This should be compared with (\ref{gglag2}), which shows that
the two theories have exactly the same type of interaction in
the absence of the $Z$-boson, if we identify ${\cal F}_{\mu\nu}$ in
(\ref{gglag2}) with $F_{\mu\nu}^{\rm (em)}$ in (\ref{wslag4}). The
only difference lies in the coupling strengths of the $W$-boson
quartic self-interaction and of the Higgs interaction of the
$W$-boson (responsible for the Higgs mechanism). This difference, of
course, originates from the fact that the Weinberg-Salam model
has two gauge coupling constants, while the Georgi-Glashow model
has only one.
This tells us that, although the Cho-Maison dyon
has infinite energy, it is not much different from the Julia-Zee
dyon. To amplify this point, notice that the spherically symmetric
ansatz of the Julia-Zee dyon
\begin{gather}
\vec \Phi=\rho(r)~\hat r,
~~~\hat A_\mu=\displaystyle\frac{1}{g}A(r)\partial_\mu t~\hat{r}
- \displaystyle\frac{1}{g}\hat{r}\times\partial_\mu \hat{r} \nonumber\\
{\vec W}_\mu= \displaystyle\frac{1}{g}f(r) \hat{r}\times\partial_\mu \hat{r} ,
\label{ggdans}
\end{gather}
can be written in the Abelian formalism as
\begin{gather}
\rho = \rho(r),
~~~W_{\mu} = \displaystyle\frac{i}{g}\displaystyle\frac{f(r)}{\sqrt2}e^{i\varphi}
(\partial_\mu \theta +i \sin\theta \partial_\mu \varphi), \nonumber\\
{\cal A}_{\mu} = \displaystyle\frac{1}{g}A(r) \partial_{\mu}t
-\displaystyle\frac{1}{g}(1-\cos\theta) \partial_{\mu} \varphi.
\end{gather}
In the absence of the $Z$-boson this is identical to the
ansatz (\ref{ans2}).
With the ansatz we have the following equation for the dyon
\begin{gather}
\ddot{\rho}+\displaystyle\frac{2}{r}\dot{\rho} - 2\displaystyle\frac{f^2}{r^2}\rho
=\lambda \big(\rho^2 - \displaystyle\frac{\mu^2}{\lambda} \big)\rho,
\nonumber \\
\ddot{f}- \displaystyle\frac{f^2-1}{r^2}f=(g^2\rho^2-A^2)f,
\nonumber\\
\ddot{A}+\displaystyle\frac{2}{r}\dot{A} -2\displaystyle\frac{f^2}{r^2} A=0.
\label{ggdeq}
\end{gather}
This should be compared to the equation of motion (\ref{cmeq})
for the Cho-Maison dyon. They are not much different.
With the boundary condition
\begin{gather}
\rho(0)=0, \quad f(0)=1, \quad A(0)=0, \nonumber\\
\rho(\infty)=\bar \rho_0=\sqrt{\mu^2/\lambda},~~f(\infty)=0,
~~A(\infty)=A_0,
\label{ggbc}
\end{gather}
one can integrate (\ref{ggdeq}) and obtain the Julia-Zee
dyon which has a finite energy. Notice that the boundary conditions
$A(0)=0$ and $f(0)=1$ are crucial to make the solutions regular
at the origin. This confirms that the Julia-Zee dyon is nothing
but the Abelian monopole regularized by $\rho$ and $W_\mu$, where
the charged vector field adds an extra electric charge to the
monopole. Again it must be clear from (\ref{ggdeq}) that,
for a given magnetic charge, there are always two dyons with
opposite electric charges.
Moreover, for the monopole (and anti-monopole) solution
with $A=0$, the equation reduces to the following
Bogomol'nyi-Prasad-Sommerfield equation in the limit
$\lambda=0$
\begin{gather}
\dot{\rho}\pm \displaystyle\frac{1}{gr^2}(f^2-1)=0,
~~~\dot{f} \pm g \rho f=0.
\label{pseq}
\end{gather}
This has the analytic solution
\begin{gather}
\rho= \bar \rho_0\coth(g\bar \rho_0 r)-\dfrac{1}{gr},
~~~f= \dfrac{g\bar \rho_0 r}{\sinh(g \bar \rho_0 r)},
\end{gather}
which describes the Prasad-Sommerfield monopole \cite{prasad}.
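Both steps can be checked symbolically: the first-order equations (\ref{pseq}) imply the second-order equations (\ref{ggdeq}) with $A=0$ and $\lambda=0$, and the profile $\rho=\bar\rho_0\coth(g\bar\rho_0 r)-1/(gr)$, $f=g\bar\rho_0 r/\sinh(g\bar\rho_0 r)$ solves (\ref{pseq}) on the upper-sign branch. A sympy sketch:

```python
import sympy as sp

r, g, rho0 = sp.symbols('r g rho_0', positive=True)
rho_f = sp.Function('rho')(r)
f_f = sp.Function('f')(r)

# First-order BPS equations (upper sign): rho' = -(f^2-1)/(g r^2), f' = -g rho f
d_rho = -(f_f**2 - 1)/(g*r**2)
d_f = -g*rho_f*f_f
first = {rho_f.diff(r): d_rho, f_f.diff(r): d_f}
second = {rho_f.diff(r, 2): sp.diff(d_rho, r).subs(first),
          f_f.diff(r, 2): sp.diff(d_f, r).subs(first)}

# Second-order equations with A = 0 and lambda = 0
eq_rho = rho_f.diff(r, 2) + 2*rho_f.diff(r)/r - 2*f_f**2/r**2*rho_f
eq_f = f_f.diff(r, 2) - (f_f**2 - 1)/r**2*f_f - g**2*rho_f**2*f_f
assert sp.simplify(eq_rho.subs(second).subs(first)) == 0
assert sp.simplify(eq_f.subs(second).subs(first)) == 0

# The Prasad-Sommerfield profile solves the first-order equations
u = g*rho0*r
rho_s = rho0*sp.coth(u) - 1/(g*r)
f_s = u/sp.sinh(u)
bps1 = sp.diff(rho_s, r) + (f_s**2 - 1)/(g*r**2)
bps2 = sp.diff(f_s, r) + g*rho_s*f_s
assert sp.simplify(bps1.rewrite(sp.exp)) == 0
assert sp.simplify(bps2.rewrite(sp.exp)) == 0
```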
Of course, the Cho-Maison dyon has a non-trivial dressing of
the $Z$-boson which is absent in the Julia-Zee dyon. But notice
that the $Z$-boson plays no role in the Cho-Maison monopole.
This confirms that the Cho-Maison monopole and the 't Hooft-Polyakov
monopole are not so different, so that the Cho-Maison monopole
could be modified to have finite energy.
For the anti-dyon we can have the following ansatz
\begin{gather}
\vec \Phi=\rho(r)~{\hat r}',
~~~\hat A_\mu'=-\displaystyle\frac{1}{g}A(r)\partial_\mu t~{\hat r}'
- \displaystyle\frac{1}{g}{\hat r}' \times\partial_\mu {\hat r}' \nonumber\\
{\vec W}_\mu'= \displaystyle\frac{1}{g}f(r)~{\hat r}' \times\partial_\mu {\hat r}', \nonumber\\
{\hat r}'=(\sin \theta \cos \phi,-\sin \theta \sin \phi,\cos \theta),
\label{antiggd}
\end{gather}
or equivalently
\begin{gather}
\rho' = \rho(r),
~~~W_{\mu} = \displaystyle\frac{i}{g}\displaystyle\frac{f(r)}{\sqrt2}e^{-i\varphi}
(\partial_\mu \theta -i \sin\theta \partial_\mu \varphi), \nonumber\\
{\cal A}'_{\mu} = -\displaystyle\frac{1}{g}A(r) \partial_{\mu}t
+\displaystyle\frac{1}{g}(1-\cos\theta) \partial_{\mu} \varphi.
\end{gather}
This ansatz looks different from the popular ansatz described
by $\vec \Phi=-\rho(r)~{\hat r}$, but we can easily show that they
are gauge equivalent. With this we have exactly the same equation
(\ref{ggdeq}) for the anti-dyon, which assures that the theory
has both dyon and anti-dyon.
\section{Ultraviolet Regularization of Cho-Maison Dyon}
Since the Cho-Maison dyon is the only dyon in the standard
model, it is impossible to regularize it within the model.
However, the Weinberg-Salam model is the ``bare'' theory
which should change to the ``effective'' theory after the
quantum correction, and the ``real'' electroweak dyon must
be the solution of such a theory. So we may hope that the
quantum correction could regularize the Cho-Maison dyon.
The importance of the quantum correction in classical solutions
is best understood in QCD. The ``bare'' QCD Lagrangian has no
confinement, so that the classical solutions of bare QCD
can never describe quarkonia or hadronic bound states.
Only the effective theory can.
To see how the quantum modification could make the energy of
the Cho-Maison monopole finite, notice that after the quantum
correction the coupling constants change to the scale dependent
running couplings. So, if this quantum correction makes $1/g'^2$
in $E_0$ in (\ref{cme}) vanish in the short distance limit, the
Cho-Maison monopole could have a finite energy.
To do that consider the following effective Lagrangian which
has the non-canonical kinetic term for the $U(1)_Y$ gauge field
\begin{gather}
{\cal L}_{eff} = -|{\cal D} _\mu \phi|^2
-\displaystyle\frac{\lambda}{2} \Big(\phi^2 -\displaystyle\frac{\mu^2}{\lambda}\Big)^2
-\displaystyle\frac{1}{4} \vec F_{\mu\nu}^2 \nonumber\\
-\displaystyle\frac{1}{4} \epsilon(|\phi|^2 ) G_{\mu\nu}^2,
\label{effl}
\end{gather}
where $\epsilon(|\phi|^2)$ is a positive dimensionless function
of the Higgs doublet which approaches one asymptotically.
Clearly $\epsilon$ modifies the permittivity of the $U(1)_Y$
gauge field, but the effective action still retains the
$SU(2)\times U(1)_Y$ gauge symmetry. Moreover, when
$\epsilon \rightarrow 1$ asymptotically, the effective
action reproduces the standard model.
This type of effective theory, which has a field dependent
permittivity, naturally appears in non-linear electrodynamics
and higher-dimensional unified theories, and has been studied
intensively in cosmology to explain the late-time accelerated
expansion \cite{prd87,prl92,babi}.
\begin{figure}
\includegraphics[height=4cm, width=8cm]{fcdyon}
\caption{\label{fcdyon} The finite energy electroweak dyon
solution obtained from the effective Lagrangian (\ref{effl}).
The solid line represents the finite energy dyon
and dotted line represents the Cho-Maison dyon, where
$Z=A-B$ and we have chosen $f(0)=1$ and $A(\infty)=M_W/2$.}
\end{figure}
From (\ref{effl}) we have the equations for $\rho$ and $B_\mu$
\begin{gather}
\partial^2 \rho =|{\cal D}_\mu \xi|^2 \rho
+\displaystyle\frac{\lambda}{2}(\rho^2 - \rho_0^2) \rho
+\displaystyle\frac{1}{2} \epsilon' \rho G_{\mu\nu}^2, \nonumber \\
\partial_\mu G_{\mu\nu}= i \displaystyle\frac{g'}{2 \epsilon} \rho^2
[\xi^\dagger {\cal D}_\nu \xi - ({\cal D}_\nu\xi)^\dagger \xi]
-\displaystyle\frac{\partial_\mu \epsilon}{\epsilon} G_{\mu\nu},
\label{meq}
\end{gather}
where $\epsilon' = d\epsilon/d{\rho^2}$. This changes the dyon
equation (\ref{cmeq}) to
\begin{gather}
\ddot{\rho} + \displaystyle\frac{2}{r}\dot{\rho}-\displaystyle\frac{f^2}{2r^2}\rho
=-\displaystyle\frac{1}{4} (A-B)^2 \rho
+\displaystyle\frac{\lambda}{2} (\rho^2- \rho_0^2) \rho \nonumber\\
+ \displaystyle\frac{\epsilon'}{g'^2}\Big(\displaystyle\frac{1}{r^4}-\dot{B}^2 \Big) \rho, \nonumber\\
\ddot{f}-\displaystyle\frac{f^2-1}{r^2}f=\big(\displaystyle\frac{g^2}{4}\rho^2
- A^2\big)f, \nonumber\\
\ddot{A}+\displaystyle\frac{2}{r}\dot{A}-\displaystyle\frac{2f^2}{r^2}A
=\displaystyle\frac{g^2}{4}\rho^2(A-B), \nonumber \\
\ddot{B} + 2\big(\displaystyle\frac{1}{r}+
\displaystyle\frac{\epsilon'}{\epsilon} \rho \dot{\rho} \big) \dot{B}
=-\displaystyle\frac{g'^2}{4 \epsilon} \rho^2 (A-B).
\end{gather}
This tells us that, effectively, $\epsilon$ changes the
$U(1)_Y$ gauge coupling $g'$ to the ``running'' coupling
$\bar g'=g'/\sqrt{\epsilon}$. This is because with
the rescaling of $B_\mu$ to $B_\mu/g'$, $g'$ changes
to $g'/\sqrt{\epsilon}$. So, by making $\bar g'$
infinite (i.e., by making $\epsilon$ vanish) at the
origin, we can regularize the Cho-Maison monopole.
From the equations of motion we find that we need the
following condition near the origin to make the monopole
energy finite
\begin{gather}
\epsilon \simeq \Big(\displaystyle\frac{\rho}{\rho_0}\Big)^n,
~~~n > 4+2\sqrt 3 \simeq 7.46.
\end{gather}
With $n=8$ we have
\begin{eqnarray}
\rho(r) \simeq r^\delta,
~~~\delta={\displaystyle\frac{\sqrt{3}-1}{2}},
\end{eqnarray}
near the origin, and have the finite energy dyon solution
shown in Fig. \ref{fcdyon}. It is really remarkable
that the regularized solutions look very much like the
Cho-Maison solutions, except that for the finite energy
dyon solution $Z(0)$ becomes zero. This confirms that
the ultraviolet regularization of the Cho-Maison
monopole is indeed possible.
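The numbers quoted above follow from a dominant-balance analysis near the origin: with $f(0)=1$, the leading terms of the $\rho$ equation give the indicial relation $\delta(\delta+1)=1/2$, and demanding that the $\epsilon$-term, which scales as $r^{\delta(n-1)-4}$, be subleading to $r^{\delta-2}$ gives $n>2+2/\delta=4+2\sqrt3$. A symbolic sketch:

```python
import sympy as sp

# indicial relation from rho'' + 2 rho'/r - rho/(2 r^2) = 0 with rho ~ r^delta
delta = (sp.sqrt(3) - 1)/2
assert sp.simplify(delta*(delta + 1) - sp.Rational(1, 2)) == 0

# the epsilon-term r^{delta(n-1)-4} must be subleading to r^{delta-2}:
# delta*(n-1) - 4 > delta - 2  <=>  n > 2 + 2/delta
n_min = sp.simplify(2 + 2/delta)
assert sp.simplify(n_min - (4 + 2*sp.sqrt(3))) == 0
```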
\begin{figure}
\includegraphics[width=8cm, height=4cm]{effg}
\caption{\label{effg} The running coupling $\bar g'$ of $U(1)_Y$
gauge field induced by the effective Lagrangian (\ref{effl}).}
\end{figure}
As expected, with $n=8$ the running coupling $\bar g'$ becomes
divergent at the origin, which makes the energy contribution
from the $U(1)_Y$ gauge field finite. The scale dependence of
the running coupling is shown in Fig. \ref{effg}. With $A=B=0$
we can estimate the monopole energy to be
\begin{gather}
E \simeq 0.65 \times \displaystyle\frac{4\pi}{e^2} M_W \simeq 7.19 ~{\rm TeV}.
\end{gather}
This tells us that the estimate of the monopole energy based on
the scaling argument is reliable. The finite energy monopole
solution is shown in Fig.~\ref{fcmono}.
\begin{figure}
\includegraphics[width=8cm, height=4cm]{fcmono}
\caption{\label{fcmono}The finite energy electroweak monopole
solution obtained from the effective Lagrangian (\ref{modl}).
The solid line (red) represents the regularized monopole and
the dotted (blue) line represents the Cho-Maison monopole.}
\end{figure}
There is another way to regularize the Cho-Maison monopole.
Suppose we have the following ultraviolet modification of
(\ref{wslag2}) from the quantum correction
\begin{gather}
\delta {\cal L}=ie \alpha F_{\mu\nu}^{\rm (em)} W_\mu^* W_\nu
+\beta \displaystyle\frac{g^2}{4}(W_\mu^*W_\nu-W_\nu^*W_\mu)^2 \nonumber\\
-\gamma \displaystyle\frac{g^2}{4} \rho^2 |W_\mu|^2,
\end{gather}
where $\alpha,~\beta,~\gamma$ are the scale dependent parameters
which vanish asymptotically (and modify the theory only at short distance).
With this we have the modified Weinberg-Salam Lagrangian
\begin{gather}
{\cal L}'= -\displaystyle\frac{1}{2}(\partial_\mu \rho)^2
-\displaystyle\frac{\lambda}{8}\big(\rho^2-\rho_0^2\big)^2
-\displaystyle\frac{1}{4} {F_{\mu\nu}^{\rm (em)}}^2
-\displaystyle\frac{1}{4} Z_{\mu\nu}^2 \nonumber\\
-\displaystyle\frac{1}{2}\big|(D_\mu^{\rm (em)} W_\nu - D_\nu^{\rm (em)} W_\mu)
+ie \displaystyle\frac{g}{g'}(Z_\mu W_\nu - Z_\nu W_\mu)\big|^2 \nonumber \\
+ie(1+\alpha) F_{\mu\nu}^{\rm (em)} W_\mu^* W_\nu
+ie \displaystyle\frac{g}{g'} Z_{\mu\nu} W_\mu^* W_\nu \nonumber\\
+(1+\beta)\displaystyle\frac{g^2}{4}(W_\mu^* W_\nu -W_\nu^* W_\mu)^2 \nonumber\\
-(1+\gamma) \displaystyle\frac{g^2}{4}\rho^2 |W_\mu|^2
-\displaystyle\frac{g^2+g'^2}{8} \rho^2 Z_\mu^2.
\label{modl}
\end{gather}
Of course, this modification is supposed to hold only at
short distance, so that asymptotically $\alpha,~\beta,~\gamma$
should vanish to make sure that ${\cal L}'$ reduces to the
standard model. But we will treat them as constants, partly
because it is difficult to make them scale dependent, but
mainly because asymptotically the boundary condition automatically
makes them irrelevant and assures that the solution converges to
the Cho-Maison solution.
To understand the physical meaning of (\ref{modl}) notice
that in the absence of the $Z$-boson the above Lagrangian
reduces to the Georgi-Glashow Lagrangian where the $W$-boson
has an extra ``anomalous'' magnetic moment $\alpha$ when
$(1+\beta)=e^2/g^2$ and $(1+\gamma)=4e^2/g^2$, if we
identify the coupling constant $g$ in the Georgi-Glashow model
with the electromagnetic coupling constant $e$. Moreover, the
ansatz (\ref{ans1}) can be written as
\begin{eqnarray}
&{\vec A}_\mu=\hat A_\mu^{\rm (em)} +\vec W_\mu, \nonumber\\
&\hat A_\mu^{\rm (em)}=e \big[\dfrac{1}{g^2}A(r)
+\dfrac{1}{g'^2}B(r)\big] \partial_\mu t ~\hat r
-\dfrac1e \hat r\times \partial_\mu \hat r , \nonumber\\
&\vec W_\mu=\dfrac{f(r)}{g} \hat r\times \partial_\mu \hat r, \nonumber\\
&Z_{\mu} = \dfrac{e}{gg'}\big(A(r)-B(r)\big) \partial_{\mu}t.
\label{ans3}
\end{eqnarray}
This shows that, for the monopole (i.e., for $A=B=0$)
the ansatz becomes formally identical to (\ref{ggdans})
if $\vec W_\mu$ is rescaled by a factor $g/e$. This tells
that, as far as the monopole solution is concerned, in
the absence of the $Z$-boson the Weinberg-Salam model and
Georgi-Glashow model are not so different.
With (\ref{modl}) the energy of the dyon is given by
\begin{gather}
\hat E =\hat E_0 +\hat E_1, \nonumber\\
\hat E_0 =\displaystyle\frac{2\pi}{g^2}\int_0^\infty
\displaystyle\frac{dr}{r^2}\Big\{\displaystyle\frac{g^2}{g'^2}+1 -2(1+\alpha) f^2
+(1+\beta)f^4 \Big\} \nonumber\\
=\displaystyle\frac{2\pi}{g^2}\int_0^\infty
\displaystyle\frac{dr}{r^2}\Big\{\displaystyle\frac{g^2}{e^2}-\displaystyle\frac{(1+\alpha)^2}{1+\beta}
+(1+\beta)\big(f^2-\displaystyle\frac{1+\alpha}{1+\beta}\big)^2 \Big\}, \nonumber\\
\hat E_1 =\displaystyle\frac{4\pi}{g^2} \int_0^\infty dr
\bigg\{\displaystyle\frac{g^2}{2}(r\dot\rho)^2
+\displaystyle\frac{\lambda g^2r^2}{8}\big(\rho^2-\rho_0^2 \big)^2 \nonumber\\
+\dot f^2 +\displaystyle\frac{1}{2}(r\dot A)^2
+\displaystyle\frac{g^2}{2g'^2}(r\dot B)^2
+(1+\gamma) \displaystyle\frac{g^2}{4} f^2\rho^2 \nonumber\\
+\displaystyle\frac{g^2r^2}{8} (B-A)^2 \rho^2 +f^2 A^2 \bigg\}.
\label{energy_2}
\end{gather}
Notice that $\hat E_1$ remains finite with the modification,
and $\gamma$ plays no role in making the monopole energy
finite.
\begin{figure}
\includegraphics[height=4cm, width=8cm]{fecdyon}
\caption{\label{fecdyon} The finite energy electroweak dyon
solution obtained from the modified Lagrangian (\ref{modl}).
The solid line represents the finite energy dyon and dotted
line represents the Cho-Maison dyon.}
\end{figure}
To make $\hat E_0$ finite we must have
\begin{gather}
1+\alpha=\dfrac1{f(0)^2} \dfrac{g^2}{e^2},
~~~1+\beta=\dfrac1{f(0)^4} \dfrac{g^2}{e^2},
\label{fecon}
\end{gather}
so that the constants $\alpha$ and $\beta$ are fixed by $f(0)$.
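The completed-square form of $\hat E_0$ in (\ref{energy_2}) and the cancellation enforced by (\ref{fecon}) can be verified symbolically (sympy; `gprime` stands for $g'$, `f0` for $f(0)$, and $e$ is defined by $1/e^2=1/g^2+1/g'^2$):

```python
import sympy as sp

f, a, b, g, gp, f0 = sp.symbols('f alpha beta g gprime f0', positive=True)
e2 = g**2*gp**2/(g**2 + gp**2)        # e^2 from 1/e^2 = 1/g^2 + 1/g'^2

# integrand of E0-hat: raw form vs completed square
raw = g**2/gp**2 + 1 - 2*(1 + a)*f**2 + (1 + b)*f**4
square = g**2/e2 - (1 + a)**2/(1 + b) + (1 + b)*(f**2 - (1 + a)/(1 + b))**2
assert sp.simplify(raw - square) == 0

# with (fecon) the constant piece cancels and the square vanishes at f(0)
fecon = {a: g**2/(e2*f0**2) - 1, b: g**2/(e2*f0**4) - 1}
assert sp.simplify((g**2/e2 - (1 + a)**2/(1 + b)).subs(fecon)) == 0
assert sp.simplify((f**2 - (1 + a)/(1 + b)).subs(fecon).subs(f, f0)) == 0
```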
With this the equation of motion is given by
\begin{gather}
\ddot \rho+\displaystyle\frac{2}{r}\dot\rho-\displaystyle\frac{(1+\gamma)f^2}{2r^2}\rho
=-\displaystyle\frac{1}{4}(A-B)^2 \rho
+\displaystyle\frac{\lambda}{2}\big(\rho^2-\rho_0^2 \big)\rho, \nonumber \\
\ddot f -\displaystyle\frac{(1+\alpha)}{r^2}\Big( \dfrac{f^2}{f^2(0)}-1 \Big) f
=\Big( (1+\gamma)\displaystyle\frac{g^2}{4}\rho^2-A^2 \Big) f, \nonumber\\
\ddot A +\displaystyle\frac{2}{r} \dot A - \displaystyle\frac{2f^2}{r^2} A
=\displaystyle\frac{g^2}{4}(A-B)\rho^2, \nonumber \\
\ddot B+\displaystyle\frac{2}{r}\dot B =-\displaystyle\frac{g'^2}{4}(A-B) \rho^2 .
\label{eqm3}
\end{gather}
The solution has the following behavior near the origin,
\begin{eqnarray}
&\rho \simeq \alpha_1 r^{\delta_1},
~~~~\dfrac{f}{f(0)} \simeq 1 + \beta_1 r^{\delta_2}, \nonumber \\
&A \simeq a_1 r^{\delta_3},~~~~B \simeq b_0 + b_1 r^{\delta_4},
\label{origin1}
\end{eqnarray}
where
\begin{eqnarray}
&\delta_1 = \dfrac{1}{2}(\sqrt{1+2(1+\gamma)f^2(0)} -1), \nonumber\\
&\delta_2 = \dfrac{1}{2}(1+\sqrt{8\alpha+9}),
~~~\delta_3 = \dfrac{1}{2}(\sqrt{1+8f^2(0)} -1),\nonumber\\
&\delta_4 = \sqrt{1+2f^2(0)} +1. \nonumber
\end{eqnarray}
Notice that all four exponents are positive (as long as $1+\alpha>0$),
so that the four functions are well behaved at the origin.
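The exponents $\delta_1$ and $\delta_3$ come from the indicial equation $\delta(\delta+1)=c$ of the Euler-type operator $\partial_r^2+(2/r)\partial_r-c/r^2$ that governs the equations near the origin; a symbolic sketch:

```python
import sympy as sp

c, f0, gam = sp.symbols('c f0 gamma', positive=True)
# positive root of the indicial equation delta (delta + 1) = c
root = (sp.sqrt(1 + 4*c) - 1)/2
assert sp.simplify(root*(root + 1) - c) == 0

# delta_1: c = (1 + gamma) f(0)^2 / 2 from the rho equation
delta1 = root.subs(c, (1 + gam)*f0**2/2)
assert sp.simplify(delta1 - (sp.sqrt(1 + 2*(1 + gam)*f0**2) - 1)/2) == 0

# delta_3: c = 2 f(0)^2 from the A equation
delta3 = root.subs(c, 2*f0**2)
assert sp.simplify(delta3 - (sp.sqrt(1 + 8*f0**2) - 1)/2) == 0
```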
If we assume $\alpha=\gamma=0$ we have $f(0)=g/e$,
and we can integrate (\ref{eqm3}) with the boundary condition
\begin{eqnarray}
&\rho(0)=0,~~~f(0)=g/e,~~~A(0)=0,~~~B(0)=b_0, \nonumber\\
&f(\infty)=0,~\rho(\infty)=\rho_0,
~A(\infty)=B(\infty)=A_0.
\label{bc1}
\end{eqnarray}
The finite energy dyon solution is shown in Fig. \ref{fecdyon}.
It should be emphasized that the solution is approximate and
is expected to be valid only near the origin, because the constants
$\alpha,~\beta,~\gamma$ are supposed to vanish asymptotically.
Notice, however, that the solution automatically approaches the
Cho-Maison solution asymptotically even without making them vanish,
because we have the same boundary condition at infinity. Again it is
remarkable that the finite energy solution looks very similar to the
Cho-Maison solution.
Of course, we can still integrate (\ref{eqm3}) with arbitrary
$f(0)$ and have a finite energy solution. The monopole energy for
$f(0)=1$ and $f(0)=g/e$ (with $\alpha=\gamma=0$) are given
by
\begin{eqnarray}
&E(f(0)=1) \simeq 0.61 \times \dfrac{4\pi }{e^2} M_W
\simeq 6.73~{\rm TeV}, \nonumber\\
&E(f(0)=\dfrac{g}{e})\simeq 1.27 \times \dfrac{4\pi }{e^2} M_W
\simeq 13.95~{\rm TeV}.
\end{eqnarray}
In general the energy of the dyon depends on $f(0)$, but must be
of the order of $(4\pi/e^2) M_W$. The energy dependence of
the monopole on $f(0)$ is shown in Fig. \ref{edf0}. This
strongly supports our prediction of the monopole mass based
on the scaling argument.
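These estimates follow from $4\pi/e^2 = 1/\alpha_{\rm em} \simeq 137$; the sketch below redoes the arithmetic with illustrative values of $\alpha_{\rm em}$ and $M_W$ (not taken from the text).

```python
alpha_em = 1 / 137.036   # fine-structure constant, so 4*pi/e^2 = 1/alpha_em
M_W = 80.379e-3          # W-boson mass in TeV (illustrative value)

E_f1 = 0.61 * M_W / alpha_em   # f(0) = 1
E_fg = 1.27 * M_W / alpha_em   # f(0) = g/e
print(E_f1, E_fg)              # roughly 6.7 TeV and 14.0 TeV
```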
\begin{figure}
\includegraphics[height=4cm, width=8cm]{edf0}
\caption{\label{edf0} The energy dependence of the electroweak
monopole on $f(0)$.}
\label{fig3}
\end{figure}
As we have emphasized, in the absence of the $Z$-boson
(\ref{modl}) reduces to the Georgi-Glashow theory with
\begin{eqnarray}
\alpha = 0, \quad
1+\beta=\displaystyle\frac{e^2}{g^2}, \quad
1+\gamma=\displaystyle\frac{4e^2}{g^2}.
\label{cond4}
\end{eqnarray}
In this case (\ref{eqm3}) reduces to the following
Bogomol'nyi-Prasad-Sommerfield equation in the limit
$\lambda=0$ \cite{prasad}
\begin{gather}
\dot{\rho}\pm \displaystyle\frac{1}{er^2}\big(\displaystyle\frac{e^2}{g^2}f^2
-1 \big)=0,
~~~\dot{f}\pm e\rho f=0.
\label{self2}
\end{gather}
This has the analytic monopole solution
\begin{gather}
\rho=\rho_0\coth(e\rho_0r)-\displaystyle\frac{1}{er},
~~~f= \displaystyle\frac{g\rho_0 r}{\sinh(e\rho_0r)},
\end{gather}
whose energy is given by the Bogomol'nyi bound
\begin{eqnarray}
E=\sin^2 \theta_{\rm w} \times \displaystyle\frac{8\pi}{e^2} M_{W}
\simeq 5.08~{\rm TeV}.
\end{eqnarray}
From this we can confidently say that the mass of the electroweak
monopole could be around 4 to 7 TeV.
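One can verify the analytic profile directly: the sketch below checks by finite differences that it satisfies the first-order equations (\ref{self2}) (upper sign), and evaluates the Bogomol'nyi bound written as $E=(8\pi/g^2)M_W$, using $e=g\sin\theta_{\rm w}$; all numerical constants here are illustrative.

```python
import math

# illustrative constants: e = g*sin(theta_w), with sin^2(theta_w) ~ 0.2312
sin2w = 0.2312
alpha_em = 1 / 137.036
e = math.sqrt(4 * math.pi * alpha_em)
g = e / math.sqrt(sin2w)
rho0 = 1.0

def rho(r):
    u = e * rho0 * r
    return rho0 / math.tanh(u) - 1 / (e * r)

def f(r):
    u = e * rho0 * r
    return g * rho0 * r / math.sinh(u)

def bps_residuals(r, h=1e-6):
    """Residuals of the first-order equations (upper sign)."""
    drho = (rho(r + h) - rho(r - h)) / (2 * h)
    df = (f(r + h) - f(r - h)) / (2 * h)
    res1 = drho + ((e / g) ** 2 * f(r) ** 2 - 1) / (e * r ** 2)
    res2 = df + e * rho(r) * f(r)
    return res1, res2

M_W = 80.379e-3  # TeV (illustrative)
E = (8 * math.pi / g ** 2) * M_W   # Bogomol'nyi bound, ~5.1 TeV
print(bps_residuals(1.0), E)       # residuals vanish to rounding error
```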
This confirms that we can regularize the Cho-Maison dyon with
a simple modification of the coupling strengths of the existing
interactions, which could be caused by quantum corrections.
This provides a most economical way to make the energy of the dyon
finite without introducing a new interaction into the standard model.
\section{Embedding $U(1)_Y$ to $SU(2)_Y$}
Another way to regularize the Cho-Maison dyon, of course, is
to enlarge $U(1)_Y$ by embedding it into another $SU(2)$. This type of
generalization of the standard model could naturally arise in the
left-right symmetric grand unification models, in particular in the
SO(10) grand unification, although this generalization may be too
simple to be realistic.
To construct the desired solutions we introduce a hypercharged vector
field $X_\mu$ and a Higgs field $\sigma$, and generalize the Lagrangian
(\ref{wslag2}) adding the following Lagrangian
\begin{eqnarray}
&\Delta {\cal L}=-\dfrac{1}{2}|\tilde D_\mu X_\nu-\tilde D_\nu X_\mu|^2
+ig' G_{\mu\nu}X_\mu^* X_\nu \nonumber\\
&+\dfrac{1}{4}g'^2(X_\mu^* X_\nu -X_\nu^* X_\mu)^2 \nonumber\\
&-\dfrac{1}{2}(\partial_\mu\sigma)^2 -g'^2\sigma^2 |X_\mu|^2
-\dfrac{\kappa}{4}\big( \sigma^2-\dfrac{m^2}{\kappa}\big)^2,
\label{lag4}
\end{eqnarray}
where $\tilde D_\mu = \partial_\mu +ig' B_\mu$. To understand
its meaning, let us introduce a hypercharge $SU(2)$ gauge field
$\vec B_\mu$ and a scalar triplet ${\vec \Phi}$, and consider
the $SU(2)_Y$ Georgi-Glashow model
\begin{eqnarray}
&{\cal L}'=-\dfrac{1}{2}(D_\mu {\vec \Phi})^2
-\dfrac{\kappa}{4}\big({\vec \Phi}^2-\dfrac{m^2}{\kappa}\big)^2
-\dfrac{1}{4} \vec {G}_{\mu\nu}^2.
\end{eqnarray}
Now we can have the Abelian decomposition of this Lagrangian
with $\vec \Phi=\sigma {\hat n}$, and have (identifying $B_\mu$ and $X_\mu$
as the Abelian and valence parts)
\begin{eqnarray}
&{\cal L}'=-\dfrac14 G_{\mu\nu}^2+ \Delta {\cal L}.
\end{eqnarray}
This clearly shows that Lagrangian (\ref{lag4}) describes nothing but
the embedding of the hypercharge $U(1)$ to an $SU(2)$ Georgi-Glashow
model.
Now for a static spherically symmetric ansatz
we choose (\ref{ans1}) and let
\begin{eqnarray}
&\sigma =\sigma(r), \nonumber \\
&X_\mu =\dfrac{i}{g'}\dfrac{h(r)}{\sqrt{2}}e^{i\varphi}
(\partial_\mu \theta+i\sin\theta\partial_\mu \varphi).
\label{ansatz3}
\end{eqnarray}
With the spherically symmetric ansatz the equations of motion are
reduced to
\begin{gather}
\ddot{f}-\displaystyle\frac{f^2-1}{r^2}f
=\big(\dfrac{g^2}{4}\rho^2-A^2\big)f, \nonumber\\
\ddot{\rho}+\displaystyle\frac{2}{r} \dot{\rho} - \displaystyle\frac{f^2}{2r^2}\rho
=-\displaystyle\frac{1}{4}(A-B)^2\rho + \displaystyle\frac{\lambda}{2}\big(\rho^2
-\displaystyle\frac{2\mu^2}{\lambda}\big)\rho, \nonumber\\
\ddot{A} + \displaystyle\frac{2}{r}\dot{A} -\displaystyle\frac{2f^2}{r^2}A
= \displaystyle\frac{g^2}{4} \rho^2(A-B), \nonumber\\
\ddot{B} + \displaystyle\frac{2}{r} \dot{B}- \displaystyle\frac{2h^2}{r^2} B
=\displaystyle\frac{g'^2}{4} \rho^2 (B-A), \nonumber\\
\ddot h -\displaystyle\frac{h^2-1}{r^2} h =(g'^2\sigma^2-B^2) h, \nonumber\\
\ddot\sigma +\displaystyle\frac{2}{r}\dot\sigma -\displaystyle\frac{2h^2}{r^2} \sigma
= \kappa\big(\sigma^2-\displaystyle\frac{m^2}{\kappa}\big)\sigma.
\label{eom4}
\end{gather}
Furthermore, the energy of the above configuration is given by
\begin{gather}
E=E_W +E_X, \nonumber\\
E_{W}= \displaystyle\frac{4\pi}{g^2}\int_0^\infty dr
\Big\{\dot f^2 +\displaystyle\frac{(f^2-1)^2}{2r^2}
+\displaystyle\frac{1}{2}(r\dot A)^2 \nonumber\\
+f^2A^2+\displaystyle\frac{g^2}{2}(r\dot\rho)^2 + \displaystyle\frac{g^2}{4} f^2\rho^2
+\displaystyle\frac{g^2r^2}{8}(A-B)^2\rho^2 \nonumber\\
+\displaystyle\frac{\lambda g^2r^2}{8}\big(\rho^2-\displaystyle\frac{2\mu^2}{\lambda}\big)^2
\Big\}=\displaystyle\frac{4\pi}{g^2}~C_1~M_W, \nonumber\\
E_X=\displaystyle\frac{4\pi}{g'^2}\int_0^\infty dr\Big\{\dot h^2
+\displaystyle\frac{(h^2-1)^2}{2r^2}+\displaystyle\frac{1}{2}(r\dot B)^2 \nonumber\\
+h^2B^2 +\displaystyle\frac{g'^2}{2}(r\dot\sigma)^2+g'^2 h^2\sigma^2 \nonumber\\
+\displaystyle\frac{\kappa g'^2r^2}{4}(\sigma^2-\sigma_0^2)^2 \Big\}
=\displaystyle\frac{4\pi }{g'^2}~C_2~M_X,
\end{gather}
where $\sigma_0=\sqrt{m^2/\kappa}$, $M_X=g' \sigma_0$, and $C_1$ and $C_2$
are constants of order one. The boundary conditions
for a regular field configuration can be chosen as
\begin{eqnarray}
&f(0)=h(0)=1,~~ A(0)=B(0)=\rho(0)=\sigma(0)=0, \nonumber\\
&f(\infty)=h(\infty)=0,~A(\infty)=A_0,~B(\infty)=B_0,\nonumber\\
&\rho(\infty)=\rho_0,~\sigma(\infty)=\sigma_0.
\label{bound3}
\end{eqnarray}
Notice that this guarantees the analyticity of
the solution everywhere, including the origin.
\begin{figure}
\includegraphics[height=4cm, width=8cm]{su2embed}
\caption{The $SU(2)\times SU(2)$ monopole solution with
$M_H/M_W=1.56$, $M_X=10~M_W$, and $\kappa=0$.}
\label{fig4}
\end{figure}
With the boundary condition (\ref{bound3}) one may try to
find the desired solution. From the physical point of view
one could assume $M_X \gg M_W$, where $M_X$ is an intermediate
scale which lies somewhere between the grand unification scale
and the electroweak scale. Now, let $A=B=0$ for simplicity.
Then (\ref{eom4}) decouples to describe two independent systems,
so that the monopole solution has two cores, one of size
$O(1/M_W)$ and the other of size $O(1/M_X)$. With
$M_X=10M_W$ we obtain the solution shown in Fig. \ref{fig4}
in the limit $\kappa=0$ and $M_H/M_W=1.56$.
In this limit we find
$C_1=1.53$ and $C_2=1$ so that the energy of the solution
is given by
\begin{eqnarray}
&E=\dfrac{4\pi}{e^2}\Big( \cos^2\theta_{\rm w}
+0.153~\sin^2\theta_{\rm w}\Big)~M_X \nonumber\\
&\simeq 110.17~M_X.
\end{eqnarray}
Clearly the solution describes the Cho-Maison monopole whose singularity
is regularized by a Prasad-Sommerfield monopole of size $O(1/M_X)$.
Notice that, even though the energy of the monopole is fixed by
the intermediate scale, the size of the monopole is determined
by the electroweak scale. Furthermore from the outside the monopole
looks exactly the same as the Cho-Maison monopole. Only the inner core
is regularized by the hypercharged vector field.
\section{Conclusions}
In this paper we have discussed three ways to estimate
the mass of the electroweak monopole, the dimensional argument,
the scaling argument, and the ultraviolet regularization of
the Cho-Maison monopole. Just as importantly, we have shown that
the standard model has the anti-dyon as well as the dyon
solution, so that they can be produced in pairs.
It has generally been believed that the finite energy monopole
could exist only at the grand unification scale \cite{dokos}.
But our result shows that a genuine electroweak monopole
with mass around 4 to 10 TeV could exist. This strongly implies
that there is an excellent chance that MoEDAL could actually
detect such a monopole in the near future, because the 14 TeV
LHC upgrade now reaches the monopole-antimonopole pair production
threshold. But of course, if the mass of the monopole exceeds
the LHC threshold of 7 TeV, we may have to look for the monopole
in cosmic rays with the ``cosmic'' MoEDAL.
The importance of the electroweak monopole is that it is the
electroweak generalization of the Dirac monopole, and that it is
the only realistic monopole which can be produced and detected.
A remarkable aspect of this monopole is that mathematically it
can be viewed as a hybrid between the Dirac monopole and the
't~Hooft-Polyakov monopole.
However, there are two crucial differences. First, the magnetic
charge of the electroweak monopole is twice that
of the Dirac monopole, so that it satisfies the Schwinger quantization
condition $q_m=4\pi n/e$. This is because the electroweak generalization
requires us to embed $U(1)_{\rm em}$ into the $U(1)$ subgroup of $SU(2)$,
which has a period of $4\pi$. So the magnetic charge of the electroweak
monopole is quantized in units of $4\pi/e$.
Of course, the finite energy dyon solutions discussed
above are not solutions of the ``bare'' standard model.
Nevertheless they tell us how the Cho-Maison dyon could be
regularized and what the regularized electroweak dyon would
look like. From the physical point of view there is no doubt
that the finite energy solutions should be interpreted as
regularized Cho-Maison dyons whose mass (and size) is
fixed by the electroweak scale.
We emphasize that, unlike the Dirac monopole, which can exist
only when $U(1)_{\rm em}$ becomes non-trivial, the electroweak
monopole must exist in the standard model. So, if the standard
model is correct, we must have the monopole. {\it In this sense,
the experimental discovery of the electroweak monopole should
be viewed as the final topological test of the standard model.}
Clearly the electroweak monopole invites more difficult questions.
How can we justify the perturbative expansion and the renormalization
in the presence of the monopole? What are the new physical processes
which can be induced by the monopole? Most importantly, how can we
construct the quantum field theory of the monopole?
Moreover, the existence of the finite energy electroweak monopole
should have important physical implications. In particular, it could
have important implications in cosmology, because it can be produced
after inflation. The physical implications of the monopole will be
discussed in a separate paper \cite{cho}.
\textbf{Acknowledgments}
The work is supported in part by the National Research
Foundation (2012-002-134) of the Ministry of
Science and Technology and by Konkuk University.
\section{Introduction}
\label{sec:introduction}
Recent years have witnessed a rapid growth of interest in understanding condensed-phase materials using quantum chemistry methods \cite{Marsman09JCP,Muller12PCCP,DelBen12JCTC,DelBen13JCTC,Booth13Nature,Yang14Science,McClain17JCTC,Gruber18PRX,Zhang19FM,Wang20JCTC,Lau21JPCL,Lange21JCP,Wang21JCTC},
especially those beyond density functional theory \cite{Hohenberg64PR,Kohn65PR} (DFT) with local and semi-local exchange-correlation functionals \cite{Perdew96PRL,Perdew08PRL}.
The non-local exchange and the many-body electron correlation in the quantum chemistry methods promise many advantages, such as systematic improvability \cite{McClain17JCTC} and the ability to describe dispersion interactions \cite{Jeziorski94CR,Sinnokrot02JACS,Sherrill09JPCA} and strong electron correlations \cite{Zheng17Science,Li19NC}, but are also computationally demanding, especially when using a plane-wave (PW) basis set due to the slow convergence with the number of virtual bands~\cite{Marsman09JCP,Gruneis10JCP,Gruneis11JCTC,Shepherd12PRB,Jiang16PRB,Booth16JCP,Morales20JCP,Callahan21JCP,Wei21JCTC}.
Atom-centered Gaussian basis sets are most popular in molecular quantum chemistry \cite{Hill13IJQC}, where the correlation-consistent basis sets \cite{Dunning89JCP} allow systematic convergence to the complete basis set (CBS) limit in correlated calculations. \cite{Feller11JCP,Neese11JCTC,Zhang13NJP,Varandas21PCCP}
However, the properties that define a good basis set for molecules are not the same as those for periodic solids.
For example, the standard Gaussian basis sets \cite{Feller96JCC,Weigend03JCP,Weigend05PCCP}, often optimized on free atoms, contain relatively diffuse functions that are needed to correctly describe the wavefunction in distant regions of space.
These diffuse functions cause significant linear dependencies when used in periodic calculations \cite{Klahn77IJQC,VandeVondele07JCP,Peintinger13JCC,VilelaOliveira19JCC}, leading to numerical instabilities in the self-consistent field (SCF) calculations \cite{Roothaan51RMP}.
While this SCF convergence issue can sometimes be solved by discarding diffuse primitives in the basis set \cite{Peintinger13JCC} or by canonical orthogonalization \cite{Lee21JCP}, these modifications hinder reproducibility, cause discontinuities in potential energy surfaces, and degrade the quality of virtual orbitals, which affects subsequent correlated calculations.
More importantly, the linear dependency problem is worse for larger basis sets, preventing convergence of a periodic calculation to the CBS limit.
One way to mitigate the linear dependency issue is re-optimizing the Gaussian exponents $\zeta_i$ and contraction coefficients $c_i$ of existing basis sets based on a cost function such as \cite{VandeVondele07JCP,Daga20JCTC,Li21JPCL,Zhou21JCTC,Neufeld21TBP}
\begin{equation} \label{eq:molbulkopt}
\Omega(\{\zeta_i,c_i\}; \gamma)
= E(\{\zeta_i,c_i\}) + \gamma \log \mathrm{cond}\mathbf{S}(\{\zeta_i,c_i\}),
\end{equation}
the minimization of which trades some of the energy $E$ for a lower condition number of the basis set overlap matrix $\mathbf{S}$ to the extent controlled by the developer-selected parameter $\gamma > 0$.
Such a cost function can be minimized on a paradigmatic system that exhibits the linear dependency of concern \cite{VandeVondele07JCP,Li21JPCL} or on each system under study \cite{Daga20JCTC,Zhou21JCTC,Neufeld21TBP}, and recent works have demonstrated the success of such approaches for producing Gaussian basis sets with better behavior \cite{Li21JPCL,Daga20JCTC,Zhou21JCTC,Neufeld21TBP}.
However, aside from the extra cost associated with frequent basis set reoptimization, such approaches obviously hinder---or forfeit---transferability and reproducibility.
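The trade-off in the penalized cost function is easy to demonstrate on a toy set of $s$-type Gaussians, for which the overlap of normalized primitives with exponents $\zeta_i,\zeta_j$ is $[2\sqrt{\zeta_i\zeta_j}/(\zeta_i+\zeta_j)]^{3/2}$; adding diffuse primitives sharply raises $\mathrm{cond}\,\mathbf{S}$. The exponents below are illustrative, not taken from any published basis set.

```python
import numpy as np

def overlap(zetas):
    """Overlap matrix of normalized s-type Gaussians exp(-zeta * r**2)."""
    z = np.asarray(zetas, dtype=float)
    zi, zj = np.meshgrid(z, z, indexing="ij")
    return (2 * np.sqrt(zi * zj) / (zi + zj)) ** 1.5

def cost(zetas, energy, gamma):
    """Penalized cost in the spirit of Eq. (1)."""
    return energy + gamma * np.log(np.linalg.cond(overlap(zetas)))

compact = [10.0, 3.0, 1.0, 0.3]       # well-conditioned even-tempered set
diffuse = compact + [0.1, 0.03]       # same set plus two diffuse primitives
print(np.linalg.cond(overlap(compact)),
      np.linalg.cond(overlap(diffuse)))  # the second is much larger
```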
In this work, we take a different approach to constructing Gaussian basis sets for periodic systems, which are designed to be universal and transferable, by revisiting Dunning's strategy for generating correlation-consistent Gaussian basis sets \cite{Dunning89JCP,Woon93JCP}.
The key modification needed for extended systems is found to be restricting the size of the valence basis to reach a balance between the accuracy and the numerical stability of a basis set.
Our strategy is general and applies to both all-electron and pseudopotential-based calculations;
as a specific example, we use the Goedecker-Teter-Hutter (GTH) family of pseudopotentials \cite{Goedecker96PRB,Hartwigsen98PRB} optimized for Hartree-Fock \cite{HutterPP} (HF) calculations and generate correlation-consistent Gaussian basis sets up to the quadruple-zeta (QZ) level for the main-group elements from the first three rows of the periodic table.
The resulting GTH-cc-pV$X$Z ($X = $ D, T, and Q) basis set series show fast convergence to the CBS limit (verified using a PW basis with the same pseudopotential) on the bulk properties of $19$ semiconductors calculated at both mean-field (HF) and correlated (second-order M\o{}ller-Plesset perturbation theory \cite{Moller34PR}, MP2) levels.
This article is organized as follows.
In \cref{sec:methodology}, we review Dunning's original scheme for generating correlation-consistent basis sets and provide a high-level description of our adaptation for periodic systems, using the GTH-cc-pV$X$Z basis set of carbon as an illustrative example.
In \cref{sec:computational_details}, we describe the computational details of the basis optimization and the numerical tests.
In \cref{sec:results_and_discussion}, we evaluate the quality of the GTH-cc-pV$X$Z basis sets on a variety of bulk properties by comparing to results obtained with a PW basis.
In \cref{sec:conclusion}, we conclude this work by pointing out future directions.
\section{Methodology}
\label{sec:methodology}
\subsection{Correlation consistent basis sets}
\label{subsec:correlation_consistent_basis_sets}
We begin with a brief review of Dunning's original approach to constructing the cc-pV$X$Z basis set series for the main-group elements, \cite{Dunning89JCP} which our strategy closely follows.
A cc-pV$X$Z basis set consists of a valence basis and a set of polarization functions.
The valence basis has $s$ and $p$ primitive orbitals (only $s$ for hydrogen and helium) whose exponents are determined by minimizing the HF ground state energy of a free atom.
These optimized primitive orbitals are then contracted with coefficients obtained from spherically averaging the atomic HF orbitals.
The most diffuse one, two, etc.~primitive orbitals are freed from the contraction for DZ, TZ, etc.~to better describe the electron density in the bonding region of a molecule.
The size of the valence basis (i.e.,~the number of primitive $s$ and $p$ orbitals) is typically chosen by the desired accuracy of the atomic HF energy.
The polarization functions are primitive orbitals of $d$ angular momentum or higher ($p$ or higher for hydrogen and helium) whose exponents are determined by minimizing the correlation energy of a free atom.
The rule of correlation consistency---an empirical observation first made by Dunning \cite{Dunning89JCP} and later confirmed by others \cite{Woon93JCP,Balabanov05JCP,Prascher11TCA}---states that the increase in the magnitude of the correlation energy $|\Delta E_\mathrm{c}|$ obtained by adding the $n$th polarization function of angular momentum $l$ is roughly equal to that of adding the $(n-1)$th polarization function of angular momentum $l+1$.
For this reason, the polarization functions are added in groups, $1d$ for DZ, $2d1f$ for TZ, $3d2f1g$ for QZ, etc., and the correlation energies obtained with the cc-pV$X$Z series can often be extrapolated to the CBS limit using simple functional forms \cite{Feller11JCP,Neese11JCTC}.
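A common choice is the two-point $X^{-3}$ extrapolation, $E_X = E_{\mathrm{CBS}} + A X^{-3}$, which eliminates $A$ between two consecutive zeta levels; a minimal sketch with made-up correlation energies follows.

```python
def cbs_extrapolate(e_x, x, e_y, y):
    """Two-point X^-3 extrapolation of correlation energies."""
    return (x**3 * e_x - y**3 * e_y) / (x**3 - y**3)

# made-up correlation energies (hartree) at the TZ (X=3) and QZ (X=4) levels
e_tz, e_qz = -0.250, -0.258
print(cbs_extrapolate(e_qz, 4, e_tz, 3))  # CBS estimate lies below both inputs
```

By construction, the formula is exact for energies that follow the assumed $X^{-3}$ form.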
\subsection{The linear dependency problem}
\label{subsec:lindep_problem}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.8]{gth_old_vs_cc.pdf}
\caption{Condition number of (a) the GTH-$X$ZVP (taken from the CP2K software package\cite{Kuhne20JCP}) and (b) the GTH-cc-pV$X$Z (this work) basis sets evaluated on $16$ three-dimensional bulk materials at their respective experimental lattice parameters using a $5\times5\times5$ $k$-point mesh (the maximum condition number from all $k$-points is plotted).
For materials containing $s$-block elements, the condition numbers from the small-core and the large-core GTH-cc-pV$X$Z basis sets are comparable and the former is shown here.}
\label{fig:condnum_cmp}
\end{figure}
The primary problem that must be solved for generating Gaussian basis sets for solids is the potential high linear dependency of the basis sets \cite{Klahn77IJQC,VandeVondele07JCP,Peintinger13JCC,VilelaOliveira19JCC}, which is particularly severe for three-dimensional solids.
This is illustrated in \cref{fig:condnum_cmp}(a) for the original GTH-$X$ZVP basis sets \cite{VandeVondele05CPC} on $16$ bulk solids composed of the main-group elements from the first three rows.
The GTH-$X$ZVP basis sets, which were first reported in Ref.~\citenum{VandeVondele05CPC} and are now distributed with the CP2K package \cite{Kuhne20JCP}, were constructed by combining a valence basis optimized on free atoms at the DFT level and polarization functions of $d$ angular momentum taken from the corresponding cc-pV$X$Z basis sets.
In practice, a condition number higher than $10^{10}$ is found to be problematic in the ways discussed in~\cref{sec:introduction}.
As a result, most solids listed in \cref{fig:condnum_cmp}(a) can only be studied at the DZ level when using the original GTH basis sets.
The situation is similar for other all-electron \cite{Dunning89JCP,Woon93JCP,Weigend05PCCP} or pseudopotential-based basis sets \cite{Bennett17JCP,Bennett18JCP}, including the cc-pV$X$Z series \cite{Dunning89JCP,Woon93JCP}, as shown in Fig.~S1.
We emphasize that this issue is mostly isolated to hard crystalline solids and is less severe for molecular solids or liquids.
\subsection{Balancing accuracy and numerical stability}
\label{subsec:balancing_acc_and_numstab}
\begin{figure*}[!t]
\centering
\includegraphics[scale=0.8]{C_acc_vs_cond.pdf}
\caption{Correlation consistent basis sets generated for carbon in the GTH pseudopotential using three different valence bases.
(a) Convergence of the atomic HF energy. The grey shaded area indicates an error below $1$ m\Ha{}.
(b) Gaussian exponents of the QZ basis sets. Different colors label shells of different angular momentum as indicated by the text in the corresponding color.
(c) Increment of the atomic CCSD correlation energy with the number of polarization functions in each angular momentum channel for the $4s4p$ valence basis. The $3s3p$ and $5s5p$ valence bases give virtually the same plot (not shown).
(d) Condition number of the basis overlap matrix evaluated for bulk diamond at the experimental geometry (the maximum condition number from a $5\times5\times5$ $k$-point mesh is plotted).
The red horizontal line highlights a condition number of $10^{10}$.
(e) Error of the per-cell HF energy of bulk diamond evaluated using the Gaussian basis sets against a PW benchmark calculation.
(f) Same as (e) for the HF band gap of diamond.}
\label{fig:C_acc_vs_cond}
\end{figure*}
In our approach to basis set design for solids, we control the linear dependency by limiting the size of the valence basis, being careful not to introduce large basis set incompleteness errors.
In this section, we use the carbon element with a GTH pseudopotential as an example to discuss how a balance between accuracy and numerical stability can be reached.
We postpone a discussion of computational details to \cref{sec:computational_details}.
\Cref{fig:C_acc_vs_cond}(a) shows the error in the atomic HF energy of carbon with three optimized valence bases of increasing size (i.e., number of primitives): $3s3p$, $4s4p$, and $5s5p$.
A relatively small basis of $4s4p$ achieves an error of about 1~m\Ha{}, and that of $5s5p$ is already below $0.1$~m\Ha{}.
For each of the three valence bases, we generate correlation-consistent DZ, TZ, and QZ basis sets, by optimizing the polarization functions based on a correlated calculation.
The optimized exponents of the QZ primitives are shown in \cref{fig:C_acc_vs_cond}(b). Focusing on the $s$ primitives, we note that the \textit{largest} exponent splits into two from $3s$ to $4s$, but the \textit{smallest} exponent splits into two from $4s$ to $5s$. Only the latter is consistent with the conventional ``split-valence'' picture. We will see that this split-valence structure yields unacceptably high condition numbers in solids, but is not necessary for accurate predictions.
Despite the difference in the underlying valence basis, the optimized exponents for the polarization functions share a similar structure [\cref{fig:C_acc_vs_cond}(b)] and all exhibit perfect correlation consistency as shown explicitly for the $4s4p$ case in \cref{fig:C_acc_vs_cond}(c).
To test the applicability and the performance of the nine correlation-consistent bases obtained above in periodic calculations, we consider bulk diamond with its experimental lattice constant.
The condition numbers plotted in \cref{fig:C_acc_vs_cond}(d) exhibit a quick and monotonic increase with both the zeta-level and the size of the valence basis.
As a result, we observed convergence issues in the SCF calculations using the $5s5p$-derived TZ and QZ basis sets, consistent with the high condition numbers of the two basis sets (greater than $10^{10}$ as indicated by the red line), which is due to the presence of exponents near $0.1$ or below for this system.
For the remaining seven basis sets (DZ to QZ for $3s3p$ and $4s4p$ and DZ for $5s5p$), we used HF and MP2 to calculate various structural and energetic properties of diamond, which were then compared with benchmark results obtained using a PW basis.
For most of these properties, the performance of the correlation-consistent basis sets at the same zeta-level shows a weak dependence on the underlying valence basis (Figs.~S2 and S3).
However, the $4s4p$ family is a clear winner on the more sensitive properties, including the HF total energy and the HF band gap as shown in \cref{fig:C_acc_vs_cond}(e-f); large residual errors persist in the $3s3p$-derived basis sets, even at the QZ level, and little improvement is gained in the $5s5p$-derived basis sets (at least at the DZ level which is the only one we can test due to their high linear dependencies).
The results in this section reveal the strong effect of the choice of a valence basis on the accuracy and the numerical stability of the resulting correlation-consistent basis sets.
In particular, we see that limiting the size of the valence basis can preclude the appearance of problematic diffuse functions without significantly compromising the accuracy of calculated properties.
In the next section, we extend the strategy used here to obtain correlation-consistent basis sets for all main-group elements from the first three rows of the periodic table.
\subsection{The GTH-cc-pV$X$Z basis sets}
\label{subsec:gthccpvxz}
In the following subsections, we describe the detailed construction of our GTH-cc-pV$X$Z basis sets ($X = $ D, T, and Q) for the first three rows of the periodic table.
For selected elements (\textit{vide infra}), we also construct basis sets augmented by diffuse functions (GTH-aug-cc-pV$X$Z).
All of our basis sets are available for download in an online repository~\cite{ccgto_github} and full details of their primitive and contracted structure are given in Table S1 and Figs.~S4 and S5.
\subsubsection{The valence basis}
\label{subsubsec:gth_valence_basis}
The scheme for choosing the optimal valence basis for the carbon atom with a GTH pseudopotential, presented in \cref{subsec:balancing_acc_and_numstab}, can be made general.
Based on atomic calculations, we generate candidate correlation-consistent basis sets using valence bases of multiple sizes, which are then tested on a few reference materials.
The final basis set is then chosen as the one that remains numerically stable while predicting bulk properties that are converged with the size of the valence basis (or as converged as possible before large linear dependencies arise).
\rvv{In contrast to Dunning's original approach, we use these primitives for all zeta levels, with the final valence basis differing only in the number of uncontracted functions (however, see \ref{subsubsec:contraction_valence_orbitals} for a modification of this procedure for group VI to VIII elements).}
We emphasize that the reference periodic system serves only as a \emph{guide} for choosing an appropriate valence basis and is not used in the optimization of any parameters, unlike in previous works based on \cref{eq:molbulkopt} \cite{VandeVondele07JCP,Daga20JCTC,Li21JPCL,Zhou21JCTC,Neufeld21TBP}.
As we will see in the numerical results (\cref{sec:results_and_discussion}), our scheme maintains the important atomic electronic structure of a basis set, which is crucial for its transferability and high accuracy.
The structure of the valence basis determined this way for all elements from the first three rows of the periodic table is summarized in \cref{tab:valence_basis}.
The reference systems are chosen to be simple semiconductors and insulators formed by these elements.
The bulk properties being monitored include the equilibrium lattice constant and bulk modulus evaluated at both the HF and the MP2 levels and the HF band gap at equilibrium geometry, all evaluated with the Brillouin zone sampled by a $3\times3\times3$ $k$-point mesh.
The $s$-block elements, Li, Be, Na, and Mg, can be simulated using either a large-core pseudopotential or a small-core pseudopotential (which differ according to the treatment of core/semi-core electrons), and we present optimized basis sets for them separately.
Furthermore, for these $s$-block elements, the exponents of the valence $p$ orbitals cannot be determined in the usual way because these orbitals are unoccupied in the atomic HF ground state.
In the literature, these exponents are often determined by minimizing the HF energy of the corresponding $s \to p$ valence excited state \cite{Prascher11TCA}.
We found that this approach leads to valence $p$ orbitals that are too diffuse and cause significant linear dependencies in the oxides and fluorides of these elements using the QZ basis sets.
Therefore, we optimize the valence $p$ orbitals at the correlated level in the same way as in the determination of the polarization functions, described more in the next section.
We verified that the DZ and TZ basis sets obtained from both schemes give very similar numerical results for all properties tested in \cref{sec:results_and_discussion}.
The size of the valence basis is found to correlate with the hardness of the underlying pseudopotentials as reported in previous work \cite{Li21JPCL} but is otherwise smaller than the valence bases in the original GTH-TZVP and QZVP basis sets.
For this reason, the condition numbers of the new basis sets are significantly reduced compared to the original GTH-$X$ZVP series as shown in \cref{fig:condnum_cmp}(b).
\rvv{This can also be seen from the \emph{compactness} of our valence basis (\cref{fig:sp_expn_rat}), defined as the ratio of the smallest exponent in our valence basis to that in the corresponding all-electron cc-pV$5$Z basis set (geometric mean is taken in case of multiple angular momentum channels).}
\rvv{From this perspective, the constraint on the size of the valence basis is strongest for the $s$-block metals (compactness $> 3$), which explains the relatively large error of the atomic HF energy for these elements, but gradually relaxed for elements of higher group numbers (compactness $\approx 1$).}
As we will see by thorough numerical tests in \cref{sec:results_and_discussion}, this way of constraining the size of the valence basis does not degrade the performance of the full correlation-consistent basis sets in bulk calculations.
\rvv{The compactness of the valence basis in \cref{fig:sp_expn_rat} also provides a practical guide to constructing valence bases of similar quality for other nuclear potentials (including the full Coulomb potential for all-electron calculations).}
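For concreteness, the compactness measure can be computed as follows; the exponents in the example are hypothetical, standing in for the smallest $s$- and $p$-exponents of a valence basis and of the reference all-electron cc-pV$5$Z set.

```python
from math import prod

def compactness(valence_min, reference_min):
    """Geometric mean over angular momentum channels of the ratio of the
    smallest valence-basis exponent to the smallest reference exponent."""
    ratios = [v / r for v, r in zip(valence_min, reference_min)]
    return prod(ratios) ** (1 / len(ratios))

# hypothetical smallest exponents for the s and p channels
print(compactness([0.30, 0.25], [0.10, 0.08]))  # > 1: more compact than reference
```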
\begin{table}[t]
\centering
\caption{The active electrons (not covered by the pseudopotential), the valence basis structure (i.e.,~number of primitives), the errors of the atomic HF energy (in m\Ha{}, \rvv{evaluated as the energy difference relative to a sufficiently large valence basis}), and the reference systems on which the basis linear dependency and accuracy are monitored for the GTH-cc-pV$X$Z basis sets of all elements studied in this work.
For the $s$-block elements, the valence bases are listed for both the large-core and the small-core pseudopotentials.
The valence $p$ orbitals of the $s$-block elements are obtained in a different manner (see the discussion at the end of \cref{subsubsec:gth_valence_basis}) and are denoted by a ``+'' sign in the table.}
\label{tab:valence_basis}
\begin{ruledtabular}
\begin{tabular}{lllll}
Element & Active electrons & Valence basis & HF error & Ref.~sys.~ \\
\hline
H & $[1s^1]$ & $4s$ & $0.19$ & LiH \\
He & $[1s^2]$ & $5s$ & $0.05$ & solid He\\
Li
& $[2s^1]$ & $2s+2p$ & $1.24$ & LiH, LiF, LiCl \\
& $[1s^22s^1]$ & $4s+\rvv{4}p$ & $2.39$ & LiH, LiF, LiCl \\
Be
& $[2s^2]$ & $\rvv{3}s+3p$ & $\rvv{0.12}$ & BeO, BeS \\
& $[1s^22s^2]$ & $\rvv{5}s+4p$ & $\rvv{1.53}$ & BeO, BeS \\
B & $[2s^22p^1]$ & $3s3p$ & $\rvv{8.15}$ & BN, BP \\
C & $[2s^22p^2]$ & $4s4p$ & $1.13$ & diamond, SiC \\
N & $[2s^22p^3]$ & $5s\rvv{5}p$ & $\rvv{0.19}$ & BN, AlN \\
O & $[2s^22p^4]$ & $5s5p$ & $0.41$ & BeO, MgO \\
F & $[2s^22p^5]$ & $5s5p$ & $0.72$ & LiF, NaF \\
Ne & $[2s^22p^6]$ & $6s6p$ & $0.16$ & solid Ne \\
Na
& $[3s^1]$ & $2s+2p$ & $0.93$ & NaF, NaCl \\
& $[2s^22p^63s^1]$ & $5s5p+1p$ & $2.44$ & NaF, NaCl \\
Mg
& $[3s^2]$ & $2s+1p$ & $4.32$ & MgO, MgS \\
& $[2s^22p^63s^2]$ & $4s4p+1p$ & $17.27$ & MgO, MgS \\
Al & $[3s^23p^1]$ & $3s2p$ & $0.62$ & AlN, AlP \\
Si & $[3s^23p^2]$ & $4s4p$ & $0.05$ & Si, SiC \\
P & $[3s^23p^3]$ & $4s4p$ & $0.13$ & AlN, AlP \\
S & $[3s^23p^4]$ & $5s4p$ & $0.18$ & BeS, MgS \\
Cl & $[3s^23p^5]$ & $5s4p$ & $0.24$ & LiCl, NaCl \\
Ar & $[3s^23p^6]$ & $4s4p$ & $0.76$ & solid Ar
\end{tabular}
\end{ruledtabular}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[scale=0.35]{sp_expn_rat.pdf}
\caption{Compactness of the valence basis developed in this work, defined as the ratio of the smallest exponent in our basis to that in the all-electron cc-pV$5$Z basis set for the same element.
\rvv{For $s$-block elements, the compactness of the valence basis optimized for the small-core and the large-core pseudopotentials is comparable, and we only show the results for the former.}}
\label{fig:sp_expn_rat}
\end{figure}
\subsubsection{The polarization functions}
\label{subsubsec:gth_polarization_functions}
We applied Dunning's scheme for determining the polarization functions up to QZ (i.e.,~$3d2f1g$) without modifications to the elements from group III to VIII.
However, we used coupled-cluster theory with single and double excitations (CCSD)~\cite{PurvisIII82JCP} instead of the more common configuration interaction with single and double excitations (CISD)~\cite{Dunning89JCP,Woon93JCP,Prascher11TCA} for the calculation of the correlation energy.
The increment of the correlation energy with the number of polarization functions follows the rule of correlation consistency for all these elements.
Plots similar to \cref{fig:C_acc_vs_cond}(c) for other atoms are shown in Fig.~S6.
The $s$-block elements (including hydrogen) require special treatment.
The hydrogen atom has only one electron and hence no correlation energy.
We follow ref \citenum{Prascher11TCA} and minimize the correlation energy of a \ce{H2} molecule with the experimental bond length of $0.7414$ \AA{} \cite{Huber79Book}, which gives polarization functions showing good correlation consistency [Fig.~S6(a)].
For $s$-block elements with small-core pseudopotentials, the $[1s^2]$ core electrons (for Li and Be) or the $[2s^2 2p^6]$ semi-core electrons (for Na and Mg) are frozen in the CCSD calculations in order to determine the polarization functions that only account for the valence electron correlation, following ref \citenum{Prascher11TCA}.
With this choice, the same number of electrons are correlated when using either the large-core or the small-core pseudopotentials, leading to similar exponents for the optimized polarization functions for both pseudopotentials.
For Li and Na, with only one correlated valence electron each, we apply a treatment similar to that used for the hydrogen atom and minimize the correlation energy of a \ce{Li2} molecule and a \ce{Na2} molecule at the respective experimental bond lengths of $2.673$ \AA{} and $2.303$ \AA{}. \cite{Huber79Book}
However, we found significant linear dependencies for both elements in periodic calculations upon adding the second polarization function in each angular momentum channel.
We thus choose a $1d$, $1d1f$, and $1d1f1g$ structure for the \emph{valence}-correlated DZ, TZ, and QZ polarization functions of these elements.
For Be and Mg, the regular structure of polarization functions is kept, but deviation from ideal correlation consistency is observed, wherein the third $d$ function recovers much less correlation energy than the second $f$ function [Figs.~S6(e,f,o,p)].
Similar observations have also been reported for these elements in the all-electron cases. \cite{Prascher11TCA}
\rvv{Despite the use of the CCSD correlation energy and a pseudopotential, the exponents of the polarization functions in our GTH-cc-pV$X$Z basis sets in general agree very well with those in the all-electron cc-pV$X$Z basis sets (Figs.~S4 and S5).
The few exceptions come from the third-row elements such as Al and Si, where our CCSD-optimized $3d$ set (for GTH-cc-pVQZ) shows a larger exponent splitting from the $2d$ set (for GTH-cc-pVTZ) than the corresponding $3d$ set in the all-electron cc-pVQZ basis sets.
In these cases, we verified explicitly that very similar exponents are obtained by minimizing the CISD correlation energy instead.
Thus, the observed difference between our GTH-cc-pV$X$Z series and the cc-pV$X$Z series for these elements is due to the pseudopotential.}
%
\subsubsection{Contraction of valence orbitals}
\label{subsubsec:contraction_valence_orbitals}
In an all-electron cc-pV$X$Z basis set, the primitive orbitals in the valence basis describe both the core and the valence electrons of an atom, with those for the core being contracted in the way discussed in \cref{subsec:correlation_consistent_basis_sets} to reduce the computational cost.
We follow this rule formally in constructing the GTH-cc-pV$X$Z basis sets \rvv{for most elements}, where the most diffuse one, two, and three primitive orbitals in each angular momentum channel of the valence basis are released from the contraction for $X = $ D, T, and Q.
(In case of, e.g.,~three primitive orbitals in an angular momentum channel, TZ and QZ will have the same valence basis, which is fully uncontracted in that angular momentum channel.)
\rvv{However, for group VI to VIII elements where} \rvv{the size of the valence basis is only weakly constrained (\cref{fig:sp_expn_rat}),} \rvv{the procedure above results in suboptimal performance, especially at the DZ level.
For these elements, we instead augment the fully contracted valence basis with $1s1p$, $2s2p$, and $3s3p$ primitives ($s$-only for helium), determined separately from minimizing the atomic correlation energy (in the presence of the polarization functions), to make the DZ, TZ, and QZ basis sets.}
The contraction coefficients determined from atomic HF orbitals might have limited transferability especially for group III to VIII elements and $s$-block elements with large-core pseudopotentials, because they have no core electrons.
For this reason, we also generate valence-uncontracted DZ and TZ basis sets where the valence basis is made to match higher zeta-levels.
For example, a GTH-cc-pV(T)DZ basis set has the polarization functions taken from GTH-cc-pVDZ and the valence basis from GTH-cc-pVTZ.
We will see the importance of such valence uncontraction in \cref{sec:results_and_discussion} in the calculation of virtual bands.
\subsubsection{\rvv{Extensions}}
\label{subsubsec:gth_diffuse}
\rvv{Like the all-electron cc-pV$X$Z series, our basis sets can be straightforwardly extended by core-valence correlating functions \cite{Woon95JCP,Peterson02JCP}, tight $d$ functions \cite{Dunning01JCP}, etc.}
\rvv{In this work, we explore one such extension, namely the augmentation with diffuse functions}~\cite{Woon94JCP}, which may be appropriate for the simulation of molecular crystals \cite{DelBen12JCTC,Yang14Science} or surface phenomena \cite{Schimka10NM,Tsatsoulis17JCP,Lau21JPCL}.
In particular, we find that the noble gas solids, which serve as the reference materials for evaluating our noble gas basis sets, benefit substantially from augmentation with diffuse functions.
We thus augment all Gaussian basis sets for the three noble gas elements used in this work by adding one diffuse function to each angular momentum channel.
\rvv{Because of the low density of noble gas solids, the condition numbers of their overlap matrices are $10^{7}$ or less, even after augmentation.}
The exponent of this augmentation function is chosen to be proportional to the exponent of the most diffuse function in the non-augmented basis: $\alpha_\tr{aug} = x \alpha_\tr{min}$, where $x$ is the analogous ratio of exponents in the all-electron aug-cc-pV$X$Z basis set for the same element. \cite{Woon94JCP}
We name these basis sets ``GTH-aug-cc-pV$X$Z''.
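A minimal sketch of the augmentation rule $\alpha_\tr{aug} = x\,\alpha_\tr{min}$ described above; all numerical values here are hypothetical and only illustrate the scaling:

```python
def aug_exponent(alpha_min, alpha_min_ae, alpha_aug_ae):
    """Exponent of the added diffuse function: alpha_aug = x * alpha_min,
    where x = alpha_aug_ae / alpha_min_ae is the ratio between the diffuse
    exponent and the most diffuse regular exponent in the all-electron
    aug-cc-pVXZ basis for the same element and angular momentum."""
    return (alpha_aug_ae / alpha_min_ae) * alpha_min

# Hypothetical numbers: if aug-cc-pVXZ halves the most diffuse exponent,
# the same factor is applied to the pseudopotential basis.
print(aug_exponent(alpha_min=0.20, alpha_min_ae=0.10, alpha_aug_ae=0.05))  # 0.1
```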
\section{Computational details}
\label{sec:computational_details}
The protocol for generating correlation-consistent basis sets described in \cref{sec:methodology} is followed, and all calculations are performed using the PySCF software package \cite{Sun18WIRCMS,Sun20JCP}.
The spin-restricted (or spin-restricted open-shell) HF and the spin-unrestricted CCSD are used to optimize the valence basis and the polarization functions, respectively.
For periodic calculations, the recently developed range-separated Gaussian density fitting \cite{Ye21JCPa,Ye21JCPb} (RSGDF) is used to handle the electron repulsion integrals.
The density fitting auxiliary basis is an even-tempered Gaussian basis with a progression factor $\beta = 2.0$ (generated automatically by PySCF).
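For concreteness, an even-tempered progression with $\beta = 2.0$ can be generated as follows; the starting exponent is an arbitrary placeholder, and in practice PySCF constructs the auxiliary basis automatically:

```python
def even_tempered(alpha0, beta, n):
    """Even-tempered Gaussian exponents alpha_i = alpha0 * beta**i for
    i = 0..n-1; beta = 2.0 is the progression factor quoted in the text."""
    return [alpha0 * beta ** i for i in range(n)]

print(even_tempered(0.25, 2.0, 4))  # [0.25, 0.5, 1.0, 2.0]
```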
Finite-size errors associated with the divergence of the HF exchange integral at $G=0$ are handled using a Madelung constant, as described in refs \citenum{Paier06JCP,Broqvist09PRB,Sundararaman13PRB}.
The basis set parameters are optimized using the Nelder-Mead algorithm \cite{Nelder65CJ} (as implemented in the SciPy library \cite{Virtanen20NM}), which we found to give the same exponents as the Broyden-Fletcher-Goldfarb-Shanno \cite{Press92Book} or the conjugate gradient \cite{Hestenes52JRNBS} algorithms (as used in previous work \cite{Daga20JCTC,Zhou21JCTC}) in most cases, but to be more robust against local minima in more challenging situations.
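The optimizer setup can be illustrated with the following sketch, which replaces the actual objective (an atomic HF or CCSD energy evaluated by a quantum chemistry code) with a simple surrogate; the log-parametrization that keeps exponents positive is our illustrative choice, not necessarily the production setup:

```python
import numpy as np
from scipy.optimize import minimize

def surrogate_energy(log_alpha):
    """Surrogate standing in for the atomic energy as a function of the
    Gaussian exponents; optimizing in log space keeps exponents positive."""
    alpha = np.exp(log_alpha)
    target = np.array([0.3, 2.0, 13.0])  # hypothetical "optimal" exponents
    return float(np.sum((alpha - target) ** 2))

res = minimize(surrogate_energy,
               x0=np.log([0.5, 1.0, 10.0]),  # rough even-tempered-like guess
               method='Nelder-Mead',
               options={'xatol': 1e-10, 'fatol': 1e-12, 'maxiter': 2000})
print(np.exp(res.x))  # ~ [0.3, 2.0, 13.0]
```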
We assess the quality of the GTH-cc-pV$X$Z basis sets along with their valence-uncontracted counterparts at both HF and MP2 levels on a test set of $19$ three-dimensional bulk systems listed in Table S2 (the $16$ materials shown in \cref{fig:condnum_cmp} plus the solids of helium, neon, and argon).
Results from the original GTH-DZVP basis sets, which show no linear dependency issues on these materials, will also be reported.
For the three noble gas elements, we augment the GTH-DZVP basis with extra diffuse functions in the same manner as described in \cref{subsubsec:gth_diffuse}.
For systems containing $s$-block elements that have small-core and large-core pseudopotentials, calculations in the GTH-cc-pV$X$Z family use the corresponding small-core and large-core basis sets that we developed, while those in the GTH-DZVP family use the small-core basis sets for both pseudopotentials, because large-core GTH-DZVP basis sets do not exist.
The selected bulk properties include the cell energy ($E_{\tr{cell}}$), the cohesive energy ($E_{\tr{coh}}$), the band gap ($E_{\tr{gap}}$) and band structure (only at the HF level), and the equilibrium lattice constant ($a_0$) and bulk modulus ($B_0$).
For $E_{\tr{cell}}$, $E_{\tr{coh}}$, and $E_{\tr{gap}}$, single-point calculations at experimental geometries are performed with the Brillouin zone sampled using a $5\times5\times5$ $k$-point mesh (evenly spaced and $\Gamma$-point included) without further extrapolation.
The cohesive energy is counterpoise corrected for basis set superposition error.
The band structure is obtained by performing individual single-point calculations using a $3\times3\times3$ $k$-point mesh shifted along a chosen $k$-point path.
The $a_0$ and $B_0$ are obtained by scanning the lattice constant around the HF minimum and fitting the total energy curve to the Birch-Murnaghan equation of state \cite{Murnaghan44PNAS,Birch47PR}.
A $3\times3\times3$ $k$-point mesh is used in all these calculations, except the results in \cref{tab:LiH_cf_PAW}, which were calculated using a $5\times 5\times 5$ $k$-point mesh to facilitate comparison with literature values.
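A self-contained sketch of the equation-of-state fit, using synthetic data in arbitrary units; in practice, the inputs are the computed total energies at scanned lattice constants:

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, B0p):
    """Third-order Birch-Murnaghan equation of state, E(V)."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * B0p + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta))

# Synthetic, noiseless E(V) scan around the minimum (arbitrary units).
true = dict(E0=-10.0, V0=40.0, B0=0.5, B0p=4.0)
V = np.linspace(36.0, 44.0, 9)
E = birch_murnaghan(V, **true)

popt, _ = curve_fit(birch_murnaghan, V, E, p0=[-9.0, 39.0, 0.4, 4.0])
E0, V0, B0, B0p = popt
print(round(V0, 6), round(B0, 6))  # recovers ~40.0 and ~0.5
```

The equilibrium lattice constant follows from the fitted $V_0$ and the cell geometry, and $B_0$ requires a unit conversion from energy/volume to GPa (omitted here).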
The errors in the above properties due to the Gaussian basis set incompleteness are determined by comparison to calculations with a PW basis made large enough to essentially achieve the CBS limit; the HF total energy, the HF band energy, and the MP2 energy are all converged to an accuracy better than $0.1$ meV/cell.
We note that the $k$-point meshes used in all of our calculations are sufficiently large to eliminate finite-size effects in \textit{all} of the basis set errors and in \textit{most} (but not all) of the predicted properties.
In our PW calculations, the HF exchange is calculated with the adaptively compressed exchange (ACE) operator. \cite{Lin16JCTC}
To converge the MP2 correlation energy to the CBS limit, we extrapolate according to the asymptotic behavior~\cite{Shepherd12PRB}
\begin{equation} \label{eq:MP2_corr_nvir}
E_{\tr{corr}}^{\tr{MP2}}(n_{\tr{vir}})
= A n_{\tr{vir}}^{-1} + E_{\tr{corr}}^{\tr{MP2}}(\infty),
\end{equation}
where $n_{\tr{vir}}$ is the number of virtual bands per $k$-point.
The proper range of $n_{\tr{vir}}$ for a safe extrapolation using \cref{eq:MP2_corr_nvir} is determined by monitoring the convergence of the properties calculated using the estimated $E_{\tr{corr}}^{\tr{MP2}}(\infty)$.
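The extrapolation amounts to a linear least-squares fit in $1/n_{\tr{vir}}$, as in this sketch with synthetic data that obeys the asymptotic form exactly:

```python
import numpy as np

def extrapolate_mp2(n_vir, e_corr):
    """Extrapolate MP2 correlation energies to n_vir -> infinity via a
    linear least-squares fit in 1/n_vir: E(n) = A/n + E(inf)."""
    A, e_inf = np.polyfit(1.0 / np.asarray(n_vir, float),
                          np.asarray(e_corr, float), 1)
    return e_inf, A

# Synthetic data with E(inf) = -1.2 and A = 3.0 (arbitrary units):
n = np.array([100, 200, 300, 400])
e = -1.2 + 3.0 / n
e_inf, A = extrapolate_mp2(n, e)
print(round(e_inf, 6), round(A, 6))  # -1.2 3.0
```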
For the bulk systems composed of the elements from group III to V, well-converged estimates of $a_0^{\tr{MP2}}$ and $B_0^{\tr{MP2}}$ can be obtained by using $n_{\tr{vir}} = 350$--$400$ with uncertainties of about $0.1$ pm and $1$ GPa, respectively.
Unfortunately, a much larger $n_{\tr{vir}}$, which is beyond the reach of our computational resources, is needed for the other bulk systems that contain the $s$-block or the noble gas elements and for all atomic calculations.
Therefore, at the MP2 level, we are unable to evaluate the basis set errors for the properties of these materials or for the cohesive energy of any material.
However, we will study LiH with the small-core pseudopotential for Li as one such example of a difficult case.
For this material, we will compare our results to MP2 calculations reported in the literature and discuss how the Gaussian basis sets developed in this work can be leveraged to significantly improve the convergence of correlated calculations using a PW basis.
\section{Results and discussion}
\label{sec:results_and_discussion}
\begin{figure}[!t]
\centering
\includegraphics[scale=0.8]{hf_totcoha0b0_nomax_group.pdf}
\caption{The root-mean-square error of (a) the cell energy, (b) the cohesive energy, (c) the equilibrium lattice constant, and (d) the equilibrium bulk modulus calculated at the HF level using different Gaussian basis sets for the $19$ bulk materials.}
\label{fig:hf_totcoha0b0}
\end{figure}
\subsection{Occupied bands: HF ground-state properties}
\label{subsec:occ_bands}
\begin{figure*}[t]
\centering
\includegraphics[scale=0.8]{hf_egap_nomax_group_diamondband.pdf}
\caption{(a) The root-mean-square error of the HF band gap calculated using different Gaussian basis sets for the $19$ bulk materials.
(b-e) HF band structure for diamond calculated using different Gaussian basis sets: (b) GTH-cc-pVDZ, (c) GTH-cc-pV(T)DZ, (d) GTH-cc-pVTZ, and (e) GTH-cc-pVQZ.
The PW bands are shown as black dots.}
\label{fig:hf_egap_and_C_bands}
\end{figure*}
We first study the basis set performance for HF ground-state properties, which reflect the quality of the occupied bands.
The root-mean-square errors (RMSEs) of the total energy $E_{\tr{tot}}^{\tr{HF}}$, the cohesive energy $E_{\tr{coh}}^{\tr{HF}}$, the equilibrium lattice constant $a_{0}^{\tr{HF}}$, and the equilibrium bulk modulus $B_{0}^{\tr{HF}}$ are presented in \cref{fig:hf_totcoha0b0} for different Gaussian basis sets, where the errors are computed with respect to our PW results.
The errors for each material are shown in Figs.~S7--S10.
The most obvious trend in \cref{fig:hf_totcoha0b0} is the monotonic decrease of the error of all four properties by following the hierarchy:
GTH-DZVP, GTH-cc-pVDZ, GTH-cc-pV(T)DZ, GTH-cc-pV(Q)DZ, GTH-cc-pVTZ, GTH-cc-pV(Q)TZ, GTH-cc-pVQZ.
This ranking is consistent with the flexibility of the basis sets except for GTH-DZVP, which differs from GTH-cc-pVDZ only in the size of the valence basis and the basis parameters.
The larger error of the GTH-DZVP basis mainly comes from the two beryllium compounds, where the contraction coefficients for the valence orbitals of Be are poor, and from solid neon, where the valence basis is too small (Figs.~S7--S10).
We verified that a modified GTH-DZVP basis for Be, with the contraction coefficients re-computed using our code, gives accuracy very similar to that of our GTH-cc-pVDZ basis.
Similar reparametrization for the GTH-TZVP and GTH-QZVP basis sets can be performed but is not useful in practice due to the high linear dependencies of these basis sets.
Among our correlation-consistent basis sets, increasing the zeta-level is significantly more effective at reducing the errors than de-contracting the valence basis, which suggests reasonable transferability of the valence contraction coefficients determined from the atomic HF orbitals, at least for describing the occupied bands (see also the discussion on the HF band structure in \cref{subsec:lowe_vir_bands}).
The relatively large RMSE in the HF total energy [\cref{fig:hf_totcoha0b0}(a)] (about 4~m\Ha{} even for the GTH-cc-pVQZ basis) is dominated by the two small-core magnesium compounds (Fig.~S7), which inherit the large HF energy error of the magnesium atom, as shown in \cref{tab:valence_basis}.
Nonetheless, the error in the total energy does not affect computed properties, suggesting a robust and systematic error cancellation in these basis sets.
\subsection{Low-energy virtual bands: HF band gap and band structure}
\label{subsec:lowe_vir_bands}
The RMSEs of the HF band gap calculated using different Gaussian basis sets are summarized in \cref{fig:hf_egap_and_C_bands}(a) (the errors for each material are shown in Fig.~S11), which exhibits an overall trend similar to that discussed in \cref{subsec:occ_bands} for the HF ground-state properties.
However, a major distinction in the band gap calculations is the significant reduction of error by de-contracting the valence basis of GTH-cc-pVDZ, which indicates limited transferability of the valence contraction coefficients for describing the virtual bands.
Nonetheless, even without the valence de-contraction, the smallest GTH-cc-pVDZ basis already achieves an RMSE below $0.1$ eV, which is sufficient for most band gap calculations.
We emphasize that the additional diffuse functions determined in the way described in \cref{subsubsec:gth_diffuse} are crucial for obtaining accurate band gaps for the noble gas solids (Fig.~S11), reducing the error from a few eVs to less than $0.1$ eV in the most extreme case.
As an example of the performance for the band structure, in \cref{fig:hf_egap_and_C_bands}(b-e) we compare the valence and low-energy conduction bands of diamond calculated using our Gaussian basis sets to those calculated using PWs.
Similar plots for all other materials are displayed in Figs.~S14--S41.
The smallest GTH-cc-pVDZ basis already gives an accurate description of the valence and the first few conduction bands as shown in \cref{fig:hf_egap_and_C_bands}(b), which is consistent with the good performance observed for the HF ground-state properties (\cref{fig:hf_totcoha0b0}) and the band gap [\cref{fig:hf_egap_and_C_bands}(a)].
The fixed contraction coefficients in the valence basis of GTH-cc-pVDZ are responsible for the deviations from the PW band structure immediately beyond the first few virtual bands (e.g., at the $\Gamma$ and L points, about $30$ eV above the valence band maximum).
Using the valence-uncontracted GTH-cc-pV(T)DZ basis fixes this problem and shows quantitative agreement with the PW bands up to about 40~eV, as shown in \cref{fig:hf_egap_and_C_bands}(c).
The quantitative agreement between the Gaussian and the PW band structures extends to even higher energies by using the GTH-cc-pVTZ ($\sim 70$ eV) and the GTH-cc-pVQZ ($\sim 90$ eV) basis sets, as shown in \cref{fig:hf_egap_and_C_bands}(d) and (e), respectively.
We note that polarization functions of angular momentum $f$ or higher, which are commonly absent from Gaussian basis sets meant for use in DFT calculations, can be important for the low-energy band structure.
For example, the authors of ref \citenum{Lee21JCP} highlighted a state in the band structure of MgO (the fifth conduction band at the $\Gamma$ point) that is missing unless a very large QZ basis set is used ($167$ basis functions per MgO unit).
By contrast, our calculations on the same system (Fig.~S29) suggest that the observed missing state is primarily an $f$-state localized on the oxygen atom and can already be captured accurately using our GTH-cc-pVTZ basis set with only $59$ basis functions per MgO unit.
\subsection{Convergence to the full virtual space limit: MP2 ground-state properties}
\label{subsec:fvs_limit}
The discussion in the previous two sections has focused on the occupied and the low-lying virtual bands.
In this section, we study the basis set quality in correlated calculations at the MP2 level, which in principle requires an infinite number of virtual bands in order to reach the CBS limit.
In a PW basis, the CBS limit is approached in a \emph{dense} manner by increasing the number of virtual bands $n_{\tr{vir}}$ being correlated from low to high energy.
The Gaussian virtual bands follow the dense manifold of the PW bands in the low-energy regime (as discussed in \cref{subsec:lowe_vir_bands}), but become \emph{sparse} at higher energy, effectively skipping some states.
Ideally, the correlation energy obtained using either basis should show the asymptotic $n_{\tr{vir}}^{-1}$ convergence (\ref{eq:MP2_corr_nvir}) for sufficiently large $n_{\tr{vir}}$, but this may or may not be achievable with the available computational resources.
For the seven materials that do not contain $s$-block or noble gas elements, i.e.,~BN, BP, AlN, AlP, C, Si, and SiC, reliable extrapolations using \cref{eq:MP2_corr_nvir} can be performed to obtain accurate estimates of $E_{\tr{corr}}^{\tr{MP2}}(\infty)$ in the PW basis (see \cref{sec:computational_details}), from which we compute reference values for the equilibrium lattice constant $a_0^{\tr{MP2}}$ and the equilibrium bulk modulus $B_0^{\tr{MP2}}$.
The RMSEs of these two properties calculated using different Gaussian basis sets are shown in \cref{fig:mp2_a0b0}(a-b).
The errors for each material are shown in Figs.~S12 and S13.
We also compute these two properties using the PW basis without extrapolation for a series of $n_{\tr{vir}}$ and plot the RMSEs in \cref{fig:mp2_a0b0}(c-d).
The first three points in \cref{fig:mp2_a0b0}(c-d) with $n_{\tr{vir}} = 20$, $50$, and $100$ are chosen to match roughly the number of virtual bands in the GTH-cc-pV$X$Z basis set for $X = $ D, T, and Q, respectively.
For both properties, the Gaussian basis exhibits the familiar hierarchy observed in the HF band gap calculations (\cref{subsec:lowe_vir_bands}), where increasing the zeta-level significantly improves the accuracy, and de-contracting the valence basis is also effective at the DZ level [\cref{fig:mp2_a0b0}(a-b)].
The difference between the GTH-DZVP basis and the GTH-cc-pVDZ basis is somewhat smaller than in the HF calculations mainly because the problematic beryllium compounds are not included in the statistics here.
For correlated calculations with a Gaussian basis, basis set errors enter through both the HF energy and the correlation energy.
In contrast, for correlated calculations with a PW basis, basis set errors enter through the correlation energy only, because the HF energy is essentially converged with respect to the number of PWs.
Indeed, the errors observed in \cref{fig:mp2_a0b0}(a-b) for the DZ and TZ bases are dominated by errors in the HF energy, and the results can be significantly improved
by combining the MP2 correlation energies calculated in a given basis set with the more accurate HF energies obtained from the GTH-cc-pVQZ basis, as shown by the thinner white bars in \cref{fig:mp2_a0b0}(a-b).
In a similar spirit, one could perform a HF calculation in a large Gaussian basis and then perform an MP2 calculation with some number of frozen virtual orbitals. \cite{Lange20MP,Wang20JCTC}
Similar corrections are impossible with the original GTH basis set series due to the high linear dependencies at the QZ (or even TZ) level [\cref{fig:condnum_cmp}(a)].
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{mp2_a0b0_gtovspw_bygroup.pdf}
\caption{Root-mean-square errors of (a,c) the MP2 equilibrium lattice constant and (b,d) the MP2 equilibrium bulk modulus calculated using (a-b) different Gaussian basis sets and (c-d) the PW basis with increasing number of virtual bands, $n_{\tr{vir}}^{\tr{PW}}$, where the first three points ($n_{\tr{vir}}^{\tr{PW}} = 20$, $50$, and $100$) are chosen to match the size of the virtual space of the GTH-cc-pV$X$Z basis for $X = $ D, T, and Q, respectively.
In each case, the errors are evaluated against CBS-extrapolated PW results for the seven bulk materials not containing the $s$-block or the noble gas elements: BN, BP, AlN, AlP, C, Si, and SiC.
For the DZ and TZ Gaussian basis sets (except for GTH-DZVP), the HF-corrected results (see main text for explanation) are shown as the thinner bar with black edge.}
\label{fig:mp2_a0b0}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.8]{C_LiH_mp2_qzhf_cpw.pdf}
\caption{Convergence of the MP2 correlation energy (left column) and the MP2 equilibrium lattice constant (right column) with the number of virtual bands included in the calculations for two materials: diamond (a-b) and LiH (c-d).
The small-core pseudopotential is used for Li.
In each case, the small black dots are results obtained using a PW basis, while the blue, orange, and green filled circles are results obtained using the GTH-cc-pVDZ, TZ, and QZ basis sets, respectively.
For DZ and TZ, the open circles are $a_0^{\tr{MP2}}$ calculated by combining the MP2 correlation energy with the HF energy computed using the QZ basis.
Gray shaded areas indicate the extrapolated PW results and their uncertainty.
For LiH, the ``+'' symbols are results obtained by using the PW-resolved Gaussian virtual bands derived from the GTH-cc-pV$X$Z basis sets for the virtual space.
}
\label{fig:C_LiH_mp2_qzhf}
\end{figure}
For the MP2 calculation of $a_0$ in a PW basis [\cref{fig:mp2_a0b0}(c)], the error of the smallest calculations with only $20$ virtual bands is much smaller than the error of the calculations with the GTH-cc-pVDZ basis without the HF correction, but is similar to the error after the HF correction.
The situation is different for $B_0$ [\cref{fig:mp2_a0b0}(d)], where the error of the PW calculations with $20$ virtual bands is notably larger than that of the calculations with the GTH-cc-pVDZ basis, even without the HF correction.
As $n_{\tr{vir}}$ increases, the errors of both properties decay very slowly in the PW basis, especially for small $n_{\tr{vir}}$.
The largest PW calculations with $n_{\tr{vir}} = 400$ only achieve an RMSE comparable to the HF-corrected GTH-cc-pVTZ basis for $a_0$ and the HF-corrected GTH-cc-pVDZ basis for $B_0$, where the two Gaussian basis sets use only about $50$ (TZ) and $20$ (DZ) virtual bands in the correlated calculations.
The slow convergence of $a_0$ and $B_0$ with the number of PWs is caused by the small imbalance of the correlation energies evaluated using a fixed $n_{\tr{vir}}$ at the different cell volumes needed for the equation of state.
This is illustrated in \cref{fig:C_LiH_mp2_qzhf}(a-b) for diamond. At a given lattice constant (here, the equilibrium lattice constant), the MP2 correlation energies evaluated in both the PW and the Gaussian basis sets exhibit the desired $n_{\tr{vir}}^{-1}$ convergence (\ref{eq:MP2_corr_nvir}) [\cref{fig:C_LiH_mp2_qzhf}(a)].
But only the Gaussian basis sets converge quickly to the CBS limit for $a_0$ [\cref{fig:C_LiH_mp2_qzhf}(b)].
This behavior occurs because the PW basis is ignorant of the underlying atomic structure and thus exhibits an unphysical sensitivity to the cell volume.
The situation is even worse in correlated calculations of molecular crystals \cite{DelBen12JCTC,DelBen13JCTC} and free molecules or atoms \cite{Gruneis11JCTC}, due to the large amount of empty space between atoms or molecules.
The Gaussian basis sets, along with the well-established BSSE correction \cite{Boys70MP,vanDuijneveldt94CR}, are more suitable for describing electron correlation in these systems.
The poor performance of PWs is exacerbated for elements with hard pseudopotentials, e.g.,~the $s$-block elements with core or semi-core electrons.
We illustrate this behavior for the ionic crystal LiH, using the small-core pseudopotential for Li.
As shown in \cref{fig:C_LiH_mp2_qzhf}(c), the convergence of the PW MP2 correlation energy is much slower than for diamond, with the asymptotic $n_{\tr{vir}}^{-1}$ convergence (\ref{eq:MP2_corr_nvir}) not achieved even when $n_{\tr{vir}} \approx 400$.
This slow convergence yields a large uncertainty in the extrapolated $E_{\tr{corr}}^{\tr{MP2}}(\infty)$ (grey shaded area), which is similar to the GTH-cc-pVQZ result (green circle) obtained with only \rvv{$65$} virtual bands.
Extrapolation of the Gaussian basis results suggests that the extrapolated PW result is likely an underestimate.
The convergence in the PW basis is even slower for $a_0$ as shown in \cref{fig:C_LiH_mp2_qzhf}(d), which yields an extrapolated value that is about $4$~pm below that obtained with our Gaussian basis sets.
The Gaussian basis results converge much more quickly and their correctness is verified by comparison to literature values \cite{Gruneis10JCP} as shown in \cref{tab:LiH_cf_PAW}.
We end the discussion by showing that the convergence of PW-based correlated calculations can be significantly improved by leveraging a good Gaussian basis, such as the one developed in this work.
Specifically, we compute the virtual bands in a PW-resolved Gaussian basis \cite{Booth16JCP,Morales20JCP} generated by projecting out the converged PW occupied bands from our GTH-cc-pV$X$Z basis sets, followed by orthonormalization.
As shown in \cref{fig:C_LiH_mp2_qzhf}(c-d) for LiH, the MP2 calculations that use the PW occupied bands plus the PW-resolved Gaussian virtual bands show significantly faster convergence than those that use the bare PW virtual bands.
This example shows that our Gaussian basis sets are also useful for PW-based correlated calculations.
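As a linear-algebra sketch, with random matrices standing in for the actual PW-expanded orbitals, the construction of the PW-resolved Gaussian virtual bands can be written as:

```python
import numpy as np

def pw_resolved_virtuals(C_occ, G_ao, tol=1e-8):
    """Project the PW-expanded Gaussian AOs (columns of G_ao) out of the
    span of the converged PW occupied bands (orthonormal columns of C_occ),
    then orthonormalize the remainder via SVD, discarding
    near-linearly-dependent directions."""
    # Remove occupied components: (1 - P_occ)|g>
    R = G_ao - C_occ @ (C_occ.conj().T @ G_ao)
    # Orthonormalize the residual and drop (near-)null directions
    U, s, _ = np.linalg.svd(R, full_matrices=False)
    return U[:, s > tol * s.max()]

# Random stand-ins: 50 PW coefficients, 4 occupied bands, 10 Gaussian AOs.
rng = np.random.default_rng(0)
C_occ, _ = np.linalg.qr(rng.standard_normal((50, 4)))
G_ao = rng.standard_normal((50, 10))
V = pw_resolved_virtuals(C_occ, G_ao)
# The virtuals are orthonormal and orthogonal to the occupied space:
print(V.shape, np.abs(C_occ.T @ V).max() < 1e-10)  # (50, 10) True
```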
\begin{table}[!t]
\centering
\caption{Comparison of the bulk properties of LiH (the small-core pseudopotential is used for Li) to those from Ref.~\citenum{Gruneis10JCP} obtained using PWs and the projector augmented-wave (PAW) method \cite{Blochl94PRB}.
The Gaussian basis calculations use an unshifted $5\times5\times5$ $k$-point mesh without extrapolation.}
\label{tab:LiH_cf_PAW}
\begin{ruledtabular}
\begin{tabular}{lcccccc}
& \multicolumn{2}{c}{$a_0$ [pm]} &
\multicolumn{2}{c}{$B_0$ [GPa]} &
\multicolumn{2}{c}{$E_{\tr{coh}}$ [eV]} \\
\cmidrule(lr){2-3}\cmidrule(lr){4-5}\cmidrule(lr){6-7}
& HF & MP2 & HF & MP2 & HF & MP2 \\
\hline
GTH-cc-pVDZ & $410.4$ & $394.3$ & $32.5$ & $39.4$ & $1.81$ & $2.34$ \\
GTH-cc-pVTZ & $410.2$ & $396.7$ & $32.5$ & $38.5$ & $1.81$ & $2.37$ \\
GTH-cc-pVQZ & $410.1$ & $396.5$ & $32.6$ & $38.6$ & $1.81$ & $2.38$ \\
PAW+PW & $411.1$ & $397.1$ & $32$ & $38$ & $1.79$ & $2.39$
\end{tabular}
\end{ruledtabular}
\end{table}
\section{Conclusion}
\label{sec:conclusion}
To conclude, we extended Dunning's strategy for constructing correlation-consistent Gaussian basis sets to periodic systems by controlling the size of the valence basis to reach a balance between accuracy and numerical stability.
The generated GTH-cc-pV$X$Z basis sets are found to be well-conditioned for solid-state calculations and show fast convergence to the CBS limit at both mean-field and correlated levels of theory on a number of bulk properties.
Our scheme can also be used straightforwardly to design all-electron basis sets for solids, which will differ only by the addition of primitives with large exponents that do not significantly contribute to linear dependencies.
Although our basis sets were tested using MP2, they will be valuable in work using more accurate ab initio correlated methods, such as coupled-cluster theory \cite{McClain17JCTC,Gruber18PRX,Zhang19FM,Lau21JPCL,Lange21JCP,Callahan21JCP},
auxiliary field quantum Monte Carlo \cite{Ma15PRL,Rudshteyn20JCTC,Morales20JCP,Malone20JCTC,Shi21JCP},
or quantum embedding approaches \cite{Bulik14JCP,Rusakov19JCTC,Iskakov20PRB,Yeh21PRB,Pham20JCTC,Cui20JCTC,Zhu20JCTC,Ye20JCTC,Zhu21PRX}.
In particular, our basis sets remain to be tested on three-dimensional metals, where MP2 is inapplicable.\cite{Shepherd13PRL}
Reliable and standardized Gaussian basis sets for periodic systems also call for the development of optimized auxiliary basis sets for density fitting \cite{Weigend98CPL,Weigend02PCCP,Weigend08JCC}.
Finally, future work will proceed down the periodic table to obtain performant Gaussian basis sets for more elements.
Of special interest are the $d$- and $f$-block metals, due to their appearance in a variety of functional materials \cite{Keimer15Nature,Du20Nature,Giuffredi21ACSMAu} whose accurate description demands correlated electronic structure theories.
\section*{Acknowledgements}
We thank Dr.~Verena Neufeld for helpful discussions.
This work was supported by the National Science Foundation under Grant No.\
OAC-1931321. We acknowledge computing resources from Columbia University's
Shared Research Computing Facility project, which is supported by NIH Research
Facility Improvement Grant 1G20RR030893-01, and associated funds from the New
York State Empire State Development, Division of Science Technology and
Innovation (NYSTAR) Contract C090171, both awarded April 15, 2010. The Flatiron
Institute is a division of the Simons Foundation.
\section*{Supporting Information}
\rvv{See the supporting information for (i) plot of the condition number of the overlap matrix of commonly used Gaussian basis sets for 16 bulk materials, (ii) convergence of the HF and MP2 bulk properties for diamond with respect to the size of the valence basis of carbon, (iii) comparison of the exponents of the GTH-cc-pV$X$Z basis sets with the all-electron cc-pV$X$Z series for all elements studied in this work, (iv) increment of the atomic CCSD correlation energy as a function of the number of polarization functions for all elements studied in this work, (v) material-specific errors at both HF and MP2 levels for all properties reported in \cref{fig:hf_totcoha0b0,fig:hf_egap_and_C_bands,fig:mp2_a0b0}, (vi) band structure calculated using the GTH-cc-pV$X$Z basis sets compared to the PW CBS result for all materials studied in this work, (vii) detailed basis size information of the GTH-cc-pV$X$Z basis sets, and (viii) lattice structure and experimental lattice constants for all materials studied in this work.}
\section*{Data availability statement}
The data that support the findings of this study are available from the
corresponding author upon reasonable request.
\section{Supplementary figures}
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{allecc_def2_ccecp.pdf}
\caption{Same plot as Fig.~M1 for the condition numbers of (a) cc-pV$X$Z, (b) def2-SVP (DZ), def2-TZVPP (TZ), and def2-QZVPP (QZ), and (c) ccECP-cc-pV$X$Z. For ccECP, the ``reg'' basis is used for Li and Be, and the ``helium-core'' basis is used for Mg to match the number of electrons in the small-core GTH pseudopotentials.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{hf_all.pdf}
\caption{Properties of bulk diamond calculated at the HF level using the seven correlation-consistent basis sets generated with different valence bases (DZ, TZ, and QZ for $3s3p$ and $4s4p$, and DZ for $5s5p$). (a) Total energy, (b) cohesive energy, (c) equilibrium lattice constant, (d) equilibrium bulk modulus, and (e) band gap.
Panels (a) and (e) are the same as Fig.~M2(e) and (f), respectively.}
\label{fig:hf_all}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\linewidth]{mp2_all.pdf}
\caption{Same plot as \cref{fig:hf_all}(a-d) for MP2 except that the original values instead of the errors are shown for the MP2 cohesive energy in (b).}
\label{fig:mp2_all}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{expn_12row.pdf}
\caption{Comparing the exponents in our GTH-cc-pV$X$Z basis sets (labelled as ``G-XZ'') with those in the all-electron cc-pV$X$Z basis sets (labelled as ``A-XZ'') for elements from the first two rows.
The exponents for the valence basis and the polarization functions are separately shown for each element.
Color scheme: blue, orange, green, red, and violet for $s$, $p$, $d$, $f$, and $g$ angular momentum, respectively.}
\label{fig:expn_12row}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{expn_3row.pdf}
\caption{Same plot as \cref{fig:expn_12row} for elements from the third row.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{eccinc_all.pdf}
\caption{Same plot as Fig.~M2(c) for all elements studied in this work.
For $s$-block metals, the number after ``q'' denotes the number of active electrons (i.e.,~not covered by the pseudopotentials).}
\end{figure}
\clearpage
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{hfeach_etot.pdf}
\caption{Error of the HF total energy evaluated by different Gaussian basis sets compared to the PW benchmark using a $5\times5\times5$ $k$-point mesh to sample the Brillouin zone.
For each formula, from left to right: GTH-DZVP (violet), GTH-cc-pVDZ (blue), GTH-cc-pV(T)DZ (blue), GTH-cc-pV(Q)DZ (blue), GTH-cc-pVTZ (orange), GTH-cc-pV(Q)TZ (orange), and GTH-cc-pVQZ (green).}
\label{fig:hferreach_etot}
\end{figure}
\clearpage
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{hfeach_ecoh.pdf}
\caption{Same as \cref{fig:hferreach_etot} for the HF cohesive energy.}
\end{figure}
\clearpage
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{hfeach_a0.pdf}
\caption{Same as \cref{fig:hferreach_etot} for the HF equilibrium lattice constant evaluated using a $3\times3\times3$ $k$-point mesh.}
\end{figure}
\clearpage
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{hfeach_b0.pdf}
\caption{Same as \cref{fig:hferreach_etot} for the HF equilibrium bulk modulus evaluated using a $3\times3\times3$ $k$-point mesh.}
\end{figure}
\clearpage
\begin{figure}[p]
\centering
\includegraphics[width=1.0\linewidth]{hfeach_egap.pdf}
\caption{Same as \cref{fig:hferreach_etot} for the HF band gap.}
\end{figure}
\clearpage
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{mp2each_a0.pdf}
\caption{Same as \cref{fig:hferreach_etot} for the MP2 equilibrium lattice constant evaluated using a $3\times3\times3$ $k$-point mesh.}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{mp2each_b0.pdf}
\caption{Same as \cref{fig:hferreach_etot} for the MP2 equilibrium bulk modulus evaluated using a $3\times3\times3$ $k$-point mesh.}
\end{figure}
\clearpage
\input{bands.txt}
\clearpage
\section{Supplementary tables}
\input{basis_info.txt}
\begin{table}
\centering
\caption{The 19 materials studied in this work. The listed lattice constants are for the eight-atom cubic cell while the two-atom primitive cells are used in all our simulations, except for the noble gas solids, where the listed lattice constants are for the four-atom cubic cell while the one-atom primitive cells are used in our simulations.}
\begin{tabular}{lll}
\hline\hline
formula & lattice type & $a_0 / \mathrm{\AA}$ \\
\hline
LiH & rocksalt & $4.083$ \\
LiF & rocksalt & $4.035$ \\
LiCl & rocksalt & $5.130$ \\
NaF & rocksalt & $4.620$ \\
NaCl & rocksalt & $5.640$ \\
BeO & zincblende & $3.797$ \\
BeS & zincblende & $4.870$ \\
MgO & rocksalt & $4.207$ \\
MgS & rocksalt & $5.200$ \\
BN & zincblende & $3.616$ \\
BP & zincblende & $4.538$ \\
AlN & zincblende & $4.380$ \\
AlP & zincblende & $5.463$ \\
C & diamond & $3.567$ \\
Si & diamond & $5.430$ \\
SiC & zincblende & $4.358$ \\
He & fcc & $4.112$ \\
Ne & fcc & $4.446$ \\
Ar & fcc & $5.311$ \\
\hline
\end{tabular}
\end{table}
\iffalse
\clearpage
\section{Cartesian coordinates of the test systems in Fig.\ M1}
The water-solvated urea was optimized at the PBE level using Quantum Espresso [1], while the others were all taken from the Materials Project website.
The structures are listed below in the format of the VASP POSCAR file.
\begin{itemize}
\item SiC:
\VerbatimInput{data/SiC.POSCAR.vasp}
\item ZnS:
\VerbatimInput{data/ZnS.POSCAR.vasp}
\item \ce{TiO2}:
\VerbatimInput{data/TiO2.POSCAR.vasp}
\item \ce{NaCl}:
\VerbatimInput{data/NaCl.POSCAR.vasp}
\item \ce{CO(NH2)2 + 8 H2O}:
\VerbatimInput{data/w8_u1.POSCAR.vasp}
\end{itemize}
\clearpage
\section{Even tempered basis (ETB) used in this work}
Generated by PySCF with the following code (Zn/cc-pVDZ as an example):
\begin{verbatim}
from pyscf import gto, df
mol = gto.M(atom="Zn", basis="cc-pVDZ", spin=None)
auxmol = df.make_auxmol(mol)
print(auxmol._basis["Zn"]) # ETB with progression beta = 2.0
\end{verbatim}
\VerbatimInput{data/Na_DZ}
\VerbatimInput{data/Ti_DZ}
\VerbatimInput{data/Zn_DZ}
\VerbatimInput{data/Zn_TZ}
\VerbatimInput{data/Zn_QZ}
\clearpage
\section{Timing data}
Listed in the table below are component CPU time data for computing the periodic three-center SR ERIs with $\epsilon = 10^{-8}$, where
\begin{itemize}
\item $t_0$ is the time (unit: minute) for precomputing integral screening data.
\item $t_1$ is the time (unit: minute) for the actual lattice sum based on the precomputed data.
\item $t_0\%$ is the percentage fraction of the time spent on precomputation.
\item For \ce{ZnS}, the number in the parentheses gives the supercell size in terms of the number of primitive cells.
\end{itemize}
One can see that the precomputation time ($t_0$) is a small fraction of the subsequent lattice sum for small $\omega$, but may become comparable for large $\omega$ for some systems. However, the latter is mainly due to the overhead of our pure Python implementation of the precomputation, which we expect to drop significantly when implemented in a compiled language like C.
In addition, the precomputation time remains rigorously a constant when increasing the number of $k$-points (not shown here) and roughly a constant when increasing the supercell size in real space [see \ce{ZnS(1^3)} to \ce{ZnS(5^3)}] while the cost of the lattice sum grows as $N_k^2$ and $N_{\tr{cell}}$, respectively; this suggests that most of the $t_0\%$ data reported here are expected to drop as the simulation approaches the thermodynamic limit.
Finally, in a production-level implementation, which is likely integral-direct, the precomputation needs to be done only once, which further amortizes its cost.
We thus conclude that the precomputation time is small compared to the subsequent lattice sum.
Nonetheless, comparing the cost of different estimators for precomputation, which is expected to be more independent of the programming language used, reveals that the more sophisticated estimators derived in this work cost only $2 \sim 3$ times more than the ISF estimator, whose four-center version is already widely used in existing periodic quantum chemistry software packages.
\begin{landscape}
\global\pdfpageattr\expandafter{\the\pdfpageattr/Rotate 90}
\input{data/timing.tex}
\clearpage
\global\pdfpageattr\expandafter{\the\pdfpageattr/Rotate 0}
\end{landscape}
\section*{References}
\begin{enumerate}
\item [(1)] Paolo Giannozzi, Oscar Baseggio, Pietro Bonf\`{a}, Davide Brunato, Roberto Car, Ivan Carnimeo, Carlo Cavazzoni, Stefano de Gironcoli, Pietro Delugas, Fabrizio Ferrari Ruffino, Andrea Ferretti, Nicola Marzari, Iurii Timrov, Andrea Urru, and Stefano Baroni, "Quantum ESPRESSO toward the exascale", \textit{The Journal of Chemical Physics} \textbf{152}, 154105 (2020). https://doi.org/10.1063/5.0005082
\end{enumerate}
\fi
\end{document}
\section{Introduction}
\label{sec:sec0001}
Quantum information science has paved the way for a comprehensive understanding of quantum phenomena as resources, which in turn may be consumed to perform tasks that are not possible in the classical world~\cite{RevModPhys.91.025001}. This is the case of entanglement and quantum coherence, both concepts being rigorously formulated under the framework of resource theories~\cite{PhysRevLett.122.120503,PhysRevResearch.2.012035,PhysRevLett.119.140402,PhysRevLett.116.120404}. Indeed, the former is an essential resource to enhance the precision of phase estimation tasks in quantum me\-tro\-lo\-gy~\cite{PhysRevLett.102.100401}, also useful for quantum key distribution~\cite{PhysRevLett.67.661,PhysRevLett.68.557}, while the latter finds applications in quantum optics~\cite{RevModPhys.37.231}, quantum thermodynamics~\cite{PhysRevA.93.052335}, and many-body physics~\cite{PhysRevB.93.184428}. In this context, electronic and spin states have emerged as very promising platforms for designing highly sensitive devices with applicability from material science to biochemistry~\cite{RevModPhys.89.035002,PhysRevLett.126.170404,DeMille990,natrevmats.2017.88,Aslam67,nature25781,annurev-physchem-040513-103659}. Hence, designing physical systems where quantum correlation resources can be produced in a controllable fashion becomes imperative.
The seminal idea proposed by Ettore Majorana~\cite{Majorana_1937} regarding particles that constitute their own antiparticles, nowadays called Majorana fermions, opened new avenues in several areas of physics, including quantum computation~\cite{PhysRevB.97.205404,PhysRevX.6.031016}. Unlike ordinary Dirac fermions such as electrons and protons, i.e., spin-${1}/{2}$ charged particles described by complex fields, Majorana fermions are spin-${1}/{2}$ neutral particles whose real field equations remain invariant under charge conjugation symmetry~\cite{NaturePhys.5.614.2009}. Over the past eighty years, probing the signature of Majorana fermions has remained an experimental challenge for the high energy physics community. Quite recently, Majorana fermions have been predicted as zero-mode quasi-particle excitations in quantum many-body systems, and their signatures have been experimentally observed in some solid-state setups~\cite{PhysRevLett.104.040502,PhysRevB.81.125318}. Indeed, Majorana fermions play an important role as quasi-particle excitations in prototypical models for fault-tolerant topological quantum computation~\cite{KITAEV20032}, possibly manipulated in topological superconductors~\cite{PhysRevB.61.10267}, fractional quantum Hall systems~\cite{MOORE1991362}, and even in driven dissipative devices~\cite{PhysRevB.102.134501,PhysRevLett.125.147701}. Noteworthy, the interest in Majorana fermions relies on their exotic properties, such as non-abelian statistics, thus differing quite radically from the conventional electrons that condense into the superconducting state~\cite{PhysRevLett.98.237002}. Despite the intense debate regarding some experimental results in recent years, the hope for realizing quantum devices based on Majorana fermions is still alive~\cite{d41586-021-00612-z,d41586-021-00954-8}.
Motivated by the potential applicability of Majorana fermions (MFs), in this work we study the dynamics of two fermionic systems, one composed of MFs, and the other comprising regular fermions (RFs). Both systems are mediated by a quantum dot (QD) in an experimentally feasible solid-state setup. From an open quantum dynamics approach, we investigate the behavior of the quantum resources of these fermionic systems combined with the QD, i.e., quantum coherence and entanglement, showing that these quantities are mutually correlated. By considering initial states with different fermionic occupations, the results show that the two-body marginal states of each system exhibit nonzero values of concurrence and quantum coherence during the nonunitary dynamics. Moreover, the time evolution of such quantum resources behaves quite distinctly when the QD is coupled to MFs as compared to the case in which it is coupled to RFs. While in the former configuration the quantum resources evolve quite similarly regardless of temperature, in the latter they behave rather distinctly. This is a striking feature in favor of the physical setup with MFs, as the results point towards the possibility of designing quantum sensors that work in a wide range of temperatures. The results presented here may also be relevant for the use of quantum correlation measures to probe non-local features of Majorana fermions, as proposed by You {\it et al.}~\cite{SciRep_You_2014}.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.8]{FIG_01.pdf}
\caption{(Color online) Schematic representation of the physical setups discussed in the paper. In the upper panel $(a)$, the non-local Majorana fermions appear as edge modes at the ends of two nanowires of length $L$. The right-most MFs, labelled ${\gamma_{B_{1}}}$ and ${\gamma_{B_{2}}}$, are coupled to a single-level quantum dot (QD), with $\lambda_1$ and $\lambda_2$ being the coupling parameters of the QD to its $j$-th neighboring MFs. In turn, the QD interacts with a fermionic reservoir at finite temperature. The lower panel $(b)$ describes the system of two electronic $f$-orbitals, i.e., regular fermions, which hybridize with the electronic orbital of the quantum dot by means of the coupling constants $\lambda_1$ and $\lambda_2$. The system of RFs$+$QD is allowed to interact with a bath of free fermions, thus modeling an open quantum system.}
\label{fig:fig00001}
\end{center}
\end{figure}
The paper is organized as follows. In Sec.~\ref{sec:sec0002} we introduce the details of the physical models. In Sec.~\ref{sec:sec0003} we describe the open system dynamics due to the coupling of the physical models with a dissipative environment. Such environment depends on the ohmicity parameter in a way that we can set both the Markovian and non-Markovian regimes. In Sec.~\ref{sec:sec0004n} we analyze the dynamics of en\-tan\-glement and quan\-tum co\-he\-rences in the systems of MFs$+$QD and RFs$+$QD under the influence of the environment in both regimes, at zero and finite temperatures. In particular, Sec.~\ref{sec:sec0004a} presents the numerical analysis of occupations for a separable initial state with single-fermion occupation, while in Sec.~\ref{sec:sec0004b} we discuss the dynamics of concurrence and the $\ell_1$-norm of coherence for entangled initial states with single-fermion and two-fermion occupations. Focusing on the system of MFs$+$QD, we discuss in Sec.~\ref{sec:sec0005} the role of the non-locality of the Majorana bound states in the dynamics of occupations and quantum resources. Finally, in Sec.~\ref{sec:conclusions} we summarize our conclusions.
\section{The Model}
\label{sec:sec0002}
In this section we will investigate the spectral properties of the two different models depicted in Fig.~\ref{fig:fig00001}: the first one consists of non-local Majorana bound states, while the second comprises canonical regular fermions. In both systems, the two fermionic species couple to a single-level quantum dot (QD). Throughout this work we assume a system of spinless fermions, which can be experimentally realized by applying a strong magnetic field. In fact, spinless fermions are an important condition for the emergence of Majorana bound states in topological superconductors~\cite{MOORE1991362,PhysRevB.81.125318,0034-4885/75/7/076501}. Unless other\-wise stated, the coupling to the reservoir is turned off, and thus both the MFs$+$QD and RFs$+$QD setups can be understood as closed quantum systems. The dynamics of the open quantum system will be investigated in Sec.~\ref{sec:sec0003}.
\subsection{Majorana fermions}
\label{subsec:sec0002a}
For the case of Majorana fermions, we consider two wires of finite length $L$, supporting Majorana modes at their ends. Figure~\ref{fig:fig00001}(a) depicts this physical setting. Due to the finiteness of $L$, we expect a non-zero overlap between the wave functions of the two MFs at the ends of each nanowire. The Hamiltonian of the paired Majorana fermions is given by
\begin{equation}
\label{eq:eq000001}
{H_{MF}} = \frac{i}{2}\, {\sum_{j = 1,2}} \, {\epsilon_{j}}\, {\gamma_{A_j}}{\gamma_{B_j}} ~,
\end{equation}
where $\gamma_{{A_j},{B_j}}$ are the two MF operators satisfying the Clifford algebra $\{ {\gamma_{X_j}}, {\gamma_{Y_l}}\} = 2 \,{\delta_{X,Y}}\,{\delta_{j,l}}$, and ${\epsilon_j} \sim e^{-L/{\xi_j}}$ stands for the hybridization between each pair of MF edge modes due to the overlap of their wave functions, with $\xi_j$ being the effective coherence length~\cite{Wang_2013}. Importantly, the coupling energy $\epsilon_j$ decreases exponentially with the length $L$ of the nanowires, thus vanishing in the case $L \gg {\xi_j}$. Furthermore, the MF pair becomes equivalent to a single zero-energy regular fermion (RF) in the asymptotic regime $L \rightarrow \infty$. Next, we consider the single-level non-interacting quantum dot (QD) modeled by the Hamiltonian
\begin{equation}
\label{eq:eq000002}
{H_{QD}} = {\epsilon_d}\, {\hat{n}_d} ~,
\end{equation}
where $\epsilon_d$ is the energy of the QD, ${\hat{n}_d} = {d^{\dagger}}d$ is the electron number operator, while $d$ ($d^{\dagger}$) is the electron annihilation (creation) operator. The coupling between the QD and the pair of right-most Majorana edge modes, being labelled as ${\gamma_{B_{1}}}$ and ${\gamma_{B_{2}}}$, is described by the tunneling Hamiltonian
\begin{equation}
\label{eq:eq000003}
{H_{MD}} = {\sum_{j = 1,2}}\, {\lambda_j}({d^{\dagger}} - d)\, {\gamma_{B_j}} ~,
\end{equation}
where the real parameters $\{ \lambda_j \}_{j = 1,2}$ characterize the coupling of the QD to its $j$-th neighboring MFs.
Remarkably, Majorana operators can be mapped onto RF creation and annihilation operators as ${\gamma_{A_j}} = i({f_j} - {f_j^{\dagger}})$, and ${\gamma_{B_j}} = {f_j} + {f_j^{\dagger}}$, where $f_j$ ($f_j^{\dagger}$) is the RF annihilation (creation) operator fulfilling the anticommutation relations $\{{f_j}, {f_l^{\dagger}}\} = {\delta_{jl}}$, and $\{{f_j},{f_l}\} = \{{f_j^{\dagger}}, {f_l^{\dagger}}\} = 0$. Noteworthy, the RFs are non-local in the sense that they are composed of Majorana modes living far apart from each other. Next, it can be readily shown that the Hamiltonian of paired MFs in Eq.~\eqref{eq:eq000001} can be recast as
\begin{equation}
\label{eq:eq000004}
{H_{MF}} = {\sum_{j = 1,2}} \, {\epsilon_j}\left({\hat{n}_j} - \frac{1}{2}\right) ~,
\end{equation}
where ${\hat{n}_j} = {f^{\dagger}_j}{f_j}$ is the RFs number operator. Similarly, the Hamiltonian in Eq.~\eqref{eq:eq000003} which models the QD-MFs coupling is rewritten as
\begin{equation}
\label{eq:eq000005}
{H_{MD}} = {\sum_{j = 1,2}}{\lambda_j}({d^{\dagger}}{f_j} + {f_j^{\dagger}}d + {d^{\dagger}}{f_j^{\dagger}} + {f_j}d) ~.
\end{equation}
Finally, combining Eqs.~\eqref{eq:eq000002},~\eqref{eq:eq000004}, and~\eqref{eq:eq000005}, the total Hamiltonian of the system reads
\begin{equation}
\label{eq:eq000006}
{H_M} = {\epsilon_d}{\hat{n}_d} + {\sum_{j = 1,2}}{\epsilon_j}\left({\hat{n}_j} - \frac{1}{2}\right) + {\sum_{j = 1,2}}{\lambda_j}({d^{\dagger}}{f_j} + {d^{\dagger}}{f_j^{\dagger}} + \text{H.c.}) ~.
\end{equation}
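The Majorana-to-fermion mapping used to pass from Eq.~\eqref{eq:eq000001} to Eqs.~\eqref{eq:eq000004} and~\eqref{eq:eq000005} can be checked numerically for a single mode. A minimal sketch in Python (NumPy), using the $2\times2$ occupation-basis representation:

```python
import numpy as np

# Single-mode check of the Majorana-to-fermion mapping in the
# occupation basis {|0>, |1>}.
f = np.array([[0.0, 1.0], [0.0, 0.0]])  # fermion annihilation operator
fd = f.conj().T                          # creation operator
g_A = 1j * (f - fd)                      # gamma_A = i (f - f^dag)
g_B = f + fd                             # gamma_B = f + f^dag

# Clifford algebra: gamma^2 = 1 and {gamma_A, gamma_B} = 0
assert np.allclose(g_A @ g_A, np.eye(2))
assert np.allclose(g_B @ g_B, np.eye(2))
assert np.allclose(g_A @ g_B + g_B @ g_A, 0)

# Eq. (1) -> Eq. (4): (i/2) eps gamma_A gamma_B = eps (n - 1/2)
eps = 0.7  # arbitrary illustrative value
n_op = fd @ f
assert np.allclose(0.5j * eps * (g_A @ g_B), eps * (n_op - 0.5 * np.eye(2)))
print("Majorana mapping verified")
```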
\subsection{Regular fermions}
\label{subsec:sec0002b}
For the system composed of regular fermions, we assume the QD is coupled to two other electron orbitals, as depicted in Fig.~\ref{fig:fig00001}(b). These orbitals could be thought of as two other quantum dots. For the sake of clarity, we will continue using the same notation of $\{ {f_j} \}_{j = 1,2}$ operators for these orbitals. In this case, the Hamiltonian of the system is written as
\begin{equation}
\label{eq:eq000007}
{H_R} = {\epsilon_d}{\hat{n}_d} + {\sum_{j = 1,2}}{\epsilon_j}\left({\hat{n}_j} - \frac{1}{2}\right) + {\sum_{j = 1,2}}{\lambda_j}({d^{\dagger}}{f_j} + {f_j^{\dagger}}{d}) ~,
\end{equation}
where the first term describes the QD, the second describes the electron orbitals, and the third accounts for the hybridization between the QD and the $f$-orbitals. Similarly to the previous case, here we have ${\hat{n}_d} = {d^{\dagger}}d$, and ${\hat{n}_j} = {f^{\dagger}_j}{f_j}$ for $j = \{1,2\}$. Note that, for convenience, the energies of the $f$-orbitals are shifted by the constant $(-1/2)\sum_{j=1,2}\epsilon_j$, which does not change the dynamics of the quantum states.
\subsection{Generalized Hamiltonian}
\label{subsec:sec0002c}
Next, we present a generalized Hamiltonian that encompasses both the MF and RF systems. From Eqs.~\eqref{eq:eq000006} and~\eqref{eq:eq000007} we see that, apart from the terms ${d^{\dagger}} {f_j^{\dagger}} + {f_j}\, d$ inherited from the tunnel coupling of the QD and MFs, both Hamiltonians have the same form. It is therefore convenient to define a generic Hamiltonian that comprises both the MF and RF systems as
\begin{equation}
\label{eq:eq000008}
{H_{M,R}} = {\epsilon_d}{\hat{n}_d} + {\sum_{j = 1}^2}\, {\epsilon_j}\left({\hat{n}_j} - \frac{1}{2}\right) + {\sum_{j = 1,2}}({\lambda_j}\,{d^{\dagger}}{f_j} + {\widetilde{\lambda}_j}\,{d^{\dagger}}{f_j^{\dagger}} + {\rm H.c.}) ~.
\end{equation}
In special, the Hamiltonian describing the MFs system [see Eq.~\eqref{eq:eq000006}] is recovered by choosing ${\widetilde{\lambda}_j} = {\lambda_j}$, while that one modeling the RFs system [see Eq.~\eqref{eq:eq000007}] is obtained by setting ${\widetilde{\lambda}_j} = 0$.
In the following we will discuss the matrix representation of the generalized Hamiltonian with respect to the basis $\{ |{n_1},{n_2}, {n_d}\rangle\}$, where the index ${n_{1,2,d}} = \{ 0,1 \}$ denotes the occupation number in the single-particle states created by the operators $f^{\dagger}_1$, $f^{\dagger}_2$, and $d^{\dagger}$ acting on the vacuum state $\ket{\tilde 0}\equiv |{0_1},{0_2},{0_d}\rangle$. To study the Hamiltonian in Eq.~\eqref{eq:eq000008} in detail, we fix the basis ordering $\{ \ket{\tilde 0}$, ${f_1^{\dagger}} {d^{\dagger}}\ket{\tilde 0}$, ${f_2^{\dagger}} {d^{\dagger}}\ket{\tilde 0}$, ${f_1^{\dagger}}{f_2^{\dagger}}\ket{\tilde 0}, {d^{\dagger}}\ket{\tilde 0}$, ${f_1^{\dagger}}\ket{\tilde 0}$, ${f_2^{\dagger}}\ket{\tilde 0}$, ${f_1^{\dagger}} {f_2^{\dagger}} {d^\dagger} \ket{\tilde 0} \}$. Note that the first four states in this basis are eigenvectors of the operator $\hat N = {\hat{n}_1} + {\hat{n}_2} + {\hat{n}_d}$ with even eigenvalues, while for the last four states the eigenvalues are odd. We clearly see that all the processes described by the generic Hamiltonian either conserve the particle number (for the RF system) or change it by two particles (for the MF system). In this basis the Hamiltonian in Eq.~\eqref{eq:eq000008} takes the form
\begin{equation}
\label{eq:eq000010}
H = \frac{1}{2}\left(\mathbb{I} + {\sigma_z} \right) \otimes {\mathcal{H}_0} + \frac{1}{2}\left(\mathbb{I} - {\sigma_z}\right)\otimes {\mathcal{H}_1} ~,
\end{equation}
where $\mathbb{I}$ is the $2\times 2$ identity matrix and $\sigma_z$ is the Pauli matrix, with the Hermitian blocks
\begin{equation}
\label{eq:eq000011}
{\mathcal{H}_0} = \left[\begin{matrix} -{\epsilon_+} & -{\widetilde\lambda_1} & -\widetilde\lambda_2 & 0\\
- {\widetilde\lambda_1} & {\epsilon_d} + {\epsilon_-} & 0 & {\lambda_2} \\
-{\widetilde\lambda_2} & 0 & {\epsilon_d} - {\epsilon_-} & -{\lambda_1} \\
0 & {\lambda_2} & -{\lambda_1} & {\epsilon_+} \end{matrix}\right] ~,
\end{equation}
and
\begin{equation}
\label{eq:eq000012}
{\mathcal{H}_1} = \left[\begin{matrix} {\epsilon_d} - {\epsilon_+} & {\lambda_1} & {\lambda_2} & 0\\
{\lambda_1} & {\epsilon_-} & 0 & - {\widetilde{\lambda}_2} \\
{\lambda_2} & 0 & - {\epsilon_-} & {\widetilde{\lambda}_1} \\
0 & - {\widetilde{\lambda}_2} & {\widetilde{\lambda}_1} & {\epsilon_d} + {\epsilon_+}
\end{matrix}\right] ~,
\end{equation}
where we have defined ${\epsilon_{\pm}} = ({\epsilon_1} \pm {\epsilon_2})/2$.
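The parity structure noted above (particle number conserved for RFs, changed by two for MFs) can be checked directly on the matrix of Eq.~\eqref{eq:eq000010}. A minimal numerical sketch (the parameter values are illustrative):

```python
import numpy as np

# Assemble the 8x8 matrix of Eq. (10) from the blocks of Eqs. (11)-(12).
def hamiltonian(eps_d, eps1, eps2, lam1, lam2, lam1t, lam2t):
    ep, em = 0.5 * (eps1 + eps2), 0.5 * (eps1 - eps2)
    H0 = np.array([[-ep,     -lam1t,      -lam2t,       0.0],
                   [-lam1t,   eps_d + em,  0.0,         lam2],
                   [-lam2t,   0.0,         eps_d - em, -lam1],
                   [0.0,      lam2,       -lam1,        ep]])
    H1 = np.array([[eps_d - ep,  lam1,   lam2,   0.0],
                   [lam1,        em,     0.0,   -lam2t],
                   [lam2,        0.0,   -em,     lam1t],
                   [0.0,        -lam2t,  lam1t,  eps_d + ep]])
    I2, sz = np.eye(2), np.diag([1.0, -1.0])
    return np.kron(0.5 * (I2 + sz), H0) + np.kron(0.5 * (I2 - sz), H1)

# Occupation numbers N in the chosen basis ordering
N = np.diag([0.0, 2.0, 2.0, 2.0, 1.0, 1.0, 1.0, 3.0])

H_RF = hamiltonian(0.4, 0.1, 0.1, 0.2, 0.3, 0.0, 0.0)  # regular fermions
H_MF = hamiltonian(0.4, 0.1, 0.1, 0.2, 0.3, 0.2, 0.3)  # Majorana fermions

assert np.allclose(H_RF, H_RF.T) and np.allclose(H_MF, H_MF.T)  # Hermitian
assert np.allclose(H_RF @ N - N @ H_RF, 0)      # RF: particle number conserved
assert not np.allclose(H_MF @ N - N @ H_MF, 0)  # MF: N not conserved ...
P = np.diag((-1.0) ** np.diag(N))
assert np.allclose(H_MF @ P - P @ H_MF, 0)      # ... but parity is
```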
\begin{table}[!t]
\caption{Eigenvectors and eigenvalues of the Hamiltonian in Eq.~\eqref{eq:eq000010} for the case of Majorana fermions (${\widetilde{\lambda}_j} = {\lambda_j}$). Here we have defined the parameters $\xi_{\pm}=(\epsilon_d\pm\epsilon)/2$, and ${\Delta_{\pm}} = \sqrt{{\xi_{\pm}^2} + {\lambda^2_1}+\lambda_2^2}$, while ${b_{\mu\nu}^{-1}} = {\sqrt{\,2{\Delta_{\mu}}({\Delta_{\mu}} + \nu{\xi_{\mu}})}}$, and ${c_{\mu\nu}} = ({\xi_{\mu}} + \nu{\Delta_{\mu}}){b_{\mu\nu}}$, with $\mu, \nu=\{+,- \}$.}
\begin{center}
\begin{tabular}{ll}
\hline\hline
Eigenstate & Energy \\
\hline
$|{E_1}\rangle = \left({b_{++}}({\lambda_2}{f_2^{\dag}} + {\lambda_1}{f_1^\dag}){d^\dag} + {c_{++}} \right)\ket{\tilde 0}$ & ${E_1} = {\xi_-} - {\Delta_+}$ \\
$|{E_2}\rangle = \left({b_{--}}({\lambda_1} {f_1^\dag} + {\lambda_2}{f_2^\dag}) + {c_{--}}{d^\dag}\right)\ket{\tilde 0}$ & ${E_2} = {\xi_-} - {\Delta_-}$ \\
$|{E_3}\rangle = \left({b_{+-}} ({\lambda_1}{f^\dag_2} - {\lambda_2}{f^\dag_1}) + {c_{+-}}{f_1^\dag}{f_2^\dag}{d^\dag}\right)\ket{\tilde 0} $ & ${E_3} = {\xi_+} - {\Delta_+}$ \\
$|{E_4}\rangle = \left({b_{-+}}({\lambda_1}{f^\dag_2} - {\lambda_2}{f^\dag_1}){d^\dag} + {c_{-+}}{f^{\dag}_1}{f^{\dag}_2}\right)\ket{\tilde 0}$ & ${E_4} = {\xi_+} - {\Delta_-}$ \\
$|{E_5}\rangle = \left({b_{-+}}({\lambda_2}{f^\dag_2} + {\lambda_1}{f^\dag_1}) + {c_{-+}}{d^\dag}\right)\ket{\tilde 0}$ & ${E_5} = {\xi_-} + {\Delta_-}$ \\
$|{E_6}\rangle = \left({b_{+-}}({\lambda_1}{f^\dag_1} + {\lambda_2}{f^\dag_2}){d^\dag} + {c_{+-}}\right)\ket{\tilde 0}$ & ${E_6} = {\xi_-} +{\Delta_+}$ \\
$|{E_7}\rangle = \left({b_{--}}({\lambda_2} {f^\dag_1} - {\lambda_1}{f^\dag_2}){d^\dag} - {c_{--}}{f^\dag_1}{f^\dag_2}\right)\ket{\tilde 0}$ & ${E_7} = {\xi_+} + {\Delta_-}$ \\
$|{E_8}\rangle = \left({b_{++}}({\lambda_1}{f^\dag_2} - {\lambda_2}{f^\dag_1}) + {c_{++}}{f^\dag_1}{f^\dag_2}{d^\dag}\right)\ket{\tilde 0}$ & ${E_8} = {\xi_+} + {\Delta_+}$ \\
\hline\hline
\end{tabular}
\label{tab:tab000001}
\end{center}
\end{table}
Next, we discuss the spectral properties of the Hamiltonian in Eq.~\eqref{eq:eq000010} for the cases of MFs and RFs. For simplicity, from now on we focus on the particular case ${\epsilon_1} = {\epsilon_2} = \epsilon$, which implies ${\epsilon_-} = 0$ and ${\epsilon_+} = \epsilon$. For the system of MFs, we set ${\widetilde{{\lambda}_j}} = {{\lambda}_j}$ in Eqs.~\eqref{eq:eq000011} and~\eqref{eq:eq000012}. The set of eigenstates ${\{|{E_j}\rangle\}_{j = 1,\ldots,8}}$ and energies ${\{ {E_j} \}_{j = 1,\ldots,8}}$ of the Hamiltonian for MFs are listed in Table~\ref{tab:tab000001}. It is worth mentioning that ${E_1}$ corresponds to the ground state of the system. Importantly, the set of states $\{|{E_j}\rangle \}_{j = 1,\ldots,4}$ exhibits even parity with respect to the occupation number, while the set $\{|{E_j}\rangle \}_{j = 5,\ldots,8}$ belongs to the odd parity sector. In particular, for the asymptotic case $L \gg \xi_{1,2}$, in which the MF energies become negligible, i.e., ${\epsilon_1} = {\epsilon_2} = \epsilon \approx 0$, we have ${\xi_+} = {\xi_-} = {\epsilon_d}/2$. As a consequence, the energy spectrum collapses into the two four-fold degenerate energy levels ${\epsilon_d}/2 \pm \sqrt{ {({{\epsilon}_d}/2)^2} + {{\lambda}^2_1}+ {{\lambda}_2^2} }$.
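This spectrum collapse can be confirmed numerically by diagonalizing the two blocks of Eq.~\eqref{eq:eq000010} in the Majorana case (${\widetilde{\lambda}_j} = {\lambda_j}$) with $\epsilon = 0$. A short sketch with illustrative couplings:

```python
import numpy as np

# Blocks of Eqs. (11)-(12) for the Majorana case with eps_1 = eps_2 = 0.
eps_d, lam1, lam2 = 0.4, 0.2, 0.3
H0 = np.array([[0.0,  -lam1, -lam2,  0.0],
               [-lam1, eps_d, 0.0,   lam2],
               [-lam2, 0.0,   eps_d, -lam1],
               [0.0,   lam2,  -lam1, 0.0]])
H1 = np.array([[eps_d, lam1,  lam2,  0.0],
               [lam1,  0.0,   0.0,  -lam2],
               [lam2,  0.0,   0.0,   lam1],
               [0.0,  -lam2,  lam1,  eps_d]])
evals = np.sort(np.concatenate([np.linalg.eigvalsh(H0),
                                np.linalg.eigvalsh(H1)]))

# Expected: two four-fold degenerate levels eps_d/2 +/- Delta
delta = np.sqrt((eps_d / 2) ** 2 + lam1**2 + lam2**2)
expected = np.sort([eps_d / 2 - delta] * 4 + [eps_d / 2 + delta] * 4)
assert np.allclose(evals, expected)
print("spectrum collapses to two four-fold degenerate levels")
```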
For the system of RFs, we set the parameter $\widetilde\lambda_j=0$ in Eq.~\eqref{eq:eq000010}. On the one hand, from Eq.~\eqref{eq:eq000011} it follows that $\mathcal{H}_0$ exhibits a one-dimensional matrix block corresponding to the occupation quantum number $N = 0$, and a three-dimensional block corresponding to $N = 2$. On the other hand, $\mathcal{H}_1$ in Eq.~\eqref{eq:eq000012} presents a three-dimensional matrix block for $N = 1$, and a one-dimensional block for the $N = 3$ parity sector. The set of eigenstates ${\{|{E_j}\rangle\}_{j = 1,\ldots,8}}$ and energies ${\{ {E_j} \}_{j = 1,\ldots,8}}$ of the Hamiltonian for RFs is listed in Table~\ref{tab:tab000002}.
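The coefficients $b_{\mu\nu}$ and $c_{\mu\nu}$ defined in the caption of Table~\ref{tab:tab000002} should normalize the hybridized eigenstates, i.e., $b_{\mu\nu}^2({\lambda_1^2}+{\lambda_2^2}) + c_{\mu\nu}^2 = 1$, since $\Delta_\mu^2 = \xi_\mu^2 + \lambda_1^2 + \lambda_2^2$. A minimal numerical check of this identity, with illustrative parameter values:

```python
import numpy as np

eps_d, eps, lam1, lam2 = 0.5, 0.5, 0.1, 0.2   # illustrative values
lam_sq = lam1**2 + lam2**2

norms = []
for mu in (+1, -1):                  # table index mu = +/-
    xi = (eps_d + mu * eps) / 2      # xi_{+/-} = (eps_d +/- eps)/2
    Delta = np.sqrt(xi**2 + lam_sq)
    for nu in (+1, -1):              # table index nu = +/-
        b = 1.0 / np.sqrt(2 * Delta * (Delta + nu * xi))
        c = (xi + nu * Delta) * b
        norms.append(b**2 * lam_sq + c**2)

print(norms)  # each of the four entries equals 1 (up to rounding)
```

The identity holds for every sign combination, confirming that the eigenstates in both tables are unit-normalized.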
\section{Dynamics of the open quantum system}
\label{sec:sec0003}
In this section we describe the dynamics of both the MFs$+$QD and RFs$+$QD systems undergoing dissipative effects from the coupling to a fermionic reservoir $\mathcal{B}$ initialized at the equilibrium state (see Fig.~\ref{fig:fig00001} for details). In this analysis, we set ${H_S} = {H_{M,R}}$ as the generalized Hamiltonian of the systems [see Eq.~\eqref{eq:eq000008}], which recovers either the Hamiltonian of MFs$+$QD [see Eq.~\eqref{eq:eq000006}] or RFs$+$QD [see Eq.~\eqref{eq:eq000007}], depending on the choice of physical parameters. In turn, the QD is weakly coupled to an environment of free fermions at finite temperature, with Hamiltonian ${H_{\mathcal{B}}} = {\sum_k}\, {\varepsilon_k}{c_k^{\dagger}}{c_k}$, where ${c_k^{\dagger}}$ (${c_k}$) is the creation (annihilation) operator of the $k$-th fermionic mode, while ${\varepsilon_k}$ is its energy~\cite{PhysRevB.101.155134,PhysRevE.102.012136}. The Hamiltonian modeling the interaction between the system and the bath $\mathcal{B}$ is given by ${H_I} = {\sum_k} \, {g_k} ({d^{\dagger}}{c_k} + d{c_k^{\dagger}})$, where $g_k$ is the coupling strength. Hence, the Hamiltonian of the joint system reads $H := {H_S} + {H_{\mathcal{B}}} + {H_I}$.
\begin{table}[t]
\caption{Eigenstates and energies of the Hamiltonian in Eq.~\eqref{eq:eq000010} for the case of regular fermions (${\widetilde{\lambda}_j} = 0$). Here we have defined the parameters $\xi_{\pm}=(\epsilon_d\pm\epsilon)/2$, and ${\Delta_{\pm}} = \sqrt{{\xi_{\pm}^2} + {\lambda^2_1}+\lambda_2^2}$, while ${b_{\mu\nu}^{-1}} ={\sqrt{\, 2{\Delta_{\mu}}({\Delta_{\mu}} + \nu{\xi_{\mu}})}}$, and ${c_{\mu\nu}} = ({\xi_{\mu}} + \nu{\Delta_{\mu}}){b_{\mu\nu}}$, with $\mu, \nu=\{+,- \}$.}
\begin{center}
\begin{tabular}{ll}
\hline\hline
Eigenstate & Energy \\
\hline
$|{E_1}\rangle = \ket{\tilde 0}$ & ${E_1} = - \epsilon$ \\
$|{E_2}\rangle = \frac{1}{\sqrt{ {\lambda_1^2} + {\lambda_2^2}}}\, ({\lambda_1}{f_2^{\dagger}} - {\lambda_2}{f_1^{\dagger}})\ket{\tilde 0}$ & ${E_2} = 0$ \\
$|{E_3}\rangle = \frac{1}{\sqrt{ {\lambda_1^2} + {\lambda_2^2}}}\, ({\lambda_1}{f_1^\dag} + {\lambda_2}{f_2^{\dagger}}){d^\dag}\ket{\tilde 0}$ & ${E_3} = {\epsilon_d}$ \\
$|{E_4}\rangle = {f_1^{\dagger}}{f^\dag_2}{d^\dag}\ket{\tilde 0} $ & ${E_4} = {\epsilon_d} + \epsilon$ \\
$|{E_5}\rangle = \left({b_{--}}({\lambda_1}{f^\dag_1} + {\lambda_2}{f^\dag_2}) + {c_{--}}{d^\dag}\right)\ket{\tilde 0}$ & ${E_5} = {\xi_-} - {\Delta_-}$ \\
$|{E_6}\rangle = \left({b_{-+}}({\lambda_1}{f^\dag_2} - {\lambda_2}{f^\dag_1}){d^\dag} + {c_{-+}}{f_1^{\dagger}}{f_2^{\dagger}}\right)\ket{\tilde 0}$ & ${E_6} = {\xi_+} - {\Delta_-}$ \\
$|{E_7}\rangle = \left({b_{-+}}({\lambda_1} {f^\dag_1} + {\lambda_2}{f^\dag_2}) + {c_{-+}}{d^\dag}\right)\ket{\tilde 0}$ & ${E_7} = {\xi_-} + {\Delta_-}$ \\
$|{E_8}\rangle = \left({b_{--}}({\lambda_2}{f^\dag_1} - {\lambda_1}{f^\dag_2}){d^{\dagger}} - {c_{--}}{f^\dag_1}{f^\dag_2}\right)\ket{\tilde 0}$ & ${E_8} = {\xi_+} + {\Delta_-}$ \\
\hline\hline
\end{tabular}
\label{tab:tab000002}
\end{center}
\end{table}
\subsection{Density matrix formalism}
To obtain the dynamics of the physical quantities of the system, we employ the well-known density matrix formalism, in which the reduced dynamics is obtained by tracing out the environmental degrees of freedom. The resulting quantum master equation can be written as~\cite{PhysRevA.60.91,RevModPhys.89.015001}
\begin{align}
\label{eq:eq000013}
\frac{d{\rho_S}(t)}{dt} &= - i[{H_S} ,{\rho_S} (t) ] + {\int_0^t} d\tau {{\alpha}^+}(t - \tau) [({V_{\tau - t}} \, {d^{\dagger}}) {\rho_S} (t) ,d] \nonumber\\
&+{\int_0^t} d\tau {{\alpha}^-}(t - \tau)[({V_{\tau - t}} \, d) \, {\rho_S} (t), {d^{\dagger}}] + \text{H.c.} ~,
\end{align}
in which ${V_{\tau - t}}\bullet = {e^{ i (\tau - t) {H_S}}} \bullet \, {e^{- i (\tau - t) {H_S}}}$, and the correlation functions read as
\begin{equation}
\label{eq:eq000014}
{\alpha^+} (t) = {\int_0^{\infty}} d\omega J(\omega) {N_F}(\omega){e^{i\omega t}} ~,
\end{equation}
and
\begin{equation}
\label{eq:eq000015}
{\alpha^-} (t) = {\int_0^{\infty}} d\omega J(\omega)({N_F}(\omega) + 1){e^{-i\omega t}} ~.
\end{equation}
Here, ${N_F}({\omega}) = {{[\exp(\beta{\omega})+1]}^{-1}}$ is the Fermi-Dirac distribution describing the fermionic reservoir at a given temperature $T= ({k_B}\beta)^{-1}$, and $J(\omega) = {g^2}(\omega)\, {\left|{ {\partial {\omega}(k)}/{\partial k}} \right|}^{-1}$ is the spectral density of the environment, in which $g(\omega)$ is the density of states of the bath. Hereafter we set Boltzmann's and Planck's constants to unity, i.e., $k_B = \hbar = 1$. Furthermore, we have implicitly assumed the chemical potential of the bath to be zero.
\begin{figure}[t]
\begin{center}
\includegraphics[scale=0.9]{FIG_02.pdf}
\caption{(Color online) Spectral density $J(\omega)$ for $s = 1$ ({\it ohmic} case), coupling strength $\gamma = 0.05$, and cut-off frequencies ${\omega_c} = 10$ (red solid line), ${\omega_c} = 30$ (blue dashed line), and ${\omega_c} = 50$ (black dotted line).}
\label{fig:fig00002}
\end{center}
\end{figure}
Overall, the behavior of a system can be described by considering an accurate model for the spectral density at low frequencies. Therefore, from now on we will focus on the effect of a generic bath on the system, as described by the spectral density~\cite{RevModPhys.59.1,CALDEIRA1983587,Weiss_2008_book}
\begin{equation}
\label{eq:eq000016}
J(\omega) = \gamma \, {{\omega}^s} \, {{\omega}_c^{1 - s}} {e^{-\omega/{{\omega}_c}}} ~,
\end{equation}
for all $s > 0$, where $\gamma$ is the coupling strength between the system and the environment. The exponential factor in Eq.~\eqref{eq:eq000016} provides a smooth cut-off for the spectral density, which is modulated by the frequency $\omega_c$. The environment can be classified as \textit{sub-ohmic} ($0 < s < 1$), \textit{ohmic} ($s=1$), and \textit{super-ohmic} ($s>1$)~\cite{CALDEIRA1983587,RevModPhys.59.1,Weiss_2008_book}. The frequency $\omega_c$ sets the decay time scale of the environment as ${\tau_c}\sim {\omega_c^{-1}}$. The Markov limit corresponds to the case ${{\tau}_c} \ll {1/{\Gamma}}$, i.e., when the environment correlation time $\tau_c$ is much smaller than the typical dissipation time scale of the system, given by $\Gamma \sim {\int_0^{\infty}} dx\, {{\alpha}^-}(t - x)$. For clarity, in Fig.~\ref{fig:fig00002} we show the spectral density $J(\omega)$ in the ohmic case $s = 1$, for ${\omega_c} = 10$ (non-Markovian) and ${\omega_c} = 50$ (Markovian), also setting $\gamma = 0.05$. Generally speaking, the larger $\omega_c$, the more Markovian the dynamics of the system.
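At zero temperature ${N_F}(\omega) = 0$ for $\omega > 0$, so ${\alpha^+}(t) = 0$ and ${\alpha^-}(t)$ reduces to the transform ${\int_0^{\infty}} d\omega\, J(\omega){e^{-i\omega t}}$; for the ohmic case $s = 1$ this integral has the closed form $\gamma/({\omega_c^{-1}} + it)^2$, which decays on the scale ${\tau_c}\sim{\omega_c^{-1}}$. A short numerical check of this closed form (the quadrature grid is chosen for illustration only):

```python
import numpy as np

gamma, w_c = 0.05, 10.0  # coupling strength and cut-off from the paper

def alpha_minus(t, n=200_000, w_max=400.0):
    # Trapezoidal quadrature of int_0^infty J(w) e^{-i w t} dw at T = 0,
    # with J(w) = gamma * w * exp(-w / w_c) for the ohmic case s = 1
    w = np.linspace(0.0, w_max, n)
    f = gamma * w * np.exp(-w / w_c) * np.exp(-1j * w * t)
    dw = w[1] - w[0]
    return (f[0] / 2 + f[1:-1].sum() + f[-1] / 2) * dw

t = 0.3
numeric = alpha_minus(t)
analytic = gamma / (1.0 / w_c + 1j * t) ** 2
print(abs(numeric - analytic))  # small: quadrature matches the closed form
```

Evaluating the closed form at several $t$ makes the $\sim 1/t^2$ decay of the bath memory explicit, consistent with ${\tau_c}\sim{\omega_c^{-1}}$.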
To solve the master equation, one may recast the marginal state ${{\rho}_S}(t)$ in terms of the occupation number basis $\{|{n_1},{n_2},{n_d}\rangle\}$ as
\begin{equation}
\label{eq:eq000017}
{{\rho}_S}(t) = {\sum_{ \mathbf{k}, \mathbf{m} }} \, {{A}^{ {k_1}, \, {k_2}, \, {k_d}}_{ {m_1}, \, {m_2},\, {m_d} }}(t)\, |{k_1},{k_2},{k_d}\rangle\langle{m_1},{m_2},{m_d}| ~,
\end{equation}
where $\mathbf{k} = ({k_1},{k_2},{k_d} )$, $\mathbf{m} = ( {m_1},{m_2},{m_d})$, with ${k_j} = \{0,1\}$, and ${m_j} = \{0,1\}$ for $j = \{1,2,d\}$, while
\begin{equation}
\label{eq:eq000018}
{{A}^{ {k_1},\,{k_2},\, {k_d}}_{ {m_1},\, {m_2},\, {m_d} }}(t) = \langle{k_1},{k_2},{k_d}|{{\rho}_S}(t)|{m_1},{m_2},{m_d}\rangle ~.
\end{equation}
Plugging Eq.~\eqref{eq:eq000018} into Eq.~\eqref{eq:eq000013}, we obtain a set of coupled differential equations for the time-dependent coefficients $\{ {{A}^{ {k_1}, \, {k_2}, \, {k_d}}_{ {m_1}, \, {m_2},\,{m_d} }}(t) \}_{\mathbf{k}, \mathbf{m}}$, whose solution fully characterizes the reduced density matrix ${{\rho}_S}(t)$ for a given initial state ${{\rho}_S}(0)$ of the system. The analytical solution is rather involved and far from trivial. We refer to Appendix~\ref{sec:appendix0A} for details on simplifying the master equation.
The density matrix ${{\rho}_S}(t)$ stores information about the evolution of the system. The solution of the master equation in Eq.~\eqref{eq:eq000013}, together with Eq.~\eqref{eq:eq000017}, allows us to study the occupation numbers $\{ \langle{\hat{n}_{\mu}}\rangle\}_{\mu = 1,2,d}$ of the QD and the two fermions, which can be written as
\begin{equation}
\label{eq:eq000019}
\langle{\hat{n}_{\mu}}\rangle = {\sum_{{n_1},\,{n_2},\, {n_d}}}\, {n_{\mu}} \, {{A}^{ {n_1}, \, {n_2}, \, {n_d}}_{ {n_1}, \, {n_2},\, {n_d} }} \, (t) ~,
\end{equation}
where ${\hat{n}_d} = {d^{\dagger}}d$, ${\hat{n}_1} = {f_1^{\dagger}}{f_1}$, and ${\hat{n}_2} = {f_2^{\dagger}}{f_2}$ are the fermion number operators.
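A minimal sketch of Eq.~\eqref{eq:eq000019}: only the diagonal coefficients ${{A}^{\mathbf{n}}_{\mathbf{n}}}(t)$ enter the occupations, so they can be read off directly once a basis ordering is fixed. The ordering below (and the choice of the pure state $|\tilde 1\rangle$ used later in the paper as the example) is an illustrative assumption:

```python
import numpy as np
from itertools import product

# Occupation basis |n1, n2, nd>, ordered as 3-bit tuples (assumed ordering)
basis = list(product((0, 1), repeat=3))

# Illustrative diagonal of rho_S: the pure state |1~> = d^dag |0~>
diag = np.zeros(8)
diag[basis.index((0, 0, 1))] = 1.0

# Eq. (19): <n_mu> = sum over diagonal coefficients weighted by n_mu
occ = {mu: sum(n[mu] * diag[i] for i, n in enumerate(basis))
       for mu in range(3)}              # mu = 0, 1, 2 <-> f1, f2, d
print(occ)  # {0: 0.0, 1: 0.0, 2: 1.0}: only the QD is occupied at t = 0
```

In an actual simulation the vector `diag` would be the time-evolved diagonal of $\rho_S(t)$ obtained from the master equation.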
\subsection{Entanglement and quantum coherences}
\label{sec:sec0004}
Here we introduce the minimal theoretical framework to study the role of entanglement and quantum coherences in both the systems of MFs and RFs, with respect to the two-body reduced states
\begin{equation}
\label{eq:eq000020}
{\rho_{jl}}(t) = {\sum_{ {k_j}, \, {k_l} }}~{\sum_{ {m_j}, \, {m_l} }}~{{A}^{ {k_j},\, {k_l}}_{ {m_j},\, {m_l} }}(t) \, |{k_j},{k_l}\rangle\langle{m_j},{m_l}| ~,
\end{equation}
with $j,l = \{1,2,d\}$ and $j \neq l$, ${k_j} = \{0,1\}$, and ${m_j} = \{0,1\}$, where
\begin{equation}
\label{eq:eq000021}
{{A}^{ {k_j},\, {k_l}}_{ {m_j},\, {m_l} }}(t) = {\sum_{{k_y} \, : \, y \neq j,\, l }}~{{A}^{ {k_1},\, {k_2},\, {k_d}}_{ {m_1},\, {m_2},\, {m_d} }}(t) \, \Big|_{\, {m_y} = {k_y}} ~.
\end{equation}
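The marginalization in Eqs.~\eqref{eq:eq000020} and~\eqref{eq:eq000021} is a partial trace over the remaining mode. A sketch for $(j,l) = (1,2)$, tracing out the QD index under an assumed $|{n_1},{n_2},{n_d}\rangle$ ordering of the $8\times 8$ matrix:

```python
import numpy as np

def reduced_12(rho_S):
    # Reshape rho_S into a rank-6 tensor (k1, k2, kd; m1, m2, md) and
    # contract the QD indices kd = md, realizing Eq. (21) for (j, l) = (1, 2)
    t = rho_S.reshape(2, 2, 2, 2, 2, 2)
    return np.einsum('abxcdx->abcd', t).reshape(4, 4)

# Example: |psi> = (|1,0,0> + |0,1,0>)/sqrt(2) in the |n1,n2,nd> basis,
# flattened as n1*4 + n2*2 + nd (assumed ordering)
psi = np.zeros(8)
psi[4] = psi[2] = 1 / np.sqrt(2)
r12 = reduced_12(np.outer(psi, psi))
print(np.trace(r12))  # 1.0: the reduced state is properly normalized
```

Since the QD is unoccupied in this example, the reduced state ${\rho_{12}}$ stays pure and retains the full coherence between the two fermionic modes.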
For our purposes, we address quantum correlations via the concurrence, a bipartite entanglement quantifier~\cite{RevModPhys.81.865}, while for quantum coherences our analysis is based on the so-called $\ell_1$-norm of coherence~\cite{RevModPhys.89.041003}. The concurrence is defined as~\cite{PhysRevLett.78.5022,PhysRevLett.80.2245}
\begin{equation}
\label{eq:eq000022}
\text{Conc}[\rho] = \max(0,{\varpi_1} - {\varpi_2} - {\varpi_3} - {\varpi_4}) ~,
\end{equation}
where $\{ {\varpi_j} \}_{j = 1,\ldots,4}$ are the eigenvalues in decreasing order of the matrix
\begin{equation}
\label{eq:eq000023}
R[\rho] := \sqrt{ \sqrt{\rho}\, ({\sigma_y}\otimes{\sigma_y}){\rho^*}({\sigma_y}\otimes{\sigma_y})\sqrt{\rho} } ~.
\end{equation}
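A compact numerical implementation of Eqs.~\eqref{eq:eq000022} and~\eqref{eq:eq000023}: instead of evaluating the matrix square roots in $R[\rho]$, one may equivalently take the square roots of the eigenvalues of $\rho\,({\sigma_y}\otimes{\sigma_y}){\rho^*}({\sigma_y}\otimes{\sigma_y})$, which is numerically simpler.

```python
import numpy as np

sy = np.array([[0, -1j], [1j, 0]])
YY = np.kron(sy, sy).real            # sigma_y x sigma_y is a real matrix

def concurrence(rho):
    # Square roots of the eigenvalues of rho * YY * rho^* * YY, in
    # decreasing order, then Eq. (22)
    ev = np.linalg.eigvals(rho @ YY @ rho.conj() @ YY)
    w = np.sort(np.sqrt(np.abs(ev)))[::-1]
    return max(0.0, w[0] - w[1] - w[2] - w[3])

# Maximally entangled two-qubit state (|01> + |10>)/sqrt(2)
bell = np.zeros(4)
bell[1] = bell[2] = 1 / np.sqrt(2)
print(round(concurrence(np.outer(bell, bell)), 6))  # 1.0
```

A product state such as $|00\rangle$ returns zero, as expected for a separable state.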
In contrast to entanglement, quantum coherence is a basis-dependent quantity, and thus its formulation requires fixing a preferred basis. Hereafter we adopt the reference basis $\{ | {n_1},{n_2}\rangle, |{n_1},{n_d}\rangle, |{n_2},{n_d}\rangle \}$, with ${n_j} = \{0,1\}$ for $j = \{1,2,d\}$. To characterize the quantum coherence stored in the marginal states of Eq.~\eqref{eq:eq000020}, we consider the so-called ${{\ell}_1}$-norm of coherence, i.e., a monotonic distance-based quantifier of coherence written as~\cite{BCP2014}
\begin{equation}
\label{eq:eq000024}
{\mathcal{C}_{\ell_1}}[{\rho_{jl}}(t)] = {\sum_{\substack{j\neq l ; \, {\mathbf{n}_{jl}} \neq {\mathbf{k}_{jl}} }}} \, |\langle{k_j},{k_l}|\, {\rho_{jl}}(t) |{n_j},{n_l}\rangle| ~,
\end{equation}
where ${\mathbf{n}_{jl}} := ({n_j},{n_l} )$, and ${\mathbf{k}_{jl}} := ({k_j},{k_l} )$, with ${n_j} = \{0,1\}$, and ${k_j} = \{0,1\}$, for $j,l = \{1,2,d\}$. In what follows we present our numerical results obtained from the equations derived above.
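Eq.~\eqref{eq:eq000024} amounts to summing the moduli of all off-diagonal entries of the reduced density matrix in the reference basis, e.g.:

```python
import numpy as np

def l1_coherence(rho):
    # Sum of |rho_ij| over all entries minus the diagonal contribution,
    # i.e. the l1-norm of coherence of Eq. (24) in the fixed basis
    return np.sum(np.abs(rho)) - np.sum(np.abs(np.diag(rho)))

# Maximally coherent single-qubit state (|0> + |1>)/sqrt(2)
plus = np.full(2, 1 / np.sqrt(2))
print(round(l1_coherence(np.outer(plus, plus)), 12))  # 1.0
```

For a $d$-dimensional state the quantifier is bounded by $d - 1$, attained by the maximally coherent state.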
\section{Numerical results}
\label{sec:sec0004n}
To obtain our numerical results we set the coupling strength $\gamma = 0.05$, $\lambda_2=2\lambda_1=0.2$, and $\epsilon_d=0.5$. Bearing in mind that the chemical potential of the bath is zero, this fixed value of $\epsilon_d$ serves as an energy reference for our calculations. Unless otherwise stated, we also use $\epsilon=0.5$. Moreover, we assume a bath characterized by $s=1$. Our analysis considers both non-Markovian and Markovian baths, for which we use ${\omega_c} = 10$ and ${\omega_c} = 50$, respectively. The results will also be shown for zero temperature ($T=0$) and finite temperature ($\beta = 1$). We consider initial states within the Hilbert subspaces of the system characterized by occupation $N=1$ (odd) and $N=2$ (even).
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.925]{FIG_03.pdf}
\caption{(Color online) Populations of the subsystem of MFs$+$QD (black solid line), and RFs$+$QD (red dotted line), where $\langle \bullet \rangle = \text{Tr}(\bullet \, {\rho_S}(t))$, with ${\rho_S}(t)$ being the reduced density matrix obtained from the master equation in Eq.~\eqref{eq:eq000013}. The system of MFs$+$QD (RFs$+$QD) is initialized at the state ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, with $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$, and coupled to a fermionic reservoir at zero temperature $(T = 0)$. Here we set the coupling strength $\gamma = 0.05$, $s = 1$, and cut-off frequencies ${\omega_c} = 10$ (left panels), and ${\omega_c} = 50$ (right panels).}
\label{fig:fig00003}
\end{center}
\end{figure}
\begin{figure}[t!]
\begin{center}
\includegraphics[scale=0.925]{FIG_04.pdf}
\caption{(Color online)
Populations of the subsystem of MFs$+$QD (black solid line), and RFs$+$QD (red dotted line), where $\langle \bullet \rangle = \text{Tr}(\bullet \, {\rho_S}(t))$, with ${\rho_S}(t)$ being the reduced density matrix obtained from the master equation in Eq.~\eqref{eq:eq000013}. The system of MFs$+$QD (RFs$+$QD) is initialized at the pure state ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, with $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$, and coupled to a fermionic reservoir at finite temperature $(\beta = 1)$. Here we set the coupling strength $\gamma = 0.05$, $s = 1$, and cut-off frequencies ${\omega_c} = 10$ (left panels), and ${\omega_c} = 50$ (right panels).
}
\label{fig:fig00004}
\end{center}
\end{figure}
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=0.95]{FIG_05.pdf}
\caption{(Color online) Comparison between concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) for the non-Markovian dynamics (${\omega_c} = 10$) of MFs and RFs. Panels (a),~(b),~(e),~(f),~(i), and~(j) refer to the case of zero temperature ($T = 0$), while panels (c),~(d),~(g),~(h),~(k), and~(l) show the curves for the bath at finite temperature ($\beta = 1$). Here we choose the initial state of the system MFs$+$QD (RFs$+$QD) given by ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, with $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$, and $\gamma = 0.05$, $s = 1$, $\epsilon = 0.5$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00005}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=0.95]{FIG_06.pdf}
\caption{(Color online) Comparison between concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) for the Markovian dynamics (${\omega_c} = 50$) of MFs and RFs. Panels (a),~(b),~(e),~(f),~(i), and~(j) refer to the case of zero temperature ($T = 0$), while panels (c),~(d),~(g),~(h),~(k), and~(l) show the curves for the bath at finite temperature ($\beta = 1$). Here we choose the initial state of the system MFs$+$QD (RFs$+$QD) given by ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, with $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$, and $\gamma = 0.05$, $s = 1$, $\epsilon = 0.5$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00006}
\end{center}
\end{figure*}
\begin{figure*}[ht]
\begin{center}
\includegraphics[scale=0.95]{FIG_07.pdf}
\caption{(Color online) Comparison between concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) for the non-Markovian dynamics (${\omega_c} = 10$) of MFs and RFs. Panels (a),~(b),~(e),~(f),~(i), and~(j) refer to the case of zero temperature ($T = 0$), while panels (c),~(d),~(g),~(h),~(k), and~(l) show the curves for finite temperature ($\beta = 1$). Here we choose the initial state of the system MFs$+$QD (RFs$+$QD) given by ${{\rho}_S}(0) = |\tilde{+}\rangle\langle\tilde{+}|$, with $|\tilde{+}\rangle := \frac{1}{\sqrt{2}}\, ({f_1^{\dagger}} + {f_2^{\dagger}})\ket{\tilde 0}$, and $\gamma = 0.05$, $s = 1$, $\epsilon = 0.5$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00007}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=0.95]{FIG_08.pdf}
\caption{(Color online) Comparison between concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) for the Markovian dynamics (${\omega_c} = 50$) of MFs and RFs. Panels (a),~(b),~(e),~(f),~(i), and~(j) refer to the case of zero temperature ($T = 0$), while panels (c),~(d),~(g),~(h),~(k), and~(l) show the curves at finite temperature ($\beta = 1$). Here we choose the initial state of the system MFs$+$QD (RFs$+$QD) given by ${{\rho}_S}(0) = |\tilde{+}\rangle\langle\tilde{+}|$, with $|\tilde{+}\rangle := \frac{1}{\sqrt{2}}\, ({f_1^{\dagger}} + {f_2^{\dagger}})\ket{\tilde 0}$, and $\gamma = 0.05$, $s = 1$, $\epsilon = 0.5$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00008}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=0.95]{FIG_09.pdf}
\caption{(Color online) Comparison between concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) for the non-Markovian dynamics (${\omega_c} = 10$) of MFs and RFs. Panels (a),~(b),~(e),~(f),~(i), and~(j) refer to the case of zero temperature ($T = 0$), while panels (c),~(d),~(g),~(h),~(k), and~(l) show the curves at finite temperature ($\beta = 1$). Here we choose the initial state of the system MFs$+$QD (RFs$+$QD) given by ${{\rho}_S}(0) = |{W}\rangle\langle{W}|$, with $|{W}\rangle := \frac{1}{\sqrt{3}}\, ({d^{\dagger}} + {f_1^{\dagger}} + {f_2^{\dagger}})\ket{\tilde 0}$, and $\gamma = 0.05$, $s = 1$, $\epsilon = 0.5$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00009}
\end{center}
\end{figure*}
\begin{figure*}[!t]
\begin{center}
\includegraphics[scale=0.95]{FIG_10.pdf}
\caption{(Color online) Comparison between concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) for the Markovian dynamics (${\omega_c} = 50$) of MFs and RFs. Panels (a),~(b),~(e),~(f),~(i), and~(j) refer to the case of zero temperature ($T = 0$), while panels (c),~(d),~(g),~(h),~(k), and~(l) show the curves at finite temperature ($\beta = 1$). Here we choose the initial state of the system MFs$+$QD (RFs$+$QD) given by ${{\rho}_S}(0) = |{W}\rangle\langle{W}|$, with $|{W}\rangle := \frac{1}{\sqrt{3}}\, ({d^{\dagger}} + {f_1^{\dagger}} + {f_2^{\dagger}})\ket{\tilde 0}$, and $\gamma = 0.05$, $s = 1$, $\epsilon = 0.5$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00010}
\end{center}
\end{figure*}
\begin{figure*}[t]
\begin{center}
\includegraphics[scale=0.95]{FIG_11.pdf}
\caption{(Color online) Comparison between concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) for the non-Markovian dynamics (${\omega_c} = 10$) of MFs and RFs at zero ($T = 0$) and finite ($\beta = 1$) temperatures. Here we choose the initial state of the system MFs$+$QD (RFs$+$QD) given by ${{\rho}_S}(0) = |\tilde{\phi}\rangle\langle\tilde{\phi}|$, with $|\tilde{\phi}\rangle := \frac{1}{\sqrt{2}}(\mathbb{I} + {f_1^{\dagger}}{f_2^{\dagger}})\ket{\tilde 0}$, and $\gamma = 0.05$, $s = 1$, $\epsilon = 0.5$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00011}
\end{center}
\end{figure*}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[scale=0.95]{FIG_12.pdf}
\caption{(Color online) Comparison between concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) for the Markovian dynamics (${\omega_c} = 50$) of MFs and RFs at zero ($T = 0$) and finite ($\beta = 1$) temperatures. Here we choose the initial state of the system MFs$+$QD (RFs$+$QD) given by ${{\rho}_S}(0) = |\tilde{\phi}\rangle\langle\tilde{\phi}|$, with $|\tilde{\phi}\rangle := \frac{1}{\sqrt{2}}(\mathbb{I} + {f_1^{\dagger}}{f_2^{\dagger}})\ket{\tilde 0}$, and $\gamma = 0.05$, $s = 1$, $\epsilon = 0.5$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00012}
\end{center}
\end{figure*}
\subsection{Dynamics of occupations of the single-fermion initial state}
\label{sec:sec0004a}
As already discussed, we are mainly interested in the dynamics of quantum coherences and correlations. Before addressing them, however, we briefly discuss how the occupations evolve in time for the simple case of $N=1$. To this end, we consider the initial state ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, with $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$ representing the nonzero fermionic occupation of the QD at time $t = 0$.
The dynamics of the populations $\langle f^\dagger_1 f_1\rangle$, $\langle f^\dagger_2 f_2\rangle$, and $\langle d^\dagger d\rangle$ in both the MFs$+$QD (black solid line) and RFs$+$QD (red dotted line) setups are shown in Figs.~\ref{fig:fig00003} ($T = 0$) and~\ref{fig:fig00004} ($\beta=1$). Left and right panels refer to the cutoff frequencies ${\omega_c} = 10$ (non-Markovian) and ${\omega_c} = 50$ (Markovian), respectively. For $T = 0$, in the non-Markovian dynamics ($\omega_c = 10$) the average occupation $\langle{d^{\dagger}}{d}\rangle$ of the QD decreases and exhibits damped oscillations in both the RFs$+$QD and MFs$+$QD systems, going to zero in the former case, while the latter reaches a nonzero stationary value [see Fig.~\ref{fig:fig00003}(a)]. The long-time occupation vanishes for the RFs$+$QD system because all the levels lie above the chemical potential of the bath, in which case the fermion leaks into the bath. In contrast, the occupations $\langle{f_j^{\dagger}}{f_j}\rangle$ grow and oscillate with damped amplitudes, thus approaching a stationary value which is asymptotically zero for RFs and nonzero for MFs [see Figs.~\ref{fig:fig00003}(c) and~\ref{fig:fig00003}(e)]. Here, the finite occupation of the MFs in the long-time regime results from the terms proportional to $\widetilde\lambda_j$ in the Hamiltonian [see Eq.~\eqref{eq:eq000008}] acting as a source of charge for the system.
At finite temperature ($\beta = 1$), the occupations behave quite similarly to the zero-temperature case, except that the populations of the QD and of both fermions asymptotically converge to nonzero values at later times of the dynamics [see Figs.~\ref{fig:fig00004}(a),~\ref{fig:fig00004}(c), and~\ref{fig:fig00004}(e)]. Here, the occupation is mainly provided by thermal excitation, since $T=1$ is of the order of the energy of the levels, $\epsilon=\epsilon_d=0.5$, in this case.
In the Markovian dynamics of the occupations ($\omega_c = 50$), shown in the right panels of Figs.~\ref{fig:fig00003} and~\ref{fig:fig00004}, the population of the QD decreases quite smoothly, vanishing at large $t$ for zero temperature [see Fig.~\ref{fig:fig00003}(b)], while it reaches a stationary nonzero value for finite temperature ($\beta = 1$) [Fig.~\ref{fig:fig00004}(b)]. Interestingly, the occupations of the QD for the systems of MFs and RFs show similar behaviors, with a negligible difference. This is because the temperature used is large enough to smooth out any effect of the non-conserving term of the Hamiltonian on the QD occupation. Moreover, the occupations $\langle{f_1^{\dagger}}{f_1}\rangle$ and $\langle{f_2^{\dagger}}{f_2}\rangle$ exhibit nonzero values that increase and reach stationary values over a wide time window in the system of MFs, for both temperatures [see Figs.~\ref{fig:fig00003}(d),~\ref{fig:fig00003}(f),~\ref{fig:fig00004}(d), and~\ref{fig:fig00004}(f)]. However, the occupations of RFs increase initially, but this behavior inverts and they eventually vanish at later times for $T = 0$ [see Figs.~\ref{fig:fig00003}(d) and~\ref{fig:fig00003}(f)]. At finite temperature ($\beta = 1$) these occupations saturate at a nonzero value for longer times [see Figs.~\ref{fig:fig00004}(d) and~\ref{fig:fig00004}(f)]. The insets clearly show fluctuations in the amplitudes of the fermionic occupations at earlier times, which in turn are smoothly suppressed as $t$ increases.
\subsection{Dynamics of coherence and correlations}
\label{sec:sec0004b}
Let us now turn our attention to the numerical analysis of entanglement and quantum coherence for different classes of initial pure states of the MFs$+$QD and RFs$+$QD systems.
\subsubsection{Single-fermion initial state}
\paragraph{Separable initial state ---} Similar to the previous section, here we consider the initial state ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, with $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$. Importantly, this initial state is completely uncorrelated, and also incoherent with respect to the reference basis of states $\{ |{n_1},{n_2},{n_d}\rangle\}$.
In Figs.~\ref{fig:fig00005} and~\ref{fig:fig00006} we compare the concurrence and the $\ell_1$-norm of coherence for both the MFs$+$QD and RFs$+$QD systems, for cutoff frequencies ${\omega_c} = 10$ and ${\omega_c} = 50$, respectively. In Fig.~\ref{fig:fig00005}, note that the reduced density matrix ${\rho_{12}}(t)$ [see Eq.~\eqref{eq:eq000020}] exhibits nonzero values of concurrence and $\ell_1$-norm of coherence, which in turn coincide in both the systems of MFs and RFs, regardless of the temperature [see Figs.~\ref{fig:fig00005}(a),~\ref{fig:fig00005}(b),~\ref{fig:fig00005}(c), and~\ref{fig:fig00005}(d)]. In contrast, for the marginal states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$ [see Eq.~\eqref{eq:eq000020}], the dynamics of entanglement and quantum coherence coincide only in the system of RFs, at zero temperature [see Figs.~\ref{fig:fig00005}(f) and~\ref{fig:fig00005}(j)], behaving quite differently in the other scenarios. Indeed, at finite temperature ($\beta = 1$), note that the concurrence starts growing but suddenly goes to zero for MFs and RFs, while the quantum coherence exhibits oscillations that are suppressed until approaching a nonzero constant value [see Figs.~\ref{fig:fig00005}(g),~\ref{fig:fig00005}(h),~\ref{fig:fig00005}(k), and~\ref{fig:fig00005}(l)]. In Figs.~\ref{fig:fig00005}(e) and~\ref{fig:fig00005}(i), the concurrence exhibits a revival after suddenly going to zero, and then saturates at a constant value for longer times of the dynamics.
Figures~\ref{fig:fig00006}(a),~\ref{fig:fig00006}(b),~\ref{fig:fig00006}(c), and~\ref{fig:fig00006}(d) show that the concurrence and quantum coherence of the marginal state ${\rho_{12}}(t)$ coincide for both species of MFs and RFs, regardless of the temperature, for the Markovian frequency $\omega_c = 50$. However, for the states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$, entanglement and quantum coherence coincide exclusively for RFs, at zero temperature, also exhibiting a highly oscillating behavior [see Figs.~\ref{fig:fig00006}(f) and~\ref{fig:fig00006}(j)]. Indeed, at finite temperature ($\beta = 1$), note that the concurrence starts increasing and suddenly drops to zero for MFs [see Figs.~\ref{fig:fig00006}(g) and~\ref{fig:fig00006}(k)], while it exhibits revivals whose amplitude mostly decreases in the system of RFs [see Figs.~\ref{fig:fig00006}(h) and~\ref{fig:fig00006}(l)]. In addition, quantum coherence exhibits a highly oscillating regime, saturating at a nonzero constant value for longer times [see Figs.~\ref{fig:fig00006}(g),~\ref{fig:fig00006}(h),~\ref{fig:fig00006}(k), and~\ref{fig:fig00006}(l)]. In Figs.~\ref{fig:fig00006}(e) and~\ref{fig:fig00006}(i), the concurrence exhibits a revival after suddenly going to zero, also experiencing fluctuations in its amplitude that are suppressed as it starts decreasing, and then approaches zero at later times. We emphasize that this behavior is strikingly different from the non-Markovian case ($\omega_c = 10$) in Figs.~\ref{fig:fig00005}(e) and~\ref{fig:fig00005}(i), in which both concurrence and quantum coherence remain nonzero for longer times of the dynamics. An important message we can take from the results in Figs.~\ref{fig:fig00005} and~\ref{fig:fig00006} is that the quantum correlations and coherences depend strongly on the temperature for RFs, but are qualitatively similar for MFs at both zero and finite temperatures.
\paragraph{Superposition initial state ---} Here we set the input state ${{\rho}_S}(0) = |\tilde{+} \rangle\langle\tilde{+}|$, with $|\tilde{+}\rangle := \frac{1}{\sqrt{2}}\, ({f_1^{\dagger}} + {f_2^{\dagger}})\ket{\tilde 0}$, in which the two fermionic modes are mutually entangled, the state also exhibiting nonzero coherence with respect to the same subspace. For this case, Figs.~\ref{fig:fig00007} and~\ref{fig:fig00008} show the concurrence and the $\ell_1$-norm of coherence for both the MFs$+$QD and RFs$+$QD systems, for the cutoff frequencies ${\omega_c} = 10$ and ${\omega_c} = 50$, respectively.
In Fig.~\ref{fig:fig00007}, for the case of RFs at zero temperature, we first note that the dynamics of entanglement and quantum coherence are identical for each two-body reduced state of the system [see Figs.~\ref{fig:fig00007}(b),~\ref{fig:fig00007}(f), and~\ref{fig:fig00007}(j)]. In contrast, for MFs at zero temperature, the concurrence of the state ${\rho_{12}}(t)$ exhibits a revival after going to zero, and then decreases until completely vanishing, while the quantum coherence oscillates until it reaches a stationary value [see Fig.~\ref{fig:fig00007}(a)]. In addition, Figs.~\ref{fig:fig00007}(e) and~\ref{fig:fig00007}(i) show that the marginal states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$ have nonzero oscillating values of quantum coherence that approach a stationary value over a wide time window. In turn, the concurrence of the state ${\rho_{1d}}(t)$ increases and suddenly goes to zero at short times [see Fig.~\ref{fig:fig00007}(e)], while for the state ${\rho_{2d}}(t)$ the concurrence shows revivals after suddenly going to zero, and then oscillates and remains finite for longer times [see Fig.~\ref{fig:fig00007}(i)]. Next, moving to the case of finite temperature ($\beta = 1$), the concurrence of the state ${\rho_{12}}(t)$ decreases and goes to zero in both systems of MFs and RFs, also showing an intermediate revival before completely vanishing in the former setup, while the quantum coherence starts decreasing and asymptotically converges to a stationary value [see Figs.~\ref{fig:fig00007}(c) and~\ref{fig:fig00007}(d)]. Furthermore, Figs.~\ref{fig:fig00007}(g),~\ref{fig:fig00007}(h),~\ref{fig:fig00007}(k), and~\ref{fig:fig00007}(l) show that the states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$ have zero-valued concurrence, except for a narrow peak that appears at short times of the dynamics, while the $\ell_1$-norm of coherence shows damped oscillations and then saturates to a fixed value.
The dynamics in the Markovian regime, with $\omega_c = 50$, for the same initial state is shown in Fig.~\ref{fig:fig00008}. We first note that the curves of entanglement and quantum coherence are identical for the two-body reduced states of the system of RFs at zero temperature [see Figs.~\ref{fig:fig00008}(b),~\ref{fig:fig00008}(f), and~\ref{fig:fig00008}(j)]. In particular, we highlight the damping of the rapid oscillations that appear in both quantities for the states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$, which encode information about the two fermions and the QD. For MFs at zero temperature, the concurrence and quantum coherence of the state ${\rho_{12}}(t)$ decay exponentially and saturate at a stationary value [see Fig.~\ref{fig:fig00008}(a)]. Furthermore, Figs.~\ref{fig:fig00008}(e) and~\ref{fig:fig00008}(i) show that the marginal states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$ have nonzero damped oscillating values of concurrence and quantum coherence, with the former going to zero and the latter approaching a fixed nonzero value. At finite temperature ($\beta = 1$), Figs.~\ref{fig:fig00008}(c) and~\ref{fig:fig00008}(d) show that the concurrence of the state ${\rho_{12}}(t)$ decreases exponentially in the system of MFs, while it abruptly goes to zero in the case of RFs. In turn, the quantum coherence of MFs starts decreasing and asymptotically converges to a stationary value, while for RFs the $\ell_1$-norm of coherence exhibits a revival after suddenly going to zero and then asymptotically approaches a stationary value. Figures~\ref{fig:fig00008}(g),~\ref{fig:fig00008}(h),~\ref{fig:fig00008}(k), and~\ref{fig:fig00008}(l) show that the states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$ have nonzero quantum coherences with damped oscillation amplitudes, and also nonzero entanglement signaled by a narrow peak of concurrence that survives only for a short time window of the dynamics.
\paragraph{$W$ initial state ---} We now consider the dynamics for a third type of single-fermion initial state. We choose ${{\rho}_S}(0) = |{W}\rangle\langle{W}|$, where $|{W}\rangle := \frac{1}{\sqrt{3}}\left({d^{\dagger}} + {f_1^{\dagger}} + {f_2^{\dagger}} \right)\ket{\tilde 0}$ is the $W$ state that fully correlates the two fermions and the QD, also exhibiting nonzero quantum coherence in all Hilbert subspaces. Figures~\ref{fig:fig00009} and~\ref{fig:fig00010} show the plots of concurrence and the $\ell_1$-norm of coherence for both the systems of MFs$+$QD and RFs$+$QD, for the cutoff frequencies ${\omega_c} = 10$ and ${\omega_c} = 50$, respectively.
In Fig.~\ref{fig:fig00009}, for the case of RFs at zero temperature, the values of concurrence and the $\ell_1$-norm of coherence coincide for each two-body reduced state of the system [see Figs.~\ref{fig:fig00009}(b),~\ref{fig:fig00009}(f), and~\ref{fig:fig00009}(j)]. Conversely, for MFs at zero temperature, the concurrence of the states ${\rho_{jd}}(t)$ exhibits revivals after going to zero and reaches nonzero asymptotic values for these two-body states that mix the QD and fermionic degrees of freedom [see Figs.~\ref{fig:fig00009}(e) and~\ref{fig:fig00009}(i)], while it vanishes at later times for the reduced state of two fermions [see Fig.~\ref{fig:fig00009}(a)]. In addition, for MFs at finite temperature, the concurrence of the state ${\rho_{12}}(t)$ exhibits revivals and asymptotically goes to zero [see Fig.~\ref{fig:fig00009}(c)], while it abruptly vanishes for the states ${\rho_{jd}}(t)$ [see Figs.~\ref{fig:fig00009}(g) and~\ref{fig:fig00009}(k)]. In turn, the quantum coherence shows fluctuations in its amplitude and approaches stationary values, regardless of the temperature [see Figs.~\ref{fig:fig00009}(a),~\ref{fig:fig00009}(c),~\ref{fig:fig00009}(e),~\ref{fig:fig00009}(g),~\ref{fig:fig00009}(i), and~\ref{fig:fig00009}(k)]. For the system of RFs, the $\ell_1$-norm of coherence exhibits damped oscillations and vanishes asymptotically for the states ${\rho_{jd}}(t)$ [see Figs.~\ref{fig:fig00009}(h) and~\ref{fig:fig00009}(l)], while for the state ${\rho_{12}}(t)$ it reaches a nonzero stationary value [see Fig.~\ref{fig:fig00009}(d)].
In the Markovian regime (${\omega_c} = 50$), at zero temperature ($T = 0$), Figs.~\ref{fig:fig00010}(b),~\ref{fig:fig00010}(f), and~\ref{fig:fig00010}(j) show that the concurrence and quantum coherence of all states behave identically in the system of RFs, experiencing rapid oscillations at earlier times. In contrast, for the system of MFs, the two quantities take different values and exhibit non-monotonic decays with rapid oscillations that are suppressed at later times [see Figs.~\ref{fig:fig00010}(a),~\ref{fig:fig00010}(e), and~\ref{fig:fig00010}(i)]. Next, moving to the case of finite temperature ($\beta = 1$), the concurrence of the states ${\rho_{jd}}(t)$ decreases and suddenly goes to zero in both the MF and RF systems, while the quantum coherence shows fluctuations in its amplitude and asymptotically converges to a stationary value [see Figs.~\ref{fig:fig00010}(g),~\ref{fig:fig00010}(h),~\ref{fig:fig00010}(k), and~\ref{fig:fig00010}(l)]. In addition, for the marginal state ${\rho_{12}}(t)$, both quantum resources monotonically decay and approach stationary values [Figs.~\ref{fig:fig00010}(c) and~\ref{fig:fig00010}(d)]. We point out that the quantum coherence converges to a finite value in the system of MFs, while it goes to zero for the reduced state of two fermions in the system of RFs. This shows that the quantum coherence of the two-body state of MFs is a more robust quantum resource than that of RFs.
\subsubsection{Two-fermion initial state}
We now discuss the dynamics of correlations and quantum coherence in the system starting from an initial state containing two fermions. Here we choose the initial state ${{\rho}_S}(0) = |\tilde{\phi}\rangle\langle\tilde{\phi}|$, where $|\tilde{\phi}\rangle := \frac{1}{\sqrt{2}}(\mathbb{I} + {f_1^{\dagger}}{f_2^{\dagger}})\ket{\tilde 0}$. Noteworthy, this initial state has even parity with respect to the occupation number of fermions, also exhibiting nonzero correlations and quantum coherence between the two fermionic subspaces. In Figs.~\ref{fig:fig00011} and~\ref{fig:fig00012} we compare the concurrence and the $\ell_1$-norm of coherence of the systems of MFs$+$QD and RFs$+$QD, for the cutoff frequencies ${\omega_c} = 10$ and ${\omega_c} = 50$, respectively. We begin by discussing Fig.~\ref{fig:fig00011} for the case of non-Markovian dynamics (${\omega_c} = 10$). For MFs at zero temperature, Fig.~\ref{fig:fig00011}(a) shows that the $\ell_1$-norm of coherence of ${\rho_{12}}(t)$ decreases slowly, reaching a finite stationary value, while the concurrence oscillates and suddenly goes to zero. In addition, Figs.~\ref{fig:fig00011}(e) and~\ref{fig:fig00011}(i) show the oscillation patterns of the quantum coherence of the states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$, with the latter saturating to a finite stationary value faster than the former. Interestingly, the concurrence of state ${\rho_{1d}}(t)$ goes to zero after displaying a few revivals [see Fig.~\ref{fig:fig00011}(e)], while for the state ${\rho_{2d}}(t)$ the concurrence starts increasing and exhibits periodic oscillations around a stationary value at larger times [see Fig.~\ref{fig:fig00011}(i)]. For RFs at zero temperature, Figs.~\ref{fig:fig00011}(f) and~\ref{fig:fig00011}(j) show that both quantum resources for the states ${\rho_{jd}}(t)$ oscillate and vanish at larger times.
In particular, note that the concurrence and quantum coherence of state ${\rho_{1d}}(t)$ oscillate with approximately the same period [see Fig.~\ref{fig:fig00011}(f)], while the former goes to zero faster than the latter for the reduced state ${\rho_{2d}}(t)$ [see Fig.~\ref{fig:fig00011}(j)]. For the two-body state ${\rho_{12}}(t)$, Fig.~\ref{fig:fig00011}(b) shows that the concurrence exhibits a revival after suddenly going to zero, while the quantum coherence decays to a stationary value. Interestingly, both resources asymptotically reach the same numerical value at later times of the dynamics. For MFs at finite temperature, Figs.~\ref{fig:fig00011}(g) and~\ref{fig:fig00011}(k) show that the $\ell_1$-norm of coherence of the states ${\rho_{jd}}(t)$ starts increasing and slowly reaches a finite stationary value, while the concurrence shows narrow peaks at early times of the dynamics. In Fig.~\ref{fig:fig00011}(c), the concurrence of state ${\rho_{12}}(t)$ goes abruptly to zero, while the $\ell_1$-norm of coherence decreases slowly. For RFs at finite temperature, Fig.~\ref{fig:fig00011}(d) shows that the quantum coherence of state ${\rho_{12}}(t)$ decreases and remains finite, while the concurrence suddenly goes to zero. In Fig.~\ref{fig:fig00011}(h), the concurrence of state ${\rho_{1d}}(t)$ shows a narrow peak at early times and suddenly goes to zero, while in Fig.~\ref{fig:fig00011}(l) the concurrence of ${\rho_{2d}}(t)$ is zero at all times of the dynamics. For both two-body reduced states, the $\ell_1$-norm of coherence shows damped oscillations and approaches zero.
Next, let us comment on Fig.~\ref{fig:fig00012} for the case of Markovian dynamics (${\omega_c} = 50$). For MFs at zero temperature, Fig.~\ref{fig:fig00012}(a) shows that the $\ell_1$-norm of coherence of ${\rho_{12}}(t)$ decreases exponentially, while the concurrence suddenly goes to zero. In addition, Figs.~\ref{fig:fig00012}(e) and~\ref{fig:fig00012}(i) show the concurrence of the states ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$, with the former going to zero faster than the latter. In both cases, the quantum coherence oscillates and approaches a stationary value at larger times. For RFs at zero temperature, Figs.~\ref{fig:fig00012}(f) and~\ref{fig:fig00012}(j) show that both quantum resources for the states ${\rho_{jd}}(t)$ display rapid oscillations that are suppressed at later times. In contrast, for the two-body state ${\rho_{12}}(t)$, Fig.~\ref{fig:fig00012}(b) shows that the concurrence exhibits a revival after dropping suddenly to zero, while the quantum coherence decays monotonically to a stationary value. We point out that both resources saturate to the same numerical value at later times of the dynamics. For MFs at finite temperature, Figs.~\ref{fig:fig00012}(g) and~\ref{fig:fig00012}(k) show that the states ${\rho_{jd}}(t)$ have zero concurrence, except for a narrow peak at early times of the dynamics, while the $\ell_1$-norm of coherence shows damped oscillations and then saturates to a stationary value. In Fig.~\ref{fig:fig00012}(c), the concurrence of state ${\rho_{12}}(t)$ abruptly goes to zero, while the $\ell_1$-norm of coherence of state ${\rho_{12}}(t)$ monotonically decreases and vanishes at larger times. For RFs at finite temperature, Fig.~\ref{fig:fig00012}(d) shows that the quantum coherence of state ${\rho_{12}}(t)$ decreases monotonically and remains finite, while the concurrence suddenly goes to zero.
For the two-body reduced states ${\rho_{jd}}(t)$, Figs.~\ref{fig:fig00012}(h) and~\ref{fig:fig00012}(l) show that the $\ell_1$-norm of coherence displays oscillations that are rapidly damped, while the concurrence is zero at all times of the dynamics.
Before closing this section, a final remark is in order. Apart from the asymptotic case shown in Figs.~\ref{fig:fig00011}(b) and~\ref{fig:fig00012}(b), note that concurrence and quantum coherence display different dynamical behaviors, i.e., the two quantum resources do not coincide at any time of the dynamics. This feature suggests a parity fingerprint related to the global occupation of the initial state. On the one hand, for initial states with odd parity in the occupation number, quantum coherence and concurrence coincide for some of the marginal states of the system [see Figs.~\ref{fig:fig00005},~\ref{fig:fig00006},~\ref{fig:fig00007},~\ref{fig:fig00008},~\ref{fig:fig00009}, and~\ref{fig:fig00010}]. On the other hand, Figs.~\ref{fig:fig00011} and~\ref{fig:fig00012} show that the quantum resources behave quite differently at all times of the dynamics when the system is initialized in a probe state with even parity in the occupation number.
\section{Dynamics in the non-local regime of the Majorana fermions}
\label{sec:sec0005}
In this section we discuss the possible role of the non-locality of Majorana fermions in the dynamics of occupations and quantum resources. The physical system of MFs$+$QD comprises pairs of Majorana bound states arising at the ends of two superconductor nanowires of length $L$ [see Fig.~\ref{fig:fig00001}(a)]. For large values of $L$, i.e., taking the MFs in each pair far apart from each other, it is well known that the couplings ${\epsilon_j}$ ($j = 1,2$) between them become negligible (${\epsilon_j} \approx 0$), since these energies decrease exponentially with the length of the nanowires. Hence, in our setting, the smaller the coupling $\epsilon$, the more non-local the pair of MFs. In the following, we investigate the non-locality of MFs by setting $\epsilon = 0.0005$. For simplicity, the system of MFs$+$QD is initialized in the fully uncorrelated initial state ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, with $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$. Moreover, we set $\gamma = 0.05$, $s = 1$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.
\begin{figure}[h!]
\begin{center}
\includegraphics[scale=0.925]{FIG_13.pdf}
\caption{(Color online) Populations of the subsystem of MFs$+$QD with non-local Majorana fermions ($\epsilon = 0.0005$), where $\langle \bullet \rangle = \text{Tr}(\bullet \, {\rho_S}(t))$, for both the non-Markovian (${\omega_c} = 10$) and Markovian dynamics (${\omega_c} = 50$), at zero ($T = 0$, right panels) and finite ($\beta = 1$, left panels) temperatures. The system of MFs$+$QD is initialized at the state ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, with $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$. Here we set $\gamma = 0.05$, $s = 1$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00013}
\end{center}
\end{figure}
\begin{figure*}[t!]
\begin{center}
\includegraphics[scale=0.95]{FIG_14.pdf}
\caption{(Color online) Plot of concurrence (black solid line) and $\ell_1$-norm of coherence (red dashed line) of the subsystem of MFs$+$QD with non-local Majorana fermions ($\epsilon = 0.0005$), for both the non-Markovian (${\omega_c} = 10$) and Markovian dynamics (${\omega_c} = 50$), at zero ($T = 0$) and finite ($\beta = 1$) temperatures. The system is initialized in the uncorrelated state ${{\rho}_S}(0) = |\tilde{1}\rangle\langle\tilde{1}|$, where $|\tilde{1}\rangle = {d^{\dagger}}\ket{\tilde 0}$, and we set $\gamma = 0.05$, $s = 1$, ${{\epsilon}_d} = 0.5$, and ${\lambda_2} = 2{\lambda_1} = 0.2$.}
\label{fig:fig00014}
\end{center}
\end{figure*}
Figure~\ref{fig:fig00013} shows the dynamics of the occupations in the system of MFs$+$QD. In the non-Markovian regime ($\omega_c = 10$), note that the occupations of the QD and of both MFs take nonzero values and oscillate around stationary values, regardless of the temperature. For the case of Markovian dynamics ($\omega_c = 50$), we point out that the occupation $\langle{d^{\dagger}}{d}\rangle$ mostly decreases, while $\{ \langle{f_j^{\dagger}}{f_j}\rangle\}_{j = 1,2}$ start growing and approach stationary values at later times of the dynamics, again regardless of the temperature of the system. In particular, note that the occupation of the QD goes to zero at zero temperature [see Fig.~\ref{fig:fig00013}(a)], while it remains a nonzero constant at finite temperature ($\beta = 1$) [see Fig.~\ref{fig:fig00013}(b)].
In Fig.~\ref{fig:fig00014} we plot the concurrence and the $\ell_1$-norm of coherence for the system of MFs$+$QD for the cutoff frequencies ${\omega_c} = 10$ and ${\omega_c} = 50$, at zero ($T = 0$) and finite ($\beta = 1$) temperature. Note that both the entanglement and quantum coherence of the reduced state of fermions ${\rho_{12}}(t)$ take nonzero values and coincide at all times of the dynamics, regardless of temperature and cutoff frequency [see Figs.~\ref{fig:fig00014}(a),~\ref{fig:fig00014}(b),~\ref{fig:fig00014}(c), and~\ref{fig:fig00014}(d)]. Noteworthy, these quantum resources are robust against the decohering effects of the environment and asymptotically converge to stationary values. Next, we point out that the marginal states $\{ {\rho_{jd}}(t) \}_{j = 1,2}$ have nonzero oscillating quantum coherence that approaches stationary values at later times, regardless of the temperature, for both the non-Markovian ($\omega_c = 10$) and Markovian ($\omega_c = 50$) settings. However, the concurrence of these states is zero, except for narrow peaks that appear at early times of the dynamics, as shown in Figs.~\ref{fig:fig00014}(e),~\ref{fig:fig00014}(g),~\ref{fig:fig00014}(i),~\ref{fig:fig00014}(k) ($T = 0$, $\omega_c = 10$), and Figs.~\ref{fig:fig00014}(h),~\ref{fig:fig00014}(l) ($\beta = 1$, $\omega_c = 50$). In particular, for $\beta = 1$ and $\omega_c = 10$, the concurrences of ${\rho_{1d}}(t)$ and ${\rho_{2d}}(t)$ show an oscillation pattern that is smoothly suppressed to zero [see Figs.~\ref{fig:fig00014}(f),~\ref{fig:fig00014}(j)].
\section{Conclusions}
\label{sec:conclusions}
In this work we have investigated the dynamics of quantum resources in a tripartite fermionic system coupled to an external reservoir. Two classes of systems were considered: (i) a two-level quantum dot coupled to two regular fermion levels (RFs$+$QD), and (ii) a quantum dot coupled to two pairs of Majorana fermions (MFs$+$QD). In both cases, the quantum dot is coupled to a fermionic reservoir. Invoking a quantum master equation approach that includes a memory kernel, which allows us to study both the Markovian and the non-Markovian regimes at zero and finite temperatures, we analyzed the time evolution of the quantum resources of the systems, namely pairwise entanglement and quantum coherence, quantified by the concurrence and the ${{\ell}_1}$-norm, respectively.
In general, we observe a clear distinction between the dynamics of the two systems, depending on the initial state and on the parity of its global occupation. On the one hand, for a fully uncorrelated, incoherent initial state with a single fermion in the system, we observe that entanglement and coherence are generated by the dynamics at both zero and finite temperatures in the MFs$+$QD system. On the other hand, in the RFs$+$QD system these quantities are generated but survive only during a certain time window, vanishing at later times of the dynamics at zero temperature, and saturating at a small value at finite temperature. Overall, these features are observed in both the Markovian and non-Markovian dissipation regimes.
For entangled and coherent single-fermion initial states, the quantum resources decrease but saturate to finite stationary values in the MFs$+$QD system, regardless of temperature, in both the Markovian and non-Markovian regimes. On the other hand, for the RFs$+$QD system, the quantum coherence settles down at a finite value at nonzero temperature. Similar features are observed when choosing an initial state with two fermions in the system. We also discussed the physical setting in which the Majorana fermions in each pair are almost fully decoupled from each other, i.e., typically a long topological nanowire. We observe qualitatively similar behavior, showing the robustness of our results against changes in the topological superconductor hosting the Majorana fermions. Finally, we highlight the correlated behavior of the quantum resources at both zero and finite temperatures in the MFs$+$QD system, which is not observed in the RFs$+$QD case. Importantly, this might be seen as a signature of the property called genuine and distributed correlated coherence, which witnesses the amount of quantum coherence contained within the correlations of the tripartite system~\cite{PhysRevA.94.022329,JMathPhys_51_414013}. This suggests a potential applicability of MFs in the design of quantum sensors capable of working over a wider range of temperatures, which may be relevant in quantum metrology.
\begin{acknowledgments}
We thank In\'{e}s de Vega for fruitful conversations. The authors acknowledge the financial support from the Brazilian ministries MEC and MCTIC. The project was funded by Brazilian funding agencies CNPq (Grants 307028/2019-4 and 305738/2018-6), FAPESP (Grant No 2017/03727-0), Coordena\c{c}\~{a}o de Aperfei\c{c}oamento de Pessoal de N\'{i}vel Superior -- Brasil (CAPES) (Finance Code 001), and by the Brazilian National Institute of Science and Technology of Quantum Information (INCT/IQ).
\end{acknowledgments}
\setcounter{equation}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\setcounter{section}{0}
\numberwithin{equation}{section}
\makeatletter
\renewcommand{\thesection}{\Alph{section}}
\renewcommand{\thesubsection}{\thesection.\arabic{subsection}}
\renewcommand{\theequation}{\Alph{section}\arabic{equation}}
\renewcommand{\thefigure}{\arabic{figure}}
\renewcommand{\bibnumfmt}[1]{[#1]}
\renewcommand{\citenumfont}[1]{#1}
\section*{Appendix}
\section{Details on the Master equation}
\label{sec:appendix0A}
The reduced dynamics of the system MFs$+$QD is described by the quantum master equation
\begin{align}
\label{eq:eq000024}
\frac{d{\rho_S}(t)}{dt} &= - i\, [{H_S},{\rho_S} (t) ] + {\int_0^t} d\tau {{\alpha}^+}(t - \tau) [({V_{\tau - t}} \, {d^{\dagger}}) {\rho_S} (t) ,d] \nonumber\\
&+ {\int_0^t} d\tau {{\alpha}^-}(t - \tau)[({V_{\tau - t}} \, d) \, {\rho_S} (t),{d^{\dagger}}] + \text{H.c.} ~,
\end{align}
with ${H_S} = {H_1} + {H_2} + {H_3}$, where ${H_1} = {\epsilon_d}{\hat{n}_d}$ is the QD Hamiltonian, ${H_2} = {\sum_{j = 1}^2}\, {\epsilon_j}\left({\hat{n}_j} - {1}/{2}\right)$ is the MFs (RFs) Hamiltonian, and ${H_3} = {\sum_{j = 1,2}}({\lambda_j}\,{d^{\dagger}}{f_j} + {\widetilde{\lambda}_j}\,{d^{\dagger}}{f_j^{\dagger}} + {\rm H.c.})$ stands for the Hamiltonian modeling the MFs$+$QD (RFs$+$QD) coupling. To clarify, setting ${\widetilde{\lambda}_j} = {\lambda_j}$ yields the Hamiltonian of the system MFs$+$QD, while the Hamiltonian of the system RFs$+$QD is recovered by imposing ${\widetilde{\lambda}_j} = 0$. In the former case, the energies $\{ {E_j} \}_{j = 1,\ldots,8}$ and the eigenstates $\{ | {E_j}\rangle\}_{j = 1,\ldots, 8}$ of the system MFs$+$QD are presented in Table~\ref{tab:tab000001}, while for the latter the spectral decomposition of the Hamiltonian of RFs$+$QD is given in Table~\ref{tab:tab000002} in the main text. In Eq.~\eqref{eq:eq000024} we have introduced the operator ${V_{\tau - t}}\, {d^{\dagger}} = {e^{ i (\tau - t) {H_S}}} {d^{\dagger}} \, {e^{- i (\tau - t) {H_S}}}$, which in turn can be written more conveniently as
\begin{align}
\label{eq:eq000025}
{{V}_{\tau - t}}\,{d^{\dagger}} = {\sum_{j,l = 1}^8} \, {{e}^{i(\tau - t)({{E}_j} - {{E}_l})}} \, \langle{E_j}|{{d}^{\dagger}}|{E_l}\rangle \, |{E_j}\rangle\langle{E_l}| ~,
\end{align}
with
\begin{equation}
\label{eq:eq000026}
\langle{E_j}|{{d}^{\dagger}}|{E_l}\rangle = {\sum_{{k_1}, \, {k_2}}} \, {{(-1)}^{ {{k}_1} + {{k}_2} }} \langle{E_j}|{{k}_1},{{k}_2}, 1\rangle\langle{{k}_1},{{k}_2},0|{E_l}\rangle ~.
\end{equation}
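The spectral sum above is the standard expansion of a conjugated operator over the eigenbasis of $H_S$. As a minimal numerical sketch (using a random Hermitian matrix as a stand-in for $H_S$ and a generic matrix as a stand-in for $d^{\dagger}$; these are illustrative placeholders, not the actual operators of the model), one can verify that the spectral sum reproduces the direct conjugation $e^{iH_S t}\, d^{\dagger}\, e^{-iH_S t}$:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
dim = 8
H = rng.normal(size=(dim, dim))
H = (H + H.T) / 2                      # Hermitian stand-in for H_S
A = rng.normal(size=(dim, dim))        # generic stand-in for d^dagger
E, U = np.linalg.eigh(H)               # eigenvalues E_j and eigenvectors |E_j>
t = 0.7                                # generic time argument (sign is immaterial)

# direct conjugation e^{iHt} A e^{-iHt}
direct = expm(1j * H * t) @ A @ expm(-1j * H * t)

# spectral sum: sum_{j,l} e^{it(E_j - E_l)} <E_j|A|E_l> |E_j><E_l|
Amat = U.conj().T @ A @ U              # matrix elements <E_j|A|E_l>
phases = np.exp(1j * t * (E[:, None] - E[None, :]))
spectral = U @ (phases * Amat) @ U.conj().T

print(np.allclose(direct, spectral))   # True
```

The elementwise product `phases * Amat` implements the double sum over $j,l$ in vectorized form.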
Next, the system-bath correlation functions are given by
\begin{equation}
\label{eq:eq000027}
{\alpha^+} (t) = {\int_0^{\infty}} d\omega \, J(\omega) {N_F}(\omega) \, {e^{i\omega t}} ~,
\end{equation}
and
\begin{equation}
\label{eq:eq000028}
{\alpha^-} (t) = {\int_0^{\infty}} d\omega \, J(\omega)({N_F}(\omega) + 1)\, {e^{-i\omega t}} ~,
\end{equation}
with ${N_F}({\omega}) = {{[\exp(\beta{\omega})+1]}^{-1}}$ the Fermi-Dirac distribution of the fermionic reservoir, and $J(\omega) = \gamma \, {{\omega}^s} \, {{\omega}_c^{1 - s}} {e^{-\omega/{{\omega}_c}}}$ the spectral density of the environment. For all $s > 0$ and finite temperature $0 < T < \infty$ (i.e., $0 < \beta < \infty$), it is straightforward to verify that
\begin{equation}
\label{eq:eq000029}
{{{{\alpha}^+}(t)}} = \frac{\gamma}{4\, {\beta^2}} \, {(2\beta{\omega_c})^{1 - s}} \, \Gamma(1 + s) \left[ {{\xi}_{1 + s}}\left( z(t)\right) - {{\xi}_{1 + s}}\left( z(t) + 1/2 \right) \right] ~,
\end{equation}
with
\begin{equation}
\label{eq:eq000030}
z(t) := \frac{1 + \beta{{\omega}_c} - i{\omega_c}t}{2\beta{{\omega}_c}} ~,
\end{equation}
and
\begin{equation}
\label{eq:eq000031}
{{{{\alpha}^-}(t)}} = {{{{\alpha}^+}(t)}^*} + \frac{\gamma\, {{\omega}_c^2}\, \Gamma(1 + s)}{ {\left( 1 + i{{\omega}_c} t \right)}^{1 + s} } ~,
\end{equation}
where ${{\xi}_{1 + s}}(x)$ denotes the generalized Riemann (Hurwitz) zeta function, defined as ${{\xi}_{1 + s}}(x) = {\sum_{k = 0}^{\infty}} \, { {{(x + k)}^{- 1 - s}} } $. Noteworthy, in the limit of zero temperature, $T \rightarrow 0$ (i.e., $\beta \rightarrow \infty$), one gets ${\lim_{\beta \rightarrow \infty}} \, {{{{\alpha}^+}(t)}} = {\lim_{\beta \rightarrow \infty}} \, {{{{\alpha}^+}(t)}^*} = 0$, and thus
\begin{equation}
\label{eq:eq000032}
{\lim_{\beta \rightarrow \infty}} \, {{\alpha^-}(t)} = \frac{\gamma\, {{\omega}_c^2}\, \Gamma(1 + s)}{ {\left( 1 + i{{\omega}_c}t \right) }^{1 + s} } ~.
\end{equation}
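The closed form of ${\alpha^+}(t)$ in terms of the Hurwitz zeta function can be checked against direct quadrature of its integral definition. The sketch below uses mpmath; the parameter values ($\gamma = 0.05$, $s = 1$, $\beta = 1$, $\omega_c = 1$, $t = 0.3$) are illustrative and do not correspond to any particular figure:

```python
# Numerical check: alpha^+(t) by quadrature vs. its Hurwitz-zeta closed form.
import mpmath as mp

mp.mp.dps = 30
gam, s, beta, wc, t = map(mp.mpf, ('0.05', '1', '1', '1', '0.3'))
I = mp.mpc(0, 1)

def J(w):                          # spectral density with exponential cutoff
    return gam * w**s * wc**(1 - s) * mp.exp(-w / wc)

def NF(w):                         # Fermi-Dirac distribution
    return 1 / (mp.exp(beta * w) + 1)

# direct quadrature of the integral definition of alpha^+(t)
numeric = mp.quad(lambda w: J(w) * NF(w) * mp.exp(I * w * t), [0, mp.inf])

# closed form via the Hurwitz zeta function zeta(1+s, a)
z = (1 + beta * wc - I * wc * t) / (2 * beta * wc)
closed = (gam / (4 * beta**2) * (2 * beta * wc)**(1 - s) * mp.gamma(1 + s)
          * (mp.zeta(1 + s, z) - mp.zeta(1 + s, z + mp.mpf('0.5'))))

print(abs(numeric - closed))       # should be ~0 within quadrature accuracy
```

For $s = 1$ this agreement can also be confirmed analytically by expanding the Fermi-Dirac factor as a geometric series and integrating term by term.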
In the following we discuss how to solve the master equation in Eq.~\eqref{eq:eq000024} for the reduced density matrix ${{\rho}_S}(t)$. To do so, we note that the state ${{\rho}_S}(t)$ can be written in terms of the occupation number basis states $\{|{n_1},{n_2},{n_d}\rangle\}$ as
\begin{equation}
\label{eq:eq000033}
{{\rho}_S}(t) = {\sum_{ \mathbf{k}, \mathbf{m} }} \, {{A}^{ {k_1}, \, {k_2}, \, {k_d}}_{ {m_1}, \, {m_2},\, {m_d} }}(t)\, |{k_1},{k_2},{k_d}\rangle\langle{m_1},{m_2},{m_d}| ~,
\end{equation}
where $\mathbf{k} = ({k_1},{k_2},{k_d})$ and $\mathbf{m} = ({m_1},{m_2},{m_d})$, with ${k_j} \in \{0,1\}$ and ${m_j} \in \{0,1\}$ for $j \in \{1,2,d\}$, while
\begin{equation}
\label{eq:eq000034}
{{A}^{ {k_1},\,{k_2},\, {k_d}}_{ {m_1},\, {m_2},\, {m_d} }}(t) = \langle{k_1},{k_2},{k_d}|{{\rho}_S}(t)|{m_1},{m_2},{m_d}\rangle ~.
\end{equation}
Next, we substitute Eq.~\eqref{eq:eq000033} into Eq.~\eqref{eq:eq000024} and project the resulting master equation onto the occupation number basis, thus obtaining a set of coupled differential equations for the time-dependent coefficients $\{ {{A}^{ {k_1}, \, {k_2}, \, {k_d}}_{ {m_1}, \, {m_2},\,{m_d} }}(t) \}_{\mathbf{k}, \mathbf{m}}$. Proceeding in this way, one gets
\begin{align}
\label{eq:eq000035}
& \frac{d}{dt}{{A}^{ {\ell_1},\, {\ell_2},\,{\ell_d}}_{ {n_1},\,{n_2},\,{n_d} }}(t) = - i\, \langle{{\ell}_1},{{\ell}_2},{{\ell}_d}|[{ {{H}_S}, {{\rho}_S}(t)}]|{{n}_1},{{n}_2},{{n}_d}\rangle \nonumber\\
& + {\int_0^t}\, d\tau \, {{\alpha}^+}(t - \tau) \, \langle{{\ell}_1},{{\ell}_2},{{\ell}_d}|[{ ({{V}_{\tau - t}}\,{{d}^{\dagger}}) \, {{\rho}_S}(t), d }]|{{n}_1},{{n}_2},{{n}_d}\rangle \nonumber\\
&+ {\int_0^t}\, d\tau \, {{\alpha}^-}(t - \tau) \, \langle{{\ell}_1},{{\ell}_2},{{\ell}_d}|[{ ({{V}_{\tau - t}}\,{d}) \, {{\rho}_S}(t), {{d}^{\dagger}} }]|{{n}_1},{{n}_2},{{n}_d}\rangle + \text{H.c.} ~.
\end{align}
The set of differential equations given in Eq.~\eqref{eq:eq000035} is too complicated to be solved analytically, and thus we resort to numerical simulations to fully characterize the reduced state ${\rho_S}(t)$. We point out that it is possible to simplify the calculations with the help of analytical expressions for each contribution that appears on the right-hand side of Eq.~\eqref{eq:eq000035}. In this regard, the matrix elements of the sum of Hamiltonians $H_1 + H_2$ with respect to the occupation number basis are given by
\begin{widetext}
\begin{align}
\label{eq:eq000036}
\langle{{\ell}_1},{{\ell}_2},{{\ell}_d}|[{ {H_1} + {H_2}, {{\rho}_S}(t)}]|{{n}_1},{{n}_2},{{n}_d}\rangle = {\sum_{j = \{1,2,d\}}} {{\epsilon}_j} \, ({{\delta}_{{{\ell}_j},1}} {{\ell}_j} - {{\delta}_{{{n}_j},1}} {{n}_j}) \, {A^{{\ell_1},\,{\ell_2},\,{\ell_d}}_{{n_1},\, {n_2},\, {n_d}}}(t) ~,
\end{align}
while the matrix elements of the coupling Hamiltonian $H_3$ read
\begin{align}
\label{eq:eq000037}
\langle{{\ell}_1},{{\ell}_2},{{\ell}_d}|[{ {{H}_3}, {{\rho}_S}(t)}]|{{n}_1},{{n}_2},{{n}_d}\rangle &= {\lambda_1} {\sum_{j = 0,1}}\, {(-1)^j} \left[ {{(-1)}^{{\ell_2}}} \, {\mathcal{C}_{{\ell_1},\, {\ell_d},j}} \, {A^{{\ell_1} - 2j + 1,\, {\ell_2},\, {\ell_d} + 2j - 1}_{{n_1},\,{n_2},\,{n_d}}} (t) - {{(-1)}^{ {n_2}}} \, {\mathcal{C}_{{n_1},\, {n_d}, j}} \, {A_{{n_1} - 2j + 1,\, {n_2},\, {n_d} + 2j - 1}^{{\ell_1},\,{\ell_2},\,{\ell_d}}} (t) \right] \nonumber\\
& + {\widetilde{\lambda}_1} {\sum_{j = 0,1}}\, {(-1)^j} \left[ {(-1)^{\ell_2}} \, {\widetilde{\mathcal{C}}_{{\ell_1},\, {\ell_d},j}} \, {A^{{\ell_1} + 2j - 1,\, {\ell_2},\, {\ell_d} + 2j - 1}_{{n_1},\,{n_2},\,{n_d}}} (t) - {{(-1)}^{{n_2}}} \, {\widetilde{\mathcal{C}}_{{n_1},\, {n_d},j}} \, {A_{{n_1} + 2j - 1,\, {n_2},\, {n_d} + 2j - 1}^{{\ell_1},\,{\ell_2},\,{\ell_d}}} (t) \right] \nonumber\\
& + {\lambda_2} {\sum_{j = 0,1}}\, {(-1)^j} \left[ {\mathcal{C}_{{\ell_2},\, {\ell_d},j}} \, {A^{{\ell_1}, \, {\ell_2} - 2j + 1,\, {\ell_d} + 2j - 1}_{{n_1},\,{n_2},\,{n_d}}} (t) - {\mathcal{C}_{{n_2},\, {n_d},j}} \, {A_{{n_1}, \, {n_2} - 2j + 1,\, {n_d} + 2j - 1}^{{\ell_1},\,{\ell_2},\,{\ell_d}}} (t) \right] \nonumber\\
& + {\widetilde{\lambda}_2} {\sum_{j = 0,1}}\, {(-1)^j} \left[ {\widetilde{\mathcal{C}}_{{\ell_2},\, {\ell_d},j}} \, {A^{{\ell_1},\, {\ell_2} + 2j - 1,\, {\ell_d} + 2j - 1}_{{n_1},\,{n_2},\,{n_d}}} (t) - {\widetilde{\mathcal{C}}_{{n_2},\, {n_d},j}} \, {A_{{n_1},\, {n_2} + 2j - 1,\, {n_d} + 2j - 1}^{{\ell_1},\,{\ell_2},\,{\ell_d}}} (t) \right] ~,
\end{align}
\end{widetext}
where we have defined the time-independent coefficients
\begin{equation}
\label{eq:eq000038}
{\mathcal{C}_{x,y,j}} = {\delta_{x,j}} \, {\delta_{y, 1 - j}} \, {(-1)^x} \sqrt{(x + 1 - j)(y + j)} ~,
\end{equation}
and
\begin{equation}
\label{eq:eq000039}
{\widetilde{\mathcal{C}}_{x,y,j}} = {\delta_{x,1 - j}} \, {\delta_{y, 1 - j}} \, {(-1)^x} \sqrt{(x + j)(y + j)} ~.
\end{equation}
We point out that Eq.~\eqref{eq:eq000037} reduces to a simpler form for the case of Majorana fermions (${\widetilde{\lambda}_j} = {\lambda_j}$), as well as for regular fermions (${\widetilde{\lambda}_j} = 0$). Moreover, one may verify that
\begin{align}
\label{eq:eq000040}
& {\int_0^t}\, d\tau \, {{\alpha}^+}(t - \tau) \, \langle{{\ell}_1},{{\ell}_2},{{\ell}_d}|[{ ({{V}_{\tau - t}}\,{{d}^{\dagger}}) \, {{\rho}_S}(t), d }]|{{n}_1},{{n}_2},{{n}_d}\rangle = \nonumber\\
=& {\sum_{j,l = 1}^8}\, {{\texttt{G}}_{jl}^{+}}(t) \, \langle{E_j}|\, {{d}^{\dagger}}|{E_l}\rangle \left[ {\delta_{{n_d} , 1}} {{(-1)}^{ {{n}_1} + {{n}_2} }} \sqrt{n_d} \, {\mathcal{Y}^{{\ell_1},\,{\ell_2},\,{\ell_d}}_{{n_1},\, {n_2},\, {n_d} - 1}}(j,l,t) \right. \nonumber\\
&\left. - \, {\delta_{{\ell_d} , 0}} \, {{(-1)}^{ {{\ell}_1} + {{\ell}_2} }} \sqrt{{{\ell}_d} + 1} \, {\mathcal{Y}^{{\ell_1},\,{\ell_2},\,{\ell_d} + 1}_{{n_1},\, {n_2},\, {n_d}}}(j,l,t) \right]~,
\end{align}
and
\begin{align}
\label{eq:eq000041}
& {\int_0^t}\, d\tau \, {{\alpha}^-}(t - \tau) \, \langle{{\ell}_1},{{\ell}_2},{{\ell}_d}|[{ ({{V}_{\tau - t}}\,{d}) \, {{\rho}_S}(t), {{d}^{\dagger}} }]|{{n}_1},{{n}_2},{{n}_d}\rangle = \nonumber\\
& = {\sum_{j,l = 1}^8}\, {{\texttt{G}}_{jl}^{-}}(t) \, \langle{E_j}|{d}|{E_l}\rangle \left[ {\delta_{{n_d} , 0}} \, {{(-1)}^{ {{n}_1} + {{n}_2} }} \sqrt{{{n}_d} + 1} \, {\mathcal{Y}^{{\ell_1},\,{\ell_2},\,{\ell_d}}_{{n_1},\, {n_2},\, {n_d} + 1}}(j,l,t) \right. \nonumber\\
&\left. - \, {\delta_{{\ell_d} , 1}} {{(-1)}^{ {{\ell}_1} + {{\ell}_2} }} \sqrt{{{\ell}_d}} \, {\mathcal{Y}^{{\ell_1},\,{\ell_2},\,{\ell_d} - 1}_{{n_1},\, {n_2},\, {n_d}}}(j,l,t) \right] ~,
\end{align}
where we define
\begin{equation}
\label{eq:eq000042}
{{\texttt{G}}_{jl}^{\pm}}(t) := {\int_0^t}\, d\tau \, {{\alpha}^{\pm}}(t - \tau) \, {{e}^{i(\tau - t)({{E}_j} - {{E}_l})}} ~,
\end{equation}
and
\begin{equation}
\label{eq:eq000043}
{\mathcal{Y}^{{\ell_1},\,{\ell_2},\,{\ell_d}}_{{n_1},\, {n_2},\, {n_d}}}(j,l,t) := {\sum_{ {k_1},\, {k_2}, \, {k_d}}} \langle{{\ell}_1},{{\ell}_2},{{\ell}_d}|{E_j}\rangle\langle{E_l}|{k_1},{k_2},{k_d}\rangle \, {A^{{k_1},\,{k_2},\,{k_d}}_{{n_1},\, {n_2},\, {n_d}}}(t) ~,
\end{equation}
with the matrix element $ \langle{E_j}|{d^{\dagger}}|{E_l}\rangle$ explicitly defined in Eq.~\eqref{eq:eq000026}.
\section{Complex ellipsoids}
For $p=(p_1,\dots,p_n)\in\mathbb R_{>0}^n$, $n\geq2$, define the \emph{complex ellipsoid}
\begin{equation*}
\mathbb E_p:=\left\{(z_1,\dots,z_n)\in\mathbb C^n:\sum_{j=1}^n|z_j|^{2p_j}<1\right\}.
\end{equation*}
Note that $\mathbb B_n:=\mathbb E_{(1,\dots,1)}$ is the unit ball in $\mathbb C^n$. We shall write $\mathbb D:=\mathbb B_1$, $\mathbb T:=\partial\mathbb D$. Moreover, if $\alpha/\beta\in\mathbb N^n$ then $\Psi_{\alpha/\beta}\in\Prop(\mathbb E_{\alpha},\mathbb E_{\beta})$.
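The membership claim $\Psi_{\alpha/\beta}\in\Prop(\mathbb E_{\alpha},\mathbb E_{\beta})$ can be illustrated numerically. The map $\Psi$ is not restated in this section; the sketch below assumes the standard convention $\Psi_m(z) = (z_1^{m_1},\dots,z_n^{m_n})$ for $m\in\mathbb N^n$ (a labeled assumption) and checks, by rejection sampling, that points of $\mathbb E_p$ are sent into $\mathbb E_q$ whenever $p/q\in\mathbb N^n$:

```python
# Sanity check: with Psi_m(z) = (z_1^{m_1}, ..., z_n^{m_n}) (assumed convention),
# Psi_{p/q} maps the complex ellipsoid E_p into E_q when p/q has integer entries,
# since |z_j^{m_j}|^{2 q_j} = |z_j|^{2 m_j q_j} = |z_j|^{2 p_j}.
import random

def in_ellipsoid(z, p):
    return sum(abs(zj)**(2 * pj) for zj, pj in zip(z, p)) < 1

def psi(z, m):
    return tuple(zj**mj for zj, mj in zip(z, m))

p, q = (2.0, 3.0), (1.0, 1.0)           # p/q = (2, 3) in N^2
m = tuple(int(pj / qj) for pj, qj in zip(p, q))

random.seed(0)
for _ in range(1000):
    # rejection-sample a point of E_p from the unit polydisc-like box
    while True:
        z = tuple(complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in p)
        if in_ellipsoid(z, p):
            break
    assert in_ellipsoid(psi(z, m), q)
```

The defining sums for $z$ and $\Psi_{p/q}(z)$ are in fact termwise equal, so the inclusion holds exactly, not just on sampled points.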
The problem of characterizing $\Prop(\mathbb E_p,\mathbb E_q)$ and $\Aut(\mathbb E_p)$ has been investigated in \cite{l1984} and \cite{ds1991}. The question of non-emptiness of $\Prop(\mathbb E_p,\mathbb E_q)$, as well as the form of $\Prop(\mathbb E_p,\mathbb E_q)$ and $\Aut(\mathbb E_p)$ in the case $p,q\in\mathbb N^n$, was completely solved in \cite{l1984}. The case $p,q\in\mathbb R_{>0}^n$ was considered in \cite{ds1991}, where the authors characterized the non-emptiness of $\Prop(\mathbb E_p,\mathbb E_q)$ and found $\Aut(\mathbb E_p)$. They did not, however, give the explicit form of an $F\in\Prop(\mathbb E_p,\mathbb E_q)$. These results are formulated in the following
\begin{thm}[cf. \cite{l1984}, \cite{ds1991}]\label{thm:ep}Assume that $n\geq2$, $p,q\in\mathbb R^n_{>0}$.
\begin{enumerate}[(a)]
\item\label{item:epexist}The following conditions are equivalent
\begin{enumerate}[(i)]
\item$\Prop(\mathbb E_p,\mathbb E_q)\neq\varnothing$;
\item there exists $\sigma\in\Sigma_n$ such that $p_{\sigma}/q\in\mathbb N^n$.
\end{enumerate}
\item\label{item:epform}If $p,q\in\mathbb N^n$, then the following conditions are equivalent
\begin{enumerate}[(i)]
\item$F\in\Prop(\mathbb E_p,\mathbb E_q)$;
\item$F=\phi\circ \Psi_{p_{\sigma}/q}\circ\sigma$, where $\sigma\in\Sigma_n$ is such that $p_{\sigma}/q\in\mathbb N^n$ and $\phi\in\Aut(\mathbb E_q)$.
\end{enumerate}
In particular, $\Prop(\mathbb E_p)=\Aut(\mathbb E_p)$.
\item\label{item:epaut}If $0\leq k\leq n$, $p\in\{1\}^k\times(\mathbb R_{>0}\setminus\{1\})^{n-k}$, $z=(z',z_{k+1},\dots,z_n)$, then the following conditions are equivalent
\begin{enumerate}[(i)]
\item$F=(F_1,\dots,F_n)\in\Aut(\mathbb E_p)$,
\item$F_j(z):=\begin{cases}H_j(z'),\quad&\textnormal{if }j\leq k\\\zeta_jz_{\sigma(j)}\left(\frac{\sqrt{1-\|a'\|^2}}{1-\langle z',a'\rangle}\right)^{1/p_{\sigma(j)}},\quad&\textnormal{if }j>k\end{cases}$, where $\zeta_j\in\mathbb T$, $j>k$, $H=(H_1,\dots,H_k)\in\Aut(\mathbb B_k)$, $a':=H^{-1}(0)$, and $\sigma\in\Sigma_n(p)$.
\end{enumerate}
\end{enumerate}
\end{thm}
In the general case the conclusion of Theorem~\ref{thm:ep}~(\ref{item:epform}) is no longer true (take, for instance, $\Psi_{(2,2)}\circ H\circ \Psi_{(2,2)}\in\Prop(\mathbb E_{(2,2)},\mathbb E_{(1/2,1/2)})$, where $H\in\Aut(\mathbb B_2)$, $H(0)\neq0$).
Nevertheless, from the proof of Theorem~1.1 in \cite{ds1991} we easily derive the following theorem which will be of great importance during the investigation of proper holomorphic mappings between generalized Hartogs triangles.
\begin{thm}\label{thm:epform}Assume that $n\geq2$, $p,q\in\mathbb R^n_{>0}$. Then the following conditions are equivalent
\begin{enumerate}[(i)]
\item$F\in\Prop(\mathbb E_p,\mathbb E_q)$;
\item$F=\Psi_{p_{\sigma}/(qr)}\circ\phi\circ \Psi_r\circ\sigma$, where $\sigma\in\Sigma_n$ is such that $p_{\sigma}/q\in\mathbb N^n$, $r\in\mathbb N^n$ is such that $p_{\sigma}/(qr)\in\mathbb N^n$, and $\phi\in\Aut(\mathbb E_{p_{\sigma}/r})$.
\end{enumerate}
In particular, $\Prop(\mathbb E_p)=\Aut(\mathbb E_p)$.
\end{thm}
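For illustration, the counterexample mentioned before the theorem fits the above factorization as follows (a worked example; the data $\sigma$, $r$, $\phi$ are simply read off from the notation of the theorem):

```latex
% Worked example: p = (2,2), q = (1/2,1/2), F = Psi_{(2,2)} o H o Psi_{(2,2)}.
% Take sigma = id, so p_sigma/q = (4,4) \in N^2; choose r = (2,2) \in N^2.
% Then p_sigma/(qr) = (2,2) \in N^2 and p_sigma/r = (1,1), i.e.
% phi runs through Aut(E_{(1,1)}) = Aut(B_2), and indeed
\[
F=\Psi_{p_{\sigma}/(qr)}\circ\phi\circ\Psi_r\circ\sigma
 =\Psi_{(2,2)}\circ H\circ\Psi_{(2,2)},\qquad H=\phi\in\Aut(\mathbb B_2).
\]
```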
\section{Generalized Hartogs triangles}
Let $n,m\in\mathbb N$. For $p=(p_1,\dots,p_n)\in\mathbb R_{>0}^n$ and $q=(q_1,\dots,q_m)\in\mathbb R_{>0}^m$, define the \emph{generalized Hartogs triangle}
\begin{equation*}
\mathbb F_{p,q}:=\left\{(z_1,\dots,z_n,w_1,\dots,w_m)\in\mathbb C^{n+m}:\sum_{j=1}^n|z_j|^{2p_j}<\sum_{j=1}^m|w_j|^{2q_j}<1\right\}.
\end{equation*}
Note that $\mathbb F_{p,q}$ is a nonsmooth pseudoconvex Reinhardt domain that does not contain the origin. Moreover, if $n=m=1$, then $\mathbb F_{1,1}$ is the standard Hartogs triangle.
The problem of characterization of $\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$ and $\Aut(\mathbb F_{p,q})$ has been investigated in many papers. The necessary and sufficient conditions for the non-emptiness of $\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$ are given in \cite{cx2001} for $p,\tilde p\in\mathbb N^n$, $q,\tilde q\in\mathbb N^m$, $n,m\geq2$, in \cite{c2002} for $p,\tilde p\in\mathbb R_{>0}^n$, $q,\tilde q\in\mathbb R_{>0}^m$, $n,m\geq2$, and in \cite{l1989} for $p,\tilde p\in\mathbb N^n$, $q,\tilde q\in\mathbb N^m$, $m=1$. The explicit form of an $F\in\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$ is presented in \cite{l1989} for $p,\tilde p\in\mathbb N^n$, $q,\tilde q\in\mathbb N^m$, $m=1$, whereas the description of $\Aut(\mathbb F_{p,q})$ may be found in \cite{cx2002} for $p\in\mathbb N^n$, $q\in\mathbb N^m$, $n,m\geq2$, and in \cite{l1989} for $p\in\mathbb N^n$, $q\in\mathbb N^m$, $m=1$.
In this paper we shall consider only the case $n=1$.
First we deal with the case $m=1$.
\begin{thm}\label{thm:fp2}Assume that $n=m=1$, $p,q,\tilde p,\tilde q\in\mathbb R_{>0}$.
\begin{enumerate}[(a)]
\item\label{item:fp2exist}The following conditions are equivalent
\begin{enumerate}[(i)]
\item$\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})\neq\varnothing$;
\item there exist $k,l\in\mathbb N$ such that $l\tilde q/\tilde p-kq/p\in\mathbb Z$.
\end{enumerate}
\item\label{item:fp2form}The following conditions are equivalent
\begin{enumerate}[(i)]
\item$F\in\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$;
\item$F(z,w)=\begin{cases}\left(\zeta z^kw^{l\tilde q/\tilde p-kq/p},\xi w^l\right),\quad&\textnormal{if }q/p\notin\mathbb Q\\\left(\zeta z^{k'}w^{l\tilde q/\tilde p-k'q/p}B\left(z^{p'}w^{-q'}\right),\xi w^l\right),\hfill&\textnormal{if }q/p\in\mathbb Q\end{cases}$,\\ where $\zeta,\xi\in\mathbb T$, $k,l\in\mathbb N$, $k'\in\mathbb N\cup\{0\}$ are such that $l\tilde q/\tilde p-kq/p\in\mathbb Z$, $l\tilde q/\tilde p-k'q/p\in\mathbb Z$, $p',q'\in\mathbb Z$ are relatively prime with $p/q=p'/q'$, and $B$ is a finite Blaschke product nonvanishing at 0 (it may happen that $B\equiv1$, but then $k'>0$).
\end{enumerate}
In particular, $\Prop(\mathbb F_{p,q})\supsetneq\Aut(\mathbb F_{p,q})$.
\item\label{item:fp2aut}The following conditions are equivalent
\begin{enumerate}[(i)]
\item$F\in\Aut(\mathbb F_{p,q})$;
\item$F(z,w)=\left(w^{q/p}\phi(zw^{-q/p}),\xi w\right)$, where $\xi\in\mathbb T$, and $\phi\in\Aut(\mathbb D)$ with $\phi(0)=0$ whenever $q/p\notin\mathbb N$.
\end{enumerate}
\end{enumerate}
\end{thm}
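Part (\ref{item:fp2aut}) admits a direct numerical sanity check (a sketch with assumed sample data, not part of the proof): for $F(z,w)=(w^{q/p}\phi(zw^{-q/p}),\xi w)$ one has $|F_1|^{2p}=|w|^{2q}|\phi(zw^{-q/p})|^{2p}<|w|^{2q}=|F_2|^{2q}$, since $|zw^{-q/p}|<1$ on $\mathbb F_{p,q}$.

```python
import random, cmath

def moebius(a):
    # disc automorphism phi(t) = (t - a)/(1 - conj(a) t)
    return lambda t: (t - a) / (1 - a.conjugate() * t)

def in_hartogs(z, w, p, q):
    # membership test for F_{p,q} with n = m = 1
    return abs(z) ** (2 * p) < abs(w) ** (2 * q) < 1

p, q = 1, 2                      # q/p = 2 is a positive integer
phi = moebius(0.4 + 0.1j)        # phi(0) != 0 is allowed since q/p in N
xi = cmath.exp(0.7j)

random.seed(1)
ok = True
for _ in range(2000):
    z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    w = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    if not in_hartogs(z, w, p, q):
        continue                 # not a point of F_{p,q}
    F1 = w ** 2 * phi(z * w ** -2)
    F2 = xi * w
    ok = ok and in_hartogs(F1, F2, p, q)

print(ok)
```

The check only confirms that $F$ maps $\mathbb F_{p,q}$ into itself; properness and invertibility are, of course, what the theorem asserts.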
\begin{rem}The counterpart of Theorem~\ref{thm:fp2} for $p,q,\tilde p,\tilde q\in\mathbb N$ was proved (with minor mistakes) in \cite{l1989}, where it was claimed that $F\in\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$ iff
\begin{equation}\label{eq:landucci}F(z,w)=\begin{cases}\left(\zeta z^kw^{l\tilde q/\tilde p-kq/p},\xi w^l\right),\quad&\textnormal{if }q/p\notin\mathbb N,\ l\tilde q/\tilde p-kq/p\in\mathbb Z\\\left(\zeta w^{l\tilde q/\tilde p}B\left(zw^{-q/p}\right),\xi w^l\right),\hfill&\textnormal{if }q/p\in\mathbb N,\ l\tilde q/\tilde p\in\mathbb N\end{cases},
\end{equation}
where $\zeta,\xi\in\mathbb T$, $k,l\in\mathbb N$, and $B$ is a finite Blaschke product. Nevertheless, the mapping
$$
\mathbb F_{2,3}\ni(z,w)\mapsto\left(z^3w^3B(z^2w^{-3}),w^3\right)\in\mathbb F_{2,5},
$$
where $B$ is a nonconstant finite Blaschke product nonvanishing at 0, is proper holomorphic but not of the form (\ref{eq:landucci}). In fact, from Theorem~\ref{thm:fp2}~(\ref{item:fp2form}) it follows immediately that for any choice of $p,q,\tilde p,\tilde q\in\mathbb N$ one may find a mapping $F\in\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$ having, as a factor of the first component, a nonconstant Blaschke product nonvanishing at 0.
\end{rem}
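The containment claim in the remark can be verified numerically (a sketch with an assumed Blaschke factor): for $(z,w)\in\mathbb F_{2,3}$ one has $|z^2w^{-3}|<1$, hence $|B(z^2w^{-3})|<1$, and consequently $|z^3w^3B(z^2w^{-3})|^4<|w|^{30}=|w^3|^{10}<1$.

```python
import random

def blaschke(a):
    # a single Blaschke factor; B(0) = -a != 0 when a != 0
    return lambda t: (t - a) / (1 - a.conjugate() * t)

def in_hartogs(z, w, p, q):
    # membership test for F_{p,q} with n = m = 1
    return abs(z) ** (2 * p) < abs(w) ** (2 * q) < 1

B = blaschke(0.3 - 0.2j)         # assumed sample factor, nonvanishing at 0
random.seed(2)
ok = True
for _ in range(2000):
    z = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    w = complex(random.uniform(-1, 1), random.uniform(-1, 1))
    if not in_hartogs(z, w, 2, 3):
        continue                 # not a point of F_{2,3}
    G = z ** 3 * w ** 3 * B(z ** 2 * w ** -3)
    H = w ** 3
    ok = ok and in_hartogs(G, H, 2, 5)

print(ok)
```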
Our result gives a negative answer to the question posed by the authors in \cite{jp2008}, whether the structure of $\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$ remains unchanged when passing from $p,q,\tilde p,\tilde q\in\mathbb N$ to arbitrary $p,q,\tilde p,\tilde q\in\mathbb R_{>0}$. It should be mentioned that, on the other hand, the automorphism group $\Aut(\mathbb F_{p,q})$ does not change when passing from $p,q\in\mathbb N$ to $p,q\in\mathbb R_{>0}$.
In the proof of Theorem~\ref{thm:fp2}, however, neither the method from \cite{c2002} (where the assumption $m\geq2$ is essential) nor the method from \cite{l1989} (where the assumption $p,q,\tilde p,\tilde q\in\mathbb N$ is essential) can be used. Fortunately, it turns out that one may get Theorem~\ref{thm:fp2} using part of the main result from \cite{ik2006}, where a complete characterization of nonelementary proper holomorphic mappings between bounded Reinhardt domains in $\mathbb C^2$ is given.
The remaining case $m\geq2$ is considered in the following result.
\begin{thm}\label{thm:fps}Assume that $n=1$, $m\geq2$, $p,\tilde p\in\mathbb R_{>0}$, $q,\tilde q\in\mathbb R^m_{>0}$, $(z,w)\in\mathbb C\times\mathbb C^m$.
\begin{enumerate}[(a)]
\item\label{item:fpsexist}The following conditions are equivalent
\begin{enumerate}[(i)]
\item$\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})\neq\varnothing$;
\item $p/\tilde p\in\mathbb N$ and there exists $\sigma\in\Sigma_m$ such that $q_{\sigma}/\tilde q\in\mathbb N^m$.
\end{enumerate}
\item\label{item:fpsform}The following conditions are equivalent
\begin{enumerate}[(i)]
\item$F\in\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$;
\item$F(z,w)=(\zeta z^k,h(w))$, where $\zeta\in\mathbb T$, $k\in\mathbb N$, $h\in\Prop(\mathbb E_q,\mathbb E_{\tilde q})$, $h(0)=0$.
\end{enumerate}
In particular, $\Prop(\mathbb F_{p,q})=\Aut(\mathbb F_{p,q})$.
\item\label{item:fpsaut}The following conditions are equivalent
\begin{enumerate}[(i)]
\item$F\in\Aut(\mathbb F_{p,q})$;
\item$F(z,w)=(\zeta z,h(w))$, where $\zeta\in\mathbb T$, $h\in\Aut(\mathbb E_q)$, $h(0)=0$.
\end{enumerate}
\end{enumerate}
\end{thm}
Theorem~\ref{thm:fps}~(\ref{item:fpsexist}) was proved in \cite{cx2001} (for $n,m\geq2$, $p,\tilde p\in\mathbb N^n$, $q,\tilde q\in\mathbb N^m$) and in \cite{c2002} (for $n,m\geq2$, $p,\tilde p\in\mathbb R_{>0}^n$, $q,\tilde q\in\mathbb R_{>0}^m$). Theorem~\ref{thm:fps}~(\ref{item:fpsform}) was proved in \cite{cx2002} for $n,m\geq2$, $p=\tilde p\in\mathbb N^n$, $q=\tilde q\in\mathbb N^m$. Theorem~\ref{thm:fps}~(\ref{item:fpsaut}) was proved in \cite{cx2002} for $n,m\geq2$, $p\in\mathbb N^n$, $q\in\mathbb N^m$. Part (\ref{item:fpsaut}) of Theorem~\ref{thm:fps} gives an affirmative answer to the question posed by the authors in \cite{jp2008}, whether the description of the automorphism group (\ref{item:fpsaut}) remains true for arbitrary $p\in\mathbb R_{>0}^n$, $q\in\mathbb R_{>0}^m$ (at least in the case $n=1$).
\begin{rem}\label{rem:boundary}Using Barrett's and Bell's results (cf.~\cite{b1984}, \cite{be1982}) one may show (cf.~\cite{cx2001}) that any $F\in\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$ extends holomorphically past any boundary point $(z,w)\in\partial\mathbb F_{p,q}\setminus\{(0,0)\}$.
Let
\begin{align*}
K=K_{p,q}:=&\left\{(z,w)\in\mathbb C^{n+m}:0<\sum_{j=1}^n|z_j|^{2p_j}=\sum_{j=1}^m|w_j|^{2q_j}<1\right\},\\
L=L_{p,q}:=&\left\{(z,w)\in\mathbb C^{n+m}:\sum_{j=1}^n|z_j|^{2p_j}<\sum_{j=1}^m|w_j|^{2q_j}=1\right\}.
\end{align*}
Analogously, we define $\tilde K:=K_{\tilde p,\tilde q}$ and $\tilde L:=L_{\tilde p,\tilde q}$.
For $m>1$ it is shown in \cite{c2002} that
$$
F(K)\subset\tilde K,\quad F(L)\subset\tilde L
$$
for any $F\in\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$.
\end{rem}
\section{Proofs}
\begin{proof}[Proof of Theorem~\ref{thm:epform}]The implication (ii)$\Rightarrow$(i) is obvious.
To prove (i)$\Rightarrow$(ii) let $F=(F_1,\dots,F_n)\in\Prop(\mathbb E_p,\mathbb E_q)$.
Following \cite{s1972} any automorphism $H=(H_1,\dots,H_n)\in\Aut(\mathbb B_n)$ is of the form
\begin{equation*}\label{eq:aut}
H_j(z)=\frac{\sqrt{1-\|a\|^2}}{1-\langle z,a\rangle}\sum_{k=1}^nh_{j,k}(z_k-a_k),\quad z=(z_1,\dots,z_n)\in\mathbb B_n,\ j=1,\dots,n,
\end{equation*}
where $a=(a_1,\dots,a_n)\in\mathbb B_n$ and $Q=[h_{j,k}]$ is an $n\times n$ matrix such that
\begin{equation*}
\bar Q(\mathbb I_n-\bar a{}^t\!a){}^t\!Q=\mathbb I_n,
\end{equation*}
where $\mathbb I_n$ is the unit $n\times n$ matrix, whereas $\bar A$ (resp.~${}^t\!A$) is the conjugate (resp.~transpose) of an arbitrary matrix $A$. In particular, $Q$ is unitary if $a=0$.
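For $n=1$ the above description reduces to the familiar M\"obius automorphisms of the disc, which can be checked numerically (a sketch with an assumed sample point $a$): the matrix identity becomes $|h|^2(1-|a|^2)=1$, and the resulting $H$ is $z\mapsto(z-a)/(1-\bar a z)$.

```python
import cmath

a = 0.5 + 0.3j                        # a point of the unit disc (assumed sample)
h = (1 - abs(a) ** 2) ** -0.5         # 1x1 matrix Q = [h] with |h|^2 (1 - |a|^2) = 1

def H(z):
    # the Aut(B_n) formula specialized to n = 1
    return (1 - abs(a) ** 2) ** 0.5 / (1 - z * a.conjugate()) * h * (z - a)

# the matrix identity reduces to |h|^2 (1 - |a|^2) = 1 ...
assert abs(abs(h) ** 2 * (1 - abs(a) ** 2) - 1) < 1e-12

# ... and H then maps the unit circle to itself:
for t in (cmath.exp(0.3j), cmath.exp(2.1j)):
    print(abs(abs(H(t)) - 1))          # ~ 0
```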
It follows from \cite{ds1991} that there exists $\sigma\in\Sigma_n$ such that $p_{\sigma}/q\in\mathbb N^n$, $h_{j,\sigma(j)}\neq0$, and
\begin{equation}\label{eq:ds1.6}
F_j(z)=\left(\frac{\sqrt{1-\|a\|^2}}{1-\langle z^p,a\rangle}h_{j,\sigma(j)}z_{\sigma(j)}^{p_{\sigma(j)}}\right)^{1/q_j}
\end{equation}
whenever $1/q_j\notin\mathbb N$.
If $1/q_j\in\mathbb N$ then $F_j$ either is of the form (\ref{eq:ds1.6}), where $p_{\sigma(j)}/q_j\in\mathbb N$, or
\begin{equation*}
F_j(z)=\left(\frac{\sqrt{1-\|a\|^2}}{1-\langle z^p,a\rangle}\sum_{k=1}^nh_{j,k}(z_k^{p_k}-a_k)\right)^{1/q_j}
\end{equation*}
where $p_k\in\mathbb N$ for any $k$ such that $h_{j,k}\neq0$.
Consequently, if we define $r=(r_1,\dots,r_n)$ as
\begin{equation*}
r_j:=\begin{cases}p_{\sigma(j)},\quad&\textnormal{if }a_{\sigma(j)}\neq0\textnormal{ or there is }k\neq\sigma(j)\textnormal{ with }h_{j,k}\neq0\\
p_{\sigma(j)}/q_j,\quad&\textnormal{otherwise}
\end{cases},
\end{equation*}
then it is easy to see that $r\in\mathbb N^n$, $p_{\sigma}/(qr)\in\mathbb N^n$, and $F$ is as desired.
\end{proof}
\begin{rem}Note that in the case $p,q\in\mathbb N^n$ we have $1/q_j\in\mathbb N$ iff $q_j=1$. Hence the above definition of $r$ implies that $r=p_{\sigma}/q$ and, consequently, we get thesis of Theorem~\ref{thm:ep}~(\ref{item:epform}).
\end{rem}
\begin{proof}[Proof of Theorem~\ref{thm:fp2}]Observe that (\ref{item:fp2exist}) and (\ref{item:fp2aut}) follow immediately from (\ref{item:fp2form}). Thus, it suffices to prove part (\ref{item:fp2form}).
The implication (ii)$\Rightarrow$(i) in part (\ref{item:fp2form}) holds for any $p,q,\tilde p,\tilde q>0$. Indeed, if $F=(G,H)$ is of the form given in (ii), then
$$
|G(z,w)|^{\tilde p}|H(z,w)|^{-\tilde q}=\begin{cases}\left(|z||w|^{-q/p}\right)^{k\tilde p},\quad&\textnormal{if }q/p\notin\mathbb Q\\\left(|z||w|^{-q/p}\right)^{k'\tilde p}\left|B(z^{p'}w^{-q'})\right|^{\tilde p},\quad&\textnormal{if }q/p\in\mathbb Q\end{cases}.
$$
To prove the implication (i)$\Rightarrow$(ii) in (\ref{item:fp2form}), let $F\in\Prop(\mathbb F_{p,q},\mathbb F_{\tilde p,\tilde q})$.
Assume first that $F$ is an elementary algebraic mapping, i.e.~it is of the form
$$
F(z,w)=\left(\alpha z^aw^b,\beta z^cw^d\right),
$$
where $a,b,c,d\in\mathbb Z$ are such that $ad-bc\neq0$ and $\alpha,\beta\in\mathbb C$ are some constants. Since $F$ is surjective, we infer that $c=0$, $d\in\mathbb N$, and $\xi:=\beta\in\mathbb T$. Moreover,
\begin{equation}\label{eq:elem1}
|\alpha|^{\tilde p}|z|^{a\tilde p}|w|^{b\tilde p-d\tilde q}<1,
\end{equation}
whence $a\in\mathbb N$, $b\tilde p-d\tilde q\in\mathbb N$, and $\zeta:=\alpha\in\mathbb T$. Let $k:=a$, $l:=d$. One may rewrite (\ref{eq:elem1}) as
$$
\left(|z|^p|w|^{-q}\right)^{k\tilde p/p}|w|^{b\tilde p-l\tilde q+kq\tilde p/p}<1.
$$
Since one may take a sequence $(z_{\nu},1/2)_{\nu\in\mathbb N}\subset\mathbb F_{p,q}$ with $|z_{\nu}|^p2^q\to1$ as $\nu\to\infty$, we infer that $b\tilde p-l\tilde q+kq\tilde p/p=0$, i.e.
$$
b=\frac{l\tilde q}{\tilde p}-\frac{kq}{p}.
$$
Consequently, $F$ is as in condition (ii) of Theorem~\ref{thm:fp2}~(\ref{item:fp2form}).
Assume now that $F$ is not elementary. Then from Theorem~0.1 in \cite{ik2006} it follows that $F$ is of the form
$$
F(z,w)=\left(\alpha z^aw^b\tilde B\left(z^{p'}w^{-q'}\right),\beta w^l\right),
$$
where $a,b\in\mathbb Z$, $a\geq0$, $p',q',l\in\mathbb N$, $p',q'$ are relatively prime,
\begin{equation}\label{eq:isakru}
\frac{q'}{p'}=\frac{q}{p},\quad\frac{\tilde q}{\tilde p}=\frac{aq'+bp'}{lp'},
\end{equation}
$\alpha,\beta\in\mathbb C$ are some constants, and $\tilde B$ is a nonconstant finite Blaschke product non-vanishing at the origin.
From the surjectivity of $F$ we immediately infer that $\zeta:=\alpha\in\mathbb T$ and $\xi:=\beta\in\mathbb T$. If we put $k':=a$, then (\ref{eq:isakru}) implies
$$
b=\frac{l\tilde q}{\tilde p}-\frac{k'q}{p},
$$
which ends the proof.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm:fps}]We shall write $w=(w_1,\dots,w_m)\in\mathbb C^m$. We may assume without loss of generality that there is $0\leq\mu\leq m$ with $\tilde q\in\{1\}^{\mu}\times(\mathbb R_{>0}\setminus\{1\})^{m-\mu}$. Let
$$
F=(G,H):\mathbb F_{p,q}\to\mathbb F_{\tilde p,\tilde q}\subset\mathbb C\times\mathbb C^m
$$
be a proper holomorphic mapping. It follows from \cite{c2002} that $F(L)\subset\tilde L$ and $H$ is independent of the variable $z$. Hence $h:=H(0,\cdot)\in\Prop((\mathbb E_q)_*,(\mathbb E_{\tilde q})_*)$. Consequently, by the Hartogs theorem $h\in\Prop(\mathbb E_q,\mathbb E_{\tilde q})$, i.e.~(Theorem~\ref{thm:epform})
$$
h=\Psi_{q_{\sigma}/(\tilde qr)}\circ\psi\circ \Psi_r\circ\sigma
$$
for some $\sigma\in\Sigma_m$ with $q_{\sigma}/\tilde q\in\mathbb N^m$, $r\in\mathbb N^m$ with $q_{\sigma}/(\tilde qr)\in\mathbb N^m$, and $\psi\in\Aut(\mathbb E_{q_{\sigma}/r})$ with $\psi(0)=0$. Indeed, if $a=(a_1,\dots,a_m)$ is a zero of $h$ we immediately get
$$
G(z,a)=0,\quad|z|^{2p}<\sum_{j=1}^m|a_j|^{2q_j},
$$
which is clearly a contradiction, unless $a=0$.
Without loss of generality we may assume that there is $\mu\leq l\leq m$ with $1/\tilde q_j\notin\mathbb N$ iff $j>l$. It follows from the proof of Theorem~\ref{thm:epform} that
$$
\frac{q_{\sigma(j)}}{r_j}=\begin{cases}1,\quad&\textnormal{if }j=1,\dots,l\\\tilde q_j,\quad&\textnormal{if }j=l+1,\dots,m\end{cases},
$$
whence
$$
\psi(w)=(U(w_1,\dots,w_l),\xi_{l+1}w_{l+\tau(1)},\dots,\xi_mw_{l+\tau(m-l)}),
$$
where $U=(U_1,\dots,U_l)\in\mathbb U(l)$ and $\tau\in\Sigma_{m-l}(\tilde q_{l+1},\dots,\tilde q_m)$. Finally,
\begin{multline*}
h(w)=\left(U_1^{1/\tilde q_1}\left(w_{\sigma(1)}^{q_{\sigma(1)}},\dots,w_{\sigma(l)}^{q_{\sigma(l)}}\right),\dots, U_l^{1/\tilde q_l}\left(w_{\sigma(1)}^{q_{\sigma(1)}},\dots,w_{\sigma(l)}^{q_{\sigma(l)}}\right),\right.\\ \left.\xi_{l+1}w_{\sigma(l+1)}^{q_{\sigma(l+1)}/\tilde q_{l+1}},\dots,\xi_mw_{\sigma(m)}^{q_{\sigma(m)}/\tilde q_m}\right).
\end{multline*}
In particular, if we write $h=(h_1,\dots,h_m)$,
\begin{equation}\label{eq:unitarity}
\sum_{j=1}^m|h_j(w)|^{2\tilde q_j}=\sum_{j=1}^m|w_j|^{2q_j},\quad w\in\mathbb E_q.
\end{equation}
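Identity (\ref{eq:unitarity}) can be tested numerically in a concrete instance (a sketch with an assumed unitary $U$ and exponents $q$, in the case $l=m=2$, $\tilde q=(1,1)$): unitarity of $U$ preserves the Euclidean norm of $(w_1^{q_1},w_2^{q_2})$.

```python
import math

c, s = math.cos(0.4), math.sin(0.4)
U = [[c, -s], [s, c]]                  # a real rotation, hence unitary (assumed sample)

q = (1.5, 2.0)                         # assumed exponents q_j; tilde q = (1, 1)

def h(w):
    # h_j(w) = U_j(w_1^{q_1}, w_2^{q_2}), principal branch of the powers
    v = (w[0] ** q[0], w[1] ** q[1])
    return tuple(U[j][0] * v[0] + U[j][1] * v[1] for j in range(2))

w = (0.4 + 0.2j, -0.1 + 0.5j)
lhs = sum(abs(hj) ** 2 for hj in h(w))                  # sum_j |h_j|^{2 tilde q_j}
rhs = sum(abs(wj) ** (2 * qj) for wj, qj in zip(w, q))  # sum_j |w_j|^{2 q_j}
print(abs(lhs - rhs))                                   # ~ 0
```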
For $w\in\mathbb C^m$, $0<\rho_w:=\sum_{j=1}^m|w_j|^{2q_j}<1$ let
$$
g(z):=G(z,w),\quad z\in\rho_{w}^{1/(2p)}\mathbb D.
$$
$g$ may depend, a priori, on $w$. Since $F(K)\subset\tilde K$, it follows from (\ref{eq:unitarity}) that $g\in\Prop\left(\rho_w^{1/(2p)}\mathbb D,\rho_w^{1/(2\tilde p)}\mathbb D\right)$, i.e.
\begin{equation}\label{eq:f1}
g(z)=\rho_w^{1/(2\tilde p)}B\left(z\rho_w^{-1/(2p)}\right),\quad z\in\rho_w^{1/(2p)}\mathbb D,
\end{equation}
where $B$ is a finite Blaschke product. Let
\begin{align*}
\mathbb F^0_{p,q}&:=\mathbb F_{p,q}\cap\left(\mathbb C\times\{0\}^{\sigma(1)-1}\times\mathbb C\times\{0\}^{m-\sigma(1)}\right),\\
\mathbb F^0_{\tilde p,q_{\sigma}/r}&:=\mathbb F_{\tilde p,q_{\sigma}/r}\cap\left(\mathbb C^2\times\{0\}^{m-1}\right).
\end{align*}
Let $\Phi\in\Aut(\mathbb F_{\tilde p,q_{\sigma}/r})$ be defined by
$$
\Phi(z,w):=\left(z,U^{-1}(w_1,\dots,w_l),w_{l+1},\dots,w_m\right)
$$
and let
$$
\hat\xi_1:=\begin{cases}\xi_1,\quad&\textnormal{if }l=0\\1,\quad&\textnormal{if }l>0\end{cases},\qquad\hat q_1:=\begin{cases}\tilde q_1,\quad&\textnormal{if }l=0\\1,\quad&\textnormal{if }l>0\end{cases}.
$$
Then $\Phi\circ(G,\psi\circ \Psi_r\circ\sigma)\in\Prop(\mathbb F^0_{p,q},\mathbb F^0_{\tilde p,q_{\sigma}/r})$ with
\begin{equation}\label{eq:phi1}
(\Phi\circ(G,\psi\circ\Psi_r\circ\sigma))(z,w)=\left(G(z,w),\hat\xi_1w_{\sigma(1)}^{q_{\sigma(1)}/\hat q_1},0,\dots,0\right),\quad (z,w)\in\mathbb F^0_{p,q}.
\end{equation}
It follows from Theorem~\ref{thm:fp2} that
\begin{equation}\label{eq:phithmfp2}
(\Phi\circ(G,\psi\circ\Psi_r\circ\sigma))(z,w)=\left(\hat G(z,w),\eta w_{\sigma(1)}^s,0,\dots,0\right),\quad (z,w)\in\mathbb F^0_{p,q},
\end{equation}
where
$$
\hat G(z,w):=\begin{cases}\zeta z^kw_{\sigma(1)}^{s\hat q_1/\tilde p-kq_{\sigma(1)}/p},\hfill&\textnormal{if }q_{\sigma(1)}/p\notin\mathbb Q\\\zeta z^{k'}w_{\sigma(1)}^{s\hat q_1/\tilde p-k'q_{\sigma(1)}/p}\hat B\left(z^{p'}w_{\sigma(1)}^{-q'_{\sigma(1)}}\right),\hfill&\textnormal{if }q_{\sigma(1)}/p\in\mathbb Q\end{cases},
$$
$\zeta,\eta\in\mathbb T$, $k,s,p',q'_{\sigma(1)}\in\mathbb N$, $k'\in\mathbb N\cup\{0\}$ are such that $p',q'_{\sigma(1)}$ are relatively prime, $q_{\sigma(1)}/p=q'_{\sigma(1)}/p'$, $s\hat q_1/\tilde p-kq_{\sigma(1)}/p\in\mathbb Z$, $s\hat q_1/\tilde p-k'q_{\sigma(1)}/p\in\mathbb Z$, and $\hat B$ is a finite Blaschke product nonvanishing at 0 (if $\hat B\equiv1$ then $k'>0$). Hence
\begin{multline}\label{eq:h1}
(\Phi\circ(G,\psi\circ\Psi_r\circ\sigma))(z,w)=\\
\left(\hat G(z,w)+\alpha(z,w),w_{\sigma(1)}^{q_{\sigma(1)}},\dots,w_{\sigma(l)}^{q_{\sigma(l)}},\xi_{l+1}w_{\sigma(l+1)}^{q_{\sigma(l+1)}/\tilde q_{l+1}},\dots,\xi_mw_{\sigma(m)}^{q_{\sigma(m)}/\tilde q_m}\right),
\end{multline}
for $(z,w)\in\mathbb F_{p,q}$, $w_{\sigma(1)}\neq0$, where $\alpha$ is holomorphic on $\mathbb F_{p,q}$ with $\alpha|_{\mathbb F^0_{p,q}}=0$. Comparing (\ref{eq:phi1}) and (\ref{eq:phithmfp2}) we conclude that
$$
\eta=\hat\xi_1,\quad s=q_{\sigma(1)}/\hat q_1.
$$
Since the mapping on the left side of (\ref{eq:h1}) is holomorphic on $\mathbb F_{p,q}$, the function
\begin{equation}\label{eq:hatG}
\hat G(z,w)=\begin{cases}\zeta z^kw_{\sigma(1)}^{q_{\sigma(1)}(1/\tilde p-k/p)},\hfill&\textnormal{if }q_{\sigma(1)}/p\notin\mathbb Q\\\zeta z^{k'} w_{\sigma(1)}^{q_{\sigma(1)}(1/\tilde p-k'/p)}\hat B\left(z^{p'}w_{\sigma(1)}^{-q'_{\sigma(1)}}\right),\hfill&\textnormal{if }q_{\sigma(1)}/p\in\mathbb Q\end{cases}
\end{equation}
with $q_{\sigma(1)}(1/\tilde p-k/p)\in\mathbb Z$ and $q_{\sigma(1)}(1/\tilde p-k'/p)\in\mathbb Z$ has to be holomorphic on $\mathbb F_{p,q}$, too. Since $m\geq2$, it may happen that $w_{\sigma(1)}=0$. Consequently, $q_{\sigma(1)}(1/\tilde p-k/p)\in\mathbb N\cup\{0\}$ in the first case of (\ref{eq:hatG}), whereas $\hat B(t)=t^{k''}$ for some $k''\in\mathbb N$ with $q_{\sigma(1)}(1/\tilde p-k'/p)-k''q'_{\sigma(1)}\in\mathbb N\cup\{0\}$ in the second case. Thus
$$
\hat G(z,w)=\zeta z^kw_{\sigma(1)}^{q_{\sigma(1)}(1/\tilde p-k/p)},
$$
where $k\in\mathbb N$, $q_{\sigma(1)}(1/\tilde p-k/p)\in\mathbb N\cup\{0\}$ (in the second case of (\ref{eq:hatG}) it suffices to take $k:=k'+p'k''$).
Observe that $\hat G+\alpha=G$. Fix $w\in\{0\}^{\sigma(1)-1}\times\mathbb C\times\{0\}^{m-\sigma(1)}$ with $0<\rho_w<1$. Then $\rho_w=|w_{\sigma(1)}|^{2q_{\sigma(1)}}$ and $\hat G(\cdot,w)=g$ on $\rho_w^{1/(2p)}\mathbb D$, i.e.
$$
\zeta z^kw_{\sigma(1)}^{q_{\sigma(1)}(1/\tilde p-k/p)}=|w_{\sigma(1)}|^{q_{\sigma(1)}/\tilde p}B\left(z|w_{\sigma(1)}|^{-q_{\sigma(1)}/p}\right),\quad z\in|w_{\sigma(1)}|^{q_{\sigma(1)}/p}\mathbb D.
$$
Hence $B(t)=\zeta t^k$ and $q_{\sigma(1)}(1/\tilde p-k/p)=0$, i.e.~$k=p/\tilde p$. Thus part (\ref{item:fpsexist}) is proved. To finish part (\ref{item:fpsform}), note that $g(z)=\zeta z^{p/\tilde p}$. Consequently, $g$ does not depend on $w$ and
$$
G(z,w)=\zeta z^{p/\tilde p},\quad (z,w)\in\mathbb F_{p,q}.
$$
Part (\ref{item:fpsaut}) follows directly from (\ref{item:fpsform}).
\end{proof}
\section{#1}
\protect\setcounter{secnum}{\value{section}}
\protect\setcounter{equation}{0}
\protect\renewcommand{\theequation}{\mbox{\arabic{secnum}.\arabic{equation}}}}
\setcounter{tocdepth}{1}
\begin{document}
\title{Infinitesimal Deformations and Obstructions for Toric Singularities}
\author{Klaus Altmann\footnotemark[1]\\
\small Dept.~of Mathematics, M.I.T., Cambridge, MA 02139, U.S.A.
\vspace{-0.7ex}\\ \small E-mail: altmann@math.mit.edu}
\footnotetext[1]{Die Arbeit wurde mit einem Stipendium des DAAD unterst\"utzt.}
\date{}
\maketitle
\begin{abstract}
The obstruction space $T^2$ and the cup product
$T^1\times T^1\to T^2$ are computed for toric singularities.
\end{abstract}
\tableofcontents
\sect{Introduction}\label{s1}
\neu{11}
For an affine scheme $\,Y= \mbox{Spec}\; A$, there are two important $A$-modules,
$T^1_Y$ and $T^2_Y$, carrying information about its deformation theory:
$T^1_Y$ describes the infinitesimal deformations, and $T^2_Y$ contains the
obstructions for extending deformations of $Y$ to larger base spaces.\\
\par
In case $Y$ admits a versal deformation, $T^1_Y$ is the tangent space of the
versal base space $S$. Moreover, if $J$ denotes the ideal defining $S$ as a
closed
subscheme of the affine space $T^1_Y$, the module
$\left( ^{\displaystyle J}\! / \! _{\displaystyle m_{T^1} \,J} \right) ^\ast$
can be canonically embedded into $T^2_Y$, i.e. $(T_Y^2)^\ast$-elements induce
the
equations defining $S$ in $T^1_Y$.\\
\par
The vector spaces $T^i_Y$ come with a cup product
$T_Y^1 \times T^1_Y \rightarrow T^2_Y$.
The associated quadratic form $T^1_Y \rightarrow T^2_Y$ describes the
quadratic part of the elements of $J$, i.e. it can be used to get a better
approximation of the versal base space $S$ as regarding its tangent space
only.\\
\par
\neu{12}
In \cite{T1} we have determined the vector space $T^1_Y$ for affine toric
varieties.
The present paper can be regarded as its continuation - we will compute $T^2_Y$
and
the cup product. \\
These modules $T^i_Y$ are canonically graded (induced from the character group
of
the torus). We will describe their homogeneous pieces as cohomology groups of
certain complexes, that are directly induced from the combinatorial structure
of the
rational, polyhedral cone defining our variety $Y$. The results
can be found in \S \ref{s3}.\\
\par
Switching to another, quasi-isomorphic complex provides a second formula for the
vector spaces $T^i_Y$ (cf. \S \ref{s6}). We will use this particular version
for
describing these spaces and the cup product in the special case of
three-dimensional toric Gorenstein singularities (cf. \S \ref{s7}).\\
\par
\sect{$T^1$, $T^2$, and the cup product (in general)}\label{s2}
In this section we will give a brief reminder of the well-known
definitions of these objects. Moreover, we will use this opportunity to fix
some
notations.\\
\par
\neu{21}
Let $Y \subseteq \,I\!\!\!\!C^{w+1}$ be given by equations $f_1,\dots,f_m$, i.e.
its ring of regular functions equals
\[
A=\;^{\displaystyle P}\!\! / \! _{\displaystyle I} \quad
\mbox{ with }
\begin{array}[t]{l}
P = \,I\!\!\!\!C[z_0,\dots, z_w]\\
I = (f_1,\dots,f_m)\, .
\end{array}
\]
Then, using $d:^{\displaystyle I}\! / \! _{\displaystyle I^2}
\rightarrow A^{w+1}\;$ ($d(f_i):= (\frac{\partial f_i}{\partial z_0},\dots
\frac{\partial f_i}{\partial z_w})$),
the vector space $T^1_Y$ equals
\[
T^1_Y = \;^{\displaystyle \mbox{Hom}_A(^{\displaystyle I}\!\! / \!
_{\displaystyle I^2}, A)} \! \left/ \!
_{\displaystyle \mbox{Hom}_A(A^{w+1},A)} \right.\; .
\vspace{1ex}
\]
\par
\neu{22}
Let ${\cal R}\subseteq P^m$ denote the $P$-module of relations between the equations
$f_1,\dots,f_m$. It contains the so-called Koszul relations
${\cal R}_0:= \langle f_i\,e^j - f_j \,e^i \rangle$ as a submodule.\\
Now, $T^2_Y$ can be obtained as
\[
T^2_Y = \;^{\displaystyle \mbox{Hom}_P(^{\displaystyle {\cal R}}\! / \!
_{\displaystyle {\cal R}_0}, A)} \! \left/ \!
_{\displaystyle \mbox{Hom}_P(P^m,A)} \right.\; .
\vspace{1ex}
\]
\par
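A standard worked example (added here for illustration; it is not taken from the text above): for a hypersurface, $m=1$, the ring $P$ is a domain, so ${\cal R}={\cal R}_0=0$ and hence $T^2_Y=0$, while $T^1_Y$ reduces to the Tjurina algebra $P/(f,\partial f)$. For the $A_1$-singularity $Y=V(xy-z^2)\subseteq\,I\!\!\!\!C^3$ one gets

```latex
% f = xy - z^2; the partials are (y, x, -2z), and (f, y, x, -2z) = (x, y, z):
\[
T^1_Y \;=\; \;^{\displaystyle \,I\!\!\!\!C[x,y,z]}\!\! \left/ \!
_{\displaystyle (xy-z^2,\; y,\; x,\; -2z)} \right.
\;=\; \;^{\displaystyle \,I\!\!\!\!C[x,y,z]}\!\! \left/ \!
_{\displaystyle (x,y,z)} \right. \;\cong\; \,I\!\!\!\!C\,,
\qquad T^2_Y \;=\; 0\,.
\]
```

This matches the one-parameter versal deformation $xy-z^2=t$ of the $A_1$-singularity, which is unobstructed.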
\neu{23}
Finally, the cup product $T^1\times T^1 \rightarrow T^2$ can be defined in the
following way:
\begin{itemize}
\item[(i)]
Starting with an $\varphi\in \mbox{Hom}_A(^{\displaystyle I}\! / \!
_{\displaystyle I^2}, A)$, we lift the images of the $f_i$ obtaining
elements $\tilde{\varphi}(f_i)\in P$.
\item[(ii)]
Given a relation $r\in {\cal R}$, the linear combination
$\sum_ir_i\,\tilde{\varphi}(f_i)$ vanishes in $A$, i.e. it is contained in the
ideal $I\subseteq P$. Denote by $\lambda(\varphi)\in P^m$ any set of
coefficients such that
\[
\sum_i r_i \, \tilde{\varphi}(f_i) + \sum_i \lambda_i(\varphi)\, f_i =0\quad
\mbox{ in } P.
\]
(Of course, $\lambda$ depends on $r$ also.)
\item[(iii)]
If $\varphi, \psi \in \mbox{Hom}_A(^{\displaystyle I}\! / \!
_{\displaystyle I^2}, A)$ represent two elements of $T^1_Y$, then we define for
each relation $r\in {\cal R}$
\[
(\varphi \cup \psi)(r) := \sum_i \lambda_i (\varphi)\, \psi(f_i) +
\sum_i \varphi(f_i)\, \lambda_i(\psi)\; \in A\, .
\vspace{1ex}
\]
\end{itemize}
{\bf Remark:}
The definition of the cup product does not depend on the choices we made:
\begin{itemize}
\item[(a)]
Choosing a $\lambda'(\varphi)$ instead of $\lambda(\varphi)$ yields
$\lambda'(\varphi) - \lambda(\varphi) \in {\cal R}$, i.e. in $A$ we obtain the same
result.
\item[(b)]
Let $\tilde{\varphi}'(f_i)$ be different liftings to $P$. Then, the difference
$\tilde{\varphi}'(f_i) - \tilde{\varphi}(f_i)$ is contained in $I$, i.e. it can
be written as some linear combination
\[
\tilde{\varphi}'(f_i) - \tilde{\varphi}(f_i) = \sum_j t_{ij}\, f_j\, .
\]
Hence,
\[
\sum_i r_i \,\tilde{\varphi}'(f_i) = \sum_i r_i \,\tilde{\varphi}(f_i) +
\sum_{i,j} t_{ij}\, r_i\, f_j\,,
\]
and we can define $\lambda'_j(\varphi):= \lambda_j(\varphi) -
\sum_it_{ij}\,r_i$
(corresponding to $\tilde{\varphi}'$ instead of $\tilde{\varphi}$). Then, we
obtain
for the cup product
\[
(\varphi\cup\psi)'(r) - (\varphi\cup\psi)(r) = -\sum_ir_i\cdot
\left( \sum_j t_{ij}\, \psi(f_j)\right)\, ,
\]
but this expression comes from some map $P^m\rightarrow A$.
\vspace{3ex}
\end{itemize}
\sect{$T^1$, $T^2$, and the cup product (for toric varieties)}\label{s3}
\neu{31}
We start by fixing the usual notation for dealing with affine toric
varieties (cf. \cite{Ke} or
\cite{Oda}):
\begin{itemize}
\item
Let $M,N$ be mutually dual free Abelian groups; we denote by $M_{I\!\!R}, N_{I\!\!R}$
the associated real
vector spaces obtained by base change with $I\!\!R$.
\item
Let $\sigma=\langle a^1,\dots,a^N\rangle \subseteq N_{I\!\!R}$ be a rational,
polyhedral
cone with apex - given by its fundamental generators. \\
$\sigma^{\scriptscriptstyle\vee}:= \{ r\in M_{I\!\!R}\,|\; \langle \sigma,\,r\rangle \geq 0\}
\subseteq M_{I\!\!R}$
is called the dual cone. It induces a partial order on the lattice $M$ via
$[\,a\geq b$ iff
$a-b \in \sigma^{\scriptscriptstyle\vee}\,]$.
\item
$A:= \,I\!\!\!\!C[\sigma^{\scriptscriptstyle\vee}\cap M]$ denotes the semigroup algebra. It is the ring of
regular
functions of the toric variety $Y= \mbox{Spec}\; A$ associated to $\sigma$.
\item
Denote by $E\subset \sigma^{\scriptscriptstyle\vee}\cap M$ the minimal set of generators of this
semigroup
(``the Hilbert basis''). $E$ equals the set of all primitive (i.e.
non-splittable) elements
of $\sigma^{\scriptscriptstyle\vee}\cap M$.
In particular, there is a surjection of semigroups $\pi:I\!\!N^E \longrightarrow\hspace{-1.5em}\longrightarrow
\sigma^{\scriptscriptstyle\vee}\cap M$, and
this fact translates into a closed embedding $Y\hookrightarrow \,I\!\!\!\!C^E$.\\
To make the notations
coherent with \S \ref{s2}, assume that $E=\{r^0,\dots,r^w\}$ consists of $w+1$
elements.
\vspace{2ex}
\end{itemize}
\neu{32}
To a fixed degree $R\in M$ we associate ``thick facets'' $K_i^R$ of the dual
cone
\[
K_i^R := \{r\in \sigma^{\scriptscriptstyle\vee}\cap M \, | \; \langle a^i, r \rangle <
\langle a^i, R \rangle \}\quad (i=1,\dots,N) .
\vspace{2ex}
\]
\par
{\bf Lemma:}{\em
\begin{itemize}
\item[(1)]
$\cup_i K_i^R = (\sigma^{\scriptscriptstyle\vee}\cap M) \setminus (R+ \sigma^{\scriptscriptstyle\vee})$.
\item[(2)]
For each $r,s\in K_i^R$ there exists an $\ell\in K_i^R$ such that $\ell\geq
r,s$.
Moreover, if $Y$ is smooth in codimension 2, the intersections $K^R_i\cap
K^R_j$
(for 2-faces $\langle a^i,a^j\rangle <\sigma$) have the same property.
\vspace{1ex}
\end{itemize}
}
\par
{\bf Proof:}
Part (1) is trivial; for (2) cf. (3.7) of \cite{T1}.
\hfill$\Box$\\
\par
Intersecting these sets with $E\subseteq \sigma^{\scriptscriptstyle\vee}\cap M$, we obtain the
basic objects for describing the modules $T^i_Y$:
\begin{eqnarray*}
E_i^R &:=& K_i^R \cap E = \{r\in E\, | \; \langle a^i,r \rangle <
\langle a^i, R \rangle \}\, ,\\
E_0^R &:=& \bigcup_{i=1}^N E_i^R\; ,\mbox{ and}\\
E^R_{\tau} &:=& \bigcap_{a^i\in \tau} E^R_i \; \mbox{ for faces }
\tau < \sigma\,.
\end{eqnarray*}
We obtain a complex
$L(E^R)_{\bullet}$ of free Abelian groups via
\[
L(E^R)_{-k} := \bigoplus_{\begin{array}{c}
\tau<\sigma\\ \mbox{dim}\, \tau=k \end{array}} \!\!L(E^R_{\tau})
\]
with the usual differentials.
($L(\dots)$ denotes the free Abelian group of integral, linear dependencies.)
\\
The most interesting part ($k\leq 2$) can be written explicitly as
\[
L(E^R)_{\bullet}:\quad \cdots
\rightarrow
\oplus_{\langle a^i,a^j\rangle<\sigma} L(E^R_i\cap E^R_j)
\longrightarrow
\oplus_i L(E_i^R) \longrightarrow L(E_0^R) \rightarrow 0\,.
\vspace{1ex}
\]
\par
\neu{33}
{\bf Theorem:}
{\em
\begin{itemize}
\item[(1)]
$T^1_Y(-R) = H^0 \left( L(E^R)_{\bullet}^\ast \otimes_{Z\!\!\!Z}\,I\!\!\!\!C\right)$
\item[(2)]
$T^2_Y(-R) \supseteq H^1 \left( L(E^R)_{\bullet}^\ast \otimes_{Z\!\!\!Z}\,I\!\!\!\!C\right)$
\item[(3)]
Moreover, if $Y$ is smooth in codimension 2
(i.e.\ if the 2-faces $\langle a^i, a^j \rangle < \sigma$ are spanned
by a part of a $Z\!\!\!Z$-basis of the lattice $N$), then
\[
T^2_Y(-R) = H^1 \left( L(E^R)_{\bullet}^\ast \otimes_{Z\!\!\!Z}\,I\!\!\!\!C\right)\, .
\]
\item[(4)]
Module structure: If $x^s\in A$ (i.e. $s\in \sigma^{\scriptscriptstyle\vee}\cap M$), then
$E_i^{R-s}\subseteq E_i^R$, hence $L(E^R)_{\bullet}^\ast \subseteq
L(E^{R-s})^\ast_{\bullet}$. The induced map in cohomology corresponds to the
multiplication with $x^s$ in the $A$-modules $T^1_Y$ and $T^2_Y$.
\vspace{2ex}
\end{itemize}
}
\par
The first part was shown in \cite{T1}; the formula for $T^2$ will be proved
in \S \ref{s4}. Then, the claim concerning the module structure will become
clear
automatically.\\
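As a worked illustration of (1) (our own example; it is not contained in the text above), consider the $A_1$-singularity: $N=Z\!\!\!Z^2$, $\sigma=\langle a^1,a^2\rangle$ with $a^1=(0,1)$, $a^2=(2,-1)$, hence $E=\{(1,0),(1,1),(1,2)\}$ and $A\cong\,I\!\!\!\!C[x,y,z]/(xz-y^2)$. For $R=(2,2)$ we have $\langle a^1,R\rangle=\langle a^2,R\rangle=2$, yielding

```latex
% A_1-example (ours): the sets E_i^R and T^1 in degree -R for R = (2,2).
\[
E_1^R=\{(1,0),(1,1)\}\,,\qquad E_2^R=\{(1,1),(1,2)\}\,,\qquad E_0^R=E\,.
\]
% L(E_0^R) is generated by the single relation q = (1,-2,1)
% (coming from (1,0) - 2(1,1) + (1,2) = 0), while L(E_1^R) = L(E_2^R) = 0.
% Hence, by part (1) of the Theorem,
\[
T^1_Y(-R)\;\cong\;
\left(\left. ^{\displaystyle L(E_0^R)}\!\!\right/
\!_{\displaystyle \sum_i L(E_i^R)} \right)^\ast \otimes_{Z\!\!\!Z} \,I\!\!\!\!C
\;\cong\;\,I\!\!\!\!C\,.
\]
```

This matches the well-known one-parameter deformation $xz-y^2=t$; the parameter $t$ sits in degree $\deg(xz)-R=0$.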
\par
{\bf Remark:}
The assumption made in (3) cannot be dropped: \\
Taking for $Y$ a 2-dimensional cyclic quotient
singularity given by some 2-dimensional cone $\sigma$, there are only two
different sets
$E_1^R$ and $E_2^R$ (for each $R\in M$). In particular,
$H^1 \left( L(E^R)_{\bullet}^\ast \otimes_{Z\!\!\!Z}\,I\!\!\!\!C\right)=0$.\\
\par
\neu{34}
We want to describe the isomorphisms connecting the general $T^i$-formulas of
\zitat{2}{1} and \zitat{2}{2} with the toric ones given in \zitat{3}{3}.\\
\par
$Y\hookrightarrow\,I\!\!\!\!C^{w+1}$ is given by the equations
\[
f_{ab}:= \underline{z}^a-\underline{z}^b\quad (a,b\in I\!\!N^{w+1} \mbox{ with } \pi(a)=\pi(b)
\mbox{ in } \sigma^{\scriptscriptstyle\vee}
\cap M),
\]
and it is easier to deal with this infinite set of equations
(which generates the ideal $I$ as a $\,I\!\!\!\!C$-vector
space) than to select a finite number of them in some non-canonical way.
In particular, for
$m$ of \zitat{2}{1} and \zitat{2}{2} we take
\[
m:= \{ (a,b)\in I\!\!N^{w+1}\times I\!\!N^{w+1}\,|\;\pi(a)=\pi(b)\}\,.
\]
The general $T^i$-formulas mentioned in \zitat{2}{1} and \zitat{2}{2} remain
true.\\
\par
{\bf Theorem:}
{\em
For a fixed element $R\in M$ let $\varphi: L(E)_{\,I\!\!\!\!C}\rightarrow \,I\!\!\!\!C$ induce some
element of
\[
\left(\left. ^{\displaystyle L(E_0^R)}\!\!\right/
\!_{\displaystyle \sum_i L(E_i^R)} \right)^\ast \otimes_{Z\!\!\!Z} \,I\!\!\!\!C
\cong T^1_Y(-R)\quad \mbox{(cf. Theorem \zitat{3}{3}(1)).}
\]
Then, the $A$-linear map
\begin{eqnarray*}
^{\displaystyle I}\!\!/\!_{\displaystyle I^2} &\longrightarrow& A\\
\underline{z}^a-\underline{z}^b & \mapsto & \varphi(a-b)\cdot x^{\pi(a)-R}
\end{eqnarray*}
provides the same element via the formula \zitat{2}{1}.
}\\
\par
Again, this Theorem follows from \cite{T1}, combined with the commutative
diagram of \zitat{4}{3} in the present paper (cf.\ Remark \zitat{4}{4}).\\
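As a sanity check (again a toy example of ours rather than part of the proof), let $Y$ be the $A_1$-singularity $z_0z_2-z_1^2=0$, i.e.\ $E=\{(1,0),(1,1),(1,2)\}$, and let $R=(2,2)$. Here $L(E)$ is spanned by $q=(1,-2,1)$; choose $\varphi$ with $\varphi(q)=1$. For $a=(1,0,1)$ and $b=(0,2,0)$ (so $\pi(a)=\pi(b)=(2,2)=R$) the Theorem yields

```latex
% A_1-check (ours): the generator of T^1 as an A-linear map I/I^2 -> A.
\[
\underline{z}^a-\underline{z}^b \;=\; z_0z_2-z_1^2 \;\longmapsto\;
\varphi(a-b)\cdot x^{\pi(a)-R} \;=\; \varphi(q)\cdot x^0 \;=\; 1\in A\,,
\]
% which is exactly the first-order deformation z_0 z_2 - z_1^2 = t.
```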
\par
{\bf Remark:}
A simple, but nevertheless important check shows that the map
$(\underline{z}^a-\underline{z}^b) \mapsto
\varphi(a-b)\cdot x^{\pi(a)-R}$ goes into $A$, indeed:\\
Assume $\pi(a)-R \notin \sigma^{\scriptscriptstyle\vee}$. Then, there exists an index $i$ such
that
$\langle a^i, \pi(a)-R \rangle <0$.
Denoting by ``supp\,$q$'' (for a $q\in I\!\!R^E$) the set of those $r\in E$ providing
a non-vanishing entry
$q_r$, we obtain
\[
\mbox{supp}\,(a-b) \subseteq \mbox{supp}\,a \cup \mbox{supp}\, b \subseteq
E^R_i\, ,
\]
i.e. $\varphi(a-b)=0$.\\
\par
\neu{35}
The $P$-module ${\cal R}\subseteq P^m$ is generated by relations of two different
types:
\begin{eqnarray*}
r(a,b;c) &:=& e^{a+c,\,b+c}- \underline{z}^c\, e^{a,b}\quad
(a,b,c\in I\!\!N^{w+1};\, \pi(a)=\pi(b))\quad \mbox{ and}\\
s(a,b,c) &:=& e^{b,c} - e^{a,c} + e^{a,b}\quad
(a,b,c\in I\!\!N^{w+1};\, \pi(a)=\pi(b)=\pi(c))\,.\\
&&\qquad\qquad(e^{\bullet,\bullet} \mbox{ denote the standard basis vectors of
} P^m.)
\vspace{1ex}
\end{eqnarray*}
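For instance (a toy case of ours, not taken from the text): in the $A_1$-situation $z_0z_2-z_1^2=0$, take $a=(1,0,1)$, $b=(0,2,0)$ (so $\pi(a)=\pi(b)$) and $c=(0,1,0)$. The definition gives

```latex
\[
r(a,b;c)\;=\;e^{(1,1,1),\,(0,3,0)}-\underline{z}^{(0,1,0)}\,e^{(1,0,1),\,(0,2,0)}\,.
\]
% Applying e^{alpha,beta} -> z^alpha - z^beta indeed yields
% (z_0 z_1 z_2 - z_1^3) - z_1 (z_0 z_2 - z_1^2) = 0,
% so r(a,b;c) is a relation between the equations f_{ab}.
```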
\par
{\bf Theorem:}
{\em
For a fixed element $R\in M$ let $\psi_i: L(E_i^R)_{\,I\!\!\!\!C}\rightarrow \,I\!\!\!\!C$ induce
some
element of
\[
\left( \frac{\displaystyle
\mbox{Ker}\,\left( \oplus_i L(E_i^R) \longrightarrow
L(E_0^R)\right)}{\displaystyle
\mbox{Im}\, \left( \oplus_{\langle a^i,a^j\rangle<\sigma} L(E_i^R\cap E_j^R)
\rightarrow \oplus_i L(E_i^R)\right)} \right)^\ast
\otimes_{Z\!\!\!Z}\,I\!\!\!\!C \subseteq T^2_Y(-R)
\quad \mbox{(cf. \zitat{3}{3}(2)).}
\]
Then, the $P$-linear map
\begin{eqnarray*}
^{\displaystyle {\cal R}}\!\!/\!_{\displaystyle {\cal R}_0} &\longrightarrow & A\\
r(a,b;c) & \mapsto &
\left\{ \begin{array}{ll}
\psi_i(a-b)\, x^{\pi(a+c)-R} & \mbox{for } \pi(a)\in K_i^R;\; \pi(a+c)\geq R\\
0 & \mbox{for }\pi(a)\geq R \mbox{ or } \pi(a+c)\in\bigcup_i K_i^R
\end{array}\right.\\
s(a,b,c) &\mapsto & 0
\end{eqnarray*}
is well defined, and via formula \zitat{2}{2} it induces the same
element of
$T^2_Y$.
}
\vspace{2ex}
\\
\par
For the proof we refer to \S \ref{s4}. Nevertheless, we check the {\em
well-definedness} of the $P$-linear map
$^{\displaystyle {\cal R}}\!/\!_{\displaystyle {\cal R}_0} \rightarrow A$ right away:
\begin{itemize}
\item[(i)]
If $\pi(a)$ is contained in two different sets $K_i^R$ and $K_j^R$, then the
two
fundamental generators $a^i$ and $a^j$ can be connected by a sequence
$a^{i_0},\dots,a^{i_p}$, such that
\begin{itemize}
\item[$\bullet$]
$a^{i_0}=a^i,\, a^{i_p}=a^j,$
\item[$\bullet$]
$a^{i_{v-1}}$ and $a^{i_v}$ are the edges of some 2-face of $\sigma$
($v=1,\dots,p$),
and
\item[$\bullet$]
$\pi(a)\in K^R_{i_v}$ for $v=0,\dots,p$.
\end{itemize}
Hence, $\mbox{supp}\,(a-b)\subseteq E^R_{i_{v-1}}\cap E^R_{i_v}$
($v=1,\dots,p$),
and we obtain
\[
\psi_i(a-b)=\psi_{i_1}(a-b)=\dots=\psi_{i_{p-1}}(a-b)=\psi_j(a-b)\,.
\]
\item[(ii)]
There are three types of $P$-linear relations between the generators $r(\dots)$
and
$s(\dots)$ of ${\cal R}$:
\begin{eqnarray*}
0 &=& \underline{z}^d\,r(a,b;c) -r(a,b;c+d) + r(a+c,b+c;d)\,,\\
0 &=& r(b,c;d) - r(a,c;d) + r(a,b;d) - s(a+d,b+d,c+d) + \underline{z}^d\,
s(a,b,c)\,,\\
0 &=& s(b,c,d) - s(a,c,d) + s(a,b,d) - s(a,b,c)\,.
\end{eqnarray*}
Our map respects them all.
\item[(iii)]
Finally, the typical element
$(\underline{z}^a-\underline{z}^b)e^{cd} - (\underline{z}^c-\underline{z}^d)e^{ab} \in {\cal R}_0$
equals
\[
-r(c,d;a)+r(c,d;b)+r(a,b;c)-r(a,b;d) - s(a+c,b+c,a+d) - s(a+d,b+c,b+d)\,.
\]
It will be sent to 0, too.
\vspace{2ex}
\end{itemize}
\par
\neu{36}
The cup product $T^1_Y\times T^1_Y\rightarrow T^2_Y$ respects the grading, i.e.
it splits
into pieces
\[
T^1_Y(-R)\times T^1_Y(-S) \longrightarrow T^2_Y(-R-S)\quad (R,S\in M)\,.
\]
To describe these maps in our combinatorial language, we choose some
set-theoretical
section $\Phi:M\rightarrow Z\!\!\!Z^{w+1}$ of the $Z\!\!\!Z$-linear map
\begin{eqnarray*}
\pi: Z\!\!\!Z^{w+1} &\longrightarrow& M\\
a&\mapsto&\sum_v a_v\,r^v
\end{eqnarray*}
with the additional property $\Phi(\sigma^{\scriptscriptstyle\vee}\cap M)\subseteq I\!\!N^{w+1}$.\\
\par
Let $q\in L(E)\subseteq Z\!\!\!Z^{w+1}$ be an integral relation between the generators
of
the semigroup $\sigma^{\scriptscriptstyle\vee}\cap M$. We introduce the following notations:
\begin{itemize}
\item
$q^+,q^-\in I\!\!N^{w+1}$ denote the positive and the negative part of $q$,
respectively.
(In other words: $q=q^+-q^-$ and $\sum_v q^-_v\,q^+_v=0$.)
\item
$\bar{q}:=\pi(q^+)=\sum_v q_v^+\,r^v = \pi(q^-)=\sum_v q_v^-\,r^v \in M$.
\item
If $\varphi,\psi: L(E)\rightarrow Z\!\!\!Z$ are linear maps and $R,S\in M$, then we
define
\[
t_{\varphi,\psi,R,S}(q):=
\varphi(q)\cdot \psi \left( \Phi(\bar{q}-R)+\Phi(R)-q^-\right) +
\psi(q)\cdot \varphi\left( \Phi(\bar{q}-S)+\Phi(S)-q^+\right)\,.
\vspace{2ex}
\]
\end{itemize}
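A small example (ours) may help to fix these notations: in the $A_1$-case $E=\{(1,0),(1,1),(1,2)\}$, the relation $q=(1,-2,1)\in L(E)$ decomposes as

```latex
\[
q^+=(1,0,1)\,,\qquad q^-=(0,2,0)\,,\qquad
\bar{q}=\pi(q^+)=(1,0)+(1,2)=(2,2)=2\cdot(1,1)=\pi(q^-)\,.
\]
```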
\par
{\bf Theorem:}
{\em
Assume that $Y$ is smooth in codimension 2.\\
Let $R,S\in M$, and let $\varphi,\psi: L(E)_{\,I\!\!\!\!C}\rightarrow\,I\!\!\!\!C$ be linear maps
vanishing on $\sum_i L(E_i^R)_{\,I\!\!\!\!C}$ and $\sum_i L(E_i^S)_{\,I\!\!\!\!C}$, respectively.
In particular,
they define elements $\varphi\in T^1_Y(-R),\,\psi\in T^1_Y(-S)$ (which involves
a slight abuse of notation).\\
Then, the cup product $\varphi\cup\psi\in T^2_Y(-R-S)$ is given (via
\zitat{3}{3}(3))
by the linear maps $(\varphi\cup\psi)_i: L(E_i^{R+S})_{\,I\!\!\!\!C}\rightarrow\,I\!\!\!\!C$
defined as follows:
\begin{itemize}
\item[(i)]
If $q\in L(E_i^{R+S})$ (i.e. $\langle a^i,\mbox{supp}\,q\rangle < \langle
a^i,R+S\rangle$) is an integral relation,
then there exists a decomposition $q=\sum_k q^k$ such that
\begin{itemize}
\item
$q^k\in L(E_i^{R+S})$, and moreover
\item
$\langle a^i, \bar{q}^k\rangle < \langle a^i,R+S\rangle$.
\end{itemize}
\item[(ii)]
$(\varphi\cup\psi)_i\left( q\in L(E_i^{R+S})\right):= \sum_k
t_{\varphi,\psi,R,S}(q^k)$.
\vspace{2ex}
\end{itemize}
}
\par
It is not even obvious that the map $q\mapsto \sum_k t(q^k)$
\begin{itemize}
\item
does not depend on the particular representation of $q$ as a sum of $q^k$'s
(which would immediately imply linearity on $L(E_i^{R+S})$), and
\item
yields the same result on $L(E_i^{R+S}\cap E_j^{R+S})$ for $i,j$ corresponding
to edges
$a^i, a^j$ of some 2-face of $\sigma$.
\end{itemize}
The proof of these facts (cf.\ \zitat{5}{4}) and of the entire theorem is
contained in \S \ref{s5}.\\
\par
{\bf Remark 1:}
Replacing all the terms $\Phi(\bullet)$ in the $t$'s of the previous formula
for
$(\varphi\cup\psi)_i(q)$ by arbitrary liftings from $M$ to $Z\!\!\!Z^{w+1}$,
the result in $T^2_Y(-R-S)$ will be unchanged as long as we obey the following
two
rules:
\begin{itemize}
\item[(i)]
Always use (for all $q$, $q^k$, and $i$)
the {\em same liftings} of $R$ and $S$ to $Z\!\!\!Z^{w+1}$ (in place of
$\Phi(R)$ and $\Phi(S)$, respectively).
\item[(ii)]
Elements of $\sigma^{\scriptscriptstyle\vee}\cap M$ always have to be lifted to $I\!\!N^{w+1}$.
\vspace{2ex}
\end{itemize}
{\bf Proof:}
Replacing $\Phi(R)$ by $\Phi(R)+d$ ($d\in L(E)$) at each occurrence changes all
maps $(\varphi\cup\psi)_i$ by the summand $\psi(d)\cdot\varphi(\bullet)$.
However,
this additional linear map comes from $L(E)^\ast$, hence it is trivial on
$\mbox{Ker}\left(\oplus_iL(E_i^{R+S})\rightarrow L(E_0^{R+S})\subseteq
L(E)\right)$.\\
\par
Let us look at the terms $\Phi(\bar{q}-R)$ in $t(q)$ now:
Unless $\bar{q}\geq R$, the factor $\varphi(q)$ vanishes (cf. Remark
\zitat{3}{4}). On
the other hand, the expression $t(q)$ is never used for those relations $q$
satisfying
$\bar{q}\geq R+S$ (cf. conditions for the $q^k$'s). Hence, we may assume that
\[
(\bar{q}-R)\geq 0\; \mbox{ and, moreover, } (\bar{q}-R)\in \bigcup_i K_i^S\,.
\]
Now, each two liftings of $(\bar{q}-R)$ to $I\!\!N^{w+1}$ differ by an element of
$\mbox{Ker}\,\psi$ only (apply the method of Remark \zitat{3}{4} again), in
particular,
they cause the same result for $t(q)$.
\hfill$\Box$\\
\par
{\bf Remark 2:}
In the special case of $R\geq S\geq 0$ we can choose liftings $\Phi(R)\geq
\Phi(S)
\geq 0$ in $I\!\!N^{w+1}$. Then,
there exists an easier description for $t(q)$:
\begin{itemize}
\item[(i)]
Unless $\bar{q}\geq R$, we have $t(q)=0$.
\item[(ii)]
In case of $\bar{q}\geq R$ we may assume that $q^+\geq\Phi(R)$ is true
in $I\!\!N^{w+1}$.
(General $q$'s are differences of such elements.) Then, $t$ can be computed as
the
product $t(q)=\varphi(q)\,\psi(q)$.
\vspace{2ex}
\end{itemize}
\par
{\bf Proof:}
(i) As used many times, the property $\bar{q}\in\bigcup_iK_i^R$ implies
$\varphi(q)=0$.
Now, we can distinguish between two cases:\\
{\em Case 1: $\bar{q}\in\bigcup_iK_i^S$.} We obtain $\psi(q)=0$, in particular,
both
summands of $t(q)$ vanish.\\
{\em Case 2: $\bar{q}\geq S$.} Then, $\bar{q}-S,\,S\in \sigma^{\scriptscriptstyle\vee}\cap M$,
and $\Phi$ lifts
these elements to $I\!\!N^{w+1}$. Now, the condition $\bar{q}\in\bigcup_iK_i^R$
implies
that $\varphi\left( \Phi(\bar{q}-S)+\Phi(S)-q^+\right)=0$.\\
\par
(ii)
We can choose $\Phi(\bar{q}-R):=q^+-\Phi(R)$ and
$\Phi(\bar{q}-S):=q^+-\Phi(S)$. Then,
the claim follows immediately.
\hfill$\Box$\\
\par
\sect{Proof of the $T^2$-formula}\label{s4}
\neu{41}
We will use the sheaf $\Omega^1_Y=\Omega^1_{A|\,I\!\!\!\!C}$ of K\"ahler
differentials for computing the modules $T^i_Y$. The maps
\[
\alpha_i: \mbox{Ext}^i_A\left(
\;^{\displaystyle\Omega_Y^1}\!\!\left/\!_{\displaystyle
\mbox{tors}\,(\Omega_Y^1)}\right. ,
\, A \right)
\hookrightarrow
\mbox{Ext}^i_A\left( \Omega^1_Y,\,A\right) \cong T^i_Y\quad
(i=1,2)
\]
are injective. Moreover, they are isomorphisms for
\begin{itemize}
\item
$i=1$, since $Y$ is normal, and for
\item
$i=2$, if $Y$ is smooth in codimension 2.
\vspace{2ex}
\end{itemize}
\par
\neu{42}
As in \cite{T1}, we build a special $A$-free resolution (one step further now)
\[
{\cal E}\stackrel{d_E}{\longrightarrow}{\cal D}\stackrel{d_D}{\longrightarrow}
{\cal C}\stackrel{d_C}{\longrightarrow}{\cal B} \stackrel{d_B}{\longrightarrow}
\;^{\displaystyle\Omega_Y^1}\!\!\left/\!_{\displaystyle
\mbox{tors}\,(\Omega_Y^1)}\right.
\rightarrow 0\,.
\vspace{2ex}
\]
With $L^2(E):=L(L(E))$, $L^3(E):=L(L^2(E))$, and
\[
\mbox{supp}^2\xi:= \bigcup_{q\in supp\,\xi} \mbox{supp}\,q\quad (\xi\in
L^2(E)),\quad
\mbox{supp}^3\omega:= \bigcup_{\xi\in supp\,\omega}\mbox{supp}^2\xi\quad
(\omega\in L^3(E)),
\]
the $A$-modules involved in this resolution are defined as follows:
\[
\begin{array}{rcl}
{\cal B}&:=&\oplus_{r\in E} \,A\cdot B(r),\qquad
{\cal C}\,:=\,\oplus_{\!\!\!\!\!\begin{array}[b]{c}\scriptstyle q\in L(E) \vspace{-1ex}\\
\scriptstyle\ell\geq supp\, q\end{array}}
\!\!\!A\cdot C(q;\ell),\\
{\cal D}&:=&\left(
\oplus_{\!\!\!\!\!\!\!\begin{array}{c}\scriptstyle q\in L(E)\vspace{-1ex}\\
\scriptstyle\eta\geq\ell\geq supp\, q\end{array}}
\!\!\!\!A\cdot D(q;\ell,\eta) \right)
\oplus \left(
\oplus_{\!\!\!\!\begin{array}{c}\scriptstyle\xi\in L^2(E)\vspace{-1ex}\\
\scriptstyle\eta\geq supp^2 \xi\end{array}}
\!\!\!A\cdot D(\xi;\eta) \right),\;\mbox{ and}\\
{\cal E}&:=&
\begin{array}[t]{r} \left(
\oplus_{\!\!\!\!\!\!\!\!\begin{array}{c}\scriptstyle q\in L(E)\vspace{-1ex}\\
\scriptstyle\mu\geq\eta\geq\ell\geq supp\, q\end{array}}
\!\!\!\!\!A\cdot E(q;\ell,\eta,\mu) \right)
\oplus \left(
\oplus_{\!\!\!\!\!\!\!\begin{array}{c}\scriptstyle\xi\in L^2(E)\vspace{-1ex}\\
\scriptstyle\mu\geq\eta\geq supp^2 \xi\end{array}}
\!\!\!A\cdot E(\xi;\eta,\mu) \right) \oplus \qquad \\
\oplus \left(
\oplus_{\!\!\!\!\begin{array}{c}\scriptstyle\omega\in L^3(E)\vspace{-1ex}\\
\scriptstyle\mu\geq supp^3 \omega\end{array}}
\!\!\!\! A\cdot E(\omega;\mu)\right)
\end{array}
\end{array}
\]
($B,C,D,$ and $E$ are just symbols).
The differentials equal
\[
\begin{array}{cccl}
d_B: &B(r)&\mapsto &d\,x^r\vspace{1ex}\\
d_C: &C(q;\ell)&\mapsto &\sum_{r\in E} q_r\,x^{\ell-r}\cdot B(r)\vspace{1ex}\\
d_D: &D(q;\ell,\eta)&\mapsto &C(q;\eta) - x^{\eta-\ell}\cdot C(q;\ell)\\
d_D: &D(\xi;\eta)&\mapsto& \sum_{q\in L(E)}\xi_q\cdot C(q;\eta)\vspace{1ex}\\
d_E: &E(q;\ell,\eta,\mu)&\mapsto& D(q;\eta,\mu)-D(q;\ell,\mu)+
x^{\mu-\eta}\cdot D(q;\ell,\eta) \\
d_E: &E(\xi;\eta,\mu)&\mapsto &D(\xi;\mu) - x^{\mu-\eta}\cdot D(\xi;\eta) -
\sum_{q\in L(E)} \xi_q\cdot D(q;\eta,\mu)\\
d_E: &E(\omega;\mu)&\mapsto &\sum_{\xi\in L^2(E)} \omega_{\xi}\cdot
D(\xi;\mu)\, .
\vspace{2ex}
\end{array}
\]
\par
Looking at these maps, we see that the complex is $M$-graded: The degree of
each of
the elements $B$, $C$, $D$, or $E$ can be obtained by taking the last of its
parameters
($r$, $\ell$, $\eta$, or $\mu$, respectively).\\
\par
{\bf Remark:} If one preferred a resolution with free $A$-modules of finite rank
(as it was
used in
\cite{T1}), the following replacements would be necessary:
\begin{itemize}
\item[(i)]
Define successively $F\subseteq L(E)$, $G\subseteq L(F) \subseteq L^2(E)$, and
$H\subseteq L(G)\subseteq L^2(F) \subseteq L^3(E)$ as the finite
sets of normalized, minimal relations between elements of $E$, $F$, or
$G$, respectively. Then, use them instead of $L^i(E)$ ($i=1,2,3$).
\item[(ii)]
Let $\ell$, $\eta$, and $\mu$ run through finite generating
(under $(\sigma^{\scriptscriptstyle\vee}\cap M)$-action)
systems of all possible elements meeting the desired inequalities.
\end{itemize}
The disadvantages of this treatment are a more complicated description of
the resolution, on the one hand, and difficulties in obtaining the
commutative diagram of \zitat{4}{3},
on the other hand.\\
\par
\neu{43}
Combining the two exact sequences
\[
^{\displaystyle {\cal R}}\!/\!_{\displaystyle I\,{\cal R}} \longrightarrow
A^m \longrightarrow
^{\displaystyle I}\!\!/\!_{\displaystyle I^2}\rightarrow 0\quad
\mbox{and}\quad
^{\displaystyle I}\!\!/\!_{\displaystyle I^2}\longrightarrow
\Omega^1_{\,I\!\!\!\!C^{w+1}}\otimes A \longrightarrow
\Omega_Y^1 \rightarrow 0\,,
\]
we get a complex (not exact at $A^m$) involving $\Omega_Y^1$. In the
following commutative diagram we compare this complex with the previous
resolution of
$^{\displaystyle\Omega_Y^1}\!\!\left/\!_{\displaystyle
\mbox{tors}\,(\Omega_Y^1)}\right.$:
\vspace{-5ex}\\
\[
\dgARROWLENGTH=0.8em
\begin{diagram}
\node[5]{^{\displaystyle I}\!\!/\!_{\displaystyle I^2}}
\arrow{se,t}{d}\\
\node[2]{^{\displaystyle {\cal R}}\!/\!_{\displaystyle I\,{\cal R}}}
\arrow[2]{e}
\arrow{se,t}{p_D}
\node[2]{A^m}
\arrow{ne}
\arrow[2]{e}
\arrow[2]{s,l}{p_C}
\node[2]{\Omega_{\,I\!\!\!\!C^{w+1}}\!\otimes \!A}
\arrow{e}
\arrow[2]{s,lr}{p_B}{\sim}
\node{\Omega_Y}
\arrow[2]{s}
\arrow{e}
\node{0}\\
\node[3]{\mbox{Im}\,d_D}
\arrow{se}\\
\node{{\cal E}}
\arrow{e,t}{d_E}
\node{{\cal D}}
\arrow[2]{e,t}{d_D}
\arrow{ne}
\node[2]{{\cal C}}
\arrow[2]{e,t}{d_C}
\node[2]{{\cal B}}
\arrow{e}
\node{^{\displaystyle\Omega_Y^1}\!\!\!\left/\!\!_{\displaystyle
\mbox{tors}\,(\Omega_Y^1)}\right.}
\arrow{e}
\node{0}
\end{diagram}
\]
\par
Let us explain the three labeled vertical maps:
\begin{itemize}
\item[(B)]
$p_B: dz_r \mapsto B(r)$ is an isomorphism between two free $A$-modules of rank
$w+1$.
\item[(C)]
$p_C: e^{ab} \mapsto C(a-b;\pi(a))$. In particular, the image of this map is
spanned by
those $C(q,\ell)$ meeting $\ell\geq \bar{q}$ (which is stronger than just
$\ell\geq\mbox{supp}\,q$).
\item[(D)]
Finally, $p_D$ arises as pull back of $p_C$ to $^{\displaystyle {\cal R}}\!/\!_{\displaystyle I\,{\cal R}}$.
It can
be described by
$r(a,b;c)\mapsto D(a-b; \pi(a),\pi(a+c))$ and $s(a,b,c)\mapsto D(\xi;\pi(a))$
($\xi$ denotes
the relation $\xi=[(b-c)-(a-c)+(a-b)=0]$).
\vspace{2ex}
\end{itemize}
\par
{\bf Remark:}
Starting with the typical ${\cal R}_0$-element mentioned in \zitat{3}{5}(iii), the
previous
description of the map $p_D$ yields 0 (even in ${\cal D}$).\\
\par
\neu{44}
By \zitat{4}{1} we get the $A$-modules $T^i_Y$ by computing the cohomology
of the complex dual to that of \zitat{4}{2}.\\
\par
As in \cite{T1}, denote by $G$ one of the capital letters $B$, $C$, $D$, or
$E$. Then, an element $\psi$ of the dual free module $(\bigoplus\limits_G
\,I\!\!\!\!C[\check{\sigma}\cap M]\cdot G)^\ast$ can be described by giving elements
$g(x)\in\,I\!\!\!\!C[\check{\sigma}\cap M]$ to be the images of the generators $G$
($g$ stands for $b$, $c$, $d$, or $e$, respectively).\\
\par
For $\psi$ to be homogeneous of degree $-R\in M$, $g(x)$ has to be
a monomial of degree
\[
\deg g(x)=-R+\deg G.
\]
In particular, the corresponding complex coefficient $g\in \,I\!\!\!\!C$ (i.e.
$g(x)=g\cdot x^{-R+\deg G}$) admits the property that
\[
g\neq 0\quad\mbox{implies}\quad -R+\deg G\ge 0\quad (\mbox{i.e.}\;
-R+\deg G\in\check{\sigma}).
\vspace{2ex}
\]
\par
{\bf Remark:}
Using these notations,
Theorem \zitat{3}{3}(1) was proved in \cite{T1} by showing that
\begin{eqnarray*}
\left(\left. ^{\displaystyle L(E_0^R)}\!\!\right/
\!_{\displaystyle \sum_i L(E_i^R)} \right)^\ast \otimes_{Z\!\!\!Z} \,I\!\!\!\!C
&\longrightarrow&
^{\displaystyle \mbox{Ker}({\cal C}^\ast_{-R}\rightarrow {\cal D}^\ast_{-R})}\!\!\left/
\!_{\displaystyle \mbox{Im}({\cal B}^\ast_{-R}\rightarrow{\cal C}^\ast_{-R})}\right.\\
\varphi &\mapsto&
[\dots,\, c(q;\ell):=\varphi(q),\dots]
\end{eqnarray*}
is an isomorphism of vector spaces.\\
Moreover, looking at the diagram of
\zitat{4}{3}, $e^{ab}\in A^m$ maps to both $\underline{z}^a-\underline{z}^b\in ^{\displaystyle
I}\!\!/\!
_{\displaystyle I^2}$ and $C(a-b;\pi(a))\in {\cal C}$. In particular, we can verify Theorem
\zitat{3}{4}:
Each $\varphi:L(E)_{\,I\!\!\!\!C} \rightarrow\,I\!\!\!\!C$, on the one hand,
and its associated $A$-linear map
\begin{eqnarray*}
^{\displaystyle I}\!\!/\!_{\displaystyle I^2} &\longrightarrow& A\\
\underline{z}^a-\underline{z}^b & \mapsto & \varphi(a-b)\cdot x^{\pi(a)-R},
\end{eqnarray*}
on the other hand, induce the same element of $T^1_Y(-R)$. \\
\par
\neu{45}
For computing $T_Y^2(-R)$,
the interesting part of the dualized complex
$\zitat{4}{2}^\ast$ in degree $-R$ equals the complex
of $\,I\!\!\!\!C$-vector spaces
\[
{\cal C}^{\ast}_{-R} \stackrel{d_D^{\ast}}{\longrightarrow} {\cal D}^{\ast}_{-R}
\stackrel{d_E^{\ast}}{\longrightarrow} {\cal E}^{\ast}_{-R}
\]
with coordinates $\underline{c}$, $\underline{d}$, and $\underline{e}$,
respectively:
\begin{eqnarray*}
{\cal C}^{\ast}_{-R} &=&
\{\underline{c(q;\ell)}\, |\;
c(q;\ell)=0 \;\mbox{for}\;\ell-R\notin\check{\sigma}\}\\
{\cal D}^{\ast}_{-R} &=&
\{[\underline{d(q;\ell,\eta)},\underline{d(\xi;\eta)}]\;|\;
\begin{array}[t]{ccccl}
d(q;\ell,\eta)&=&0& \mbox{for} &\eta-R\notin\check{\sigma}\mbox{, and}\\
d(\xi;\eta)&=&0& \mbox{for} &\eta-R\notin\check{\sigma} \}
\end{array}\\
{\cal E}^{\ast}_{-R} &=&
\{ [\underline{e(q;\ell,\eta,\mu)},
\underline{e(\xi;\eta,\mu)},
\underline{e(\omega ;\mu)}]\,|\;
\mbox{each coordinate vanishes for } \mu - R \notin \check{\sigma} \}.
\vspace{1ex}
\end{eqnarray*}
\par
The differentials
$d_D^{\ast}$ and $d_E^{\ast}$ can be described by
\[
\begin{array}{lcll}
d(q;\ell,\eta)&=&c(q;\eta)-c(q;\ell),&\\
d(\xi;\eta)&=&\sum\limits_{q\in L(E)}\xi_q\cdot c(q;\eta),&
\mbox{and}\\
e(q;\ell,\eta,\mu) &=&
d(q;\eta,\mu) - d(q;\ell,\mu) + d(q;\ell,\eta),\\
e(\xi;\eta,\mu) &=&
d(\xi;\mu) - d(\xi;\eta) - \sum_{q\in L(E)} \xi_q\cdot
d(q;\eta,\mu),\\
e(\omega ;\mu) &=&
\sum\limits_{\xi\in L^2(E)} \omega_{\xi}\cdot d(\xi;\mu).
\end{array}
\vspace{1ex}
\]
\par
Denote $V:= \mbox{Ker}\,d^{\ast}_E \subseteq {\cal D}_{-R}^{\ast}\,$ and
$\,W:= \mbox{Im}\,d_D^{\ast}\subseteq V$, i.e.
\begin{eqnarray*}
V&=& \{ [\underline{d(q;\ell,\eta)};\,\underline{d(\xi;\eta)}]\,|\;
\begin{array}[t]{l}
q\in L(E), \;\eta\geq\ell\geq\mbox{supp}\,q\mbox{ in }M;\\
\xi\in L^2(E), \;\eta\geq\mbox{supp}^2\xi;
\vspace{0.5ex}\\
d(q;\ell,\eta) = d(\xi;\eta) = 0 \mbox{ for } \eta -R \notin \check{\sigma},\\
d(q;\ell,\mu) = d(q;\ell,\eta) + d(q;\eta,\mu) \; (\mu\geq\eta\geq\ell\geq
\mbox{supp}\, q),\\
d(\xi;\mu)= d(\xi;\eta) + \sum_q \xi_q \cdot d(q;\eta,\mu)\;
(\mu\geq\eta\geq \mbox{supp}^2 \xi),\\
\sum_{\xi\in L^2(E)}\omega_{\xi} \,d(\xi;\mu) =0 \mbox{ for }\omega \in L^3(E)
\mbox{ with } \mu \geq \mbox{supp}^3\omega\,\},
\end{array}\\
W&=& \{ [\underline{d(q;\ell,\eta)};\,\underline{d(\xi;\eta)}]\,|\;
\mbox{there are $c(q;\ell)$'s with}
\begin{array}[t]{l}
c(q;\ell)=0 \mbox{ for } \ell-R\notin\check{\sigma},\\
d(q;\ell,\eta) = c(q;\eta)-c(q;\ell),\\
d(\xi;\eta)= \sum_q\xi_q\cdot c(q;\eta)\,\}.
\end{array}
\end{eqnarray*}
By construction, we obtain
\[
V\!\left/\!_{\displaystyle W}\right. =
\mbox{Ext}^2_A\left(
\;^{\displaystyle\Omega_Y^1}\!\!\left/\!_{\displaystyle
\mbox{tors}\,(\Omega_Y^1)}\right.
, \, A \right)(-R)
\subseteq T^2_Y(-R)
\]
(which is an isomorphism, if $Y$ is smooth in codimension 2).\\
\par
\neu{46}
Let us define the much easier vector spaces
\begin{eqnarray*}
V_1&:=& \{[\underline{x_i(q)}_{(q\in L(E_i^R))}]\,|\;
\begin{array}[t]{l}
x_i(q)=x_j(q) \mbox{ for }
\begin{array}[t]{l}
\bullet\,
\langle a^i, a^j \rangle < \sigma \mbox{ is a 2-face and}\\
\bullet\,
q\in L(E_i^R\cap E_j^R)\,,
\end{array}\\
\xi\in L^2(E_i^R) \mbox{ implies } \sum_q\xi_q\cdot
x_i(q)=0
\,\}\;\mbox{ and}
\end{array}
\vspace{1ex}
\\
W_1&:=& \{[\underline{x(q)}_{(q\in \cup_i L(E_i^R))}]
\,|\;
\begin{array}[t]{l}
\xi\in L(\bigcup_i L(E_i^R)) \mbox{ implies } \sum_q\xi_q\cdot
x(q)=0 \,\}.
\vspace{2ex}
\end{array}
\end{eqnarray*}
\par
{\bf Lemma:}
{\em
The linear map $V_1\rightarrow V$ defined by
\begin{eqnarray*}
d(q;\ell,\eta) &:=& \left\{
\begin{array}{lll}
x_i(q) &\mbox{ for }& \ell\in K_i^R,\;\; \eta \geq R\\
0 &\mbox{ for }& \ell \geq R \;\mbox{ or } \;\eta \in
\bigcup_i K_i^R\,;
\end{array} \right. \\
d(\xi;\eta) &:=&
0\,
\end{eqnarray*}
induces an injective map
\[
V_1\!\left/\!_{\displaystyle W_1}\right.
\hookrightarrow
V\!\left/\!_{\displaystyle W}\right.\,.
\]
If $Y$ is smooth in codimension 2, it will be an isomorphism.
}
\\
\par
{\bf Proof:}
1) The map $V_1 \rightarrow V$ is {\em well defined}:
On the one hand, an argument as used in \zitat{3}{5}(i) shows that $\ell\in
K_i^R\cap K_j^R$ would imply $x_i(q)=x_j(q)$. On the other hand,
the image of
$[x_i(q)]_{q\in L(E_i^R)}$ meets all conditions in the definition of $V$.
\vspace{1ex}
\\
2) $W_1$ maps to $W$ (take $c(q;\ell):=x(q)$ for $\ell\geq R$ and
$c(q;\ell):=0$ otherwise).
\vspace{1ex}
\\
3) The map between the two factor spaces is {\em injective}: Assume for
$[x_i(q)]_{q\in L(E_i^R)}$ that there exist elements $c(q,\ell)$, such that
\begin{eqnarray*}
c(q;\ell) &=& 0 \; \mbox{ for } \ell \in \bigcup_i K^R_i\, ,\\
x_i(q) &=& c(q;\eta) - c(q;\ell) \; \mbox{ for }
\eta \geq \ell,\, \ell\in K_i^R,\, \eta\geq R\,,\\
0 &=&
c(q;\eta) - c(q;\ell)\; \mbox{ for } \eta\geq \ell \mbox{ and }
[\ell\geq R\mbox{ or } \eta\in \bigcup_iK_i^R]\, , \mbox{ and}\\
0 &=&
\sum_q \xi_q \cdot c(q;\eta) \; \mbox{ for } \eta \geq
\mbox{supp}^2\xi\, .
\end{eqnarray*}
In particular, the $x_i(q)$ do not depend on $i$, and these elements
meet the property
\[
\sum_q \xi_q \cdot x_{\bullet}(q) = 0 \; \mbox{ for } \xi\in L(\bigcup_i
L(E_i^R)).
\]
4) If $Y$ is smooth in codimension 2, the map is {\em surjective}:\\
Given an element $[d(q;\ell,\eta),\,d(\xi;\eta)]\in V$, there exist
complex numbers $c(q;\eta)$ such that:
\begin{itemize}
\item[(i)]
$d(\xi;\eta) = \sum_q\xi_q\cdot c(q;\eta)\,$ ,
\item[(ii)]
$c(q;\eta)=0 \mbox{ for } \eta\notin R+\sigma^{\scriptscriptstyle\vee}\,
(\mbox{i.e. }\eta\in \bigcup_iK_i^R)\,$.
\end{itemize}
(Do this separately for each $\eta$ and distinguish between the cases
$\eta\in R +\sigma^{\scriptscriptstyle\vee}$ and $\eta\notin R+\sigma^{\scriptscriptstyle\vee}$.)\\
In particular, $[c(q;\eta) - c(q;\ell),\, d(\xi;\eta)]\in W$. Hence, we
have seen that we may assume $d(\xi;\eta)=0$.\\
\par
Let us choose some sufficiently high degree $\ell^\ast\geq E$.
Then,
\[
x_i(q):= d(q;\ell,\eta) - d(q;\ell^\ast\!,\eta)
\]
(with $\ell\in K_i^R$, $\ell\geq \mbox{supp}\,q$
(cf.\ Lemma \zitat{3}{2}(2)), and $\eta\geq\ell,\ell^\ast\!,R$)
defines some preimage:
\begin{itemize}
\item[(i)]
It is independent of the choice of $\eta$: Using a different $\eta'$ changes
both terms by $d(q;\eta,\eta')$, and these contributions cancel.
\item[(ii)]
It is independent of $\ell\in K_i^R$: Choosing another $\ell'\in K_i^R$
with $\ell'\geq\ell$ would add the summand $d(q;\ell,\ell')$, which is 0;
for the general case use Lemma \zitat{3}{2}(2).
\item[(iii)]
If $\langle a^i,a^j\rangle < \sigma$ is a 2-face with $\mbox{supp}\,q
\subseteq L(E^R_i)\cap L(E_j^R)$, then by Lemma \zitat{3}{2}(2) we can choose
an
$\ell\in K_i^R\cap K_j^R$ achieving $x_i(q)=x_j(q)$.
\item[(iv)]
For $\xi\in L^2(E_i^R)$ we have
\[
\sum_q \xi_q\cdot d(q;\ell,\eta) = \sum_q \xi_q\cdot d(q;\ell^\ast\!,\eta) =
0\,,
\]
and this gives the corresponding relation for the $x_i(q)$'s.
\item[(v)]
Finally, if we apply to
$[\underline{x_i(q)}]\in V_1$
the linear map $V_1\rightarrow V$, the result differs from
$[d(q;\ell,\eta),0]\in V$ by
the $W$-element built from
\[
c(q;\ell) := \left\{ \begin{array}{ll}
d(q;\ell,\eta) - d(q;\ell^\ast\!,\eta) & \mbox{ if } \ell\geq R \\
0 & \mbox{ otherwise }.
\end{array} \right.
\vspace{-2ex}
\]
\end{itemize}
\hfill$\Box$\\
\par
\neu{47}
Now, it is easy to complete the proofs for Theorem \zitat{3}{3} (part 2 and 3)
and
Theorem \zitat{3}{5}:\\
\par
First, for a tuple $[\underline{x_i(q)}]_{q\in L(E_i^R)}$, the condition
\[
\xi\in L^2(E_i^R) \mbox{ implies } \sum_q\xi_q\cdot x_i(q)=0
\]
is equivalent to the fact that the components $x_i(q)$ are induced by elements
$x_i\in L(E_i^R)_{\,I\!\!\!\!C}^\ast$.\\
The other condition for elements of $V_1$ just says that for 2-faces
$\langle a^i,a^j\rangle<\sigma$ there is $x_i=x_j$ on
$L(E_i^R\cap E_j^R)_{\,I\!\!\!\!C}=L(E_i^R)_{\,I\!\!\!\!C}\cap L(E_j^R)_{\,I\!\!\!\!C}$. In particular, we
obtain
\[
V_1= \mbox{Ker}\left( \oplus_i L(E_i^R)_{\,I\!\!\!\!C}^\ast \rightarrow
\oplus_{\langle a^i,a^j\rangle <\sigma} L(E_i^R\cap E_j^R)_{\,I\!\!\!\!C}^\ast \right)\,.
\]
In the same way we get
\[
W_1 = \left( \sum_i L(E^R_i)_{\,I\!\!\!\!C}\right)^\ast\,,
\]
and our $T^2$-formula is proven.\\
\par
Finally, if $\psi_i:L(E_i^R)_{\,I\!\!\!\!C}\rightarrow \,I\!\!\!\!C$ are linear maps defining an
element of
$V_1$, they induce the following $A$-linear map on ${\cal D}$ (even on
$\mbox{Im}\,d_D$):
\begin{eqnarray*}
D(q;\ell,\eta) &\mapsto& \left\{
\begin{array}{lll}
\psi_i(q)\cdot x^{\eta-R} &\mbox{ for }& \ell\in K_i^R,\;\; \eta \geq R\\
0 &\mbox{ for }& \ell \geq R \;\mbox{ or } \;\eta \in
\bigcup_i K_i^R
\end{array} \right.\\
D(\xi;\eta) &\mapsto& 0\,.
\end{eqnarray*}
Now, looking at the diagram of \zitat{4}{3}, this translates exactly into the
claim of
Theorem \zitat{3}{5}.\\
\par
\sect{Proof of the cup product formula}\label{s5}
\neu{51}
Fix an $R\in M$, and let $\varphi\in L(E)^\ast_{\,I\!\!\!\!C}$ induce some element
(also denoted by $\varphi$)
of $T^1_Y(-R)$. Using the notations of \zitat{2}{3}, \zitat{3}{4},
and \zitat{3}{6} we can take
\[
\tilde{\varphi}(f_{\alpha\beta}):=
\varphi(\alpha-\beta)\cdot \underline{z}^{\Phi(\pi(\alpha)-R)}
\]
for the auxiliary $P$-elements needed to compute the $\lambda(\varphi)$'s
(cf. Theorem \zitat{3}{4}).\\
\par
Now, we have to distinguish between the two types of relations
generating the $P$-module ${\cal R}\subseteq P^m$:
\begin{itemize}
\item[(r)]
Regarding the relation $r(a,b;c)$ we obtain
\begin{eqnarray*}
\sum_{(\alpha,\beta)\in m} r(a,b;c)_{\alpha\beta}\cdot
\tilde{\varphi}(f_{\alpha\beta}) &=&
\tilde{\varphi}(f_{a+c,b+c}) - \underline{z}^c\,\tilde{\varphi}(f_{ab})
\\
&=&
\varphi(a-b)\cdot \left(
\underline{z}^{\Phi(\pi(a+c)-R)} - \underline{z}^{c+\Phi(\pi(a)-R)} \right)
\\
&=&
\varphi(a-b)\cdot
f_{\Phi(\pi(a+c)-R),\,c+\Phi(\pi(a)-R)}\,.
\end{eqnarray*}
In particular,
\[
\lambda_{\alpha\beta}^{r(a,b;c)}(\varphi) =
\left\{\begin{array}{ll}
\varphi(a-b) & \mbox{ for } [\alpha,\beta] = [c+\Phi(\pi(a)-R),\,
\Phi(\pi(a+c)-R)]\\
0 & \mbox{ otherwise}\,.
\end{array} \right.
\]
\item[(s)]
The corresponding result for the relation $s(a,b,c)$ is much nicer:
\begin{eqnarray*}
\sum_{(\alpha,\beta)\in m} s(a,b,c)_{\alpha\beta}\cdot
\tilde{\varphi}(f_{\alpha\beta}) &=&
\tilde{\varphi}(f_{bc})-
\tilde{\varphi}(f_{ac})+
\tilde{\varphi}(f_{ab})\\
&=&
[\varphi(b-c)-\varphi(a-c)+\varphi(a-b)]\cdot
\underline{z}^{\Phi(\pi(a)-R)}\\
&=& 0\,.
\end{eqnarray*}
In particular, $\lambda^{s(a,b,c)}(\varphi)=0$.
\vspace{2ex}
\end{itemize}
\par
\neu{52}
Now, let $R,S,\varphi$, and $\psi$ be as in the assumptions of Theorem
\zitat{3}{6}. Using formula \zitat{2}{3}(iii), our previous computations
yield $(\varphi\cup\psi)(s(a,b,c))=0$ and
\[
\begin{array}{l}
(\varphi\cup\psi)(r(a,b;c))=
\sum_{\alpha,\beta}\lambda^{r(a,b;c)}_{\alpha\beta}(\varphi)
\cdot \psi(f_{\alpha\beta}) +
\sum_{\alpha,\beta} \varphi(f_{\alpha\beta})\cdot
\lambda^{r(a,b;c)}_{\alpha\beta}(\psi)
\vspace{2ex}\\
\qquad=
\begin{array}[t]{r}
\varphi(a-b)\cdot \psi\left(
c+\Phi(\pi(a)-R)-\Phi(\pi(a+c)-R)\right)\cdot
x^{\pi(c+\Phi(\pi(a)-R))-S} +\qquad\\
+\psi(a-b)\cdot \varphi\left(
c+\Phi(\pi(a)-S)-\Phi(\pi(a+c)-S)\right)\cdot
x^{\pi(c+\Phi(\pi(a)-S))-R}
\end{array}
\vspace{2ex}\\
\qquad=
\begin{array}[t]{r}
\left[ \varphi(a-b)\cdot \psi\left(
c+\Phi(\pi(a)-R)-\Phi(\pi(a+c)-R)\right) +\right.
\qquad\qquad\qquad\qquad\qquad\\
\left. + \psi(a-b)\cdot \varphi\left(
c+\Phi(\pi(a)-S)-\Phi(\pi(a+c)-S)\right)
\right]
\cdot x^{\pi(a+c)-R-S}\,.
\vspace{1ex}
\end{array}
\end{array}
\]
\par
{\bf Remark:}
Unless $\pi(a+c)\geq R+S$, both summands in the brackets vanish. For
instance,
on the one hand, $\pi(a)\in\bigcup_iK_i^R$ would cause $\varphi(a-b)=0$, and,
on the
other hand, $\pi(a)-R\geq 0$ and $\pi(c+\Phi(\pi(a)-R))\in \bigcup_iK_i^S$
imply $\psi(c+\Phi(\pi(a)-R)-\Phi(\pi(a+c)-R))=0$.\\
\par
To apply Theorem \zitat{3}{5} we would like to remove the argument $c$ from
the big coefficient. This will be done by adding a suitable
coboundary $T$.\\
\par
\neu{53}
Let us start with defining for $(\alpha,\beta)\in m$
\[
t(\alpha,\beta):= \begin{array}[t]{r}
\varphi(\alpha-\beta)\cdot
\psi \left( \Phi(\pi(\alpha)-R)+\Phi(R)-\beta\right)+\qquad\qquad\qquad\\
+ \psi(\alpha-\beta) \cdot
\varphi \left( \Phi(\pi(\alpha)-S)+\Phi(S)-\alpha\right)\,.
\end{array}
\]
(This expression is related to $t_{\varphi,\psi,R,S}$ from
\zitat{3}{6} by $t(q)=t(q^+,q^-)$.) \\
\par
{\bf Lemma:}{\em
Let $\alpha,\beta,\gamma\in I\!\!N^E$ with $\pi(\alpha)=\pi(\beta)=\pi(\gamma)$.
\begin{itemize}
\item[(1)]
$t(\alpha,\beta)=t(\alpha-\beta)$
as long as $\pi(\alpha)\in\bigcup_i K_i^{R+S}$.
\item[(2)]
$t(\beta,\gamma)-t(\alpha,\gamma)+t(\alpha,\beta)=0$.
\vspace{2ex}
\end{itemize}
}
\par
{\bf Proof:}
(1) It is enough to show that $t(\alpha+r,\beta+r)=t(\alpha,\beta)$ for
$r\in I\!\!N^E$, $\pi(\alpha+r)\in\bigcup_iK_i^{R+S}$. But the difference of these
two terms has exactly the shape of the coefficient of $x^{\pi(a+c)-R-S}$ in
\zitat{5}{2}. In particular, the argument given in the previous remark
applies again.\\
\par
(2) By extending $\varphi$ and $\psi$ to linear maps $\,I\!\!\!\!C^E\rightarrow\,I\!\!\!\!C$,
we obtain
\[
t(\alpha,\beta) = \begin{array}[t]{r}
[\varphi(\alpha-\beta)\,\psi\left(
\Phi(\pi(\alpha)-R) + \Phi(R)\right) + \psi(\alpha-\beta) \,
\varphi\left( \Phi(\pi(\alpha)-S)+\Phi(S)\right)]+\,\\
+[\varphi(\beta)\,\psi(\beta)-\varphi(\alpha)\,\psi(\alpha)].
\end{array}
\]
Now, since $\pi(\alpha)=\pi(\beta)=\pi(\gamma)$, both types of summands add
up to 0 separately in
$t(\beta,\gamma)-t(\alpha,\gamma)+t(\alpha,\beta)$.
\hfill$\Box$\\
\par
{\bf Remark:} The previous lemma does not imply that $t(q)$ is
$Z\!\!\!Z$-linear in $q$. The
assumption for $\pi(\alpha)$ made in (1) is really essential.\\
\par
Now, we obtain a
$P$-linear map $T\in \mbox{Hom}(P^m,A)$ by
\[
T: e^{\alpha\beta}\mapsto
\left\{ \begin{array}{ll}
t(\alpha,\beta)\,x^{\pi(\alpha)-R-S} & \mbox{ for } \pi(\alpha)\geq R+S\\
0 & \mbox{ otherwise}\,.
\end{array} \right.
\]
Pulling back $T$ to ${\cal R}\subseteq P^m$ yields (in case of $\pi(a+c)\geq R+S$)
\begin{eqnarray*}
T(r(a,b;c)) &=& \left\{ \begin{array}{ll}
[t(a+c,b+c)-t(a,b)]\cdot x^{\pi(a+c)-R-S} & \mbox{ for } \pi(a)\geq R+S\\
t(a+c,b+c)\cdot x^{\pi(a+c)-R-S} & \mbox{ otherwise}
\end{array} \right.\\
&=&
\left\{ \begin{array}{ll}
-(\varphi\cup\psi)(r(a,b;c)) & \mbox{ for } \pi(a)\geq R+S\\
t(a,b)\,x^{\pi(a+c)-R-S} -(\varphi\cup\psi)(r(a,b;c)) & \mbox{ otherwise}\,
\end{array} \right.
\end{eqnarray*}
and $T(s(a,b,c))=0$ (by (2) of the previous lemma).\\
\par
On the other hand, $T$ yields a trivial element of $T^2_Y(-R-S)$,
i.e. inside this group we may replace
$\varphi\cup\psi$ by $(\varphi\cup\psi)+T$ to obtain
\begin{eqnarray*}
(\varphi\cup\psi)(r(a,b;c)) &=&
\left\{ \begin{array}{ll}
t(a,b)\cdot x^{\pi(a+c)-R-S}&\mbox{ for } \pi(a)\in \bigcup_i K_i^{R+S};\;
\pi(a+c)\geq R+S\\
0 & \mbox{ otherwise}\,,
\end{array} \right.
\vspace{1ex}\\
(\varphi\cup\psi)(s(a,b,c)) &=& 0\,.
\vspace{1ex}
\end{eqnarray*}
\par
Having Theorem \zitat{3}{5} in mind, this formula for $\varphi\cup\psi$ is
exactly what
we were looking for:\\
Given an $r(a,b;c)$ with $\pi(a)\in K_i^{R+S}$,
let us compute $(\varphi\cup\psi)_i(q:=a-b)$ following the recipe of (i), (ii)
of Theorem
\zitat{3}{6}. We do not need to split
$q=a-b$ into a sum $q=\sum_k q^k$ - the element $q$ itself already satisfies
the condition
\[
\langle a^i,\bar{q}\rangle \leq \langle a^i, \pi(a) \rangle < \langle
a^i,R+S\rangle.
\]
In particular, with $(\varphi\cup\psi)_i(a-b)=t(a-b)=t(a,b)$ we
will obtain the right result - if the recipe is assumed to be correct. \\
\par
\neu{54}
We will now fill the remaining gaps, i.e.\ we will show that
\begin{itemize}
\item[(a)]
each $q\in L(E_i^{R+S})$
admits a decomposition $q=\sum_k q^k$ with the desired properties,
\item[(b)]
$\sum_k q^k=0$ (with $\bar{q}^k\in K_i^{R+S}$) implies $\sum_k t(q^k)=0$, and
\item[(c)]
for adjacent $a^i,a^j$ the relations $q\in L(E_i^{R+S}\cap E_j^{R+S})$ admit
a decomposition $q=\sum_kq^k$ that works for both $i$ and $j$.
\end{itemize}
(In particular, this answers the questions raised right after stating the
theorem in
\zitat{3}{6}.)\\
\par
Let us fix an element $i\in \{1,\dots,N\}$. Since $\sigma^{\scriptscriptstyle\vee}\cap M$
contains elements $r$ with $\langle a^i,r\rangle =1$, some of them must be
contained in the generating set $E$, too. We choose one of these elements
and call it $r(i)$.\\
Now, to each $r\in E$ we associate some relation $p(r)\in L(E)$ by
\[
p(r):= e^r - \langle a^i, r \rangle\cdot e^{r(i)} +
[\mbox{suitable element of } Z\!\!\!Z^{E\cap (a^i)^\bot}]\,.
\]
The two essential properties of these special relations are
\begin{itemize}
\item[(i)]
$\langle a^i, \bar{p}(r)\rangle = \langle a^i, r\rangle$, and
\item[(ii)]
if $q\in L(E)$ is any relation, then $q$ and $\sum_{r\in E}q_r\cdot p(r)$
differ
by some element of $L(E\cap (a^i)^\bot)$ only.
\vspace{1ex}
\end{itemize}
\par
In particular, this proves (a). For (b) we start with the following\\
\par
{\em Claim:}
Let $q^k\in L(E)$ be relations such that
$\sum_k \langle a^i,\bar{q}^k\rangle < \langle a^i, R+S\rangle$.
Then, $\sum_k t(q^k)=t(\sum_k q^k)$.\\
\par
{\em Proof:} We can restrict ourselves to the case of two summands, $q^1$ and
$q^2$. Then,
by Lemma \zitat{5}{3},
\begin{eqnarray*}
t(q^1)+t(q^2) &=&
t\left((q^1)^+,(q^1)^-\right) + t\left((q^2)^+,(q^2)^-\right)\\
&=&
t\left((q^1)^++(q^2)^+,(q^1)^-+(q^2)^+\right) +
t\left((q^2)^++(q^1)^-,(q^2)^-+(q^1)^-\right)\\
&=&
t\left((q^1)^++(q^2)^+,(q^2)^-+(q^1)^-\right)\\
&=& t(q^1+q^2)\,.
\hspace{9cm} \Box
\end{eqnarray*}
\par
In particular, if $\sum_kq^k=0$ (with $\bar{q}^k\in K_i^{R+S}$), then this
applies for
the special decompositions
\[
q^k=\sum_r q^k_r\cdot p(r) + q^{0,k} \quad (q^{0,k}\in L(E\cap(a^i)^\bot))
\]
of the summands $q^k$ themselves. We obtain
\[
\sum_{q^k_r>0}q^k_r\cdot t\left(p(r)\right) + t(q^{0,k}) = t\left(
\sum_{q^k_r>0}q^k_r\,p(r)+q^{0,k}\right) =: t(q^{1,k})
\]
and
\[
\sum_{q^k_r<0}q^k_r\cdot
t\left(p(r)\right)= t\left( \sum_{q^k_r<0}q^k_r\,p(r)\right)=:t(q^{2,k})\,.
\]
Up to elements of $E\cap (a^i)^\bot$, the relations $q^{1,k}$ and $q^{2,k}$ are
connected by
the common
\[
(q^{1,k})^-=-q^{1,k}_{r(i)}\cdot e^{r(i)}=\langle a^i,\bar{q}^k\rangle
\cdot e^{r(i)}=q^{2,k}_{r(i)}\cdot e^{r(i)}=(q^{2,k})^+\,.
\]
Hence, Lemma \zitat{5}{3} yields
\[
\sum_r q^k_r\cdot t\left(p(r)\right) + t(q^{0,k}) = t(q^{1,k}) + t(q^{2,k}) =
t\left(
q^{1,k}+q^{2,k}\right) = t(q^k)\,,
\]
and we conclude
\begin{eqnarray*}
\sum_k t(q^k) &=&
\sum_k \left(\sum_r q^k_r\cdot t\left(p(r)\right) + t(q^{0,k})\right)\\
&=&
\sum_r \left( \sum_k q^k_r \right) t\left(p(r)\right) + t\left(\sum_k
q^{0,k}\right)
\quad (\mbox{cf. previous claim})\\
&=&
0+ t\left( \sum_k q^k - \sum_{k,r} q^k_r\,p(r) \right)\\
&=& 0\,.
\vspace{2ex}
\end{eqnarray*}
\par
Finally, only (c) is left. Let $a^i$, $a^j$ be two adjacent edges of $\sigma$.
We adapt the construction of the elementary relations
$p(r)$. Instead of the $r(i)$'s, we will use elements $r(i,j)\in E$
characterized by the
property
\[
\langle a^i, r(i,j)\rangle = 1\,,\; \langle a^j, r(i,j)\rangle = 0\,.
\]
(Those elements exist, since $Y$ is assumed to be smooth in codimension 2.)\\
Now, we define
\[
p(r):= e^r - \langle a^i,r\rangle \cdot e^{r(i,j)} - \langle a^j,r \rangle
\cdot e^{r(j,i)}
+ [\mbox{suitable element of }Z\!\!\!Z^{E\cap(a^i)^\bot\cap(a^j)^\bot}]\,.
\]
These special $p(r)$'s meet the usual properties (i) and (ii) - but for the two
different
indices $i$ and $j$ at the same time. In particular, if $q\in L(E)$ is any
relation, then
$q$ and $\sum_{r\in E}q_r\cdot p(r)$ differ by some element of
$L(E\cap(a^i)^\bot\cap(a^j)^\bot)$ only.\\
\par
\sect{An alternative to the complex $L(E^R)_{\bullet}$}\label{s6}
\neu{61}
Let $R\in M$ be fixed for the whole \S \ref{s6}. The complex $L(E^R)_{\bullet}$
introduced in \zitat{3}{2} fits naturally into the exact sequence
\[
0\rightarrow L(E^R)_{\bullet} \longrightarrow (Z\!\!\!Z^{E^R})_{\bullet}
\longrightarrow \mbox{span}(E^R)_{\bullet}\rightarrow 0
\]
of complexes built in the same way as $L(E^R)_{\bullet}$, i.e.
\[
(Z\!\!\!Z^{E^R})_{-k} := \oplus\!\!\!\!\!\!_{\begin{array}{c}
\scriptstyle\tau<\sigma\vspace{-1ex} \\ \scriptstyle dim\, \tau=k \end{array}}
\!\!\!\!Z\!\!\!Z^{E^R_{\tau}}
\qquad \mbox{and}\qquad
\mbox{span}(E^R)_{-k} := \oplus\!\!\!\!\!\!_{\begin{array}{c}
\scriptstyle\tau<\sigma\vspace{-1ex} \\ \scriptstyle dim\, \tau=k \end{array}}
\!\!\!\!\mbox{span}(E^R_{\tau})\,.
\]
\par
{\bf Lemma:}{\em
The complex $(Z\!\!\!Z^{E^R})_{\bullet}$ is exact.\\
}
\par
{\bf Proof:}
The complex $(Z\!\!\!Z^{E^R})_{\bullet}$ can be decomposed into a direct sum
\[
(Z\!\!\!Z^{E^R})_{\bullet} = \bigoplus_{r\in M} (Z\!\!\!Z^{E^R})(r)_{\bullet}
\]
showing the contribution of each $r\in M$. The complexes occurring as summands
are
defined as
\begin{eqnarray*}
(Z\!\!\!Z^{E^R})(r)_{-k} &:=&
\oplus\!\!\!\!\!\!_{\begin{array}{c}
\scriptstyle\tau<\sigma\vspace{-1ex}\\ \scriptstyle dim\, \tau=k \end{array}} \!\!\!\!
\left\{ \begin{array}{ll}
Z\!\!\!Z=Z\!\!\!Z^{\{r\}} & \mbox{ for } r\in E^R_{\tau}\\
0 & \mbox{ otherwise}
\end{array} \right\}\\
&=&
Z\!\!\!Z^{\#\{\tau\,|\; dim\,\tau=k; \, r\in E^R_{\tau}\}}\,.
\end{eqnarray*}
Denote by $H^+$ the halfspace
\[
H^+ := \{ a\in N_{I\!\!R}\,|\; \langle a,r\rangle < \langle a, R\rangle\} \subseteq
N_{I\!\!R}.
\]
Then, for $\tau \neq 0$, the fact that $r\in E^R_{\tau}$ is equivalent to
$\tau \setminus \{0\} \subseteq H^+$. On the other hand, $r\in E^R_0$
corresponds to
the condition $\sigma \cap H^+ \neq \emptyset$.\\
In particular, $(Z\!\!\!Z^{E^R})(r)_{\bullet}$,
shifted by one place, equals the complex computing the reduced homology of
the
topological space $\cup \{\tau\,|\;\tau \setminus \{0\} \subseteq H^+\}
\subseteq \sigma$ cut
by some affine hyperplane. Since this space is contractible, the complex is
exact.
\hfill$\Box$\\
\par
{\bf Corollary:}{\em
The complexes $L(E^R)_{\bullet}^\ast$ and $\mbox{span}(E^R)_{\bullet}^\ast[1]$
are
quasi-isomorphic. In particular, under the usual assumptions (cf. Theorem
\zitat{3}{3}), we obtain
\[
T^i_Y(-R) = H^i\left( \mbox{span}(E^R)_{\bullet}^\ast\otimes _{Z\!\!\!Z}\,I\!\!\!\!C\right)\,.
\vspace{2ex}
\]
}
\par
\neu{62}
We define the $I\!\!R$-vector spaces
\begin{eqnarray*}
V^R_i &:= &\mbox{span}_{I\!\!R}(E_i^R)=\left\{
\begin{array}{l@{\quad\mbox{for}\;\:}l}
0 &\langle a^i,R\rangle\le 0\\
\left[ a^i=0\right] \subseteq M_{I\!\!R} & \langle a^i,R\rangle =1\\
M_{I\!\!R}=I\!\!R^n & \langle a^i,R\rangle\ge2
\end{array}
\right.\\
&& \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad (i=1,\ldots,N),\;
\mbox{ and}\\
V^R_{\tau} &:=& \cap_{a^i\in \tau} V^R_i
\supseteq \mbox{span}_{I\!\!R}(E^R_{\tau})
\quad (\mbox{for faces } \tau<\sigma)\,.
\end{eqnarray*}
$\,$
\vspace{-2ex}\\
\par
{\bf Proposition:}{\em
With ${\cal V}^R_{-k}:= \oplus\!\!\!\!\!\!_{\begin{array}{c}
\scriptstyle\tau<\sigma\vspace{-1.5ex} \\ \scriptstyle dim\, \tau=k \end{array}}
\!\!\!\!V_{\tau}^R$ we obtain a complex
${\cal V}^R_{\bullet}
\supseteq \mbox{span}_{I\!\!R}(E^R)_{\bullet}$.
Moreover, if $Y$ is smooth in
codimension $k$, then both complexes agree in degrees $\geq -k$.
}
\\
\par
{\bf Proof:}
$V^R_{\tau} = \mbox{span}_{I\!\!R}(E^R_{\tau})$ is true
for smooth cones $\tau<\sigma$ (cf.(3.7) of \cite{T1}).
\hfill$\Box$\\
\par
{\bf Corollary:}{\em
\begin{itemize}
\item[(1)]
If $Y$ is smooth in codimension 2, then $T^1_Y(-R)=
H^1\left(({\cal V}^R_{\bullet})^\ast \otimes_{I\!\!R} \,I\!\!\!\!C \right)$.
\item[(2)]
If $Y$ is smooth in codimension 3, then $T^2_Y(-R)=
H^2\left(({\cal V}^R_{\bullet})^\ast \otimes_{I\!\!R} \,I\!\!\!\!C \right)$.
\vspace{1ex}
\end{itemize}
}
\par
The formula (1) for $T^1_Y$ (with a more boring proof) was already obtained
in (4.4) of \cite{T1}.\\
\par
\sect{3-dimensional Gorenstein singularities}\label{s7}
\neu{71}
We want to apply the previous results to the special case of an isolated,
3-dimensional,
toric Gorenstein singularity. We begin by fixing notation.\\
\par
Let $Q=\mbox{conv}(a^1,\dots,a^N)\subseteq I\!\!R^2$ be a lattice polygon with
primitive
edges
\[
d^i:= a^{i+1}-a^i\in Z\!\!\!Z^2\,.
\]
Embedding $I\!\!R^2$ as the affine hyperplane $[a_3=1]$ into $N_{I\!\!R}:=I\!\!R^3$,
we can define the cone
\[
\sigma:= \mbox{Cone}(Q) \subseteq N_{I\!\!R}\,.
\]
The fundamental generators of $\sigma$ equal the vectors
$(a^1,1),\dots,(a^N,1)$, which we
will also denote by $a^1,\dots,a^N$, respectively. \\
\par
The vector space $M_{I\!\!R}$ contains a special element $R^\ast:=[0,0;1]$:
\begin{itemize}
\item
$\langle \bullet,R^\ast\rangle = 1$ defines the affine hyperplane containing
$Q$,
\item
$\langle \bullet,R^\ast\rangle = 0$ describes the vector space containing the
edges
$d^i$ of $Q$.
\end{itemize}
The structure of the dual cone $\sigma^{\scriptscriptstyle\vee}$ can be described as follows:
\begin{itemize}
\item
$[c;\eta]\in M_{I\!\!R}$ is contained in $\sigma^{\scriptscriptstyle\vee}$ iff $\langle Q,-c\rangle
\leq \eta$.
\item
$[c;\eta]\in \partial \sigma^{\scriptscriptstyle\vee}$ iff there exists some $i$ with
$\langle a^i,-c\rangle = \eta$.
\item
The set $E$ contains $R^\ast$. However, $E\setminus \{R^\ast\}\subseteq
\partial
\sigma^{\scriptscriptstyle\vee}$.
\vspace{1ex}
\end{itemize}
\par
{\bf Remark:}
The toric variety $Y$ defined by the cone $\sigma$ is 3-dimensional, Gorenstein,
and
regular outside its 0-dimensional orbit. Moreover, all those singularities can
be
obtained in this way.\\
\par
\neu{72}
Let $V$ denote the $(N-2)$-dimensional $I\!\!R$-vector space
\[
V:=\{(t_1,\dots,t_N)\,|\; \sum_i t_i\,d^i=0\}\subseteq I\!\!R^N\,.
\]
The non-negative tuples among the $\underline{t}\in V$ describe the set of Minkowski
summands
$Q_{\underline{t}}$
of positive multiples of the polygon $Q$. ($t_i$ is the scalar by which $d^i$
has to be
multiplied to get the $i$-th edge of $Q_{\underline{t}}$.)\\
\par
We consider the bilinear map
\[
\begin{array}{cclcl}
V&\times& I\!\!R^E & \stackrel{\Psi}{\longrightarrow}& I\!\!R\\
\underline{t}&,&[c;\eta]\in E &\mapsto&
\left\{ \begin{array}{ll}
0& \mbox{ if } c=0\quad(\mbox{i.e. } [c;\eta]=R^\ast)\\
\sum_{v=1}^{i-1} t_v\cdot
\langle d^v,-c\rangle & \mbox{ if }\langle a^i,-c\rangle =\eta\,.
\end{array} \right.
\end{array}
\]
Assuming both $a^1$ and the associated vertices of all Minkowski summands
$Q_{\underline{t}}$
to coincide with $0\in I\!\!R^2$, the map $\Psi$ detects the maximal values of the
linear
functions $c$ on these summands
\[
\Psi(\underline{t},[c;\eta]) = \mbox{Max}\,(\langle a,-c\rangle\,|\; a\in
Q_{\underline{t}})\,.
\]
In particular, $\Psi(\underline{1},[c;\eta])=\eta$, i.e. $\Psi$ induces a map
\[
\Psi: \quad^{\displaystyle V}\!\!/\!_{\displaystyle I\!\!R\cdot\underline{1}} \times L_{I\!\!R}(E) \longrightarrow
I\!\!R\,.
\]
The results of \cite{Gor} and \cite{Sm} imply that $\Psi$ provides an
isomorphism
\[
^{\displaystyle V_{\,I\!\!\!\!C}}\!\!/\!_{\displaystyle \,I\!\!\!\!C\cdot\underline{1}}\stackrel{\sim}{\longrightarrow}
\left(\left. ^{\displaystyle L(E_0^{R^\ast})}\!\!\right/
\!_{\displaystyle \sum_i L(E_i^{R^\ast})} \right)^\ast \otimes_{Z\!\!\!Z} \,I\!\!\!\!C
\cong T^1_Y(-R^\ast)= T^1_Y\,.
\]
In particular, $\mbox{dim}\,T^1_Y = N-3$.\\
\par
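As a concrete illustration (not taken from the text), the dimensions $\dim V = N-2$ and $\dim T^1_Y = N-3$ can be checked numerically for a sample lattice hexagon with primitive edges; the polygon and the small Gaussian-elimination helper below are our own illustrative choices, not data from the paper.

```python
# Hedged sketch: verify dim V = N-2 and dim T^1 = N-3 for a sample hexagon.
from fractions import Fraction

def rank(rows):
    """Row rank over the rationals via Gaussian elimination."""
    m = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Primitive edges d^i of the hexagon conv{(0,0),(1,0),(2,1),(2,2),(1,2),(0,1)}.
edges = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]
N = len(edges)

# V = {t in R^N : sum_i t_i d^i = 0}; its dimension is N minus the rank
# of the 2 x N constraint matrix.
constraint = [[d[0] for d in edges], [d[1] for d in edges]]
dim_V = N - rank(constraint)
print(dim_V, dim_V - 1)  # dim V = N-2 = 4, dim T^1 = dim V - 1 = N-3 = 3
```

Here $\dim T^1_Y = \dim V - 1$ because $T^1_Y \cong V_{\,I\!\!\!\!C}/\,I\!\!\!\!C\cdot\underline{1}$.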
\neu{73}
Let $R\in M$. Combining the general results of \S \ref{s6} with the fact
\[
\bigcap_i V_i^R = \mbox{Ker}\left[ \oplus_i(V_i^R\cap V^R_{i+1})\longrightarrow
\oplus_i V^R_i\right]
\]
coming from the special situation we are in, we obtain the handsome formula
\[
T^2_Y(-R)=
\left[ \left.^{\displaystyle \bigcap_i (\mbox{span}_{\,I\!\!\!\!C} E_i^R)}\!\!\! \right/
\!\!\! _{\displaystyle \mbox{span}_{\,I\!\!\!\!C} (\bigcap_i E_i^R)} \right] ^\ast\,.
\]
$T^1_Y$ is concentrated in the degree $-R^\ast$. Hence, for computing $T^2_Y$,
the
degrees $-kR^\ast$ ($k\geq 2$) are the most interesting (but not the only) ones. In
this
special case, the vector spaces $V^{kR^\ast}_i$ equal $M_{I\!\!R}$, i.e.
\[
T^2_Y(-kR^\ast)= \left[ \left. ^{\displaystyle M_{\,I\!\!\!\!C}}\!\!\! \right/ \!\!\!
_{\displaystyle \mbox{span}_{\,I\!\!\!\!C} (\bigcap_i E_i^{kR^\ast})} \right] ^\ast
\subseteq
\left[ \left. ^{\displaystyle M_{\,I\!\!\!\!C}}\!\!\! \right/ \!\!\! _{\displaystyle \,I\!\!\!\!C\cdot R^\ast}\right]
^\ast =
\mbox{span}_{\,I\!\!\!\!C}(d^1,\dots,d^N) \subseteq N_{\,I\!\!\!\!C}\,.
\vspace{2ex}
\]
\par
{\bf Proposition:}
{\em
For $c\in I\!\!R^2$ denote by
\[
d(c):= \mbox{Max}\,(\langle a^i,c\rangle\,|\; i=1,\dots,N) -
\mbox{Min}\,(\langle a^i,c\rangle\,|\; i=1,\dots,N)
\]
the diameter of $Q$ in $c$-direction. If
\[
k_1:= \!\!\begin{array}[t]{c}
\mbox{Min}
\vspace{-1ex}\\
\scriptstyle c\in Z\!\!\!Z^2\setminus 0
\end{array} \!\!d(c) \quad \mbox{ and } \quad
k_2:= \!\!\!\begin{array}[t]{c}
\mbox{Min}
\vspace{-1ex}\\ \scriptstyle c,c'\in Z\!\!\!Z^2
\vspace{-1ex}\\ \scriptstyle lin.\, indept.
\end{array} \!\!\!\mbox{Max}\,[ d(c), d(c')]\,,
\]
then
$\quad\begin{array}[t]{lll}
\dim T^2_Y(-kR^\ast) = 2 & \mbox{ for } & 2\leq k \leq k_1\,,\\
\dim T^2_Y(-kR^\ast) = 1 & \mbox{ for } & k_1+1\leq k \leq k_2\,,\mbox{ and}\\
\dim T^2_Y(-kR^\ast) = 0 & \mbox{ for } & k_2+1\leq k \,.
\end{array}
\vspace{2ex}
$
}
\par
{\bf Proof:}
We have to determine the dimension of $\;\mbox{span}_{\,I\!\!\!\!C}\left( \bigcap_i
E_i^{kR^\ast}\right)\!\!\left/\!\!_{\displaystyle \,I\!\!\!\!C\cdot R^\ast}\right.$. Computing
modulo $R^\ast$
simply means to forget the $\eta$ in $[c;\eta]\in M$. Hence, we are done by the
following
observation for each $c\in Z\!\!\!Z^2\setminus0$:
\[
\begin{array}{rcl}
\exists \eta\in Z\!\!\!Z: \;[c;\eta]\in \bigcap_iK_i^{kR^\ast}
& \Longleftrightarrow &
\exists \eta\in Z\!\!\!Z: \;(k-1)R^{\ast}\geq [c;\eta] \geq 0\\
& \Longleftrightarrow &
d(c) \leq k-1\,.
\end{array}
\vspace{-3ex}
\]
\hfill$\Box$\\
\par
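As a hedged numerical illustration of the Proposition, the diameters $d(c)$ and the invariants $k_1$, $k_2$ can be computed for a sample lattice hexagon; the polygon and the finite search window are assumptions of this sketch, not data from the text (a bounded window suffices since $d(c)$ grows with $|c|$).

```python
# Illustrative computation of d(c), k_1, and k_2 for a sample hexagon.
from itertools import product

# Vertices a^i of the lattice polygon Q (an illustrative choice).
verts = [(0, 0), (1, 0), (2, 1), (2, 2), (1, 2), (0, 1)]

def d(c):
    """Diameter of Q in c-direction: max<a^i,c> - min<a^i,c>."""
    vals = [a[0] * c[0] + a[1] * c[1] for a in verts]
    return max(vals) - min(vals)

# Small search box; heuristic, since d(c) only increases for larger |c|.
cs = [c for c in product(range(-3, 4), repeat=2) if c != (0, 0)]
k1 = min(d(c) for c in cs)
k2 = min(max(d(c), d(cp)) for c in cs for cp in cs
         if c[0] * cp[1] - c[1] * cp[0] != 0)  # linearly independent pairs
print(k1, k2)
```

For this hexagon $k_1=k_2=2$, so the Proposition gives $\dim T^2_Y(-2R^\ast)=2$ and $\dim T^2_Y(-kR^\ast)=0$ for $k\geq 3$.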
{\bf Corollary:}
{\em
Unless $Y=\,I\!\!\!\!C^3$ or $Y=\mbox{cone over }I\!\!P^1\times I\!\!P^1$, we have
\[
T^2_Y(-2R^\ast)= \mbox{span}_{\,I\!\!\!\!C}(d^1,\dots,d^N),
\]
i.e. $\dim T^2_Y(-2R^\ast)=2$.}
\\
\par
\neu{74}
{\bf Proposition:}
{\em
Using both the isomorphism $\;V_{\,I\!\!\!\!C}\!\!\left/\!\!_{\displaystyle \,I\!\!\!\!C\cdot\underline{1}}\right.
\stackrel{\sim}{\rightarrow} T^1_Y$ and the injection
$T^2_Y(-2R^\ast)\hookrightarrow \mbox{span}_{\,I\!\!\!\!C}(d^1,\dots,d^N)$,
the cup product $T^1_Y\times T^1_Y \rightarrow T^2_Y$ equals the bilinear map
\[
\begin{array}{ccccc}
V_{\,I\!\!\!\!C}\!\!\left/\!\!_{\displaystyle \,I\!\!\!\!C\cdot\underline{1}}\right. &
\times &
V_{\,I\!\!\!\!C}\!\!\left/\!\!_{\displaystyle \,I\!\!\!\!C\cdot\underline{1}}\right. &
\longrightarrow &
\mbox{span}_{\,I\!\!\!\!C}(d^1,\dots,d^N)\\
\underline{s} &
,
&
\underline{t} &
\mapsto &
\sum_i s_i\,t_i\,d^i\,.
\end{array}
\vspace{1ex}
\]
}
\par
{\bf Proof:}
{\em Step 1:} To apply Theorem \zitat{3}{6} we will combine the isomorphisms
for $T^2_Y$ presented in \S \ref{s6} and \zitat{7}{3}. Actually, we will
describe the dual map
by associating to each $r\in M$ an element
$[q^1(r),\dots,q^N(r)]\in\oplus_iL(E_i^{2R^\ast})$.\\
First, for every $i=1,\dots,N$, we have to write
$r\in M = (\mbox{span}\, E_i^{2R^\ast})\cap (\mbox{span}\, E_{i+1}^{2R^\ast})$
as a linear
combination of elements from $E_i^{2R^\ast}\cap E_{i+1}^{2R^\ast}$.
This set contains a $Z\!\!\!Z$-basis for $M$ consisting of
\begin{itemize}
\item
$r^i:=$ primitive element of $\sigma^{\scriptscriptstyle\vee}\cap (a^i)^\bot \cap
(a^{i+1})^\bot$,
\item
$R^\ast$, and
\item
$r(i):= r(i,i+1)$ (cf. notation at the end of \zitat{5}{4}), i.e.
$\begin{array}[t]{l}
\langle a^i, r(i)\rangle = 1 \mbox{ and}\\
\langle a^{i+1}, r(i) \rangle = 0\,.
\end{array} $
\end{itemize}
In particular, we can write
\[
r= g^i(r)\cdot r^i + \langle a^{i+1},r\rangle\cdot R^\ast + \left(
\langle a^i,r \rangle - \langle a^{i+1},r\rangle \right) \cdot r(i)
\]
with some integer $g^i(r)\in Z\!\!\!Z$.\\
\par
Now, we have to apply the differential in the complex
$(Z\!\!\!Z^{E^{2R^\ast}})_{\bullet}$,
i.e. we map the previous expression via the map
\[
\oplus_i Z\!\!\!Z^{E_i^{2R^\ast}\cap E_{i+1}^{2R^\ast}} \longrightarrow
\oplus_i Z\!\!\!Z^{E_i^{2R^\ast}}\, .
\]
The result is (for every $i$) the element of $L(E_i^{2R^\ast})$
\[
\begin{array}{l}
g^i(r)\, e^{r^i} - g^{i-1}(r)\, e^{r^{i-1}} + \langle a^i-a^{i+1},r\rangle\cdot
e^{r(i)} -
\langle a^{i-1} - a^i,r\rangle \cdot e^{r(i-1)} +
\langle a^{i+1}-a^i, r \rangle\cdot e^{R^\ast}
\vspace{1ex}\\
\qquad = \langle d^i,r\rangle \cdot \left( e^{R^\ast} -
e^{r(i)}\right) + [(a^i)^\bot \mbox{-summands}] =: q^i(r)\,.
\end{array}
\vspace{2ex}
\]
\par
{\em Step 2:}
Defining
\[
q^i:= e^{R^\ast}-e^{r(i)} + [(a^i)^\bot \mbox{-summands}] \in
L(E_i^{2R^\ast})\quad
(i=1,\dots,N)\,,
\]
we use Theorem \zitat{3}{6} and the second remark of \zitat{3}{6} to obtain
\[
(\underline{s}\cup\underline{t})_i \left( q^i(r) \right) =
\langle d^i,r \rangle \cdot t_{\Psi(\underline{s},\bullet), \Psi(\underline{t},\bullet),
R^\ast,R^\ast}
(q^i) = \Psi(\underline{s},q^i)\cdot \Psi(\underline{t},q^i)\,.
\]
To compute those two factors, we take a closer look at the $q^i$'s. Let
\[
q^i= e^{R^\ast}-e^{r(i)} + \sum_v \lambda^i_v \,e^{[c^v; \eta^v]}\,,
\vspace{-1ex}
\]
where the sum is taken over those $v$ satisfying
$\langle a^i,-c^v\rangle = \eta^v$. Then, by definition of $\Psi$ in
\zitat{7}{2},
\[
\Psi(\underline{s},q^i)= \sum_{j=1}^{(i+1)-1}s_j\,\langle d^j, r(i)\rangle -
\sum_v \lambda^i_v \cdot \left( \sum_{j=1}^{i-1}s_j\, \langle d^j,c^v\rangle
\right)\,.
\]
On the other hand, we know that $q^i$ is a relation, i.e. the equation
\[
R^\ast - r(i) + \sum_v \lambda^i_v [c^v; \eta^v] =0
\vspace{-1ex}
\]
is true in $M$. Hence,
\[
\begin{array}{rcl}
\Psi(\underline{s},q^i)&=& \sum_{j=1}^i s_j\,\langle d^j, r(i)\rangle -
\sum_{j=1}^{i-1} s_j \langle d^j, r(i)\rangle
\vspace{0.5ex}\\
&=& s_i\cdot \langle d^i, r(i)\rangle
\vspace{0.5ex}\\
&=& -s_i\,.
\end{array}
\vspace{-3ex}
\]
\hfill$\Box$
\vspace{2ex}\\
\par
$T^1_Y\subseteq \,I\!\!\!\!C^N$ is the tangent space of the versal base space $S$ of our
singularity
$Y$. It is given by the linear equation $\sum_i t_i\cdot d^i=0$.\\
On the other hand,
the cup product $T^1_Y\times T^1_Y\rightarrow T^2_Y$ shows the quadratic part
of the
equations defining $S\subseteq\,I\!\!\!\!C^N$. By the previous proposition, it equals
$\sum_i t_i^2 \cdot d^i$.\\
\par
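A minimal sketch of the bilinear map $(\underline{s},\underline{t})\mapsto\sum_i s_i\,t_i\,d^i$ from the Proposition, assuming the same illustrative hexagon as above; the test vector is a hypothetical element of $V$, not one taken from the text.

```python
# Illustrative evaluation of the cup-product bilinear map on V.
edges = [(1, 0), (1, 1), (0, 1), (-1, 0), (-1, -1), (0, -1)]

def in_V(t):
    """Check the linear condition sum_i t_i d^i = 0 defining V."""
    return (sum(ti * d[0] for ti, d in zip(t, edges)) == 0 and
            sum(ti * d[1] for ti, d in zip(t, edges)) == 0)

def cup(s, t):
    """The bilinear map (s,t) -> sum_i s_i t_i d^i."""
    return (sum(si * ti * d[0] for si, ti, d in zip(s, t, edges)),
            sum(si * ti * d[1] for si, ti, d in zip(s, t, edges)))

s = (2, 0, 1, 1, 1, 0)   # a hypothetical Minkowski-summand direction in V
print(in_V(s), cup(s, s))  # s lies in V, yet sum_i s_i^2 d^i != 0
```

A nonzero value of $\sum_i s_i^2\,d^i$ signals a quadratic obstruction for deforming in the direction $\underline{s}$.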
These facts suggest how the equations of $S\subseteq\,I\!\!\!\!C^N$ might look. In
\cite{Vers} we have proved this conjecture; $S$ is indeed given by the
equations
\[
\sum_{i=1}^N t_i^k \cdot d^i =0\quad (k\geq 1)\,.
\vspace{3ex}
\]
\par
\section{Introduction}
Even though acoustics made significant progress in the 1960s, some exciting features remain unexplored by scientists and engineers. A good example is the acoustic field distribution produced by a freely vibrating rod, where studies are scarce due to the difficulty of exciting a solid at a well-defined frequency with a non-contact perturbation while keeping a reasonable signal-to-noise ratio.
The radiated field of a rod can be used to study its resonant frequencies~\cite{Anderson}, and it has been directly measured in water using the standard technique of exciting the solid with a piezoelectric transducer~\cite{Jarosz, Gook, Malladi}. Another common technique to excite a solid is the use of a shaker instead of a piezoelectric transducer~\cite{Wu, Blake}. Despite the efficiency of these methods, they cannot be used to analyze the radiated acoustic field of a freely vibrating rod, because the physical contact between the exciter and the rod induces undesired higher vibrational modes that disturb the wave field of interest~\cite{Morales}.
A non-contact technique to excite structures consists of using collimated air pulses controlled by valves~\cite{Farshidi}. However, due to strong air perturbations, it does not allow the sound field properties to be measured adequately. A more suitable technique uses electromagnetic acoustic transducers (EMATs)~\cite{Hefner}, which make it possible to excite and detect the vibrations of a solid with extraordinary accuracy, owing to their high frequency selectivity and their noise-reduction capabilities.
From the theoretical point of view, the radiated acoustic field of a vibrating rod was studied extensively long ago by Williams et al.~\cite{Williams}, but it can also be treated with simpler models using a finite sum of spherical harmonics~\cite{Morfey} or the Green's function method~\cite{New}, among others.
In this paper, the stationary acoustic field produced by a freely vibrating rod is studied experimentally. The rod has finite length and vibrates in its first compressional mode. Due to the nature of this mode, the amplitude of the harmonic displacement is maximal at the ends of the rod, which means that the strongest interaction with the surrounding medium occurs there. In order to describe the acoustic wave field, a semi-analytical model is developed. Finally, the experimental results are compared with the theoretical prediction.
\section{Elastic modes of the rod}
As a quasi-onedimensional elastic system, where the length $L$ is much larger than the diameter $D$, the rod has eigenmodes, which can be classified into compressional, torsional and flexural modes~\cite{Graff, Morales2002, DominguezRocha, Franco}. The compressional eigenfrequencies are given by
\begin{equation}
\label{eq:freq}
f_\nu = \frac{\nu}{2L}\; \sqrt{\frac{E}{\rho}} \; , \qquad
\nu = 1,2,3,\ldots\; ,
\end{equation}
where $E$ is the Young modulus, $\rho$ is the density, and $\nu$ is the mode
number. For a given mode number $\nu$, the nodes $\xi_\eta$ of the wave
function which describes the local displacement of volume elements in the rod,
are located at
\begin{equation}
\label{eq:nodes}
\xi_\eta = \frac{\eta\, L}{\nu +1} \; , \qquad \eta = 1,2,\ldots, \nu\; .
\end{equation}
Specifically, for our experiment we used an aluminium rod of 1~m length and 0.0127~m diameter. Hence, according to Eq.~(\ref{eq:freq}), the first compressional mode has a frequency of $f_{1\textrm{C}} = 2.5\, {\rm kHz}$, and according to Eq.~(\ref{eq:nodes}), this mode has one node which lies exactly in the middle of the rod at $\xi_1 = L/2$. Fig.~\ref{fig:fig1} shows the displacement for the first compressional mode of a free-boundary rod, which presents a node at its geometrical center.
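The numbers above can be reproduced directly from Eqs.~(\ref{eq:freq}) and (\ref{eq:nodes}); the material constants for aluminium used below are nominal handbook values assumed for this sketch, not quoted from the text.

```python
# Hedged numerical check of Eqs. (1) and (2) for the aluminium rod.
from math import sqrt

E = 69e9      # Young's modulus in Pa (assumed nominal value)
rho = 2700.0  # density in kg/m^3 (assumed nominal value)
L = 1.0       # rod length in m

def f_compressional(nu):
    # Eq. (1): f_nu = nu/(2L) * sqrt(E/rho)
    return nu / (2 * L) * sqrt(E / rho)

def nodes(nu):
    # Eq. (2): xi_eta = eta * L / (nu + 1), eta = 1, ..., nu
    return [eta * L / (nu + 1) for eta in range(1, nu + 1)]

print(round(f_compressional(1)))  # about 2528 Hz, close to the reported 2.5 kHz
print(nodes(1))                   # single node at L/2
```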
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.37\textwidth]{Fig1a}
\includegraphics[width=0.1\textwidth]{Fig1b}
\caption{\label{fig:fig1}{(Colour online) COMSOL simulation of a cylindrical rod vibrating in its first compressional normal mode. The relative surface displacement is shown from blue to red. The number and position of nodes are predicted by Eq.~(\ref{eq:nodes}). This normal mode presents a node at the center of the rod and two maxima at the axial ends. The black wires represent the equilibrium state of the rod.}}
\end{center}
\end{figure}
In vacuum, the dynamical oscillation mainly produces a shrinking and stretching along the axial axis of the rod. Now, let us place this oscillating system in air. As can be anticipated, the strongest interaction between the air and the rod takes place at both ends of the rod. We neglect the back-action of the air on the oscillating rod and focus on the acoustic wave field generated by the oscillating ends. To that end, we observe that the system is axially symmetric with respect to the symmetry axis of the rod. In addition, reflections of the radiated acoustic field at other boundaries may be neglected. Finally, we neglect the much smaller radial displacement of the boundary of the open cylinder (that is, the cylinder boundary without its ends). Thus, it is sufficient to measure the acoustic wave field in a plane that contains the symmetry axis of the rod.
\section{Experimental measurement of the acoustical wave field}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.41\textwidth]{Fig2}
\caption{\label{fig:fig2}{(Colour online) (a) Experimental setup to measure the spatial distribution of the acoustic field of a compressional mode of a rod. The displacement field scale of the rod is the same as in Fig.~\ref{fig:fig1}. One of the rod ends is driven by a harmonic force, while the opposite one remains free. The sound is measured with a microphone at different positions within the four regions A, B, C and D. The arrows indicate how the microphone is placed to detect the sound radiation at different positions in each region; two measuring orientations of the microphone are shown in regions B and D. (b) Circuit diagram used to feed the ELECTRET microphone. The resistance and capacitance values are $R_1=10$~k$\Omega$, $R_2=100$~k$\Omega$, and $C_1=0.1$~$\mu$F. The transistor models are 2SK596 for $T_1$ and 2N3904 for $T_2$. The circuit diagram of the ELECTRET microphone, where the pressure-sensitive capacitor $C(t)$ is located, is also shown.}}
\end{center}
\end{figure}
The experimental scheme shown in Fig.~\ref{fig:fig2}(a) is used to measure the acoustic wave field emitted by a cylindrical aluminium rod vibrating in its first compressional mode. To generate the compressional resonance, a vector network analyzer (VNA, Anritsu MS-4630B) is used. The VNA produces a sinusoidal signal of frequency $f$, which is amplified by a Cerwin-Vega amplifier (CV-900). The signal is then sent to an exciter (EMAT) placed very close to the rod.
The EMAT is controlled by the VNA in a small frequency window tuned around 2.5~kHz. A further advantage of EMATs is that they work up to several tens of kHz, making them excellent for studying the resonance spectra of a body. The rod is thus excited without physical contact, and the EMAT is highly selective between different modes~\cite{Morales, Morales2002}. EMATs can also be used to excite a system in different ways depending on the magnetic properties of the material~\cite{Rossing1992, Russell2000, Russell2013}.
The vibration of the rod interacts with the surrounding air, producing an audible signal (acoustic field) that is measured by an ELECTRET microphone. These microphones contain an internal transistor which acts as an amplifier; to operate them correctly, a small power source is added to feed the transistor. The diagram of the external amplifier and the power source is shown in Fig.~\ref{fig:fig2}(b).
The rod is excited in the same frequency window many times while the position and orientation of the microphone are changed to map the area of interest. Each measurement taken by the microphone is sent back to the VNA, which allows us to obtain the amplitude of the pressure variation. Since one resonant frequency of the rod lies inside the chosen frequency window, we can identify the resonance amplitude as the highest point of the measurement. Figure~\ref{fig:fig3} shows a typical resonance measurement taken by aligning the center of the microphone with the axial axis of the rod (region A), keeping a distance of 1~cm between them. The measured amplitude depends on the spatial microphone configuration and is assigned to the specific coordinate where the microphone was placed. A detailed explanation of how each measurement was taken, together with the data analysis, is included in the next subsection. We note that we are interested in the intensity of the acoustic wave field, which can be obtained from the amplitude measured by the VNA.
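The resonance-amplitude extraction described above can be sketched as follows; the Lorentzian trace below is synthetic, standing in for a measured VNA sweep around 2.5~kHz.

```python
# Hedged sketch: take the resonance amplitude as the maximum of a sweep.
freqs = [2400.0 + 2.0 * k for k in range(101)]   # 2.4-2.6 kHz frequency sweep
f0, gamma = 2500.0, 10.0                         # synthetic resonance parameters
amps = [1.0 / (1.0 + ((f - f0) / gamma) ** 2) for f in freqs]

# The highest point of the trace gives the resonance frequency and amplitude.
peak_idx = max(range(len(amps)), key=amps.__getitem__)
f_res, a_res = freqs[peak_idx], amps[peak_idx]
print(f_res, a_res)  # 2500.0 1.0
```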
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig3}
\caption{\label{fig:fig3}{Experimental acoustic amplitude measured by a microphone in region A. The acoustic wave field is generated by a cylindrical rod vibrating in its first compressional mode. Each measurement presents a maximum at the resonance frequency, which does not depend on the microphone configuration.}}
\end{center}
\end{figure}
Another fundamental detail to take into account in acoustic studies is the reflection of waves from the walls of the laboratory. To suppress these reflections, we used conventional polyurethane acoustic foam, like the one used in music recording studios. To further reduce direct reflections, we covered the foam with low-density polyethylene foam. Finally, each insulating set (polyurethane plus polyethylene) is arranged in a wedge shape in order to funnel the reflections of the acoustic waves towards its vertex. Four of these wedges are placed around the system, and two flat insulating sets are situated, one on the ceiling and another on the work table.
\subsection{Description of the measurement of the sound field}
Given the cylindrical symmetry of the rod, we restrict the measurement area to four regions (see Fig.~\ref{fig:fig2}(a)). Region A runs along the axial axis from the rod edge and has a length of 0.14~m, while region D is perpendicular to the axial axis from the middle of the rod and forms a rectangle 0.14~m wide and 0.45~m long. While in regions A and D measurements are taken by pointing the microphone towards the rod, in regions B and C two measurements are taken at each point: one with the microphone parallel to the axial axis of the rod and the other in the perpendicular configuration. Region B is a square of 0.14~m side length, and region C is a rectangle 0.14~m long and 0.05~m wide.
The closest measurements in all regions are approximately 1~mm away from the rod. For the measurements of region A, the center of the microphone is aligned with the axial axis of the rod and adjacent measurements are 1~cm apart. Region B is divided into a square mesh with 1~cm between adjacent points, and region C into a rectangular mesh with the same spacing. Finally, region D is divided into a rectangular mesh with 5~cm and 1~cm between adjacent points along the $z$ and $x$ axes, respectively.
From each measurement in regions A and D, the extracted maximum value of the amplitude is assigned to the spatial coordinate where the microphone is placed. On the other hand, each coordinate in regions B and C has two different measurements corresponding to the two orientations of the microphone. The actual value at those points is obtained as the square root of the sum of the squares of the two components. Fig.~\ref{fig:fig4} shows a 3D plot of the experimental acoustic intensity field as well as its 2D projection. It can be observed that the maxima of the field are found at the ends of the rod, while the field is weakest near the geometrical center of the rod, as expected.
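A minimal sketch of how the two orthogonal measurements in regions B and C are combined into a single value per grid point (the array names and amplitude values below are made up for illustration, not actual data):

```python
import numpy as np

# Hypothetical amplitude maps on a small region-B-like mesh (values made up):
# a_par[i, j]  -> maximum amplitude, microphone parallel to the rod axis
# a_perp[i, j] -> maximum amplitude, microphone perpendicular to the rod axis
a_par = np.array([[0.3, 0.5], [0.2, 0.4]])
a_perp = np.array([[0.4, 0.0], [0.2, 0.3]])

# The value assigned to each grid point is the quadrature sum of the
# two orthogonal components, as described in the text.
amplitude = np.sqrt(a_par**2 + a_perp**2)
print(amplitude[0, 0])  # -> approximately 0.5 (sqrt(0.3^2 + 0.4^2))
```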
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig4}
\caption{\label{fig:fig4}{(Colour online) The experimental acoustical intensity field radiated by a cylindrical rod vibrating in its first compressional mode. The white rectangle represents the physical space occupied by the rod. As the elasticity theory predicts, the maximum displacement is at the rod edges and the node is placed at the geometrical center of the rod, Eqs.~(\ref{eq:freq}) and (\ref{eq:nodes}). Here we plot the 3D acoustical intensity field as well as a projection in 2D at the top where contour lines are shown.}}
\end{center}
\end{figure}
\section{Semi-Analytical approach: A simplified model}
In acoustics, spherical sources are good approximations to describe the radiated sound of many structures~\cite{Morse, Kinsler, Rossing2004, Rienstra}. As an approximation to the experimental rod, we place two spherical sound sources separated by a distance $L$. The spheres are joined by a narrow rod which does not interfere with their vibrations and does not emit sound. In other words, we have two spherical sources emitting a superposition of spherical outgoing waves, such that the wavefield along the connecting segment vanishes (see Fig.~\ref{fig:fig5}).
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\textwidth]{Fig5}
\caption{\label{fig:fig5}{(Colour online) Schematic diagram of two identical spheres vibrating in free space separated by a distance $L$. The spurious thick black line that links both spheres does not interfere with the vibration of them but suppresses the acoustic field through all its length.}}
\end{center}
\end{figure}
The sound intensity is defined as the average energy flux corresponding to sound propagation. Then, in terms of the complex fields at a point ${\bf{r}}$ it can be written as
\begin{equation}
\label{eq:intensity}
{\bf I}({\bf r})=\frac{1}{2} \textrm{Re}\left[p({\bf r}){\bf u}^*({\bf r})\right],
\end{equation}
where ${\bf u}$ is the acoustic velocity related with the acoustic pressure $p$ through
\begin{equation}
\label{eq:u}
{\bf u}({\bf r})=\frac{-\textrm{i}}{\omega\rho}\nabla p({\bf r}),
\end{equation}
where $\rho$ is the density of the medium and $\omega$ the oscillation frequency. The time-independent sound pressure field is described by the 3D Helmholtz equation
\begin{equation}
\label{eq:Helmholtz}
(\nabla^2 + k^2)p({\bf r}) = 0,
\end{equation}
where $k$ is the wavenumber and $\nabla^2$ is the Laplacian operator. As might be expected, the suitable coordinate system to describe the pressure field is the spherical one. We adopt the usual convention in which $r$, $\theta$, and $\phi$ denote the radial distance, the polar angle, and the azimuthal angle, respectively. The coordinate system can be observed in Fig.~\ref{fig:fig5}.
In addition to the rotational symmetry of the problem, we know the boundary conditions with respect to the coordinates: one at the surface of the sphere and another at infinity, where the intensity field must vanish~\cite{Jacobsen}. Thus, the boundary condition that the acoustic pressure must fulfill at the surface is~\cite{Zaman}
\begin{equation}
\label{eq:boundary}
\frac{\partial p(r,\theta)}{\partial n}\bigg\rvert_{S}=\hat{\bf n}\cdot\nabla p(r,\theta)\bigg\rvert_{S}=0,
\end{equation}
with $\hat{\bf n}$ the normal vector to the surface, $S$, of the ``rod''. Also, the imposition of Eq.~(\ref{eq:boundary}) leads to maximum values of the pressure at the surface of the ``built rod''.
In order to find the solution of Eq.~(\ref{eq:Helmholtz}) for a single sphere, one can apply the separation of variables method, describing the pressure field as a product of functions of each coordinate, $R(r)$, $\Theta(\theta)$, and $\Phi(\phi)$. The solution is then a product of spherical harmonics and a superposition of outgoing and incoming spherical Hankel functions. If a temporal harmonic oscillation of the form $\textrm{e}^{-\textrm{i}\omega t}$ is considered, $h_n^{(1)}(kr)$ can be interpreted as outgoing traveling spherical waves and $h_n^{(2)}(kr)$ as incoming waves, which must be removed. Therefore, the resulting time-independent solution is a sum of spherical harmonics times outgoing spherical Hankel functions
\begin{equation}
\label{eq:3Dgensol}
p(r,\theta,\phi) = \sum_{n=0}^{\infty}\sum_{m=-n}^{n} A_{nm} h_n^{(1)}(kr)Y_n^m(\theta,\phi),
\end{equation}
where the coefficients $A_{nm}$ are, in general, complex.
Let us add a second sphere, separated by a distance $L$ and joined to the first by a narrow rod which neither interferes with their vibrations nor emits sound. Thus, the whole pressure field is the superposition of the fields generated by the spheres placed at $z=\pm L/2$, therefore
\begin{equation}
\label{eq:ptot}
p(r, \theta, \phi) = p^+(r, \theta, \phi) + p^-(r, \theta, \phi).
\end{equation}
Here, the $\pm$ sign labels the pressure of the sound emitted by the sphere located at $\pm L/2$, respectively. In general, the pressure field depends on the three coordinates; however, given the symmetry of the studied system, we only consider the azimuthally symmetric case $m=0$. Therefore, Eq.~(\ref{eq:3Dgensol}) for each sphere becomes
\begin{eqnarray}
p^\pm(r,\theta) &=& \sum_{n=0}^{\infty} \sqrt{\frac{2n+1}{2}}A^\pm_{n} h_n^{(1)}(kd^\pm)\nonumber\\
&\times&P_n\left(\frac{r\cos{\theta}\mp L/2}{d^\pm}\right),
\end{eqnarray}
with $d^\pm=\sqrt{r^2\mp rL\cos{\theta}+ L^2/4}$, and $P_n$ being the Legendre polynomial of degree $n$.
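A minimal numerical sketch of this superposition, using SciPy's spherical Bessel and Legendre routines (the coefficient values below are illustrative, not the ones fitted to the experiment):

```python
import numpy as np
from scipy.special import spherical_jn, spherical_yn, eval_legendre

def h1(n, x):
    """Outgoing spherical Hankel function h_n^(1)(x) = j_n(x) + i y_n(x)."""
    return spherical_jn(n, x) + 1j * spherical_yn(n, x)

def p_total(r, theta, k, L, A_plus, A_minus):
    """Superposition p = p^+ + p^- of the two spherical sources at z = +/- L/2.

    A_plus, A_minus list the complex coefficients A_n^{+/-}; their values in
    the example call below are illustrative only.
    """
    p = 0.0 + 0.0j
    for sign, A in ((+1, A_plus), (-1, A_minus)):
        # d^{+/-} = sqrt(r^2 -/+ r L cos(theta) + L^2/4)
        d = np.sqrt(r**2 - sign * r * L * np.cos(theta) + L**2 / 4)
        arg = (r * np.cos(theta) - sign * L / 2) / d
        for n, An in enumerate(A):
            p += np.sqrt((2 * n + 1) / 2) * An * h1(n, k * d) * eval_legendre(n, arg)
    return p

# Monopole-only check (n_max = 0, equal sources): the field must be symmetric
# under theta -> pi - theta.
p1 = p_total(20.0, 0.3, k=0.45, L=50.0, A_plus=[1.0], A_minus=[1.0])
p2 = p_total(20.0, np.pi - 0.3, k=0.45, L=50.0, A_plus=[1.0], A_minus=[1.0])
```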
Fig.~\ref{fig:fig6} shows the acoustic intensity field $\rvert{\bf{I}}({\bf{r}})\rvert$ radiated by our model considering $n_{\rm{max}}=13$. Due to the divergence of the spherical Neumann functions at $kr=0$, we consider an initial radius $r_0$ at which the evaluation can be performed. This value is interpreted as the physical radius of each sphere plus the distance between the surface and the measurement device.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig6}
\caption{\label{fig:fig6}{(Colour online) The acoustical intensity field radiated by two spherical sources joined by a narrow rod. The white rectangle represents the physical space occupied by the ``rod''. Here we plot the 3D acoustical intensity field as well as a projection in 2D at the top where contour lines are shown. The numerical parameters used to obtain this graph are $k=0.45$, $L=50$, $A^\pm_0=1$ and $r_0=13$ as well as the normalized units $2\rho\omega=1$.}}
\end{center}
\end{figure}
From Fig.~\ref{fig:fig4} and Fig.~\ref{fig:fig6} the acoustic intensity of the experimental and semi-analytical systems can be compared qualitatively. For a deeper analysis, we look for a quantitative comparison of the two behaviors. In Fig.~\ref{fig:fig7} the decay of the acoustic intensity field along region A is plotted, showing the experimental data (black squares) and the semi-analytical result (black solid line). The analytical curve is in good agreement with the experimental data.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.48\textwidth]{Fig7}
\caption{\label{fig:fig7}{Comparison between experimental data of region A (dots) and the fitted absolute value of Eq.~(\ref{eq:fitting}) (black solid line).}}
\end{center}
\end{figure}
The semi-analytical curve plotted in Fig.~\ref{fig:fig7} is an evaluation of
\begin{equation}
\label{eq:fitting}
I(z)=a\textrm{Re}\left[{\rm{i}}p(z+c)\partial_z p^*(z+c)\right],
\end{equation}
by using the amplitudes $A_n$ obtained from Fig.~\ref{fig:fig6}. Here, $a=42.4$ is an overall amplitude, $b=0.5$~cm$^{-1}$ is the wavenumber used in $p$, and $c=2.925$~cm is a translation along the $z$-axis. By taking $b$ as the wavenumber and the resonance frequency as $f_{\rm{C}}=2.5$~kHz, one can compute the speed of sound as ${\rm{c}}\approx314$~m/s. The goodness of the fit was $R$-square$\,=0.9969$.
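The quoted speed of sound is simply ${\rm c} = \omega/k = 2\pi f_{\rm C}/b$; a quick arithmetic check in SI units:

```python
import math

f_C = 2.5e3      # resonance frequency in Hz
b = 0.5 * 100    # fitted wavenumber: 0.5 cm^-1 = 50 m^-1

# c = omega / k = 2*pi*f / b
c = 2 * math.pi * f_C / b
print(round(c, 1))  # -> 314.2 (m/s), consistent with the value in the text
```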
\section{Conclusions}
In this work we have studied experimentally the time-independent acoustic intensity field radiated by a cylindrical rod vibrating in its first compressional mode. The rod was excited by an EMAT together with a VNA, and the acoustical response was measured with a microphone. The resulting acoustical field was obtained by mapping the area surrounding the rod with the microphone. To avoid undesired wall reflections, a system of foams was developed to mimic an anechoic chamber at a very low cost.
To model the behavior of the intensity field, we proposed a simple analytical model of two spherical sources joined by a non-interacting thin rod. We added the constraint of no acoustic emission from the rod that joins the sources, together with the proper boundary conditions. As a result, we obtained good qualitative agreement between our model and the experimental measurements.
Finally, we performed a quantitative comparison between the experimental measurements and the analytical model for one of the studied spatial regions with an excellent agreement.
\begin{acknowledgments}
V.~D.-R. thanks the financial support of DGAPA. The authors thank Centro Internacional de Ciencias A.~C. for the facilities given for several group meetings and gatherings celebrated there, as well as the space given to set up a laboratory. We also thank M.~Mart\'inez-Mares and G.~B\'aez for useful comments. L.~A.~R.-L acknowledges support by Academia Mexicana de Ciencias.
\end{acknowledgments}
\section{Introduction}
The study of embedded surfaces is indispensable in $3$-manifold topology and has a history of more than $100$ years. For the relatively new interest in understanding contact structures on $3$-manifolds, one finds it important, not surprisingly, to study embedded surfaces together with germs of contact structures on them. As a relative version, Legendrian knots are studied, with great success, by examining the germ of the contact structure on spanning surfaces, e.g., Seifert surfaces. This tradition goes back to the work of Bennequin \cite{Ben83}, Eliashberg \cite{Eli89,Eli92}, and Fuchs-Tabachnikov \cite{FT97} at the very beginning of the subject, followed by more systematic development by Giroux \cite{Gi91,Gi00}, Honda \cite{Hon00}, Etnyre-Honda \cite{EH03} and many others. As of today, $3$-dimensional contact topology is fairly well understood, at least in comparison with the higher-dimensional case.
The focus of this paper is on contact manifolds $(M,\xi)$ of dimension $2n+1>3$. In this case, one can develop a parallel theory for hypersurfaces $\Sigma^{2n} \subset M$ generalizing the study of surfaces in contact $3$-manifolds. This is carried out in \cite{HH18,HH19}, where a \emph{contact Morse theory} is established in the spirit of Eliashberg-Gromov \cite{EG91} and refining Giroux's work \cite{Gi91,Gi00} in dimension $3$ and \cite{Gi02} in higher dimensions. However, the corresponding relative theory for Legendrian submanifolds $\Lambda \subset M$ breaks down for dimensional reasons, i.e., $\Lambda$ cannot be the boundary of a hypersurface unless $\dim M=3$. As a piece of jargon, we say a compact submanifold $Y \subset M$ is a \emph{filling} of $\Lambda$ if $\Lambda=\partial Y$. All (sub-)manifolds in this paper will be assumed to be orientable.
One motivation for studying fillings of $\Lambda$ (and the associated contact germ) comes from the desire to understand isotopies of Legendrians. Suppose $\Lambda_t, t \in [0,1]$, is a Legendrian isotopy. Information about this isotopy is captured by the totality $Y \coloneqq \cup_{t \in [0,1]} \Lambda_t$, which we pretend is a submanifold although it often is not. Then $Y$ is a filling of $\Lambda_0 \cup \Lambda_1$ and it is foliated by Legendrians. The above consideration yields two important, but loosely formulated, observations: first, if one wants to know whether two Legendrians $\Lambda_0,\Lambda_1$ are Legendrian isotopic (given that they are topologically isotopic), then one can look for a (topologically trivial) filling of $\Lambda_0 \cup \Lambda_1$ and try to normalize the contact germ on the filling so that it looks like the totality of a Legendrian isotopy; second, a submanifold smoothly foliated by Legendrian leaves is a special case of the so-called \emph{coisotropic submanifolds} introduced in \cite{Hua15}. We recall the definition of coisotropic submanifolds and introduce the key object of this paper: \emph{coLegendrian submanifolds}, as follows.
\begin{definition} \label{defn:coLeg submfd}
A smoothly embedded submanifold $Y \subset (M,\xi)$ is \emph{coisotropic} if $T_x Y \cap \xi(x) \subset \xi(x)$ is a coisotropic subspace with respect to the canonical conformal symplectic structure for every $x \in Y$. A coisotropic $Y$ is a \emph{coLegendrian} if $\dim Y=n+1$, given $\dim M=2n+1$.
\end{definition}
CoLegendrians (under a different name) and the associated Legendrian foliations were studied in \cite{Hua15,Hua14} with little success for one reason: they are extremely difficult to find in a given contact manifold. Indeed, Gromov's $h$-principle techniques from \cite{Gro86} can, at best, be used to show the existence of \emph{immersed} coLegendrians and not the embedded ones. Nonetheless, under different disguises, coLegendrians have appeared in the literature many times, mostly for a completely different reason: to detect the tight--overtwisted dichotomy of contact structures due to Eliashberg \cite{Eli89} in dimension $3$ and Borman-Eliashberg-Murphy \cite{BEM15} in higher dimensions. See, for example, Niederkr\"uger \cite{Nie06}, Massot-Niederkr\"uger-Wendl \cite{MNW13} and \cite{HH18}.
It is the purpose of this paper to show the abundance of (embedded) coLegendrians with certain singularities, which will be explained later, in any contact $5$-manifold using contact Morse theory. In the rest of the introduction, we will sketch the main ideas of this paper, leaving technical details to subsequent sections.
Recall from \cite{HH19} that a hypersurface $\Sigma \subset (M,\xi)$ is \emph{Morse} if the characteristic foliation $\Sigma_{\xi}$ is a Morse vector field, i.e., it is gradient-like for some Morse function on $\Sigma$. Furthermore, we say $\Sigma$ is Morse$^+$ if, in addition, there exists no flow line from a negative critical point of $\Sigma_{\xi}$ to a positive one. \emph{In this paper, (Morse) singularities of characteristic foliations will always be called critical points even without explicit mention of the Morse function.} By analogy with the definition of regular Lagrangians in the theory of Weinstein manifolds by Eliashberg-Ganatra-Lazarev \cite{EGL20}, a Legendrian $\Lambda \subset \Sigma$ is said to be \emph{regular} if it is tangent to $\Sigma_{\xi}$. Similarly, a coLegendrian $Y \subset \Sigma$ is \emph{regular} if it is tangent to $\Sigma_{\xi}$. The key point here is that regular (co)Legendrians inherit a Morse function, or equivalently a handle decomposition, from $\Sigma$.
Let's assume $\dim M=5$ for the rest of the introduction, although many of our discussions can be easily generalized to any dimension. We now describe the singularities of a regular coLegendrian $Y$ that allow it to exist in abundance. Suppose $Y$ passes through an index $0$ critical point $p_0 \in \Sigma$. A neighborhood of $p_0$ in $\Sigma$ can be identified with the standard symplectic $(B^4,\omega_{\std})$ such that $p_0=0$ and the characteristic foliation is identified with the Liouville vector field $X=\tfrac{1}{2}r\partial_r$. The \emph{link} of $p_0$ is the standard contact $3$-sphere $\partial B^4$. Then $\link(Y,p_0) \coloneqq Y \cap \partial B^4$ is a (smooth) $2$-sphere in $(S^3,\xi_{\std})$. We say $p_0$ is a \emph{cone singularity} of $Y$. Note that, topologically speaking, such a cone singularity can always be smoothed. However, geometrically speaking, with respect to the Euclidean metric on $B^4$, the cone singularity is smooth precisely when $\link(Y,p_0)$ is a great $S^2$ in the round $S^3$. Contact topology stands in between topology and geometry, and so does the cone singularity. Namely, an $S^2 \subset (S^3,\xi_{\std})$ is said to be \emph{standard} if its characteristic foliation has exactly one source and one sink and every flow line goes from the source to the sink. For example, identify $B^4 \subset \mathbb{C}^2$ with the unit ball such that the contact structure on $\partial B^4$ is given by the tangential complex lines. Then one can check that every great $S^2 \subset \partial B^4$ is standard.
We say $p_0$ is \emph{smooth(able)} if $\link(Y,p_0)$ is standard. If $p_0$ is smoothable, then indeed it can be smoothed, while staying regular, by a small perturbation of $\Sigma$. Similarly, cone singularities can be defined for index $4$ critical points $p_4 \in \Sigma$ by simply reversing the direction of the characteristic foliation.
With the above preparation, we can state the following existence theorem of regular coLegendrians.
\begin{theorem} \label{thm:coLeg approx}
Suppose $(M,\xi)$ is contact $5$-manifold. Then any closed $3$-submanifold $Y$ with trivial normal bundle can be $C^0$-approximated by a regular coLegendrian with isolated cone singularities. If $Y$ is compact with (smooth) Legendrian boundary, then the same approximation holds such that $\partial Y$ is a regular Legendrian. In particular, the cone singularities stay away from $\partial Y$.
\end{theorem}
\begin{remark}
A technical, but important, point in our approach to (co)Legendrian submanifolds in this paper is that all submanifolds are \emph{unparameterized}, i.e., they are subsets of the ambient manifold rather than (smooth) embeddings. This is to be compared with \cite{Mur12}, where all Legendrians (singular or smooth) are parameterized.
\end{remark}
Note that the triviality of $T_Y M$ is necessary since the definition of regularity requires $Y$ to be contained in a hypersurface (here we use $\dim M=5$). In fact, \autoref{thm:coLeg approx} follows readily from \cite[Section 12]{HH19} without the conditions on the singularities of $Y$, and the main contribution of this paper is to eliminate all the singularities except possibly the cones. The proof of \autoref{thm:coLeg approx} will be carried out in \autoref{sec:regular coLeg}, after we set up the background theory of regular Legendrians in \autoref{sec:regular Leg}.
Let's point out that \autoref{thm:coLeg approx} by itself is rather useless in improving our understanding of either contact structures or Legendrians. Instead, it is through the proof of the theorem that we understand how regular coLegendrians are built out of handles and how such handles can be manipulated. However, since our main focus of this paper is just to establish the existence of coLegendrians, we postpone a more thorough study of coLegendrian handle decomposition to a forthcoming work where the theory of coLegendrians will be applied to study isotopies of Legendrians.
As an application of our study of coLegendrians, we introduce in \autoref{sec:orange II} the second generation of \emph{overtwisted orange} (cf. \cite{HH18}), which we call \emph{orange II} and denote by $\mathcal{O}_2$. Although we construct $\mathcal{O}_2$ as a regular coLegendrian, it is the Legendrian foliation $\mathcal{F}_{\mathcal{O}_2}$ which determines the (overtwisted) contact germ. Hence, in fact, one can define $\mathcal{O}_2$ without any reference to regular coLegendrians, and \autoref{sec:orange II} can be understood independently of the rest of the paper. The second generation orange is considerably simpler than the first, and perhaps also than other known overtwisted objects. But the road leading to its discovery is rougher than one might think.
We wrap up the introduction with an explanation of our choice in this paper of giving up the term ``convex hypersurface theory'' used in \cite{HH19}.
\begin{remark}
In the foundational papers \cite{Gi91,Gi00}, Giroux introduced the notion of \emph{convex hypersurfaces} into contact topology, which has since been used ubiquitously, especially in the study of contact $3$-manifolds, and also in \cite{HH18,HH19}. A hypersurface is \emph{convex} in the sense of Giroux if it is transverse to a contact vector field. A key feature of convex surfaces in dimension $3$ is that the so-called \emph{dividing set} completely determines the contact germ on the surface, reducing $3$-dimensional contact topology essentially to a combinatorial problem. Such a useful feature is, unfortunately, not available in higher dimensions. Indeed, the main contribution of \cite{HH19} is to show that $\Sigma_{\xi}$ can be made Morse by a $C^0$-small perturbation of $\Sigma$, rather than showing $\Sigma$ can be made convex, even though Morse$^+$ hypersurfaces turn out to be convex. As we will see in this paper, many hypersurfaces that are important in our study of contact structures are \emph{not} convex, but only Morse, even in dimension $3$. Due to this change of perspective, we will drop Giroux's convexity from our terminology and rely instead on Morse-theoretic notions. However, certain convenient terminologies, such as the dividing set, will be retained.
\end{remark}
\section{Basics on contact Morse theory} \label{sec:contact Morse theory}
In this section, we recall the contact Morse theory on hypersurfaces established in \cite{HH19}, which is also the starting point of this paper.
Let $(M,\xi)$ be a contact manifold of dimension $2n+1$. Following \cite{HH19}, a hypersurface $\Sigma \subset M$ is \emph{Morse} if the characteristic foliation $\Sigma_{\xi}$, viewed as a vector field, is Morse, i.e., there exists a Morse function $f: \Sigma \to \mathbb{R}$ such that $\Sigma_{\xi}$ is gradient-like with respect to $f$. A Morse hypersurface $\Sigma$ is Morse$^+$ if, in addition, there exist no flow lines of $\Sigma_{\xi}$ from a negative critical point to a positive one. Suppose $\Sigma$ is Morse$^+$. Define the \emph{dividing set} $\Gamma \subset \Sigma$ to be the boundary of the handlebody built by the positive handles. Clearly $\Gamma$ is well-defined up to isotopy. Moreover, $\Sigma \setminus \Gamma$ is naturally the disjoint union of two Weinstein manifolds. Following \cite{Gi91}, write $\Sigma \setminus \Gamma=R_+ \cup R_-$, where $R_{\pm}$ are the Weinstein manifolds built by the positive and negative handles, respectively.
\begin{remark}
In this paper, we will not in general assume that $\Sigma$ is closed, and \emph{no} boundary condition will be imposed in general, e.g., $\partial \Sigma$ needs not to be transverse or tangent to $\Sigma_{\xi}$. Of course, in this case, one cannot decompose $\Sigma$ into Weinstein manifolds as in the closed case.
\end{remark}
The following result from \cite{HH19} shows that the assumption of $\Sigma_{\xi}$ being Morse is rather mild.
\begin{theorem} \label{thm:morse is generic}
Any hypersurface can be $C^0$-approximated by a Morse hypersurface, which can be further $C^{\infty}$-perturbed to become Morse$^+$.
\end{theorem}
Indeed, \autoref{thm:morse is generic} can be extended to a relative version. Namely, if there exists a closed subset $K \subset \Sigma$ such that $\Sigma_{\xi}$ is already Morse on an open neighborhood of $K$, then $\Sigma$ can be $C^0$-approximated by a Morse hypersurface relative to $K$.
\section{Regular Legendrians} \label{sec:regular Leg}
In this section, we apply the contact Morse theory introduced in \autoref{sec:contact Morse theory} to Legendrian submanifolds. We work with arbitrary dimensions in this section since there is nothing special about the theory of regular Legendrians in dimension $5$, in contrast to the theory of coLegendrians to be discussed in \autoref{sec:regular coLeg}.
Inspired by the notion of \emph{regular Lagrangians} in Weinstein manifolds introduced by Eliashberg-Ganatra-Lazarev \cite{EGL20}, we define \emph{regular Legendrians} as follows.
\begin{definition} \label{defn:regular legendrian}
A Legendrian $\Lambda \subset \Sigma \subset (M,\xi)$ is \emph{regular} with respect to a Morse hypersurface $\Sigma$ if it is tangent to $\Sigma_{\xi}$, and nondegenerate critical points of $\Sigma_{\xi}$ on $\Lambda$ restrict to nondegenerate critical points of $\Sigma_{\xi}|_{\Lambda}$ in $\Lambda$.
\end{definition}
It is a difficult problem (cf. \cite[Problem 2.5]{EGL20}) in symplectic topology to find non-regular Lagrangians in a Weinstein manifold. It turns out that the contact topological counterpart is much more flexible. Moreover, observe that the very definition of regular Legendrians depends on a choice of the hypersurface $\Sigma \supset \Lambda$. We will familiarize ourselves with regular Legendrians through the following examples.
\begin{exam} \label{ex:collar nbhd}
Any Legendrian $\Lambda$ is regular with respect to the hypersurface $\Sigma \coloneqq T^{\ast} \Lambda \subset (M,\xi)$ such that $\Lambda \subset T^{\ast} \Lambda$ is the 0-section. Indeed, by the Legendrian neighborhood theorem, there exists a tubular neighborhood $(U(\Lambda), \xi|_{U(\Lambda)})$ of $\Lambda$ which is contactomorphic to a tubular neighborhood of the 0-section in $J^1(\Lambda) = \mathbb{R}_z \times T^{\ast} \Lambda$, equipped with the standard contact structure. It remains to Morsify the canonical Liouville form $pdq$ on $T^{\ast} \Lambda$ (cf. \cite[Example 11.12]{CE12}) so that $\Sigma$ is Morse and $\Lambda$ is tangent to $\Sigma_{\xi}$.
\qed
\end{exam}
\begin{exam} \label{ex:funny unknot}
This example is the reformulation of a result of Courte-Ekholm in \cite{CE18}. Let $\Sigma \coloneqq S^{2n} \subset (\mathbb{R}^{2n+1}, \xi_{\std})$ be the unit sphere, which is clearly Morse$^+$. Moreover, in the decomposition $\Sigma \setminus \Gamma = R_+ \cup R_-$, the dividing set $\Gamma$ is contactomorphic to $(S^{2n-1}, \eta_{\std})$, and $R_{\pm}$ are both symplectomorphic to the standard symplectic vector space $(\mathbb{R}^{2n}, \omega_{\std})$.
Let $(B^{2n}, \omega_{\std})$ be the standard symplectic filling of $(S^{2n-1}, \eta_{\std})$. Suppose $\Lambda_0$ is a Legendrian sphere in $S^{2n-1}$ which bounds a regular Lagrangian disk $D \subset B^{2n}$. Identify $\Gamma=S^{2n-1}$ and take two copies $D_{\pm} \subset R_{\pm}$ of $D$, modulo obvious completions, respectively.
Define the Legendrian sphere $\Lambda \coloneqq D_+ \cup_{\Lambda_0} D_- \subset \Sigma$. Then $\Lambda$ is regular by construction. Moreover, it follows from \cite{CE18} that $\Lambda$ is in fact Legendrian isotopic to the standard Legendrian unknot, regardless of the choices of $D$ and $\Lambda_0$.
\qed
\end{exam}
\begin{exam} \label{ex:knot in 3d}
We restrict our attention to contact 3-manifolds in this example. Suppose $\Sigma \subset (M^3,\xi)$ is a Morse$^+$ surface, and $\Lambda \subset \Sigma$ is a regular Legendrian loop. Then $\Lambda$ is transverse to the dividing set $\Gamma$. Let $\abs{\Lambda \cap \Gamma}$ be the (honest) count of intersection points, which turns out to be always even.
Recall that given a framing $\sigma$ on $\Lambda$, i.e., a trivialization of the normal bundle $T_{\Lambda} M$, the \emph{Thurston-Bennequin invariant} $\tb_{\sigma} (\Lambda) \in \mathbb{Z}$ measures the total rotation of $\xi$ along $\Lambda$. See e.g. \cite{Et05} for more details. Now note that $\Sigma$ uniquely specifies a framing on $\Lambda$, with respect to which we have
\begin{equation} \label{eqn:tb computation}
\tb_{\Sigma} (\Lambda) = - \tfrac{1}{2}\abs{\Lambda \cap \Gamma}.
\end{equation}
Suppose $(M,\xi)=(\mathbb{R}^3,\xi_{\std})$. Then one can take $\Sigma$ to be any Seifert surface of $\Lambda$, and $\tb(\Lambda) = \tb_{\Sigma} (\Lambda)$ is independent of the choice of $\Sigma$. It follows from \autoref{eqn:tb computation} that $\Sigma$ can be made Morse$^+$ only if $\tb(\Lambda) \leq 0$. The other classical invariant: the \emph{rotation number} $r(\Lambda)$ (cf. \cite{Et05} for the definition) can also be computed in this setup by
\begin{equation} \label{eqn:rotation number}
r(\Lambda) = \chi(R_+) - \chi(R_-),
\end{equation}
where $\Sigma \setminus \Gamma = R_+ \cup R_-$ is the usual decomposition. Note, however, that $r(\Lambda)$ depends on an orientation of $\Lambda$, which by convention is the induced orientation from an orientation on $\Sigma$.
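As a consistency check of the two formulas above (a standard computation, not taken from the text): for the standard Legendrian unknot in $(\mathbb{R}^3,\xi_{\std})$, the Seifert surface is a disk whose dividing set is a single arc, so $\Lambda$ meets $\Gamma$ in two points and $R_\pm$ are half-disks, each with Euler characteristic $1$:

```latex
% Standard Legendrian unknot: |Lambda cap Gamma| = 2 and chi(R_+) = chi(R_-) = 1,
% recovering the classical invariants tb = -1 and r = 0.
\tb(\Lambda) = -\tfrac{1}{2}\abs{\Lambda \cap \Gamma} = -\tfrac{1}{2}\cdot 2 = -1,
\qquad
r(\Lambda) = \chi(R_+) - \chi(R_-) = 1 - 1 = 0.
```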
In the proof of \autoref{prop:regular Leg}, we will generalize \autoref{eqn:tb computation} to Morse surfaces and to higher dimensions (cf. \autoref{eqn:compute 1-framing}). The higher-dimensional counterpart of \autoref{eqn:rotation number} is \autoref{lem:Leg self intersection number}.
\qed
\end{exam}
We highlight a few special features about (regular) Legendrian knots from \autoref{ex:knot in 3d} which, as we will see, are in sharp contrast to the higher-dimensional case. Firstly, a Legendrian $\Lambda$ can be realized as a regular Legendrian in a Morse$^+$ surface $\Sigma$ only if $\tb_{\Sigma}(\Lambda) \leq 0$. Secondly, if $\Sigma$ is a Seifert surface, then the intersection $\Lambda \cap \Gamma$, as a finite set of points, is an invariant of $\Lambda$, i.e., the Thurston-Bennequin invariant. This is not true in higher dimensions, i.e., the topology of $\Lambda \cap \Gamma$ is \emph{not} an invariant of $\Lambda$.
Now we turn to the problem of realizing any Legendrian as a regular Legendrian in a given hypersurface. Let $\Lambda \subset (M,\xi)$ be an $n$-dimensional closed Legendrian. As usual, identify a tubular neighborhood of $\Lambda$ with a tubular neighborhood of the $0$-section in $J^1(\Lambda) = \mathbb{R}_z \times T^{\ast} \Lambda$. Fix a Riemannian metric $g$ on $M$. Define the \emph{normal sphere bundle}
\begin{equation*}
S_{\Lambda} M \coloneqq \{ v \in T_{\Lambda} M ~|~ \norm{v}_g=1 \}.
\end{equation*}
For a suitable choice of $g$, we can assume $S_{\Lambda} M \subset J^1(\Lambda)$ and $\partial_z \in S_{\Lambda} M$.
\begin{definition} \label{defn:1-framing}
A \emph{1-framing} of $\Lambda$ is a section of $S_{\Lambda} M \to \Lambda$. The 1-framing defined by $\partial_z$ is called the \emph{canonical 1-framing} of $\Lambda$.
\end{definition}
Up to homotopy, the canonical 1-framing is specified by any unit vector field along $\Lambda$ which is positively transverse to $\xi$. The set of 1-framings of a given Legendrian is described by the following lemma.
\begin{lemma} \label{lem:set of 1-framings}
The homotopy classes of $1$-framings of $\Lambda$ can be canonically identified with $\mathbb{Z}$ such that the canonical $1$-framing is identified with zero.
\end{lemma}
\begin{proof}
This is a standard consequence of the Pontryagin-Thom construction (cf. \cite[Chapter 7]{Mil65}). I learned the following trick from Patrick Massot around 2012. Given two sections $\sigma_i: \Lambda \to S_{\Lambda} M,i =0,1$, we consider the following set
\begin{equation*}
L \coloneqq \{ x \in \Lambda ~|~ \sigma_0(x) = -\sigma_1(x) \}.
\end{equation*}
Generically $L$ is an oriented 0-dimensional compact submanifold of $\Lambda$. The signed count of points in $L$ defines the difference between $\sigma_i, i=0,1$. Conversely, given a 1-framing $\sigma: \Lambda \to S_{\Lambda} M$ and an integer $k \in \mathbb{Z}$, one can construct a 1-framing $\sigma+k$ by modifying $\sigma$ in a neighborhood of a point.
\end{proof}
\begin{remark}
In dimension $3$, the $1$-framings of Legendrian knots in the sense of \autoref{defn:1-framing} coincide with the usual notion of framings of knots, and the canonical $1$-framing corresponds to the so-called \emph{contact framing}.
\end{remark}
Taking a dual point of view, up to homotopy, any $1$-framing of $\Lambda$ determines locally a hypersurface $\Sigma \supset \Lambda$, and \textit{vice versa}. Using this terminology, we can rephrase \autoref{ex:collar nbhd} as follows: any Legendrian is contained in a hypersurface corresponding to the canonical $1$-framing as a regular Legendrian. As we will explain now, the same holds for any choice of $1$-framing.
\begin{prop} \label{prop:regular Leg}
Given a closed Legendrian $\Lambda \subset (M,\xi)$, any hypersurface $\Sigma$ containing $\Lambda$ can be perturbed by a $C^0$-small isotopy relative to $\Lambda$ to a new hypersurface $\Sigma'$ such that $\Sigma'_{\xi}$ is Morse, and $\Lambda$ is regular with respect to $\Sigma'_{\xi}$. Moreover, $\Sigma'$ can be made Morse$^+$ if $\dim M \geq 5$.
\end{prop}
\begin{proof}
If $\dim M=3$, then given any $\sigma \in \mathbb{Z}$, one can construct by hand an annulus $\Sigma \supset \Lambda$ corresponding to the framing $\sigma$, with respect to which $\Lambda$ is regular. However, $\Sigma$ can be made Morse$^+$ only when $\tb_{\sigma}(\Lambda) \leq 0$. Assume $\dim M=2n+1 \geq 5$ for the rest of the proof.
By the local nature of the Proposition, we can assume w.l.o.g. that as a smooth manifold $M = J^1(\Lambda)$, $\Lambda \subset M$ is the $0$-section, and $D^n \to \Sigma \to \Lambda$ is a (not necessarily trivial) disk bundle. The idea is to construct a contact structure $\xi_{\Lambda}$ on $M$ such that $\Lambda$ is Legendrian with respect to $\xi_{\Lambda}$, and $\Sigma$ satisfies all the properties of the Proposition. Then we argue that $\xi_{\Lambda}$ is isomorphic to $\xi$ by the Legendrian neighborhood theorem. For clarity, the proof is divided into three steps.
\vskip.1in\noindent
\textsc{Step 1.} \emph{Construct a Morse vector field on $M$.}
\vskip.1in
Let $v$ be a Morse vector field on $\Lambda$ with critical points $\mathbf{x} \coloneqq \{x_1,\dots,x_m\}$. Define a partial order on the set $\mathbf{x}$ by requiring $x_i \prec x_j$ if and only if there exists a flow line of $v$ from $x_i$ to $x_j$. Assume w.l.o.g. that if $x_i \prec x_j$, then $i<j$. Let $k_i$ be the Morse index of $x_i$. Clearly $k_1=0$ and $k_m=n$, but $k_i$ is not necessarily smaller than $k_j$ when $i<j$. Note that $v$ naturally induces a handle decomposition of $\Lambda$ such that each $x_i$ corresponds to a $k_i$-handle.
Extend $v$ to a Morse vector field $\bar{v}$ on $\Sigma$ such that the critical points of $v$ and $\bar{v}$ coincide. Such an extension is by no means unique. For purely notational purposes, let us write $\bar{\mathbf{x}} \coloneqq \{ \bar{x}_1,\dots,\bar{x}_m \}$ for the set of critical points of $\bar{v}$ such that $\bar{x}_i=x_i$ for all $1 \leq i \leq m$. Let $\bar{k}_i$ be the Morse index of $\bar{x}_i$. Then clearly $k_i \leq \bar{k}_i$ for all $i$. We assign a sign to each element of $\bar{\mathbf{x}}$ such that $\bar{x}_i$ is positive if $\bar{k}_i \leq n$ and negative if $\bar{k}_i \geq n$. Such a sign assignment may not be unique, i.e., critical points of index $n$ can be either positive or negative.
\vskip.1in\noindent
\textsc{Step 2.} \emph{Construct the contact form $\alpha_{\Lambda}$.}
\vskip.1in
So far the discussion has been purely topological. Now we describe how to construct the desired contact structure on $M$ using $\bar{v}$. Around any point $x \in \Sigma$, we can choose local coordinates $(z,\mathbf{p},\mathbf{q}) \in \mathcal{O}p(x) \subset M$, where $\mathbf{p} = (p_1,\dots,p_n), \mathbf{q}=(q_1,\dots,q_n)$, such that $\Sigma \cap \mathcal{O}p(x) = \{z=0\}$ and $\Lambda \cap \mathcal{O}p(x) = \{ z=\mathbf{p}=0 \}$. Throughout this paper, $\mathcal{O}p$ denotes an unspecified small open neighborhood.
We construct a contact form $\alpha_{\Lambda}$ on $M$ as follows. First, we construct $\alpha_{\Lambda}$ near each $\bar{x}_i, 1 \leq i \leq m$. If $\bar{x}_i$ is positive, then $\bar{k}_i \leq n$ and we define
\begin{equation*}
\alpha_{\Lambda}|_{\mathcal{O}p(\bar{x}_i)} = dz - q_1dp_1 - \dots - q_{\bar{k}_i}dp_{\bar{k}_i} + \dots + q_ndp_n - 2\mathbf{p} \cdot d\mathbf{q},
\end{equation*}
where $\mathbf{p} \cdot d\mathbf{q} \coloneqq p_1dq_1 + \dots + p_ndq_n$. If $\bar{x}_i$ is negative, then $\bar{k}_i \geq n$ and $k_i \geq \bar{k}'_i$ where $\bar{k}'_i \coloneqq \bar{k}_i-n$. Define
\begin{equation*}
\alpha_{\Lambda}|_{\mathcal{O}p(\bar{x}_i)} = -dz - 2\mathbf{q} \cdot d\mathbf{p} - p_1dq_1 - \dots - p_{\bar{k}'_i}dq_{\bar{k}'_i} + \dots + p_ndq_n.
\end{equation*}
Moreover, in both cases, the coordinates are chosen so that $\Lambda \cap \mathcal{O}p(\bar{x}_i)$ is contained in $\{ z=q_{k_i+1}=\dots=q_n=p_1=\dots=p_{k_i}=0 \}$.
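As a sanity check (our addition, not part of the construction), one can verify symbolically that the positive local model is indeed a contact form in the lowest-dimensional case $n=2$, $\bar{k}_i=1$, using the standard criterion that $\alpha$ is contact if and only if $d(e^t\alpha)$ is symplectic on $\mathbb{R}_t \times \mathcal{O}p(\bar{x}_i)$, i.e., the bordered antisymmetric matrix with blocks $0$, $\alpha$, $-\alpha^T$, $d\alpha$ is nondegenerate. A sketch using sympy:

```python
import sympy as sp

z, p1, p2, q1, q2 = sp.symbols('z p1 p2 q1 q2')
coords = [z, p1, p2, q1, q2]
# Positive model with n = 2 and kbar_i = 1:
# alpha = dz - q1 dp1 + q2 dp2 - 2(p1 dq1 + p2 dq2), basis (dz, dp1, dp2, dq1, dq2)
alpha = [sp.Integer(1), -q1, q2, -2*p1, -2*p2]

m = len(coords)
A = sp.zeros(m, m)  # matrix of d(alpha): A[i, j] = d_i alpha_j - d_j alpha_i
for i in range(m):
    for j in range(m):
        A[i, j] = sp.diff(alpha[j], coords[i]) - sp.diff(alpha[i], coords[j])

# alpha is contact iff d(e^t alpha) = e^t (dt ^ alpha + d alpha) is symplectic,
# i.e. the bordered antisymmetric matrix [[0, alpha], [-alpha^T, A]] is nondegenerate
B = sp.zeros(m + 1, m + 1)
for i in range(m):
    B[0, i + 1] = alpha[i]
    B[i + 1, 0] = -alpha[i]
    for j in range(m):
        B[i + 1, j + 1] = A[i, j]

det = sp.simplify(B.det())
assert det.free_symbols == set() and det != 0  # nonzero constant: contact everywhere
```

The determinant turns out to be a nonzero constant, so the model is contact on the entire chart; the negative model can be checked in exactly the same way.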
Following the arguments in \cite[Proposition 2.2.3]{HH19} and \cite[\S 9.2]{HH19}, one can extend $\alpha_{\Lambda}$ to $M$ such that $\Lambda$ is Legendrian with respect to $\xi_{\Lambda} \coloneqq \ker\alpha_{\Lambda}$, and the characteristic foliation $\Sigma_{\xi_{\Lambda}} = \bar{v}$. In other words, $\Lambda \subset \Sigma$ is a regular Legendrian with respect to $\xi_{\Lambda}$. By the Legendrian neighborhood theorem, shrinking the neighborhood of $\Lambda$ if necessary, we can assume $\xi$ is contactomorphic to $\xi_{\Lambda}$.
\vskip.1in\noindent
\textsc{Step 3.} \emph{Compute the $1$-framing.}
\vskip.1in
To complete the proof of the Proposition, it remains to compute the 1-framing $\sigma_{\Sigma}(\Lambda)$ of $\Lambda$ induced by $\Sigma$, which, in turn, is determined by $v$ together with the sign assignment. Using the same trick as in the proof of \autoref{lem:set of 1-framings}, we compute that
\begin{equation} \label{eqn:compute 1-framing}
\sigma_{\Sigma}(\Lambda) = \sum_{x_i \text{ negative}} (-1)^{k_i}
\end{equation}
where the sum is taken over all the negative singularities $x_i \in \Lambda$. In particular, $\sigma_{\Sigma}(\Lambda)$ is independent of the extension $\bar{v}$.
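To illustrate \autoref{eqn:compute 1-framing} concretely (our addition; the torus indices and sign patterns below are illustrative choices, subject to the constraints of the construction), one can evaluate the right-hand side for sample sign assignments:

```python
# Morse indices of the standard Morse function on the 2-torus (illustrative choice)
indices = [0, 1, 1, 2]

def framing(signs, ks):
    """Evaluate sigma_Sigma(Lambda) = sum over negative critical points of (-1)^{k_i}."""
    return sum((-1)**k for s, k in zip(signs, ks) if s == '-')

assert framing(['+', '+', '+', '+'], indices) == 0   # all positive: canonical 1-framing
assert framing(['-', '-', '-', '-'], indices) == 0   # all negative: chi(T^2) = 0
assert framing(['+', '-', '+', '+'], indices) == -1  # one negative index-1 point
```

In particular, making a single index-$1$ critical point negative already shifts the $1$-framing by $-1$.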
Assume $\bar{v}$ is Morse$^+$. Then there exists a (possibly disconnected) codimension-$1$ submanifold $\Gamma_{\Lambda} \subset \Lambda$, called the \emph{Legendrian divide}, which satisfies the following conditions:
\begin{itemize}
\item[(LD1)] $\Gamma_{\Lambda}$ is everywhere transverse to $v$;
\item[(LD2)] There exists a decomposition $\Lambda \setminus \Gamma_{\Lambda} = R_+(\Lambda) \cup R_-(\Lambda)$ such that the positive (resp. negative) critical points of $v$ are contained in $R_+(\Lambda)$ (resp. $R_-(\Lambda)$);
\item[(LD3)] Near each component of $\Gamma_{\Lambda}$, $v$ flows from $R_+(\Lambda)$ to $R_-(\Lambda)$.
\end{itemize}
In this case, the $1$-framing induced by $\Sigma$ can be computed by
\begin{equation*}
\sigma_{\Sigma}(\Lambda) = (-1)^n \chi(R_-(\Lambda))
\end{equation*}
In particular, we recover \autoref{eqn:tb computation} by setting $n=1$ and observing that $R_-(\Lambda)$ is a disjoint union of intervals.
Since $\dim \Lambda \geq 2$ by assumption, one can choose $v$, $\bar{v}$, and the sign assignment such that $\Sigma$ is Morse$^+$ and $\chi(R_-(\Lambda))$ takes any prescribed integer value.
\end{proof}
The following is an easy computation.
\begin{lemma} \label{lem:Leg self intersection number}
Given a regular Legendrian $\Lambda$ in a Morse$^+$ hypersurface $\Sigma$, the self-intersection number of $\Lambda$ in $\Sigma$ is $\chi(R_+(\Lambda)) - \chi(R_-(\Lambda))$.
\end{lemma}
In view of \autoref{lem:Leg self intersection number}, we say a regular Legendrian $\Lambda$ is \emph{balanced} if $\chi(R_+(\Lambda)) = \chi(R_-(\Lambda))$, i.e., if its self-intersection number in $\Sigma$ vanishes.
\section{Regular coLegendrians in dimension $5$} \label{sec:regular coLeg}
In this section, we study the Morse-theoretic structures of regular coLegendrians $Y \subset (M,\xi)$ introduced in \autoref{defn:coLeg submfd}. If $Y$ is a smooth submanifold, then it follows from \cite{Hua15} that $Y$ is naturally equipped with a (singular) Legendrian foliation $\mathcal{F} \coloneqq \ker\alpha|_Y$, where $\alpha$ is a contact form. Conversely, the Legendrian foliation determines the germ of the contact structure near $Y$. However, smooth coLegendrians are not only difficult to find in general but also not sufficient for studying Legendrian isotopies. It turns out that the appropriate class of coLegendrians to study in this context contains certain ``cone-type'' singularities, which we will explain in detail in this section.
\emph{For the rest of this section, we will assume $\dim M=5$ and $\dim Y=3$.} One major advantage in this dimension is the following obvious fact, which fails in higher dimensions.
\begin{lemma} \label{lem:coiso tangent to char fol}
A smooth $3$-submanifold $Y \subset (M,\xi)$ contained in a hypersurface $\Sigma$ is coLegendrian if it is tangent to $\Sigma_{\xi}$.
\end{lemma}
\begin{proof}
For any $x \in Y$, either $T_x Y \subset \xi_x$ or $T_x Y \cap \xi_x \subset \xi_x$ is $2$-dimensional. In the former case $Y$ is clearly coisotropic at $x$. In the latter case $T_x Y \cap \xi_x \subset \xi_x$ is a Lagrangian subspace since $\Sigma_{\xi}(x) \in T_x Y \cap \xi_x$ by assumption.
\end{proof}
\begin{remark}
Since we will discuss submanifolds $Y$ which are not everywhere smooth, we say $Y$ is tangent to a vector field $v$ if for any $x \in Y$, the flow line passing through $x$ is completely contained in $Y$. Under this convention, \autoref{lem:coiso tangent to char fol} can be generalized to non-smooth $Y$ and asserts that $Y$ is coLegendrian on the smooth part.
\end{remark}
The definition of \emph{regular coLegendrians} is completely parallel to the definition of regular Legendrians in \autoref{defn:regular legendrian}. Namely, with respect to a Morse hypersurface $\Sigma$ containing $Y$, we say $Y$ is \emph{regular} if it is tangent to $\Sigma_{\xi}$ and the critical points of the restriction $\Sigma_{\xi}|_Y$ are nondegenerate. Note that the normal bundle of a regular coLegendrian is necessarily trivial since it is contained in a hypersurface by definition.
The section is organized as follows. In \autoref{subsec:coLeg handles}, we study models of coLegendrian handles which can be used to build any regular coLegendrian. In \autoref{subsec:coLeg existence}, we establish the existence of coLegendrians in the closed case. The case of coLegendrians with Legendrian boundary is then dealt with in \autoref{subsec:coLeg with bdry}.
\subsection{CoLegendrian handles} \label{subsec:coLeg handles}
Suppose $Y \subset \Sigma$ is a regular coLegendrian. It turns out that the (Morse) vector field $\Sigma_{\xi}|_Y$ itself is insufficient to determine the contact germ near $Y$. Indeed, it is the (singular) Legendrian foliation $\mathcal{F}$ on $Y$ that determines the contact germ by \cite{Hua15}. The goal of this subsection is to work out local models of $\mathcal{F}$ in the handles given by $\Sigma_{\xi}|_Y$.
\vskip.1in\noindent
\textsc{Notation:}
\textit{Suppose $p \in \Sigma$ is a critical point of $\Sigma_{\xi}$. The Morse index of $p$ is called the \emph{$\Sigma$-index}. If, in addition, $p \in Y$, then the Morse index of $\Sigma_{\xi}|_Y$ at $p$ is called the \emph{$Y$-index}. This terminology extends to other regular submanifolds, e.g., Legendrians, in $\Sigma$ in the obvious way.}
\vskip.1in
In the following, we will study \emph{coLegendrian handles}, i.e., the handles in $Y$ determined by $\Sigma_{\xi}|_Y$, and the associated Legendrian foliations $\mathcal{F}$ in detail.
\vskip.1in
\subsubsection{CoLegendrian handle $H_0$ of $Y$-index $0$} \label{subsubsec:H_0}
Let $p_0 \in H_0$ be the critical point. Then the $\Sigma$-index $\ind_{\Sigma} (p_0)=0$ or $1$. Denote the handle in $\Sigma$ corresponding to $p_0$ by $\widetilde{H}_0$, which is either a $0$-handle or a $1$-handle. Moreover, write $\partial \widetilde{H}_0 = \partial_+ \widetilde{H}_0 \cup \partial_- \widetilde{H}_0$, where $\Sigma_{\xi}$ is inward-pointing along $\partial_- \widetilde{H}_0$ and outward-pointing along $\partial_+ \widetilde{H}_0$. Similarly, one can write $\partial H_0 = \partial_+ H_0 \cup \partial_- H_0$ such that $\partial_{\pm} H_0 \subset \partial_{\pm} \widetilde{H}_0$, respectively, although $\partial_- H_0 = \varnothing$ in this case. Note that $\partial_{\pm} \widetilde{H}_0$ are naturally contact $3$-manifolds and $\partial_+ H_0 \subset \partial_+ \widetilde{H}_0$ is a $2$-sphere.
Observe that $p_0$ is necessarily positive. In what follows, we will always identify the characteristic foliation with the Liouville vector field for positive critical points, and the negative Liouville vector field for negative critical points.
\vskip.1in\noindent
\textsc{Case 1.} $\ind_{\Sigma} (p_0)=1$.
\vskip.1in
Identify $\widetilde{H}_0 \cong B^1 \times B^3$ such that the Liouville vector field can be written as
\begin{equation} \label{eqn:index 1 vf}
X_1 \coloneqq -x_1\partial_{x_1} + 2y_1\partial_{y_1} + \tfrac{1}{2} (x_2\partial_{x_2}+y_2\partial_{y_2}),
\end{equation}
where $x_1 \in B^1$ and $(y_1,x_2,y_2) \in B^3$, and the Liouville form on $\widetilde{H}_0$, i.e., the restricted contact form, is
\begin{equation} \label{eqn:index 1 Liouville form}
\lambda_1 \coloneqq \alpha|_{\widetilde{H}_0} = -x_1 dy_1-2y_1 dx_1 + \tfrac{1}{2} (x_2 dy_2 - y_2 dx_2).
\end{equation}
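As a quick symbolic sanity check (our addition, not part of the argument), one can confirm that $X_1$ is Liouville for $d\lambda_1$, i.e., $\iota_{X_1} d\lambda_1 = \lambda_1$. A sketch using sympy, where the helper \texttt{is\_liouville} is our own illustrative name:

```python
import sympy as sp

x = sp.symbols('x1 y1 x2 y2')  # coordinates, with x1 in B^1 and (y1, x2, y2) in B^3

def is_liouville(lam, X):
    """Check the Liouville condition iota_X d(lam) = lam, where lam and X are
    given by their coefficient tuples in the coordinates x."""
    n = len(x)
    # (d lam)_{ij} = d_i lam_j - d_j lam_i;  (iota_X d lam)_j = sum_i X_i (d lam)_{ij}
    dlam = [[sp.diff(lam[j], x[i]) - sp.diff(lam[i], x[j]) for j in range(n)]
            for i in range(n)]
    contraction = [sum(X[i] * dlam[i][j] for i in range(n)) for j in range(n)]
    return all(sp.simplify(contraction[j] - lam[j]) == 0 for j in range(n))

x1, y1, x2, y2 = x
half = sp.Rational(1, 2)
# lambda_1 = -x1 dy1 - 2 y1 dx1 + (1/2)(x2 dy2 - y2 dx2), basis (dx1, dy1, dx2, dy2)
lam1 = (-2*y1, -x1, -half*y2, half*x2)
# X_1 = -x1 d/dx1 + 2 y1 d/dy1 + (1/2)(x2 d/dx2 + y2 d/dy2)
X1 = (-x1, 2*y1, half*x2, half*y2)
assert is_liouville(lam1, X1)
```

Here $d\lambda_1 = dx_1\wedge dy_1 + dx_2\wedge dy_2$ is the standard symplectic form, so the check amounts to matching coefficients of the contraction against $\lambda_1$.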
Under this identification, $H_0 \cong \{0\} \times B^3$ is the unstable manifold of $p_0$, which is of course smooth. Moreover, the Legendrian foliation $\mathcal{F}_{H_0}$ on $H_0$ is defined by
\begin{equation} \label{eqn:Leg foliation on H0}
\mathcal{F}_{H_0} = \ker(\lambda_1|_{H_0}) = \ker(x_2 dy_2 - y_2 dx_2).
\end{equation}
It follows that the characteristic foliation on $\partial H_0$, which is nothing but $\mathcal{F}_{H_0} \cap \partial H_0$, is standard, i.e., there is one source and one sink and all flow lines travel from the source to the sink.
\begin{remark}
The particular choice of the Liouville vector field in \autoref{eqn:index 1 vf} (and the Liouville form) is somewhat arbitrary. Two different choices of such Liouville forms differ by an exact $1$-form, and we say the different choices are \emph{deformation equivalent}. Note that deformation equivalence is strictly weaker than (symplectic) isotopy. This remark applies to all the particular choices of Liouville forms in subsequent models.
\end{remark}
\vskip.1in\noindent
\textsc{Case 2.} $\ind_{\Sigma} (p_0)=0$.
\vskip.1in
Identify $\widetilde{H}_0 \cong B^4$ such that the Liouville vector field can be written as
\begin{equation*}
X_0 \coloneqq \tfrac{1}{2} (x_1\partial_{x_1} + y_1\partial_{y_1} + x_2\partial_{x_2} + y_2\partial_{y_2}),
\end{equation*}
and the Liouville form
\begin{equation} \label{eqn:index 0 Liouville form}
\lambda_0 \coloneqq \alpha|_{\widetilde{H}_0} = \tfrac{1}{2} (x_1 dy_1 - y_1 dx_1 + x_2 dy_2 - y_2 dx_2).
\end{equation}
Observe that $(\partial \widetilde{H}_0, \lambda_0|_{\partial \widetilde{H}_0}) \cong (S^3,\xi_{\std})$ and $\partial H_0=\partial_+ H_0 \subset \partial \widetilde{H}_0$ can be identified with a $2$-sphere in the standard contact $S^3$. In particular $\xi_{\std}$ induces a characteristic foliation $(\partial H_0)_{\xi_{\std}}$ on $\partial H_0$. It follows that $H_0$ is the cone over $\partial H_0$ and the Legendrian foliation $\mathcal{F}_{H_0}$ is also the cone over $(\partial H_0)_{\xi_{\std}}$. Namely, a leaf of $\mathcal{F}_{H_0}$ is the cone over a leaf of $(\partial H_0)_{\xi_{\std}}$. \emph{Hereafter all cones are taken with respect to appropriate Liouville vector fields, which is $X_0$ in this case.} Note that, in this case, $H_0$ is smooth only when $\partial H_0$ is equatorial.
\vskip.1in
\subsubsection{Positive coLegendrian handle $H^+_1$ of $Y$-index $1$} \label{subsubsec:poitive H_1}
Let $p_1 \in H^+_1$ be the critical point. Then $\ind_{\Sigma} (p_1) = 1$ or $2$. We continue using the terminology from \autoref{subsubsec:H_0} and denote the corresponding handle in $\Sigma$ by $\widetilde{H}^+_1$.
\vskip.1in\noindent
\textsc{Case 1.} $\ind_{\Sigma} (p_1)=2$.
\vskip.1in
Identify $\widetilde{H}^+_1 \cong B^2 \times B^2$ such that the Liouville vector field can be written as
\begin{equation} \label{eqn:index 2 vf}
X_2 \coloneqq -x_1\partial_{x_1}-x_2\partial_{x_2}+2y_1\partial_{y_1}+2y_2\partial_{y_2},
\end{equation}
and the Liouville form
\begin{equation} \label{eqn:index 2 Liouville form}
\lambda_2 \coloneqq \alpha|_{\widetilde{H}^+_1} = -x_1 dy_1 - x_2 dy_2 - 2y_1 dx_1 - 2y_2 dx_2,
\end{equation}
where $(x_1,x_2) \in B^2$ in the first component and $(y_1,y_2) \in B^2$ in the second component.
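The pair $(X_2,\lambda_2)$ can likewise be checked symbolically to satisfy the Liouville condition $\iota_{X_2} d\lambda_2 = \lambda_2$; this is our addition, and the analogous check works for $(X_0,\lambda_0)$. A sketch using sympy:

```python
import sympy as sp

x1, y1, x2, y2 = sp.symbols('x1 y1 x2 y2')
coords = (x1, y1, x2, y2)
# lambda_2 = -x1 dy1 - x2 dy2 - 2 y1 dx1 - 2 y2 dx2, basis (dx1, dy1, dx2, dy2)
lam2 = (-2*y1, -x1, -2*y2, -x2)
# X_2 = -x1 d/dx1 - x2 d/dx2 + 2 y1 d/dy1 + 2 y2 d/dy2
X2 = (-x1, 2*y1, -x2, 2*y2)

# (d lam)_{ij} = d_i lam_j - d_j lam_i;  (iota_X d lam)_j = sum_i X_i (d lam)_{ij}
dlam = [[sp.diff(lam2[j], coords[i]) - sp.diff(lam2[i], coords[j])
         for j in range(4)] for i in range(4)]
contraction = [sum(X2[i] * dlam[i][j] for i in range(4)) for j in range(4)]
assert all(sp.simplify(contraction[j] - lam2[j]) == 0 for j in range(4))
```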
To see the embedding $H^+_1 \subset \widetilde{H}^+_1$, observe that the unstable disk in $H^+_1$ coincides with the unstable disk in $\widetilde{H}^+_1$ for index reasons. On the other hand, the $1$-dimensional stable disk in $H^+_1$ sits in the stable disk $B^2_x \coloneqq B^2 \times \{0\}$ in $\widetilde{H}^+_1$, and is tangent to the restricted Liouville vector field
\begin{equation*}
X_2|_{B^2_x} = -x_1\partial_{x_1}-x_2\partial_{x_2}.
\end{equation*}
In other words, $H^+_1 = \delta \times B^2 \subset \widetilde{H}^+_1$ where $\delta \subset B^2_x$ is the union of two (different) radii. In general $H^+_1$ has corners along $\{0\} \times B^2$, and is smooth precisely when $\delta$ is a diameter.
Next we turn to the Legendrian foliation $\mathcal{F}_{H^+_1}$ on $H^+_1$. Suppose $\overline{\delta} \subset \delta$ is one of the two radii and suppose further w.l.o.g. that $\overline{\delta}=\{x_1 \geq 0, x_2=0\} \subset B^2_x$. It suffices to understand the Legendrian foliation $\mathcal{F}_{H^+_1}|_{\overline{\delta} \times B^2}$ on one half of $H^+_1$, which is given by
\begin{equation*}
\mathcal{F}_{H^+_1}|_{\overline{\delta} \times B^2} = \ker(\lambda_2|_{\overline{\delta} \times B^2}) = \ker(x_1 dy_1+2y_1 dx_1).
\end{equation*}
In particular, the characteristic foliation on each (disk) component of $\partial_- H^+_1$ is a linear foliation, and on $\partial_+ H^+_1$, which is an annulus with corners along $\{0\} \times \partial B^2$, it is as shown in \autoref{fig:cornered char foliation}. In particular, observe that there are four half-saddle points on $\partial_+ H^+_1$, all of which lie on $\{0\} \times \partial B^2$. Let $p_1,p_2$ be the two half-saddles on $\partial_+ (\overline{\delta} \times B^2)$, and $q_1,q_2$ be the other two half-saddles. Then the relative positions between $p_1$ and $p_2$, as well as between $q_1$ and $q_2$, are fixed. However, the relative position between $p_1$ and $q_1$ (or equivalently, $p_2$ and $q_2$) depends on the angle of $\delta$ at the origin. In particular, if the angle is $\pi$, i.e., $\delta$ is a diameter, then $p_1$ (resp. $p_2$) collides with $q_1$ (resp. $q_2$), and the characteristic foliation on the smooth $\partial_+ H^+_1$ possesses two (full) saddles. Finally, if we fix an orientation of $\partial_+ H^+_1$, then $p_1$ and $p_2$ (resp. $q_1$ and $q_2$) always have opposite signs, and the characteristic foliation is oriented in such a way that on $\{0\} \times B^2$, it flows from the positive half-saddle to the negative half-saddle.
\begin{figure}[ht]
\begin{overpic}[scale=.3]{char_fol_cornered.eps}
\end{overpic}
\caption{The characteristic foliation on $\partial_+ H^+_1$. The left and right sides are identified and the corners are along the blue circle.}
\label{fig:cornered char foliation}
\end{figure}
\vskip.1in\noindent
\textsc{Case 2.} $\ind_{\Sigma} (p_1)=1$.
\vskip.1in
Identify $\widetilde{H}^+_1 \cong B^1 \times B^3$ such that the Liouville vector field and the Liouville form are given by \autoref{eqn:index 1 vf} and \autoref{eqn:index 1 Liouville form}, respectively. Continuing with the notation from \autoref{subsubsec:H_0}, we write $B^3_0 \coloneqq \{0\} \times B^3$. Then the embedding $H^+_1 \subset \widetilde{H}^+_1$ takes the form $H^+_1 = B^1 \times K(\gamma)$, where $\gamma \subset \partial B^3_0$ is a closed loop and $K(\gamma) \subset B^3_0$ is the cone over $\gamma$ taken with respect to the vector field
\begin{equation} \label{eqn:skewed radial vf}
X_1|_{B^3_0} = 2y_1 \partial_{y_1} + \tfrac{1}{2} (x_2 \partial_{x_2}+y_2 \partial_{y_2}).
\end{equation}
Assume $\gamma$ is smooth and \emph{generic} in the following sense. Consider the foliation $\mathcal{G}$ on $B^3_0$ defined by
\begin{equation} \label{eqn:char fol on S2}
\mathcal{G} \coloneqq \ker(\lambda_1|_{B^3_0}) = \ker(x_2dy_2-y_2dx_2) = \ker (r^2d\theta),
\end{equation}
where $(r,\theta)$ denotes the polar coordinates on the $x_2y_2$-plane. It induces a foliation $\mathcal{G}|_{\partial B^3_0} \coloneqq \mathcal{G} \cap \partial B^3_0$ which is singular at the north pole $(1,0,0)$ and the south pole $(-1,0,0)$. We say $\gamma$ is \emph{generic} if the following hold:
\begin{itemize}
\item[(Gen1)] $\gamma$ does not pass through the north and the south poles.
\item[(Gen2)] The intersection between $\gamma$ and a leaf of $\mathcal{G}|_{\partial B^3_0}$ is either transversal or quadratic tangential, i.e., modeled on the intersection between the $u$-axis in $\mathbb{R}^2_{u,v}$ and the graph of $v=u^2$. In particular, the tangential points are isolated.
\item[(Gen3)] None of the quadratic tangential points lie on the equator $\partial B^3_0 \cap \{y_1=0\}$.
\end{itemize}
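Unpacking \autoref{eqn:char fol on S2}, the second equality amounts to the identity $x_2\,dy_2 - y_2\,dx_2 = r^2\,d\theta$ in polar coordinates, which can be verified symbolically (a small check we add, assuming sympy):

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
x2, y2 = r*sp.cos(th), r*sp.sin(th)

# Coefficients of x2 dy2 - y2 dx2 in the basis (dr, dtheta)
dr_coeff = x2*sp.diff(y2, r) - y2*sp.diff(x2, r)
dth_coeff = x2*sp.diff(y2, th) - y2*sp.diff(x2, th)

assert sp.simplify(dr_coeff) == 0          # no dr component
assert sp.simplify(dth_coeff - r**2) == 0  # dtheta coefficient is r^2
```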
To understand the Legendrian foliation $\mathcal{F}_{H^+_1}$ on $H^+_1$, it turns out to be convenient to zoom in on a small neighborhood of the critical point $p_1 \in \widetilde{H}^+_1$. Motivated by this, let $R_1,R_2$ be the radii of $B^1,B^3$, respectively. More explicitly, $B^1=\{\abs{x_1} \leq R_1\}$ and $B^3=\{\abs{y_1}^2+r^2 \leq R_2^2\}$. Before getting into the details, let's briefly explain the strategy for visualizing $\mathcal{F}$. In all previous cases, we first describe the Legendrian foliation $\mathcal{F}$, which is defined by a relatively simple $1$-form, on the relevant handle, say, $H$, and then examine its trace on the boundaries $\partial_{\pm} H$. In this case, however, the above procedure will be reversed due to the more complicated structure of $\mathcal{F}_{H^+_1}$. Namely, we will first describe the trace of $\mathcal{F}_{H^+_1}$ on $\partial_{\pm} H^+_1$, i.e., the characteristic foliations, and then use it to describe $\mathcal{F}_{H^+_1}$.
Let $w_1,\dots,w_m \in \gamma$ be the quadratic tangential points introduced in (Gen2). We first analyze the characteristic foliation on $\partial_+ H^+_1 = B^1 \times \gamma$, which is the easier part. Fix an orientation of $\gamma$ and let $\dot{\gamma}$ be the positive tangent vector. Identify $\gamma$ with $\{0\} \times \gamma$. If we denote the restriction of the vector field $\partial_{x_1}$ on $\partial_+ H^+_1$ along $\gamma$ by $\partial_{x_1}|_{\gamma}$, then observe that $\lambda_1 (\partial_{x_1}|_{\gamma}) = -2y_1(\gamma)$, where $y_1(\gamma)$ denotes the $y_1$-coordinate of the points on $\gamma$. Together with (Gen3), we see that the characteristic foliation on $\partial_+ H^+_1$ is nonsingular for $R_1$ sufficiently small and is as shown in \autoref{fig:char fol positive bdry}, where the tangencies between $\gamma$ and the characteristic foliation are in one-to-one correspondence with the $w_i$'s.
\begin{figure}[ht]
\begin{overpic}[scale=.3]{char_fol_pos_bdry.eps}
\end{overpic}
\caption{The characteristic foliation on $\partial_+ H^+_1$. The left and right sides are identified.}
\label{fig:char fol positive bdry}
\end{figure}
Next, we turn to the characteristic foliation on $\partial_- H^+_1$, which consists of two disks. In what follows we consider the component of $\partial_- H^+_1$ with $x_1=R_1>0$. The other component with $x_1=-R_1$ can be analyzed similarly. Cut $\gamma$ open at the $w_i$'s to obtain $m$ consecutive open segments $\gamma_1,\dots,\gamma_m$ such that $\gamma_i$ denotes the segment between $w_i$ and $w_{i+1}$, where $1 \leq i \leq m$ and $m+1$ is identified with $1$. Let $K(\gamma_i)$ be the cone over $\gamma_i$. For definiteness, let's consider $K(\gamma_1)$ and suppose for simplicity that the span of $\theta(\gamma_1)$ is less than $2\pi$, where $\theta(\gamma_1)$ denotes the $\theta$-coordinate of the points on $\gamma_1$. The general case can be dealt with similarly. Let $\overline{\gamma}_1$ be the projection of $\gamma_1$ to the $x_2y_2$-plane, and $K(\overline{\gamma}_1)$ be the cone over $\overline{\gamma}_1$, taken with respect to the radial vector field $r\partial_r$. Then $K(\overline{\gamma}_1) \subset \mathbb{R}^2_{x_2,y_2}$ is an embedded sector, over which $K(\gamma_1)$ is graphical and can be written as
\begin{equation} \label{eqn:cone gamma_1}
K(\gamma_1) = \{ y_1=f(\theta)r^4 ~|~ (r,\theta) \in K(\overline{\gamma}_1) \},\footnote{The term $r^4$ comes from the particular choice of the Liouville vector field in \autoref{eqn:index 1 vf}.}
\end{equation}
such that $f'(\theta)$ blows up as $\theta$ approaches $\theta_{\min}$ or $\theta_{\max}$, where $\theta_{\min},\theta_{\max}$ are the lower and upper limits of the $\theta$-coordinate in $K(\overline{\gamma}_1)$.
We are interested in the characteristic foliation $K(\gamma_1)_{\xi}$ on $K(\gamma_1)$. Let $\overline{K(\gamma_1)}_{\xi}$ be the projection of $K(\gamma_1)_{\xi}$ to $K(\overline{\gamma}_1)$. We have
\begin{align*}
\overline{K(\gamma_1)}_{\xi} &= \ker (-R_1 d(f r^4) + \tfrac{1}{2} r^2 d\theta) \\
&= \ker ((\tfrac{1}{2} r^2 - R_1 f' r^4) d\theta - 4R_1 f r^3 dr).
\end{align*}
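These coefficients can be double-checked symbolically (our addition): restrict $\lambda_1$ to $\{x_1=R_1\}$, where it reads $-R_1\,dy_1 + \tfrac{1}{2}r^2\,d\theta$ in polar coordinates, and substitute $y_1 = f(\theta)r^4$. A quick sympy verification:

```python
import sympy as sp

r, th, R1 = sp.symbols('r theta R_1', positive=True)
f = sp.Function('f')(th)
y1 = f * r**4  # the graph describing K(gamma_1), cf. the displayed cone equation

# lambda_1 restricted to {x1 = R1} is -R1 dy1 + (1/2) r^2 dtheta;
# substituting y1 = f(theta) r^4 gives the dtheta and dr coefficients:
dth_coeff = -R1 * sp.diff(y1, th) + sp.Rational(1, 2) * r**2
dr_coeff = -R1 * sp.diff(y1, r)

assert sp.simplify(dth_coeff - (sp.Rational(1, 2)*r**2 - R1*sp.diff(f, th)*r**4)) == 0
assert sp.simplify(dr_coeff + 4*R1*f*r**3) == 0
```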
We claim that $\overline{K(\gamma_1)}_{\xi}$ is nonzero away from the origin if $R_2$ is sufficiently small. Indeed, away from the origin, the $dr$ component vanishes precisely when $f$ vanishes. But at these points $f'$ is finite due to (Gen3), and therefore the $d\theta$ component is nonzero for $r$ sufficiently small.
The key to visualizing $\overline{K(\gamma_1)}_{\xi}$ consists of two observations. First, note that the $dr$ component is nonvanishing whenever $f$ is nonvanishing. Second, the $d\theta$ component can possibly vanish only near $\theta_{\max}$ and $\theta_{\min}$, where $f'$ blows up. Let's consider $\theta_{\max}$ here and leave the discussion of $\theta_{\min}$ to the interested reader. If $\lim_{\theta \to \theta_{\max}} f'(\theta)=-\infty$, then the $d\theta$ component is never zero near $\theta_{\max}$. Hence we can assume that $\lim_{\theta \to \theta_{\max}} f'(\theta)=+\infty$. In this case, for each $\theta$ sufficiently close to $\theta_{\max}$, there exists a unique point $(r(\theta),\theta) \in \overline{K(\gamma_1)} \setminus \{0\}$ at which $\overline{K(\gamma_1)}_{\xi}=\ker(dr)$. Moreover, the points $(r(\theta),\theta)$ converge to the origin as $\theta \to \theta_{\max}$.
At this point, we note that the dynamics of the characteristic foliation is better understood on the entire $K(\gamma)$ than on each individual $K(\gamma_i)$. More precisely, let $\nu(w_i) \subset \gamma$ be a neighborhood of $w_i$, and consider the cone $K(\nu(w_i)) \subset K(\gamma)$. Observe that if $f_{i-1}$ and $f_i$ are the defining angular functions for $\gamma_{i-1}$ and $\gamma_i$, respectively, as in \autoref{eqn:cone gamma_1}, then either $\lim_{\theta \to \theta_i} f_{i-1}'(\theta)=-\infty$ or $\lim_{\theta \to \theta_i} f_i'(\theta)=-\infty$, where $\theta_i$ denotes the angular coordinate of $w_i$. Hence by the observations made above, the restriction of $K(\gamma_i)_{\xi}$ to $K(\nu(w_i))$ is one of the two scenarios shown in \autoref{fig:char foliation negative bdry}. Finally, note that away from the $K(\nu(w_i))$'s, the flow lines of $K(\gamma)_{\xi}$ simply go from the origin towards $\gamma$.
\begin{figure}[ht]
\begin{overpic}[scale=.4]{char_fol_neg_bdry.eps}
\end{overpic}
\caption{Two possibilities of the restriction of $K(\gamma_i)_{\xi}$ to $K(\nu(w_i))$.}
\label{fig:char foliation negative bdry}
\end{figure}
Finally, let's describe the Legendrian foliation $\mathcal{F}_{H^+_1}$, which, in fact, can be read off from the characteristic foliation $(\partial_+ H^+_1)_{\xi}$ on $\partial_+ H^+_1$ (cf. \autoref{fig:char fol positive bdry}) as follows. Recall the vector field
\begin{equation*}
X_1|_{H^+_1} = -x_1 \partial_{x_1} + \tfrac{1}{2} (x_2 \partial_{x_2}+y_2 \partial_{y_2}),
\end{equation*}
which is transverse to $\partial_+ H^+_1$. Then each leaf $F$ of $\mathcal{F}_{H^+_1}$ can be visualized as the totality of trajectories of $X_1|_{H^+_1}$ which pass through a leaf of $(\partial_+ H^+_1)_{\xi}$. Recall that a leaf $\ell$ of $(\partial_+ H^+_1)_{\xi}$ is always a properly embedded arc. Let $F(\ell)$ be the leaf of $\mathcal{F}_{H^+_1}$ such that $F(\ell) \cap \partial_+ H^+_1=\ell$. Then we have the following possibilities for the shape of $F(\ell)$ depending on the position of $\ell \subset \partial_+ H^+_1$:
\begin{itemize}
\item Suppose $\partial\ell$ is contained in one component of $\partial(\partial_+ H^+_1)$. Then
\begin{itemize}
\item if $\ell \cap \gamma=\varnothing$, then $F(\ell)$ is a disk as shown in \autoref{fig:Leg foliation on 1 handle}(a);
\item if $\ell$ is tangent to $\gamma$, then $F(\ell)$ is a disk as shown in \autoref{fig:Leg foliation on 1 handle}(b);
\item if $\ell$ intersects $\gamma$ in two points, then $F(\ell)$ is an annulus as shown in \autoref{fig:Leg foliation on 1 handle}(c).
\end{itemize}
\item Suppose the two points of $\partial\ell$ are contained in different components of $\partial(\partial_+ H^+_1)$. Then $F(\ell)$ is a strip as shown in \autoref{fig:Leg foliation on 1 handle}(d).
\end{itemize}
\begin{figure}[ht]
\begin{overpic}[scale=.35]{leg_fol_1_handle.eps}
\put(6,-3.5){(a)}
\put(33.7,-3.5){(b)}
\put(61.5,-3.5){(c)}
\put(89,-3.5){(d)}
\end{overpic}
\vspace{4mm}
\caption{Different leaves of $\mathcal{F}_{H^+_1}$. The blue arc represents $\gamma \subset \partial_+ H^+_1$.}
\label{fig:Leg foliation on 1 handle}
\end{figure}
\vskip.1in\noindent
\textsc{Comparison between Case 1 and Case 2.}
\vskip.1in
Since our interest lies in $H^+_1$ and its Legendrian foliation $\mathcal{F}_{H^+_1}$ and not in the ambient handle $\widetilde{H}^+_1$, which may be either a $1$-handle (Case 2) or a $2$-handle (Case 1), it is instructive to compare the two models and understand their differences. To avoid confusion, conflicting notation used in (Case 1) and (Case 2) will be decorated with $(1)$ and $(2)$, respectively.
First of all, observe that the attaching region $\partial_- H^{+,(1)}_1$ is always smooth, but this is not the case for $\partial_- H^{+,(2)}_1$. Indeed, in the generic case, i.e., (Gen1)--(Gen3) are satisfied, a necessary condition for $\partial_- H^{+,(2)}_1$ to be smooth is the nonexistence of (quadratic) tangencies on $\gamma$ (cf. (Gen2)). Equivalently, it means that $\gamma$ can be written in the form of \autoref{eqn:cone gamma_1} for a globally defined $f(\theta), \theta \in [0,2\pi]$. Strictly speaking, one also needs to assume that the oscillation of $f(\theta)$ is not too rapid to guarantee the smoothness of $K(\gamma)$. However, this technical point will not be important for us. The key observation here is that all the leaves of the Legendrian foliation $\mathcal{F}_{H^+_1}^{(2)}$ are of the form shown in \autoref{fig:Leg foliation on 1 handle}(d), at least when $R_1,R_2$ are small.
Next, let's consider the situation where $\partial_- H^{+,(1)}_1$ is smooth, i.e., $\delta$ is a diameter. In this case, the Legendrian foliation $\mathcal{F}_{H^+_1}^{(1)}$ coincides with $\mathcal{F}_{H^+_1}^{(2)}$ when $\mu \coloneqq \gamma$ is a meridian great circle. Note, however, that $\mu$ is not generic since it fails (Gen1)--(Gen3). In what follows we consider a particularly simple perturbation of $\mu$ so that it becomes generic and try to understand how $\mathcal{F}_{H^+_1}^{(2)}$ changes. Continuing with the notation from (Case 2), suppose w.l.o.g. that $\mu=\partial B^3_0 \cap \{y_2=0\}$. Let $\tau: \partial B^3_0 \to \partial B^3_0$ be a small rotation about the $x_2$-axis. Then $\tau(\mu)$ is generic. Indeed, it is everywhere transverse to $\mathcal{G}|_{\partial B^3_0}$. If we fix $R_1=R_2=1$, instead of letting them shrink as in (Case 2), then the characteristic foliation on one component of $\partial_- H^{+,(2)}_1$ is Morse and has precisely two critical points: one source at the center and one saddle, which are in canceling position. Moreover, as the angle of rotation $\tau$ tends to zero, the saddle approaches the source at the center and cancels it in the limit\footnote{This procedure is nothing but a realization of Giroux's elimination lemma.}. On the other hand, if we fix the angle of rotation $\tau$ and shrink $R_1,R_2$, then we recover the smooth $H^{+,(2)}_1$ discussed in the previous paragraph.
\vskip.1in
\subsubsection{Negative coLegendrian handle $H^-_1$ of $Y$-index $1$} \label{subsubsec:negative H_1}
Let $p_1 \in H^-_1$ be the critical point. Then $\ind_{\Sigma} (p_1)=2$ necessarily, and we are in the same situation as in \autoref{subsubsec:poitive H_1} (Case 1). Note that for negative critical points, the characteristic foliation (viewed as a vector field) and the Liouville vector field differ by a sign.
There is only one difference between the positive and the negative cases, which we now explain. Recall from \autoref{subsubsec:poitive H_1} (Case 1) that the characteristic foliation on $\partial_+ H^+_1$ has four half-saddles, which come in two pairs $p_1,p_2$ and $q_1,q_2$. Moreover, with respect to a given orientation of $\partial_+ H^+_1$, $p_1$ and $p_2$ have opposite signs and the stable manifold of the positive one coincides with the unstable manifold of the negative one. The same applies to $q_1$ and $q_2$. Now in the negative case, $\partial_+ H^-_1$ also has four half-saddles, which we denote by $p^-_1,p^-_2$ and $q^-_1,q^-_2$. As before, $p_1^-$ and $p_2^-$ have opposite signs, but in this case, the unstable manifold of the positive one coincides with the stable manifold of the negative one. In other words, there exist two flow lines coming from the positive (half-)saddle and flowing into the negative one. The same applies to $q_1^-,q_2^-$.
\vskip.1in
\subsubsection{Positive coLegendrian handle $H^+_2$ of $Y$-index $2$} \label{subsubsec:positive H_2}
Let $p_2 \in H^+_2$ be the critical point. Then $\ind_{\Sigma} (p_2)=2$ necessarily. Identify the ambient handle $\widetilde{H}^+_2 \cong B^2 \times B^2$ and continue using the model from \autoref{subsubsec:poitive H_1} (Case 1). In particular, the Liouville vector field and the Liouville form are given by \autoref{eqn:index 2 vf} and \autoref{eqn:index 2 Liouville form}, respectively. This case is completely dual to \autoref{subsubsec:negative H_1} in the sense that the characteristic foliations on $\partial_{\pm} H_2^+$ can be identified with those on $\partial_{\mp} H^-_1$, respectively, and the Legendrian foliations $\mathcal{F}_{H^+_2}$ and $\mathcal{F}_{H^-_1}$ coincide up to a flip of coordinates.
\vskip.1in
\subsubsection{Negative coLegendrian handle $H^-_2$ of $Y$-index $2$} \label{subsubsec:negative H_2}
Let $p_2 \in H^-_2$ be the critical point. Then $\ind_{\Sigma} (p_2)=2$ or $3$. Now the $\ind_{\Sigma} (p_2)=2$ case is dual to \autoref{subsubsec:poitive H_1} (Case 1) and the $\ind_{\Sigma} (p_2)=3$ case is dual to \autoref{subsubsec:poitive H_1} (Case 2). We omit the details.
\vskip.1in
\subsubsection{CoLegendrian handle $H_3$ of $Y$-index $3$} \label{subsubsec:H_3}
Let $p_3 \in H_3$ be the critical point. Then $\ind_{\Sigma} (p_3)=3$ or $4$. In either case $H_3$ is necessarily a negative handle. Here the $\ind_{\Sigma} (p_3)=3$ case is dual to \autoref{subsubsec:H_0} (Case 1) and the $\ind_{\Sigma}(p_3)=4$ case is dual to \autoref{subsubsec:H_0} (Case 2). We omit the details.
\begin{remark}
As a concluding remark to our constructions of coLegendrian handles, note that these handles, as constructed, are not necessarily smooth: there may be cones, corners and families of cones. However, from a purely topological point of view, the smoothness regularity of the handles can be much lower than that considered in this subsection. For example, the loop $\gamma$ considered in \autoref{subsubsec:negative H_1} (Case 2) is assumed to be smooth for no obvious reason. Our choices will be justified in the next subsection where we study the existence of coLegendrians.
\end{remark}
\subsection{Existence of coLegendrians} \label{subsec:coLeg existence}
The goal of this subsection is to prove the following result on coLegendrian approximation.
\begin{prop} \label{prop:coLeg approx}
Suppose $Y \subset (M^5,\xi)$ is a closed $3$-submanifold with trivial normal bundle. Then $Y$ can be $C^0$-approximated by a regular coLegendrian with isolated cone singularities.
\end{prop}
\begin{proof}
The proof essentially consists of two steps\footnote{The weight of the two steps may seem extremely imbalanced: Step 2 is some ten times longer than Step 1. But the truth is that Step 1 relies on all of \cite{HH19}, which is some ten times longer than Step 2.}. The first step is to approximate $Y$ by a regular coLegendrian with the various singularities that appeared in \autoref{subsec:coLeg handles}, and the second step is to eliminate all singularities except an isolated collection of cones.
\vskip.1in\noindent
\textsc{Step 1.} \textit{Topological approximation.}
\vskip.1in
Consider a hypersurface $\Sigma \coloneqq Y \times [-1,1] \subset M$ such that $Y$ is identified with $Y \times \{0\}$. By the existence $h$-principle for contact submanifolds in \cite{HH19}, we can assume, up to a $C^0$-small perturbation of $\Sigma$, that $\Sigma_{\xi}=\partial_s$ where $s$ denotes the coordinate on $[-1,1]$. Again by the folding techniques developed in \cite{HH19}, one can further $C^0$-perturb $\Sigma$ such that with respect to the new Morse vector field $\Sigma_{\xi}$, there exists a (topological) copy of $Y$ satisfying the following
\begin{itemize}
\item[(RA1)] $Y$ is tangent to $\Sigma_{\xi}$;
\item[(RA2)] $\Sigma_{\xi}|_Y$ is Morse;
\item[(RA3)] $\Sigma_{\xi}$ is inward pointing along the $1$-dimensional transverse direction to $Y$.
\end{itemize}
Note that there also exists a (disjoint) copy of $Y$ which satisfies all the above conditions but with ``inward pointing'' replaced by ``outward pointing'' in (RA3). Our choice here is completely arbitrary.
In this way, we have constructed a $C^0$-approximation of $Y$ which is regular and coisotropic according to \autoref{lem:coiso tangent to char fol}. \emph{Abusing notation, we will denote the approximating regular coisotropic submanifold by $Y$ in what follows.} However, such $Y$ may not be everywhere smooth and our next task is to analyze its singularities.
\vskip.1in\noindent
\textsc{Step 2.} \textit{Smoothing of singularities.}
\vskip.1in
Observe that for any critical point $p \in Y$, we have $\ind_Y (p)+1=\ind_{\Sigma} (p)$ by (RA3). This is a rather strong constraint on the structure of $Y$. We will make use of this rigidity in the beginning of the argument and gradually get rid of it as more flexibility becomes necessary. For clarity, this step is further subdivided into substeps according to the $Y$-index of the handles.
\vskip.1in\noindent
\textsc{Substep 2.1.} \textit{The $0$-handles $H_0$.}
\vskip.1in
According to \autoref{subsubsec:H_0} (Case 1), $H_0$ is a smooth $3$-ball with boundary $\partial H_0$. Hence there is nothing to smooth. Note that the characteristic foliation on $\partial H_0$ is standard (cf. \autoref{eqn:Leg foliation on H0}).
\vskip.1in\noindent
\textsc{Substep 2.2.} \textit{Round the $1$-handles $H_1^{\pm}$.}
\vskip.1in
We only discuss the case of $H^+_1$ and note that the case of $H^-_1$ is similar. According to \autoref{subsubsec:poitive H_1} (Case 1), the ambient handle $\widetilde{H}^+_1 \cong B^2 \times B^2$ comes with the Liouville form $\lambda_2$ given by \autoref{eqn:index 2 Liouville form}, and $H_1^+ = \delta \times B^2$ where $\delta \subset B^2_x$ is the union of two radii $\delta_1$ and $\delta_2$. It follows that $H_1^+$ is smooth exactly when $\delta$ is a diameter. The goal of this substep is to apply a Hamiltonian perturbation to $\widetilde{H}^+_1$ to, in effect, round $\delta$ and hence also $H^+_1$. Roughly speaking, the idea is that the Hamiltonian isotopy, when restricted to $B^2_x$, generates a partial rotation which rotates, say, $\delta_2$ to an angle opposite to that of $\delta_1$. See \autoref{fig:round radii}.
\begin{figure}[ht]
\begin{overpic}[scale=.4]{round_corner.eps}
\put(7.5,14){$\delta_1$}
\put(28,14){$\delta_2$}
\put(72,15){$\delta'$}
\end{overpic}
\caption{Smoothing of $\delta=\delta_1 \cup \delta_2$.}
\label{fig:round radii}
\end{figure}
To carry out the details, let's introduce polar coordinates $(r,\theta) \in B^2_x$ and dual coordinates $(r^{\ast},\theta^{\ast}) \in B^2_y$ defined by
\begin{equation*}
r^{\ast} \coloneqq y_1\cos\theta+y_2\sin\theta \quad\text{ and }\quad \theta^{\ast} \coloneqq y_2r\cos\theta-y_1r\sin\theta.
\end{equation*}
One can check that under this change of coordinates
\begin{equation*}
\omega=dx_1 \wedge dy_1+dx_2 \wedge dy_2 = dr \wedge dr^{\ast} + d\theta \wedge d\theta^{\ast}.
\end{equation*}
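For the reader's convenience, here is one way to carry out this check, using the polar relations $x_1=r\cos\theta$ and $x_2=r\sin\theta$:
\begin{equation*}
y_1\,dx_1+y_2\,dx_2 = (y_1\cos\theta+y_2\sin\theta)\,dr + r(y_2\cos\theta-y_1\sin\theta)\,d\theta = r^{\ast}\,dr+\theta^{\ast}\,d\theta,
\end{equation*}
and hence
\begin{equation*}
\omega = -d(y_1\,dx_1+y_2\,dx_2) = -d(r^{\ast}\,dr+\theta^{\ast}\,d\theta) = dr \wedge dr^{\ast} + d\theta \wedge d\theta^{\ast}.
\end{equation*}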
Consider a Hamiltonian function $H=\rho_1(r) \rho_2(r^{\ast}) \rho_3(\theta) \rho_4(\theta^{\ast})$ on $\widetilde{H}^+_1$ such that
\begin{itemize}
\item $\rho_1,\rho_2,\rho_3$ are $C^1$-small bump functions such that $\rho_1$ is supported near $r=r_0 \neq 0$, $\rho_2$ is supported near $r^{\ast}=0$, and $\rho_3$ is supported in an interval in $S^1$ of length $<\pi$ which intersects both $\delta_2$ and $-\delta_1$, but not $\delta_1$. See the shaded region in the right-hand-side of \autoref{fig:round radii}.
\item $\rho_4$ is supported near $\theta^{\ast}=0$ and $\rho_4(0)=0$ but $\rho'_4(0) \neq 0$.
\end{itemize}
Observe that the Hamiltonian isotopy $\phi_H$ induced by $H$ leaves $B^2_x$ invariant since $H|_{B^2_x} \equiv 0$. Indeed, $\phi_H|_{B^2_x}$ is a partial rotation supported in $\supp(\rho_1(r) \rho_3(\theta))$ (e.g., the shaded region in the right-hand-side of \autoref{fig:round radii}), and the angle of rotation depends on $r,\theta$ and $\rho'_4(0)$.
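To make the partial rotation explicit, note that in the coordinates $(r,r^{\ast},\theta,\theta^{\ast})$, and with the sign convention $\iota_{X_H}\omega=dH$ (the opposite convention merely reverses the direction of rotation), the Hamiltonian vector field is
\begin{equation*}
X_H = \partial_{r^{\ast}}H\,\partial_r - \partial_r H\,\partial_{r^{\ast}} + \partial_{\theta^{\ast}}H\,\partial_{\theta} - \partial_{\theta}H\,\partial_{\theta^{\ast}}.
\end{equation*}
Along $B^2_x=\{r^{\ast}=\theta^{\ast}=0\}$, every term carrying an undifferentiated factor $\rho_4(\theta^{\ast})$ vanishes since $\rho_4(0)=0$, so
\begin{equation*}
X_H|_{B^2_x} = \rho_1(r)\rho_2(0)\rho_3(\theta)\rho'_4(0)\,\partial_{\theta},
\end{equation*}
which is a partial rotation supported in $\supp(\rho_1(r)\rho_3(\theta))$, as claimed.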
Now consider the deformed Liouville form $\lambda'_2 \coloneqq \lambda_2+dH$ and the associated Liouville vector field $X'_2$. Then for appropriate choices of $\rho_1,\dots,\rho_4$, one can find a diameter $\delta' \subset B^2_x$ with respect to $X'_2$, i.e., a properly embedded smooth arc which is tangent to $X'_2$ and passes through the origin, such that $\delta'$ agrees with $\delta$ near $\partial B^2_x$. In fact, by choosing $r_0$ sufficiently small, we can arrange so that $\delta$ agrees with $\delta'$ outside of a small neighborhood of the origin. Hence we have constructed a smoothed handle $H_1^{+,\operatorname{sm}} \coloneqq \delta' \times B^2$ in the deformed $\widetilde{H}_1^+$.
By construction, the smoothed $1$-handles $H_1^{+,\operatorname{sm}}$ are attached to the $0$-handles in the same way that the original $1$-handles $H_1^+$ are attached. It remains to argue that the above smoothing operation does not affect the subsequent $2$ and $3$-handle attachments. Indeed, let $Y^{(1)}$ be the union of $0$ and $1$-handles before smoothing. Then a $2$-handle $H^-_2$ (which is a slice of a $3$-handle in $\Sigma$) is attached along a loop $\gamma \subset \partial Y^{(1)}$. Note that generically, $\gamma$ may not be smooth since $\partial Y^{(1)}$ is not smooth in general. In fact, $\gamma$ may have corners exactly where $\partial Y^{(1)}$ has corners. However, the smoothing of the $1$-handles as described above simultaneously smooths the $\gamma$'s. See \autoref{fig:round corner cont'd}. Finally, the $3$-handle attachments are clearly not affected.
\begin{figure}[ht]
\begin{overpic}[scale=.5]{round_corner_contd.eps}
\end{overpic}
\caption{Rounding the corners on $\partial Y^{(1)}$ and $\gamma$ (blue). The shaded region represents part of the attaching sphere of the ambient $3$-handle $\widetilde{H}_2^-$ in $\Sigma$.}
\label{fig:round corner cont'd}
\end{figure}
\emph{From now on, all the $1$-handles $H_1^{\pm}$ are assumed to be smooth.}
As it turns out, the $1$-handles $H_1^{\pm}$ which are slices of ambient $2$-handles are too rigid for our later purposes of getting rid of singularities on the $H_2^-$'s. Hence we will spend the next two substeps on certain modifications of $H_1^+$ and $H_1^-$, respectively, in preparation for the subsequent $2$-handle attachments.
\vskip.1in\noindent
\textsc{Substep 2.3.} \textit{Transform $H_1^+$.}
\vskip.1in
From this point on, we will gradually drop the condition (RA3) imposed in Step 1. Let $p_1 \in H_1^+$ be the critical point. \emph{By a $C^0$-small perturbation of $\Sigma$ relative to $Y$ near $p_1$, one can arrange $\ind_{\Sigma}(p_1)=1$.} Note that this is possible since the stable manifold of $p_1$ in $Y$ is assumed to be smooth by Substep 2.2. Therefore we are in the situation of \autoref{subsubsec:poitive H_1} (Case 2), where $\gamma=\mu$ is a meridian great circle on $\partial B_0^3$. By the discussions in \autoref{subsubsec:poitive H_1}, a small rotation of $\gamma$ makes it everywhere transverse to $\mathcal{G}|_{\partial B_0^3}$. In what follows, let's write $\mu$ for the great circle and $\gamma$ for the slightly rotated copy. Also write $H_1^+(\mu)$ and $H_1^+(\gamma)$ for the corresponding $1$-handles. Then the characteristic foliations on $\partial H_1^+(\mu)$ and $\partial H_1^+(\gamma)$ are shown in \autoref{fig:char foliation modify H1+}. In particular, observe that $\mu$ is tangent to the characteristic foliation while $\gamma$ is transverse. Moreover, the two saddles on $\partial_+ H_1^+(\gamma)$ are separated by $\gamma$.
\begin{figure}[ht]
\begin{overpic}[scale=.4]{char_fol_modif_H1+.eps}
\put(18,-4){(a)}
\put(81.7,-4){(b)}
\end{overpic}
\vspace{5mm}
\caption{(a) The characteristic foliation on $\partial H_1^+(\mu)$. (b) The characteristic foliation on $\partial H_1^+(\gamma)$. The dots indicate the critical points and the blue circles represent $\mu$ and $\gamma$, respectively.}
\label{fig:char foliation modify H1+}
\end{figure}
If we identify $H_1^+(\gamma) \cong B^1(R) \times B^2$ where $B^1(R) \coloneqq [-R,R]$ and $\gamma=\{0\} \times \partial B^2$, then for $R'>0$ sufficiently small, the truncated $1$-handle $\widecheck{H}_1^+(\gamma) \coloneqq B^1(R') \times B^2 \subset H_1^+$ is smooth(able) and the Legendrian leaves in $\widecheck{H}_1^+(\gamma)$ are all of the type shown in \autoref{fig:Leg foliation on 1 handle}(d). We call such $\widecheck{H}_1^+(\gamma)$ a \emph{turbine} $1$-handle since it looks like a turbine engine. Note that the characteristic foliation on $\partial_+ \widecheck{H}_1^+ (\gamma)$ is linear, while on each (disk) component of $\partial_- \widecheck{H}_1^+ (\gamma)$ it is a neighborhood of either a source or a sink.
Now the idea is to push $H_1^+(\gamma) \setminus \widecheck{H}_1^+(\gamma)$ down to the $0$-handles. However, for this to work, one must drop the condition (RA3) near the $Y$-index $0$ critical points since it imposes overly strict restrictions on the characteristic foliations on $\partial H_0$ (cf. Substep 2.1). \emph{By a $C^0$-small perturbation of $\Sigma$ relative to $Y$ near the $Y$-index $0$ critical points, one can arrange so that their $\Sigma$-index is also $0$.} The main advantage of this modification is that the characteristic foliation on $\partial_+ H_0$ can now be anything realizable by an embedded $S^2 \subset (S^3,\xi_{\std})$. Of course, as a price paid for this extra flexibility, we lose, in general, the smoothness of $H_0$ at the critical point, which is turned into a cone singularity. After this modification, one can easily push $H_1^+(\gamma) \setminus \widecheck{H}_1^+(\gamma)$ down to the $0$-handles using the flow of $\Sigma_{\xi}|_{H_1^+(\gamma) \setminus \widecheck{H}_1^+(\gamma)}$. In particular, the saddles on $\partial_+ H_1^+(\gamma)$ become saddles on appropriate $\partial_+ H_0$'s.
\emph{From now on, all the positive $1$-handles $H_1^+$ are assumed to be turbine $1$-handles.}
\vskip.1in\noindent
\textsc{Substep 2.4.} \textit{Transform $H_1^-$.}
\vskip.1in
Let $p_1 \in H_1^-$ be the critical point. Then $\ind_{\Sigma} (p_1)=2$ necessarily, and the trick for $H_1^+$ does not apply to the negative case. Instead, the plan is to create a canceling pair of negative critical points of $Y$-index $1$ and $2$ within $H_1^-$ so that a single negative $1$-handle $H_1^-$ will be turned into a combination of two copies of $H_1^-$ and one $H_2^-$.
To carry out the plan, let's identify the ambient $2$-handle $\widetilde{H}_1^- \cong B^2_x \times B^2_y$ equipped with the \emph{negative} Liouville vector field
\begin{equation*}
X_2^- \coloneqq -2x_1 \partial_{x_1} - 2x_2 \partial_{x_2} + y_1 \partial_{y_1} + y_2 \partial_{y_2},
\end{equation*}
with respect to the standard symplectic form $\omega_{\std}=dx_1 \wedge dy_1 + dx_2 \wedge dy_2$. Recall that near negative critical points, the (oriented) characteristic foliation coincides with the negative Liouville vector field. Then the embedding $H_1^- = B^1 \times B^2 \subset \widetilde{H}_1^-$ is given by $x_2=0$. In particular,
\begin{equation*}
X_2^-|_{H_1^-} = -2x_1 \partial_{x_1} + y_1 \partial_{y_1} + y_2 \partial_{y_2}.
\end{equation*}
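As a sanity check, $X_2^-$ is indeed a negative Liouville vector field for $\omega_{\std}$, in the sense that $\mathcal{L}_{X_2^-}\omega_{\std}=-\omega_{\std}$:
\begin{equation*}
\iota_{X_2^-}\omega_{\std} = -2x_1\,dy_1 - y_1\,dx_1 - 2x_2\,dy_2 - y_2\,dx_2,
\end{equation*}
whence
\begin{equation*}
\mathcal{L}_{X_2^-}\omega_{\std} = d\,\iota_{X_2^-}\omega_{\std} = (-2+1)\,dx_1\wedge dy_1 + (-2+1)\,dx_2\wedge dy_2 = -\omega_{\std}.
\end{equation*}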
Now one can create a pair of negative critical points $q_1,q_2$ of $Y$-index $1$ and $2$ (and $\Sigma$-index $2$ and $3$), respectively, along the positive (or negative) $y_2$-axis such that the Legendrian foliation $\mathcal{F}_{H^-_1}$ remains unchanged. See \autoref{fig:modify Leg foliation on negative H1-1}, where we draw only the Legendrian foliation and not the handles themselves. It follows that the contact germ on $H_1^-$ is unchanged under such modification since it is determined by $\mathcal{F}_{H^-_1}$. To avoid confusion, let's denote the original $H_1^-$ by $H_1^{-,\operatorname{orig}}$, the $1$-handles corresponding to $p_1,q_1$ by $H_1^- (p_1), H_1^- (q_1)$, respectively, and the $2$-handle corresponding to $q_2$ by $H_2^- (q_2)$. Then
\begin{equation*}
H_1^{-,\operatorname{orig}} = H_1^- (p_1) \cup H_1^- (q_1) \cup H_2^- (q_2).
\end{equation*}
\begin{figure}[ht]
\begin{overpic}[scale=.2]{modify_leg_fol_neg_1.eps}
\put(12.5,20.3){\tiny{$p_1$}}
\put(74.5,20.3){\tiny{$p_1$}}
\put(79.5,20.3){\tiny{$q_2$}}
\put(84.5,20.3){\tiny{$q_1$}}
\end{overpic}
\caption{Creation of critical points $q_1,q_2$ on $H_1^-$ without changing $\mathcal{F}_{H^-_1}$.}
\label{fig:modify Leg foliation on negative H1-1}
\end{figure}
Observe that the attaching locus $\partial_- H_2^-(q_2)$ contains two saddles $h_{\pm}$ of opposite signs such that the unstable manifold of $h_+$ coincides with the stable manifold of $h_-$. Indeed $\partial_- H_2^-(q_2)$ is a tubular neighborhood of the tangential loop $\mu$ passing through $h_{\pm}$. Moreover, up to a flip of orientation, we can assume $h_+ \in \partial_+ H_1^-(p_1)$ and $h_- \in \partial_+ H_1^-(q_1)$. For later use, let $h_-(p_1) \in \partial_+ H_1^-(p_1)$ and $h_+(q_1) \in \partial_+ H_1^-(q_1)$ be the other saddles.
Next, we want to perturb $H_2^-(q_2)$ such that $\partial_- H_2^-(q_2)$ becomes a neighborhood of a transverse loop instead of a tangential loop. This procedure is dual to the perturbation discussed in Substep 2.3. Namely, the boundary of the stable manifold of the ambient $3$-handle $\widetilde{H}_2^-(q_2)$ is a $2$-sphere equipped with a restricted characteristic foliation identical to $\mathcal{G}|_{\partial B_0^3}$. Moreover $\mu$, the boundary of the stable manifold of $q_2$, is a meridian great circle. By the same construction as in Substep 2.3, one can apply a small rotation to $\mu$ to obtain a transverse loop $\gamma$. Then we obtain a perturbed $2$-handle $H_2^{-,\operatorname{pert}} (q_2)$ whose stable manifold is the cone over $\gamma$. See \autoref{fig:modify Leg foliation on negative H1-2}. Note that the left-hand-side of \autoref{fig:modify Leg foliation on negative H1-2} is the same as the right-hand-side of \autoref{fig:modify Leg foliation on negative H1-1}, except that the $2$-handle corresponding to $q_2$ looks somewhat squashed. Let's introduce a piece of notation for later use: a $2$-handle $H_2^-$ is \emph{turbine} if its stable manifold is a cone over a transverse loop. Note that turbine $2$-handles are dual to turbine $1$-handles introduced in Substep 2.3.
\begin{figure}[ht]
\begin{overpic}[scale=.18]{modify_leg_fol_neg_2.eps}
\end{overpic}
\caption{The unperturbed $H_2^-(q_2)$ on the left, and the perturbed $H_2^{-,\operatorname{pert}} (q_2)$ on the right. The characteristic foliations on $\partial H_2^- (q_2)$ are drawn in blue. The red curves represent the tangential $\mu$ on the left and the transverse $\gamma$ on the right.}
\label{fig:modify Leg foliation on negative H1-2}
\end{figure}
Write
\begin{equation*}
H_1^{-,\operatorname{pert}} \coloneqq H_1^-(p_1) \cup H_1^-(q_1) \cup H_2^{-,\operatorname{pert}}(q_2),
\end{equation*}
where $H_2^{-,\operatorname{pert}}(q_2)$ is a turbine $2$-handle. We describe the characteristic foliation on $\partial H_1^{-,\operatorname{pert}}$ as follows. On $\partial_- H_1^{-,\operatorname{pert}}$, the characteristic foliation is linear, just as in the unperturbed case, although the holonomy is slightly changed due to the perturbation. On the other hand, note that the union of the stable manifold of $h_-(p_1)$ and the unstable manifold of $h_+(q_1)$ separates $\partial_+ H_1^{-,\operatorname{pert}}$ into two connected components. On one component, one has two critical points, $h_+$ and a source, in canceling position, and on the other component, one has $h_-$ and a sink, also in canceling position. See \autoref{fig:modify Leg foliation on negative H1-2}.
Finally, note that the above transformation of $H_1^-$ can be done repeatedly. The number of times we transform $H_1^-$ as above will depend on how the $2$-handles are attached subsequently.
\vskip.1in\noindent
\textsc{Substep 2.5.} \textit{Smoothing the $2$-handles $H^-_2$.}
\vskip.1in
Let $Y^{(1)}$ be a neighborhood of the $1$-skeleton of $Y$. We introduce a special collection of loops on $\partial Y^{(1)}$ as follows: an $\alpha_{\pm}$ curve is the intersection between $\partial Y^{(1)}$ and the unstable manifold of an $H_1^{\pm}$, respectively, and a $\beta$ curve is the intersection between $\partial Y^{(1)}$ and the stable manifold of an $H_2^-$. By Substeps 2.3 and 2.4, $\alpha_+$ curves are transverse and $\alpha_-$ curves are tangential with respect to the characteristic foliation $(\partial Y^{(1)})_{\xi}$. By genericity, assume moreover that $\beta$ curves are transverse to $\alpha_{\pm}$ curves. Note, however, that the relative position between a $\beta$ curve and $(\partial Y^{(1)})_{\xi}$ can be complicated. The main idea of this substep is, roughly speaking, to ``modify'' the $\beta$ curves to become transverse to $(\partial Y^{(1)})_{\xi}$.
Focus on one $\beta$ curve for the moment. We will try to find a parallel copy of $\beta$ which is transverse to $(\partial Y^{(1)})_{\xi}$. Clearly this is not possible in general if we keep $(\partial Y^{(1)})_{\xi}$ unchanged. First we want to find a parallel copy $\beta'$ of $\beta$ which is \emph{coherently} transverse to $(\partial Y^{(1)})_{\xi}$ near the $\alpha_{\pm}$ curves. Here by ``coherently transverse'' we mean the following. Note that $\beta \subset \partial Y^{(1)}$ is two-sided. We say $\beta$ is \emph{coherently transverse} to $(\partial Y^{(1)})_{\xi}$ in some region (e.g., a neighborhood of the $\alpha_{\pm}$ curves) if $(\partial Y^{(1)})_{\xi}$ flows from one chosen side of $\beta$ to the other side in this region. Of course this notion of coherent transversality is stronger than just transversality only when the region in question is disconnected. The coherently transverse $\beta'$ near $\alpha_+$ curves can be produced from $\beta$ by an isotopy near $\alpha_+$ as shown in \autoref{fig:make beta transverse}~(a). On the other hand, the coherently transverse $\beta'$ near $\alpha_-$ curves can be produced by first making a transformation of $H_1^-$ corresponding to the $\alpha_-$ curve, followed by an isotopy of $\beta$. See \autoref{fig:make beta transverse}~(b). Note that the set of $\alpha_-$ curves changes through the transformations of $H_1^-$ as new handles are created.
\begin{figure}[ht]
\begin{overpic}[scale=.54]{transversality_H2_1.eps}
\put(11,-2.5){(a)}
\put(68,-2.5){(b)}
\end{overpic}
\vspace{4mm}
\caption{Making $\beta$ coherently transverse to $\alpha_+$ (left) and $\alpha_-$ (right). The $\alpha$ curves are painted in red, the $\beta$ curves in blue, and the coherently transverse parallel copy of $\beta$ in green. The shaded regions indicate neighborhoods of $\alpha_{\pm}$ curves.}
\label{fig:make beta transverse}
\end{figure}
Let $N(\alpha) \subset \partial Y^{(1)}$ be a neighborhood of the original $\alpha_{\pm}$ curves, i.e., before the transformations of $H_1^-$. Then the above procedure gives us a parallel copy $\beta'$ of $\beta$ which is coherently transverse to $(\partial Y^{(1)})_{\xi}$ within $N(\alpha)$. Now observe that $\partial Y^{(1)} \setminus N(\alpha)$ can be viewed as a subsurface of $2$-spheres in $(S^3,\xi_{\std})$. Hence up to a $C^0$-small perturbation of $\partial Y^{(1)} \setminus N(\alpha)$ supported in a small neighborhood of $\beta'$, relative to $\beta'$, one can find another parallel copy $\beta''$ which is everywhere transverse to (the perturbed) $(\partial Y^{(1)})_{\xi}$. This procedure can be equivalently viewed as a transverse approximation of arcs contained in a surface, which is the $3$-dimensional (trivial) case of the contact approximation described in \cite{HH19}. We apply this transverse approximation to every component of $\beta$.
\emph{From now on, we assume $\beta''$ is a parallel copy of $\beta$ such that each component is transverse.} In particular, $\beta$ and $\beta''$ curves are pairwise disjoint. Let $Y^{(2)}$ be a neighborhood of the $2$-skeleton. Then each component of $\partial Y^{(2)}$ can be viewed as a (not necessarily smooth) $2$-sphere in $(S^3,\xi_{\std})$, which is to be filled in by a $3$-handle, i.e., a cone in the standard Liouville $(B^4,\lambda_0)$ (cf. \autoref{subsubsec:H_3} and \autoref{eqn:index 0 Liouville form}). Note that $\beta''$ lies on the smooth part of $\partial Y^{(2)}$, i.e., away from the cone points. For simplicity of notation, let's assume both $\beta$ (hence $\beta''$) and $\partial Y^{(2)}$ are connected. Otherwise, each connected component can be dealt with separately.
Let $H_3 \subset (B^4,\lambda_0)$ be the filling $3$-handle, i.e., $\partial H_3 = \partial Y^{(2)}$. Then $\beta''$ can be viewed as a transverse unknot in $(S^3,\xi_{\std})$. Roughly speaking, the idea is to subdivide $H_3$ into two $3$-handles along a $2$-disk bounding $\beta''$ so that the $2$-disk itself can be turned into a (negative) $2$-handle, which will be smooth(able) since $\beta''$ is transverse. Unfortunately, as we will see, this naive idea does not always work. But we will begin with a toy case, where the naive idea does work, under the following additional assumption:
\begin{assump} \label{assump:unknot}
$\beta'' \subset (S^3,\xi_{\std})$ is the standard transverse unknot (which is the same as the transverse push-off of the standard Legendrian unknot).
\end{assump}
Recall that a standard $S^2 \subset (S^3,\xi_{\std})$ is a smoothly embedded $2$-sphere whose characteristic foliation has exactly two critical points: a source and a sink. Under \autoref{assump:unknot}, there exists a standard $S^2$ such that $S^2 \cap \partial H_3 = \beta''$. Note that $\beta''$, viewed in the standard $S^2$, is a loop winding once around the source (or equivalently, the sink). Now there exists a Liouville homotopy which turns the standard $(B^4,\lambda_0)$ into a collection of two standard balls $B_1,B_2$, joined by a symplectic $1$-handle (cf. \autoref{eqn:index 1 vf} and \autoref{eqn:index 1 Liouville form}) such that the unstable manifold $D \cong B^3$ of the $1$-handle intersects $\partial B^4$ along the standard $S^2$ as above. Let $K(\beta'') \subset D$ be the cone over $\beta''$. Then $K(\beta'')$ can be thickened to a coLegendrian (turbine) $2$-handle $H_2^-$. Attach this $H_2^-$ to $Y^{(2)}$ along $\beta''$. Then the boundary of $Y^{(2)} \cup H_2^-$ consists of two $2$-spheres, which can be coned off in $B_1$ and $B_2$, respectively. This completes the ``subdivision'' of $H_3$ as desired. Note, however, that we are not really subdividing $H_3$ since the original $H_3$ does not interact with the Liouville homotopy. Instead, we construct by hand a new filling of $\partial Y^{(2)}$ which, as a regular coLegendrian, is built out of one $2$-handle and two $3$-handles.
Now we drop \autoref{assump:unknot} and return to the general case. Since $\beta''$ is topologically the unknot, it is necessarily a (transverse) stabilization of the standard unknot. Indeed, $\beta''$ bounds a $2$-disk $\Delta$ in $(S^3,\xi_{\std})$ such that the characteristic foliation $\Delta_{\xi_{\std}}$ can be normalized to have exactly $m$ sources $e_1,\dots,e_m$ and $m-1$ negative saddles $h_1,\dots,h_{m-1}$, up to a flip of orientation of $\Delta$.\footnote{This is equivalent to saying that the self-linking number of $\beta''$ is $-m$.} Moreover, one can arrange, for definiteness, that each source has at most two saddles connected to it. The plan is to deform $(B^4,\lambda_0)$, not as a Liouville domain as in the above toy model, but rather as a (Morse) hypersurface in a Darboux $5$-ball where critical points of both signs will be created.
More precisely, we will first build a regular coLegendrian $3$-ball in $B^4$, which we still denote by $D$. Thicken $D$ to $D \times [-1,1] \subset B^4$ such that the $[-1,1]$-component of the characteristic foliation points away from $0$. In particular, the $D$-index of every critical point coincides with its $B^4$-index. Finally, we glue two standard $4$-balls to $D \times [-1,1]$ along $D \times \{-1\}$ and $D \times \{1\}$, respectively, to complete the construction of the deformed $(B^4,\lambda_0)$. However, in this case $D$ will be built out of $3m-2$ coLegendrian handles, instead of being the unstable manifold of a single handle as in the toy model, which can be thought of as the case $m=1$. Another technical (and slightly unfortunate) remark is that the construction of $D$ will involve handles of $D$-index $3$ but no handles of $D$-index $0$. \emph{Hence it will be more convenient to start from the highest index handles and attach to them the lower index ones, i.e., the attaching locus will be the positive boundaries instead of the usual negative boundaries.}
Here is the recipe to build $D$. First lay out $m$ $D$-index $3$ handles $H_3^1,\dots,H_3^m$. Note that each $H_3^i$ comes with a Legendrian foliation by half-disks bound together along a diameter (cf. \autoref{eqn:Leg foliation on H0}). In particular, the induced characteristic foliation on each $\partial H_3^i \cong S^2$, denoted by $\mathcal{G}$, is standard. Define a \emph{square} $\square \subset (S^2,\mathcal{G})$ to be an embedded rectangle such that $\mathcal{G}|_{\square}$ is a linear foliation parallel to one of the sides. Fix two disjoint squares $\square^i_{\pm} \subset \partial H_3^i$ for $1 \leq i \leq m$. Next, attach $m-1$ positive $D$-index $2$ handles $H_2^{1,+},\dots,H_2^{m-1,+}$ such that $H_2^{j,+}$ joins $H_3^{j}$ and $H_3^{j+1}$ with the attaching locus $\partial_+ H_2^{j,+}$ identified with $\square^j_+ \cup \square^{j+1}_+$ for $1 \leq j \leq m-1$ (cf. \autoref{subsubsec:positive H_2}). The $m-1$ negative $D$-index $2$ handles are attached similarly. Finally, attach $m-1$ (positive) $D$-index $1$ handles $H_1^{1,+},\dots,H_1^{m-1,+}$ to, intuitively speaking, fill in the holes created by the $2$-handle attachments. Namely, each $H_1^{k,+}, 1 \leq k \leq m-1$, is attached along a transverse loop that traverses once around the negative boundaries of $H_3^k, H_2^{k,+},H_3^{k+1}$, and $H_2^{k,-}$. See \autoref{fig:deform 3-handle}. Note that this particular pattern of the arrangement of the handles is in line with the characteristic foliation on $\Delta$.
\begin{figure}[ht]
\begin{overpic}[scale=.4]{transversality_H2_2.eps}
\put(8,9){$H_3^1$}
\put(45.5,9){$H_3^2$}
\put(88,9){$H_3^m$}
\put(26,9){\small $H_1^{1,+}$}
\put(26.5,15.85){\tiny $H_2^{1,+}$}
\put(26.5,2.8){\tiny $H_2^{1,-}$}
\put(66.7,16.5){$\dots$}
\put(66.7,10){$\dots$}
\put(66.7,3.5){$\dots$}
\end{overpic}
\caption{A schematic picture of the construction of $D$ using coLegendrian handles.}
\label{fig:deform 3-handle}
\end{figure}
Continuing with the convention from \autoref{subsec:coLeg handles}, the ambient (symplectic) handle in $B^4$ corresponding to a coLegendrian handle $H_{\ast}^{\ast}$ in $D$ is denoted by $\widetilde{H}_{\ast}^{\ast}$. Then we can thicken $D$ as follows
\begin{equation*}
D \times [-1,1] \cong \left( \cup_{1 \leq i \leq m} \widetilde{H}_3^i \right) \cup \left( \cup_{1 \leq j \leq m-1} (\widetilde{H}_2^{j,+} \cup \widetilde{H}_2^{j,-} \cup \widetilde{H}_1^{j,+}) \right).
\end{equation*}
Note that $\partial_+ (D \times [-1,1]) = D \times \{-1,1\}$ is contactomorphic to two copies of Darboux $3$-balls since the handles $\widetilde{H}_1^{j,+}$ and $\widetilde{H}_2^{j,+}$ cancel in pairs. It follows that one can attach two standard Liouville $4$-balls $B_1,B_2$ to $D \times [-1,1]$ along $\partial_+ (D \times [-1,1])$ to complete our construction of the perturbed $B^{4,\operatorname{pert}}$. Note that $B^{4,\operatorname{pert}}$, viewed as a hypersurface in the Darboux $5$-ball, is deformation equivalent to the standard $(B^4,\lambda_0)$, relative to $\partial B^4$, by the obvious cancellations of handles, e.g., $\widetilde{H}_1^{j,+}$ cancels $\widetilde{H}_2^{j,+}$, and $\widetilde{H}_2^{i,-}$ cancels $\widetilde{H}_3^i$, etc.
The point of the above construction is that there exists a properly embedded regular $2$-disk $\Delta' \subset D$, i.e., $\Delta'$ is tangent to the Morse vector field on $D$, such that the intersection of the Legendrian foliation $\mathcal{F}_D$ with $\Delta'$ gives a vector field identical to $\Delta_{\xi_{\std}}$ as described above. In order to find such $\Delta'$, for each $1 \leq j \leq m-1$, let's identify $H_2^{j,+} \cong B^2_{x_1,x_2} \times B^1_{y_1}$ such that the Legendrian foliation
\begin{equation*}
\mathcal{F}_{H^{j,+}_2} = \ker(x_1dy_1-2y_1dx_1).
\end{equation*}
Let $S_j \coloneqq \{x_2=0\} \subset H_2^{j,+}$ be a strip. Assume w.l.o.g. that $S_j$ is disjoint from the attaching locus of $H_1^{j,+}$. For each $1 \leq i \leq m$, let $D_i \subset H_3^i$ be a properly embedded disk transverse to $\mathcal{F}_{H_3^i}$ such that
\begin{itemize}
\item $S_j \cap \partial H_3^j \subset \partial D_j$ and $S_j \cap \partial H_3^{j+1} \subset \partial D_{j+1}$;
\item $\partial D_i$ is disjoint from the attaching loci of $H_1^{\ast,+}$ and $H_2^{\ast,-}$, where $\ast = i$ or $i-1$.
\end{itemize}
It follows that the union of all the $D_i$'s and $S_j$'s gives the desired $\Delta'$. Indeed, each $D_i$ contains an $e_i$ and each $S_j$ contains an $h_j$. Now the subdivision of $H_3$ can be done in the same way as in the toy case. Namely, one first attaches the thickened $\Delta'$ to $\partial Y^{(2)}$, and then cones off the two boundary spheres in $B_1$ and $B_2$, respectively.
Finally, we get rid of the original (non-smooth) $2$-handle along $\beta$ by canceling it with (either) one of the adjacent $3$-handles. Applying the above procedure to every component of $\beta$, we replace all the potentially non-smooth $H_2^-$'s by smooth ones, i.e., the attaching locus being a transverse loop, at the cost of introducing extra $H_1^+$'s.
Observe that after all the above steps, the resulting $Y$ can possibly be singular only at critical points of $Y$-index $0$ and $3$, which are precisely the isolated cone singularities. The proof is therefore complete.
\end{proof}
\begin{remark} \label{rmk:necessary coLeg pieces}
The proof of \autoref{prop:coLeg approx} actually provides more information about the approximating regular coLegendrian $Y$ than what is stated in the Proposition. Namely, one can always build such $Y$ using only the following pieces:
\begin{itemize}
\item[(HD1)] Cones over $S^2 \subset (S^3,\xi_{\std})$, which can have $Y$-index $0$ or $3$.
\item[(HD2)] Positive turbine $1$-handles (cf. Substep 2.3).
\item[(HD3)] Negative $1$-handles as described in \autoref{subsubsec:negative H_1}.
\item[(HD4)] Negative turbine $2$-handles (cf. Substep 2.4).
\end{itemize}
\end{remark}
\subsection{Regular coLegendrians with boundary} \label{subsec:coLeg with bdry}
Suppose $Y \subset (M^5,\xi)$ is a compact $3$-submanifold with Legendrian boundary such that the normal bundle $T_Y M$ is trivial. We say $Y$ is a \emph{regular coLegendrian with boundary} if there exists a Morse hypersurface $\Sigma \supset Y$ such that $\partial Y$ is a regular Legendrian with respect to $\Sigma_{\xi}$ and $Y$ is regular as in the closed case. Sometimes we also simply say $Y$ is regular if there is no risk of confusion.
The goal of this subsection is to prove the following relative analog of \autoref{prop:coLeg approx}.
\begin{prop} \label{prop:coLeg approx with bdry}
Suppose $Y \subset (M^5,\xi)$ is a compact $3$-submanifold with smooth Legendrian boundary such that $T_Y M$ is trivial. Then $Y$ can be $C^0$-approximated, relative to $\partial Y$, by a regular coLegendrian with isolated cone singularities in the interior.
\end{prop}
\begin{proof}
The proof is divided into two steps. The first step is to normalize a collar neighborhood of $\partial Y$ in $Y$ such that $Y$ becomes regular near the boundary. Once this is done, the second step, which is to perturb the interior of $Y$, works in the same way as in the closed case.
\vskip.1in\noindent
\textsc{Step 1.} \textit{Build a collar neighborhood of $\partial Y \subset Y$ by partial coLegendrian handles.}
\vskip.1in
Recall that a closed regular coLegendrian can be built out of the coLegendrian handles described in \autoref{subsec:coLeg handles}. In fact, according to \autoref{rmk:necessary coLeg pieces}, only a sub-collection of the handles is needed. In the relative case, one needs, in addition, \emph{partial handles} which can be obtained by cutting a coLegendrian handle in half along a (Legendrian) leaf passing through the critical point. Instead of exhausting all possible partial handles, we focus on describing those which we will need to construct the collar neighborhood of $\partial Y$. The following list is in line with (HD1)--(HD3) in \autoref{rmk:necessary coLeg pieces}.
\begin{itemize}
\item[(PHD1)] Partial handles $PH_0$ and $PH_3$ of index $0$ and $3$, respectively, are modeled on cones over a $2$-disk $D \subset (S^3,\xi_{\std})$ such that $\partial D$ is the standard Legendrian unknot.
\item[(PHD2)] Model a positive turbine $1$-handle $H_1^+ \cong B^1_x \times B^2_{y,z}$ such that the Legendrian foliation $$\mathcal{F}_{H_1^+}=\ker(ydz-zdy).$$ Then a positive turbine partial $1$-handle $PH_1^+$ is modeled on $H_1^+ \cap \{z \geq 0\}$.
\item[(PHD3)] Model a negative $1$-handle $H_1^- \cong B^1_x \times B^2_{y,z}$ such that the Legendrian foliation
\begin{equation} \label{eqn:Leg foliation on negative partial H1}
\mathcal{F}_{H_1^-} = \ker(xdy+2ydx).
\end{equation}
Then the negative partial $1$-handle $PH_1^-$ is modeled on $H_1^- \cap \{y \geq 0\}$.
\end{itemize}
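For orientation, the leaves of the two model foliations above can be described explicitly; the following is a routine check which we record for the reader's convenience. In polar coordinates $(r,\theta)$ on $B^2_{y,z}$, one has $ydz-zdy = r^2 d\theta$, so the leaves of $\mathcal{F}_{H_1^+}$ are the half-planes
\begin{equation*}
\{ \theta = \text{const} \} \subset B^1_x \times B^2_{y,z},
\end{equation*}
arranged around the singular locus $B^1_x \times \{0\}$ like the blades of a turbine, which explains the terminology. Similarly, since $d(x^2y) = x(xdy+2ydx)$, the function $x^2y$ is a first integral of \autoref{eqn:Leg foliation on negative partial H1} away from $\{x=0\}$, and the leaves of $\mathcal{F}_{H_1^-}$ are the level sets $\{x^2y = \text{const}\}$.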
Note that the boundary of any partial handle naturally splits into two pieces: the \emph{tangential boundary} which is nothing but the Legendrian leaf along which the coLegendrian is cut apart, and the \emph{transverse boundary} which is transverse to the (restricted) characteristic foliation. In what follows, we denote the tangential boundary of a partial handle $PH_{\ast}^{\ast}$ by $\partial_L PH_{\ast}^{\ast}$, and the transverse boundary by $\partial_{\tau} PH_{\ast}^{\ast}$.
Using the partial handles described above, let's construct a regular collar neighborhood of $\partial Y$ as follows. Suppose $\partial Y$ is a (closed) genus $g$ surface. Note that $\partial Y \subset \Sigma$ has self-intersection number equal to zero. It follows from \autoref{lem:Leg self intersection number} that $\partial Y$ is balanced, i.e., $\chi(R_+(\partial Y)) = \chi(R_-(\partial Y))$. Hence we will build a collar neighborhood of $\partial Y$ using one $PH_0$, $g$ copies each of $PH_1^+$ and $PH_1^-$, and one $PH_3$, such that the tangential boundaries of these handles glue together to a (balanced) regular Legendrian structure on $\partial Y$.
To spell out more details, let $\partial _{\tau}^c PH_{\ast}^{\ast}$ be the intersection of $\partial_{\tau} PH_{\ast}^{\ast}$ with the collar neighborhood of $\partial Y$. First, identify $\partial_{\tau}^c PH_0 = \mathbb{R}/(6g-2)\mathbb{Z} \times [0,1]$ such that $\mathbb{R}/(6g-2)\mathbb{Z} \times \{0\} = \partial_{\tau} PH_0 \cap \partial_L PH_0$. One can arrange the characteristic foliation $(\partial_{\tau}^c PH_0)_{\xi}$ on $\partial_{\tau}^c PH_0$ such that the following hold.
\begin{itemize}
\item The critical points of $(\partial_{\tau}^c PH_0)_{\xi}$ are the following:
\begin{itemize}
\item half sources at $(4i,0)$ and half sinks at $(4i+2,0)$ for $0 \leq i \leq g-1$.
\item positive half saddles at $(4g+2j+1,0)$ and negative half saddles at $(4g+2j,0)$ for $0 \leq j \leq g-1$.
\end{itemize}
\item The stable manifolds of the positive half saddles and the unstable manifold of the negative half saddles are contained in $\mathbb{R}/(6g-2)\mathbb{Z} \times \{0\}$.
\end{itemize}
In particular $\partial_{\tau}^c PH_0$ is not Morse$^+$ due to the presence of flow lines from negative half saddles to positive ones. Nevertheless, one can compute using \autoref{eqn:compute 1-framing} that $\tb(\partial_{\tau} PH_0 \cap \partial_L PH_0)=-1$, i.e., the link of the $\partial Y$-index $0$ critical point is the standard Legendrian unknot. See \autoref{fig:Leg foliation near boundary}.
Next, observe that the characteristic foliation on $\partial_{\tau} PH_1^+$ consists of a half source, a half sink, and all flow lines travel from the half source to the half sink. Moreover, the attaching region $\partial_- PH_1^+ \cap \partial_{\tau} PH_1^+$ has two components, which are neighborhoods of the half source and the half sink, respectively. It follows that the $(i+1)$-th $PH_1^+$ can be attached to $PH_0$ such that the half source matches with the half source on $\partial_{\tau}^c PH_0$ at $(4i,0)$, and the half sink matches with the half sink at $(4i+2,0)$, for $0 \leq i \leq g-1$. Similarly for negative $1$-handles, observe that the characteristic foliation on $\partial_- PH_1^- \cap \partial_{\tau} PH_1^-$ is a square in the sense of Substep 2.5 in the proof of \autoref{prop:coLeg approx}, i.e., all leaves are parallel to one of the sides. It follows that the $(i+1)$-th $PH_1^-$ can be attached to $PH_0$ along squares on $\partial_{\tau}^c PH_0$ near $(4i+1,0)$ and $(4i+3,0)$ for $0 \leq i \leq g-1$. See \autoref{fig:Leg foliation near boundary}.
\begin{figure}[ht]
\begin{overpic}[scale=.3]{leg_fol_near_bdry.eps}
\end{overpic}
\caption{A collar neighborhood of $\partial Y$ in the case $g=2$. The bottom sheet (gray) represents $\partial Y$, at least away from the $\partial Y$-index $2$ handle. The characteristic foliation on $\partial_{\tau}^c PH_0$ is drawn in orange (except on overlaps); on the $\partial_{\tau}^c PH_1^+$'s, in blue; and on the $\partial_{\tau}^c PH_1^-$'s, in green.}
\label{fig:Leg foliation near boundary}
\end{figure}
After attaching the partial $1$-handles above, it remains to cap it off with the last partial $3$-handle $PH_3$ such that the characteristic foliation on $\partial_{\tau}^c PH_3$ matches with the one on
\begin{equation*}
\partial_{\tau}^c \left( PH_0 \cup (\cup_{1 \leq i \leq g} PH_1^+) \cup (\cup_{1 \leq i \leq g} PH_1^-) \right).
\end{equation*}
Moreover, one can check (using \autoref{eqn:compute 1-framing}) that the link of the $\partial Y$-index $2$ critical point (which has $Y$-index $3$) is the standard Legendrian unknot, as expected. Indeed, the characteristic foliation on $\partial_{\tau}^c PH_3$ has precisely $2g-1$ positive half saddles and $2g-1$ negative half saddles, all of which lie on $\partial_{\tau} PH_3 \cap \partial_L PH_3$, i.e., the link of the $\partial Y$-index $2$ critical point.
\vskip.1in\noindent
\textsc{Step 2.} \textit{Modify the collar neighborhood of $\partial Y$ to achieve a transverse inner boundary condition.}
\vskip.1in
Identify the regular collar neighborhood of $\partial Y$ constructed above with $\partial Y \times [0,1]$ such that $\partial Y$ is identified with $\partial Y \times \{0\}$. A problem with the above construction is that the restricted characteristic foliation $(\partial Y \times [0,1])_{\xi}$ is \emph{not} transverse to the \emph{inner} boundary $\partial Y \times \{1\}$. Indeed, $(\partial Y \times [0,1])_{\xi}$ points away from $\partial Y$ near the $\partial Y$-index $0$ and $1$ critical points, but points towards $\partial Y$ near the $\partial Y$-index $2$ critical point. The goal of this step is to modify $(\partial Y \times [0,1])_{\xi}$ by adjusting the partial handles such that $\partial Y \times \{1\}$ becomes transverse.
At the end of Step 1, we saw that the characteristic foliation on $\partial_{\tau}^c PH_3$ has $4g-2$ saddles, which we label by $\{ a^1_{\pm},\dots,a^g_{\pm},b^1_{\pm},\dots,b^{g-1}_{\pm} \}$ such that $a^i_{\pm}$ are the two saddles on the boundary of the $i$-th $PH_1^-$, and the $b^i_{\pm}$'s lie on $\partial_{\tau}^c PH_0$. By a $C^0$-small wiggling of $\partial_{\tau}^c PH_0$, viewed as an annulus in $(S^3,\xi_{\std})$, relative to a small collar neighborhood of $\partial_{\tau}^c PH_0 \cap \partial_L PH_0$, one can create (canceling) pairs of critical points above the $b^i_{\pm}$'s as shown in \autoref{fig:perturb char foliation on side}.
\begin{figure}[ht]
\begin{overpic}[scale=.6]{char_fol_perturb_on_side.eps}
\put(11.5,-3.5){\small $b^i_-$}
\put(36.8,-3.5){\small $b^i_+$}
\put(61,-3.5){\small $b^{i+1}_-$}
\put(86.2,-3.5){\small $b^{i+1}_+$}
\put(-2,-2.3){$\dots$}
\put(98.3,-2.3){$\dots$}
\end{overpic}
\vspace{3mm}
\caption{Modify the characteristic foliation on $\partial_{\tau}^c PH_0$. Positive critical points are marked in red and the negative ones are marked in blue. The gray band represents the small collar neighborhood of $\partial_{\tau}^c PH_0 \cap \partial_L PH_0$, away from which the perturbation is supported.}
\label{fig:perturb char foliation on side}
\end{figure}
We claim that a similar pattern of the creation of critical points can be achieved on each $\partial_{\tau}^c PH_1^-$ as well. Indeed, following (PHD3), let's identify $PH_1^- = H_1^- \cap \{y \geq 0\}$. Observe that the ambient contact form restricts to a Liouville form on the $xy$-plane (cf. \autoref{eqn:Leg foliation on negative partial H1}) such that the origin becomes a saddle. Up to a Liouville homotopy, one can create a pair of canceling critical points, one source $e$ and one saddle $h$, on the positive $y$-axis. For definiteness, suppose $e=(0,\tfrac{1}{3},0)$ and $h=(0,\tfrac{2}{3},0)$. As a consequence, $PH_1^-$ splits into $3$ handles, which we call \emph{top, middle} and \emph{bottom}, ordered by the $y$-coordinate, as follows
\begin{equation*}
PH_1^{t,-} \coloneqq PH_1^- \cap \{ \tfrac{1}{2} \leq y \leq 1 \},~ PH_1^{m,-} \coloneqq PH_1^- \cap \{ \tfrac{1}{4} \leq y \leq \tfrac{1}{2} \},\text{ and } PH_1^{b,-} \coloneqq PH_1^- \cap \{ 0 \leq y \leq \tfrac{1}{4} \}.
\end{equation*}
It follows that $PH_1^{t,-}$ is a negative $1$-handle as in (HD3), $PH_1^{m,-}$ is a (negative) turbine $2$-handle, and $PH_1^{b,-}$ is a negative partial $1$-handle, which is isomorphic to the original $PH_1^-$. After applying this modification to all $PH_1^-$'s, the resulting characteristic foliation on $\partial_{\tau}^c PH_3$, away from the strip in $\partial_{\tau}^c PH_0$ which contains all the $b^i_{\pm}$'s, looks exactly like the one in \autoref{fig:perturb char foliation on side} except for the signs of the critical points. Namely, one needs to put $a^i_{\pm}$ in the places of $b^i_{\mp}$, respectively, and arrange so that the critical points which are aligned vertically have the same sign. In \autoref{fig:perturb char foliation on side}, this means that the dots along each vertical line have the same color, which can be determined by the condition that a source is always red and a sink is always blue.
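Returning to the earlier claim that the origin is a saddle for the Liouville form on the $xy$-plane: writing $\lambda \coloneqq xdy+2ydx$, one computes $d\lambda = -dx \wedge dy$, and the Liouville vector field $Z$ defined by $\iota_Z d\lambda = \lambda$ is
\begin{equation*}
Z = -x\partial_x + 2y\partial_y,
\end{equation*}
which indeed has a hyperbolic (saddle) zero at the origin, with stable direction the $x$-axis and unstable direction the $y$-axis. In particular, the canceling pair $e,h$ created above lies along the unstable direction of this saddle.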
The point of the above modifications is that now one can easily find a parallel copy $\gamma \subset \partial_{\tau}^c PH_3$ of the Legendrian unknot $\partial_{\tau}^c PH_3 \cap \partial_L PH_3$, which is transverse to the characteristic foliation. The rest of the argument is essentially identical to Substep 2.5 in the proof of \autoref{prop:coLeg approx}, where $\gamma$ plays the role of $\beta''$ there. Namely, one divides $PH_3$ along a $2$-disk bounding $\gamma$ into a partial $3$-handle and a (full) $3$-handle, which we call $H_3$. The $2$-disk itself is turned into a number (depending on $\operatorname{sl}(\gamma)$) of $1$- and $2$-handles.
Finally, let's explain how to make the inner boundary $\partial Y \times \{1\}$ transverse to the modified $(\partial Y \times [0,1])_{\xi}$. Since all the above perturbations are supported in the interior of $\partial Y \times [0,1]$, the relative positions between $\partial Y \times \{1\}$ and $(\partial Y \times [0,1])_{\xi}$ have not changed at all. Namely, the characteristic foliation is outward pointing along the $0$ and $1$-handles on $\partial Y \times \{1\}$ and inward pointing on a $2$-disk $D \subset \partial Y \times \{1\}$ representing the $2$-handle. It suffices to ``turn $(\partial Y \times [0,1])_{\xi}$ inside out'' along $D$. Observe that $D \subset \partial H_3$ is part of a transverse $2$-sphere. Hence the following surface
\begin{equation*}
(\partial Y \times \{1\} \setminus D) \cup (\partial H_3 \setminus D),
\end{equation*}
up to corner rounding, is the desired transverse parallel copy of $\partial Y$.
\vskip.1in\noindent
\textsc{Step 3.} \textit{Make the interior of $Y$ regular.}
\vskip.1in
By the previous steps, we can assume that a collar neighborhood $\partial Y \times [0,1]$ of $\partial Y = \partial Y \times \{0\}$ is regular and $\partial Y \times \{1\}$ is transverse to the (restricted) characteristic foliation. In particular $\partial Y \times \{1\}$ can be viewed as a surface inside a contact $3$-manifold. Furthermore assume by genericity that the characteristic foliation on $\partial Y \times \{1\}$ is Morse$^+$. Then the techniques for making (the interior of) $Y$ regular are the same as in the absolute case (i.e., Step 1 of the proof of \autoref{prop:coLeg approx}) except that one needs \emph{relative C-folds} rather than the usual C-folds in this case as explained in \cite[Section 12]{HH19}.
\end{proof}
\begin{remark}
With a bit more effort, \autoref{prop:coLeg approx with bdry} can be strengthened so that one can prescribe a balanced regular Legendrian structure on $\partial Y$, and the coLegendrian approximation can be made relative to a collar neighborhood of $\partial Y$.
\end{remark}
\begin{remark}
In a different direction, \autoref{prop:coLeg approx with bdry} can be generalized to allow $\partial Y$ to be a singular Legendrian with cone singularities. Cone singularities on a Legendrian $\Lambda$ are defined in a way similar to those on a coLegendrian. Namely, suppose $p_0 \in \Lambda \subset \Sigma$ is a critical point of $\Sigma$-index $0$. Then the link $\link(\Lambda,p_0)$ is a Legendrian unknot in $(S^3,\xi_{\std})$. We say $p_0$ is a \emph{cone singularity} of $\Lambda$. Clearly $\Lambda$ is smooth(able) at $p_0$ if and only if $\tb(\link(\Lambda,p_0))=-1$. For example, if $\tb(\link(\Lambda,p_0))=-2$, then $\Lambda$ is singular at $p_0$, and the singularity is traditionally called an \emph{unfurled swallowtail} (cf. \cite{HW73,EM09}). Note, once again, that the usual unfurled swallowtail refers to the singularity of a (smooth) map, and what we mean here is rather the image of such a map.
\end{remark}
\section{Orange II and overtwistedness} \label{sec:orange II}
In \cite{HH18}, we constructed a singular, but contractible, coLegendrian which we call the \emph{overtwisted orange} $\mathcal{O}$ such that if a contact manifold contains an embedded $\mathcal{O}$, then it is overtwisted. Other coLegendrians which cause overtwistedness include plastikstufe \cite{Nie06} and bLob \cite{MNW13}, at least in dimension $5$ (cf. \cite{Hua17}). In this section, we add yet another overtwisted coLegendrian to the list: the \emph{orange II} $\mathcal{O}_2$, whose main features are the following:
\begin{itemize}
\item $\mathcal{O}_2$ is diffeomorphic to a ball.
\item $\mathcal{O}_2$ can be found in some overtwisted Morse (but \emph{not} Morse$^+$) hypersurfaces considered in \cite{HH18}.
\item The proof that $\mathcal{O}_2$ implies overtwistedness is an immediate consequence of \cite{CMP19}, in contrast to all the other models: plastikstufe, bLob and $\mathcal{O}$, where quite some effort is needed.
\end{itemize}
We continue assuming $\dim M=5$ for the following reason: although, as we will see, the construction of $\mathcal{O}_2$ (and the fact that it implies overtwistedness) can be easily generalized to any dimension, we prefer to view it as a regular coLegendrian, and the foundation of regular coLegendrians, i.e., the higher-dimensional form of \autoref{thm:coLeg approx}, has not yet been fully established.
As a regular coLegendrian, one builds an $\mathcal{O}_2$ by assembling four pieces together: one $0$-handle $H_0$; one $3$-handle $H_3$; one positive half-handle $PH_2^+$ of $\mathcal{O}_2$-index $2$; and one \emph{strange} negative half-handle $PH_1^{s,-}$ of $\mathcal{O}_2$-index $1$. Here $PH_2^+$ and $PH_1^{s,-}$ are described explicitly as follows (the indexing is a continuation of the ones in \autoref{subsec:coLeg with bdry}).
\begin{itemize}
\item[(PHD4)] Model a positive $2$-handle $H_2^+ \cong B^2_{x,y} \times B^1_z$ such that the Legendrian foliation
\begin{equation*}
\mathcal{F}_{H_2^+} = \ker(xdz+2zdx).
\end{equation*}
Then $PH_2^+$ is identified with $H_2^+ \cap \{ z \geq 0 \}$.
\item[(PHD5)] Model a negative $1$-handle $H_1^- \cong B^1_x \times B^2_{y,z}$ such that the Legendrian foliation
\begin{equation*}
\mathcal{F}_{H_1^-} = \ker(xdy+2ydx).
\end{equation*}
Then $PH_1^{s,-}$ is identified with $H_1^- \cap \{ x \geq 0 \}$. This is to be distinguished from the $PH_1^-$ constructed in (PHD3).
\end{itemize}
Now the recipe for assembling $\mathcal{O}_2$ consists of the following four steps:
\begin{enumerate}
\item Prepare a standard $H_0$ in the sense that $\partial H_0$ is a standard $2$-sphere in $(S^3,\xi_{\std})$.
\item Attach $PH_1^{s,-}$ to $H_0$ along a square.
\item Attach $PH_2^+$ to $PH_1^{s,-}$ such that the stable $2$-disk of $PH_2^+$ is glued to the unstable $2$-disk of $PH_1^{s,-}$ to form a (Legendrian) $2$-sphere, which will be $\partial \mathcal{O}_2$.
\item Cap off the boundary component which is \emph{not} $\partial \mathcal{O}_2$ by filling in the standard $H_3$.
\end{enumerate}
In particular, both $H_0$ and $H_3$ are smooth, and so is $\mathcal{O}_2$. Note that the characteristic foliation on $\mathcal{O}_2$ is not Morse$^+$ since when restricted to $\partial \mathcal{O}_2$, all the flow lines go from the \emph{negative} source to the \emph{positive} sink.
The Legendrian foliation $\mathcal{F}_{\mathcal{O}_2}$ can be visualized as shown in \autoref{fig:O2}. In particular, away from $\partial \mathcal{O}_2$, all leaves of $\mathcal{F}_{\mathcal{O}_2}$ are disks except for one leaf which is an annulus. Since the contact germ on $\mathcal{O}_2$ is completely determined by $\mathcal{F}_{\mathcal{O}_2}$, one can forget about the regular coLegendrian structure and only remember the Legendrian foliation. In this case, one needs to specify the \emph{co-orientations} of the singular loci of $\mathcal{F}_{\mathcal{O}_2}$ as follows. For definiteness, let's identify $\mathcal{O}_2$ with the unit $3$-ball in $\mathbb{R}^3_{x,y,z}$ such that the vertical axis as shown in \autoref{fig:O2} is identified with the $z$-axis. Let $P \coloneqq \{ x \geq 0,y=0 \} \subset \mathbb{R}^3$ be a half-plane. Then $\mathcal{O}_2 \cap P$ is a half-disk equipped with a line field with two critical points, one elliptic $e$ in the interior and one half-hyperbolic $h$ on the boundary. \emph{We require $d\alpha$, restricted to the half-disk, to have opposite signs at $e$ and $h$.} The individual signs are irrelevant, and depend on a choice of the contact form $\alpha$ and the orientation of the half-disk. It is in this form that $\mathcal{O}_2$ can be generalized to arbitrary dimensions in the obvious way.
\begin{figure}[ht]
\begin{overpic}[scale=.3]{O2.eps}
\end{overpic}
\caption{A cross section of the Orange II, which can be obtained by spinning about the blue axis.}
\label{fig:O2}
\end{figure}
It remains to show that $\mathcal{O}_2$ implies overtwistedness.
\begin{prop} \label{prop:orange II is OT}
If there exists $\mathcal{O}_2 \subset (M,\xi)$, then $\xi$ is overtwisted.
\end{prop}
\begin{proof}
To facilitate the exposition, let's introduce some notations as follows. Let $A$ be the annulus leaf of $\mathcal{F}_{\mathcal{O}_2}$; $D_1,D_2$ be two disk leaves in the interior; and $N,S \subset \partial \mathcal{O}_2$ be the northern and southern hemispheres separated by $\partial A \cap \partial \mathcal{O}_2$.
The proof involves considering three Legendrian $2$-spheres. The first one is $\Lambda \coloneqq D_1 \cup D_2$. Throughout the proof we will skip the step of rounding the corners when it is obvious. Clearly $\Lambda$ is the standard Legendrian unknot. The second one is $\Lambda' \coloneqq D_1 \cup A \cup S$, which we claim to be Legendrian isotopic to $\Lambda$. Indeed, this follows from the observation that $D_2$ and $A \cup S$ cobound a $3$-ball in $\mathcal{O}_2$ which is foliated by Legendrian disks with the same boundary $\partial D_2$. Finally, the third one is $\partial \mathcal{O}_2$, which we claim to be a ``destabilization'' of $\Lambda'$, i.e., $\Lambda'$ is a stabilization of $\partial \mathcal{O}_2$. This is most easily seen through the front projection. Namely, on each cross section, $\Lambda'$ is obtained from $\partial \mathcal{O}_2$ by one positive and one negative stabilization. See \autoref{fig:O2 in front}.
\begin{figure}[ht]
\begin{overpic}[scale=.4]{O2_stab.eps}
\end{overpic}
\caption{The front view of a cross section of $\mathcal{O}_2$ between $\Lambda'$ and $\partial \mathcal{O}_2$. The red dots indicate positive critical points, while the blue dots indicate negative ones.}
\label{fig:O2 in front}
\end{figure}
To summarize, we have shown that the standard Legendrian unknot $\Lambda$ is a stabilization of $\partial \mathcal{O}_2$. The proposition now follows from \cite{CMP19}.
\end{proof}
\begin{remark}
\autoref{prop:orange II is OT} holds in any dimension with the same proof. In particular, in dimension $3$, $\partial \mathcal{O}_2$ is a $\tb=1$ Legendrian unknot.
\end{remark}
\section{Introduction}
With the rapid development of online payment, credit card fraud continues to climb to new heights, leading to losses totaling billions of US dollars each year [1]. To address this issue, banks and online payment companies turn to anti-fraud systems to detect illegitimate transactions. Among the numerous techniques considered, machine learning (ML) and deep learning (DL) are attracting much attention due to their powerful predictive ability as fraud detection systems [2][3]. Generally, for ML/DL-based methods, the models' input features are card transaction data, such as the identity of the card holder and the amount of funds, while the outputs are confidence scores that form a probability space for determining whether a transaction is genuine or fraudulent.
Credit card fraud detection, however, is not a typical classification task since fraudulent transactions are extremely rare among all transactions. Specifically, the dataset is highly skewed, which often causes models to perform poorly on the rare cases. Some works directly apply traditional ML-based methods [4][5][6][7], for which a large dataset is inessential. In terms of DL-based approaches, attempts at high-quality data augmentation [8] and the use of key features [9][10] have been made, while only a few studies [11] treat this problem as anomaly detection, which naturally avoids the imbalance problem and leaves more room for improvement.
Another crucial topic for DL-based credit card fraud detection systems is the model explainability [12], a research hotspot in the ML community. Considering the black-box property of neural networks, it is desirable to give an explanation of why a transaction is detected as fraudulent for official reports in court. Though various techniques for different levels and perspectives of explanations have been proposed [13][14][15][16][17], investigation into their applications of anomaly detection methods are lacking.
In this paper, we make the first attempt to leverage a novel anomaly detection framework [18] along with a LIME-based [13] explanation module in the realm of credit card fraud detection. The anomaly detection framework is adopted for its promising performance on detecting irregular images, while LIME is chosen because of its focus on a single instance of interest, which in this case may be a fraudulent transaction. The detection module is adjusted to match the properties of tabular financial data, and experiments comparing it with other baseline methods are carried out. LIME explainers are deployed at different parts of the detection module to monitor and inspect the whole prediction pipeline. In summary, the contributions of this work are twofold:
\begin{enumerate}
\item Modifying and utilizing a novel yet simple anomaly detection architecture to cope with the credit card fraud detection problem, achieving state-of-the-art performance in comparison with other cutting-edge or iconic methods.
\item Applying LIME to give explanations to different input-output relations for a certain transaction of interest in the detection module.
\end{enumerate}
\section{Related Work}
This paper puts effort into the imbalance issue of credit card fraud detection datasets and the interpretability of the utilized anomaly detection framework. In this section, we shall discuss several benchmark methods for imbalanced classification and explainable AI.
\subsection{Imbalanced Classification}
Datasets for detection of fraudulent transactions are inherently skewed because normal cases significantly outnumber dishonest ones. The class imbalance problem greatly increases the difficulty of neural network training. There are two main solutions: (1) manipulating data to balance the dataset and (2) using algorithms like weighted loss function or one-class classification (OCC).
The first approach includes undersampling of the majority class and oversampling of the minority class. The former deletes excessive data points, while the latter generates synthetic data whose distribution is similar to that of the real minority-class data, which is much more complicated yet widely adopted. For data synthesis, SMOTE [19] is the most common technique; it utilizes the k-nearest neighbors algorithm to produce samples close to the minority-class data in the feature space. GAN-based [20] upsampling models [8] have also been broadly studied due to their impressive performance in generating vivid images.
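To make the interpolation idea behind SMOTE concrete, the following is a minimal numpy sketch; the function name, parameters, and toy data are ours for illustration (production code would typically use an off-the-shelf implementation rather than this sketch):

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    # For each synthetic sample: pick a random minority point, pick one of its
    # k nearest minority neighbors, and interpolate between the two.
    rng = np.random.default_rng(rng)
    n = len(X_min)
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                       # exclude self-distance
    nn = np.argsort(d, axis=1)[:, :k]                 # k nearest neighbors per point
    base = rng.integers(0, n, size=n_new)             # starting minority points
    nbr = nn[base, rng.integers(0, k, size=n_new)]    # one neighbor for each
    gap = rng.random((n_new, 1))                      # interpolation factor in [0, 1)
    return X_min[base] + gap * (X_min[nbr] - X_min[base])

X_min = np.random.default_rng(0).normal(size=(20, 4))  # toy minority class
X_syn = smote(X_min, n_new=80)
print(X_syn.shape)  # (80, 4)
```

Since every synthetic point lies on a segment between two real minority points, SMOTE never extrapolates outside the minority region, which is both its strength and a known limitation.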
In terms of algorithm-based approaches, weighting of the majority and minority classes is usually implemented by weighted loss functions. Among them, Focal Loss [21] is one of the most iconic works, which encourages networks to down-weight easy examples and focus on learning hard negative examples. One-class classification, also known as anomaly detection from an application point of view, merely uses the majority class during training. Conventional ML anomaly detection methods mainly include linear models, such as OCSVM [22], proximity-based models, such as OCNN [23], and probabilistic models, such as COPOD [24]. OCSVM is an SVM trained in an unsupervised manner, trying to find a hyperplane that well encloses the only class in the training set and excludes potential outliers. OCNN is the one-class k-nearest neighbors method, which calculates the averaged distance to the k nearest data points as the outlier score. COPOD is a copula-based non-parametric statistical approach that can directly provide some decision explanations.
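The down-weighting mechanism of the binary focal loss can be sketched in a few lines of numpy; the hyperparameter values below follow common defaults and the toy predictions are invented, so this is an illustration of the principle rather than a reference implementation:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    # Binary focal loss: scales cross-entropy by (1 - p_t)^gamma so that
    # well-classified examples (p_t near 1) contribute almost nothing.
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)           # probability of the true class
    a_t = np.where(y == 1, alpha, 1 - alpha)   # class-balancing weight
    return -np.mean(a_t * (1 - p_t) ** gamma * np.log(p_t))

y = np.array([1, 0, 0, 0])                     # rare positive (fraud) class
p_easy = np.array([0.99, 0.01, 0.02, 0.01])    # confident, correct everywhere
p_hard = np.array([0.30, 0.01, 0.02, 0.01])    # one hard positive example
print(focal_loss(p_easy, y) < focal_loss(p_hard, y))  # True
```

The hard positive dominates the loss, which is exactly the behavior that helps on skewed fraud data: the abundant easy negatives are effectively silenced.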
AnoGAN [25] first utilized adversarial training for this purpose: the generator learns to produce samples whose distribution resembles that of the major-class data (inliers), so the discriminator can tell whether an input comes from the minor class (outliers) by comparing its distribution with that of the generated samples. OCAN [11] uses a novel LSTM-AutoEncoder for input feature representation and then conducts a prediction process similar to AnoGAN. Both models apply a generator that produces fake features from sampled noise. However, as an intrinsic defect of GANs, this noise-to-sample generation process is unstable. Reference [18] replaces the original noise-to-sample generator with an AutoEncoder for sample reconstruction, a design adopted by many subsequent works [26][27]. For example, both GANomaly [26] and Skip-GANomaly [27] use an AutoEncoder as part of the generator: the former introduces an extra encoder for latent space restriction, while the latter features the use of skip-connections.
\subsection{Explainable AI}
Explainable AI (XAI) is an emerging field in the AI community which aims at allowing human users to comprehend and trust model outputs. Though this domain is relatively new, broad and comprehensive studies [12] have been done. This paper categorizes these methods based on the scope of explanations: (1) local interpretability and (2) global interpretability.
Local interpretability refers to explanations of decisions a model makes about an instance. Methods in this category either visualize or plot the importance of each input feature to the final output. The saliency map [14], which has long been present and used in local explanations, is a gradient-based visualization approach that calculates each pixel's attribution. LIME [13] proposes a concrete framework for implementations of local surrogate models. Given an instance of interest and a dataset, LIME first generates new data points by perturbing the dataset and getting the black-box predictions for these new data points. Subsequently, a simple transparent model, such as a decision tree, is trained on this dataset, which then should be a good approximation of the target model around this instance. Therefore, analysis on this explainable proxy model could be easily carried out to provide human users with a clear view on how each input feature of this particular instance leads to final outputs made by the black-box model.
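The local-surrogate idea behind LIME can be sketched in numpy: perturb around the instance of interest, query the black box, weight the perturbations by proximity, and fit a weighted linear model whose coefficients serve as feature attributions. This is a simplified illustration (Gaussian perturbations and a linear surrogate instead of LIME's full sampling and model-selection machinery; the function name is our own):

```python
import numpy as np

def lime_explain(predict_fn, x, n_samples=2000, width=1.0, rng=0):
    """LIME-style sketch: fit a locally weighted linear surrogate around x.
    Returns one coefficient per feature (its local attribution)."""
    rng = np.random.default_rng(rng)
    Z = x + rng.normal(scale=width, size=(n_samples, x.size))  # perturbations
    y = predict_fn(Z)                                          # black-box outputs
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (2 * width ** 2))                # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])                # add intercept
    W = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * W, y * W.ravel(), rcond=None)
    return coef[:-1]                                           # drop intercept
```

When the black box happens to be linear, the surrogate recovers its coefficients exactly; for a nonlinear model, the coefficients describe its local behaviour around `x`.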
Global interpretability answers how model parameters affect the final prediction; it explains the whole model rather than a single input instance. Activation Maximization (AM) methods [15][16] visualize neural networks by finding, via gradient ascent, the input probing image that maximally activates certain neurons. In particular, reference [16] proposes to use generative models, where the generated images are fed into the target model for classification; we can then examine what kinds of noise vectors and corresponding generated images lead to a certain prediction result. Subsequent works [17] have achieved image-level, fine-grained visual explanations for convolutional neural networks (CNNs).
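The gradient-ascent loop at the core of Activation Maximization can be illustrated on a toy case: a single linear neuron a(x) = w.x with the input constrained to the unit sphere, for which the optimum is known to be w normalized (this is only a didactic sketch, not the image-space procedure of [15][16]):

```python
import numpy as np

def activation_maximization(w, steps=200, lr=0.1, rng=0):
    """Gradient-ascent sketch of Activation Maximization for a single
    linear neuron a(x) = w.x, with x kept on the unit sphere."""
    rng = np.random.default_rng(rng)
    x = rng.normal(size=w.size)
    x /= np.linalg.norm(x)
    for _ in range(steps):
        x = x + lr * w                  # gradient of w.x w.r.t. x is w
        x /= np.linalg.norm(x)          # project back to the unit sphere
    return x
```

In the image setting, `x` would be the probing image, the analytic gradient would be replaced by backpropagation through the network, and the norm constraint by an image prior or a generative model.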
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{structure}
\caption{Structure of our method.}
\end{figure}
\section{Methodology}
The proposed framework comprises two modules: (1) the anomaly detection model and (2) the model explainers. The former's architecture is mainly derived from [18], which consists of an AutoEncoder for input-output reconstruction and a fully-connected classifier for fraud detection. The two networks are trained in an adversarial and unsupervised manner so as to cope with the class imbalance issue, and some adjustments are applied on account of specific properties of this credit card fraud detection task. The model explainers provide users with a clear perspective on how the network output is influenced by input features. We choose LIME to analyze a single transaction instance of interest. The overview of the proposed structure is shown in Fig 1.
\subsection{Anomaly Detection Model - AutoEncoder}
Capable of reconstructing and denoising input samples [18][26][27], AutoEncoders have been widely used in novelty detection tasks. In this paper, the target and non-target classes are genuine and fraudulent transactions, respectively. For the low-dimensional tabular financial transaction data, we adopt an AutoEncoder for plain reconstruction, i.e., no extra Gaussian noise is added to the input.
The model is composed of fully-connected layers. Batch Normalization (BN) and Rectified Linear Unit (ReLU) are utilized to ensure stable gradient flows. Given that fraudulent transactions are extremely rare, we only train the AutoEncoder with genuine transactions. Specifically, the model is trained to preserve the distribution of target-class inputs, while the outliers, namely, non-target class, will be naturally mapped to an uncertain point in the latent space and fail to be reconstructed due to their absence from the training set. To achieve this goal, two loss functions are presented:
\begin{equation}
L_\mathcal{R} = \|\mathcal{R}(X) - X\|_2,
\end{equation}
\begin{equation}
L_\mathcal{R}^{GAN} = -\log \mathcal{C}(\mathcal{R}(X)).
\end{equation}
Equation (1) is the reconstruction loss, which we instantiate as the L2 loss (a choice justified experimentally later). $\mathcal{R}$ denotes the reconstructor, that is, the AutoEncoder, and $X$ is the transaction data. Equation (2), which is part of the adversarial loss, is the binary cross entropy term that guides the AutoEncoder to produce outputs with the same distribution as the inputs and thereby confuse the discriminator. $\mathcal{C}$ stands for the classifier. The overall training objective of the AutoEncoder is then:
\begin{equation}
\min_{\mathcal{R}}\ \E_{X \sim p_t}\left[\|\mathcal{R}(X) - X\|_2 - \log \mathcal{C}(\mathcal{R}(X))\right],
\end{equation}
where $p_t$ denotes the distribution of genuine transactions.
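The loss terms above are straightforward to compute; the following numpy sketch evaluates them on a batch, given the reconstructed features and the classifier's outputs on them (the function names are ours, and the networks themselves are abstracted away):

```python
import numpy as np

def reconstruction_loss(X, X_rec):
    """Eq. (1): L2 reconstruction loss, averaged over the batch."""
    return np.mean(np.linalg.norm(X_rec - X, axis=1))

def adversarial_ae_loss(c_on_rec, eps=1e-7):
    """Eq. (2): -log C(R(X)) -- pushes the AutoEncoder to fool the classifier."""
    return np.mean(-np.log(np.clip(c_on_rec, eps, 1.0)))

def ae_objective(X, X_rec, c_on_rec):
    """Eq. (3): the AutoEncoder minimizes reconstruction plus adversarial terms."""
    return reconstruction_loss(X, X_rec) + adversarial_ae_loss(c_on_rec)
```

A perfect reconstruction that fully fools the classifier (C(R(X)) = 1) drives the objective to zero; any reconstruction error or classifier confidence below 1 increases it.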
\subsection{Anomaly Detection Model - Classifier}
The classifier is a simple Multilayer Perceptron (MLP) that aims to separate original transaction data from the reconstructed data. Its output is a single value representing the probability of the target class. The binary cross entropy loss for the classifier is then:
\begin{equation}
L_{\mathcal{C}}^{GAN} = -\log \mathcal{C}(X) - \log\left(1- \mathcal{C}(\mathcal{R}(X))\right).
\end{equation}
The adversarial training objective in the whole anomaly detection model can then be summarized as:
\begin{equation}
\min_{\mathcal{R}}\max_{\mathcal{C}}\ \E_{X \sim p_t}\left[\log \mathcal{C}(X) + \log\left(1- \mathcal{C}(\mathcal{R}(X))\right)\right].
\end{equation}
Note that training ends when the AutoEncoder and the classifier reach a Nash equilibrium and the reconstruction loss is small. At that point the reconstruction is accurate and the classifier outputs values around 0.5 for both original and reconstructed features, indicating that their distributions are too close to be distinguished. Detecting a fraudulent transaction then reduces to a simple threshold test: a fraudulent input is mapped to an unknown distribution when passing through the AutoEncoder, which drives the classifier output far from 0.5. The threshold is set to 0.7 in this work.
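The resulting decision rule is a one-liner; the sketch below makes the threshold explicit (the function name is ours):

```python
import numpy as np

THRESHOLD = 0.7  # as set in this work

def is_fraud(classifier_output, threshold=THRESHOLD):
    """Flag a transaction as fraudulent when the classifier output drifts
    far above the ~0.5 equilibrium value reached on genuine data."""
    return np.asarray(classifier_output) > threshold
```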
\subsection{Model Explainer}
In terms of the explainability of our one-class fraud detection framework, we are particularly interested in interpreting (1) how the reconstruction stage is affected by genuine/fraudulent transaction data, (2) how the resulting reconstructed features bring about the final prediction results, and (3) the overall credit card fraud detection module. Therefore, three LIME-based explainers, which are simple white-box models, are employed to interpret these three input-output relations. According to their functions, the models are named the AE explainer, the C explainer, and the general explainer, respectively, as demonstrated in Fig 1.
As a model-agnostic technique, LIME only requires the target model to be a classifier or regressor. Thus, the C and general classification explainers can be directly trained with samples drawn from the distribution of the given dataset, whereas the vector-valued output of the AutoEncoder prevents this for the AE explainer. To address this problem, we compute a single reconstruction-error value as the label for training the AE regression explainer. This is equivalent to (1) up to a monotonic transformation; we state the formula here to clarify how the label is calculated:
\begin{equation}
label = \frac{\sum\limits_{i = 1}^n{(\mathcal{R}(X)_i - X_i)^2}}{n},
\end{equation}
where $n$ is the number of features. With this step, the AE explainer becomes an explainable regression model whose output denotes the reconstruction error.
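The label of (6) is the per-sample mean squared reconstruction error; a direct numpy transcription (function name ours):

```python
import numpy as np

def ae_explainer_label(X, X_rec):
    """Eq. (6): per-sample mean squared reconstruction error, used as the
    regression target when training the AE explainer."""
    X, X_rec = np.asarray(X, float), np.asarray(X_rec, float)
    return np.mean((X_rec - X) ** 2, axis=-1)
```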
\setlength{\arrayrulewidth}{0.5pt}
\setlength{\tabcolsep}{25pt}
\renewcommand{\arraystretch}{1.5}
\begin{table*}[t]
\centering
\caption{Credit card fraud detection results on accuracy, precision, recall, F1-score, and Matthews correlation coefficient}
\begin{tabular}{c c c c c c}
\hline
Methods & Accuracy & Precision & Recall & F1-score & MCC \\
\hline
OCSVM & 0.8898 & 0.9204 & 0.8673 & 0.8931 & 0.7811 \\
OCNN & 0.9000 & 0.9122 & 0.8904 & 0.9012 & 0.8002 \\
COPOD & 0.8388 & 0.7224 & 0.9415 & 0.8175 & 0.6967 \\
AutoEncoder & 0.8000 & \textbf{0.9245} & 0.7402 & 0.8221 & 0.6195 \\
OCAN & 0.8806 & 0.8061 & \textbf{0.9472} & 0.8710 & 0.7698 \\
Ours & \textbf{0.9061} & 0.9216 & 0.8878 & \textbf{0.9044} & \textbf{0.8128} \\
\hline
\end{tabular}
\end{table*}
\section{Experiments}
\subsection{Experimental setup}
\subsubsection{Datasets}
We test the performance of our anomaly detection framework on a benchmark credit card fraud detection dataset\footnote{https://www.kaggle.com/mlg-ulb/creditcardfraud}. The dataset contains 284,807 credit card transactions, collected in Europe during a 2-day period in September 2013. There are only 492 fraudulent cases (minority class, labeled as 1), accounting for 0.172\%, and the other 284,315 cases are genuine transactions (majority class, labeled as 0), which makes the dataset highly imbalanced. Each transaction has 30 features, 28 of which are principal components obtained from PCA; the other two are time and amount. The 28 features are listed as V1 to V28, and further information is not provided because of confidentiality issues. The time feature denotes the elapsed time between the current and first transaction, which is considered in temporal-based models [9][10]. The amount feature is the amount of money in the transaction, and it is used in cost-sensitive learning. We only utilize the 28 PCA features in this work, as in [11]. Since the data have been normalized before PCA, we do not clean them again before training. As the original dataset is not split into training and testing sets, we select 490 of the 492 fraudulent cases and 490 of the 284,315 genuine cases to form a well-balanced testing set; the remaining 283,825 genuine cases form the training set for our model. All the baselines are tested on this testing set, while training sets for different methods may differ.
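The split described above is easy to reproduce; the following sketch builds the class-balanced testing set and the all-genuine training set from a feature matrix and label vector (the function name and the shuffling choice are ours):

```python
import numpy as np

def make_splits(X, y, n_test_per_class=490, rng=0):
    """Build a class-balanced testing set and an all-genuine training set.
    Labels: 1 = fraud (minority), 0 = genuine (majority)."""
    rng = np.random.default_rng(rng)
    fraud = rng.permutation(np.flatnonzero(y == 1))
    genuine = rng.permutation(np.flatnonzero(y == 0))
    test_idx = np.concatenate([fraud[:n_test_per_class],
                               genuine[:n_test_per_class]])
    train_idx = genuine[n_test_per_class:]   # remaining genuine cases only
    return X[train_idx], X[test_idx], y[test_idx]
```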
\subsubsection{Anomaly detection methods}
We compare our framework with five baseline models: OCSVM, OCNN, COPOD, vanilla AutoEncoder, and OCAN. For the first four baselines, we use implementations provided in PyOD [28], which is a renowned high-quality toolkit for novelty detection. OCAN is not included in this package, so we retrieve its open-source original implementation\footnote{https://github.com/PanpanZheng/OCAN}. In terms of training parameters, OCSVM, COPOD, AutoEncoder, and OCAN are trained with 700 genuine cases. OCNN has no training stage, but an evaluation set is required for hyper-parameter tuning. Thus, we randomly pick 25 genuine and 25 fraudulent cases for evaluation and subsequently determine k = 5. All hyper-parameters are manually adjusted to reach the best performances if no default values are provided by the original paper or source code.
Our model is trained on the whole training set for 2 epochs. The batch size is 4096, and the learning rate is 2e-4. Weight initialization is applied to ensure a good starting point. The threshold is 0.7, and a classifier output above this value is identified as an illegal transaction.
\subsubsection{LIME}
We use LIME's open-source implementation\footnote{https://github.com/marcotcr/lime} to build the three explainers. The AE explainer is a regression model, whose label is defined in (6). The C and general explainers are classifiers, whose label is the output of the fraud detection module.
Firstly, we select a single fraudulent case from the testing set. Secondly, 5000 data points are produced by sampling from the distribution of the class-balanced testing set, and their corresponding outputs by our anomaly detection model are used as labels. Finally, these data points and labels serve as the training set to build a simple explainable model, into which we can feed the original fraudulent case to obtain its prediction explanation.
\subsection{Results}
\subsubsection{Anomaly Detection Performance}
The performance measures include accuracy, precision, recall, F1-score, and Matthews correlation coefficient (MCC). In particular, F1-score and MCC are balanced binary classification measures that take the entire confusion matrix into account. Precision indicates how likely a case detected as fraudulent is indeed illegitimate. Recall denotes the proportion of fraudulent cases that are successfully detected. The testing results are displayed in Table 1.
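All five measures derive from the confusion matrix; a compact numpy sketch (standard textbook formulas, function name ours):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, F1 and MCC from the confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp)
    rec = tp / (tp + fn)
    f1 = 2 * prec * rec / (prec + rec)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return acc, prec, rec, f1, mcc
```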
Remarkably, our method achieves 0.9061 accuracy, 0.9044 F1-score, and 0.8128 MCC, higher than all other baselines. This indicates that our model reaches the best performance when precision and recall are considered simultaneously. For precision alone, the AutoEncoder sacrifices recall to reach the highest precision (0.9245), resulting in lower accuracy, F1-score, and MCC; moreover, it outperforms our precision (0.9216) only by a small margin. For recall alone, OCAN and COPOD obtain promising results (0.9472 and 0.9415, respectively), but fail to reach a balance between precision and recall. If we compare OCSVM, OCNN, and our method, which have relatively balanced precision and recall, our method outperforms the other two models in all indices except recall, where OCNN is only slightly better.
We also compare different loss functions to determine the most suitable form for reconstruction. Three candidates are L1, SmoothL1, and L2 loss. For simplicity, we finally choose L2 loss because it results in the best MCC, as shown in Table 2.
\begin{table}[h]
\centering
\caption{MCC of different reconstruction losses}
\begin{tabular}{ | c | c | }
\hline
Loss Function & MCC \\
\hline
L1 & 0.7990 \\
\hline
SmoothL1 & 0.7970 \\
\hline
L2 & \textbf{0.8128} \\
\hline
\end{tabular}
\end{table}
Finally, we plot the receiver operating characteristic (ROC) curve and calculate the area under curve (AUC) in Fig 2. ROC and AUC show the classifier performance at various thresholds and serve as a criterion for model robustness. The AUC value ranges from 0 to 1. Specifically, 0.5 (such as the diagonal in Fig 2) denotes a naive classifier and 1 indicates a perfect classifier. Our method achieves an AUC of 0.9434.
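The AUC can be computed without tracing the full curve, via the Mann-Whitney statistic; a small numpy sketch (equivalent to the usual trapezoidal ROC integration, function name ours):

```python
import numpy as np

def auc_score(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive (fraud) scores higher than a randomly chosen negative."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # count pairs where the positive outranks the negative (ties count 0.5)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```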
\begin{figure}[H]
\centering
\includegraphics[width=0.3\textwidth]{AUROC}
\caption{ROC and AUC of our model}
\end{figure}
\subsubsection{Model Interpretation}
Fig 3, 4, and 5 display interpretations of a fraudulent transaction by the AE, C, and general explainers, respectively. The left sub-figure of each figure (Fig 3(a), 4(a), 5(a)) shows the output value of the explainer. The middle bar chart (Fig 3(b), 4(b), 5(b)) shows the contribution of each feature value to the explainer output. The right table (Fig 3(c), 4(c), 5(c)) lists the value of each feature. For clarity, we only plot the six most important of the 28 features.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\textwidth]{AE_predicted}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{AE_explanation}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.08\textwidth}
\centering
\includegraphics[width=\textwidth]{AE_table}
\caption{}
\end{subfigure}
\caption{Interpretations by the AE explainer}
\label{fig:ae_explainer}
\end{figure}
We first elaborate on Fig 3. Since the input transaction is fraudulent, it is mapped to an unknown distribution rather than being perfectly reconstructed, as discussed in the previous section. This leads to a large reconstruction error of 156.35. Then, by perturbing the fraudulent input, it is found that V1 has the highest influence among all features: if V1 rises from -16.60, as listed in the feature value table on the right, to -2.73, as shown in the middle chart, the predicted value is reduced to 156.35 - 23.92 = 132.43.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\textwidth]{D_predicted}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{D_explanation}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.08\textwidth}
\centering
\includegraphics[width=\textwidth]{D_table}
\caption{}
\end{subfigure}
\caption{Interpretations by the C explainer}
\label{fig:c_explainer}
\end{figure}
Fig 4 shows how each feature of the reconstructed input affects the classifier prediction. If V19 decreases from 1.63 to 0.58, the classifier reduces its confidence that the transaction is fraudulent from 0.59 to 0.53. Comparing Fig 3(c) and 4(c), we find that the feature values of the reconstructed input differ significantly from those of the original input, which matches the high reconstruction error shown in Fig 3.
\begin{figure}[H]
\centering
\begin{subfigure}[b]{0.17\textwidth}
\centering
\includegraphics[width=\textwidth]{AED_predicted}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.2\textwidth}
\centering
\includegraphics[width=\textwidth]{AED_explanation}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.08\textwidth}
\centering
\includegraphics[width=\textwidth]{AED_table}
\caption{}
\end{subfigure}
\caption{Interpretations by the general explainer}
\label{fig:general_explainer}
\end{figure}
Fig 5. is the overall analysis. The explainer is a single transparent model, where inputs are original transactions and outputs are predictions. It approximates the entire fraud detection module and the provided analysis can be interpreted as in Fig 3. and Fig 4.
\section{Conclusion}
This work proposes to leverage an adversarially trained anomaly detection model for the credit card fraud detection problem. Experimental results show that this framework outperforms other iconic and state-of-the-art baseline models. Furthermore, LIME is applied to investigate the input-output relations of the fraud detection model, and analyses of an instance of interest are presented, providing a clear view of how each input feature influences the final prediction. Future work will focus on the interpretability of other unsupervised or semi-supervised methods in this application area.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
\label{sect:Introduction}
Data is considered a key production factor, comparable in importance to labour, capital, and infrastructure. Companies are often in need of data they do not possess, or cannot collect directly. Therefore, general purpose\footnote{See for example DAWEX, Azure Data Catalog, or AWS Data Exchange} and domain specific\footnote{See for example, Openprise, Lotame PDX (marketing), Qlik (business intelligence), or Battlefin (investment information)} data marketplaces (DMs) have appeared with the purpose of building a business of mediation between data selling and data buying \emph{companies}. Leading data management platforms and innovative startups\footnote{See for example Snowflake, Cognite, Carto, and Openprise} are also introducing marketplace functionalities into their products. Finally, personal information management systems (PIMS) have answered the call of recent legislative developments in personal data protection by offering data control, portability, and monetization services for \emph{individuals}.
Designing and building a successful DM calls for solving a plethora of technology, business, and economics challenges in the context of a complex two-sided market (see \cite{Armstrong06, Rochet06, Rysman09} for an exposition). According to our survey of more than 75 real-world data marketplaces, the most common bootstrapping strategy is for a DM to spend effort and money to attract a sufficient set of data sellers, and then try to convince as many buyers as possible to start purchasing these datasets. Therein lie two fundamental problems: (1) the \emph{dataset pricing problem} for data sellers, and (2) the \emph{dataset purchasing problem} for data buyers.
Recent theoretical work on the intersection between computer science and economics has looked into those problems, and has proposed solution concepts and algorithms for them \cite{Agarwal19, Chen19, Chawla19, Koutris15}. For data sellers, selecting efficient prices requires knowing the level of competition with other data sellers, the willingness to pay of buyers, potential customer lock-ins and other information that affects prices in digital and non-digital markets. For buyers, the problem of selecting which datasets to buy (problem (2)), given the prices set by sellers (problem (1)), can be further broken down into 2 interrelated subproblems: (2.a) compute how useful these datasets will be to their AI/ML algorithms, something that can be captured by various accuracy metrics, and (2.b) compute how such accuracy can be converted into monetary gains (via e.g., improved sales, acquisition of new customers, retention of existing ones, etc.). Sub-problem (2.b) is probably the easiest of the two challenges faced by buyers, since most companies are able to gather historical data about the impact of things like recommendation quality on actual sales \cite{Brovman16}. Subproblem (2.a), on the other hand, is inherently more challenging, since buyers need to have access to the data before they can compute their value for their AI/ML task, but such access is only granted \emph{after} a dataset purchase has taken place -- a chicken and egg problem essentially. (2.a) is further exacerbated when the buyer can/has to buy more than one dataset in order to improve the accuracy of its AI/ML algorithm. With $N$ available datasets, a buyer has $O(2^N)$ data purchase options each one with a cost equal to the sum of individual dataset prices and a value defined by the maximum accuracy of AI/ML algorithm operating over the aggregate data.
In theoretical works, the value of any subset of datasets for a data buyer is considered as known \textit{a priori} \cite{Shen16}. In reality, however, things are completely different \cite{Hubert18}. In almost all of the 75 DMs that we have surveyed, data sellers provide only a description of their datasets, a price, and sometimes an outdated sample, and buyers have to make purchase decisions with that information alone. A few of these DMs (e.g., Dawex, Airbloc, Wibson or Databroker) also allow buyers to make offers (bids) for data when sellers do not indicate a fixed price, or when buyers are willing to pay less than the asking price. This case also suffers from the fundamental problem (2.a) of not knowing the value of a dataset before purchasing it.
\vspace{2pt}
\noindent \textbf{Our contribution:} In this paper we show how to solve the \emph{dataset purchasing problem} for data buyers in a way that approximates the efficiency of an optimal full information solution, yet is implementable in practice with real-world DMs. Our main contribution is a family of dataset purchase algorithms that we call "Try Before You Buy" (or TBYB) that allow data buyers to identify the best datasets to buy with only $O(N)$ information about the accuracy of AI/ML algorithms on individual datasets, instead of the $O(2^N)$ information used by an optimal strategy with full information. Effectively, TBYB needs to know only the accuracy of an AI/ML algorithm on \emph{individual} datasets, and with this information it can approximate the optimal \emph{combination} of datasets that maximizes the profit of the buyer, i.e., the difference between the value extracted from the datasets minus the cost of purchasing them. The accuracy of individual datasets can either be precomputed by the DM or the data sellers, and be made available as part of the dataset description (e.g., for some common AI/ML algorithms). Another alternative is for the DM to use recently developed ``sandboxed'' environments that allow data buyers to experiment with versions of the data without being able to copy or extract them (hence the ``Try'' part of the algorithm's name; Otonomo, Advaneo, Caruso or Battlefin are examples of marketplaces that implement such functionality). Overall, with TBYB our objective is to increase the efficiency of buying datasets online from DMs. We believe that this is key for allowing both DMs and the data supply side to grow.
\vspace{2pt}
\noindent \textbf{Our findings:} We compare the performance of TBYB against several heuristics that do not use information about the value of a datasets for the particular AI/ML task at hand, as well as against an optimal solution that uses full information. We start with a synthetic evaluation and then validate our conclusions using real-world spatio-temporal data and a use case in predicting demand for taxi rides in metropolitan areas \cite{Andres20}. Our findings are as follows:
\begin{itemize}
\item TBYB remains close to the optimal for a wide range of parameters, whereas its performance gap against the heuristics increases with the catalog size.
\item TBYB is almost optimal when buying more data yields a progressively diminishing return in value for the buyer (i.e., when the value function of the buyer is concave). With a convex value function, it becomes increasingly difficult for TBYB to match the optimal performance; its performance gap over the heuristics, however, is maintained.
\item When the asking price of datasets does not correlate with their actual value for the buyer, the performance advantage of TBYB over the heuristics is maximal. When the pricing of data follows their value for buyers, the performance of TBYB is still superior, but the gap with the heuristics becomes smaller.
\end{itemize}
Overall, our work demonstrates that near-optimal dataset purchasing is realistic in practice and could be implemented relatively easily by real-world data marketplaces.
\section{Marketplace Model \& Definitions}
\label{sect:Definitions}
Existing DMs typically list the datasets that they make available and provide a description and a price for each one. In our case we will assume that the DM also provides, for each dataset, its \emph{accuracy} over a range of common AI/ML tasks. This list cannot and need not be exhaustive, nor does it need to capture all the specificities of the particular AI/ML algorithm that the buyer intends to use. The intention is merely to provide the buyer with a hint, even an \emph{approximate} one, about the accuracy to expect when buying a particular dataset.
If, on the other hand, a buyer would like to know the \emph{exact} performance of her algorithm on a dataset before buying it, then the following two options exist. The buyer could submit a description of the task for which she needs data, so that the DM returns a list of candidate sellers; alternatively, she could go over the data catalog and select the best candidates manually. In both cases, the DM can provide a sandboxed environment in which the buyer can submit her \emph{exact} algorithm and get an \emph{exact} answer in terms of the achieved accuracy over each candidate dataset, without being able to see or copy the raw data.
To model any of the above cases (see figure \ref{Fig:model}), we will denote by $\mathcal{S}$ the set of suitable sellers for the AI/ML task of a particular buyer. We will denote by $d(s)$ the dataset offered by seller $s \in \mathcal{S}$, by $p(s)$ its \emph{price}, and by $a(d(s))$ the \emph{accuracy} that the buyer's AI/ML task can achieve if trained by $d(s)$. Similarly, for a subset of the sellers $S \subseteq \mathcal{S}$, we will denote by $d(S)$ their aggregated dataset, and by $a(d(S))$ the maximum accuracy that can be achieved using all or a subset of the data in $d(S)$. We will also introduce the \emph{value function} $v(a)$ of the buyer, which indicates the (monetary) value that the buyer can achieve when her AI/ML algorithm reaches an accuracy of $a$. In Sect.~\ref{subsec:TheorSensitivityMUP} we will look at both concave and convex $v(\cdot)$ functions.
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{Model.jpg}
\caption{Reference DM model}
\label{Fig:model}
\end{figure}
When datasets are bought sequentially, we will append a subscript identifying the round to the notation already defined. Hence, $S_n \subset \mathcal{S}$ will refer to the set of eligible datasets in round $n$. We will denote by $P_n$ the set of data already under the buyer's control in round $n$. As a result, she is able to achieve an accuracy equal to $a_n=\max_{S'\in 2^{P_n}} a(d(S'))$ and a value $v_n=v(a_n)$.
Buyers always seek to maximize their profit; hence a greedy buyer will purchase a dataset $d(s)$ if its marginal value exceeds its cost, i.e., if $v(a(d(s|P))) \geq p(s)$. If such a value is unknown, the buyer must estimate it, and assume the risk that the purchase may not fulfil expectations.
\section{Data Purchase Strategies}
\label{sect:Purchase algorithms}
In this section we will present a series of data purchase strategies that cover the spectrum from full information, i.e., knowing the accuracy over any subset of the available data, to having no information about accuracy, as is currently the case in most DMs. In between the two extremes, lies our proposed algorithm called TBYB, that runs only on accuracy information of individual datasets.
\subsection{Optimal purchase under full information}
\label{subsec:fullinformation}
In this case, the buyer knows $a(d(S))$ for any subset $S \in 2^\mathcal{S}$. This allows for an optimal purchase $\mathcal{S}^\star$ that maximizes the profit, i.e., the difference between the value that the buyer extracts from the data and the cost paid to purchase it:
\begin{equation}
\label{eq:optimalfull}
\mathcal{S}^\star = arg\,max_{S\in 2^\mathcal{S}} \left(v(a(d(S)))-\sum_{s \in S} p(s) \right),
\end{equation}
subject to $v(a(d(\mathcal{S^\star}))) \geq \sum_{s\in \mathcal{S}^\star} p(s)$.
Such a full-information scenario is optimal from a buyer's perspective, but neither scalable nor practical: a DM would need to compute the accuracy of each AI/ML algorithm over $2^{|\mathcal{S}|}$ combinations of eligible datasets.
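For small catalogs, the full-information optimum of Eq.~(1) can be found by brute force; a Python sketch (the oracle `acc_of`, which returns the joint accuracy of any seller subset, is exactly the full information that real DMs cannot provide at scale):

```python
from itertools import combinations

def optimal_purchase(sellers, acc_of, value_fn, price):
    """Brute-force Eq. (1): enumerate all 2^|S| subsets and keep the one
    maximizing value minus cost, subject to value covering the cost."""
    best, best_profit = frozenset(), 0.0
    for r in range(1, len(sellers) + 1):
        for subset in combinations(sellers, r):
            cost = sum(price[s] for s in subset)
            value = value_fn(acc_of(frozenset(subset)))
            if value >= cost and value - cost > best_profit:
                best, best_profit = frozenset(subset), value - cost
    return best, best_profit
```

The double loop visits $2^{|\mathcal{S}|}$ subsets, which is precisely why this strategy is impractical beyond small $|\mathcal{S}|$.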
\subsection{Try Before You Buy (TBYB)}
\label{subsec:TBYB}
For our proposal, we assume that the DM provides the buyer with the accuracy of her algorithm on individual datasets, but not on combinations of them. The algorithm is sequential and greedy in nature, and can run for up to $|\mathcal{S}|$ iterations. We will consider two versions.
\subsubsection{Stand-alone version - S-TBYB}
\label{subsec:S-TBYB}
The marketplace provides $a(d(s))$ for all $s\in\mathcal{S}$. The algorithm then buys datasets in descending order of \emph{expected profit} until a stopping condition is reached. For the first dataset the profit is exact rather than expected, so the best dataset is bought provided that $v(a(d(s)))- p(s) \geq -\lambda \cdot v(a^\star)$, where:
\begin{enumerate}
\item $a^\star \leq 1$ is the best accuracy that can be delivered by the marketplace. Either the data marketplace provides data buyers with this information, or the buyer makes her best guess, and
\item the risk parameter $\lambda$ models the maximum relative admissible loss the buyer is willing to assume in each operation. The risk assumed every round is bounded by $\lambda$ times the potential value of the sourcing operation which, in round $n$, is equal to $v(a^\star) - v_n$. For example, $\lambda = 0.1$ means that the buyer will buy a new dataset $s$ if its price is lower than the marginal value she expects to get plus $10\%$ of the maximum value that she could still add by buying new data.
\end{enumerate}
In some sourcing problems, the marginal value of new data increases as more information is bought. In such a setting, buyers may be required to assume some temporary losses when acquiring the first datasets, in the hope that they provide additional accuracy, and become profitable when fused together with other data.
\paragraph{n-th iteration}The buyer will proceed as follows:
\begin{itemize}
\item Identify the best possible dataset $s^\star \in S_n$ such that:
\begin{equation}
s^\star = \arg\max_{s\in S_n} \left( v(a(d(s)))- p(s) \right)
\end{equation}
\item Purchase $s^\star$ if its estimated marginal value exceeds its price by no less than a risk threshold that depends on the remaining value she expects to extract from the operation, i.e., if $v(E\{a(s^\star \cup S_n)\}) - v_n - p(s^\star) \geq -\lambda \cdot (v(a^\star) - v_n)$
\item If the buy condition is met then, $s^\star$ is added to the set of controlled datasets: $P_{n+1} = P_n \cup d(s^\star)$ and the next round starts
\item else if no dataset in $S$ meets this requirement, then the process stops
\end{itemize}
To estimate $E\{a(s^\star \cup S_n)\}$ the buyer could use the following information:
\begin{enumerate}
\item The price and accuracy pairs $\langle p(s), a(d(s))\rangle$ for all individual datasets $s\in \mathcal{S}$
\item The accuracy of every possible combination of already purchased datasets, i.e., $a(d(S'))$ for all $S' \subseteq P_n$
\end{enumerate}
This estimation must be tailored to each specific problem, and it turns out to be non-trivial. We estimate the expected accuracy gain of $s^\star$ as the product of its individual accuracy $a(s^\star)$, the remaining accuracy headroom $(a^\star - a_n)$, and the ratio between the marginal contribution and the individual accuracy of the last purchased dataset:
\begin{equation}
\label{eq:ExpectedAccuracyEstimation}
E\{a(s^\star \cup S_n)\} = \frac{a_n - a_{n-1}}{a(P_n - P_{n-1})} \cdot a(s^\star) \cdot (a^\star - a_n).
\end{equation}
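The S-TBYB loop described above can be sketched as follows. This is an illustrative sketch under simplifying assumptions: \texttt{acc} holds the published individual accuracies $a(d(s))$, \texttt{fused\_acc} stands for the accuracy the buyer measures herself on the data she already owns, and after the first (exact) round the expected gain follows the estimator above.

```python
def s_tbyb(price, acc, fused_acc, value, a_star, lam):
    # Greedy Stand-alone TBYB sketch; all helper names are illustrative.
    remaining = set(acc)
    purchased = []
    a_prev = a_n = 0.0          # accuracy after rounds n-1 and n
    a_last = None               # individual accuracy of the last purchase
    while remaining:
        # candidate with the best published accuracy net of price
        s = max(remaining, key=lambda t: value(acc[t]) - price[t])
        v_n = value(a_n)
        if a_last is None:      # first round: profit is exact, not expected
            gain_v = value(acc[s])
        else:                   # later rounds: expected-accuracy estimator
            gain = (a_n - a_prev) / a_last * acc[s] * (a_star - a_n)
            gain_v = value(min(a_n + gain, a_star)) - v_n
        if gain_v - price[s] < -lam * (value(a_star) - v_n):
            break               # admissible loss exceeded: stop buying
        purchased.append(s)
        remaining.remove(s)
        a_prev, a_last = a_n, acc[s]
        a_n = fused_acc(purchased)   # realized accuracy on owned data
    return purchased, value(a_n) - sum(price[t] for t in purchased)
```

Note that only the buyer's own measurements on already-purchased data enter the loop; the marketplace is never asked for combined accuracies, which is what distinguishes S-TBYB from the assisted version below.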
\subsubsection{Assisted version - A-TBYB}
\label{subsec:A-TBYB}
In this case, we will assume the buyer is allowed to ask the marketplace \textit{every round} for the marginal accuracy of any eligible dataset given the data she already owns.
\paragraph{n-th iteration}The purchase process will be the following:
\begin{enumerate}
\item Ask the marketplace for complementary datasets $S_n \subseteq \mathcal{S}$, and $a(d(s)|P_n),$ $\forall s \in S_n$ given the task $(\mathcal{M}, a)$ and $P_n$
\item If $S_n \neq \emptyset$: \begin{itemize}
\item Identify the best possible dataset $s^\star \in S_n$ such that:
\begin{equation}
s^\star = \arg\max_{s\in S_n} \left( v(a(d(s) \cup P_n))- p(s) \right)
\end{equation}
\item Buy provided $v(a(P_n \cup d(s^\star))) - v_n - p(s^\star) \geq -\lambda \cdot (v(a^\star) - v_n)$
\item If the buy condition is met then, $s^\star$ is added to the set of controlled datasets: $P_{n+1} = P_n \cup d(s^\star)$ and the next round starts
\item Else if the buy condition is not met, then the process stops
\end{itemize}
\end{enumerate}
As a result, if the marketplace is asked to compute the marginal accuracy for every remaining dataset every round, the model will be processed a maximum of $\sum^{r-1}_{i=0}{(|\mathcal{S}|-i)}$ times over $r$ rounds. To prevent abuse by buyers, a marketplace implementing this solution could set a maximum number of trials for a given task. Such a limit may be updated as the buyer purchases data.
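The A-TBYB round can be sketched as follows, with a hypothetical \texttt{marginal\_acc} oracle standing in for the per-round marketplace queries (all names are illustrative):

```python
def a_tbyb(price, marginal_acc, value, a_star, lam):
    # Assisted TBYB sketch: every round the marketplace oracle
    # marginal_acc(owned, s) returns a(d(s)|P_n), the accuracy obtained
    # when dataset s is fused with the data already owned.
    remaining = set(price)
    owned, a_n = [], 0.0
    while remaining:
        # one oracle call per remaining dataset this round
        fused = {s: marginal_acc(owned, s) for s in remaining}
        s = max(remaining, key=lambda t: value(fused[t]) - price[t])
        v_n = value(a_n)
        if value(fused[s]) - v_n - price[s] < -lam * (value(a_star) - v_n):
            break               # buy condition not met: stop
        owned.append(s)
        remaining.remove(s)
        a_n = fused[s]
    return owned, value(a_n) - sum(price[t] for t in owned)
```

The dictionary comprehension makes the query cost visible: $|S_n|$ oracle calls per round, matching the bound discussed above.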
\subsection{Buying without trying}
\label{subsec:BuyingWithoutTrying}
\subsubsection{Volume-based purchasing}
\label{subsec:volumePurchasing}
Most commercial marketplaces provide buyers with a description of datasets, their metadata, source, procedure used to collect them, etc. Oftentimes, the volume of data in a particular dataset (e.g., the number of observations or samples) is used as the deciding figure of merit when choosing among different offers. Let $vol(s)$ denote the volume of dataset $s$, used as the figure of merit by the following volume-based purchasing heuristic:
\paragraph{n-th iteration}We will assume that a greedy buyer selects the dataset $s^\star \in S_n$ with the highest $vol(s) / p(s)$ ratio every round. However, she cannot know the accuracy it will yield for her specific problem. We therefore assume a conservative condition for the algorithm to decide to purchase $s^\star$, namely:
\begin{equation}
p(s^\star) \leq \lambda \cdot (v(a^\star) - v_n),
\end{equation}
which assumes that even in the worst case, where the purchase does not improve $(\mathcal{M},a)$ accuracy at all, the maximum relative admissible loss is not exceeded in the operation.
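A sketch of this volume-based heuristic, under the conservative purchase condition above (helper names, e.g. \texttt{final\_acc} for the accuracy the buyer measures on her owned data, are illustrative):

```python
def volume_purchasing(price, vol, final_acc, value, a_star, lam):
    # Each round buy the dataset with the best vol/price ratio, as long
    # as its price stays within the admissible loss lam * (v(a*) - v_n).
    remaining = set(price)
    owned, a_n = [], 0.0
    while remaining:
        s = max(remaining, key=lambda t: vol[t] / price[t])
        if price[s] > lam * (value(a_star) - value(a_n)):
            break               # even a zero-gain purchase would exceed the risk budget
        owned.append(s)
        remaining.remove(s)
        a_n = final_acc(owned)  # accuracy measured on the owned data
    return owned
```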
\subsubsection{Price-based purchasing}
\label{subsec:PriceBasedPurchasing}
It may happen that the marketplace just publishes the list of suitable datasets $\mathcal{S}$ and their prices. This setting resembles real situations where the available information about the data on offer is insufficient or misleading for the buyer's purposes. We assume such a buyer would select randomly among the datasets whose price is lower than her maximum relative admissible loss.
\paragraph{n-th iteration}The buyer will randomly select one of the datasets $S_n \subseteq \mathcal{S}$ such that, $\forall s \in S_n$, $p(s) \leq \lambda \cdot (v(a^\star) - v_n)$. If $S_n = \emptyset$ then the process stops.
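A sketch of this price-based random heuristic (names illustrative; the random seed is fixed only for reproducibility of the sketch):

```python
import random

def price_purchasing(price, final_acc, value, a_star, lam, seed=0):
    # Each round pick uniformly at random among the datasets whose price
    # is within the admissible loss lam * (v(a*) - v_n).
    rng = random.Random(seed)
    remaining = set(price)
    owned, a_n = [], 0.0
    while True:
        budget = lam * (value(a_star) - value(a_n))
        affordable = sorted(s for s in remaining if price[s] <= budget)
        if not affordable:
            break
        s = rng.choice(affordable)
        owned.append(s)
        remaining.remove(s)
        a_n = final_acc(owned)
    return owned
```

In the experiments below, strategies with randomization like this one are averaged over repeated executions.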
\section{Performance evaluation with synthetic data}
\label{sect:TheoreticalEvaluation}
We will use synthetic data to evaluate the performance of the different purchase strategies of Section \ref{sect:Purchase algorithms} across a wide range of parameters. Our synthetic model is easy to reproduce, captures a wide range of parameters, and allows us to extract useful insights about the relative performance of different data purchase strategies. As we will show later in Sect.~\ref{sect:ValidationWithData}, our conclusions from this section are also validated by results with real data.
\subsection{Synthetic model description}
\label{subsec:TheoreticalModel}
To simplify the evaluation, we will assume that the value for the data buyer is equal to the accuracy $a$. Hence, the maximum value that a buyer can extract from data is equal to 1, which occurs when the accuracy of its AI/ML algorithm trained on the purchased data becomes 1 (i.e., 100\%). We will denote as Total Cost of Data (TCOD) the cost of buying all the available datasets in $\mathcal{S}$, i.e., $TCOD=\sum_{s\in \mathcal{S}} p(s)$. Therefore, when $TCOD<1$, the buyer is guaranteed to make a profit, independently of the data purchase strategy used. In the more interesting case of $TCOD\geq 1$, the buyer needs to select which datasets to buy carefully, to avoid ending up with a loss.
Having equated value with accuracy, we will also need to connect the datasets bought, with the achieved accuracy. For a data buyer that buys datasets $S\subseteq \mathcal{S}$, the value will be given by the following expression:
\begin{equation}
v(a(d(S))) = a(d(S)) = \left( \frac{\sum_{s_i \in S}{DI^{i}}} {\sum_{s_i \in \mathcal{S}}{DI^{i}}} \right) ^{MUP},
\end{equation}
where:
\begin{itemize}
\item \textbf{MUP is the Marginal Utility Profile parameter} that controls the concavity/convexity of $v$ from 0 to 1 as more data is bought. When $MUP<1$, buying additional datasets has a decreasing marginal utility in terms of accuracy, and hence value for the buyer, both of which become concave with respect to the amount of data bought. On the other hand, with $MUP>1$, the marginal contribution of new data sources increases as more datasets are bought, making $v(\cdot)$ and $a(\cdot)$ convex. Finally, $MUP=1$ means that all datasets yield the same accuracy if they are bought first, the same incremental change if they are bought second, and so forth.
\item \textbf{DI is the Data Interchangeability parameter} that controls the relative importance of different datasets in $\mathcal{S}$. Setting DI equal to 1, amounts to making all datasets fully interchangeable. Therefore, in this case, it only matters how many datasets are bought, but not which ones. For $DI>1$ and $MUP=1$, dataset $s_i$ becomes DI times more important than dataset $s_{i-1}$, $1\leq i\leq |\mathcal{S}|$. Effectively, for $DI\neq 1$ what matters is not only how many datasets are bought, but also which ones.
\end{itemize}
The last element of our synthetic model has to do with how we set the prices of individual datasets. We will consider the following pricing schemes:
\begin{itemize}
\item \textbf{All datasets having the same price}, i.e., $p(s)=$TCOD$/|\mathcal{S}|$, $\forall s\in \mathcal{S}$.
\item \textbf{Datasets having random prices} drawn from a uniform distribution in $[0,1]$ and scaled to add up to TCOD.
\item \textbf{Datasets having a price that reflects their importance} captured by their Shapley value within a coalition of $\mathcal{S}$ datasets that achieve a total value equal to $a(d(\mathcal{S}))$ (see works such as \cite{Ghorbani19, Paraschiv19} for a justification and explanation about how to use the Shapley value with aggregate datasets).
\end{itemize}
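For the Shapley-based pricing scheme, an exact computation over dataset permutations can be sketched as follows; this brute-force form has $O(|\mathcal{S}|!)$ cost, so it is only an illustration for small $|\mathcal{S}|$ (the approximation algorithms cited later are needed at scale):

```python
from itertools import permutations

def shapley_prices(sellers, accuracy_of, tcod):
    # Exact Shapley value of each dataset toward a(d(S)), rescaled so
    # that the resulting prices add up to TCOD.
    sellers = list(sellers)
    phi = {s: 0.0 for s in sellers}
    for order in permutations(sellers):
        coalition, prev = set(), accuracy_of(frozenset())
        for s in order:
            coalition.add(s)
            cur = accuracy_of(frozenset(coalition))
            phi[s] += cur - prev    # marginal contribution in this order
            prev = cur
    total = sum(phi.values())       # by efficiency: |S|! * a(d(S))
    return {s: tcod * phi[s] / total for s in sellers}
```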
\subsection{Results for different marginal utility profiles}
\label{subsec:TheorSensitivityMUP}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{MUPSensitivityRandomPrice.jpg}
\caption{Profit (\% of the optimal) vs. TCOD for different MUP and value-unrelated prices}
\label{Fig:MUPSensitivity Random Price}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{SamplePurchasingSequencesTheoretical.jpg}
\caption{Purchase sequences for MUP = 1 and TCOD = 1.5 (left), and 3 (right)}
\label{Fig:PurchasingSequencesTheoretical}
\end{figure}
Next we compare optimal purchasing under complete information with TBYB and the value-agnostic heuristics, across a range of parameter values capturing the total cost of datasets (TCOD), the marginal utility of buying more datasets (MUP), their relative value (DI), and their relative price. Our main evaluation metric will be the profit for a data buyer, i.e., the value extracted from the data minus the cost paid to obtain them. As stated in the introduction, guaranteeing that buyers obtain a healthy profit from data is vital for bootstrapping the nascent data marketplace sector. In the future we will also examine seller-side profits and social welfare. Of course, the latter makes sense only for already bootstrapped markets, and requires modeling complex market dynamics, such as competition and dynamic pricing, that go beyond the scope of the current work. Whenever randomization is used, e.g., in pricing or in some of the value-agnostic heuristics, we report average values over 50 executions.
The first parameters that we examine are TCOD and MUP, assuming some datasets are more important than others ($DI = 2$). Obviously, as data become more expensive (higher TCOD), all the strategies, including the optimal one, yield a smaller profit for buyers. What we are interested in studying, therefore, is the relative performance of the different strategies under different TCODs and MUPs. Figure \ref{Fig:MUPSensitivity Random Price} shows that A-TBYB matches the optimal purchasing for both concave (MUP=0.5, left subplot) and linear (MUP=1, middle subplot) value profiles, across the entire range of TCOD values. For convex value profiles (MUP=3, right subplot), A-TBYB ceases to be optimal, but remains the best performing strategy.
What is even more interesting is the performance of the much simpler to implement S-TBYB, which stays above 90\% of the optimal for concave and linear MUPs, whereas the value-agnostic heuristics drop below 50\% for $TCOD>1$ and even lead to losses (see the MUP=1 results). Under convex value profiles (MUP=3), all strategies yield a lower performance, since reaching a higher accuracy (and therefore buyer value) requires buying more datasets, which, in turn, eats away the profit margins for buyers. Even in these cases, S-TBYB yields a profit and avoids losses.
To explain why TBYB outperforms the value-unaware heuristics, we plot in Fig.~\ref{Fig:PurchasingSequencesTheoretical} a series of ``\textit{purchase sequences}'', demonstrating the evolution of profit with the number of datasets purchased (by different algorithms). As shown in the plot, TBYB algorithms buy both the most valuable datasets (they achieve higher profits from the first round), and the right number of them (they stop buying before profits decrease). On the other hand, the value-unaware heuristics overbuy and randomly select datasets they can afford according to their risk appetite, which generally leads to lower profits, or even losses, especially for risk-prone buyers.
\subsection{The effect of data interchangeability}
\label{subsec:TheorSensitivityDI}
To find out how TBYB is affected by the interchangeability of datasets, we have run a set of simulations for different values of the parameter DI. Figure~\ref{Fig:DISensitivity} shows three different plots of the relative profit of different purchase algorithms for different DI values under MUP=1. The subplot on the left depicts results for perfectly interchangeable datasets (DI = 1), whereas the next two show cases of datasets that are increasingly less interchangeable (DI=2 and DI=3).
\begin{figure}
\centering
\includegraphics[width=\textwidth]{DISensitivity.jpg}
\caption{Profit for purchase algorithms (MUP = 1, linear) with different DI values}
\label{Fig:DISensitivity}
\end{figure}
These plots show that the performance benefits of TBYB over the heuristics increase when different datasets have different value in terms of the accuracy they can achieve, both alone and combined with other datasets. This happens, of course, because the advantage of knowing the value of data before buying diminishes when datasets are almost interchangeable. In reality, as we will show in the next section, real-world datasets are not interchangeable, which means that value-unaware heuristics will not be able to match the performance of TBYB.
\subsection{The effect of data pricing}
\label{subsec:TheorSensitivityPricing}
In this section, we look at the role of dataset pricing on the performance of TBYB. Our main interest is to see what happens when the price of a dataset is proportional to its value for an AI/ML algorithm, and when it is not. The former we create via the Shapley value method discussed in Sect.~\ref{subsec:TheoreticalModel}. The latter we model in two ways: with datasets that have the same price but yield different accuracy, and with datasets that have randomly distributed prices and different accuracy.
Figure \ref{Fig:PricingSensitivity} shows the results of our purchase algorithms for the different pricing models. In every case, A-TBYB matches the optimal. Pricing data based on their real value for AI/ML algorithms narrows the gap between S-TBYB and price-based purchasing, although S-TBYB still outperforms price-based purchasing for the same level of risk. Notice, however, that pricing data in accordance with their actual value for buyers requires knowing the value function of each buyer, something that buyers, of course, have no incentive to disclose to sellers. Even if they did, different buyers may have different value functions, so, in general, it cannot be expected that the price of a dataset will follow its value for different buyers that may be using it with different AI/ML algorithms and different value functions.\footnote{Notice that to simplify our synthetic evaluation we have assumed that buyer value follows the accuracy achieved by each dataset. This, of course, need not apply in the real world, since different buyers may have radically different value functions that translate accuracy into monetary worth.}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{PricingSensivity.jpg}
\caption{Profit for different pricing methodologies (MUP = 1, DI = 2)}
\label{Fig:PricingSensitivity}
\end{figure}
\subsection{Summary}
\label{subsec:TheorConclusions}
Table \ref{tab:ParametersImpact} summarizes the impact of parameters in the performance gap between TBYB and price-based purchasing. In summary, TBYB, even in its simplest, stand-alone version, always outperforms the value unaware heuristics, especially in the most realistic scenarios involving high TCOD, concave value functions, non-interchangeable datasets, and pricing that does not follow value. In a good part of the parameter space TBYB is very close to the performance of optimal purchasing that uses full information.
\begin{table}
\caption{Impact of parameters on the gap between TBYB and price-based purchasing}
\label{tab:ParametersImpact}
\scriptsize
\begin{tabular}{p{1.2cm}p{5cm}p{6cm}}
\hline
Parameter&Impact&Explanation\\
\hline
\texttt{TCOD}&The higher TCOD, the more valuable TBYB&More difficult for other strategies to find the right datasets in terms of price - value to buy \\
\texttt{MUP}&The higher MUP, the more difficult to find the optimal. TBYB loses effectiveness but still outperforms other algorithms&TBYB buys more valuable datasets, minimizes temporary losses and limits risk for buyers, since it allows for a better estimation of expected marginal value of datasets\\
\texttt{DI}&The less interchangeable datasets are, the more advantage of using TBYB&With perfectly interchangeable datasets, TBYB only improves the estimation of marginal utility as information increases \\
\texttt{Pricing}&TBYB gap with price-based purchasing narrows when prices are not related to value&Price-based purchasing works better if value is embedded in price\\
\hline
\end{tabular}
\end{table}
\section{Validation with real data}
\label{sect:ValidationWithData}
The synthetic model of the previous section limits the ways in which two or more datasets may mix and impact the accuracy of an AI/ML algorithm. It allows only for concave/convex mixing with equal (interchangeable, DI=1) or unequal (non-interchangeable, DI>1) contributions to accuracy from the different datasets. In reality, however, different datasets may mix in much more complex ways that cannot be represented by any parameter setting of the above model. For example, a certain dataset $d_i$ can be very useful if combined with another dataset $d_j$, but not so useful if combined with others that individually yield the same accuracy as $d_j$ does. To verify our conclusions from the previous section, we tested the performance of the different data purchase strategies using real spatio-temporal data, in a use case that involves forecasting the demand for taxi rides in a city.
Furthermore, in this section we expand our performance evaluation by introducing a new data pricing scheme, and a new data purchase strategy:
\begin{enumerate}
\item \textbf{Volume-based pricing}. In this case the price of a dataset becomes proportional to its volume. In our use case, volume will correspond to the number of drivers in the company.
\item \textbf{Volume-based purchasing}. We will test the performance of a new heuristic that seeks to purchase the largest possible dataset in terms of volume for a given price.
\end{enumerate}
According to an internal survey covering more than 75 companies in the data economy, pricing and purchasing data by volume is a widespread practice in data trading. Therefore, we compare TBYB against those practices as well.
\subsection{Use case description}
\label{subsec:ValidationUseCaseDescription}
We will assume that a data buyer wants to purchase datasets for training a multiseasonal SARIMA forecasting model, with the purpose of forecasting, at an hourly timescale, the demand for taxi rides in different districts of the city of Chicago for the weeks to come. A number of taxi companies will act as the data sellers, releasing historical data on the trips that they provided during the \textit{observation period} $T_o$. Such datasets are publicly available, thanks to reporting obligations that these companies must fulfil towards the local authorities \cite{TaxiTrips19}. The accuracy of the forecasting algorithm is quantified by how accurately it can predict the real demand observed in a \textit{control period} $T_c$. Our model can accommodate any sequence similarity metric to compare predicted vs. real demand in $T_c$.
\subsection{Dataset description}
\label{subsec:ValidationDatasetDescription}
From the above-mentioned repository \cite{TaxiTrips19}, we obtained 11.1 million rides corresponding to the first 8 months of 2019. These rides are grouped into 15 datasets that correspond to the 15 largest taxi companies in the city (servicing 94\% of the total demand), plus a hypothetical 16th company that aggregates all the rides reported by the remaining smaller companies. These are our 16 data sellers according to our problem formulation.
We computed the exact Shapley value of data from each company to the forecasting accuracy achieved by the multiseasonal SARIMA model in predicting the demand in the second half of April using taxi rides from the previous six weeks for training ($T_o = $ Mar. 4th - Apr. 14th and $T_c = $ Apr. 15th - 28th). We deliberately chose to predict the taxi demand of a medium size district of the city (community area 11, Jefferson Park), where data from several companies is needed in order to achieve a good prediction accuracy. As a result, the Shapley values are very different for each source (standard deviation = 76\% of the average). Moreover, these were found to be weakly correlated with the number of licenses of each company ($R^2 = 0.54397$), because big companies usually concentrate in other areas of the city.
Unlike in the synthetic case, the maximum accuracy the marketplace can deliver using all the information is $a^\star = 0.896294 < 1$. As in the synthetic case, we assume that the economic value of a prediction equals its accuracy.
\subsection{Empirical results}
\label{subsec:ValidationResults}
We have simulated all purchase algorithms for different TCOD, pricing models, and $\lambda$ parameters. Figure~\ref{Fig:ResultsChicagoD11} (a) shows that both A-TBYB and S-TBYB achieve above 90\% of the optimal buyer's profit under value-unrelated dataset pricing. The results are in line with the ones we obtained using synthetic data.
Regarding volume-based purchasing, it outperformed price-based purchasing, but only when TCOD is low ($< 5$). This is because value and volume are not tightly correlated in this case; hence, buying by volume does not necessarily lead to higher accuracy.
Looking at Fig.~\ref{Fig:ResultsChicagoD11} (b) we see the corresponding results under volume-based prices. In this case, profit shrinks faster than under value-unrelated prices as TCOD grows, since valuable datasets are assigned higher prices. Still, TBYB outperforms the buying-without-trying algorithms, since it selects cheaper and more valuable datasets.
\begin{figure}
\centering
\includegraphics[width=\textwidth]{D11Results.jpg}
\caption{Profit for different purchase strategies when prices are (a) unrelated to value, and (b) related to volume}
\label{Fig:ResultsChicagoD11}
\end{figure}
Figure \ref{Fig:PurchasingSequencesD11} shows an average purchase sequence that explains why TBYB works under volume-related prices. TBYB improves upon price-based purchasing both by selecting the best datasets and by stopping the purchase process before profit diminishes. This feature is especially relevant when the data on offer is expensive in comparison to the value it provides (TCOD $\gg 1$). Picking datasets based on volume did not improve on price-based purchasing in this specific example.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{D11Sequence2.jpg}
\caption{Sample purchase sequence for volume-based pricing and TCOD = 3.3}
\label{Fig:PurchasingSequencesD11}
\end{figure}
\section{Related work}
\label{sect:RelatedWork}
The research community has made different efforts to identify the challenges of existing data marketplaces and to define new business models \cite{Fernandez20}. In particular, some ML/AI-oriented data marketplace proposals try to mechanize, within a marketplace, the process that some niche data and AI service providers are already bringing to the market. Most AI/ML-oriented theoretical data marketplace platforms leverage a data valuation framework similar to the one we propose in this paper \cite{Agarwal19,Chen19}. In general, such marketplaces train buyers' models on a neutral platform by feeding them with their data, and charge a price depending on the accuracy (and, thus, the value) they provide. They even suggest that marketplaces should return a trained algorithm instead of bulk data. In any case, the more a buyer pays, the higher the accuracy she gets.
To the best of our knowledge, there is no practical or commercial implementation of those designs yet, although some digital service providers (SigOpt, Comet.ml) provide algorithm optimization services.
Furthermore, some researchers have studied the dynamics of a data marketplace \cite{Moor19}, and proposed mechanisms that prevent sellers and buyers from postponing their arrival to the marketplace or misreporting their costs or values. However, many challenges and research issues remain open around data trading and pricing \cite{Pei20, Yang19, Shen16}.
As the data provided to buyers usually benefits from combining different sources, the problem of how to fairly split the payment for a transaction among all the sources that contributed to the traded data becomes very relevant. Existing marketplaces usually solve this through simple heuristics, such as data volume or the number of sources involved. However, simple heuristics are not necessarily tied to the utility of data, and could consequently be considered unfair by sellers. To address this challenge, researchers resort to well-known concepts from game theory to split the revenue. Most papers propose the Shapley value \cite{Shapley52} for this task \cite{Agarwal19, Ghorbani19, Paraschiv19}, whereas others propose the core \cite{Yan20}. We have used the Shapley value and its approximation algorithms (see \cite{Jia19_2, Castro09, Ghorbani19}) to price datasets according to their value.
\section{Conclusion and future work}
\label{Conclusion}
TBYB was shown to provide near-optimal data buyers' profits under a wide range of parameters and data. Used with off-the-shelf AI/ML algorithms, or with more complex ones in a sandboxed ``try before you buy'' infrastructure, TBYB can become a practical, high-performance alternative to value-unaware purchasing, which is currently the norm in real-world DMs. Helping buyers achieve higher profits would thus help bootstrap and grow the currently nascent data marketplace economy.
We are currently working on a fully functional prototype of TBYB to be used by real users. This will enable us to test arbitrary user-provided algorithms and models. Of course, building a functional DM goes beyond the scope of this paper: it involves additional aspects not covered here, such as dynamic data pricing, protection against arbitrage, and price discrimination, as well as many engineering, scalability, and security challenges that will be the focus of forthcoming work.
\section{Introduction}
M87 is a supergiant elliptical galaxy with an active galactic nucleus (AGN). It is classified as a Fanaroff-Riley type I (FR-I) radio galaxy (RDG) and is located in the Virgo Cluster at a distance of $16.7 \pm 0.2$ Mpc \cite{mei2007}. Its prominent jet was first discovered in 1918 \cite{curtis1918} and has been studied at multiple wavelengths and scales \cite{blandford2019}. The Event Horizon Telescope (EHT) observed the supermassive black hole (SMBH) M87*, which powers the AGN of M87, producing an image of its shadow \cite{ehtI2019}. This result was used to constrain the black hole mass \cite{ehtVI2019}, spin, and, recently, the magnetic field structure near the event horizon \cite{ehtVIII2021}. \\
M87 is a well-established MeV, GeV and TeV gamma-ray source \cite{abdo2009,magic2020}. It was the first RDG detected in the TeV gamma-ray range, and several TeV flares have been observed \cite{aharonian2006,albert2008,abramowski2012}. In its non-flaring state, this source has also shown complicated gamma-ray spectral and flux variability \cite{benkhali2019}. The zone where this emission is produced is not well determined, with the inner jet or core being its most likely origin \cite{abramowski2012,benkhali2019}. Other candidates are the jet feature HST-1 and the SMBH vicinity \cite{abramowski2012,benkhali2019}. The physical mechanism that produces this emission is not known either. It is commonly accepted that a one-zone synchrotron self-Compton (SSC) scenario is not enough to explain this emission, e.g., \cite{fraija2016,georganopoulos2005}. This is supported by the evidence of a spectral turnover at energies of $\sim 10$ GeV, which could be produced by the transition between two different emission processes \cite{benkhali2019}. One of the proposed alternatives to explain this emission is the family of lepto-hadronic scenarios, which model the spectral energy distribution (SED) by combining leptonic models with photo-hadronic interactions \cite{fraija2016,sahu2015}.\\
The High Altitude Water Cherenkov (HAWC) gamma-ray observatory marginally detected this source at $E>0.5$ TeV \cite{albert2021}. This long-term observation of 1523 days provides a good constraint on the average TeV emission of M87. In this work we fit a lepto-hadronic model to an SED that, for the first time, includes the HAWC observations. \\
\section{Data}
An average SED of M87 was constructed using historical archive data \cite{morabito1986,morabito1988,junor1995,lee2008,lonsdale1998,doeleman2012,biretta1991,perlman2001,sparks1996,marshall2002,wong2017,abdo2009}. We also included \textit{Fermi}-LAT observations from the 4FGL catalog \cite{abdollahi2020}. As mentioned above, HAWC data from \cite{albert2021} were used to cover the VHE emission. These observations had already been corrected for extragalactic background light (EBL) absorption using the model of \cite{dominguez2011}; therefore, we do not consider this effect during the fitting process.
\section{Model and Methodology}
The model that we use postulates an electron population contained in a spherical region in the inner jet \cite{finke2008}. The electron spectral distribution (as a function of $\gamma^\prime$, the electron comoving Lorentz factor) is a broken power law, given by Equation \ref{eq:Ne}:
\begin{equation}
N_e(\gamma^\prime)\propto \begin{cases} \gamma^{\prime\, -p_1} & \text{for } \gamma^\prime<\gamma_c^\prime \\
\gamma^{\prime\, -p_2} & \text{for } \gamma^\prime>\gamma_c^\prime
\end{cases},
\label{eq:Ne}
\end{equation}
where $p_1$ and $p_2$ are the power-law indices, and $\gamma^\prime_c$ is the break Lorentz factor, which is one of the fitting parameters of the model. Two other model parameters are $B$, the magnetic field intensity, and $D$, the Doppler factor of the emission zone. The one-zone SSC scenario explains the SED from radio to X-rays as synchrotron emission produced by the electrons moving in the magnetic field. A second energy component, from X-rays to gamma rays, is produced when the electrons Compton-scatter the synchrotron photons \cite{finke2008}. \\
The proton population in the emission zone is assumed to have a single power law spectral distribution (Equation \ref{eq:Np}):
\begin{equation}
\label{eq:Np}
N_p(\gamma_p^\prime) \propto \gamma_p^{\prime-\alpha},
\end{equation}
where $\alpha$ is the proton spectral index, which is a fitting parameter of the model. This scenario postulates that the gamma-ray emission is produced in particle cascades generated by interactions between SSC photons and accelerated protons \cite{sahu2019}. The other fitting parameter of this model is a normalization constant $A_\gamma$ \cite{sahu2019}. \\
The methodology consisted of developing a Python code to fit the emission model to the SED. First, the one-zone SSC model \cite{finke2008} was fitted to the data between radio and MeV-GeV gamma rays. Then, the photo-hadronic \cite{sahu2019} component was added to fit the VHE emission. The best fit values of the model parameters were obtained with chi-square minimization and errors were estimated with Monte Carlo simulations. \\
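A minimal sketch of this two-step procedure on a toy power law follows; all numbers are illustrative and the real fit used the full SSC plus photo-hadronic model rather than this single-parameter example:

```python
import numpy as np

# Chi-square minimization of a toy power-law model, with the parameter
# error estimated by Monte Carlo resampling of the data, mirroring the
# procedure described in the text.
rng = np.random.default_rng(0)

x = np.logspace(0, 2, 20)
true_alpha = 2.0
sigma = 0.05 * x ** (-true_alpha)              # 5% uncertainties (toy)
y = x ** (-true_alpha) + rng.normal(0.0, sigma)

def chi2(alpha):
    return np.sum(((y - x ** (-alpha)) / sigma) ** 2)

# Brute-force grid scan (a real analysis would use an optimizer)
grid = np.linspace(1.5, 2.5, 1001)
best = grid[np.argmin([chi2(a) for a in grid])]

# Monte Carlo error: refit fluctuated realizations of the best-fit model
fits = []
for _ in range(200):
    y_mc = x ** (-best) + rng.normal(0.0, sigma)
    c = [np.sum(((y_mc - x ** (-a)) / sigma) ** 2) for a in grid]
    fits.append(grid[np.argmin(c)])
err = float(np.std(fits))
```

The spread of the refitted values over the Monte Carlo realizations gives the quoted parameter errors.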
\section{Results and Discussion}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{plotSSC_ICRC.png}
\caption{Spectral Energy Distribution (SED) of the RDG M87 with the SSC model fit. Archive data points are shown in black. The violet curve corresponds to the synchrotron component and the blue curve to the inverse Compton one. The red curve corresponds to the total emission. The HAWC 1$\sigma$ error band is shown in light blue.}
\label{fig:SSC}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\textwidth]{plotHAWC_ICRC.png}
\caption{Spectral Energy Distribution (SED) of the RDG M87, including HAWC data, with the lepto-hadronic model fit. Archive data points are shown in black. The violet curve corresponds to the synchrotron component, the blue curve to the inverse Compton component and the green curve to the photo-hadronic one. The red curve corresponds to the total emission. The HAWC 1$\sigma$ error band is shown in light blue.}
\label{fig:ph}
\end{figure}
\begin{table}
\centering
\begin{tabular}{c c c} \hline
Parameter& & Value \\ \hline
Magnetic Field intensity (mG) & $B$&$ 46 \pm 3$ \\
Doppler Factor & $D$ & $ 4.3\pm 0.2$\\ \hline
\textbf{Electron spectral parameters} \\ \hline
Broken PL index & $p_1$ &$1.52 ^{+0.02}_{-0.01} $\\
Broken PL index & $p_2$ & $ 3.53 \pm 0.02 $\\
Break Lorentz factor ($\times 10^3$) & $\gamma_c^\prime$ &$ 3.80 ^{+0.07}_{-0.05} $ \\ \hline
\textbf{Photo-hadronic Component} \\ \hline
Proton spectral index & $\alpha$ & $3.0 \pm 0.2 $ \\
Normalization & $\log(A_\gamma)$ & $-0.5\pm 0.2 $ \\ \hline
$\chi^2_\nu$(d.o.f) & & 25.8 (22) \\ \hline
\end{tabular}
\caption{Best fit values of the fitting parameters}
\label{tab:SSC}
\end{table}
The best-fit model for the SSC scenario is shown in Figure \ref{fig:SSC}. At first glance, the VHE spectral region shows a spectral hardening which cannot be explained by the SSC components. This agrees with the spectral turnover reported by \cite{benkhali2019} and would explain why the VHE flux predicted by the SSC model lies well below the spectrum observed by HAWC.\\
The best-fit model for the full lepto-hadronic scenario is shown in Figure \ref{fig:ph}. As can be seen, the photo-hadronic component is necessary to fit both the GeV and TeV gamma-ray observations. A slight spectral hardening is seen at $E\sim10$ GeV, which agrees with the results obtained in \cite{benkhali2019}. The fit results are shown in Table \ref{tab:SSC}. \\
A similar spectral hardening is also observed in the gamma-ray-emitting RDG Centaurus A \cite{sahakyan2013}. An additional emission component in TeV-emitting RDGs could be expected given their low Doppler factors ($D \lesssim 10$) \cite{rieger2018}, compared to those of blazars, whose gamma-ray emission is enhanced by substantial Doppler boosting. Moreover, the possible detection of neutrino emission from blazars suggests that similar mechanisms may be present in other types of AGNs \cite{aartsen2018}. \\
The goal of this work is to explain the VHE emission of M87; specific features in the other bands need a more detailed analysis \cite{algaba2021}. Multi-zone models are probably needed, as well as some dedicated data sets. In the case of the mm bands, the recent EHT results could be very helpful.\\
HAWC data correspond to the first long-term continuous TeV observation of this source. Previous air-Cherenkov campaigns constrained the average TeV emission very well; however, they could be affected by short-term spectral and flux variations. For example, in the two-year MAGIC campaign reported in \cite{acciari2020}, no evidence of the gamma-ray spectral turnover was found. However, this campaign coincided with a high-activity period reported by \cite{benkhali2019}, during which the spectral turnover seems to have temporarily disappeared. A more detailed analysis would be needed to determine whether these short-term variations can be explained within the photo-hadronic scenario.\\
HAWC is still continuously taking data, which can be used in the near future to improve the results of this analysis, as well as to test other physical scenarios.
\subsection{Summary and conclusions}
M87 is a giant RDG that emits gamma rays up to TeV energies. The physical mechanism that produces the VHE emission has yet to be determined. The lepto-hadronic scenario explains this emission with photo-hadronic interactions, whereas the rest of the broadband SED is attributed to a leptonic mechanism. The HAWC Collaboration recently reported a marginal detection of this source, which constrains its average VHE emission and can therefore be used to test possible physical scenarios. In this work we fit a lepto-hadronic model to an SED which includes the results from HAWC. We obtained best-fit values for the fitting parameters: the mean magnetic field intensity ($B=46\pm3$ mG), the Doppler factor ($D=4.3\pm 0.2$), the electron spectral parameters ($p_1=1.52^{+0.02}_{-0.01}$, $p_2=3.53\pm0.02$, $\gamma_c^\prime=3.80^{+0.07}_{-0.05} \times 10^{3}$) and the photo-hadronic parameters ($\alpha=3.0\pm0.2$, $\log(A_\gamma)=-0.5\pm0.2$). We conclude that this scenario could explain the M87 VHE emission, including spectral features such as a possible turnover at $\sim 10$ GeV.
\acknowledgments
We acknowledge the support from: the US National Science Foundation (NSF); the US Department of Energy Office of High-Energy Physics; the Laboratory Directed Research and Development (LDRD) program of Los Alamos National Laboratory; Consejo Nacional de Ciencia y Tecnolog\'ia (CONACyT), M\'exico, grants 271051, 232656, 260378, 179588, 254964, 258865, 243290, 132197, A1-S-46288, A1-S-22784, c\'atedras 873, 1563, 341, 323, Red HAWC, M\'exico; DGAPA-UNAM grants IG101320, IN111716-3, IN111419, IA102019, IN110621, IN110521; VIEP-BUAP; PIFI 2012, 2013, PROFOCIE 2014, 2015; the University of Wisconsin Alumni Research Foundation; the Institute of Geophysics, Planetary Physics, and Signatures at Los Alamos National Laboratory; Polish Science Centre grant, DEC-2017/27/B/ST9/02272; Coordinaci\'on de la Investigaci\'on Cient\'ifica de la Universidad Michoacana; Royal Society - Newton Advanced Fellowship 180385; Generalitat Valenciana, grant CIDEGENT/2018/034; Chulalongkorn University’s CUniverse (CUAASC) grant; Coordinaci\'on General Acad\'emica e Innovaci\'on (CGAI-UdeG), PRODEP-SEP UDG-CA-499; Institute of Cosmic Ray Research (ICRR), University of Tokyo, H.F. acknowledges support by NASA under award number 80GSFC21M0002. We also acknowledge the significant contributions over many years of Stefan Westerhoff, Gaurang Yodh and Arnulfo Zepeda Dominguez, all deceased members of the HAWC collaboration. Thanks to Scott Delay, Luciano D\'iaz and Eduardo Murrieta for technical support.
\section{Introduction}
NaI(Tl) has been used as a scintillator for particle detection for many years~\cite{hofstadter_detection_1949}, and its wide-spread availability makes it a convenient low-cost choice for many particle-detection applications.
Undoped NaI has also been known to be an effective scintillator at low temperatures~\cite{van_sciver_fundamental_1958}.
The scintillation properties of these crystals have been characterized, though the difference between crystals can be large~\cite{sibczynski_properties_2011,moszynski_study_2003}, highlighting the importance of characterizing individual crystals in use, or at least batches of crystals.
The DAMA/LIBRA collaboration detects an annual modulation signal~\cite{bernabei_final_2013,bernabei_first_2018} consistent with the presence of a dark matter halo, using an array of radiopure NaI(Tl) crystals.
To date, however, no other experiment has detected a similar signal with other target materials~\cite{kahlhoefer_model-independent_2018}.
The SABRE collaboration~\cite{antonello_sabre_2018,bignell_sabre_2020} aim to verify the DAMA/LIBRA annual modulation observation using the same target material in two distinct locations to remove seasonal effects: the Gran Sasso Lab in Italy and the Stawell Underground Physics Lab in Australia.
Though SABRE plans to use room-temperature NaI(Tl) crystals, others intend to use it at lower temperatures (for instance ASTAROTH~\cite{zani_astaroth_2021} at 87~K).
Moreover, advantages can be gained by introducing background discrimination in the experiment.
It has been shown that using an alkali halide scintillator at cryogenic temperatures, measuring both the scintillation light and the phonons created by particle interactions, can lead to the detection of a modulation with relatively small exposures~\cite{nadeau_sensitivity_2015}.
This allows the discrimination of electron recoils, usually from electron and $\gamma$ backgrounds, from the nuclear recoils expected from WIMP interactions, reducing the level of background.
The COSINUS collaboration plans to use undoped NaI at cryogenic temperatures to search for dark matter signals~\cite{angloher_results_2017}.
This paper will present measurements made with SABRE-grown crystals to verify the scintillation properties of these NaI and NaI(Tl) crystals at low temperature.
It is important to ascertain these properties to allow for cryogenic experiments to properly design their light-collection and processing equipment.
It may also be interesting to determine the difference in performance at moderately cold (liquid-nitrogen) temperatures, to allow for better use in other applications.
\section{Experiment}
The experiment was carried out in a compact optical cryostat based on a Gifford-McMahon cryocooler capable of cooling small $5\times5\times2$~mm$^3$ samples of NaI and NaI(Tl) to any temperature between 3.5~K and 300~K~\cite{di_stefano_counting_2013,clark_particle_2018}.
The samples were taken from the tips of crystals grown by RMD Inc. as part of the SABRE collaboration~\cite{antonello_sabre_2018}.
These crystals were of a standard purity with an unknown crystal orientation and polished to transparency.
The NaI(Tl) sample is doped at roughly 1/1000 atoms Tl/NaI.
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{crystal}
\caption{\label{fig:cryst}Photograph of the NaI(Tl) sample mounted inside the cryostat. The crystal is held onto the gold-plated copper finger using silver paint for good thermal contact. A collimated Am-241 source illuminates the large face of the crystal.}
\end{figure}
Each crystal sample was mounted inside the cryostat and excited by $\alpha$ particles and 60~keV $\gamma$ quanta from an internal collimated $^{241}$Am source mounted to the sample holder (Fig.~\ref{fig:cryst}).
Additionally, an external $^{57}$Co source could be used to test $\gamma$ energies of 122~keV.
The $^{241}$Am $\alpha$ source used in this work was adapted from a common smoke detector where a protective film covers the radioactive material, reducing the energy of the $\alpha$-particles to 4.7~MeV.
The cryostat is housed in an acrylic glovebox to allow for careful control of the humidity.
By flowing pure nitrogen through the glovebox during installation of the crystal and throughout the data-taking, relative humidity within the glovebox was kept below 0.1\%, the sensitivity of our General Tools SAM990DW Psychrometer.
The samples were adhered to a custom-made holder using silver paint, instead of being mechanically held in place, to minimize mechanical stress as the sample cools.
The crystal sample was at an angle of {30\textdegree} with respect to the $^{241}$Am source so that the emitted $\alpha$ particles were incident on one of the smooth, large faces of the sample, while at the same time this face was well exposed to a 28~mm diameter PMT (Hamamatsu R7056). A second PMT was placed opposite the first and was exposed to the rear face of the crystal, partially obstructed by the sample holder.
These PMTs have sensitivity to wavelengths down to 185~nm and a maximum quantum efficiency of 25\% at 420~nm.
The PMT that was in view of the unobstructed face of the sample was used as the primary PMT for data acquisition, while the other was used to assist triggering.
Based on the solid angle of the crystal exposed to the primary PMT and transmittance of the windows, we expect a light collection efficiency of roughly $10\%$ for the main PMT~\cite{nadeau_cryogenic_2015}.
Note that no optical filters were used, so all light within the sensitivity of the PMT contributed to the light yield, as is standard in particle detection.
The temperature of the crystal was controlled by a PID such that it was steady to within 0.1~K of the goal temperature during data taking.
Data were taken starting from room temperature, cooling down to the lowest temperature of 3.5~K in a single run to remove the possibility of releasing trapped energy through warming, known as thermoluminescence, which could occur when heating a crystal to a temperature that corresponds to a gap in its energy levels~\cite{sabharwal_thermoluminescence_1985}.
Scintillation pulse shapes were acquired using a modified version of the multiple photon counting coincidence technique~\cite{clark_particle_2018}.
When a coincidence within 30~ns was detected between the two PMTs observing the sample, a 600~$\mu$s acquisition window sampled at $1.25 \times 10^9$~samples/second was opened in which the full digitizer trace was acquired. The window included a 30~$\mu$s pre-trigger.
This trace was later zero-suppressed to reduce the data volume.
In previous measurements~\cite{clark_particle_2018}, we observed that simply recording information above a threshold caused a bias by not recording some information.
To combat this, we additionally record the 5 samples immediately preceding and following a sample above threshold.
This number was used since a typical single photon pulse was found to be around 10 samples wide.
The threshold for a given pulse was set to 5 standard deviations of the fluctuations of the baseline of that pulse.
An example of the effect of this zero-suppression is shown in Figure~\ref{fig:pulse}.
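The suppression scheme described above can be sketched as follows; the baseline window length is an assumption of this sketch, while the $5\sigma$ threshold and the 5-sample padding follow the text:

```python
import numpy as np

def zero_suppress(trace, n_sigma=5.0, pad=5, baseline_samples=1000):
    """Return the indices of samples kept by the zero-suppression:
    samples above threshold plus `pad` samples on each side. The
    threshold is n_sigma standard deviations of the baseline."""
    baseline = trace[:baseline_samples]
    threshold = baseline.mean() + n_sigma * baseline.std()
    above = trace > threshold
    # Dilate the above-threshold mask by `pad` samples on each side
    kernel = np.ones(2 * pad + 1)
    keep = np.convolve(above.astype(float), kernel, mode="same") > 0
    return np.flatnonzero(keep)

# Toy trace: Gaussian baseline noise plus one photon-like spike
rng = np.random.default_rng(2)
trace = rng.normal(0.0, 1.0, 5000)
trace[2000] += 100.0
kept = zero_suppress(trace)
```

On this toy trace, only the spike and its ten neighbouring samples survive, greatly reducing the data volume.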
\begin{figure}
\centering
\includegraphics[width=\textwidth]{rawpulse}
\caption{\label{fig:pulse}An example of a single event in the raw digitizer trace (red line), recording only samples above the threshold (blue circles), and recording those plus the surrounding points (black dots).}
\end{figure}
The signal from the front PMT was passed to two digitizer channels with different ranges to allow for the measurement of both high voltage signals and small single-photon signals, as described previously~\cite{clark_particle_2018}.
This dynamic range was important due to the large energy range of interest between 60~keV gammas and nearly 5~MeV alpha-particles.
The combination of the two channels resulted in a single measurement containing unsaturated, reconstructed pulses with higher detail at low voltages than would be possible with a single channel.
At low temperatures where the light yield is higher (below 30~K for NaI(Tl) and below 150~K for NaI), a 10~dB signal attenuator was used to prevent the data acquisition from saturating.
The typical integral of a single photon was measured in both attenuated and non-attenuated configurations, such that the light yields can be directly compared in terms of number of photons.
In addition, full datasets were taken at reference temperatures (50~K for NaI(Tl); 200~K and 150~K for NaI) in both configurations to ensure that the results were consistent.
\section{Data Analysis}
To determine the light yield, the integral of the events was binned and a Gaussian fit was performed to find the peak corresponding to each particle excitation.
The mean of each peak was taken to be the average light yield of that excitation.
An example of an alpha excitation peak is shown in Figure~\ref{fig:alpha} and a gamma excitation peak in Figure~\ref{fig:gamma}.
The integral was then converted to a corresponding number of photo-electrons (PE) using the typical single photon integral in those digitizer conditions.
The conversion to number of PE allows us to compare our light yields with and without the signal attenuator directly, using different conversion factors for the two acquisition configurations.
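As a sketch of this peak-extraction step on toy data (the peak position and width below are placeholders): here the Gaussian is located by fitting a parabola to the log of the counts around the maximum, which is exact for a Gaussian peak, whereas the actual analysis used a direct Gaussian fit.

```python
import numpy as np

# Histogram toy event integrals and locate the Gaussian peak via a
# parabola fit in log-counts (vertex of the parabola = Gaussian mean).
rng = np.random.default_rng(1)
integrals = rng.normal(280.0, 25.0, size=20000)   # toy alpha peak, in PE

counts, edges = np.histogram(integrals, bins=100)
centers = 0.5 * (edges[:-1] + edges[1:])

peak = int(np.argmax(counts))
window = slice(max(peak - 10, 0), peak + 11)
a, b, _ = np.polyfit(centers[window], np.log(counts[window]), 2)
mean_pe = -b / (2.0 * a)   # recovered mean light yield, in PE
```

The recovered mean is then divided by the typical single-photon integral to express the light yield in photoelectrons.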
It was noticed during the measurement of NaI that the alpha peak split into two distinct peaks at temperatures below 60~K.
Inspection after the experiment was completed revealed that the crystal had cracked, likely due to differences in thermal contraction between the copper sample holder and the NaI crystal.
We believe that the crack changed the optical properties of the crystal, altering the efficiency of detecting photons from events on either side of the crack.
Due to the limited availability of crystal samples, these data are still included in the analysis despite the change in crystal quality, with some added uncertainty after the cracking event.
To compare with the rest of our data, the weighted average of the two peaks was used to determine the light yield at temperatures where a split alpha peak is observed.
For each run, we perform a cut on the first photon time such that the first photon does not arrive before the expected coincidence window of 30~ns.
This removes events that have photons in the pre-trigger, which could indicate that the tail of a previous event is present in our acquisition (pile-up event), which would give an inaccurate measurement of both the light yield and time behaviour.
We then perform a cut on the light yield (integral of the event) to select for a particular particle interaction (4.7~MeV $\alpha$-particle, 60~keV $\gamma$).
This is done using the Gaussian fit as shown in Figure~\ref{fig:peaks}, where all events within $2\sigma$ of the mean of the Gaussian distribution are chosen to represent each population of events.
If the light yield is low, a minimum integral of two PE is set as the lower bound.
Finally, we perform a cut on the mean arrival time of photons after the first photon.
A large mean arrival time compared to the typical value of the population could indicate another particle interaction occurred during the tail of the event which could interfere with our average pulses.
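The three cuts above can be sketched as follows on toy per-event quantities; the mean-arrival-time bound and all numerical values here are placeholders, not the actual analysis values:

```python
import numpy as np

def select_events(first_photon_t, integrals, mean_arrival,
                  mu, sigma, t_min=30e-9, min_pe=2.0, mat_max=50e-6):
    """Vectorized event selection mirroring the cuts in the text."""
    keep = first_photon_t >= t_min               # no light in the pre-trigger
    keep &= np.abs(integrals - mu) < 2 * sigma   # 2-sigma peak selection
    keep &= integrals >= min_pe                  # at least two photoelectrons
    keep &= mean_arrival <= mat_max              # reject pile-up in the tail
    return keep

# Four toy events: only the first passes all cuts
first_t = np.array([40e-9, 10e-9, 35e-9, 45e-9])   # seconds
ints = np.array([300.0, 300.0, 100.0, 295.0])      # PE
mat = np.array([20e-6, 20e-6, 20e-6, 80e-6])       # seconds
mask = select_events(first_t, ints, mat, mu=300.0, sigma=25.0)
```

Each cut is a boolean mask, so the selection composes naturally over large event samples.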
\begin{figure}
\centering
\begin{subfigure}{0.7\textwidth}
\includegraphics[width=\textwidth]{SABRENaIfit_77att_1gaus.png}
\caption{\label{fig:alpha}4.7~MeV alpha events from Am-241}
\end{subfigure}
\begin{subfigure}{0.7\textwidth}
\includegraphics[width=\textwidth]{SABRENaIfit_60keV_77att.png}
\caption{\label{fig:gamma}60~keV gamma events from Am-241}
\end{subfigure}
\caption{\label{fig:peaks}Histograms of the measured light from events of both excitations from the Am-241 source for NaI at 77~K. In red is the fit to a gaussian curve, to determine the average light yield. In the case of the gamma spectrum, the lower energy shoulder is attributed to Np X-rays from the source.}
\end{figure}
\section{Results and Discussion}
\subsection{Light yield}
The results of our light yield measurements as a function of temperature are shown in Figure~\ref{fig:LYT}, in addition to results from previous measurements~\cite{sailer_low_2012,nadeau_cryogenic_2015} that have been scaled to be equivalent with these results at room temperature for each type of excitation.
For NaI(Tl) (Fig~\ref{fig:NaITlLY}), the light yield changes by as much as 30\% as the temperature decreases for both $\alpha$ and $\gamma$ excitations.
At room temperature, we observe 1.4~PE/keV (photoelectrons per keV) for the 60~keV $\gamma$-excitation, which, when accounting for the solid angle of our photomultiplier ($\approx$ 11\%) and the quantum efficiency (24\% at 400~nm), gives a rough estimate of $50 \pm 10$~photons/keV for the absolute light yield.
This is marginally higher than the frequently-quoted value of 38~photons/keV~\cite{holl_measurement_1988}.
At our lowest temperature of 3.5~K, we detect 1.1~PE/keV for $\gamma$ excitation, corresponding to a decrease in light yield of 20\%. This low-temperature value of $40 \pm 8$~photons/keV is slightly above the 29~photons/keV of CaWO$_4$~\cite{mikhailik_performance_2010}.
The light yield of 4.7~MeV $\alpha$ excitations remains relatively stable at 0.65~PE/keV throughout the temperature range.
We observe a drop in light production at 150~K, which coincides with a thermoluminescence peak that has been observed previously for thallium-doped NaI~\cite{sabharwal_thermoluminescence_1985}.
This indicates there could be a shallow trap at the corresponding energy of $kT \approx 10$~meV.
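The room-temperature absolute light-yield estimate quoted above follows from simple arithmetic, dividing the measured PE/keV by the light-collection fraction and the quantum efficiency:

```python
# Back-of-the-envelope check of the absolute light yield of NaI(Tl).
pe_per_kev = 1.4         # measured PE/keV for 60 keV gammas at room temperature
light_collection = 0.11  # solid-angle fraction seen by the main PMT
quantum_eff = 0.24       # PMT quantum efficiency near 400 nm
photons_per_kev = pe_per_kev / (light_collection * quantum_eff)
# photons_per_kev is about 53, consistent with the quoted 50 +/- 10
```

The roughly 20\% uncertainty quoted in the text reflects the approximate nature of the efficiency factors.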
\begin{figure}
\centering
\begin{subfigure}{0.7\textwidth}
\includegraphics[width=\textwidth]{R83LY_all}
\caption{\label{fig:NaITlLY}NaI(Tl)}
\end{subfigure}
\begin{subfigure}{0.7\textwidth}
\includegraphics[width=\textwidth]{R84LY_phperE}
\caption{\label{fig:NaILY}Undoped NaI}
\end{subfigure}
\caption{\label{fig:LYT}The light yield as a function of temperature for both crystals, along with comparisons to previous measurements~\cite{sailer_low_2012,nadeau_cryogenic_2015}, scaled to our measured value at room temperature. Data with signal attenuation are shown in empty markers, and without attenuation in full markers. At temperatures where both attenuated and non-attenuated data was taken, results are consistent. Data from Coron (respectively Sailer) are normalized to our room temperature gamma (resp. alpha) value. Data from Coron 2013 are not time-resolved. Errorbars on each point are within the individual markers.}
\end{figure}
In Figure~\ref{fig:NaITlLY}, we have compared the results reported in other studies~\cite{nadeau_cryogenic_2015,sailer_low_2012,coron_study_2013} to our own.
Since these results were reported as relative shifts in light yield, we have scaled each study to match our own measurement at room temperature for the specific particle interaction that was measured. We see features at approximately the same temperatures as the other studies, though not of the same magnitude.
It should also be noted that Coron et al.~\cite{coron_study_2013} were not measuring time-resolved interactions but a constant illumination with X-rays, in addition to reporting in the warming phase over most of the temperature range, which could add additional complications from thermoluminescence.
In contrast, undoped NaI (Fig~\ref{fig:NaILY}) shows a large increase from 0.04~PE/keV at room temperature for $\gamma$ excitation, reaching a maximum of 1.15~PE/keV at 70~K, an increase of roughly a factor of 30.
The light yield then decreases, but remains high at 0.85~PE/keV until the lowest temperature, roughly $30 \pm 6$ photons/keV in absolute light yield.
This is a much higher relative increase than has been seen previously, with previous measurements increasing by at most a factor of 10~\cite{nadeau_cryogenic_2015}.
It should be noted that the decrease in observed light yield coincides with the temperature at which the crystal is suspected to have cracked.
This may indicate that thermal stresses within the crystal, which were released when the crystal cracked, had an effect on the light yield. Another factor may be changes in optical efficiency.
\subsection{Quenching factor}
From the previous measurements, we have also determined the $\alpha$/$\gamma$ ratio of both NaI and NaI(Tl) as a function of temperature.
The results are shown in Figure~\ref{fig:ag}.
\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{both_ag_np}
\caption{\label{fig:ag}Measured $\alpha$/$\gamma$ ratio for both NaI and NaI(Tl) as a function of temperature. Errorbars on each point are within the individual markers.}
\end{figure}
Since these results compare $\alpha$ and $\gamma$ excitations of different energies, we have employed a theory of inorganic scintillator non-proportionality to correct for this, transforming the 60~keV $\gamma$ light yield to that of a 4.7~MeV $\gamma$, as has been described in previous publications~\cite{clark_particle_2018}.
NaI(Tl) shows a slight increase with decreasing temperature, with a value of 0.7 at 3.5~K.
A peak value of 0.7 is also seen at the same location in the dip in light yield, corresponding again with the likely location of a shallow trap at the corresponding energy~\cite{sabharwal_thermoluminescence_1985}.
Undoped NaI also shows an evolution over temperature, but in this case drops from 0.7 at room temperature to 0.35 at 160~K, returning to 0.7 at 40~K once again.
This result differs from our previous measurements~\cite{nadeau_cryogenic_2015}, in which the relative humidity was only kept below 20\% during sample handling, as opposed to the more stringent upper limit of 0.1\% in the present work. This indicates that light yield of surface events, caused by alphas, depends on the state of the sample surface. An alternative explanation is that the discrepancy is caused by differences in trace contamination between the different lots of samples.
\subsection{Time Structure}
We have also investigated the pulse shapes induced by alpha and gamma interactions at different temperatures (Fig.~\ref{fig:AllPulses_cum}). The time constants tend to increase as the temperature decreases, as observed previously in various materials (eg~\cite{verdier2011scintillation}) including NaI~\cite{birks_theory_1964}.
A more precise determination of the time constants is complicated by the presence of afterpulsing in the PMTs, which has been independently confirmed by exposing the PMTs to brief light pulses from an LED (Fig.~\ref{fig:aftpcomp}). At the standard operating voltage of the PMTs, a narrow main structure was observed $\sim 300$~ns after the excitation, as was a broad secondary feature $\sim 1.5$~$\mu$s after the excitation. These correspond with the features in the scintillation pulse, as can be seen in Fig.~\ref{fig:aftpcomp}.
The integral of the afterpulses represents 3\% of the signal from the LED, and thus has a small effect on the light yield measurement.
Attempts to deconvolve the afterpulsing from the overall pulse were hindered by the fact that the afterpulsing appears to have increased in the 2 years between the main experiment and the tests.
\begin{figure}
\centering
\includegraphics[width=1\textwidth]{NaITlGamma_cumulative} \\
\includegraphics[width=1\textwidth]{NaITlAlpha_cumulative} \\
\includegraphics[width=1\textwidth]{NaIGamma_cumulative} \\
\includegraphics[width=1\textwidth]{NaIAlpha_cumulative}
\caption{\label{fig:AllPulses_cum}The average pulses from NaI(Tl) (top two rows) and NaI (bottom two rows) (Run 83) under $\gamma$ excitation (odd rows) and $\alpha$ excitation (even rows), at 3 different temperatures. Pretrigger is 30~$\mu$s long. Trend is for pulses to become longer as temperature decreases. }
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{AftPulseCompare.png}
\caption{\label{fig:aftpcomp}The LED measurement of afterpulsing overlaid with data from $\alpha$ excitation in NaI at 60~K. The largest afterpulse corresponds with the pulse shape at low time, and a later bump may correspond with the later bump in the pulse shape. Pulses have been shifted to correspond to a pretrigger of 30~$\mu$s.}
\end{figure}
To present a quantitative measure of the changing time structure as a function of temperature, Figure~\ref{fig:allMAT} shows the mean arrival time of photons after the first detected photon, for $\alpha$ and $\gamma$ events for each crystal.
For each event of a given type of particle (alpha or gamma), we calculate the mean arrival time of photons after the first one; the standard deviation of these values for a given population are used as error bars.
Unusually, the slowing trend in the data as the temperature decreases is not monotonic. All the data seem to show particularly long pulses at $\sim 40$~K and $\sim 175$~K.
We considered if the attenuator, used at low temperatures, could have played a role in this, since it might have made it harder to detect straggling single photons that appear at the low temperature. Checks were therefore carried out with and without attenuator at 50~K for NaI(Tl) and at 150~K and 200~K for NaI; results were found to be consistent, making the attenuator an unlikely explanation.
We also note that the attenuator would have had little effect on the light yield, because for both samples at all temperatures, the bulk of the photons are not resolved individually.
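The quantity plotted in Figure~\ref{fig:allMAT} is, as we read the definition above, the mean delay of all detected photons relative to the first photon in the event; a minimal sketch (photon times illustrative):

```python
import numpy as np

def mean_arrival_time(photon_times):
    """Mean arrival time of photons after the first detected photon,
    for a single event (times in arbitrary units)."""
    t = np.sort(np.asarray(photon_times, dtype=float))
    return float(np.mean(t[1:] - t[0]))

example = mean_arrival_time([0.0, 1.0, 3.0, 8.0])  # -> 4.0
```

The standard deviation of this quantity over all events of a given population provides the error bars in the figure.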
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{MATplot_NaITl.png}
\includegraphics[width=0.45\textwidth]{MATplot_NaI.png}
\caption{\label{fig:allMAT}The evolution of mean photon arrival time as a function of temperature for the NaI(Tl) sample (left) and NaI sample (right). Filled (respectively empty) markers are data taken without (with) attenuator. For NaI(Tl), cross check at 50~K showed consistent results with and without attenuator. For NaI, cross checks, at 150~K and 200~K, were also consistent. }
\end{figure}
\section{Conclusion}
In this work, we have shown that the NaI and NaI(Tl) crystals grown by the SABRE collaboration perform very well in terms of scintillation yield at low temperatures.
NaI(Tl) maintains its high light yield at all measured temperatures, while undoped NaI sees a large increase in light yield as temperature decreases, up to a factor of 30 times higher than at room temperature.
The evolution of the light yield of NaI(Tl) as a function of temperature is quite irregular above $\sim 50$~K, which is consistent with previous experiments.
For gamma excitations at 4~K, the light yield of NaI(Tl) is slightly greater than that of CaWO$_4$, while that of NaI is comparable.
Because of our instrumentation and relatively high light collection, we are able to measure both $\alpha$ and $\gamma$ excitations under the same conditions.
We have reported here the $\alpha$/$\gamma$ ratio as a function of temperature, showing that it changes over the temperature range, with a maximum value of 0.7.
The response to alphas may very well depend on the state of the surface of the crystal, which itself depends on how much humidity the samples have been exposed to.
Therefore, care was taken to handle these samples under dry nitrogen to maintain a good surface quality.
In addition, all measurements of a given sample were made during a single cooling cycle to avoid contributions from thermoluminescence.
Another factor that may be relevant is the presence of the impurities in a given sample.
These various factors could explain the differences between this result and previous measurements in light yield and $\alpha$/$\gamma$ ratio.
Time-resolved measurements of the scintillation light show the presence of light out to 600~$\mu$s after the initial excitation. At cryogenic temperatures, undoped NaI shows an increased proportion of light at long times, whereas NaI(Tl) shows a larger increase at short times, even if its absolute light yield at long times increases. The mean arrival times of photons from both $\alpha$ and $\gamma$ excitations do not increase monotonically, showing a complicated evolution as the temperature decreases.
Significant efforts were made to shelter the samples from humidity and to handle them delicately to reduce thermal stress during cooling. The sensitivity of NaI, both pure and doped, to humidity and the mechanical fragility of the crystals underscore the challenges of using these materials in cryogenic applications, though the potential for background discrimination may make it worth the effort.
\acknowledgments
Funding in Canada has been provided by NSERC through SAPIN grants, by CFI-LOF and ORF-SIF.
Queen's summer student Miaofen Zheng contributed to the data taking. Queen's Visiting Research Student Ilian Moundib contributed to the afterpulsing analysis.
F.~Froborg has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk{\l}odowska-Curie grant agreement No 703650. F.~Froborg's participation in the experiment was made possible by a grant from the Queen's Principal's Development Fund.
We thank Radiation Monitoring Devices (RMD) and the SABRE Collaboration for providing the test samples.
\bibliographystyle{JHEP}
\section{Introduction}
\label{sec:Intro}
Muons observed in underground laboratories are produced mainly in
decays of pions and kaons generated by the interaction of primary
cosmic rays with the upper atmosphere. Since muons lose energy
crossing the rock overburden, only high energy muons can be detected,
with an energy threshold depending on depth,
usually expressed in metre water equivalent (m.w.e.).
It is known that the flux of atmospheric muons detected deep underground shows
seasonal time variations correlated with the temperature of the stratosphere,
where the primary cosmic rays interact~\cite{bib:season}.
This effect has been reported by other experiments at the underground Gran Sasso Laboratory (LNGS)
(MACRO~\cite{bib:macro1, bib:macro1bis, bib:macro2},
LVD~\cite{bib:lvd}, Borexino~\cite{bib:borex, bib:borex2} and GERDA~\cite{bib:gerda})
and elsewhere (AMANDA~\cite{bib:amanda}, IceCube~\cite{bib:icecube}, MINOS~\cite{bib:minosf, bib:minosn},
Double Chooz~\cite{bib:db} and Daya Bay~\cite{bib:dayabay}).
An increase in the temperature of the stratosphere causes a decrease of the air density, thus reducing the chance for $\pi$ and $K$ mesons to interact and resulting in a larger fraction decaying into muons.
The atmospheric muon rate therefore changes during the year, increasing in summer and decreasing in winter.
The variation can be modeled as a sinusoidal function, which is only a first
order approximation since the average temperature is not precisely constant
over the years and short term effects occur, leading to local maxima and
minima, like the ``sudden stratospheric warming'' events~\cite{bib:ssw}.
The OPERA experiment~\cite{bib:detector} discovered
$\nu_\mu$ into $\nu_\tau$ oscillations in appearance mode
using the CERN Neutrino to Gran Sasso (CNGS) beam~\cite{bib:prl2015, bib:prl2018}.
The experiment was located in Hall C of the LNGS laboratory, at 3800 m.w.e. depth.
To identify $\nu_\tau$ Charged Current interactions, the Emulsion Cloud Chamber
technique was used, with 1 mm lead sheets alternated with
nuclear emulsion
films, for a total target mass of about 1.2 kt.
Lead and emulsions were organised into units called ``bricks''.
The detector was divided into two identical Super-Modules (SM), each made of a target section, composed of 31 brick walls interleaved with target tracker layers to locate bricks with neutrino interactions, and of a muon spectrometer to optimise the muon identification probability and to measure momentum and charge.
In each SM, the Target Tracker (TT)~\cite{bib:TTref}
consisted of 31 pairs of orthogonal planes made of
2.6~cm wide
scintillator strips,
read-out by means of
WLS fibers.
The fibers from the 6.7~m long strips were collected into four
groups of 64 and coupled to 64-channel Hamamatsu H7546
photomultipliers.
The muon spectrometers were made of an iron-core dipole magnet with drift tubes used as precision trackers and 22 layers of resistive plate chambers (RPC)~\cite{bib:RPC} inside the magnetised iron.
The 2D read-out was performed by
means of 2.6~cm pitch and 8~m long vertical strips, which measured the
coordinate in the bending plane, and 3.5~cm pitch and 8.7~m long
horizontal strips, measuring the orthogonal coordinate.
The analysis presented here is based on TT and RPC data recorded
during about five years from January 2008 to March 2013.
In the TT,
the trigger condition required either hits in the horizontal and
vertical views of at least two
planes or
at least 10
hits in a single plane
with a signal greater than
30 photo-electrons for 2008 and 2009 runs;
from 2010 up to 2013
the latter requirements were lowered to
4 hits and 10
photo-electrons,
respectively.
Data from the RPCs of each spectrometer were acquired in the presence of at least 3 planes fired in a time window of 200~ns.
Events were recorded in the presence of at least 5 TT and/or RPC hits in each view, horizontal and vertical, within a time window of 500~ns.
The TT systems were operative most of the time in the considered years, while the RPCs had a lower run-time, being operative only during CNGS runs and switched off during the CNGS winter shutdowns.
More details about the electronic
detectors used for the cosmic ray muon analysis can be found in
\cite{bib:detector}.
\section{Cosmic ray muon flux measurement and its modulation}
\label{sec:Cray}
Cosmic ray induced events in the OPERA detector were selected, through
their absolute time,
outside of the CNGS spill window.
Once the event was tagged as ``off-beam'' it was classified as cosmic and
processed in a dedicated way.
The
standard reconstruction package ``OpRec''~\cite{oprec}
was complemented with a set of algorithms developed for the different
cosmic and beam event topologies.
The reconstruction was effective at identifying single and multiple muon tracks
(muon bundles).
For this analysis, a total of about 4 million single muon events have been
selected requiring a single track reconstructed in both views.
Different atmospheric muon rates have been measured in periods with and without RPC acquisition.
The scale factor between the two rates has been evaluated directly from data over the full data taking period,
extracting the two constant terms
$I^0_{TT+RPC} = (3359 \pm 5)~\mu$/day and $I^0_{TT-only} = (1960 \pm 5)~\mu$/day from a maximum likelihood fit on the two data sets.
In Fig.~\ref{fig:murate} (top panel) the flux of atmospheric single muons measured
from 1 January 2008 (day 1 in the plot) to
March 2013 is shown.
After data quality cuts,
our data set is composed of 1274 live days, out of which 919 days with TT+RPC.
The longest downtime period corresponds to the first 5 months of 2009, when the acquisition was stopped due first to a DAQ upgrade and then to the earthquake in L'Aquila.
Other shorter downtime periods are present in winter due to maintenance operations.
Data with TT-only acquisition have been rescaled to the TT+RPC average rate.
The flux has been fitted to
\begin{equation}
I_{\mu}(t) = I^0_\mu + I^1_\mu \cos \frac{2 \pi} {T} (t-\phi)
\label{murate}
\end{equation}
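As an illustration, a fit of this form can be sketched numerically. The daily rates below are synthetic, generated with the fitted values quoted in the text ($I^0_\mu = 3359~\mu$/day, 1.55\% amplitude, $T = 365$~days, $\phi = 186$~days); they are not OPERA data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic daily single-muon rates with Poisson counting fluctuations,
# built from the fitted values quoted in the text (illustrative only).
rng = np.random.default_rng(0)
t = np.arange(1, 1275)                     # live days (illustrative)
true = 3359 * (1 + 0.0155 * np.cos(2 * np.pi * (t - 186) / 365))
rate = rng.poisson(true).astype(float)

def model(t, I0, I1, T, phi):
    # Eq. (2.1): constant flux plus a sinusoidal modulation
    return I0 + I1 * np.cos(2 * np.pi * (t - phi) / T)

p0 = [rate.mean(), 50.0, 365.0, 180.0]
popt, pcov = curve_fit(model, t, rate, p0=p0, sigma=np.sqrt(true))
I0, I1, T, phi = popt
```

With this sample size the fit recovers the period and phase to within a few days, comparable to the statistical uncertainties quoted in the text.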
\begin{figure}[]
\centerline{\includegraphics[width=\linewidth]{plot_murate.pdf}}
\centerline{\includegraphics[width=\linewidth]{plot_Teff7.pdf}}
\caption[]{Single muon rate measured by the OPERA detector (top) and
effective atmospheric temperature (bottom) from January 2008 to March 2013.
Fit results are superimposed to the data sets.
Symbols in the legends are defined in the text and in Eqs.~\ref{murate} and~\ref{trate}.
}
\label{fig:murate}
\end{figure}
The presence of a sinusoidal
component with a period $T=(359 \pm 2)$ days is observed, with an
amplitude amounting to $\alpha= I_\mu^1/I_\mu^0 = (1.55 \pm 0.08)$\% of the average flux,
and a phase $\phi=(197 \pm 5)$~days, with a $\chi^2/\textrm{dof} = 1.46$.
In accordance with other LNGS experiments and
with the fit result on the temperature reported in Sec.~\ref{sec:Temperature},
we take as the best estimate of the phase the result obtained fixing the period to one year of 365 days.
The maximum is then observed
at
day
$\phi = (186 \pm 3)$, corresponding to July 5,
with a $\chi^2$/dof value of 1.47. The result is also shown in Fig.~\ref{fig:murate} (top panel).
The Lomb-Scargle
periodogram \cite{bib:LS1,bib:LS2} is a common tool to analyze
unevenly spaced data
to detect a periodic variation
independently of the modulation phase.
For the analysis presented here, we have
exploited the generalised Lomb-Scargle
periodogram, proposed in \cite{bib:LSgen}, which takes into account the
non-zero average value of the event rate.
The periodogram obtained for the single muon event rate is shown in Fig.~\ref{fig:LS} (left panel).
To assess the significance of the periodogram peaks, 10$^5$ toy
experiments with a constant rate of 3359~$\mu$/day in the
detector have been simulated, and the corresponding periodograms
reconstructed. In Fig.~\ref{fig:LS} the 99\% significance level,
defined as the value for which 99\% of the toy experiments result in a
lower spectral power, is also shown.
The most significant peak is around one year ($P_{max}$ at $T \sim 365$ days), but
other less significant peaks are also present, as a consequence of the fact
that Eq.~\ref{murate} is
an approximation.
A simulated experiment has been performed extracting
daily rates according to the result of our fit and comparing the periodograms obtained
with and without data in the days of detector downtime.
It shows that the amplitude of the peaks around 200 days
increases with the detector downtime.
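A minimal periodogram sketch follows; it uses SciPy's plain Lomb-Scargle on mean-subtracted synthetic rates rather than the generalised variant with a floating mean used in the analysis, and the data are synthetic, not OPERA data.

```python
import numpy as np
from scipy.signal import lombscargle

# Lomb-Scargle periodogram on unevenly sampled synthetic rates:
# 1274 live days drawn at random from a longer calendar span.
rng = np.random.default_rng(1)
t = np.sort(rng.choice(np.arange(1.0, 1900.0), size=1274, replace=False))
y = 3359 * (1 + 0.0155 * np.cos(2 * np.pi * (t - 186) / 365))
y += rng.normal(0, 58, t.size)            # ~sqrt(3359) daily scatter

periods = np.linspace(100, 600, 2000)
omega = 2 * np.pi / periods               # lombscargle takes angular freqs
power = lombscargle(t, y - y.mean(), omega)
best_period = periods[np.argmax(power)]   # most significant peak
```

The dominant peak lands near one year, mirroring the behaviour of Fig.~\ref{fig:LS} (left panel).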
\begin{figure}
\includegraphics[width=0.5\linewidth]{plot_GLS_mu.pdf}
\includegraphics[width=0.5\linewidth]{plot_GLS_T6.pdf}
\caption[]{Generalised Lomb-Scargle periodograms for the measured muon
rate (left) and the effective atmospheric temperature (right).
The 99\% significance level is also drawn as a reference in both periodograms. }
\label{fig:LS}
\end{figure}
We checked possible systematic effects on the phase $\phi$ coming from the data taking stability along the years.
Possible variations in DAQ efficiency are likely to coincide with calendar years, since the RPCs were turned on and off on a yearly basis and DAQ maintenance and interventions were done during the CNGS winter shutdowns.
By applying scale factors on a yearly basis, a constant average rate is achieved by definition.
The normalisation, differing at the few per mille level, was applied separately to TT-only and TT+RPC data, which were then rescaled one to the other.
The rate has been fitted to Eq.~\ref{murate} fixing the period to 365 days; the modulation maximum results on day~$\phi = (177 \pm 3)$ with a $\chi^2/\textrm{dof} = 1.64$.
Comparing these results with those obtained with a constant scale factor,
we evaluate a systematic error on the phase as the semi-difference between
the two $\phi$ values
extracted
with period fixed to one year,
i.e. $\delta \phi_{\textrm{sys}} = 5$~days.
Our best estimate of the muon rate maximum, obtained at fixed period $T = 365$~days, is found on day
$\phi = 186 \pm 3_{\textrm{stat}} \pm 5_{\textrm{sys}}$, i.e. July 5.
\section{Atmospheric temperature modulation}
\label{sec:Temperature}
To measure the atmospheric temperature modulation, we have used data
from the European Center for Medium-range Weather Forecasts
(ECMWF)~\cite{bib:ECMWF}.
The center provides temperature values at different altitudes above given
locations, obtained by means of interpolations based on measurements
of various kinds around
the planet and on a global atmospheric model.
The coordinates are those used in \cite{bib:borex}:
13.5333$^\circ$~E, 42.4275$^\circ$~N.
The atmospheric temperature is provided at 37 discrete pressure levels
ranging from 1 to 1000 hPa four times in each day (0.00~h, 6.00~h, 12.00~h
and 18.00~h).
Averaging these temperature values using weights accounting for the production
of pions and kaons at different altitudes (see Appendix~\ref{sec:Appendix}),
the effective atmospheric temperature $T_{eff}$ has been calculated four times
a day.
The four measurements are then averaged and the variance used as an estimate
of the uncertainty on the mean value.
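Schematically, the weighting amounts to the following average over pressure levels; the temperature profile and weight function below are illustrative placeholders only, while the actual $W(p)$ follows the references given below and is detailed in Appendix~\ref{sec:Appendix}.

```python
import numpy as np

# Effective-temperature sketch: weighted average of the temperatures
# T(p_i) at the 37 ECMWF pressure levels.  Both the toy temperature
# profile and the placeholder weights are assumptions for illustration.
pressure = np.geomspace(1.0, 1000.0, 37)           # hPa levels
temperature = 210.0 + 10.0 * np.log10(pressure)    # toy profile, K
weights = np.exp(-pressure / 100.0)                # placeholder W(p)

T_eff = np.sum(weights * temperature) / np.sum(weights)
```

The weights peak high in the atmosphere, where most detectable muons are produced, so $T_{eff}$ tracks the stratospheric rather than the ground-level temperature.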
The weights used in this analysis have been computed as described in~\cite{bib:Grashorn} and in
previous experimental papers~(\cite{bib:macro1}-\cite{bib:dayabay}). They depend on the inclusive meson
production in the forward region, on the attenuation lengths of
cosmic ray primaries,
pions and kaons, as well as on the average value
$\langle E_{thr} \cos \theta \rangle$, where $E_{thr}$ is the minimum energy required
for a muon to reach the considered underground site and $\theta$ is
the angle between the muon and the vertical direction.
From a full Monte Carlo simulation taking into account the rock map above Hall C of LNGS~\cite{chargeratio}, we obtain
$E_{thr} = 1.4$~TeV and $\langle E_{thr} \cos \theta \rangle = (1.1 \pm 0.2)$~TeV.
A previous analysis~\cite{bib:borex} quoted a higher value, $E_{thr} = 1.8$~TeV,
extracted from numerical methods
assuming a flat overburden~\cite{bib:Grashorn}.
All other parameters in the weight functions are site independent.
More details about the effective atmospheric temperature calculation are
given in Appendix~\ref{sec:Appendix}.
The $T_{eff}$ values are shown in Fig.~\ref{fig:murate}~(bottom panel) in the
time period of the
data taking, from January 2008 to March 2013.
The temperature has been fitted to a sinusoidal function similar to Eq.~\ref{murate}:
\begin{equation}
T_{eff}(t) = T^0_{eff} + T^1_{eff} \cos \frac{2 \pi} {T} (t-\phi)
\label{trate}
\end{equation}
The fit results are also shown in Fig.~\ref{fig:murate}.
The average effective temperature is 220.9~K, and a modulation is
observed with an amplitude $\alpha = T_{eff}^1/T_{eff}^0 = (1.726 \pm 0.002)\%$ of $T^0_{eff}$.
The period $T = (364.9 \pm 0.1)$~days and phase $\phi=(184.6 \pm 0.1)$~days are similar to those observed for the
single muon rate.
A more refined study about the time correlation between temperature and
muon rate is
presented
in the next Section.
In Fig.~\ref{fig:LS} (right panel) the generalised Lomb-Scargle periodogram is displayed also for
$T_{eff}$. As for the muon rate, the most significant peak is around 365 days
and other less significant peaks are present.
\section{Cosmic ray flux and effective atmospheric temperature correlation}
\label{sec:correlation}
The possible presence of a time shift $\tau$ between the modulated components of
the cosmic ray muon rate
and of the effective atmospheric temperature has been
investigated using the cross correlation function defined as:
\begin{equation}
R(\tau)~ =~ \int_0^{\Delta t} \frac{I_\mu (t)-I^0_\mu}{I^1_\mu}~\frac{T_{eff}(t-\tau)-T^0_{eff}}{T^1_{eff}}~\frac{dt}{\Delta t} ~\simeq~ \frac{1}{N_{d}}~\Sigma_i~\frac{I_\mu (t_i)-I^0_\mu}{I^1_\mu}~\frac{T_{eff}(t_i-\tau)-T^0_{eff}}{T^1_{eff}}
\label{eqtimecor}
\end{equation}
$I^0_\mu$ and $T^0_{eff}$ are the average values as obtained in the fits of
Sec.~\ref{sec:Cray}, for the single muon cosmic events, and
of Sec.~\ref{sec:Temperature}, for the effective atmospheric temperature, while
$I^1_\mu$ and $T^1_{eff}$ are the corresponding amplitudes of the
modulated components.
The sum runs over the $N_{d}$ days with both measurements.
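A discrete sketch of this sum on synthetic series can be written as follows; the two series are synchronous by construction, with a shared short-term noise term injected by hand to mimic the correlated deviations (not OPERA data).

```python
import numpy as np

# Discrete cross-correlation of normalised muon-rate and effective-
# temperature residuals, shifted by an integer lag tau in days.
rng = np.random.default_rng(2)
t = np.arange(1274)
shared = rng.normal(0, 1, t.size)         # common short-term fluctuations
mu_res = np.cos(2 * np.pi * (t - 186) / 365) + 0.5 * shared
T_res  = np.cos(2 * np.pi * (t - 186) / 365) + 0.5 * shared

def R(tau):
    """Average of mu_res(t) * T_res(t - tau) over days with both entries."""
    if tau == 0:
        return float(np.mean(mu_res * T_res))
    return float(np.mean(mu_res[tau:] * T_res[:-tau]))

r0, r30 = R(0), R(30)                     # peak expected at tau = 0
```

Because the short-term deviations are shared, $R(\tau)$ peaks at $\tau = 0$ above the smooth sinusoidal contribution, as observed in the data.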
The correlation function is shown in Fig.~\ref{fig:correphase},
together with the 99\% C.L., in dashed red, evaluated by producing with Monte Carlo
techniques 10$^5$ toy experiments, each one consisting of two time
series, one for the temperature and the other for the muon rate. The
99\% C.L. value is defined as that for which 99\% of the toy
experiments have a correlation value for $\tau=0$ day lower than it.
Both temperatures and rates have been generated according to the results
of the fits reported in the previous sections, fixing the time period
to 365 days and the phase to 185 days both for the simulated cosmic ray flux
and effective atmospheric temperature.
In the same figure, in dotted blue, the cross correlation function is reported, as
a reference, for one of the toy experiments.
In real data
a peak with maximum at $\tau = 0$ day
is observed
above the expected
contribution of the modulated components.
This peak is due to correlated short term deviations (few
days scale) from the fitted functions in the atmospheric muon rate and the effective temperature.
Both the sinusoidal components and the short term variations of
the two
time series are synchronous.
\begin{figure}[]
\centerline{\includegraphics[width=0.7\linewidth]{correphase6.pdf}}
\caption[]{Cross correlation function (continuous black) between the measured daily muon rate
and the effective atmospheric temperature. In dotted blue the result of a toy
Monte Carlo simulation described in the text
is reported, where the muon rate and the effective
temperature have been extracted according to the fit results, but with equal
time period and phase. In dashed red the 99\% significance level is also
shown (see text for the definition).}
\label{fig:correphase}
\end{figure}
In Fig.~\ref{fig:alphaT} the percentage deviation of the single muon
flux, $\Delta I_\mu/I^0_\mu=(I_\mu-I^0_\mu)/I^0_\mu$, is shown as a function of
the relative effective temperature variation,
$\Delta T_{eff}/T^0_{eff}=(T_{eff}-T^0_{eff})/T^0_{eff}$.
According to the model described in Appendix \ref{sec:Appendix}, a
proportionality relation is expected:
\begin{equation}
\frac{\Delta I_\mu (t)}{I_\mu^0} = \alpha_T ~\frac{\Delta T_{eff} (t)}{T_{eff}^0}
\label{eqalphat}
\end{equation}
The effective temperature coefficient $\alpha_T$ depends
on the energy threshold for atmospheric muons to reach the detector
and on the ratio of kaon/pion production in cosmic rays interactions with
the atmosphere.
The linear fit performed on the plot of Fig.~\ref{fig:alphaT}
gives $\alpha_T=0.95\pm0.04$,
consistent with expectations and with results
from other LNGS experiments (\cite{bib:macro1}-\cite{bib:gerda}).
In the fit, a constant term, $a_0$, has been added
to verify the
consistency of our data with Eq.~\ref{eqalphat}.
Its measured value $a_0 = -0.08 \pm 0.05$ is consistent with zero, as expected.
The correlation coefficient is $R=0.50$.
It is worth noticing that $R$ is proportional to the cross correlation function
evaluated for $\tau=0$, and can be obtained from Eq.~\ref{eqtimecor}
replacing $I^1_\mu$ and $T^1_{eff}$, the modulation amplitudes, by the
respective standard deviations.
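The linear fit can be sketched as follows; the relative deviations are synthetic, generated with $\alpha_T = 0.95$, and the scatter values are illustrative assumptions.

```python
import numpy as np

# Linear-fit sketch for the effective temperature coefficient alpha_T
# (Eq. 4), on synthetic relative deviations.
rng = np.random.default_rng(3)
dT = rng.normal(0, 0.0122, 900)             # Delta T_eff / T0 (~1.2% rms)
dI = 0.95 * dT + rng.normal(0, 0.021, 900)  # muon-rate relative deviation

alpha_T, a0 = np.polyfit(dT, dI, 1)         # slope and intercept
r = np.corrcoef(dT, dI)[0, 1]               # correlation coefficient
```

With scatter of this size the recovered slope is compatible with the input value, the intercept with zero, and the correlation coefficient is of order 0.5, as in the analysis.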
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{alphat6.pdf}
\caption[]{Correlation between the muon rate and the effective
temperature relative variations. Symbols in the legend are defined in the text. }
\label{fig:alphaT}
\end{figure}
Two sources of systematic errors have been investigated: the energy
threshold on cosmic ray muons collected by the
detector, affecting
the calculation of the effective atmospheric temperatures, and the
muon rate data-set rescaling.
Using $\langle E_{thr} \cos \theta \rangle \sim 1.8$~TeV, as done by other LNGS experiments,
the weights
have been re-evaluated
resulting
in an effective temperature which is, on average, 0.2~K higher, with no appreciable
effect on the $\alpha_T$ measurement or on the other analyses reported here.
The evaluation of $\alpha_T$ has also been performed applying year-by-year scaling factors to the muon flux measurements, independently for data with and without the RPC system, as done in Sec.~\ref{sec:Cray}; this accounts for possible effects due to small changes in detector settings and data acquisition.
The obtained effective temperature coefficient
$\alpha_T = (0.93 \pm 0.04)$
is compatible with the previously
quoted value within statistical uncertainties.
The systematic error on $\alpha_T$ has been therefore neglected with
respect to the statistical one.
In Fig.~\ref{fig:alphaTvsE} our result is shown together with
the values measured by other experiments as a function of the energy threshold $\langle E_{thr} \cos \theta \rangle$.
The model accounting for pion and kaon contributions to the muon flux is represented by the continuous black line, where the kaon/pion ratio
$r_{K/\pi} = Z_{NK}/Z_{N\pi} = 0.144$ is fixed to the value inferred by the muon charge ratio analysis~\cite{bib:chargeratio2014}.
\begin{figure}
\centering
\includegraphics[width=0.8\linewidth]{alphaT_vs_energy7.pdf}
\caption[]{ Experimental values of the correlation coefficient $\alpha_T$ as a function of $\langle E_{thr} \cos \theta \rangle$. The result of this analysis is displayed in black, measurements
reported by other experiments in grey (AMANDA~\cite{bib:amanda}, IceCube~\cite{bib:icecube}, MINOS~\cite{bib:minosf, bib:minosn}, Double Chooz~\cite{bib:db} and Daya Bay~\cite{bib:dayabay}).
The seven measurements performed by five LNGS
experiments (MACRO~\cite{bib:macro1bis}, Borexino~\cite{bib:borex, bib:borex2}, GERDA~\cite{bib:gerda}, LVD~\cite{bib:lvd} and OPERA) are shown in the insert plot.
The LNGS energy threshold - assumed to be $(1.1 \pm 0.2)$~TeV, as evaluated in this paper - is artificially displaced for each data point
for the sake of visualisation.
The continuous black line represents the model
accounting for pions and kaons, with $r_{K/\pi} = 0.144$~\cite{bib:chargeratio2014}. }
\label{fig:alphaTvsE}
\end{figure}
\section{Conclusions}
In this paper we report on studies of the seasonal variation of the flux of single muons generated in cosmic ray interactions in the high atmosphere, as measured by the OPERA electronic detectors located in the LNGS underground laboratory.
The observed dependence is approximated by a constant flux with a $(1.55 \pm 0.08)\%$ modulated component with a one-year period and maximum on day $(186 \pm 3_{\textrm{stat}} \pm 5_{\textrm{sys}})$, corresponding to July 5.
An effective atmospheric temperature has been defined as the average of the
temperatures of the air column above the experimental site, weighted by the
production probability of high energy muons, detectable by our underground
detector.
The effective atmospheric temperature
shows a time variation similar to cosmic ray rate changes.
A cross correlation function based study has demonstrated the presence of
short term deviations from a pure yearly modulated model, both in the
cosmic ray flux and in the effective temperature.
These short term deviations and the modulated components appear to be
simultaneous in the two time series.
The effective atmospheric temperature and muon rate variations are positively
correlated ($R=~0.50$). The effective temperature
coefficient is measured to be $\alpha_T = (0.95 \pm 0.04)$, consistent
with the model of $\pi$ and $K$ production in primary cosmic ray
interactions and with previous experimental results. Both the phase and the effective temperature coefficient were measured by several LNGS experiments over different time periods in different decades, and the results agree very well within the quoted uncertainties.
\section{Introduction}
The link between the classical Black-Scholes equation and quantum mechanics and the application of quantum formalism to Mathematical Finance has been investigated by several authors. For example: \cite{AccBoukas}-\cite{Choustova}, \cite{Haven}-\cite{Hidalgo}, \cite{Maslov}, and \cite{McCloud_1}-\cite{Segal}. In particular, the approach of modelling derivative prices using self-adjoint operators on a Hilbert space was suggested by Segal \& Segal in \cite{Segal}. In this paper the authors noted that, in the real world, the market operates with imperfect information and that different observables, such as underlying price and option delta, are usually not simultaneously observable. This fact makes the non-commutative extension of the Black-Scholes framework a natural step. The authors point out that this approach addresses some of the limitations of the classical Black-Scholes model, such as the underestimation of the probability of extreme events, the so-called ``fat tails''. In this sense, non-commutative quantum models present an alternative means to capture complex market dynamics without the addition of new stochastic variables.\newline
In \cite{AccBoukas}, Accardi \& Boukas derive a general form for the Quantum Black-Scholes equation based on the Hudson-Parthasarathy calculus (cf \cite{Hudson-Parthasarathy}) and show that a commutative unitary time development operator acting on the market state, leads to a classical Black-Scholes system. Further they give the quantum stochastic differential equation governing the time development operator, and demonstrate how unitary transformations can lead to non-commutativity. An example of a non-commutative Quantum Black-Scholes partial differential equation is derived, although the authors work in an abstract setting and do not discuss specific unitary transformations and Hilbert space representations of financial markets.\newline
Therefore, one objective of this work is to use the Accardi-Boukas framework to look at how different unitary transformations can be used to transform the classical Black-Scholes equation, and to understand how quantum effects become apparent. We then go on to explore an example application of the approach in the modelling of bid-offer spread dynamics. The final objective of the current work is to identify suitable Monte-Carlo methods, which can be used for the simulation of solutions.\newline
In section 2, we give an overview of the Accardi-Boukas derivation of the general form for the Quantum Black-Scholes equation, from \cite{AccBoukas}. With the objective of looking at ``near classical'' Black-Scholes worlds, we then derive specific forms for the resulting partial differential equations that result from small translations, and rotations. This in turn involves the extension of the Accardi-Boukas equation to systems with more than one underlying variable. We go on to discuss how this approach can be applied to the modelling of bid-offer dynamics.\newline
In section 3, we show how this can be linked to the nonlocal diffusion processes discussed by Luczka, H{\"a}nggi and Gadomski in \cite{Luczka}. Here the impact of the diffusion differential operator is spread out through convolution with a ``blurring'' function. The Kramers-Moyal expansion of the nonlocal Fokker-Planck equations allows us to derive the moments of the blurring function for the ``near classical'' quantum system.\newline
This approach allows a natural route to the visualisation of the quantum effects on the system using McKean SDEs (cf \cite{McKean}). The Monte-Carlo methods, developed by Guyon, and Henry-Labord\`ere in \cite{Guyon}, can then be adapted to the simulation of solutions. This is discussed in section 4, where we present numerical results and show how, by introducing small transformations to the system, the stochastic process now reacts to a market downturn by returning higher volatility. This effect is observed even where there is a single static Black-Scholes type volatility.
\section{Quantum Black-Scholes equation}
In this section we follow the notation given by Accardi \& Boukas in \cite{AccBoukas}. The current market is represented by a vector in a Hilbert space: $\mathbb{H}$, which contains all relevant information about the state of the market at an instant in time. The tradeable price for a security is represented by a self-adjoint operator on $\mathbb{H}$: $X$, and the spectrum of $X$ represents possible prices.\newline
\break
Let $L^2[\mathbb{R}^+;\mathbb{H}]$ represent functions from the positive real axis (time) to the Hilbert space $\mathbb{H}$. Then the random behaviour of tradeable securities can be modelled using the tensor product of $\mathbb{H}$ with the bosonic Fock space: $\mathbb{H}\otimes\Gamma (L^2[\mathbb{R}^+;\mathbb{H}])$. We term this the ``market space''. The operator that returns the current price becomes $X\otimes \mathbb{I}$, where $\mathbb{I}$ represents the identity operator. The time development of $X\otimes \mathbb{I}$ into the future is modelled by:\newline
\break
$j_t(X)=U_t^*X\otimes\mathbb{I}U_t$\newline
\break
$\mathbb{H}$ carries the initial state of the market and $U_t$ acts by introducing random fluctuations that fill up the empty states in: $\Gamma (L^2[\mathbb{R}^+;\mathbb{H}])$. The functional form for $U_t$ is derived by Hudson \& Parthasarathy in \cite{Hudson-Parthasarathy}, and is given by:\newline
\break
$dU_t=-\Bigg(\bigg(iH+\frac{1}{2}L^*L\bigg)dt+L^*SdA_t-LdA^{\dagger}_t+\bigg(1-S\bigg)d\Lambda_t\Bigg)U_t$\newline
\break
$dA^{\dagger}_t, dA_t, d\Lambda_t$ represent the standard creation, annihilation, and Poisson operators of quantum stochastic calculus. $H, S$ and $L$ also operate on the market space, with $S$ unitary, and $H$ self-adjoint. The multiplication rules of the Hudson-Parthasarathy calculus are given below (cf \cite{Hudson-Parthasarathy}):\newline
\break
\begin{tabular}{p{1cm}|p{1cm}p{1cm}p{1cm}p{1cm}}
-&$dA^{\dagger}_t$&$d\Lambda_t$&$dA_t$&$dt$\\
\hline
$dA^{\dagger}_t$&0&0&0&0\\
$d\Lambda_t$&$dA^{\dagger}_t$&$d\Lambda_t$&0&0\\
$dA_t$&$dt$&$dA_t$&0&0\\
$dt$&0&0&0&0
\end{tabular}\newline
\break
The first thing to note is that, for $S\neq 1$, there is a non-zero Poisson term and the time development operator is non-commutative.\newline
\break
The next thing to note is that, where $S=1$, the Poisson term disappears. The model can be written using the Ito calculus in place of the more general Hudson-Parthasarathy framework. The Wiener process $dW_t$ can be modelled using: $dA_t+dA^{\dagger}_t$.\newline
\break
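Indeed, using the multiplication table above one checks directly that $dW_t = dA_t + dA^{\dagger}_t$ has the Ito property of a Wiener increment: all pairwise products vanish except $dA_t\,dA^{\dagger}_t = dt$, so $(dA_t + dA^{\dagger}_t)^2 = dt$.\newline
\break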
Let $V_T=j_T(X-K)^+$ represent the option price process at final expiry $T$, and $K$ the operator given by multiplication by the strike. Further, for $V_t=j_t(X-K)^+$ the following expansion is assumed:\newline
\break
$V_t=F(t,x)=\sum_{n,k} a_{n,k}(t-t_0)^n(x-x_0)^k$\newline
\break
The Hudson-Parthasarathy multiplication rules can be applied to this expansion to give a quantum stochastic differential equation for $V_t$ that corresponds to the usual Ito expansion used in the derivation of the classical Black-Scholes equation. By assuming one can construct a hedge portfolio by holding the underlying and a risk-free numeraire asset, Accardi \& Boukas derive the general form of the Quantum Black-Scholes equation under the assumption that any portfolio must be self-financing. Proposition 1 from \cite{AccBoukas} gives the full Quantum Black-Scholes equation:\newline
\break
\begin{equation}\label{QBS}
a_{1,0}(t,j_t(X))+a_{0,1}(t,j_t(X))j_t(\theta)+\sum_{k=2}^{\infty} a_{0,k}(t,j_t(X))j_t(\alpha\lambda^{k-2}\alpha^{\dagger})=a_tj_t(\theta)+V_tr-a_tj_t(X)r
\end{equation}
Here, $a_t$ represents the holding in the underlying asset and is given by the boundary conditions:\newline
\break
$\sum_{k=1}^{\infty} a_{0,k}(t,j_t(X))j_t(\lambda^{k-1}\alpha^{\dagger})=a_tj_t(\alpha^{\dagger})$\newline
$\sum_{k=1}^{\infty} a_{0,k}(t,j_t(X))j_t(\alpha\lambda^{k-1})=a_tj_t(\alpha)$\newline
$\sum_{k=1}^{\infty} a_{0,k}(t,j_t(X))j_t(\lambda^{k})=a_tj_t(\lambda)$\newline
\break
Further, $\theta, \alpha$ and $\lambda$ are given by:\newline
\break
$\alpha=[L^*,X]S$, $\lambda=S^*XS-X$, $\theta=i[H,X]-\frac{1}{2}\{L^*LX+XL^*L-2L^*XL\}$. The boundary conditions arise because, when the Poisson term $d\Lambda$ is non-zero, higher order expansion terms still contain non-vanishing contributions, unlike in Ito calculus where terms of order above 2 can be ignored.
\subsection{Translation}
The natural Hilbert space for an equity price (say the FTSE price) is: $\mathbb{H}=L^2[\mathbb{R}]$. In this case, the only unitary transformations we can use are the translations:\newline
\break
$T_\varepsilon:f(x)\rightarrow f(x-\varepsilon)$\newline
\break
Here we have, for a translation invariant Lebesgue measure $\mu$:\newline
\break
$\langle T_\varepsilon f|T_\varepsilon g\rangle = \int_{\mathbb{R}} \compconj{f(x-\varepsilon)}g(x-\varepsilon)d\mu=\int_{\mathbb{R}} \compconj{f(x)}g(x)d\mu=\langle f|g\rangle$\newline
\break
So $S$ is unitary in this case. Therefore, translating by $\varepsilon$ we get:\newline
\break
$\lambda f(x)=\big(T_{-\varepsilon}XT_\varepsilon-X\big)f(x)=T_{-\varepsilon}xf(x-\varepsilon )-xf(x)=(x+\varepsilon )f(x)-xf(x)=\varepsilon f(x)$\newline
\break
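This calculation can be checked symbolically; the sketch below (using Python's sympy, with a generic smooth test function $f$, an illustrative choice) verifies that $T_{-\varepsilon}XT_\varepsilon-X$ acts as multiplication by $\varepsilon$:

```python
import sympy as sp

x, eps = sp.symbols('x varepsilon')
f = sp.Function('f')

# T_a shifts the argument: (T_a f)(x) = f(x - a)
T = lambda expr, a: expr.subs(x, x - a)

XTf = x * T(f(x), eps)        # X T_eps f = x f(x - eps)
SXSf = T(XTf, -eps)           # T_{-eps} x f(x - eps) = (x + eps) f(x)

lam_f = sp.expand(SXSf - x * f(x))   # (S* X S - X) f
assert lam_f == eps * f(x)           # lambda is multiplication by eps
```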
So we have $\lambda=\varepsilon$, and it is clear the example given in \cite{AccBoukas} relates to a translation by $\varepsilon=1$. Following the key steps from \cite{AccBoukas} Proposition 3, and inserting this back into equation (\ref{QBS}), we get the following Quantum Black-Scholes partial differential equation for this system:\begin{lemma}
Let $u(t,x)$ represent the price at time t, of a derivative contract in the system described above under small translation $\varepsilon$, and with interest rate $r$. Then the quantum Black-Scholes equation becomes:
\begin{equation} \label{eq:1}
\frac{\partial u(t,x)}{\partial t}=rx\frac{\partial u(t,x)}{\partial x}-u(t,x)r+\sum_{k=2}^{\infty}\frac{\varepsilon^{k-2}}{k!}\frac{\partial^k u(t,x)}{\partial x^k}g(x)
\end{equation}
\end{lemma}
\begin{proof}
The proof follows the same steps Accardi \& Boukas outline in \cite{AccBoukas} proposition 3, with small modifications.
\end{proof}
For $\varepsilon=0$, the last term drops out, and the equation reverts to the classical Black-Scholes. We investigate the impact of non-zero $\varepsilon$ in section 4.
\subsection{Rotation}
For the one-dimensional market space $L^2[\mathbb{R}]$, the Lebesgue-invariant translations are the only unitary transformations available. However, the true current state of the financial market contains a much richer variety of information than a single price, and by increasing the dimensionality of the market space accordingly we introduce a wider variety of unitary transformations, which can introduce non-commutativity. For example, let $x$ represent the FTSE mid-price, and $\epsilon$ half of the bid-offer spread, so that $(x+\epsilon)$ represents the best offer price and $(x-\epsilon)$ the best bid price. Now the market is represented by the Hilbert space $\mathbb{H}=L^2[\mathbb{R}^2]$, and we can apply rotations in addition to translations.\newline
\break
We make the simplifying assumption that market participants can trade the mid-price $x$ (for example during the end of day auction process), and that the market has sufficient liquidity to enable participants alternatively to act as market makers (receiving the bid-offer spread) or as hedgers (crossing the bid-offer spread), and therefore to trade the bid-offer spread $\epsilon$. Therefore we make the following assumption:
\begin{assumption}\label{ass_1}
For any derivative payout $V(x_T,\epsilon_T)$, we can construct a hedged portfolio, and can proceed with the derivation of the Quantum Black Scholes equation following the basic methodology from \cite{AccBoukas}.
\end{assumption}
We now have separate creation, annihilation and Poisson operators for $x$ and $\epsilon$: $dA_x$, $dA_{\epsilon}$, etc. These can be combined using the multiplication table (\cite{Hudson-Parthasarathy}, Theorem 4.5), by making the assumption that the bid-offer spread is uncorrelated with the equity price. This corresponds to assumption
\ref{ass_2}:
\begin{assumption}\label{ass_2}
$dA_xd\Lambda_{\epsilon}=dA_{\epsilon}d\Lambda_x=d\Lambda_xd\Lambda_{\epsilon}=d\Lambda_{\epsilon}d\Lambda_x=dA_xdA^{\dagger}_{\epsilon}=dA_{\epsilon}dA^{\dagger}_x=d\Lambda_xdA^{\dagger}_{\epsilon}=d\Lambda_{\epsilon}dA^{\dagger}_x=0$.
\end{assumption}
Lastly, we make the assumption that we can expand the derivative payout as before:
\begin{assumption}\label{ass_3}
$V_t=F(t,x,\epsilon)=\sum_{n,k,l} a_{n,l,k}(t-t_0)^n(x-x_0)^k(\epsilon-\epsilon_0)^l$
\end{assumption}
We can now derive the relevant Quantum Black-Scholes equation:
\begin{proposition}
Let $\mathbb{H}=L^2[\mathbb{R}^2]$, and let $X\otimes 1$ and $\epsilon\otimes 1$ operate on the market space $\mathbb{H}\otimes\Gamma (L^2[\mathbb{R}^+;\mathbb{H}])$, returning respectively the mid-price and bid-offer spread of a tradeable security. Further, let the notation from \cite{AccBoukas}, and the above assumptions, apply.\newline
\break
Then the Quantum Black-Scholes equation in this case is given by:\newline
\begin{equation}\label{QBS2}
\begin{split}
a_{1,0,0}(t,j_t(X),j_t(\epsilon))+a_{0,1,0}(t,j_t(X),j_t(\epsilon))j_t(\theta_x)+a_{0,0,1}(t,j_t(X),j_t(\epsilon))j_t(\theta_{\epsilon})\\
+\sum_{k=2}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\alpha_x\lambda_x^{k-2}\alpha_x^{\dagger})+\sum_{l=2}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\alpha_{\epsilon}\lambda_{\epsilon}^{l-2}\alpha_{\epsilon}^{\dagger})\\
=a_{x,t}j_t(\theta_x)+a_{\epsilon,t}j_t(\theta_{\epsilon})+V_tr-a_{x,t}j_t(X)r-a_{\epsilon,t}j_t(\epsilon)r
\end{split}
\end{equation}
Where for $j_t(X)$:
\begin{equation}\label{bc_x}
\begin{split}
\sum_{k=1}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\lambda_x^{k-1}\alpha_x^{\dagger})=a_{x,t}j_t(\alpha_x^{\dagger})\\
\sum_{k=1}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\alpha_x\lambda_x^{k-1})=a_{x,t}j_t(\alpha_x)\\
\sum_{k=1}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\lambda_x^{k})=a_{x,t}j_t(\lambda_x)\\
\end{split}
\end{equation}
and for $j_t(\epsilon)$:\newline
\begin{equation}\label{bc_e}
\begin{split}
\sum_{l=1}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\lambda_{\epsilon}^{l-1}\alpha_{\epsilon}^{\dagger})=a_{\epsilon,t}j_t(\alpha_{\epsilon}^{\dagger})\\
\sum_{l=1}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\alpha_{\epsilon}\lambda_{\epsilon}^{l-1})=a_{\epsilon,t}j_t(\alpha_{\epsilon})\\
\sum_{l=1}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\lambda_{\epsilon}^{l})=a_{\epsilon,t}j_t(\lambda_{\epsilon})
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
First, the equations for the time-development operators for $X\otimes 1$ and $\epsilon\otimes 1$ become:\newline
\break
$dU_{x,t}=-\Bigg(\bigg(iH+\frac{1}{2}L_x^*L_x\bigg)dt+L_x^*SdA_x-L_xdA_x^{\dagger}+\bigg(1-S\bigg)d\Lambda_x\Bigg)U_{x,t}$\newline
$dU_{\epsilon,t}=-\Bigg(\bigg(iH+\frac{1}{2}L_{\epsilon}^*L_{\epsilon}\bigg)dt+L_{\epsilon}^*SdA_{\epsilon}-L_{\epsilon}dA_{\epsilon}^{\dagger}+\bigg(1-S\bigg)d\Lambda_{\epsilon}\Bigg)U_{\epsilon,t}$\newline
\break
Then, applying the Hudson-Parthasarathy multiplication rules to the expansion given in assumption \ref{ass_3} gives:
\begin{equation}\label{expansion}
\begin{split}
dV_t=\bigg(a_{1,0,0}(t,j_t(X),j_t(\epsilon))+a_{0,1,0}(t,j_t(X),j_t(\epsilon))j_t(\theta_x)+a_{0,0,1}(t,j_t(X),j_t(\epsilon))j_t(\theta_{\epsilon})\\
+\sum_{k=2}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\alpha_x\lambda_x^{k-2}\alpha_x^{\dagger})+\sum_{l=2}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\alpha_{\epsilon}\lambda_{\epsilon}^{l-2}\alpha_{\epsilon}^{\dagger})\bigg)dt\\
+\bigg(a_{0,1,0}(t,j_t(X),j_t(\epsilon))j_t(\alpha_x)+\sum_{k=2}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\alpha_x\lambda_x^{k-1})\bigg)dA_x\\
+\bigg(a_{0,0,1}(t,j_t(X),j_t(\epsilon))j_t(\alpha_{\epsilon})+\sum_{l=2}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\alpha_{\epsilon}\lambda_{\epsilon}^{l-1})\bigg)dA_{\epsilon}\\
+\bigg(a_{0,1,0}(t,j_t(X),j_t(\epsilon))j_t(\alpha_x^{\dagger})+\sum_{k=2}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\lambda_x^{k-1}\alpha_x^{\dagger})\bigg)dA_x^{\dagger}\\
+\bigg(a_{0,0,1}(t,j_t(X),j_t(\epsilon))j_t(\alpha_{\epsilon}^{\dagger})+\sum_{l=2}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\lambda_{\epsilon}^{l-1}\alpha_{\epsilon}^{\dagger})\bigg)dA_{\epsilon}^{\dagger}
\end{split}
\end{equation}
Where $\theta_x, \theta_{\epsilon}$ are given by:\newline
\break
$\theta_x=i[H,X]-\frac{1}{2}\bigg(L_x^*L_xX+XL_x^*L_x-2L_x^*XL_x\bigg)$\newline
$\theta_{\epsilon}=i[H,\epsilon]-\frac{1}{2}\bigg(L_{\epsilon}^*L_{\epsilon}\epsilon+\epsilon L_{\epsilon}^*L_{\epsilon}-2L_{\epsilon}^*\epsilon L_{\epsilon}\bigg)$\newline
\break
$\alpha_x, \alpha_{\epsilon}$ are given by:\newline
\break
$\alpha_x=[L_x^*,X]S$\newline
$\alpha_{\epsilon}=[L_{\epsilon}^*,\epsilon]S$\newline
\break
and finally $\lambda_x, \lambda_{\epsilon}$ are given by:\newline
\break
$\lambda_x=S^*XS-X$\newline
$\lambda_{\epsilon}=S^*\epsilon S-\epsilon$\newline
\break
By assumption \ref{ass_1} we can form a hedge portfolio which we now use:\newline
\break
$V_t=a_{x,t}j_t(X)+a_{\epsilon,t}j_t(\epsilon)+b_t\beta$, for risk free numeraire asset $\beta$.\newline
\break
$dV_t=a_{x,t}dj_t(X)+a_{\epsilon,t}dj_t(\epsilon)+b_t\beta rdt$\newline
\break
Applying the unitary time development operators for $\epsilon$ and $x$ we have:\newline
\break
\begin{equation}\label{hedges}
\begin{split}
dV_t=a_{x,t}\big(j_t(\alpha_x^{\dagger})dA_x^{\dagger}+j_t(\lambda_x)d\Lambda_x+j_t(\alpha_x)dA_x\big)\\
+a_{\epsilon,t}\big(j_t(\alpha_{\epsilon}^{\dagger})dA_{\epsilon}^{\dagger}+j_t(\lambda_{\epsilon})d\Lambda_{\epsilon}+j_t(\alpha_{\epsilon})dA_{\epsilon}\big)\\
+\big(a_{x,t}j_t(\theta_x)+a_{\epsilon,t}j_t(\theta_{\epsilon})+(V_t-a_{x,t}j_t(X)-a_{\epsilon,t}j_t(\epsilon))r\big)dt
\end{split}
\end{equation}
\break
Equating the risky terms between equations (\ref{expansion}) and (\ref{hedges}) leads to the boundary conditions (\ref{bc_x}) and (\ref{bc_e}) on $a_{x,t}$ and $a_{\epsilon,t}$. Similarly, equating the $dt$ terms leads to the Quantum Black-Scholes equation for this system: equation (\ref{QBS2}).
\end{proof}
Now, let $f\big(x,\epsilon\big)$ represent a vector in $\mathbb{H}$, and apply a rotation matrix:\newline
$S=\begin{bmatrix} \cos(\phi) & -\sin(\phi) \\ \sin(\phi) & \cos(\phi) \end{bmatrix}$\newline
\break
We have:\newline
\break
$Sf\big(x,\epsilon\big)=f\big(\cos(\phi)x-\sin(\phi)\epsilon,\cos(\phi)\epsilon+\sin(\phi)x\big)$\newline
\break
$XSf=xf\big(\cos(\phi)x-\sin(\phi)\epsilon,\cos(\phi)\epsilon+\sin(\phi)x\big)$\newline
\break
$S^*XSf=\big(\cos(\phi)x+\sin(\phi)\epsilon\big)f(x,\epsilon)$\newline
\break
So, we end up with:\newline
\break
$\lambda_x=\bigg(\big(\cos(\phi)-1\big)x+\sin(\phi)\epsilon\bigg)$, $\lambda_{\epsilon}=\bigg(\big(\cos(\phi)-1\big)\epsilon-\sin(\phi)x\bigg)$.\newline
\break
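The expression for $S^*XS$ can be checked numerically; the sketch below (Python, with an arbitrary smooth test function, an illustrative choice) confirms that $S^*XSf=(\cos(\phi)x+\sin(\phi)\epsilon)f$ at sample points:

```python
import numpy as np

phi = 0.3
c, s = np.cos(phi), np.sin(phi)

def f(x, e):                       # arbitrary smooth test function
    return np.exp(-x**2 - 0.5*e**2) * (1 + x*e)

S    = lambda g: (lambda x, e: g(c*x - s*e, c*e + s*x))   # rotation by phi
Sinv = lambda g: (lambda x, e: g(c*x + s*e, c*e - s*x))   # rotation by -phi
X    = lambda g: (lambda x, e: x * g(x, e))               # position operator

SXS = Sinv(X(S(f)))                # S* X S acting on f

for x0, e0 in [(0.7, 0.2), (-1.1, 0.05), (0.3, -0.4)]:
    assert abs(SXS(x0, e0) - (c*x0 + s*e0) * f(x0, e0)) < 1e-12
```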
Finally, inserting this back into equation (\ref{QBS2}), we get the Black-Scholes equation for the system (following notation from \cite{AccBoukas}):
\begin{proposition}
Let $u(t,x,\epsilon)$ represent the price at time t, of a derivative contract in the system described above under rotation $\phi$, and with interest rate $r$. Then the quantum Black-Scholes equation becomes:
\begin{equation}\label{eq:2}
\begin{split}
\frac{\partial u(t,x,\epsilon)}{\partial t}=rx\frac{\partial u(t,x,\epsilon)}{\partial x}+r\epsilon\frac{\partial u(t,x,\epsilon)}{\partial \epsilon}-u(t,x,\epsilon)r\\
+\sum_{k=2}^{\infty}\frac{((\cos(\phi)-1)x+\sin(\phi)\epsilon)^{k-2}}{k!}\frac{\partial^k u(t,x,\epsilon)}{\partial x^k}g_1(x,\epsilon)\\
+\sum_{l=2}^{\infty}\frac{((\cos(\phi)-1)\epsilon-\sin(\phi)x)^{l-2}}{l!}\frac{\partial^l u(t,x,\epsilon)}{\partial \epsilon^l}g_2(x,\epsilon)
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
We assume that the operators $L_x, L_x^*,L_{\epsilon}, L_{\epsilon}^*$ involve multiplication by a polynomial in $x, \epsilon$, and therefore commute with $\lambda_x, \lambda_{\epsilon}$. Therefore, from the boundary conditions we have:\newline
\break
$\sum_{k=1}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\lambda_x^{k-1})=a_{x,t}$\newline
\break
$\sum_{l=1}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\lambda_{\epsilon}^{l-1})=a_{\epsilon,t}$\newline
\break
Inserting this into equation (\ref{QBS2}) gives:
\begin{equation}
\begin{split}
a_{1,0,0}(t,j_t(X),j_t(\epsilon))+a_{0,1,0}(t,j_t(X),j_t(\epsilon))j_t(X)r+a_{0,0,1}(t,j_t(X),j_t(\epsilon))j_t(\epsilon)r\\
+\sum_{k=2}^{\infty} a_{0,k,0}(t,j_t(X),j_t(\epsilon))j_t(\lambda_x^{k-2}(\alpha_x\alpha_x^*-\lambda_x(\theta_x-xr)))\\
+\sum_{l=2}^{\infty} a_{0,0,l}(t,j_t(X),j_t(\epsilon))j_t(\lambda_{\epsilon}^{l-2}(\alpha_{\epsilon}\alpha_{\epsilon}^*-\lambda_{\epsilon}(\theta_{\epsilon}-\epsilon r)))\\
=V_tr
\end{split}
\end{equation}
Now writing $g_1(x,\epsilon)=j_t(\alpha_x\alpha_x^*-\lambda_x(\theta_x-xr))$, $g_2(x,\epsilon)=j_t(\alpha_{\epsilon}\alpha_{\epsilon}^*-\lambda_{\epsilon}(\theta_{\epsilon}-\epsilon r))$, and $a_{0,k,0}(t,j_t(X),j_t(\epsilon))=\frac{1}{k!}\frac{\partial^k u}{\partial x^k}, a_{0,0,l}(t,j_t(X),j_t(\epsilon))=\frac{1}{l!}\frac{\partial^l u}{\partial {\epsilon}^l}$, we have the result given.
\end{proof}
For small rotations, setting $\phi=\varepsilon$, we have $\cos(\varepsilon)=1-\frac{\varepsilon^2}{2}+o(\varepsilon^2)$, and $\sin(\varepsilon)=\varepsilon + o(\varepsilon^2)$. Inserting this into equation (\ref{eq:2}), we obtain a new partial differential equation, in which the coefficient of the $k$th partial derivative with respect to $x, \epsilon$, for $k\geq 3$, is correct to $o(\varepsilon^{2(k-2)})$. This form for small rotations is more amenable to the methods we apply in section 3.
\begin{equation}\label{eq:3}
\begin{split}
\frac{\partial u(t,x,\epsilon)}{\partial t}=rx\frac{\partial u(t,x,\epsilon)}{\partial x}+r\epsilon\frac{\partial u(t,x,\epsilon)}{\partial \epsilon}-u(t,x,\epsilon)r\\
+\sum_{k=2}^{\infty}\frac{(\varepsilon\epsilon-(\varepsilon^2/2)x)^{k-2}}{k!}\frac{\partial^k u(t,x,\epsilon)}{\partial x^k}g_1(x,\epsilon)\\
+\sum_{l=2}^{\infty}\frac{(-\varepsilon x-(\varepsilon^2/2)\epsilon)^{l-2}}{l!}\frac{\partial^l u(t,x,\epsilon)}{\partial \epsilon^l}g_2(x,\epsilon)
\end{split}
\end{equation}
As is the case for equation (\ref{eq:1}), this reduces to the classical Black-Scholes equation for two uncorrelated random variables (in this case the price $x$, and the bid-offer spread $\epsilon$) when $\varepsilon=0$.\newline
\break
For the classical case, the addition of the bid-offer spread is in some ways unnecessary when using the model for derivative pricing. For derivative contracts depending on the close price, one can usually hedge daily at the closing price during the end of day auction process. For many trading desks this may be sufficient in practice, and terms involving the bid-offer spread will drop out of the model. In the quantum case, examination of equations (\ref{eq:2}) and (\ref{eq:3}) shows that we expect interference between the bid-offer spread dynamics and the price dynamics. For small rotations, these equations are singularly perturbed PDEs, and we expect the behaviour in most regions to approximate classical behaviour. However, where the higher derivative terms are larger, quantum interference may be significant. We discuss this further in sections 3 and 4.
\section{Nonlocal Diffusions}
In this section, we derive the Fokker-Planck equations associated to the Quantum Black-Scholes equations: (\ref{eq:1}), and (\ref{eq:3}). We show how these can be written in integral form, by using the Kramers-Moyal expansion (see for example \cite{Frank}). This enables us to link the Quantum Black-Scholes models of the previous section to nonlocal diffusions (see for example the paper by Luczka, H{\"a}nggi and Gadomski: \cite{Luczka}). We assume zero interest rates in this section to help clarify the notation without changing the key dynamics. The integral form for the Fokker-Planck equations is given by:
\begin{equation}\label{FokkerPlanck1}
\begin{split}
\frac{\partial p(t,x,\epsilon)}{\partial t}=\frac{1}{2}\frac{\partial^2}{\partial x^2}\bigg(\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\big(H(y_x,y_{\epsilon}|x,\epsilon)g_1(x,\epsilon)p(x-y_x,\epsilon-y_{\epsilon},t)\big)dy_xdy_{\epsilon}\bigg)\\
+\frac{1}{2}\frac{\partial^2}{\partial {\epsilon}^2}\bigg(\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\big(H(y_x,y_{\epsilon}|x,\epsilon)g_2(x,\epsilon)p(x-y_x,\epsilon-y_{\epsilon},t)\big)dy_xdy_{\epsilon}\bigg)
\end{split}
\end{equation}
The function $H(y_x,y_{\epsilon}|x,\epsilon)$ has the effect of ``blurring'' the impact of the diffusion operator. In the case that $H(y_x,y_{\epsilon}|x,\epsilon)$ is a Dirac delta function, the diffusion operator is localised as usual, and the associated Fokker-Planck equation reduces to the standard Kolmogorov forward equation associated with the classical Black-Scholes.\newline
\break
We start with the following general form for equations (\ref{eq:1}) and (\ref{eq:3}):
\begin{equation}\label{eq:4}
\frac{\partial u(t,x,\epsilon)}{\partial t}=g_1(x,\epsilon)\sum_{k=2}^{\infty}\frac{f_1(x,\epsilon,\varepsilon)^{k-2}}{k!}\frac{\partial^k u(t,x,\epsilon)}{\partial x^k}+g_2(x,\epsilon)\sum_{l=2}^{\infty}\frac{f_2(x,\epsilon,\varepsilon)^{l-2}}{l!}\frac{\partial^l u(t,x,\epsilon)}{\partial \epsilon^l}
\end{equation}
\begin{proposition}
The Fokker-Planck equation associated to equation (\ref{eq:4}), with $r=0$ is given by:
\begin{equation}\label{FokkerPlanck2}
\begin{split}
\frac{\partial p(t,x,\epsilon)}{\partial t}=\sum_{k=2}^{\infty}\frac{(-1)^k}{k!}\frac{\partial^k \big(g_1(x,\epsilon)f_1(x,\epsilon,\varepsilon)^{k-2}p(t,x,\epsilon)\big)}{\partial x^k}\\
+\sum_{l=2}^{\infty}\frac{(-1)^l}{l!}\frac{\partial^l \big(g_2(x,\epsilon)f_2(x,\epsilon,\varepsilon)^{l-2}p(t,x,\epsilon)\big)}{\partial \epsilon^l}
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
For a derivative payout $h(x,\epsilon)$, with zero interest rates, we have the following price in risk neutral measure $Q$:\newline
\break
$u(x_t,\epsilon_t, t)=E^{Q}\big[h(x_T,\epsilon_T)\big]=\int_{\mathbb{R}^2} h(y_x,y_{\epsilon})p(y_x,y_{\epsilon}|x,\epsilon,t)dy_xdy_{\epsilon}$\newline
\break
Where $p(y_x,y_{\epsilon}|x,\epsilon,t)$ represents the risk neutral probability density for the variables observed at time $T$, conditional on the values at time $t$, and $h(x,\epsilon)$ represents the derivative payout at $T$. We then write the right hand integral as:\newline
\break
$\int_{\mathbb{R}^2} h(y_x,y_{\epsilon})p(y_x,y_{\epsilon}|x,\epsilon,t)dy_xdy_{\epsilon}=\int_0^t\int_{\mathbb{R}^2} Lh(y_x,y_{\epsilon})p(y_x,y_{\epsilon}|x,\epsilon,s)dy_xdy_{\epsilon}ds$\newline
\break
Where $L$ represents the operator:\newline
\break
$Lh(x,\epsilon)=\bigg(g_1(x,\epsilon)\sum_{k=2}^{\infty}\frac{f_1(x,\epsilon,\varepsilon)^{k-2}}{k!}\frac{\partial^k}{\partial x^k}+g_2(x,\epsilon)\sum_{l=2}^{\infty}\frac{f_2(x,\epsilon,\varepsilon)^{l-2}}{l!}\frac{\partial^l}{\partial \epsilon^l}\bigg)h(x,\epsilon)$\newline
\break
The Fokker-Planck equation is given by the adjoint operator $L^*$, where:\newline
\break
$\int_0^t\int_{\mathbb{R}^2} Lh(y_x,y_{\epsilon})p(y_x,y_{\epsilon}|x,\epsilon,s)dy_xdy_{\epsilon}ds=\int_0^t\int_{\mathbb{R}^2} h(y_x,y_{\epsilon})L^*p(y_x,y_{\epsilon}|x,\epsilon,s)dy_xdy_{\epsilon}ds$\newline
\break
If we truncate equation (\ref{eq:4}) at a certain derivative order $N$, the result follows by integrating by parts $N$ times. Proceeding with higher and higher $N$, we can match the derivative terms of any arbitrary order, and the result follows.
\end{proof}
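The integration-by-parts argument can be illustrated numerically in one dimension: for a truncated operator ($N=4$, say) and rapidly decaying functions, $\int (Lh)p\,dx=\int h(L^*p)\,dx$ up to discretisation error. The sketch below uses illustrative choices of $g_1$, $f_1$, $h$ and $p$:

```python
import numpy as np
from math import factorial

xs = np.linspace(-8, 8, 4001)
dx = xs[1] - xs[0]

def d(arr, n):                        # n-th numerical derivative
    for _ in range(n):
        arr = np.gradient(arr, dx)
    return arr

h = np.exp(-xs**2)                    # rapidly decaying test payout
p = np.exp(-(xs - 0.5)**2)            # density-like function
g = 1.0 + 0.1*np.sin(xs)              # illustrative g_1(x)
f = 0.05                              # illustrative constant f_1

# truncated operator: L h = g * sum_{k=2}^{4} f^(k-2)/k! d^k h / dx^k
Lh  = sum(g * f**(k-2) / factorial(k) * d(h, k) for k in range(2, 5))
# adjoint:   L* p =     sum_{k=2}^{4} (-1)^k/k! d^k (g f^(k-2) p) / dx^k
Lsp = sum((-1)**k / factorial(k) * d(g * f**(k-2) * p, k) for k in range(2, 5))

lhs = (Lh * p).sum() * dx             # integral of (Lh) p
rhs = (h * Lsp).sum() * dx            # integral of h (L* p)
assert abs(lhs - rhs) < 1e-3
```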
The objective now, is to write equation (\ref{FokkerPlanck2}) in the form of (\ref{FokkerPlanck1}). To do this we can follow a Moment Matching algorithm. We use the following expansion:\newline
\break
$g(x,\epsilon)p(x-y_x,\epsilon-y_{\epsilon},t)=\sum_{i,j=0}^{\infty} \frac{(-1)^{(i+j)}}{i!\,j!}y_x^iy_{\epsilon}^j\frac{\partial^{i+j}(g(x,\epsilon)p(x,\epsilon))}{\partial x^i\partial{\epsilon}^j}$\newline
\break
Inserting this into equation (\ref{FokkerPlanck1}) gives:
\begin{equation}\label{eq:5}
\begin{split}
\frac{\partial p(t,x,\epsilon)}{\partial t}=\frac{1}{2}\frac{\partial^2}{\partial x^2}\bigg(\sum_{i,j=0}^{\infty}\frac{(-1)^{(i+j)}}{i!\,j!}\frac{\partial^{i+j}(g_1(x,\epsilon)p(x,\epsilon))}{\partial x^i\partial\epsilon^j}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}H(y_x,y_{\epsilon}|x,\epsilon)y_x^iy_{\epsilon}^jdy_xdy_{\epsilon}\bigg)\\
+\frac{1}{2}\frac{\partial^2}{\partial {\epsilon}^2}\bigg(\sum_{i,j=0}^{\infty}\frac{(-1)^{(i+j)}}{i!\,j!}\frac{\partial^{i+j}(g_2(x,\epsilon)p(x,\epsilon))}{\partial x^i\partial\epsilon^j}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}H(y_x,y_{\epsilon}|x,\epsilon)y_x^iy_{\epsilon}^jdy_xdy_{\epsilon}\bigg)
\end{split}
\end{equation}
Now by equating the coefficients of the derivatives with respect to $x$ and $\epsilon$, between equations (\ref{eq:5}) and (\ref{FokkerPlanck2}), one can calculate the moments of the ``blurring'' function $H(y_x,y_{\epsilon}|x,\epsilon)$. For the translation case, $g_2(x,\epsilon)=0$, and the probability density is a function of $x$ only.
\subsection{Moment Matching: Translation Case}
In the translation case of section 2.1, since the coefficient of each differential term in equation (\ref{eq:1}) is a constant multiplied by $g(x)$, the moments of the ``blurring'' function $H(y)$ will not depend on $x$. Equation (\ref{eq:5}) becomes:
\begin{equation}\label{eq:6}
\frac{\partial p(t,x)}{\partial t}=\frac{1}{2}\bigg(\sum_{j=0}^{\infty}\frac{(-1)^{(j)}}{j!}\frac{d^{(j+2)}(g(x)p(x))}{dx^{(j+2)}}\int_{-\infty}^{\infty}H(y)y^jdy\bigg)
\end{equation}
Similarly, the Fokker-Planck associated with equation (\ref{eq:1}), with $r=0$, is given by:
\begin{equation}\label{eq:7}
\frac{\partial p(t,x)}{\partial t}=\sum_{k=2}^{\infty}\frac{(-1)^k\varepsilon^{k-2}}{k!}\frac{\partial^k (g(x)p(t,x))}{\partial x^k}
\end{equation}
Now the moments of the ``blurring'' function can be matched by equating directly equations (\ref{eq:6}) and (\ref{eq:7}):
\begin{proposition}\label{trans_mom}
Let $H_i$ represent the $i^{th}$ moment of $H(y)$, for the Fokker-Planck equation (\ref{FokkerPlanck2}), relating to the translation case described in section 2.1. Then, $H_i$ is given by:\newline
\break
$H_i=\frac{2(-\varepsilon)^i}{(i+1)(i+2)}$
\end{proposition}
\begin{proof}
$H_i$ follows (for $i\geq 0$) by equating the coefficients for: $\frac{\partial^{(i+2)}}{\partial x^{(i+2)}}$, between equations (\ref{eq:6}) and (\ref{eq:7}).
\end{proof}
We find that, in this case, $H(y)$ is a normalised function that tends to a Dirac delta function as $\varepsilon$ tends to zero, and for $\varepsilon=0$ we recover the classical second-order Fokker-Planck equation. This is discussed further in section 4.
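The moments in proposition \ref{trans_mom} can be tabulated directly; a short Python sketch (using the formula as stated) illustrates that $H_0=1$, and that the higher moments vanish as $\varepsilon\rightarrow 0$:

```python
def H_moment(i, eps):
    """i-th moment of the 'blurring' function H(y), as in the proposition."""
    return 2 * (-eps)**i / ((i + 1) * (i + 2))

eps = 0.1
moments = [H_moment(i, eps) for i in range(5)]

assert moments[0] == 1.0                         # H is normalised
assert abs(abs(moments[1]) - eps/3) < 1e-15      # mean of order eps
assert all(abs(m) < 0.01 for m in moments[2:])   # higher moments ~ eps^i
```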
\subsection{Moment Matching: Rotation Case}
In the rotation case of section 2.2, the coefficients of each differential term in equation (\ref{FokkerPlanck2}) are functions of $x$ and $\epsilon$. Therefore, we require the moments of the ``blurring'' function also to be functions of $x$ and $\epsilon$: $H(y_x,y_{\epsilon}|x,\epsilon)$. Once we have calculated the coefficients for the differential terms, we can use these to form an inhomogeneous second-order differential equation for the moments of $H(y_x,y_{\epsilon}|x,\epsilon)$.\newline
\break
In this case, comparing equation (\ref{eq:3}) with the general form (\ref{eq:4}), we have: $f_1(x,\epsilon)=\varepsilon\epsilon-(\varepsilon^2/2)x$, and $f_2(x,\epsilon)=-\varepsilon x-(\varepsilon^2/2)\epsilon$. Therefore, the Fokker-Planck equation associated with equation (\ref{eq:3}), with $r=0$, is given by:
\begin{equation}\label{eq:8}
\begin{split}
\frac{\partial p(t,x,\epsilon)}{\partial t}=\sum_{k=2}^{\infty}\frac{1}{k!}\frac{\partial^k \bigg(\big((\varepsilon^2/2)x-\varepsilon\epsilon\big)^{k-2}g_1(x,\epsilon)p(t,x,\epsilon)\bigg)}{\partial x^k}\\
+\sum_{l=2}^{\infty}\frac{1}{l!}\frac{\partial^l \bigg(\big(\varepsilon x+(\varepsilon^2/2)\epsilon\big)^{l-2}g_2(x,\epsilon)p(t,x,\epsilon)\bigg)}{\partial \epsilon^l}
\end{split}
\end{equation}
The moments of the ``blurring'' function will now follow by equating coefficients for the differential terms between equations (\ref{eq:5}), and (\ref{eq:8}).
\begin{proposition}\label{rot_mom}
Let the moments of the ``blurring'' function $H(y_x,y_{\epsilon}|x,\epsilon)$ be given by:\newline
\break
$a^i_{x}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}H(y_x,y_{\epsilon}|x,\epsilon)y_x^idy_xdy_{\epsilon}$\newline
$a^j_{\epsilon}=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}H(y_x,y_{\epsilon}|x,\epsilon)y_{\epsilon}^jdy_xdy_{\epsilon}$\newline
\break
and assume that:\newline
\break
$a^0=1$, $a^1_x=a^1_{\epsilon}=0$\newline
\break
Then for the higher moments we have, for $n\geq 2$:
\begin{equation}
\frac{(-1)^na^{n-2}_x+2n\frac{\partial a^{n-1}_x}{\partial x}+n(n-1)\frac{\partial^2 a^{n}_x}{\partial x^2}}{n!}=\frac{((\varepsilon^2/2)x-\varepsilon\epsilon)^{n-2}}{n(n-1)(1-(\varepsilon^2/2))^{(n-1)}}
\end{equation}
\begin{equation}
\frac{(-1)^na^{n-2}_{\epsilon}+2n\frac{\partial a^{n-1}_{\epsilon}}{\partial \epsilon}+n(n-1)\frac{\partial^2 a^{n}_{\epsilon}}{\partial {\epsilon}^2}}{n!}=\frac{((\varepsilon^2/2)\epsilon+\varepsilon x)^{n-2}}{n(n-1)(1-(\varepsilon^2/2))^{(n-1)}}
\end{equation}
\end{proposition}
\begin{proof}
We first calculate the coefficients for $\frac{\partial^n (g_1(x,\epsilon)p(x,\epsilon))}{\partial x^n}$ from equation (\ref{eq:8}).\newline
\break
The 2nd order coefficient is given by:\newline
\break
$\sum_{i\geq 2} \frac{(i-2)!(\varepsilon^2/2)^{i-2}\binom{i}{2}}{i!}=\frac{1}{2}\sum_{i\geq 0} (\varepsilon^2/2)^i=\frac{1}{2(1-(\varepsilon^2/2))}$\newline
\break
Similarly, the 3rd order coefficient is given by:\newline
\break
$\sum_{i\geq 3} \frac{(i-2)!(\varepsilon^2/2)^{(i-3)}\binom{i}{3}((\varepsilon^2/2)x-\varepsilon\epsilon)}{i!}=\frac{((\varepsilon^2/2)x-\varepsilon\epsilon)}{3!}\sum_{i\geq 0} (i+1)(\varepsilon^2/2)^i=\frac{((\varepsilon^2/2)x-\varepsilon\epsilon)}{3!(1-(\varepsilon^2/2))^2}$\newline
\break
In general, the nth order coefficient is given by:\newline
\break
$\sum_{i\geq n} \frac{(i-2)!(\varepsilon^2/2)^{(i-n)}\binom{i}{n}((\varepsilon^2/2)x-\varepsilon\epsilon)^{n-2}}{i!(n-2)!}$\newline
\break
$=\frac{((\varepsilon^2/2)x-\varepsilon\epsilon)^{(n-2)}}{n!}\sum_{i\geq 0} (i+1)(i+2)...(i+n-2)(\varepsilon^2/2)^i$\newline
\break
The final summation can be calculated by differentiating $(n-2)$ times, the infinite sum $1/(1-v)$, where $v=(\varepsilon^2/2)$.\newline
\break
Therefore, the coefficient for $n\geq 2$ is given by:
\begin{equation}\label{x_coeff}
\frac{((\varepsilon^2/2)x-\varepsilon\epsilon)^{n-2}}{n(n-1)(1-(\varepsilon^2/2))^{(n-1)}}\frac{\partial^n (g_1(x,\epsilon)p(x,\epsilon))}{\partial x^n}
\end{equation}
Following similar logic for $\epsilon$ we have the coefficient:
\begin{equation}\label{epsilon_coeff}
\frac{((\varepsilon^2/2)\epsilon+\varepsilon x)^{n-2}}{n(n-1)(1-(\varepsilon^2/2))^{(n-1)}}\frac{\partial^n (g_2(x,\epsilon)p(x,\epsilon))}{\partial \epsilon^n}
\end{equation}
These coefficients can now be used to calculate a 2nd order inhomogeneous differential equation for the moments of $H(y_x,y_{\epsilon}|x,\epsilon)$. We start by expanding the $\partial^2/\partial x^2$, and $\partial^2/\partial\epsilon^2$ in equation (\ref{eq:5}).\newline
\break
Since we assume, from section 2.2, that $x,\epsilon$ are uncorrelated, equation (\ref{eq:5}) can be written:
\begin{equation}\label{eq:9}
\begin{split}
\frac{\partial p(t,x,\epsilon)}{\partial t}=
\frac{1}{2}\sum_{i=0}^{\infty}\frac{(-1)^{(i)}}{i!}\Bigg(\frac{\partial^{i}(g_1(x,\epsilon)p(x,\epsilon))}{\partial x^i}\frac{\partial^2 a^i_{x}}{\partial x^2}+\frac{\partial^{i+2}(g_1(x,\epsilon)p(x,\epsilon))}{\partial x^{i+2}}a^i_{x}\\+2\frac{\partial^{i+1}(g_1(x,\epsilon)p(x,\epsilon))}{\partial x^{i+1}}\frac{\partial a^i_{x}}{\partial x}\Bigg)\\
+\frac{1}{2}\sum_{j=0}^{\infty}\frac{(-1)^{(j)}}{j!}\Bigg(\frac{\partial^{j}(g_2(x,\epsilon)p(x,\epsilon))}{\partial \epsilon^j}\frac{\partial^2 a^j_{\epsilon}}{\partial \epsilon^2}+\frac{\partial^{j+2}(g_2(x,\epsilon)p(x,\epsilon))}{\partial \epsilon^{j+2}}a^j_{\epsilon}\\+2\frac{\partial^{j+1}(g_2(x,\epsilon)p(x,\epsilon))}{\partial \epsilon^{j+1}}\frac{\partial a^j_{\epsilon}}{\partial \epsilon}\Bigg)\\
\end{split}
\end{equation}
The coefficients for $\frac{\partial^n (g_1(x,\epsilon)p(x,\epsilon))}{\partial x^n}$ from equation (\ref{eq:9}) are now given by: $\frac{\partial^2 a^0}{\partial x^2}(g_1(x,\epsilon)p(x,\epsilon))$ for $n=0$, $(\frac{\partial^2 a^1_x}{\partial x^2}+2\frac{\partial a^0}{\partial x})\frac{\partial (g_1(x,\epsilon)p(x,\epsilon))}{\partial x}$ for $n=1$, and:
\begin{equation}\label{eq:10}
\frac{(-1)^na^{n-2}_x+2n\frac{\partial a^{n-1}_x}{\partial x}+n(n-1)\frac{\partial^2 a^{n}_x}{\partial x^2}}{n!}\frac{\partial^n (g_1(x,\epsilon)p(x,\epsilon))}{\partial x^n}
\end{equation}
\ for $n\geq 2$. Similarly, for $\epsilon$ we have:
\begin{equation}\label{eq:11}
\frac{(-1)^na^{n-2}_{\epsilon}+2n\frac{\partial a^{n-1}_{\epsilon}}{\partial \epsilon}+n(n-1)\frac{\partial^2 a^{n}_{\epsilon}}{\partial {\epsilon}^2}}{n!}\frac{\partial^n (g_2(x,\epsilon)p(x,\epsilon))}{\partial \epsilon^n}
\end{equation}
We now make the assumption that $H$ is a normalised probability distribution with expectation zero for $x$ and $\epsilon$; i.e., $\frac{\partial a^0}{\partial x}=0$, $a^1_x=0$, and $a^1_{\epsilon}=0$. These assumptions ensure the coefficients with $n=0,1$ equate to zero on both sides of equation (\ref{eq:8}). The proposition follows by equating equations (\ref{x_coeff})/(\ref{eq:10}) and (\ref{epsilon_coeff})/(\ref{eq:11}).
\end{proof}
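The geometric-series identities used above for the second and third order coefficients can be sanity-checked numerically (a sketch; the sample value of $v=\varepsilon^2/2$ and the truncation point are illustrative choices):

```python
from math import comb, factorial

v = 0.02            # v = eps^2 / 2, with eps ~ 0.2
N = 60              # truncation of the infinite sums

# Second order: sum_{i>=2} (i-2)! v^(i-2) C(i,2) / i!  ->  1/(2(1-v))
s2 = sum(factorial(i - 2) * v**(i - 2) * comb(i, 2) / factorial(i)
         for i in range(2, N))
assert abs(s2 - 1 / (2 * (1 - v))) < 1e-12

# Third order (coefficient of the linear factor):
# sum_{i>=3} (i-2)! v^(i-3) C(i,3) / i!  ->  1/(3!(1-v)^2)
s3 = sum(factorial(i - 2) * v**(i - 3) * comb(i, 3) / factorial(i)
         for i in range(3, N))
assert abs(s3 - 1 / (factorial(3) * (1 - v)**2)) < 1e-12
```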
\section{Monte-Carlo Methods \& Numerical Simulations}
In this section, we give a brief overview of McKean stochastic differential equations, before introducing how the particle method, discussed in the book by Guyon \& Henry-Labord\`ere: \cite{Guyon}, can be used in their simulation. We then go on to present numerical results from the bid-offer model discussed above, placing particular emphasis on understanding how quantum effects become apparent through small transformations applied to a classical Black-Scholes system.
\subsection{McKean Stochastic Differential Equations}
McKean nonlinear stochastic differential equations were introduced in \cite{McKean}, and refer to SDEs where the drift and volatility coefficients depend on the underlying probability law of the stochastic process. Following the notation from \cite{Guyon} we have:\newline
\break
$dX_t=b(t,X_t,\mathbb{P}_t)dt+\sigma(t,X_t,\mathbb{P}_t)dW_t$\newline
\break
These are then related to the nonlinear Fokker Planck equation:
\begin{equation}
\frac{\partial p}{\partial t}=\frac{1}{2}\sum_{i,j}\frac{\partial^2 (\sigma_i(t,x,\mathbb{P}_t)\sigma_j(t,x,\mathbb{P}_t)p(t,x))}{\partial x_i\partial x_j}-\sum_{i}\frac{\partial (b^i(t,x,\mathbb{P}_t)p(t,x))}{\partial x_i}
\end{equation}
In this case, we can write equation (\ref{FokkerPlanck1}) in this form. We have for $r=0$, $b^1(t,x,\epsilon,\mathbb{P}_t)=b^2(t,x,\epsilon,\mathbb{P}_t)=0$ and $\sigma_1(t,x,\epsilon,\mathbb{P}_t)=\sqrt{g_1(x,\epsilon)\mathbb{E}^p\bigg[\frac{H(x-y_x,\epsilon-y_{\epsilon}|x,\epsilon)}{p(x,\epsilon,t)}\bigg]}$, $\sigma_2(t,x,\epsilon,\mathbb{P}_t)=\sqrt{g_2(x,\epsilon)\mathbb{E}^p\bigg[\frac{H(x-y_x,\epsilon-y_{\epsilon}|x,\epsilon)}{p(x,\epsilon,t)}\bigg]}$.\newline
\break
Therefore, we can simulate the solution to equation (\ref{FokkerPlanck1}) by first calculating the function $H(x-y_x,\epsilon-y_{\epsilon})$ using a moment matching algorithm, and then simulating the following McKean SDE, with uncorrelated Wiener processes $dW^1, dW^2$:
\begin{equation}\label{McKean}
\begin{split}
dx=\sqrt{\frac{g_1(x,\epsilon)}{p(x,\epsilon,t)}\mathbb{E}^{p(y)}\big[H(x-y_x,\epsilon-y_{\epsilon}|x,\epsilon)\big]}dW^1\\
d\epsilon=\sqrt{\frac{g_2(x,\epsilon)}{p(x,\epsilon,t)}\mathbb{E}^{p(y)}\big[H(x-y_x,\epsilon-y_{\epsilon}|x,\epsilon)\big]}dW^2\\
\end{split}
\end{equation}
The simulation of the above SDE relies on the {\it particle method} outlined in Guyon \& Henry-Labord\`ere's book {\it Nonlinear Option Pricing} chapters 10, 11 (cf: \cite {Guyon}).\newline
\break
Each path $(x^i,\epsilon^i)$ now interacts with the other paths $(x^j,\epsilon^j), j\neq i$, during the simulation process, and the convergence of the method relies on the so-called {\it propagation of chaos} property. This states:
\begin{definition}
For all functions $\phi(x,\epsilon,t)\in C_0(\mathbb{R}^2)$:
\begin{equation}
\frac{1}{N}\sum_{j=1}^{N} \phi(x^j,\epsilon^j)\xrightarrow{N\rightarrow\infty} \int_{\mathbb{R}^2} \phi(x,\epsilon,t)p(x,\epsilon,t)dxd\epsilon
\end{equation}
\end{definition}
In our case, the SDE (\ref{McKean}) is a McKean-Vlasov process, and we have from Guyon \& Henry-Labord\`ere (cf: \cite{Guyon} Theorem 10.3), and originally Sznitman (cf: \cite{Sznitman}), that the propagation of chaos property holds.
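The propagation of chaos property is, in essence, a law of large numbers across the interacting paths; the sketch below illustrates the limit statement for a non-interacting stand-in law (independent standard normals and a Gaussian test function, both illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

def phi(x, e):                        # smooth, integrable test function
    return np.exp(-x**2 - e**2)

N = 1_000_000
x, e = rng.normal(size=N), rng.normal(size=N)
est = phi(x, e).mean()                # (1/N) sum_j phi(x^j, e^j)

# exact integral: E[exp(-Z^2)] = 1/sqrt(3) for Z ~ N(0,1), so E[phi] = 1/3
assert abs(est - 1/3) < 2e-3
```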
\subsection{Particle Method}
The first step is to discretize the SDE (\ref{McKean}) as follows:
\begin{equation}\label{discrete_SDE}
\begin{split}
dx^i=\bigg(\sum_{j=1}^{N} H(x^j-x^i,\epsilon^j-\epsilon^i)\frac{P(x^j,\epsilon^j)}{P(x^i,\epsilon^i)}g_1(x^i,\epsilon^i)\bigg)^{0.5}dW^{1,i}\\
d\epsilon^i=\bigg(\sum_{j=1}^{N} H(x^j-x^i,\epsilon^j-\epsilon^i)\frac{P(x^j,\epsilon^j)}{P(x^i,\epsilon^i)}g_2(x^i,\epsilon^i)\bigg)^{0.5}dW^{2,i}
\end{split}
\end{equation}
where $P(x^j,\epsilon^j)$ represents a suitably discretized probability mass function. The algorithm then proceeds as follows:
\begin{enumerate}
\item Solve for the moments of the ``blurring'' function $H(x-y_x,\epsilon-y_{\epsilon}|x,\epsilon)$ using propositions \ref{trans_mom}, and \ref{rot_mom}.
\item Choose a parameterised distribution to approximate $H(x-y_x,\epsilon-y_{\epsilon}|x,\epsilon)$, and fit the parameters using the calculated moments. For example, approximate $H(x-y_x,\epsilon-y_{\epsilon}|x,\epsilon)$ as a univariate/bivariate normal distribution.
\item Simulate the 1st time step, $t_1$, using the value of $H(0,0|x_0,\epsilon_0)$, for starting positions $x_0,\epsilon_0$.
\item After each simulation, allocate the simulated paths into discrete probability buckets: $P(x^j,\epsilon^j)$, for paths $j=1$ to $N$.
\item Proceed from the $t_{k-1}$ to $t_k$ timestep, using (\ref{discrete_SDE}), the value of $H(x-y_x,\epsilon-y_{\epsilon}|x,\epsilon)$, and the discrete buckets at $t_{k-1}$.
\item Iterate steps 4 \& 5 until the final maturity: $t_F$.
\end{enumerate}
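The bucketing and update in steps 4 and 5 can be sketched as follows, in a one-dimensional simplification with a hypothetical Gaussian stand-in for the moment-matched blurring function $H$ and $g(x)=0.01x^2$ (illustrative only, not the calibrated scheme):

```python
import numpy as np

# One Euler step of the interacting-particle scheme, in a 1-D
# simplification.  H is a Gaussian kernel normalized over the buckets
# (a hypothetical stand-in for the moment-matched blurring function),
# P is a histogram estimate of the path density, and g(x) = 0.01 * x**2.
rng = np.random.default_rng(1)
N, dt, n_buckets = 2000, 1.0 / 252, 50

def particle_step(x, eps=0.02):
    # Step 4: allocate the simulated paths into discrete probability buckets.
    counts, edges = np.histogram(x, bins=n_buckets)
    P = counts / N
    centers = 0.5 * (edges[:-1] + edges[1:])
    idx = np.clip(np.digitize(x, edges) - 1, 0, n_buckets - 1)
    # Hypothetical blurring kernel, normalized across buckets.
    width = max(3.0 * eps, 1e-8)
    H = np.exp(-0.5 * ((centers[None, :] - x[:, None]) / width) ** 2)
    H /= H.sum(axis=1, keepdims=True)
    # Step 5: variance scaling factor  sum_k H_k P_k / P_i  for each path i.
    scale = (H * P[None, :]).sum(axis=1) / np.maximum(P[idx], 1e-12)
    var = 0.01 * x**2 * scale
    return x + np.sqrt(np.maximum(var * dt, 0.0)) * rng.standard_normal(N)

x = np.full(N, 1.0)
for _ in range(2):
    x = particle_step(x)
```

In this sketch, paths sitting in low-probability buckets (small $P_i$) receive a larger variance scaling, which mirrors the mechanism discussed in the next subsection.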
\subsection{Modelling the Market Fear Factor}
We can see from (\ref{discrete_SDE}) that small translations will lead to a variance scaling factor:
\[\sum_{j=1}^{N} H(x^j-x^i,\epsilon^j-\epsilon^i)\frac{P(x^j,\epsilon^j)}{P(x^i,\epsilon^i)}.\]
This will have the impact of reducing the volatility of those paths which lie in the middle of the ``bell curve'', owing to the negative curvature of the probability law at these points: probability mass is spread by the ``blurring'' function to lower probability points.\newline
\break
Similarly, at the extremes of the probability density curve where the curvature is positive, probability mass is spread to areas with net higher probability. In essence, the market's memory of a recent extreme event will lead to a higher market volatility at the next time step.\newline
\break
This effect differs from the negative skew observed in local volatility models (for example the work by Dupire: cf \cite{Dupire}), and from stochastic volatility models (for example Heston: \cite{Heston}), in the sense that the increase in volatility is linked to recent random moves in the tail of the probability distribution, rather than to the level of the stochastic volatility or a static function of price and time.\newline
\break
To highlight the difference, one could allow for periodic rebalancing of the process given by equation (\ref{discrete_SDE}). For example, one could replace the unconditional probability with the probability conditional on the previous step. In this way, the level of the volatility would depend purely on a ``memory'' of recent price history, rather than on the absolute level of the market price, or an additional random variable. The market responds to large moves with a heightened fear factor. The study of such processes with rebalancing will involve advanced techniques for calculating the conditional probabilities, and we defer a detailed study to future work.\newline
\break
\subsection{Numerical Results}
In this section, we simulate the one-factor process described in sections 2.1 and 3.1. In this case, we approximate $H(y)$ by a normal distribution fitted to the moments from proposition 3.2: $N(\frac{\varepsilon}{3},\frac{\varepsilon^2}{18})$.\newline
\break
The non-zero 1st moment will lead to an upside/downside bias to the ``market fear factor'' effect. Essentially, by introducing a translation in the negative $x$ direction, one introduces downside ``fear'' into the model.\newline
\break
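For instance, the fitted normal approximation can be sampled directly; a small sanity check (hypothetical sampler, with $\varepsilon=0.02$ as in Figure 2) that the fitted distribution reproduces the matched moments:

```python
import numpy as np

# Moment-matching sanity check: sample the fitted normal approximation
# N(eps/3, eps**2/18) for H(y) and confirm it reproduces the target
# first and second moments (eps = 0.02 is the value used in Figure 2).
eps = 0.02
mu, var = eps / 3.0, eps**2 / 18.0

rng = np.random.default_rng(2)
samples = rng.normal(mu, np.sqrt(var), size=500_000)
sample_mu, sample_var = samples.mean(), samples.var()
```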
Figures 1 \& 2 below illustrate the results from a 2-step Monte-Carlo process, with $g(x)=0.01x^2$, starting value $x_0=1$, 100K Monte-Carlo paths, and 500 discrete probability buckets. The scatter plot shows the magnitude of the proportional return on the 1st time-step on the horizontal axis, and on the second time-step on the vertical axis:\newline
\break
\includegraphics[scale=0.5]{Fig1.jpg}\newline
Figure 1: $\varepsilon=0$; the horizontal axis represents the proportional return for the first time-step, the vertical axis the second time-step.\newline
\break
\includegraphics[scale=0.5]{Fig2.jpg}\newline
Figure 2: The results for $\varepsilon=0.02$, horizontal axis represents the proportional return for the first time-step, vertical axis represents the second time-step.\newline
\break
Figure 1 shows the results for $\varepsilon=0$. This is a classical Black-Scholes system, and there is no correlation between the magnitude \& direction of the 1st and 2nd time-steps.\newline
\break
Figure 2 shows the proportional returns for $\varepsilon=0.02$ (in blue), overlaid on top of the $\varepsilon=0$ results (in orange). The volatility of the second step is reduced on those paths where the first time-step has been small. There is a slightly increased second-step volatility for those paths with large positive first steps, and a significantly increased second-step volatility for those paths with a large negative first step. In effect, the drop in market prices has introduced ``fear'' into these paths.\newline
\break
The final chart shows the probability distributions for the natural logarithm of the simulated value after 50 one day time-steps. The non-zero translation results in a natural skewness in the distribution.\newline
\break
\includegraphics[scale=0.5]{Fig3.jpg}\newline
Figure 3: Distribution for the natural log of the final price after 50 one day time-steps. 100K Monte-Carlo paths, and 500 discrete probability buckets.
\section{Conclusions}
In this paper, we demonstrate how unitary transformations can be used to model novel quantum effects in the Quantum Black-Scholes system of Accardi \& Boukas (cf \cite{AccBoukas}).\newline
We show how these quantum stochastic processes can also be modelled using nonlocal diffusions, and simulated using the particle method outlined by Guyon \& Henry-Labord\`ere in \cite{Guyon}.\newline
By introducing a bid-offer spread parameter, and extending the Accardi-Boukas framework to 2 variables, we show how rotations, in addition to translations, can be applied. Thus, a richer representation of the information contained in the current market leads to a wider variety of unitary transformations that can be used.\newline
In section 4, using a Monte-Carlo simulation, we illustrate how introducing a translation to the one dimensional model leads to a skewed distribution, whereby recent market down moves lead to increased volatility going forward. In effect, the market retains memory of recent significant moves.\newline
In \cite{Dupire}, Dupire shows how to calibrate a local volatility to the current vanilla option smile. This enables a Monte-Carlo simulation that is fully consistent with current market option prices. Carrying out the same analysis, using the new Quantum Fokker-Planck equations, is another important next step to consider as a future development of the current work.
\section{Introduction}
\label{irl::intro}
Approximately 350,000 Americans suffer from serious spinal cord injuries (SCI), resulting in loss of
some voluntary motion control. Recently, epidural and transcutaneous spinal stimulation have proven
to be promising methods for regaining motor function. To find the optimal stimulation signal, it is
necessary to quantitatively measure the effects of different stimulations on a patient. Since motor
function is our concern, we mainly study the effects of stimulations on patient motion, represented
by a sequence of poses captured by motion sensors. One typical experiment setting is shown in Figure
\ref{irl:game}, where a patient moves to follow a physician's instructions, and a sensor records the
patient's center-of-pressure (COP) continuously. This study will assist our design of stimulating
signals, as well as advance the understanding of patient motion with spinal cord injuries.
\begin{figure}
\centering
\subfloat[A patient sitting on a sensing device\label{irl::game1}]{
\includegraphics[width=0.2\textwidth]{game.eps}
}
\qquad
\subfloat[Instructions on movement directions\label{irl::game2}]{
\includegraphics[width=0.2\textwidth]{instructions.eps}
}
~
\subfloat[The patient's COP trajectories during the movement\label{irl::game3}]{
\includegraphics[width=0.2\textwidth]{gametrajectory.eps}
}
\caption{Rehabilitative game and observed trajectories: in Figure \ref{irl::game1}, the patient
sits on a sensing device, and then moves to follow the instructed directions in Figure
\ref{irl::game2}. Figure \ref{irl::game3} shows the patient's center-of-pressure (COP) during
the movements.}
\label{irl:game}
\end{figure}
We assume the stimulating signals will alter the patient's initial preferences over poses,
determined by body weight distribution, spinal cord injuries, gravity, etc., and an accurate
estimation of the preference changes will reveal the effect of spinal stimulations on spinal cord
injuries, as other factors are assumed to be invariant to the stimulations. To estimate the
patient's preferences over different poses, the most straightforward approach is counting the pose
visiting frequencies from the motion, assuming that the preferred poses are more likely to be
visited. However, the patient may visit an undesired pose to follow the instructions or to change
into a subsequently preferred pose, making preference estimation inaccurate without considering the
context.
In this work, we formulate the patient's motion as a Markov Decision Process, where each state
represents a pose, and its reward value encodes all the immediate factors motivating the patient to
visit this state, including the pose preferences and the physician's instructions. With this
formulation, we adopt inverse reinforcement learning (IRL) algorithms to estimate the reward value
of each state from the observed motion of the patient.
Existing solutions of the IRL problem mainly work on small-scale problems, by collecting a set of
observations for reward estimation and using the estimated reward afterwards. For example, the
methods in \cite{irl::irl1,irl::irl2, irl::subgradient} estimate the agent's policy from a set of
observations, and estimate a reward function that leads to the policy. The method in
\cite{irl::maxentropy} collects a set of trajectories of the agent, and estimates a reward function
that maximizes the likelihood of the trajectories. However, the state space of human motion is huge
for non-trivial analysis, and these methods cannot handle it well because they must solve a
reinforcement learning problem in each iteration of reward estimation. Several methods \cite{irl::guidedirl,irl::relative}
solve the problem by approximating the reinforcement learning step, at the expense of a
theoretically sub-optimal solution.
The problem can be simplified under the condition that the transition model and the action set
remain unchanged for the subject, thus each reward function leads to a unique optimal value
function. Based on this assumption, we propose a function approximation method that learns the
reward function and the optimal value function, but without the computationally expensive
reinforcement learning steps, thus it can be scaled to a large state space. We find that this
framework can also extend many existing methods to high-dimensional state spaces.
The paper is organized as follows. We review existing work on inverse reinforcement learning in
Section \ref{irl::related}, and formulate the function approximation inverse reinforcement learning
method for large state spaces in Section \ref{irl::largeirl}. A simulated experiment and a clinical
experiment are shown in Section \ref{irl::experiments}, with conclusions in Section
\ref{irl::conclusions}.
\section{Related Works}
\label{irl::related}
The idea of inverse optimal control was proposed by Kalman \cite{irl::kalman}, while the inverse
reinforcement learning problem was first formulated in \cite{irl::irl1}, where the agent observes
the states resulting from an assumingly optimal policy, and tries to learn a reward function that
makes the policy better than all alternatives. Since the goal can be achieved by multiple reward
functions, this paper tries to find one that maximizes the difference between the observed policy
and the second best policy. This idea is extended by \cite{irl::maxmargin}, in the name of
max-margin learning for inverse optimal control. Another extension is proposed in \cite{irl::irl2},
where the purpose is not to recover the real reward function, but to find a reward function that
leads to a policy equivalent to the observed one, measured by the amount of rewards collected by
following that policy.
Since a motion policy may be difficult to estimate from observations, a behavior-based method is
proposed in \cite{irl::maxentropy}, which models the distribution of behaviors as a maximum-entropy
model on the amount of reward collected from each behavior. This model has many applications and
extensions. For example, \cite{irl::sequence} considers a sequence of changing reward functions
instead of a single reward function. \cite{irl::gaussianirl} and \cite{irl::guidedirl} consider
complex reward functions, instead of linear ones, and use Gaussian processes and neural networks,
respectively, to model the reward function. \cite{irl::pomdp} considers complex environments,
instead of a well-observed Markov Decision Process, and combines partially observed Markov Decision
Process with reward learning. \cite{irl::localirl} models the behaviors based on the local
optimality of a behavior, instead of the summation of rewards. \cite{irl::deepirl} uses a
multi-layer neural network to represent nonlinear reward functions.
Another method is proposed in \cite{irl::bayirl}, which models the probability of a behavior as the
product of each state-action's probability, and learns the reward function via maximum a posteriori
estimation. However, due to the complex relation between the reward function and the behavior
distribution, the author uses computationally expensive Monte-Carlo methods to sample the
distribution. This work is extended by \cite{irl::subgradient}, which uses sub-gradient methods to
simplify the problem. Another extension is shown in \cite{irl::bayioc}, which tries to find a
reward function that matches the observed behavior. For motions involving multiple tasks and varying
reward functions, methods are developed in \cite{irl::multirl1} and \cite{irl::multirl2}, which try
to learn multiple reward functions.
Most of these methods need to solve a reinforcement learning problem in each step of reward
learning, thus practical large-scale application is computationally infeasible. Several methods are
applicable to large-scale applications. The method in \cite{irl::irl1} uses a linear approximation
of the value function, but it requires a set of manually defined basis functions. The methods in
\cite{irl::guidedirl,irl::relative} update the reward function parameter by minimizing the relative
entropy between the observed trajectories and a set of sampled trajectories based on the reward
function, but they require a set of manually segmented trajectories of human motion, where the
choice of trajectory length will affect the result. Besides, these methods solve large-scale
problems by approximating the Bellman Optimality Equation, thus the learned reward function and Q
function are only approximately optimal. We propose an approximation method that guarantees the
optimality of the learned functions as well as the scalability to large state space problems.
\section{Function Approximation Inverse Reinforcement Learning}
\label{irl::largeirl}
\subsection{Markov Decision Process}
A Markov Decision Process is described with the following variables:
\begin{itemize}
\item $S=\{s\}$, a set of states
\item $A=\{a\}$, a set of actions
\item $P_{ss'}^a$, a state transition function that defines the probability that state $s$ becomes
$s'$ after action $a$.
\item $R=\{r(s)\}$, a reward function that defines the immediate reward of state $s$.
\item $\gamma$, a discount factor that ensures the convergence of the MDP over an infinite
horizon.
\end{itemize}
A motion can be represented as a sequence of state-action pairs:
\[\zeta=\{(s_i,a_i)|i=0,\cdots,N_\zeta\},\]
where $N_\zeta$ denotes the length of the motion, varying in different observations. Given the
observed sequence, inverse reinforcement learning algorithms try to recover a reward function that
explains the motion.
One key problem is how to model the action in each state, or the policy, $\pi(s)\in A$, a mapping
from states to actions. This problem can be handled by reinforcement learning algorithms, by
introducing the value function $V(s)$ and the Q-function $Q(s,a)$, described by the Bellman Equation
\cite{irl::rl}:
\begin{align}
&V^\pi(s)=\sum_{s'|s,\pi(s)}P_{ss'}^{\pi(s)}[r(s')+\gamma*V^\pi(s')],\\
&Q^\pi(s,a)=\sum_{s'|s,a}P_{ss'}^a[r(s')+\gamma*V^\pi(s')],
\end{align}
where $V^\pi$ and $Q^\pi$ define the value function and the Q-function under a policy $\pi$.
For an optimal policy $\pi^*$, the value function and the Q-function should be maximized on every
state. This is described by the Bellman Optimality Equation \cite{irl::rl}:
\begin{align}
&V^*(s)=\max_{a\in A}\sum_{s'|s,a}P_{ss'}^a[r(s')+\gamma*V^*(s')],\\
&Q^*(s,a)=\sum_{s'|s,a}P_{ss'}^a[r(s')+\gamma*\max_{a'\in A}Q^*(s',a')].
\end{align}
In typical inverse reinforcement learning algorithms, the Bellman Optimality Equation needs to be
solved once for each parameter updating of the reward function, thus it is computationally
infeasible when the state space is large. While several existing approaches solve the problem at the
expense of the optimality, we propose an approximation method to avoid the problem.
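For concreteness, the Bellman Optimality Equation above can be solved by standard value iteration; a minimal sketch on a hypothetical two-state, two-action MDP:

```python
import numpy as np

# Value iteration for the Bellman Optimality Equation
#   V*(s) = max_a sum_{s'} P[a, s, s'] * (r(s') + gamma * V*(s'))
# on a small hypothetical MDP (2 states, 2 actions).
P = np.array([[[0.9, 0.1],      # action 0, from state 0
               [0.1, 0.9]],     # action 0, from state 1
              [[0.5, 0.5],      # action 1, from state 0
               [0.5, 0.5]]])    # action 1, from state 1
r = np.array([0.0, 1.0])        # reward collected on the arrival state
gamma = 0.9

V = np.zeros(2)
for _ in range(1000):
    Q = np.einsum('ast,t->as', P, r + gamma * V)  # Q*(s, a), stored as Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
```

Typical IRL methods must rerun such a fixed-point computation after every reward update, which is exactly the cost the proposed approximation avoids.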
\subsection{Function Approximation Framework}
Given the set of actions and the transition probability, a reward function leads to a unique optimal
value function. To learn the reward function from the observed motion, instead of estimating it
directly, we use a parameterized function, named the \textit{VR function}, to
represent the summation of the reward function and the discounted optimal value function:
\begin{equation}
f(s,\theta)=r(s)+\gamma*V^*(s),
\label{equation:approxrewardvalue}
\end{equation}
where $\theta$ denotes the parameter of the \textit{VR function}. The function value of a state is named
the \textit{VR value}.
Substituting Equation \eqref{equation:approxrewardvalue} into the Bellman Optimality Equation, the
optimal Q function is given as:
\begin{equation}
Q^*(s,a)=\sum_{s'|s,a}P_{ss'}^af(s',\theta),
\label{equation:approxQ}
\end{equation}
the optimal value function is given as:
\begin{align}
V^*(s)&=\max_{a\in A}Q^*(s,a)\nonumber\\
&=\max_{a\in A}\sum_{s'|s,a}P_{ss'}^af(s',\theta),
\label{equation:approxV}
\end{align}
and the reward function can be computed as:
\begin{align}
r(s)&=f(s,\theta)-\gamma*V^*(s)\nonumber\\
&=f(s,\theta)-\gamma*\max_{a\in A}\sum_{s'|s,a}P_{ss'}^af(s',\theta).
\label{equation:approxR}
\end{align}
This approximation method is related to value function approximation methods in reinforcement
learning, but the proposed method can compute the reward function without solving a set of linear
equations in stochastic environments.
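Concretely, given any parameter $\theta$, the three quantities follow from the VR values by direct array operations; a minimal NumPy sketch of Equations \eqref{equation:approxQ} to \eqref{equation:approxR} on hypothetical random data:

```python
import numpy as np

# Recovering Q*, V*, and r from VR values f(s) = r(s) + gamma * V*(s),
# on hypothetical random transitions and VR values.
rng = np.random.default_rng(3)
n_states, n_actions, gamma = 4, 2, 0.9

P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
f = rng.standard_normal(n_states)                                 # VR values

Q = np.einsum('ast,t->as', P, f)   # Q*(s, a) = sum_s' P^a_{ss'} f(s')
V = Q.max(axis=0)                  # V*(s)    = max_a Q*(s, a)
r = f - gamma * V                  # r(s)     = f(s) - gamma * V*(s)
```

By construction $r+\gamma V$ equals $f$, so the recovered triple satisfies the Bellman Optimality Equation for any $\theta$, as claimed above.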
Note that this formulation can be generalized to other extensions of Bellman Optimality Equation by
replacing the $max$ operator with other types of Bellman backup operators. For example,
$V^*(s)=\log\sum_{a\in A}\exp Q^*(s,a)$ is used in the maximum-entropy method \cite{irl::maxentropy};
$V^*(s)=\frac{1}{k}\log\sum_{a\in A}\exp(k*Q^*(s,a))$ is used in Bellman Gradient Iteration
\cite{irl::BGI}.
For any \textit{VR function} $f$ and any parameter $\theta$, the optimal Q function $Q^*(s,a)$,
optimal value function $V^*(s)$, and reward function $r(s)$ constructed with Equation
\eqref{equation:approxQ}, \eqref{equation:approxV}, and \eqref{equation:approxR} always meet the
Bellman Optimality Equation. Under this condition, we try to recover a parameterized function
$f(s,\theta)$ that best explains the observed motion $\zeta$ based on a predefined motion model.
Combined with different Bellman backup operators, this formulation can extend many existing methods
to high-dimensional spaces, like the value function-based motion model in
\cite{irl::motionvalue}, $\log p(a|s)=-v(s)-\log\sum_k p_{s,k}\exp(-v(k))$, the reward-based model in
\cite{irl::maxentropy}, $p(a|s)=\exp(Q(s,a)-V(s))$, and the Q function-based model in \cite{irl::bayirl}. The
main limitation is the assumption of a known transition model $P_{ss'}^a$, but it only requires a
partial model on the experienced states rather than a full environment model, and it can be learned
independently in an unsupervised way.
To demonstrate the usage of the framework, this work chooses $max$ as the Bellman backup operator
and a motion model $p(a|s)$ based on the optimal Q function $Q^*(s,a)$ \cite{irl::bayirl}:
\begin{equation}
P(a|s)=\frac{\exp{b*Q^*(s,a)}}{\sum_{\tilde{a}\in
A}\exp{b*Q^*(s,\tilde{a})}},
\label{equation:motionmodel}
\end{equation}
where $b$ is a parameter controlling the degree of confidence in the agent's ability to choose
actions based on Q values. In the remaining sections, we use $Q(s,a)$ to denote the optimal Q values
for simplified notations.
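Equation (\ref{equation:motionmodel}) is a Boltzmann (softmax) distribution over Q-values; a small sketch with hypothetical Q-values, shifting the logits for numerical stability:

```python
import numpy as np

# Motion model P(a|s) = exp(b*Q(s,a)) / sum_a' exp(b*Q(s,a')),
# computed stably on hypothetical Q-values stored as Q[a, s].
b = 2.0
Q = np.array([[1.0, 0.5],
              [0.2, 0.7]])
logits = b * Q
logits -= logits.max(axis=0, keepdims=True)   # shift to avoid overflow
pi = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
```

Larger $b$ concentrates the policy on the greedy action, while $b\to 0$ gives uniformly random actions.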
\subsection{Function Approximation with Neural Network}
Assuming the approximation function is a neural network, the parameter $\theta=\{w,b\}$ (weights and
biases) in Equation \eqref{equation:approxrewardvalue} can be estimated from the observed sequence
of state-action pairs $\zeta$ via maximum-likelihood estimation:
\begin{equation}
\label{equation:maxtheta}
\theta=\argmax_{\theta}\log{P(\zeta|\theta)},
\end{equation}
where the log-likelihood of $P(\zeta|\theta)$ is given by:
\begin{align}
L_{nn}(\theta)&=\log{P(\zeta|\theta)}\nonumber\\
&=\log{\prod_{(s,a)\in \zeta} P(a|\theta;s)}\nonumber\\
&=\log{\prod_{(s,a)\in \zeta} \frac{\exp{b*Q^*(s,a)}}{\sum_{\hat{a}\in A}\exp{b*Q^*(s,\hat{a})}}
}\nonumber\\
&=\sum_{(s,a)\in\zeta}(b*Q(s,a)-\log{\sum_{\hat{a}\in
A}\exp{b*Q(s,\hat{a}))}},
\label{equation:loglikelihood}
\end{align}
and the gradient of the log-likelihood is given by:
\begin{align}
\nabla_\theta L_{nn}(\theta)&=\sum_{(s,a)\in\zeta}(b*\nabla_\theta Q(s,a)\nonumber\\
&-b*\sum_{\hat{a}\in
A}P((s,\hat{a})|r(\theta))\nabla_\theta Q(s,\hat{a})).
\end{align}
With a differentiable approximation function,
\[\nabla_\theta Q(s,a)=\sum_{s'|s,a}P_{ss'}^a\nabla_\theta f(s',\theta),\]
and
\begin{align}
\nabla_\theta L_{nn}(\theta)&=\sum_{(s,a)\in\zeta}(b*\sum_{s'|s,a}P_{ss'}^a\nabla_\theta f(s',\theta)\nonumber\\
&-b*\sum_{\hat{a}\in
A}P((s,\hat{a})|r(\theta))\sum_{s'|s,\hat{a}}P_{ss'}^{\hat{a}}\nabla_\theta f(s',\theta)),
\label{equation:loglikelihoodgradient}
\end{align}
where $\nabla_\theta f(s',\theta)$ denotes the gradient of the neural network output with respect to neural
network parameter $\theta=\{w,b\}$.
If the \textit{VR function} $f(s,\theta)$ is linear, the objective function in Equation
\eqref{equation:loglikelihood} is concave, and a global optimum exists. However, a multi-layer
neural network works better to handle the non-linearity in approximation and the high-dimensional
state space data.
A gradient ascent method can be used to learn the parameter $\theta$:
\begin{equation}
\label{equation:gradientascent}
\theta=\theta+\alpha*\nabla_\theta L_{nn}(\theta),
\end{equation}
where $\alpha$ is the learning rate.
When the method converges, we can compute the optimal Q function, the optimal value function, and the
reward function based on Equation \eqref{equation:approxrewardvalue}, \eqref{equation:approxQ},
\eqref{equation:approxV}, and \eqref{equation:approxR}. The algorithm under a neural network-based
approximation function is shown in Algorithm \ref{alg:nnapprox}.
This method does not involve solving the MDP problem for each updated parameter $\theta$, and
large-scale state spaces can be easily handled by an approximation function based on a deep neural
network.
\begin{algorithm}[tb]
\caption{Function Approximation IRL with Neural Network}
\label{alg:nnapprox}
\begin{algorithmic}[1]
\STATE Data: {$\zeta,S,A,P,\gamma,b,\alpha$}
\STATE Result: {optimal value $V[S]$, optimal action value $Q[S,A]$, reward value $R[S]$}
\STATE create variable $\theta=\{W,b\}$ for a neural network
\STATE build $f[S,\theta]$ as the output of the neural network
\STATE build $Q[S,A]$, $V[S]$, and $R[S]$ based on Equation \eqref{equation:approxrewardvalue},
\eqref{equation:approxQ}, \eqref{equation:approxV}, and \eqref{equation:approxR}.
\STATE build loglikelihood $L_{nn}[\theta]$ based on $\zeta$ and $Q[S,A]$
\STATE compute gradient $\nabla_\theta L_{nn}[\theta]$
\STATE initialize $\theta$
\WHILE{not converging}
\STATE $\theta=\theta+\alpha*\nabla_\theta L_{nn}[\theta]$
\ENDWHILE
\STATE evaluate {optimal value $V[S]$, optimal action value $Q[S,A]$, reward value $R[S]$}
\STATE return $R[S]$
\end{algorithmic}
\end{algorithm}
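Algorithm \ref{alg:nnapprox} can be sketched end-to-end with a linear VR function $f(s,\theta)=\phi(s)^T\theta$ in place of the neural network; the gradient follows Equation (\ref{equation:loglikelihoodgradient}), and the transition model and observations below are hypothetical:

```python
import numpy as np

# Minimal sketch of the function approximation IRL loop with a linear
# VR function f(s, theta) = phi(s) . theta standing in for the neural
# network, trained by gradient ascent on the observation log-likelihood.
rng = np.random.default_rng(4)
n_states, n_actions, gamma, b, alpha = 5, 2, 0.9, 1.0, 0.1

P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))  # P[a, s, s']
phi = np.eye(n_states)                     # one indicator feature per state
zeta = [(0, 1), (1, 0), (2, 1), (3, 0)]    # observed (state, action) pairs

theta = np.zeros(n_states)
for _ in range(200):
    f = phi @ theta                                  # VR values f(s, theta)
    Q = np.einsum('ast,t->as', P, f)                 # Q[a, s]
    dQ = np.einsum('ast,tk->ask', P, phi)            # dQ/dtheta, shape (a, s, k)
    logits = b * Q - (b * Q).max(axis=0, keepdims=True)
    pi = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
    grad = np.zeros(n_states)
    for s, a in zeta:                                # Equation (13) gradient
        grad += b * (dQ[a, s] - pi[:, s] @ dQ[:, s])
    theta += alpha * grad                            # gradient ascent step

# Recover the reward from the learned VR values.
f = phi @ theta
Q = np.einsum('ast,t->as', P, f)
V = Q.max(axis=0)
reward = f - gamma * V
```

No reinforcement learning problem is solved inside the loop; each iteration is a single batched array computation, which is what makes the method scale.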
\subsection{Function Approximation with Gaussian Process}
Assuming the \textit{VR function} $f$ is a Gaussian process (GP) parameterized by $\theta$, the
posterior distribution is similar to the distribution in \cite{irl::gaussianirl}:
\begin{align}
P(\theta,f_u|S_u,\zeta)
&\propto P(\zeta,f_u,\theta|S_u)\\\nonumber
&= \int_{f_S}\underbrace{P(\zeta|f_S)}_\text{IRL}\underbrace{P(f_S|f_u,\theta,S_u)}_\text{GP
posterior}df_S\underbrace{P(f_u,\theta|S_u)}_\text{GP prior},
\end{align}
where $S_u$ denotes a set of supporting states for sparse Gaussian approximation
\cite{irl::sparsegaussian}, $f_u$ denotes the \textit{VR values} of $S_u$, $f_S$ denotes the
\textit{VR values} of the whole set of states, and $\theta$ denotes the parameter of the Gaussian
process.
Without a closed-form integration, we use the mean function of the Gaussian posterior as the
\textit{VR value}:
\begin{align}
P(\zeta,f_u,\theta|S_u)=P(\zeta|\bar{f_S})P(f_u,\theta|S_u),
\label{equation:gpfairl}
\end{align}
where $\bar{f_S}$ denotes the mean function.
Given a kernel function $k(x_i,x_j,\theta)$, the log-likelihood function is given as:
\begin{align}
&L_{gp}(\theta,f_u)\label{eq:gausslikelihood}\\&=\log P(\zeta|\bar{f_S})+\log P(f_u,\theta|S_u)\\
&=\sum_{(s,a)\in\zeta}\big(b*\sum_{s'|s,a}P_{ss'}^a\bar{f(s')}
-\log{\sum_{\hat{a}\in A}\exp{(b*\sum_{s'|s,\hat{a}}P_{ss'}^{\hat{a}}\bar{f(s')})}}\big)\label{eq:irlterm}\\
&-\frac{f_u^TK_{S_u,S_u}^{-1}f_u}{2}-\frac{\log|K_{S_u,S_u}|}{2}-\frac{n\log
2\pi}{2}\label{eq:gaussterm}
\\ &+\log P(\theta),\label{eq:priorterm}
\end{align}
where $K$ denotes the covariance matrix computed with the kernel function,
$\bar{f(s)}=K_{s,S_u}^TK_{S_u,S_u}^{-1}f_u$ denotes the \textit{VR value} with the mean function
$\bar{f_S}$, expression \eqref{eq:irlterm} is the IRL likelihood, expression \eqref{eq:gaussterm} is
the Gaussian prior likelihood, and expression \eqref{eq:priorterm} is the kernel parameter prior.
The parameters $\theta,f_u$ can be similarly learned with gradient methods. This approach has similar
properties to the neural network-based one, and the full algorithm is shown in Algorithm
\ref{alg:gpapprox}.
\begin{algorithm}[tb]
\caption{Function Approximation IRL with Gaussian Process}
\label{alg:gpapprox}
\begin{algorithmic}[1]
\STATE Data: {$\zeta,S,A,P,\gamma,b,\alpha$}
\STATE Result: {optimal value $V[S]$, optimal action value $Q[S,A]$, reward value $R[S]$}
\STATE create variable $\theta$ for a kernel function and $f_u$ for supporting points
\STATE compute $\bar{f(s,\theta,f_u)}=K_{s,S_u}^TK_{S_u,S_u}^{-1}f_u$
\STATE build $Q[S,A]$, $V[S]$, and $R[S]$ based on Equation \eqref{equation:approxrewardvalue},
\eqref{equation:approxQ}, \eqref{equation:approxV}, and \eqref{equation:approxR}.
\STATE build loglikelihood $L_{gp}[\theta,f_u]$ based on Equation \eqref{eq:gausslikelihood}.
\STATE compute gradient $\nabla_{\theta,f_u} L_{gp}[\theta,f_u]$
\STATE initialize $\theta,f_u$
\WHILE{not converging}
\STATE $[\theta,f_u]=[\theta,f_u]+\alpha*\nabla_{\theta,f_u} L_{gp}[\theta,f_u]$
\ENDWHILE
\STATE evaluate {optimal value $V[S]$, optimal action value $Q[S,A]$, reward value $R[S]$}
\STATE return $R[S]$
\end{algorithmic}
\end{algorithm}
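The mean function used in Algorithm \ref{alg:gpapprox} is a standard sparse-GP computation; a minimal sketch with an RBF kernel standing in for the ARD kernel and hypothetical one-dimensional states:

```python
import numpy as np

# Sparse GP mean of the text: f_bar(s) = K_{s,Su}^T K_{Su,Su}^{-1} f_u,
# with an RBF kernel (a stand-in for the ARD kernel) and hypothetical
# 1-D supporting states S_u and VR values f_u.
def rbf(a, c, ell=1.0):
    return np.exp(-0.5 * ((a[:, None] - c[None, :]) / ell) ** 2)

S_u = np.array([0.0, 1.0, 2.0])     # supporting states
f_u = np.array([0.5, -0.2, 0.3])    # VR values at the supporting states
S = np.linspace(0.0, 2.0, 9)        # the full (here: tiny) state set

K_uu = rbf(S_u, S_u) + 1e-10 * np.eye(len(S_u))  # jitter for stability
K_su = rbf(S, S_u)
f_bar = K_su @ np.linalg.solve(K_uu, f_u)        # mean VR value per state
```

The same $\bar{f}$ then feeds Equations \eqref{equation:approxQ} to \eqref{equation:approxR} exactly as in the neural network case; note the mean interpolates $f_u$ at the supporting states.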
\section{Experiments}
\label{irl::experiments}
We use a simulated environment to compare the proposed methods with existing methods and demonstrate
the accuracy and scalability of the proposed solution, then we show how the function approximation
framework can extend existing methods to large state spaces. In the end, we apply the proposed
method to a clinical application.
\subsection{Simulated Environment}
\begin{figure}
\centering
\includegraphics[width=0.2\textwidth]{objectworld.eps}
\caption{An example of a reward table for one objectworld MDP on a $10\times 10$ grid: it depends
on randomly placed objects.}
\label{fig:objectworld}
\end{figure}
The simulated environment is an objectworld MDP \cite{irl::gaussianirl}. It is an $N*N$ grid, but
with a set of objects randomly placed on the grid. Each object has an inner color and an outer
color, selected from a set of possible colors, $C$. The reward of a state is positive if it is
within 3 cells of outer color $C1$ and 2 cells of outer color $C2$, negative if it is within 3 cells
of outer color $C1$, and zero otherwise. Other colors are irrelevant to the ground truth reward. One
example of the reward values is shown in Figure \ref{fig:objectworld}. In this work, we place two
random objects on a $5*5$ grid, and the feature of a state describes its discrete distance to each
inner color and outer color in $C$.
We evaluate the proposed method in three aspects. First, we compare its accuracy in reward learning
with other methods. We generate different sets of trajectory samples, and implement the
maximum-entropy method in \cite{irl::maxentropy}, deep inverse reinforcement learning method in
\cite{irl::deepirl}, and Bellman Gradient Iteration approaches \cite{irl::BGI}. The \textit{VR
function} based on a neural network has five-layers, where the number of nodes in the first four
layers equals the feature dimension, and the last layer outputs a single value as the summation
of the reward and the optimal value. The \textit{VR function} based on a Gaussian process uses an
automatic relevance detection (ARD) kernel \cite{irl::gaussianml} and an uninformed prior, and the
supporting points are randomly picked. The accuracy is defined as the correlation coefficient
between the ground truth reward value and the learned reward value.
The result is shown in Figure \ref{fig:objectaccuracy}. The accuracy is not monotonically increasing
as the number of samples grows. The reason is that a function approximator based on a large neural
network will overfit the observed trajectory, which may not reflect the true reward function
perfectly. During reward learning, we observe that as the loglikelihood increases, the accuracy of
the updated reward function reaches the maximum after a certain number of iterations, and then
decreases to a stable value. A possible solution to this problem is early-stopping during reward
learning. For a function approximator with Gaussian process, the supporting set is important,
although a universal solution is unavailable.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{accuracy.eps}
\caption{Accuracy comparison with different numbers of observations: "maxent" denotes the maximum
entropy method; "deep maxent" denotes the deep inverse reinforcement learning approach; "pnorm
irl" and "gsoft irl" denote the Bellman Gradient Iteration method; "fairl with nn" denotes
function approximation inverse reinforcement learning with a neural network; "fairl with gp"
denotes function approximation inverse reinforcement learning with a Gaussian process.}
\label{fig:objectaccuracy}
\end{figure}
Second, we evaluate the scalability of the proposed method. Since all these methods involve
gradient-based optimization, we choose different numbers of states, ranging from 25 to 9025, and
measure the time for one iteration of gradient ascent under each state size with each method.
"Maxent" and "BGI" are implemented with a mix of Python and C; "DeepMaxent" is implemented with
Theano, and "FAIRL" is implemented with Tensorflow. All of them use C in the backend and Python in
the frontend.
\begin{table}
\caption{The computation time (seconds) of one iteration of the gradient method under different
numbers of states with different methods: "Maxent" denotes the maximum entropy method, "DeepMaxent"
denotes the deep inverse reinforcement learning approach, "BGI" denotes the Bellman Gradient
Iteration method, and "FAIRLNN" and "FAIRLGP" denote function approximation inverse reinforcement
learning with a neural network and a Gaussian process, respectively.}
\label{tab:time}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
States (\#)&Maxent&DeepMaxent&BGI&FAIRLNN&FAIRLGP\\
\hline
25&0.017&0.012&0.0313&0.197&0.331\\
\hline
225& 1.831&0.178& 2.031& 0.397&0.721\\
\hline
625& 24.151&0.95& 20.963& 0.724&1.317\\
\hline
1225& 133.839&3.158& 102.460& 0.921&2.163\\
\hline
2025& 474.907&8.119& 352.007& 0.776&2.332\\
\hline
3025& 1319.365&20.253& 1061.147& 0.762&3.723\\
\hline
4225& 3030.723&59.279& 2630.309& 2.468&4.459\\
\hline
5625& 6197.718&101.434& 5228.343& 2.831&6.495\\
\hline
7225& 12234.417&229.752& 10147.628& 2.217&9.316\\
\hline
9025& 20941.9&10466.784& 16345.874& 3.347&12.372\\
\hline
\end{tabular}
\end{table}
The result is shown in Table \ref{tab:time}. Even though the computation time may be affected by
the different implementations, it still shows that the proposed method scales significantly better
than the alternatives; in practice, it can be further improved by parallelizing the computation of
the reward function, the value function, and the Q function from the function approximator.
Besides, the Gaussian process-based method requires more time than the neural network because of
its matrix inversion operations.
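A per-iteration timing harness of the kind behind Table \ref{tab:time} can be sketched as follows (illustrative only: `dummy_step`, a dense $O(n^3)$ matrix product standing in for one gradient iteration, is our assumption and not any of the compared implementations):

```python
import time
import numpy as np

def time_one_iteration(step_fn, *args, repeats=3):
    # best-of-`repeats` wall-clock time of a single iteration
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        step_fn(*args)
        best = min(best, time.perf_counter() - t0)
    return best

def dummy_step(n_states):
    # stand-in workload whose cost grows with the number of states
    P = np.random.rand(n_states, n_states)
    P @ P

for n in (25, 225):
    print(n, time_one_iteration(dummy_step, n))
```

Taking the best of several repeats reduces the influence of transient system load on the reported times.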
Third, we demonstrate how the proposed framework extends existing methods to large-scale state
spaces. We increase the objectworld to an $80\times80$ grid with 10 objects in 5 colors, and
generate large sample sets with sizes ranging from 16000 to 128000 at intervals of 16000. We then
report the accuracy and computation time of inverse reinforcement learning with different
combinations of Bellman backup operators and motion models. The combinations include LogSumExp as
the Bellman backup operator with a motion model based on the reward value \cite{irl::maxentropy},
and three Bellman backup operators ($max$, $pnorm$, $gsoft$) with a motion model based on the Q
values. We do not use even larger state spaces because generating trajectories from the
ground-truth reward function requires a computation- and memory-intensive reinforcement learning
step. A three-layer neural network is adopted for function approximation, implemented with
Tensorflow on an NVIDIA GTX 1080. Training uses a batch size of 400, a learning rate of 0.001, and
20 epochs. The accuracy is shown in Figure \ref{fig:extendaccuracy}, and the computation time for
one training epoch in Figure \ref{fig:extendtime}.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{largeaccuracy.eps}
\caption{Reward learning accuracy of existing methods in large state spaces: "LogSumExp", "Max",
"PNorm", and "GSoft" are the Bellman backup operators; "Reward" and "QValues" are the types of
motion models; different combinations of extended methods are plotted. The accuracy is measured as
the correlation between the ground truth and the recovered reward.}
\label{fig:extendaccuracy}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{largetime.eps}
\caption{Computation time for one training epoch of existing methods in large state spaces:
"LogSumExp", "Max", "PNorm", and "GSoft" are the Bellman backup operators; "Reward" and "QValues"
are the types of motion models; different combinations of extended methods are plotted.}
\label{fig:extendtime}
\end{figure}
The results show that the proposed method achieves accuracy and efficiency simultaneously. In
practice, a multi-start strategy may be adopted to avoid local optima.
\subsection{Clinical Experiment}
In the clinic, a patient with spinal cord injuries sits on a box equipped with a force sensor that
captures the patient's center of pressure (COP) during movement. Each experiment is composed of two
sessions, one without transcutaneous stimulation and one with stimulation. The electrode
configuration and stimulation signal pattern are manually selected by the clinician.
In each session, the physician gives eight (or four) directions for the patient to follow,
including left, forward left, forward, forward right, right, backward right, backward, and backward
left, and the patient moves continuously to follow the instruction. The physician observes the
patient's behavior and decides when to change the instruction.
Six experiments are done, each with two sessions. The COP trajectories in Figure \ref{fig:patient1}
correspond to the sessions with four directional instructions; Figures \ref{fig:patient2},
\ref{fig:patient3}, \ref{fig:patient4}, \ref{fig:patient5}, and \ref{fig:patient6} show the
sessions with eight directional instructions.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion5.eps}
\caption{Patient 1 under four directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations.}
\label{fig:patient1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion0.eps}
\caption{Patient 2 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations.}
\label{fig:patient2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion1.eps}
\caption{Patient 3 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations.}
\label{fig:patient3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion2.eps}
\caption{Patient 4 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations.}
\label{fig:patient4}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion3.eps}
\caption{Patient 5 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations.}
\label{fig:patient5}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{motion4.eps}
\caption{Patient 6 under eight directional instructions: "unstimulated motion" means that the
patient moves without transcutaneous stimulations, while "stimulated motion" represents the motion
under stimulations.}
\label{fig:patient6}
\end{figure}
The COP sensory data from each session are discretized on a $100\times 100$ grid, which is fine
enough to capture the patient's small movements. The problem is formulated as an MDP, where each
state captures the patient's discretized location and velocity, and the actions change the
velocity to one of eight possible directions. The velocity is represented as a two-dimensional
vector taking one of eight possible directions. Thus the problem has 80000 states and 8 actions,
and the transition model assumes that each action leads to one state with probability one.
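One concrete encoding of this MDP is sketched below (the helper names and the boundary clipping are our illustrative assumptions, not the authors' implementation); a state is a (row, column, velocity-direction) triple, giving $100\times100\times8=80000$ states:

```python
GRID, NDIR = 100, 8
# eight unit moves on the grid (this ordering is our arbitrary choice)
DIRS = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]

def encode(row, col, d):
    # bijective index of (location, velocity direction) -> 0..79999
    return (row * GRID + col) * NDIR + d

def step(row, col, d, action):
    # deterministic transition: the action sets the new velocity direction
    # and moves one cell along it, clipped at the grid boundary
    dr, dc = DIRS[action]
    nr = min(max(row + dr, 0), GRID - 1)
    nc = min(max(col + dc, 0), GRID - 1)
    return nr, nc, action

print(encode(99, 99, 7))   # 79999, the last state index
print(step(0, 0, 0, 4))    # (0, 0, 4): clipped at the corner
```

Each action leads to exactly one successor state, matching the probability-one transition model stated above.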
\begin{table*}
\centering
\caption{Evaluation of the learned rewards: "forward" etc. denote the instructed direction; "1u"
denote the patient id "1", with "u" denoting unstimulated session and "s" denoting stimulated
sessions. The table shows the correlation coefficient between the ideal reward and the recovered
reward.}
\label{tab:feature1}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|}
\hline
&forward&backward&left&right&top left&top right&bottom left&bottom right&origin\\\hline
1u&-0.352172&-0.981877&-0.511908&-0.399777&&&&&-0.0365778\\\hline
1s&-0.36437&-0.999993&-0.14757&-0.321745&&&&&0.154132\\\hline
2u&-0.459214&-0.154868&0.134229&0.181629&0.123853&0.677538&-0.398259&0.264739&-0.206476\\\hline
2s&-0.115516&-0.127179&0.569024&0.164638&0.360013&0.341521&0.0817681&0.134049&-0.00986036\\\hline
3u&0.533031&0.0364088&0.128325&-0.729293&0.397182&0.155565&-0.48818&-0.293617&-0.176923\\\hline
3s&-0.340902&-0.091139&0.344993&0.0557266&0.162783&0.740827&-0.0897398&-0.00674047&-0.414462\\\hline
4u&0.099563&-0.0965766&0.145509&-0.912844&0.250434&-0.299531&0.577489&0.134106&-0.151334\\\hline
4s&-0.258762&-0.019275&-0.263354&0.549305&0.0910128&0.755755&-0.225137&0.289126&-0.216737\\\hline
5u&0.287442&0.0859648&-0.368503&0.504589&-0.297166&0.401829&0.0583192&-0.23662&-0.0762139\\\hline
5s&-0.350374&-0.0969275&0.538291&-0.617767&-0.00442265&0.0923481&0.115864&-0.576655&-0.0108339\\\hline
6u&0.205348&0.302459&0.550447&0.0549231&-0.348898&0.420478&0.378317&0.56191&0.145699\\\hline
6s&0.105335&-0.155296&0.0193898&-0.283895&-0.0577008&0.220243&-0.31611&-0.296682&-0.0753326\\\hline
\end{tabular}
\end{table*}
To learn the reward function from the observed trajectories based on the formulated MDP, we use the
coordinates and velocity direction of each grid cell as the features, and learn the reward function
parameters from each data set. The function approximator is a neural network with three hidden
layers of $[100,50,25]$ nodes.
We only test the proposed method with a neural-network function approximator, because the other
methods would take a prohibitive amount of time to learn the reward function, and the GP approach
depends on the set of supporting points. Assuming it takes only 100 iterations to converge, the
proposed method takes about one minute, while the others would run for two to four weeks; in
practice, even more iterations may be needed for convergence.
To compare the reward functions learned with and without stimulation, we use the same initial
parameters for both learning processes, and run each for 10000 iterations with a learning rate of
0.00001.
Given the learned reward function, we score the patient's recovery with the correlation coefficient
between the recovered rewards and the ideal rewards, under the clinician's instructions, over the
states visited by the patient. The ideal reward of a state is the cosine similarity between the
state's velocity vector and the instructed direction.
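The ideal reward can be written down directly from this definition (the function name is ours):

```python
import numpy as np

def ideal_reward(velocity, instructed):
    # cosine similarity between the state's 2-D velocity vector and the
    # instructed direction: 1 for perfect compliance, -1 for the opposite
    v = np.asarray(velocity, dtype=float)
    u = np.asarray(instructed, dtype=float)
    return float(v @ u / (np.linalg.norm(v) * np.linalg.norm(u)))

print(ideal_reward((1, 0), (1, 0)))   # 1.0: moving exactly as instructed
print(ideal_reward((1, 0), (0, 1)))   # 0.0: perpendicular motion
```

Because the cosine similarity ignores magnitude, the score depends only on how well the movement direction matches the instruction.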
The result is shown in Table \ref{tab:feature1}. It shows that the patient's ability to follow the
instructions is affected by the stimulation, but whether it improves or not varies among
directions. The clinical interpretation is left to the physicians.
\section{Conclusions}
\label{irl::conclusions}
This work deals with the problem of inverse reinforcement learning in large state spaces, and solves
it with a function approximation method that avoids solving reinforcement learning problems during
reward learning. The simulated experiments show that the proposed method is more accurate and
scalable than existing methods, and can extend existing methods to high-dimensional spaces. A
clinical application of the proposed method is also presented.
In future work, we will remove the requirement of an a-priori known transition function by
incorporating an environment model learning process into the function approximation framework.
\section{Introduction}
Two structured eigenvalue problems appear frequently in
(relativistic) electronic structure
methods; they can be written in a unified way as
\begin{eqnarray}
\mathbf{M}_s\mathbf{z}&=&\mathbf{z}\,\omega,\quad s=\pm1,\nonumber\\
\mathbf{M}_s&=&\left[\begin{array}{cc}
\mathbf{A} & \mathbf{B} \\
-\mathbf{B}^* & s\mathbf{A}^* \\
\end{array}\right],\quad
\mathbf{z}=\left[\begin{array}{cc}
\mathbf{x} \\
\mathbf{y} \\
\end{array}\right],
\nonumber\\
\mathbf{A}^\dagger&=&\mathbf{A},\quad \mathbf{B}^T = -s \mathbf{B},\label{Mdef}
\end{eqnarray}
where $\mathbf{M}_s\in\mathbb{C}^{2n\times2n}$, $\mathbf{z}\in \mathbb{C}^{2n}$,
$\mathbf{A}\in\mathbb{C}^{n\times n}$ is Hermitian, and
$\mathbf{B}\in\mathbb{C}^{n\times n}$ is antisymmetric for $s=+1$ or
symmetric for $s=-1$.
The $s=+1$ case appears in matrix representations
of Hermitian operators, such as the Fock operator
of closed-shell systems, in a Kramers-paired basis\cite{dyall2007introduction}.
Another example is the equation-of-motion method\cite{rowe1968equations}
for ionization and electron attachment
from a closed-shell reference, where the excitation operator
$O_n^\dagger$ is expanded in a paired basis $\{a_p^\dagger,a_p\}$,
viz., $O_n^\dagger=\sum_{p}(a_p^\dagger X_p-a_p Y_p)$.
The Hermitian matrix $\mathbf{M}_+$ is usually referred to as a quaternion matrix\cite{rosch1983time,bunse1989quaternion,saue1999quaternion},
since it can be rewritten as
since it can be rewritten as
\begin{eqnarray}
\mathbf{M}_+=\mathbf{I}_2\otimes\mathbf{A}_{\mathrm{R}}+i\sigma_z\otimes \mathbf{A}_{\mathrm{I}}
+i\sigma_y\otimes \mathbf{B}_{\mathrm{R}}+i\sigma_x\otimes \mathbf{B}_{\mathrm{I}},
\end{eqnarray}
where $\{\mathbf{I}_2,i\sigma_z,i\sigma_y,i\sigma_x\}$ is isomorphic to
the set of quaternion units $\{1,\breve{i},\breve{j},\breve{k}\}$, and
$\mathbf{A}_{\mathrm{R}}$ (or $\mathbf{A}_{\mathrm{I}}$) represents the real (or imaginary) part of $\mathbf{A}$,
and similarly for $\mathbf{B}$.
The corresponding eigenvalue problem is well-studied,
and several efficient algorithms have been presented\cite{rosch1983time,dongarra1984eigenvalue,bunse1989quaternion,saue1999quaternion,shiozaki2017efficient},
based on the generalization of established algorithms for complex matrices to quaternion
algebra or the use of unitary symplectic transformations.
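As a quick numerical illustration of this structure (our own sketch, not one of the cited algorithms), one can assemble a random $\mathbf{M}_+$ and confirm both its Hermiticity and the Kramers (double) degeneracy of its spectrum:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
# Hermitian A and antisymmetric B (B^T = -B), as required for s = +1
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B - B.T) / 2

M = np.block([[A, B], [-B.conj(), A.conj()]])
print(np.allclose(M, M.conj().T))      # True: M_+ is Hermitian
w = np.sort(np.linalg.eigvalsh(M))
print(np.allclose(w[0::2], w[1::2]))   # True: every eigenvalue is doubly degenerate
```

The double degeneracy observed here is exactly the time-reversal (Kramers) pairing of the $s=+1$ spectrum discussed below.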
The $s=-1$ case appears in the linear response problem\cite{olsen1985linear,olsen1988solution,sasagane1993higher,christiansen1998response,
casida1995response,gao2004time,bast2009relativistic,egidi2016direct,liu2018relativistic,komorovsky2019four}
for excitation energies of Hartree-Fock (HF), density functional theory (DFT), multi-configurational self-consistent field (MCSCF), or the Bethe-Salpeter equation (BSE)\cite{salpeter1951relativistic}.
Compared with the $s=1$ case, the linear response eigenvalue problem is more challenging since
$\mathbf{M}_-$ is non-Hermitian. In practice, we are mostly interested in the real eigenvalues,
which correspond to physical excitation energies. Unfortunately, the
condition for the existence of all \emph{real} eigenvalues is only partially understood.
In the nonrelativistic\cite{stratmann1998efficient} and some relativistic cases\cite{liu2018relativistic},
where $\mathbf{M}_-$ becomes real, the eigenvalue problem \eqref{Mdef} is equivalent to
the reduced problem
\begin{eqnarray}
(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})(\mathbf{x}+\mathbf{y})=(\mathbf{x}+\mathbf{y})\omega^2,\label{AmBApB}
\end{eqnarray}
or
\begin{eqnarray}
(\mathbf{A}+\mathbf{B})(\mathbf{A}-\mathbf{B})(\mathbf{x}-\mathbf{y})=(\mathbf{x}-\mathbf{y})\omega^2.\label{ApBAmB}
\end{eqnarray}
Thus, the eigenvalues of the original problem are all real if and only if
the eigenvalues of the reduced matrix $(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})$ (or its transpose $(\mathbf{A}+\mathbf{B})(\mathbf{A}-\mathbf{B})$) are all nonnegative,
i.e., $\omega^2\ge 0$. Besides, the use of Eq. \eqref{AmBApB} or \eqref{ApBAmB} also
reduces the cost for diagonalization compared with that for Eq. \eqref{Mdef}.
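The equivalence with the reduced problem in the real case can be checked on a small fixed example (the matrices below are our own illustrative choice):

```python
import numpy as np

# real s = -1 data: A symmetric (Hermitian) and B symmetric
A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

# real M_- of the unified eigenvalue problem
M = np.block([[A, B], [-B, -A]])
w2 = np.sort(np.linalg.eigvals(M).real ** 2)   # squares of the +/- paired spectrum
red = np.sort(np.linalg.eigvals((A - B) @ (A + B)).real)
print(np.allclose(w2[::2], red))               # True: the reduced problem yields omega^2
```

Each eigenvalue of the $n\times n$ reduced matrix appears twice among the $2n$ squared eigenvalues of $\mathbf{M}_-$, reflecting the $\{\omega,-\omega\}$ pairing.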
If $\mathbf{M}_-$ is complex, as in the general relativistic case,
such a reduction is not possible. Assuming the positive definiteness of the so-called
electronic Hessian,
\begin{eqnarray}
\left[\begin{array}{cc}
\mathbf{A} & \mathbf{B} \\
\mathbf{B}^* & \mathbf{A}^* \\
\end{array}\right]\succ0,
\end{eqnarray}
one can show that all eigenvalues of $\mathbf{M}_-$ are real\cite{shao2015properties,shao2016structure,benner2018some}.
However, this condition is only sufficient, not necessary.
In the real case, it implies $\mathbf{A}-\mathbf{B}\succ0$ and $\mathbf{A}+\mathbf{B}\succ0$.
Another sufficient condition is $\mathbf{B}=0$, in which case
$\mathbf{M}_-$ is block-diagonal and all its eigenvalues are real,
even though $\mathbf{A}$ may have negative eigenvalues.
The situation where the electronic Hessian is not positive definite but
$\mathbf{M}_-$ still has all real eigenvalues is also practically meaningful.
It occurs when using an excited state as the reference,
a trick commonly used in time-dependent DFT to treat
excited states of systems with a spatially degenerate ground state\cite{seth2005calculation}.
It also occurs when scanning potential energy curves,
where a crossing between the ground state and an excited
state of a different symmetry is encountered\cite{li2011spin}.
In such cases, the negative eigenvalues correspond to de-excitations to a
lower-energy state.
Due to the similarity of the mathematical structures of the $s=1$ and
$s=-1$ cases, in this work we examine the two problems from a unified perspective.
We first identify the Lie group structures for their eigenvectors (see Sec. \ref{sec:liegroup}).
Then, by using the same reduction algorithm for the $s=1$ case (see Sec. \ref{sec:reduction}),
we provide a condition as a generalization of the real case based
on the reduced problems (Eqs. \eqref{AmBApB} and \eqref{ApBAmB})
to characterize the different scenarios, where the eigenvalues of the
complex linear response problem are real, purely imaginary, or complex (see Sec. \ref{sec:condition}).
Some typical examples are provided in Sec. \ref{sec:examples} to illustrate the complexity of
the eigenvalue problem \eqref{Mdef} in the $s=-1$ case.
\section{Lie group structures of the eigensystems}\label{sec:liegroup}
The matrices in Eq. \eqref{Mdef} with $s=1$ and $s=-1$ are closely
related with the skew-Hamiltonian matrix $\mathbf{W}$ and Hamiltonian matrix $\mathbf{H}$
in real field\cite{kressner2005}, respectively,
\begin{eqnarray}
\mathbf{W}&=&\left[\begin{array}{cc}
\mathbf{W}_{11} & \mathbf{W}_{12} \\
\mathbf{W}_{21} & \mathbf{W}_{11}^T \\
\end{array}\right],\; \mathbf{W}_{12}=-\mathbf{W}_{12}^T,\; \mathbf{W}_{21}=-\mathbf{W}_{21}^T,\label{SkewH}\\
\mathbf{H}&=&\left[\begin{array}{cc}
\mathbf{H}_{11} & \mathbf{H}_{12} \\
\mathbf{H}_{21} & -\mathbf{H}_{11}^T \\
\end{array}\right],\; \mathbf{H}_{12}=\mathbf{H}_{12}^T,\; \mathbf{H}_{21}=\mathbf{H}_{21}^T.\label{Ham}
\end{eqnarray}
The identification of the Hamiltonian structure for the linear
response problem was presented in Ref. \onlinecite{list2014identifying}
for the real matrix case. It can be shown that the eigenvalues of $\mathbf{W}$ appear
in pairs $\{\omega,\omega\}$, while the eigenvalues of $\mathbf{H}$ appear in pairs $\{\omega,-\omega\}$\cite{kressner2005}.
The same results also hold for complex matrices. Besides,
the additional relations with complex conjugation in Eq. \eqref{Mdef} compared
with Eqs. \eqref{SkewH} and \eqref{Ham}, viz.,
$\mathbf{W}_{21}=-\mathbf{W}_{12}^*=-\mathbf{B}^*$,
$\mathbf{W}_{11}^T=\mathbf{W}_{11}^*=\mathbf{A}^*$,
$\mathbf{H}_{21}=-\mathbf{H}_{12}^*=-\mathbf{B}^*$,
and
$\mathbf{H}_{11}^T=\mathbf{H}_{11}^*=\mathbf{A}^*$,
lead to further structures on eigenvalues and eigenvectors,
\begin{eqnarray}
\mathbf{Z}_s=
\left[\begin{array}{cc}
\mathbf{X} & -s\mathbf{Y}^* \\
\mathbf{Y} & \mathbf{X}^* \\
\end{array}\right],\quad
\boldsymbol{\omega}_s=
\left[\begin{array}{cc}
\boldsymbol{\omega} & \mathbf{0} \\
\mathbf{0} & s\boldsymbol{\omega}^* \\
\end{array}\right].\label{eigenvectors}
\end{eqnarray}
Thus, the symmetry relationships among the eigenvalues of Eq. \eqref{Mdef} can be deduced.
For $s=1$, the eigenvalues are real and doubly degenerate, $\{\omega,\omega=\omega^*\}$,
which is a reflection of time reversal symmetry.
For $s=-1$, the eigenvalues appear in quadruples $\{\omega, -\omega, \omega^*, -\omega^*\}$.
If $\omega=\omega^*$ (or $\omega=-\omega^*$) is real (purely imaginary),
the quadruple reduces to the pair $\{\omega,-\omega\}$.
Note that the pair structure \eqref{eigenvectors} always holds for $s=1$,
since $(\mathbf{X},\mathbf{Y})$ and $(-\mathbf{Y}^*,\mathbf{X}^*)$ are orthogonal. However, for $s=-1$,
the situation becomes more complicated in the
degenerate case $\omega=-\omega^*$, where the pair structure
of the eigenvectors does not necessarily hold. The discussion in the remainder of this section
applies only to the $s=-1$ case in which all eigenvectors have the pair structure \eqref{eigenvectors};
the algorithm presented in Sec. \ref{sec:condition} does not rely on this assumption.
While most of the previous studies focused on the paired structure \eqref{eigenvectors}
for a given matrix $\mathbf{M}_s$, we note that if the set of matrices with
the same form as the eigenvectors $\mathbf{Z}_s$ \eqref{eigenvectors} is considered,
along with the commonly applied normalization conditions,
\begin{eqnarray}
\mathbf{Z}^\dagger_s \mathbf{N}_s\mathbf{Z}_s =\mathbf{N}_s,\quad
\mathbf{N}_s=\left[\begin{array}{cc}
\mathbf{I}_n & \mathbf{0} \\
\mathbf{0} & s\mathbf{I}_n
\end{array}\right],\label{normal}
\end{eqnarray}
these matrices actually form matrix groups, viz.,
\begin{eqnarray}
\mathcal{G}_s =\{\mathbf{Z}_s:\,
\mathbf{Z}_s=\left[\begin{array}{cc}
\mathbf{X} & -s\mathbf{Y}^* \\
\mathbf{Y} & \mathbf{X}^* \\
\end{array}\right],\,\mathbf{Z}^\dagger_s \mathbf{N}_s\mathbf{Z}_s =\mathbf{N}_s\}.\label{Gs}
\end{eqnarray}
This can be verified directly from the definition of a group:
(1) This set is closed under multiplication of two matrices, since
\begin{eqnarray}
\mathbf{Z}_{s,1}\mathbf{Z}_{s,2}
&=&
\left[\begin{array}{cc}
\mathbf{X}_1 & -s\mathbf{Y}^*_1 \\
\mathbf{Y}_1 & \mathbf{X}^*_1 \\
\end{array}\right]
\left[\begin{array}{cc}
\mathbf{X}_2 & -s\mathbf{Y}^*_2 \\
\mathbf{Y}_2 & \mathbf{X}^*_2 \\
\end{array}\right]\nonumber\\
&=&
\left[\begin{array}{cc}
\mathbf{X}_1\mathbf{X}_2-s\mathbf{Y}^*_1\mathbf{Y}_2 & -s(\mathbf{X}_1\mathbf{Y}^*_2+\mathbf{Y}^*_1\mathbf{X}^*_2) \\
\mathbf{X}^*_1\mathbf{Y}_2+\mathbf{Y}_1\mathbf{X}_2 & \mathbf{X}^*_1\mathbf{X}^*_2-s\mathbf{Y}_1\mathbf{Y}^*_2 \\
\end{array}\right]\nonumber\\
&\triangleq&
\left[\begin{array}{cc}
\mathcal{X} & -s\mathcal{Y}^* \\
\mathcal{Y} & \mathcal{X}^* \\
\end{array}\right],
\end{eqnarray}
and $(\mathbf{Z}_{s,1}\mathbf{Z}_{s,2})^\dagger\mathbf{N}_s(\mathbf{Z}_{s,1}\mathbf{Z}_{s,2})=\mathbf{N}_s$,
that is $\mathbf{Z}_{s,1}\mathbf{Z}_{s,2} \in \mathcal{G}_s$.
(2) The identity element is just $\mathbf{I}_{2n}$.
(3) The inverse of any element $\mathbf{Z}_s$ exists, since $\mathbf{Z}_s$ satisfies
the normalization condition \eqref{normal},
\begin{eqnarray}
\mathbf{Z}_s^{-1}
=\mathbf{N}_s\mathbf{Z}_s^\dagger\mathbf{N}_s
=\left[\begin{array}{cc}
\mathbf{X}^\dagger & s\mathbf{Y}^\dagger \\
-\mathbf{Y}^T & \mathbf{X}^T
\end{array}\right]\in \mathcal{G}_s.\label{zinv}
\end{eqnarray}
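The inverse formula above can be checked numerically; the sketch below uses a simple hyperbolic one-parameter element of the $s=-1$ group (our own illustrative parametrization):

```python
import numpy as np

t, phi = 0.7, 0.3
# for n = 1 and s = -1, Z = [[x, y*], [y, x*]] with |x|^2 - |y|^2 = 1
x = np.cosh(t)
y = np.sinh(t) * np.exp(1j * phi)
Z = np.array([[x, np.conj(y)], [y, np.conj(x)]])
N = np.diag([1.0, -1.0])                   # the metric N_s for s = -1

print(np.allclose(Z.conj().T @ N @ Z, N))  # True: the normalization condition holds
Zinv = N @ Z.conj().T @ N                  # the stated inverse, Z^{-1} = N Z^dagger N
print(np.allclose(Z @ Zinv, np.eye(2)))    # True
```

The hyperbolic functions make the indefinite normalization $|x|^2-|y|^2=1$ hold automatically, which is what distinguishes $\mathrm{USp}(n,n)$ from the compact $\mathrm{USp}(2n)$.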
In fact, the groups \eqref{Gs} are just the
unitary symplectic Lie groups, viz., $\mathrm{USp}(2n)=\mathrm{U}(2n)\cap \mathrm{Sp}(2n,\mathbb{C})$ for $s=+1$,
and $\mathrm{USp}(n,n)=\mathrm{U}(n,n)\cap \mathrm{Sp}(2n,\mathbb{C})$ for $s=-1$.
Here, $\mathrm{U}(p,q)$ represents the unitary group with signature $(p,q)$, i.e.,
\begin{eqnarray}
\mathrm{U}(p,q)=\{\mathbf{M}:\;\mathbf{M}^\dagger \mathbf{I}_{p,q}\mathbf{M} = \mathbf{I}_{p,q}\},\;\;
\mathbf{I}_{p,q}=\left[
\begin{array}{cc}
\mathbf{I}_{p} & \mathbf{0} \\
\mathbf{0} & -\mathbf{I}_{q} \\
\end{array}\right],\label{Upq}
\end{eqnarray}
and $\mathrm{Sp}(2n,\mathbb{C})$ represents the complex symplectic
group,
\begin{eqnarray}
\mathrm{Sp}(2n,\mathbb{C})=\{\mathbf{M}:\;\mathbf{M}^T \mathbf{J}\mathbf{M} = \mathbf{J}\},\;\;
\mathbf{J}=\left[\begin{array}{cc}
\mathbf{0} & \mathbf{I}_{n} \\
-\mathbf{I}_{n} & \mathbf{0}\\
\end{array}\right].\label{Sp2n}
\end{eqnarray}
This equivalence can be established by realizing that
combining the conditions in Eqs. \eqref{Upq} and \eqref{Sp2n} leads to
the condition
\begin{eqnarray}
\mathbf{J}_s\mathbf{M}^*=\mathbf{M}\mathbf{J}_s,\quad
\mathbf{J}_s=\left[\begin{array}{cc}
\mathbf{0} & \mathbf{I}_{n} \\
-s\mathbf{I}_{n} & \mathbf{0}\\
\end{array}\right],\label{JMsMJ}
\end{eqnarray}
which implies the block structures in Eq. \eqref{Gs}
for the $2n$-by-$2n$ matrix $\mathbf{M}$. Note in passing that
the case $s=1$ for Eq. \eqref{JMsMJ} reveals the
underlying commutation between an operator
and the time reversal operator.
The identification of Lie group structures for $\mathcal{G}_s$
has important consequences. In particular, it simplifies the
design of structure-preserving algorithms. For instance,
for the case $s=1$, the Lie algebra $\mathfrak{usp}(2n)$
corresponding to the Lie group $\mathrm{USp}(2n)$ is
\begin{eqnarray}
\mathfrak{usp}(2n)=\{\mathbf{K}:\;
\mathbf{K}^\dagger =-\mathbf{K},\;
\mathbf{J}\mathbf{K}^*=\mathbf{K}\mathbf{J}\},\label{usp}
\end{eqnarray}
where the matrix $\mathbf{K}$ can be written more explicitly as
\begin{eqnarray}
\mathbf{K}=\left[\begin{array}{cc}
\mathbf{K}_{11} & -\mathbf{K}_{21}^* \\
\mathbf{K}_{21} & \mathbf{K}_{11}^* \\
\end{array}\right],\;
\mathbf{K}_{11}^\dagger = -\mathbf{K}_{11},\;
\mathbf{K}_{21}^T=\mathbf{K}_{21}.
\end{eqnarray}
This applies to the construction of time reversal adapted
basis operators\cite{aucar1995operator}.
Furthermore, the exponential map $\exp(\mathbf{K})$ can be used to
transform one Kramers paired basis into another Kramers
paired basis, as previously used in Kramers-restricted MCSCF\cite{fleig1997spinor}. More generally,
the Lie group \eqref{Gs} and Lie algebra structures for
the case $s=1$ also imply the possibility of applying numerical
techniques for optimization on manifolds\cite{absil2009optimization}
to relativistic spinor optimization while preserving
the Kramers pair structure. Owing to the unified framework presented here,
one can expect that similar techniques can also be applied to the $s=-1$ case.
\section{Reduction algorithm for the $s=1$ case}\label{sec:reduction}
In this section, we briefly recapitulate the reduction algorithm for the $s=1$ case,
adapting the techniques developed for real skew-Hamiltonian matrices\cite{paige1981schur,van1984symplectic}
to the complex matrix $\mathbf{M}_+$ \eqref{Mdef}. These techniques also form the basis of diagonalization
algorithms for the relativistic Fock matrix\cite{dongarra1984eigenvalue,shiozaki2017efficient}.
The essential idea is to realize that the unitary symplectic transformation
\begin{eqnarray}
\mathbf{G}=\left[\begin{array}{cc}
\mathbf{U} & -\mathbf{V}^* \\
\mathbf{V} & \mathbf{U}^* \\
\end{array}\right]\in\mathrm{USp}(2n),
\end{eqnarray}
when acting on a skew Hamiltonian (complex) matrix, such as
$\mathbf{W}$ \eqref{SkewH} via $\tilde{\mathbf{W}}=\mathbf{G}\mathbf{W}\mathbf{G}^\dagger$, will preserve
the skew-Hamiltonian structure, viz.,
\begin{eqnarray}
\tilde{\mathbf{W}}_{22}=\tilde{\mathbf{W}}_{11}^T,\;\;
\tilde{\mathbf{W}}_{12}^T=-\tilde{\mathbf{W}}_{12},\;\;
\tilde{\mathbf{W}}_{21}^T=-\tilde{\mathbf{W}}_{21}.
\end{eqnarray}
Then, there is a constructive way\cite{paige1981schur} to eliminate the lower-left block of the matrix $\mathbf{W}$ \eqref{SkewH}
by a series of unitary symplectic Householder and Givens transformations,
such that the transformed matrix $\tilde{\mathbf{W}}$ is in the following Paige/Van Loan (PVL) form\cite{kressner2005},
\begin{eqnarray}
\tilde{\mathbf{W}}=\mathbf{G}\mathbf{W}\mathbf{G}^{\dagger}=
\left[\begin{array}{cc}
\tilde{\mathbf{W}}_{11} & \tilde{\mathbf{W}}_{12} \\
\mathbf{0} & \tilde{\mathbf{W}}_{11}^T \\
\end{array}\right].\label{PVLform}
\end{eqnarray}
The crucial point enabling the transformation of $\mathbf{W}$
into the form \eqref{PVLform} is $\mathbf{W}_{21}=-\mathbf{W}_{21}^T$, a
property that is preserved during the transformations;
see Ref. \onlinecite{paige1981schur} for details.
The computational scaling of the transformation is cubic in the dimension
of the matrix.
Applying this result to $\mathbf{M}_+$ and realizing that the transformed matrix $\tilde{\mathbf{M}}_+$
is still Hermitian, one can conclude that $(\tilde{\mathbf{M}}_+)_{12}=0$ and
$\tilde{\mathbf{M}}_{11}^T=\tilde{\mathbf{M}}_{11}^*$, viz.,
$\tilde{\mathbf{M}}_+$ becomes block-diagonal\cite{shiozaki2017efficient}.
Then, the eigenvalues can be obtained by diagonalizing the Hermitian matrix $\tilde{\mathbf{M}}_{11}$ by
a unitary matrix $\bar{\mathbf{U}}$, which reduces the computational cost compared with the original problem.
The final solution to the original problem can be obtained as
\begin{eqnarray}
\mathbf{Z}_+=\mathbf{G}\left[\begin{array}{cc}
\bar{\mathbf{U}} & \mathbf{0} \\
\mathbf{0} & \bar{\mathbf{U}}^* \\
\end{array}\right]=
\left[\begin{array}{cc}
\mathbf{U}\bar{\mathbf{U}} & -\mathbf{V}^*\bar{\mathbf{U}} \\
\mathbf{V}\bar{\mathbf{U}}^* & \mathbf{U}^*\bar{\mathbf{U}}^* \\
\end{array}\right]\in \mathcal{G}_{+},
\end{eqnarray}
which still preserves the structure \eqref{Gs} due to
the closeness of groups.
\section{Reduction algorithm for the $s=-1$ case}\label{sec:condition}
Due to the non-Hermiticity of $\mathbf{M}_-$, a straightforward generalization
of the above reduction algorithm to $\mathbf{M}_-$ using transformations
in $\mathrm{USp}(n,n)$ does not seem possible, because the validity of the
form \eqref{eigenvectors} depends on the properties of the eigensystem.
In some cases, $\mathbf{M}_-$ is not even diagonalizable, e.g.,
$\mathbf{M}_-=\left[\begin{array}{cc}
1 & 1 \\
-1 & -1 \\
\end{array}\right]$.
To avoid this difficulty, following the observation for
Hamiltonian matrices by Van Loan\cite{van1984symplectic},
one finds that the squared matrix $\mathbf{M}_-^2$ is a skew-Hamiltonian
matrix,
\begin{eqnarray}
\mathbf{M}^2_-&=&
\left[\begin{array}{cc}
\mathbf{A}^2-\mathbf{B}\B^* & \mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A}^* \\
-\mathbf{B}^*\mathbf{A}+\mathbf{A}^*\mathbf{B}^* & (\mathbf{A}^*)^2-\mathbf{B}^*\mathbf{B}\\
\end{array}\right]\nonumber\\
&\triangleq&
\left[\begin{array}{cc}
\mathcal{A} & \mathcal{B} \\
\mathcal{B}^* & \mathcal{A}^* \\
\end{array}\right],\label{M2}
\end{eqnarray}
where
\begin{eqnarray}
\mathcal{A} &=& \mathbf{A}^2-\mathbf{B}\B^*,\nonumber\\
\mathcal{B} &=& \mathbf{A}\mathbf{B}-\mathbf{B}\mathbf{A}^*,\nonumber\\
\mathcal{A}^\dagger &=& (\mathbf{A}^\dagger)^2-\mathbf{B}^T\mathbf{B}^\dagger
=\mathbf{A}^2-\mathbf{B}\B^* =\mathcal{A},\nonumber\\
\mathcal{B}^T &=& \mathbf{B}^T\mathbf{A}^T-\mathbf{A}^\dagger\mathbf{B}^T
=\mathbf{B}\mathbf{A}^*-\mathbf{A}\mathbf{B} = -\mathcal{B}.
\end{eqnarray}
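The skew-Hamiltonian structure of $\mathbf{M}_-^2$ derived above can be verified numerically on random matrices (an illustrative sketch; the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
# complex s = -1 data: A Hermitian, B symmetric
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2

M = np.block([[A, B], [-B.conj(), -A.conj()]])
M2 = M @ M
cA, cB = M2[:n, :n], M2[:n, n:]
print(np.allclose(cA, cA.conj().T))        # True: upper-left block is Hermitian
print(np.allclose(cB, -cB.T))              # True: upper-right block is antisymmetric
print(np.allclose(M2[n:, :n], cB.conj()))  # True: lower-left is the conjugate of upper-right
print(np.allclose(M2[n:, n:], cA.conj()))  # True: lower-right is the conjugate of upper-left
```

All four block relations of the skew-Hamiltonian form are thus reproduced by the square of a generic $\mathbf{M}_-$.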
Thus, it can be brought into the PVL form \eqref{PVLform} by unitary symplectic transformations,
as for $\mathbf{M}_+$. It deserves to be pointed out, however, that unlike for the Hermitian
matrix $\mathbf{M}_+$, these transformations do not
preserve the relation between the lower-left and upper-right blocks of $\mathbf{M}_-^2$:
while $(\tilde{\mathbf{M}}^2_-)_{21}$ is made zero in the transformed matrix, $(\tilde{\mathbf{M}}^2_-)_{12}$ remains nonzero.
Nevertheless, the upper-left block $(\tilde{\mathbf{M}}^2_-)_{11}$ can still be used
to compute $\omega^2$ with a reduced diagonalization cost.
This gives a criterion for the different scenarios of the eigenvalues of the linear response problem.
However, to make a better connection to the conditions \eqref{AmBApB} and \eqref{ApBAmB} of the real case,
we use the following transformation\cite{shao2016structure},
\begin{eqnarray}
\mathbf{Q} = \frac{1}{\sqrt{2}}\left[
\begin{array}{cc}
\mathbf{I}_n & \mathbf{I}_n \\
i\mathbf{I}_n & -i\mathbf{I}_n \\
\end{array}\right],
\end{eqnarray}
which transforms $\mathbf{M}_-$ into a purely imaginary matrix
\begin{eqnarray}
\mathbf{Q}\mathbf{M}_-\mathbf{Q}^\dagger=i\left[
\begin{array}{cc}
(\mathbf{A}+\mathbf{B})_{\mathrm{I}} & -(\mathbf{A}-\mathbf{B})_{\mathrm{R}} \\
(\mathbf{A}+\mathbf{B})_{\mathrm{R}} & (\mathbf{A}-\mathbf{B})_{\mathrm{I}} \\
\end{array}\right].
\end{eqnarray}
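This block structure of $\mathbf{Q}\mathbf{M}_-\mathbf{Q}^\dagger$ can be confirmed numerically (a sketch of ours, with random Hermitian $\mathbf{A}$ and complex symmetric $\mathbf{B}$; not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = (A + A.conj().T) / 2          # Hermitian A
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = (B + B.T) / 2                 # symmetric B
M = np.block([[A, B], [-B.conj(), -A.conj()]])

I = np.eye(n)
Q = np.block([[I, I], [1j * I, -1j * I]]) / np.sqrt(2)

T = Q @ M @ Q.conj().T            # should be i * (real matrix)
print(np.allclose(T.real, 0))     # True: purely imaginary

# Compare the blocks of T/i with the stated expression.
R = T / 1j
print(np.allclose(R[:n, :n], (A + B).imag))        # (A+B)_I
print(np.allclose(R[:n, n:], -(A - B).real))       # -(A-B)_R
print(np.allclose(R[n:, :n], (A + B).real))        # (A+B)_R
print(np.allclose(R[n:, n:], (A - B).imag))        # (A-B)_I
```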
The squared matrix is then transformed into a real skew-Hamiltonian matrix,
\begin{widetext}
\begin{eqnarray}
\mathbf{\Pi} \triangleq \mathbf{Q}\mathbf{M}_-^2\mathbf{Q}^\dagger =
\left[\begin{array}{cc}
(\mathbf{A}-\mathbf{B})_{\mathrm{R}}(\mathbf{A}+\mathbf{B})_{\mathrm{R}}-(\mathbf{A}+\mathbf{B})_{\mathrm{I}}(\mathbf{A}+\mathbf{B})_{\mathrm{I}} &
(\mathbf{A}-\mathbf{B})_{\mathrm{R}}(\mathbf{A}-\mathbf{B})_{\mathrm{I}}+(\mathbf{A}+\mathbf{B})_{\mathrm{I}}(\mathbf{A}-\mathbf{B})_{\mathrm{R}} \\
-(\mathbf{A}+\mathbf{B})_{\mathrm{R}}(\mathbf{A}+\mathbf{B})_{\mathrm{I}}-(\mathbf{A}-\mathbf{B})_{\mathrm{I}}(\mathbf{A}+\mathbf{B})_{\mathrm{R}} &
(\mathbf{A}+\mathbf{B})_{\mathrm{R}}(\mathbf{A}-\mathbf{B})_{\mathrm{R}}-(\mathbf{A}-\mathbf{B})_{\mathrm{I}}(\mathbf{A}-\mathbf{B})_{\mathrm{I}} \\
\end{array}\right].\label{pi}
\end{eqnarray}
\end{widetext}
It is clear that if $\mathbf{M}_-$ is real, then $\mathbf{\Pi}$ becomes block diagonal,
with the diagonal blocks being simply $(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})$ and $(\mathbf{A}+\mathbf{B})(\mathbf{A}-\mathbf{B})$, and
the transformed eigenvalue problem becomes equivalent to Eqs. \eqref{AmBApB} and \eqref{ApBAmB}, since
\begin{eqnarray}
\mathbf{Q}\mathbf{z}=\mathbf{Q}
\left[\begin{array}{c}
\mathbf{x} \\
\mathbf{y} \\
\end{array}\right]
=\frac{1}{\sqrt{2}}
\left[\begin{array}{c}
\mathbf{x}+\mathbf{y} \\
i(\mathbf{x}-\mathbf{y}) \\
\end{array}\right].
\end{eqnarray}
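The real-case limit can be confirmed numerically (our sketch, with random real symmetric $\mathbf{A}$ and $\mathbf{B}$ as assumed test data):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # real symmetric A
B = rng.standard_normal((n, n)); B = (B + B.T) / 2   # real symmetric B
M = np.block([[A, B], [-B, -A]])                     # real M_-

I = np.eye(n)
Q = np.block([[I, I], [1j * I, -1j * I]]) / np.sqrt(2)
Pi = Q @ (M @ M) @ Q.conj().T

# Off-diagonal blocks vanish; the diagonal blocks are (A-B)(A+B) and (A+B)(A-B).
print(np.allclose(Pi[:n, n:], 0), np.allclose(Pi[n:, :n], 0))   # True True
print(np.allclose(Pi[:n, :n], (A - B) @ (A + B)))               # True
print(np.allclose(Pi[n:, n:], (A + B) @ (A - B)))               # True
```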
By applying unitary symplectic transformations to reduce $\tilde{\mathbf{\Pi}}=\mathbf{G}\mathbf{\Pi}\mathbf{G}^\dagger$
into the PVL form \eqref{PVLform}, the real non-Hermitian matrix
$\tilde{\mathbf{\Pi}}_{11}$ can be used to compute eigenvalues of $\mathbf{M}_-^2$.
Denoting the eigenvalues of $\tilde{\mathbf{\Pi}}_{11}$ by $\lambda$, we have
the following three cases:
(1) $\lambda=\lambda_{\mathrm{R}}\ge 0$: the pair of real eigenvalues of $\mathbf{M}_-$ is $\{\sqrt{\lambda_{\mathrm{R}}},-\sqrt{\lambda_{\mathrm{R}}}\}$.
(2) $\lambda=\lambda_{\mathrm{R}}<0$: the pair of purely imaginary eigenvalues of $\mathbf{M}_-$ is $\{i\sqrt{-\lambda_{\mathrm{R}}},
-i\sqrt{-\lambda_{\mathrm{R}}}\}$.
(3) $\lambda=\lambda_{\mathrm{R}}+i\lambda_{\mathrm{I}}$ is complex: $\lambda^*=\lambda_{\mathrm{R}}-i\lambda_{\mathrm{I}}$ will also be an eigenvalue of $\tilde{\mathbf{\Pi}}_{11}$, and the quadruple of complex eigenvalues of
$\mathbf{M}_-$ is $\{\omega, -\omega, \omega^*, -\omega^*\}$ with $\omega$ given by
\begin{eqnarray}
\omega = \frac{\zeta}{\sqrt{2}}+i\frac{\lambda_{\mathrm{I}}}{\sqrt{2}\zeta},\quad
\zeta=\sqrt{\lambda_{\mathrm{R}}+\sqrt{\lambda_{\mathrm{R}}^2+\lambda_{\mathrm{I}}^2}}.
\end{eqnarray}
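The three cases can be folded into a small helper routine (a sketch; the function name and tolerance are our choices) that maps each eigenvalue $\lambda$ of $\tilde{\mathbf{\Pi}}_{11}$ to the corresponding eigenvalues of $\mathbf{M}_-$:

```python
import numpy as np

def omega_from_lambda(lam, tol=1e-12):
    """Map an eigenvalue lam of the reduced matrix to eigenvalues of M_-."""
    lr, li = lam.real, lam.imag
    if abs(li) <= tol:                    # real lambda
        if lr >= 0:                       # case (1): pair of real eigenvalues
            w = np.sqrt(lr)
            return [w, -w]
        w = 1j * np.sqrt(-lr)             # case (2): purely imaginary pair
        return [w, -w]
    # case (3): complex lambda -> quadruple {w, -w, w*, -w*}
    zeta = np.sqrt(lr + np.hypot(lr, li))
    w = zeta / np.sqrt(2) + 1j * li / (np.sqrt(2) * zeta)
    return [w, -w, w.conjugate(), -w.conjugate()]

print(np.allclose(omega_from_lambda(4.0 + 0j), [2.0, -2.0]))       # True
print(np.allclose(omega_from_lambda(-4.0 + 0j), [2j, -2j]))        # True
# lambda = 3 + 4i gives w = 2 + i, since (2 + i)^2 = 3 + 4i
print(np.allclose(omega_from_lambda(3.0 + 4.0j),
                  [2 + 1j, -2 - 1j, 2 - 1j, -2 + 1j]))             # True
```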
Thus, the goal of characterizing the eigenvalues of the complex linear response problem in a similar
way to the real case is accomplished, based on the eigenvalues of the reduced matrix $\tilde{\mathbf{\Pi}}_{11}$.
In the next section, we will examine some concrete examples.
\section{Illustrative examples}\label{sec:examples}
We first illustrate the simplification due to using the square matrix $\mathbf{M}_-^2$ by considering
a 2-by-2 example,
\begin{eqnarray}
\mathbf{M}_-=\left[
\begin{array}{cc}
x & 3i \\
3i & -x \\
\end{array}
\right],\;\;
\mathbf{M}_-^2=\left[
\begin{array}{cc}
x^2-9 & 0 \\
0 & x^2-9 \\
\end{array}
\right],\label{example1}
\end{eqnarray}
where $x\in\mathbb{R}$ is a parameter to mimic the effect of changing physical parameters
in the linear response problem, such as the change of bond length of diatomic molecules in
scanning potential energy curves. It is seen that the matrix $\mathbf{M}_-^2$ is already diagonal,
which gives two identical eigenvalues $\lambda=x^2-9$. Consequently, the original problem
has two eigenvalues $\omega=\pm\sqrt{x^2-9}$. The eigenvalues as a function of $x$ are shown in
Fig. \ref{fig:case1}. The graphs can be classified into three regions:
(1) $x\le-3$: $\lambda\ge 0$ and a pair of real eigenvalues $\omega=\pm\sqrt{x^2-9}$
appears, although the electronic Hessian is not positive definite.
(2) $-3<x<3$: $\lambda<0$ and a pair of purely imaginary eigenvalues $\omega=\pm i\sqrt{9-x^2}$ appears.
(3) $x\ge3$: $\lambda\ge 0$ and the electronic Hessian is positive definite.
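These three regions can be verified numerically at a few sample values of $x$ (a sketch of ours; the function name is arbitrary):

```python
import numpy as np

def M_minus(x):
    """The 2-by-2 example: A = x (real), B = 3i."""
    return np.array([[x, 3j], [3j, -x]])

for x in (-4.0, 0.0, 4.0):
    w = np.linalg.eigvals(M_minus(x))
    if x**2 >= 9:   # regions (1) and (3): a pair of real eigenvalues
        print(np.allclose(w.imag, 0) and
              np.allclose(sorted(w.real), [-np.sqrt(x**2 - 9), np.sqrt(x**2 - 9)]))
    else:           # region (2): a pair of purely imaginary eigenvalues
        print(np.allclose(w.real, 0) and
              np.allclose(sorted(w.imag), [-np.sqrt(9 - x**2), np.sqrt(9 - x**2)]))
```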
\begin{figure}\centering
\begin{tabular}{c}
\resizebox{0.4\textwidth}{!}{\includegraphics{figures/case1-w}} \\
(a) $\omega$ as a function of $x$ \\
\resizebox{0.4\textwidth}{!}{\includegraphics{figures/case1-w2}} \\
(b) $\lambda$ as a function of $x$ \\
\end{tabular}
\caption{A 2-by-2 example for $\mathbf{M}_-$ \eqref{example1}:
(a) eigenvalues of $\mathbf{M}_-$ (denoted by $\omega$) as a function of $x$; (b) eigenvalues of $\mathbf{M}_-^2$ (denoted by $\lambda$) as a function of $x$. The blue solid (dashed) lines represent the real (imaginary) parts of eigenvalues.
The two red vertical lines represent critical values of $x=\pm3$.}\label{fig:case1}
\end{figure}
Next, we examine a more complex example, which covers all the scenarios for eigenvalues of $\mathbf{M}_-$.
The matrices $\mathbf{A}$ and $\mathbf{B}$ are chosen as
\begin{eqnarray}
\mathbf{A}=\left[
\begin{array}{cc}
x & 3+i \\
3-i & 5 \\
\end{array}
\right],
\quad
\mathbf{B}=
\left[
\begin{array}{cc}
4 & 4 \\
4 & 4 \\
\end{array}
\right].\label{example2}
\end{eqnarray}
The eigenvalues of $\mathbf{M}_-$ can be found analytically as
\begin{eqnarray}
\omega&=&\pm\sqrt{\frac{-19+x^2\pm\sqrt{\Delta}}{2}},\nonumber\\
\Delta&=&25 + 272 x - 74 x^2 + x^4.\label{example2:omega}
\end{eqnarray}
Following the procedure described in the previous section, one can find the corresponding
skew-Hamiltonian $\mathbf{\Pi}$ \eqref{pi} as
\begin{eqnarray}
\mathbf{\Pi}=
\left[\begin{array}{cccc}
-22+x^2 & -37+7x & 0 & -3+x \\
3-x & 3 & 3-x & 0 \\
0 & -13-x & -22+x^2 & 3-x \\
13+x & 0 & -37+7x & 3 \\
\end{array}\right].
\end{eqnarray}
Applying the following Givens rotation $\mathbf{G}$ with an appropriate angle $\theta$ to eliminate $\Pi_{41}$,
\begin{eqnarray}
\mathbf{G} =
\left[\begin{array}{cccc}
1 & 0 & 0 & 0 \\
0 & \cos\theta & 0 & -\sin\theta \\
0 & 0 & 1 & 0 \\
0 & \sin\theta & 0 & \cos\theta \\
\end{array}\right],
\end{eqnarray}
we can find the upper-left block of $\tilde{\mathbf{\Pi}}=\mathbf{G}\mathbf{\Pi}\mathbf{G}^\dagger$ as
\begin{eqnarray}
\tilde{\mathbf{\Pi}}_{11}=
\left[\begin{array}{cc}
-22+x^2 & \frac{\sqrt{2}(-3+x)(-25+3x)}{\sqrt{89+10x+x^2}} \\
-\sqrt{2(89+10x+x^2)} & 3 \\
\end{array}\right].
\end{eqnarray}
It can be verified that its two eigenvalues are given by
$\lambda_\pm=\frac{-19+x^2\pm\sqrt{\Delta}}{2}$, which is
consistent with Eq. \eqref{example2:omega}.
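This reduction is easy to reproduce numerically, e.g., at $x=2$ (a sketch of ours; our choice of rotation angle may differ from the stated $\tilde{\mathbf{\Pi}}_{11}$ by a sign convention, which leaves the eigenvalues unchanged):

```python
import numpy as np

x = 2.0
Pi = np.array([
    [-22 + x**2, -37 + 7*x,  0.0,       -3 + x],
    [ 3 - x,      3.0,       3 - x,      0.0  ],
    [ 0.0,       -13 - x,   -22 + x**2,  3 - x],
    [ 13 + x,     0.0,      -37 + 7*x,   3.0  ],
])

# Symplectic Givens rotation in the (2,4) plane chosen to zero the (4,1) entry;
# by skew-symmetry of the lower-left block, the (3,2) entry vanishes as well.
theta = np.arctan2(-Pi[3, 0], Pi[1, 0])
c, s = np.cos(theta), np.sin(theta)
G = np.array([[1, 0, 0,  0],
              [0, c, 0, -s],
              [0, 0, 1,  0],
              [0, s, 0,  c]])
Pt = G @ Pi @ G.T
print(abs(Pt[3, 0]) < 1e-12, abs(Pt[2, 1]) < 1e-12)   # True True

# Eigenvalues of the upper-left block reproduce lambda_pm.
lam = np.sort(np.linalg.eigvals(Pt[:2, :2]).real)
Delta = 25 + 272*x - 74*x**2 + x**4
lam_ref = np.sort([(-19 + x**2 - np.sqrt(Delta)) / 2,
                   (-19 + x**2 + np.sqrt(Delta)) / 2])
print(np.allclose(lam, lam_ref))                      # True (here {-16, 1})
```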
As shown in Fig. \ref{fig:case2}, the eigenvalues $\omega$ and $\lambda$ exhibit a much richer dependence on $x$
in this example. The conditions $\Delta=0$ and $\lambda_{\pm}=0$ determine four real critical values of $x$ in total, viz.,
\begin{eqnarray}
&x_1\approx -10.04,\quad x_2\approx -0.09,\nonumber\\
&x_3=\frac{14}{9}\approx 1.56,\quad x_4=6.\label{critical2x}
\end{eqnarray}
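These critical values can be cross-checked numerically (our sketch): $\lambda_\pm=0$ is equivalent to $(x^2-19)^2=\Delta$, which simplifies to $9x^2-68x+84=0$, while $x_1$ and $x_2$ are the real roots of $\Delta=0$.

```python
import numpy as np

# lambda_pm = 0  <=>  (x^2 - 19)^2 = Delta  <=>  9x^2 - 68x + 84 = 0
q = np.sort(np.roots([9, -68, 84]).real)
print(np.allclose(q, [14/9, 6]))                      # True: x_3 and x_4

# x_1, x_2 are the real roots of Delta = x^4 - 74x^2 + 272x + 25 = 0
r = np.roots([1, 0, -74, 272, 25])
r = np.sort(r[np.abs(r.imag) < 1e-8].real)
print(np.allclose(r, [-10.04, -0.09], atol=0.01))     # True
```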
Consequently, the graphs can be classified into five regions:
(1) $x\le x_1$: $\lambda_+\ge\lambda_->0$ and $\mathbf{M}_-$ has two pairs of real eigenvalues.
(2) $x_1<x< x_2$: $\lambda_+=\lambda_-^*$ become complex, such that $\mathbf{M}_-$ has a quadruple of complex eigenvalues.
(3) $x_2\le x< x_3$: $0>\lambda_+\ge\lambda_-$ and $\mathbf{M}_-$ has two pairs of purely imaginary eigenvalues.
(4) $x_3\le x<x_4$: $\lambda_+\ge0>\lambda_-$ and $\mathbf{M}_-$ has a pair of real eigenvalues and a pair of purely imaginary eigenvalues.
(5) $x\ge x_4$: $\lambda_+>\lambda_-\ge0$ and $\mathbf{M}_-$ has two pairs of real eigenvalues.
This example covers all the three different scenarios of eigenvalues of $\mathbf{M}_-$ discussed in the previous section.
All of them can be easily characterized by eigenvalues of a simpler matrix $\tilde{\mathbf{\Pi}}_{11}$ with halved dimension,
which is a natural generalization of $(\mathbf{A}-\mathbf{B})(\mathbf{A}+\mathbf{B})$ or $(\mathbf{A}+\mathbf{B})(\mathbf{A}-\mathbf{B})$ in the real case.
Finally, we mention that for larger matrices, the eigenvalues cannot be computed analytically, but it is straightforward to
implement the reduction procedure numerically. The behaviors of eigenvalues can be understood in the same way following
the examples presented here.
\begin{figure}\centering
\begin{tabular}{c}
\resizebox{0.4\textwidth}{!}{\includegraphics{figures/case2-w}} \\
(a) $\omega$ as a function of $x$ \\
\resizebox{0.4\textwidth}{!}{\includegraphics{figures/case2-w2}} \\
(b) $\lambda$ as a function of $x$ \\
\end{tabular}
\caption{A 4-by-4 example for $\mathbf{M}_-$ \eqref{example2}:
(a) eigenvalues of $\mathbf{M}_-$ (denoted by $\omega$) as a function of $x$; (b) eigenvalues of $\mathbf{M}_-^2$ (denoted by $\lambda$) as a function of $x$.
The blue solid (dashed) lines represent the real (imaginary) parts of eigenvalues.
The four red vertical lines represent critical values of $x$ given in Eq. \eqref{critical2x}.
}\label{fig:case2}
\end{figure}
\section{Conclusion}
In this work, we provided a unified view of two structured eigenvalue problems that commonly appear
in (relativistic) electronic structure methods: the quaternion matrix eigenvalue problem and
the linear response eigenvalue problem for excitation energies. Using the same
reduction algorithm, we derived a generalized condition
to characterize the different scenarios for the eigenvalues of the
complex linear response problem. Such an understanding may allow the design of
more efficient and robust diagonalization algorithms in the future.
\section*{Acknowledgements}
This work was supported by the National Natural Science Foundation of China (Grants
No. 21973003) and the Beijing Normal University Startup Package.
\bibliographystyle{apsrev4-1}