Ad 22032: Home based typist. Posted Apr 14, 2019. 1266 views. Province: Gauteng. City: Bedfordview. Job-Industry: Computing-IT. Address: Johannesburg. Need A Job? Home based typist/data admin entry clerk positions available. We have several openings available in this area; earn R2000 to R4000 per week. We are looking for honest, self-motivated people with a desire to work in the comfort of their own home. The preferred applicant should be at least 18 years old. All you need is a valid email address and internet access. No experience needed; all you need are the following skills: basic typing skills, the ability to spell, and the ability to follow directions. Contact me.
TITLE: How do I convince my students that the choice of variable of integration is irrelevant? QUESTION [62 upvotes]: I will be a TA this semester for the second course on Calculus, which covers the definite integral. I have wondered about this since I took this course myself: how do I convince my students that for a definite integral $$\int_a^b f(x)\ dx=\int_a^b f(z)\ dz=\int_a^b f(☺)\ d☺$$ i.e., that the choice of variable of integration is irrelevant? I still do not have an answer to this question, so I would really hope someone would guide me along, or share your thoughts (through comments, of course). NEW EDIT: I've found a relevant example from before that will probably confuse most new students, and also give new insights into this question. Example: If $f$ is continuous, prove that $$\int_0^{\pi/2}f(\cos x)\ dx = \int_0^{\pi/2}f(\sin x)\ dx$$ And so I start proving... Since $\cos x=\sin(\frac{\pi}{2} -x)$ and $f$ is continuous, the integral is well-defined and $$\int_0^{\pi/2}f(\cos x)\ dx=\int_0^{\pi/2}f(\sin(\frac{\pi}{2}-x))\ dx $$ Applying the substitution $u=\frac{\pi}{2} -x$, we obtain $dx =-du$ and hence $$\int_0^{\pi/2}f(\sin(\frac{\pi}{2}-x))\ dx=-\int_{\pi/2}^{0}f(\sin u)\ du=\int_0^{\pi/2}f(\sin u)\ du\color{red}{=\int_0^{\pi/2}f(\sin x)\ dx}$$ where the red part is the replacement of the dummy variable. So now, students, or even some of my peers, will ask: $u$ is dependent on $x$, so what now? Why is the replacement still valid? For me, I guess I will still answer according to the best answer here (by Harald), but I would love to hear more comments about this. REPLY [1 votes]: Maybe it helps to investigate the wording: "integration variable" is just a fancy name for what we used to call a placeholder in elementary school, when we solved 3 + _ = 5 and used an underscore or an empty box as the placeholder. Isn't it obvious then that the symbol (or variable name) cannot have an effect on the solution?
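To make the placeholder point concrete, here is a small numerical check (my own illustration, not from the thread; the helper name `definite_integral` is assumed): the variable of integration is just the name of a lambda's argument, so renaming it cannot change a midpoint-rule approximation.

```python
import math

def definite_integral(f, a, b, n=100_000):
    # Midpoint-rule approximation of the integral of f over [a, b].
    # No "variable of integration" appears anywhere: only the function f
    # and the endpoints a, b matter.
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda u: u * u + 1.0  # any continuous f

# "Different variables of integration" are just different argument names:
I_x = definite_integral(lambda x: f(math.cos(x)), 0.0, math.pi / 2)
I_u = definite_integral(lambda u: f(math.sin(u)), 0.0, math.pi / 2)
print(abs(I_x - I_u) < 1e-9)  # True; both approximate 3*pi/4
```

With `f(u) = u^2 + 1`, both integrals equal $\pi/4 + \pi/2 = 3\pi/4$, and the two approximations agree to floating-point precision.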
Just got a new batch of this in the mail today in the Decaf blend. It comes in a nice small 6-cup bag that I brewed in my coffee maker and poured over ice. I thought this would add a coffee flavor but surprisingly it did not. Love the convenience and yumminess of this tea. Preparation: Iced
Here you can download the Faberge vector logo absolutely free. To view and edit the logo, use Adobe Photoshop, Adobe Illustrator or CorelDRAW. Faberge Category: f Type: .ai (Adobe Illustrator format) Filesize: 12.93 KB Date Added: 28.06.2011, 18:20 Views: 691 Downloads: 9 Below you can find logos similar to the Faberge logo.
July 7, 2017 at 5:51 pm At least 23 Egyptian soldiers were killed and at least 26 more were injured by two deadly car bombs that ripped through army checkpoints in northern Sinai today, security sources said. The two cars exploded on a road outside the border city of Rafah. No group has claimed responsibility for the attacks. Social media users said the commander of the 103rd Thunderbolt Battalion, Colonel Ahmed Mansa, was one of those killed in the attacks. For the past three years, Egyptian forces have been fighting the Daesh offshoot Wilayat Sinai in the Sinai Peninsula. Large areas of the peninsula have been depopulated to secure them, forcing hundreds out of their family homes.
Tiny girls nude pics the nude young girls with little tiny tits small virgin Petite naked girl; Babe Outdoor Teen Hot Shaved ... with tiny skinny girls! Fresh tiny tits and skinny nude young body Naked Erotic Petite Girl With Tiny Parts from PetiteGirlNude.com Small girls nude - Little nude blonde with small tits and little pussy ... Naked Flat Boobs Petite Outside from FlatChestedCoeds.com from little april desert nude located at littleapril com return round tits and tiny pussy posing naked on big bed. This beautiful girl ... The nude girls with small tits and small pussyes. Only the HQ pictures ... Pretty Naked Petite Girl With Little Titties Outdoors from ... Hot little filipina girl posing naked Tiny Girl, Tiny Pussy, Big Dick Nude tiny tits girl exposes her hairy pussy Free Erotic Nude Teen ... Nude Petite Teen Latina With Tiny Shaved Pussy from LaTeenie.com
Lindsay Lohan Appears Older Than Her Years at NYC Event. May 10, 2012 08:50. From Splash News. Lindsay Lohan looked older than her years as she arrived at the A&E network upfronts in New York. Does this look like a 25-year-old?
ForexTradingSeminar.com Stop wasting time on Forex Trading Methods that don’t work when you can find the method you need to succeed at ForexTradingSeminar.com. Millions of people have made money using Forex trading and if you are interested in learning exactly how to make Forex Trading work for you, then don’t look any further than ForexTradingSeminar.com. Other Forex trading seminars will take up your time and money only to leave you more confused and unable to make the money you wanted. And not only that, but those other Forex trading programs are incredibly high priced! Well, at ForexTradingSeminar.com, you’ll learn why those other programs didn’t work because this website offers one of the only programs that will show you how to become a successful trader through insights and breakthroughs you’ve never seen before. The video trading course you can order at ForexTradingSeminar.com will teach you how to identify the beginning of big shifts in the market, how to avoid waves in the market, how to capture massive profit while your colleagues are frustrated with the current market and much more. Visit the site today to discover the special promotion going on right now. You can get the entire collection of videos for only $479 and if you order soon, you’ll get free bonuses!
Opening hours: Mon - Tue - Wed - Thu - Fri - Sat (times not listed). Products and Services. Items: Handrails, Brackets, Racks, Siding, Machinery, Truck Racks, Automotive Tools, Bars, Bearings, Pipes, Beams, Copper Pipe, Bull Bars, Tanks, Aluminium Pipe, Cans, Batteries, Computers, Roofing, Radiators, Tools, Auto Bodies, Castings, Wires. Markets Served: Residential, Commercial, Agricultural, Industrial. Facilities: Scrap Yards. Features: Ferrous. Services: Recycling. Materials: Copper, Lead, Aluminium, Scrap Metal, Brass, Cast Iron, Iron, Nickel, General Sheet Metal. About General Metal Recyclers Ltd. Scrap We Buy: We scrap a wide range of metal items, which include: - Air Conditioners - Aluminium - Brass - Copper - Dishwashers - Dryers - Steel - Stainless Steel - Power Cables - Radiators - Refrigerators - Roofing Iron - Vehicle Bodies - Transformers - Washing Machines - Water Heaters (Hot Water Cylinders) - Zinc - + Much More. Scrap We Sell/Export: - Copper - Brass - Aluminium - Stainless Steel - Lead - Power Cable - Steel HMS1 - Insize bales - Shredder - Zinc. So ring us now or come in and see us with all your scrap. We can pick up your scrap metal and make life a whole lot easier for you. If it's scrap and it's metal, we want it. What you Need when Selling Scrap - You will need to bring in a photo ID - You may be asked to verify your identification - You will be required to give your full name and date of birth - You will be required to sign for the transaction of goods - You may be asked to provide proof of ownership of the goods that you are selling - You will need to give your current address, and you may be asked for proof of this - Every transaction at GMR is recorded with our security camera system - All vehicles must be de-registered prior to delivery to any GMR depot; you may be asked to provide proof of this by showing our buyers a completed and officially stamped MR15 form. This can be obtained by going to one of the following: - The Automobile Association (AA) - Post Shops and Books & More outlets (excluding number plates) - Vehicle Inspection New Zealand (VINZ) - Vehicle Testing New Zealand (VTNZ). For any additional questions, please don't hesitate to contact our experienced team today. What makes us different: General Metal Recyclers Ltd (GMR) has grown to become one of New Zealand's largest 100% New Zealand-owned scrap metal dealers, with its head office based at Belmont, Lower Hutt.
\newcommand{\ExpanderDecompose}{\mathsf{ExpanderDecompose}} \newcommand{\PartialPathSparsify}{\mathsf{PartialPathSparsify}} \newcommand{\Ecut}{E_{\mathrm{cut}}} \subsection{Path Sparsification on Dense Near-Regular Expanders} \label{sec:expander-compute} Here we provide an efficient procedure to compute path sparsifiers of a constant fraction of the edges in a dense near-regular graph. This procedure leverages \Cref{thm:paths_in_expanders}, which shows that dense near-regular expanders have many short vertex disjoint paths. Coupled with a procedure for partitioning a graph into approximately regular subgraphs (\Cref{sec:deg_reg_decomp}), this yields the main theorem of this section, an efficient path sparsification procedure. Consequently, in the remainder of this subsection our goal is to prove the following theorem. \begin{theorem}[Partial Path Sparsification of Nearly Regular Graphs] \label{thm:partial_path_sparsifiers} Given any $n$-node $m$-edge undirected unweighted graph $G = (V, E)$ and $k \geq 1$, the procedure $\PartialPathSparsify(G, k)$ (\Cref{alg:pathsparsifier}) in time $O(m + n k \cdot \dratio(G) \log^8(n))$, outputs $F,\Ecut \subseteq E$ such that w.h.p. in $n$ \begin{itemize} \item \emph{(Size Bound)}: $|F| = O(n k \cdot \dratio(G) \log(n))$, $|\Ecut| \leq |E|/2$ and \item \emph{(Path Sparsification)}: $G(F)$ is a $(\Omega(k / (\dratio(G) \log^2(n))) , O(\dratio(G) \log^4(n)) )$-path sparsifier of $(V,E\setminus\Ecut)$. \end{itemize} \end{theorem} Our partial path sparsification procedure, $\PartialPathSparsify(G, k)$ (\Cref{alg:pathsparsifier}), works simply by randomly sampling the edges, partitioning the resulting graph into expanders, and outputting the edges of those expanders as a path sparsifier of the edges of the corresponding node-induced subgraphs in the original graph. In the following lemma, \Cref{lem:unif}, we give basic properties of this random sampling procedure, which is reminiscent of the sublinear sparsification result of \cite{Lee14arxiv}. 
After that we give the expander partitioning procedure we use, \Cref{thm:expanderdecomp} from \cite{SW19}, and our algorithm, \Cref{alg:pathsparsifier}. We then prove \Cref{thm:partial_path_sparsifiers} by showing that the expanders found have sufficiently many vertex disjoint paths by \Cref{thm:paths_in_expanders} and the right size properties by \Cref{thm:expanderdecomp}. \begin{lemma} \label{lem:unif} Let $G = (V,E)$ be an unweighted $n$-node graph, let $d \leq \dmin(G)$, and let $H$ be a graph constructed by sampling every edge from $G$ independently with probability $p = \min\{1 , \Theta(d^{-1} \log n) \}$. With high probability in $n$, \[ \frac{1}{2} \mlap_H - \frac{pd}{n} \mlap_{K_n} \preceq p \mlap_G \preceq \frac{3}{2} \mlap_H + \frac{pd}{n} \mlap_{K_n} \] and \begin{equation} \label{eq:uniform_sample_degree_bound} \deg_H(a) \in \left[\frac{p}{2} \cdot \deg_G(a) ~ , ~ 2p \cdot \deg_G(a) \right] \text{ for all } a \in V ~. \end{equation} \end{lemma} \begin{proof} Define the auxiliary graph $G'$ with $\mlap_{G'} = \mlap_{G} + \frac{2d}{n} \mlap_{K_n}$. We observe that for any nodes $u,v$, \[ \effres_{G'}(u,v) \leq \frac{n}{2d} \effres_{K_n}(u,v) = \frac{1}{d}. \] Thus, we interpret our sampling procedure as constructing $H$ by sampling edges from $G'$ with the leverage score overestimates \begin{equation*} \widetilde{\effres}(e)= \begin{cases} 1/d & \text{if}\ e \in G \\ 1 & \text{if}\ e \in \frac{2d}{n} K_n. \end{cases} \end{equation*} We observe that these values are valid leverage score overestimates in $G'$: thus sampling and reweighting the edges in $G$ with probability $p$ and preserving the edges in $\frac{2d}{n} K_n$ produces a graph $H'$ with $\mlap_{H'} = \frac{1}{p} \mlap_{H} + \frac{2d}{n} \mlap_{K_n}$ such that $\frac{1}{2} \mlap_{H'} \preceq \mlap_{G'} \preceq \frac{3}{2} \mlap_{H'}$ (see, e.g., Lemma 4 of \cite{CohenLMMPS14arxiv}). 
Rearranging yields the desired bound \[ \frac{1}{2} \mlap_H - \frac{pd}{n} \mlap_{K_n} \preceq p \mlap_G \preceq \frac{3}{2} \mlap_H + \frac{pd}{n} \mlap_{K_n} ~. \] Finally, \eqref{eq:uniform_sample_degree_bound} follows by an application of the Chernoff bound to the number of sampled edges incident to each node in $G$. Since $d \leq \dmin(G)$, this number concentrates around its expected value with high probability in $n$ for an appropriate choice of the constant in the definition of $p$. \end{proof} \begin{theorem}[Expander Decomposition \cite{SW19}] \label{thm:expanderdecomp} There is a procedure $\ExpanderDecompose(G)$ which, given any $m$-edge graph $G = (V,E)$, in time $O(m \log^7 m )$ with high probability outputs a partition of $V$ into $V_1, V_2, \cdots, V_k$ such that \begin{itemize} \item $\condedge(G\{V_i\}) \geq \Omega(1/\log^3(m))$ for all $i \in [k]$, \item $\sum_i |\partial_G(V_i)| \leq m/8$, \end{itemize} where $G\{V_i\}$ denotes the induced subgraph of $G$ with self-loops added to vertices so that their degrees match their original degrees in $G$. \end{theorem} \begin{proof} This is a specialization of Theorem 1.2 from \cite{SW19} where the $\phi$ in that theorem is chosen to be $\Theta(1/\log^3(m))$. \end{proof} \begin{algorithm2e}[h!] \LinesNumbered \caption{$(F, \Ecut) = \PartialPathSparsify(G, k \geq 1)$} \label{alg:pathsparsifier} \KwIn{$G = (V,E)$ simple input graph with degrees between $\dmin$ and $\dmax$} \KwOut{$(F, \Ecut)$ such that $F$ is a path sparsifier for $(V, E \setminus \Ecut)$ and $|\Ecut| \leq |E|/2$} \vspace{0.05in} \tcp{Sample edges uniformly at random to apply \Cref{lem:unif}} $d \defeq \frac{\dmin(G)}{10k}$ and $p = \min\{1 , \Theta(d^{-1} \log n) \}$ (where the constant in $\Theta$ is as in \Cref{lem:unif})\; \lIf{$p = 1$}{\Return $(E,\emptyset)$} \label{line:quick_return} $E' = \emptyset$ \; \For{$e \in E$}{ Add $e$ to $E'$ with probability $p$. 
} $G' = (V, E')$\; \vspace{0.05in} \tcp{Find expanders and use their edges as a path sparsifier} $(V_1, V_2, \cdots, V_r) \gets \ExpanderDecompose(G')$\label{line:decompose} \tcp*{Computed via \Cref{thm:expanderdecomp}} For all $i \in [r]$ let $G_i = (V_i, E_i) \defeq G'[V_i]$\; Let $E_{\mathrm{cut}} \defeq \cup_{i\in[r]} \partial_G(V_i)$\; Let $F \defeq \cup_{i \in [r]} E_i$\; \Return $(F, E_{\mathrm{cut}})$\; \end{algorithm2e} \begin{proof}[Proof of \Cref{thm:partial_path_sparsifiers}] First, suppose that $p = 1$. In this case, by \Cref{line:quick_return} the algorithm outputs $F = E$ and $\Ecut = \emptyset$ in linear time. Consequently, $F$ is a path sparsifier of the desired quality with $m$ edges and $|\Ecut| \leq |E|/2$. Since in this case $d^{-1} \log n = \Omega(1)$, we have $\dmin(G) = O(k \log n)$, \[ 2 p m \leq O(n \dmax(G)) = O(n \dratio(G) \dmin(G)) = O(n k \cdot \dratio(G) \log(n)) ~, \] and the theorem follows. Consequently, in the remainder of the proof we assume $p < 1$. Next, note that by design, $G'$ was constructed so that \Cref{lem:unif} applies. Consequently, with high probability in $n$ the following hold: \begin{itemize} \item $G'$ is an edge-subgraph of $G$ satisfying $p \mlap_G \preceq \frac{3}{2} \mlap_{G'} + \frac{pd}{n} \mlap_{K_n}$. \item $\deg_{G'}(a) \in \left[\frac{p}{2} \cdot \deg_G(a) ~ , ~ 2p \cdot \deg_G(a) \right]$ for all $a \in V$. \item $G'$ contains at most $2pm$ edges. \end{itemize} Leveraging these bounds we prove the path sparsification property and size bound for $F$. By \Cref{thm:expanderdecomp} we have that the $V_i$ output by $\ExpanderDecompose$ satisfy $\Phi_{G'\{V_i\}} = \Omega(1/\log^3(m))$ for all $i \in [r]$. Further, by the above properties of $G'$ we have that $\dmin(G'\{V_i\}) \geq \frac{p}{2} \cdot \dmin(G)$ and $\dmax(G'\{V_i\}) \leq 2p \cdot \dmax(G)$. 
Consequently, by \Cref{thm:paths_in_expanders} and the fact that $p < 1$, we have that for each $i \in [r]$ and each pair of vertices $s, t \in V_i$, the graph $G_i$ has at least \[ \Omega \left( \frac{ \dmin(G) p }{ \dratio(G) \log^3(n) } \right) = \Omega \left( \frac{ k }{ \dratio(G) \log^2(n) } \right) \] vertex disjoint paths from $s$ to $t$ of length at most $O(\dratio(G) \log^4(n))$. Consequently, $F$ is a path sparsifier as desired. Further, $F$ has at most \[ 2pm = O(n \dmax(G) \log(n) k / \dmin(G)) = O(n k \log(n) \dratio(G)) \] edges by the properties above. Next, we bound the size of $\Ecut$. Note that \begin{equation} \label{eq:partial_sparse_size_1} |\Ecut| = \frac{1}{2} \sum_{i \in [r]} |\partial_G(V_i)| = \frac{1}{2} \sum_{i \in [r]} v_i^\top \mlap_{G} v_i \end{equation} where $v_i$ is the indicator vector for $V_i$, i.e. $v_i \in \R^V$ with $[v_i]_a = 1$ if $a \in V_i$ and $[v_i]_a = 0$ if $a \notin V_i$. Since $p \mlap_G \preceq \frac{3}{2} \mlap_{G'} + \frac{p d}{n} \mlap_{K_n}$, we have for all $i \in [r]$ that \begin{equation} \label{eq:partial_sparse_size_2} p |\partial_G(V_i)| = p v_i^\top \mlap_G v_i \leq \frac{3}{2} v_i^\top \mlap_{G'} v_i + \frac{pd}{n} v_i^\top \mlap_{K_n} v_i = \frac{3}{2} |\partial_{G'}(V_i)| + \frac{pd}{n} |V_i| |V\setminus V_i| ~. \end{equation} Combining \eqref{eq:partial_sparse_size_1} and \eqref{eq:partial_sparse_size_2} yields that \begin{align*} |\Ecut| &\leq \frac{1}{2} \sum_{i \in [r]} \left( \frac{3}{2p} |\partial_{G'}(V_i)| + d |V_i| \right) \leq \frac{3}{32p} |E'| + dn \leq \frac{3m}{16} + \frac{\dmin(G) n}{10 k} \leq \frac{|E|}{2} \end{align*} where in the third to last step we used that $\sum_{i \in [r]} |\partial_{G'}(V_i)| \leq |E'|/8$ by \Cref{thm:expanderdecomp}, in the second to last step we used that $|E'| \leq 2pm$ and that $d = \dmin(G) / (10k)$, and in the last step we used that $\dmin(G) n \leq 2|E|$ and $k \geq 1$. Finally, we bound the running time of the algorithm. Every line except for \Cref{line:decompose} in our algorithm is a standard graph operation that can be implemented in linear total time. 
The call to $\ExpanderDecompose$ on \Cref{line:decompose} is on a graph with $O(mp)$ edges. Consequently, by \Cref{thm:expanderdecomp} the call takes time \[ O(mp \log^7(m)) = O(m (\log(n) / d) \log^7(n)) = O(n k \cdot \dratio(G) \log^8(n)) \] where the first equality used the simplicity of $G$ and that $p = \Theta(d^{-1} \log n) \leq 1$ and the second equality used that $m \leq n \dmax(G)$ and the definition of $d$. \end{proof}
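The uniform sampling step of $\PartialPathSparsify$ can be illustrated with a short Python sketch (an illustration under assumed names, not the paper's implementation): each edge is kept independently with probability $p = \min\{1, c\log(n)/d\}$, where the constant $c$ stands in for the $\Theta$-constant of \Cref{lem:unif}, and the sampled degrees concentrate as in \eqref{eq:uniform_sample_degree_bound}.

```python
import math
import random

def uniform_edge_sample(edges, n, d, c=4.0):
    # Keep each edge independently with probability p = min(1, c*log(n)/d);
    # c plays the role of the Theta-constant in the lemma.
    p = min(1.0, c * math.log(n) / d)
    return p, [e for e in edges if random.random() < p]

# Toy usage on the complete graph K_n (chosen only for illustration).
random.seed(0)
n = 200
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
p, sampled = uniform_edge_sample(edges, n, d=n - 1)

deg = [0] * n
for u, v in sampled:
    deg[u] += 1
    deg[v] += 1

# W.h.p. every sampled degree lies in [p/2 * deg_G(a), 2p * deg_G(a)].
print(p / 2 * (n - 1), min(deg), max(deg), 2 * p * (n - 1))
```

On $K_n$ every node has degree $n-1$, so the printed min/max sampled degrees typically fall well inside the stated interval.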
Hi All, I'm new to this forum. I am a UK national who has been living in Washington state for a couple of years now on a legal visa. I'm looking at taking up target shooting and possibly hunting. I was wondering if anybody has had any experience in applying for an Alien Firearms License or knows anything about it. One of the requirements is "documentation showing you are a member of a sport-shooting club". I've joined the West Coast Armory Range and was wondering if that would satisfy the membership requirement for the license?
TITLE: If a set of vectors fails to have an additive identity, can it still have an additive inverse? QUESTION [3 upvotes]: Let $V = \mathbb{R}^2$. Let addition in $V$ be defined as: $(u, v) + (x, y) = (u + x, 0)$ Let scalar multiplication be defined as: $a(u, v) = (au, av)$ Clearly, there is no additive identity for $V$. Can $V$ still have an additive inverse, where we define the zero vector arbitrarily? For example, could I define the zero vector in $V$ to be $(1, 0)$? Then the additive inverse of $(u, v)$ would be $(-u + 1, a)$, where $a$ can be any real number: $(u, v) + (-u + 1, a) = (1, 0)$ REPLY [2 votes]: The problem here is that a vector space has an additive structure which is defined to be an abelian group, and this requires an identity, usually written as zero. If we had an abelian structure that allowed for subtraction but had no identity, we could see whether we can build a vector-space-like structure over it. Well, there is such a structure, and it's known as an inverse semigroup, which like all semigroups is associative but need not have an identity. It is inverse because it does have unique inverses for every element in the sense: for all $x$, there is a unique $y$ such that $x = xyx$ and $y=yxy$. We can see why these axioms were chosen by supposing our inverse semigroup to actually be a group; then we see that $y$ must be $x^{-1}$. Since being an inverse semigroup does not preclude it from having an identity, it may have one; here we insist that it does not. Now, we want this to be abelian, so the axioms become: for all $x$, there is a unique $y$ such that $x = x+y+x$ and $y=y+x+y$. However, to build an analogue of a vector space you would need to give an action of the reals (or of some field) on this semigroup. Again we hit a problem, this time about what to do when we multiply an element of the semigroup by the real zero. We have no zero in the semigroup to set it to. One option is to drop the real zero. This means we need to explore what a field looks like when it has no zero. 
We can copy what we did above and stipulate that its additive structure is an abelian inverse semigroup with no zero, and we leave the multiplication as a monoid and ask that it distribute over addition. There is no standard name for this structure, but for the purposes of this post we could call it a semiring. Now, in a field we have inverses for all non-zero elements. A semiring has no zero, so we should ask that all elements be invertible. Again, there is no standard name for this structure, so for the purposes of this post we call it a semi-field. Finally, we define a semi-vector space to be an action of a semi-field on an additive inverse semigroup. This has no zero by construction. Another possibility is the unpointed true convex cones. A cone is a subset of an ambient vector space which is closed under strictly positive scalings. This may include zero, in which case we call it pointed. If not, we call it unpointed. It is a true cone when for any $v$ in the cone, $-v$ doesn't belong to the cone. It is convex when it is closed under convex combinations. This is equivalent, in the presence of scalings, to being closed under strictly positive linear combinations. These are called conical combinations. Geometrically speaking, this is a cone in the vector space with its tip at the origin but not including it.
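To make the question's arithmetic concrete, here is a tiny Python check (my own illustration, not from the post) of both claims: the modified addition has no additive identity, yet once $(1,0)$ is declared to be "zero", $(-u+1, a)$ behaves as an "inverse" of $(u, v)$ for any $a$.

```python
def add(p, q):
    # The question's modified addition on R^2: (u, v) + (x, y) = (u + x, 0).
    return (p[0] + q[0], 0.0)

# No additive identity exists: every sum has second coordinate 0, so
# v + e can never return v when v's second coordinate is nonzero.
v = (2.0, 3.0)
print(add(v, (0.0, 0.0)))          # -> (2.0, 0.0), not (2.0, 3.0)

# Declaring (1, 0) to be "zero" makes (-u + 1, a) an "inverse" of (u, v)
# for every choice of a:
u = 2.0
for a in (-5.0, 0.0, 7.0):
    assert add((u, 3.0), (-u + 1.0, a)) == (1.0, 0.0)
print("every (-u + 1, a) works")
```

The non-uniqueness of the "inverse" (any $a$ works) is exactly the symptom of the missing group structure the answer describes.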
Full Grain Leather Skirt With Belt GABBY Product Description Achieve covetable style with ease in this leather skirt. Crafted in butter-soft black leather, this classic mini skirt is given a distinctive twist with a zip at the front and detail on the sides. Just the right combination of rock n' roll and feminine chic; take it to your next event by teaming this skirt with a sheer statement blouse. MATERIAL & CARE INSTRUCTIONS - Material: 100% genuine leather - Care instructions: Specialist Clean Only - Details: zipper, belt. A full-length zipper is incorporated on the front to completely open the skirt.
\chapter{Asynchronous Optimization over Weakly Coupled Renewal Systems} In this chapter, we present our asynchronous algorithm along with the new analysis. Along the way, we try to provide some intuition and the high-level ideas of the analysis. Consider $N$ renewal systems that operate over a slotted timeline ($t \in \{0, 1, 2, \ldots\}$). The timeline for each system $n \in \{1, \ldots, N\}$ is segmented into back-to-back intervals, which are renewal frames. The duration of each renewal frame is a random positive integer with a distribution that depends on a control action chosen by the system at the start of the frame. The decision at each renewal frame also determines the penalty and a vector of performance metrics incurred during this frame. The systems are coupled by time average constraints placed on these metrics over all systems. The goal is to design a decision strategy for each system so that the overall time average penalty is minimized subject to the time average constraints. Recall that we use $k=0, 1,2,\cdots$ to index the renewals. Let $t^n_k$ be the time slot corresponding to the $k$-th renewal of the $n$-th system, with the convention that $t^n_0=0$. Let $\mathcal{T}^{n}_{k}$ be the set of all slots from $t^n_k$ to $t^n_{k+1}-1$. At time $t^n_k$, the $n$-th system chooses a possibly random decision $\alpha^n_k$ in a set $\mathcal{A}^n$. This action determines the distributions of the following random variables: \begin{itemize} \item The duration of the $k$-th renewal frame $T^n_k:=t^n_{k+1}-t^n_k$, which is a positive integer. \item A vector of performance metrics at each slot of that frame $\mathbf{z}^n[t]:=\left(z^n_1[t],~z^n_2[t],~\cdots,~z^n_L[t]\right)$,\\ $t\in\mathcal{T}^{n}_{k}$. \item A penalty incurred at each slot of the frame $y^n[t]$, $t\in\mathcal{T}^{n}_{k}$. 
\end{itemize} We assume each system has the \textit{renewal property} as defined in Definition \ref{def:renewal} that given $\alpha^n_k=\alpha^n\in\mathcal{A}^n$, the random variables $T^n_k$, $\mathbf{z}^n[t]$ and $y^n[t]$,~$t\in\mathcal{T}^n_k$ are independent of the information of all systems from the slots before $t^n_k$ with the following \textit{known} conditional expectations $\expect{\left.T^n_k\right|\alpha^n_k=\alpha^n}$, $\expect{\left.\sum_{t\in\mathcal{T}^n_k}y^n[t]\right|\alpha^n_k=\alpha^n}$ and $\expect{\left.\sum_{t\in\mathcal{T}^n_k}\mathbf{z}^n[t]\right|\alpha^n_k=\alpha^n}$. \section{Technical preliminaries}\label{sec:chap-2-pre} Throughout the chapter, we make the following basic assumptions. \begin{assumption}\label{feasible-assumption} The problem \eqref{prob-1}-\eqref{prob-2} is feasible, i.e. there are action sequences $\{\alpha_k^n\}_{k=0}^\infty$ for all $n\in\{1,2,\cdots,N\}$ so that the corresponding process $\{\mathbf{z}^n[t]\}_{t=0}^\infty$ satisfies the constraints \eqref{prob-2}. \end{assumption} Following this assumption, we define $f_*$ as the infimum objective value for \eqref{prob-1}-\eqref{prob-2} over all decision sequences that satisfy the constraints. \begin{assumption}[Boundedness]\label{bounded-assumption} For any $k\in\mathbb{N}$ and any $n\in\{1,2,\cdots,N\}$, there exist absolute constants $y_{\max}$, $z_{\max}$ and $d_{\max}$ such that \begin{align*} |y^n[t]|\leq y_{\max},~~|z^n_l[t]|\leq z_{\max},~~|d_l[t]|\leq d_{\max},~~\forall t\in\mathcal{T}^n_k, ~\forall l\in\{1,2,\cdots,L\}. \end{align*} Furthermore, there exists an absolute constant $B\geq1$ such that for every fixed $\alpha^n\in\mathcal{A}^n$ and every $s\in\mathbb{N}$ for which $Pr(T^n_k\geq s|\alpha^n_k=\alpha^n)>0$, \begin{equation}\label{residual-life-bound} \expect{\left.(T^n_k-s)^2\right|~\alpha^n_k=\alpha^n,T^n_k\geq s}\leq B. \end{equation} \end{assumption} \begin{remark} The quantity $T^n_k-s$ is usually referred to as the residual lifetime. 
In the special case where $s=0$, \eqref{residual-life-bound} gives the uniform second moment bound of the renewal frames as \[\expect{\left.(T^n_k)^2\right|~\alpha^n_k=\alpha^n}\leq B.\] Note that \eqref{residual-life-bound} is satisfied for a large class of problems. In particular, it can be shown to hold in the following three cases: \begin{enumerate} \item If the inter-renewal $T^n_k$ is deterministically bounded. \item If the inter-renewal $T^n_k$ is geometrically distributed. \item If each system is a finite state ergodic MDP with a finite action set. \end{enumerate} \end{remark} \begin{definition}\label{PV-def} For any $\alpha^n\in\mathcal{A}^n$, let \[\widehat{y}^n(\alpha^n):=\expect{\left.\sum_{t\in\mathcal{T}^n_k}y^n[t]\right|\alpha^n_k=\alpha^n},~ ~\widehat{z}^n_l(\alpha^n):=\expect{\left.\sum_{t\in\mathcal{T}^n_k}z^n_l[t]\right|\alpha^n_k=\alpha^n},\] and $\widehat{T}^n(\alpha^n):=\expect{T^n_k|\alpha^n_k=\alpha^n}$. Define \begin{align*} &\widehat{f}^n(\alpha^n):=\widehat{y}^n(\alpha^n)/\widehat{T}^n(\alpha^n),\\ &\widehat{g}^n_l(\alpha^n):=\widehat{z}^n_l(\alpha^n)/\widehat{T}^n(\alpha^n),~\forall l\in\{1,2,\cdots,L\}, \end{align*} and let $\left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right)$ be a performance vector under the action $\alpha^n$. \end{definition} Note that by Assumption \ref{bounded-assumption}, $\widehat{y}^n(\alpha^n)$ and $\widehat{\mathbf{z}}^n(\alpha^n)$ in Definition \ref{PV-def} are both bounded, and $T^n_k\geq1,~\forall k\in\mathbb{N}$, thus, the set $\left\{\left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right),~\alpha^n\in\mathcal{A}^n\right\}$ is also bounded. The following mild assumption states that this set is also closed. \begin{assumption}\label{compact-assumption} The set $\left\{\left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right),~\alpha^n\in\mathcal{A}^n\right\}$ is compact. 
\end{assumption} This assumption guarantees that there always exists at least one solution to each subproblem in our algorithm. Finally, we define the performance region of each individual system as follows. \begin{definition}\label{PR-def} Let $\mathcal{S}^n$ be the convex hull of $\left\{\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf{z}}^n(\alpha^n),~\widehat{T}^n(\alpha^n)\right):~\alpha^n\in\mathcal{A}^n\right\}\subseteq\mathbb{R}^{L+2}$. Define \[\mathcal{P}^n:=\left\{\left(y/T,~\mathbf{z}/T\right):~(y,\mathbf{z},T)\in\mathcal{S}^n\right\}\subseteq\mathbb{R}^{L+1}\] as the performance region of system $n$. \end{definition} \section{Algorithm}\label{section:algorithm} \subsection{Proposed algorithm} In this section, we propose an algorithm where each system makes its own decision after observing a global vector of multipliers, which is updated using global information from all systems. We start by defining a vector of virtual queues $\mathbf{Q}[t]:=\left(Q_1[t],~Q_2[t],~\cdots,~Q_L[t]\right)$, initialized to $0$ at $t=0$ and updated as follows: \begin{align} Q_l[t+1]=\max\left\{Q_l[t]+\sum_{n=1}^Nz_l^n[t]-d_l[t],~0\right\},~ l\in\{1,2,\cdots,L\}.\label{queue-update} \end{align} These virtual queues serve as global multipliers that control the growth of the corresponding resource consumptions. The proposed algorithm is presented in Algorithm \ref{proposed-algorithm}. 
\begin{algorithm} \begin{Alg}\label{proposed-algorithm} Fix a trade-off parameter $V>0$: \begin{itemize} \item At the beginning of the $k$-th frame of system $n$, the system observes the vector of virtual queues $\mathbf{Q}[t^n_k]$ and makes a decision $\alpha^n_k\in\mathcal{A}^n$ so as to solve the following subproblem: \begin{align}\label{DPP-ratio} D^n_k:=\min_{\alpha^n\in\mathcal{A}^n} \frac{\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(Vy^n[t]+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]}\right)\right|\alpha^n_k=\alpha^n,\mathbf{Q}[t^n_k]}}{\expect{\left.T^n_k\right|\alpha^n_k=\alpha^n,\mathbf{Q}[t^n_k]}}. \end{align} \item Update the virtual queues after each slot: \begin{align} Q_l[t+1]=\max\left\{Q_l[t]+\sum_{n=1}^Nz_l^n[t]-d_l[t],~0\right\},~ l\in\{1,2,\cdots,L\}. \label{eq:virtual-queue-2} \end{align} \end{itemize} \end{Alg} \end{algorithm} Note that using the notation specified in Definition \ref{PV-def}, we can rewrite \eqref{DPP-ratio} more concisely as follows: \begin{align}\label{DPP-ratio-simple} \min_{\alpha^n\in\mathcal{A}^n} \left\{V\widehat{f}^n(\alpha^n)+\dotp{\mathbf{Q}[t^n_k]}{\widehat{\mathbf{g}}^n(\alpha^n)}\right\}, \end{align} which is a deterministic optimization problem. Then, by the compactness assumption (Assumption \ref{compact-assumption}), there always exists a solution to this subproblem. \begin{remark} We would like to compare this algorithm to the DPP ratio algorithm (Algorithm \ref{dpp-ratio-algorithm}). For each renewal system, both algorithms update the decision variable frame-wise based on the virtual queue values at the beginning of each frame. The major difference is that the proposed algorithm updates the virtual queues slot-wise, while Algorithm \ref{dpp-ratio-algorithm} updates them frame-wise. Such a seemingly small change, somewhat surprisingly, requires significant generalizations of the analysis of Algorithm \ref{dpp-ratio-algorithm}. 
\end{remark} This algorithm requires knowledge of the conditional expectations $\left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right),~\alpha^n\in\mathcal{A}^n$, but each individual system $n$ only needs to know its own performance vectors $\left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right),~\alpha^n\in\mathcal{A}^n$; the algorithm therefore decouples the systems. Furthermore, the virtual queue update uses the observed $d_l[t]$ and does not require knowledge of the distribution or mean of $d_l[t]$. In addition, we introduce $\mathbf{Q}[t]$ as ``virtual queues'' for two reasons. First, they can be mapped to real queues in applications (such as the server scheduling problem mentioned in Section \ref{sec:server-app}), where $\mathbf{d}[t]$ stands for the arrival process and $\mathbf{z}[t]$ for the service process. Second, stabilizing these virtual queues implies that the constraints \eqref{prob-2} are satisfied, as the following lemma illustrates. \begin{lemma}\label{lemma:queue-bound} If $Q_l[0]=0$ and $\lim_{T\rightarrow\infty}\frac{1}{T}{\expect{Q_l[T]}}=0$, then, $\limsup_{T\rightarrow\infty}\frac{1}{T}\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{z^n_l[t]}\leq d_l$. \end{lemma} \begin{proof}[Proof of Lemma \ref{lemma:queue-bound}] Fix $l\in\{1,2,\cdots,L\}$. For any fixed $T$, $Q_l[T]=\sum_{t=0}^{T-1}(Q_l[t+1]-Q_l[t])$. For each summand, by the queue updating rule \eqref{queue-update}, \begin{align*} Q_l[t+1]-Q_l[t]=&\max\left\{Q_l[t]+\sum_{n=1}^Nz^n_l[t]-d_l[t],~0\right\}-Q_l[t]\\ \geq&Q_l[t]+\sum_{n=1}^Nz^n_l[t]-d_l[t]-Q_l[t]=\sum_{n=1}^Nz^n_l[t]-d_l[t]. 
\end{align*} Thus, by the assumption $Q_l[0]=0$, $$Q_l[T]\geq\sum_{t=0}^{T-1}\left(\sum_{n=1}^Nz^n_l[t]-d_l[t]\right).$$ Taking expectations on both sides and using $\expect{d_l[t]}=d_l,~\forall l$, gives $$\expect{Q_l[T]}\geq\sum_{t=0}^{T-1}\left(\sum_{n=1}^N\expect{z^n_l[t]}-d_l\right).$$ Dividing both sides by $T$ and passing to the limit gives \[\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\left(\sum_{n=1}^N\expect{z^n_l[t]}-d_l\right) \leq\lim_{T\rightarrow\infty}\frac{1}{T}{\expect{Q_l[T]}}=0,\] finishing the proof. \end{proof} \subsection{Computing subproblems} Since a key step in the algorithm is to solve the optimization problem \eqref{DPP-ratio-simple}, we make several comments on its computation. In general, one can solve the ratio optimization problem \eqref{DPP-ratio} (and therefore \eqref{DPP-ratio-simple}) via a bisection search algorithm; for more details, see Section 7 of \cite{Neely2010}. However, more often than not, bisection search is not the most efficient method. We discuss two special cases arising from applications in which the subproblem admits a simpler solution. First, when there are only a finite number of actions in the set $\mathcal{A}^n$, one can solve \eqref{DPP-ratio-simple} simply by enumeration. This is a typical scenario in energy-aware scheduling, where a finite action set consists of the different processing modes that can be chosen by servers. Second, when the set $\left\{\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf{z}}^n(\alpha^n),~\widehat{T}^n(\alpha^n)\right):~\alpha^n\in\mathcal{A}^n\right\}$ specified in Definition \ref{PR-def} is itself the convex hull of a finite sequence $\{(y_j,\mathbf{z}_j,T_j)\}_{j=1}^m$, then \eqref{DPP-ratio-simple} can be rewritten as a simple enumeration: \[ \min_{i\in\{1,2,\cdots,m\}}~~\left\{V\frac{y_i}{T_i}+\dotp{\mathbf{Q}[t^n_k]}{\frac{\mathbf{z}_i}{T_i}}\right\}. 
\] To see this, note that by the definition of a convex hull, for any $\alpha^n\in\mathcal{A}^n$, $\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf{z}}^n(\alpha^n),~\widehat{T}^n(\alpha^n) \right) = \sum_{j=1}^mp_j\cdot(y_j,\mathbf{z}_j,T_j)$ for some $\{p_j\}_{j=1}^m$ with $p_j\geq0$ and $\sum_{j=1}^mp_j=1$. Thus, \begin{align*} V\widehat{f}^n(\alpha^n)+\dotp{\mathbf{Q}[t^n_k]}{\widehat{\mathbf{g}}^n(\alpha^n)} =& V\frac{\sum_{j=1}^mp_jy_j}{\sum_{j=1}^mp_jT_j} +\dotp{\mathbf{Q}[t^n_k]}{\frac{\sum_{j=1}^mp_j\mathbf{z}_j}{\sum_{j=1}^mp_jT_j}}\\ =& \sum_{i=1}^m\frac{p_iT_i}{\sum_{j=1}^mp_jT_j}\left(V\frac{y_i}{T_i}+\dotp{\mathbf{Q}[t^n_k]}{\frac{\mathbf{z}_i}{T_i}}\right)\\ =:& \sum_{i=1}^m q_i\left(V\frac{y_i}{T_i}+\dotp{\mathbf{Q}[t^n_k]}{\frac{\mathbf{z}_i}{T_i}}\right), \end{align*} where we let $q_i=\frac{p_iT_i}{\sum_{j=1}^mp_jT_j}$. Note that $q_i\geq0$ and $\sum_{i=1}^mq_i = 1$ because $T_i\geq1>0$. Hence, solving \eqref{DPP-ratio-simple} is equivalent to choosing $\{q_i\}_{i=1}^m$ to minimize the above expression, which boils down to choosing a single $(y_i,\mathbf{z}_i,T_i)$ among $\{(y_j,\mathbf{z}_j,T_j)\}_{j=1}^m$ that achieves the minimum. Note that this convex hull case stands out not only because it yields a simple solution, but also because the ergodic coupled MDPs discussed in Section \ref{sec:MDP} have the region $\left\{\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf{z}}^n(\alpha^n),~\widehat{T}^n(\alpha^n)\right):~\alpha^n\in\mathcal{A}^n\right\}$ equal to the convex hull of a finite sequence of points $\{(y_j,\mathbf{z}_j,T_j)\}_{j=1}^m$, where each point $(y_j,\mathbf{z}_j,T_j)$ results from a pure stationary policy (\cite{Al99}). 
\footnote{A pure stationary policy is an algorithm where the decision taken at any time $t$ is a deterministic function of the state at time $t$, independent of all other past information.} Thus, solving \eqref{DPP-ratio-simple} for ergodic coupled MDPs reduces to choosing one pure policy among a finite number of pure policies. \section{Limiting Performance}\label{section:limiting} In this section, we provide the performance analysis of Algorithm \ref{proposed-algorithm}. Let $f_*$ be the optimal objective value for problem \eqref{prob-1}-\eqref{prob-2}. The goal is to show the following bounds, similar to those for Algorithm \ref{dpp-ratio-algorithm}: \begin{align*} &\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]}\leq f_*+\frac{C}{V},\\ &\expect{\|\mathbf{Q}[T]\|}\leq C'\sqrt{VT}, \end{align*} for some constants $C,C'>0$. Then, by Lemma \ref{lemma:queue-bound}, one readily obtains the constraint satisfaction result. For the rest of the chapter, the underlying probability space is denoted as the tuple $(\Omega,~\mathcal{F},~P)$. Let $\mathcal{F}[t]$ be the system history up until time slot $t$. Formally, $\{\mathcal{F}[t]\}_{t=0}^\infty$ is a filtration with $\mathcal{F}[0]=\{\emptyset,\Omega\}$, where each $\mathcal{F}[t],~t\geq1$, is the $\sigma$-algebra generated by all random variables from slot 0 to $t-1$. For the rest of the chapter, we always assume Assumptions \ref{feasible-assumption}-\ref{compact-assumption} hold without explicitly mentioning them. \subsection{Convexity, performance region and other properties} In this section, we present several lemmas on the fundamental properties of the optimization problem \eqref{prob-1}-\eqref{prob-2}. The following lemma establishes the convexity of $\mathcal{P}^n$ in Definition \ref{PR-def}. \begin{lemma}\label{convex-lemma} The performance region $\mathcal{P}^n$ specified in Definition \ref{PR-def} is convex for any $n\in\{1,2,\cdots,N\}$. 
Furthermore, it is the convex hull of the set $\left\{ \left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right) : \alpha^n\in\mathcal{A}^n \right\}$ and thus compact, where $\left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right)$ is specified in Definition \ref{PV-def}. \end{lemma} Next, we have the following fundamental performance lemma, which states that the optimality of \eqref{prob-1}-\eqref{prob-2} is achievable within $\mathcal{P}^n$ specified in Definition \ref{PR-def}. \begin{lemma}\label{stationary-lemma} For each $n\in\{1,2,\cdots,N\}$, there exists a pair $\left(\overline{f}^n_*,~\overline{\mathbf{g}}^n_*\right)\in\mathcal{P}^n$ such that the following hold: \begin{align*} &\sum_{n=1}^N\overline{f}^n_*=f_*\\ &\sum_{n=1}^N\overline{g}^n_{l,*}\leq d_l,~l\in\{1,2,\cdots,L\}, \end{align*} where $f_*$ is the optimal objective value for problem \eqref{prob-1}-\eqref{prob-2}, i.e. the optimality is achievable within $\otimes_{n=1}^N\mathcal{P}^n$, the Cartesian product of the $\mathcal{P}^n$. Furthermore, for any $\left(\overline{f}^n,~\overline{\mathbf{g}}^n\right)\in\mathcal{P}^n,~n\in\{1,2,\cdots, N\}$, satisfying $\sum_{n=1}^N\overline{g}^n_{l}\leq d_l,~l\in\{1,2,\cdots,L\}$, we have $\sum_{n=1}^N\overline{f}^n\geq f_*$, i.e. one cannot achieve better performance than \eqref{prob-1}-\eqref{prob-2} in $\otimes_{n=1}^N\mathcal{P}^n$. \end{lemma} The proof of this lemma is deferred to Section \ref{appendix-proof}. In particular, the proof uses the following lemma, which also plays an important role in several lemmas later. 
\begin{lemma}\label{bound-lemma-1} Suppose $\{y^n[t]\}_{t=0}^\infty$, $\{\mathbf{z}^n[t]\}_{t=0}^\infty$ and $\{T^n_k\}_{k=0}^\infty$ are processes resulting from any algorithm,\footnote{Note that this algorithm might make decisions using past information.} then, $\forall T\in\mathbb{N}$, \begin{align} &\frac1T\sum_{t=0}^{T-1}\expect{f^n[t]-y^n[t]}\leq\frac{B_1}{T},\label{bound-1}\\ &\frac1T\sum_{t=0}^{T-1}\expect{g^n_l[t]-z^n_l[t]}\leq\frac{B_2}{T},~l\in\{1,2,\cdots,L\}, \label{bound-2} \end{align} where $B_1=2y_{\max}\sqrt{B}$, $B_2=2z_{\max}\sqrt{B}$ and $f^n[t]$, $\mathbf{g}^n[t]$ are constant over each renewal frame for system $n$, defined by \begin{align*} f^n[t]=\widehat{f}^n(\alpha^n),~~\textrm{if}~t\in\mathcal{T}^n_k,\alpha^n_k=\alpha^n\\ \mathbf{g}^n[t]=\widehat{\mathbf{g}}^n(\alpha^n),~~\textrm{if}~t\in\mathcal{T}^n_k, \alpha^n_k=\alpha^n, \end{align*} and $\left(\widehat{f}^n(\alpha^n),\widehat{\mathbf{g}}^n(\alpha^n)\right)$ are defined in Definition \ref{PV-def}. \end{lemma} The proof of this lemma is deferred to Section \ref{appendix-proof}. \begin{remark} Note that directly computing $\overline{f}^n_*$ and $\overline{g}^n_{l,*}$ as indicated by Lemma \ref{stationary-lemma} would be difficult because of the fractional nature of $\mathcal{P}^n$, the coupling between different systems through the time-average constraints, and the fact that $d_l=\expect{d_l[t]}$ might be unknown. However, Lemma \ref{stationary-lemma} can be used to prove important performance theorems regarding our proposed algorithm, as the following analysis indicates. \end{remark} \subsection{Main result and near optimality analysis}\label{sec-4.4} The following theorem gives the performance bound of our proposed algorithm. \begin{theorem}\label{thm:main} The sequences $\{y^n[t]\}_{t=0}^\infty$ and $\{\mathbf{z}^n[t]\}_{t=0}^\infty$ produced by the proposed algorithm satisfy all the constraints in \eqref{prob-2} and achieve $\mathcal{O}(1/V)$ near optimality, i.e. 
\[\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]}\leq f_*+\frac{NC_1+C_3}{V},\] where $f_*$ is the optimal objective of \eqref{prob-1}-\eqref{prob-2}, $C_1:= 6Lz_{\max}(Nz_{\max}+d_{\max})B$ and $C_3:=(Nz_{\max}+d_{\max})^2L/2$. \end{theorem} \begin{proof}[Proof of Theorem \ref{thm:main}] Define the drift-plus-penalty (DPP) expression at time slot $t$ as \begin{equation}\label{compound-dpp} P[t]:=\expect{\sum_{n=1}^NVy^n[t]+\frac12\left(\|\mathbf{Q}[t+1]\|^2-\|\mathbf{Q}[t]\|^2\right)}. \end{equation} By the queue updating rule \eqref{queue-update}, we have \begin{align*} P[t]\leq&\expect{\sum_{n=1}^NVy^n[t]+\frac12\sum_{l=1}^L\left(\sum_{n=1}^Nz^n_l[t]-d_l[t]\right)^2 +\sum_{l=1}^LQ_l[t]\left(\sum_{n=1}^Nz^n_l[t]-d_l[t]\right)}\\ \leq&\frac12(Nz_{\max}+d_{\max})^2L+\expect{\sum_{n=1}^NVy^n[t] +\sum_{l=1}^LQ_l[t]\left(\sum_{n=1}^Nz^n_l[t]-d_l[t]\right)}\\ =&\frac12(Nz_{\max}+d_{\max})^2L+\expect{\sum_{n=1}^NVy^n[t] +\sum_{l=1}^LQ_l[t]\left(\sum_{n=1}^Nz^n_l[t]-d_l\right)} \end{align*} where the second inequality follows from the boundedness assumption (Assumption \ref{bounded-assumption}) that $\sum_{l=1}^L\left(\sum_{n=1}^Nz^n_l[t]-d_l[t]\right)^2\leq (Nz_{\max}+d_{\max})^2L$, and the equality follows from the fact that $d_l[t]$ is i.i.d. and independent of $Q_l[t]$, thus, \[\expect{Q_l[t]d_l[t]}=\expect{Q_l[t]\cdot\expect{d_l[t]|Q_l[t]}}=\expect{Q_l[t]d_l}.\] For simplicity, define $C_3=\frac12(Nz_{\max}+d_{\max})^2L$. 
Now, by the achievability of optimality in $\otimes_{n=1}^N\mathcal{P}^n$ (Lemma \ref{stationary-lemma}), we have $\sum_{n=1}^N\overline{g}^n_{l,*}\leq d_l$; substituting this inequality into the above bound for $P[t]$ gives \begin{align*} P[t]\leq& C_3+\expect{\sum_{n=1}^NVy^n[t] +\sum_{n=1}^N\sum_{l=1}^LQ_l[t]\left(z^n_l[t]-\overline{g}^n_{l,*}\right)}\\ =&C_3+\sum_{n=1}^N\expect{Vy^n[t]+\dotp{\mathbf{Q}[t]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n_*}}\\ =&C_3+\sum_{n=1}^N\expect{X^n[t]}+V\sum_{n=1}^N\overline{f}^n_*\\ =&C_3+\sum_{n=1}^N\expect{X^n[t]}+Vf_*, \end{align*} where we use the definition of $X^n[t]$ in \eqref{def-X}, with $(\overline{f}^n,\overline{\mathbf{g}}^n)$ replaced by $(\overline{f}^n_*,\overline{\mathbf{g}}^n_*)$, i.e. $X^n[t]=V(y^n[t]-\overline{f}^n_*)+\dotp{\mathbf{Q}[t]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n_*}$, in the second-to-last equality, and use the optimality condition (Lemma \ref{stationary-lemma}) in the final equality. Thus, it follows \begin{align} \frac1T\sum_{t=0}^{T-1}P[t]\leq &C_3+Vf_*+\sum_{n=1}^N\frac1T\sum_{t=0}^{T-1}\expect{X^n[t]}. \nonumber \end{align} By the virtual queue updating rule \eqref{eq:virtual-queue-2} and the trivial bound $Q_l[t]\leq \mathcal{O}(t)$, we readily get \[ \sum_{t=0}^{T-1}\expect{X^n[t]}= \sum_{t=0}^{T-1}\expect{V(y^n[t] - \overline{f}^n_*) + \sum_{l=1}^LQ_l[t](z_l^n[t] - \overline{g}^n_{l,*})}\leq C(T^2 +VT), \] for some constant $C>0$. However, this bound is too weak to prove the convergence result. The key to this proof is to improve the bound to \begin{equation*} \sum_{t=0}^{T-1}\expect{X^n[t]} \leq C_1T + C_2V, \end{equation*} where $C_1$ and $C_2$ are constants independent of $V$ and $T$. This is precisely Lemma \ref{sync-lemma}. As a consequence, for any $T\in\mathbb{N}$, \begin{equation}\label{inter-dpp} \frac1T\sum_{t=0}^{T-1}P[t]\leq (NC_1+C_3) + \frac{NC_2V}{T}. 
\end{equation} On the other hand, by the definition of $P[t]$ in \eqref{compound-dpp} and then telescoping sums with $\mathbf{Q}[0]=0$, we have \begin{align*} \frac1T\sum_{t=0}^{T-1}P[t] =&\frac1T\sum_{t=0}^{T-1}\expect{\sum_{n=1}^NVy^n[t]+\frac12\left(\|\mathbf{Q}[t+1]\|^2-\|\mathbf{Q}[t]\|^2\right)}\\ =&\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^NV\expect{y^n[t]} +\frac{1}{2T}\expect{\|\mathbf{Q}[T]\|^2}. \end{align*} Combining this with inequality \eqref{inter-dpp} gives \begin{equation}\label{final-dpp} \frac1T\sum_{t=0}^{T-1}\sum_{n=1}^NV\expect{y^n[t]} +\frac{1}{2T}\expect{\|\mathbf{Q}[T]\|^2} \leq NC_1+C_3+Vf_*+\frac{NC_2V}{T}. \end{equation} Since $\frac{1}{2T}\expect{\|\mathbf{Q}[T]\|^2}\geq0$, we can drop this term and the inequality still holds, i.e. \begin{equation}\label{final-dpp-2} \frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]} \leq f_*+\frac{NC_1+C_3}{V}+\frac{NC_2}{T}. \end{equation} Taking $\limsup_{T\rightarrow\infty}$ on both sides gives the near optimality claim in the theorem. To get the constraint violation bound, we use Assumption \ref{bounded-assumption}, which gives $|y^n[t]|\leq y_{\max}$ and hence $f_*\leq Ny_{\max}$; then, by \eqref{final-dpp} again, we have \begin{align*} \frac{1}{T}\expect{\|\mathbf{Q}[T]\|^2} \leq 2(NC_1+C_3)+4NVy_{\max}+\frac{2NC_2V}{T}. \end{align*} By Jensen's inequality, $\expect{\|\mathbf{Q}[T]\|^2}\geq\expect{\|\mathbf{Q}[T]\|}^2$. This implies that \[ \expect{\|\mathbf{Q}[T]\|}\leq\sqrt{(2(NC_1+C_3)+4NVy_{\max})T+2NC_2V}, \] which implies \begin{equation}\label{key-bound} \frac{1}{T}\expect{\|\mathbf{Q}[T]\|}\leq\sqrt{\frac{2(NC_1+C_3)+4NVy_{\max}}{T}+\frac{2NC_2V}{T^2}}. \end{equation} Sending $T\rightarrow\infty$ gives \[ \lim_{T\rightarrow\infty}\frac{1}{T}{\expect{Q_l[T]}}=0,~~\forall l\in\{1,2,\cdots,L\}. \] Finally, by Lemma \ref{lemma:queue-bound}, all constraints are satisfied. \end{proof} Note that the above proof implies a more refined result that illustrates the convergence time. 
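As a sanity check on Theorem \ref{thm:main}, the following toy simulation (our own illustrative construction, not part of the analysis) runs the proposed algorithm on a single system with unit-length frames ($T^n_k=1$), one resource with $d_l[t]=1$, and two deterministic actions; on this instance the optimal value is $f_*=0.5$, attained by mixing the two actions equally.

```python
import numpy as np

# Toy instance (hypothetical): per-slot performance vectors (y, z) of two actions.
actions = [(1.0, 0.0),   # action 0: high objective cost, no resource use
           (0.0, 2.0)]   # action 1: zero objective cost, heavy resource use
d = 1.0                  # constraint: time-average of z must stay below d

def run(V, T):
    """Run the proposed algorithm for T slots; return time averages of y and z."""
    Q = 0.0
    total_y = total_z = 0.0
    for _ in range(T):
        # Per-frame subproblem: minimize V*y + Q*z over the finite action set.
        costs = [V * ay + Q * az for (ay, az) in actions]
        ay, az = actions[int(np.argmin(costs))]
        total_y += ay
        total_z += az
        Q = max(Q + az - d, 0.0)   # virtual queue update
    return total_y / T, total_z / T

avg_y, avg_z = run(V=10.0, T=10000)
# avg_y approaches f_* = 0.5 within O(1/V), while avg_z stays near d = 1.
```

Increasing $V$ pushes the time average of $y$ closer to $f_*$ at the price of a larger virtual queue, and hence a longer convergence time for the constraint, matching the trade-off quantified in this section.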
Fix $\varepsilon>0$ and let $V=1/\varepsilon$; then, for all $T\geq 1/\varepsilon$, \eqref{final-dpp-2} implies that \[\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]} \leq f_*+\mathcal{O}(\varepsilon).\] However, \eqref{key-bound} suggests that a larger convergence time is required for constraint satisfaction. For $V=1/\varepsilon$, it can be shown that \eqref{key-bound} implies \[\frac{1}{T}\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{z^n_l[t]}\leq d_l + \mathcal{O}(\varepsilon),\] whenever $T\geq 1/\varepsilon^3$. The next section shows a tighter $1/\varepsilon^2$ convergence time under a mild Lagrange multiplier assumption. The rest of this section is devoted to proving Lemma \ref{sync-lemma}. \subsection{Key-feature inequality and supermartingale construction}\label{sec-4.2} In this section and the next, our goal is to show the bound \begin{equation}\label{eq:target} \sum_{t=0}^{T-1}\expect{V(y^n[t] - \overline{f}^n_*) + \sum_{l=1}^LQ_l[t](z_l^n[t] - \overline{g}^n_{l,*})}\leq C'(V+ T). \end{equation} Learning from the single-renewal analysis (equation \eqref{eq:key-simple}), we have the following key-feature inequality connecting our proposed algorithm with the performance vectors inside $\mathcal{P}^n$. \begin{lemma}\label{key-feature} Consider the stochastic processes $\{y^n[t]\}_{t=0}^\infty$, $\{\mathbf{z}^n[t]\}_{t=0}^\infty$, and $\{T^n_k\}_{k=0}^\infty$ resulting from the proposed algorithm. 
For any system $n$, the following holds for any $k\in\mathbb{N}$ and any $(\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal{P}^n$: \begin{align}\label{key-feature-in} \frac{\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(Vy^n[t]+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]}\right)\right|\mathbf{Q}[t^n_k]}}{\expect{T^n_k|\mathbf{Q}[t^n_k]}}\leq V\overline{f}^n+\dotp{\mathbf{Q}[t^n_k]}{\overline{\mathbf{g}}^n}. \end{align} \end{lemma} \begin{proof}[Proof of Lemma \ref{key-feature}] First of all, since the proposed algorithm solves \eqref{DPP-ratio} over all possible decisions in $\mathcal{A}^n$, it must achieve a value less than or equal to that of any fixed action $\alpha^n\in\mathcal{A}^n$ on the same frame. This gives \begin{align*} D^n_k\leq\frac{\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(Vy^n[t]+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]}\right)\right|\mathbf{Q}[t^n_k],\alpha^n_k=\alpha^n}}{\expect{\left.T^n_k\right|\mathbf{Q}[t^n_k],\alpha^n_k=\alpha^n}} =\frac{V\widehat{y}^n(\alpha^n)+\dotp{\mathbf{Q}[t^n_k]}{\widehat{\mathbf{z}}^n(\alpha^n)}}{\widehat{T}^n(\alpha^n)}, \end{align*} where $D^n_k$ is defined in \eqref{DPP-ratio} and the equality follows from the renewal property of the system, namely that $T^n_k$, $\sum_{t\in\mathcal{T}^n_k}y^n[t]$ and $\sum_{t\in\mathcal{T}^n_k}\mathbf{z}^n[t]$ are conditionally independent of $\mathbf{Q}[t^n_k]$ given $\alpha^n_k=\alpha^n$. 
Since $\widehat{T}^n(\alpha^n)\geq1$ (as $T^n_k\geq1$), this implies \begin{align*} \widehat{T}^n(\alpha^n)\cdot D^n_k\leq V\widehat{y}^n(\alpha^n)+\dotp{\mathbf{Q}[t^n_k]}{\widehat{\mathbf{z}}^n(\alpha^n)}, \end{align*} thus, for any $\alpha^n\in\mathcal{A}^n$, \[V\widehat{y}^n(\alpha^n)+\dotp{\mathbf{Q}[t^n_k]}{\widehat{\mathbf{z}}^n(\alpha^n)} -D^n_k\cdot\widehat{T}^n(\alpha^n)\geq0.\] Since $\mathcal{S}^n$ specified in Definition \ref{PR-def} is the convex hull of $\left\{(\widehat{y}^n(\alpha^n),~\widehat{\mathbf{z}}^n(\alpha^n),~\widehat{T}^n(\alpha^n)),~\alpha^n\in\mathcal{A}^n\right\}$, it follows that for any vector $(y,\mathbf{z},T)\in\mathcal{S}^n$, \[Vy+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}} -D^n_k\cdot T\geq0.\] Dividing both sides by $T$ and using the definition of $\mathcal{P}^n$ in Definition \ref{PR-def} gives \[D^n_k\leq V\overline{f}^n+\dotp{\mathbf{Q}[t^n_k]}{\overline{\mathbf{g}}^n},~\forall(\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal{P}^n.\] Finally, since $\{y^n[t]\}_{t=0}^\infty$, $\{\mathbf{z}^n[t]\}_{t=0}^\infty$, and $\{T^n_k\}_{k=0}^\infty$ result from the proposed algorithm and the action chosen is determined by $\mathbf{Q}[t^n_k]$ as in \eqref{DPP-ratio}, \[D^n_k=\frac{\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(Vy^n[t]+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]}\right)\right|\mathbf{Q}[t^n_k]}}{\expect{T^n_k|\mathbf{Q}[t^n_k]}}.\] This finishes the proof. \end{proof} Our next step is to give a frame-based analysis for each system by constructing a supermartingale on the per-frame timescale. We start with the definition of a supermartingale: \begin{definition}[Supermartingale]\label{def:sup-MG} Consider a probability space $(\Omega,\mathcal F, P)$ and a filtration $\{\mathcal F_i\}_{i=0}^{\infty}$ on this space with $\mathcal F_0 = \{\emptyset,\Omega\}$, $\mathcal F_i\subseteq\mathcal F_{i+1},~\forall i$ and $\mathcal F_i\subseteq\mathcal F,~\forall i$. Consider a process $\{X_i\}_{i=0}^{\infty}\subseteq\mathbb{R}$ adapted to this filtration, i.e. 
$X_i\in\mathcal F_{i+1},~\forall i$. Then, $\{X_i\}_{i=0}^{\infty}$ is a supermartingale if $\expect{|X_i|}<\infty$ and $\expect{X_{i+1} | \mathcal F_{i+1}}\leq X_i$. Furthermore, $\{X_{i+1}-X_i\}_{i=0}^{\infty}$ is called a supermartingale difference sequence. \end{definition} Note that by the definition of a supermartingale, we always have $\expect{X_{i+1}-X_i~|~\mathcal F_{i+1}}\leq 0$. We also recall the standard definition of a stopping time, which will be used later: \begin{definition}[Stopping time] Given a probability space $(\Omega, \mathcal{F}, P)$ and a filtration $\{\varnothing, \Omega\}=\mathcal{F}_0\subseteq\mathcal{F}_1\subseteq\mathcal{F}_2\subseteq\cdots$ in $\mathcal{F}$, a stopping time $\tau$ with respect to the filtration $\{\mathcal{F}_i\}_{i=0}^{\infty}$ is a random variable such that for any $i\in\mathbb{N}$, \[\{\tau=i\}\in\mathcal{F}_i,\] i.e. the event that the stopping time occurs at time $i$ is determined by the information available during slots $0,~1,~2,~\cdots,~i-1$. \end{definition} Recall that $\{\mathcal{F}[t]\}_{t=0}^{\infty}$ is a filtration (with $\mathcal F[t]$ representing the system history during slots $\{0, \cdots, t-1\}$). Fix a system $n$ and recall that $t_k^n$ is the time slot where the $k$-th renewal occurs for system $n$. We would like to define a filtration corresponding to the random times $t_k^n$. To this end, define the collection of sets $\{\mathcal F_k^n\}_{k=0}^\infty$ such that for each $k$, \[ \mathcal F_k^n := \{A \in\mathcal F : A \cap \{t_k^n \leq t\} \in \mathcal F[t], \forall t \in \{0, 1, 2,\cdots\}\}. \] For example, the following set $A$ is an element of $\mathcal F_3^n$: \[ A = \{t_3^n=5\} \cap \{y[0]=y_0, y[1]=y_1, y[2]=y_2, y[3]=y_3, y[4]=y_4\}, \] where $y_0,\cdots, y_4$ are specific values. Then $A \in\mathcal F_3^n$ because for $i \in \{0, 1, 2, 3, 4\}$ we have $A \cap \{t_3^n \leq i\} = \emptyset \in \mathcal F[i]$, and for $i \in \{5, 6, 7, \cdots\}$ we have $A \cap \{t_3^n \leq i\} = A \in \mathcal F[i]$. 
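The renewal-time bookkeeping behind $\{\mathcal F_k^n\}$ is easy to make concrete. The following Python sketch (with hypothetical frame lengths) computes the renewal times $t^n_k$ and the number of renewals that have occurred by a given slot; this count is exactly the stopping time $S^n[T]$ introduced in Section \ref{section:sync}:

```python
import numpy as np

# Hypothetical frame lengths T^n_k for one system; the renewal times t^n_k are
# their partial sums, with the convention t^n_0 = 0 (first renewal at slot 0).
frame_lengths = np.array([2, 3, 1, 4, 2])
renewal_times = np.concatenate(([0], np.cumsum(frame_lengths)))  # t^n_0, t^n_1, ...

def renewals_by(T):
    """Number of renewals up to and including slot T."""
    # side='right' counts entries <= T, so renewals_by(0) = 1 since t^n_0 = 0.
    return int(np.searchsorted(renewal_times, T, side="right"))

# Whether k renewals have occurred by slot T is decided by slots 0..T-1, which
# is the measurability property behind the filtration {F^n_k} defined above.
for T in range(int(renewal_times[-1])):
    k = renewals_by(T)
    assert renewal_times[k - 1] <= T < renewal_times[k]
```

The bracketing property checked in the loop, $t^n_{k-1}\leq T<t^n_k$ when $k$ renewals have occurred by slot $T$, is the relation used at the start of the proof of Lemma \ref{sync-lemma}.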
The following technical lemma is proved in Section \ref{appendix-proof}. \begin{lemma}\label{lemma:filtration} The sequence $\{\mathcal F_k^n \}_{k=0}^\infty$ is a valid filtration, i.e. $\mathcal F_k^n \subseteq\mathcal F_{k+1}^n ,~\forall k\geq0$. Furthermore, for any real-valued adapted process $\{Z^n[t-1]\}_{t=1}^\infty$ with respect to $\{\mathcal{F}[t]\}_{t=1}^\infty$, \footnote{Meaning that for each $t$ in $\{1, 2,3, \cdots\}$, the random variable $Z^n[t-1]$ is determined by events in $\mathcal F[t]$.} $$\left\{G_{t^n_k}(Z^n[0],~Z^n[1],~\cdots,Z^n[t^n_k-1])\right\}_{k=1}^\infty$$ is also adapted to $\{\mathcal F_k^n \}_{k=1}^\infty$, where for any $t\in\mathbb{N}$, $G_t(\cdot)$ is a fixed real-valued measurable mapping. That is, for any $k$, the value of any measurable function of $(Z^n[0], \cdots, Z^n[t_k^n-1])$ is determined by events in $\mathcal F_k^n$. \end{lemma} With Lemma \ref{key-feature} and Lemma \ref{lemma:filtration}, we can construct a supermartingale as follows. \begin{lemma}\label{supMG} Consider the stochastic processes $\{y^n[t]\}_{t=0}^\infty$, $\{\mathbf{z}^n[t]\}_{t=0}^\infty$, and $\{T^n_k\}_{k=0}^\infty$ resulting from the proposed algorithm. For any $(\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal{P}^n$, let \begin{equation} \label{def-X} X^n[t]:=V\left(y^n[t]-\overline{f}^n\right)+\dotp{\mathbf{Q}[t]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n}, \end{equation} then, \[\expect{\left.\sum_{t\in\mathcal{T}^n_k}X^n[t]\right|\mathcal F_k^n }\leq Lz_{\max}(Nz_{\max}+d_{\max})B=:C_0,\] where $B$, $z_{\max}$ and $d_{\max}$ are as defined in Assumption \ref{bounded-assumption}. Furthermore, define a real-valued process $\{Y^n_K\}_{K=0}^\infty$ on the frames such that $Y^n_0=0$ and \[Y^n_K=\sum_{k=0}^{K-1}\left(\sum_{t\in\mathcal{T}^n_k}X^n[t]-C_0\right),~K\geq1.\] Then, $\{Y^n_K\}_{K=0}^\infty$ is a supermartingale adapted to the aforementioned filtration $\{\mathcal F_K^n \}_{K=0}^\infty$. 
\end{lemma} \begin{remark} Note that in the above lemma, the quantity $X^n[t]$ is the term we aim to bound in \eqref{eq:target}. The fact that $\{Y^n_K\}_{K=0}^\infty$ is a supermartingale implies $\expect{Y^n_K}\leq 0,~\forall K$, and hence $$ \expect{\sum_{\tau=0}^{t^n_K-1}X^n[\tau]}\leq C_0K\leq C_0t^n_K. $$ Thus, this lemma proves that \eqref{eq:target} holds when $T$ is taken to be the end of a renewal frame of system $n$. Our goal in the next section is to remove this restriction and finish the proof via a stopping-time argument. \end{remark} \begin{proof}[Proof of Lemma \ref{supMG}] Consider any $t\in\mathcal{T}^n_k$; we can decompose $X^n[t]$ as follows: \begin{align}\label{eq-decompose} X^n[t]=&V(y^n[t]-\overline{f}^n)+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n} +\dotp{\mathbf{Q}[t]-\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n}. \end{align} By the queue updating rule \eqref{queue-update}, we have for any $l\in\{1,2,\cdots,L\}$ and any $t>t^n_k$, \begin{equation}\label{queue-update-bound} |Q_l[t]-Q_l[t^n_k]|\leq\sum_{s=t^n_k}^{t-1}\left|\sum_{m=1}^Nz^m_l[s]-d_l[s]\right| \leq(t-t^n_k)(Nz_{\max}+d_{\max}). \end{equation} Thus, for the last term in \eqref{eq-decompose}, by H\"{o}lder's inequality, \begin{align*} \dotp{\mathbf{Q}[t]-\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n} \leq&\|\mathbf{Q}[t]-\mathbf{Q}[t^n_k]\|_1\cdot\|\mathbf{z}^n[t]-\overline{\mathbf{g}}^n\|_{\infty}\\ \leq&\sum_{s=t^n_k}^{t-1}\left\|\sum_{m=1}^N\mathbf{z}^m[s]-\mathbf{d}[s]\right\|_1\cdot\|\mathbf{z}^n[t]-\overline{\mathbf{g}}^n\|_{\infty}\\ \leq&(t-t^n_k)L(Nz_{\max}+d_{\max})\cdot2z_{\max}, \end{align*} where the second inequality follows from \eqref{queue-update-bound} and the last inequality follows from the boundedness assumption (Assumption \ref{bounded-assumption}) on the corresponding quantities. 
Substituting the above bound into \eqref{eq-decompose} gives a bound on $\expect{\left.\sum_{t\in\mathcal{T}^n_k}X^n[t]\right|\mathcal F_k^n }$ as \begin{align} \expect{\left.\sum_{t\in\mathcal{T}^n_k}X^n[t]\right|\mathcal F_k^n } \leq&\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(V\left(y^n[t]-\overline{f}^n\right)+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n}\right)\right|\mathcal F_k^n }\nonumber\\ &+\expect{\left.\sum_{t\in\mathcal{T}^n_k}(t-t^n_k)\right|\mathcal F_k^n } \cdot2L(Nz_{\max}+d_{\max})z_{\max}\nonumber\\ \leq&\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(V\left(y^n[t]-\overline{f}^n\right)+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n}\right)\right|\mathcal F_k^n }\nonumber\\ &+\expect{\left.(T^n_k)^2\right|\mathcal F_k^n } \cdot L(Nz_{\max}+d_{\max})z_{\max},\label{bound-on-X} \end{align} where we use the fact that $0+1+\cdots+(T^n_k-1) = (T^n_k-1)T^n_k/2\leq (T^n_k)^2/2$ in the last inequality. Next, by the queue updating rule \eqref{queue-update}, $Q_l[t^n_k]$ is determined by $z_l^n[0],\cdots,z_l^n[t^n_k-1]$ ($n=1,2,\cdots,N$) and $d_l[0],\cdots,d_l[t^n_k-1]$ for any $l\in\{1,2,\cdots,L\}$. Thus, by Lemma \ref{lemma:filtration}, $\mathbf{Q}[t^n_k]$ is determined by $\mathcal F_k^n $. For the proposed algorithm, each system makes decisions purely based on the virtual queue state $\mathbf{Q}[t^n_k]$, and by the renewal property of each system, given the decision at the $k$-th renewal, the random quantities $T^n_k$, $\mathbf{z}^n[t]$ and $y^n[t]$,~$t\in\mathcal{T}^n_k$, are independent of the outcomes from the slots before $t^n_k$. 
This implies the following: \begin{align} &\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(V\left(y^n[t]-\overline{f}^n\right)+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n}\right)\right|\mathcal F_k^n }\nonumber\\ &=\expect{\left.\sum_{t\in\mathcal{T}^n_k}V\left(y^n[t]-\overline{f}^n\right)\right|\mathcal F_k^n }+\dotp{\mathbf{Q}[t^n_k]}{\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(\mathbf{z}^n[t]-\overline{\mathbf{g}}^n\right)\right|~\mathcal F_k^n }}\nonumber\\ &=\expect{\left.\sum_{t\in\mathcal{T}^n_k}V\left(y^n[t]-\overline{f}^n\right)\right|\mathbf{Q}[t^n_k]}+\dotp{\mathbf{Q}[t^n_k]}{\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(\mathbf{z}^n[t]-\overline{\mathbf{g}}^n\right)\right|~\mathbf{Q}[t^n_k]}}\nonumber\\ &=\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(V\left(y^n[t]-\overline{f}^n\right)+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n}\right)\right|\mathbf{Q}[t^n_k]}.\label{mark-1} \end{align} By Lemma \ref{key-feature}, we have the following: \begin{align*} \expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(Vy^n[t]+\dotp{\mathbf{Q}[t^n_k]}{\mathbf{z}^n[t]}\right)\right|\mathbf{Q}[t^n_k]} \leq \left(V\overline{f}^n+\dotp{\mathbf{Q}[t^n_k]}{\overline{\mathbf{g}}^n}\right)\cdot\expect{T^n_k|\mathbf{Q}[t^n_k]}. \end{align*} Thus, rearranging terms in the above inequality shows that the expectation on the right-hand side of \eqref{mark-1} is no greater than 0, and hence the first expectation on the right-hand side of \eqref{bound-on-X} is also no greater than 0. For the second expectation in \eqref{bound-on-X}, using \eqref{residual-life-bound} in Assumption \ref{bounded-assumption} gives $\expect{\left.(T^n_k)^2\right|\mathcal F_k^n }\leq B$, and the first part of the lemma is proved. For the second part of the lemma, by Lemma \ref{lemma:filtration} and the definition of $Y^n_K$, the process $\{Y^n_K\}_{K=0}^\infty$ is adapted to $\{\mathcal F_K^n \}_{K=0}^{\infty}$. 
Moreover, by Assumption \ref{bounded-assumption}, \begin{align*} \expect{\left|\sum_{t\in\mathcal{T}^n_k}X^n[t]\right|} \leq\expect{\sum_{t\in\mathcal{T}^n_k}\left|X^n[t]\right|}<\infty,~\forall k. \end{align*} Thus, $\expect{|Y^n_K|}<\infty,~\forall K\in\mathbb{N}$, i.e. it is absolutely integrable. Furthermore, by the first part of the lemma, \begin{align*} \expect{Y^n_{K+1}~|~\mathcal F_K^n } =Y^n_K+\expect{\left.\left(\sum_{t\in\mathcal{T}^n_K}X^n[t]-C_0\right)~\right|~\mathcal F_K^n } \leq Y^n_K, \end{align*} finishing the proof. \end{proof} \subsection{Synchronization lemma}\label{section:sync} So far, we have analyzed the processes related to each individual system over its own renewal frames. However, due to the asynchronous behavior of the different systems, the per-system supermartingales cannot be summed directly. In order to prove \eqref{eq:target} and obtain a global performance bound, we have to remove any index tied to the renewal frames of an individual system. In other words, we need to look at the system at an arbitrary time slot $T$, as opposed to a renewal time $t^n_k$. For any fixed slot $T$, let $S^n[T]$ be the number of renewals up to (and including) time slot $T$, with the convention that the first renewal occurs at time $t=0$, so that $t^n_0=0$ and $S^n[0]=1$. The next lemma, whose proof is in the appendix, shows that $S^n[T]$ is a valid stopping time. \begin{lemma}\label{valid-stopping-time} For each $n\in\{1,2,\cdots,N\}$, the random variable $S^n[T]$ is a stopping time with respect to the filtration $\{\mathcal F_k^n \}_{k=0}^\infty$, i.e. $\{S^n[T]= k\}\in\mathcal F_k^n ,~\forall k\in\mathbb{N}$. \end{lemma} The following theorem tells us that a stopping-time truncated supermartingale is still a supermartingale. 
\begin{theorem}[Theorem 5.2.6 in \cite{Durrett}]\label{stopping-time} If $\tau$ is a stopping time and $Z[i]$ is a supermartingale with respect to $\{\mathcal{F}_i\}_{i=0}^\infty$, then $Z[i\wedge \tau]$ is also a supermartingale, where $a\wedge b\triangleq\min\{a,b\}$. \end{theorem} With this theorem and the above stopping-time construction, we have the following lemma, which finishes the proof of \eqref{eq:target}: \begin{lemma}\label{sync-lemma} For each $n\in\{1,2,\cdots,N\}$ and any fixed $T\in\mathbb{N}$, we have \begin{align*} \frac1T\sum_{t=0}^{T-1}\expect{X^n[t]}\leq C_1+\frac{C_2V}{T}, \end{align*} where $X^n[t]$ is defined in \eqref{def-X} and \[C_1:=6Lz_{\max}(Nz_{\max}+d_{\max})B,~~C_2:=2y_{\max}\sqrt{B}.\] \end{lemma} \begin{proof} First, note that the renewal index $k$ starts from 0. Thus, for any fixed $T\in \mathbb{N}$, $ t^n_{S^n[T]-1}\leq T<t^n_{S^n[T]}$, and \begin{align} \expect{\sum_{t=0}^{T-1}X^n[t]}=&\expect{\sum_{t=0}^{t^n_{S^n[T]}-1}X^n[t]-\sum_{t=T}^{t^n_{S^n[T]}-1}X^n[t]}\nonumber\\ =&\expect{\sum_{t=0}^{t^n_{S^n[T]}-1}X^n[t]}-\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}X^n[t]}\nonumber\\ =&\expect{Y^n_{S^n[T]}}+C_0\expect{S^n[T]}-\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}X^n[t]}\nonumber\\ \leq&\expect{Y^n_{S^n[T]}}+C_0(T+1)-\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}X^n[t]},\label{decompose-inequality} \end{align} where the third equality follows from the definition of $Y^n_K$ in Lemma \ref{supMG} and the last inequality follows from the fact that the number of renewals up to time slot $T$ is no more than the total number of slots, i.e. $S^n[T]\leq T+1$. For the term $\expect{Y^n_{S^n[T]}}$, we apply Theorem \ref{stopping-time} with $\tau = S^n[T]$ and index $K$ to obtain that $\{Y^n_{K\wedge S^n[T]}\}_{K=0}^\infty$ is a supermartingale. 
This implies \[\expect{Y^n_{K\wedge S^n[T]}}\leq\expect{Y^n_{0\wedge S^n[T]}}=\expect{Y^n_0}=0,~\forall K\in\mathbb{N}.\] Since $S^n[T]\leq T+1$, it follows by substituting $K=T+1$ that \[\expect{Y^n_{S^n[T]}}=\expect{Y^n_{(T+1)\wedge S^n[T]}}\leq0.\] For the last term in \eqref{decompose-inequality}, by queue updating rule \eqref{queue-update}, for any $l\in\{1,2,\cdots,L\}$, \[|Q_l[t]|\leq\sum_{s=0}^{t-1}\left|\sum_{m=1}^Nz^m_l[s]-d_l\right| \leq t(Nz_{\max}+d_{\max}).\] It then follows from H\"{o}lder's inequality again that \begin{align*} \expect{\left|\sum_{t=T}^{t^n_{S^n[T]}-1}X^n[t]\right|} =&\expect{\left|\sum_{t=T}^{t^n_{S^n[T]}-1}\left(V(y^n[t]-\overline{f}^n)+\dotp{\mathbf{Q}[t]}{\mathbf{z}^n[t]-\overline{\mathbf{g}}^n}\right)\right|}\\ \leq&\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}\left(V\left|y^n[t]-\overline{f}^n\right|+\|\mathbf{Q}[t]\|_1\cdot\|\mathbf{z}^n[t]-\overline{\mathbf{g}}^n\|_{\infty}\right)}\\ \leq&\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}\left(2Vy_{\max}+L(Nz_{\max}+d_{\max})t\cdot2z_{\max}\right)}\\ =&2Vy_{\max}\cdot\expect{t^n_{S^n[T]}-T}+Lz_{\max}(Nz_{\max}+d_{\max})\\ &\cdot\left((2T-1)\cdot\expect{t^n_{S^n[T]}-T}+\expect{(t^n_{S^n[T]}-T)^2}\right)\\ \leq& 2Vy_{\max}\sqrt{B}+2Lz_{\max}(Nz_{\max}+d_{\max})\sqrt{B}T+Lz_{\max}(Nz_{\max}+d_{\max})B\\ \leq& 2Vy_{\max}\sqrt{B}+2Lz_{\max}(Nz_{\max}+d_{\max})B(T+1), \end{align*} where in the second-to-last inequality we use \eqref{residual-life-bound} of Assumption \ref{bounded-assumption}, which implies that the residual life $t^n_{S^n[T]}-T$ satisfies $$\expect{(t^n_{S^n[T]}-T)^2} =\expect{\expect{\left.(t^n_{S^n[T]}-T)^2\right|~t^n_{S^n[T]}-t^n_{S^n[T]-1}\geq T-t^n_{S^n[T]-1}}}\leq B$$ and $\expect{t^n_{S^n[T]}-T}\leq\sqrt{B}$, and in the last inequality we use the fact that $B\geq1$, and thus $\sqrt{B}\leq B$.
Substituting the above bounds into \eqref{decompose-inequality} gives \begin{align*} \expect{\sum_{t=0}^{T-1}X^n[t]}\leq& C_0(T+1)+2Vy_{\max}\sqrt{B}+2Lz_{\max}(Nz_{\max}+d_{\max})B(T+1)\\ =&2Vy_{\max}\sqrt{B}+3Lz_{\max}(Nz_{\max}+d_{\max})B(T+1)\\ \leq&2Vy_{\max}\sqrt{B}+6Lz_{\max}(Nz_{\max}+d_{\max})BT, \end{align*} where we use the definition $C_0=Lz_{\max}(Nz_{\max}+d_{\max})B$ from Lemma \ref{supMG} in the equality and use $T+1\leq2T$ in the final inequality. Dividing both sides by $T$ finishes the proof. \end{proof} \section{Convergence Time Analysis}\label{sec-convergence-time} \subsection{Lagrange Multipliers} Consider the following optimization problem: \begin{align} \min&~~\sum_{n=1}^N\overline{f}^n\label{modi-prob-1}\\ s.t.&~~\sum_{n=1}^N\overline{g}^n_l\leq d_l,~\forall l\in\{1,2,\cdots, L\}, \\ &~~(\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal{P}^n,~\forall n \in\{1,2,\cdots,N\}. \label{modi-prob-3} \end{align} Since each $\mathcal{P}^n$ is convex by Lemma \ref{convex-lemma}, the product $\otimes_{n=1}^N\mathcal{P}^n$ is also convex. Thus, \eqref{modi-prob-1}-\eqref{modi-prob-3} is a convex program. Furthermore, by Lemma \ref{stationary-lemma}, \eqref{modi-prob-1}-\eqref{modi-prob-3} is feasible if and only if \eqref{prob-1}-\eqref{prob-2} is feasible, and when both are feasible, they have the same optimal value $f_*$ as specified in Lemma \ref{stationary-lemma}. Since $\mathcal{P}^n$ is convex, one can show (see Proposition 5.1.1 of \cite{Be09}) that there \textit{always} exists a vector $(\gamma_0, \gamma_1,\cdots,\gamma_L)$ such that $\gamma_i\geq0,~i=0,1,\cdots,L$ and \[ \sum_{n=1}^N\gamma_0\overline{f}^n+\sum_{l=1}^L\gamma_l\sum_{n=1}^N\overline{g}^n_l \geq \gamma_0f_*+\sum_{l=1}^L\gamma_ld_l, ~\forall (\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal{P}^n, \] i.e.
there always exists a hyperplane parametrized by $(\gamma_0, \gamma_1,\cdots,\gamma_L)$, supported at $(f_*,d_1,\cdots,d_L)$ and containing the set $\left\{\left(\sum_{n=1}^N\overline{f}^n,~\sum_{n=1}^N\overline{\mathbf{g}}^n\right):~(\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal{P}^n,~\forall n\in\{1,2,\cdots,N\}\right\}$ on one side. Such a hyperplane is called a ``separating hyperplane''. The following assumption stems from this property and simply requires this separating hyperplane to be non-vertical (i.e. $\gamma_0>0$): \begin{assumption}\label{sep-hype} There exist nonnegative finite constants $\gamma_1,~\gamma_2,~\cdots,~\gamma_L$ such that the following holds: \begin{align*} \sum_{n=1}^N\overline{f}^n+\sum_{l=1}^L\gamma_l\sum_{n=1}^N\overline{g}^n_l \geq f_*+\sum_{l=1}^L\gamma_ld_l, ~\forall (\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal{P}^n, \end{align*} i.e. there exists a separating hyperplane parametrized by $(1,\gamma_1,\cdots,\gamma_L)$. \end{assumption} \begin{remark} The parameters $\gamma_1,~\cdots,~\gamma_L$ are called Lagrange multipliers, and this assumption is equivalent to the existence of Lagrange multipliers for the constrained convex program \eqref{modi-prob-1}-\eqref{modi-prob-3}. It is known that Lagrange multipliers exist if Slater's condition holds (\cite{Be09}), which states that the feasible region of the convex program has a nonempty interior. Slater's condition is very common in convex optimization theory and plays an important role in convergence rate analysis, such as the analysis of interior point algorithms (\cite{BV04}). In the current context, this condition is satisfied, for example, in energy-aware server scheduling problems if the highest possible sum of service rates from all servers is strictly higher than the arrival rate.
\end{remark} \begin{lemma}\label{bound-lemma-2} Suppose $\{y^n[t]\}_{t=0}^\infty$, $\{\mathbf{z}^n[t]\}_{t=0}^\infty$ and $\{T^n_k\}_{k=0}^\infty$ are processes resulting from the proposed algorithm. Under Assumption \ref{sep-hype}, \begin{align*} \frac1T\sum_{t=0}^{T-1}\left(f_*-\sum_{n=1}^N\expect{y^n[t]}\right) \leq\frac1T\sum_{t=0}^{T-1}\sum_{l=1}^L\gamma_l\left(\sum_{n=1}^N\expect{z^n_l[t]}-d_l\right) +\frac{C_4}{T}, \end{align*} where $C_4=B_1N+B_2N\sum_{l=1}^L\gamma_l$, and $B_1$, $B_2$ are defined in Lemma \ref{bound-lemma-1}. \end{lemma} \begin{proof} First of all, from the statement of Lemma \ref{bound-lemma-1}, for the proposed algorithm we can define the corresponding processes $(f^n[t],\mathbf{g}^n[t])$ for all $n$ as \begin{align*} f^n[t] =& \widehat{f}^n(\alpha^n) = \widehat{y}^n(\alpha^n)/\widehat{T}^n(\alpha^n), ~~\textrm{if}~t\in\mathcal{T}^n_k,\alpha^n_k=\alpha^n\\ \mathbf{g}^n[t] =& \widehat{\mathbf{g}}^n(\alpha^n)= \widehat{\mathbf z}^n(\alpha^n)/\widehat{T}^n(\alpha^n),~~\textrm{if}~t\in\mathcal{T}^n_k,\alpha^n_k=\alpha^n, \end{align*} where the last equality follows from the definition of $\widehat{f}^n(\alpha^n)$ and $\widehat{\mathbf{g}}^n(\alpha^n)$ in Definition \ref{PV-def}. Since $\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf z}^n(\alpha^n),~\widehat{T}^n(\alpha^n)\right) \in\mathcal{S}^n$, by definition of $\mathcal{P}^n$ in Definition \ref{PR-def}, $(f^n[t],\mathbf{g}^n[t])\in\mathcal{P}^n,~\forall n,~\forall t$. Since $\mathcal{P}^n$ is a convex set by Lemma \ref{convex-lemma}, it follows \begin{align*} \left(\expect{f^n[t]},~\expect{\mathbf{g}^n[t]}\right)\in\mathcal{P}^n,~~\forall t,~\forall n. \end{align*} By Assumption \ref{sep-hype}, we have \begin{align*} \sum_{n=1}^N\expect{f^n[t]}+\sum_{l=1}^L\gamma_l\sum_{n=1}^N\expect{g^n_l[t]} \geq f_*+\sum_{l=1}^L\gamma_ld_l,~~\forall t.
\end{align*} Rearranging terms gives \begin{align*} f_*-\sum_{n=1}^N\expect{f^n[t]}\leq \sum_{l=1}^L\gamma_l\left(\sum_{n=1}^N\expect{g^n_l[t]}-d_l\right),~~\forall t. \end{align*} Taking the time average from 0 to $T-1$ gives \begin{align} \frac1T\sum_{t=0}^{T-1}\left(f_*-\sum_{n=1}^N\expect{f^n[t]}\right) \leq \frac1T\sum_{t=0}^{T-1}\sum_{l=1}^L\gamma_l\left(\sum_{n=1}^N\expect{g^n_l[t]}-d_l\right). \label{inter-ave-bound-1} \end{align} For the left hand side of \eqref{inter-ave-bound-1}, we have \begin{align} l.h.s.&=\frac1T\sum_{t=0}^{T-1}\left(f_*-\sum_{n=1}^N\expect{y^n[t]}\right) +\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]-f^n[t]}\nonumber\\ &\geq\frac1T\sum_{t=0}^{T-1}\left(f_*-\sum_{n=1}^N\expect{y^n[t]}\right)-\frac{B_1N}{T}.\label{inter-ave-bound-2} \end{align} where the inequality follows from \eqref{bound-1} in Lemma \ref{bound-lemma-1}. For the right hand side of \eqref{inter-ave-bound-1}, we have \begin{align} r.h.s.&= \frac1T\sum_{t=0}^{T-1}\sum_{l=1}^L\gamma_l\left(\sum_{n=1}^N\expect{z^n_l[t]}-d_l\right) +\frac1T\sum_{t=0}^{T-1}\sum_{l=1}^L\gamma_l\sum_{n=1}^N\expect{g^n_l[t]-z^n_l[t]}\nonumber\\ &\leq\frac1T\sum_{t=0}^{T-1}\sum_{l=1}^L\gamma_l\left(\sum_{n=1}^N\expect{z^n_l[t]}-d_l\right)+\frac{B_2N\sum_{l=1}^L\gamma_l}{T}, \label{inter-ave-bound-3} \end{align} where the inequality follows from the fact that $\gamma_l\geq0,~\forall l$ and \eqref{bound-2} in Lemma \ref{bound-lemma-1}. Substituting \eqref{inter-ave-bound-2} and \eqref{inter-ave-bound-3} into \eqref{inter-ave-bound-1} finishes the proof. \end{proof} \subsection{Convergence time theorem} \begin{theorem} Fix $\varepsilon\in(0,1)$ and define $V=1/\varepsilon$. 
If the problem \eqref{prob-1}-\eqref{prob-2} is feasible and Assumption \ref{sep-hype} holds, then, for all $T\geq1/\varepsilon^2$, \begin{align} &\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]}\leq f_*+\mathcal{O}(\varepsilon),\label{ctime-1}\\ &\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{z^n_l[t]}\leq d_l+\mathcal{O}(\varepsilon),~\forall l\in\{1,2,\cdots,L\}.\label{ctime-2} \end{align} Thus, the algorithm provides an $\mathcal{O}(\varepsilon)$ approximation with convergence time $\mathcal{O}(1/\varepsilon^2)$. \end{theorem} \begin{proof} First of all, by queue updating rule \eqref{queue-update}, \begin{equation}\label{queue-bound} \sum_{t=0}^{T-1}\left(\sum_{n=1}^N\expect{z^n_l[t]}-d_l\right)\leq\expect{Q_l[T]}. \end{equation} By Lemma \ref{bound-lemma-2}, we have \begin{align} \frac1T\sum_{t=0}^{T-1}\left(f_*-\sum_{n=1}^N\expect{y^n[t]}\right) \leq&\frac1T\sum_{t=0}^{T-1}\sum_{l=1}^L\gamma_l\left(\sum_{n=1}^N\expect{z^n_l[t]}-d_l\right) +\frac{C_4}{T},\nonumber\\ \leq&\sum_{l=1}^L\frac{\gamma_l}{T}\expect{Q_l[T]}+\frac{C_4}{T}.\label{inter-ctime-1} \end{align} Combining this with \eqref{final-dpp} gives \begin{align} \frac{1}{2T}\expect{\|\mathbf{Q}[T]\|^2} &\leq NC_1+C_3+\frac{V}{T}\sum_{t=0}^{T-1}\left(f_*-\sum_{n=1}^N\expect{y^n[t]}\right)+\frac{NC_2V}{T}\nonumber\\ &\leq NC_1+C_3 +\frac{(NC_2+C_4)V}{T}+V\sum_{l=1}^L\frac{\gamma_l}{T}\expect{Q_l[T]} \nonumber\\ &\leq NC_1+C_3 +\frac{(NC_2+C_4)V}{T}+\frac{V}{T}\|\gamma\|\cdot\|\expect{\mathbf{Q}[T]}\|, \label{inter-ctime-2} \end{align} where $\gamma:=(\gamma_1,~\cdots,~\gamma_L)$, the second inequality follows from \eqref{inter-ctime-1} and the final inequality follows from the Cauchy-Schwarz inequality. Then, by Jensen's inequality, we have \[\|\expect{\mathbf{Q}[T]}\|^2\leq\expect{\|\mathbf{Q}[T]\|^2}.\] Thus, it follows by \eqref{inter-ctime-2} that \begin{align*} \|\expect{\mathbf{Q}[T]}\|^2- 2V\|\gamma\|\cdot\|\expect{\mathbf{Q}[T]}\| - 2(NC_1+C_3)T -2(NC_2+C_4)V\leq 0.
\end{align*} The left hand side is quadratic in $\|\expect{\mathbf{Q}[T]}\|$, and the inequality implies that $\|\expect{\mathbf{Q}[T]}\|$ is deterministically upper bounded by the largest root of the equation $x^2-bx-c=0$ with $b=2V\|\gamma\|$ and $c=2(NC_1+C_3)T+2(NC_2+C_4)V$. Thus, \begin{align*} \|\expect{\mathbf{Q}[T]}\|\leq&\frac{b+\sqrt{b^2+4c}}{2}\\ =& V\|\gamma\|+\sqrt{V^2\|\gamma\|^2+2(NC_1+C_3)T+2(NC_2+C_4)V}\\ \leq& 2V\|\gamma\|+ \sqrt{2(NC_1+C_3)T} + \sqrt{2(NC_2+C_4)V}. \end{align*} Thus, for any $l\in\{1,2,\cdots,L\}$, \begin{align*} \frac1T\expect{Q_l[T]}\leq \frac{2V\|\gamma\|}{T} + \sqrt{\frac{2(NC_1+C_3)}{T}} + \frac{\sqrt{2(NC_2+C_4)V}}{T}. \end{align*} By \eqref{queue-bound} again, \begin{align*} \frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{z^n_l[t]}\leq d_l+\frac{2V\|\gamma\|}{T} + \sqrt{\frac{2(NC_1+C_3)}{T}} + \frac{\sqrt{2(NC_2+C_4)V}}{T}. \end{align*} Substituting $V=1/\varepsilon$ and $T\geq1/\varepsilon^2$ into the above inequality gives, for all $l\in\{1,2,\cdots,L\}$, \begin{align*} \frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{z^n_l[t]}\leq& d_l + \left(2\|\gamma\|+\sqrt{2(NC_1+C_3)}\right)\varepsilon + \sqrt{2(NC_2+C_4)}\varepsilon^{3/2}\\ =&d_l+\mathcal{O}(\varepsilon). \end{align*} Finally, substituting $V=1/\varepsilon$ and $T\geq1/\varepsilon^2$ into \eqref{final-dpp-2} gives \[\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]}\leq f_*+\mathcal{O}(\varepsilon),\] finishing the proof. \end{proof} \section{Simulation Study in Energy-aware Scheduling}\label{section-application} Here, we apply the algorithm introduced in Section \ref{section:algorithm} to deal with the energy-aware scheduling problem described in Section \ref{sec:application}. To be specific, we consider a scenario with 5 homogeneous servers and 3 different classes of jobs, i.e. $N=5$ and $L=3$. We assume that each server can only choose one class of jobs to serve during each frame.
Thus, the mode set $\mathcal{M}^n$ contains three actions $\{1,2,3\}$, where action $i$ stands for serving the $i$-th class of jobs, and the number of served jobs is counted at the end of each service duration. The action $m^n_k$ determines the following quantities: \begin{itemize} \item The uniformly distributed total number of class $l$ jobs that can be served with expectation $\expect{\left.\sum_{t\in\mathcal{T}^n_k}\mu^n_l[t]\right|~m^n_k}:=\widehat{\mu}^n_l(m^n_k)$. \item The geometrically distributed service duration $H^n_k$ slots with expectation $\expect{\left.H^n_k\right|~m^n_k}:=\widehat{H}^n(m^n_k)$. \item The energy consumption $\widehat{e}^n(m^n_k)$ for serving all these jobs. \item The geometrically distributed idle/setup time of $I^n_k$ slots, with constant energy consumption $p^n$ per slot and no job service, where $\expect{\left.I^n_k\right|~m^n_k}:=\widehat{I}^n(m^n_k)$. \end{itemize} The idle/setup cost is $p^n=3$ units per slot and the rest of the parameters are listed in Table 1. Following the algorithm description in Section \ref{section:algorithm}, the proposed algorithm has the queue updating rule \[Q_l[t+1]=\max\left\{Q_l[t]+\lambda_l[t]-\sum_{n=1}^N\mu^n_l[t],~0\right\},\] and each system minimizes \eqref{DPP-ratio} in each frame, which can be written as \begin{align*} \min_{m^n_k\in\mathcal{M}^n}\frac{V\left(\widehat{e}^n(m^n_k)+p^n\widehat{I}^n(m^n_k)\right) -\dotp{\mathbf{Q}[t^n_k]}{\widehat{\mu}^n(m^n_k)}}{\widehat{H}^n(m^n_k)+\widehat{I}^n(m^n_k)}.
\end{align*} \begin{table} \begin{center} \caption{Problem parameters} \begin{tabular}{c|c|c|c|c|c} \hline & $\lambda_i$ & $\widehat{H}^n(i)$ & $\widehat{\mu}^n(i)$ & $\widehat{e}^n(i)$ & $\widehat{I}^n(i)$\\ \hline Class 1 & 2 & 5.5 & 15 (Uniform $[9,21]\cap\mathbb{N}$) & 16 & 2.5 \\ \hline Class 2 & 3 & 4.6 & 21 (Uniform $[15,27]\cap\mathbb{N}$) & 20 & 4.3\\ \hline Class 3 & 4 & 3.8 & 17 (Uniform $[11,23]\cap\mathbb{N}$) & 13 & 3.7\\ \hline \end{tabular} \end{center} \end{table} Each plot for the proposed algorithm is obtained by running 1 million slots and taking the time average as the algorithm's performance. The benchmark is the optimal stationary performance obtained by performing a change of variable and solving a linear program, knowing the arrival rates (see also \cite{Neely12} for details). Fig. \ref{fig:Stupendous2} shows that as the trade-off parameter $V$ gets larger, the time average energy consumption under the proposed algorithm approaches the optimal energy consumption. Fig. \ref{fig:Stupendous3} shows that as $V$ gets large, the time average number of services also approaches the optimal service rate for each class of jobs. In Fig. \ref{fig:Stupendous4}, we plot the time average queue backlog for each class of jobs versus the $V$ parameter. We see that the queue backlog for the first class is always low, whereas the remaining queue backlogs scale up linearly with $V$. This is because the service rate for the first class is always strictly larger than the arrival rate, whereas for the other classes, as $V$ gets larger, the service rates approach the arrival rates. This plot, together with Fig. \ref{fig:Stupendous2}, also demonstrates that $V$ is indeed a trade-off parameter which trades queue backlog for near optimality.
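For concreteness, the per-frame decision rule and the queue update used in this simulation can be sketched in a few lines of Python (an illustrative sketch only, not the code used to generate the plots; the function names are ours, and the parameters are taken from Table 1):

```python
# Illustrative sketch of one scheduling decision, using Table 1 parameters.
LAMBDA = [2, 3, 4]        # arrival rates lambda_i (jobs/slot)
H_HAT  = [5.5, 4.6, 3.8]  # expected service durations H^(i) (slots)
MU_HAT = [15, 21, 17]     # expected jobs served per frame mu^(i)
E_HAT  = [16, 20, 13]     # energy per service period e^(i)
I_HAT  = [2.5, 4.3, 3.7]  # expected idle/setup slots I^(i)
P_IDLE = 3                # idle/setup energy per slot p^n

def choose_mode(Q, V):
    """Pick the mode minimizing the frame-based drift-plus-penalty ratio."""
    def ratio(m):
        penalty = V * (E_HAT[m] + P_IDLE * I_HAT[m])
        backlog = Q[m] * MU_HAT[m]  # <Q, mu_hat(m)>: only class m is served
        return (penalty - backlog) / (H_HAT[m] + I_HAT[m])
    return min(range(len(H_HAT)), key=ratio)

def queue_update(Q, arrivals, services):
    """One step of Q_l[t+1] = max(Q_l[t] + lambda_l[t] - sum_n mu_l^n[t], 0)."""
    return [max(q + a - s, 0.0) for q, a, s in zip(Q, arrivals, services)]

# With empty queues the decision is driven purely by the energy-per-slot ratio;
# a large class-3 backlog (index 2) flips the decision toward serving class 3.
assert choose_mode([0, 0, 0], V=10) == 0
assert choose_mode([0, 0, 500], V=10) == 2
assert queue_update([5, 5, 5], LAMBDA, [0, 0, 17]) == [7, 8, 0]
```

The large-$V$ regime weights the energy penalty heavily, while long queues pull the ratio negative for the backlogged class, which mirrors the backlog-versus-optimality trade-off seen in the figures.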
\begin{figure}[htbp] \centering \includegraphics[height=3in]{chapter2/energy-V} \caption{Time average energy consumption versus the $V$ parameter over 1 million slots.} \label{fig:Stupendous2} \end{figure} \begin{figure}[htbp] \centering \includegraphics[height=3in]{chapter2/service-V} \caption{Time average services versus the $V$ parameter over 1 million slots.} \label{fig:Stupendous3} \end{figure} \begin{figure}[htbp] \centering \includegraphics[height=3in]{chapter2/Q-V} \caption{Time average queue size versus the $V$ parameter over 1 million slots.} \label{fig:Stupendous4} \end{figure} \section{Additional Lemmas and Proofs}\label{appendix-proof} \subsection{Proof of Lemma \ref{convex-lemma}} \begin{proof} We first prove the convexity of $\mathcal{P}^n$. Consider any two points $(f_1,\mathbf{g}_1),~(f_2,\mathbf{g}_2)\in\mathcal{P}^n$. We aim to show that for any $q\in(0,1)$, $(qf_1+(1-q)f_2,q\mathbf{g}_1+(1-q)\mathbf{g}_2)\in\mathcal{P}^n$. Notice that by definition of $\mathcal{P}^n$, there exist $(y_1,\mathbf{z}_1,T_1),~(y_2,\mathbf{z}_2,T_2)\in\mathcal{S}^n$ such that $f_1=y_1/T_1$, $\mathbf{g}_1=\mathbf{z}_1/T_1$, $f_2=y_2/T_2$, and $\mathbf{g}_2=\mathbf{z}_2/T_2$. Thus, it is enough to show \begin{equation}\label{convex-combo} \left(q\frac{y_1}{T_1}+(1-q)\frac{y_2}{T_2},q\frac{\mathbf{z}_1}{T_1}+(1-q)\frac{\mathbf{z}_2}{T_2}\right)\in\mathcal{P}^n. \end{equation} To show this, we make a change of variable by letting $p=\frac{qT_2}{(1-q)T_1+qT_2}$. It is obvious that $p\in(0,1)$. Furthermore, $q=\frac{pT_1}{pT_1+(1-p)T_2}$ and \begin{align*} &q\frac{y_1}{T_1}+(1-q)\frac{y_2}{T_2}=\frac{py_1+(1-p)y_2}{pT_1+(1-p)T_2},\\ &q\frac{\mathbf{z}_1}{T_1}+(1-q)\frac{\mathbf{z}_2}{T_2} =\frac{p\mathbf{z}_1+(1-p)\mathbf{z}_2}{pT_1+(1-p)T_2}.
\end{align*} Since $\mathcal{S}^n$ is convex, $$(py_1+(1-p)y_2,~p\mathbf{z}_1+(1-p)\mathbf{z}_2 ,~pT_1+(1-p)T_2)\in\mathcal{S}^n.$$ Thus, by definition of $\mathcal{P}^n$ again, \eqref{convex-combo} holds and the first part of the proof is finished. To show the second part of the claim, let $$\mathcal{Q}^n: = \left\{ \left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right) : \alpha^n\in\mathcal{A}^n \right\} = \left\{ \left(\widehat{y}^n(\alpha^n)\left/\widehat{T}^n(\alpha^n)\right.,~\widehat{\mathbf{z}}^n(\alpha^n)\left/\widehat{T}^n(\alpha^n)\right)\right. : \alpha^n\in\mathcal{A}^n \right\}$$ and let $\text{conv}(\mathcal{Q}^n)$ be the convex hull of $\mathcal{Q}^n$. First of all, by Definition \ref{PR-def}, \[\mathcal{P}^n=\left\{\left(y/T,~\mathbf{z}/T\right):~(y,\mathbf{z},T)\in\mathcal{S}^n\right\}\subseteq\mathbb{R}^{L+1},\] where $\mathcal{S}^n$ is the convex hull of $\left\{\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf{z}}^n(\alpha^n),~\widehat{T}^n(\alpha^n)\right):~\alpha^n\in\mathcal{A}^n\right\}$; thus, in view of the definition of $\mathcal{Q}^n$, we have $\mathcal{Q}^n\subseteq\mathcal{P}^n$. Since both $\mathcal{P}^n$ and $\text{conv}(\mathcal{Q}^n)$ are convex, and since by definition of the convex hull (\cite{rockafellar2015convex}) $\text{conv}(\mathcal{Q}^n)$ is the smallest convex set containing $\mathcal{Q}^n$, we have $\text{conv}(\mathcal{Q}^n)\subseteq\mathcal{P}^n$. To show the reverse inclusion $\mathcal{P}^n\subseteq\text{conv}(\mathcal{Q}^n)$, note that any point in $\mathcal{P}^n$ can be written in the form $\left( \frac{y}{T},\frac{\mathbf{z}}{T} \right)$, where $(y,\mathbf{z},T)\in\mathcal{S}^n$.
Since $\mathcal{S}^n$ by definition is the convex hull of $$\left\{\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf{z}}^n(\alpha^n),~\widehat{T}^n(\alpha^n)\right):~\alpha^n\in\mathcal{A}^n\right\}\subseteq\mathbb{R}^{L+2},$$ by the definition of convex hull, $(y,\mathbf{z},T)$ can be written as a convex combination of $m$ points in the above set. Let $\left\{\left(\widehat{y}^n(\alpha^n_i),~\widehat{\mathbf{z}}^n(\alpha^n_i),~\widehat{T}^n(\alpha^n_i)\right)\right\}_{i=1}^m$ be these points, so that \begin{align*} &(y,\mathbf{z},T) = \sum_{i=1}^m p_i\cdot\left(\widehat{y}^n(\alpha^n_i),~\widehat{\mathbf{z}}^n(\alpha^n_i),~\widehat{T}^n(\alpha^n_i)\right),\\ &p_i\geq0,~~\sum_{i=1}^mp_i = 1. \end{align*} As a result, we have \[ \left( \frac{y}{T},\frac{\mathbf{z}}{T} \right) =\left( \frac{\sum_{i=1}^m p_i\widehat{y}^n(\alpha^n_i)}{\sum_{i=1}^m p_i\widehat{T}^n(\alpha^n_i)},\frac{\sum_{i=1}^m p_i\widehat{\mathbf{z}}^n(\alpha^n_i)}{\sum_{i=1}^m p_i\widehat{T}^n(\alpha^n_i)} \right). \] We make a change of variable by letting $q_j = \frac{p_j\widehat{T}^n(\alpha^n_j)}{\sum_{i=1}^m p_i\widehat{T}^n(\alpha^n_i)},~\forall j=1,2,\cdots,m$; then, $$p_j = \frac{q_j}{\widehat{T}^n(\alpha^n_j)}\cdot\sum_{i=1}^m p_i\widehat{T}^n(\alpha^n_i),$$ and it follows that \[ \left( \frac{y}{T},\frac{\mathbf{z}}{T} \right) =\sum_{i=1}^m q_i\cdot\left( \frac{ \widehat{y}^n(\alpha^n_i)}{\widehat{T}^n(\alpha^n_i)},\frac{\widehat{\mathbf{z}}^n(\alpha^n_i)}{\widehat{T}^n(\alpha^n_i)} \right) = \sum_{i=1}^m q_i\cdot\left(\widehat{f}^n(\alpha^n_i),~\widehat{\mathbf{g}}^n(\alpha^n_i)\right). \] Since $\sum_{i=1}^mq_i = 1$ and $q_i\geq0$, it follows that any point in $\mathcal{P}^n$ can be written as a convex combination of a finite number of points in $ \mathcal{Q}^n$, which implies $\mathcal{P}^n\subseteq\text{conv}(\mathcal{Q}^n)$. Overall, we have $\mathcal{P}^n=\text{conv}(\mathcal{Q}^n)$. Finally, by Assumption \ref{compact-assumption}, we have $\mathcal{Q}^n=\left\{ \left(\widehat{f}^n(\alpha^n),~\widehat{\mathbf{g}}^n(\alpha^n)\right) : \alpha^n\in\mathcal{A}^n \right\}$ is compact.
Thus, $\mathcal{P}^n$, being the convex hull of a compact set, is also compact. \end{proof} \subsection{Proof of Lemma \ref{bound-lemma-1}} \begin{proof} We prove bound \eqref{bound-1} (\eqref{bound-2} is proved similarly). By definition of $\widehat{f}^n(\alpha^n)$ in Definition \ref{PV-def}, we have for any $\alpha^n\in\mathcal{A}^n$, \[\widehat{f}^n(\alpha^n)=\frac{\expect{\left.\sum_{t\in\mathcal{T}^n_k}y^n[t]\right|~\alpha^n_k=\alpha^n}}{\expect{T^n_k|~\alpha^n_k=\alpha^n}},\] thus, \[\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(\widehat{f}^n(\alpha^n_k)-y^n[t]\right)\right|~\alpha^n_k=\alpha^n}=0.\] By the renewal property of the system, given $\alpha^n_k=\alpha^n$, $T^n_k$ and $\sum_{t\in\mathcal{T}^n_k}y^n[t]$ are independent of the past information before $t^n_k$. Thus, the same equality holds if conditioning also on $\mathcal F_k^n $, i.e. \[\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(\widehat{f}^n(\alpha^n_k)-y^n[t]\right)\right|~\alpha^n_k=\alpha^n,~\mathcal F_k^n }=0.\] Hence, \[\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(\widehat{f}^n(\alpha^n_k)-y^n[t]\right)\right|~\mathcal F_k^n }=0.\] By the definition of $f^n[t]$, this further implies that \[\expect{\left.\sum_{t\in\mathcal{T}^n_k}\left(f^n[t]-y^n[t]\right)\right|~\mathcal F_k^n }=0.\] Since $|y^n[t]|\leq y_{\max}$ and $\expect{T^n_k}\leq\sqrt{B}$, it follows $\expect{\left|\sum_{t\in\mathcal{T}^n_k}\left(f^n[t]-y^n[t]\right)\right|}<\infty$, and the process $\{F^n_K\}_{K=0}^\infty$ defined as \[F^n_K=\sum_{k=0}^{K-1}\sum_{t\in\mathcal{T}^n_k}\left(f^n[t]-y^n[t]\right),~K\geq1,\] with $F^n_0=0$, is a \textit{martingale}. Consider any fixed $T\in\mathbb{N}$ and define $S^n[T]$ as the number of renewals up to $T$. Lemma \ref{valid-stopping-time} shows $S^n[T]$ is a valid stopping time with respect to the filtration $\{\mathcal F_k^n \}_{k=0}^\infty$. Furthermore, $\{F^n_{K\wedge S^n[T]}\}_{K=0}^\infty$ is a supermartingale by Theorem \ref{stopping-time}, where $a\wedge b:=\min\{a,b\}$.
For this fixed $T$, we have \begin{align*} \expect{\sum_{t=0}^{T-1}\left(f^n[t]-y^n[t]\right)} =&\expect{\sum_{t=0}^{t^n_{S^n[T]}-1}\left(f^n[t]-y^n[t]\right)} -\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}\left(f^n[t]-y^n[t]\right)}\\ =&\expect{F^n_{S^n[T]}}-\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}\left(f^n[t]-y^n[t]\right)}. \end{align*} Since the number of renewals is always bounded by the number of slots at any time, i.e. $S^n[T]\leq T+1$, it follows \[\expect{F^n_{S^n[T]}}=\expect{F^n_{(T+1)\wedge S^n[T]}}\leq 0.\] On the other hand, \begin{align*} \left|\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}\left(f^n[t]-y^n[t]\right)}\right| \leq\expect{t^n_{S^n[T]}-T}\cdot2y_{\max}\leq2y_{\max}\sqrt{B}, \end{align*} where the last inequality follows from Assumption \ref{bounded-assumption} for the residual life time. Thus, \[\expect{\sum_{t=0}^{T-1}\left(f^n[t]-y^n[t]\right)}\leq2y_{\max}\sqrt{B}.\] Dividing both sides by $T$ finishes the proof. \end{proof} \subsection{Proof of Lemma \ref{lemma:filtration}} \begin{proof} Recall that $t^n_k$ is the time slot where the $k$-th renewal occurs ($k=0,1,2,\cdots$); then, it follows from the definition of stopping time (\cite{Durrett}) that $\{t^n_k\}_{k=0}^\infty$ is a sequence of stopping times with respect to $\{\mathcal{F}[t]\}_{t=0}^{\infty}$ satisfying $t^n_k<t^n_{k+1},~\forall k$. Thus, by definition of $\mathcal F_k^n $, for any set $A\in\mathcal F_k^n $, \[A\cap\{t^n_{k+1}\leq t\}=A\cap\{t^n_k\leq t\}\cap\{t^n_{k+1}\leq t\}\in\mathcal{F}[t].\] Thus, $A\in\mathcal F_{k+1}^n $, which implies $\mathcal F_k^n \subseteq\mathcal F_{k+1}^n ,~\forall k$, and $\{\mathcal F_k^n \}_{k=0}^\infty$ is indeed a filtration. This finishes the first part of the proof. Next, we would like to show that $G_{t^n_k}(Z^n[0],\cdots,Z^n[t^n_k-1])$ is measurable with respect to $\mathcal F_k^n ,~\forall k\geq1$, i.e. $\left\{G_{t^n_k}(Z^n[0],\cdots,Z^n[t^n_k-1])\in B\right\}\in\mathcal F_k^n $, for any Borel set $B\subseteq\mathbb{R}$.
By definition of $\mathcal F_k^n $, this is equivalent to showing $\{G_{t^n_k}(Z^n[0],\cdots,Z^n[t^n_k-1])\in B\} \cap \{t^n_k\leq s\}\in\mathcal{F}[s]$ for any slot $s\geq0$. For $s=0$, this is obvious because $ \{t^n_k\leq 0\}=\emptyset,~\forall k\geq1$. Consider any $s\geq1$, \begin{align*} &\left\{G_{t^n_k}(Z^n[0],\cdots,Z^n[t^n_k-1])\in B\right\} \cap \{t^n_k\leq s\}\\ &= \bigcup_{i=1}^{s}\left(\left\{G_{i}(Z^n[0],\cdots,Z^n[i-1]) \in B\right\}\bigcap\{t^n_k = i\}\right)\\ &= \bigcup_{i=1}^{s}\left(\left\{(Z^n[0],\cdots,Z^n[i-1]) \in G^{-1}_i(B)\right\}\bigcap\{t^n_k = i\}\right) \in\mathcal{F}[s], ~\forall k\geq1, \end{align*} where the last step follows from the assumption that the random variable $Z^n[t-1]$ is measurable with respect to $\mathcal{F}[t]$ for any $t>0$ and $t^n_k$ is a stopping time with respect to $\{\mathcal{F}[t]\}_{t=0}^{\infty}$ for all $k\geq1$. This gives the second part of the claim. \end{proof} \subsection{Proof of Lemma \ref{valid-stopping-time}} \begin{proof} We aim to prove $\{S^n[T]= k\}\in\mathcal F_k^n ,~\forall k\in\mathbb{N}$. First of all, recall that the index of the renewal starts from $k=0$ and $t^n_0=0$; thus, for any $k\in\mathbb{N}$, $\{S^n[T]=k\} = \{t^n_k> T\}\cap\{t^n_{k-1}\leq T\}$, and for any $t\in\mathbb{N}$, \begin{align} \{S^n[T]=k\}\cap\{t^n_k\leq t\} =&\{t^n_k> T\}\cap\{t^n_{k-1}\leq T\}\cap\{t^n_k\leq t\}. \label{the-set} \end{align} Consider two cases as follows: \begin{enumerate} \item $t\leq T$. In this case, the set \eqref{the-set} is empty and obviously belongs to $\mathcal{F}[t]$. \item $t>T$. In this case, we have $\{t^n_k> T\}\cap\{t^n_k\leq t\}=\{T<t^n_k\leq t\}\in\mathcal{F}[t]$ as well as $\{t^n_{k-1}\leq T\}\in\mathcal{F}[T]\subseteq\mathcal{F}[t]$. Thus, the set \eqref{the-set} belongs to $\mathcal{F}[t]$. \end{enumerate} Overall, we have $\{S^n[T]=k\}\cap\{t^n_k\leq t\}\in \mathcal{F}[t],~\forall t\in\mathbb{N}$.
Thus, $\{S^n[T]=k\}\in\mathcal F_k^n $ and $S^n[T]$ is indeed a valid stopping time with respect to the filtration $\{\mathcal F_k^n \}_{k=0}^\infty$. \end{proof} \subsection{Proof of Lemma \ref{stationary-lemma}} \begin{proof} To prove the first part of the claim, we define the following notation: let \[\bigoplus_{n=1}^N\mathcal{P}^n:=\left\{\sum_{n=1}^N\mathbf{p}_n:~\mathbf{p}_n\in\mathcal{P}^n,~\forall n \right\}\] denote the Minkowski sum of the sets $\mathcal{P}^n,~n\in\{1,2,\cdots,N\}$, and for any sequence $\{\mathbf{x}[t]\}_{t=0}^\infty$ taking values in $\mathbb{R}^d$, let $$\limsup_{T\rightarrow\infty}\mathbf{x}[T]:= \left(\limsup_{T\rightarrow\infty}x_1[T],~\cdots,\limsup_{T\rightarrow\infty}x_d[T]\right)$$ denote the vector of entrywise $\limsup$s. By definition, any vector in $\oplus_{n=1}^N\mathcal{P}^n$ can be constructed from $\otimes_{n=1}^N\mathcal{P}^n$; thus, it is enough to show that there exists a vector $\mathbf{r}^*\in\oplus_{n=1}^N\mathcal{P}^n$ such that $r_0^*=f^*$ and the rest of the entries $r^*_l\leq d_l,~l=1,2,\cdots,L$. By the feasibility assumption for \eqref{prob-1}-\eqref{prob-2}, we can consider \textit{any algorithm that achieves the optimality} of \eqref{prob-1}-\eqref{prob-2} and the corresponding process $\{(f^n[t],\mathbf{g}^n[t])\}_{t=0}^\infty$ defined in Lemma \ref{bound-lemma-1} for any system $n$. Notice that $(f^n[t],\mathbf{g}^n[t])\in\mathcal{P}^n,~\forall n,~\forall t$.
This follows because, by the definition of $\widehat{f}^n(\alpha^n)$ and $\widehat{\mathbf{g}}^n(\alpha^n)$ in Definition \ref{PV-def}, \begin{align*} f^n[t] =& \widehat{f}^n(\alpha^n) = \widehat{y}^n(\alpha^n)/\widehat{T}^n(\alpha^n), ~~\textrm{if}~t\in\mathcal{T}^n_k,\alpha^n_k=\alpha^n\\ \mathbf{g}^n[t] =& \widehat{\mathbf{g}}^n(\alpha^n)= \widehat{\mathbf z}^n(\alpha^n)/\widehat{T}^n(\alpha^n),~~\textrm{if}~t\in\mathcal{T}^n_k,\alpha^n_k=\alpha^n, \end{align*} and $\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf z}^n(\alpha^n),~\widehat{T}^n(\alpha^n)\right) \in\mathcal{S}^n$. By definition of $\mathcal{P}^n$ in Definition \ref{PR-def}, $(f^n[t],\mathbf{g}^n[t])\in\mathcal P^n,~\forall n,~\forall t$. Since $\mathcal{P}^n$ is convex by Lemma \ref{convex-lemma}, it follows that $\left(\expect{f^n[t]},\expect{\mathbf{g}^n[t]}\right)\in\mathcal{P}^n,~\forall n,~\forall t$. Hence, \[\left(\frac1T\sum_{t=0}^{T-1}\expect{f^n[t]},~\frac1T\sum_{t=0}^{T-1}\expect{\mathbf{g}^n[t]}\right)\in\mathcal{P}^n,~\forall T, \forall n.\] This further implies that \[ \mathbf{r}(T):=\left(\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{f^n[t]},~\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{\mathbf{g}^n[t]}\right)\in\bigoplus_{n=1}^N\mathcal{P}^n. \] By Lemma \ref{convex-lemma}, $\mathcal P^n$ is compact in $\mathbb{R}^{L+1}$. Thus, $\oplus_{n=1}^N\mathcal{P}^n$ is also compact. This implies that the sequence $\{\mathbf{r}(T)\}_{T=1}^\infty$ has at least one limit point, and any such limit point is contained in $\oplus_{n=1}^N\mathcal{P}^n$.
We consider a specific limit point of $\{\mathbf{r}(T)\}_{T=1}^\infty$ denoted as $\mathbf{r}^*\in\oplus_{n=1}^N\mathcal{P}^n$, with the first entry denoted as $r_0^*$ satisfying $$r_0^* = \limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{f^n[t]}.$$ Then the remaining entries of $\mathbf{r}^*$ must satisfy \[r_l^*\leq\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{g^n_l[t]}, ~\forall l\in\{1,2,\cdots,L\}.\] Now, by Lemma \ref{bound-lemma-1}, we can connect the $\limsup$ with respect to $f^n[t]$ and $\mathbf{g}^n[t]$ to that of $y^n[t]$ and $\mathbf{z}^n[t]$ as follows: \begin{align*} &\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]}\\ =&\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\left(\expect{y^n[t]-f^n[t]}+\expect{f^n[t]}\right)\\ =&\lim_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]-f^n[t]} +\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{f^n[t]}\\ =&\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{f^n[t]}, \end{align*} where the last equality holds because, by Lemma \ref{bound-lemma-1}, the first limit exists and equals $0$. Similarly, we can show that \[ \limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{\mathbf{z}^n[t]} =\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{\mathbf{g}^n[t]}.\] Thus, by our preceding assumption that the algorithm under consideration achieves the optimality of \eqref{prob-1}-\eqref{prob-2}, we have \begin{align*} &r_0^*=\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{y^n[t]}=f^*\\ &r_l^*\leq\limsup_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\sum_{n=1}^N\expect{z_l^n[t]} \leq d_l,~\forall l\in\{1,2,\cdots,L\}. \end{align*} Overall, we have shown that $\mathbf{r}^*\in\oplus_{n=1}^N\mathcal{P}^n$ achieves the optimality of \eqref{prob-1}-\eqref{prob-2}, and the first part of the lemma is proved.
To prove the second part of the lemma, we show that any point in $\oplus_{n=1}^N\mathcal{P}^n$ is achievable by the corresponding time averages of some algorithm. Specifically, consider the following class of \textit{randomized stationary algorithms}: for each system $n$, at the beginning of the $k$-th frame, the controller independently chooses an action $\alpha^n_k$ from the set $\mathcal{A}^n$ with a fixed probability distribution. Thus, the action sequence $\{\alpha^n_k\}_{k=0}^{\infty}$ resulting from any randomized stationary algorithm is i.i.d. By the renewal property of each system, $$\left\{\left(\sum_{t\in\mathcal{T}^n_k}y^n[t],~\sum_{t\in\mathcal{T}^n_k}\mathbf{z}^n[t],~T^n_k\right)\right\}_{k=0}^\infty$$ is also an i.i.d.\ process for each system $n$. Next, we show that any point in $\mathcal{S}^n$ can be achieved by the corresponding expectations of some randomized stationary algorithm. Recall that $\mathcal{S}^n$, defined in Definition \ref{PR-def}, is the convex hull of $$\mathcal{G}^n:=\left\{\left(\widehat{y}^n(\alpha^n),~\widehat{\mathbf z}^n(\alpha^n),~\widehat{T}^n(\alpha^n)\right),~\alpha^n\in\mathcal{A}^n\right\} \subseteq\mathbb{R}^{L+2}.$$ By the definition of the convex hull, any point $(y,\mathbf{z},T)\in\mathcal{S}^n$ can be written as a convex combination of a finite number of points from the set $\mathcal{G}^n$. Let $\left\{\left(\widehat{y}^n(\alpha^n_i),~\widehat{\mathbf z}^n(\alpha^n_i),~\widehat{T}^n(\alpha^n_i)\right)\right\}_{i=1}^m$ be these points; then there exists a finite sequence $\{p_i\}_{i=1}^m$ such that \begin{align*} &(y,\mathbf{z},T) = \sum_{i=1}^m p_i\cdot\left(\widehat{y}^n(\alpha^n_i),~\widehat{\mathbf z}^n(\alpha^n_i),~\widehat{T}^n(\alpha^n_i)\right),\\ &p_i\geq0,~\sum_{i=1}^mp_i=1. 
\end{align*} We can then use $\{p_i\}_{i=1}^m$ to construct the following randomized stationary algorithm: at the start of each frame $k$, the controller independently chooses action $\alpha^n_i\in\mathcal{A}^n$ with the probability $p_i$ defined above, for $i=1,2,\cdots,m$. Then, the one-shot expectation of this particular randomized stationary algorithm on system $n$ satisfies \[ \left(\expect{\sum_{t\in\mathcal{T}^n_k}y^n[t]},~\expect{\sum_{t\in\mathcal{T}^n_k}\mathbf{z}^n[t]},~\expect{T^n_k}\right)= \sum_{i=1}^m p_i\cdot\left(\widehat{y}^n(\alpha^n_i),~\widehat{\mathbf z}^n(\alpha^n_i),~\widehat{T}^n(\alpha^n_i)\right)=(y,\mathbf{z},T), \] which implies that any point in $\mathcal{S}^n$ can be achieved by the corresponding expectations of a randomized stationary algorithm. Next, by the definition of $\mathcal P^n$ in Definition \ref{PR-def}, any $(\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal P^n$ can be written as $(\overline{f}^n,\overline{\mathbf{g}}^n)=(y/T,\mathbf{z}/T)$, where $(y,\mathbf{z},T)\in\mathcal{S}^n$. Thus, it is achievable as the ratio of one-shot expectations of a randomized stationary algorithm, i.e. \[ \frac{\expect{\sum_{t\in\mathcal{T}^n_k}y^n[t]}}{\expect{T^n_k}}=\frac{y}{T}=\overline{f}^n,~~ \frac{\expect{\sum_{t\in\mathcal{T}^n_k}\mathbf{z}^n[t]}}{\expect{T^n_k}}=\frac{\mathbf{z}}{T} =\overline{\mathbf{g}}^n. \] Now we claim that, for the $y^n[t]$, $\mathbf{z}^n[t]$ and $T^n_k$ resulting from the randomized stationary algorithm, \begin{align} &\lim_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\expect{y^n[t]}=\frac{\expect{\sum_{t\in\mathcal{T}^n_k}y^n[t]}}{\expect{T^n_k}},\label{lln-1}\\ &\lim_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\expect{\mathbf{z}^n[t]}=\frac{\expect{\sum_{t\in\mathcal{T}^n_k}\mathbf{z}^n[t]}}{\expect{T^n_k}}.\label{lln-2} \end{align} We prove \eqref{lln-1}; \eqref{lln-2} is shown in a similar way. Consider any fixed $T$, and let $S^n[T]$ be the number of renewals up to (and including) time $T$. 
Then, by Lemma \ref{valid-stopping-time} in Section \ref{section:limiting}, $S^n[T]$ is a valid stopping time with respect to the filtration $\{\mathcal F_k^n \}_{k=0}^\infty$. We write \begin{equation}\label{split-stop} \frac1T\sum_{t=0}^{T-1}\expect{y^n[t]}=\frac{1}{T} \expect{\sum_{k=0}^{S^n[T]}\sum_{t\in\mathcal{T}^n_k}y^n[t]}-\frac1T\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}y^n[t]}. \end{equation} For the first term on the right-hand side of \eqref{split-stop}, since $\left\{\sum_{t\in\mathcal{T}^n_k}y^n[t]\right\}_{k=0}^{\infty}$ is an i.i.d.\ process, by Wald's equality (Theorem 4.1.5 of \cite{Durrett}), \[ \frac{1}{T} \expect{\sum_{k=0}^{S^n[T]}\sum_{t\in\mathcal{T}^n_k}y^n[t]}=\expect{\sum_{t\in\mathcal{T}^n_k}y^n[t]} \cdot\frac{\expect{S^n[T]}}{T}. \] By the renewal reward theorem (Theorem 4.4.2 of \cite{Durrett}), \[ \lim_{T\rightarrow\infty}\frac{\expect{S^n[T]}}{T}=\frac{1}{\expect{T^n_k}}. \] Thus, \[ \lim_{T\rightarrow\infty}\frac{1}{T} \expect{\sum_{k=0}^{S^n[T]}\sum_{t\in\mathcal{T}^n_k}y^n[t]} =\frac{\expect{\sum_{t\in\mathcal{T}^n_k}y^n[t]}}{\expect{T^n_k}}. \] For the second term on the right-hand side of \eqref{split-stop}, by Assumption \ref{bounded-assumption}, \[ \left|\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}y^n[t]}\right|\leq y_{\max}\cdot\expect{t^n_{S^n[T]}-T} \leq\sqrt{B}y_{\max}, \] which implies $\lim_{T\rightarrow\infty}\frac1T\expect{\sum_{t=T}^{t^n_{S^n[T]}-1}y^n[t]}=0$. Overall, \eqref{lln-1} holds. To this point, we have shown that for any $n\in\{1,2,\cdots,N\}$ and any $(\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal P^n$, there exists a randomized stationary algorithm so that \begin{align*} \lim_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\expect{y^n[t]}=\overline{f}^n,~~ \lim_{T\rightarrow\infty}\frac1T\sum_{t=0}^{T-1}\expect{\mathbf{z}^n[t]}=\overline{\mathbf{g}}^n. \end{align*} 
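As a quick numerical illustration of the Wald/renewal-reward step (the frame-length and reward distributions below are invented for the sketch, not part of the model), one can check that the long-run per-slot time average equals the ratio of one-shot expectations $\expect{\sum_{t\in\mathcal{T}^n_k}y^n[t]}/\expect{T^n_k}$:

```python
import random

def time_average_reward(num_frames, seed=0):
    """Toy renewal process: each frame has an i.i.d. length T_k uniform
    on {1, 2, 3}, and every slot of that frame pays reward y[t] = T_k.
    Returns the per-slot time average of the reward."""
    rng = random.Random(seed)
    total_reward = total_slots = 0
    for _ in range(num_frames):
        T = rng.choice([1, 2, 3])     # frame length T_k
        total_reward += T * T         # sum of y[t] over the frame
        total_slots += T
    return total_reward / total_slots

# Ratio of one-shot expectations: E[sum_frame y] / E[T_k] = (14/3)/2 = 7/3.
print(time_average_reward(200_000))   # close to 7/3 ≈ 2.333
```

The agreement of the empirical time average with the ratio $\mathbb{E}[T_k^2]/\mathbb{E}[T_k]=7/3$ is exactly the content of \eqref{lln-1} for this toy example.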
Since $f^*$ is the optimal value of \eqref{prob-1}-\eqref{prob-2} over all algorithms, it follows that, for any $(\overline{f}^n,\overline{\mathbf{g}}^n)\in\mathcal P^n$, $n\in\{1,2,\cdots,N\}$, satisfying $\sum_{n=1}^N\overline{g}^n_l\leq d_l,~\forall l\in\{1,2,\cdots,L\}$, we have $\sum_{n=1}^N\overline{f}^n\geq f^*$, and the second part of the lemma is proved. \end{proof}
Accomplishments of the 7th World Wilderness Congress — author, psychiatrist, wilderness trail leader — speaking to the issue of personal growth programmes.
\chapter{Introduction} \setcounter{page}{1} This thesis studies a special type of matrix model, namely matrix field theory models. These models have implications in modern areas of mathematics and mathematical physics which seem unrelated but are connected via matrix field theory. The various implications and applications of matrix field theory to quantum field theory, quantum field theory on noncommutative geometry, 2D quantum gravity and algebraic geometry will be introduced in turn. \section{Quantum Field Theory} Nature is, at the fundamental level, governed by four interactions, described by two separate physical theories. The Standard Model describes elementary particles together with three of these interactions: the weak, the strong and the electromagnetic interaction. Quantum field theory (QFT) describes the dynamics of these elementary particles from fundamental principles. The fourth interaction is gravity, described by the theory of general relativity. General relativity is, from a mathematical point of view, rigorously understood. The achievement of Einstein was to recognise that the 4-dimensional spacetime is curved by energy densities, and that motion occurs along geodesics. This theory has been confirmed experimentally with astonishing precision, e.g. recently by the measurement of gravitational waves \cite{PhysRevLett.116.061102}. The predictions of the Standard Model are likewise verified day by day in huge particle colliders. The theoretical prediction for the anomalous magnetic moment of the electron, for instance, agrees with the experimental data up to eleven decimal digits \cite{Odom:2006zz}. However, the mathematical construction of a QFT is, independently of the particle content, hard to carry out rigorously. 
Wightman formulated these fundamental principles for a QFT on Minkowski space as natural axioms for operator-valued tempered distributions, smeared with test functions, on a separable Hilbert space \cite{Wightman:1956zz,Streater:1989vi}. The first application was to show that the 4-dimensional free scalar field indeed satisfies these axioms. Furthermore, Wightman's powerful reconstruction theorem implies that if the full set of correlation functions is known, then under certain conditions the Hilbert space and the entire quantum field theory can be reconstructed. Unfortunately, the axiomatic formulation of Wightman has one problem: no interacting QFT model satisfying these axioms has been constructed in 4D yet. An equivalent formulation of Wightman's axioms on Euclidean space, instead of Minkowski space, was found by Osterwalder and Schrader \cite{Osterwalder:1973dx,Osterwalder:1974tc}. A different approach to QFT makes use of the path integral formalism. The idea behind it is that a particle propagates between two points not along the path of extremal/minimal action, but along any path, weighted by some probability. In a QFT, the particle is described by a field; a scalar field $\phi$ can for instance be a Schwartz function $\phi\in\mathcal{S}(\R^D)$. The path integral therefore translates into a sum (or even an integral) over all field configurations of the field content of the model \cite{Popov:1984mx}. On Minkowski space this expression is not well-defined, and even on Euclidean space it has many technical issues. Nevertheless, the path (or better: functional) integral formalism can be used to approximate correlation functions around the free theory, which is called the perturbative expansion. To make the perturbative expansion well-defined, certain parameters of the model need to be adjusted (renormalised) appropriately. 
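The divergence of such expansions can be illustrated by a standard toy example (our choice, not taken from the thesis): the zero-dimensional analogue of the $\phi^4$ model, $Z(g)=(2\pi)^{-1/2}\int_{\mathbb{R}} e^{-x^2/2-gx^4}\,dx$, whose perturbative series $\sum_k (-g)^k(4k-1)!!/k!$ has factorially growing coefficients, so its partial sums first approach the exact value and then blow up.

```python
import math

def exact_Z(g, xmax=8.0, n=200_000):
    """Z(g) = (2π)^{-1/2} ∫ exp(-x²/2 - g·x⁴) dx, by the midpoint rule."""
    h = 2 * xmax / n
    s = 0.0
    for i in range(n):
        x = -xmax + (i + 0.5) * h
        s += math.exp(-x * x / 2 - g * x ** 4)
    return s * h / math.sqrt(2 * math.pi)

def dfact_odd(m):
    """(2m-1)!! = 1·3·…·(2m-1)."""
    out = 1
    for j in range(1, m + 1):
        out *= 2 * j - 1
    return out

def partial_sum(g, order):
    """Perturbative series: sum over k <= order of (-g)^k (4k-1)!!/k!,
    using (4k-1)!! = E[x^{4k}] for a standard Gaussian."""
    return sum((-g) ** k * dfact_odd(2 * k) / math.factorial(k)
               for k in range(order + 1))

g = 0.01
Z = exact_Z(g)
for order in (2, 6, 40):
    print(order, partial_sum(g, order) - Z)
# The error shrinks until order ≈ 6 and then grows factorially.
```

Here the exact function exists and the series is its asymptotic expansion; the thesis establishes the analogous (but renormalised) statement for genuine matrix field theory models.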
These approximated and renormalised correlation functions can then be compared with experiment via the LSZ reduction formula \cite{Lehmann1955}. Theory and experiment agree remarkably well. Up to now, however, it is not clear whether the approximation of a correlation function by perturbation theory converges in any sense. The number of terms in the perturbative expansion grows factorially from order to order. Furthermore, the values of the individual terms themselves increase after renormalisation (the renormalon problem), such that even Borel summability seems to be a hopeless concept \cite{PhysRev.85.631}. It will be proved in this thesis that \textit{matrix field theory} provides non-trivial examples of models which have the same issues as QFT models, but whose perturbative expansion is indeed convergent; in fact, we will determine the function it converges to. We will define the dimension of a matrix field theory model in the natural sense given by Weyl's law \cite{Weyl1911}. The entire machinery of renormalisation will be necessary, as in QFT, to obtain finite results for the perturbative expansion. We will see for selected examples that the number and the values of the terms of the perturbative expansion grow factorially, just as in QFT. The exact results for the correlation functions will be computed directly and coincide with the perturbative expansion after applying Zimmermann's forest formula to renormalise all divergences and subdivergences. From these examples, the following question arises: what are the mathematical conditions under which the perturbative expansion (in the sense of QFT) converges? \section{Quantum Field Theory on Noncommutative Geometry} As mentioned before, QFT is formulated on a flat spacetime (Minkowski or Euclidean space). Since the theory of general relativity implies a curved spacetime, a natural question is whether both theories can be combined, which is, at first sight, not the case. 
For instance, Heisenberg's uncertainty relation of quantum mechanics implies, for a spherically symmetric black hole (a solution of Einstein's field equations in general relativity), an uncertainty of the Schwarzschild radius. Applying this to a quantum field yields that the support of the quantum field cannot be localised better than the Planck scale $l_P= \sqrt{\frac{G\hbar}{c^3}}$, where $G$ is Newton's constant, $\hbar$ Planck's constant and $c$ the speed of light \cite{Misner1973}. Noncommutative geometry can avoid this gravitational collapse caused by localising events with extreme precision \cite{Doplicher:1994tu}. The coordinate uncertainties have to satisfy certain inequalities which are induced by noncommutative coordinate operators $\hat{x}^\mu$ satisfying $[\hat{x}^\mu,\hat{x}^\nu]=\mathrm{i} \hat{\Theta}^{\mu\nu}$, where $\hat{\Theta}^{\mu\nu}$ are the components of a 2-form with the properties $\langle \hat{\Theta},\hat{\Theta}\rangle=0$ and $\langle \hat{\Theta},* \hat{\Theta}\rangle =8l^4_P $ in 4D. This suggests that, if QFT and gravity (in the classical sense of general relativity) are combined, spacetime itself should be noncommutative. First examples of scalar QFTs on noncommutative spaces face, in the perturbative expansion, the problem of mixing ultraviolet and infrared divergences \cite{Minwalla:1999px}. This mixing problem was solved by adding a harmonic oscillator term depending on $\hat{\Theta}$ to the action \cite{Grosse:2004yu}. The most natural example of a scalar QFT is the quartic interacting model, the Grosse-Wulkenhaar model, which was proved to be renormalisable to all orders in perturbation theory, a necessary condition for a QFT \cite{Grosse:2004yu}. The representation of a scalar QFT model on a noncommutative space (especially on the Moyal space) is approximated in momentum space by large matrices \cite{GraciaBondia:1987kw}. 
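The matrix approximation can be made explicit in the standard oscillator-basis representation of the Moyal plane (the truncation size and the value of the noncommutativity parameter $\theta$ below are our illustrative choices): with annihilation operator $a$, the operators $\hat x^1=\sqrt{\theta/2}\,(a+a^\dagger)$ and $\hat x^2=\mathrm{i}\sqrt{\theta/2}\,(a^\dagger-a)$ satisfy $[\hat x^1,\hat x^2]=\mathrm{i}\theta$, and a finite $N\times N$ truncation reproduces this commutator except in the last diagonal entry.

```python
import numpy as np

def truncated_moyal_coords(N, theta=1.0):
    """N×N truncation of the Moyal-plane coordinate operators in the
    harmonic-oscillator basis, where a|n> = sqrt(n)|n-1>."""
    a = np.diag(np.sqrt(np.arange(1.0, N)), k=1)   # annihilation operator
    x1 = np.sqrt(theta / 2) * (a + a.T)
    x2 = 1j * np.sqrt(theta / 2) * (a.T - a)
    return x1, x2

N, theta = 6, 1.0
x1, x2 = truncated_moyal_coords(N, theta)
comm = x1 @ x2 - x2 @ x1
# comm = i·theta·diag(1, ..., 1, -(N-1)): the commutation relation holds
# on the truncated space except for the bottom-right corner entry.
print(np.round(np.diag(comm), 6))
```

The defect in the corner disappears only in the limit $N\to\infty$, which is why the QFT model is recovered only for infinitely large matrices.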
At the self-dual point \cite{Langmann:2002cc}, this type of model becomes a \textit{matrix field theory} model with a special choice of the external matrix (or better, of the Laplacian) defining the dynamics. The QFT model itself is recovered in the limit of infinitely large matrices. \section{2D Quantum Gravity} Quantum gravity constitutes a different approach to combining QFT and gravity. Spacetime, and therefore gravity itself, is quantised in the sense of a quantum field. Remarkable results were achieved for quantum gravity in 2 dimensions, since orientable manifolds of dimension 2 are Riemann surfaces, which are simpler than higher-dimensional manifolds. The quantisation of gravity implies (in the sense of the path integral formalism) an average of special weights (corresponding to the physical theory) over all geometries of Riemann surfaces. One way of doing so is to discretise the Riemann surfaces into polygons which are glued together. The dual picture of a discretisation of a Riemann surface is a ribbon graph, such that a sum over discretised Riemann surfaces can be performed as a sum over the dual ribbon graphs \cite{DiFrancesco:1993cyw}. In analogy to the perturbative expansion of QFT, ribbon graphs are generated by the Hermitian 1-matrix model. To end up with finite volumes for the Riemann surfaces in the continuum limit, the size of the polygons has to tend to zero while the number of polygons tends to infinity (double-scaling limit). Conjecturally, matrix models should provide 2-dimensional quantum gravity in this double-scaling limit, which was for a long time not rigorously understood. A second approach to 2D quantum gravity was formulated by Polyakov \cite{Polyakov:1981rd} under the name of Liouville quantum gravity. His idea was to sum over all metrics on a surface instead of summing over all surfaces. In 2 dimensions, any metric can be transformed into a conformal form, i.e. 
the metric is, after the transformation, diagonal and characterised by a scalar, the Liouville field, which can be coupled to gravity. The Jacobian arising when the metric is brought into conformal form is called the Liouville action, which is itself conformally invariant. This conformal invariance imposes strong conditions on the correlation functions, governed by representations of the Virasoro algebra (due to the conformal group). Finite representations of the conformal group are classified by Kac's table into $(p,q)$-minimal models, which implies that the partition function of a conformal field theory coupled to gravity is a $\tau$-function of the KdV hierarchy (a nonlinear partial differential equation of Painlev\'e type) \cite{DiFrancesco:639405}. Heuristic asymptotics led to the guess that the partition function of matrix models is, in the double-scaling limit, a $\tau$-function of a $(p,q)$-minimal model. In other words, the partition function of a matrix model satisfies a partial differential equation in the double-scaling limit. This conjecture was later proved rigorously (see e.g. \cite[Ch. 5]{Eynard:2016yaa}). Consequently, 2D quantum gravity was proved to be approximated by a particular discretisation of the underlying space. Interest in matrix models increased due to the relation to Liouville quantum gravity, and further examples of matrix models were investigated. The Kontsevich model \cite{Kontsevich:1992ti} had an even higher impact; it is the first non-trivial example of a \textit{matrix field theory}, on which the attention of this thesis lies. Its ribbon graph expansion consists of weighted graphs with only trivalent vertices. Unexpectedly, the Kontsevich model was proved to be equivalent, in the limit of infinite matrix size, to the Hermitian 1-matrix model for a certain choice of the parameters, the so-called Miwa transformation (or Kontsevich times) \cite{Ambjorn:1993sj}. 
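The counting of ribbon graphs by matrix integrals can be seen numerically in the simplest, Gaussian, case (the normalisation and matrix sizes below are our choice for the sketch): the large-$N$ moments $\frac1N\langle\operatorname{Tr} M^{2k}\rangle$ of a GUE matrix count planar ribbon graphs with one vertex, i.e. non-crossing pairings, and hence converge to the Catalan numbers $1,2,5,14,\dots$

```python
import numpy as np

def gue(N, rng):
    """Sample an N×N GUE matrix normalised so that E|M_ij|² = 1/N."""
    A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    return (A + A.conj().T) / (2 * np.sqrt(N))

rng = np.random.default_rng(0)
N, trials = 200, 50
moments = np.zeros(4)
for _ in range(trials):
    M = gue(N, rng)
    P = np.eye(N, dtype=complex)
    for k in range(4):
        P = P @ M @ M                  # P = M^(2(k+1))
        moments[k] += P.trace().real / N
moments /= trials
print(np.round(moments, 2))   # ≈ Catalan numbers 1, 2, 5, 14
```

The $1/N^2$ corrections to these moments count ribbon graphs of higher genus, which is the genus expansion underlying the double-scaling limit.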
Hence, the Kontsevich model, as the first non-trivial example of a matrix field theory, agrees with the $\tau$-function of the KdV hierarchy and is therefore also a counterintuitive approximation of 2D quantum gravity. \section{Algebraic Geometry} A third approach to 2D quantum gravity goes back to concepts of algebraic geometry. This approach (so-called topological gravity) tries to take the sum over all Riemann surfaces up to holomorphic reparametrisations. The set of Riemann surfaces of a given topology modulo holomorphic reparametrisation is called the moduli space, which is a finite-dimensional complex variety. For the purposes of quantum gravity, an integral over the moduli space (or better, its compactification) should be performed. A volume form on the moduli space is constructed by wedging the Chern classes of the line bundles which are naturally constructed from the cotangent spaces at the marked points of the Riemann surface. If these forms are of top dimension, then the integral over the compactified moduli space provides a nonvanishing rational number, called the intersection number. These numbers are topological invariants characterising the corresponding moduli space. The original motivation of integrating over moduli spaces, coming from 2D quantum gravity, inspired Witten to his famous conjecture \cite{Witten:1990hr} that the generating function of the intersection numbers of stable Riemann surfaces ($=$ stable complex curves) is a $\tau$-function of the KdV hierarchy. Since Liouville quantum gravity is also governed by the KdV hierarchy, this means that the approach of Liouville quantum gravity and the approach of topological gravity are equivalent. This conjecture was proved by Kontsevich \cite{Kontsevich:1992ti} by relating the generating function, for a special choice of the formal parameters (Kontsevich times), to the weighted ribbon graphs generated by the Kontsevich model. 
As mentioned before, the Kontsevich model is the easiest example of a \textit{matrix field theory} and satisfies, via the connection to the Hermitian 1-matrix model, the PDEs of the KdV hierarchy. Intensive studies of matrix models have shown that also the correlation functions (and not only the partition functions) of the Hermitian 1-matrix model and the Kontsevich model are related: they obey the same type of recursive relations, the so-called topological recursion. The beauty of topological recursion is that, for given initial data (the spectral curve), it universally produces symmetric meromorphic functions \cite{Eynard:2007kz}. In the case of matrix models, these are the correlation functions of the corresponding model. Topological recursion provides a modern formulation of the equivalence between algebraic-geometric quantities and geometric models. In the last few years, special choices of the spectral curve have produced via topological recursion numbers of algebraic-geometric significance, e.g. Hurwitz numbers \cite{Bouchard:2007hi}, Weil-Petersson volumes of moduli spaces \cite{Mirzakhani:2006fta}, Gromov-Witten invariants \cite{Dunin-Barkowski:2013wca} and Jones polynomials of knot theory \cite{Borot:2012cw}. Since the simplest matrix field theory model, the Kontsevich model, is known to obey topological recursion \cite{Eynard:2007kz}, a natural question is whether other matrix field theory models also obey topological recursion (or a generalisation of it) and what their algebraic-geometric meanings are. The Hermitian 2-matrix model, for example, fulfils a generalised form of topological recursion \cite{Eynard:2007gw}, where the algebraic-geometric meaning is still open. We will give hints that this model is possibly related to the Grosse-Wulkenhaar model. 
\section{Outline of the Thesis} The thesis starts in \sref{Ch.}{ch:matrix} with a general introduction to matrix field theory. The basic definitions are given for the action of a matrix field theory, the partition function and the expectation values. To build intuition for these models, \sref{Sec.}{Sec:Pert} explains the perturbative expansion in detail. The general procedure for obtaining equations and identities between expectation values (Schwinger-Dyson equations and Ward-Takahashi identities) is described in \sref{Sec.}{Sec:SDE}. In \sref{Sec.}{Sec:LargeLimit}, a scaling limit is performed which provides matrix field theory models of spectral dimension greater than 0 in the sense of QFT. For this limit, renormalisation (\sref{Sec.}{Sec:Renorm}) is necessary, a technique developed by physicists. The perturbative expansion of a renormalised matrix field theory needs a careful treatment by Zimmermann's forest formula (\sref{Sec.}{Sec.Zimmer}) to avoid all divergences in the scaling limit. The chapter closes with \sref{Sec.}{Sec.Moyal}, which shows the explicit construction of a QFT on the noncommutative Moyal space from a matrix field theory model. \sref{Ch.}{chap:cubic} is dedicated to the simplest matrix field theory model with cubic interaction, the Kontsevich model. This model is solved completely in \sref{Sec.}{Sec:CubicSolution}, which means that an algorithm is given to compute exactly any correlation function for any spectral dimension $\D<8$. For spectral dimension $\D\geq 8$ the Kontsevich model is nonrenormalisable. The main theorems for the algorithm are \sref{Theorem}{finaltheorem} and \sref{Theorem}{thm:G-residue}. The free energies (and therefore the intersection numbers on the moduli space of stable complex curves) are determined in \sref{Sec.}{Sec:CubicFreeEnergy} via a Laplacian. The case of quartic interaction (known as the Grosse-Wulkenhaar model) is developed in \sref{Ch.}{ch:quartic}. 
The complete set of Schwinger-Dyson equations is derived in \sref{Sec.}{Sec.quartSD}. The initial step in computing all correlation functions starts for the quartic model with the 2-point correlation function, described in \sref{Sec.}{Sec.quartSolution}. The exact solution of this function is given in \sref{Theorem}{prop:HT} for spectral dimension $\D<6$; the two important special cases of finite matrices and of the 4-dimensional Moyal space are explained in \sref{Sec.}{sec.fm} and \sref{Sec.}{Sec.4dSol}, respectively. We give in \sref{Sec.}{Sec.quartHO} an outline for the correlation functions of higher topology. In the planar case with one boundary (of arbitrary length), the entire combinatorial structure is analysed in \sref{Sec.}{Sec.quartRec}. To keep the thesis fluently readable, many technical details are deferred to the appendix. Basic properties of the Moyal space and the description of Schwinger functions on it are found in \sref{App.}{App:Moyal} and \sref{App.}{App:Schwinger}, respectively. The proof of \sref{Theorem}{finaltheorem} is split into several lemmata in \sref{App.}{appendixC}. An important cross-check for the validity of the results is derived in \sref{App.}{App:Pert} by perturbative calculations with Feynman graphs and Zimmermann's forest formula. Additionally, the perturbative analysis of the quartic model on the 4-dimensional Moyal space is discussed in much more detail in \sref{App.}{App:Solv}. Examples of the combinatorial constructions used in \sref{Sec.}{Sec.quartRec} are given in \sref{App.}{App:Expl}. The last appendix, \sref{App.}{App:3C}, provides a multi-matrix field theory model which interestingly shares properties of both models, the cubic model of \sref{Ch.}{chap:cubic} and the quartic model of \sref{Ch.}{ch:quartic}.
\begin{document} \title[Branching processes in Markovian environment]{The survival probability of critical and subcritical branching processes in finite state space Markovian environment} \author{Ion Grama} \curraddr[Grama, I.]{ Universit\'{e} de Bretagne-Sud, LMBA UMR CNRS 6205, Vannes, France} \email{ion.grama@univ-ubs.fr} \author{Ronan Lauvergnat} \curraddr[Lauvergnat, R.]{Universit\'{e} de Bretagne-Sud, LMBA UMR CNRS 6205, Vannes, France} \email{ronan.lauvergnat@univ-ubs.fr} \author{\'Emile Le Page} \curraddr[Le Page, \'E.]{Universit\'{e} de Bretagne-Sud, LMBA UMR CNRS 6205, Vannes, France} \email{emile.le-page@univ-ubs.fr} \date{\today} \subjclass[2000]{ Primary 60J80. Secondary 60J10. } \keywords{Branching process in Markovian environment, Markov chain, Survival probability, Critical and subcritical regimes} \begin{abstract} Let $(Z_n)_{n\geq 0}$ be a branching process in a random environment defined by a Markov chain $(X_n)_{n\geq 0}$ with values in a finite state space $\bb X$ starting at $X_0=i \in\mathbb X.$ We extend the classical classification of branching processes into critical, strongly subcritical, intermediate subcritical and weakly subcritical cases from the i.i.d.\ environment to the Markovian one. In all these cases, we study the asymptotic behaviour of the probability that $Z_n>0$ as $n\to+\infty$. \end{abstract} \maketitle \section{Introduction} The Galton-Watson branching process is one of the most widely used models in population dynamics. It has numerous applications in different areas such as biology, medicine, physics and economics; for an introduction we refer to Harris \cite{harris2002theory} or Athreya and Ney \cite{athreya_branching_1972} and to the references therein. 
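For orientation, recall the textbook fact (not specific to this paper) that the extinction probability $q$ of a Galton-Watson process is the smallest fixed point in $[0,1]$ of the offspring generating function $f$, and can be computed by iterating $q_{n+1}=f(q_n)$ from $q_0=0$. A small sketch with Poisson offspring, our illustrative choice of law:

```python
import math

def extinction_probability(f, tol=1e-12, max_iter=10_000):
    """Smallest fixed point of the offspring generating function f on [0,1],
    obtained by iterating q <- f(q) starting from q = 0."""
    q = 0.0
    for _ in range(max_iter):
        q_next = f(q)
        if abs(q_next - q) < tol:
            break
        q = q_next
    return q_next

# Poisson(m) offspring has generating function f(s) = exp(m (s - 1)).
q_sub = extinction_probability(lambda s: math.exp(0.8 * (s - 1)))  # mean 0.8
q_sup = extinction_probability(lambda s: math.exp(1.5 * (s - 1)))  # mean 1.5
print(q_sub, q_sup)   # 1.0 (certain extinction) and ≈ 0.417
```

In the subcritical and critical cases $q=1$, so the quantity of interest is not whether but how fast $\mathbb{P}(Z_n>0)$ tends to zero, which is the subject of this paper in the random-environment setting.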
A significant advance in the theory and in applications was made with the introduction of branching processes in which the offspring distributions vary according to a random environment, see Smith and Wilkinson \cite{smith_branching_1969} and Athreya and Karlin \cite{athreya1971branching1, athreya1971branching2}. This allowed more adequate modeling and turned out to be very fruitful from the practical as well as from the mathematical point of view. The recent advances in the study of conditioned limit theorems for sums of functions defined on Markov chains in \cite{grama_conditioned_2016}, \cite{GLLP_affine_2016}, \cite{grama_limit_2016-1} and \cite{GLLP_CLLT_2017} open the way to treating some unsolved questions in the case of Markovian environments. The problem we are interested in here is the asymptotic behaviour of the survival probability. Assume first that on the probability space $\left( \Omega, \scr F, \bb P \right)$ we are given a branching process $\left( Z_n \right)_{n\geq 0}$ in a random environment represented by the i.i.d.\ sequence $\left( X_n \right)_{n\geq 0}$ with values in the space $\mathbb X.$ Let $f_i(\cdot)$ be the probability generating function of the offspring distribution of $\left( Z_n \right)_{n\geq 0}$, given that the value of the environment is $i\in \mathbb X.$ In a remarkable series of papers, Afanasyev \cite{afanasyev_limit_2009}, Dekking \cite{dekking_survival_1987}, Kozlov \cite{kozlov_asymptotic_1977}, Liu \cite{liu1996survival}, D'Souza and Hambly \cite{dsouza_survival_1997}, Geiger and Kersting \cite{geiger_survival_2001}, Guivarc'h and Liu \cite{guivarch_proprietes_2001} and Geiger, Kersting and Vatutin \cite{geiger_limit_2003}, under various assumptions, determined the asymptotic behaviour as $n\to+\infty$ of the survival probability $\mathbb P (Z_n>0)$. 
Let $\phi(\lambda)$ be the Laplace transform of the random variable $\ln f'_{X_1}(1)$: $\phi(\ll)=\bb E \left(e^{\ll \ln f'_{X_1}(1)} \right)$, $\ll \in \bb R,$ where $\mathbb E$ is the expectation pertaining to $\mathbb P$. Depending on the values of the derivatives $\phi'(0)=\mathbb E(\ln f'_{X_1}(1))$ and $\phi'(1)=\mathbb E(f'_{X_1}(1)\ln f'_{X_1}(1))$, and under some additional moment assumptions on the variables $\ln f'_{X_1}(1)$ and $Z_1,$ the following asymptotic results have been established. In the critical case, $\phi'(0)=0$, it was shown in \cite{kozlov_asymptotic_1977} and \cite{geiger_survival_2001} that $\mathbb P (Z_n>0)\sim \frac{c}{\sqrt{n}}$; hereafter $c$ stands for a constant and $\sim$ means equivalence of sequences as $n\to +\infty.$ The behaviour in the subcritical case, $\phi'(0)<0$, turns out to depend on the value of $\phi'(1)$. The strongly subcritical case, $\phi'(0)<0$ \& $\phi'(1)<0$, has been studied in \cite{dsouza_survival_1997} and \cite{guivarch_proprietes_2001}, where it was shown that $\mathbb P (Z_n>0) \sim c \phi(1)^n$, with $0< \phi(1)=\mathbb E f'_{X_1}(1)<1$. In the intermediate and weakly subcritical cases, $\phi'(0)<0$ \& $\phi'(1)=0$ and $\phi'(0)<0$ \& $\phi'(1)>0$, respectively, it was shown in \cite{geiger_limit_2003} that $\mathbb P (Z_n>0)\sim c n^{-1/2} \phi(1)^n$ and $\mathbb P (Z_n>0)\sim c n^{-3/2}\phi(\ll)^n$, where $\ll$ is the unique critical point of $\phi$: $\phi'(\ll)=0.$ The goal of the present paper is to determine the asymptotic behaviour as $n\to +\infty$ of the survival probability $\mathbb P_i (Z_n>0)$ when the environment $\left( X_n \right)_{n\geq 0}$ is a Markov chain with values in a finite state space $\mathbb X.$ Hereafter $\mathbb P_i$ and $\mathbb E_i$ are the probability and the expectation generated by the trajectories of $\left( X_n \right)_{n\geq 0}$ starting at $X_0=i \in \bb X.$ Set $\rho(i) = \ln f_i'(1)$, $i \in \bb X$. 
Consider the associated Markov walk $S_n = \sum_{k=1}^n \rho\left( X_k \right)$, $n \geq 0$. In the case of a Markovian environment, the behaviour of the survival probability $\mathbb P_i (Z_n>0)$ depends on the function \[ k(\ll) := \lim_{n\to +\infty} \bb E_i^{1/n} \left( \e^{\ll S_n} \right), \] which is well defined, analytic in $\ll \in \bb R$ and does not depend on $i \in \bb X$ (see Section \ref{nenuphar}). In some sense the function $k$ plays the same role as the function $\phi$ in the case of an i.i.d.\ environment. Let us briefly present the main results of the paper. Under appropriate conditions, we establish the asymptotic behaviour of the survival probability $\mathbb P_i (Z_n>0)$ according to the following classification: \begin{itemize} \item Critical case: if $k'(0)=0$, then, for any $i,j \in \bb X$, \[ \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \underset{n \to +\infty}{\sim} \frac{\bs \nu (j) u(i)}{\sqrt{n}}, \] where $u(i)$ is a constant depending on $i$ and $\bs \nu$ is the stationary probability measure of the Markov chain $\left( X_n \right)_{n\geq 0}$. \item Strongly subcritical case: if $k'(0)<0$ and $k'(1)<0$, then, for any $i,j \in \bb X$, \[ \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \underset{n \to +\infty}{\sim} v_1(i)u(j) k(1)^n, \] where $u(j)$ and $v_1(i)$ depend only on $j$ and $i$, respectively. \item Intermediate subcritical case: if $k'(0)<0$ and $k'(1)=0$, then, for any $i,j \in \bb X$, \[ \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \underset{n \to +\infty}{\sim} v_1(i) u(j) \frac{k(1)^n}{\sqrt{n}}, \] where $v_1(i)$ and $u(j)$ depend only on $i$ and $j$, respectively. 
\item Weakly subcritical case: if $k'(0)<0$ and $k'(1)>0$, then, for any $i,j \in \bb X$, \[ \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \underset{n \to +\infty}{\sim} k(\ll)^n \frac{u(i,j)}{n^{3/2}}, \] where $u(i,j)$ depends only on $i$ and $j$ and $\ll$ is the critical point of $k$: $k'(\ll)=0.$ \end{itemize} The critical case has been considered by Le Page and Ye \cite{le_page_survival_2010} in a more general setting. However, the conditions in their paper do not cover the present situation and the employed method is different from ours. From the results of Section \ref{nenuphar} it follows that the classification stated above coincides with the usual classification for branching processes when the environment is i.i.d. Indeed, Lemma \ref{mulet} implies that $k'(0) = \mathbb E_{\bs \nu} \left( \ln f_{X_1}'(1) \right)$, where $\mathbb E_{\bs \nu} $ is the expectation generated by the finite dimensional distributions of the Markov chain $\left( X_n \right)_{n\geq 0}$ in the stationary regime. For an i.i.d.\ environment this is exactly $\mathbb E(\ln f'_{X_1}(1))=\phi'(0).$ The value $k'(1)$ can also be related to the first moment of the random variable $\ln f'_{X_1}(1)$. For this we need the transfer operator $\bf P_{\ll}$ related to the Markov chain $\left( X_n \right)_{n\geq 0}$, see Section \ref{nenuphar} for details. The normalized transfer operator $\tbf P_{\ll}$ generates a Markov chain whose invariant probability is denoted by $\tbs \nu_{\ll}.$ Again by Lemma \ref{mulet}, it holds that $\frac{k'(1)}{k(1)} = \tbb E_{\tbs \nu_{1}} \left( \ln f_{X_1}'(1) \right)$, where $\tbb E_{\tbs \nu_{\ll}} $ is the expectation generated by the finite dimensional distributions of the Markov chain $( X_n )_{n\geq 0}$ with transition probabilities $\tbf P_{\ll}$ in the stationary regime. For an i.i.d.\ environment, we have $\frac{k'(1)}{k(1)} =\frac{\mathbb E \left( f_{X_1}'(1) \ln f_{X_1}'(1) \right)}{\mathbb E \left( f_{X_1}'(1) \right)}=\frac{\phi'(1)}{\phi(1)},$ which has the same sign as $\phi'(1)$ and shows that both classifications are equivalent. 
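To make this connection transparent, let us verify directly that in the particular case where the environment $\left( X_n \right)_{n\geq 0}$ is an i.i.d.\ sequence, the function $k$ reduces to the Laplace transform $\phi$. Indeed, by independence, for any $\ll \in \bb R$ and $n \geq 1$,
\[
\bb E_i \left( \e^{\ll S_n} \right) = \prod_{k=1}^{n} \bb E \left( \e^{\ll \ln f_{X_k}'(1)} \right) = \phi(\ll)^n,
\]
so that $k(\ll) = \phi(\ll)$ for all $\ll \in \bb R$.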
Let us now briefly explain the approach of the paper. We start with a well-known relation between the survival probability $\mathbb P_i(Z_n>0)$ and the associated random walk $\left( S_n \right)_{n\geq 0}$ which goes back to Agresti \cite{agresti_bounds_1974} and which we adapt to the Markovian environment as follows: for any initial state $X_0=i,$ \begin{equation} \label{Agre001} \mathbb P_i(Z_n>0) = \mathbb E_i (q_n), \quad \mbox{where} \quad q_n^{-1}= \e^{-S_n} + \sum_{k=0}^{n-1} \e^{-S_k} \eta_{k+1,n} \end{equation} and under the assumptions of the paper the random variables $\eta_{k+1,n}$ are bounded. Our proof is essentially based on three tools: conditioned limit theorems for Markov chains which have been obtained recently in \cite{grama_limit_2016-1} and \cite{GLLP_CLLT_2017}, the exponential change of measure which is defined with the help of the transfer operator, see Guivarc'h and Hardy \cite{guivarch_theoremes_1988}, and the duality for Markov chains which we develop in Section \ref{batailleBP}. Let us first consider the critical case. Let $\tau_y$ be the first moment when the random walk $\left(y+ S_n \right)_{n\geq 0}$ becomes non-positive. In the critical case, one can show that only the trajectories that stay positive (i.e.\ when $\tau_y>n$) have an impact on the survival probability, so that the probability $\sqrt{n}\bb P_i\left( Z_n>0, \tau_y\leq n \right)$ is negligible as $n\to+\infty$ and $y\to+\infty$. This permits us to replace the expectation $\sqrt{n}\mathbb E_i (q_n)$ by $\sqrt{n}\bb E_i\left( q_n \,;\, \tau_y > n \right)=\sqrt{n}\bb E_i\left( \sachant{q_n}{\tau_y > n} \right) \bb P_i\left(\tau_y > n \right)$. The asymptotic behaviour of $\sqrt{n}\mathbb P_i\left(\tau_y > n \right)$ is given in \cite{grama_limit_2016-1} and using the local limit theorem from \cite{GLLP_CLLT_2017} we show that the expectation $\bb E_i\left( \sachant{q_n}{\tau_y > n} \right)$ converges to a positive constant. The subcritical case is much more delicate. 
Using the normalized transfer operator $\tbf P_{\ll}$ we apply a change of the probability measure, say $\tbb P_i$, under which \eqref{Agre001} reduces to the study of the expectation \[ k(\ll)^n \tbb E_i \left( e^{-\ll S_n} q_n \right). \] Choosing $\ll=1,$ we have $\tbb E_i \left( e^{- S_n} q_n \right) = \tbb E^*_i \left( q^*_n \right)$, where $\tbb E^*_i$ is the expectation generated by the dual Markov walk $\left( S^*_n \right)_{n\geq 0}$, \begin{equation} \label{Agredual001} (q^*_n)^{-1}= 1 + \sum_{k=1}^{n} \e^{-S^*_k} \eta^*_k \end{equation} and the random variables $\eta^*_k$ are bounded. In the strongly subcritical case the series in \eqref{Agredual001} converges by the law of large numbers for $\left( S^*_n \right)_{n\geq 0}$, so the resulting rate of convergence is determined only by $k(1)^{n}.$ To find the asymptotic behaviour of the expectation $\tbb E^*_i \left( q^*_n \right)$ in the intermediate subcritical case we proceed basically in the same way as in the critical case, which explains the appearance of the factor $n^{-1/2}$. In the weakly subcritical case we choose $\ll$ to be the critical point of $k$: $k'(\ll)=0$. We make use of the conditioned local limit theorem which, in addition to $k(\ll)^{n}$, contributes the factor $n^{-3/2}$. The outline of the paper is as follows: \begin{itemize} \item Section \ref{sec not res}: We give the necessary notations and formulate the main results. \item Section \ref{prliminrez}: Introduce the associated Markov walk and relate it to the survival probability. Introduce the dual Markov chain. State some useful assertions for walks on Markov chains conditioned to stay positive and on the transfer operator. \item Sections \ref{critcase}, \ref{lagon}, \ref{intermedcrit} and \ref{weaklysubcrit}: Proofs in the critical, strongly subcritical, intermediate subcritical and weakly subcritical cases, respectively. \end{itemize} Let us end this section by fixing some notations. 
The symbol $c$ will denote a positive constant depending on all the previously introduced constants. Sometimes, to stress the dependence of the constants on some parameters $\alpha,\beta,\dots$ we shall use the notations $ c_{\alpha}, c_{\alpha,\beta},\dots$. All these constants may change their values at every occurrence. The indicator of an event $A$ is denoted by $\mathbbm 1_A$. For any bounded measurable function $f$ on $\bb X$, any random variable $X$ in some measurable space $\bb X$ and any event $A$, the integral $\int_{\bb X} f(x) \bb P (X \in \dd x, A)$ means the expectation $\bb E\left( f(X); A\right)=\bb E \left(f(X) \mathbbm 1_A\right)$. \section{Notations and main results} \label{sec not res} Assume that $\left( X_n \right)_{n\geq 0}$ is a homogeneous Markov chain defined on the probability space $\left( \Omega, \scr F, \bb P \right)$ with values in the finite state space $\bb X$. Let $\scr C$ be the set of functions from $\bb X$ to $\bb C$. Denote by $\bf P$ the transition operator of the chain $(X_n)_{n\geq 0}$: $ \bf P g(i) = \bb E_i \left( g(X_1) \right), $ for any $g \in \scr C$ and $i \in \bb X$. Set $\bf P(i,j) = \bf P(\delta_j)(i)$, where $\delta_j(i) = 1$ if $i = j$ and $\delta_j(i) = 0$ otherwise. Note that the iterated operator $\bf P^n$, $n \geq 0$, is given by $ \bf P^ng(i) = \bb E_i \left( g(X_n) \right). $ Let $\bb P_i$ be the probability on $\left( \Omega, \scr F \right)$ generated by the finite dimensional distributions of the Markov chain $\left( X_n \right)_{n\geq 0}$ starting at $X_0 = i$. Denote by $\bb E$ and $\bb E_i$ the expectations associated with $\bb P$ and $\bb P_i$ respectively. We assume in the sequel that $\left( X_n \right)_{n\geq 0}$ is irreducible and aperiodic. 
This is known to be equivalent to the following condition: \begin{condition} \label{primitif} The matrix $\bf P$ is primitive, which means that there exists $k_0 \geq 1$ such that, for any non-negative and non-identically zero function $g\in \scr C$ and any $i \in \bb X$, \[ \bf P^{k_0} g(i) > 0. \] \end{condition} By the Perron-Frobenius theorem, under Condition \ref{primitif}, there exist positive constants $c_1$ and $c_2$, a unique positive $\bf P$-invariant probability $\bs \nu$ on $\bb X$ and an operator $Q$ on $\scr C$ such that for any $g \in \scr C$ and $n \geq 1$, \[ \bf Pg(i) = \bs \nu(g) + Q(g)(i) \qquad \text{and} \qquad \norm{Q^n(g)}_{\infty} \leq c_1\e^{-c_2n} \norm{g}_{\infty}, \] where $\bs \nu(g) := \sum_{i \in \bb X} g(i) \bs \nu(i)$, $Q \left(1 \right) = \bs \nu \left(Q(g) \right) = 0$ and $\norm{g}_{\infty}= \max_{i \in \bb X} \abs{g(i)}$. In particular, for any $(i,j) \in \bb X^2$, we have \begin{equation} \label{soeur} \abs{\bf P^n(i,j) - \bs \nu(j)} \leq c_1\e^{-c_2 n}. \end{equation} The branching process in the Markov environment $\left( X_n \right)_{n\geq 0}$ is defined with the help of a collection of generating functions \begin{equation} \label{jazz} f_i(s) := \bb E \left( s^{\xi_i} \right), \quad \forall i \in \bb X, \; s \in [0,1], \end{equation} where the random variable $\xi_i$ takes its values in $\bb N$ and represents the total offspring of one individual when the environment is $i\in \bb X.$ For any $i \in \bb X$, let $( \xi_i^{n,j} )_{j,n \geq 1}$ be independent and identically distributed random variables with common generating function $f_i$, defined on the same probability space $\left( \Omega, \scr F, \bb P \right)$. We assume that the sequence $( \xi_i^{n,j} )_{j,n \geq 1}$ is independent of the Markov chain $\left( X_n \right)_{n\geq 0}.$ Assume that the offspring distribution satisfies the following moment constraints. 
\begin{condition} \label{eglise} For any $i \in \bb X$, the random variable $\xi_i$ is non-identically zero and has a finite variance: \[ 0 < \bb E \left( \xi_i \right) \qquad \text{and} \qquad \bb E ( \xi_i^2 ) < +\infty, \qquad \forall i \in \bb X. \] \end{condition} Note that, under Condition \ref{eglise}, we have \[ \forall i \in \bb X, \qquad 0< \bb E \left( \xi_i \right) = f_i'(1) < +\infty \] and \[ \forall i \in \bb X, \qquad f_i''(1) =\bb E ( \xi_i^2 )-\bb E \left( \xi_i \right) < +\infty. \] Define the branching process $\left( Z_n \right)_{n\geq 0}$ iteratively: for each time $n=1,2,\dots$, given the environment $X_n = i$, the total offspring of each individual $j\in \{1, \dots, Z_{n-1} \}$ is given by the random variable $\xi_{i}^{n,j},$ so that the total population is \begin{equation} \label{roseau} Z_0 = 1 \qquad \text{and} \qquad Z_n = \sum_{j=1}^{Z_{n-1}} \xi_{X_n}^{n,j}, \qquad \forall n \geq 1. \end{equation} We shall consider branching processes $\left( Z_n \right)_{n\geq 0}$ in one of the following two regimes: critical or subcritical (see below for the precise definition). In both cases the probability that the population survives until the $n$-th generation tends to zero, $\bb P \left( Z_n > 0 \right) \to 0$ as $n \to +\infty$, see Smith and Wilkinson \cite{smith_branching_1970}. As noted in the introduction, when the environment is i.i.d., the question of determining the speed of this convergence was answered in \cite{geiger_survival_2001}, \cite{guivarch_proprietes_2001} and \cite{geiger_limit_2003}. The key point in establishing their results is a close relation between the branching process and the associated random walk. Let us introduce the associated Markov walk corresponding to our setting. Define the real function $\rho$ on $\bb X$ by \begin{equation} \label{pipeau} \rho(i) = \ln f_i'(1) , \qquad \forall i \in \bb X. 
\end{equation} The associated Markov walk $\left( S_n \right)_{n\geq 0}$ is defined as follows: \begin{equation} \label{petale} S_0 := 0 \qquad \text{and} \qquad S_n := \ln \left( f_{X_1}'(1) \cdots f_{X_n}'(1) \right) = \sum_{k=1}^n \rho\left( X_k \right), \quad \forall n \geq 1. \end{equation} In order to state the precise results we need one more condition, namely that the Markov walk $(S_n)_{n\geq 0}$ is non-lattice: \begin{condition} \label{cathedrale} For any $(\theta,a) \in \bb R^2$, there exist $x_0, \dots, x_n$ in $\bb X$ such that \[ \bf P(x_0,x_1) \cdots \bf P(x_{n-1},x_n) \bf P(x_n,x_0) > 0 \] and \[ \rho(x_0) + \cdots + \rho(x_n) - (n+1)\theta \notin a\bb Z. \] \end{condition} The following function plays an important role in determining the asymptotic behaviour of the branching processes when the environment is Markovian. It will be shown in Section \ref{nenuphar} that under Conditions \ref{primitif} and \ref{cathedrale}, for any $\ll \in \bb R$ and any $i \in \bb X$, the following limit exists and does not depend on the initial state of the Markov chain $X_0=i$: \[ k(\ll) := \lim_{n\to +\infty} \bb E_i^{1/n} \left( \e^{\ll S_n} \right). \] Let us recall some facts about the function $k$ which will be discussed in detail in Section \ref{nenuphar} and which are used here for the formulation of the main results. The function $k$ is closely related to the so-called transfer operator $\bf P_{\ll}$ which is defined for any $\ll \in \bb R$ on $\scr C$ by the relation \begin{equation} \label{transfoper} \bf P_{\ll}g(i) := \bf P\left( \e^{\ll \rho} g \right)(i) = \bb E_i \left( \e^{\ll S_1} g(X_1) \right), \quad \mbox{for}\quad g \in \scr C, i \in \bb X. \end{equation} In particular, $k(\ll)$ is an eigenvalue of the operator $\mathbf P_{\ll}$ corresponding to an eigenvector $v_{\ll}$ and is equal to its spectral radius. Moreover, the function $k(\ll)$ is analytic on $\mathbb R,$ see Lemma \ref{mulet}. 
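To give a concrete illustration of the definition \eqref{transfoper}, consider for instance a two-state environment $\bb X = \{1,2\}$. In the canonical basis of $\scr C$, the transfer operator $\bf P_{\ll}$ is represented by the $2\times 2$ matrix
\[
\begin{pmatrix}
\bf P(1,1) \e^{\ll \rho(1)} & \bf P(1,2) \e^{\ll \rho(2)} \\
\bf P(2,1) \e^{\ll \rho(1)} & \bf P(2,2) \e^{\ll \rho(2)}
\end{pmatrix},
\]
whose entries are positive provided $\bf P(i,j)>0$ for all $i,j$, and $k(\ll)$ is then its Perron-Frobenius eigenvalue.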
Note also that the transfer operator $\mathbf P_{\ll}$ is not Markov, but it can be easily normalized so that the operator $\tbf P_{\ll}g = \frac{\bf P_{\ll}(gv_{\ll})}{k(\ll)v_{\ll}}$ is Markovian. We shall denote by $\tbs \nu_{\ll}$ its unique invariant probability measure. The branching process in Markovian environment is said to be \textit{subcritical} if $k'(0)<0$, \textit{critical} if $k'(0)=0$ and \textit{supercritical} if $k'(0)>0$. This definition at first glance may appear different from what is expected in the case of branching processes with i.i.d.\ environment. With a closer look, however, the relation to the usual i.i.d.\ classification becomes clear from the following identity, which is established in Lemma \ref{mulet}: \begin{equation} \label{classifiid} k'(0) = \bs \nu(\rho) = \bb E_{\bs \nu} \left( \rho(X_1) \right) = \bb E_{\bs \nu} \left( \ln f_{X_1}'(1) \right), \end{equation} where $\mathbb E_{\bs \nu} $ is the expectation generated by the finite dimensional distributions of the Markov chain $\left( X_n \right)_{n\geq 0}$ in the stationary regime, i.e. when the starting point $X_0$ is a random variable distributed according to the $\bf P$-invariant measure $\bs \nu.$ In particular, when the environment $\left( X_n \right)_{n\geq 0}$ is just an i.i.d.\ sequence of random variables with common law $\bs \nu$, it follows from \eqref{classifiid} that the two classifications coincide. We proceed to formulate our main result in the critical case. \begin{theorem}[Critical case] \label{prince} Assume Conditions \ref{primitif}-\ref{cathedrale} and \[ k'(0) = 0. \] Then, there exists a positive function $u$ on $\bb X$ such that for any $(i,j) \in \bb X^2$, \[ \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \underset{n \to +\infty}{\sim} \frac{\bs \nu (j) u(i)}{\sqrt{n}}. 
\] \end{theorem} The asymptotic behaviour of the probability that $ Z_n > 0 $ in the case of i.i.d.\ environment has been established earlier by Geiger and Kersting \cite{geiger_survival_2001} under some moment assumptions on the random variable $\rho(X_1)=\ln \left( f_{X_1}'(1) \right)$, which are weaker than our assumption of finiteness of the state space $\mathbb X.$ Since we deal with a dependent environment, Theorem \ref{prince} is not covered by the results in \cite{geiger_survival_2001}. Now we consider the subcritical case. The classification of the asymptotic behaviours of the survival time of a branching process $\left( Z_n \right)_{n\geq 0}$ in the subcritical case $k'(0)<0$ is made according to the value of $k'(1).$ We say that the branching process in Markovian environment is \textit{strongly subcritical} if $k'(0)<0, k'(1)<0$, \textit{intermediately subcritical} if $k'(0)<0, k'(1)=0$ and \textit{weakly subcritical} if $k'(0)<0, k'(1)>0$. In order to relate these definitions to the values of some moments of the random variable $\ln f_{X_1}'(1)$, we note that, again by Lemma \ref{mulet}, \begin{equation} \label{subclassif} \frac{k'(1)}{k(1)} = \tbs \nu_{1}(\rho) = \bb E_{\tbs \nu_{1}} \left( \rho(X_1) \right) = \bb E_{\tbs \nu_1} \left( \ln f_{X_1}'(1) \right), \end{equation} where $\mathbb E_{\tbs \nu_{\ll}} $ is the expectation generated by the finite dimensional distributions of the Markov chain $( X_n )_{n\geq 0}$ with transition probabilities $\tbf P_{\ll}$ in the stationary regime, i.e.\ when the starting point $X_0$ is a random variable distributed according to the unique positive $\tbf P_{\ll}$-invariant probability $\tbs \nu_{\ll}.$ Since $k(1)>0$, an equivalent classification can be done according to the value of the expectation $\bb E_{\tbs \nu_{1}} \left( \ln \left( f_{X_1}'(1) \right) \right)$. 
When the environment is an i.i.d.\ sequence of common law $\bs \nu$ we have in addition \begin{equation} \label{equivalence001} \frac{k'(1)}{k(1)} =\bb E_{\tbs \nu_{1}} \left( \ln f_{X_1}'(1) \right) =\frac{\bb E_{\bs \nu} \left( f_{X_1}'(1) \ln f_{X_1}'(1) \right)}{\bb E_{\bs \nu} \left( f_{X_1}'(1) \right)}=\frac{\phi'_{\bs \nu}(1)}{\phi_{\bs \nu}(1)}, \end{equation} where $\phi_{\bs \nu}(\ll)=\bb E_{\bs \nu} \left( e^{\ll\ln f_{X_1}'(1)} \right)$, $\ll \in \bb R.$ Since $\phi_{\bs \nu}(1)>0$, this shows that both classifications (the one according to the value of $k'(1)$ and the other according to the value of $\phi'_{\bs \nu}(1)$) for branching processes with i.i.d.\ environment are equivalent. We would like to stress that, in general, the identity \eqref{equivalence001} does not hold for a Markovian environment and therefore the function $\phi_{\bs \nu}(\ll)$ is not the appropriate one for the classification. For a Markovian environment the classification can equally be done using the function $K'(\ll)$, where $K(\ll)=\ln k(\ll),$ $\ll \in \mathbb R.$ Note that by Lemma \ref{mulet} the function $\ll \mapsto K(\ll)$ is strictly convex. In the strongly and intermediately subcritical cases, this implies that $0<k(1)<1.$ The following theorem gives the asymptotic behaviour of the survival probability jointly with the state of the Markov chain in the strongly subcritical case. \begin{theorem}[Strongly subcritical case] \label{couronne} Assume Conditions \ref{primitif}-\ref{cathedrale} and \[ k'(0) < 0, \qquad k'(1) < 0. \] Then, there exists a positive function $u$ on $\bb X$ such that for any $(i,j) \in \bb X^2$, \[ \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \underset{n \to +\infty}{\sim} k(1)^n v_1(i)u(j). \] \end{theorem} Recall that $v_1$ is the eigenfunction of the transfer operator $\bf P_{1}$ (see also Section \ref{nenuphar} eq.\ \eqref{totem} for details). Note also that in the formulation of Theorem \ref{couronne} we can drop the assumption $k'(0) < 0$, since it is implied by the assumption $k'(1) < 0$, by strict convexity of $K(\ll)$. 
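Let us note that the exponential rate $k(1)^n$ in Theorem \ref{couronne} can be anticipated by a crude upper bound. Since the random variables $\eta_{k+1,n}$ in \eqref{Agre001} are non-negative (see Subsection \ref{etang}), we have $q_n \leq \e^{S_n}$, and therefore
\[
\bb P_i \left( Z_n > 0 \right) = \bb E_i \left( q_n \right) \leq \bb E_i \left( \e^{S_n} \right) = \bf P_1^n \bf 1 (i) \leq c\, k(1)^n,
\]
where the last inequality follows from the Perron-Frobenius theory applied to the transfer operator $\bf P_1$. Theorem \ref{couronne} states that this elementary bound captures the exact exponential rate of decay.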
The corresponding result in the case when the environment is i.i.d.\ has been established by Guivarc'h and Liu \cite{guivarch_proprietes_2001} under some moment assumptions on the random variable $\rho(X_1)=\ln \left( f_{X_1}'(1) \right)$. Our result extends \cite{guivarch_proprietes_2001} to finite-state Markovian environments. A breakthrough in determining the behaviour of the survival probability for intermediate subcritical and weakly subcritical cases for branching processes with i.i.d.\ environment was made by Geiger, Kersting and Vatutin \cite{geiger_limit_2003}. Note that the original results in \cite{geiger_limit_2003} have been established under some moment assumptions on the random variable $\rho(X_1)=\ln \left( f_{X_1}'(1) \right)$. For these two cases and finite Markovian environments we give below the asymptotic behaviour of the survival probability jointly with the state of the Markov chain. \begin{theorem}[Intermediate subcritical case] \label{sceptre} Assume Conditions \ref{primitif}-\ref{cathedrale} and \[ k'(0) < 0, \qquad k'(1) = 0. \] Then, there exists a positive function $u$ on $\bb X$ such that for any $(i,j) \in \bb X^2$, \[ \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \underset{n \to +\infty}{\sim} k(1)^n \frac{v_1 (i) u(j)}{\sqrt{n}}. \] \end{theorem} As in Theorem \ref{couronne}, the assumption $k'(0) < 0$ is implied by $k'(1) = 0$, since the function $\ll \mapsto K(\ll) = \ln (k(\ll))$ is strictly convex (see Lemma \ref{mulet}). \begin{theorem}[Weakly subcritical case] \label{cape} Assume Conditions \ref{primitif}-\ref{cathedrale} and \[ k'(0) < 0, \qquad k'(1) > 0. \] Then, there exist a unique $\ll \in (0,1)$ satisfying $k'(\ll)=0$ and a positive function $u$ on $\bb X^2$ such that for any $(i,j) \in \bb X^2$, \[ \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \underset{n \to +\infty}{\sim} k(\ll)^n \frac{u(i,j)}{n^{3/2}}. 
\] \end{theorem} The existence and uniqueness of $\ll \in (0,1)$ satisfying $k'(\ll)=0$ and $0< k(\ll) < 1$ in Theorem \ref{cape} are an obvious consequence of the strict convexity of $K$. Note that Theorems \ref{prince}, \ref{couronne}, \ref{sceptre} and \ref{cape} give the asymptotic behaviour of the joint probabilities $\bb P_i \left( Z_n > 0 \,,\, X_n = j\right)$. By summing both sides of the corresponding equivalences over $j$ we obtain the asymptotic behaviour of the survival probability $\mathbb P_i \left( Z_n>0 \right)$. The corresponding results for the survival probability when the Markovian environment is in the stationary regime are easily obtained by integrating the previous ones with respect to the invariant measure $\bs \nu$. \section{Preliminary results on the associated Markov walk} \label{prliminrez} The aim of this section is to provide the necessary assertions on the Markov chain $(X_n)_{n\geq 0}$ and on the associated Markov walk $(S_n)_{n\geq 0}$ defined by \eqref{petale} and to relate them to the survival probability of $(Z_n)_{n\geq 0}$ at generation $n$. For the reader's convenience we give the outline of the section: \begin{itemize} \item Subsection \ref{etang}: Relate the branching process $(Z_n)_{n\geq 0}$ to the associated Markov walk $(S_n)_{n\geq 0}$. \item Subsection \ref{batailleBP}: Construct the dual Markov chain $( X_n^* )_{n\geq 0}$. \item Subsection \ref{flamme}: Recall results on the Markov walks conditioned to stay positive. \item Subsection \ref{nenuphar}: Introduce the transfer operator of the Markov chain $(X_n)_{n\geq 0}$ and the change of the probability measure. State the properties of the associated Markov walk $(S_n)_{n\geq 0}$ under the changed measure. \end{itemize} \subsection{The link between the branching process and the associated Markov walk} \label{etang} In this section we recall some identities on the branching process. 
Some of them are stated for the convenience of the reader and are merely adaptations to the Markovian environment of well-known statements in the i.i.d.\ case. The first one is a representation of the conditioned probability generating function given the environment: \begin{lemma}[Conditioned generating function] \label{CondGenFun} For any $s\in [0,1]$ and $n \geq 1$, \[ \bb E_i \left( \sachant{s^{Z_{n}}}{X_1, \dots, X_{n} } \right) = f_{X_1} \circ \dots \circ f_{X_n} (s). \] \end{lemma} \begin{proof} For all $s\in[0,1]$, $n \geq 1$, $(z_1, \dots, z_{n-1}) \in \bb N^{n-1}$ and $(i_1, \dots,i_{n}) \in \bb X^{n}$, by \eqref{roseau}, we have \[ \bb E_i \left( \sachant{s^{Z_{n}} }{Z_1=z_1, \dots, Z_{n-1}=z_{n-1}, X_1=i_1, \dots, X_{n} = i_{n}} \right) = \bb E \left( s^{\sum_{j=1}^{z_{n-1}} \xi_{i_{n}}^{n,j}} \right). \] Since $\left( \xi_{i_{n}}^{n,j} \right)_{j \geq 1}$ are i.i.d., by \eqref{jazz}, \[ \bb E_i \left( \sachant{s^{Z_{n}}}{Z_1=z_1, \dots, Z_{n-1}=z_{n-1}, X_1=i_1, \dots, X_{n}=i_{n}} \right) = f_{i_{n}} (s)^{z_{n-1}}. \] From this we get, \[ \bb E_i \left( \sachant{s^{Z_{n}}}{X_1=i_1, \dots, X_{n} = i_n} \right) = \bb E_i \left( \sachant{ f_{i_n} (s)^{Z_{n-1}} }{X_1=i_1, \dots, X_{n-1}=i_{n-1}} \right). \] By induction, for any $(i_1, \dots,i_{n}) \in \bb X^{n}$, \[ \bb E_i \left( \sachant{s^{Z_{n}}}{X_1=i_1, \dots, X_{n}=i_{n} } \right) = f_{i_1} \circ \dots \circ f_{i_n} (s), \] and the assertion of the lemma follows. \end{proof} For any $n \geq 1$ and $s\in [0,1]$ set \begin{equation} \label{jeux} q_n(s) := 1- f_{X_1} \circ \dots \circ f_{X_n} (s) \qquad \text{and} \qquad q_n := q_n(0). \end{equation} Lemma \ref{CondGenFun} implies that \begin{equation} \label{ange} \bb P_i \left( \sachant{Z_n > 0}{ X_1, \dots, X_{n} } \right) = q_n. 
\end{equation} Taking the expectation in \eqref{ange}, we obtain the well-known equality, which will be the starting point for our study: \begin{equation} \label{rose} \bb P_i \left( Z_n > 0\right) = \bb E_i \left( q_n \right). \end{equation} Under Condition \ref{eglise}, for any $i\in \bb X$ and $s\in [0,1)$, we have $f_i(s) \in [0,1)$. Therefore $f_{X_1} \circ \cdots \circ f_{X_n} (s) \in [0,1)$ and in particular \begin{equation} \label{histoire} q_n \in (0,1], \qquad \forall n \geq 1. \end{equation} Introduce some additional notations, which will be used throughout the paper: \begin{align} \label{montre001} f_{k,n} := f_{X_k} \circ \cdots \circ f_{X_n}, &\qquad \forall n \geq 1,\; \forall k \in \{1,\dots,n\},\\ \label{montre002} f_{n+1,n} := \id, &\qquad \forall n \geq 1, \\ \label{montre003} g_i(s) := \frac{1}{1-f_i(s)} - \frac{1}{f_i'(1)(1-s)}, &\qquad \forall i \in \bb X,\; \forall s \in [0,1), \\ \label{montre004} \eta_{k,n}(s) := g_{X_k} \left( f_{k+1,n}(s) \right), &\qquad \forall n \geq 1,\; \forall k \in \{1,\dots,n\},\; \forall s \in [0,1), \\ \label{montre005} \eta_{k,n} := \eta_{k,n}(0) = g_{X_k} \left( f_{k+1,n}(0) \right), &\qquad \forall n \geq 1,\; \forall k \in \{1,\dots,n\} . \end{align} The key point in proving our main results is the following assertion which relates the random variable $q_n(s)$ to the associated Markov walk $(S_n)_{n\geq 0}$, see \eqref{petale}. This relation is known from Agresti \cite{agresti_bounds_1974} in the case of linear fractional generating functions. It turned out to be very useful for studying general branching processes and was generalized in Geiger and Kersting \cite{geiger_survival_2001}. We adapt their argument to the case when the environment is Markovian. \begin{lemma} \label{foin} For any $s\in [0,1)$ and $n \geq 1$, \[ q_n(s)^{-1} = \frac{\e^{-S_n}}{1-s} + \sum_{k=0}^{n-1} \e^{-S_k} \eta_{k+1,n}(s). 
\] \end{lemma} \begin{proof} With the notations \eqref{montre002}-\eqref{montre005} we write for any $s\in [0,1)$ and $n \geq 1$, \begin{align*} q_n(s)^{-1} &:= \frac{1}{1-f_{X_1} \circ \cdots \circ f_{X_n} (s)} \nonumber\\ &= \frac{1}{1-f_{1,n} (s)} \nonumber\\ &= g_{X_1} \left( f_{2,n}(s) \right) + \frac{f_{X_1}'(1)^{-1}}{1-f_{2,n}(s)} \nonumber\\ &= \dots \nonumber\\ &= \frac{\left( f_{X_1}'(1) \cdots f_{X_n}'(1) \right)^{-1}}{1-s} + g_{X_1} \left( f_{2,n}(s) \right) + \sum_{k=2}^{n} \left( f_{X_1}'(1) \cdots f_{X_{k-1}}'(1) \right)^{-1} g_{X_k} \left( f_{k+1,n}(s) \right) \nonumber\\ &= \frac{\e^{-S_n}}{1-s} + \sum_{k=0}^{n-1} \e^{-S_k} \eta_{k+1,n}(s). \end{align*} \end{proof} Taking $s=0$ in Lemma \ref{foin} we obtain the following identity which will play the central role in the proofs: \begin{equation} \label{ciel} q_n^{-1} = \e^{-S_n} + \sum_{k=0}^{n-1} \e^{-S_k} \eta_{k+1,n}, \qquad \forall n \geq 1. \end{equation} Since $f_i$ is convex on $[0,1]$ for all $i \in \bb X$, the function $g_i$ is non-negative, \begin{equation} \label{yeux} g_i(s) = \frac{f_i'(1)(1-s) - \left( 1-f_i(s) \right)}{\left( 1-f_i(s) \right)f_i'(1)(1-s)} \geq 0, \qquad \forall s \in [0,1), \end{equation} which, in turn, implies that the random variables $\eta_{k+1,n}$ are non-negative for any $n \geq 1$ and $k \in \{0, \dots,n-1 \}$. \begin{lemma} Assume Condition \ref{eglise}. \label{pieuvre} For any $n \geq 2$, $( i_1, \dots, i_n) \in \bb X^n$ and $s\in[0,1)$, we have \[ 0 \leq g_{i_1} \left( f_{i_2} \circ \cdots \circ f_{i_n} (s) \right) \leq \eta := \max_{i \in \bb X} \frac{f_i''(1)}{f_i'(1)^2} < +\infty. \] Moreover, for any $( i_n )_{n \geq 1} \in \bb X^{\bb N^*}$, $s\in[0,1)$ and any $k \geq 1$, \begin{equation} \label{dorade} \lim_{n\to+\infty} g_{i_k} \left( f_{i_{k+1}} \circ \cdots \circ f_{i_n} (s) \right) \in [0,\eta]. \end{equation} \end{lemma} \begin{proof} Fix $( i_n )_{n \geq 1} \in \bb X^{\bb N^*}$. For any $i \in \bb X$ and $s \in [0,1)$, we have $f_i (s) \in [0,1)$. 
So $f_{i_2} \circ \cdots \circ f_{i_n} (s) \in [0,1)$. In addition, by \eqref{yeux}, $g_i$ is non-negative on $[0,1)$ for any $i \in \bb X$, therefore $g_{i_1} \left( f_{i_2} \circ \cdots \circ f_{i_n} (s) \right) \geq 0$. Moreover, by Lemma 2.1 of \cite{geiger_survival_2001}, for any $i \in \bb X$ and any $s\in [0,1)$, \begin{equation} \label{yeux002} g_i(s) \leq \frac{f_i''(1)}{f_i'(1)^2}. \end{equation} By Condition \ref{eglise}, $\eta < +\infty$ and so $g_{i_1} \left( f_{i_2} \circ \cdots \circ f_{i_n} (s) \right) \in [0,\eta]$, for any $s\in[0,1)$. Since $f_i$ is increasing on $[0,1)$ for any $i \in \bb X$, it follows that for any $k \geq 1$ and any $n \geq k+1$, \[ 0 \leq f_{i_{k+1}} \circ \cdots \circ f_{i_n} (s) \leq f_{i_{k+1}} \circ \cdots \circ f_{i_n} \circ f_{i_{n+1}} (s) \leq 1, \] and the sequence $\left( f_{i_{k+1}} \circ \cdots \circ f_{i_n} (s) \right)_{n\geq k+1}$ converges to a limit, say $l \in [0,1]$. For any $i \in \bb X$, the function $g_i$ is continuous on $[0,1)$ and we have \begin{align} \lim_{\substack{s\to 1\\s<1}} g_i(s) &= \lim_{\substack{s\to 1\\s<1}} \frac{f_i'(1)(1-s) - \left( 1-f_i(s) \right)}{f_i'(1)\left( 1-f_i(s) \right)(1-s)} \nonumber\\ &= \lim_{\substack{s\to 1\\s<1}} \frac{1}{f_i'(1)} \frac{f_i(s) - 1 - f_i'(1)(s-1)}{(s-1)^2} \frac{1-s}{ 1-f_i(s) } \nonumber\\ &= \frac{1}{f_i'(1)} \frac{f_i''(1)}{2} \frac{1}{ f_i'(1) } = \frac{f_i''(1)}{2 f_i'(1)^2} <+\infty. \label{champ} \end{align} Denoting $g_i(l) = \frac{f_i''(1)}{2 f_i'(1)^2}$ if $l=1$, we conclude that $g_{i_k} \left( f_{i_{k+1}} \circ \cdots \circ f_{i_n} (s) \right)$ converges to $g_{i_k}(l)$ as $n \to +\infty$. By \eqref{yeux} and \eqref{yeux002}, we obtain that $g_{i_k}(l) \in [0,\eta]$. \end{proof} \subsection{The dual Markov walk} \label{batailleBP} We will introduce the dual Markov chain $(X_n^*)_{n\geq 0}$ and the associated dual Markov walk $(S_n^*)_{n\geq 0}$ and state some of their properties. 
Since $\bs \nu$ is positive on $\bb X$, the following dual Markov kernel $\bf P^*$ is well defined: \begin{equation} \label{statueBP} \bf P^* \left( i,j \right) = \frac{\bs \nu \left( j \right)}{\bs \nu (i)} \bf P \left( j,i \right), \quad \forall (i,j) \in \bb X^2. \end{equation} Let $\left( X_n^* \right)_{n\geq 0}$ be a dual Markov chain, independent of the chain $\left( X_n \right)_{n\geq 0}$, defined on $(\Omega, \scr F, \bb P)$, living on $\bb X$ and with transition probability $\bf P^*$. We define the dual Markov walk by \begin{equation} \label{promenade001} S_0^* = 0 \qquad \text{and} \qquad S_n^* = -\sum_{k=1}^n \rho \left( X_k^* \right), \quad \forall n \geq 1. \end{equation} For any $z\in \bb R$, let $\tau_z^*$ be the associated exit time: \begin{equation} \label{promenade002} \tau_z^* := \inf \left\{ k \geq 1 : z+S_k^* \leq 0 \right\}. \end{equation} For any $i\in \bb X$, denote by $\bb P_i^*$ and $\bb E_i^*$ the probability and the expectation, respectively, generated by the finite dimensional distributions of the Markov chain $( X_n^* )_{n\geq 0}$ starting at $X_0^* = i$. It is easy to see that $\bs \nu$ is also $\bf P^*$-invariant and for any $n \geq 1$, $(i,j) \in \bb X^2$, \[ \left(\bf P^* \right)^n (i,j) = \bf P^n (j,i) \frac{\bs \nu(j)}{\bs \nu(i)}. \] This last formula implies in particular the following result. \begin{lemma} \label{sourire} Assume Conditions \ref{primitif} and \ref{cathedrale} for the Markov kernel $\bf P$. Then Conditions \ref{primitif} and \ref{cathedrale} hold also for the dual kernel $\bf P^*$. \end{lemma} Similarly to \eqref{soeur}, we have for any $(i,j) \in \bb X^2$, \begin{equation} \label{vautour} \abs{\left( \bf P^* \right)^n (i,j) - \bs \nu (j)} \leq c\e^{-cn}. \end{equation} Note that the operator $\bf P^*$ is the adjoint of $\bf P$ in the space $\LL^2 \left( \bs \nu \right) :$ for any functions $f$ and $g$ on $\bb X,$ \[ \bs \nu \left( f \left(\bf P^*\right)^n g \right) = \bs \nu \left( g \bf P^n f \right). 
\] For any measure $\mathfrak{m}$ on $\bb X$, let $\bb E_{\mathfrak{m}}$ (respectively $\bb E_{\mathfrak{m}}^*$) be the expectation associated to the probability generated by the finite dimensional distributions of the Markov chain $\left( X_n \right)_{n\geq 0}$ (respectively $\left( X_n^* \right)_{n\geq 0}$) with the initial law $\mathfrak{m}$. \begin{lemma}[Duality] \label{dualityBP} For any probability measure $\mathfrak{m}$ on $\bb X$, any $n\geq 1$ and any function $g$: $\bb X^n \to \bb C$, \[ \bb E_{\mathfrak{m}} \left( g \left( X_1, \dots, X_n \right) \right) = \bb E_{\bs \nu}^* \left( g \left( X_n^*, \dots, X_1^* \right) \frac{\mathfrak{m} \left( X_{n+1}^* \right)}{\bs \nu \left( X_{n+1}^* \right)} \right). \] Moreover, for any $n\geq 1$ and any function $g$: $\bb X^n \to \bb C$, \[ \bb E_i \left( g \left( X_1, \dots, X_n \right) \,;\, X_{n+1} = j \right) = \bb E_j^* \left( g \left( X_n^*, \dots, X_1^* \right) \,;\, X_{n+1}^* = i \right) \frac{\bs \nu(j)}{\bs \nu(i)}. \] \end{lemma} \begin{proof} The first equality is proved in Lemma 3.2 of \cite{GLLP_CLLT_2017}. The second can be deduced from the first as follows. Taking $\mathfrak{m} = \bs \delta_i$ and $\tt g(i_1,\cdots,i_n,i_{n+1}) = g(i_1,\cdots,i_n)\bbm 1_{\{ i_{n+1} = j \}}$, from the first equality of the lemma, we see that \begin{align*} \bb E_i \left( g \left( X_1, \dots, X_n \right) \,;\, X_{n+1} = j \right) &= \bb E_{\bs \nu}^* \left( \tt g \left( X_{n+1}^*, \dots, X_1^* \right) \,;\, X_{n+2}^* = i \right) \frac{1}{\bs \nu(i)} \\ &= \bb E_{\bs \nu}^* \left( g \left( X_{n+1}^*, \dots, X_2^* \right) \,;\, X_1^* = j \,,\, X_{n+2}^* = i \right) \frac{1}{\bs \nu(i)}. 
\end{align*} Since $\bs \nu$ is $\bf P^*$-invariant, we obtain \begin{align*} \bb E_i \left( g \left( X_1, \dots, X_n \right) \,;\, X_{n+1} = j \right) &= \sum_{i_1 \in \bb X} \bb E_{i_1}^* \left( g \left( X_n^*, \dots, X_1^* \right) \,;\, X_{n+1}^* = i \right) \frac{1}{\bs \nu(i)} \bbm 1_{\{ i_1 = j \}} \bs \nu(i_1) \\ &=\bb E_j^* \left( g \left( X_n^*, \dots, X_1^* \right) \,;\, X_{n+1}^* = i \right) \frac{\bs \nu(j)}{\bs \nu(i)}. \end{align*} \end{proof} \subsection{Markov walks conditioned to stay positive} \label{flamme} In this section we recall the main results from \cite{grama_limit_2016-1} and \cite{GLLP_CLLT_2017} for Markov walks conditioned to stay positive. We complement these results with some new assertions which will be used in the proofs. For any $y \in \bb R$ define the first time when the Markov walk $\left( S_n \right)_{n\geq 0}$ becomes non-positive by setting \[ \tau_y := \inf \left\{ k \geq 1 : y+S_k \leq 0 \right\}. \] Under Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$, the stopping time $\tau_y$ is well defined and finite $\bb P_i$-almost surely for any $i \in \bb X$. The following three assertions, taken from \cite{grama_limit_2016-1}, deal with the existence of the harmonic function and with the asymptotic behaviour of the exit time probability and of the law of the random walk $y+S_n$ conditioned to stay positive. \begin{proposition}[Preliminary results, part I] Assume Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$. \label{sable} There exists a non-negative function $V$ on $\bb X \times \bb R$ such that \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{sable001} For any $(i,y) \in \bb X \times \bb R$ and $n \geq 1$, \[ \bb E_i \left( V \left( X_n, y+S_n \right) \,;\, \tau_y > n \right) = V(i,y).
\] \item \label{sable002} For any $i\in \bb X$, the function $V(i,\cdot)$ is non-decreasing and for any $(i,y) \in \bb X \times \bb R$, \[ V(i,y) \leq c \left( 1+\max(y,0) \right). \] \item \label{sable003} For any $i \in \bb X$, $y > 0$ and $\delta \in (0,1)$, \[ \left( 1- \delta \right)y - c_{\delta} \leq V(i,y) \leq \left(1+\delta \right)y + c_{\delta}. \] \end{enumerate} \end{proposition} We define \begin{equation} \label{comete} \sigma^2 := \bs \nu \left( \rho^2 \right) - \bs \nu \left( \rho \right)^2 + 2 \sum_{n=1}^{+\infty} \left[ \bs \nu \left( \rho \bf P^n \rho \right) - \bs \nu \left( \rho \right)^2 \right]. \end{equation} It is known that under Conditions \ref{primitif} and \ref{cathedrale} we have $\sigma^2 > 0$, see Lemma 10.3 in \cite{GLLP_CLLT_2017}. \begin{proposition}[Preliminary results, part II] Assume Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$. \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \label{oreiller} \item \label{oreiller001} For any $(i,y) \in \bb X \times \bb R$, \[ \lim_{n\to +\infty} \sqrt{n} \bb P_i \left( \tau_y > n \right) = \frac{2V(i,y)}{\sqrt{2\pi} \sigma}, \] where $\sigma$ is defined by \eqref{comete}. \item \label{oreiller002} For any $(i,y) \in \bb X \times \bb R$ and $n\geq 1$, \[ \bb P_i \left( \tau_y > n \right) \leq c\frac{ 1 + \max(y,0) }{\sqrt{n}}. \] \end{enumerate} \end{proposition} We denote by $\supp(V) = \left\{ (i,y) \in \bb X \times \bb R : \right.$ $\left. V(i,y) > 0 \right\}$ the support of the function $V$. Note that from property \ref{sable003} of Proposition \ref{sable}, for any fixed $i\in \bb X$, the function $y \mapsto V(i,y)$ is positive for large $y$. For more details on the properties of $\supp (V)$ see \cite{grama_limit_2016-1}. \begin{proposition}[Preliminary results, part III] \label{racine} Assume Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$. \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] 
\item \label{racine001} For any $(i,y) \in \supp(V)$ and $t\geq 0$, \[ \bb P_i \left( \sachant{\frac{y+S_n}{\sigma \sqrt{n}} \leq t }{\tau_y >n} \right) \underset{n\to+\infty}{\longrightarrow} \mathbf \Phi^+(t), \] where $\bf \Phi^+(t) = 1-\e^{-\frac{t^2}{2}}$ is the Rayleigh distribution function. \item \label{racine002} There exists $\ee_0 >0$ such that, for any $\ee \in (0,\ee_0)$, $n\geq 1$, $t_0 > 0$, $t\in[0,t_0]$ and $(i,y) \in \bb X \times \bb R$, \[ \abs{ \bb P_i \left( y+S_n \leq t \sqrt{n} \sigma \,,\, \tau_y > n \right) - \frac{2V(i,y)}{\sqrt{2\pi n}\sigma} \bf \Phi^+(t) } \leq c_{\ee,t_0} \frac{\left( 1+\max(y,0)^2 \right)}{n^{1/2+\ee}}. \] \end{enumerate} \end{proposition} The next assertions are two local limit theorems for the associated Markov walk $y+S_n$ from \cite{GLLP_CLLT_2017}. \begin{proposition}[Preliminary results, part IV] Assume Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$. \label{goliane} \begin{enumerate}[ref=\arabic*, leftmargin=*, label=\arabic*.] \item \label{liane} For any $i \in \bb X$, $a>0$, $y \in \bb R$, $z \geq 0$ and any non-negative function $\psi$: $\bb X \to \bb R_+$, \begin{align*} \lim_{n\to +\infty} n^{3/2} &\bb E_i \left( \psi(X_n) \,;\, y+S_n \in [z,z+a] \,,\, \tau_y > n \right) \\ &\qquad = \frac{2V(i,y)}{\sqrt{2\pi}\sigma^3} \int_z^{z+a} \bb E_{\bs \nu}^* \left( \psi(X_1^*) V^*\left( X_1^*, z'+S_1^* \right) \,;\, \tau_{z'}^* > 1 \right) \dd z'. \end{align*} \item \label{gorilleBP} Moreover, for any $a>0$, $y \in \bb R$, $z \geq 0$, $n \geq 1$ and any non-negative function $\psi$: $\bb X \to \bb R_+$, \begin{align*} \sup_{i\in \bb X} \bb E_i &\left( \psi(X_n) \,;\, y+S_{n} \in [z,z+a] \,,\, \tau_y > n \right) \leq \frac{c \left( 1+a^3 \right)}{n^{3/2}} \norm{\psi}_{\infty} \left( 1+z \right)\left( 1+\max(y,0) \right). \end{align*} \end{enumerate} \end{proposition} Recall that the dual chain $( X_n^* )_{n\geq 0}$ is constructed independently of the chain $( X_n )_{n\geq 0}$. 
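As a simple illustration of the dual kernel \eqref{statueBP}, consider the following elementary two-state example, which is ours and plays no role in the proofs. Take $\bb X = \{1,2\}$, $a, b \in (0,1)$ and
\[
\bf P = \begin{pmatrix} 1-a & a \\ b & 1-b \end{pmatrix}, \qquad \bs \nu = \left( \frac{b}{a+b}, \frac{a}{a+b} \right).
\]
Then, for instance,
\[
\bf P^* (1,2) = \frac{\bs \nu(2)}{\bs \nu(1)} \bf P(2,1) = \frac{a}{b} \, b = a = \bf P(1,2),
\]
and likewise for the remaining entries, so that $\bf P^* = \bf P$: any two-state chain is reversible with respect to its invariant measure, and in this case the dual walk $\left( S_n^* \right)_{n\geq 0}$ has the same law as $\left( -S_n \right)_{n\geq 0}$.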
For any $(i,j) \in \bb X^2$, the probability generated by the finite dimensional distributions of the two dimensional Markov chain $(X_n,X_n^*)_{n\geq 0}$ starting at $(X_0,X_0^*)=(i,j)$ is given by $\bb P_{i,j} = \bb P_i \times \bb P_j^*$. Let $\bb E_{i,j}$ be the corresponding expectation. For any $l \geq 1$ we define $\scr C^+ \left( \bb X^l \times \bb R_+ \right)$ as the set of non-negative functions $g$: $\bb X^l \times \bb R_+ \to \bb R_+$ satisfying the following properties: \begin{itemize} \item for any $(i_1,\dots,i_l) \in \bb X^l$, the function $z \mapsto g(i_1,\dots,i_l,z)$ is continuous, \item there exists $\ee >0$ such that $\max_{i_1,\dots,i_l \in \bb X} \sup_{z\geq 0} g(i_1,\dots,i_l,z) (1+z)^{2+\ee} < +\infty$. \end{itemize} \begin{proposition}[Preliminary results, part V] Assume Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$. \label{sorcier} For any $i \in \bb X$, $y \in \bb R$, $l \geq 1$, $m \geq 1$ and $g \in \scr C^+ \left( \bb X^{l+m} \times \bb R_+ \right)$, \begin{align*} &\lim_{n\to +\infty} n^{3/2} \bb E_i \left( g \left(X_1, \dots, X_l, X_{n-m+1}, \dots, X_n, y+S_n \right) \,;\, \tau_y > n \right) \\ &\qquad = \frac{2}{\sqrt{2\pi}\sigma^3} \int_0^{+\infty} \sum_{j \in \bb X} \bb E_{i,j} \left( g \left( X_1, \dots, X_l,X_m^*,\dots,X_1^*,z \right) \right. \\ &\hspace{4cm} \left. \times V \left( X_l, y+S_l \right) V^* \left( X_m^*, z+S_m^* \right) \,;\, \tau_y > l \,,\, \tau_z^* > m \right) \bs \nu(j) \dd z. \end{align*} \end{proposition} We complete these results by determining the asymptotic behaviour of the law of the Markov chain $(X_n)_{n\geq 1}$ jointly with $\{ \tau_y > n\}.$ \begin{lemma} \label{moustique} Assume Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$. Then, for any $(i,y) \in \bb X \times \bb R$ and $j \in \bb X$, we have \[ \lim_{n\to +\infty} \sqrt{n} \bb P_{i} \left( X_n = j \,,\, \tau_y > n \right) = \frac{2V(i,y) \bs \nu (j)}{\sqrt{2\pi} \sigma}.
\] \end{lemma} \begin{proof} Fix $(i,y) \in \bb X \times \bb R$ and $j \in \bb X$. We will prove that \begin{align*} \frac{2V(i,y) \bs \nu (j)}{\sqrt{2\pi} \sigma} &\leq \liminf_{n\to+\infty} \sqrt{n} \bb P_i \left( X_n = j \,,\, \tau_y > n \right) \\ &\leq \limsup_{n\to+\infty} \sqrt{n} \bb P_i \left( X_n = j \,,\, \tau_y > n \right) \leq \frac{2V(i,y) \bs \nu (j)}{\sqrt{2\pi} \sigma}. \end{align*} \textit{The upper bound.} By the Markov property, for any $n \geq 1$ and $k=\pent{n^{1/4}}$ we have \[ \bb P_i \left( X_n = j \,,\, \tau_y > n \right) \leq \bb P_i \left( X_n = j \,,\, \tau_y > n-k \right) = \bb E_i \left( \bf P^k \left( X_{n-k},j \right) \,;\, \tau_y > n-k \right). \] Using \eqref{soeur}, we obtain that \[ \bb P_i \left( X_n = j \,,\, \tau_y > n \right) \leq \left( \bs \nu(j) + c\e^{-ck} \right) \bb P_i \left( \tau_y > n-k \right). \] Using the point \ref{oreiller001} of Proposition \ref{oreiller} and the fact that $k=\pent{n^{1/4}}$, \begin{equation} \label{mare} \limsup_{n\to +\infty} \sqrt{n} \bb P_i \left( X_n = j \,,\, \tau_y > n \right) \leq \frac{2V(i,y) \bs \nu (j)}{\sqrt{2\pi} \sigma}. \end{equation} \textit{The lower bound.} Again, let $n \geq 1$ and $k=\pent{n^{1/4}}$. We have \begin{equation} \label{canneton} \bb P_i \left( X_n = j \,,\, \tau_y > n \right) \geq \bb P_i \left( X_n = j \,,\, \tau_y > n-k \right) - \bb P_i \left( n-k < \tau_y \leq n \right). \end{equation} As for the upper bound, using the Markov property and \eqref{soeur}, \[ \bb P_i \left( X_n = j \,,\, \tau_y > n-k \right) = \bb E_i \left( \bf P^k \left( X_{n-k}, j \right) \,;\, \tau_y > n-k \right) \geq \left( \bs \nu(j) - c\e^{-ck} \right) \bb P_i \left( \tau_y > n-k \right). \] Using the point \ref{oreiller001} of Proposition \ref{oreiller} and using the fact that $k=\pent{n^{1/4}}$, \begin{equation} \liminf_{n\to+\infty} \sqrt{n} \bb P_i \left( X_n = j \,,\, \tau_y > n-k \right) \geq \frac{2V(i,y) \bs \nu (j)}{\sqrt{2\pi} \sigma}. 
\label{canard} \end{equation} Furthermore, on the event $\left\{ n-k < \tau_y \leq n \right\}$, we have \[ 0 \geq \min_{n-k < i \leq n} y+S_i \geq y+S_{n-k} - k \norm{\rho}_{\infty}, \] where $\norm{\rho}_{\infty}$ is the maximum of $\abs{\rho}$ on $\bb X$. Consequently, \begin{align*} \bb P_i \left( n-k < \tau_y \leq n \right) &\leq \bb P_i \left( y+S_{n-k} \leq c k \,,\, \tau_y > n-k \right) \\ &= \bb P_i \left( y+S_{n-k} \leq \frac{c k}{\sqrt{n-k}} \sqrt{n-k} \,,\, \tau_y > n-k \right). \end{align*} Now, using the point \ref{racine002} of Proposition \ref{racine} with $t_0 = \max_{n\geq 1} \frac{ck}{\sqrt{n-k}}$, we obtain that, for $\ee > 0$ small enough, \[ \bb P_i \left( n-k < \tau_y \leq n \right) \leq \frac{2V(i,y)}{\sqrt{2\pi (n-k)}\sigma} \left( 1-\e^{-\frac{ck^2}{2(n-k)}} \right) + c_{\ee} \frac{\left( 1+y^2 \right)}{(n-k)^{1/2+\ee}}. \] Therefore, since $k = \pent{n^{1/4}}$, \begin{equation} \label{canne} \lim_{n\to+\infty} \sqrt{n} \bb P_i \left( n-k < \tau_y \leq n \right) = 0. \end{equation} Putting together \eqref{canneton}, \eqref{canard} and \eqref{canne}, we conclude that \[ \liminf_{n\to+\infty} \sqrt{n} \bb P_i \left( X_n = j \,,\, \tau_y > n \right) \geq \frac{2V(i,y) \bs \nu (j)}{\sqrt{2\pi} \sigma}, \] which together with \eqref{mare} concludes the proof of the lemma. \end{proof} Now, with the help of the function $V$ from Proposition \ref{sable}, for any $(i,y) \in \supp(V)$, we define a new probability $\bb P_{i,y}^+$ on $\sigma\left( X_n, n \geq 1 \right)$ and the corresponding expectation $\bb E_{i,y}^+$, which are characterized by the following property: for any $n \geq 1$ and any $g$: $\bb X^n \to \bb C$, \begin{equation} \label{soif} \bb E_{i,y}^+ \left( g \left( X_1, \dots, X_n \right) \right) := \frac{1}{V(i,y)} \bb E_i \left( g\left( X_1, \dots, X_n \right) V\left( X_n, y+S_n \right) \,;\, \tau_y > n \right). 
\end{equation} The fact that $\bb P_{i,y}^+$ is a probability measure and that it does not depend on $n$ follows easily from the point \ref{sable001} of Proposition \ref{sable}. The probability $\bb P_{i,y}^+$ extends in an obvious way to the whole probability space $\left( \Omega, \scr F, \bb P \right)$. The corresponding expectation is again denoted by $\bb E_{i,y}^+$. \begin{lemma} \label{cumulus} Assume Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$. Let $m \geq 1$. For any bounded measurable function $g$: $\bb X^m \to \bb C$, any $(i,y) \in \supp(V)$ and any $j \in \bb X$, \[ \lim_{n\to +\infty} \bb E_i \left( \sachant{g\left( X_1, \dots, X_m \right) \,;\, X_n = j}{ \tau_y > n } \right) = \bb E_{i,y}^+ \left( g\left( X_1, \dots, X_m \right) \right) \bs \nu (j). \] \end{lemma} \begin{proof} For the sake of brevity, for any $(i,j) \in \bb X^2$, $y \in \bb R$ and $n \geq 1$, set \[ J_n(i,j,y) := \bb P_i \left( X_n = j \,,\, \tau_y > n \right). \] Fix $m \geq 1$ and let $g$ be a bounded measurable function $\bb X^m \to \bb C$. By the point \ref{oreiller001} of Proposition \ref{oreiller}, it is clear that for any $(i,y) \in \supp(V)$ and $n$ large enough, $\bb P_i \left( \tau_y > n \right) > 0$. By the Markov property, for any $j \in \bb X$ and $n \geq m+1$ large enough, \begin{align*} I_0 &:= \bb E_i \left( \sachant{g\left( X_1, \dots, X_m \right) \,;\, X_n = j}{ \tau_y > n } \right)\\ &= \bb E_i \left( g\left( X_1, \dots, X_m \right) \frac{J_{n-m} \left( X_m,j,y+S_m \right)}{\bb P_i \left( \tau_y > n \right)} \,;\, \tau_y > m \right). \end{align*} Using Lemma \ref{moustique} and the point \ref{oreiller001} of Proposition \ref{oreiller}, by the Lebesgue dominated convergence theorem, \begin{align*} \lim_{n\to+\infty} I_0 &= \bb E_i \left( g\left( X_1, \dots, X_m \right) \frac{V \left( X_m,y+S_m \right)}{V(i,y)} \,;\, \tau_y > m \right) \bs \nu(j) \\ &= \bb E_{i,y}^+ \left( g\left( X_1, \dots, X_m \right) \right) \bs \nu (j).
\end{align*} \end{proof} \begin{lemma} \label{soir} Assume Conditions \ref{primitif}, \ref{cathedrale} and $\bs \nu(\rho) = 0$. For any $(i,y) \in \supp(V)$, we have, for any $k \geq 0$, \[ \bb E_{i,y}^+ \left( \e^{-S_k}\right) \leq \frac{c \left( 1+\max(y,0) \right)\e^{y}}{k^{3/2} V(i,y)}. \] In particular, \[ \bb E_{i,y}^+ \left( \sum_{k=0}^{+\infty} \e^{-S_k} \right) \leq \frac{c \left( 1+\max(y,0) \right)\e^{y}}{V(i,y)}. \] \end{lemma} \begin{proof} By \eqref{soif}, for any $k \geq 1$, \[ \bb E_{i,y}^+ \left( \e^{-S_k}\right) = \bb E_i \left( \e^{-S_k} \frac{V\left( X_k, y+S_k \right)}{V(i,y)} \,;\, \tau_y > k \right). \] Using the point \ref{sable002} of Proposition \ref{sable}, \begin{align*} \bb E_{i,y}^+ \left( \e^{-S_k}\right) &\leq \e^{y} \bb E_i \left( \e^{-(y+S_k)} \frac{c\left( 1+\max \left(0,y+S_k \right) \right)}{V(i,y)} \,;\, \tau_y > k \right) \\ &= \e^{y} \sum_{p=0}^{+\infty} \bb E_i \left( \e^{-(y+S_k)} \frac{c\left( 1+\max \left(0,y+S_k \right) \right)}{V(i,y)} \,;\, y+S_k \in (p,p+1] \,,\, \tau_y > k \right) \\ &\leq \e^{y} \sum_{p=0}^{+\infty} \e^{-p} \frac{c( 1+p )}{V(i,y)} \bb P_i \left( y+S_k \in [p,p+1] \,,\, \tau_y > k \right). \end{align*} By the point \ref{gorilleBP} of Proposition \ref{goliane}, \begin{align*} \bb E_{i,y}^+ \left( \e^{-S_k}\right) &\leq \frac{c}{k^{3/2}} \sum_{p=0}^{+\infty} \e^{-p} ( 1+p )^2 \frac{\e^{y}\left( 1+\max(0,y) \right)}{V(i,y)} \\ &= \frac{c \left( 1+\max(0,y) \right)\e^{y}}{k^{3/2}V(i,y)}. \end{align*} This proves the first inequality of the lemma. Summing both sides over $k$ and using the Lebesgue monotone convergence theorem yields the second inequality of the lemma. \end{proof} \subsection{The change of measure related to the Markov walk} \label{nenuphar} In this section we shall establish some useful properties of the Markov chain under the exponential change of probability measure, which will be crucial in the proofs of the results of the paper.
For any $\ll \in \bb R$, let $\bf P_{\ll}$ be the transfer operator defined on $\scr C$ by, for any $g \in \scr C$ and $i \in \bb X$, \begin{equation} \label{ocean} \bf P_{\ll}g(i) := \bf P\left( \e^{\ll \rho} g \right)(i) = \bb E_i \left( \e^{\ll S_1} g(X_1) \right). \end{equation} From the Markov property, it follows easily that, for any $g \in \scr C$, $i \in \bb X$ and $n \geq 0$, \begin{equation} \label{balancoire} \bf P_{\ll}^n g(i) = \bb E_i \left( \e^{\ll S_n} g(X_n) \right). \end{equation} For any non-negative function $g \geq 0$, $\ll \in \bb R$, $i\in \bb X$ and $n \geq 1$, we have \begin{equation} \label{pinson} \bf P_{\ll}^n g(i) \geq \min_{(x_1, \dots, x_n) \in \bb X^n} \e^{\ll \left(\rho(x_1) + \cdots + \rho(x_n)\right)} \bf P^n g(i). \end{equation} Therefore the matrix $\bf P_{\ll}$ is primitive, \textit{i.e.}\ it satisfies Condition \ref{primitif}. By the Perron-Frobenius theorem, there exist a positive number $k(\ll) > 0$, a positive function $v_{\ll}$ : $\bb X \to \bb R_+^*$, a positive linear form $\bs \nu_{\ll}$: $\scr C \to \bb C$ and a linear operator $Q_{\ll}$ on $\scr C$ such that, for any $g \in \scr C$ and $i \in \bb X$, \begin{align} \label{marteau001} &\bf P_{\ll} g(i) = k(\ll)\bs \nu_{\ll}(g) v_{\ll}(i) + Q_{\ll}(g)(i), \\ \label{marteau002} &\bs \nu_{\ll}\left( v_{\ll} \right) = 1 \qquad \text{and} \qquad Q_{\ll} \left(v_{\ll}\right) = \bs \nu_{\ll} \left(Q_{\ll}(g) \right) = 0, \end{align} where the spectral radius of $Q_{\ll}$ is strictly less than $k(\ll)$: \begin{equation} \label{torrent} \frac{\norm{Q_{\ll}^n(g)}_{\infty}}{k(\ll)^n} \leq c_{\ll} \e^{-c_{\ll}n} \norm{g}_{\infty}. \end{equation} Note that, in particular, $k(\ll)$ is equal to the spectral radius of $\bf P_{\ll},$ and, moreover, $k(\ll)$ is an eigenvalue associated to the eigenvector $v_{\ll}$: \begin{equation} \label{totem} \bf P_{\ll} v_{\ll} (i) = k(\ll) v_{\ll}(i).
\end{equation} From \eqref{marteau001} and \eqref{marteau002}, we have for any $n \geq 1$, \begin{equation} \label{psychedelique} \bf P_{\ll}^n g(i) = k(\ll)^n \bs \nu_{\ll}(g) v_{\ll}(i) + Q_{\ll}^n(g)(i). \end{equation} By \eqref{torrent}, for any $g \in \scr C$ and $i \in \bb X$, \[ \lim_{n\to+\infty} \frac{\bf P_{\ll}^n g(i)}{k(\ll)^n} = \bs \nu_{\ll}(g) v_{\ll}(i) \] and so for any non-negative and non-identically zero function $g \in \scr C$ and $i \in \bb X$, \begin{equation} \label{colombe} k(\ll) = \lim_{n\to+\infty} \left( \bf P_{\ll}^n g(i) \right)^{1/n} = \lim_{n\to+\infty} \bb E_i^{1/n} \left( \e^{\ll S_n} g(X_n) \right). \end{equation} Note that when $\ll = 0$, we have $k(0) = 1$, $v_0(i)=1$ and $\bs \nu_0(i) = \bs \nu(i)$, for any $i \in \bb X$. However, in the general case, the operator $\bf P_{\ll}$ is no longer a Markov operator, and we define $\tbf P_{\ll}$ for any $\ll \in \bb R$ by \begin{equation} \label{lacBP} \tbf P_{\ll}g(i) = \frac{\bf P_{\ll}(gv_{\ll})(i)}{k(\ll)v_{\ll}(i)} = \frac{\bf P\left( \e^{\ll \rho}gv_{\ll} \right)(i)}{k(\ll)v_{\ll}(i)} = \frac{ \bb E_i \left( \e^{\ll S_1} g(X_1) v_{\ll}(X_1) \right)}{k(\ll)v_{\ll}(i)}, \end{equation} for any $g \in \scr C$ and $i \in \bb X$. It is clear that $\tbf P_{\ll}$ is a Markov operator: by \eqref{totem}, \[ \tbf P_{\ll}v_0(i) = \frac{\bf P_{\ll}(v_{\ll})(i)}{k(\ll)v_{\ll}(i)} = 1, \] where for any $i \in \bb X$, $v_0(i) = 1$. Iterating \eqref{lacBP} and using \eqref{balancoire}, we see that for any $n \geq 1$, $g \in \scr C$ and $i \in \bb X$, \begin{equation} \label{horizon} \tbf P_{\ll}^n g(i) = \frac{\bf P_{\ll}^n(gv_{\ll})(i)}{k(\ll)^nv_{\ll}(i)} = \frac{ \bb E_i \left( \e^{\ll S_n} g(X_n) v_{\ll}(X_n) \right)}{k(\ll)^nv_{\ll}(i)}. \end{equation} In particular, as in \eqref{pinson}, \[ \tbf P_{\ll}^n g(i) \geq \min_{(x_1, \dots, x_n) \in \bb X^n} \e^{\ll \left(\rho(x_1) + \cdots + \rho(x_n)\right)}v_{\ll}(x_n) \frac{\bf P^n g(i)}{k(\ll)^nv_{\ll}(i)}.
\] The following lemma is an easy consequence of this last inequality. \begin{lemma} \label{jument} Assume Conditions \ref{primitif} and \ref{cathedrale} for the Markov kernel $\bf P$. Then for any $\ll \in \bb R$, Conditions \ref{primitif} and \ref{cathedrale} hold also for the operator $\tbf P_{\ll}$. \end{lemma} Using \eqref{psychedelique} and \eqref{horizon}, the spectral decomposition of $\tbf P_{\ll}$ is given by \[ \tbf P_{\ll}^n g(i) = \bs \nu_{\ll} \left( gv_{\ll} \right)v_0(i) + \frac{Q_{\ll}^n(gv_{\ll})(i)}{k(\ll)^nv_{\ll}(i)} = \tbs \nu_{\ll}(g) v_0(i) + \tt Q_{\ll}^n(g)(i), \] with, for any $\ll \in \bb R$, $g \in \scr C$ and $i \in \bb X$, \begin{equation} \label{ecorce} \tbs \nu_{\ll}(g) := \bs \nu_{\ll} \left( gv_{\ll} \right) \qquad \text{and} \qquad \tt Q_{\ll}(g)(i) := \frac{Q_{\ll}(gv_{\ll})(i)}{k(\ll)v_{\ll}(i)}. \end{equation} By \eqref{marteau002}, \[ \tbs \nu_{\ll} \left( \tt Q_{\ll}(g) \right) = \bs \nu_{\ll} \left( \frac{Q_{\ll}(g v_{\ll})}{k(\ll)} \right) = 0 \qquad \text{and} \qquad \tt Q_{\ll}(v_0) = \frac{Q_{\ll}(v_{\ll})(i)}{k(\ll)v_{\ll}(i)} = 0. \] Consequently, $\tbs \nu_{\ll}$ is the positive invariant measure of $\tbf P_{\ll}$ and since by \eqref{torrent}, \[ \norm{\tt Q_{\ll}^n(g)}_{\infty} \leq \frac{\norm{Q_{\ll}^n(gv_{\ll})}_{\infty}}{k(\ll)^n \min_{i \in \bb X} v_{\ll}} \leq c_{\ll} \e^{-c_{\ll}n} \norm{g}_{\infty}, \] we can conclude that for any $(i,j) \in \bb X^2$, \[ \abs{\tbf P_{\ll}^n (i,j) - \tbs \nu_{\ll}(j)} \leq c_{\ll} \e^{-c_{\ll}n}. \] Fix $\ll \in \bb R$ and let $\tbb P_i$ and $\tbb E_i$ be the probability, respectively the expectation, generated by the finite dimensional distributions of the Markov chain $(X_n)_{n\geq 0}$ with transition operator $\tbf P_{\ll}$ and starting at $X_0=i$. 
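To illustrate the preceding construction, consider the following elementary example, which is ours and is not used in the sequel. Take $\bb X = \{1,2\}$, $\bf P(i,j) = 1/2$ for all $(i,j) \in \bb X^2$ and $\rho(1) = 1 = -\rho(2)$. Then
\[
\bf P_{\ll} = \frac{1}{2} \begin{pmatrix} \e^{\ll} & \e^{-\ll} \\ \e^{\ll} & \e^{-\ll} \end{pmatrix}, \qquad k(\ll) = \cosh (\ll), \qquad v_{\ll} = v_0, \qquad \bs \nu_{\ll} = \left( \frac{\e^{\ll}}{2\cosh (\ll)}, \frac{\e^{-\ll}}{2\cosh (\ll)} \right),
\]
so that $\tbf P_{\ll}(i,j) = \e^{\ll \rho(j)}/(2\cosh (\ll))$ and
\[
K(\ll) = \ln \cosh (\ll), \qquad K'(\ll) = \tanh (\ll) = \tbs \nu_{\ll}(\rho), \qquad K''(\ll) = \frac{1}{\cosh^2 (\ll)} > 0,
\]
in accordance with Lemma \ref{mulet} below: under $\tbf P_{\ll}$ the increments $\rho(X_n)$ become independent $\pm 1$ random variables with mean $\tanh (\ll)$.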
For any $n \geq 1$, $g$: $\bb X^n \to \bb C$ and $i \in \bb X$, \begin{equation} \label{chandelle} \tbb E_i \left( g(X_1, \dots, X_n) \right) := \frac{\bb E_i \left( \e^{\ll S_n} g(X_1, \dots, X_n) v_{\ll}(X_n) \right)}{k(\ll)^n v_{\ll}(i)}. \end{equation} We are now interested in establishing some properties of the function $\ll \mapsto k(\ll)$ which are important to distinguish between the four different cases considered in this paper. \begin{lemma} \label{mulet} Assume Conditions \ref{primitif} and \ref{cathedrale}. The function $\ll \mapsto k(\ll)$ is analytic on $\bb R$. Moreover the function $K$: $\ll \mapsto \ln\left( k(\ll) \right)$ is strictly convex and satisfies for any $\ll \in \bb R$, \begin{equation} \label{chevalBP} K'(\ll) = \frac{k'(\ll)}{k(\ll)} = \tbs \nu_{\ll} (\rho), \end{equation} and \begin{equation} \label{ane} K''(\ll) = \tbs \nu_{\ll} \left( \rho^2 \right) - \tbs \nu_{\ll} \left( \rho \right)^2 + 2 \sum_{n=1}^{+\infty} \left[ \tbs \nu_{\ll} \left( \rho \tbf P_{\ll}^n \rho \right) - \tbs \nu_{\ll} \left( \rho \right)^2 \right] =:\tt \sigma_{\ll}^2. \end{equation} \end{lemma} \begin{proof} It is clear that $\ll \mapsto \bf P_{\ll}$ is analytic on $\bb R$ and consequently, by the perturbation theory for linear operators (see for example \cite{kato_perturbation_1976} or \cite{dunford_linear_1971}) $\ll \mapsto k(\ll)$, $\ll \mapsto v_{\ll}$ and $\ll \mapsto \bs \nu_{\ll}$ are also analytic on $\bb R$. In particular, we write, for any $h \in \bb R$, \begin{align*} \bf P_{\ll+h} &= \bf P_{\ll} + h \bf P_{\ll}' + \frac{h^2}{2} \bf P_{\ll}'' + o(h^2), \\ v_{\ll+h} &= v_{\ll} + h v_{\ll}' + \frac{h^2}{2} v_{\ll}'' + o(h^2), \\ k(\ll+h) &= k(\ll) + hk'(\ll) + \frac{h^2}{2} k''(\ll) + o(h^2), \end{align*} where for any $h \in \bb R$, $o(h^2)$ refers to an operator, a function or a real number such that $o(h^2)/h^2 \to 0$ as $h \to 0$.
Since $v_{\ll+h}$ is an eigenvector of $\bf P_{\ll+h}$, we have $\bf P_{\ll+h} v_{\ll+h} = k(\ll+h) v_{\ll+h}$, and expanding in powers of $h$ gives \begin{align} \bf P_{\ll} v_{\ll} &= k(\ll) v_{\ll}, \nonumber\\ \label{vampire} \bf P_{\ll} v_{\ll}' + \bf P_{\ll}' v_{\ll} &= k(\ll) v_{\ll}' + k'(\ll) v_{\ll}, \\ \label{loupgarou} \frac{1}{2} \bf P_{\ll} v_{\ll}'' + \bf P_{\ll}' v_{\ll}' + \frac{1}{2} \bf P_{\ll}'' v_{\ll} &= \frac{1}{2} k(\ll) v_{\ll}'' + k'(\ll) v_{\ll}' + \frac{1}{2} k''(\ll) v_{\ll}. \end{align} Since $\bs \nu_{\ll}$ is an invariant measure, $\bs \nu_{\ll} \left( \bf P_{\ll}g \right) = k(\ll) \bs \nu_{\ll} (g)$ and \eqref{vampire} implies that \[ k(\ll) \bs \nu_{\ll} \left( v_{\ll}' \right) + \bs \nu_{\ll} \left( \bf P_{\ll}' v_{\ll} \right) = k(\ll) \bs \nu_{\ll} \left( v_{\ll}' \right) + k'(\ll). \] In addition, by \eqref{ocean}, $\bf P_{\ll}' v_{\ll} = \bf P_{\ll} \left( \rho v_{\ll} \right)$. Therefore, \[ k(\ll) \bs \nu_{\ll} \left( \rho v_{\ll} \right) = k'(\ll), \] which, with the definition of $\tbs \nu_{\ll}$ in \eqref{ecorce}, proves \eqref{chevalBP}. From \eqref{loupgarou} and the fact that $\bs \nu_{\ll} \left( \bf P_{\ll}g \right) = k(\ll) \bs \nu_{\ll} (g)$, we have \[ \frac{k(\ll)}{2} \bs \nu_{\ll} \left( v_{\ll}'' \right) + k(\ll) \bs \nu_{\ll} \left( \rho v_{\ll}' \right) + \frac{k(\ll)}{2} \bs \nu_{\ll} \left( \rho^2 v_{\ll} \right) = \frac{1}{2} k(\ll) \bs \nu_{\ll} \left( v_{\ll}'' \right) + k'(\ll) \bs \nu_{\ll} \left( v_{\ll}' \right) + \frac{1}{2} k''(\ll). \] So, \[ \frac{k''(\ll)}{k(\ll)} = \bs \nu_{\ll} \left( \rho^2 v_{\ll} \right) + 2 \left[ \bs \nu_{\ll} \left( \rho v_{\ll}' \right) - \frac{k'(\ll)}{k(\ll)}\bs \nu_{\ll} \left( v_{\ll}' \right) \right].
\] By \eqref{chevalBP}, we obtain that \begin{equation} \label{orient} K''(\ll) = \frac{k''(\ll)}{k(\ll)} - \left( \frac{k'(\ll)}{k(\ll)} \right)^2 = \bs \nu_{\ll} \left( \rho^2 v_{\ll} \right) - \bs \nu_{\ll}^2 \left( \rho v_{\ll} \right) + 2 \left[ \bs \nu_{\ll} \left( \rho v_{\ll}' \right) - \bs \nu_{\ll} \left( \rho v_{\ll} \right) \bs \nu_{\ll} \left( v_{\ll}' \right) \right]. \end{equation} It remains to determine $v_{\ll}'$. By \eqref{vampire}, we have \[ v_{\ll}' -\frac{\bf P_{\ll} v_{\ll}'}{k(\ll)} = \frac{\bf P_{\ll} \left( \rho v_{\ll} \right)}{k(\ll)} - \frac{k'(\ll)}{k(\ll)} v_{\ll} \] and for any $n \geq 0$, using \eqref{chevalBP}, \begin{equation} \label{rivage} \frac{\bf P_{\ll}^n v_{\ll}'}{k(\ll)^n} - \frac{\bf P_{\ll}^{n+1} v_{\ll}'}{k(\ll)^{n+1}} = \frac{\bf P_{\ll}^{n+1} \left( \rho v_{\ll} \right)}{k(\ll)^{n+1}} - \bs \nu_{\ll} \left( \rho v_{\ll} \right) v_{\ll}. \end{equation} Note that \begin{align*} \frac{\bf P_{\ll}^{n+1} \left( \rho v_{\ll} \right)}{k(\ll)^{n+1}} - \bs \nu_{\ll} \left( \rho v_{\ll} \right) v_{\ll} = \frac{Q_{\ll}^{n+1}\left( \rho v_{\ll} \right)}{k(\ll)^{n+1}}. \end{align*} By \eqref{torrent}, \[ \norm{\frac{\bf P_{\ll}^{n+1} \left( \rho v_{\ll} \right)}{k(\ll)^{n+1}} - \bs \nu_{\ll} \left( \rho v_{\ll} \right) v_{\ll}}_{\infty} \leq c_{\ll}\e^{-c_{\ll}(n+1)} \norm{\rho v_{\ll}}_{\infty} = c_{\ll}\e^{-c_{\ll}(n+1)}. \] Consequently, by \eqref{rivage}, the series $\sum_{n\geq 0} \left[ \frac{\bf P_{\ll}^n v_{\ll}'}{k(\ll)^n} - \frac{\bf P_{\ll}^{n+1} v_{\ll}'}{k(\ll)^{n+1}} \right]$ converges absolutely and we deduce that \[ v_{\ll}' = \sum_{n=0}^{+\infty} \left[ \frac{\bf P_{\ll}^{n+1} \left( \rho v_{\ll} \right)}{k(\ll)^{n+1}} - \bs \nu_{\ll} \left( \rho v_{\ll} \right) v_{\ll} \right]. 
\] In particular, \[ \bs \nu_{\ll} \left( v_{\ll}' \right) = \sum_{n=0}^{+\infty} \left[ \bs \nu_{\ll} \left( \rho v_{\ll} \right) - \bs \nu_{\ll} \left( \rho v_{\ll} \right) \right] = 0, \] and \[ \bs \nu_{\ll} \left( \rho v_{\ll}' \right) = \sum_{n=0}^{+\infty} \left[ \frac{\bs \nu_{\ll} \left( \rho \bf P_{\ll}^{n+1} \left( \rho v_{\ll} \right) \right)}{k(\ll)^{n+1}} - \bs \nu_{\ll} \left( \rho v_{\ll} \right)^2 \right]. \] Therefore \eqref{orient} becomes \[ K''(\ll) = \bs \nu_{\ll} \left( \rho^2 v_{\ll} \right) - \bs \nu_{\ll}^2 \left( \rho v_{\ll} \right) + 2 \sum_{n=0}^{+\infty} \left[ \frac{\bs \nu_{\ll} \left( \rho \bf P_{\ll}^{n+1} \left( \rho v_{\ll} \right) \right)}{k(\ll)^{n+1}} - \bs \nu_{\ll} \left( \rho v_{\ll} \right)^2 \right]. \] To conclude the proof of the lemma, we establish that $K''(\ll) > 0$, from which the strict convexity of $K$ follows. By \eqref{ecorce}, \begin{equation} \label{taniere} K''(\ll) = \tbs \nu_{\ll} \left( \tt \rho_{\ll}^2 \right) + 2 \sum_{n=1}^{+\infty} \left[ \tbs \nu_{\ll} \left( \tt \rho_{\ll} \tbf P_{\ll}^n \tt \rho_{\ll} \right) \right], \end{equation} where for any $\ll \in \bb R$, $\tt \rho_{\ll} = \rho - \tbs \nu_{\ll}(\rho) v_0$. Moreover, Conditions \ref{primitif} and \ref{cathedrale} and Lemma \ref{jument} imply that the normalized transfer operator $\tbf P_{\ll}$ together with the function $\tt \rho_{\ll}$ satisfies Conditions \ref{primitif} and \ref{cathedrale}. In conjunction with Lemma 10.3 of \cite{GLLP_CLLT_2017}, this proves that the right-hand side of \eqref{taniere} is positive, and hence so is $\tt \sigma_{\ll}^2$ in \eqref{ane}. \end{proof} \section{Proofs in the critical case} \label{critcase} In this section we prove Theorem \ref{prince}. By equations \eqref{rose} and \eqref{ciel}, the survival probability of the branching process is related to the study of the sum $q_{n}^{-1}=\e^{-S_n} + \sum_{k=0}^{n-1} \e^{-S_k}\eta_{k+1,n}$, where $(S_n)_{n\geq 0}$ is a Markov walk defined by \eqref{petale}.
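To give a first heuristic idea of the behaviour of $q_n$, consider informally the degenerate case of a constant environment, $f_i = f$ for all $i \in \bb X$, with $f'(1) = 1$; this aside is only an illustration and is not used in the proofs. Then $\rho \equiv 0$ and $S_n = 0$ for all $n \geq 0$, while, in view of \eqref{champ}, $\eta_{k+1,n}$ should be close to $f''(1)/2$ when $n-k$ is large, so that
\[
q_n^{-1} = \e^{-S_n} + \sum_{k=0}^{n-1} \e^{-S_k} \eta_{k+1,n} \approx 1 + n \, \frac{f''(1)}{2}, \qquad \text{that is} \qquad q_n \approx \frac{2}{n f''(1)},
\]
which is the classical asymptotic of Kolmogorov for the survival probability of a critical Galton--Watson process.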
Very roughly speaking, the sum $q_n^{-1}$ converges mainly when the walk stays positive, that is when $S_k >0$ for any $k \geq 1$, and we will see that (at least in the critical case) only positive trajectories of the Markov walk $(S_n)_{n\geq 0}$ matter for the survival of the branching process. Recall that the hypotheses of Theorem \ref{prince} are Conditions \ref{primitif}-\ref{cathedrale} and $k'(0)=\bs \nu(\rho)=0$. Under these assumptions, the conclusions of all the statements of Section \ref{flamme} hold for the probability $\bb P_i$, for any $i \in \bb X$. Recall also that $\bb E_{i,y}^+$ is the expectation corresponding to the probability measure \eqref{soif}. We carry out the proof through a series of lemmata. \begin{lemma} Assume the conditions of Theorem \ref{prince}. \label{nuage} For any $m\geq 1$, $(i,y) \in \supp(V)$, and $j \in \bb X$, we have \[ \lim_{n\to +\infty} \bb P_i \left( \sachant{Z_m > 0 \,;\, X_n = j}{ \tau_y > n } \right) = \bb E_{i,y}^+ \left( q_m \right) \bs \nu (j). \] \end{lemma} \begin{proof} Fix $m \geq 1$, $(i,y) \in \supp(V)$, and $j \in \bb X$. By \eqref{ange}, for any $n \geq m+1$, \begin{align*} \bb P_i \left( Z_m > 0 \,,\, X_n = j \,,\, \tau_y > n \right) &= \bb E_i \left( \bb P_i \left( \sachant{Z_m > 0}{ X_1, \dots, X_n } \right) \,;\, X_n = j \,,\, \tau_y > n \right) \\ &= \bb E_i \left( \bb E_i \left( \sachant{q_m}{ X_1, \dots, X_n } \right) \,;\, X_n = j \,,\, \tau_y > n \right) \\ &= \bb E_i \left( q_m \,;\, X_n = j \,,\, \tau_y > n \right). \end{align*} Using Lemma \ref{cumulus}, we conclude that \[ \lim_{n\to +\infty} \bb P_i \left( \sachant{Z_m > 0 \,;\, X_n = j}{ \tau_y > n } \right) = \lim_{n\to +\infty} \bb E_i \left( \sachant{q_m \,;\, X_n = j }{ \tau_y > n } \right) = \bb E_{i,y}^+ \left( q_m \right) \bs \nu(j).
\] \end{proof} By Lemma \ref{pieuvre}, we have for any $(i,y) \in \supp(V)$, $k \geq 1$ and $n \geq k+1$, \begin{equation} \label{aquarium} 0 \leq \eta_{k,n} \leq \eta := \max_{x \in \bb X} \frac{f_x''(1)}{f_x'(1)^2} < +\infty \qquad \bb P_{i,y}^+\text{-a.s.} \end{equation} By \eqref{yeux} and \eqref{yeux002}, this inequality also holds when $n=k$. Moreover, by Lemma \ref{pieuvre}, \begin{equation} \label{poissonBP} \eta_{k,\infty} := \lim_{n\to+\infty} \eta_{k,n} \in [0,\eta] \qquad \bb P_{i,y}^+\text{-a.s.} \end{equation} Let $q_{\infty}$ be the following random variable: \begin{equation} \label{voyage} q_{\infty} := \left[ \sum_{k=0}^{+\infty} \e^{-S_k} \eta_{k+1,\infty} \right]^{-1} \in [0,+\infty]. \end{equation} The random variable $q_{\infty}^{-1}$ is $\bb P_{i,y}^+$-integrable for any $(i,y) \in \supp (V)$: indeed, by \eqref{poissonBP}, \[ q_{\infty}^{-1} \leq \sum_{k=0}^{+\infty} \e^{-S_k} \eta. \] Using Lemma \ref{soir}, we obtain, for any $(i,y) \in \supp (V)$, \begin{equation} \label{temple} \bb E_{i,y}^+ \left( q_{\infty}^{-1} \right) \leq \eta \bb E_{i,y}^+ \left( \sum_{k=0}^{+\infty} \e^{-S_k} \right) \leq \eta \frac{c \left( 1+\max(y,0) \right)\e^{y}}{V(i,y)} < +\infty. \end{equation} \begin{lemma} Assume the conditions of Theorem \ref{prince}. For any $(i,y) \in \supp (V)$, \begin{equation} \label{telescope001} \lim_{m\to+\infty} \bb E_{i,y}^+ \left( \abs{q_m^{-1} - q_{\infty}^{-1}} \right) = 0, \end{equation} and \begin{equation} \label{telescope002} \lim_{m\to+\infty} \bb E_{i,y}^+ \left( \abs{q_m - q_{\infty}} \right) = 0. \end{equation} \end{lemma} \begin{proof} Let $(i,y) \in \supp (V)$ and fix $l \geq 1$.
By \eqref{ciel} and \eqref{voyage}, we have for all $m \geq l+2$, \begin{align*} \bb E_{i,y}^+ \left( \abs{q_m^{-1} - q_{\infty}^{-1}} \right) &= \bb E_{i,y}^+ \left( \abs{\e^{-S_m} + \sum_{k=0}^{m-1} \e^{-S_k} \eta_{k+1,m} - \sum_{k=0}^{+\infty} \e^{-S_k} \eta_{k+1,\infty}} \right) \\ &\leq \bb E_{i,y}^+ \left( \e^{-S_m} \right) + \bb E_{i,y}^+ \left( \sum_{k=0}^{l} \e^{-S_k} \abs{\eta_{k+1,m} - \eta_{k+1,\infty}} \right) \\ &\qquad + \bb E_{i,y}^+ \left( \sum_{k=l+1}^{m-1} \e^{-S_k} \abs{\eta_{k+1,m} - \eta_{k+1,\infty}} \right) + \bb E_{i,y}^+ \left( \sum_{k=m}^{+\infty} \e^{-S_k} \eta_{k+1,\infty} \right). \end{align*} By \eqref{aquarium} and \eqref{poissonBP}, \[ \bb E_{i,y}^+ \left( \abs{q_m^{-1} - q_{\infty}^{-1}} \right) \leq \bb E_{i,y}^+ \left( \e^{-S_m} \right) + \bb E_{i,y}^+ \left( \sum_{k=0}^{l} \e^{-S_k} \abs{\eta_{k+1,m} - \eta_{k+1,\infty}} \right) + \eta \bb E_{i,y}^+ \left( \sum_{k=l+1}^{+\infty} \e^{-S_k} \right). \] Using Lemma \ref{soir} and the Lebesgue monotone convergence theorem, \begin{align*} \bb E_{i,y}^+ \left( \abs{q_m^{-1} - q_{\infty}^{-1}} \right) &\leq \frac{c \left( 1+\max(y,0) \right)\e^{y}}{V(i,y)} \left( \frac{1}{m^{3/2}} + \eta \sum_{k=l+1}^{+\infty} \frac{1}{k^{3/2}} \right) \\ &\qquad + \bb E_{i,y}^+ \left( \sum_{k=0}^{l} \e^{-S_k} \abs{\eta_{k+1,m} - \eta_{k+1,\infty}} \right) \\ &\leq \frac{c \left( 1+\max(y,0) \right)\e^{y}}{V(i,y)} \left( \frac{1}{m^{3/2}} + \frac{\eta}{\sqrt{l}} \right) + \bb E_{i,y}^+ \left( \sum_{k=0}^{l} \e^{-S_k} \abs{\eta_{k+1,m} - \eta_{k+1,\infty}} \right). \end{align*} Moreover, by \eqref{aquarium} and \eqref{poissonBP}, we have $\sum_{k=0}^{l} \e^{-S_k} \abs{\eta_{k+1,m} - \eta_{k+1,\infty}} \leq \eta \sum_{k=0}^{+\infty} \e^{-S_k}$ which is $\bb P_{i,y}^+$-integrable by Lemma \ref{soir}. 
Consequently, using \eqref{poissonBP} and the Lebesgue dominated convergence theorem as $m \to+\infty$, we obtain that, for any $l\geq 1$, \[ \limsup_{m\to+\infty} \bb E_{i,y}^+ \left( \abs{q_m^{-1} - q_{\infty}^{-1}} \right) \leq \frac{c \eta \left( 1+\max(y,0) \right)\e^{y}}{V(i,y) \sqrt{l}}. \] Letting $l\to +\infty$ proves \eqref{telescope001}. Now, it follows easily from \eqref{histoire} that $q_{\infty} \leq 1$: for any $\ee > 0$ and $m \geq 1$, we write that $\bb P_{i,y}^+ \left( q_{\infty}^{-1} < 1-\ee \right) \leq \bb P_{i,y}^+ \left( q_{\infty}^{-1} - q_m^{-1} < -\ee \right)$. Since, by \eqref{telescope001}, $q_{m}^{-1}$ converges in $\bb P_{i,y}^+$-probability to $q_{\infty}^{-1}$, it follows that for any $\ee > 0$, $\bb P_{i,y}^+ \left( q_{\infty}^{-1} < 1-\ee \right) = 0$ and so \begin{equation} \label{echo} q_{\infty} \leq 1 \qquad \bb P_{i,y}^+\text{-a.s.} \end{equation} Consequently, $\abs{q_m - q_{\infty}} = q_mq_{\infty}\abs{q_m^{-1} - q_{\infty}^{-1}} \leq \abs{q_m^{-1} - q_{\infty}^{-1}}$, which, by \eqref{telescope001}, proves \eqref{telescope002}. \end{proof} Let $U$ be the function defined on $\supp (V)$ by \[ U(i,y) = \bb E_{i,y}^+ \left( q_{\infty} \right). \] Note that for any $(i,y) \in \supp(V)$, by \eqref{temple}, $q_{\infty} > 0$ $\bb P_{i,y}^+$-a.s.\ and so \begin{equation} \label{princesse} U(i,y) > 0. \end{equation} By \eqref{echo}, we also have $U(i,y) \leq 1$. \begin{lemma} Assume conditions of Theorem \ref{prince}. \label{promesse} For any $(i,y) \in \supp(V)$ and $j \in \bb X$, we have \[ \lim_{m\to +\infty} \lim_{n\to +\infty} \bb P_i \left( \sachant{Z_m > 0 \,;\, X_n = j}{ \tau_y > n } \right) = \bs \nu(j) U(i,y). \] \end{lemma} \begin{proof} By Lemma \ref{nuage}, for any $(i,y) \in \supp(V)$, $j \in \bb X$ and $m \geq 1$, we have \[ \lim_{n\to +\infty} \bb P_i \left( \sachant{Z_m > 0 \,;\, X_n = j}{ \tau_y > n } \right) = \bs \nu(j) \bb E_{i,y}^+ \left( q_m \right). \] By \eqref{telescope002}, we obtain the desired equality.
\end{proof} \begin{lemma} Assume conditions of Theorem \ref{prince}. \label{prologue} For any $(i,y) \in \supp (V)$ and $\theta \in (0,1)$, \[ \lim_{m\to+\infty} \limsup_{n\to+\infty} \bb P_i \left( \sachant{Z_m > 0 \,,\, Z_{\pent{\theta n}} =0}{ \tau_y > n } \right) = 0. \] \end{lemma} \begin{proof} Fix $(i,y) \in \supp (V)$ and $\theta \in (0,1)$. For any $m \geq 1$ and any $n \geq 1$ such that $\pent{\theta n} \geq m+1$ we define $\theta_n = \pent{\theta n}$ and we write \begin{align*} I_0 &:= \bb P_i \left( Z_m > 0 \,,\, Z_{\theta_n} =0 \,,\, \tau_y > n \right) \\ &= \bb P_i \left( Z_m > 0 \,,\, \tau_y > n \right) - \bb P_i \left( Z_{\theta_n} > 0 \,,\, \tau_y > n \right) \\ &= \bb E_i \left( \bb P_i \left( \sachant{Z_m > 0}{X_1, \dots, X_m} \right) \,;\, \tau_y > n \right) - \bb E_i \left( \bb P_i \left( \sachant{Z_{\theta_n} > 0}{X_1, \dots, X_{\theta_n}} \right) \,;\, \tau_y > n \right). \end{align*} By \eqref{ange}, \[ I_0 = \bb E_i \left( \abs{q_m - q_{\theta_n}} \,;\, \tau_y > n \right). \] We define $J_p(i,y) := \bb P_i \left( \tau_y > p \right)$ for any $(i,y) \in \bb X \times \bb R$ and $p \geq 0$ and consider \[ I_1 := \bb P_i \left( \sachant{ Z_m > 0 \,,\, Z_{\theta_n} =0 }{ \tau_y > n } \right) \] for any $(i,y) \in \supp (V)$. By the Markov property, for any $(i,y) \in \supp (V)$, \[ I_1 = \frac{I_0}{J_n(i,y)} = \bb E_i \left( \abs{ q_m - q_{\theta_n} } \frac{J_{n-\theta_n}\left( X_{\theta_n}, y+S_{\theta_n} \right)}{J_{n}(i,y)} \,;\, \tau_y > {\theta_n} \right). \] By the point \ref{oreiller002} of Proposition \ref{oreiller}, \[ I_1 \leq \frac{c}{\sqrt{(1-\theta)n} J_n(i,y)} \bb E_i \left( \abs{ q_m - q_{\theta_n} } \left( 1+ y+S_{\theta_n} \right) \,;\, \tau_y > {\theta_n} \right). \] Using also the point \ref{sable003} of Proposition \ref{sable}, we have \[ I_1 \leq \frac{c}{\sqrt{(1-\theta)n} J_n(i,y)} \bb E_i \left( \abs{ q_m - q_{\theta_n} } \left( 1+ V\left( X_{\theta_n}, y+S_{\theta_n} \right) \right) \,;\, \tau_y > \theta_n \right). 
\] Using \eqref{histoire} and \eqref{soif}, we obtain that \[ I_1 \leq \frac{c}{\sqrt{(1-\theta)n} J_n(i,y)} \left( \bb P_i \left( \tau_y > \theta_n \right) + V(i,y) \bb E_{i,y}^+ \left( \abs{ q_m - q_{\theta_n} } \right) \right). \] Using the point \ref{oreiller001} of Proposition \ref{oreiller}, for any $(i,y) \in \supp (V)$, \[ \frac{1}{\sqrt{(1-\theta)n} J_n(i,y)} = \frac{1}{\sqrt{(1-\theta)n} \bb P_i \left( \tau_y > n \right)} \underset{n \to +\infty}{\sim} \frac{\sqrt{2\pi} \sigma}{2\sqrt{1-\theta}V(i,y)}. \] Moreover using again the point \ref{oreiller001} of Proposition \ref{oreiller} and using \eqref{telescope002}, \[ \bb P_i \left( \tau_y > \theta_n \right) + V(i,y) \bb E_{i,y}^+ \left( \abs{q_m - q_{\theta_n}} \right) \underset{n \to +\infty}{\longrightarrow} V(i,y) \bb E_{i,y}^+ \left( \abs{q_m - q_{\infty}} \right). \] Therefore, we obtain that, for any $m \geq 1$ and $\theta \in (0,1)$, \[ \limsup_{n\to+\infty} I_1 \leq \frac{c}{\sqrt{1-\theta}} \bb E_{i,y}^+ \left( \abs{q_m - q_{\infty}} \right). \] Letting $m$ go to $+\infty$ and using \eqref{telescope002}, we conclude that \[ \lim_{m\to+\infty} \limsup_{n\to+\infty} I_1 = \lim_{m\to+\infty} \limsup_{n\to+\infty} \bb P_i \left( \sachant{ Z_m > 0 \,,\, Z_{\theta_n} =0 }{ \tau_y > n } \right) =0. \] \end{proof} \begin{lemma} Assume conditions of Theorem \ref{prince}. \label{rire} For any $(i,y) \in \supp (V)$, $j \in \bb X$, and $\theta \in (0,1)$, \[ \lim_{n\to+\infty} \bb P_i \left( \sachant{Z_{\pent{\theta n}} > 0 \,,\, X_n = j}{ \tau_y > n } \right) = \bs \nu(j) U(i,y). \] In particular, \begin{equation} \label{chochutement} \lim_{n\to+\infty} \bb P_i \left( \sachant{Z_{\pent{\theta n}} > 0 }{ \tau_y > n } \right) = U(i,y). \end{equation} \end{lemma} \begin{proof} Fix $(i,y) \in \supp(V)$ and $j \in \bb X$. Let $\theta_n := \pent{\theta n}$ for any $\theta \in (0,1)$ and $n \geq 1$. 
For any $m \geq 1$ and $n \geq 1$ such that $\theta_n \geq m+1$, we write \begin{align*} &\bb P_i \left( \sachant{Z_{\theta_n} > 0 \,,\, X_n = j}{ \tau_y > n } \right) \\ &\qquad = \bb P_i \left( \sachant{Z_m > 0 \,,\, Z_{\theta_n} > 0 \,,\, X_n = j}{ \tau_y > n } \right) \\ &\qquad = \bb P_i \left( \sachant{Z_m > 0 \,,\, X_n = j}{ \tau_y > n } \right) - \bb P_i \left( \sachant{Z_m > 0 \,,\, Z_{\theta_n} = 0 \,,\, X_n = j}{ \tau_y > n } \right). \end{align*} By Lemma \ref{prologue}, \begin{align*} &\lim_{m\to+\infty} \limsup_{n\to+\infty} \bb P_i \left( \sachant{Z_m > 0 \,,\, Z_{\theta_n} = 0 \,,\, X_n = j}{ \tau_y > n } \right) \\ &\hspace{3cm} \leq \lim_{m\to+\infty} \limsup_{n\to+\infty} \bb P_i \left( \sachant{Z_m > 0 \,,\, Z_{\theta_n} = 0}{ \tau_y > n } \right) = 0. \end{align*} Therefore, using Lemma \ref{promesse}, it follows that \[ \lim_{n\to+\infty} \bb P_i \left( \sachant{Z_{\theta_n} > 0 \,,\, X_n = j}{ \tau_y > n } \right) = \bs \nu(j) U(i,y). \] \end{proof} \begin{lemma} Assume conditions of Theorem \ref{prince}. \label{main} For any $(i,y) \in \supp (V)$, \[ \lim_{p\to+\infty} \bb P_i \left( \sachant{ Z_p > 0 }{ \tau_y > p } \right) = U(i,y). \] \end{lemma} \begin{proof} Fix $(i,y) \in \supp (V)$. For any $p \geq 1$ and $\theta \in (0,1)$, we have \[ \bb P_i \left( \sachant{ Z_p > 0 }{ \tau_y > p } \right) = \frac{\bb P_i \left( Z_p > 0 \,,\, \tau_y > \frac{p}{\theta}+1 \right) + \bb P_i \left( Z_p > 0 \,,\, p < \tau_y \leq \frac{p}{\theta}+1 \right) }{\bb P_i \left( \tau_y > p \right)}. \] Let $n = \pent{\frac{p}{\theta}}+1$ and note that $\pent{\theta n} = p$. So, by \eqref{chochutement}, \[ \lim_{p\to+\infty} \bb P_i \left( \sachant{ Z_p > 0 }{ \tau_y > p } \right) = U(i,y) \lim_{p\to+\infty} \frac{\bb P_i \left( \tau_y > n \right)}{\bb P_i \left( \tau_y > p \right)} + \lim_{p\to+\infty} \frac{\bb P_i \left( Z_p > 0 \,,\, p < \tau_y \leq n \right) }{\bb P_i \left( \tau_y > p \right)}. 
\] By the point \ref{oreiller001} of Proposition \ref{oreiller}, we obtain that \[ \lim_{p\to+\infty} \bb P_i \left( \sachant{ Z_p > 0 }{ \tau_y > p } \right) = U(i,y) \sqrt{\theta} + \lim_{p\to+\infty} \frac{\bb P_i \left( Z_p > 0 \,,\, p < \tau_y \leq n \right) }{\bb P_i \left( \tau_y > p \right)}. \] Moreover, using again the point \ref{oreiller001} of Proposition \ref{oreiller}, for any $\theta \in (0,1)$, \[ \frac{\bb P_i \left( Z_p > 0 \,,\, p < \tau_y \leq n \right) }{\bb P_i \left( \tau_y > p \right)} \leq \frac{\bb P_i \left( \tau_y > p \right) - \bb P_i \left( \tau_y > n \right) }{\bb P_i \left( \tau_y > p \right)} \underset{p\to +\infty}{\longrightarrow} 1-\sqrt{\theta}. \] Letting $\theta \to 1$, we conclude that \[ \lim_{p\to+\infty} \bb P_i \left( \sachant{ Z_p > 0 }{ \tau_y > p } \right) = U(i,y). \] \end{proof} \begin{lemma} Assume conditions of Theorem \ref{prince}. \label{chapeau} For any $(i,y) \in \supp (V)$ and $\theta \in (0,1)$, \[ \lim_{n\to+\infty} \bb P_i \left( \sachant{ Z_{\pent{\theta n}} > 0 \,,\, Z_n = 0 }{ \tau_y > n } \right) = 0. \] \end{lemma} \begin{proof} For any $(i,y) \in \supp (V)$, $\theta \in (0,1)$ and $n \geq 1$, \[ \bb P_i \left( \sachant{ Z_{\pent{\theta n}} > 0 \,,\, Z_n = 0 }{ \tau_y > n } \right) = \bb P_i \left( \sachant{ Z_{\pent{\theta n}} > 0 }{ \tau_y > n } \right) - \bb P_i \left( \sachant{ Z_n > 0 }{ \tau_y > n } \right). \] From \eqref{chochutement} and Lemma \ref{main}, it follows \[ \bb P_i \left( \sachant{ Z_{\pent{\theta n}} > 0 \,,\, Z_n = 0 }{ \tau_y > n } \right) \underset{n\to+\infty}{\longrightarrow} U(i,y) - U(i,y) = 0. \] \end{proof} \begin{lemma} Assume conditions of Theorem \ref{prince}. \label{trompette} For any $(i,y) \in \supp (V)$ and $j \in \bb X$, \[ \lim_{n\to+\infty} \bb P_i \left( \sachant{ Z_n > 0 \,,\, X_n = j }{ \tau_y > n } \right) = \bs \nu(j) U(i,y). 
\] \end{lemma} \begin{proof} For any $(i,y) \in \supp (V)$, $j \in \bb X$, $\theta \in (0,1)$ and $n \geq 1$, \begin{align*} \bb P_i \left( \sachant{ Z_n > 0 \,,\, X_n = j }{ \tau_y > n } \right) &= \bb P_i \left( \sachant{ Z_{\pent{\theta n}} > 0 \,,\, X_n = j }{ \tau_y > n } \right) \\ &\qquad - \bb P_i \left( \sachant{ Z_{\pent{\theta n}} > 0 \,,\, Z_n = 0 \,,\, X_n = j }{ \tau_y > n } \right). \end{align*} Using Lemmas \ref{rire} and \ref{chapeau}, the result follows. \end{proof} \textbf{Proof of Theorem \ref{prince}.} Fix $(i,j) \in \bb X^2$. For any $y \in \bb R$, we have \begin{equation} \label{chateau} 0 \leq \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) - \bb P_i \left( Z_n > 0 \,,\, X_n = j \,,\, \tau_y > n \right) \leq \bb P_i \left( Z_n > 0 \,,\, \tau_y \leq n \right). \end{equation} Using \eqref{ange}, \[ \bb P_i \left( Z_n > 0 \,,\, \tau_y \leq n \right) = \bb E_i \left( q_n \,;\, \tau_y \leq n \right). \] Moreover, by the definition of $q_n$ in \eqref{jeux}, for any $k \geq 1$, \[ q_k \leq f_{X_k}'(1) \times \cdots \times f_{X_1}'(1) = \e^{S_k}. \] Since $( q_k )_{k\geq 1}$ is non-increasing, we have $q_n = \min_{1\leq k \leq n} q_k \leq \e^{\min_{1\leq k \leq n} S_k}$. Therefore \begin{align} \bb P_i \left( Z_n > 0 \,,\, \tau_y \leq n \right) &\leq \bb E_i \left( \e^{\min_{1\leq k \leq n} S_k} \,;\, \tau_y \leq n \right) \nonumber\\ &= \e^{-y} \sum_{p=0}^{+\infty} \bb E_i \left( \e^{\min_{1\leq k \leq n} \{y+S_k\}} \,;\, -(p+1) < \min_{1\leq k \leq n} \{y+S_k\} \leq -p \,,\, \tau_y \leq n \right) \nonumber\\ &\leq \e^{-y} \sum_{p=0}^{+\infty} \e^{-p} \bb P_i \left( \tau_{y+p+1} > n \right). \label{butte} \end{align} By \eqref{butte} and the point \ref{oreiller002} of Proposition \ref{oreiller}, \begin{equation} \label{squaw} \bb P_i \left( Z_n > 0 \,,\, \tau_y \leq n \right) \leq \frac{c \e^{-y}}{\sqrt{n}} \sum_{p=0}^{+\infty} \e^{-p} \left( 1+p+1+\max(y,0) \right) \leq \frac{c \e^{-y}\left( 1+\max(y,0) \right)}{\sqrt{n}}.
\end{equation} Note that from the point \ref{sable003} of Proposition \ref{sable}, it is clear that there exists $y_0 = y_0(i) < +\infty$ such that for any $y \geq y_0$, we have $V(i,y) > 0$, i.e.\ $(i,y) \in \supp (V)$ (for more information on $\supp (V)$ see \cite{grama_limit_2016-1}). Using Lemma \ref{trompette} and the point \ref{oreiller001} of Proposition \ref{oreiller}, for any $y \geq y_0$, \begin{equation} \label{cheyenne} \sqrt{n} \bb P_i \left( Z_n > 0 \,,\, X_n = j \,,\, \tau_y > n \right) \underset{n\to+\infty}{\longrightarrow} \frac{2\bs \nu(j) U(i,y) V(i,y)}{\sqrt{2\pi}\sigma}. \end{equation} Let \[ I(i,j) = \liminf_{n \to +\infty} \sqrt{n} \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) \] and \[ J(i,j) = \limsup_{n \to +\infty} \sqrt{n} \bb P_i \left( Z_n > 0 \,,\, X_n = j \right). \] Using \eqref{chateau}, \eqref{squaw} and \eqref{cheyenne}, we obtain that, for any $y \geq y_0(i)$, \begin{align} \frac{2\bs \nu(j) U(i,y) V(i,y)}{\sqrt{2\pi}\sigma} &\leq I(i,j) \nonumber\\ &\leq J(i,j) \leq \frac{2\bs \nu(j) U(i,y) V(i,y)}{\sqrt{2\pi}\sigma} + c \e^{-y}\left( 1+\max(y,0) \right) < +\infty. \label{domino} \end{align} From \eqref{cheyenne}, it is clear that $y \mapsto \frac{2 U(i,y) V(i,y)}{\sqrt{2\pi}\sigma}$ is non-decreasing and from \eqref{domino} the function is bounded above by $I(i,j)/\bs \nu(j) < +\infty$. Therefore \[ u(i) := \lim_{y\to +\infty} \frac{2 U(i,y) V(i,y)}{\sqrt{2\pi}\sigma} \] exists. Moreover, by \eqref{princesse}, for any $y \geq y_0(i)$, \[ u(i) \geq \frac{2 U(i,y) V(i,y)}{\sqrt{2\pi}\sigma} > 0. \] Taking the limit as $y \to +\infty$ in \eqref{domino}, we conclude that \[ \lim_{n \to +\infty} \sqrt{n} \bb P_i \left( Z_n > 0 \,,\, X_n = j \right) = \bs \nu(j) u(i), \] which finishes the proof of Theorem \ref{prince}. \section{Proofs in the strongly subcritical case} \label{lagon} Assume the hypotheses of Theorem \ref{couronne}, that is, Conditions \ref{primitif}-\ref{cathedrale} and $k'(1)<0$.
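Before entering the formal argument, the following self-contained simulation may help the reader form a concrete picture of the subcritical regime. It is a minimal sketch under illustrative assumptions: the two-state environment, the switching probability, and the geometric offspring laws below are our own choices and are not taken from the model's hypotheses; the script only illustrates that the survival probability decays in $n$ when the environment is subcritical on average.

```python
import math
import random

def survival_probability(n_steps, n_trials, seed=1):
    """Monte Carlo estimate of P(Z_n > 0) for a toy branching process in a
    two-state Markov environment (all parameters are illustrative assumptions)."""
    rng = random.Random(seed)
    means = [0.7, 1.1]   # mean offspring number in environment states 0 and 1
    switch = 0.3         # probability that the environment changes state
    survived = 0
    for _ in range(n_trials):
        state, z = 0, 1  # one ancestor, environment started in state 0
        for _ in range(n_steps):
            if rng.random() < switch:
                state = 1 - state
            # Geometric law on {0, 1, 2, ...} with success probability p
            # has mean (1 - p) / p, so p = 1 / (1 + m) gives mean m.
            p = 1.0 / (1.0 + means[state])
            # Each of the z current particles reproduces independently
            # (inverse-CDF sampling of the geometric law).
            z = sum(int(math.log(1.0 - rng.random()) / math.log(1.0 - p))
                    for _ in range(z))
            if z == 0:
                break
        if z > 0:
            survived += 1
    return survived / n_trials
```

Since $\tfrac{1}{2}(\log 0.7 + \log 1.1) < 0$, this toy environment is subcritical on average, and comparing the estimates at, say, $n=10$ and $n=30$ exhibits the decay of the survival probability; the precise geometric rate of Theorem \ref{couronne} involves the spectral quantity $k(1)$ and is not reproduced by this sketch.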
We fix $\ll = 1$ and define the probability $\tbb P_i$ and the corresponding expectation $\tbb E_i$ by \eqref{chandelle}, such that, for any $n \geq 1$ and any $g$: $\bb X^n \to \bb C$, \begin{equation} \label{chandelier} \tbb E_i \left( g(X_1, \dots, X_n) \right) = \frac{\bb E_i \left( \e^{S_n} g(X_1, \dots, X_n) v_1(X_n) \right)}{k(1)^n v_1(i)}. \end{equation} By \eqref{ange}, we have, for any $(i,j) \in \bb X^2$ and $n \geq 1$, \begin{align*} \bb P_i \left( Z_{n+1} > 0 \,,\, X_{n+1} = j \right) &= \bb E_i \left( q_{n+1} \,;\, X_{n+1} = j \right) \\ &= \tbb E_i \left( \frac{\e^{-S_{n+1}}}{v_1 \left( X_{n+1} \right)} q_{n+1} \,;\, X_{n+1} = j \right) k(1)^{n+1} v_1(i) \\ &= \tbb E_i \left( \e^{-S_n} q_n\left( f_j(0) \right) \,;\, X_{n+1} = j \right) k(1)^{n+1} \frac{v_1(i) \e^{-\rho(j)}}{v_1(j)}, \end{align*} where $q_n(s)$ is defined for any $s \in [0,1]$ by \eqref{jeux}. From Lemma \ref{foin}, we write \begin{align} \e^{-S_n} q_n\left( f_j(0) \right) &= \left[ \frac{1}{1-f_j(0)} + \sum_{k=0}^{n-1} \e^{S_n -S_k} \eta_{k+1,n}\left( f_j(0) \right) \right]^{-1} \nonumber\\ &= \left[ \frac{1}{1-f_j(0)} + \sum_{k=1}^{n} \e^{S_n -S_{n-k}} \eta_{n-k+1,n}\left( f_j(0) \right) \right]^{-1}. \label{potion001} \end{align} As in Section \ref{batailleBP}, we define the dual Markov chain $\left( X_n^* \right)_{n\geq 0}$, where the dual Markov kernel is given, for any $(i,j) \in \bb X^2$, by \[ \tbf P_1^*(i,j) = \tbf P_1 (j,i) \frac{\tbs \nu_1 (j)}{\tbs \nu_1 (i)} = \bf P(j,i) \frac{\e^{\rho(i)} \bs \nu_1 (j)}{k(1) \bs \nu_1 (i)}. \] Let $(S^*_n)_{n\geq 0}$ be the associated Markov walk defined by \eqref{promenade001} and \begin{equation} \label{chemin} q_n^*(j) := \left[ \frac{1}{1-f_j(0)} + \sum_{k=1}^{n} \e^{-S_k^*} \eta_k^*(j) \right]^{-1}, \end{equation} where, for any $k \geq 2$, \begin{align} \label{chemin003} \eta_k^*(j) := g_{X_k^*} \left( f_{X_{k-1}^*} \circ \cdots \circ f_{X_1^*}\circ f_j (0) \right) \qquad \text{and} \qquad \eta_1^*(j) := g_{X_1^*} \left( f_j (0) \right).
\end{align} Following the proof of Lemma \ref{foin}, we obtain \begin{equation} \label{cheminbis} q_n^*(j) = \e^{S_n^*} \left( 1-f_{X_n^*} \circ \cdots \circ f_{X_1^*} \circ f_j (0) \right). \end{equation} We are going to apply the duality Lemma \ref{dualityBP}. The following correspondences, denoted by the two-sided arrow $\longleftrightarrow$, are included for the reader's convenience: \begin{align*} X_k^* &\longleftrightarrow X_{n-k+1}, \\ S_k^* & \longleftrightarrow S_{n-k}-S_n, \\ \eta_k^*(j) & \longleftrightarrow \eta_{n-k+1,n}\left( f_j (0) \right),\\ q_n^*(j) & \longleftrightarrow \e^{-S_n}q_{n}\left( f_j (0) \right). \end{align*} Now Lemma \ref{dualityBP} implies \begin{equation} \label{coleoptere} \bb P_i \left( Z_{n+1} > 0 \,,\, X_{n+1} = j \right) = \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \right) k(1)^{n+1} \frac{\tbs \nu_1(j) v_1(i) \e^{-\rho(j)}}{\tbs \nu_1(i) v_1(j)}, \end{equation} where $\tbb E_j^*$ is the expectation generated by the trajectories of the chain $\left( X_n^* \right)_{n\geq 0}$ starting at $X_0^* = j$. Note that, under Condition \ref{eglise}, by Lemma \ref{pieuvre} we have, for any $j \in \bb X$ and $k \geq 1$, \begin{equation} 0 \leq \eta_k^*(j) \leq \eta = \max_{i \in \bb X} \frac{f_i''(1)}{f_i'(1)^2} < +\infty \qquad \tbb P_j^*\text{-a.s.} \label{ehtabound001} \end{equation} In particular, by \eqref{chemin}, \[ q_n^*(j) \in (0,1], \quad \forall n \geq 1. \] For any $j \in \bb X$, consider the random variable \begin{equation} \label{potion002} q_{\infty}^*(j) := \left[ \frac{1}{1-f_j(0)} + \sum_{k=1}^{\infty} \e^{-S_k^*} \eta_k^*(j) \right]^{-1} \in [0,1]. \end{equation} \begin{lemma} Assume that the conditions of Theorem \ref{couronne} are satisfied. For any $j \in \bb X$, \begin{equation} \label{dino001} \lim_{n\to+\infty} q_n^*(j) = q_{\infty}^*(j) \in (0,1], \qquad \tbb P_j^*\text{-a.s.} \end{equation} and \begin{equation} \label{dino002} \lim_{n\to+\infty} \tbb E_j^* \left( \abs{q_n^*(j) - q_{\infty}^*(j)} \right) = 0.
\end{equation} \end{lemma} \begin{proof} Fix $j \in \bb X$. By the law of large numbers for finite Markov chains, \[ \frac{S_k^*}{k} \underset{k \to +\infty}{\longrightarrow} \tbs \nu_1(-\rho), \qquad \tbb P_j^*\text{-a.s.} \] This means that there exists a set $N$ with $\tbb P_j^*(N) = 0$, such that for any $\omega \in \Omega \setminus N$ and any $\ee > 0$, there exists $k_0(\omega,\ee)$ such that for any $k \geq k_0(\omega,\ee)$, \[ \e^{-S_k^*(\omega)} \eta_k^*(j)(\omega) \leq \e^{k\tbs \nu_1(\rho)+k\ee} \eta, \] where for the last inequality we used the bound \eqref{ehtabound001}. By Lemma \ref{mulet}, we have $\tbs \nu_1(\rho) = k'(1)/k(1) < 0$. Taking $\ee = -\tbs \nu_1(\rho)/2$, we obtain that, for any $k \geq k_0(\omega)$, \[ 0 \leq \e^{-S_k^*(\omega)} \eta_k^*(j)(\omega) \leq \e^{k\frac{\tbs \nu_1(\rho)}{2}} \eta. \] Consequently, the sequence $\left( q_n^*(j) \right)^{-1}$ converges a.s.\ to $\left( q_{\infty}^*(j) \right)^{-1} \in [1,+\infty)$, which proves \eqref{dino001}. Now the sequence $( q_n^*(j) )_{n\geq 1}$ belongs to $(0,1]$ a.s.\ and so by the Lebesgue dominated convergence theorem, \[ \lim_{n\to+\infty} \tbb E_j^* \left( \abs{q_n^*(j) - q_{\infty}^*(j)} \right) = 0. \] \end{proof} \begin{lemma} Assume that the conditions of Theorem \ref{couronne} are satisfied. \label{marsupilami} For any $(i,j) \in \bb X^2$, \[ \lim_{n\to +\infty} \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \right) = \tbs \nu_1(i) \tbb E_j^* \left( q_{\infty}^*(j) \right). \] \end{lemma} \begin{proof} Let $m \geq 1$. For any $(i,j) \in \bb X^2$ and $n \geq m$, \begin{equation} \label{dessert001} \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \right) = \tbb E_j^* \left( q_m^*(j) \,;\, X_{n+1}^* = i \right) + \tbb E_j^* \left( q_n^*(j) - q_m^*(j) \,;\, X_{n+1}^* = i \right). \end{equation} By the Markov property, \[ \tbb E_j^* \left( q_m^*(j) \,;\, X_{n+1}^* = i \right) = \tbb E_j^* \left( q_m^*(j) \left(\tbf P_1^*\right)^{n-m+1} \left( X_m^*, i \right) \right).
\] Using \eqref{vautour} (which also holds for $\tbf P_1^*$ by Lemmas \ref{jument} and \ref{sourire}) and \eqref{dino002}, we have \begin{equation} \label{dessert002} \lim_{m\to +\infty} \lim_{n\to +\infty} \tbb E_j^* \left( q_m^*(j) \,;\, X_{n+1}^* = i \right) = \lim_{m\to +\infty}\tbb E_j^* \left( q_m^*(j) \right) \tbs \nu_1(i) = \tbb E_j^* \left( q_{\infty}^*(j) \right) \tbs \nu_1(i). \end{equation} Moreover, again by \eqref{dino002}, \begin{align*} \lim_{m\to +\infty}\lim_{n\to +\infty} \abs{\tbb E_j^* \left( q_n^*(j) - q_m^*(j) \,;\, X_{n+1}^* = i \right)} &\leq \lim_{m\to +\infty}\lim_{n\to +\infty} \tbb E_j^* \left( \abs{q_n^*(j) - q_m^*(j)} \right) \\ &= \lim_{m\to +\infty} \tbb E_j^* \left( \abs{q_{\infty}^*(j) - q_m^*(j)} \right) \\ &= 0. \end{align*} Together with \eqref{dessert001} and \eqref{dessert002}, this proves the lemma. \end{proof} \textbf{Proof of Theorem \ref{couronne}.} By \eqref{dino001}, the function \[ u(j) = \frac{\tbs \nu_1(j) \e^{-\rho(j)} \tbb E_j^* \left( q_{\infty}^*(j) \right)}{v_1(j)} \] is positive. The result of the theorem follows from Lemma \ref{marsupilami} and the identity \eqref{coleoptere}. \section{Proofs in the intermediate subcritical case} \label{intermedcrit} We assume the conditions of Theorem \ref{sceptre}, that is, Conditions \ref{primitif}-\ref{cathedrale} and $k'(1)=0$. As in the critical case, the proof is carried out through a series of lemmata. The beginning of the reasoning is the same as in the strongly subcritical case. Keeping the same notation as in Section \ref{lagon} (see \eqref{chandelier}-\eqref{coleoptere}), we have \begin{equation} \label{coleopterebis} \bb P_i \left( Z_{n+1} > 0 \,,\, X_{n+1} = j \right) = \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \right) k(1)^{n+1} \frac{\tbs \nu_1(j) v_1(i) \e^{-\rho(j)}}{\tbs \nu_1(i) v_1(j)}.
\end{equation} Under the hypotheses of Theorem \ref{sceptre}, the Markov walk $( S_n^* )_{n\geq 0}$ is centred under the probability $\tbb P_j^*$ for any $j \in \bb X$: indeed $\tbs \nu_1 (-\rho) = -k'(1)/k(1) = 0$ (see Lemma \ref{mulet}) and by Lemma \ref{jument}, Conditions \ref{primitif} and \ref{cathedrale} hold for $\tbf P_1$. In this case, by Lemma \ref{sourire}, Conditions \ref{primitif} and \ref{cathedrale} also hold for $\tbf P_1^*$. Therefore all the results of Section \ref{flamme} hold for the probability $\tbb P^*$. Let $\tau_z^*$ be the exit time of the Markov walk $( z+S_n^* )_{n\geq 0}$: \[ \tau_z^* := \inf \left\{ k \geq 1 : z+S_k^* \leq 0 \right\}. \] Denote by $\tt V_1^*$ the harmonic function defined by Proposition \ref{sable} with respect to the probability $\tbb P^*$. As in \eqref{soif}, for any $(j,z) \in \supp(\tt V_1^*)$, define a new probability $\tbb P_{j,z}^{*+}$ and its associated expectation $\tbb E_{j,z}^{*+}$ on $\sigma\left( X_n^*, n \geq 1 \right)$ by \[ \tbb E_{j,z}^{*+} \left( g \left( X_1^*, \dots, X_n^* \right) \right) := \frac{1}{\tt V_1^*(j,z)} \tbb E_j^* \left( g\left( X_1^*, \dots, X_n^* \right) \tt V_1^*\left( X_n^*, z+S_n^* \right) \,;\, \tau_z^* > n \right), \] for any $n \geq 1$ and any $g$: $\bb X^n \to \bb C$. \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{jaguar} For any $m\geq 1$, $(j,z) \in \supp(\tt V_1^*)$, and $i \in \bb X$, we have \[ \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_m^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) = \tbb E_{j,z}^{*+} \left( q_m^*(j) \right) \tbs \nu_1 (i). \] \end{lemma} \begin{proof} The equation \eqref{cheminbis} gives an explicit formula for $q_m^*(j)$ in terms of $\left( X_1^*, \dots, X_m^* \right)$. Therefore, the assertion of the lemma is a straightforward consequence of Lemma \ref{cumulus}.
\end{proof} As in Section \ref{lagon}, using Lemma \ref{pieuvre} we have for any $(j,z) \in \supp(\tt V_1^*)$ and $k \geq 1$, \begin{equation} \label{sablier001} 0 \leq \eta_k^*(j) \leq \eta = \max_{i \in \bb X} \frac{f_i''(1)}{f_i'(1)^2} < +\infty \qquad \text{and} \qquad q_n^*(j) \in (0,1], \qquad \tbb P_{j,z}^{*+}\text{-a.s.} \end{equation} Consider the random variable \begin{equation} \label{sablier002} q_{\infty}^*(j) := \left[ \frac{1}{1-f_j(0)} + \sum_{k=1}^{+\infty} \e^{-S_k^*} \eta_k^*(j) \right]^{-1} \in [0,1]. \end{equation} \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{piano} For any $(j,z) \in \supp (\tt V_1^*)$, \begin{equation} \label{piano001} \lim_{m\to+\infty} \tbb E_{j,z}^{*+} \left( \abs{\left(q_m^*(j)\right)^{-1} - \left(q_{\infty}^*(j)\right)^{-1}} \right) = 0, \end{equation} and \begin{equation} \label{piano002} \lim_{m\to+\infty} \tbb E_{j,z}^{*+} \left( \abs{q_m^*(j) - q_{\infty}^*(j)} \right) = 0. \end{equation} \end{lemma} \begin{proof} Fix $(j,z) \in \supp(\tt V_1^*)$. By \eqref{chemin}, \eqref{sablier002} and \eqref{sablier001}, for any $m \geq 1$, \[ \tbb E_{j,z}^{*+} \left( \abs{\left(q_m^*(j)\right)^{-1} - \left(q_{\infty}^*(j)\right)^{-1}} \right) \leq \eta \tbb E_{j,z}^{*+} \left( \sum_{k=m+1}^{+\infty} \e^{-S_k^*} \right). \] From this bound, by Lemma \ref{soir} and the dominated convergence theorem as $m\to+\infty$, we obtain \eqref{piano001}. Now by \eqref{sablier001} and \eqref{sablier002} we have for any $m\geq 1$, \begin{align*} \tbb E_{j,z}^{*+} \left( \abs{q_m^*(j) - q_{\infty}^*(j)} \right) &= \tbb E_{j,z}^{*+} \left( \abs{q_m^*(j) q_{\infty}^*(j)} \abs{\left(q_m^*(j)\right)^{-1} - \left(q_{\infty}^*(j)\right)^{-1}} \right) \\ &\leq \tbb E_{j,z}^{*+} \left( \abs{\left(q_m^*(j)\right)^{-1} - \left(q_{\infty}^*(j)\right)^{-1}} \right), \end{align*} which proves \eqref{piano002}.
\end{proof} Let $U^*$ be the function defined on $\supp(\tt V_1^*)$ by \[ U^*(j,z) = \tbb E_{j,z}^{*+} \left( q_{\infty}^*(j) \right). \] Using \eqref{sablier001} and Lemma \ref{soir}, we have \begin{equation} \label{panda} \tbb E_{j,z}^{*+} \left( \left(q_{\infty}^*(j)\right)^{-1} \right) \leq \frac{1}{1-f_j(0)} +\eta \tbb E_{j,z}^{*+} \left( \sum_{k=1}^{+\infty} \e^{-S_k^*} \right) <+\infty. \end{equation} Therefore $q_{\infty}^*(j) > 0$ $\tbb P_{j,z}^{*+}$-a.s.\ and so $U^*(j,z) > 0$. In addition, by \eqref{sablier002}, $U^*(j,z) \leq 1$. For any $(j,z) \in \supp(\tt V_1^*)$, \begin{equation} \label{eventail} U^*(j,z) \in (0,1]. \end{equation} \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{recreation} For any $(j,z) \in \supp(\tt V_1^*)$ and $i \in \bb X$, we have \[ \lim_{m\to+\infty} \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_m^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) = U^*(j,z) \tbs \nu_1 (i). \] \end{lemma} \begin{proof} The assertion of the lemma is a straightforward consequence of Lemmas \ref{jaguar} and \ref{piano}. \end{proof} \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{castorBP} For any $(j,z) \in \supp(\tt V_1^*)$ and $\theta \in (0,1)$, we have \[ \lim_{m\to+\infty} \limsup_{n\to +\infty} \tbb E_j^* \left( \sachant{ \abs{q_m^*(j)-q_{\pent{\theta n}}^*(j)} }{ \tau_z^* > n+1 } \right) = 0. \] \end{lemma} \begin{proof} Fix $(j,z) \in \supp(\tt V_1^*)$ and $\theta \in (0,1)$. Let $m \geq 1$ and $n \geq 1$ be such that $\theta n \geq m+1$. Set $\theta_n = \pent{\theta n}$. Denote \[ I_0 := \tbb E_j^* \left( \sachant{ \abs{q_m^*(j)-q_{\theta_n}^*(j)} }{ \tau_z^* > n+1 } \right) \qquad \text{and} \qquad J_n(j,z) := \tbb P_j^* \left( \tau_z^* > n \right). \] Note that by the point \ref{oreiller001} of Proposition \ref{oreiller}, we have $J_n(j,z) > 0$ for any $n$ large enough.
By the Markov property and the point \ref{oreiller002} of Proposition \ref{oreiller}, \begin{align*} I_0 &= \frac{1}{J_{n+1}(j,z)} \tbb E_j^* \left( \abs{q_m^*(j)-q_{\theta_n}^*(j)} J_{n+1-\theta_n} \left( X_{\theta_n}^*,z+S_{\theta_n}^* \right) \,;\, \tau_z^* > \theta_n \right) \\ &\leq \frac{c}{J_{n+1}(j,z) \sqrt{n+1-\theta_n}} \tbb E_j^* \left( \abs{q_m^*(j)-q_{\theta_n}^*(j)} \left( 1+z+S_{\theta_n}^* \right) \,;\, \tau_z^* > \theta_n \right). \end{align*} Using the point \ref{sable003} of Proposition \ref{sable} and \eqref{sablier001}, \begin{align*} I_0 &\leq \frac{c}{J_{n+1}(j,z) \sqrt{n(1-\theta)}} \tbb E_j^* \left( \abs{q_m^*(j)-q_{\theta_n}^*(j)} \left( 1+\tt V_1^*\left( X_{\theta_n}^*,z+S_{\theta_n}^* \right) \right) \,;\, \tau_z^* > \theta_n \right) \\ &\leq \frac{c}{J_{n+1}(j,z) \sqrt{n(1-\theta)}} \left( \tbb P_j^* \left( \tau_z^* > \theta_n \right) + \tt V_1^*(j,z) \tbb E_{j,z}^{*+} \left( \abs{q_m^*(j)-q_{\theta_n}^*(j)} \right) \right). \end{align*} By the point \ref{oreiller001} of Proposition \ref{oreiller} and \eqref{piano002}, we obtain that \[ \limsup_{n\to+\infty} I_0 \leq \limsup_{n\to+\infty} \frac{c\sqrt{n+1}}{\sqrt{n(1-\theta)}} \tbb E_{j,z}^{*+} \left( \abs{q_m^*(j)-q_{\theta_n}^*(j)} \right) = \frac{c}{\sqrt{1-\theta}} \tbb E_{j,z}^{*+} \left( \abs{q_m^*(j)-q_{\infty}^*(j)} \right). \] Taking the limit as $m \to +\infty$ and using \eqref{piano002}, we conclude that \[ \lim_{m\to+\infty} \limsup_{n\to +\infty} \tbb E_j^* \left( \sachant{ \abs{q_m^*(j)-q_{\pent{\theta n}}^*(j)} }{ \tau_z^* > n+1 } \right) = 0. \] \end{proof} \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{panier} For any $(j,z) \in \supp(\tt V_1^*)$, $i \in \bb X$ and $\theta \in (0,1)$, we have \[ \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_{\pent{\theta n}}^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) = U^*(j,z) \tbs \nu_1 (i).
\] \end{lemma} \begin{proof} For any $(j,z) \in \supp(\tt V_1^*)$, $i \in \bb X$, $\theta \in (0,1)$, $m \geq 1$ and $n \geq m+1$ such that $\pent{\theta n} \geq m$, we have \begin{align*} I_0 &:= \tbb E_j^* \left( \sachant{ q_{\pent{\theta n}}^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) \\ &= \tbb E_j^* \left( \sachant{ q_m^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) + \underbrace{\tbb E_j^* \left( \sachant{ q_{\pent{\theta n}}^*(j)-q_m^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right)}_{=:I_1}. \end{align*} By Lemma \ref{castorBP}, \[ \limsup_{m\to+\infty} \limsup_{n\to +\infty} \abs{I_1} \leq \lim_{m\to+\infty} \limsup_{n\to +\infty} \tbb E_j^* \left( \sachant{ \abs{q_{\pent{\theta n}}^*(j)-q_m^*(j)} }{ \tau_z^* > n+1 } \right) = 0. \] Consequently, using Lemma \ref{recreation}, \[ \lim_{n\to +\infty} I_0 = \lim_{m\to+\infty} \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_{m}^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) = U^*(j,z) \tbs \nu_1 (i). \] \end{proof} \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{osier} For any $(j,z) \in \supp(\tt V_1^*)$, we have \[ \lim_{p\to +\infty} \tbb E_j^* \left( \sachant{ q_{p}^*(j) }{ \tau_z^* > p+1 } \right) = U^*(j,z). \] \end{lemma} \begin{proof} Fix $(j,z) \in \supp(\tt V_1^*)$. For any $p\geq 1$ and $\theta \in (0,1)$ set $n = \pent{p/\theta}+1$. Note that $p=\pent{\theta n}$. We write, for any $p \geq 1$, \[ \tbb E_j^* \left( \sachant{ q_p^*(j) }{ \tau_z^* > p+1 } \right) = \frac{\tbb E_j^* \left( q_p^*(j) \,;\, \tau_z^* > n+1 \right) + \tbb E_j^* \left( q_p^*(j) \,;\, p+1 < \tau_z^* \leq n+1 \right)}{\tbb P_j^* \left( \tau_z^* > p+1 \right)}. 
\] By Lemma \ref{panier} and the point \ref{oreiller001} of Proposition \ref{oreiller}, \begin{align*} \frac{\tbb E_j^* \left( q_p^*(j) \,;\, \tau_z^* > n+1 \right)}{\tbb P_j^* \left( \tau_z^* > p+1 \right)} &= \sum_{i\in \bb X} \tbb E_j^* \left( \sachant{ q_p^*(j) \,;\, X_{n+1}^*=i }{ \tau_z^* > n+1 } \right) \frac{\tbb P_j^* \left( \tau_z^* > n+1 \right)}{\tbb P_j^* \left( \tau_z^* > p+1 \right)} \\ &\underset{p\to+\infty}{\longrightarrow} U^*(j,z) \sqrt{\theta}. \end{align*} Moreover, using \eqref{sablier001} and the point \ref{oreiller001} of Proposition \ref{oreiller}, \[ \frac{\tbb E_j^* \left( q_p^*(j) \,;\, p+1 < \tau_z^* \leq n+1 \right)}{\tbb P_j^* \left( \tau_z^* > p+1 \right)} \leq 1- \frac{\tbb P_j^* \left(\tau_z^* > n+1 \right)}{\tbb P_j^* \left( \tau_z^* > p+1 \right)} \underset{p\to+\infty}{\longrightarrow} 1-\sqrt{\theta}. \] Therefore, for any $\theta \in (0,1)$, \[ \limsup_{p\to+\infty} \abs{ \tbb E_j^* \left( \sachant{ q_p^*(j) }{ \tau_z^* > p+1 } \right) - U^*(j,z) \sqrt{\theta} } \leq 1-\sqrt{\theta}. \] Taking the limit as $\theta \to 1$ concludes the proof. \end{proof} \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{moulin} For any $(j,z) \in \supp(\tt V_1^*)$ and $\theta \in (0,1)$, we have \[ \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ \abs{ q_{\pent{\theta n}}^*(j)-q_n^*(j) } }{ \tau_z^* > n+1 } \right) = 0. \] \end{lemma} \begin{proof} Using the fact that $\eta_k^*(j)$ are non-negative and the definition of $q_n^*(j)$ in \eqref{chemin}, we see that $( q_n^*(j) )_{n\geq 1}$ is non-increasing.
Therefore, using Lemmas \ref{panier} and \ref{osier}, \begin{align*} I_0 &:= \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ \abs{ q_{\pent{\theta n}}^*(j)-q_n^*(j) } }{ \tau_z^* > n+1 } \right) \\ &= \lim_{n\to +\infty} \sum_{i \in \bb X} \tbb E_j^* \left( \sachant{ q_{\pent{\theta n}}^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) - \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_n^*(j) }{ \tau_z^* > n+1 } \right) \\ &= U^*(j,z) - U^*(j,z) = 0. \end{align*} \end{proof} \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{aulne} For any $(j,z) \in \supp(\tt V_1^*)$ and $i \in \bb X$, we have \[ \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_n^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) = U^*(j,z) \tbs \nu_1(i). \] \end{lemma} \begin{proof} By Lemmas \ref{panier} and \ref{moulin}, for any $(j,z) \in \supp(\tt V_1^*)$, $i \in \bb X$ and $\theta \in (0,1)$, \begin{align*} I_0 &:= \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_n^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) \\ &= \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_{\pent{\theta n}}^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) \\ &\qquad+ \lim_{n\to +\infty} \tbb E_j^* \left( \sachant{ q_n^*(j) - q_{\pent{\theta n}}^*(j) \,;\, X_{n+1}^* = i }{ \tau_z^* > n+1 } \right) \\ &= U^*(j,z) \tbs \nu_1(i). \end{align*} \end{proof} \begin{lemma} Assume that the conditions of Theorem \ref{sceptre} are satisfied. \label{dieu} There exists $\tt u$ a positive function on $\bb X$ such that, for any $(i,j) \in \bb X^2$, we have \[ \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \right) \underset{n\to+\infty}{\sim} \frac{\tt u(j) \tbs \nu_1(i)}{\sqrt{n}}. \] \end{lemma} \begin{proof} Fix $(i,j) \in \bb X^2$. 
For any $z \in \bb R$ and $n \geq 1$, \begin{equation} \label{gourmand001} 0 \leq \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \right) - \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \,,\, \tau_z^* > n+1 \right) \leq \tbb E_j^* \left( q_n^*(j) \,;\, \tau_z^* \leq n+1 \right). \end{equation} Since $q_n^*(j) \leq 1$ (see \eqref{sablier001}), we have \begin{equation} \label{bonheur001} \tbb E_j^* \left( q_n^*(j) \,;\, \tau_z^* \leq n+1 \right) \leq \tbb E_j^* \left( q_n^*(j) \,;\, \tau_z^* \leq n \right) + \tbb P_j^* \left( \tau_z^* = n+1 \right). \end{equation} By \eqref{cheminbis}, $q_n^*(j) \leq \e^{S_n^*}$. Since $( q_n^*(j) )_{n\geq 1}$ is non-increasing, we have $q_n^*(j) = \min_{1\leq k \leq n} q_k^*(j) \leq \e^{\min_{1\leq k \leq n} S_k^*}$. Consequently, \begin{align*} \tbb E_j^* \left( q_n^*(j) \,;\, \tau_z^* \leq n \right) &\leq \e^{-z} \tbb E_j^* \left( \e^{\min_{1\leq k \leq n} z+S_k^*} \,;\, \tau_z^* \leq n \right) \\ &\leq \e^{-z} \sum_{p=0}^{+\infty} \e^{-p}\tbb P_j^* \left( -(p+1) < \min_{1\leq k \leq n} z+S_k^* \leq -p \,,\, \tau_z^* \leq n \right) \\ &\leq \e^{-z} \sum_{p=0}^{+\infty} \e^{-p}\tbb P_j^* \left( \tau_{z+p+1}^* > n \right). \end{align*} Using the point \ref{oreiller002} of Proposition \ref{oreiller}, \begin{equation} \label{bonheur002} \tbb E_j^* \left( q_n^*(j) \,;\, \tau_z^* \leq n \right) \leq \frac{c \e^{-z} \left( 1+\max(0,z) \right)}{\sqrt{n}}. \end{equation} By the point \ref{sable003} of Proposition \ref{sable}, there exists $z_0 \in \bb R$ such that for any $z \geq z_0$, $\tt V_1^*(j,z) > 0$, which means that $(j,z) \in \supp(\tt V_1^*)$. Therefore, using the point \ref{oreiller001} of Proposition \ref{oreiller}, for any $z\geq z_0$, \begin{equation} \label{bonheur003} \lim_{n\to+\infty} \sqrt{n}\tbb P_j^* \left( \tau_z^* = n+1 \right) = \lim_{n\to+\infty} \sqrt{n}\tbb P_j^* \left( \tau_z^* > n \right) - \lim_{n\to+\infty} \sqrt{n}\tbb P_j^* \left( \tau_z^* > n+1 \right) = 0.
\end{equation} Putting together \eqref{bonheur001}, \eqref{bonheur002} and \eqref{bonheur003}, we obtain that, for any $z\geq z_0$, \begin{equation} \label{gourmand002} \limsup_{n\to+\infty} \sqrt{n} \tbb E_j^* \left( q_n^*(j) \,;\, \tau_z^* \leq n+1 \right) \leq c \e^{-z} \left( 1+\max(0,z) \right). \end{equation} Moreover, using Lemma \ref{aulne} and the point \ref{oreiller001} of Proposition \ref{oreiller}, \begin{equation} \label{gourmand003} \lim_{n\to+\infty} \sqrt{n} \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \,,\, \tau_z^* > n+1 \right) = \frac{2\tt V_1^*(j,z)}{\sqrt{2\pi} \tt \sigma_1} U^*(j,z) \tbs \nu_1(i), \end{equation} where $\tt \sigma_1$ is defined in \eqref{ane}. Denoting \[ I(i,j) = \liminf_{n\to +\infty} \sqrt{n}\tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \right) \quad \text{and} \quad J(i,j) = \limsup_{n\to +\infty} \sqrt{n} \tbb E_j^* \left( q_n^*(j) \,;\, X_{n+1}^* = i \right), \] and using \eqref{gourmand001}, \eqref{gourmand002} and \eqref{gourmand003}, we obtain that, for any $z \geq z_0,$ \begin{align} \label{armure} \frac{2\tt V_1^*(j,z)}{\sqrt{2\pi} \tt \sigma_1} U^*(j,z) \tbs \nu_1(i) &\leq I(i,j) \\ &\leq J(i,j) \leq \frac{2\tt V_1^*(j,z)}{\sqrt{2\pi} \tt \sigma_1} U^*(j,z) \tbs \nu_1(i) + c \e^{-z} \left( 1+\max(0,z) \right). \nonumber \end{align} By \eqref{gourmand003}, we observe that $z \mapsto \frac{2\tt V_1^*(j,z)U^*(j,z)}{\sqrt{2\pi} \tt \sigma_1}$ is non-decreasing and by \eqref{armure}, this function is bounded by $I(i,j)/ \tbs \nu_1(i)$. Consequently the limit \[ \tt u(j) := \lim_{z\to+\infty} \frac{2\tt V_1^*(j,z)U^*(j,z)}{\sqrt{2\pi} \tt \sigma_1} \] exists and for any $z \geq z_0$, by \eqref{eventail}, \begin{equation} \label{luxe} \tt u(j) \geq \frac{2\tt V_1^*(j,z)U^*(j,z)}{\sqrt{2\pi} \tt \sigma_1} >0. \end{equation} Taking the limit as $z \to +\infty$ in \eqref{armure}, we conclude that \[ I(i,j) = J(i,j) = \tt u(j) \tbs \nu_1(i).
\] \end{proof} \textbf{Proof of Theorem \ref{sceptre}.} By \eqref{luxe} the function \[ u(j) = \tt u(j) \frac{\tbs \nu_1(j) \e^{-\rho(j)}}{ v_1(j)}, \qquad \forall j \in \bb X, \] is positive on $\bb X$. The assertion of Theorem \ref{sceptre} is a consequence of \eqref{coleopterebis} and Lemma \ref{dieu}. \section{Proofs in the weakly subcritical case} \label{weaklysubcrit} We assume the conditions of Theorem \ref{cape}, that is Conditions \ref{primitif}-\ref{cathedrale} and $\bs \nu(\rho)=k'(0)<0$, $k'(1)>0$. By Lemma \ref{mulet}, the function $\ll \mapsto K'(\ll)$ is increasing. Consequently, there exists $\ll \in (0,1)$ such that \begin{equation} \label{luth} K'(\ll) = \frac{k'(\ll)}{k(\ll)} = \tbs \nu_{\ll} (\rho) = 0. \end{equation} For this $\ll$ and any $i \in \bb X$, define the changed probability measure $\tbb P_i$ and the corresponding expectation $\tbb E_i$ by \eqref{chandelle}, such that for any $n \geq 1$ and any $g$: $\bb X^n \to \bb C$, \begin{equation} \label{samovar} \tbb E_i \left( g(X_1, \dots, X_n) \right) = \frac{\bb E_i \left( \e^{\ll S_n} g(X_1, \dots, X_n) v_{\ll}(X_n) \right)}{k(\ll)^n v_{\ll}(i)}. \end{equation} Our starting point is the following formula which is a consequence of \eqref{jeux}: for any $(i,j) \in \bb X^2$ and $n \geq 1$, \begin{align} &\bb E_i \left( q_{n+1} \,;\, X_{n+1} = j \,,\, \tau_y > n \right) \nonumber\\ &\hspace{2cm}= \tbb E_i \left( \e^{-\ll S_n} q_n\left( f_j(0) \right) \,;\, X_{n+1} = j \,,\, \tau_y > n \right) k(\ll)^{n+1} \frac{v_{\ll}(i)}{v_{\ll}(j)} \e^{-\ll \rho(j)}. \label{falaise} \end{align} The transition probabilities of $\left( X_n \right)_{n\geq 0}$ under the changed measure are given by \eqref{lacBP}: \[ \tbf P_{\ll} (i,j) = \frac{\e^{\ll \rho(j)} v_{\ll}(j)}{k(\ll)v_{\ll}(i)} \bf P(i,j). \] By \eqref{luth}, the Markov walk $(S_n)_{n\geq 0}$ is centred under $\tbb P_i$. 
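Indeed, since $\tbs \nu_{\ll}$ is the invariant measure of $\tbf P_{\ll}$, the centring can also be checked directly (a one-line verification based on \eqref{luth}): the stationary drift of $\left( S_n \right)_{n\geq 0}$ under the changed measure is \[ \sum_{i \in \bb X} \tbs \nu_{\ll}(i) \sum_{j \in \bb X} \tbf P_{\ll}(i,j) \rho(j) = \sum_{j \in \bb X} \tbs \nu_{\ll}(j) \rho(j) = \tbs \nu_{\ll}(\rho) = K'(\ll) = 0. \]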
Note that under the hypotheses of Theorem \ref{cape}, by Lemma \ref{jument}, Conditions \ref{primitif} and \ref{cathedrale} also hold for $\tbf P_{\ll}$. Therefore all the results of Section \ref{flamme} hold for the Markov walk $\left( S_n \right)_{n\geq 0}$ under $\tbb P_i$. Let $\left( X_n^* \right)_{n\geq 0}$ be the dual Markov chain independent of $\left( X_n \right)_{n\geq 0}$, with transition probabilities $\tbf P_{\ll}^*$ defined by (cp.\ \eqref{statueBP}) \begin{equation} \label{monument} \tbf P_{\ll}^*(i,j) = \frac{\tbs \nu_{\ll}(j)}{\tbs \nu_{\ll}(i)} \tbf P_{\ll}(j,i) = \frac{\bs \nu_{\ll}(j)}{\bs \nu_{\ll}(i)} \frac{\e^{\ll \rho(i)}}{k(\ll)} \bf P(j,i). \end{equation} As in Section \ref{batailleBP}, we define the dual Markov walk $( S_n^* )_{n\geq 0}$ by \eqref{promenade001} and its exit time $\tau_z^*$ for any $z \in \bb R$ by \eqref{promenade002}. Let $\tbb P_{i,j}$ be the probability on $\left( \Omega, \scr F \right)$ generated by the finite dimensional distributions of $( X_n, X_n^* )_{n\geq 0}$ starting at $(X_0,X_0^*) = (i,j)$. By \eqref{luth}, the Markov walk $( S_n^* )_{n\geq 0}$ is centred under $\tbb P_{i,j}$: \[ \tbs \nu_{\ll} (\rho) = \tbs \nu_{\ll} (-\rho) = 0 \] and by Lemma \ref{sourire}, Conditions \ref{primitif} and \ref{cathedrale} hold for $\tbf P_{\ll}^*$. Let $\tt V_{\ll}$ and $\tt V_{\ll}^*$ be the harmonic functions of the Markov walks $\left( S_n \right)_{n\geq 0}$ and $\left( S_n^* \right)_{n\geq 0}$, respectively (see Proposition \ref{sable}). The idea of the proof is in line with that of the previous sections: the positive trajectories (corresponding to the event $\left\{ \tau_y > n \right\}$) affect the asymptotic behaviour of the survival probability.
However, in the weakly subcritical case, the factor $\e^{-\ll S_n}$ in the expectation $\tbb E_i ( \e^{-\ll S_n} q_n\left( f_j(0) \right) \,;\, X_{n+1} = j )$ contributes in such a way that only the trajectories starting at $y \in \bb R$, conditioned to stay positive and to finish near $0$, have an impact on the asymptotic behaviour of $\tbb E_i \left( \e^{-\ll S_n} q_n\left( f_j(0) \right) \,;\, X_{n+1} = j \right)$. We start with some preliminary bounds. The following assertion is similar to Lemma \ref{soir}. \begin{lemma} Assume that the conditions of Theorem \ref{cape} are satisfied. \label{savaneBP} For any $i \in \bb X$, $y \in \bb R$, $k \geq 1$ and $n\geq k+1$, we have \[ n^{3/2} \tbb E_i \left( \e^{-S_k} \e^{-\ll S_n} \,;\, \tau_y > n \right) \leq \e^{(1+\ll) y} (1+\max(y,0)) \frac{c n^{3/2}}{(n-k)^{3/2}k^{3/2}}. \] \end{lemma} \begin{proof} Fix $i \in \bb X$, $y \in \bb R$, $k \geq 1$ and $n\geq k+1$. By the Markov property, \begin{align*} I_0 &:= n^{3/2} \tbb E_i \left( \e^{-S_k} \e^{-\ll S_n} \,;\, \tau_y > n \right) \\ &\leq \sum_{p=0}^{+\infty} n^{3/2} \e^{\ll y} \e^{-\ll p} \tbb E_i \left( \e^{-S_k} \,;\, y+S_n \in [p,p+1] \,,\, \tau_y > n \right) \\ &= \sum_{p=0}^{+\infty} n^{3/2} \e^{\ll y} \e^{-\ll p} \tbb E_i \left( \e^{-S_k} J_{n-k} \left( X_k, y+S_k \right) \,;\, \tau_y > k \right), \end{align*} where for any $i' \in \bb X$, $y' \in \bb R$ and $p \geq 0$, \[ J_{n-k} (i',y') = \tbb P_{i'} \left( y'+S_{n-k} \in [p,p+1] \,,\, \tau_{y'} > n-k \right). \] By the point \ref{gorilleBP} of Proposition \ref{goliane}, \[ J_{n-k} (i',y') \leq \frac{c}{(n-k)^{3/2}} (1+p)(1+\max(y',0)).
\] Consequently, \begin{align*} I_0 &\leq \e^{\ll y} \frac{c n^{3/2}}{(n-k)^{3/2}} \tbb E_i \left( \e^{-S_k} \left( 1+y+S_k \right) \,;\, \tau_y > k \right) \sum_{p=0}^{+\infty} \e^{-\ll p} (1+p) \\ &\leq \e^{\ll y} \frac{c n^{3/2}}{(n-k)^{3/2}} \tbb E_i \left( \e^{-S_k} \left( 1+y+S_k \right) \,;\, \tau_y > k \right) \\ &\leq \e^{(1+\ll) y} \frac{c n^{3/2}}{(n-k)^{3/2}} \sum_{p=0}^{+\infty} \e^{-p}(2+p) \tbb P_i \left( y+S_k \in [p,p+1] \,,\, \tau_y > k \right). \end{align*} Again by the point \ref{gorilleBP} of Proposition \ref{goliane}, \[ I_0 \leq \e^{(1+\ll) y} (1+\max(y,0)) \frac{c n^{3/2}}{(n-k)^{3/2}k^{3/2}} \sum_{p=0}^{+\infty} \e^{-p}(2+p)(1+p). \] This concludes the proof of the lemma. \end{proof} For any $l \geq 1$ and $n \geq l+1$, set \[ q_{l,n}\left( f_j(0) \right) := 1-f_{l+1,n}\left( f_j(0) \right) = 1-f_{X_{l+1}} \circ \cdots \circ f_{X_n} \circ f_j(0). \] In the same way as in Lemma \ref{foin}, we obtain: \begin{equation} \label{noisette} q_{l,n}\left( f_j(0) \right)^{-1} = \frac{\e^{S_l-S_n}}{1-f_j(0)} + \sum_{k=l}^{n-1} \e^{S_l-S_k} \eta_{k+1,n}\left( f_j(0) \right), \end{equation} where $\eta_{k+1,n}(s)$ are defined by \eqref{montre004}. Moreover, similarly to \eqref{histoire}, we have for any $n \geq l+1 \geq 2$, \begin{equation} \label{mouton} q_{l,n}\left( f_j(0) \right) \in (0,1] \qquad \tbb P_i\text{-a.s.} \end{equation} In addition, by Lemma \ref{pieuvre}, for any $k \leq n-1$, \begin{equation} \label{piscine} 0 \leq \eta_{k+1,n}\left( f_j(0) \right) \leq \eta \qquad \tbb P_i\text{-a.s.} \end{equation} \begin{lemma} Assume that the conditions of Theorem \ref{cape} are satisfied. \label{constellation} For any $(i,j) \in \bb X^2$ and $y \in \bb R$, we have \[ \lim_{l,m \to +\infty} \limsup_{n\to+\infty} n^{3/2} \tbb E_i \left( \abs{\e^{-S_{n-m}} q_{n-m,n}\left( f_j(0) \right)^{-1} - \e^{-S_l} q_{l,n}\left( f_j(0) \right)^{-1}} \e^{-\ll S_n} \,;\, \tau_y > n \right) = 0.
\] \end{lemma} \begin{proof} Fix $(i,j) \in \bb X^2$ and $y \in \bb R$. For any $l \geq 1$, $m \geq 1$ and $n \geq l+m+1$, we have \begin{align*} I_0 &:= n^{3/2} \tbb E_i \left( \abs{\e^{-S_{n-m}} q_{n-m,n}\left( f_j(0) \right)^{-1} - \e^{-S_l} q_{l,n}\left( f_j(0) \right)^{-1}} \e^{-\ll S_n} \,;\, \tau_y > n \right) \\ &= n^{3/2} \tbb E_i \left( \sum_{k=l}^{n-m-1} \e^{-S_k} \eta_{k+1,n}\left( f_j(0) \right) \e^{-\ll S_n} \,;\, \tau_y > n \right). \end{align*} Using \eqref{piscine} and Lemma \ref{savaneBP}, \[ I_0 \leq \eta \sum_{k=l}^{n-m-1} \e^{(1+\ll) y} (1+\max(y,0)) \frac{c n^{3/2}}{(n-k)^{3/2}k^{3/2}}. \] Let $n_1 := \pent{n/2}$. We note that \begin{align*} \sum_{k=l}^{n-m-1} \frac{c n^{3/2}}{(n-k)^{3/2}k^{3/2}} &\leq \frac{c n^{3/2}}{(n-n_1)^{3/2}} \sum_{k=l}^{n_1} \frac{1}{k^{3/2}} + \frac{c n^{3/2}}{n_1^{3/2}}\sum_{k=n_1+1}^{n-m-1} \frac{1}{(n-k)^{3/2}} \\ &\leq c \sum_{k=l}^{+\infty} \frac{1}{k^{3/2}} + c \sum_{k=m}^{+\infty} \frac{1}{k^{3/2}}. \end{align*} Consequently, \[ \limsup_{n\to +\infty} I_0 \leq c\eta \e^{(1+\ll) y} (1+\max(y,0)) \left( \sum_{k=l}^{+\infty} \frac{1}{k^{3/2}} + \sum_{k=m}^{+\infty} \frac{1}{k^{3/2}} \right). \] Taking the limits as $l \to +\infty$ and $m \to +\infty$ proves the lemma. \end{proof} For any $l \geq 1$, $m\geq 1$ and $n \geq l+m+1$, consider the random variables \begin{align*} &r_n^{(l,m)}(j) := 1 - f_{1,l} \left( \left[ 1-f_{l+1,n-m}'(1) \left( 1-f_{n-m+1,n}\left( f_j(0) \right) \right) \right]^+ \right) \\ &= 1- f_{X_1}\circ \cdots \circ f_{X_l} \left( \left[ 1-f_{X_{l+1}}'(1) \times \dots \times f_{X_{n-m}}'(1) \left( 1-f_{X_{n-m+1}}\circ \cdots \circ f_{X_n} \circ f_j(0) \right) \right]^+ \right), \end{align*} where $[t]^+ = \max(t,0)$ for any $t\in \bb R$. The random variable $r_n^{(l,m)}(j)$ approximates $q_n\left( f_j(0) \right)$ in the following sense: \begin{lemma} Assume that the conditions of Theorem \ref{cape} are satisfied.
\label{carrousel} For any $(i,j) \in \bb X^2$ and $y \in \bb R$, \[ \lim_{l,m \to +\infty} \limsup_{n\to +\infty} n^{3/2} \tbb E_i \left( \abs{q_n\left( f_j(0) \right) - r_n^{(l,m)}(j)} \e^{-\ll S_n} \,;\, \tau_y > n \right) = 0. \] \end{lemma} \begin{proof} Fix $(i,j) \in \bb X^2$ and $y \in \bb R$. Since for any $i' \in \bb X$, $f_{i'}$ is increasing and convex, the function $f_{l+1,n-m}$ is convex. So, for any $l \geq 1$, $m \geq 1$ and $n \geq l+m+1$, \[ f_{l+1,n}\left( f_j(0) \right) = f_{l+1,n-m} \left( f_{n-m+1,n}\left( f_j(0) \right) \right) \geq \left[ 1- f_{l+1,n-m}'(1) \left( 1-f_{n-m+1,n}\left( f_j(0) \right) \right) \right]^+. \] Since $f_{1,l}$ is increasing, \[ q_n\left( f_j(0) \right) = 1-f_{1,n}\left( f_j(0) \right) \leq r_n^{(l,m)}(j), \] or equivalently \[ 0 \leq r_n^{(l,m)}(j) - q_n\left( f_j(0) \right). \] Moreover, by the convexity of $f_{1,l}$, \begin{align*} r_n^{(l,m)}(j) - q_n\left( f_j(0) \right) &= f_{1,l} \circ f_{l+1,n}\left( f_j(0) \right) - f_{1,l} \left( \left[ 1-f_{l+1,n-m}'(1) \left( 1-f_{n-m+1,n}\left( f_j(0) \right) \right) \right]^+ \right) \\ &\leq f_{1,l}'(1) \left( f_{l+1,n}\left( f_j(0) \right) - \left[ 1-f_{l+1,n-m}'(1) \left( 1-f_{n-m+1,n}\left( f_j(0) \right) \right) \right]^+ \right) \\ &\leq f_{1,l}'(1) \left( f_{l+1,n-m}'(1) q_{n-m,n}\left( f_j(0) \right) -q_{l,n}\left( f_j(0) \right) \right) \\ &= \e^{S_{n-m}} q_{n-m,n}\left( f_j(0) \right) - \e^{S_l} q_{l,n}\left( f_j(0) \right) \\ &= \e^{S_{n-m}} q_{n-m,n}\left( f_j(0) \right) \e^{S_l} q_{l,n}\left( f_j(0) \right) \\ &\qquad \times \left( \e^{-S_l} q_{l,n}\left( f_j(0) \right)^{-1} - \e^{-S_{n-m}} q_{n-m,n}\left( f_j(0) \right)^{-1} \right). \end{align*} By \eqref{noisette}, we have $q_{l,n}\left( f_j(0) \right) \leq \e^{S_n-S_l}$ and so \[ r_n^{(l,m)}(j) - q_n\left( f_j(0) \right) \leq \e^{2S_n} \left( \e^{-S_l} q_{l,n}\left( f_j(0) \right)^{-1} - \e^{-S_{n-m}} q_{n-m,n}\left( f_j(0) \right)^{-1} \right). 
\] In addition, by the definition of $r_n^{(l,m)}(j)$ and $q_n\left( f_j(0) \right)$, we have $r_n^{(l,m)}(j) - q_n\left( f_j(0) \right) \leq 1$. Therefore, it holds $\tbb P_i\text{-a.s.}$ that \[ r_n^{(l,m)}(j) - q_n\left( f_j(0) \right) \leq \min\left( 1,\e^{2S_n} \left( \e^{-S_l} q_{l,n}\left( f_j(0) \right)^{-1} - \e^{-S_{n-m}} q_{n-m,n}\left( f_j(0) \right)^{-1} \right) \right). \] Using the previous bound, it follows that, for any integer $N \geq 1$, \begin{align*} I_0 &:= n^{3/2} \tbb E_i \left( \abs{q_n\left( f_j(0) \right) - r_n^{(l,m)}(j)} \e^{-\ll S_n} \,;\, \tau_y > n \right) \\ &\leq \e^{2(N-y)} n^{3/2} \tbb E_i \left( \abs{\e^{-S_l} q_{l,n}\left( f_j(0) \right)^{-1} - \e^{-S_{n-m}} q_{n-m,n}\left( f_j(0) \right)^{-1}} \e^{-\ll S_n} \,;\, \tau_y > n \right) \\ &\hspace{5cm} + n^{3/2} \tbb E_i \left( \e^{-\ll S_n} \,;\, y+S_n > N \,,\, \tau_y > n \right). \end{align*} Moreover, using the point \ref{gorilleBP} of Proposition \ref{goliane}, \begin{align*} n^{3/2} \tbb E_i \left( \e^{-\ll S_n} \,;\, y+S_n > N \,,\, \tau_y > n \right) &\leq \sum_{p=N}^{+\infty} \e^{\ll y} \e^{-\ll p} n^{3/2} \tbb P_i \left( y+S_n \in [p,p+1] \,,\, \tau_y > n \right) \\ &\leq c \e^{\ll y} (1+\max(y,0)) \sum_{p=N}^{+\infty} \e^{-\ll p} (1+p). \end{align*} Consequently, using Lemma \ref{constellation}, we obtain that \[ \lim_{l,m \to +\infty} \limsup_{n\to+\infty} I_0 \leq c \e^{\ll y} (1+\max(y,0)) \sum_{p=N}^{+\infty} \e^{-\ll p} (1+p). \] Taking the limit as $N \to +\infty$ proves the lemma.
\end{proof} We now introduce the following random variable: for any $j \in \bb X$, $u \in \bb R$, $l \geq 1$ and $m \geq 1$ \[ r_{\infty}^{(l,m)}(j,u) := 1-f_{X_1} \circ \cdots \circ f_{X_l} \left( \left[ 1- \e^{-S_l} \e^{u} q_m^*(j) \right]^+ \right) \in [0,1], \] where, as in \eqref{chemin} and \eqref{cheminbis}, for any $m \geq 1$, \[ q_m^*(j) := \e^{S_m^*} \left( 1- f_{X_m^*} \circ \cdots \circ f_{X_1^*} \circ f_j (0) \right) = \left[ \frac{1}{1-f_j(0)} + \sum_{k=1}^{m} \e^{-S_k^*} \eta_k^*(j) \right]^{-1} \] and as in \eqref{chemin003}, for any $k \geq 2$, \[ \eta_k^*(j) := g_{X_k^*} \left( f_{X_{k-1}^*} \circ \cdots \circ f_{X_1^*} \circ f_j (0) \right) \qquad \text{and} \qquad \eta_1^*(j) := g_{X_1^*} \left( f_j(0) \right). \] For any $(i,y) \in \supp(\tt V_{\ll})$ and $(j,z) \in \supp(\tt V_{\ll}^*)$, let $\tbb P_{i,y,j,z}^+$ and $\tbb E_{i,y,j,z}^+$ be, respectively, the probability and its associated expectation defined for any $l \geq 1$, $m \geq 1$ and any function $g$: $\bb X^{l+m} \to \bb C$ by \begin{align} \tbb E_{i,y,j,z}^+ &\left( g \left( X_1, \dots, X_l,X_m^*,\dots,X_1^* \right) \right) = \tbb E_{i,j} \left( g \left( X_1, \dots, X_l,X_m^*,\dots,X_1^* \right) \frac{\tt V_{\ll} \left( X_l, y+S_l \right)}{\tt V_{\ll}(i,y)} \times \right. \nonumber \\ &\hspace{8cm} \left. \frac{\tt V_{\ll}^* \left( X_m^*, z+S_m^* \right)}{\tt V_{\ll}^*(j,z)} \,;\, \tau_y > l \,,\, \tau_z^* > m \right). \label{theiere} \end{align} For any $j \in \bb X$ let $z_0(j) \in \bb R$ be the unique real number such that $(j,z) \in \supp \left( \tt V_{\ll}^* \right)$ for any $z > z_0(j)$ and $(j,z) \notin \supp \left( \tt V_{\ll}^* \right)$ for any $z < z_0(j)$ (see \cite{grama_limit_2016-1} for details on the domain of positivity of the harmonic function). Set $z_0(j)^+=\max\left\{z_0(j), 0 \right\}$. \begin{lemma} Assume that the conditions of Theorem \ref{cape} are satisfied.
\label{corbeau} For any $j \in \bb X$, $(i,y) \in \supp\left( \tt V_{\ll} \right)$, $l \geq 1$ and $m \geq 1$, \begin{align*} &\lim_{n\to +\infty} n^{3/2} \tbb E_i \left( r_n^{(l,m)}(j) \e^{-\ll S_n} \,;\, X_{n+1} = j \,,\, \tau_y > n \right) \\ &\hspace{2cm} = \frac{2}{\sqrt{2\pi} \sigma^3} \e^{\ll y} \int_{z_0(j)^+}^{+\infty} \e^{-\ll z} \tbb E_{i,y,j,z}^+ \left( r_{\infty}^{(l,m)}(j,z-y) \right) \tt V_{\ll}(i,y) \tt V_{\ll}^*(j,z) \dd z \tbs \nu_{\ll}(j). \end{align*} \end{lemma} \begin{proof} Fix $(i,y) \in \supp\left( \tt V_{\ll} \right)$, $j \in \bb X$, $l\geq 1$ and $m \geq 1$ and let $g$ be a function $\bb X^{l+m} \times \bb R \to \bb R_+$ defined by \begin{align*} &g(i_1, \dots, i_l,i_{n-m+1},\dots, i_n, z) = \e^{\ll y} \e^{-\ll z} \bbm 1_{\{ z \geq 0 \}} \tbf P_{\ll}(i_n,j) \left[ 1 \right. \\ &\left. -f_{i_1} \circ \cdots \circ f_{i_l} \left( \left[ 1- \e^{z-y-\rho(i_n)-\cdots-\rho(i_{n-m+1})-\rho(i_l)-\dots-\rho(i_1)} \left( 1-f_{i_{n-m+1}}\circ \cdots \circ f_{i_n} \circ f_j (0) \right) \right]^+ \right) \right] \end{align*} for all $(i_1, \dots, i_l,i_{n-m+1},\dots, i_n, z) \in \bb X^{l+m} \times \bb R $ and note that on $\{ \tau_y > n \}$, \[ g(X_1,\dots,X_l,X_{n-m+1},\dots,X_n,y+S_n) = r_n^{(l,m)}(j) \e^{-\ll S_n} \tbf P_{\ll}(X_n,j). \] Observe also that since $0 \leq g(i_1, \dots, i_l,i_{n-m+1},\dots, i_n, z) \leq \e^{\ll y} \e^{-\ll z} \bbm 1_{\{ z \geq 0 \}}$, the function $g$ belongs to the set, say $\scr C^+ \left( \bb X^{l+m} \times \bb R_+ \right)$, of non-negative functions $g$: $\bb X^{l+m} \times \bb R_+ \to \bb R_+$ satisfying the following properties: \begin{itemize} \item for any $(i_1,\dots,i_{l+m}) \in \bb X^{l+m}$, the function $z \mapsto g(i_1,\dots,i_{l+m},z)$ is continuous, \item there exists $\ee >0$ such that $\max_{i_1,\dots i_{l+m} \in \bb X} \sup_{z\geq 0} g(i_1,\dots,i_{l+m},z) (1+z)^{2+\ee} < +\infty$.
\end{itemize} Therefore, by the Markov property and Proposition \ref{sorcier}, we obtain that \begin{align*} I_0 &:= \lim_{n\to +\infty} n^{3/2} \tbb E_i \left( r_n^{(l,m)}(j) \e^{-\ll S_n} \,;\, X_{n+1} = j \,,\, \tau_y > n \right) \\ &= \lim_{n\to +\infty} n^{3/2} \tbb E_i \left( g \left(X_1, \dots, X_l, X_{n-m+1}, \dots, X_n, y+S_n \right) \,;\, \tau_y > n \right) \\ &= \frac{2}{\sqrt{2\pi} \sigma^3} \int_0^{+\infty} \e^{-\ll (z-y)} \sum_{j' \in \bb X} \tbb E_{i,j'} \left( r_{\infty}^{(l,m)}(j,z-y) \tbf P_{\ll}(X_1^*,j) \tt V_{\ll}\left( X_l,y+S_l \right) \right. \\ &\hspace{6cm} \left. \times \tt V_{\ll}^* \left( X_m^*, z+S_m^* \right) \,;\, \tau_y > l \,,\, \tau_z^* > m \right) \tbs \nu_{\ll}(j') \dd z. \end{align*} Since $\tbs \nu_{\ll}$ is $\tbf P_{\ll}^*$-invariant, we write \begin{align*} I_0 &= \frac{2}{\sqrt{2\pi} \sigma^3} \int_0^{+\infty} \e^{-\ll (z-y)} \sum_{j_1 \in \bb X} \tbf P_{\ll}(j_1,j) \tbs \nu_{\ll}(j_1) \tbb E_i \left( r_{\infty}^{(l,m)}(j,z-y) \tt V_{\ll}\left( X_l,y+S_l \right) \right. \\ &\hspace{6cm} \left. \times \sachant{\tt V_{\ll}^* \left( X_m^*, z+S_m^* \right) \,;\, \tau_y > l \,,\, \tau_z^* > m}{X_1^*=j_1} \right) \dd z. \end{align*} Using the definition of $\tbf P_{\ll}^*$ in \eqref{monument}, we have \begin{align*} I_0 &= \frac{2}{\sqrt{2\pi} \sigma^3} \int_0^{+\infty} \e^{-\ll (z-y)} \tbs \nu_{\ll}(j) \tbb E_{i,j} \left( r_{\infty}^{(l,m)}(j,z-y) \tt V_{\ll}\left( X_l,y+S_l \right) \right. \\ &\hspace{6cm} \left. \times \tt V_{\ll}^* \left( X_m^*, z+S_m^* \right) \,;\, \tau_y > l \,,\, \tau_z^* > m \right) \dd z. 
\end{align*} Now, note that when $(j,z) \notin \supp\left( \tt V_{\ll}^* \right)$, using the point \ref{sable001} of Proposition \ref{sable}, \begin{align*} \tbb E_{i,j} &\left( r_{\infty}^{(l,m)}(j,z-y) \tt V_{\ll}\left( X_l,y+S_l \right) \tt V_{\ll}^* \left( X_m^*, z+S_m^* \right) \,;\, \tau_y > l \,,\, \tau_z^* > m \right) \\ &\leq \tbb E_i \left( \tt V_{\ll}\left( X_l,y+S_l \right) \,;\, \tau_y > l \right) \tbb E_j^* \left( \tt V_{\ll}^* \left( X_m^*, z+S_m^* \right) \,;\, \tau_z^* > m \right) = \tt V_{\ll}(i,y) \tt V_{\ll}^*(j,z) = 0. \end{align*} Together with \eqref{theiere}, this proves the lemma. \end{proof} Consider for any $l \geq 1$, $j \in \bb X$ and $u \in \bb R$, \begin{equation} \label{mousse} r_{\infty}^{(l,\infty)}(j,u) = 1-f_{X_1} \circ \cdots \circ f_{X_l} \left( \left[ 1- \e^{-S_l} \e^{u} q_{\infty}^*(j) \right]^+ \right) \in [0,1], \end{equation} where as in \eqref{potion002}, \[ q_{\infty}^*(j) = \left[ \frac{1}{1-f_j(0)} + \sum_{k=1}^{\infty} \e^{-S_k^*} \eta_k^*(j) \right]^{-1}. \] \begin{lemma} Assume that the conditions of Theorem \ref{cape} are satisfied. \label{fontaine001} For any $u \in \bb R$, $(i,y) \in \supp \left( \tt V_{\ll} \right)$, $(j,z) \in \supp \left( \tt V_{\ll}^* \right)$ and $l\geq 1$, \[ \lim_{m\to +\infty} \tbb E_{i,y,j,z}^+ \left( \abs{r_{\infty}^{(l,m)}(j,u) - r_{\infty}^{(l,\infty)}(j,u)} \right) = 0. \] \end{lemma} \begin{proof} Fix $(i,y) \in \supp \left( \tt V_{\ll} \right)$, $(j,z) \in \supp \left( \tt V_{\ll}^* \right)$, $l\geq 1$ and $u \in \bb R$.
By the convexity of $f_{1,l}$, for any $m \geq 1$, we have $\tbb P_{i,y,j,z}^+$-a.s., \begin{align*} \abs{r_{\infty}^{(l,m)}(j,u) - r_{\infty}^{(l,\infty)}(j,u)} &\leq \left(f_{X_1} \circ \cdots \circ f_{X_l}\right)'(1) \abs{\left[ 1- \e^{-S_l} \e^{u} q_m^*(j) \right]^+ - \left[ 1- \e^{-S_l} \e^{u} q_{\infty}^*(j) \right]^+} \\ &\leq \e^{S_l} \abs{\e^{-S_l} \e^{u} q_m^*(j) - \e^{-S_l} \e^{u} q_{\infty}^*(j)} \\ &= \e^{u} \abs{q_m^*(j)q_{\infty}^*(j)} \abs{ \left(q_{\infty}^*(j) \right)^{-1} - \left(q_{m}^*(j) \right)^{-1}}. \end{align*} Moreover, for any $m \geq 1$, \begin{align*} q_m^*(j) &= \left[ \frac{1}{1-f_j(0)} + \sum_{k=1}^{m} \e^{-S_k^*} \eta_k^*(j) \right]^{-1} \in (0,1],\\ q_{\infty}^*(j) &= \left[ \frac{1}{1-f_j(0)} + \sum_{k=1}^{\infty} \e^{-S_k^*} \eta_k^*(j) \right]^{-1} \in [0,1] \end{align*} and by Lemma \ref{pieuvre}, for any $k \geq 1$, \begin{equation} \label{clocher001} 0 \leq \eta_k^*(j) \leq \eta. \end{equation} Therefore, \[ \abs{r_{\infty}^{(l,m)}(j,u) - r_{\infty}^{(l,\infty)}(j,u)} \leq \e^{u} \eta \sum_{k=m+1}^{+\infty} \e^{-S_k^*}. \] Using Lemma \ref{soir} and the Lebesgue dominated convergence theorem, \[ \tbb E_{i,y,j,z}^+ \left( \abs{r_{\infty}^{(l,m)}(j,u) - r_{\infty}^{(l,\infty)}(j,u)} \right) \leq \e^{u} \eta \sum_{k=m+1}^{+\infty} \tbb E_{i,y,j,z}^+ \left( \e^{-S_k^*} \right). \] By Lemma \ref{soir}, we conclude that \[ \lim_{m\to+\infty} \tbb E_{i,y,j,z}^+ \left( \abs{r_{\infty}^{(l,m)}(j,u) - r_{\infty}^{(l,\infty)}(j,u)} \right) = 0. \] \end{proof} For any $l\geq 1$, $j \in \bb X$ and $u \in \bb R$, set \begin{equation} \label{pollen} s_l(j,u) = \left[ 1- \e^{-S_l} \e^{u} q_{\infty}^*(j) \right]^+. \end{equation} Note that, by Lemma \ref{soir}, $\left( q_{\infty}^*(j) \right)^{-1}$ is integrable and so finite a.s.\ (see \eqref{panda}). Therefore $s_l(j,u) \in [0,1)$.
In addition, by the convexity of $f_{X_{l+1}}$, we have for any $j \in \bb X$, $u \in \bb R$ and $l \geq 1$, \[ f_{X_{l+1}}(s_{l+1}(j,u)) \geq 1-f_{X_{l+1}}'(1) \left( 1 - s_{l+1}(j,u) \right) \geq 1-\e^{\rho(X_{l+1})} \e^{-S_{l+1}} \e^{u} q_{\infty}^*(j) = 1- \e^{-S_l} \e^{u} q_{\infty}^*(j). \] Since $f_{X_{l+1}}$ is non-negative on $[0,1]$, we see that $f_{X_{l+1}}(s_{l+1}(j,u)) \geq s_l(j,u)$ and so for any $k \geq 1$, $\left( f_{k+1,l}(s_l(j,u)) \right)_{l\geq k}$ is non-decreasing and bounded by $1$. Using the continuity of $g_{X_k}$ and \eqref{champ}, we deduce that $\left( \eta_{k,l}(s_l(j,u)) \right)_{l\geq k}$ converges and we denote for any $k \geq 1$, \begin{equation} \label{balcon} \eta_{k,\infty}(j,u) := \lim_{l \to +\infty} \eta_{k,l}(s_l(j,u)). \end{equation} Moreover, by Lemma \ref{pieuvre}, we have for any $k \geq 1$, $l \geq k$ and $u \in \bb R$, \begin{equation} \label{clocher002} 0 \leq \eta_{k,l}(s_l(j,u)) \leq \eta \qquad \text{and} \qquad 0 \leq \eta_{k,\infty}(j,u) \leq \eta. \end{equation} For any $j \in \bb X$ and $u \in \bb R$, set \[ r_{\infty}(j,u) := \left[ \frac{\e^{-u}}{q_{\infty}^*(j)} + \sum_{k=0}^{+\infty} \e^{-S_k} \eta_{k+1,\infty}(j,u) \right]^{-1}. \] \begin{lemma} Assume that the conditions of Theorem \ref{cape} are satisfied. \label{fontaine002} For any $u \in \bb R$, $(i,y) \in \supp \left( \tt V_{\ll} \right)$ and $(j,z) \in \supp \left( \tt V_{\ll}^* \right)$, \[ \lim_{l\to +\infty} \tbb E_{i,y,j,z}^+ \left( \abs{r_{\infty}^{(l,\infty)}(j,u) - r_{\infty}(j,u)} \right) = 0. \] \end{lemma} \begin{proof} Fix $(i,y) \in \supp \left( \tt V_{\ll} \right)$, $(j,z) \in \supp \left( \tt V_{\ll}^* \right)$ and $u \in \bb R$. By \eqref{mousse}, Lemma \ref{foin} and \eqref{pollen}, we have \[ \left( r_{\infty}^{(l,\infty)}(j,u) \right)^{-1} = \frac{\e^{-S_l}}{1-s_l(j,u)} + \sum_{k=0}^{l-1} \e^{-S_k} \eta_{k+1,l}(s_l(j,u)). 
\] So, for any $p \geq 1$ and $l \geq p$, using \eqref{clocher002}, \begin{align*} \abs{\left( r_{\infty}^{(l,\infty)}(j,u) \right)^{-1} - r_{\infty}(j,u)^{-1}} &\leq \sum_{k=0}^{p} \e^{-S_k} \abs{\eta_{k+1,l}(s_l(j,u)) - \eta_{k+1,\infty}(j,u)} \\ &\qquad + \abs{\frac{\e^{-u}}{q_{\infty}^*(j)} - \frac{\e^{-S_l}}{1-s_l(j,u)}} + 2\eta \sum_{k=p+1}^{+\infty} \e^{-S_k}. \end{align*} Therefore, \begin{align*} I_0 &:= \tbb E_{i,y,j,z}^+ \left( \abs{\left( r_{\infty}^{(l,\infty)}(j,u) \right)^{-1} - r_{\infty}(j,u)^{-1}} \right) \\ &\leq \sum_{k=0}^{p} \tbb E_{i,y,j,z}^+ \left( \e^{-S_k} \abs{\eta_{k+1,l}(s_l(j,u)) - \eta_{k+1,\infty}(j,u)} \right) \\ &\qquad+ \tbb E_{i,y,j,z}^+ \left( \abs{\frac{\e^{-u}}{q_{\infty}^*(j)} - \e^{-S_l}} \,;\, \e^{-S_l} > \frac{\e^{-u}}{q_{\infty}^*(j)} \right) + 2\eta \tbb E_{i,y}^+ \left( \sum_{k=p+1}^{+\infty} \e^{-S_k} \right), \end{align*} where $\tbb P_{i,y}^+$ is the marginal law of $\tbb P_{i,y,j,z}^+$ on $\sigma\left( X_n \,,\, n \geq 1 \right)$. Using Lemma \ref{soir} and the Lebesgue dominated convergence theorem, \begin{align*} I_0 &\leq \tbb E_{i,y}^+ \left( \e^{-S_l} \right) + \sum_{k=0}^{p} \tbb E_{i,y,j,z}^+ \left( \e^{-S_k} \abs{\eta_{k+1,l}\left(s_l(j,u)\right) - \eta_{k+1,\infty}(j,u)} \right) + 2\eta \sum_{k=p+1}^{+\infty} \tbb E_{i,y}^+ \left( \e^{-S_k} \right) \\ &\leq \frac{c \left( 1+\max(y,0) \right)\e^{y}}{\tt V_{\ll}(i,y)} \left( \frac{1}{l^{3/2}} + \sum_{k=p+1}^{+\infty} \frac{\eta}{k^{3/2}} \right) \\ &\hspace{3cm} + \sum_{k=0}^{p} \tbb E_{i,y,j,z}^+ \left( \e^{-S_k} \abs{\eta_{k+1,l}(s_l(j,u)) - \eta_{k+1,\infty}(j,u)} \right). \end{align*} Since $\abs{\eta_{k+1,l}(s_l(j,u)) - \eta_{k+1,\infty}(j,u)} \leq 2\eta$, by the Lebesgue dominated convergence theorem and \eqref{balcon}, \[ \limsup_{l\to+\infty} I_0 \leq \frac{c \left( 1+\max(y,0) \right)\e^{y}}{\tt V_{\ll}(i,y)} \sum_{k=p+1}^{+\infty} \frac{\eta}{k^{3/2}}. \] Letting $p \to +\infty$, we obtain that $\lim_{l\to+\infty} I_0 = 0$.
Moreover, by \eqref{mousse} for any $l \geq 1$, $r_{\infty}^{(l,\infty)}(j,u) \in [0,1]$. In the same manner as we proved \eqref{echo}, we have also \[ r_{\infty}(j,u) \leq 1. \] Consequently, \[ \lim_{l\to +\infty} \tbb E_{i,y,j,z}^+ \left( \abs{r_{\infty}^{(l,\infty)}(j,u) - r_{\infty}(j,u)} \right) \leq \lim_{l\to+\infty} I_0 = 0. \] \end{proof} We now consider the function \[ U(i,y,j) := \frac{2}{\sqrt{2\pi} \sigma^3} \frac{v_{\ll}(i)}{v_{\ll}(j)} \e^{\ll (y-\rho(j))} \int_{z_0(j)^+}^{+\infty} \e^{-\ll z} \tbb E_{i,y,j,z}^+ \left( r_{\infty} (j,z-y) \right) \tt V_{\ll} (i,y) \tt V_{\ll}^* (j,z) \dd z \tbs \nu_{\ll}(j). \] Using \eqref{clocher001}, \eqref{clocher002} and Lemma \ref{soir}, for any $(i,y) \in \supp \left( \tt V_{\ll} \right)$, $(j,z) \in \supp \left( \tt V_{\ll}^* \right)$ and $u \in \bb R$, \[ \tbb E_{i,y,j,z}^+ \left( r_{\infty} (j,u)^{-1} \right) \leq \e^{-u} \left( \frac{1}{1-f_j(0)} + \eta \tbb E_{i,y,j,z}^+ \left( \sum_{k=1}^{+\infty} \e^{-S_k^*} \right) \right) + \eta \tbb E_{i,y,j,z}^+ \left( \sum_{k=1}^{+\infty} \e^{-S_k} \right) < +\infty. \] So $r_{\infty} (j,u) > 0$ $\tbb P_{i,y,j,z}^+$-a.s.\ and therefore, for any $(i,y) \in \supp \left( \tt V_{\ll} \right)$, $j \in \bb X,$ \begin{equation} \label{canopee} U(i,y,j) > 0. \end{equation} \begin{lemma} Assume that the conditions of Theorem \ref{cape} are satisfied. \label{chaumiere} For any $(i,y) \in \supp \left( \tt V_{\ll} \right)$ and $j \in \bb X$, we have \[ \bb E_i \left( q_{n+1} \,;\, X_{n+1} = i \,,\, \tau_y > n \right) \underset{n\to+\infty}{\sim} \frac{U(i,y,j) k(\ll)^{n+1}}{(n+1)^{3/2}}. \] \end{lemma} \begin{proof} Fix $(i,y) \in \supp \left( \tt V_{\ll} \right)$ and $j \in \bb X$. 
By \eqref{falaise}, for any $n \geq 1$, \begin{align*} I_0 &:= \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \bb E_i \left( q_{n+1} \,;\, X_{n+1} = i \,,\, \tau_y > n \right) \\ &= \frac{v_{\ll}(i)\e^{-\ll \rho(j)}}{v_{\ll}(j)} (n+1)^{3/2} \tbb E_i \left( \e^{-\ll S_n} q_{n+1} \,;\, X_{n+1} = j \,,\, \tau_y > n \right). \end{align*} Using Lemmas \ref{carrousel} and \ref{corbeau}, \begin{align*} \lim_{n\to +\infty} I_0 &= \lim_{(l,m) \to +\infty} \lim_{n\to +\infty} \frac{v_{\ll}(i)\e^{-\ll \rho(j)}}{v_{\ll}(j)} (n+1)^{3/2} \tbb E_i \left( r_n^{(l,m)}(j) \e^{-\ll S_n} \,;\, X_{n+1} = j \,,\, \tau_y > n \right) \\ &= \lim_{(l,m) \to +\infty} \frac{2 v_{\ll}(i)}{\sqrt{2\pi} \sigma^3v_{\ll}(j)} \e^{\ll (y - \rho(j))} \int_{z_0(j)^+}^{+\infty} \e^{-\ll z} \tbb E_{i,y,j,z}^+ \left( r_{\infty}^{(l,m)}(j,z-y) \right) \\ &\hspace{9cm} \times \tt V_{\ll}(i,y) \tt V_{\ll}^*(j,z) \dd z \tbs \nu_{\ll}(j). \end{align*} Since for any $l \geq 1$, $m \geq 1$ and $u \in \bb R$, $r_{\infty}^{(l,m)}(j,u) \leq 1$, by the Lebesgue dominated convergence theorem and Lemmas \ref{fontaine001} and \ref{fontaine002}, \begin{align*} \lim_{n\to +\infty} I_0 &= \frac{2v_{\ll}(i)}{\sqrt{2\pi} \sigma^3v_{\ll}(j)} \e^{\ll (y-\rho(j))} \int_{z_0(j)^+}^{+\infty} \e^{-\ll z} \lim_{l \to +\infty} \tbb E_{i,y,j,z}^+ \left( r_{\infty}^{(l,\infty)}(j,z-y) \right) \\ &\hspace{8cm} \times \tt V_{\ll}(i,y) \tt V_{\ll}^*(j,z) \dd z \tbs \nu_{\ll}(j) \\ &= \frac{2v_{\ll}(i)}{\sqrt{2\pi} \sigma^3v_{\ll}(j)} \e^{\ll (y-\rho(j))} \int_{z_0(j)^+}^{+\infty} \e^{-\ll z} \tbb E_{i,y,j,z}^+ \left( r_{\infty}(j,z-y) \right) \tt V_{\ll}(i,y) \tt V_{\ll}^*(j,z) \dd z \tbs \nu_{\ll}(j) \\ &= U(i,y,j). \end{align*} \end{proof} \textbf{Proof of Theorem \ref{cape}.} We use arguments similar to those of the proof of Lemma \ref{dieu}. Fix $(i,j) \in \bb X^2$. 
For any $y \in \bb R$ and $n \geq 1$, let \[ I_0 := \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \bb E_i \left( q_{n+1} \,;\, X_{n+1} = j \right) \] and \begin{align} \label{celeste} I_1 &:= I_0 - \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \bb E_i \left( q_{n+1} \,;\, X_{n+1} = j \,,\, \tau_y > n \right) \\ &= \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \bb E_i \left( q_n\left( f_j(0) \right) \,;\, X_{n+1} = j \,,\, \tau_y \leq n \right). \nonumber \end{align} By Lemma \ref{foin}, we have $q_n\left( f_j(0) \right) \leq \e^{S_n}$. Using the fact that $\left( q_k\left( f_j(0) \right) \right)_{k\geq 1}$ is non-increasing, it holds $q_n\left( f_j(0) \right) \leq \e^{\min_{1\leq k \leq n} S_k}$. Therefore, as in \eqref{butte}, \begin{align*} I_1 &\leq \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \bb E_i \left( \e^{\min_{1\leq k \leq n} S_k} \,;\, X_{n+1} = j \,,\, \tau_y \leq n \right) \\ &\leq \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \e^{-y} \sum_{p=0}^{+\infty} \e^{-p} \bb P_i \left( X_{n+1} = j \,,\, \tau_{y+p+1} > n \right). \end{align*} By \eqref{samovar}, \begin{align*} I_1 &\leq \frac{(n+1)^{3/2}}{n^{3/2}} \frac{v_{\ll}(i)}{v_{\ll}(j)} \e^{-y-\ll \rho(j)} \sum_{p=0}^{+\infty} \e^{-p} \tbb E_i \left( \e^{-\ll S_n} \,;\, \tau_{y+p+1} > n \right) \\ &\leq c\frac{v_{\ll}(i)}{v_{\ll}(j)} \e^{-y-\ll \rho(j)} \sum_{p=0}^{+\infty} \e^{-p} \sum_{l=0}^{+\infty} \e^{\ll (y+p+1)} \e^{-\ll l} \\ &\hspace{5cm} \times n^{3/2} \tbb P_i \left( y+p+1+S_n \in [l,l+1] \,;\, \tau_{y+p+1} > n \right). \end{align*} Using the point \ref{gorilleBP} of Proposition \ref{goliane}, \begin{align*} I_1 &\leq c\frac{v_{\ll}(i)\e^{-\ll \rho(j)}}{v_{\ll}(j)} \e^{-(1-\ll)y} \sum_{p=0}^{+\infty} \e^{-(1-\ll)p} \sum_{l=0}^{+\infty} \e^{-\ll l} (1+\max(y+p+1,0))(1+l) \\ &\leq c\frac{v_{\ll}(i)\e^{-\ll \rho(j)}}{v_{\ll}(j)} \e^{-(1-\ll)y} (1+\max(y,0)). \end{align*} Moreover, there exists $y_0(i) \in \bb R$ such that, for any $y \geq y_0(i)$ it holds $(i,y) \in \supp \left( \tt V_{\ll} \right)$. 
Using \eqref{celeste} and Lemma \ref{chaumiere}, we obtain that, for any $y \geq y_0(i)$, \begin{align} U(i,y,j) \leq \liminf_{n\to +\infty} I_0 \leq \limsup_{n\to +\infty} I_0 &\leq U(i,y,j) \nonumber\\ &\qquad + c\frac{v_{\ll}(i)\e^{-\ll \rho(j)}}{v_{\ll}(j)} \e^{-(1-\ll)y} (1+\max(y,0)). \label{accordeon} \end{align} This proves that $\limsup_{n\to +\infty} I_0$ is a finite real which does not depend on $y$ and so $y \mapsto U(i,y,j)$ is a bounded function. Moreover, by Lemma \ref{chaumiere}, \[ U(i,y,j) = \lim_{n\to\infty} \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \bb E_i \left( q_{n+1} \,;\, X_{n+1} = j \,,\, \tau_y > n \right) \] and so $y \mapsto U(i,y,j)$ is non-decreasing. Let $u$ be its limit: \[ u(i,j) := \lim_{y\to+\infty} U(i,y,j) \in \bb R. \] By \eqref{canopee}, for any $y \geq y_0(i)$, \[ u(i,j) \geq U(i,y,j) > 0. \] Taking the limit as $y \to +\infty$ in \eqref{accordeon}, \[ \lim_{n \to +\infty} I_0 = u(i,j). \] Finally, by \eqref{ange}, \[ \lim_{n \to +\infty} \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \bb P_i \left( Z_{n+1} > 0 \,,\, X_{n+1} = j \right) = \lim_{n \to +\infty} \frac{(n+1)^{3/2}}{k(\ll)^{n+1}} \bb E_i \left( q_{n+1} \,;\, X_{n+1} = j \right) = u(i,j). \] \bibliographystyle{plain} \bibliography{biblioT} \vskip5mm \end{document}
TITLE: Random variable related by conditional expectations QUESTION [3 upvotes]: Let X and Y be random variables such that $E(X|Y)=\frac Y 2$ and $E(Y|X)=\frac X 2$. Does it follow that X and Y are 0? If not, is there a simple example of such random variables? Motivation: if $E(X|Y)=Y$ and $E(Y|X)=X$ then $X=Y$ necessarily. This is easy to prove: if $X>0$ and $Y>0$ we can write $E(\frac X Y +\frac Y X)=E(\frac X Y) +E(\frac Y X)=1+1=2$ and $x+\frac 1 x \geq 2$ with equality if and only if $x=1$. For the general case we can use $X^{+}+1$ and $Y^{+}+1$ in place of X and Y to get $X^{+}=Y^{+}$ and a similar argument for $X^{-}$ and $Y^{-}$. REPLY [0 votes]: Here is a generalized result related to the motivating example: Suppose random variables $X$ and $Y$ satisfy $E[|X|]<\infty$, $E[|Y|]<\infty$, and $$ E[X|Y]\leq Y \quad\text{and}\quad E[Y|X]\leq X. $$ Then $X=Y$ with probability 1. Proof: Fix $M>0$ as a (large) integer. Define truncated random variables: \begin{align} A_M = \left\{ \begin{array}{ll} X &\mbox{ if $X \geq -M$} \\ -M & \mbox{ otherwise} \end{array} \right.\\ B_M = \left\{ \begin{array}{ll} Y &\mbox{ if $Y \geq -M$} \\ -M & \mbox{ otherwise} \end{array} \right. \end{align} Then $$ \lim_{M\rightarrow\infty} P[X\neq A_M] = \lim_{M\rightarrow\infty} P[Y\neq B_M] = 0$$ Because of this, it can be shown that for any random variable $Z$ that satisfies $E[|Z|]<\infty$ we have $$ \lim_{M\rightarrow\infty} E[Z1_{\{X \neq A_M\}}] = \lim_{M\rightarrow\infty} E[Z1_{\{Y\neq B_M\}}] = 0 \quad (**) $$ Define $c = M+1$. 
So $A_M+c\geq 1$ and $B_M+c \geq 1$ and we can apply an argument similar to that suggested by Kavi, \begin{align} E[(A_M+c)/(B_M+c)] &= E[E[(A_M+c)/(B_M+c)|Y]]\\ &= E[1/(B_M+c) E[(A_M+c)|Y]]\\ &= E[1/(B_M+c) E[(X + c) + (A_M-X)|Y]]\\ &\leq E[1/(B_M+c)(Y+c + E[A_M-X|Y])] \\ &= E[1/(B_M+c)(B_M+c + (Y-B_M) + E[A_M-X|Y])] \\ &=E\left[1 + \frac{Y-B_M}{B_M+c} + \frac{A_M-X}{B_M+c} \right]\\ &\leq 1 + E[|Y-B_M|] + E[|A_M-X|] \end{align} By symmetry we also get $$ E[(B_M+c)/(A_M+c)] \leq 1 + E[|Y-B_M|] + E[|A_M-X|] $$ Define $f(M) = E[|Y- B_M|] + E[|A_M-X|]$. By fact (**), it can be shown that $f(M)\rightarrow 0$. Thus $$ E[(B_M+c)/(A_M+c)] + E[(A_M+c)/(B_M+c)] \leq 2 + 2f(M) \rightarrow 2$$ On the other hand, for all $M$ and all realizations of the random variables we have $$ (B_M+c)/(A_M+c) + (A_M+c)/(B_M+c)\geq 2 $$ with "near equality" only when $A_M+c \approx B_M+c$. Thus, for any $\epsilon>0$ we get: $$ \lim_{M\rightarrow\infty} P[|B_M-A_M|>\epsilon] =0 $$ However, $$ P[|X-Y|>\epsilon] \leq P[X \neq A_M] + P[Y\neq B_M] + P[|A_M-B_M|>\epsilon] $$ Taking a limit as $M\rightarrow\infty$ gives $$ P[|X-Y|>\epsilon] = 0$$ This holds for all $\epsilon>0$ and so $P[X=Y]=1$. $\Box$
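For experimenting with questions like this on a small finite probability space, conditional expectations reduce to weighted averages over the joint pmf. A quick sketch (the helper `cond_exp` and the example distributions are mine, not from the question or answer):

```python
from collections import defaultdict

def cond_exp(joint, of, given):
    """E[coordinate `of` | coordinate `given`] for a finite joint pmf.

    joint: dict mapping (x, y) pairs to probabilities.
    Returns a dict: value of the conditioning coordinate -> conditional mean.
    """
    num, den = defaultdict(float), defaultdict(float)
    for pair, p in joint.items():
        key = pair[given]
        num[key] += p * pair[of]
        den[key] += p
    return {k: num[k] / den[k] for k in den}

# Degenerate case X = Y: both conditions E(X|Y)=Y and E(Y|X)=X hold.
joint = {(1, 1): 0.5, (2, 2): 0.5}
print(cond_exp(joint, of=0, given=1))  # {1: 1.0, 2: 2.0}, i.e. E(X|Y) = Y
print(cond_exp(joint, of=1, given=0))  # {1: 1.0, 2: 2.0}, i.e. E(Y|X) = X

# An asymmetric example: here E(Y|X) = X/2, but E(X|Y) = 2Y rather than Y/2,
# illustrating the tension the original question asks about.
joint2 = {(0, 0): 0.5, (2, 1): 0.5}
print(cond_exp(joint2, of=1, given=0))  # {0: 0.0, 2: 1.0}
print(cond_exp(joint2, of=0, given=1))  # {0: 0.0, 1: 2.0}
```

This is only a finite-space playground for building intuition; it does not replace the truncation argument above, which handles general integrable random variables.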
I received this one through NetGalley, thanks to St. Martin’s Press / Thomas Dunne Books. All thoughts and opinions are my own. My Summary Antonina, or Nina as she likes to be called, is a telekinetic: she has the ability to move objects with a single thought. She is entering her first Grand Season, a time when she must attend balls and social gatherings in hopes of finding a husband. She has a fiery personality with little care for what the world around her thinks. She is under the watchful eye of her cousin’s wife, Valérie, who demands everything be done a particular way, demands Antonina rarely obliges. Valérie is also a jealous woman, which can never be a good thing. Antonina meets Hector, a dazzling performer who also has telekinetic abilities. She requests he help her master her own powers, and he agrees, though his reason is deceptive. His past love was Valérie, and he is using Antonina to get closer to her. Antonina is quickly falling for Hector, and it’s only a matter of time before her heart is broken, but what happens when Hector finds himself falling for Antonina? Overall Thoughts I was absolutely astounded by the writing, though I had been expecting something different from the blurb. I was assuming I would be reading a paranormal romance, but instead I was greeted with something that felt like a historical romance set in a different yet familiar world. This story is also very character-driven, which was a pleasant change from my usual reads. I found myself quickly wanting to know more about the characters. I adored Nina’s naive and spunky personality, Hector’s blind yet stoic love, and Valérie’s calculating jealousy. Each character contributed well to the story. At one point, the story even went from a love triangle to a love square, which was quite interesting to read. Some of the strongest emotions I felt were for Hector. He seemed to have one of the largest character arcs in the story. 
There were parts of the story when I couldn’t decide whether or not I hated him, but in a way that kept me wanting to read more. The characters in this story definitely had depth that made them feel like real people. The telekinetic bit was important, but not as important as the overarching story. True, it brings Antonina and Hector together, but I felt they could have come together without it. It added an interesting element, but it didn’t take center stage for the story. Final Thoughts: I thought this was a wonderful book. I was dazzled by the characters and the world. As I was reading, I continually had questions, which were all answered. I would recommend this story to those who enjoy historical romances that are character-driven and want a bit of a fantasy aspect. Rating: 4.5/5 on Goodreads Further Information: ISBN: 9781250099068 RELEASE DATE: OCT 24, 2017 PAGE COUNT: 327 WHERE TO BUY: Amazon – Barnes & Noble – iBooks
\begin{document} \title{Effective Martingales with Restricted Wagers} \author{Ron Peretz} \affil{London School of Economics} \date{} \maketitle \begin{abstract} The classic model of computable randomness considers martingales that take real values. Recent work by \cite{teutsch-etal2012} and \cite{teutsch2013} shows that fundamental features of the classic model change when the martingales take integer values. We compare the prediction power of martingales whose increments belong to three different subsets of the real numbers: (a) all real numbers, (b) real numbers excluding a punctured neighborhood of 0, and (c) integers. We also consider three different success criteria: (i) accumulating an infinite amount of money, (ii) consuming an infinite amount of money, and (iii) making the accumulated capital oscillate. The nine combinations of (a)-(c) and (i)-(iii) define nine notions of computable randomness. We provide a complete characterization of the relations between these notions, and show that they form five linearly ordered classes. Our results solve outstanding questions raised in \cite{teutsch-etal2012}, \cite{teutsch2013}, and \cite{chalcraft12}, and strengthen existing results. \end{abstract} \section{Introduction} \subsection{Restricted wagers and effective prediction} A binary sequence that follows a certain pattern can serve as a test for the sophistication of gamblers. Only martingales that are sufficiently ``smart'' should be able to recognize the pattern and exploit it. Conversely, a martingale (or a class of martingales) can serve as a test for predictability. A predictable sequence is one that can be exploited by that martingale (or class of martingales). When we consider the class of all martingales that belong to a certain computational complexity class, unpredictable sequences are effectively random (where the ``effectiveness'' is relative to the complexity class). 
Following our intuition of randomness, when a martingale (or a countable class of martingales) bets against the bits of a random binary sequence, its accumulated capital should (almost surely) converge to a finite value. So, a ``predictable'' sequence should be defined as one on which that martingale (or a martingale in that class) does not converge. Not converging divides into two cases: going to infinity, and oscillating. The former is used as the \emph{success criterion} in the classic definition of ``computable randomness''; we call it $\infty$-\textsc{gains}. The latter we call \textsc{oscillation}. A third, economically appealing success criterion requires that the martingale specify a certain amount to be consumed at each turn and the accumulated consumption go to infinity. A martingale together with a consumption function describe a \emph{supermartingale}. We call the success of a supermartingale $\infty$-\textsc{consumption}. Things are not very interesting unless restricted wagers are introduced. It turns out that the above three success criteria are equivalent when all real-valued martingales are allowed. Things become more involved when the increments (wagers) of the martingales are restricted to subsets of the reals. We consider three sets of wagers: $\fR$, $V=\{x\in\fR:\ |x|\geq 1\text{ or }x=0\}$, and $\fZ$. The wager sets together with the success criteria form nine \emph{predictability classes}. A complete characterization of the relations between these predictability classes is given (see Figure~\ref{fig summary}). These relations are robust to a wide interpretation of the term ``effective'' -- specifically, any countable class of functions closed under Turing reductions.\footnote{Our message is that the computational model does not matter very much. Finding minimal computational requirements is beyond the scope of the present paper. In fact, Turing reduction is not necessary. 
Polynomial time reduction is sufficient.} \begin{figure} \begin{center} \begin{tabular}{c |c c c c c} Wagers& \multicolumn{5}{c}{Success Criterion}\\ & $\infty$-\textsc{gains} & & $\infty$-\textsc{consumption} & & \textsc{oscillation} \\ \hline \\ $\fR$ & $\bullet$ & $\xleftrightarrow{}$ & $\bullet$ & $\xleftrightarrow{}$ & $\bullet$\\ & $\uparrow$&&&&\\ $V$ & $\bullet$ & $\xleftrightarrow{}$ & $\bullet$ & & $\bullet$\\ & $\uparrow$&&&&$\updownarrow$\\ $\fZ$ & $\bullet$ & $\xleftarrow{}$ & $\bullet$ & $\xleftarrow{}$ & $\bullet$\\ \\ \hline \end{tabular} \end{center} \caption{\label{fig summary}Relations between classes. Arrows indicate implication.} \end{figure} \subsection{Relations to existing literature} Computable randomness, introduced by \cite{schnorr71}, and its relative variant classify binary sequences in terms of the oracles needed to predict a binary sequence effectively. For background, see \cite{downey-online}, \cite{downey-book}, or \cite{nies-book}. The present paper is motivated by refinements of the notion of computable randomness (and its relative variant) recently introduced by \cite{teutsch-etal2012}, \cite{chalcraft12}, and \cite{teutsch2013}. The main notion of computable randomness is $\infty$-\textsc{gains}. Other, well-studied success criteria include those of \cite{schnorr71} and \cite{kurtz81}. The less familiar success criteria, $\infty$-\textsc{consumption} and \textsc{oscillation}, turned out to be equivalent to the main notion of computable randomness (as mentioned above) and became folklore. When \cite{teutsch-etal2012} introduced integer-valued martingales some of the folklore criteria gained renewed interest. \cite{teutsch-etal2012} showed that real- and integer-valued martingales are different with respect to $\infty$-\textsc{gains}. Section~\ref{sec casino} provides an alternative elementary proof. Theorem~\ref{thm R gain not to V gain} shows that the separation can be done with a very simple history-independent martingale. 
\cite{teutsch-etal2012} asked whether martingales whose increments take values in $V$ (defined above) were different from integer martingales with respect to $\infty$-\textsc{gains}. Theorem~\ref{thm V gain not to Z gain} answers their question in the affirmative. \cite{teutsch2013} introduced the success criterion we call $\infty$-\textsc{consumption} as a \emph{qualitative} distinction between real and integer martingales. He showed\footnote{Modulo a minor mistake that is corrected here.} that $\infty$-\textsc{gains} and $\infty$-\textsc{consumption} are equivalent for real but not for integer martingales. He asked what was the relation between $\infty$-\textsc{gains} and $\infty$-\textsc{consumption} for $V$ martingales. Proposition~\ref{prop V gain to save} shows that the two are equivalent for $V$ martingales. However, \textsc{oscillation} can serve as a qualitative distinction between real and $V$ martingales. Proposition~\ref{prop gain to oscillate} and Theorem~\ref{thm V save not to oscillate} show that $\infty$-\textsc{consumption} and \textsc{oscillation} are equivalent for real but not for $V$ martingales. Integer and $V$ martingales are (Baire) categorically different from real martingales (\citealt{teutsch-etal2012}). The reason is that the former attain a minimum while the latter don't, as illustrated in Section~\ref{sec casino}. However, Baire's category is too coarse to distinguish between two sets that exclude a punctured neighborhood of zero, such as $V$ and $\fZ$. \cite{chalcraft12} developed a finer argument when they characterized the relations between finite wager sets with respect to $\infty$-\textsc{gains}. They asked whether their characterization extends to infinite sets and in particular asked about the relation between $V$ and the integers. 
\section{The casino setting}\label{sec casino} Before providing the formal definitions, we first consider an illustrating example that demonstrates how to distinguish between classes of predictability (specifically, integer and real martingales with respect to $\infty$-\textsc{gains}). A sequence of players enter a casino. Player 1 declares her betting strategy, a function from finite histories of Heads and Tails to real-valued bets. Then the rest of the players, 2, 3, ... (countably many of them), declare their strategies, which are restricted to integer-valued bets. The casino wants Player 1 to win and all the others to lose. That is, the casino should choose a sequence of Heads and Tails so that the limit of Player 1's capital is infinite and everyone else's is finite. Is it possible? Consider the following strategy for Player 1. She enters the casino with an irrational amount of $x_0$ dollars. After $t$ periods she has $x_t$ dollars in her pocket and she bets $\frac 1 2 \set{x_t}$ on Heads (where $\set{x}:=x-\floor{x}$). Now, Players 2, 3, ... declare their betting strategies. The casino places a finite sequence of Heads and Tails $\sigma$ on which the capital of Player 2 is minimal. Recall that Players 2, 3, ... may bet only integer numbers; hence that minimum exists. At this point Player 2 is bankrupt. If he places a (non-zero) bet, it will contradict the fact that $\sigma$ is a minimizer of his capital. Note that Player 1 bets on only the fractional part of her capital, and so $\floor{x_t}$ never decreases. In the next stage, the casino extends $\sigma$ by appending to it sufficiently many Heads, so that Player 1's capital increases by at least 1. The casino repeats the same trick against every player in turn in order to bankrupt him while ensuring that Player 1 does not lose more than the fractional part of her capital, and then it continues to place Heads until she accumulates a dollar. QED. 
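Player 1's strategy in the example above is simple enough to simulate. The following sketch is illustrative only (floating point stands in for the exact irrational arithmetic of the argument, and the names are mine, not the paper's):

```python
import math
import random

def frac(x):
    return x - math.floor(x)

def play(x, outcome):
    # Player 1 bets half the fractional part of her capital on Heads (+1).
    return x + outcome * frac(x) / 2

random.seed(0)
x = math.sqrt(2)                      # irrational initial capital
floors = [math.floor(x)]
for _ in range(1000):                 # arbitrary adversarial outcomes
    x = play(x, random.choice([-1, +1]))
    floors.append(math.floor(x))

# The integer part of her capital never decreases, whatever the casino does.
assert all(a <= b for a, b in zip(floors, floors[1:]))

# Appending enough Heads raises her capital by at least one dollar: the
# fractional part grows by a factor of 3/2 on each Head until it crosses 1.
start = x
heads = 0
while x < start + 1:
    x = play(x, +1)
    heads += 1
print(f"gained a dollar after {heads} consecutive Heads")
```

The two assertions mirror the two properties the proof relies on: $\floor{x_t}$ is monotone because Player 1 risks only half of her fractional part, and a long enough run of Heads forces a unit gain.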
\section{Definitions and Results} \subsection{Definitions} The set of all finite bit strings is denoted by $\set{-1,+1}^{<\infty}=\bigcup_{n=0}^\infty\set{-1,+1}^n$. The length of a string $\sigma\in\set{-1,+1}^{<\infty}$ is denoted by $\norm{\sigma}$. The empty string is denoted $\varepsilon$. For an infinite bit sequence $x\in\set{-1,+1}^\fN$ and a non-negative integer $n$, the prefix of $x$ of length $n$ is denoted by $x\restriction n$. A \emph{supermartingale} is a function $M:\set{-1,+1}^{<\infty}\to \fR$, satisfying \[ M(\sigma)\geq \frac{M(\sigma,-1)+M(\sigma,+1)}2, \] for every $\sigma\in\set{-1,+1}^{<\infty}$. We call the difference $M(\sigma)- \frac{M(\sigma,-1)+M(\sigma,+1)}2$ $M$'s \emph{marginal consumption} at $\sigma$. If $M$'s marginal consumption is 0 at every $\sigma\in\set{-1,+1}^{<\infty}$, we say that $M$ is a \emph{(proper) martingale}. The \emph{increment} of $M$ at $\sigma$ is defined as \[ M'(\sigma) = \frac{M(\sigma,+1)-M(\sigma,-1)}2. \] Note that $M'(\sigma)$ is positive if $M$ wagers on ``$+1$'' and negative if $M$ wagers on ``$-1$'' at $\sigma$. The \emph{initial capital} of $M$ is defined as $M(\varepsilon)$. A proper martingale is determined by its initial capital and its wagers at every $\sigma\in\set{-1,+1}^{<\infty}$. For a supermartingale $M$, the \emph{proper cover} of $M$ is the martingale, $\tilde M$, whose initial capital and wagers are the same as $M$'s. The \emph{accumulated consumption} of $M$ is defined as $\tilde M -M$. A (super)martingale is called \emph{history-independent} if $M'(\sigma)=M'(\tau)$, whenever $\norm{\sigma}=\norm{\tau}$. For a supermartingale $M$ and a string $\sigma$, we say that $M$ \emph{goes bankrupt at} $\sigma$, if \[ M(\sigma)-\norm{M'(\sigma)}< M\text{'s marginal consumption at }\sigma. \] For an infinite sequence $x\in\set{-1,+1}^\fN$, we say that $M$ \emph{goes bankrupt on} $x$, if $M$ goes bankrupt at $x\restriction n$, for some non-negative integer $n$. 
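These definitions translate directly into code. A small sketch (the representation and all names are mine, not from the paper): a proper martingale is determined by its initial capital and a wager function on histories, and its value along a string is obtained by settling each bit against the wager.

```python
def capital(initial, wager, sigma):
    """M(sigma) for the proper martingale with the given initial capital and
    wager function: observing bit b at history h changes the capital by
    b * wager(h), so wager(h) plays the role of the increment M'(h)."""
    m, hist = initial, ()
    for bit in sigma:                 # bits are -1 or +1
        m += bit * wager(hist)
        hist = hist + (bit,)
    return m

def is_supermartingale_at(M, sigma):
    # The defining inequality M(sigma) >= (M(sigma,-1) + M(sigma,+1)) / 2.
    return M(sigma) >= (M(sigma + (-1,)) + M(sigma + (+1,))) / 2

# A history-independent martingale betting 1/n at time n (cf. the strategy
# used later to separate real from V gains).
wager = lambda hist: 1 / (len(hist) + 1)
M = lambda sigma: capital(1.0, wager, sigma)

print(M((+1, +1, +1)))                      # 1 + 1 + 1/2 + 1/3
print(is_supermartingale_at(M, (+1, -1)))   # proper martingale: equality holds
```

Since the wager depends only on `len(hist)`, this martingale is history-independent in the sense just defined, and its marginal consumption is identically 0.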
Let $M$ be a supermartingale and $x\in\set{-1,+1}^\fN$. If $M$ does not go bankrupt on $x$, we say that $M$ achieves \begin{itemize} \item \emph{$\infty$-\textsc{gains}} on $x$, if $\lim_{n\to\infty} M(x\restriction n)=\infty$; \item \emph{$\infty$-\textsc{consumption}} on $x$, if $\lim_{n\to\infty}(\tilde M-M)(x\restriction n)=\infty$; \item \emph{\textsc{oscillation}} on $x$, if $\liminf_{n\to\infty} \tilde M(x\restriction n) \neq \limsup_{n\to\infty} \tilde M(x\restriction n)$. \end{itemize} We refer to $\infty$-\textsc{gains}, $\infty$-\textsc{consumption} and \textsc{oscillation} as \emph{success criteria}. \begin{rem*} Note that {$\infty$-\textsc{consumption}} is the only success criterion that relies on supermartingales rather than proper martingales. The reason for defining the other criteria on supermartingales is entirely semantic. We want to distinguish between strategies (martingales/supermartingales) and payoffs (success criteria). In the sequel, when $\infty$-\textsc{gains} or \textsc{oscillation} are considered, it is often assumed (when no loss of generality occurs) that the supermartingales in question are in fact proper martingales. Also, the standard definition of (super)martingales asserts non-negative values. We include the requirement that there be non-negative values in the success criteria by imposing that the martingales do not go bankrupt. It is sometimes convenient to assume (when no loss of generality occurs) that the martingales in question are non-negative. The reason for taking this non-standard approach is to allow for the definition of history-independent (super)martingales. \end{rem*} For $A\subset \fR$, an \emph{$A$}-(super)martingale is a (super)martingale whose increments take values in $A$. We will be mainly interested in restricting the increments to the set of integers $\fZ$ and the set $V:=\set{a\in\fR:\ \norm{a}\geq 1\text{ or }a=0}$. 
We define a \emph{predictability class} (\emph{class}, for short) as a pair $\mathcal C=(A,C)$, where $A\subset \fR$ and $C\in\set{\text{$\infty$-\textsc{gains}, $\infty$-\textsc{consumption}, \textsc{oscillation}}}$. We use ``$\mathcal C$-(super)martingale'' for ``$A$-(super)martingale,'' and ``achieving $\mathcal C$'' for ``achieving $C$.'' \begin{definition} We say that a class $(A_1,C_1)$ \emph{implies} another class $(A_2,C_2)$, and write $(A_1,C_1)\longrightarrow(A_2,C_2)$ if for every $x\in\set{-1,+1}^\fN$ and every $A_1$-supermartingale $M_1$ that achieves $C_1$ on $x$, there exists an $A_2$-supermartingale $M_2$ such that \begin{enumerate}[(a)] \item $M_2$ achieves $C_2$ on $x$; and \item $M_2$ is computable relative to $M_1$. \end{enumerate} \end{definition} Note that implication is a transitive relation. It turns out that the classes we study exhibit the property that if they do not imply each other, they satisfy a stronger relation than just the negation of implication. \begin{definition} We say that a class $(A_1,C_1)$ \emph{anti-implies} another class $(A_2,C_2)$, and write $(A_1,C_1)\centernot\longrightarrow(A_2,C_2)$, if there exists a computable history-independent $A_1$-supermartingale, $M_1$, such that for any countable set of $A_2$-supermartingales, $\mathcal B$, there exists a sequence $x\in\set{-1,+1}^\fN$ on which \begin{enumerate}[(a)] \item $M_1$ achieves $C_1$; and \item none of the elements of $\mathcal B$ achieves $C_2$. \end{enumerate} \end{definition} Anti-implication behaves similarly to the negation of implication in the following sense. \begin{lem}\label{lem anti} Let $\mathcal C_1$, $\mathcal C_2$, and $\mathcal C_3$ be classes. If $\mathcal C_2\longrightarrow\mathcal C_3$ and $\mathcal C_1{\centernot\longrightarrow}\mathcal C_3$, then $\mathcal C_1{\centernot\longrightarrow}\mathcal C_2$. \end{lem} \begin{proof} Take a supermartingale $M_1$ that separates $\mathcal C_1$ from $\mathcal C_3$. 
Let $\mathcal B$ be a countable set of $\mathcal C_2$-supermartingales. Let $\mathcal B'$ be the set of all $\mathcal C_3$-supermartingales computable from an enumeration of $\mathcal B$. There exists a sequence $x\in\set{-1,+1}^\fN$ on which $M_1$ achieves $C_1$, but no element of $\mathcal B'$ achieves $\mathcal C_3$. Since $\mathcal C_2\longrightarrow\mathcal C_3$, no element of $\mathcal B$ achieves $\mathcal C_2$ on $x$. \end{proof} \subsection{Implication results} The subsequent propositions explain the arrows in Figure~\ref{fig summary} and, by transitivity, their transitive closure. All upwards arrows hold since $\fZ\subset V\subset\fR$. The leftwards arrows follow from the following proposition. \begin{prop}\label{prop oscillate to save} For every $0\in A\subset\fR$, $(A,\text{\textsc{oscillation}})\longrightarrow(A,\infty\text{-\textsc{consumption}})$. \end{prop} The next proposition explains why $V$ and $\fZ$ \textsc{oscillation} are the same. \begin{prop}\label{prop V to Z oscillate} $(V,\text{\textsc{oscillation}})\longrightarrow (\set{0,-1,+1},\text{\textsc{oscillation}})$ \end{prop} The rightwards arrows in the top row are explained by the next proposition. \begin{prop}\label{prop gain to oscillate} $(\fR,\infty\text{-\textsc{gains}})\longrightarrow (\fR,\text{\textsc{oscillation}})$ \end{prop} The remaining rightwards arrow in the middle row is explained by the next proposition. \begin{prop}\label{prop V gain to save} $(V,\infty\text{-\textsc{gains}})\longrightarrow (V,\infty\text{-\textsc{consumption}})$ \end{prop} \subsection{Anti-implication results} More interesting than the implication results are the anti-implication results. By virtue of Lemma~\ref{lem anti}, we need only separate adjacent strongly connected components of the diagram in Figure~\ref{fig summary}, and consider one representative from each strongly connected component. The next theorem separates integer $\infty$-\textsc{gains} from $\infty$-\textsc{consumption}. 
\begin{thm}[\citealt{teutsch2013}]\label{thm Z gain not to save} $(\set{1},\infty\text{-\textsc{gains}})\centernot\longrightarrow (\fZ,\infty\text{-\textsc{consumption}})$ \end{thm} Next is a separation between integer $\infty$-\textsc{consumption} and \textsc{oscillation}. \begin{thm}\label{thm V save not to oscillate} $(\set{1},\infty\text{-\textsc{consumption}})\centernot\longrightarrow (V,\text{\textsc{oscillation}})$ \end{thm} A very simple history-independent strategy, betting $\frac 1 n$ at time $n$, separates $\fR$ from $V$ $\infty$-\textsc{gains}. \begin{thm}\label{thm R gain not to V gain} $(\set{\frac 1 n}_{n=1}^\infty,\infty\text{-\textsc{gains}}){\centernot\longrightarrow} (V,\infty\text{-\textsc{gains}})$ \end{thm} Similarly, betting $1+\frac 1 n$ at time $n$ separates $V$ from $\fZ$ $\infty$-\textsc{gains}. \begin{thm}\label{thm V gain not to Z gain} $(\set{1+\frac 1 n}_{n=1}^\infty,\infty\text{-\textsc{gains}})\centernot\longrightarrow (\fZ,\infty\text{-\textsc{gains}})$ \end{thm} \section{Proofs} \begin{proof}[Proof of Proposition \ref{prop oscillate to save}] Let $0\in A\subset\fR$, $x\in\set{-1,+1}^\fN$, and let $M$ be an $A$-supermartingale that achieves \textsc{oscillation} on $x$. We assume w.l.o.g. that $M$ is a martingale, because by definition $M$ achieves \textsc{oscillation} iff $\tilde M$ oscillates. Take $a,b\in\fR$ such that \[ \liminf_{n\to\infty} M(x\restriction n)<a<b<\limsup_{n\to\infty} M(x\restriction n). \] For $y\in\set{-1,+1}^\fN$, define stopping times $n_0(y),n_1(y),\ldots$ recursively by \begin{align*} n_0(y)&=\inf\set{n\geq 0:\ M(y\restriction n)< a},\\ n_{2i+1}(y) &=\inf\set{n>n_{2i}:\ M(y\restriction n)> b},\\ n_{2(i+1)}(y) &=\inf\set{n>n_{2i+1}:\ M(y\restriction n)<a}, \end{align*} with the convention that the infimum of the empty set is $\infty$. 
We define an $A$-supermartingale $S$ by specifying $S'$, $S(\varepsilon)$, and $f=\tilde S-S$ as follows: $S(\varepsilon)=2a$; before time $n_0(y)$, $S'\equiv 0$ and $f\equiv 0$. For $n_{2i}(y) \leq t < n_{2i+1}(y)$, $S'(y\restriction t)=M'(y\restriction t)$; otherwise $S'=0$. At times $\set{n_{2i+1}(y)}_{i=0}^\infty$, $f$ increases by $b-a$; otherwise $f$ doesn't change. \end{proof} \begin{proof}[Proof of Proposition \ref{prop V to Z oscillate}.] Let $x\in\set{-1,+1}^\fN$, and let $M$ be a $V$-martingale that oscillates on $x$. Let $L=\liminf_{n\to\infty}M(x\restriction n)$. There exists $t_0\in\fN$ such that $M(x\restriction t)> L-\frac 1 2$, for every $t\geq t_0$, and $\norm{M(x\restriction t)-L}< \frac 1 2$ infinitely often. The following $\set{0,-1,+1}$-martingale, $S$, oscillates on $x$: \begin{align*} S(\varepsilon)&=1,\\ S'(y\restriction t)&= \begin{cases} \mathrm{sign}(M'(y\restriction t)) &\text{if $t\geq t_0$, $\norm{M(x\restriction t)-L}< \frac 1 2$, and $S(y\restriction t)=1$,}\\ -\mathrm{sign}(M'(y\restriction t)) &\text{if $t\geq t_0$, $\norm{M(x\restriction t)-L}< \frac 1 2$, and $S(y\restriction t)=2$,}\\ 0&\text{otherwise,} \end{cases} \end{align*} where $\mathrm{sign}(0):=0$. \end{proof} \begin{proof}[Proof of Proposition \ref{prop gain to oscillate}] Let $x\in\set{-1,+1}^\fN$ and let $M$ be an $\fR$-supermartingale that makes $\infty$-\textsc{gains} on $x$. For $y\in\set{-1,+1}^\fN$, define stopping times $n_0(y),n_1(y),\ldots$ recursively by \begin{align*} n_0(y)&=0,\\ n_{i+1}(y)&=\inf\set{n>n_i:\ M(y\restriction n)\geq 2 M(y\restriction n_i)}. \end{align*} Define a martingale $S$ by \begin{align*} S(\varepsilon)&=5,\\ \shortintertext{and for $n_i \leq t < n_{i+1}$,} S'(y\restriction t)&=\frac{M'(y\restriction t)}{M(y\restriction n_i)}(-1)^{\chi(S(y\restriction n_i)\geq 5)}. 
\end{align*} \end{proof} \begin{proof}[Proof of Proposition \ref{prop V gain to save}] Let $x\in\set{-1,+1}^\fN$ and let $M$ be a $V$-supermartingale that makes $\infty$-\textsc{gains} on $x$. Assume without loss of generality that $M$ is a martingale and $M(\varepsilon)\geq 2$. For $y\in\set{-1,+1}^\fN$, define stopping times $n_0(y),n_1(y),\ldots$ recursively by \begin{align*} n_0(y)&=0,\\ n_{i+1}(y)&=\inf\set{n>n_i:\ M(y\restriction n)\geq 2 M(y\restriction n_i)}. \end{align*} Define a supermartingale $S$ specified by $S(\varepsilon)$, $S'$, and $f=\tilde S-S$. \begin{align*} S(\varepsilon)&=2M(\varepsilon),\\ \shortintertext{and for $n_i \leq t < n_{i+1}$,} S'(y\restriction t)&=\left(M'(y\restriction t)\right)(2-\sum_{j<i}\frac 1 {M(y\restriction n_j)}),\\ f(y\restriction t)&=i. \end{align*} \end{proof} \begin{proof}[Proof of Theorem~\ref{thm V save not to oscillate}] Denote by $M$ the $\set{1}$-martingale with initial capital $1$. Define a saving function \begin{align*} f(\sigma)&= \left\lfloor\frac 1 2 \max_{0\leq k\leq |\sigma|}M(\sigma\restriction k)\right\rfloor,\\ \shortintertext{and denote} m(\sigma)&=M(\sigma)-f(\sigma). \end{align*} Note that, according to these definitions, $M$, $m$, and $f$ diverge to $\infty$ on the same set of sequences, $f$ increases only after two consecutive $+1$s, and $f$ never increases twice in a row. Let $\mathcal B$ be a set of $V$-martingales, assumed countable w.l.o.g. Arrange the elements of $\mathcal B\times \fN$ in a sequence $\set{(S_e,k_e)}_{e=1}^\infty$, such that whenever $e<e'$ and $S_e=S_{e'}$, then $k_e<k_{e'}$. We define a sequence $x\in\set{-1,+1}^\fN$ recursively. Assume $x\restriction n$ is defined for some $n\geq 0$. We say that $(S_e,k_e)$ \emph{receives attention} at time $n$ if $e$ is minimal with respect to the following properties: \begin{enumerate}[(i)] \item $S_e(x\restriction n)\leq k_e$, \item $m(x\restriction n) > e$, \item $S'_e(x\restriction n)\neq 0$.
\end{enumerate} Define \[ x_{n+1}= \begin{cases} -\mathrm{sign}\, S'_e(x\restriction n)& \text{if some $(S_e,k_e)$ receives attention at time $n$,}\\ +1&\text{otherwise.} \end{cases} \] According to our definition, no $(S_e,k_e)$ receives attention at time $n$ when $m(x\restriction n)=1$; therefore, $m(x\restriction t)\geq 1$, for every $t$. Let $L$ be an arbitrary integer satisfying $1\leq L \leq \liminf_{n\to\infty} m(x\restriction n)$. Since $m$ is integer-valued, there exists some $n_0\in\fN$ such that $m(x\restriction n)\geq L$, for every $n\geq n_0$. Consider the set of indexes $I=\set{n\geq n_0:\ m(x\restriction n)> L}$. Since $f$ never increases twice in a row, $m(x\restriction n)$ never stabilizes and hence $I$ is infinite. Consider the sequence of $L$-tuples of integers $\set{a_n=\tuple{\min\set{k_e,\floor{S_e(x\restriction n)}}}_{e=1}^L}_{n\in I}$. According to our definition, $a_n$ never increases (with respect to the lexicographic order on $\fZ^L$), it decreases whenever $m(x\restriction n)-1=m(x\restriction (n+1))=L$, and then $m(x\restriction (n+2))=L+1$ (since $f$ increases only after two consecutive $+1$s); therefore $a_n$ is fixed for $n$ large enough and $m(x\restriction n)=L$ only finitely many times. This is true for every $1\leq L \leq \liminf m(x\restriction n)$; therefore, $\liminf m(x\restriction n)=\infty$. Next, for every $S\in\mathcal B$ and $k\in\fN$, take $L$ such that $(S_L,k_L)=(S,k)$. The above shows that $\inf\set{\floor{S(x\restriction n)},k}$ is fixed for $n$ large enough; therefore $S$ does not oscillate around $k$, for every $k\in\fN$, and since $S$ is a $V$-martingale, $S$ does not oscillate at all. \end{proof} \begin{proof}[Proof of Theorem \ref{thm R gain not to V gain}.] Let $\set{S_1,S_2,\ldots}$ be a countable set of $V$-supermartingales. Assume w.l.o.g. that $S_i(\sigma)\geq 0$, for every $i\geq 1$ and $\sigma\in\set{-1,+1}^{<\infty}$.
Let $\epsilon_1,\epsilon_2,\dots$ be independent random variables assuming the values $\pm 1$ with probability $(\frac 1 2,\frac 1 2)$. The series $\sum_{i=1}^\infty \frac{\epsilon_i}{i}$ converges a.s., by Doob's martingale convergence theorem, since the finite sums have bounded second moment. Let $L>0$ be large enough that \[ \Pr(\norm{\sum_{i=N}^{N+K} \frac{\epsilon_i}{i}}< L,\ \forall N,K\in\fN)>0. \] Define a history-independent $\fR$-martingale $S_0:\set{-1,+1}^{<\infty}\to\fR$ to be \[ S_0(x_1,\ldots,x_n)=(L+2)\sum_{i=1}^n \frac {x_i} i. \] We shall construct a random process $x_1,x_2,\ldots$ that satisfies the following properties: \begin{enumerate}[(i)] \item $\limsup_{n\to\infty} S_j(x\restriction n)<\infty$, for $j\geq 1$ ($x$ almost surely); \item $\liminf_{n\to\infty} S_0(x\restriction n)=\infty$ ($x$ almost surely); \item $\Pr_x(\inf_{n\in\fN} S_0(x\restriction n) \geq 1)>0$. \end{enumerate} Our plan is to define an increasing sequence of stopping times $n_0(x)< n_1(x)<\dots$ and to use it in a recursive definition of $\Pr(x_{n+1}|x_1,\ldots,x_{n})$. Let $n_0\equiv 0$. For $i>0$, let \begin{align*} n_{2i-1}(x)&=\inf\set{t>n_{2i-2}(x):\ \sum_{j=1}^iS_j(x\restriction t)<t}\text{ and}\\ n_{2i}(x)&=\inf\set{t>n_{2i-1}(x):\ \sum_{n=n_{2i-1}(x)}^t \frac{x_n} n \geq L}. \end{align*} Now we define $\Pr(x_{n+1}|x_1,\ldots,x_n)$. For $i\in\fN\cup\set{0}$ and $n_{2i}(x) \leq n < n_{2i+1}(x)$ (note that $n_i$ are stopping times), \[ \Pr(x_{n+1}=1|x_1,\ldots,x_n)\equiv \frac 1 2. \] For $n_{2i+1}(x) \leq n < n_{2i+2}(x)$, the value of $x_{n+1}$ will be determined by $x_1,\ldots,x_n$. We write $x_{n+1}=a$ to abbreviate the expression $\Pr(x_{n+1}=a|x_1,\ldots,x_n)=1$. Let \[ j=\inf\set{e:\ \exists n_{2i+1}(x)\leq t\leq n,\ S_e'(x_1,\ldots,x_t)\neq 0}. 
\] Define \[ x_{n+1}= \begin{cases} -1 &\text{if $j\leq i+1$ (in particular, $j$ is finite) and $S_j'(x_1,\ldots,x_n)>0$,}\\ +1 &\text{otherwise.} \end{cases} \] We now analyze the behavior of each $S_j(x\restriction n)$ by separately considering its increments over two parts of the time line: $T^{\mathrm{odd}}(x)=\bigcup_{i=0}^\infty(n_{2i}(x),n_{2i+1}(x)]$ and $T^{\mathrm{even}}(x)=\left(\bigcup_{i=0}^\infty(n_{2i+1}(x),n_{2(i+1)}(x)]\right)$. We recursively define functions $S_j^{\mathrm{odd}}:\set{-1,+1}^{<\infty}\to\fR$, for $j\geq 0$, by \begin{align*} S_j^{\mathrm{odd}}(\varepsilon)&=S_j(\varepsilon),\\ S_j^{\mathrm{odd}}(x\restriction n)-S_j^{\mathrm{odd}}(x\restriction n-1) &= \begin{cases} S_j(x\restriction n)-S_j(x\restriction n-1),&\text{if $n\in T^{\mathrm{odd}}$,}\\ 0 &\text{if $n\in T^{\mathrm{even}}$,}\\ \end{cases}\\ \shortintertext{and define} S_j^{\mathrm{even}} &= S_j-S_j^{\mathrm{odd}}. \end{align*} By Doob's martingale convergence theorem (this time the $S_j$-s are non-negative), $S^{\mathrm{odd}}_j(x\restriction n)$ is convergent for every $j\geq 0$ ($x$ a.s.). Also, if $n_{2i}(x)$ is finite, then so is $n_{2i+1}(x)$, ($x$ a.s.). For $n\in T^{\mathrm{even}}$, the definition of $x_n$ is such that $\set{\tuple{S_j(x\restriction n)}_{j=1}^i}_{n=n_{2i+1}(x)}^{n_{2(i+1)}(x)}$ is non-increasing (lexicographically), it decreases each time $x_n=-1$, and it can decrease at most $n_{2i+1}(x)$ times; therefore, if $n_{2i+1}(x)$ is finite, so is $n_{2(i+1)}(x)$. Also, $\tuple{\floor{S_j(x\restriction n)}}_{j=1}^i$ is non-increasing; therefore, $S^{\mathrm{even}}_j(x\restriction n)$ is eventually non-increasing (for every $j\geq 1$) and hence convergent. By definition, $S_0^{\mathrm{even}}(x\restriction n_{2(i+1)})-S_0^{\mathrm{even}}(x\restriction n_{2i})\geq L>0$. 
Since $\#\{n_{2i+1}(x)<n\leq n_{2(i+1)}(x):\ x_n=-1\}\leq n_{2i+1}(x)$, $S_0^{\mathrm{even}}(x\restriction n) - S_0^{\mathrm{even}}(x\restriction n_{2i})\geq -1$, for every $n_{2i}(x)<n\leq n_{2(i+1)}(x)$; therefore $\lim_{n\to\infty}S_0^{\mathrm{even}}(x\restriction n)=\infty$ and $\inf_{n\in\fN}S_0^{\mathrm{even}}(x\restriction n)\geq -1$. The choice of $L$ was made so that there would be an event of positive probability, $\mathcal E$, in which $\inf_{n\in\fN}S_0^{\mathrm{odd}}(x\restriction n)\geq 2$; therefore \begin{multline*} \inf_{n\in\fN}S_0(x\restriction n)=\inf_{n\in\fN}(S_0^{\mathrm{odd}}+S_0^{\mathrm{even}})(x\restriction n)\geq \inf_{n\in\fN}S_0^{\mathrm{odd}}(x\restriction n) + \inf_{n\in\fN}S_0^{\mathrm{even}}(x\restriction n) \geq 2-1=1, \end{multline*} in the event $\mathcal E$. \end{proof} \begin{proof}[Proof of Theorem \ref{thm V gain not to Z gain}.] Define a history-independent $V$-martingale $M$ by \begin{align*} M(\varepsilon) &= 2,\\ M'(\cdot\restriction n)&=1+\frac 1 n. \end{align*} Denote the harmonic sum $H(n)=\sum_{k=1}^n\frac 1 k$. For $y\in\set{-1,+1}^\fN$, we define the number of consecutive losses that $M$ can incur before going bankrupt after betting against $y\restriction n$ as \begin{align*} K(y\restriction n)&=k_n(M(y\restriction n))\text{, where}\\ k_n(m)&= \max\set{k: k + H(n+k)-H(n) \leq m}. \end{align*} Let $\mathcal B=\set{S_1,S_2,\ldots}$ be a countable set of $\fZ$-supermartingales. Assume w.l.o.g. that $S_i(\sigma)\geq 0$, for every $i\geq 1$ and $\sigma\in\set{-1,+1}^{<\infty}$. Assume also, without loss of generality (for technical reasons that will become clear), that $\mathcal B$ includes constant martingales of arbitrarily large capital. We begin with an informal description of $x\in\set{-1,+1}^\fN$. We apply two phases against each $S_j$ in turn. In the first phase we make sure that $K(x\restriction n)>S_1(x\restriction n)$.
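The count $k_n(m)$ defined above can be evaluated directly in exact arithmetic; a small sketch (function names are ours), where a loss of the $t$-th bet costs $1+\frac 1 t$:

```python
from fractions import Fraction

def H(n):
    """Harmonic sum H(n) = 1 + 1/2 + ... + 1/n."""
    return sum(Fraction(1, k) for k in range(1, n + 1))

def k_n(n, m):
    """max{k : k + H(n+k) - H(n) <= m}: how many consecutive losses of the
    bets 1 + 1/t, for t = n+1, ..., n+k, a capital of m survives."""
    k = 0
    while (k + 1) + H(n + k + 1) - H(n) <= m:
        k += 1
    return k

# with capital m = 2 at time n = 0, a single loss already costs 1 + 1/1 = 2,
# so k_0(2) = 1
```

Since each loss costs more than $1$, the loop terminates for every finite $m$; the unboundedness of $H$ is what makes an unbroken run of wins with $k(n+1)=k(n)+1$ impossible in the proof.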
In the second phase, we play adversarially to $S_1$, if $S'_1(x\restriction n)\neq 0$, and otherwise we play the first phase against $S_2$ with initial capital $M(x\restriction n)-k^{-1}_n(S_1(x\restriction n))$. More generally, if at some stage $n$, $e$ is maximal with respect to $K(x\restriction n)>S_1(x\restriction n)+\cdots+S_{e}(x\restriction n)$, we play adversarially to $S_i$, if $i=\min\set{j\leq e:\ S'_j(x\restriction n)\neq 0}$ exists, and otherwise we play the first phase against $S_{e+1}$ with initial capital $M(x\restriction n)-k^{-1}_n(S_1(x\restriction n)+\cdots+S_{e}(x\restriction n))$. We now turn to a formal recursive definition of $x\in\set{-1,+1}^\fN$. Assume by induction that $x\restriction n$ is already defined. Let \begin{align*} e(n)&=\inf\set{e:\ K(x\restriction n) \leq S_1(x\restriction n)+\cdots+S_{e}(x\restriction n)}\text{, and}\\ k(n)&=K(x\restriction n)-\left(S_1(x\restriction n)+\cdots+S_{e(n)-1}(x\restriction n)\right). \end{align*} By the assumption that $\mathcal B$ includes arbitrarily large constants, $e(n)$ is well defined. If there is an index $1\leq i< e(n)$ such that $S'_i(x\restriction n)\neq 0$, let $j$ be the minimum of these indexes, and let \[ x_{n+1}=-\mathrm{sign}\, S'_j(x\restriction n). \] Note that in this case \begin{equation}\label{eq e > j} e(n+1)\geq j. \end{equation} Assume now that $S'_1(x\restriction n)=\cdots=S'_{e(n)-1}(x\restriction n)=0$ (or $e(n)=1$). Define \[ x_{n+1}= \begin{cases} +1 &\text{if $S'_{e(n)}(x\restriction n)\leq \frac {S_{e(n)}(x\restriction n)}{k(n)}$,}\\ -1 & \text{if $S'_{e(n)}(x\restriction n)> \frac {S_{e(n)}(x\restriction n)}{k(n)}$.} \end{cases} \] Note that in this case \begin{equation}\label{eq e > e} e(n+1)\geq e(n). \end{equation} Since $\mathcal B$ includes constant martingales of arbitrarily large capital, it is sufficient to prove that $e(n)$ diverges to $\infty$ and $S_i(x\restriction n)$ converges for every $i\in\fN$.
Assume, for contradiction, that \[ e=\inf \left(\set{h:S_h(x\restriction n)\text{ diverges}}\cup\set{\liminf e(n)}\right) \] is finite. By \eqref{eq e > j} and \eqref{eq e > e}, $e(n)$ converges to $e$. Denote the division with remainder of $S_{e}(x\restriction n)$ by $k(n)$ as \[ S_e(x\restriction n)=q(n)k(n)+r(n). \] For $n$ large enough so that $e(n)=e$ and $S'_1(x\restriction n)=\cdots=S'_{e-1}(x\restriction n)=0$, we have $(q(n+1),r(n+1))\leq (q(n),r(n))$ (lexicographically) with a strict inequality if either $x_{n+1}=-1$ or $k(n+1)>k(n)+1$; therefore, for some $n_0\in\fN$ and every $n\geq n_0$, $x_{n+1}=+1$ and $k(n+1)=k(n)+1$. But this is impossible by the definition of $k(n)$ and the fact that the harmonic sum is unbounded. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm Z gain not to save}] We present a correction to the construction of \cite{teutsch2013}. Let $\{S_e\}_{e=1}^\infty$ be an arbitrary sequence of $\fZ$-supermartingales. Let $M$ be the $\set{1}$-martingale with initial capital $1$. We construct a sequence $x\in\set{+1,-1}^{\fN}$ on which $M$ makes $\infty$-\textsc{gains} and none of the $\{S_e\}_{e=1}^\infty$ makes $\infty$-\textsc{consumption}. We may assume w.l.o.g. that $\{S_e\}_{e=1}^\infty$ are integer-valued, because if they were not, then $\set{\lceil S_e\rceil}_{e=1}^\infty$ would be. Also, we may include among $\{S_e\}_{e=1}^\infty$ constant martingales of arbitrarily large capital and further assume w.l.o.g. that $\{S_e\}_{e=1}^\infty$ never go bankrupt. We define a sequence $x$ recursively. Assume $x\restriction n$ is already defined. Define the following integer-valued functions: \begin{align*} s_e(n)&:= S_e(x\restriction n),\\ s'_e(n)&:= S'_e(x\restriction n),\\ f_e(n)&:= \tilde S_e(x\restriction n)-S_e(x\restriction n),\\ m_1(n)&:= M(x\restriction n),\\ m_{e+1}(n)&:= m_e(n)-r_e(n), \\ \shortintertext{where} q_e(n)&:=\left\lfloor \frac{s_e(n)}{m_e(n)}\right\rfloor,\\ r_e(n)&:=s_e(n)-q_e(n)m_e(n). 
\end{align*} The above is well defined as long as $m_1(n)> 0$, and in that case we have \begin{equation}\label{eq m} m_1(n)\geq m_2(n)\geq\cdots \geq 1. \end{equation} Let $i=i(n)=\min\set{e:s'_e(n)\neq q_e(n)}$. Note that the set in the definition of $i$ is not empty, since $\set{m_e(n)}_{e=1}^\infty$ is bounded (by $m_1(n)$) and $\set{s_{e}(n):s'_e(n)=0}$ is not bounded, because $\{S_e\}_{e=1}^\infty$ include arbitrarily large constant martingales. We are now ready to define \[ x_{n+1}= \begin{cases} +1 &\text{if $s'_{i}(n)\leq q_i(n)$,}\\ -1 &\text{if $s'_{i}(n)> q_i(n)$.} \end{cases} \] The following properties follow by induction on $n$: \begin{enumerate}[(i)] \item $m_1(n)>0$; \item for every $e < i(n)$, the pair $\langle q_e(n+1),r_e(n+1) \rangle$ is lexicographically not greater than $\langle q_e(n),r_e(n) \rangle$, with strict inequality if $S_e$ consumes money at time $n$; \item $\langle q_i(n+1),r_i(n+1) \rangle$ is lexicographically strictly less than $\langle q_i(n),r_i(n) \rangle$. \end{enumerate} From (ii) and (iii), the sequence of words \[ \set{\langle q_e(n),r_e(n)\rangle_{e=1}^{i(n)}}_{n=1}^\infty \] is strictly decreasing. Since natural numbers cannot decrease indefinitely, it must be the case that $\lim_{n\to\infty}i(n)=\infty$ and each $\langle q_e(n),r_e(n)\rangle$ is fixed for $n$ large enough. It follows from (ii) that none of $\set{S_e}_{e=1}^\infty$ achieves $\infty$-\textsc{consumption}. Since $\lim_{n\to\infty}i(n)=\infty$ and $\set{S_e}_{e=1}^\infty$ include arbitrarily large constants, $M$'s capital must go to $\infty$. The only delicate point to notice when verifying (i)--(iii) is (ii), in the case where $S_e$ does not consume and $r_e(n)=m_e(n)-1$. In that case, if $x_{n+1}$ were equal to $-1$, we would have $q_e(n+1)=q_e(n)+1$ (and $r_e(n+1)=0$). Fortunately, this situation is avoided by the definition of $m_e(n)$. 
If $r_e(n)=m_e(n)-1$, then $m_{e+1}(n)=1$; hence, by \eqref{eq m}, $m_i(n)=1$; hence, by the definition of $x$ and the assumption that $S_i$ does not go bankrupt, $x_{n+1}=+1$. \end{proof} \section*{Acknowledgments} The author wishes to thank Aviv Keren for useful comments. The early stages of this work were done in 2010-12 while the author was a postdoctoral researcher at Tel Aviv University. The author wishes to thank his hosts Ehud Lehrer and Eilon Solan. This work was supported in part by the Google Inter-university Center for Electronic Markets and Auctions.
I am at times speechless at the happiness people share with me - Good Day and Happy New Year! I do not remember if we told you how we received the rings…. on Saturday Dec 22, we were expecting the rings to arrive. In our minds we imagined their arrival so that we could have them for the ceremony. We had gone out for a couple of hours. When we returned there was a note that the Postal Service had attempted to deliver the package and that we could pick it up the next business day…. too late for the ceremony!! So Judy went to the back of the Post Office, got the emergency phone numbers and began calling. The first 3 were not valid, but the 4th was the cell phone number of the person in the post office where the package was taken! His number was “definitely not supposed to be listed as an emergency number!”)… so he said…. :) And….. they were just about to close! We grabbed some holiday “goodies” (nuts, cookies), jumped in the car, and zoomed over to the Post Office, 30 minutes from our home. Well, he met us at the door with our package!! We laughed and laughed at the unlikely series of events that allowed us to have our rings for our sacred wedding ceremony! You had first told us that the rings would be made by the 24th or 25th…. then we found out that you shipped them on Thursday, the 20th, with “2-5 day shipping”. Then, as unlikely as it first seemed (except to our imaginations!) the rings arrived in time! Woo-hoo! Divine Intervention at work!! :) What fun!! Thank you for these wonderful rings! We are happy to know of your beautiful products and the magnificence of your character! We happily recommend you to all, Dear Ones! Blessings All-Ways, Phil & Judy We are sending you a couple of pix of the happy couple… us! All is Love, Dear Ones, Amarananda & Nadiananda Phil & Judy chose the 18k gold Lotus ring as their wedding rings.
\begin{document} \title{Ramanujan's approximation to the exponential function and generalizations} \author{ Cormac ~O'Sullivan\footnote{ {\it 2010 Mathematics Subject Classification:} 33B10, 30E15 \newline \indent \ \ \ {\em Key words and phrases.} Exponential function, gamma function, saddle-point method, exponential integral. \newline \indent \ \ \ Support for this project was provided by a PSC-CUNY Award, jointly funded by The Professional Staff Congress and The City \newline \indent \ \ \ University of New York.} } \date{} \maketitle \def\s#1#2{\langle \,#1 , #2 \,\rangle} \def\F{{\frak F}} \def\C{{\mathbb C}} \def\R{{\mathbb R}} \def\Z{{\mathbb Z}} \def\Q{{\mathbb Q}} \def\N{{\mathbb N}} \def\G{{\Gamma}} \def\GH{{\G \backslash \H}} \def\g{{\gamma}} \def\L{{\Lambda}} \def\ee{{\varepsilon}} \def\K{{\mathcal K}} \def\Re{\mathrm{Re}} \def\Im{\mathrm{Im}} \def\PSL{\mathrm{PSL}} \def\SL{\mathrm{SL}} \def\Vol{\operatorname{Vol}} \def\lqs{\leqslant} \def\gqs{\geqslant} \def\sgn{\operatorname{sgn}} \def\res{\operatornamewithlimits{Res}} \def\li{\operatorname{Li_2}} \def\lip{\operatorname{Li}'_2} \def\pl{\operatorname{Li}} \def\ei{\mathrm{Ei}} \def\clp{\operatorname{Cl}'_2} \def\clpp{\operatorname{Cl}''_2} \def\farey{\mathscr F} \def\dm{{\mathcal A}} \def\ov{{\overline{p}}} \def\ja{{K}} \def\nb{{\mathcal B}} \def\cc{{\mathcal C}} \def\nd{{\mathcal D}} \newcommand{\stira}[2]{{\genfrac{[}{]}{0pt}{}{#1}{#2}}} \newcommand{\stirb}[2]{{\genfrac{\{}{\}}{0pt}{}{#1}{#2}}} \newcommand{\eu}[2]{{\left\langle\!\! \genfrac{\langle}{\rangle}{0pt}{}{#1}{#2}\!\!\right\rangle}} \newcommand{\eud}[2]{{\left\langle\! 
\genfrac{\langle}{\rangle}{0pt}{}{#1}{#2}\!\right\rangle}} \newcommand{\norm}[1]{\left\lVert #1 \right\rVert} \newcommand{\e}{\eqref} \newcommand{\bo}[1]{O\left( #1 \right)} \newtheorem{theorem}{Theorem}[section] \newtheorem{lemma}[theorem]{Lemma} \newtheorem{prop}[theorem]{Proposition} \newtheorem{conj}[theorem]{Conjecture} \newtheorem{cor}[theorem]{Corollary} \newtheorem{assume}[theorem]{Assumptions} \newtheorem{adef}[theorem]{Definition} \newcounter{counrem} \newtheorem{remark}[counrem]{Remark} \renewcommand{\labelenumi}{(\roman{enumi})} \newcommand{\spr}[2]{\sideset{}{_{#2}^{-1}}{\textstyle \prod}({#1})} \newcommand{\spn}[2]{\sideset{}{_{#2}}{\textstyle \prod}({#1})} \numberwithin{equation}{section} \let\originalleft\left \let\originalright\right \renewcommand{\left}{\mathopen{}\mathclose\bgroup\originalleft} \renewcommand{\right}{\aftergroup\egroup\originalright} \bibliographystyle{alpha} \begin{abstract} Ramanujan's approximation to the exponential function is reexamined with the help of Perron's saddle-point method. This allows for a wide generalization that includes the results of Buckholtz, and where all the asymptotic expansion coefficients may be given in closed form. Ramanujan's approximation to the exponential integral is treated similarly. \end{abstract} \section{Introduction} \subsection{Ramanujan's approximation to $e^n$} The largest terms in the Taylor series development of $e^n$, when $n$ is a positive integer, are $n^j/j!$ for $j=n-1$ and $j=n$. So it is natural to compare $e^n/2$ with the sum of the first $n$ terms of this series. 
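This comparison is easy to probe numerically. A small sketch using Python's `decimal` module (the helper name is ours) computes the ratio of the first $n$ terms of the series to $e^n$:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # working precision for exp and the partial sums

def partial_sum_ratio(n):
    """(sum_{j=0}^{n-1} n^j / j!) / e^n, computed in 50-digit decimal arithmetic."""
    s, term = Decimal(0), Decimal(1)      # term = n^j / j!, starting at j = 0
    for j in range(n):
        s += term
        term = term * n / (j + 1)
    return s / Decimal(n).exp()

# the ratio approaches 1/2 from below, roughly like 1/2 - 1/(3*sqrt(2*pi*n))
```

The slow approach to $\frac 1 2$ is exactly what the expansion of $\theta_n$ below quantifies, since the deficit is $\theta_n\, n^n/n! \approx e^n/(3\sqrt{2\pi n})$.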
Ramanujan did this in Entry 48 of Chapter 12 in his second notebook, writing \begin{equation}\label{rb} 1+\frac{n}{1!}+\frac{n^2}{2!}+ \cdots +\frac{n^{n-1}}{(n-1)!} + \frac{n^n}{n!}\theta_n = \frac{e^n}2, \end{equation} and computing an asymptotic expansion which is equivalent to \begin{equation}\label{rb2} \theta_n = \frac{1}3+ \frac{4}{135 n}-\frac{8}{2835 n^2}-\frac{16}{8505 n^3}+\frac{8992}{12629925 n^4}+O\left( \frac{1}{n^5}\right), \end{equation} as $n \to \infty$. Label the coefficient of $n^{-r}$ in the above expansion as $\rho_r$. The difficulty of computing $\rho_r$ in general was resolved by Marsaglia in \cite{Mar86} with a recursive procedure. In this paper, all expansion coefficients are given in closed form. For example, one of our formulas for $\rho_r$ is \begin{equation*} \rho_r = - \sum_{k=0}^{2r+1} \frac{(2r+2k)!!}{(-1)^k k!} \dm_{2r+1,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right), \end{equation*} where the De Moivre polynomials $\dm_{n,k}$ are described in Definition \ref{dbf}. An elementary formula for the quantity $\dm_{m,k}\left(\frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right)$ is shown in \e{mvw2}. The usual double factorial notation we are using has \begin{equation} \label{doub} n!! := \begin{cases} n(n-2) \cdots 5 \cdot 3 \cdot 1 & \text{ if $n$ is odd};\\ n(n-2) \cdots 6 \cdot 4 \cdot 2 & \text{ if $n$ is even}, \end{cases} \end{equation} for $n\gqs 1$, with $0!!=(-1)!!=1$. Inspired by claims of Ramanujan, Szeg\"o in 1928 and Watson in 1929 bounded $\theta_n$ from above and below. Flajolet and coauthors in 1995 \cite[Sect. 1]{Fl95} established a finer estimate and this result was elegantly reproved and extended by Volkmer \cite{Vo08}, employing the Lambert $W$ function. See the discussion of much work related to Entry 48 in \cite[pp. 181--184]{Ber89}. 
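As a consistency check, the closed form for $\rho_r$ above can be evaluated in exact rational arithmetic and compared with the coefficients in \e{rb2}. In the sketch below (all function names are ours), De Moivre polynomials are computed as coefficients of truncated polynomial powers, following Definition \ref{dbf}:

```python
from fractions import Fraction
from math import factorial

def de_moivre(n, k, a):
    """A_{n,k}(a_1, a_2, ...): coefficient of x^n in (a_1 x + a_2 x^2 + ...)^k,
    via repeated polynomial multiplication truncated at degree n."""
    poly = [Fraction(0)] * (n + 1)
    poly[0] = Fraction(1)
    for _ in range(k):
        new = [Fraction(0)] * (n + 1)
        for i, c in enumerate(poly):
            if c:
                for j, aj in enumerate(a, start=1):
                    if i + j > n:
                        break
                    new[i + j] += c * aj
        poly = new
    return poly[n]

def double_factorial(n):
    """n!! for n >= 0, with 0!! = 1."""
    result = 1
    while n > 1:
        result, n = result * n, n - 2
    return result

def rho(r):
    """Coefficient of n^{-r} in the expansion of theta_n, from the closed form."""
    a = [Fraction(1, factorial(j + 2)) for j in range(1, 2 * r + 2)]  # 1/3!, 1/4!, ...
    return -sum(Fraction(double_factorial(2 * r + 2 * k), (-1) ** k * factorial(k))
                * de_moivre(2 * r + 1, k, a) for k in range(2 * r + 2))

# rho(0), rho(1), rho(2) reproduce 1/3, 4/135, -8/2835 from the expansion of theta_n
```

Since everything is a `Fraction`, the comparison with Ramanujan's coefficients is exact rather than numerical.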
Equation \e{rb} may be generalized to summing the first $n+v$ terms of the series: \begin{equation}\label{rb3} \sum_{j=0}^{n+v-1}\frac{n^j}{j!}+ \frac{n^{n+v}}{(n+v)!}\theta_n(v) = \frac{e^n}2. \end{equation} Ramanujan developed the asymptotics of a related integral (see \e{vv} with $w=1$), as described in \cite[p. 193]{Ber89}, and his result is equivalent to \begin{multline}\label{hrb} \theta_n(v) = \frac{1}3-v+ \left(\frac{4}{135}-\frac{v^2(v+1)}{3}\right)\frac 1{n}\\ - \left(\frac{8}{2835}+\frac{v(9v^4-15v^2-2v+4)}{135}\right)\frac{1}{n^2}+O\left( \frac{1}{n^3}\right), \end{multline} for $v$ a fixed integer as $n \to \infty$. Our version of \e{hrb} is given in Theorem \ref{ext}. \subsection{Further asymptotics} We may also include another natural parameter $w$. Let $n$ and $v$ be integers with $n \gqs 1$ and $n+v \gqs 0$. For nonzero $w\in \C$, define $S_n(w;v)$ with \begin{equation}\label{ss} e^{n w} =\sum_{j=0}^{n+v-1} \frac{(n w)^j}{j!} + \frac{(n w)^{n+v}}{(n+v)!} S_n(w;v) \end{equation} and define the complementary $T_n(w;v)$ with \begin{equation}\label{tt} e^{n w} = \frac{(n w)^{n+v}}{(n+v)!} T_n(w;v) + \sum_{j=n+v}^{\infty} \frac{(n w)^j}{j!}. \end{equation} For $w=0$ we may set $S_n(0;v)$ to be $0$, leaving $T_n(0;v)$ undefined. It is clear from \e{tt} that $T_n(w;v)$ can be given as a finite sum: \begin{equation}\label{tt2} \frac{(n w)^{n+v}}{(n+v)!} T_n(w;v) = \sum_{j=0}^{n+v-1} \frac{(n w)^j}{j!}. \end{equation} Also there is the relation \begin{equation}\label{tt3} e^{n w} = \frac{(n w)^{n+v}}{(n+v)!}\left(S_n(w;v)+ T_n(w;v)\right). \end{equation} Our earlier $\theta_n(v)$ function from \e{rb3} occurs in the $w=1$ case: \begin{align}\label{ori} \theta_n(v) & = \frac{S_n(1;v)}2 - \frac{T_n(1;v)}2 \\ & = S_n(1;v) -\frac{(n+v)!}{2 n^{n+v}} e^n = \frac{(n+v)!}{2 n^{n+v}} e^n -T_n(1;v). 
\label{ori2} \end{align} The behavior of $S_n(w;v)$ and $T_n(w;v)$ as $n \to \infty$ are our main results in Theorems \ref{msth} and \ref{mtth}, extending the $v=0$ case considered by Buckholtz in \cite{Bu63}. These asymptotics depend on which of certain regions $w$ lies in; see Figure \ref{fig}. The region $\mathcal X$ is given by $\{w\in \C\,:\, |w e^{1-w}|>1\}$. Also $\{w\in \C\,:\, |w e^{1-w}|<1\}$ has the disjoint parts $\mathcal Y$ with $\Re(w)<1$ and $\mathcal Z$ with $\Re(w)>1$. The boundary curves $\mathcal S$ and $\mathcal T$ are where $|w e^{1-w}|=1$ with $\Re(w)<1$ and $\Re(w)>1$, respectively. These curves have the parametrization $t\pm i \sqrt{e^{2t-2}-t^2}$ for $t\gqs -W(1/e) \approx -0.2785$, using the Lambert $W$ function. Among other things, Szeg\"o showed in \cite{Sz24} that $z$ is an accumulation point for the zeros of $T_n(w;1)$ as $n\to \infty$ if and only if $z \in \mathcal S \cup\{1\}$; this is the Szeg\"o curve. \SpecialCoor \psset{griddots=5,subgriddiv=0,gridlabels=0pt} \psset{xunit=3cm, yunit=3cm} \psset{linewidth=1pt} \psset{dotsize=4pt 0,dotstyle=*} \begin{figure}[ht] \begin{center} \begin{pspicture}(-1,-1)(2,1) \pscustom[fillstyle=gradient,gradangle=270,linecolor=white,gradmidpoint=1,gradbegin=white,gradend=lightblue,gradlines=100]{ \pspolygon[linecolor=lightblue](-1,-1)(1.66,-1)(1.66,1)(-1,1)(-1,1)} \savedata{\mydatay}[ {{1., 0.}, {0.98, 0.0197342}, {0.96, 0.0389403}, {0.94, 0.0576232}, {0.92, 0.0757878}, {0.9, 0.0934385}, {0.88, 0.11058}, {0.86, 0.127215}, {0.84, 0.143349}, {0.82, 0.158985}, {0.8, 0.174127}, {0.78, 0.188776}, {0.76, 0.202937}, {0.74, 0.216612}, {0.72, 0.229802}, {0.7, 0.242511}, {0.68, 0.25474}, {0.66, 0.26649}, {0.64, 0.277763}, {0.62, 0.288559}, {0.6, 0.29888}, {0.58, 0.308724}, {0.56, 0.318093}, {0.54, 0.326985}, {0.52, 0.3354}, {0.5, 0.343336}, {0.48, 0.350792}, {0.46, 0.357765}, {0.44, 0.364252}, {0.42, 0.370252}, {0.4, 0.375758}, {0.38, 0.380768}, {0.36, 0.385276}, {0.34, 0.389275}, {0.32, 0.39276}, {0.3, 
0.395723}, {0.28, 0.398155}, {0.26, 0.400047}, {0.24, 0.401387}, {0.22, 0.402164}, {0.2, 0.402364}, {0.18, 0.40197}, {0.16, 0.400966}, {0.14, 0.399332}, {0.12, 0.397045}, {0.1, 0.39408}, {0.08, 0.390407}, {0.06, 0.385992}, {0.04, 0.380798}, {0.02, 0.374778}, {0., 0.367879}, {-0.02, 0.36004}, {-0.04, 0.351184}, {-0.06, 0.341221}, {-0.08, 0.330038}, {-0.1, 0.317495}, {-0.12, 0.303411}, {-0.14, 0.287549}, {-0.16, 0.26958}, {-0.18, 0.249039}, {-0.2, 0.225206}, {-0.2, 0.225206}, {-0.21, 0.211711}, {-0.22, 0.196878}, {-0.23, 0.180374}, {-0.24, 0.161689}, {-0.25, 0.139946}, {-0.26, 0.1134}, {-0.27, 0.0772425}, {-0.271, 0.0725798}, {-0.272, 0.0675838}, {-0.273, 0.0621741}, {-0.274, 0.0562315}, {-0.275, 0.0495648}, {-0.276, 0.0418289}, {-0.277, 0.032264}, {-0.278, -0.0181818}, {-0.277, -0.032264}, {-0.276, -0.0418289}, {-0.275, -0.0495648}, {-0.274, -0.0562315}, {-0.273, -0.0621741}, {-0.272, -0.0675838}, {-0.271, -0.0725798}, {-0.27, -0.0772425}, {-0.26, -0.1134}, {-0.25, -0.139946}, {-0.24, -0.161689}, {-0.23, -0.180374}, {-0.22, -0.196878}, {-0.21, -0.211711}, {-0.2, -0.225206}, {-0.2, -0.225206}, {-0.18, -0.249039}, {-0.16, -0.26958}, {-0.14, -0.287549}, {-0.12, -0.303411}, {-0.1, -0.317495}, {-0.08, -0.330038}, {-0.06, -0.341221}, {-0.04, -0.351184}, {-0.02, -0.36004}, {0., -0.367879}, {0.02, -0.374778}, {0.04, -0.380798}, {0.06, -0.385992}, {0.08, -0.390407}, {0.1, -0.39408}, {0.12, -0.397045}, {0.14, -0.399332}, {0.16, -0.400966}, {0.18, -0.40197}, {0.2, -0.402364}, {0.22, -0.402164}, {0.24, -0.401387}, {0.26, -0.400047}, {0.28, -0.398155}, {0.3, -0.395723}, {0.32, -0.39276}, {0.34, -0.389275}, {0.36, -0.385276}, {0.38, -0.380768}, {0.4, -0.375758}, {0.42, -0.370252}, {0.44, -0.364252}, {0.46, -0.357765}, {0.48, -0.350792}, {0.5, -0.343336}, {0.52, -0.3354}, {0.54, -0.326985}, {0.56, -0.318093}, {0.58, -0.308724}, {0.6, -0.29888}, {0.62, -0.288559}, {0.64, -0.277763}, {0.66, -0.26649}, {0.68, -0.25474}, {0.7, -0.242511}, {0.72, -0.229802}, {0.74, -0.216612}, {0.76, 
-0.202937}, {0.78, -0.188776}, {0.8, -0.174127}, {0.82, -0.158985}, {0.84, -0.143349}, {0.86, -0.127215}, {0.88, -0.11058}, {0.9, -0.0934385}, {0.92, -0.0757878}, {0.94, -0.0576232}, {0.96, -0.0389403}, {0.98, -0.0197342}, {1., 0.}} ] \dataplot[linecolor=black,linewidth=0.8pt,plotstyle=line,fillstyle=solid,fillcolor=white]{\mydatay} \savedata{\mydata}[ {{1.66, 0.993892}, {1.64, 0.952386}, {1.62, 0.911709}, {1.6, 0.871847}, {1.58, 0.832786}, {1.56, 0.794515}, {1.54, 0.75702}, {1.52, 0.72029}, {1.5, 0.684311}, {1.48, 0.649074}, {1.46, 0.614565}, {1.44, 0.580775}, {1.42, 0.547692}, {1.4, 0.515307}, {1.38, 0.483608}, {1.36, 0.452585}, {1.34, 0.422229}, {1.32, 0.392531}, {1.3, 0.363481}, {1.28, 0.335071}, {1.26, 0.307291}, {1.24, 0.280133}, {1.22, 0.253589}, {1.2, 0.22765}, {1.18, 0.20231}, {1.16, 0.177561}, {1.14, 0.153394}, {1.12, 0.129804}, {1.1, 0.106784}, {1.08, 0.084326}, {1.06, 0.0624248}, {1.04, 0.0410739}, {1.02, 0.0202676}, {1., 0.}, {1., 0.}, {1.02, -0.0202676}, {1.04, -0.0410739}, {1.06, -0.0624248}, {1.08, -0.084326}, {1.1, -0.106784}, {1.12, -0.129804}, {1.14, -0.153394}, {1.16, -0.177561}, {1.18, -0.20231}, {1.2, -0.22765}, {1.22, -0.253589}, {1.24, -0.280133}, {1.26, -0.307291}, {1.28, -0.335071}, {1.3, -0.363481}, {1.32, -0.392531}, {1.34, -0.422229}, {1.36, -0.452585}, {1.38, -0.483608}, {1.4, -0.515307}, {1.42, -0.547692}, {1.44, -0.580775}, {1.46, -0.614565}, {1.48, -0.649074}, {1.5, -0.684311}, {1.52, -0.72029}, {1.54, -0.75702}, {1.56, -0.794515}, {1.58, -0.832786}, {1.6, -0.871847}, {1.62, -0.911709}, {1.64, -0.952386}, {1.66, -0.993892}} ] \dataplot[linecolor=orange,linewidth=0.8pt,plotstyle=line,fillstyle=solid,fillcolor=white]{\mydata} \dataplot[linecolor=black,linewidth=0.8pt,plotstyle=line]{\mydatay} \dataplot[linecolor=orange,linewidth=0.8pt,plotstyle=line]{\mydata} \pscircle*[linecolor=white,linewidth=1pt](1,0){0.08} \pscircle[linecolor=red,linewidth=1pt](1,0){0.08} \psdot(0,0) \rput(1,-0.12){$1$} \rput(0,-0.12){$0$} 
\rput(-0.65,0.0){$\mathcal X$} \rput(0.5,0.0){$\mathcal Y$} \rput(1.9,0.0){$\mathcal Z$} \rput(-0.2,0.35){$\mathcal S$} \rput(1.35,0.6){$\mathcal T$} \end{pspicture} \caption{Partitioning the $w$-plane into $\mathcal X \cup \mathcal Y \cup \mathcal Z \cup \mathcal S \cup \mathcal T \cup \{ 1\}$ \label{fig}} \end{center} \end{figure} Perron's saddle-point method is reviewed in section \ref{saddle}, and all the asymptotic expansions in this paper are proved as applications of this theory. Our work also naturally includes the following version of Stirling's approximation. \begin{prop} \label{xpg} Let $v$ be any complex number. As real $n \to \infty$, \begin{equation} \label{gnv} \G(n+v+1)= \sqrt{2\pi n}\frac{n^{n+v}}{e^n} \left(1+\frac{\g_1(v)}{n}+\frac{\g_2(v)}{n^2}+ \cdots + \frac{\g_{R-1}(v)}{n^{R-1}} +O\left(\frac{1}{n^{R}}\right)\right), \end{equation} for an implied constant depending only on $R$ and $v$, with \begin{equation}\label{gnv2} \g_r(v) = \sum_{m=0}^{2r} (-1)^m \binom{v}{2r-m}\sum_{k=0}^{m} \frac{(2r+2k-1)!!}{(-1)^k k!} \dm_{m,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right). \end{equation} \end{prop} The same techniques are used in the final section to examine Ramanujan's approximation to the exponential integral $\ei(n)$, which may be defined as a Cauchy principal value: \begin{equation} \label{ein} \ei(n):= \lim_{\varepsilon \to 0^+} \left(\int_{-\infty}^{-\varepsilon} \frac{e^t}t \, dt + \int_{\varepsilon}^{n} \frac{e^t}t \, dt\right). \end{equation} We describe there an unexplained connection between these approximations to $\ei(n)$ and $e^{n}$. \section{De Moivre polynomials and the saddle-point method} \label{saddle} \begin{adef} \label{dbf} For integers $n$, $k$ with $k\gqs 0$, the {\em De Moivre polynomial} $\dm_{n,k}(a_1, a_2, \dots)$ is defined by \begin{equation} \label{bell} \left( a_1 x +a_2 x^2+ a_3 x^3+ \cdots \right)^k = \sum_{n\in \Z} \dm_{n,k}(a_1, a_2, a_3, \dots) x^n \qquad \quad (k \in \Z_{\gqs 0}). 
\end{equation} \end{adef} Many properties of these polynomials are assembled in \cite{odm}. Clearly $\dm_{n,k}(a_1, a_2, a_3, \dots)=0$ if $n<k$. If $n\gqs k$ then \begin{equation} \label{bell2} \dm_{n,k}(a_1, a_2, a_3, \dots) = \sum_{\substack{1j_1+2 j_2+ \dots +mj_m= n \\ j_1+ j_2+ \dots +j_m= k}} \binom{k}{j_1 , j_2 , \dots , j_m} a_1^{j_1} a_2^{j_2} \cdots a_m^{j_m}, \end{equation} where $m=n-k+1$ and the sum is over all possible $j_1$, $j_2$, \dots , $j_m \in \Z_{\gqs 0}$. It is a polynomial in $a_1, a_2, \dots, a_{m}$ of homogeneous degree $k$ with positive integer coefficients. As in \cite[Sect. 2]{odm}, we have the relations \begin{align} \dm_{n,k}(0, a_1, a_2, a_3, \dots) & = \dm_{n-k,k}(a_1, a_2, a_3, \dots), \label{gsb}\\ \dm_{n,k}(c a_1, c a_2, c a_3, \dots) & = c^k \dm_{n,k}(a_1, a_2, a_3, \dots), \label{mulk}\\ \dm_{n,k}(c a_1, c^2 a_2, c^3 a_3, \dots) & = c^n \dm_{n,k}(a_1, a_2, a_3, \dots). \label{muln} \end{align} In the paper \cite{OSper} we give a detailed description of Perron's saddle-point method from \cite{Pe17}. The main result requires the following assumptions and definitions. \begin{assume} \label{ma0} Let $\nb$ be a neighborhood of $z_0 \in \C$ and $\cc$ a contour of integration containing $z_0$. Assume that $\cc$ lies in a bounded region of $\C$ and is parameterized by a continuous function $c:[0,1]\to \C$ that has a continuous derivative except at a finite number of points. Suppose $p(z)$ and $q(z)$ are holomorphic functions on a domain containing $\nb \cup \cc$. We assume $p(z)$ is not constant and hence there must exist $\mu \in \Z_{\gqs 1}$ and $p_0 \in \C_{\neq 0}$ so that \begin{equation} p(z) =p(z_0)-p_0(z-z_0)^\mu(1-\phi(z)) \qquad (z\in \nb) \label{f} \end{equation} with $\phi$ holomorphic on $\nb$ and $\phi(z_0)=0$. We will need the {\em steepest-descent angles} \begin{equation}\label{bisec} \theta_\ell := -\frac{\arg(p_0)}{\mu}+\frac{2\pi \ell}{\mu} \qquad (\ell \in \Z). 
\end{equation} Assume that $\nb,$ $\cc,$ $p(z),$ $q(z)$ and $z_0$ are independent of $n>0$. Finally, let $K_q$ be a bound for $|q(z)|$ on $\nb \cup \cc$. \end{assume} \begin{theorem}\label{il} {\rm (Perron's method for a holomorphic integrand with contour starting at a maximum.)} Suppose that Assumptions \ref{ma0} hold, with $\cc$ a contour from $z_0$ to $z_1$ in $\C$ where $z_0 \neq z_1$. Suppose that \begin{equation}\label{c1} \Re(p(z))<\Re(p(z_0)) \quad \text{for all} \quad z \in \cc, \ z\neq z_0. \end{equation} We may choose $k \in \Z$ so that the initial part of $\cc$ lies in the sector of angular width $2\pi/\mu$ about $z_0$ with bisecting angle $\theta_k$. Then for every $S \in \Z_{\gqs 0}$, we have \begin{equation} \label{wim} \int_\cc e^{n \cdot p(z)} q(z) \, dz = e^{n \cdot p(z_0)} \left(\sum_{s=0}^{S-1} \G\left(\frac{s+1}{\mu}\right) \frac{\alpha_s \cdot e^{2\pi i k (s+1)/\mu}}{n^{(s+1)/\mu}} + O\left(\frac{K_q}{n^{(S+1)/\mu}} \right) \right) \end{equation} as $n \to \infty$ where the implied constant in \eqref{wim} is independent of $n$ and $q$. The numbers $\alpha_s$ depend only on $s$, $p$, $q$ and $z_0$. \end{theorem} Theorem \ref{il} is \cite[Thm. 1.2]{OSper} and the next proposition is \cite[Prop. 7.2]{OSper}. Write the Taylor expansions of $p$ and $q$ at $z_0$ as \begin{equation} \label{pqe} p(z)-p(z_0)=-\sum_{s=0}^\infty p_s (z-z_0)^{s+\mu}, \qquad q(z)=\sum_{s=0}^\infty q_s (z-z_0)^{s}. \end{equation} \begin{prop} \label{wojf} The numbers needed in Theorem \ref{il} have the explicit formula \begin{equation} \label{hjw} \alpha_s = \frac{1}{\mu} p_0^{-(s+a)/\mu} \sum_{m=0}^s q_{s-m}\sum_{j=0}^m \binom{-(s+a)/\mu}{j} \dm_{m,j}\left(\frac{p_1}{p_0},\frac{p_2}{p_0},\cdots\right), \end{equation} for $a=1$. (We will need an extension of this later, requiring more general $a$ values.) \end{prop} \section{Initial results for $S_n(w;v)$ and $T_n(w;v)$} Define \begin{equation}\label{pz} p(z)=p(w;z) := w(1-z)+\log z. 
\end{equation} For $w$, $v$ in $\C$ with $w \neq 0$ and positive real $n$, we will need the function \begin{equation}\label{pr} \frac{1}{(n w)^{n+v}} = e^{-(n+v) \log(nw)} = e^{-(n+v) \log n} \cdot e^{-(n+v) \log w}, \end{equation} where $\log w$ is evaluated using the principal branch of the logarithm with arguments in $(-\pi,\pi]$. \begin{lemma} \label{snwx} The following formulas can be used to extend the definitions \e{ss} and \e{tt} of $S_n(w;v)$ and $T_n(w;v)$ to all $w$, $v$ in $\C$ and $n$ in $\R$ with $n>0$ and $\Re(n+v)>-1$: \begin{align} \label{snw2} S_n(w;v) & =1+n w \int_0^1 e^{n \cdot p(z)} z^v \, dz,\\ T_n(w;v) & =\frac{e^{n w}}{(n w)^{n+v}} \G(n+v+1) - S_n(w;v) \qquad (w\neq 0) \label{snw3}. \end{align} As functions of $w$, $S_n(w;v)$ is entire and $T_n(w;v)$ is holomorphic outside $(-\infty,0]$. \end{lemma} \begin{proof} We first assume that $n$ and $v$ are integers with $n\gqs 1$ and $n+v \gqs 0$. Then for nonzero $w\in \C$ the integral \begin{equation}\label{vv} \int_0^\infty e^{-t}\left(1+\frac t{n w} \right)^{n+v} \, dt \end{equation} is absolutely convergent and by the binomial theorem it equals \begin{equation*} \sum_{j=0}^{n+v} \binom{n+v}{j} \int_0^\infty e^{-t} \left(\frac t{n w} \right)^j \, dt =\frac{(n+v)!}{(n w)^{n+v}} \sum_{j=0}^{n+v} \frac{(n w)^j}{j!}. \end{equation*} Hence \e{tt2} implies that \e{vv} equals $1+T_n(w;v)$. With a change of variables, \begin{equation} \label{bri} 1+T_n(w;v) = n w \int_{\nd} e^{-n w z} (1+z)^{n+v} \, dz = n w \int_{1+\nd} e^{n w(1-z)} z^{n+v} \, dz \end{equation} for $\nd = \nd_w$ the line from $0$ through $1/w$ to infinity. Next, \begin{align} \frac{e^{n w}}{(n w)^{n+v}}(n+v)! & = \frac{e^{n w}}{(n w)^{n+v}} \int_0^\infty e^{-t} t^{n+v} \, dt \notag\\ & = \frac{e^{n w}}{(n w)^{n+v}} n w\int_{\nd} e^{-nw z} (n w z)^{n+v} \, dz \notag \\ & = n w\int_{\nd} e^{nw (1-z)} z^{n+v} \, dz.
\label{bri2} \end{align} From \e{tt3}, \e{bri} and \e{bri2}, writing $\G(n+v+1)$ for $(n+v)!$, \begin{align} S_n(w;v) & =\frac{e^{n w}}{(n w)^{n+v}} \G(n+v+1) - T_n(w;v) \label{bri3}\\ & = 1+n w\int_{\nd} e^{nw (1-z)} z^{n+v} \, dz -n w \int_{1+\nd} e^{n w(1-z)} z^{n+v} \, dz. \notag \end{align} Integrating this holomorphic integrand around a closed contour gives zero, and so a limiting argument implies \e{snw2} for nonzero $w\in \C$ and integers $n$, $v$ with $n\gqs 1$ and $n+v \gqs 0$. Now the integral in \e{snw2} converges to an entire function of $w$ for all $n$, $v$ with $\Re(n+v)>-1$, extending the definition of $S_n(w;v)$. Also \e{snw3} follows from \e{bri3}, allowing the definition of $T_n(w;v)$ to be extended. \end{proof} \begin{lemma} \label{xm} Let $w$, $v$ be in $\C$. As real $n \to \infty$, \begin{alignat}{2} \label{lmf} S_n(w;v) & = 1+n w \int_{1/2}^1 e^{n \cdot p(z)} z^v \, dz +O\left(2^{-n/20}\right) \qquad & &(\Re(w)\lqs 1),\\ T_n(w;v) & = -1+n w \int_{1}^{3/2} e^{n \cdot p(z)} z^v \, dz +O\left(e^{-n/30}\right) \qquad & &(\Re(w)\gqs 1), \label{lmf2} \end{alignat} for implied constants depending only on $w$ and $v$. \end{lemma} \begin{proof} With Lemma \ref{snwx}, to demonstrate \e{lmf} we must bound \begin{equation} \label{dan} n w \int_0^{1/2} e^{n \cdot p(z)} z^v \, dz \ll n \int_0^{1/2}\left( e^{\Re(w)(1-z)}z \right)^n z^{\Re(v)} \, dz. \end{equation} Use the inequality $e^{1-z}z \lqs z^{1/10}$ for $0\lqs z \lqs 1/2$ to continue, with \begin{equation*} n \int_0^{1/2}z^{n/10+ \Re(v)} \, dz \ll n \int_0^{1/2}z^{n/20} \, dz, \end{equation*} for $n$ large enough that $n/20+ \Re(v)\gqs 0$, and we obtain \e{lmf}. 
For \e{lmf2} we claim first that when $\Re(w)\gqs 1$ and $n$ is large enough, \begin{equation}\label{ct} T_n(w;v) = -1+n w \int_{1}^{\infty} e^{n \cdot p(z)} z^v \, dz, \end{equation} and by \e{snw3} this is true if we can establish \begin{equation}\label{ct2} \frac{e^{n w}}{(n w)^{n+v}} \G(n+v+1) = n w \int_0^\infty e^{n \cdot p(z)} z^v \, dz. \end{equation} But we have \begin{equation} \label{fol} \frac{e^{n w}}{(n w)^{n+v}} \G(n+v+1) = \frac{e^{n w}}{(n w)^{n+v}} \int_0^\infty e^{-t} t^{n+v}\, dt = n w \int_{\nd} e^{nw(1-z)} z^{n+v} \, dz, \end{equation} with $\nd = \nd_w$ the line from $0$ through $1/w$ to infinity. Let $\beta :=\arg(w)$. Then $|\beta|<\pi/2$ and we also have $\arg(1/w)=-\beta$. In the usual way, the line of integration $\nd$ may be moved to the positive real axis after checking some growth estimates as follows. Let $\nd_1$ be the path from $R_1>0$ to $R_2>R_1$. Then $\nd_2$ is the arc of radius $R_2$ from $R_2$ to $R_2 e^{-i \beta}$. Next $\nd_3$ is the line from $R_2 e^{-i \beta}$ to $R_1 e^{-i \beta}$, coinciding with part of $\nd$, and lastly $\nd_4$ is the arc of radius $R_1$ from $R_1 e^{-i \beta}$ to $R_1$. Integrating $e^{nw(1-z)} z^{n+v}$ around the closed path made up of $\nd_1$, $\nd_2$, $\nd_3$ and $\nd_4$ gives zero since it is holomorphic on the interior. Writing $w=|w|e^{i \beta}$ and $z=R e^{i \theta}$, we may bound the integrals over the arcs $\nd_2$ and $\nd_4$ with \begin{align*} \left| e^{nw(1-z)} z^{n+v}\right| & \lqs e^{n|w|(\cos(\beta)-R \cos(\beta+\theta))}R^{n+\Re(v)} \\ & \lqs e^{n|w|\cos(\beta)(1-R)}R^{n+\Re(v)} \end{align*} since $\theta$ is between $-\beta$ and $0$. Therefore, for $R=R_2$, \begin{equation} \label{ct3} n w \int_{\nd_2} e^{nw(1-z)} z^{n+v} \, dz \ll n e^{n|w|\cos(\beta)(1-R_2)}R_2^{n+\Re(v)+1}. \end{equation} As $\cos(\beta)>0$ we see that \e{ct3} goes to zero as $R_2 \to \infty$. Also the integral over $\nd_4$ goes to zero as $R=R_1 \to 0$ in \e{ct3} when $n$ is large enough that $n+\Re(v)+1>0$. 
Hence \e{fol} implies \e{ct2} as we wanted. Lastly we argue as in \e{dan} to bound the part of the integral \e{ct} with $z\gqs 3/2$. Use that $e^{1-z}z \lqs e^{-z/20}$ when $z\gqs 3/2$ to show \begin{equation*} n w \int_{3/2}^\infty e^{n \cdot p(z)} z^v \, dz \ll n \int_{3/2}^\infty e^{-n z/20} z^{\Re(v)} \, dz. \end{equation*} For $n$ large enough that $e^{-n z/40} z^{\Re(v)} \lqs 1$ when $3/2 \lqs z$, we may replace the last integrand by $ e^{-n z/40}$ and complete the proof of \e{lmf2}. \end{proof} \section{The case $w=1$} \label{w=1} In this section we set $w=1$ so that $p(z)=p(1;z)=1-z+\log z$. \begin{prop} \label{pil} Let $S$ be a fixed positive integer. As $n \to \infty$ \begin{align} \int_{1/2}^1 e^{n \cdot p(z)} z^v \, dz & = \sum_{s=0}^{S-1} \G\left( \frac{s+1}2\right) \frac{(-1)^s\beta_s(v)}{n^{(s+1)/2}} + O\left(\frac 1{n^{(S+1)/2}} \right),\label{in}\\ \int_1^{3/2} e^{n \cdot p(z)} z^v \, dz & = \sum_{s=0}^{S-1} \G\left( \frac{s+1}2\right) \frac{\beta_s(v)}{n^{(s+1)/2}} + O\left(\frac 1{n^{(S+1)/2}} \right), \label{in2} \end{align} for \begin{equation} \beta_s(v) =\sum_{m=0}^s (-1)^m \binom{v}{s-m} \sum_{k=0}^m 2^{(s-1)/2+k} \binom{-(s+1)/2}{k} \dm_{m,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right). \label{in3} \end{equation} \end{prop} \begin{proof} We see that $p(z)$ is holomorphic away from $z=0$. Let $z_0=1$ with $p(1)=0$ and we have the Taylor expansion about $1$: \begin{equation*} p(z)-p(1)=-\sum_{j=2}^\infty \frac{(-1)^{j}}{j} (z-1)^{j}. \end{equation*} According to our setup with Assumptions \ref{ma0} and \e{pqe}, $q(z)=z^v$ and \begin{equation}\label{mu2} p_s=(-1)^s/(s+2), \quad p_0=1/2, \quad \mu=2, \quad \theta_\ell=\pi \ell, \quad q_s=\binom{v}{s}. \end{equation} For the integral in \e{in} we may apply Theorem \ref{il} with $\cc$ the interval from $1$ to $1/2$ since $\Re(p(z))$ has its maximum at $z=1$. 
The initial part of $\cc$ lies in the sector with bisecting angle $\theta_1=\pi$, since the contour is moving left, and we need $k=1$ in \e{wim}. This means that \begin{equation*} \int_{1}^{1/2} e^{n \cdot p(z)} z^v \, dz = \sum_{s=0}^{S-1} \G\left(\frac{s+1}{2}\right) \frac{\alpha_s \cdot (-1)^{(s+1)}}{n^{(s+1)/2}} + O\left(\frac{1}{n^{(S+1)/2}} \right) \end{equation*} and $\beta_s(v)=\alpha_s$ is computed with Proposition \ref{wojf} to get \e{in3}. The integral \e{in2} is handled the same way, the only difference being that the contour is moving right into the sector with bisecting angle $\theta_0=0$, and so $k=0$ is needed in \e{wim}. \end{proof} With \e{ori}, \e{ori2} and Lemma \ref{snwx}, we may extend the definition of $\theta_n(v)$ to all $n>0$ and $v\in \C$ with $\Re(n+v)>-1$ using \begin{align}\label{lie} \theta_n(v) & =S_n(1;v) - \frac{e^n}{2 n^{n+v}}\G(n+v+1) \\ & = \frac{S_n(1;v)}2 -\frac{T_n(1;v)}2. \label{bx} \end{align} Define \begin{equation}\label{rhrv} \rho_r(v) := \delta_{r,0}- \sum_{m=0}^{2r+1} (-1)^m\binom{v}{2r+1-m}\sum_{k=0}^{m} \frac{(2r+2k)!!}{(-1)^k k!} \dm_{m,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right). \end{equation} \begin{theorem} \label{ext} With the above definition, as $n \to \infty$, \begin{equation}\label{abb} \theta_n(v) =\rho_0(v)+\frac{\rho_1(v)}{n}+\frac{\rho_2(v)}{n^2}+ \cdots + \frac{\rho_{R-1}(v)}{n^{R-1}}+ O\left( \frac{1}{n^R}\right) \end{equation} for an implied constant depending only on $R\in \Z_{\gqs 1}$ and $v\in \C$. \end{theorem} \begin{proof} Together, Lemma \ref{xm}, Proposition \ref{pil} and \e{bx} imply that \begin{equation*} \theta_n(v) = 1+\frac n2 \left(\sum_{s=0}^{S-1} \G\left( \frac{s+1}2\right) \frac{\beta_s(v)}{n^{(s+1)/2}}((-1)^s -1) + O\left(\frac 1{n^{(S+1)/2}} \right) \right). \end{equation*} Summing over $s=2r+1$ odd then gives \begin{equation*} \theta_n(v) = 1-\sum_{r=0}^{R-1} \G(r+1) \frac{\beta_{2r+1}(v)}{n^{r}} + O\left(\frac 1{n^{R}}\right) \end{equation*} and the theorem follows.
\end{proof} \begin{proof}[Proof of Proposition \ref{xpg}] By \e{snw3}, \begin{equation} \G(n+v+1) = \frac{n^{n+v}}{e^n} \left( S_n(1;v) + T_n(1;v)\right). \label{bx2} \end{equation} Hence, for $\beta_s(v)$ in \e{in3}, \begin{equation*} \G(n+v+1) =n \frac{n^{n+v}}{e^n} \left(\sum_{s=0}^{S-1} \G\left( \frac{s+1}2\right) \frac{\beta_s(v)}{n^{(s+1)/2}}((-1)^s +1) + O\left(\frac 1{n^{(S+1)/2}} \right) \right). \end{equation*} Summing over $s=2r$ even then gives \begin{equation*} \G(n+v+1) = 2\sqrt{n} \frac{n^{n+v}}{e^n} \left(\sum_{r=0}^{R-1} \G(r+1/2) \frac{\beta_{2r}(v)}{n^{r}} + O\left(\frac 1{n^{R}}\right)\right). \end{equation*} The proof is completed using \begin{equation*} \G(r+1/2) = \frac{\sqrt{\pi} (2r)!}{2^{2r} r!}, \qquad \binom{-r-1/2}{k} =\frac{(-1)^k}{2^{2k} k!} \frac{(2r+2k)! r!}{(2r)! (r+k)!}, \end{equation*} which, with \e{doub}, imply the identity \begin{equation}\label{idp} 2^{r+k}\frac{\G(r+1/2)}{\sqrt{\pi}}\binom{-r-1/2}{k} = \frac{(2r+2k-1)!! }{(-1)^k k!}. \end{equation} \end{proof} \begin{cor}[Stirling's approximation] \label{stirgam} As real $n \to \infty$, \begin{equation} \label{gmx} \G(n+1)= \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \left(1+\frac{\g_1}{n}+\frac{\g_2}{n^2}+ \cdots + \frac{\g_{R-1}}{n^{R-1}} +O\left(\frac{1}{n^{R}}\right)\right), \end{equation} for an implied constant depending only on $R$, with \begin{equation}\label{gmy} \g_r = \sum_{k=0}^{2r} \frac{(2r+2k-1)!!}{(-1)^k k!} \dm_{2r,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right). \end{equation} \end{cor} Corollary \ref{stirgam} is the $v=0$ case of Proposition \ref{xpg}, and an equivalent form of \e{gmy} is due to Perron \cite[p. 210]{Pe17}. Also \e{gmy} is equivalent to \cite[Thm. 2.7]{BM11} using the generating function $z+\log(1-z)$. Brassesco and M\'endez give another formulation in \cite[Thm. 
2.1]{BM11}, based on $e^z-1-z$, and the formula corresponding to \e{gmy} is the same, except that $3$, $4$, $5, \dots$ are replaced by factorials: \begin{equation}\label{gmy2} \g_r = \sum_{k=0}^{2r} \frac{(2r+2k-1)!!}{(-1)^k k!} \dm_{2r,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right). \end{equation} This is true even though $\dm_{2r,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right)$ and $\dm_{2r,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right)$ are usually not equal. We will see \e{gmy2} as the $v=0$ case of Proposition \ref{v0}. In this section, with $w=1$, the function $p(1;z)$ has a simple saddle-point at $z=1$, i.e. $\frac d{dz} p(1;z)|_{z=1}=0$. This means $\mu=2$ in \e{mu2}. In the next section, where $w\neq 1$, the function $p(w;z)$ will no longer have a saddle-point at $z=1$, only a maximum. This makes $\mu=1$ and changes the shape of the asymptotics as we will see. Soni and Soni show in \cite{So92} how to give an asymptotic expansion for $S_n(w;0)$ that is uniform for $w$ in a neighborhood of $1$. \section{The cases $w \neq 1$} Define the rational functions \begin{equation}\label{wb} U_r(w;v) := \delta_{r,0}-\sum_{m=0}^r (-1)^{m}\binom{v}{r-m}\sum_{k=0}^{m} \frac{ w}{(w-1)^{r+k+1}} \frac{(r+k)!}{ k!} \dm_{m,k}\left( \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots \right), \end{equation} so that, for example, \begin{gather*} U_0(w;v)=\frac{1}{1-w}, \qquad U_1(w;v)=-\frac{w}{(1-w)^3} - v\frac{w}{(1-w)^2},\\ U_2(w;v)=\frac{w(2w+1)}{(1-w)^5} + v \frac{w(w+2)}{(1-w)^4}+ v^2 \frac{w}{(1-w)^3}. \end{gather*} \begin{prop} \label{abc} Let $w$ and $v$ be fixed complex numbers with $\Re(w)\lqs 1$ and $w \neq 1$. Then as real $n \to \infty$, \begin{equation} \label{wa} S_n(w;v) = U_0(w;v) + \frac{U_1(w;v)}{n} + \frac{U_2(w;v)}{n^2} + \cdots + \frac{U_{R-1}(w;v)}{n^{R-1}} +O\left(\frac{1}{n^{R}}\right), \end{equation} for an implied constant depending only on $w$, $v$ and $R$. 
\end{prop} \begin{proof} Recall that $p(z)= w(1-z)+\log z$. Theorem \ref{il} may be applied to \e{lmf} since $\Re(p(z))$ is strictly increasing for $1/2\lqs z \lqs 1$ when $\Re(w)\lqs 1$. As $p(1)=0$, the expansion of $p(z)$ at $z=1$ can be written as \begin{equation*} p(z)-p(1)=-(w-1)(z-1)-\sum_{j=2}^\infty \frac{(-1)^{j}}{j} (z-1)^{j}. \end{equation*} This time our setup with Assumptions \ref{ma0} and \e{pqe} has \begin{equation}\label{mu1} p_0=w-1, \quad \mu=1, \quad p_s=(-1)^{s+1}/(s+1) \ \text{ for } \ s\gqs 1, \quad \theta_\ell=-\arg(w-1)+2\pi \ell. \end{equation} Also $q_s=\binom{v}{s}$, $\cc$ is the interval from $1$ to $1/2$ and $k=0$ in \e{wim}; when $\mu=1$ then $k$ can be any integer and there is only `one' direction with $\Re(p(z))$ decreasing. Computing $\alpha_s$ with Proposition \ref{wojf} and simplifying with \e{mulk} and \e{muln} completes the proof. \end{proof} \begin{prop} \label{abc2} Let $w$ and $v$ be fixed complex numbers with $\Re(w)\gqs 1$. Then as real $n \to \infty$, \begin{equation} \label{wat} T_n(w;v) = -U_0(w;v) - \frac{U_1(w;v)}{n} - \frac{U_2(w;v)}{n^2} - \cdots - \frac{U_{R-1}(w;v)}{n^{R-1}} +O\left(\frac{1}{n^{R}}\right), \end{equation} for an implied constant depending only on $w$, $v$ and $R$. \end{prop} \begin{proof} Theorem \ref{il} may be applied to \e{lmf2} since $\Re(p(z))$ is strictly decreasing for $1\lqs z \lqs 3/2$ when $\Re(w)\gqs 1$. The same calculation as for Proposition \ref{abc} gives the result. \end{proof} Our asymptotics for $S_n(w;v)$ and $T_n(w;v)$ can now be assembled. The functions $\rho_r(v)$, $\g_r(v)$ and $U_r(w;v)$ are defined in \e{rhrv}, \e{gnv2} and \e{wb} respectively. Also recall the partition of the $w$-plane shown in Figure \ref{fig}. If $|w e^{1-w}|=1$, write $w e^{1-w}=e^{-i \varphi(w)}$ with $\varphi(w)$ real. \begin{theorem} \label{msth} Let $w$ and $v$ be complex numbers. 
As real $n \to \infty$, \begin{align} S_n(1;v) & =\sum_{r=0}^{R-1} \frac{1}{n^r} \left( \rho_r(v) + \frac{\g_r(v)}{2}\sqrt{2\pi n} \right) + O\left( \frac{1}{n^{R-1/2}}\right), \label{s1}\\ S_n(w;v) & = \sum_{r=0}^{R-1} \frac{U_r(w;v)}{n^r} + O\left( \frac{1}{n^R}\right) \qquad (w\in \mathcal X \cup \mathcal Y \cup \mathcal S), \label{s2}\\ S_n(w;v) & = \sum_{r=0}^{R-1} \frac{1}{n^r} \left( U_r(w;v) + \g_r(v) \frac{e^{n i \cdot \varphi(w)}}{w^v} \sqrt{2\pi n} \right) + O\left( \frac{1}{n^{R-1/2}}\right)\qquad (w\in \mathcal T), \label{s3}\\ S_n(w;v) & = \frac{\sqrt{2\pi n} }{(w e^{1-w})^n w^v} \left(\sum_{r=0}^{R-1} \frac{\g_r(v)}{ n^r} + O\left(\frac 1{n^{R}} \right)\right)\qquad (w\in \mathcal Z). \label{s4} \end{align} \end{theorem} \begin{proof} The asymptotic \e{s1} is a consequence of \e{lie}, Theorem \ref{ext} and Proposition \ref{xpg}. It can be seen from Proposition \ref{abc} that \e{s2} is true for $\Re(w)\lqs 1$ and $w \neq 1$ and this includes $\mathcal Y$, $\mathcal S$ and part of $\mathcal X$. For $\Re(w)> 1$, starting from \e{snw3} and using Propositions \ref{xpg}, \ref{abc2}, \begin{align} S_n(w;v) & =\frac{e^{n w}}{(n w)^{n+v}} \G(n+v+1) - T_n(w;v) \notag\\ & =\frac{e^{n w}}{(n w)^{n+v}}\sqrt{2\pi n}\frac{n^{n+v}}{e^n} \left(\sum_{r=0}^{R-1}\frac{\g_r(v)}{n^r}+ O\left(\frac{1}{n^{R}}\right)\right) + \sum_{r=0}^{R-1}\frac{U_r(w;v)}{n^r}+O\left(\frac{1}{n^{R}}\right) \notag\\ & =\frac{\sqrt{2\pi n}}{(w e^{1-w})^{n}w^v} \left(\sum_{r=0}^{R-1}\frac{\g_r(v)}{n^r}+ O\left(\frac{1}{n^{R}}\right)\right) + \sum_{r=0}^{R-1}\frac{U_r(w;v)}{n^r}+O\left(\frac{1}{n^{R}}\right).\label{rrr} \end{align} If $|w e^{1-w}|>1$ then the part of \e{rrr} containing this factor decays exponentially and we obtain \e{s2} for the remaining piece of $\mathcal X$. The case $|w e^{1-w}|=1$ in \e{rrr} gives \e{s3} and lastly $|w e^{1-w}|<1$ in \e{rrr} gives \e{s4}. \end{proof} \begin{theorem} \label{mtth} Let $w$ and $v$ be complex numbers. 
As real $n \to \infty$, \begin{align} T_n(1;v) & =\sum_{r=0}^{R-1} \frac{1}{n^r} \left( -\rho_r(v) + \frac{\g_r(v)}{2}\sqrt{2\pi n} \right) + O\left( \frac{1}{n^{R-1/2}}\right), \label{t1} \\ T_n(w;v) & = -\sum_{r=0}^{R-1} \frac{U_r(w;v)}{n^r} + O\left( \frac{1}{n^R}\right) \qquad (w\in \mathcal X \cup \mathcal Z \cup \mathcal T), \label{t2} \\ T_n(w;v) & = \sum_{r=0}^{R-1} \frac{1}{n^r} \left( -U_r(w;v) + \g_r(v) \frac{e^{n i \cdot \varphi(w)}}{w^v} \sqrt{2\pi n} \right) + O\left( \frac{1}{n^{R-1/2}}\right)\qquad (w\in \mathcal S), \label{t3} \\ T_n(w;v) & = \frac{\sqrt{2\pi n} }{(w e^{1-w})^n w^v} \left(\sum_{r=0}^{R-1} \frac{\g_r(v)}{ n^r} + O\left(\frac 1{n^{R}} \right)\right)\qquad (w\in \mathcal Y, \ w\neq 0). \label{t4} \end{align} \end{theorem} \begin{proof} The asymptotic \e{t1} is a consequence of \e{snw3}, Proposition \ref{xpg} and \e{s1}. It can be seen from Proposition \ref{abc2} that \e{t2} is true for $\Re(w)\gqs 1$ and $w \neq 1$ and this includes $\mathcal Z$, $\mathcal T$ and part of $\mathcal X$. For $\Re(w)< 1$ and $w\neq 0$, starting from \e{snw3} and using Propositions \ref{xpg}, \ref{abc}, \begin{align} T_n(w;v) & =\frac{e^{n w}}{(n w)^{n+v}} \G(n+v+1) - S_n(w;v) \notag\\ & =\frac{\sqrt{2\pi n}}{(w e^{1-w})^{n}w^v} \left(\sum_{r=0}^{R-1}\frac{\g_r(v)}{n^r}+ O\left(\frac{1}{n^{R}}\right)\right) - \sum_{r=0}^{R-1}\frac{U_r(w;v)}{n^r}+O\left(\frac{1}{n^{R}}\right).\label{rrr2} \end{align} The remaining cases of \e{t2}, \e{t3} and \e{t4} follow from \e{rrr2} depending on the size of $|w e^{1-w}|$. \end{proof} Theorems \ref{msth} and \ref{mtth} agree with \cite[Eqs. (5), (6)]{Bu63} for $w\neq 1$ and $v=0$. His function $U_r(z)$ equals $U_r(z;0)-\delta_{r,0}$ here. Since $\g_0(v)=1$ we may state the following corollaries. \begin{cor} For all $w$, $v\in \C$ we have that $S_n(w;v)$ remains bounded as $n\to \infty$ if and only if $w\in \mathcal X \cup \mathcal Y \cup \mathcal S$.
Also $T_n(w;v)$ remains bounded if and only if $w\in \mathcal X \cup \mathcal Z\cup \mathcal T$. \end{cor} \begin{cor} For all $w$, $v\in \C$ we have that $S_n(w;v)/\sqrt{n}$ remains bounded as $n\to \infty$ if and only if $w\notin \mathcal Z$. Also $T_n(w;v)/\sqrt{n}$ remains bounded if and only if $w\notin \mathcal Y$. \end{cor} \section{Another description of $\rho_r(v)$, $\g_r(v)$ and $U_r(w;v)$} \label{six} As we have seen in Theorems \ref{msth} and \ref{mtth}, the asymptotics of $S_n(w;v)$ and $T_n(w;v)$ can be completely described in terms of the functions \begin{align*} \rho_r(v)= \delta_{r,0}-{} & \sum_{m=0}^{2r+1} (-1)^m\binom{v}{2r+1-m}\sum_{k=0}^{m} \frac{(2r+2k)!!}{(-1)^k k!} \dm_{m,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right), \\ \g_r(v) = & \sum_{m=0}^{2r} (-1)^m \binom{v}{2r-m}\sum_{k=0}^{m} \frac{(2r+2k-1)!!}{(-1)^k k!} \dm_{m,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right),\\ U_r(w;v) = \delta_{r,0}-{} & \sum_{m=0}^r (-1)^{m}\binom{v}{r-m}\sum_{k=0}^{m} \frac{ w}{(w-1)^{r+k+1}} \frac{(r+k)!}{ k!} \dm_{m,k}\left( \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots \right). \end{align*} In this section we find variants of these formulas where the numbers $1/j$ inside the De Moivre polynomials are replaced by $1/j!$. Integrating \e{snw2} by parts shows \begin{align} S_n(w;v) & = 1+n w \int_0^1 e^{n w(1-z)} z^{n+v} \, dz \notag\\ & = (n+v) \int_0^1 e^{n w(1-z)} z^{n+v-1} \, dz \notag\\ & = (n+v) \int_{-\infty}^0 e^{n w(1-e^z)} e^{(n+v)z} \, dz, \label{here} \end{align} after the change of variables $z\to e^z$. Define \begin{equation}\label{pz2} \tilde p(z)= \tilde p(w;z) := w(1-e^z)+ z. \end{equation} \begin{lemma} \label{par} Let $w$, $v$ be in $\C$. 
As real $n \to \infty$, \begin{alignat}{2} \label{par2} S_n(w;v) & = (n+v) \int_{-2}^0 e^{n \cdot \tilde p(z)} e^{v z} \, dz +O\left(e^{-n/2}\right) \qquad & &(\Re(w)\lqs 1),\\ T_n(w;v) & = (n+v) \int_{0}^2 e^{n \cdot \tilde p(z)} e^{v z} \, dz +O\left(e^{-n/2}\right) \qquad & &(\Re(w)\gqs 1), \label{par3} \end{alignat} for implied constants depending only on $w$ and $v$. \end{lemma} \begin{proof} The proof is based on \e{snw3} and \e{here}. It is similar to that of Lemma \ref{xm}, requiring the inequality $1+z-e^z < -|z|/2$ for real $z$ with $|z|\gqs 2$. \end{proof} \begin{prop} \label{pilcc} Let $S \in \Z_{\gqs 1}$ be fixed and set $w=1$ so that $\tilde p(z)= \tilde p(1;z) := 1-e^z+ z$. As $n \to \infty$, \begin{align} \int_{-2}^0 e^{n \cdot \tilde p(z)} e^{v z} \, dz & = \sum_{s=0}^{S-1} \G\left( \frac{s+1}2\right) \frac{(-1)^s \tilde \beta_s(v)}{n^{(s+1)/2}} + O\left(\frac 1{n^{(S+1)/2}} \right),\label{incc}\\ \int_{0}^2 e^{n \cdot \tilde p(z)} e^{v z} \, dz & = \sum_{s=0}^{S-1} \G\left( \frac{s+1}2\right) \frac{ \tilde \beta_s(v)}{n^{(s+1)/2}} + O\left(\frac 1{n^{(S+1)/2}} \right), \label{in2cc} \end{align} for \begin{equation} \tilde \beta_s(v) =\sum_{m=0}^s \frac{v^{s-m}}{(s-m)!} \sum_{k=0}^m 2^{(s-1)/2+k} \binom{-(s+1)/2}{k} \dm_{m,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right). \label{in3cc} \end{equation} \end{prop} \begin{proof} The function $ \tilde p(z)$ is entire. Let $z_0=0$ with $ \tilde p(0)=0$ and we have the Taylor expansion about $0$: \begin{equation*} \tilde p(z)- \tilde p(0)=-\sum_{j=2}^\infty \frac{z^{j}}{j!}. \end{equation*} According to our setup with Assumptions \ref{ma0} and \e{pqe}, $p(z)= \tilde p(z)$, $q(z)=e^{v z}$ and $$ p_s=1/(s+2)!, \quad p_0=1/2, \quad \mu=2, \quad \theta_\ell=\pi \ell, \quad q_s= v^s/s!. $$ The proof continues by applying Theorem \ref{il} and Proposition \ref{wojf}, similarly to the proof of Proposition \ref{pil}. 
\end{proof} Define \begin{align} \label{rhk} \tilde \rho_r(v) & := -\sum_{m=0}^{2r+1} \frac{v^{2r+1-m}}{(2r+1-m)!}\sum_{k=0}^{m} \frac{(2r+2k)!!}{(-1)^k k!} \dm_{m,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right), \\ \tilde \g_r(v) & := \sum_{m=0}^{2r} \frac{v^{2r-m}}{(2r-m)!}\sum_{k=0}^{m} \frac{(2r+2k-1)!!}{(-1)^k k!} \dm_{m,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right). \label{gmk} \end{align} \begin{prop} \label{v0} The asymptotic expansion coefficients $\rho_r(v)$ of $\theta_n(v)$ in \e{abb}, and $\g_r(v)$ of $\G(n+v+1)$ in \e{gnv}, satisfy $\rho_0(v)=\tilde\rho_0(v)$, $\g_0(v) = \tilde \g_0(v)$ and \begin{equation*} \rho_r(v) = \tilde \rho_r(v) + v \cdot \tilde \rho_{r-1}(v), \qquad \g_r(v) = \tilde \g_r(v) + v \cdot \tilde \g_{r-1}(v) \qquad (r\gqs 1). \end{equation*} \end{prop} \begin{proof} Start with \e{bx} and use Lemma \ref{par}, Proposition \ref{pilcc} to show that \begin{align*} \theta_n(v) & = \frac{n+v}2 \left(\sum_{s=0}^{S-1} \G\left( \frac{s+1}2\right) \frac{ \tilde\beta_s(v)}{n^{(s+1)/2}}((-1)^s -1) + O\left(\frac 1{n^{(S+1)/2}} \right)\right)\\ & = -\left(1+\frac{v}n\right) \left(\sum_{r=0}^{R-1} \G\left( r+1\right) \frac{ \tilde\beta_{2r+1}(v)}{n^{r}} + O\left(\frac 1{n^{R}} \right)\right). \end{align*} A calculation finds $-\G\left( r+1\right) \tilde\beta_{2r+1}(v) = \tilde \rho_r(v)$ and we obtain the desired relations for $\rho_r(v)$. The results for $\g_r(v)$ are shown similarly, starting with \e{bx2}. \end{proof} Set \begin{equation} \label{urt} \tilde U_r(w;v) := -\sum_{m=0}^{r} \frac{v^{r-m}}{(r-m)!}\sum_{k=0}^{m} \frac{(-w)^k}{(w-1)^{r+k+1}} \frac{(r+k)!}{ k!} \dm_{m,k}\left( \frac{1}{2!}, \frac{1}{3!}, \frac{1}{4!}, \dots \right). \end{equation} \begin{prop} \label{urt2} We have $U_0(w;v) = \tilde U_0(w;v)$ and \begin{equation*} U_r(w;v) = \tilde U_r(w;v) + v \cdot \tilde U_{r-1}(w;v) \qquad (r\gqs 1).
\end{equation*} \end{prop} \begin{proof} We will use \e{par2} to give an asymptotic expansion of $S_n(w;v)$ for $\Re(w)\lqs 1$ and $w\neq 1$. The function $\tilde p(z)=w(1-e^z)+ z$ is entire. Let $z_0=0$ with $ \tilde p(0)=0$ and we have the Taylor expansion: \begin{equation*} \tilde p(z)- \tilde p(0)=-(w-1)z - w\sum_{j=2}^\infty \frac{z^{j}}{j!}. \end{equation*} In the notation of Assumptions \ref{ma0} and \e{pqe}, $p(z)= \tilde p(z)$, $q(z)=e^{v z}$ and $$ p_0=w-1, \quad \mu=1, \quad p_s=w/(s+1)! \ \text{ for } \ s\gqs 1, \quad q_s= v^s/s!. $$ The proof continues by applying Theorem \ref{il} and Proposition \ref{wojf}, and comparing the resulting series with Proposition \ref{abc}. \end{proof} \section{Further formulas when $v=0$} In this section we set $v=0$ and then omit $v$ from the notation. The formulas \e{rhrv}, and \e{rhk} with Proposition \ref{v0}, simplify to give Ramanujan's coefficients in \e{rb2} as \begin{align} \label{rfb} \rho_r = \delta_{r,0} +{} & \sum_{k=0}^{2r+1} \frac{(2r+2k)!!}{(-1)^k k!} \dm_{2r+1,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right) \\ = - & \sum_{k=0}^{2r+1} \frac{(2r+2k)!!}{(-1)^k k!} \dm_{2r+1,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right). \label{rfb2} \end{align} The similar expressions for $\g_r$ in Stirling's approximation \e{gmx} have already been noted in \e{gmy}, \e{gmy2}. Also with \e{wb}, \e{urt} and Proposition \ref{urt2}, Buckholtz's functions from \cite{Bu63} can be written as \begin{align} U_r(w) = \delta_{r,0} +{} &\sum_{k=0}^{r} \frac{ w}{(1-w)^{r+k+1}} \cdot \frac{(r+k)!}{(-1)^{k} k!} \cdot \dm_{r,k}\left( \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots \right) \label{uj}\\ = & \sum_{k=0}^{r} \frac{w^k}{(1-w)^{r+k+1}} \cdot \frac{(r+k)!}{(-1)^{r} k!} \cdot \dm_{r,k}\left( \frac{1}{2!}, \frac{1}{3!}, \frac{1}{4!}, \dots \right). \label{uj2} \end{align} The De Moivre polynomials above can be expressed in terms of the more familiar Stirling numbers. 
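The agreement of \e{rfb} and \e{rfb2} can also be confirmed numerically, directly from Definition \ref{dbf}, using exact rational arithmetic. The following Python sketch is purely illustrative (it is not part of the proofs, and all function names are ours rather than from any library):

```python
# Illustrative check that \e{rfb} and \e{rfb2} give the same rho_r,
# computing De Moivre polynomials directly from Definition 2.1.
from fractions import Fraction
from math import factorial

def dm(n, k, a):
    """De Moivre polynomial: coefficient of x^n in (a(1)x + a(2)x^2 + ...)^k."""
    poly = [Fraction(0)] + [Fraction(a(j)) for j in range(1, n + 1)]
    res = [Fraction(1)] + [Fraction(0)] * n
    for _ in range(k):
        new = [Fraction(0)] * (n + 1)
        for i, c in enumerate(res):
            for j in range(1, n + 1 - i):
                new[i + j] += c * poly[j]
        res = new
    return res[n]

def double_fact(m):              # m!! with 0!! = 1
    return 1 if m <= 0 else m * double_fact(m - 2)

def rho(r, use_factorials):
    # a_j = 1/(j+2)! in \e{rfb2}, a_j = 1/(j+2) in \e{rfb}
    a = (lambda j: Fraction(1, factorial(j + 2))) if use_factorials \
        else (lambda j: Fraction(1, j + 2))
    s = sum(Fraction(double_fact(2 * r + 2 * k), (-1) ** k * factorial(k))
            * dm(2 * r + 1, k, a) for k in range(2 * r + 2))
    return -s if use_factorials else Fraction(int(r == 0)) + s

for r in range(3):
    assert rho(r, False) == rho(r, True)
print(rho(0, False), rho(1, False))  # prints: 1/3 4/135
```

The printed values agree with the leading coefficients $\rho_0=1/3$ and $\rho_1=4/135$ of Ramanujan's expansion.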
Recall that the Stirling subset numbers $\stirb{n}{k}$ count the number of ways to partition $n$ elements into $k$ nonempty subsets. The Stirling cycle numbers $\stira{n}{k}$ count the number of ways to arrange $n$ elements into $k$ cycles. Their properties are developed in \cite[Sect. 6.1]{Knu}. Also described in \cite[Sect. 6.2]{Knu} and \cite{GS78} are the second-order Eulerian numbers $\eud{n}{k}$ and the relations \begin{align}\label{eu2a} \stira{r+j}{j} & = \sum_{k=0}^r \eu{r}{k} \binom{r+j+k}{2r}, \\ \stirb{r+j}{j} & = \sum_{k=0}^r \eu{r}{k} \binom{2r+j-1-k}{2r}. \label{eu2b} \end{align} As shown in \cite[Sect. 2]{odm}, for example, \begin{equation} \label{ank} \dm_{n,k}\left( \frac{1}{1}, \frac{1}{2}, \frac{1}{3}, \dots \right) = \frac{k!}{n!}\stira{n}{k}, \qquad \dm_{n,k}\left( \frac{1}{1!}, \frac{1}{2!}, \frac{1}{3!}, \dots \right) = \frac{k!}{n!}\stirb{n}{k}. \end{equation} Removing the first coefficient $a_1$ in $\dm_{n,k}(a_1,a_2, \dots)$ is easily achieved by the binomial theorem: \begin{equation}\label{add} \dm_{n,k}(a_2,a_3, \dots) = \sum_{j=0}^k (-a_1)^{k-j} \binom{k}{j} \dm_{n+j,j}(a_1,a_2, \dots). \end{equation} Then it follows from \e{ank} and \e{add} that \begin{align} \dm_{r,k}\left( \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots \right) & =\frac{k!}{(r+k)!} \sum_{j=0}^k (-1)^{k-j} \binom{r+k}{r+j} \stira{r+j}{j},\label{jfa}\\ \dm_{r,k}\left( \frac{1}{2!}, \frac{1}{3!}, \frac{1}{4!}, \dots \right) & =\frac{k!}{(r+k)!} \sum_{j=0}^k (-1)^{k-j} \binom{r+k}{r+j} \stirb{r+j}{j}.\label{jfb} \end{align} \begin{prop} \label{rop} For $w\in \C$, \begin{alignat}{2} U_r(w) & = \delta_{r,0} +\frac{(-1)^r w}{(1-w)^{2r+1}} \sum_{j=0}^r \eu{r}{j} w^{j} \qquad & &(w\neq 1), \label{kn}\\ & = \delta_{r,0} +(-1)^r\sum_{j=1}^\infty \stirb{r+j}{j} w^j\qquad & &(|w|<1). \label{car} \end{alignat} \end{prop} \begin{proof} Inserting \e{eu2a} and \e{eu2b} into \e{jfa} and \e{jfb}, respectively, and simplifying the binomial sum as in \cite[Eq. 
(5.24)]{Knu} reveals that \begin{align} \dm_{r,k}\left( \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots \right) & =\frac{k!}{(r+k)!} \sum_{j=0}^r \eu{r}{j} \binom{j}{r-k},\label{yta}\\ \dm_{r,k}\left( \frac{1}{2!}, \frac{1}{3!}, \frac{1}{4!}, \dots \right) & =\frac{k!}{(r+k)!} \sum_{j=0}^r \eu{r}{j} \binom{r-1-j}{r-k}.\label{ytb} \end{align} Then substituting \e{yta} into \e{uj}, (or \e{ytb} into \e{uj2}), interchanging summations and simplifying yields \e{kn}. Expanding $(1-w)^{-2r-1}$ in \e{kn} with the binomial theorem and using \e{eu2b} then shows \e{car}. \end{proof} The identity \e{kn} is due to Knuth and described in \cite[p. 506]{Knuprog} where the functions $$Q_w(n):=T_n(1/w;0)/w, \qquad R_w(n):=S_n(w;0) $$ are studied. Also \e{car} is due to Carlitz in \cite{Ca65}. Proposition \ref{rop} gives new proofs of these identities. Comparing coefficients of $w$ in \e{uj}, \e{uj2} and \e{kn} finds \begin{alignat}{2}\label{uk} \eu{r}{j} & = \sum_{k=0}^{r} (-1)^{r+j+k} \frac{(r+k)!}{k!}\binom{r-k}{j} \dm_{r,k}\left( \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \dots \right) \qquad &(r &\gqs 0),\\ & = \sum_{k=0}^{r} (-1)^{j+k+1} \frac{(r+k)!}{k!}\binom{r-k}{j+1-k} \dm_{r,k}\left( \frac{1}{2!}, \frac{1}{3!}, \frac{1}{4!}, \dots \right) \qquad &(r & \gqs 1). \label{uk2} \end{alignat} Combining \e{add} with \e{jfa}, \e{jfb}, \e{yta} or \e{ytb} also gives explicit formulas for $\dm_{r,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right)$ and $\dm_{r,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right)$. For example, \begin{equation}\label{wew} \dm_{r,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right) = \sum_{j_1+j_2+j_3=k} \frac{(-1)^{j_1+j_2}}{2^{j_1}} \frac{k!}{j_1! j_2! (r+j_2+2j_3)!} \stira{r+j_2+2j_3}{j_3}, \end{equation} where the sum is over all $j_1$, $j_2$, $j_3 \in \Z_{\gqs 0}$ that sum to $k$. Applying \e{add} $r$ times leads to the following result. 
\begin{prop} For $r, k \gqs 0$, $\dm_{n,k}(a_{r+1},a_{r+2}, \dots)$ equals \begin{equation}\label{fortx} \sum_{ j_1+ j_2+ \dots +j_{r+1}= k} \binom{k}{j_1 , j_2 , \dots , j_{r+1}} (-a_1)^{j_1} (-a_2)^{j_2} \cdots (-a_r)^{j_r} \dm_{n+J+r j_{r+1},j_{r+1}}(a_{1},a_{2}, \dots), \end{equation} where $J$ means $(r-1)j_1+(r-2)j_2+ \cdots +1 j_{r-1}$ and the summation is over all $j_1$, \dots , $j_{r+1} \in \Z_{\gqs 0}$ with sum $k$. \end{prop} Using this, along with the equality $ \dm_{m+k,k}\left( \frac{1}{0!}, \frac{1}{1!}, \frac{1}{2!}, \dots \right) = k^m/m! $, proves the additional formulas \begin{align} \dm_{n,k}\left( \frac{1}{2!}, \frac{1}{3!}, \frac{1}{4!}, \dots \right) \label{mvw} & =\sum_{j_1+j_2+j_3=k} \binom{k}{j_1,j_2,j_3} (-1)^{j_1+j_2} \frac{j_3^{n+j_1+j_3}}{(n+j_1+j_3)!}, \\ \dm_{n,k}\left( \frac{1}{3!}, \frac{1}{4!}, \frac{1}{5!}, \dots \right) & =\sum_{j_1+j_2+j_3+j_4=k} \binom{k}{j_1,j_2,j_3,j_4} \frac{(-1)^{j_1+j_2+j_3}}{2^{j_3}} \frac{j_4^{n+2j_1+j_2+2j_4}}{(n+2j_1+j_2+2j_4)!}. \label{mvw2} \end{align} There is also a nice combinatorial interpretation of the De Moivre polynomial values appearing in this section. Let $n$, $k$ and $r$ be integers with $n$, $k\gqs 0$, $r\gqs 1$. Write $\stirb{n}{k}_{\gqs r}$ for the number of ways to partition $n$ elements into $k$ subsets, each with at least $r$ members. Let $\stira{n}{k}_{\gqs r}$ denote the number of ways to arrange $n$ elements into $k$ cycles, each of length at least $r$. We also set $\stirb{0}{k}_{\gqs r}=\stira{0}{k}_{\gqs r}=\delta_{k,0}$. These are the so-called $r$-associated Stirling numbers, generalizing the usual $r=1$ case; see for example \cite[pp. 222, 257]{Comtet} and \cite[Sect. 2]{BM11}. We are following the Knuth-approved notation of \cite{knu95}. 
\begin{prop} Using sequences starting with $r-1$ zeros, we have \begin{align} \stirb{n}{k}_{\!\gqs r} & =\frac{n!}{k!} \dm_{n,k}\left(0,0, \dots, 0, \frac 1{r!}, \frac 1{(r+1)!}, \dots \right) \label{rso}\\ & =\frac{n!}{k!} \dm_{n-(r-1)k,k}\left(\frac 1{r!}, \frac 1{(r+1)!}, \dots \right),\label{rso2}\\ \stira{n}{k}_{\!\gqs r} & =\frac{n!}{k!} \dm_{n,k}\left(0,0, \dots, 0, \frac 1{r}, \frac 1{r+1}, \dots \right) \label{rso3}\\ & =\frac{n!}{k!} \dm_{n-(r-1)k,k}\left(\frac 1{r}, \frac 1{r+1}, \dots \right). \label{rso4} \end{align} \end{prop} \begin{proof} Express the right hand side of \e{rso} using \e{bell2}. Each nonzero summand corresponds to a partition of $n$ elements into $j_r$ subsets of size $r$, $j_{r+1}$ subsets of size $r+1$, and so on, with $k$ subsets altogether. Then \begin{equation*} \frac{n!}{(j_r! j_{r+1}! \cdots )(r!)^{j_r} ((r+1)!)^{j_{r+1}} \cdots} \end{equation*} counts the $n!$ ways to put the $n$ elements into this partition, dividing by the $j_m!$ ways to order the subsets of size $m$ and dividing by the $m!$ ways to order the elements of each subset of size $m$. This gives the desired $r$-associated Stirling subset number. The argument for the $r$-associated Stirling cycle number in \e{rso3} is the same except that there are only $m$ ways to write a particular cycle of length $m$. The formulas \e{rso2}, \e{rso4} follow by \e{gsb}. \end{proof} The De Moivre polynomial values in \e{gmy} -- \e{gmy2} and \e{rfb} -- \e{uj2} can now be replaced by $r$-associated Stirling numbers. For example, \begin{equation}\label{ass} \g_j = \sum_{k=0}^{2j} \frac{(-1)^k}{(2j+2k)!!} \stira{2j+2k}{k}_{\!\gqs 3} , \qquad \g_j = \sum_{k=0}^{2j} \frac{(-1)^k}{(2j+2k)!!} \stirb{2j+2k}{k}_{\!\gqs 3}, \end{equation} where the first formula in \e{ass} is due to Comtet \cite[p. 267]{Comtet}, and the second is due to Brassesco and M\'endez \cite[Thm. 2.4]{BM11}. 
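For example, taking $n=4$, $k=2$, $r=2$ in \e{rso2} gives \begin{equation*} \stirb{4}{2}_{\!\gqs 2} = \frac{4!}{2!}\, \dm_{2,2}\left(\frac 1{2!}, \frac 1{3!}, \dots \right) = 12\cdot \frac 14 = 3, \end{equation*} counting the three partitions $\{1,2\}\{3,4\}$, $\{1,3\}\{2,4\}$, $\{1,4\}\{2,3\}$ of a four element set into two subsets of size at least two.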
Also \begin{align}\label{ass2} \rho_j = \delta_{j,0} +{} & \sum_{k=0}^{2j+1} \frac{(-1)^k}{(2j+2k+1)!!} \stira{2j+2k+1}{k}_{\!\gqs 3},\\ = - & \sum_{k=0}^{2j+1} \frac{(-1)^k}{(2j+2k+1)!!} \stirb{2j+2k+1}{k}_{\!\gqs 3}. \label{ass3} \end{align} \section{Approximations to the exponential integral} Ramanujan's next result after \e{rb} and \e{rb2} is Entry 49, and it seems to have attracted much less attention than Entry 48. Use the relation \begin{equation}\label{ei} 1+\frac{1!}{n}+\frac{2!}{n^2}+ \cdots +\frac{(n-1)!}{n^{n-1}} + \frac{n!}{n^n}\Psi_n = n e^{-n}\ei(n), \end{equation} to define $\Psi_n$, with $\ei(n)$ given in \e{ein}. Then Ramanujan computed the first terms in the asymptotic expansion of $\Psi_n$, writing\footnote{He was considering $1+\Psi_n$ so his first term is $2/3$.} \begin{equation}\label{ei2} \Psi_n = -\frac{1}3+ \frac{4}{135 n}+\frac{8}{2835 n^2}+O\left( \frac{1}{n^3}\right). \end{equation} See Berndt's discussion \cite[p. 184]{Ber89} of this entry, and a proof of \e{ei2} based on Olver's work in \cite[pp. 523 -- 531]{Olv}. We are also interested in the generalization \begin{equation}\label{ei3} 1+\frac{1!}{n}+\frac{2!}{n^2}+ \cdots +\frac{(n+v-1)!}{n^{n+v-1}} + \frac{(n+v)!}{n^{n+v}}\Psi_n(v) = n e^{-n}\ei(n), \end{equation} and our goal is to establish the next result. \begin{theorem} \label{xte} Let $v$ be any integer. As real $n \to \infty$, \begin{equation} \label{trw} \Psi_n(v) = \psi_0+\frac{\psi_1(v)}{n}+\frac{\psi_2(v)}{n^2}+ \cdots + \frac{\psi_{R-1}(v)}{n^{R-1}} +O\left(\frac{1}{n^{R}}\right), \end{equation} for an implied constant depending only on $R$ and $v$, with $\psi_r(v)$ given explicitly in \e{xpo}. \end{theorem} From \cite[p. 529]{Olv}, $\ei(n)$ may be expressed with a contour integral whose path of integration runs along the positive reals while moving above $1$ to avoid the pole: \begin{equation} \label{ele} \ei(n) = -\pi i +\int_0^\infty \frac{e^{n(1-z)}}{1-z} \, dz. 
\end{equation} Make the replacement \begin{equation*} \frac 1{1-z} = 1+z +z^2 + \cdots + z^{n+v-1} + \frac{z^{n+v}}{1-z} \end{equation*} in \e{ele} to find \begin{equation*} \ei(n) = -\pi i +e^n \sum_{j=0}^{n+v-1} \frac{j!}{n^{j+1}}- \int_0^\infty e^{n \cdot p(z)} \frac{z^v}{z-1} \, dz, \end{equation*} for $p(z)=1-z+\log z$. Hence \begin{equation} \label{mc} \frac{e^n}{n^{n+v}} (n+v)! \Psi_n(v) = -n\pi i - n\int_0^\infty e^{n \cdot p(z)} \frac{z^v}{z-1} \, dz. \end{equation} We would like to reuse our work in section \ref{w=1} to find the asymptotics of the integral in \e{mc}. As well as having a saddle-point at $z=1$, the integrand also has a simple pole there and so Theorem \ref{il} cannot be used. Perron in \cite{Pe17} covered the case we need and we quote a version of his result in Theorem 6.3 of \cite{OSper} next (though it is slightly more general than required). Note that $R_p$ depends only on the holomorphic function $p(z)$ and $z_0$; it can be any positive number that is sufficiently small. \begin{theorem} \label{m6} {\rm (Perron's method for an integrand containing a factor $(z-z_0)^{a-1}$ for arbitrary $a \in \C$.)} Suppose Assumptions \ref{ma0} hold, though with the following change to the contour $\cc$. Starting at $z_1$ it runs to the point $z'_1$ which is a distance $R_p$ from $z_0$ and on the bisecting line with angle $\theta_{k_1}$. Then the contour circles $z_0$ to arrive at the point $z'_2$ which is a distance $R_p$ from $z_0$ and on the bisecting line with angle $\theta_{k_2}$. Finally, the contour ends at $z_2$. The integers $k_1$ and $k_2$ keep track of how $\cc$ rotates about $z_0$ between $z'_1$ and $z'_2$; the angle of rotation is $2\pi(k_2-k_1)/\mu$. Suppose that $ \Re(p(z))<\Re(p(z_0)) $ for all $z$ in the segments of $\cc$ between $z_1$ and $z'_1$ and between $z'_2$ and $z_2$ (including endpoints). Let $a \in \C$. 
For $z\in \cc$, the branch of $(z-z_0)^{a-1}$ is specified by requiring \begin{equation} \label{thhxm6a} (z'_1-z_0)^{a-1} = |z'_1-z_0|^{a-1}\cdot e^{i\theta_{k_1}(a-1)} \end{equation} when $z=z'_1$ and by continuity at the other points of $\cc$. Then for any $S \in \Z_{\gqs 0}$, \begin{multline} \label{wimm5} \int_\cc e^{n \cdot p(z)} (z-z_0)^{a-1} q(z) \, dz \\ = e^{n \cdot p(z_0)} \left(\sum_{s=0}^{S-1} \G\left(\frac{s+a}{\mu}\right) \frac{\alpha_s \left( e^{2\pi i {k_2}(s+a)/\mu}- e^{2\pi i {k_1}(s+a)/\mu}\right)}{n^{(s+a)/\mu}} + O\left(\frac{K_q}{n^{(S+\Re(a))/\mu}} \right) \right) \end{multline} where the implied constant in \eqref{wimm5} is independent of $n$ and $q$. The numbers $\alpha_s$ are given by \eqref{hjw}, depending on $a$ now. If $(s+a)/\mu \in \Z_{\lqs 0}$ then \begin{equation*} \G((s+a)/\mu) \left( e^{2\pi i {k_2}(s+a)/\mu}- e^{2\pi i {k_1}(s+a)/\mu}\right) \end{equation*} in \eqref{wimm5} is not defined and must be replaced by $2\pi i (k_2-k_1)(-1)^{(s+a)/\mu}/|(s+a)/\mu|!$. \end{theorem} We may apply Theorem \ref{m6} to the integral in \e{mc} taking $z_1=1/2$, $z_0=1$ and $z_2=3/2$, since the remaining parts are exponentially small by the work in Lemma \ref{xm}. Then use \e{mu2}, $a=0$, $k_1=1$ and $k_2=0$, to obtain \begin{equation} \label{mc2} \int_0^\infty e^{n p(z)} \frac{z^v}{z-1} \, dz = -\pi i + \sum_{s=1}^{S-1} \G\left(\frac{s}{2}\right) \frac{\alpha_s \cdot (1- (-1)^{s})}{n^{s/2}} + O\left(\frac{1}{n^{S/2}} \right). \end{equation} Using \e{mc2} in \e{mc} and simplifying $\alpha_s$ in \e{hjw} shows the next result. 
\begin{prop} \label{xte2} As $n \to \infty$, \begin{equation}\label{abei} \frac{e^n}{n^{n+v}} \G(n+v+1) \Psi_n(v) = \sqrt{2\pi n}\left(\tau_0(v)+\frac{\tau_1(v)}{n}+\frac{\tau_2(v)}{n^2}+ \cdots + \frac{\tau_{R-1}(v)}{n^{R-1}}+ O\left( \frac{1}{n^R}\right)\right), \end{equation} for an implied constant depending only on $R\in \Z_{\gqs 1}$ and $v\in \Z$, with \begin{equation}\label{tav} \tau_r(v) := \sum_{m=0}^{2r+1} (-1)^{m+1}\binom{v}{2r+1-m}\sum_{k=0}^{m} \frac{(2r+2k-1)!!}{(-1)^k k!} \dm_{m,k}\left( \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \dots \right). \end{equation} \end{prop} \begin{proof}[Proof of Theorem \ref{xte}] Combining Propositions \ref{xpg} and \ref{xte2} produces \begin{equation*} \Psi_n(v) = \left(\sum_{r=0}^{R-1}\frac{\tau_r(v)}{n^r}+ O\left( \frac{1}{n^R}\right)\right)\Big/\left(1+\sum_{r=1}^{R-1}\frac{\g_r(v)}{n^r}+ O\left( \frac{1}{n^R}\right)\right). \end{equation*} Then \e{trw} follows and $\psi_r(v)$ may be expressed in terms of the $\tau_r(v)$ and $\g_r(v)$ coefficients. Using \cite[Prop. 3.2]{odm}, for example, to find the multiplicative inverse of the series involving $\g_r(v)$ yields \begin{equation}\label{xpo} \psi_r(v) = \sum_{m=0}^r \tau_{r-m}(v) \sum_{k=0}^m (-1)^k \dm_{m,k}\left(\g_1(v), \g_2(v), \dots \right). \end{equation} \end{proof} A computation now finds for example, with $v$ any fixed integer as $n \to \infty$, \begin{multline} \label{pex} \Psi_n(v) = -\frac{1}3-v+ \left(\frac{4}{135}+\frac{v(v+1)^2}{3}\right)\frac 1{n}\\ + \left(\frac{8}{2835}-\frac{v(9v^4+45v^3+75v^2+47v+8)}{135}\right)\frac{1}{n^2}+O\left( \frac{1}{n^3}\right). \end{multline} The expansion \e{hrb} of $\theta_n(v)$ looks similar to \e{pex} and, in particular, their constant terms $\psi_r=\psi_r(0)$ and $\rho_r$ seem to agree up to an alternating sign. \begin{conj} \label{ky} For all $r \gqs 0$ we have $\psi_r = (-1)^{r+1}\rho_r$. \end{conj} We have confirmed this relation for $r \lqs 100$ and hope to pursue it in a followup work. 
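For instance, at $r=0$ the inner sum in \e{xpo} is just $\dm_{0,0}=1$, so $\psi_0=\tau_0(0)$, and in \e{tav} with $v=0$ only the $m=1$, $k=1$ term is nonzero, giving \begin{equation*} \psi_0 = \tau_0(0) = \frac{1!!}{(-1)^1 1!}\, \dm_{1,1}\left( \frac{1}{3}, \frac{1}{4}, \dots \right) = -\frac 13, \end{equation*} matching the constant term of \e{pex} and, in accordance with Conjecture \ref{ky}, equal to $-\rho_0 = -\frac 13$.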
{\small \bibliography{ram-bib} } {\small \vskip 5mm \noindent \textsc{Dept. of Math, The CUNY Graduate Center, 365 Fifth Avenue, New York, NY 10016-4309, U.S.A.} \noindent {\em E-mail address:} \texttt{cosullivan@gc.cuny.edu} } \end{document}
\begin{document} \title[Combinatorial Kirillov-Reshetikhin conjecture]{Proof of the combinatorial Kirillov-Reshetikhin conjecture} \author{P. Di Francesco and R. Kedem} \address{PDF: Service de Physique Th\'eorique de Saclay, CEA/DSM/SPhT, URA 2306 du CNRS, F-91191 Gif sur Yvette Cedex.} \address{RK: Department of Mathematics, University of Illinois, 1409 W Green Street, Urbana IL 61801 USA.} \begin{abstract} In this paper we give a direct proof of the equality of certain generating functions associated with tensor product multiplicities of Kirillov-Reshetikhin modules of the untwisted Yangian for each simple Lie algebra $\g$. Together with the theorems of Nakajima and Hernandez, this gives the proof of the combinatorial version of the Kirillov-Reshetikhin conjecture, which gives tensor product multiplicities in terms of restricted fermionic summations. \end{abstract} \maketitle \section{Introduction} This paper aims to resolve three related conjectures. As explained below, the proof of one implies the proof of the other two. Here, we introduce a general method which allows us to prove the ``$M=N$ conjecture'' of \cite{HKOTY}. A brief sketch of the necessary conjectures and theorems follows. The Kirillov-Reshetikhin conjecture about completeness of Bethe states in the generalized inhomogeneous Heisenberg spin-chain is a combinatorial formula (``the $M$-sum'' \eqref{M}) for the number of solutions of the Bethe equations. The formula has a fermionic form, in that it is a sum over products of non-negative binomial coefficients. We call this the combinatorial Kirillov-Reshetikhin conjecture, Conjecture \ref{KRconjecture}. This is closely related to another conjecture, now a theorem (Theorem \ref{NakHer}), also sometimes referred to in the literature as the Kirillov-Reshetikhin conjecture, about the characters of special finite-dimensional quantum affine algebra modules. 
This second version was recently proved by Nakajima \cite{Nakajima} for simply-laced Lie algebras, and more generally by Hernandez \cite{Hernandez}. It is the statement that characters of Kirillov-Reshetikhin modules are solutions of a recursion relation called the $Q$-system \cite{KR,HKOTY}. This has been proven to hold for all simple Lie algebras, and also in more general settings \cite{He07}. Hatayama et al. \cite{HKOTY} proved that this theorem implies a certain explicit alternating sum formula in terms of binomial coefficients (``the $N$-sum'' \eqref{N}) for the tensor product multiplicities. This formula is closely related to, but manifestly different from, the $M$-sum in the combinatorial Kirillov-Reshetikhin conjecture, which involves a restricted, non-alternating sum. The second conjecture, then, is the one advanced in \cite{HKOTY}, that the two sums are equal. We call this the ``$M=N$ conjecture'' (the precise statement is Conjecture \ref{HKOTY}). There is an indirect argument which shows that this is true in special cases, because the $M$-sum formula was proven by combinatorial means in those cases in \cite{KKR,KR,KS,KSS,OkaSchShi,Schilling}. In this paper we prove this conjecture directly, thereby proving the combinatorial Kirillov-Reshetikhin conjecture. The third conjecture is the Feigin-Loktev conjecture. It was shown in \cite{AK} that the combinatorial KR-conjecture implies the Feigin-Loktev conjecture \cite{FL} for the fusion product of arbitrary KR-modules of $\g[t]$, as defined by \cite{Chari,ChariMoura}. This conjecture states that the graded tensor product (the unrestricted Feigin-Loktev fusion product) is independent of the localization parameters, and that its graded dimension is given by the generalized Kostka polynomials \cite{SS} or fermionic sums of \cite{HKOTY}. 
In this paper, we prove the $M=N$ conjecture in the untwisted case for any simple Lie algebra, by considering suitable generating functions, a standard technique in enumerative combinatorics. These functions are constructed so as to enjoy particularly nice factorization properties, in terms of solutions of the so-called $Q$-systems or certain deformations thereof. Analogous generating functions, involving fewer parameters, were also used in \cite{HKOTY} in this context, but for a different purpose. The proof of Conjecture \ref{HKOTY} completes the proof of the combinatorial Kirillov-Reshetikhin conjecture for representations of $Y(\g)$ for all simple Lie algebras $\g$ (we do not consider twisted cases in this paper). In addition, in the cases where Chari's KR-modules for $\g[t]$ are known to have the same dimension as their Yangian version (the classical algebras and some exceptional cases), this also completes the proof of the Feigin-Loktev conjecture for the unrestricted fusion products of arbitrary Kirillov-Reshetikhin modules for any simple Lie algebra. In the course of our proof, we define a new family of functions, generalizing the characters $Q$ of the KR-modules, which appear to have very useful properties. First, we define a deformation of the $Q$-system, in terms of functions in an increasing number of variables. We show that these functions can be defined alternatively in terms of a substitution recursion. In terms of the deformed $Q$-functions, there is a complete factorization of generating functions for fermionic sums of the KR-type. The paper is organized as follows. In Section 2, we recall the definitions, conjectures and theorems which we use in this paper. In Sections 3, 4, and 5, we give the proof of the $M=N$ conjecture for $\g=\sl_2$, $\g$ simply-laced and $\g$ non simply-laced respectively. In each case, we define a deformed $Q$-system, which we refer to as the $\cQ$-system. 
We then define generating functions of fermionic sums which have a factorized form in terms of the $\cQ$-functions. This allows us to prove an equality of restricted generating functions, and the constant term of this identity is the conjectured $M=N$ identity of \cite{HKOTY}. \section{$Q$-systems, the Kirillov-Reshetikhin conjecture and the Feigin-Loktev conjecture} \subsection{Definitions} Let $\g$ be a simple Lie algebra with simple roots $\alpha_i$ with $i\in I_r = \{1,...,r\}$ and Cartan matrix $C$ with entries $C_{i,j} = \frac{2 (\alpha_i,\alpha_j)}{( \alpha_i,\alpha_i)}$. The algebra $\g$ has Cartan decomposition $\g=\n_- \oplus \h \oplus \n_+$, and we denote the generator of $\n_-$ corresponding to the simple root $\al_i$ by $f_i$, etc. The irreducible integrable highest weight modules of $\g$ are denoted by $V(\lambda)$, where $\lambda\in P^+$ and $P^+$ is the set of dominant integral weights. We denote the fundamental weights of $\g$ by $\omega_i$ ($i\in I_r$). The algebra $\g[t] = \g\otimes \C[t]$ is the Lie algebra of polynomials in $t$ with coefficients in $\g$. The generators of $\g[t]$ are denoted by $x[n]:=x\otimes t^n$ where $x\in \g$ and $n\in \Z_+$. The relations in the algebra are $$ [x\otimes f(t),y\otimes h(t)]_{\g[t]} = [x,y]_\g f(t) h(t), \qquad x,y\in \g,\ f(t),h(t)\in \C[t], $$ where $[x,y]_\g$ is the usual Lie bracket in $\g$. We regard $\g$ as the subalgebra of constant currents in $\g[t]$. Thus, any $\g[t]$-module is also a $\g$-module by restriction. \subsubsection{Localization} Any $\g$-module $V$ can be extended to a $\g[t]$-module by the evaluation homomorphism. That is, given a complex number $\zeta$, the module $V(\zeta)$ is the $\g[t]$-module defined by $$ x[n] v = \zeta^n x v,\quad v\in V(\zeta). $$ If $V$ is finite-dimensional, so is $V(\zeta)$, with the same dimension. If $V$ is irreducible as a $\g$-module, so is $V(\zeta)$. 
More generally, given a $\g[t]$-module $V$, the $\g[t]$-module localized at $\zeta$, $V(\zeta)$, is the module on which $\g[t]$ acts by expansion in the local parameter $t_\zeta := t-\zeta$. If $v\in V(\zeta)$, then $$x[n] v = x\otimes (t_\zeta + \zeta)^n v = \sum_j {n \choose j} \zeta^j x[n-j]_\zeta v,$$ where $x[n]_\zeta := x\otimes t_\zeta^n$ and $x[n]_\zeta$ acts on $v\in V(\zeta)$ in the same way that $x[n]$ acts on $v\in V$. An evaluation module $V(\zeta)$ is a special case of a localized module, on which the positive modes $x[n]_\zeta$ with $n>0$ and $x\in\g$ act trivially. \subsubsection{Grading}\label{gradingsection} Let $V$ be any cyclic $\g[t]$-module. That is, there exists a vector $v\in V$ such that $$ V = U(\g[t]) v. $$ Any such module can be endowed with a $\g$-equivariant grading (which depends on the choice of $v$ in case it is non-unique) as follows. The algebra $U=U(\g[t])$ is graded by degree in $t$. That is, the graded component $U^{(j)}$ is the span of monomials of the form $$ x_1[n_1]\cdots x_m[n_m]:\quad \sum_{i=1}^m n_i = j, $$ where $x_i\in\g$ and $m\in \Z_+$. The module $V$ does not necessarily inherit this grading since the action of $\g[t]$ is not assumed to respect this grading. This is true, in general, for the localized modules above if $\zeta\neq 0$. However, $V$ does inherit a filtration, which depends on the choice of $v$. Let $U^{(\leq i)}$ be the vector space generated by monomials with degree less than or equal to $i$ in $U$. Define $\cF{(i)} = U^{(\leq i)} v$. We have $\cF{(i)}\subset \cF{(i+1)}$, where $\cF{(0)}$ is the $\g$-module generated by $v$. In the case that $V$ is finite-dimensional, this gives a finite filtration of $V$, $$ \cF{(0)}\subset \cdots \subset \cF{(N)} = V. $$ The associated graded space (here $\cF{(-1)}=\{0\}$), $$ {\rm Gr}\ V = \underset{i\geq 0}{\oplus} \cF{(i)}/\cF{(i-1)} $$ has graded components $V[i] = \cF{(i)}/\cF{(i-1)}$ which are $\g$-modules, since the filtration is $\g$-equivariant. 
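For example, if $V(\zeta)$ is an evaluation module with $\zeta\neq 0$, then each $x[n]$ acts as $\zeta^n x[0]$, so the action of $\g[t]$ visibly fails to preserve the degree in $t$. In this case $\cF{(0)} = U(\g) v$ already exhausts the cyclic module, and the associated graded space is concentrated in degree zero.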
\subsection{Kirillov-Reshetikhin modules} The term Kirillov-Reshetikhin module properly refers to certain finite-dimensional Yangian modules \cite{KR} or quantum affine algebra modules. Chari's Kirillov-Reshetikhin modules \cite{Chari,ChariMoura} are $\g[t]$-modules which are classical limits of the quantum group modules. Whereas $Y(\g)$ and $U_q(\widehat{\g})$-modules are defined in terms of their Drinfeld polynomial, Chari's KR-modules for $\g[t]$ are defined in terms of generators and relations. We refer to Chari's modules as KR-modules in this paper. As $\g$-modules, they are known to have the same structure as the Yangian KR-modules in the case of the classical Lie algebras, and in certain exceptional cases. These modules arise naturally when one considers the explicit description of the dual space of functions \cite{AK} to the Feigin-Loktev fusion product \cite{FL}. KR-modules are parametrized by a complex number $\zeta\in \C^*$ (the localization parameter) and a highest weight of the special form $m \omega_\al\ (\al\in I_r)$, with $m\in \Z_+$ and $\omega_\al$ a fundamental weight. We denote such a module by $\KR_{\al,m}(\zeta)$. The definition given here is the one used in \cite{AK}. \begin{defn} Let $\zeta\in \C^*$ and let $m\in \Z_+$, $\al\in I_r$. The {\rm KR}-module $\KR_{\al,m}(\zeta)$ is the module generated by the action of $U(\g[t])$ on a cyclic vector $v\in \KR_{\al,m}(\zeta)$, subject to the relations (recall that $x[n]_\zeta = x\otimes (t-\zeta)^n$): \begin{eqnarray*} x[n]_\zeta v &=& 0 \quad \hbox{if $x\in \n_+$ and $n\geq 0$};\\ h_\beta[n]_\zeta v &=& \delta_{n,0} \delta_{\al,\beta} m v;\\ f_\beta[n]_\zeta v &=& 0 \quad \hbox{if $n\geq \delta_{\al,\beta}$};\\ f_\al[0]_\zeta^{m+1} v &=& 0. \end{eqnarray*} The associated graded space of this module is a graded $\g[t]$-module $\overline{\KR}_{\al,m}$. Its graded components are $\g$-modules. 
\end{defn} For example, in the case of $\g=A_r$, $\KR_{\al,m}(\zeta) = V_{m\omega_\al}(\zeta)$, the evaluation module of $\sl_{r+1}[t]$ corresponding to the irreducible $\g$-module $V(m \omega_\al)$ at the point $\zeta$. For other Lie algebras, $\KR_{\al,m}(\zeta)$ may not be irreducible as a $\g$-module. However, the decomposition of $\KR_{\al,m}(\zeta)$ into irreducible $\g$-modules always has a unitriangular form (in the partial ordering of weights). That is, $$ \KR_{\al,m}(\zeta) \underset{\g-{\rm mod}}{\simeq} V(m\omega_\al)\oplus \left(\underset{\mu<m\omega_\al}{\oplus} V(\mu)^{\oplus m_{\mu}}\right). $$ Thus, $m\omega_\al$ is the highest $\g$-weight of $\KR_{\al,m}(\zeta)$. The decomposition of tensor products of KR-modules into irreducible $\g$-modules is the subject of the Kirillov-Reshetikhin conjecture. \subsection{The Kirillov-Reshetikhin conjecture} The KR-conjecture was originally a conjecture about the completeness of Bethe ansatz states for the generalized, inhomogeneous Heisenberg spin chain. This is a spin-chain model with inhomogeneity parameters $\zeta_i$ at each lattice site $i$, and with a representation $V_i(\zeta_i)$ of the Yangian $Y(\g)$ (or, equivalently, of $U_q(\widehat{\g})$ if the generalized XXZ-model is considered) at each lattice site. The modules $V_i(\zeta_i)$ are each assumed to be of Kirillov-Reshetikhin type. If the Bethe ansatz gives a complete set of solutions, then the Bethe states should be in one-to-one correspondence with $\g$-highest weight vectors in the Hilbert space of the Hamiltonian or transfer matrix. The Hilbert space is simply the tensor product of the modules $V_i(\zeta_i)$. The Bethe states are parametrized by solutions of certain coupled algebraic equations, known as the Bethe equations. 
It is hypothesized that the complex parts of the solutions of the Bethe ansatz equations have a certain form, the so-called ``string hypothesis,'' and, more importantly in this context, that the solutions are parametrized by the Bethe integers. \begin{remark} It is known that the string hypothesis is not, in fact, correct in general. However, solutions to the Bethe equations can still be shown to be parametrized by the Bethe integers in certain cases. The correctness of the string hypothesis is not relevant for the current paper. It served only as the inspiration for the original Kirillov-Reshetikhin conjecture. \end{remark} The Kirillov-Reshetikhin conjecture is that the Bethe integers parametrize solutions of the Bethe equations. It can be formulated in completely combinatorial terms as follows. Let $\bn = \{n_{\al,i} |\ \al\in I_r, i\in \N\}$ be a collection of non-negative integers whose sum is finite. These parametrize a set of $N=\sum_{i,\al} n_{\al,i}$ KR-modules, with $n_{\al,i}$ modules with highest $\g$-weight $i\omega_\al$, and hence they parametrize the Hilbert space. For each dominant integral weight $$\lambda= \sum_{\al\in I_r} l_\al \omega_\al\in P^+ $$ choose a set of non-negative integers $\bm=\{m_{\al,i}\}$ with $\al\in I_r$ and $i\in\N$, such that the total spin \begin{equation}\label{spin} q_\al = l_\al + \sum_{i,\beta} i C_{\al,\beta} m_{\beta,i} - \sum_i i n_{\al,i}, \quad (\al\in I_r) \end{equation} is zero, $q_\al=0$. Define the ``vacancy numbers'' which depend on the sets of integers $\{l_\al\}, \bm, \bn$ and on the Cartan matrix: \begin{equation}\label{pvacancy} p_{\al,i} = \sum_{j\geq 1} n_{\al,j} \min(i,j) - \sum_{\beta\in I_r}{\rm sgn}(C_{\al,\beta}) \sum_{j\geq 1} \min(|C_{\al,\beta}| j, |C_{\beta,\al}|i) m_{\beta,j}, \quad(\al\in I_r, i\in \N). \end{equation} The Bethe integers are any set of $m_{\al,i}$ distinct integers chosen from the interval $[0,p_{\al,i}]$ for each $\al$ and $i$. 
Therefore, $p_{\al,i}<0$ does not correspond to any Bethe states. The number of distinct sets of Bethe integers is the fermionic multiplicity formula, called the $M$-sum: \begin{equation}\label{M} M_{\lambda;\bn} = \sum_{\underset{q_\al=0,p_{\al,i}\geq 0}{m_{\al,i}\geq 0}} \prod_{\al,i} {m_{\al,i}+p_{\al,i}\choose m_{\al,i}}. \end{equation} Here, the summation is over all non-negative integers $\{m_{\al,i}\}$. For fixed $\bn$ and $\lambda$ there is some integer $p$ such that all $m_{\al,i}$ with $i>p$ are constrained to be zero (due to the constraint $q_\al=0$), so that there is only a finite number of summation variables. \begin{remark} One can attach an ``energy'' to each Bethe integer which is proportional to the integer itself. In this way, one obtains a graded multiplicity formula $M_{\lambda;\bn}(q)$ which is a polynomial keeping track of the energy grading parameter. This grading appears also in the fusion product, described below. Although it is of interest in discussing the fusion product, for the proof of the identities in this paper, it is not necessary to keep track of this grading. \end{remark} Note that the binomial coefficients are defined for both positive and negative values of $p_{\al,i}$: $$ {m+p\choose m} = \frac{(p+m)(p+m-1)\cdots (p+1)}{m!}. $$ If $p<0$ and $m<-p$, the binomial coefficient is non-zero, with sign $(-1)^m$. In general, the summation over the variables $m_{\al,i}$ might include both negative and positive terms. In the $M$-sum, terms with $p_{\al,i}<0$ are excluded. \begin{conj}[The combinatorial Kirillov-Reshetikhin conjecture \cite{KR,HKOTY}]\label{KRconjecture} \begin{equation} {\rm dim}\ {\rm Hom}_\g\ \left( \underset{\al,i}{\otimes} \KR_{\al,i}^{\otimes n_{\al,i}} ,\ V(\lambda)\right) = M_{\lambda,\bn}. \end{equation} \end{conj} This conjecture has been proven for the following special cases of $\g$ and $\bn$: \begin{itemize} \item For $\g=A_r$, the conjecture was proven by \cite{KKR,KR} and \cite{KSS} for arbitrary $\bn$. 
\item For $\g=D_r$, the conjecture was proven in \cite{Schilling} for $\bn$ such that $n_{\al,1}\geq 0$ and $n_{\al,j}=0$ for all $j>1$. \item For $\g$ any non-exceptional simple Lie algebra and $\bn$ such that $n_{1,i}\geq 0$ and $n_{\al,j}=0$ for all $\al>1$ \cite{OkaSchShi,SS}. \end{itemize} The proof in each of these cases involves a bijection between combinatorial objects known as ``rigged configurations'' and crystal paths. \subsection{$Q$-systems}\label{q-system} Kirillov and Reshetikhin \cite{KR} also introduced another, closely related conjecture, recently proven for all simple Lie algebras $\g$ \cite{Nakajima,Hernandez} and some generalizations \cite{He07} (more precisely, the conjecture is concerned with finite-dimensional modules of $U_q(\widehat{\g})$ or $Y(\g)$). \begin{thm}[\cite{KR,Nakajima,Hernandez}]\label{NakHer} The characters $Q_{\al,i}$ of the Kirillov-Reshetikhin modules of $U_q(\widehat{\g})$ for any simple Lie algebra $\g$ satisfy the so-called $Q$-system (Equation \eqref{qsys} below). In addition, they satisfy \cite{Hernandez} the asymptotic conditions of \cite{HKOTY} (condition C of Theorem 7.1 \cite{HKOTY}), so that their decomposition into irreducible $U_q(\g)$-modules is given by Equation \eqref{N}. \end{thm} The $Q$-system is a quadratic recursion relation for the family of functions $\{Q_{\al,j}: \al\in I_r, j\in \N\}$. Each element $Q_{\al,j}$ has the interpretation of the character of the Kirillov-Reshetikhin module corresponding to a highest $\g$-weight $j\omega_\al$, where $\omega_\al$ is one of the fundamental weights of $\g$. In general the recursion relation is \cite{KR,HKOTY} \begin{equation}\label{qsys} Q_{\al,j+1} = \frac{\displaystyle Q_{\al,j}^2-\prod_{\beta\sim \al}\prod_{k=0}^{|C_{\al,\beta}|-1} Q_{\beta,\lfloor(|C_{\beta,\al}|j+k)/|C_{\al,\beta}|\rfloor}}{Q_{\al,j-1}}, \quad (j>0), \end{equation} with initial conditions $Q_{\al,0}=1$ and $Q_{\al,1}=t_\al$, a formal variable. 
Here, $\beta\sim\al$ means that the nodes $\al$ and $\beta$ are connected in the Dynkin diagram of $\g$. The notation $\lfloor a \rfloor$ denotes the integer part of $a$. If $\g$ is a simply-laced Lie algebra, then the system has the form \begin{equation} Q_{\al,j+1} = \frac{Q_{\al,j}^2 - \prod_{\beta\sim \al} Q_{\beta,j} }{Q_{\al,j-1}}, \quad (j>0). \end{equation} In the non-simply-laced case, the relations have the form \begin{equation} Q_{\al,j+1} = \frac{Q_{\al,j}^2 - \prod_{\beta\sim \al} T^{(\al,\beta)}_j }{Q_{\al,j-1}}, \end{equation} where $T_j^{(\al,\beta)}=Q_{\beta,j}^{|C_{\al,\beta}|}$ except in the following cases: \subsection*{$B_r$} \begin{eqnarray*} T_j^{(r-1,r)} &=& Q_{r,2j}\\ T_j^{(r,r-1)} &=& Q_{r-1,\lfloor j/2\rfloor}Q_{r-1,\lfloor (j+1)/2\rfloor}. \end{eqnarray*} \subsection*{$C_r$} \begin{eqnarray*} T_j^{(r-1,r)} &=& Q_{r,\lfloor j/2\rfloor}Q_{r,\lfloor (j+1)/2\rfloor}\\ T_j^{(r,r-1)} &=& Q_{r-1,2j}. \end{eqnarray*} \subsection*{$F_4$} \begin{eqnarray*} T_j^{(3,2)} &=& Q_{2,\lfloor j/2\rfloor}Q_{2,\lfloor (j+1)/2\rfloor}\\ T_j^{(2,3)} &=& Q_{3,2j}. \end{eqnarray*} \subsection*{$G_2$} \begin{eqnarray*} T_j^{(2,1)} &=& Q_{1,\lfloor j/3\rfloor}Q_{1,\lfloor (j+1)/3\rfloor}Q_{1,\lfloor (j+2)/3\rfloor}\\ T_j^{(1,2)} &=& Q_{2,3j}. \end{eqnarray*} We note an important corollary of this fact, which we call the polynomiality property of KR-characters: \begin{thm}\label{polynomiality} Given the data $\{Q_{\al,0}=1\}_{\al\in I_r}$, the solutions of the $Q$-system are polynomials in the variables $\{Q_{\al,1}\}_{\al\in I_r}$. \end{thm} \begin{proof} This is simply the statement that the Grothendieck group of KR-modules is generated by the trivial representation and the fundamental KR-modules with highest weights $\omega_\al$. It is known that there is a unitriangular decomposition of the KR-characters into irreducible $U_q(\g)$-characters, with the highest weight module $V(i \omega_\al)$ appearing with multiplicity one in $KR_{\al,i}$. 
The other modules in the decomposition have highest weights which are strictly lower in the partial ordering of weights. The statement of polynomiality follows from this fact. \end{proof} It was proved in \cite{HKOTY} that if the characters $\{Q_{\al,i}\}$ satisfy the $Q$-system plus a certain asymptotic condition, then the characters of their tensor products have an explicit ``fermionic'' expression: \begin{thm}[Theorem 8.1 \cite{HKOTY}]\label{HKOTYthm} Define $Q_{\al,i}$ to be the $U_q(\g)$-character of the KR-module corresponding to highest weight $i \omega_\al$. Then \begin{equation} \prod_{\al,i} Q_{\al,i}^{n_{\al,i}} = \sum_{\lambda} N_{\lambda;\bn} \ch V(\lambda), \end{equation} where $V(\lambda)$ is the irreducible $U_q(\g)$-module with highest weight $\lambda$. \end{thm} Here, the $N$-sum is \begin{equation}\label{N} N_{\lambda,\bn} = \sum_{m_{\al,i}\geq 0\atop q_\al=0 } \prod_{\al,i} {m_{\al,i}+p_{\al,i}\choose m_{\al,i}}, \end{equation} where $q_\al$ and $p_{\al,i}$ are defined by \eqref{spin} and \eqref{pvacancy} as for the $M$-sum. The only difference between this formula and the combinatorial KR-conjecture \ref{KRconjecture} is that the summation is not restricted to non-negative values of the vacancy numbers. That is, the $N$-sum has more terms, some of which are negative. In fact, \cite{HKOTY} conjectured that the two sums are equal. We will describe something which we call the HKOTY-conjecture, which is slightly stronger than this. (Their conjecture extends to the graded dimensions, which we will introduce below for fusion products. However, we need only prove the following version.) Define $N_{\lambda,\bn}^{(k)}$ and $M_{\lambda,\bn}^{(k)}$ to be the sums in equations \eqref{N} and \eqref{M}, respectively, with the summations further restricted so that $m_{\al, i}=n_{\al,i}=0$ if $i>t_\al k$. (Here, $t_\al$ is 1 for the long roots, $2$ for the short roots of $B_r, C_r, F_4$ and 3 for the short root of $G_2$.) 
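For $\g=\sl_2$, the equality of the two sums can be confirmed by brute force on small examples. The sketch below is ours and purely illustrative: it evaluates both $M$ and $N$, using the product formula for $\binom{m+p}{m}$ at negative $p$, and exhibits the cancellation of negative terms in the $N$-sum.

```python
from math import factorial

def gen_binom(m, p):
    """binom(m+p, m) = (p+1)(p+2)...(p+m)/m!, valid for negative p as well."""
    num = 1
    for j in range(1, m + 1):
        num *= p + j
    return num // factorial(m)  # exact: m! divides a product of m consecutive integers

def fermionic_sums(n, l):
    """Return (M, N) for sl_2 data n = [n_1, n_2, ...] and highest weight l*omega_1."""
    T = sum(i * n[i - 1] for i in range(1, len(n) + 1))
    if T < l or (T - l) % 2:
        return 0, 0
    s = (T - l) // 2
    K = max(s, 1)
    R = max(len(n), K)
    M = N = 0
    def vectors(i, rem, acc):
        if i > K:
            if rem == 0:
                yield tuple(acc)
            return
        for m in range(rem // i + 1):
            yield from vectors(i + 1, rem - i * m, acc + [m])
    for m in vectors(1, s, []):
        p = [sum(n[j - 1] * min(i, j) for j in range(1, len(n) + 1))
             - 2 * sum(m[j - 1] * min(i, j) for j in range(1, K + 1))
             for i in range(1, R + 1)]
        term = 1
        for i in range(1, K + 1):
            term *= gen_binom(m[i - 1], p[i - 1])
        N += term                       # unrestricted sum
        if all(pi >= 0 for pi in p):
            M += term                   # restricted sum
    return M, N

# a single KR module V(4*omega_1): the N-sum has a nontrivial cancellation (+1 - 1)
assert fermionic_sums([0, 0, 0, 1], 0) == (0, 0)
for n in ([3], [2, 1], [1, 0, 2]):
    for l in range(8):
        M, N = fermionic_sums(n, l)
        assert M == N
```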
\begin{conj}[The HKOTY-conjecture]\label{HKOTYconj} For any simple Lie algebra, \begin{equation}\label{mainidentity} M_{\lambda,\bn}^{(k)} = N_{\lambda,\bn}^{(k)}. \end{equation} \end{conj} We have $M_{\lambda,\bn} = \lim_{k\to\infty} M_{\lambda,\bn}^{(k)}$ and $N_{\lambda,\bn} = \lim_{k\to\infty} N_{\lambda,\bn}^{(k)}$. Therefore, if Conjecture \ref{HKOTYconj} is true, then combined with Theorem \ref{HKOTYthm} and the result of \cite{Hernandez}, it implies the completeness conjecture \ref{KRconjecture} of Kirillov and Reshetikhin. The purpose of the current article is to prove this conjecture directly, for all simple Lie algebras and for all $\bn$. \subsection{The Feigin-Loktev conjecture and Kirillov-Reshetikhin conjecture} In their paper \cite{FL} the authors introduced a graded tensor product on finite-dimensional, graded, cyclic $\g[t]$-modules for $\g$ a simple Lie algebra, which they call the fusion product. This is related to the fusion product in Wess-Zumino-Witten conformal field theory when the level is restricted. In the current paper, we take the level to be sufficiently large so that it does not enter the calculations. This is called the unrestricted fusion product. Let us summarize the results of \cite{AK} concerning the Feigin-Loktev conjecture for unrestricted fusion products of Kirillov-Reshetikhin modules for any simple Lie algebra $\g$. The description below of the Feigin-Loktev fusion product \cite{FL} is the one given in \cite{Kedem,AK}. We refer the reader to those articles for further details. Let $\{\zeta_1,...,\zeta_N\}$ be distinct complex numbers, and choose $\{V_1(\zeta_1),...,V_N(\zeta_N)\}$ to be finite-dimensional $\g[t]$-modules, cyclic with cyclic vectors $v_i$, localized at the points $\zeta_i$. Then, as $\g$-modules, we have \cite{FL}, \begin{equation}\label{filtered} V_1(\zeta_1)\otimes\cdots\otimes V_N(\zeta_N) \simeq U(\g[t]) v_1\otimes\cdots\otimes v_N. 
\end{equation} This tensor product is also a cyclic $\g[t]$-module with cyclic vector $v_1\otimes \cdots \otimes v_N$. It inherits a filtration from $U(\g[t])$ as in Section \ref{gradingsection}. \begin{defn} The unrestricted Feigin-Loktev fusion product, or graded tensor product, is the associated graded space of the filtered space \eqref{filtered}. It is denoted by $V_1\star \cdots \star V_N (\zeta_1,...,\zeta_N)$. \end{defn} Note that the filtration, and hence the grading, is $\g$-equivariant, and hence the graded components are $\g$-modules. Let $M_{\lambda;\{V_i\}}[n]$ denote the multiplicity of the irreducible $\g$-module $V(\lambda)$ in the $n$th-graded component of the fusion product $V_1\star\cdots\star V_N (\zeta_1,...,\zeta_N)$. \begin{defn} The graded multiplicity ($q$-multiplicity) of $V(\lambda)$ in the Feigin-Loktev fusion product $V_1\star \cdots \star V_N (\zeta_1,...,\zeta_N)$ is the polynomial in $q$ defined as $$ M_{\lambda;\{V_i\}} (q) = \sum_{n\geq 0} M_{\lambda;\{V_i\}} [n] q^n. $$ \end{defn} \begin{conj}[Feigin-Loktev \cite{FL}]\label{FLconj} In the cases where $V_i$ are sufficiently well-behaved, $M_{\lambda,\{V_i\}} (q)$ is independent of the localization parameters $\zeta_i$. \end{conj} At this time, it is not known what ``sufficiently well-behaved'' means in general, and this remains an open problem. In this paper we consider KR-modules, which we prove satisfy the necessary criteria. This in particular implies that the dimension of the fusion product is equal to the dimension of the tensor product of the $\g$-modules $V_i$, which is the $\g[t]$-module $V_i(\zeta_i)$ regarded as a $\g$-module. 
We have a Lemma, which follows from the fact that the fusion product is a quotient of the filtered tensor product \eqref{filtered} and a standard deformation argument (Lemma 20 of \cite{FKLMM}), \cite{FJKLM}: \begin{lemma}\cite{FJKLM}\label{dimensions} \begin{equation} M_{\lambda;\{V_i\}}(1) \geq {\rm Dim}\left( {\rm Hom}_\g \left(\underset{i}\otimes V_i,\ V(\lambda)\right)\right). \end{equation} \end{lemma} One way of proving Conjecture \ref{FLconj} is to compute $M_{\lambda,\{V_i\}}(q)$ explicitly, and to show that the polynomial is independent of $\zeta_i$. This is, of course, a stronger result. We have the following conjecture, inspired by \cite{FL} and generalized and partially proven in \cite{Kedem,FJKLM2,AKS,AK}. \begin{conj}\label{strongFLconj} In the case where $V_i$ are all of Kirillov-Reshetikhin-Chari type, the graded multiplicities $M_{\lambda,\{V_i\}}(q)$ are equal to the generalized Kostka polynomials or the fermionic sums $M_{\lambda,\bn}(q)$ of \cite{KR,HKOTY}. \end{conj} This conjecture implies Conjecture \ref{FLconj} for these cases, because the polynomials are independent of the localization parameters. Various special cases of Conjecture \ref{FLconj} have been proven. The case of $\sl_2$ was proven in \cite{FJKLM2} by proving Conjecture \ref{strongFLconj} (in this case, the multiplicities are the usual co-charge Kostka polynomials). Conjecture \ref{FLconj} was proven for $\sl_n$ symmetric-power representations in \cite{Kedem} by using a result of \cite{GP}. In \cite{AKS}, we proved Conjecture \ref{strongFLconj} for $\sl_n$ KR-modules by using a result of \cite{KSS} for the fermionic form of generalized Kostka polynomials, which are the $q$-multiplicities in the case of tensor products of KR-modules of $\sl_n$. Most generally, in \cite{AK}, the following theorem was proven: \begin{thm}[\cite{AK}]\label{ardonnekedem} \begin{equation} M_{\lambda;\{V_i\}} (q) \leq M_{\lambda;\bn}(q) \end{equation} where by the inequality we mean the inequality for the coefficients in each power of $q$. 
Here $n_{\al,i}$ is the number of KR-modules $\KR_{\al,i}$ in the fusion product. \end{thm} Each of the coefficients on both sides is manifestly non-negative, so it is sufficient to prove the equality for $q=1$, i.e. the equality of total dimensions. Therefore, given Lemma \ref{dimensions}, in the cases where Conjecture \ref{KRconjecture} has been proven, the set of inequalities implies the equality of Hilbert polynomials, and hence provides a proof of Conjectures \ref{strongFLconj} and \ref{FLconj}. Thus, in \cite{AKS,AK}, Conjecture \ref{strongFLconj} (hence \ref{FLconj}) was proven for these cases. The proofs of \cite{KR,KSS,Schilling,OkaSchShi,SS} depend on a certain bijection between combinatorial objects called rigged configurations, which are what the $M$-summation counts, and crystal paths. In the present paper, we bypass the question of the existence of crystal bases by proving the HKOTY conjecture directly. Together with Theorem \ref{ardonnekedem}, this provides a proof of Conjectures \ref{strongFLconj}, \ref{FLconj} for all simple Lie algebras and tensor products of an arbitrary set of KR-modules, modulo the identification of the dimension of Chari's KR-modules \cite{Chari} and the usual KR-modules. That is, Chari's modules (and the fusion product) are $\g[t]$-modules and not Yangian modules. It is known that the dimensions of Chari's modules are equal to the dimensions of the KR-modules for Yangians or, equivalently, quantum affine algebras, in the case of classical algebras and in some of the exceptional cases. \section{Recursion relations and quadratic relations for $sl_2$} In this section, we illustrate the method of the proof of the HKOTY conjecture for the simplest case of $\sl_2$. The higher rank cases are a straightforward generalization of this case, but the notation and formulas become much more cumbersome. Hence it is instructive to examine this case first. 
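As a warm-up, the $\sl_2$ case of the $Q$-system \eqref{qsys} is easy to iterate numerically: its solutions are the characters of the irreducible $\sl_2$-modules, i.e., Chebyshev polynomials of the second kind in $t_1=e^{\omega_1}+e^{-\omega_1}$. A small sketch in exact rational arithmetic (ours; the evaluation point $z$ is an arbitrary rational):

```python
from fractions import Fraction

def sl2_character(j, z):
    """Character of V(j*omega_1) at e^{omega_1} = z: z^j + z^{j-2} + ... + z^{-j}."""
    return sum(z ** (j - 2 * k) for k in range(j + 1))

def sl2_q_system(t, jmax):
    """Iterate the sl_2 Q-system: Q_0 = 1, Q_1 = t, Q_{j+1} = (Q_j^2 - 1)/Q_{j-1}."""
    Q = [Fraction(1), Fraction(t)]
    for j in range(1, jmax):
        Q.append((Q[j] ** 2 - 1) / Q[j - 1])
    return Q

z = Fraction(3, 2)
Q = sl2_q_system(z + 1 / z, 8)
# the Q-system reproduces the characters of the irreducible sl_2-modules
assert all(Q[j] == sl2_character(j, z) for j in range(9))
```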
\subsection{The $\cQ$-system} We define a generalization of the $Q$-system for functions which we call $\cQ$. Let $\bu:=(u, u_i\ (i\in \N))$ be formal variables. We define a family of functions $\{\cQ_k(u;u_1,...,u_{k-1})\}_{k\in\Z_+}$ recursively as follows. It is convenient to use the notation $\cQ_k(\bu) := \cQ_k(u;u_1,...,u_{k-1})$. Then the family is defined by the quadratic recursion relations: \begin{eqnarray} &\cQ_0(\bu)=1,\ \cQ_1(\bu) = u^{-1},& \nonumber \\ & \cQ_{k+1}(\bu) = \frac{\displaystyle{\cQ_k^2(\bu) -1}}{\displaystyle{u_k \cQ_{k-1}(\bu)}}, &\quad (k\geq 1). \label{quadratic} \end{eqnarray} Note that this system is the same as the $Q$-system if $u_i=1$ for $i\geq 1$. Therefore, $\cQ_j(u,1,...,1) = Q_j$ where $u=t_1^{-1}$ to agree with the initial conditions of \eqref{qsys}. Moreover, if we set $u_1=\cdots = u_j=1$ leaving the other variables unevaluated, then the solution of the system has $\cQ_k=Q_k$ for $k\leq j$ and $\cQ_{j+1} = Q_{j+1}/u_j$. At $u_i=1$ for all $i$, the solutions of this system are the Chebyshev polynomials of the second kind in the variable $u^{-1}$. \begin{lemma}\label{recursionlemma} A family of solutions $\{\cQ_k\}$ satisfies \eqref{quadratic} if and only if it satisfies the following recursion relations: \begin{eqnarray} & \cQ_0(\bu)=1,\ \cQ_1(\bu) = u^{-1},& \nonumber \\ &\cQ_{k+1}(\bu) = \cQ_k(\bu'),&\ (k\geq 1),\label{recursion} \end{eqnarray} where $$ u' = \frac{1}{\cQ_2(\bu)}, \ u_1' = \cQ_1(\bu)u_2,\ u_j' = u_{j+1},\ (j>1), $$ and $\cQ_2(\bu)$ is defined by the equation \begin{equation} \cQ_2(\bu) = \frac{u^{-2}-1}{u_1}. \end{equation} \end{lemma} \begin{proof} Note that the quadratic relation in \eqref{quadratic} can be expressed as \begin{equation} \cQ_{k+1}(\bu) = \cQ_2(\cQ_k(\bu)^{-1}, \cQ_{k-1}(\bu)u_k).\label{Vtwo} \end{equation} Suppose the family of solutions satisfies the recursion \eqref{recursion}. Then the quadratic relation \eqref{quadratic} holds for $k=1$ by definition. 
Suppose \eqref{quadratic} holds for $\cQ_m$ for all $m\leq k$, that is $$ \cQ_m(\bu) = \cQ_2(\frac{1}{\cQ_{m-1}(\bu)},\cQ_{m-2}(\bu)u_{m-1}), \ m\leq k. $$ Then \begin{eqnarray*} \cQ_{k+1}(\bu) &=& \cQ_k(\bu')\ \hbox{(by assumption)}\\ &=& \cQ_2(\cQ_{k-1}(\bu')^{-1},\cQ_{k-2}(\bu')u_{k-1}') \ \hbox{(by induction hypothesis)} \\ &=& \cQ_2(\cQ_k(\bu)^{-1},\cQ_{k-1}(\bu)u_k). \end{eqnarray*} By induction, it follows that the family defined by \eqref{recursion} satisfies the quadratic relation \eqref{quadratic} for all $k\geq 1$. Conversely, suppose we have a family of functions which satisfies the quadratic relation \eqref{quadratic}. Equation \eqref{recursion} holds for $k=1$ by definition. Suppose \eqref{recursion} holds for all $m\leq k$. Then \begin{eqnarray*} \cQ_{k+1}(\bu) &=& \cQ_2(\cQ_k(\bu)^{-1},\cQ_{k-1}(\bu)u_k) \\ &=& \cQ_2(\cQ_{k-1}(\bu')^{-1},\cQ_{k-2}(\bu')u_{k-1}') \\ &=& \cQ_k(\bu'). \end{eqnarray*} The lemma follows by induction. \end{proof} \begin{remark} The evaluations of the functions $\cQ_k$ when $u_j=1$ for all $j$ are the Chebyshev polynomials of the second kind in the variable $t=u^{-1}$. These polynomials, which are defined by the $sl_2$ fusion relation \begin{eqnarray*} & U_0(t) = 1, \ U_1(t) = t& \\ &U_1(t) U_k(t) = U_{k-1}(t) + U_{k+1}(t)& \end{eqnarray*} are known to satisfy the quadratic relation $$ U_{k+1}(t) = \frac{U_k^2(t) - 1}{U_{k-1}(t)}. $$ This is the $Q$-system for $\sl_2$, which is satisfied by the characters of the irreducible representations of $sl_2$. Here, $t=e^{\omega_1}+e^{-\omega_1}$. \end{remark} Lemma \ref{recursionlemma} can be recast in slightly more general terms. Define the $j$th shift operation on the variables $\bu$ as: \begin{equation} u^{(j)} = \frac{1}{\cQ_{j+1}(\bu)}, \quad u_1^{(j)} = \cQ_j(\bu) u_{j+1},\quad u_l^{(j)} = u_{l+j}\ (l>1)\label{uell}. \end{equation} The variable $\bu'$ is just $\bu^{(1)}$. Then \begin{cor} \begin{equation}\label{jtranslation} \cQ_{k+j} (\bu) = \cQ_k(\bu^{(j)}). 
\end{equation} \end{cor} \begin{proof} We proceed by induction on $j$. The statement holds for any $k$ when $j=1$ by Lemma \ref{recursionlemma}. Suppose it is true for any $k$ and for all $l<j$. Then \begin{eqnarray*} \cQ_{k+j}(\bu) = \cQ_{k+j-1}(\bu') = \cQ_k((\bu')^{(j-1)}) \end{eqnarray*} where the second equality is the induction hypothesis. We compute \begin{eqnarray*} (u')^{(j-1)} &=& \frac{1}{\cQ_j(\bu')} = \frac{1}{\cQ_{j+1}(\bu)} = u^{(j)} \\ (u_1')^{(j-1)} &=& \cQ_{j-1}(\bu') u_{j}' = \cQ_j(\bu) u_{j+1} = u_1^{(j)} \\ (u_m')^{(j-1)} &=& u_{m+1}^{(j-1)} = u_{m+j} = u_m^{(j)}. \end{eqnarray*} The statement follows. \end{proof} \subsection{Generating functions} The general technique of the proof is to relax the restrictions on the summations by defining an appropriate generating function. It is then easy to prove properties of this generating function. In particular this allows us to prove a constant term identity among generating functions which is just the $M=N$ identity. We define generating functions in the variables $\bu$, which are labeled by the parameters $k\in \N$, $\mathbf n\in \Z_+^k$, and $l\in \Z_+$: \begin{equation}\label{genfun} Z_{l;\mathbf n}^{(k)}(\mathbf u) := \sum_{m\in \Z_+^k} u^{q} \prod_{i=1}^{k} {m_i + q_i \choose m_i} u_i^{q_i}, \end{equation} where the integers $q_i$ depend on $m_i$ and $n_i$: $$q=l+\sum_{j=1}^k j(2 m_j-n_j), \quad q_i =q + p_i =l+ \sum_{j=1}^{k-i} j(2m_{i+j} -n_{i+j}).$$ In particular, $q_k=l$. Here, the binomial coefficient is defined for all $p\in\Z$ by $$ {m+p \choose m} = \frac{(p+m)(p+m-1) \cdots (p+1)}{m!}. $$ In the case that $p$ is negative, this coefficient is non-vanishing if and only if $m<-p$. This generating function is constructed so that the constant term in $u$ corresponds to $q=0$. In this term, $q_i=p_i$, and the evaluation at $u_i=1$ for all $i$ is just the HKOTY $N$-sum in equation \eqref{mainidentity}. 
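Before proceeding, Lemma \ref{recursionlemma} (and, by iteration, \eqref{jtranslation}) can be checked in exact rational arithmetic. A sketch (ours, illustrative only; the chosen values of $\bu$ are arbitrary generic rationals):

```python
from fractions import Fraction

def cQ_family(kmax, u, us):
    """Iterate the quadratic system: cQ_0 = 1, cQ_1 = 1/u,
    cQ_{k+1} = (cQ_k**2 - 1) / (u_k * cQ_{k-1}) for k >= 1."""
    Q = [Fraction(1), 1 / Fraction(u)]
    for k in range(1, kmax):
        Q.append((Q[k] ** 2 - 1) / (us[k - 1] * Q[k - 1]))
    return Q

u = Fraction(2, 7)
us = [Fraction(3, 2), Fraction(5, 3), Fraction(7, 4),
      Fraction(9, 5), Fraction(11, 6), Fraction(13, 7)]
Q = cQ_family(6, u, us)

# the shifted variables: u' = 1/cQ_2, u'_1 = cQ_1 * u_2, u'_j = u_{j+1} for j > 1
up = 1 / Q[2]
usp = [Q[1] * us[1]] + us[2:]
Qp = cQ_family(5, up, usp)

# the recursion cQ_{k+1}(u) = cQ_k(u') of the lemma, for k >= 1
assert all(Q[k + 1] == Qp[k] for k in range(1, 6))
```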
On the other hand, by first taking terms with only non-negative powers of each of the variables $u_i$ and then taking the evaluation at $u_i=1$ for all $i$ and considering the constant term in $u$, we obtain the $M$ side of equation \eqref{mainidentity}. We will prove a stronger result, an identity of power series in $u$, which implies the equality of these two types of constant terms. Then in the limit $k\to\infty$, this will prove the combinatorial Kirillov-Reshetikhin conjecture for $\g=\sl_2$. The generating function $Z_{l;\mathbf n}^{(k)}(\mathbf u) $ is a Laurent series in each $u_i$ and in $u$. (The dependence on $u_k$ is trivial: it is an overall factor $u_k^l$.) Note that $q_j$ does not depend on $m_i$ with $i\leq j$. It is therefore possible to find a factorization formula for $Z_{l;\mathbf n}^{(k)}(\mathbf u) $ by summing over $m_1$, then $m_2$ and so forth. In fact, such a factorization can be described rather nicely. To prove it, we first prove a simple Lemma. \begin{lemma} The function $ Z_{l;\mathbf n}^{(k)}(\mathbf u)$ satisfies the recursion relation \begin{equation} Z_{l;\bn}^{(k)}(\bu) = Z_{0;n_1}^{(1)}(\bu) Z_{l;n_2,...,n_k}^{(k-1)}(\bu'). \end{equation} \end{lemma} \begin{proof} The function with $k=1$ can be computed explicitly (note that $q_1=l$ in this case) \begin{eqnarray} Z_{l;n_1}^{(1)}(u) &=& \sum_{m_1} u^{2m_1-n_1+l}u_1^{l}{m_1+l\choose m_1}\nonumber\\ &=& \frac{u_1^l u^{-n_1+l}}{(1-u^2)^{l+1}}\nonumber \\ &=& \frac{\cQ_1(\bu)^{n_1+l+2}}{u_1 \cQ_2(\bu)^{l+1}}.\label{Zkone} \end{eqnarray} The summation over $m_1$ in \eqref{genfun} can be computed by using the expansion (true for $|t|<1$ and for any $p$) \begin{equation}\label{binomialsum} \sum_{m\geq 0}{m+p\choose m} t^m = \frac{1}{(1-t)^{p+1}}. 
\end{equation} The result is \begin{eqnarray*} Z_{l;\bn}^{(k)}(\bu) &=& \sum_{m_2,...,m_k} u^{2q_1-q_2} u_1^{q_1} \prod_{i=2}^{k} {m_i+q_i\choose m_i} u_i^{q_i}\times \sum_{m_1\geq 0} u^{2m_1-n_1}{m_1+q_1\choose m_1}\\ &=& \frac{u^{-n_1}}{1-u^2} \sum_{m_2,...,m_k} \left(\frac{u^2 u_1}{1-u^2}\right)^{q_1} u^{-q_2} \prod_{i=2}^{k} {m_i+q_i\choose m_i} u_i^{q_i} \\ &=& Z_{0;n_1}^{(1)}(u) \sum_{m'_1,...,m_{k-1}'} {u'}^{q'} \prod_{i=1}^{k-1} {m'_i+q'_i\choose m'_i} {u'_i}^{q'_i}, \end{eqnarray*} where we have used the fact that $$2 q_1-q_2=l+\sum_{j=2}^k j(2m_j-n_j) = q - (2m_1-n_1).$$ Here, we have used the substitutions $$ q'_i =l+ \sum_{j=1}^{k-i-1} j(2m_{j+i}'-n_{j+i}'), \quad n'_{j}=n_{j+1},\ m'_j = m_{j+1}, $$ and $q'=q_1$. The new variables $\bu'$ are \begin{eqnarray*} u' &=& \frac{u^2 u_1}{1-u^2} = \frac{1}{\cQ_2(\bu)} \\ u'_1 &=& \cQ_1(\bu) u_2 \\ u'_j &=& u_{j+1}, \ j>1. \end{eqnarray*} This proves the recursion relation for the generating function. \end{proof} This recursion allows us to prove the factorization formula for the generating function $Z_{l;\bn}^{(k)}(\bu)$: \begin{thm}\label{sltwoZfactorization} The generating function $Z_{l;\bn}^{(k)}(\bu)$ has a factorization in terms of the functions $\cQ_i(\bu)$: \begin{equation}\label{factorization} Z_{l;\bn}^{(k)}(\bu) = \frac{\cQ_1(\bu) \cQ_k(\bu)^{l+1}}{\cQ_{k+1}(\bu)^{l+1}}\prod_{i=1}^k\frac{\cQ_i(\bu)^{n_i} }{ u_i} . \end{equation} \end{thm} \begin{proof} We prove the Theorem by induction. For $k=1$, the statement of the Theorem is equivalent to equation \eqref{Zkone}. Suppose the Theorem is true for $k-1$. 
Then \begin{eqnarray*} Z_{l;\bn}^{(k)} = Z_{0;n_1}^{(1)}(\bu) Z_{l;n_2,...,n_k}^{(k-1)}(\bu')&=& \frac{{\cQ_1}^{2+n_1}(\bu) }{\cQ_2(\bu) u_1} \times \frac{\cQ_1(\bu') \cQ_{k-1}(\bu')^{l+1}} {\cQ_{k}(\bu')^{l+1}} \prod_{i=1}^{k-1}\frac{ \cQ_i(\bu')^{n_{i+1}}}{ u'_i}\\ &=& \frac{{\cQ_1}^{2+n_1}(\bu) }{\cQ_2(\bu) u_1}\times \frac{\cQ_2(\bu) \cQ_k(\bu)^{l+1}}{\cQ_{k+1}(\bu)^{l+1}\cQ_1(\bu)} \prod_{i=2}^k \frac{ \cQ_i(\bu)^{n_i}}{u_i } \\ &=& \frac{\cQ_1(\bu) \cQ_k(\bu)^{l+1}}{\cQ_{k+1}(\bu)^{l+1}} \prod_{i=1}^k \frac{ \cQ_i(\bu)^{n_i}}{u_i}, \end{eqnarray*} and the factorization is true for $k$. By induction, the factorization holds for all $k\in\N$. \end{proof} \begin{cor} Given any $1\leq p\leq k-1$, we have the factorization \begin{equation}\label{pfactorization} Z_{l;\bn}^{(k)}(\bu) = Z_{0;n_1,...,n_p}^{(p)}(\bu) Z_{l;n_{p+1},...,n_k}^{(k-p)}(\bu^{(p)}). \end{equation} \end{cor} \begin{proof} This follows from the factorization formula \eqref{factorization} and the property \eqref{jtranslation} \begin{eqnarray*} \hspace{-.5in}&& Z_{0;n_1,...,n_p}^{(p)}(\bu)Z_{l;n_{p+1},...,n_k}^{(k-p)}(\bu^{(p)})\\ &&\hspace{.5in}=\frac{\cQ_1(\bu) \cQ_p(\bu)}{\cQ_{p+1}(\bu)} \prod_{i=1}^p \frac{\cQ_i(\bu)^{n_i}}{u_i} \times \frac{\cQ_1(\bu^{(p)}) \cQ_{k-p}(\bu^{(p)})^{l+1}}{\cQ_{k-p+1}(\bu^{(p)})^{l+1}} \prod_{i=1}^{k-p} \frac{ \cQ_i(\bu^{(p)})^{n_{i+p}}}{u_i^{(p)}} \\ &&\hspace{.5in}= \frac{\cQ_1(\bu) \cQ_p(\bu)}{\cQ_{p+1}(\bu)} \prod_{i=1}^p \frac{\cQ_i(\bu)^{n_i}}{u_i}\times \frac{\cQ_{p+1}(\bu)\cQ_{k}(\bu)^{l+1}}{\cQ_p(\bu)\cQ_{k+1}(\bu)^{l+1}} \prod_{i=1}^{k-p}\frac{ \cQ_{i+p}(\bu)^{n_{i+p}}}{u_{i+p}} \\ &&\hspace{.5in}= \frac{\cQ_1(\bu) \cQ_{k}(\bu)^{l+1}}{\cQ_{k+1}(\bu)^{l+1}} \prod_{i=1}^k \frac{\cQ_i(\bu)^{n_i}}{ u_i} = Z_{l;\bn}^{(k)}(\bu). \end{eqnarray*} \end{proof} \subsection{Identity of power series} Now consider the evaluation $\varphi_j$, which maps each variable in the subset $\{u_1,...,u_{j-1}\}\subset \{u_1,...,u_k\},\ (j\leq k)$ to the value $1$. 
That is, $$\varphi_j (\bu) = (u;1,...,1,u_j,...,u_{k}).$$ At this specialization we have \begin{eqnarray*} \varphi_j ( \cQ_i(\bu)) &=& U_i(u^{-1}),\ i=1,...,j,\\ \varphi_j (\cQ_{j+1}(\bu)) &=& \frac{U_{j+1}(u^{-1})}{u_j}. \end{eqnarray*} Using the factorization \eqref{pfactorization} and the fact that $$\varphi_j(Z_{0;n_1,...,n_j}^{(j)}(\bu)) = \frac{U_1(u^{-1})U_j(u^{-1})}{U_{j+1}(u^{-1})}\ \prod_{i=1}^j U_i(u^{-1})^{n_i}, $$ we have \begin{equation}\label{jfact} \varphi_j(Z_{l;n_1,...,n_k}^{(k)}(\bu)) = \left(\frac{U_1(u^{-1})U_j(u^{-1})}{U_{j+1}(u^{-1})} \prod_{i=1}^j U_i(u^{-1})^{n_i}\right)\times \varphi_j( Z_{l;n_{j+1},...,n_k}^{(k-j)} (\bu^{(j)})). \end{equation} Here, $$ \varphi_j(\bu^{(j)}) = (\frac{u_j}{U_{j+1}(u^{-1})}; U_j(u^{-1}) u_{j+1}, u_{j+2},...,u_k). $$ Note that all the dependence on the parameters $u_j,...,u_{k}$ in $\varphi_j(Z_{l;n_1,...,n_k}^{(k)}(\bu))$ is in the second factor of \eqref{jfact}. \begin{defn} Let $f(u)$ be a Laurent series in $u$. Denote the power series part (non-negative powers) of a Laurent series by $$ \cP_u f(u). $$ \end{defn} \begin{defn} Let $Z_{l;\bn}^{(k)}(\bu)$ be defined by equation \eqref{genfun}, and let $1\leq j \leq k-1$. We denote by $Z_{l;\bn}^{(k)}(\bu)^{[j]}$ the generating function in \eqref{genfun} with the summation restricted to values of $\bm$ such that $q_i\geq 0$ for all $i\geq j$. Note that this is equivalent to taking only non-negative powers in the variables $u_i$ with $i\geq j$. \end{defn} It is clear from the factorization formula that for any $s\geq j$, \begin{eqnarray}\label{evalfactorization} \varphi_j ( Z_{l;n_1,...,n_k}^{(k)}(\bu)^{[s]}) = \left[\frac{U_1(u^{-1})U_j(u^{-1})}{U_{j+1}(u^{-1})} \prod_{i=1}^j U_i(u^{-1})^{n_i}\right] \ \varphi_j( Z^{(k-j)}_{l;n_{j+1},...,n_k}(\bu^{(j)})^{[s]}). \end{eqnarray} \begin{lemma}\label{inductiveZ} \begin{equation}\label{powerseries} \cP_{u} \varphi_j(Z_{l;n_1,...,n_k}^{(k)}(\bu)^{[j]}) = \cP_{u} \varphi_j(Z_{l;n_1,...,n_k}^{(k)}(\bu)^{[j+1]}). 
\end{equation} In other words, the power series in $u$ on the right hand side, where the integers $q_{j+1},...,q_{k-1}$ are restricted to non-negative values, has only non-negative powers of $u_{j}$. \end{lemma} \begin{proof} First consider the factorization formula \eqref{jfact} with $j=k-1$: \begin{eqnarray*} \varphi_{k-1}(Z_{l;n_1,...,n_k}^{(k)}(\bu)) &=& \varphi_{k-1}(Z_{0;n_1,...,n_{k-1}}^{(k-1)}(\bu)) \varphi_{k-1}(Z_{l;n_k}^{(1)}(\bu^{(k-1)}))\\ && \hskip-1in= \frac{U_1(u^{-1}) U_{k-1}(u^{-1})}{U_k(u^{-1})} \prod_{i=1}^{k-1}U_i(u^{-1})^{n_i} \ \sum_{m_k\geq 0} \left(\frac{u_{k-1}}{U_k(u^{-1})}\right)^{2m_k-n_k+l} {m_k+l\choose m_k}u_k^l U_{k-1}(u^{-1})^l. \end{eqnarray*} We analyze the dependence on $u$ as follows. On the right hand side, terms corresponding to strictly {\em negative} powers of $u_{k-1}$ are proportional to a product of Chebyshev polynomials in $u^{-1}$, since the factor $U_{k}$ in the denominator cancels. This means that the coefficient of $u_{k-1}^{-n}\ (n>0)$ is a polynomial in $u^{-1}$. Moreover, this polynomial has an overall factor of $U_1(u^{-1})=u^{-1}$, and thus contains no constant term in $u^{-1}$. Therefore, terms with strictly negative powers in $u_{k-1}$ appear only in the coefficients of strictly negative powers of $u$ in the Laurent series $\varphi_{k-1}(Z_{l;n_1,...,n_k}^{(k)}(\bu))$. In terms of the notation above we have shown that \begin{equation}\label{basestep} \cP_u \varphi_{k-1} (Z_{l;n_1,...,n_k}^{(k)}(\bu))= \cP_{u} \varphi_{k-1}(Z_{l;n_1,...,n_k}^{(k)}(\bu)^{[k-1]}). \end{equation} We proceed to prove the Lemma by induction. 
Consider the series obtained from Equation \eqref{evalfactorization} by taking power series of both sides \begin{eqnarray*} && \varphi_j(Z_{l;n_1,...,n_k}^{(k)}(\bu)^{[j+1]})= \\ && \hskip.5in \frac{U_1(u^{-1}) U_{j}(u^{-1})}{U_{j+1}(u^{-1})}\prod_{i=1}^j U_i(u^{-1})^{n_i} \times \cP_{u_{j+1},...,u_{k-1}}\varphi_j( Z_{l;n_{j+1},...,n_k}^{(k-j)}({\bu}^{(j)})). \end{eqnarray*} Explicitly, \begin{eqnarray*} & & \cP_{u_{j+1},...,u_{k-1}}\varphi_j( Z_{l;n_{j+1},...,n_k}^{(k-j)}(\bu^{(j)}))\\ &&\hskip.5in=\sum_{\underset{q_{s}\geq 0 (s>j)}{m_{j+1},...,m_k\geq 0}} \left(\frac{u_j}{U_{j+1}}\right)^{q_j} U_j^{q_{j+1}}\prod_{i=j+1}^{k} {q_i+m_i\choose m_i} u_i^{q_i}. \end{eqnarray*} Again, it is clear that all terms with strictly negative powers of $u_j$ ($q_j<0$) are proportional to a product of Chebyshev polynomials in $u^{-1}$, and this is where all the dependence on the variable $u$ resides. Because of the overall factor $U_1(u^{-1})$, such terms contribute only to the coefficients of $u^{-n}$ with $n>0$ in the generating function. The Lemma follows by induction. \end{proof} \begin{thm}\label{sltwoHKOTY} \begin{equation}\label{HKOTY} \cP_u Z_{l;\bn}^{(k)}(u,1,...,1)^{[1]} = \cP_u Z_{l;\bn}^{(k)} (u,1,...,1). \end{equation} \end{thm} \begin{proof} Lemma \ref{inductiveZ} guarantees that \begin{equation}\label{indstep} \cP_u Z_{l;\bn}^{(k)}(u,1,...,1)^{[j]}=\cP_u Z_{l;\bn}^{(k)}(u,1,...,1)^{[j+1]}, \end{equation} by evaluating both sides of \eqref{powerseries} at the point $u_{j}=...=u_{k}=1$. We proceed by induction, with the initial step coming from equation \eqref{basestep}: $$ \cP_u Z_{l;\bn}^{(k)}(u,1,...,1) = \cP_u Z_{l;\bn}^{(k)}(u,1,...,1)^{[k-1]}. $$ The induction step is \eqref{indstep}. The Theorem follows. \end{proof} The relation to the restriction in the summation of the HKOTY conjecture is as follows. The constant term in the generating function $Z_{l;\bn}^{(k)}$ corresponds to all terms with $q=0$. 
This constant term appears as the first term of the power series identity in $u$ proven above. Keeping in mind that $q_i = p_i + q = p_i$ in the constant term, the constant term of the right hand side of \eqref{HKOTY} is the unrestricted ($N$)-side of the HKOTY conjecture, and the constant term on the left hand side of \eqref{HKOTY} is the restricted ($M$)-side of the HKOTY conjecture. Thus, we have proven the conjecture for the case $\g=\sl_2$. \subsection{Identity of multiplicities} The number $N_{l;\bn}$ is the multiplicity of the irreducible $\sl_2$-module with highest weight $l\omega_1$ in the tensor product $\otimes_i V(i \omega_1)^{n_i}$. This number is equal to the multiplicity of the trivial representation in $V(l \omega_1) \otimes \left( \otimes_i V(i \omega_1)^{n_i}\right)$. As a non-trivial check, we show that the above generating function indeed has this property. The factorized form of $Z_{l;\bn}^{(k)}(\bu)$ allows for writing a simple residue integral for the multiplicity $N_{l;\bn}$ of the representation $V_l$ in the tensor product $\otimes_{i=1}^p V_i^{\otimes \, n_i}$. \begin{lemma}\label{multiplicitysltwo} The multiplicities $N_{l;\bn}$ are equal to the residue integral around $u=0$: \begin{equation} N_{l;\bn}=\oint \frac{du}{2i\pi u} \prod_{i=1}^p U_i(u^{-1})^{n_i} \, U_1(u^{-1}) z(u)^{l+1}\label{multitwo} \end{equation} where $z(u)=uC(u^2)$, $C(x)=(1-\sqrt{1-4x})/(2x)=\sum_{n\geq 0}c_n x^n$ being the generating series of the Catalan numbers $c_n=(2n)!/(n!(n+1)!)$. \end{lemma} \begin{proof} The integer $N_{l;\bn}$ is the constant term of $Z_{l;\bn}^{(k)}(u,1,1,...,1)$ in the limit as $k\to \infty$, while only finitely many $n_i$ are nonzero. Assume $\bn$ is such that $n_j=0$ for all $j>p$ for some $p\in \N$ (we pick $p$ so that $l\leq p$). 
Then \begin{eqnarray*} N_{l;\bn}&=&\lim_{k\to \infty} \oint \frac{du}{2i\pi u} Z_{l;\bn}^{(k)}(u,1,1,...,1)\\ &=&\oint {du \over 2i\pi u} \prod_{i=1}^p U_i(u^{-1})^{n_i} \lim_{k\to \infty} U_1(u^{-1})\frac{U_k(u^{-1})^{l+1}}{U_{k+1}(u^{-1})^{l+1}} \end{eqnarray*} where the contour of integration encircles $0$. With the parametrization $u^{-1}=z+z^{-1}$, where $z=z(u):=(2u)^{-1}(1-\sqrt{1-4u^2})$, the Chebyshev polynomials read $U_k(u^{-1})=(z^{k+1}-z^{-k-1})/(z-z^{-1})$, and we have the following large $k$ asymptotics for $|z|<1$: $U_k(u^{-1})\sim -z^{-k}/(z^2-1)$, so that $\lim_{k\to \infty} U_k(u^{-1})/U_{k+1}(u^{-1}) =z(u)$. The lemma follows, as $z(u)/u=C(u^2)$. \end{proof} As a non-trivial check of the formula \eqref{multitwo}, we show that $N_{l;\bn}=N_{0;\bn+\epsilon_l}$, where the vector ${\bf \epsilon}_l$ has components $\delta_{i,l}$ for $i\geq 1$. Using the expression \eqref{multitwo} for $N_{0;\bn+{\bf \epsilon}_l}$ and the fact that $U_l (u^{-1})=z^l +z^{-1} U_{l-1}(u^{-1})$, we may rewrite \begin{eqnarray*} N_{0;\bn+{\bf \epsilon}_l}&=&\oint \frac{du}{2i\pi u} \prod_{i=1}^p U_i(u^{-1})^{n_i} \, U_1(u^{-1})U_l(u^{-1}) z(u) \\ &=& \oint \frac{du}{2i\pi u} \prod_{i=1}^p U_i(u^{-1})^{n_i} \, U_1(u^{-1})\Big( z(u)^{l+1}+ U_{l-1}(u^{-1})\Big)\\ &=& N_{l;\bn} \end{eqnarray*} as the second term $U_{l-1}(u^{-1})$ yields a product of polynomials of $u^{-1}$ with $U_1(u^{-1})$ as an overall factor, hence has no constant term in $u$, while the first term exactly matches eq. \eqref{multitwo} for $N_{l;\bn}$. \section{The HKOTY identity for $\g$ a simply-laced Lie algebra} The treatment illustrated for $\g=\sl_2$ in the previous section is essentially unchanged for other simple Lie algebras, although with more complicated notation. It is simplest to describe the non simply-laced algebras separately. Thus, we give the proof of \eqref{mainidentity} in this section for the case of simply-laced Lie algebras. 
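As a further illustration of the $\sl_2$ residue formula \eqref{multitwo}, one can evaluate both sides numerically. The plain-Python sketch below is our own illustration, not part of the proof; all function names and truncation choices are ours. It extracts the constant term of $\prod_i U_i(u^{-1})^{n_i}\,U_1(u^{-1})\,z(u)^{l+1}$ from truncated Laurent polynomials, and compares it with the multiplicity of $V_l$ in $\otimes_i V_i^{\otimes n_i}$ obtained by iterating the $\sl_2$ Clebsch--Gordan rule $V_a\otimes V_b=\oplus_{c=|a-b|}^{a+b}V_c$ ($c$ in steps of $2$).

```python
from math import comb
from collections import Counter

def laurent_mul(a, b, max_pow):
    """Multiply Laurent polynomials {exponent: coeff}, dropping powers above max_pow."""
    out = {}
    for e1, c1 in a.items():
        for e2, c2 in b.items():
            if e1 + e2 <= max_pow:
                out[e1 + e2] = out.get(e1 + e2, 0) + c1 * c2
    return out

def chebyshev_U(i):
    """U_i(u^{-1}) via the recursion U_{i+1}(x) = x U_i(x) - U_{i-1}(x), with x = u^{-1}."""
    U = [{0: 1}, {-1: 1}]                       # U_0 = 1, U_1 = u^{-1}
    while len(U) <= i:
        nxt = {e - 1: c for e, c in U[-1].items()}   # multiply by u^{-1}
        for e, c in U[-2].items():
            nxt[e] = nxt.get(e, 0) - c
        U.append(nxt)
    return U[i]

def mult_residue(l, n):
    """Constant term of prod_i U_i(1/u)^{n_i} * U_1(1/u) * z(u)^{l+1}, as in (multitwo)."""
    order = sum(i * ni for i, ni in enumerate(n, start=1)) + 1   # lowest power of u is -order
    f = chebyshev_U(1)
    for i, ni in enumerate(n, start=1):
        for _ in range(ni):
            f = laurent_mul(f, chebyshev_U(i), order)
    # z(u) = u C(u^2) = sum_m Catalan(m) u^{2m+1}, truncated at u^order
    z = {2 * m + 1: comb(2 * m, m) // (m + 1) for m in range(order) if 2 * m + 1 <= order}
    zl = {0: 1}
    for _ in range(l + 1):
        zl = laurent_mul(zl, z, order)
    return laurent_mul(f, zl, order).get(0, 0)

def mult_tensor(l, n):
    """Multiplicity of V_l in tensor products of V_i, by iterating the Clebsch-Gordan rule."""
    reps = Counter({0: 1})
    for i, ni in enumerate(n, start=1):
        for _ in range(ni):
            new = Counter()
            for a, mult in reps.items():
                for c in range(abs(a - i), a + i + 1, 2):
                    new[c] += mult
            reps = new
    return reps[l]
```

For instance, $V_1^{\otimes 4} = 2V_0\oplus 3V_2\oplus V_4$, and both functions return the multiplicities $2,3,1$ for $l=0,2,4$ with $\bn=(4)$; since the arithmetic is exact, agreement on such small cases is a direct check of the residue formula.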
The arguments of the previous section generalize in a straightforward manner to these algebras. Let $\g$ be a simple, simply-laced Lie algebra with Cartan matrix $C$ and rank $r$. \subsection{The $\cQ$-system} Let $\bu$ denote the set of formal variables $\bu:=\{u_\al; u_{\al,i}|\ 1\leq \al\leq r, i\in \N\}$. The family of functions $\{\cQ_{\alpha,k}(\bu): \ 1\leq \alpha\leq r, k\in \Z_+\}$ is defined recursively as follows: \begin{eqnarray}\label{gquadratic} && \cQ_{\al,0}(\bu)=1,\quad \cQ_{\al,1}(\bu)=(u_{\al})^{-1},\nonumber \\ &&\cQ_{\al,k+1}(\bu) = \frac{{(\cQ_{\al,k}(\bu))^2 - \prod_{\beta\neq\alpha} (\cQ_{\beta,k}(\bu))^{-C_{\beta,\alpha}}} }{\displaystyle{u_{\al,k} \cQ_{\al,k-1}(\bu)}}. \end{eqnarray} \begin{remark}\label{reduction} Note that if $u_{\al,j}=1$ for all $\al$ and all $j$, then the $\cQ$-system is equivalent to the $Q$-system for simply-laced $\g$, with given initial conditions for $Q_{\al,1}$. The functions of $u_{\al}^{-1}$ thus defined are ``generalized Chebyshev polynomials'' in the variables $u_{\al}^{-1}.$ \end{remark} In particular, \begin{equation}\label{vtwo} \cQ_{\al,2}(\bu) =\frac{(1-\prod_{\beta} (u_{\beta})^{C_{\beta,\al}})}{u_{\al}^2 u_{\al,1}}. \end{equation} Therefore, the quadratic relation \eqref{gquadratic} can be expressed as \begin{equation} \cQ_{\al,j+1}(\bu) = \cQ_{\al,2}(\bu^{(j-1)}), \end{equation} where \begin{equation}\label{utranslate} u_{\al}^{(j)} = \frac{1}{\cQ_{\al,j+1}(\bu)},\ u_{\al,1}^{(j)} = \cQ_{\al,j}(\bu) u_{\al,j+1},\ u_{\al,l}^{(j)} = u_{\al,l+j},\ (l>1). 
\end{equation} \begin{lemma} A family of functions $\{\cQ_{\al,k}: \ 1\leq \alpha\leq r, k\in \Z_+\}$ satisfies \eqref{gquadratic} if and only if it satisfies the following recursion relation: \begin{eqnarray}\label{grecursion} \cQ_{\al,0}=1,\ \cQ_{\al,1}=u_{\al}^{-1},\nonumber\\ \cQ_{\al,k+1}(\bu) = \cQ_{\al,k}(\bu'), \end{eqnarray} where $\bu' = \bu^{(1)}$ is defined by equation \eqref{utranslate} and $\cQ_2(\bu)$ is defined by equation \eqref{vtwo}. \end{lemma} \begin{proof} Suppose the family of functions $\{\cQ_{\al,k}\}$ satisfies \eqref{gquadratic}. Then $\cQ_{\al,2}(\bu)$ satisfies \eqref{grecursion} because $$ \cQ_{\al,2}(\bu) = \frac{1}{u_{\al}'}= \cQ_{\al,1}(\bu') $$ from the definition \eqref{utranslate} for $j=1$. Suppose \eqref{grecursion} holds for all $\cQ_{\al,m}(\bu)$ with $m\leq k$. Then \begin{eqnarray*} \cQ_{\al,k+1} &=& \cQ_{\al,2}(\bu^{(k-1)})\qquad \hbox{(by \eqref{gquadratic})} \\ &=& \cQ_{\al,2}((\bu')^{(k-2)})\qquad \hbox{(by induction hypothesis)} \\ &=& \cQ_{\al,k}(\bu') \qquad \hbox{(by \eqref{gquadratic})}. \end{eqnarray*} Here, we used the fact that $(\bu')^{(k-2)} = \bu^{(k-1)}$, which follows from the induction hypothesis, because $$ (u_{\al}')^{(k-2)} = \frac{1}{\cQ_{\al,k-1}(\bu')} = \frac{1}{\cQ_{\al,k}(\bu)} $$ and so forth. By induction, the recursion \eqref{grecursion} holds for all $k$. Conversely, suppose the family $\{\cQ_{\al,k}(\bu)\}$ satisfies \eqref{grecursion}. Then again the relation \eqref{grecursion} holds for $\cQ_{\al,2}(\bu)$ by definition. Suppose it holds for all $m\leq k$. Then \begin{eqnarray*} \cQ_{\al,k+1}(\bu) &=& \cQ_{\al,k}(\bu') \qquad \hbox{(by \eqref{grecursion})}\\ &=& \cQ_{\al,2} (\bu'^{(k-2)})\qquad \hbox{(by induction hypothesis)} \\ &=& \cQ_{\al,2}(\bu^{(k-1)}), \end{eqnarray*} so that \eqref{gquadratic} holds for $\cQ_{\al,k+1}(\bu)$. The lemma follows by induction. \end{proof} \begin{cor} \begin{equation}\label{genrecursion} \cQ_{\al,k+j}(\bu) = \cQ_{\al,k}(\bu^{(j)}). 
\end{equation} \end{cor} \begin{proof} The case $j=1$ is the statement of the last Lemma. Suppose \eqref{genrecursion} holds for a fixed $k$ for all $l< j$. Then $$ \cQ_{\al,k+j}(\bu) = \cQ_{\al,k+j-1}(\bu') = \cQ_{\al,k}((\bu')^{(j-1)}) = \cQ_{\al,k}(\bu^{(j)}), $$ and \eqref{genrecursion} holds for all $j$ by induction. \end{proof} In the specialization to the case where $u_{\al,i}=1$ for $i>0$, the $\cQ$-system degenerates to the $Q$-system for $\g$ simply-laced. Therefore, the specialization of $\cQ_{\al,j}(\bu)$ to this point gives $Q_{\al,j}$. A theorem of Nakajima \cite{Nakajima} (see also \cite{Hernandez}) shows that the solution to the $Q$-system is the set of characters of the KR-modules of $\g$. That is, $Q_{\al,j}$ is the character of $\KR_{\al,j}$. \subsection{Generating functions} Fix $k\in\N$. Given a set of non-negative integers $\{m_{\al,i},n_{\al,i}:\ 1\leq i\leq k, 1\leq \alpha\leq r\}$ and a $\g$-dominant integral weight $\lambda = \sum_\al l_\al \omega_\al$, define for each $\al\in I_r$ and $i\in \N$ \begin{eqnarray}\label{gspin} q_{\al} &=& l_\al + \sum_{j=1}^k \sum_{\beta\in I_r} j (C_{\al,\beta}m_{\beta,j} - \delta_{\al,\beta} n_{\beta,j})\label{totalspin}\\ q_{\al,i}&=& q_\al + p_{\al,i} = l_\al +\sum_{j=1}^{k-i} \sum_{\beta\in I_r} j( C_{\alpha,\beta}m_{\beta,i+j}-\delta_{\alpha,\beta} n_{\beta,i+j}).\label{vacancy} \end{eqnarray} Below, we will use the notation $\bm = (m_{1,1},m_{1,2},...,m_{1,k},m_{2,1},...)$, $\bm_1=\{m_{1,1},...,m_{r,1}\}$ and so forth. Note that if we extend equation \eqref{vacancy} to $i=0$ then $q_\al=q_{\al,0}$. In the case that $i=k$ there is no dependence on the parameters $\bn,\bm$ and $q_{\al,k}=l_\al$. Define the generating function in the variables $\{u_\al;u_{\al,i}|\ 1\leq i\leq k, 1\leq \alpha\leq r\}$: \begin{equation}\label{gZ} Z_{\lambda;\bn}^{(k)}(\bu) = \sum_{\bm \in \Z_+^{r\times k}} \prod_{\al=1}^r u_\al^{q_\al} \prod_{i=1}^k {m_{\al,i}+q_{\al,i}\choose m_{\al,i}} u_{\al,i}^{q_{\al,i}}. 
\end{equation} This generating function is constructed so that the constant term (with $q_\al=0$ for all $\al$) is the $N$ side \eqref{N} of the identity \eqref{mainidentity} when all $u_{\al,i}=1$, because in the constant term, $q_{\al,i}=p_{\al,i}$. Note that, for any fixed set of values of $q_\al, (\al\in I_r)$, the coefficient of $\prod_\al u_\al^{q_\al}$ is a Laurent polynomial in the variables $\{u_{\al,i}\}$. The generating series is thus a Laurent series in the variables $\{u_{\al}\}$, with coefficients which are Laurent polynomials in the other variables. In fact, because of the form \eqref{gspin} of $q_\al$, the dependence on the variables $u_\al$ is such that, up to an overall factor depending on $\bn$ and $\lambda$, $Z_{\lambda,\bn}^{(k)}(\bu)$ is a power series in the variables $$ y_\al = \prod_\beta u_{\beta}^{C_{\al,\beta}}, $$ since $$\prod_\al \prod_\beta u_\al^{C_{\al,\beta}\sum_j jm_{\beta,j}} = \prod_\beta \left[\prod_\al u_\al^{C_{\al,\beta}}\right]^{\sum_j jm_{\beta,j}} = \prod_\beta y_\beta^{\sum_j jm_{\beta,j}}.$$ \begin{lemma} \begin{equation}\label{Zgone} Z_{\lambda;\bn_1}^{(1)}(\bu) = \prod_{\al=1}^r \frac{u_{\al,1}^{l_\al}u_{\alpha}^{-n_{\alpha,1}+l_\al}} {(1-\prod_{\beta=1}^r u_{\beta}^{C_{\alpha,\beta}})^{l_\al+1}}. \end{equation} where we interpret the denominator of the term corresponding to $\al$ on the right hand side as a power series in the variable $$ y_\al=\prod_\beta u_{\beta}^{C_{\al,\beta}}, \quad (\al\in I_r). $$ \end{lemma} Note that expanding as a power series in $y_\al$ is equivalent to expanding in $u_\al$ for each $\al$ since $y_\al$ has non-negative powers only in $u_\al$. \begin{proof} We use the definition \eqref{gZ}, noting that $q_{\al,1}=l_\al$ in this case: $$ Z_{\lambda;\bn_1}^{(1)} = \sum_{m_{1,1},...,m_{r,1}\geq 0} \prod_{\al} u_\al^{q_\al} { m_{\al,1}+l_\al\choose m_{\al,1}} u_{\al,1}^{l_\al}. $$ Here, $q_{\al} = l_\al + \sum_\beta C_{\al,\beta} m_{\beta,1} - n_{\al,1}$. 
Thus, \begin{eqnarray*} Z_{\lambda;\bn_1}^{(1)} = \prod_\al u_\al^{-n_{\al,1}+l_\al}u_{\al,1}^{l_\al} \sum_{m_{1,1},...,m_{r,1}} \prod_\al \left( \prod_\beta u_\beta^{C_{\al,\beta}}\right)^{m_{\al,1}}{m_{\al,1}+l_\al\choose m_{\al,1}}. \end{eqnarray*} We can perform the summation over each $m_{\al,1}$ using equation \eqref{binomialsum}. The Lemma follows. \end{proof} Note that $Z_{\lambda;\bn_1}^{(1)}(\bu)$ can be written in terms of the functions $\cQ(\bu)$: \begin{equation}\label{initialZ} Z_{\lambda;\bn_1}^{(1)}(\bu) = \prod_{\al=1}^r \frac{\cQ_{\al,1}(\bu)^{n_{\al,1}+l_\al+2}} {u_{\al,1} \cQ_{\al,2}(\bu)^{l_\al+1}}. \end{equation} Again, in each factor, the denominator corresponding to $\al$ is to be regarded as a power series expansion in the variable $u_\al$. We now proceed as in the $\sl_2$ case. \begin{lemma}\label{zgrecursion} The function $Z_{\lambda;\bn}^{(k)}(\bu)$ satisfies the following recursion relation: \begin{equation} Z_{\lambda;\bn}^{(k)}(\bu) = Z_{0;\bn_1}^{(1)}(\bu) Z_{\lambda;\bn_2,...,\bn_k}^{(k-1)}(\bu'), \end{equation} where $\bu'$ is defined by equation \eqref{utranslate} with $j=1$. 
\end{lemma} \begin{proof} Note the identity \begin{equation}\label{qgreaterthanone} q_\al^{(1)} := q_{\al}\Big|_{\bm_1=\bn_1=0} = 2 q_{\al,1}-q_{\al,2}, \end{equation} Therefore, \begin{eqnarray*} Z_{\lambda;\bn}^{(k)}(\bu) &=& \sum_{\bm_2,...,\bm_k} \prod_{\al}u_{\al,1}^{q_{\al,1}} u_{\al}^{q_{\al}^{(1)}}\prod_{i=2}^k u_{\al,i}^{q_{\al,i}}{ q_{\al,i}+m_{\al,i}\choose m_{\al,i}} \sum_{\bm_1} \prod_\al u_{\al}^{\sum_\beta C_{\al,\beta}m_{\beta,1} - n_{\alpha,1}}{q_{\al,1}+m_{\al,1}\choose m_{\al,1}}, \end{eqnarray*} Since the functions $\{q_{\al,1}\}_{\al\in I_r}$ do not depend on $\bm_1$, the sum over $\bm_1$ can be performed explicitly: \begin{eqnarray*} Z_{\lambda;\bn}^{(k)}(\bu) &=&\prod_\al\frac{ u_{\al}^{-n_{\al,1}}}{1-\prod_\beta u_{\beta}^{C_{\al,\beta}}} \sum_{\bm_2,...,\bm_k} \prod_{\al}\left[\frac{u_{\al}^2u_{\al,1}}{1-\prod_\beta u_{\beta}^{C_{\al,\beta}}}\right]^{q_{\al,1}} u_{\al}^{-q_{\al,2}}\prod_{i=2}^k u_{\al,i}^{q_{\al,i}}{ q_{\al,i}+m_{\al,i}\choose m_{\al,i}}\\ &=& Z_{0;\bn_1}^{(1)}(\bu) \sum_{\bm_2,...,\bm_k} \prod_{\al}\left[\frac{1}{\cQ_{\al,2}}\right]^{q_{\al,1}}\cQ_{\al,1}^{q_{\al,2}} \prod_{i>1} u_{\al,i}^{q_{\al,i}}{ q_{\al,i}+m_{\al,i}\choose m_{\al,i}}. \end{eqnarray*} We have used equation \eqref{Zgone} with $\lambda=0$ to identify the first factor. The second factor is the generating function $Z_{\lambda,\bn'}^{(k-1)}(\bu')$, where $\bn'_i=\bn_{i+1}$ with $i=1,...,k-1$ and $\bu'$ are the variables defined via the substitution \eqref{utranslate} with $j=1$. \end{proof} The main factorization theorem is: \begin{thm}\label{gfactorization} There is a factorization \begin{equation} Z_{\lambda;\bn}^{(k)}(\bu) = \prod_{\al=1}^r \cQ_{\al,1}(\bu) \left(\frac{\cQ_{\al,k}(\bu)}{\cQ_{\al,k+1}(\bu)}\right)^{l_\al+1} \prod_{i=1}^k \frac{\cQ_{\al,i}(\bu)^{n_{\al,i}}}{u_{\al,i}}. \end{equation} Here, each $\cQ_{\al,i}$ in the denominator is understood as a Laurent series expansion in $u_\al$ for each $\al$. 
\end{thm} \begin{proof} The Theorem is true for $k=1$ by equation \eqref{initialZ}. Assume it is true for $k-1$. Use the recursion of Lemma \ref{zgrecursion}: \begin{eqnarray*} Z_{\lambda;\bn}^{(k)}(\bu) &=& Z_{0;\bn_1}^{(1)}(\bu) Z_{\lambda;\bn'}^{(k-1)} (\bu') \\ &=& \prod_{\al=1}^r \frac{\cQ_{\al,1}(\bu)^{n_{\al,1}+2}} {u_{\al,1} \cQ_{\al,2}(\bu)} \times \cQ_{\al,1}(\bu') \left[\frac{\cQ_{\al,k-1}(\bu')}{\cQ_{\al,k}(\bu')}\right]^{l_\al+1} \prod_{i=2}^k \frac{\cQ_{\al,i-1}(\bu')^{n_{\al,i}}}{u'_{\al,i-1}} \\ &=&\prod_\al \frac{\cQ_{\al,1}(\bu)^{n_{\al,1} + 2}\cQ_{\al,2}(\bu)} {u_{\al,1} \cQ_{\al,2}(\bu)} \left[ \frac{\cQ_{\al,k}(\bu)}{\cQ_{\al,k+1}(\bu)}\right]^{l_\al+1} \frac{1}{\cQ_{\al,1}(\bu)} \prod_{i=2}^k \frac{\cQ_{\al,i}(\bu)^{n_{\al,i}}}{u_{\al,i}} \\ &=& \prod_\al \cQ_{\al,1}\left[\frac{\cQ_{\al,k}(\bu)}{\cQ_{\al,k+1}(\bu)} \right]^{l_\al+1} \prod_{i=1}^k \frac{\cQ_{\al,i}(\bu)^{n_{\al,i}}}{u_{\al,i}}, \end{eqnarray*} where we have used the fact that $\cQ_{\al,i}(\bu')=\cQ_{\al,i+1}(\bu)$. \end{proof} \begin{cor} There is a factorization formula \begin{equation}\label{gfact} Z_{\lambda;\bn}^{(k)}(\bu) = Z_{0;\bn_1,...,\bn_j}^{(j)}(\bu) Z_{\lambda;\bn^{(j)}}^{(k-j)} (\bu^{(j)}), \end{equation} where $\bu^{(j)}$ is defined by equation \eqref{utranslate} and $\bn^{(j)}=(\bn_{j+1},...,\bn_k)$. \end{cor} \begin{proof} The proof uses the factorization of Theorem \ref{gfactorization}. 
\begin{eqnarray*} Z_{\lambda;\bn}^{(k)}(\bu) &=& \prod_{\alpha} \frac{\cQ_{\al,1}(\bu) \cQ_{\al,k}(\bu)^{l_\al+1}}{\cQ_{\al,k+1}(\bu)^{l_\al+1}} \prod_{i=1}^j \frac{\cQ_{\al,i}(\bu)^{n_{\al,i}}}{u_{\al,i}} \times \prod_{i=j+1}^k \frac{\cQ_{\al,i}(\bu)^{n_{\al,i}}}{u_{\al,i}}\\ &&\hskip-.5in= \prod_{\alpha} \left[ \left(\frac{\cQ_{\al,1}(\bu)\cQ_{\al,j}(\bu)}{\cQ_{\al,j+1}(\bu)} \prod_{i=1}^j\frac{\cQ_{\al,i}(\bu)^{n_{\al,i}}}{u_{\al,i}}\right)\left( \frac{\cQ_{\al,k}(\bu)^{l_\al+1}\cQ_{\al,j+1}(\bu)}{\cQ_{\al,k+1}(\bu)^{l_\al+1}}\prod_{i=1}^{k-j} \frac{\cQ_{\al,i}(\bu^{(j)})^{n_{\al,i+j}}}{u_{\al,i}^{(j)}}\right)\right] \\ &&\hskip-.5in= \prod_{\alpha} \left[ \left(\frac{\cQ_{\al,1}(\bu)\cQ_{\al,j}(\bu)}{\cQ_{\al,j+1}(\bu)} \prod_{i=1}^j\frac{\cQ_{\al,i}(\bu)^{n_{\al,i}}}{u_{\al,i}}\right)\left( \frac{\cQ_{\al,k-j}(\bu^{(j)})^{l_\al+1}\cQ_{\al,1}(\bu^{(j)})} {\cQ_{\al,k-j+1}(\bu^{(j)})^{l_\al+1}}\prod_{i=1}^{k-j} \frac{\cQ_{\al,i}(\bu^{(j)})^{n_{\al,i+j}}}{u_{\al,i}^{(j)}}\right)\right]\\ &&= Z_{0;\bn_1,...,\bn_{j}}^{(j)}(\bu) Z_{\lambda;\bn_{j+1},...,\bn_{k}}^{(k-j)}(\bu^{(j)}). \end{eqnarray*} \end{proof} \subsection{Identity of power series} We need a Lemma about series expansions of $(Q_{\al,j})^{-1}$ in the variables $y_\al=u_\al^2 \prod_{\beta\sim \al} u_\beta^{-1}$. \begin{lemma}\label{seriesform} Let $A=\Z[u_1^{-1},...,u_{r}^{-1}]$. If we interpret $1/Q_{\al,2}$ as an element of the ring $A[[u_\al]]$, then for all $j\geq 2$, $$ \frac{1}{Q_{\al,j}}\in A[[u_\al]]. $$ \end{lemma} \begin{proof} This statement follows from the particular form of the $Q$-system and from the fact that $Q_{\al,j}\in A $ by Theorem \ref{polynomiality}. Note that for $j=2$, we interpret $$ \frac{1}{Q_{\al,2}} = u_\al^2 (1-y_\al)^{-1}= u_\al^2\sum_{m\geq 0}y_\al^m \in u_\al^2 A[[y_\al]] \subset A[[u_\al]]. $$ Then for any $j> 2$, negative powers of $Q_{\al,j}$ are in the same ring. For suppose $Q_{\al,j}^{-m}\in A[[u_\al]]$. 
Then using the $Q$-system, $$ \frac{1}{Q_{\al,j+1}} = \frac{Q_{\al,j-1}}{Q_{\al,j}^2(1-\prod_\beta Q_{\beta,j}^{-C_{\al,\beta}})}. $$ By the induction hypothesis and Theorem \ref{polynomiality}, $$ \frac{Q_{\al,j-1}}{Q_{\al,j}^2}\in A[[u_\al]]. $$ Moreover, $$ \prod_\beta Q_{\beta,j}^{-C_{\al,\beta}} = Q_{\al,j}^{-2}\prod_{\beta\sim\al} Q_{\beta,j}\in A[[u_\al]] $$ for the same reason. Therefore, the denominator is expanded as an element of $A[[u_\al]]$, that is, $$ \frac{1}{1-\prod_\beta Q_{\beta,j}^{-C_{\al,\beta}}} = \sum_{m\geq 0} \left(\prod_\beta Q_{\beta,j}^{-C_{\al,\beta}}\right)^{m}. $$ The Lemma follows by induction. \end{proof} \begin{defn} The evaluation map $\varphi_j$ acting on functions of $\bu$ is defined by $$ \varphi_j(u_{\al,l})=\left\{ \begin{array}{ll} 1,& l<j \\ u_{\al,l} & \hbox{otherwise},\end{array}\right. $$ extended to an algebra homomorphism. \end{defn} We use the fact, which follows from the definition \eqref{gquadratic} and remark \ref{reduction}, that \begin{equation} \varphi_j(\cQ_{\alpha,l}(\bu)) = Q_{\al,l}, (l\leq j),\qquad \varphi_j(\cQ_{\al,j+1}(\bu))=\frac{Q_{\al,j+1}}{u_{\al,j}}. \end{equation} This follows from the fact that $\varphi_j(\cQ_{\al,2}(\bu)),...,\varphi_j(\cQ_{\al,j}(\bu))$ satisfy the usual $Q$-system in this case, and the dependence of $\varphi_j(\cQ_{\al,j+1}(\bu))$ on $u_{\al,j}$ is explicitly just an overall factor, and it otherwise also satisfies the usual $Q$-system. \begin{defn} Let $Z_{\lambda;\bn}^{(k)}(\bu)$ be defined as the summation over $\bm$ as in equation \eqref{gZ}. Let $K=(k_1,...,k_r)\in \N^r$. Define $Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}$ to be the restricted sum generating function, defined as in \eqref{gZ} but with the summation over $\bm$ restricted to values of $q_{\al,j}\geq 0$ for all $j\geq k_\al$ for each $\al$. \end{defn} Recall that $\cP_{u_\al}$ denotes the power series part of the Laurent expansion in the variable $u_\al$ of a function of $u_\al$. \begin{lemma}\label{powerseriesg} Fix a root $\al$. 
Let $K=(k_1,...,k_r)$ with $k_\beta\geq j$ for $\beta\neq \al$ and $k_\al=j+1$. Define $\epsilon_\al$ to be an $r$-vector with $1$ in the $\al$th entry. Then \begin{equation}\label{powerserieseq} \cP_{u_\al} \varphi_j(Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}) = \cP_{u_\al} \varphi_j(Z_{\lambda;\bn}^{(k)}(\bu)^{[K-\epsilon_\al]}). \end{equation} \end{lemma} \begin{proof} Consider the factorization formula \eqref{gfact}. The restriction $[K]$ does not affect the first factor, so the factorization formula still holds for the restricted summation. We prove the Lemma by induction. The base step is to take $j=k-1$. Let $K=(k_1,...,k_\al,...,k_r)$ with $k_\beta\geq k-1$ for each $\beta$, and $k_\al=k$ \begin{eqnarray}\label{base} \varphi_{k-1} (Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}) &=& \varphi_{k-1} (Z_{0;\bn_1,...,\bn_{k-1}}^{(k-1)}(\bu)) \varphi_{k-1} (Z_{\lambda;\bn_k}^{(1)}(\bu^{(k-1)}) ^{[K]})\nonumber\\ &=& \left[\prod_\al\frac{Q_{\al,1} Q_{\al,k-1}}{Q_{\al,k}} \prod_{i=1}^{k-1} Q_{\al,i}^{n_{\al,i}}\right] \varphi_{k-1}( Z_{\lambda;\bn_k}^{(1)}(\bu^{(k-1)})^{[K]}) \\ &=& \prod_\al\frac{Q_{\al,1} Q_{\al,k-1}}{Q_{\al,k}} \prod_{i=1}^{k-1} Q_{\al,i}^{n_{\al,i}} \nonumber\\ &&\times\sum_{\bm_k\atop q_{\beta,i}\geq 0 \ (i\geq k_\beta)} \prod_\al \left[\frac{u_{\al,k-1}}{Q_{\al,k}}\right]^{q_{\al,k-1}} {m_{\al,k}+l_{\al}\choose m_{\al,k}} (Q_{\al,k-1}u_{\al,k})^{l_\al}.\nonumber \end{eqnarray} Note that we have assumed that $l_\al\geq 0$. Consider a term in the summation over $\bm_k$, with fixed $q_{\al,k-1}$ such that $ q_{\al,k-1}<0$. The dependence on the variables $u_\al$ in equation \eqref{base} is only via the functions $\{Q_{\beta,i}\}_{\beta\in I_r, i\in \N}$. That is, in the factor $$ \prod_{\beta\in I_r} Q_{\beta,k-1}^{1+l_\beta}Q_{\beta,1}Q_{\beta,k}^{-q_{\beta,k-1}-1}\prod_{i=1}^{k-1} Q_{\beta,i}^{n_{\beta,i}}.$$ Moreover, terms with non-negative powers in $u_\al$ can only come from the expansion in the factor corresponding to $\beta=\alpha$. 
If $q_{\al,k-1}<0$ this is a polynomial in the $Q_{\al,i}$ for various $i$, which is in $A$. Moreover, it comes with a positive overall power of $Q_{\al,1}=u_{\al}^{-1}$ and hence has no constant term in $u_{\al}$ at all. Therefore the power series expansion in $u_\al$ has no contribution from this term. We have shown that $$ \cP_{u_\al} \varphi_{k-1}(Z_{\lambda;\bn}^{(k)}(\bu))= \cP_{u_\al}\varphi_{k-1}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K]})= \cP_{u_\al} \varphi_{k-1}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K-\epsilon_\al]}). $$ Next, suppose the Lemma is true for $j+1$. Note that a restriction on the integers $q_{\beta,i}$ with $i\geq j$ does not involve the summation over the integers $m_{\al,i}$ with $i\leq j$. Therefore there is still a partial factorization of the generating function with restrictions into a product of two factors. That is, if $K=(k_1,...,k_r)$ with $k_\beta \geq j$ and $k_\al = j+1$, then $$ Z_{\lambda;\bn}^{(k)}(\bu)^{[K]} = Z_{0;\bn_1,...,\bn_j}^{(j)} (\bu)Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)})^{[K]}. $$ One should read the restriction in the second factor as a restriction to non-negative powers of $u_{\beta,i}$ with $i\geq k_\beta$. Therefore, we can write \begin{eqnarray*} \varphi_j ( Z_{\lambda,\bn}^{(k)}(\bu)^{[K]}) &=& \varphi_j( Z_{0;\bn_1,...,\bn_j}^{(j)}(\bu)) \varphi_j( Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)}) ^{[K]})\\ &=& \prod_\al\frac{Q_{\al,1} Q_{\al,j}}{Q_{\al,j+1}} \prod_{i=1}^j Q_{\al,i}^{n_{\al,i}} \times \varphi_j(Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)})^{[K]}) . \end{eqnarray*} We use the definition for the second factor: $$ \varphi_j( Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)})^{[K]}) = \sum_{\underset{q_{\beta,s}\geq 0 \ (s\geq k_\beta)}{\bm^{(j)}}} \prod_\al \left[\frac{u_{\al, j}}{Q_{\al,j+1}}\right]^{q_{\al,j}} Q_{\al,j}^{q_{\al,j+1}} \prod_{i=j+1}^k {m_{\al,i}+q_{\al,i}\choose m_{\al,i}} u_{\al,i}^{q_{\al,i}}. $$ Consider a term in the summation with $q_{\al,j}<0$ and $q_{\al,j+1}\geq 0$ for some $\al$. 
The dependence on $u_\al$ is contained in the functions $Q_{\beta,i}$. Moreover, non-negative powers of $u_\al$ can only come from the expansion of $Q_{\al,j+1}$ appearing in the denominator. If $q_{\al,j}<0$ there are no such terms, and we are left with a polynomial in the $Q_{\al,i}$s with an overall factor $Q_{\al,1}$, so there is no constant term in $u_\al$, and we have a polynomial in $u_{\al}^{-1}$. We have $$ \cP_{u_\al} \varphi_j (Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)})^{[K]}) =\cP_{u_\al} \varphi_j (Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)})^{[K-\epsilon_\al]}) $$ where $K$ has $k_\beta\geq j$ and $k_\al=j+1$. The Lemma follows by induction on $j$. \end{proof} We have the obvious corollary: \begin{cor}\label{alphaseries} Let $J=\{\al_1,...,\al_t\}\subset I_r$. Let $K$ be a set with $k_{\al}=j+1$ for $\al\in J$, and $k_\al=j$ for $\al\in I_r\setminus J$. Let $K'$ be the set with $k_\al=j$ for all $\al\in I_r$. Then we have \begin{equation}\label{seriesalpha} \cP_{u_{\al_1},...,u_{\al_t}} \varphi_{j} (Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}) = \cP_{u_{\al_1},...,u_{\al_t}} \varphi_{j} (Z_{\lambda;\bn}^{(k)}(\bu)^{[K']}). \end{equation} \end{cor} \begin{proof} This follows by repeating the argument in the Lemma above for several values of $\al$. \end{proof} This implies an identity of power series: \begin{thm}\label{gHKOTY} \begin{equation} \cP_{u_1,...,u_r} \varphi_{k+1}(Z_{\lambda;\bn}^{(k)}(\bu)) = \cP_{u_1,...,u_r}\varphi_{k+1}( Z_{\lambda;\bn}^{(k)}(\bu)^{[1,...,1]}). \end{equation} \end{thm} \begin{proof} This follows by induction on $j$ from the previous corollary, setting $t=r$. The evaluation map $\varphi_{k+1}$ can then be applied to both sides of the resulting power series identity. \end{proof} \begin{cor} The conjecture \ref{HKOTYconj} is true for $\g$ simply laced. \end{cor} \begin{proof} Theorem \ref{gHKOTY} is an identity of power series in $\{u_\al\}_{\al\in I_r}$. 
The constant term of this identity (the restriction to $q_\al=0$ for all $\al$) is the $M=N$ conjecture of \cite{HKOTY} for the case of $\g$ simply-laced. \end{proof} \section{The identity for non-simply laced Lie algebras} \subsection{The functions $\cQ_{\alpha,k}$} Let $\g$ be one of the non simply-laced algebras. It is convenient to define the integer $t_\al$, which takes the value 1 if $\al$ is a long root, $t_\al=2$ if $\al$ is a short root of $B_r, C_r$ or $F_4$, and $t_\al=3$ for the short root ($\al=2$) of $G_2$. We define a family of functions $\{\cQ_{\alpha,i}(\bu)|\ (\al,i)\in I_r\times \Z_+\}$, depending on the formal variables $\bu=\{u_\al, \ u_{\al,j}, a_j\ | \ \al\in I_r, \ j\in \N \}$, via the recursion relations: \begin{eqnarray}\label{quadraticQ} &&\cQ_{\al,0}(\bu)=1,\quad \cQ_{\al,1}(\bu)=u_\al^{-1}, \nonumber \\ &&\cQ_{\al,i+1}(\bu) = \frac{\cQ_{\al,i}(\bu)^2 -\prod_{\beta\sim\al} \mathcal T_i^{(\al,\beta)}(\bu)} {u_{\al,i} \cQ_{\al,i-1}(\bu)},\ i\geq1, \end{eqnarray} with \begin{equation*} \mathcal T_i^{(\al,\beta)}(\bu) = a_{i}^{(-i)\,{\rm mod}\, t_\al} \prod_{k=0}^{|C_{\al,\beta}|-1} \cQ_{\beta,\lfloor\frac{t_\beta i+k}{t_\al}\rfloor}(\bu). 
\end{equation*} Explicitly, we have $$ \mathcal T_i^{(\al,\beta)}(\bu) = \cQ_{\beta,i}(\bu)\quad \hbox{if $t_\al=t_\beta$} $$ and \begin{eqnarray} & B_r: & \cT_i^{(r-1,r)} = \cQ_{r,2i},\label{exceptions} \\ & & \cT_{2i-1}^{(r,r-1)} = a_{2i-1} \cQ_{r-1,i-1}\cQ_{r-1,i},\nonumber\\ & & \cT_{2i}^{(r,r-1)} = \cQ_{r-1,i}^2.\nonumber\\ & C_r: & \cT_{2i-1}^{(r-1,r)} = a_{2i-1}\cQ_{r,i-1}\cQ_{r,i},\nonumber\\ && \cT_{2i}^{(r-1,r)} = \cQ_{r,i}^2,\nonumber\\ && \cT_{i}^{(r,r-1)} = \cQ_{r-1,2i}.\nonumber\\ & F_4: & \cT_{i}^{(2,3)} = \cQ_{3,2i},\nonumber\\ && \cT_{2i-1}^{(3,2)} = a_{2i-1} \cQ_{2,i-1}\cQ_{2,i},\nonumber\\ && \cT_{2i}^{(3,2)} = \cQ_{2,i}^2.\nonumber\\ & G_2: & \cT_i^{(1,2)} = \cQ_{2,3i},\nonumber\\ && \cT_{3i-2}^{(2,1)} = a_{3i-2}^2 \cQ_{1,i} \cQ_{1,i-1}^2,\nonumber\\ && \cT_{3i-1}^{(2,1)} = a_{3i-1} \cQ_{1,i}^2 \cQ_{1,i-1}.\nonumber\\ && \cT_{3i}^{(2,1)} = \cQ_{1,i}^3,\nonumber \end{eqnarray} Note that when $a_i=u_{\al,i}=1$ for all $i$ and $\al$, this is just the $Q$-system of Section \ref{q-system}, with the identification of the initial conditions. Thus, we have a deformed $Q$-system, which we refer to as the $\cQ$-system from now on. \begin{lemma}\label{usefulindeed} A family of functions $\{\cQ_{\al,i} | \ \al\in I_r, i\in \Z_+ \}$ satisfies the $\cQ$-system \eqref{quadraticQ} if and only if it satisfies the recursion relation \begin{equation}\label{recQ} \cQ_{\al,i+t_\al}(\bu) = \cQ_{\al,i}(\bu'), \ i\geq 1, \end{equation} subject to the initial conditions that $\cQ_{\al,j} (j\leq t_\al)$ are defined by \eqref{quadraticQ}, and such that \begin{equation}\label{shiftedua} u_{\al}' = \frac{1}{\cQ_{\al,t_\al+1}},\quad u_{\al,j}' = \cQ_{\al,t_\al}^{\delta_{j,1}}u_{\al,t_\al+j},\quad a_{i}' = \cQ_{\gamma,1}^{\delta_{i<t_{\gamma'}}}a_{i+t_{\gamma'}} \end{equation} Here, $\gamma=r-1$ for $B_r$, $\gamma=r$ for $C_r$, $\gamma = 2$ for $F_4$ and $\gamma = 1$ for $G_2$, and $\gamma'$ is the short root connected to $\gamma$, i.e. 
$\gamma'=r,r-1,3,2$ for $\g=B_r,C_r,F_4,G_2$, respectively. The function $\delta_{i<j}$ is 1 if $i<j$ and 0 otherwise. \end{lemma} \begin{proof} Depending on the value of $t_\al$, the proof has up to three steps, but proceeds in a similar way as in the simply-laced case. Suppose that a family of functions $\{\cQ_{\al,j}(\bu)\}$ satisfies the $\cQ$-system \eqref{quadraticQ} for all $(\al,j)$. We will show that it also satisfies the recursion relation \eqref{recQ}. The initial conditions of both systems are the same by definition. Therefore, suppose that the functions $\cQ_{\al,j}(\bu)$ also satisfy the recursion relation \eqref{recQ} for any $j\leq t_\al k$ for some $k$. We will show that it also satisfies \eqref{recQ} for $j\leq t_\al(k+1)$. For all $\g$, the function $\cT_{t_\al k}^{(\al,\beta)}(\bu')$ is some power of the function $\cQ_{\beta,t_\beta k}(\bu')$. The induction hypothesis is that $\cQ_{\beta,t_\beta k}(\bu') = \cQ_{\beta,t_\beta(k+1)}(\bu)$. Therefore, \begin{equation}\label{recursionT} \cT_{t_\al k}^{(\al,\beta)}(\bu') = \cT_{t_\al(k+1)}^{(\al,\beta)}(\bu). \end{equation} Now, consider \begin{eqnarray} \cQ_{\al, t_\al k+1}(\bu') &=& \frac{\cQ_{\al,t_\al k }(\bu')^2 - \prod_{\beta\sim \al} \cT^{(\al,\beta)}_{t_\al k}(\bu')}{u_{\al,t_\al k}' \cQ_{\al,t_\al k-1} (\bu')} \quad \hbox{(by the $\cQ$-system \eqref{quadraticQ})}\nonumber \\ &=& \frac{\cQ_{\al,t_\al(k+1)}(\bu)^2 - \prod_{\beta\sim \al} \cT^{(\al,\beta)}_{t_\al k}(\bu')} {u_{\al,t_\al (k+1)} \cQ_{\al,t_\al (k+1)-1}(\bu)}\quad \hbox{(by the induction hypothesis)}\nonumber \\ &=&\cQ_{\al,t_\al(k+1)+1}(\bu)\qquad \hbox{(by equation \eqref{recursionT})} \label{plusone} \end{eqnarray} In the cases where $t_\al>1$, we must now consider the functions $\cT_{t_\al k + 1}^{(\al,\beta)}(\bu')$. If $\al=\gamma'$, this function is proportional to some power of $a_{t_\al k+1}'=a_{t_\al (k+1)+1}$ (for $k\geq 1$). 
It is also proportional to (a product of some powers of) $\cQ_{\beta, j}(\bu')$ with $j \in \{t_\beta k,t_\beta k +1\}$. By the induction hypothesis and by \eqref{plusone}, we have that $\cQ_{\beta,j}(\bu') = \cQ_{\beta,j+t_\beta}(\bu)$ for these values of $j$. Therefore, we have that \begin{equation}\label{recursionTtwo} \cT_{k t_\al+1}^{(\al,\beta)}(\bu') = \cT_{(k+1) t_\al+1}^{(\al,\beta)}(\bu). \end{equation} We can now use \eqref{recursionTtwo} and \eqref{plusone}, together with the induction hypothesis, to obtain, for $t_\al>1$, \begin{eqnarray*} \cQ_{\al,t_\al k + 2}(\bu') &=& \frac{\cQ_{\al,t_\al k + 1}(\bu')^2 - \prod_{\beta\sim \al} \cT_{t_\al k + 1}^{(\al,\beta)}(\bu')} { u_{\al,t_\al k + 1}' \cQ_{\al,t_\al k}(\bu')} \\ &=& \frac{\cQ_{\al,t_\al (k+1) + 1}(\bu)^2 - \prod_{\beta\sim \al} \cT_{t_\al (k+1)+1}^{(\al,\beta)}(\bu)}{u_{\al,t_\al(k+1) + 1} \cQ_{\al,t_\al(k+1)}(\bu)} \\ &=& \cQ_{\al,t_\al(k+1)+2}(\bu). \end{eqnarray*} Finally, consider the case of $t_\al=3$ for $\g=G_2$. In this case, $$ \cT_{3k+2}^{(2,1)} (\bu') = a_{3k+2}' \cQ_{1,k+1}(\bu')^2 \cQ_{1,k}(\bu') = a_{3k+5} \cQ_{1,k+2}(\bu)^2 \cQ_{1,k+1}(\bu) = \cT_{3(k+1)+2}^{(2,1)}(\bu). $$ It follows that for $G_2$, $\cQ_{2,3k+3}(\bu') = \cQ_{2,3(k+1)+3}(\bu)$. We have thus proven that the family of functions which satisfies the $\cQ$-system \eqref{quadraticQ} also satisfies the recursion \eqref{recQ}. Conversely, suppose we have a family of functions defined by the recursion relations \eqref{recQ}, with the functions $\cQ_{\al,j}(\bu)$ being identical to those satisfying the $\cQ$-system for all $j\leq t_\al$. Then we show that the family $\cQ_{\al,j}$ satisfies the $\cQ$-system for all $j$. Above, we showed that if the functions $\cQ_{\al,j}$ satisfy the recursion relation \eqref{recQ}, then the functions $\cT_j^{(\al,\beta)}$ satisfy a similar recursion relation: $$ \cT_j^{(\al,\beta)}(\bu') = \cT_{j+t_\al}^{(\al,\beta)}(\bu). 
$$ Therefore, if $\cQ_{\al,j}$ satisfies the $\cQ$-system for all $j\leq t_\al k$, then \begin{eqnarray*} \cQ_{\al,j+t_\al}(\bu) &=& \cQ_{\al,j}(\bu') \\ &=& \frac{ \cQ_{\al,j-1}(\bu')-\prod_{\beta\sim \al} \cT_{j-1}^{(\al,\beta)}(\bu')} {u_{\al,j-1}' \cQ_{\al,j-2}(\bu')} \\ &=& \frac{\cQ_{\al,j+t_\al-1}(\bu) - \prod_{\beta\sim \al} \cT_{j+t_\al-1}^{(\al,\beta)}(\bu)}{ u_{\al,j-1+t_\al} \cQ_{\al,j+t_\al-2} (\bu)}. \end{eqnarray*} But this is just the $\cQ$-system for $\cQ_{\al,j+t_\al}$. Therefore, by induction, the $\cQ$-system is satisfied for all $j, \al$. \end{proof} \begin{cor}\label{usefulcor} The family of functions $\cQ_{\al,j}$ defined by either equation \eqref{quadraticQ} or \eqref{recQ} satisfies the recursion \begin{equation} \cQ_{\al,k+t_\al j} (\bu)= \cQ_{\al,k}(\bu^{(j)}), \end{equation} where \begin{eqnarray}\label{uj} u_{\al}^{(j)} &=& \frac{1}{\cQ_{\al,jt_\al + 1}},\quad u_{\al,1}^{(j)}= \cQ_{\al, jt_\al} u_{\al,jt_\al+1},\nonumber \\ u_{\al,i}^{(j)} &=& u_{\al, i+ jt_\al}\ (i>1),\quad a_{i}^{(j)}= \cQ_{\gamma,j}^{\delta_{i<t_{\gamma'}}} a_{i+j t_{\gamma'}}. \end{eqnarray} \end{cor} \begin{proof} By induction. The case $j=1$ is the case of Lemma \ref{usefulindeed}. Suppose that the corollary holds for all $k$ and all $j<l$. We note that $(\bu')^{(j)} = \bu^{(j+1)}$, which follows from the definition by using Lemma \ref{usefulindeed}. Thus, $$ \cQ_{\al,k+t_\al l}(\bu) = \cQ_{\al,k+t_\al(l-1)}(\bu')= \cQ_{\al,k}((\bu')^{(l-1)}) = \cQ_{\al,k}(\bu^{(l)}). $$ By induction on $l$, the Corollary follows. \end{proof} \subsection{Generating functions} For a given natural number $k$, define the set of pairs $J_\g^{(k)} = \{(\al,j): \al\in I_r, 1\leq j\leq t_\al k \}$.
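As a concrete illustration (using the values of $t_\al$ fixed above), for $\g=B_2$, where $t_1=1$ for the long root and $t_2=2$ for the short root, the set for $k=1$ is $$J_{B_2}^{(1)}=\{(1,1),(2,1),(2,2)\},$$ so the index $j$ attached to a short root $\al$ runs over $t_\al$ times as many values as for a long root.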
Given a set of non-negative integers $\{ m_{\al,i},n_{\al,i} \ | \ (\al,i)\in J_\g^{(k)} \}$ and a dominant integral weight $\lambda=\sum_{\al=1}^r l_\al \omega_\al$, $l_\al \in \Z_+$, we define the total spin as before: $$ q_{\al} = l_\al+\sum_{(\beta, j)\in J_\g^{(k)} } j C_{\al,\beta} m_{\beta,j} -\sum_{j=1}^{t_\al k}j n_{\al,j}. $$ In this case, the vacancy numbers have the form, for $(\al,i)\in J_\g^{(k)}$: $$ p_{\al,i} = \sum_{j=1}^{t_\al k} \min(i,j) n_{\al,j} - \sum_{(\beta, j)\in J_\g^{(k)}} {\rm sgn}(C_{\al,\beta}) \min(|C_{\al,\beta}|j, |C_{\beta,\al}|i)m_{\beta,j}. $$ For $(\al,i)\in J_\g^{(k)}$, define the modified vacancy numbers as for the simply-laced case: \begin{eqnarray} q_{\al,i} &=& p_{\al,i} + q_{\al} \nonumber \\ &=& l_\al+\sum_{\underset{|C_{\al,\beta}|j>|C_{\beta,\al}|i}{(\beta, j) \in J_\g^{(k)}:} } {\rm sgn}(C_{\alpha,\beta}) (|C_{\al,\beta}|j-|C_{\beta,\al}|i) m_{\beta,j} - \sum_{j=i+1}^{t_\al k} (j-i) n_{\al,j}.\label{nonsimpleq} \end{eqnarray} Note that for any $\al$, $q_{\al,t_\al k}=l_\al.$ We list the explicit forms of the modified vacancy numbers for each of the algebras in the Appendix. We define generating functions $Z_{\lambda;\bn}^{(k)}(\bu)$ parametrized by sets of non-negative integers $\mathbf n := \{n_{\al,j}\ | \ (\al,j)\in J_\g^{(k)}\}$ and a dominant integral weight $\lambda$ as follows: \begin{equation}\label{zgen} Z_{\lambda;\bn}^{(k)}(\bu) = \sum_{\bm} \prod_{\al=1}^r u_\al^{q_\al} \prod_{i=1}^{t_\al k} u_{\al,i}^{q_{\al,i}} a_i^{\Delta_{\al,i}} {m_{\al,i}+q_{\al,i}\choose m_{\al,i}} , \end{equation} where $\bm = \{m_{\al,i}\geq 0\ | (\al,i)\in J_\g^{(k)}\}$ and $\Delta_{\al,i}=(-i\ {\rm mod}\ t_{\gamma'})\delta_{\al,\gamma'}m_{\al,i}$. Explicitly, the non-vanishing values of $\Delta_{\al,i}$ are \begin{eqnarray*} &B_r: & \Delta_{r,2i-1}=m_{r,2i-1} \\ &C_r: & \Delta_{r-1,2i-1}=m_{r-1,2i-1} \\ &F_4: & \Delta_{3,2i-1}=m_{3,2i-1} \\ &G_2: & \Delta_{2,3i-2}=2m_{2,3i-2},\quad \Delta_{2,3i-1}=m_{2,3i-1}. 
\end{eqnarray*} By a slight abuse of notation, throughout this section, we shall always denote by $Z_{\lambda;\bn}^{(j)}(\bu)$ the sum \eqref{zgen} with an appropriately truncated sequence $\bn$, namely, with $n_{\al,i}=0$ for all $i>t_{\al} j$. \subsubsection{Factorization properties of the generating function} \begin{lemma}\label{lemmageninit} For $k=1$, the generating function has the factorized form in terms of the $\cQ$-functions: \begin{equation}\label{initgen} Z_{\lambda;\bn}^{(1)}(\bu) =\prod_{\al=1}^r \frac{\cQ_{\al,1}\cQ_{\al,t_\al}^{l_\al+1}} {\cQ_{\al,t_{\al}+1}^{l_\al+1}} \prod_{i=1}^{t_\al} \frac{\cQ_{\al,i}^{n_{\al,i}}}{u_{\al,i}} \end{equation} where $\cQ_{\al,i}$ are defined by the $\cQ$-system \eqref{quadraticQ}. Here, the equality is understood as an equality of Laurent series in $u_\al$ (for each $\al\in I_r$), where each factor in the product over $\al$ should be expanded in the corresponding $u_\al$. \end{lemma} The proof is by direct calculation, which is done in Section \ref{appthree} of the Appendix. \begin{lemma}\label{recuZ} The generating function $Z_{\lambda;\bn}^{(k)}$ satisfies the recursion relation, for $k\geq 2$: \begin{equation}\label{Zrecursion} Z_{\lambda;\bn}^{(k)}(\bu) = Z_{0;\bn}^{(1)}(\bu) Z_{\lambda;\bn'}^{(k-1)}(\bu'), \end{equation} where $\bu'$ is defined by equation \eqref{shiftedua}, $\bn'$ is obtained from $\bn$ by omitting the integers $n_{\al,j}$ with $j\leq t_\al$, and $\bu^{(j)}$ is defined by equation \eqref{uj}. \end{lemma} \begin{proof} By direct calculation. See Section \ref{lemmarecuZ} of the Appendix.
\end{proof} \begin{thm}\label{Zfactorized} The generating function $Z_{\lambda;\bn}^{(k)}(\bu)$ factorizes as follows: \begin{equation}\label{factogen} Z_{\lambda;\bn}^{(k)} = \prod_{\al=1}^r \frac{\cQ_{\al,1}\cQ_{\al,t_\al k}^{l_\al+1}} {\cQ_{\al,t_{\al}k+1}^{l_\al+1}} \prod_{i=1}^{t_\al k} \frac{\cQ_{\al,i}^{n_{\al,i}}}{u_{\al,i}}, \end{equation} where again, the equality is understood as an equality of Laurent series in $u_\al$ (for each $\al\in I_r$), where each factor in the product over $\al$ should be expanded in the corresponding $u_\al$. \end{thm} \begin{proof} We proceed by induction. For $k=1$, the formula holds by Lemma \ref{lemmageninit}. Assuming it holds for $k-1$, we apply the induction hypothesis to eq.~\eqref{Zrecursion} and then use Lemma \ref{usefulindeed} to rewrite it as a function of $\bu$. \end{proof} \begin{cor} The generating function has a factorization as follows: \begin{equation}\label{factojgen} Z_{\lambda;\bn}^{(k)}(\bu) =Z_{0;\bn}^{(j)}(\bu)\, Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)}) \end{equation} where $\bn^{(j)}$ is obtained from $\bn$ by omitting the components $n_{\beta,i} \ (i\leq t_\beta j)$. \end{cor} \begin{proof} Using \eqref{factogen}, evaluate the ratio \begin{equation} \frac{Z_{\lambda;\bn}^{(k)}(\bu)}{Z_{0;\bn}^{(j)}(\bu)} =\prod_{\al=1}^r \frac{\cQ_{\al,t_\al j+1}\cQ_{\al,t_\al k}^{l_\al+1}} {\cQ_{\al,t_{\al}k+1}^{l_\al+1}} \frac{\cQ_{\al,t_\al j+1}^{n_{\al,t_\al j+1}}}{\cQ_{\al,t_\al j}u_{\al,t_\al j+1}} \prod_{i=t_\al j+2}^{t_\al k} \frac{\cQ_{\al,i}^{n_{\al,i}}}{u_{\al,i}}, \end{equation} then apply Corollary \ref{usefulcor} to express the r.h.s. as a function of $\bu^{(j)}$, easily identified with $Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)})$ via \eqref{factogen}. \end{proof} We also need to consider the following generating functions arising from ``partial summations'' $Z^{(k,p)}_{\lambda;\bn}$, for $0\leq p < \max\{t_\al\}$. Let us denote by $\Pi^<$ the set of short simple roots of $\g$.
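Explicitly, in the labeling used here (consistent with the values of $t_\al$ above), the short simple roots are $$\Pi^<=\{\al_r\}\ (B_r),\quad \Pi^<=\{\al_1,\dots,\al_{r-1}\}\ (C_r),\quad \Pi^<=\{\al_3,\al_4\}\ (F_4),\quad \Pi^<=\{\al_2\}\ (G_2),$$ that is, precisely the roots $\al$ with $t_\al>1$.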
Let $\Pi^>=\Pi\setminus \Pi^<$. \begin{defn} For each $\g$, let $$J_\g^{(k,p)} = \{(\al,i)\ | \ 1\leq i\leq k\ (\al\in \Pi^>),\ p<i\leq t_\al k\ (\al\in \Pi^<)\}.$$ Define \begin{eqnarray} Z_{\lambda;\bn}^{(k,p)}(\bu) &=& \sum_{m_{\al,i}\geq 0\ (\al,i)\in J_\g^{(k,p)}} \left[\prod_{\al\in \Pi^>} u_\al^{\tilde q_\al} \prod_{i=1}^k u_{\al,i}^{q_{\al,i}}{q_{\al,i}+m_{\al,i}\choose m_{\al,i}}\right]\nonumber\\ & & \hskip.5in\times \left[ \prod_{\beta\in \Pi^<} u_{\beta,p}^{q_{\beta,p}}\prod_{i=p+1}^{t_\beta k} a_{i}^{\Delta_{\beta,i}} u_{\beta,i}^{q_{\beta,i}} {q_{\beta,i}+m_{\beta,i}\choose m_{\beta,i}}\right] . \end{eqnarray} Here, $\tilde q_{\al} = q_{\al}|_{m_{\beta,i}=0\ (\beta,i)\notin J_{\g}^{(k,p)}}$. \end{defn} Note that $Z_{\lambda;\bn}^{(k,0)} = Z_{\lambda;\bn}^{(k)}$. Also note that $Z_{\lambda;\bn}^{(k,p)}(\bu)$ does not depend on the entries $n_{\al,i}$ with $i\leq p$ and $\al$ a short root. In terms of this partially summed generating function, we have the following factorization. \begin{lemma}\label{partialfactorization} \begin{equation} Z^{(k)}_{\lambda;\bn}(\bu)=\prod_{\beta\in \Pi^<} {\cQ_{\beta,1}\cQ_{\beta,p}\over \cQ_{\beta,p+1}} \prod_{i=1}^p {\cQ_{\beta,i}^{n_{\beta,i}}\over u_{\beta,i}} Z^{(k,p)}_{\lambda;\bn}(\bu^{(0,p)}) \end{equation} where \begin{equation}\label{subszerop} u^{(0,p)}_{\beta,p}={1\over \cQ_{\beta,p+1}},\ u^{(0,p)}_{\beta,p+1}=\cQ_{\beta,p}u_{\beta,p+1} \quad \forall\beta\in \Pi^<. \end{equation} The other components of $\bu$ are unchanged under the substitution. \end{lemma} \begin{proof} By direct calculation. See Section \ref{apptwo} in the Appendix. \end{proof} It is helpful to introduce, for fixed $\g$, $0\leq p< \max\{t_\al\}$ and $j$, the notation \begin{equation}\label{deftau} \tau_\al = \left\{\begin{array}{ll} t_\al j + p,& \al\in \Pi^<\\ j, & \al \in \Pi^>.\end{array}\right.
\end{equation} \begin{cor}\label{lastZfactor} \begin{eqnarray} Z_{\lambda;\bn}^{(k)}(\bu) &=& \left[\prod_{\al \in I_r}{\cQ_{\al,1}\cQ_{\al,\tau_\al}\over \cQ_{\al,\tau_\al+1}} \prod_{i=1}^{\tau_\al} {\cQ_{\al,i}^{n_{\al,i}}\over u_{\al,i}}\right] Z_{\lambda;\bn^{(j,p)}}^{(k-j,p)}(\bu^{(j,p)}) \end{eqnarray} where $\bu^{(j,p)} = (\bu^{(j)})^{(0,p)}$. That is, \begin{equation} u^{(j,p)}_{\beta, p}={1\over \cQ_{\beta,t_\beta j+p+1}},\ u^{(j,p)}_{\beta, p+1}=\cQ_{\beta,t_\beta j+p}u_{\beta,t_\beta j+p+1} \quad \forall \beta\in \Pi^< \ \end{equation} and the other components of $\bu^{(j,p)}$ are equal to those of $\bu^{(j)}$. The set $\bn^{(j,p)}$ is the set $\bn$ without the entries $n_{\al,i}$ with $i\leq \tau_\al$. \end{cor} \begin{proof} From the factorization \eqref{factojgen} and Lemma \ref{partialfactorization}, we have \begin{eqnarray*} Z_{\lambda;\bn}^{(k)} &=& Z_{0;\bn}^{(j)}(\bu) Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)})\\ &=&Z_{0;\bn}^{(j)}(\bu)\, \prod_{\beta\in \Pi^<} {\cQ_{\beta,1}(\bu^{(j)}) \cQ_{\beta,p}(\bu^{(j)})\over \cQ_{\beta,p+1}(\bu^{(j)})} \prod_{i=1}^p {\cQ_{\beta,i}^{n_{\beta,t_\beta j+i}}(\bu^{(j)}) \over u_{\beta,i}^{(j)}} \, Z_{\lambda;\bn^{(j,p)}}^{(k-j,p)}((\bu^{(j)})^{(0,p)}) \\ &=& Z_{0;\bn}^{(j)}(\bu)\, \prod_{\beta\in \Pi^<} {\cQ_{\beta,t_\beta j+1}\cQ_{\beta,t_\beta j+p}\over \cQ_{\beta,t_\beta j} \cQ_{\beta,t_\beta j+p+1}} \prod_{i=1}^p {\cQ_{\beta,t_\beta j+i}^{n_{\beta,t_\beta j+i}}\over u_{\beta,t_\beta j+i}} \, Z_{\lambda;\bn^{(j,p)}}^{(k-j,p)}(\bu^{(j,p)}) \end{eqnarray*} where we have used Corollary \ref{usefulcor}. The corollary follows from the factorization of Theorem \ref{Zfactorized} applied to the first factor. 
\end{proof} It is useful to write the formula for the last factor in Corollary \ref{lastZfactor} explicitly: \begin{equation} Z_{\lambda;\bn^{(j,p)}}^{(k-j,p)}(\bu^{(j,p)}) = \sum_{\bm^{(j,p)}} \prod_{\al}\left[ \frac{1}{\cQ_{\al,\tau_\al+1}}\right]^{\tilde q_{\al,\tau_\al}} \cQ_{\al,\tau_\al}^{q_{\al,\tau_\al+1}} \cQ_{\gamma,j}^{(\sum_{l=1}^{t_{\gamma'}-p-1} \Delta_{\gamma',\tau_{\gamma'}+l})}F_{j,p}(\bu) \end{equation} where $\bm^{(j,p)}=(m_{\al,i}: i>\tau_\al)$. Note that $\tilde q_{\al,\tau_\al}=q_{\al,\tau_\al}$ except in the case where $\al=\gamma$. Here, \begin{equation}\label{F} F_{j,p}(\bu) = \prod_\al \prod_{i>\tau_\al} u_{\al,i}^{q_{\al,i}} a_i^{\Delta_{\al,i}} {q_{\al,i}+m_{\al,i}\choose m_{\al,i}}. \end{equation} \subsection{Identity of power series} First, we need a lemma similar to Lemma \ref{seriesform} for non-simply laced algebras. \begin{lemma} Let $A = \C[u_1^{-1},...,u_r^{-1}]$. If $Q_{\al,2}^{-1}\in A[[u_\al]]$ for each $\al\in I_r$, then so is $Q_{\al,j}^{-1}$ for all $j$. \end{lemma} \begin{proof} The proof is identical to that of Lemma \ref{seriesform}, with the slight modification that we need to consider the $Q$-system in the more general form $$ \frac{1}{Q_{\al,j+1}} = \frac{Q_{\al,j-1}Q_{\al,j}^{-2}}{1-Q_{\al,j}^{-2} \prod_{\beta\sim \al} \cT_j^{(\al,\beta)}}, $$ and we note that for all $j,\al,\beta$, $\cT_j^{(\al,\beta)}\in A$ as it is a polynomial in the $Q$'s. The proof then proceeds in exactly the same way as for Lemma \ref{seriesform}. \end{proof} \subsubsection{Evaluation maps} \begin{defn}\label{generaleval} Define the evaluation map $\varphi_{j,p}$ as follows, extended by linearity. For $0\leq p<\max(t_\al)$, the map acts as $$ \varphi_{j,p}(a_i) = 1 \hbox{ if $i\leq \tau_{\gamma'}$}, $$ and leaves the other $a_i$'s unchanged. For $p=0$, it acts as $$ \varphi_{j,0}(u_{\al,i}) = 1 \hbox{ if $i< \tau_{\al}$}, $$ and leaves the other $u$-variables unchanged.
Finally, if $p>0$, $$ \varphi_{j,p}(u_{\al,i}) = 1 \hbox{ if $\al$ is long and $i\leq j$ or if $\al$ is short and $i<\tau_\al$,} $$ leaving the other $u$-variables unchanged. \end{defn} \begin{lemma} Let $0\leq p\leq t_\al-1$. Then $$ \cQ_{\al,t_\al j + 1+p} \hbox{ is independent of $u_{\beta,i}$ for $i\geq t_\beta j+\delta_{p>0}$ and $\beta\neq \al$ }. $$ and $$ \cQ_{\al,j} \hbox{ is independent of $u_{\al,i}$ with $i\geq j$.} $$ \end{lemma} \begin{proof} First, note that for each $j$, $\cQ_{\al,j+1}$ is a function of $u_{\al,j}$, $\cQ_{\al,i}$ with $i\leq j$ and $\cT_j^{(\al,\beta)}$ with $\beta\sim \al$. Suppose the statement of the lemma is true for all $l<j$ and all $p$, and for $j$ with $p=0$. We proceed in three steps. \begin{enumerate} \item If $t_\alpha = 3$ (i.e. $\g=G_2$) then we have that (for $p=1,2,3$) $\cQ_{2,3j+1+p}$ depends on $\cT_{3j+p}^{(2,1)}$ which depends on $\cQ_{1,i}$ with $i\leq j+1$. We can use the induction hypothesis (for $t_\al=1$) to deduce that this is independent of $u_{\beta,i}$ with $i\geq t_\beta j + 1$ for $\beta=1,2$. Moreover for each $p$, we have an explicit dependence on $u_{2,3j+p}$. By induction on $p$, we conclude that $\cQ_{2,3j+1+p}$ is independent of $u_{1,i}$ with $i>j$ and $u_{2,i}$ with $i\geq 3j+1+p$. For $t_\alpha=3$, we conclude that the statement of the Lemma holds for $j$ with $p=1,2$ and for $j+1$ with $p=0$. \item If $t_\al=2$ consider first $\cQ_{\al,2j+2}$ which depends on $\cT_{2j+1}^{(\al,\beta)}$, which depends on $\cQ_{\beta,i}$ with $i\leq t_\beta j + 1$. By the induction hypothesis this is independent of $u_{\delta,i}$ with $i\geq t_\delta j + 1$ for any root $\delta$. Thus, $\cQ_{\alpha, 2j+2}$ is independent of $u_{\beta,2j+1}$ if $\beta\neq \al$ and of $u_{\al,2j+2}$. We conclude that the statement of the Lemma holds for $j$ with $p=1$. We then consider $\cQ_{\al,2j+3}$ which depends on $\cT_{2j+2}^{(\al,\beta)}$. This depends on $\cQ_{\beta,t_\beta(j+1)}$, with $t_\beta = 1$ or $t_\beta = 2$. 
In either case, either with the induction hypothesis or the previous paragraph, this is independent of $u_{\delta, i}$ with $i\geq t_\delta(j+1)$. We conclude that $\cQ_{\al,2(j+1)+1}$ is independent of $u_{\beta,i}$ with $i\geq t_\beta (j+1)$ and of $u_{\al,2j+3}$. This is the statement of the Lemma with $p=0$ and $j+1$. \item If $t_\al =1$ then we consider $\cQ_{\al,j+2}$. This depends on $\cT_{j+1}^{(\al,\beta)}$ which depends on $\cQ_{\beta, t_\beta(j+1)}$. If $t_\beta=1$ we can use the induction hypothesis. If $t_\beta>1$ we use steps (1) or (2). In either case we can say that it is independent of $u_{\delta,i}$ with $i\geq t_\delta (j+1)$. Thus, we have the statement of the Lemma for $j+1$. \end{enumerate} The Lemma follows by induction. \end{proof} We also have, by a simple induction: \begin{lemma} $$ \cQ_{\al,j} \hbox{ is independent of $a_i$ with $i\geq t_{\gamma'}j/t_\al$.} $$ \end{lemma} We have the easy corollary: \begin{cor}\label{evalcor} \begin{eqnarray*} \varphi_{j,0} (\cQ_{\al,i}) &=& Q_{\al,i}\ (i\leq t_\al j) \\ \varphi_{j,0} (\cQ_{\al, t_\al j + 1}) &=& \frac{Q_{\al,t_\al j+1}}{u_{\al,t_\al j}},\\ \varphi_{j,p} (\cQ_{\al,i}) &= & Q_{\al,i} \ (p>0,\ i\leq \tau_\al + \delta_{t_\al,1}), \\ \varphi_{j,p}( \cQ_{\al,\tau_\al+1} )&= & \frac{Q_{\al,\tau_\al +1}}{u_{\al,\tau_\al}},\ \hbox{($\al$ short)}. \end{eqnarray*} \end{cor} \subsubsection{Series expansions} \begin{defn} Let $K=(k_1,...,k_r)\in \N^r$. The restricted summation $Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}$ is the generating function \eqref{zgen} restricted to $\bm$ such that $q_{\al,j}\geq 0$ for all $k_\al \leq j < t_\al k$ and for each $\al\in I_r$. \end{defn} \begin{lemma}\label{first} For fixed $j$ and $\al$, define $K=(k_1, ..., k_r)$ with $k_\al=t_\al j+1$ and $k_\beta\geq t_\beta j$ ($\beta\in I_r$).
Then $$ \cP_{u_\al} \varphi_{j,0}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}) = \cP_{u_\al} \varphi_{j,0}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K-\epsilon_\al]}) $$ \end{lemma} \begin{proof} Consider the factorization $$ Z_{\lambda;\bn}^{(k)}(\bu) = Z_{0;\bn}^{(j)}(\bu) \ Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)}) . $$ Let $K=(k_1,...,k_r)$ with $k_\al \geq t_\al j$, and consider the restricted generating function. The factorization formula still holds, with the restriction affecting only the second factor, because the first factor does not depend on the variables $u_{\beta,i}$ with $i\geq \tau_\beta$ for each $\beta$. Therefore, $$ Z_{\lambda;\bn}^{(k)}(\bu)^{[K]} = Z_{0;\bn}^{(j)}(\bu) \ Z_{\lambda;\bn^{(j)}}^{(k-j)}(\bu^{(j)})^{[K]} . $$ In the first factor, we have a product of $\cQ_{\al,i}$ with $i\leq t_\al j + 1$. Applying the evaluation map $\varphi_{j,0}$ and using Corollary \ref{evalcor}, the first factor becomes (after a cancellation of the factors $u_{\al,t_\al j}$) $$ \varphi_{j,0} (Z_{0;\bn}^{(j)}(\bu)) = \prod_{\al} \frac{Q_{\al,1} Q_{\al,t_\al j}}{Q_{\al,t_\al j + 1}} \prod_{i=1}^{t_\al j} Q_{\al,i}^{n_{\al,i}}. $$ The second factor can be written out explicitly as a summation over the variables $\bm^{(j)}:=\{m_{\al,i}| \ \al\in I_r, \ i>t_\al j\}$: $$ Z_{\lambda;\bn^{(j)}}^{(k-j)} (\bu^{(j)})^{[K]} = \sum_{\bm^{(j)}\atop q_{\al,i}\geq 0 \ (i\geq k_\al)} \prod_\al \left[\frac{1}{\cQ_{\al, t_\al j + 1}(\bu)}\right]^{q_{\al, t_\al j}} \cQ_{\al,t_\al j}(\bu)^{q_{\al,t_\al j+1}} F_{j,0}(\bu). $$ Here, $F_{j,0}(\bu)$ is defined in equation \eqref{F}. The evaluation map $\varphi_{j,0}$ has the following effect only: $$ \varphi_{j,0}(Z_{\lambda;\bn^{(j)}}^{(k-j)} (\bu^{(j)})^{[K]} ) = \sum_{\bm^{(j)}\atop q_{\al,i}\geq 0 \ (i\geq k_\al)} \prod_\al \left[\frac{u_{\al,t_\al j}}{Q_{\al, t_\al j + 1}}\right]^{q_{\al, t_\al j}} Q_{\al,t_\al j}^{q_{\al,t_\al j+1}} F_{j,0}(\bu). 
$$ For a fixed $\al$, we are interested in non-negative powers in $u_\al$ in the Laurent expansion of this function in $u_\al$. The dependence on $u_\beta$ for all $\beta$ is contained in the factors involving $Q_{\beta,i}$. Moreover, the Laurent expansion has positive powers of $u_\al$ which come only from the factors with $\beta=\al$. Assume now that $k_\al>t_\al j$ for some $\al$. Consider a term in the summation over $\bm^{(j)}$ such that $q_{\al, t_\al j + 1}\geq 0$ is fixed and $q_{\al,t_\al j}<0$ for some $\al$. Considering the dependence on the factors $Q_{\al,i}$ for various $i$, we see that no terms are left in the denominator: in this case, the denominator $Q_{\al,\tau_\al+1}$ in the prefactor cancels, and all other terms appear with non-negative powers. Therefore we have a polynomial in the $Q_{\al,i}$, and hence in $u_\al^{-1}$. Moreover, this polynomial has no constant term in $u_\al^{-1}$, because there is an overall factor of $Q_{\al,1}=u_{\al}^{-1}$. Thus, the power series expansion in $u_\al$ has no contribution from this term. The power series in $u_\al$ of the term with $q_{\al,\tau_\al+1}\geq 0$ has nontrivial contributions only from terms with $q_{\al,\tau_\al}\geq 0$. \end{proof} \begin{lemma}\label{second} For $0<p<\max(t_\al)$, and for $\al$ a fixed short root, let $K=(k_1,...,k_r)$ with $k_\beta \geq \tau_\beta$ for all $\beta$ and $k_\al = \tau_\al+1$. Then $$ \cP_{u_\al}\varphi_{j,p}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}) = \cP_{u_\al}\varphi_{j,p}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K-\epsilon_\al]}). $$ \end{lemma} \begin{proof} Consider the factorization \eqref{partialfactorization} for some fixed $j$ and $p$. Fix $K=(k_1,...,k_r)$ with $k_\beta\geq \tau_\beta$. The factorization \eqref{partialfactorization} still holds for the restricted generating function $Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}$ because the restriction only affects the second factor -- the first factor does not depend on the variables $u_{\al,i}$ with $i\geq \tau_\al$.
Apply the evaluation map $\varphi_{j,p}$ to the factorized formula. Using Corollary \ref{evalcor}, we have $$ \varphi_{j,p}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}) = \prod_{\al} \frac{Q_{\al,1}Q_{\al,\tau_\al}} {Q_{\al,\tau_\al+1}} \prod_{i=1}^{\tau_\al} {Q_{\al,i}^{n_{\al,i}}}\times \varphi_{j,p}( Z_{\lambda;\bn^{(j,p)}}^{(k-j,p)}(\bu^{(j,p)})^{[K]}). $$ The second factor is $$ \sum_{m_{\al,i}\ (i>\tau_\al)\atop q_{\al,i}\geq 0\ (i\geq k_\al)} \left[\prod_{\al\in \Pi^>} Q_{\al,j+1}^{-\tilde{q}_{\al,j}}\right] \prod_{\al\in \Pi^<} \left[\frac{u_{\al,\tau_\al}}{Q_{\al,\tau_\al+1}}\right]^{{q}_{\al,\tau_\al}} \prod_{\al\in I_r} (u_{\al,\tau_\al+1} Q_{\al,\tau_\al})^{q_{\al,\tau_\al+1}} Q_{\gamma,j}^\mu F_{j,p}(\bu). $$ Here, $\mu=\sum_{i=1}^{t_{\gamma'}-p-1}\Delta_{\gamma',\tau_{\gamma'}+i}\geq 0$. Note that if $i> \tau_\al$, then \begin{equation}\label{inequality} \tilde q_{\beta,i}:=q_{\beta,i}|_{m_{\al,\ell}=0\ (\ell\leq \tau_\al)}=q_{\beta,i}-\delta_{\beta,\gamma}\delta_{i,j} \sum_{\ell=1}^p \ell \ m_{\gamma',\tau_{\gamma'}-p+\ell}. \end{equation} Consider a term in the summation over $\bm^{(j,p)}$ with fixed $q_{\al,\tau_\al+1}\geq 0$ and $q_{\al,\tau_\al}<0$ for some short root $\al$. The positive powers in $u_\al$ in the generating function can come only from the expansion of the functions $Q_{\al,i}$ for various $i$ appearing in the factorized formula above. Examination of the dependence on such factors in this case shows that there are no negative powers of $Q_{\al,i}$, and hence such a term is a polynomial in $u_\al^{-1}$ with no constant term, due to the presence of the factor $Q_{\al,1}=u_\al^{-1}$ in the prefactor. Therefore, if $q_{\al, \tau_\al+1}\geq 0$, the power series expansion in $u_\al$ has contributions only from terms with $q_{\al,\tau_\al}\geq 0$.
\end{proof} Finally we note that if we replace the evaluation maps $\varphi_{j,p}$ which appear in Lemmas \ref{first} and \ref{second} with $\varphi_{k+1,0}$ (that is, evaluation at $u_{\beta,i}=1=a_i$ for all $i,\beta$) the Lemmas still hold. From this follows our main theorem, which implies the $M=N$ identity for non-simply laced algebras. \begin{thm}\label{main} Let $K=(1,...,1)$. Then \begin{equation}\label{finalequation} \cP_{u_1,...,u_r} \varphi_{k+1,0}(Z_{\lambda;\bn}^{(k)}(\bu)) = \cP_{u_1,...,u_r} \varphi_{k+1,0}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K]}) . \end{equation} \end{thm} \begin{proof} Lemmas \ref{first} and \ref{second} can be applied successively to several roots $\al_1,...,\al_t$. Thus we can write, when $t=r$, $$ \cP_{u_1,...,u_r}\varphi_{k+1,0}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K]})= \cP_{u_1,...,u_r}\varphi_{k+1,0}(Z_{\lambda;\bn}^{(k)}(\bu)^{[K']}), $$ where $K$ is the set with $k_\al=t_\al (j+1)$ and $K'$ is the set with $k_\al=t_\al j$ for all $\al\in I_r$. The Theorem follows by induction on $j$. \end{proof} \begin{cor} The identity \eqref{mainidentity} is true for any simple Lie algebra $\g$. \end{cor} \begin{proof} As for the simply-laced Lie algebras in the previous section, this is just the constant term of Equation \eqref{finalequation}. \end{proof} \section{Conclusion} In this paper we have proved a combinatorial identity, Conjecture \ref{HKOTYconj}, which implies the fermionic formula $M_{\lambda;\bn}$ for the multiplicities of the irreducible $\g$-modules in the tensor product of KR-modules for untwisted Yangians or quantum affine algebras. This was done in each case by constructing an appropriate generating function satisfying a factorization property, which allows us to prove that the restricted $M$-sum is equal to the unrestricted $N$-sum. This method appears to be quite general.
There are certain generalizations of these formulas for twisted Yangians \cite{twistedHKOTY}, and the Kirillov-Reshetikhin characters in these cases were shown to satisfy a $Q$-system \cite{He07}. We have not addressed these systems in this paper, although it is clear that the same methods should apply to these cases. More generally, it would be interesting to understand the most general form of the vacancy numbers that allows for the exact cancellations resulting in an $M=N$-type identity. The main structures introduced in this paper, the factorizing generating functions and the deformed $\cQ$-systems, clearly require further study. For example, it would be interesting to understand them from a representation-theoretical point of view. The most important property of $Q$-systems we used in this paper is their polynomiality, Theorem \ref{polynomiality}. This follows from representation theory. However, $Q$-systems can also be expressed in the context of cluster algebras \cite{Ke07}, where a similar property known as the Laurent phenomenon is satisfied. Polynomiality is a very special subcase of this phenomenon, which awaits further study. \vskip.2in \noindent{\bf Acknowledgements:} We thank V. Chari, N. Reshetikhin, and D. Hernandez for their valuable input. We also thank the referee for a careful reading of the manuscript and helpful remarks. RK thanks the CTQM at the University of Aarhus and the IPhT at CEA-Saclay for their hospitality. RK is supported by NSF grant DMS-05-00759. PDF acknowledges the support of the European Marie Curie Research Training Networks ENIGMA MRT-CT-2004-5652 and ENRAGE MRTN-CT-2004-005616, the ESF program MISGAM, the ACI program GEOCOMP, and the ANR program GIMP ANR-05-BLAN-0029-01. \begin{appendix} \section{Proof of Lemmas \ref{partialfactorization}, \ref{lemmageninit}, and \ref{recuZ}}\label{Zone} In this appendix, we prove the summation Lemmas \ref{partialfactorization}, \ref{lemmageninit}, and \ref{recuZ}.
To do so, we start from $Z^{(k)}_{\lambda;\bn}(\bu)$ as defined through \eqref{zgen}, and explicitly sum over some $m_{\al,i}$, with $i\leq p \leq t_\al$, in a specific order. \subsection{Preliminaries} We recall the definition \eqref{zgen} of the generating function: $$ Z_{\lambda;\bn}^{(k)}(\bu) = \sum_{\bm} \prod_{\al=1}^r u_\al^{q_\al} \prod_{i=1}^{t_\al k} u_{\al,i}^{q_{\al,i}} a_i^{\Delta_{\al,i}} {m_{\al,i}+q_{\al,i}\choose m_{\al,i}} , $$ where we list the modified vacancy numbers for each $\g$ explicitly, from the definition \eqref{nonsimpleq} (note that $q_{\al,0}=q_\al$): \begin{eqnarray*} B_r: &&q_{\al,i}=l_\al+\sum_{j=i+1}^k (j-i)(2m_{\al,j}-n_{\al,j}-m_{\al-1,j}-m_{\al+1,j}),\quad (\al<r-1) \\ && q_{r-1,i}= l_{r-1}+\sum_{j=i+1}^k (j-i)(2m_{r-1,j}-n_{r-1,j}-m_{r-2,j}) -\sum_{j=2i+1}^{2k} (j-2i)m_{r,j},\\ && q_{r,i}=l_r+\sum_{j=i+1}^{2k} (j-i)(2m_{r,j}-n_{r,j})-\sum_{i<2j\leq 2k} (2j-i)m_{r-1,j}. \end{eqnarray*} \begin{eqnarray*} C_r: && q_{\al,i}=l_\al+\sum_{j=i+1}^{2k} (j-i)(2m_{\al,j}-n_{\al,j}-m_{\al-1,j}-m_{\al+1,j}),\quad (\al<r-1)\\ && q_{r-1,i}=l_{r-1}+\sum_{j=i+1}^{2k} (j-i)(2m_{r-1,j}-n_{r-1,j}) -\sum_{j=i+1}^{2k} (j-i)m_{r-2,j}\\ && \hskip1in-\sum_{i<2j\leq 2k} (2j-i)m_{r,j},\\ && q_{r,i}=l_r+\sum_{j=i+1}^k (j-i)(2m_{r,j}-n_{r,j})-\sum_{j=2i+1}^{2k} (j-2i)m_{r-1,j}. \end{eqnarray*} \begin{eqnarray*} F_4: && q_{1,i}=l_1+\sum_{j=i+1}^k (j-i)(2m_{1,j}-n_{1,j}-m_{2,j})\\ && q_{2,i}=l_2+\sum_{j=i+1}^k (j-i)(2m_{2,j}-n_{2,j}-m_{1,j}) -\sum_{j=2i+1}^{2k} (j-2i)m_{3,j}\\ && q_{3,i}=l_3+\sum_{j=i+1}^{2k} (j-i)(2m_{3,j}-n_{3,j}-m_{4,j}) -\sum_{i<2j\leq 2k} (2j-i)m_{2,j} \\ && q_{4,i}=l_4+\sum_{j=i+1}^{2k} (j-i)(2m_{4,j}-n_{4,j}-m_{3,j}) \end{eqnarray*} \begin{eqnarray*} G_2: && q_{1,i}=l_1+ \sum_{k\geq j>i} (j-i)(2m_{1,j} -n_{1,j}) -\sum_{3k\geq j>3i} (j-3i)m_{2,j}\\ && q_{2,i}=l_2+ \sum_{3k\geq j>i} (j-i)(2m_{2,j} -n_{2,j} )-\sum_{3k\geq 3j>i} (3j-i)m_{1,j}. \end{eqnarray*} Here, we use the convention that $m_{0,i}:=0$.
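As a simple consistency check of these explicit expressions, setting $i=t_\al k$ makes all the summation ranges empty, so that $q_{\al,t_\al k}=l_\al$, in agreement with the remark following \eqref{nonsimpleq}. For instance, in the $G_2$ case, $$q_{2,3k}=l_2+\sum_{3k\geq j>3k}(\cdots)-\sum_{3k\geq 3j>3k}(\cdots)=l_2.$$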
As before, we use the notation $\gamma'$ and $\gamma$ for the short and long roots connected to each other in the Dynkin diagram, namely $\gamma'=r,r-1,3,2$ and $\gamma=r-1,r,2,1$ for $B_r,C_r,F_4,G_2$ respectively. We will sum \eqref{zgen} over the $m_{\al,i}$, $i=1,...,t_\al$ and $\al=1,...,r$ explicitly. The summation over these integers must be done in a certain order, because in each case, the $\{ q_{\gamma',i} | i=1,...,t_{\gamma'}-1\}$ depend on $m_{\gamma,1}$. We must therefore first sum over $m_{\gamma',i}$, $i=1,...,t_{\gamma'}-1$, before summing over the other variables. The intermediate summations are rational functions of the $u$'s and $a$'s, which can be expressed in terms of the functions $\cQ_{\al,i}$. We first present the partial summations leading to Lemma \ref{partialfactorization}, for which only the $m_{\beta,i}$, $\beta \in \Pi^<$, $i=1,2,...,p<t_{\gamma'}$, are summed over, starting from $\beta=\gamma'$; we then present the ``complete'' sums, corresponding to $p=t_{\gamma'}$ for the short-root $m$'s and also summed over the long-root $m$'s, leading to Lemmas \ref{lemmageninit} and \ref{recuZ}. Each elementary summation is an instance of the binomial series $\sum_{\mu\geq 0}{\mu+q\choose \mu}x^\mu=(1-x)^{-q-1}$. A crucial ingredient used repeatedly in the following is the fact that the $q$'s satisfy relations expressing $q_{\al,i-1}$ in terms solely of $m_{\al,i}$, $n_{\al,i}$ and the combination $2q_{\al,i}-q_{\al,i+1}$, with possible slight modifications involving only finitely many $m$'s.
These relations are (we use the notation $q_{\al,0}:= q_\al$ for convenience): \begin{eqnarray}\label{qs} &&\bullet \ {\rm short \ root}\ \gamma': \nonumber\\ &&q_{\gamma',j-1}=-n_{\gamma',j}+ \sum_{\beta\in \Pi^<} C_{\gamma',\beta} m_{\beta,j} +2 q_{\gamma',j}-q_{\gamma',j+1}-\delta_{j,t_{\gamma'}}m_{\gamma,1}, \quad 1\leq j\leq t_{\gamma'} \label{qgamp}\\ &&\bullet \ {\rm short \ root}\ \beta\neq \gamma': \nonumber\\ &&q_{\beta,j-1}=-n_{\beta,j}+ \sum_{\beta'\in \Pi^<} C_{\beta,\beta'} m_{\beta',j} +2 q_{\beta,j}-q_{\beta,j+1} \label{qbeta}\\ &&\bullet \ {\rm long \ root}\ \gamma: \nonumber\\ &&q_{\gamma}= -n_{\gamma,1}+\sum_{\beta\in \Pi^>} C_{\gamma,\beta} m_{\beta,1} +2q_{\gamma,1}-q_{\gamma,2}- \sum_{j=1}^{t_{\gamma'}}( j m_{\gamma',j} +\Delta_{\gamma',t_{\gamma'}+j})\label{qgam}\\ &&\bullet \ {\rm long \ root}\ \alpha\neq \gamma: \nonumber\\ &&q_{\al}= -n_{\al,1}+\sum_{\beta\in \Pi^>} C_{\al,\beta} m_{\beta,1} +2q_{\al,1}-q_{\al,2} \label{qal} \end{eqnarray} \subsection{Partial summations over short roots: proof of Lemma \ref{partialfactorization}}\label{apptwo} In each case, we must first sum over $\mu=m_{\gamma',1}$. Collecting all relevant factors in the summand of \eqref{zgen} and terms which depend on $u_{\gamma'}$ and $u_{\gamma',1}$, using \eqref{qgamp} for $j=1$, we have $$ \sum_{\mu\geq 0} u_{\gamma'}^{-n_{\gamma',1}+2 q_{\gamma',1}-q_{\gamma',2}} \left( a_1^{t_{\gamma'}-1} \prod_\beta u_\beta^{C_{\beta,\gamma'}} \right)^{\mu} u_{\gamma',1}^{q_{\gamma',1}} {\mu+q_{\gamma',1}\choose \mu} ={\cQ_{\gamma',1}^{n_{\gamma',1}}\over u_{\gamma',1}} {\cQ_{\gamma',1}^2 \over \cQ_{\gamma',2}} \frac{\cQ_{\gamma',1}^{q_{\gamma',2}}}{\cQ_{\gamma',2}^{q_{\gamma',1}} } , $$ where we have identified $\cQ_{\gamma',1}=u_{\gamma'}^{-1}$ and $$\cQ_{\gamma',2}={(1-a_1^{t_{\gamma'}-1}u_{\gamma}^{-1}\prod_{\beta\in \Pi^<} u_\beta^{C_{\beta,\gamma'}})u_{\gamma'}^{-2}\over u_{\gamma',1}}.$$ Assume $t_{\gamma'}=2$.
We may now sum over $m_{\beta,1}$ for the other short roots $\beta\neq \gamma'$, where $t_\beta=2$. Using the relation \eqref{qbeta} for $j=1$, we have for each $\beta\in \Pi^<$, $\beta\neq \gamma'$, a factor of the form $$ \sum_{\mu\geq 0} u_{\beta}^{-n_{\beta,1}+2 q_{\beta,1}-q_{\beta,2}} \left(\prod_{\beta'} u_{\beta'}^{C_{\beta',\beta}} \right)^{\mu} u_{\beta,1}^{q_{\beta,1}} {\mu+q_{\beta,1}\choose \mu} ={\cQ_{\beta,1}^{n_{\beta,1}}\over u_{\beta,1}} {\cQ_{\beta,1}^2 \over \cQ_{\beta,2}} \frac{\cQ_{\beta,1}^{q_{\beta,2}}}{\cQ_{\beta,2}^{q_{\beta,1}} } , $$ where we have identified $\cQ_{\beta,1}=u_{\beta}^{-1}$ and $$\cQ_{\beta,2}={(1-\prod_{\beta'} u_{\beta'}^{C_{\beta',\beta}})u_{\beta}^{-2}\over u_{\beta,1}}.$$ Gathering all the above contributions and restricting $q_\gamma$ to ${\tilde q}_\gamma\equiv q_\gamma\vert_{m_{\gamma',1}=0}$ yields Lemma \ref{partialfactorization} for $p=1$, the substitutions \eqref{subszerop} being induced by the factors $\cQ_{\beta,1}^{q_{\beta,2}}/\cQ_{\beta,2}^{q_{\beta,1}}$, $\beta\in \Pi^<$. If $t_{\gamma'}=3$ (the $G_2$ case, with $\gamma'=2$ and $\gamma=1$), the case $p=1$ was proved above. If $p=2$, we must now sum over $\mu=m_{2,2}$.
We use \eqref{qgamp} for $j=2$ to rewrite $q_{\gamma',1}=2\mu-n_{\gamma',2}+2q_{\gamma',2}-q_{\gamma',3}$ in the summation: \begin{equation}\label{gtwopart} {\cQ_{\gamma',1}^{n_{\gamma',1}}\over u_{\gamma',1}} {\cQ_{\gamma',1}^2 \over \cQ_{\gamma',2}} \sum_{\mu\geq 0} \frac{\cQ_{\gamma',1}^{q_{\gamma',2}}}{\cQ_{\gamma',2}^{q_{\gamma',1}}} a_2^{\mu} u_\gamma^{-2\mu} u_{\gamma',2}^{q_{\gamma',2}} {\mu+q_{\gamma',2}\choose \mu} ={\cQ_{\gamma',1}^{n_{\gamma',1}}\over u_{\gamma',1}} {\cQ_{\gamma',2}^{n_{\gamma',2}}\over u_{\gamma',2}} \cQ_{\gamma',1}{\cQ_{\gamma',2}\over \cQ_{\gamma',3}} \frac{\cQ_{\gamma',2}^{q_{\gamma',3}}}{\cQ_{\gamma',3}^{q_{\gamma',2}} } \end{equation} where we have identified $$\cQ_{\gamma',3}={\cQ_{\gamma',2}^2-a_2 \cQ_{\gamma,1}^2\over u_{\gamma',2}\cQ_{\gamma',1}}.$$ Replacing in the summation formula for $Z^{(k)}_{\lambda;\bn}(\bu)$ the quantity $q_\gamma$ by ${\tilde q}_{\gamma}\equiv q_{\gamma}\vert_{m_{\gamma',1}=m_{\gamma',2}=0}$ yields Lemma \ref{partialfactorization} for $p=2$ ($G_2$ case). \subsection{Complete summations: proof of Lemma \ref{lemmageninit}}\label{appthree} Starting from Lemma \ref{partialfactorization} with $p=t_{\gamma'}-1$, we have two cases to consider. In the case $t_{\gamma'}=2$ ($B_r,C_r,F_4$), let us first sum over $\mu=m_{\gamma',2}$. 
From \eqref{qgamp} for $j=2=t_{\gamma'}$, we get $q_{\gamma',1}=2\mu-n_{\gamma',2}+2q_{\gamma',2}-q_{\gamma',3} -m_{\gamma,1}$, hence the summation: $$ \sum_{\mu\geq 0} {\cQ_{\gamma',1}^{n_{\gamma',1}}\over u_{\gamma',1}} {\cQ_{\gamma',1}^2 \over \cQ_{\gamma',2}} \frac{\cQ_{\gamma',1}^{q_{\gamma',2}}}{\cQ_{\gamma',2}^{q_{\gamma',1}}} u_\gamma^{-2\mu} u_{\gamma',2}^{q_{\gamma',2}} {\mu+q_{\gamma',2}\choose \mu} ={\cQ_{\gamma',1}^{n_{\gamma',1}}\over u_{\gamma',1}} {\cQ_{\gamma',2}^{n_{\gamma',2}}\over u_{\gamma',2}} \cQ_{\gamma',1}{\cQ_{\gamma',2}\over \cQ_{\gamma',3}} \frac{\cQ_{\gamma',2}^{q_{\gamma',3}+m_{\gamma,1}}}{\cQ_{\gamma',3}^{q_{\gamma',2}} } $$ where $$\cQ_{\gamma',3}={\cQ_{\gamma',2}^2-\cQ_{\gamma,1}^2\over u_{\gamma',2}\cQ_{\gamma',1}}.$$ We may now sum on $\mu=m_{\beta,2}$ for the other short roots $\beta\neq \gamma'$. We use \eqref{qbeta} for $j=2$, and compute: $$ \sum_{\mu\geq 0} {\cQ_{\beta,1}^{n_{\beta,1}}\over u_{\beta,1}} {\cQ_{\beta,1}^2 \over \cQ_{\beta,2}} \frac{\cQ_{\beta,1}^{q_{\beta,2}}}{\cQ_{\beta,2}^{q_{\beta,1}}} u_{\beta,2}^{q_{\beta,2}} {\mu+q_{\beta,2}\choose \mu} ={\cQ_{\beta,1}^{n_{\beta,1}}\over u_{\beta,1}} {\cQ_{\beta,2}^{n_{\beta,2}}\over u_{\beta,2}} \cQ_{\beta,1}{\cQ_{\beta,2}\over \cQ_{\beta,3}} \frac{\cQ_{\beta,2}^{q_{\beta,3}}}{\cQ_{\beta,3}^{q_{\beta,2}} } $$ where $$\cQ_{\beta,3}={(1-\prod_{\beta'} \cQ_{\beta',2}^{-C_{\beta',\beta}}) \cQ_{\beta,2}^2\over u_{\beta,2}\cQ_{\beta,1}}.$$ We may now finally sum over $m_{\al,1}$ for all $\al \in \Pi^>$. 
We first do the summation over $\mu=m_{\gamma,1}$, using \eqref{qgam}: $$ \sum_{\mu\geq 0} u_{\gamma}^{-n_{\gamma,1}-\Delta_{\gamma',3}} \left( \cQ_{\gamma',2} \prod_{\al\in \Pi^>} u_\al^{C_{\al,\gamma}} \right)^\mu u_{\gamma,1}^{q_{\gamma,1}} {\mu+q_{\gamma,1}\choose \mu} = {\cQ_{\gamma,1}^{n_{\gamma,1}}\over u_{\gamma,1}} {\cQ_{\gamma,1}^2\over \cQ_{\gamma,2}} \cQ_{\gamma,1}^{\Delta_{\gamma',3}} \frac{\cQ_{\gamma,1}^{q_{\gamma,2}}}{\cQ_{\gamma,2}^{q_{\gamma,1}} } $$ where we have identified $\cQ_{\gamma,1}=u_\gamma^{-1}$ and $\cQ_{\gamma,2}=(1-\cQ_{\gamma',2}\prod_{\al\in \Pi^>} u_\al^{C_{\al,\gamma}})u_\gamma^{-2}/u_{\gamma,1}$. Next, we do the remaining summations over each $\mu=m_{\al,1}$, for long roots $\al\neq \gamma$, and use \eqref{qal}: $$ \sum_{\mu\geq 0} u_\al^{-n_{\al,1}+2q_{\al,1}-q_{\al,2}} \left( \prod_{\al'} u_{\al'}^{C_{\al',\al}} \right)^\mu u_{\al,1}^{q_{\al,1}} {\mu+q_{\al,1}\choose \mu}={\cQ_{\al,1}^{n_{\al,1}}\over u_{\al,1}} {\cQ_{\al,1}^2\over \cQ_{\al,2}} \frac{\cQ_{\al,1}^{q_{\al,2}}}{\cQ_{\al,2}^{q_{\al,1}} } $$ where $\cQ_{\al,1}=u_\al^{-1}$ and $$\cQ_{\al,2}={(1-\prod_{\al'} u_{\al'}^{C_{\al',\al}} )u_\al^{-2}\over u_{\al,1}}.$$ In the case $t_{\gamma'}=3$ ($G_2$), let us first sum over $\mu=m_{\gamma',3}$, after summing over $m_{\gamma',1},m_{\gamma',2}$ as in \eqref{gtwopart}. 
Using \eqref{qgamp} for $j=t_{\gamma'}=3$, namely $q_{\gamma',2}=2\mu -n_{\gamma',3}+2q_{\gamma',3}-q_{\gamma',4}-m_{\gamma,1}$, this gives $$ {\cQ_{\gamma',1}^{n_{\gamma',1}}\over u_{\gamma',1}} {\cQ_{\gamma',2}^{n_{\gamma',2}}\over u_{\gamma',2}} \cQ_{\gamma',1}{\cQ_{\gamma',2}\over \cQ_{\gamma',3}} \sum_{\mu\geq 0} u_{\gamma}^{-3\mu} \frac{\cQ_{\gamma',2}^{q_{\gamma',3}}}{\cQ_{\gamma',3}^{q_{\gamma',2}} } u_{\gamma',3}^{q_{\gamma',3}} {\mu+q_{\gamma',3}\choose \mu} = {\cQ_{\gamma',1}^{n_{\gamma',1}}\over u_{\gamma',1}} {\cQ_{\gamma',2}^{n_{\gamma',2}}\over u_{\gamma',2}} {\cQ_{\gamma',3}^{n_{\gamma',3}}\over u_{\gamma',3}} \cQ_{\gamma',1}{\cQ_{\gamma',3}\over \cQ_{\gamma',4}} \frac{\cQ_{\gamma',3}^{q_{\gamma',4}+m_{\gamma,1}}}{\cQ_{\gamma',4}^{q_{\gamma',3}} } $$ where $$\cQ_{\gamma',4}={\cQ_{\gamma',3}^2-u_{\gamma}^{-3}\over u_{\gamma',3}\cQ_{\gamma',2}}.$$ We next sum over $\mu=m_{\gamma,1}$, and use \eqref{qgam}. Since we have already summed over $m_{2,i}$ $(i=1,2,3)$ these integers should be set to zero in the definition of $q_\gamma$, namely $\tilde q_\gamma=2\mu-n_{\gamma,1}+2q_{\gamma,1}-q_{\gamma,2} -\Delta_{\gamma',4}-\Delta_{\gamma',5}$, to finally write: $$ u_\gamma^{-n_{\gamma,1}} \sum_{\mu\geq 0} \left( \cQ_{\gamma',3} u_{\gamma}^2 \right)^\mu u_{\gamma,1}^{q_{\gamma,1}} {\mu+q_{\gamma,1}\choose \mu}={\cQ_{\gamma,1}^{n_{\gamma,1}}\over u_{\gamma,1}} \frac{\cQ_{\gamma,1}^2}{\cQ_{\gamma,2}} \cQ_{\gamma,1}^{\Delta_{\gamma',4}+\Delta_{\gamma',5}} \frac{\cQ_{\gamma,1}^{q_{\gamma,2}}}{\cQ_{\gamma,2}^{q_{\gamma,1}}} $$ where we have identified $\cQ_{\gamma,1}=u_\gamma^{-1}$ and $\cQ_{\gamma,2}=(u_\gamma^{-2}-\cQ_{\gamma',3})/u_{\gamma,1}$. 
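Every partial summation performed in this appendix is an instance of the negative binomial series $\sum_{\mu\geq 0}{\mu+q\choose \mu}x^\mu=(1-x)^{-q-1}$, valid for $|x|<1$; this is the source of the various $\cQ$ factors above. The following quick symbolic check (ours, not part of the original argument; it uses sympy) compares Taylor coefficients of the closed form with the binomial coefficients:

```python
# Sanity check of the negative binomial series
#   sum_{mu >= 0} C(mu + q, mu) x^mu = (1 - x)^(-q - 1),  |x| < 1,
# which underlies each partial summation: expand the closed form
# and compare Taylor coefficients with the binomial coefficients.
import sympy as sp

x = sp.symbols('x')

for q in range(6):
    expansion = sp.series((1 - x)**(-(q + 1)), x, 0, 8).removeO()
    for m in range(8):
        assert expansion.coeff(x, m) == sp.binomial(m + q, m)
print("negative binomial series checked for q = 0..5")
```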
Collecting all the above terms in both cases $t_{\gamma'}=2$ and $3$, and renaming the summation variables $m'_{\al,i}=m_{\al,i+t_\al}$, as well as $n_{\al,i}'=n_{\al,i+t_\al}$, yields in general \begin{eqnarray} &&Z_{\lambda;\bn}^{(k)}(\bu)=\left(\prod_{\al=1}^r \frac{\cQ_{\al,1}\cQ_{\al,t_\al}} {\cQ_{\al,t_{\al}+1}} \prod_{i=1}^{t_\al} \frac{\cQ_{\al,i}^{n_{\al,i}}}{u_{\al,i}}\right)\times \label{bidule} \\ &&\sum_{\{m'_{\al,i}\geq 0\ | (\al,i) \in J^{(k-1)}_\g\}} \cQ_{\gamma,1}^{\mu'_\g}\prod_{\al=1}^r \frac{\cQ_{\al,t_\al}^{q'_{\al,1}}} {\cQ_{\al,t_\al+1}^{q'_\al}} \, \prod_{i=1}^{t_\al (k-1)} u_{\al,i+t_\al}^{q'_{\al,i}} a_{i+t_{\al}}^{\Delta_{\al,i}'} {m'_{\al,i}+q'_{\al,i}\choose m'_{\al,i}}. \nonumber \end{eqnarray} Here, all the primed functions of $m$ are the functions with $m'$ substituted for $m$, and similarly for $n$. More precisely, we have $q'_{\al,i}=q_{\al,i+t_\al}$, $q'_\al=q_{\al,t_\al}$, and $\mu'_\g=\sum_{j=1}^{t_{\gamma'}} \Delta_{\gamma',j}'=m'_{r,1},\ m_{r-1,1}',\ m_{3,1}',\ 2m_{2,1}'+m_{2,2}'$ respectively for $\g=B_n,C_n,F_4,G_2$. To complete the proof of Lemma \ref{lemmageninit}, in the case $k=1$, we have summed over all $m$'s, and the sum on the r.h.s. of \eqref{bidule} is trivial, whereas $q'_\al=q'_{\al,1}=l_\al$. Eqn. \eqref{initgen} follows. \subsection{Proof of Lemma \ref{recuZ}} \label{lemmarecuZ} The factorization \eqref{Zrecursion} follows from \eqref{bidule}, upon identifying the first factor with $Z_{0;\bn}^{(1)}(\bu)$ (from Lemma \ref{lemmageninit} with all $l_\al=0$), and the second line with $Z_{\lambda;\bn'}^{(k-1)}(\bu')$, with $\bu'$ as in \eqref{shiftedua}, the substitutions in the $a$'s, $u_\al$'s and $u_{\al,1}$'s being induced respectively by the factors $\cQ_{\gamma,1}^{\mu'_\g}$, $\cQ_{\al,t_\al+1}^{-q'_\al}$, and $\cQ_{\al,t_\al}^{q'_{\al,1}}$. 
\end{appendix} \bibliographystyle{plain} \def\cprime{$'$}
The sun is shining today, there is a lovely breeze stirring the curtains and jazz is playing on the CD player. All in all a wonderful day. Day two (this is tedious but I know it will pass) went well. Saved some points and today is going just as well. If I find I am getting hungry I try to drink water hence I am travelling everywhere even at home with a water bottle. DH is reorganising my laundry for me and it looks great. More shelves and he is having fun using his router. He has more plans yet. I think I may try tiling the walls - note to self: look up community college and see if there are any classes locally. Trying to get DD to do her invitations to her 21st but she has swanned off to the beach. Might be a very small guest list if she doesn't pull her finger out. Mum rang yesterday, all sweetness and light. I don't know whether she forgets the cruel and needless things she says or whether she chooses to ignore them. Anyway she overstepped the mark by a long way last Sunday and I don't think I will let this one slide. My foot is really sore today - had to go shopping for my girlfriend's 50th birthday present and I was really hobbling. Will put my sneakers on with the orthotics and that will help. Must update my tracker - what a wonderful spreadsheet and great tool. Am trying to track religiously this time. Drinks tonight with friends (happens every Sunday night) will be a challenge. Will try to limit myself to one wine and then diet soft drink and home early. Am going to take salsa and vegies with me to nibble on. That's the plan anyway. :o) Hope the drinks go well tonight! Your mum sounds like mine...lol
1. 1'. 1. Two pages, 12mo, bifolium, good condition. As follows: "I wish to apologise for having spoken to you in the way I did here today in the presence of several people [...] I had no knowledge of the matter in question until my Father appealed to me for my opinion, and I gave my opinion [excision] viz. that we should have been referred to at an earlier period, & if the leather was actually in stock & not promised to anyone else when the order was received, that it should have been used for those books." Six items, invoices and receipts, 1835-1836, some revealing the spreading of the binding among different firms of "Military Prayers" [prob. John Parker Lawson, "The Military Pastor", a series of Practical Discourses addressed to soldiers, with prayers for the use of the sick, published by J.W. Parker [pp. SPCK]]. Binders include: Camp [& Curtis] [not in BBTI - William to 1831]; Eliza Camp [BBTI: same address as William Camp for a time]; Russell & Spencer; Joseph Smith [& Son] [not in BBTI]; F. Remnant... Sixteen invoices, all 8vo, good condition. They bought account books and stationery relevant to their function. The overseers, for example, bought a "Poor Rates Book", had "Rate Receipt books" and "Poor Rate Notices" printed by Gotelee. The surveyors' invoices listed "Printing 200 Highway Rate Receipts". The churchwardens had a "Book of Common Prayer"
\begin{document} \title{Schr\"odinger-Virasoro Lie $H$-pseudoalgebras} \author{Zhixiang Wu} \address{School of Mathematical Sciences, Zhejiang University, Hangzhou, 310027, P.R.China} \email{wzx@zju.edu.cn} \thanks{The author is supported by ZJNSF(No. LY17A010015, LZ14A010001), and NNSFC (No. 11871421)} \subjclass[2000]{Primary 17B55,17B70, Secondary 81T05,18D35} \keywords{Leibniz pseudoalgebra; Schr\"odinger-Virasoro Lie algebra; Lie conformal algebra; Hopf algebra} \begin{abstract} To solve the problem of how to combine the Virasoro Lie algebra and the Schr\"odinger Lie algebra into a single Lie algebra, we introduce the notion of $m$-type Schr\"odinger-Virasoro Lie conformal algebras and the notion of Schr\"odinger-Virasoro Lie ${\bf k}[s]$-pseudoalgebras. Then we solve the above question by classifying $m$-type Schr\"odinger-Virasoro Lie conformal algebras and Schr\"odinger-Virasoro Lie ${\bf k}[s]$-pseudoalgebras. Meanwhile, we classify all Leibniz ${\bf k}[s]$-pseudoalgebras of rank two. \end{abstract} \maketitle \baselineskip=16pt \section{Introduction} Let Vect$(S^1)$ be the Lie algebra of $C^{\infty}$-vector fields on the circle $S^1$. Then an element of Vect$(S^1)$ can be described by the vector field $f(z)\partial_z$, where $f(z)\in\mathbb{C}[z,z^{-1}]$ is a Laurent polynomial. Let $L_n=-z^{n+1}\partial_z$. Then $\{L_n|n\in\mathbb{Z}\}$ is a basis of Vect$(S^1)$ satisfying $[L_n, L_m]=(n-m)L_{n+m}$. The central extension of Vect$(S^1)$ is called the {\it Virasoro algebra} and Vect$(S^1)$ is called the {\it Virasoro algebra without center}, or {\it Witt algebra}. A {\it Schr\"odinger algebra} is the Lie algebra of differential operators $\mathscr{D}$ on $\mathbb{R}\times \mathbb{R}^d$ of order at most one, satisfying: $(2\mathscr{M}\partial_t-\triangle_d)(\psi)=0\Rightarrow (2\mathscr{M}\partial_t-\triangle_d)(\mathscr{D}\psi)=0.$ An amalgamation Lie algebra of Vect$(S^1)$ and the Schr\"odinger algebra for $d=1$ is called the {\it Schr\"odinger-Virasoro algebra}. 
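As a quick symbolic check of ours (not part of the original argument), the Witt relation $[L_n,L_m]=(n-m)L_{n+m}$ for the vector fields $L_n=-z^{n+1}\partial_z$ can be verified on a generic test function:

```python
# Verify the Witt-algebra relation [L_n, L_m] = (n - m) L_{n+m}
# for the vector fields L_n = -z^{n+1} d/dz, acting on a generic f(z).
import sympy as sp

z = sp.symbols('z')
f = sp.Function('f')

def L(n, g):
    """Action of the vector field L_n = -z^{n+1} d/dz on an expression g."""
    return -z**(n + 1) * sp.diff(g, z)

for n in range(-2, 3):
    for m in range(-2, 3):
        commutator = L(n, L(m, f(z))) - L(m, L(n, f(z)))
        assert sp.simplify(commutator - (n - m) * L(n + m, f(z))) == 0
print("Witt relation verified for -2 <= n, m <= 2")
```

The Schr\"odinger-Virasoro algebra just defined contains this span of the $L_n$ together with the further current generators.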
It is an extension of Vect$(S^1)$ by a nilpotent Lie algebra formed with a bosonic current of weight $\frac32$ and a bosonic current of weight one. In 1994, M. Henkel (\cite{H}) introduced the Schr\"odinger-Virasoro algebra while he was trying to apply conformal field theory to models of statistical physics which either undergo a dynamics, whether in or out of equilibrium, or are no longer isotropic. It was shown that the Schr\"odinger-Virasoro Lie algebra is a symmetry algebra for many statistical physics models undergoing a dynamics with dynamical exponent $z=2$. In \cite{U}, J. Unterberger gave a representation of the Schr\"odinger-Virasoro algebra by using vertex algebras. In the same paper, he introduced an extension of the Schr\"odinger-Virasoro Lie algebra, called the {\it extended Schr\"odinger-Virasoro Lie algebra}. The Schr\"odinger-Virasoro Lie algebra has received much attention in recent years (cf. \cite{UR} and references therein). Following \cite{Ka}, Y. Su and L. Yuan defined a Schr\"odinger-Virasoro Lie conformal algebra $SV$ and an extended Schr\"odinger-Virasoro Lie conformal algebra $\widetilde{SV}$ by using the Schr\"odinger-Virasoro Lie algebra and the extended Schr\"odinger-Virasoro Lie algebra defined in \cite{U} respectively (\cite{SY}). $SV$ and $\widetilde{SV}$ are extensions of the Virasoro Lie conformal algebra introduced in \cite{Ka}. A natural question is whether there are other Lie algebras which are extensions of Vect$(S^1)$ by bosonic currents of different weights. This question has received much attention in recent years (\cite{CKW}, \cite{UR}, \cite{HY}, \cite{WXX}). In this paper, we will give a complete answer to this question in the language of pseudoalgebras. We call the counterpart of the extensions of the Virasoro conformal algebra a {\it Schr\"odinger-Virasoro Lie $H$-pseudoalgebra}, where $H={\bf k}[s]$. 
From the definition of $\widetilde{SV}$, a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra $L$ should be a free $H$-module of rank four. Suppose that $\{e_0, e_1, e_2, e_3\}$ is a basis of $L$. Then $L_0=He_0$ is the Virasoro conformal Lie algebra and $He_i$ for each $i(\geq 1)$ is a nontrivial representation of the Virasoro conformal algebra $He_0$. The nontrivial representation $He_i$ of $He_0$ was characterized in \cite{CK} and is uniquely determined by two parameters $\lambda_i$ and $\kappa_i$. Thus a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra is determined by a triple $\{(\lambda_i,\kappa_i)|1\leq i\leq 3\}$. For example, the extended Schr\"odinger-Virasoro Lie conformal algebra $\widetilde{SV}$ defined in \cite{SY} is a $\mathbb{C}[s]$-pseudoalgebra with the triple $\{ (\frac12, 0), (0, 0), (0, 0)\}$. By the way, the parameters $\lambda_i$ in $\widetilde{SV}$ are determined by the conformal weights of the generators of the extended Schr\"odinger-Virasoro Lie algebra in \cite{U}. To include the algebras in \cite{HY} and \cite{SY}, we use a skew-symmetric element $\alpha_m'=\sum\limits_{0\leq i<j\leq m}w_{ij}(s^{(i)}\otimes s^{(j)}-s^{(j)}\otimes s^{(i)})\in H^{\otimes 2}$ to replace the element $\alpha=1\otimes \partial-\partial\otimes 1$ in $\widetilde{SV}$ in the definition of the Schr\"odinger-Virasoro Lie $H$-pseudoalgebras (we use $s$ to replace $\partial$ in this paper). For a more precise description, see Definition \ref{def25} in Section 3. Thus the above question is equivalent to finding proper parameter pairs $(\lambda_i,\kappa_i)$ and elements $\alpha_m'$ that make the above free $H$-module of rank four into a Lie $H$-pseudoalgebra with a proper pseudobracket. In fact, we have determined all such pairs $(\lambda_i,\kappa_i)$ and $\alpha_m'$, which is the main task of the present paper. The paper is organized as follows. In Section 2, we classify all Leibniz $H$-pseudoalgebras of rank two, where $H={\bf k}[s]$. 
Meanwhile we determine all Leibniz $H$-pseudoalgebras which are extensions of Vect$(S^1)$ by a rank one Lie $H$-pseudoalgebra. The central extension of a Lie (Leibniz) $H$-pseudoalgebra $L$ is measured by a cohomology group $H^2(L,M)$ (respectively $Hl^2(L,M)$) (see \cite{BDK} and \cite{Wu2}), where $M$ is a representation of $L$. Using the classification of rank two Leibniz $H$-pseudoalgebras, we get $Hl^2(L,H)=H^2(L,H)=H$ when $L$ is the Virasoro Lie conformal algebra and $Hl^2(L,H)=H^{\otimes 2}\neq H^2(L,H)=\{a\in H^{\otimes 2}|a=-(12)a\}$ when $L$ is a trivial $H$-pseudoalgebra. With the preparation of Section 2, we introduce Schr\"odinger-Virasoro Lie $H$-pseudoalgebras in Section 3. Then we give some basic properties of these algebras. In Section 4, we determine the subalgebra of Schr\"odinger-Virasoro Lie ${\bf k}[s]$-pseudoalgebras generated by $e_0, e_1, e_2$. We call this subalgebra an {\it $m$-type Schr\"odinger-Virasoro Lie conformal algebra, or $H$-pseudoalgebra}, where $m\in\mathbb{N}$, the set of all nonnegative integers. At first glance at the definition, one may think that there are infinitely many Schr\"odinger-Virasoro Lie $H$-pseudoalgebras, as $m$ ranges over $\mathbb{N}$. However, we prove that $0\leq m\leq 3$ and that there are only five kinds of $m$-type Schr\"odinger-Virasoro Lie conformal algebras. By using the annihilation Lie algebras of these $m$-type Schr\"odinger-Virasoro Lie conformal algebras, one obtains a series of infinite-dimensional Lie algebras $SV_q$, which are extensions of Vect$(S^1)$ (see Example \ref{ex33} below). For example, let $SV_\rho$ be an infinite-dimensional space spanned by $\{L_n, Y_p, M_k|n\in \mathbb{Z}, p\in\rho+\mathbb{Z}, k\in 2\rho +\mathbb{Z}\}$ for any $\rho\in{\bf k}$. 
Then $SV_\rho$ is a Lie algebra with nonzero brackets given by \begin{eqnarray*} &[L_n, L_{n'}]=(n-n')L_{n+n'},\qquad [Y_p, Y_{p'}]=(p-p')M_{p+p'},\\ & [L_n, Y_p]=\left(\lambda_1(n+1)-p+\rho-1\right)Y_{n+p}+\kappa_1Y_{n+p+1},\\ & [L_n, M_k]=\left((2\lambda_1-1)(n+1)-k+2\rho-1\right)M_{n+k}+2\kappa_1M_{n+k+1},\end{eqnarray*} where $\lambda_1,\kappa_1\in{\bf k}$. The Schr\"odinger-Virasoro Lie algebra in \cite{U} is the algebra $SV_{\rho}$ in the case when $\lambda_1=\rho=\frac12$ and $\kappa_1=0$. The subalgebra of $SV_{\frac12}$ for $\kappa_1=0$ and $\lambda_1=\frac12$ generated by $\{Y_p,M_k|p\in\frac12+\mathbb{Z},k\in\mathbb{Z}\}$ is called a {\it Schr\"odinger algebra}. Thus this Schr\"odinger algebra is also an annihilation algebra of a Lie $H$-pseudoalgebra. In Section 5, we completely determine all Schr\"odinger-Virasoro Lie $H$-pseudoalgebras. This means that we have determined all extensions of Vect$(S^1)$ by bosonic currents of different weights. Throughout this paper, ${\bf k}$ is a field of characteristic zero. The unadorned $\otimes$ means the tensor product over ${\bf k}$. For the other undefined terms, we refer to \cite{R}, \cite{RU} and \cite{Sw}. \section{Classification of Leibniz $H$-pseudoalgebras of rank two} From Section 1, we know that an $m$-type Schr\"odinger-Virasoro Lie conformal algebra is an extension of the Virasoro Lie conformal algebra by a Lie $H$-pseudoalgebra of rank two. How many Lie $H$-pseudoalgebras of rank two are there? Further, how many Leibniz $H$-pseudoalgebras of rank two are there? These questions are answered in this section. First of all, let us recall some concepts and fix some notation. Let $H={\bf k}[s]$ be the polynomial algebra in a variable $s$. Then $H$ is a Hopf algebra with coproduct $\Delta(s^{(n)})=\sum\limits_{i=0}^ns^{(i)}\otimes s^{(n-i)},$ where $s^{(n)}=\frac {s^n}{n!}$ for any nonnegative integer $n$. 
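In the divided-power basis one has $s^{(i)}s^{(j)}={i+j\choose i}s^{(i+j)}$, and the coproduct $\Delta$ above is an algebra map, as it must be for $H$ to be a Hopf algebra (the Vandermonde identity). A small integer-arithmetic check of ours, not part of the original text:

```python
# Check that Delta(s^(n)) = sum_i s^(i) (x) s^(n-i) is an algebra map,
# using the divided-power product s^(i) s^(j) = C(i+j, i) s^(i+j).
from math import comb
from collections import Counter

def product(x, y):
    """Multiply linear combinations of divided powers; keys are exponents."""
    out = Counter()
    for i, a in x.items():
        for j, b in y.items():
            out[i + j] += comb(i + j, i) * a * b
    return out

def coproduct(x):
    """Delta on a linear combination; keys (i, j) stand for s^(i) (x) s^(j)."""
    out = Counter()
    for n, a in x.items():
        for i in range(n + 1):
            out[(i, n - i)] += a
    return out

def tensor_product(u, v):
    """Componentwise (divided-power) product in H (x) H."""
    out = Counter()
    for (i1, j1), a in u.items():
        for (i2, j2), b in v.items():
            out[(i1 + i2, j1 + j2)] += comb(i1 + i2, i1) * comb(j1 + j2, j1) * a * b
    return out

for m in range(5):
    for n in range(5):
        lhs = coproduct(product(Counter({m: 1}), Counter({n: 1})))
        rhs = tensor_product(coproduct(Counter({m: 1})), coproduct(Counter({n: 1})))
        assert lhs == rhs
print("Delta is an algebra homomorphism on divided powers s^(0)..s^(4)")
```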
A left $H$-module $A$ is called an {\it $H$-pseudoalgebra} if there is a mapping $\rho: A\otimes A\to H^{\otimes 2}\otimes_H A$ such that $\rho(h_1a_1\otimes h_2a_2)=(h_1\otimes h_2\otimes_H1)\rho(a_1\otimes a_2)$ for any $h_1,h_2\in H$ and $a_1,a_2\in A$. $\rho(a\otimes b)$ is usually denoted by $a*b$ and is called the {\it pseudoproduct}. There are various varieties of $H$-pseudoalgebras, for example, associative $H$-pseudoalgebras (\cite{R}), left symmetric $H$-pseudoalgebras ([W1]) and Leibniz $H$-pseudoalgebras (\cite{Wu2}), etc. A {\it Leibniz $H$-pseudoalgebra} $A$ is an $H$-pseudoalgebra such that its pseudoproduct $\rho$ satisfies the Jacobi identity $$\rho(\rho(a_1\otimes a_2)\otimes a_3)=\rho(a_1\otimes\rho(a_2\otimes a_3))-((12)\otimes_H1)\rho(a_2\otimes \rho(a_1\otimes a_3))$$ for any $a_1, a_2, a_3\in A$, where $((12)\otimes_H1)\sum_if_i\otimes g_i\otimes h_i\otimes_Hc_i=\sum_ig_i\otimes f_i\otimes h_i\otimes_Hc_i$ for any $f_i,g_i,h_i\in H$ and $c_i\in A$. In a Leibniz $H$-pseudoalgebra $A$, the pseudoproduct $\rho$ is also called a {\it pseudobracket}, and $\rho(a_1\otimes a_2)$ is usually denoted by $[a_1, a_2]$. A skew-symmetric Leibniz $H$-pseudoalgebra is called a {\it Lie $H$-pseudoalgebra}; that is, a Leibniz $H$-pseudoalgebra $A$ is a Lie $H$-pseudoalgebra if in addition $$\rho(a_1\otimes a_2)=-((12)\otimes_H1)\rho(a_2\otimes a_1)$$ for any $a_1,a_2\in A$. Similar to the case when $A$ is a Lie $H$-pseudoalgebra in \cite{BDK}, one can introduce the following notations. For any Leibniz $H$-pseudoalgebra $A$, let $A^{(1)}=[A,A]$ and $A^{(n)}=[A^{(n-1)}, A^{(n-1)}]$ for any $n\geq 2$. Then $A, A^{(1)}, \cdots, A^{(n)},\cdots$ is called the {\it derived series} of $A$ and $A$ is said to be {\it solvable} if there is an integer $n$ such that $A^{(n)}=0$. If $A^{(1)}=0$, then $A$ is said to be {\it abelian}. $A$ is said to be {\it finite} if $A$ is a finitely generated left $H$-module. 
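As a toy instance of this Jacobi identity, consider a rank one module $He_0$ with pseudobracket $[e_0,e_0]=\alpha\otimes_He_0$, where $\alpha=s\otimes 1-1\otimes s$ (the Virasoro Lie conformal algebra considered below). Encoding an element $\sum_i f_i\otimes g_i$ of $H^{\otimes2}$ as the polynomial $\sum_i f_i(x)g_i(y)$, the composition $(\alpha\Delta\otimes1)\alpha$ becomes $\alpha(x,y)\,\alpha(x+y,z)$, the composition $(1\otimes\alpha\Delta)\alpha$ becomes $\alpha(y,z)\,\alpha(x,y+z)$, and $(12)$ swaps $x$ and $y$. Under this encoding the Jacobi identity reduces to a polynomial identity, which we check here (our own verification, not part of the original text):

```python
# Check that alpha = s (x) 1 - 1 (x) s makes [e_0, e_0] = alpha (x)_H e_0
# satisfy the Jacobi identity, via the polynomial encoding of H^{(x)2}:
#   (alpha Delta (x) 1) alpha  ->  alpha(x, y) * alpha(x + y, z)
#   (1 (x) alpha Delta) alpha  ->  alpha(y, z) * alpha(x, y + z)
# and (12) exchanges x and y.
import sympy as sp

x, y, z = sp.symbols('x y z')

def alpha(u, v):          # s (x) 1 - 1 (x) s
    return u - v

lhs = alpha(x, y) * alpha(x + y, z)                  # (alpha Delta (x) 1) alpha
rhs = alpha(y, z) * alpha(x, y + z)                  # (1 (x) alpha Delta) alpha
swapped = rhs.subs({x: y, y: x}, simultaneous=True)  # (12) of the same term
assert sp.expand(lhs - (rhs - swapped)) == 0
print("Virasoro pseudobracket satisfies the Jacobi identity")
```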
Since $H={\bf k}[s]$ is a principal ideal domain, any finite Leibniz $H$-pseudoalgebra $A$ has a decomposition $A=A_0\oplus A_1$ as $H$-modules, where $A_0$ is a torsion $H$-module and $A_1$ is a free $H$-module. Let $0\neq a\in H$ be such that $aA_0=0$ and let $\{e_1, \cdots, e_n\}$ be a basis of $A_1$. Suppose $x\in A_0, y \in A$ and $[x, y]=\sum\limits_if_i\otimes_Hx_i+\sum\limits_{j=1}^n g_j\otimes_He_j$ for some $x_i\in A_0$. Then $0=[ax,y]=\sum\limits_i(a\otimes 1)f_i\otimes_Hx_i+\sum\limits_{j=1}^n (a\otimes 1)g_j\otimes_He_j.$ Hence $g_j=0$ for all $j$ and $[x,y]\in H^{\otimes 2}\otimes _HA_0$. Similarly, we can prove that $[y,x]\in H^{\otimes2}\otimes_HA_0$ for any $x\in A_0$ and $y\in A$. This means that $A_0$ is an ideal of $A$. Since $A_0$ is torsion, $A^{(1)}_0=0$. Thus any finite Leibniz $H$-pseudoalgebra is an extension of a Leibniz $H$-pseudoalgebra which is a free module by an abelian Lie $H$-pseudoalgebra which is a torsion module. Next we need only focus on a finite Leibniz $H$-pseudoalgebra $A$ which is a free $H$-module. Suppose the rank of a solvable Leibniz $H$-pseudoalgebra $A$ is $n$. Then $A$ is said to be a {\it solvable Leibniz $H$-pseudoalgebra with maximal derived series} if $A^{(n-1)}\neq 0$. \begin{lemma} \label{lem21} (1) Let $A$ be a solvable Leibniz $H$-pseudoalgebra. Suppose that $A$ is a free $H$-module of rank $n$ and $A^{(m)}=0$. Then $m\leq n$. (2) Let $A$ be a solvable Leibniz $H$-pseudoalgebra with maximal derived series. Suppose the rank of $A$ is $n$. Then there is a basis $\{e_1,e_2,\cdots,e_n\}$ such that $A^{(i)}\subseteq He_{i+1}+\cdots +He_n$ and the rank of $A^{(i)}$ is equal to $n-i$. \end{lemma} \begin{proof} (1) We prove (1) by induction on the rank $n$ of $A$. Suppose $n=1$. Then $A$ has a basis $\{e_1\}$ such that $\{a_{1}e_{1}\}$ is a basis of $A^{(m-1)}$. Since $A^{(m)}=0$, we have $0=[a_1e_1, a_1e_1]=(a_1\otimes a_1\otimes_H1)[e_1, e_1]$. Hence $[e_1, e_1]=0$ and $A^{(1)}=0$. Thus $m=1$. 
Next we assume that $n\geq 2$. Let $\{e_1, e_2,\cdots,e_n\}$ be a basis of $A$ such that $\{a_{r+1}e_{r+1}, \cdots, a_ne_n\}$ is a basis of $A^{(m-1)}$. Suppose $[e_i, e_j]=\sum\limits_{k=1}^nf_k\otimes _He_k$ for any $j\geq r+1$. Then $[e_i,a_je_j]=\sum\limits_{k=1}^n(1\otimes a_j)f_k\otimes_H e_k\in H^{\otimes 2}\otimes_H A^{(m-1)}$. So $f_k=0$ for $1\leq k\leq r$ and $[e_i,e_j]\in H^{\otimes2}\otimes_HJ$, where $J=He_{r+1}+\cdots+He_n$. In the same way, $[e_j,e_i]\in H^{\otimes2}\otimes_HJ$ for any $j\geq r+1$. Thus, $J$ is an ideal of $A$. Similar to the case of $n=1$, we can prove $J^{(1)}=0$ as $A^{(m-1)}$ is abelian. Let $\bar{A}=A/J$. Then $\bar{A}$ is a solvable Leibniz $H$-pseudoalgebra and it is a free $H$-module of rank $r$. Hence $\bar{A}^{(r)}=0$ and $A^{(r)}\subseteq J$. Thus $A^{(r+1)}=0$ and $r+1\leq n$. (2) We prove (2) again by induction on the rank $n$ of $A$. Assume $n\geq 2$. Then there is a basis $\{\varepsilon_1,\cdots,\varepsilon_n\}$ of $A$ such that $\{a_{r+1}\varepsilon_{r+1}, \cdots, a_n\varepsilon_n\}$ is a basis of $A^{(n-1)}$, where $r\leq n-1$. Let $J=H\varepsilon_{r+1}+\cdots+H\varepsilon_n$. Then $J$ is an ideal of $A$ and $J^{(1)}=0$. Assume that $m$ is the least integer such that $(A/J)^{(m)}=0$. Then $m\leq r$ and $m+1\geq n$ as $A^{(m+1)}=0$ and $A^{(n-1)}\neq 0$. Thus $n-1\leq m\leq r\leq n-1$ and $r=m=n-1$. Hence $A/J$ is a solvable Leibniz $H$-pseudoalgebra with maximal derived series. By the induction assumption, $A/J$ has a basis $\{e'_1, \cdots, e'_{n-1}\}$ such that $(A/J)^{(i)}\subseteq He'_{i+1}+\cdots+He'_{n-1}$ and the rank of $(A/J)^{(i)}$ is equal to $n-i-1$. Let $e_n=\varepsilon_n$ and choose $e_i\in A$ such that $e_i'=e_i+J$ for $1\leq i\leq n-1$. Then $\{e_1,\cdots,e_n\}$ is a basis of $A$ and $A^{(i)}\subseteq He_{i+1}+\cdots+He_n$. Since $n-i$ is the smallest integer such that $(A^{(i)})^{(n-i)}=0$, $n-i$ is not greater than the rank $t_i$ of $A^{(i)}$. Note that the rank of $He_{i+1}+\cdots+He_n$ is $n-i$. 
Then $t_i=n-i$. \end{proof} Let $A$ be a solvable Leibniz $H$-pseudoalgebra of rank two with maximal derived series. Then $A$ has a basis $\{e_1, e_2\}$ such that $\{ae_2\}$ is a basis of $A^{(1)}$ for some nonzero $a\in H$. In addition, there exist elements $\alpha', \eta_1, \eta_2\in H\otimes H$ such that $[e_1,e_1]=\alpha'\otimes_He_2$, $[e_1,e_2]=\eta_1 \otimes_H e_2$, $[e_2,e_1]=\eta_2 \otimes_H e_2$, and $[e_2, e_2]=0$. Since $A$ is not abelian, $\alpha',\eta_1,\eta_2$ are not all zero. Furthermore, we have the following lemma. \begin{lemma} \label{lem22} Let $A$ be a solvable Leibniz $H$-pseudoalgebra with maximal derived series. Suppose the rank of $A$ is two. Then $A$ is isomorphic to one of the following types: (i) $A$ has a basis $\{e_1,e_2\}$ such that $[e_1,e_1]= \alpha'\otimes_He_2$, $[e_1,e_2]=[e_2,e_1]=0$, where $\alpha'$ is an arbitrary nonzero element in $H\otimes H$. (ii) $A$ has a basis $\{e_1,e_2\}$ such that $[e_1,e_1]= 0$, $[e_1,e_2]=-((12)\otimes_H1)[e_2,e_1]=(A(s)\otimes1)\otimes_He_2$, where $0\neq A(s)=\sum\limits_{i=0}^nb_is^{(i)}\in H$ for some $b_i\in{\bf k}$. (iii) $A$ has a basis $\{e_1,e_2\}$ such that $[e_1,e_1]=[e_2,e_1]= 0$, $[e_1,e_2]=(A(s)\otimes1)\otimes_He_2$, where $A(s)=\sum\limits_{i=0}^nb_is^{(i)}\neq0$ for some $b_i\in{\bf k}$. \end{lemma} \begin{proof} Let $\{e_1,e_2\}$ be a basis of a solvable Leibniz $H$-pseudoalgebra $A$ with maximal derived series such that $[e_1,e_1]=\alpha'\otimes_He_2$, $[e_1, e_2]=\eta_1\otimes_He_2$, $[e_2,e_1]=\eta_2\otimes_He_2$ and $[e_2, e_2]=0$. Since $0=[[e_1, e_1], e_2]=[e_1, [e_1, e_2]]-(12)[e_1, [e_1, e_2]]$, we have $(1\otimes \eta_1\Delta)\eta_1=(12)(1\otimes \eta_1\Delta)\eta_1.$ Thus either $\eta_1=0$, or $\eta_1=\sum\limits_{i=0}^n\sum\limits_{j=0}^{p_i}a_{ij}s^{(i)}\otimes s^{(j)}\neq 0$ for some $a_{ij}\in {\bf k}$. 
If $\eta_1\neq 0$, then \begin{eqnarray*}\sum\limits_{i=0,j=0}^{n,p_i}\sum\limits_{i'=0,j'=0}^{n,p_{i'}}\sum\limits_{t=1}^ja_{ij}a_{i'j'}(s^{(t)}s^{(i')}\otimes s^{(i)}-s^{(i)}\otimes s^{(i')}s^{(t)})\otimes s^{(j')} s^{(j-t)}=0,\end{eqnarray*} which implies that $\sum\limits_{i=0,j=0}^{n,p_i}\sum\limits_{t=1}^ja_{ij}s^{(n)}s^{(t)}\otimes s^{(i)}\otimes s^{(p_n)}s^{(j-t)}=0.$ Hence $a_{ij}=0$ for $j\geq 1$ and any $i$. So $\eta_1=\sum\limits_{i=0}^nb_is^{(i)}\otimes1$, where $b_i=a_{i0}$ for $0\leq i\leq n$. Since $[[e_2,e_1],e_1]=-((12)\otimes_H1)[e_1,[e_2,e_1]]$, $(\eta_2\Delta\otimes 1)\eta_2=-(12)(1\otimes \eta_2\Delta)\eta_1$. Thus $\eta_2=0$ if $\eta_1=0$. In that case $\alpha'$ is an arbitrary nonzero element in $H^{\otimes 2}$. Next, we assume that $\eta_1=\sum\limits_{i=0}^nb_is^{(i)}\otimes 1$ and $\eta_2=\sum\limits_{i=0}^{n'}\sum\limits_{j=0}^{p_i'}b_{ij}s^{(i)}\otimes s^{(j)}\neq 0$ for some $b_{ij}\in {\bf k}$. Since $(\eta_2\Delta\otimes 1)\eta_2=-(12)(1\otimes\eta_2\Delta)\eta_1$, $$\sum\limits_{i,k=0}^{n'}\sum\limits_{j,l=0}^{p'_i,p'_k}\sum\limits_{t=0}^ib_{ij}b_{kl}s^{(k)}s^{(t)}\otimes s^{(l)}s^{(i-t)}\otimes s^{(j)}= -\sum\limits_{i=0}^{n'}\sum\limits_{j=0}^{p'_i}\sum\limits_{k=0}^nb_kb_{ij}s^{(i)}\otimes s^{(k)}\otimes s^{(j)}.$$ Then $n'=0$, $p_0'=n$ and $-b_lb_{0j}=b_{0l}b_{0j}$. Thus $b_{0l}=-b_l$ and $\eta_2=-(12)\eta_1$. Let us assume that $\eta_2\neq 0$, $b_n\neq 0$ and $\alpha'=\sum\limits_{i=0}^m\sum\limits_{j=0}^{q_i}y_{ij}s^{(i)}\otimes s^{(j)}\neq 0$. 
Observe that $$[[e_1, e_1], e_1]=[e_1, [e_1, e_1]]-(12)[e_1, [e_1, e_1]].$$ This means \begin{eqnarray}(\alpha'\Delta\otimes 1)\eta_2=(1\otimes\alpha'\Delta)\eta_1-(12)(1\otimes\alpha'\Delta)\eta_1,\label{abc1}\end{eqnarray} or equivalently, \begin{eqnarray} \qquad \qquad \sum\limits_{k=0}^n\sum\limits_{i=0}^m\sum\limits_{j=0}^{q_i} b_ky_{ij}\left((s^{(k)}\otimes s^{(i)}-s^{(i)}\otimes s^{(k)})\otimes s^{(j)}+ s^{(i)}\otimes s^{(j)}\otimes s^{(k)}\right)=0.\label{eq021}\end{eqnarray} From this, we get $\sum\limits_{k=0}^n\sum\limits_{i=0}^mb_ky_{in}(s^{(k)}\otimes s^{(i)}-s^{(i)}\otimes s^{(k)})=-b_n\sum\limits_{i=0}^m\sum\limits_{j=0}^{q_i} y_{ij}s^{(i)}\otimes s^{(j)}.$ Let $A(s)=\sum\limits_{i=0}^nb_is^{(i)}$ and $B(s)=\sum\limits_{i=0}^m\frac{y_{in}}{b_n}s^{(i)}$. Then $\alpha'=B(s)\otimes A(s)-A(s)\otimes B(s)$. It is easy to check that (\ref{abc1}) holds for this $\alpha'$. Now let $e_1'=e_1+B(s)e_2$. Then $[e_1',e_1']=[e_1,e_1]+((1\otimes B(s))\otimes _H1)[e_1,e_2]+((B(s)\otimes 1)\otimes_H1)[e_2,e_1]=0$, $[e_1',e_2]=-((12)\otimes_H1)[e_2,e_1']=\eta_1\otimes_He_2$. Finally, assume that $\eta_1=\sum\limits_{i=0}^nb_is^{(i)}\otimes 1\neq 0$ and $\eta_2=0$. From $[[e_1,e_1],e_1]=[e_1,[e_1,e_1]]-((12)\otimes_H1)[e_1,[e_1,e_1]]$ we get $(1\otimes \alpha'\Delta)\eta_1=(12)(1\otimes\alpha'\Delta)\eta_1$. Thus $\sum\limits_{k=0}^n\sum\limits_{i=0}^m\sum\limits_{j=0}^{q_i}b_ky_{ij}s^{(k)}\otimes s^{(i)}\otimes s^{(j)}=\sum\limits_{k=0}^n\sum\limits_{i=0}^m\sum\limits_{j=0}^{q_i}b_ky_{ij}s^{(i)}\otimes s^{(k)}\otimes s^{(j)}$. Hence $b_ky_{ij}=b_iy_{kj}$ and $y_{ij}=\frac1{b_n}b_iy_{nj}$. Thus $\alpha'=\frac1{b_n}\sum\limits_{i=0}^n\sum\limits_{j=0}^{q_n}b_iy_{nj}s^{(i)}\otimes s^{(j)}.$ Let $C(s)=\sum\limits_{j=0}^{q_n}\frac{y_{nj}}{b_n}s^{(j)}$ and $m=q_n$. Then $\alpha'=(\sum\limits_{i=0}^nb_is^{(i)})\otimes(\sum\limits_{j=0}^mc_js^{(j)})=A(s)\otimes C(s)$. Let $e_1'=e_1-C(s)e_2$. 
Then $[e_1',e_1']=[e_1,e_1]-((C(s)\otimes1)\otimes_H1)[e_2,e_1]-((1\otimes C(s))\otimes_H1)[e_1,e_2]=0$, $[e_1',e_2]=(\eta_1\otimes1)\otimes_He_2$ and $[e_2,e_1']=0$. \end{proof} Suppose that $A$ is a solvable Leibniz $H$-pseudoalgebra with maximal derived series and the rank of $A$ is $n$. Then $A$ has a basis $\{e_1, e_2, \cdots, e_n\}$ such that $J_i=He_i+He_{i+1}+\cdots+He_n$ is an ideal of $A$ for each $i$ by the proof of Lemma \ref{lem21}. Similarly, one can prove that both $A/J_i$ and $J_i$ are solvable Leibniz $H$-pseudoalgebras with maximal derived series. Thus $$\left\{\begin{array}{lll}[e_i,e_i]&=&\alpha_{ii}^{i+1}\otimes _He_{i+1}+\cdots+\alpha_{ii}^n\otimes_He_n\\ [e_i,e_j]&=&\alpha_{ij}^{j}\otimes _He_{j}+\cdots+\alpha_{ij}^n\otimes_He_n,\ for\ i<j\\ [e_j,e_i]&=&\beta_{ji}^{j}\otimes _He_{j}+\cdots+\beta_{ji}^n\otimes_He_n,\ for\ i<j \end{array}\right.$$ Hence $\alpha_{ii+1}^{i+1}=b_{ii+1}^{i+1}\otimes 1$ for some $b_{ii+1}^{i+1}\in H$ by Lemma \ref{lem22}. Let $L_0=He_0$ be a free left $H$-module with a basis $\{e_0\}$. Then $L_0$ is a Leibniz $H$-pseudoalgebra with $[e_0,e_0]=\alpha\otimes_He_0$ if and only if $\alpha=c(s\otimes 1-1\otimes s)$ for some $c\in{\bf k}$. In addition, $L_0$ is a Leibniz $H$-pseudoalgebra if and only if it is a Lie $H$-pseudoalgebra by Theorem 3.1 of \cite{Wu2}. If $c\neq 0$, then $[e_0',e_0']=(s\otimes 1-1\otimes s)\otimes_He_0'$ for $e_{0}'=c^{-1}e_0$. Thus, we can assume $c=1$. In the sequel, we always assume that $c=1$, that is, $\alpha=s\otimes 1-1\otimes s$. In this case, $L_0$ is called the {\it Virasoro Lie conformal algebra}. The following lemma was proved in \cite{CK}. \begin{lemma} \label{lem23} Suppose a free $H$-module $He$ with basis $\{e\}$ is a nontrivial representation of the Virasoro Lie conformal algebra $He_0$. Then $$[e_0, e]=(\lambda s\otimes 1-1\otimes s+\kappa\otimes 1)\otimes_He$$ for some $\lambda,\kappa\in{\bf k}$. 
This representation is irreducible if and only if $\lambda\neq 1$, and all finitely generated irreducible representations of the Virasoro Lie conformal algebra $He_0$ are of this kind.\end{lemma} For any two solvable ideals $I_1,I_2$ of a finite Leibniz $H$-pseudoalgebra $L$, $I_1+I_2$ is also a solvable ideal of $L$. Therefore $L$ has a unique maximal solvable ideal $I$. We call this ideal $I$ the solvable radical of $L$. It is easy to prove that the solvable radical of $L/I$ is zero and $L/I$ is a Lie $H$-pseudoalgebra. Suppose $L$ is a Leibniz $H$-pseudoalgebra of rank two and $I$ is its solvable radical. If $I=0$, then $L$ is a direct sum of two Virasoro Lie conformal algebras by Theorem 13.3 of \cite{BDK}, or \cite{AK}. If the rank of $I$ is two, then $L=I$ is solvable. It is either abelian, or a solvable Leibniz $H$-pseudoalgebra with maximal derived series, which is described by Lemma \ref{lem22}. If the rank of $I$ is equal to $1$, then $L$ has a basis $\{e_0,e_1\}$ such that $I=He_1$ and $L/I$ is isomorphic to the Virasoro Lie conformal algebra. Thus we can assume that \begin{eqnarray}\label{m1}\left\{\begin{array}{ll} [e_0,e_0]=\alpha\otimes_He_0+\alpha'\otimes_He_1,& [e_0,e_1]=\eta_1\otimes_He_1,\\ [e_1,e_0]=\eta_2\otimes_He_1, &[e_1,e_1]=0\end{array}\right.
\end{eqnarray} where $\alpha=s\otimes 1-1\otimes s$, $\alpha',\eta_1,\eta_2\in H\otimes H$ satisfying \begin{eqnarray}\label{aa1}\begin{array}{lll}(\alpha\Delta\otimes1)\alpha'+(\alpha'\Delta\otimes1)\eta_2&=&(1\otimes \alpha\Delta)\alpha'-(12)(1\otimes\alpha\Delta)\alpha'\\ &&+(1\otimes\alpha'\Delta)\eta_1-(12)(1\otimes\alpha'\Delta)\eta_1,\end{array}\end{eqnarray} \begin{eqnarray}\label{aa2}(\alpha\Delta\otimes1)\eta_1=(1\otimes\eta_1\Delta)\eta_1-(12)(1\otimes\eta_1\Delta)\eta_1,\end{eqnarray} \begin{eqnarray}\label{aa3}(\eta_2\Delta\otimes1)\eta_2=(1\otimes \alpha\Delta)\eta_2-(12)(1\otimes\eta_2\Delta)\eta_1,\end{eqnarray} \begin{eqnarray}\label{aa4}(\eta_1\Delta\otimes1)\eta_2=(1\otimes\eta_2\Delta)\eta_1-(12)(1\otimes\alpha\Delta)\eta_2.\end{eqnarray} From (\ref{aa2}) and Lemma \ref{lem23}, we get that either $\eta_1=0$, or $\eta_1=\lambda s\otimes 1-1\otimes s+\kappa\otimes 1$ for some $\lambda,\kappa\in {\bf k}$. From (\ref{aa3}) and (\ref{aa4}), we get $((\eta_1+(12)\eta_2)\Delta\otimes 1)\eta_2=0$. Thus either $\eta_2=0$, or $\eta_2=-(12)\eta_1$. If $\eta_2\neq 0$, then $\eta_1=\lambda s\otimes 1-1\otimes s+\kappa\otimes 1$ and $\eta_2=-\lambda\otimes s+s\otimes 1-\kappa\otimes 1$. \begin{lemma}\label{Aaa1} Let $\alpha=s\otimes 1-1\otimes s, \eta_1=-(12)\eta_2=\lambda s\otimes 1-1\otimes s+\kappa\otimes 1$, $\alpha'=\sum\limits_{j=0}^m\sum\limits_{i=0}^{p_j}x_{ij}s^{(i)}\otimes s^{(j)}\neq 0$ for some $x_{ij}\in{\bf k}$. Suppose $\alpha'$ is a solution of (\ref{aa1}). 
Then $\alpha'=\beta+(1\otimes A(s))\eta_1+(A(s)\otimes 1)\eta_2-\alpha\Delta(A(s))$ for some $A(s)\in H$, where $$\beta=\left\{\begin{array}{ll}x_{10}\alpha&(\kappa,\lambda)=(0,0)\\ x_{12}(s\otimes s^{(2)}-s^{(2)}\otimes s)+ x_{20}(s^{(2)}\otimes 1-1\otimes s^{(2)})&(\kappa,\lambda)=(0,-1)\\ x_{13}(s\otimes s^{(3)}-s^{(3)}\otimes s)+x_{30}(s^{(3)}\otimes 1-1\otimes s^{(3)})&(\kappa,\lambda)=(0,-2)\\ x_{34}(s^{(3)}\otimes s^{(4)}-s^{(4)}\otimes s^{(3)})&(\kappa, \lambda)=(0,-5)\\ x_{36}(s^{(3)}\otimes s^{(6)}-s^{(6)}\otimes s^{(3)}-3(s^{(4)}\otimes s^{(5)}-s^{(5)}\otimes s^{(4)}))&(\kappa,\lambda)=(0,-7)\\ 0&otherwise,\end{array}\right.$$ for some $x_{ij}\in{\bf k}$.\end{lemma} \begin{proof}Let $e_0'=e_0+A(s)e_1$ for any $A(s)\in H$. Then $[e_0',e_0']=\alpha\otimes_He_0'+\alpha''\otimes_He_1$, $[e_0',e_1]=\eta_1\otimes_He_1$ and $[e_1,e_0']=\eta_2\otimes_He_1$, where $\alpha''=\alpha'+(1\otimes A(s))\eta_1+(A(s)\otimes 1)\eta_2-\alpha\Delta(A(s)).$ Thus $\beta=(1\otimes A(s))\eta_1+(A(s)\otimes 1)\eta_2-\alpha\Delta(A(s))$ is a solution of (\ref{aa1}) for any $A(s)\in H$. In particular, $\beta=\lambda A(s)\alpha$ for any $A(s)\in{\bf k}$ is a solution of (\ref{aa1}). Thus $x_{10}\alpha$ is a solution of (\ref{aa1}) for any $x_{10}\in {\bf k}$ provided that $\lambda\neq 0$. It is easy to check that $x_{10}\alpha$ is also a solution of (\ref{aa1}) if $\lambda=0$. Applying the functor $\varepsilon\otimes1\otimes 1$ to (\ref{aa1}), we get \begin{eqnarray}\label{La8}\kappa\alpha'= (A^*(s)s\otimes 1-1\otimes A^*(s)s)+\lambda(s\otimes A^*(s)-A^*(s)\otimes s) -\alpha\Delta(A^*(s)),\end{eqnarray} where $A^*(s)=\sum\limits_{j=0}^mx_{0j}s^{(j)}$. If $\kappa\neq 0$, then $\alpha'=A'(s)s\otimes1-1\otimes A'(s)s+\lambda(s\otimes A'(s)-A'(s)\otimes s)+{\kappa}(1\otimes A'(s)-A'(s)\otimes 1)-\alpha\Delta(A'(s))$ is a solution of (\ref{aa1}), where $A'(s)=\frac{1}{\kappa}A^*(s)$. Next we assume that $\kappa=0$.
Then $(\lambda+i-1)x_{0i}=0$ for $i\geq 2$ and $\sum\limits_{i=3}^{m}\sum\limits_{t=1}^{i-2}(t+1)x_{0i}(s^{(t+1)}\otimes s^{(i-t)}-s^{(i-t)}\otimes s^{(t+1)})=0$ by (\ref{La8}). Thus $x_{0i}=0$ for all $i\geq 4$. If $x_{03}\neq 0$, then $\lambda=-2$. If $x_{02}\neq 0$, then $\lambda=-1$. If $x_{01}\neq 0$, then $\lambda=0$. Applying the functor $1\otimes \varepsilon\otimes 1$ to (\ref{aa1}), we get $\lambda(\sum\limits_{j=0}^mx_{0j}s\otimes s^{(j)}+\sum\limits_{i=0}^{p_0}x_{i0}s^{(i)}\otimes s)=\sum\limits_{i=0}^{p_0}x_{i0}(i+1)s^{(i+1)}\otimes 1+\sum\limits_{j=0}^mx_{0j}(j+1)s^{(j+1)}\otimes 1$. This implies that $x_{i0}=-x_{0i}$ for all $i\geq 0$, $m=p_0$ and $ x_{00}=0$. By the same way, we can obtain $\sum\limits_{i=0}^mx_{i0}(i+1)(s^{(i+1)}\otimes 1-1\otimes s^{(i+1)})+\lambda\sum\limits_{i=0}^mx_{i0}(s\otimes s^{(i)}-s^{(i)}\otimes s) -\alpha\sum\limits_{i=0}^mx_{i0}\Delta(s^{(i)})= (s\otimes 1)(\alpha'+(12)\alpha') $ by using the functor $1\otimes 1\otimes\varepsilon$. Thus $(s\otimes 1)(\alpha'+(12)\alpha')=0$ and $\alpha'=-(12)\alpha'$. Since $\alpha'=\sum\limits_{j=0}^m\sum\limits_{i=0}^{p_j}x_{ij}s^{(i)}\otimes s^{(j)}$, \begin{eqnarray}\label{La9}&&\sum\limits_{j=1}^m\sum\limits_{i=1}^{p_j}\sum\limits_{t=0}^{i-1}(t+1)x_{ij}(s^{(t+1)}\otimes s^{(i-t)}-s^{(i-t)}\otimes s^{(t+1)})\otimes s^{(j)}\nonumber\\ &&=\sum\limits_{j=1}^m\sum\limits_{i=1}^{p_j}\sum\limits_{l=0}^{j-1}x_{ij}(2l-j+1)(s^{(i)}\otimes s^{(l+1)}-s^{(l+1)}\otimes s^{(i)})\otimes s^{(j-l)}\\ &&+\sum\limits_{j=1}^m\sum\limits_{i=1}^{p_j}x_{ij}\lambda(s\otimes s^{(i)}-s^{(i)}\otimes s)\otimes s^{(j)}+\lambda\sum\limits_{j=1}^m\sum\limits_{i=1}^{p_j}x_{ij}s^{(i)}\otimes s^{(j)}\otimes s \nonumber \end{eqnarray}by (\ref{aa1}) and (\ref{La8}). 
Comparing the terms $*\otimes *\otimes s$ in (\ref{La9}), we get \begin{eqnarray}&&\sum\limits_{i=3}^{p_1}\sum\limits_{t=1}^{i-3}(t+1)x_{i1}(s^{(t+1)}\otimes s^{(i-t)}-s^{(i-t)}\otimes s^{(t+1)}) =\sum\limits_{j=2}^m\sum\limits_{i=2}^{p_j}(\lambda+i+j-2)x_{ij}s^{(i)}\otimes s^{(j)}.\nonumber\end{eqnarray} Thus $(\lambda+u+v-2)x_{uv}=(u-v)x_{u+v-1\ 1}$ for all $u,v\geq 2$. Moreover, (\ref{La9}) implies \begin{eqnarray}\label{La10}&&\sum\limits_{j=2}^m\sum\limits_{i=3}^{p_j}\sum\limits_{t=1}^{i-2}(t+1)x_{ij}(s^{(t+1)}\otimes s^{(i-t)}-s^{(i-t)}\otimes s^{(t+1)})\otimes s^{(j)}\nonumber\\ &&=\sum\limits_{j=3}^m\sum\limits_{i=2}^{p_j}\sum\limits_{l=1}^{j-2}x_{ij}(2l-j+1)(s^{(i)}\otimes s^{(l+1)}-s^{(l+1)}\otimes s^{(i)})\otimes s^{(j-l)}\end{eqnarray} If $i+j\neq 2-\lambda$, then $x_{ij}=\frac{i-j}{i+j+\lambda-2}x_{i+j-1\ 1}$ for all $i,j\geq 2$ and $(2-\lambda-j)x_{1-\lambda\ 1}=ix_{1-\lambda\ 1}=0$ for $\lambda\leq -3$. Hence $\alpha'=\sum\limits_{i=1}^3x_{i0}(s^{(i)}\otimes 1-1\otimes s^{(i)})+\sum\limits_{i=2}^{p_1}x_{i1}(s^{(i)}\otimes s-s\otimes s^{(i)})+\sum\limits_{t=2}^{-\lambda}x_{t\ 2-\lambda-t} s^{(t)}\otimes s^{(2-\lambda-t)}+\sum\limits_{j=2}^m\sum\limits_{i=2}^{p_j}\frac{(i-j)}{i+j+\lambda-2}x_{i+j-1\ 1}s^{(i)}\otimes s^{(j)},$ where $x_{t\ 2-\lambda-t}=-x_{2-\lambda-t\ t}$ and $x_{i0}$ satisfies $(\lambda+i-1)x_{i0}=0$ for $i\geq 1$. Let $A(s)=\sum\limits_{i=4,i\neq 1-\lambda}^{p_1}\frac{x_{i1}}{i+\lambda-1}s^{(i)}$. Then $\alpha'=\sum\limits_{i=1}^3x_{i0}(s^{(i)}\otimes 1-1\otimes s^{(i)})+\sum\limits_{i=2}^{3}x_{i1}(s^{(i)}\otimes s-s\otimes s^{(i)})+\sum\limits_{t=2}^{-\lambda}x_{t\ 2-\lambda-t} s^{(t)}\otimes s^{(2-\lambda-t)}-((A(s)s\otimes 1-1\otimes A(s)s)+\lambda(s\otimes A(s)-A(s)\otimes s)-\alpha\Delta(A(s)))$. If $\lambda=0$, then $\alpha'=x_{10}\alpha+(A_1(s)s\otimes 1-1\otimes A_1(s)s)-\alpha\Delta(A_1(s))$, where $A_1(s)=x_{12}s^{(2)}+\frac{x_{13}}2s^{(3)}-A(s)$.
If $\lambda=-1$, then $\alpha'=x_{12}(s\otimes s^{(2)}-s^{(2)}\otimes s)+x_{20}(s^{(2)}\otimes 1-1\otimes s^{(2)})+(A_1(s)s\otimes 1-1\otimes A_1(s)s)-(s\otimes A_1(s)-A_1(s)\otimes s)-\alpha\Delta(A_1(s))$, where $A_1(s)=-x_{10}+x_{13}s^{(3)}-A(s)$. If $\lambda=-2$, then $\alpha'=x_{13}(s\otimes s^{(3)}-s^{(3)}\otimes s)+x_{30}(s^{(3)}\otimes 1-1\otimes s^{(3)})+(A_1(s)s\otimes 1-1\otimes A_1(s)s)-2(s\otimes A_1(s)-A_1(s)\otimes s)-\alpha\Delta(A_1(s))$, where $A_1(s)=-\frac{x_{10}}2-x_{12}s^{(2)}-A(s)$. If $\lambda\leq -3$ is an integer, then $\alpha'=\beta+(A_1(s)s\otimes 1-1\otimes A_1(s)s)+\lambda(s\otimes A_1(s)-A_1(s)\otimes s)-\alpha\Delta(A_1(s))$, where $A_1(s)=\frac{x_{10}}{\lambda}+\frac{x_{12}}{\lambda+1}s^{(2)}+\frac{x_{13}}{\lambda+2}s^{(3)}-A(s)$ and $\beta=\sum\limits_{t=2}^{-\lambda}x_{t\ 2-\lambda-t} s^{(t)}\otimes s^{(2-\lambda-t)}$. Fix an integer $n\geq 2$. Comparing the terms $s^{(i)}\otimes s^{(j)}\otimes s^{(n)}$ in (\ref{La10}), we can obtain that $(i-j)x_{i+j-1\ n}=(i-n)x_{i+n-1\ j}+(j-n)x_{i\ j+n-1}$ for $i\geq n$ and $j\geq n$. Let $i+j+n-1=2-\lambda$. Then $$(2i+n+\lambda-3)x_{2-\lambda-n\ n}=(i-n)x_{i+n-1\ 3-\lambda-n-i}+(3-\lambda-i-2n)x_{i\ 2-\lambda-i}.$$ Thus \begin{eqnarray}\label{ab10}(2i+\lambda-1)x_{-\lambda\ 2}=(i-2)x_{i+1\ 1-\lambda-i}-(\lambda+i+1)x_{i\ 2-\lambda-i}\\ \label{ab11} (2i+\lambda)x_{-\lambda-1\ 3}=(i-3)x_{i+2\ -\lambda-i}-(\lambda+i+3)x_{i\ 2-\lambda-i}\end{eqnarray} If $i=-\lambda$, then $\lambda x_{-\lambda\ 2}=0$ by (\ref{ab10}). Hence $x_{-\lambda\ 2}=0$ and $x_{i\ 2-\lambda-i}=\frac{i-2}{\lambda+i+1}x_{i+1\ 1-\lambda-i}$. Suppose $\lambda=-2k$ is even. If $i=k$, then $x_{k\ k+2}=0$. Further $x_{k-1\ k+3}=-\frac{k-3}{k}x_{k\ k+2}=0$. Continuing this way, we get $x_{i\ 2-\lambda-i}=0$ for all $i$ and $\beta=0$. Since $(i-j)x_{i+j-1\ 2}=(i-2)x_{i+1\ j}+(j-2)x_{i\ j+1}$ for $i\geq 2$ and $j\geq 2$, (\ref{ab10}) holds for $\lambda\leq -3$. Suppose $\lambda=-2k-1$.
Then $x_{3\ 2k}=\frac1{3-2k}x_{4\ 2k-1}$, $x_{4\ 2k-1}=\frac2{4-2k}x_{5\ 2k-2}$, $x_{5\ 2k-2}=\frac3{5-2k}x_{6\ 2k-3}$ by (\ref{ab10}). Let $\lambda=-2k-1$ and $i=4$ in (\ref{ab11}). Then $x_{3\ 2k}=\frac1{2k-7}(x_{6\ 2k-3}+(2k-6)x_{4\ 2k-1})=(\frac{1}{2k-7}+\frac{6(2k-6)}{(2k-7)(4-2k)(5-2k)})x_{6\ 2k-3}=\frac6{(3-2k)(4-2k)(5-2k)}x_{6\ 2k-3}.$ If $x_{6\ 2k-3}=0$, then $\beta=0$. If $x_{6\ 2k-3}\neq 0$, then $k\geq 3$ and $\lambda\leq -7$. Moreover, $(\frac{1}{2k-7}+\frac{6(2k-6)}{(2k-7)(4-2k)(5-2k)})=\frac6{(3-2k)(4-2k)(5-2k)}.$ Thus $(4k^2-1)(k-3)=0$ and $k=3$. If $k=3$, then $\lambda=-7$ and $(2i-8)x_{7\ 2}=(i-2)x_{i+1\ 8-i}+(6-i)x_{i\ 9-i}$. If $i=7$, then $6x_{72}=-x_{72}=0$ and $x_{72}=0$. If $i=3$, then $x_{45}=-3x_{3 6}$. Hence $\beta=x_{36}(s^{(3)}\otimes s^{(6)}-s^{(6)}\otimes s^{(3)}-3(s^{(4)}\otimes s^{(5)}-s^{(5)}\otimes s^{(4)}))$. If $\lambda=-3$, then $(2i+n-6)x_{5-n\ n}=(i-n)x_{i+n-1\ 6-n-i}+(6-i-2n)x_{i\ 5-i}$. Thus $(2i-4)x_{3\ 2}=(i-2)x_{i+1\ 4-i}+(2-i)x_{i\ 5-i}$. Let $i=3$. Then $3x_{32}=x_{41}=0$ and $\beta=0$. If $\lambda=-5$, then $(2i+n-8)x_{7-n\ n}=(i-n)x_{i+n-1\ 8-n-i}+(8-i-2n)x_{i\ 7-i}$. Thus $(2i-6)x_{5\ 2}=(i-2)x_{i+1\ 6-i}+(4-i)x_{i\ 7-i}$. Let $i=5$. Then $3x_{52}=3x_{61}=0$ and $\beta=x_{34}(s^{(3)}\otimes s^{(4)}-s^{(4)}\otimes s^{(3)})$.\end{proof} Suppose $\eta_2=0$. Then $(\alpha\Delta\otimes 1)\alpha'=(1\otimes\alpha\Delta)\alpha'-(12)(1\otimes \alpha\Delta)\alpha'+(1\otimes\alpha'\Delta)\eta_1-(12)(1\otimes \alpha'\Delta)\eta_1$ by (\ref{aa1}). 
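The proof of Lemma \ref{Aaa1} rests on the observation that every coboundary $(1\otimes A(s))\eta_1+(A(s)\otimes 1)\eta_2-\alpha\Delta(A(s))$ solves (\ref{aa1}). This can be confirmed symbolically. The sketch below uses a polynomial model of $H^{\otimes 2}$ and $H^{\otimes 3}$ that is our own translation, not part of the paper: $H\otimes H$ is identified with ${\bf k}[x,y]$, $\Delta$ with $p(s)\mapsto p(x+y)$, and the maps $(\gamma\Delta\otimes 1)$, $(1\otimes\gamma\Delta)$ and $(12)$ with the substitutions indicated in the comments; $A(s)$ is taken to be a generic cubic.

```python
import sympy as sp

# Polynomial model (our assumption): H = k[s], H⊗H ≅ k[x,y],
# H⊗H⊗H ≅ k[x,y,z], Δ: p(s) ↦ p(x+y), and for γ, δ ∈ H⊗H
#   (γΔ⊗1)δ ↦ γ(x,y)·δ(x+y,z),   (1⊗γΔ)δ ↦ γ(y,z)·δ(x,y+z),
# while (12) exchanges the first two tensor slots, i.e. x ↔ y.
x, y, z, lam, kap = sp.symbols('x y z lambda kappa')
a0, a1, a2, a3 = sp.symbols('a0:4')

alpha = x - y              # α  = s⊗1 - 1⊗s
eta1 = lam*x - y + kap     # η₁ = λ s⊗1 - 1⊗s + κ⊗1
eta2 = x - lam*y - kap     # η₂ = -(12)η₁

def at(p, u, v):           # evaluate an element of H⊗H on the slots (u, v)
    return p.subs({x: u, y: v}, simultaneous=True)

def swap12(p):             # the transposition (12)
    return p.subs({x: y, y: x}, simultaneous=True)

A = lambda t: a0 + a1*t + a2*t**2 + a3*t**3      # a generic cubic A(s)
beta = eta1*A(y) + eta2*A(x) - alpha*A(x + y)    # the coboundary of A(s)

# Equation (aa1) with alpha' = beta:
lhs = alpha*at(beta, x + y, z) + beta*at(eta2, x + y, z)
c1 = at(alpha, y, z)*at(beta, x, y + z)          # (1⊗αΔ)β
c2 = at(beta, y, z)*at(eta1, x, y + z)           # (1⊗βΔ)η₁
rhs = c1 - swap12(c1) + c2 - swap12(c2)

assert sp.expand(lhs - rhs) == 0   # holds identically in λ, κ, a0, …, a3
print("the coboundary of A(s) solves (aa1)")
```

For a constant $A(s)=a_0$ the coboundary collapses to $\lambda a_0\alpha$, which recovers the particular solution $x_{10}\alpha$ used at the start of the proof.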
\begin{lemma}\label{Aa1} Let $\alpha=s\otimes 1-1\otimes s, \eta_1=\lambda s\otimes 1-1\otimes s+\kappa\otimes 1$ and $\alpha'=\sum\limits_{i=0}^m\sum\limits_{j=0}^{p_i}x_{ij}s^{(i)}\otimes s^{(j)}\neq 0$ for some $x_{ij}\in{\bf k}$ satisfying \begin{eqnarray}\label{L8}\qquad (\alpha\Delta\otimes 1)\alpha'=(1\otimes\alpha\Delta)\alpha'-(12)(1\otimes \alpha\Delta)\alpha'+(1\otimes\alpha'\Delta)\eta_1-(12)(1\otimes \alpha'\Delta)\eta_1.\end{eqnarray} Then $\alpha'=\beta+(1\otimes A(s))\eta_1-\alpha\Delta(A(s))$ for some $A(s)\in H$, where $$\beta=\left\{\begin{array}{ll}x_{00}\otimes 1&(\kappa,\lambda)=(0,1)\\ x_{30}s^{(3)}\otimes 1&(\kappa,\lambda)=(0,-1)\\ x_{22}(s^{(2)}\otimes s^{(2)}+\frac32s^{(3)}\otimes s)&(\kappa,\lambda)=(0,-2)\\ x_{23}(\frac15 s^{(2)}\otimes s^{(3)}+\frac45s^{(3)}\otimes s^{(2)}+\frac25s^{(4)}\otimes s)&(\kappa,\lambda)=(0,-3)\\ 0&otherwise\end{array}\right.$$ for some $x_{ij}\in{\bf k}$. \end{lemma} \begin{proof} Similar to the proof of Lemma \ref{Aaa1}, we can prove that $\beta=(1\otimes A(s))\eta_1-\alpha\Delta(A(s))$ is a solution of (\ref{L8}) for any $A(s)\in H$. Applying the functor $\varepsilon\otimes1\otimes 1$ to (\ref{L8}), we get \begin{eqnarray}\label{L9}\kappa\alpha'=(1\otimes s-s\otimes 1)\sum\limits_{j=0}^{p_0}x_{0j}\Delta(s^{(j)})+\eta_1\sum\limits_{j=0}^{p_0}x_{0j}\otimes s^{(j)}. \end{eqnarray} If $\kappa\neq 0$, then $\alpha'=(1\otimes A(s))\eta_1-\alpha\Delta(A(s))$, where $A(s)=\sum\limits_{j=0}^{p_0}\frac{x_{0j}}{\kappa}s^{(j)}$; thus it is a solution of (\ref{L8}). If $\kappa=0$, then $\alpha\sum\limits_{j=0}^{p_0}x_{0j}\Delta(s^{(j)})=\eta_1\sum\limits_{j=0}^{p_0}x_{0j}\otimes s^{(j)}$. Hence $\eta_1\sum\limits_{j=0}^{p_0}x_{0j}\otimes s^{(j)}=(s\otimes 1-\lambda\otimes s)\sum\limits_{j=0}^{p_0}x_{0j} s^{(j)}\otimes 1$. This implies that $(\lambda-1)x_{00}=0$ and $x_{0j}=0$ for all $j\geq 1$. Next we always assume that $\kappa=0$.
Comparing the terms $*\otimes *\otimes s^{(d)}$ in (\ref{L8}), we get $$\begin{array}{l}\sum\limits_{i=1}^m\sum\limits_{t=0}^{i-1}x_{id}(t+1)(s^{(t+1)}\otimes s^{(i-t)}-s^{(i-t)}\otimes s^{(t+1)})\\ =\sum\limits_{i=1}^m\sum\limits_{j=d}^{p_i}(j-2d+1)x_{ij}(s^{(i)}\otimes s^{(j-d+1)}-s^{(j-d+1)}\otimes s^{(i)})\\ +\sum\limits_{i=2}^m\lambda x_{id}(s\otimes s^{(i)}-s^{(i)}\otimes s).\end{array}$$Thus \begin{eqnarray}\label{L10}(i-j)x_{i+j-1\ d}=(j-d)x_{i\ j+d-1}-(i-d)x_{j\ i+d-1}\end{eqnarray} for $i\geq 2$ and $j\geq 2$, \begin{eqnarray}\label{L11}(2-i-\lambda-d)x_{id}=(i-d)x_{1\ i+d-1}\end{eqnarray} for all $i\geq 2$. If $i+j\neq 2-\lambda$, then $x_{ij}=\frac{i-j}{2-i-j-\lambda}x_{1\ i+j-1}$ and $(2i+\lambda-2)x_{1\ 1-\lambda}=0$ for $i\geq 2$ and $j\geq 0$. Thus $x_{1\ 1-\lambda}=0$ if $m\geq 3$. Moreover, $\alpha'=x_{00}\otimes 1+\sum\limits_{j=0}^{p_1}x_{1j}s\otimes s^{(j)}+\sum\limits_{i=2}^m\sum\limits_{j=0}^{p_i}x_{ij}s^{(i)}\otimes s^{(j)}= x_{00}\otimes 1+\sum\limits_{j=0}^{p_1}x_{1j}s\otimes s^{(j)}+\sum\limits_{j=2, j\neq 1-\lambda}^{p_1}\sum\limits_{t=0}^{j}\frac{2t-j-1}{j+\lambda-1}x_{1\ j}s^{(j+1-t)}\otimes s^{(t)}+\sum\limits_{i=2}^{2-\lambda}x_{i\ 2-\lambda-i}s^{(i)}\otimes s^{(2-\lambda-i)}=x_{00}\otimes 1 +(1\otimes A(s))\eta_1-\alpha\Delta(A(s))+ \sum\limits_{i=2}^{2-\lambda}x_{i\ 2-\lambda-i}s^{(i)}\otimes s^{(2-\lambda-i)}$, where $A(s)=\frac{x_{10}}{\lambda-1}-\frac12x_{20}s+\sum\limits_{i=2,i\neq 1-\lambda}^{p_1}\frac1{i+\lambda-1}x_{1i}s^{(i)}$ and $(\lambda-1)x_{00}=0$, $x_{11}=-\frac{\lambda}2x_{20}$. If $\lambda=1$, then $\alpha'=x_{00}\otimes 1+(1\otimes A(s))\eta_1-\alpha\Delta(A(s))$. If $\lambda=0$, then $\alpha'=(1\otimes A(s))\eta_1-\alpha\Delta(A(s))$. If $\lambda=-1$, then $\alpha'=(x_{30}-3x_{21})s^{(3)}\otimes 1+(1\otimes A_1(s))\eta_1-\alpha\Delta(A_1(s))$, where $A_1(s)=x_{21} s^{(2)}+ A(s)$.
If $\lambda=-2$, then $\alpha'=x_{22}(s^{(2)}\otimes s^{(2)}+\frac32s^{(3)}\otimes s)+(1\otimes A_1(s))\eta_1-\alpha\Delta(A_1(s))$, where $A_1(s)=-\frac14x_{40} s^{(3)}+ A(s)$. If $\lambda=-3$, then $\alpha'=x_{23}s^{(2)}\otimes s^{(3)}+x_{32}s^{(3)}\otimes s^{(2)}+(x_{32}-2x_{23})s^{(4)}\otimes s+(x_{32}-4x_{23})s^{(5)}\otimes 1+(1\otimes A(s))\eta_1-\alpha\Delta(A(s))=\frac25(x_{23}+x_{32})s^{(4)}\otimes s+\frac15(x_{23}+x_{32})s^{(2)}\otimes s^{(3)}+\frac45(x_{23}+x_{32})s^{(3)}\otimes s^{(2)}+(1\otimes A_1(s))\eta_1-\alpha\Delta(A_1(s))$, where $A_1(s)=-\frac15(4x_{23}-x_{32})s^{(4)}+A(s)$. Next we assume that $\lambda\leq -4$. Let $j=d$, $n=2-\lambda$ and $i+j+d-1=n$ in (\ref{L10}). Then $(n+1-3j)x_{n-j\ j}=(n+1-3j)x_{j\ n-j}$. If $n+1\neq 3j$ for any $j$, then $x_{n-j\ j}=x_{j\ n-j}$. Thus $(2i-n)x_{n-1\ 1}=(n-i-1)x_{i\ n-i}-(i-1)x_{n-i\ i}=(n-2i)x_{i\ n-i}$ by (\ref{L10}). Hence $x_{i\ n-i}=-x_{n-1\ 1}$ for all $2\leq i\leq n-2$. Since $(2i-n-1)x_{n\ 0}=(n+1-i)x_{i\ n-i}-ix_{n-i+1\ i-1}$ for $2\leq i\leq n-1$, \begin{eqnarray}\label{L12}\left\{\begin{array}{l}(3-n)x_{n\ 0}=(n-1)x_{2\ n-2}-2x_{n-1\ 1}=-(n+1)x_{n-1\ 1}\\ (5-n)x_{n\ 0}=(n-2)x_{3\ n-3}-3x_{n-2\ 2}=(5-n)x_{n-1\ 1}.\end{array}\right.\end{eqnarray} Then $x_{n-1\ 1}=x_{n\ 0}=0$ and $x_{n\ 0}=x_{i\ n-i}=0$ for all $i$. Suppose $n+1=3i_0$ for some $i_0$. Then $(2i-n-1)x_{n\ 0}=(n+1-i)x_{i\ n-i}-ix_{n-i+1\ i-1}=(n+1-2i)x_{1\ n-1}$ for $i\neq i_0$ and $i\neq i_0+1$. Since $n\geq 6$, $i_0\geq 3$. If $i_0 >3$, then $2\neq i_0$ and $3\neq i_0+1$. Thus (\ref{L12}) holds. Hence $x_{n\ 0}=x_{ n-1\ 1}=0$. Thus $0=(2i_0-n-1)x_{n\ 0}=(n+1-i)x_{i_0\ n-i_0}-ix_{n-i_0+1\ i_0-1}=(n+1-i)x_{i_0\ n-i_0}$. So $x_{i_0\ n-i_0}=0$. Since $(i-j)x_{i+j-1\ 1}=(j-1)x_{i\ j}-(i-1)x_{j\ i}$, $(n-1-i_0)x_{i_0\ n-i_0}-(i_0-1)x_{n-i_0\ i_0}=0$. Then $x_{n-i_0\ i_0}=0$. Hence $x_{i\ n-i}=0$ for all $i$. If $i_0=3$, then $n=8$. Let $i=d=2$ and $j=5$ in (\ref{L10}). Then $x_{62}=-x_{26}$ and $x_{26}=x_{6 2}=x_{71}=0$. 
Thus $x_{8 0}=0$ and $x_{35}=0$ by (\ref{L12}). Let $d=1$, $i=5$ and $j=3$ in (\ref{L10}). Then $x_{71}=x_{53}-2x_{35}$ and $x_{53}=0$. Since $-x_{80}=5x_{44}-4x_{53}$, $x_{44}=0$. Thus $x_{8-i\ i}=0$ for $1\leq i\leq 8$. Therefore $\alpha'=(1\otimes A(s))\eta_1-\alpha\Delta(A(s))$ for all $\lambda\leq -4$. \end{proof} If $\eta_1=0$ in (\ref{aa4}), then $\eta_2=0$. Thus (\ref{aa1}) becomes $(\alpha\Delta\otimes 1)\alpha'=(1\otimes \alpha\Delta)\alpha'-(12)(1\otimes \alpha\Delta)\alpha'$. \begin{lemma}\label{Aa3} Let $\alpha=s\otimes 1-1\otimes s$ and $\alpha'=\sum\limits_{i=0}^m\sum\limits_{j=0}^{p_i}x_{ij}s^{(i)}\otimes s^{(j)}\neq 0$ satisfying \begin{eqnarray}\label{L15}(\alpha\Delta\otimes 1)\alpha'=(1\otimes \alpha\Delta)\alpha'-(12)(1\otimes \alpha\Delta)\alpha'.\end{eqnarray} Then $\alpha'=\sum\limits_{i=1}^m\sum\limits_{t=0}^i\frac{2t-i}{i}x_{i0}s^{(t)}\otimes s^{(i-t)}=\alpha\sum\limits_{i=1}^m\frac1ix_{i0}\Delta(s^{(i-1)})$ for some $x_{i0}\in{\bf k}$. \end{lemma} \begin{proof} Applying $\varepsilon\otimes 1\otimes 1$ to (\ref{L15}), we obtain $\Delta(s)\alpha'=(1\otimes s-s\otimes 1)\sum\limits_{j=0}^{p_0}x_{0j}\Delta(s^{(j)})$ and $\sum\limits_{i=0}^mx_{i0}s^{(i)}=-\sum\limits_{j=0}^{p_0}x_{0j}s^{(j)}$ where $\alpha'=\sum\limits_{i=0}^m\sum\limits_{j=0}^{p_i}x_{ij}s^{(i)}\otimes s^{(j)}$. Then $x_{0i}=-x_{i0}$. Moreover, $\Delta(s)\alpha'=(s\otimes 1-1\otimes s)\sum\limits_{i=0}^mx_{i0}\Delta(s^{(i)})$ means that $$\sum\limits_{i=0}^m\sum\limits_{j=0}^{p_i}x_{ij}((i+1)s^{(i+1)}\otimes s^{(j)}+(j+1)s^{(i)}\otimes s^{(j+1)})=\sum\limits_{i=0}^m\sum\limits_{t=0}^i(t+1)x_{i0}(s^{(t+1)}\otimes s^{(i-t)}-s^{(i-t)}\otimes s^{(t+1)}).$$ Thus $i+j\leq m$ and $ix_{i-1\ j}+jx_{i\ j-1}=(i-j)x_{i+j-1\ 0}$. Let $k=i+j-1$. Then $ix_{i-1\ k+1-i}+(k+1-i)x_{i\ k-i}=(2i-k-1)x_{k\ 0}$. If $i=1$, then $x_{1\ k-1}=-\frac{k-2}{k}x_{k0}$. Suppose $x_{i\ k-i}=-\frac{k-2i}{k}x_{k0}$. 
Then $x_{i+1\ k-i-1}=-\frac{i+1}{k-i}x_{i\ k-i}+\frac{2i-k+1}{k-i}x_{k0}=\frac{i+1}{k-i}\frac{k-2i}{k}x_{k0}+\frac{2i-k+1}{k-i}x_{k0}=-\frac{k-2(i+1)}{k}x_{k0}.$ Consequently, $\alpha'=\sum\limits_{i=1}^m\sum\limits_{t=0}^i\frac{2t-i}{i}x_{i0}s^{(t)}\otimes s^{(i-t)}.$ It is routine to check that (\ref{L15}) holds for this $\alpha'$. \end{proof} Summing up all the above discussion, we get the following result. \begin{theorem}\label{thm27}Let $L$ be a Leibniz $H$-pseudoalgebra of rank two, where $H={\bf k}[s]$. Then $L$ is one of the following types: (1) $L=He_0\oplus He_0$ is a direct sum of two Virasoro Lie conformal algebras. (2) $L$ is an abelian Lie $H$-pseudoalgebra. (3) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_0]=\alpha'\otimes_He_1$, $[e_0,e_1]=[e_1,e_0]=[e_1,e_1]=0$, where $\alpha'$ is a nonzero element in $H^{\otimes2}$. Moreover, $L$ is a Lie $H$-pseudoalgebra if and only if $\alpha'=-(12)\alpha'$. (4) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=-((12)\otimes_H1)[e_1,e_0]=(A(s)\otimes1)\otimes_He_1$, $[e_0,e_0]=[e_1,e_1]=0$, where $A(s)$ is a nonzero element in $H$. Moreover, $L$ is a Lie $H$-pseudoalgebra. (5) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=(A(s)\otimes1)\otimes_He_1$, $[e_0,e_0]=[e_1,e_0]=[e_1,e_1]=0$, where $A(s)$ is a nonzero element in $H$. (6) $L$ is a direct sum of a Virasoro Lie conformal algebra $He_0$ and an abelian Lie conformal algebra $He_1$. (7) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=-((12)\otimes_H1)[e_1,e_0]=(\lambda s\otimes 1-1\otimes s+\kappa\otimes 1)\otimes _He_1$, $[e_1,e_1]=0$, $[e_0,e_0]=\alpha\otimes_He_0$, where $\lambda,\kappa\in{\bf k}$ and $\kappa\neq 0$. Moreover, $L$ is a Lie $H$-pseudoalgebra. (8) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=-((12)\otimes_H1)[e_1,e_0]=(\lambda s\otimes 1-1\otimes s)\otimes _He_1$, $[e_0,e_0]=\alpha\otimes_He_0$, $[e_1,e_1]=0$, where $\lambda\notin\{0,-1,-2,-5,-7\}$. Moreover, $L$ is a Lie $H$-pseudoalgebra.
(9) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=-((12)\otimes_H1)[e_1, e_0]=(-1\otimes s)\otimes _He_1$, $[e_0,e_0]=\alpha\otimes_He_0+x_{10}\alpha\otimes_He_1$, $[e_1,e_1]=0$, where $x_{10}\in{\bf k}$. Moreover, $L$ is a Lie $H$-pseudoalgebra. (10) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=-((12)\otimes_H1)[e_1,e_0]=(-s\otimes 1-1\otimes s)\otimes _He_1$, $[e_0,e_0]=\alpha\otimes_He_0+(x_{12}(s\otimes s^{(2)}-s^{(2)}\otimes s)+x_{20}(s^{(2)}\otimes 1-1\otimes s^{(2)}))\otimes_He_1$, $[e_1,e_1]=0$, where $x_{12},x_{20}\in{\bf k}$. Moreover, $L$ is a Lie $H$-pseudoalgebra. (11) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=-((12)\otimes_H1)[e_1,e_0]=(-2s\otimes 1-1\otimes s)\otimes _He_1$, $[e_0,e_0]=\alpha\otimes_He_0+(x_{13}(s\otimes s^{(3)}-s^{(3)}\otimes s)+x_{30}(s^{(3)}\otimes 1-1\otimes s^{(3)}))\otimes_He_1$, $[e_1,e_1]=0$, where $x_{13}, x_{30}\in{\bf k}$. Moreover, $L$ is a Lie $H$-pseudoalgebra. (12) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=-((12)\otimes_H1)[e_1,e_0]=(-5s\otimes 1-1\otimes s)\otimes _He_1$, $[e_0,e_0]=\alpha\otimes_He_0+x_{34}(s^{(3)}\otimes s^{(4)}-s^{(4)}\otimes s^{(3)})\otimes_He_1$, $[e_1,e_1]=0$, where $x_{34}\in{\bf k}$. Moreover, $L$ is a Lie $H$-pseudoalgebra. (13) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=-((12)\otimes_H1)[e_1,e_0]=(-7s\otimes 1-1\otimes s)\otimes _He_1$, $[e_0,e_0]=\alpha\otimes_He_0+x_{36}(s^{(3)}\otimes s^{(6)}-s^{(6)}\otimes s^{(3)}-3(s^{(4)}\otimes s^{(5)}-s^{(5)}\otimes s^{(4)}))\otimes_He_1$, $[e_1,e_1]=0$, where $x_{36}\in{\bf k}$. Moreover, $L$ is a Lie $H$-pseudoalgebra. (14) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=(\lambda s\otimes 1-1\otimes s+\kappa\otimes 1)\otimes _He_1$, $[e_1,e_0]= [e_1,e_1]=0$, $[e_0,e_0]=\alpha\otimes_He_0$, where either $\kappa\neq 0$ and $\lambda\in{\bf k}$, or $\kappa=0$ and $\lambda\in{\bf k}$ with $\lambda\notin \{1,-1,-2,-3\}$.
(15) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=(s\otimes 1-1\otimes s)\otimes _He_1$, $[e_1,e_0]= [e_1,e_1]=0$, $[e_0,e_0]=\alpha\otimes_He_0+(x_{00}\otimes 1)\otimes_He_1$ for some $x_{00}\in{\bf k}$. (16) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=(-s\otimes 1-1\otimes s)\otimes _He_1$, $[e_1,e_0]= [e_1,e_1]=0$, $[e_0,e_0]=\alpha\otimes_He_0+(x_{30}s^{(3)}\otimes 1)\otimes_He_1$ for some $x_{30}\in{\bf k}$. (17) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=(-2s\otimes 1-1\otimes s)\otimes _He_1$, $[e_1,e_0]= [e_1,e_1]=0$, $[e_0,e_0]=\alpha\otimes_He_0+x_{22}(s^{(2)}\otimes s^{(2)}+\frac32s^{(3)}\otimes s)\otimes_He_1$ for some $x_{22}\in{\bf k}$. (18) $L$ has a basis $\{e_0,e_1\}$ such that $[e_0,e_1]=(-3s\otimes 1-1\otimes s)\otimes _He_1$, $[e_1,e_0]= [e_1,e_1]=0$, $[e_0,e_0]=\alpha\otimes_He_0+x_{23}(s^{(2)}\otimes s^{(3)}+4s^{(3)}\otimes s^{(2)}+2s^{(4)}\otimes s)\otimes_He_1$ for some $x_{23}\in{\bf k}$. \end{theorem} \begin{proof}If the solvable radical of $L$ is zero, then $L$ is a semisimple Lie $H$-pseudoalgebra. Thus $L$ is a direct sum of two Virasoro Lie conformal algebras by Theorem 13.3 of \cite{BDK}. If $L$ is solvable and is not abelian, then $L$ is solvable with maximal derived series. Thus $L$ is isomorphic to one of (3)-(5) by Lemma \ref{lem22}. Suppose the rank of the solvable radical of $L$ is one and there is a basis $\{e_0,e_1\}$ such that $[e_1,e_1]=[e_0,e_1]=[e_1,e_0]=0$. Then $[e_0,e_0]=\alpha\otimes_He_0+\alpha\Delta(A(s))\otimes_He_1$ for some $A(s)\in H$ by Lemma \ref{Aa3}. Let $e_0'=e_0+A(s)e_1$. Then $[e_0', e_0']=\alpha\otimes_He_0'$, $[e_0',e_1]=[e_1,e_0']=0$. Thus $L$ is a direct sum of a Virasoro Lie conformal algebra and an abelian Lie conformal algebra of rank one. In general, if the rank of the solvable radical of $L$ is one, then $L$ has a basis $\{e_0,e_1\}$ with pseudoproduct given by (\ref{m1}). Moreover, $\alpha',\eta_1,\eta_2$ are subject to the relations (\ref{aa1})-(\ref{aa4}).
First consider the case $\eta_2=-(12)\eta_1\neq 0$. Then $\eta_1=\lambda s\otimes 1-1\otimes s+\kappa\otimes 1$ for some $\lambda,\kappa\in{\bf k}$ by equations (\ref{aa2})-(\ref{aa4}). Thus, $\alpha'$ is determined by (\ref{aa1}). From Lemma \ref{Aaa1}, we know that $\alpha'=A(s)s\otimes 1-1\otimes A(s)s+\lambda(s\otimes A(s)-A(s)\otimes s)+\kappa(1\otimes A(s)-A(s)\otimes 1)-\alpha\Delta(A(s))$ for some $A(s)\in H$ when either $\kappa\neq 0$, or $\kappa=0$ and $\lambda\notin\{0,-1,-2,-5,-7\}$. Let $e_0'=e_0+A(s)e_1$. Then $[e_0',e_0']=\alpha\otimes_He_0'$, $[e_0',e_1]=-((12)\otimes_H1)[e_1,e_0']=\eta_1\otimes_He_1$ and $[e_1,e_1]=0$. Thus $L$ is isomorphic to either the Lie $H$-pseudoalgebra in (7) if $\kappa\neq 0$, or the Lie $H$-pseudoalgebra in (8) if $\kappa=0$. If $\kappa=\lambda=0$, then $\alpha'=x_{10}\alpha+A(s)s\otimes 1-1\otimes A(s)s-\alpha\Delta(A(s))$ for some $A(s)\in H$. Let $e_0'=e_0+A(s)e_1$. Then $[e_0',e_0']=\alpha\otimes_He_0'+x_{10}\alpha\otimes_He_1$, $[e_0',e_1]=-((12)\otimes_H1)[e_1,e_0']=\eta_1\otimes_He_1$ and $[e_1,e_1]=0$. Thus $L$ is isomorphic to the Lie $H$-pseudoalgebra in (9). Similarly, we can prove that $L$ is isomorphic to the Lie $H$-pseudoalgebras in (10), (11), (12) and (13) respectively if $\kappa=0$ and $\lambda=-1,-2,-5,-7$ respectively. Finally, in the case $\eta_1\neq 0$ and $\eta_2=0$, we can prove that $L$ is isomorphic to one of the Leibniz $H$-pseudoalgebras described in (14)-(18) by using Lemma \ref{Aa1}.
\end{proof} Recall that a Lie conformal algebra $L$ introduced in \cite{Ka} is a $\mathbb{C}[\partial]$-module endowed with a $\mathbb{C}$-linear mapping $L\otimes L\to \mathbb{C}[\lambda]\otimes L$, $a\otimes b\mapsto [a_{\lambda}b]$ satisfying the following axioms: \begin{eqnarray*} & [\partial a_{\lambda}b]=-\lambda[a_{\lambda}b],\qquad [a_{\lambda}\partial b]=(\partial+\lambda)[a_{\lambda}b],\\ & [b_{\lambda}a]=-[a_{-\partial-\lambda}b],\ \ [a_{\lambda}[b_{\mu}c]]=[[a_{\lambda}b]_{\lambda+\mu}c]+[b_{\mu}[a_{\lambda}c]].\end{eqnarray*} For any Lie conformal algebra $L$, define $[a,b]=\sum\limits_iQ_i(-s\otimes1,s\otimes 1+1\otimes s)\otimes_{{\mathbb{C}[s]}}c_i$ whenever $[a_{\lambda}b]=\sum\limits_iQ_i(\lambda,\partial)c_i$. Then $L$ becomes a Lie $H$-pseudoalgebra, where $H=\mathbb{C}[s]$. Conversely, if $L$ is a Lie $H$-pseudoalgebra, where $H={\bf k}[s]$, then $L$ is a Lie conformal algebra with the mapping given by $[a_{\lambda}b]=\sum\limits_iP_i(\lambda)c_i$, where $[a,b]=\sum\limits_{i}P_i(s)\otimes1\otimes_Hc_i$. \begin{remark}The classification of Lie conformal algebras of rank two was obtained by several authors (\cite{BCH},\cite{HL},\cite{Ka1}). Our classification of Leibniz $H$-pseudoalgebras of rank two includes their results, and our method is different. The central extension of a Lie (Leibniz) $H$-pseudoalgebra $L$ is measured by a cohomology group $H^2(L,M)$ (respectively $Hl^2(L,M)$) (see \cite{BDK} and \cite{Wu2}), where $M$ is a representation of $L$. The dimension of $H^2(L,M)$ has been given in \cite{BKV} for the Virasoro Lie conformal algebra $L$ and its representations $M$ of rank one. Theorem \ref{thm27} gives a basis of $H^2(L,M)$. In addition, from Theorem \ref{thm27}, we get $Hl^2(L,H)=H^2(L,H)=H$ when $L$ is the Virasoro Lie conformal algebra and $Hl^2(L,H)=H^{\otimes 2}\neq H^2(L,H)=\{a\in H^{\otimes 2}|a=-(12)a\}$ when $L$ is a trivial $H$-pseudoalgebra.
\end{remark} \section{Definition of Schr\"odinger-Virasoro Lie $H$-pseudoalgebras} In this section, we introduce the Schr\"odinger-Virasoro $H$-pseudoalgebras. First let us fix some notations used in the remainder of this paper. We always assume that $\beta_i=\lambda_is\otimes 1-1\otimes s+\kappa_i\otimes 1$, where $\lambda_i,\kappa_i\in {\bf k}$ for $1\leq i\leq 3$ and $\alpha=s\otimes 1-1\otimes s$. We set $\alpha'_0=0$. Further, we give the definition of the Schr\"odinger-Virasoro Lie $H$-pseudoalgebra as follows. \begin{definition} \label{def25} Let $L=He_0\oplus He_1\oplus He_2\oplus He_3$ be a free $H$-module with basis $\{e_0, e_1, e_2, e_3\}$, and $\alpha'_m=\sum\limits_{0\leq i,j\leq m}c_{ij}s^{(i)}\otimes s^{(j)}$ for some $c_{ij}\in{\bf k}$, where $c_{im}, c_{mj}$ are not simultaneously zero. Suppose $L$ is a Lie $H$-pseudoalgebra with pseudobrackets given by $$(I*)\left\{\begin{array}{l}[e_0,e_0]=\alpha\otimes_He_0, \quad \ [e_0, e_i]=\beta_{i}\otimes_H e_i\quad 1\leq i\leq 3,\\ [e_1, e_1]=\alpha_m'\otimes_He_2,\quad [e_1, e_2]=\eta\otimes_He_2,\\ [e_1, e_3]=\eta_{11}\otimes_He_1+\eta_{12}\otimes_He_2,\\ [e_2, e_3]=\eta_{21}\otimes_He_1+\eta_{22}\otimes_He_2,\\ [e_2, e_2]=[e_3,e_3]=0.\end{array}\right.$$ \noindent Then $L$ is called a {\it Schr\"odinger-Virasoro Lie $H$-pseudoalgebra}. Its subalgebra $L_s=He_0\oplus He_1\oplus He_2$ is called an {\it $m$-type Schr\"odinger-Virasoro Lie conformal algebra}, or {\it $m$-type Schr\"odinger-Virasoro Lie $H$-pseudoalgebra}.\end{definition} Since $[e_1,e_1]=-((12)\otimes_H1)[e_1,e_1]$, we have $\alpha_m'=-(12)\alpha_m'$, that is, $\alpha'_m$ is skew-symmetric. So, if $\alpha_m'\neq 0$, then we can always assume that $$\alpha'_m=\sum\limits_{0\leq i<j\leq m}w_{ij}(s^{(i)}\otimes s^{(j)}-s^{(j)}\otimes s^{(i)}),$$where $w_{im} \in{\bf k}$ are not all zero. An $m$-type Schr\"odinger-Virasoro Lie conformal algebra is an extension of the Virasoro Lie conformal algebra by a solvable $H$-pseudoalgebra $I_1=He_1\oplus He_2$.
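The brackets $[e_0,e_i]=\beta_i\otimes_He_i$ in Definition \ref{def25} place no constraint on the parameters $\lambda_i,\kappa_i$, in accordance with Lemma \ref{lem23}: every $\beta_i$ of the stated form makes $He_i$ a representation of the Virasoro Lie conformal algebra $He_0$. A minimal symbolic sketch of this fact, using a polynomial model of $H^{\otimes 2}$ that is our own translation and not part of the paper:

```python
import sympy as sp

# Polynomial model (our assumption, for a sanity check only): H = k[s],
# H⊗H ≅ k[x,y], Δ: p(s) ↦ p(x+y), (12): x ↔ y, and
#   (γΔ⊗1)δ ↦ γ(x,y)·δ(x+y,z),   (1⊗γΔ)δ ↦ γ(y,z)·δ(x,y+z).
x, y, z, lam_i, kap_i = sp.symbols('x y z lambda_i kappa_i')

alpha = x - y                  # α   = s⊗1 - 1⊗s
beta = lam_i*x - y + kap_i     # β_i = λ_i s⊗1 - 1⊗s + κ_i⊗1

# Compatibility of [e0,e_i] = β_i ⊗_H e_i with [e0,e0] = α ⊗_H e0,
# i.e. the constraint imposed by [e0,[e0,e_i]] (cf. Lemma lem23):
#   (αΔ⊗1)β = (1⊗βΔ)β - (12)(1⊗βΔ)β.
lhs = alpha*beta.subs({x: x + y, y: z}, simultaneous=True)
core = beta.subs({x: y, y: z}, simultaneous=True)*beta.subs({y: y + z})
rhs = core - core.subs({x: y, y: x}, simultaneous=True)

assert sp.expand(lhs - rhs) == 0   # holds for every λ_i, κ_i in k
print("every β_i defines a Virasoro module structure on He_i")
```

The identity holds identically in $\lambda_i$ and $\kappa_i$, so the restrictions on these parameters obtained later come only from the remaining brackets in $(I*)$.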
If $\alpha'_m\neq 0$ or $\eta\neq 0$, then $I$ is a solvable $H$-pseudoalgebra with maximal derived series. For any $\alpha_m'$ and $\eta$, the ideal $I=He_1+He_2+He_3$ of the Schr\"odinger-Virasoro Lie $H$-pseudoalgebra is also an extension of the abelian Lie $H$-pseudoalgebra $He_3$ by $I_1$ and the Schr\"odinger-Virasoro Lie $H$-pseudoalgebra is an extension of $He_0$ by $I$. \begin{example}Let $TSV(c)=HL\oplus HY\oplus HM$ be the Lie conformal algebra defined in \cite{HY}, where $H={\bf k}[\partial]$. Denote $e_0: =L, e_1: =Y$, $e_2: =M$ and $s: =\partial$. Then \begin{eqnarray*} & [e_0, e_1]=(\frac12 s\otimes 1-1\otimes s-c\otimes1)\otimes _He_1,\\ & [e_0, e_2]=(-s\otimes 1-1\otimes s-2c\otimes 1)\otimes_He_2,\\ & [e_1,e_1]=2((s^{(2)}\otimes1 -1\otimes s^{(2)})+c(s\otimes1 -1\otimes s))\otimes_He_2.\end{eqnarray*} Thus $TSV(c)$ is a 2-type Schr\"odinger-Virasoro Lie conformal algebra with $\lambda_1=\frac12$, $\lambda_2=-1$, $\kappa_1=-c$, $\kappa_2=-2c$ and $c_{02}=-2$. Similarly, $T(a,b)$ defined in \cite{HY} is a 1-type Schr\"odinger-Virasoro Lie conformal algebra, where $e_0=L$, $e_1=Y$, $e_2=M$, $s=\partial$, $\lambda_1=a-1,\lambda_2=2a-3$, $\kappa_1=b$, $c_{01}=-1$ and $\kappa_2=2b$. In particular, the Schr\"odinger-Virasoro type Lie conformal algebra $DSV=T(0, 0)$ defined in \cite{WXX} is a 1-type Schr\"odinger-Virasoro Lie conformal algebra with $\kappa_1=\kappa_2=0$, $\lambda_1=-1$ and $\lambda_2=-3$. \end{example} \begin{example}\label{exaa27}Let $H=\mathbb{C}[\partial]$ and $\widetilde{SV}=HL\oplus HY\oplus HM\oplus HN$. 
Then $\widetilde{SV}$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with \begin{eqnarray*} & [L, L]=(1\otimes\partial-\partial\otimes1)\otimes_HL, \ \ \ [L, Y]=(\frac12\partial\otimes1- 1\otimes\partial)\otimes_HY,\\ & [L, M]=1\otimes\partial\otimes_HM,\ \ \ [L ,N]=1\otimes\partial\otimes_HN,\\ & [Y, Y]=(1\otimes\partial-\partial\otimes1)\otimes_HM,\ \ \ [Y, M]=0, \ \ \ [Y, N]=-1\otimes1\otimes_HY, \\ & [M, N]=-2\otimes 1\otimes_HM,\ \ \ [M, M]=[N, N]=0,\end{eqnarray*} and $SV=HL\oplus HY\oplus HM$ is a subalgebra of $\widetilde{SV}$. In \cite{SY}, these two algebras $SV$ and $\widetilde{SV}$ are called the {\it Schr\"odinger-Virasoro Lie conformal algebra} and the {\it extended Schr\"odinger-Virasoro Lie conformal algebra} respectively. Let $e_0=-L$, $e_1=-Y$, $e_2=M$, $e_3=N$ and $s=\partial$. Then $\widetilde{SV}$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with \begin{eqnarray*}& \beta_1=\frac12s\otimes1-1\otimes s, \ \beta_2=-1\otimes s, \ \beta_3=-1\otimes s, \\ & \alpha_1'=s\otimes 1-1\otimes s, \ \eta=\eta_{12}= \eta_{21}=0, \ \eta_{22}=2\eta_{11}=2\otimes 1,\end{eqnarray*} where $H={\bf k}[s]$, $(\lambda_1, \kappa_1)=(\frac12,0)$, $(\lambda_2, \kappa_2)=(0,0)$ and $(\lambda_3, \kappa_3)=(0, 0)$. Moreover, the Schr\"odinger-Virasoro Lie conformal algebra $SV$ is a 1-type Lie conformal algebra. \end{example} To determine all Schr\"odinger-Virasoro Lie $H$-pseudoalgebras, we need to describe $\beta_i,\alpha_m'$, $\eta$ and $\eta_{jk}$ for $1\leq i\leq 3$ and $1\leq j,k\leq 2$. For this purpose, we need the following key lemma. \begin{lemma} \label{lem28} Let $L$ be a free $H$-module with a basis $\{e_0,e_1,e_2,e_3\}$. Then $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with the pseudobrackets determined by $(I*)$ if and only if these pseudobrackets are asymmetric and the following equations hold.
\begin{eqnarray}(\alpha_m'\Delta\otimes1)\eta_{21}&=&(\eta\Delta\otimes 1)\eta_{21}=0,\label{eq25}\\ (\alpha_m'\Delta\otimes1)\eta_{22}&=&(1\otimes\eta_{11}\Delta)\alpha_m'-(12)(1\otimes\eta_{11}\Delta)\alpha_m'\label{eq26}\\ & & +(1\otimes\eta_{12}\Delta)\eta-(12)(1\otimes\eta_{12}\Delta)\eta, \nonumber \\ (\eta\Delta\otimes 1)\eta_{22}&=&(1\otimes \eta_{21}\Delta)\alpha_m'+(1\otimes\eta_{22}\Delta)\eta \label{eq27}\\ & & +(12)(1\otimes\eta_{11}\Delta)((12)\eta),\nonumber \\ (\beta_{1}\Delta\otimes1)\eta_{11} &=&(1\otimes\eta_{11}\Delta)\beta_{1}-(12)(1\otimes\beta_{3}\Delta)\eta_{11},\label{eq28}\\ (\beta_{1}\Delta\otimes1)\eta_{12}&=& (1\otimes\eta_{12}\Delta)\beta_{2}-(12)(1\otimes\beta_{3}\Delta)\eta_{12},\label{eq29}\\ (\beta_{2}\Delta\otimes1)\eta_{21} &=&(1\otimes\eta_{21}\Delta)\beta_{1}-(12)(1\otimes\beta_{3}\Delta)\eta_{21},\label{eq210}\\ (\beta_{2}\Delta\otimes1)\eta_{22}&=& (1\otimes\eta_{22}\Delta)\beta_{2}-(12)(1\otimes\beta_{3}\Delta)\eta_{22},\label{eq211} \\ (\eta_{11}\Delta\otimes1)\eta_{11}+(\eta_{12}\Delta\otimes1)\eta_{21}&=&(12)(1\otimes\eta_{11}\Delta)((12)\eta_{11}) \label{eq212}\\ & & +(12)(1\otimes\eta_{12}\Delta)((12)\eta_{21}),\nonumber \\ (\eta_{11}\Delta\otimes1)\eta_{12}+(\eta_{12}\Delta\otimes1)\eta_{22}&=&(12)(1\otimes\eta_{11}\Delta)((12)\eta_{12})\label{eq213}\\ & & +(12)(1\otimes\eta_{12}\Delta)((12)\eta_{22}),\nonumber\\ (\eta_{21}\Delta\otimes1)\eta_{11}+(\eta_{22}\Delta\otimes1)\eta_{21}&=&(12)(1\otimes\eta_{21}\Delta)((12)\eta_{11})\label{eq214}\\ & & +(12)(1\otimes\eta_{22}\Delta)((12)\eta_{21}),\nonumber \\ (\eta_{21}\Delta\otimes1)\eta_{12}+(\eta_{22}\Delta\otimes1)\eta_{22}&=&(12)(1\otimes\eta_{21}\Delta)((12)\eta_{12})\label{eq215}\\ & & +(12)(1\otimes\eta_{22}\Delta)((12)\eta_{22}),\nonumber\\ (1\otimes\eta_{21}\Delta)((12)\eta)&=& (12)(1\otimes\eta_{21}\Delta)((12)\eta),\label{eq216}\\ (\beta_1\Delta\otimes1)\eta &=&(1\otimes\eta\Delta)\beta_2-(12)(1\otimes\beta_2\Delta)\eta,\label{eq217}\\ (\beta_1\Delta\otimes 
1)\alpha_m'&=&(1\otimes\alpha_m'\Delta)\beta_2-(12)(1\otimes\beta_1\Delta)\alpha_m',\label{eq218}\\ -(\alpha'_m\Delta\otimes1)((12)\eta)&=&(1\otimes\alpha_m'\Delta)\eta-(12)(1\otimes\alpha_m'\Delta)\eta,\label{eq219}\\ (1\otimes\eta\Delta)\eta &=&(12)(1\otimes\eta\Delta)\eta.\label{eq220}\end{eqnarray} \end{lemma} \begin{proof}It is easy to check (\ref{eq25})-(\ref{eq220}) by Definition \ref{def25}, using the Jacobi identity $$[[e_i ,e_j], e_k]=[e_i, [e_j, e_k]]-((12)\otimes_H1)[e_j, [e_i, e_k]]$$ for all $0\leq i\leq j\leq k\leq 3$.\end{proof} If (\ref{eq25}) holds, then (\ref{eq216}) holds and (\ref{eq27}) can be simplified as \begin{eqnarray}\label{eq221}(\eta\Delta\otimes 1)\eta_{22}=(1\otimes\eta_{22}\Delta)\eta +(12)(1\otimes\eta_{11}\Delta)((12)\eta). \end{eqnarray} \section{ $m$-type Schr\"odinger-Virasoro Lie conformal algebras } In this section, we determine all $m$-type Schr\"odinger-Virasoro Lie conformal algebras. By Lemma \ref{lem28}, $L_s=He_0\oplus He_1\oplus He_2$ is an $m$-type Schr\"odinger-Virasoro Lie conformal algebra with the pseudobrackets given by \begin{eqnarray*} (I**)\left\{\begin{array}{l} [e_0, e_0]=\alpha\otimes_He_0,\qquad [e_0, e_i]=\beta_{i}\otimes_H e_i,\qquad i=1,2\\ [e_1, e_1]=\alpha_m'\otimes_He_2,\qquad [e_1, e_2]=\eta\otimes_He_2,\qquad [e_2, e_2]=0\end{array}\right.\end{eqnarray*} if and only if (\ref{eq217})-(\ref{eq220}) hold and the pseudobrackets in $(I**)$ are asymmetric. \begin{lemma} \label{lem24} Let $L=He_0\oplus He_1\oplus He_2$ be an $m$-type Schr\"odinger-Virasoro Lie conformal algebra. Then $\eta=a\otimes 1$ for some $a\in{\bf k}$ with $a\lambda_1=a\kappa_1=0$ and $(2\kappa_1-\kappa_2)\alpha'_m=0$. If $\eta=a\otimes 1\neq 0$, then $\alpha'_m=\sum\limits_{j=1}^mw_{0j}(1\otimes s^{(j)}-s^{(j)}\otimes 1)$, where $w_{0j}\in \bf{k}$.\end{lemma} \begin{proof} Assume that $\eta=\sum\limits_{i=0}^n b_is^{(i)}\otimes 1$ by Lemma \ref{lem22}. It follows from (\ref{eq217}) that $\beta_1\Delta(\sum\limits_{i=0}^nb_is^{(i)})$ $=-1\otimes s \sum\limits_{i=0}^nb_is^{(i)}$.
Thus $\lambda_1s\sum\limits_{i=0}^nb_is^{(i)}=\kappa_1\sum\limits_{i=0}^nb_is^{(i)}=0$ and $\sum\limits_{i=0}^n\sum\limits_{0\leq t\leq i}b_i(i-t+1)s^{(t)}\otimes s^{(i-t+1)}=\sum\limits_{i=0}^n b_i(i+1)\otimes s^{(i+1)}$. If $n\geq 1$, then the term $b_ns^{(n)}\otimes s$ on the left-hand side of the previous equation cannot be cancelled. Hence $n=0$. Thus $\eta=a\otimes 1$ for $a=b_0\in{\bf k}$ with $a\lambda_1=a\kappa_1=0$. Applying $\varepsilon\otimes 1\otimes 1$ to equation (\ref{eq218}), we get $(\kappa_1\otimes 1-s\otimes 1)\alpha'_m=\alpha_m'(\kappa_2\otimes1-1\otimes s-s\otimes 1)-\alpha'_m(\kappa_1\otimes s-1\otimes s)$, which implies that $(2\kappa_1-\kappa_2)\alpha_m'=0$. Suppose $a\neq 0$ and $\alpha_m'=\sum\limits_{i=0}^m\sum\limits_{j=0}^{q_i}y_{ij}s^{(i)}\otimes s^{(j)}\neq 0$. Observe that $$[[e_1, e_1], e_1]=[e_1, [e_1, e_1]]-(12)[e_1, [e_1, e_1]].$$This means \begin{eqnarray} \qquad \qquad \sum\limits_{i=0}^m\sum\limits_{j=0}^{q_i} y_{ij}\left((1\otimes s^{(i)}-s^{(i)}\otimes 1)\otimes s^{(j)}+ s^{(i)}\otimes s^{(j)}\otimes 1\right)=0.\label{eq021}\end{eqnarray} From this, we get $\sum\limits_{i=1}^my_{i0}(1\otimes s^{(i)}-s^{(i)}\otimes 1)=-\sum\limits_{i=0}^m\sum\limits_{j=0}^{q_i} y_{ij}s^{(i)}\otimes s^{(j)}.$ Hence $\alpha_m'=\sum\limits_{i=1}^mw_{0i}(1\otimes s^{(i)}-s^{(i)}\otimes 1)$, where $w_{0i}=y_{i0}$. \end{proof} It follows from Lemma \ref{lem24} that (\ref{eq217}) and (\ref{eq220}) hold if and only if $\eta=a\otimes 1$ for some $a\in{\bf k}$ with $a\lambda_1=a\kappa_1=(2\kappa_1-\kappa_2)\alpha_m'=0$. If $\eta=a\otimes 1$ for a nonzero $a$, then $\alpha_m'=\sum\limits_{j=1}^mw_{0j}(1\otimes s^{(j)}-s^{(j)}\otimes 1)$ by Lemma \ref{lem24}. It is easy to see that (\ref{eq219}) holds for this $\eta$ and $\alpha_m'$. Moreover, (\ref{eq219}) is trivial if $\eta=0$. To determine $m$ and $w_{0j}$ in $\alpha_m'$, we need the following lemma, which is also useful in the next section.
\begin{lemma}\label{lem24a}Suppose that a nonzero $\gamma=\sum\limits_{i=0}^m\sum\limits_{j=0}^{p_i}x_{ij}s^{(i)}\otimes s^{(j)}$ satisfies \begin{eqnarray}(\beta_{1}\Delta\otimes1)\gamma= (1\otimes\gamma\Delta)\beta_{2}-(12)(1\otimes\beta_{3}\Delta)\gamma.\label{eq21a}\end{eqnarray} Then $(\kappa_1-\kappa_2+\kappa_3)\gamma=0$, $(\lambda_1-\lambda_2+\lambda_3-i-j)x_{ij}+\kappa_1x_{i+1\ j}+\kappa_3x_{i\ j+1}=0,$ and $p_m\leq 1$. Furthermore, $p_{m-i}\leq i+1$ and $\lambda_3=0$ if $p_m=1$, and $p_{m-i}\leq i$ if $p_m=0$, where $0\leq i\leq m$. \end{lemma} \begin{proof}Comparing the terms $1\otimes s^{(i)}\otimes s^{(j)}$ and the terms $s\otimes s^{(i)}\otimes s^{(j)}$ in (\ref{eq21a}), we get $(\kappa_1-\kappa_2+\kappa_3)\gamma=0$ and \begin{eqnarray}(\lambda_1-\lambda_2+\lambda_3-i-j)x_{ij}+\kappa_1x_{i+1\ j}+\kappa_3x_{i\ j+1}=0\label{eq24a}\end{eqnarray} respectively. It follows from (\ref{eq21a}) that \begin{eqnarray} & &\sum\limits_{i=1}^{m}\sum\limits_{j=0}^{p_i}\sum\limits_{t=1}^ix_{ij}\lambda_1(t+1)s^{(t+1)}\otimes s^{(i-t)}\otimes s^{(j)}\nonumber\\ & &+\sum\limits_{i=0}^{m}\sum\limits_{j=1}^{p_i}\sum\limits_{l=1}^j\lambda_3x_{ij}(l+1)s^{(l+1)}\otimes s^{(i)}\otimes s^{(j-l)}\nonumber \\ & &+\sum\limits_{i=2}^{m}\sum\limits_{j=0}^{p_i}\sum\limits_{t=2}^i\kappa_1 x_{ij}s^{(t)}\otimes s^{(i-t)}\otimes s^{(j)}+\sum\limits_{i=0}^{m}\sum\limits_{j=2}^{p_i}\sum\limits_{l=2}^j\kappa_3 x_{ij}s^{(l)}\otimes s^{(i)}\otimes s^{(j-l)}\label{eq25a}\\ & &=\sum\limits_{i=2}^{m}\sum\limits_{j=0}^{p_i}\sum\limits_{t=2}^ix_{ij}(i-t+1)s^{(t)}\otimes s^{(i-t+1)}\otimes s^{(j)}\nonumber \\ & & +\sum\limits_{i=0}^{m}\sum\limits_{j=2}^{p_i}\sum\limits_{l=2}^j(j-l+1)x_{ij}s^{(l)}\otimes s^{(i)}\otimes s^{(j-l+1)} .\nonumber\end{eqnarray} If $q:=p_{m}\geq 2$, then \begin{eqnarray} & &\sum\limits_{l=1}^{q}\lambda_3(l+1)s^{(l+1)}\otimes s^{(m)}\otimes s^{(q-l)} =\sum\limits_{l=2}^{q}({q}-l+1)s^{(l)}\otimes s^{(m)}\otimes s^{(q-l+1)}, \label{eq22abc}\end{eqnarray} which is
impossible for any $\lambda_3$. Thus $p_m\leq 1$. If $p_m=1$, then $p_{m-i}\leq i+1$ holds for $i=0$. Suppose $p_{m-i}\leq i+1$ for some $i$. Then $(m+1-p_{m-i-1}-(m-i-1))x_{m-i-1\ p_{m-i-1}}+\kappa_1x_{m-i \ p_{m-i-1}}=0$ by (\ref{eq24a}). If $p_{m-i-1}\geq p_{m-i}+1$, then $(i+2-p_{m-i-1})x_{m-i-1\ p_{m-i-1}}=0$ and $p_{m-i-1}=i+2$. If $p_{m-i-1}\leq p_{m-i}$, then $p_{m-i-1}\leq p_{m-i}\leq i+1<i+2$. Hence $p_{m-i}\leq i+1$ for all $0\leq i\leq m$. Moreover, it follows from (\ref{eq22abc}) that $2\lambda_3s^{(2)}\otimes s^{(m)}\otimes 1=0$, which implies that $\lambda_3=0$. If $p_m=0$, then $p_{m-i}\leq i$ holds for $i=0$. Suppose $p_{m-i}\leq i$ for some $i$. Then $(m-p_{m-i-1}-(m-i-1))x_{m-i-1\ p_{m-i-1}}+\kappa_1x_{m-i \ p_{m-i-1}}=0$ by (\ref{eq24a}). If $p_{m-i-1}\geq p_{m-i}+1$, then $(i+1-p_{m-i-1})x_{m-i-1\ p_{m-i-1}}=0$ and $p_{m-i-1}=i+1$. If $p_{m-i-1}\leq p_{m-i}$, then $p_{m-i-1}\leq p_{m-i}\leq i<i+1$. Hence $p_{m-i}\leq i$ for all $0\leq i\leq m$. \end{proof} Using Lemma \ref{lem24a}, we determine all nonzero $\gamma$ for $m\leq 1$ as follows. (T1) In the case when $m=0$ and $p_0=1$, we have $\lambda_3=0$, $\lambda_1-\lambda_2=1$ and $x_{00}+\kappa_3x_{01}=0$ by (\ref{eq24a}). So $\gamma=x_{01}(-\kappa_3\otimes 1+1\otimes s)$ for some nonzero $x_{01}\in{\bf k}$. (T2) In the case when $m=0$ and $p_0=0$, we have $\lambda_1-\lambda_2+\lambda_3=0$, $\kappa_1-\kappa_2+\kappa_3=0$ and hence $\gamma=x_{00}\otimes 1$ for some nonzero $x_{00}\in{\bf k}$. (T3) In the case when $m=1$, $p_1=1$ and $p_0=2$, we have $\lambda_3=0$ and $\lambda_1-\lambda_2=2$. From (\ref{eq25a}), we get $x_{02}=2\lambda_1x_{11}$ and $2\lambda_1x_{10}+\kappa_3x_{02}=0$. Since $x_{01}=-(\kappa_1x_{11}+\kappa_3x_{02})$ and $x_{00}=-\frac12(\kappa_1x_{10}+\kappa_3x_{01})$ by (\ref{eq24a}), we have $x_{10}=-\kappa_3x_{11}$, $x_{02}=2\lambda_1x_{11}$, $x_{01}=-(\kappa_1+2\lambda_1\kappa_3)x_{11}$ and $x_{00}=(\kappa_1\kappa_3+\lambda_1\kappa_3^2)x_{11}$.
Thus $\gamma=x_{11}((\kappa_1\kappa_3+\lambda_1\kappa_3^2)\otimes1-(\kappa_1+2\lambda_1\kappa_3)\otimes s+2\lambda_1\otimes s^{(2)}-\kappa_3s\otimes1+s\otimes s)$ for some nonzero $x_{11}\in{\bf k}$. (T4) In the case when $m=1$, $p_1=1$ and $p_0=1$, we have $\lambda_3=0$ and $\lambda_1-\lambda_2=2$ by (\ref{eq24a}). From (\ref{eq25a}), we get $\lambda_1=0$. Thus $\lambda_2=-2$. Since $x_{10}=-\kappa_3x_{11}$, $x_{01}=-\kappa_1x_{11}$ and $x_{00}=\kappa_1\kappa_3x_{11}$ by (\ref{eq24a}), $\gamma=x_{11}(\kappa_1\kappa_3\otimes1-\kappa_3s\otimes1-\kappa_1\otimes s+s\otimes s)$ for some nonzero $x_{11}\in{\bf k}$. (T5) In the case when $m=1$, $p_1=1$ and $p_0=0$, we have $\lambda_3=0$ and $\lambda_1-\lambda_2=2$. From (\ref{eq25a}), we get $\lambda_1=0$. Thus $\lambda_2=-2$, $x_{10}=-\kappa_3x_{11}$ and $x_{01}=-\kappa_1x_{11}=0$. So $\kappa_1=0$ and $\gamma=x_{11}(-\kappa_3s\otimes 1+s\otimes s)$ for some nonzero $x_{11}\in{\bf k}$. (T6) In the case when $m=1$, $p_1=0$ and $p_0=1$, we have $\lambda_1-\lambda_2+\lambda_3=1$ and $\lambda_1x_{10}+\lambda_3x_{01}=0$ by (\ref{eq25a}). So, $\gamma=-(\kappa_1x_{10}+\kappa_3x_{01})\otimes 1+x_{10}s\otimes 1+x_{01}\otimes s$ for some $x_{10}, x_{01} \in{\bf k}$, where $x_{10}\neq 0$. (T7) In the case when $m=1$, $p_1=0$ and $p_0=0$, we have $\lambda_1-\lambda_2+\lambda_3=1$ and $\lambda_1=0$ by (\ref{eq25a}). Hence $\gamma=x_{10}(-\kappa_1\otimes 1+s\otimes 1)$ for some nonzero $x_{10}\in{\bf k}$. \begin{lemma}\label{lem25a}Suppose that a nonzero $\gamma=\sum\limits_{i=0}^m\sum\limits_{j=0}^{p_i}x_{ij}s^{(i)}\otimes s^{(j)}$ satisfies (\ref{eq21a}) with $p_m=0$ and $m\geq 3$. Then $\lambda_3\neq 0$ and $m=3$. \end{lemma} \begin{proof} From (\ref{eq25a}), it is easy to see that the following equations hold.
\begin{eqnarray}\left\{\begin{array}{l} \lambda_1x_{m0}+\lambda_3x_{0m}=0 \\ ((m\lambda_3-1)x_{0m}+m\lambda_1x_{m-1\ 1})s^{(m)}\otimes 1\otimes s=0\\ ((2\lambda_1-m+1)x_{m0}+2\lambda_3x_{m-1\ 1})s^{(2)}\otimes s^{(m-1)}\otimes 1=0\\ ((3\lambda_1-m+2)x_{m0}+3\lambda_3x_{m-2\ 2})s^{(3)}\otimes s^{(m-2)}\otimes 1=0\\ (((m-1)\lambda_3-2)x_{0m}+(m-1)\lambda_1x_{m-2\ 2})s^{(m-1)}\otimes1\otimes s^{(2)}=0.\end{array}\right.\label{eq29c} \end{eqnarray} If $\lambda_3=0$, then $\lambda_1x_{m0}=0$ and $\lambda_1=0$. Hence $x_{0m}=m\lambda_1x_{m-1\ 1}=0$ by the second equation in (\ref{eq29c}). Thus $(1-m)x_{m0}=0$ and $m=1$, which is impossible by the assumption. Since $\lambda_3\neq 0$, the first three equations in (\ref{eq29c}) imply $\lambda_1+\lambda_3=\frac{m^2-m+2}{2m}$. Similarly, $\lambda_1+\lambda_3=\frac{m^2-3m+8}{3(m-1)}$ by the first and last two equations in (\ref{eq29c}). Hence $\frac{m^2-m+2}{2m}=\frac{m^2-3m+8}{3(m-1)}$, which simplifies to $m^3-7m-6=(m-3)(m+1)(m+2)=0$. Since $m\geq 3$, we obtain $m=3$.\end{proof} From Lemma \ref{lem25a}, we have two further cases. (T8) In the case when $m=2$ and $p_2=0$, it follows \begin{eqnarray}\left\{\begin{array}{ll} (2\lambda_1-1)x_{20}+2\lambda_3x_{11}=0 & (2\lambda_3-1)x_{02}+2\lambda_1x_{11}=0\\ \lambda_1x_{20}+\lambda_3x_{02}=0& x_{10}+\kappa_1x_{20}+\kappa_3x_{11}=0\\ x_{01}+\kappa_1x_{11}+\kappa_3x_{02}=0& 2x_{00}+\kappa_1x_{10}+\kappa_3x_{01}=0\end{array}\right.\label{eq291c} \end{eqnarray} from (\ref{eq25a}). If $\lambda_3=0$, then $x_{20}=x_{02}=0$, which contradicts the fact that $m=2$. Hence $\lambda_3\neq 0$. Moreover, $x_{02}=-\frac{\lambda_1}{\lambda_3}x_{20}$. Thus $\left|\begin{array}{cc}2\lambda_1-1&2\lambda_3\\ -\frac{(2\lambda_3-1)\lambda_1}{\lambda_3}&2\lambda_1\end{array}\right|=4\lambda_1(\lambda_1+\lambda_3-1)=0$. So either $\lambda_1=0$ or $\lambda_1+\lambda_3=1$.
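Indeed, expanding the above determinant along the first row gives $$\left|\begin{array}{cc}2\lambda_1-1&2\lambda_3\\ -\frac{(2\lambda_3-1)\lambda_1}{\lambda_3}&2\lambda_1\end{array}\right| =2\lambda_1(2\lambda_1-1)+2\lambda_1(2\lambda_3-1)=4\lambda_1^2+4\lambda_1\lambda_3-4\lambda_1,$$ which factors as $4\lambda_1(\lambda_1+\lambda_3-1)$.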
If $\lambda_1=0$, then $$\gamma=\frac{x_{20}}{2\lambda_3}(\kappa_1(\lambda_3\kappa_1+\kappa_3)\otimes1-\kappa_1\otimes s -(2\lambda_3\kappa_1+\kappa_3)s\otimes1+s\otimes s+2\lambda_3s^{(2)}\otimes1)$$ by (\ref{eq24a}) and (\ref{eq291c}). If $\lambda_1+\lambda_3=1$, then \begin{eqnarray*}\begin{array}{lll}\gamma&=&x_{20}((\frac{\lambda_3-1}{2\lambda_3}\kappa_3^2+\frac{2\lambda_3-1}{2\lambda_3}\kappa_1\kappa_3+\frac12\kappa_1^2)\otimes1 +(\frac{(1-2\lambda_3)\kappa_3}{2\lambda_3} -\kappa_1)s\otimes1\\ &&+\frac{(2\lambda_1-1)\kappa_1+2\lambda_1\kappa_3}{2\lambda_3}\otimes s-\frac{\lambda_1}{\lambda_3}\otimes s^{(2)} +\frac{2\lambda_3-1}{2\lambda_3}s\otimes s+s^{(2)}\otimes1)\end{array}\end{eqnarray*} by (\ref{eq24a}) and (\ref{eq291c}). (T9) In the case when $m=3$ and $p_3=0$, it follows from (\ref{eq25a}) that \begin{eqnarray}\left\{\begin{array}{ll} 2\lambda_1x_{10}+2\lambda_3x_{01}+\kappa_1x_{20}+\kappa_3x_{02}=0, & 2\lambda_1x_{12}+2\lambda_3x_{03}=2x_{03},\\ 2\lambda_1x_{11}+2\lambda_3x_{02}+\kappa_1x_{21}+\kappa_3x_{03}=x_{02},& \lambda_1x_{30}+\lambda_3x_{03}=0,\\ 2\lambda_1x_{20}+2\lambda_3x_{11}+\kappa_1x_{30}+\kappa_3x_{12}=x_{20},& 3\lambda_1x_{30}+3\lambda_3x_{12}=x_{30},\\ 3\lambda_1x_{20}+3\lambda_3x_{02}+\kappa_1x_{30}+\kappa_3x_{03}=0, &2(\lambda_1-1)x_{30}+2\lambda_3x_{21}=0,\\ 2\lambda_1x_{21}+2\lambda_3x_{12}=x_{21}+x_{12}, &3\lambda_1x_{21}+3\lambda_3x_{03}=x_{03}.\end{array}\right.\label{eq292c} \end{eqnarray} Thus $x_{03}=-\frac{\lambda_1}{\lambda_3}x_{30}$ and $\left|\begin{array}{cc}3\lambda_1-1&3\lambda_3\\ -\frac{(2\lambda_3-2)\lambda_1}{\lambda_3}&2\lambda_1\end{array}\right|=6\lambda_1(\lambda_1+\lambda_3-\frac43)=0$. If $\lambda_1=0$, then $x_{03}=0$, $x_{12}=\frac1{3\lambda_3}x_{30}$ and $x_{21}=\frac1{\lambda_3}x_{30}$. Thus $(2\lambda_3-1)\frac1{3\lambda_3}x_{30}=\frac1{\lambda_3}x_{30}$, which implies $\lambda_3=2$, $x_{12}=\frac16x_{30}$ and $x_{21}=\frac12x_{30}$.
By (\ref{eq24a}) and (\ref{eq292c}), one obtains $$\begin{array}{lll}\gamma&=&x_{30}(s^{(3)}\otimes 1-(\kappa_1+\frac12\kappa_3)s^{(2)}\otimes1+\frac12s^{(2)}\otimes s+\frac12(\kappa_1^2+\frac16\kappa_3^2+\kappa_1\kappa_3)s\otimes1\\ &&-\frac16(3\kappa_1+\kappa_3)s\otimes s+\frac16s\otimes s^{(2)} -\frac16(\kappa_1^3+\frac12\kappa_1\kappa_3^2+\frac32\kappa_1^2\kappa_3)\otimes1\\&&+\frac1{12}(3\kappa_1^2+2\kappa_1\kappa_3)\otimes s -\frac16\kappa_1\otimes s^{(2)}).\end{array}$$ If $\lambda_1+\lambda_3=\frac43$, then $x_{12}=\frac{\lambda_3-1}{\lambda_3}x_{30}$ and $x_{21}=\frac{3\lambda_3-1}{3\lambda_3}x_{30}$ by (\ref{eq292c}). Substituting these into $(2\lambda_1-1)x_{21}+(2\lambda_3-1)x_{12}=0$, which also follows from (\ref{eq292c}), yields $\lambda_3=\frac23$. Therefore $\lambda_1=\lambda_3=\frac23$ and $\lambda_2=-\frac53$. From (\ref{eq24a}), we get $$\begin{array}{lll}\gamma&=&x_{30}((s^{(3)}\otimes1-1\otimes s^{(3)})+\frac12(s^{(2)}\otimes s-s\otimes s^{(2)})-(\kappa_1+\frac12\kappa_3)s^{(2)}\otimes 1\\&&+(\frac12\kappa_1+\kappa_3)\otimes s^{(2)}+\frac12(\kappa_1^2+\kappa_1\kappa_3-\frac12\kappa_3^2)s\otimes1+\frac12(\frac12\kappa_1^2-\kappa_1\kappa_3-\kappa_3^2)\otimes s\\ &&+ \frac12(\kappa_3-\kappa_1)s\otimes s +\frac16(\kappa_3-\kappa_1)(\kappa_1^2+\frac52\kappa_1\kappa_3+\kappa_3^2)\otimes1).\end{array}$$ \begin{lemma}\label{lem26a}Suppose that a nonzero $\gamma=\sum\limits_{i=0}^m\sum\limits_{j=0}^{p_i}x_{ij}s^{(i)}\otimes s^{(j)}$ satisfies (\ref{eq21a}) with $p_m=1$. Then $\lambda_3=0$ and $m\leq 2$. \end{lemma} \begin{proof}Since $p_m=1$, $\lambda_3=0$ by Lemma \ref{lem24a}. Suppose $m\geq 3$. From (\ref{eq25a}), it is easy to see that the following equations hold.
\begin{eqnarray}\left\{\begin{array}{l} (m+1)\lambda_1x_{m1}=x_{0\ m+1}, \\ ((m\lambda_1-1)x_{m1}-x_{1m})s^{(m)}\otimes s\otimes s=0,\\ (m\lambda_1x_{m-1\ 2}-2x_{0\ m+1})s^{(m)}\otimes 1\otimes s^{(2)}=0,\\ (((m-1)\lambda_1-2)x_{m1}-x_{2\ m-1})s^{(m-1)}\otimes s^{(2)}\otimes s=0,\\ ((2\lambda_1-m+1)x_{m1}-x_{m-1\ 2})s^{(2)}\otimes s^{(m-1)}\otimes s=0,\\ ((3\lambda_1-m+2)x_{m 1}-x_{m-2\ 3})s^{(3)}\otimes s^{(m-2)}\otimes s=0,\\ ((2\lambda_1-m+2)x_{m-1\ 2}-2x_{m-2\ 3})s^{(2)}\otimes s^{(m-2)}\otimes s^{(2)}=0 , \end{array}\right.\label{eq293c} \end{eqnarray} which implies that \begin{eqnarray*}\left\{\begin{array}{l}(m+1)\lambda_1x_{m1}=x_{0\ m+1},\\ (m\lambda_1-1)x_{m1}=x_{1m},\\ m\lambda_1x_{m-1\ 2}=2x_{0\ m+1},\\ ((m-1)\lambda_1-1)x_{m-1\ 2}=2x_{1m} \end{array}\right. \ \ \mbox{and}\ \ \ \left\{\begin{array}{l}(2\lambda_1-m+1)x_{m1}=x_{m-1\ 2},\\ (3\lambda_1-m+2)x_{m1}=x_{m-2\ 3},\\ (2\lambda_1-m+2)x_{m-1\ 2}=2x_{m-2\ 3}. \end{array}\right. \end{eqnarray*} Since $x_{m1}\neq 0$ by the assumption, one obtains $$\left|\begin{array}{cc}m\lambda_1-1&\frac12((m-1)\lambda_1-1)\\ 2(m+1)\lambda_1&m\lambda_1\end{array}\right|=\left|\begin{array}{cc}2\lambda_1-m+1&1\\ 3\lambda_1-m+2&\frac12({2\lambda_1-m+2})\end{array}\right|=0.$$ Solving the above equations yields $(\lambda_1, m)=(0, -1)$, or $(0, 2)$, or $(-1, -1)$, or $(-1, -2)$. This is a contradiction since $m\geq 3$. So $m\leq 2$. \end{proof} From Lemma \ref{lem26a}, we get the last two cases.
(T10) In the case when $m=2$ and $p_2=1$, it follows from (\ref{eq25a}) that \begin{eqnarray}\left\{\begin{array}{ll} (2\lambda_1-1)x_{20}+\kappa_3x_{12}=0, & 3\lambda_1x_{20}+\kappa_3x_{03}=0,\\ 2\lambda_1x_{21}=x_{12}+x_{21},& 3\lambda_1x_{21}=x_{03},\\ 2\lambda_1x_{10}+\kappa_1x_{20}+\kappa_3x_{02}=0,& 2\lambda_1x_{12}=2x_{03},\\ 2\lambda_1x_{11}+\kappa_1x_{21}+\kappa_3x_{03}=x_{02}.&\end{array}\right.\label{eq293d} \end{eqnarray} If $\lambda_1=0$, then $$\begin{array}{lll}\gamma&=&x_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+(\kappa_1\otimes s^{(2)}-\kappa_3s^{(2)}\otimes1) +(\frac12\kappa_3(2\kappa_1-\kappa_3)s\otimes 1\\ &&+\frac12\kappa_1(\kappa_1-2\kappa_3)\otimes s)+(\kappa_3-\kappa_1)s\otimes s+\frac12\kappa_1\kappa_3(\kappa_3-\kappa_1)\otimes 1)\end{array}$$ by (\ref{eq24a}) and (\ref{eq293d}). If $\lambda_1\neq 0$, then $\lambda_1=2$, $\lambda_2=-1$ and $\lambda_3=0$ by (\ref{eq293c}). Therefore, $$\begin{array}{lll}\gamma&=&x_{21}(s^{(2)}\otimes s+3s\otimes s^{(2)}-\kappa_3s^{(2)}\otimes 1-3(\kappa_1+2\kappa_3)\otimes s^{(2)}-(\kappa_1+3\kappa_3)s\otimes s\\ &&+(\kappa_1\kappa_3+\frac32\kappa_3^2)s\otimes 1+\frac12(\kappa_1^2+6\kappa_1\kappa_3+6\kappa_3^2)\otimes s\\ &&-\frac12(\kappa_1^2\kappa_3+3\kappa_1\kappa_3^2+2\kappa_3^3)\otimes 1+6\otimes s^{(3)}).\end{array}$$ \bigskip Now we start to classify all $m$-type Schr\"odinger-Virasoro Lie conformal algebras based on the cases (T1)-(T10). Note that $\eta$ has been completely determined by Lemma \ref{lem24}. We only need to find all $\alpha_m'$ for this purpose. In the case of $\alpha_m'=0$, (\ref{eq218}) is trivial. In this case, we get the $0$-type Schr\"odinger-Virasoro Lie conformal algebra (A), with $$\alpha'_m=\alpha_0'= 0,\quad \beta_1=\lambda_1s\otimes 1 -1\otimes s+\kappa_1\otimes1,$$ $$\beta_2=\lambda_2s\otimes 1 -1\otimes s+\kappa_2\otimes 1,\qquad \eta=a\otimes 1,$$ where $\lambda_1,\lambda_2, \kappa_1,\kappa_2, a\in{\bf k}$ satisfying $a\lambda_1=a\kappa_1=0$.
In the case of $\alpha_m'\neq 0$, we replace $\beta_3$ with $\beta_1$ and $\gamma$ with $\alpha_m'$ in (\ref{eq21a}) to determine $\alpha_m'$ satisfying (\ref{eq218}). We also replace $x_{ij}$ with $w_{ij} (=-w_{ji})$ in (T1)-(T10). If $m=1$, then $\alpha'_m=\alpha_1'=w_{01}(1\otimes s-s\otimes 1)\neq 0$ by (T6). Moreover, $2\lambda_1-\lambda_2=1$, \ $\kappa_2=2\kappa_1$ and $\eta=a\otimes 1$, where $w_{01}, a\in{\bf k}$ satisfying $a\lambda_1=a\kappa_1=0$ by Lemma \ref{lem24}. Thus we get a $1$-type Schr\"odinger-Virasoro Lie conformal algebra (B) with $$\alpha'_m=\alpha_1'=w_{01}(1\otimes s-s\otimes 1)\neq 0,\quad \beta_1=\lambda_1s\otimes 1 -1\otimes s+\kappa_1\otimes1,$$ $$\beta_2=(2\lambda_1-1)s\otimes 1 -1\otimes s+2\kappa_1\otimes 1,\qquad \eta=a\otimes 1,$$ where $\lambda_1, \kappa_1, w_{01}, a\in{\bf k}$ satisfying $a\lambda_1=a\kappa_1=0$. If $m=2$ and $w_{12}=0$, then $\alpha'_m=\alpha_2''=w_{02}((1\otimes s^{(2)}-s^{(2)}\otimes1)-\kappa_1(1\otimes s-s\otimes1))$, $\lambda_1=\frac12$ by (T8). Moreover, $\lambda_2=-1$ and $\kappa_2=2\kappa_1$ by Lemma \ref{lem24a}. Thus one obtains a $2$-type Schr\"odinger-Virasoro Lie conformal algebra (C) with $$\alpha'_m=\alpha_2''=w_{02}((1\otimes s^{(2)}-s^{(2)}\otimes1)-\kappa_1(1\otimes s-s\otimes1))\neq0,\quad \eta=0, $$ $$\beta_1=\frac12s\otimes1-1\otimes s+\kappa_1\otimes1,\quad \beta_2=-s\otimes1-1\otimes s+2\kappa_1\otimes1,$$ where $ w_{02},\kappa_1\in{\bf k}$. If $m=2$ and $w_{12}\neq 0$, then $\lambda_1=0$, $\lambda_2=-3$, $w_{02}=-\kappa_1w_{12}$ and $w_{01}=-\frac12\kappa_1w_{02}=\frac 12\kappa_1^2w_{12}$ by (\ref{eq24a}). Hence $\alpha_m'=\alpha_2'=-w_{12}(\kappa_1(1\otimes s^{(2)}-s^{(2)}\otimes1)-(s\otimes s^{(2)}-s^{(2)}\otimes s)-\frac{\kappa_1^2}2(1\otimes s-s\otimes1))$ from (T10). Note that $w_{12}\neq 0$ and $\eta=0$ by Lemma \ref{lem24}.
Thus one obtains a $2$-type Schr\"odinger-Virasoro Lie conformal algebra (D) with $$\alpha_m'=-w_{12}(\kappa_1(1\otimes s^{(2)}-s^{(2)}\otimes1)-(s\otimes s^{(2)}-s^{(2)}\otimes s)-\frac{\kappa_1^2}2(1\otimes s-s\otimes1))\neq0,$$ $$\eta=0,\quad \beta_1=-1\otimes s+\kappa_1\otimes1,\quad \beta_2=-3s\otimes 1-1\otimes s+2\kappa_1\otimes1$$ for any $ w_{12}, \kappa_1\in{\bf k}$. If $m=3$, then $\alpha_m'=\alpha_3'=w_{03}[(1\otimes s^{(3)}- s^{(3)}\otimes 1)+\frac12(s\otimes s^{(2)}-s^{(2)}\otimes s)-\frac{3\kappa_1}2(1\otimes s^{(2)}-s^{(2)}\otimes 1)+\frac34\kappa_1^2(1\otimes s-s\otimes 1)]$ for some nonzero $w_{03}\in{\bf k}$ from (T9). We get a $3$-type Schr\"odinger-Virasoro Lie conformal algebra (E) with $$\alpha_m'=w_{03}((1\otimes s^{(3)}- s^{(3)}\otimes 1)+\frac12(s\otimes s^{(2)}-s^{(2)}\otimes s)-\frac{3\kappa_1}2(1\otimes s^{(2)}-s^{(2)}\otimes 1)+\frac34\kappa_1^2(1\otimes s-s\otimes 1)),$$ $$\eta=0,\quad \beta_1=\frac 23s\otimes 1-1\otimes s+\kappa_1\otimes1,\quad \beta_2=-\frac53s\otimes1-1\otimes s+2\kappa_1\otimes 1$$ for any $0\neq w_{03}, \kappa_1\in{\bf k}$. Summing up, we have the following theorem. \begin{theorem} \label{thm31} There are only five kinds of $m$-type Schr\"odinger-Virasoro Lie conformal algebras from (A) to (E) described as above.\end{theorem} \begin{remark} If ${\bf k}$ is algebraically closed, then we can assume that $\alpha_1'=1\otimes s-s\otimes 1$ in (B), $\alpha_2''=(1\otimes s^{(2)}-s^{(2)}\otimes1)-\kappa_1(1\otimes s-s\otimes 1)$ in (C), $\alpha_2'=\kappa_1(1\otimes s^{(2)}-s^{(2)}\otimes1)-(s\otimes s^{(2)}-s^{(2)}\otimes s)-\frac{\kappa_1^2}2(1\otimes s-s\otimes 1)$ in (D) and $\alpha_3'=1\otimes s^{(3)}-s^{(3)}\otimes1 -\frac32\kappa_1(1\otimes s^{(2)}-s^{(2)}\otimes1)+\frac12(s\otimes s^{(2)}-s^{(2)}\otimes s)+\frac34\kappa_1^2(1\otimes s-s\otimes 1)$ in (E) by letting $e_1'=\frac1{\sqrt{w_{01}}}e_1$, $e_1'=\frac1{\sqrt{w_{02}}}e_1$, $e_1'=\frac1{\sqrt{-w_{12}}}e_1$, and $e_1'=\frac1{\sqrt{w_{03}}}e_1$ respectively.
\end{remark} \begin{example}\label{ex33} Let $A={\bf k}[[t, t^{-1}]]$ be an $H$-bimodule given by $sf=fs=\frac{df}{dt}$ for any $f\in A$. Then $A$ is an $H$-differential algebra both for the left and the right action of $H$. Suppose $\mathscr{A}_A(L_s)=A\otimes_HL_s$. Then $\mathscr{A}_A(L_s)$ is a Lie algebra with bracket $[f\otimes_Ha, g\otimes _Hb]=\sum\limits_{i=0}^2(fh_{1i})(gh_{2i})\otimes_He_i,$ where $[a,b]=\sum\limits_{i=0}^2h_{1i}\otimes h_{2i}\otimes_He_i\in H^{\otimes 2}\otimes_HL_s$. Let $L_n=t^{n+1}\otimes_He_0$, $Y_{p+\rho}=t^{p+1}\otimes_H e_1$ and $M_{k+2\rho}=t^{k+1}\otimes_He_2$ for any $n, p, k\in \mathbb{Z}$ and $\rho\in {\bf k}$. Suppose that $SV_{\rho}$ is a vector space with a basis $\{L_n, Y_p, M_k|n\in\mathbb{Z}, p\in \rho+\mathbb{Z}, k\in 2\rho+ \mathbb{Z}\}$. If $L_s$ is of type (B) with $\eta=0$ and $w_{01}=1$, then $SV_{\rho}$ is a Lie algebra with nonzero brackets given by \begin{eqnarray*} & [L_n, L_{n'}]=(n-n')L_{n+n'},\qquad [Y_p, Y_{p'}]=(p-p')M_{p+p'},\\ & [L_n, Y_p]=\left(\lambda_1(n+1)-p+\rho-1\right)Y_{n+p}+\kappa_1Y_{n+p+1}, \\ & [L_n, M_k]=\left[(2\lambda_1-1)(n+1)-k+2\rho-1\right]M_{n+k}+2\kappa_1M_{n+k+1}.\end{eqnarray*} From these, the Schr\"odinger-Virasoro Lie algebra defined in \cite{U} is the algebra $SV_{\rho}$ in the case when $\lambda_1=\rho=\frac12$ and $\kappa_1=0$. The algebra $W(\varrho)[0]$ introduced in \cite{L} is isomorphic to the subalgebra of $SV_{\rho}$ generated by $\{ L_n,Y_p|n,p\in\mathbb{Z}\}$, where $\kappa_1=0$, $\rho=\frac12$ and $\varrho=2\lambda_1-1$.
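To illustrate how these brackets are computed, note that $s$ acts on $A$ as $\frac{d}{dt}$ on both sides, so the pseudobracket coefficient $\alpha=s\otimes 1-1\otimes s$ gives $$[L_n, L_{n'}]=\left((n+1)t^{n}\cdot t^{n'+1}-t^{n+1}\cdot(n'+1)t^{n'}\right)\otimes_He_0=(n-n')L_{n+n'},$$ and similarly $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$ applied to $L_n=t^{n+1}\otimes_He_0$ and $Y_p=t^{p-\rho+1}\otimes_He_1$ gives $$[L_n, Y_p]=\left(\lambda_1(n+1)-(p-\rho+1)\right)t^{n+p-\rho+1}\otimes_He_1+\kappa_1t^{n+p-\rho+2}\otimes_He_1,$$ which is exactly $\left(\lambda_1(n+1)-p+\rho-1\right)Y_{n+p}+\kappa_1Y_{n+p+1}$.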
If $L_s$ is of type (C) with $w_{02}=1$, then $SV_{\rho}$ is a Lie algebra with nonzero brackets \begin{eqnarray*} & [L_n,L_{n'}]=(n-n')L_{n+n'},\qquad [L_n, Y_p]=(\frac{n-1}2-p+\rho)Y_{n+p}+\kappa_1Y_{n+p+1},\\ & [L_n, M_k]=(2\rho-2-n-k)M_{n+k}+2\kappa_1M_{n+k+1}, \ \ \ \mbox{and} \\ & [Y_p, Y_{p'}]=(p-p')\left[\kappa_1M_{p+p'}-\frac12(p+p'-2\rho+1)M_{p+p'-1}\right].\end{eqnarray*} If $L_s$ is of type (D) with $\eta=0$ and $w_{12}=1$, then $SV_{\rho}$ is a Lie algebra with nonzero brackets given by \begin{eqnarray*} & [L_n, L_{n'}]=(n-n')L_{n+n'},\qquad [L_n, Y_p]=(\rho-1-p)Y_{n+p}+\kappa_1Y_{n+p+1},\\ & [L_n, M_k]=(2\rho-4-3n-k)M_{n+k}+2\kappa_1M_{n+k+1},\end{eqnarray*} and \begin{eqnarray*}[Y_p,Y_{p'}]&=&(p-p')\frac{\kappa_1}2(p+p'+1-2\rho)M_{p+p'-1}\\ & &-(p-p')\left[\frac12(p+1-\rho)(p'+1-\rho)M_{p+p'-2}+\frac{\kappa_1^2}2M_{p+p'}\right].\end{eqnarray*} If $L_s$ is of type (E) with $w_{03}=1$, then $SV_{\rho}$ is a Lie algebra with nonzero brackets \begin{eqnarray*} & [L_n, L_{n'}]=(n-n')L_{n+n'},\qquad [L_n, Y_p]=(\frac{2n-1}3-p+\rho)Y_{n+p}+\kappa_1Y_{n+p+1},\\ & [L_n, M_k]=(2\rho-\frac{5n+8}3-k)M_{n+k}+2\kappa_1M_{n+k+1},\end{eqnarray*} and \begin{eqnarray*}[Y_p, Y_{p'}]&=&(p'-p)\left[\frac34\kappa_1^2M_{p+p'}-\frac32\kappa_1(p+p'-2\rho+1)M_{p+p'-1}\right]\\ & &+\frac{p'-p}{2}\left[2p^2+3pp'+2p'^2+(1-7\rho)(p+p')+11\rho^2-2\rho+1\right]M_{p+p'-2}.\end{eqnarray*} \end{example} \section{Schr\"odinger-Virasoro Lie $H$-pseudoalgebras} In this section, we determine all Schr\"odinger-Virasoro Lie $H$-pseudoalgebras. First of all, we describe the Schr\"odinger-Virasoro Lie $H$-pseudoalgebras satisfying $\eta_{ij}=0$ for all $1\leq i,j\leq 2$. \begin{proposition} \label{prop41} Let $L$ be a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with pseudobrackets given by (I*) satisfying $\eta_{ij}=0$ for all $1\leq i, j\leq 2$. Then $L$ must be one of the following algebras (Z1)-(Z5). 
(Z1) $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with $\alpha_m'=0$, $\eta=a\otimes 1$, $\beta_i=\lambda_i s\otimes1-1\otimes s+\kappa_i\otimes 1$ for $1\leq i\leq 3$, where $a,\lambda_1,\lambda_2,\lambda_3,\kappa_1,\kappa_2,\kappa_3\in{\bf k}$ satisfying $a\lambda_1=a\kappa_1=0$. (Z2) $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with $\alpha_m'=w_{01}(1\otimes s-s\otimes 1)\neq 0$, $\eta=a\otimes 1$, $\beta_1=\lambda_1 s\otimes1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(2\lambda_1-1) s\otimes1-1\otimes s+2\kappa_1\otimes 1$, $\beta_3=\lambda_3 s\otimes1-1\otimes s+\kappa_3\otimes 1$, where $ w_{01}, a,\lambda_1, \lambda_3,\kappa_1,\kappa_3 \in{\bf k}$ satisfying $a\lambda_1=a\kappa_1=0$. (Z3) $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with $\alpha_m'=-w_{12}(\kappa_1(1\otimes s^{(2)}-s^{(2)}\otimes1)-(s\otimes s^{(2)}-s^{(2)}\otimes s)-\frac{\kappa_1^2}2(1\otimes s-s\otimes1))\neq0,$ $\eta=0$, $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-3s\otimes 1-1\otimes s+2\kappa_1\otimes1$, $\beta_3=\lambda_3s\otimes1-1\otimes s+\kappa_3\otimes 1$, where $w_{12},\lambda_3, \kappa_1,\kappa_3\in{\bf k}$. (Z4) $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with $\alpha_m'=\alpha_2''=w_{02}((1\otimes s^{(2)}-s^{(2)}\otimes1)-\kappa_1(1\otimes s-s\otimes1))\neq0$, $\eta=0$, $\beta_1=\frac12s\otimes 1-1\otimes s+\kappa_1\otimes1$, $\beta_2=-s\otimes 1-1\otimes s+2\kappa_1\otimes1$, $\beta_3=\lambda_3s\otimes 1-1\otimes s+\kappa_3\otimes1$, where $w_{02},\lambda_3,\kappa_1,\kappa_3\in{\bf k}$.
(Z5) $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with $\alpha_m'=w_{03}((1\otimes s^{(3)}- s^{(3)}\otimes 1)+\frac12(s\otimes s^{(2)}-s^{(2)}\otimes s)-\frac{3\kappa_1}2(1\otimes s^{(2)}-s^{(2)}\otimes 1)+\frac34\kappa_1^2(1\otimes s-s\otimes 1))\neq 0$, $\eta=0$, $\beta_1=\frac23s\otimes1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-\frac53s\otimes 1-1\otimes s+2\kappa_1\otimes 1$, $\beta_3=\frac23s\otimes1-1\otimes s+\kappa_3\otimes 1$, where $w_{03},\kappa_1, \kappa_3\in{\bf k}$. \end{proposition} \begin{proof} Since $\eta_{ij}=0$ for all $1\leq i, j\leq 2$, $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra if and only if (\ref{eq217})-(\ref{eq220}) hold by Lemma \ref{lem28}. Lemma \ref{lem24} tells us that $\eta=a\otimes 1$ for some $a\in {\bf k}$ with $a\lambda_1=a\kappa_1=0$ and $(\kappa_2-2\kappa_1)\alpha_m'=0$ if (\ref{eq217}), (\ref{eq219}) and (\ref{eq220}) hold. Thus, $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra if and only if (\ref{eq218}) is true. By Theorem \ref{thm31}, $L$ is the algebra described by one of the cases (Z1)-(Z5).\end{proof} Let us denote $e_0: =-L$, $e_1: =T(X)$, $e_2: =T^-(X)$ and $e_3: =U$, where $L, $ $T(X)$, $T^-(X)$ are given in \cite{CP}. Then the subalgebra $He_0\oplus He_1\oplus He_2\oplus He_3$ of the large $N=4$ conformal superalgebras in \cite{CP} is the Schr\"odinger-Virasoro Lie $H$-pseudoalgebra described by (Z1) in Proposition \ref{prop41}, where $a=0$ and $\beta_i =-1\otimes s $ for $1\leq i\leq 3$. In the remainder of this section, we always assume that $\eta_{ij}$, for $1\leq i, j\leq 2$, are not all zero. \begin{lemma}\label{lem41a}Suppose $\eta_{11}=\sum\limits_{i=0}^{m_1}\sum\limits_{j=0}^{p_i}a_{ij}s^{(i)}\otimes s^{(j)}\neq 0$ for some $a_{ij}\in{\bf k}$ satisfying (\ref{eq28}). Then $\kappa_3=0$ and $\lambda_3\in \{0, 1, 2, 3\}$. Further, $\eta_{11}$ must be one of the following types. (a1) $\eta_{11}=a_{00}\otimes 1$ if $\lambda_3=0$.
(a2) $\eta_{11}=a_{10}(-\kappa_1\otimes 1+s\otimes 1-\lambda_1\otimes s)$ for any $\kappa_1$, $\lambda_1$ if $\lambda_3=1$. (a3) $\eta_{11}=\frac{a_{20}}{4}(2\kappa_1^2\otimes1-\kappa_1\otimes s -4\kappa_1s\otimes1+s\otimes s+4s^{(2)}\otimes1)$ if $(\lambda_1,\lambda_3)=(0,2)$, or $\eta_{11}=a_{20}(\frac{\kappa_1^2}2\otimes1-\kappa_1s\otimes1-\frac{3\kappa_1}{4}\otimes s+\frac{1}{2}\otimes s^{(2)} +\frac{3}{4}s\otimes s+s^{(2)}\otimes1)$ if $(\lambda_1,\lambda_3)=(-1, 2)$. (a4) $\eta_{11}=a_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+\kappa_1\otimes s^{(2)} +\frac12\kappa_1^2 \otimes s-\kappa_1s\otimes s)$ if $\lambda_3=3$. In this case, $\lambda_1=0$. \end{lemma} \begin{proof} Note that $\kappa_3=0$ by Lemma \ref{lem25a} since $\eta_{11}\neq 0$. We replace $\gamma$ with $\eta_{11}$ and $\beta_2$ with $\beta_1$ in (\ref{eq21a}) as before. Then (\ref{eq21a}) becomes (\ref{eq28}). Since $\eta_{11}$ satisfies (\ref{eq28}), it is easy to obtain (a1)-(a4) from (T1)-(T10) in Section 3 by using $a_{ij}$ instead of $x_{ij}$. \end{proof} In the same way, one can obtain the following result by (\ref{eq211}). \begin{lemma}\label{lem42a}Suppose $\eta_{22}=\sum\limits_{i=0}^{m_4}\sum\limits_{j=0}^{v_i}d_{ij}s^{(i)}\otimes s^{(j)}\neq 0$ for some $d_{ij}\in{\bf k}$ satisfying (\ref{eq211}). Then $\kappa_3=0$ and $\lambda_3\in \{0, 1, 2, 3\}$. Further, $\eta_{22}$ must be one of the following types. (d1) $\eta_{22}=d_{00}\otimes 1$ if $\lambda_3=0$. (d2) $\eta_{22}=d_{10}(-\kappa_2\otimes 1+s\otimes 1-\lambda_2\otimes s)$ for any $\kappa_2, \lambda_2\in{\bf k}$ if $\lambda_3=1$. (d3) $\eta_{22}=\frac{d_{20}}{4}(2\kappa_2^2\otimes1-\kappa_2\otimes s -4\kappa_2s\otimes1+s\otimes s+4s^{(2)}\otimes1)$ if $(\lambda_2,\lambda_3)=(0,2)$, or $\eta_{22}=d_{20}(\frac{\kappa_2^2}2\otimes1-\kappa_2s\otimes1-\frac{3\kappa_2}{4}\otimes s+\frac{1}{2}\otimes s^{(2)} +\frac{3}{4}s\otimes s+s^{(2)}\otimes1)$ for any $\kappa_2\in{\bf k}$ if $(\lambda_2,\lambda_3)=(-1,2)$. 
(d4) $\eta_{22}=d_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+\kappa_2\otimes s^{(2)} +\frac12\kappa_2^2s\otimes1-\kappa_2s\otimes s)$ for any $\kappa_2$ if $\lambda_3=3$. In this case, $\lambda_2=0$. \end{lemma} In the next four lemmas, we describe all $\eta_{ij}$ satisfying (\ref{eq212})-(\ref{eq215}) for $1\leq i, j\leq 2$. With notations in Lemmas \ref{lem41a}-\ref{lem42a}, we assume in addition that $\eta_{12}=\sum\limits_{i=0}^{m_2}\sum\limits_{j=0}^{q_i}b_{ij}s^{(i)}\otimes s^{(j)}$ and $\eta_{21}=\sum\limits_{i=0}^{m_3}\sum\limits_{j=0}^{u_i}c_{ij}s^{(i)}\otimes s^{(j)}$ for some $b_{ij},c_{ij}\in {\bf k}$. \begin{lemma}\label{lem43a} Suppose $\eta_{ij}$ ($1\leq i,j\leq 2$) satisfy (\ref{eq28})-(\ref{eq215}) and $\eta_{12}=0$. Then $\eta_{11}=a_{00}\otimes 1$, $\eta_{22}=d_{00}\otimes 1$. When $\eta_{21}=0$, we have either $\eta_{11}\neq 0$ or $\eta_{22}\neq 0$, and $\lambda_3=\kappa_3=0$. When $\eta_{21}\neq 0$, one of the following cases occurs. (i) If $\eta_{11}=\eta_{22}=0$, then $\eta_{21}$ is described by (T1)-(T10) in Section 3, where $x_{ij}$ are replaced by $c_{ij}$, and $\lambda_1$, $\lambda_2$ are exchanged. Moreover, $\kappa_2-\kappa_1+\kappa_3=0$. (ii) If $\eta_{11}=\eta_{22}\neq 0$, then $\eta_{21}=c_{00}\otimes 1$ for some nonzero $c_{00}$. Moreover, $\kappa_1=\kappa_2$ and $\lambda_1-\lambda_2=\lambda_3=0$. Suppose $\eta_{11}\neq \eta_{22}$. Then $m_3\leq 1$. Further, (iii) if $m_3=0$, then $\eta_{21}=c_{00}\otimes 1$ for some nonzero $c_{00}\in{\bf k}$. Moreover, $\kappa_1=\kappa_2$ and $\lambda_1=\lambda_2$. (iv) if $m_3=1$, then $$\eta_{21}=c_{10}\left(-\kappa_1\otimes 1+s\otimes 1+\frac{d_{00}}{d_{00}-a_{00}}\otimes s\right)$$ for some nonzero $ c_{10}\in{\bf k}$. Moreover, $\kappa_1=\kappa_2$ and $\lambda_1+1=\lambda_2=\lambda_3=0$. \end{lemma} \begin{proof}Since $\eta_{12}=0$, $(\eta_{11}\Delta\otimes 1)\eta_{11}=(12)(1\otimes \eta_{11}\Delta)((12)\eta_{11})$ by (\ref{eq212}). 
Note that $\eta_{11}=\sum\limits_{i=0}^{m_1}\sum\limits_{j=0}^{p_i}a_{ij}s^{(i)}\otimes s^{(j)}$ by the assumption. The above equation is translated into \begin{eqnarray}&&\sum\limits_{i=0,u=0}^{m_1,m_1}\sum\limits_{j=0,v=0}^{p_i,p_u}\sum\limits_{t=0}^ua_{ij}a_{uv}s^{(i)}s^{(t)}\otimes s^{(j)}s^{(u-t)}\otimes s^{(v)}\label{eqa41}\\ &&=\sum\limits_{i=0,u=0}^{m_1,m_1}\sum\limits_{j=0,v=0}^{p_i,p_u}\sum\limits_{t=0}^ua_{ij}a_{uv}s^{(i)}s^{(t)}\otimes s^{(v)}\otimes s^{(j)}s^{(u-t)}. \nonumber\end{eqnarray} From this, we get $\max\{p_0,p_1, \cdots, p_{m_1}\}=m_1+\max\{p_0,p_1, \cdots, p_{m_1}\}$. Thus $m_1=0$ and $\eta_{11}=a_{00}\otimes 1$ by Lemma \ref{lem41a}. Similarly, we get $\eta_{22}=d_{00}\otimes 1$ for some $d_{00}\in{\bf k}$ by (\ref{eq215}). If $a_{00}=d_{00}=0$, then (\ref{eq28})-(\ref{eq211}) hold for any $\eta_{21}$. By the assumption that $\eta_{ij}$ ($1\leq i,j\leq 2$) are not all zero, the nonzero $\eta_{21}$ is given by $\gamma$ in (T1)-(T10) in Section 3, where $x_{ij}$ are replaced by $c_{ij}$ and $\lambda_1$, $\lambda_2$ are exchanged. Since $\eta_{21}\neq 0$, we have $\kappa_2-\kappa_1+\kappa_3=0$ by Lemma \ref{lem24a}. If $\eta_{21}=\eta_{12}=0$, then (\ref{eq28})-(\ref{eq211}) hold for any $\eta_{11}=a_{00}\otimes 1$ and $\eta_{22}=d_{00}\otimes 1$. By the assumption that $\eta_{ij}$ are not all zero, we have either $a_{00}\neq 0$ or $d_{00}\neq 0$. Next, we assume that $\eta_{21}\neq 0$. 
Then, by (\ref{eq214}), $$a_{00}(\eta_{21}\otimes 1)+d_{00}(\Delta\otimes 1)\eta_{21}=a_{00}(12)(1\otimes\eta_{21})+d_{00}(12)(1\otimes \Delta)((12)\eta_{21}).$$ Applying the functor $1\otimes1\otimes \varepsilon$ to this equation yields $$(a_{00}-d_{00})\eta_{21}=a_{00}\sum\limits_{i=0}^{m_3}c_{i0}s^{(i)}\otimes 1-d_{00}\sum\limits_{i=0}^{m_3}\sum\limits_{t=0}^ic_{i0}s^{(t)}\otimes s^{(i-t)}.$$ If $a_{00}\neq d_{00}$, $\eta_{21}=\frac{d_{00}}{d_{00}-a_{00}}\sum\limits_{i=1}^{m_3}\sum\limits_{t=0}^{i-1}c_{i0}s^{(t)}\otimes s^{(i-t)}+\sum\limits_{i=0}^{m_3}c_{i0}s^{(i)}\otimes 1$. From (T1)-(T10) in Section 3, we get $m_3\leq 1$. If $m_3=0$, then $\eta_{21}=c_{00}\otimes 1$ and $\lambda_2-\lambda_1+\lambda_3=0$. Since $a_{00}\neq d_{00}$, $\lambda_3=0$ by Lemma \ref{lem41a} and Lemma \ref{lem42a}. Thus $\lambda_1=\lambda_2$. Similar to the previous case, we can prove that $\kappa_1=\kappa_2$. If $m_3=1$, then $\eta_{21}=c_{10}(\frac{\kappa_1a_{00}-(\kappa_1+\kappa_3)d_{00}}{d_{00}-a_{00}}\otimes 1+s\otimes 1+\frac{d_{00}}{d_{00}-a_{00}}\otimes s)$ with $\lambda_2a_{00}=(\lambda_2+\lambda_3)d_{00}$ and $\lambda_2-\lambda_1+\lambda_3=1$, where $c_{10}\neq 0$. Similar to the case when $m_3=0$, we can prove that $\lambda_3=0$ and $\kappa_1=\kappa_2$. Then $\lambda_2a_{00}=\lambda_2d_{00}$. Since $a_{00}\neq d_{00}$, $\lambda_2=0$. Thus $\lambda_1=-1$ by (\ref{eq24a}). \end{proof} From Lemmas \ref{lem41a}-\ref{lem43a}, we can get all the Schr\"odinger-Virasoro Lie $H$-pseudoalgebras with $\eta=\alpha'_m=\eta_{12}=0$. \begin{theorem} \label{thm51} Let $L$ be a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with pseudobracket given by $$\left\{\begin{array}{l}[e_0,e_0]=\alpha\otimes_He_0, \quad \ [e_0, e_i]=\beta_{i}\otimes_H e_i\quad 1\leq i\leq 3,\\ [e_1, e_1]=[e_1, e_2]=[e_2, e_2]=[e_3,e_3]=0,\\ [e_1, e_3]=\eta_{11}\otimes_He_1,\\ [e_2, e_3]=\eta_{21}\otimes_He_1+\eta_{22}\otimes_He_2.\end{array}\right.$$ Suppose $\eta_{ij}$ are not all zero. 
Then $\eta_{ij}$ and $\beta_i$ are described by one of the following cases: (A1)\ $\eta_{11}=\eta_{22}=0$, $\eta_{21}=c_{01}(-\kappa_3\otimes 1+1\otimes s)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1+1)s\otimes 1-1\otimes s+(\kappa_1-\kappa_3)\otimes 1$, $\beta_3=-1\otimes s+\kappa_3\otimes 1$, where $c_{01},\lambda_1,\kappa_1,\kappa_3\in{\bf k}$. (A2)\ $\eta_{11}=\eta_{22}=0$, $\eta_{21}=c_{00}\otimes 1\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_1-\lambda_2)s\otimes 1-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, where $c_{00},\lambda_1,\lambda_2,\kappa_1,\kappa_2\in{\bf k}$. (A3)\ $\eta_{11}=\eta_{22}=0$, $\eta_{21}=c_{11}((\kappa_2(\kappa_1-\kappa_2)+(\lambda_1+2)(\kappa_1-\kappa_2)^2)\otimes1-(\kappa_2+2(\lambda_1+2)(\kappa_1-\kappa_2))\otimes s+2(\lambda_1+2)\otimes s^{(2)}-(\kappa_1-\kappa_2)s\otimes1+s\otimes s)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1+2)s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, where $c_{11},\lambda_1,\kappa_1,\kappa_2\in{\bf k}$. (A4)\ $\eta_{11}=\eta_{22}=0$, $\eta_{21}=c_{11}(\kappa_2(\kappa_1-\kappa_2)\otimes1-(\kappa_1-\kappa_2)s\otimes1-\kappa_2\otimes s+s\otimes s)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, where $c_{11},\lambda_1,\kappa_1,\kappa_2\in{\bf k}$. (A5)\ $\eta_{11}=\eta_{22}=0$, $\eta_{21}=c_{11}(-(\kappa_1-\kappa_2)s\otimes 1+s\otimes s)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1+2)s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, where $c_{11},\lambda_1,\kappa_1,\kappa_2\in{\bf k}$. 
(A6)\ $\eta_{11}=\eta_{22}=0$, $\eta_{21}=-(\kappa_2c_{10}+(\kappa_1-\kappa_2)c_{01})\otimes 1+c_{10}s\otimes 1+c_{01}\otimes s$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_1-\lambda_2+1)s\otimes 1-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, where $c_{10},c_{01},\lambda_1,\lambda_2,\kappa_1,\kappa_2\in{\bf k}$ satisfying $\lambda_2c_{10}+(\lambda_1-\lambda_2+1)c_{01}=0$ and $c_{10}\neq 0$. (A7) \ $\eta_{11}=\eta_{22}=0$, $\eta_{21}=c_{10}(-\kappa_2\otimes 1+s\otimes 1)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_1+1)s\otimes 1-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, where $c_{10},\lambda_1, \kappa_1,\kappa_2\in{\bf k}$. (A8) \ $\eta_{11}=\eta_{22}=0$, $\eta_{21}=\frac{c_{20}}{2(\lambda_1+2)}(\kappa_2((\lambda_1+2)\kappa_2+(\kappa_1-\kappa_2))\otimes1-\kappa_2\otimes s -(2(\lambda_1+2)\kappa_2+(\kappa_1-\kappa_2))s\otimes1+s\otimes s+2(\lambda_1+2)s^{(2)}\otimes1)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_1+2)s\otimes 1-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, where $c_{20},\lambda_1, \kappa_1,\kappa_2\in{\bf k}$. 
(A9) \ $\beta_1=-s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(1-\lambda_2)s\otimes 1-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, $\eta_{11}=\eta_{22}=0$, \begin{eqnarray*}\begin{array}{lll}\eta_{21}&=&c_{20}((\frac{\lambda_2}{2\lambda_2-2}(\kappa_1-\kappa_2)^2 +\frac{1-2\lambda_2}{2-2\lambda_2}\kappa_2(\kappa_1-\kappa_2)+\frac12\kappa_2^2)\otimes1\\ &&+(\frac{(2\lambda_2-1)(\kappa_1-\kappa_2)}{2-2\lambda_2} -\kappa_2)s\otimes1 +\frac{(2\lambda_2-1)\kappa_2+2\lambda_2(\kappa_1-\kappa_2)}{2-2\lambda_2}\otimes s-\frac{\lambda_2}{1-\lambda_2}\otimes s^{(2)}\\ && +\frac{1-2\lambda_2}{2-2\lambda_2}s\otimes s+s^{(2)}\otimes1)\neq 0,\end{array}\end{eqnarray*} where $c_{20},\lambda_2, \kappa_1,\kappa_2\in{\bf k}$. (A10)\ $\beta_1=s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-1\otimes s+\kappa_2\otimes 1$, $\beta_3=2s\otimes 1-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, $\eta_{11}=\eta_{22}=0$,\begin{eqnarray*}\begin{array}{lll}\eta_{21}&=&c_{30}(s^{(3)}\otimes 1-\frac12(\kappa_1+\kappa_2)s^{(2)}\otimes1+\frac12s^{(2)}\otimes s+\frac1{12}(\kappa_1^2+4\kappa_1\kappa_2+\kappa_2^2)s\otimes1\\ &&-\frac16(2\kappa_2+\kappa_1)s\otimes s+\frac16s\otimes s^{(2)} -\frac1{12}(\kappa_1\kappa_2^2+\kappa_1^2\kappa_2)\otimes1\\&& +\frac1{12}(\kappa_2^2+2\kappa_1\kappa_2)\otimes s -\frac16\kappa_2\otimes s^{(2)})\neq 0,\end{array}\end{eqnarray*} where $c_{30}, \kappa_1,\kappa_2\in{\bf k}$. 
(A11) \ $\eta_{11}=\eta_{22}=0$,$$\begin{array}{lll}\eta_{21}&=&c_{30}((s^{(3)}\otimes1-1\otimes s^{(3)})+\frac12(s^{(2)}\otimes s-s\otimes s^{(2)})-\frac12(\kappa_1+\kappa_2)s^{(2)}\otimes 1\\&&+\frac12(2\kappa_1-\kappa_2)\otimes s^{(2)}-\frac14(\kappa_1^2-4\kappa_1\kappa_2+\kappa_2^2)s\otimes1+\frac14(\kappa_2^2+2\kappa_1\kappa_2-2\kappa_1^2)\otimes s\\ &&+ \frac12(\kappa_1-2\kappa_2)s\otimes s +\frac1{12}(2\kappa_2-\kappa_1)(\kappa_2^2-\kappa_1\kappa_2-2\kappa_1^2)\otimes1),\end{array}$$ $\beta_1=\frac23s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-\frac53s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=\frac23s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $c_{30}, \kappa_1,\kappa_2\in{\bf k}$. (A12) \ $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_1+3)s\otimes 1-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, $\eta_{11}=\eta_{22}=0$, $$\begin{array}{lll}\eta_{21}&=&c_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+(\kappa_2\otimes s^{(2)}-(\kappa_1-\kappa_2)s^{(2)}\otimes1)\\ &&+(\frac12(\kappa_1-\kappa_2)(3\kappa_2-\kappa_1)s\otimes 1+\frac12\kappa_2(3\kappa_2-2\kappa_1)\otimes s)+(\kappa_1-2\kappa_2)s\otimes s\\ &&+\frac12\kappa_2(\kappa_1-\kappa_2)(\kappa_1-2\kappa_2)\otimes 1)\neq 0,\end{array}$$ where $c_{21}, \lambda_1, \kappa_1,\kappa_2\in{\bf k}$. 
(A13) \ $\beta_1=-s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=2s\otimes1 -1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_1-\kappa_2)\otimes 1$, $\eta_{11}=\eta_{22}=0$, $$\begin{array}{lll}\eta_{21}&=&c_{21}(s^{(2)}\otimes s+3s\otimes s^{(2)}-(\kappa_1-\kappa_2)s^{(2)}\otimes 1-3(2\kappa_1-\kappa_2)\otimes s^{(2)}\\ &&-(3\kappa_1-2\kappa_2)s\otimes s+\frac12(\kappa_2^2-4\kappa_1\kappa_2+3\kappa_1^2)s\otimes 1+\frac12(\kappa_2^2-6\kappa_1\kappa_2+6\kappa_1^2)\otimes s\\ &&+\frac12(3\kappa_2^3-6\kappa_2^2\kappa_1-3\kappa_2\kappa_1^2-2\kappa_1^3)\otimes 1+6\otimes s^{(3)})\neq 0,\end{array}$$ where $c_{21}, \kappa_1,\kappa_2\in{\bf k}$. (A14)\ $\eta_{11}=\eta_{22}=a_{00}\otimes 1\neq 0$, $\eta_{21}=c_{00}\otimes 1\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1=\beta_2$, $\beta_3=-1\otimes s$ for some $a_{00}, c_{00},\lambda_1,\kappa_1\in{\bf k}$. (A15) $\eta_{11}=a_{00}\otimes 1\neq \eta_{22}=d_{00}\otimes1$, $\eta_{21}=c_{00}\otimes 1\neq 0$, $\beta_1=\beta_2=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=-1\otimes s$ for some $a_{00}, c_{00}, d_{00}, \lambda_1,\kappa_1\in{\bf k}$. (A16)\ $\eta_{11}=a_{00}\otimes 1, \eta_{22}=d_{00}\otimes1$, $$\eta_{21}=c_{10}\left(-\kappa_1\otimes 1+s\otimes 1+\frac{d_{00}}{d_{00}-a_{00}}\otimes s\right)\neq0,$$ $\beta_1=-s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-1\otimes s+ \kappa_1\otimes 1$ and $\beta_3=-1\otimes s$ for some $ a_{00},d_{00}, c_{10},\kappa_1\in{\bf k}$. \end{theorem} \begin{proof}Since $\eta=\alpha'_m=0$, $L$ is a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with the pseudobracket given by (I*) if and only if (\ref{eq28})-(\ref{eq215}) hold by Lemma \ref{lem28}. Thus this theorem follows from Lemma \ref{lem43a}.\end{proof} Similar to Lemma \ref{lem43a}, we can prove the following result. \begin{lemma}\label{lem44a} Suppose $\eta_{ij}$ ($1\leq i,j\leq 2$) satisfy (\ref{eq28})-(\ref{eq215}) and $\eta_{21}=0$. 
Then $\eta_{11}=a_{00}\otimes 1$, $\eta_{22}=d_{00}\otimes 1$ for some $a_{00},d_{00}\in{\bf k}$. When $\eta_{12}=0$, we have either $\eta_{11}\neq 0$ or $\eta_{22}\neq 0$, and $\lambda_3=\kappa_3=0$. When $\eta_{12}\neq 0$, one of the following cases occurs. (i) If $\eta_{11}=\eta_{22}=0$, then $\eta_{12}$ is described by (T1)-(T10) in Section 3, where $x_{ij}$ are replaced by $b_{ij}$. Moreover, $\kappa_1-\kappa_2+\kappa_3=0$. (ii) If $\eta_{11}=\eta_{22}\neq 0$, then $\eta_{12}=b_{00}\otimes 1$ for some nonzero $b_{00}\in{\bf k}$. Moreover, $\lambda_1=\lambda_2$ and $\kappa_1=\kappa_2$. Suppose $\eta_{11}\neq \eta_{22}$. Then $m_2\leq 1$. Further, (iii) if $m_2=0$, then $\eta_{12}=b_{00}\otimes 1$ for some nonzero $b_{00}\in{\bf k}$. Moreover, $\lambda_1=\lambda_2$ and $\kappa_1=\kappa_2$. (iv) if $m_2=1$, then $$\eta_{12}=b_{10}\left(-\kappa_1\otimes 1+s\otimes 1+\frac{d_{00}}{d_{00}-a_{00}}\otimes s\right)$$ for some nonzero $b_{10}\in{\bf k}$. Moreover, $\lambda_2+1=\lambda_1=\lambda_3=0$ and $\kappa_1=\kappa_2$. \end{lemma} Using Lemma \ref{lem44a}, we can prove the following result. \begin{theorem} \label{thm52} Let $L$ be a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with pseudobracket given by $$\left\{\begin{array}{l}[e_0,e_0]=\alpha\otimes_He_0, \quad \ [e_0, e_i]=\beta_{i}\otimes_H e_i\quad 1\leq i\leq 3,\\ [e_1, e_1]=[e_1, e_2]=[e_2, e_2]=[e_3,e_3]=0,\\ [e_1, e_3]=\eta_{11}\otimes_He_1+\eta_{12}\otimes _He_2,\\ [e_2, e_3]=\eta_{22}\otimes_He_2.\end{array}\right.$$ Suppose $\eta_{ij}$ are not all zero. Then $\eta_{ij}$ and $\beta_i$ are described by one of the following types: (B1)\ $\eta_{11}=\eta_{22}=0$, $\eta_{12}=b_{01}(-\kappa_3\otimes 1+1\otimes s)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1-1)s\otimes 1-1\otimes s+(\kappa_1+\kappa_3)\otimes 1$, $\beta_3=-1\otimes s+\kappa_3\otimes 1$, where $b_{01},\lambda_1,\kappa_1,\kappa_3\in{\bf k}$. 
(B2)\ $\eta_{11}=\eta_{22}=0$, $\eta_{12}=b_{00}\otimes 1\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2-\lambda_1)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{00},\lambda_1,\lambda_2, \kappa_1,\kappa_2\in{\bf k}$. (B3)\ $\eta_{11}=\eta_{22}=0$, $\eta_{12}=b_{11}((\kappa_1(\kappa_2-\kappa_1)+\lambda_1(\kappa_2-\kappa_1)^2)\otimes1-(\kappa_1+2\lambda_1(\kappa_2-\kappa_1))\otimes s+2\lambda_1\otimes s^{(2)}-(\kappa_2-\kappa_1)s\otimes1+s\otimes s)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1-2)s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{11},\lambda_1, \kappa_1,\kappa_2\in{\bf k}$. (B4)\ $\eta_{11}=\eta_{22}=0$, $\eta_{12}=b_{11}(\kappa_1(\kappa_2-\kappa_1)\otimes1-(\kappa_2-\kappa_1)s\otimes1-\kappa_1\otimes s+s\otimes s)\neq 0$, $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{11}, \kappa_1,\kappa_2\in{\bf k}$. (B5)\ $\eta_{11}=\eta_{22}=0$, $\eta_{12}=b_{11}(-(\kappa_2-\kappa_1)s\otimes 1+s\otimes s)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1-2)s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{11}, \lambda_1, \kappa_1,\kappa_2\in{\bf k}$. (B6)\ $\eta_{11}=\eta_{22}=0$, $\eta_{12}=-(\kappa_1b_{10}+(\kappa_2-\kappa_1)b_{01})\otimes 1+b_{10}s\otimes 1+b_{01}\otimes s$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2-\lambda_1+1)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{10},b_{01},\lambda_1,\lambda_2, \kappa_1,\kappa_2\in{\bf k}$ satisfying $\lambda_1b_{10}+(\lambda_2-\lambda_1+1)b_{01}=0$ and $b_{10}\neq 0$. 
(B7)\ $\eta_{11}=\eta_{22}=0$, $\eta_{12}=b_{10}(-\kappa_1\otimes 1+s\otimes 1)\neq 0$, $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2+1)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{10}, \lambda_2, \kappa_1,\kappa_2\in{\bf k}$. (B8)\ $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2+2)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, $\eta_{11}=\eta_{22}=0$, $\eta_{12}=\frac{b_{20}}{2(\lambda_2+2)}(\kappa_1((\lambda_2+2)\kappa_1+(\kappa_2-\kappa_1))\otimes1-\kappa_1\otimes s -(2(\lambda_2+2)\kappa_1+(\kappa_2-\kappa_1))s\otimes1+s\otimes s+2(\lambda_2+2)s^{(2)}\otimes1)\neq 0$, where $b_{20}, \lambda_2, \kappa_1,\kappa_2\in{\bf k}$. (B9)\ $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(1-\lambda_1)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, $\eta_{11}=\eta_{22}=0$, \begin{eqnarray*}\begin{array}{lll}\eta_{12}&=&b_{20}((\frac{\lambda_1}{2\lambda_1-2}(\kappa_2-\kappa_1)^2+\frac{1-2\lambda_1}{2-2\lambda_1} \kappa_1(\kappa_2-\kappa_1) +\frac12\kappa_1^2)\otimes1 \\ && +(\frac{(2\lambda_1-1)(\kappa_2-\kappa_1)}{2-2\lambda_1} -\kappa_1)s\otimes1 +\frac{(2\lambda_1-1)\kappa_1+2\lambda_1(\kappa_2-\kappa_1)}{2-2\lambda_1}\otimes s-\frac{\lambda_1}{1-\lambda_1}\otimes s^{(2)}\\ && +\frac{1-2\lambda_1}{2-2\lambda_1}s\otimes s+s^{(2)}\otimes1)\neq 0,\end{array}\end{eqnarray*} where $b_{20}, \lambda_1, \kappa_1,\kappa_2\in{\bf k}$. 
(B10)\ $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=2s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, $\eta_{11}=\eta_{22}=0$, \begin{eqnarray*}\begin{array}{lll}\eta_{12}&=&b_{30}(s^{(3)}\otimes 1-\frac12(\kappa_1+\kappa_2)s^{(2)}\otimes1+\frac12s^{(2)}\otimes s+\frac1{12}(\kappa_1^2+4\kappa_1\kappa_2+\kappa_2^2)s\otimes1\\ &&-\frac16(2\kappa_1+\kappa_2)s\otimes s+\frac16s\otimes s^{(2)} -\frac1{12}(\kappa_1\kappa_2^2+\kappa_1^2\kappa_2)\otimes1\\&& +\frac1{12}(\kappa_1^2+2\kappa_1\kappa_2)\otimes s -\frac16\kappa_1\otimes s^{(2)})\neq 0,\end{array}\end{eqnarray*}where $b_{30}, \kappa_1,\kappa_2\in{\bf k}$. (B11)\ $\beta_1=\frac23s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-\frac53s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=\frac23s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, $\eta_{11}=\eta_{22}=0$, $$\begin{array}{lll}\eta_{12}&=&b_{30}((s^{(3)}\otimes1-1\otimes s^{(3)})+\frac12(s^{(2)}\otimes s-s\otimes s^{(2)})-\frac12(\kappa_1+\kappa_2)s^{(2)}\otimes 1\\&&+\frac12(2\kappa_2-\kappa_1)\otimes s^{(2)}-\frac14(\kappa_1^2-4\kappa_1\kappa_2+\kappa_2^2)s\otimes1+\frac14(\kappa_1^2+2\kappa_1\kappa_2-2\kappa_2^2)\otimes s\\ &&+ \frac12(\kappa_2-2\kappa_1)s\otimes s +\frac1{12}(2\kappa_1-\kappa_2)(\kappa_1^2-\kappa_1\kappa_2-2\kappa_2^2)\otimes1),\end{array}$$ where $b_{30}, \kappa_1,\kappa_2\in{\bf k}$. 
(B12)\ $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2+3)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, $\eta_{11}=\eta_{22}=0$, $$\begin{array}{lll}\eta_{12}&=&b_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+(\kappa_1\otimes s^{(2)}-(\kappa_2-\kappa_1)s^{(2)}\otimes1) \\ && +(\frac12(\kappa_2-\kappa_1)(3\kappa_1-\kappa_2)s\otimes 1+\frac12\kappa_1(3\kappa_1-2\kappa_2)\otimes s)+(\kappa_2-2\kappa_1)s\otimes s\\ &&+\frac12\kappa_1(\kappa_2-\kappa_1)(\kappa_2-2\kappa_1)\otimes 1)\neq 0,\end{array}$$ where $b_{21}, \lambda_2, \kappa_1,\kappa_2\in{\bf k}$. (B13)\ $\beta_1=2s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, $\eta_{11}=\eta_{22}=0$, $$\begin{array}{lll}\eta_{12}&=&b_{21}(s^{(2)}\otimes s+3s\otimes s^{(2)}-(\kappa_2-\kappa_1)s^{(2)}\otimes 1-3(2\kappa_2-\kappa_1)\otimes s^{(2)}\\ && -(3\kappa_2-2\kappa_1)s\otimes s+\frac12(\kappa_1^2-4\kappa_1\kappa_2+3\kappa_2^2)s\otimes 1+\frac12(\kappa_1^2-6\kappa_1\kappa_2+6\kappa_2^2)\otimes s\\ &&+\frac12(3\kappa_1^3-6\kappa_1^2\kappa_2-3\kappa_1\kappa_2^2-2\kappa_2^3)\otimes 1+6\otimes s^{(3)})\neq 0,\end{array}$$ where $b_{21}, \kappa_1,\kappa_2\in{\bf k}$. (B14)\ $\eta_{11}=\eta_{22}=a_{00}\otimes 1\neq 0$, $\eta_{12}=b_{00}\otimes 1\neq 0$, $\beta_1=\beta_2= \lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=-1\otimes s$ for some $a_{00}, b_{00},\lambda_1,\kappa_1\in{\bf k}$. (B15)\ $\eta_{11}=a_{00}\otimes 1\neq \eta_{22}=d_{00}\otimes 1$, $\eta_{12}=b_{00}\otimes 1$, $\beta_1=\beta_2=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=-1\otimes s$ for some $a_{00},b_{00}, d_{00}, \lambda_1,\kappa_1\in{\bf k}$. 
(B16)\ $\eta_{11}=a_{00}\otimes 1$, $ \eta_{22}=d_{00}\otimes 1$, $$\eta_{12}=b_{10}\left(-\kappa_1\otimes 1+s\otimes 1+\frac{d_{00}}{d_{00}-a_{00}}\otimes s\right)\neq0,$$ $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=-1\otimes s$ for some $a_{00},d_{00},b_{10},\kappa_1\in{\bf k}$. \end{theorem} \begin{proof} Similar to the proof of Theorem \ref{thm51}.\end{proof} \begin{lemma}\label{lem45a} Suppose $\eta_{ij}$ ($1\leq i,j\leq 2$) satisfy (\ref{eq28})-(\ref{eq215}), $\eta_{12}\eta_{21}\neq 0$ and $\eta_{11}\eta_{22}=0$. Then $\eta_{11}=a_{00}\otimes 1$ and $\eta_{22}=d_{00}\otimes 1$ for some $a_{00},d_{00}\in{\bf k}$, and $\kappa_1-\kappa_2=\kappa_3=0$. Moreover, $\lambda_1=\lambda_2$ and $\lambda_3\in \{0,1\}.$ Further, (i) if $\eta_{11}=0$, $\lambda_3=0$, then $\eta_{12}=b_{00}\otimes 1$ and $\eta_{21}=c_{00}\otimes 1$, $\eta_{22}=d_{00}\otimes 1$ for some $b_{00},c_{00}, d_{00}\in{\bf k}$, where $b_{00}c_{00}\neq 0$. (ii) if $\eta_{22}=0$, $\lambda_3=0$, then $\eta_{11}=a_{00}\otimes 1$, $\eta_{12}=b_{00}\otimes 1$ and $\eta_{21}=c_{00}\otimes 1$ for some $a_{00},b_{00}, c_{00}\in{\bf k}$, where $b_{00}c_{00}\neq 0$. (iii) if $\lambda_3=1$, then $\eta_{12}=b_{01}\otimes s$, $\eta_{21}=c_{01}\otimes s$ and $\eta_{11}=\eta_{22}=0$. \end{lemma} \begin{proof} Since $\eta_{12}\eta_{21}\neq 0$, $\kappa_1-\kappa_2+\kappa_3=\kappa_2-\kappa_1+\kappa_3=0$. Hence $\kappa_1-\kappa_2=\kappa_3=0.$ Assume that $\eta_{11}=0$. Then one obtains \begin{eqnarray}\label{eq42}&&\sum\limits_{i=0,k=0}^{m_2,m_3}\sum\limits_{j=0,v=0}^{q_i,u_k}\sum\limits_{t=0}^kb_{ij}c_{kv}s^{(i)}s^{(t)}\otimes s^{(j)}s^{(k-t)}\otimes s^{(v)}\label{eqa42}\\ &&=\sum\limits_{i=0,k=0}^{m_2,m_3}\sum\limits_{j=0,v=0}^{q_i,u_k}\sum\limits_{t=0}^kb_{ij}c_{kv}s^{(i)}s^{(t)}\otimes s^{(v)}\otimes s^{(j)}s^{(k-t)}\nonumber\end{eqnarray} from (\ref{eq212}). By (\ref{eqa42}), we get $\max\{q_1,q_2,\cdots,q_{m_2}\}+m_3=\max\{u_1, u_2, \cdots, u_{m_3}\}$. 
Similarly, we have $\max\{q_1,q_2,\cdots,q_{m_2}\}+m_4=\max\{v_1,v_2,\cdots,v_{m_4}\}$ and $\max\{v_1,v_2,\cdots,v_{m_4}\}+m_3=\max\{u_1,u_2,\cdots,u_{m_3}\}$ by (\ref{eq213}) and (\ref{eq214}) respectively. Therefore either $\eta_{22}=0$ or $m_4=0$. If $m_4=0$ and $\eta_{22}\neq 0$, then $\lambda_3=\kappa_3=0$ and $\eta_{22}=d_{00}\otimes 1$ for some nonzero $d_{00}\in {\bf k}$ by Lemma \ref{lem42a}. Thus $\max\{u_1,\cdots,u_{m_3}\}+m_2=\max\{q_1,q_2,\cdots,q_{m_2}\}$ by (\ref{eq215}). Hence $m_2=m_3=0$, which implies that $\lambda_1-\lambda_2+\lambda_3, \lambda_2-\lambda_1+\lambda_3\in \{0,1\}$ by Lemma \ref{lem24a}. If $\lambda_1-\lambda_2+\lambda_3=\lambda_2-\lambda_1+\lambda_3=0$, then $\lambda_1=\lambda_2$ and $\lambda_3=0$. So $\eta_{12}=b_{00}\otimes 1$ and $\eta_{21}=c_{00}\otimes 1$ for some nonzero $b_{00},c_{00}\in {\bf k}$ by (T2). This finishes the proof of (i). The proof of (ii) is similar. If $\lambda_1-\lambda_2+\lambda_3=\lambda_2-\lambda_1+\lambda_3=1$, then $\lambda_1=\lambda_2$ and $\lambda_3=1$. So $\eta_{12}=b_{01}\otimes s$ and $\eta_{21}=c_{01}\otimes s$ by (T1). If $\eta_{11}=0$, then $\eta_{22}=0$ by (\ref{eq213}). Conversely, if $\eta_{22}=0$, then $\eta_{11}=0$ by (\ref{eq214}). Thus $\eta_{22}=\eta_{11}=0$ if $\eta_{11}\eta_{22}=0$. If $\lambda_1-\lambda_2+\lambda_3=0$ and $\lambda_2-\lambda_1+\lambda_3=1$, then $\eta_{12}=b_{00}\otimes 1$ and $\eta_{21}=c_{01}\otimes s$, which makes (\ref{eq42}) fail. This is a contradiction. Similarly, the case of $\lambda_1-\lambda_2+\lambda_3=1$ and $\lambda_2-\lambda_1+\lambda_3=0$ does not occur. \end{proof} \begin{lemma}\label{lem46a} Suppose $\eta_{ij}$ ($1\leq i,j\leq 2$) satisfy (\ref{eq28})-(\ref{eq215}) and $\eta_{ij}\neq 0$ for all $i, j$. Then $\kappa_1-\kappa_2=\kappa_3=0$ and $\lambda_3\in \{0,1,2,3\}$. 
Further, (i) if $\lambda_3=0$, then $\lambda_1=\lambda_2$, $\eta_{11}=a_{00}\otimes 1$, $\eta_{12}=b_{00}\otimes 1$, $\eta_{21}=c_{00}\otimes 1$ and $\eta_{22}=d_{00}\otimes 1$ for some nonzero $a_{00},b_{00}, c_{00}, d_{00}\in{\bf k}$. (ii) if $\lambda_3=1$, then $\lambda_1=\lambda_2$, $\eta_{11}=a_{10}(-\kappa_1\otimes 1+s\otimes 1-\lambda_1\otimes s)$, $\eta_{12}=\frac{b_{10}}{a_{10}}\eta_{11}$, $\eta_{21}=-\frac{a_{10}}{b_{10}}\eta_{11}$, and $\eta_{22}=-\eta_{11}$ for some nonzero $a_{10},b_{10}\in {\bf k}.$ (iii) if $\lambda_3=2$ and $\lambda_1=0$, then $\lambda_2=0$, $\eta_{11}=\frac{a_{20}}4(2\kappa_1^2\otimes 1-\kappa_1\otimes s-4\kappa_1s\otimes 1+s\otimes s+4s^{(2)}\otimes1)$, $\eta_{12}=\frac{b_{20}}{a_{20}}\eta_{11}$, $\eta_{21}=-\frac{a_{20}}{b_{20}}\eta_{11}$ and $\eta_{22}=-\eta_{11}$ for some nonzero $a_{20},b_{20}\in{\bf k}.$ (iv) if $\lambda_3=2$ and $\lambda_1\neq 0$, then $\lambda_1=\lambda_2=-1$, $\eta_{11}={a_{20}}(\frac{\kappa_1^2}2\otimes 1-\kappa_1s\otimes 1-\frac34\kappa_1\otimes s+\frac34s\otimes s+\frac12\otimes s^{(2)}+s^{(2)}\otimes 1)$, $\eta_{12}=\frac{b_{20}}{a_{20}}\eta_{11}$, $\eta_{21}=-\frac{a_{20}}{b_{20}}\eta_{11}$ and $\eta_{22}=-\eta_{11}$ for some nonzero $a_{20},b_{20}\in{\bf k}.$ (v) If $\lambda_3=3$, then $\lambda_1=\lambda_2=0$, $\eta_{11}=a_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+\kappa_1\otimes s^{(2)} +\frac12\kappa_1^2s\otimes1-\kappa_1s\otimes s)$, $\eta_{12}=\frac{b_{21}}{a_{21}}\eta_{11}$, $\eta_{21}=-\frac{a_{21}}{b_{21}}\eta_{11}$ and $\eta_{22}=-\eta_{11}$ for some nonzero $a_{21},b_{21}\in{\bf k}.$ \end{lemma} \begin{proof} Since $\eta_{12}\eta_{21}\neq 0$, we have $\kappa_1-\kappa_2+\kappa_3=\kappa_2-\kappa_1+\kappa_3=0$ by Lemma {\ref{lem24a}}, that is, $\kappa_1-\kappa_2=\kappa_3=0$. Moreover $\eta_{11}\eta_{22}\neq 0$ implies that $\lambda_3\in\{0, 1 ,2, 3\}$ by Lemmas \ref{lem41a}-\ref{lem42a}. {\it Case (i)}: $\lambda_3=0$. 
In this case, we have $\eta_{11}=a_{00}\otimes 1$ and $\eta_{22}=d_{00}\otimes1$ by Lemmas \ref{lem41a}-\ref{lem42a}. From (T1)-(T10), we get $\lambda_1-\lambda_2\geq0$ and $\lambda_2-\lambda_1\geq0$ since $\eta_{12}\eta_{21}\neq 0$. Thus $\lambda_1= \lambda_2$, $\eta_{12}=b_{00}\otimes 1$ and $\eta_{21}=c_{00}\otimes 1$. {\it Case (ii)}: $\lambda_3=1$. By Lemmas \ref{lem41a}-\ref{lem42a}, we get $\eta_{11}=a_{10}(-\kappa_1\otimes 1+s\otimes 1-\lambda_1\otimes s)$ and $\eta_{22}=d_{10}(-\kappa_1\otimes 1+s\otimes 1-\lambda_2\otimes s)$. Since $\lambda_1-\lambda_2+\lambda_3\in\{0, 1, 2, 3\}$ from (T1)-(T10), we have $\lambda_1-\lambda_2\in\{-1, 0, 1, 2\}$. Similarly, $\lambda_2-\lambda_1\in\{-1, 0, 1, 2\}$. Thus $\lambda_1-\lambda_2\in\{-1, 0, 1\}$. Suppose $\lambda_1-\lambda_2=-1$. Then $\eta_{12}=b_{00}\otimes 1$ and $\lambda_2-\lambda_1+\lambda_3=2$. Thus $\lambda_2+\lambda_3=1$ and $\lambda_2=0$ by (T8). So $\eta_{21}=\frac{c_{20}}2(\kappa_1^2\otimes 1-\kappa_1\otimes s-2\kappa_1s\otimes 1+s\otimes s+2s^{(2)}\otimes1),\ \lambda_2=0$ and $\lambda_3=-\lambda_1=1. $ Hence $a_{10}^2=-\frac12b_{00}c_{20}$ and $a_{10}b_{00}=\frac12b_{00}c_{20}=0$ by (\ref{eq212}) and (\ref{eq213}) respectively. Thus $a_{10}=0$, that is, $\eta_{11}=0$, which is impossible. Similarly, we can prove that there are no nonzero $\eta_{ij}$ satisfying (\ref{eq212})-(\ref{eq215}) if $\lambda_1-\lambda_2=1$. Suppose $\lambda_1-\lambda_2=0$. Then $\eta_{12}=b_{10}(-\kappa_1\otimes 1+s\otimes 1-\lambda_1\otimes s)$ for some nonzero $b_{10}$ and $\eta_{21}=c_{10}(-\kappa_1\otimes 1+s\otimes 1-\lambda_2\otimes s)$ for some nonzero $c_{10}$ by (T6). From (\ref{eq212})-(\ref{eq215}), we get $a^2_{10}+b_{10}c_{10}=b_{10}(a_{10}+d_{10})=c_{10}(a_{10}+d_{10})=d_{10}^2+b_{10}c_{10}=0$. Thus $d_{10}=-a_{10}$ and $a_{10}^2=-b_{10}c_{10}$. {\it Case (iii)}: $\lambda_3=2$ and $\lambda_1=0$. 
In this case, we have $\eta_{11}=\frac{a_{20}}{4}(2\kappa_1^2\otimes 1-\kappa_1\otimes s-4\kappa_1s\otimes 1+s\otimes s+4s^{(2)}\otimes 1)$. Since $\lambda_1-\lambda_2+\lambda_3=2-\lambda_2\in\{0,1,2,3\}$, we have $\lambda_2\in\{-1,0,1,2\}$. Consequently, $\lambda_2-\lambda_1+\lambda_3=\lambda_2+2\in\{1,2,3,4\}\cap \{0,1,2,3\}=\{1,2,3\}.$ If $\lambda_2=1$, then $\eta_{22}=0$ by Lemma \ref{lem42a}. Hence $\lambda_2\in\{0,-1\}$. If $\lambda_2=0$, then $\eta_{22}=\frac{d_{20}}{4}(2\kappa_1^2\otimes 1-\kappa_1\otimes s-4\kappa_1s\otimes 1+s\otimes s+4s^{(2)}\otimes 1)$, $\eta_{12}=\frac{b_{20}}{4}(2\kappa_1^2\otimes 1-\kappa_1\otimes s-4\kappa_1s\otimes 1+s\otimes s+4s^{(2)}\otimes 1)$ and $\eta_{21}=\frac{c_{20}}{4}(2\kappa_1^2\otimes 1-\kappa_1\otimes s-4\kappa_1s\otimes 1+s\otimes s+4s^{(2)}\otimes 1)$. From (\ref{eq212})-(\ref{eq215}), we get $a^2_{20}+b_{20}c_{20}=b_{20}(a_{20}+d_{20})=c_{20}(a_{20}+d_{20})=d_{20}^2+b_{20}c_{20}=0$. Thus $d_{20}=-a_{20}.$ If $\lambda_2=-1$, then $\eta_{22}=d_{20}(\frac{\kappa_2^2}2\otimes1-\kappa_2s\otimes1-\frac{3\kappa_2}{4}\otimes s+\frac{1}{2}\otimes s^{(2)} +\frac{3}{4}s\otimes s+s^{(2)}\otimes1)$ by Lemma \ref{lem42a}. Moreover, since $\lambda_1-\lambda_2+\lambda_3=3$, we have $\eta_{12}=\vartheta$ or $\eta_{12}=\vartheta'$, where $\vartheta=b_{30}(s^{(3)}\otimes 1-\kappa_1s^{(2)}\otimes 1+\frac12s^{(2)}\otimes s+\frac12\kappa_1^2s\otimes 1-\frac12s\otimes s+\frac16s\otimes s^{(2)}+\frac16\kappa_1^3\otimes1 +\frac14\kappa_1^2\otimes s-\frac16\kappa_1\otimes s^{(2)})$ and $\vartheta'=b_{21}(s^{(2)}\otimes s-s\otimes s^{(2)}+\kappa_1\otimes s^{(2)}+\frac12\kappa_1^2s\otimes 1-\kappa_1s\otimes s)$. Similarly, we have $\eta_{21}=c_{10}(-\kappa_1\otimes 1+s\otimes 1+\frac12\otimes s)$. If $\eta_{12}=\vartheta$ or $ \vartheta'$, and (\ref{eq212}) holds, then $\eta_{11}=0$. This is impossible. {\it Case (iv)}: $\lambda_3=2$ and $\lambda_1\neq 0$. 
In this case, we have $\lambda_1=-1$ and $\eta_{11}={a_{20}}(\frac{\kappa_1^2}2\otimes 1-\kappa_1s\otimes 1-\frac34\kappa_1\otimes s+\frac34s\otimes s+\frac12\otimes s^{(2)}+s^{(2)}\otimes 1)$. Since $\lambda_1-\lambda_2+\lambda_3\in\{0,1,2,3\}$, $\lambda_1-\lambda_2\in\{-2,-1,0,1\}$. Similarly, $\lambda_2-\lambda_1\in\{-2,-1,0,1\}$. Thus $\lambda_1-\lambda_2\in\{-1,0,1\}$ and hence $\lambda_2\in\{0,-1,-2\}$. If $\lambda_2=-2$, then $\eta_{22}=0$ by Lemma \ref{lem42a}, which is impossible. Hence $\lambda_2\in \{0,-1\}$. If $\lambda_2=0$, then $\lambda_1-\lambda_2+\lambda_3=1$ and $\eta_{12}=b_{10}(-\kappa_1\otimes 1+s\otimes 1+\frac{1}2\otimes s)$. Since $\lambda_2-\lambda_1+\lambda_3=3$ and $\lambda_2=0$, $\eta_{21}\in\{\frac{c_{30}}{b_{30}}\vartheta,\frac{c_{21}}{b_{21}}\vartheta'\}$. It is easy to check that (\ref{eq212}) holds only if $\eta_{11}=0$. If $\lambda_2=-1$, then $\lambda_1-\lambda_2+\lambda_3=\lambda_2-\lambda_1+\lambda_3=2$, $\eta_{12}=\frac{b_{20}}{a_{20}}\eta_{11}$, $\eta_{21}=\frac{c_{20}}{a_{20}}\eta_{11}$ and $\eta_{22}=\frac{d_{20}}{a_{20}}\eta_{11}$. Thus $a_{20}^2+b_{20}c_{20}=b_{20}(a_{20}+d_{20})=c_{20}(a_{20}+d_{20})=d_{20}^2+b_{20}c_{20}=0$. So $d_{20}=-a_{20}$ and $a_{20}^2=-b_{20}c_{20}$. {\it Case (v)}: $\lambda_3=3$. Note that $\lambda_1-\lambda_2\in\{0,-1,-2,-3\}\cap\{0,1,2,3\}$. Then $\lambda_1=\lambda_2=0$ by Lemma \ref{lem41a} and Lemma \ref{lem42a}. Moreover, $\eta_{11}=a_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+\kappa_1\otimes s^{(2)} +\frac12\kappa_1^2s\otimes1-\kappa_1s\otimes s)$ and $\eta_{22}=\frac{d_{21}}{a_{21}}\eta_{11}$, $\eta_{12}=\frac{b_{21}}{a_{21}}\eta_{11}$ and $\eta_{21}=\frac{c_{21}}{a_{21}}\eta_{11}$. 
It is easy to check that (\ref{eq212})-(\ref{eq215}) hold if and only if $d_{21}=-a_{21}$ and $a_{21}^2=-b_{21}c_{21}.$ \end{proof} Using Lemma \ref{lem45a} and Lemma \ref{lem46a}, we can determine all Schr\"odinger-Virasoro Lie $H$-pseudoalgebras with $\eta=0$, $\alpha_m'=0$ and $\eta_{12}\eta_{21}\neq 0$ in the following theorem. \begin{theorem} \label{thm53} Let $L$ be a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with pseudobracket given by $$\left\{\begin{array}{l}[e_0,e_0]=\alpha\otimes_He_0, \quad \ [e_0, e_i]=\beta_{i}\otimes_H e_i\quad 1\leq i\leq 3,\\ [e_1, e_1]=[e_1, e_2]=[e_2, e_2]=[e_3,e_3]=0,\\ [e_1, e_3]=\eta_{11}\otimes_He_1+\eta_{12}\otimes _He_2,\\ [e_2, e_3]=\eta_{21}\otimes _He_1+ \eta_{22}\otimes_He_2,\end{array}\right.$$ where $\eta_{12}\eta_{21}\neq 0$. Then $L$ is one of the following types: (C1)\ $\eta_{11}=0$, $\eta_{12}=b_{00}\otimes 1$ and $\eta_{21}=c_{00}\otimes 1$, $\eta_{22}=d_{00}\otimes 1$, $\beta_1=\beta_2=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=-1\otimes s$, where $b_{00},c_{00}, d_{00}, \lambda_1,\kappa_1\in{\bf k}$ satisfying $b_{00}c_{00}\neq 0$. (C2) \ $\eta_{11}=a_{00}\otimes 1$, $\eta_{12}=b_{00}\otimes 1$, $\eta_{21}=c_{00}\otimes 1$, $\eta_{22}=0$, $\beta_1=\beta_2=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=-1\otimes s$, where $a_{00},b_{00},c_{00}, \lambda_1,\kappa_1\in{\bf k}$ satisfying $b_{00}c_{00}\neq 0$. (C3)\ $\eta_{12}=b_{01}\otimes s$, $\eta_{21}=c_{01}\otimes s$ and $\eta_{11}=\eta_{22}=0$, $\beta_1=\beta_2=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=s\otimes 1-1\otimes s$, where $b_{01},c_{01}, \lambda_1,\kappa_1\in{\bf k}$ satisfying $b_{01}c_{01}\neq 0$. 
(C4) \ $\eta_{11}=a_{00}\otimes 1$, $\eta_{12}=b_{00}\otimes 1$, $\eta_{21}=c_{00}\otimes 1$ and $\eta_{22}=d_{00}\otimes 1$ for some nonzero $a_{00},b_{00}, c_{00}, d_{00}\in{\bf k}$, $\beta_1=\beta_2=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=-1\otimes s$ for some $\lambda_1,\kappa_1\in{\bf k}$. (C5) \ $\eta_{11}=a_{10}(-\kappa_1\otimes 1+s\otimes 1-\lambda_1\otimes s)$, $\eta_{12}=\frac{b_{10}}{a_{10}}\eta_{11}$, $\eta_{21}=-\frac{a_{10}}{b_{10}}\eta_{11}$, and $\eta_{22}=-\eta_{11}$ for some nonzero $a_{10},b_{10}\in {\bf k},$ $\beta_1=\beta_2=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=s\otimes1-1\otimes s$ for some $\lambda_1,\kappa_1\in{\bf k}$. (C6) \ $\eta_{11}=\frac{a_{20}}4(2\kappa_1^2\otimes 1-\kappa_1\otimes s-4\kappa_1s\otimes 1+s\otimes s+4s^{(2)}\otimes1)$, $\eta_{12}=\frac{b_{20}}{a_{20}}\eta_{11}$, $\eta_{21}=-\frac{a_{20}}{b_{20}}\eta_{11}$ and $\eta_{22}=-\eta_{11}$ for some nonzero $a_{20},b_{20}\in{\bf k},$ $\beta_1=\beta_2=-1\otimes s+\kappa_1\otimes 1$, $\beta_3=2s\otimes 1-1\otimes s$ for some $\lambda_1,\kappa_1\in {\bf k}$. (C7)\ $\eta_{11}={a_{20}}(\frac{\kappa_1^2}2\otimes 1-\kappa_1s\otimes 1-\frac34\kappa_1\otimes s+\frac34s\otimes s+\frac12\otimes s^{(2)}+s^{(2)}\otimes 1)$, $\eta_{12}=\frac{b_{20}}{a_{20}}\eta_{11}$, $\eta_{21}=-\frac{a_{20}}{b_{20}}\eta_{11}$ and $\eta_{22}=-\eta_{11}$ for some nonzero $a_{20},b_{20}\in{\bf k},$ $\beta_1=\beta_2=-s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_3=2s\otimes 1-1\otimes s$ for some $\kappa_1\in {\bf k}$. (C8)\ $\eta_{11}=a_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+\kappa_1\otimes s^{(2)} +\frac12\kappa_1^2s\otimes1-\kappa_1s\otimes s)$, $\eta_{12}=\frac{b_{21}}{a_{21}}\eta_{11}$, $\eta_{21}=-\frac{a_{21}}{b_{21}}\eta_{11}$ and $\eta_{22}=-\eta_{11}$ for some nonzero $a_{21},b_{21}\in{\bf k},$ $\beta_1=\beta_2=-1\otimes s+\kappa_1\otimes 1$, $\beta_3=3s\otimes 1-1\otimes s$ for some $\kappa_1\in{\bf k}$. 
\end{theorem} \begin{proof}Similar to the proof of Theorem \ref{thm52}.\end{proof} Next, we assume that $\eta\neq 0$ or $\alpha_m'\neq 0$. First, let us determine $\eta_{ij}$ for $1\leq i,j\leq 2$ in the case of $\eta\neq 0$. \begin{lemma} \label{lem48} Let $L$ be a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with pseudobrackets given by (I*). Suppose $\eta\neq 0$. Then $\beta_{1}=-1\otimes s$, $\eta_{11}=\eta_{21}=0$, $\eta_{22}=d_{00}\otimes 1$ and $\alpha_m'=w_{01}(1\otimes s-s\otimes 1)$ for some $d_{00}, w_{01}\in{\bf k}$. Moreover, (1) If $\eta_{12}\eta_{22}\neq 0$, then $\lambda_1=\lambda_2=\lambda_3=0$, $\kappa_1=\kappa_2=\kappa_3=w_{01}=0$, $\eta_{12}=b_{00}\otimes 1\neq 0$. (2) If $\eta_{22}=0$, then $\lambda_1=\lambda_2-\lambda_3=0$, $\kappa_1=\kappa_2-\kappa_3=0$, $\eta_{12}=b_{00}\otimes 1\neq 0$ and $\alpha_m'=w_{01}(1\otimes s-s\otimes 1)$, where $\lambda_2w_{01}=\kappa_2w_{01}=0$. (3) If $\eta_{22}\neq 0$ and $\eta_{12}=0$, then $\lambda_i=\kappa_i=0$ for $i=1, 3$ and $\alpha_m'=0$ \end{lemma} \begin{proof} Since $\eta\neq 0$, $\eta_{21}=0$ by (\ref{eq25}) and $\alpha_m'=w_{01}(1\otimes s-s\otimes 1)$ by Theorem \ref{thm31}. Moreover, $\eta_{11}=a_{00}\otimes 1$ and $\eta_{22}=d_{00}\otimes 1$ by Lemma \ref{lem44a}. From (\ref{eq27}), we get $(\eta\Delta\otimes 1)\eta_{22}=(1\otimes \eta_{22}\Delta)\eta+(12)(1\otimes \eta_{11}\Delta)\eta$. Thus $\eta_{11}=0$. Similarly, one obtains that $\lambda_1=\kappa_1=0$ from (\ref{eq217}). (i) If $d_{00}\eta_{12}\neq 0$, then $\lambda_3=0$ by Lemma \ref{lem42a}. In addition, either $\eta_{12}=b_{00}\otimes 1$, or $\eta_{12}=b_{10}(s\otimes 1+1\otimes s)$ by Lemma \ref{lem44a}. From (\ref{eq26}), we get either $\alpha_m'=0$, or $\alpha'_m=\frac{b_{10}}{d_{00}}(1\otimes s-s\otimes 1)$. If $\eta_{12}=b_{10}(1\otimes s-s\otimes1)\neq 0$, then $\alpha'_m=\frac{b_{10}}{d_{00}}(1\otimes s-s\otimes 1)$, $\lambda_2b_{10}=\kappa_2b_{10}=0$ by (\ref{eq218}), and $\lambda_1-\lambda_2+\lambda_3=1$ by Lemma \ref{lem24a}. 
Since $b_{10}\neq 0$, we have $\lambda_2=0$ and $\lambda_1-\lambda_2+\lambda_3=0$. This is impossible. Hence $\eta_{12}=b_{00}\otimes 1$ for some nonzero $b_{00}$ and $\lambda_2=0$. Furthermore, we get $\alpha_m'=0$ by (\ref{eq26}). (ii) If $d_{00}=0$, then $1\otimes \eta_{12}=(12)(1\otimes \eta_{12})$ by (\ref{eq26}). Thus $\eta_{12}=b_{00}\otimes 1\neq 0$ by Lemma \ref{lem44a} and the assumption that $\eta_{ij}$ $ (1\leq i,j\leq 2)$ are not all zero. Moreover, $0=\lambda_1-\lambda_2+\lambda_3=\lambda_3-\lambda_2$ by (T2). If $\alpha_m'=w_{01}(1\otimes s-s\otimes 1),$ then $\lambda_2w_{01}=\kappa_2w_{01}=0$ by (\ref{eq218}). (iii) If $\eta_{12}=0$, then $d_{00}\neq 0$, $\alpha_m'=0$ and $\lambda_3=\kappa_3=0$. \end{proof} From the above lemma, we get the following theorem. \begin{theorem} \label{thm54} Let $L$ be a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with pseudobrackets given by (I*). Suppose $\eta\neq 0$. Then $L$ is one of the following three types. (D1)\ $\eta_{11}=\eta_{21}=0$, $\eta_{12}=b_{00}\otimes 1\neq 0$, $\eta_{22}=d_{00}\otimes 1\neq 0$, $\alpha_m'=0$, $\eta=a\otimes1$, $\beta_1=\beta_2=\beta_3=-1\otimes s$ for some $a,b_{00},d_{00}\in {\bf k}$. (D2)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\eta_{12}=b_{00}\otimes 1\neq 0$, $\alpha_m'=w_{01}(1\otimes s-s\otimes 1)$, $\eta=a\otimes1$, $\beta_1=-1\otimes s$, $\beta_2=\beta_3=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$ for some $a,b_{00},w_{01},\lambda_2,\kappa_2\in {\bf k}$, where $\lambda_2w_{01}=\kappa_2w_{01}=0$. (D3)\ $\eta_{11}=\eta_{21}=\eta_{12}=0$, $\eta_{22}=d_{00}\otimes 1\neq 0$, $\alpha_m'=0$, $\eta=a\otimes1$, $\beta_3=\beta_1=-1\otimes s$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$ for some $a,d_{00}, \lambda_2,\kappa_2\in{\bf k}$. 
\end{theorem} By now, we have determined the Schr\"odinger-Virasoro Lie $H$-pseudoalgebras with $\eta=a\otimes 1\neq 0$ by Theorem \ref{thm54} and the Schr\"odinger-Virasoro Lie $H$-pseudoalgebras with $\eta=\alpha_m'= 0$ by Theorems \ref{thm51}, \ref{thm52} and \ref{thm53}. Next, we assume that $\eta=0$ and $\alpha_m'\neq 0$. In this case, (\ref{eq25}) and (\ref{eq27}) are equivalent. From (\ref{eq25}), we get that $\eta_{21}=0$. If $\alpha_m'\neq 0$, then $\alpha'_m$ must be one of $\alpha_1'$, $\alpha_2'$, $\alpha_2''$ and $\alpha'_3$ by Theorem \ref{thm31}. \begin{lemma} \label{lem410} Let $L$ be a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with $\eta=0$. Suppose $\alpha_m'\neq 0$. Then $\eta_{11}=a_{00}\otimes 1$, $\eta_{21}=0$ and $\eta_{22}=d_{00}\otimes 1$. Suppose $\alpha_m'\in \{\alpha_2',\alpha_2'',\alpha_3'\}$. Then $a_{00}=d_{00}=0$. Under the assumption that $a_{00}=d_{00}=0$, we have the following: (i) If $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, then $\eta_{12}\neq 0$ and it is described by (T1)-(T10), where $x_{ij}$ is replaced by $b_{ij}$. Suppose $a_{00}\neq d_{00}$. Then either $\eta_{12}=0$, or $\eta_{12}=b_{00}\otimes 1$ for some nonzero $b_{00}$, or $\eta_{12}=b_{10}(s\otimes 1+2\otimes s)$ for some nonzero $b_{10}$. Moreover, we have the following: (ii) If $\eta_{12}=0$, then $d_{00}=2a_{00}$, $\alpha_m'=\alpha_1',$ $\kappa_2=2\kappa_1$, $2\lambda_1-\lambda_2=1$, $\kappa_3=\lambda_3=0$. (iii) If $\eta_{12}=b_{00}\otimes 1\neq 0$, then $d_{00}=2a_{00}$, $\alpha_m'=\alpha_1',$ $\kappa_1=\kappa_2=\kappa_3=0$, $\lambda_1=\lambda_2=1$, $\lambda_3=0$. (iv) If $\eta_{12}=b_{10}(s\otimes 1+2\otimes s)\neq 0$, then $d_{00}=2a_{00}$, $\alpha_m'=\alpha_1',$ $\kappa_1=\kappa_2=\kappa_3=0$, $\lambda_1=\lambda_3=0$, $\lambda_2=-1$. \end{lemma} \begin{proof}From Lemma \ref{lem44a}, one obtains that $\eta_{11}=a_{00}\otimes 1$, $\eta_{22}=d_{00}\otimes 1$ for some $a_{00},d_{00}\in{\bf k}$. Suppose $\alpha_m'=\alpha_1'\neq 0$. 
Then $d_{00}=2a_{00}$ by (\ref{eq26}). In addition, $\kappa_2=2\kappa_1$ and $2\lambda_1-\lambda_2=1$ by (\ref{eq218}). If $\alpha_m'\in\{\alpha_2',\alpha_2'',\alpha_3'\}$, then $a_{00}=d_{00}=0$ by (\ref{eq26}). Let us assume that $\alpha_m'\in\{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$ and $a_{00}=d_{00}=0$. Since we assume that $\eta_{ij}$ ($1\leq i,j\leq 2$) are not all zero, $\eta_{12}\neq 0$, which is described by (T1)-(T10), where $x_{ij}$ is replaced by $b_{ij}$. Next, we assume that $a_{00}\neq d_{00}$. Then either $\eta_{12}=0$, or $\eta_{12}=b_{00}\otimes 1\neq 0$, or $\eta_{12}=b_{10}(-\kappa_1\otimes 1+s\otimes 1+2\otimes s)\neq 0$. In the case when $\eta_{12}=b_{00}\otimes 1\neq 0$, we have $\lambda_1-\lambda_2+\lambda_3=0$. Since $\lambda_3=0$, $\lambda_1=\lambda_2=1.$ In the case when $\eta_{12}=b_{10}(-\kappa_1\otimes 1+s\otimes 1+2\otimes s)\neq 0$, we have $\lambda_1-\lambda_2+\lambda_3=1$. Thus $\lambda_1=0$ and $\lambda_2=-1$. \end{proof} From the above lemma, we obtain the following theorem. \begin{theorem} \label{thm410} Let $L$ be a Schr\"odinger-Virasoro Lie $H$-pseudoalgebra with the pseudobrackets given by (I*). Suppose $\eta=0$ and $\alpha_m'\neq 0$. Then $L$ must be one of the following types. (E1)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\eta=0$, $\eta_{12}=b_{01}(-\kappa_3\otimes 1+1\otimes s)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1+1)s\otimes 1-1\otimes s+(\kappa_1+\kappa_3)\otimes 1$, $\beta_3=-1\otimes s+\kappa_3\otimes 1$, where $b_{01},\lambda_1,\kappa_1,\kappa_3\in{\bf k}$. 
(E2)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\eta_{12}=b_{00}\otimes 1\neq 0$, $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\eta=0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2-\lambda_1)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{00},\lambda_1,\lambda_2, \kappa_1,\kappa_2\in{\bf k}$. (E3)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\eta_{12}=b_{11}((\kappa_1(\kappa_2-\kappa_1)+\lambda_1(\kappa_2-\kappa_1)^2)\otimes1-(\kappa_1+2\lambda_1(\kappa_2-\kappa_1))\otimes s+2\lambda_1\otimes s^{(2)}-(\kappa_2-\kappa_1)s\otimes1+s\otimes s)\neq 0$, $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1-2)s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{11},\lambda_1, \kappa_1,\kappa_2\in{\bf k}$. (E4)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\eta_{12}=b_{11}(\kappa_1(\kappa_2-\kappa_1)\otimes1-(\kappa_2-\kappa_1)s\otimes1-\kappa_1\otimes s+s\otimes s)\neq 0$, $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{11}, \kappa_1,\kappa_2\in{\bf k}$. (E5)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\eta_{12}=b_{11}(-\kappa_3s\otimes 1+s\otimes s)\neq 0$, $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(\lambda_1-2)s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{11}, \lambda_1, \kappa_1,\kappa_2\in{\bf k}$. 
(E6)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\eta_{12}=-(\kappa_1b_{10}+(\kappa_2-\kappa_1)b_{01})\otimes 1+b_{10}s\otimes 1+b_{01}\otimes s$, $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2-\lambda_1+1)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{10}, b_{01}, \lambda_1, \lambda_2, \kappa_1,\kappa_2\in{\bf k}$ satisfying $\lambda_1b_{10}+(\lambda_2-\lambda_1+1)b_{01}=0$ and $b_{10}\neq 0$. (E7)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\eta_{12}=b_{10}(-\kappa_1\otimes 1+s\otimes 1)\neq 0$, $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2+1)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{10}, \lambda_2, \kappa_1,\kappa_2\in{\bf k}$. (E8)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $\eta_{12}=\frac{b_{20}}{2(\lambda_2+2)}(\kappa_1((\lambda_2+2)\kappa_1+(\kappa_2-\kappa_1))\otimes1-\kappa_1\otimes s -(2(\lambda_2+2)\kappa_1+(\kappa_2-\kappa_1))s\otimes1+s\otimes s+2(\lambda_2+2)s^{(2)}\otimes1)\neq 0$, $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2+2)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{20}, \lambda_2, \kappa_1,\kappa_2\in{\bf k}$. 
(E9)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, \begin{eqnarray*}\begin{array}{lll}\eta_{12}&=&b_{20}((\frac{\lambda_1}{2\lambda_1-2}(\kappa_2-\kappa_1)^2+\frac{1-2\lambda_1}{2-2\lambda_1} \kappa_1(\kappa_2-\kappa_1) +\frac12\kappa_1^2)\otimes1\\ && +(\frac{(2\lambda_1-1)(\kappa_2-\kappa_1)}{2-2\lambda_1} -\kappa_1)s\otimes1 +\frac{(2\lambda_1-1)\kappa_1+2\lambda_1(\kappa_2-\kappa_1)}{2-2\lambda_1}\otimes s-\frac{\lambda_1}{1-\lambda_1}\otimes s^{(2)}\\ && +\frac{1-2\lambda_1}{2-2\lambda_1}s\otimes s+s^{(2)}\otimes1)\neq 0,\end{array}\end{eqnarray*} $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(1-\lambda_1)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{20}, \lambda_1, \kappa_1,\kappa_2\in{\bf k}$. (E10)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, \begin{eqnarray*}\begin{array}{lll}\eta_{12}&=&b_{30}(s^{(3)}\otimes 1-\frac12(\kappa_1+\kappa_2)s^{(2)}\otimes1+\frac12s^{(2)}\otimes s+\frac1{12}(\kappa_1^2+4\kappa_1\kappa_2+\kappa_2^2)s\otimes1\\ &&-\frac16(2\kappa_1+\kappa_2)s\otimes s+\frac16s\otimes s^{(2)} -\frac1{12}(\kappa_1\kappa_2^2+\kappa_1^2\kappa_2)\otimes1\\&& +\frac1{12}(\kappa_1^2+2\kappa_1\kappa_2)\otimes s -\frac16\kappa_1\otimes s^{(2)})\neq 0,\end{array}\end{eqnarray*} $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=2s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{30}, \kappa_1,\kappa_2\in{\bf k}$. 
(E11)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $$\begin{array}{lll}\eta_{12}&=&b_{30}((s^{(3)}\otimes1-1\otimes s^{(3)})+\frac12(s^{(2)}\otimes s-s\otimes s^{(2)})-\frac12(\kappa_1+\kappa_2)s^{(2)}\otimes 1\\&&+\frac12(2\kappa_2-\kappa_1)\otimes s^{(2)}-\frac14(\kappa_1^2-4\kappa_1\kappa_2+\kappa_2^2)s\otimes1+\frac14(\kappa_1^2+2\kappa_1\kappa_2-2\kappa_2^2)\otimes s\\ &&+ \frac12(\kappa_2-2\kappa_1)s\otimes s +\frac1{12}(2\kappa_1-\kappa_2)(\kappa_1^2-\kappa_1\kappa_2-2\kappa_2^2)\otimes1),\end{array}$$ $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=\frac23s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-\frac53s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=\frac23s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{30}, \kappa_1,\kappa_2\in{\bf k}$. (E12)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $$\begin{array}{lll}\eta_{12}&=&b_{21}((s^{(2)}\otimes s-s\otimes s^{(2)})+(\kappa_1\otimes s^{(2)}-(\kappa_2-\kappa_1)s^{(2)}\otimes1) \\ && +(\frac12(\kappa_2-\kappa_1)(3\kappa_1-\kappa_2)s\otimes 1+\frac12\kappa_1(3\kappa_1-2\kappa_2)\otimes s)+(\kappa_2-2\kappa_1)s\otimes s\\ && +\frac12\kappa_1(\kappa_2-\kappa_1)(\kappa_2-2\kappa_1)\otimes 1)\neq 0,\end{array}$$ $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=-1\otimes s+\kappa_1\otimes 1$, $\beta_2=\lambda_2s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=(\lambda_2+3)s\otimes 1-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{21}, \lambda_2, \kappa_1,\kappa_2\in{\bf k}$. 
(E13)\ $\eta_{11}=\eta_{21}=\eta_{22}=0$, $$\begin{array}{lll}\eta_{12}&=&b_{21}(s^{(2)}\otimes s+3s\otimes s^{(2)}-(\kappa_2-\kappa_1)s^{(2)}\otimes 1-3(2\kappa_2-\kappa_1)\otimes s^{(2)}\\ &&-(3\kappa_2-2\kappa_1)s\otimes s+\frac12(\kappa_1^2-4\kappa_1\kappa_2+3\kappa_2^2)s\otimes 1+\frac12(\kappa_1^2-6\kappa_1\kappa_2+6\kappa_2^2)\otimes s\\ &&+\frac12(3\kappa_1^3-6\kappa_1^2\kappa_2-3\kappa_1\kappa_2^2-2\kappa_2^3)\otimes 1+6\otimes s^{(3)})\neq 0,\end{array}$$ $\alpha_m'\in \{\alpha_1',\alpha_2',\alpha_2'',\alpha_3'\}$, $\beta_1=2\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=-s\otimes 1-1\otimes s+\kappa_2\otimes 1$, $\beta_3=-1\otimes s+(\kappa_2-\kappa_1)\otimes 1$, where $b_{21}, \kappa_1,\kappa_2\in{\bf k}$. (E14)\ $\eta_{11}=a_{00}\otimes 1\neq 0$, $\eta_{12}=\eta_{21}=0$, $\eta_{22}=2a_{00}\otimes 1$, $\alpha_m'=w_{01}(1\otimes s-s\otimes 1)\neq 0$, $\beta_1=\lambda_1s\otimes 1-1\otimes s+\kappa_1\otimes 1$, $\beta_2=(2\lambda_1-1)s\otimes 1-1\otimes s+2\kappa_1\otimes 1$, $\beta_3=-1\otimes s$ for some $a_{00},w_{01},\lambda_1,\kappa_1\in{\bf k}$. (E15)\ $\eta_{11}=a_{00}\otimes 1\neq 0$, $\eta_{12}=b_{00}\otimes 1\neq 0$, $\eta_{21}=0$, $\eta_{22}=2a_{00}\otimes 1$, $\alpha_m'=w_{01}(1\otimes s-s\otimes 1)\neq 0$, $\beta_1=\beta_2=s\otimes 1-1\otimes s$, $\beta_3=-1\otimes s$ for some $a_{00},b_{00},w_{01}\in{\bf k}$. (E16)\ $\eta_{11}=a_{00}\otimes 1\neq 0$, $\eta_{12}=b_{10}(s\otimes 1+2\otimes s)\neq 0$, $\eta_{21}=0$, $\eta_{22}=2a_{00}\otimes 1$, $\alpha_m'=w_{01}(1\otimes s-s\otimes 1)\neq 0$, $\beta_1=\beta_3=-1\otimes s$, $\beta_2=-s\otimes 1-1\otimes s$ for some $a_{00},b_{10}, w_{01}\in{\bf k}$. \end{theorem} The extended Schr\"odinger-Virasoro Lie conformal algebra defined in \cite{SY} (see Example \ref{exaa27}) is a pseudoalgebra described by (E14) of Theorem \ref{thm410}, where $w_{01}=-1$, $\lambda_1=\frac12$, $\kappa_1=0$ and $a_{00}=1$. 
Moreover, similar to Example \ref{ex33}, one can obtain a series of infinite-dimensional Lie algebras, which contain the Virasoro algebra and the Schr\"odinger Lie algebra as subalgebras. \begin{example} Let $A={\bf k}[[t,t^{-1}]]$ be an $H$-bimodule given by $sf=fs=\frac{df}{dt}$ for any $f\in A$. Then $A$ is an $H$-differential algebra both for the left and the right action of $H$. Let $\mathscr{A}_A(L_s)=A\otimes_HL$ and $\rho\in {\bf k}$. Set $L_n=t^{n+1}\otimes_He_0$, $Y_{p+\rho}=t^{p+1}\otimes_H e_1$, $M_{k+2\rho}=t^{k+1}\otimes_He_2$ and $N_m=t^{m+1}\otimes _He_3$ for any $n,m, p, k\in \mathbb{Z}$. Suppose that $\mathfrak{S}V_{\rho}$ is a vector space with a basis $\{L_n,Y_p,M_k,N_m|m, n\in\mathbb{Z}, p\in \rho+\mathbb{Z}, k\in 2\rho+ \mathbb{Z}\}$. If $L$ is of the type (E14) of Theorem \ref{thm410} with $w_{01}=1$, then $\mathfrak{S}V_{\rho}$ is a Lie algebra with nonzero brackets given by $$ [L_n,L_{n'}]=(n-n')L_{n+n'},\qquad [Y_p, Y_{p'}]=(p-p')M_{p+p'},$$ \begin{eqnarray*} & [L_n, Y_p]=\left(\lambda_1(n+1)-p+\rho-1\right)Y_{n+p},\\ & [L_n, M_k]=\left((2\lambda_1-1)(n+1)-k+2\rho-1\right)M_{n+k},\\ & [L_n, N_m]=-mN_{n+m},\quad [Y_p, N_m]=a_{00}Y_{p+m+1},\quad [M_k, N_m]=2a_{00}M_{k+m+1},\\ & [Y_p, M_k]=[M_k, M_{k'}]=[N_m, N_{m'}]=0.\end{eqnarray*} From these, the extended Schr\"odinger-Virasoro Lie algebra defined in \cite{U} is exactly the algebra $\mathfrak{S}V_{\rho}$ in the case when $\lambda_1=\rho=\frac12$ and $a_{00}=1$. If $L$ is of the type (E15) of Theorem \ref{thm410} with $\eta=0$, then $\mathfrak{S}V_{\rho}$ is a Lie algebra with nonzero brackets given by \begin{eqnarray*} & [L_n, L_{n'}]=(n-n')L_{n+n'},\qquad [L_n, Y_p]=(\rho-1-p)Y_{n+p},\\ & [L_n, M_k]=(2\rho-2-n-k)M_{n+k},\qquad [M_k,N_m]=2a_{00}M_{k+m+1},\\ & [L_n, N_m]=(\rho-m-1)N_{n+m},\qquad [Y_p, Y_{p'}]=(p-p')M_{p+p'},\\ & [Y_p, N_m]=a_{00}\kappa_1Y_{p+m+1+\rho}+(p+2m+3-\rho)M_{p+m+\rho},\\ & [Y_p, M_k]=[M_k, M_{k'}]=[N_m, N_{m'}]=0.\end{eqnarray*} \end{example}
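As an independent sanity check (not part of the paper's argument), one can verify symbolically that the coefficient $\lambda_1(n+1)-p+\rho-1$ appearing in $[L_n, Y_p]$ in the example above is compatible with the Witt bracket $[L_n,L_{n'}]=(n-n')L_{n+n'}$, i.e., that the Jacobi identity holds on triples $(L_n,L_m,Y_p)$ for all $\lambda_1$ and $\rho$; the use of `sympy` here is our own choice, not from the paper.

```python
# Symbolic Jacobi-identity check for the brackets above:
# [L_a, Y_b] = c(a, b) Y_{a+b} with c(a, b) = lambda_1 (a+1) - b + rho - 1,
# together with [L_n, L_m] = (n - m) L_{n+m}.
import sympy as sp

n, m, p, lam, rho = sp.symbols('n m p lambda_1 rho')

def c(a, b):
    """Coefficient of Y_{a+b} in [L_a, Y_b]."""
    return lam*(a + 1) - b + rho - 1

# Jacobi: [L_n, [L_m, Y_p]] - [L_m, [L_n, Y_p]] = [[L_n, L_m], Y_p];
# both sides act on Y_{n+m+p}, so it suffices to compare coefficients.
lhs = c(m, p)*c(n, m + p) - c(n, p)*c(m, n + p)
rhs = (n - m)*c(n + m, p)
assert sp.simplify(lhs - rhs) == 0
print("Jacobi identity holds for all lambda_1 and rho")
```

The check passes identically in $\lambda_1$ and $\rho$, consistent with the one-parameter family $\mathfrak{S}V_{\rho}$ described above.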
Do You Earn Righteousness Or Is It A Free Gift? It Can’t Be Both. Meeting Jesus is the best thing that has ever happened to me. But, my journey hasn’t been all sandy beaches and rose petals. As a matter of fact there was a time in my life when I was trying so hard to be good enough for God that I lost sight of who I was. There were so many people who seemed to have it all figured out. They seemed perfect and I wanted to please God, I wanted Him to love me. So I took every piece of advice and tried to “be” who they told me to be. I felt I wasn’t good enough. So I spent a lot of time trying to be good enough by behavior alone. I didn’t realize that Jesus had already given me everything. God called us to himself while we were still sinners and saved us by His grace. So why is it that once we get saved we think that we have to earn the rest of our journey with perfect, white picket fence behavior? Our behavior will never be good enough, that’s why we’re saved by Grace. That’s why Jesus died, so He could take the punishment for your sin and GIVE YOU His righteousness. Good behavior doesn’t put you in right standing with God. Being in right standing with God creates good behavior! It’s so simple but it’s profound. This truth will set you free if you feel like you’re always trying to do more in order to make God love you or show you favor. There is no freedom in living like that. You will never feel like it’s enough. At one point, I tried earning God’s favor by reading my Bible at 5am because that’s what someone said I should do, but most days I fell asleep on the couch. I’m not really a 5am girl. On the few days I stayed up I felt really good about myself. On the days I didn’t stay awake I felt like a failure. So basically I was trying to earn God’s favor through my own efforts and on the days I got it right I was filled with pride because I did it. The other days I was filled with condemnation because I didn’t. 
God didn’t intend for us to get saved by Grace and then spend the rest of our lives earning His favor and love. Just receive His love for you and let Him develop the fruit inside of you that will produce the outward behavior that’s good for you. Don’t overcomplicate it. Relax. Let Him love you. This is all you really need. The behavior and habits will come more easily when you stop putting forth so much self-effort and just let His love and His Grace guide you! That’s my two cents for the day 😉 For daily inspiration, connect with me on Facebook!
The new LEGO Mindstorms NXT 2.0 goes on sale August 5, 2009. Hard to believe, but back when I was growing up, building forts and other objects with tiny plastic bricks from LEGO was all I needed to be happy. But times have changed, and so have today’s youth. That same little plastic brick company now works on robotics, and three years ago, the first Lego Mindstorms NXT was released. With the success of the past Mindstorms, all that is left to figure out is whether or not this new 2.0 version is enough of an improvement to be worth buying. In a nutshell, this new Mindstorm product is the culmination of the robotics projects before it and is essentially an updated kit that allows people to build their own robots that can react to touch, light, and other senses. What’s new with the Mindstorm 2.0 First, for those that are unfamiliar with the NXT series, let’s take a closer look at what makes up the new Mindstorms robot kit. The main component is the NXT Brick, which has three motor ports to allow for movement and four sensor ports to attach the sensors. Essentially, the brick is the “brain” of the robot. Color and ultrasonic sensors and a new level of computer connectivity are new with the 2.0. It also has a USB port to allow it to connect to one’s computer, while the corresponding software allows the inventor to program the robot in any way he or she pleases. New to this version is the color sensor, which, as its name suggests, can detect and distinguish between several colors. The older Lego Mindstorms also have the aforementioned light sensors and touch sensors. But what’s new is the ultrasonic sensor, which serves as the “eyes” of the robot. The various servo motors can be placed in the various ports, allowing for a wide array of movements. The sound editor and image editor are also welcome additions. When connected to one’s computer, pictures can be uploaded to the brick and be displayed on the robot. 
Likewise, voice samples and other sound effects can be recorded and played back through the brick’s speakers. While the various motors and sensors allow the person’s creativity to come to life, these extra aesthetic touches give a sense of personality to each robot, as well as a connection between the robot and its creator. Building a LEGO Mindstorm Robot With Mindstorm 2.0, the only limit is your imagination. The robots you build can be humanoid or animal-like. They can pick up objects and throw objects. The possibilities are seemingly endless, allowing the robot’s creator to let his or her imagination run wild. Even if you are familiar with the Lego Mindstorms, it is hard to deny how fun and incredible it is to build your own robot. Toys have come a long way. But advancements in technology come with a price, literally. The only question that is left is whether or not Lego Mindstorms NXT 2.0 is worth its $280 price tag. After seeing everything that the toy is capable of, I think the answer is yes, but only if this is the buyer’s first experience with the Mindstorms series. Honestly, there is not enough of an improvement over the original NXT to justify shelling out another 280 bucks if you already have the older model at home. But if you’re late to the party, the LEGO Mindstorms NXT 2.0 is the leading edge of toy robot electronics.
\begin{document} \title{User-centric Handover in mmWave Cell-Free Massive MIMO with User Mobility} \author{ \IEEEauthorblockN{Carmen D'Andrea, Giovanni Interdonato and Stefano Buzzi} \IEEEauthorblockA{Dipartimento di Ingegneria Elettrica e dell'Informazione, University of Cassino and Southern Latium, Cassino, Italy \\ \{carmen.dandrea, giovanni.interdonato, buzzi\}@unicas.it}\thanks{The authors are also with the Consorzio Nazionale Interuniversitario per le Telecomunicazioni (CNIT), 43124, Parma, Italy. This paper has been supported by the Italian Ministry of Education University and Research (MIUR) Project ``Dipartimenti di Eccellenza 2018-2022'' and by the MIUR PRIN 2017 Project ``LiquidEdge''.}} \maketitle \begin{abstract} The coupling between cell-free massive multiple-input multiple-output (MIMO) systems operating at millimeter-wave (mmWave) carrier frequencies and user mobility is considered in this paper. First of all, a mmWave channel is introduced taking into account the user mobility and the impact of the channel aging. Then, three beamforming techniques are proposed in the considered scenario, along with a dynamic user association technique (handover): starting from a user-centric association between each mobile device and a cluster of access points (APs), a rule for updating the APs cluster is formulated and analyzed. Numerical results reveal that the proposed beamforming and user association techniques are effective in the considered scenario. \end{abstract} \begin{IEEEkeywords} cell-free massive MIMO, millimeter-wave, user mobility, channel aging, handovers \end{IEEEkeywords} \section{Introduction} Cell-free massive MIMO is deemed a key technology for \textit{beyond}-5G systems~\cite{Zhang2020,Rajatheva2020,Matthaiou2020} mainly for its great ability to provide a uniformly excellent spectral efficiency (SE) throughout the network and ubiquitous coverage~\cite{Ngo2017b,Interdonato2019}. 
Decentralizing operations such as channel estimation, power control and, possibly, precoding/combining prevents the creation of bottlenecks in the system, while confining the signal co-processing within a handful of properly and dynamically selected APs, thereby implementing a user-centric system~\cite{Buzzi2019c}, makes cell-free massive MIMO practical. Erasing the cell boundaries when transmitting/receiving data requires, however, smooth coordination and accurate synchronization among the APs. The SE improvements that cell-free massive MIMO can provide over co-located massive MIMO and small cells are significant in the sub-6 GHz frequency bands, especially in terms of service fairness. In the mmWave bands, where achieving high spectral efficiency is not a concern thanks to the large available bandwidth, the distributed cell-free operation can certainly better cope with the hostile propagation environment at such high frequencies compared to a co-located system, as the presence of many serving APs in the user equipment (UE)'s proximity enhances coverage and link reliability. A few recent works have investigated the potential marriage between cell-free massive MIMO and mmWave from various viewpoints: energy efficiency maximization~\cite{Alonzo2019}, analysis in case of capacity-constrained fronthaul links~\cite{Femenias2019}, and energy-efficient strategies to cope with a non-uniform spatial traffic distribution~\cite{Morales2020}. An aspect that deserves a closer look in this context is the impact of UE mobility on the performance. As the UE moves, the propagation channels towards the APs are subject to time variations, and thereby the available channel state information (CSI) at the AP ages with time, an effect called \textit{channel aging}. 
The channel aging effects have been extensively analyzed for co-located massive MIMO~\cite{Heath_ChannelAging2013,Papazafeiropoulos_TVT2017} (and references therein), and recently for cell-free massive MIMO~\cite{ZhengJ2020} operating at sub-6 GHz frequency bands. For the mmWave bands, instead, the effects of channel aging have been evaluated for \textit{HetNet} MIMO systems in~\cite{Papazafeiropoulos_WCNC2019}. In most of the works in the literature, the channel aging is considered within a transmission block (or resource block) and usually intended as the mismatch between the CSI acquired prior to the transmission, used for beamforming/combining, and the actual CSI at the transmission/reception time (channel estimation error aside). \textbf{Contributions:} The novelty of this work consists in studying the coupling between cell-free massive MIMO, mmWave and user mobility. First of all, we introduce the mmWave channel model taking into account the user mobility and the impact of the channel aging. Then, we describe three beamforming techniques assuming different levels of complexity at the transmitter and receiver and propose a dynamic user association technique, which starts from a traditional user-centric approach and updates the serving cluster of APs for each user in order to reduce the number of unnecessary handovers. Numerical results reveal that the proposed beamforming and user association techniques are effective in the considered scenario. \section{System model} Let us consider a cell-free massive MIMO system operating at mmWave frequencies where $M$ multi-antenna APs simultaneously serve $K$ multi-antenna UEs in the same time-frequency resources in time-division duplex (TDD) mode. We assume that both the APs and the UEs are equipped with a uniform linear array (ULA) with random orientations, and steering angles taking on values in $[-\pi/2,\pi/2]$. Each AP and each UE is equipped with $N\AP$ and $N\UE$ antenna elements, respectively, with $MN\AP > KN\UE$. 
Finally, let $n\s < \min(N\AP,N\UE)$ be the number of streams multiplexed over the MIMO channel. Unlike most of the works in the literature, we consider a channel that (almost) continuously evolves over time according to the variation of the UE locations with respect to the APs, hence according to a model that takes into account the UE mobility. \subsection{Channel Model} \subsubsection{mmWave MIMO Channel Representation} The mmWave MIMO narrow-band channel between UE $k$ and AP $m$ consists of a dominant line-of-sight (LoS) path and $L_{mk} \ll \min(N\AP,N\UE)$ non-LoS (NLoS) paths due to the presence of scattering clusters. At the time instant $n$, $n \in \{1,2,3,\ldots\}$, the channel is modelled by the complex-valued $(N\AP\!\times\!N\UE)$-dimensional matrix \begin{equation} \bH_{mk}[n]=\! \varsigma\!\!\sum_{\ell=1}^{L_{mk}}\alpha_{mk}^{(\ell)}[n] \ba\AP(\phi_{mk}^{(\ell)}[n])\ba\UE\herm(\theta_{mk}^{(\ell)}[n])\!+\! 
\bH\LoS_{mk}[n], \label{eq:channelmodel_km} \end{equation} where $\varsigma = \ds \sqrt{N\AP N\UE}$ is a normalization factor, $\alpha^{(\ell)}_{mk}[n]\sim\CN(0,\beta^{(\ell)}_{mk}[n])$ is the complex-valued gain of the $\ell$-th path with strength (which reflects the path loss) denoted by $\beta^{(\ell)}_{mk}[n]$. The unit-norm ULA steering vectors at UE $k$ and AP $m$ are denoted by $\ba\UE(\theta_{mk}^{(\ell)}[n])$ and $\ba\AP(\phi_{mk}^{(\ell)}[n])$, respectively, highlighting the dependency on the angle of departure (AoD), $\theta_{mk}^{(\ell)}[n]$, and the angle of arrival (AoA), $\phi_{mk}^{(\ell)}[n]$, of the $\ell$-th path. Assuming half-wavelength spacing between the antenna elements of the ULA, the steering vectors for a generic AoD and AoA are given by \begin{align} \ba\AP(\phi) &= \frac{1}{\sqrt{N\AP}}[1, \; \eul^{\imgj \pi \sin \phi}, \; \ldots, \; \eul^{\imgj \pi (N\AP-1)\sin \phi}]\trans, \label{eq:a_AP} \\ \ba\UE(\theta) &= \frac{1}{\sqrt{N\UE}} [1, \; \eul^{\imgj \pi \sin \theta}, \; \ldots, \; \eul^{\imgj \pi (N\UE-1)\sin \theta}]\trans. \label{eq:a_UE} \end{align} The LoS component of the channel matrix at the $n$th time instant, $\bH\LoS_{mk}[n]$, is given by \begin{equation} \label{eq:LoS:channel_matrix} \bH\LoS_{mk}[n] \! = \! \varsigma \epsilon(d\LoS_{mk}[n]) \varrho\LoS_{mk}[n] \ba\AP(\phi\LoS_{mk}[n])\ba\UE\herm(\theta\LoS_{mk}[n]), \end{equation} where $$\varrho\LoS_{mk}[n] \triangleq \sqrt{\beta\LoS_{mk}[n]} \eul^{\imgj \vartheta_{mk}[n]},$$ with $\vartheta_{mk}[n]\!\sim\!\mathcal{U}(0,2\pi)$, and $\epsilon(d\LoS_{mk}[n])\!\in\!\{0,1\}$ denotes the binary blockage variable characterizing the LoS link between AP $m$ and UE $k$ with length $d\LoS_{mk}[n]$, and indicating whether the LoS path, at the $n$th time instant, is obstructed. The LoS channel strength is denoted by $\beta\LoS_{mk}[n] \gg \beta^{(\ell)}_{mk}[n], \ell = 1, \ldots, L_{mk}$. 
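As an illustrative aside (not part of the original analysis), the multipath channel construction in Eq.~\eqref{eq:channelmodel_km} with the steering vectors of Eqs.~\eqref{eq:a_AP}--\eqref{eq:a_UE} can be sketched numerically as follows; all gains and angles below are hypothetical values chosen only for illustration:

```python
import numpy as np

def steering_vector(angle_rad, n_antennas):
    """Unit-norm half-wavelength ULA steering vector, as in Eqs. (a_AP)/(a_UE)."""
    idx = np.arange(n_antennas)
    return np.exp(1j * np.pi * idx * np.sin(angle_rad)) / np.sqrt(n_antennas)

def channel_matrix(gains, aoas, aods, n_ap, n_ue):
    """Narrow-band mmWave channel as a sum of rank-one path contributions:
    H = sqrt(N_AP * N_UE) * sum_l alpha_l a_AP(phi_l) a_UE(theta_l)^H."""
    H = np.zeros((n_ap, n_ue), dtype=complex)
    for alpha, phi, theta in zip(gains, aoas, aods):
        H += alpha * np.outer(steering_vector(phi, n_ap),
                              steering_vector(theta, n_ue).conj())
    return np.sqrt(n_ap * n_ue) * H

# Hypothetical example: one strong LoS-like path plus two weak NLoS paths.
H = channel_matrix(gains=[1.0, 0.1 + 0.05j, -0.08j],
                   aoas=[0.3, -0.7, 1.1], aods=[-0.2, 0.9, 0.4],
                   n_ap=32, n_ue=16)
print(H.shape)  # (32, 16)
```

Since the channel is a sum of three rank-one terms, its rank is at most three, which is the sparsity property exploited later by the analog beamformer.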
\subsubsection{User Mobility} UE mobility leads to temporal variations in the propagation environment, resulting in a channel that ages with time. This channel aging is much more significant in mmWave systems than in sub-6 GHz systems, as a given time variation results in a larger phase variation at high frequency bands than at low frequency bands. In this work, we assume a block-fading channel model wherein the path gains---the fast fading component of the channel---stay constant within a resource block of length $\tauc$ channel uses (or time instants) and vary over the resource blocks. The ULA steering vectors and the channel variances, instead, stay constant over multiple resource blocks, i.e., for an interval of duration $\taus>\tauc$ time instants. 
The temporal evolution of the NLoS path gains over the resource blocks is modeled through a first-order autoregressive model \cite{Heath_ChannelAging2013,Papazafeiropoulos_TVT2017}, such that the path gain at the current time instant $\alpha^{(\ell)}_{mk}[n]$, with $n \in \{1,2,3,\ldots\}$, is a function of its previous state $\alpha^{(\ell)}_{mk}[n\!-\!1]$ and an innovation component $\wt{\alpha}^{(\ell)}_{mk}[n]$ as \begin{align} \label{eq:aging:alpha} \alpha^{(\ell)}_{mk}[n]= \begin{cases} \delta_{mk}[n] \alpha^{(\ell)}_{mk}[n\!-\!1]\!+\! \bar{\delta}_{mk}[n] \wt{\alpha}^{(\ell)}_{mk}[n], &\text{if } n\!\!\!\!\mod\tauc \!=\! 1, \\ \alpha^{(\ell)}_{mk}[\floor{n/\tauc}], &\text{otherwise,} \end{cases} \end{align} with $\wt{\alpha}^{(\ell)}_{mk}[n] \sim \CN(0,\beta_{mk}^{(\ell)}[n])$ and independent of $\alpha_{mk}^{(\ell)}[n-1]$, $\floor{\cdot}$ being the \textit{floor} function, $\ds \bar{\delta}_{mk}[n] = \sqrt{1-\delta_{mk}^2[n]}$, and $\delta_{mk}[n]$ representing the temporal correlation coefficient of UE $k$ with respect to AP $m$, which follows the Jakes model \begin{equation} \delta_{mk}[n]= J_0 \left( 2 \pi f_{\mathsf{D},mk} n T \right), \end{equation} with $J_0(\cdot)$ being the zeroth-order Bessel function of the first kind, and $T$ being the sampling period. $f_{\mathsf{D},mk}$ is the maximum Doppler frequency between the $k$-th UE and the $m$-th AP, given by $ f_{\mathsf{D},mk} =\nu_{mk} f_\mathsf{c}/c \, , $ where $\nu_{mk}$ is the radial velocity of UE $k$ with respect to AP $m$ in meters per second (m/s), $c = 3 \cdot 10^8$ m/s is the speed of light, and $f_\mathsf{c}$ is the carrier frequency in Hz. Importantly, the temporal variation of the channel variances, both for LoS and NLoS, is modeled as \begin{align} \label{eq:aging:beta} \beta^{(\ell)}_{mk}[n]= \begin{cases} f_{\mathsf{PL}}(d_{mk}^{(\ell)}[n]), & \text{if } n \!\!\!\! 
\mod \taus = 1, \\ \beta^{(\ell)}_{mk}[\floor{n/\taus}], & \text{otherwise}, \end{cases} \end{align} namely, the path loss is assumed to stay constant within $\taus$ channel uses. The function $f_{\mathsf{PL}}(d_{mk}^{(\ell)}[n])$ denotes the path loss function, which depends on the distance between the $k$-th UE and the $m$-th AP over the $\ell$-th path at the $n$-th time instant, $d_{mk}^{(\ell)}[n]$. Similarly, the AoAs and the AoDs (both for LoS and NLoS) vary every $\taus$ channel uses, and their temporal evolution is modeled as in Eq.~\eqref{eq:aging:angles}, shown at the top of the next page, where $f_{\mathsf{G}}(\cdot)$ represents the geometric function depending on the propagation scenario. The phase shift of the LoS channel component, $\vartheta_{mk}[n]$, instead, takes on a new random value, uniformly distributed on the interval $[0, 2\pi]$, at every resource block, and stays constant within $\tauc$ channel uses. \begin{figure*}[!t] \begin{align} \label{eq:aging:angles} \left(\phi_{mk}^{(\ell)}[n], \theta_{mk}^{(\ell)}[n]\right) = \begin{cases} f_{\mathsf{G}}(d_{mk}^{(\ell)}[n]), &\text{if } n \!\!\!\! \mod \taus = 1, \\ \left(\phi_{mk}^{(\ell)}[\floor{n/\taus}], \theta_{mk}^{(\ell)}[\floor{n/\taus}]\right), &\text{otherwise}. \end{cases} \end{align} \hrulefill \end{figure*} The variation of the $k$-th UE's location from the time instant $n$ to $n+1$ is modelled as \begin{align} x_k[n+1] &= x_k[n]+ v_{k}T_\mathsf{s} \cos\varphi_{k}, \\ y_k[n+1] &= y_k[n]+ v_{k}T_\mathsf{s} \sin\varphi_{k}, \end{align} where $\varphi_{k} \in [0, 2 \pi]$ is the direction of the movement and $v_{k}$ is the speed of the $k$-th UE, both defined with respect to a reference system centred on the UE location and assumed to remain constant over $\taus$ channel uses. A new realization of the UE's location determines a new set of distances $\{d_{mk}^{(\ell)}\}$ and, as a consequence, a new set of coefficients $\{\beta^{(\ell)}_{mk}\}$. 
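As an illustrative aside, the autoregressive gain update and the UE location update above can be sketched as follows; this is a minimal sketch in which the correlation coefficient $\delta$ is taken as an input rather than computed from the Jakes model, and all parameter values are hypothetical:

```python
import numpy as np

def ar1_gain_update(alpha_prev, delta, beta, rng):
    """One resource-block update of an NLoS path gain:
    alpha[n] = delta * alpha[n-1] + sqrt(1 - delta^2) * innovation,
    with innovation ~ CN(0, beta). The update preserves the variance beta."""
    innovation = np.sqrt(beta / 2) * (rng.standard_normal()
                                      + 1j * rng.standard_normal())
    return delta * alpha_prev + np.sqrt(1.0 - delta ** 2) * innovation

def move_ue(x, y, speed, direction, dt):
    """UE location update over an interval dt with constant speed/direction."""
    return x + speed * dt * np.cos(direction), y + speed * dt * np.sin(direction)

rng = np.random.default_rng(0)
alpha = 1.0 + 0.0j
for _ in range(5):  # five resource-block updates of a single path gain
    alpha = ar1_gain_update(alpha, delta=0.95, beta=1.0, rng=rng)

# Move a UE northward (direction pi/2) at 2 m/s for 1.5 s.
x, y = move_ue(0.0, 0.0, speed=2.0, direction=np.pi / 2, dt=1.5)
```

With $\delta=1$ the update degenerates to a static channel, while $\delta=0$ yields an independent realization each block, matching the two extremes of the correlation model.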
\subsection{Downlink Signal Model} \label{DL_model} Without loss of generality, we assume $n_\mathsf{s} = 1$, i.e., only one stream is transmitted for each AP-UE pair. Let $\bu_{mk}[n]$ be the $N\AP$-dimensional precoding vector between AP $m$ and UE $k$, and let $\bv_{k}[n]$ be the $N\UE$-dimensional combining vector used at the $k$-th UE. We also define $a_{mk}[n]$ as a binary parameter indicating whether the $m$-th AP serves the $k$-th UE. More specifically, \begin{align} a_{mk}[n]= \begin{cases} 1 \quad &\text{if AP } m \text{ serves UE } k,\\ 0 \quad &\text{otherwise}. \end{cases} \end{align} The downlink data signal sent by the $m$-th AP is given by the following $N\AP$-dimensional vector-valued waveform \begin{equation} \bs_{m}[n]=\ds \sum\nolimits_{j=1}^{K} a_{mj}[n] \sqrt{\eta_{mj}[n]}\bu_{mj}[n]x_{mj}[n], \end{equation} where $x_{mj}[n]$ is the data symbol intended for the $j$-th UE, with $\EX{|x_{mj}[n]|^2} = 1, \forall j, m$, and $\eta_{mj}[n]$ is the transmit power. The signal received at the $k$-th UE after the combining operation can then be expressed as \begin{align} \label{eq:received_k_n} r_k[n] \! &= \! \sum\nolimits_{m=1}^M a_{mk}[n] \sqrt{\eta_{mk}[n]} \mathbf{v}\herm_{k}[n] \mathbf{H}\herm_{mk}[n] \mathbf{u}_{mk}[n] x_{mk}[n] \nonumber \\ &\quad +\!\!\!\sum _{m=1}^M \sum_{j \neq k}^K a_{mj}[n] \sqrt{\eta_{mj}[n]} \bv\herm_{k}[n] \bH\herm_{mk}[n]\bu_{mj}[n]x_{mj}[n] \nonumber \\ & \quad + z_{k}[n], \end{align} where $z_{k}[n]\sim \CN(0,\sigma^2_k[n])$ is the additive noise. \subsubsection*{Precoding and Combining} We analyze three beamforming techniques: long-term beamforming (LTB), short-term beamforming (STB) and analog beamforming (ABF). We consider two different implementations for both LTB and STB: $(i)$ digital implementation (DI) and $(ii)$ constant-modulus implementation (CI), which is also suitable for a hybrid (analog/digital) implementation. 
In the case of LTB, each AP and UE performs long-term precoding and long-term combining, respectively~\cite{Akdeniz_JSAC2014}, which requires only the knowledge of the channel variances, AoAs and AoDs of the channel. We thereby define the $\left(N\AP \times N\AP \right)$-dimensional covariance matrix \begin{align} \mathbf{R}_{mk}^{\mathsf{(AP)}}[n] &= \EX{\bH_{mk}[n]\bH\herm_{mk}[n]} \label{eq:LTBF-AP} \\ & = \varsigma^2 \epsilon(d\LoS_{mk}[n]) \beta\LoS_{mk}[n] \ba\AP(\phi\LoS_{mk}[n])\ba\AP\herm(\phi\LoS_{mk}[n]) \nonumber \\ &\quad + \varsigma^2 \sum\limits_{\ell=1}^{L_{mk}}\beta_{mk}^{(\ell)}[n] \ba\AP(\phi_{mk}^{(\ell)}[n]) \ba\AP\herm(\phi_{mk}^{(\ell)}[n]), \label{eq:R_AP} \end{align} and the $(N\UE \times N\UE)$-dimensional covariance matrix \begin{align} \mathbf{R}_{mk}^{\mathsf{(UE)}}[n]&= \EX{\bH\herm_{mk}[n] \bH_{mk}[n]} \label{eq:LTBF-UE} \\ & = \varsigma^2 \epsilon(d\LoS_{mk}[n]) \beta\LoS_{mk}[n] \ba\UE(\theta\LoS_{mk}[n])\ba\UE\herm(\theta\LoS_{mk}[n]) \nonumber \\ &\quad + \varsigma^2 \sum_{\ell=1}^{L_{mk}}\beta_{mk}^{(\ell)}[n] \ba\UE(\theta_{mk}^{(\ell)}[n]) \ba\UE\herm(\theta_{mk}^{(\ell)}[n]). \label{eq:R_UE} \end{align} In the case of LTB-DI, the precoding vector $\mathbf{u}_{mk}^{\mathsf{(DI)}}[n]$ is equal to the predominant eigenvector of the matrix $\mathbf{R}_{mk}^{\mathsf{(AP)}}[n]$, while the combining vector $\mathbf{v}_{k}^{\mathsf{(DI)}}[n]$ is equal to the predominant eigenvector of the matrix \begin{equation} \label{eq:R_UEk} \overline{\bR}_k^{\mathsf{(UE)}}[n] = \sum\nolimits_{m=1}^M a_{mk}[n] \bR_{mk}^{\mathsf{(UE)}}[n]. 
\end{equation} In the LTB-CI, each entry of the precoding and combining vectors has constant modulus and the same phase as its digital counterpart, i.e., \begin{equation} \begin{array}{llll} \mathbf{u}_{mk}^{\mathsf{(CI)}}[n]=\ds \frac{1}{\sqrt{N\AP}}e^{\imgj\angle{\mathbf{u}_{mk}^{\mathsf{(DI)}}[n]}}, \quad \mathbf{v}_{k}^{\mathsf{(CI)}}[n]=\ds \frac{1}{\sqrt{N\UE}}e^{\imgj\angle{\mathbf{v}^{\mathsf{(DI)}}_{k}[n]}}, \end{array} \label{eq:CI} \end{equation} where $\angle{\cdot}$ denotes the phase of the argument. STB can be performed only under the ideal assumption that both the APs and the UEs have knowledge of the instantaneous realization of the channel $\bH_{mk}[n]$. In the case of STB-DI, the precoding vector is given by the predominant eigenvector of the matrix $\bH_{mk}[n]\bH\herm_{mk}[n]$, whereas the combining vector at the UE is given by the predominant eigenvector of the matrix defined as in~\eqref{eq:R_UEk}, but with no expectation in~\eqref{eq:LTBF-UE}. The expression of the STB-CI does not differ from that of the LTB-CI: the precoding and combining vectors follow Eq. \eqref{eq:CI}, with $\mathbf{u}_{mk}^{\mathsf{(DI)}}[n]$ and $\mathbf{v}^{\mathsf{(DI)}}_{k}[n]$ evaluated according to the STB-DI. 
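As an illustrative aside, the DI beamformer (predominant eigenvector of a covariance matrix) and its constant-modulus CI counterpart can be sketched as follows; the covariance matrix below is a hypothetical rank-one-dominated example:

```python
import numpy as np

def dominant_eigenvector(R):
    """Predominant (unit-norm) eigenvector of a Hermitian matrix R."""
    eigvals, eigvecs = np.linalg.eigh(R)  # eigenvalues in ascending order
    return eigvecs[:, -1]

def constant_modulus(u):
    """CI beamformer: keep only the phases of the DI beamformer,
    with all entries scaled to modulus 1/sqrt(N)."""
    return np.exp(1j * np.angle(u)) / np.sqrt(u.size)

# Hypothetical covariance: R = a a^H + small identity (rank-one dominated),
# with a a unit-norm ULA steering vector.
n_ap = 8
a = np.exp(1j * np.pi * np.arange(n_ap) * np.sin(0.4)) / np.sqrt(n_ap)
R = np.outer(a, a.conj()) + 1e-3 * np.eye(n_ap)

u_di = dominant_eigenvector(R)   # aligns with a (up to a phase)
u_ci = constant_modulus(u_di)    # same phases, unit-modulus entries
```

Because the dominant term of the covariance is a steering vector, whose entries all have the same modulus, the CI projection loses very little here; for richer covariances the gap between DI and CI grows.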
Finally, the precoding and combining vectors for the ABF scheme are set as the steering vectors in~\eqref{eq:a_AP} and~\eqref{eq:a_UE} corresponding to the AoA and AoD of the strongest path, respectively, for each AP-UE pair. \section{User association technique} In order to detail the user association technique, we first define the \textit{channel strength indicator} for the communication between the $m$-th AP and the $k$-th UE, denoted by $\rho_{mk}[n]$, which is equal to the predominant eigenvalue of the channel covariance matrix $\bR_{mk}^{\mathsf{(AP)}}[n]$. Any UE is served by a \textit{user-centric cluster} of APs, consisting of the $N_{\mathsf{UC}}$ APs with the largest channel strength indicator. We let $O^{(n)}_{k} \, : \, \{1,\ldots, M \} \rightarrow \{1,\ldots, M \}$ denote the operator sorting the AP indices in descending order of the entries of the vector $[\rho_{1k}[n], \ldots, \rho_{Mk}[n]]$, such that $O^{(n)}_{k}(1)$ gives the index of the AP with the largest $\rho_{mk}[n]$. 
The set $\mathcal{M}_k[n]$ of the $N_{\mathsf{UC}}$ APs serving the $k$-th user is then given by \begin{equation} \mathcal{M}_k[n]=\{ O^{(n)}_{k}(1), O^{(n)}_{k}(2), \ldots , O^{(n)}_{k}(N_{\mathsf{UC}}) \}. \end{equation} Hence, if $m \in \mathcal{M}_k[n]$, then $a_{mk}[n]=1$, and $a_{mk}[n]=0$ otherwise. The evolution of the UEs' locations due to mobility changes the relative channel strength indicators between APs and UEs, and thus the UE-to-APs associations. In this paper, we propose a UE-to-APs association aimed at reducing the number of instantaneous handovers, which can negatively affect the system performance. As we have assumed that the UE's mobility produces a non-negligible effect on the channel every $\taus$ samples, the UE-to-APs association is updated accordingly, that is \begin{equation} \overline{\rho}_{mk}^{(q)}=\rho_{mk}\left[n: q=\ds \floor{n/\taus} \right]. \end{equation} In order to reduce the number of instantaneous handovers, we define two \textit{hysteresis parameters}: a threshold value $\zeta_{\mathsf{HO}}$ and a control number $N_{\mathsf{HO}}$, used to dynamically manage the user association. The proposed dynamic user association is summarized in Algorithm \ref{User_association_Dynamic}. \begin{algorithm} \caption{The proposed dynamic user association} \begin{algorithmic}[1] \label{User_association_Dynamic} \STATE Start with $q=0$, perform the initial user-centric association based on the channel strength indicators $\overline{\rho}_{mk}^{(0)}$ and obtain the set $\mathcal{M}_k[0]$. 
\FOR{$q=1,2,3,\ldots$} \STATE Evaluate the new $\overline{\rho}_{mk}^{(q)}, \, \forall m,k$; \FOR{$k=1,\ldots,K$} \STATE Define $$m^-_k(q-1)= \arg \min_{m \in \mathcal{M}_k[q-1] }\overline{\rho}_{mk}^{(q-1)}$$ and $$m^+_k(q)=\arg \max_{m \notin \mathcal{M}_k[q-1] }\overline{\rho}_{mk}^{(q)}.$$ \STATE Replace the worst AP in $\mathcal{M}_k[q+N_{\mathsf{HO}}]$ \emph{only if} \begin{equation} \ds\frac{\overline{\rho}_{m^+_k(h+1)k}^{(h+1)}-\overline{\rho}_{m^-_k(h)k}^{(h)}}{\overline{\rho}_{m^-_k(h)k}^{(h)}} > \zeta_{\mathsf{HO}}, \label{eq:hysteresis_condition} \end{equation} with $h=q-1, q, q+1,\ldots,q+ N_{\mathsf{HO}}-1$. \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} The proposed dynamic user-association procedure is aimed at controlling and reducing unnecessary handovers. Unnecessary handovers can be a consequence of temporary variations of the large-scale fading coefficients caused by the user mobility. In order to reduce them, we consider the hysteresis parameters and change the serving cluster of a generic user only if the condition in Eq. \eqref{eq:hysteresis_condition} is satisfied. The hysteresis condition in Eq. \eqref{eq:hysteresis_condition} ensures, for a generic user in the system, that an AP is removed from the serving cluster only when the user is significantly moving away from it, and not because the AP is merely in a temporary blockage condition. \section{Numerical Results} In our simulation setup, we assume an orthogonal frequency-division multiplexing (OFDM) system operating at $f_\mathsf{c}=28$ GHz with bandwidth $W = 500$ MHz. The subcarrier spacing is $\Delta_f=480$ kHz and, assuming that the length of the cyclic prefix is 7\% of the OFDM symbol duration, i.e., $\tau_{\mathsf{CP}}\Delta_f=0.07$, we obtain an OFDM symbol duration $t_0=2.23~\mu$s and $N_C=1024$ subcarriers. The antenna heights are $10$ m at the APs and $1.65$ m at the UEs, as in \cite{Buzzi2019c}. 
The additive thermal noise has a power spectral density of $-174$ dBm/Hz, while the front-end receiver at the APs and at the UEs has a noise figure of $9$ dB. In order to prevent the UEs from leaving the simulation area due to mobility, the APs are uniformly distributed in a square area of 850 m $\times$ 850 m, while the initial positions of the UEs are uniformly distributed in an inner, centred square area of 350 m $\times$ 350 m. We set $K=20$, $M=120$, $N\AP=32$ and $N\UE=16$. We assume a total number of scatterers, $N_{\mathsf{s}}=200$, common to all the APs and UEs and uniformly distributed in the simulation area where the APs are located. To model the blockage, we assume that the communication between the $m$-th AP and the $k$-th UE takes place via the $s$-th scatterer, i.e., the $s$-th scatterer is one of the effective $L_{mk}$ scatterers contributing to the channel in Eq. \eqref{eq:channelmodel_km}, if the rays between the $m$-th AP and the $s$-th scatterer and between the $k$-th UE and the $s$-th scatterer \emph{simultaneously} exist. We assume that a link exists between two entities, in our case one AP/UE and one scatterer, if they are in LoS, which happens with a probability $P_{\mathsf{LoS}}(d)$ that is a function of the distance $d$ between the two entities, defined as \cite{ghosh20165g_WP,5G3GPP} \begin{equation} P_{\mathsf{LoS}}(d)=\text{min}\left(20/d,1\right)\left(1-e^{-d/39}\right)+e^{-d/39} \; . \end{equation} The function $f_{\mathsf{PL}}\left(d_{mk}^{(\ell)}[n]\right)$ follows the Urban Microcellular (UMi) Street-Canyon model in \cite{ghosh20165g_WP}. Note that, given the locations of scatterers, UEs and APs in the system, the corresponding AoAs, AoDs, path lengths and $f_{\mathsf{G}}(\cdot)$ in Eq. \eqref{eq:aging:angles} are obtained through straightforward geometric relationships. 
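As a sanity check on the LoS probability model above, the function is easy to evaluate directly (a minimal sketch of the formula, with hypothetical test distances):

```python
import math

def p_los(d):
    """LoS probability versus distance d (meters), as in the model above:
    min(20/d, 1) * (1 - exp(-d/39)) + exp(-d/39)."""
    return min(20.0 / d, 1.0) * (1.0 - math.exp(-d / 39.0)) + math.exp(-d / 39.0)

# For d <= 20 m the link is in LoS with probability 1; beyond that,
# the probability decays with distance.
for d in (10.0, 50.0, 200.0):
    print(d, p_los(d))
```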
In the downlink data transmission phase, we assume equal power allocation, i.e., letting $P_m^{\mathsf{dl}}$ be the maximum transmit power at the $m$-th AP, we set $\eta_{mk}[n]=P_m^{\mathsf{dl}}/\sum\nolimits_{j=1}^K a_{mj}[n]$. We evaluate the system performance by assuming that both the APs and the UEs perfectly know the channel variances, the AoAs and the AoDs whenever required. \begin{figure}[!t] \centering \includegraphics[scale=0.6]{Rate_v_max0_10.eps} \caption{CDF of the downlink rate per user, comparison between the considered beamforming techniques in the cases of $v=0$ km/h and $v \in [5,10]$ km/h. Parameters: $N_{\mathsf{UC}}=5$, $\zeta_{\mathsf{HO}}=5$ \% and $N_{\mathsf{HO}}=10$. } \label{Fig:Rate_vmax10} \end{figure} The signal-to-interference-plus-noise ratio (SINR) at the $n$-th time instant, in the case of perfect channel state information, is denoted by $\text{SINR}_k[n]$ and, from Eq. \eqref{eq:received_k_n}, it is equal to \begin{align} \label{eq:SINR_k} \text{SINR}_k[n] = \frac{\left|\sum\limits_{m=1}^M a_{mk}[n] \sqrt{\eta_{mk}[n]} \bv\herm_k[n] \bH\herm_{mk}[n]\bu_{mk}[n] \right|^2}{\sum\limits_{j \neq k}^K \left|\sum\limits_{m=1}^M a_{mj}[n] \sqrt{\eta_{mj}[n]} \bv\herm_k[n] \bH\herm_{mk}[n] \bu_{mj}[n]\right|^2 + \sigma^2_k[n]}. \end{align} Hence, a downlink achievable rate for UE $k$ is given by $ \text{R}_k[n]= W \log_2\left( 1+ \text{SINR}_k[n] \right). $ In Fig. \ref{Fig:Rate_vmax10} we report the CDF of the downlink rate per user in the considered scenario using the beamforming techniques detailed in Section \ref{DL_model}, both for the case of static users, i.e., $v=0$ km/h, and for slow-moving users with $v \in [5,10]$ km/h. For the proposed dynamic user association, we assume that the hysteresis parameters are $\zeta_{\mathsf{HO}}=5$ \% and $N_{\mathsf{HO}}=10$, and that the dimension of the serving cluster for each user is $N_{\mathsf{UC}}=5$. 
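As an illustrative aside, the SINR and rate expressions above can be evaluated on a toy instance; this is a minimal sketch with a single AP, two precoded streams and hypothetical values (identity-like channel, unit noise):

```python
import numpy as np

def effective_channel(v, H, u):
    """Scalar effective channel v^H H^H u after precoding and combining."""
    return np.vdot(v, H.conj().T @ u)

# Toy 2x2 instance: H = 2*I, matched unit-norm vectors, one interfering stream.
H = 2.0 * np.eye(2, dtype=complex)
u_useful = np.array([1.0, 0.0], dtype=complex)  # precoder for user k
u_interf = np.array([0.0, 1.0], dtype=complex)  # precoder for user j != k
v = np.array([1.0, 0.0], dtype=complex)         # combiner of user k

g_useful = effective_channel(v, H, u_useful)    # = 2
g_interf = effective_channel(v, H, u_interf)    # = 0 (orthogonal precoders)

noise_var = 1.0
sinr = abs(g_useful) ** 2 / (abs(g_interf) ** 2 + noise_var)  # = 4
rate = 1.0 * np.log2(1.0 + sinr)                # W = 1 Hz -> log2(5) bps
```

With orthogonal precoders the interference term vanishes and the rate reduces to the single-user bound, which is the behaviour the full SINR expression specializes to.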
Inspecting the results, we can first note that user mobility degrades the performance in terms of rate per user with respect to the case of static users. Regarding the beamforming techniques, we can see that the LTB, which is more suitable for a practical implementation than the STB, offers good performance. We can also see that the ABF, which focuses the power only along the main AoA and AoD of each UE-AP pair, is effective, especially in the case of user mobility. This behaviour is due to the sparse nature of the mmWave channel, which allows us, on the one hand, to focus the transmit power in a few directions and, on the other hand, to reduce the interference in the system. Similar insights can also be obtained from the results in Table \ref{table:v_max_50}, which reports the median rate (MR) and 95\%-likely rate (95-R) of the system for the different beamforming techniques in the cases of slow-moving users with $v \in [5,10]$ km/h and fast-moving users with $v \in [20,50]$ km/h. \begin{table}[!t] \centering \renewcommand{\arraystretch}{1.3} \setlength{\tabcolsep}{5.5pt} \caption{Median Rate (MR) and 95\%-likely Rate (95-R) in Mbps} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline UE speed & \multicolumn{4}{c|}{$v \in [5,10] $ km/h} & \multicolumn{4}{c|}{$v \in [20,50]$ km/h} \\ \hline Rate & \multicolumn{2}{c|}{MR} & \multicolumn{2}{c|}{95-R} & \multicolumn{2}{c|}{MR} & \multicolumn{2}{c|}{95-R} \\ \hline $\zeta_{\mathsf{HO}}$ & 5\% & 20\% & 5\% & 20\% & 5\% & 20\% & 5\% & 20\% \\ \hline LTB, DI & 777 & 791 & 135 & 136 & 659 & 682 & 109 & 112 \\ \hline LTB, CI & 720 & 734 & 122 & 123 & 617 & 638 & 102 & 105 \\ \hline \multicolumn{1}{|l|}{STB, DI} & 736 & 749 & 149 & 151 & 648 & 669 & 122 & 126 \\ \hline \multicolumn{1}{|l|}{STB, CI} & 776 & 791 & 146 & 148 & 649 & 684 & 123 & 127 \\ \hline \multicolumn{1}{|l|}{ABF} & 814 & 830 & 156 & 158 & 684 & 709 & 129 & 133 \\ \hline \end{tabular} \label{table:v_max_50} \end{table} \section{Conclusions and future work} In this 
paper we considered a cell-free massive MIMO scenario at mmWave frequencies with UE mobility. First of all, we introduced a channel model taking into account the user mobility and the impact of the channel aging, and described three beamforming techniques assuming different levels of complexity. Taking into account the impact of the user mobility, we proposed a dynamic user association technique, which starts from a traditional user-centric approach and updates the serving cluster of APs for each user in order to reduce the number of unnecessary handovers. Numerical results reveal that the proposed techniques are effective and motivate further investigation of this topic. In particular, the presented results assumed perfect knowledge of the channel variances, the AoAs and the AoDs; further investigations are thus required on the impact of channel estimation errors and on the design of a protocol that exploits the sparse and autoregressive nature of the channel realizations. \bibliography{Cell_free_references} \bibliographystyle{ieeetran} \end{document}
TITLE: What is the smallest cardinality of a self-linked set in a finite cyclic group? QUESTION [15 upvotes]: A subset $A$ of a group $G$ is defined to be self-linked if $A\cap gA\ne\emptyset$ for all $g\in G$. This happens if and only if $AA^{-1}=G$. For a finite group $G$, denote by $sl(G)$ the smallest cardinality of a self-linked set in $G$. It is clear that $sl(G)\ge \sqrt{|G|}$. A more accurate lower bound is $sl(G)\ge \frac{1+\sqrt{4|G|-3}}2$. By a classical result of Singer (1938), for any power $q=p^k$ of a prime number $p$, the cyclic group $C_n$ of cardinality $n=1+q+q^2$ contains a self-linked subset of cardinality $1+q$, which implies that $sl(C_n)=1+q=\frac{1+\sqrt{4n-3}}2$. So, for such numbers $n$ the lower bound $\frac{1+\sqrt{4n-3}}2$ is exact. In this paper we prove the upper bound $sl(C_n)\le \sqrt{2n}$ holding for all $n\ne 4$. Problem 1. Is $sl(C_n)=(1+o(1))\sqrt{n}$? This problem is equivalent to Problem 2. Does the limit $\lim_{n\to\infty}{sl(C_n)}/{\sqrt{n}}$ exist? If the answers to Problems 1 and 2 are negative, then we can also ask Problem 3. Evaluate the constant $\lambda:=\limsup_{n\to\infty}{sl(C_n)}/{\sqrt{n}}$. At the moment it is known that $1\le\lambda\le\sqrt{2}$. REPLY [10 votes]: The difference cover problem has been better studied in the context of $\mathbf{Z}$. Redei, Renyi, and others in the 40s asked for the size of the smallest set $A$ such that $A-A$ covers $\{1,2,\dots,N\}$. They proved an upper bound of roughly $\sqrt{8/3} \sqrt{N}$. To prove this they combined Singer's construction of a perfect difference set with the "perfect ruler" $\{0,1,4,6\}$ (which has difference set $\{-6,\dots,6\}$, each difference with multiplicity one). This was later improved by Leech and Golay to $\sqrt{8/3 - \epsilon}\sqrt{N}$ (for explicit but not very large $\epsilon$). More interestingly, Redei and Renyi proved a nontrivial lower bound of the form $\sqrt{2 + \frac{4}{3\pi}}\sqrt{N}$. 
The upper bound can easily be ported to the cyclic problem by taking $N\approx n/2$ and reducing the set $A$ modulo $n$. This proves an upper bound of roughly $\sqrt{4/3}\sqrt{n}$. However, because of the nontrivial lower bound, this proof technique cannot prove $(1+o(1))\sqrt{n}$. Indeed I think it suggests caution.
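The small cases are also easy to check by brute force. The sketch below (plain Python, exhaustive search, so only feasible for small $n$) computes $sl(C_n)$ directly and verifies the $q=2$ instance of Singer's result, i.e., $n=7$ with the perfect difference set $\{0,1,3\}$:

```python
from itertools import combinations

def is_self_linked(A, n):
    """A is self-linked in C_n iff the difference set A - A covers Z_n."""
    diffs = {(a - b) % n for a in A for b in A}
    return len(diffs) == n

def sl(n):
    """Smallest cardinality of a self-linked subset of C_n (brute force)."""
    for size in range(1, n + 1):
        for A in combinations(range(n), size):
            if is_self_linked(A, n):
                return size
    return n  # unreachable: the whole group is always self-linked

print([sl(n) for n in range(2, 9)])
```

For $n=7$ this returns $3=\frac{1+\sqrt{4\cdot 7-3}}{2}$, matching the lower bound exactly, as Singer's theorem predicts for $n=1+q+q^2$ with $q=2$.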
TITLE: Is this solution for the Gossip Problem correct? QUESTION [2 upvotes]: The Gossip Problem: Suppose there are $n$ ladies in a group, each aware of a gossip no one else in the group knows about. These ladies communicate by telephone; when two ladies in the group talk, they share information about all gossip each knows about. For example, on the first call, two people share information, so by the end of the call, each of these people knows about two gossips. The gossip problem asks for $G(n)$, the minimum number of telephone calls that are needed for all $n$ ladies to learn about all the gossip. Prove that $G(n)\geq 2n-4$ when $n\geq4$. I came up with a solution for the Gossip Problem and would like to know if it is correct (I was awarded 0 in an exam). Here is the gist of it: Claim 1: In a call, the maximum number of gossips that can be gained is $n$. This only occurs when the two callers between them have all the gossips and there are no overlaps. Claim 2: In the first $(n-1)$ calls, the maximum that can be gained in the $k$th call is $k+1$ (in the first call 2, in the second call 3, etc.). This is because after the $k$th call each lady can know a maximum of $k+1$ gossips. From our 2 conjectures, the maximum possible number of gossips gained in calls $C_1, C_2, \ldots$ is of the form $2,3,4,5,\ldots,n,n,n,n$. Let us call the maximum total number of gossips gained $f(n)$. Now we know the required number of gossips gained is $n^2-n$. Hence we can write $f(n)\geq n^2-n$. Let the number of meetings be $2n-k$. After simplification we get $$n(n+1)/2-1+(n-k+1)n\geq n^2-n,$$ which is $$n^2+n-2+2n^2-2nk+2n\geq 2n^2-2n.$$ Hence, $n^2+n(5-2k)-2\geq 0$. Now the interval of $n$ is $(4,\infty)$. In both the possible graphs, plugging in $n=4$ we get $16+20-8k-2\geq 0$, hence $k\leq 4.25$, and from here $k\leq 4$. We hence get that the number of meetings is greater than or equal to $2n-4$. Further, there are famous sequences of steps establishing $2n-4$ as achievable, which lets us conclude that the minimum number of meetings required is at most $2n-4$. 
E.g.: Separate 4 ladies, have the $n-4$ remaining ladies give their gossip to any of the 4 ladies, then let the 4 ladies gain all gossips through a sequence of 4 calls among themselves, and then let them give all the gossip to the $n-4$ ladies. So there are $(n-4)+4+(n-4)=2n-4$ calls for all $n$. Mind Your Decisions made a video on this as well. So please let me know if this is correct. I'm surprised I came up with something this simple for a reputed hard problem :) . REPLY [1 votes]: There must be something going wrong in the "after simplification we get" step. From your two "conjectures" (it would be better to call them "claims"), adding up the number of possible gossips gained after the first $2n-k$ calls gives $f(2n-k)=3n^2/2+(3/2-k)n-1$ provided $k\leq n$, which is already more than the required number for $k\approx n/2$.
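The $2n-4$ upper-bound construction described in the question (funnel everything through 4 "hub" ladies, let the hubs share among themselves, then redistribute) is easy to check by simulation. A minimal sketch, not taken from the question itself; the hub assignment `i % 4` and the particular 4-call hub round are illustrative choices:

```python
def gossip_schedule(n):
    """Classic 2n-4 schedule: funnel through 4 hub ladies, share, redistribute."""
    assert n >= 4
    outer = list(range(4, n))
    calls = []
    # Phase 1: each of the n-4 outer ladies phones a hub (n-4 calls).
    calls += [(i, i % 4) for i in outer]
    # Phase 2: 4 calls among the hubs leave every hub knowing everything.
    calls += [(0, 1), (2, 3), (0, 2), (1, 3)]
    # Phase 3: each outer lady phones her hub again (n-4 calls).
    calls += [(i, i % 4) for i in outer]
    return calls

def run(n):
    """Simulate the schedule: a call merges the two participants' gossip sets."""
    know = [{i} for i in range(n)]
    calls = gossip_schedule(n)
    for a, b in calls:
        know[a] = know[b] = know[a] | know[b]
    return calls, know
```

For every $n\geq 4$ this schedule has exactly $2n-4$ calls and leaves each lady knowing all $n$ gossips, confirming the upper bound $G(n)\leq 2n-4$ (the lower bound is what the question tries to prove).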
TITLE: Understanding division by monic polynomial in $R[x]$ where $R$ is an arbitrary ring QUESTION [11 upvotes]: I read "Algebra: Chapter 0" by P. Aluffi. I encountered a topic where it says you can divide any polynomial in $R[x]$ ($R$ is any ring) by a monic polynomial (that is, a polynomial of the form $x^d + \sum\limits_{i=0}^{d-1} a_i x^{i}$). It says that if $g(x)$ is any polynomial and $f(x)$ is a monic polynomial, then $\exists\ q(x), r(x) \in R[x],\ \deg r(x) < \deg f(x)$ such that $g(x) = f(x)q(x) + r(x)$. I have two questions regarding this: 1) Why does $f(x)$ need to be monic? 2) How can we prove it? The book talks about induction and says that if $\deg g(x) = n > d = \deg f(x)$, then $\forall a \in R$ we have $ax^n = ax^{n-d}f(x) + h(x)$ for some $h(x)$ with $\deg h(x) < n$. It's true, but how do I take it from here? REPLY [1 votes]: For polynomials over any commutative coefficient ring, the high-school polynomial long division algorithm works to divide with remainder by any monic polynomial, i.e. any polynomial $\rm\:f\:$ whose leading coefficient $\rm\:c = 1\:$ (or a unit), since $\rm\:f\:$ monic implies that the leading term of $\rm\:f\:$ divides all higher degree monomials $\rm\:x^k,\ k\ge n = \deg f,\:$ so the high-school division algorithm works to kill all higher degree terms in the dividend, leaving a remainder of degree $\rm < n = \deg f\,$ (see here for further detail). The division algorithm generally fails if $\rm\:f\:$ is not monic, e.g. $\rm\: x = 2x\:q + r\:$ has no solution for $\rm\:r\in \mathbb Z,\ q\in \mathbb Z[x],\:$ since evaluating at $\rm\:x=0\:$ $\Rightarrow$ $\rm\:r=0,\:$ and evaluating at $\rm\:x=1\:$ $\Rightarrow$ $\:2\:|\:1\:$ in $\mathbb Z,\,$ contradiction. Notice that the same proof works in any coefficient ring $\rm\:R\:$ in which $2$ is not a unit (invertible).
Conversely, if $2$ is a unit in $\rm\:R,$ say $\rm\:2u = 1\:$ for $\rm\:u\in R,\:$ then division is possible: $\rm\: x = 2x\cdot u + 0.$ However, we can generalize the division algorithm to the non-monic case as follows. Theorem (nonmonic Polynomial Division Algorithm) $\ $ Let $\,0\neq F,G\in A[x]\,$ be polynomials over a commutative ring $A,$ with $\,a\,$ = lead coef of $\,F,\,$ and $\, i \ge \max\{0,\,1+\deg G-\deg F\}.\,$ Then $\qquad\qquad \phantom{1^{1^{1^{1^{1^{1}}}}}}a^{i} G\, =\, Q F + R\ \ {\rm for\ some}\ \ Q,R\in A[x],\ \deg R < \deg F$ Proof $\ $ Hint: use induction on $\,\deg G.\,$ See this answer for a full proof.
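The induction in the answer is just the high-school long-division loop run over an arbitrary coefficient ring: because $f$ is monic, killing the leading term of the dividend needs only ring multiplication, never division by the leading coefficient. A minimal sketch over $\mathbb{Z}$ (coefficient lists, lowest degree first; the function name is my own):

```python
def divmod_monic(g, f):
    """Divide g by a monic f over a commutative ring.

    Polynomials are coefficient lists, lowest degree first.
    Returns (q, r) with g = q*f + r and deg r < deg f.
    Only ring operations (+, -, *) are used, never division
    by the leading coefficient: this is exactly why f must be monic.
    """
    assert f and f[-1] == 1, "f must be monic"
    r = list(g)
    d = len(f) - 1
    q = [0] * max(len(r) - d, 1)
    while len(r) - 1 >= d and any(r):
        # Kill the leading term of r using c * x^(deg r - d) * f.
        c = r[-1]
        shift = len(r) - 1 - d
        q[shift] += c
        for i, a in enumerate(f):
            r[shift + i] -= c * a
        # Drop the now-zero leading coefficients.
        while len(r) > 1 and r[-1] == 0:
            r.pop()
    return q, r
```

For example, dividing $x^3+2x+5$ by the monic $x^2+1$ yields $q=x$ and $r=x+5$; the same loop works verbatim with coefficients in any commutative ring, which is the content of the theorem above.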
Michelle Knight reveals how Ariel Castro beat, bound and deprived her in interview with Dr. Phil Newschannel 5 at 11 CLEVELAND - Michelle Knight, who was held captive by Ariel Castro for 11 years, revealed details of what happened to her inside the convicted rapist and kidnapper's home in a national TV interview with Dr. Phil Tuesday. "When he did abuse me, it lasted for maybe three or four hours," said Knight, the first of Castro's three victims. "He would stop, take a break in between and come back." Knight was kidnapped in August 2002 when she was 20 years old. She said Castro lured her inside his house with the promise of a puppy. Instead, he trapped Knight inside a room and tied her up with an orange extension cord. "I was tied up like a fish, an ornament on the wall," she added. During her first year, she described being chained to a pole in the basement and nearly suffocated by a motorcycle helmet that Castro put over her face. "The chains were so big," said Knight, who said she would often pray to God to pop the chains' locks. "And he wraps it around my neck. He sits me down on the floor, and he says this is where you're gonna stay until I trust you. Now if I do it too tight and you don't make it, that means you wasn't meant to stay here. That means God wanted to take you." The beatings turned into rape. Also in that first year, Knight said she became pregnant by Castro, and when he found out: "I was standing up and he punched me with a barbell," she said. "I fell to the floor." Knight subsequently lost the baby, but she said faith that one day she might see her own son kept her going. "I want my son to know me as a victor, not a victim, and I want him to know that I survived loving him," she said. However, Knight's childhood wasn't much easier. Prior to her abduction, she described nearly being locked up by her own mother, who didn't allow her out of the house or to have friends. In a statement to Dr.
Phil, Barbara Knight said her daughter's childhood has been altered by Castro and her years in captivity. Castro committed suicide in his jail cell a month after he pled guilty to charges and was sentenced to life in prison.
Placing orders online or over the phone is quick and easy to do. Here are some things to consider when placing your order to ensure seamless service and the most accurate results. SERVICES - Cutoff Times are mentioned beside the service type to identify the latest possible time that those rates are available. For example, a Same Day service cutoff is before noon. - Delivery Times – How quickly does your package need to get where it's going? Our services are based on the delivery time selected in the service. A four-hour regular service will be delivered in 4 hours, but the pickup time will vary depending on the driver's location and dispatch route. If you have a specific pickup time, select Double Direct service. CONTACTS - Make sure to include all important contact information for the order, pickup and delivery locations. Phone numbers, email and full names are always best. ORDER NUMBERS - Know Your Order Number – Usually our staff is really quick at finding your order whenever you want an update on the status. Knowing your order number is the absolute fastest way to get a customer service representative to assist you on the phone. - Know Your Account Number – When placing an order over the phone, we might ask you for your account number. This 4-digit, easy-to-remember number is the best way to ensure that your order is processed promptly. READY TO SHIP - What time is your shipment ready to be picked up? This is a critical piece of information because it helps us dispatch our driver routes appropriately. There is a no-pickup charge (equal to half of your original order's worth) if the package is not available for pickup, or a waiting-time charge for the driver. INSTRUCTIONS - It's always a good idea to give as much information as you have available for your shipment.
Here are a few categories to consider: - Pickup and drop-off locations – Front desk, shipping side door, upstairs or any other description helps our drivers get to you quickly. - Fragile or unique packages – If your package is fragile or requires special attention, please make sure to include any relevant information.
Gorgeous Kundan Meena Necklace to Enhance Bridal Beauty by Shilpee Nagota on Oct.29, 2009, under Jewelry Designs :Diamond Polkis, Kundan Meena, Necklace Set - Product name :- Necklace Set - Style of Jewelry :- Traditional, Bridal - - Kundan Meena Necklace Set with Diamond Polkis - Charming Beaded Kundan Meena Necklace Set with Emerald and Diamond Polkis - Floral Kundan Meena Necklace Set with Diamond Polkis & Ruby - Kundan Meena – Bridal Necklace Set with Emerald, Ruby and Diamond Polkis
\begin{document} \begin{center} {\textsc {\Large On Cobweb Posets\\ and Discrete F-Boxes Tilings}} \\ \vspace{0.5cm} Maciej Dziemia\'nczuk \vspace{0.5cm} {\erm Institute of Informatics, University of Gda\'nsk \\ PL-80-952 Gda\'nsk, Wita Stwosza 57, Poland\\ e-mail: mdziemianczuk@gmail.com\\ } \end{center} \begin{abstract} $F$-boxes, defined in \cite{akkmd2} as hyper-boxes in the discrete space $N^{\infty}$, are applied here to the geometric description of tilings of cobweb posets' Hasse diagrams. The $F$-boxes' edge sizes are taken to be values of terms of a natural numbers' valued sequence $F$. The problem of partitions of hyper-boxes represented by graphs into blocks of special form is considered, and these are to be called $F$-tilings. A proof of such tilings' existence for a certain sub-family of admissible sequences $F$ is delivered. The family of $F$-tilings which we consider here includes among others $F$ = Natural numbers, Fibonacci numbers, Gaussian integers with their corresponding $F$-nomial (Binomial, Fibonomial, Gaussian) coefficients, as is typical for the combinatorial interpretation of such tilings originating from the Kwa\'sniewski cobweb posets tiling problem. An extension of this tiling problem onto the general case of multi $F$-nomial coefficients is proposed here. A reformulation of the present cobweb tiling problem into a clique problem of a graph specially invented for that purpose is proposed here too. To this end we illustrate the area of our reconnaissance by means of a Venn-type map of various cobweb sequences families.
\vspace{0.4cm} \noindent AMS Classification Numbers: 05A10, 05A19, 11B83, 11B65 \vspace{0.2cm} \noindent \emph{Keywords}: partitions of discrete hyper-boxes, cobweb tiling problem, multi F-nomial coefficients \vspace{0.3cm} \noindent Affiliated to The Internet Gian-Carlo Polish Seminar: \\ \noindent \emph{http://ii.uwb.edu.pl/akk/sem/sem\_rota.htm}, \\ Article \textbf{No7}, April 2009, 15 April 2009, \\ (302nd anniversary of Leonard Euler's birth) \end{abstract} \vspace{0.3cm} \section{Introduction} The \emph{Kwa\'sniewski upside-down} notation from \cite{akk4} (see also \cite{akk1,akk2}) is taken here for granted. For example, the $n$-th element of sequence $F$ is $F_n \equiv n_F$; consequently $n_F! = n_F\cdot(n-1)_F\cdot...\cdot 1_F$ and a set $[n_F] = \{1,2,...,n_F\}$, however $[n]_F=\{1_F,2_F,...,n_F\}$. For more about the effectiveness of this notation see references in \cite{akk4} and the Appendix ``\emph{On upside-down notation}'' in \cite{akkmd2}. Throughout this paper we shall consistently use the letter $F$ for a sequence of positive integers, i.e. $F\equiv\{n_F\}_{n\geq 0}$ such that $n_F\in\Nat$ for any $n\in\NatZero$. \subsection{Discrete $m$-dimensional $F$-Box} Let us define a discrete $m$-dimensional $F$-box with edge sizes designated by a natural numbers' valued sequence $F$ as described below. These $F$-boxes from \cite{akkmd2} were invented as a response to the \emph{Kwa\'sniewski cobweb tiling} problem posed in \cite{akk1} (Problem 2 therein) and his question about visualization of this phenomenon. \begin{defn} Let $F$ be a natural numbers' valued sequence $\{n_F\}_{n\geq 0}$ and $m,n\in\Nat$ such that $n\geq m$. Then a set $V_{m,n}$ of points $v=(v_1,...,v_m)$ of the discrete $m$-dimensional space $\Nat^m$ given as follows \begin{equation} V_{m,n} = [k_F] \times [(k+1)_F] \times ... \times [n_F] \end{equation} where $k=n-m+1$ and $[s_F] = \{1,2,...,s_F\}$ is called an $m$-dimensional $F$-box.
\end{defn} \begin{figure}[ht] \begin{center} \includegraphics[width=100mm]{boxes.eps} \caption{$F$-Boxes $V_{2,3}$ and $V_{3,4}$ with sub-boxes.} \end{center} \end{figure} In the case of $n = m$ we write for short $V_{m,m} \equiv V_{m}$. Assume that we have an $m$-dimensional box $V_{m,n} = W_1\times W_2\times...\times W_m$. Then a set $A = A_1\times A_2\times ... \times A_m$ such that $$ A_s \subset W_s, \qquad |A_s|>0, \qquad s=1,2,...,m; $$ is called an \emph{$m$-dimensional sub-box of $V_{m,n}$}. Moreover, if for $s=1,2,...,m$ these sets $A_s$ satisfy the following $$ |A_s| = (\sigma\cdot s)_F $$ for any permutation $\sigma$ of the set $\{1_F,2_F,...,m_F\}$ then $A$ is called an \emph{$m$-dimensional sub-box of the form $\sigma V_m$}. Compare with Figure \ref{fig:Tiles3D}. \vspace{0.2cm} Note that the permutation $\sigma$ might be understood here as an orientation of the sub-box's position in the box $V_{m,n}$. Any two sub-boxes $A$ and $B$ are disjoint if their sets of points are disjoint, i.e. $A\cap B = \emptyset$. \vspace{0.2cm} The number of points $v=(v_1,...,v_m)$ of an $m$-dimensional box $V_{m,n}$ is called its \emph{volume}. It is easy to see that the \emph{volume} of $V_{m,n}$ is equal to \begin{equation} |V_{m,n}| = n_F\cdot (n-1)_F \cdot ... \cdot (n-m+1)_F = n^{\underline{m}}_F \end{equation} while for $m = n$ \begin{equation} |V_m| = |\sigma V_m| = m_F \cdot (m-1)_F \cdot ... \cdot 1_F = m_F! \end{equation} \subsection{Partition of discrete $F$-boxes} Let us consider an $m$-dimensional $F$-box $V_{m,n}$. A finite collection of $\lambda$ pairwise disjoint sub-boxes $B_1,B_2,...,B_\lambda$ of volume equal to $\kappa$ is called a \emph{$\kappa$-partition} of $V_{m,n}$ if their set union gives the whole box $V_{m,n}$, i.e. \begin{equation} \bigcup_{1\leq j \leq \lambda} B_j = V_{m,n}, \qquad |B_i| = \kappa,\qquad i=1,2,...,\lambda.
\end{equation} \vspace{0.2cm} \noindent \textbf{Convention.} In the following, we shall deal only with those $\kappa$-partitions of $m$-dimensional boxes $V_{m,n}$ whose sub-boxes' volume $\kappa$ is equal to the volume of the box $V_m$, i.e. $\kappa = |V_m|$. Of course the box $V_{m,n}$ has a $\kappa$-partition \textit{not for all} $F$-sequences \cite{md1}. Therefore we introduce the name \emph{$F$-admissible} sequence, which means that $F$ satisfies the necessary and sufficient conditions for the box $V_{m,n}$ to have $\kappa$-partitions. In order to proceed let us first recall what follows. \begin{defn}[\cite{akk1,akk2}]\label{def:fnomial} Let $F$ be a natural numbers' valued sequence $F=\{n_F\}_{n\geq 0}$. Then the $F$-nomial coefficient is identified with the symbol \begin{equation} \fnomial{n}{m} = \frac{n_F!}{m_F!(n-m)_F!} = \frac{n_F^{\underline{m}}}{m_F!} \end{equation} where $n_F^{\underline{0}} = 0_F! = 1$. \end{defn} \begin{defn}[\cite{akk1,akk2}] \label{def:admissible} A sequence $F$ is called admissible if, and only if, for any $n,m\in \NatZero$ with $n\geq m$ the value of the $F$-nomial coefficient is a natural number or zero, i.e. \begin{equation}\label{eq:admissible} \fnomial{n}{m} \in \NatZero \end{equation} while for $n < m$ it is zero. \end{defn} \vspace{0.2cm} Recall now also a combinatorial interpretation of the $F$-nomial coefficients in $F$-box reformulated form (consult Remark 5 in \cite{akk4} and \cite{akkmd2}). And note: these coefficients encompass among others Binomial, Gaussian and Fibonomial coefficients. \begin{fact}[Kwa\'sniewski \cite{akk1,akk2}] Let $F$ be an admissible sequence. Take any $m,n\in\Nat$ such that $n\geq m$; then the value of the $F$-nomial coefficient $\fnomial{n}{m}$ is equal to the number of sub-boxes that constitute a $\kappa$-partition of the $m$-dimensional $F$-box $V_{m,n}$ where $\kappa = |V_m|$.
\end{fact} \noindent {\it{\textbf{Proof.}}} This proof comes from Observation 3 in \cite{akk1,akk2} and was adapted here to the language of discrete boxes. Let us consider an $m$-dimensional box $V_{m,n}$ with $|V_{m,n}| = n^{\underline{m}}_F$. The volume of sub-boxes is equal to $\kappa = |V_m| = m_F!$. Therefore the number of sub-boxes is equal to $$ \frac{n^{\underline{m}}_F}{m_F!} = \fnomial{n}{m} $$ From the definition of an $F$-admissible sequence we have that the above is a natural number. Hence the thesis $\blacksquare$ \vspace{0.4cm} While considering any $\kappa$-partition of a certain $m$-dimensional box we only assume that the sub-boxes \textbf{have the same volume}. In the next section we shall take into account those partitions whose sub-boxes additionally have an established structure. \subsection{Tiling problem} Now, special $\kappa$-partitions of discrete boxes are considered. Namely, we deal only with those partitions of the $m$-dimensional box $V_{m,n}$ in which all sub-boxes \textbf{are of the form} $V_m$. \begin{defn} Let $V_{m,n}$ be an $m$-dimensional $F$-box. Then any $\kappa$-partition into sub-boxes of the form $V_m$ is called a tiling of $V_{m,n}$. \end{defn} It was shown in \cite{md1} that the admissibility condition (\ref{eq:admissible}) alone is not sufficient for the existence of a tiling of any given $m$-dimensional box $V_{m,n}$. Kwa\'sniewski in his papers \cite{akk1,akk2} posed the following problem, called the \emph{Cobweb Tiling Problem}, which was a starting point of the research whose results are being reported in the present note. \begin{problem} [Tiling] Suppose now that $F$ is an admissible sequence. Under which conditions does any $F$-box $V_{m,n}$ designated by sequence $F$ have a tiling? Find effective characterizations and/or find an algorithm to produce these tilings.
\end{problem} \begin{figure}[ht] \begin{center} \includegraphics[width=80mm]{Tiling.eps} \caption{Sample 3D and 2D tilings.} \end{center} \end{figure} In the next sections we propose a certain family $\mathcal{T}_\lambda$ of sequences $F$. Then we prove that any $F$-box $V_{m,n}$, where $m,n\in\Nat$, designated by $F\in\mathcal{T}_\lambda$ has a tiling, and we give a construction of it. \subsection{Cobweb representation} In this section we recall \cite{akkmd2} that discrete $F$-boxes $V_{m,n}$ are unique codings representing \emph{Cobwebs}, introduced by Kwa\'sniewski \cite{akk1,akk2} as special graded posets. Any poset might be represented as a Hasse digraph and this approach to the tiling problem will be used throughout the paper. Next we shall consider partitions of $m$-dimensional boxes as partitions of cobwebs with $m$ levels into sub-cobwebs called blocks. In the following we quote some necessary notation of \emph{Cobwebs} adapted to the tiling problem. For more on \emph{Cobwebs} see the source papers \cite{akk1,akk2,akk4} and references therein. \begin{defn} Let $F$ be a natural numbers' valued sequence. Then a simple graph $\langle V, E\rangle$, such that $V = \bigcup_{k \leq s \leq n} \Phi_s$ and \begin{equation} E=\Big\{ \{ u,v \} : u\in\Phi_s \wedge v\in\Phi_{s+1} \wedge k\leq s < n \Big\} \end{equation} where $\Phi_s = \{1,2,...,s_F\}$ is called cobweb layer $\layer{k}{n}$. \end{defn} \begin{figure}[ht] \begin{center} \includegraphics[width=50mm]{Layer24.eps} \caption{Cobweb layer $\layer{2}{4}$ designated by $F$=Natural numbers. \label{fig:Layer}} \end{center} \end{figure} Suppose that we have a cobweb layer $\layer{k}{n}$ of $m$ levels $\Phi_s$, where $m = n-k+1$. Then any cobweb layer $\langle\phi_1\to\phi_m\rangle$ of $m$ levels $\phi_s$ such that \begin{equation} \phi_s \subseteq \Phi_s, \qquad |\phi_s| = s_F, \qquad s=1,2,...,m; \end{equation} is called a \emph{cobweb block} $P_m$ of layer $\layer{k}{n}$.
\vspace{0.2cm} Additionally, one considers cobweb blocks obtained via a permutation $\sigma$ of their levels' order as follows (Compare with Figure \ref{fig:Blocks}). \begin{figure}[ht] \begin{center} \includegraphics[width=80mm]{Blocks.eps} \caption{Example of cobweb blocks $P_3$ and $\sigma P_3$. \label{fig:Blocks}} \end{center} \end{figure} \begin{defn} Let a cobweb layer $\layer{k}{n}$ with $m$ levels $\Phi_s$ be given, where $m=n-k+1$. Then a cobweb block $P_m$ with $m$ levels $\phi_s$ such that \begin{equation} \phi_s \subseteq \Phi_s, \qquad |\phi_s| = (\sigma\cdot s)_F, \qquad s=1,2,...,m; \end{equation} where $\sigma$ is a permutation of the set $\{1_F,2_F,...,m_F\}$ is called a cobweb block of the form $\sigma P_m$. \end{defn} \begin{figure}[ht] \begin{center} \includegraphics[width=50mm]{Tiles2D.eps} \caption{$F$-Boxes of the form $\sigma V_2$ and cobweb blocks $\sigma P_2$. \label{fig:Tiles2D}} \end{center} \end{figure} While saying \emph{``a block $\sigma P_m$ of layer $\layer{k}{n}$''} we mean that the number of levels in the block and the layer is the same, i.e. $m = n - k + 1$, and each level of the block is a non-empty subset of the corresponding level in the layer. Assume that we have a cobweb layer $\layer{k}{n}$. A path $\pi$ from any vertex at the first level $\Phi_k$ to any vertex at the last level $\Phi_n$, such that $$ \pi = \{v_k, v_{k+1}, ..., v_{n}\}, \qquad v_s \in \Phi_s, \qquad s=k,k+1,...,n; $$ is noted as a \emph{maximal-path $\pi$} of $\layer{k}{n}$. In the same way we nominate a \emph{maximal-path} of a cobweb block $\sigma P_m$. \vspace{0.2cm} Let $C_{max}(A)$ denote the set of maximal-paths $\pi$ of a cobweb block $A$. (Compare with [4]). Two cobweb blocks $A, B$ of layer $\layer{k}{n}$ are max-disjoint, or disjoint for short (\cite{akk1,akk2}), if and only if their sets of maximal-paths are disjoint, i.e. $C_{max}(A) \cap C_{max}(B) = \emptyset$. The cardinality of the set $C_{max}(A)$ is called the \emph{size} of block $A$.
\begin{figure}[ht] \begin{center} \includegraphics[width=110mm]{Tiles3D.eps} \caption{$F$-Boxes of the form $\sigma V_3$ and cobweb blocks $\sigma P_3$. \label{fig:Tiles3D}} \end{center} \end{figure} \begin{observ}[\cite{akkmd2}] Let $F$ be a natural numbers' valued sequence and $k,n\in\Nat$. Then any $F$-box $V_{m,n}$ is uniquely represented by the cobweb layer $\layer{k}{n}$ and vice versa, i.e., \begin{equation} V_{m,n} \Leftrightarrow \layer{k}{n}. \end{equation} where $k = n-m+1$. \end{observ} \noindent {\it{\textbf{Proof.}}} Consider a cobweb layer $\layer{k}{n}$ of $m$ levels $\Phi$ and an $m$-dimensional box $V_{m,n}$. Observe that any maximal-path $\pi = (v_1,v_2,...,v_m)$ of the layer corresponds to exactly one point $x = (x_1,x_2,...,x_m)$ of the $m$-dimensional box $V_{m,n}$, and vice versa, i.e. $$ [s_F] \ni x_s \Leftrightarrow v_s \in [s_F],\qquad s=1,2,...,m; $$ And the number of these maximal-paths and points is the same (Compare with \cite{akk4} and \cite{akkmd2}) i.e. $$ |C_{max}(\layer{k}{n})| = |V_{m,n}| $$ where $m=n-k+1$. $\blacksquare$ \begin{figure}[ht] \begin{center} \includegraphics[width=100mm]{Tiling2.eps} \caption{Correspondence between tiling of $F$-box $V_{3,4}$ and $\layer{3}{4}$.} \end{center} \end{figure} \vspace{0.2cm} Next, we draw the terminology of $F$-boxes' partitions back to the cobwebs' language, used in the next part of this note. Take any cobweb layer $\layer{k}{n}$ with $m$ levels. Then a set of $\lambda$ pairwise disjoint cobweb blocks $A_1,A_2,...,A_\lambda$ of $m$ levels, such that each block's size is equal to $\kappa$ and the union of $C_{max}(A_1), C_{max}(A_2),...,C_{max}(A_\lambda)$ is equal to the set $C_{max}(\layer{k}{n})$, is called a \emph{cobweb $\kappa$-partition}. Finally, a $\kappa$-partition of layer $\layer{k}{n}$ with $m$ levels into cobweb blocks of the form $\sigma P_m$ is called a \emph{cobweb tiling}. \vspace{0.2cm} \noindent Let us sum it up with the following Table \ref{tab:equiv}.
\renewcommand{\arraystretch}{1.3} \begin{table}[ht] \caption{Equivalent notation and terminology. \label{tab:equiv}} \begin{center} \begin{tabular}{|l |l | l |} \hline & \textbf{Cobwebs} & \textbf{$F$-boxes} \\ \hline 1. & Maximal-path $(v_1,...,v_m) \in \layer{k}{n}$ & Point $(x_1,...,x_m) \in V_{m,n}$ \\ 2. & Cobweb layer $\layer{k}{n}$ & $F$-box $V_{m,n}$ \\ 3. & Cobweb block $\sigma P_m \subset \layer{k}{n}$ & Sub-box $\sigma V_m \subset V_{m,n}$ \\ 4. & Tiling of cobweb layer & Tiling of $F$-box \\ & where k = n-m+1. & \\ \hline \end{tabular} \end{center} \end{table} \section{Cobweb tiling sequences} Recall that for some \emph{$F$-admissible} sequences there is no method to tile certain $F$-boxes $V_{m,n}$ or, accordingly, cobweb layers $\layer{k}{n}$ (no tiling property). For example see Figure \ref{fig:contr}, which comes from \cite{md1}. In the next part of this note, we define and consider \textbf{only} sequences \textbf{with the tiling property}. \begin{figure}[ht] \begin{center} \includegraphics[width=100mm]{contr.eps} \caption{Layer $\layer{5}{7}$ that does not have a tiling with blocks $\sigma P_3$. \label{fig:contr}} \end{center} \end{figure} \begin{defn} A cobweb admissible sequence $F$ such that for any $m,n\in\Nat$ the cobweb layer $\layer{k}{n}$ has a tiling is called a cobweb tiling sequence. \end{defn} Let $\mathcal{T}$ denote the family of all cobweb tiling sequences. Characterization of the whole family $\mathcal{T}$ is still an open problem. Nevertheless we define a certain subfamily $\mathcal{T}_\lambda \subset \mathcal{T}$ of non-trivial cobweb tiling sequences. This family contains among others Natural and Fibonacci numbers, Gaussian integers and others.
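\vspace{0.2cm} \noindent To see why, for instance, the first two of these sequences fit, note the decompositions $$ (m+k)_F = 1\cdot k_F + 1\cdot m_F \quad \textrm{for } F = \textrm{Natural numbers}, $$ $$ F_{m+k} = F_{m+1}\cdot F_k + F_{k-1}\cdot F_m \quad \textrm{for } F = \textrm{Fibonacci numbers}, $$ the latter being the classical Fibonacci addition formula. In both cases $(m+k)_F$ is a linear combination of $k_F$ and $m_F$ with coefficients in $\NatZero$, which is exactly the defining property of the family $\mathcal{T}_\lambda$ introduced below; in the first case the recurrence (\ref{eq:rec}) reduces to the Pascal recurrence for binomial coefficients.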
\begin{notation} Let $\mathcal{T}_\lambda$ denote the family of natural numbers' valued sequences $F\equiv\{n_F\}_{n\geq 1}$ such that the $n$-th term of $F$ satisfies the following \begin{equation}\label{eq:TLambda} \forall~m,k\in\Nat, \quad n_F = (m+k)_F = \lambda_K \cdot k_F \ + \ \lambda_M \cdot m_F \end{equation} while $1_F\in\Nat$ and for certain coefficients $\lambda_K\equiv\lambda_K(k,m)\in\NatZero$ and $\lambda_M\equiv\lambda_M(k,m)\in\NatZero$. \end{notation} Note that the coefficients $\lambda_K$ and $\lambda_M$ might be considered as natural numbers' with zero valued infinite matrices $\lambda_K \equiv [k_{ij}]_{i,j\geq 1}$ and $\lambda_M \equiv [m_{ij}]_{i,j\geq 1}$. Moreover the sequence $F\equiv\{n_F\}_{n\geq 0}$ is uniquely designated by these matrices $\lambda_K, \lambda_M$ and the first element $1_F\in\mathbb{N}$. \begin{corol}\label{cor:lambdamult} Let a sequence $F\in\mathcal{T}_\lambda$ with its coefficients' matrices $\lambda_K,\lambda_M$ and a composition $\vec{\beta}=\langle b_1,b_2,...,b_k \rangle$ of the number $n$ into $k$ nonzero parts be given. Then the following takes place \begin{equation} n_F = \sum_{s=1}^k \lambda_s(\vec{\beta}) \cdot (b_s)_F \end{equation} where \begin{equation}\label{eq:coeffmulti} \lambda_s(\vec{\beta}) = \lambda_K(b_s,b_{s+1}+...+b_k)\prod_{i=1}^{s-1} \lambda_M(b_i,b_{i+1}+...+b_k) \end{equation} or equivalently \begin{equation}\label{eq:coeffmulti2} \lambda_s(\vec{\beta}) = \lambda_M(b_{s+1}+...+b_k,b_s)\prod_{i=1}^{s-1} \lambda_K(b_{i+1}+...+b_k,b_i). \end{equation} \end{corol} \noindent {\it{\textbf{Proof.}}} It is a straightforward algebraic induction exercise using property (\ref{eq:TLambda}) of the family $\mathcal{T}_\lambda$.
The first form (\ref{eq:coeffmulti}) of the coefficients $\lambda_s(\vec{\beta})$ comes from the following $$ \Big(b_1 + (n-b_1)\Big)_{\!F} \Rightarrow \Big(b_1 + b_2 + (n-b_1-b_2)\Big)_{\!F} $$ while the second one (\ref{eq:coeffmulti2}) from $$ \Big((n-b_k) + b_k\Big)_{\!F} \Rightarrow \Big((n-b_k-b_{k-1}) + b_{k-1} + b_k\Big)_{\!F} \qquad \blacksquare $$ \vspace{0.4cm} If we take a vector $\langle 1,1,...,1\rangle$ of $n$ ones, i.e. $b_s=1$ for any $s=1,2,...,n$, then we obtain an alternative formula to compute elements of the sequence $F$. \begin{corol} Let $F\in\mathcal{T}_\lambda$ be given. Then the $n$-th element of the sequence $F$ satisfies \begin{equation} n_F = 1_F\cdot \sum_{s=1}^n \lambda_K(1,n-s)\prod_{i=1}^{s-1}\lambda_M(1,n-i) \end{equation} for any $n\in\mathbb{N}$. \end{corol} \begin{corol} Let any sequence $F \in \mathcal{T}_\lambda$ be given. Then for any $n,k\in\mathbb{N}\cup\{0\}$ such that $n\geq k$, the $F$-nomial coefficients satisfy the recurrence identity below \begin{equation}\label{eq:rec} \fnomial{n}{k} = \lambda_K \fnomial{n-1}{k-1} + \lambda_M \fnomial{n-1}{k} \end{equation} \noindent where ${n \choose n}_F = {n \choose 0}_F = 1$. \end{corol} \noindent {\it{\textbf{Proof.}}} Take any $F\in\mathcal{T}_\lambda$ and $n\in\mathbb{N}\cup\{0\}$.
Then from (\ref{eq:TLambda}) of $\mathcal{T}_\lambda$ and for any $m,k\in\mathbb{N}\cup\{0\}$ such that $m+k=n$ we have that the $n$-th element of the sequence $F$ satisfies the following recurrence $$ n_F = (k+m)_F = \lambda_K\cdot k_F + \lambda_M\cdot m_F $$ \noindent Multiply both sides of the above equation by $\frac{(n-1)_F!}{k_F!\cdot m_F!}$ to get $$ \frac{n_F!}{k_F!\cdot m_F!} = \lambda_K\cdot \frac{(n-1)_F!}{(k-1)_F!\cdot m_F!} + \lambda_M\cdot \frac{(n-1)_F!}{k_F!\cdot(m-1)_F!} $$ \noindent And from Definition \ref{def:fnomial} of $F$-nomial coefficients we have $$ \fnomial{n}{k} = \lambda_K {n-1 \choose k-1}_F + \lambda_M \fnomial{n-1}{k} \quad \blacksquare $$ \vspace{0.4cm} It turns out that the recurrence formula (\ref{eq:rec}) gives us a method of generating tilings of any layer $\layer{k}{n}$ designated by a sequence $F\in\mathcal{T}_\lambda$. \begin{theoremn}\label{th:1} Let $F$ be a sequence of the $\mathcal{T}_\lambda$ family. Then $F$ is cobweb tiling. \end{theoremn} \begin{figure}[ht] \begin{center} \includegraphics[width=80mm]{Step0.eps} \caption{Picture of Theorem \ref{th:1} proof's idea. \label{fig:Step0}} \end{center} \end{figure} \noindent {\it{\textbf{Proof.}}} Suppose that we have a cobweb layer $\layer{k+1}{n}$ with $m$ levels designated by a sequence $F$ from the $\mathcal{T}_\lambda$ family and $m = n-k$. Consider the $\Phi_n$ level with $n_F$ vertices. From (\ref{eq:TLambda}) we have that the number of vertices at this level is the sum of $\lambda_M \cdot m_F$ and $\lambda_K \cdot k_F$. Therefore we separate them by cutting into two disjoint subsets as illustrated by Figure \ref{fig:Step0}; we deal first with the $\lambda_M \cdot m_F$ vertices in Step 1. Then we shall deal with the remaining $\lambda_K \cdot k_F$ ones in Step~2. \begin{figure}[ht] \begin{center} \includegraphics[width=55mm]{Step1.eps} \caption{Picture of Theorem \ref{th:1} proof's Step 1.
\label{fig:Step1}} \end{center} \end{figure} \vspace{0.2cm} {\it Step 1.} Temporarily we have $\lambda_M \cdot m_F$ fixed vertices on the $\Phi_n$ level to consider (Figure \ref{fig:Step1}). Let us cover them $\lambda_M$ times by the $m$-th level of a block $\sigma P_m$, which has exactly $m_F$ vertices. If $\lambda_M = 0$ we skip this step. What is left is the layer $\layer{k+1}{n-1}$ and we might eventually partition it with smaller disjoint blocks $\sigma P_{m-1}$ in the next induction step. \begin{figure}[ht] \begin{center} \includegraphics[width=110mm]{Step2.eps} \caption{Picture of Theorem \ref{th:1} proof's Step 2. \label{fig:Step2}} \end{center} \end{figure} {\it Step 2.} Consider now the second, complementary situation, where we have $\lambda_K \cdot k_F$ vertices on the $\Phi_n$ level being fixed (Figure \ref{fig:Step2}). If $\lambda_K = 0$ we skip this step. Observe that if we move this level lower than the $\Phi_{k+1}$ level, we obtain exactly $\lambda_K$ copies of the same layer $\layer{k}{n-1}$ to be partitioned with disjoint blocks of the form $\sigma P_m$. This ``\emph{move}'' operation is just a permutation $\sigma$ of the levels' order. \vspace{0.2cm} {\it Recapitulation.} The layer $\layer{k+1}{n}$ might be partitioned into $\sigma P_m$ blocks if $\layer{k+1}{n-1}$ might be partitioned into $\sigma P_{m-1}$ and $\layer{k}{n-1}$ into $\sigma P_m$ again. Continuing these steps by induction, we are left to prove that $\layer{k}{k}$ might be partitioned into $\sigma P_1$ blocks and $\layer{1}{m}$ into $\sigma P_m$ ones, which is trivial $\blacksquare$ \begin{observ} Let $F$ be a cobweb tiling sequence from the family $\mathcal{T}_\lambda$.
Then the number $\Big\{ {n \atop k } \Big\}_F^1 $ of different tilings of layer $\layer{k}{n}$, where $n,k\in\mathbb{N}$, $n,k\geq 1$, is equal to: \begin{equation} \label{eq:3} \bigg\{ {n \atop {k} } \bigg\}_F^1 = \frac{n_F!}{(m_F!)^{\lambda_M} \cdot ((k-1)_F!)^{\lambda_K}} \cdot \bigg( \bigg\{ {n-1 \atop k } \bigg\}_F^1 \bigg)^{\lambda_M} \cdot \bigg( \bigg\{ {n-1 \atop k-1 } \bigg\}_F^1 \bigg)^{\lambda_K} \end{equation} \noindent where $\Big\{ {n \atop n } \Big\}_F^1 = 1$ and $\Big\{ {n \atop 1 } \Big\}_F^1 = 1$. \end{observ} \noindent {\it{\textbf{Proof.}}} According to the steps of the proof of Theorem \ref{th:1} we might choose $m_F$ vertices $\lambda_M$ times at the $n$-th level and next $(k-1)_F$ vertices $\lambda_K$ times out of $n_F$ ones in $\frac{n_F!}{(m_F!)^{\lambda_M} \cdot ((k-1)_F!)^{\lambda_K}}$ ways. The next recurrent steps of the proof of Theorem \ref{th:1} result in formula (\ref{eq:3}) via the product rule of counting $\blacksquare$ \vspace{0.4cm} Note that $\Big\{ {n \atop k } \Big\}_F^1$ is not the number of all different tilings of the layer $\layer{k}{n}$, i.e. $\Big\{ {n \atop k } \Big\}_F^1 \leq \Big\{{n \atop k } \Big\}_F$ as computer experiments show \cite{md1}. There are many more tilings with blocks $\sigma P_m$. \section{Cobweb multi tiling} In this section a more general case of the tiling problem is considered. To this end we introduce the so-called multi $F$-nomial coefficients that count blocks of multi-block partitions. \begin{defn}\label{def:symbol} Let a natural numbers' valued sequence $F\equiv\{n_F\}_{n\geq 0}$ and a composition $\langle b_1,b_2,...,b_k\rangle$ of the number $n$ be given. Then the multi $F$-nomial coefficient is identified with the symbol \begin{equation} \fnomial{n}{b_1,b_2,...,b_k} = \frac{n_F!}{(b_1)_F!\cdot ... \cdot (b_k)_F!} \end{equation} while $n = b_1+b_2+...+b_k$. \end{defn} \begin{corol} Let $F$ be any cobweb admissible sequence.
Then the value of the multi $F$-nomial coefficient is a natural number or zero, i.e. \begin{equation} \fnomial{n}{b_1,b_2,...,b_k} \in \mathbb{N} \cup \{0\} \end{equation} for any $n,b_1,b_2,...,b_k\in\mathbb{N}$ such that $n=b_1+b_2+...+b_k$. \end{corol} For the sake of the forthcoming combinatorial interpretation of multi $F$-nomial coefficients we introduce the following notation. \begin{defn} Let a cobweb layer $\layer{1}{n}$ of $n$ levels $\Phi_s$ and a composition $\langle b_1,b_2,...,b_k\rangle$ of the number $n$ into $k$ non-zero parts be given. Then any cobweb layer $\langle\phi_1\to\phi_n\rangle$ of $n$ levels $\phi_s$ such that \begin{equation} \phi_s \subseteq \Phi_s, \qquad s=1,2,...,n; \end{equation} where the cardinality of $\phi_s$ is equal to the $s$-th element of the vector $L$ given as follows $$ L = \sigma \cdot \langle 1, 2, ..., b_1, 1,2, ..., b_2, ..., 1,2, ..., b_k \rangle $$ for any permutation $\sigma$ of the set $[n]$, is called a cobweb multi-block of the form $\sigma P_{b_1,b_2,...,b_k}$. \end{defn} \begin{figure}[ht] \begin{center} \includegraphics[width=100mm]{multiblock.eps} \caption{Examples of multi blocks $P_{4,2,1}$ and $\sigma P_{4,2,1}$. \label{fig:multiblock}} \end{center} \end{figure} In the case of $\sigma = id$ we write for short $\sigma P_{b_1,b_2,...,b_k} = P_{b_1,b_2,...,b_k}$. Compare with Figure \ref{fig:multiblock}. \vspace{0.4cm} \noindent \textbf{Example 1} \\ \noindent Take the sequence $F$ of consecutive natural numbers, i.e. $n_F = n$, and the cobweb layer $\layer{1}{4}$ designated by $F$. A sample multi tiling of the layer $\layer{1}{4}$ with the help of $\fnomial{4}{2,2} = 6$ disjoint multi blocks of the form $\sigma P_{2,2}$ is shown in Figure \ref{fig:sampletilingmulti}. \begin{figure}[ht] \begin{center} \includegraphics[width=110mm]{sampletilingmulti.eps} \caption{Sample multi tiling of layer $\layer{1}{4}$ from Example 1.
\label{fig:sampletilingmulti}} \end{center} \end{figure} \begin{observ} Let $\layer{1}{n}$ be a cobweb layer and $\langle b_1,...,b_k\rangle$ be a composition of the number $n$ into $k$ nonzero parts. Then the value of the multi $F$-nomial coefficient $\fnomial{n}{b_1,b_2,...,b_k}$ is equal to the number of blocks that form the cobweb $\kappa$-partition, where $\kappa = |C_{max}(P_{b_1,...,b_k})|$. \end{observ} \noindent {\it{\textbf{Proof.}}} The proof is a natural extension of Observation 3 in \cite{akk1,akk2}. The number of maximal paths in the layer $\layer{1}{n}$ is equal to $n_F!$. However, the number of maximal paths in any multi block $\sigma P_{b_1,b_2,...,b_k}$ is \break $\nobreak{(b_1)_F!\cdot(b_2)_F!\cdot...\cdot(b_k)_F!}$. Thus the number of such blocks is equal to $$ \frac{n_F!}{(b_1)_F!\cdot(b_2)_F!\cdot...\cdot(b_k)_F!} $$ \vspace{0.2cm} \noindent where $n=b_1+b_2+...+b_k$ for any $n,k\in\mathbb{N}$ $\blacksquare$ \vspace{0.6cm} \noindent Of course for $k=2$ we have \begin{equation} {n \choose {b,n-b}}_F \equiv {n \choose b}_F = {n \choose {n-b}}_F \end{equation} \vspace{0.2cm} \noindent \textbf{Note.} For any permutation $\sigma$ of the set $[k]$ the following holds \begin{equation} {n \choose {b_1,b_2,...,b_k}}_F = {n \choose {b_{\sigma 1},b_{\sigma 2},...,b_{\sigma k}}}_F \end{equation} \vspace{0.2cm} \noindent as is obvious from Definition \ref{def:symbol} of the multi $F$-nomial symbol, i.e. $$ \frac{n_F!}{(b_1)_F!\cdot(b_2)_F!\cdot...\cdot(b_k)_F!} = \frac{n_F!}{(b_{\sigma 1})_F!\cdot(b_{\sigma 2})_F!\cdot...\cdot(b_{\sigma k})_F!} $$ \vspace{0.4cm} \noindent Let us also observe that for any natural $n,k$ and $b_1+...+b_m = n-k$ the following holds \begin{equation} \label{eq:mult1} \fnomial{n}{k} \cdot \fnomial{n-k}{b_1,b_2,...,b_m} = \fnomial{n}{k,b_1,...,b_m} \end{equation} \begin{corol} Let $F\in\mathcal{T}_\lambda$ and a composition $\vec{\beta}=\langle b_1,...,b_k\rangle$ of the number $n$ into $k$ parts be given.
Then the multi $F$-nomial coefficients satisfy the following recurrence relation \begin{equation} \fnomial{n}{b_1,b_2,...,b_k} = \sum_{s=1}^k \lambda_s(\vec{\beta}) \cdot \fnomial{n-1}{b_1,...,b_{s-1},b_s-1,b_{s+1},...,b_k} \end{equation} for the coefficients $\lambda_s(\vec{\beta})$ from (\ref{eq:coeffmulti}), for any $n= b_1 +...+b_k$, with $\fnomial{n}{n, 0, ... ,0} = 1$. \end{corol} \noindent {\it{\textbf{Proof.}}} Take any $F\in\mathcal{T}_\lambda$ and a composition $\vec{\beta}=\langle b_1,...,b_k\rangle$ of the number $n$. Then from Corollary \ref{cor:lambdamult} we have that for certain coefficients $\lambda_s(\vec{\beta})$ the $n$-th element of the sequence $F$ satisfies $$ n_F = \sum_{s=1}^k \lambda_s(\vec{\beta}) \cdot (b_s)_F $$ \noindent If we multiply both sides by $\frac{(n-1)_F!}{(b_1)_F!\cdot ... \cdot(b_k)_F!}$ then we obtain $$ \fnomial{n}{b_1,...,b_k} = \sum_{s=1}^k \lambda_s(\vec{\beta}) \frac{(n-1)_F!}{(b_1)_F!\cdot...\cdot(b_{s-1})_F!(b_s-1)_F!(b_{s+1})_F!\cdot ...\cdot(b_k)_F!} $$ Hence the thesis $\blacksquare$ \begin{theoremn}\label{th:multi} Let any sequence $F \in \mathcal{T}_\lambda$ be given. Then the sequence $F$ is cobweb multi tiling, i.e. any layer $\layer{1}{n}$ might be partitioned into multi-blocks of the form $\sigma P_{b_1,b_2,...,b_k}$ such that $b_1+...+b_k=n$. \end{theoremn} \noindent {\it{\textbf{Proof.}}} Take any cobweb layer $\layer{1}{n}$ designated by a sequence $F\in\mathcal{T}_\lambda$ and a number $k\in\mathbb{N}$. We need to partition the layer into disjoint multi blocks of the form $\sigma P_{b_1,b_2,...,b_k}$. \begin{figure}[ht] \begin{center} \includegraphics[width=80mm]{proof_multi.eps} \caption{Picture of the idea of the proof of Theorem \ref{th:multi}. \label{fig:proof}} \end{center} \end{figure} \vspace{0.2cm} \noindent Consider the level $\Phi_n$ with $n_F$ vertices.
From Corollary \ref{cor:lambdamult} we have that the number of vertices at this level is the following sum $$ n_F = \sum_{s=1}^{k}{\lambda_s(\vec{\beta}) \cdot (b_s)_F} $$ for certain coefficients $\lambda_s(\vec{\beta})$, where $1\leq s \leq k$ and $\vec{\beta} = \langle b_1,b_2,...,b_k\rangle$. \vspace{0.2cm} \noindent Therefore let us separate these $n_F$ vertices by cutting them into $k$ disjoint subsets, as illustrated by Fig. \ref{fig:proof}, and cover first $\lambda_1\cdot(b_1)_F$ vertices in Step 1, then $\lambda_2\cdot(b_2)_F$ ones in Step 2, and so on up to the last $\lambda_k\cdot(b_k)_F$ vertices to consider in the last $k$-th step. If any $\lambda_i = 0$ we skip the $i$-th step. \vspace{0.2cm} {\it Step 1.} Temporarily we have $\lambda_1\cdot(b_1)_F$ fixed vertices at level $\Phi_n$ to consider. Let us cover them $\lambda_1$ times by the $b_1$-th level of the block $P_{b_1,b_2,...,b_k}$, which has exactly $(b_1)_F$ vertices. What is left is the layer $\layer{1}{n-1}$ and we might partition it with smaller disjoint blocks $\sigma P_{b_1-1,b_2,...,b_k}$ in the next induction step. \vspace{0.2cm} {\it Note.} In the next induction steps we use smaller blocks $\sigma P$ without the levels which have already been used in previous steps (the disjointness of blocks condition). \vspace{0.2cm} {\it Step 2.} Consider now the second situation, where we have $\lambda_2\cdot(b_2)_F$ vertices at level $\Phi_n$ being fixed. We cover them $\lambda_2$ times by the $(b_1+b_2)$-th level of the block $P_{b_1,b_2,...,b_k}$, which has $(b_2)_F$ vertices. Then we obtain the smaller layer $\layer{1}{n-1}$ to be partitioned with blocks $\sigma P_{b_1,b_2-1,b_3,...,b_k}$. \vspace{0.2cm} \noindent And so on up to ... \vspace{0.2cm} {\it Step $k$.} Analogously to the previous steps, we cover the last $\lambda_k\cdot(b_k)_F$ vertices by the last $(b_1+b_2+...+b_k)=n$-th level of the block $P_{b_1,b_2,...,b_k}$, obtaining the smaller layer $\layer{1}{n-1}$ to be partitioned with blocks $\sigma P_{b_1,...,b_{k-1},b_k-1}$.
\vspace{0.2cm} {\it Conclusion.} \noindent The layer $\layer{1}{n}$ might be partitioned into blocks $\sigma P_{b_1,b_2,...,b_k}$ if $\layer{1}{n-1}$ might be partitioned into $\sigma P_{b_1-1,b_2,...,b_k}$ and $\layer{1}{n-1}$ into $\sigma P_{b_1,b_2-1,b_3,...,b_k}$ again, and so on up to the layer $\layer{1}{n-1}$ which might be partitioned into $\sigma P_{b_1,...,b_{k-1},b_k-1}$. Continuing these steps by induction, we are left to prove that $\layer{1}{k}$ might be partitioned into blocks $\sigma P_{1,1,...,1}$ or $\layer{1}{1}$ by $\sigma P_{1,0,...,0}$ ones, which is trivial. $\blacksquare$ \section{Family $\mathcal{T}_\lambda(\alpha,\beta)$ of cobweb tiling sequences} In this section a specific family of cobweb tiling sequences $F\in\mathcal{T}_\lambda$ is presented as an exemplification of a possible source of such sequences. We assume that the coefficients $\lambda_K$ and $\lambda_M$ of $F\in\mathcal{T}_\lambda$ take the form \begin{equation} \lambda_M(k,m) = \alpha^k \qquad \lambda_K(k,m) = \beta^m \end{equation} while $\alpha,\beta\in\mathbb{N}$. \begin{notation} Let $\mathcal{T}_\lambda(\alpha,\beta)$ denote a family of natural numbers' valued sequences $F\equiv\{n_F\}_{n\geq 0}$ constituted by the $n$-th coefficients of the generating function $\mathcal{F}(x)$ expansion, i.e. $n_F = [x^n]\mathcal{F}(x)$, where \begin{equation}\label{eq:form} \mathcal{F}(x) = 1_F\cdot \frac{x}{(1-\alpha x)(1-\beta x)} \end{equation} for certain $\alpha,\beta\in \mathbb{N}\cup\{0\}$ and $1_F \in \mathbb{N}$.
\end{notation} \begin{enumerate} \item If ($\alpha = \beta$), then $\mathcal{F}(x) = 1_F\cdot \frac{x}{1-\alpha x} + \alpha x \mathcal{F}(x)$ which leads to \begin{equation}\label{eq:aa} n_F = 1_F\cdot n \cdot \alpha^{n-1} \qquad n\geq 1 \end{equation} \item If ($\alpha \neq \beta$), then $\mathcal{F}(x) = \frac{1_F}{\alpha-\beta}\left( \frac{1}{1-\alpha x} - \frac{1}{1 - \beta x} \right)$ gives us \begin{equation}\label{eq:ab} n_F = \frac{1_F}{\alpha - \beta}\left( \alpha^n - \beta^n \right) \qquad n\geq 1 \end{equation} \end{enumerate} \begin{proposition}\label{prop:sum} Let $F\in\mathcal{T}_\lambda(\alpha,\beta)$ and a composition $\vec{b} = \langle b_1,b_2,...,b_k\rangle$ of the number $n$ into $k$ non-zero parts be given. Then the $n$-th element of the sequence $F$ satisfies the following recurrence identity \begin{equation}\label{eq:sum} n_F = \left( \sum_{s=1}^{k} b_s \right)_{\!\!F} = \sum_{s=1}^k \lambda_s(\vec{b}) \cdot (b_s)_F \end{equation} where $$ \lambda_s(\vec{b}) = \alpha^{b_{s+1} + ... + b_{k}}\cdot \beta^{b_1+...+b_{s-1}} $$ for any $n=b_1+...+b_k$. \end{proposition} \noindent {\it{\textbf{Proof.}}} Take any composition $\vec{b} = \langle b_1,b_2,...,b_k\rangle$ of the number $n\in\mathbb{N}$ into $k$ nonzero parts, i.e. $b_1+b_2+...+b_k=n$. \begin{enumerate} \item If ($\alpha = \beta$) then from (\ref{eq:aa}) $ \left( \sum_{s=1}^k b_s \right)_{\!\!F} = 1_F \left( \sum_{s=1}^k b_s \right) \cdot \alpha^{n-1} = \sum_{s=1}^k 1_F b_s \alpha^{b_s-1} \alpha^{n-b_s} = $ $ = \sum_{s=1}^k (b_s)_F \alpha^{n-b_s}$ \item If ($\alpha \neq \beta$) then from (\ref{eq:ab}) $\left( \sum_{s=1}^k b_s \right)_{\!\!F} = \frac{1_F}{\alpha-\beta}\alpha^{b_1+\sum_{s=2}^k b_s} - \frac{1_F}{\alpha-\beta}\beta^{b_k + \sum_{s=1}^{k-1} b_s } = A + B $ \noindent Next, denote $S_{\pm}(m)$ for $1\leq m\leq k-1$ such that $S_{+}(m) + S_{-}(m) = 0$ as follows $ S_{\pm}(m) = \pm\frac{1_F}{\alpha-\beta} \alpha^{\sum_{s=m+1}^k b_s} \cdot \beta^{\sum_{s=1}^m b_s}$.
\noindent Then observe that if we add to $A+B$ the sum of $S_{\pm}(m)$ for $1\leq m\leq k-1$, i.e. $ A + B = A + B + \sum_{1\leq j\leq k-1} S_{+}(j) + S_{-}(j) $ \noindent then we obtain $ \left\{ \begin{array}{l} A + S_{-}(1) = (b_1)_F \cdot \alpha^{\sum_{s=2}^k b_s} \beta^{0} \\ S_{+}(1) + S_{-}(2) = (b_2)_F \cdot \alpha^{\sum_{s=3}^k b_s} \cdot \beta^{b_1} \\ ... \\ S_{+}(k-1) + B = (b_k)_F \cdot \alpha^{0} \cdot \beta^{\sum_{s=1}^{k-1} b_s} \\ \end{array} \right. $ \noindent And finally \\ $ \left( \sum_{s=1}^k b_s \right)_{\!\!F} = A + B = \sum_{s=1}^k (b_s)_F\cdot \alpha^{b_{s+1}+...+b_k}\beta^{b_1+...+b_{s-1}}$ $\blacksquare$ \end{enumerate} \vspace{0.2cm} \noindent \textbf{Note.} If $k = 2$ then for any $m,b\in\mathbb{N}\cup\{0\}$ we have \begin{equation}\label{eq:two} (m+b)_F = \lambda_M m_F + \lambda_b b_F = \alpha^b m_F + \beta^m b_F \end{equation} Let us compare the above with condition (\ref{eq:TLambda}) for sequences that are cobweb tiling from the family $\mathcal{T}_\lambda$, and let us sum this up with the following corollary. \begin{corol} Let the family of sequences $\mathcal{T}_\lambda(\alpha,\beta)$ and the family $\mathcal{T}_\lambda$ of cobweb tiling sequences be given. Then the following takes place \begin{equation} \mathcal{T}_\lambda(\alpha,\beta) \subset \mathcal{T}_\lambda \end{equation} thus any sequence $F\in\mathcal{T}_\lambda(\alpha,\beta)$ is cobweb tiling. \end{corol} \noindent {\it{\textbf{Proof.}}} We only need to show that $\mathcal{T}_\lambda(\alpha,\beta) \neq \mathcal{T}_\lambda$. As an example we show that the sequence $F$ of Fibonacci numbers is cobweb tiling of the form $\mathcal{T}_\lambda$ but does not belong to the family $\mathcal{T}_\lambda(\alpha,\beta)$. One can show that the $n$-th element of the Fibonacci numbers satisfies \begin{equation} n_F = \frac{1}{\alpha - \beta}\left( \alpha^n - \beta^n \right) \end{equation} but $\alpha = \frac{1 + \sqrt{5}}{2}$ and $\beta = \frac{1-\sqrt{5}}{2}$ are not natural numbers -- compare with (\ref{eq:form}).
However its elements satisfy another equivalent relation for any $m,k\in\mathbb{N}\cup\{0\}$ \begin{equation} (k+m)_F = (m-1)_F\cdot k_F + (k+1)_F\cdot m_F \end{equation} \vspace{0.2cm} \noindent Therefore $F\in\mathcal{T}_\lambda$ and $F\notin\mathcal{T}_\lambda(\alpha,\beta)$. Hence the thesis $\blacksquare$ \begin{corol} Let $F\in\mathcal{T}_\lambda(\alpha,\beta)$ be given. Then for any $n,k\in\mathbb{N}\cup\{0\}$ the following holds \begin{equation} (k\cdot n)_F = \bigg( \underbrace{n + n + ... + n}_k \bigg)_{\!\!F} = n_F \cdot \sum_{s=1}^k \alpha^{(k-s)n} \beta^{(s-1)n} \end{equation} \end{corol} \vspace{0.4cm} From Proposition \ref{prop:sum} we obtain another explicit formula for the $n$-th element of a sequence $F\in\mathcal{T}_\lambda(\alpha,\beta)$, i.e. \begin{equation} n_F = (n \cdot 1)_F = 1_F\sum_{s=1}^n \alpha^{(n-s)} \beta^{(s-1)}. \end{equation} \section{Examples of cobweb tiling sequences} \label{sect:examples} In this section we are going to show a few examples of cobweb tiling sequences. Throughout this part we shall consistently use the convention: $n = k + m$. \subsection{Examples of the $\mathcal{T}_\lambda(\alpha,\beta)$ family} \begin{enumerate} \item \textbf{Natural numbers} \\ Putting $\alpha=\beta= 1$ gives us the sequence $n_F = 1_F\cdot n$ with the recurrence $(k+m)_F = k_F + m_F$.
If $1_F=1$ then we obtain the natural numbers with the binomial coefficients' recurrence: $$ {n \choose k} \equiv \fnomial{n}{k} = \fnomial{n-1}{k-1} + \fnomial{n-1}{k} $$ \item \textbf{Powers' sequence} \\ If $\alpha = 0, \beta = 1_F = q$ then $n_F = q^n$ and $(k+m)_F = q^m\cdot k_F$ with its $F$-nomial coefficients' recurrence $$ \fnomial{n}{k} = q^m \fnomial{n-1}{k-1} = q^k \fnomial{n-1}{m-1} $$ \item \textbf{Gaussian numbers} \\ If $\alpha = 1, \beta = q$ then $n_F = \frac{1_F}{1-q}\left( 1 - q^n \right)$ and $(k+m)_F = k_F + q^k m_F$ with the recurrence for Gaussian coefficients $$ \fnomialF{n}{k}{q} \equiv \fnomial{n}{k} = \fnomial{n-1}{k-1} + q^k \fnomial{n-1}{k} $$ \item \textbf{Modified Gaussian integers} \label{ex:modgaus}\\ For $\alpha = \beta = q\in\mathbb{N}$ we have $n_F = 1_F\cdot n \cdot q^{n-1}$ and $(k+m)_F = q^m k_F + q^k m_F$ with the recurrence $$ \fnomial{n}{k} = q^m \fnomial{n-1}{k-1} + q^k \fnomial{n-1}{k} $$ \end{enumerate} \subsection{Fibonacci numbers} In the following, we prove that the sequence of Fibonacci numbers is a tiling sequence, i.e. any cobweb layer $\layer{k}{n}$ might be partitioned into blocks of the form $\sigma P_m$. \begin{defn} Let $F(p)$ be a natural numbers' valued sequence such that for any $k,m\in\mathbb{N}\cup\{0\}$ its elements satisfy the following relation \begin{equation}\label{eq:fib} (k+m)_F = (m-1)_F\cdot k_F + (k+1)_F\cdot m_F \end{equation} while $1_F = 1$ and $2_F = p$. \end{defn} \vspace{0.2cm} \noindent From Theorem \ref{th:1} and condition (\ref{eq:TLambda}) on the sequence $\mathcal{T}_\lambda$, we have that $F(p)$ is cobweb tiling. Moreover, it is easy to see that the explicit formula for the $n$-th element of $F(p)$ is \begin{equation} n_F = \frac{1}{\sqrt{2_F^2 + 4}}\left( \phi_1^n - \phi_2^n \right) \end{equation} where $\phi_{1,2} = \frac{2_F\pm\sqrt{2_F^2+4}}{2}$ and $1_F = 1$ while $2_F = p$.
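As a quick numerical sanity check, relation (\ref{eq:fib}) can be tested by machine. The following Python sketch (an illustration of mine, not part of the paper's software) generates $F(p)$ from the special case $m=2$, $k=n-2$ of (\ref{eq:fib}), i.e. $n_F = 2_F\cdot (n-1)_F + (n-2)_F$, and then verifies the full relation for all indices in range:

```python
# Sketch only: generate F(p) from the special case n_F = p*(n-1)_F + (n-2)_F
# of relation (eq:fib), with 0_F = 0, 1_F = 1, 2_F = p, and then verify
# (k+m)_F = (m-1)_F * k_F + (k+1)_F * m_F wherever the indices are defined.

def F(p, length=13):
    seq = [0, 1, p]                       # 0_F, 1_F, 2_F
    while len(seq) < length:
        seq.append(p * seq[-1] + seq[-2])
    return seq[:length]

def satisfies_relation(seq):
    L = len(seq)
    return all(
        seq[k + m] == seq[m - 1] * seq[k] + seq[k + 1] * seq[m]
        for k in range(L - 1)             # (k+1)_F must exist
        for m in range(1, L - k)          # (k+m)_F must exist
    )

print(F(1))                                              # the Fibonacci numbers
print(all(satisfies_relation(F(p)) for p in (1, 2, 3, 4)))
```

The generated prefixes agree with the sample sequences $F(1),...,F(4)$ listed in the examples.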
\vspace{0.4cm} \noindent \textbf{Examples of $F(p) = \{n_F\}_{n\geq 0}$} \begin{itemize} \item $F(1) \equiv (0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ...) \equiv$ Fibonacci numbers \item $F(2) \equiv (0, 1, 2, 5, 12, 29, 70, 169, 408, 985, 2378, 5741, 13860,...)$ \item $F(3) \equiv (0, 1, 3, 10, 33, 109, 360, 1189, 3927, 12970, 42837,...)$ \item $F(4) \equiv ( 0, 1, 4, 17, 72, 305, 1292, 5473, 23184, 98209, 416020,...)$ \end{itemize} \begin{corol} The sequence of Fibonacci numbers is cobweb tiling. \end{corol} \noindent {\it{\textbf{Proof.}}} If we put $1_F=2_F=1$ in (\ref{eq:fib}) then we obtain the \textbf{Fibonacci numbers} and the well-known recurrence relation for Fibonomial coefficients \cite{eks} \begin{equation} \fnomial{n}{k} = (m-1)_F\fnomial{n-1}{k-1} + (k+1)_F\fnomial{n-1}{k} \blacksquare \end{equation} \begin{observ} Let $F$ be a sequence of the form $F(p)$. Take any composition $\langle b_1,b_2,...,b_k \rangle$ of a number $n$ into $k$ nonzero parts. Then the $n$-th element of $F$ satisfies \begin{equation} n_F = \sum_{s=1}^k (b_s)_F\cdot\prod_{i=1}^{s-1}(b_i+1)_F\cdot(b_{s+1}+...+b_k-1)_F \end{equation} while $n,k\in\Nat$. \end{observ} \noindent {\it{\textbf{Proof.}}} It is a straightforward algebraic exercise using an idea from the proof of Corollary \ref{cor:lambdamult}. If we use the substitution $m = a + b$ in formula (\ref{eq:fib}) then we obtain the case of $3$ terms $$ (k + m)_F = (k + a + b)_F = \lambda_K k_F + \lambda_a a_F + \lambda_b b_F $$ where $\lambda_K\! =\! (a+b-1)_F$, $\lambda_a\! =\! (k + 1)_F\cdot(b - 1)_F$ and $\lambda_b\! =\! (k+1)_F\cdot(a+1)_F$. And so on by induction $\blacksquare$ \section{Cobweb tiling problem as a particular case of the clique problem} Recall that the clique problem is the problem of determining whether a graph contains a clique of at least a given size $d$. In this section, we show that the cobweb tiling problem might be considered as a clique problem in a specific graph.
Namely, a reformulation of the $F$-cobweb, i.e. $F$-boxes, tiling problem into a clique problem of a graph specially invented for that purpose is proposed. \vspace{0.4cm} Suppose that we have a cobweb layer $\layer{k}{n}$ designated by any sequence $F$. Let $B\left(\layer{k}{n}\right)$ denote the family of all blocks of the form $\sigma P_m$, where $m=n-k+1$, of that layer $\layer{k}{n}$, and assume that $b_{k,n}$ is the cardinality of that family, i.e. $b_{k,n} = |B\left(\layer{k}{n}\right)|$. \begin{observ} The number $b_{k,n}$ is given by the following formula $$ b_{k,n} = \sum_{\sigma\in S_m}{\prod_{s=1}^{m}{{ {(k+s-1)_F} \choose {(\sigma\cdot s)_F} }}} $$ where $m=n-k+1$ and $S_m$ is the set of permutations $\sigma$ of the set $\break\{k_F,(k+1)_F,...,n_F\}$. \end{observ} \noindent {\it{\textbf{Proof.}}} \noindent Suppose that we have the layer $\layer{k}{n}$. Take any permutation $\sigma\in S_m$ of the $m$ levels of the block $\sigma P_m$. Let $s\in[m]$; for such an order of levels, cover $(\sigma\cdot s)_F$ vertices by the $s$-th element of the block $\sigma P_m$ out of all $(k+s-1)_F$ vertices of the $(k+s-1)$-th level in the layer $\layer{k}{n}$. Finally, sum the above over all permutations $\sigma$ $\blacksquare$ \vspace{0.4cm} Let us now define a simple undirected graph $G(\layer{k}{n})=(V,E)$ such that the set of vertices is $V\equiv B\left(\layer{k}{n}\right)$, i.e. for any cobweb block $\beta$ we have that $$ \beta\in B\left(\layer{k}{n}\right) \Leftrightarrow v_\beta\in V $$ while the set of edges $E$ is defined as follows $$ \{ v_\alpha, v_\beta \} \in E \Leftrightarrow C_{max}(\alpha) \cap C_{max}(\beta) = \emptyset $$ \noindent for any two cobweb blocks $\alpha,\beta \in B\left(\layer{k}{n}\right)$, where $C_{max}(\gamma)$ is the set of maximal paths of the block $\gamma$. \begin{corol} The cobweb tiling problem for the layer $\layer{k}{n}$ is the problem of finding a clique of size $d$ in the graph $G(\layer{k}{n})$, where $d = \fnomial{n}{m}$.
\end{corol} \noindent {\it{\textbf{Proof.}}} \noindent Suppose that we have a cobweb layer $\layer{k}{n}$ and consider the family $B\left(\layer{k}{n}\right)$ of all blocks of the form $\sigma P_m$ of the layer $\layer{k}{n}$, where $m=n-k+1$. Assume that a cobweb tiling of the layer $\layer{k}{n}$ contains $d$ pairwise disjoint blocks of the form $\sigma P_m$, where $m=n-k+1$. From the combinatorial interpretation of $F$-nomial coefficients we have that $d = \fnomial{n}{m}$. Thus if the family $B\left(\layer{k}{n}\right)$ contains $d$ blocks that are pairwise disjoint then the layer has a tiling $\pi$. In other words, if a graph $G$ has $d$ vertices that are pairwise adjacent then of course it has a clique $\chi$ of size $d$. Moreover, this clique $\chi$ of the graph $G$ corresponds to the cobweb tiling $\pi$ of the layer $\layer{k}{n}$ and vice versa, i.e. $\pi \Leftrightarrow \chi$ $\blacksquare$ \begin{corol} If a graph $G(\layer{k}{n})$ has a clique $\chi$ of size $d=\fnomial{n}{m}$ then $\chi$ is a maximal clique of the graph. \end{corol} \begin{corol} The number of all cobweb tilings of the layer $\layer{k}{n}$ is equal to the number of all maximal cliques in the graph $G(\layer{k}{n})$. \end{corol} \section{Map of cobweb sequences} Below, in Figure \ref{fig:CobwebMap}, we present a Venn type diagram map of cobweb sequences. Note that the boundary of the whole family of Cobweb Tiling sequences is still not known (an open problem). \begin{figure}[ht] \begin{center} \includegraphics[width=83mm]{CobwebMap.eps} \caption{Venn type map of various cobweb sequences families. \label{fig:CobwebMap}} \end{center} \end{figure} \vspace{0.2cm} The \textbf{Cobweb Admissible} sequences family $\mathcal{A}$ is defined in \cite{md2}, the \textbf{GCD-morphic} sequences family in \cite{md1}. The subfamily $\mathcal{T}_\lambda$ of \textbf{cobweb tiling} sequences $\mathcal{T}$ is introduced in this note. \begin{enumerate} \item $A = (1,3,5,7,9,...) $; \item $B = (1,2,2,2,1,4,1,2,...)
= B_{2,2} \cdot B_{2,3}$; \item $C = (1,2,2,1,2,2,1,...)$; \item $E = (1,2,3,2,1,6,1,...) = B_{2,2} \cdot B_{3,3}$; \item $F = (1,2,1,2,1,2,...) = B_{2,2} $; \item Natural numbers, Fibonacci numbers; \item $G = (1, 4, 12, 32, 80, 192, 448, 1024, ...) $ (Example \ref{ex:modgaus} in Section \ref{sect:examples}); \end{enumerate} \noindent The sequences $B_{c,M}$ and $A_{c,t}$ are defined in \cite{md1}. \vspace{0.4cm} \noindent \textbf{Additional information} \vspace{0.2cm} In \cite{mdweb} we deliver some computer applications for generating tilings of any layer $\layer{k}{n}$, based on an algorithm from the proof of Theorem \ref{th:1}. There one may also find a visualization application for drawing all multi blocks of the form $\sigma P_{k,n-k}$ of a layer $\layer{1}{n}$. \vspace{0.4cm} \noindent \textbf{Acknowledgments} \vspace{0.2cm} I would like to thank my Professor A. Krzysztof Kwa\'sniewski for his comments and effective improvements of this note.
There is actually no single best period to come and visit the San Blas islands. There are, however, 2 different seasons: the rainy season (from June to December) and the dry season (from January to May). Some people might think that the rainy season is a “bad” season because you’ll get lots of rain, storms, etc., but that’s definitely a false assumption! Of course there are chances of rain, but because the islands are far away from the shore, you’ll actually get only a few chances of rain, and even if you do get some, it won’t last long anyway (usually 1 hour maximum). The “rainy” season could actually be compared to an alternation of cloudy days and beautiful sunny days, most of the time with a very light breeze, or no wind at all. On the opposite side, the “dry” season (which is supposed to be the high season) is characterized by no rain at all and sunny days, but also by strong and constant winds (called the trade winds, blowing from east to west at force 3-4 on the Beaufort scale), which might be a bit annoying for those of you who like to snorkel (the water is a bit less transparent), relax on a beach, etc. (but of course that will be a paradise for sailing). To sum up: don’t hesitate to come to San Blas during the rainy season! From our point of view, this is actually the best time to come to San Blas. You will most likely have beautiful, sunny days with no wind, and only short periods with clouds and possibly a bit of rain. (Plus, the islands will be less crowded.) © 2015 San Blas Tour
OstroVit Supreme Capsules T.C.M. 1200 400 caps OstroVit Supreme Capsules T.C.M. 1200 is the highest quality dietary supplement that is a source of creatine malate. It is a combination of creatine and malic acid in a 1:3 ratio. An excellent proposition for people whose goal is to build clean muscle mass, as well as to increase the strength and endurance of the body. - 400 capsules in the package - The product contains 100 servings - 1 serving = 4 capsules - The new product line Supreme Capsules Creatine Malate - Supreme Capsules T.C.M. 1200 Creatine malate repeatedly enhances the anabolic, ergogenic and anti-catabolic effects of creatine. Creatine is an organic compound made up of protein fragments that occurs in muscle tissue. The main task of creatine is to accumulate energy in the muscles, which increases the amount of energy that can be used during training. Increased energy content in the muscles leads to an increase in the body's strength and endurance, and thus to a regular increase in training effectiveness. In addition, it accelerates the growth of lean body mass, increases muscle strength and increases the efficiency of the body. Increased creatine content also contributes to faster muscle regeneration between training sets, which delays the effect of physical and mental fatigue. Creatine malate is one of the most soluble forms of creatine, very well absorbed by the body and effectively transformed into phosphocreatine. It is characterized by high stability, thanks to greater resistance to gastric enzymes. The best results are obtained by long-term supplementation combined with a proper, well-balanced diet, appropriate training and rest. Properties of OstroVit Supreme Capsules T.C.M. 1200 - Has an anabolic effect - supports the building of clean muscle mass - Increases the strength and endurance of the body - Supports the process of reducing body fat - Increases training effectiveness - Accelerates muscle recovery after training - Delays the effect of body fatigue Take 1 serving (4 capsules) daily. On workout days: one serving immediately before training. On non-workout days: one serving after waking up. Do not consume if you are allergic to any of the ingredients in the product. The product should not be used by children, pregnant women and nursing mothers. Not suitable for people with kidney problems. Keep out of reach of small children. Ingredients Tri creatine malate, capsule shell (gelatin, purified water). The product may contain milk (including lactose), soy, peanuts, other nuts, sesame seeds, oats, eggs, crustaceans, fish.
Surface Design Techniques with Fabric Paint You can transform any type of material with fabric paint to create your own custom patterns, prints & designs. The best fabric paint to use for marbling fabric is marbling fabric paint (the kind you would find in a kit). Other paints can be diluted to use as well, but it can be difficult to get the mixture right, especially if you are just starting out. Regular acrylic or craft paint can be used on fabric, however, the fabric will be stiff and not very comfortable to wear. The best fabric paints for adding paint directly to your fabric would be something like Jacquard Lumiere Fabric Paints. These are acrylic paints but made for fabric. To marble fabric, all you need is a fabric marbling kit and a few basic supplies which include items you probably already have at home! Materials: - Jacquard Marbling Kit (includes alum, carrageenan and marbling paint) - Jacquard Lumiere Fabric Paints (Pearlescent Magenta, Halo Pink Gold, Bright Gold, Halo Violet Gold, Pearlescent Turquoise) - Shallow tray (at least 2” deep and as wide and long as the fabric you would like to print) - Tools to create patterns (such as: Comb, hair pick, skewer, metal stylus, etc.) - Blender or whisk - Paper towels or newspaper - 100% cotton fabric - Paint brushes - Dishwashing tub or bucket Marble Painting Basics Before you begin creating your marble fabric designs, you’ll first need to treat the fabric with a mordant. This will allow the fabric paint to permanently adhere to the fabric. (If you have a fabric marbling kit, everything should be included for you as well as the instructions. If you purchase the items separately, you’ll need to get alum for this step.) First, dissolve 4 tablespoons of alum in one gallon of warm water. Submerge the fabric into the alum solution and soak for 20 minutes. Wring out and line dry. Then, iron the fabric on a low heat setting. 
Then, you’ll need to prepare the marbling base, or “size.” Using a whisk or blender, add 2 tablespoons of carrageenan to one gallon of warm water. Blend for 10 minutes or until the clumps of carrageenan have dissolved into the water. If blending with a whisk, sometimes it’s best to just leave the carrageenan to dissolve on its own if some clumps remain. Once dissolved, fill your tray with the carrageenan, at least 1” deep. Skim off any film, bubbles or dust from the surface with newspaper or paper towel. Begin by squeezing drops of marbling paint out of the squeeze bottles onto the surface of the size. Drop the paint as close to the surface as possible to prevent the drops from breaking the surface and falling to the bottom of the size. You should see the paint spreading across the surface in circles. Try adding colors on top of colors to get concentric circles or bullseye type patterns. You can also apply paint splatters by dipping the tip of a brush into the paint and splattering it across the surface. This will break up the paint circles of the previous layers. Next, drag a thin tool such as a stylus, skewer or awl through the paint up and down from the left side of the tray to the right. This will create a nice, feathered pattern. Once you are satisfied with your design, hold the fabric on both ends and drop it onto the surface of the size, starting from the middle and gently placing it down. You may need to gently press the fabric down to pick up all of the paint. Then, lift the fabric from one end, pulling it up and off of the surface in one motion. Let the size drip off a bit, then run the fabric under cold water to wash off the carrageenan. Gently squeeze the excess water out and then hang or lay flat to dry. Remember to skim the surface of the size with newspaper or paper towel to remove any lingering paint before beginning a new design. Other Marbled Fabric Designs Experiment with different color combinations to get the look you want. 
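If you want to mix a smaller or larger batch, both ratios above scale linearly: 4 tablespoons of alum per gallon of water for the mordant, and 2 tablespoons of carrageenan per gallon for the size. Here is a tiny Python helper (my own, not from the kit instructions) that does the arithmetic for any batch size:

```python
# Scale the tutorial's mixing ratios (4 tbsp alum and 2 tbsp carrageenan
# per 1 gallon of water) to an arbitrary amount of water.

ALUM_TBSP_PER_GALLON = 4
CARRAGEENAN_TBSP_PER_GALLON = 2

def mixing_amounts(gallons_of_water):
    return {
        "alum_tbsp": ALUM_TBSP_PER_GALLON * gallons_of_water,
        "carrageenan_tbsp": CARRAGEENAN_TBSP_PER_GALLON * gallons_of_water,
    }

print(mixing_amounts(0.5))   # a half-gallon batch: 2 tbsp alum, 1 tbsp carrageenan
```

Handy when you only need enough size to fill a small tray.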
Remember that the last color you use is most likely going to be your most dominant color. You can also mix the colors in the kit to get different colors. White can be added to any color to create pastels. You can also get orange by mixing red and yellow and another shade of purple by mixing blue and red that is different from the purple in the kit. Try splattering the paint onto the surface by dipping a paint brush and splatter painting onto the surface of the size. You can also leave the pattern as is and take a print without using the tools. This will give you a more abstract painted look. Create a few concentric circles with the paint droppers in the marbling kit. Then, take a comb and drag just the tip of the teeth of the comb through the paint (if you go too deep, you won’t get an intricate pattern. The finer the teeth of the comb, the more intricate the design). Begin with concentric circles again. Play with the stylus or skewer by swirling some of the concentric circles outward, letting the paint swirl around. The more you mix, the more intricate your design becomes. The less you mix, the more you’ll notice bigger chunks of color. For a more simple pattern, drag the stylus or skewer up and down the entire surface from left to right (like the first step of the first design). Each time you create a new design, change the color combinations a bit to vary the patterns. Adding Fabric Paint to Your Marble Painting To enhance your marbled fabric, or to add a little more interest to a “not-so-favorite” design, try adding the Lumiere Fabric Paints in splatters and splashes with a paint brush. This technique is simple, yet adds an interesting layer to the marbled fabric design. Simply dip a paint brush into the fabric paint and flick it onto the surface of the fabric (be sure it’s dry at this point). You can add several colors this way, no need to wait for colors to dry in between application. 
To get smaller splatters, dip the brush in paint and then take another dry brush and tap the brushes together. You can also dip an old toothbrush in the paint and use your thumb to flick small splatters onto the fabric. This step can be particularly unpredictable and messy, so be sure that you are wearing something that can get paint on it and protect your surface with newspaper as well. After all the paint has dried for at least 24 hours, you can heat set the paint by ironing. Then hand wash with cold water or machine wash on gentle/delicates cycle. Now what can you do with these beautiful painted fabric pieces once you’re done? You can cut them into smaller pieces to sew into a quilt. You can use them to create patches on clothing, purses, pillows, etc. You can sew them into little flag shapes and create a banner to decorate your home. Or sew them into little pouches and fill them with dried flowers and a few drops of essential oils to create sachets. The possibilities are endless! These techniques work great on any type of fabric. Try them on shoes, scarves, baby onesies, tea towels, etc. To marble larger items, just use a tray that is larger than the item that you want to marble. I hope you enjoyed this tutorial on how to marble fabric and that you are inspired to try fabric paint on your own projects!
TITLE: Why is the computer useful if a chaotic system is sensitive to numeric error? QUESTION [19 upvotes]: In every textbook on chaos, there are a lot of numerical simulations. A typical example is the Poincare section. But why is numerical simulation still meaningful if the system is very sensitive to numerical errors? REPLY [5 votes]: It is a valid question to ask whether the computer simulation of a dynamical system is representative of the dynamical behavior of the real system or merely an artifact of roundoff errors caused by the necessarily finite precision of a real computer. There is a crucial result regarding this situation called the shadowing theorem [1]. It states that Although a numerically computed chaotic trajectory diverges exponentially from the true trajectory with the same initial coordinates, there exists an errorless trajectory with a slightly different initial condition that stays near ("shadows") the numerically computed one. So when I iterate a chaotic dynamical system starting from an initial condition P, the trajectory that the computer spits out may not be representative of the real position of the dynamical system due to the roundoff errors. However, there will exist an initial condition Q such that the real trajectory starting from Q will stay close to my computer-generated trajectory from P. This tells me that if my computer simulation shows me a fractal structure of curves, this structure really is shown by the real dynamical system, in that there exist trajectories that shadow the trajectories shown by my computer. [1] Ott, E. (1993). Chaos in Dynamical Systems, Cambridge University Press, pages 18-19.
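As a quick numerical sketch of the sensitivity half of this story (the logistic map and all numbers here are my own illustrative choices, not from the answer), two runs of a chaotic map started $10^{-12}$ apart decorrelate after a few dozen iterations:

```python
# Iterate the logistic map x -> 4x(1-x), which is chaotic on (0, 1),
# from two initial conditions that differ by 1e-12. The pointwise gap
# grows roughly like 2^n, so after ~40 iterations the trajectories are
# effectively unrelated -- yet by the shadowing theorem each computed
# trajectory still stays near SOME true trajectory.

def logistic(x):
    return 4.0 * x * (1.0 - x)

def trajectory(x0, n):
    xs = [x0]
    for _ in range(n):
        xs.append(logistic(xs[-1]))
    return xs

a = trajectory(0.3, 50)
b = trajectory(0.3 + 1e-12, 50)
gaps = [abs(u - v) for u, v in zip(a, b)]
print(gaps[0], max(gaps))  # starts near 1e-12, grows to order one
```

So the computed orbit of P is not the true orbit of P, but shadowing guarantees it approximates the true orbit of some nearby Q, which is why the statistical and geometric conclusions drawn from such plots remain trustworthy.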
TITLE: Can one reformulate tensor methods and young tableaux to account for spinor representations on $\operatorname{SO}(n)$? QUESTION [10 upvotes]: Standard tensor methods and Young tableaux methods don't give you the spinor reps of $\operatorname{SO}(n)$. Is this because spinor representation are projective representations? If so, where does this caveat of projective representations enter this formulation of finding irreducible representations? Given that 'standard' tensor methods and Young tableaux (i.e. ones that you might find in a physics book on lie algebras) don't give you spinor reps, are there generalized Young tableaux methods that give you spinor reps? Edit: Just to give you an idea where I am coming from, I am a physicist, so I am sort of asking for the dummies guide to enumerating all possible representations without missing any. REPLY [0 votes]: Analogues of Young tableaux exist for all semisimple Lie groups/algebras, even for Kac-Moody algebras. (Some look like minor modifications of YT, other models are more geometric, called "path models".) They can be used to work with spin representations of orthogonal groups as a very special case. You can find many papers on this on Peter Littelmann's web page. For instance: "The path model of representations", Proceedings of the ICM Zürich 1994, Birkhäuser Verlag, Basel--Boston, (1995), pp. 298--308.
TITLE: Intersection of n lines QUESTION [2 upvotes]: Let $n$ lines in the plane be given such that no two of them are parallel and no three of them have a common point. We want to choose the direction on every line so that the following holds: if we go along any line in its direction and put numbers from 1 to $n-1$ on the intersection points then no two equal numbers appear at the same point. For which numbers $n$ is it possible? My guess is that we cannot do it iff $n$ is even. For the case $n$ is even, I think we should find a point $p$ of intersection of two lines which lies in the middle of both lines (for each of them, half of the intersection points lie on one side of $p$ and the others on the other side of $p$). REPLY [2 votes]: My guess is that we cannot do it iff n is even. You could be right! But we'll need a proof ... For the case n is even, I think we should find a point p of intersection of two lines which lies in the middle of both lines No, that won't work as a proof, since it is false. Here is a counterexample: The midpoint for line 1 is where it crosses with line 6 The midpoint for line 2 is where it crosses with line 6 The midpoint for line 3 is where it crosses with line 6 The midpoint for line 4 is where it crosses with line 9 The midpoint for line 5 is where it crosses with line 2 The midpoint for line 6 is where it crosses with line 4 The midpoint for line 7 is where it crosses with line 3 The midpoint for line 8 is where it crosses with line 3 The midpoint for line 9 is where it crosses with line 3 The midpoint for line 10 is where it crosses with line 6 Or, as a graph, where an arrow from $i$ to $j$ means that the midpoint of line $i$ is where it crosses with line $j$: So: you see that no two lines share the same midpoint!
Patalsu Peak Trek - 7 Nights and 8 Days Trek to Solang Nullah This is the first trekking day of the Patalsu Peak Trek. Early morning, after breakfast at leisure, begin the trek to Solang Nullah. Enjoy the walk along the left bank of the River Beas past the Ghoshal, Burwa, and Shanag villages. We cross the Beas river close to Solang Nullah, and then continue trekking up to the campsite. Dinner and night stay at the camp. Day 3: Trek to Shagadugh Have breakfast at leisure and then start for Shagadugh. Past Solang village, trek further to reach today’s campsite through the lush forests of walnut, chestnut, silver oak, and maple. Shagadugh is a rich snow-capped meadow where we camp for the night. Day 4: Trek to Patalsu Peak and then Trek Back to Shagadugh This is the most rewarding and exciting day of the entire Patalsu Peak Trek. Initially, the trail runs through mixed forests of conifers and oak till we get to the base of Patalsu Peak. Halt here for some time and then continue the climb to Patalsu Peak. Enjoy the enchanting views of the Shitidhar, Friendship and Manali peaks along the way. A steep uphill climb of a few hours then finally takes us to the summit of the 4220 m Patalsu Peak. The views of the surroundings from the peak are just beyond imagination. After spending some quality time at the peak, retrace the path towards Shagadugh. Dinner and night stay at the camp. Day 5: Trek to Solang Nullah and Drive to Manali After morning breakfast, retrace steps towards Solang Nullah and celebrate the successful summit of the Patalsu Peak Trek upon reaching. The day is going to be easy as the path is mainly downhill. From Solang Nullah, drive to Manali. On arrival, check in at the hotel. Dinner and night stay. Day 6: Explore Manali After a leisurely morning breakfast, take a ride to visit Hadimba Temple, the Vashisht hot water springs, Manu Temple and Arjun Gufa. In the afternoon, take a half-day tour of Naggar Castle and the Angoora Shawls Factory. Dinner and overnight at the hotel.
Acer griseum (Aceraceae) A small, deciduous, oval to oval-rounded tree with slender upright branching. It typically matures at 40 feet tall. The bark on the trunk and limbs is extremely ornamental because it peels into large curls which remain on the tree rather than falling to the ground, often in attractive contrast to the tan to rose-brown inner bark. It is particularly noted for its exfoliating copper orange to cinnamon reddish/brown bark and its showy orange to red fall color. Leaves are green above, but frosty blue-green to gray-green with fine hairs beneath. Fall color varies, typically ranging from showy shades of orange and red to less spectacular shades of reddish-green to bronze green. Yellowish flowers bloom in April-early May in clusters up to 1 inch on pendulous downy stalks. Flowers give way to winged samaras with unusually large seeds.
Walgreens Deals 11/1 to 11/7 by Briana Carter on November 1, 2009 New to Walgreens? Go HERE to learn how to maximize your savings at Walgreens. →REGISTER REWARD DEALS Blistex Lip Balm, $2.49 Earn $1.50 Register Rewards Final Price after Register Rewards: $0.99 World’s Finest 5.5 oz, or Queen Anne 13.2 oz. Chocolate, 4/$10 Earn $5 Register Rewards when you buy 4 Final Price after Register Rewards: 4/$5 Holiday Mars or Dove Minis 8.5 to 11.5 oz., 2/$6 Earn $1 Register Rewards when you buy 2 Final Price after Register Rewards: 2/$5 Vicks Dayquil or Nyquil, 2/$9 (2) $1/1 from 11-1-09 PG Earn $3 Extra Bucks when you buy 2 Final Price after Register Rewards: $2/each Sambucol or Sinus Buster, $9.99 Earn $5 Extra Bucks Final Price after Register Rewards: $4.99 Alka Seltzer 10 or 20 ct., 2/$10 (2) $1/1 printable Earn $3 Extra Bucks when you buy 2 Final Price after Register Rewards: $2.50/each Pampers Jumbo Pack Diapers or Training Pants, 2/$20 (2) $1.50/1 from 11-1-09 PG Earn $2 Extra Bucks when you buy 2 Final Price after Register Rewards: 2/$15 Gillette Fusion or Venus Razor, $8.99 Earn $3 Register Rewards Final Price after Register Rewards: $5.99 Spend $10 on participating P&G Products, Get $2.50 in Register Rewards - Bounce Dryer Bar, Downy Liquid Fabric Softener, Bounce Sheets, Tide Stain Release Liquid, $3.99 - $1/1 Bounce Dryer Bar, Any – 11-01-09 PG or 10-11-09 PG - $2.50/1 Bounce Dryer Bar, Any – VocalPoint - $0.25/1 Bounce Dryer sheets any – 10-11-09 PG or 11-01-09 PG - $1/2 Downy Liquid Fabric Softener or Dryer Sheets, Any – 11-01-09 PG - $1/2 Tide Stain Release Product, Any – 11-01-09 PG - $1.50/1 Tide Stain Release In-Wash Booster, Any – Good Housekeeping, August or September 2009 - Febreze Flameless Luminary Starter Kit, $12.99 - Febreze Air Effects, $2.50 Glade 2 Pack Plugin Scented Oil Refills or Soy Candles 4.5 oz., $3.99 B1G1 Soy Candle Earn $1 Extra bucks Final Price after Register Rewards: $2.99 for 2 candles! 
→7 DAY COUPONS Walgreens 24 pack water, $2.99 (Limit 2) Geisha Mushrooms 44 oz., or Madam Mandarin Orange 11 oz., 2/$1 (Limit 4) Campbell’s Classic Bowl, Select Harvest, Soup at Hand, $1.25 $1/2 Campbell’s Select Harvest Soup Cans, Regular and Healthy Request Final Price: $0.75/each Ricola Cough Drops 19 to 24 ct., $1.29 B1G1 Free Ricola Natural Mixed Berry with Vitamin C, FREE wyb any bag of Ricola, 19 ct.+ – 10-11-09 S Final Price: $0.65/each Hunt’s Tomatoes Sauce 8 oz., 3/$1 (Limit 6) Mitchum Deo/Antiperspirant, $1.99 (Limit 3) $1/1 Mitchum or Mitchum for Women product, any (excluding trial sizes and special value multipacks) – 11-01-09 SS Final Price: $0.99 Mars Candy Singles, $0.49 (Limit 6) Dawn Dish Liquid, $0.99 (Limit 3) $0.25/1 from 11-01-09 P&G Insert Final Price: $0.74 →OTHER SALES Hefty One Zip Bags, B1G1 Free @ $3.29 $1/2 Hefty OneZip Bags, any (2) packages – 11-01-09 RP Final Price: $1.14/each Dixie Plates, $2.99 $1/2 printable Final Price: $2.49/each M&Ms 9.9 to 12.6 oz., 2/$5 $1 Instant Coupon when you buy 2 Final Price after Instant coupon: $2/each W Alkaline Batteries 16 pack AA or AAA, B1G1 Free, $9.99 Campbell’s Soup 10.74 oz., Chicken Noodle or Tomato, $0.50 (Limit 4) $1/2 Printable Final Price: FREE!! Gorton’s Frozen Entree 7.6to 10.5 oz., $2 (Limit 4) $1/1 Gorton’s Shrimp Item – All You, March 20, 2009 Final Price: $1 Gillette Disposable, $5.99 $3/1 Gillette Mach3 Disposable Razor 6 Ct., Sensor3 8 Ct., or CustomPlus3 8 Ct., Any – 11-01-09 PG Final Price: $2.99 Glade Sense & Spray, $4.99 $4/1 Glade Sense & Spray Starter Kit, any – 10-18-09 SS Final Price: $0.99 Gillette Foamy Shave Cream, $1.50 Maybelline Eye Cosmetics, B1G1 FREE $1/1 Maybelline New York The Colossal Volum Express Mascara OR Maybelline Mascara, any – 10-11-09 RP $1/1 Maybelline Eye Shadow or Eyeliner, any – 10-11-09 RP $3/1 Maybelline New York Pulse Perfection Vibrating Mascara, Any (Bricks Link) Final Price: Use (2) Coupons to get cheap make up! 
One Gallon Milk, $1.99 (May be regional deal) Limit 2 Charmin Ultra Bathroom Tissue, $3.49 (Limit 2) $0.25/1 from 11-01-09 P&G Insert Final Price: $3.24 Dentek, $1.99 $1/1 DenTek Floss, any – 10-11-09 SS Final Price: $0.99 - You can get free Hunts Tomato Sauce this week. Check. Also don’t forget Robitussin to Go is still free after RRs. - The Maybelline (Eye Cosmetics, B1G1 FREE) deal in AZ excludes mascara. FYI
TITLE: Computing Gauss Legendre quadrature for large $N$ QUESTION [14 upvotes]: I've been scanning across the web, and haven't found a good method to compute the Gauss Legendre abscissas and weights $\{ x_j, w^j \} _{j=1}^N$ for large $N\in\mathbb{N}$. My question is how to do it, and why should it work? To those who need some background: The goal is to approximate an integral by a discrete interpolating sum: $$\int\limits_{(-1,1)} f(x) dx = \sum\limits_{j=1}^N f(x_j)w^j $$ The question is, how to choose $\{ x_j, w^j \} _j$ appropriately. The Gauss Legendre quadrature tells you (for good reasons) to choose $x_j $ to be the roots of the $N$-th Legendre polynomial. Problem: A straightforward computation of the Legendre polynomial for high $N$ is highly unstable, as it involves "big" coefficients of alternating signs. EDIT: I've found a very simple code that computes weights and abscissas using eigenvalues of a symmetric matrix, but doesn't seem to use Golub–Welsch. The matrix is $$\forall 1\leq j\leq N-1 \, A_{j,j+1} = A_{j+1,j} = \frac{j}{\sqrt{4j^2 -1}}$$ with all other entries zero. The discussion about it was split to another post. REPLY [7 votes]: To add to Fredrik Johansson's answer: A nice history of algorithms for computing Gauss quadrature rules can be found in this SIAM News article by Alex Townsend. Therein, it is stated that the "final chapter" was written by Ignace Bogaert in this SISC paper, which gives an algorithm that is even faster and more accurate than the algorithm of Hale & Townsend. There is a free open-source implementation at https://sourceforge.net/projects/fastgausslegendrequadrature/
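To make the eigenvalue route concrete, here is a short NumPy sketch of the method described in the question's edit. The one ingredient beyond what the post states is the weight formula — the weights equal $2$ times the squared first components of the normalized eigenvectors — which is the standard Golub–Welsch relation and is assumed here:

```python
import numpy as np

def gauss_legendre(n):
    """Gauss-Legendre nodes and weights on (-1, 1).

    Builds the symmetric tridiagonal matrix from the post
    (A[j, j+1] = A[j+1, j] = j / sqrt(4 j^2 - 1), zero diagonal);
    its eigenvalues are the nodes, and the weights are
    2 * (first component of each normalized eigenvector)^2.
    """
    j = np.arange(1, n)
    off = j / np.sqrt(4.0 * j * j - 1.0)
    nodes, vecs = np.linalg.eigh(np.diag(off, 1) + np.diag(off, -1))
    weights = 2.0 * vecs[0, :] ** 2
    return nodes, weights

x, w = gauss_legendre(100)
# Sanity check: a 100-point rule is exact for polynomials up to degree 199,
# so integrating exp over (-1, 1) should match e - 1/e to machine precision.
err = abs(np.sum(w * np.exp(x)) - (np.e - 1.0 / np.e))
print(err)
```

The dense eigensolver here costs roughly $O(N^3)$ time, whereas the Hale–Townsend and Bogaert methods cited in the reply run in $O(N)$, which is what makes them the right tool for truly large $N$.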
Earlier this month the six finalists for the ASAP awards were named. They represented six outstanding contributions to innovation that exploited Open Access. The 3 winners were announced at a kickoff event at the World Bank in Washington DC, on the Monday of Open Access Week 2013. This year’s OAW theme is “Redefining Impact” and few projects speak more to that than the six finalists of the ASAP. Nitika Pant Pai, Caroline Vadnais, Roni Deli-Houssein and Sushmita Shivkumar developed a mobile phone app that will help people affected by HIV use a home diagnostic test and then connect with the right kind of information and the right kind of people. I asked Nitika about the project and about winning the award. Here is what she had to say: The award inspires me to do more—and i thank God for rewarding hard work- and my family and my trainees for being there for me. I share this award with my extended family of trainees- Roni, Caroline and Sush who put in many hours of work with me. All of them pitched in, with their different perspectives. I am grateful to Open access— a movement that i strongly support, give my time and strongly believe in. i am an editorial board member of Plos one and review for Plos Med and several other open access journals.And i thank my sponsors—Google –and Wellcome Trust. And finally, I dedicate this award to my husband, Dr Madhukar Pai, who is my role model. It was he who encouraged me to apply for it. As my future plans, lets keep our future plans for another interview. I do not like to talk about my work unless it is near completion…..i believe that our work and deeds should do the talking. Read the full interview with Nitika Pant Pai here. Mat Todd’s contribution to finding an open source solution for drugs to cure malaria speaks for itself. I talked to Mat about his project when he was named a finalist, and more recently I asked him about how he felt about winning the award: Well, I’d want to emphasise: I’m really happy to win this award.
I am, however, representing a whole consortium of people. These guys have contributed because, I think, they just want to do the best science. They may have contributed because they realise we can do science more efficiently and with more impact if we’re open. The OSM-ers include people in my lab, in other labs around the world and the funders, the Medicines for Malaria Venture and the Aussie government, who had the courage to back a risky idea. Hopefully we can attract more people on board and do something extraordinary by discovering a drug that enters clinical trials. Read the full interview with Mat Todd here. Daniel Mietchen, Raphael Wimmer and Nils Dagsson Moskopp were nominated for the work they did “rescuing” multimedia files in the Open Access literature. They built a bot that harvests these files and then gives them a new life in WikiMedia Commons. I’ve worked with Daniel and hope to work with him in the future, so I was really curious to ask Daniel what came next. Unsurprisingly, a lot. Daniel has so many great projects in his mind it is hard to keep up. The first line of his response was “Lots, actually” and here are some examples: For instance, we could try to import media from other databases Dryad, Pangaea […], we could import files other than audio or video (e.g. images), or we could export to places other than Wikimedia Commons (e.g. YouTube).,[..] we’re not done, we’ve just tested the ground for bot-assisted large-scale reuse of openly licensed research materials, and we’d be delighted if others were to join in. Plus, if we manage to reach out to the crowd (e.g. via the YouTube channel; then that could provide a new way for them to engage with scientific materials, and we can only imagine what people would do with them in a remix culture as on YouTube. Read the full interview with Daniel Mietchen here. 
Other finalists for the ASAP awards were: Smartphone Becomes Microscope (Saber Iftekhar Khan, Eva Schmid, PhD and Oliver Hoeller, PhD) and Measuring and Understanding the Sea (Mark J. Costello, PhD).
Transformers Power of the Primes Leader Class Rodimus Prime Finally! We finally get a proper Leader Class Rodimus Prime for the CHUGS series Transformers. Up until now, all we got were Deluxe Class repaints of Hot Rod, with the exception of the TF Cloud Rodimus Prime/Springer redeco. Rodimus Prime is big and chunky; he looks good, and I actually prefer this figure over the official Masterpiece Rodimus Prime (Takara should really redo that figure). The downside with this figure is that it is a bit hard to unsee the upside down Hot Rod legs in the shoulders. I guess it will take some getting used to. He also comes with his own “Matrix” of Leadership. But it looks REALLY awkward to get to it (you have to flip up Hot Rod??) I wish there was an opening chest panel instead. Unlike Orion Pax, there’s no eject button for Hot Rod here, you just pull him out of the torso. Ok, hands down, the best Hot Rod in the CHUGS line for me right here, I actually like Hot Rod more than Rodimus Prime mode in this toy. I just wish the colors were done a la Takara to make it movie-color accurate. Sigh. Hot Rod stands roughly 6.25″ tall, which means he’s bigger than most Deluxe Class figures. Rodimus Prime’s rifle can be taken apart to form Hot Rod’s guns. For me, the Rodimus Prime vehicle mode isn’t that great. I wish they had improved the design in some areas (the hands stick out), and the locks for snagging onto Hot Rod aren’t really tabbing in that well for me. Rodimus is shorter than Optimus in vehicle form. Overall I love it, it’s nice to finally have a Leader Class Rodimus Prime. Now to hope they make a Leader Class Galvatron later on. Gold
It is recommended to eat a handful of dry fruits daily in order to stay fit. There is a huge variety of dry fruits that are available in the market such as Cashews, Nuts, Almonds, and Walnuts etc. Feed the children with nutritious meals and free them from the ill effects of hidden hunger. Replica Hermes Handbags Replica Hermes Birkin Hermes Replica Birkin Hermes Replica Bags To achieve the money goal you determined, look for the low hanging fruit that’s just waiting for you to pluck it off the “Money Tree”. Low hanging fruit are typically services and products you already have that can be re launched, re titled, or expanded. Dust these items off, because they represent unrealized income increasing opportunities.. Hermes Replica Bags Hermes Replica Handbags.. Hermes Replica Handbags Hermes Bags Replica Our hearts go out to the struggling musician, whether you are a street performer or you play a cushion gig’ that pays good bucks. In this article, we will reflect on the lives and struggles of these born survivors. However, I feel that for every bad apple, there are 4 good ones.. Hermes Bags Replica Hermes Birkin Replica Erik Dalton describes the cause of scoliosis, both structural scoliosis and functional scoliosis, and looks at treatment options including manual therapy. So there is good news and bad news when approaching the question of scoliosis. On the one hand, it is an all too widespread disorder linked to factors that need closer attention: environmental toxins, diet, and general activity level. Hermes Birkin Replica Replica Hermes Belts Also you will want to look for deep conditioners since a lot of our daily routine can damage our hair. Not to mention the weather as well. Deep conditioners will restore the moisture balance, and help the hair to seal naturally.. We watch celebrities tote them around as if they’re worth less than a cheap handbag that can be found in any department store, and dream of owning something as beautiful as those handbags.
They have designers who try to come up with trends that will either louis vuitton bags sale last for a very long time, or that will suit the next season. They then make the bags and begin to sell them, putting a huge price tag on them. Replica Hermes Belts Hermes Belts Replica Liam Payne has new tattoos AND a new song, as he reveals ink during promo for single Bedroom FloorThe singer definitely won’t have any trouble remember his own name while on the promo tour14:31, 20 OCT 2017Updated14:39, 20 OCT 2017Get celebs updates directly to your inbox+ SubscribeThank you for subscribing!Could not subscribe, try again laterInvalid EmailLiam Payne has gone to drastic measures to remind the world his name.Well, remind them of his initials, anyway.The 24 year old has smartly inked the letters L and P onto his thumbs, in a bizarre new tattoo debuted this morning as he launched his new single, Bedroom Floor.Looking mighty fresh, his skin was still red raw from the recent addition, which will fit in well amongst his many other tatts.However, not a means for him to glance down and remember his identity, the letters are created in a way that makes sense only to other people facing him.She said on ITV’s Lorraine: “I always loved the name. And the midwife always used to come in when he was little he used to, like, grunt and used to go, ‘There’s the little bear, there’s the little bear’.”And it stuck. 
It’s just his name and I wanted a unique name.”Like us on FacebookFollow us on Twitter NewsletterSubscribe to our newsletterEnter emailSubscribeFilms breastsBenny HillBenny Hill offered me a job in return for sexual favours, says punk queen Hazel O’ConnorHazel O’Connor says she has been haunted ever since by the incident in Hill’s London flat in 1976Meghan MarkleMeghan Markle’s nephew reveals heartache at the family rift that tore them apartThomas Dooley and Prince Harry’s girlfriend were once close and she used to love babysitting him as a little boy, but a family feud between the Suits star’s father and her half brother has meant they haven’t seen one another in yearsPaloma FaithSinger Paloma Faith reveals she is raising her child as gender neutralThe former Voice judge says she wants more kids with French boyfriend Leyman Lahcine, and wants them “to be who they want to be”Most ReadMost RecentCCTVFurious girlfriend dumps partner after CCTV images ‘show him having sex with a horse’ on farmThe man tried to set fire to the camera after the lewd actNewport CorporationDad lost both legs and hands after amputation just a day after feeling like he had a coldChris Garlick was left fighting for his life just hours after calling NHS Direct Hermes Belts Replica.
The lightweight Manix2 features a Ball Bearing Lock and a translucent EdgeTek FRCP handle. More info... Spyderco Kiwi3 Stag. Copyright © 2015. All rights reserved. All product names, art and text herein are the property of Spyderco, Inc. and may not be reproduced in part or whole without the sole written permission of Spyderco, Inc.
TITLE: Induced current for conjoined loops in varying magnetic field QUESTION [2 upvotes]: How do we calculate the emf and the current induced in two loops of wire that have a portion in common between them? The specific example we had solved in class is: the left side loop is a square and the right side loop has half the width. The magnetic field as a function of time is known, and the resistance per unit length of the wire is known. Does a loop's being conjoined with another affect the emf induced across it? I do not understand how I can calculate the current in each wire, because there is no particular point from where the emf is induced (unlike a typical circuit with batteries). Our teacher and textbook pulled it off by assuming the induced emf in each loop to be supplied by two equivalent batteries, and using Kirchhoff's law to find the currents in each portion. Is this method correct? If not, how do we calculate the current induced in each loop? More clarification: Circuit diagram: ABEF is a square with side length $\ell$, and BC = ED = $\ell/2$. The part AB has a resistance $R$, and all wires have the same resistivity and cross sectional area. Now, my doubt is, will we write Kirchhoff's Loop Law equations as: $$V_1 = i_1(3R) + (i_1 - i_2)(R) \\ V_2 = i_2(2R) - (i_1 - i_2)(R) $$ or, $$\begin{align} & V_1 - V_2 = i_1(3R) + (i_1 - i_2)(R) \\ & V_2 - V_1 = i_2(2R) - (i_1 - i_2)(R) \quad\quad\text{?} \end{align} $$ Thank you!
In this way, you can include an EMF term in each loop law you write. Just use Kirchhoff's laws as normal, and make sure to include the term corresponding to the EMF for each loop you use. With the specific example you give, when you write the loop law for the left loop, you use the EMF associated with the left loop - I suppose that would be $V_1$. The EMF associated with the right loop ($V_2$) is entirely separate, and should not appear in the left loop equation at all. Note that you can check this procedure by writing Kirchhoff's loop law for the outer loop, which encloses both the left loop and the right loop. The equation you get should be the sum of the left loop equation and the right loop equation. You'll need to use the fact that the EMF associated with the outer loop is the sum of the EMFs associated with the left loop and the right loop (as you could derive from Faraday's law).
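As a sanity check on this procedure, here is a small numerical sketch (the values of $R$, $V_1$, $V_2$ are made up for illustration) that solves the question's first set of loop equations — each loop carrying only its own EMF — and then verifies that adding them reproduces the outer-loop equation with EMF $V_1 + V_2$:

```python
import numpy as np

# Loop equations with each loop keeping its own EMF:
#   V1 = i1*(3R) + (i1 - i2)*R   ->   4R*i1 -  R*i2 = V1
#   V2 = i2*(2R) - (i1 - i2)*R   ->   -R*i1 + 3R*i2 = V2
R, V1, V2 = 2.0, 10.0, 4.0  # illustrative values only

A = np.array([[4.0 * R, -1.0 * R],
              [-1.0 * R, 3.0 * R]])
i1, i2 = np.linalg.solve(A, np.array([V1, V2]))

# Consistency check from the answer: the outer loop (which excludes the
# shared branch) must satisfy  i1*(3R) + i2*(2R) = V1 + V2.
outer_emf = i1 * 3.0 * R + i2 * 2.0 * R
print(i1, i2, outer_emf)  # outer_emf equals V1 + V2 = 14.0
```

The shared-branch terms $(i_1 - i_2)R$ cancel when the two equations are added, which is exactly why the outer-loop check works for any choice of $R$, $V_1$, $V_2$.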
Lara, May 9, 2020 at 12:30 pm: I’d really recommend the ATAS travel blog as well (), which is run by the Australian Federation of Travel Agents. It’s got a knowledge-sharing vibe with heaps of info on domestic and international travel, and they’re running a really wholesome series of articles at the moment (during the COVID-19 days of border closure, etc.) about what travel means to us, gamechanging travel books and so on. Perfect for armchair travel during #isolife!
TITLE: Decomposing arbitrary maps QUESTION [0 upvotes]: Let $X$ be any set and let $f$: $X\longrightarrow X$ be any map. Must there exist an injection $g$: $X\longrightarrow X$ such that $f=g\circ h$ for some left-inverse $h$ of $g$? Edit: The answer is No for finite $X$. What if $X$ is infinite? REPLY [1 votes]: No, for example, take $X$ to be a finite set. Then $g$ injective implies it is bijective and $h$ must also be bijective. Thus $f$ is also bijective, which contradicts the "let $f$ be any map". In the general case, it still doesn't work. Since $hg=Id_X$, $h$ is surjective. Therefore, $Im(f)=Im(gh)=Im(g)\cong X$ because $g$ is injective. This also contradicts the "let $f$ be any map" since $f$ could have an image with cardinality smaller than $|X|$.
Professional Communication - Analysis of ICAO Phraseology Used by Pilots in Routine Communication with Air-Traffic Controllers PDF B.A. thesis by Piotr Podwójci (2016) University of Warsaw, Department of Applied Linguistics, Institute of Specialised and Intercultural Communication Degree programme: Applied linguistics Key words: air-traffic controller, aviation English, aviation phraseology, civil aviation, communication, pilot, wireless communication Abstract: Civil Aviation is an industry with a relatively short history, but from the very beginning of its existence communication between airmen and ground crew was problematic. In the beginning of the 20th century there was no technology that allowed wireless communication, and with the invention of the radio transmitter, pilots had to face problems caused by the lack of prescribed rules for voice-based communication. The establishment of the International Civil Aviation Organisation, which prescribed use of the English language for international radiotelephony communications, improved the situation significantly. Nevertheless, air crashes caused by miscommunication continued to occur. These problems drew ICAO's attention and triggered the establishment of a worldwide minimum English language standard for use in civil aviation, as well as phraseology to be used by aviation personnel in order to provide unambiguous, clear, efficient and intelligible flight communication. All aviation personnel, especially pilots and air-traffic controllers, should comply with ICAO standards and use standard phraseology. The aim of this essay is to investigate whether pilots use ICAO Standard Phraseology correctly in all applicable situations and what types of deviations from the prescribed phraseology occur. For this purpose, an analysis of 33 transcripts of pilot-controller communication in the 'Warsaw Approach' sector was conducted. Translation Quality Assessment - Quality Assessment of Polish Translation of the PMBOK Guide PDF B.A.
thesis by Konrad Maliszewski (2016) University of Warsaw, Department of Applied Linguistics, Institute of Specialised and Intercultural Communication Degree programme: Applied linguistics Key words: language, specialist language, text, specialist text, translation, translation quality assessment, project, project management Abstract: The aim of the essay "Quality Assessment of Polish Translation of the PMBOK Guide" is to carry out the translation quality assessment process on the first three chapters of the standard. Due to the text being a specialist text, in the first part of the work, concepts related to project management and corresponding linguistic and translational concepts are defined (e.g. specialist language, specialist text, translation). Next, the model which was the basis for the translation quality assessment process is presented. It is a model designed by J. House in 2015. J. House has developed this model for almost 40 years. The second part of the work consists of the presentation of the results obtained through the analysis, discussion upon them and conclusions drawn from the discussion. In order to discuss the results credibly, the author repeatedly consulted them with the translator of the Polish version of the PMBOK Guide.
Senior Structural Designer, Hatch Ltd. Direct Phone: (403) ***-**** Sheridan Science & Technology Park, 2800 Speakman Drive, Mississauga, Ontario L5K 2R7, Canada. A global organization, Hatch supplies process and business consulting, information technology, engineering and project and construction management to the mining, metallurgical, manufacturing, energy and infrastructure industries.
HDR shots: aligning photos Hi. I just read the 4 articles about HDR photography and found them very interesting. But I'm just a beginner. So, although I learned that each piece of software has its own strengths, I will try to choose one (probably Photomatix) and use just that, since it would be a waste for me to spend so much money on applications I don't even know how to use. Here is my question: I read that, among the software used for the articles, Photoshop CS5 is the best for image alignment. There is even a link to download a customized script. But what if I want to use Photomatix, and still don't want to rely on automatic alignment? -- I wish people could see the world the way I see it
TITLE: Separable vs countable QUESTION [0 upvotes]: I want to show that $l^2(X)$ is separable iff $X$ is countable. Note that a space is separable if it has a countable dense subset. I can see that if $X$ is countable, then $l^2(X)$ is separable. To prove the other direction, I need a countable dense set. How can I construct such a set? Any help would be highly appreciated. Thanks so much. REPLY [3 votes]: If $X$ is uncountable, consider the elements $f_x$, $x \in X$, defined by $f_x(y)=1$ if $y=x$ and $0$ otherwise. Note that $\|f_x-f_y\|=\sqrt 2$ whenever $x \neq y$. The balls $B(f_x,\frac 1 {\sqrt 2})$ form an uncountable family of pairwise disjoint open sets; since a countable dense set would have to meet every one of them, the space cannot be separable.
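The key computation, $\|f_x-f_y\|=\sqrt 2$ for $x\neq y$, can be illustrated numerically in a finite-dimensional slice of $l^2(X)$, with one-hot vectors standing in for the indicator functions (a sketch; the vectors and helper names are ours):

```python
from math import sqrt, isclose

def one_hot(i, n):
    # indicator function f_i restricted to an n-point subset of X
    return [1.0 if j == i else 0.0 for j in range(n)]

def dist(u, v):
    # l^2 distance between two finitely supported functions
    return sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

n = 6
vecs = [one_hot(i, n) for i in range(n)]
pairwise = [dist(vecs[i], vecs[j]) for i in range(n) for j in range(i + 1, n)]

# Every pair sits at distance sqrt(2), so the open balls of radius 1/sqrt(2)
# around the f_x are pairwise disjoint.
assert all(isclose(d, sqrt(2.0)) for d in pairwise)
```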
\begin{document} \newpage \renewcommand{\baselinestretch}{1.3} \normalsize \Large \begin{center} {\bf{Counting Conjugacy Classes of Elements of Finite Order in Lie Groups}} \large \vskip 1cm Tamar Friedmann$^*$ and Richard P. Stanley$^\dagger$ \vskip .6cm \normalsize {\it $^*$University of Rochester, Rochester, NY} {\it $^\dagger$Massachusetts Institute of Technology, Cambridge, MA} \vskip .5cm \normalsize \abstract{ Using combinatorial techniques, we answer two questions about simple classical Lie groups. Define $N(G,m)$ to be the number of conjugacy classes of elements of finite order $m$ in a Lie group $G$, and $N(G,m,s)$ to be the number of such classes whose elements have $s$ distinct eigenvalues or conjugate pairs of eigenvalues. What is $N(G,m)$ for $G$ a unitary, orthogonal, or symplectic group? What is $N(G,m,s)$ for these groups? For some cases, the first question was answered a few decades ago via group-theoretic techniques. It appears that the second question has not been asked before; here it is inspired by questions related to enumeration of vacua in string theory. Our combinatorial methods allow us to answer both questions. } \end{center} \normalsize \vskip .3cm Keywords: Conjugacy classes, finite order, Lie groups, Chu-Vandermonde Identity, binomial identities AMS Classification: 05A15 (22E10, 22E40) \vskip .3cm \section{Introduction} Given a group $G$ of linear transformations and integers $m$ and $s$, let \be E(G,m)=\{ x\in G \; |\; x^m=1\}. \ee Also let \be \label{egms1} E(G,m,s)= \{ x\in E(G,m)\; | \; x \mbox{ has } s \mbox{ distinct eigenvalues} \} \ee for $G$ a unitary group, and \be \label{egms2} E(G,m,s)= \{ x\in E(G,m) \; |\; x \mbox{ has } s \mbox{ distinct conjugate pairs of eigenvalues}\} \ee for $G$ a symplectic or orthogonal group, and let \beas N(G,m)&=& \mbox{number of conjugacy classes of } G \mbox{ in } E(G,m), \\ N(G,m,s)&=& \mbox{number of conjugacy classes of } G \mbox{ in } E(G,m,s). 
\eeas For $\Gamma$ any finitely generated abelian group and $G$ a Lie group, one can consider the space of homomorphisms Hom$(\Gamma, G)$ and the space of representations of $\Gamma$ in $G$, that is, consider \[ \mathrm{Rep}(\Gamma, G)\equiv \mathrm{Hom}(\Gamma, G)/G \] (where $G$ acts by conjugation); using this notation, \[ E(G,m)=\mathrm{Hom}(\Z/m\Z, G)\] and \[N(G,m)=|\mathrm{Rep}(\Z/m\Z , G)|.\] For the case $\Gamma = \Z ^n$, the spaces $\mathrm{Hom} (\Z^n, G)$ and $\mathrm{Rep}(\Z^n, G)$ have been studied for various Lie groups $G$ in \cite{BFM, AC, ACG, BJS} (and references therein), where there has been interest in their number of path-connected components and their cohomology groups. It is the purpose of this paper to compute $N(G,m)=|\mathrm{Rep}(\Z/m\Z , G)|$ and $N(G,m,s)$ for $G$ a unitary, orthogonal, or symplectic group. Unlike $\mathrm{Rep}(\Gamma, G)$ for $\Gamma = \Z^n$, the representation space $\mathrm{Rep}(\Z/m\Z , G)$ is a finite set, so we can count its number of elements. The results are summarized in Table 1. The numbers $N(G,m,s)$ have never been studied before in the mathematical literature. What motivated their definition, as well as the definition of $N(G,m)$, was the need to find a formula for the number of certain vacua in the quantum moduli space of M-theory compactifications on manifolds of $G_2$ holonomy. In that context, the numbers $N(SU(p),q)$ and $N(SU(p), q, s)$, where $q$ and $p$ are relatively prime, were computed in \cite{Friedmann:2002ct}. These numbers are related to symmetry breaking patterns in grand unified theories, with the number $N(SU(p),q,s)$ being particularly significant as $s$ is related to the number of massless fields in the gauge theory that remains after the symmetry breaking. 
The connections with symmetry breaking patterns arise from the fact that if $M$ is a manifold and $\pi_1 (M)$ is its fundamental group, then $\mathrm{Rep}(\pi _1(M), G)$ is the moduli space of isomorphism classes of flat connections on principal $G$-bundles over $M$; in grand unified theories arising from string or M-theory, these flat connections (called Wilson lines) serve as a symmetry breaking mechanism. For more on the physical applications and implications of these numbers, see \cite{Friedmann1}. As for $N(G,m)$, certain cases have been studied previously in the mathematical literature, using different techniques than ours. Two of the quantities we derive, Theorems~\ref{qinsup} and \ref{qinspn}, were obtained in \cite{81h:20052,86h:22010} using the full machinery of Lie structure theory with a generating function approach; in \cite{pianzola1, pianzola2}, the case of certain prime power orders is computed; and in \cite{lossers}, Theorem~\ref{minsun} is obtained. Our methods are different; they are purely combinatorial and direct, and apply not only to simply connected or adjoint groups as in \cite{81h:20052, 86h:22010}, so we are able to derive formulas for $O(n)$, $SO(n)$, and $U(n)$ alongside those for $SU(n)$ and $Sp(n)$. Other aspects of elements of finite order in Lie groups have been studied. See for example \cite{other3, other1, other2, other4, other5}. In addition to the quantities $N(G,m)$ and $N(G,m,s)$, which count conjugacy classes of elements of any order dividing $m$, we consider also conjugacy classes of elements of exact order $m$ in $G$: let $$ F(G,m)=\{ x\in G \; |\; x^m=1, x^n\neq 1 \mbox{ for all } n<m \}. 
$$ Also let $$ F(G,m,s)= \{ x\in F(G,m) \; | \; x \mbox{ has } s \mbox{ distinct eigenvalues}\} $$ for $G$ a unitary group, and $$ F(G,m,s)= \{ x\in F(G,m) \; | \; x \mbox{ has } s \mbox{ distinct conjugate pairs of eigenvalues} \}$$ for $G$ a symplectic or orthogonal group, and let \beas K(G,m)&=& \mbox{number of conjugacy classes of } G \mbox{ in } F(G,m), \\ K(G,m,s)&=& \mbox{number of conjugacy classes of } G \mbox{ in } F(G,m,s). \eeas Since \beas N(G,m)&=&\sum _{d|m} K(G,d) ,\\ N(G,m,s)&=& \sum _{d|m}K(G,d,s) , \eeas we have, by the M\"obius inversion formula, \bea \label{kgm} K(G,m)&=&\sum _{d|m} \mu (d) N(G,\frac{m}{d}) , \\ \label{kgms} K(G,m,s)&=& \sum _{d|m} \mu (d) N(G,{m\over d},s) , \eea where $\mu(d)$ is the M\"{o}bius function. The reader is invited to obtain $K(G,m)$ and $K(G,m,s)$ from Table 1 and equations (\ref{kgm}) and (\ref{kgms}) above. \[ \renewcommand{\baselinestretch}{2.5} \normalsize \begin{array}{|cc| c| c|} \hline \multicolumn{4}{|c|}{\mbox{{\bf Table 1: Number of conjugacy classes of elements of finite order in Lie groups}}}\\ \hline G& m & \hskip .5cm N(G,m)&\hskip .5cm N(G,m,s)\\ \hline U(n)&\mbox{any}& {n+m-1\choose m-1} &{s\over n} {n\choose s}{m\choose s} \\ SU(n)& (n,m)=1& \frac{1}{m}{n+m-1\choose n}& {s \over nm} {n\choose s}{m\choose s} \\ & \mbox{any}&{1\over m}\sum\limits _{d|(n,m)}\phi(d) {(n+m-d)/ d \choose n/ d}& {1\over m}\sum\limits _{d|(n,m)} \sum \limits_{ j\geq 0}\phi(d) {(n+m-jd-d)/ d \choose (n-jd)/ d}{m/d \choose j}{jd\choose s} (-1)^{j+s} \\ Sp(n)&\mbox{any}& { n+[\frac{m}{2}]\choose n} & {s\over n} { n \choose s} { [\frac{m}{2}]+1\choose s}\\ SO(2n+1)&\mbox{any}& { n+[\frac{m}{2}] \choose n }& {s\over n} { n \choose s } { \left [ {m\over 2}\right ] +1\choose s }\\ O(2n+1)&2k+1& { n+[\frac{m}{2}]\choose n } & {s\over n} { n \choose s } { \left [ {m\over 2}\right ] +1\choose s }\\ O(2n)&2k+1& { n+[\frac{m}{2}] \choose n }& {s\over n} { n \choose s }{ \left [ {m\over 2}\right ] +1\choose s} \\ SO(2n)&2k+1& { 
n+[\frac{m}{2}]-1\choose n-1} \frac{n+m-1}{n} & {s\over n} { n \choose s } {\left [ {m\over 2}\right ] \choose s } {m+1-s\over \left [ {m\over 2}\right ]+1-s } \\ O(2n+1)&2k&2{ n+\frac{m}{2}\choose n} & {2s\over n} { n \choose s }{ {m\over 2}+1 \choose s}\\ O(2n)&2k& { n+\frac{m}{2} -1\choose n-1 }{4n+m\over 2n} & {2n-s-1\over n-s}{ n-2 \choose s -1 }{ {m\over 2}+1 \choose s}\\ SO(2n)&2k& { n+\frac{m}{2}\choose n}+{ n+\frac{m}{2}-2\choose n }& {s\over n} { n \choose s } \left [ 2{ {m\over 2}\choose s } + { {m\over 2}-1\choose s-2 } \right ] \\ \hline \end{array}\] \renewcommand{\baselinestretch}{1.3} \normalsize \newpage \section{Counting conjugacy classes in unitary groups} We begin with $N(U(n), m)$, with no conditions on the integers $m$ and $n$. Since every element of $U(n)$ is diagonalizable, every conjugacy class has diagonal elements. The diagonal entries are $m^{th}$ roots of unity, $e^{2\pi i k_j/m}$, $k_j=0, \ldots, m-1$, and $j=1, \ldots , n$. In each conjugacy class there is a unique diagonal element for which the diagonal entries are ordered so that the $k_j$ are nondecreasing with $j$. Therefore, $N(U(n), m)$ is the number of such diagonal matrices with nondecreasing $k_j$. Let $\{n_k\}=( n_0, \ldots , n_{m-1})$, $\sum_{k=0}^{m-1} n_k = n$ with $n_k\geq 0$. Such a sequence is a weak $m$-composition of $n$, and it is well-known that there are ${n+m-1 \choose m-1}$ such sequences \cite{stanley}. There is a bijective map between such sequences and diagonal matrices in $U(n)$ with ordered entries: $\{n_k\}$ corresponds to the diagonal $U(n)$ matrix with $n_k$ repetitions of the eigenvalue $e^{2\pi i k/m}$: \be \mbox{diag} ( \underbrace{1, 1, \cdots , 1}_{n_0}, \underbrace{e^{2\pi i/m}, \cdots , e^{2\pi i/m}}_{n_1}, \cdots , \underbrace{e^{2(m-1)\pi i/m}, \cdots , e^{2(m-1)\pi i/m}}_{n_{m-1}})~. \ee Thus $N(U(n), m)$ is the number of weak $m$-compositions of $n$, so we obtain the following formula. 
\begin{formula}\label{qinup} For any positive integers $n$ and $m$, \be N(U(n),m)={n+m-1 \choose m-1} \ee \end{formula} Note that $N(U(n), m)$ is also the number of inequivalent unitary representations of $\Z/m\Z$ of dimension $n$. Now we turn to the special unitary group $SU(p)$, and calculate $N(SU(p),q)$ where $(p,q)=1$. Given a sequence $\{n_k\}$, $k=0, \ldots , q-1$ with $\sum_{k=0}^{q-1} n_k = p$, $n_k\geq 0$ (i.e. a weak $q$-composition of $p$), the determinant of the corresponding matrix $x$ is $\exp {2\pi i \over q}\left (\sum_{k=0}^{q-1} kn_k \right )$, so the condition $\det x=1$ requires $\sum_k kn_k \equiv 0 \mbox{ mod } q$. Thus for a weak $q$-composition of $p$ to determine a matrix in $SU(p)$, we need $\sum_k kn_k \equiv 0 \mbox{ mod } q$. We now show that the weak $q$-compositions of $p$ are partitioned into sets of size $q$, each of which contains exactly one composition with $\sum kn_k \equiv 0$. Consider the $q$ distinct sequences \be \{n_k^{(j)}\}=\{n_{k+j}\} \hskip 1cm j=0, 1, \ldots , q-1~, \mbox{ indices are understood mod q.} \ee (If two of these sequences coincided, $\{n_k\}$ would be periodic with some period $d<q$ dividing $q$, so $p=\sum_k n_k$ would be a multiple of $q/d>1$, which is impossible when $(p,q)=1$.) The determinant of the matrix $x_j$ corresponding to the $j^{th}$ sequence is $\exp {2\pi i \over q}\left (\sum_{k=0}^{q-1} kn_{k+j} \right )$. Since $(p,q)=1$ and \be \sum_{k=0}^{q-1} kn_{k+j}-\sum_{k=0}^{q-1} kn_{k+j+1} \equiv p \mbox{ mod } q~,\ee exactly one of the $q$ values of $j$ gives the sum $\sum_k kn_{k+j}\equiv 0 \mbox{ mod } q$, so $\det x_j =1$ for that value of $j$. We therefore get the next result. \begin{formula}\label{qinsup} For $(p,q)=1$, \be \label{qinsupder} N(SU(p),q)=\frac{1}{q} {p+q-1 \choose q-1}= \frac{(p+q-1)!}{p!\, q!}~. \ee \end{formula} Now we turn to counting conjugacy classes whose elements have a given number $s$ of distinct eigenvalues. We begin with $N(U(n),m,s)$.
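The two counts just derived, $N(U(n),m)={n+m-1\choose m-1}$ and $N(SU(p),q)={1\over q}{p+q-1\choose q-1}$ for $(p,q)=1$, can be double-checked by directly enumerating eigenvalue multisets. The following Python script is our own sanity check, not part of the paper:

```python
from itertools import combinations_with_replacement
from math import comb, gcd

def N_U(n, m):
    # conjugacy classes in U(n): multisets of n eigenvalues drawn from
    # the m-th roots of unity, i.e. weak m-compositions of n
    return sum(1 for _ in combinations_with_replacement(range(m), n))

def N_SU(p, q):
    # additionally impose det = 1: the exponents must sum to 0 mod q
    return sum(1 for c in combinations_with_replacement(range(q), p)
               if sum(c) % q == 0)

for n in range(1, 6):
    for m in range(1, 6):
        assert N_U(n, m) == comb(n + m - 1, m - 1)
        if gcd(n, m) == 1:
            assert N_SU(n, m) == comb(n + m - 1, m - 1) // m
```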
A $U(n)$ matrix with $s$ distinct eigenvalues (which has centralizer of the form $\Pi _{i=1}^s U(n_i)$) corresponds to a sequence $\{n_a\} = (n_1, \ldots , n_s)$, $\sum _{a=1}^s n_a = n$, $n_a\geq 1$. Such a sequence is an $s$-composition of $n$ and there are ${n-1\choose s-1}$ such sequences \cite{stanley}. There are also ${m\choose s}$ ways to choose the $s$ eigenvalues themselves. We therefore obtain the following formula. \begin{formula} \label{qsinup}For any positive integers $n$ and $m$, $$ N(U(n),m,s)=\binom{n-1}{s-1}\binom ms = \frac{s}{n} \binom ns\binom ms~.$$ \end{formula} For the special unitary group, again we impose $(p,q)=1$. Given an $s$-composition of $p$, $\{n_a\} = (n_1, \ldots , n_s)$, $\sum _{a=1}^s n_a = p$, $n_a>0$, consider $\{ \lambda _a \} = (\lambda _1 , \ldots , \lambda _s)$ where $\lambda _a \in \{0, \ldots , q-1 \}$ determine the eigenvalues $e^{2\pi i\lambda _a\over q}$ with multiplicity $n_a$ of the corresponding matrix. Arrange the $\binom qs s! \hskip .1cm$ possibilities for $\{ \lambda _a \} $ in sets of size $q$ given by \be \{ \lambda _a^{(j)}\}=(\lambda _1+j, \ldots , \lambda _s+j),\; \; j=0,\ldots ,q-1 \hskip .5cm (\mbox{all numbers are understood mod }q). \ee The determinant of the matrix $x_j$ corresponding to the $j^{th}$ choice is \[ \exp {2\pi i \over q}\left (\sum _{a=1}^s n_a (\lambda _a +j)\right ) .\] Since $(p,q)=1$ and \[ \sum _a n_a (\lambda _a+j)-\sum _a n_a(\lambda _a+j+1) \equiv -p \mbox{ mod } q,\] exactly one of the $q$ matrices has determinant $1$. Since so far neither the $\lambda _a$'s nor the $n_a$'s have been ordered, once we arrange the eigenvalues to have increasing $\lambda _a$'s, each matrix would appear $s!$ times. Dividing by $s!q$, we obtain the following formula. \begin{formula} \label{qsinsup} For $(p,q)=1$, \be N(SU(p),q,s)=\frac{1}{q} \binom{p-1}{s-1}\binom qs = \frac{s}{pq}\binom ps\binom qs ~. \ee \end{formula} From Theorems \ref{qinsup} and \ref{qsinsup}, we deduce an intriguing symmetry between $p$ and $q$.
\begin{corollary} For $(p,q)=1$, \beas N(SU(p),q)&=&N(SU(q),p);\\ N(SU(p),q,s)&=&N(SU(q),p,s). \eeas \end{corollary} This symmetry has implications involving dualities of gauge theories; see \cite{Friedmann1}. It is clear that for any $G$ and $m$, we must have \be \label{sum}\sum _s N(G,m,s)=N(G,m).\ee Since $N(G,m,s)=0$ when $s>m$, the sum is finite. Applying equation~(\ref{sum}) to $G=U(n)$ gives \be \label{id} \sum _s {n-1 \choose s-1}{m\choose s} = {n+m-1\choose m-1}, \ee which is a special case of the Chu-Vandermonde identity \cite{stanley}. We may also obtain both $N(SU(n),m)$ and $N(SU(n),m,s)$ without requiring $(n,m)= 1$ via a generating function approach. Let \[ F(x,t,u)= \prod _{k=0}^{m-1} \left (1+u\sum_{a=1}^\infty (t^k x)^a \right ). \] A typical term in $F(x,t,u)$ is \[x^{\sum n_k}\, t^{\sum kn_k}\, u^s ,\] where $n_k$, $k=0, \ldots, m-1$ are nonnegative integers and $s$ is the number of $k$'s for which $n_k\neq 0$. If $\sum n_k=n$ and $\sum kn_k\equiv 0$ mod $m$ then the sequence $\{ n_k\}$ corresponds to a diagonal $SU(n)$ matrix of order $m$ with $s$ distinct eigenvalues. To pick out the terms in $F(x,t,u)$ for which $\sum kn_k\equiv 0$ mod $m$, let $\zeta = \exp {2\pi i /m}$ and recall \[{1\over m}\sum _{j=0}^{m-1}\zeta ^{jb}=\left \{ \begin{array}{l} 1,\ \mbox{ if } m|b \\ 0,\ \mbox{ else} \end{array} \right . , \] so $$ G(x,u)= {1\over m}\sum_{j=0}^{m-1}F(x,\zeta^j,u)=\sum_{n,s}N(SU(n),m,s)x^nu^s . $$ Rewriting \[ 1+u\sum_{a=1}^\infty (t^k x)^a = (1-u)+{u\over 1-t^kx} = {1-t^k(1-u)x\over 1-t^kx},\] we have $$ G(x,u)={1\over m}\sum_{j=0}^{m-1}\prod _{k=0}^{m-1} {1-\zeta^{kj}(1-u)x \over 1-\zeta^{kj}x} .$$ For $\zeta ^j$ a primitive $d^{th}$ root of unity, we have the factorization $1-x^d=\prod_{l=0}^{d-1} (1-\zeta^{jl}x)$. 
Since $\zeta ^j$, $j=0, \ldots, m-1$ is a primitive $d^{th}$ root of unity $\phi (d)$ times, where $\phi(d)$ is Euler's function, we have $$ G(x,u)={1\over m}\sum_{d|m}\phi(d) {\left [ 1-(1-u)^dx^d\right ] ^{m/d} \over (1-x^d)^{m/d} }. $$ Expanding in binomial series gives \[ G(x,u)= {1\over m}\sum_{d|m}\phi(d) \sum _{k,j,l\geq 0} {k+m/d-1 \choose k}{m/d \choose j}{jd\choose l}(-1)^{j+l}\, x^{d(k+j)}u^l .\] Setting $d(k+j)=n$ and $l=s$ yields the next theorem. \begin{formula}\label{msinsun} For any positive integers $n,m$, and $s$, $$ N(SU(n),m,s)= {1\over m}\sum_{d|(n,m)}\sum _{j\geq 0}\phi(d) {n/d+m/d - j -1 \choose n/d-j}{m/d\choose j}{jd\choose s} (-1)^{j+s}. $$ \end{formula} We may deduce from Theorems~\ref{msinsun} and \ref{qsinsup} that for $(p,q)=1$, $$ {1\over q}\sum _{j\geq 0} {p+q-j-1 \choose p-j}{q\choose j}{j\choose s}(-1)^{j+s}={s\over pq}{p\choose s}{q\choose s} .$$ For $N(SU(n),m)$ we apply equation~(\ref{sum}), or equivalently set $u=1$ in $G(x,u)$, and obtain (see also \cite{lossers}) the next result. \begin{formula}\label{minsun} For any positive integers $n$ and $m$, $$ N(SU(n),m)={1\over m}\sum_{d|(n,m)}\phi(d) {n/d+m/d -1 \choose n/d}. $$ \end{formula} \section{Counting conjugacy classes in symplectic groups} The diagonal elements of $U(n)$ and $SU(p)$ that we counted in the previous section belong to the maximal tori of those groups. For $\mathrm{Sp}(n)\equiv \mathrm{Sp}(n,\C)\cap U(2n)$, the maximal torus is \be \label{spntorus} T_{\mathrm{Sp}(n)}=\left \{ (e^{2\pi i\theta _1}, \ldots , e^{2\pi i\theta _n}, e^{-2\pi i\theta _1}, \ldots , e^{-2\pi i\theta _n}) \right \}. \ee Since $\mathrm{Sp}(n)$ is compact and connected, we have $\mathrm{Sp}(n)=\bigcup_{x\in G} \; xT_{\mathrm{Sp}(n)}x^{-1}$. Hence, every element $x\in G$ can be conjugated into the torus, so every conjugacy class has elements in $T_{\mathrm{Sp}(n)}$. 
Any two elements $x$ and $x'$ of $T_{\mathrm{Sp}(n)}$ that differ only by $\theta _l'=-\theta_l$ for some $l$'s are in the same conjugacy class; the symplectic matrix $E_{l, n+l}-E_{n+l, l}$, where $(E_{ab})_{cd}=\delta_{ac}\delta_{bd}$, conjugates them. So a conjugacy class is fully determined by $n$ values of $\theta _l$ restricted to $[0,1/2]$. Conjugacy classes of elements of order $m$ have a unique element in $T_{\mathrm{Sp}(n)}$ such that $\theta _l\in {1\over m}(0,1,\ldots , [{m\over 2}])$ and the $\theta _l$ are nondecreasing as $l$ runs from 1 to $n$. Following the arguments leading to Theorem \ref{qinup}, and noting that here we have weak $([{m\over 2}]+1)$-compositions of $n$, rather than weak $m$-compositions of $n$, we obtain our next theorem. \begin{formula} \label{qinspn} For any positive integers $n$ and $m$, $$ N(\mathrm{Sp}(n),m)=\binom{n+[\frac{m}{2}]} {[\frac m2]}.$$ \end{formula} We now consider $N(\mathrm{Sp}(n), m, s)$ where $s$ denotes the number of complex conjugate pairs of eigenvalues. Following the arguments leading to Theorem \ref{qsinup}, but replacing $m$ by $([{m\over 2}]+1)$, we obtain the next result.
\begin{formula}\label{qsinspn} For any positive integers $n$, $m$, and $s$, $$ N(\mathrm{Sp}(n), m, s)=\binom{n-1}{s-1} \binom{[\frac{m}{2}]+1}{s}.$$ \end{formula} \section{Counting conjugacy classes in orthogonal groups} The maximal tori of the different orthogonal groups depend on the parity of $l$ in $SO(l)$ or $O(l)$ and also on whether the orthogonal group is special or not: \bea \label{so2ntor} T_{SO(2n)}&=&\left \{ \mbox{diag} (A(\theta _1), A(\theta _2), \ldots , A(\theta _n) ) \right \}, \\ \label{so2n+1tor} T_{SO(2n+1)}&=&\left \{ \mbox{diag} (A(\theta _1), A(\theta _2), \ldots , A(\theta _n), 1 ) \right \}, \\ \label{o2ntor} T_{O(2n)}&=&\left \{ \begin{array}{l}T_{1,even}=\mbox{diag} (A(\theta _1), A(\theta _2), \ldots , A(\theta _n) ) \\ T_{2,even}=\mbox{diag}(A(\theta _1), A(\theta _2), \ldots , A(\theta _{n-1}), B ) \end{array} \right \},\\ \label{o2n+1tor} T_{O(2n+1)}&=&\left \{ \begin{array}{l} T_{1,\mathrm{odd}}= \mbox{diag} (A(\theta _1), A(\theta _2), \ldots , A(\theta _n), 1 ) \\ T_{2,\mathrm{odd}}= \mbox{diag} (A(\theta _1), A(\theta _2), \ldots , A(\theta _n), -1 )\end{array}\right \}, \eea where \be A(\theta)=\left ( \begin{array}{cc} \cos 2\pi \theta & \sin 2\pi \theta \\ -\sin 2\pi \theta & \cos 2\pi \theta \end{array} \right )\; ; \; B=\left ( \begin{array}{cc} 1 & 0 \\ 0& -1 \end{array} \right ). \ee In equations (\ref{o2ntor}) and (\ref{o2n+1tor}), the maximal torus is made of two parts. The first has elements of determinant $1$ and is identical to the tori of equations (\ref{so2ntor}) and (\ref{so2n+1tor}), respectively; the second has elements of determinant $-1$. The identity \be \label{Bconj} BA(\theta)B^{-1}=A(-\theta) \ee will become useful below. With the maximal tori defined as above, every element of the orthogonal group can be conjugated to the torus, so each conjugacy class has a nonempty intersection with the group's maximal torus. 
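The symplectic count $N(\mathrm{Sp}(n),m)=\binom{n+[m/2]}{n}$ can likewise be verified by brute force: enumerate all $n$-tuples of exponents $k$, reduce each entry by the Weyl symmetry $k\mapsto m-k$, sort, and count the distinct results. Again, this is a sanity-check script of ours, not part of the paper:

```python
from itertools import product
from math import comb

def N_Sp_bruteforce(n, m):
    # an element of Sp(n) with x^m = 1 has eigenvalue pairs e^{±2πik/m};
    # the Weyl group permutes the n pairs and flips k -> m-k
    classes = set()
    for ks in product(range(m), repeat=n):
        canon = tuple(sorted(min(k, m - k) for k in ks))
        classes.add(canon)
    return len(classes)

for n in range(1, 4):
    for m in range(1, 7):
        # matches the closed form binom(n + floor(m/2), n)
        assert N_Sp_bruteforce(n, m) == comb(n + m // 2, n)
```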
The counting of conjugacy classes depends on the parity of the order $m$ of the elements, so we treat the odd and even cases separately. \subsection{Odd $m$} We begin with $N(SO(2n+1),m)$. The block-diagonal matrix diag$(B, I_{2n-2},-1)$ is an element of $SO(2n+1)$ and equation~(\ref{Bconj}) shows that conjugation by it takes $x\in T_{SO(2n+1)}$ to $x'\in T_{SO(2n+1)}$ where $\theta _1'= -\theta_1$ and the other $\theta_l$ remain the same. Similarly, two elements $x$ and $x'$ of $T_{SO(2n+1)}$ that differ by $\theta_l'=-\theta_l$ for any $l=1,\ldots , n$ belong to the same conjugacy class. We therefore consider only elements of $T_{SO(2n+1)}$ with $\theta _l \in [0,1/2]$ as we did for the symplectic case. As before, we order the $\theta_l$ to be nondecreasing with $l$. For elements of order $m$, we have $\theta _l \in {1\over m}(0,1,\ldots , [{m\over 2}])$. So $N(SO(2n+1),m)$ is the number of weak $([{m\over 2}]+1)$-compositions of $n$. \begin{formula} \label{oddqinoddso} For any positive integer $n$ and any odd integer $m=2k+1$, $$ N(SO(2n+1),m)=\binom{n+ \left [ {m\over 2} \right ]}{\left[ {m\over 2}\right]}.$$ \end{formula} For $O(2n+1)$, there are two conjugacy classes of maximal tori, i.e. $T_{SO(2n+1)}$, and $T_{2,\mathrm{odd}}$ in equation~(\ref{o2n+1tor}). However, all elements of $T_{2,\mathrm{odd}}$ have even order, so none has order $m=2k+1$. Therefore, the number of conjugacy classes of elements of odd order in $O(2n+1)$ is the same as that for $SO(2n+1)$, so we get the following result. \begin{formula} \label{oddqinoddo} For any positive integer $n$ and any odd integer $m=2k+1$, $$ N(O(2n+1),m)=\label{nq1} \left ( \begin{array}{c} n+\left [ {m\over 2} \right ] \\ \left [ {m\over 2} \right ] \end{array} \right ) .$$ \end{formula} For $O(2n)$, again $T_{2,even}\in T_{O(2n)}$ does not play a role when $m$ is odd. Also, the block diagonal matrix diag$(B, I_{2n-2})$ is an element of $O(2n)$, so the results for $O(2n+1)$ and $O(2n)$ are the same. 
\begin{formula} \label{oddqineveno} For any positive integer $n$ and any odd integer $m=2k+1$, $$ N(O(2n),m)=\left ( \begin{array}{c} n+\left [ {m\over 2} \right ] \\ \left [ {m\over 2} \right ] \end{array} \right ) ~.$$ \end{formula} Things become more subtle for $SO(2n)$: diag$(B,I_{2n-2})$ has determinant $-1$ so it is not an element of $SO(2n)$. Therefore, it is no longer the case that if $x,x'\in T_{SO(2n)}$ differ only by $\theta_i'=-\theta_i$ for some $i$'s then $x$ and $x'$ are necessarily in the same conjugacy class. However, the block diagonal matrix diag$(B,B,I_{2n-4})$ \emph{is} in $SO(2n)$, so if $\theta_l'=-\theta_l$ for an even number of $l$'s, $x$ and $x'$ are in the same conjugacy class. There are two cases to consider: $\theta_1'=\theta_1=0$ and $\theta_l\neq 0$ for all $l$. In the first case, $A(\theta_1)=A(\theta_1')=I_2$, and if $\theta_l'=-\theta_l$ for any additional $l\geq 2$ (not necessarily an even number of times), then $x$ and $x'$ are in the same conjugacy class. The number of conjugacy classes that are represented by elements of $T_{SO(2n)}$ with $\theta_1=0$ is the number of weak $\left ( \left [ {m\over 2} \right ]+1\right )$-compositions of $n-1$. In the second case $\theta_l\neq 0$ for all $l$, the number of classes is the number of weak $\left [ {m\over 2} \right ]$-compositions of $n$; since here, flipping the sign of one $\theta _l$, say $\theta_1'=-\theta_1$ and leaving the others fixed lands in a different conjugacy class, we multiply the number by two to include all the classes. This leads to the following theorem. \begin{formula} \label{oddqinevenso} For any positive integer $n$ and any odd integer $m=2k+1$, $$ N(SO(2n),m)=\binom{n+\left [ {m\over 2} \right ] -1}{\left [ {m\over 2} \right ]} +2 \binom{n+\left [ {m\over 2} \right ]-1}{ \left [ {m\over 2} \right ]-1} = \binom{n+\left [ {m\over 2} \right ] -1}{ \left [ {m\over 2} \right]} \frac{n+m-1}{n} ~. 
$$ \end{formula} We now turn to $N(SO(2n+1),m,s)$, where as for the symplectic groups, $s$ denotes the number of distinct conjugate pairs of eigenvalues of the elements. For all the orthogonal groups, there are $n$ $\theta_l$'s and $\binom{n-1}{s-1}=\frac sn\binom ns$ ways to partition them into $s$ nonzero parts. There are $\left [{m\over 2}\right ]+1$ possible values for the $\theta _l$. The same is true for $O(2n+1)$ and $O(2n)$, yielding the next result. \begin{formula} For any positive integers $n$ and $s$, and any odd integer $m=2k+1$, $$ N(SO(2n+1),m,s)=N(O(2n+1),m,s)=N(O(2n),m,s)={s\over n} \binom ns\binom{\left [ {m\over 2}\right ] +1}{ s}. $$ \end{formula} The above derivation does not apply to $SO(2n)$ because, as before, some classes need to be counted twice due to the absence of diag$(B, I_{2n-2})$ in $SO(2n)$. First, we divide the $n$ eigenvalue pairs into $s$ nonzero parts ($s$-compositions of $n$). In choosing the $s$ eigenvalues out of the $\left [ {m\over 2}\right ] +1$ possibilities, we differentiate the cases where $\theta_1=0$, which we count once, from the cases where $\theta_1\neq 0$, which we need to count twice to account for $\theta_1'=-\theta _1$, $\theta_l'=\theta_l$, $l>1$, which is in a distinct conjugacy class. We get the following formula. \begin{formula} \label{soddqinevenso}For any positive integers $n$ and $s$ and any odd integer $m=2k+1$, \beas N(SO(2n),m,s)&=&\left ( \begin{array}{c} n-1 \\ s -1 \end{array} \right ) \left [ \left ( \begin{array}{c} \left [ {m\over 2}\right ] \\ s-1 \end{array} \right )+2 \left ( \begin{array}{c} \left [ {m\over 2}\right ] \\ s \end{array} \right ) \right ] \\ &=&{s\over n} \left ( \begin{array}{c} n \\ s \end{array} \right )\left ( \begin{array}{c} \left [ {m\over 2}\right ] \\ s \end{array} \right ){m+1-s\over \left [ {m\over 2}\right ]+1-s}~. \eeas \end{formula} \subsection{Even $m$} Unlike the case for odd $m$, here we will have to consider $T_2$ in both $O(2n)$ and $O(2n+1)$.
There will also be changes from the odd $m$ case due to the fact that $\theta_l=1/2$, corresponding to $A(\theta_l)=-I_2$, can appear. For $SO(2n+1)$, we have essentially the same as we did for odd $m$, i.e. weak $\left ( {m\over 2} +1 \right )$-compositions of $n$. \begin{formula} \label{evenqinoddso} For any positive integer $n$ and any even integer $m=2k$, $$ N(SO(2n+1),m)=\left ( \begin{array}{c} n+\frac{m}{2}\\ \frac{m}{2} \end{array} \right ).$$ \end{formula} For $O(2n+1)$, we have to consider conjugacy classes with elements whose determinant is $-1$, that is, elements of the second part $T_{2,\mathrm{odd}}$ of the torus $T_{O(2n+1)}$, not just the elements of determinant $1$ as we did previously. But the counting is exactly the same as in $T_{1,\mathrm{odd}}$, so the next theorem follows. \begin{formula} \label{evenqinoddo} For any positive integer $n$ and any even integer $m=2k$, $$ N(O(2n+1),m)=2\left ( \begin{array}{c} n+\frac{m}{2}\\ \frac{m}{2}\end{array} \right ) .$$ \end{formula} Turning to $O(2n)$, we note that elements in $T_{2, even}$ have only $n-1$ $\theta_l$'s. Other than that, the counting is the same as before, yielding the next result. \begin{formula} \label{evenqineveno} For any positive integers $n$ and any even integer $m=2k$, \beas N(O(2n),m)&=&\left ( \begin{array}{c} n+\frac{m}{2}\\ \frac{m}{2} \end{array} \right ) +\left ( \begin{array}{c} n+\frac{m}{2} -1\\ \frac{m}{2} \end{array} \right ) \\ &=& \left ( \begin{array}{c} n+\frac{m}{2} -1\\ \frac{m}{2} \end{array} \right ){4n+m\over 2n}~. \eeas \end{formula} For $SO(2n)$, again we need to be careful since $\theta_l'=\pm \theta_l$ does not always mean $x$ and $x'$ are in the same conjugacy class. Only when at least one of the $\theta_l$ is $0$ or $1/2$, so that $A(\theta_l)=\pm I_2$ for that $l$, which commutes with $B$, does $\theta _l'=\pm\theta_l$ mean $x$ and $x'$ are in the same conjugacy class. 
If no $\theta _l$ is 0 or $1/2$ then if say $\theta _1'=-\theta_1$ and $\theta_l'=\theta_l$, $l>1$, we have a different conjugacy class for $x$ and $x'$. The number of conjugacy classes such that at least one $\theta_l$ is 0 or $1/2$ is the number of weak $\left ( {m\over 2} +1 \right )$-compositions of $n-1$ (where we have fixed $\theta_1=0$) plus the number of weak $\left ( {m\over 2} \right )$-compositions of $n-1$ (where we do not allow $\theta_l=0$ and we require $\theta_l=1/2$ for some $l$). The number of conjugacy classes where no $\theta_l$ is 0 or $1/2$ is twice the number of weak $\left ( {m\over 2} -1 \right )$-compositions of $n$. After some algebra we obtain the next result. \begin{formula} \label{evenqinevenso} For any positive integer $n$ and any even integer $m=2k$, $$ N(SO(2n),m)=\left ( \begin{array}{c} n+\frac{m}{2}\\ \frac{m}{2} \end{array} \right ) +\left ( \begin{array}{c} n+\frac{m}{2}-2\\ \frac{m}{2}-2 \end{array} \right ) . $$ \end{formula} For $N(SO(2n+1),m,s)$, we have the same calculation as for odd $m$, and for $N(O(2n+1),m,s)$, we simply double the result to account for the elements in $T_{2,\mathrm{odd}}$, giving the following formulas. \begin{formula}For any positive integers $n$ and $s$ and any even integer $m=2k$, \beas N(SO(2n+1),m,s) &=&{s\over n} \left ( \begin{array}{c} n \\ s \end{array} \right ) \left ( \begin{array}{c} {m\over 2}+1 \\ s \end{array} \right ); \eeas \beas N(O(2n+1),m,s) &=&{2s\over n} \left ( \begin{array}{c} n \\ s \end{array} \right ) \left ( \begin{array}{c} {m\over 2}+1 \\ s \end{array} \right ) . \eeas \end{formula} Next is $O(2n)$, where $T_{2,even}$ has only $n-1$ $\theta_l$'s, so the contribution from $T_{2,even}$ differs from that from $T_{1,even}$ by replacing $n$ with $n-1$. After some algebra we get the following theorem. \begin{formula}For any positive integers $n$ and $s$ and any even integer $m=2k$, $$ N(O(2n),m,s) ={2n-s-1\over n-s}\binom{n-2}{ s -1} \binom{{m\over 2}+1}{ s}. 
$$ \end{formula} For $SO(2n)$, for each $s$-composition of $n$, the number of conjugacy classes of $T_{SO(2n)}$ with $\theta _l\neq 0, 1/2$ for all $l$ is $\binom {{m\over 2}-1}{s}$ and the number of conjugacy classes with at least one $\theta _l=0, 1/2$ is the sum of $\binom {{m\over 2}}{s-1}$, which gives the number of conjugacy classes with $\theta_1=0$, and $\binom { {m\over 2}-1}{ s-1}$ which gives the number of conjugacy classes with $\theta_l\neq 0 \; \forall l$ and $\theta_l=1/2$ for some $l$. As before, we multiply the number for $\theta_l\neq 0, 1/2$ by 2, and add the rest. After some algebra, we have our final result. \begin{formula}For any positive integers $n$ and $s$ and any even integer $m=2k$, $$ N(SO(2n),m,s)=\left ( \begin{array}{c} n-1 \\ s -1 \end{array} \right ) \left [ \left ( \begin{array}{c} {m\over 2}+1\\ s \end{array} \right ) + \left ( \begin{array}{c} {m\over 2}-1\\ s \end{array} \right ) \right ].$$ \end{formula} \vskip 1cm \noindent {\bf Acknowledgments} It is a pleasure to thank Jonathan Pakianathan, Steve Gonek, Ben Green, and Dragomir Djokovic for discussions. It is also a pleasure to thank Jonathan Pakianathan for comments on an earlier draft. The work of the first author was supported in part by US DOE Grant number DE-FG02-91ER40685, and of the second author in part by NSF Grant number DMS-1068625.
TITLE: Lie group about the quantum harmonic oscillator QUESTION [4 upvotes]: We know that in the quantum harmonic oscillator, $H=a^\dagger a$, $a^\dagger$, $a$, $1$ span a Lie algebra, where $a$, $a^\dagger$ are the annihilation and creation operators and $H$ is the Hamiltonian operator. The algebraic relations are the following: $$[H,a^\dagger]= a^\dagger$$ $$[H,a]=-a$$ $$[a,a^\dagger]=1$$ $$[H,1]=[a,1]=[a^\dagger,1]=[a,a]=[a^\dagger,a^\dagger]=[1,1]=[H,H]=0$$ So these four operators, $H=a^\dagger a$, $a^\dagger$, $a$, $1$, span a Lie algebra, because the commutator satisfies closure and the Jacobi identity. We know that for any Lie algebra $\mathscr{G}$ there exists, up to differences of topology, only one Lie group $G$ whose Lie algebra is $\mathscr{G}$. So what is the Lie group whose Lie algebra is spanned by $\{H=a^\dagger a , a^\dagger ,a ,1\}$? REPLY [4 votes]: Note that this is a $4$-dimensional solvable Lie algebra and $a$, $a^\dagger$, $1$ span an ideal isomorphic to the $3$-dimensional Heisenberg algebra. So one realization is obtained by taking $$ a=\left(\begin{array}{ccc}0&1&0\\0&0&0\\0&0&0\end{array}\right),\quad a^\dagger=\left(\begin{array}{ccc}0&0&0\\0&0&1\\0&0&0\end{array}\right),\quad 1=\left(\begin{array}{ccc}0&0&1\\0&0&0\\0&0&0\end{array}\right). $$ Now the adjoint action of $H$ on this ideal has these matrices as eigenvectors with respective eigenvalues $-1$, $1$, $0$ (matching $[H,a]=-a$ and $[H,a^\dagger]=a^\dagger$), so we finish by taking $$ H=\frac12\left(\begin{array}{ccc}-1&0&0\\0&1&0\\0&0&-1\end{array}\right). $$ We can also realize the elements of this Lie algebra as vector fields on $\mathbb R^4$. 
Namely, we modify the standard realization of the Heisenberg algebra as vector fields on $\mathbb R^3$ (with coordinates $(x,y,z)$) by adding a fourth coordinate $w$: $$ a = e^{-w}\left(\frac{\partial}{\partial x}-\frac y2\frac{\partial}{\partial z}\right),\quad a^\dagger = e^{w}\left(\frac{\partial}{\partial y}+\frac x2\frac{\partial}{\partial z}\right),\quad 1=\frac{\partial}{\partial z},\quad H=\frac{\partial}{\partial w}.$$ (The exponential factors are chosen so that $[H,a]=[\partial_w,a]=-a$ and $[H,a^\dagger]=a^\dagger$, matching the relations above.)
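The bracket relations for a matrix realization of this type are easy to check numerically. A standalone sketch (our own addition, not part of the original answer; the sign of $H$ is fixed so that $[H,a]=-a$, as in the question):

```python
import numpy as np

# Basis of the 4-dimensional solvable Lie algebra as 3x3 matrices.
a = np.array([[0., 1., 0.],
              [0., 0., 0.],
              [0., 0., 0.]])
a_dag = np.array([[0., 0., 0.],
                  [0., 0., 1.],
                  [0., 0., 0.]])
one = np.array([[0., 0., 1.],
                [0., 0., 0.],
                [0., 0., 0.]])
H = 0.5 * np.diag([-1., 1., -1.])  # sign chosen so that [H, a] = -a

def comm(X, Y):
    """Matrix commutator [X, Y] = XY - YX."""
    return X @ Y - Y @ X

assert np.allclose(comm(H, a_dag), a_dag)           # [H, a^dagger] = a^dagger
assert np.allclose(comm(H, a), -a)                  # [H, a] = -a
assert np.allclose(comm(a, a_dag), one)             # [a, a^dagger] = 1
assert np.allclose(comm(H, one), np.zeros((3, 3)))  # 1 is central
```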
Strip mining into the stats slag hill of the Blue Jays season turns up this little nugget: Toronto went 11-12 against rookie starters. So, in their very last game of 2016, Ryan Merritt was no anomaly. Although Kid Merritt didn’t actually get the win as he went only 4 1/3 innings in his share of the 3-0 freeze-out. Thing is, Merritt threw mostly breaking ball slop, abetted somewhat by the umpire giving him the corners. Thus Jays batters repeatedly pivoted away from the plate, trailing their lumber, with body language that bespoke disgust. By Oct. 19, Game 5 of the American League Championship Series, it had become Toronto’s motif, their disdain for the strike zone as boxed out by the masked men. Of course an inch matters. But it shouldn’t have mattered this much, not with the Jays’ hitting proficiency — 51 strikeouts racked up by Cleveland’s pitchers. It will be autopsied from here to spring training, why Toronto was so enfeebled by the moundsmen encountered, because they weren’t all Andrew Miller and Cody Allen doing that thing they do so well; there were plenty of weak links in Cleveland’s injury-devastated starting rotation and the bullpen. And yet the Jays were limited to 31 hits, nine of them in the only game where Toronto prevailed. And just two jacks from a team that had cranked 221 home runs in the regular season, fourth in the AL. Manager John Gibbons sounded almost sheepish at having to state the obvious in his palaver with reporters early Wednesday afternoon. “You can’t win if you don’t score some runs.” And yet this puzzling anemia — the antithesis of Blue Jays baseball — had nearly wrecked the club through its September sag, putting immense pressure on the bullpen to preserve slim leads. That widely criticized relief cadre had no blame in this bracket. They were sterling, surrendering not a single run. But only once were they handed the ball with Toronto up on the scoreboard. And the Indians weren’t exactly lights-out hitters either. 
I know that Toronto players studiously watch video of the pitchers they’ll be facing. It’s part of routine preparation — to identify tendencies and tics, the likelihood of what that guy on the bump will offer on a particular count, because it’s crucial to have a plan before stepping into the box. Split-second reaction time means precious little room to adjust for a fastball that looked like a slider coming out of the pitcher’s hand. So Josh Donaldson, a keen student of hitting and by the sixth inning of Game 5 well acquainted with Miller’s slider, took an intuitive hack at the first pitch he saw, anticipating fastball. But he struck the ball topside, causing it to hit the dirt in front of agile shortstop Francisco Lindor: Double play, inning over. Finer baseball minds than mine will assess what went so dismally wrong for Toronto hitters — inside fastball pouncers who were pitched away, away, away. But even a casual observer would have noticed that the Jays didn’t shorten their bats when the situation called for it, rarely hit the opposite way, especially to stay out of double plays or take advantage of the shift, didn’t make use of all fields — Dioner Navarro, when he pinch-hit, and light-hitting Ryan Goins, when he was in the lineup, were notable exceptions — and relentlessly swung for the fences. It’s in their DNA, the long ball. Doubtless management will be seeking to transform that DNA strand this weekend. Subtracting Jose Bautista and perhaps Edwin Encarnacion (don’t do it) from the roster would entirely recalibrate the team’s essence. What Toronto saw in an immensely likeable and shrewd Cleveland squad is a blueprint of the Jays’ presumable future — speed, smaller ball, matriculation from the carefully cultivated minor-league ranks, switch-hitters, right-left hitting balance. 
There’s little to knock your socks off on that lineup but the Indians knocked off hitting-gaudy Boston and set Toronto aside with remarkable ease. We all await further enlightenment about Shapiro’s master plan because at the moment we have only those infomercials he’s invested on Sportsnet journalists. But is this really the time to extensively re-invent the Jays? Bold changes did indeed put Toronto over the top in the early ’90s after the team had thrice failed to emerge from the AL playoffs between 1985 and 1991. What the Jays lacked in this final series were reliable contact hitters — men on base for the big bombers, although of course the grenade sluggers didn’t pull the pin much. While they managed to string hits together reasonably enough in the regular season and early in the post-season, an inability to cope with the curveball against Cleveland had Jays going down 1-2-3 on 18 occasions through five games. Shapiro has carte blanche to reconfigure this team, albeit not carte blanche via payroll. But the AL East will be even more intensely contended in 2017, with the Yankees a couple of steps ahead, having shed stars for hungry young talent this past summer, a healthy Tampa Bay ripe for rebound and the Red Sox still the Red Sox, unfettered by fiscal restraint in replacing David Ortiz, perhaps with Encarnacion. Key components of the Toronto roster aren’t going anywhere, the likes of Donaldson and Troy Tulowitzki by contract and others — Aaron Sanchez, Marcus Stroman, Roberto Osuna — by dint of major-league time in. It would be daring for Shapiro to convert one of his tiffany assets, maybe Stroman, into hot-blooded hitters because you can’t get something for nothing. “We would love to have everybody back,” Donaldson said in the sombre leave-taking clubhouse. “We would love to have Bats back. We would love to have Eddie back. These guys have been the faces of the franchise for many years now. The fact of the matter is everybody in here . . . 
we’ve grinded together, we shed blood with each other. “Nobody wants to go home. Unfortunately, that’s the case.” They’re gone already. Because that’s baseball, where nobody lingers. Some won’t pass this way again as Jays. Thanks for the memories.
TITLE: Properties of the internal language of the category of sheaves. QUESTION [10 upvotes]: Consider a simple case of sheaves on some topological space $X$, $\operatorname{Sh}(X)$ (recall that a sieve on $U$ is covering iff its $\operatorname{sup}$ is $U$). All of these are Grothendieck toposes but clearly not all of them are equivalent. Is there an overview of what properties of $\operatorname{Sh}(X)$ are induced by the properties of $X$? The properties I am interested in are the logical principles that hold in the internal logic. Of course the properties must necessarily be the properties of the locale of opens $\mathcal{O}(X)$. The elephant (Johnstone) has a list of some properties induced by the properties of $\mathcal{O}(X)$, but most of these are about the existence of geometric morphisms, which of course in some cases allow one to define some modalities in the internal language, but I would also like to know if there are some formulas of ordinary higher-order logic that are valid. For example, suppose $X$ is compact. Is there a useful or interesting logical principle that holds in $\operatorname{Sh}(X)$ but does not hold in sheaves over non-compact spaces? Perhaps some form of choice? REPLY [4 votes]: A topological space (or locale) $X$ is local (i.e. in any covering of $X$ by open subsets at least one such subset is already equal to $X$) iff the internal language of $\mathrm{Sh}(X)$ fulfils the following disjunction property: If $(\phi_i)_{i \in I}$ is an arbitrary family of formulas and $\mathrm{Sh}(X) \models \bigvee_{i \in I} \phi_i$, then there exists an index $i \in I$ such that $\mathrm{Sh}(X) \models \phi_i$. Also, if $X$ is local, the internal language fulfils the following existence property: If $\mathrm{Sh}(X) \models \exists x : \mathcal{F}. \phi(x)$, then there actually exists a global section $x \in \Gamma(X,\mathcal{F})$ such that $\mathrm{Sh}(X) \models \phi(x)$. 
A topological space $X$ is compact iff the internal language of $\mathrm{Sh}(X)$ fulfils the following property: If $I$ is a directed set, $(\phi_i)_{i \in I}$ is a monotone family of formulas (meaning $\mathrm{Sh}(X) \models (\phi_i \Rightarrow \phi_j)$ for $i \preceq j$) and $\mathrm{Sh}(X) \models \bigvee_{i \in I} \phi_i$, then there exists an index $i \in I$ such that $\mathrm{Sh}(X) \models \phi_i$. This follows directly from the order-theoretic characterization of compactness (a space $X$ being compact iff for any monotone family $(U_i)_{i \in I}$ of open sets with $X = \bigcup_i U_i$, there exists $i \in I$ such that $X = U_i$). As an example, you can use the latter characterization to give an internal proof of lemma 01BB in the Stacks Project (saying that if a filtered colimit $\mathcal{F} = \mathrm{colim}_i \mathcal{F}_i$ of $\mathcal{O}_X$-modules is of finite type on a quasi-compact scheme $X$, then one of the maps $\mathcal{F}_i \to \mathcal{F}$ is an epimorphism) by reducing to the following familiar fact of constructive linear algebra: If a filtered colimit $A = \mathrm{colim}_i A_i$ is finitely generated, one of the maps $A_i \to A$ is surjective. Unfortunately, the logical principles above are meta properties of the internal language, so I'm not sure this answers your question. Addendum: An intrinsic characterization of compactness is not possible, i.e. there cannot exist a formula $\phi$ such that the internal language of the sheaf topos of a topological space satisfies $\phi$ iff the space is compact: If a topological space $X$ can be covered by compact subspaces $U_i$, the formula $\phi$ would be satisfied on each $U_i$ and hence, by the local character of the internal language, on $X$ as well. If $X$ itself is not compact, this is a contradiction.
On Friday, April 20, just prior to 6 p.m., the father and his daughter were crossing the road in a crosswalk when an SUV struck the pair and fled the scene. Unfortunately, the father died at the scene, and the daughter was rushed to a local hospital. She is currently listed in critical condition. The mother of the young girl and wife of the deceased father was lucky enough to survive the incident without any injury. She was only a few steps behind her husband and daughter, and they had all just left an afterschool program. She says her husband saw the speeding car coming toward them and moved to protect their daughter, taking the majority of the impact himself. Now, the woman is both grieving a tragic loss and praying for the full recovery of her critically ill daughter. While the daughter's struggle to survive is the most important thing right now, the surviving mother may also want to know exactly what occurred and see the hit-and-run driver brought to justice in California. If a driver is found to have been negligent in causing a car accident that resulted in death or injury, surviving family members may have the legal right to pursue civil claims for wrongful death and personal injury. Though nothing can make a family whole again after the loss of a loved one, a successful claim may assist in holding negligent parties accountable while also providing financial reimbursement for the damages that have been suffered. Source: ABC 7 News, "Willowbrook hit-and-run leaves 7-year-old girl in coma, kills father," Darsha Philips, April 22, 2012
TITLE: Exponent of a finite group $G$ QUESTION [1 upvotes]: Let $G$ be a finite group. The exponent of $G$, $\exp(G)$, is defined as the minimal positive integer $m$ such that $x^m=1$ for all $x \in G$. Prove: $a)$ if $G$ is abelian then $\exp(G)= \max\{\text{ord}(x):x \in G\}$ $b)$ if $G$ is not abelian the previous statement may fail. REPLY [1 votes]: Here is my attempt for $a)$: $G$ is finite abelian, so by the invariant factor decomposition $G \cong \mathbb{Z}/n_1\mathbb{Z} \oplus \cdots \oplus \mathbb{Z}/n_k\mathbb{Z}$ for some $k \in \mathbb{N}$ with $n_i \mid n_{i+1}$ for all $i$. Every element's order divides $n_k$, and the element $(1,\dots,1)$ has order $\operatorname{lcm}(n_1,\dots,n_k)=n_k$. Hence $n_k$ is both the maximal order of an element of $G$ and $\exp(G)$. $b)$ $S_3$ is a counterexample: the maximal element order is $3$, while $\exp(S_3)=\operatorname{lcm}(2,3)=6$.
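The $S_3$ counterexample is easy to verify by brute force; a small standalone sketch (our own, with permutations as tuples and composition and order computed directly):

```python
from itertools import permutations
from math import lcm

def compose(p, q):
    """(p . q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    """Order of a permutation: smallest k >= 1 with p^k = identity."""
    identity = tuple(range(len(p)))
    power, k = p, 1
    while power != identity:
        power = compose(power, p)
        k += 1
    return k

s3_orders = [order(p) for p in permutations(range(3))]
max_order = max(s3_orders)   # 3, achieved by the 3-cycles
exponent = lcm(*s3_orders)   # 6 = lcm(2, 3), so exp(S_3) != max order
assert (max_order, exponent) == (3, 6)
```

(`math.lcm` with multiple arguments requires Python 3.9+.)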
Nino Bederan's message for the people who try to discredit him on 'Sully My Name' Nino Bederan is a French-bred self-produced recording artist, based in New York City. With his growing discography, the young artist incorporates elements of pop and R&B music into his projects, creating an irresistibly ear-grabbing style of Rap music. Having studied Hip-Hop for years along with its contemporaries, the young MC prides himself on his lyricism, consistently showcasing his intricate rhyme schemes, cinematic storytelling, and clever wordplay. Accompanied by diverse, experimental and catchy production, the young artist is blending rap's newest burgeoning melodic sound with message-driven writing. Check out our interview below to learn a little bit more about Nino Bederan and "Sully My Name" Who are some of your musical influences? I love listening to a lot of different genres so my influences come from all across the board. Kendrick Lamar, Billie Eilish, Tyler the Creator, Mick Jenkins, J. Cole heavily inspire my work. On the production end of things, Kenny Beats, Finneas, and Kanye really motivate me to branch out and be more experimental. Their production is refreshing, raw and innovative to me. Your music has a very distinct NYC vibe. Do you feel like the city inspires your sound? The city definitely inspires me. With New York having such diverse sounds and a fresh bubbling drill / melodic sound with its newest hip hop acts, the city has definitely inspired me to play around with melodies and incorporate them in my songs. The city motivates me to try new things while also showing you can really make it if you establish yourself and believe in your craft. You wrote, produced, engineered, mixed, and mastered "Sully My Name" all by yourself. That's incredible! When did you start learning all of these skills? I started learning those skills when I first started making music. 
I wanted to monetize my music early to see what could come from it but I couldn’t afford to buy type beats off YouTube or SoundCloud. So I got to thinking and figured that since I’m already using my DAW to record music, why don’t I take full advantage of its functionalities and make my own beats. Now having creative control over my production is a must for me. I learn new things every day on all these powerful workstations. Tell me about your latest single, "Sully My Name." What inspired the song? This new record touches on how to deal with people who try to throw dirt on your name. The record starts with me acknowledging how people who are blood to me have tried to sully my name. As the verses begin to play, despite people’s attempts to characterize me as a “coward” because of my calm demeanor, everything I’ve done to this day (chase my dreams, decide to pursue music) contradicts those exact claims. This record is the SUBMARINER showing listeners he is nothing like they say and that any “attempt to sully his name” will always fall short. You wrote about others wanting to see you down. What keeps you grounded and focused when writing music so that others' negativity doesn't get to you? Isolation is definitely a must. Over the years, I’ve found peace in my solace, and being alone, since it allows me to take a step back from everything and re-center. The good thing about where I’m at right now is that a substantial amount of people have found beauty in my music and they tell me about how it helps them get through tough times. That kind of love and support from everyone is amazing but I would never want it to get to my head and boost my ego too much. I want to remain humble and aware that there is still a lot I need to work on. Staying alone allows me to take the good and the bad at face value to re-center and focus on myself and what I want to put out to the world, which is, in retrospect, the most important. 
If you had a dream collaboration for an upcoming project, who would you want to work with? It would definitely be Kendrick Lamar. Watching him perform live on stage is an experience in itself. He has incredible stage presence, he’s raw, passionate and real. Working with him would definitely be an incredible opportunity and I definitely believe we would make timeless music together. The way he’s connected with his fan base and art over the years is second to none and I aspire to touch people in that same way. While being honest, empathetic, and myself. What can we expect to hear from you next? Are you working on a full length / EP / mixtape? I have certainly played around with the idea of a project for this year. It is still in the brainstorming phase but I’m currently establishing a clear direction, story and theme. This will be my first true body of work where I’m going to give myself 100% to the people while accepting the risks that come with it. Definitely will be exploring new sounds which will be nothing like my latest releases, but I know a new era will usher in a new and fresh idea which has me very excited. The Submariner will rise soon. Just for fun, what TV shows have you binged lately? WandaVision. I am a huge fan of Marvel comics, as well as DC, and my stage name is also inspired by those same comics. Huge fan of the show and the movies that are going to roll out in the next few years. Rest In Peace to the beautiful Chadwick Boseman. The incredible and once-in-a-lifetime talent. Thank you for everything. Listen to "Sully My Name" here: Follow Nino Bederan: Spotify / SoundCloud / Apple Music / Instagram / Facebook
\appendix \section{Proof of Theorem~\ref{th:pgdmaintheorem}.}\label{sec:proofpgd} \textbf{Preliminaries I:} We first present preliminary results regarding the zeroth-order setting. \begin{lemma}\label{lm:zerogradconc} Let Assumptions~\ref{as:lipgrad} and \ref{as:subgauss} hold for $F$. Then, in the zeroth-order setting, $\norm{\zeta_t}$ is a $2/3$-sub-exponential random variable, i.e., \begin{align*} &\mathbb{P}\left(\norm{\zeta_t}\geq\tau\right)\leq 4d\exp\left(-K_1\min\left[\left(\frac{\sqrt{n_1}\tau'}{\Upsilon_t\sqrt{d}}\right)^2,\left(\frac{n_1\tau'}{\Upsilon_t\sqrt{d}}\right)^{2/3}\right]\right),\numberthis\label{eq:zerogradconc} \end{align*} where $\tau'=\tau-\frac{\nu}{2}L_G(d+3)^\frac{3}{2}$, and $\Upsilon_t=\frac{\nu L_G(d+2)}{2}+C_0\sqrt{(\rho-1)(d+1)}\|\nabla_t\|$. \end{lemma} We will choose $n_1$ such that we have $(n_1\tau'/(\Upsilon_t\sqrt{d}))^{2/3}\leq (\sqrt{n_1}\tau'/(\Upsilon_t\sqrt{d}))^2$. So from now on we will only consider the heavier sub-exponential tail. \begin{lemma}\label{lm:expecexpbound} Let Assumptions~\ref{as:lipgrad} and \ref{as:subgauss} hold for $F$. Then, in the zeroth-order setting \begin{align*} \expec{\exp(s(c_t\|\zeta_t\|)^\frac{1}{3})}\leq 9d\exp(s^2/b_{1,t}), \end{align*} where $s>0$, $b_{1,t}=b_{0,t}/c_t^{2/3}$, and $b_{0,t}=K_1n_1^{2/3}/(\Upsilon_t\sqrt{d})^{2/3}$. \end{lemma} \begin{lemma}\label{lm:innerprodboundzero} Let Assumptions~\ref{as:lipgrad} and \ref{as:subgauss} hold for $F$. Then, in the zeroth-order setting for $l>0$, with probability at least $1-e^{-l}$ we have \begin{align*} \eta\sum_{i=0}^{t-1}\nabla_i^\top\zeta_i\leq \frac{8\eta\sqrt{dt}}{K_1^\frac{3}{2}n_1}(t\log 9d+l)^\frac{3}{2}\sum_{i=0}^{t-1}\left(\frac{\nu L_G(d+2)}{2}\|\nabla_i\|+C_0\sqrt{(\rho-1)(d+1)}\|\nabla_i\|^2\right). \end{align*} \end{lemma} \begin{lemma}\label{lm:normsumboundzero} Let Assumptions~\ref{as:lipgrad} and \ref{as:subgauss} hold for $F$. 
Then, for $l>0$, with probability at least $1-e^{-l}$ we have \begin{align*} \sum_{i=0}^{t-1}\|\zeta_i\|^2\leq \frac{128dt^2(t\log 9d +l)^3}{K_1^3n_1^2}\sum_{i=0}^{t-1}\left(\left(\frac{\nu L_G(d+2)}{2}\right)^2+C_0^2(\rho-1)(d+1)\|\nabla_i\|^2\right). \end{align*} \end{lemma} \textbf{Preliminaries II:} We next present preliminary results regarding the iterates of PSGD. First, we show that the effect of the PSGD updates comprises two parts: the first terms on the RHS of \eqref{eq:descentfirst} and \eqref{eq:descentzero} represent the decrease in the function values, and the remaining terms on the RHS represent the possible increase in function value due to noise in the gradient estimator and the introduced perturbation. \begin{lemma}\label{lm:descent} Under Assumptions~\ref{as:lip}, \ref{as:lipgrad}, \ref{as:liphess}, \ref{as:SGC}, and \ref{as:subgauss}, for any fixed $\cT_0,\cT_1,l>\log 4$, with probability at least $1-4e^{-l}$, for Algorithm~\ref{alg:pgd} we get \begin{enumerate}[label=\alph*)] \item for the first-order setting, choosing \begin{align*} n_1\geq 512lc(\rho-1) \quad \eta\leq \frac{32lc}{3L_G(l+c)}\numberthis\label{eq:descentparchoicefirst} \end{align*} we have \begin{align*} f(x_{\cT_1})-f(x_0)\leq -\frac{\eta}{16}\sum_{i=0}^{\cT_1}\|\nabla_i\|^2+3c\eta^2r^2(\cT_{1}+l)L_G+32cl\eta r^2 \numberthis\label{eq:descentfirst} \end{align*} \item for the zeroth-order case, selecting parameters such that \begin{align*} \frac{384L_GC_0^2d(\rho-1)(d+1){\cT_{0}}^2({\cT_{0}}\log 9d +l)^3}{K_1^3n_1^2}\leq\frac{1}{16}\numberthis\label{eq:parchoosecond1}\\ \frac{8C_0\sqrt{(\rho-1)d(d+1){\cT_{0}}}}{K_1^\frac{3}{2}n_1}({\cT_{0}}\log 9d+l)^\frac{3}{2}\leq \frac{1}{16}\numberthis\label{eq:parchoosecond2} \end{align*} we have \begin{align*} &f(x_{{\cT_{0}}})-f(x_0)\leq -\frac{\eta}{16}\sum_{i=0}^{{{\cT_{0}}}-1}\|\nabla_i\|^2+\wp(r,l,\nu,\eta,d,{\cT_{0}})\numberthis\label{eq:descentzero} \end{align*} where \begin{align*} &\wp(r,l,\nu,\eta,d,{\cT_{0}})=16cl\eta 
r^2+3cL_G\eta^2r^2({\cT_{0}}+l)\\ +&\frac{8\nu \eta LL_G(d+2)\sqrt{d}{\cT_{0}}^\frac{3}{2}}{2K_1^\frac{3}{2}n_1}({\cT_{0}}\log 9d+l)^\frac{3}{2} +\frac{96L_G^3\nu^2\eta^2d(d+2)^2{\cT_{0}}^3({\cT_{0}}\log 9d +l)^3}{K_1^3n_1^2} \end{align*} \end{enumerate} \end{lemma} In the following Lemma we show that when the function descent is small the iterates move only in a small region. \begin{lemma}\label{lm:xtx0distnormbound} Under conditions of Lemma~\ref{lm:descent}, Algorithm~\ref{alg:pgd} satisfies \begin{enumerate}[label=\alph*)] \item for first-order setting, with probability at least $1-8d\cT_1e^{-l}$, for all $\tau\leq {\cT_{1}}$ \begin{align*} \|x_\tau-x_0\|^2\leq 32\eta \left({\cT_{1}}+2cl\frac{\rho-1}{n_1}\right)\left(f(x_0)-f(x_{\cT_{1}})+3c\eta^2r^2({\cT_{1}}+l)L_G+32cl\eta r^2\right) +4cl{\cT_{1}}\eta^2r^2 \numberthis\label{eq:xtx0distnormboundfirst} \end{align*} \item for zeroth-order setting, with probability at least $1-3d\cT_0e^{-l}$, for all $\tau\leq {\cT_{0}}$ \begin{align*} \|x_\tau-x_0\|^2\leq & \eta {\cT_{0}}\left(32+\frac{16}{3L_G}\right)\left(f(x_0)-f(x_{\cT_{0}})+\wp(r,l,\nu,\eta,d,{\cT_{0}})\right)+4cl{\cT_{0}}\eta^2r^2\\ +&\frac{L_G\eta^2{\cT_{0}}^2\nu^2(d+2)^2}{48C_0^2(\rho-1)(d+1)} \numberthis\label{eq:xtx0distnormboundzero} \end{align*} \end{enumerate} \end{lemma} We also require the following definition from~\cite{jin2019nonconvex}, to proceed. \begin{definition}\label{def:coupling}\cite{jin2019nonconvex} Let $e_1$ be the eigen-vector corresponding to the minimum eigen-value of $\cH=\nabla^2 f(x_0)$, and $\gamma\vcentcolon=\lambda_{min}(\nabla^2 f(x_0))$. Also let $\calP_{-1}$ be the projection on to the complement subspace of $e_1$. Consider sequences ${x_t}$, and ${x_t'}$ that are obtained as separate versions of Algorithm~\ref{alg:pgd}, both starting from $x_0$. 
They are coupled in the first-order (zeroth-order) setting if both sequences are generated by the same $\calP_{-1}\theta_\tau$, and $\xi_\tau$ $\left(\{\xi_\tau,\{u_{\tau,i}\}_{i=1}^{n_1}\}\right)$, while in the $e_1$ direction we have $e_1^\top\theta_\tau=-e_1^\top\theta_\tau'$. \end{definition} We next state some intermediate results in Lemma~\ref{lm:xthatsplit}--\ref{lm:qhqsgbound}, to prove in Lemma~\ref{lm:improvorloc} that, starting from a saddle point, PSGD either descends or the iterates stay stuck around the saddle point. Then in Lemma~\ref{lm:escapesaddle} we will show that the stuck region is narrow enough so that the iterates will escape and consequently the function will have sufficient descent. \begin{lemma}\cite{jin2019nonconvex}\label{lm:xthatsplit} Consider the coupling sequences $\{x_\tau\}$ and $\{x_\tau'\}$ as in Definition~\ref{def:coupling} and let $\hx_\tau=x_\tau-x_\tau'$. Then $\hx_t=-q_h(t)-q_{sg}(t)-q_p(t)$, where: \begin{align*} q_h(t)\vcentcolon=\eta\sum_{\tau=0}^{t-1}(I-\eta \cH)^{t-1-\tau}\Delta_\tau\hx_\tau, \quad q_{sg}(t)\vcentcolon=\eta\sum_{\tau=0}^{t-1}(I-\eta \cH)^{t-1-\tau}\hzt_\tau, \quad q_{p}(t)\vcentcolon=\eta\sum_{\tau=0}^{t-1}(I-\eta \cH)^{t-1-\tau}\hat{\theta}_\tau \end{align*} where $\Delta_t\vcentcolon=\int_0^1\nabla^2f(\phi x_t+(1-\phi)x_t')\,d\phi-\cH$, and $\hzt_\tau\vcentcolon=\zeta_\tau-\zeta_\tau'$, $\hat{\theta}_\tau=\theta_\tau-\theta_\tau'$. \end{lemma} \begin{lemma}\cite{jin2019nonconvex}\label{lm:alphabetarelation} Denote $\alpha(t)\vcentcolon=\left[\sum_{\tau=0}^{t-1}(1+\eta\gamma)^{2(t-1-\tau)}\right]^\frac{1}{2}$, and $\beta(t)=(1+\eta\gamma)^t/\sqrt{2\eta\gamma}$. If $\eta\gamma\in[0,1]$, then (1) $\alpha(t)\leq\beta(t)$ for any $t\in\mathbb{N}$; and (2) $\alpha(t)\geq \beta(t)/\sqrt{3}$ for $t\geq \ln(2)/(\eta\gamma)$. 
\end{lemma} \begin{lemma}\cite{jin2019nonconvex}\label{lm:qptaubound} Under the notation of Lemma~\ref{lm:xthatsplit}, and \ref{lm:alphabetarelation}, we have $\forall t>0$: \begin{align*} &\mathbb{P}\left(\|q_p(t)\|\leq \frac{c\beta(t)\eta r}{\sqrt{d}}\sqrt{l}\right)\geq 1-2e^{-l}\\ &\mathbb{P}\left(\|q_p({\cT_{1[0]}})\|\geq \frac{\beta({\cT_{1[0]}})\eta r}{10\sqrt{d}}\right)\geq\frac{2}{3} \end{align*} We use $1[0]$ to denote that the inequality holds for both subscripts $1$ and $0$. \end{lemma} \begin{lemma}\label{lm:qhqsgbound} Under the notation of Lemma~\ref{lm:xthatsplit} and \ref{lm:alphabetarelation}, if \begin{align*} \eta\cS_{1[0]}{\cT_{1[0]}}\max(L_H,L_G)\leq \frac{1}{l} \quad c\leq \sqrt{l}/40 \numberthis\label{eq:parchoosecond4} \end{align*} \begin{enumerate}[label=\alph*)] \item \cite{jin2019nonconvex} then in the first-order case, we have \begin{align*} &~~\mathbb{P}\left(\min\{f(x_{\cT_{1}})-f(x_0),f(x_{\cT_{1}}')-f(x_0)\}\leq -\cF_1, \ \text{or} \ \forall t\leq {\cT_{1}}\vcentcolon \|q_h(t)+q_{sg}(t)\|\leq \frac{\beta(t)\eta r}{20\sqrt{d}}\right)\\ \geq &~~1- 10d{\cT_{1}}^2\log\left(\frac{\cS_1\sqrt{d}}{\eta r}\right)e^{-l} \end{align*} \item in the zeroth-order case, we have \begin{align*} &~~\mathbb{P}\left(\min\{f(x_{\cT_{0}})-f(x_0),f(x_{\cT_{0}}')-f(x_0)\}\leq -\cF_0, \ \text{or} \ \forall t\leq {\cT_{0}}\vcentcolon \|q_h(t)+q_{sg}(t)\|\leq \frac{\beta(t)\eta r}{20\sqrt{d}}\right)\\ \geq &~~1- 3{\cT_{0}}^2e^{-l} \end{align*} \end{enumerate} \end{lemma} \begin{lemma}\label{lm:improvorloc} \begin{enumerate}[label=\alph*)] \item \cite{jin2019nonconvex} Under the setting of Lemma~\ref{lm:descent}, for the first-order setting, we have \begin{align*} &~~\mathbb{P}\left(\min\{f(x_{\cT_{1}})-f(x_0),f(x_{\cT_{1}}')-f(x_0)\}\leq -\cF_{1}, \ \text{or} \ \forall t\leq {\cT_{1}}: \max\{\|x_t-x_0\|^2,\|x_t'-x_0\|^2\}\leq \cS_{1}^2\right)\\ \geq&~~1-16d{\cT_{1}} e^{-l} \end{align*} \item for the zeroth-order setting, we have \begin{align*} 
&~~\mathbb{P}\left(\min\{f(x_{\cT_{0}})-f(x_0),f(x_{\cT_{0}}')-f(x_0)\}\leq -\cF_{0}, \ \text{or} \ \forall t\leq {\cT_{0}}: \max\{\|x_t-x_0\|^2,\|x_t'-x_0\|^2\}\leq \cS_{0}^2\right)\\ \geq&~~1-4d{\cT_{0}} e^{-l} \end{align*} \end{enumerate} \end{lemma} In the following lemma we show that, while escaping from a saddle point, PSGD descends more than it ascends with high probability. \begin{lemma} \label{lm:escapesaddle} Suppose Assumptions~\ref{as:lip}, \ref{as:lipgrad}, \ref{as:liphess}, \ref{as:SGC}, and \ref{as:subgauss} hold. Under condition \eqref{eq:parchoosecond4}, for any fixed $t_0>0$, let $x_0$ satisfy $$\|\nabla_0\|\leq \ep \quad \lambda_{min}(\nabla^2 f(x_0))\leq -\sqrt{L_H\ep}.$$ Then \begin{enumerate}[label=\alph*)] \item if $\eta, r, n_1$ are chosen as in \eqref{eq:allparameterchoicefirst}, ${\cT_{1}}={0.5\lge^3}/{\sqrt{\ep}}$, $\cF_1={\ep^{1.5}}/{\lge^7}$, $\cS_1=\frac{\sqrt{\ep}}{\lge^2}$, $l=a_0\log\left(\frac{f(x_0)-f^*}{\delta\ep}\right)$, then the sequence generated by Algorithm~\ref{alg:pgd} in the first-order case satisfies \begin{align*} &\mathbb{P}\left(f(x_{t_0+{\cT_{1}}})-f(x_{t_0})\leq 0.1\cF_1\right)\geq 1-4e^{-l} \numberthis\label{eq:smallascentfirst}\qquad \text{and}\\ &\mathbb{P}\left(f(x_{t_0+{\cT_{1}}})-f(x_{t_0})\leq -\cF_1\right)\geq \frac{1}{3}-9d{\cT_{1}}^2\log\left(\frac{\cS_1\sqrt{d}}{\eta r}\right)e^{-l}\numberthis\label{eq:largedescentfirst} \end{align*} \item if $\eta, r, n_1$ are chosen as in \eqref{eq:allparameterchoicezero}, ${\cT_{0}}=\kappa_3\frac{\lge^2\log\left(d\right)^2}{\sqrt{\ep}}$, $\cF_0=\kappa_8\ep^{1.5}$, $\cS_0=\frac{\kappa_7\sqrt{\ep}}{\lge^2}$ and $l=\kappa_6\log\left(\frac{d(f(x_0)-f^*)}{\delta\ep}\right)$, then the sequence generated by Algorithm~\ref{alg:pgd} in the zeroth-order case satisfies \begin{align*} &\mathbb{P}\left(f(x_{t_0+{\cT_{0}}})-f(x_{t_0})\leq 0.1\cF_0\right)\geq 1-4e^{-l}\numberthis\label{eq:smallascentzero} \qquad \text{and}\\ 
&\mathbb{P}\left(f(x_{t_0+{\cT_{0}}})-f(x_{t_0})\leq -\cF_0\right)\geq \frac{1}{3}-\frac{3}{2}{\cT_{0}}^2e^{-l}\numberthis\label{eq:largedescentzero} \end{align*} \end{enumerate} \end{lemma} \textbf{Finishing the proof:} By combining the above results, we prove Theorem~\ref{th:pgdmaintheorem}. The proof is divided into two parts: in the first part we show that the function decreases sufficiently when the gradient is large, and in the second part we show that the iterates escape from saddle points, after which the function again has sufficient descent. \textbf{Choice of parameters for Zeroth-order case.} As the expressions involved in the analysis of the zeroth-order case are somewhat complicated, we show explicitly how to choose the parameters. First, define \begin{align}\label{textxidef} \textXi:=\frac{32\sqrt{d({\cT_{0}}+1)}\eta \beta({\cT_{0}}+1)(({\cT_{0}}+1)\log 9d+\log 2+l)^\frac{3}{2}}{K_1^{3/2}n_1} \end{align} The parameters should be chosen such that the following conditions are satisfied: \begin{align*} &\frac{384L_GC_0^2d(\rho-1)(d+1){\cT_{0}}^2({\cT_{0}}\log 9d +l)^3}{K_1^3n_1^2}\leq\frac{1}{16},\\ &\frac{8C_0\sqrt{(\rho-1)d(d+1){\cT_{0}}}}{K_1^\frac{3}{2}n_1}({\cT_{0}}\log 9d+l)^\frac{3}{2}\leq \frac{1}{16},\\ &\eta\cS_0{\cT_{0}}\max(L_H,L_G)\leq \frac{1}{l}, \quad c\leq \sqrt{l}/40,\\ &\textXi \cdot \sum_{i=0}^{{\cT_{0}}}\left(\frac{\nu L_G(d+2)}{2}+C_0\sqrt{(\rho-1)(d+1)}L\right)\leq \frac{\beta({\cT_{0}})r}{40\sqrt{d}},\\ &\frac{(1+\eta \gamma)^{\cT_{0}}\sqrt{\eta} r}{40\sqrt{2\gamma d}}>\cS_0 ,\quad \wp(r,l,\nu,\eta,d,\cT_0)\leq 0.1\cF_0. \end{align*} Furthermore, we need to ensure that the RHS of \eqref{eq:xtx0distnormboundzero} is of the same order as $\cS_0^2$. \begin{proof}[Proof of Theorem~\ref{th:pgdmaintheorem}] \begin{enumerate}[label=\alph*)] \item \begin{enumerate}[label=\arabic*.] \item First we look at the time instants where $\|\nabla_t\|\geq \ep$. 
If there are more than $\frac{T}{4}$ such time steps, then using Lemma~\ref{lm:descent} we have, with probability at least $1-4e^{-l}$, \begin{align*} f(x_T)-f(x_0)&\leq -\frac{T\ep^2}{64\lge^2}+ 3cL_G\frac{\ep^3}{\lge^{10}}\left(\frac{0.5\lge^3}{\sqrt{\ep}}+\lge\right)+32c\frac{\ep^3}{\lge^7}\\ &\leq -\frac{T\ep^2}{128\lge^2} \end{align*} Choosing $T$ as in \eqref{eq:Tchoicefirst}, we get $f(x_T)\leq f(x_0)-T\ep^2/(128\lge^2)<f^*$, which is impossible. \item As follows from Claim 2 in the proof of Theorem 16 of \cite{jin2019nonconvex}, we have, with probability at least $1-10d{\cT_{1}}^2T^2\log(\cS_1\sqrt{d}/(\eta r))e^{-l}$, \begin{align*} f(x_T)-f(x_0)\leq -0.1\frac{T\cF_1}{{\cT_{1}}} \end{align*} which implies $f(x_T)\leq f(x_0)-0.1T\cF_1/{\cT_{1}}<f^*$, which is impossible. \end{enumerate} \item \begin{enumerate}[label=\arabic*.] \item First we look at the time instants where $\|\nabla_t\|\geq \ep$. If the parameters are chosen as in \eqref{eq:allparameterchoicezero}, ${\cT_{0}}=\kappa_3\frac{\lge^2\log\left(d\right)^2}{\sqrt{\ep}}$, and $l=\kappa_6\log\left(\frac{d(f(x_0)-f^*)}{\delta\ep}\right)$, then we have \begin{align*} \wp(r,l,\nu,\eta,d,{\cT_{0}})=\order\left(\ep^{1.5}\right) \end{align*} If there are more than $\frac{T}{4}$ such time steps, then using Lemma~\ref{lm:descent} we have, with probability at least $1-4e^{-l}$, \begin{align*} f(x_T)-f(x_0)\leq -\frac{\kappa_0T\ep^{2.5}}{64\lge}+ \order\left(\ep^{1.5}\right) \leq -\frac{\kappa_0T\ep^{2.5}}{128\lge} \end{align*} Choosing $T$ as in \eqref{eq:Tchoicezero}, $\kappa_9\geq 128$, and $\kappa_0\kappa_3/\kappa_8\geq 128$, we get $f(x_T)\leq f(x_0)-\frac{\kappa_0T\ep^{2.5}}{128\lge}< f^*$, which is impossible. 
\item As follows from Claim 2 in the proof of Theorem 16 of \cite{jin2019nonconvex}, we have, with probability at least $1-3{\cT_{0}}^2T^2e^{-l}$, \begin{align*} f(x_T)-f(x_0)\leq -0.1\frac{T\cF_0}{{\cT_{0}}} \end{align*} which implies $f(x_T)\leq f(x_0)-0.1T\cF_0/{\cT_{0}}<f^*$ when $\kappa_9\geq 128$ and $T$ is as in \eqref{eq:allparameterchoicezero}, which is impossible. \end{enumerate} \end{enumerate} \end{proof} \begin{proof}[Proof of Theorem~\ref{th:pgdmaintheoremwosgc}] The proof of Theorem~\ref{th:pgdmaintheoremwosgc} is the same as that of Theorem~\ref{th:pgdmaintheorem} except for the concentration properties of $\|\zeta_t\|$. In this case, $\|\zeta_t\|$ is $\alpha$-sub-exponential with coefficient $(\Upsilon_t\sqrt{d}/n_1)^{2/3}$, where $$\Upsilon_t=\frac{\nu L_G(d+2)}{2}+C_0(\sigma+\|\nabla f(x_t)\|)\sqrt{d+1}.$$ So there is an extra term $C_0\sigma\sqrt{d+1}$, which can neither be made smaller using $\nu$, nor is it of the same order as $\nabla f(x_t)$ so that it could be subsumed in the other terms involving $\nabla f(x_t)$. Hence, the only way to make the coefficient smaller, which is essential in the proof, is to increase $n_1$. This is the main reason why the rate deteriorates in the absence of SGC. For the sake of completeness, we provide below the set of conditions that need to be satisfied to pick the parameters in this setting. 
\textbf{Choice of parameters for Zeroth-order case when SGC does not hold.} When SGC does not hold in the zeroth-order setting, the conditions to be satisfied are: \begin{align*} &\frac{384L_GC_0^2d(\rho-1)(d+1){\cT_{0}}^2({\cT_{0}}\log 9d +l)^3}{K_1^3n_1^2}\leq\frac{\ep^2}{16},\\ &\frac{8C_0\sqrt{(\rho-1)d(d+1){\cT_{0}}}}{K_1^\frac{3}{2}n_1}({\cT_{0}}\log 9d+l)^\frac{3}{2}\leq \frac{\ep}{16},\\ &\eta\cS_0{\cT_{0}}\max(L_H,L_G)\leq \frac{1}{l}, \quad c\leq \sqrt{l}/40,\\ &\textXi \cdot \sum_{i=0}^{{\cT_{0}}}\left(\frac{\nu L_G(d+2)}{2}+C_0\sqrt{(\rho-1)(d+1)}L\right)\leq \frac{\beta({\cT_{0}})r}{40\sqrt{d}},\\ &\frac{(1+\eta \gamma)^{\cT_{0}}\sqrt{\eta} r}{40\sqrt{2\gamma d}}>\cS_0 ,\quad \wp(r,l,\nu,\eta,d,\cT_0)\leq 0.1\cF_0. \end{align*} Furthermore, we need to ensure that the RHS of \eqref{eq:xtx0distnormboundzero} is of the same order as $\cS_0^2$. \end{proof} \subsection{Proofs of Lemmas related to Perturbed Stochastic Gradient Descent} \begin{assumption}\cite{jin2019nonconvex}\label{as:subgauss1} Consider random vectors $X_1,X_2,\cdots,X_n \in \mathbb{R}^d$, and the corresponding filtrations $\calF_i=\sigma(X_1,X_2,\cdots,X_i)$ for $i=1,2,\cdots,n$, such that $X_i|\calF_{i-1}$ is zero-mean nSG$(\sigma_i)$ with $\sigma_i\in \calF_{i-1}$. That is, \begin{align*} \expec{X_i|\calF_{i-1}}=0, \quad \mathbb{P}(\|X_i\|\geq t|\calF_{i-1})\leq e^{-{\frac{t^2}{2\sigma_i^2}}}, \quad \forall t\in \mathbb{R},\forall i=1,2,\cdots,n. \end{align*} \end{assumption} \begin{lemma}\cite{jin2019nonconvex} \label{lm:innerprodsumbound} Let $X_1,X_2,\cdots,X_n \in \mathbb{R}^d$ satisfy Assumption~\ref{as:subgauss1}, and let $u_i\in \calF_{i-1}$ be a random vector for $i=1,2,\cdots,n$. 
Then for any $l>0,\ \lambda>0$, there exists an absolute constant $c$ such that, with probability at least $1-e^{-l}$: \begin{align*} \sum_i u_i^\top X_i\leq c\lambda\sum_i \|u_i\|^2\sigma_i^2+\frac{l}{\lambda} \end{align*} \end{lemma} \begin{lemma}\cite{jin2019nonconvex} \label{lm:normsumbound} Let $X_1,X_2,\cdots,X_n \in \mathbb{R}^d$ satisfy Assumption~\ref{as:subgauss1} with $\sigma_1=\sigma_2=\cdots=\sigma_n=\sigma$. Then for any $l>0$, there exists an absolute constant $c$ such that, with probability at least $1-e^{-l}$: \begin{align*} \sum_i \| X_i\|^2\leq c\sigma^2(n+l) \end{align*} \end{lemma} \begin{lemma}\cite{jin2019nonconvex} \label{lm:sumnormbound} Let $X_1,X_2,\cdots,X_n \in \mathbb{R}^d$ satisfy Assumption~\ref{as:subgauss1} with fixed $\{\sigma_i\}$. Then for any $l>0$, there exists an absolute constant $c$ such that, with probability at least $1-2de^{-l}$: \begin{align*} \left\|\sum_{i=1}^nX_i\right\|\leq c \sqrt{l\sum_{i=1}^n\sigma_i^2} \end{align*} \end{lemma} Let $F_{\nu}(x,\xi)=\mathbf{E}_u\left[F(x+\nu u,\xi)\right]$, and let $g_{t,i}^j$ and $\nabla F_\nu(x_t,\xi_i)^j$ denote the $j$-th coordinates of the vectors $g_{t,i}=\frac{F(x_t+\nu u_i,\xi_i)-F(x_t,\xi_i)}{\nu}u_i$ and $\nabla F_\nu(x_t,\xi_i)$, respectively. 
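To make the estimator concrete, the following is a minimal NumPy sketch (not from the paper) of the batch estimate $g_t=\frac{1}{n_1}\sum_{i=1}^{n_1} g_{t,i}$ with Gaussian directions $u_i\sim N(0,I_d)$; the toy objective \texttt{F}, the dimension, and all numeric values are illustrative assumptions, and the noise variable $\xi$ is suppressed.

```python
import numpy as np

def F(x):
    # Toy deterministic stand-in for F(x, xi); its gradient is x.
    return 0.5 * float(np.dot(x, x))

def zo_gradient(F, x, nu=1e-4, n1=2000, rng=None):
    # g_{t,i} = (F(x + nu*u_i) - F(x)) / nu * u_i with u_i ~ N(0, I_d);
    # the batch estimate averages over the n1 sampled directions.
    rng = np.random.default_rng(0) if rng is None else rng
    d = x.shape[0]
    U = rng.standard_normal((n1, d))
    diffs = np.array([(F(x + nu * u) - F(x)) / nu for u in U])
    return (diffs[:, None] * U).mean(axis=0)

x = np.array([1.0, -2.0, 0.5])
g = zo_gradient(F, x)
print(np.linalg.norm(g - x))  # error of the estimate vs. the true gradient x
```

For small $\nu$ the bias of this estimator is $O(\nu)$ (cf.\ Lemma~\ref{lm:gradnutostochgraderror}), while its fluctuation decays with the batch size $n_1$, mirroring the roles of $\nu$ and $n_1$ in the concentration bounds below.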
\iffalse \begin{lemma} \label{lm:concentrationzero}\cite{shen2019nonasymptotic} \begin{align*} \mathbb{P}\left(\abs{g_{t,i}^j-\nabla F_\nu(x_t,\xi_i)^j}\geq \tau\right)\leq 4\exp\left(-K_0\left(\frac{\tau}{2C\sqrt{d}L}\right)^\frac{2}{3}\right) \end{align*} \end{lemma} \begin{lemma}\label{lm:gradesttoexpectgradnuerror} \begin{align*} \mathbb{P}\left(\frac{1}{n_1}\sum_{i=1}^{n_1}\norm{g_{t,i}-\nabla F_\nu(x_t,\xi_i)}\geq \tau\right)\leq 4dn_1\exp\left(-K_0\left(\frac{\tau}{2LCd}\right)^\frac{2}{3}\right) \end{align*} \end{lemma} \begin{proof} Using Lemma~\ref{lm:concentrationzero} we have, \begin{align*} \mathbb{P}\left(\abs{g_{t,i}^j-\nabla F_\nu(x_t,\xi_i)^j}\geq\tau\right)\leq 4\exp\left(-K_0\left(\frac{\tau}{2LC\sqrt{d}}\right)^\frac{2}{3}\right) \end{align*} Using union bound, \begin{align*} &\mathbb{P}\left(\norm{g_{t,i}-\nabla F_\nu(x_t,\xi_i)}\geq\tau\right)\leq \mathbb{P}\left(\exists~j\in\{1,2,\cdots,d\} s.t. \ \abs{g_{t,i}^j-\nabla F_\nu(x_t,\xi_i)^j}\geq\tau/\sqrt{d}\right)\\ &\leq \sum_{j=1}^d \mathbb{P}\left(\abs{g_{t,i}^j-\nabla F_\nu(x_t,\xi_i)^j}\geq\tau/\sqrt{d}\right)\leq 4d\exp\left(-K_0\left(\frac{\tau}{2LCd}\right)^\frac{2}{3}\right) \end{align*} \begin{align*} &\mathbb{P}\left(\sum_{i=1}^{n_1}\norm{g_{t,i}-\nabla F_\nu(x_t,\xi_i)}\geq n_1\tau\right)\leq \sum_{i=1}^{n_1}\mathbb{P}\left(\norm{g_{t,i}-\nabla F_\nu(x_t,\xi_i)}\geq\tau\right)\\ &\leq 4dn_1\exp\left(-K_0\left(\frac{\tau}{2LCd}\right)^\frac{2}{3}\right) \end{align*} \end{proof} \fi \begin{lemma}\cite{nesterov2017random}\label{lm:gradnutostochgraderror} Let Assumption~\ref{as:lipgrad} be true for $F$. 
Then \begin{align*} \|\nabla F_\nu(x,\xi)-\nabla F(x,\xi)\|\leq \frac{\nu}{2}L_G(d+3)^\frac{3}{2} \end{align*} \end{lemma} \iffalse \begin{lemma}\label{lm:subgausszero} \begin{align*} &\mathbb{P}(\|g_t-\nabla_t\|\geq\tau)\leq 4dn_1\exp\left(-K_0\left(\frac{\tau}{6CdL}\right)^\frac{2}{3}\right)+ 2\exp\left(-\frac{n_1\tau^2}{18(\rho-1)\|\nabla_t\|^2}\right)\numberthis\label{eq:subgausszero} \end{align*} \end{lemma} \begin{proof} \begin{align*} &\mathbb{P}(\|g_t-\nabla_t\|\geq\tau) =\mathbb{P}\left(\|\frac{1}{n_1}\sum_{i=1}^{n_1}(g_{t,i}-\nabla F_\nu(x_t,\xi_i))+\frac{1}{n_1}\sum_{i=1}^{n_1}(\nabla F_\nu(x_t,\xi_i)-\nabla F(x_t,\xi_i))\right.\\ &\left.+\frac{1}{n_1}\sum_{i=1}^{n_1}\nabla F(x_t,\xi_i)-\nabla_t\|\geq\tau\right)\\ &\leq\mathbb{P}\left(\frac{1}{n_1}\sum_{i=1}^{n_1}\|g_{t,i}-\nabla F_\nu(x_t,\xi_i)\|+\frac{1}{n_1}\sum_{i=1}^{n_1}\|\nabla F_\nu(x_t,\xi_i)-\nabla F(x_t,\xi_i)\|\right.\\ &\left.+\|\frac{1}{n_1}\sum_{i=1}^{n_1}\nabla F(x_t,\xi_i)-\nabla_t\|\geq\tau\right)\\ &\leq \mathbb{P}\left(\frac{1}{n_1}\sum_{i=1}^{n_1}\|g_{t,i}-\nabla F_\nu(x_t,\xi_i)\|\geq \frac{\tau}{3}\right)+\mathbb{P}\left(\frac{1}{n_1}\sum_{i=1}^{n_1}\|\nabla F_\nu(x_t,\xi_i)-\nabla F(x_t,\xi_i)\|\geq \frac{\tau}{3}\right)\\ &+\mathbb{P}\left(\|\frac{1}{n_1}\sum_{i=1}^{n_1}\nabla F(x_t,\xi_i)-\nabla_t\|\geq\frac{\tau}{3}\right) \end{align*} Now we will bound each probability term in the above expression. By Lemma~\ref{lm:gradesttoexpectgradnuerror} we can bound the first term as \begin{align*} \mathbb{P}\left(\frac{1}{n_1}\sum_{i=1}^{n_1}\|g_{t,i}-\nabla F_\nu(x_t,\xi_i)\|\geq \frac{\tau}{3}\right)\leq 4dn_1\exp\left(-K_0\left(\frac{\tau}{6LCd}\right)^\frac{2}{3}\right) \end{align*} If we choose $\tau$ such that $$\tau>\frac{3\nu}{2}\nu L_G(d+3)^{1.5},$$ then the second probability is $0$ by Lemma~\ref{lm:gradnutostochgraderror}. 
For the third term, we use Assumption~\ref{as:subgauss} to get the bound, \begin{align*} \mathbb{P}\left(\|\frac{1}{n_1}\sum_{i=1}^{n_1}\nabla F(x_t,\xi_i)-\nabla_t\|\geq \frac{\tau}{3}\right)\leq 2\exp\left(-\frac{n_1\tau^2}{18(\rho-1)\|\nabla_t\|^2}\right) \end{align*} Combining the above three bounds we have \eqref{eq:subgausszero}. \end{proof} \fi \begin{lemma}\label{lm:gaussnormbound}\cite{nesterov2017random} For a Gaussian random vector $u\sim N(0,I_d)$, we have \begin{align*} \expec{\|u\|^k}\leq (d+k)^\frac{k}{2} \end{align*} \end{lemma} \begin{lemma}\cite{shen2019nonasymptotic}\label{lm:prodconcentration} Let $(X_i,Y_i)$, $i = 1,2,\cdots,n$ be $n$ independent copies of random variables $X$ and $Y$. Let $X$ be a sub-Gaussian random variable with sub-gaussian norm $\|X\|_{\psi_2} \leq \Upsilon_1$, and $Y$ be a sub-exponential random variable with sub-exponential norm $\|Y\|_{\psi_1} \leq \Upsilon_2$ for some constants $\Upsilon_1$ and $\Upsilon_2$. Then for any $t\geq K\max\left(\Upsilon_1,\Upsilon_1^3\right)\Upsilon_2$ we have \begin{align*} \mathbb{P}\left(\abs{\sum_{i=1}^{n}{X_iY_i}-\expec{XY}}\geq t\right)\leq 4\exp\left(-K_1\min\left[\left(\frac{t}{\sqrt{n}\Upsilon_1\Upsilon_2}\right)^2,\left(\frac{t}{\Upsilon_1\Upsilon_2}\right)^{2/3}\right]\right) \end{align*} where $K$ and $K_1$ are absolute constants. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lm:zerogradconc}] Let us write $g_{t,i}=\phi(\nu,u_i,\xi_i)u_i$ where $\phi(\nu,u_i,\xi_i)=\frac{F(x_t+\nu u_i,\xi_i)-F(x_t,\xi_i)}{\nu}$. We will show that $\phi(\nu,u_i,\xi_i)$ is a sub-exponential random variable by showing that its sub-exponential norm or $\psi_1$-norm, defined as $\|.\|_{\psi_1}=\sup_{p\geq 1}p^{-1}\expec{\abs{.}^p}^{p^{-1}}$, is finite. 
\begin{align*} \|\phi(\nu,u_i,\xi_i)\|_{\psi_1}=\sup_{p\geq 1}\frac{1}{p}\expec{\abs{\phi(\nu,u_i,\xi_i)}^p}^\frac{1}{p} =\sup_{p\geq 1}\frac{1}{p}\mathbf{E}_{\xi_i}\left[\mathbf{E}_{u_i}\left[\abs{\phi(\nu,u_i,\xi_i)}^p\right]\right]^\frac{1}{p} \numberthis\label{eq:iteratedexpecsubexp} \end{align*} We first concentrate on the term $\mathbf{E}_{u_i}\left[\abs{\phi(\nu,u_i,\xi_i)}^p\right]$. \begin{align*} \mathbf{E}_{u_i}\left[\abs{\phi(\nu,u_i,\xi_i)}^p\right]=\mathbf{E}_{u_i}\left[\left\lvert\frac{F(x_t+\nu u_i,\xi_i)-F(x_t,\xi_i)-\nu\nabla F(x_t,\xi_i)^\top u_i}{\nu}+ \nabla F(x_t,\xi_i)^\top u_i\right\rvert^p\right] \end{align*} By Minkowski's inequality, \begin{align*} &\mathbf{E}_{u_i}\left[\abs{\phi(\nu,u_i,\xi_i)}^p\right]\\ \leq& \left[\mathbf{E}_{u_i}\left[\left\lvert\frac{F(x_t+\nu u_i,\xi_i)-F(x_t,\xi_i)-\nu\nabla F(x_t,\xi_i)^\top u_i}{\nu}\right\rvert^p\right]^\frac{1}{p}+\mathbf{E}_{u_i}\left[\left\lvert \nabla F(x_t,\xi_i)^\top u_i\right\rvert^p\right]^\frac{1}{p}\right]^p\\ \leq &\left[\frac{\nu L_G}{2}\mathbf{E}_{u_i}\left[\|u_i\|^{2p}\right]^\frac{1}{p}+\|\nabla F(x_t,\xi_i)\|\mathbf{E}_{u_i}\left[\| u_i\|^p\right]^\frac{1}{p}\right]^p \end{align*} Using Lemma~\ref{lm:gaussnormbound}, \begin{align*} \mathbf{E}_{u_i}\left[\abs{\phi(\nu,u_i,\xi_i)}^p\right]\leq \left[\frac{\nu L_G(d+2p)}{2}+\sqrt{d+p}\|\nabla F(x_t,\xi_i)\|\right]^p \end{align*} Now from \eqref{eq:iteratedexpecsubexp}, using Minkowski's inequality, we get \begin{align*} &\|\phi(\nu,u_i,\xi_i)\|_{\psi_1}\leq \sup_{p\geq 1}\frac{1}{p}\mathbf{E}_{\xi_i}\left[\left(\frac{\nu L_G(d+2p)}{2}\right)^p\right]^\frac{1}{p}+\sup_{p\geq 1}\frac{1}{p}\mathbf{E}_{\xi_i}\left[\left(\sqrt{d+p}\|\nabla F(x_t,\xi_i)\|\right)^p\right]^\frac{1}{p}\\ \leq& \frac{\nu L_G(d+2)}{2}+\sup_{p\geq 1}\sqrt{\frac{d+p}{p}}\frac{1}{\sqrt{p}}\mathbf{E}_{\xi_i}\left[\| \nabla F(x_t,\xi_i)\|^p\right]^\frac{1}{p}\\ \leq & \frac{\nu L_G(d+2)}{2}+\sup_{p\geq 1}\left(\sqrt{\frac{d+p}{p}}\sup_{p\geq 
1}\frac{1}{\sqrt{p}}\mathbf{E}_{\xi_i}\left[\| \nabla F(x_t,\xi_i)\|^p\right]^\frac{1}{p}\right) \end{align*} Now, \begin{align*} &\mathbf{E}_{\xi_i}\left[\|\nabla F(x_t,\xi_i)\|^p\right]^{p^{-1}}\\ \leq &\mathbf{E}_{\xi_i}\left[\left(\|\nabla F(x_t,\xi_i)-\nabla f(x_t)+\nabla f(x_t)\|\right)^p\right]^{p^{-1}}\\ \leq & \mathbf{E}_{\xi_i}\left[2^{p-1}\|\nabla F(x_t,\xi_i)-\nabla f(x_t)\|^p+2^{p-1}\|\nabla f(x_t)\|^p\right]^{p^{-1}}\\ \leq & 2\mathbf{E}_{\xi_i}\left[\|\nabla F(x_t,\xi_i)-\nabla f(x_t)\|^p\right]^{p^{-1}}+2\|\nabla f(x_t)\| \end{align*} From \eqref{eq:sgsgc} we have $\sup_{p\geq 1}p^{-1/2}\mathbf{E}_{\xi_i}\left[\left(\|\nabla F(x_t,\xi_i)-\nabla f(x_t)\|\right)^p\right]^{p^{-1}}\leq c_0'\sqrt{\rho-1}\|\nabla_t\|$, where $c_0'$ is an absolute constant. Then, \begin{align*} &\|\phi(\nu,u_i,\xi_i)\|_{\psi_1}\leq \frac{\nu L_G(d+2)}{2}+(2+c_0'\sqrt{(\rho-1)})\sqrt{d+1}\|\nabla_t\| \end{align*} We also have $\|u_i^j\|_{\psi_2}\leq 1$ and $\expec{g_{t,i}}=\nabla f_\nu(x_t)$. Then using Lemma~\ref{lm:prodconcentration}, we have $\forall~j=1,2,\cdots,d$ \begin{align*} \mathbb{P}\left(\frac{1}{n_1}\left\lvert\sum_{i=1}^{n_1}\left(g_{t,i}^j-\nabla f_\nu(x_t)^j\right)\right\rvert\geq \tau\right)\leq 4\exp\left(-K_1\min\left[\left(\frac{\sqrt{n_1}\tau}{\Upsilon_t}\right)^2,\left(\frac{n_1\tau}{\Upsilon_t}\right)^{2/3}\right]\right) \end{align*} where $\Upsilon_t=\frac{\nu L_G(d+2)}{2}+c_0\sqrt{(\rho-1)(d+1)}\|\nabla_t\|$ and $c_0$ is an absolute constant subsuming $2+c_0'$. Using a union bound, \begin{align*} &\mathbb{P}\left(\norm{\frac{1}{n_1}\sum_{i=1}^{n_1}g_{t,i}-\nabla f_\nu(x_t)}\geq\tau\right)\leq \mathbb{P}\left(\exists~j\in\{1,2,\cdots,d\} s.t. 
\ \left\lvert\frac{1}{n_1}\sum_{i=1}^{n_1}g_{t,i}^j-\nabla f_\nu(x_t)^j\right\rvert\geq\tau/\sqrt{d}\right)\\ &\leq \sum_{j=1}^d \mathbb{P}\left(\left\lvert\frac{1}{n_1}\sum_{i=1}^{n_1} g_{t,i}^j-\nabla f_\nu(x_t)^j\right\rvert\geq\tau/\sqrt{d}\right)\leq 4d\exp\left(-K_1\min\left[\left(\frac{\sqrt{n_1}\tau}{\Upsilon_t\sqrt{d}}\right)^2,\left(\frac{n_1\tau}{\Upsilon_t\sqrt{d}}\right)^{2/3}\right]\right) \end{align*} Using Lemma~\ref{lm:gradnutostochgraderror} we have \begin{align*} &\mathbb{P}\left(\norm{\frac{1}{n_1}\sum_{i=1}^{n_1}g_{t,i}-\nabla f(x_t)}\geq\tau\right)\leq \mathbb{P}\left(\norm{\frac{1}{n_1}\sum_{i=1}^{n_1}g_{t,i}-\nabla_\nu f(x_t)}\geq\tau-\frac{\nu L_G(d+3)^\frac{3}{2}}{2}\right) \end{align*} \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:expecexpbound}] \begin{align*} &\expec{(c_t\|\zeta_t\|)^\frac{k}{3}}=\int_{0}^{\infty}\mathbb{P}\left((c_t\|\zeta_t\|)^\frac{k}{3}>\tau\right)d\tau=\int_{0}^{\infty}\mathbb{P}\left(\|\zeta_t\|>\tau^\frac{3}{k}/c_t\right)d\tau\\ \leq& \int_{0}^{\infty}4d\exp(-b_{1,t}\tau^{\prime 2/k})d\tau\leq \int_{-\frac{\nu L_G(d+3)^\frac{3}{2}}{2}}^{\infty}4d\exp(-b_{1,t}\tau^{2/k})d\tau\leq \int_{0}^{\infty}8d\exp(-b_{1,t}\tau^{2/k})d\tau \end{align*} Substituting, $u=b_{1,t}\tau^{2/k}$ we have, \begin{align*} \expec{(c_t\|\zeta_t\|)^\frac{k}{3}}\leq \int_{0}^{\infty}4dk{b}_{1,t}^{-k/2}e^{-u}u^{k/2-1}du=4dkb_{1,t}^{-k/2}\Gamma \left( k/2 \right) \end{align*} Using $2(k!)^2\leq (2k)!$, and $\Gamma(k+1/2)=(2k)!\sqrt{\pi}/(4^kk!)$, we have \begin{align*} &\expec{e^{s(c_t\|\zeta_t\|)^\frac{1}{3}}}=1+\sum_{k=1}^\infty\expec{\frac{s^k(c_t\|\zeta_t\|)^\frac{k}{3}}{k!}}\leq 1+\sum_{k=1}^\infty\frac{s^k}{k!}4dkb_{1,t}^{-k/2}\Gamma\left( k/2 \right)\\ &\leq 1+4d\left[\sum_{k=1}^\infty\frac{2ks^{2k}b_{1,t}^{-k}}{(2k)!}\Gamma(k)+\sum_{k=0}^\infty\frac{(2k+1)s^{2k+1}b_{1,t}^{-k-1/2}}{(2k+1)!}\Gamma(k+1/2)\right] \\ &\leq 1+4d\left[\sum_{k=1}^\infty\frac{s^{2k}b_{1,t}^{-k}}{k!}+\sqrt{\frac{\pi 
s^2}{b_{1,t}}}\sum_{k=0}^\infty\frac{s^{2k}b_{1,t}^{-k}}{4^kk!}\right]\\ &\leq 1+4d\left[e^\frac{s^2}{b_{1,t}}+\sqrt{\frac{\pi s^2}{b_{1,t}}}e^{\frac{s^2}{4b_{1,t}}}\right]\leq 1+8de^{\frac{s^2}{b_{1,t}}}\leq 9de^\frac{s^2}{b_{1,t}} \end{align*} \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:innerprodboundzero}] Setting $c=\eta\|\nabla_i\|$, using Lemma~\ref{lm:expecexpbound} we have \begin{align*} \expec{e^{s(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}}}\leq 9de^\frac{s^2}{b_{1,i}}. \end{align*} Hence, we have the following: \begin{align*} &\expec{\exp\left(s\sum_{i=0}^{t-1}(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)}\\ =&\expec{\exp\left(s\sum_{i=0}^{t-2}(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)\expec{\exp\left(s(\eta\|\nabla_{t-1}\|\|\zeta_{t-1}\|)^\frac{1}{3}\right)|\calF_{t-2}}}\\ \leq&9d\expec{\exp\left(s\sum_{i=0}^{t-2}(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)e^\frac{s^2}{b_{1,t-1}}}\\ =&9d\expec{\exp\left(s\sum_{i=0}^{t-2}(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-2}\frac{s^2}{b_{1,i}}\right)}. \end{align*} Continuing in the same manner we get \begin{align*} \expec{\exp\left(s\sum_{i=0}^{t-1}(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)} \leq (9d)^t. \numberthis\label{eq:martingalediffexpec} \end{align*} We now prove the main claim. 
Note that, we have \begin{align*} \mathbb{P}\left(\eta\sum_{i=0}^t\nabla_i^\top\zeta_i\geq \tau\right)&\leq \mathbb{P}\left(\eta\sum_{i=0}^t\|\nabla_i\|\|\zeta_i\|\geq \tau\right)\\ &\leq \mathbb{P}\left(\sum_{i=0}^t(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}\geq \tau^\frac{1}{3}\right)\\ &=\mathbb{P}\left(s\sum_{i=0}^t(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\geq s\tau^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)\\ &=\mathbb{P}\left(\exp\left(s\sum_{i=0}^t(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)\geq \exp\left(s\tau^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)\right)\\ &\leq \frac{\expec{\exp\left(s\sum_{i=0}^{t-1}(\eta\|\nabla_i\|\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)}}{\exp\left(s\tau^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right)}\\ &\leq \exp\left(t\log 9d-s\tau^\frac{1}{3}+\sum_{i=0}^{t-1}\frac{s^2}{b_{1,i}}\right). \end{align*} The RHS is minimized at $s=\frac{\tau^{1/3}}{2\sum_{i=0}^{t-1}\frac{1}{b_{1,i}}}$. 
Substituting this value of $s$ and setting the resulting exponent equal to $-l$ for some $l>0$, we have: $$t\log 9d-\tau^\frac{2}{3}/\left(4\sum_{i=0}^{t-1}\frac{1}{b_{1,i}}\right)=-l.$$ Hence, we have $$\tau=\left(4\sum_{i=0}^{t-1}\frac{1}{b_{1,i}}(t\log 9d +l)\right)^{3/2}.$$ Finally, to prove the statement of the lemma, note that \begin{align*} \left(\sum_{i=0}^{t-1}\frac{1}{b_{1,i}}\right)^\frac{3}{2}=\frac{\sqrt{d}}{K_1^\frac{3}{2}n_1}\left(\sum_{i=0}^{t-1}(c_i\Upsilon_i)^\frac{2}{3}\right)^\frac{3}{2}\leq \frac{\eta\sqrt{dt}}{K_1^\frac{3}{2}n_1}\sum_{i=0}^{t-1}\left(\frac{\nu L_G(d+2)}{2}\|\nabla_i\|+C_0\sqrt{(\rho-1)(d+1)}\|\nabla_i\|^2\right) \end{align*} \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:normsumboundzero}] From \eqref{eq:martingalediffexpec} we have \begin{align*} \expec{\exp\left(s\sum_{i=0}^{t-1}(\|\zeta_i\|)^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{0,i}}\right)} \leq (9d)^t \end{align*} where $b_{0,i}$ is as defined in Lemma~\ref{lm:expecexpbound}. \begin{align*} &\mathbb{P}\left(\sum_{i=0}^{t-1}\|\zeta_i\|^2\geq \tau\right) \leq \mathbb{P}\left(s\sum_{i=0}^{t-1}\|\zeta_i\|^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{0,i}}\geq s\tau^\frac{1}{6}-\sum_{i=0}^{t-1}\frac{s^2}{b_{0,i}}\right)\\ =&\mathbb{P}\left(\exp\left(s\sum_{i=0}^{t-1}\|\zeta_i\|^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{0,i}}\right)\geq \exp\left(s\tau^\frac{1}{6}-\sum_{i=0}^{t-1}\frac{s^2}{b_{0,i}}\right)\right)\\ \leq & \frac{\expec{\exp\left(s\sum_{i=0}^{t-1}\|\zeta_i\|^\frac{1}{3}-\sum_{i=0}^{t-1}\frac{s^2}{b_{0,i}}\right)}}{\exp\left(s\tau^\frac{1}{6}-\sum_{i=0}^{t-1}\frac{s^2}{b_{0,i}}\right)}\leq \exp\left(t\log 9d-s\tau^\frac{1}{6}+\sum_{i=0}^{t-1}\frac{s^2}{b_{0,i}}\right) \end{align*} Following the same steps as in the proof of Lemma~\ref{lm:innerprodboundzero}, we have $\tau=\left(4\sum_{i=0}^{t-1}\frac{1}{b_{0,i}}(t\log 9d +l)\right)^3$. 
\begin{align*} \left(\sum_{i=0}^{t-1}\frac{1}{b_{0,i}}\right)^3=\frac{d}{K_1^3n_1^2}\left(\sum_{i=0}^{t-1}\Upsilon_i^\frac{2}{3}\right)^3\leq \frac{2dt^2}{K_1^3n_1^2}\sum_{i=0}^{t-1}\left(\left(\frac{\nu L_G(d+2)}{2}\right)^2+C_0^2(\rho-1)(d+1)\|\nabla_i\|^2\right) \end{align*} \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:descent}] \begin{enumerate}[label=\alph*)] \item \begin{align*} f(x_{t+1})\leq& f(x_t)+\nabla_t^\top (x_{t+1}-x_t)+\frac{L_G}{2}\|x_{t+1}-x_t\|^2\\ \leq & f(x_t)-\eta\nabla_t^\top (\nabla_t+\tilde{\zeta}_t)+\frac{\eta^2L_G}{2}\left(\frac{3}{2}\|\nabla_t\|^2+3\|\tilde{\zeta}_t\|^2\right)\\ \leq & f(x_t)-\frac{\eta}{4}\|\nabla_t\|^2-\eta \nabla_t^\top \tilde{\zeta}_t+\frac{3\eta^2L_G}{2}\|\tilde{\zeta}_t\|^2 \end{align*} The last inequality holds as we will choose $\eta\leq 1/L_G$. Summing both sides, \begin{align*} f(x_t)-f(x_0)\leq -\frac{\eta}{4}\sum_{i=0}^{t-1}\|\nabla_i\|^2 -\eta \sum_{i=0}^{t-1}\nabla_i^\top \tilde{\zeta}_i+\frac{3\eta^2L_G}{2}\sum_{i=0}^{t-1}\|\tilde{\zeta}_i\|^2 \numberthis\label{eq:totalfunctionchange} \end{align*} Observe that, by Assumption~\ref{as:SGC}, \begin{align*} \mathbb{P}(\nabla_t^\top \zeta_t\geq \tau|\calF_{t-1})\leq \mathbb{P}(\|\nabla_t\| \|\zeta_t\|\geq \tau|\calF_{t-1})\leq 2\exp(-\tau^2/(\frac{2(\rho-1)}{n_1}\|\nabla_t\|^4 )) \numberthis\label{eq:innerprodconc} \end{align*} So $\nabla_t^\top \zeta_t|\calF_{t-1}$ is $c\sqrt{\frac{\rho-1}{n_1}}\|\nabla_t\|^2$-sub-Gaussian. 
Using Lemma~\ref{lm:innerprodsumbound}, we have, with probability at least $1-e^{-l}$, \begin{align*} -\eta\sum_{i=0}^{t-1}\nabla_i^\top\zeta_i\leq \lambda\eta c\frac{\rho-1}{n_1} \sum_{i=0}^{t-1}\|\nabla_i\|^4+\eta\frac{l}{\lambda}\leq \lambda\eta c\frac{\rho-1}{n_1}\left(\sum_{i=0}^{t-1}\|\nabla_i\|^2\right)^2+\eta\frac{l}{\lambda} \end{align*} Plugging $\lambda=\frac{32l}{\sum_{i=0}^{t-1}\|\nabla_i\|^2}$, we have, \begin{align*} -\eta\sum_{i=0}^{t-1}\nabla_i^\top\zeta_i\leq \eta \left(32cl\frac{\rho-1}{n_1}+\frac{1}{32}\right)\sum_{i=0}^{t-1}\|\nabla_i\|^2 \numberthis\label{eq:zetainnerprodsumbound} \end{align*} Using Lemma~\ref{lm:innerprodsumbound}, with probability at least $1-e^{-l}$ we have, \begin{align*} -\eta\sum_{i=0}^{t-1}\nabla_i^\top \theta_i\leq \frac{\eta}{32}\sum_{i=0}^{t-1}\|\nabla_i\|^2+32cl\eta r^2 \numberthis\label{eq:thetainnerprodsumbound} \end{align*} Using Lemma~\ref{lm:normsumbound}, we have with probability at least $1-e^{-l}$, \begin{align*} \sum_{i=0}^{t-1}\|\theta_i\|^2\leq cr^2(t+l) \numberthis\label{eq:thetanormsumbound} \end{align*} Note that by Assumption~\ref{as:SGC}, $\expec{\|\zeta_t\|^2|\calF_{t-1}}\leq \frac{\rho-1}{n_1}\|\nabla_t\|^2$, and $\|\zeta_t\|^2|\calF_{t-1}$ is $c\frac{\rho-1}{n_1}\|\nabla_t\|^2$-subExponential. 
So we have, with probability at least $1-e^{-l}$, \begin{align*} \sum_{i=0}^{t-1}\|\zeta_i\|^2\leq (c+l)\frac{\rho-1}{n_1}\sum_{i=0}^{t-1}\|\nabla_i\|^2 \numberthis\label{eq:zetanormsumbound} \end{align*} Combining \eqref{eq:totalfunctionchange}, \eqref{eq:zetainnerprodsumbound}, \eqref{eq:thetainnerprodsumbound}, \eqref{eq:thetanormsumbound} and \eqref{eq:zetanormsumbound}, using $\|\tilde{\zeta}_t\|^2\leq 2 (\|\zeta_t\|^2+\|\theta_t\|^2)$, and using a union bound, we have with probability at least $1-4e^{-l}$, \begin{align*} f(x_t)-f(x_0)&\leq \left(-\frac{\eta}{4}+\eta \left(32lc\frac{\rho-1}{n_1}+\frac{1}{32}\right)+\frac{\eta}{32}+3\eta^2L_G(c+l)\frac{\rho-1}{n_1}\right)\sum_{i=0}^{t-1}\|\nabla_i\|^2\\&~+3c\eta^2r^2(t+l)L_G+32cl\eta r^2 \end{align*} We need to choose $\eta$ such that $\left(-\frac{\eta}{4}+\eta \left(32lc\frac{\rho-1}{n_1}+\frac{1}{32}\right)+\frac{\eta}{32}+3\eta^2L_G(c+l)\frac{\rho-1}{n_1}\right)<-\frac{\eta}{16}$. Choosing $n_1$ and $\eta$ as in \eqref{eq:descentparchoicefirst}, and setting $t={\cT_{1}}$, we get \eqref{eq:descentfirst}. \item Using Lemma~\ref{lm:innerprodboundzero}, Lemma~\ref{lm:normsumboundzero}, and Assumption~\ref{as:lip}, we have, with probability at least $1-4e^{-l}$, \begin{align*} &f(x_t)-f(x_0)\leq -\frac{\eta}{4}\sum_{i=0}^{t-1}\|\nabla_i\|^2+\frac{\eta}{16}\sum_{i=0}^{t-1}\|\nabla_i\|^2+16cl\eta r^2+3cL_G\eta^2r^2(t+l)\\ +&\frac{8\eta\sqrt{dt}}{K_1^\frac{3}{2}n_1}(t\log 9d+l)^\frac{3}{2}\sum_{i=0}^{t-1}\left(\frac{\nu LL_G(d+2)}{2}+C_0\sqrt{(\rho-1)(d+1)}\|\nabla_i\|^2\right)\\ +&\frac{384L_Gd\eta^2t^2(t\log 9d +l)^3}{K_1^3n_1^2}\sum_{i=0}^{t-1}\left(\left(\frac{\nu L_G(d+2)}{2}\right)^2+C_0^2(\rho-1)(d+1)\|\nabla_i\|^2\right) \end{align*} We will choose ${\cT_{0}}$, $\eta$, and $n_1$ such that \eqref{eq:parchoosecond1} and \eqref{eq:parchoosecond2} hold. Then, with probability at least $1-4e^{-l}$, we get \eqref{eq:descentzero}. 
\end{enumerate} \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:xtx0distnormbound}] \begin{enumerate}[label=\alph*)] \item For a fixed $\tau\leq t$, we have \begin{align*} \|x_\tau-x_0\|^2\leq \eta^2\|\sum_{i=0}^{\tau-1}(\nabla_i+\tz_i)\|^2\leq 2\eta^2 t\sum_{i=0}^{t-1}\|\nabla_i\|^2+4\eta^2(\|\sum_{i=0}^{t-1}\zeta_i\|^2+\|\sum_{i=0}^{t-1}\theta_i\|^2) \end{align*} Using Lemma~\ref{lm:sumnormbound}, we have with probability at least $1-4de^{-l}$, \begin{align*} \|\sum_{i=0}^{t-1}\zeta_i\|^2+\|\sum_{i=0}^{t-1}\theta_i\|^2\leq cl\left(\frac{\rho-1}{n_1}\sum_{i=0}^{t-1}\|\nabla_i\|^2+tr^2\right) \end{align*} Combining this with Lemma~\ref{lm:descent}, setting $t={\cT_{1}}$, and using a union bound, we have \eqref{eq:xtx0distnormboundfirst} with probability at least $1-4e^{-l}-4de^{-l}$. \item \begin{align*} \mathbb{P}\left(\|\sum_{i=0}^{t-1}\zeta_i\|^2\geq \tau\right)\leq \mathbb{P}\left(\sum_{i=0}^{t-1}\|\zeta_i\|^2\geq \tau/t\right) \end{align*} So from Lemma~\ref{lm:normsumboundzero}, we have with probability at least $1-e^{-l}$, \begin{align*} \|\sum_{i=0}^{t-1}\zeta_i\|^2\leq \frac{128dt^3(t\log 9d +l)^3}{K_1^3n_1^2}\sum_{i=0}^{t-1}\left(\left(\frac{\nu L_G(d+2)}{2}\right)^2+C_0^2(\rho-1)(d+1)\|\nabla_i\|^2\right) \end{align*} Plugging in $t={\cT_{0}}$, under condition \eqref{eq:parchoosecond1}, we have \begin{align*} \|\sum_{i=0}^{\cT_{0}-1}\zeta_i\|^2\leq \frac{L_G\cT_{0}^2\nu^2(d+2)^2}{192C_0^2(\rho-1)(d+1)}+\frac{\cT_{0}}{12L_G}\sum_{i=0}^{\cT_{0}-1}\|\nabla_i\|^2 \end{align*} From \eqref{eq:descentzero} we have, with probability at least $1-e^{-l}$, \begin{align*} \sum_{i=0}^{\cT_{0}-1}\|\nabla_i\|^2\leq \frac{16}{\eta}(f(x_0)-f(x_{\cT_{0}})+\wp(r,l,\nu,\eta,d,\cT_{0})) \end{align*} Then, with probability at least $1-3d\cT_0e^{-l}$, we have \eqref{eq:xtx0distnormboundzero}. 
\end{enumerate} \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:qhqsgbound}] \begin{enumerate}[label=\alph*)] \item Proof for the first-order setting is as in \cite{jin2019nonconvex}. \item Note that $q_h(t)$ is the same as in part (a). If we can ensure that for the zeroth-order case $\forall t\leq {\cT_{0}}$ we have $\|q_{sg}(t+1)\|\leq \beta(t)r/(40\sqrt{d})$, then the rest of the proof follows from \cite{jin2019nonconvex}. For a fixed $t$, using Cauchy–Schwarz inequality, \begin{align*} &\mathbb{P}\left(\left\lVert q_{sg}(t+1)\right\rVert\geq\tau\right)=\mathbb{P}\left(\eta\left\lVert\sum_{i=0}^{t}(I-\eta\cH)^{t-i}\hat{\zeta}_i\right\rVert\geq\tau\right)\leq \mathbb{P}\left(\eta\sum_{i=0}^{t}\left\lVert(I-\eta\cH)\right\rVert^{t-i}\left\lVert\zeta_i-\zeta'_i\right\rVert\geq\tau\right)\\ \leq& 2\mathbb{P}\left(\eta\sum_{i=0}^{t}\left\lVert(I-\eta\cH)\right\rVert^{t-i}\left\lVert\zeta_i\right\rVert\geq\tau/2\right)\\ \leq& 2\mathbb{P}\left(\eta\sqrt{\sum_{i=0}^{t}\left\lVert(I-\eta\cH)\right\rVert^{2t-2i}}\sqrt{\sum_{i=0}^{t}\left\lVert\zeta_i\right\rVert^2}\geq\tau/2\right)\\ \leq & 2\mathbb{P}\left(\sum_{i=0}^{t}\left\lVert\zeta_i\right\rVert^2\geq\left(\frac{\tau}{2\eta\beta(t+1)}\right)^2\right) \end{align*} From Lemma~\ref{lm:normsumboundzero}, we have with probability at least $1-e^{-l}$ \begin{align*} &\|q_{sg}(t+1)\|\leq \frac{32\sqrt{d(t+1)}\eta \beta(t+1)((t+1)\log 9d+\log 2+l)^\frac{3}{2}}{K_1^{3/2}n_1}\\ &\sum_{i=0}^{t}\left(\frac{\nu L_G(d+2)}{2}+C_0\sqrt{(\rho-1)(d+1)}\|\nabla_i\|\right) \end{align*} Recalling the definition of $\textXi$ from~\eqref{textxidef}, and setting $t={\cT_{0}}$, we will choose ${\cT_{0}}$, $r$, $\eta$, $l$, and $\nu$ such that \begin{align*} &\textXi\cdot\sum_{i=0}^{{\cT_{0}}}\left(\frac{\nu L_G(d+2)}{2}+C_0\sqrt{(\rho-1)(d+1)}\|\nabla_i\|\right)\leq \frac{\beta({\cT_{0}})r}{40\sqrt{d}} \numberthis\label{eq:parchoosecond3} \end{align*} \end{enumerate} \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:escapesaddle}] 
\begin{enumerate}[label=\alph*)] \item For the first part, we have from Lemma~\ref{lm:descent}, with probability at least $1-4e^{-l}$, \begin{align*} f(x_{\cT_{1}})-f(x_0)\leq 3c\eta^2r^2({\cT_{1}}+l)L_G+32cl\eta r^2\leq 0.1\cF_1 \end{align*} By methods similar to those in \cite{jin2019nonconvex}, we have, with probability at least $2/3-10d{\cT_{1}}^2\log\left(\frac{\cS_1\sqrt{d}}{\eta r}\right)e^{-l}$, that if $\min\{f(x_{\cT_{1}})-f(x_0),f(x_{\cT_{1}}')-f(x_0)\}\geq -\cF_1$, then \begin{align*} \max\{\|x_{\cT_{1}}-x_0\|,\|x_{\cT_{1}}'-x_0\|\}\geq \frac{\beta({\cT_{1}})\eta r }{40\sqrt{d}}=\frac{(1+\eta \gamma)^{\cT_{1}}\sqrt{\eta} r}{40\sqrt{2\gamma d}}>\cS_1 \numberthis\label{eq:parchoosecond5} \end{align*} This contradicts Lemma~\ref{lm:improvorloc}. Hence, with probability at least $2/3-10d{\cT_{1}}^2\log\left(\frac{\cS_1\sqrt{d}}{\eta r}\right)e^{-l}$, we have $\min\{f(x_{\cT_{1}})-f(x_0),f(x_{\cT_{1}}')-f(x_0)\}\leq -\cF_1$. As the marginal distributions of $x_{\cT_1}$ and $x'_{\cT_1}$ are the same, we have \begin{align*} &\mathbb{P}(f(x_{\cT_{1}}')-f(x_0)\leq -\cF_1)\geq \frac{1}{2}\mathbb{P}(\min\{f(x_{\cT_{1}})-f(x_0),f(x_{\cT_{1}}')-f(x_0)\}\leq -\cF_1)\\ \geq& 1/3-9d{\cT_{1}}^2\log\left(\frac{\cS_1\sqrt{d}}{\eta r}\right)e^{-l} \end{align*} \item Note that the probability of the second statement being true is at least $1/3-1.5{\cT_{0}}^2e^{-l}$, which is different from \cite{jin2019nonconvex}, but the proof method is the same, so we omit the proof here. \end{enumerate} \end{proof} \section{Proof of Theorem~\ref{th:crnmaintheorem}}\label{sec:crnproof} We first state the following optimality conditions for CR Newton method updates due to \cite{nesterov2006cubic}. 
\begin{lemma}\cite{nesterov2006cubic} \begin{subequations} \begin{align} \begin{split}\label{eq:optconda} g_t+H_th_t^*+\frac{M}{2}\|h_t^*\|h_t^*=0 \end{split}\\ \begin{split} \label{eq:optcondb} H_t+\frac{M}{2}\|h_t^*\|I\succcurlyeq 0 \end{split} \end{align} \end{subequations} \end{lemma} Intuitively, the proof proceeds in three stages. First, in Lemma~\ref{lm:envelopdescent}, we show that the descent at each time point is proportional to the cube of the step size. \begin{lemma}\label{lm:envelopdescent}\cite{tripuraneni2018stochastic} Let $m_t$ be as defined in \eqref{eq:envelope}. Then for all $t$, \begin{align} m_t(x_t+h_t^*)-m_t(x_t)\leq -\frac{M}{12} \|h_t^*\|^3 \label{eq:envelopdescent} \end{align} \end{lemma} Then, in Lemma~\ref{lm:htlowerbound} we show that the second-order stationarity of an iterate is upper bounded by the step size at that time point. \begin{lemma} \label{lm:htlowerbound} Let Assumptions~\ref{as:lipgrad} and \ref{as:liphess} hold true for $f$. Then the following holds $\forall t$ \begin{enumerate}[label=\alph*)] \item for the first-order update of a CR Newton method, \begin{align*} &\sqrt{\expec{\|h_t^*\|^2|\calF_t}} \geq \max\left(\left(A\expec{\|\nabla f\left(x_t+h_t^*\right)\||\calF_t}\right. \left. -B\right)^\frac{1}{2},\right.\\ &\left.\frac{2}{M+2L_H}\left(-\sqrt{\frac{\sigma_2^2}{n_2}}-\expec{\lambda_{1,t+1}|\calF_t}\right)\right) \numberthis\label{eq:htlowerbound} \end{align*} where $A=\frac{1}{2(L_H+M)}\left(1-\sqrt{\frac{\rho-1}{n_1}}\right)$, and $B=\frac{1}{4(L_H+M)^2}\left(\frac{\rho-1}{n_1}L_G^2+\frac{\sigma_2^2}{n_2}\right)$. \item for the zeroth-order update of a CR Newton method \begin{align*} &\sqrt{\expec{\|h_t^*\|^2|\calF_t}} \geq \max\left(\left(A'\expec{\|\nabla f\left(x_t+h_t^*\right)\||\calF_t}\right. \left. 
-B'\right)^\frac{1}{2},\right.\\ &\left.\frac{2}{M+2L_H}\left(-\sqrt{\frac{128(1+2\log 2d)(d+16)^4L_G^2}{3n_2}}-\sqrt{3}\nu L_H(d+16)^\frac{5}{2}-\expec{\lambda_{1,t+1}|\calF_t}\right)\right) \numberthis\label{eq:htlowerboundzero} \end{align*} where $A'=\frac{1}{2(L_H+M)}\left(1-\sqrt{\frac{\rho'-1}{n_1}}\right)$, and\\ $B'=\frac{1}{4(L_H+M)^2}\left(\frac{\rho'-1}{n_1}L_G^2 +\frac{128(1+2\log 2d)(d+16)^4L_G^2}{3n_2}+3L_H^2\nu^2(d+16)^5+\sqrt{6}\nu (L_H+M)L_G(d+3)^\frac{3}{2}\right)$. \end{enumerate} \end{lemma} Finally, in Lemma~\ref{lm:htcubeupperbound}, we prove that the expected step size becomes smaller with the horizon. \begin{lemma}\label{lm:htcubeupperbound} Let $f$ be a function for which Assumptions~\ref{as:lipgrad}, and \ref{as:liphess} are true. Then, \begin{enumerate}[label=\alph*)] \item for first-order updates generated by Algorithm~\ref{alg:nccrn} the following holds: \begin{align*} &\left(\frac{M}{72}-\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\frac{8}{\sqrt{M}A^\frac{3}{2}}\right)\expec{\|h_R^*\|^3|\calF_t}\\ &\leq \frac{f(x_1)-f^*}{T} +\frac{1152L_G^3}{M^2}\left(\frac{\rho-1}{n_1}\right)^\frac{3}{2}\\ &+\frac{8}{\sqrt{M}}\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\left(\frac{B}{A}\right)^\frac{3}{2} +\frac{324}{M^2}\frac{\sigma_2^3}{n_2^{3/2}} \numberthis\label{eq:htcubeupperbound} \end{align*} where $R$ is an integer random variable uniformly distributed over the support $\lbrace 1,2,\cdots,T\rbrace$. 
\item for zeroth-order updates generated by Algorithm~\ref{alg:nccrn} the following holds: \begin{align*} &\left(\frac{M}{144}-\left(\frac{\rho'-1}{n_1}\right)^\frac{3}{4}\frac{6}{\sqrt{M}{A'}^\frac{3}{2}}\right)\expec{\|h_R^*\|^3|\calF_t}\\ &\leq \frac{f(x_1)-f^*}{T} +\frac{864L_G^3}{M^2}\left(\frac{\rho'-1}{n_1}\right)^\frac{3}{2}+\frac{4}{M}(\nu L_G)^\frac{3}{2}(d+3)^\frac{9}{4}\\ &+\frac{6}{\sqrt{M}}\left(\frac{\rho'-1}{n_1}\right)^\frac{3}{4}\left(\frac{B'}{A'}\right)^\frac{3}{2} +\frac{162}{M^2}\left(\frac{160\sqrt{1+2\log 2d}(d+16)^6L_G^3}{n_2^\frac{3}{2}}+21L_H^3(d+16)^\frac{15}{2}\nu^3 \right) \numberthis\label{eq:htcubeupperboundzero} \end{align*} where $R$ is an integer random variable uniformly distributed over the support $\lbrace 1,2,\cdots,T\rbrace$. \end{enumerate} \end{lemma} Combining the above three facts, we complete the proof of Theorem~\ref{th:crnmaintheorem}. \begin{proof}[Proof of Theorem~\ref{th:crnmaintheorem}] \begin{enumerate}[label=\alph*)] \item From Lemma~\ref{lm:htlowerbound} we have \begin{align*} &\sqrt{\expec{\|h_t^*\|^2|\calF_t}}+\sqrt{B}+\frac{2}{(2L_H+M)}\sqrt{\frac{\sigma_2^2}{n_2}} \geq \\ &\max\left(\sqrt{A\expec{\|\nabla f\left(x_t+h_t^*\right)\||\calF_t}},-\frac{2}{(2L_H+M)}\expec{\lambda_{1,t+1}|\calF_t}\right) \numberthis\label{eq:htlowerboundrearranged} \end{align*} From Lemma~\ref{lm:htcubeupperbound}, we have \begin{align*} &\left(\left(\frac{M}{72}-\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\frac{8}{\sqrt{M}A^\frac{3}{2}}\right)\expec{\|h_R^*\|^3|\calF_t}\right)^\frac{1}{3}\\ &\leq \left(\frac{f(x_1)-f^*}{T}\right)^\frac{1}{3} +\frac{11L_G}{M^\frac{2}{3}}\left(\frac{\rho-1}{n_1}\right)^\frac{1}{2}\\ &+\frac{2}{M^\frac{1}{6}}\left(\frac{\rho-1}{n_1}\right)^\frac{1}{4}\left(\frac{B}{A}\right)^\frac{1}{2} +\frac{7}{M^\frac{2}{3}}\frac{\sigma_2}{n_2^{1/2}} \numberthis\label{eq:htcubeonethirdupperbound} \end{align*} Combining \eqref{eq:htlowerboundrearranged} with \eqref{eq:htcubeonethirdupperbound}, using Jensen's inequality, and choosing $n_1$, $n_2$, $T$, and $M$ as in \eqref{eq:crnewtonparameterchoice}, we obtain $\max\left(\sqrt{\frac{\expec{\|\nabla f\left(x_R\right)\|}}{144M}},-\frac{\expec{\lambda_{1,R}}}{9M}\right)\leq \sqrt{\epsilon} $. The total numbers of first-order and second-order oracle calls are $Tn_1=Tn_2=\order\left(\frac{1}{\epsilon^\frac{5}{2}}\right)$. \item From Lemma~\ref{lm:htlowerbound} we have \begin{align*} &\sqrt{\expec{\|h_t^*\|^2|\calF_t}}+\sqrt{B'}+\frac{2}{(2L_H+M)}\left(\sqrt{\frac{128(1+2\log 2d)(d+16)^4L_G^2}{3n_2}}+\sqrt{3}\nu L_H(d+16)^\frac{5}{2}\right) \geq \\ &\max\left(\sqrt{A'\expec{\|\nabla f\left(x_t+h_t^*\right)\||\calF_t}},-\frac{2}{(2L_H+M)}\expec{\lambda_{1,t+1}|\calF_t}\right) \numberthis\label{eq:htlowerboundrearrangedzero} \end{align*} From Lemma~\ref{lm:htcubeupperbound}, we have \begin{align*} &\left(\left(\frac{1}{144}-\left(\frac{\rho'-1}{n_1}\right)^\frac{3}{4}\frac{6}{{M}^\frac{3}{2}{A'}^\frac{3}{2}}\right)\expec{\|h_R^*\|^3|\calF_t}\right)^\frac{1}{3}\\ &\leq \left(\frac{f(x_1)-f^*}{MT}\right)^\frac{1}{3} +\frac{10L_G}{M}\left(\frac{\rho'-1}{n_1}\right)^\frac{1}{2}+\frac{2}{M^\frac{2}{3}}(\nu L_G)^\frac{1}{2}(d+3)^\frac{3}{4}\\ &+\frac{2}{{M}^\frac{1}{2}}\left(\frac{\rho'-1}{n_1}\right)^\frac{1}{4}\left(\frac{B'}{A'}\right)^\frac{1}{2} +\frac{6}{M}\left(\frac{6L_G(1+2\log 2d)^\frac{1}{6}(d+16)^2}{n_2^\frac{1}{2}}+3L_H(d+16)^\frac{5}{2}\nu \right) \numberthis\label{eq:htcubeonethirdupperboundzero} \end{align*} Combining \eqref{eq:htlowerboundrearrangedzero} with \eqref{eq:htcubeonethirdupperboundzero}, using Jensen's inequality, and choosing $n_1$, $n_2$, $T$, $\nu$, and $M$ as in \eqref{eq:crnewtonparameterchoicezero}, we obtain $\max\left(\sqrt{\expec{\|\nabla f\left(x_R\right)\|}},-\expec{\lambda_{1,R}}\right)\leq \order\left(\sqrt{\epsilon}\right)$. 
The total number of first-order oracle calls is $Tn_1=\order\left(\frac{d}{\epsilon^\frac{5}{2}}\right)$, and the total number of second-order oracle calls is $Tn_2=\order\left(\frac{d^4\log d}{\epsilon^\frac{5}{2}}\right)$. \end{enumerate} \end{proof} \subsection{Proofs of Lemmas related to CR Newton method} \begin{lemma}\label{lm:hessestvar}\cite{roy2019multipoint} \begin{align} &\expec{\left\lVert\nabla_t^2-\frac{1}{n_2}\sum_{i=1}^{n_2}\nabla^2 F(x_t,\xi_i)\right\rVert^2}\leq \frac{\sigma_2^2}{n_2}\\ &\expec{\left\lVert\nabla_t^2-\frac{1}{n_2}\sum_{i=1}^{n_2}\nabla^2 F(x_t,\xi_i)\right\rVert^3}\leq \frac{2\sigma_2^3}{n_2^\frac{3}{2}} \end{align} \end{lemma} For the zeroth-order estimates of the gradient and Hessian as defined in \eqref{eq:zerohessdef}, we have the following concentration result. \begin{lemma}\cite{balasubramanian2018zeroth}\label{lm:zerothorderesterror} \begin{subequations} \begin{align} \begin{split} \expec{\|g_t-\nabla_t\|^2}\leq \frac{\rho'-1}{n_1}\|\nabla_t\|^2+\frac{3\nu^2}{2}L_G^2(d+3)^3 \label{eq:gradesterror} \end{split}\\ \begin{split} \expec{\|\nabla_t^2-H_t\|^2}\leq \frac{128(1+2\log 2d)(d+16)^4L_G^2}{3n_2}+3L_H^2(d+16)^5\nu^2 \label{eq:hessesterror2} \end{split}\\ \begin{split} \expec{\|\nabla_t^2-H_t\|^3}\leq \frac{160\sqrt{1+2\log 2d}(d+16)^6L_G^3}{n_2^\frac{3}{2}}+21L_H^3(d+16)^\frac{15}{2}\nu^3 \label{eq:hessesterror3} \end{split} \end{align} \end{subequations} where $\rho'=1+4(d+5)\rho$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lm:htlowerbound}] \begin{enumerate}[label=\alph*)] \item Using \eqref{eq:optconda} we get \begin{align*} \|g_t+H_t h_t^*\|=\frac{M}{2}\|h_t^*\|^2 \end{align*} Then, using Assumption~\ref{as:liphess} and Young's inequality, we get \begin{align*} &\|\nabla f\left(x_t+h_t^*\right)\|\\ \leq &\|\nabla f\left(x_t+h_t^*\right)-\nabla_t -\nabla_t^2 h_t^*\|+\|\nabla_t+\nabla^2_th_t^*\|\\ \leq & \|\nabla f\left(x_t+h_t^*\right)-\nabla_t-\nabla^2_th_t^*\|+\|g_t+H_th_t^*\|\\ +&\|g_t-\nabla_t\|+\|\left(H_t-\nabla^2_t\right)h_t^*\|\\ \leq & 
\frac{M+L_H}{2}\|h_t^*\|^2+\|g_t-\nabla_t\|+\|\left(H_t-\nabla^2_t\right)h_t^*\|\\ \leq & \left(M+L_H\right)\|h_t^*\|^2+\|g_t-\nabla_t\|+\frac{1}{2(L_H+M)}\|H_t-\nabla^2_t\|^2 \end{align*} Taking expectation on both sides, and using Lemma~\ref{lm:gradestvar}, Lemma~\ref{lm:hessestvar}, and Jensen's inequality we have \begin{align*} &\expec{\|\nabla f\left(x_t+h_t^*\right)\||\calF_t}\\ &\leq (L_H+M)\expec{\|h_t^*\|^2|\calF_t}+\sqrt{\frac{\rho-1}{n_1}}\|\nabla_t\| +\frac{\sigma_2^2}{2(L_H+M)n_2} \end{align*} Using Assumption~\ref{as:lipgrad} we get \begin{align*} &\left(1-\sqrt{\frac{\rho-1}{n_1}}\right)\expec{\|\nabla f\left(x_t+h_t^*\right)\||\calF_t}\\ &\leq (L_H+M)\expec{\|h_t^*\|^2|\calF_t}+\sqrt{\frac{\rho-1}{n_1}}L_G \expec{\|h_t^*\||\calF_t}+\frac{\sigma_2^2}{2(L_H+M)n_2}\\ &\leq 2(L_H+M)\expec{\|h_t^*\|^2|\calF_t}+\frac{1}{2(L_H+M)}\left(\frac{\rho-1}{n_1}L_G^2 +\frac{\sigma_2^2}{n_2}\right) \end{align*} Rearranging we have, \begin{align*} &\sqrt{\expec{\|h_t^*\|^2|\calF_t}}\\ &\geq \left(\frac{1}{2(L_H+M)}\left(1-\sqrt{\frac{\rho-1}{n_1}}\right)\expec{\|\nabla f\left(x_t+h_t^*\right)\||\calF_t}\right.\\ &\left. -\frac{1}{4(L_H+M)^2}\left(\frac{\rho-1}{n_1}L_G^2 +\frac{\sigma_2^2}{n_2}\right)\right)^\frac{1}{2}\numberthis \label{eq:phtlowerboundproofa} \end{align*} Now, using Assumption~\ref{as:liphess} we get \begin{align*} &\expec{\nabla^2f\left(x_t+h_t^*\right)|\calF_t}\succcurlyeq \expec{\nabla_t^2-L_H\|h_t^*\|I|\calF_t}\\ &\succcurlyeq \expec{H_t-L_H\|h_t^*\|I|\calF_t} -\sqrt{\frac{\sigma_2^2}{n_2}}I\\ &\succcurlyeq -\sqrt{\frac{\sigma_2^2}{n_2}}I-\left(L_H+\frac{M}{2}\right)\expec{\|h_t^*\||\calF_t}I\\ &\expec{\|h_t^*\||\calF_t}\geq \frac{2}{M+2L_H}\left(-\sqrt{\frac{\sigma_2^2}{n_2}}-\expec{\lambda_{1,t+1}|\calF_t}\right)\numberthis \label{eq:phtlowerboundproofb} \end{align*} Now using Jensen's inequality, and \eqref{eq:phtlowerboundproofa} we get \eqref{eq:htlowerbound}. 
\item Using Lemma~\ref{lm:zerothorderesterror}, and following the proof of part (a), \eqref{eq:phtlowerboundproofa} becomes \begin{align*} &\sqrt{\expec{\|h_t^*\|^2|\calF_t}} \geq \left(\frac{1}{2(L_H+M)}\left(1-\sqrt{\frac{\rho'-1}{n_1}}\right)\expec{\|\nabla f\left(x_t+h_t^*\right)\||\calF_t}\right.\\ &\left. -\frac{1}{4(L_H+M)^2}\left(\frac{\rho'-1}{n_1}L_G^2 +\frac{128(1+2\log 2d)(d+16)^4L_G^2}{3n_2}+3L_H^2\nu^2(d+16)^5+\sqrt{6}\nu (L_H+M)L_G(d+3)^\frac{3}{2}\right)\right)^\frac{1}{2}\numberthis \label{eq:phtlowerboundproofazero} \end{align*} Similarly, \eqref{eq:phtlowerboundproofb} becomes \begin{align*} \expec{\|h_t^*\||\calF_t}\geq \frac{2}{(2L_H+M)}\left(-\sqrt{\frac{128(1+2\log 2d)(d+16)^4L_G^2}{3n_2}}-\sqrt{3}\nu L_H(d+16)^\frac{5}{2}-\expec{\lambda_{1,t+1}|\calF_t}\right)\numberthis \label{eq:phtlowerboundproofbzero} \end{align*} \end{enumerate} \end{proof} \begin{proof}[Proof of Lemma~\ref{lm:htcubeupperbound}] \begin{enumerate}[label=\alph*)] \item Using Young's inequality, and \eqref{eq:envelopdescent}, we get \begin{align*} &f(x_t+h_t^*)-f(x_t)\leq m_t( x_t+h_t^*)-m_t(x_t)\\ &+(\nabla_t-g_t)^\top h_t^* +\frac{1}{2}{h_t^*}^\top(\nabla_t^2-H_t)h_t^* \\ &\leq m_t( x_t+h_t^*)-m_t(x_t)\\ &+\frac{4}{\sqrt{3M}}\|\nabla_t-g_t\|^\frac{3}{2} +\frac{162}{M^2}\|\nabla_t^2-H_t\|^3+\frac{M}{18}\| h_t^*\|^3\\ &\leq - \frac{M}{36}\| h_t^*\|^3+\frac{4}{\sqrt{3M}}\|\nabla_t-g_t\|^\frac{3}{2} +\frac{162}{M^2}\|\nabla_t^2-H_t\|^3 \end{align*} Taking expectation on both sides, and using Lemma~\ref{lm:gradestvar} with Jensen's inequality, and Lemma~\ref{lm:hessestvar}, we get \begin{align*} &\expec{f(x_t+h_t^*)|\calF_t}-f(x_t)\leq -\frac{M}{36}\expec{\|h_t^*\|^3|\calF_t}\\ &+\frac{4}{\sqrt{3M}}\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\|\nabla_t\|^\frac{3}{2}+\frac{162}{M^2}\frac{2\sigma_2^3}{n_2^{3/2}} \numberthis\label{eq:progressintermed} \end{align*} Now let us relate the gradient size $\|\nabla_t\|$ with $\|h_t^*\|$. 
Note that, as $x_{t+1}=x_t+h_t^*$, we will use $\nabla_{t+1}$ to denote $\nabla f(x_t+h_t^*)$ here. Using the triangle inequality, the fact that $(a+b)^{3/2}\leq \sqrt{2}(a^{3/2}+b^{3/2})$ for $a,b>0$, Assumption~\ref{as:lipgrad}, and Jensen's inequality, we get \begin{align*} &\|\nabla_t\|^\frac{3}{2}=\|\nabla_t-\expec{\nabla_{t+1}|\calF_t}+\expec{\nabla_{t+1}|\calF_t}\|^\frac{3}{2}\\ \leq & (\|\nabla_t-\expec{\nabla_{t+1}|\calF_t}\|+\|\expec{\nabla_{t+1}|\calF_t}\|)^\frac{3}{2}\\ \leq & \sqrt{2}(\|\nabla_t-\expec{\nabla_{t+1}|\calF_t}\|^\frac{3}{2}+\|\expec{\nabla_{t+1}|\calF_t}\|^\frac{3}{2})\\ \leq & \sqrt{2}(L_G^\frac{3}{2}\expec{\|h_t^*\|^\frac{3}{2}|\calF_t}+\expec{\|\nabla_{t+1}\||\calF_t}^\frac{3}{2})\numberthis\label{eq:delt32boundintermed} \end{align*} From Lemma~\ref{lm:htlowerbound} we have \begin{align*} \expec{\|h_t^*\|^2|\calF_t}+B \geq A\expec{\|\nabla_{t+1} \||\calF_t} \end{align*} Again using the fact that $(a+b)^{3/2}\leq \sqrt{2}(a^{3/2}+b^{3/2})$ for $a,b>0$, and Jensen's inequality, we get \begin{align}\label{eq:cubehtlowerbound} \sqrt{2}\left(\expec{\|h_t^*\|^3|\calF_t}+B^\frac{3}{2}\right) \geq \left(A\expec{\|\nabla_{t+1} \||\calF_t}\right)^\frac{3}{2} \end{align} Combining \eqref{eq:delt32boundintermed} and \eqref{eq:cubehtlowerbound}, we get \begin{align*} \|\nabla_t\|^\frac{3}{2}\leq & \sqrt{2}L_G^\frac{3}{2}\expec{\|h_t^*\|^\frac{3}{2}|\calF_t}+\frac{2}{A^\frac{3}{2}}\expec{\|h_t^*\|^3|\calF_t}\\ +&2\left(\frac{B}{A}\right)^\frac{3}{2} \end{align*} Now, using Young's inequality \begin{align*} &\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\|\nabla_t\|^\frac{3}{2}\leq \frac{288L_G^3}{M^\frac{3}{2}}\left(\frac{\rho-1}{n_1}\right)^\frac{3}{2}\\ &+\frac{\sqrt{3}M^\frac{3}{2}}{288}\expec{\|h_t^*\|^3|\calF_t}+\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\frac{2}{A^\frac{3}{2}}\expec{\|h_t^*\|^3|\calF_t}\\ &+2\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\left(\frac{B}{A}\right)^\frac{3}{2} \numberthis\label{eq:rhonablatbound} \end{align*} Combining 
\eqref{eq:progressintermed}, and \eqref{eq:rhonablatbound} we get \begin{align*} &\expec{f(x_t+h_t^*)|\calF_t}-f(x_t)\leq -\frac{M}{72}\expec{\|h_t^*\|^3|\calF_t}\\ &+\frac{1152L_G^3}{M^2}\left(\frac{\rho-1}{n_1}\right)^\frac{3}{2}\\ &+\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\frac{8}{\sqrt{M}A^\frac{3}{2}}\expec{\|h_t^*\|^3|\calF_t}\\ &+\frac{8}{\sqrt{M}}\left(\frac{\rho-1}{n_1}\right)^\frac{3}{4}\left(\frac{B}{A}\right)^\frac{3}{2} +\frac{324}{M^2}\frac{\sigma_2^3}{n_2^{3/2}} \numberthis\label{eq:crndescentbeforesum} \end{align*} Rearranging and summing from $t=1$ to $T$, and dividing both sides by $T$ we get \eqref{eq:htcubeupperbound}. \item Using Lemma~\ref{lm:zerothorderesterror}, and following the proof of Lemma~\ref{lm:htcubeupperbound} we have the following inequality corresponding to \eqref{eq:progressintermed} \begin{align*} &\expec{f(x_t+h_t^*)|\calF_t}-f(x_t)\leq -\frac{M}{36}\expec{\|h_t^*\|^3|\calF_t}\\ &+\frac{3}{\sqrt{M}}\left(\frac{\rho'-1}{n_1}\right)^\frac{3}{4}\|\nabla_t\|^\frac{3}{2}+\frac{4}{M}(\nu L_G)^\frac{3}{2}(d+3)^\frac{9}{4}\\ &+\frac{162}{M^2}\left(\frac{160\sqrt{1+2\log 2d}(d+16)^6L_G^3}{n_2^\frac{3}{2}}+21L_H^3(d+16)^\frac{15}{2}\nu^3 \right) \numberthis\label{eq:progressintermedzero} \end{align*} Eventually we get the following descent in the function value similar to \eqref{eq:crndescentbeforesum} \begin{align*} &\expec{f(x_t+h_t^*)|\calF_t}-f(x_t)\leq -\frac{M}{144}\expec{\|h_t^*\|^3|\calF_t}\\ &+\frac{864L_G^3}{M^2}\left(\frac{\rho'-1}{n_1}\right)^\frac{3}{2}+\left(\frac{\rho'-1}{n_1}\right)^\frac{3}{4}\frac{6}{\sqrt{M}{A'}^\frac{3}{2}}\expec{\|h_t^*\|^3|\calF_t}+\frac{4}{M}(\nu L_G)^\frac{3}{2}(d+3)^\frac{9}{4}\\ &+\frac{6}{\sqrt{M}}\left(\frac{\rho'-1}{n_1}\right)^\frac{3}{4}\left(\frac{B'}{A'}\right)^\frac{3}{2} +\frac{162}{M^2}\left(\frac{160\sqrt{1+2\log 2d}(d+16)^6L_G^3}{n_2^\frac{3}{2}}+21L_H^3(d+16)^\frac{15}{2}\nu^3 \right) \numberthis\label{eq:crndescentbeforesumzero} \end{align*} Rearranging and summing from $t=1$ 
to $T$, and dividing both sides by $T$, we get \eqref{eq:htcubeupperboundzero}. \end{enumerate} \end{proof}
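For completeness, the elementary bound $(a+b)^{\frac{3}{2}}\leq \sqrt{2}\,(a^{\frac{3}{2}}+b^{\frac{3}{2}})$ for $a,b>0$, invoked repeatedly in the proof of Lemma~\ref{lm:htcubeupperbound}, follows in one line from the convexity of $x\mapsto x^{p}$ for $p\geq 1$ (taking $p=\frac{3}{2}$ produces the constant $\sqrt{2}=2^{p-1}$):

```latex
% Convexity of $x^{p}$ ($p\geq 1$) gives
% $\left(\frac{a+b}{2}\right)^{p}\leq\frac{a^{p}+b^{p}}{2}$,
% i.e.\ $(a+b)^{p}\leq 2^{p-1}(a^{p}+b^{p})$; here $p=\frac{3}{2}$.
\begin{align*}
(a+b)^{\frac{3}{2}}
=2^{\frac{3}{2}}\left(\frac{a+b}{2}\right)^{\frac{3}{2}}
\leq 2^{\frac{3}{2}}\cdot\frac{a^{\frac{3}{2}}+b^{\frac{3}{2}}}{2}
=\sqrt{2}\left(a^{\frac{3}{2}}+b^{\frac{3}{2}}\right).
\end{align*}
```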
his comparison between counseling and Marlana's work. Want to hear a sample session? This is an hour session divided into 3 parts and set up as a playlist, so all you have to do is listen. You don't need any special software and it plays instantly without any download time. You can even surf other pages while you listen. If you would rather read a transcript of this session, go here. So you have decided to give it a try. Great! Now... let's get on to the fun stuff... NOTE: You do NOT need to know which kind of session you want to do when we work by phone (mentoring, Healing, House of Cards, Beacons, packages, etc.). Your responsibility is to be honest with yourself and be courageous about looking at what you want to be different in your life. Together, we decide what kinds of things listed on this website will work best for you. That's why phone sessions sometimes work better when you need more interaction. If you like to work on your own, and simply choose what you want to do without any interaction with me, then email is the way to go for you. Create a Safe Space: Be sure to make your appointment at a time and place where you will be alone, quiet and free from distraction, for an hour or so. Be ready to use your imagination! Sometimes we find that past lives play an important role in our current life issues. Not all sessions can encompass everything in my toolbox; nevertheless, it's a good idea to be ready to work with me as we do Past Life Integrations to help clear those blocked energies.
- Top 10 General Thailand Travel Guides to buy in USA 2021 | Price & Review - The Top Eleven Scams That Trick Men in Thailand - Pathumwan Princess Hotel Bangkok - Walking around Patpong Night Market in Bangkok, Thailand - ⁴ᴷ COMPILATION OF THE BEST 3 Temples To Visit In Bangkok: Wat Arun, Wat Pho, Grand Palace - Kata Palm Resort & Spa Phuket – Luxurious Rooms - DK Eyewitness Travel Guides come to iPhone and iPod Touch - 7 Mermaids Found and Caught on Camera by People… - Shop Cheap at MBK Center Shopping Mall Bangkok Thailand - WAT ARUN Temple of Dawn, BANGKOK THAILAND – Bangkok Walking Tour DK Eyewitness Travel Guides: the most maps, photography, and illustrations of any guide. DK Eyewitness Travel Guide: Top 10 Bangkok is your pocket guide to the best of Thailand's capital. Experience the best of Bangkok, from the most beautiful Buddhist temples to must-see museums and galleries. Stroll through the streets and explore the best shops, markets, restaurants, bars, and clubs, or find luxurious day spas and beautiful Bangkok beaches. Your Top 10 Bangkok travel guide includes the best hotels for every budget, the most fun places for kids, and insider tips for every budget. Discover DK Eyewitness Travel Guide: Top 10 Bangkok. True to its name, this Top 10 guidebook covers all major sights and attractions in easy-to-use "top 10" lists that help you plan the vacation that is right for you. 
"Don't miss" location highlights; things to do and places to eat, drink, and shop by area; a free, color pull-out map (print edition), plus maps and photographs throughout; walking tours and day-trip itineraries; traveler tips and recommendations; local food and dining specialties to try; museums, festivals, and outdoor activities; creative and quirky best-of lists; and more. The perfect pocket-size travel companion: DK Eyewitness Travel Guide: Top 10 Bangkok. Recommended: For an in-depth guidebook to the country of Thailand, check out DK Eyewitness Travel Guide: Thailand, which offers the most complete cultural coverage of Bangkok and Thailand; trip-planning itineraries by length of stay; 3-D cross-section illustrations of major sights and attractions; thousands of photographs, illustrations, and maps; and more.
TITLE: Artin's Algebra Book Problem: Prove that splitting fields of $x^3+ex+6$ over $Q(e)$ and $x^3+\pi x+6$ over $Q(\pi)$ are isomorphic. QUESTION [1 upvotes]: Assume that $\pi$ and $e$ are transcendental. Let $K$ be the splitting field of $f(x)=x^{3} + \pi x + 6$ over $F = Q(\pi)$ (a) Prove that $[K : F] = 6$. (b) Prove that $K$ is isomorphic to the splitting field of $x^3 +ex+6$ over $Q(e)$ It's clear that $f$ has only one real root, and as $\pi$ is transcendental over $Q$, $f$ is irreducible over $Q(\pi)$, which in particular implies that $[K:F]=6$. Any ideas about part $(b)$? REPLY [3 votes]: Since $e$ and $\pi$ are transcendental, both $\mathbb Q(e)$ and $\mathbb Q(\pi)$ are isomorphic to the rational function field $\mathbb Q(t)$. If we let $\phi: \mathbb Q(e) \to \mathbb Q(\pi)$ be the field isomorphism determined by $\phi(e)=\pi$, then $\phi$ induces an isomorphism of polynomial rings $\phi: \mathbb Q(e)[x] \to \mathbb Q(\pi)[x]$. Under this isomorphism, we have $\phi(x^3+ex+6)= x^3+\pi x+6$, and in this situation, $\phi$ extends to an isomorphism of the splitting fields of the two polynomials; e.g., see section 13.4 of Dummit and Foote, Abstract Algebra: Theorem 27. Let $\phi:F \to F'$ be an isomorphism of fields. Let $f(x)\in F[x]$ be a polynomial and let $f'(x) \in F'[x]$ be the polynomial obtained by applying $\phi$ to the coefficients of $f(x)$. Let $E$ be a splitting field for $f(x)$ over $F$ and let $E'$ be a splitting field for $f'(x)$ over $F'$. Then the isomorphism $\phi$ extends to an isomorphism $\sigma: E \to E'$, i.e., $\sigma$ restricted to $F$ is the isomorphism $\phi$.
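Schematically, the argument chains three maps (writing $K_e$ and $K_\pi$ for the splitting fields of $x^3+ex+6$ over $\mathbb Q(e)$ and of $x^3+\pi x+6$ over $\mathbb Q(\pi)$; this notation is introduced here only for illustration):

```latex
\begin{align*}
&\phi:\mathbb{Q}(e)\xrightarrow{\ \sim\ }\mathbb{Q}(\pi),
 && \phi|_{\mathbb{Q}}=\mathrm{id},\ \phi(e)=\pi
 && \text{(both fields $\cong\mathbb{Q}(t)$)}\\
&\phi:\mathbb{Q}(e)[x]\xrightarrow{\ \sim\ }\mathbb{Q}(\pi)[x],
 && x^3+ex+6\longmapsto x^3+\pi x+6
 && \text{(apply $\phi$ to coefficients)}\\
&\sigma:K_e\xrightarrow{\ \sim\ }K_\pi,
 && \sigma|_{\mathbb{Q}(e)}=\phi
 && \text{(extension given by Theorem 27)}
\end{align*}
```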
\begin{document} \maketitle \begin{abstract} In this paper we present a classification of non-symplectic automorphisms of K3 surfaces whose order is a multiple of seven by describing the topological type of their fixed locus. In the case of purely non-symplectic automorphisms, we provide new results for order 14 and alternative proofs for orders 21, 28 and 42. For each of these orders we also consider not purely non-symplectic automorphisms and obtain a complete characterization of their fixed loci. Several results of our paper were obtained independently from the results of the recent paper \cite{brandhorst2021} by Brandhorst and Hofmann; the methods we use are also different. \end{abstract} \section{Introduction} \label{sec:introduction} An automorphism of a K3 surface induces an action on the one-dimensional space of holomorphic 2-forms on the surface, so there are two kinds of automorphisms on K3 surfaces: symplectic and non-symplectic ones. The automorphism is called \emph{symplectic} if the induced action is trivial. Otherwise, it is called \emph{non-symplectic}, in which case one distinguishes between \emph{purely non-symplectic} automorphisms, meaning the action on the volume form is given by multiplication by a primitive root of unity, and \emph{not purely non-symplectic} automorphisms, meaning some (non-trivial) power of the automorphism is symplectic. A fundamental problem is to obtain a complete classification of non-symplectic automorphisms of finite order. By \cite[Theorem 0.1]{Nikulin} the rank of the transcendental lattice of a K3 surface carrying a purely non-symplectic automorphism of order $n$ is divisible by the Euler totient function of $n$, which implies $\varphi(n)\leq 21$. Moreover, Machida and Oguiso in \cite[Main Theorem 3]{MO} show that all positive integers $n \neq 60$ satisfying such a property occur as orders of purely non-symplectic automorphisms. 
A classification of non-symplectic automorphisms of prime order $p$ was completed by Nikulin in \cite{Nikulin-inv} (when $p=2$), and Artebani, Sarti and Taki in \cite{ArtebaniSarti}, \cite{Taki}, \cite{ArtebaniSartiTaki} (when $p> 2$). The study of non-symplectic automorphisms of composite order is much more intricate and results for some possible orders can be found in \cite{ArtebaniSarti4}, \cite{Dillies}, \cite{ACV}, \cite{ACV2}, \cite{Brandhorst}, \cite{AlTabbaaGrossiSarti}, \cite{AlTabbaaSartiTaki}, \cite{AlTabaaSarti-order8-2} and \cite{brandhorst2021}, among others. In this paper, we contribute to the classification of non-symplectic automorphisms of orders that are multiples of seven by describing the topological type of their fixed locus. For purely non-symplectic automorphisms, we provide new results for order $14$ and alternative proofs for orders $21, 28$ and $42$, recovering the results in \cite{Brandhorst}. We also consider the not purely non-symplectic case and obtain a complete characterization for each of these orders. Our main result in the case of purely non-symplectic automorphisms is summarized below in Theorem \ref{main1}. We point the reader to Propositions \ref{thm14}, \ref{thm21}, \ref{thm28} and \ref{thm42} for the details. \begin{thmINTRO} Let $\sigma_n$ be a purely non-symplectic automorphism of order $n\in\{14,21,28,42\}$ on a K3 surface $X$. Then the fixed locus of $\sigma_n$ is not empty and $\Fix(\sigma_n)$ and the fixed loci of its powers are described by Tables \ref{tab:fixed}, \ref{tab:21}, \ref{tab:28}, \ref{tab:42}. \label{main1} \end{thmINTRO} We show the different possibilities for the fixed loci are indeed realizable by explicitly constructing examples that have the desired topological types. Some of these examples were already given in \cite{Brandhorst}, but we provide here a more detailed geometric description. 
In the not purely non-symplectic case, we also consider automorphisms of orders 14, 21, 28, and 42 and again we provide a complete classification. In each case we show that not every power of the automorphism can be symplectic and our main result in this direction is given by Theorem \ref{main2} below. The details are presented in Section \ref{sec-NP}. \begin{thmINTRO} Let $\sigma_n$ be a non-symplectic automorphism of order $n\in\{14,21,28,42\}$ on a K3 surface $X$. \begin{enumerate}[(i)] \item If $n=14$, then both its square and its 7-th power can be symplectic. In each case, the fixed loci of $\sigma_{14}$ and its powers are described in Propositions \ref{NPorder14} and \ref{prop_NP14-2}. \item If $n=21$, its cube is necessarily non-symplectic, whereas $\sigma_{21}^7$ can be symplectic and the fixed loci of $\sigma_{21}$ and its powers in this case are described in Proposition \ref{prop21NP}. \item If $n=28$ or $n=42$, then $\sigma_n$ is necessarily purely non-symplectic. \end{enumerate} \label{main2} \end{thmINTRO} To prove Theorems \ref{main1} and \ref{main2} we apply a unified approach to all orders. A central idea consists in observing that the study of the fixed locus of $\sigma_n$ can be reduced to a local analysis of the fixed loci of (some of) its powers. In particular, we rely on the classification result for order $7$ in \cite{ArtebaniSartiTaki}, and some of the tools we use are the Hodge index theorem and the holomorphic and topological Lefschetz formulas (\ref{eq-Lefhol}) and (\ref{eq-Leftop}). Moreover, the examples we construct are often given in terms of elliptic fibrations (see Definition \ref{ellFib}). The structure of the paper is the following: Section \ref{sec:background} is devoted to presenting background material, introducing notation and recalling some standard results on automorphisms on K3 surfaces. 
In Section \ref{sec-order14} we classify purely non-symplectic automorphisms of order 14 in terms of the topological type of their fixed locus. Our main result is outlined in Proposition \ref{thm14} and Tables \ref{tab:fixed} and \ref{tab:14larga}. Moreover, we show the different possibilities indeed occur by giving explicit examples. Section \ref{sec-order21} (resp. \ref{sec-order28}, \ref{sec-order42}) provides the classification of purely non-symplectic automorphisms of order 21 (resp. 28, 42). The topology of their fixed locus is summarized in Tables \ref{tab:21} (resp. \ref{tab:28}, \ref{tab:42}). In Section \ref{sec-NP} we then consider the case of not purely non-symplectic automorphisms and obtain a complete characterization for each possible order (14, 21, 28 and 42). Finally, in Section \ref{sec-NS} we study the Néron-Severi lattice of a K3 surface carrying a purely non-symplectic automorphism of order a multiple of seven. All computations in this paper are carried out using MAGMA \cite{MAGMA} and we work over $\mathbb{C}$ throughout. \section*{Acknowledgements} This work started during the workshop Women in Algebraic Geometry, held virtually at ICERM Providence in July 2020. We thank ICERM for this opportunity. We thank S. Brandhorst for useful comments. P.C. has been partially supported by Proyecto Fondecyt Iniciaci\'on N.11190428 and Proyecto Fondecyt Regular N.1200608. P.C. and A.S. have been partially supported by Programa de cooperaci\'on cient\'ifica ECOS-ANID C19E06 and Math AmSud-ANID 21 Math 02. \section{Background and notation}\label{sec:background} A K3 surface is a compact, complex surface which is simply connected and has trivial canonical bundle. An automorphism of finite order on a K3 surface is called non-symplectic if it acts non-trivially on the volume form. The automorphism is called purely non-symplectic if the action is given by multiplication by a primitive $n$-th root of unity. 
\begin{notation}Throughout the paper we will adopt the following notations: \leavevmode \begin{itemize} \item $\omega_X$ will denote a nowhere vanishing holomorphic 2-form on a K3 surface $X$; \item $\zeta_n$ will denote an $n$-th root of unity; \item $\sigma_n$ will denote an automorphism of (finite) order $n$ on a K3 surface $X$. \begin{itemize} \item In particular, given $\sigma_{n},$ if $m$ divides $n$, we will also denote $\sigma_{n}^{\frac{n}{m}}$ by $\sigma_{m}$; and \end{itemize} \item $S(\sigma_n)$ will denote the invariant lattice: $\{x\in H^2(X,\mathbb Z) |\ (\sigma_n)^{*}(x)=x\}$, which is primitively embedded in the N\'eron Severi lattice $\rm{NS}(X)$ of the surface X, by \cite{Nikulin}. \end{itemize} \end{notation} Given any purely non-symplectic automorphism $\sigma_n$ with $n\geq 3$, by the Hodge Index Theorem, its fixed locus $\Fix(\sigma_n)$ consists of a disjoint union of smooth curves and isolated points: \begin{equation} \label{fixed}\Fix(\sigma_n)=C_{g_n}\sqcup R_1\sqcup\ldots \sqcup R_{k_n}\sqcup\{p_1,\ldots,p_{N_n}\}\end{equation} where $C_{g_n}$ is a smooth curve of genus $g_n\geq 0$ and $R_i$ are rational curves and $p_i$ are isolated fixed points, whose total number is $N_{n}$. By \cite{Nikulin}, the action of $\sigma_n$ can be locally linearized and diagonalized around a fixed point so that $\sigma_n$ acts as multiplication by the matrix \[ A_{i,n} := \begin{bmatrix} \zeta_{n}^{1+i} & 0 \\ 0 & \zeta_{n}^{n-i} \end{bmatrix} \text{ such that }0 \leq i < n, \] and we say that such a fixed point is of type $A_{i,n}$. The total number of fixed points of type $A_{i,n}$ will be denoted by $m_{i,n}$. Observe that if $i=0$, one of the eigenvalues of $A_{0,n}$ is 1, thus the fixed point is not isolated but it belongs to a fixed curve. We may use the holomorphic Lefschetz formula for $\sigma_n$ to compute the Lefschetz number $L(\sigma_n)$ in two ways. 
First of all, we have: \[L(\sigma_n)=\sum_{i=0}^2(-1)^i{\rm tr} (\sigma_n^*|_{H^i(X,\mathcal O_X)})=1+\zeta_n^{n-1}\] where we are assuming $\sigma_n^*\omega_X=\zeta_n \omega_X$. On the other hand, we have: \[L(\sigma_n)=\sum_{i=1}^{n-2}\frac{m_{i,n}}{\det(I-A_{i,n})}+\alpha_n\frac{1+\zeta_n}{(1-\zeta_n)^2},\] where $\displaystyle{\alpha_n\coloneqq\sum_{C\subset\Fix(\sigma_n)}(1-g(C))}$. Equating these two expressions, we obtain a linear system of equations that allows us to determine the possible values for $m_{i,n}$ and $\alpha_n$: \begin{equation}\label{eq-Lefhol} 1+\zeta_n^{n-1}=\sum_{i=1}^{n-2}\frac{m_{i,n}}{(1-\zeta_n^{1+i})(1-\zeta_n^{n-i})}+\alpha_n\frac{1+\zeta_n}{(1-\zeta_n)^2}. \end{equation} The topological Lefschetz formula, in turn, can be used to compute the Euler characteristic of the fixed locus of $\sigma_n$: \begin{equation}\label{eq-Leftop} \chi_n \doteq \chi(\Fix(\sigma_n))=2+{\rm tr}(\sigma_n^*|H^2(X,\mathbb R)). \end{equation} Both (\ref{eq-Lefhol}) and (\ref{eq-Leftop}) will be used extensively throughout the paper in order to perform a local analysis of the action of non-symplectic automorphisms with order a multiple of seven. It is this local analysis that will lead us to a complete classification of such automorphisms, in terms of the topological type of their fixed locus. We will also make extensive use of the already known classification of non-symplectic automorphisms of order seven, given by Theorem \ref{order7} below: \begin{theorem} \cite[Section 6]{ArtebaniSartiTaki} If $X$ is a K3 surface and $\sigma_7$ a non-symplectic automorphism of order 7, then the possibilities for the fixed locus of $\sigma_7$ and the invariant lattice $S(\sigma_7)$ are listed in Table \ref{tab:7} and all cases exist (see \cite{ArtebaniSartiTaki} for notations of lattices).
\begin{table}[H] \centering \begin{tabular}{c!{\vrule width 1.5pt}c|c|c|c|c|c} &$m_{1,7}$&$m_{2,7}$&$m_{3,7}$&$g_7$&$k_7$&$S(\sigma_7)$\\ \noalign{\hrule height 1.5pt} A&2&1&0&1&0&$U\oplus K_7$\\ \hline $\dagger$&2&1&0&-&-&$U(7)\oplus K_7$\\ \hline B&4&3&1&1&1&$U\oplus E_8$\\ \hline C&4&3&1&0&0&$U(7)\oplus E_8$\\ \hline D&6&5&2&0&1&$U\oplus E_8\oplus A_6$ \end{tabular} \caption{Order 7} \label{tab:7} \end{table} \label{order7} \end{theorem} For each possibility in our classification, the existence of a K3 surface carrying an automorphism with fixed locus having the desired topological type will then be obtained via the construction of explicit examples. Most of the examples will arise from elliptic fibrations. Therefore, we also recall some generalities about elliptic K3 surfaces, and we refer the reader to \cite{Miranda} for details. \begin{defin} An elliptic fibration on a projective surface $X$ consists of a surjective proper morphism $\pi:X \to C$ (with connected fibers) such that the generic fiber is a smooth curve of genus one, and we further assume there exists a section $s:C \to X$ (i.e. $\pi \circ s= id_{C}$). \label{ellFib} \end{defin} A K3 surface $X$ admits an elliptic fibration if and only if there exists an embedding of the hyperbolic lattice $U$ into $NS(X)$, the Néron-Severi lattice of the surface. Any elliptic fibration can be reconstructed from its Weierstrass model, and in the case of K3 surfaces such a model is given by an equation of the form: \begin{equation} y^2=x^3+A(t)x+B(t), \quad t\in \mathbb{P}^1 \label{weierstrass} \end{equation} where $A(t)$ and $B(t)$ are polynomials of degrees at most $8$ and $12$, respectively. Given an elliptic fibration, a chosen section $s:C \to X$ is called the zero section, and one identifies the map $s$ with the curve $s(C)$ on $X$. In the model given by (\ref{weierstrass}), the zero section is $t \mapsto (0:1:0)$.
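The entries of Table \ref{tab:7} can also be spot-checked numerically against the holomorphic Lefschetz relation \eqref{eq-Lefhol}. The Python sketch below is purely illustrative (the computations in this paper are done in MAGMA, and the function name is ours); here $\alpha_7$ is the sum of $1-g(C)$ over the pointwise fixed curves $C$, the convention under which \eqref{eq-Lefhol} reproduces the table.

```python
from cmath import exp, pi

def lefschetz_residual(n, m, alpha):
    """|LHS - RHS| of the holomorphic Lefschetz relation for sigma_n.

    m[i-1] is the number m_{i,n} of isolated fixed points of type A_{i,n}
    (with eigenvalues zeta_n^{1+i}, zeta_n^{n-i}), and alpha is the sum of
    1 - g(C) over the pointwise fixed curves C.
    """
    z = exp(2j * pi / n)
    lhs = 1 + z**(n - 1)
    rhs = sum(mi / ((1 - z**(1 + i)) * (1 - z**(n - i)))
              for i, mi in enumerate(m, start=1))
    rhs += alpha * (1 + z) / (1 - z)**2
    return abs(lhs - rhs)

# Rows A, B, D of Table 1: points (m_{1,7}, m_{2,7}, m_{3,7}) and fixed
# curves C_1 (alpha = 0), C_1 + R (alpha = 1), C_0 + R (alpha = 2).
print(lefschetz_residual(7, (2, 1, 0, 0, 0), 0))  # ~ 0 up to rounding
print(lefschetz_residual(7, (4, 3, 1, 0, 0), 1))  # ~ 0 up to rounding
print(lefschetz_residual(7, (6, 5, 2, 0, 0), 2))  # ~ 0 up to rounding
```

Each residual vanishes up to floating point error, as expected.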
We further observe that, using (\ref{weierstrass}), the volume form can be written locally as \[ \frac{dx \wedge dt}{2y} \] Moreover, the discriminant of the fibration is the polynomial of degree $24$: \[ \Delta(t)=4A(t)^3+27B(t)^2 \] and each zero of $\Delta(t)$ corresponds to a singular fiber of the fibration. The possible singular fibers have been classified by N\'eron and Kodaira \cite{neron}, \cite{kodaira1}, \cite{kodaira2}. \section{Order 14}\label{sec-order14} Let $\sigma_{14}$ be a purely non-symplectic automorphism of order 14. As described in Section \ref{sec:background}, the local actions of $\sigma_{14}$ at fixed points are of seven types. Points of type $A_{0,14}$ lie on a fixed curve, and isolated fixed points are of type $A_{i,14}$ for $i=1,\ldots,6$. Thus, the fixed locus of $\sigma_{14}$ can contain both fixed curves and isolated fixed points of six different types. The goal of this section is to prove the following classification result: \begin{proposition} The fixed locus of a purely non-symplectic automorphism of order $14$ on a K3 surface is nonempty and consists of either: \begin{enumerate}[(i)] \item The union of $N_{14}$ isolated points, where $N_{14}\in \{3,4,5,6,7\}$; or \item The disjoint union of a rational curve and $N_{14}$ isolated points, where $N_{14}\in\{6,11,12\}$. \end{enumerate} Moreover, all these possibilities occur, and in each case $\sigma_7\doteq \sigma_{14}^2$ fixes at least one curve. A more detailed description is given in Tables \ref{tab:fixed} and \ref{tab:ex} below, where $\sigma_2$ denotes the involution $\sigma_{14}^7$.
\label{thm14} {\small \begin{table}[H]\centering \begin{tabular}{c!{\vrule width 1.5pt}c|c|c|c} &$\Fix(\sigma_{14})$&$\Fix(\sigma_7)$&$\Fix(\sigma_2)$&Example\\ \noalign{\hrule height 1.5pt} A1(9,1)&$\{p_1,\ldots,p_7\}$&$E\sqcup\{p_1,p_2,p_3\}$&$C_9\sqcup R$&\ref{example_A1_(9,1)}\\ A1(3,2)&$\{p_1,\ldots,p_7\}$&$E\sqcup\{p_1,p_2,p_3\}$&$C_3\sqcup R_1\sqcup R_2$& \ref{example_A1_(3,2)}\\ \hline A2&$\{p_1,\ldots,p_5\}$&$E\sqcup\{p_1,q_1,q_2\}$&$C_9$& \ref{example_A2_(9,0)}\\ \noalign{\hrule height 1.5pt} B3&$R\sqcup\{p_1,\ldots,p_{12}\}$&$E\sqcup R\sqcup\{p_1,\ldots,p_8\}$&$C_6\sqcup R\sqcup R_1\sqcup\ldots\sqcup R_4$ &\ref{example_B3}\\ \noalign{\hrule height 1.5pt} C1(6,1)&$\{p_1,\ldots,p_6\}$&$R\sqcup\{p_1,\ldots,p_4,q_1,\ldots,q_4\}$&$C_6\sqcup R'$&\ref{example_C1_(6,1)}\\ C1(7,2)&$\{p_1,\ldots,p_6\}$&$R\sqcup\{p_1,\ldots,p_4,q_1,\ldots,q_4\}$&$C_7\sqcup R_1\sqcup R_2$&\ref{example_C1_(7,2)}\\ C1(0,2)&$\{p_1,\ldots,p_6\}$&$R\sqcup\{p_1,\ldots,p_4,q_1,\ldots,q_4\}$&$R_1\sqcup R_2 \sqcup R_3$&\ref{example_C1_(0,2)}\\ \hline C2&$\{p_1,\ldots,p_4\}$&$R\sqcup\{p_1,p_2,q_1,\ldots,q_6\}$&$C_6$&\ref{example_C2(6,0)}\\ \hline C3&$R\sqcup\{p_1,\ldots,p_6\}$&$R\sqcup\{p_1,\ldots,p_6,q_1,q_2\}$&$C_6\sqcup R\sqcup R'$&\ref{example_C3_(6,2)}\\ \noalign{\hrule height 1.5pt} D2&$\{p_1,p_{2},p_{3}\}$&$R_1\sqcup R_2\sqcup\{p_1,\ldots,p_{13}\}$&$C_3$&\ref{example_D2}\\\hline D3&$\{p_1,\ldots,p_{7}\}$&$R_1\sqcup R_2\sqcup\{p_1,p_2,p_3,q_1,\ldots,q_{10}\}$&$C_3\sqcup R'\sqcup R''$&\ref{example_D3}\\\hline D8&$R\sqcup\{p_1,\ldots,p_{11}\}$& $R\sqcup R'\sqcup\{p_1,\ldots,p_{9},q_1,\ldots,q_4\}$& $C_3\sqcup R\sqcup R_1\sqcup\ldots\sqcup R_4$&\ref{example_D8}\\ \end{tabular} \caption{Order 14} \label{tab:fixed} \end{table} } \end{proposition} The proof of Proposition \ref{thm14} is done in several steps.
First, in Section \ref{sec-poss} we use formulas \eqref{eq-Lefhol} and \eqref{eq-Leftop} in order to generate Table \ref{tab:14larga}, which provides a list of possibilities for the fixed locus of $\sigma_{14}$ and its powers. In Section \ref{sec-exclude} we then exclude many of these possibilities using geometric arguments, and produce a new table - Table \ref{tab:ex}. Finally, in Section \ref{sec-ex} we show that all the remaining cases listed in Table \ref{tab:ex} are indeed admissible by constructing explicit examples that have the desired topological types. \subsection{Generation of table of possibilities}\label{sec-poss} Since $\sigma_{14}$ is purely non-symplectic, its square $\sigma_7 :=\sigma_{14}^2$ is a non-symplectic automorphism of order 7. Moreover, $\Fix(\sigma_{14}) \subseteq \Fix(\sigma_7)$ and in particular each curve contained in $\Fix(\sigma_{14})$ is also contained in $\Fix(\sigma_7)$. Now, for all $i=1,\ldots,6$ we have that $(A_{i, 14})^{2} = A_{j,7}$ for some $j\in\{0,1,2,3\}$. For instance, $A_{1, 14}^{2} = A_{1, 7}$. Thus, fixed points of $\sigma_{14}$ that are of type $A_{1,14}$ are also points of type $A_{1,7}$ for $\sigma_7$. Similarly: \begin{itemize} \item points of type $A_{5,14}$ for $\sigma_{14}$ are of type $A_{1,7}$ for $\sigma_7$, \item points of types $A_{2,14}$ and $A_{4,14}$ for $\sigma_{14}$ are of type $A_{2,7}$ for $\sigma_7$, and \item points of type $A_{3,14}$ for $\sigma_{14}$ are of type $A_{3,7}$ for $\sigma_7$. \end{itemize} In particular, the following inequalities hold: \begin{equation}\label{rel_7_14} \begin{cases} m_{1,14} + m_{5,14} & \leq m_{1,7} \\ m_{2,14} + m_{4,14} & \leq m_{2,7} \\ m_{3,14}& \leq m_{3,7} \end{cases} \end{equation} We further observe the following: \begin{remark}\label{m_6} Note that $A_{6, 14}^{2} = A_{0, 7}$, which shows that points of type $A_{6,14}$ lie on a curve fixed by $\sigma_{7}$.
Therefore, if $m_{6,14}\neq 0$, then there are curves in $\Fix(\sigma_7)$ which are not in $\Fix(\sigma_{14})$. \end{remark} \begin{remark}\label{types} A rational curve $R$ invariant for an automorphism $\sigma_n$ is either pointwise fixed or $R$ admits two isolated fixed points. In the latter case, the points are of consecutive types, i.e., if one point is of type $A_{i,n}$, then the other is of type $A_{i+1,n}$. If $n=14$, as in \cite[Lemma 4]{ArtebaniSarti4}, one can prove that, given a tree of rational curves invariant for $\sigma_{14}$, the distribution of types of isolated fixed points is as shown in Figure \ref{fig:tree}. This can be done in a similar way for $n=21,28,42$. \end{remark} \begin{figure}[H] \centering \begin{tikzpicture} \draw[ultra thick] (-0.2,-0.2) .. controls (0.5,0.5) .. (1.2,-0.2); \draw (0.8,-0.2) .. controls (1.5,0.5) .. (2.2,-0.2); \draw (1.8,-0.2) .. controls (2.5,0.5) .. (3.2,-0.2); \draw (2.8,-0.2) .. controls (3.5,0.5) .. (4.2,-0.2); \draw (3.8,-0.2) .. controls (4.5,0.5) .. (5.2,-0.2); \draw (4.8,-0.2) .. controls (5.5,0.5) .. (6.2,-0.2); \draw (5.8,-0.2) .. controls (6.5,0.5) .. (7.2,-0.2); \draw (6.8,-0.2) .. controls (7.5,0.5) .. (8.2,-0.2); \draw (7.8,-0.2) .. controls (8.5,0.5) .. (9.2,-0.2); \draw (8.8,-0.2) .. controls (9.5,0.5) .. (10.2,-0.2); \draw (9.8,-0.2) .. controls (10.5,0.5) .. (11.2,-0.2); \draw (10.8,-0.2) .. controls (11.5,0.5) .. (12.2,-0.2); \draw (11.8,-0.2) .. controls (12.5,0.5) .. (13.2,-0.2); \draw (12.8,-0.2) .. controls (13.5,0.5) .. (14.2,-0.2); \draw[ultra thick] (13.8,-0.2) .. controls (14.5,0.5) .. 
(15.2,-0.2); \draw [black] (1,0) circle (2pt); \filldraw [gray] (2,0) circle (2pt); \filldraw [gray] (3,0) circle (2pt); \filldraw [gray] (4,0) circle (2pt); \filldraw [gray] (5,0) circle (2pt); \draw [black] (14,0) circle (2pt); \filldraw [black, ultra thick] (7,0) circle (2pt); \filldraw [black, ultra thick] (8,0) circle (2pt); \filldraw [gray] (9,0) circle (2pt); \filldraw [gray] (10,0) circle (2pt); \filldraw [gray] (11,0) circle (2pt); \filldraw [gray] (12,0) circle (2pt); \filldraw [gray] (13,0) circle (2pt); \filldraw [gray] (6,0) circle (2pt); \draw[black] (1,0.6) node {$A_{0,14}$}; \draw[black] (2,-0.6) node {$A_{1,14}$}; \draw[black] (3,0.6) node {$A_{2,14}$}; \draw[black] (4,-0.6) node {$A_{3,14}$}; \draw[black] (5,0.6) node {$A_{4,14}$}; \draw[black] (6,-0.6) node {$A_{5,14}$}; \draw[black] (7,0.6) node {$A_{6,14}$}; \draw[black] (8,0.6) node {$A_{6,14}$}; \draw[black] (9,-0.6) node {$A_{5,14}$}; \draw[black] (10,0.6) node {$A_{4,14}$}; \draw[black] (11,-0.6) node {$A_{3,14}$}; \draw[black] (12,0.6) node {$A_{2,14}$}; \draw[black] (13,-0.6) node {$A_{1,14}$}; \draw[black] (14,0.6) node {$A_{0,14}$}; \end{tikzpicture} \caption{Actions of $\sigma_{14}$ and $ \sigma_7$ on a tree of rational curves. Thin curves are invariant but not pointwise fixed. Thick curves are pointwise fixed by $\sigma_{14}$. The gray points are isolated fixed points for both $\sigma_{14}$ and $\sigma_7$, and the two black points in the middle lie on a curve fixed by $\sigma_7$ only.} \label{fig:tree} \end{figure} As a consequence, from \eqref{rel_7_14} and the previous remarks, if we apply formula (\ref{eq-Lefhol}) to $\sigma_{14}$ we obtain the following linear system of equations: \begin{equation} \begin{cases} m_{1,14}&=4\alpha_{14}-2m_{4,14}+m_{5,14}\\ m_{2,14}&=1-2m_{5,14}+3m_{4,14}\\ m_{6,14}&=8m_{4,14}+4-2m_{3,14}-2\alpha_{14}-4m_{5,14} \end{cases} \label{Lef-hol} \end{equation} This allows us to prove the following two Lemmas. 
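Searches of this kind, cited below as MAGMA calculations, are small enough to reproduce directly. As an illustration, the following Python sketch (the function name is ours) enumerates the nonnegative integer solutions of the system above in Case A of Table \ref{tab:7}, i.e.\ with $\alpha_{14}=0$ and the bounds $m_{1,14}+m_{5,14}\leq 2$, $m_{2,14}+m_{4,14}\leq 1$, $m_{3,14}=0$ coming from \eqref{rel_7_14}.

```python
from itertools import product

def case_A_solutions():
    """Nonnegative integer solutions of the linear system above in Case A:
    alpha_14 = 0 and, since (m_{1,7}, m_{2,7}, m_{3,7}) = (2, 1, 0),
    m_{1,14} + m_{5,14} <= 2, m_{2,14} + m_{4,14} <= 1, m_{3,14} = 0."""
    alpha = 0
    sols = []
    for m3, m4, m5 in product(range(1), range(3), range(5)):
        # the system expresses m1, m2, m6 in terms of m3, m4, m5, alpha
        m1 = 4 * alpha - 2 * m4 + m5
        m2 = 1 - 2 * m5 + 3 * m4
        m6 = 8 * m4 + 4 - 2 * m3 - 2 * alpha - 4 * m5
        if min(m1, m2, m6) >= 0 and m1 + m5 <= 2 and m2 + m4 <= 1:
            sols.append((m1, m2, m3, m4, m5, m6))
    return sols

print(case_A_solutions())  # [(0, 1, 0, 0, 0, 4), (0, 0, 0, 1, 2, 4)]
```

The two vectors found are exactly those in rows A1 and A2 of Table \ref{tab:14}.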
\begin{lemma}\label{alpha_no2} The value of $\alpha_{14}$ is either 0 or 1. \end{lemma} \begin{proof} Since $\Fix(\sigma_{14})\subset\Fix(\sigma_7)$, a curve that is pointwise fixed by $\sigma_{14}$ must be contained in $\Fix(\sigma_7)$. Thus, according to Table \ref{tab:7}, we must have $\alpha_{14}\in\{0,1,2\}$. Assume $\alpha_{14} = 2$. Then $\sigma_{14}$ fixes at least two rational curves. Therefore the fixed locus of $\sigma_7$ is described by the last row of Table \ref{tab:7}, and both rational curves in $\Fix(\sigma_7)$ are fixed by $\sigma_{14}$. By Remark \ref{m_6}, $m_{6,14}=0$. But plugging $\alpha_{14}=2$ and $m_{6,14}=0$ into \eqref{Lef-hol}, together with the inequalities \eqref{rel_7_14} and the values of $m_{1,7}, m_{2,7}, m_{3,7}$ from the last line of Table \ref{tab:7}, yields an unsolvable system. Therefore $\alpha_{14}$ can only equal 0 or 1. \end{proof} \begin{lemma} There is no purely non-symplectic automorphism $\sigma_{14}$ of order 14 such that the fixed locus of $\sigma_7$ is described by the second row of Table \ref{tab:7}. \end{lemma} \begin{proof} In this case, no curves are fixed by $\sigma_7$ and hence no curves are fixed by $\sigma_{14}$. Thus $\alpha_{14}=0$. But a MAGMA calculation shows that in this case a solution of \eqref{Lef-hol} would have $m_{6,14} = 4$, which by Remark \ref{m_6} would imply that $\sigma_7$ fixes a curve, and so this case cannot occur. \end{proof} In fact, we can completely describe the possible solutions to (\ref{Lef-hol}), i.e., the possibilities for the vector $m=(m_{1,14},m_{2,14}, m_{3,14}, m_{4,14}, m_{5,14}, m_{6,14})$ and for the value of $\alpha_{14}$. Organizing the possibilities according to the fixed locus of $\sigma_7=\sigma_{14}^2$, we prove: \begin{proposition} If $\sigma_{14}$ is a purely non-symplectic automorphism of order $14$ on a K3 surface, then the possible vectors $m=(m_{1,14},m_{2,14}, m_{3,14}, m_{4,14}, m_{5,14}, m_{6,14})$ satisfying \eqref{Lef-hol} are listed in Table \ref{tab:14} below.
The symbol $*$ means that the action on the elliptic curve $E$ is a translation. \label{vectors} \end{proposition} In particular, we obtain a list of possibilities for the fixed locus of $\sigma_{14}$. {\small \begin{table}[H]\centering \begin{tabular}{c|cccccc|c|c} &$m_{1,14}$&$m_{2,14}$&$m_{3,14}$&$m_{4,14}$&$m_{5,14}$&$m_{6,14}$&$\alpha_{14}$& curves fixed by $\sigma_{14}$\\ \hline A1&0&0&0&1&2&4&0&$\emptyset$\\ A2&0&1&0&0&0&4&0&$\emptyset$\\ \hline B1&0&0&1&1&2&2&0&$E$ \\ B1*&0&0&1&1&2&2&0& $\emptyset$\\ B2&0&1&1&0&0&2&0&$E$ \\ B2*&0&1&1&0&0&2&0& $\emptyset$\\ B3&3&2&1&1&1&4&1&$R$\\ B4&4&1&1&0&0&0&1&$R\sqcup E$ \\ B4*&4&1&1&0&0&0&1&$R$ \\ \hline C1&0&0&1&1&2&2&0&$\emptyset$\\ C2&0&1&1&0&0&2&0&$\emptyset$\\ C3&4&1&1&0&0&0&1&$R$\\ \hline D1&0&0&2&1&2&0&0&$\emptyset$\\ D2&0&1&2&0&0&0&0&$\emptyset$\\ D3&0&0&0&1&2&4&0&$\emptyset$\\ D4&0&1&0&0&0&4&0&$\emptyset$\\ D5&4&0&0&1&2&2&1&$R$\\ D6&4&1&0&0&0&2&1&$R$\\ D7&3&1&2&2&3&2&1&$R$\\ D8&3&2&2&1&1&2&1&$R$\\ \end{tabular} \caption{} \label{tab:14} \end{table}} \begin{proof}[Proof of Proposition \ref{vectors}] We consider each row of Table \ref{tab:7}: \begin{enumerate} \item[{\bf Case A}] This corresponds to the case in which the fixed locus of $\sigma_7$ is described by the first row of Table \ref{tab:7} and $\Fix(\sigma_7)$ consists of a genus one curve $E$, so we only need to determine whether $\sigma_{14}$ itself fixes $E$. In both cases, $\alpha_{14}=0$ and by \eqref{rel_7_14} $m_{3,14}=0$. A MAGMA calculation shows that the only vectors $m$ which satisfy \eqref{Lef-hol} with $\alpha_{14}=m_{3,14}=0$ are $(0, 0, 0, 1, 2, 4)$ and $(0, 1, 0, 0, 0, 4)$. By Remark \ref{m_6}, $\sigma_{14}$ does not fix $E$. \item[{\bf Case B}] When $\Fix(\sigma_7)$ is described by the third row of Table \ref{tab:7}, the automorphism $\sigma_7$ fixes a genus one curve $E$ and a rational curve $R$. We analyze this case by considering the possibilities for $\alpha_{14}$ and $m_{6,14}$. 
First, suppose $\Fix(\sigma_{14})$ contains no curves, so $\sigma_{14}$ fixes neither $R$ nor $E$; in this case, $\alpha_{14} = 0$. Since $\sigma_{14}$ acts as an involution on $E$, by the Riemann-Hurwitz formula it has either four fixed points (coming from $P \mapsto -P$ after a choice of point at infinity) or no fixed points (coming from $P \mapsto P + T$ where $T$ is a 2-torsion point). The action on $R$ has 2 fixed points, so $m_{6,14}$ is either 6 or 2. A MAGMA calculation applying the constraints from \eqref{Lef-hol} shows that the possibilities for $m$ are $(0, 0, 1, 1, 2, 2)$ and $(0, 1, 1, 0, 0, 2)$. Second, suppose that $E\subset\Fix(\sigma_{14})$ and $R\not\subset\Fix(\sigma_{14})$; in this case, $\alpha_{14} = 0$ and $m_{6,14} = 2$. The possibilities for $m$ in this case are $(0, 0, 1, 1,2,2)$ and $(0,1,1,0,0,2)$. Next, if $R\subset\Fix(\sigma_{14})$ and $E\not\subset\Fix(\sigma_{14})$, $\sigma_{14}$ fixes either none or four points on $E$, so $\alpha_{14} = 1$ and $m_{6,14} = 0$ or $4$, and the possibilities for $m$ are $(3, 2, 1,1,1,4)$ and $(4,1,1,0,0,0)$. Lastly, if $ E \sqcup R\subset\Fix(\sigma_{14})$, all curves fixed under $\sigma_7$ are also fixed under $\sigma_{14}$, so $\alpha_{14} = 1$ and $m_{6,14}=0$, and the only possibility is $m=(4,1,1,0,0,0)$. \item[{\bf Case C}] In this case, the only curve fixed by $\sigma_7$ is a rational curve $R$. If $\sigma_{14}$ does not fix $R$, then $\alpha_{14} = 0$ and $m_{6,14} = 2$ and the solutions of \eqref{Lef-hol} for $m$ are $(0, 0, 1, 1, 2, 2)$ and $(0, 1, 1, 0, 0, 2)$. On the other hand, if $\sigma_{14}$ fixes $R$, then $\alpha_{14} = 1$ and $m_{6,14} = 0$, and the only possibility for $m$ is $(4, 1, 1, 0, 0, 0)$. \item[{\bf Case D}] Finally, if the fixed locus of $\sigma_7$ is described by the last row of Table \ref{tab:7}, the curves fixed by $\sigma_7$ are two rational curves $R_1 \sqcup R_2$. First, suppose neither $R_1$ nor $R_2$ is fixed by $\sigma_{14}$, thus $\alpha_{14} = 0$. 
Then, either $\sigma_{14}$ exchanges $R_1$ and $R_2$, or $\sigma_{14}$ acts nontrivially on both $R_1$ and $R_2$. If $R_{1}$ and $R_{2}$ are exchanged (hence fixing no points on either curve), then $m_{6,14}=0$ and the possibilities for $m$ are $(0, 0, 2, 1, 2, 0)$ and $(0, 1, 2, 0, 0, 0)$. Otherwise, there are a total of 4 points fixed on these curves, so $m_{6,14} = 4$ and the possibilities for $m$ are $(0, 0, 0, 1, 2, 4)$ and $(0, 1, 0, 0, 0, 4)$. If $\sigma_{14}$ fixes one rational curve and acts nontrivially on the other, $\alpha_{14} = 1$ and $m_{6,14} = 2$. Possibilities for $m$ are $(4,0,0,1,2,2)$, $(4,1,0,0,0,2)$, $(3,2,2,1,1,2)$ and $(3,1,2,2,3,2)$. By Lemma \ref{alpha_no2}, $\sigma_{14}$ does not fix both $R_1$ and $R_2$. \end{enumerate} \end{proof} We also observe the following: \begin{proposition}\label{prop:B} If $\sigma_{14}$ is a purely non-symplectic automorphism on a $K3$ surface such that $\sigma_7=\sigma_{14}^2$ is of type $B$ (see Table \ref{tab:7}), then $\sigma_{14}$ is of type $B3$. \end{proposition} \begin{proof} Let $X$ be a K3 surface and $\sigma_{14}$ a purely non-symplectic automorphism of order 14 acting on $X$. Assume we are in case B, so that $\sigma_7$ fixes a genus 1 curve, a rational curve and eight isolated points. By \cite[Thm. 6.3]{ArtebaniSartiTaki} $X$ admits an elliptic fibration with a reducible fiber of type $II^*$ at $t=\infty$, a smooth fiber at $t=0$ and $14$ singular fibers of type $I_1$. The automorphism $\sigma_7$ fixes the fiber over 0 and the central component of the fiber $II^*$; all eight isolated points of $\sigma_7$ lie on the fiber $II^*$. Since the genus one curve fixed by $\sigma_7$ is unique, it is preserved by $\sigma_{14}$, and hence the fibration is $\sigma_{14}$-invariant. Thus the fibers over $t=0$ and $t=\infty$ are preserved. The $II^*$ fiber does not admit a reflection, and so we can conclude that the central component must be fixed by $\sigma_{14}$.
Moreover, the eight isolated fixed points of $\sigma_7$ are isolated fixed points of $\sigma_{14}$ as well. Table \ref{tab:14} shows that the only case with $N_{14}\geq8$ is case B3. We also observe that, because $m_{6,14}=4$, the automorphism $\sigma_{14}$ acts as an involution on the genus one curve with four fixed points. \end{proof} Now, in order to better understand the different fixed loci listed in Table \ref{tab:14}, the next step in our approach consists in further studying the fixed locus of the involution $\sigma_{14}^7$, and the eigenspaces of $\sigma_{14}^*$ in $H^2(X,\CC)$. We use the following notation: \[ d_i\coloneqq\dim H^2(X,\CC)_{\zeta_{i}},\quad i=1,2,7,14.\] In particular, we have \[22=6d_{14}+6d_7+d_2+d_1.\] \begin{remark}\label{d14} Observe that $\rk S(\sigma_{14})=d_1$, $\rk S(\sigma_7)=d_2+d_1$ and $\rk S(\sigma_2)=6d_7+d_1$. \end{remark} Moreover, by applying the topological Lefschetz formula \eqref{eq-Leftop} to the fixed loci of $\sigma_{14}$ and its powers, we obtain the following system of equations: \begin{equation} \begin{cases} \chi_{14}\coloneqq\chi(\Fix(\sigma_{14}))= 2+d_{14}-d_7-d_2+d_1\\ \chi_{7}\coloneqq\chi(\Fix(\sigma_7))= 2-d_{14}-d_7+d_2+d_1\\ \chi_{2}\coloneqq\chi(\Fix(\sigma_2))= 2-6d_{14}+6d_7-d_2+d_1\\ \end{cases} \label{Lef-top} \end{equation} Using \eqref{Lef-top} and Table \ref{tab:14} we can thus obtain a list of possibilities for $(d_{14},d_7,d_2,d_1)$ as well as the corresponding Euler characteristics $(\chi_{14}, \chi_7,\chi_2)$. We present our results in Table \ref{tab:14larga} below.
{\small \begin{table}[H] \begin{tabular}{c!{\vrule width 1.5pt}c|c|c|c|c|c|c} &$N_{14}$ & $\alpha_{14}$ & $\chi_{14}$&$\chi_7$&$\chi_2$& $(d_{14},d_7,d_2,d_1)$&Possible $(g_2,k_2)$\\ \noalign{\hrule height 1.5pt} A1&7&0&7&3&-14& (3,0,1,3) &$(8,0),(9,1)$\\ &&&&&0& (2,1,0,4) &$(1,0),(2,1),(3,2),(4,3),(5,4),(6,5)$\\ \hline A2&5&0&5&3&-16& (3,0,2,2) &$(9,0),(10,1)$\\ &&&&&-2& (2,1,1,3) &$(2,0),(3,1),(4,2),(5,3),(6,4)$\\ &&&&&12& (1,2,0,4) &$(0,5),(1,6),(2,7)$\\ \noalign{\hrule height 1.5pt} B3&12&1&14&10&0& (2,0,0,10)&$(1,0),(2,1),(3,2),(4,3),(5,4),(6,5)$\\ \noalign{\hrule height 1.5pt} C1&6&0&6&10&-8& (2,0,4,6) &$(5,0),(6,1),(7,2)$\\ &&&&&6& (1,1,3,7) &$(0,2),(1,3),(2,4),(3,5)$\\\hline C2&4&0&4&10&-10& (2,0,5,5) &$(6,0),(7,1)$\\ &&&&&4& (1,1,4,6) & $(0,1),(1,2),(2,3),(3,4),(4,5)$\\\hline C3&6&1&8&10&-6& (2,0,3,7) &$(4,0),(5,1),(6,2)$\\ &&&&&8& (1,1,2,8) &$(0,3),(1,4),(2,5),(3,6)$\\ \noalign{\hrule height 1.5pt} D1&5&0&5&17&-2& (1,0,7,9) &$(2,0),(3,1),(4,2),(5,3),(6,4)$\\\hline D2&3&0&3&17&-4& (1,0,8,8)&$(3,0),(4,1),(5,2),(6,3)$\\\hline D3&7&0&7&17&0& (1,0,6,10)&$(1,0),(2,1),(3,2),(4,3),(5,4),(6,5)$\\\hline D4&5&0&5&17&-2&(1,0,7,9)&$(2,0),(3,1),(4,2),(5,3),(6,4)$\\\hline D5&9&1&11&17&4&(1,0,4,12)&$(0,1),(1,2),(2,3),(3,4),(4,5)$\\\hline D6&7&1&9&17&2&(1,0,5,11)&$(0,0),(1,1),(2,2),(3,3),(4,4),(5,5)$\\\hline D7&13&1&15&17&8&(1,0,2,14)&$(0,3),(1,4),(2,5),(3,6)$\\\hline D8&11&1&13&17&6&(1,0,3,13)&$(0,2),(1,3),(2,4),(3,5)$\\ \end{tabular} \caption{} \label{tab:14larga} \end{table} } Note that by \cite{Nikulin-inv}, the fixed locus of a non-symplectic involution is either empty; or it consists of two disjoint elliptic curves; or \begin{equation} \Fix(\sigma_2)=C_{g_2}\sqcup R_1\sqcup\ldots \sqcup R_{k_2} \label{fix-inv} \end{equation} where $C_{g_2}$ is a smooth curve of genus $g_2\geq 0$ and the $R_i$ are rational curves, and all possibilities for the pair of invariants $(g_2,k_2)$ are classified (see for example \cite[Figure 1]{ArtebaniSartiTaki}).
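The numerical columns of Table \ref{tab:14larga} are straightforward to re-derive from \eqref{Lef-top}; the short Python sketch below (the names are ours) does this for two sample rows, and illustrates how $\chi_2=(2-2g_2)+2k_2$ constrains the pairs $(g_2,k_2)$.

```python
def euler_numbers(d14, d7, d2, d1):
    """(chi_14, chi_7, chi_2) computed from the eigenspace dimensions d_i
    via the topological Lefschetz relations; the dimensions must satisfy
    22 = 6*d14 + 6*d7 + d2 + d1."""
    assert 6 * d14 + 6 * d7 + d2 + d1 == 22
    chi14 = 2 + d14 - d7 - d2 + d1
    chi7 = 2 - d14 - d7 + d2 + d1
    chi2 = 2 - 6 * d14 + 6 * d7 - d2 + d1
    return chi14, chi7, chi2

print(euler_numbers(3, 0, 1, 3))   # first A1 row: (7, 3, -14)
print(euler_numbers(2, 0, 0, 10))  # row B3: (14, 10, 0)

# chi_2 constrains (g_2, k_2) via chi(Fix(sigma_2)) = (2 - 2*g2) + 2*k2:
assert all(2 - 2 * g + 2 * k == -14 for (g, k) in [(8, 0), (9, 1)])
```

For instance, $\chi_2=-14$ forces $k_2=g_2-8$, which with $g_2\leq 10$ leaves exactly the pairs $(8,0)$ and $(9,1)$ listed in the first row.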
In our case, it follows from (\ref{Lef-hol}) that $\Fix(\sigma_2)$ cannot be empty: any solution of (\ref{Lef-hol}) gives at least one fixed point of $\sigma_{14}$, and $\Fix(\sigma_{14})\subset\Fix(\sigma_2)$. In fact, $\Fix(\sigma_2)$ cannot be the union of two elliptic curves either. If it were, then $\sigma_{14}$ would either exchange the two curves or act on each of them as an automorphism of order seven; the latter is fixed point free, since an automorphism of an elliptic curve fixing a point has order at most six. Since $\Fix(\sigma_{14}) \subset \Fix(\sigma_{2})$, again we would have no fixed points in $\Fix(\sigma_{14})$, contradicting (\ref{Lef-hol}). As a consequence, for each line of Table \ref{tab:14larga}, we know that $\Fix(\sigma_{2})$ is of the form (\ref{fix-inv}). Moreover, there is in general more than one possible pair of invariants $(g_2,k_2)$. \subsection{Excluding cases}\label{sec-exclude} We will now show that many cases of Table \ref{tab:14larga} can actually be excluded for geometric reasons. We prove a series of Lemmas in this direction. \begin{notation} In what follows, we will use the following notation: A1(8,0) means that $\Fix(\sigma_{7})$ is as in line A of Table \ref{tab:7}, $\Fix(\sigma_{14})$ is described in line A1 of Table \ref{tab:14larga} and $(g_2,k_2)=(8,0)$. Similarly for all other cases. \end{notation} \begin{remark} Observe that cases B3(1,0) and D6(0,0) are not admissible because in both cases the fixed locus $\Fix(\sigma_{14})$ contains a rational curve while $\Fix(\sigma_2)$ does not. \end{remark} \begin{remark} $\Fix(\sigma_2)$ does not contain a curve of genus 2, 4 or 5. This is a direct consequence of the following Lemma. \end{remark} \begin{lemma}[\cite{homma}]\label{homma} Let $C$ be a curve of genus $g\geq 2$ that admits an automorphism of prime order $q$ where $q>g$. Then either $q=g+1$ or $q=2g+1$.
\end{lemma} \begin{lemma} The following cases are not admissible: \[A1(6,5),\ A1(8,0),\ A1(1,0), \ A2(6,4), \ A2(0,5),\ A2(1,6),\ B3(3,2),\] \[ C1(3,5),\ C2(3,4),\ C3(3,6),\ D1(6,4),\ D2(6,3),\ D3(1,0),\ D3(6,5),\] \[ D4(6,4),\ D5(0,1),\ D5(1,2), \ D6(1,1),\ D7(0,3),\ D7(1,4),\ D8(0,2),\ D8(1,3).\] \label{casesRW} \end{lemma} \begin{proof} Consider case A1(6,5). By the Riemann-Hurwitz formula, the automorphism $\sigma_{14}$ acts on the curve $C_6\subset\Fix(\sigma_2)$ fixing four points, and it also acts on each of the five rational curves in $\Fix(\sigma_2)$, fixing two points on each. Therefore $\sigma_{14}$ fixes a total of 14 points. By a previous computation, the fixed locus $\Fix(\sigma_{14})$ consists of seven points. Therefore this case is not admissible. A similar argument can be used to exclude the other cases. \end{proof} \begin{lemma} Case C1(1,3) is not admissible. \label{c1(13)} \end{lemma} \begin{proof} Let $E$ be the elliptic curve fixed by the involution $\sigma_2=\sigma_{14}^7$. The curve $E$ is preserved by $\sigma_{14}$. Moreover, $E$ is not fixed by $\sigma_7$ pointwise but it is invariant for $\sigma_7$ because we are in Case C. Thus, since $E$ is elliptic, the automorphism $\sigma_{14}$ acts as a translation on $E$. Let $\mathcal E$ be the elliptic fibration induced by $E$, with fiber $E$ over $t=0$. Since fixed curves do not meet, the zero section is not fixed by the involution $\sigma_2$. The involution fixes three rational curves since $k_2=3$, and they are contained in the fiber $F_\infty$ over $t=\infty$. The only possible types of singular fibers that can contain three curves fixed by the involution are $I_6$, $III^*$, or $I_4^*$.
\begin{figure}[h] \begin{minipage}{0.33\textwidth} \centering \begin{tikzpicture}[xscale=.6,yscale=.5, thick, every node/.style={circle, draw, fill=black!50, inner sep=0pt, minimum width=4pt}] \node (n2) at (-3,0) {}; \node [double] (n3) at (-2,0) {}; \node (n4) at (-1,0) {}; \node [double](n5) at (0,0) {}; \node (n6) at (1,0) {}; \node [double](n7) at (2,0) {}; \node (n8) at (3,0) {}; \node (n9) at (0,-1) {}; \foreach \from/\to in {n2/n3,n3/n4, n4/n5,n5/n6,n6/n7,n7/n8, n5/n9} \draw (\from) -- (\to); \end{tikzpicture} \caption{Fiber $III^*$} \label{fig:E7}\end{minipage} \begin{minipage}{0.33\textwidth} \centering \begin{tikzpicture}[xscale=.6,yscale=.5, thick, every node/.style={circle, draw, fill=black!50, inner sep=0pt, minimum width=4pt}] \node (n1) at (-3,1) {}; \node (n2) at (-3,-1) {}; \node [double] (n3) at (-2,0) {}; \node (n4) at (-1,0) {}; \node [double](n5) at (0,0) {}; \node (n6) at (1,0) {}; \node [double](n7) at (2,0) {}; \node (n8) at (3,1) {}; \node (n9) at (3,-1) {}; \foreach \from/\to in {n1/n3,n2/n3,n3/n4, n4/n5,n5/n6,n6/n7,n7/n8, n7/n9} \draw (\from) -- (\to); \end{tikzpicture} \caption{Fiber $I_4^*$}\label{I_4^*} \end{minipage}\hfill \begin{minipage}{0.33\textwidth} \centering \begin{tikzpicture}[xscale=.6,yscale=.5, thick, every node/.style={circle, draw, fill=black!50, inner sep=0pt, minimum width=4pt}] \node (n2) at (-3,-1) {}; \node [double] (n3) at (-2,0) {}; \node (n4) at (-1,0) {}; \node [double](n5) at (0,-1) {}; \node [style=rectangle, draw, inner sep=0pt, minimum width=4pt,minimum height=4pt, fill=black!50,] (n6) at (-1,-2) {}; \node [double](n7) at (-2,-2) {}; \foreach \from/\to in {n2/n3,n3/n4, n4/n5,n5/n6,n6/n7,n7/n2} \draw (\from) -- (\to); \end{tikzpicture} \caption{Fiber $I_6$} \label{I_6} \end{minipage} \end{figure} If $F_\infty=III^*$, then the three curves which are fixed by $\sigma_{2}$ are shown by the three double circles in Figure \ref{fig:E7}. 
The zero section would meet the external component of the fiber $III^*$ and thus it would be fixed by $\sigma_2$, which we already observed is impossible. By a similar argument, we may exclude the case $F_\infty=I_4^*$, as shown in Figure \ref{I_4^*}. Suppose that $F_\infty=I_6$. By analyzing the types of points, it can be seen that one of the curves of the fiber $I_6$ which is not fixed by $\sigma_2$ must be fixed by $\sigma_7$. Such a curve is represented by a square in Figure \ref{I_6}. Since $\sigma_7$ must preserve the fiber, this is impossible. \end{proof} \begin{lemma} The following cases are not admissible: \[A2 (10,1),\ A2(3,1),\ C2(1,2),\ C2(0,1),\ C3(1,4),\ C3(0,3),\ D4 (3,1),\ D5(3,4), \ D6 (3,3).\] \label{consecPts} \end{lemma} \begin{proof} Observe that in Case A2(10,1), $\Fix(\sigma_2)=C_{10}\cup R$, where $C_{10}$ is a curve of genus 10 and $R$ is a rational curve, and neither of these curves is fixed by $\sigma_{14}$. The automorphism $\sigma_{14}$ fixes five isolated points, two of which lie on $R$. As observed in Remark \ref{types}, isolated points on a rational curve are of consecutive types, but this is in contradiction with the types of points for A2 (see Table \ref{tab:14}). The other cases can be excluded by a similar argument. \end{proof} \begin{lemma} Suppose that the involution $\sigma_2$ fixes a curve $C_7$ of genus seven. Then the curve $C_7$ contains exactly two points fixed by $\sigma_{14}$, and they cannot be of the same type. \label{notequal} \end{lemma} \begin{proof} First, note that $\sigma_{14}$ acts with order seven on $C_7$. Thus, by Riemann-Hurwitz it has exactly two fixed points, which we call $p$ and $q$. Considering the line bundle $L$ associated to $8p$, by Riemann-Roch we have $h^0(C_7,L)\geq 2$, so that we obtain a finite (surjective) morphism $f: C_7\to\mathbb P^1$ of degree $d\leq 8$. Now, because $\sigma_{14}$ fixes $p$, $\sigma_{14}$ and $f$ induce an automorphism $\tilde{\sigma}$ (of order $7$) on $\mathbb{P}^1$.
This automorphism has two fixed points, say $\tilde{p}$ and $\tilde{q}$, and we must have (up to relabeling) $f^{-1}(\tilde{p})=p$ and $f^{-1}(\tilde{q})=q$. Moreover, we can assume $\tilde{p}=(0:1)$ and $\tilde{q}=(1:0)$. We can thus choose local coordinates $z$ on $\mathbb{P}^1$ centered on $\tilde{p}$ so that the action of $\tilde{\sigma}$ on $\tilde{p}$ is given by multiplication by $\zeta_{14}^{2j}$ and on $\tilde{q}$ it is given by multiplication by $\zeta_{14}^{14-2j}$ (for some $j$). Note that $1/z$ is then a local coordinate centered on $\tilde{q}$. In fact we can choose local coordinates on $C_7$ which are compatible with the above so that $f$ is given by $z\mapsto z^{d}$ around $p$ (and analogously for $q$). Using this, we see that the local action of $\sigma_{14}$ on $p$ must be given by multiplication by $\zeta_{14}^{2j/d}$ and on $q$ it is given by multiplication by $\zeta_{14}^{(14-2j)/d}$. The local action of $\sigma_{14}$ on $p$ and $q$ as points in $X$ can thus be diagonalized so that $p$ is a point of type $A_{i,14}$ where $i= 2j/d -1$ or $2j/d$, and $q$ is a point of type $A_{k,14}$ where $k=(14-2j)/d-1$ or $(14-2j)/d$. In any case, $i\not\equiv k \mod 14$, so that $p$ and $q$ cannot be of the same type. \end{proof} As a consequence we can prove: \begin{lemma} Case C2(7,1) is not admissible. \label{c2(71)} \end{lemma} \begin{proof} According to Table \ref{tab:14}, in case C2 the automorphism $\sigma_{14}$ fixes exactly one point of type $A_{2,14}$, one point of type $A_{3,14}$, and two points of type $A_{6,14}$. Two of these are on the rational curve fixed by $\sigma_2$ and two are on the genus seven curve $C_7$ fixed by $\sigma_2$. Since the fixed points on $R$ must be of consecutive types (see Remark \ref{types}), the two points of type $A_{6,14}$ lie on $C_7$. This contradicts Lemma \ref{notequal}. \end{proof} Thanks to \cite{brandhorst2021}, we also prove: \begin{lemma} Cases $D1$ and $D7$ are not admissible.
\label{d1d7} \end{lemma} \begin{proof} By \cite[Corollary 1.3]{brandhorst2021}, there are exactly 12 distinct deformation classes of K3 surfaces $X$ carrying a purely non-symplectic automorphism $\sigma$ of order $14$. In Section \ref{sec-ex}, we show that all 12 cases listed in Table \ref{tab:ex} indeed occur. Therefore, it suffices to observe that the different cases determine different deformation classes. In fact, looking at the eigenvalues of the induced isometry $\sigma^*$ on $H^2(X,\mathbb{Z})$ we see that the different cases determine at least 11 deformation classes: with the exception of cases $C1(6,1)$ and $C1(7,2)$, the different cases determine 11 distinct vectors $(d_{14},d_7,d_2,d_1)$ (see Table \ref{tab:ex}). So we analyse these two cases separately. By \cite[Theorem 1.5.2]{dolQuad}, if $(X,\sigma)$ is of type $C1(6,1)$ and $(\tilde{X},\tilde{\sigma})$ is of type $C1(7,2)$, then the invariant lattices $S(\sigma^7)$ and $S(\tilde{\sigma}^7)$ do not lie in the same genus. Since the deformation class of a pair $(X,\sigma)$ is determined by the collection of genera of the lattices $S(\sigma^j)$ by \cite[Theorem 1.4]{brandhorst2021}, we conclude that these two cases indeed determine two distinct deformation classes. \end{proof} Using Table \ref{tab:14larga} and combining Lemmas \ref{casesRW}, \ref{c1(13)}, \ref{consecPts}, \ref{c2(71)} and \ref{d1d7} we have thus proved: \begin{proposition} Let $\sigma_{14}$ be a purely non-symplectic automorphism on a $K3$ surface. Then the admissible cases according to the possible fixed locus are listed in Table \ref{tab:ex}.
\end{proposition} \begin{table}[H]\centering \begin{tabular}{c!{\vrule width 1.5pt}c|c|c|c|c|c|cc} &$N$&$\alpha_{14}$&$\chi_{14}$&$\chi_7$&$\chi_2$&$(g_2,k_2)$&$(d_{14},d_7,d_2,d_1)$&\\ \noalign{\hrule height 1.5pt} A1&7&0&7&3&-14&$(9,1)$&(3,0,1,3)\\ &&&&&0&$(3,2)$&(2,1,0,4)\\ \hline A2&5&0&5&3&-16&$(9,0)$&(3,0,2,2)\\ \noalign{\hrule height 1.5pt} B3&12&1&14&10&0&$(6,5)$&(2,0,0,10)\\%\hline \noalign{\hrule height 1.5pt} C1&6&0&6&10&-8&$ (6,1),(7,2)$&(2,0,4,6)\\ &&&&&6&$(0,2)$&(1,1,3,7)&\\\hline C2&4&0&4&10&-10&$(6,0)$&(2,0,5,5)\\ \hline C3&6&1&8&10&-6&$(6,2)$&(2,0,3,7)\\ \noalign{\hrule height 1.5pt} D2&3&0&3&17&-4&$(3,0)$&(1,0,8,8)\\\hline D3&7&0&7&17&0&$(3,2)$&(1,0,6,10)\\\hline D8&11&1&13&17&6&$(3,5)$&(1,0,3,13)\\ \end{tabular} \caption{}\label{tab:ex} \end{table} \subsection{Realization by examples}\label{sec-ex} It remains to show each case listed in Table \ref{tab:ex} is indeed realizable. For each possibility, we construct explicit examples of K3 surfaces carrying a purely non-symplectic automorphism $\sigma_{14}$ (of order $14$) that has the desired type of fixed locus. \begin{example}\label{example_A1_(9,1)}{\bf (Case A1(9,1))} Consider $(X_{a,b}, \sigma_{14})$, taking $X_{a, b}$ to be the elliptic K3 surface with Weierstrass equation \begin{equation*} y^2=x^3+(at^7+b)x+(t^7-1), \quad t\in\mathbb P^1 \label{eq_fibr} \end{equation*} where $a,b \in \mathbb{C}$, as in \cite[Example 6.1]{ArtebaniSartiTaki}, and letting $\sigma_{14}$ be the purely non-symplectic order 14 automorphism: \[ \sigma_{14}:(x,y,t) \mapsto (x,-y,\zeta_7^4 t) \] where $\zeta_7$ denotes a primitive $7$-th root of unity. If $a$ and $b$ are generic, then $X_{a,b}$ contains a fiber of type $III$ at $t=(1:0)$ and $21$ singular fibers of type $I_1$. One can show that the fixed locus of $\sigma_{14}$ is such that $m=( 0 , 0 , 0 , 0 , 1 , 2 ,4)$. In fact it is of type $A1(9,1)$. 
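The stated fiber configuration can be checked on the discriminant: with $A=at^7+b$ and $B=t^7-1$ one has
\[\Delta(t)=4A(t)^3+27B(t)^2=4(at^7+b)^3+27(t^7-1)^2,\]
a polynomial of degree $21$ in $t$ with simple roots for generic $a,b$, which accounts for the $21$ fibers of type $I_1$; at $t=(1:0)$ one finds $v(A)=8-7=1$, $v(B)=12-7=5$ and $v(\Delta)=24-21=3$, giving a fiber of type $III$ by Tate's algorithm, consistent with $\chi(X)=3+21=24$.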
It can be described as follows: the four isolated points of type $A_{6,14}$ lie on a curve which is fixed by $\sigma_7$, namely the fiber at $t=(0:1)$; the other three points lie on the fiber of type $III$: the tangency point, along with one other point on each of the two components. Moreover, the involution $\sigma_2$ fixes the zero section (which is rational) and the trisection (which has genus 9). \end{example} \begin{example}{\bf(Case A1(3,2))}\label{example_A1_(3,2)} Consider $(X, \sigma_{14})$, where $X$ is the elliptic K3 surface with Weierstrass equation given by \[y^2=x(x^2+(t^7+1)),\ t\in\mathbb P^1,\] and $\sigma_{14}\colon(x,y,t)\mapsto (x,-y,\zeta^4_{7}t)$ is a purely non-symplectic automorphism of order $14$. We note that $X$ contains eight singular fibers of type $III$. The fixed locus of $\sigma_7$ is given by an elliptic curve at $t=(0:1)$ and three points that lie on the fiber of type $III$ at $t=(1:0)$. On the fiber of type $III$, one of the three points is the tangency point, while the remaining two lie on different components. Therefore, we are in case $A$ (for $\sigma_7$). The fixed locus of $\sigma_{14}$ is such that $m=(0,0,0,0,1,2,4)$ and in fact we can check it is of type $A1(3,2)$. On the elliptic curve, $\sigma_{14}$ acts as an involution and we obtain $4$ fixed points there; the other three fixed points are again in the fiber of type $III$ at $t=(1:0)$, distributed as above. The involution $\sigma_2$ fixes the bisection, which has genus $3$, and two rational curves: the zero section and the $2$-torsion section given by $x=y=0$. Therefore, $(g_2,k_2)=(3,2).$ \end{example} \begin{example}{\bf(Case A2(9,0))}\label{example_A2_(9,0)} Let us consider $(X, \sigma_{14})$: the elliptic K3 surface $X$ together with the automorphism $\sigma=\sigma_{14}$ from Example \ref{example_A1_(3,2)}.
The translation $\tau$ given by $(x,y,t)\mapsto ((y/x)^2-x,(y/x)^3-y,t)$ (which is the translation by the $2$-torsion section) is a symplectic involution that commutes with $\sigma$. As a consequence, the composition $\sigma'\coloneqq \sigma\circ \tau$ is also a purely non-symplectic automorphism of order $14$. We remain in case $A$ for $\sigma_7$ and the fixed locus of $\sigma'$ is such that $m=(0,0,1,0,0,0,4)$. Indeed, $\sigma'$ acts as an involution on the elliptic curve $E$ at $t=(0:1)$ and $E$ contains four fixed points. Due to the fact that $\tau$ has only eight fixed points, which are precisely the tangency points on the singular fibers of type $III$, we only have one additional fixed point lying on the fiber at $t=(1:0)$. The involution $(\sigma')^7=\sigma_2\circ\tau$ does not fix any rational curves and therefore we are in case $(g_2,k_2)=(9,0)$. We note that this case is also presented in \cite[Section 7.2, p.19]{GP}. \end{example} \begin{example}\label{example_B3}{\bf (Case B3)} Consider $(X_{a,b}, \sigma_{14})$, where we let $X_{a, b}$ be the elliptic K3 surface in Example \ref{example_A1_(9,1)} with $a=0$. $X_{0,b}$ contains a fiber of type $II^*$ at $t=(1:0)$, a smooth fiber at $(0:1)$, and $14$ singular fibers of type $I_1$. With the order 14 automorphism $\sigma_{14}$ from Example \ref{example_A1_(9,1)}, the component of multiplicity $6$ on the $II^*$ fiber is fixed by $\sigma$ and the action on the fiber over $t=(0:1)$ is an involution, so it has 4 fixed points. Checking types of fixed points, we find $m=(3,2,1,1,1,4)$ with $\alpha_{14}=1$. \end{example} \begin{example}{\bf (Case C1(6,1))}\label{example_C1_(6,1)} Consider $(X_{a, b}, \sigma_{14})$ from Example \ref{example_A1_(9,1)}, with $a$ generic and $b$ such that $b^3=-\frac{27}{4}$. Then $X_{a,b}$ contains a fiber of type $III$ at $t=(1:0)$, a fiber of type $I_7$ at $t=(0:1)$ and $14$ singular fibers of type $I_1$. In this case the fixed locus of $\sigma_{14}$ is such that $m=(0,0,0,1,1,2,2)$.
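The fiber of type $I_7$ can be seen on the discriminant: since $4b^3+27=0$, expanding around $t=0$ gives
\[\Delta(t)=4(at^7+b)^3+27(t^7-1)^2=(4b^3+27)+(12ab^2-54)\,t^7+O(t^{14})=(12ab^2-54)\,t^7+O(t^{14}),\]
so for generic $a$ the discriminant vanishes to order exactly seven at $t=(0:1)$, which yields the fiber of type $I_7$ there.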
The trisection $\{y=0\}$ is a curve of genus 6 and it is fixed by the involution, as well as the zero section. Thus the invariants of the fixed locus of the involution $\sigma_2$ are $(g_2,k_2)=(6,1)$. \end{example} \begin{example}{\bf(Case C1(7,2))}\label{example_C1_(7,2)} Let $(X, \sigma_{14})$ be the elliptic K3 surface with Weierstrass equation \[y^2=x^3+4t^4(t^7-1), \ t\in\mathbb P^1,\] together with the order 14 purely non-symplectic automorphism $\sigma_{14}$ given by \[\sigma_{14}(x,y,t)=(\zeta^4_7x, -\zeta_7^6y,\zeta_7^3t).\] We note that the singular fibers are of type $IV^*$ over $t=(0:1)$ and type $II$ over $t=(1:0)$, in addition to seven fibers of type $II$. The square of $\sigma_{14}$ fixes the component of multiplicity 3 on the fiber of type $IV^*$, so this example falls under case C. The involution $\sigma_2$ acts as a reflection on this fiber, and so the fixed locus $\Fix(\sigma_{14})$ only contains points. The 3-section $\{y=0\}$ has genus seven and it is fixed by the involution, as well as the zero section and one rational component of the fiber $IV^*$. Thus the invariants of the fixed locus of the involution are $(g_2,k_2)=(7,2)$. This surface appears in \cite[Table 3]{Brandhorst}, with a non-symplectic automorphism of a different order. \end{example} \begin{example}{\bf(Case C1(0,2))}\label{example_C1_(0,2)} Let us consider $(X, \sigma_{14})$, the elliptic K3 surface $X$ with Weierstrass equation given by \[y^2=x^3+t^2x+t^{10},\ t\in\mathbb P^1, \] and the order 14 purely non-symplectic automorphism $\sigma_{14}\colon(x,y,t)\mapsto (\zeta_{7}x,\zeta^5_{7}y,-\zeta_{7}t)$. Note that $X$ contains a fiber of type $IV$ at $t=(1:0)$, a fiber of type $I^*_0$ at $t=(0:1)$, and $14$ singular fibers of type $I_1$. The fixed locus of $\sigma_7$ consists of one rational curve (the non-reduced component of the fiber $I^*_0$) and eight points, so this example falls under Case C.
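The fiber types can be confirmed by Tate's algorithm applied to $A=t^2$ and $B=t^{10}$:
\[\text{at } t=(0:1):\ v(A)=2,\ v(B)=10,\ v(\Delta)=6\ \Rightarrow\ I_0^*; \qquad \text{at } t=(1:0):\ v(A)=6,\ v(B)=2,\ v(\Delta)=4\ \Rightarrow\ IV,\]
while the $14$ simple zeros of $\Delta=t^6(4+27t^{14})$ away from $t=0$ give the fibers of type $I_1$.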
Because the involution $\sigma_2$ fixes only three rational curves, we see that $\sigma_{14}$ is of type $C1$ with $(g_2,k_2)=(0,2)$. \end{example} \begin{example}{\bf (Case C2(6,0))} \label{example_C2(6,0)} Consider $(X, \sigma_{14})$, where $X$ is the K3 surface with equation \[y^2=x^7s-t^2(t-s^2)(t-2s^2)\] in $\mathbb{P}(4,2,1,1)_{(y,t,x,s)}$ and $\sigma_{14}\colon (y,t,x,s) \mapsto (-y,t,\zeta_{7}x,s)$ is a purely non-symplectic automorphism of order $14$. One can see that the points $(-1:1:0:0)$ and $(1:1:0:0)$ are of type $A_{1}$. Moreover, at the point $(0:0:0:1)$ we have a singularity of type $A_{6}$. Since $\sigma_7$ fixes the rational curve $C_{x}\coloneqq \{x=0\}$ and eight points, this example falls under case $C$. The only curve fixed by the involution $\sigma_2$ is $C_{y}\coloneqq\{y=0\}$, which has genus six. \end{example} \begin{example}{\bf (Case C3)} \label{example_C3_(6,2)} Consider $(X, \sigma_{14})$, the elliptic surface $X$ with Weierstrass equation \[y^2=x^3+t^3(t^7+1),\ t\in\mathbb P^1,\] and the order 14 purely non-symplectic automorphism $\sigma_{14}$ given by \[\sigma_{14}(x,y,t)=(\zeta_7^3x, -\zeta_7y, \zeta_7^3t).\] The singular fibers consist of a type $I_0^*$ fiber over $t=(0:1)$, a type $IV$ fiber over $t=(1:0)$, and seven type $II$ fibers (cusps). We call $R$ the non-reduced component of the $I_0^*$ fiber. The involution $\sigma_2$ fixes the zero section, the rational curve $R$ and the 3-section $C$ given by $y=0$. The curve $C$ passes through the center of the $IV$ fiber and through the cusps, and so $C$ has genus six by Riemann-Hurwitz. Thus the invariants of the involution are $(g_2,k_2)=(6,2)$, which corresponds to case C3. The fixed locus of $\sigma_{14}$ consists of the curve $R$ and six points. Another example for case $C3$ is given as follows. Let $X$ be the K3 surface with equation \[x^2+y^3z+z^7+w^{14}=0\] and weights $(7,4,2,1)$.
Singularities can occur only at singularities of $\mathbb P(7,4,2,1)$, and one can see that the point $(0:1:0:0)$ is of type $A_3$ and the points $(0:\zeta_6^j:1:0)$, $j=1,3,5$, are of type $A_1$. \begin{figure}[ht] \centering \begin{tikzpicture}[xscale=.6,yscale=.5, thick] \draw [thick] (0,8)--(6,8); \node [right] at (6,8){$C_{w}$}; \draw (1,8.5)--(1,4.5); \draw (1.5,8.5)--(1.5,4.5); \draw (2,8.5)--(2,4.5); \draw (0,5)--(6,5); \node [right] at (6,5){$C_{x}$}; \draw (5,8.5)--(4,6.5); \draw (4,7.5)--(5,5.5); \draw (5,6.5)--(4,4.5); \end{tikzpicture} \caption{C3} \label{fig_weighted} \end{figure} After resolving the singularities, the curve $C_w:=\{w=0\}$ has genus zero, while the transform of $C_x:=\{x=0\}$ has genus six. The automorphism $\sigma_{14}: (x,y,z,w)\mapsto(x,y,z,\zeta_{14}w)$ is a purely non-symplectic automorphism of order 14 and it fixes the rational curve $C_w$. Its square $\sigma_7$ fixes $C_w$ as well, so that this example falls under Case C. Moreover, the involution $\sigma_2$ fixes $C_w$ and $C_x$ and the central component of the resolution of the $A_3$ (another rational curve). Therefore, $\sigma_{14}$ is of type C3 and the invariants of the involution are $(g_{2}, k_{2}) = (6,2)$. \end{example} \begin{example}{\bf (Case D2)} \label{example_D2} Let $(X, \sigma_{14})$ be the K3 surface $X$ with equation \[y^2=x^7s-t^2(t-s^2)^2\] in $\mathbb{P}(4,2,1,1)_{(y,t,x,s)}$ and the order 14 purely non-symplectic automorphism $\sigma\colon (y,t,x,s)\mapsto (-y,t,\zeta_{7}x,s)$. The points $(-1:1:0:0)$ and $(1:1:0:0)$ are of type $A_{1}$. Moreover, at the points $(0:0:0:1)$ and $(0:1:0:1)$, we have singularities of type $A_{6}$. Since $\sigma_7$ fixes the two rational curves $C_1$ and $C_2$ contained in $\{x=0\}$, this example falls under Case $D$. The only curve fixed by the involution $\sigma_2$ is $C_{y}$, which has genus three. See also \cite[Section 7.3]{GP}.
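As a quick consistency check, every monomial of the defining equation has weighted degree eight, which equals the sum of the weights of $\mathbb{P}(4,2,1,1)$, and the equation is invariant under the automorphism:
\[\deg(y^2)=\deg(x^7s)=\deg\bigl(t^2(t-s^2)^2\bigr)=8, \qquad x^7s\mapsto \zeta_7^{7}\,x^7s=x^7s,\quad y^2\mapsto(-y)^2=y^2,\]
consistent with $X$ being a K3 surface preserved by $\sigma$.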
\end{example} \begin{example}{\bf (Case D3)}\label{example_D3} Consider $(X, \sigma_{14})$, where $X$ is the K3 surface with equation \[x^2=w^7y+y^4+z^7\] in $\mathbb{P}(14,7,4,3)_{(x,y,z,w)}$ given in \cite{ABS}, and $\sigma_{14}$ the order 14 purely non-symplectic automorphism $\sigma_{14} \colon (x,y,z,w)\mapsto (-x,y,z,\zeta_{7}w)$. We have the following: point $(1:0:1:0)$ of type $A_{1}$; points $(1: 1:0:0)$ and $(-1: 1:0:0)$, both of type $A_{6}$; and point $(0:0:0:1)$ of type $A_2$ (Figure \ref{fig:D3}). \begin{figure}[h!] \centering \begin{tikzpicture}[line cap=round,line join=round,>=triangle 45,x=0.6cm,y=0.6cm] \clip(-1.,2.) rectangle (9.5,9.); \draw [line width=1.5pt] (0.,8.)-- (8.,8.); \draw [line width=1.5pt] (0.,3.)-- (8.,3.); \draw [line width=1.pt] (6.66,8.5)-- (5.36,7.02); \draw [line width=1.pt] (5.46,7.6)-- (6.4,6.); \draw [line width=1.pt] (5.38,5.38)-- (6.66,6.66); \draw [line width=1.pt] (5.449316303531188,5.890578512396699)-- (6.33586776859505,4.418001502629605); \draw [line width=1.pt] (5.329105935386936,3.8169496619083425)-- (6.591314800901587,4.94392186326071); \draw [line width=1.pt] (5.344132231404967,4.3729226145755105)-- (6.275762584522924,2.4345304282494387); \draw [line width=1.pt] (4.796061012223987,8.487630817070546)-- (3.4960610122239872,7.007630817070546); \draw [line width=1.pt] (3.596061012223987,7.587630817070546)-- (4.536061012223987,5.987630817070547); \draw [line width=1.pt] (3.516061012223987,5.3676308170705465)-- (4.796061012223987,6.647630817070547); \draw [line width=1.pt] (3.585377315755175,5.8782093294672455)-- (4.471928780819037,4.4056323197001515); \draw [line width=1.pt] (3.465166947610923,3.804580478978889)-- (4.727375813125574,4.931552680331256); \draw [line width=1.pt] (3.480193243628954,4.360553431646057)-- (4.411823596746911,2.4221612453199852); \draw [line width=1.pt] (2,8.5)-- (2,6.5); \draw [line width=1.pt] (2,4.5)-- (2,2.5); \draw (8.131725487029298,8.351681812580267) node[anchor=north west] {$C_{w}$}; 
\draw (8.264237316430284,3.428357689451359) node[anchor=north west] {$C_{z}$}; \draw (-0.7,6.710573771537298) node[anchor=north west] {$C_{x}$}; \draw [line width=1.pt] (2.2,4.3)-- (0.3,4.3); \draw [line width=1.5pt] (0.7989755342900065,8.690920310306069)-- (0.7989755342900065,2); \begin{scriptsize} \ \end{scriptsize} \end{tikzpicture} \caption{D3}\label{fig:D3} \end{figure} Since $\sigma_7$ fixes the rational curves $C_{z}\coloneqq \{z=0\}$ and $C_{w}\coloneqq \{w=0\},$ we are in case $D.$ The involution $\sigma_2$ fixes the curve $C_{x}\coloneqq \{x=0\}$ of genus $3$ and two rational curves given by the component of the $A_1$ and one of the components of the $A_2.$ The involution $\sigma_2$ also exchanges the two $A_{6}$ points. \end{example} \begin{example}{\bf (Case D8)} \label{example_D8} Again, consider $(X_{a, b}, \sigma_{14})$, the K3 surface together with the automorphism from Example \ref{example_A1_(9,1)}. If $a=0$ and $b$ is such that $b^3=-\frac{27}{4}$, it follows that $X_{a,b}$ contains a fiber of type $II^*$ at $t=(1:0)$, a fiber of type $I_7$ at $t=(0:1)$, and seven singular fibers of type $I_1$. The fixed locus of $\sigma_{14}$ is of type D8. Observe that the surface $y^2=x^3+t^3x+t^8,$\ $t\in\mathbb P^1$ given in \cite[ Example 7.5]{kondo_trivial} admits the purely non-symplectic order 14 automorphism $ \sigma_{14}: (x,y,t) \mapsto (\zeta_7^3x,-\zeta_7y,\zeta_7^2t) $ and corresponds to case D8 as well. \end{example} \section{Order 21} \label{sec-order21} Purely non-symplectic automorphisms of order $21$ on K3 surfaces have been classified in \cite{Brandhorst}. Here we present a new proof and a more detailed description of Brandhorst's result. 
Using the same approach as in the previous section, we show that the examples of \cite[Table 3]{Brandhorst} fit the invariants of Table \ref{tab:21} below, and we prove: \begin{proposition} \label{thm21} The fixed locus of a non-symplectic automorphism of order $21$ on a K3 surface is not empty and it consists of either: \begin{enumerate}[(i)] \item The union of $N_{21}$ isolated points, where $N_{21}\in \{4,7\}$; or \item The disjoint union of a rational curve and $N_{21}$ isolated points, where $N_{21}\in\{8,11\}$. \end{enumerate} Moreover, all these possibilities occur, and a more detailed description is given in Table \ref{tab:21} below, where $\sigma_7\doteq \sigma_{21}^3$ and $\sigma_3\doteq \sigma_{21}^7$. \begin{table}[H] \begin{tabular} {c!{\vrule width 1.5pt} c|c|c} & $\Fix(\sigma_{21})$ & $\Fix(\sigma_{7})$ & $\Fix(\sigma_3)$ \\ \noalign{\hrule height 1.5pt} C(3,2,3) & $R \sqcup\{p_1,\ldots,p_8\}$ &$R\sqcup \{p_1,\ldots,p_8\}$ & $C_3\sqcup R\sqcup R'\sqcup\{p_1,p_2,p_3\}$ \\ C(3,1,2) &$\{p_1,\ldots,p_7\}$ &$R\sqcup \{p_1,\ldots,p_5,q_1,q_2,q_3\}$& $C_3\sqcup R\sqcup\{p_1,q_1\}$ \\ C(3,0,1) & $\{p_1,\ldots,p_4\}$ & $R\sqcup \{p_1,\ldots,p_8\}$& $C_3\sqcup\{p_1\}$ \\ \noalign{\hrule height 1.5pt} B(3,3,4) &$R \sqcup \{p_1,\ldots,p_{11}\}$ &$E\sqcup R\sqcup \{p_1,\ldots,p_8\}$ & $C_3\sqcup R\sqcup R'\sqcup R''\sqcup \{p_1,\ldots,p_4\}$ \end{tabular} \caption{Order 21} \label{tab:21} \end{table} \end{proposition} In order to prove Proposition \ref{thm21}, we first note that, as we observed in Section \ref{sec:background}, at any fixed point a purely non-symplectic automorphism $\sigma_{21}$ of order $21$ acts as multiplication by the matrix $A_{i,21}$ for some $i$, with \[A_{i,21}\coloneqq \begin{pmatrix} \zeta_{21}^{1+i} & 0 \\ 0 & \zeta_{21}^{21-i} \end{pmatrix},\ 0\leq i \leq 10.\] Thus, the holomorphic Lefschetz formula \eqref{eq-Lefhol} applied to $\sigma_{21}$ gives us the following linear system of equations: \begin{equation} \begin{cases} 3
m_{6,21} &= 3 + 4 m_{1,21} - 5 m_{2,21} - 4 m_{4,21} + 8 m_{5,21} \\ 3 m_{7,21} &= 3 - 5 m_{1,21} + 4 m_{2,21} - 13 m_{4,21} + 17 m_{5,21} \\ m_{8,21} &= 1 - 2 m_{1,21} + 2 m_{2,21} - 5 m_{4,21} + 6 m_{5,21} \\ m_{9,21} &= 3 - 4 m_{1,21} + 4 m_{2,21} - 3 m_{3,21} - 3 m_{4,21} + 7 m_{5,21} \\ 2 m_{10,21} &= 2 - 3 m_{1,21} + 3 m_{2,21} - 2 m_{3,21} - 3 m_{4,21} + 6 m_{5,21} \\ 6 \alpha_{21} &= m_{1,21} + m_{2,21} - m_{4,21} + 2 m_{5,21} \end{cases}\end{equation} where $\alpha_{21}\coloneqq \sum (1-g(C))$ and the sum is taken over all curves $C$ fixed by $\sigma_{21}$. Moreover, considering the non-symplectic automorphism $\sigma_7=\sigma_{21}^3$ of order 7, we know that \[ \begin{cases} m_{1,21}+m_{5,21}+m_{8,21} &\leq m_{1,7} \\ m_{2,21}+m_{4,21}+m_{9,21} &\leq m_{2,7} \\ m_{3,21}+m_{10,21} &\leq m_{3,7} \end{cases} \] We note also that points of type $A_{6,21}$ and $A_{7,21}$ lie on a curve fixed by $\sigma_7$ (but not fixed by $\sigma_{21}$) and points of type $A_{j,21}$, $j=2,3,5,6,8,9$, lie on a curve fixed by $\sigma_3=\sigma_{21}^7$ (but not fixed by $\sigma_{21}$). For this reason, we choose $r\coloneqq m_{6,21}+m_{7,21}$. Using MAGMA, we obtain the following four possibilities for the vector $(m_{1,21},\ldots,m_{10,21};\alpha_{21},r)$: \begin{align*} v_1=(3,3,1,0,0,0,0,1,0,0;1,0) && v_2=(0,0,0,0,0,1,1,1,3,1;0,2) \\ v_3=(0,0,1,0,0,1,1,1,0,0;0,2) && v_4=(3,2,1,1,1,3,0,0,0,0;1,3) \end{align*} Furthermore, we observe the following: \begin{lemma} If the fixed locus of $\sigma_{21}$ is described by one of the vectors $v_1,v_2,v_3$, then the fixed locus of $\sigma_7=\sigma_{21}^3$ is as in Case C of Table \ref{tab:7}. If it is described by the vector $v_4$, then the fixed locus of $\sigma_7$ is as in Case B. \end{lemma} \begin{proof} We first observe that $\sigma_7$ cannot be of type A. Suppose, by contradiction, that we are in Case A. We know that $\Fix(\sigma_{21}) \subseteq \Fix(\sigma_{7})$.
By the Riemann-Hurwitz formula, the genus one curve in $\Fix(\sigma_7)$ would contain either none or three isolated points fixed by $\sigma_{21}$, and thus $r=0$ or $3$. But the cases with these values of $r$ both have $\alpha_{21}=1$, which is not possible in Case A (recall that in Case A, a fixed curve must have genus 1, as shown in Table \ref{tab:7}). Case D for $\sigma_7$ is not admissible either. In fact, if $\sigma_7$ is as in case D, then $\Fix(\sigma_7)$ contains two rational curves. If they were both pointwise fixed by $\sigma_{21}$, this would give $\alpha_{21}=2$. If one curve is pointwise fixed and the other one is invariant, $\alpha_{21}=1$ and $r=2$. If both curves are invariant but not pointwise fixed, then $\alpha_{21}=0$ and $r=4$. These cases do not appear among the admissible ones. Therefore we conclude that $\sigma_7$ must fall under Case $B$ or Case $C$. We now observe that the situation described by the vector $v_4$ is only possible in Case B: since $r=3$ in this case, it means that there are three points on curves fixed by $\sigma_7$ and they are not fixed by $\sigma_{21}$. Thus there must be an elliptic curve in $\Fix(\sigma_7)$. As we observed in Lemma \ref{prop:B}, if $\sigma_7$ fixes an elliptic curve and a rational curve as in Case B, the surface admits an elliptic fibration with a fiber of type $II^*$ and 14 fibers of type $I_1$. Since the fiber of type $II^*$ does not admit a symmetry of order three, $\sigma_{21}$ fixes the central curve of this fiber and eight points that lie on it. As for vector $v_2$ (respectively $v_3$), the fixed locus of $\sigma_{21}$ consists of seven (respectively four) points. Thus $\sigma_7$ cannot belong to Case B since, by the previous remark, $\sigma_{21}$ would then fix a curve and eight points. Assume now that we are in Case B and the vector $v_1$ describes the action of $\sigma_{21}$.
Then the fixed locus of $\sigma_{21}$ is the union of a rational curve and eight points; since $r=0$, the action of $\sigma_{21}$ on the elliptic curve in $\Fix(\sigma_7)$ is a translation. But then the action should be a translation on the fiber $II^*$, and this is not the case. \end{proof} At last, we are now in position to prove Proposition \ref{thm21}: \begin{proof}[Proof of Proposition \ref{thm21}] Consider the induced action of $\sigma_{21}$ on $H^2(X,\mathbb{R})$ and recall the definition of $d_i\coloneqq \dim\ H^2(X,\mathbb{R})_{\zeta_{i}}$ for $i=1,3,7,21$. For each $i=3,7,21$ we let $\chi_i$ denote the Euler characteristic of the fixed locus of $\sigma_i=\sigma^{\frac{21}i}$. By applying the topological Lefschetz formula \eqref{eq-Leftop} to $\sigma_{21},\sigma_7$ and $\sigma_3$, we obtain: \begin{equation}\label{top21}\begin{cases} \chi_{21} &= 2 + d_{21} - d_7 - d_3 + d_1 \\ \chi_7 &= 2 - (2 d_{21} + d_7) + 2 d_3 + d_1 \\ \chi_3 &= 2 - 6 d_{21} + 6 d_7 - d_3 + d_1 \end{cases}\end{equation} Moreover, we know that \[ 22 = \text{dim }H^2(X,\mathbb{R})= 12 d_{21} + 6 d_7 + 2 d_3 + d_1. 
\] Combining these equations one gets the following possibilities: \begin{center} \begin{table}[H] \begin{tabular} {c|c|c|c|c|c|c} Type $\sigma_7$ &$\chi_7$ & $\chi_{21}$ & $\chi_3$ & $(d_{21},d_7,d_3,d_1)$ & $(m_{1,21},\ldots,m_{10,21} ,\alpha_{21},r)$ & $\Fix(\sigma_{21})$ \\ \hline & 10 & 10 & 3 & (1,0,1,8) & (3,3,1,0,0,0,0,1,0,0,1,0) & $R \sqcup\ $ 8 pts \\ C& 10 & 7 & 0 & (1,0,2,6) & (0,0,0,0,0,1,1,1,3,1,0,2) & 7 pts \\ & 10 & 4 & -3 & (1,0,3,4) & (0,0,1,0,0,1,1,1,0,0,0,2) & 4 pts \\ \hline B & 10&13 & 6&(1,0,0,10)& (3,2,1,1,1,3,0,0,0,0,1,3)&$R \sqcup\ $11pts \end{tabular} \caption{} \end{table} \end{center} Thus, it remains to look at the fixed locus of $\sigma_3$, which by \cite{ArtebaniSarti} consists of $N_3$ isolated fixed points, a curve of genus $g_3\geq 0$, and $k_3$ rational curves, where by \cite[Theorem 2.2]{ArtebaniSarti} the following relation holds: \[1-g_3+k_3=N_3-3.\] In particular, $\chi_3=N_3+2(1-g_3+k_3)=3N_3-6$, and we can list the possibilities for $(g_3,k_3,N_3)$ according to the value of $N_3$. If $\chi_3=3$, then $N_3=3$ and by \cite{ArtebaniSarti} we have the following possibilities for the invariants $(g_3,k_3,N_3)$ of $\Fix(\sigma_3)$: \begin{align*}(g_3,k_3,N_3)=(-,-,3), (1, 0, 3), (2, 1, 3), (3, 2, 3). \end{align*} Similarly, \begin{itemize} \item if $\chi_3=0$, then $N_3=2$ and the possibilities are $(g_3,k_3,N_3)=(2, 0, 2), (3, 1, 2), (4, 2, 2)$. \item if $\chi_3=-3$ then $N_3=1$ and the possibilities are $ (g_3,k_3,N_3)= (3, 0, 1), (4, 1, 1)$. \item if $\chi_3=6$, then $N_3=4$ and the possibilities are: $(g_3,k_3,N_3)=(3,3,4), (2,2,4), (1,1,4), (0,0,4)$. \end{itemize} Next, we observe that we can actually eliminate most of these possibilities. As in Lemma \ref{homma}, the automorphism $\sigma_{21}$ acts with order seven on $\Fix(\sigma_3)$, and thus $C_{g_3}$ should admit an automorphism of order seven. But if $g_3\geq 2$ and if $\phi$ is an automorphism of prime order $p$, we must have $p\leq 2g_3+1$. 
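Concretely, for $g_3=2$ an automorphism of order seven is already excluded by Riemann-Hurwitz: with quotient genus $g'$ and $N$ (totally ramified) fixed points,
\[2\cdot 2-2=7(2g'-2)+6N \quad\Longleftrightarrow\quad 7g'+3N=8,\]
which has no solution in non-negative integers.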
Then we may eliminate the case where $g_3 = 2$. A curve of genus four does not admit an automorphism of order seven by \cite{database}, and thus $g_3\neq 4$. Finally, if $\chi_3=3$, then $\Fix(\sigma_{21})$ consists of a fixed rational curve plus eight points. Since $\Fix(\sigma_{21})\subseteq \Fix(\sigma_3)$, using Riemann-Hurwitz we can also eliminate the triples $(g_3,k_3,N_3)=(-,-,3), (1, 0, 3)$. The argument is similar for triples $(g_3,k_3,N_3)=(1,1,4),(0,0,4)$ with $\chi_3=6$. Therefore, the possible cases are the ones listed in Table \ref{tab:21}. \end{proof} \begin{remark}\label{d21} Note that in the proof of Proposition \ref{thm21}, we have \begin{equation*} \rk S(\sigma_{21})=d_1,\ \rk S(\sigma_7)=2d_3+d_1,\ \rk S(\sigma_3)=6d_7+d_1. \end{equation*}\end{remark} We end this section by showing that the examples in \cite{Brandhorst} are indeed compatible with the invariants of Table \ref{tab:21}, as claimed. \begin{example}{\bf(Case C(3,2,3))}\label{ex21-C1} Let $(X,\sigma_{21})$ be the following elliptic K3 surface with the non-symplectic automorphism $\sigma_{21}$ of order 21: \[y^2=x^3+4t^4(t^7-1),\ t\in\mathbb P^1,\quad \sigma_{21}: (x,y,t)\mapsto (\zeta_7^6\zeta_3 x, \zeta_7^2y, \zeta_7t).\] The collection of singular fibers of the elliptic fibration consists of a fiber of type $IV^*$ over $t=0$, a fiber of type $II$ over $t=\infty$, and $7$ of type $II$ over the zeros of $t^7-1$. The fixed locus of $\sigma_7$ consists of the central component $R$ of the fiber $IV^*$, six isolated points on the fiber $IV^*$ and two points on the fiber $II$ over $t=\infty$. The automorphism $\sigma_{21}$ has the same fixed locus as $\sigma_7$. The fixed locus of $\sigma_3$ consists of the zero section, the curve $R$ and the 3-section $y=0$, which has genus three, and three additional points. In particular, the invariants of $\Fix(\sigma_{21}^j)$, $j=1,3,7$, are as in the first row of Table \ref{tab:21}.
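Since the Weierstrass equation has $A\equiv 0$, the fibration has constant $j$-invariant zero, and the additive fibers can be read off from the vanishing order of $B=4t^4(t^7-1)$:
\[v_{t=0}(B)=4\ \Rightarrow\ IV^*, \qquad v_{t=\infty}(B)=12-11=1\ \Rightarrow\ II, \qquad \Delta=27B^2\ \Rightarrow\ \text{seven fibers of type } II \text{ over } t^7=1,\]
and the Euler numbers add up to $8+2+7\cdot 2=24$.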
\end{example} \begin{example}\label{ex21-C2} {\bf(Case C(3,1,2))} Let $(X,\sigma_{21})$ be the following elliptic K3 surface with the non-symplectic automorphism $\sigma_{21}$ of order 21: \[y^2=x^3+t^3(t^7+1),\ t\in\mathbb P^1,\quad \sigma_{21}: (x,y,t)\mapsto (\zeta_7^3\zeta_3 x, \zeta_7y, \zeta_7^3t).\] The singular fibers of the elliptic fibration are $I_0^*+IV+7II$. The fixed locus $\Fix(\sigma_7)$ consists of the central component $R$ of the fiber $I_0^*$, four points on $I_0^*$, and four points on $IV$. The automorphism $\sigma_{21}$ does not fix $R$ and only fixes isolated points. The automorphism $\sigma_3$ exchanges three of the non-central components of the fiber $I_0^*$ and acts on the remaining one; we obtain $(g_3,k_3,N_3)=(3,1,2)$. The conclusion is that the invariants of $\Fix(\sigma_{21}^j)$, $j=1,3,7$, are as in the second row of Table \ref{tab:21}. \end{example} \begin{example} {\bf(Case C(3,0,1))} Let $X$ be the K3 surface whose equation in $\mathbb P^3$ is \[x_0^3x_1+x_1^3x_2+x_0x_2^3-x_0x_3^3=0.\] This surface admits the purely non-symplectic automorphism of order 21 \[\sigma_{21}:(x_0,x_1,x_2,x_3)\mapsto (\zeta_7x_0, \zeta_7^5x_1, x_2,\zeta_3x_3)\] whose fixed locus consists of the four standard coordinate points. The fixed locus of $\sigma_3$ consists of the genus three curve $\{x_3=0\}\cap X$ and the point $p_1=(0:0:0:1)$. In particular, we see that the invariants of $\Fix(\sigma_{21}^j)$, $j=1,3,7$, are as described in the third row of Table \ref{tab:21}. \end{example} \begin{example} \label{ex21-B4}{\bf(Case B(3,3,4))} Let $(X,\sigma_{21})$ be the following elliptic K3 surface with the non-symplectic automorphism $\sigma_{21}$ of order 21: \[y^2=x^3+t^5(t^7-1),\ t\in\mathbb P^1,\quad \sigma_{21}: (x,y,t)\mapsto (\zeta_{21}^2 x, \zeta_7y, \zeta_7^6t).\] The collection of singular fibers consists of a type $II^*$ fiber at $t=0$ and seven type $II$ fibers over the zeros of $t^7-1$. The order seven automorphism $\sigma_7$ fixes the following: the smooth fiber $E$ of genus one over $t=\infty$, the central component $R$ of the $II^*$ fiber, and eight isolated points on the same fiber $II^*$. The automorphism $\sigma_{21}$ fixes $R$ as well and acts on $E$ as an automorphism of order three, fixing three points. The fixed locus of $\sigma_3$ consists of $R$, along with another rational curve in the fiber $II^*$, the zero section, and the genus three 3-section $X\cap\{y=0\}$, and four isolated points on the fiber $II^*$. Therefore, the invariants of $\Fix(\sigma_{21}^j)$, $j=1,3,7$, are as in the fourth row of Table \ref{tab:21}. \\ Another example of this type of automorphism is given by the following. Consider the equation $y^3=z^7+x^2w+xw^{11}$ in the weighted projective space $\mathbb P(10,7,3,1)_{x,y,z,w}$, and consider the order 21 automorphism \[\sigma_{21}:(x,y,z,w)\mapsto (\zeta_7^3x,\zeta_3y,z,\zeta_7w).\] The curve $C_{y}\coloneqq \{y=0\}$ has genus three and is fixed by $\sigma_3$, and the curve $C_{w}\coloneqq\{w=0\}$ has genus one and is fixed by $\sigma_7$. The rational curve fixed by $\sigma_{21}$ is a rational component in the resolution of the $A_9$ singularity $(1:0:0:0)$. \end{example} \section{Order 28}\label{sec-order28} We now prove a classification theorem for purely non-symplectic automorphisms of order $28$, recovering the results in \cite{Brandhorst}. Our result is the following: \begin{proposition} \label{thm28} The fixed locus of a purely non-symplectic automorphism of order $28$ on a K3 surface is not empty and it consists of either: \begin{enumerate}[(i)] \item The union of $N_{28}$ isolated points, where $N_{28}\in \{3,5\}$; or \item The disjoint union of a rational curve and $10$ isolated points. \end{enumerate} Moreover, all these possibilities occur.
The examples of \cite[Table 3]{Brandhorst} fit the invariants of Table \ref{tab:28} below, which provides a more detailed description of the possible different fixed loci of $\sigma_{28}$ and its powers. {\small \begin{table}[H] \hspace*{-1.5cm} \begin{tabular} {c|c|c|c|c} $\Fix(\sigma_{28})$& $\Fix(\sigma_{14})$ & $\Fix(\sigma_{7})$ & $\Fix(\sigma_{4})$ & $\Fix(\sigma_{2})$\\ \noalign{\hrule height 1.5pt} $\{p_1,\ldots,p_5\}$ &$\{p_1,\ldots,p_5,p_6,p_7\}$&$E\sqcup \{ p_1,p_2,p_3\}$ & $\{q_1,\ldots,q_7,p_1\}\sqcup R_1\sqcup R_2$ & $C_3\sqcup R_1\sqcup R_2$ \\ $\{p_1,p_2,p_3\}$ & $\{p_1,\ldots,p_7\}$ & $E\sqcup \{p_1,q_1,q_2\}$ & $C_3$ & $C_3\sqcup R_1\sqcup R_2$\\ \noalign{\hrule height 1.5pt} $R\sqcup \{p_1,\ldots,p_{10}\}$ & $R\sqcup \{p_1,\ldots,p_{10},p_{11},p_{12}\}$ & $E\sqcup R\sqcup\{p_1,\ldots,p_8\}$ & $\{p_1,\ldots,p_8\}\sqcup R\sqcup R_1$ & $C_6\sqcup R\sqcup R_1\sqcup\ldots\sqcup R_4$\\ \end{tabular}\caption{Order 28} \label{tab:28} \end{table} } \end{proposition} \begin{proof} As explained in Section \ref{sec:background}, at any fixed point the automorphism $\sigma_{28}$ acts as multiplication by $A_{i,28}\coloneqq \begin{pmatrix} \zeta_{28}^{i+1} & 0 \\ 0 & \zeta_{28}^{28-i} \end{pmatrix}$ for $0\leq i \leq 13$, and we denote the number of points of type $A_{i,28}$ by $m_{i,28}$. 
The holomorphic Lefschetz formula \eqref{eq-Lefhol} applied to $\sigma_{28}$ gives us the following linear system of equations \[\begin{cases} 3 m_{6,28} &= 3 + 4 m_{1,28} - 5 m_{2,28} - 4 m_{4,28} + 8 m_{5,28} \\ 3 m_{7,28} &= 3 - 5 m_{1,28} + 4 m_{2,28} - 13 m_{4,28} + 17 m_{5,28} \\ m_{8,28} &= 1 - 2 m_{1,28} + 2 m_{2,28} - 5 m_{4,28} + 6 m_{5,28} \\ m_{9,28} &= 3 - 4 m_{1,28} + 4 m_{2,28} - 3 m_{3,28} - 3 m_{4,28} + 7 m_{5,28} \\ 2 m_{10,28} &= 2 - 3 m_{1,28} + 3 m_{2,28} - 2 m_{3,28} - 3 m_{4,28} + 6 m_{5,28} \\ 6 \alpha_{28} &= m_{1,28} + m_{2,28} - m_{4,28} + 2 m_{5,28} \end{cases}\] where $\alpha_{28}\coloneqq \sum (1-g(C))$ and the sum runs over all curves $C$ which are fixed by $\sigma_{28}$. Moreover, considering the automorphism $\sigma_{7}=\sigma_{28}^4$ which has order seven, we further know that \[\begin{cases} m_{1,28}+m_{5,28}+m_{8,28}+m_{12,28} &\leq m_{1,7} \\ m_{2,28}+m_{4,28}+m_{9,28}+m_{11,28} &\leq m_{2,7} \\ m_{3,28}+m_{10,28} &\leq m_{3,7} \end{cases}\] Note that \begin{itemize} \item points of type $A_{13,28}$ lie on a curve fixed by $\sigma_{14}$ (but not by $\sigma_{28}$); \item points of type $A_{7,28}, A_{8,28}$ and $A_{13,28}$ lie on a curve fixed by $\sigma_7$ (but not fixed by $\sigma_{28}$); \item points of type $A_{j,28}, j=3,4,7,8,11,12$ lie on a curve fixed by $\sigma_4$ (but not fixed by $\sigma_{28}$). 
\end{itemize} Because of the observations listed above, we choose $r\coloneqq m_{6,28}+m_{7,28}+m_{13,28}$ and obtain the following four possibilities for $(m_{1,28},\ldots,m_{13,28};\alpha_{28},r)$: \begin{align*} w_1=(0,0,0,0,0,0,2,2,1,0,0,0,0;0,2) && w_3=(0,0,0,1,0,0,2,0,0,0,0,0,0;0,2) \\ w_2=(3,2,1,0,1,2,0,2,1,0,0,0,0;1,2) && w_4=(3,2,1,1,1,2,0,0,0,0,0,0,0;1,2) \end{align*} We now consider the induced action of $\sigma_{28}$ on $H^2(X,\mathbb{R})$ and as in Section \ref{sec:background} we let \[d_{i} \coloneqq \dim H^2(X,\mathbb{R})_{\zeta_{i}},\ i=28,14,7,4,2,1.\] For each $i=2,4,7,14,28$ we let $\chi_i$ denote the Euler characteristic of the fixed locus of the power of $\sigma_{28}$ which has order $i$. Applying the topological Lefschetz formula \eqref{eq-Leftop} to $\sigma_{28},\sigma_{14},\sigma_7,\sigma_4$ and $\sigma_2$ we obtain: \begin{equation}\label{top28} \begin{cases} \chi_{28} &= 2 + d_{14} - d_7 - d_2 + d_1 \\ \chi_{14} &= 2 + 2d_{28} - d_{14} - d_7 - 2d_4 + d_2 + d_1 \\ \chi_7 &= 2 - 2d_{28} - d_{14} - d_7 + 2d_4 + d_2 + d_1\\ \chi_4 &= 2 - 6d_{14} + 6d_7 - d_2 + d_1\\ \chi_2 &= 2 - 12d_{28} + 6d_{14} + 6d_7 - 2d_4 + d_2 + d_1 \end{cases}\end{equation} Moreover, we know that \[ 22 = \text{dim }H^2(X,\mathbb{R})= 12 d_{28} + 6d_{14} + 6d_7 + 2d_4 + d_2 + d_1 \] Using \eqref{top28} one gets the following possibilities, according to the four cases of $w_i$: \begin{center} \begin{tabular} {c|c|c|c|c|c|c} $w_i$ & $\chi_{28}$ & $(d_{28},\ldots,d_1)$& $\chi_{14}$ & $\chi_7$& $\chi_{4}$ &$\chi_2$ \\ \hline $w_1$ & 5&(1,1,0,1,0,2) &3 & 3 & -2&-4 \\ && (1,1,0,0,1,3) &7&3&-2 &0\\ \rowcolor{Lgray} && (1,0,1,0,0,4) &7&3&12&0\\ \hline $w_2$ & 14 & - & - & - &-&-\\ \hline $w_3$ & 3 & (1,1,0,1,0,1) &3 &3 &-4&-4 \\ & & (1,0,1,1,0,2) & 3& 3&10&-4 \\ \rowcolor{Lgray} & & (1,1,0,0,2,2) & 7& 3 & -4 &0 \\ & & (1,0,1,0,1,3) & 7& 3 & 10 &0 \\ \hline \rowcolor{Lgray} $w_4$ &12& (1,0,0,0,0,10) &14&10&12&0\\ \end{tabular} \end{center} Observe that vector $w_2$ does not 
give any admissible case, and $\chi_{14}$ cannot be 3 by our classification of Section \ref{sec-order14}. This implies that either $\chi_{7}=10$, the vector of point types is $w_4$ and $\sigma_{14}=\sigma_{28}^2$ is of type B3 of Table \ref{tab:fixed}, or $(\chi_{14},\chi_7, \chi_2)= (7,3,0)$. In the latter case, $\sigma_{14}$ is of type A1(3,2) of Table \ref{tab:fixed} and $\Fix(\sigma_2)=C_3\,\cup \, R_1 \, \cup \, R_2$. Recalling that $\Fix(\sigma_4) \subseteq\Fix(\sigma_2)$ and that $\sigma_4$ acts with order 1 or 2 on $\Fix(\sigma_2)$, by the Riemann-Hurwitz formula we can conclude that $\chi_4\in\{-4,0,4,8,12\}$. This leaves only the cases highlighted in gray. We now study the action of $\sigma_4$ on $\Fix(\sigma_2)$. If $\chi_4=-4$, then $\Fix(\sigma_4)=C_3$ and $R_1$ and $R_2$ are exchanged by $\sigma_4$. If $\chi_4=12$, then $\sigma_4$ can only fix rational curves and \cite[Proposition 1]{ArtebaniSarti4} implies that $\sigma_4$ fixes exactly two rational curves and 8 isolated points. \end{proof} \begin{remark}\label{d28} As in the order 21 case, note that the following relations hold: \[\rk S(\sigma_7)=2d_4+d_2+d_1,\quad \rk S(\sigma_4)=6d_7+d_1,\quad \rk S(\sigma_2)=6(d_{14}+d_7)+d_2+d_1. \] \end{remark} We now show that the examples in \cite[Table 3]{Brandhorst} are consistent with the invariants listed in Table \ref{tab:28}. \begin{example} The elliptic K3 surface with Weierstrass equation \[y^2=x^3+(t^7+1)x\] admits the following order 28 purely non-symplectic automorphism \[\sigma_{28}(x,y,t)=(x-(y/x)^2,i(y-(y/x)^3),\zeta_7t).\] The elliptic fibration admits a smooth fiber over $t=0$, a fiber of type $II$ over $t=\infty$ and 7 fibers of type $II$ over the roots of $\Delta=4(t^7+1)^3$. One can check that the invariants of $\Fix(\sigma_{28}^j), j=1,2,4,14$ are as in the first row of Table \ref{tab:28}. In particular, the automorphism $\sigma_{14}=\sigma_{28}^2$ is of type $A1(3,2)$ in our classification of Section \ref{sec-order14}.
Moreover, given that $\Fix(\sigma_2)=C_3\sqcup R_1\sqcup R_{2}$, we have that $\sigma_{4}$ does not exchange $R_1$ and $R_2$ and fixes the 8 tangential points of the fibers of type II lying on $C_{3}$. Therefore, $\sigma_{28}$ fixes the same three points in the fiber over $t=\infty$ and two additional points in the smooth fiber over $t=0$. \end{example} \begin{example} The elliptic K3 surface with Weierstrass equation \[y^2=x^3+(t^7+1)x,\ t\in\mathbb P^1,\] admits the following order 28 purely non-symplectic automorphism \[\sigma_{28}(x,y,t)=(-x,iy,-\zeta_7t).\] One can check that the invariants of $\Fix(\sigma_{28}^j), j=1,2,4,14$ are as in the second row of Table \ref{tab:28}. In particular, the automorphism $\sigma_{14}=\sigma_{28}^2$ is of type $A1(3,2)$ in our classification of Section \ref{sec-order14}. Given that $\Fix(\sigma_2)=C_3\sqcup R_1\sqcup R_{2}$, the automorphism $\sigma_{4}$ exchanges $R_1$ and $R_2$ and fixes $C_{3}$. As a consequence, $\sigma_{28}$ fixes the tangential point in the fiber of type II over $t=\infty$ and two additional points in the smooth fiber. Another example of this type of automorphism is given by the following. Consider the K3 surface in $\mathbb P(7,3,2,2)$ which is the zero locus of the quasi-smooth polynomial $x^2+y^4z+z^7+w^7$. It admits the purely non-symplectic automorphism of order 28 \[\sigma_{28}(x,y,z,w)=(x,iy,z,\zeta_7w).\] Resolving the singularity of type $A_2$ at $(0:1:0:0)$ and the seven singularities of type $A_1$ at $(0:0:\zeta_{14}^i: 1), i=1,3,\ldots,13$, we see that the different fixed loci of the powers of $\sigma_{28}$ are as in the second row of Table \ref{tab:28}.
\end{example} \begin{example} Finally, the elliptic K3 surface with Weierstrass equation \[y^2=x^3+x+t^7,\ t\in\mathbb P^1,\] admits the order 28 purely non-symplectic automorphism \[\sigma_{28}(x,y,t)=(-x,iy,-\zeta_7t).\] The elliptic fibration admits a smooth fiber over $t=0$, a fiber of type $II^*$ over $t=\infty$ and 14 nodal curves over the roots of $\Delta=4+27t^{14}$. The automorphism $\sigma_{14}=\sigma_{28}^2$ is of type $B3$ in our classification of Section \ref{sec-order14} and we can check that the invariants of $\Fix(\sigma_{28}^j), j=1,4,14$ indeed agree with the third row of Table \ref{tab:28}. Moreover, since $\Fix(\sigma_2)=C_6\sqcup R\sqcup R_1\sqcup \dots \sqcup R_4$, we have that $\sigma_4$ fixes two rational curves including $R$, two points in $C_6$ and six additional points in the other rational curves. As a consequence, $\sigma_{28}$ fixes $R$ and ten additional points, two of them on the smooth fiber over $t=0$. \end{example} \section{Order 42}\label{sec-order42} In \cite{Brandhorst}, Brandhorst classifies purely non-symplectic automorphisms of order $42$ on K3 surfaces. Here, we provide a different and more geometric view of his result. We prove: \begin{proposition} \label{thm42} The fixed locus of a purely non-symplectic automorphism of order $42$ on a K3 surface is not empty and it consists of either: \begin{enumerate}[(i)] \item The disjoint union of $N_{42}$ isolated points, where $N_{42}\in \{5,6\}$; or \item The disjoint union of a rational curve and $9$ isolated points. \end{enumerate} Moreover, all these possibilities occur, and a more detailed description is given in Table \ref{tab:42} below.
{\small \begin{table}[H] \begin{tabular} {c!{\vrule width 1.5pt} c|c|c|c} Type $\sigma_{14}$ & $\Fix(\sigma_{42})$ & $\Fix(\sigma_7)$ & $\Fix(\sigma_{21})$ & $\Fix(\sigma_3)$ \\ \noalign{\hrule height 1.5pt} C1 & $\{p_1,\ldots,p_6\}$&$R\sqcup\{p_1,\ldots,p_8\}$ & $R\sqcup\{p_1,\ldots,p_8\}$ & $C_3\sqcup R\sqcup R'\sqcup\{p_1,p_2,p_3\}$ \\ \hline C3 & $\{p_1,\ldots,p_5\}$& $R\sqcup\{p_1,\ldots,p_8\}$ & $\{p_1,\ldots,p_7\}$ & $C_3\sqcup R\sqcup \{p_1,p_2\}$ \\ \hline B3 & $R\ \sqcup\{p_1,\ldots, p_9\}$&$E\sqcup R\sqcup\{p_1,\ldots,p_8\}$&$R\ \sqcup\{p_1,\ldots, p_{11}\}$ & $C_3\sqcup R\sqcup R'\sqcup R'' \sqcup\{p_1,\ldots, p_4\}$ \end{tabular}\caption{Order 42} \label{tab:42} \end{table}} \end{proposition} \begin{proof} Let $\sigma_{42}$ be a purely non-symplectic automorphism of order 42. Then its square is a purely non-symplectic automorphism of order 21 and we use the classification of Section \ref{sec-order21}. Observe that isolated fixed points for $\sigma_{42}$ of type $A_{20,42}$ lie on curves fixed by $\sigma_{21}$ and not fixed by $\sigma_{42}$. Thus, in case $\sigma_{21}$ has invariants as in the first or fourth rows of Table \ref{tab:21}, it must be the case that $m_{20,42}$ is either 0 or 2, according to whether the rational curve $R\subset\Fix(\sigma_{21})$ is fixed by $\sigma_{42}$ or not. We also have the following inequalities \[m_{1,42}+m_{19,42}\leq m_{1,21},\quad m_{2,42}+m_{18,42}\leq m_{2,21},\quad m_{3,42}+m_{17,42}\leq m_{3,21},\quad m_{4,42}+m_{16,42}\leq m_{4,21},\] \[m_{5,42}+m_{15,42}\leq m_{5,21},\quad m_{6,42}+m_{14,42}\leq m_{6,21},\quad m_{7,42}+m_{13,42}\leq m_{7,21},\quad m_{8,42}+m_{12,42}\leq m_{8,21},\] \[m_{9,42}+m_{11,42}\leq m_{9,21},\quad m_{10,42}\leq m_{10,21}.\] According to this, we look for possible solutions $m=(m_{1,42},\ldots,m_{20,42};\alpha_{42})$ of the holomorphic Lefschetz formula \eqref{eq-Lefhol} applied to $\sigma_{42}$.
Using MAGMA we get the following: \begin{itemize} \item if $\sigma_{21}$ is as in the first row of Table \ref{tab:21}, there is no possible solution $m$ with $\alpha_{42}=1$. If $\alpha_{42}=0$ one gets the vector $m=(0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,1,1,1,2;0)$. Thus $\Fix(\sigma_{42})$ consists of 6 isolated points, two of which are contained in the rational curve fixed by $\sigma_{21}$. \item if $\sigma_{21}$ is as in the second row of Table \ref{tab:21}, then $\alpha_{42}$ is necessarily 0 and $m_{20,42}=0$. There is one solution $m=(0,0,0,0,0,0,0,0,0,1,1,1,1,1,0,0,0,0,0,0;0)$. Thus $\Fix(\sigma_{42})$ consists of 5 isolated points. \item if $\sigma_{21}$ is as in the third row of Table \ref{tab:21}, $\alpha_{42}$ is necessarily 0 and $m_{20,42}=0$. There is no solution in this case. \item if $\sigma_{21}$ is as in the fourth row of Table \ref{tab:21}, $\alpha_{42}$ can be 0 or 1. If $\alpha_{42}=0$ and $m_{20,42}=2$ there are no solutions. If $\alpha_{42}=1$ and $m_{20,42}=0$ by MAGMA we get only one solution $(m_{1,42},\ldots,m_{20,42};\alpha_{42})=(3,2,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0;1)$. Thus $\Fix(\sigma_{42})$ consists of a rational curve and 9 isolated points. \end{itemize} Thus, there are three possibilities for $\Fix(\sigma_{42})$. As before, let $d_{i}\coloneqq \dim H^2(X,\R)_{\zeta_{i}}$ for $i=42,21,14,7,6,3,2,1$.
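A quick numerical cross-check of this step is possible: each of the three dimension vectors found below must satisfy $12d_{42}+12d_{21}+6d_{14}+6d_7+2d_6+2d_3+d_2+d_1=22$ and must reproduce the Euler characteristics of the table via the topological Lefschetz formulas \eqref{top42}. The following sketch (in Python rather than the MAGMA used above; the data is transcribed from this proof) performs that verification:

```python
# Cross-check (a sketch, not the MAGMA computation used in the proof): the
# dimension vectors (d_42, d_21, d_14, d_7, d_6, d_3, d_2, d_1) must sum to
# rk H^2(X, R) = 22 with multiplicities phi(n), and must reproduce the Euler
# characteristics via the topological Lefschetz formulas (top42).

def euler_characteristics(d):
    d42, d21, d14, d7, d6, d3, d2, d1 = d
    return {
        42: 2 - d42 + d21 + d14 - d7 + d6 - d3 - d2 + d1,
        21: 2 + d42 + d21 - d14 - d7 - d6 - d3 + d2 + d1,
        14: 2 + (2*d42 + d14) - (2*d21 + d7) - (2*d6 + d2) + 2*d3 + d1,
        7:  2 - (2*d42 + 2*d21 + d14 + d7) + 2*d6 + 2*d3 + d2 + d1,
        6:  2 + (6*d42 + d6) - (6*d21 + d3) - (6*d14 + d2) + 6*d7 + d1,
        3:  2 - (6*d42 + 6*d21 + d6 + d3) + 6*d14 + 6*d7 + d2 + d1,
        2:  2 - (12*d42 + 6*d14 + 2*d6 + d2) + 12*d21 + 6*d7 + 2*d3 + d1,
    }

cases = {  # type of sigma_14 -> (dimension vector, (chi_42, ..., chi_2))
    "C1": ((1, 0, 0, 0, 1, 0, 2, 6),  (6, 10, 6, 10, 13, 3, -8)),
    "C3": ((1, 0, 0, 0, 1, 1, 1, 5),  (5, 7, 8, 10, 12, 0, -6)),
    "B3": ((1, 0, 0, 0, 0, 0, 0, 10), (11, 13, 14, 10, 18, 6, 0)),
}

for name, (d, expected) in cases.items():
    # the eigenspace dimensions must add up to rank H^2 = 22
    assert 12*d[0] + 12*d[1] + 6*d[2] + 6*d[3] + 2*d[4] + 2*d[5] + d[6] + d[7] == 22
    chi = euler_characteristics(d)
    assert tuple(chi[n] for n in (42, 21, 14, 7, 6, 3, 2)) == expected
```

Such a check guards against transcription errors in the linear system.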
We have \[ 22=12d_{42}+12d_{21}+6d_{14}+6d_7+2d_6+2d_3+d_2+d_1\] By the topological Lefschetz formula \eqref{eq-Leftop} applied to the powers of $\sigma_{42}$ we get the following linear system of equations \begin{equation}\label{top42} \begin{cases} \chi_{42}&=2-d_{42}+d_{21}+d_{14}-d_7+d_6-d_3-d_2+d_1\\ \chi_{21}&=2+d_{42}+d_{21}-d_{14}-d_7-d_6-d_3+d_2+d_1\\ \chi_{14}&=2+(2d_{42}+d_{14})-(2d_{21}+d_7)-(2d_6+d_2)+2d_3+d_1\\ \chi_7&=2-(2d_{42}+2d_{21}+d_{14}+d_7)+2d_6+2d_3+d_2+d_1\\ \chi_6&=2+(6d_{42}+d_6)-(6d_{21}+d_3)-(6d_{14}+d_2)+6d_7+d_1\\ \chi_3&=2-(6d_{42}+6d_{21}+d_6+d_3)+6d_{14}+6d_7+d_2+d_1\\ \chi_2&=2-(12d_{42}+6d_{14}+2d_6+d_2)+12d_{21}+6d_7+2d_3+d_1 \end{cases} \end{equation} Considering the different possible solutions we can compute the values of the Euler characteristics of the fixed locus of $\sigma_{42}$ and its powers: \begin{center} \begin{tabular} {c|c|c|c|c|c|c|c|c|c} Type $\sigma_{14}$ & $\chi_{42}$ & $\Fix(\sigma_{42})$ & $(d_{42},\ldots,d_1)$ &$\chi_{21}$ & $\chi_{14}$ & $\chi_7$ & $\chi_6$ & $\chi_{3}$ & $\chi_2$ \\ \hline C1 & 6&6 pts& (1,0,0,0,1,0,2,6) & 10 & 6 & 10 & 13 & 3 & -8 \\ C3 & 5 & 5 pts & (1,0,0,0,1,1,1,5) & 7& 8 & 10 & 12 & 0 & -6 \\ B3 & 11& $R\ \sqcup$ 9 pts & (1,0,0,0,0,0,0,10) & 13& 14 &10 & 18 & 6 & 0 \end{tabular} \end{center}\end{proof} \begin{remark}\label{d42} Observe the following relations hold: \[\rk S(\sigma)=d_1\geq 1,\quad \rk S(\sigma_7)=2d_6+2d_3+d_2+d_1,\quad \rk S(\sigma_6)=6d_7+d_1,\]\[ \rk S(\sigma_3)=6d_{14}+6d_7+d_2+d_1,\quad \rk S(\sigma_2)=6(2d_{21}+d_7)+2d_3+d_1. \] \end{remark} \begin{remark} The complete description of $\Fix(\sigma_3)$ follows from Proposition \ref{thm21}. \end{remark} \begin{remark} The possible values of $\chi_6$ obtained in the proof of Proposition \ref{thm42} and the classification in \cite{Dillies} allow us to also completely describe $\Fix(\sigma_6)$. 
The description is as follows: If $\sigma_{42}$ is as in the first row of Table \ref{tab:42}, then we must have $m_{2,6}=10$ and $m_{1,6}=1$. Moreover, $\sigma_6$ fixes 1 rational curve. In our notation, there are 8 fixed points of $\sigma_6$ lying on $C_3$; moreover, $\sigma_6$ fixes $p_1$, it fixes $R$, and it has 2 more fixed points lying on $R'$. Now, if $\sigma_{42}$ is as in the second row of Table \ref{tab:42}, then $m_{2,6}=8$ and $m_{1,6}=2$. Moreover, $\sigma_6$ fixes 1 rational curve. There are 8 fixed points of $\sigma_6$ lying on $C_3$; moreover, $\sigma_6$ fixes $p_1$ and $p_2$, and it also fixes $R$. Finally, if $\sigma_{42}$ is as in the last row of Table \ref{tab:42}, then $m_{2,6}=10$ and $m_{1,6}=4$. Moreover, $\sigma_6$ fixes 2 rational curves. There are 8 fixed points of $\sigma_6$ lying on $C_3$; moreover, $\sigma_6$ fixes $p_1,\ldots,p_4$, it fixes $R$ and $R'$, and it has 2 more fixed points lying on $R''$. \end{remark} Observe that Proposition \ref{thm42} is compatible with \cite{Brandhorst}. In fact, the examples in \cite[Table 3]{Brandhorst} agree with the invariants listed in Table \ref{tab:42}, as we describe below: \begin{example} The K3 surface is the same as in Example \ref{ex21-C1}, see \cite{Brandhorst}. On the same elliptic fibration $y^2=x^3+4t^4(t^7-1)$ the order 42 automorphism $\sigma_{42}$ is given by \[ \sigma_{42}: (x,y,t)\mapsto (\zeta_7^6\zeta_3 x, -\zeta_7^2y, \zeta_7t).\] The automorphism $\sigma_{42}$ acts on the fiber of type $IV^*$ as a reflection, moving two legs and leaving the third invariant. Thus, on the fiber of type $IV^*$, $\sigma_{42}$ fixes 4 isolated points. The 2 isolated points fixed by $\sigma_{21}$ on the cuspidal fiber over $t=\infty$ are fixed by $\sigma_{42}$ too. In particular, the invariants of $\Fix(\sigma_{42}^j),j=1,2,3,6,7,14,21$ are as in the first row of Table \ref{tab:42}. \end{example} \begin{example} The K3 surface is the same as in Example \ref{ex21-C2}, see \cite{Brandhorst}.
On the same elliptic fibration $y^2=x^3+t^3(t^7+1)$ the order 42 automorphism $\sigma_{42}$ is given by \[ \sigma_{42}: (x,y,t)\mapsto (\zeta_7^3\zeta_3 x, -\zeta_7y, \zeta_7^3t).\] One can check that the invariants of $\Fix(\sigma_{42}^j),j=1,2,3,6,7,14,21$ are as in the second row of Table \ref{tab:42}. \end{example} \begin{example} The K3 surface is the same as in Example \ref{ex21-B4}, see \cite{Brandhorst}. On the same elliptic fibration $y^2=x^3+t^5(t^7+1)$ the order 42 automorphism $\sigma_{42}$ is given by \[\sigma_{42}: (x,y,t)\mapsto (\zeta_{42}^2 x, \zeta_{42}^3y, \zeta_{42}^{18}t)\] On the fiber $II^*$ $\sigma_{42}$ fixes 8 isolated points and the central component $R$. It also fixes 1 point on the elliptic curve $E$ over $t=0$. Therefore, the invariants of $\Fix(\sigma_{42}^j),j=1,2,3,6,7,14,21$ are as in the third row of Table \ref{tab:42}. \end{example} \section{Not purely non-symplectic automorphisms} \label{sec-NP} As we observed in Section \ref{sec:background}, a not purely non-symplectic automorphism $f$ is such that its action on the period $\omega_X$ is given by multiplication by a non-primitive $n$-th root of unity (different from 1). As a consequence, at least one power of $f$ is symplectic. The following are well known results about symplectic automorphisms on K3 surfaces. First, by \cite{Nikulin}, a symplectic automorphism can only fix isolated points, and its order must be less than or equal to eight. Moreover, according to the possible orders: \begin{lemma} \label{lemma-symp}(see \cite[Prop 1.1]{GS1},\cite[Prop. 5.1]{GS2}, \cite{Nikulin}) Given a symplectic automorphism $g$ on a K3 surface, the number $N$ of isolated fixed points and the rank of the invariant lattice $S(g)$ are shown in the following table. 
\begin{center} \begin{tabular}{c|c|c!{\vrule width 1.5pt}c|c|c} ord(g)& $N$ &$\rk S(g)$&ord(g)& $N$ &$\rk S(g)$ \\ \noalign{\hrule height 1.5pt} 2& 8& 14 &6& 2& 6\\ 3& 6& 10 &7& 3& 4\\ 4& 4& 8 & 8& 2& 2\\ 5& 4& 6\\ \end{tabular} \end{center} \end{lemma} In this section we will provide a complete classification of not purely non-symplectic automorphisms of orders $14,21,28$ and $42$, according to which powers of the automorphisms are assumed to be symplectic. \subsection{Order 14}\label{sec-NP14} Let $\sigma_{14}$ be a non-symplectic automorphism of order 14 such that either $\sigma_7=\sigma_{14}^2$ or $\sigma_2=\sigma_{14}^7$ is symplectic. We will study the two cases separately. \subsubsection{$\sigma_7$ symplectic} When the square of $\sigma_{14}$ is symplectic we prove: \begin{proposition}\label{NPorder14} Let $\sigma_{14}$ be a non-symplectic automorphism of order 14 on a K3 surface $X$ such that $\sigma_7=\sigma_{14}^2$ is symplectic. Then $\Fix(\sigma_{14})$ consists of 3 isolated points and the possible values of $(d_{14},d_7,d_2,d_1)$ are $(2, 1, 2, 2), (3, 0, 3, 1)$. In the first case, $\Fix(\sigma_2)$ consists of a curve of genus 3, while in the second case, it consists of a curve of genus 10. Moreover, both possibilities occur. \end{proposition} \begin{proof} By Lemma \ref{lemma-symp}, the fixed locus of $\sigma_7$ consists of 3 isolated points and since $\Fix(\sigma_{14}) \subset\Fix (\sigma_7)$, it follows that the number $N_{14}$ of isolated points fixed by $\sigma_{14}$ is at most 3. Now, by Lemma \ref{lemma-symp}, we also know that the invariant lattice of $\sigma_7$ has rank 4. Therefore, by Remark \ref{d14}, we further know that $d_2+d_1=4$ and $d_{14}+d_7=3$. Moreover, by the topological Lefschetz formula \eqref{eq-Leftop} (applied to $\sigma_{14}$) we have \[ \chi_{14}=N_{14}=2+d_{14}-d_7-d_2+d_1.
\] Further observing that we must have $d_2>0$ and $d_1>0$, these give the following list of possibilities for $(d_{14},d_7,d_2,d_1;N_{14})$: \begin{align*} (0,3,1,3;1),\ (1, 2, 1, 3; 3),\ (1, 2, 2, 2; 1),\ (2, 1, 2, 2; 3),\ (2, 1, 3, 1; 1),\ (3, 0, 3, 1; 3) \end{align*} As in Section \ref{sec-poss}, using \eqref{eq-Leftop} and \cite{Nikulin-inv}, we can compute $\chi_2$ and the possible invariants $(g_2,k_2)$ of the fixed locus of $\sigma_2$. These are listed in Table \ref{tab:sigma^2symp} below. In particular, we observe that if $(d_{14},d_7,d_2,d_1)=(0,3,1,3)$, then we would have $\chi_2=22$, which is impossible by \cite{Nikulin-inv}. \begin{table}[H] \centering \begin{tabular}{c|cccc|c|c} $N_{14}$&$d_{14}$&$d_7$&$d_2$&$d_1$&$\chi_2$&$(g_2,k_2)$\\ \hline 1&0&3&1&3&22&-\\ 1&1&2&2&2&8&$(3,6), (2,5), (1,4), (0,3)$\\ 1&2&1&3&1&-6&$(6,2), (5,1), (4,0)$\\ 3&1&2&1&3& 10 &$(2,6), (1,5), (0,4)$\\ 3&2&1&2&2& -4&$(6,3),(5,2),(4,1), (3,0)$ \\ 3&3&0&3&1&-18&$(10,0)$\\ \end{tabular} \label{tab:sigma^2symp} \end{table} With computations similar to the ones of Section \ref{sec-exclude}, we can actually eliminate many of the other possibilities. In fact, we see that $\Fix(\sigma_{14})=\Fix(\sigma_7)$ must consist of 3 isolated points and that $\Fix(\sigma_2)$ consists of either a curve of genus 3 or a curve of genus 10. The existence of both cases is shown in the following examples. \end{proof} \begin{example} Let $f(x_0, x_1, x_2):=x^3_0x_1 + x^3_1x_2 + x^3_2x_0$ and consider the K3 surface \[X_f:=\{(x_0 : x_1 : x_2 : x_3) : x^4_3= f(x_0, x_1, x_2)\}\subset \mathbb P^3.\] This surface carries the order 14 automorphism $\sigma_{14} : (x_0 : x_1 : x_2 : x_3) \mapsto(\zeta^4_7x_0 : \zeta^2_7x_1 : \zeta_7x_2 : -x_3)$. We have $\Fix (\sigma_{14}) = \{ (1 : 0 : 0 : 0), (0 : 1 : 0 : 0), (0 : 0 : 1 : 0)\}$ and $\Fix (\sigma_{14}^7)$ is given by the curve $\{x_3 = 0\}$, which has genus three. Note that $\sigma_{14}^2$ is symplectic.
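The invariance of $X_f$ under $\sigma_{14}$ can be verified monomial by monomial: each monomial of $f$ must be scaled by $\zeta_7^0$, while $x_3^4$ is scaled by $(-1)^4=1$. A minimal sketch of this weight computation (illustrative code, not part of the original argument):

```python
# Weight check: sigma_14 scales (x0, x1, x2) by (zeta_7^4, zeta_7^2, zeta_7)
# and x3 by -1, so X_f = {x3^4 = f} is preserved iff every monomial of f has
# total zeta_7-weight 0 mod 7 (x3^4 is automatically invariant).
weights = (4, 2, 1)                            # zeta_7-exponents on (x0, x1, x2)
monomials = [(3, 1, 0), (0, 3, 1), (1, 0, 3)]  # x0^3 x1, x1^3 x2, x2^3 x0
for exps in monomials:
    assert sum(e * w for e, w in zip(exps, weights)) % 7 == 0
```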
\end{example} \begin{example} Let $X$ be the surface in $\mathbb P(3,1,1,1)$ given as the zero locus of $x^2+y^5z+z^5w+w^5y$. $X$ admits the action of the order 14 automorphism \[\sigma_{14}:(x,y,z,w)\mapsto (-x, \zeta_7^5y, \zeta_7^3z, \zeta_7^6w)\] whose square is symplectic and fixes the three points $\{(0,1,0,0),(0,0,1,0),(0,0,0,1)\}$. The fixed locus of $\sigma_2$ is the genus 10 curve $\{x=0\}\cap X$. \end{example} \subsubsection{$\sigma_2$ symplectic} When the involution $\sigma_{14}^7$ is symplectic, our result is the following: \begin{proposition}\label{prop_NP14-2} Let $\sigma_{14}$ be a non-symplectic automorphism of order 14 on a K3 surface $X$ and assume the involution $\sigma_2=\sigma_{14}^7$ is symplectic. Then $\Fix(\sigma_{14})$ consists of $N_{14}\leq 8$ isolated points and the possible values of $N_{14}$ and $(d_{14},d_7,d_2,d_1)$ are given in Table \ref{tab:sigma^7symp} below, together with $\chi_7$ in each case. \centering \begin{table}[H] \begin{tabular}{c|cccc|c} $N_{14}$&$d_{14}$&$d_7$&$d_2$&$d_1$&$\chi_7$\\ \hline 1&0&1&8&8&17\\ 8&1&1&2&8&10\\ 1&1&2&2&2&3\\ \end{tabular}\caption{} \label{tab:sigma^7symp} \end{table} \label{np14inv} \end{proposition} \begin{proof} By \cite{Nikulin}, the fixed locus of the symplectic involution $\sigma_2$ consists of 8 isolated points. Since $\Fix(\sigma_{14}) \subseteq \Fix (\sigma_2)$, it follows that $N_{14}\leq 8$. The invariant lattice of $\sigma_2$ has rank 14 by Lemma \ref{lemma-symp}, thus $6d_7+d_1=14$ and $6d_{14}+d_2=8$ by Remark \ref{d14}. Moreover, by the topological Lefschetz formula (applied to $\sigma_{14}$) we have \[ \chi(\Fix(\sigma_{14}))=N_{14}=2+d_{14}-d_{7}-d_2+d_1. \] Further observing that we must have $d_7>0$, $d_1>0$ and $N_{14}\geq 0$, this gives the above list of possibilities; that is, if the involution $\sigma_2$ is symplectic one has 3 possibilities for $(d_{14},d_7,d_2,d_1)$: \begin{align*} (0, 1, 8, 8), (1, 1, 2, 8), (1, 2, 2, 2).
\end{align*} \end{proof} We also prove the following two lemmas which complement Proposition \ref{prop_NP14-2} (and Table \ref{tab:sigma^7symp}): \begin{lemma} Let $\sigma_{14}$ be a non-symplectic automorphism of order 14 on a K3 surface and assume the involution $\sigma_2=\sigma_{14}^7$ is symplectic. Under the assumption that the Picard lattice agrees with $S(\sigma_7)$ we have that the order seven automorphism $\sigma_7=\sigma_{14}^2$ cannot be of type $\dagger$ (here we are referring to the notation in Table \ref{tab:7}). In particular, $\sigma_7$ must fix a curve. \label{caseX} \end{lemma} \begin{proof} Since the K3 surface admits a symplectic involution, the transcendental lattice $T_X$ must be primitively embedded in $E_8 \oplus U \oplus U \oplus U$. Now, by assumption, $T_X = T(\sigma_7)$. If $\sigma_7$ were of type $\dagger$, we would have $T(\sigma_7)=U(7)\oplus U \oplus E_8 \oplus A_6$, which has rank 18 and hence admits no such embedding. Therefore $\sigma_7$ cannot be of type $\dagger$. \end{proof} \begin{lemma} Let $\sigma_{14}$ be a non-symplectic automorphism of order 14 on a K3 surface $X$ satisfying that $\sigma_2=\sigma_{14}^7$ is symplectic and $\sigma_7=\sigma_{14}^2$ fixes a curve. Then $X$ also admits a purely non-symplectic automorphism $\tau$ of order $14$ which, moreover, can only be of type $A1(3,2),C1(0,2)$ or $D3(3,2)$. \label{sigma7symp} \end{lemma} \begin{proof} The existence of $\tau$ follows from \cite[Thm. 1.4]{GS}. Now, $\tau^7$ is a non-symplectic involution. Thus, $X$ admits both a symplectic and a non-symplectic involution. By \cite[Thm. 0.1]{GS}, we have that the Nikulin invariants $(r,a,\delta)$ of $\tau^7$ must satisfy either $a>16-r$, or $\delta=0$, $a=6$ and $r=10$. As a consequence, we then obtain that $\tau$ has to be of type $A1(3,2),C1(0,2)$ or $D3(3,2)$. \end{proof} Moreover, we construct examples realizing each of the possibilities in Table \ref{tab:sigma^7symp}.
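Before turning to the examples, we note that the case list of Table \ref{tab:sigma^7symp} can be recovered by a short brute-force search. The sketch below (illustrative Python, not the computation used in the proof) imposes only the two rank relations, the positivity of $d_7$ and $d_1$, and the nonnegativity of the point count $N_{14}$:

```python
# Brute-force recovery of the case list (a sketch, not the authors' method).
# With sigma_2 symplectic, rk S(sigma_2) = 14 forces 6*d7 + d1 = 14 and
# 6*d14 + d2 = 8; the topological Lefschetz formula gives
# N14 = 2 + d14 - d7 - d2 + d1, and chi_7 = 2 - d14 - d7 + d2 + d1 for
# sigma_7 = sigma_14^2.
solutions = []
for d14 in range(0, 2):           # 6*d14 <= 8
    d2 = 8 - 6 * d14
    for d7 in range(1, 3):        # d7 > 0 and 6*d7 <= 14
        d1 = 14 - 6 * d7
        if d1 < 1:                # d1 > 0
            continue
        n14 = 2 + d14 - d7 - d2 + d1
        if n14 < 0:               # a count of fixed points cannot be negative
            continue
        chi7 = 2 - d14 - d7 + d2 + d1
        solutions.append(((d14, d7, d2, d1), n14, chi7))

# exactly the three rows of the table survive
assert solutions == [((0, 1, 8, 8), 1, 17),
                     ((1, 1, 2, 8), 8, 10),
                     ((1, 2, 2, 2), 1, 3)]
```

In particular, the only vector discarded by the sign condition is $(0,2,8,2)$, for which $N_{14}=-6$.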
\begin{example}{\bf{Case D3(3,2)}} Let $X$ be a general K3 surface carrying a purely non-symplectic automorphism $\sigma$ of order 14 of type D3(3,2). Then $NS(X)=S(\sigma_7)=U\oplus E_8 \oplus A_6$ (see Section \ref{sec-NS}) and, by \cite[Theorem 1.12.4]{Nikulin-lattices}, the transcendental lattice $NS(X)^{\perp}$ can be primitively embedded in $E_8 \oplus U \oplus U \oplus U$. Thus, $X$ admits a symplectic involution $\tau$ and, since $\sigma_7$ acts as the identity on $NS(X)$, it must be the case that $\sigma_7$ and $\tau$ commute \cite[Proposition 2.1]{GS}. Therefore, $\alpha \coloneqq \sigma_7 \circ \tau$ is a non-symplectic automorphism of order $14$ such that $\alpha^7=\tau$ is symplectic. Note that, alternatively, the existence of $\tau$ is also given by \cite[Theorem 0.1]{GS}. \end{example} \begin{example} {\bf{Case C1(0,2)}:} Let us consider the elliptic K3 surface $X$ with Weierstrass equation given by \[y^2=x^3+t^2x+t^{10}.\] The automorphism $\sigma_{14}\colon(x,y,t)\mapsto (\zeta_{7}x,-\zeta^5_{7}y,-\zeta_{7}t)$ is a non-symplectic automorphism of order 14 and $\sigma_2$ is symplectic. Note that $\Fix(\sigma_{14})$ consists of 8 points. \end{example} \begin{example} {\bf{Case A1(3,2)}} Let $X$ be the elliptic K3 surface given by an equation of the form \[ y^2=x(x^2+a(t)x+b(t)) \] where $a\equiv 0$ and $b(t)=t^7+1$. Then $X$ admits the order 14 purely non-symplectic automorphism \[ \sigma: (x,y,t)\mapsto(x,-y,\zeta_7t) \] This is case A1 with $(g_2,k_2)=(3,2)$ in our classification. Composing $\sigma^2=\sigma_7: (x,y,t)\mapsto (x,y,\zeta_7^2t)$ and the translation \[ \tau: (x,y,t)\mapsto ((y/x)^2-x,(y/x)^3-y,t) \] by the 2-torsion section produces an automorphism of order 14, say $\varphi$, which is not purely non-symplectic. By construction, $\varphi^7=(\sigma_7 \circ \tau)^7=\sigma_7^7\circ \tau^7=\tau^7=\tau$ (note that $\sigma_7 \circ \tau = \tau \circ \sigma_7$) and $\tau$ is symplectic.
The invariants of $\varphi$ are as in the third row of Table \ref{tab:sigma^7symp}. \end{example} \subsection{Order 21} \label{sec-NP21} Let $\sigma_{21}$ be a non-symplectic automorphism of order 21 such that either $\sigma_7=\sigma_{21}^3$ or $\sigma_3=\sigma_{21}^7$ is symplectic. Again, we will study the two cases separately. \subsubsection{$\sigma_7$ symplectic} \begin{proposition} If $\sigma_{21}$ is a non-symplectic automorphism of order 21 on a K3 surface $X$, then $\sigma_7$ cannot be symplectic. \end{proposition} \begin{proof} By contradiction, assume $\sigma_7$ is symplectic. Then, by Lemma \ref{lemma-symp}, the fixed locus of $\sigma_7$ consists of 3 isolated points. Since $\Fix(\sigma_{21}) \subseteq\Fix (\sigma_7)$, the automorphism $\sigma_{21}$ acts with order three on $\Fix (\sigma_7)$, so $\Fix(\sigma_{21})$ consists of $N_{21}=0$ or $N_{21}=3$ isolated fixed points. Now, because the invariant lattice of $\sigma_7$ has rank 4 (Lemma \ref{lemma-symp}), we also know $d_3+d_1=4$ and $d_{21}+d_7=3$. Moreover, by the topological Lefschetz formula \eqref{eq-Leftop} (applied to $\sigma_{21}, \sigma_7$ and $\sigma_3$) we have \[\begin{cases} \chi_{21}&=N_{21}=2+d_{21}-d_7-d_3+d_1\\ \chi_7&= 2-(2d_{21}+d_7)+2d_3+d_1 = 3\\ \chi_3&= 2-6(d_{21}-d_7)-d_3+d_1 \end{cases}\] By Remark \ref{d21} and further observing that $d_3,d_1>0$, these give the possibilities for $(d_{21},d_7,d_3,d_1)$ and $\chi_3$ shown below. \centering \begin{table}[H] \begin{tabular}{c|c|c} $N_{21}$ & $(d_{21},d_7,d_3,d_1)$ & $\chi_3$ \\ \hline 3 & (2,1,2,2) & -4 \\ 3 & (1,2,1,3) & 10 \\ 3 & (3,0,3,1) & -18 \\ \end{tabular} \end{table} \label{tab21_not_purely} But since $\chi_3=N_3+2(1-g_3+k_3)=N_3+2(N_3-3)=3N_3-6$, we can eliminate all cases. \end{proof} \subsubsection{$\sigma_3$ symplectic} Similarly, we can prove: \begin{proposition}\label{prop21NP} Let $\sigma_{21}$ be a non-symplectic automorphism of order 21 on a K3 surface $X$ and assume $\sigma_3$ is symplectic.
Then $\Fix(\sigma_{21})$ consists of exactly $N_{21}=6$ isolated points and the only possible values for $(d_{21},d_7,d_3,d_1)$ are $(1, 1, 0, 4)$. Moreover, $\Fix(\sigma_7)$ is as in case A of Table \ref{tab:7} and such an automorphism exists (see Example \ref{egNP21}).\end{proposition} \begin{proof} By Lemma \ref{lemma-symp}, the fixed locus of $\sigma_3$ consists of 6 isolated points. Since $\Fix(\sigma_{21}) \subseteq\Fix (\sigma_3)$, it follows that $\Fix(\sigma_{21})$ consists of $N_{21}\leq 6$ isolated fixed points. The invariant lattice of $\sigma_3$ has rank 10 by Lemma \ref{lemma-symp}, so we also know $6d_7+d_1=10$ and $6d_{21}+d_3=6$. Further observing that $d_7,d_1>0$, the topological Lefschetz formula \eqref{eq-Leftop} (applied to $\sigma_{21}, \sigma_7$ and $\sigma_3$) gives $(d_{21},d_7,d_3,d_1)=(1,1,0,4)$, $N_{21}=6$ and $\chi_7=3$. Note that since $\Fix(\sigma_{21})= \{6\ pts\} \subseteq \Fix (\sigma_7)$, the above implies that $\sigma_7$ is of type $A$. That is, $\Fix (\sigma_7)=E\,\cup \text{3 pts}$ and we must have three fixed points under $\sigma_{21}$ lying on $E$. \end{proof} \begin{example} In $\mathbb P(3,2,1,1)$ we consider the surface $$x^2w+xy^2+yw^5+z^7=0$$ with the order 21 automorphism \[\sigma_{21}:(x,y,z,w)\mapsto (\zeta_3x, \zeta_3y, \zeta_7z, \zeta_3w)\] The order 7 automorphism $\sigma_7$ is non-symplectic and fixes the genus 1 curve $\{z=0\}$ and 3 more points on the resolutions of the singularities $(1:0:0:0)$ and $(0:1:0:0)$, of type $A_2$ and $A_1$ respectively. The automorphism $\sigma_{21}^7=\sigma_3$ is symplectic. \label{egNP21} \end{example} \subsection{Order 28} \label{sec-NP28} Let $\sigma_{28}$ be a non-symplectic automorphism of order 28. We will prove in what follows that no power of $\sigma_{28}$ can be symplectic. \begin{proposition} If $\sigma_{28}$ is a non-symplectic automorphism of order 28, then $\sigma_{28}$ is purely non-symplectic. In other words, no power of $\sigma_{28}$ is symplectic.
\end{proposition} \begin{proof} Assume, by contradiction, that some power of $\sigma_{28}$ is symplectic. Since there are no symplectic automorphisms of (finite) order bigger than 8 by \cite{Nikulin}, we have to consider three possibilities: \begin{enumerate}[{Case} I] \item $\sigma_7=\sigma_{28}^4$ is a symplectic automorphism of order 7; or \item $\sigma_4=\sigma_{28}^7$ is a symplectic automorphism of order 4; or \item $\sigma_2=\sigma_{28}^{14}$ is a symplectic involution. \end{enumerate} Observe that if $\sigma_4$ is symplectic, then so is $\sigma_2=\sigma_4^2$; hence in Case III we may assume that $\sigma_4$ is not symplectic. We refer to Section \ref{sec-order28} for the definition of $(d_{28},d_{14},d_7,d_4,d_2,d_1)$ and recall the relations given in Remark \ref{d28}: \[\rk S(\sigma_7)=2d_4+d_2+d_1,\quad \rk S(\sigma_4)=6d_7+d_1,\quad \rk S(\sigma_2)=6(d_{14}+d_7)+d_2+d_1. \] We now study the three cases separately. \begin{enumerate} \item[{\bf{Case I}}] {If $\sigma_7$ is a symplectic automorphism of order 7, the action of $\sigma_7^*$ on the period $\omega_X$ of $X$ is trivial. Therefore $\sigma_{28}^*\omega_X=\zeta_4\omega_X$, which implies that $d_4=\dim H^2(X,\R)_{\zeta_4}\geq 1$. Then $(\sigma_{14})^*\omega_X=\pm\omega_X$ and, since there are no symplectic automorphisms of finite order bigger than 8 by \cite{Nikulin}, we must have $(\sigma_{14})^*\omega_X=-\omega_X$. By Lemma \ref{lemma-symp}, the fixed locus of a symplectic automorphism of order 7 consists of 3 isolated points. Since $\Fix(\sigma_{28})\subseteq\Fix(\sigma_7)$, the automorphism $\sigma_{28}$ only fixes isolated points and their number is $N_{28}\leq 3$. Moreover, $\sigma_{28}$ acts with order 1, 2 or 4 on $\Fix(\sigma_7)$, hence $N_{28}=1$ or 3.
By Lemma \ref{lemma-symp}, $\rk S(\sigma_7)=4$; it follows by the above formulas and \eqref{top28} that \[\begin{cases} 2d_4+d_2+d_1=4&\\ 6(2d_{28}+d_{14}+d_7)=18&\\ \chi_{28}=N_{28}=2+d_{14}-d_7-d_2+d_1\\ \end{cases}\] This gives the following list of possibilities for $(d_{28},d_{14},d_7,d_4,d_2,d_1)$: \begin{align} ( 1, 0, 1, 1, 1, 1 ), ( 0, 1, 2, 1, 1, 1 ), ( 0, 0, 3, 1, 0, 2 ), ( 1, 1, 0, 1, 1, 1 ),\nonumber \\ ( 0, 2, 1, 1, 1, 1 ), ( 1, 0, 1, 1, 0, 2 ), ( 0, 1, 2, 1, 0, 2 ) \label{vec}.\end{align} Moreover, observe that $\sigma_{14}$ is non-symplectic, $\sigma_{7}$ is symplectic and $\sigma_7=(\sigma_{14})^2$. Thus we can use the classification of not purely non-symplectic automorphisms of order 14 given in Proposition \ref{NPorder14}. In this case, the possible values of $(a',b',c',d')=(d_{14},d_7,d_2,d_1)$ are $(2,1,2,2),\ (3,0,3,1)$ and the relations with $(d_{28},d_{14},d_7,d_4,d_2,d_1)$ are \[d_{28}=a',\ d_{14}+d_7=b',\ c'=d_4,\ d'=d_2+d_1. \] The vectors in \eqref{vec} do not satisfy the above conditions, thus it is not possible for $\sigma_7$ to be symplectic.} \item[{\bf{Case II}}] {Assume $\sigma_{4}$ is symplectic. Since $\sigma_4^*\omega_X=\omega_X$, we have $\sigma_{28}^*\omega_X=\zeta_7\omega_X$, which implies $d_7\geq 1$. By Lemma \ref{lemma-symp}, the fixed locus of $\sigma_4$ consists of 4 isolated points. Since $\Fix(\sigma_{28}) \subseteq \Fix (\sigma_4)$, it follows that $\sigma_{28}$ only fixes $N_{28}$ isolated points with $N_{28}\leq 4$. Moreover, $\sigma_{28}$ acts with order 1 or 7 on $\Fix (\sigma_4)$, hence $N_{28} = 4$. By Lemma \ref{lemma-symp}, $\rk S(\sigma_4)=8$, thus it follows from the above formulas and \eqref{top28} that \[\begin{cases} 6d_7+d_1=8&\\ 12d_{28}+6d_{14}+2d_4+d_2=14&\\ \chi_{28}=N_{28}=2+d_{14}-d_7-d_2+d_1=4\\ \end{cases}\] The only solution is $(d_{28},d_{14},d_7,d_4,d_2,d_1)=(0,1,1,4,0,2)$. Observe that in this case $\sigma_{28}^2=\sigma_{14}$ is non-symplectic.
By \eqref{top28}, we can compute $\chi_{14}=-6$ and this is impossible since by \cite{ArtebaniSartiTaki}, the Euler characteristic of the fixed locus of a non-symplectic automorphism of order 14 is bigger than 0. Thus there are no possibilities for $(d_{28},d_{14},d_7,d_4,d_2,d_1)$ and hence $\sigma_4$ cannot be symplectic.} \item[{\bf{Case III}}] {We now show that there is no K3 surface with a non-symplectic automorphism $\sigma_{28}$ such that $\sigma_4=\sigma_{28}^7$ is non-symplectic and $\sigma_{2}=\sigma_{28}^{14}$ is symplectic. Assume the involution $\sigma_2$ is symplectic and $\sigma_4$ is non-symplectic. Thus \[\sigma_2^*\omega_X=\omega_X,\quad \sigma_{28}^*\omega_X=\zeta_{14}^i\omega_X,\quad \sigma_4^*\omega_X\neq\omega_X.\] Thus we are interested in odd $i$, so that $\sigma_4^*\omega_X=-\omega_X$. In particular $d_{14}\geq 1$. By Lemma \ref{lemma-symp}, the fixed locus of $\sigma_2$ consists of 8 isolated points and since $\Fix(\sigma_{28}) \subseteq \Fix (\sigma_2)$, it follows that $\sigma_{28}$ only fixes $N_{28}$ isolated points with $N_{28}\leq 8$. Moreover, $\sigma_{28}$ acts on $\Fix(\sigma_2)$ with order 1, 2, 7 or 14; it follows that either $N_{28}$ is even or $N_{28}=1$. By Lemma \ref{lemma-symp}, $\rk S(\sigma_2)=14$, thus it follows from the above formulas and \eqref{top28} that \[\begin{cases} 6d_{14}+6d_7+d_2+d_1=14&\\ 12d_{28}+2d_4=8&\\ \chi_{28}=N_{28}=2+d_{14}-d_{7}-d_2+d_1\\ \end{cases}\] This gives the following list of possibilities for $(d_{28},d_{14},d_7,d_4,d_2,d_1)$: \begin{equation}\label{vec1}( 0, 2, 0, 4, 1, 1 ), ( 0, 1, 1, 4, 1, 1 ), ( 0, 2, 0, 4, 0, 2 ), ( 0, 1, 1, 4, 0, 2 ), ( 0, 1, 0, 4, 5, 3 ). \end{equation} Moreover, observe that $\sigma_{14}$ is non-symplectic, $\sigma_{2}$ is symplectic and $\sigma_2=(\sigma_{14})^7$. Thus we can use the classification of not purely non-symplectic automorphisms of order 14 given in Proposition \ref{NPorder14}.
In Proposition \ref{prop_NP14-2} we found the 3 possible vectors $(a',b',c',d')=(d_{14},d_7,d_2,d_1)$: \[(0, 1, 8, 8),\ (1, 1, 2, 8),\ (1, 2, 2, 2)\] and the relations with $(d_{28},d_{14},d_7,d_4,d_2,d_1)$ are, as before, \[d_{28}=a', d_{14}+d_7=b', c'=d_4, d'=d_2+d_1. \] The vectors in \eqref{vec1} do not satisfy the above conditions, thus it is not possible for $\sigma_2$ to be symplectic. Therefore we have proved that a non-symplectic automorphism of order 28 is necessarily purely non-symplectic.} \end{enumerate} \end{proof} \subsection{Order 42} Let $\sigma_{42}$ be a non-symplectic automorphism of order 42. We will prove: \begin{proposition} If $\sigma_{42}$ is a non-symplectic automorphism of order 42, then $\sigma_{42}$ is purely non-symplectic. In other words, no power of $\sigma_{42}$ is symplectic. \label{NP42} \end{proposition} To prove Proposition \ref{NP42} we will argue by contradiction. We will assume some power of $\sigma_{42}$ is symplectic. Since there are no symplectic automorphisms of (finite) order bigger than 8 by \cite{Nikulin}, we will consider four possibilities: For each $k=6,7,14,21$ we will assume $\sigma_{\frac{42}k}=\sigma_{42}^k$ is a symplectic automorphism of order $\frac{42}k$. Note that the action on $\omega_X$ will then be given by multiplication by $\zeta_{42}^{\frac{42}k}=\zeta_k$, i.e. \[\sigma_{42}^*\omega_X=\zeta_k\omega_X,\ k=6,7,14,21. \] In particular, for each pair $(k,\frac{42}k)$, if we look at the local action of $\sigma_{42}$ around a fixed point, we can thus choose local coordinates such that $\sigma_{42}$ is given by multiplication by a matrix of the form \[ B^{k}_{i}=\begin{pmatrix} \zeta_{42}^{i} & \\ & \zeta_{42}^{j} \end{pmatrix} \] where $i+j \equiv \frac{42}k \mod 42$, and we have that $i,j \not\equiv 0 \mod d$ for every divisor $d>1$ of $\frac{42}k$. As usual, we will say a fixed point under $\sigma_{42}$ is of type $B^{k}_{i}$ if $\sigma_{42}$ acts (locally) on the point as multiplication by $B^{k}_{i}$.
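As a sanity check, the admissible local exponents can be enumerated mechanically. The following sketch is our own illustration (the function name and the normalization of each unordered pair $\{i,j\}$ by its smaller member are ours; the divisibility condition is read as ranging over the divisors $d>1$ of $\frac{42}{k}$):

```python
# Enumerate the types B^k_i of isolated fixed points of an order-42
# automorphism whose k-th power is symplectic. Conventions assumed from
# the text: local eigenvalues (zeta_42^i, zeta_42^j) with
# i + j = 42/k (mod 42), neither exponent divisible by a divisor d > 1
# of 42/k, and each unordered pair {i, j} recorded by its smaller member.
def admissible_exponents(k, n=42):
    m = n // k  # determinant condition: i + j = m (mod n)
    reps = set()
    for i in range(1, n):
        j = (m - i) % n
        if j == 0:
            continue
        if any(i % d == 0 or j % d == 0 for d in range(2, m + 1) if m % d == 0):
            continue
        reps.add(min(i, j))
    return sorted(reps)

# For k = 6 this reproduces the 18 exponents used in the proof below.
print(admissible_exponents(6))
```

For $k=7$ the same enumeration returns the seven exponents $1,7,11,13,17,19,23$ used in the proof.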
Moreover, we will denote by $M^{k}_{i}$ the number of isolated fixed points (under $\sigma_{42}$) of type $B^{k}_{i}$. \begin{proof}[Proof of Proposition \ref{NP42}] \leavevmode \begin{enumerate} \item[{\boldmath{$k=6$}}] {Assume $\sigma_{42}^{6}=\sigma_7$ is symplectic. Observe that in this case the possible $i$ are \[ i=1,2,3,8,\ldots, 13, 15,\ldots, 20, 22,23,24. \] By Lemma \ref{lemma-symp}, $\Fix(\sigma_7)$ consists of $3$ points and thus $\sum M_{i}^6\leq 3$. The holomorphic Lefschetz formula \eqref{eq-Lefhol} gives two possible solutions for $(M_1^6,\ldots, M_{24}^6)$: \begin{align*} M=(0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0),\\ M'=(0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0) \end{align*} Since $\sigma_6$ is purely non-symplectic, we can use the classification given in \cite{Dillies}. As in Section \ref{sec:background}, let $m_{i,6}$ denote the number of isolated fixed points under $\sigma_6$ where the local action of $\sigma_6$ can be linearized by the matrix $ \begin{pmatrix} \zeta_{6}^{i+1} & 0\\ 0& \zeta_{6}^{6-i} \end{pmatrix}$. The following inequalities hold \[\begin{cases} M^6_{2} + M^6_{8} + M^6_{11} + M^6_{17}+M^6_{20}+M^6_{23} &\leq m_{1,6} \\ M^6_{3}+M^6_{9} + M^6_{10} + M^6_{15} + M^6_{16} +M^6_{22} &\leq m_{2,6} \end{cases}\] and we also know that $M^6_{1} + M^6_{12} +M^6_{13}+ M^6_{18}+M^6_{19}+M^6_{24}$ is the number of points that lie in a curve fixed by $\sigma_6$. Moreover, $\sigma_{42}$ acts with order $1$ or $7$ on $\Fix(\sigma_6)$. Therefore, we conclude that the two solutions given by the vectors $M,M'$ are actually inconsistent with the classification from \cite[Theorem 4.1]{Dillies}. That is, we have proved that $\sigma_7=\sigma_{42}^6$ cannot be symplectic.} \item[{\boldmath{$k=7$}}]{Assume $\sigma_{42}^{7}=\sigma_6$ is symplectic. Observe that in this case the possible $i$ are $i=1,7,11,13,17,19,23$.
According to Lemma \ref{lemma-symp}, $\Fix(\sigma_6)$ consists of two points and the holomorphic Lefschetz formula \eqref{eq-Lefhol} gives no possible solutions for $(M^7_{1},M^7_{7},M^7_{11},M^7_{13},M^7_{17},M^7_{19},M^7_{23})$. Therefore, the conclusion is that $\sigma_{42}^7=\sigma_6$ cannot be symplectic.} \item[{\boldmath{$k=14$}}]{Assume $\sigma_{42}^{14}=\sigma_3$ is symplectic. Observe that in this case the possible $i$ are \[ i=1,4,5,7,8,10,11,13,14,16,17,19,20,22. \] Moreover, by Lemma \ref{lemma-symp}, the fixed locus $\Fix(\sigma_3)$ consists of six points. Thus, $\sum M^{14}_{i}\leq 6$, and since $\sigma_{42}^6=\sigma_7$ is purely non-symplectic, we can use the classification given in \cite{ArtebaniSartiTaki}. As in Section \ref{sec:background}, let $m_{i,7}$ denote the number of isolated points under $\sigma_7$ where the local action is given by the matrix $ \begin{pmatrix} \zeta_{7}^{i+1} & 0\\ 0& \zeta_{7}^{6-i} \end{pmatrix}$. The following inequalities hold \[\begin{cases} M^{14}_{4} + M^{14}_{11} + M^{14}_{13} + M^{14}_{20} & \leq m_{1,7}\\ M^{14}_{1} + M^{14}_{8} + M^{14}_{16} + M^{14}_{22} &\leq m_{2,7}\\ M^{14}_{5} + M^{14}_{19} &\leq m_{3,7} \end{cases}\] The number of points that lie in a curve fixed by $\sigma_7$ is $n\doteq M^{14}_{7} + M^{14}_{10} +M^{14}_{14}+ M^{14}_{17}$. Using these, we can prove that $\sigma_{42}^{14}=\sigma_3$ cannot be symplectic by studying each possibility for $\sigma_7$ separately. If $\sigma_7$ is of type $A$ as in Table \ref{tab:7}, then $\Fix(\sigma_7)=E\cup \{p_1,p_2,p_3\}$, and since $\sigma_{42}$ acts with order $2,3$ or $6$ on $\Fix(\sigma_7)$, it follows that we must have $n \in \{0,3,4\}$. Imposing this restriction on the equations and inequalities above and on the holomorphic Lefschetz formula \eqref{eq-Lefhol} gives no possible solution for the $M_i^{14}$'s. The same type of argument also works in the cases where $\sigma_7$ is of type $X$, $B$, $C$ or $D$ of Table \ref{tab:7}.
} \item[{\boldmath{$k=21$}}]{Finally, we assume $\sigma_{42}^{21}=\sigma_2$ is symplectic. Then $\sum M^{21}_{i}\leq8$ because $\Fix(\sigma_2)$ consists of $8$ points. Under this constraint, the holomorphic Lefschetz formula \eqref{eq-Lefhol} gives no possible solutions for the $M^{21}_{i}$'s, and therefore $\sigma_{42}^{21}=\sigma_2$ cannot be symplectic. } \end{enumerate} This completes the proof of Proposition \ref{NP42}. \end{proof} \begin{remark} We observe that the same approach used in this section can also be used to recover the results of Sections \ref{sec-NP14}, \ref{sec-NP21} and \ref{sec-NP28}. \end{remark} \section{The N\'eron-Severi lattice}\label{sec-NS} We conclude with a description of the N\'eron-Severi lattice of a K3 surface $X$ admitting a purely non-symplectic automorphism $\sigma=\sigma_n$ of order $n=14,21,28$ or $42$. Under the assumption of generality we have: \begin{equation}\label{eq-NS}r \doteq \rk NS(X)=22-d_{n}\cdot\varphi(n)\end{equation} and using the results obtained in the previous sections we are able to describe $NS(X)$ in every case. Since the invariant lattices $S(\sigma^i)$ are all primitively embedded in $NS(X)$ by \cite[Section 3]{Nikulin}, if we can find one power $i$ such that the corresponding invariant lattice has the expected rank $r$, then we can conclude that $S(\sigma^i)=NS(X)$. We call a pair $(X,\sigma_n)$ satisfying \eqref{eq-NS} a general pair, and we use the classification of prime orders \cite{ArtebaniSartiTaki}, \cite{Nikulin-inv} to describe the lattices $NS(X)$ explicitly. Our results are presented in Propositions \ref{thmNS14} and \ref{thmNS} below: \begin{proposition} Let $(X,\sigma_{14})$ be a general pair. For each possibility listed in Table \ref{tab:ex}, with the exception of case $C1(0,2)$ (see Remark \ref{c1(0,2)}), the N\'eron-Severi lattice $NS(X)$ is as in Table \ref{NS14} below.
\begin{table}[H]\centering \begin{tabular}{c!{\vrule width 1.5pt}c|c|c|c|c} &$\chi_{14}$&$\chi_7$&$\chi_2$&$(d_{14},d_7,d_2,d_1)$&$NS(X)$\\ \noalign{\hrule height 1.5pt} A1&7&3&-14&(3,0,1,3)&$S(\sigma_7)=U\oplus K_7$\\ A1&7&3&0&(2,1,0,4)&$S(\sigma_2)$\\ \hline A2&5&3&-16&(3,0,2,2) &$S(\sigma_7)=U\oplus K_7$\\ \noalign{\hrule height 1.5pt} B3&14&10&0&(2,0,0,10)&$S(\sigma_7)=U\oplus E_8$\\ \noalign{\hrule height 1.5pt} C1&6&10&-8&(2,0,4,6)&$S(\sigma_7)=U(7)\oplus E_8$\\ \hline C2&4&10&-10&(2,0,5,5)&$S(\sigma_7)=U(7)\oplus E_8$\\ \hline C3&8&10&-6&(2,0,3,7)&$S(\sigma_7)=U(7)\oplus E_8$\\ \noalign{\hrule height 1.5pt} D2&3&17&-4&(1,0,8,8)&$S(\sigma_7)=U\oplus E_8\oplus A_6$\\\hline D3&7&17&0&(1,0,6,10)&$S(\sigma_7)=U\oplus E_8\oplus A_6$\\\hline D8&13&17&6&(1,0,3,13)&$S(\sigma_7)=U\oplus E_8\oplus A_6$\\ \end{tabular} \caption{$NS(X)$ for a general pair $(X,\sigma_{14})$.} \label{NS14} \end{table} \label{thmNS14} \end{proposition} \begin{proof} For $n=14$, one has $\varphi(14)=6$ and by Remark \ref{d14}, $\rk S(\sigma_7)=d_2+d_1$ and $\rk S(\sigma_2)=6d_7+d_1$. By \eqref{eq-NS} one has \begin{itemize} \item if $d_{14}=3$, $\rk NS(X)=4$; \item if $d_{14}=2$, $\rk NS(X)=10$; \item if $d_{14}=1$, $\rk NS(X)=16$. \end{itemize} Comparing these ranks with the entries of the table, we conclude that the N\'eron-Severi lattice $NS(X)$ is as in Table \ref{NS14}. \end{proof} \begin{remark} For a general pair $(X,\sigma_{14})$ such that $\Fix(\sigma_{14})$ is of type $C1(0,2)$, none of the invariant lattices $S(\sigma_{14}^i)$ have the expected rank. Thus we are not able to compute the N\'eron-Severi lattice of the general K3 surface in this case. \label{c1(0,2)} \end{remark} When $n=21,28$ or $42$, we have that $\varphi(21)=\varphi(28)=\varphi(42)=12$ and for all cases $d_n=1$, thus $\rk NS(X)=22-12=10$.
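The consistency of Table \ref{NS14} can be checked mechanically. The sketch below is our own verification aid (row labels copied from the table; the rank relations $\rk S(\sigma_7)=d_2+d_1$ and $\rk S(\sigma_2)=6d_7+d_1$ are those recalled in the proof):

```python
# For each row of the order-14 table, the invariant lattice named in the
# last column must have rank equal to the expected rk NS(X) = 22 - 6*d_14,
# and the eigenspace dimensions must fill up H^2(X, R) (total 22).
rows = [  # (label, (d14, d7, d2, d1), lattice used for NS(X))
    ("A1",  (3, 0, 1, 3),  "S7"), ("A1'", (2, 1, 0, 4),  "S2"),
    ("A2",  (3, 0, 2, 2),  "S7"), ("B3",  (2, 0, 0, 10), "S7"),
    ("C1",  (2, 0, 4, 6),  "S7"), ("C2",  (2, 0, 5, 5),  "S7"),
    ("C3",  (2, 0, 3, 7),  "S7"), ("D2",  (1, 0, 8, 8),  "S7"),
    ("D3",  (1, 0, 6, 10), "S7"), ("D8",  (1, 0, 3, 13), "S7"),
]
for label, (d14, d7, d2, d1), lat in rows:
    expected = 22 - 6 * d14                      # rk NS(X) = 22 - phi(14)*d14
    rank = d2 + d1 if lat == "S7" else 6 * d7 + d1
    assert rank == expected, label
    assert 6 * d14 + 6 * d7 + d2 + d1 == 22, label  # dims fill H^2(X, R)
print("all rows of the order-14 table are consistent")
```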
We prove: \begin{proposition} If $n=21,28$ or $42$, the description of the lattice $NS(X)$ for a general pair $(X,\sigma_n)$ is as follows: \begin{enumerate}[(i)] \item{If $n=21$, the possibilities are shown in the following table: \vspace{\baselineskip} \begin{center} \begin{tabular} {c|c|c|c|c|c} Type $\sigma_{21}$ & $\chi_{21}$ & $\chi_7$ & $\chi_3$ & $(d_{21},d_7,d_3,d_1)$&$NS(X)$ \\ \noalign{\hrule height 1.5pt} C(3,2,3)& 10 & 10 & 3 & (1,0,1,8)&$S(\sigma_7)=U(7)\oplus E_8$ \\ \hline C(3,1,2)& 7 & 10 & 0 & (1,0,2,6)&$S(\sigma_7)=U(7)\oplus E_8$\\ \hline C(3,0,1)& 4 & 10 & -3 & (1,0,3,4) &$S(\sigma_7)=U(7)\oplus E_8$\\ \hline B(3,3,4)&13&10 & 6&(1,0,0,10)&$S(\sigma_7)=U\oplus E_8$ \end{tabular} \end{center} \vspace{\baselineskip}} \item{Similarly, if $n=28$ we have the following table of possibilities: \vspace{\baselineskip} \begin{center} \begin{tabular} {c|c|c|c|c|c|c|c} Type $\sigma_{14}$ & $\chi_2$ & $\chi_4$ & $\chi_7$ & $\chi_{14}$ & $\chi_{28}$ & $(d_{28},d_{14},d_7,d_4,d_2,d_1)$ &$NS(X)$ \\ \noalign{\hrule height 1.5pt} A1(3,2) & 0 & 12 & 3 & 7 & 5 & (1,0,1,0,0,4)&$S(\sigma_2)$\\ \hline A1(3,2) & 0 & -4 & 3 & 7 & 3 & (1,1,0,0,2,2)&$S(\sigma_2)$\\ \hline B3(6,5) & 0 & 12 & 10 & 14 & 12 & (1,0,0,0,0,10)&$S(\sigma_2)$ \end{tabular} \end{center}\vspace{\baselineskip}} \item{And if $n=42$ we have: \vspace{\baselineskip} \begin{center}\begin{tabular} {c|c|c|c|c|c} Type $\sigma_{14}$ & $\chi_{21}$ & $\chi_7$ & $\chi_3$ & $(d_{42},d_{21}, d_{14},d_7,d_6,d_3,d_2, d_1)$&$NS(X)$\\ \noalign{\hrule height 1.5pt} C1 & 10 & 10 & 3 & (1,0,0,0,1,0,2,6)&$S(\sigma_7)=U(7)\oplus E_8$ \\ \hline C3 & 7 & 10 & 0 & (1,0,0,0,1,1,1,5) & $S(\sigma_7)=U(7)\oplus E_8$ \\ \hline B3 & 13 & 10 & 6& (1,0,0,0,0,0,0,10) &$S(\sigma_7)=U\oplus E_8$ \end{tabular} \end{center}\vspace{\baselineskip}} \end{enumerate} \label{thmNS} \end{proposition} \begin{proof} It follows from Remarks \ref{d21}, \ref{d28} and \ref{d42}. 
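The entries of the three tables can also be verified mechanically. The sketch below is our own check (the rule that $S(\sigma_p)$, for $\sigma_p=\sigma_n^{n/p}$, collects the eigenspaces of eigenvalue order $m$ dividing $n/p$ is our reading of Remarks \ref{d21}, \ref{d28} and \ref{d42}):

```python
from math import gcd

def phi(m):  # Euler totient, by direct count
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

# (orders indexing the d-vector, d-vector, prime p with NS(X) = S(sigma_p))
rows = [
    # n = 21
    ((21, 7, 3, 1), (1, 0, 1, 8), 7), ((21, 7, 3, 1), (1, 0, 2, 6), 7),
    ((21, 7, 3, 1), (1, 0, 3, 4), 7), ((21, 7, 3, 1), (1, 0, 0, 10), 7),
    # n = 28
    ((28, 14, 7, 4, 2, 1), (1, 0, 1, 0, 0, 4), 2),
    ((28, 14, 7, 4, 2, 1), (1, 1, 0, 0, 2, 2), 2),
    ((28, 14, 7, 4, 2, 1), (1, 0, 0, 0, 0, 10), 2),
    # n = 42
    ((42, 21, 14, 7, 6, 3, 2, 1), (1, 0, 0, 0, 1, 0, 2, 6), 7),
    ((42, 21, 14, 7, 6, 3, 2, 1), (1, 0, 0, 0, 1, 1, 1, 5), 7),
    ((42, 21, 14, 7, 6, 3, 2, 1), (1, 0, 0, 0, 0, 0, 0, 10), 7),
]
for orders, d, p in rows:
    n = orders[0]
    # eigenspace dimensions fill H^2(X, R): sum of phi(m)*d_m equals 22
    assert sum(phi(m) * dm for m, dm in zip(orders, d)) == 22
    # rank of S(sigma_p): eigenvalues of order m dividing n/p
    rank = sum(phi(m) * dm for m, dm in zip(orders, d) if (n // p) % m == 0)
    assert rank == 10  # the expected rk NS(X) = 22 - 12
print("all rows of the order-21/28/42 tables are consistent")
```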
\end{proof} \begin{remark} We observe that when $n=28$ and $\Fix(\sigma_{14})$ is of type $A1(3,2)$, then the 2-elementary lattice $S(\sigma_2)$ has invariants $(r,a)=(10,6)$. But, a priori, the invariant $\delta$ is not unique. By \cite[Theorem 0.1]{GS}, we have that $\delta=0$ if and only if $X$ also admits a symplectic involution. \end{remark} \bibliographystyle{plain} \bibliography{biblio14} \end{document}
\section{Deviatoric Decomposition} As mentioned before, the connection between totally symmetric tensors and spherical harmonics is well known. Thus, each totally symmetric tensor can be described by deviators. Moreover, every tensor of arbitrary order up to dimension three can be described by deviators uniquely. Backus \cite{backus} described this so-called \textbf{deviatoric decomposition}. It is well known that every tensor $\mathds{T}$ of order $q$ can be decomposed into a totally symmetric part $\mathbf{s}\mathds{T}$ and an asymmetric part $\mathbf{a}\mathds{T}$: \begin{equation*} \mathds{T} = \textbf{s}\mathds{T} + \textbf{a}\mathds{T}. \end{equation*} The deviatoric decomposition of the totally symmetric part is described in the previous section using the spherical harmonics. The deviatoric decomposition of the asymmetric part is far less well known. The gap is bridged by a unique isomorphism from the totally symmetric tensor space $\mathcal{S}^p$ into the asymmetric tensor space $\mathcal{A}^q$ of order $q$, \begin{equation*} \phi \colon \mathcal{S}^p \to \mathcal{A}^q. \end{equation*} By Schur's Lemma \cite{wigner}, the space of such isomorphisms is one-dimensional, so $\phi$ is unique up to a scalar factor. These totally symmetric tensors can then be decomposed into deviators. This isomorphism thus allows the decomposition of the asymmetric part of the tensor into deviators.
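The first step of the decomposition, splitting a tensor into its totally symmetric part and the asymmetric remainder, is easy to make concrete. The following sketch is our own numerical illustration (symmetrization by averaging over all index permutations):

```python
import itertools
import numpy as np

# Split a third-order tensor T into its totally symmetric part sT
# (average of T over all index permutations) and the asymmetric
# remainder aT = T - sT.
rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3, 3))

sT = np.zeros_like(T)
for perm in itertools.permutations(range(3)):
    sT += np.transpose(T, perm)
sT /= 6  # 3! index permutations

aT = T - sT
assert np.allclose(T, sT + aT)
# sT is totally symmetric: invariant under every index permutation
for perm in itertools.permutations(range(3)):
    assert np.allclose(sT, np.transpose(sT, perm))
```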
\begin{document} \begin{abstract} An orbitope is the convex hull of an orbit of a compact group acting linearly on a vector space. These highly symmetric convex bodies lie at the crossroads of several fields including convex geometry, algebraic geometry, and optimization. We present a self-contained theory of orbitopes with particular emphasis on instances arising from the groups $SO(n)$ and $O(n)$. These include Schur-Horn orbitopes, tautological orbitopes, Carath\'eodory orbitopes, Veronese orbitopes, and Grassmann orbitopes. We study their face lattices, their algebraic boundaries, and representations as spectrahedra or projected spectrahedra. \end{abstract} \maketitle \section{Introduction} An \emph{orbitope} is the convex hull of an orbit of a compact algebraic group $G$ acting linearly on a real vector space. The orbit has the structure of a real algebraic variety, and the orbitope is a convex, semi-algebraic set. Thus, the study of algebraic orbitopes lies at the heart of \emph{convex algebraic geometry} -- the fusion of convex geometry and (real) algebraic geometry. Orbitopes have appeared in many contexts in mathematics and its applications. Orbitopes of finite groups are highly symmetric convex polytopes which include the Platonic solids, permutahedra, Birkhoff polytopes, and other favorites from Ziegler's textbook \cite{Ziegler}, as well as the Coxeter orbihedra studied by McCarthy, Ogilvie, Zobin, and Zobin~\cite{Zobin}. Farran and Robertson's regular convex bodies~\cite{FR94} are orbitopal generalizations of regular polytopes, which were classified by Madden and Robertson~\cite{MR95}. Orbitopes for compact Lie groups, such as $SO(n)$, have appeared in investigations ranging from protein structure prediction~\cite{LSS08} and quantum information~\cite{AII06} to calibrated geometries~\cite{HL1982}.
Barvinok and Blekherman studied the volumes of convex bodies dual to certain $SO(n)$-orbitopes, and they concluded that there are many more non-negative polynomials than sums of squares \cite{BB}. This paper initiates the study of orbitopes as geometric objects in their own right. The questions we ask about orbitopes originate from three different perspectives: {\em convexity}, {\em algebraic geometry}, and {\em optimization}. In convexity, one would seek to characterize all faces of an orbitope. In algebraic geometry, one would examine the Zariski closure of its boundary and identify the components and singularities of that hypersurface. In optimization, one would ask whether the orbitope is a spectrahedron or the projection of a spectrahedron. Spectrahedra are to semidefinite programming what polyhedra are to linear programming. More precisely, a {\em spectrahedron} is the intersection of the cone of positive semidefinite matrices with an affine space. It can be represented as the set of points $x \in \R^n$ such that \begin{equation}\label{Eq:LMI} A_0 + x_1 A_1 + \dotsb + x_n A_n \, \, \succeq\ \, 0\,, \end{equation} where $A_0,A_1,\ldots, A_n$ are symmetric matrices and $\succeq 0$ denotes positive semidefiniteness. From a spectrahedral description many geometric properties, both convex and algebraic, are within reach. Furthermore, if an orbitope admits a representation (\ref{Eq:LMI}) then it is easy to maximize or minimize a linear function over that orbitope. Here is a simple illustration. \begin{example} \label{ex:HankelSmall} Consider the action of the group $G = SO(2)$ on the space ${\rm Sym}_4(\R^2) \simeq \R^5$ of binary quartics and take the convex hull of the orbit of $v = x^4$. The four-dimensional convex body ${\rm conv}(G \cdot v)$ is a {\em Carath\'eodory orbitope}. 
This orbitope is a spectrahedron: it coincides with the set of all binary quartics $\,\lambda_0 x^4 + 4 \lambda_1 x^3 y + 6 \lambda_2 x^2 y^2 + 4 \lambda_3 x y^3 + \lambda_4 y^4\,$ such that \begin{equation} \qquad \label{3by3hankel} \begin{pmatrix} \lambda_0 & \lambda_1 & \lambda_2 \\ \lambda_1 & \lambda_2 & \lambda_3 \\ \lambda_2 & \lambda_3 & \lambda_4 \end{pmatrix} \,\, \succeq \,\, 0 \qquad \hbox{and} \qquad \lambda_0 + 2 \lambda_2 + \lambda_4 = 1. \end{equation} This representation (\ref{3by3hankel}) will be derived in Section 5, where we will also see that it is equivalent to classical results from the theory of positive polynomials \cite{Rez}. The Hankel matrix shows that the boundary of ${\rm conv}(G \cdot v)$ is an irreducible cubic hypersurface in $\R^4$, defined by the vanishing of the Hankel determinant. It also reveals that this four-dimensional Carath\'eodory orbitope is $2$-neighborly: the extreme points are the rank one matrices, and any two of them are connected by an edge. The typical intersection of ${\rm conv}(G \cdot v)$ with a three-dimensional affine plane looks like an inflated tetrahedron. This three-dimensional convex body is bounded by {\em Cayley's cubic surface}, shown in Figure~\ref{F:fat_tetrahedron}. Alternative pictures of this convex body can be found in \cite[Fig.~3]{Ranestad} and \cite[Fig.~4]{SU}. The four vertices of the tetrahedron lie on the curve $G \cdot v$, and its six edges are inclusion-maximal faces of ${\rm conv}(G \cdot v)$. \qed \end{example} \begin{figure}[htb] \[ \includegraphics[height=1.9in]{figures/free-spec} \] \vskip -0.3cm \caption{Cross-section of a four-dimensional Carath\'eodory orbitope.} \label{F:fat_tetrahedron} \end{figure} This article is organized as follows. We begin by deriving the basic definitions and a few general results about orbitopes, and we formulate {\bf ten key questions} which will guide our subsequent investigations.
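The representation (\ref{3by3hankel}) can be checked numerically on the orbit itself: under one parametrization of the $SO(2)$-orbit of $v=x^4$, namely $(\cos t\, x + \sin t\, y)^4$, the coordinates are $\lambda_j=\cos^{4-j}t\,\sin^j t$, and each such point yields a rank-one positive semidefinite Hankel matrix satisfying the affine constraint. A short sketch (the sign convention in the parametrization is ours):

```python
import numpy as np

# Orbit points of v = x^4 under SO(2): lambda_j = cos(t)^(4-j) * sin(t)^j
# in the basis of the example. Each should give a PSD Hankel matrix with
# lambda_0 + 2*lambda_2 + lambda_4 = 1.
for t in np.linspace(0, 2 * np.pi, 25):
    c, s = np.cos(t), np.sin(t)
    lam = [c**4, c**3 * s, c**2 * s**2, c * s**3, s**4]
    H = np.array([[lam[0], lam[1], lam[2]],
                  [lam[1], lam[2], lam[3]],
                  [lam[2], lam[3], lam[4]]])
    assert np.all(np.linalg.eigvalsh(H) >= -1e-9)        # H is PSD (rank one)
    assert abs(lam[0] + 2 * lam[2] + lam[4] - 1) < 1e-9  # affine constraint
print("orbit points satisfy the Hankel representation")
```

Indeed $H=vv^T$ with $v=(\cos^2 t,\ \cos t\sin t,\ \sin^2 t)$, and the trace condition is $(\cos^2 t+\sin^2 t)^2=1$.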
These are organized along the themes of convex geometry (Subsection 2.1), algebraic geometry (Subsection 2.2) and optimization (Subsection 2.3). These questions are difficult. They are meant to motivate the reader and to guide our investigations. We are not yet able to offer a complete solution to any of these ten problems. Section 3 is concerned with the action of $O(n)$ and $SO(n)$ by conjugation on $n {\times} n$-matrices. These decompose into the actions by conjugation on symmetric and skew-symmetric matrices. The resulting {\em Schur-Horn orbitopes} are shown to be spectrahedra, their algebraic boundary is computed, and their face lattices are derived from certain polytopes known as permutahedra. The spectrahedral representations of the Schur-Horn orbitopes, stated in Theorems \ref{thm:symspec} and \ref{thm:skewsymspec}, rely on certain Schur functors also known as the {\em additive compound matrices}. Given a compact real algebraic group of $n {\times} n$-matrices, we can form its convex hull in $\R^{n \times n}$. The resulting convex bodies are the {\em tautological orbitopes}. In Subsection 4.2 we study the tautological orbitope and its dual coorbitope for the group $O(n)$. Both are spectrahedra, characterized by constraints on singular values, and they are unit balls for the operator and nuclear matrix norm considered in \cite{Fazel07}. Subsections 4.1 and 4.4 are devoted to the tautological orbitope for $SO(n)$. A characterization of its faces is given in Theorem \ref{thm:SOntauto}. The orbitopes of the group $SO(2)$ are the convex hulls of trigonometric curves, a classical topic initiated by Carath\'eodory in \cite{Car}, further developed in \cite{BN, Smi}, and our primary focus in Section 5. These $SO(2)$-orbitopes can be represented as projections of spectrahedra in two distinct ways: in terms of Hermitian Toeplitz matrices or in terms of Hankel matrices. A natural generalization of the rational normal curves in Section 5 are the Veronese varieties.
Their convex hulls, the {\em Veronese orbitopes}, are dual to the cones of polynomials that are non-negative on $\mathbb{R}^n$, as seen in \cite{BB, Ble, Rez}. In Section 6 we undertake a detailed study of the $15$-dimensional Veronese orbitope and its coorbitope which arise from ternary quartics. In Section 7 we mesh our investigations with a line of research in differential geometry. The {\em Grassmann orbitope} is the convex hull of the oriented Grassmann variety in its Pl\"ucker embedding in the unit sphere in $\wedge_d \R^n$. This vector space is the $d$-th exterior power of $\R^n$ and it is isomorphic to $\R^{\binom{n}{d}}$. The facial structure of Grassmann orbitopes has been studied in the theory of calibrated manifolds \cite{HL1982,HarMor86, Mor85}. We take a fresh look at these orbitopes from the point of view of convex algebraic geometry. Theorem \ref{thm:G2n} furnishes a spectrahedral representation and the algebraic boundary in the special case $d=2$, while Theorem \ref{thm:G36bad} shows that the Grassmann orbitopes fail to be spectrahedra in general. \section{Setup, Tools and Questions} \label{sec:Setup} Let $G$ be a real, compact, linear algebraic group, that is, a compact subgroup of $GL(n,\R)$ for some $n \in \N$ given as a subvariety. Prototypic examples are the {\em special orthogonal group} $\,SO(n) = \{ g \in GL(n,\R) : gg^T = {\rm Id}_n \,\,{\rm and}\, \,{\rm det}(g) = 1 \}\,$ and the {\em unitary group} $\,U(n) = \{ g \in GL(n,\C) : g\overline{g}^T = {\rm Id}_n \}$. A real representation of $G$ is a group homomorphism $\rho : G \rightarrow GL(V)$ for some finite dimensional real vector space $V$. We will write $g \cdot v := \rho(g)(v)$ for $g \in G$ and $v \in V$. The representation is {\em rational} if $\rho$ is a rational map of algebraic varieties. 
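Averaging over the group is the basic device for producing invariant structures from arbitrary ones, and it is worth seeing in the smallest possible case. The following sketch is our own illustration with a finite subgroup of $SO(2)$, where the Haar integral reduces to a finite average (the chosen generator and initial form are arbitrary):

```python
import numpy as np

# For a compact group, averaging an arbitrary inner product over the
# group produces a G-invariant one.  Here G is the cyclic group of
# order 4 generated by rotation through 90 degrees, so the Haar measure
# is the uniform measure on its 4 elements.
g = np.array([[0.0, -1.0], [1.0, 0.0]])           # rotation by pi/2
group = [np.linalg.matrix_power(g, k) for k in range(4)]

A = np.array([[2.0, 1.0], [1.0, 3.0]])            # arbitrary form <v,w> = v^T A w
A_inv = sum(h.T @ A @ h for h in group) / len(group)  # averaged form

for h in group:                                   # invariance: <hv,hw>_G = <v,w>_G
    assert np.allclose(h.T @ A_inv @ h, A_inv)
```

For this particular choice the averaged form is a multiple of the identity, reflecting the fact that the rotation group admits an essentially unique invariant inner product on $\R^2$.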
By choosing an inner product $\inner{\,\cdot \,,\, \cdot\,}$ on $V$, we may define a $G$-invariant inner product as \begin{equation}\label{Eq:G-inv_form} \inner{ v, w }_{G} \,\,: = \,\, \int_G \inner{g\cdot v, g\cdot w}\, d\mu. \end{equation} Here $\mu$ denotes the {\em Haar measure}, which is the unique $G$-invariant probability measure on $G$. Choosing coordinates on $V$ so that $\inner{ \cdot, \cdot }_G$ becomes the standard inner product on $V \simeq \R^m$, we can identify the matrix group $G$ with a subgroup of the {\em orthogonal group} $\,O(m) = \{ g \in GL(m,\R) : gg^T = {\rm Id}_m \}$. The {\em orbit} of a vector $v \in V$ under the compact group $G$ is the set $\,G\cdot v = \{ g \cdot v : g \in G\}$. This is a bounded subset of $V$. The orbit $\,G \cdot v\,$ is a smooth, compact real algebraic variety of dimension \[ \dim G\cdot v \,\,= \,\,\dim G - \dim \stab_G(v). \] Here $\,\stab_G(v) = \{ g \in G : g\cdot v = v \}\,$ is the {\em stabilizer} of the vector $v$. In particular, the orbit is isomorphic as a $G$-variety to the compact homogeneous space $\,G / \stab_G(v)$. The \emph{orbitope} of $G$ with respect to the vector $v \in V$ is the semialgebraic convex body \[ \conv(G \cdot v) \,\,= \,\, \conv \{ g \cdot v\, : \, g \in G \} \,\,\, \subset \,\,\, V. \] We tacitly assume that the group $G$ and its representation $\rho$ are clear from the context and sometimes we write $\calO_v$ or $\calO$ for $\conv(G \cdot v)$. The dimension of an orbitope $\conv(G\cdot v)$ is the dimension of the affine hull of the orbit $G\cdot v$. For small $n$, the connected subgroups of the orthogonal group $O(n)$ are known. This leads to the following census of low-dimensional orbitopes of connected groups. \begin{example}[The orbitopes of dimension at most four arising from connected groups] $\qquad$ {\rm We identify $G$ with a subgroup of $SO(n)$. We assume $\calO = \conv(G \cdot v)$ is an $n$-dimensional orbitope in $\R^n$. This implies that $G$ fixes no non-zero vector. 
Here is our census: \smallskip {\sl $n=1$:} There are no one-dimensional orbitopes because $SO(1)$ is a point. \smallskip \noindent It is known that every proper connected subgroup of $SO(2)$ or $SO(3)$ fixes a non-zero vector, so for $n \leq 3$, the subgroup $G$ must be equal to $SO(n)$. This establishes the next two cases: \smallskip {\sl $n=2$:} The only orbitopes in $\R^2$ are the discs $\{ (x,y) \in \R^2 : x^2 + y^2 \leq r^2 \}$. {\sl $n=3$:} The only orbitopes in $\R^3$ are the balls $\{ (x,y,z) \in \R^3 : x^2 + y^2 + z^2 \leq r^2 \}$. \smallskip \noindent The four-dimensional case is where things begin to get interesting: \smallskip {\sl $n=4$:} The group $SO(4)$ has connected subgroups $G$ of dimension $1$, $2$, $3$ and $6$. \begin{itemize} \item If ${\rm dim}(G) = 6$ then $G = SO(4)$ and the orbitopes are balls in $\R^4$. \item If ${\rm dim}(G) = 3$ then $G \simeq SU(2)$, acting as the unit quaternions on all quaternions, $\HH=\R^4$, by either left or right multiplication. Here the orbitopes are also balls in $\R^4$. \item If ${\rm dim}(G) = 2$ then $G \simeq SO(2)\times SO(2)$. These tori act on $\R^4$ through an orthogonal direct sum decomposition $\R^2\oplus \R^2$ and their orbitopes are products of two discs. \item If ${\rm dim}(G) = 1$ then $G \simeq SO(2)$ and we obtain four-dimensional orbitopes that are isomorphic, for some positive integers $a < b$, to the Carath\'eodory orbitopes $$ \Cara_{a,b}\ :=\ \conv \,\bigl\{(\cos at, \sin at, \cos bt, \sin bt) \in \R^4 \mid t\in[0,2\pi]\bigr\}\,. $$ These orbitopes were introduced one century ago by Carath\'eodory \cite{Car}. Their study was picked up in the 1980s by Smilansky~\cite{Smi} and recently by Barvinok and Novik \cite{BN}. We note that $\Cara_{1,2}$ is affinely isomorphic to the Hankel orbitope in Example~\ref{ex:HankelSmall}. \qed \end{itemize}} \end{example} To compute the dimensions of orbitopes in general we shall need a pinch of representation theory \cite{FH}. 
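The dimensions appearing in this census can be confirmed numerically by sampling the trigonometric curve behind $\Cara_{a,b}$ and computing the rank of the centered sample matrix. A minimal sketch (the helper name is ours):

```python
import numpy as np

# Dimension of the affine hull of the curve
# t -> (cos(a t), sin(a t), cos(b t), sin(b t)),
# estimated as the rank of a centered matrix of samples.
def affine_dim(a, b, m=200):
    t = np.linspace(0, 2 * np.pi, m, endpoint=False)
    pts = np.column_stack([np.cos(a * t), np.sin(a * t),
                           np.cos(b * t), np.sin(b * t)])
    centered = pts - pts.mean(axis=0)
    return np.linalg.matrix_rank(centered, tol=1e-8)

assert affine_dim(1, 2) == 4   # a != b: a four-dimensional orbitope
assert affine_dim(1, 1) == 2   # a == b: the orbit is a circle
```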
A representation $V$ of the group $G$ is {\em irreducible} if its only subrepresentations are $\{0\}$ and $V$. If $V$ and $W$ are irreducible representations, then the space $\Hom_G(V,W)$ of equivariant linear maps between them is zero unless $V\simeq W$. Schur's Lemma states that $\End_G(V):=\Hom_G(V,V)$ is a division algebra over $\R$, that is, either $\R$, $\C$, or $\HH$. If $W_1,W_2,\dotsc$ is a complete list of distinct irreducible representations of $G$, and $V$ is any representation of $G$, then we have a canonical decomposition into isotypical representations: \begin{equation} \label{eq:isotypical} \bigoplus_{i\geq 0} \Hom_G(W_i,V)\otimes_{\SEnd_G(W_i)} W_i\ \xrightarrow{\ \simeq \ }\ V. \end{equation} This is an isomorphism of $G$-modules. The map (\ref{eq:isotypical}) on each summand is $(\varphi,w)\mapsto \varphi(w)$. The image of the $i$th summand in $V$ is called the {\em $W_i$-isotypical component of $V$}, and when it is non-zero, we say that the irreducible representation $W_i$ {\em appears} in the $G$-module $V$. We say that $V$ is {\em multiplicity-free} if each irreducible representation $W_i$ appears in $V$ at most once, so that $\Hom_G(W_i,V)$ has rank 1 or 0 over $\End_G(W_i)$. Suppose that $V$ contains the trivial representation and write $V=\R^l\oplus V'$, where $\R^l$ is the trivial isotypical component of $V$ and $V'$ does not contain the trivial representation. Any vector $v\in V$ can be written as $v=v_0\oplus v'$, where $v_0\in\R^l$ and $v'\in V'$. Then \[ G\cdot v\ =\ v_0\oplus G\cdot v' \qquad\mbox{and}\qquad \conv(G\cdot v)\ =\ v_0\oplus \conv(G\cdot v')\,. \] Thus we lose no geometric information in assuming that $V$ does not contain the trivial representation. This property ensures that the linear span of an orbit coincides with its affine span. 
Hence the affine span of an orbitope decomposes along its isotypical components: \[ \aff(G\cdot v)\ =\ \bigoplus_{i \geq 0} \aff(G\cdot v_i)\, \qquad \hbox{for vectors} \,\,\, v = \oplus_i v_i \,\in V\,\,\, \hbox{as in (\ref{eq:isotypical}).} \] To determine the dimension of $\aff(G\cdot v)$ for $v$ in a single isotypical component $V$ we proceed as follows. Let $V = W^l$ with $W$ irreducible. Then $v=(w_1,\dotsc,w_l)$ and the affine span of $G\cdot v$ is isomorphic to $W^k$, where $k$ is the rank of the $\End_G(W)$-module spanned by $w_1,\dotsc,w_l$. Hence, the dimension of $\aff(G \cdot v)$ over $\R$ equals $k \cdot \dim W$. In particular, $k\leq \dim_{\SEnd_G(W)}(W)$. If $V$ is multiplicity-free and $v$ has a nonzero projection into each isotypical component of $V$, then $\dim \conv(G\cdot v) = \dim V$. We see this in the Carath\'eodory orbitopes for the group $SO(2)$ of $2\times 2$ rotation matrices. Its nontrivial representations are $W_a\simeq \R^2$, where a rotation matrix acts through its $a$th power, for $a>0$. Let $\Cara_{a,b}$ be the orbitope of $SO(2)$ with respect to a general vector $v\in W_a\oplus W_b$. If $a\neq b$, then $V$ has two isotypical components and $\Cara_{a,b}$ has dimension four. If $a=b$, then $V \simeq W_a^2$ consists of a single isotypical component and $\Cara_{a,a}$ is two-dimensional: since $\End_{SO(2)}(\R^2)=\C$, the span of $(w_1,w_2)$ is one-dimensional over $\C$. \subsection{Convex geometry} Orbitopes are convex bodies, and it is natural to begin their study from the perspective of classical convexity. A point $p$ in a convex body $K \subset V$ is an {\em extreme point} if $\conv(K\backslash \{p\}) \not= K$. Thus, the set $E$ of extreme points of $K$ is the minimal subset satisfying ${\rm conv}(E) = K$. An extrinsic description of $K$ is given by its \emph{support function} \[ h(K;\, \cdot\,) : V^* \rightarrow \R, \, \,\ell\, \mapsto\, h(K;\ell)\, := \,\max \{ \ell(x) : x \in K \}.
\] In terms of the support function, the convex body $K$ is the set of points $x \in V$ such that $\ell(x) \le h(K;\ell)$ for every $\ell \in V^*$. Each linear functional $\ell \in V^*$ defines an {\em exposed face} of $K$: \[ K^\ell\,\,\, = \,\,\,\{ \,p \in K : \ell(p) = h(K;\ell) \, \}. \] An exposed face $K^\ell$ is itself a convex body of dimension $\dim \aff(K^\ell)$. An exposed face of dimension $0$ is called an \emph{exposed point} of $K$. Every exposed point is an extreme point, but this inclusion is typically strict. However, for orbitopes, these two notions coincide. \begin{proposition} Every point in the orbit $\,G\cdot v\,$ is exposed in its convex hull. In particular, every extreme point of the orbitope $\,\conv(G \cdot v)\,$ is an exposed point. \end{proposition} \begin{proof} Since $G$ acts orthogonally on $V$, the orbit $\,G\cdot v\,$ lies entirely in the sphere in $V$ that is centered at $0$ and contains the point $v$. As every point of the sphere is exposed, the entire orbit consists of exposed points and hence extreme points. \end{proof} A closed subset $F \subseteq K$ is a \emph{face} if $F$ contains the two endpoints of any open segment in $K$ that it intersects. This includes $\emptyset$ and $K$ itself. An inclusion-maximal proper face of $K$ is called a \emph{facet}. By separation, every face is contained in an exposed face and thus facets are automatically exposed. In general, every exposed face is a face but not conversely. \begin{question} When does an orbitope have only exposed faces? \end{question} The exposed faces of a convex body form a partially ordered set with respect to inclusion, called the \emph{face lattice}. The face lattice is atomic but in general not coatomic, as was pointed out to us by Stephan Weis. A necessary condition for coatomicity is that the polar body (see below) has all faces exposed (cf.~\cite{Weis}). Also, it is generally not graded because ``being a face of'' is not a transitive relation.
For example, the four-dimensional Barvinok-Novik orbitope in Section~\ref{S:Caratheodory} has triangular exposed faces for which the three vertices are exposed but the edges are not. Similar behavior is seen in the convex body on the right of Figure~\ref{F:trig_curves}, which has two triangular exposed facets whose edges and two of three vertices are not exposed. \begin{question} \label{question2} Describe the face lattices of orbitopes. \end{question} For an orbitope $\calO = \conv( G\cdot v)$, the faces come in $G$-orbits and these $G$-orbits come in algebraic families. In particular, the zero-dimensional faces come in a family parametrized by $G$. The description of these families is the point of Question \ref{question2}. For instance, the orbitope in Example~\ref{ex:HankelSmall} is a four-dimensional, two-neighborly convex body. Its exposed points are parametrized by the circle $\sphere^1$ and the edges come in a two-dimensional family. The {\em polar body} \[ \calO^\circ \,\,= \,\, \{ \ell \in V^* : \h(\calO;\ell) \le 1 \} \] is called the \emph{coorbitope} of $G$ with respect to $v\in V$. Our assumption that $V$ does not contain the trivial representation ensures that $0$ is the centroid of $\calO$, and this implies $\,(\calO^\circ)^\circ = \calO$. We shall also make use of the cone over the coorbitope $\calO^\circ$. This is the \emph{coorbitope cone} \begin{equation} \label{eq:coorbitopecone} \widehat{\calO^\circ} \,\, =\,\, \bigl\{ (\ell,m) \in V^* \oplus \R \,: \,\h(\calO;\ell) \le m \bigr\}. \end{equation} For a convex body $K$ the assignment $\|x\|_K := \inf \{ \lambda > 0 : \lambda x \in K \}$ defines an \emph{(asymmetric) norm} on $V$ with unit ball $K$. If $K$ is centrally-symmetric with respect to the origin, then $\| \cdot \|_K$ is an actual norm. In that case the polar body $K^\circ$ is also centrally-symmetric and $\|\cdot\|_{K^\circ}$ is the {\em dual norm}. 
Norms and support functions are related by \[ \|\ell\|_{K^\circ} \,\, = \,\, h(K;\ell) \, \quad \hbox{ for all $\ell \in V^*$.} \] In particular, if $K$ is an orbitope, then $\| \cdot \|_K$ and $\|\cdot\|_{K^\circ}$ are $G$-equivariant norms. Every point $p$ in a convex body $K$ is in the convex hull of finitely many extreme points. We denote by $d_p$ the least cardinality of a set $E$ of extreme points with $p \in \conv(E)$. We call $\cara(K) := \sup \{ d_p : p \in K \}$ the \emph{Carath\'eodory number} of $K$. Carath\'eodory's Theorem (see e.g.~\cite[\S I.2]{Barvinok}) asserts that $\cara(K)$ is bounded from above by $\dim K + 1$. Fenchel showed that $\cara(K) \le \dim K$ when the set of extreme points of $K$ is connected \cite{fenchel29}. Note that the Carath\'eodory number of an orbitope $\calO_v = \conv(G\cdot v)$ in general depends on $v$ (cf.~\cite[Theorem~4.9]{LSS08}) whereas, for multiplicity-free representations, the dimension of $\calO_v$ does not. \begin{question} What are the Carath\'eodory numbers of orbitopes? \end{question} We refer to the recent work of Barvinok and Blekherman \cite{BB, Ble} for more information about the convex geometry of orbitopes and coorbitopes, especially with regard to their volumes. \subsection{Algebraic geometry} Here we look at orbitopes as objects in real algebraic geometry. Fix a rational representation $\rho : G \rightarrow GL(m,\R)$ of a compact connected algebraic group $G$. Every orbit $\,G \cdot v\,$ is an irreducible real algebraic variety in $\R^m$, and we may ask for its prime ideal. By the Tarski-Seidenberg Theorem \cite[\S 2.4]{BPR}, the orbitope is a semi-algebraic set. \begin{question} Which orbitopes are \emph{basic} semi-algebraic sets, i.e.\ for which triples $(G,\rho,v)$ can ${\rm conv}(G \cdot v)$ be described by a finite conjunction of polynomial equations and inequalities? 
\end{question} The boundary $\partial \calO$ of an orbitope $\calO$ in $\R^m$ is a compact semi-algebraic set of codimension~one in its affine span ${\rm aff}(\calO)$. The Zariski closure of $\partial \calO$ is denoted by $\partial_ a\calO$. We call it the {\em algebraic boundary} of $\calO$. If ${\rm aff}(\calO) = \R^m$ then the algebraic boundary $\partial_ a\calO$ is the zero set of a unique (up to scaling) reduced polynomial $f(x_1,\ldots,x_m)$ whose coefficients lie in the field of definition of $(G,\rho,v)$. That field of definition will often be the rational numbers $\Q$. Since scalars in $\mathbb{Q}$ have an exact representation in computer algebra, but scalars in $\R$ require numerical floating point approximations, we seek to use $\mathbb{Q}$ instead of $\R$ wherever possible. \begin{question} \label{ques:algbound} How to calculate the algebraic boundary $\,\partial_a\calO $ of an orbitope $ \calO$? \end{question} The irreducible factors of the polynomial $f(x_1,\ldots,x_m)$ that cuts out $\partial_a \calO$ arise from various singularities in the boundary $\partial \calO^o$ of the coorbitope $\calO^o$. We believe that a complete answer to Question \ref{ques:algbound} will involve a Whitney stratification of the real algebraic hypersurface $\partial_a \calO^o$. Recall that a {\em Whitney stratification} is a decomposition into locally closed submanifolds (strata) in which the singularity type of each stratum is locally constant along the stratum. The faces of a polytope form a Whitney stratification of its boundary, which is dual to the stratification of the polar polytope. We expect a similar duality between the Whitney stratification of the boundary of an orbitope and of the boundary of its coorbitope. \begin{question} How to compute and study the algebraic boundary $\,\partial_a\calO^o $ of the coorbitope $ \calO^o$? Is every component of $ \partial_a \calO$ the dual variety to a stratum in a Whitney stratification of $\partial_a \calO^o$? 
\end{question} Recall that the {\em dual variety} $X^\vee $ of a subvariety $X$ in $\R^m$ is the Zariski closure of the set of all affine hyperplanes that are tangent to $X$ at some regular point. When studying this duality, algebraic geometers usually work in complex projective space $\mathbb{P}_\mathbb{C}^m$ rather than real affine space $\mathbb{R}^m$. In some of the examples for $G = SO(n)$ seen in this paper, the algebraic boundary $\partial_a \calO^o$ of the coorbitope $\calO^o$ coincides with the dual variety $X^\vee $ of the orbit $X = G \cdot v$. A good example for this is the discriminantal hypersurface in Corollary \ref{cor:discrorbi}. More generally, we have the impression that the hypersurface $\partial_a \calO^o$ is often irreducible while $\partial_a \calO$ tends to be reducible. For further appearances of dual varieties in convex algebraic geometry see \cite{Ranestad, SU}. The \emph{$k$-th secant variety} of $G \cdot v$ is the Zariski closure of the union of all flats spanned by $k {+} 1$ points of $G\cdot v$. The study of secant varieties leads to lower bounds for the Carath\'eodory number: \begin{proposition} \label{prop:secant} If $k \geq \cara(\calO_v) $ then the $k$-th secant variety of $G \cdot v$ is the ambient space~$\R^m$. \end{proposition} \begin{proof} Let $k \geq \cara(\calO_v)$. The set of points that lie in the convex hull of $k+1$ points of $G \cdot v$ is dense in $\calO_v$ and hence is Zariski dense in $\R^m$. The $k$-th secant variety contains that set. \end{proof} The lower bound for $\cara(\calO_v)$ from Proposition \ref{prop:secant} usually does not match Fenchel's upper bound $\cara(\calO_v) \leq {\rm dim}(\calO_v)$. For instance, consider the Carath\'eodory orbitope $\calO_v$ in Example \ref{ex:HankelSmall}. Its algebraic boundary $\partial_a \calO_v$ equals the second secant variety of the orbit $G \cdot v$, so Proposition \ref{prop:secant} implies $\cara(\calO_v) \geq 3$.
This orbitope satisfies $\cara(\calO_v)= 3$ but ${\rm dim}(\calO_v) = 4$. \begin{question} For which orbitopes $\mathcal{O}$ is the boundary $\partial_a \mathcal{O}$ one of the secant varieties of $G \cdot v$? When is the lower bound on the Carath\'eodory number $\cara(\calO)$ in Proposition \ref{prop:secant} tight? \end{question} \subsection{Optimization} A fundamental object in convex optimization is the set ${\rm PSD}_n$ of positive semidefinite symmetric real $n {\times} n$-matrices. This is the closed basic semi-algebraic cone defined by the non-negativity of the $2^n - 1$ principal minors. It can also be described by only $n$ polynomial inequalities, namely, by requiring that the elementary symmetric polynomials in the eigenvalues, i.e.\ the suitably normalized coefficients of the characteristic polynomial, be non-negative. The algebraic boundary $\partial_a {\rm PSD}_n$ of the cone ${\rm PSD}_n$ is the hypersurface cut out by the symmetric $n {\times} n$-determinant. All faces of ${\rm PSD}_n$ are exposed, isomorphic to ${\rm PSD}_k$ for $k \le n$, and indexed by the lattice of linear subspaces ordered by reverse inclusion. Spectrahedra inherit the first of these favorable properties: all of their faces are exposed. Recall that a {\em spectrahedron} is the intersection of the cone ${\rm PSD}_n$ with an affine-linear subspace in ${\rm Sym}_2(\R^n)$. If we know that an orbitope is a spectrahedron then this either answers or simplifies many of our questions. \begin{question} \label{ques:IsSpec1} Characterize all ${\rm SO}(n)$-orbitopes that are spectrahedra. \end{question} Polytopes are special cases of spectrahedra: they arise when the affine-linear subspace consists of diagonal matrices. One major distinction between polytopes and spectrahedra is that the class of spectrahedra is not closed under projection. That is, the image of a spectrahedron under a linear map is typically not a spectrahedron. See Section 5 for orbitopal examples.
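The agreement of the two descriptions of ${\rm PSD}_n$ given above, all $2^n-1$ principal minors versus only $n$ normalized characteristic-polynomial coefficients, is easy to probe numerically. A minimal sketch (assuming numpy; the helper names are ours), testing random symmetric $4\times 4$-matrices:

```python
import numpy as np
from itertools import combinations

def psd_by_principal_minors(A, tol=1e-9):
    """PSD test via nonnegativity of all 2^n - 1 principal minors."""
    n = A.shape[0]
    return all(np.linalg.det(A[np.ix_(S, S)]) >= -tol
               for r in range(1, n + 1)
               for S in map(list, combinations(range(n), r)))

def psd_by_charpoly(A, tol=1e-9):
    """PSD test via only n inequalities: the elementary symmetric
    polynomials e_k of the eigenvalues, which are (up to sign) the
    coefficients of the characteristic polynomial."""
    coeffs = np.poly(A)            # det(x*I - A) = x^n + c_1 x^(n-1) + ...
    return all((-1) ** j * coeffs[j] >= -tol
               for j in range(1, A.shape[0] + 1))

rng = np.random.default_rng(0)
for _ in range(100):
    B = rng.standard_normal((4, 4))
    A = B + B.T                    # random symmetric 4x4 matrix
    psd = bool(np.linalg.eigvalsh(A).min() >= 0)
    assert psd_by_principal_minors(A) == psd_by_charpoly(A) == psd
print("both descriptions agree on 100 random symmetric matrices")
```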
Characterizing projections of spectrahedra among all convex bodies is a major open problem in optimization theory; see e.g.~\cite{HelNie}. Here is a special case of this general problem: \begin{question} \label{ques:IsSpec2} Is every orbitope the linear projection of a spectrahedron? \end{question} In Questions \ref{ques:IsSpec1} and \ref{ques:IsSpec2}, it is important to keep track of the subfield of $\R$ over which the data $(G,\rho,v)$ are defined. Frequently, this subfield is the rational numbers $\Q$, and in this case we seek to write the orbitope as a (projected) spectrahedron over $\Q$ and not just over $\R$. Semidefinite programming is the problem of maximizing a linear function over a (projected) spectrahedron, and there are efficient numerical algorithms for solving this problem in practice. In our context of orbitopes, the optimization problem can be phrased as follows: \begin{question} What method can be used for maximizing a linear function $\ell$ over an orbitope $\calO$? Equivalently, how to evaluate the norm $\ell \mapsto \| \ell \|_{\calO^\circ}$ associated with the coorbitope $\calO^\circ$ ? \end{question} This is equivalent to a non-linear optimization problem over the compact group $G$. We seek to find $g \in G$ which maximizes $\,\ell ( \rho(g) \cdot v)$. This maximum is an algebraic function of $v$. \section{Schur-Horn Orbitopes} \label{sec:SchurHorn} In this section we study two families of orbitopes for the orthogonal group $G = O(n)$. This group acts on the Lie algebra $\mathfrak{gl}_n$ by restricting the adjoint representation of $GL(n,\R)$. The $O(n)$-module $\mathfrak{gl}_n $ decomposes into two distinguished invariant subspaces, namely $\sym$ and $\skew$. These correspond to the normal and tangent space of $O(n) \subset GL(n,\R)$ at the identity. 
In matrix terms, the spaces of symmetric and skew-symmetric matrices form two natural representations of $O(n)$ for the action $g \cdot A = g A g^T$ with $g \in O(n)$ and $A \in \R^{n \times n}$. For a symmetric matrix $M \in \sym$ we define the \emph{symmetric Schur-Horn orbitope} \begin{align*} \SH_M & \,:= \, \conv( G\cdot M ) \,\, \subset\, \sym . \\ \intertext{For a skew-symmetric matrix $N \in \skew$ we define the \emph{skew-symmetric Schur-Horn orbitope}} \SH_N &\,:= \,\conv( G\cdot N ) \subset \skew. \end{align*} We shall see that these orbitopes are intimately related to certain polytopes which govern their boundary structure and spectrahedral representation. This connection arises via the classical Schur-Horn theorem~\cite{schur23}. The material in the sections below could also be presented in symplectic language, using the moment maps of Atiyah-Guillemin-Sternberg~\cite{At82, GS82}. \subsection{Symmetric Schur-Horn orbitopes} The $\binom{n+1}{2}$-dimensional space $\sym$ decomposes into the trivial $O(n)$-representation, given by multiples of the identity matrix, and the irreducible representation of symmetric $n {\times} n$-matrices with trace zero. Every symmetric matrix $M \in \sym$ is orthogonally diagonalizable over $\R$. The ordered list of eigenvalues of $M $ is denoted $\l(M) = (\l_1(M) \ge \l_2(M) \ge \cdots \ge \l_n(M))$. The orbit $G\cdot M$ equals the set of matrices $A \in \sym$ that satisfy $\l(A) = \l(M)$. We shall see that its convex hull $\SH_M = {\rm conv}(G \cdot M)$ is the set of matrices $A \in \sym$ for which $\l(A)$ is majorized by $\l(M)$. 
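The majorization relation defined next, and with it Schur's half of the Schur-Horn theorem, can be tested numerically. A minimal sketch (assuming numpy; the helper `majorized` and all names are ours):

```python
import numpy as np

def majorized(q, p, tol=1e-9):
    """True if q is majorized by p: equal sums, and the sorted
    partial sums of q are bounded by those of p."""
    q = np.sort(np.asarray(q, dtype=float))[::-1]
    p = np.sort(np.asarray(p, dtype=float))[::-1]
    if abs(q.sum() - p.sum()) > tol:
        return False
    return bool(np.all(np.cumsum(q) <= np.cumsum(p) + tol))

rng = np.random.default_rng(1)
B = rng.standard_normal((5, 5))
M = (B + B.T) / 2                      # random symmetric 5x5 matrix
lam = np.linalg.eigvalsh(M)            # eigenvalues of M

# Schur's direction of the Schur-Horn theorem: the diagonal of any
# orthogonal conjugate g M g^T is majorized by the eigenvalues of M.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)))   # random orthogonal g
A = Q @ M @ Q.T
assert majorized(np.diag(A), lam)
print("diag(g M g^T) is majorized by the eigenvalues of M")
```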
For $p,q \in \R^n$ we say that $q$ is \emph{majorized} by $p$, written $q \maj p$, if $\,q_1 + q_2 + \cdots + q_n \,=\, p_1 + p_2 + \cdots + p_n$, and, after reordering, $\,q_1 \ge \cdots \ge q_n\,$ and $\,p_1 \ge \cdots \ge p_n$, we have \[ q_1 + q_2 + \cdots + q_k \ \le \ p_1 + p_2 + \cdots + p_k \quad\,\, \hbox{for $\,k=1,\dots,n-1$.} \] For a fixed point $p \in \R^n$, the set of all points $q$ majorized by $p$ is the convex polytope \[ \Pi(p) \,\, =\,\, \{ q \in \R^n : q \maj p \} \,\, = \,\,\conv \{ \pi\cdot p = ( p_{\pi(1)}, \dots, p_{\pi(n)}) : \pi \in \mathfrak{S}_n \}. \] Here $\mathfrak{S}_n$ denotes the symmetric group, and $\Pi(p)$ is the \emph{permutahedron} with respect to $p$. This is a well-studied polytope \cite{onn93, Ziegler} and is itself an orbitope for $\mathfrak{S}_n$. The permutahedron $\Pi(p)$ for $p = ( p_1 \ge p_2 \ge \cdots \ge p_n)$ consists of all points $q \in \R^n$ such that $\sum_i p_i = \sum_i q_i$ and \begin{equation} \label{eq:permineq} \sum_{i \in I} q_i \ \le \ \sum_{i=1}^{|I|} p_i \qquad \hbox{for all $I \subseteq [n]$.} \end{equation} Let $\diag : \sym \rightarrow \R^n$ be the linear projection onto the diagonal. \begin{proposition}[The symmetric Schur-Horn Theorem~\cite{Leite99}] Let $M \in \sym$ and $\SHsym_M$ its symmetric Schur-Horn orbitope. Then the diagonal $\diag(M)$ is majorized by the vector of eigenvalues $\l(M)$. In fact, the orbitope $\SHsym_M$ maps linearly onto the permutahedron: \[ \diag(\SHsym_M) \,\,=\,\, \Pi(\l(M)). \] \end{proposition} \begin{corollary} \label{cor:SH1} The Schur-Horn orbitope equals $\, \SHsym_M = \{ A \in \sym : \l(A) \maj \l(M) \}$. \end{corollary} \begin{proof} We have shown that the right hand side equals $\{ A \in \sym : \l(A) \in \diag(\SHsym_M) \}$. We claim that a matrix $A$ is in this set if and only if $A$ lies in $ \SHsym_M$. This is clear if $A$ is a diagonal matrix. It follows for all matrices since both sets are invariant under the $O(n)$-action. 
\end{proof} Our next goal is to derive a spectrahedral characterization of $\SHsym_M$. Consider the natural action of the Lie group $GL(n,\R)$ on the $k$-th exterior power $\wedge_k \R^n$. If $\{v_1,v_2,\dots, v_n \}$ is any basis of $\R^n$, then the induced basis vectors of $\wedge_k \R^n$ are $v_{i_1} \wedge v_{i_2} \wedge \cdots \wedge v_{i_k}$ for $1 \le i_1 < i_2 < \cdots < i_k \le n$. A matrix $g \in GL(n,\R)$ acts on a basis element by sending it to $g \cdot v_{i_1} \wedge v_{i_2} \wedge \cdots \wedge v_{i_k} = (g \cdot v_{i_1}) \wedge (g \cdot v_{i_2}) \wedge \cdots \wedge (g \cdot v_{i_k})$. We denote by $\Sfunc_k : \mathfrak{gl}(\R^n) \rightarrow \mathfrak{gl}(\wedge_k \R^n)$ the induced map of Lie algebras. The linear map $\Sfunc_k$ is defined by the rule \begin{equation} \label{eq:addcomprule} \Sfunc_k(B)(v_{i_1} \wedge v_{i_2} \wedge \cdots \wedge v_{i_k}) \,\,= \,\, \sum_{j = 1}^k v_{i_1} \wedge \cdots \wedge v_{i_{j-1}} \wedge (B v_{i_j}) \wedge v_{i_{j+1}} \wedge \cdots \wedge v_{i_k}. \end{equation} The $\binom{n}{k} \times \binom{n}{k}$-matrix that represents $\Sfunc_k(B)$ in the standard basis of $\wedge_k \R^n$ is known as the \emph{$k$-th additive compound matrix} of the $n \times n$-matrix $B$. It has the following main property: \begin{lemma} \label{lem:compoundeigen} Let $B \in \sym$ with eigenvalues $\l(B) = (\l_1, \l_2,\dots,\l_n)$. Then $\Sfunc_k(B)$ is symmetric and has eigenvalues $ \,\l_{i_1} + \l_{i_2} + \cdots + \l_{i_k}\,$ for $1 \le i_1 < i_2 < \cdots < i_k \le n$. \end{lemma} \begin{proof} Let $v_1,\ldots,v_n$ be an eigenbasis for $B$. Then the formula (\ref{eq:addcomprule}) says $$ \Sfunc_k(B)(v_{i_1} \wedge v_{i_2} \wedge \cdots \wedge v_{i_k}) \,\, = \,\, \sum_{j = 1}^k v_{i_1} \wedge \cdots \wedge v_{i_{j-1}} \wedge ( \lambda_{i_j} v_{i_j}) \wedge v_{i_{j+1}} \wedge \cdots \wedge v_{i_k}. 
$$ Hence $\, v_{i_1} \wedge v_{i_2} \wedge \cdots \wedge v_{i_k}\,$ is an eigenvector of $\Sfunc_k(B)$ with eigenvalue $\l_{i_1} + \cdots + \l_{i_k}$. \end{proof} This leads to the result that each symmetric Schur-Horn orbitope $\SHsym_M$ is a spectrahedron. \begin{theorem} \label{thm:symspec} Let $M \in \sym$ with ordered eigenvalues $\l(M) = (\l_1 \ge \cdots \ge \l_n)$. Then \[ \SHsym_M \, = \, \bigl\{ A \in \sym \,:\, \Tr(A) = \Tr(M) \,\,\hbox{and} \,\,\, \sum_{i=1}^k \l_i {\rm Id}_{\binom{n}{k}} - \Sfunc_k(A)\, \succeq \,0\,\,\, \hbox{for}\,\, k = 1,\ldots,n-1 \bigr\}. \] \end{theorem} \begin{proof} A matrix $A \in \sym$ is in $\SHsym_M$ if and only if $\l(A)$ is in the permutahedron $\Pi(\l(M))$. From the inequality representation in (\ref{eq:permineq}), in conjunction with Lemma \ref{lem:compoundeigen}, we see that this is the case if and only if the largest eigenvalue of $\Sfunc_k(A)$ is at most $\l_1 + \dots + \l_k$ for $k = 1,\ldots,n-1$. \end{proof} We shall now derive the description of all faces of the Schur-Horn orbitope $\SHsym_M$. Since $\SHsym_M$ is a spectrahedron, by Theorem \ref{thm:symspec}, we know that all of its faces are exposed faces. Hence a face of $\SHsym_M$ is the set of points maximizing a linear function $\ell : \sym \rightarrow \R$. The canonical $O(n)$-invariant inner product on $\sym$ is given by $\inner{A,B} = \Tr(AB)$ and, identifying $\sym$ with its dual space via this inner product, a linear function may be written as $\ell(\,\cdot \,) = \inner{B,\, \cdot \,}$. Note that the dual space $(\sym)^*$ is equipped with the contragredient action, that is, $g \cdot \ell = \inner{g^T B g, \,\cdot \,}$. \begin{theorem} \label{thm:facelatticeSH} Every $O(n)$-orbit of faces of $\SHsym_M$ intersects the pullback of a unique $\mathfrak{S}_n$-orbit of faces of the permutahedron $\Pi(\l(M))$. In particular, the faces of $\SHsym_M$ are products of symmetric Schur-Horn orbitopes of smaller dimensions corresponding to flags in $\R^n$.
\end{theorem} \begin{proof} Let $F$ be a face of $\SHsym_M$ and let $\ell = \inner{B, \,\cdot\,}$ be a linear function maximized at $F$. Then the face $g \cdot F$ is given by $g \cdot \ell$ and we may identify $O(n) \cdot F$ with the orbit $O(n) \cdot \ell$. Since $B$ is orthogonally diagonalizable, the orbit $O(n) \cdot \ell$ contains the diagonal matrices $\pi \cdot \l(B)$ for $\pi \in \mathfrak{S}_n$. The corresponding faces are the pullbacks of the orbit of faces of $\Pi(\l(M))$ given by the linear function $\inner{\l(B),\cdot}_{\R^n}$. Faces of the permutahedron correspond to flags of coordinate subspaces, and $O(n)$ translates these to arbitrary flags of subspaces. \end{proof} Facets of the permutahedron $\Pi(p)$ correspond to coordinate subspaces. Replacing these with arbitrary subspaces yields supporting hyperplanes for the Schur-Horn orbitope $\SHsym_M$. \begin{corollary} \label{cor:SH2} The Schur-Horn orbitope $\SH_M$ is the set of matrices $A \in \sym $ with $\Tr(A) = \Tr(M)$ such that \[ \Tr(A|_L) \le \l_1(M) + \l_2(M) + \cdots + \l_{\dim(L)}(M) \quad \hbox{for every subspace $L \subseteq \R^n$.} \] \end{corollary} The following three examples show how Theorem \ref{thm:facelatticeSH} translates into explicit face lattices. \begin{example}[The free spectrahedron] \label{ex:freespec} Let $M = e_1 e_1^T \in \sym$ be the diagonal matrix with diagonal $(1,0,\ldots,0)$. The orbitope $ \SHsym_M$ is the convex hull of all symmetric rank $1$ matrices with trace $1$, and hence $\, \SHsym_M = {\rm PSD}_n \,\cap \,\{{\rm trace} = 1\}$. This orbitope plays the role of a ``simplex among spectrahedra'' because every compact spectrahedron is an affine section of it, for suitable $n$. The face $\SHsym_M^B$ of the orbitope $\SHsym_M$ in direction $B \in \sym$ is isomorphic to \[ \conv \{ uu^T : u \in \mathbb{S}^{n-1} \cap {\rm Eig}_{\rm max}(B) \} \] where $\mathbb{S}^{n-1}$ is the unit sphere and ${\rm Eig}_{\rm max}(B)$ is the eigenspace of $B$ with maximal eigenvalue.
Thus, $\SH_M^B$ is isomorphic to a lower-dimensional Schur-Horn orbitope for a rank one matrix. We conclude that the face lattice of $\SHsym_M$ consists of the linear subspaces of $\R^n$ ordered by inclusion. This fact is well known; see \cite[\S II.12]{Barvinok}. The dimension of a face corresponding to a $k$-subspace is $\tbinom{k+1}{2}-1$. The projection $\diag(\SHsym_M)$ is the standard $(n-1)$-dimensional simplex $\Delta_{n-1} = \conv\{ e_1,e_2,\ldots,e_n \}$ whose faces correspond to the coordinate subspaces. \qed \end{example} \begin{example}[Spectrahedral hypersimplices] We now describe continuous analogs of hypersimplices, extending the simplices in Example \ref{ex:freespec}. Fix $0 < k < n$ and let $M \in \sym$ be the diagonal matrix with $k$ ones and $n-k$ zeros. We call the orbitope $\SH_M$ the {\em $(n,k)$-spectrahedral hypersimplex}, as its diagonal projection $\diag( \SH_M) = \Delta(n,k)$ is the classical $(n,k)$-hypersimplex; cf.~\cite[Example 0.11]{Ziegler}. For instance, if $n=4$ and $k = 2$ then $\SH_M$ is nine-dimensional and $\diag(\SH_M)$ is an octahedron. Up to $\mathfrak{S}_4$-symmetry, the octahedron has one orbit of vertices and edges but two orbits of triangles. The pullback of any edge is a circle, and the pullbacks of the triangles are five-dimensional symmetric Schur-Horn orbitopes $\calO_M$ for $\l(M) = (1,0,0)$ and $\l(M) = (1,1,0)$. Both facets are isomorphic to free spectrahedra. \qed \end{example} \begin{example}[The generic symmetric Schur-Horn orbitope] Let $M \in \sym$ be a symmetric matrix with distinct eigenvalues, e.g. $\lambda(M) =(1,2,\dots,n)$. The image of $\SHsym_M$ under the diagonal map is the classical permutahedron $\Pi_n = \Pi(1,2,3,\dots,n)$. Its face lattice may be described as the collection of all flags of coordinate subspaces in $\R^n$ ordered by refinement. We may associate to every $B \in \sym$ the complete flag whose $k$-th subspace is the direct sum of the eigenspaces of the $k$ largest eigenvalues.
Thus, $O(n) \cdot M$ may be identified with the complete flag variety over $\R$. As for the facial structure, the face $\SH_M^B$ is the convex hull of the orbit ${\rm stab}_{O(n)}(B) \cdot A^*$, where $A^* \in O(n) \cdot M$ is any matrix maximizing $\inner{B,\,\cdot\,}$ over the orbit. Here, the stabilizer decomposes into a product of groups isomorphic to $O(d_i)$ where $d_i$ is the dimension of the $i$-th eigenspace of $B$. Hence, the face $\SH_M^B$ is isomorphic to a Cartesian product of generic Schur-Horn orbitopes and is of dimension $\sum_i\bigl(\tbinom{d_i+1}{2}-1\bigr)$. The face only depends on the flag associated to $B$. This implies that the face lattice of $\SH_M$ is isomorphic to the set of partial flags ordered by refinement. Again, in every orbit of flags there is a flag consisting only of coordinate subspaces. These special flags form the face lattice of the standard permutahedron $\Pi(1,2,\dots,n) = \diag(\SH_M)$. We regard $\SH_M$ as a continuous analog of the permutahedron. \qed \end{example} We conclude this subsection with a discussion of the algebraic boundary $\partial_a \SHsym_M$ of the Schur-Horn orbitope. Let $\K$ be the smallest subfield of $\R$ that contains the eigenvalues $\lambda_1,\ldots,\lambda_n$, and suppose that the $\lambda_i$ are sufficiently general. Then the hypersurface $\partial_a \SHsym_M$ is defined in the affine space $\{A\in \Sym_2(\R^n)\mid\Tr(A)=\Tr(M)\}$ by the following polynomial of degree $2^n-2$ in $\binom{n+1}{2}$ unknowns over the field $\K$: \[ f(A) \quad = \quad \prod_{k=1}^{n-1}{\rm det} \Bigl(\Sfunc_k(A) - \sum_{i=1}^k \lambda_i \cdot {\rm Id}_{\binom{n}{k}} \Bigr) . \] However, from a computer algebra perspective, this is not what we want. Assuming that $M$ has entries in $\Q$, we prefer not to pass to the field extension $\K$, but we want the algebraic boundary $\partial_a \SHsym_M$ to be the $\Q$-Zariski closure of the above hypersurface $\{f(A) = 0\}$. For instance, suppose that the characteristic polynomial of $M$ is irreducible over $\Q$.
Then we must take the product of $f(A)$ over all permutations of the eigenvalues $\lambda_1,\ldots,\lambda_n$, and the polynomial $g(A)$ that defines $\partial_a \SHsym_M$ over $\Q$ is the reduced part of that product. It equals \[ g(A) \quad = \quad \prod_{k=1}^{\lceil n/2 \rceil}{\rm det} \bigl(\Sfunc_k(A) \oplus \Sfunc_k(-M)\bigr) , \] where $\oplus$ denotes the {\em tensor sum} of two square matrices of the same size (see e.g.~\cite[\S 3]{NieStu}). Here the product goes only up to $\lceil n/2 \rceil$ because the matrices $A$ and $M$ have the same trace. For special matrices $M$, the characteristic polynomial may factor over $\Q$, and in this case the algebraic boundary $\partial_a \SHsym_M$ is cut out by a factor of the polynomial $f(A)$ or $g(A)$. \subsection{Skew-symmetric Schur-Horn orbitopes} The space $\skew$ consists of skew-symmetric $n {\times} n$-matrices $N$. The eigenvalues of $N$ are purely imaginary, say $\pm i\hl_1,\dots,\pm i\hl_{k}$, where $i=\sqrt{-1}$, with $k = \lfloor \tfrac{n}{2} \rfloor$ and an additional $0$ eigenvalue if $n$ is odd. Thus $N$ is not diagonalizable over $\R$, but the adjoint $O(n)$-action brings the matrix $N$ into the normal form \[ gNg^T \,=\, \left( \begin{array}{rr} & \Lambda \\ -\Lambda & \end{array} \right) \text{ for $n$ even} \quad \text{ and } \quad gNg^T \,=\, \left( \begin{array}{rcr} & & \Lambda \\ & 0 & \\ -\Lambda & & \end{array} \right) \text{ for $n$ odd.} \] Here $g$ is a suitable matrix in $ O(n)$, $\Lambda$ is the diagonal matrix with diagonal $\hl_1 \ge \cdots \ge \hl_k \ge 0$, and we denote $\hl(N) = ( \hl_1, \hl_2, \dots, \hl_k)$. Let $\Sdiag : \skew \rightarrow \R^k$ be the linear map such that \begin{eqnarray*} \Sdiag(N) &=& (N_{1,k+1}, N_{2,k+2},\dots,N_{k,n}) \text{\qquad if $n = 2k$, and}\\ \Sdiag(N) &=& (N_{1,k+2}, N_{2,k+3},\dots,N_{k,n}) \text{\qquad if $n = 2k+1$.} \end{eqnarray*} If $N$ is in normal form as above, then $\Sdiag(N) = \diag(\Lambda) = \hl(N)$.
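The paired spectrum $\pm i\hl_j$ and the normal form can be verified numerically. A minimal sketch (assuming numpy; we take a random skew-symmetric $6\times 6$-matrix, so $k=3$):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
N = B - B.T                        # random skew-symmetric matrix, n = 2k = 6
k = 3

ev = np.linalg.eigvals(N)
assert np.allclose(ev.real, 0.0)   # the spectrum is purely imaginary

# lambda-tilde(N): the positive imaginary parts, sorted decreasingly
lam_tilde = np.sort(ev.imag[ev.imag > 1e-9])[::-1]

# Rebuild the normal form [[0, Lambda], [-Lambda, 0]] and compare spectra
Lam = np.diag(lam_tilde)
Z = np.zeros((k, k))
normal_form = np.block([[Z, Lam], [-Lam, Z]])
assert np.allclose(np.sort(np.linalg.eigvals(normal_form).imag),
                   np.sort(ev.imag))
print("lambda-tilde(N) =", lam_tilde)
```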
We call $\Sdiag(N)$ the \emph{skew-diagonal} of $N$. In analogy to the symmetric case, the set $\Sdiag(\SH_N)$ of all skew-diagonals arising from $\SH_N$ is nicely behaved; in fact, the necessary changes are rather modest. For a point $q \in \R^k$ we denote by $|q| = (|q_1|,\dots,|q_k|)$ the vector of absolute values. For $p \in \R^k$ let $\BPi(p)$ be the set of points $q \in \R^k$ such that $|q|$ is {\em weakly majorized} by $|p|$. This means that $|p|$ and $|q|$ satisfy the majorization conditions except that $\sum_i |q_i| \le \sum_i |p_i|$ is allowed. The polytope $\BPi(p)$ is the \emph{$B_k$-permutahedron}. It is the convex hull of the orbit of $p$ under the action of the Coxeter group $B_k$, the group of all $2^k \cdot k!$ signed permutations. The $B_k$-permutahedron $\BPi(p)$ for $p = (p_1 \ge p_2 \ge \cdots \ge p_k)$ consists of all points $q \in \R^k$ with \begin{equation} \label{eq:Bpermineq} \sum_{i \in I} q_i - \sum_{j \in J} q_j \ \le \ \sum_{i = 1}^{|I \cup J|} p_i \quad \hbox{ for any $I,J \subseteq [k]$ with $I \cap J = \emptyset$.} \end{equation} As expected, we have the following analog of the symmetric Schur-Horn theorem. \begin{proposition}[The skew-symmetric Schur-Horn theorem~\cite{Leite99}] Let $N \in \skew$ with $\hl(N) = (\hl_1 \ge \cdots \ge \hl_k)$ and $\SH_N$ the skew-symmetric Schur-Horn orbitope of $N$. Then $|\Sdiag(N)|$ is weakly majorized by $(\hl_1,\dots,\hl_k)$. In particular, we have $\Sdiag(\SH_N) = \BPi(\hl(N))$. \end{proposition} The same arguments as in the symmetric case yield the following results. \begin{theorem} \label{thm:facelatticeSkewSH} Every $O(n)$-orbit of faces of the skew-symmetric Schur-Horn orbitope $\SH_N$ contains the pullback of a unique $B_k$-orbit of faces of the $B_k$-permutahedron $\BPi(\hl(N))$. \end{theorem} \begin{corollary} The skew-symmetric Schur-Horn orbitope $\SH_N$ coincides with the set of skew-symmetric matrices $A$ such that $\hl(A)$ is weakly majorized by $\hl(N)$.
\end{corollary} \begin{example} Fix $n=6$ and $k=3$, and $p=(3,2,1)$. Then the system (\ref{eq:Bpermineq}) consists of $26$ linear inequalities, namely, six inequalities $\pm q_i \leq 3$, twelve inequalities $\pm q_i \pm q_j \leq 5$, and eight inequalities $\pm q_1 \pm q_2 \pm q_3 \leq 6$. Their solution set is the $B_3$-permutahedron $\BPi(3,2,1)$, commonly known as the {\em truncated cuboctahedron}, and it has $48$ vertices, $72$ edges and $26$ facets (six octagons, twelve squares and eight hexagons). A picture is shown in Figure \ref{F:B3permuta}. \begin{figure}[htb] \[ \includegraphics[width=6.5cm]{figures/Bpermut} \] \vskip -0.3cm \caption{The $B_3$-permutahedron is the truncated cuboctahedron.} \label{F:B3permuta} \end{figure} Let $N$ denote a skew-symmetric $6 {\times} 6$-matrix with $\hl(N) = (3,2,1)$. Theorem \ref{thm:facelatticeSkewSH} implies that the facets of the $15$-dimensional orbitope $\SH_N$ come in three families, corresponding to $O(6)$-orbits of the octagons, squares, and hexagons in Figure \ref{F:B3permuta}. The facets arising from the octagons are skew-symmetric Schur-Horn orbitopes for $SO(4)$ with skew-diagonal $(2,1)$ and therefore have dimension six. The facets arising from the squares are the product of a line segment and a disc, coming from $O(2)\times SO(2)$ with $O(2)$ acting by the determinant. The facets arising from the hexagons are $O(3)$-orbitopes isomorphic to symmetric Schur-Horn orbitopes with eigenvalues $(1,2,3)$ and therefore have dimension five. \qed \end{example} We next present a spectrahedral description of an arbitrary skew-symmetric Schur-Horn orbitope $\SH_N$. To derive this, we return to symmetric matrices and their real eigenvalues. \begin{lemma} Let $N \in \skew$ be a matrix with eigenvalues $\pm i \hl_1,\dots, \pm i \hl_k$ and let \[ \hat{N} \,\,= \,\, \left( \begin{array}{rr} 0 & N \\ -N & 0 \\ \end{array} \right) \,\,\in\,\, \mathrm{Sym}_2\R^{2n}. 
\] Then $\hat{N}$ has eigenvalues $\hl_1,\hl_1,\hl_2,\hl_2,\dots,\hl_k,\hl_k, -\hl_1,-\hl_1,-\hl_2,-\hl_2,\dots,-\hl_k,-\hl_k$, together with $0$ of multiplicity two if $n$ is odd. For any $1 \leq j \leq k$, the additive compound matrix $ \Sfunc_{2j}(\hat{N}) $ has largest eigenvalue $2(\hl_1 + \hl_2 + \cdots + \hl_j)$. \end{lemma} We conclude that each skew-symmetric Schur-Horn orbitope $\SH_N$ is a spectrahedron: \begin{theorem} \label{thm:skewsymspec} Let $N \in \skew$ with $\hl(N) = (\hl_1 \ge \cdots \ge \hl_k)$. Then \[ \SH_N \, = \, \left\{ A \in \skew: 2(\hl_1 + \cdots + \hl_j) {\rm Id}_{\binom{2n}{2j}} - \Sfunc_{2j}(\hat{A}) \, \succeq \,0\,\,\, \hbox{for}\,\, j = 1,\ldots,k \right\}. \] \end{theorem} From this theorem we can derive a description of the algebraic boundary as before, and again the issue arises that the $\tilde \lambda_j$ lie in an extension $\K$ over the field of definition of $N$, which will usually be $\Q$. At present we do not know whether Theorems \ref{thm:symspec} and \ref{thm:skewsymspec} can be extended to obtain spectrahedral representations of the respective orbitopes over $\Q$. We close this section with one more example of a skew-symmetric Schur-Horn orbitope. \begin{example} Consider the skew-symmetric Schur-Horn orbitope $\SH_N$ for some $N \in \skew$ with $\hl(N) = (1,0,\dots,0) \in \R^k$. According to Theorem \ref{thm:skewsymspec}, a spectrahedral representation~is \[\SH_N \, = \, \bigl\{ \,A \in \skew: \Sfunc_{2j}(\hat{A}) \,\preceq \, 2 \cdot {\rm Id}_{\binom{2n}{2j}} \,\,\,\hbox{for}\,\, j = 1,\ldots,k \bigr\}. \] The projection $\Sdiag(\SH_N)$ to the skew-diagonal is the \emph{crosspolytope} $\conv \{ \pm e_1, \dots, \pm e_k \}$. This is a regular polytope with symmetry group $B_k$, and it has only one orbit of faces in each dimension. The orbitope $\SH_N$ is the $d=2$ instance of the {\em Grassmann orbitope} $\mathcal{G}_{d,n}$. These are important in the theory of calibrated manifolds, and we shall study them in Section 7. 
\qed \end{example} \section{Tautological Orbitopes} \label{sec:tautological} We argued in Section~\ref{sec:Setup} that, given a compact group $G$ acting algebraically on $V \simeq\R^n$, we can identify $G$ with a subgroup of $O(n)$, or even of $SO(n)$ when $G$ is connected. The ambient space $ \End(V) \simeq \mathfrak{gl}_n $ is itself an $n^2$-dimensional real representation of the group $G$. The action of $G$ on $\End(V)$ is by left multiplication. The orbit of the identity matrix ${\rm Id}_n$ under this action is the group $G$ itself. We call the corresponding orbitope $\,\conv(G) =\conv(G \cdot {\rm Id}_n)\,$ the \emph{tautological orbitope} for the pair $(G,V)$. This orbitope lives in $ \End(V)$, and it serves as an initial object because it maps linearly to every orbitope $\conv(G \cdot v)$ in $V$. Tautological orbitopes of finite permutation groups have been studied under the name of \emph{permutation polytopes} (see \cite{onn93}). The most famous of them all is the {\em Birkhoff polytope} for $G = \mathfrak{S}_n$, which was studied for other Coxeter groups by McCarthy, Ogilvie, Zobin, and Zobin~\cite{Zobin}. In this section we investigate the tautological orbitopes for the full groups $O(n)$ and $SO(n)$. Similar to the Schur-Horn orbitopes in Section~\ref{sec:SchurHorn}, the facial structure is governed by polytopes arising from the projection onto the diagonal. We begin with the example $G = SO(3)$. \subsection{Rotations in $3$-dimensional space} The group $SO(3)$ of $3 {\times} 3$ rotation matrices has dimension three. Its tautological orbitope is a convex body of dimension nine. The following spectrahedral representation was suggested to us by Pablo Parrilo. \begin{proposition} \label{prop:pabloSO3} The tautological orbitope ${\rm conv}(SO(3))$ is a spectrahedron whose boundary is a quartic hypersurface. 
In fact, a $3 {\times} 3$-matrix $X = (x_{ij})$ lies in ${\rm conv}(SO(3))$ if and only~if \begin{equation} \label{eq:magic} \begin{pmatrix} 1 {+} x_{11} {+} x_{22} {+} x_{33} & x_{32} - x_{23} & x_{13} - x_{31} & x_{21} - x_{12} \\ x_{32} - x_{23} & 1{+}x_{11} {-} x_{22} {-} x_{33} & x_{21} + x_{12} & x_{13} + x_{31} \\ x_{13} - x_{31} & x_{21} + x_{12} & 1 {-} x_{11} {+} x_{22} {-} x_{33} & x_{32} + x_{23} \\ x_{21} - x_{12} & x_{13} + x_{31} & x_{32} + x_{23} & 1 {-} x_{11} {-} x_{22} {+} x_{33} \end{pmatrix} \,\,\succeq \,\, 0. \end{equation} \end{proposition} \begin{proof} We first claim that ${\rm conv}(SO(3))$ coincides with the set of all $3 {\times} 3$-matrices \begin{equation} \label{eq:pabloSU2} \begin{pmatrix} u_{11} {+} u_{22} {-} u_{33} {-} u_{44} & 2 u_{23} -2u_{14} & 2u_{13} + 2 u_{24} \\ 2 u_{23}+2 u_{14} & u_{11}{-}u_{22}{+} u_{33}{-} u_{44} & 2 u_{34}-2 u_{12} \\ 2 u_{24}-2 u_{13} & 2 u_{12}+2 u_{34} & u_{11} {-} u_{22}{-} u_{33} {+} u_{44} \end{pmatrix} \end{equation} where $U = (u_{ij})$ runs over all positive semidefinite $4 {\times} 4$-matrices having trace $1$. Positive semidefinite $4 {\times} 4$-matrices with both trace $1$ and rank $1$ are of the form $$ U \,\,\, = \,\,\, \frac{1}{a^2+b^2+c^2+d^2} \begin{pmatrix} a^2 & ab & ac & ad \\ ab & b^2 & bc & bd \\ ac & bc & c^2 & cd \\ ad & bd & cd & d^2 \end{pmatrix}. $$ Their convex hull is the free spectrahedron of Example \ref{ex:freespec}. The image of the above rank $1$ matrices $U$ under the linear map (\ref{eq:pabloSU2}) is precisely the group $SO(3)$. This parametrization is known as the {\em Cayley transform}. Geometrically, it corresponds to the double cover $SU(2) \rightarrow SO(3)$. The claim follows because the linear map commutes with taking the convex hull. 
The symmetric $4 {\times} 4$-matrices $U = (u_{ij})$ with ${\rm trace}(U) = 1$ form a nine-dimensional affine space, and this space is isomorphic to the nine-dimensional space of all $3 {\times} 3$-matrices $X = (x_{ij})$ under the linear map given in \eqref{eq:pabloSU2}. We can express each $u_{ij}$ in terms of the $x_{kl}$ by inverting the linear relations $ x_{11} = u_{11} + u_{22} - u_{33} - u_{44}$, $ x_{12} = 2 u_{23} - 2u_{14}$, etc. The resulting symmetric $4 {\times} 4$-matrix $U $ is precisely the matrix \eqref{eq:magic} in the statement of Proposition \ref{prop:pabloSO3}. \end{proof} The ideal of the group $O(3)$ is generated by the entries of the $3 {\times} 3$-matrix $X \cdot X^T - {\rm Id}_3$, while the prime ideal of $SO(3)$ is that same ideal plus $\langle {\rm det}(X)-1 \rangle$. We can check that the prime ideal of $SO(3)$ coincides with the ideal generated by the $2 {\times} 2$-minors of the matrix~\eqref{eq:magic}. Thus the group $SO(3)$ is recovered as the set of matrices~\eqref{eq:magic} of rank one. Proposition \ref{prop:pabloSO3} implies that ${\rm conv}(SO(3))$ is affinely isomorphic to the free spectrahedron for $n=4$, that is, to the set of positive semidefinite $4 {\times} 4$-matrices with trace equal to $1$. This implies a characterization of all faces of the tautological orbitope for $SO(3)$. First, all faces are exposed because ${\rm conv}(SO(3))$ is a spectrahedron. All of its proper faces are free spectrahedra, for $n=1,2,3$, so they have dimensions $0$, $2$ and $5$, as seen in Example \ref{ex:freespec}. \subsection{The orthogonal group} We now examine $\,O(n) \, = \,\{X \in \R^{n \times n} : X \cdot X^T = {\rm Id}_n\}$. As before, let $\diag : \R^{n \times n} \rightarrow \R^n$ denote the projection of the $n {\times} n$-matrices onto their diagonals. 
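Before turning to $O(n)$, the representation in Proposition \ref{prop:pabloSO3} can be tested numerically. The following sketch (an illustrative addition, assuming only {\tt numpy}) builds a rotation $X \in SO(3)$ from a unit quaternion and checks that the $4 {\times} 4$-matrix in \eqref{eq:magic} is positive semidefinite with spectrum $(4,0,0,0)$, i.e.~of rank one, as the proof above predicts.

```python
import numpy as np

# Numerical illustration: for X in SO(3), the 4x4 matrix from the LMI
# above is positive semidefinite of rank one (spectrum (4, 0, 0, 0)).
rng = np.random.default_rng(0)
h = rng.normal(size=4)
a, b, c, d = h / np.linalg.norm(h)             # unit quaternion

# standard quaternion -> rotation matrix formula
X = np.array([
    [a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(b*d + a*c)],
    [2*(b*c + a*d),         a*a - b*b + c*c - d*d, 2*(c*d - a*b)],
    [2*(b*d - a*c),         2*(c*d + a*b),         a*a - b*b - c*c + d*d],
])
assert np.allclose(X @ X.T, np.eye(3)) and np.isclose(np.linalg.det(X), 1.0)

x = X  # abbreviate; x[i, j] is the entry x_{i+1, j+1} of the text
M = np.array([
    [1 + x[0,0] + x[1,1] + x[2,2], x[2,1] - x[1,2], x[0,2] - x[2,0], x[1,0] - x[0,1]],
    [x[2,1] - x[1,2], 1 + x[0,0] - x[1,1] - x[2,2], x[1,0] + x[0,1], x[0,2] + x[2,0]],
    [x[0,2] - x[2,0], x[1,0] + x[0,1], 1 - x[0,0] + x[1,1] - x[2,2], x[2,1] + x[1,2]],
    [x[1,0] - x[0,1], x[0,2] + x[2,0], x[2,1] + x[1,2], 1 - x[0,0] - x[1,1] + x[2,2]],
])
evals = np.linalg.eigvalsh(M)
assert evals.min() > -1e-9                     # positive semidefinite
assert np.isclose(evals.max(), 4.0)            # rank one: spectrum (4,0,0,0)
```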
\begin{lemma} \label{lem:Ontodiagonal} The projection $\diag({\rm conv}(O(n)))$ of the tautological orbitope for the orthogonal group $O(n)$ to its diagonal is precisely the $n$-dimensional cube $[-1, +1]^n$. \end{lemma} \begin{proof} The columns of a matrix $X \in O(n)$ are unit vectors. Thus every coordinate $x_{ij}$ is bounded by $1$ in absolute value, and $\diag({\rm conv}(O(n)))$ is a subset of the cube. For the reverse inclusion, note that all $2^n$ diagonal matrices with entries $\pm1$ are orthogonal matrices. \end{proof} The cube $[-1,+1]^n$ is the special $B_n$-permutahedron $\BPi(1,1,\dots,1)$. As with Schur-Horn orbitopes, the projection onto this polytope reveals the facial structure. As general endomorphisms are not normal, the key concept of diagonalizability is replaced by that of \emph{singular value decomposition}. Recall that for any linear map $A \in \R^{n \times n}$ there are orthogonal transformations $U, V \in O(n)$ such that $U A V^T$ is diagonal with entries $\s(A) = (\s_1(A) {\ge} \s_2(A) {\ge} \cdots {\ge} \s_n(A)) \in \R^n_{\ge0}$. These entries are the \emph{singular values} of $A$. We shall see in (\ref{eq:Onspec}) that ${\rm conv}(O(n))$ is a spectrahedron, hence all of its faces are exposed faces. The following result recursively characterizes all faces of this tautological orbitope. \begin{theorem} \label{thm:tautoOn} Let $\ell = \inner{ B, \cdot \,}$ be a linear function on $\R^{n \times n}$ with $B \in \R^{n \times n}$. Then the face of ${\rm conv}(O(n))$ in direction $\ell$ is isomorphic to ${\rm conv}(O(m))$ where $m = \dim \ker (B)$. \end{theorem} \begin{proof} Let $\ell( \cdot ) = \inner{ B, \cdot \,}$ be a linear function with $B \in \R^{n \times n}$, so that $\ell(A)=\mbox{Trace}(AB)$. We fix a singular value decomposition $U\Sigma V = B$ of the matrix $B$. Here $\Sigma$ is an $n {\times} n$ diagonal matrix with its first $\,n-m \,$ entries positive and its remaining $m$ entries zero. 
This matrix also defines a linear function $\ell'( \cdot ) = \inner{ \Sigma, \cdot \,}$ on $\R^{n \times n}$. Cyclic invariance of the trace ensures that the faces ${\rm conv}(O(n))^\ell$ and ${\rm conv}(O(n))^{\ell'}$ are isomorphic. The subset of $O(n)$ at which $\ell'$ is maximized is the subgroup $\{ g \in O(n) : g \cdot e_i = e_i \text{ for } i = 1,\dots,n-m \}$. The convex hull of this subgroup equals ${\rm conv}(O(n))^{\ell'}$. It coincides with the tautological orbitope for $O(m)$. \end{proof} We interpret Theorem \ref{thm:tautoOn} geometrically as saying that the tautological orbitope for $O(n)$ is a continuous analog of the $n$-dimensional cube. Every face of the cube is a smaller dimensional cube and the dimension of a face maximizing a linear functional $\ell$ is determined by the support of $\ell$. The role of the support is now played by the rank of the matrix $B$. This behavior yields information about the Carath\'{e}odory number of the tautological orbitope. \begin{proposition} The Carath\'{e}odory number of the orbitope $\conv( O(n) )$ is at most $n+1$. \end{proposition} \begin{proof} By \cite[Lemma~3.2]{LSS08}, the Carath\'{e}odory number of a convex body $K$ is bounded via \[ \cara(K)\,\, \le \,\, 1 + \max\{ \cara(F) : F \subset K \text{ a proper face } \}. \] Since every proper face is isomorphic to $\conv(O(k))$ for some $k < n$ the result follows by induction on $n$. The base case is $n = 1$ for which $\conv(O(1))$ is a $1$-simplex. \end{proof} Note that the orbit $O(n) \cdot {\rm Id}_n$ coincides with the orbit of the identity matrix ${\rm Id}_n$ under the action of the product group $O(n) \times O(n)$ by both right and left multiplication. Hence the tautological orbitope ${\rm conv}(O(n))$ is also an $O(n) \times O(n)$-orbitope for that action. We shall now digress and study these orbitopes in general. After we have seen (in Theorem \ref{thm:OnOnspec}) that these are spectrahedra, we shall resume our discussion of ${\rm conv}(O(n))$. 
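The optimization underlying the proof of Theorem \ref{thm:tautoOn} is the maximization of $g \mapsto {\rm Trace}(gB)$ over $O(n)$; by von Neumann's trace inequality, the maximum value is the sum of the singular values of $B$, attained at $g = V^T U^T$ for $B = U \Sigma V$. A quick numerical illustration (our addition, assuming only {\tt numpy}):

```python
import numpy as np

# Illustration: g -> Tr(gB) attains its maximum over O(n) at g = V^T U^T,
# where B = U Sigma V is an SVD, with maximum value sigma_1 + ... + sigma_n.
rng = np.random.default_rng(1)
n = 4
B = rng.normal(size=(n, n))
U, s, Vh = np.linalg.svd(B)                    # B = U @ diag(s) @ Vh

g_opt = Vh.T @ U.T                             # orthogonal maximizer
assert np.allclose(g_opt @ g_opt.T, np.eye(n))
assert np.isclose(np.trace(g_opt @ B), s.sum())

# random orthogonal matrices never exceed this value (von Neumann)
for _ in range(200):
    q, _ = np.linalg.qr(rng.normal(size=(n, n)))
    assert np.trace(q @ B) <= s.sum() + 1e-9
```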
\subsection{Fan orbitopes} The group $G = O(n) \times O(n)$ acts on $\R^{n \times n}$ by simultaneous left and right translation. The action is given, for $(g,h) \in O(n) \times O(n)$ and $A \in \R^{n \times n}$, by \begin{equation} \label{eq:fanact} (g,h) \cdot A \,\,:=\,\, g A h^T. \end{equation} Ky Fan proved in \cite{Fan51} that the Schur-Horn theorem for symmetric matrices under conjugation by $O(n)$ generalizes to arbitrary square matrices under this $O(n)\times O(n)$ action. Now, singular values play the role of the eigenvalues. The following is a convex geometric reformulation: \begin{lemma}[Ky Fan \cite{Fan51}] \label{lem:Fan} For a square matrix $A \in \R^{n \times n}$ let $\calO_A$ denote its orbitope under the action (\ref{eq:fanact}) of the group $O(n) \times O(n)$. Then the image $\diag(\calO_A)$ of the projection to the diagonal is the $B_n$-permutahedron with respect to the singular values $\s(A)$. \end{lemma} We shall refer to $\,\calO_A = \conv \{ (g,h)\cdot A : g,h \in O(n) \}\,$ as the {\em Fan orbitope} of the matrix $A$. From Lemma \ref{lem:Fan}, one easily deduces the analogs of Theorems \ref{thm:facelatticeSH} and \ref{thm:facelatticeSkewSH}. \begin{remark} The facial structure of the Fan orbitope $\calO_A$ is determined by the facial structure of the $B_n$-permutahedron $\BPi(\s(A))$ specified by the singular values of the matrix~$A$. \end{remark} The description of the $B_n$-permutahedron in terms of weak majorization was stated in~\eqref{eq:Bpermineq}. Rephrasing these same linear inequalities for the singular values, and using Lemma~\ref{lem:Fan}, now leads to a spectrahedral description of the Fan orbitopes. For that we make use of the alternative characterization of singular values as the square roots of the (non-negative) eigenvalues of $AA^T$. 
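Lemma \ref{lem:Fan} can be probed numerically: the diagonal of every orbit point $gAh^T$ must satisfy the inequalities \eqref{eq:Bpermineq} with $p = \s(A)$. A sketch (our illustrative addition, assuming only {\tt numpy} and the Python standard library):

```python
import numpy as np
from itertools import combinations

# Illustration of Ky Fan's lemma: the diagonal of any orbit point g A h^T
# satisfies the B_n-permutahedron inequalities for the singular values of A.
rng = np.random.default_rng(2)
n = 3
A = rng.normal(size=(n, n))
sigma = np.sort(np.linalg.svd(A, compute_uv=False))[::-1]

def bperm_ok(q, p, tol=1e-9):
    """Check sum_{i in I} q_i - sum_{j in J} q_j <= p_1 + ... + p_{|I u J|}
    for all disjoint I, J; the sharpest sign choice takes |q_i| on I u J."""
    idx = range(len(q))
    for r in range(1, len(q) + 1):
        for union in combinations(idx, r):
            if np.sum(np.abs(q[list(union)])) > p[:r].sum() + tol:
                return False
    return True

for _ in range(100):
    g, _ = np.linalg.qr(rng.normal(size=(n, n)))
    h, _ = np.linalg.qr(rng.normal(size=(n, n)))
    q = np.diag(g @ A @ h.T)
    assert bperm_ok(q, sigma)
```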
Using Schur complements, it can be seen that the $2n \times 2n$-matrix \[ S(A) \,\,= \,\,\left( \begin{array}{ll} 0 & A \\ A^T & 0\\ \end{array}\right) \] has the eigenvalues $\pm \s_1(A), \dots, \pm \s_n(A)$. We form its additive compound matrices as before. \begin{theorem} \label{thm:OnOnspec} Let $A$ be a real $n {\times} n$-matrix with singular values $\s_1 \ge \s_2 \ge \cdots \ge \s_n$. Then its Fan orbitope $\calO_A$ equals the spectrahedron $$ \calO_A \,\, =\,\,\bigl\{ X \in \R^{n \times n}\,:\, \Sfunc_k( S(X) )\, \preceq \, (\s_1 + \cdots + \s_k) {\rm Id}_{\tbinom{2n}{k}} \quad \hbox{for}\,\, k=1,\dotsc,n\,\bigr\}. $$ \end{theorem} Fix an integer $p \in \{1,2,\ldots,n\}$. The \emph{Ky Fan $p$-norm} is defined by \[ X \,\mapsto\, \sum_{i=1}^p \s_i(X). \] This function is indeed a norm on $\R^{n \times n}$, and its unit ball is the polar body of the Fan orbitope $\calO_A$ where $A$ is the diagonal matrix with $p$ diagonal entries $1$ and $n-p$ diagonal entries $0$. The same reasoning as in Theorem \ref{thm:OnOnspec} shows that the unit ball in the Ky Fan $p$-norm is a spectrahedron: it is defined by the single linear matrix inequality $\Sfunc_p( S(X) ) \preceq {\rm Id}_{\tbinom{2n}{p}}$. Two norms of special interest in applied linear algebra are the \emph{operator norm} $\| A \| := \s_1(A)$ and the \emph{nuclear norm} $\|A\|_{*} := \s_1(A) + \cdots + \s_n(A)$. Indeed, these two norms played the key role in work of Fazel, Recht and Parrilo \cite{Fazel07} on compressed sensing in the matrix setting. Both the operator norm and the nuclear norm are closely tied to their vector counterparts. \begin{remark} The unit balls in the operator and nuclear norm are both Fan orbitopes. The projection to the diagonal yields the unit balls for the $\ell_\infty$ and the $\ell_1$-norm on $\R^n$. These two unit balls are polytopes in $\R^n$, namely, the $n$-cube and the $n$-crosspolytope respectively. \end{remark} Fazel {\it et al.} showed in \cite[Prop.~2.1]{Fazel07} that the nuclear norm ball has the structure of a spectrahedron, and semidefinite programming duality allows for linear optimization over the operator norm ball. 
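The scalar fact underlying these spectrahedral descriptions, namely that $S(X)$ has eigenvalues $\pm\s_i(X)$ and hence that the sum of its $k$ largest eigenvalues is the Ky Fan $k$-norm of $X$, is easy to verify numerically (our illustrative addition, assuming only {\tt numpy}):

```python
import numpy as np

# Illustration: S(X) = [[0, X], [X^T, 0]] has eigenvalues +/- sigma_i(X),
# so the sum of its k largest eigenvalues is the Ky Fan k-norm of X.
rng = np.random.default_rng(3)
n = 4
X = rng.normal(size=(n, n))
sigma = np.sort(np.linalg.svd(X, compute_uv=False))[::-1]

S = np.block([[np.zeros((n, n)), X], [X.T, np.zeros((n, n))]])
eigs = np.sort(np.linalg.eigvalsh(S))[::-1]    # descending order

assert np.allclose(eigs, np.concatenate([sigma, -sigma[::-1]]))
for k in range(1, n + 1):
    assert np.isclose(eigs[:k].sum(), sigma[:k].sum())   # Ky Fan k-norm
```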
The spectrahedral descriptions of both unit balls in $\R^{n \times n}$ are special instances of Theorem \ref{thm:OnOnspec}, which we now spell out explicitly. The operator norm ball consists of all matrices $X$ whose largest singular value is at most $1$. This is equivalent to \begin{equation} \label{eq:Onspec} \left( \begin{array}{ll} {\rm Id}_n & X \\ X^T & {\rm Id}_n \\ \end{array} \right ) \succeq 0. \end{equation} The nuclear norm ball consists of all matrices $X$ for which the sum of the singular values is at most $1$. This is equivalent to saying that the largest eigenvalue of the symmetric $\tbinom{2n}{n}\times\tbinom{2n}{n}$ matrix $ \Sfunc_n( S(X) )$, which equals the sum of the $n$ largest eigenvalues of $S(X)$, is at most $1$. Hence the nuclear norm ball in $\R^{n \times n}$ is the spectrahedron defined by the linear matrix inequality \begin{equation} \label{eq:Onspec2} {\rm Id}_{\tbinom{2n}{n}} - \Sfunc_n( S(X) ) \, \succeq \,0. \end{equation} We are now finally prepared to return to the main aim of this section, which is the study of tautological orbitopes. The proof of Theorem \ref{thm:tautoOn} implies that the convex hull of $O(n)$ equals the set of $n {\times}n$-matrices whose largest singular value is at most $1$. In other words: \begin{corollary} The operator norm ball in $\R^{n \times n}$ is equal to the tautological orbitope of the orthogonal group $O(n)$. The coorbitope ${\rm conv}(O(n))^\circ$ is the nuclear norm ball. Both of these convex bodies are spectrahedra. They are characterized in (\ref{eq:Onspec}) and (\ref{eq:Onspec2}) respectively. \end{corollary} It would be worthwhile to explore implications of our geometric results on orbitopes for algorithmic applications in the sciences and engineering, such as those proposed in~\cite{Fazel07}. \subsection{The special orthogonal group} We next discuss the faces of the tautological orbitope of the group $SO(n)$. 
The relevant convex polytope is now the convex hull $\hcube_n$ of all vertices $v \in \{-1, +1\}^n$ with an even number of $(-1)$-entries. This polytope is known as the \emph{demicube} or \emph{halfcube}. It is a permutahedron for the Coxeter group of type $D_n$, i.e.~signed permutations with an even number of sign changes. The facet hyperplanes of $\hcube_n$ are derived by separating the excluded vertices of $[-1,+1]^n$, those with an odd number of $(-1)$-entries, by hyperplanes through their $n$ neighboring vertices: \begin{equation} \label{eq:SOnfacets} \hcube_n \,\, = \,\,\bigl\{ x \in [-1,+1]^n \,:\, \sum_{i \not\in J} x_i - \sum_{i \in J} x_i \le n-2 \,\,\,\, \hbox{for all $J \subseteq [n]$ of odd cardinality} \bigr\}. \end{equation} While $\hcube_2$ is only a line segment, we have ${\rm dim}(\hcube_n) = n$ for $n \geq 3$. For example, the halfcube $\hcube_3$ is the tetrahedron with vertices $(1,1,1)$, $(-1,-1,1)$, $(-1,1,-1)$ and $(1,-1,-1)$. Note that its facet inequalities appear as the diagonal entries in the symmetric $4 {\times} 4$-matrix (\ref{eq:magic}): $$ \hcube_3 \,=\, \bigl\{ \,x\in \R^3\,:\ {\rm min} ( 1+x_1 + x_2 + x_3 ,\, 1+x_1 - x_2 - x_3 ,\, 1 - x_1 + x_2 - x_3 ,\, 1 - x_1 - x_2 + x_3 ) \geq 0 \,\bigr\}. $$ This observation is explained by results of Horn (cf.~\cite{Leite99}) on the diagonals of special orthogonal matrices. These imply the following lemma about the tautological orbitope of $SO(n)$: \begin{lemma} The projection of $\,{\rm conv}(SO(n))$ onto the diagonal equals the halfcube $\hcube_n$. \end{lemma} \begin{proof} Just like in Lemma \ref{lem:Ontodiagonal}, it is clear that $\hcube_n$ is a subset of $ \diag({\rm conv}(SO(n)))$. The converse is derived from the linear algebra fact that the trace of any matrix in $O(n)\backslash SO(n)$ is at most $n-2$. For $J \subseteq [n]$ let $R_J$ be the diagonal matrix with $(R_J)_{ii} = -1$ if $i \in J$ and $(R_J)_{ii} = 1$ if $i \not\in J$. Let $g \in SO(n)$. Then ${\rm trace}(g \cdot R_J) \leq n-2$ for all $J$ of odd cardinality. 
This means that $\diag(g)$ satisfies the linear inequalities in (\ref{eq:SOnfacets}) and hence lies in $\hcube_n$. \end{proof} There is a variant of singular value decomposition with respect to the restricted class of orientation preserving transformations. For every matrix $A \in \R^{n \times n}$ there exist rotations $U,V \in SO(n)$ such that $UAV$ is diagonal. The diagonal entries are called the \emph{special singular values} and denoted by $\ts(A) = (\ts_1(A) \ge \cdots \ge \ts_n(A))$. The main difference from the usual singular values is that $\ts_n(A)$ may be negative; only the first $n-1$ entries of $\ts(A)$ are non-negative. We need to make this distinction in order to understand the faces of ${\rm conv}(SO(n))$. \begin{theorem} \label{thm:SOntauto} The tautological orbitope of $SO(n)$ has precisely two orbits of facets. These are the tautological orbitopes for $SO(n-1)$ and the free spectrahedra of dimension $\tbinom{n+1}{2}-1$. \end{theorem} \begin{proof} Up to $D_n$-symmetry, the halfcube $\hcube_n$ has only two distinct facets, namely an $(n-1)$-dimensional halfcube and an $(n-1)$-dimensional simplex. A typical halfcube facet $(\hcube_n)^\ell$ arises by maximizing the linear function $\ell = x_1$ over $\hcube_n$, and a typical simplex facet $(\hcube_n)^{\ell'}$ arises by maximizing the linear function $\,\ell'= x_1 + x_2 + \cdots + x_{n-1} - x_n$. Pulling back $\ell$ along the diagonal projection $\diag$, we see that the facet of ${\rm conv}(SO(n))$ corresponding to the halfcube facet is the convex hull of all rotations $g \in SO(n)$ that fix the first standard basis vector $e_1$. Pulling back $\ell'$ along $\diag$, we see that the facet of ${\rm conv}(SO(n))$ corresponding to the simplex facet is the convex hull of $\,\bigl\{g \in SO(n) \,:\,\Tr(g \cdot R_{\{n\}}) = n-2 \bigr\}$. This facet is isomorphic to the convex hull of all $g^\prime \in O(n) \backslash SO(n)$ such that $\Tr(g^\prime) = n-2$. 
Since $g^\prime$ is orientation reversing, one eigenvalue of $g^\prime$ is $-1$, and $\Tr(g^\prime) = n-2$ forces all other eigenvalues to be equal to $1$. Hence the facet in question is the symmetric Schur-Horn orbitope for the diagonal matrix $(1,\ldots,1,-1)$. Example \ref{ex:freespec} implies that this is a free spectrahedron. \end{proof} In Subsection 4.1 we exhibited a spectrahedral representation for the tautological orbitope ${\rm conv}(SO(3))$, and in that case, the two facet types of Theorem \ref{thm:SOntauto} collapse into one type. At present, we do not know how to generalize the representation (\ref{eq:magic}) to $SO(n)$ for $n \geq 4$. \section{Carath\'eodory Orbitopes} Orbitopes for $SO(2)$ were first studied by Carath\'e\-o\-dory \cite{Car}. The coorbitope cone (\ref{eq:coorbitopecone}) dual to such a {\em Carath\'eodory orbitope} consists of non-negative trigonometric polynomials. This leads to the Toeplitz spectrahedral representation of the universal Carath\'eodory orbitope in Theorem \ref{Th:toeplitz}, which implies that the convex hulls of all trigonometric curves are projections of spectrahedra. The universal Carath\'eodory orbitope is also affinely isomorphic to the convex hull of the compact even moment curve, whose coorbitope cone consists of non-negative univariate polynomials. This leads to the representation by Hankel matrices in Theorem~\ref{Th:Hankel}. \subsection{Toeplitz representation} \label{S:Caratheodory} The irreducible representations $\rho_a$ of $SO(2)$ are indexed by non-negative integers $a$. Here, $\rho_0$ is the trivial representation. 
When $a \in \N$ is positive, the representation $\rho_a$ of $SO(2)$ acts on $\R^2$, and it sends a rotation matrix to its $a$th power: $$ \rho_a \,\,:\,\, \begin{pmatrix} \cos(\theta) & - \sin(\theta) \, \\ \sin(\theta) & \phantom{-}\cos(\theta) \, \end{pmatrix} \,\,\, \mapsto \,\,\, \begin{pmatrix} \phantom{-} \cos(\theta) & - \sin(\theta) \, \\ \sin(\theta) & \phantom{-}\cos(\theta) \, \end{pmatrix}^{\! a} \, = \, \begin{pmatrix} \cos( a \theta) & -\sin( a \theta) \, \\ \sin(a \theta) & \phantom{-}\cos( a \theta) \, \end{pmatrix} . $$ For a vector $A = (a_1,a_2,\ldots,a_d) \in \N^d$ we consider the direct sum of these representations $$ \rho_A \,\,\, := \,\,\, \rho_{a_1} \,\oplus \, \rho_{a_2} \,\oplus \,\cdots \, \oplus \,\rho_{a_d}. $$ The {\em Carath\'eodory orbitope} $\Cara_A$ is the convex hull of the orbit $SO(2)\cdot (1,0)^d$ under the action $\rho_A$ on the vector space $(\R^2)^d$. This orbit is the {\em trigonometric moment curve} \begin{equation} \label{CaraCurve} \Bigl\{ \bigl( \, \cos(a_1 \theta) , \, \sin(a_1 \theta) , \, \ldots \,, \cos(a_d \theta) , \, \sin(a_d \theta) \,\bigr) \in \R^{2d} \,: \, \theta \in [0,2\pi) \, \Bigr\}. \end{equation} This curve is also identified with the matrix group $\,\rho_A(SO(2))$ lying in the space $(\R^{2 \times 2})^d$ of block-diagonal $2d \times 2d$-matrices with $d$ blocks of size $2 \times 2$. Thus $\Cara_A$ is isomorphic to the convex hull of $\,\rho_A(SO(2))$, and can therefore also be thought of as a tautological orbitope. We distinguish between isomorphisms of orbitopes that preserve the $SO(2)$-action, and the weaker notion of affine isomorphisms that preserve their structure as convex bodies. \begin{lemma}\label{L:all_Cara} Any orbitope of the circle group $SO(2)$ is isomorphic to a Carath\'eodory orbitope $\Cara_A$, where $A\in\N^d$ has distinct coordinates, and it is affinely isomorphic to a Carath\'eodory orbitope where the coordinates of $A$ are relatively prime integers. 
\end{lemma} \begin{proof} Let $\calO$ be an orbitope for $SO(2)$. We may assume that its ambient $SO(2)$-module $V$ has no trivial components and is the linear span of $\calO$. Then $V$ has the form $\rho_A$ for $A\in \N^d$ with distinct non-zero components, that is, $V$ is multiplicity free. This is because $\End_{SO(2)}(\rho_a)=\C$, where we identify $\R^2$ with $\C$ and $SO(2)$ with the unit circle in $\C$. The orbitope $\calO$ is generated by a vector $v=(v_1,\dotsc,v_d)\in\C^d$ with non-zero coordinates. By complex rescaling, we may assume that $v=(1,\dotsc,1)$, showing that $\calO$ is isomorphic to $\Cara_A$. Lastly, if the coordinates of $A=(a_1,\dotsc,a_d)$ have greatest common divisor $a$, then $\rho_A$ is the composition $\rho_{A'}\circ \rho_a$, where $A'=(a_1/a,\dotsc,a_d/a)$ and $\calO$ is affinely isomorphic to an orbitope for the module $\rho_{A'}$. \end{proof} We henceforth assume that $0<a_1<\dotsb<a_d$ where the $a_i$ are relatively prime. When $A=(1,2,\dotsc,d)$, Carath\'eodory \cite{Car} studied the facial structure of $\Cara_A$. An even-dimensional cyclic polytope is the convex hull of finitely many points on Carath\'eodory's curve (see~\cite{Ziegler}). Smilansky \cite{Smi} studied the four-dimensional Carath\'eodory orbitopes ($d=2$), and this was recently extended by Barvinok and Novik \cite{BN} who studied $\Cara_{(1,3,5,\ldots,2k{-}1)}$. The corresponding curve (\ref{CaraCurve}) is the {\em symmetric moment curve} which gives rise to a remarkable family of centrally symmetric polytopes with extremal face numbers. Many questions remain about the facial structure of the Barvinok-Novik orbitopes $\Cara_{(1,3,5,\ldots,2k{-}1)}$. See \cite[\S 7.4]{BN} for details. We now focus on the {\em universal Carath\'eodory orbitope} $\Cara_d := \Cara_{(1,2,\dots,d)} $ in $\R^{2d}$. This convex body has the following spectrahedral representation in terms of Hermitian Toeplitz matrices. 
\begin{theorem}\label{Th:toeplitz} The universal Carath\'eodory orbitope $\Cara_d $ is isomorphic to the spectrahedron consisting of positive semidefinite Hermitian Toeplitz matrices with ones along the diagonal: \begin{equation} \label{toeplitz} \left( \begin{array}{ccccc} 1 & x_1 & \cdots & x_{d-1} & x_d \\ y_1 & 1 & \ddots & x_{d-2} & x_{d-1} \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ y_{d-1} & y_{d-2} & \ddots & 1 & x_{1}\\ y_d & y_{d-1} & \cdots & y_{1} & 1 \\ \end{array}\right) \,\succeq \, 0 \qquad \ \mbox{where}\qquad \begin{bmatrix} x_j = c_j + s_j \cdot i \\ y_j = c_j - s_j \cdot i \\ i \, = \, \sqrt{-1} \end{bmatrix} \end{equation} \end{theorem} \smallskip We note that the complex spectrahedron (\ref{toeplitz}) can be translated into a spectrahedron over $\R$ as follows. Consider a Hermitian matrix $\,H = F + G \cdot i\,$ where $F$ is real symmetric and $G$ is real skew-symmetric. The Hermitian matrix $H$ is positive semidefinite if and only if \[ \left(\begin{array}{rr} F & -G \\ G & F \\ \end{array} \right) \succeq 0. \] A {\em trigonometric curve} is any curve in $\R^n$ that is parametrized by polynomials in the trigonometric functions sine and cosine, or equivalently, any curve that is the image under a linear map of the universal trigonometric moment curve~\eqref{CaraCurve} where $A=(1,2,\dotsc,d)$. \begin{corollary}[cf.~Henrion \cite{henrion}] The convex hull of any trigonometric curve is a projected spectrahedron. In particular, all Carath\'eodory orbitopes are projected spectrahedra. 
\end{corollary} \begin{figure}[htb] \[ \begin{picture}(400,135)(0,-20) \put( 0,0){\includegraphics[height=120pt]{figures/Trig_Curve_1.eps}} \put(135,0){\includegraphics[height=120pt]{figures/Trig_Curve_2.eps}} \put(280,0){\includegraphics[height=120pt]{figures/Trig_Curve_3.eps}} \put( 0,-17){$(\cos(\theta),\sin(\theta),\cos(2\theta))$} \put(135,-17){$(\cos(2\theta),\sin(2\theta),\cos(\theta))$} \put(280,-17){$(\cos(\theta),\sin(2\theta),\cos(3\theta))$} \end{picture} \] \vskip -0.3cm \caption{Convex hulls of three trigonometric curves.} \label{F:trig_curves} \end{figure} Figure~\ref{F:trig_curves} shows the convex hulls of three trigonometric curves in $\R^3$. The left and middle convex bodies are each the intersection of two convex quadratic cylinders ($2x^2 = 1 + z$ and $2y^2 = 1 - z$ for the former; $x^2 + y^2 = 1$ and $2z^2 = 1 + x$ for the latter) and hence are spectrahedra. The rightmost convex body is visibly not a spectrahedron. Its exposed points are $(\cos(\theta),\sin(2\theta),\cos(3\theta))$ for $\theta\in(-\frac{\pi}{3},\frac{\pi}{3})\cup (\frac{2\pi}{3},\frac{4\pi}{3})$, and it has two algebraic families of one-dimensional facets. In addition, there are two two-dimensional facets, namely the equilateral triangles for $\,\theta = 0, \frac{2 \pi}{3}, \frac{4 \pi}{3}$ and $\,\theta = \pi, \frac{\pi}{3}, - \frac{\pi}{3}$. The six edges of these two triangles are one-dimensional non-exposed faces, as there is no linear function which achieves its minimum on this body along these edges. Also, exactly one vertex of each triangle at $\theta=0$ and $\theta=\pi$ is exposed. We conclude that this convex body is a projected spectrahedron but it is not a spectrahedron. \begin{proof}[Proof of Theorem~\ref{Th:toeplitz}] The coorbitope cone dual to $\Cara_d$ consists of all affine-linear functions that are non-negative on $\Cara_d$. 
These correspond to non-negative trigonometric polynomials: \[ \widehat{\Cara}^\circ_d \,= \,\bigl\{ (\delta,a_1, b_1,\dots, a_d, b_d) \in \R^{2d+1} : \delta + \sum_{k=1}^d a_k \cos(k\theta) + b_k \sin(k\theta) \ge 0 \text{ for all }\theta \bigr\} . \] We identify each point $ (\delta,a_1, b_1,\ldots, a_d, b_d) $ in $ \R^{2d+1}$ with the Laurent polynomial \begin{equation}\label{Eq:trig_poly} R(z) \,\,= \, \sum_{k=-d}^d u_k z^k \in \C[z,z^{-1}] \end{equation} with $u_0 = \delta$, $u_k = \tfrac{1}{2}(a_k - b_k i)$, $i=\sqrt{-1}$, and $u_{-k} = \overline{u_k}$. Then $\overline{R(z)}=R(\overline{z}^{-1})$ and $R\in\widehat{\Cara}^\circ_d$ if and only if $R$ is non-negative on the unit circle $\sphere^1$ of $\C$. Roots of $R$ occur in pairs $\alpha,\overline{\alpha}^{-1}$ and those on $\sphere^1$ have even multiplicity. Choosing one root from each pair gives the factorization \begin{equation} \label{circlesos1} R(z) \,\, = \,\, \overline{H}(z^{-1}) \cdot H(z), \end{equation} where $H\in\C[z]$ has degree $d$ and the coefficient vectors of $H$ and $\overline{H}$ are complex conjugate. This factorization is the classical Fej\'er-Riesz Theorem. Utilizing the monomial map $\gamma_d : \C \rightarrow \C^{d+1}$ with $\gamma_d(z) = (1,z,z^2,\dots,z^d)^T$, this is equivalent to the following: A trigonometric polynomial $R(z)$ is non-negative on the unit circle if and only if there is a non-zero vector $h\in \C^{d{+}1}$ such that \begin{equation} \label{circlesos2} R(z) \,\, = \,\, \gamma_d(z^{-1})^T\cdot \overline{h}h^T \cdot \gamma_d(z). 
\end{equation} A point $(c_1,s_1,\dots,c_d,s_d) \in \R^{2d}$ belongs to the Carath\'eodory orbitope $\,\Cara_d\,$ if and only if \begin{equation} \label{circlepairing} \,\, \delta + \sum_{k=1}^d a_k c_k + b_k s_k \, \ge \, 0 \quad \,\, \hbox{for all $\, (\delta,a_1,b_1,\dots,a_d,b_d) \in \widehat{\Cara}^\circ_d$.} \end{equation} The sum on the left equals the Hermitian inner product in $\C^{2d+1}$ of the coefficient vector $u$ of the polynomial $R(z)$ and the vector $\zeta=(x,1,y)$ with $x_k,y_k$ as in (\ref{toeplitz}). The formula (\ref{circlesos2}) expresses $u$ as the image of the Hermitian matrix $\overline{h}h^T$ under some linear projection $\pi$. If $\pi^*$ denotes the linear map dual to $\pi$ then $X = \pi^*(\zeta) $ is precisely the Hermitian Toeplitz matrix in (\ref{toeplitz}). We conclude that the sum in (\ref{circlepairing}) equals \[ \langle \zeta, \pi(\overline{h}h^T) \rangle \,=\, \langle \pi^*(\zeta), \overline{h}h^T \rangle \,=\, \Tr (X \cdot \overline{h}h^T) \,=\, h^T\cdot X \cdot \overline{h}. \] Thus the point $(c_1,s_1,\ldots,c_d,s_d)$ represented by a Hermitian Toeplitz matrix $X$ lies in $\Cara_d$ if and only if $\,h^T\cdot X \cdot \overline{h} \geq 0\,$ for all $\,h \in \C^{d+1}\,$ if and only if $X$ is positive semidefinite. \end{proof} The proof of Theorem~\ref{Th:toeplitz} elucidates the known results about the facial structure of $\Cara_d$. \begin{corollary}\label{neighborly} The universal Carath\'eodory orbitope $\mathcal{C}_d$ is a neighborly simplicial convex body. Its faces are in inclusion-preserving correspondence with sets of at most $d$ points on the circle. \end{corollary} \begin{proof} A Laurent polynomial $R$ as in~\eqref{Eq:trig_poly} lies in the boundary of the coorbitope cone $\widehat{\Cara}^\circ_d$ if and only if it is non-negative on the unit circle $\sphere^1$ but not strictly positive. It supports the face of $\Cara_d$ spanned by the points of the trigonometric moment curve corresponding to its zeros in $\sphere^1$. 
Each zero has multiplicity at least $2$, so there are at most $d$ such points, and conversely any subset of $\leq d$ points supports a face. Since any fewer than $2d+2$ points on the curve are affinely independent and since all faces are exposed, these faces are simplices. \end{proof} Corollary \ref{neighborly} implies that the Carath\'eodory number $\cara(\Cara_d)$ equals $d+1$, as no point in the interior of $\Cara_d$ lies in the convex hull of $d$ points of the orbit, but $\cara(\Cara_d)$ is at most one more than the maximal Carath\'eodory number of a facet, by Lemma~3.2 of~\cite{LSS08}. Carath\'eodory orbitopes are generally not spectrahedra because they can possess non-exposed faces. Smilansky \cite{Smi} showed that if we write $\rho(\theta)\in\R^4$ for a point on the trigonometric moment curve with weights $1$ and $3$ then the faces of $\Cara_{(1,3)}$ are exactly the points $\rho(\theta)$ of the orbit, the line segments ${\rm conv}\{\rho(\theta),\rho(\theta+\alpha)\}$, where $0<\alpha<\frac{2\pi}{3}$, and the triangles \[{\rm conv}\{ \rho(\theta), \ \rho(\theta+2\pi/3), \ \rho(\theta+4\pi/3)\}. \] In particular, each line segment ${\rm conv}\{\rho(\theta),\rho(\theta+\frac{2\pi}{3})\}$ is a non-exposed edge of $\Cara_{(1,3)}$. We conclude that the Barvinok-Novik orbitope $\Cara_{(1,3)}$ is not a spectrahedron. The Toeplitz representation (\ref{toeplitz}) of the universal Carath\'eodory orbitope $\Cara_d$ reveals complete algebraic information. For example, the algebraic boundary $\partial_a \Cara_d$ is the irreducible hypersurface of degree $d+1$ defined by the determinant of that $(d{+}1) \times (d{+} 1)$-matrix. The curve~\eqref{CaraCurve} itself is the set of all positive semidefinite Hermitian Toeplitz matrices of rank one. The $2 \times 2$-minors of the matrix (\ref{toeplitz}) generate the prime ideal $J_{(1,\ldots,d)}$ of this rational curve. 
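The rank statements in the preceding paragraph are easy to probe numerically. The following sketch assumes a convention for the Toeplitz matrix in (\ref{toeplitz}) that is displayed elsewhere in the paper, namely $X_{jk} = x_{k-j}$ with $x_0 = 1$, $x_m = c_m + i\,s_m$ for $m > 0$, and $x_{-m} = \overline{x_m}$; with that assumption, a point of the curve should give a positive semidefinite Toeplitz matrix of rank one, and a convex combination of $j$ curve points a matrix of rank at most $j$.

```python
import numpy as np

def toeplitz_X(c, s):
    """Hermitian Toeplitz matrix of a point (c_1,s_1,...,c_d,s_d); this
    assumes the convention X[j,k] = x_{k-j} with x_0 = 1, x_m = c_m + i*s_m
    for m > 0, and x_{-m} = conj(x_m)."""
    d = len(c)
    x = np.empty(2 * d + 1, dtype=complex)
    x[d] = 1.0
    for m in range(1, d + 1):
        x[d + m] = c[m - 1] + 1j * s[m - 1]
        x[d - m] = np.conj(x[d + m])
    return np.array([[x[d + k - j] for k in range(d + 1)]
                     for j in range(d + 1)])

d, theta = 4, 0.7
c = [np.cos(k * theta) for k in range(1, d + 1)]
s = [np.sin(k * theta) for k in range(1, d + 1)]
eig = np.linalg.eigvalsh(toeplitz_X(c, s))
# a point of the curve gives a positive semidefinite matrix of rank one
assert eig.min() > -1e-9 and np.sum(eig > 1e-9) == 1

# a convex combination of three curve points stays PSD, with rank <= 3
thetas, w = [0.3, 1.1, 2.5], [0.2, 0.5, 0.3]
cc = [sum(wi * np.cos(k * t) for wi, t in zip(w, thetas)) for k in range(1, d + 1)]
ss = [sum(wi * np.sin(k * t) for wi, t in zip(w, thetas)) for k in range(1, d + 1)]
mix = np.linalg.eigvalsh(toeplitz_X(cc, ss))
assert mix.min() > -1e-9 and np.sum(mix > 1e-9) <= 3
```

For a curve point the single nonzero eigenvalue equals $d+1$, the trace of the rank-one matrix.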
The union of the $(j-1)$-dimensional faces of $\Cara_d$ is the set of positive semidefinite Hermitian Toeplitz matrices of rank $j$, as a point lies on a $(j-1)$-dimensional face if and only if it is the convex combination of $j$ points of the curve. The Zariski closure of this stratum is the set of all rank $j$ Hermitian Toeplitz matrices which is defined by the vanishing of the $(j{+}1)\times(j{+}1)$ minors of that matrix. This is also the $j$th secant variety of the Carath\'eodory curve. Lastly, this rank stratification is a Whitney stratification of the algebraic boundary. The derivation of the algebraic description of $\Cara_A$ for arbitrary $A = (a_1,a_2,\ldots,a_d)$ requires the process of elimination. For instance, the ideal $J_A$ of the trigonometric moment curve (\ref{CaraCurve}) can be computed from the ideal of $2 {\times} 2$-minors for $J_{(1,2,\ldots,a_d)}$ by eliminating all unknowns $x_j, y_j$ with $j \not\in \{a_1,\ldots,a_d\}$. The equation of the algebraic boundary $\partial_a \Cara_A$ is obtained by the same elimination applied to a certain ideal of larger minors of the Toeplitz matrix (\ref{toeplitz}). We refer to recent work of Vinzant \cite{Vinz} for a detailed study of the edges of Barvinok-Novik orbitopes. An analysis of the algebraic boundary of the orbitope $ \Cara_{(1,3)} $ can be found in \cite[\S 2.4]{Ranestad2}. \subsection{Hankel representation}\label{S:Hankel} The cone over the degree $d$ moment curve is the image of $\R^2$ in its $d$th symmetric power $\mbox{Sym}_d \R^2\simeq \R^{d+1}$ under the map \[ \nu_d\ \colon\ (x,y)\ \longmapsto\ \bigl(\,x^d,\, x^{d-1}y,\, x^{d-2}y^2,\,\dotsc,\, y^d\,\bigr)\,. \] This map is naturally $SO(2)$-equivariant. We define the {\em compact moment curve} to be the image $\nu_d(\sphere^1)$ of the unit circle under the map $\nu_d$. This restricted map equals \[ \sphere^1\ni \theta\ \longmapsto\ (\cos^d(\theta), \cos^{d-1}(\theta)\sin(\theta), \dotsc, \sin^d(\theta))\,. 
\] The convex hull of the curve $\nu_d(\sphere^1)$ is an orbitope. By Lemma~\ref{L:all_Cara}, it is isomorphic to some Carath\'eodory orbitope $\Cara_A$. The following lemma makes this identification explicit. \begin{lemma}\label{L:identification} If $d \in \N$ is odd, then $\conv(\nu_d(\sphere^1))$ is isomorphic to the Barvinok-Novik orbitope $\Cara_{(1,3,\dotsc,d)}$. If $d \in \N$ is even, then $\conv(\nu_d(\sphere^1))$ is isomorphic to $\Cara_{(0,2,4,\dotsc,d)}$, which is affinely isomorphic to the universal Carath\'eodory orbitope $\Cara_{d/2}$. \end{lemma} \begin{proof} Complexifying the $SO(2)$-module $\rho_A$ where $A=(a_1,\dotsc,a_d)$ gives the $\C^\times$-module with symmetric weights $\pm a_1, \dotsc,\pm a_d$. Thus the underlying real $SO(2)$-module of this $\C^\times$-module is $\rho_{(|a_1|,\dotsc,|a_d|)}$. The lemma follows because the complexified representation $\mbox{Sym}_d \C^2$ of $\mbox{Sym}_d \R^2 $ has weights $d, d{-}2, d{-}4, \dotsc, -d$, and this representation is spanned by the pure powers of linear forms, so every weight appears in the linear span of the orbit. \end{proof} Suppose now that $d=2n$ is even. We describe the moment curve and the Carath\'eodory orbitope in coordinates $(\l_0,\l_1,\dotsc,\l_{2n})$ for $\R^{2n+1}$. Fix the $(n{+}1) {\times} (n{+}1)$-Hankel matrix \begin{equation} \label{Eq:Hankel} K(\l) \quad = \quad \begin{pmatrix} \l_0 & \l_1 & \l_2 & \cdots & \l_n \\ \l_1 & \l_2 & \l_3 & \cdots & \l_{n+1} \\ \l_2 & \l_3 & \l_4 & \cdots & \l_{n+2} \\ \vdots & \vdots & \vdots & & \vdots \\ \l_n & \l_{n+1} & \l_{n+2} & \cdots & \l_{2n} \end{pmatrix}\ . \end{equation} \begin{theorem} \label{Th:Hankel} The even moment curve consists of all vectors $\l \in \R^{2n+1}$ such that the Hankel matrix $K(\l)$ has rank one, is positive semidefinite, and satisfies the linear equation \begin{equation} \label{linrel} \sum_{j=0}^n \binom{n}{j} \l_{2j} \,\, = \,\, 1 . 
\end{equation} Its convex hull consists of all $\l$ such that $K(\l)$ is positive semidefinite and satisfies~\eqref{linrel}. \end{theorem} Thus the Carath\'eodory orbitope $\Cara_n$ has a second {\em Hankel representation} as a spectrahedron. The proof of this theorem follows from the well-known fact that every non-negative polynomial in one variable is a sum of squares of polynomials. It uses the duality in \cite[\S 3]{Rez}. \begin{proof} Observe that the points of the even compact moment curve satisfy~\eqref{linrel}, which comes from the polynomial identity $(x^2+y^2)^n=1$. As $\mbox{Sym}_{2n}\R^2$ has just one copy of the trivial representation, this is the only affine equation that holds on $\nu_{2n}(\sphere^1)$. The dual to $\mbox{Sym}_{2n}\R^2$ is the space $\R[x,y]_{2n}$ of real homogeneous polynomials of degree $2n$ in $x$ and $y$. The coefficients of such a polynomial give coordinates for $\R[x,y]_{2n}$. The coorbitope cone $\widehat{\nu_{2n}(\sphere^1)^\circ}$ dual to the orbitope $\conv(\nu_{2n}(\sphere^1))$ is the cone of homogeneous polynomials of degree $2n$ that are non-negative on $\R^2$. Thus a point $(\l_0,\dotsc,\l_{2n})\in\mbox{Sym}_{2n}\R^2$ lies in the orbitope $\conv(\nu_{2n}(\sphere^1))$ if and only if it satisfies~\eqref{linrel} and \begin{equation}\label{Eq:nonnegsum} \sum_{i=0}^{2n} f_i \l_i\ \geq\ 0\, \end{equation} for every non-negative polynomial $f(x,y)=\sum_i f_ix^iy^{2n-i}$. Since non-negative homogeneous polynomials in $x$ and $y$ are sums of squares (cf.~\cite{Rez}), we only need these inequalities to hold when $f(x,y)=g(x,y)^2$ is a square. Writing $g=(g_0,\dotsc,g_{n})$ for the coefficient vector of the polynomial $g(x,y)$, the sum~\eqref{Eq:nonnegsum} becomes \[ \sum_{i=0}^{2n} \l_i \sum_{j+k=i} g_j g_k \,\,\, = \,\,\, g^T \cdot K(\l) \cdot g\,, \] where $K(\l)$ is the Hankel matrix~\eqref{Eq:Hankel}. This proves the theorem. 
\end{proof} \section{Veronese Orbitopes} The Hankel representation of the universal Carath\'eodory orbitope arose by considering the image of the circle $\sphere^1\subset\R^2$ in $\mbox{Sym}_{2n}\R^2$ and its relation to non-negative binary forms. Generalizing from $\R^2$ to $\R^d$ gives the Veronese orbitopes whose coorbitope cones (\ref{eq:coorbitopecone}) consist of non-negative $d$-ary forms. When $d\geq 3$, non-negative forms are not necessarily sums of squares, except for quadratic forms and the exceptional case of ternary quartics. The set of decomposable symmetric tensors is the image of the Veronese map \[ \nu_{m}\ \colon\ \R^d\ \longrightarrow\ \mbox{Sym}_m\R^d \,\simeq \,\R^{\binom{d+m-1}{d-1}}\,. \] The $SO(d)$-orbits through any two non-zero decomposable tensors are scalar multiples of each other and are thus isomorphic. We define the {\em Veronese orbitope} $\mathcal{V}_{d,m}$ to be the convex hull of the orbit through the specific decomposable tensor $\nu_{m}(1,0,\dotsc,0)$. That orbit is also the image $\nu_{m}(\sphere^{d-1})$ of the unit $(d-1)$-sphere under the $m$-th Veronese embedding of $\R^d$. Suppose that $m=2n$ is even. Then the orbit $\nu_{m}(\sphere^{d-1})$ can be identified with $\R\PP^{d-1}$ since $\nu_{2n}$ is two-to-one with $\nu_{2n}(v)=\nu_{2n}(-v)$. The dual vector space to $\mbox{Sym}_{2n}\R^d$ is the space of homogeneous forms of degree $2n$ on $\R^d$. The only invariant forms are those proportional to the form $\langle v,v\rangle^n$, so both $\mbox{Sym}_{2n}\R^d$ and its dual space contain one copy of the trivial representation, and $\nu_{2n}(\sphere^{d-1})$ lies in the hyperplane of $\mbox{Sym}_{2n}\R^d$ defined by $\langle v,v\rangle^n = 1$. The dual cone to the Veronese orbitope $\,\mathcal{V}_{d,2n} = {\rm conv}(\nu_{2n}(\sphere^{d-1}))\,$ is the cone of non-negative forms of degree $2n$ in $\mbox{Sym}_{2n}(\R^d)^*$. See also \cite[Example (1.2)]{BB}. 
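The failure of non-negativity to imply a sum-of-squares representation for $d \geq 3$ is witnessed by Motzkin's classical form $x^4y^2 + x^2y^4 + z^6 - 3x^2y^2z^2$, which is non-negative by the arithmetic-geometric mean inequality but is not a sum of squares. The non-negativity is easy to probe numerically; the non-SOS property requires a separate argument and is only quoted, not verified, in the sketch below.

```python
import numpy as np

def motzkin(x, y, z):
    # Motzkin's form: non-negative by AM-GM on x^4 y^2, x^2 y^4, z^6,
    # yet provably not a sum of squares (that part is not checked here)
    return x**4 * y**2 + x**2 * y**4 + z**6 - 3 * x**2 * y**2 * z**2

# sample the unit sphere; homogeneity then gives non-negativity on all of R^3
vals = [motzkin(np.sin(u) * np.cos(t), np.sin(u) * np.sin(t), np.cos(u))
        for u in np.linspace(0.0, np.pi, 60)
        for t in np.linspace(0.0, 2.0 * np.pi, 120)]
assert min(vals) > -1e-12                    # non-negative up to rounding
assert abs(motzkin(1.0, 1.0, 1.0)) < 1e-12   # a non-trivial real zero
```

The zero at $(1,1,1)$ already rules out strict positivity, and the zeros of a non-negative non-SOS form are what obstruct a sum-of-squares certificate.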
We write $\,\widehat{\mathcal{V}}_{d,2n}^\circ$ for the Veronese coorbitope cone consisting of non-negative forms. The cone $\widehat{\mathcal{V}}_{d,2n}^\circ$ of non-negative forms contains the cone $\mathcal{K}_{d,2n}$ of sums of squares, but when $d\geq 3$, $2n\geq 4$, and $(d,2n)\neq(3,4)$, Hilbert~\cite{hilbert} showed that the inclusion is strict. We refer to \cite{Ble} for a recent study which compares the dimension of the faces of these cones. The cone $\mathcal{K}_{d,2n}$ is naturally a projection of the positive semidefinite cone. Hence its dual cone $\,\mathcal{C}_{d,2n} = \mathcal{K}_{d,2n}^\circ\,$ is a spectrahedron: it can be realized as the intersection of the positive semidefinite cone with a certain linear space of generalized Hankel matrices, discussed in detail in Reznick's book \cite{Rez}. This spectrahedron $\mathcal{C}_{d,2n}$ is strictly larger than the Veronese orbitope $\mathcal{V}_{d,2n}$ when $d\geq 3$, $2n\geq 4$, and $(d,2n)\neq(3,4)$. In fact, $\mathcal{V}_{d,2n}$ is precisely the convex hull of the subset of extreme points in $\mathcal{C}_{d,2n}$ that have rank one. We present a detailed case study of the exceptional case of ternary quartics, when $\,(d,2n)=(3,4)$. The Veronese orbitope $\mathcal{V}_{3,4} = {\rm conv}(\nu_4(\sphere^2))$ is a $14$-dimensional convex body. Let $\widehat{\mathcal{V}}_{3,4}$ be the $15$-dimensional cone over the Veronese orbitope $\mathcal{V}_{3,4}$. As all non-negative ternary quartics are sums of squares, we have the following identities of cones: \[ \widehat{\mathcal{V}}_{3,4} \,=\, \mathcal{C}_{3,4} \quad \hbox{and} \quad \widehat{\mathcal{V}}_{3,4}^\circ \,=\, \mathcal{K}_{3,4}. \] We next present Reznick's spectrahedral representation of $\mathcal{V}_{3,4}$. For this, we identify $\mbox{Sym}_4\R^3$ with its dual, and we introduce coordinates $\l=(\l_\alpha)$ where the indices $\alpha$ are the exponents of monomials in variables $x,y,z$ of degree $4$. 
The ternary quartic corresponding to $\l$ is \begin{equation}\label{Eq_Ter_Quartic} q_\l\ =\ \sum_\alpha \tbinom{4}{\alpha}\l_\alpha x^{\alpha_1}y^{\alpha_2}z^{\alpha_3}\,, \end{equation} where $\binom{4}{\alpha} = \frac{4 !}{ \alpha_1 ! \alpha_2 !\alpha_3 !}$ is the multinomial coefficient. The inner product $\, \langle q_\l, q_\mu\rangle\ =\ \sum_\alpha \tbinom{4}{\alpha}\l_\alpha\mu_\alpha \,$ is $SO(3)$-invariant. Given a ternary quartic $q_\l$ in the notation \eqref{Eq_Ter_Quartic}, we associate to it the following symmetric $6 \times 6$-matrix with Hankel structure as in \cite[eqn.~(5.25)]{Rez}: \begin{equation}\label{Eq:Reznick_Hankel} K_\l \quad = \quad \begin{pmatrix} \l_{400} & \l_{220} & \l_{202} & \l_{310} & \l_{301} & \l_{211} \\ \l_{220} & \l_{040} & \l_{022} & \l_{130} & \l_{121} & \l_{031} \\ \l_{202} & \l_{022} & \l_{004} & \l_{112} & \l_{103} & \l_{013} \\ \l_{310} & \l_{130} & \l_{112} & \l_{220} & \l_{211} & \l_{121} \\ \l_{301} & \l_{121} & \l_{103} & \l_{211} & \l_{202} & \l_{112} \\ \l_{211} & \l_{031} & \l_{013} & \l_{121} & \l_{112} & \l_{022} \end{pmatrix}. \end{equation} \begin{theorem} The Veronese orbitope $\,\mathcal{V}_{3,4}$ is a spectrahedron. It consists of all positive semidefinite Hankel matrices $K_\l$ as in $\eqref{Eq:Reznick_Hankel}$ that satisfy the equation \begin{equation}\label{Eq:affine} \l_{400} + \l_{040} + \l_{004} + 2 \l_{220} + 2 \l_{202} + 2 \l_{022} \,\,=\,\, 1. \end{equation} \end{theorem} \begin{proof} It is shown in~\cite[Ch.~5]{Rez} that the quartic $q_\l$ is non-negative if and only if the Hankel matrix $K_\l$ is positive semidefinite. Furthermore, the equation~\eqref{Eq:affine} is the affine equation $(x^2+y^2+z^2)^2 = 1$ which defines the hyperplane containing the orbit $\nu_4(\sphere^2)$. \end{proof} By contrast, the Veronese coorbitope is not a spectrahedron. 
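The spectrahedral representation just established can be checked numerically: for a point $v \in \sphere^2$ the moments $\l_\alpha = v_1^{\alpha_1}v_2^{\alpha_2}v_3^{\alpha_3}$ should produce a positive semidefinite rank-one Hankel matrix $K_\l$ satisfying~\eqref{Eq:affine}, and a convex combination of two sphere points a matrix of rank at most two. A short sketch, using the entries exactly as displayed in~\eqref{Eq:Reznick_Hankel}:

```python
import itertools
import numpy as np

# index pattern of Reznick's Hankel matrix K_lambda, copied from the display
IDX = [
    [(4,0,0),(2,2,0),(2,0,2),(3,1,0),(3,0,1),(2,1,1)],
    [(2,2,0),(0,4,0),(0,2,2),(1,3,0),(1,2,1),(0,3,1)],
    [(2,0,2),(0,2,2),(0,0,4),(1,1,2),(1,0,3),(0,1,3)],
    [(3,1,0),(1,3,0),(1,1,2),(2,2,0),(2,1,1),(1,2,1)],
    [(3,0,1),(1,2,1),(1,0,3),(2,1,1),(2,0,2),(1,1,2)],
    [(2,1,1),(0,3,1),(0,1,3),(1,2,1),(1,1,2),(0,2,2)],
]

def hankel_K(lam):
    return np.array([[lam[a] for a in row] for row in IDX])

def moments(v):
    # lambda_alpha = v1^a1 * v2^a2 * v3^a3 over all exponents of degree 4
    return {a: v[0]**a[0] * v[1]**a[1] * v[2]**a[2]
            for a in itertools.product(range(5), repeat=3) if sum(a) == 4}

v = np.array([0.6, 0.48, 0.64])          # a point of the unit sphere S^2
assert abs(v @ v - 1.0) < 1e-12
lam = moments(v)
K = hankel_K(lam)
eig = np.linalg.eigvalsh(K)
assert eig.min() > -1e-9 and np.sum(eig > 1e-9) == 1   # PSD of rank one
aff = (lam[(4,0,0)] + lam[(0,4,0)] + lam[(0,0,4)]
       + 2*lam[(2,2,0)] + 2*lam[(2,0,2)] + 2*lam[(0,2,2)])
assert abs(aff - 1.0) < 1e-9                           # the affine equation

# a convex combination of two sphere points gives rank at most two
lam2 = moments(np.array([0.0, 0.6, 0.8]))
mix = {a: 0.4 * lam[a] + 0.6 * lam2[a] for a in lam}
E = np.linalg.eigvalsh(hankel_K(mix))
assert E.min() > -1e-9 and np.sum(E > 1e-9) <= 2
```

Here $K_\l$ equals the outer product $mm^T$ with $m = (v_1^2, v_2^2, v_3^2, v_1v_2, v_1v_3, v_2v_3)$, which explains the rank-one property along the orbit.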
\begin{theorem}\label{Th:SOS} The convex cone of non-negative ternary quartics $\mathcal{K}_{3,4} = \widehat{\mathcal{V}}_{3,4}^\circ$ is not a spectrahedron. Its facets have dimension twelve and the intersection of any two facets is an exposed face of dimension nine. It also has maximal non-exposed faces of dimension nine. \end{theorem} \begin{proof} We thank Greg Blekherman who explained this to us. The cone $\mathcal{K}_{3,4}$ is full-dimensional in the $15$-dimensional space $\mbox{Sym}_4(\R^3)^*$. Its facets come from its defining linear inequalities \[ \mathcal{K}_{3,4}\ =\ \bigl\{ q\in\mbox{Sym}_4(\R^3)^*\mid q(p)\geq 0\quad \forall p\in\R\PP^2 \bigr\}\,. \] For $p\in\R\PP^2$, let $F^p$ be the facet exposed by the inequality $q(p)\geq 0$, which consists of those non-negative ternary quartics $q$ that vanish at $p$. Since the boundary of $\mathcal{K}_{3,4}$ is $14$-dimensional and we have a two-dimensional family of isomorphic facets, we see that each facet $F^p$ is $12$-dimensional. A non-negative form that vanishes at $p\in\R\PP^2$ must also have its two partial derivatives (in local coordinates at $p$) vanish, which gives three linear conditions on the facet $F^p$. Concretely, if we take $p=[0:0:1]$ with local coordinates $x,y$, then the constant and linear terms of the inhomogeneous quartic $q(x,y)$ must vanish. Consequently, \begin{equation}\label{Eq:homog_decomp} q(x,y)\ =\ H(x,y)+C(x,y)+Q(x,y)\,, \end{equation} where $H$, $C$, and $Q$ are, respectively, the terms of degrees $2$, $3$, and $4$ in $q$. These are binary forms. Their $3+4+5=12$ coefficients parametrize the linear span of the facet $F^p$, showing again that $F^p$ is 12-dimensional. The quadratic form $H(x,y)$ is the Hessian of $q(x,y)$ at $p=[0:0:1]$ in these coordinates. 
A form $q$ lies in the relative interior of the facet $F^p$ if and only if, given any $q'$ which vanishes at $p$ along with its partial derivatives, so that it lies in the linear span of $F^p$, there is an $\epsilon>0$ such that $q+\epsilon q'$ also lies in $F^p$. These conditions are equivalent to \begin{enumerate} \item $q$ has no other zeros in $\R\PP^2$, and \item the Hessian of $q$ at $p$ is a positive definite quadratic form. \end{enumerate} A form $q\in F^p$ lies in the boundary of $F^p$ when one of these conditions fails, that is, either \begin{enumerate} \item $q$ vanishes at a second point $p'\in\R\PP^2\setminus\{p\}$, or \item the Hessian of $q$ has a double root at some point $r\in\R\PP^1$. \end{enumerate} Faces of type (1) have the form $F^p\cap F^{p'}$. These are nine-dimensional and occur in a four-dimensional family parametrized by pairs of distinct points in $\R\PP^2$. The union of all such faces is a semialgebraic subset of dimension $9+4=13$ in the boundary of $\mathcal{K}_{3,4}$. Faces of type (2) also have dimension nine. The condition that the Hessian has a double root at a point $r\in\R\PP^1$ gives two linear conditions on the coefficients of $q$. There is an additional condition that the cubic part $C$ of $q$ in \eqref{Eq:homog_decomp} also vanishes at $r$, for otherwise $q$ takes negative values along the line through $p$ corresponding to $r$. A face of type (2) is the limit of faces $F^p\cap F^{p'}$ of type (1) as $p'$ approaches $p$ along the line corresponding to $r$. These faces form a three-dimensional family on which $SO(3)$ acts faithfully and transitively. Let us now examine the exposed faces of $\mathcal{K}_{3,4}$. Let $\ell\in \widehat{\mathcal{V}}_{3,4}$ be a symmetric tensor in the dual cone to $\mathcal{K}_{3,4}$. Then $\ell$ is the sum of decomposable symmetric tensors, \[ \ell\ =\ \nu_4(p_1)+\dotsb+\nu_4(p_s)\,, \] and so it supports the face $F^{p_1}\cap \cdots \cap F^{p_s}$. 
Thus the exposed faces of $\mathcal{K}_{3,4}$ are intersections of facets, and they consist of non-negative ternary quartics that vanish at a given set of points. Since the faces of type (2) are not of this form, they are not exposed. \end{proof} Our next agenda item is a discussion of the algebraic boundaries of the convex bodies and cones discussed above. The algebraic boundary of the cone $\widehat{\mathcal{V}}_{3,4}$ is characterized by the vanishing of the determinant of the Hankel matrix $K_\l$ in~\eqref{Eq:Reznick_Hankel}. From this we conclude: \begin{corollary} The algebraic boundary of the Veronese orbitope $\mathcal{V}_{3,4}$ is the variety of dimension $13$ and degree six which is defined by the linear equation~\eqref{Eq:affine} and the Hankel determinant $\,{\rm det}(K_\l) = 0$. The extreme points of $\mathcal{V}_{3,4}$ are precisely the Hankel matrices $K_\l$ of rank $1$. \end{corollary} We observed in the proof of Theorem~\ref{Th:SOS} that the boundary of $\mathcal{K}_{3,4}$ consists of non-negative quartics $q$ that vanish at some point $p$ of $\R\PP^2$, and that the partial derivatives of $q$ necessarily also vanish at $p$. That is, the plane quartic curve defined by $q=0$ is singular at $p$. Thus the algebraic boundary of $\mathcal{K}_{3,4}$ consists of singular ternary quartics. Working in $\PP(\mbox{Sym}_4\C^3)\simeq\PP^{14}$, and its dual space of ternary quartics, this algebraic boundary is seen to be the dual variety to the Veronese surface which consists of rank $1$ Hankel matrices $K_\l$. \begin{corollary} \label{cor:discrorbi} The algebraic boundary of the coorbitope cone $\mathcal{K}_{3,4}$ is an irreducible hypersurface of degree $27$. 
Its defining polynomial is the discriminant $\Delta_q$ of the ternary quartic \begin{multline*} \quad q(x,y,z) \,\, = \,\, c_{400} x^4 + c_{310} x^3 y + c_{301} x^3 z + c_{220} x^2 y^2 + c_{211} x^2 y z + c_{202} x^2 z^2 + c_{130} x y^3 \\ \qquad + c_{121} x y^2 z + c_{112} x y z^2 + c_{103} x z^3 + c_{040} y^4 + c_{031} y^3 z + c_{022} y^2 z^2 + c_{013} y z^3 + c_{004} z^4 \,. \quad \end{multline*} \end{corollary} The discriminant $\Delta_q$ is a homogeneous polynomial of degree $27$ in the $15$ indeterminates $c_{ijk}$. In what follows we shall present an explicit expression for $\Delta_q$. That expression will be derived from a beautiful classical formula due to Sylvester which can be found in Section 3.4.D, starting on page 118, of the book by Gel$'$fand, Kapranov, and Zelevinsky~\cite{GKZ}. According to \cite[Prop.~1.7, page 434]{GKZ}, the discriminant $\Delta_q$ is proportional to the resultant $R_3(q_x,q_y,q_z)$ of the three partial derivatives of the quartic $q$. Here $R_3$ denotes the resultant of three ternary cubics, and the precise relation is $\, \Delta_q \,= \, 4^{-7} \cdot R_3(q_x,q_y,q_z)$. We write $(\R^3)^*$ for the space of linear forms on $\R^3$, and we introduce the linear map \begin{equation}\label{Eq:first_map} T\ \colon\ (\R^3)^*\oplus(\R^3)^*\oplus(\R^3)^*\ \longrightarrow\ \mbox{Sym}_4(\R^3)^* \,, \,\,(f,g,h)\mapsto f q_x + g q_y + h q_z. \end{equation} Next, for an exponent vector $\alpha=(\alpha_1,\alpha_2,\alpha_3)$ of degree $\alpha_1+\alpha_2+\alpha_3=2$ and any variable $t\in\{x,y,z\}$, we choose a decomposition of the cubic partial derivative \begin{equation}\label{Eq:decomp} q_t\ =\ x^{\alpha_1+1}P^{(t)}_\alpha\,+\, y^{\alpha_2+1}Q^{(t)}_\alpha\,+\, z^{\alpha_3+1}R^{(t)}_\alpha\,, \end{equation} where $P^{(t)}_\alpha$, $Q^{(t)}_\alpha$, and $R^{(t)}_\alpha$ are forms of degree $2-\alpha_1$, $2-\alpha_2$, and $2-\alpha_3$, respectively. 
Then \[ D_\alpha\ =\ \det\left( \begin{matrix} P^{(x)}_\alpha & Q^{(x)}_\alpha & R^{(x)}_\alpha \\ P^{(y)}_\alpha & Q^{(y)}_\alpha & R^{(y)}_\alpha \\ P^{(z)}_\alpha & Q^{(z)}_\alpha & R^{(z)}_\alpha \end{matrix}\right)\,\] is a quartic polynomial. Finally, we define a linear map $\,D\colon \mbox{Sym}_2\R^3 \to\mbox{Sym}_4(\R^3)^* $ by sending $\delta_\alpha\mapsto D_\alpha$, where $\{\delta_\alpha\}$ is the basis dual to the monomial basis of $\mbox{Sym}_2(\R^3)^*$. \begin{proposition} {\rm (Sylvester \cite[\S3.4.D]{GKZ})} The discriminant $\Delta_q$ is proportional to the resultant of the ternary cubics $q_x,q_y,q_z$ which is equal to the determinant of the linear map \[ T\oplus D\ \colon\ (\R^3)^*\oplus(\R^3)^*\oplus(\R^3)^*\oplus\mbox{\rm Sym}_2(\R^3)^*\ \longrightarrow\ \mbox{\rm Sym}_4(\R^3)^*\,. \] This is an irreducible homogeneous polynomial of degree $27$ in the $15$ coefficients $c_{ijk}$. \end{proposition} We write this map explicitly as a $15\times 15$ matrix $\mathcal{D}(q)$ whose rows are indexed by the $15$ monomials $x^iy^jz^k$ of degree $i+j+k=4$ and whose columns are indexed by $15$ auxiliary quartics, and whose entry in a given row and column is the coefficient of that monomial in that auxiliary quartic. Nine of the quartics come from the map $T$ in~\eqref{Eq:first_map}. They are \[ xq_x\,,\ yq_x\,,\ zq_x\,,\ xq_y\,,\ yq_y\,,\ zq_y\,,\ xq_z\,,\ yq_z\,,\ zq_z\,. \] The other six are the polynomials $D_{002}$, $D_{020}$, $D_{200}$, $D_{110}$, $D_{101}$, and $D_{011}$ from $D$. We only describe $D_{002}$ and $D_{110}$ as the others may be recovered from these by symmetry. For $D_{002}$, note that each partial derivative of $q$ has six terms divisible by $x$, three divisible by $y$ and not $x$, and a unique term involving $z^3$. This leads to a decomposition~\eqref{Eq:decomp}, and $D_{002}$ is the determinant of the $3\times 3$ matrix: \[ \begin{bmatrix} 4c_{400}x^2 + 3c_{310}xy + 3c_{301}xz + 2c_{220}y^2 + 2c_{211}yz + 2c_{202}z^2 \!&\! 
c_{130}y^2 + c_{121}yz + c_{112}z^2 \!&\! c_{103} \\ \! c_{310}x^2 + 2c_{220}xy + c_{211}xz + 3c_{130}y^2 + 2c_{121}yz + c_{112}z^2 \!&\! 4c_{040}y^2 + 3c_{031}yz + 2c_{022}z^2 \!&\! c_{013} \\ \! c_{301}x^2 + c_{211}xy + 2c_{202}xz + c_{121}y^2 + 2c_{112}yz + 3c_{103}z^2 \!&\! c_{031}y^2 + 2c_{022}yz + 3c_{013}z^2 \!&\! 4c_{004} \end{bmatrix} \] By a similar reasoning, we find that $D_{110}$ is the determinant of the $3\times 3$ matrix: \[ \begin{bmatrix} 4c_{400}x + 3c_{310}y + 3c_{301}z \!&\! 2c_{220}x + c_{130}y + c_{121}z \!&\! 2c_{211}xy + 2c_{202}xz + c_{112}yz + c_{103}z^2 \\ c_{310}x + 2c_{220}y + c_{211}z \!&\! 3c_{130}x + 4c_{040}y + 3c_{031}z \!&\! 2c_{121}xy + c_{112}xz + 2c_{022}yz + c_{013}z^2 \!\\ c_{301}x + c_{211}y + 2c_{202}z \!&\! c_{121}x + c_{031}y + 2c_{022}z \!&\! 2c_{112}xy + 3c_{103}xz + 3c_{013}yz + 4c_{004}z^2 \end{bmatrix} \] This concludes our discussion of the algebraic boundary of the coorbitope cone $\mathcal{K}_{3,4}$. We close with the remark that the notations $\mathcal{K}_{\bdt,\bdt} $ and $ \mathcal{C}_{\bdt,\bdt}$ are consistent with those used in the paper \cite{SU} where these cones consist of concentration matrices and sufficient statistics of a certain Gaussian model. \section{Grassmann Orbitopes} The {\em Grassmann orbitope} $\mathcal{G}_{d,n}$ is the convex hull of the Grassmann variety of oriented $d$-dimensional linear subspaces of $\R^n$ in its Pl\"ucker embedding in the unit sphere in $\wedge_d \R^n$. Equivalently, this is the highest weight orbitope for the group $SO(n)$ acting on $\wedge_d \R^n$: $$ \qquad \qquad \mathcal{G}_{d,n} \,\, = \,\, {\rm conv}( SO(n) \cdot e_{12\cdots d}) \qquad \hbox{where $\,e_{12 \cdots d} \,=\, e_1 \wedge e_2 \wedge \cdots \wedge e_d \, \in \, \wedge_d \R^n$} . $$ Faces of the Grassmann orbitope are of considerable interest in differential geometry since, according to the Fundamental Theorem of Calibrations, they correspond to area-minimizing $d$-dimensional submanifolds of $\R^n$. 
References to this subject include the seminal article on calibrated geometries by Harvey and Lawson \cite{HL1982} and the beautiful expositions by Morgan \cite{HarMor86, Mor85}. In this section we review basic known facts about $\mathcal{G}_{d,n}$ and we initiate its study from the perspectives of combinatorics, semidefinite programming, and algebraic geometry. Vectors in $\wedge_d \R^n$ are written in terms of Pl\"ucker coordinates relative to the standard basis: $$ p \,\,\,\,=\, \sum_{1 \leq i_1 < \cdots < i_d \leq n} \!\!\! p_{i_1 i_2 \cdots i_d} \, e_{i_1} \wedge e_{i_2} \wedge \cdots \wedge e_{i_d}. $$ The Pl\"ucker vector $p$ lies in the {\em oriented Grassmann variety} if and only if it is decomposable, i.e. $\,p = u_1 \wedge u_2 \wedge \cdots \wedge u_d$ for some pairwise orthogonal subset $\{u_1,u_2,\ldots,u_d \}$ of $ \sphere^{n-1}$. This happens if and only if $p$ lies in the unit sphere in $\wedge_d \R^n$ and satisfies all quadratic Pl\"ucker relations, the relations among the $d {\times} d$-minors of a $d {\times} n$-matrix. These relations generate the prime ideal called the {\em Pl\"ucker ideal} $I_{d,n}$. Thus the oriented Grassmann variety is the algebraic subvariety of $\wedge_d \R^n$ defined by the ideal \begin{equation} \label{eq:PluckerIdeal} I_{d,n} \,\,+\,\ \bigl\langle \, 1 \, - \!\!\!\! \sum_{1 \leq i_1 < \cdots < i_d \leq n} p_{i_1 i_2 \cdots i_d}^2 \, \bigr\rangle. \end{equation} The convex hull of that real algebraic variety is the $\binom{n}{d}$-dimensional Grassmann orbitope~$\mathcal{G}_{d,n}$. \begin{example}[$d{=}2, n{=}4$] \label{ex:G24} The Grassmann orbitope $\mathcal{G}_{2,4}$ is the convex hull of the variety~of \begin{equation} \label{eq:PluckerIdeal24} \bigl\langle \, p_{12} p_{34} - p_{13} p_{24} + p_{14} p_{23} \,,\,\, p_{12}^2 + p_{13}^2 + p_{14}^2 + p_{23}^2 + p_{24}^2 + p_{34}^2 -1\, \bigr\rangle. 
\end{equation} As suggested by \cite[Proposition~3.2]{Mor85}, we perform the orthogonal change of coordinates $$ \begin{matrix} u = \frac{1}{\sqrt{2}}(p_{12}+p_{34}), & v = \frac{1}{\sqrt{2}}(p_{13}-p_{24}), & w = \frac{1}{\sqrt{2}}(p_{14}+p_{23}), \\ \rule{0pt}{14pt} x = \frac{1}{\sqrt{2}}(p_{12}-p_{34}), & y = \frac{1}{\sqrt{2}}(p_{13}+p_{24}) ,& z = \frac{1}{\sqrt{2}}(p_{14}-p_{23}) . \end{matrix} $$ This is simultaneous rotation by $\pi/4$ in each of the coordinate planes spanned by the pairs $( p_{12},p_{34})$, $( p_{13},p_{24})$, and $( p_{14},p_{23})$. In these new coordinates, the prime ideal (\ref{eq:PluckerIdeal24}) equals $$ \bigl\langle \,u^2+ v^2 + w^2 - \frac{1}{2} \, ,\,\, x^2+ y^2 + z^2 - \frac{1}{2} \,\,\bigr\rangle. $$ This reveals that $\mathcal{G}_{2,4}$ is the direct product of two three-dimensional balls of radius $1/\sqrt{2}$. \qed \end{example} We next examine the case $d = 2$ and arbitrary $n$. The vectors $p$ in $ \wedge_2 \R^n$ can be identified with skew-symmetric $n {\times} n$-matrices, and this brings us back to the orbitopes in Section 3.2. \begin{corollary} \label{rmk:GrassInSec3} The Grassmann orbitope $\mathcal{G}_{2,n}$ coincides with the skew-symmetric Schur-Horn orbitope of a skew-symmetric matrix $N \in \wedge_2 \R^n$ having rank two and $\Lambda(N) = (1,0,0,\ldots,0)$. \end{corollary} If $p$ is a real skew-symmetric matrix whose eigenvalues are $\pm i \hl_1,\ldots,\pm i \hl_k$, where $i = \sqrt{-1}$, then the matrix $\,i \cdot p\,$ is Hermitian and its eigenvalues are the real numbers $\pm \hl_1,\ldots,\pm \hl_k$. Recall that the operator $\mathcal{L}_k$ computes the $k$-th additive compound matrix of a given matrix. \begin{theorem} \label{thm:G2n} Let $n \geq 5$ and $k = \lfloor n/2 \rfloor$. The Grassmann orbitope equals the spectrahedron \begin{equation} \label{eq:spectraG2n} \mathcal{G}_{2,n} \,\, = \,\, \bigl\{ \, p \in \wedge_2 \R^n \,:\, {\rm Id}_{\binom{n}{k}} - \Sfunc_{k}(i \cdot p) \,\succeq \, 0 \,\bigr\}. 
\end{equation} Its algebraic boundary $\partial_a \mathcal{G}_{2,n}$ is an irreducible hypersurface of degree $2^k$, defined by a factor of the determinant of the matrix ${\rm Id}_{\binom{n}{k}} - \Sfunc_{k}(i \cdot p)$. The proper faces of $\mathcal{G}_{2,n}$ are $SU(m)$-orbitopes for $1 \le m \le k$. Every face $F$ is associated with an even-dimensional subspace $V_F$ equipped with an orthogonal complex structure and the extreme points of $F$ correspond to complex lines in $V_F$. \end{theorem} Everything in this theorem is also true for the small cases $n = 3,4$, with the exception that the quartic hypersurface $\partial_a \mathcal{G}_{2,4}$ is not irreducible, as was seen in Example \ref{ex:G24}. In light of Theorem~\ref{thm:facelatticeSkewSH}, the reducibility of $\partial_a \mathcal{G}_{2,4}$ arises because the two-dimensional crosspolytope equals the square, which decomposes as a Minkowski sum of two line segments. This may also be seen as the stabilizer of a decomposable tensor in $SO(4)$ is $\pm I$, where $I$ is the identity matrix, and $SO(4)/\{\pm I\}\simeq SO(3)\times SO(3)$, so $\mathcal{G}_{2,4}$ is also an orbitope for $SO(3)\times SO(3)$. \begin{proof} We begin with the last statement about the face lattice of $\mathcal{G}_{2,n}$. This result is well-known in the theory of calibrations, where it is usually phrased as follows: {\em every face of the Grassmannian of two-planes in $\R^n$ consists of the complex lines in some $2m$-dimensional subspace of\/ $\R^n$ under some orthogonal complex structure}. See~\cite[\S 1.1]{Mor85}. To derive the spectrahedral representation (\ref{eq:spectraG2n}), we note that the eigenvalues of $\Sfunc_{k}(i \cdot p)$ are the sums of any $k$ distinct numbers of the eigenvalues of the skew-symmetric matrix $p$. These are $ -\hl_1,\ldots -\hl_k,\hl_1 ,\ldots,\hl_k$ if $n$ is even and $ -\hl_1,\ldots, -\hl_k,0,\hl_1,\ldots,\hl_k$ if $n$ is odd. 
In light of Corollary \ref{rmk:GrassInSec3}, we can apply the results in Section 3.2 to conclude that $p$ lies in $\mathcal{G}_{2,n}$ if and only if $\,\pm \hl_1 \pm \hl_2 \pm \cdots \pm \hl_k \leq 1\,$ for all choices of signs. In terms of polyhedral geometry, this condition means that the vector $\Lambda(p) = (\hl_1,\hl_2,\ldots,\hl_k)$ lies in the crosspolytope $\Lambda(\mathcal{G}_{2,n})$. Since all $\binom{n}{k}$ eigenvalues of $\Sfunc_{k}(i \cdot p)$ are bounded above by the maximum of the $2^k$ special eigenvalues $\,\pm \hl_1 \pm \hl_2 \pm \cdots \pm \hl_k$, we conclude that $p \in \mathcal{G}_{2,n}$ if and only if $\,{\rm Id}_{\binom{n}{k}} - \Sfunc_{k}(i \cdot p) \,\succeq \, 0 $. To compute the algebraic boundary $\,\partial_a \mathcal{G}_{2,n}\,$ we consider the expression $$ \prod_{\sigma \in \{-1,+1\}^k} (1+\sigma_1 \hl_1 + \sigma_2 \hl_2 + \cdots + \sigma_k \hl_k) .$$ This is a symmetric polynomial of degree $2^{k-1}$ in the squared eigenvalues $\hl_1^2, \hl_2^2, \ldots,\hl_k^2$, and hence it can be written as a polynomial in the coefficients of the characteristic polynomial $$ {\rm det}\bigl(i\cdot p - x \cdot {\rm Id}_{n} \bigr) \,\, = \,\, x^{n \,{\rm mod} \, 2} \cdot (x^2 - \hl_1^2) (x^2 - \hl_2^2) \cdots (x^2 - \hl_k^2). $$ The resulting polynomial has degree $2^k$ in the entries $p_{ab}$ of the matrix $p $, and it vanishes on the boundary of the orbitope $\mathcal{G}_{2,n}$. It can be checked that it is irreducible for $n \geq 5$. 
\end{proof} \begin{example} Let $n = 6$ and consider the characteristic polynomial of our Hermitian matrix: $$ {\rm det} \begin{pmatrix} -x & i p_{12} & i p_{13} & i p_{14} & i p_{15} & i p_{16} \\ - i p_{12} & -x & i p_{23} & i p_{24} & i p_{25} & i p_{26} \\ - i p_{13} & - i p_{23} & -x & i p_{34} & i p_{35} & i p_{36} \\ - i p_{14} & - i p_{24} & - i p_{34} & - x & i p_{45} & i p_{46} \\ - i p_{15} & - i p_{25} & - i p_{35} & - i p_{45} & - x & i p_{56} \\ - i p_{16} & - i p_{26} & - i p_{36} & - i p_{46} & - i p_{56} & -x \end{pmatrix} \quad \begin{matrix} = && x^6 + a_4 x^4 + a_2 x^2 + a_0 \\ & & \\ = && (x^2 - \hl_1^2)(x^2 - \hl_2^2)(x^2 - \hl_3^2) \end{matrix} $$ The algebraic boundary of the Grassmann orbitope $ \mathcal{G}_{2,6}$ is derived from the polynomial $$ \prod_{\sigma \in \{\pm1\}^3} \!\!\!\! (1 + \sigma_1 \hl_1 + \sigma_2 \hl_2 + \sigma_3 \hl_3 ) \ = \ a_4^4 {+}4 a_4^3 {-}8 a_4^2 a_2 {+}16 a_2^2 {-}16 a_4 a_2 {+}6 a_4^2 {+}64 a_0 {-}8 a_2 {+}4 a_4 {+}1\,. $$ We rewrite this expression in terms of the $15$ unknowns $p_{ij}$ to get an irreducible polynomial of degree $8$ with $10791$ terms. This is the defining polynomial of the hypersurface $\partial_a \mathcal{G}_{2,6}$. For $n=7$ we use the same polynomial in $a_0,a_2,a_4$ but now there is one more eigenvalue $0$. The defining polynomial of $\partial_a \mathcal{G}_{2,7}$ has $44150$ terms of degree $8$ in the $21$ matrix entries $p_{ij}$. \qed \end{example} We now come to the harder case $d=3, n=6$. The Grassmann orbitope $\mathcal{G}_{3,6}$ is a $20$-dimensional convex body. Its facial structure was determined by Dadok and Harvey in \cite{dadok}, and independently by Morgan in \cite{Mor85}. We present their well-known results in our language. 
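As an editorial aside (not part of the original text, with function names of our own choosing), the degree-$8$ identity displayed in the example above can be checked numerically: for integer eigenvalues $\hl_1, \hl_2, \hl_3$ both sides agree exactly.

```python
from itertools import product

def boundary_identity(l1, l2, l3):
    """Compare the product over the eight sign vectors with the stated
    polynomial in the coefficients a4, a2, a0 of the characteristic
    polynomial x^6 + a4 x^4 + a2 x^2 + a0 = (x^2-l1^2)(x^2-l2^2)(x^2-l3^2)."""
    # left-hand side: prod over sigma in {-1,+1}^3 of (1 + sigma . lambda)
    lhs = 1
    for s1, s2, s3 in product((-1, 1), repeat=3):
        lhs *= 1 + s1 * l1 + s2 * l2 + s3 * l3
    # coefficients in terms of elementary symmetric functions of the squares
    a4 = -(l1**2 + l2**2 + l3**2)
    a2 = l1**2 * l2**2 + l1**2 * l3**2 + l2**2 * l3**2
    a0 = -(l1 * l2 * l3) ** 2
    rhs = (a4**4 + 4 * a4**3 - 8 * a4**2 * a2 + 16 * a2**2 - 16 * a4 * a2
           + 6 * a4**2 + 64 * a0 - 8 * a2 + 4 * a4 + 1)
    return lhs, rhs
```

For instance, `boundary_identity(2, 1, 3)` returns `(-1575, -1575)`, and the two entries coincide for every integer triple.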
\begin{theorem} \label{thm:morgan36} The orbitope $\mathcal{G}_{3,6}$ has three classes of positive-dimensional exposed faces: \begin{enumerate} \item one-dimensional faces from pairs of subspaces that satisfy the angle condition (\ref{eq:threeangles}); \item three-dimensional faces that arise as $SO(3)$-orbitopes and these are round $3$-balls; \item $12$-dimensional faces that are $SU(3)$-orbitopes, from the Lagrangian Grassmannian. \end{enumerate} \end{theorem} Harvey and Morgan \cite{HarMor86} extended these results to $\mathcal{G}_{3,7}$. We will here focus on $d=3, n=6$. Our first goal is to explain the faces described in Theorem \ref{thm:morgan36}, starting with the edges. Let $L$ and $L'$ be three-dimensional linear subspaces in $\R^6$ with corresponding unit length Pl\"ucker vectors $p$ and $p'$. We define $\theta_1 (L,L')$ to be the minimum angle between any unit vector $v_1 \in L$ and any unit vector $w_1 \in L'$. Next, $\theta_2(L,L')$ is the minimum angle between two unit vectors $v_2 \in L$ and $w_2 \in L'$ such that $v_2 \perp v_1$ and $w_2 \perp w_1$. Finally, we define $\theta_3(L,L')$ to be the angle between two unit vectors $v_3 \in L$ and $w_3 \in L'$ such that $v_3 \perp \{v_1,v_2\}$ and $w_3 \perp \{ w_1,w_2\}$. We refer to \cite[Lemma 2.3]{Mor85} for the fact that $\theta_2$ and $\theta_3$ are well defined by this rule. The angle condition referred to in part (1) of Theorem \ref{thm:morgan36} is the inequality \begin{equation} \label{eq:threeangles} \theta_3 (L,L') \,<\, \theta_1(L,L') + \theta_2 (L,L'). \end{equation} The convex hull of $p$ and $p'$ is an exposed edge of $\mathcal{G}_{3,6}$ if and only if the condition (\ref{eq:threeangles}) holds. The maximal facets in part (3) of Theorem \ref{thm:morgan36} are known to geometers as {\em special Lagrangian facets}, and we represent them as orbitopes as follows. 
The {\em special unitary group} $SU(3)$ consists of all complex $3 {\times} 3$-matrices $U$ with ${\rm det}(U)= 1$ and $U U^* = {\rm Id}_3$. If $A = {\rm re}(U)$ and $B = {\rm im}(U)$, so that $U = A + iB$, then we can identify $U$ with the real $6 {\times} 6$-matrix $$ \tilde U \,\,\, = \,\,\, \begin{pmatrix} \,A & -B \, \\ \,B &\phantom{-} A \,\end{pmatrix} .$$ Note that this matrix lies in $SO(6)$ if and only if $A A^T +B B^T = {\rm Id}_3$, $\,A B^T = B A^T$, and ${\rm det}(A+iB) =1$. Hence the transformation $U \mapsto \tilde U$ realizes $SU(3)$ as a subgroup of $SO(6)$. The orbit $\,SU(3) \cdot e_1 {\wedge} e_2 {\wedge} e_3\,$ is known as the {\em special Lagrangian Grassmannian}. This is a five-dimensional real algebraic variety in the unit sphere in $\wedge_3 \R^6$. We call its convex hull $$\,\mathcal{SL}_{3,6} \,=\, {\rm conv} (SU(3) \cdot e_1 {\wedge} e_2 {\wedge} e_3) $$ the {\em special Lagrangian orbitope}. It is obtained from the $20$-dimensional Grassmann orbitope $\mathcal{G}_{3,6}$ by maximizing the linear function $\, p_{123} - p_{156} + p_{246} - p_{345} $, which takes value $1$ on that face. The face $\mathcal{SL}_{3,6}$ is $12$-dimensional because it satisfies the two affine equations $$ p_{123} - p_{156} + p_{246} - p_{345} \, = \, {\rm re} \, {\rm det}( A + iB) \,=\,1 \,\,\, \hbox{and} \,\,\, p_{126} - p_{135} + p_{234} - p_{456} \, = \, {\rm im} \, {\rm det}( A + iB) \,=\, 0 $$ plus the six independent linear equations that cut out the Lagrangian Grassmannian: $$ p_{125} + p_{136} \,=\, p_{134} + p_{235} \,=\, p_{124} - p_{236} \,=\, p_{146} + p_{256} \,=\, p_{245} + p_{346} \,=\, p_{145} - p_{356} \,\, =\,\, 0. $$ The exposed faces of type (2) in Theorem \ref{thm:morgan36} are not facets. Each of them is the intersection of two special Lagrangian facets. 
For example, consider the two linear functionals $$ \phi_+ \, = \, p_{123} - p_{156} + p_{246} - p_{345} \quad \hbox{ and } \quad \phi_- \, = \, p_{123} - p_{156} - p_{246} + p_{345}, $$ which were discussed in \cite[\S 4.4]{Mor85}. Each of them supports a special Lagrangian facet. The intersection of these two $12$-dimensional facets is a three-dimensional ball. Indeed, the linear functional $\frac{1}{2} ( \phi_+ + \phi_-) = p_{123} - p_{156}$ is bounded above by $1$ on $\mathcal{G}_{3,6}$, and the subset at which the value equals $1$ is contained in the linear span of the six basis vectors $e_1 \wedge e_i \wedge e_j$ where $i,j \in \{2,3,5,6\}$. In the intersection of this linear space with $\mathcal{G}_{3,6}$ we find the Grassmann orbitope $\mathcal{G}_{2,4}$ from Example \ref{ex:G24}, with our linear functional $p_{123} - p_{156}$ being represented by the scaled coordinate $\sqrt{2} x$. The face of $\mathcal{G}_{2,4}$ where $\sqrt{2} x$ attains its maximal value $1$ is a $3$-ball. This concludes our discussion of the census of faces given in Theorem \ref{thm:morgan36}. \smallskip A natural question that arises next is whether the Grassmann orbitope $\mathcal{G}_{3,6}$ or its dual body $\mathcal{G}_{3,6}^\circ $ can be represented as a spectrahedron. It turns out that the answer is negative. \begin{theorem} \label{thm:G36bad} The Grassmann coorbitope $\mathcal{G}_{3,6}^\circ$ has extreme points that are not exposed. The Grassmann orbitope $\mathcal{G}_{3,6}$ has edges that are not exposed. Neither of them is a spectrahedron. \end{theorem} \begin{proof} The first assertion was proved by Dadok and Harvey in \cite[Theorem 7]{dadok}. For the second assertion we proceed as follows. We apply the technique in \cite[Lemma 2.2]{Mor85} and restrict to the linear subspace $\,V = \R \{e_{123},e_{126},e_{135},e_{234},e_{156},e_{246},e_{345},e_{456}\} $ of $\wedge_3 \R^6$. 
The intersection $\mathcal{G}_{3,6} \cap V$ is the $SO(2) \times SO(2) \times SO(2)$-orbitope of $e_{123}$, where the action is by unitary diagonal $3 {\times} 3$-matrices, while the intersection $\mathcal{SL}_{3,6} \cap V$ is the $SO(2) \times SO(2)$-orbitope of $ e_{123}$, where the action is by special unitary diagonal $3 {\times} 3$-matrices, as described in \cite[\S 4.3]{Mor85}. We claim that the $SO(2) \times SO(2)$-orbitope $\mathcal{SL}_{3,6} \cap V$ is $2$-neighborly. This can be seen by examining the bivariate trigonometric polynomials of the form $$ \begin{matrix} f \ = & \mbox{\ \quad} \ x_{123} \cdot {\rm cos}(\alpha) {\rm cos}(\beta) {\rm cos}(-\alpha -\beta) \,\,+\,\,x_{126} \cdot {\rm cos}(\alpha) {\rm cos}(\beta) {\rm sin}(-\alpha -\beta)\\ & \,\,-\,\,x_{135}\cdot {\rm cos}(\alpha) {\rm sin}(\beta) {\rm cos}(-\alpha -\beta) \,\,+\,\,x_{234} \cdot {\rm sin}(\alpha) {\rm cos}(\beta) {\rm cos}(-\alpha -\beta) \\ & \,\,+\,\,x_{156} \cdot {\rm cos}(\alpha) {\rm sin}(\beta) {\rm sin}(-\alpha -\beta) \,\,-\,\,x_{246} \cdot {\rm sin}(\alpha) {\rm cos}(\beta) {\rm sin}(-\alpha -\beta) \\ & \,\,+\,\,x_{345} \cdot {\rm sin}(\alpha) {\rm sin}(\beta) {\rm cos}(-\alpha -\beta) \,\,-\,\,x_{456} \cdot {\rm sin}(\alpha) {\rm sin}(\beta) {\rm sin}(-\alpha -\beta). \end{matrix} $$ Indeed, for any choice of $(\alpha_1, \beta_1) $ and $(\alpha_2,\beta_2)$, we can find eight real coefficients $\,x_{ijk}\,$ such that $f$ and its derivatives vanish at $(\alpha_1, \beta_1) $ and $(\alpha_2,\beta_2)$ but $f$ is strictly positive elsewhere. The results in \cite{Mor85} imply that every exposed edge of $\mathcal{G}_{3,6} \cap V$ is also an exposed edge of $\mathcal{G}_{3,6}$. We believe that Morgan's technique can be adapted to show that {\em every} exposed edge of $\mathcal{SL}_{3,6} \cap V$ is an exposed edge of $\mathcal{SL}_{3,6}$. 
For the purpose of proving Theorem \ref{thm:G36bad}, however, we only need to identify {\em one} exposed edge of $\mathcal{SL}_{3,6} \cap V$ that is an exposed edge of $\mathcal{SL}_{3,6}$. The claim that such exposed edges exist can be derived from \cite[Theorem 12 (i)]{dadok}. With the help of Philipp Rostalski, we also obtained a computational proof of that claim. This was done as follows. We selected various random choices of points $(\alpha_1, \beta_1) $ and $ (\alpha_2,\beta_2)$ in $\R^2$. Each choice specifies two three-dimensional subspaces $L$ and $L'$ of $\R^6$ for which equality holds in the angle condition (\ref{eq:threeangles}). The corresponding line segment is not an exposed edge of $\mathcal{G}_{3,6}$. To show that the line segment between $L$ and $L'$ is exposed in $\mathcal{SL}_{3,6}$, we use a technique from semidefinite programming. First we compute the eight coefficients $x_{ijk}$ of the supporting function $f$ as above. This gives a linear function $\sum x_{ijk} p_{ijk}$ on $\wedge_3 \R^6$. We then run a first-order Lasserre relaxation (cf.~\cite{Lasserre}) to minimize this linear function subject to the linear and quadratic constraints that cut out the special Lagrangian Grassmannian. The optimal value is zero, the optimal Lasserre moment matrix has rank two, and its image in $\wedge_3 \R^6$ lies in the relative interior of the line segment between $L$ and $L'$. We then re-optimize for various perturbations of the linear function $\sum x_{ijk} p_{ijk}$. The output of each run is a rank one moment matrix which certifies either $L$ or $L'$ as optimal solution of the optimization problem. This proves that the face of $\mathcal{SL}_{3,6}$ exposed by $\sum x_{ijk} p_{ijk}$ is the line segment between $L$ and~$L'$. \end{proof} Theorem \ref{thm:G36bad} shows that Grassmann orbitopes are generally not spectrahedra. We do not know whether they are linear projections of spectrahedra. 
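As an editorial aside (not part of the original text), the angles $\theta_1 \leq \theta_2 \leq \theta_3$ appearing in the angle condition (\ref{eq:threeangles}) are the classical principal angles between the two subspaces, computable from the singular values of $Q_1^T Q_2$ for orthonormal bases $Q_1$ and $Q_2$ (Bj\"orck--Golub); the function names below are our own.

```python
import numpy as np

def principal_angles(L, Lp):
    """Principal angles (in ascending order) between the column spans
    of the matrices L and Lp."""
    Q1, _ = np.linalg.qr(L)
    Q2, _ = np.linalg.qr(Lp)
    s = np.linalg.svd(Q1.T @ Q2, compute_uv=False)  # descending cosines
    return np.arccos(np.clip(s, -1.0, 1.0))         # ascending angles

def strict_angle_condition(L, Lp):
    """The condition theta_3 < theta_1 + theta_2 from the text."""
    t1, t2, t3 = principal_angles(L, Lp)
    return t3 < t1 + t2
```

For three-planes in $\R^6$ spanned by $e_1, e_2, e_3$ and by $\cos\alpha_i \, e_i + \sin\alpha_i \, e_{i+3}$, the principal angles are exactly $\alpha_1, \alpha_2, \alpha_3$; with $(\alpha_i) = (0.1, 0.2, 0.25)$ the condition holds, while with $(0.1, 0.2, 0.9)$ it fails.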
PDF Printer for Windows 7 1.01 - Easily create Adobe PDF documents on Windows 7
Program ID: 35175
Author: Vivid Document Imaging Technologies
Downloads: 31
License: Shareware
Cost: $69.00 US
Size: 2212K
Release Status: new
Last Updated: 2009-07-30
All products support Microsoft Windows 7 32-bit (x86 Edition) and 64-bit (x64 Edition), and are backward compatible with Microsoft Windows Vista, Windows XP, Windows 2000, Windows Server 2003, Windows 2000 Server, and Windows Server 2008.
Keywords: pdf, printer, driver, print, windows, seven, windows7, win7, win7pdf, adobe, microsoft, Shareware, Graphic Apps, Converters & Optimizers, Vivid Document Imaging Technologies, PDF Printer for Windows 7
Recent Changes: Add support for Windows 7 64-bit (x64 Edition)
Install Support: Install and Uninstall
Supported Languages: English, Catalan, Danish, Dutch, Finnish, French, German, Greek, Hungarian, Italian, Japanese, Korean, Norwegian, Portuguese, Spanish, Swedish
Additional Requirements: Windows 2000, XP, 2003, Vista, 2008, or Windows
TITLE: Scheme over a field vs. scheme defined over a field QUESTION [2 upvotes]: A scheme $X$ over another scheme $S$ is simply a morphism $X \to S$. In particular, if $K$ is a field, a scheme $X$ over $K$ is given by a morphism $X \to \operatorname{Spec} K$. For a long time, I conflated this with the notion of being defined over $K$. However, the morphisms don’t even go in the right direction: For if $k \subseteq K$ is a subfield, we have a morphism $\operatorname{Spec} K \to \operatorname{Spec} k$ and composing it with our morphism $X \to \operatorname{Spec} K$ above allows us to consider $X$ as a scheme over $k$ as well, but $X$ should not automatically be defined over every subfield (which intuitively I think of as being given by polynomial equations with coefficients in $k$). So here is where I am at: The question “Is (a general scheme) $X$ defined over a field $k$?” does not make sense. We can only ask it if $X$ is already given as a scheme over some extension $K$ of $k$. In this case, we say that $X$ is defined over $k$ if there is a scheme $X_k$ over $k$ such that the fiber product $X_k \times_k \operatorname{Spec} K$ is $X$ (or rather, isomorphic to $X$ as a scheme over $K$). In particular this would entail that a scheme over $k$ is always defined over $k$ (as a scheme over $k$!). This might explain my long-time confusion. Is this understanding correct? REPLY [5 votes]: Here $k$ and $K$ are fields, and a "$k$-scheme" just means a scheme over $k$. That's right, you don't say that a $K$-scheme $X$ is defined over a subfield $k \subset K$ just by composing the morphisms $X \rightarrow \operatorname{Spec} K \rightarrow \operatorname{Spec} k$. It's more subtle than that. In particular, one reason we don't do this is because for example if $K = \overline{k}$ then $X$ would no longer be of finite type over $k$ even if it is so over $K$. 
Usually the kind of $k$-schemes people are interested in are varieties, which by any reasonable definition (there are some minor differences between different authors in the definition of variety) are finite type. The usual context in which you encounter the language you're talking about is when you have an ambient $k$-variety $\mathscr X$, and $X$ is a subscheme of the $K$-scheme $\mathscr X_K = \mathscr X \times_{\operatorname{Spec}(k)} \operatorname{Spec}(K)$. By composition $X \rightarrow \mathscr X_K \rightarrow \operatorname{Spec}(K)$, $X$ is naturally a $K$-scheme. We say $X$ is defined over $k$ if there is a subscheme $X_0$ of $\mathscr X$ such that $(X_0)_K = X$. If $X_0$ exists, then it is unique.
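A concrete example may make this crisper (my illustration, not from the original answer): take $k = \mathbb{R}$, $K = \mathbb{C}$ and ambient variety $\mathscr X = \mathbb{A}^1_{\mathbb{R}} = \operatorname{Spec} \mathbb{R}[x]$.

```latex
% Subschemes of \mathbb{A}^1_{\mathbb{C}} = \operatorname{Spec} \mathbb{C}[x]:
X_1 = V(x - i):\ \text{not defined over } \mathbb{R}, \text{ since for any }
X_0 = V(f) \text{ with } f \in \mathbb{R}[x], \text{ the base change }
(X_0)_{\mathbb{C}} \text{ is stable under complex conjugation, but } \{i\}
\text{ is not.}
X_2 = V(x^2 + 1) = \{\pm i\}:\ \text{defined over } \mathbb{R} \text{ by }
X_0 = V(x^2 + 1) \subset \mathbb{A}^1_{\mathbb{R}}, \text{ since }
(X_0)_{\mathbb{C}} = X_2.
```

Both $X_1$ and $X_2$ are $\mathbb{C}$-schemes via composition with $\operatorname{Spec} \mathbb{C} \to \operatorname{Spec} \mathbb{R}$, but only the conjugation-stable one descends to $\mathbb{R}$.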
Contrary to what you may read in your Facebook feed or junk email box, there's no "magic pill" when it comes to fitness. Instead, following these simple, basic principles can help you reach and maintain your fitness goals.
\section{Conservation properties} \label{sec: conservation} \noindent In order to extend the operator theory for $L$ to Hardy spaces, we need to guarantee that certain operators $f(L)$ preserve vanishing zeroth moments or have the \emph{conservation property}\index{conservation property} $f(L)c = c$ whenever $c$ is a constant. In absence of integral kernels, the action of such operators on constants is explained via off-diagonal estimates as follows.\index{off-diagonal estimates!operator extensions by} \begin{prop} \label{prop: general OD extension} Let $T$ be a bounded linear operator on $\L^2(\R^n; V)$, where $V$ is a finite dimensional Hilbert space. If $T$ satisfies $\L^p$ off-diagonal estimates of order $\gamma > \nicefrac{n}{p}$ for some $p \in [2,\infty)$, then $T$ can be extended to a bounded operator $\L^\infty(\R^n; V) \to \Lloc^p(\R^n; V)$ via \begin{align} \label{eq: OD definition on Linfty} T f \coloneqq \sum_{j=1}^\infty T(\ind_{C_j(B(0,1))} f). \end{align} Moreover, if $(\eta_j) \subseteq \L^\infty(\R^n;\IC)$ is a family such that \begin{align} \label{eq: conservation family} \begin{minipage}{0.89\linewidth} \begin{itemize} \item $\displaystyle \sup_j \|\eta_j\|_\infty < \infty$, \item $\displaystyle \sum_{j=1}^\infty \eta_j(x) = 1$ for a.e.\ $x \in \R^n$, \item $\eta_j$ has compact support, which for some $C,c$ and all sufficiently large $j$ is contained in $B(0, C2^j) \setminus B(0, c 2^j)$, \end{itemize} \end{minipage} \end{align} then \begin{align*} Tf = \sum_{j=1}^\infty T(\eta_j f), \end{align*} where the right-hand side converges in $\Lloc^p(\R^n; V)$ and in particular in $\Lloc^2(\R^n; V)$. \end{prop} \begin{rem} \label{rem: general OD extension} A particular example for a family with the required properties is $\eta_j = \ind_{C_j(B)}$ for an arbitrary ball (or cube) $B \subseteq \R^n$. \end{rem} \begin{proof} We put $B \coloneqq B(0,1)$ and fix any compact set $K \subseteq \R^n$. 
For all large enough $j$ we have $\dist(K, C_j(B)) \geq 2^{j-1}$ and therefore \begin{align*} \|T (\ind_{C_j(B)} f) \|_{\L^p(K)} &\lesssim 2^{-j \gamma} \|f\|_{\L^p(C_j(B))} \\ & \lesssim 2^{j (\frac{n}{p} - \gamma)} \|f\|_\infty. \end{align*} Hence, the series on the right-hand side of \eqref{eq: OD definition on Linfty} converges absolutely in $\L^p(K)$ and the limit satisfies $\|Tf\|_{\L^p(K)} \leq C_K \|f\|_\infty$ for a constant $C_K$ that depends on $K$ but not on $f$. Next, we pick an integer $j_0 \geq 1$ such that $c2^{j_0} \geq 1$ and therefore $2^J B \subseteq B(0, c 2^{J+j_0})$ for all $J \geq 1$. If $J$ is large enough so that the annular support of $\eta_j$ is granted, then $\sum_{j=1}^J \ind_{C_j(B)} - \sum_{j=1}^{J+ j_0} \eta_j$ vanishes on $2^{J+1}B$, has support in $C' 2^J B$ for some $C'$ that does not depend on $J$ and is uniformly bounded. The off-diagonal bounds yield again \begin{align*} \bigg\|\sum_{j=1}^J T(\ind_{C_j(B)}f) - \sum_{j=1}^{J+j_0} T(\eta_j f) \bigg\|_{\L^p(K)} \lesssim 2^{J(\frac{n}{p}-\gamma)} \|f\|_\infty, \end{align*} which shows that $\sum_{j=1}^\infty T(\eta_j f)$ converges to $Tf$ in $\L^p(K)$. \end{proof} We begin with the conservation property\index{conservation property!for $BD$} for the resolvents of the perturbed Dirac operator $BD$ that has appeared implicitly in several earlier works~\cite{AusSta, R}. The proof relies on the cancellation property $Dc = 0$ for constants $c$ (where $D$ is understood in the sense of distributions). \begin{prop} \label{prop: conservation BD} If $\alpha \in \IN$ and $z \in \S_{\pi/2 - \omega_{BD}}$, then for all $c \in \IC^{m} \times \IC^{mn}$, \begin{align*} (1+\i z BD)^{-\alpha}c = c = (1+z^2 (BD)^2)^{-\alpha}c. 
\end{align*} \end{prop} \begin{proof} Let $R>0$ and $(\eta_j)$ be a smooth partition of unity on $\R^n$ subordinate to the sets \begin{align*} D_1 \coloneqq B(0,4R), \quad D_j \coloneqq B(0,2^{j+1}R)\setminus B(0,2^{j-1}R) \quad (j \geq 2), \end{align*} such that $\|\eta_j\|_\infty + 2^j R \|\nabla_x \eta_j\|_\infty \leq C$ for a dimensional constant $C$. We begin with the resolvents of $BD$, which satisfy $\L^2$ off-diagonal estimates of arbitrarily large order by Proposition~\ref{prop: OD for Dirac} and composition. According to Proposition~\ref{prop: general OD extension} we can write \begin{align*} (1+\i z BD)^{-\alpha} c = \sum_{j=1}^\infty (1+\i z BD)^{-\alpha} (\eta_j c), \end{align*} so that \begin{align*} (1+ \i z BD)^{-\alpha+1}c - (1+ \i z BD)^{-\alpha}c = \sum_{j=1}^\infty \i z (1+ \i z BD)^{-\alpha} BD(\eta_j c), \end{align*} where we set $(1+\i z BD)^{0}c \coloneqq c$ and used $\eta_j c \in \dom(D) = \dom(BD)$. Now, $BD(\eta_jc)$ has support in $B(0,2^{j+1}R)\setminus B(0,2^{j-1}R)$ also for $j = 1$ and satisfies $\|BD(\eta_j c)\|_\infty \leq C |c| \|B\|_\infty R^{-1}$. The off-diagonal estimates yield \begin{align*} \|(1+ \i z BD)^{-\alpha+1}c -(1+\i z BD)^{-\alpha }c \|_{\L^2(B(0, R/2))} \lesssim R^{\frac{n}{2}-\gamma-1} \sum_{j=1}^\infty 2^{j(\frac{n}{2}-\gamma)} \end{align*} with an implicit constant that is independent of $R$. Sending $R \to \infty$ gives $(1+ \i z BD)^{-\alpha+1}c = (1+\i z BD)^{-\alpha }c$. Since $(1+\i z BD)^0c=c$, we conclude $(1+\i z BD)^{-\alpha }c = c$ for all $\alpha$. The argument for the resolvents of $(BD)^2$ is identical and draws upon the identity \begin{align*} (1+ z^2 &(BD)^2)^{-\alpha+1}c - (1+ z^2 (BD)^2)^{-\alpha}c \\ &= \sum_{j=1}^\infty z^2 BD (1+ z^2 (BD)^2)^{-\alpha} BD(\eta_j c). 
\end{align*} The off-diagonal decay for $z^2 BD (1+ z^2 (BD)^2)^{-\alpha}$ follows again by composition since this operator can be written as \begin{align*} -\frac{ \i z}{2} \Big((1-\i z BD)^{-1} - (1+ \i z BD)^{-1} \Big) (1+ z^2 (BD)^2)^{-\alpha+1}. &\qedhere \end{align*} \end{proof} As a corollary we obtain the conservation property for the second-order operator $L$. The reader can refer to \cite[Sec.~4.4]{Ouhabaz} and references therein for related conservation properties in the realm of semigroups.\index{conservation property!for resolvents of $L$} \begin{cor} \label{cor: conservation} Let $\alpha \in \IN$ and $z \in \S_{(\pi-\omega_L)/2}^+$. Let $c \in \IC^m $ and let $f \in \L^2$ have compact support. Then one has the conservation formula \begin{align*} (1+z^2 L)^{-\alpha} c = c \end{align*} and its dual version \begin{align*} \int_{\R^n} a(1+z^2L)^{-\alpha} a^{-1}f \, \d x = \int_{\R^n} f \, \d x. \end{align*} \end{cor} \begin{proof} The left-hand sides are holomorphic functions of $z$ (valued in $\Lloc^2$ and $\IC^m$, respectively). Hence, it suffices to argue for $z=t \in (0,\infty)$. We have \begin{align*} (1+t^2 (BD)^2)^{-\alpha} = \begin{bmatrix} (1+t^2 L)^{-\alpha} & 0 \\ 0 & (1+t^2 M)^{-\alpha} \end{bmatrix}, \end{align*} so the first claim follows from the conservation property for $BD$. As $(a^*)^{-1} L^* a^*$ belongs to the same class as $L$, we also get \begin{align*} \int_{\R^n} a(1+t^2L)^{-\alpha} a^{-1}f \cdot \cl{c} \, \d x &= \int_{\R^n} f \cdot \cl{(1+t^2 (a^*)^{-1}L^*a^*)^{-\alpha}c} \, \d x\\ &= \int_{\R^n} f \cdot \cl{c} \, \d x \end{align*} and since $c \in \IC^m$ is arbitrary, the second claim follows. \end{proof} We turn to more general operators in the functional calculus. 
In view of Lemma~\ref{lem: functional calculus bounds from J(L) abstract} the decay of the auxiliary function at the origin limits the available off-diagonal decay and hence, in contrast with the case of resolvents, we have to use Proposition~\ref{prop: general OD extension} for exponents $p \neq 2$. \begin{lem} \label{lem: cancellation functional calculus} Let $p \in [2,\infty)$ be such that $((1+t^2 L)^{-1})_{t>0}$ is $\L^p$-bounded. Suppose that $\psi$ is of class $\Psi_\sigma^\tau$ on any sector, where $\tau > 0$ and $\sigma > \nicefrac{n}{(2p)}$. Then \begin{align*} \psi(t^2 L) c = 0 \quad (c \in \IC^m, \, t >0). \end{align*} \end{lem} \begin{proof} Let $\theta \in (0,1]$ be such that $q \coloneqq [p,2]_\theta$ satisfies $\sigma > \nicefrac{n}{(2q)}$. Lemma~\ref{lem: OD extrapolation to sectors} provides $\L^{q}$ off-diagonal decay for the resolvents of $L$ of arbitrarily large order on some sector $\S_\mu^+$. We pick $\nu \in (0, \mu)$ and write the definition of $\psi(t^2 L)$ as \begin{align*} \psi(t^2 L) = \frac{1}{2 \pi i} \int_{\bd \S_{\pi - 2\nu}^+} \psi(t^2 z) (1-z^{-1}L)^{-1} \, \frac{\d z}{z}. \end{align*} Setting $B \coloneqq B(0,1) \subseteq \R^n$, we formally have \begin{align*} \sum_{j \geq 1} \psi(t^2 L) (\ind_{C_j(B)} c) &= \frac{1}{2 \pi i} \int_{\bd \S_{\pi - 2\nu}^+} \psi(t^2 z) \sum_{j \geq 1} (1-z^{-1}L)^{-1}(\ind_{C_j(B)} c) \, \frac{\d z}{z} \\ &=\frac{1}{2 \pi i} \int_{\bd \S_{\pi - 2\nu}^+} \psi(t^2 z) c \, \frac{\d z}{z} \\ & = 0, \end{align*} where the second line uses the conservation property and the third one Cauchy's theorem. It remains to justify convergence and interchanging sum and integral sign in the first line. To this end, fix any compact set $K \subseteq \R^n$. 
Using off-diagonal estimates, we obtain for all $j$ large enough to guarantee $\dist(K, C_j(B)) \geq 2^{j-1}$ that \begin{align*} \|\psi(t^2 z) (1-z^{-1}L)^{-1} &(\ind_{C_j(B)} c) \|_{\L^q(K)} \\ &\lesssim |\psi(t^2 z)|(1+2^{j-1} |z|^{\frac{1}{2}})^{-\gamma} \|c\|_{\L^q(C_j(B))} \\ &\lesssim 2^{j(\frac{n}{q} - \gamma)} {\begin{cases} t^{-2 \tau} |z|^{-\frac{\gamma}{2}-\tau} & \text{if } |z| \geq 1 \\ t^{2 \sigma} |z|^{\sigma-\frac{\gamma}{2}} & \text{if } |z| \leq 1 \end{cases}}, \end{align*} where $\gamma >0$ is at our disposal. We take $\nicefrac{n}{q} < \gamma < 2 \sigma$, in which case the right-hand side takes the form $2^{-j\eps} F_t(z)$ with $\eps > 0$ and $F_t \in \L^1(\bd \S_{\pi - 2\nu}^+, \nicefrac{\d |z|}{|z|})$, locally uniformly in $t$. This justifies at once convergence and interchanging sum and integral sign in $\L^q(K)$. \end{proof} Our third conservation property concerns the Poisson semigroup. In line with the previous result we need $\L^p$-boundedness of the resolvents for large $p$ to compensate for the poor decay of $\e^{-\sqrt{z}}-1$ at the origin. \begin{prop}[Conservation property for the Poisson semigroup\index{conservation property!for the Poisson semigroup}] \label{prop: conservation Poisson} If $((1+t^2 L)^{-1})_{t>0}$ is $\L^p$-bounded for some $p>n$, then \begin{align*} \e^{-t L^{1/2}} c = c \quad (c \in \IC^m, \, t >0). \end{align*} \end{prop} \begin{proof} We have $\e^{-\sqrt{z}} = (1+z)^{-1} + \psi(z)$ with $\psi \in \Psi_{1/2}^1$ on any sector and the claim follows from Corollary~\ref{cor: conservation} and Lemma~\ref{lem: cancellation functional calculus}. \end{proof}
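As an editorial aside (not part of the original text), the splitting used in the last proof can be probed numerically on the positive real axis: $\psi(t) = \e^{-\sqrt{t}} - (1+t)^{-1}$ vanishes to order $\nicefrac{1}{2}$ at the origin and to order $1$ at infinity, which is consistent with $\psi \in \Psi_{1/2}^1$.

```python
import math

def psi(t):
    """psi(t) = exp(-sqrt(t)) - 1/(1+t) on the positive real axis."""
    return math.exp(-math.sqrt(t)) - 1.0 / (1.0 + t)

# near 0: psi(t) = -sqrt(t) + O(t), so psi(t)/sqrt(t) -> -1
print(psi(1e-8) / math.sqrt(1e-8))   # approximately -1
# near infinity: psi(t) = -1/t + O(1/t^2) plus an exponentially small term
print(psi(1e6) * 1e6)                # approximately -1
```

The two printed ratios being bounded (indeed convergent) reflects exactly the bounds $|\psi(t)| \lesssim \min(t^{1/2}, t^{-1})$ that define the class $\Psi_{1/2}^1$ on the positive axis.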
\begin{document} \maketitle \begin{abstract} The Bar\'at-Thomassen conjecture, recently proved in~\cite{bib:BHLMT}, asserts that for every tree $T$, there is a constant $c_T$ such that every $c_T$-edge connected graph $G$ with number of edges (size) divisible by the size of $T$ admits an edge partition into copies of $T$ (a $T$-decomposition). In this paper, we investigate in which case the connectivity requirement can be dropped to a minimum degree condition. For instance, it was shown in~\cite{bib:orig-paths} that when $T$ is a path with $k$ edges, there is a constant $d_k$ such that every $24$-edge connected graph $G$ with size divisible by $k$ and minimum degree $d_k$ has a $T$-decomposition. We show in this paper that when $F$ is a coprime forest (the sizes of its components being a coprime set of integers), any graph $G$ with sufficiently large minimum degree has an $F$-decomposition provided that the size of $F$ divides the size of $G$ (no connectivity is required). A natural conjecture asked in~\cite{bib:orig-paths} asserts that for a fixed tree $T$, any graph $G$ of size divisible by the size of $T$ with sufficiently high minimum degree has a $T$-decomposition, provided that $G$ is sufficiently highly connected in terms of the maximal degree of $T$. The case of maximum degree 2 is answered by paths. We provide a counterexample to this conjecture in the case of maximum degree 3. \end{abstract} \section{Introduction}\label{sec:intro} \input{intro-trees.tex} \section{Disproving Conjecture~\ref{conj:wrong}}\label{sec:disprove} \input{disprove.tex} \section{Decomposition into coprime trees}\label{sec:twotrees} \input{forests.tex} \section*{Acknowledgements} Part of this work was done while the first author was a postdoc at Laboratoire d’Informatique du Parall\'elisme, \'Ecole Normale Sup\'erieure de Lyon, 69364 Lyon Cedex 07, France. \bibliography{biblio} \bibliographystyle{abbrv} \end{document}
Franchising your business means establishing the legal infrastructure, development strategy, and plan to legally and effectively sell franchises to third party franchisee partners. To franchise your business, you must establish a legal infrastructure and platform that complies with federal and state franchise laws and allows you to legally offer and sell franchises throughout the United States. Your required legal platform must include (1) a Franchise Disclosure Document (FDD) that contains legally mandated disclosures, (2) a franchise agreement and (3) information about your newly established franchise offering, franchise system and the legal relationship that will exist between you and your future franchisee partners. Franchising your business also requires the establishment of a franchising strategy and plan that differentiates your soon-to-be-launched franchise system from competitors and that is designed to scale up and grow. Unlike states such as New York, Illinois, California, Maryland, Michigan, Minnesota and Virginia, you do not have to register your franchise in Texas. That is to say, Franchisors do not have to file a Texas registration statement (they are exempt from filing) with Texas regulatory agencies provided they comply with all disclosure requirements and prohibitions concerning franchising in the Federal Trade Commission regulations. However, prior to offering or selling a franchise in the State of Texas, a Franchisor must file for an exemption under the “Texas Business Opportunity Law”. The Franchisor files a “one-time only” Business Opportunity Exemption Notice with the Texas Secretary of State for a cost of $25.00. In addition to having filed an exemption notice with the State of Texas before offering or selling a franchise, as a Franchisor, you are required to issue your Franchise Disclosure Document at least 14 days before accepting any money or commitments from a potential franchisee. 
Additionally, Texas charges a $195.00 filing fee for registering your business opportunity, which includes a copy of your disclosure statements. To be successful at offering and selling franchises and growing your franchise system, you must be knowledgeable of both federal and state franchise laws. At the federal level, franchising is regulated in accordance with the Federal Franchise Rule and the franchise regulations implemented and enforced by the Federal Trade Commission. At the federal level, franchisors are required to disclose their current and updated franchise disclosure document prior to the offer or sale of any franchise. Federal Rules-Disclosure Document Compliance Of course there are both federal rules and state rules that you have to comply with. Under the Federal Trade Commission’s (“FTC’s”) Franchise Rule, franchisors offering franchises, other than exempt franchises, anywhere in the United States or its territories must provide to prospective franchisees a copy of a Franchise Disclosure Document (“FDD”) at least 14 days before the prospective franchisee signs a binding agreement or makes any payment in connection with the franchise sale. The FTC does not register franchise offerings, review FDDs before they are provided to prospective franchisees, or require franchisors to file anything with that agency. At the federal level, in the absence of an exemption, all franchise sellers must provide all prospective franchisees with the federally required written disclosures pursuant to the FTC Franchise Rule. If a franchisor does not make the required disclosures, the franchisor, its owners, employees and franchise brokers may be subject to significant civil and criminal liability. Texas Rules-Disclosure Document Compliance Texas Franchise laws prohibit sales of business opportunities unless the Franchisor gives potential purchasers a pre-sale disclosure document that has first been filed with the Secretary of State. 
The representations made during the marketing and sale of the business opportunity are key in determining franchisor liability under these laws. Franchisees have a private right of action under the business opportunity law, and the penalties for violating these laws can be extensive, including damages, return of fees paid by a franchisee, rescission of the contract, injunctive relief, criminal penalties, and other remedies. Successful Franchisor Goals Franchising is a business model that allows successful business owners to leverage their existing business name, community goodwill, assets and know-how to expand their business and brand. To avoid the compliance pitfalls in the State of Texas, a franchisor should have focused goals (there are no shortcuts). Your franchise system must be built on a solid foundation that reflects the very best practices in the franchising industry and one that, from the very start, is strategically focused on achieving attainable growth and scalability. Your franchise goals should be based on long-term plans and visions for Franchisor success, Franchisee partnerships and federal and state franchise law compliance. Your goals should include the following long-term action plans: - Developing and launching a multi-state Franchise Disclosure Document that will legally protect you and allow you to offer and sell franchises in multiple states (even if you plan to start in only one state). - Developing a franchising strategy that is reflected in your Franchise Disclosure Document and that strategically and realistically positions your new franchise system for franchise sales and growth. - Protecting your trademarks and intellectual property through trademark registrations and confidentiality agreements. 
- Developing a franchise operations manual that will be provided to future franchisees and that informs your franchisees about your business, your methods of operation, the systems that you have created and rely on, and what you expect from them as future franchisee partners. - Developing and maintaining a franchise compliance process that includes registering your Franchise Disclosure Document with franchise registration states like New York, California, Illinois, Virginia, Maryland and other registration states that are strategic to your expansion plans. - Successfully launching your franchise system with the right advice and guidance, focused on a sales strategy that is grounded in a realistic budget and that incorporates a multi-channel sales approach spanning organic sales, broker-generated sales and web-driven sales. If you are a new franchisor or thinking about franchising, it is critically important that you focus on your business and on establishing a realistic legal and business infrastructure for the successful development and launch of your franchise system. Look for sound business and legal advisors who are focused on preserving and protecting the existing business and network that you have already built, as well as committed to helping you lay the foundation for continued expansion and growth. If you would like to know more about franchise growth options and how to accelerate your franchise system while complying with Texas and federal franchise laws, we would be glad to talk with you. Contact corporate attorney Victor D. Walker at 713-724-5300.
\begin{document} \title{Generalized Whittaker quotients of Schwartz functions on $G$-spaces} \author{Dmitry Gourevitch} \address{Dmitry Gourevitch, Faculty of Mathematics and Computer Science, Weizmann Institute of Science, POB 26, Rehovot 76100, Israel } \email{dimagur@weizmann.ac.il} \urladdr{http://www.wisdom.weizmann.ac.il/~dimagur} \author{Eitan Sayag} \address{Eitan Sayag, Department of Mathematics, Ben Gurion University of the Negev, P.O.B. 653, Be'er Sheva 84105, ISRAEL} \email{eitan.sayag@gmail.com} \keywords{Whittaker support, wave-front set, nilpotent orbit, moment map, reductive group actions, distinguished representations.} \subjclass[2010]{20G05, 20G25, 22E35, 22E45,14L30, 46F99} \date{\today} \begin{abstract} Let $G$ be a reductive group over a local field $F$ of characteristic zero. Let $X$ be a $G$-space. In this paper we study the existence of generalized Whittaker quotients for the space of Schwartz functions on $X$, considered as a representation of $G$. We show that the set of nilpotent elements of the dual space to the Lie algebra such that the corresponding generalized Whittaker quotient does not vanish contains the nilpotent part of the image of the moment map, and lies in the closure of this image. This generalizes recent results of Prasad and Sakellaridis. Applying our theorems to symmetric pairs $(G,H)$ we show that there exists an infinite-dimensional $H$-distinguished representation of $G$ if and only if the real reductive group corresponding to the pair $(G,H)$ is non-compact. For quasi-split $G$ we also extend to the Archimedean case the theorem of Prasad stating that there exists a generic $H$-distinguished representation of $G$ if and only if the real reductive group corresponding to the pair $(G,H)$ is quasi-split. In the non-Archimedean case our result also gives rather sharp bounds on the wave-front sets of distinguished representations. 
\DimaE{ The results in the present paper can be used to recover many of the vanishing results on periods of automorphic forms proved by Ash-Ginzburg-Rallis \cite{AGR}. This follows from our Corollary \ref{cor:global} when combined with the restrictions on the Whittaker support of cuspidal automorphic representations proven in \cite{GGS2}. } \end{abstract} \maketitle \section{Introduction}\label{sec:intro} Let $F$ be a local field of characteristic zero. Let $\bf G$ be a \DimaD{connected algebraic} reductive group defined over $F$, let $G:={\bf G}(F)$ be its $F$-points and $\fg$ be the Lie algebra of $G$. If $F$ is non-Archimedean, we denote by $\cM(G)$ the category of admissible smooth finitely-generated representations (\cite{BZ}). If $F$ is Archimedean, we denote by $\cM(G)$ the category of Casselman-Wallach representations, {\it i.e.} admissible smooth finitely-generated \Fre representations of moderate growth (see \cite[\S 11]{Wal}, or \cite{CasGlob}). We denote by $\Irr(G)$ the collection of irreducible representations in $\cM(G)$. A classical theme in the representation theory of reductive groups over local fields is the study of representations {\it distinguished} with respect to a subgroup $H \subset G.$ A representation $\pi\in \Irr(G)$ is called $H$-distinguished if it has a non-zero $H$-equivariant linear functional (continuous if $F$ is Archimedean). Denote by $\Irr(G)_{H}\subset \Irr(G)$ the subcollection of $H$-distinguished representations. Such are the local components of {\it distinguished} automorphic representations and such are the discrete series representations that contribute to the Plancherel decomposition of the Hilbert space $L^{2}(X)$, the space of square-integrable functions (or sections of the density bundle) on the $G$-space $X=G/H.$ In this paper we offer a unified approach to two seemingly unrelated questions concerning distinguished representations. 
The first is to clarify and generalize the relationship between distinction and genericity studied in \cite{PraSak} to arbitrary $G$-spaces over local fields, Archimedean or not. The second is a search for a non-Archimedean analogue of the qualitative study of the Plancherel decomposition provided by \cite{HarWeich}. Regarding the first question, it was recently studied in the special case of linear periods in \cite{SayVer} both for Archimedean and non-Archimedean fields, and for symmetric pairs over non-Archimedean fields in \cite{PraSak}. It should be mentioned that the motivating example here is the disjointness result of \cite{HeuRal} showing that irreducible generic representations of $GL(2n)$ never have non-zero symplectic invariant functionals (see \cite{OS} for a complete account). As for the second, while a description of the wave-front set of the unitary representation $L^{2}(X)$ is of independent interest, it also yields an a-priori bound on the wave-front set of each individual irreducible unitary distinguished representation that occurs in the Plancherel decomposition. The unification of the topics mentioned above is obtained by studying the {\it Whittaker support} (\cite{GGS2}) of the non-admissible representation of $G$ on spaces of Schwartz functions on $X.$ Understanding the Whittaker support allows us, in the non-Archimedean case, to deduce exact information on the wave-front sets of individual distinguished representations, leading to our answer to the first question. The study of $\WO(\Sc(X))$, which is our smooth replacement for the problem studied in \cite{HarWeich}, is carried out using the theory of invariant distributions and in particular the {\it orbitwise} technique introduced by Gelfand-Kazhdan \cite{GK} in the non-Archimedean case (and its various extensions and ramifications), which in some cases reduces the study of invariant distributions on a space to the study of invariant distributions on each of the orbits separately. 
In this language, the orbits that can support equivariant distributions are those for which a certain character is trivial on the stabilizer of one (hence any) point in the orbit. This can be reformulated as a condition tying the point and the character together, as living in the image of a partial moment map attached to the $G$-space $X$ and the orbit. Thus, as will also be clear from the formulation of our results, a key tool in our approach is the moment map attached to the $G$-space $X$. An unexpected aspect of this geometric approach is that it allows us to study the Whittaker supports of modules of the form $\Sc(X)$ even in cases where $X$ is not $G$-homogeneous. We now turn to the results of the present paper. We need some notation. Let $\bf X$ be a smooth $\bf G$-variety defined over $F$, and let $X:={\bf X}(F)$. Let $\mu:T^*X\to \fg^*$ denote the moment map, and let $\cM$ denote the closure of its image. Note that $\cM$ is a closed conical set. Let $\cN\subset \fg^*$ denote the nilpotent cone. Let $\Sc(X)$ denote the space of Schwartz functions on $X$ and let $\WO(\Sc(X))\subset \cN$ denote the set of nilpotent elements $\varphi$ such that $\Sc(X)$ has a non-zero generalized Whittaker quotient corresponding to $\varphi$ (see \S \ref{subsec:Whit} below). In \S \ref{sec:PfWO} we prove the following theorem. \begin{introthm}[\S \ref{sec:PfWO}]\label{thm:WO}\begin{enumerate}[(i)] \item We have $ \WO(\Sc(X))\subset \cM\cap \cN$. \item If either $F$ is non-Archimedean or $\bf X$ is quasi-projective then $\Im \mu \cap \cN \subset \WO(\Sc(X))$. \end{enumerate} \end{introthm} Let us now consider the homogeneous case, {\it i.e.} $X=G/H$. In this case we have $\Im \mu=G\cdot\fh^{\bot}$. Here, $\fh^{\bot}$ denotes the orthogonal complement to $\fh$ in $\fg^*$, and $G\cdot\fh^{\bot}$ denotes the image of $\fh^{\bot}$ under the coadjoint action of $G$ on $\fg^*$. For homogeneous $X$ we can formulate a twisted version of Theorem \ref{thm:WO}. 
Let $\zeta:H\to F$ and $\psi:H\to F^{\times}$ be algebraic characters. Let $d\zeta\in \fh^*$ denote the differential of $\zeta$, and let $Fp_{\fh}^{-1}(d\zeta) \subset \fg^*$ denote the linear space spanned by the preimage of $d\zeta$ under the restriction map $p_{\fh}:\fg^*\onto \fh^*$. Fix a character $\xi_m$ of $F^{\times}$ and a non-trivial unitary character $\xi_a$ of $F$, and let $\chi$ be the character of $H$ given by the product \begin{equation}\label{=chi} \chi=(\xi_a\circ \zeta)\cdot(\xi_m\circ\psi). \end{equation} Let $\Sc(G)_{H,\chi}$ denote the space of $\chi$-coinvariants in $\Sc(G)$ under the action of $H$ by right multiplications (see \eqref{=coinv} below). \begin{introthm}[\S \ref{sec:PfWO}]\label{thm:Twisted} Let $H$, $\chi$, and $Fp_{\fh}^{-1}(d\zeta)$ be as above. Then $$G\cdot p_{\fh}^{-1}(d\zeta)\cap \cN \subset \WO(\Sc(G)_{H,\chi})\subset \overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}\cap \cN.$$ \end{introthm} By Frobenius reciprocity for small induction (see {\it e.g.} \cite[Proposition 2.29]{BZ} and \cite[Lemma 2.3.4]{GGS}), $\pi\in \Irr(G)$ is $(H,\chi)$-distinguished if and only if the contragredient representation $\widetilde{\pi}$ is a quotient of $\Sc(G)_{H,\chi\Delta_H}$, where $\Delta_H$ denotes the modular function. \DimaD{This allows us to deduce from Theorem \ref{thm:Twisted} the following corollary. \begin{introcor}[\S \ref{subsec:PfIrr}]\label{cor:Irr} Let $H$ and $\chi$ be as above, and let $\pi\in \Irr(G)_{H,\chi}$. Then $$\WO(\pi)\subset \overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}.$$ \end{introcor} For the case $G=\GL_{n+k}(F)$ and $H=\GL_n(F)\times \GL_k(F)$, and trivial $\chi$, this statement is equivalent to \cite[Theorem B]{SayVer}, which is proven for all local or finite fields $F$ of characteristic different from 2. 
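To illustrate the scope of Corollary \ref{cor:Irr}, we include an elementary example; the computation below is only an orientational sketch and is not taken from \cite{SayVer}.
\begin{remark}
Let $G=\GL_2(F)$, let $H$ be the diagonal torus $\GL_1(F)\times \GL_1(F)$, and let $\chi$ be trivial, so that $\zeta=1$ and $Fp_{\fh}^{-1}(d\zeta)=\fh^{\bot}$. Identifying $\fg^*$ with $\fg$ via the trace form, $\fh^{\bot}$ becomes the space of matrices with zero diagonal. Every non-zero nilpotent element of $\fg$ is conjugate to a strictly upper-triangular matrix, which has zero diagonal; hence $\cN\subset G\cdot \fh^{\bot}$ and the bound of Corollary \ref{cor:Irr} is vacuous in this case.
\end{remark}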
Corollary \ref{cor:Irr} is especially useful for non-Archimedean $F$, since for them the top nilpotent orbits in $\WO(\pi)$ coincide with the top nilpotent orbits in the wave-front set $\overline{\WF}(\pi)$ by \cite{MW}. In \S \ref{sec:WF} below we recall this notion and deduce from Corollary \ref{cor:Irr} the following corollary. \begin{introcor}[\S \ref{sec:WF}]\label{cor:WF} Let $H$ and $\chi$ be as above, and let $\pi\in \Irr(G)_{H,\chi}$. Suppose that $F$ is non-Archimedean. Then $\overline{\WF}(\pi)\subset\overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}.$ In particular, if $\chi$ is trivial on the unipotent radical of $H$ then $\overline{\WF}(\pi)\subset \overline{G\cdot\fh^{\bot}}$. \end{introcor} } This corollary was our starting point for this paper. It was inspired by the paper \cite{HarWeich} that shows that the wave-front set of $L^2(G/H)$ in the Archimedean case is $\overline{G\cdot\fh^{\bot}}$. Another powerful Archimedean analogue of Corollary \ref{cor:WF} is proven in \cite{GS_R}. Namely, \DimaB{in \cite{GS_R} we show that if $H$ is a spherical subgroup of $G$\ then} the Zariski closure of $\overline{\WF}$ coincides with the Zariski closure of a nilpotent $\bfG$-orbit in $\fg^*(\C)$ that intersects $\fh^\bot$. We conjecture that the same holds in the $p$-adic case. We refer the reader to \cite[\S 10]{GS_R} for more details on this conjecture and its generalization, as well as some evidence and applications. Next we ask the following question: given $\varphi\in \WO(\Sc(G)_{H,\chi})$, does there exist $\pi\in \Irr(G)_{H,\chi}$ with $\varphi\in \WO(\pi)$? We can answer this question for certain $\varphi$, and for (absolutely) spherical subgroups $H$, {\it i.e.} subgroups $H={\bf H}(F)$ with $\bf H$ acting on the flag variety of $\bf G$ with an open orbit. Let $P$ be an adapted parabolic for $G/H$ - see \S \ref{sec:Geo} below for this notion. 
For any character $\chi$ of $H$, denote by $\chi'_P$ the character of $P\cap H$ given by \begin{equation} \chi'_{P}:=\chi|_{P\cap H}\Delta_{P}^{-1/2}\Delta_H^{-1}\,, \end{equation} where $\Delta$ denotes the modular functions. \begin{introprop}[\S \ref{sec:PfCor}]\label{prop:PrinSer} Let $P$ be an adapted parabolic of $G/H$ such that $PH$ is open in $G$ and let $\pi=\Sc(G)_{P,\Delta_P^{-1/2}}$ be the space of smooth vectors of the normalized induction $\mathrm{Ind}_{P}^G1$. Then \begin{enumerate}[(i)] \item \label{PS:irr} $\pi$ is irreducible and unitarizable. \item \label{PS:WO} $\WO(\pi)=G\cdot \fp^{\bot}$ \item \label{PS:Quot} Assume $H$ is spherical, and $\chi$ is a character of $H$. If $\chi$ is trivial on the unipotent radical of $H$ and $\chi'_{P}=1$ then $\pi\in \Irr(G)_{H,\chi}$. \end{enumerate} \end{introprop} We note that if $H$ is unimodular then $\Delta_P=\Delta_H=1$, and $\chi'_P=\chi|_P$ (see Lemma \ref{lem:uni} below). \begin{remark} In case $X$ is a unimodular spherical homogeneous space and if in addition $\cM\cap \cN\subset G\cdot\fp^{\bot}$ then Theorem \ref{thm:WO} and Proposition \ref{prop:PrinSer} imply the exact equality \begin{equation}\label{=WO} \WO(\Sc(X))=\cM\cap \cN=G\cdot\fp^{\bot} \end{equation} We note that if $G$ is quasi-split then \cite{Kno} and \cite[Appendix A]{PraSak} imply the weaker inclusion $\cM\cap \cN\subset \bfG\cdot\fp^{\bot}$ - see Corollary \ref{cor:Geo} below. \end{remark} \begin{introcor}[\S \ref{sec:PfCor}]\label{cor:main} Let $H\subset G$ be an algebraic subgroup, and let $\chi$ be a character of $H$, trivial on its unipotent radical. Let $P$ be an adapted parabolic of $G/H$. \begin{enumerate}[(i)] \item \label{cormain:FinDim}If $P=G$ then every $\pi\in \Irr(G)_{(H,\chi)}$ is finite-dimensional. \item \label{cormain:ExFinDim} If $P\neq G$, $PH$ is open in $G$, $H$ is (absolutely) spherical and $\chi'_{P}=1$\\ then there exists an infinite-dimensional unitarizable $\pi \in \Irr(G)_{(H,\chi)}$. 
\item \label{cormain:ExGen}If $P$ is a Borel subgroup of $G$ (in particular $G$ is quasi-split), and $\chi'_{P}=1$\\ then there exists a generic unitarizable $\pi \in \Irr(G)_{(H,\chi)}$. \item \label{GH:NoGen} If $P$ is not a Borel subgroup of $G$ then no $\pi\in \Irr(G)_{(H,\chi)}$ is generic. \end{enumerate} \end{introcor} \DimaB{Over Archimedean $F$, Parts \eqref{cormain:FinDim} and \eqref{GH:NoGen} follow from \cite[Theorem 4.2]{KS}. Over non-Archimedean $F$, }Part \eqref{GH:NoGen} was essentially shown in \cite{PraSak}. The emphasis in \cite{PraSak} is on symmetric pairs, for which Prasad showed that some properties of distinguished representations are governed by the properties of the real reductive group given by the root system of the symmetric pair. In support of this principle we derive the following corollary. \begin{introcor}[\S \ref{sec:PfCor}]\label{cor:sym} Suppose that $G$ is quasi-split and let $H\subset G$ be a symmetric subgroup.\\ Let $G^H_{\R}$ be the real reductive group corresponding to the symmetric pair $(G,H)$. Then \begin{enumerate}[(i)] \item \label{Sym:Gen} There exists a generic $\pi\in \Irr(G)_H$ if and only if $G^H_{\R}$ is quasi-split. \item There exists an infinite-dimensional $\pi\in \Irr(G)_H$ if and only if $G^H_{\R}$ is not a quotient of the product of a compact group and a torus. \end{enumerate} \end{introcor} Part \eqref{Sym:Gen} was proven in \cite{PraSak} for non-Archimedean local fields of arbitrary characteristic, as well as a certain analogue for finite fields. In \cite[\S 6]{GS_R} we apply this corollary in order to answer \cite[Question 1]{PraSak} in many cases. \DimaC{Finally, let us give an application to automorphic periods. Suppose that $\bf G$ and $\bf H$ are defined over a number field $\mathbb{K}$, and let ${\bf G}(\mathbb{A}_\mathbb{K})$ and ${\bf H}(\mathbb{A}_\mathbb{K})$ denote their adelic points. 
For any automorphic representation $\pi$ we define $\WO(\pi)\subset \fg^*(\mathbb{K})$ to be the set of all nilpotent $\varphi\in \fg^*(\mathbb{K})$ such that $\pi$ possesses a non-zero continuous $({\bf N}(\mathbb{A}_{\mathbb{K}}),\eta_{\varphi})$-equivariant functional, where the nilpotent subgroup ${\bf N}\subset \mathbf{G}$ and the character $\eta_{\varphi}$ of ${\bf N}(\mathbb{A}_{\mathbb{K}})$ are defined as in Definition \ref{def:Whit} below. \begin{introcor}[\S\ref{sec:global}]\label{cor:global} Let $\pi$ be an irreducible smooth automorphic representation of ${\bf G}(\mathbb{A}_\mathbb{K})$ that possesses a non-zero continuous ${\bf H}(\mathbb{A}_\mathbb{K})$-invariant functional. Then for every place $\nu$ of $\mathbb{K}$ we have \begin{equation}\label{=aut} \WO(\pi)\subset \overline{{\bf G}(\mathbb{K}_{\nu})\cdot\fh^{\bot}(\mathbb{K}_{\nu})}, \end{equation} where the closure is taken in the local topology of $\fg^*(\mathbb{K}_{\nu})$. \end{introcor} \DimaD{If one allows the functional to be defined only on $K$-finite vectors in $\pi$, the inclusion \eqref{=aut} will still hold for all non-Archimedean places $\nu$. } The main example of an ${\bf H}(\mathbb{A}_\mathbb{K})$-invariant functional is a period integral over ${\bf H}(\mathbb{K})\backslash {\bf H}(\mathbb{A}_\mathbb{K})$, or its invariant regularization. Such an integral is known to converge absolutely when $\bfH$ is a symmetric subgroup of $\bfG$ and $\pi$ is cuspidal \cite[Proposition 1]{AGR}. Together with the restriction on the Whittaker support of cuspidal representations given in \cite[Theorem 8.4(ii) and Corollary 8.11]{GGS2}, this gives an alternative proof for many of the results of \cite{AGR}.} \subsection{Acknowledgements} We thank Avraham Aizenbud, \DimaB{Itay Glazer,} and Michal Zydor for fruitful discussions. D. G. was partially supported by ERC StG 637912 and ISF 249/17. 
\section{Preliminaries} \subsection{Smooth representations and generalized Whittaker quotients}\label{subsec:Whit} \begin{notn}\label{not:rep} If $F$ is non-Archimedean, we denote by $\Rep^{\infty}(G)$ the category of smooth representations of $G$ in complex vector spaces (see {\it e.g.} \cite{BZ}). If $F$ is Archimedean, we denote by $\Rep^{\infty}(G)$ the category of smooth \Fre representations of $G$ of moderate growth, as in \cite[\S 1.4]{dCl}. \end{notn} For any algebraic subgroup $H\subset G$ and $\pi\in \Rep^{\infty}(G)$, denote by $\pi_H$ the space of coinvariants, i.e. quotient of $\pi$ by the intersection of kernels of all $H$-invariant functionals. Explicitly, \begin{equation}\label{=coinv} \pi_H=\pi/\overline{\{\pi(g)v -v\, \vert \,v\in \pi, \, g\in H\}} \end{equation} where the closure is needed only for Archimedean $F$. In the latter case, for connected $H$ we have $\pi_H=\pi/\overline{\fh_{\C}\pi}$ which in turn is equal to the quotient of $\oH_0(\fh,\pi)$ by the closure of zero, where $\fh$ denotes the Lie algebra of $H$. For any character $\chi$ of $H$, denote $\pi_{H,\chi}:=(\pi\otimes \chi)_{H}$. \begin{defn}\label{def:Whit} Let $\varphi\in \fg^*$ be a nilpotent element, and let $\pi\in \Rep^{\infty}(G)$. We define the generalized Whittaker quotient $\pi_{\varphi}$ in the following way. Choose an $\sll_2$-triple $(e,h,f)$ such that $\varphi$ is given by the Killing form pairing with $f$. Now, let $\fg^h_1$ denote the eigenspace of the adjoint action of $h$ on $\fg$ corresponding to eigenvalue 1, and $\fg^h_{\geq 2}$ denote the sum of the eigenspaces with eigenvalues 2 and higher. Consider the symplectic form $\omega_{\varphi}$ on $\fg^h_1$ given by $\omega_{\varphi}(X,Y):=\varphi([X,Y])$ and choose a Lagrangian $\fl$ for this form. Let $\fn$ be the nilpotent Lie algebra $\fl\oplus \fg^h_{\geq 2}$ and $N\subset G$ be the corresponding unipotent subgroup. Let $\eta_{\varphi}$ denote the unitary character of $N$ given by $\varphi$. 
Then we define $\pi_{\varphi}:=\pi_{N,\eta_{\varphi}}$. \end{defn} For more details about this definition, and the proof that it is independent of the choices, we refer the reader to \cite[\S 2.5]{GGS2}. \begin{defn} Let $\pi\in \Rep^{\infty}(G)$. Define $\WO(\pi)$ to be the set of all nilpotent elements $\varphi \in \fg^*$ satisfying $\pi_{\varphi}\neq 0$. \end{defn} \subsection{Invariant distributions} Let $X$ be the manifold of $F$-points of a smooth algebraic variety defined over $F$. Let $\Sc(X)$ denote the space of Schwartz functions on $X$. For non-Archimedean $F$, this means the space of locally constant compactly supported functions, see \cite{BZ}. For Archimedean $F$, Schwartz functions are functions that decay rapidly together with all their derivatives, see {\it e.g.} \cite{AG} for the precise definition. Let $\Sc^*(X)$ denote the linear dual space if $F$ is non-Archimedean, and the continuous linear dual space if $F$ is Archimedean. We refer to elements of $\Sc^*(X)$ as tempered distributions. For a group $H$ acting on $X$ and a character $\chi$ of $H$, we denote by $\Sc^*(X)^{H,\chi}$ the space of tempered distributions $\xi$ satisfying $h\xi=\chi(h)\xi$. This space is dual to $\Sc(X)_{H,\chi}$. \begin{thm}\label{thm:BZ} Let an $F$-algebraic group $H$ act algebraically on $X$. Let $\chi$ be a character of $H$ of the form \eqref{=chi}. Suppose that the stabilizer $H_x$ of any point $x\in X$ is unipotent. \begin{enumerate}[(i)] \item \label{BZ:Non}If $\Sc^*(X)^{H,\chi}\neq 0$ then there exists $x\in X$ such that $\chi|_{H_x}=1$. \item \label{BZ:Ex}If there exists $x\in X$ such that $\chi|_{H_x}=1$ and one of the following holds: \begin{enumerate}[(a)] \item $F$ is non-Archimedean \item \label{BZ:Arch}$H$ is solvable and $X$ is quasi-projective \end{enumerate} then $\Sc^*(X)^{H,\chi}\neq 0$. \end{enumerate} \end{thm} \begin{proof} Since all algebraic characters of unipotent groups are trivial, we have $\Delta_{H}|_{H_x}=\Delta_{H_x}=1$ for any $x\in X$. 
Now, \eqref{BZ:Non} follows from \cite[\S 1.5]{Ber} and \cite[\S 6]{BZ} for non-Archimedean $F$, and from \cite[Theorem 2.2.15 and the proof of Corollary 2.2.16]{AG_ST} for Archimedean $F$. For \eqref{BZ:Ex}, note that the condition $\chi|_{H_x}=1$ implies that the orbit $Hx$ of $x$ has an $H$-invariant measure. The Archimedean case under the condition \eqref{BZ:Arch} follows now from \cite[Theorem A]{GSS}. Suppose now that $F$ is non-Archimedean, and let $\cO$ be an orbit of minimal dimension among those that have an $(H,\chi)$-equivariant measure. Since all the stabilizers are unipotent, and thus Zariski connected, \cite[Theorem 1.4]{HS} implies that $\Sc^*(\overline{\cO})^{H,\chi}\neq 0$. This in turn implies $\Sc^*(X)^{H,\chi}\neq 0$. \end{proof} To complement this theorem in the Archimedean case we will need the following one. \begin{theorem}[{cf. \cite[Theorem B]{GSS}}]\label{thm:UGH} Let $G$ be a real reductive group, and let $H,N\subset G$ be real algebraic subgroups, such that $N$ is unipotent. Let $\chi$ be a character of $H$ as in \eqref{=chi}, and let $\eta$ be a unitary character of $N$. Assume that for some $g\in G$ we have $$\chi|_{H\cap N^g}=\eta^g|_{H\cap N^g},$$ where $N^g=gNg^{-1}$ and $\eta^g$ is the character of $N^g$ given by $\eta^g(gxg^{-1})=\eta(x)$. Then $$\Sc^*(G)^{N\times H,\eta \times \chi}\neq 0.$$ \end{theorem} \section{Proof of Theorems \ref{thm:WO} and \ref{thm:Twisted}, and Corollaries \ref{cor:Irr} and \ref{cor:WF}}\label{sec:PfWO} Let $\varphi\in \fg^*$ be a nilpotent element. As in Definition \ref{def:Whit}, let $(e,h,f)$ be an $\sll_2$-triple in $\fg$ such that $\varphi$ is given by the Killing form pairing with $f$. Let $\fv:=\fg^h_{\geq 2}$, and let $\fn\supset \fv$ be as in Definition \ref{def:Whit}. Let $V:=\Exp(\fv)\subset N:=\Exp(\fn)$ be the corresponding nilpotent subgroups of $G$. Let $\eta_{\varphi}$ be the unitary character of $N$ corresponding to $\varphi$. 
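Before giving the proof, we spell out the notation above in the smallest case; the following sketch is included only for orientation and uses the conventions of \cite[\S 2.5]{GGS2}.
\begin{remark}
Let $G=\SL_2(F)$ with the standard triple $(e,h,f)$ in $\fg=\sll_2$, and let $\varphi$ be the (regular) nilpotent element given by the Killing form pairing with $f$. The $\ad(h)$-eigenvalues on $\fg$ are $2,0,-2$, so $\fv=\fg^h_{\geq 2}=Fe$ and $\fg^h_1=0$; hence $\fl=0$ and $\fn=\fv=Fe$. Thus $N=V=\Exp(Fe)$ is the group of upper-triangular unipotent matrices, and $\eta_{\varphi}(\Exp(xe))=\xi_a(\varphi(xe))$, with $\xi_a$ a fixed non-trivial unitary character of $F$, is a non-trivial character of $N$. In this case $\pi_{\varphi}=\pi_{N,\eta_{\varphi}}$ is the classical Whittaker quotient, and $\varphi\in\WO(\pi)$ if and only if $\pi$ is generic.
\end{remark}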
\begin{proof}[Proof of Theorem \ref{thm:WO}] Suppose that $\varphi\in \WO(\Sc(X))$. Then by definition of $\WO(\Sc(X))$, we have $\Sc^*(X)^{N,\eta_{\varphi}}\neq 0$ and thus $\Sc^*(X)^{V,\eta_{\varphi}}\neq 0$. By Theorem \ref{thm:BZ}, this implies that there exists $x\in X$ such that $\varphi$ vanishes on the Lie algebra $\fv_x$ of the stabilizer of $x$ in $V$. Thus, the restriction $\varphi|_\fv$ lies in the image of the moment map $\mu_V:T_x^*X\to \fv^*$, which is the dual of the map $\fv\to T_xX$ given by the differential of the action of $V$ on $X$. But $\mu_V$ equals the composition of $\mu:T_x^*X\to \fg^*$ with the restriction $\fg^*\to \fv^*$: \begin{equation} T_x^*X\overset{\mu}{\to}\fg^*\onto \fv^* \end{equation} Thus there exists $\psi\in \cM$ such that $\psi|_\fv =\varphi|_\fv$. Since the kernel of the restriction to $\fv$ is $(\fg^*)^h_{\geq -1}$, we have $\psi=\varphi+\psi'$, where $\psi'\in (\fg^*)^h_{\geq -1}$. Let $T\subset G$ be the one-dimensional torus with Lie algebra spanned by $h$, and let $t\in T$ with $|t|<1$. Then the coadjoint action $\Ad^*(t)$ of $t$ is given on $(\fg^*)^h_i$ by $t^i$. Define a sequence $\psi_n\in \fg^*$ by $\psi_n:=t^{2n}\Ad^*(t^n)\psi$. Then $\psi_n$ converges to $\varphi$. Since $\cM$ is conic and $G$-invariant, $\psi_n\in \cM$. Since $\cM$ is closed, we have $\varphi \in \cM$. Conversely, if $\varphi$ lies in the image of the moment map then $\varphi$ vanishes on $\fg_x$ for some $x\in X$ and thus $\eta_{\varphi}$ is trivial on $N_x$. By Theorem \ref{thm:BZ} this implies $\Sc^*(X)^{N,\eta_{\varphi}}\neq 0$, which by definition means $\varphi \in \WO(\Sc(X))$. \end{proof} Let us now prove Theorem \ref{thm:Twisted} in a similar way. \begin{proof}[Proof of Theorem \ref{thm:Twisted}] Suppose that $\varphi\in \WO(\Sc(G)_{H,\chi})$. Then by definition we have $\Sc^*(G)^{N\times H,\eta_{\varphi}\times \chi}\neq 0$ and thus $\Sc^*(G)^{V\times H,\eta_{\varphi}\times \chi}\neq 0$. 
By Theorem \ref{thm:BZ}, this implies that there exists $g\in G$ such that $\eta_{\varphi}\times \chi$ vanishes on the stabilizer in $V\times H$ of $g$. Replacing $\varphi$ by $g^{-1}\cdot \varphi$ we can assume that $g$ is the unit element. Then the stabilizer can be identified with $V\cap H$, and we have $\varphi|_{\fv\cap \fh}=d\zeta|_{\fv\cap \fh}$. Thus there exists $\psi\in \fg^*$ such that $\psi|_{\fv}=\varphi|_{\fv}$ and $\psi|_{\fh}=d\zeta$. Since the kernel of the restriction to $\fv$ is $(\fg^*)^h_{\geq -1}$, we have $\psi=\varphi+\psi'$, where $\psi'\in (\fg^*)^h_{\geq -1}$. Let $T\subset G$ be the one-dimensional torus with Lie algebra spanned by $h$, and let $t\in T$ with $|t|<1$. Define a sequence $\psi_n:=t^{2n}\Ad^*(t^n)\psi\in \fg^*$. Then $\psi_n\to\varphi$ and thus $\varphi\in\overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}\cap \cN.$ Conversely, if $\varphi\in G\cdot p_{\fh}^{-1}(d\zeta)\cap \cN$ then $\chi|_{H\cap N^g}=\eta_{\varphi}^g|_{H\cap N^g}$ for some $g\in G$. By Theorem \ref{thm:UGH} for Archimedean $F$, and Theorem \ref{thm:BZ} for non-Archimedean $F$, this implies $\Sc^*(G)^{N\times H,\eta_{\varphi}\times \chi}\neq 0$ and thus $\varphi\in \WO(\Sc(G)_{H,\chi})$. \end{proof} \DimaD{ \subsection{Proof of Corollary \ref{cor:Irr}}\label{subsec:PfIrr} For the proof we will need the following proposition \DimaE{whose proof is postponed to \S\ref{subsec:-WO}.} \begin{prop}\label{prop:-WO} For any $\pi\in \Irr(G)$ we have $\overline{\WO(\pi)}=-\overline{\WO(\widetilde{\pi})}$. Moreover, if $F$ is Archimedean then $\WO(\pi)=-\WO(\widetilde{\pi})$. \end{prop} \begin{proof}[Proof of Corollary \ref{cor:Irr}] Since the Whittaker quotient is a space of coinvariants, the functor $\pi\mapsto \pi_{\varphi}$ is right-exact. If $\pi$ is $H$-distinguished then $\widetilde{\pi}$ is a quotient of $\Sc(G)_{H,\Delta_H\chi}$ and thus $\WO(\widetilde{\pi})\subset \WO(\Sc(G)_{H,\Delta_H\chi})$. 
By Theorem \ref{thm:Twisted} we have $\WO(\Sc(G)_{H,\Delta_H\chi}) \subset\overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}$. By Proposition \ref{prop:-WO} we have $\overline{\WO(\widetilde{\pi})}=-\overline{\WO(\pi)}$. Altogether we obtain $$\WO(\pi)\subset-\overline{\WO(\widetilde{\pi})}\subset- \overline{\WO(\Sc(G)_{H,\Delta_H\chi})}\subset \overline{G\cdot Fp_{\fh}^{-1}(d\zeta)},$$ where the last inclusion uses that $Fp_{\fh}^{-1}(d\zeta)$ is a linear space, and is thus stable under negation. \end{proof} \subsection{Wave-front sets and the proof of Corollary \ref{cor:WF}} \label{sec:WF} In this subsection only we assume that $F$ is non-Archimedean. Let $\pi\in \Irr(G)$. Let $\chi_{\pi}$ be the character of $\pi$. It is a generalized function on $G$, and it defines a generalized function $\xi_{\pi}$ on a neighborhood of zero in ${\mathfrak{g}}$ by restriction to a neighborhood of $1\in G$ and application of the logarithm. By \cite{HowGL} and \cite[p. 180]{HCWF}, $\xi_{\pi}$ is a linear combination of Fourier transforms of $G$-invariant measures of nilpotent coadjoint orbits. The measures extend to $\fg^*$ by \cite{RangaRao}. Denote by $\overline{\WF}(\pi)$ the closure of the union of all the orbits that appear in the decomposition of $\xi_\pi$ with non-zero coefficients. \begin{thm}[{\cite[Proposition I.11, Theorem I.16 and Corollary I.17]{MW}, and \cite{Var}}] \label{thm:MW}$\,$\\ For any $\pi\in \Irr(G)$ we have $\overline{\WF}(\pi)=\overline{\WO(\pi)}$. \end{thm} \begin{proof}[Proof of Corollary \ref{cor:WF}] By Corollary \ref{cor:Irr} we have $\WO(\pi)\subset \overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}$. By Theorem \ref{thm:MW} we have $\overline{\WF}(\pi)=\overline{\WO(\pi)}$. Altogether we obtain $\overline{\WF}(\pi)=\overline{\WO(\pi)}\subset \overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}.$ If $\chi$ is trivial on the unipotent radical of $H$ then $\zeta=1$ and $\overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}=\overline{G\cdot \fh^{\bot}}.$ \end{proof} \subsection{Proof of Proposition \ref{prop:-WO}}\label{subsec:-WO} For non-Archimedean $F$ this follows from Theorem \ref{thm:MW}. 
Indeed, in this case by Theorem \ref{thm:MW} we have $\overline{\WF}(\widetilde{\pi})=\overline{\WO(\widetilde{\pi})}$, and it is easy to see that $\overline{\WF}(\widetilde{\pi})=-\overline{\WF}(\pi)$. Thus $\overline{\WO(\widetilde{\pi})}=-\overline{\WO(\pi)}$. If $F$ is Archimedean then by \cite{HS} (cf. \cite{Ada}) there exists a canonical involution $\theta$ of $G$, called the Chevalley involution, such that $\theta(g)$ is conjugate to $g^{-1}$ for any $g\in G$, and $\alp \circ d\theta\in -G\cdot \alp$ for any $\alp\in \fg^*$. By \cite[Corollary 1.4]{Ada} the twisted representation $\pi^{\theta}$ is isomorphic to $\widetilde{\pi}$. Thus $\WO(\widetilde{\pi})=\WO(\pi^{\theta})=d\theta(\WO(\pi))=-\WO(\pi).$ \proofend } \section{Preliminaries on the geometry of $X$ }\label{sec:Geo} In this section we will identify the algebraic groups and algebraic varieties with their points over the algebraic closure $\overline{F}$ of $F$. We will view the $F$-points of the varieties as subsets invariant under the absolute Galois group $Gal(\overline{F}/F)$. As for the Lie algebra, we will use the notation $\fg$ to denote its $F$-points (as in the rest of the paper), and $\fg(\overline{F})$ will denote the points over the algebraic closure. We start with several definitions and a theorem from \cite{KnoKro}. \begin{defn} An $F$-group is called \emph{elementary} if it is connected and all $F$-rational elements are semi-simple. An \emph{elementary radical} is the subgroup generated by $F$-rational unipotent elements. It is the smallest normal subgroup with elementary quotient. If $\bf H=LU$ is a Levi decomposition of an $F$-group $\bf H$ then ${\bf H}_{el}={\bf L}_{el}{\bf U}$, where ${\bf L}_{el}\subset {\bf L}$ is the product of all non-anisotropic simple factors. \end{defn} Note that a group over an algebraically closed field is elementary if and only if it is a torus. \begin{defn} Fix a minimal $F$-parabolic subgroup ${\bf P}_0$ of $\bf G$. 
The $F$-adapted parabolic $\bf P$ of $\bf X$ that includes ${\bf P}_0$ is defined by \begin{equation} {\bf P}:=\{g\in \bfG \, \vert \, g{\bf P_0}x={\bf P_0} x \text{ for $x$ in a dense open subset of } {\bf X}\} \end{equation} We denote the $F$-points of $\bf P$ by $P$. We will call $P$ an \emph{adapted parabolic of $X$.} \end{defn} Most of the statements in \cite{KnoKro} deal with $F$-dense varieties. They are applicable to our case, since we assume $\bf X$ to be irreducible, $X$ to be non-empty, and $F$ to be a local field. Under these assumptions, \cite[\S 1.A.2 and Proposition 2.6]{Pop} implies that $\bf X$ is $F$-dense, {\it i.e.} $X$ is Zariski dense in $\bf X$. \begin{theorem}[{Local structure theorem, \cite[Corollary 4.6]{KnoKro}}]\label{thm:LS} Let $\bf P=LU$ be a Levi decomposition of an adapted parabolic of $\bf X$. Then there exists a smooth affine $\bf L$-subvariety ${\bf X}_{el}\subset {\bf X}$ such that \begin{enumerate}[(i)] \item ${\bf L}_{el}$ acts trivially on ${\bf X}_{el}$, all the $\bf L$-orbits on ${\bf X}_{el}$ are closed, and the categorical quotient ${\bf X}_{el}\to {\bf X}_{el}//{\bf L}$ is a locally trivial bundle in the étale topology. \item The natural morphism \begin{equation}\label{=LST} {\bf U}\times {\bf X}_{el}={\bf P}\times ^{\bf L} {\bf X}_{el}\to {\bf X} \end{equation} is an open embedding. \end{enumerate} \end{theorem} \DimaE{ Recall that we denote by $\cM\subset \fg^*$ the closure of the image of the moment map of $X$. \begin{cor}\label{cor:PG} If $P=G$ then $\cM=\{0\}$. \end{cor}} It is easy to see from the definition that any subgroup of an elementary group is reductive. Thus Theorem \ref{thm:LS} implies the following corollary. \begin{cor}\label{cor:red} There exists an open dense $P$-invariant subset $X^0$ of $X$ such that the stabilizer in $P$ of any point in $X^0$ is reductive. \end{cor} Theorem \ref{thm:LS} holds over any field of characteristic zero. It was first proven for algebraically closed fields in \cite{BLV}. 
We can therefore consider the adapted parabolic of $\bf X$ over the algebraic closure $\overline{F}$ and compare it to $\bf P$. \begin{proposition}[{\cite[Proposition 9.1]{KnoKro}}]\label{prop:PQ} Let $\bf B\subset P_0$ be a Borel subgroup of $\bf G$, and let $\bf Q$ be the adapted parabolic subgroup of $\bf X$ that includes $\bf B$. Then ${\bf P}={\bf P}_0{\bf Q}$. \end{proposition} Let $\bf Q^-=MU^-$ be a parabolic subgroup of $\bf G$ opposite to $\bf Q$ (where $\bf M=\bf Q\cap Q^-$), let $\bf M_s$ be the stabilizer in $\bf M$ of a point in ${\bf X}_{el}$ (where ${\bf X}_{el}$ comes from Theorem \ref{thm:LS} applied over $\overline{F}$), and let $\bf S$ be the preimage of $\bf M_s$ under the projection $\bf Q^-\onto M$. Let us now consider the moment map over the algebraic closure $\mu:T^*{\bf X}\to \fg(\overline{F})^*$. Let $\fs$ and $\fq^-$ denote the Lie algebras of $\bf S$ and $\bf Q^-$. \begin{thm}[{\cite[Appendix A]{PraSak} using \cite[Satz 5.4]{Kno}}]\label{thm:Kno} $$ \overline{\mu(T^*{\bf X})}={\bfG}\cdot\fs^{\bot}$$ \end{thm} \begin{cor}\label{cor:Geo} We have $$\cM\cap \cN\subset \overline{\mu(T^*{\bf X})}(F)\cap \cN= ({\bf G}\cdot(\fq^-)^{\bot})(F)$$ \end{cor} \begin{proof} Since $\bf M/M_s$ is elementary over $\overline{F}$, it is a torus. Thus the nilpotent elements of $\fs^{\bot}$ lie in $(\fq^{-})^{\bot}$. The corollary follows now from Theorem \ref{thm:Kno}. \end{proof} Note that the Killing form identifies $\bf (\fq^-)^{\bot}$ with $\fu^-$. \section{Proof of Proposition \ref{prop:PrinSer} and Corollaries \ref{cor:main} and \ref{cor:sym} }\label{sec:PfCor} \begin{proof}[Proof of Proposition \ref{prop:PrinSer}] \eqref{PS:irr} is a well-known consequence of Bruhat theory, see \cite[\S 4]{KV} for the Archimedean case. The non-Archimedean case is proven analogously. \eqref{PS:WO} follows from Theorem \ref{thm:Twisted}. Indeed, we take $\zeta$ to be trivial and $\xi_m\circ\psi$ to be $\Delta_P^{-1/2}$. Then $p_{\fh}^{-1}(d\zeta)=\fp^{\bot}$. 
Since $G\cdot\fp^{\bot}$ is closed and lies in $\cN$, we obtain that $$G\cdot p_{\fh}^{-1}(d\zeta)\cap \cN=G\cdot \fp^{\bot}=\overline{G\cdot Fp_{\fh}^{-1}(d\zeta)}\cap \cN.$$ By Theorem \ref{thm:Twisted} we obtain $G\cdot\fp^{\bot}=\WO(\Sc(G)_{P,\Delta_P^{-1/2}})=\WO(\pi)$. For \eqref{PS:Quot} note that $P\cap H$ is the stabilizer of the coset $[1]\in G/H$, that lies in an open $P$-orbit. Thus, by Corollary \ref{cor:red}, $P\cap H$ is reductive, and thus $\Delta_{P\cap H}=1$. \eqref{PS:Quot} follows now from \cite[Proposition D]{GSS}. More precisely, \cite[Proposition D]{GSS} is the case of trivial $\chi$, but the general case is proven in the same way. \end{proof} \begin{proof}[Proof of Corollary \ref{cor:main}]$\,$ \eqref{cormain:FinDim} Let $\pi\in \Irr(G)_H$. Present $G$ as a quotient of $Z\times K\times M$ \DimaD{by a finite subgroup}, where $Z,K,M$ are reductive groups with $Z$ commutative, $K$ compact and $M$ generated by its unipotent elements. Then $\pi\in \Irr(Z\times K\times M)$, and we can assume $G=Z\times K\times M$. Since $P=G$, Theorem \ref{thm:Twisted} and Corollary \ref{cor:PG} imply that $\WO(\widetilde{\pi})=\{0\}$. Thus \cite[Theorem 1.4]{GGS2} implies that $M$ acts locally finitely on $\pi$. Now, $Z$ acts on $\pi$ by scalars by the Schur-Dixmier lemma, and $\pi$ has a $K$-finite-vector. Since $\pi$ is irreducible, we obtain that it is finite-dimensional.\\ \eqref{cormain:ExFinDim} follows from Proposition \ref{prop:PrinSer}.\\ \eqref{cormain:ExGen} Let $\pi:=Ind_P^G1$. As explained in the proof of Proposition \ref{prop:PrinSer}(\ref{PS:irr},\ref{PS:WO}), $\pi$ is irreducible and generic. As before, $P\cap H$ is reductive and thus is unimodular. By \cite[Corollary C]{GSS}, $\pi\in \Irr(G)_{H,\chi}$. Again, \cite[Corollary C]{GSS} is the case of trivial $\chi$, but the general case is proven in the same way.\\ \eqref{GH:NoGen} If $G$ is not quasi-split then it has no generic representations. 
If $G$ is quasi-split then by \DimaD{Corollary \ref{cor:Irr}, Proposition \ref{prop:PQ} and Corollary \ref{cor:Geo} we have $\WO(\pi)\subset\overline{G\cdot \fh^{\bot}}\cap\cN=\cM\cap \cN\subset {\bfG\cdot\fp^{\bot}}$. If $P$ is not a Borel subgroup then $\fp^{\bot}$ has no regular nilpotent elements, and thus neither does $\WO(\pi)$.} \end{proof} \begin{lemma}\label{lem:uni} If $PH$ is open in $G$ and $\Delta_H=1$ then $\Delta_{P}|_{P\cap H}=1$. \end{lemma} \begin{proof} Since $\Delta_H=1$, there exists a $G$-invariant measure on $G/H$. The restriction of this measure to $PH/H\cong P/(P\cap H)$ is $P$-invariant. Thus $\Delta_{P}|_{P\cap H}=\Delta_{P\cap H}$. But $\Delta_{P\cap H}=1$. \end{proof} \begin{proof}[Proof of Corollary \ref{cor:sym}] Let $\theta$ be the involution of $G$ such that $H=G^{\theta}$. Let $P$ be an adapted parabolic for $G/H$ such that $PH$ is open in $G$. Since $H$ is reductive, $\Delta_H=1$. By Lemma \ref{lem:uni}, $\Delta_{P}|_{P\cap H}=1$. Thus, by Corollary \ref{cor:main}, the existence of a generic $\pi\in \Irr(G)_H$ is equivalent to $P$ being a Borel, and the non-existence of an infinite-dimensional $\pi\in \Irr(G)_H$ is equivalent to $P$ being $G$. Now, $P=G$ if and only if $G/H$ is a torus, which in turn is equivalent to $G^H_{\R}$ being a quotient of the product of a compact group and a torus. This proves (ii). To prove (i), we use the fact that $P$ is a Borel if and only if there exists a $\theta$-split Borel. The existence of a $\theta$-split Borel is equivalent to $G^H_{\R}$ being quasi-split by \cite{PraSak}. \end{proof} \DimaC{ \section{Proof of Corollary \ref{cor:global}}\label{sec:global} Let $\pi\cong\otimes'_{\mu}\pi_{\mu}$ be the decomposition of $\pi$ into a restricted tensor product of irreducible local factors. Let $\eta$ be a non-zero $\bfH(\mathbb{A})$-equivariant functional on $\pi$. Then $\eta$ does not vanish on some pure tensor $\otimes w_{\mu}\in \pi\cong\otimes'_{\mu}\pi_{\mu}$. 
For any $\nu$, we let $G:=\bfG(\mathbb{K}_{\nu})$ and $H:=\bfH(\mathbb{K}_{\nu})$ and construct a non-zero $H$-invariant functional $\zeta_{\nu}$ on $\pi_{\nu}$ by $\zeta_{\nu}(w):=\eta(w\otimes \bigotimes_{\mu\neq \nu}w_{\mu})$. By a similar argument, we have $\WO(\pi)\subset \WO(\pi_{\nu})$.} \DimaD{The existence of $\zeta_{\nu}$ implies by Corollary \ref{cor:Irr} that $\WO(\pi_{\nu})\subset \overline{G\cdot \fh^{\bot}(\mathbb{K}_{\nu})}$. Altogether we have $\WO(\pi)\subset \WO(\pi_{\nu})\subset \overline{G\cdot \fh^{\bot}(\mathbb{K}_{\nu})}$. \proofend }
The Flaw of Averages: Why We Underestimate Risk in the Face of Uncertainty, by Sam L. Savage

CHAPTER 17 The Flaw of Extremes

Did you know that localities whose residents have the largest average earlobe size tend to be small towns? More amazingly, if this seeming bit of trivia were more widely understood, it could save millions of dollars per year. To explain why, I will start with a problem familiar to anyone who has worked in a large organization: bottom-up budgeting.

Sandbagging

Consider a firm in which each of ten divisional vice presidents submits a budget to the CEO. The VPs are not completely certain of their cash requirements, but suppose for argument's sake that each is 95 percent confident of requiring between $800,000 and $1.2 million with an average of $1 million. Further, their needs are not interrelated. If any one of them requires more or less than average, it won't affect the needs of the others. If each VP requests the average requirement of $1 million, then the CEO can add the ten estimates together to arrive at the correct average of $10 million. The Strong Form of the Flaw of Averages does not apply because summing is a straight-line calculation. But what kind of VP submits a budget that has a 50 percent chance of being blown? Instead of submitting their averages, the VPs will probably provide numbers they are pretty sure they won't exceed—let's say 90 percent sure. This is the widely practiced budgeting technique known as sandbagging. Assuming the uncertainties are bell-shaped and based on the preceding ranges of uncertainty, then the cash required ...
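The sandbagging arithmetic above can be checked with a short Monte Carlo sketch (my own illustration, not from the book). It assumes each VP's requirement is normally distributed, so that the 95 percent range of $0.8M to $1.2M corresponds to a standard deviation of roughly $102,000; the constant 1.2816 is the 90th percentile of the standard normal.

```python
import random

random.seed(0)
N_VPS = 10
MEAN = 1.0                 # $ millions
SIGMA = 0.2 / 1.96         # 95% of draws land in [0.8, 1.2]
Z90 = 1.2816               # 90th percentile of the standard normal

# Each VP sandbags: submits a budget they are 90% sure not to exceed.
request = MEAN + Z90 * SIGMA
sandbagged_total = N_VPS * request

# What the CEO actually needs to cover the *total* 90% of the time.
TRIALS = 100_000
totals = sorted(sum(random.gauss(MEAN, SIGMA) for _ in range(N_VPS))
                for _ in range(TRIALS))
needed = totals[int(0.9 * TRIALS)]

print(f"sum of ten 90% budgets: {sandbagged_total:.2f}M")   # 11.31M
print(f"90th pct of the total:  {needed:.2f}M")             # about 10.4M
```

Because independent uncertainties partially cancel in the sum, the ten padded requests overshoot the genuinely safe total by roughly $0.9 million.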
TITLE: Question on Concatenation of Prime Numbers QUESTION [11 upvotes]: Concatenation Concatenation of two numbers $(p,q)$ in some number base is defined as combining the digits of those numbers, written as $p||q$. For example, $123||456=123456$. Concatenation simply combines the digits of two numbers. Concatenator Let $p,q$ be prime numbers. If the result of $p||q$ is a composite number, I call it the Concatenator of $p,q$. If the result is a new prime number, then the pair $(p,q)$ does not have a Concatenator. Always a Concatenator? From this you can see that all pairs of the form $(p,2)$ and $(q,5)$ always have a Concatenator, since the result is divisible by $2$ in the first case and by $5$ in the second case. Also, if $|p-q|=2$ and the smaller prime is $\gt3$, then the pair $(p,q)$ always has a Concatenator since the result is always divisible by $3$. These pairs are twin primes. If $p=q$ then the pair also always has a Concatenator of course, since $p||p=p\cdot(10^k+1)$ (where $k$ is the number of digits of $p$) is divisible by $p$. Perfect Concatenators I also wanted to mention that some pairs share the same Concatenator. If a pair $p,q$ has a unique Concatenator, then I call it the Perfect Concatenator. Example: $37193$ is not a perfect one, since the pairs $(3719,3)$ and $(37,193)$ and $(3,7193)$ all share it. Example: Trivially perfect pairs are pairs where both $p$ and $q$ are one-digit primes. Delayed Concatenator Furthermore, if a pair does not have a Concatenator, we can multiply $p$ by $10$ before the concatenation, and check if the result is composite. If it is, the pair is a Delayed-$1$ Concatenator. If the result is still not a Concatenator, multiply by $10$ again and repeat until you get a Concatenator. If you multiplied by $10^n$ in total, then the result is a Delayed-$n$ Concatenator. What is the most delayed concatenator? 
Below are the smallest most delayed concatenators I've found so far for $p=2,3,5,7,11,13$:

203, 20083, 200011, 200004133, 20000029, (5)
3013, 3007, 300011, 300002411, 30000089, (5)
5041, 500101, 50003, 500002237, 50000020063, (5)
703, 70043, 700019, 700002551, (4)
1107, 110071, 110003, 1100005879, (4)
13011, 130037, 130007, 130000307, 1300000457, (5)

Below are the smallest most delayed concatenators I've found so far for $q=2,3,5,7,11,13$:

[doesn't exist]
203, 29003, 50003, 27100003, 5527000003, (5)
[doesn't exist]
1107, 3007, 130007, 103300007, 1069000007, 76810000007 (6)
13011, 230011, 200011, 857000011, 14990000011 (5)
3013, 190013, 15100013, 43000013, 4870000013 (5)

From these you can see that the best I could find was a delayed-$6$ concatenator. The smallest $\text{D}6$ concatenator so far is $76810000007$. This means $7681000007,768100007,76810007,7681007,768107,76817$ are all prime. This is the result of concatenation of $(7681,7)$ delayed by $10^6$. But the real smallest $\text{D}6$ would be of the form $2000000||q$, if such a $q$ exists. Questions Can you find a more delayed concatenator? Is there such a thing as the most-delayed concatenator? Can a more delayed concatenator be computed/calculated without brute-force search? Is it possible to define a more efficient algorithm? Is there anything similar to this already analyzed somewhere? Are there any more trivial pairs $(p,q)$ such that they always have a Concatenator, other than ones with $p=2,5$ and twin primes? Asking if there is a delayed-$n$ concatenator for some $(p,q)$ is like asking if there exists a sequence of prime numbers of length $n$ of the form $$p||\underbrace{0\dots0}_k||q$$ for $k=0,1,2\dots n-1$. REPLY [5 votes]: Note that $1,10,10^2,10^3,10^4$ and $10^5$ all have different remainders mod $7$. Thus, if neither $p$ nor $q$ is $7$, one of $p\|q,10p\|q,...,10^5p\|q$ is a multiple of $7$. So you can't get a delay of more than $5$ unless one of the primes is $7$.
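The mod-$7$ observation in the answer is easy to verify computationally. Below is a small sketch (the function names are my own); it checks the residue claim and the resulting delay bound on a few sample pairs, using simple trial division:

```python
from math import isqrt

def is_prime(n: int) -> bool:
    """Trial division; fine for the modest numbers tested here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def concat(p: int, q: int, zeros: int = 0) -> int:
    """The number p || 0...0 || q with `zeros` zeros inserted."""
    return p * 10 ** (zeros + len(str(q))) + q

def delay(p: int, q: int, cap: int = 12) -> int:
    """Smallest k for which p || 0^k || q is composite."""
    for k in range(cap):
        if not is_prime(concat(p, q, k)):
            return k
    return cap

# 10^0 .. 10^5 hit all six nonzero residues mod 7 ...
print(sorted(10 ** k % 7 for k in range(6)))   # [1, 2, 3, 4, 5, 6]

# ... so for primes p, q both different from 7 the delay is at most 5:
pairs = [(p, q) for p in (2, 3, 5, 11, 13, 19) for q in (3, 11, 13, 19)]
print(max(delay(p, q) for p, q in pairs))      # at most 5

# (2, 3) is a Delayed-1 Concatenator: 23 is prime but 203 = 7 * 29.
print(delay(2, 3))   # 1
```

A brute-force search for long delays would iterate `delay` over prime pairs with one member equal to $7$, since the argument above rules out every other pair.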
TITLE: $f(x)=x^4+ax^2+b\in \mathbb{Z}[x]$ is irreducible iff $\alpha ^2, \alpha \pm\beta$ are not elements of $\mathbb{Q}.$ QUESTION [6 upvotes]: Let $\pm \alpha, \pm \beta$ denote the roots of the polynomial $f(x)=x^4+ax^2+b\in \mathbb{Z}[x]$. Prove that $f(x)$ is irreducible over $\mathbb{Q}$ if and only if $\alpha ^2, \alpha \pm\beta$ are not elements of $\mathbb{Q}.$ $\Rightarrow$ Suppose $f(x)$ is irreducible. $f(x)=(x-\alpha)(x+\alpha)(x-\beta)(x+\beta)=(x^2-\alpha^2)(x^2-\beta^2)$. So, $\alpha^2$ is not in $\mathbb{Q}$. $f(x)=(x-\alpha)(x+\alpha)(x-\beta)(x+\beta)=(x^2+(\alpha-\beta)x-\alpha\beta)(x^2-(\alpha-\beta)x-\alpha\beta)$. $\alpha\beta$ is in $\mathbb{Q}$ if $b$ is a square. How to show $\alpha\pm\beta$ are not in $\mathbb{Q}?$ REPLY [2 votes]: Since your polynomial has rational coefficients you have $$2(\alpha^2+\beta^2)=x_1^2+x_2^2+x_3^2+x_4^2=(x_1+x_2+x_3+x_4)^2-2(x_1x_2+...+x_3x_4)\in \mathbb Q$$ Now, if $\alpha \pm \beta \in \mathbb Q$ then $$(\alpha \pm \beta)^2=\alpha^2+\beta^2\pm 2\alpha \beta \in \mathbb Q$$ and hence $$\alpha \beta \in \mathbb Q$$ This shows that at least one of the factors in your factorisation is in $\mathbb Q[X]$.
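A computational companion to this problem (my own sketch, not part of the proof): since $f$ is monic, Gauss's lemma reduces reducibility over $\mathbb Q$ to a factorization into monic integer quadratics, and comparing the $x^3$ and $x$ coefficients leaves only two possible shapes, both checkable in exact integer arithmetic (assuming $b\neq0$):

```python
from math import isqrt

def is_square(n: int) -> bool:
    return n >= 0 and isqrt(n) ** 2 == n

def reducible(a: int, b: int) -> bool:
    """Is x^4 + a*x^2 + b reducible over Q?  (Assumes b != 0, so any
    nontrivial factorization yields a monic integer quadratic factor.)"""
    # Shape 1: (x^2 + q)(x^2 + s) with q + s = a, q*s = b.
    d2 = a * a - 4 * b
    if is_square(d2) and (a + isqrt(d2)) % 2 == 0:
        return True
    # Shape 2: (x^2 + p*x + q)(x^2 - p*x + q) with q^2 = b, p^2 = 2*q - a.
    if is_square(b):
        c = isqrt(b)
        if is_square(2 * c - a) or is_square(-2 * c - a):
            return True
    return False

# x^4 - 5x^2 + 6 = (x^2 - 2)(x^2 - 3): here alpha^2 = 2 is rational.
print(reducible(-5, 6))    # True
# x^4 + 4 = (x^2 + 2x + 2)(x^2 - 2x + 2): roots 1+-i and -1+-i, and
# alpha + beta = 2 is rational.
print(reducible(0, 4))     # True
# x^4 - 10x^2 + 1, minimal polynomial of sqrt(2) + sqrt(3):
# alpha^2 = 5 + 2*sqrt(6) and alpha +- beta = 2*sqrt(3), 2*sqrt(2)
# are all irrational, matching the irreducibility criterion.
print(reducible(-10, 1))   # False
```

The three examples line up with the statement being proved: reducibility occurs exactly when one of $\alpha^2$, $\alpha+\beta$, $\alpha-\beta$ is rational.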
Sean Finnerty
Thu, Apr 18, 2019 – Fri, Apr 19, 2019
Sean Finnerty's credits include a comedy competition (2013) and Electric Picnic (2018), along with making it to the semi-finals of the nationwide talent search Stand Up NBC. With a keen eye for detail, Sean has also developed a reputation as a Roaster and is currently the number 1 ranked roast comic in New York City.
It is a sad fact of reality that we can no longer trust our CPUs to run only the things we want and to not have exploitable flaws. I will present a proposal for a system to restore (some) trust in communication secrecy and system security even in this day and age, without compromising too much of the usability and speed that modern systems provide. This talk was translated into multiple languages. The files available for download contain all languages as separate audio tracks. Most desktop video players allow you to choose between them; please look for "audio tracks" in your desktop video player.
TITLE: Modular parametrization abelian varieties QUESTION [5 upvotes]: Let $N$ be a positive integer, and let $f$ be a newform for $S_2(\Gamma_0(N))$. Then by Shimura's construction, the variety $J_0(N)$ has a quotient $A_f$ which is an abelian variety attached to $f$. $$\pi:J_0(N) \to A_f$$ Dualizing this map we get a map $A_f^\vee \to J_0(N)^\vee$, and since the latter is isomorphic to $J_0(N)$ (it is a Jacobian), we get a map $$A_f^\vee \to A_f$$ This map is an isogeny of degree $d^2$ for some positive integer $d$, and the modular degree is defined precisely as $d$. The curve $X_0(N)$ has a natural inclusion into its Jacobian, so it makes sense to look at the map $$ X_0(N) \hookrightarrow J_0(N) \to A_f$$ Question: is there any relation between the degree of the map $X_0(N) \to A_f$ (by which I mean the degree of $X_0(N)$ onto its image) and $d$? I would guess that proving an inequality should not be that hard, but is it an equality? (in the case $A_f$ has dimension $1$ it is) REPLY [2 votes]: Let $\tau : X_0(N) \to A_f$ be the name of your map. And let $C := \tau(X_0(N))$; then if $A_f \cong J(C)$ and furthermore the inclusion $C \subseteq A_f$ can be identified with the Abel-Jacobi map, then the relation you are asking for does hold. This just stems from a more general fact about maps between curves, namely if $g: X \to Y$ is a map of curves then the composition of pullback and pushforward $J(Y) \to J(X) \to J(Y)$ is just multiplication by $\deg g$. So there is a relation in this case, namely $$\text{deg }( A_f^\vee \to A_f) = (\text{deg }X_0(N) \to C)^{2\,\mathrm{genus}(C)}.$$ So the same formula holds except that you have to include the genus of the image curve. However in general I do not expect such a relation to hold. To give an explicit counterexample, if $N=311$ then according to the LMFDB there are only two (Galois orbits of) newforms. Namely $f_a := 311.2.a$ and $f_b := 311.2.b$. Now consider the map $\pi_b: X_0(N) \to A_{f_b}$. 
Since $A_{f_b}$ is a simple abelian variety of dimension $22$, any curve inside $A_{f_b}$ has to have genus $\geq 22$. In particular $C := \pi_b(X_0(N))$ has to have genus $\geq 22$. However $\mathrm{genus}(X_0(311))=26$, so the Riemann–Hurwitz formula forces $\deg(\pi_b)=1$. However $$\text{deg }( A_{f_b}^\vee \to A_{f_b}) = \text{deg }( A_{f_a}^\vee \to A_{f_a}) = 2^{2\,\mathrm{genus}(X_0(N)^+)}=2^{8}$$ So in this case there is no relation of the form you want. See Section 3.4 of these lecture notes for the first equality. Note that the above is not an isolated or pathological example. There are many more examples in the LMFDB where $N$ is prime and there are only 2 Galois orbits of newforms in $S_2(\Gamma_0(N))$, and in all these cases a similar argument works to show there is no interesting relation between the degrees for the modular form whose Fricke sign is $-1$. Aside from the LMFDB examples, this situation is also something that is conjectured to happen quite often (see https://mathoverflow.net/a/396522/23501).
The Fluff Zone: Fluffed up beyond all recognition. This forum is for light-hearted, conversational threads. Fluff posts made in other forums and threads may be moved here by Moderators.
Whew! It is the season of rainy engagement sessions! Thankfully, we have the best couples in the world who are willing to go with the flow and let the rain (quite literally) roll right off their backs. Merilee & Andy are the epitome of an easy-going, Northwest couple and we had a blast hanging out with them a few weeks ago. They first held hands while walking through Post Alley, so it seemed only fitting that we should return there for their engagement session. It was a busy Friday at the market, but we managed to find some quiet spots where Andy & Merilee could simply focus on each other and the love they share. After the market, we headed to the Seattle Public Library, which is always a fun and inspiring place to shoot because of its architectural beauty and bold colors throughout. Not only that, but these two have a great sense of humor and kept us entertained throughout the session! 😉 Andy & Merilee, thank you for a wonderful afternoon–we loved capturing you and really enjoyed sharing food & conversation over Happy Hour too. You two are the best, and we can't wait to photograph your wedding this August! ~The Popes P.S. Thanks to Tiffany & Kellen for introducing us to Andy & Merilee. We love photographing friends of friends! Ok, I LOVE the last shot! Amazing job, as always!
As the countries of the world continue to become more interconnected on a daily basis, increasing globalization is inevitable. Their economies are tied together in a web that cannot be undone. Despite this connection, most countries have their own individual set of accounting standards. Currently, it is difficult to compare the financial statements of a company from one country to those of another. As globalization accelerates, the idea of harmonization between different countries' accounting systems becomes more necessary. Although it is a complex challenge to construct and enforce a worldwide set of accounting standards, there would be many advantages. A uniform accounting system would lead to more comparable financial information, encourage international investment and trade, and minimize future economic crises. The harmonization of accounting standards would allow the financial statements of all companies to be comparable. If every financial statement were prepared following the same standards, it would be easier to compare one corporation's performance to any other's. It would even be possible to compare the financial statements of a firm in one country to those of a firm across the globe. There would be no confusion for any of the various financial statement users because all statements would be prepared using the same standards. The enactment of a harmonized set of accounting standards would make the financial statements of different countries around the globe more comparable. Easily comparable financial statements would help to facilitate international investment. Most individuals are only familiar with the financial statements of their country of residence. Foreign financial statements most often are not created following the same accounting policies. Although the information they convey may appear similar, one cannot make a proper comparison because the numbers were not calculated the same way. 
This can make international investment a bit more risky, and therefore less likely that the average individual will participate. If a universal set of accounting standards were put into place, the flow of capital across international borders would increase. Everyone, from multinational firms to individuals, would easily be able to compare the financial statements of any firm in any country. Investors could be more certain about the financial health of a foreign company and would then be more likely to invest. In addition to increasing international investment, harmonization would also have an effect on international trade. Today, firms often choose to buy products and natural resources from other countries because of greater abundance or better prices. Yet sometimes the international market for goods and resources can lead to disputes and tension. There are often discrepancies over pricing caused by the usage of different accounting practices to calculate costs. For example, lumber producers in the US have been submitting formal complaints against Canadian lumber producers for many years. They believe that the Canadians' cost of softwood timber is too low. This allows the Canadian lumber producers to offer their goods at a more competitive price, while still maintaining a profitable margin. A lower cost gives Canada an unfair advantage in the international market. If a universal accounting method for cost were in place, both the US and Canada would calculate their costs the same way. There would be no reason to disagree, and all of the prices on the market could be more accurately and fairly compared. A universal set of accounting standards could help to avoid some potential future economic crises. In the past, the inability to fully comprehend the information on foreign financial statements has aided the development of financial crises. One such crisis took place in Southeast Asia at the end of the 1990s. 
This crisis began when investors believed Thailand could no longer sustain its levels of foreign investment and withdrew their money. The flight of capital invested in Thailand precipitated an economic crisis. A contagion effect led investors to remove their money from other Southeast Asian countries with similar economic practices, including Indonesia. Indonesia, as well as other Southeast Asian countries, fell into an economic crisis despite the fact that their financial information had indicated health. If a universal set of accounting standards had been in place, there might never have been any unsustainable valuations in Thailand. Additionally, foreign investors would have had more confidence in their investments, knowing the financial information was accurate. They could have made better decisions regarding their investments. In the future, harmonization could help to prevent this type of occurrence. Globalization makes it necessary for investors and firms to have access to financial information from companies around the world. It would be beneficial to create and enforce a set of universal accounting standards for every country. Financial information would be more transparent and easier to understand. Additionally, the financial statements of firms in any country would be easy to compare. Harmonization would lead to an increase in international investment, since investors would have more confidence in foreign financial information. International trade would also be affected, as universal accounting procedures would limit disputes. Finally, future economic crises due to misinformation and confusion could be avoided. Although the task of harmonization is daunting, it is evident that a universal set of accounting standards would have several benefits. Source by Zoe E Greenblatt
No Punching Bag Orange Platform Boots
Sold out. Regular price $50.00 USD.
* 6 inch * Platform boots * Green, white, and orange mix
No Punching Bag's "Who's Holding The Gun" collection is based off of everyday people. We wanted to show that everyone is affected by gun violence and it is everyone's responsibility to do something about it.
During the exclusive screening of Disney’s The Lion King at the Bonifacio High Street Cinemas on July 14, Scarlet Snow Belo corrected Vicki Belo as they recited a line from “Hakuna Matata.” Scarlet arrived dressed for the occasion, looking super cute in her Nala-inspired outfit. In Scarlet’s opening speech, she said, “Disney has a great surprise, just for very special eyes. Singing, dancing, that’s my thing.” Scarlet, who was joined by Vicki and Hayden Kho on stage, began reciting a few lines from “Hakuna Matata.” Vicki said, “Hakuna matata, what a wonderful world!” Scarlet was quick to correct her mom and shouted, “Phrase!” Vicki said that Scarlet is a big fan of The Lion King and has watched the cartoon version around 30 times. She said, “It’s such a wonderful story about courage, family, and friendship.” Hayden added, “I grew up watching The Lion King. Now, Scarlet and her generation will see a new interpretation of this classic story and it will become a film that we share.” He continued, “One of the things I love about the film is the unconventional friendship between Simba, Timon, and Pumba. It teaches us to stop judging based on appearances and the value of friendship in our lives.” Directed by Jon Favreau, The Lion King is showing nationwide on July 17 and has an all-star cast which includes Donald Glover, Seth Rogen, James Earl Jones, and Beyonce Knowles. Follow Monina on Instagram.
TITLE: Fundamentals of Probability QUESTION [1 upvotes]: Suppose I have two boxes, containing white and black balls. In the first one, we have 8 white and 6 black balls. In the second one, we have 4 white and 7 black balls. Now if one ball is drawn at random, suppose we need to find the probability of it being black. Now by the classical definition: there are 6 + 7 = 13 ways in which we can select a single black ball. And the total number of ways in which we can select a single ball is 14 + 11 = 25. So the probability using this approach is 13/25. However, if we break the problem up into two parts, the probability of selecting a particular box, and then selecting a single ball, we get (1/2 * 6/14) + (1/2 * 7/11). These two approaches lead to different answers. I'd like to know why the first one is incorrect. Thanks! REPLY [0 votes]: Once you say that there are two boxes, you obviously can draw a ball from only one box, so lumping together all the balls as if there were no boxes is incorrect. Assuming that each box has equal probability of being picked, the 2nd approach is correct.
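The answer's point can be made concrete with exact fractions and a quick simulation of the actual experiment (my own sketch):

```python
from fractions import Fraction
import random

# Two-stage (correct) computation: pick a box, then a ball.
two_stage = Fraction(1, 2) * Fraction(6, 14) + Fraction(1, 2) * Fraction(7, 11)
print(two_stage)                 # 41/77, about 0.5325

# "Lumped" computation: treats all 25 balls as equally likely -- but a
# ball in the 11-ball box is more likely to be drawn than one in the
# 14-ball box, so this does not model the two-box experiment.
lumped = Fraction(13, 25)        # 0.52
print(float(two_stage), float(lumped))

# Simulating the experiment agrees with the two-stage value.
random.seed(1)
TRIALS = 200_000
black = 0
for _ in range(TRIALS):
    white_count, black_count = random.choice([(8, 6), (4, 7)])
    black += random.randrange(white_count + black_count) < black_count
print(black / TRIALS)            # close to 0.5325
```

The lumped 13/25 would be correct for a different experiment: dump all 25 balls into one box and draw uniformly from it.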
The ongoing state and federal cuts to funding are continuously hitting higher ed as we continue to face an economic crisis in America. The main attack is on the nation's historically black colleges and universities (HBCUs), which raises the question, "Are our black colleges relevant?" And the answer is very much "yes." HBCUs were established independently by educators, slaves, or black churches to serve, teach, and train people of African-American culture, as well as to serve their communities. Today, HBCUs serve a diverse population. These institutions are equally important to the higher ed sector because they play the role of backbone institutions of education. For more than 150 years, HBCUs have nurtured, provided, and served academic excellence to low-income, first-generation, and academically underprepared students. Though those students were previously, or are still, labeled as "at risk of not entering or completing college," HBCUs continue to thrive in their mission of building confidence and turning those students into educated, successful testimonies. The question of purpose is the value of a college degree and how beneficial it will be in ensuring a career opportunity after college. UNCF's 6 Reasons HBCUs Are More Important Than Ever states that wages have stagnated, college tuition has steadily climbed, and more students are saddled with crushing college debt than ever before. HBCUs have tragically been forced to raise their tuition as a consequence of the budget cuts that frequently come from the state or federal level, in addition to their low endowment and alumni-giving figures. It is crucial that graduates and supporters of HBCUs take the initiative and interest to actively give back to an institution. You never know when the state might cut funding or how large a deficit the state is facing. 
In this case, Lincoln University of Missouri was just handed an across-the-state reduction of 7.7 percent, which will leave Lincoln with a $1.34 million cut from its current FY 2018 budget. Referring back to UNCF's study, costs at HBCUs run about 30 percent less than at comparable institutions; even so, HBCUs deliver a full educational opportunity at that lower cost. These institutions help keep young men and women off the streets of their communities, out of the statistics, and in a scholastic environment that equips them with practical training for life. When you invest back in your HBCU, you pay homage to the activists who spent their lives in educational institutions that taught African Americans to venture out and educate and train others, so that African Americans are equipped for today's society. Furthermore, when you decide to invest back, think of innovative ways that your contribution can work, financially or physically, for an individual's future as well as for the institution itself. Consider returning to campus and getting involved in positively promoting the institution. Volunteer and assist in recruiting prospective students to the university or college, exposing them to essentials of an HBCU that they may not have been aware of. Your contributions will pay off by making outside supporters, parents, and college-seeking students aware of how relevant HBCUs remain in today's society. There are also many support foundations that advocate for and contribute to HBCUs, such as UNCF, TMCF, the Tom Joyner Foundation, and the HBCU Campaign Fund (HCF). You may also consider visiting the website of the HBCU of your choice to find ways to give.
\begin{document} \title{Uniqueness of solutions of stochastic differential equations} \author{A. M. Davie} \address{School of Mathematics, University of Edinburgh, King's Buildings, Mayfield Road, Edinburgh EH9 3JZ, UK} \subjclass[2000]{Primary 60H10; Secondary 34F05} \begin{abstract} We consider the stochastic differential equation \[dx(t)=dW(t)+f(t,x(t))dt,\ \ \ \ \ \ \ \ \ \ x(0)=x_0\] for $t\geq0$, where $x(t)\in\R^d$, $W$ is a standard $d$-dimensional Brownian motion, and $f$ is a bounded Borel function from $[0,\infty)\times\R^d$ to $\R^d$. We show that, for almost all Brownian paths $W(t)$, there is a unique $x(t)$ satisfying this equation. \end{abstract} \maketitle \section{Introduction}\label{int} In this paper we consider the stochastic differential equation \[dx(t)=dW(t)+f(t,x(t))dt,\ \ \ \ \ \ \ \ \ \ x(0)=x_0\] for $t\geq0$, where $x(t)\in\R^d$, $W$ is a standard $d$-dimensional Brownian motion, and $f$ is a bounded Borel function from $[0,\infty)\times\R^d$ to $\R^d$. Without loss of generality we suppose $x_0=0$, and then we can write the equation as \zz\label{eq1} x(t)=W(t)+\int_0^tf(s,x(s))ds,\ \ \ \ \ t\geq0\z It follows from a theorem of Veretennikov \cite{ver} that (\ref{eq1}) has a unique strong solution, i.e. there is a unique process $x(t)$, adapted to the filtration of the Brownian motion, satisfying (\ref{eq1}). Veretennikov in fact proved this for a more general equation. Here we consider a different question, posed by N. V. Krylov \cite{ig}: we choose a Brownian path $W$ and ask whether (\ref{eq1}) has a unique solution for that particular path. The main result of this paper is the following affirmative answer: \begin{thm}\label{mth} For almost every Brownian path $W$, there is a unique continuous $x:[0,\infty)\rightarrow\R^d$ satisfying (\ref{eq1}).
\end{thm} This theorem can also be regarded as a uniqueness theorem for a random ODE: writing $x(t)=W(t)+u(t)$, the theorem states that for almost all choices of $W$, the differential equation $\frac{du}{dt}=f(t,W(t)+u(t))$ with $u(0)=0$ has a unique solution. In Section \ref{app}, we give an application of this theorem to convergence of numerical approximations to (\ref{eq1}).\vspace{.2cm}\\ {\bf Idea of proof of theorem.} The theorem is trivial when $f$ is Lipschitz in $x$, and the idea of the proof is essentially to find some substitute for a Lipschitz condition. The proof splits into two parts, the first (section \ref{bes}) being the derivation of an estimate which acts as a substitute for the Lipschitz condition, and the second (section \ref{pf}) being the application of this estimate to prove the theorem. We start with a reduction to a slightly simpler problem.\vspace{.2cm}\\ {\bf A reduction.} It will be convenient to suppose $|f(t,x)|\leq1$ everywhere, which we can arrange by scaling. Then it will suffice to prove uniqueness of a solution on [0,1], as we can then repeat to get uniqueness on [1,2] and so on. So we work on [0,1], let $X$ be the space of continuous functions $x:[0,1]\rightarrow\R^d$ with $x(0)=0$, and let $P_W$ be the law of $\R^d$-valued Brownian motion on [0,1], which can be regarded as a probability measure on $X$. Now we apply the Girsanov theorem (see \cite{iw}): define $\phi(x)=\exp\{\int_0^1f(t,x(t))\cdot dx(t)-\frac12\int_0^1|f(t,x(t))|^2dt\}$, which is well-defined for $P_W$ almost all $x\in X$, and define a measure $\mu$ on $X$ by $d\mu=\phi dP_W$. Then if $x\in X$ is chosen at random with law $\mu$, the path $W\in X$ defined by \zz\label{eq2} W(t)=x(t)-\int_0^tf(s,x(s))ds\end{equation} is a Brownian motion, i.e. $W$ has law $P_W$.
For a particular choice of $x$, and with $W$ defined by (\ref{eq2}), $x$ will be the unique solution of (\ref{eq1}) provided the only solution of \zz\label{eq3} u(t)=\int_0^t\{f(s,x(s)+u(s))-f(s,x(s))\}ds\end{equation} in $X$ is $u=0$. So, to prove the theorem it suffices to show that, for $\mu$-a.a. $x$, (\ref{eq3}) has no non-trivial solution, since for such $x$, with $W$ defined by (\ref{eq2}) no other $x$ can satisfy (\ref{eq2}). But $\mu$ is absolutely continuous w.r.t. $P_W$, so it suffices to show that, for $P_W$-a.a. $x$, (\ref{eq3}) has no non-trivial solution. In other words, it suffices to show that, if $W$ is a Brownian motion then with probability 1 there is no non-trivial solution $u\in X$ of \zz\label{eq4} u(t)=\int_0^t\{f(s,W(s)+u(s))-f(s,W(s))\}ds\end{equation} We prove this in section \ref{pf}.\vspace{.2cm}\\ {\bf Remark.} Our proof does not make use of the existence of a strong solution. It is tempting to try to prove the theorem by measure-theoretic arguments based on the strong solution and Girsanov's theorem. Define $T: X\rightarrow X$ by \[Tx(t)=x(t)-\int_0^tf(s,x(s))ds\] The strong solution gives a measurable map $S: E\rightarrow F$ where $E$ and $F$ are Borel subsets of $X$ with $P_W(E)=P_W(F)=1$, such that $T\circ S$ is the identity on $E$, and $F$ is the range of $S$. It follows that $T$ is (1-1) on $F$ and for any $W\in E$ there is a unique solution of (\ref{eq1}) in $F$. But we need a solution which is unique in $X$, and to achieve this we need to show that $T(X\backslash F)$ is a $P_W$-null set, and this seems to be a significant obstacle. Our proof is quite complicated and it seems reasonable to hope that it can be simplified. In particular, one might expect a simpler proof of Proposition \ref{mp}. This seems to be nontrivial even for $p=2$. The bound for $p=2$ follows from the first part of Lemma \ref{bl2} (with $t_0=0$ and $r=0$) and I do not know an essentially simpler proof.
In one dimension, in the case when $f(t,x)$ depends only on $x$, a different and shorter proof of Theorem \ref{mth} can be given, using local time, but it is not clear how to extend it to $d>1$. \section{The basic estimate}\label{bes} This section is devoted to the proof of the following: \begin{prop}\label{be} Let $g$ be a Borel function on $[0,1]\times\R^d$ with $|g(s,z)|\leq1$ everywhere. For any even positive integer $p$ and $x\in\R^d$, we have \[\IE\left(\int_0^1\{g(t,W(t)+x)-g(t,W(t))\}dt\right)^p\leq C^p(p/2)!|x|^p\] where $C$ is an absolute constant, $|x|$ denotes the usual Euclidean norm and $W(t)$ is a standard $d$-dimensional Brownian motion with $W(0)=0$. \end{prop} This will be deduced from the following one-dimensional version: \begin{prop}\label{mp} Let $g$ be a compactly supported smooth function on $[0,1]\times\R$ with $|g(s,z)|\leq1$ everywhere and $g'$ bounded (where the prime denotes differentiation w.r.t. the second variable). For any even positive integer $p$, we have \[\IE\left(\int_0^1g'(t,W(t))dt\right)^p\leq C^p(p/2)!\] where $C$ is an absolute constant, and here $W(t)$ is one-dimensional Brownian motion with $W(0)=0$. \end{prop} \begin{proof} We start by observing that the LHS can be written as \[p!\int_{0<t_1<\cdots<t_p<1}\IE\prod_{j=1}^pg'(t_j,W(t_j))dt_1\cdots dt_p\] and using the joint distribution of $W(t_1),\cdots,W(t_p)$ this can be expressed as \[p!\int_{0<t_1<\cdots<t_p<1}\int_{\R^p}\prod_{j=1}^p\{g'(t_j,z_j)E(t_j-t_{j-1},z_j-z_{j-1})\}dz_1\cdots dz_pdt_1\cdots dt_p\] where $E(t,z)=(2\pi t)^{-1/2}e^{-z^2/2t}$ and here $t_0=0$, $z_0=0$. We introduce the notation \[J_k(t_0,z_0)=\int_{t_0<t_1<\cdots<t_k<1}\int_{\R^k}\prod_{j=1}^k\{g'(t_j,z_j)E(t_j-t_{j-1},z_j-z_{j-1})\}dz_1\cdots dz_kdt_1\cdots dt_k\] and we shall show that $J_p(0,0)\leq C^p/\Gamma(\frac p2+1)$; Proposition \ref{mp} will then follow since\\$p!\leq2^p((p/2)!)^2$.
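The combinatorial inequality invoked in the last step can be verified in one line (for even $p$, as in the statement):

```latex
% Verification of p! <= 2^p ((p/2)!)^2 for even p: express p! through the
% central binomial coefficient and bound it by the full binomial sum.
\[
p!=\binom{p}{p/2}\Bigl(\frac p2\Bigr)!\,\Bigl(\frac p2\Bigr)!
\leq\Bigl(\sum_{j=0}^p\binom{p}{j}\Bigr)\Bigl(\Bigl(\frac p2\Bigr)!\Bigr)^2
=2^p\Bigl(\Bigl(\frac p2\Bigr)!\Bigr)^2.
\]
```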
In order to estimate $J_k$ we use integration by parts to shift the derivatives to the exponential terms. We introduce some notation to handle the resulting terms: we define $B(t,z)=E'(t,z)$ and $D(t,z)=E''(t,z)$ (where again primes denote differentiation w.r.t. the second variable). If $S=S_1\cdots S_k$ is a word in the alphabet $\{E,B,D\}$ then we define \[I_S(t_0,z_0)=\int_{t_0<t_1<\cdots<t_k<1}\int_{\R^k}\prod_{j=1}^k\{g(t_j,z_j)S_j(t_j-t_{j-1},z_j-z_{j-1})\}dz_1\cdots dz_kdt_1\cdots dt_k\] In fact, only certain words in $\{E,B,D\}$ will be required: we say a word is {\em allowed} if, when all $B$'s are removed from the word, a word of the form $(ED)^r=EDED\cdots ED$, $r\geq0$, is left. The allowed words of length $k$ correspond to the subsets of $\{1,2,\cdots,k\}$ having an even number of members (namely the set of positions occupied by $E$ and $D$ in the word). Hence the number of allowed words of length $k$ is the number of such subsets of $\{1,2,\cdots,k\}$, namely $2^{k-1}$. We shall show that \zz\label{b5}J_k(t_0,z_0)=\sum_{j=1}^{2^{k-1}}\pm I_{S^{(j)}}(t_0,z_0) \end{equation} where each $S^{(j)}$ is an allowed word of length $k$ (in fact each allowed word of length $k$ appears exactly once in this sum, but we do not need this fact). The proof will then be completed by obtaining a bound for $I_S$. We prove (\ref{b5}) by induction on $k$. So, assuming (\ref{b5}) for $J_k$, we have \[\begin{split}J_{k+1}(t_0,z_0)&=\int_{t_0}^1dt_1\int g'(t_1,z_1)E(t_1-t_0,z_1-z_0)J_k(t_1,z_1)dz_1\\ &=-\int_{t_0}^1dt_1\int g(t_1,z_1)B(t_1-t_0,z_1-z_0)J_k(t_1,z_1)dz_1\\ &\ \ \ -\int_{t_0}^1dt_1\int g(t_1,z_1)E(t_1-t_0,z_1-z_0)J_k'(t_1,z_1)dz_1 \end{split}\] Now we observe that, if $S$ is an allowed string then $I_S'=-I_{\tilde{S}}$ where $\tilde{S}$ is defined as $BS^*$ if $S=ES^*$ and as $DS^*$ if $S=BS^*$ (note that $\tilde{S}$ is not an allowed string).
Applying this to (\ref{b5}) we find $J_k'(t_0,z_0)=\sum_{j=1}^{2^{k-1}}\mp I_{\tilde{S}^{(j)}}(t_0,z_0)$ and then we obtain \[J_{k+1}(t_0,z_0)=\mp\sum_{j=1}^{2^{k-1}}I_{BS^{(j)}}(t_0,z_0)\pm\sum_{j=1}^{2^{k-1}}I_{E\tilde{S}^{(j)}}(t_0,z_0)\] Noting that, if $S$ is an allowed string, $BS$ and $E\tilde{S}$ are also allowed, this completes the inductive proof of (\ref{b5}). We now proceed to the estimation of $I_S(t_0,z_0)$, when $S$ is an allowed string. We start with some preliminary lemmas. \begin{lemma}\label{bl1} There is a constant $C$ such that, if $\phi$ and $h$ are real-valued Borel functions on $[0,1]\times\R$ with $|\phi(t,y)|\leq e^{-y^2/3t}$ and $|h(t,y)|\leq1$ everywhere, then \[\left|\int_{1/2}^1dt\int_{t/2}^tds\int_\R\int_\R\phi(s,z)h(t,y)D(t-s,y-z)dydz\right|\leq C\] \end{lemma} \begin{proof} Denote the above integral by $I$. For $l\in\Z$, let $\chi_l$ be the characteristic function of the interval $[l,l+1)$ and define $\phi_l(s,y)=\phi(s,y)\chi_l(y)$, and similarly $h_l$. Let $I_{lm}$ denote the integral $I$ with $\phi,h$ replaced by $\phi_l,h_m$. Then we have $I=\sum_{l,m\in\Z}I_{lm}$. Let $C_1,C_2,\cdots$ denote positive absolute constants. Now if $|l-m|=k\geq2$ then for $z\in[l,l+1)$ and $y\in[m,m+1)$ we have $|z-y|\geq k-1$ and then it follows easily that \[\left|D(t-s,y-z)\right|\leq C_1e^{-(k-2)^2/4}\] and hence $|I_{lm}|\leq C_2e^{-l^2/8}e^{-(k-2)^2/4}$ from which we deduce \[\sum_{|l-m|\geq2}|I_{lm}|\leq C_3\] Now suppose $|l-m|\leq1$. We use $\hat{\phi}_l(s,u)$ for the Fourier transform in the second variable, and similarly $\hat{h}_m$. We note that $\int\hat{\phi}_l(s,u)^2du=\int\phi_l(s,z)^2dz\leq C_4e^{-|l|^2/6}$ for $0\leq s\leq1$ and similarly $\int\hat{h}_m(t,u)^2du\leq1$.
We have \[I_{lm}=\int_{1/2}^1dt\int_{t/2}^tds\int_\R\hat{\phi}_l(s,u)\hat{h}_m(t,-u)e^{-(t-s)|u|^2/2}u^2du\] Applying $ab\leq\frac12(a^2c+b^2c^{-1})$ with $a=\hat{\phi}_l(s,u)$, $b=\hat{h}_m(t,-u)$ and $c=e^{l^2/12}$, we deduce that \[\begin{split}|I_{lm}|\leq &\int_{1/2}^1dt\int_{t/2}^tds\int_\R\hat{\phi}_l(s,u)^2 e^{l^2/12}u^2e^{-(t-s)u^2/2}du\\&+\int_{1/2}^1dt\int_{t/2}^tds\int_\R\hat{h}_m(t,-u)^2e^{-l^2/12}u^2e^{-(t-s)u^2/2}du\end{split}\] In the first integral we integrate first w.r.t. $t$ and obtain the bound const.$e^{-l^2/12}$ for the integral. We get a similar bound for the second integral (integrating w.r.t. $s$ first), and hence \[|I_{lm}|\leq C_5e^{-l^2/12}\] Summing over $l$ and $m$ such that $|l-m|\leq1$, we obtain \[\sum_{|l-m|\leq1}|I_{lm}|\leq C_6\] which completes the proof. \end{proof} \begin{cor}\label{bc1} There is an absolute constant $C$ such that if $g$ and $h$ are Borel functions on $[0,1]\times\R$ bounded by 1 everywhere then \[\left|\int_{1/2}^1dt\int_{t/2}^tds\int_{\R^2}g(s,z)E(s,z)h(t,y)D(t-s,y-z)dydz\right|\leq C\] and \[\left|\int_{1/2}^1dt\int_{t/2}^tds\int_{\R^2}g(s,z)B(s,z)h(t,y)D(t-s,y-z)dydz\right|\leq C\] \end{cor} \begin{proof} These follow easily from Lemma \ref{bl1}, the second using the easily verified fact that $|B(s,z)|\leq Cs^{-1/2}e^{-z^2/3s}$. \end{proof} We note that $\int_\R E(t,z)dz=1$, and we have the bounds \zz\label{b7}\int_\R|B(t,z)|dz\leq C_0t^{-1/2},\ \ \ \ \ \ \int_\R|D(t,z)|dz\leq C_0t^{-1}\end{equation} where $C_0$ is an absolute constant.
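For concreteness, the first bound in (\ref{b7}) can be checked exactly; it is a first absolute moment of the centred Gaussian density (the second bound follows in the same way from $D=E''$):

```latex
% With E(t,z)=(2\pi t)^{-1/2}e^{-z^2/2t} and B=E', we have B(t,z)=-(z/t)E(t,z), so
\[
\int_\R|B(t,z)|\,dz=\frac1t\int_\R|z|\,(2\pi t)^{-1/2}e^{-z^2/2t}\,dz
=\frac1t\sqrt{\frac{2t}{\pi}}=\sqrt{\frac2{\pi t}}\leq C_0\,t^{-1/2}.
\]
```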
\begin{lemma}\label{bl2} There is an absolute constant $C$ such that if $g$ and $h$ are Borel functions on $[0,1]\times\R$ bounded by 1 everywhere, and $r\geq0$ then \[\left|\int_{t_0}^1dt\int_{t_0}^tds\int_{\R^2}g(s,z)E(s-t_0,z)h(t,y)D(t-s,y-z)(1-t)^rdydz\right|\leq C(1+r)^{-1}(1-t_0)^{r+1}\] and \[\left|\int_{t_0}^1dt\int_{t_0}^tds\int_{\R^2}g(s,z)B(s-t_0,z)h(t,y)D(t-s,y-z)(1-t)^rdydz\right|\leq C(1+r)^{-1/2}(1-t_0)^{r+\frac12}\] \end{lemma} \begin{proof} Again, we let $C_1,\cdots$ be absolute constants. By using the change of variables $t'=(t-t_0)/(1-t_0)$, $s'=(s-t_0)/(1-t_0)$, $y'=y(1-t_0)^{-1/2}$, it suffices to prove these estimates when $t_0=0$. To do this, we start by scaling the first part of Corollary \ref{bc1}, and get \[\left|\int_{2^{-k-1}}^{2^{-k}}dt\int_{t/2}^tds\int_{\R^2}g(s,z)E(s,z)h(t,y)D(t-s,y-z)(1-t)^rdydz\right|\leq C_1(1-2^{-k-1})^r2^{-k}\] for $k=0,1,2\cdots$ and then by summing over $k$, we get \[\left|\int_0^1dt\int_{t/2}^tds\int_{\R^2}g(s,z)E(s,z)h(t,y)D(t-s,y-z)(1-t)^rdydz\right|\leq C_2(1+r)^{-1}\] Moreover, from the bounds (\ref{b7}) we have \[\begin{split}&\left|\int_0^1dt\int_0^{t/2}ds\int_{\R^2}g(s,z)E(s,z)h(t,y)D(t-s,y-z)(1-t)^rdydz\right|\leq\\ &\leq C_3\int_0^1dt\int_0^{t/2}(t-s)^{-1}(1-t)^rds\leq C_4(1+r)^{-1}\end{split}\] and combining these bounds gives the first result. Similarly, by scaling the second part of Corollary \ref{bc1}, we get \[\left|\int_{2^{-k-1}}^{2^{-k}}dt\int_{t/2}^tds\int_{\R^2}g(s,z)B(s,z)h(t,y)D(t-s,y-z)(1-t)^rdydz\right|\leq C_5(1-2^{-k-1})^r2^{-k/2}\] for $k=0,1,2\cdots$ and then by summing over $k$, we get \[\left|\int_0^1dt\int_{t/2}^tds\int_{\R^2}g(s,z)B(s,z)h(t,y)D(t-s,y-z)(1-t)^rdydz\right|\leq C_6(1+r)^{-1/2}\] Moreover, from the bounds (\ref{b7}) we have \[\begin{split}&\left|\int_0^1dt\int_0^{t/2}ds\int_{\R^2}g(s,z)B(s,z)h(t,y)D(t-s,y-z)(1-t)^rdydz\right|\leq\\ &\leq C_0^2\int_0^1dt\int_0^{t/2}s^{-1/2}(t-s)^{-1}(1-t)^rds\leq C_7(1+r)^{-1/2}\end{split}\] and combining these bounds gives the second result.
\end{proof} We can now complete the proof of Proposition \ref{mp} by obtaining the required bound for $I_S(t_0,z_0)$. Again we use $C_1,C_2,\cdots$ for absolute constants. We shall show that, for a suitable choice of $M$, we have for any allowed string $S$ of length $k$ \zz\label{b8}|I_S(t_0,z_0)|\leq\frac{M^k}{\Gamma(\frac k2+1)}(1-t_0)^{k/2} \end{equation} We shall prove (\ref{b8}) by induction on $k$, provided $M$ is chosen large enough. The case $k=0$ is immediate, so assume $k>0$ and that (\ref{b8}) holds for all allowed strings of length less than $k$. Then there are three cases: (1) $S=BS'$ where $S'$ has length $k-1$; (2) $S=EDS'$ where $S'$ has length $k-2$; (3) $S=EB^mDS'$ where $m\geq1$ and $S'$ has length $k-m-2$. In each case $S'$ is an allowed string. We consider the three cases separately.\vspace{.1cm}\\ {\bf Case 1.} In this case we have \[\begin{split}|I_S(t_0,z_0)|&=\left|\int_{t_0}^1dt_1\int_\R B(t_1-t_0,z_1-z_0)g(t_1,z_1)I_{S'}(t_1,z_1)dz_1\right|\\ &\leq\frac{M^{k-1}}{\Gamma(\frac{k+1}2)}\int_{t_0}^1 (1-t_1)^{(k-1)/2}dt_1\int_\R|B(t_1-t_0,z_1-z_0)|dz_1\\ &\leq\frac{C_1M^{k-1}}{\Gamma(\frac{k+1}2)}\int_{t_0}^1 (1-t_1)^{(k-1)/2}(t_1-t_0)^{-1/2}dt_1\\&=C_1\sqrt{\pi}M^{k-1} (1-t_0)^{k/2}/\Gamma\left(\frac k2+1\right)\end{split}\] where we have used the inductive hypothesis to bound $I_{S'}$, and then the bound (\ref{b7}). 
(\ref{b8}) then follows if $M$ is large enough.\vspace{.1cm}\\ {\bf Case 2.} Now we have \[I_S(t_0,z_0)=\int_{t_0}^1dt_1\int_{t_1}^1dt_2\int_{\R^2}g(t_1,z_1)g(t_2,z_2)E(t_1-t_0,z_1-z_0)D(t_2-t_1,z_2-z_1)I_{S'}(t_2,z_2)dz_1dz_2\] We set $h(t,z)=g(t,z)I_{S'}(t,z)(1-t)^{1-\frac k2}$ so that $\|h\|_\infty\leq M^{k-2}/\Gamma(k/2)$ by the inductive hypothesis, and then from the first part of Lemma \ref{bl2} we deduce that \[|I_S(t_0,z_0)|\leq\frac{C_2M^{k-2}(1-t_0)^{k/2}}{k\Gamma(k/2)}\] and (\ref{b8}) follows if $M$ is large enough.\vspace{.1cm}\\ {\bf Case 3.} In this case we have \[\begin{split}&I_S(t_0,z_0)=\int_{t_0<t_1<\cdots<t_{m+2}<1}dt_1\cdots dt_{m+2}\int_{\R^{m+2}}\left(\prod_{j=1}^{m+2}g(t_j,z_j)\right)E(t_1-t_0,z_1-z_0)\times\\&\times\prod_{j=2}^{m+1}B(t_j-t_{j-1},z_j-z_{j-1})D(t_{m+2}-t_{m+1},z_{m+2}-z_{m+1})I_{S'}(t_{m+2},z_{m+2})dz_1\cdots dz_{m+2}\end{split}\] Now let $h(t,z)=g(t,z)I_{S'}(t,z)(1-t)^{(2+m-k)/2}$, so that by the inductive hypothesis on $S'$ we have $\|h\|_\infty\leq M^{k-m-2}/\Gamma(\frac{k-m}2)$.
Then, writing \[\begin{split}\Omega(t,z)=&\int_t^1dt_{m+1}\int_{t_{m+1}}^1dt_{m+2}\int_{\R^2}g(t_{m+1},z_{m+1})h(t_{m+2},z_{m+2})(1-t_{m+2})^{(k-m-2)/2}\times\\&\times B(t_{m+1}-t,z_{m+1}-z)D(t_{m+2}-t_{m+1},z_{m+2}-z_{m+1})dz_{m+1}dz_{m+2}\end{split}\] we find from the second part of Lemma \ref{bl2} that \[|\Omega(t,z)|\leq C_3(k-m)^{-1/2}M^{k-m-2}(1-t)^{(k-m-1)/2}/\Gamma\left(\frac{k-m}2\right)\] Using this in \[\begin{split}I_S(t_0,z_0)=&\int_{t_0<t_1<\cdots<t_m<1}dt_1\cdots dt_m\int_{\R^m}\left(\prod_{j=1}^mg(t_j,z_j)\right)E(t_1-t_0,z_1-z_0)\times\\&\times\prod_{j=2}^mB(t_j-t_{j-1},z_j-z_{j-1})\Omega(t_m,z_m)dz_1\cdots dz_m\end{split}\] and using the bounds (\ref{b7}) we find \[\begin{split}|I_S(t_0,z_0)|&\leq C_4^{m+1}(k-m)^{-1/2}\frac{M^{k-m-2}}{\Gamma(\frac{k-m}2)}\int_{t_0<t_1<\cdots<t_m<1}(t_2-t_1)^{-1/2}\cdots\\&\cdots(t_m-t_{m-1})^{-1/2}(1-t_m)^{(k-m-1)/2}dt_1\cdots dt_m\\ &=C_4^{m+1}(k-m)^{-1/2}\frac{M^{k-m-2}\pi^{(m-1)/2}\Gamma(\frac{k-m+1}2)}{\Gamma(\frac{k-m}2)\Gamma(\frac k2+1)}(1-t_0)^{k/2}\end{split}\] from which again (\ref{b8}) follows, provided $M$ is large enough. Putting (\ref{b8}) with $t_0=0$, $z_0=0$ and $k=p$ in (\ref{b5}) completes the proof of Proposition \ref{mp}. \end{proof} \noindent {\em Proof of Proposition \ref{be}}. We first note that it suffices to prove it for $d=1$. To see this let $g,W,x$ be as in the statement of Proposition \ref{be}. By a rotation of coordinates we can suppose $x=(\alpha,0,\cdots,0)$. Then for fixed Brownian paths $W_2,\cdots,W_d$ we can define $h$ on $[0,1]\times\R$ by $h(t,u)=g(t,u,W_2(t),\cdots,W_d(t))$ and the $d=1$ case of the Proposition gives \[\IE\left(\int_0^1\{h(t,W_1(t)+\alpha)-h(t,W_1(t))\}dt\right)^p\leq C^p(p/2)!|\alpha|^p\] and then the required result follows by averaging over $W_2,\cdots,W_d$. So we suppose $d=1$. Given a Borel function $g$ on $[0,1]\times\R$ with $|g|\leq1$ we can find a sequence of compactly supported smooth functions $g_n$ with $|g_n|\leq1$, converging to $g$ a.e. on $[0,1]\times\R$.
Then $g_n(t,W(t))\rightarrow g(t,W(t))$ a.s. for a.a. $t\in[0,1]$, and the same for $g_n(t,W(t)+x)$, so by Fatou's lemma it suffices to prove the proposition for smooth $g$. But then we have $g(t,W(t)+x)-g(t,W(t))=\int_0^xg'(t,W(t)+u)du$ and we can apply Proposition \ref{mp} and Minkowski's inequality to conclude the proof of Proposition \ref{be}. What we in fact need is a scaled version of Proposition \ref{be} for subintervals of [0,1]. For $s\geq0$ we denote by ${\mathcal F}_s$ the $\sigma$-field generated by $\{W(\tau):0<\tau<s\}$. Then we can state the required result: \begin{cor}\label{bc2} Let $g$ be a Borel function on $[0,1]\times\R^d$ with $|g|\leq1$ everywhere. Let $0\leq s\leq a<b\leq1$ and let $\rho(x)=\int_a^b\{g(W(t)+x)-g(W(t))\}dt$. Then for $x\in\R^d$ and $\lambda>0$ we have \[\IP(|\rho(x)|\geq\lambda l^{1/2}|x|\ |{\mathcal F}_s)\leq2e^{-\lambda^2/(2C^2)}\] where $l=b-a$ and $C$ is the constant in Proposition \ref{be}. \end{cor} \begin{proof} First assume $s=a=0$, $b=1$. Let $\alpha=(2C^2|x|^2)^{-1}$. Then \[\IE(e^{\alpha\rho(x)^2})=\sum_{k=0}^\infty\frac{\alpha^k}{k!}\IE(\rho(x)^{2k})\leq\sum_{k=0}^\infty\alpha^kC^{2k}|x|^{2k}=2\] and so $\IP(|\rho(x)|\geq\lambda|x|)=\IP(e^{\alpha\rho(x)^2}\geq e^{\alpha\lambda^2|x|^2})\leq2e^{-\alpha\lambda^2|x|^2}=2e^{-\lambda^2/(2C^2)}$. For the general case, let $\tilde{W}(t)=l^{-1/2}\{W(a+tl)-W(a)\}$, so that $\tilde{W}$ is a standard Brownian motion, and let $h(x)=g(W(a)+x)$. Then $\rho(x)=l^{1/2}\int_0^1\{h(\tilde{W}(t)+x)-h(\tilde{W}(t))\}dt$ and it follows from the first part that $\IP(|\rho(x)|\geq\lambda l^{1/2}|x|\ |{\mathcal F}_a)\leq2e^{-\lambda^2/(2C^2)}$. The required result then follows by taking conditional expectation w.r.t. ${\mathcal F}_s$. \end{proof} We note that the unconditional bound \[\IP(|\rho(x)|\geq\lambda l^{1/2}|x|)\leq2e^{-\lambda^2/(2C^2)}\] follows by taking $s=0$.
Also in the same way we obtain, for any even $p\in\mathbb{N}$, \zz\label{bp}\IE(\rho(x)^p|{\mathcal F}_s)\leq C^pl^{p/2}(p/2)!|x|^p\z The following lemma will also be needed: \begin{lemma}\label{bl3} If $p>1+\frac d2$ there is a constant $c(p,d)$ such that if $g\in L^p([0,1]\times\R^d)$ then \[\IE\left(\int_0^1g(t,W(t))dt\right)^2\leq c(p,d)\|g\|_p^2\] \end{lemma} \begin{proof}We have \[\IE\left(\int_0^1g(t,W(t))dt\right)^2=2\int_0^1dt\int_0^tds\int_{\R^{2d}}g(s,\zeta)g(t,z)E(s,\zeta)E(t-s,z-\zeta)d\zeta dz\] Now, if $q=\frac p{p-1}$ then $\int E(t,z)^qdz=O(t^{-(q-1)d/2})$ and $p>1+\frac d2$ implies $(q-1)d/2<1$, so the result follows from H\"{o}lder's inequality. \end{proof} \section{Proof of Theorem}\label{pf} We now apply Corollary \ref{bc2} and Lemma \ref{bl3} to the proof of the theorem. First we give a brief sketch of the proof.\vspace{.2cm}\\ {\bf Outline of proof.} The proof is motivated by the elementary case when $f$ is Lipschitz in the second variable. In this case, if $I=[a,b]$ is a subinterval of [0,1] and $u$ is a solution of (\ref{eq4}) satisfying \zz\label{eq5}|u(t)|\leq\alpha,\ \ \ \ \ t\in I\end{equation} and $\beta=|u(a)|$, then we deduce from (\ref{eq5}) that $|u(t)|\leq\alpha'=\beta+L|I|\alpha$ for $t\in I$, where $L$ is the Lipschitz constant, i.e. (\ref{eq5}) holds with $\alpha$ replaced by $\alpha'$. If $L|I|<1$ it follows that (\ref{eq5}) holds with $\alpha=(1-L|I|)^{-1}\beta$, and of course if $\beta=0$ this gives $u=0$ on $I$. We try to copy this argument using Corollary \ref{bc2} as a substitute for a Lipschitz condition. There are two difficulties: first, Corollary \ref{bc2} is a statement about probabilities and we need an `almost sure' version, and in doing so we lose something; second, in Corollary \ref{bc2}, $x$ is a constant, whereas we are dealing with a function $u$ depending on $t$.
The way round the second problem is to approximate $u$ by a sequence of step functions $u_l$ and then use \zz\label{aa}\begin{split}&\int_I\{f(W(t)+u(t))-f(W(t))\}dt=\lim_{l\rightarrow\infty} \int_I\{f(W(t)+u_l(t))-f(W(t))\}dt\\ &=\int_I\{f(W(t)+u_n(t))-f(W(t))\}dt+\sum_{l=n}^\infty \int_I\{f(W(t)+u_{l+1}(t))-f(W(t)+u_l(t))\}dt\end{split}\z where $u_n$ is constant on the interval $I$, and then to apply the `almost sure' form of the proposition to each interval of constancy of the terms on the right. Again, we lose something in doing this, but, as it turns out, we still have good enough estimates to prove the theorem. In fact, we need two versions of the `almost sure' (nearly) Lipschitz condition, the first to estimate $\int\{f(W(t)+u_n(t))-f(W(t))\}dt$ and the second to estimate $\int\{f(W(t)+u_{l+1}(t))-f(W(t)+u_l(t))\}dt$. We also need a third estimate, for sums of integrals of the second type. The two versions of the `almost sure' nearly-Lipschitz condition are conditions (\ref{eq7}) and (\ref{eq9}) below, and the third estimate is (\ref{eqx}). In Lemmas \ref{l1}, \ref{l3}, \ref{nla} and \ref{nlb} it is shown that these conditions indeed hold almost surely. Lemmas \ref{l4} and \ref{l5} establish a technical condition (\ref{eq8}) needed to justify the passage to the limit as $l\rightarrow \infty$ (which is not trivial when $f$ is not continuous). With these preliminaries the above programme is carried out in Lemma \ref{ll}. The analogue of (\ref{eq5}) above is (\ref{eq10}). We no longer immediately get $\alpha=0$ when $\beta=0$, but we get a good enough bound to prove the uniqueness of the solution to (\ref{eq1}), for any $W$ satisfying (\ref{eq7},\ref{eq9},\ref{eq8},\ref{eqx}).\vspace{.2cm} We now turn to the details.\vspace{.2cm} For any $n\geq0$ we can divide [0,1] into $2^n$ intervals $I_{nk}=[k2^{-n},(k+1)2^{-n}]$, $k=0,1,2,\cdots,2^n-1$. 
We shall also consider dyadic decompositions of $\R^d$, and say $x\in\R^d$ is a {\em dyadic} point if each component of $x$ is rational with denominator a power of 2. Let $Q=\{x\in\R^d:\|x\|\leq1\}$, where $\|x\|$ denotes the supremum norm $\max_{1\leq j\leq d}|x_j|$. We also introduce the notation \[\sigma_{nk}(x)=\int_{I_{nk}}\{g(W(t)+x)-g(W(t))\}dt\]and \[\rho_{nk}(x,y)=\sigma_{nk}(x)-\sigma_{nk}(y)=\int_{I_{nk}}\{g(W(t)+x)-g(W(t)+y)\}dt\]Then we can state: \begin{lemma}\label{l1}Let $g$ be a Borel function on $[0,1]\times\R^d$ with $|g(t,z)|\leq1$ everywhere. Then with probability 1 we can find $C>0$ so that \zz\label{eq7}|\rho_{nk}(x,y)|\leq C\left\{n^{1/2}+\left(\log^+\frac1{|x-y|}\right)^{1/2}\right\}2^{-n/2}|x-y|\z for all dyadic $x,y\in Q$ and all choices of integers $n,k$ with $n>0$ and $0\leq k\leq2^n-1$. \end{lemma} \begin{proof} Let us say that two dyadic points $x,y\in\R^d$ are {\em dyadic neighbours} if for some integer $m\geq0$ we have $\|x-y\|=2^{-m}$ and $2^mx,2^my\in\Z^d$. Then using Corollary \ref{bc2} we have, for any such pair $x,y\in Q$ and any $n,k$ that \[\IP\left(|\rho_{nk}(x,y)|\geq\lambda(n^{1/2}+m^{1/2})2^{-m-n/2}\right)\leq C_1e^{-C_2\lambda^2(n+m)}\] and by summing over all possible choices of $n,k,m,x,y$ we find that the probability that \[|\rho_{nk}(x,y)|\geq\lambda(n^{1/2}+m^{1/2})2^{-m-n/2}\] for some choice of $I_{nk}$ and dyadic neighbours $x,y\in Q$ is not more than\\$\sum_{n=1}^\infty\sum_{m=0}^\infty 2^n3^d2^{d(m+3)}C_1e^{-C_2\lambda^2(1+m+n)}$ which approaches 0 as $\lambda\rightarrow\infty$. It follows that, given $\E>0$, we can find $\lambda(\E)$ such that, with probability $>1-\E$, we have \[|\rho_{nk}(x,y)|<\lambda(1+n^{1/2}+m^{1/2})2^{-m-n/2}\] for all choices of $n,k$ and dyadic neighbours in $Q$. Next let $x,y$ be any two dyadic points in $Q$, and let $m$ be the largest non-negative integer such that $\|x-y\|<2^{-m}$.
For $r\geq m$, choose $x_r$ to minimise $\|x-x_r\|$ subject to $2^rx_r\in\Z^d$, and $y_r$ similarly. Then $\|x_m-y_m\|=2^{-m}$ or 0, and for $r\geq m$, $\|x_r-x_{r+1}\|=2^{-r-1}$ or 0. So $x_m,y_m$ are dyadic neighbours or equal, and the same applies to $x_r,x_{r+1}$ and $y_r,y_{r+1}$. Then we have \[\rho_{nk}(x,y)=\rho_{nk}(x_m,y_m)+\sum_{r=m}^\infty\rho_{nk}(x_{r+1},x_r)+\sum_{r=m}^\infty\rho_{nk}(y_r,y_{r+1})\] (note that the sums are actually finite, since $x,y$ are dyadic, so that $x=x_r$ and $y=y_r$ for large $r$). Then applying the above bounds for the case of dyadic neighbours to each term, we get the desired result. \end{proof} Next we prove a similar estimate for $\sigma_{nk}$, which is analogous to the Law of the Iterated Logarithm for Brownian motion. \begin{lemma}\label{l3} With probability 1 there is a constant $C>0$ such that for all $n\in\mathbb{N}$, $k\in\{0,1,\cdots,2^n-1\}$ and dyadic $x\in Q$ we have \zz\label{eq9}|\sigma_{nk}(x)|\leq Cn^{1/2}2^{-n/2}(|x|+2^{-2^n})\z \end{lemma} \begin{proof} For any integer $r\geq0$ we let $Q_r=\{x\in\R^d:\|x\|\leq2^{-r}\}$. Then if $m\geq r$ the number of pairs $(x,y)$ of dyadic neighbours in $Q_r$ with $\|x-y\|=2^{-m}$ is $\leq(9\times2^{m-r})^d$ and for each such pair we have \[\IP(|\rho_{nk}(x,y)|\geq\lambda(n^{1/2}+\sqrt{m-r})2^{-m-n/2})\leq C_1e^{-C_2\lambda^2(n+m-r)}\leq C_12^{2d(r-m)}e^{-C_2\lambda^2}e^{-n}\] for $\lambda$ large. By summing over $n$, $1\leq r\leq2^n$ and $m\geq r$ and all pairs $(x,y)$, we deduce that, with probability $\geq 1-C_3e^{-C_4\lambda^2}$, we have $|\rho_{nk}(x,y)|\leq\lambda(n^{1/2}+\sqrt{m-r})2^{-m-n/2}$ for $n\in\mathbb{N}$, $1\leq r\leq2^n$ and $m\geq r$ and all pairs $(x,y)$ of dyadic neighbours in $Q_r$ with $\|x-y\|=2^{-m}$, and then, by an argument similar to Lemma \ref{l1}, we get for all $n$ and $1\leq r\leq2^n$ that $|\sigma_{nk}(x)|\leq C_5\lambda n^{1/2}2^{-r-n/2}$ for all dyadic $x\in Q_r$. The required result follows.
\end{proof} The next two lemmas are used to justify the passage to the limit $l\rightarrow\infty$ in (\ref{aa}). Let $\Phi$ denote the set of $Q$-valued functions $u$ on [0,1] satisfying $|u(s)-u(t)|\leq|s-t|$, $s,t\in[0,1]$, and let $\Phi_n$ denote the set of $Q$-valued functions on [0,1] which are constant on each $I_{nk}$ and satisfy $|u(k2^{-n})-u(l2^{-n})|\leq|k-l|2^{-n}$. Then let $\Phi^*=\Phi\cup\cup_n\Phi_n$. \begin{lemma}\label{l4} Given $\E>0$, we can find $\eta>0$ such that if $U\subset(0,1)\times\R^d$ is open with $m(U)<\eta$, then, with probability $\geq1-\E$, we have $\int_0^1\chi_U(t,W(t)+u(t))dt\leq\E$ for all $u\in\Phi^*$. \end{lemma} \begin{proof} Fix $\E>0$. By Lemma \ref{l1} we can find $K$ such that, for any Borel function $\phi$ on $[0,1]\times\R^d$ with $|\phi|\leq1$ everywhere we have with probability $>1-\E/2$ that \zz\label{eq55}\int_{I_{nk}}\{\phi(W(t)+x)-\phi(W(t)+y)\}dt\leq Kn^{1/2}2^{-3n/2}\z for all pairs of dyadic points $x,y$ in $Q$ and all choices of $n,k$. Then we choose $m$ such that $4K\sum_{n=m}^\infty n^{1/2}2^{-n/2}<\E$. Let $\Omega$ be a finite set of dyadic points of $Q$ such that every $x\in Q$ is within distance $2^{-m}$ of some point of $\Omega$. Provided $\delta$ is chosen small enough, any bounded Borel function $\phi$ on $[0,1]\times\R^d$ with $\|\phi\|_{L^p([0,1]\times\R^d)}<\delta$ will satisfy \[\IP\left(\left|\int_{I_{mk}}\phi(t,W(t)+x)dt\right|\geq2^{-m}\E/4\right)<\frac\E{2^{m+1}\#(\Omega)}\] for each $k,x$. Then the probability that \zz\label{eq6}\left|\int_{I_{mk}}\phi(t,W(t)+x)dt\right|<2^{-m}\E/4\ \ {\rm for}\ {\rm every}\ \ k\in\{0,1,\cdots,2^m-1\},\ \ x\in\Omega\end{equation} is at least $1-\E/2$. Now let $\eta=\delta^p$, and suppose $U$ is open with $m(U)<\eta$. Let $(\phi_r)$ be an increasing sequence of continuous non-negative functions on $[0,1]\times\R^d$, converging pointwise to $\chi_U$. Note that then $\|\phi_r\|_{L^p([0,1]\times\R^d)}<\delta$.
For each $r$ define events $A_r$: (\ref{eq6}) holds for $\phi=\phi_r$ and $B_r$: (\ref{eq55}) holds for $\phi=\phi_r$. Then $\IP(A_r)\geq1-\E/2$ and $\IP(B_r)\geq1-\E/2$. Also, when $A_r$ and $B_r$ both hold, we have $\int_{I_{mk}}\phi_r(t,W(t)+x)dt<2^{-m}\E/2$ for all $x$ such that $|x|\leq2$. Now let $u\in\Phi^*$. For each $n\geq m$ choose $u_n\in\Phi_n$ taking a constant dyadic value within $2^{-n}$ of $u(k2^{-n})$ on $I_{nk}$ for $k=0,1,\cdots,2^n-1$. Now if $A_r$ and $B_r$ hold then $\int_0^1\phi_r(t,W(t)+u_m(t))dt\leq\E/2$ and \[\left|\int_0^1\{\phi_r(t,W(t)+u_n(t))-\phi_r(t,W(t)+u_{n+1}(t))\}dt \right|\leq Kn^{1/2}2^{-n/2}\] from which it follows that $\int_0^1\phi_r(t,W(t)+u(t))dt<\E$. So if we define the event $Q_r:$ $\int_0^1\phi_r(t,W(t)+u(t))dt\leq\E$ for all $u\in\Phi^*$, then we have $\IP(Q_r)\geq1-\E$. But since $\phi_{r+1}\geq\phi_r$ we have $Q_{r+1}\subseteq Q_r$, and it follows that with probability $\geq1-\E$ we have $Q_r$ for all $r$, from which the result follows, since $\int_0^1\phi_r(t,W(t)+u(t))dt\rightarrow\int_0^1\chi_U(t,W(t)+u(t))dt$ by the bounded convergence theorem. \end{proof} \begin{lemma}\label{l5} If $g$ is a bounded Borel function on $[0,1]\times\R^d$, then, with probability 1, whenever $(u_n)$ is a sequence in $\Phi^*$ converging pointwise to a limit $u\in\Phi^*$, we have \zz\label{eq8}\int_0^1g(t,W(t)+u_n(t))dt\rightarrow\int_0^1g(t,W(t)+u(t))dt\z \end{lemma} \begin{proof} Given $\E>0$, let $\eta$ be as in Lemma \ref{l4}, and let $h$ be a bounded continuous function on $[0,1]\times\R^d$ such that $g=h$ outside an open set $U$ with $|U|<\eta$. With probability $\geq1-\E$, the conclusion of Lemma \ref{l4} holds, which means that for any convergent sequence $(u_n)$ in $\Phi^*$ we have $\int_0^1\chi_U(t,W(t)+u_n(t))dt \leq\E$, and the same for the limit $u$, so, if $M$ is an upper bound for $|g-h|$, we have the bound $\left|\int\{g(t,W(t)+u_n(t))-h(t,W(t)+u_n(t))\}dt \right|\leq M\E$, and the same for $u$ in place of $u_n$.
Also, since $h$ is continuous, $\int_0^1h(t,W(t)+u_n(t))dt\rightarrow \int_0^1h(t,W(t)+u(t))dt$. It follows that, for $n$ large enough, $\left|\int_0^1g(t,W(t)+u_n(t))dt-\int_0^1g(t,W(t)+u(t))dt\right|<(2M+1)\E$, and, since this holds for any $\E>0$, the result follows. \end{proof} Note that Lemma \ref{l5} implies that $\rho_{nk}(x,y)$ and $\sigma_{nk}(x)$ are continuous, so that the estimates of Lemmas \ref{l1} and \ref{l3} will hold for all $x,y\in Q$. We also need a stronger bound for sums of $\rho_{nk}$ terms than that given by the bounds for individual terms in Lemma \ref{l1}, and the next two lemmas provide this. They are motivated by the idea that any solution of (\ref{eq4}) should satisfy the approximate equation $u((k+1)2^{-n})\approx u(k2^{-n})+\sigma_{nk}(u(k2^{-n}))$ which suggests that on a short time interval a solution can be approximated by an `Euler scheme' $x_{k+1}=x_k+\sigma_{nk}(x_k)$. \begin{lemma}\label{nla} Given even $p\geq2$ we can find $C>0$ such that, for any choice of $n,r\in\mathbb{N}$ with $r\leq2^{n/2}$, $k\in\{0,1,\cdots,2^n-r\}$ and $x_0\in Q$, if we define $x_1,\cdots,x_r$ by the recurrence relation $x_{q+1}=x_q+\sigma_{n,k+q}(x_q)$, then \[\IP\left(\sum_{q=1}^r|\rho_{n,k+q}(x_{q-1},x_q)|\geq2^{-n}\left\{C\sum_{q=0}^{r-1}|x_q|+\lambda r^{1/2}|x_0|\right\}\right)\leq C\lambda^{-p}\] for any $\lambda>0$. \end{lemma} \begin{proof} We use $C_1,\cdots$ to denote constants which depend only on $d$ and $p$. We write ${\mathcal F}_j$ for ${\mathcal F} _{(k+j)2^{-n}}$. Note first that $x_q$ is ${\mathcal F}_q$ measurable and $\IE(|\sigma_{n,k+q}(x_q)|^p|{\mathcal F}_q)\leq C_12^{-np/2}|x_q|^p$ by (\ref{bp}). Hence $\IE|\sigma_{n,k+q}(x_q)|^p\leq C_12^{-np/2}\IE|x_q|^p$. It follows that $\IE|x_{q+1}|^p\leq(1+C^{1/p}_12^{-n/2})^p\IE|x_q|^p$ and so \zz\label{nq1}\IE|x_q|^p\leq(1+C^{1/p}_12^{-n/2})^{pq}|x_0|^p\leq C_2|x_0|^p\z for $1\leq q\leq r$, since $q\leq r\leq2^{n/2}$. Now let $Y_q=|\rho_{n,k+q}(x_{q-1},x_q)|$, $Z_q=\IE(Y_q|{\mathcal F}_q)$ and $X_q=Y_q-Z_q$.
Then $X_q$ is ${\mathcal F}_{q+1}$ measurable and $\IE(X_q|{\mathcal F}_q)=0$ so by Burkholder's inequality \[\begin{split}\IE|\sum_{q=1}^rX_q|^p&\leq C_3\IE(\sum X_q^2)^{p/2}\leq C_3r^{p/2-1}\IE\sum|X_q|^p\leq C_4r^{p/2-1}\sum\IE(Y_q^p)\\ &\leq C_5r^{p/2-1}2^{-np/2}\sum\IE|x_q-x_{q-1}|^p=C_5r^{p/2-1}2^{-np/2}\sum\IE|\sigma_{n,k+q-1}(x_{q-1})|^p\\ &\leq C_6r^{p/2-1}2^{-np}\sum_{q=1}^r\IE|x_{q-1}|^p\end{split}\] from which we deduce using (\ref{nq1}) that \zz\label{nq2}\IE|\sum_{q=1}^rX_q|^p\leq C_7r^{p/2}2^{-np}|x_0|^p\z Also let $V_q=\IE(Z_q|{\mathcal F}_{q-1})$ and $W_q=Z_q-V_q$. Noting that $Z_q\leq C_82^{-n/2}|\sigma_{n,k+q-1}(x_{q-1})|$ we get in a similar way that \zz\label{nq3}\IE|\sum W_q|^p\leq C_9r^{p/2}2^{-np}|x_0|^p\z We also have \zz\label{nq4}|V_q|\leq C_{10}2^{-n}|x_{q-1}|\z Now $Y_q=X_q+W_q+V_q$. By (\ref{nq2}) and (\ref{nq3}) we have $\IP(|\sum_{q=1}^r(X_q+W_q)|>2^{-n}\lambda r^{1/2}|x_0|)\leq C_{11}\lambda^{-p}$ and the result then follows by (\ref{nq4}). \end{proof} \begin{lemma}\label{nlb} With probability 1 there exists $C>0$ such that for any $n,r\in{\mathbb N}$ with $r\leq2^{n/4}$, any $k\in\{0,1,\cdots,2^n-r\}$ and any $y_0,\cdots,y_r\in Q$ we have \zz\label{eqx}\sum_{q=1}^r|\rho_{n,k+q}(y_{q-1},y_q)|\leq C\left(2^{-3n/4}|y_0|+2^{-n/4}\sum_{q=0}^{r-1}|\gamma_q|+2^{-2^{n/2}}\right)\z where $\gamma_q=y_{q+1}-y_q-\sigma_{n,k+q}(y_q)$. \end{lemma} \begin{proof} Let $\delta_n=2^{-2^{n/2}}$. By Lemma \ref{l1}, with probability 1 there exists $C>0$ such that, for any $n,k\geq0$ and any $x,y\in Q$, we have \zz\label{nq5}\rho_{nk}(x,y)\leq C2^{-n/4}|x-y|+\delta_n\z As before, let $Q_s=\{x\in\R^d:\|x\|\leq2^{-s}\}$. Then, for integers $s$ with $0\leq s<2^{n/2}$, let $\Omega_{ns}$ be a set of not more than $(2^n d^{1/2})^d$ points of $Q_s$ such that every $x\in Q_s$ is within distance $2^{-s-n}$ of a point of $\Omega_{ns}$ and let $\Omega_n=\bigcup_{0\leq s<2^{n/2}}\Omega_{ns}$. Let $p=8(4+d)$.
Then by Lemma \ref{nla} there is $C_1>0$ such that the probability that \[\sum_{q=1}^r|\rho_{n,k+q}(x_{q-1},x_q)|\geq2^{-n}\left(C_1\sum_{q=0}^{r-1}|x_q|+\lambda2^{n/8}r^{1/2}|x_0|\right)\] for some $n,r,k$ as in the statement and some $x_0\in\Omega_n$, is bounded above by $C_1\sum_{n=0}^\infty\lambda^{-p}2^{n(3+d)}2^{-pn/8}$ which approaches 0 as $\lambda\rightarrow\infty$. Hence with probability 1 there exists $C>0$ such that \zz\label{nq6}\sum_{q=1}^r|\rho_{n,k+q}(x_{q-1},x_q)|<C2^{-n}\left(\sum_{q=0}^{r-1}|x_q|+2^{n/8}r^{1/2}|x_0|\right)\z for all $n,k,r$ as above and $x_0\in\Omega_n$. We now suppose, as we may with probability 1, that (\ref{nq5}) and (\ref{nq6}) hold (with the same $C$). We fix $n,k,r,y_0,\cdots,y_r,\gamma_0,\cdots,\gamma_{r-1}$ as in the statement of the lemma. Take the smallest $s$ such that $y_0\in Q_s$, noting that then $2^{-s-1}\leq|y_0|\leq d^{1/2}2^{-s}$. Then we find $x_0\in\Omega_{ns}$ with $|x_0-y_0|<2^{-s-n}\leq2^{1-n}|y_0|$ and define $x_1,\cdots,x_r$ by the recurrence relation $x_{q+1}=x_q+\sigma_{n,k+q}(x_q)$. Then by (\ref{nq6}) \[\sum_{q=1}^r|\rho_{n,k+q}(x_{q-1},x_q)|<C2^{-n}\left(\sum_{q=0}^{r-1}|x_q|+2^{n/4}|x_0|\right)\] Using (\ref{nq5}) we have $|x_{q+1}|=|x_q+\sigma_{n,k+q}(x_q)|\leq(1+C2^{-n/4})|x_q|+\delta_n$ so $|x_q|\leq C_1(|x_0|+r\delta_n)$ and \zz\label{nq7}\sum_{q=1}^r|\rho_{n,k+q}(x_{q-1},x_q)|<C_22^{-3n/4}(|x_0|+2^{n/4}\delta_n)\z Now let $u_q=x_q-y_q$. Then $|u_{q+1}-u_q|\leq|\rho_{n,k+q}(x_q,y_q)|+|\gamma_q|$ so \[|u_{q+1}|\leq|u_q|(1+C2^{-n/4})+|\gamma_q|+\delta_n\] and since $|u_0|\leq2^{1-n}|y_0|$ we deduce that $|u_q|\leq C_3(2^{-n}|y_0|+r\delta_n+\sum_{i=0}^{r-1}|\gamma_i|)$ and so \zz\label{nq8}|\rho_{n,k+q}(x_q,y_q)|\leq C_42^{-n/4}\left(2^{-n}|y_0|+r\delta_n+\sum_{i=0}^{r-1}|\gamma_i|\right)\z and we have the same bound for $|\rho_{n,k+q}(x_{q-1},y_{q-1})|$.
Now \[\rho_{n,k+q}(y_{q-1},y_q)=\rho_{n,k+q}(x_{q-1},x_q)+\rho_{n,k+q}(y_{q-1},x_{q-1})+\rho_{n,k+q}(x_q,y_q)\] and then using (\ref{nq7}), (\ref{nq8}) and the fact that $|x_0-y_0|\leq2^{1-n}|y_0|$ we deduce that \[\sum_{q=1}^r|\rho_{n,k+q}(y_{q-1},y_q)|\leq C_5\left(2^{-3n/4}|y_0|+2^{-n/4}\sum_{q=0}^{r-1}|\gamma_q|+2^{-n/2}\delta_n\right)\] from which the result follows. \end{proof} We now proceed to complete the proof of the theorem. From now on we take $g=f$ in the definition of $\sigma_{nk}$ and $\rho_{nk}$. We consider a Brownian path $W$ satisfying the conclusions of Lemmas \ref{l1}, \ref{l3}, \ref{nlb} and \ref{l5} for some $C>0$. We shall show that for such a Brownian path the only solution $u$ of (\ref{eq4}) in $\Phi$ is $u=0$. This will follow from the following: \begin{lemma}\label{ll} Suppose $W$ satisfies the conclusions of Lemmas \ref{l1}, \ref{l3}, \ref{nlb} and \ref{l5} for some $C>0$. Then there are positive constants $K$ and $m_0$ such that, for all integers $m>m_0$, if $u$ is a solution of (\ref{eq4}) in $\Phi$ and for some $j\in\{0,1,\cdots,2^m-1\}$ and some $\beta$ with $2^{-2^{3m/4}}\leq\beta\leq2^{-2^{2m/3}}$ we have $|u(j2^{-m})|\leq\beta$, then \[|u((j+1)2^{-m})|\leq\beta\{1+K2^{-m}\log(1/\beta)\}\] \end{lemma} \begin{proof} We use $C_1,C_2,\cdots$ for positive constants which depend only on the constant $C$ and the dimension $d$. Fix $m$, $j$ and $\beta$ as in the statement, and suppose $|u(j2^{-m})|\leq\beta$. Let $N$ be the integer part of $4\log_2(1/\beta)$. Suppose $u\in\Phi$ satisfies (\ref{eq4}), and let $u_n$ be the step function which takes the constant value $u(k2^{-n})$ on the interval $I_{nk}$, for $k=0,1,\cdots,2^n-1$. Let $\alpha$ be the smallest nonnegative number such that \zz\label{eq10}\sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}|u((k+1)2^{-n})-u(k2^{-n})| \leq\alpha2^{-m}(n^{1/2}2^{n/2}+N)\end{equation} for all $n$ with $m\leq n\leq N$.
For $n\geq m$ let \zz\label{ps}\psi_n=\sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}|u(k2^{-n})|\end{equation} Then by (\ref{eq10}) \[\psi_n\leq2\psi_{n-1}+\alpha2^{-m}(n^{1/2}2^{n/2}+N)\] for $n>m$, and since $\psi_m\leq\beta$ it follows that \zz\label{eq11}\psi_n\leq2^{n-m}\beta+\sum_{l=m+1}^n\alpha2^{n-l-m}(l^{1/2}2^{l/2}+N)\leq C_12^{n-m}(\beta+\alpha2^{-m}N)\end{equation} for all $n$ with $m\leq n\leq N$, where we have used the fact that $m^{1/2}2^{m/2}$ is bounded by a constant multiple of $N$. Now fix $n\geq m$. Then for $k=j2^{n-m},\cdots,(j+1)2^{n-m}-1$ we have, using (\ref{eq8}) \[\begin{split}&u((k+1)2^{-n})-u(k2^{-n})=\int_{I_{nk}}\{f(W(t)+u(t))- f(W(t))\}dt\\&=\int_{I_{nk}}\{f(W(t)+u_n(t))-f(W(t))\}dt+\sum_{l=n}^\infty \int_{I_{nk}}\{f(W(t)+u_{l+1}(t))-f(W(t)+u_l(t))\}dt\end{split}\] which we can write as \zz\label{zu}u((k+1)2^{-n})-u(k2^{-n})=\sigma_{nk}(u(k2^{-n}))+\sum_{l=n}^\infty\sum_{r=k2^{l-n}}^{(k+1)2^{l-n}-1} \rho_{l+1,2r+1}(u(2^{-l-1}(2r+1)),u(2^{-l}r))\z from which we deduce \zz\label{eq12}\sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}|u((k+1)2^{-n})-u(k2^{-n})|\leq \sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}|\sigma_{nk}(u(k2^{-n}))|+\sum_{l=n} ^\infty\Omega_l\end{equation} where $\Omega_l=\sum_{r=j2^{l-m}}^{(j+1)2^{l-m}-1}|\rho_{l+1,2r+1}(u(2^{-l-1}(2r+1)),u(2^{-l}r))|$. We now proceed to estimate the two sums on the right of (\ref{eq12}), starting with the easier $\sigma_{nk}$ term. Using Lemma \ref{l3} and the fact that $N<2^m$, we have $|\sigma_{nk}(x)|\leq C_2n^{1/2}2^{-n/2}(2^{-N}+|x|)$ and so \zz\label{b1}\begin{split}\sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}|\sigma_{nk}(u(k2^{-n}))|&\leq C_2\sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}n^{1/2}2^{-n/2}(2^{-N}+|u(k2^{-n})|)\\ &\leq C_3n^{1/2}2^{n/2-m}(\beta+2^{-m}N\alpha+2^{-N})\end{split}\z using (\ref{eq11}). Next we bound $\sum\Omega_l$, which we do in two stages. We first obtain a relatively crude bound by applying (\ref{eq7}) to each term, and then obtain an improved bound by applying the crude bound together with Lemma \ref{nlb}.
To start with the crude bound, from (\ref{eq7}) we have $|\rho_{nk}(x,y)|\leq C_32^{-n/2}N^{1/2}(2^{-N}+|x-y|)$ and using this together with (\ref{eq10}) gives \zz\label{sl}\Omega_l\leq C_42^{-l/2}N^{1/2}\{2^{-N}2^{l-m}+\alpha2^{-m}(l^{1/2}2^{l/2}+N)\}\z and so \zz\label{b2}\sum_{l=m}^N\Omega_l\leq C_5(N^{1/2}2^{-m-N/2}+\alpha2^{-m}N^2)\z For $l>N$ we use $|u(t)-u(t')|\leq|t-t'|$ and (\ref{eq7}) to obtain \zz\label{inf}\sum_{l=N+1}^\infty\Omega_l\leq\sum_{l=N+1}^\infty C_62^{l-m}l^{1/2}2^{-3l/2}\leq C_7N^{1/2}2^{-m-N/2}\z and combining this with (\ref{b2}) we obtain \zz\label{b3}\sum_{l=m}^\infty\Omega_l\leq C_8(N^{1/2}2^{-m-N/2}+\alpha2^{-m}N^2)\z The second stage is to improve the estimate (\ref{b3}) by applying Lemma \ref{nlb} to obtain a better estimate for $\Omega_n$ for larger $n$; we use (\ref{b3}) to bound the $\gamma$ term in Lemma \ref{nlb}. Let $N^{1/6}\leq n\leq N$. We define $\gamma_{nk}=u((k+1)2^{-n})-u(k2^{-n})-\sigma_{nk}(u(k2^{-n}))$, noting that (\ref{zu}) implies that \zz\label{eqg}\sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}|\gamma_{nk}|\leq\sum_{l=n}^\infty\Omega_l\leq C_8(N^{1/2}2^{-m-N/2}+\alpha2^{-m}N^2)\z Also we define \[\Lambda_n=\sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}|\rho_{n,k+1}(u(2^{-n}k),u(2^{-n}(k+1)))|\] so that $\Omega_n\leq\Lambda_{n+1}$. Let $r=\lfloor2^{n/4}\rfloor$. In order to apply Lemma \ref{nlb} to estimate $\Lambda_n$, we will split the sum into $r$-sized pieces. First we find $i\in\{0,1,\cdots,r-1\}$ such that, writing $s=\lfloor r^{-1}(2^{n-m}-i)\rfloor$, we have $\sum_{t=0}^s|u(j2^{-m}+(i+tr)2^{-n})|\leq r^{-1}\psi_n$. Now we fix for the moment $t\in\{0,1,\cdots,s\}$ and apply Lemma \ref{nlb} with $y_q=u((k+q)2^{-n})$ where $k=j2^{n-m}+i+tr$.
We obtain \[\sum_{q=1}^r|\rho_{n,k+q}(y_{q-1},y_q)|\leq C_9\left(2^{-3n/4}|u(k2^{-n})|+2^{-n/4}\sum_{q=0}^{r-1}|\gamma_{n,k+q}| +2^{-2^{n/2}}\right)\] Summing over $t$ then gives \[\begin{split}\sum_{k=j2^{n-m}+i}^{(j+1)2^{n-m}-1}|\rho_{n,k+1}(u(2^{-n}k),u(2^{-n}(k+1)))|\leq&C_92^{-3n/4}\sum_{t=0}^s |u(j2^{-m}+(i+tr)2^{-n})|\\&+C_9\left(2^{-n/4}\sum_{k=j2^{n-m}+i}^{(j+1)2^{n-m}-1}|\gamma_{n,k}|+2^{n-2^{n/2}}\right)\end{split}\] Also \[\sum_{k=j2^{n-m}}^{j2^{n-m}+i-1}|\rho_{n,k+1}(u(2^{-n}k),u(2^{-n}(k+1)))|\leq C_9\left(2^{-3n/4}|u(j2^{-m})|+2^{-n/4}\sum_{k=j2^{n-m}}^{j2^{n-m} +i-1}|\gamma_{n,k}|+r2^{-2^{n/2}}\right)\] From the last two inequalities, using (\ref{eq11}), (\ref{eqg}) and $|u(j2^{-m})|\leq\beta$, we find that \[\Lambda_n\leq C_{10}\{2^{-m}(\beta+\alpha2^{-m}N)+2^{-m-n/4}(N^{1/2}2^{-N/2}+\alpha N^2)+2^{n-2^{n/2}}\}\] Since $n\geq N^{1/6}$ the first term dominates so $\Lambda_n\leq C_{11}2^{-m}(\beta+\alpha2^{-m}N)$, and the same bound holds for $\Omega_n\leq\Lambda_{n+1}$. We deduce that \[\sum_{N^{1/6}\leq l\leq N}\Omega_l\leq C_{12}N2^{-m}(\beta+\alpha N2^{-m})\] Using the original bound (\ref{sl}) for $l<N^{1/6}$ we have \[\sum_{m\leq l<N^{1/6}}\Omega_l\leq C_{13}N^{1/2}\{2^{-N+N^{1/4}/2-m}+\alpha2^{-m}(N^{1/4}+2^{-m/2}N)\}\] Combining these two estimates with (\ref{inf}) we get our improved bound: \[\sum_{l=m}^\infty\Omega_l\leq C_{14}\{N2^{-m}(\beta+\alpha N2^{-m})+\alpha(2^{-m}N^{3/4}+2^{-3m/2}N^{3/2})\}\] To conclude the proof we use this bound along with (\ref{b1}) in (\ref{eq12}) and obtain \[\begin{split}\sum_{k=j2^{n-m}}^{(j+1)2^{n-m}-1}|u((k+1)2^{-n})-u(k2^{-n})|\leq&C_{15}(n^{1/2}2^{n/2-m}+N2^{-m})\\ &\times\{\beta+\alpha(N2^{-m}+N^{-1/4}+2^{-m/2}N^{1/2})\}\end{split}\] for all $n$ with $m\leq n\leq N$.
Comparing this with (\ref{eq10}) we see by the minimality of $\alpha$ that \[\alpha\leq C_{15}\{\beta+\alpha(N2^{-m}+N^{-1/4}+2^{-m/2}N^{1/2})\}\] Then if $m$ is large enough to ensure $C_{15}(N2^{-m}+N^{-1/4}+2^{-m/2}N^{1/2})<1/2$ it follows that $\alpha\leq2C_{15}\beta$. Then applying (\ref{eq10}) with $n=m$ gives $|u((j+1)2^{-m})|\leq\beta+2C_{15}\beta(m^{1/2}2^{m/2}+N)2^{-m}\leq\beta(1+C_{16}N2^{-m})$ from which the required result follows. \end{proof} To complete the proof of Theorem \ref{mth}, using the notation of Lemma \ref{ll} let $m>m_0$ and $\beta_0=2^{-2^{3m/4}}$, and define $\beta_j$ for $j=1,2,\cdots,2^m$ by the recurrence relation $\beta_{j+1}=\beta_j(1+K2^{-m}\log(1/\beta_j))$. Writing $\gamma_j =\log(1/\beta_j)$ we then have \[\gamma_{j+1}=\gamma_j-\log(1+K2^{-m}\gamma_j)\geq\gamma_j(1-K2^{-m})\] so the sequence $(\gamma_j)$ is decreasing and \[\gamma_j\geq\gamma_0(1-K2^{-m})^j\geq\gamma_0e^{-K-1}=2^{3m/4}e^{-K-1}\geq2^{2m/3}\] for all $j=1,2,\cdots,2^m$, provided $m$ is large enough. Then for each $j$, $\beta_j$ is in the range specified in Lemma \ref{ll}, and it follows from that lemma by induction on $j$ that $|u(j2^{-m})|\leq\beta_j$ for each $j$. Hence $|u(j2^{-m})|\leq2^{-2^{2m/3}}$ for each $j$. This holds for all large enough $m$, and hence $u$ vanishes at all dyadic points in [0,1], and, as $u$ is continuous, $u=0$ on [0,1]. This completes the proof of the theorem. \section{An Application}\label{app} We give an application of Theorem \ref{mth} to convergence of Euler approximations to (\ref{eq1}) with variable step size. In this section we assume $f$ is continuous and consider (\ref{eq1}) on a bounded interval $[0,T]$. Given a partition ${\mathcal P}=\{0=t_0<t_1<\cdots<t_N=T\}$ of $[0,T]$ we consider the Euler approximation to (\ref{eq1}) given by: \[x_{n+1}=x_n+W(t_{n+1})-W(t_n)+(t_{n+1}-t_n)f(t_n,x_n)\] for $n=0,\cdots,N-1$, with $x_0=0$. For such a partition ${\mathcal P}$ we let $\delta({\mathcal P})=\max_{n=1}^N(t_{n}-t_{n-1})$. 
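The Euler recursion just defined is straightforward to implement. The following sketch (in Python; the drift, the uniform partition, the path resolution, and all function names are illustrative assumptions, not taken from the paper) samples a Brownian path on a grid and runs the scheme along it:

```python
import math
import random

def brownian_path(T, n_steps, seed=1):
    """Sample W at the n_steps + 1 points of a uniform grid on [0, T]."""
    rng = random.Random(seed)
    dt = T / n_steps
    W = [0.0]
    for _ in range(n_steps):
        W.append(W[-1] + rng.gauss(0.0, math.sqrt(dt)))
    return W

def euler(f, W, T):
    """Euler scheme x_{n+1} = x_n + (W(t_{n+1}) - W(t_n)) + (t_{n+1} - t_n) f(t_n, x_n),
    run on the same uniform partition on which W was sampled, with x_0 = 0."""
    N = len(W) - 1
    dt = T / N
    x = [0.0]
    for n in range(N):
        x.append(x[n] + (W[n + 1] - W[n]) + dt * f(n * dt, x[n]))
    return x

T = 1.0
W = brownian_path(T, 1024)
# a bounded continuous drift, chosen only for illustration
x = euler(lambda t, y: math.sin(y), W, T)
```

The corollary below then asserts that, almost surely, refining the partition in any manner whatsoever (no adaptedness required) drives such approximations to the unique solution.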
Then we have the following: \begin{cor}\label{cor}For almost every Brownian path $W$, for any sequence\[ {\mathcal P}_k=\{t_0^{(k)},\cdots,t_{N_k}^{(k)}\}\] of partitions with $\delta({\mathcal P}_k)\rightarrow0$, we have \[\max_{n=1}^{N_k}|x_n^{(k)}-x(t_n^{(k)})|\rightarrow0\] as $k\rightarrow\infty$, where $x(t)$ is the unique solution of (\ref{eq1}) and $\{x_n^{(k)}\}$ is the Euler approximation using the partition ${\mathcal P}_k$. \end{cor} \begin{proof} Suppose $W$ is a path for which the conclusion of Theorem \ref{mth} holds, and suppose there is a sequence of partitions with $\delta({\mathcal P}_k)\rightarrow0$ such that $\max_{n=1}^{N_k}|x_n^{(k)}-x(t_n^{(k)})|\geq\delta>0$. Then if we let $u_n^{(k)}=x_n^{(k)}-W(t_n^{(k)})$ we have $|u_{n+1}^{(k)}-u_n^{(k)}| \leq\|f\|_\infty(t_{n+1}^{(k)}-t_n^{(k)})$ so by the Arzel\`a-Ascoli theorem, after passing to a subsequence we have a continuous $u$ on $[0,T]$ such that $\max_{n=1}^{N_k}|u_n^{(k)}-u(t_n^{(k)})|\rightarrow0$. Then writing $y(t)=u(t)+W(t)$ we see that $y\neq x$ and, using the continuity of $f$, that $y$ satisfies (\ref{eq1}), contradicting the conclusion of the theorem. Corollary \ref{cor} is proved. \end{proof} The point of Corollary \ref{cor} is that the partitions can be chosen arbitrarily; no `non-anticipating' condition is required. For general SDEs with non-additive noise and sufficiently smooth coefficients Euler approximations will converge to the solution provided the partition points $t_n$ are stopping times, but this condition is rather restrictive for numerical practice, and an example is given in Section 4.1 of \cite{gl} of a natural variable step-size Euler scheme for a simple SDE which converges to the wrong limit. \cite{gl} also contains related results and discussion.\vspace{.2cm} {\bf Acknowledgement.} The author is grateful to Istvan Gy\"{o}ngy for drawing his attention to Krylov's question and for valuable discussions. \bibliographystyle{amsplain}
\begin{document} \baselineskip=17pt \title{Representation of positive integers by the form $x_1...x_k+x_1+...+x_k$} \author{Vladimir Shevelev} \address{Department of Mathematics \\Ben-Gurion University of the Negev\\Beer-Sheva 84105, Israel. e-mail:shevelev@bgu.ac.il} \subjclass{11N32.} \begin{abstract} For an arbitrarily given $k\geq3,$ we consider the possibility of representing a positive integer $n$ by the form $x_1...x_k+x_1+...+x_k, \enskip 1\leq x_1\leq ... \leq x_k.$ We also study the question of the smallest value of $k\geq3$ in such a representation. \end{abstract} \maketitle \section{Introduction} In 2002, R. Zumkeller published in OEIS the sequence A072670: "Number of ways to write $n$ as $ij+i+j,\enskip 0<i<=j$". This sequence possesses a remarkable property. \begin{proposition}\label{prop1} A positive integer $n$ is not represented by the form $ij+i+j,\enskip 0<i\leq j,$ if and only if $n=p-1,$ where $p$ is prime. \end{proposition} \begin{proof} The condition $n=p-1$ is sufficient, since if $n=ij+i+j,$ then $n+1=(i+1)(j+1)$ cannot be prime. Thus $n$ of the form $p-1$ is not represented by the form $ij+i+j,\enskip 0<i\leq j.$ \enskip Suppose, conversely, that $n$ is not represented by this form. We show that $n+1$ is prime. If $n+1\geq4$ is composite, then $n+1=rs, \enskip s\geq r\geq2.$ Set $i=r-1, \enskip j=s-1.$ We have $$ij+i+j=(r-1)(s-1)+(r-1)+(s-1)=n+1-1=n.$$ This contradicts the supposition. So $n+1$ is prime. \end{proof} In this note, for an arbitrarily given $k\geq3,$ we consider the more general form $x_1...x_k+x_1+...+x_k,\enskip 1\leq x_1\leq ... \leq x_k.$ In particular, we study the question of the smallest value of $k\geq3$ in a possible representation of $n.$ \section{Necessary condition for non-representation of $n$} Denote by $\nu_k(n)$ the number of ways to write $n$ by the form $$F_k=F(x_1,...,x_k)=$$ \begin{equation}\label{1} x_1...x_k+x_1+...+x_k, \enskip 1\leq x_1\leq ... \leq x_k,\enskip k\geq3.
\end{equation} \begin{proposition}\label{prop2} If, for a given $k\geq3,$ for $n\geq k-1$ we have $\nu_k(n)=0,$ then $n-k+3$ is prime. \end{proposition} \begin{proof} If $n-k+3\geq4$ is composite, then $n-k+3=rs, \enskip s\geq r\geq2.$ Set $x_i=1$ for $i=1,...,k-2$ and $x_{k-1}=r-1, \enskip x_k=s-1.$ We have $$F_k=(r-1)(s-1)+(k-2)+(r-1)+(s-1)=(n-k+3)+(k-2)-1=n.$$ This contradicts the condition $\nu_k(n)=0.$ So $n-k+3$ is prime. \end{proof} \begin{proposition}\label{prop3} If $k_1<k_2$ and $\nu_{k_1}(n)>0,$ then $\nu_{k_2}(n+k_2-k_1)>0.$ \end{proposition} \begin{proof} By the condition, there exist $x_1,...,x_{k_1}$ such that $$ n= x_1...x_{k_1}+x_1+...+x_{k_1}, \enskip 1\leq x_1\leq ... \leq x_{k_1},\enskip k_1\geq3. $$ Set $y_i=1,\enskip i=1,...,k_2-k_1,$ and $y_{k_2-k_1+1}=x_1,...,y_{k_2}=x_{k_1}.$ Then we have $$ y_1...y_{k_2}+y_1+...+y_{k_2} = x_1...x_{k_1} + k_2-k_1+x_1+...+x_{k_1}=n+k_2-k_1.$$ \end{proof} \begin{corollary}\label{cor1} If $k_1<k_2$ and $\nu_{k_1}(n+k_1-3)>0,$ then $\nu_{k_2}(n+k_2-3)>0.$ \end{corollary} \begin{corollary}\label{cor2} If $k_1<k_2$ and $\nu_{k_2}(n+k_2-3)=0,$ then $\nu_{k_1}(n+k_1-3)=0.$ \end{corollary} Note that, by Proposition \ref{prop2}, in Corollary \ref{cor2} the number $n$ is prime. \section{Cases $k=3$ and $k=4$} Consider in more detail the case $k=3,$ when $$F_3=x_1x_2x_3+x_1+x_2+x_3, \enskip 1\leq x_1\leq x_2\leq x_3.$$ The number of ways to write each positive integer by the form $F_3$ is given in the sequence A260803 by D. A. Corneth. Note that, by Proposition \ref{prop2}, a number $n\geq2$ can fail to be represented by $F_3$ only when $n$ is prime. However, note that the sequence of primes $p$ not represented by $F_3$ must grow rather quickly.
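Propositions 1 and 2 are easy to check numerically. The following brute-force sketch (the function names are mine, not from the paper, and Python is used only for illustration) counts representations $\nu_k(n)$, with $k=2$ giving the form $ij+i+j$, and verifies both propositions over a small range:

```python
def is_prime(m):
    """Trial-division primality test, adequate for the small ranges below."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def nu(k, n):
    """nu_k(n): number of ways n = x_1*...*x_k + (x_1+...+x_k), 1 <= x_1 <= ... <= x_k."""
    count = 0
    def rec(i, lo, prod, tot):
        nonlocal count
        rem = k - i
        if rem == 0:
            if prod + tot == n:
                count += 1
            return
        x = lo
        # all remaining variables are >= x, so the smallest possible completion
        # is prod * x**rem + tot + rem*x; stop once even that exceeds n
        while prod * x**rem + tot + rem * x <= n:
            rec(i + 1, x, prod * x, tot + x)
            x += 1
    rec(0, 1, 1, 0)
    return count

# Proposition 1: n is not of the form ij+i+j (the k=2 case) iff n+1 is prime
assert all((nu(2, n) == 0) == is_prime(n + 1) for n in range(1, 201))
# Proposition 2: if nu_k(n) = 0 for n >= k-1, then n-k+3 is prime
for k in (3, 4, 5):
    for n in range(k - 1, 120):
        if nu(k, n) == 0:
            assert is_prime(n - k + 3)
```

For instance, nu(2, 15) == 2, corresponding to 15 = 1·7+1+7 = 3·3+3+3, in line with 16 = 2·8 = 4·4 being composite.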
Indeed, $p$ must not be a prime of the form \begin{equation}\label{2} (2t+1)m+(t+2),\enskip t,m\geq2, \end{equation} where $t\equiv0$ or $2 \pmod 3.$ Indeed, in this case $p = x_1x_2x_3+x_1+x_2+x_3$ for $x_1=2,\enskip x_2=t, \enskip x_3=m,$ if $t\leq m,$ and for $x_1=2,\enskip x_2=m,\enskip x_3=t$ otherwise. Since $\gcd(2t+1, t+2)=\gcd(2(t+2)-3, t+2)=\gcd(3,t+2)=1$ for admissible $t$ (as $t\not\equiv1 \pmod 3$), by Dirichlet's theorem, for any admissible $t\geq2,$ the progression (\ref{2}) contains infinitely many primes $p.$ For all these primes, $\nu_3(p) >0.$ \begin{question}\label{q1} Is the sequence of primes $\{p\enskip|\enskip\nu_3(p)=0\}$ infinite$?$ \end{question} However, in the case of $k=4,$ in view of Corollary \ref{cor1}, to the set of progressions (\ref{2}) one can add, for example, the following set of progressions \begin{equation}\label{3} (4t+1)m+(t+3),\enskip t,m\geq2. \end{equation} \newpage Here $\gcd(4t+1, t+3)=\gcd(4(t+3)-11,t+3)=\gcd(11,t+3)=1,$ except for $t\equiv-3 \pmod{11}.$\enskip Hence, for any admissible $t\geq2$ the progression (\ref{3}) contains infinitely many primes $p.$ For such $p$ we have $$p+k-3=p+1=2\cdot2tm+2+2+t+m=F_4$$ with $x_1=x_2=2,\enskip x_3=t,\enskip x_4=m,$ if $t\leq m,$ and $x_1=x_2=2, \enskip x_3=m, \enskip x_4=t,$ if $t>m.$ So for such $p,$ $\nu_4(p+1)>0.$ Therefore, and by the observations in the table in Corneth's sequence A260804 for $k=4,$ the following question has a different flavour. \begin{question}\label{q2} Is the sequence of primes $\{p\enskip|\enskip\nu_4(p+1)=0\}$ finite$?$ \end{question} \section{Smallest $k$ for representation of $p_n+k-3$} According to Proposition \ref{prop2}, if $m$ is not represented in the form $F_k,$ then $m-k+3$ is prime. Denote by $p_n$ the $n$-th prime.
Let $m-k+3=p_n.$ Then, for every $n,$ the following question is of interest: what is the smallest $k\geq3$ for which the number $p_n+k-3$ is represented by $F_k?$ Denote by $s(n),\enskip n\geq1,$ this smallest $k$ and let us write $s(n)=0,$ if $p_n+k-3$ is not represented by $F_k$ for any $k\geq3.$ The sequence $\{s(n)\}$ starts with the following terms (A260965): $$0, 0, 0, 0, 0, 0, 0, 3, 4, 3, 0, 0, 4, 0, 3, 0, 3, 3, 0, 4, 3, 3, 4, 3, $$ \begin{equation}\label{4} 4, 0, 3, 5, 3, 4, 3, ... . \end{equation} \begin{conjecture}\label{con1} The sequence (\ref{4}) contains only a finite number of zero terms. \end{conjecture} For example, an affirmative answer to Question \ref{q2} immediately proves Conjecture \ref{con1}. Here we consider only the question of estimates of $s(n).$ \begin{proposition}\label{prop4} \begin{equation}\label{5} s(n) \leq \lfloor\log_2(p_n)\rfloor. \end{equation} \end{proposition} \begin{proof} Suppose, for a given $p_n,$ there exists $k$ such that $p_n + k - 3$ is represented by the form $F_k.$ The representation with the smallest possible $k$ we call an {\slshape optimal representation} for the given $p_n.$ Let us show that in an optimal representation all $x_i\geq2.$ Indeed, suppose $x_1 = ... = x_u = 1$ with $u\geq1,$ and $x_i \geq 2$ for $u+1 \leq i \leq k,$ so that $p_n + k - 3 = x_{u+1}...x_k + u + x_{u+1} + ... + x_k$ is an optimal representation. Note that $u<k,$ otherwise $F_k = 1+k,$ which is not of the form $k-3+p$ with $p$ prime (it would force $p=4$). Set $k_1 = k - u; \enskip y_j = x_{u+j}.$ Then $p_n + k_1 - 3 = y_1...y_{k_1} + y_1 + ... + y_{k_1}.$ Since $k_1 < k,$ this contradicts the optimality of the representation. The contradiction shows that all $x_i$ in an optimal \newpage representation are indeed at least 2. So for an optimal representation, $p_n + k - 3 = F_k \geq 2^k + 2k$ and $2^k + k + 3 \leq p_n.$ Hence $s(n) = k_{\min} < \log_2(p_n)$ and the statement follows.
\end{proof} Now we need a criterion for $s(n)>0.$ \begin{proposition}\label{prop5} $s(n)>0$ if and only if either there exists $t_2\geq1$ such that $$B(t_2) = 2^{t_2} + t_2 + 3 = p_n$$ or there exist $t_2\geq0, t_3\geq1$ such that $$B(t_2, t_3) = 2^{t_2}3^{t_3} + t_2 + 2t_3 + 3 = p_n$$ or there exist $t_2\geq0, t_3\geq0, t_4\geq1$ such that $$B(t_2, t_3, t_4) = 2^{t_2}3^{t_3}4^{t_4} + t_2 + 2t_3 + 3t_4 + 3 = p_n,$$ etc. \end{proposition} \begin{proof} Distinguish the following cases for $x_i \geq 2, i=1,...,k,$ and $F_k = x_1...x_k + x_1 + ... + x_k:$\newline (i) All $x_i = 2, i=1,...,t_2.$ Here $k=t_2$ and $F_k=2^{t_2} + 2t_2.$ If this is $t_2 - 3 + p_n,$ then $p_n = 2^{t_2}+t_2+3 = B(t_2).$\newline (ii) The first $t_2$ values $x_i$ equal 2 and the next $t_3$ values equal 3. Note that $t_3 \geq 1$ (otherwise, we have case (i)). Here $k=t_2+t_3$ and $F_k=2^{t_2}3^{t_3} + 2t_2 + 3t_3.$ If this is $k - 3 + p_n = t_2 + t_3 -3 + p_n,$ then $p_n = 2^{t_2}3^{t_3} + t_2 + 2t_3 + 3 = B(t_2, t_3),$ etc. \end{proof} Note that in the expressions $B(t_2), B(t_2, t_3),... $ defined in Proposition \ref{prop5}, we can consider only the case when the last variable is positive. Indeed, in $B(t_2), \enskip t_2\geq1$ and if $t_{j+1}=0,$ then, evidently, $B(t_2,...,t_j, 0)= B(t_2,...,t_j).$ \begin{corollary}\label{cor3} If $v<j$ is the smallest number such that, for some $t_2,...,t_v,t_j,$ $B(t_2,...,t_v,t_j) = p_n,$ then $s(n) = t_2 + ... + t_v + t_j.$ If, for a given $n,$ for any $j$ there is no such $v,$ then $s(n)=0.$ \end{corollary} In practice, using this algorithm for different $j$ (cf. Section 5), we rather quickly reduce the number of variables $t_i$ needed for the evaluation of $s(n).$ \section{Cases of $p_n=97$ and $p_n=101$} Here we show that, for $p_{25}=97, p_{26}=101,$ we have $s(25)=4$ and $s(26)=0.$ Note that, for $t_j=1,$ $B(0,0,...,0,t_j)=2(j+1)$ and, for $j\geq3,$ $B(t_2,0,...,0,t_j)=(2^{t_2}+1)j+t_2+2.$ For $t_2=1,...,5,$ we have $3j+3,5j+4,9j+5,17j+6,33j+7$ respectively.
None of these expressions is equal to 97 or 101. \newpage Further, for $j\geq4,$ $B(t_2,t_3,0,...,0,t_j)=(2^{t_2}3^{t_3}+1)j+t_2+2t_3+2.$ Here $t_2>0,$ otherwise we have even values. For $(t_2,t_3)=(1,1),(2,1), (3,1),$ we have $7j+5,13j+6,25j+7$ respectively. None of these expressions is equal to 97 or 101, except for $13j+6=97$ for $j=7,$ which corresponds to $t_2=2, t_3=1,t_7=1.$ Hence, by Corollary \ref{cor3}, $s(25)=2+1+1=4.$ Continuing the analysis for $p=101,$ note that, for $j\geq5,$ $B(t_2,t_3,t_4,0,...,0,t_j)=(2^{t_2}3^{t_3}4^{t_4}+1)j+t_2+2t_3+3t_4+2.$ Here already for $(t_2,t_3,t_4)=(1,1,1)$ we have $25j+8>101.$ This completes the case $t_j=1.$ In the case $t_j=2$ we have $B(t_2,0,...,0,t_j)=2^{t_2}j^2+t_2+2(j-1)+3,\enskip j\geq3.$ Here $t_2$ should be even (otherwise $B(t_2,0,...,0,t_j)$ is even). For $t_2=2,4,$ we have $4j^2+2j+3,16j^2+2j+5,$ respectively. None of these expressions is equal to 101. For $j\geq4,$ $B(t_2,t_3,0,...,0,t_j)=2^{t_2}3^{t_3}j^2+t_2+2t_3+2(j-1)+3$ is $\geq108$ already for $t_2=t_3=1.$ Finally, in the case $t_j\geq3,\enskip j\geq3,$ we have $B(t_2,0,...,0,t_j)=64$ for $t_2=1,j=3, t_j=3$ and $>101$ otherwise. So, $s(26)=0.$
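These hand computations, and the table of values of s(n) above more generally, can be cross-checked by a direct search: by Proposition 4 it suffices to try $3\leq k\leq\lfloor\log_2 p_n\rfloor$. A brute-force sketch (my own naming, for illustration only, not the algorithm of Corollary 3):

```python
import math

def is_prime(m):
    """Trial division; adequate for the small primes used here."""
    return m >= 2 and all(m % d for d in range(2, math.isqrt(m) + 1))

def representable(k, target):
    """True if target = x_1*...*x_k + (x_1+...+x_k) with 1 <= x_1 <= ... <= x_k."""
    def rec(i, lo, prod, tot):
        rem = k - i
        if rem == 0:
            return prod + tot == target
        x = lo
        # smallest completion takes all remaining variables equal to x
        while prod * x**rem + tot + rem * x <= target:
            if rec(i + 1, x, prod * x, tot + x):
                return True
            x += 1
        return False
    return rec(0, 1, 1, 0)

def s(p):
    """Smallest k >= 3 with p + k - 3 representable by F_k, or 0 if there is none;
    by Proposition 4 only k up to floor(log2(p)) need be tried."""
    for k in range(3, int(math.log2(p)) + 1):
        if representable(k, p + k - 3):
            return k
    return 0

primes = [m for m in range(2, 130) if is_prime(m)]
values = [s(p) for p in primes]  # s(n) for n = 1, ..., 31
```

In particular this search reproduces $s(25)=4$ (via $97+1=2\cdot2\cdot3\cdot7+2+2+3+7$) and $s(26)=0$, in agreement with the case analysis above.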
Mission and Values We, the French American Chamber of Commerce in the United States, are a binational non-profit organization that seeks to contribute through the efforts of our chapters and members to the development and improvement of economic, commercial, and financial relations between France and the United States. To achieve this objective: We will encourage sales of goods and services between both countries and promote better international understanding. We will provide information on the economy and business environment to French and American enterprises to help promote investment by commercial and industrial enterprises of each country in the other. We will work with various French and U.S. governmental and economic agencies, diplomatic and consular agents and other associations in France and the U.S. that pursue similar goals. We are committed to providing the highest level of service to our members and representing the interest of members to external organizations. We will facilitate the interaction among our members to foster continuing good economic, commercial and financial relationships between France and the United States of America.
There really is no way to say this without sounding like a politician but I would love to represent Richmond as the 365 Days of Dining blogger. While I will not have the power to lower taxes or fight crime, my best efforts each day would be put towards making this blog as informative, engaging and fun as possible. To be frank, I do feel like I’m the best choice for the job. Why? It could be because I absolutely adore Richmond. I love Richmond like Ray Charles loved Georgia or that kid that 50 Cent says loves cake. Writing songs is probably out of the question since my musical talent is limited* but I could make this into a heck of a fun blog. I’ve always said that I think Richmond is just about the best place on the planet when it comes to living. Everything is quite convenient. People are friendly. The streets are safe. Richmond is quiet when we need some quiet and fun when we want to cut loose. I know I’ve also professed my love for Japan, Germany and even parts of the United States. I know I’ve mentioned I might want to live downtown or elsewhere for awhile but Tokyo, London, Robson Street…we’re just friends. Richmond, we’re best friends. We’re family. Whoever is selected for 365 Days of Dining is going to be representing Richmond to the world. I’m sure we’d all like to be introduced by someone that truly knows who we are rather than someone we just met. I’m relatively well traveled and I enjoy living abroad as well. Nothing broadens your mind like experiencing other cultures and parts of the world you haven’t been. Travel is food for your soul but like revisiting old favorites such as wonton noodles or a nice, rare burger after more adventurous edibles, I always love coming back to Richmond. The mystery and allure of a foreign land may keep me away for a little while but I always know where my heart is. No matter where I happen to be, Richmond will always be my hometown and it will always proudly say so on my Facebook page. It could be for my love for food. 
Although my parents tell me I was a fussy eater as a toddler, it didn't take long for my love affair with food to begin. While other kids were watching Saved by the Bell, I would follow along with Yan Can Cook. I figured if he could cook, so could I! Don't get me wrong, I still watched Saved by the Bell, but throughout my younger years I idolized the chefs on Food Network. By the time I got to high school, I followed along with Emeril Lagasse (BAM!), Bobby Flay and Mario Batali. I remember watching the original Japanese Iron Chef for the first time (Battle Asparagus) and being awed by the creativity.

I love to cook. Love getting my hands, as well as several countertops, many dishes and much of the floor, dirty. I love knowing what goes into my food and what produces the flavors I enjoy. Food isn't just something that keeps us alive; it's so much more. It can bring back the memories you thought were lost. It's a great reason to sit down with friends for a few hours. Food can be fun. Food can tell a story. Food can be art. Some of you may be scoffing, thinking I'm exaggerating, but perhaps you've never bitten into the perfect piece of otoro, Granny Smith green apple sorbet or one of those burnt ends from The Hog Shack.

It could be because I know my way around writing, photo/videography and social media. I won't profess to be the best, but I know what I'm capable of and how I can grow in each area. As a combination of talents, I feel like I stack up well against the competition, and I can do what it takes to make this a success not only for myself but for Richmond. Blogging isn't going to be the only responsibility with 365 Days of Dining. This isn't Field of Dreams, where they'll come because you built something. Traffic doesn't appear out of thin air; it has to be generated and promoted.
I've spent the last six years networking and digging into the local social media scene, and I already have ideas on audience participation as well as how I can give back to my home community. That's why I would absolutely be over the moon if I were chosen for this, not only because I think it would be a great time eating amazing food, but because I would get to share Richmond with everyone and tell them exactly how awesome my city is.

* I know all the words to The Fresh Prince of Bel-Air. All of them.

11 comments

"If Yan can cook, so can you! Joy geen!"

It is quite apparent that your passion for food is boundless. I hope you get selected for 365 Days of Dining, as you will surely do the role justice. When I think about it, nothing other than food has the capability to induce such a dazzling variety of sensations. For example, I love sampling the same cuisine prepared by different people just because of how different each experience can be despite having similar vibes. -Jean

Nice to see a blog update after a very long time.

I think that is an awesome self-promotion for the job. You obviously have a passion for food. I hope you get it!

Hey Ed, what a passionate post. I can confirm to anyone who doesn't know you and will be deciding who to pick that you are indeed very passionate about food and know Richmond well. As your post proves, you can also write with passion about it. They should really think twice if they don't give you this right away. How did you enjoy talking to CBC TV as they interviewed you? If you don't mind sharing, you can expand on it by commenting on my post about CBC TV filming at the Vancouver Bloggers Dot Com Pho lunch. Cheers, Vance

Making your blog informative and fun will help you attract more visitors, and readers will love to stay on your blog if they see great content. I hope to visit Richmond in the future.

A post full of information and passion.
Great work! Is Yan Can Cook still an ongoing program in your country? It's pretty nostalgic for me to see the image you posted of him, because he was in Singapore for a year or two, about a decade ago, and his show was one of the programs I followed! Yan Can Cook, So Can You! And if you enjoy writing and you enjoy cooking, I do hope to see a lot more posts from you. I think your writing is captivating and easy to read.

What a great post! This will help you enjoy cooking as well.

I just watched an episode of Yan Can Cook. Still as good as ever.
Wednesday, November 12, 2014

fat land

Posted by Kristen Eleni Shellenbarger at 5:25 PM

How can you say no to that face??? So cute!

Awhhh, good for happy Laz land!! :)

So friendly and inquisitive! And photogenic.

Yegads, the camera does love Laz. With good reason, wouldn't you say?