---
author:
- Stefan Ruschel
- Benoit Huard
bibliography:
- main.bib
title: |
Sounding the metabolic orchestra:\
A delay dynamical systems perspective on the glucose-insulin regulatory response to on-off glucose infusion
---
# Introduction {#introduction .unnumbered}
Cyclic rhythms are widely recognized for their significant role in regulating the function of biological and physiological systems [@goldbeter2022multi; @keener2009mathematical]. Endogenous oscillations are typically encountered in healthy individuals, while a progressive loss of control of these rhythms is often associated with system stress (e.g. sleep deprivation [@sweeney2017skeletal; @sweeney2021impairments]) and with disease evolution in humans [@spiga2011hpa].
A prominent example of such endocrine oscillations in the human body is the self-regulation of blood glucose levels [@grant2018evidence]. When blood glucose levels increase, insulin is released from the pancreas. Insulin then causes blood glucose levels to decrease by stimulating body cells to absorb glucose from the blood. Conversely, when blood glucose levels fall, pancreatic $\alpha$-cells release glucagon, stimulating hepatic glycogenolysis and gluconeogenesis. The level of blood glucose is thus controlled by the rates of insulin secretion (activated by glucose) and hepatic glucose production (inhibited by insulin). Within the glucose-insulin regulatory system, both rapid oscillations of insulin (period $\sim$ 6-15 minutes) and ultradian oscillations of glucose and insulin (of similar period $\sim$ 80-180 minutes [@scheen1996relationships]) have been observed during fasting, meal ingestion, and continuous enteral and intravenous nutrition [@o1993lack].
The most important approach to understanding the mechanisms underlying these glucose-insulin oscillations is measuring the response to glucose infusions. A large number of metrics and mathematical models have been devised for that purpose. While the HbA1c metric remains an essential tool for the diagnosis, prevention, and control of type 2 diabetes [@ADA-A1C-2022], clinical tests involving patterns of glucose intake combined with mathematical models provide a mechanism for evaluating the efficacy of internal regulation [@ajmera2013impact; @makroglou2006mathematical; @palumbo2013mathematical; @HuardKirkham2022mathematical]. The minimal model devised by Bergman and Cobelli [@bergman1979quantitative; @bergman2021origins] provides an effective method for estimating insulin sensitivity from an intravenous or oral glucose tolerance test, although it can lead to underestimation in individuals with a large acute insulin response [@ha2021minmod]. With the wider availability of continuous glucose monitors and automated insulin pumps, the ability to detect diabetic deficiencies relies on the capacity of models to reproduce more complex and realistic dynamics under various routine life conditions such as, for example, sleep deprivation [@sweeney2017skeletal].
The main goal of this article is to identify the types of behaviors in a suitable mathematical model that can be expected as a response to periodic glucose uptake, specifically periodic on-off glucose infusion, which can be readily implemented in practice. We focus on the capacity of the system to fall into lockstep with the frequency of the glucose stimulus (so-called entrainment) which has been observed in numerous contexts at the ultradian and circadian levels in endocrinology [@walker2010origin; @zavala2019mathematical; @HuardKirkham2022mathematical], but especially in models of glucose-insulin oscillations with periodic infusion [@sturis1995phase].
Many modeling efforts have been made to replicate the nonlinear response of the glycemic system; in particular, the mathematical modeling of the delayed response of individual parts of the system by explicit time delays has proven an effective means to explain the onset of self-sustained, ultradian oscillations [@li2006modeling; @chen2010modeling; @cohen2021novel]. A common approach to modeling oscillatory behavior in complex biological systems is to consider time delays [@glass2021nonlinear]. In particular, models of endocrine regulation often incorporate explicit delays to account for the time required for the synthesis, release, and action of hormones or metabolites [@walker2010origin]. Various models of intrapancreatic rhythmic activity have been proposed recently, see Ref. for a review. For example, it has been shown that glucose oscillations can enhance the insulin secretory response at the $\beta$-cell level when tuned to a suitable amplitude and frequency [@mckenna2016glucose]. Negative delayed feedback has also been shown to provide a suitable explanatory mechanism for coordinated pancreatic islet activity [@bruce_coordination_2022].
![ Panel (a): Diagrammatic overview of the glucose-insulin regulatory delayed-feedback model ([\[eq:G\]](#eq:G){reference-type="ref" reference="eq:G"})--([\[eq:I\]](#eq:I){reference-type="ref" reference="eq:I"}); see methods section for details. Panels (b)--(e): Characteristic time series of system ([\[eq:G\]](#eq:G){reference-type="ref" reference="eq:G"})--([\[eq:I\]](#eq:I){reference-type="ref" reference="eq:I"}) (with positive initial condition) for different patterns of glucose infusion with intervals of fasting indicated by a white background and intervals of glucose infusion with constant rate indicated by a light blue background. Units are \[$G$\] mg dl$^{-1}$, \[$I$\] mg dl$^{-1}$, and \[$t$\] h. Infusion rates when not fasting are $G_{\text{in}}=1.35$ mg dl$^{-1}$min$^{-1}$ in panels (c)--(d) and $G_{\text{in}}=24.3$ mg dl$^{-1}$min$^{-1}$ in panel (e); period of infusion is $T_{\text{in}}= 1$ h in panel (d) and $T_{\text{in}}= 3$ h in panel (e); time of infusion is $t_{\text{in}}= 30$ min in panel (d) and $t_{\text{in}}= 5$ min in panel (e). ](Fig/Figure1.eps){#fig:schema width="\\linewidth"}
In this article, we investigate a two-component, system-level mathematical model (see Eqs. [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} in the methods section) for the blood glucose level $G(t)$ and insulin level $I(t)$ with two explicit time delays $\tau_I$ and $\tau_G$ corresponding to the pancreatic insulin and hepatic glucose production pathways; see the methods section for details on the model. The model incorporates the following physiological processes and factors that influence glucose and insulin dynamics, see Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(a) for a schematic overview.
Glucose uptake: $G_\text{in}$ represents glucose uptake into the blood by meal ingestion, continuous enteral or intravenous nutrition.
Insulin production: $f_1$ represents the production of insulin. It is influenced by the concentration of glucose with a delay $\tau_I$, accounting for the time lag between elevated glucose levels triggering insulin production in the pancreas and the insulin becoming available to reduce glucose in the bloodstream.
Insulin-independent glucose utilization: $f_2$ describes the utilization of glucose by tissues, mainly the brain, in an insulin-independent manner. It does not rely on the presence of insulin.
Insulin-dependent glucose utilization: $f_3\cdot f_4$ represents the utilization of glucose by muscle tissues in an insulin-dependent manner. It reflects the capacity of tissues to utilize insulin for glucose uptake.
Glucose production by the liver: $f_5$ represents the production of glucose by the liver. The delay $\tau_G$ accounts for the time lag between changes in insulin levels and their effect on hepatic glucose production.
Insulin degradation: The rate $d$ accounts for the degradation of insulin in the body, primarily by the liver and kidneys. It combines both natural factors (e.g., exercise) and artificial factors (e.g., medication) that influence the rate of insulin degradation.
The nonlinear pathways $f_1, f_2, f_3, f_4,$ and $f_5$ are represented using Hill functions, which are commonly used in biological modeling. These functions introduce additional parameters that have specific physiological interpretations and allow for a more accurate representation of the underlying dynamics of the glucose-insulin system, see methods section for details. The delays $\tau_I$ and $\tau_G$ are important physiological parameters encapsulating the responsiveness of the signaling and production pathways. They are assumed to be constant for the purpose of this article, although in practice they can vary between individuals, during the day and over the lifespan, and especially in the presence of diabetes.
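The model equations themselves are given in the methods section and are not reproduced in this chunk. Purely to illustrate the structure described above, the following sketch integrates a two-delay system of this type with the explicit Euler method; the Hill-type pathway functions and all numerical constants are hypothetical placeholders, not the calibrated parameter values of the model.

```python
import numpy as np

def hill_up(x, K, n):
    """Increasing Hill function (activation), saturating at 1."""
    return x**n / (K**n + x**n)

def hill_down(x, K, n):
    """Decreasing Hill function (inhibition)."""
    return K**n / (K**n + x**n)

def simulate(G_in, tau_I=5.0, tau_G=20.0, dt=0.1, t_end=600.0):
    """Euler integration of a two-delay glucose-insulin sketch (time in min):

        dG/dt = G_in - f2(G) - f3(G)*f4(I) + f5(I(t - tau_G))
        dI/dt = f1(G(t - tau_I)) - d*I

    All coefficients below are hypothetical placeholders.
    """
    n = int(t_end / dt)
    dI_steps = int(tau_I / dt)   # delay tau_I in grid steps
    dG_steps = int(tau_G / dt)   # delay tau_G in grid steps
    G = np.full(n + 1, 90.0)     # constant positive history (mg/dl)
    I = np.full(n + 1, 10.0)
    for k in range(max(dI_steps, dG_steps), n):
        G_del = G[k - dI_steps]                   # glucose seen by the pancreas
        I_del = I[k - dG_steps]                   # insulin seen by the liver
        f1 = 2.0 * hill_up(G_del, 100.0, 4)       # delayed insulin production
        f2 = 1.0 * hill_up(G[k], 70.0, 2)         # insulin-independent uptake
        f34 = 1.5 * hill_up(G[k], 100.0, 2) * hill_up(I[k], 15.0, 2)
        f5 = 2.0 * hill_down(I_del, 15.0, 4)      # delayed hepatic production
        d = 0.06                                  # insulin degradation rate
        G[k + 1] = G[k] + dt * (G_in - f2 - f34 + f5)
        I[k + 1] = I[k] + dt * (f1 - d * I[k])
    return G, I
```

Since the uptake terms vanish as $G\to0$ and insulin decays linearly, trajectories starting from a positive history remain positive in this sketch, mirroring the positivity of solutions assumed in Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}.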
The model has been extensively analyzed by various authors in the case of constant rates of glucose infusion [@li2006modeling; @li2007analysis; @huard2015investigation; @huard2017mathematical]. It originates from the work of Sturis and collaborators, who devised a model of the glucose and insulin ultradian oscillations observed experimentally under various conditions [@sturisetal1991a]. We also remark that the model belongs to a larger class of models incorporating delays to capture secretion processes [@makroglou2006mathematical; @shi2017oscillatory]. We extend these earlier efforts on the analysis of the model by studying its response to periodic variations of the parameter $G_{\rm{in}}$, that is, periodic variations of glucose uptake. In particular, we consider on-off infusion, a form of periodic infusion that is comparatively easy to implement in practice, where the rate of glucose infusion periodically switches between a positive constant value and zero. Panels (b)--(e) of Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"} show prototypical examples of the response of system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} to various types of glucose uptake: fasting (b), glucose infusion with a (relatively high) constant rate (c), and periodic on-off infusion (d)--(e). We first investigate the loss of ultradian oscillations under sufficiently strong constant infusion, see Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(b)--(c). We then study the effects of different glucose infusion patterns $G_{\rm{in}}$ on glucose homeostasis, in particular the transition from quasi-periodicity to entrainment, as shown in Figs. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(d)--(e).
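The on-off protocol (Eq. [\[eq:Gin-periodic-forcing\]](#eq:Gin-periodic-forcing){reference-type="eqref" reference="eq:Gin-periodic-forcing"}, given in the methods section) is not reproduced in this chunk; a natural reading, consistent with the panel descriptions above, is a square wave that switches between a constant rate and zero. The helper below (name and signature are ours, for illustration) encodes this:

```python
def G_in_onoff(t, G_max, t_in, T_in):
    """Periodic on-off infusion rate at time t (minutes):
    constant rate G_max during the first t_in minutes of each
    infusion period T_in, and zero (fasting) otherwise."""
    return G_max if (t % T_in) < t_in else 0.0
```

For example, the protocol of Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(d) would correspond to `G_in_onoff(t, 1.35, 30, 60)`.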
# Results {#results .unnumbered}
## Ultradian oscillations {#ultradian-oscillations .unnumbered}
It has been shown that for a fixed constant glucose infusion $G_{\rm{in}}$, sufficiently large values of the response delays $\tau_I$ and $\tau_G$ lead to periodic oscillations in system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} with periods closely resembling the observed range for ultradian oscillations [@li2006modeling; @huard2015investigation; @huard2017mathematical]. Mathematically speaking, the onset of oscillations is mediated by a supercritical Hopf bifurcation, which leads to a local topological change in the solution space of system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} from a stable equilibrium to an unstable equilibrium surrounded by a small stable limit cycle close to the bifurcation point [@li2007analysis]. For details on bifurcation theory and the Hopf bifurcation, we refer the interested reader to Ref. . To locate the bifurcation point, it is necessary to vary at least one parameter of the system. Here, we focus on the response delays $\tau_I$ and $\tau_G$. Allowing these two parameter values to vary simultaneously, one obtains a one-parameter curve $\mathbf{H}(\omega)=(\tau_I(\omega),\tau_G(\omega))$ of Hopf bifurcations in the $(\tau_I,\tau_G)$-plane, parametrized by the Hopf frequency $\omega$; see the methods section for a detailed derivation. The curve $\mathbf{H}(\omega)$ is the critical curve for oscillations in system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"}.
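Schematically, assuming (consistent with the pathway description above) that the delay $\tau_I$ enters through $f_1(G(t-\tau_I))$ and $\tau_G$ through $f_5(I(t-\tau_G))$, linearizing about the equilibrium $(G^\ast,I^\ast)$ yields a characteristic equation of the form
$$\Delta(\lambda)=(\lambda+a)(\lambda+d)+\beta\,\mathrm{e}^{-\lambda\tau_I}+\gamma\,\mathrm{e}^{-\lambda(\tau_I+\tau_G)}=0,$$
where $a$ and $d$ are the local decay rates of glucose and insulin, and $\beta$ and $\gamma$ combine derivatives of the pathway functions $f_1,\dots,f_5$ evaluated at the equilibrium; the exact expressions are given in the methods section, and the form shown here is only a sketch under the stated assumption. The curve $\mathbf{H}(\omega)$ is then obtained by substituting $\lambda=\mathrm{i}\omega$ and separating real and imaginary parts, which gives two equations for the pair $(\tau_I,\tau_G)$ at each frequency $\omega$.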
![ Characterization of fasting oscillations with respect to the response delays. Panels show the period (a), maximum glucose value (b), and minimum glucose value (c) as a function of the response delays $\tau_I$ (min) and $\tau_G$ (min). Shown are the critical curve for oscillations (black, Hopf bifurcation) and isocurves (blue) of constant period (a), maximum of $G$ (b), and minimum of $G$ (c). The light blue rectangle shows the physiological range of delay values for comparison. See methods section for the model and choice of parameters.](Fig/Figure2.eps){#fig:Hopfcurve-ext width="\\linewidth"}
Figure [2](#fig:Hopfcurve-ext){reference-type="ref" reference="fig:Hopfcurve-ext"}(a) shows the curve $\mathbf{H}$ (black) during fasting, i.e. for $G_{\text{in}}=0$, computed with the software package DDE-Biftool for Matlab [@engelborghs2002numerical; @sieber2014dde]. It has been numerically verified that the curve $\mathbf{H}$ is indeed supercritical for the range of parameter values considered. Figure [2](#fig:Hopfcurve-ext){reference-type="ref" reference="fig:Hopfcurve-ext"}(a) can be interpreted as follows: First, for value pairs above the curve and for $\tau_I\leq20~\mathrm{min},$ $\tau_G\leq60~\mathrm{min}$, any solution of the model starting in a physiological range of glucose and insulin develops periodic oscillations, see Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(b). Second, for value pairs $(\tau_I,\tau_G)$ below the curve $\mathbf{H}$, oscillations in system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} decay and approach the equilibrium $(G^\ast,I^\ast).$ This possibly reflects the situation where an individual is administered a glucose dose that is too high to be managed in an oscillatory manner within physiological glucose and insulin ranges, compare Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(c).
Figure [2](#fig:Hopfcurve-ext){reference-type="ref" reference="fig:Hopfcurve-ext"}(a) also gives an overview of the resulting period of oscillation above the critical curve $\mathbf{H}$, shown in the form of isocurves (blue) of limit cycles with constant period. The physiological range of parameters $(\tau_I,\tau_G)$ is highlighted by a light blue square in the background for convenience. The range of periods for ultradian oscillations predicted by the model thus extends from $2.2$ to $4.2$ hours during fasting. More generally, we observe that the period of the limit cycle oscillation grows approximately linearly with the sum of the two delay values $\tau_I+\tau_G$. We also observe that away from the curve $\mathbf{H}$, the limit cycle oscillation becomes less and less sinusoidal, i.e. the nonlinearity of system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} has a growing effect on the limit cycle. Panels (b)--(c) of Fig. [2](#fig:Hopfcurve-ext){reference-type="ref" reference="fig:Hopfcurve-ext"} illustrate this effect by plotting isocurves of periodic orbits with constant maximum and minimum glucose within one period of oscillation. We observe that, whereas the glucose minimum decreases approximately linearly with the sum of the delays $\tau_I+\tau_G$, the glucose maximum remains almost constant for the range of parameter values considered. Note that this predicted effect of long response delays is potentially harmful and virtually undetectable by common testing methods.
On the other hand, we observe that, for fixed values of the delays, gradually increasing the glucose infusion leads to a loss of oscillations. This phenomenon has been observed before and can be interpreted as insufficient insulin secretion to accommodate the infusion, forcing the system to lower glucose levels without oscillating [@li2007analysis]. Figure [3](#fig:Hopfcurve-GinVar){reference-type="ref" reference="fig:Hopfcurve-GinVar"} shows how the location of the curve $\mathbf{H}$ changes for various levels of constant glucose infusion. We observe two different types of change for values in the approximate ranges $0\leq G_\text{in}\leq 0.55$ mg dl$^{-1}$ min$^{-1}$ and $G_\text{in}>0.55$ mg dl$^{-1}$ min$^{-1}$, shown in panels (a) and (b) of Fig. [3](#fig:Hopfcurve-GinVar){reference-type="ref" reference="fig:Hopfcurve-GinVar"}, respectively. Figure [3](#fig:Hopfcurve-GinVar){reference-type="ref" reference="fig:Hopfcurve-GinVar"}(a) suggests that low levels of $G_\text{in}$ promote oscillations in system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} as compared to the fasting case. This trend reverses at approximately $G_\text{in}=0.55$ mg dl$^{-1}$ min$^{-1}$, where the location of the curve $\mathbf{H}$ starts moving to larger and larger values of $\tau_I$ and $\tau_G$, see Fig. [3](#fig:Hopfcurve-GinVar){reference-type="ref" reference="fig:Hopfcurve-GinVar"}(b). At approximately $G_\text{in}=1.2$ mg dl$^{-1}$ min$^{-1}$ the position of $\mathbf{H}$ is comparable with the starting location for $G_\text{in}=0$. Further increasing $G_\text{in}$ moves $\mathbf{H}$ into the physiological range of delay values (light blue) and finally beyond it, causing all oscillations to cease in the physiological parameter regime. Compare also Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(b)-(c) for an illustration of this transition and the loss of oscillations for $(\tau_I,\tau_G)=(5,20)$.
![ Position of the critical curve (curve of Hopf bifurcation) in the $(\tau_I,\tau_G)$-plane for various values of $G_{\text{in}}$ ranging from $0$ to $0.5$ and from $0.6$ to $1.6$ mg dl$^{-1}$ min$^{-1}$ (all black). The light blue rectangle shows the physiological range of delay values for comparison.](Fig/Figure2Hopf.eps){#fig:Hopfcurve-GinVar width="0.7\\linewidth"}
## Entrainment and amplitude response to on-off glucose infusion {#entrainment-and-amplitude-response-to-on-off-glucose-infusion .unnumbered}
We now investigate the effect of periodic glucose infusion on the baseline fasting oscillations shown in Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(b), i.e. we fix $\tau_I=5$ min, $\tau_G=20$ min and periodically switch the level of $G_\text{in}$ between $0$ and a positive value $G_{\max}$ to be specified. The natural period of the ultradian oscillation in this case is $T_0\approx2.2$ h. We show that the resulting glucose and insulin ranges depend sensitively on the period of the on-off infusion. Figures [1](#fig:schema){reference-type="ref" reference="fig:schema"}(d)--(e) show two of the possible outcomes with different maximal infusion strength $G_{\max}$, period of infusion $T_\text{in}$, and infusion duration $t_{\text{in}}$.
![Response of model [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} to glucose infusion protocol [\[eq:Gin-periodic-forcing\]](#eq:Gin-periodic-forcing){reference-type="eqref" reference="eq:Gin-periodic-forcing"} with maximum infusion rate $G_{\max}$ (mg/(dl min)) and length of infusion $t_{\text{in}}=T_{\text{in}}/2$ (h). Shown is the maximum value of $G$ (mg/dl) in colorcode (blue-white) obtained by integration for various $T_{\text{in}}$ and $G_{\max}$ over $100(\tau_I+\tau_G+T_{\text{in}})$ time units. The maximum data is overlaid by curves of torus bifurcation (purple), curves of fold bifurcation of periodic orbits (red) and curves of period doubling bifurcation (magenta) bounding regions of locking to the infusion protocol. Other parameters are $\tau_I=5$ min and $\tau_G=20$ min.](Fig/Figure3.eps){#fig:resonance width="\\linewidth"}
#### Long infusion time compared to period {#long-infusion-time-compared-to-period .unnumbered}
Figure [1](#fig:schema){reference-type="ref" reference="fig:schema"}(d) shows the result of periodic infusion with $G_\text{in}=1.35$ mg dl$^{-1}$ min$^{-1}$, for $t_\text{in}=30$ min every $T_\text{in}=60$ min, resulting in so-called quasi-periodic oscillations. Quasi-periodic oscillations are characterized by an oscillating envelope that evolves on a much slower time scale, compare Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(d). This is in sharp contrast with panels (b) (no infusion) and (c) (constant infusion with the same maximum rate) of Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}, where we observe either periodic oscillations or a decay of oscillations towards the equilibrium state. Quasi-periodic oscillations can be expected in oscillatory systems that are externally driven by an input with non-commensurable period, here $T_0/T_\text{in}\approx2.2$. In this case, the effect of the infusion very much depends on the current state of the system: when insulin is low, glucose increases quickly; when insulin is high, glucose cannot increase further and the infusion only delays the expected decrease in glucose levels.
Periodicity of the oscillations can be restored by adjusting $G_\text{in}$ and $T_\text{in}$. Figure [4](#fig:resonance){reference-type="ref" reference="fig:resonance"} summarizes the response of system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} to periodic forcing with different values of $G_\text{in}$ and $T_\text{in}$. The locus in parameter space of the quasi-periodic oscillation shown in Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(d) is indicated by a green rectangle. Figure [4](#fig:resonance){reference-type="ref" reference="fig:resonance"} shows the overall glucose maximum (in color code) observed over a time span of $100\cdot(T_{\text{in}}+\tau_I+\tau_G)$ minutes. The various mechanisms generating periodic rhythms can be understood from the numerically computed bifurcation curves shown in Fig. [4](#fig:resonance){reference-type="ref" reference="fig:resonance"}. These correspond to curves $\mathbf{T}$ (purple) of torus bifurcations, curves $\mathbf{F}$ (red) of fold (or saddle-node) bifurcations of periodic orbits, and curves $\mathbf{PD}$ (magenta) of period-doubling bifurcations of periodic orbits. These curves mark the transition to periodic solutions and thus characterize the so-called *entrainment* of oscillations.
The curves $\mathbf{F}$ enclose deltoid-like regions -- called resonance or locking tongues -- extending from the line $G_{\max}=0$, inside of which we observe periodic oscillations. The curves $\mathbf{F}$ emerge pairwise from resonant points where the infusion period is a rational multiple of the natural period of the system without infusion, i.e. $pT_{\text{in}}=qT_0$ for integers $p,q$. Figure [4](#fig:resonance){reference-type="ref" reference="fig:resonance"} shows the first three principal resonances of system [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"}, where $p=1,2,3$ and $q=1$. Such resonance tongues are expected to emanate from the line $G_{\max}=0$ at every rational multiple of $T_0$. These higher-order resonances (except $p=4$, $q=1$, which lies outside the considered range of parameter values) have not been computed, as they are typically very narrow and thus unlikely to be physiologically relevant.
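On the $G_{\max}=0$ axis, the roots of the principal tongues can be enumerated directly from the resonance condition $pT_{\text{in}}=qT_0$ with $q=1$; the small helper below (illustrative, not from the paper) does exactly this:

```python
def principal_resonance_periods(T0, p_values=(1, 2, 3)):
    """Infusion periods T_in at which the p:1 locking tongues root on the
    G_max = 0 axis: p*T_in = 1*T0, i.e. T_in = T0 / p (same units as T0)."""
    return [T0 / p for p in p_values]
```

For the natural period $T_0\approx 2.2$ h of the fasting oscillation, this places the first three principal tongue roots at $T_{\text{in}}\approx 2.2$, $1.1$, and $0.73$ h.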
This behavior persists moving towards larger values of $G_{\max}$ into the regions bounded approximately by the curves $\mathbf{T}$, where the underlying stable periodic orbit destabilizes and gives rise to a torus that corresponds to quasi-periodic oscillations. We find numerical evidence that the direction in which this torus emanates from the curve $\mathbf{T}$ can change, giving rise to the discontinuous transition between the observed maximum values in Fig. [4](#fig:resonance){reference-type="ref" reference="fig:resonance"}. The curves $\mathbf{T}$ each emanate from a point of intersection with a curve $\mathbf{F}$ or $\mathbf{PD}$. Intersections with curves $\mathbf{PD}$ correspond to higher-order locking between the ultradian oscillations and the infusion. Overall, we observe that the strength and period of the infusion have a crucial effect on the resulting amplitude of the oscillations. For instance, forcing the system periodically with $T_\text{in}=T_0$ and amplitude $G_\text{in}=1$ mg dl$^{-1}$ min$^{-1}$ leads to a $40\%$ increase of the overall amplitude of the oscillation (which appears to remain in the physiological range). In contrast, stimulating the system with a gradually increasing $G_\text{in}$ in the 2:1 regime first goes through a phase during which glucose amplitudes remain relatively constant before slowly increasing.
More generally, we observe that, for the assumed values of the response delays, periodic infusion with $T_{\text{in}}=2t_{\text{in}}>T_0$ and sufficiently strong $G_\text{in}$ causes the period of the resulting glucose-insulin oscillation to be set by (locked to) the period of the glucose infusion.
#### Short infusion time compared to period {#short-infusion-time-compared-to-period .unnumbered}
We note here that locking can also be achieved when the same glucose dose is delivered over a shorter time, resulting in a more concentrated and intense infusion. To further explore this phenomenon, we conducted additional numerical experiments using an on-off glucose infusion protocol with a fixed infusion period of $T_{\rm{in}}=180$ min. Figure [5](#fig:resonance2){reference-type="ref" reference="fig:resonance2"} shows the results of these experiments, where we varied both the infusion time $t_{\rm{in}}$ and the average glucose dose per minute $\bar G=G_{\max}\cdot t_{\rm{in}} / T_{\rm{in}}$.
In this figure, the locus in parameter space corresponding to the quasi-periodic orbit illustrated in Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(e) is marked by a yellow diamond, highlighting the specific combination of infusion time and glucose dose that leads to the observed quasi-periodic behavior. We also show a torus bifurcation curve, labeled $\mathbf{T}$, which marks the critical transition between entrainment and quasi-periodic oscillation in response to the infusion protocol.
![ Response of model [\[eq:G\]](#eq:G){reference-type="eqref" reference="eq:G"}--[\[eq:I\]](#eq:I){reference-type="eqref" reference="eq:I"} to glucose infusion protocol [\[eq:Gin-periodic-forcing\]](#eq:Gin-periodic-forcing){reference-type="eqref" reference="eq:Gin-periodic-forcing"} with average infusion rate $\bar G=G_{\max} \cdot t_{\text{in}}/T_{\text{in}}$ mg dl$^{-1}$ min$^{-1}$ over the length of infusion $t_{\text{in}}$ (min) with constant period $T_{\text{in}}=180$ (min). Shown is the maximum value of $G$ (mg/dl) in colorcode (blue-white) obtained by integration for various $t_{\text{in}}$ and $\bar G$. The maximum data is overlaid by curves of torus bifurcation (purple), curves of fold bifurcation of periodic orbits (red) and curves of period doubling bifurcation (magenta) bounding regions of locking to the infusion protocol. Other parameters are $\tau_I=5$ min and $\tau_G=20$ min.](Fig/Figure5var.eps){#fig:resonance2 width="0.7\\linewidth"}
# Discussion {#discussion .unnumbered}
It is well documented that glucose rhythms stimulate pulsatile pancreatic insulin secretion at various timescales [@sturisetal1991a; @satin2015pulsatile]. For example, the 1:1 entrainment mode -- namely one ultradian glucose oscillation per glucose infusion cycle -- was clinically shown to be present using a sinusoidal glucose infusion in individuals without diabetes [@o1993lack; @sturis1995phase]. Our analysis of periodically driven ultradian oscillations highlights that a periodic on-off stimulus, closer to normal daily conditions, also possesses the ability to entrain glucose rhythms. Furthermore, the duration of each glucose input has a crucial impact on the generation of periodic rhythms, as well as on the attained glycemic levels. This theoretically provides a method for delivering a fixed glucose dose while minimising the amplitude of the resulting rhythm, achieved by altering either the period of the infusion or the length of each pulse. This is most clearly seen in Fig. [5](#fig:resonance2){reference-type="ref" reference="fig:resonance2"}, where stretching the infusion duration leads to lower glucose amplitudes. For example, consider a scenario where glucose is infused every 180 minutes over a 12-hour period. Infusing a dose with $G_{\max} = 2.4$ mg dl$^{-1}$ min$^{-1}$ over $t_{\text{in}}=30$ minutes leads to a maximal glucose value around $150$ mg/dl. In contrast, a dose with $G_{\max} = 1.2$ mg dl$^{-1}$ min$^{-1}$ over $t_{\text{in}}=60$ minutes reduces the maximal glucose level to around $125$ mg/dl. In both cases, the average dose per minute is $\bar{G} = 0.4$ mg dl$^{-1}$ min$^{-1}$, and a total dose of $288$ mg/dl is infused over the 12-hour timespan.
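The dose bookkeeping in this example can be made explicit; the helpers below (names are ours, for illustration) follow directly from the definition $\bar G = G_{\max}\, t_{\text{in}}/T_{\text{in}}$ and reproduce the stated figures:

```python
def average_rate(G_max, t_in, T_in):
    """Average infusion rate (mg/dl/min) of an on-off protocol delivering
    G_max for t_in minutes out of every T_in minutes."""
    return G_max * t_in / T_in

def total_dose(G_max, t_in, T_in, duration_min):
    """Total glucose delivered (mg/dl) over duration_min minutes."""
    return average_rate(G_max, t_in, T_in) * duration_min
```

Both protocols in the example average $0.4$ mg dl$^{-1}$ min$^{-1}$ and deliver $288$ mg/dl over the 12-hour (720-minute) window.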
Our study provides valuable insights into the system's response to glucose infusion patterns, identifying multiple pathways for the production of stable oscillatory rhythms; a similar entrainment structure is also expected for simpler delay models of glucose-insulin regulation, e.g. [@panunzi_discrete_2007; @li_range_2012; @shi2017oscillatory]. Nonetheless, several limitations should be considered. First, while the exact location of the bifurcation curves depends on model parameters, the bifurcation types are likely to remain the same for parameter ranges representing non-diabetic individuals. Second, our model assumes fixed values for the delays in the insulin and glucose production pathways, represented by $\tau_I$ and $\tau_G$ respectively. In reality, these delays can vary between individuals and change over short and long timescales due to daily-life factors such as exercise, aging, and the presence of insulin resistance. Future research could incorporate individual-specific delays to account for this variability and investigate their impact on the system's dynamics.
It is worth noting that our model relies solely on plasma glucose and insulin measurements for prediction, which highlights the importance of accurate and reliable measurements in clinical settings. The nonlinear structure of the model allows for the description of nontrivial dynamics and enhances parameter identifiability. This aspect is crucial for developing robust and accurate models that can capture the complex dynamics of the glucose-insulin regulatory system.
It is also worth noting that the timing of the glucose infusion influences neither the bifurcation structure (Fig. [4](#fig:resonance){reference-type="ref" reference="fig:resonance"}) nor the glucose-insulin ranges of the periodic rhythms. In other words, the long-term dynamics do not depend on the starting time of the periodic on-off glucose infusion. This does not mean that the timing of glucose inputs bears little importance. While the investigated infusion ranges ensured the positivity of glucose and insulin values, values below or above healthy physiological ranges may appear in the transient path to the limit cycle. In turn, additional dynamics may emerge from interactions with other physiological feedback loops or subsystems, such as the glucagon pathway or the hypothalamic-pituitary-adrenal axis, whose alignment with glucose regulation is essential for maintaining good health [@zavala2022misaligned]. The recent incorporation of glucagon [@cohen2021novel] in models of the glucose-insulin feedback system may help provide a more complete and quantitative picture of the dynamical interactions occurring within the pancreas [@Pedersen201329; @montefusco2020heterogeneous], which can be used to improve quantitative tests for the detection and measurement of insulin and glucagon resistance [@morettini2021mathematical].
Another aspect to consider is the interaction between the glucose-insulin regulatory system and other physiological processes. Our model focuses solely on the glucose-insulin loop, but in reality, there are complex interactions between various metabolic pathways, hormones, and organs. Integrating these interactions into a comprehensive model could provide a more complete understanding of the system's behavior and its response to different stimuli.
# Conclusion {#conclusion .unnumbered}
In this study, we employed a system-level mathematical model to investigate the response of the glucose-insulin regulatory system to periodic glucose infusion. By exploring different glucose infusion patterns and analyzing the resulting dynamics, we gained insights into the system's behavior and identified key factors influencing its response.
Our findings demonstrate that the glucose-insulin regulatory system exhibits a range of behaviors depending on the glucose infusion pattern. When a constant glucose infusion is applied, the system shows ultradian oscillations characterized by periodic variations in glucose and insulin levels. However, once the glucose infusion rate exceeds a certain threshold, these oscillations disappear and the system settles to a steady state in which glucose is reduced without oscillatory behavior. This observation suggests a physiological limit beyond which the system's oscillatory capacity is overwhelmed.
We further investigated the effects of periodic on-off pulses, mimicking repeated intravenous glucose tolerance tests. Our analysis revealed that the period of the on-off pulses plays a crucial role in determining the system's dynamics and glucose-insulin ranges. Different patterns of oscillations, including stable limit cycles and irregular oscillations, were observed for varying infusion periods. This highlights the importance of considering the frequency and duration of glucose stimuli in understanding the system's response.
The results of this study have important implications for understanding glucose regulation in both normal and abnormal physiological conditions. By elucidating the system's response to different glucose infusion patterns, our findings can inform the development of test strategies for evaluating the system's performance and identifying potential dysfunctions. Furthermore, they provide insights into the underlying mechanisms governing glucose-insulin dynamics, contributing to the broader understanding of metabolic regulation.
In conclusion, our study enhances our understanding of the glucose-insulin regulatory system by investigating its response to periodic glucose infusion. We identified the impact of different glucose infusion patterns on the system's dynamics and demonstrated the importance of various types of glucose stimuli. These insights can aid in the development of diagnostic and therapeutic strategies for glucose regulation and contribute to advancements in the management of metabolic disorders. Future research should aim to incorporate individual-specific delays and consider the broader physiological context to further refine our understanding of glucose regulation and its implications for human health.
# Methods {#methods .unnumbered}
## The glucose-insulin regulatory delayed-feedback model {#the-glucose-insulin-regulatory-delayed-feedback-model .unnumbered}
We consider the system-level mathematical model $$\begin{aligned}
G'(t) &= G_{\text{in}}(t) - f_2(G(t)) - f_3(G(t))f_4(I(t)) + f_5(I(t-\tau_G)) \label{eq:G}\\
I'(t) &= I_{\text{in}}(t) + f_1(G(t-\tau_I)) - d I(t) \label{eq:I}\end{aligned}$$ with variables $G(t)$ and $I(t)$ representing the concentrations of glucose (mg dl$^{-1}$) and insulin ($\mu$U ml$^{-1}$) in the plasma at time $t$. System ([\[eq:G\]](#eq:G){reference-type="ref" reference="eq:G"})--([\[eq:I\]](#eq:I){reference-type="ref" reference="eq:I"}) explicitly depends on the time delays $\tau_I$ and $\tau_G$, respectively representing the system's response time for insulin production as a result of glucose uptake, and for the production of glucose by the liver as a result of low insulin levels. Glucose intake and insulin infusion are modeled by the parameters $G_{\rm{in}}$ and $I_{\rm{in}}$. The physiological response of the body is modeled by the nonlinearities $$\begin{aligned}
f_1(G) &= \frac{R_m G^{h_1}}{G^{h_1} + (V_g k_1)^{h_1}},\\
f_2(G) &= \frac{U_b G^{h_2}}{G^{h_2} + (V_g k_2)^{h_2}},\\
f_3(G) &= \frac{G}{C_3 V_g},\\
f_4(I) &= U_0 + \frac{ (U_m - U_0)I^{h_4}}{I^{h_4} + (1/V_i + 1/(Et_i))^{-h_4}k_4^{h_4}},\\
f_5(I) &= \frac{R_g I^{h_5}}{I^{h_5} + (V_p k_5)^{h_5}},\end{aligned}$$ where $R_m=210,$ $V_i=11$, $V_g=10$, $E=0.2$, $U_b=72$, $t_i=100$, $C_3=1000$, $R_g=180$, $U_0=40$, $V_p=3$, $U_m=940$, $h_1=2$, $k_1=6000$, $h_2 = 1.8$, $k_2=103.5$, $h_4 = 1.5$, $k_4=80$, $h_5=-8.54$, and $k_5=26.7$ with corresponding units. Insulin degradation is modeled by a constant rate $d$. Throughout the paper we fix $d=0.06$. The model has been analyzed extensively by various authors [@li2006modeling; @li2007analysis; @huard2015investigation; @huard2017mathematical]. In particular, it can be shown that, for the parameter values considered and in the absence of infusion, there is a unique equilibrium solution $(G^\ast,I^\ast)$; see for example \[\]. The delay parameters used for numerical simulation are $\tau_I=5$ and $\tau_G=20$ if not stated otherwise. For the general theory of delay differential equations, such as existence, uniqueness and the stability of solutions, we refer the interested reader to classic textbooks on the topic [@hale2013introduction; @diekmann2012delay].
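For readers wishing to experiment numerically, the nonlinearities above transcribe directly into code. The following Python sketch is our own transcription (function and variable names are ours, not taken from any published implementation); it defines $f_1,\dots,f_5$ with the stated parameter values.

```python
# Parameter values from the Methods section (units as in the text).
Rm, Vi, Vg, E, Ub = 210.0, 11.0, 10.0, 0.2, 72.0
ti, C3, Rg, U0, Vp, Um = 100.0, 1000.0, 180.0, 40.0, 3.0, 940.0
h1, k1 = 2.0, 6000.0
h2, k2 = 1.8, 103.5
h4, k4 = 1.5, 80.0
h5, k5 = -8.54, 26.7

def f1(G):
    """Insulin secretion stimulated by glucose."""
    return Rm * G**h1 / (G**h1 + (Vg * k1)**h1)

def f2(G):
    """Insulin-independent glucose utilisation."""
    return Ub * G**h2 / (G**h2 + (Vg * k2)**h2)

def f3(G):
    """Linear factor in insulin-dependent glucose utilisation."""
    return G / (C3 * Vg)

def f4(I):
    """Insulin-dependent glucose utilisation."""
    c = (1.0 / Vi + 1.0 / (E * ti))**(-h4) * k4**h4
    return U0 + (Um - U0) * I**h4 / (I**h4 + c)

def f5(I):
    """Hepatic glucose production, inhibited by insulin (note h5 < 0)."""
    return Rg * I**h5 / (I**h5 + (Vp * k5)**h5)
```

A quick sanity check follows from the Hill-type forms: each sigmoidal $f_i$ attains half its saturation value at its half-saturation constant, e.g. $f_2(V_g k_2)=U_b/2$ and $f_5(V_p k_5)=R_g/2$.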
## Critical delay values for oscillatory behavior when infusion rate is constant {#critical-delay-values-for-oscillatory-behavior-when-infusion-rate-is-constant .unnumbered}
The critical curve for oscillations in the ($\tau_I,\tau_G$)-parameter plane can be computed from the linearization of system ([\[eq:G\]](#eq:G){reference-type="ref" reference="eq:G"})--([\[eq:I\]](#eq:I){reference-type="ref" reference="eq:I"}) about the equilibrium solution $(G^\ast,I^\ast)$ and imposing the condition $\lambda=i\omega,~\omega>0$ (Hopf bifurcation) on solutions of the corresponding characteristic equation $$\label{eq:char}
0=\chi(\lambda):=\lambda^2+\alpha_1 \lambda +\alpha_0
+\beta_1 e^{-\lambda \tau_1} + \beta_2 e^{-\lambda \tau_2},$$ where $\tau_1=\tau_I,$ $\tau_2=\tau_I+\tau_G$ and $\alpha_1=f_2^\prime
(G^\ast) + f^\prime_3(G^\ast)f_4(I^\ast)+d$, $\alpha_0=d(f_2^\prime
(G^\ast) + f^\prime_3(G^\ast)f_4(I^\ast))$, $\beta_1=f^\prime_
1(G^\ast)f_3(G^\ast)f^\prime_4
(I^\ast)$, $\beta_2=-f^\prime_1
(G^\ast)f^\prime_5(I^\ast)$. A detailed derivation of Eq. ([\[eq:char\]](#eq:char){reference-type="ref" reference="eq:char"}) can be found in \[\].\
The equation $0=\chi(i\omega)$ can be solved parametrically for $\tau_1$ and $\tau_2$ to give $$\begin{aligned}
\tau_{1,2}(\omega)&=\frac{1}{\omega}\left(\arctan\left(\frac{\alpha_1\omega}{\omega^2-\alpha_0}\right) + \arccos\left(\frac{\beta_{2,1}^2-\beta_{1,2}^2-(\omega^2-\alpha_0)^2 -\alpha_1^2\omega^2}{2\beta_{1,2}\sqrt{(\omega^2-\alpha_0)^2 +\alpha_1^2\omega^2}}\right)
\right),\label{eq:tau_om}\end{aligned}$$ revealing the critical curve for oscillations $\mathbf{H}\subset \mathbb{R}^2$ (curve of Hopf bifurcation) $$\begin{aligned}
\mathbf{H}(\omega)&=(\tau_I(\omega),\tau_G(\omega))=(\tau_1(\omega),\tau_2(\omega)-\tau_1(\omega)).\end{aligned}$$ For the considered parameter values, we have that $\alpha_1 > \alpha_0$ and $\beta_2>\alpha_0$, ensuring the existence of $\mathbf{H}$. Indeed, the curve is a sharp threshold for oscillation, as it can be shown numerically that for positive values $(\tau_I,\tau_G)$ below $\mathbf{H}$ the fixed point $(G^\ast,I^\ast)$ is stable for any physiological range of starting values $G$ and $I$. It is worth noting here that system ([\[eq:G\]](#eq:G){reference-type="ref" reference="eq:G"})--([\[eq:I\]](#eq:I){reference-type="ref" reference="eq:I"}) undergoes further Hopf bifurcations, respectively at $\tau_{I,k}(\omega)=\tau_{I}(\omega)+2\pi k/\omega$ and $\tau_{G,l}(\omega)=\tau_{G}(\omega)+2\pi l/\omega$ with $k,l$ integers; however, for the parameter values considered, we can restrict ourselves to the smallest positive such value pair to cover the physiological parameter range. The range of relevant values of $\omega$ resulting in positive delays cannot be computed explicitly; however, straightforward calculations show that the boundaries $\omega_{I},\omega_{G}$ satisfying $\tau_I(\omega_{I})=0$ and $\tau_G(\omega_{G})=0$ are given by $$\begin{aligned}
\omega_G &= \sqrt{\alpha_0-\frac{\alpha_1^2}{2}
+\sqrt{\left(\alpha_0-\frac{\alpha_1^2}{2}\right)^2+(\beta_1+\beta_2)^2-\alpha_0^2}},\\
\omega_I &= \sqrt{\alpha_0+\beta_1-\frac{\alpha_1^2}{2}
+\sqrt{\left(\alpha_0+\beta_1-\frac{\alpha_1^2}{2}\right)^2+\beta_2^2-\alpha_0^2}},\end{aligned}$$ with the corresponding delay values $$\begin{aligned}
\tau_I(\omega_{G}) &=\frac{1}{\omega_{G}}\arctan \left(\frac{\alpha_1\omega_{G}}{\omega_{G}^2-\alpha_0}\right) + \frac{2\pi k^\ast}{\omega_G},\\
\tau_G(\omega_{I}) &=\frac{1}{\omega_{I}}\arctan \left(\frac{\alpha_1\omega_{I}}{\omega_{I}^2-\alpha_0-\beta_1}\right)+ \frac{2\pi l^\ast}{\omega_I},\end{aligned}$$ where $k^\ast,l^\ast$ are the smallest integers such that $\tau_G$ and $\tau_I$ are positive.
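The closed-form parametrization above requires care with the branches of $\arctan$ and $\arccos$. As an illustrative sketch (our own construction, not the authors' code), the condition $0=\chi(i\omega)$ can equivalently be solved by the underlying triangle geometry: the two delayed terms, of moduli $\beta_1$ and $\beta_2$, must sum to $-(\alpha_0-\omega^2+i\alpha_1\omega)$.

```python
import cmath
import math

def hopf_delays(omega, a0, a1, b1, b2):
    """Find delays (tau1, tau2) with chi(i*omega) = 0, where
    chi(lam) = lam^2 + a1*lam + a0 + b1*e^{-lam*tau1} + b2*e^{-lam*tau2},
    by solving the triangle condition |(-z) - b1*e^{i*phi1}| = b2.
    Here tau1 = tau_I and tau2 = tau_I + tau_G; the physically relevant
    branch may require adding multiples of 2*pi/omega."""
    z = (a0 - omega**2) + 1j * a1 * omega
    R = abs(z)
    if not (abs(b1 - b2) <= R <= b1 + b2):
        raise ValueError("no Hopf point at this omega")
    # Angle between b1*e^{i*phi1} and -z, from the law of cosines.
    theta1 = math.acos((b1**2 + R**2 - b2**2) / (2 * b1 * R))
    phi1 = cmath.phase(-z) + theta1
    phi2 = cmath.phase(-z - b1 * cmath.exp(1j * phi1))
    # Convert phases to the smallest nonnegative delays.
    tau1 = (-phi1 / omega) % (2 * math.pi / omega)
    tau2 = (-phi2 / omega) % (2 * math.pi / omega)
    return tau1, tau2

def chi(lam, tau1, tau2, a0, a1, b1, b2):
    """Characteristic function of the linearized two-delay system."""
    return (lam**2 + a1 * lam + a0
            + b1 * cmath.exp(-lam * tau1) + b2 * cmath.exp(-lam * tau2))
```

The coefficient values used below for testing are arbitrary illustrative numbers satisfying the triangle inequality, not the model's actual linearization coefficients; the residual $|\chi(i\omega)|$ vanishes by construction.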
We remark that the curve $\mathbf{H}$ closely resembles a straight line with slope $-1$ in the $(\tau_I,\tau_G)$-plane. This can be understood by exploiting the fact that the parameter $\beta_1$ is small, of order $10^{-3}$. Imposing the regular perturbation ansatz $\omega=\omega_0+\beta_1\omega_1+\mathcal{O}(\beta_1^2)$ on the imaginary part of ([\[eq:char\]](#eq:char){reference-type="ref" reference="eq:char"}) and comparing at zeroth and first order in $\beta_1$, we formally obtain $$\begin{aligned}
\omega_0 &= \sqrt{\alpha_0-\frac{\alpha_1^2}{2}
+\sqrt{\left(\alpha_0-\frac{\alpha_1^2}{2}\right)^2+\beta_2^2-\alpha_0^2}},\\
\omega_1 &= \frac{\omega_0}{\alpha_1-\alpha_0+\omega_0^2}\tau_I \leq \frac{1}{2 \sqrt{\alpha_1-\alpha_0}} \tau_I.\end{aligned}$$
Thus, we can approximate $\mathbf{H}$ to first order in $\beta_1$ $$\begin{aligned}
\mathbf{H}(\omega_0 + \beta_1\omega_1(\tau_I))&\approx\left(\tau_I,\tau_2(\omega_0 + \beta_1\omega_1(\tau_I))-\tau_I\right)\end{aligned}$$ by using the expression $\tau_2(\omega)$ in ([\[eq:tau_om\]](#eq:tau_om){reference-type="ref" reference="eq:tau_om"}). As a result, $\mathbf{H}$ approaches the graph of the function $\tau_I\mapsto \tau_2(\omega_0)-\tau_I$ with slope $-1$ as $\beta_1\to 0$, which can be considered as a zero order approximation of $\mathbf{H}$.
## On-off periodic infusion {#on-off-periodic-infusion .unnumbered}
Periodic shot infusion is modeled by a smooth, periodic, rapidly switching function that alternates between $G_{\text{in}}(t)=0$ in mg/(dl$\cdot$min) (no infusion) and $G_{\text{in}}(t)=G_{\max}$ (glucose infusion), while $I_{\text{in}}(t)=0$ throughout. We consider the specific form $$\begin{split}
G_{\text{in}}(t) = G_{\max}\cdot s(t-\sigma_G), \,
s(t)=h(\sin(2\pi t/T_{\text{in}}))\cdot h(\sin(2\pi(t-t_{\text{in}})/ T_{\text{in}} - \pi)),
\end{split}
\label{eq:Gin-periodic-forcing}$$ where the sigmoidal function $h(y)=(1+\exp(-k y))^{-1}$ can be considered a smooth version of the Heaviside step function ($H(y)=0$ if $y<0$, and $H(y)=1$ if $y\geq0$) for sufficiently large $k$. The form of ([\[eq:Gin-periodic-forcing\]](#eq:Gin-periodic-forcing){reference-type="ref" reference="eq:Gin-periodic-forcing"}) was inspired by a model for auditory perception [@ferrario2021auditory]. $T_{\text{in}}$ is the time between consecutive shots of glucose/insulin with duration $t_{\text{in}}$, and the lag $\sigma_G$ can be used to specify the timing of the infusion with respect to the underlying oscillation. The parameter $k$ controls the steepness of the onset and offset of each shot. A detailed analysis of the influence of the $k$-parameter is beyond the scope of this paper. We found the choice $k=100$ to be sufficient. Panels (a) and (b) of Figure [6](#fig:forcing){reference-type="ref" reference="fig:forcing"} show the shape of the infusion patterns used for numerical computation of time series in Fig. [1](#fig:schema){reference-type="ref" reference="fig:schema"}.
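Equation ([\[eq:Gin-periodic-forcing\]](#eq:Gin-periodic-forcing){reference-type="ref" reference="eq:Gin-periodic-forcing"}) is straightforward to implement. The sketch below is a minimal transcription (function names are our own) using the default $k=100$; the pulse is approximately $1$ on the first $t_{\text{in}}$ units of each period and approximately $0$ otherwise.

```python
import math

def h(y, k=100.0):
    """Smooth approximation of the Heaviside step function."""
    return 1.0 / (1.0 + math.exp(-k * y))

def s(t, T_in, t_in, k=100.0):
    """Smoothed on-off pulse of period T_in and duration t_in < T_in/2."""
    return (h(math.sin(2 * math.pi * t / T_in), k)
            * h(math.sin(2 * math.pi * (t - t_in) / T_in - math.pi), k))

def G_in(t, G_max, T_in, t_in, sigma_G=0.0, k=100.0):
    """On-off glucose infusion with amplitude G_max and lag sigma_G."""
    return G_max * s(t - sigma_G, T_in, t_in, k)
```

For example, with $T_{\text{in}}=120$ and $t_{\text{in}}=30$, the pulse is essentially on at $t=15$ and essentially off at $t=75$.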
![Form of glucose infusion used to obtain Figs. [1](#fig:schema){reference-type="ref" reference="fig:schema"}(d)--(e).](Fig/Figure6.eps){#fig:forcing width="0.75\\linewidth"}
## Numerical bifurcation analysis of time-delay systems with periodic infusion {#numerical-bifurcation-analysis-of-time-delay-systems-with-periodic-infusion .unnumbered}
Numerical simulations have been obtained using pydelay [@flunkert2009pydelay]. Numerical bifurcation analysis has been performed using the software package DDE-BIFTOOL for Matlab/Octave [@sieber2014dde]. For a general introduction to numerical continuation methods available for delay differential equations and their application to physiological systems see Refs. and [@engelborghs2002numerical], respectively. Isocurves in Fig. [2](#fig:Hopfcurve-ext){reference-type="ref" reference="fig:Hopfcurve-ext"} have been computed using numerical continuation of periodic orbits in two parameters with the additional condition of fixed period in (a), and of fixed maximum value in (b) and (c), where in cases (b) and (c) we also relaxed the phase condition. For bifurcation analysis in the presence of periodic infusion, we append the two-dimensional ordinary differential equation $$\begin{aligned}
x^\prime (t) &= x(t) - \omega y(t) - x(t)(x(t)^2+y(t)^2),\\
y^\prime (t) &= \omega x(t) + y(t) - y(t)(x(t)^2+y(t)^2),\end{aligned}$$ with known stable periodic solution $(x(t),y(t))=(\cos(\omega t),\sin(\omega t))$ to system ([\[eq:G\]](#eq:G){reference-type="ref" reference="eq:G"})--([\[eq:I\]](#eq:I){reference-type="ref" reference="eq:I"}). The method has been employed in several other works, see for example Ref. . We achieve the specific form of infusion ([\[eq:Gin-periodic-forcing\]](#eq:Gin-periodic-forcing){reference-type="ref" reference="eq:Gin-periodic-forcing"}) by setting $G_{\text{in}}(t) = G_{\max}\, h(y(t-\sigma_G))\,h(y(t-\sigma_G-t_{\text{in}}-\pi/\omega))$, where $\omega=2\pi/T_{\text{in}}$.
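As a quick consistency check of this construction (our own sketch, not the DDE-BIFTOOL setup), the appended planar system, with signs chosen so that $(\cos(\omega t),\sin(\omega t))$ is the stable periodic solution, can be integrated by forward Euler to confirm attraction to the unit circle.

```python
import math

def integrate_normal_form(omega=1.0, x=0.5, y=0.0, dt=1e-3, t_end=50.0):
    """Forward-Euler integration of the Hopf normal-form oscillator
    x' = x - omega*y - x*(x^2 + y^2),  y' = omega*x + y - y*(x^2 + y^2),
    whose stable limit cycle is the unit circle (cos(wt), sin(wt))."""
    for _ in range(int(t_end / dt)):
        r2 = x * x + y * y
        dx = x - omega * y - x * r2
        dy = omega * x + y - y * r2
        x, y = x + dt * dx, y + dt * dy
    return x, y

x, y = integrate_normal_form()
r = math.hypot(x, y)  # should be close to 1 after transients
```

Any initial condition other than the origin is attracted to the unit circle, which is why the oscillator provides a robust autonomous clock for the forcing phase.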
# Acknowledgements {#acknowledgements .unnumbered}
The authors thank Jan Sieber for helpful discussions on the implementation of the numerical continuation methods in DDE-BIFTOOL.
# Author contributions statement {#author-contributions-statement .unnumbered}
S.R. and B.H contributed equally to the study. Both authors conducted numerical experiments, analyzed the results, and contributed to the draft of the manuscript. Both authors reviewed the manuscript.
# Data availability statement {#data-availability-statement .unnumbered}
Numerical procedures to generate figures are available from the corresponding author on reasonable request.
# Additional information {#additional-information .unnumbered}
The authors declare no competing interests.
---
abstract: |
In the case where $\mathbb{k}=\mathbb{F}_2$, for $S$ a contravariant functor from the category of finite dimensional $\mathbb{k}$-vector spaces to the category of sets, and for $\mathfrak{S}_S$ the category of elements of $S$, the category $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$, of functors from $\mathfrak{S}_S$ to the category of vector spaces, appears naturally as some localisation of the category of unstable $K$-modules (for some unstable algebra $K$, depending on $S$, over the mod $2$ Steenrod algebra).
In the case where $S$ satisfies some noetherianity condition (satisfied in the case where the unstable algebra $K$ is noetherian), we are able to describe the category $\mathfrak{S}_S$. In this case, we can define a notion of polynomial functors on $\mathfrak{S}_S$. And like in the usual setting of functors from the category of finite dimensional vector spaces to the one of vector spaces of any dimension, we can describe the quotient $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$, where $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ denote the full subcategory of $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ of polynomial functors of degree less than or equal to $n$.
Finally, if $\mathbb{k}=\mathbb{F}_p$ for some prime $p$ and if $S$ satisfies the required noetherianity condition, we can compute the set of isomorphism classes of simple objects in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$.
author:
- OURIEL BLOEDE
bibliography:
- Biblio.bib
title: POLYNOMIAL FUNCTORS ON SOME CATEGORIES OF ELEMENTS
---
# Introduction
**Notation 1**. In the following, for $F$ a functor on a category $\mathcal{C}$, $c$ and $c'$ objects of $\mathcal{C}$ and $f\ :\ c\rightarrow c'$ an arrow in $\mathcal{C}$, if there is no ambiguity on $F$, we will denote by $f_*$ the induced map $F(f)$ from $F(c)$ to $F(c')$ if $F$ is covariant, and by $f^*$ the induced map from $F(c')$ to $F(c)$ if $F$ is contravariant.\
We denote by $\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$ the category of contravariant functors from $\mathcal{V}^f$ the category of finite dimensional vector spaces over a given field $\mathbb{k}$ to $\mathcal{S}\mathrm{et}$ the category of sets, and by $\mathcal{F}\text{in}^{(\mathcal{V}^f)^\text{op}}$ the full subcategory of $\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$ with objects the functors with values in $\mathcal{F}\mathrm{in}$ the category of finite sets.
## The category $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$
For $S\in\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$, the category of elements $\mathfrak{S}_S$ is the category whose objects are the pairs $(W,\psi)$ with $W\in\mathcal{V}^f$ and $\psi$ in $S(W)$ and whose morphisms from $(W,\psi)$ to $(H,\eta)$ are the morphisms $\gamma$ of $\mathbb{k}$-vector spaces from $W$ to $H$ satisfying $\gamma^*\eta=\psi$.\
The aim of this article is to study the category $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ of functors from the category $\mathfrak{S}_S$ to the category $\mathcal{V}$ of $\mathbb{k}$-vector spaces of any dimension, under some conditions on $S$.\
We explain succinctly (see [@B2]) how such categories appear in the study of unstable modules over an unstable algebra $K$ over the mod 2 Steenrod algebra. For $K$ in $\mathcal{K}$ the category of unstable algebras, we consider the functor $S$ that maps the vector space $W$ to $\text{Hom}_\mathcal{K}(K,H^*(W))$, for $H^*(W)$ the cohomology with coefficients in $\mathbb{F}_2$ of the classifying space $BW$. This functor takes its values in the category of profinite sets.\
The functor that maps $W$ to $\mathbb{F}_2^{S(W)}$ (the set of continuous maps from $S(W)$ to $\mathbb{F}_2$) is then an algebra in the category $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$ of functors from $\mathcal{V}^f$ to $\mathcal{V}$. For $K-\mathcal{U}$ the category of unstable $K$-modules, and $\mathcal{N}il$ the localising subcategory of nilpotent modules, we have an equivalence of categories between $K-\mathcal{U}/\mathcal{N}il$ and the full subcategory of analytic functors in $\mathbb{F}_2^S-\mathcal{M}od$.\
In the case where $K$ is noetherian, $S$ takes values in $\mathcal{F}in$ and $\mathbb{F}_2^S-\mathcal{M}od\cong\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ (cf [@djament:tel-00119635]).\
In section [2](#partintr){reference-type="ref" reference="partintr"}, we recall the definition of the kernel of an element of $\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$ as well as the definition of a noetherian functor from [@HLS2]. We use those to describe the category $\mathfrak{S}_S$ in the case where $S$ satisfies a condition slightly weaker than the noetherianity condition of [@HLS2].
## Polynomial functors in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$
In section [3](#part2){reference-type="ref" reference="part2"}, we define and study a notion of polynomial functor in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$. Polynomial functors over an additive category are already well studied and have very interesting properties such as homological finiteness. They have been of importance in computing the simple objects of the category $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$ (see for example [@PS]). The category $\mathfrak{S}_S$ is not additive, yet in the case where $S$ satisfies the weaker noetherianity condition, we can still introduce a notion of polynomiality. For $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ the full subcategory of polynomial functors of degree at most $n$ on $\mathfrak{S}_S$, we get the following theorem, where the category $\mathfrak{R}_S$, which we will introduce in the first section, is equivalent to a category with a finite set of objects, in the case where $S$ is noetherian in the sense of [@HLS2].
There is an equivalence of categories between $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$ and\
$\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$.
## Simple functors in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$
In the case where $\mathbb{k}$ is a finite field $\mathbb{F}_p$ with $p$ prime, using similar techniques to those presented in [@PS] we are able to describe simple objects in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$.
There is a one-to-one correspondence between isomorphism classes of simple objects of $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ and isomorphism classes of simple objects of $$\bigsqcup\limits_{(W,\psi),n}\mathbb{F}_p\left[\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi)\times\Sigma_n\right]-\mathcal{M}od$$ with $(W,\psi)$ running through the isomorphism classes of objects in $\mathfrak{R}_S$ and $n$ running through $\mathbb{N}$.
**Acknowledgements:** I am thankful to Geoffrey Powell for his careful proofreading and for his continued support during and after my PhD. This work has been partially supported by the Labex CEMPI (ANR-11-LABX-0007-01).
# Noetherian functors {#partintr}
In this section, we start by recalling the definition of a noetherian functor from [@HLS2] and we introduce the weaker noetherianity condition that will be needed in the following sections.\
For $S$ satisfying the weaker noetherianity condition, the category $\mathfrak{S}_S$ can be described using Rector's category $\mathfrak{R}_S$. We introduce this category, and end this section by comparing the functor categories $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ and $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$.
## Definition and first properties
We start by recalling the definition of the kernel of an element of $S(V)$, for $S\in\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$ and $V\in\mathcal{V}^f$.
**Proposition 2**. *[@HLS2 Proposition-Definition 5.1][\[imp\]]{#imp label="imp"} Let $S\in\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$, $V\in\mathcal{V}^f$ and $s\in S(V)$. Then, there exists a unique sub-vector space $U$ of $V$, denoted by $\text{ker}(s)$, such that:*
1. *For all $t\in S(W)$ and all morphism $\alpha\ :\ V\rightarrow W$ such that $s=\alpha^*t$, $\text{ker}(\alpha)\subset U$.*
2. *There exists $W_0$ in $\mathcal{V}^f$, $t_0\in S(W_0)$ and $\alpha_0\ :\ V\rightarrow W_0$ such that $s=\alpha_0^*t_0$ and $\text{ker}(\alpha_0)=U$.*
3. *There exists $t_0\in S(V/U)$ such that $s=\pi^*t_0$, where $\pi$ is the projection of $V$ onto $V/U$.*
Notice that, since $\pi\ :\ W\rightarrow W/\ker(\psi)$ is surjective, it admits a right inverse; therefore $\pi^*$ has a left inverse and hence is injective. We will denote by $\Tilde{\psi}$ the unique element of $S(W/\ker(\psi))$ such that $\pi^*\Tilde{\psi}=\psi$.
**Definition 3**. Let $S\in\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$, $V\in\mathcal{V}^f$ and $s\in S(V)$. We say that $s$ is regular if $\ker(s)=0$.
Let $\text{reg}(S)(V):=\{x\in S(V)\ ;\ \ker(x)=0\}$.
We also recall the definition of Rector's category $\mathfrak{R}_S$ which is the full subcategory of $\mathfrak{S}_S$ whose objects are the pairs $(W,\psi)$ with $\psi$ regular.\
**Definition 4**. Let $S$ be in $\mathcal{F}\text{in}^{(\mathcal{V}^f)^\text{op}}$, we say that $S$ is noetherian if it satisfies the following:
1. there exists an integer $d$ such that $\text{reg}(S)(V)=\emptyset$ for $\dim(V)> d$,
2. [\[noethcond\]]{#noethcond label="noethcond"} for all $V\in\mathcal{V}^f$ and $s\in S(V)$ and for all morphisms $\alpha$ which takes values in $V$, $\text{ker}(\alpha^*s)=\alpha^{-1}(\text{ker}(s))$.
In [@HLS2], the authors proved that $S\in\mathcal{F}\text{in}^{(\mathcal{V}^f)^\text{op}}$ is noetherian if and only if there is a noetherian unstable algebra $K$ such that $S\cong\text{Hom}_\mathcal{K}(K,H^*(\_))$. In this article, $S$ will not need to satisfy all conditions of Definition [Definition 4](#noetherianfunc){reference-type="ref" reference="noetherianfunc"}.
**Definition 5**. We say that $S\in\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$ satisfies the weaker noetherianity condition if it satisfies condition [\[noethcond\]](#noethcond){reference-type="ref" reference="noethcond"} in Definition [Definition 4](#noetherianfunc){reference-type="ref" reference="noetherianfunc"}.
Yet, our results will be of particular interest in the case where $S$ is noetherian, since in this case Rector's category admits a finite skeleton.\
**Definition 6**. An object $S\in\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$ is connected if $S(0)$ has a single element $\epsilon$. In this case, for $V$ an object in $\mathcal{V}^f$, $\epsilon_V:=\pi_0^{V*}\epsilon$ for $\pi_0^V$ the unique map from $V$ to $0$.
**Remark 7**. In the case where $S$ is not connected, for $\gamma\in S(0)$, we can consider $S^\gamma$ that maps $W$ to the set of elements $\psi\in S(W)$ such that $0^*\psi=\gamma$. $S^\gamma$ is then a subfunctor of $S$ and $(S^\gamma)_{\gamma\in S(0)}$ is a partition of $S$. We get that $\mathfrak{S}_S$ is the coproduct of the categories $\mathfrak{S}_{S^\gamma}$ and that a functor on $\mathfrak{S}_S$ is just a family of functors over each of the categories $\mathfrak{S}_{S^\gamma}$.
## The category $\mathfrak{S}_S$
In this subsection we describe the objects and morphisms in the category $\mathfrak{S}_S$ in the case where $S$ is connected and satisfies the weaker noetherianity condition.
**Proposition 8**. *We consider $S\in\mathcal{S}\text{et}^{(\mathcal{V}^f)^\text{op}}$ connected that satisfies the weaker noetherianity condition. Then, for any $(W,\psi)\in\mathfrak{S}_S$ and any $V\in\mathcal{V}^f$, there exists a unique element $\psi\boxplus\epsilon_V\in S(W\oplus V)$ such that $\iota_W^*(\psi\boxplus\epsilon_V)=\psi$ and $\iota_V^*(\psi\boxplus\epsilon_V)=\epsilon_V$, for $\iota_W$ and $\iota_V$ the inclusions of $W$ and $V$ in $W\oplus V$.*
*Proof.* Let $\psi$ be in $S(W)$. We consider $\pi^*\psi\in S(W\oplus V)$ for $\pi$ the projection from $W\oplus V$ to $W$ along $V$. It satisfies $\iota_W^*\pi^*\psi=\psi$. Furthermore, $\pi\circ\iota_V=0$, hence $\iota_V^*\pi^*\psi=0^*\psi$. Since the trivial morphism from $V$ to $W$ factorizes through the trivial vector space $0$, and since $S(0)=\{\epsilon\}$, we get $0^*\psi=\epsilon_V$, so $\iota_V^*\pi^*\psi=\epsilon_V$. This proves existence. We now prove uniqueness.\
For $\gamma\in S(W\oplus V)$ such that $\iota_W^*\gamma=\psi$ and $\iota_V^*\gamma=\epsilon_V$, since $S$ satisfies the weaker noetherianity condition, $V=\ker(\iota_V^*\gamma)=\iota_V^{-1}(\ker(\gamma))$. Therefore, $V\subset\ker(\gamma)$. By definition of the kernel, there exists $\Tilde{\gamma}\in S(W)$ such that $\gamma=\pi^*\Tilde{\gamma}$. Then $\psi=\iota_W^*\pi^*\Tilde{\gamma}=\Tilde{\gamma}$, since $\pi\circ\iota_W=\text{id}_W$. Hence $\gamma=\pi^*\psi=\psi\boxplus\epsilon_V$, which proves uniqueness. ◻
The notation $\psi\boxplus\epsilon_V$ will be convenient in the following, but as we have seen, it is just the element $\pi^*\psi\in S(W\oplus V)$ for $\pi$ the projection from $W\oplus V$ onto $W$.\
By definition of the kernel, for any $W\in\mathcal{V}^f$ and $\psi\in S(W)$, there exists a unique $\Tilde{\psi}\in S(W/\ker(\psi))$ such that $$\psi=\pi^*\Tilde{\psi}=\Tilde{\psi}\boxplus\epsilon_{\ker(\psi)}.$$ Since $S$ satisfies the weaker noetherianity condition, $\Tilde{\psi}$ is regular (this is because $\ker(\psi)=\pi^{-1}(\ker(\Tilde{\psi}))$). We get the following lemma.
**Lemma 9**. *For $S$ connected that satisfies the weaker noetherianity condition and for $(W,\psi)\in\mathfrak{S}_S$, $(W,\psi)\cong(W/\ker(\psi)\oplus\ker(\psi),\Tilde{\psi}\boxplus\epsilon_{\ker(\psi)})$, with $(W/\ker(\psi),\Tilde{\psi})\in\mathfrak{R}_S$.*
We now describe morphisms in $\mathfrak{S}_S$, using this decomposition.
**Proposition 10**. *Let $(W,\psi)$ and $(H,\eta)$ be two objects in $\mathfrak{R}_S$, and let $U$ and $V$ be two finite dimensional vector spaces. The set of morphisms in $\mathfrak{S}_S$ from $(W\oplus U,\psi\boxplus\epsilon_U)$ to $(H\oplus V,\eta\boxplus\epsilon_V)$ is the set of morphisms $\alpha$ whose block matrices have the form $\left(\begin{array}{cc}
f & 0 \\
g & h
\end{array}\right)$, with $f$ a morphism from $(W,\psi)$ to $(H,\eta)$ in $\mathfrak{R}_S$, $g$ a morphism from $W$ to $V$ and $h$ a morphism from $U$ to $V$.*
*Proof.* First, we prove that any such $\alpha$ is a morphism from $(W\oplus U,\psi\boxplus\epsilon_U)$ to $(H\oplus V,\eta\boxplus\epsilon_V)$ in $\mathfrak{S}_S$. We have $\iota_W^*\alpha^*(\eta\boxplus\epsilon_V)=\iota_W^*\alpha^*\pi^*(\eta)$ for $\pi$ the projection from $H\oplus V$ onto $H$. This is equal to $f^*\eta=\psi$. Also, $\iota_U^*\alpha^*(\eta\boxplus\epsilon_V)=h^*\epsilon_V=\epsilon_U$. Then, by Proposition [Proposition 8](#defbox){reference-type="ref" reference="defbox"}, $\alpha^*(\eta\boxplus\epsilon_V)=\psi\boxplus\epsilon_U$.\
We now prove that every morphism from $(W\oplus U,\psi\boxplus\epsilon_U)$ to $(H\oplus V,\eta\boxplus\epsilon_V)$ has this form. First, we have that $$U=\ker(\psi\boxplus\epsilon_U)=\alpha^{-1}(\ker(\eta\boxplus\epsilon_V))=\alpha^{-1}(V).$$ Hence, $\alpha(U)\subset V$. Now, we consider the composition $\pi_H\circ\alpha$ from $W\oplus U$ to $H$, for $\pi_H$ the projection from $H\oplus V$ onto $H$. We have $\pi_H\circ\alpha=f\circ\pi_W$ for $\pi_W$ the projection from $W\oplus U$ onto $W$. Then, $\psi\boxplus\epsilon_U=\alpha^*\pi_H^*\eta=\pi_W^*(f^*\eta)$. Since $\pi_W^*$ is injective, we get $f^*\eta=\psi$; therefore $f$ is a morphism from $(W,\psi)$ to $(H,\eta)$ in $\mathfrak{R}_S$. This concludes the proof. ◻
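As a consistency check not spelled out in the text, morphisms of this block-triangular form are closed under composition. For composable morphisms with blocks $(f,g,h)$ and $(f',g',h')$, matrix multiplication gives $$\left(\begin{array}{cc}
f' & 0 \\
g' & h'
\end{array}\right)\left(\begin{array}{cc}
f & 0 \\
g & h
\end{array}\right)=\left(\begin{array}{cc}
f'f & 0 \\
g'f+h'g & h'h
\end{array}\right),$$ where $f'f$ is again a morphism in $\mathfrak{R}_S$, so the composite has the required shape.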
**Remark 11**. It is worth noticing that, since $S$ satisfies the weaker noetherianity condition, morphisms from $(W,\psi)$ to $(H,\eta)$ in $\mathfrak{R}_S$ are necessarily injective morphisms from $W$ to $H$. This is one reason why functors on $\mathfrak{R}_S$ are a lot easier to understand than functors on $\mathfrak{S}_S$, and it will be a key fact in computing simple objects in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$.
## The categories $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ and $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$
In this subsection, we compare the categories $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ and $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$, with $S$ connected and satisfying the weaker noetherianity condition.\
By Lemma [Lemma 9](#2.8.2){reference-type="ref" reference="2.8.2"}, for $W\in\mathcal{V}^f$ and $\psi\in S(W)$, $(W,\psi)$ is isomorphic as an object of $\mathfrak{S}_S$ to $(W/\ker(\psi)\oplus\ker(\psi),\Tilde{\psi}\boxplus\epsilon_{\ker(\psi)})$. Therefore, we have a faithful and essentially surjective functor from $\mathfrak{R}_S\times\mathcal{V}^f$ to $\mathfrak{S}_S$ that maps the pair $((W,\psi),V)$ with $\psi$ regular to $(W\oplus V,\psi\boxplus\epsilon_V)$.\
This functor is not full: indeed, by Proposition [Proposition 10](#2.8){reference-type="ref" reference="2.8"}, the set of morphisms between $(W\oplus V,\psi\boxplus\epsilon_V)$ and $(H\oplus U,\eta\boxplus\epsilon_U)$ in $\mathfrak{S}_S$ is given by the linear maps whose block matrices are of the form $\left(\begin{array}{cc}
f & 0 \\
g & h
\end{array}\right)$ with $f$ a morphism in $\mathfrak{R}_S$ from $(W,\psi)$ to $(H,\eta)$, and $g$ and $h$ arbitrary linear maps from $W$ and $V$ respectively to $U$, whereas the image of $\mathfrak{R}_S\times\mathcal{V}^f$ contains only maps of the form $\left(\begin{array}{cc}
f & 0 \\
0 & h
\end{array}\right)$. Yet, it admits a left quasi-inverse that maps $(W,\psi)$ to $((W/\ker(\psi),\Tilde{\psi}),\ker(\psi))$, which is full and essentially surjective but not faithful. More precisely, two morphisms from $(W,\psi)$ to $(H,\eta)$ have the same image under this quasi-inverse if and only if their restrictions to $\ker(\psi)$ are equal, as well as their induced maps from $W/\ker(\psi)$ to $H/\ker(\eta)$.
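As a sanity check (not needed for the argument), maps of this triangular form are closed under composition, since the block computation $$\left(\begin{array}{cc}
f' & 0 \\
g' & h'
\end{array}\right)\left(\begin{array}{cc}
f & 0 \\
g & h
\end{array}\right)=\left(\begin{array}{cc}
f'\circ f & 0 \\
g'\circ f+h'\circ g & h'\circ h
\end{array}\right)$$ again has the required form, while block-diagonal maps compose to block-diagonal maps; the failure of fullness is thus measured exactly by the lower-left block $g$.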
**Definition 12**. Let $\mathcal{O}$ be the functor from $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ to $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ induced by the functor from $\mathfrak{R}_S\times\mathcal{V}^f$ to $\mathfrak{S}_S$ that maps $((W,\psi),V)\in\mathfrak{R}_S\times\mathcal{V}^f$ to $(W\oplus V,\psi\boxplus\epsilon_V)$ and the morphism $(f,h)$ in $\mathfrak{R}_S\times\mathcal{V}^f$ to $\left(\begin{array}{cc}
f & 0 \\
0 & h
\end{array}\right)$ in $\mathfrak{S}_S$.\
Let also $\mathcal{E}$ be the functor from $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ to $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ induced by the functor that maps $(W,\psi)$ in $\mathfrak{S}_S$ to $((W/\ker(\psi),\Tilde{\psi}),\ker(\psi))$ in $\mathfrak{R}_S\times\mathcal{V}^f$ and $f$ from $(W,\psi)$ to $(H,\eta)$ to $(\Tilde{f},f|_{\ker(\psi)})$ with $\Tilde{f}$ the morphism induced by $f$ from $(W/\ker(\psi),\Tilde{\psi})$ to $(H/\ker(\eta),\Tilde{\eta})$.
**Lemma 13**. *For $\lambda$ a natural transformation in $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ from $G$ to $\mathcal{O}(F)$, $\lambda$ extends to a natural transformation in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ from $\mathcal{E}(G)$ to $F$ if and only if, for any $(W,\psi)\in\mathfrak{R}_S$, $V\in\mathcal{V}^f$ and $f\in\text{Hom}_{\mathbb{k}}(W,V)$, the following diagram commutes: $$\xymatrix{ & F(W\oplus V,\psi\boxplus\epsilon_V)\ar[dd]^-{\alpha_*}\\
G((W,\psi),V)\ar[ru]^{\lambda}\ar[rd]_-\lambda &\\
& F(W\oplus V,\psi\boxplus\epsilon_V),}$$ with $\alpha$ the morphism whose block matrix is given by $\left(\begin{array}{cc}
\text{id}_W & 0 \\
f & \text{id}_V
\end{array}\right)$.*
*Proof.* The only if part is straightforward. Assume now that $\lambda$ satisfies the required condition. Then, for $(W,\psi)$ in $\mathfrak{S}_S$, one can choose arbitrarily a complementary subspace $C$ of $\ker(\psi)$; for $\gamma$ the inverse of the isomorphism given by the projection from $C$ to $W/\ker(\psi)$, one can define $\lambda_{(W,\psi)}$ from $G((W/\ker(\psi),\Tilde{\psi}),\ker(\psi))$ to $F(W,\psi)$ as the composition of $\lambda_{((W/\ker(\psi),\Tilde{\psi}),\ker(\psi))}$, whose values are in $F(W/\ker(\psi)\oplus\ker(\psi),\Tilde{\psi}\boxplus\epsilon_{\ker(\psi)})$, with $$(\gamma\oplus\text{id}_{\ker(\psi)})_*\ :\ F(W/\ker(\psi)\oplus\ker(\psi),\Tilde{\psi}\boxplus\epsilon_{\ker(\psi)})\rightarrow F(W,\psi).$$
The required condition guarantees that this does not depend on the choice of $C$. Furthermore, it entails that $\lambda$ is a natural transformation on $\mathfrak{S}_S$, since any morphism in $\mathfrak{S}_S$ from $(H,\eta)$ to $(W,\psi)$ can be factorised as $\left(\begin{array}{cc}
\text{id}_C & 0 \\
f & \text{id}_{\ker(\psi)}
\end{array}\right)\circ \left(\begin{array}{cc}
g & 0 \\
0 & h
\end{array}\right)$ with some morphisms $f$ and $h$ and some injective morphism $g$. ◻
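To illustrate the factorisation used at the end of the proof, note that $$\left(\begin{array}{cc}
\text{id}_C & 0 \\
f & \text{id}_{\ker(\psi)}
\end{array}\right)\circ\left(\begin{array}{cc}
g & 0 \\
0 & h
\end{array}\right)=\left(\begin{array}{cc}
g & 0 \\
f\circ g & h
\end{array}\right),$$ so a morphism with block matrix $\left(\begin{array}{cc}
g & 0 \\
k & h
\end{array}\right)$, with $g$ injective, admits such a factorisation by choosing any $f$ with $f\circ g=k$, which exists precisely because $g$ is injective.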
# Polynomial functor over $\mathfrak{S}_S$ {#part2}
Since $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ is isomorphic to $\mathcal{F}(\mathfrak{R}_S,\mathcal{F}(\mathcal{V}^f,\mathcal{V}))$, there is a notion of polynomial functors of degree $n$ for functors in $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ corresponding to those taking values in polynomial functors of degree $n$ from $\mathcal{V}^f$ to $\mathcal{V}$, in the sense of [@P1] or [@DV]. We denote by $\mathcal{P}\mathrm{ol}_n(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ the full subcategory of $\mathcal{F}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ of polynomial functors of degree less than or equal to $n$. Using purely formal arguments, as well as known facts about polynomial functors in $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$, one could easily compute the categorical quotient $\mathcal{P}\mathrm{ol}_n(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ and would find that it is equivalent to $\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$. All the difficulties in the following section come from the fact that the functor from $\mathfrak{R}_S\times\mathcal{V}^f$ to $\mathfrak{S}_S$ is not full.\
In this section we define a notion of polynomial functors on $\mathfrak{S}_S$ and manage to compute the quotient $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$.
## Definition
We recall that for $F\in\mathcal{F}(\mathcal{V}^f,\mathcal{V})$, $\Bar{\Delta} F(W)$ is the kernel of the map from $F(W\oplus\mathbb{k})$ to $F(W)$ induced by the projection along $\mathbb{k}$. Polynomial functors of degree at most $n$ are the functors $F$ such that $\Bar{\Delta}^{n+1}F=0$, and $\mathcal{P}\mathrm{ol}_n(\mathcal{V}^f,\mathcal{V})$ denotes the full subcategory of polynomial functors of degree at most $n$ in $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$. We define similar notions for functors on $\mathfrak{S}_S$.\
We start this section by defining a difference functor $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}$ on $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$.\
**Definition 14**. $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}\ :\ \mathcal{F}(\mathfrak{S}_S,\mathcal{V})\rightarrow \mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ is the functor such that $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}F(W,\psi)$ is the kernel of the map $F(W\oplus\mathbb{k},\psi\boxplus\epsilon_{\mathbb{k}})\rightarrow F(W,\psi)$ induced by the projection from $W\oplus\mathbb{k}$ to $W$, and such that, for $\alpha$ a morphism in $\mathfrak{S}_S$, $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}F(\alpha)$ is the map induced by $\alpha\oplus\text{id}_{\mathbb{k}}$.
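As a simple example (not taken from the text above), consider the functor $F$ that maps $(W,\psi)$ to $W$ and a morphism to its underlying linear map. Then $$\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}F(W,\psi)=\ker(W\oplus\mathbb{k}\rightarrow W)=\mathbb{k},$$ and the maps induced on these kernels by morphisms $\alpha\oplus\text{id}_{\mathbb{k}}$ are identities, so $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}F$ is a constant functor and $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^2F=0$: this $F$ is polynomial of degree $1$ in the sense defined below.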
**Lemma 15**. *The functor $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}$ is exact.*
*Proof.* We consider the following short exact sequence in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ : $$0\rightarrow F'\rightarrow F\rightarrow F"\rightarrow 0.$$ For $(W,\psi)\in\mathfrak{S}_S$, we get the following commutative diagram whose lines are exact : $$\xymatrix{0\ar[r] & F'(W\oplus\mathbb{k},\psi\boxplus\epsilon_{\mathbb{k}})\ar[r]\ar[d] & F(W\oplus\mathbb{k},\psi\boxplus\epsilon_{\mathbb{k}})\ar[r]\ar[d] & F"(W\oplus\mathbb{k},\psi\boxplus\epsilon_{\mathbb{k}})\ar[r]\ar[d] & 0\\
0\ar[r] & F'(W,\psi)\ar[r] & F(W,\psi)\ar[r] & F"(W,\psi)\ar[r] & 0,
}$$ whose vertical maps are induced by the projection from $W\oplus\mathbb{k}$ to $W$. Using the exactness of the lines and commutativity of the diagram, one checks that it induces a short exact sequence $$\xymatrix{
0\ar[r] & \Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}F'(W,\psi)\ar[r] & \Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}F(W,\psi)\ar[r] & \Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}F"(W,\psi)\ar[r] & 0.
}$$ This exact sequence is natural in $(W,\psi)$. ◻
**Definition 16**. $F\in\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ is polynomial of degree less than or equal to $n$ if $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n+1}F=0$. We denote by $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ the full subcategory of $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ whose objects are the polynomial functors of degree less than or equal to $n$.
**Proposition 17**. *The category $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ is a Serre class of $\mathcal{P}\mathrm{ol}_{n+1}(\mathfrak{S}_S,\mathcal{V})$.*
*Proof.* This is straightforward from Lemma [Lemma 15](#decexact){reference-type="ref" reference="decexact"}. ◻
There is a notion of analytic functors in $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$: those are the functors which are the colimit of their polynomial sub-functors. Similarly, one can define a notion of analytic functors on $\mathfrak{S}_S$.\
**Lemma 18**. *Let $F\in\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$. $F$ admits a greatest polynomial sub-functor of degree less than or equal to $n$. We denote it by $p_n(F)$.*
*Proof.* For $(W,\psi)\in\mathfrak{S}_S$ and $x\in F(W,\psi)$, we denote by $<x>_F$ the image of $\mathbb{k}\left[\text{Hom}_{\mathfrak{S}_S}((W,\psi),(\_,\_))\right]$ under the natural morphism that maps $\text{id}_{(W,\psi)}$ to $x$. We say that $x$ is polynomial of degree less than or equal to $n$ if and only if $<x>_F$ is.\
The functor $F$ is polynomial of degree less than or equal to $n$ if and only if $x$ is polynomial of degree less than or equal to $n$ for any $(W,\psi)\in\mathfrak{S}_S$ and any $x\in F(W,\psi)$. The condition is obviously necessary since $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n+1}<x>_F$ is a sub-functor of $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n+1}F$. If we assume that $F$ is not polynomial of degree less than or equal to $n$, then $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n+1}F$ is not trivial, therefore there exist $(W,\psi)\in\mathfrak{S}_S$ and $x\in\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n+1}F(W,\psi)\subset F(W\oplus\mathbb{k}^{n+1},\psi\boxplus\epsilon_{\mathbb{k}^{n+1}})$ different from $0$. Then, $x$ lies in $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n+1}<x>_F(W,\psi)$, so $x$ is not polynomial of degree less than or equal to $n$. The condition is therefore sufficient.\
Finally, the set of elements $x$ of $F$ polynomial of degree less than or equal to $n$ defines a sub-functor of $F$ which contains any polynomial sub-functor of $F$ of degree less than or equal to $n$. ◻
By definition, we have $p_n(F)\subset p_{n+1}(F)$.
**Definition 19**. A functor $F$ in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ is said to be analytic if it is the colimit of the $p_n(F)$.\
$\mathcal{F}_\omega(\mathfrak{S}_S,\mathcal{V})$ is the full subcategory of $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ of analytic functors.
We end this subsection by computing $\mathcal{P}\mathrm{ol}_{0}(\mathfrak{S}_S,\mathcal{V})$; the following subsections are devoted to describing the quotients $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$.
**Proposition 20**. *The categories $\mathcal{P}\mathrm{ol}_{0}(\mathfrak{S}_S,\mathcal{V})$ and $\mathcal{F}(\mathfrak{R}_S,\mathcal{V})$ are equivalent.*
*Proof.* For $F\in\mathcal{P}\mathrm{ol}_{0}(\mathfrak{S}_S,\mathcal{V})$, $(W,\psi)$ an object of $\mathfrak{S}_S$ and $\Tilde{\psi}\in S(W/\ker(\psi))$ such that $\pi^*\Tilde{\psi}=\psi$, the map $\pi_*$ (induced by $\pi$ the projection in $\mathfrak{S}_S$ from $(W,\psi)$ to $(W/\ker(\psi),\Tilde{\psi})$) is a natural isomorphism between $F(W,\psi)$ and $F(W/\ker(\psi),\Tilde{\psi})$. Indeed, $\pi_*$ may be factorised in the following way $$F(W,\psi)\cong F(W/\ker(\psi)\oplus\mathbb{k}^k,\Tilde{\psi}\boxplus\epsilon_{\mathbb{k}^k})\rightarrow ...\rightarrow F(W/\ker(\psi)\oplus\mathbb{k},\Tilde{\psi}\boxplus\epsilon_{\mathbb{k}})\rightarrow F(W/\ker(\psi),\Tilde{\psi}),$$ where $k$ is the dimension of $\ker(\psi)$ and each map is induced by the projection that omits the last factor $\mathbb{k}$. Since $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}F=0$, each of those maps is an isomorphism.\
The forgetful functor from $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ to $\mathcal{F}(\mathfrak{R}_S,\mathcal{V})$ has a right quasi-inverse that maps $F\in\mathcal{F}(\mathfrak{R}_S,\mathcal{V})$ to $\Bar{F}$, where $\Bar{F}(W,\psi):=F(W/\ker(\psi),\Tilde{\psi})$ and $\Bar{F}(\gamma)$, for $\gamma\ :\ (W,\psi)\rightarrow (H,\eta)$, is $F(\Tilde{\gamma})$ for $\Tilde{\gamma}$ the induced map from $(W/\ker(\psi),\Tilde{\psi})$ to $(H/\ker(\eta),\Tilde{\eta})$. By construction, it is a quasi-inverse if we restrict the forgetful functor to $\mathcal{P}\mathrm{ol}_{0}(\mathfrak{S}_S,\mathcal{V})$. ◻
## The $n$-th cross effect
In the context where $F$ is a functor over $\mathcal{V}^f$, the $n$-th cross effect $\mathrm{cr}_nF(X_1,...,X_n)$ is defined as the kernel of the map from $F(X_1\oplus...\oplus X_n)$ to $\bigoplus\limits_{1\leq i\leq n} F(X_1\oplus...\oplus \widehat{X_i}\oplus...\oplus X_n)$ induced by the projections from $X_1\oplus...\oplus X_n$ to $X_1\oplus...\oplus \widehat{X_i}\oplus...\oplus X_n$. Since $\Sigma_n$ acts on $\mathrm{cr}_nF(\mathbb{k},...,\mathbb{k})$ by permuting the factors $\mathbb{k}$, $F\mapsto\mathrm{cr}_nF(\mathbb{k},...,\mathbb{k})$ defines a functor from $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$ to $\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od}$.\
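For instance, for $T^2$ the second tensor power, $$T^2(X\oplus Y)=(X\otimes X)\oplus(X\otimes Y)\oplus(Y\otimes X)\oplus(Y\otimes Y),$$ and the summands killed by both projections are exactly the mixed ones, so $\mathrm{cr}_2T^2(X,Y)=(X\otimes Y)\oplus(Y\otimes X)$; in particular $\mathrm{cr}_2T^2(\mathbb{k},\mathbb{k})\cong\mathbb{k}\left[\Sigma_2\right]$ as a $\Sigma_2$-module.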
We consider $T^n$ the functor from $\mathcal{V}^f$ to itself that maps $V$ to $V^{\otimes n}$. $\Sigma_n$ has a right-action on $V^{\otimes n}$ with $v_1\otimes...\otimes v_n\cdot\sigma=v_{\sigma^{-1}(1)}\otimes...\otimes v_{\sigma^{-1}(n)}$. We get the following Proposition from [@P1].
**Proposition 21**. *The functor from $\mathcal{P}\mathrm{ol}_n(\mathcal{V}^f,\mathcal{V})$ to $\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od}$ that maps $F$ to $\mathrm{cr}_nF(\mathbb{k},...,\mathbb{k})$ is right adjoint to the functor that maps $M\in\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od}$ to $T^n\otimes_{\Sigma_n}M$.*
Throughout this subsection $n$ is a fixed positive integer. We want to describe the quotient category $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$. To do so, we introduce a cross effect functor for functors on $\mathfrak{S}_S$.\
**Lemma 22**. *Let $F\in\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ and $(W,\psi)$ an object in $\mathfrak{S}_S$. $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n F(W,\psi)$ is the kernel of the map from $F(W\oplus\mathbb{k}^n,\psi\boxplus\epsilon_{\mathbb{k}^n})$ to $\bigoplus\limits_{i=1}^n F(W\oplus \mathbb{k}^{i-1}\oplus\widehat{\mathbb{k}}\oplus \mathbb{k}^{n-i},\psi\boxplus \epsilon_{\mathbb{k}^{n-1}})$ induced by the projections from $(W\oplus\mathbb{k}^n,\psi\boxplus\epsilon_{\mathbb{k}^n})$ to $(W\oplus \mathbb{k}^{i-1}\oplus\widehat{\mathbb{k}}\oplus \mathbb{k}^{n-i},\psi\boxplus \epsilon_{\mathbb{k}^{n-1}})$ in $\mathfrak{S}_S$.*
*Proof.* This is straightforward by induction. ◻
More generally, we consider the $n$-th cross effect $\mathrm{cr}_nF$ defined as follows.
**Definition 23**. For $F\in\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$, $\mathrm{cr}_nF$ is the functor from $\mathfrak{S}_S\times (\mathcal{V}^f)^n$ to $\mathcal{V}$ where $\mathrm{cr}_nF(W,\psi;X_1,...,X_n)$ is the kernel of the map from $F(W\oplus X_1\oplus...\oplus X_n,\psi\boxplus\epsilon)$ to $\bigoplus\limits_{i=1}^n F(W\oplus X_1\oplus...\oplus\widehat{X_i}\oplus...\oplus X_n,\psi\boxplus \epsilon)$ induced by the projections from $(W\oplus X_1\oplus...\oplus X_n,\psi\boxplus\epsilon)$ to $(W\oplus X_1\oplus...\oplus\widehat{X_i}\oplus ...\oplus X_n,\psi\boxplus \epsilon)$ in $\mathfrak{S}_S$.
We can restate Lemma [Lemma 22](#kernuck){reference-type="ref" reference="kernuck"} as $$\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n F(W,\psi)=\mathrm{cr}_nF(W,\psi;\mathbb{k},...,\mathbb{k}).$$
As in the classical case of functors on $\mathcal{V}^f$, the $n$-th symmetric group acts on $\mathrm{cr}_nF(\_,\_;\mathbb{k},...,\mathbb{k})$ by permuting the factors $\mathbb{k}$. Therefore, $\mathrm{cr}_nF(\_,\_;\mathbb{k},...,\mathbb{k})$ takes values in $\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}od$, the category of $\Sigma_n$-representations over $\mathbb{k}$.\
Considering functors on $\mathfrak{R}_S\times\mathcal{V}^f$, one can check easily that Proposition [Proposition 21](#pirash){reference-type="ref" reference="pirash"} gives rise to a similar adjunction from $\mathcal{P}\mathrm{ol}_n(\mathfrak{R}_S\times\mathcal{V}^f,\mathcal{V})$ to $\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$, whose left adjoint maps $M$ to the functor $T^n\otimes_{\Sigma_n}M$ that sends $((W,\psi),V)\in\mathfrak{R}_S\times\mathcal{V}^f$ to $V^{\otimes n}\otimes_{\Sigma_n}M(W,\psi)$.\
We want to extend this adjunction to $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$. To do so, we need to prove that $\mathrm{cr}_nF(\_,\_;\mathbb{k},...,\mathbb{k})$ behaves well with respect to the maps in $\mathfrak{S}_S$ that are not obtained from maps in $\mathfrak{R}_S\times\mathcal{V}^f$.\
The end of this subsection is devoted to proving that if $F\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$, then $\mathrm{cr}_nF(\_,\_;\mathbb{k},...,\mathbb{k})$ does behave well with respect to those maps.\
We notice that for $F$ polynomial of degree $n$, $\mathrm{cr}_nF(W,\psi;\_,...,\_)$ is additive in each variable. More explicitly:\
**Lemma 24**. *$\mathrm{cr}_nF(W,\psi;X_1\oplus Y_1,...,X_n\oplus Y_n)$ is isomorphic to $\bigoplus \mathrm{cr}_nF(W,\psi;A_1,...,A_n)$, with the direct sum going through the families $(A_1,...,A_n)$ with $A_i=X_i\text{ or }Y_i$. The isomorphism is given by the direct sum of the $\mathcal{Q}_{A_1,...,A_n}$ induced by the projections from $X_i\oplus Y_i$ onto $A_i$.*
*Proof.* The map $\mathcal{Q}_{A_1,...,A_n}$ is induced by the map from $\mathrm{cr}_nF(W,\psi;X_1\oplus Y_1,...,X_n\oplus Y_n)$ to $F(W\oplus A_1\oplus ...\oplus A_n,\psi\boxplus \epsilon)$, with $A_i=X_i\text{ or }Y_i$, induced by the projection from $W\oplus (X_1\oplus Y_1)\oplus ...\oplus(X_n\oplus Y_n)$ to $W\oplus A_1\oplus ...\oplus A_n$.\
Since $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n+1}F=0$, the kernel of $\mathcal{Q}_{A_1,...,A_n}$ is precisely the direct sum of the images of the $\mathrm{cr}_nF(W,\psi;B_1,...,B_n)$ with at least one $B_i\neq A_i$, under the maps induced by the inclusions into $W\oplus (X_1\oplus Y_1)\oplus ...\oplus(X_n\oplus Y_n)$. The restriction of $\mathcal{Q}_{A_1,...,A_n}$ to $\mathrm{cr}_nF(W,\psi;A_1,...,A_n)$ (seen as a subspace of $\mathrm{cr}_nF(W,\psi;X_1\oplus Y_1,...,X_n\oplus Y_n)$) is the identity. ◻
**Remark 25**. The inverse of $\bigoplus\limits_{(A_1,...,A_n)}\mathcal{Q}_{A_1,...,A_n}$ from $\mathrm{cr}_nF(W,\psi;X_1\oplus Y_1,...,X_n\oplus Y_n)$ to\
$\bigoplus \mathrm{cr}_nF(W,\psi;A_1,...,A_n)$ is given by the direct sum of the $\mathcal{I}_{A_1,...,A_n}$ which are the maps induced by the inclusions of the $A_i$ in $X_i\oplus Y_i$.\
We want to emphasize that the image of $\bigoplus \mathrm{cr}_nF(W,\psi;A_1,...,A_n)$ in $\bigoplus \mathrm{cr}_nF(W,\psi;U_1,...,U_n)$, for $A_i$ a sub-vector space of $U_i$ for each $i$, does not depend on the choice of complementary subspaces $B_i$. Therefore, the component of an element of $\mathrm{cr}_nF(W,\psi;U_1,...,U_n)$ in $\mathrm{cr}_nF(W,\psi;A_1,...,A_n)$ under the isomorphism of Lemma [Lemma 24](#constanti){reference-type="ref" reference="constanti"} is the same for each choice of decomposition of the $U_i$ as $U_i=A_i\oplus B_i$. This will be of importance in the proof of Lemma [Lemma 26](#ident){reference-type="ref" reference="ident"}.
We consider $F\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$, $(W\oplus X_1\oplus...\oplus X_n,\psi\boxplus\epsilon)$ in $\mathfrak{S}_S$ and a map $\alpha$ from $(W\oplus X_1\oplus...\oplus X_n,\psi\boxplus\epsilon)$ to itself of the form $\alpha=\left(\begin{array}{cc}
\text{id}_W & 0 \\
f & \text{id}_{X_1\oplus...\oplus X_n}
\end{array}\right)$. We have the following Lemma, with $\alpha_*$ the induced endomorphism of $F(W\oplus X_1\oplus...\oplus X_n,\psi\boxplus\epsilon)$.
**Lemma 26**. *$\alpha_*$ acts on $\mathrm{cr}_nF(W,\psi;X_1,...,X_n)$ as the identity.*
*Proof.* For $\pi_i$ that maps $x_1+...+x_n$ with $x_i\in X_i$ to $x_1+...+\hat{x}_i+...+x_n$, we have that $\left(\begin{array}{cc}
\text{id}_W & 0 \\
0 & \pi_i
\end{array}\right)\circ\alpha$ is equal to $$\left(\begin{array}{cc}
\text{id}_W & 0 \\
\pi_i\circ f & \pi_i
\end{array}\right)= \left(\begin{array}{cc}
\text{id}_W & 0 \\
\pi_i\circ f & \text{id}_{X_1\oplus...\oplus\widehat{X_i}\oplus...\oplus X_n}
\end{array}\right)\circ\left(\begin{array}{cc}
\text{id}_W & 0 \\
0 & \pi_i
\end{array}\right).$$ Since $\mathrm{cr}_nF(W,\psi;X_1,...,X_n)$ is the intersection of the kernels of the $\left(\begin{array}{cc}
\text{id}_W & 0 \\
0 & \pi_i
\end{array}\right)_*$, this implies that the restriction of $\alpha_*$ to $\mathrm{cr}_nF(W,\psi;X_1,...,X_n)$ takes it values in $\mathrm{cr}_nF(W,\psi;X_1,...,X_n)$.\
We prove now that it acts as the identity. We consider the diagonal map $\Delta$ from $X_1\oplus...\oplus X_n$ to $(X_1\oplus X_1)\oplus...\oplus(X_n\oplus X_n)$ and the map $\alpha'$ from $W\oplus(X_1\oplus X_1)\oplus...\oplus(X_n\oplus X_n)$ to itself whose block matrix is given by $\left(\begin{array}{cc}
\text{id}_W & 0 \\
\Delta\circ f & \text{id}_{X_1^{\oplus 2}\oplus...\oplus X_n^{\oplus 2}}
\end{array}\right)$. It fits in the following commutative diagram: $$\xymatrix{ & W\oplus X_1\oplus...\oplus X_n\ar[d]\ar[ldd]_{\text{id}}\ar[rdd]^{\alpha} & \\
& W\oplus(X_1\oplus X_1)\oplus...\oplus(X_n\oplus X_n)\ar[d]^{\alpha'} & \\
W\oplus X_1\oplus...\oplus X_n & W\oplus(X_1\oplus X_1)\oplus...\oplus(X_n\oplus X_n)\ar[l]\ar[r] & W\oplus X_1\oplus...\oplus X_n }$$
where the top vertical map is induced by the injection of the $X_i$ in the first factor of $X_i\oplus X_i$, the right horizontal one is given by the projection of $X_i\oplus X_i$ on the first factor and the left horizontal one is the projection onto the first factor along the diagonal $\Delta(X_i)$ (i.e. the morphisms that maps $(x,y)$ in $X_i\oplus X_i$ to $x-y$ in $X_i$).\
As we have seen, $\alpha'_*$ maps $\mathrm{cr}_nF(W,\psi;X_1\oplus X_1,...,X_n\oplus X_n)$ to itself. Also, the first factor $X_i$ of $X_i\oplus X_i$ admits two relevant complementary subspaces in $X_i\oplus X_i$: the first one is the second factor $X_i$, the second one is the diagonal $\Delta(X_i)$, since $X_i\oplus X_i=X_i\oplus\Delta(X_i)$ using that $(x,y)=(x-y,0)+(y,y)$ for $x$ and $y$ in $X_i$.\
By Lemma [Lemma 24](#constanti){reference-type="ref" reference="constanti"}, we get $$\mathrm{cr}_nF(W,\psi;X_1\oplus X_1,...,X_n\oplus X_n)\cong\bigoplus \mathrm{cr}_nF(W,\psi;A_1,...,A_n)\cong\bigoplus \mathrm{cr}_nF(W,\psi;A'_1,...,A'_n),$$ where the $A_i$ are either the first or the second factor in $X_i\oplus X_i$, and the $A'_i$ are either the first factor or the diagonal of $X_i\oplus X_i$.\
The component $\mathrm{cr}_nF(W,\psi;X_1,...,X_n)$, obtained by taking all $A_i$, respectively all $A'_i$, to be the first factor, is the same under both isomorphisms by the previous Remark, and it is stable under $\alpha'_*$. From the left part of the commutative diagram above, we get that the restriction of $\alpha'_*$ to that component is the identity, which implies that $\alpha_*$ restricted to $\mathrm{cr}_nF(W,\psi;X_1,...,X_n)$ is also the identity. ◻
## The category $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$
In this subsection, we finally prove that $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n}$ induces an equivalence of categories between the localisation $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$ and the category $\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$.\
By abuse of notation, for $M$ a functor from $\mathfrak{R}_S$ to $\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od}$, we denote by $T^n\otimes_{\Sigma_n}M$ the functor $\mathcal{E}(T^n\otimes_{\Sigma_n}M)$ (Definition [Definition 12](#defi){reference-type="ref" reference="defi"}), which is the functor on $\mathfrak{S}_S$ that maps $(W,\psi)$ to $\ker(\psi)^{\otimes n}\otimes_{\Sigma_n}M(W/\ker(\psi),\Tilde{\psi})$.
The following lemma is straightforward.
**Lemma 27**. *$T^n\otimes_{\Sigma_n}M\in \mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$.*
**Lemma 28**. *$\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n (T^n\otimes_{\Sigma_n}M)\cong M$ as a functor from $\mathfrak{R}_S$ to $\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od}$.*
*Proof.* By Lemma [Lemma 22](#kernuck){reference-type="ref" reference="kernuck"}, for $(W,\psi)$ regular, an element in $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n(T^n\otimes_{\Sigma_n}M)(W,\psi)\subset T^n(\mathbb{k}^n)\otimes_{\Sigma_n} M(W,\psi)$ is mapped to $0$ under each map from $\mathbb{k}^n$ to $\mathbb{k}^{n-1}$ that sends one of the factors $\mathbb{k}$ to $0$. Hence, an element of $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n(T^n\otimes_{\Sigma_n}M)(W,\psi)$ admits a unique representing element in $T^n(\mathbb{k}^n)\otimes_{\mathbb{k}} M(W,\psi)$ of the form $v_1\otimes...\otimes v_n\otimes m$, with $(v_1,...,v_n)$ the canonical basis of $\mathbb{k}^n$ and $m\in M(W,\psi)$. We get the required isomorphism. ◻
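For instance, for $n=1$ and $(W,\psi)$ regular, $(T^1\otimes M)(W\oplus\mathbb{k},\psi\boxplus\epsilon_{\mathbb{k}})=\mathbb{k}\otimes M(W,\psi)$ while $(T^1\otimes M)(W,\psi)=\ker(\psi)\otimes M(W,\psi)=0$, so the kernel computing $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}(T^1\otimes M)(W,\psi)$ is the whole of $\mathbb{k}\otimes M(W,\psi)\cong M(W,\psi)$, as claimed.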
**Proposition 29**. *The functor $M\mapsto T^n\otimes_{\Sigma_n}M$ is left adjoint to $$\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n\ :\ \mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})\rightarrow \mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od}).$$*
*Proof.* By naturality, a natural transformation from $T^n\otimes_{\Sigma_n}M$ to $F\in\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ is fully determined by the image of the class (for the equivalence relation induced by the actions of $\Sigma_n$ on $T^n$ and $M$) of the elements of the form $v_1\otimes...\otimes v_n\otimes m$ with $(W,\psi)$ an object of $\mathfrak{R}_S$, $m\in M(W,\psi)$ and $(v_1,...,v_n)$ the canonical basis of $\mathbb{k}^n$. Furthermore, since $v_1\otimes...\otimes v_n\otimes m$ represents an element in $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n(T^n\otimes_{\Sigma_n} M)(W,\psi)$, its image must be in $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n F(W,\psi)$. Hence, the map that sends a natural transformation from $T^n\otimes_{\Sigma_n}M$ to $F$ to the induced morphism from $M$ to $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n}F$ is injective.\
We consider a morphism in $\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$ from $M$ to $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^nF$. By Proposition [Proposition 21](#pirash){reference-type="ref" reference="pirash"}, it induces by adjunction a natural transformation of functors on $\mathfrak{R}_S\times\mathcal{V}^f$ from $T^n\otimes_{\Sigma_n} M$ to $\mathcal{O}(F)$. Finally, Lemma [Lemma 26](#ident){reference-type="ref" reference="ident"} and Lemma [Lemma 13](#2zoug){reference-type="ref" reference="2zoug"} imply that, if $F$ is polynomial of degree $n$, this natural transformation extends to a natural transformation from $\mathcal{E}(T^n\otimes_{\Sigma_n} M)$ to $F$ in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$. ◻
**Theorem 30**. *$\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n$ induces an equivalence of categories between $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$ and $\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$.*
*Proof.* $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n$ is an exact functor from $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ to $\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$. It maps $\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$ to $0$, hence it induces a functor from $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$ to $\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$. We consider $F\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ and the exact sequence $0\rightarrow\ker(\eta)\rightarrow T^n\otimes_{\Sigma_n}\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n F\overset{\eta}{\rightarrow} F\rightarrow \text{coker}(\eta)\rightarrow 0$, for $\eta$ the counit of the adjunction. By Lemma [Lemma 28](#congu){reference-type="ref" reference="congu"}, when we apply to it the functor $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n$, the middle map becomes an isomorphism. Therefore, $\ker(\eta)$ and $\text{coker}(\eta)$ are in $\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$, so $\eta$ is an isomorphism in $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$. We get that the functor induced by $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n$ from $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$ to $\mathcal{F}(\mathfrak{R}_S,\mathbb{k}\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$ and the composition of $M\mapsto T^n\otimes_{\Sigma_n}M$ with the localization functor from $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ to $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$ are inverses. ◻
# Simple objects in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$
In this section, we describe the simple objects of the category $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ for $\mathbb{k}$ a finite field $\mathbb{F}_p$ with $p$ prime, using the equivalence between $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$ and the category $\mathcal{F}(\mathfrak{R}_S,\mathbb{F}_p\left[\Sigma_n\right]-\mathcal{M}\mathrm{od})$. First, we prove that simple objects of $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ are polynomial.\
We consider the family of injective cogenerators $I_{(W\oplus V,\psi\boxplus \epsilon_V)}:=\mathbb{F}_p^{\text{Hom}_{\mathfrak{S}_S}(\_, (W\oplus V,\psi\boxplus\epsilon_V))}$.
**Proposition 31**. *For any $(W,\psi)\in \mathfrak{R}_S$ and any $V\in\mathcal{V}^f$, $I_{(W\oplus V,\psi\boxplus \epsilon_V)}$ is analytic.*
*Proof.* We have a forgetful functor from $\mathfrak{S}_S$ to $\mathcal{V}^f$; it induces a functor $\mathcal{U}$ from $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$ to $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$. For $\Bar{\Delta}$ the usual difference functor in $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$, $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}\mathcal{U}(F)(H,\eta)\cong \Bar{\Delta} F(H)$. Therefore, $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^{n+1}\mathcal{U}(F)=0$ if and only if $\Bar{\Delta}^{n+1}F=0$, so $\mathcal{U}(F)\in\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})$ if and only if $F$ is polynomial in the usual sense, and $F$ analytic implies $\mathcal{U}(F)$ analytic.\
By Proposition [Proposition 10](#2.8){reference-type="ref" reference="2.8"}, $\text{Hom}_{\mathfrak{S}_S}((H,\eta),(W\oplus V,\psi\boxplus\epsilon_V))\cong\text{Hom}_{\mathfrak{S}_S}((H,\eta),(W,\psi)) \times\text{Hom}_{\mathbb{k}}(H,V)$. Therefore, $I_{(W\oplus V,\psi\boxplus \epsilon_V)}(H,\eta)$ is naturally isomorphic to the tensor product $$\mathbb{F}_p^{\text{Hom}_{\mathfrak{S}_S}((H,\eta),(W,\psi))}\otimes\mathbb{F}_p^{\text{Hom}_{\mathbb{k}}(H,V)}.$$ We get that $I_{(W\oplus V,\psi\boxplus \epsilon_V)}\cong I_{(W,\psi)}\otimes \mathcal{U}(I_V)$, where $I_V$ denote the injective object in $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$ that maps $H$ to $\mathbb{F}_p^{\text{Hom}_{\mathbb{k}}(H,V)}$.\
$I_{(W,\psi)}$ is polynomial of degree $0$: indeed, since $(W,\psi)$ is regular, we have that for any map from $(H\oplus\mathbb{F}_p,\eta\boxplus\epsilon_{\mathbb{F}_p})$ to $(W,\psi)$ in $\mathfrak{S}_S,$ $\mathbb{F}_p$ is mapped to $0$. Therefore, the map from $I_{(W,\psi)}(H\oplus\mathbb{F}_p,\eta\boxplus\epsilon_{\mathbb{F}_p})$ to $I_{(W,\psi)}(H,\eta)$ induced by the projection from $(H\oplus\mathbb{F}_p,\eta\boxplus\epsilon_{\mathbb{F}_p})$ to $(H,\eta)$ is an isomorphism and $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}I_{(W,\psi)}=0$.\
Since $I_V$ is analytic (cf. [@K1]), $\mathcal{U}(I_V)$ is analytic, and therefore so is the tensor product $I_{(W\oplus V,\psi\boxplus \epsilon_V)}$. ◻
Since the $I_{(W\oplus V,\psi\boxplus \epsilon_V)}$ form a family of injective cogenerators, any simple object $S$ embeds in some $I_{(W\oplus V,\psi\boxplus \epsilon_V)}$. Also, since $I_{(W\oplus V,\psi\boxplus \epsilon_V)}$ is analytic, $S$ embeds in some $p_n(I_{(W\oplus V,\psi\boxplus \epsilon_V)})$ and is therefore polynomial.\
An important feature of the category $\mathfrak{S}_S$ is that, when there is a map $\gamma$ from $(H,\eta)$ to $(W,\psi)$, either $(H/\ker(\eta),\Tilde{\eta})$ and $(W/\ker(\psi),\Tilde{\psi})$ are isomorphic or there is no map from $(W,\psi)$ to $(H,\eta)$. Therefore, for $(W,\psi)$ a maximal element among isomorphism classes of objects in $\mathfrak{R}_S$ such that there exists $V$ with $F(W\oplus V,\psi\boxplus\epsilon_V)\neq 0$, one can consider the subfunctor $\Bar{F}$ of $F$ defined by $\Bar{F}(H,\eta)=F(H,\eta)$ if $(H/\ker(\eta),\Tilde{\eta})\cong (W,\psi)$, and $\Bar{F}=0$ otherwise. We get that, for $S$ simple, there is $(W,\psi)\in\mathfrak{R}_S$ such that $S(H,\eta)$ non-trivial implies $(H/\ker(\eta),\Tilde{\eta})\cong(W,\psi)$.\
We can now describe the simple objects of $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$.\
Let $S$ be a simple polynomial functor of degree $n$; then $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n$ maps $S$ onto a simple object of $\mathcal{F}(\mathfrak{R}_S,\mathbb{F}_p\left[\Sigma_n\right]-\mathcal{M}od)$. Those are the functors that map some $(W,\psi)\in\mathfrak{R}_S$ to some simple object in $\mathcal{F}(\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi),\mathbb{F}_p\left[\Sigma_n\right]-\mathcal{M}od)\cong\mathbb{F}_p\left[\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi)\times\Sigma_n\right]-\mathcal{M}od$, and every $(H,\eta)$ not isomorphic to $(W,\psi)$ to $0$.\
In the following, we will use standard results about simple objects of $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$. We use the notations of [@PS]. For every $2$-regular partition $\lambda$, there is an element $\epsilon_\lambda\in\mathbb{F}_p\left[\Sigma_n\right]$, denoted $\Bar{R}_\lambda\Bar{C}_\lambda\Bar{R}_\lambda$ in [@PS], such that $\epsilon_\lambda\mathbb{F}_p\left[\Sigma_n\right]$ is isomorphic to the simple module parametrized by $\lambda$.\
It is known (cf. [@PS]) that, for $\epsilon_\lambda T^n$ the functor on $\mathcal{V}^f$ that maps $V$ to the image of $V^{\otimes n}$ under the right action of $\epsilon_\lambda$, $\epsilon_\lambda T^n$ is a polynomial functor of degree $n$ in $\mathcal{F}(\mathcal{V}^f,\mathcal{V})$ and admits no non-trivial subfunctor of degree less than $n-1$. It is also known that $\Bar{\Delta}^n(\epsilon_\lambda T^n)\cong\epsilon_\lambda\mathbb{F}_p\left[\Sigma_n\right]$.
**Theorem 32**. *There is a one-to-one correspondence between isomorphism classes of simple objects of $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ and isomorphism classes of simple objects of $$\bigsqcup\limits_{(W,\psi),n}\mathbb{F}_p\left[\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi)\times\Sigma_n\right]-\mathcal{M}od$$ with $(W,\psi)$ running through the isomorphism classes of objects in $\mathfrak{R}_S$ and $n$ running through $\mathbb{N}$.*
*Proof.* We have already described the map from simple objects in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ to simple objects in $\bigsqcup\limits_{(W,\psi),n}\mathcal{F}(\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi),\mathbb{F}_p\left[\Sigma_n\right]-\mathcal{M}od)$. We have to prove that it is a one-to-one correspondence.\
Let $M$ be a simple object in $\mathcal{F}(\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi),\mathbb{F}_p\left[\Sigma_n\right]-\mathcal{M}od)$. Since $\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi)$ is a category with only one object, $M$ is an $\mathbb{F}_p\left[\Sigma_n\right]$-module equipped with a left action of $\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi)$. As an $\mathbb{F}_p\left[\Sigma_n\right]$-module, it admits an injection from some simple $\Sigma_n$-module $\epsilon_\lambda\mathbb{F}_p\left[\Sigma_n\right]$. Each element of $\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi)$ maps $\epsilon_\lambda\mathbb{F}_p\left[\Sigma_n\right]$ to some isomorphic $\mathbb{F}_p\left[\Sigma_n\right]$-submodule of $M$. Those are either disjoint or equal to each other. Using that $M$ is simple as an object in $\mathcal{F}(\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi),\mathbb{F}_p\left[\Sigma_n\right]-\mathcal{M}od)$, we get an isomorphism of $\Sigma_n$-modules $M\cong(\epsilon_\lambda\mathbb{F}_p\left[\Sigma_n\right])^{\oplus i}$ for some $i\in\mathbb{N}$ (this is because $\mathcal{A}ut_{\mathfrak{S}_S}(W,\psi)$ is finite). We consider $T^n\otimes_{\Sigma_n}M\in\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$. It admits a quotient of the form $(\epsilon_\lambda T^n)^{\oplus i}$ (by abuse of notation, we omit the action of morphisms in $\mathfrak{R}_S$ from the notation). This quotient admits no subfunctor of degree less than or equal to $n-1$ and $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n(\epsilon_\lambda T^n)^{\oplus i}\cong M$, hence it is the quotient of $T^n\otimes_{\Sigma_n}M$ by $p_{n-1}(T^n\otimes_{\Sigma_n}M)$. Therefore, $(\epsilon_\lambda T^n)^{\oplus i}$ is a simple object in $\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$.\
Furthermore, let $F\in\mathcal{F}(\mathfrak{S}_S,\mathcal{V})$ be polynomial of degree $n$ and such that $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n F\cong M$. The unit of the adjunction between $\Bar{\Delta}_{(\mathbb{k},\epsilon_{\mathbb{k}})}^n$ and $T^n\otimes_{\Sigma_n}\_$ gives us a map from $T^n\otimes_{\Sigma_n}M$ to $F$; since it is an isomorphism in $\mathcal{P}\mathrm{ol}_{n}(\mathfrak{S}_S,\mathcal{V})/\mathcal{P}\mathrm{ol}_{n-1}(\mathfrak{S}_S,\mathcal{V})$, its kernel is included in $p_{n-1}(T^n\otimes_{\Sigma_n}M)$. Therefore, it factorises through $T^n\otimes_{\Sigma_n}M\twoheadrightarrow(\epsilon_\lambda T^n)^{\oplus i}$. We get that either $F$ is not simple or it is isomorphic to $(\epsilon_\lambda T^n)^{\oplus i}$. ◻
Address : Université de Lille, laboratoire Paul Painlevé (UMR8524), Cité scientifique, bât. M3, 59655 VILLENEUVE D'ASCQ CEDEX, FRANCE\
e-mail : aacde13\@live.fr
---
author:
- |
[Guillaume Kon Kam King]{.smallcaps}, *Université Paris-Saclay - INRAE*\
[Andrea Pandolfi]{.smallcaps}, *Bocconi University*\
[Marco Piretto]{.smallcaps}, *BrandDelta*\
[Matteo Ruggiero[^1]]{.smallcaps}, *University of Torino and Collegio Carlo Alberto*
title: " **Approximate filtering via discrete dual processes** "
---
> We consider the task of filtering a dynamic parameter evolving as a diffusion process, given data collected at discrete times from a likelihood which is conjugate to the marginal law of the diffusion, when a generic dual process on a discrete state space is available. Recently, it was shown that duality with respect to a death-like process implies that the filtering distributions are finite mixtures, making exact filtering and smoothing feasible through recursive algorithms with polynomial complexity in the number of observations. Here we provide general results for the case of duality between the diffusion and a regular jump continuous-time Markov chain on a discrete state space, which typically leads to filtering distributions given by countable mixtures indexed by the dual process state space. We investigate the performance of several approximation strategies on two hidden Markov models driven by Cox--Ingersoll--Ross and Wright--Fisher diffusions, which admit duals of birth-and-death type, and compare them with the available exact strategies based on death-type duals and with bootstrap particle filtering on the diffusion state space as a general benchmark.
>
> **Keywords:** Duality; Filtering; Bayesian inference; Diffusion; Particle filtering; Smoothing.
# Introduction
Hidden Markov models are widely used statistical models for time series that assume an unobserved Markov process $(X_{t})_{t\geq 0}$, or hidden *signal*, driving the process that generates the observations $(Y_{t_i})_{i=0,\dots, n}$, e.g., by specifying the dynamics of one or more parameters of the observation density $f_{X_{t}}(y)$, called *emission distribution*. See [@CMR05] for a general treatment of hidden Markov models. In this framework, the first task is to estimate the trajectory of the signal given noisy observations collected at discrete times $0=t_{0}<t_{1}<\cdots<t_{n}=T$, which amounts to performing sequential Bayesian inference by computing the so-called *filtering distributions* $p(x_{t_{i}}|y_{t_{0}},\ldots,y_{t_{i}})$, i.e., the marginal distributions of the signal at time $t_{i}$ conditional on observations collected up to time $t_{i}$. Originally motivated by real-time tracking and navigation systems and pioneered by [@K60; @KB61], classical and widely known explicit results for this problem include: the *Kalman--Bucy* filter, when both the signal and the observation process are formulated in a Gaussian linear system; the *Baum--Welch* filter, when the signal has a finite state space and the observations are categorical; the *Wonham* filter, when the signal has a finite state space and the observations are Gaussian. These scenarios allow the derivation of so-called *finite-dimensional filters*, i.e., a sequence of filtering distributions whose explicit identification is obtained through a parameter update based on the collected observations and on the time separation between the collection times, such that the resulting computational cost increases linearly with the number of observation times. Other explicit results include [@S81; @FR90; @FV98; @RS01; @G03; @GK04; @CGK11]. Outside these classes, explicit solutions are difficult to obtain, and their derivation typically relies on *ad hoc* computations.
This is especially true when the map $x\mapsto f_{x}$ is non-linear and when the signal transition kernel is known up to a series expansion, often intractable, as is the case for many widely used stochastic models. When exact solutions are not available, one must typically make use of approximate strategies, whose state of the art is most prominently based on extensions of the Kalman and particle filters. See, for example, [@BC09; @CP20].
A somewhat weaker but useful notion with respect to that of a finite-dimensional filter was formulated in [@CG06], who introduced the concept of *computable filter*. This extends the former class to a larger class of filters whose marginal distributions are *finite mixtures* of elementary kernels rather than single kernels. Unlike the former case, such a scenario entails a higher computational cost, usually polynomial in the number of observation times, but avoids the infinite-dimensionality typically implied by series expansion of the signal transition kernel. See [@CG09].
Recently, [@PR14] derived sufficient conditions for computable filtering based on duality. A *dual process* is a process $D_{t}$ which enjoys the identity $$\label{duality identity}
\mathbb{E}[h(X_{t},d)|X_{0}=x]=\mathbb{E}[h(x,D_{t})|D_{0}=d].$$ Here the expectation on the left-hand side is taken with respect to the transition law of the signal $X_{t}$, and that on the right-hand side with respect to that of $D_{t}$, while the functions $h(x,d)$ which satisfy the above identity are called duality functions. See [@JK14] for a review and for the technical details we have omitted here. Duality has received an increasing amount of attention recently, and has found applications in Markov process theory, interacting particle systems, statistical physics and population genetics, among other fields. See, e.g., [@EK86; @M99; @BEG00; @Gea07; @EG09; @Gea09a; @Gea09b; @Eea10; @O10; @Cea15; @Fea21; @Gea22]. In the framework of duality, [@PR14] showed that for a reversible signal whose marginal distributions are conjugate to the emission distribution (i.e., the Bayesian update at a fixed $t$ can be computed in closed-form), computable filtering is guaranteed if the stochastic part of the dual process evolves on a finite state space. This in turn allows one to derive recursive formulae for the filtering distributions for non-linear hidden Markov models involving signals driven by Cox--Ingersoll--Ross (CIR) processes and $K$-dimensional Wright--Fisher (WF) processes. Along similar lines, duality was exploited for computable smoothing, whereby one conditions also on future data points, in [@KKK21] and for nonparametric hidden Markov models driven by Fleming--Viot and Dawson--Watanabe diffusions in [@PRS16; @ALR21; @ALR22].
In this paper, we investigate the impact on filtering problems for hidden Markov models when the dual process takes the more general form of a continuous-time Markov chain on a discrete state space. This is of interest for example in some population genetic models with selection [@BEG00] or interaction [@Fea21], whose known dual processes are of birth-and-death (B&D) type and whose specific filtering problems are currently under investigation by some of the authors. When the dual process evolves in a countable state space, the filtering distributions can in general be expected to be countably infinite mixtures. This leads to inferential procedures which are not *computable* in the sense specified above, since the computation of the filtering distribution can no longer be exact. However, it is natural to wonder how the inferential procedures obtained in such a scenario perform, possibly aided by some suitable approximation strategies.
The paper is organized as follows. In Section [2](#sec:conditions){reference-type="ref" reference="sec:conditions"} we identify sufficient conditions for filtering based on discrete duals and provide a general description of the filtering operations in this setting. In Section [3](#sec:algorithms){reference-type="ref" reference="sec:algorithms"}, we apply these results to devise practical algorithms which allow one to evaluate filtering and smoothing distributions in recursive form under this formulation. Sections [4](#sec:CIR){reference-type="ref" reference="sec:CIR"} and [5](#sec:WF){reference-type="ref" reference="sec:WF"} investigate hidden Markov models driven by Cox--Ingersoll--Ross and $K$-dimensional Wright--Fisher diffusions, and show that the first admits a dual given by a one-dimensional B&D process, and the second a dual given by a $K$-dimensional Moran model. Section [6](#sec:experiments){reference-type="ref" reference="sec:experiments"} discusses several approximation strategies used to implement the above algorithms with these dual processes, and compares their performance with exact filtering based on the results in [@PR14] and with a bootstrap particle filter as a general benchmark. Finally, we conclude with some brief remarks.
# Filtering via discrete dual processes {#sec:conditions}
Let the hidden signal $(X_{t})_{t\ge0}$, which here takes the role of a temporally evolving target of estimation, be given by a diffusion process on $\mathcal{X}\subset \mathbb{R}^{K}$, for $K\ge1$. Observations $Y_{t_{i}}\in \mathcal{Y}\subset \mathbb{R}^{D}$, $D\ge1$ are collected at discrete times $0=t_{0}<t_{1}<\cdots<t_{n}=T$ with distribution $Y_{t_{i}}\overset{\text{ind}}{\sim} f_{x}(\cdot)$, given $X_{t_{i}}=x$. Given an observation $Y=y$, define the *update operator* $\phi_y$ acting on measures $\xi$ on $\mathcal{X}$ by $$\label{update operator}
\phi_y(\xi)(x) := \frac{f_x(y)\xi(x)}{\mu_{\xi}(y)}, \qquad
\mu_{\xi}(y) := \int_{\mathcal{X}} f_x(y)\xi(x)\mathrm{d}x.$$ Here the probability measure $\xi$ acts as a prior distribution which encodes the current knowledge on the signal $X_{t}$, whereas $\mu_{\xi}(y)$ is the marginal likelihood of a data point $y$ when $X_{t}$ has distribution $\xi$. The update operator thus amounts to an application of Bayes' theorem for conditioning the probability measure $\xi$ on a new observation $y$, leading to the updated measure $\phi_y(\xi)$. Define also the *propagation operator* $\psi_t$ by $$\label{propagation operator}
\psi_t(\xi)(\mathrm{d}x') := \xi P_t(\mathrm{d}x')= \int_{\mathcal{X}}\xi(x)P_t(x, \mathrm{d}x')\mathrm{d}x,$$ where $P_{t}$ is the transition kernel of the signal. Hence $\psi_t(\xi)$ is the probability measure at time $t$ obtained by propagating forward the law $\xi$ of the signal at time 0 by means of the signal semigroup.
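As a purely illustrative numerical sketch of these two operators (not part of the model specification), one can discretize the state space on a grid and apply $\phi_y$ and $\psi_t$ directly; the Gaussian emission density and Gaussian transition kernel below are stand-in choices, not those of the models studied later.

```python
import numpy as np

def update(xs, prior, lik):
    # Bayes update phi_y: multiply the prior by the likelihood f_x(y)
    # and renormalise by the marginal likelihood mu_xi(y).
    dx = xs[1] - xs[0]
    post = lik * prior
    marginal = post.sum() * dx                    # mu_xi(y), Riemann sum
    return post / marginal, marginal

def propagate(xs, density, kernel):
    # Propagation psi_t: integrate the current density against the
    # transition kernel P_t(x, dx'), here discretised as a grid matrix.
    dx = xs[1] - xs[0]
    return density @ kernel * dx

xs = np.linspace(-6.0, 6.0, 1201)
dx = xs[1] - xs[0]
prior = np.exp(-xs**2 / 2) / np.sqrt(2 * np.pi)   # N(0,1) prior xi
lik = np.exp(-(1.0 - xs)**2 / 2)                  # N(x,1) emission, datum y = 1
post, _ = update(xs, prior, lik)

tau = 0.5                                         # stand-in transition scale
K = np.exp(-(xs[None, :] - xs[:, None])**2 / (2 * tau**2)) / (tau * np.sqrt(2 * np.pi))
prop = propagate(xs, post, K)

# For a N(0,1) prior and N(x,1) likelihood the posterior is N(y/2, 1/2),
# so the posterior mean should be close to 0.5, and propagation preserves mass.
print((xs * post).sum() * dx, prop.sum() * dx)
```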
We will make three assumptions, the first two of which are the same as in [@PR14].
> **Assumption 1** (Reversibility). The signal $X_{t}$ is reversible with respect to the density $\pi$, i.e., $\pi(x)P_t(x, \mathrm{d}x')=\pi(x')P_t(x', \mathrm{d}x)$.
See the discussion in [@PR14] on the possibility of relaxing the above assumption. For $K\in \mathbb{Z}_{+}$, define now the space of multi-indices $\mathcal{M}=\mathbb{Z}_{+}^{K}$ to be $$\nonumber%\label{}
\mathcal{M}=\{\mathbf{m}=(m_{1},\ldots,m_{K}):\ m_{j}\in\mathbb{Z}_{+},\text{ for }j=1,\ldots,K\},$$ whose origin is denoted $\mathbf{0}=(0,\ldots,0)$.
> **Assumption 2** (Conjugacy). For $\Theta\subset \mathbb{R}^{l}$, $l\in \mathbb{Z}_{+}$, let $h:\mathcal{X}\times \mathcal{M}\times \Theta\rightarrow \mathbb{R}_{+}$ be such that $\sup_{x\in \mathcal{X}}h(x,\mathbf{m},\theta)<\infty$ for all $\mathbf{m}\in \mathcal{M},\theta\in \Theta$ and $h(x,\mathbf{0},\theta')=1$ for some $\theta'\in \Theta$. Then $f_{x}(\cdot)$ is conjugate to distributions in the family $$\label{family F}\nonumber
> \mathcal{F}=\{g(x,\mathbf{m},\theta)=h(x,\mathbf{m},\theta)\pi(x),\ \mathbf{m}\in \mathcal{M},\theta\in\Theta\},$$ i.e., there exist functions $t:\mathcal{Y}\times \mathcal{M}\rightarrow \mathcal{M}$ and $T:\mathcal{Y}\times \Theta\rightarrow\Theta$ such that if $X_{t}\sim g(x,\mathbf{m},\theta)$ and $Y_{t}|X_{t}=x\sim f_{x}$, we have $X_{t}|Y_{t}=y\sim g(x,t(y,\mathbf{m}),T(y,\theta))$.
Here $g(x,\mathbf{m},\theta)$ takes the role of the "current" prior distribution, i.e., the prior on the signal state which was possibly derived from previous conditioning and propagations, and $g(x,t(y,\mathbf{m}),T(y,\theta))$ takes the role of the posterior, i.e., $g(x,\mathbf{m},\theta)$ conditional on the new data point $y$. The functions $t(y,\mathbf{m}),T(y,\theta)$ provide the transformations that update the parameters based on $y$. In the absence of data, the condition $h(x,\mathbf{0},\theta')=1$ reduces $g(x,\mathbf{m},\theta)$ to $\pi(x)$.
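A classical conjugate pair fitting this scheme (with a hypothetical parametrisation chosen purely for illustration, in the spirit of the Gamma-type families appearing later) is a Gamma family with Poisson emissions, for which $t(y,\mathbf{m})=\mathbf{m}+y$ and $T(y,\theta)=\theta+1$. The sketch below verifies the conjugate update numerically on a grid.

```python
import numpy as np
from math import lgamma

def gamma_pdf(x, shape, rate):
    # Gamma density in the shape/rate parametrisation.
    return np.exp(shape * np.log(rate) + (shape - 1) * np.log(x)
                  - rate * x - lgamma(shape))

alpha, m, theta, y = 2.0, 3, 1.5, 4       # hypothetical illustrative values
xs = np.linspace(1e-6, 40.0, 4001)
dx = xs[1] - xs[0]

prior = gamma_pdf(xs, alpha + m, theta)   # g(x, m, theta): Gamma(alpha + m, theta)
lik = np.exp(-xs) * xs**y                 # Poisson(y | x), up to the factor 1/y!
post = prior * lik
post /= post.sum() * dx                   # numerical Bayes update

# Conjugacy: the posterior is g(x, t(y,m), T(y,theta)) = Gamma(alpha+m+y, theta+1).
conj = gamma_pdf(xs, alpha + m + y, theta + 1.0)
print(np.abs(post - conj).max())
```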
The third assumption weakens Assumption 3 in [@PR14] by assuming the dual process has finite activity on a discrete state space, and possibly has a deterministic companion.
> **Assumption 3** (Duality). Given a deterministic process $\Theta_{t}\in \Theta$ and a regular jump continuous-time Markov chain $M_{t}$ on $\mathbb{Z}_{+}^{K}$ with transition probabilities $$\label{transition probabilities}
> p_{\mathbf{m},\mathbf{n}}(t;\theta):=\mathbb{P}(M_{t}=\mathbf{n}| M_{0}=\mathbf{m},\Theta_{0}=\theta),$$ equation [\[duality identity\]](#duality identity){reference-type="eqref" reference="duality identity"} holds with $D_{t}=(M_{t},\Theta_{t})$ and $h$ as in Assumption 2.
The following result provides a full description of the propagation and update steps which allow one to compute the filtering distribution.
**Proposition 1**. *Let Assumptions 1-3 hold, and let $\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}g(x,\mathbf{m},\theta)$ be a countable mixture with $\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}=1$. Then, for $\psi_t$ as in [\[propagation operator\]](#propagation operator){reference-type="eqref" reference="propagation operator"} we have $$\label{forward propagation}
\psi_t
\bigg(\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}g(x,\mathbf{m},\theta)\bigg)=\sum_{\mathbf{n}\in \mathcal{M}}w'_{\mathbf{n}}(t)g(x,\mathbf{n},\Theta_{t}),$$ where $$\label{weights rearrangement after propagation}
\begin{aligned}
w'_{\mathbf{n}}(t)=\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}p_{\mathbf{m},\mathbf{n}}(t;\theta), \quad
\end{aligned}$$ and $p_{\mathbf{m},\mathbf{n}}(t;\theta)$ are as in [\[transition probabilities\]](#transition probabilities){reference-type="eqref" reference="transition probabilities"}. Furthermore, for $\phi_{y}$ as in [\[update operator\]](#update operator){reference-type="eqref" reference="update operator"}, we have $$\label{update operation}
\phi_{y} \bigg(\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}g(x,\mathbf{m},\theta)\bigg)=
\sum_{\mathbf{m}\in \mathcal{M}}\hat w_{\mathbf{m},\theta}(y)g(x,t(y,\mathbf{m}),T(y,\theta))$$ where $\hat w_{\mathbf{m},\theta}(y)\propto w_{\mathbf{m}}\mu_{\mathbf{m},\theta}(y)$ and $$\label{marginals}
\begin{aligned}
\mu_{\mathbf{m},\theta}(y):=\int_{\mathcal{X}}f_{x}(y)g(x, \mathbf{m}, \theta)\mathrm{d}x.
\end{aligned}$$*
*Proof.* First observe that $\psi_t(g(x,\mathbf{m},\theta))=\sum_{\mathbf{n}\in \mathcal{M}}p_{\mathbf{m},\mathbf{n}}(t;\theta)g(x,\mathbf{n},\Theta_{t})$, which follows similarly to Proposition 2.2 in [@PR14]. Then the claim follows by linearity using the fact that $$\label{oper_prop}
\psi_t \bigg( \sum_{i\ge1}w_i\xi_i \bigg) = \sum_{i\ge1} w_i \psi_t (\xi_i)$$ so that $$\nonumber%\label{}
\begin{aligned}
\psi_t &\,\bigg(\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}g(x,\mathbf{m},\theta)\bigg)
=
\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}\psi_{t}(g(x,\mathbf{m},\theta))\\
=&\,
\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}\sum_{\mathbf{n}\in \mathcal{M}}p_{\mathbf{m},\mathbf{n}}(t;\theta)g(x,\mathbf{n},\Theta_{t})
=
\sum_{\mathbf{n}\in \mathcal{M}}\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}p_{\mathbf{m},\mathbf{n}}(t;\theta)g(x,\mathbf{n},\Theta_{t})
\end{aligned}$$ Using now the fact that $$\label{mixture update}
\phi_y \bigg( \sum_{i\ge1}w_i\xi_i\bigg) = \sum_{i\ge1} \frac{w_i \mu_{\xi_i}(y)}{\sum_j w_j \mu_{\xi_j}(y)} \phi_y(\xi_i),\qquad$$ we also find that $$\phi_{y} \bigg(\sum_{\mathbf{m}\in \mathcal{M}}w_{\mathbf{m}}g(x,\mathbf{m},\theta)\bigg)=
\sum_{\mathbf{m}\in \mathcal{M}}\frac{w_{\mathbf{m}}\mu_{\mathbf{m},\theta}(y)}{\sum_{\mathbf{n}\in \mathcal{M}}w_{\mathbf{n}}\mu_{\mathbf{n},\theta}(y)}g(x,t(y,\mathbf{m}),T(y,\theta))
$$ where $$\mu_{\mathbf{m},\theta}(y)=
\int_{\mathcal{X}} f_x(y)g(x,\mathbf{m},\theta)\mathrm{d}x
=\int_{\mathcal{X}} f_x(y)h(x,\mathbf{m},\theta)\pi(x)\mathrm{d}x.$$ ◻
The expression [\[forward propagation\]](#forward propagation){reference-type="eqref" reference="forward propagation"}, together with [\[weights rearrangement after propagation\]](#weights rearrangement after propagation){reference-type="eqref" reference="weights rearrangement after propagation"}, provides a general recipe for computing the forward propagation of the current marginal distribution of the signal $g(x,\mathbf{m},\theta)$, based on the transition probabilities of the dual continuous-time Markov chain. Since the update operator [\[update operator\]](#update operator){reference-type="eqref" reference="update operator"} can be easily applied to the resulting distribution, Proposition [Proposition 1](#prop:prop&update){reference-type="ref" reference="prop:prop&update"} then shows that under these assumptions all filtering distributions are countable mixtures of elementary kernels indexed by the state space of the dual process, with mixture weights determined by the dual process transition probabilities $p_{\mathbf{m},\mathbf{n}}(t;\theta)$. When the latter give positive mass only to points $\{\mathbf{n}\in \mathcal{M}: \ \mathbf{n}\le \mathbf{m}\}$, as is the case for a pure-death process, the right-hand side of [\[forward propagation\]](#forward propagation){reference-type="eqref" reference="forward propagation"} reduces to a finite sum, and one can construct an exact filter with a computational cost that is polynomial in the number of observations, as shown in [@PR14].
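The pure-death case just mentioned can be sketched concretely. Assuming, for illustration only, a linear pure-death dual in which each of the $m$ active particles dies independently at rate $\lambda$ (so that $p_{\mathbf{m},\mathbf{n}}(t)$ is a Binomial$(m,e^{-\lambda t})$ mass at $n\le m$, an illustrative choice rather than one of the duals treated later), the forward propagation of the mixture weights becomes a finite matrix-vector product:

```python
import numpy as np
from math import comb, exp

def death_transition(m_max, t, lam=1.0):
    # Linear pure-death process: starting from m particles, each survives
    # [0, t] independently with probability exp(-lam * t), so the number
    # of survivors n is Binomial(m, exp(-lam * t)); zero mass for n > m.
    p = exp(-lam * t)
    P = np.zeros((m_max + 1, m_max + 1))
    for m in range(m_max + 1):
        for n in range(m + 1):
            P[m, n] = comb(m, n) * p**n * (1.0 - p)**(m - n)
    return P

w = np.array([0.0, 0.2, 0.5, 0.3])        # mixture weights over m = 0, ..., 3
P = death_transition(3, t=0.7)
w_prop = w @ P                            # w'_n = sum_m w_m p_{m,n}(t)
print(w_prop.sum())
```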
The above approach can be seen as an alternative to deriving the filtering distribution of the signal by leveraging a spectral expansion of the transition function $P_{t}$ in [\[propagation operator\]](#propagation operator){reference-type="eqref" reference="propagation operator"}, which typically requires ad hoc computations and does not lend itself easily to explicit update operations through [\[update operator\]](#update operator){reference-type="eqref" reference="update operator"}. Note also that expressions like [\[forward propagation\]](#forward propagation){reference-type="eqref" reference="forward propagation"} can be used, by taking appropriate limits of $p_{\mathbf{m},\mathbf{n}}(t;\theta)$ as $t\rightarrow 0$, to identify the transition kernel $P_{t}$ of the signal itself; see, e.g., [@BEG00; @PR14; @Gea22].
# Recursive formulae for filtering and smoothing {#sec:algorithms}
In order to translate Proposition [Proposition 1](#prop:prop&update){reference-type="ref" reference="prop:prop&update"} into practical recursive formulae for filtering and smoothing, we assume for simplicity of exposition that the time intervals between successive data collections $t_{i}-t_{i-1}$ equal $\Delta$ for all $i$. For ease of reading, we will therefore write $P_{\Delta}$ instead of $P_{t_{i}-t_{i-1}}$ for the signal transition function over the interval $\Delta=t_{i}-t_{i-1}$. We will also use the established notation whereby $i|0:i-1$ indicates that the argument refers to time $t_{i}=i\Delta$ and that we are conditioning on the data collected at times from $0$ to $t_{i-1}=(i-1)\Delta$.
Define the *filtering density* $$\label{filtering distr general}
\nu_{i|0: i}(x_{i}):=p(x_{i}| y_{0:i})\propto \int_{\mathcal{X}^{i}} p(x_{0:i}, y_{0:i}) \mathrm{d}x_{0:i-1},$$ i.e., the law of the signal at time $t_{i}$ given data up to time $t_{i}$, obtained by integrating out the past trajectory. Define also the *predictive density* $$\label{predictive general}
\nu_{i+1|0:i}(x_{i+1}):=p(x_{i+1} | y_{0:i}) =\int_{\mathcal{X}} p(x_{i} | y_{0:i})P_{\Delta}(x_{i+1} | x_{i})\mathrm{d}x_{i},$$ i.e., the marginal density of the signal at time $t_{i+1}$, given data up to time $t_{i}$. This can be expressed recursively as a function of the previous *filtering density* $p(x_{i} | y_{0:i})$, as displayed. Finally, define the marginal *smoothing density* $$\label{smoothing distr general}
\nu_{i|0:n}(x_{i}):=p(x_{i} | y_{0:n}) \propto \int_{\mathcal{X}^{n}} p(x_{0:n}, y_{0:n}) \mathrm{d}x_{0:i-1}\mathrm{d}x_{i+1:n},$$ where the signal is evaluated at time $t_{i}$ conditional on all available data. The first two distributions above are natural objects of inferential interest, whereas the latter is typically used to improve previous estimates once additional data become available. Finally, for $\Theta_{\Delta}$ as in Assumption 3 and $t(\cdot,\cdot),T(\cdot,\cdot)$ as in Assumption 2, define for $i=0,\dots,n$ the quantities $$\begin{aligned}
\label{vartheta and M-sets}
\vartheta_{i|0:i}:=&\,T(y_{i}, \vartheta_{i|0:i-1}),
\quad \quad \ \
\vartheta_{i|0:i-1} := \Theta_{\Delta}(\vartheta_{i-1|0:i-1}),
\quad \quad
\vartheta_{0|0:-1} := \theta_0.
\end{aligned}%\label{eq:theta_updates}$$ Here, $\vartheta_{i|0:i-1}$ denotes the state of the deterministic component of the dual process at time $i$, after the propagation from time $i-1$ and before updating with the datum collected at time $i$, and $\vartheta_{i|0:i}$ the state after such update.
The following Corollary of Proposition [Proposition 1](#prop:prop&update){reference-type="ref" reference="prop:prop&update"} extends a result of [@PR14] (see also Theorem 1 in [@KKK21] for an easier comparison in a similar notation).
**Corollary 2**. *Let Assumptions 1-3 hold, and assume that $$\nu_{i-1 | 0:i-1} (x) = \sum_{\mathbf{m}\in \mathcal{M}}w_\mathbf{m}^{(i-1)}g(x, \mathbf{m}, \vartheta_{i-1|0:i-1}).$$ Then [\[predictive general\]](#predictive general){reference-type="eqref" reference="predictive general"} can be written, through [\[propagation operator\]](#propagation operator){reference-type="eqref" reference="propagation operator"}, as $$\label{prediction in thm}
\begin{aligned}
\nu_{i|0:i-1}(x)
=&\,\psi_{\Delta}(\nu_{i-1 | 0:i-1}(x))
= \sum_{\mathbf{m}\in \mathcal{M}}w_\mathbf{m}^{(i-1)'}g(x, \mathbf{m}, \vartheta_{i|0:i-1}), \\
w_\mathbf{m}^{(i-1)'} = &\, \sum_{\mathbf{n}\in \mathcal{M}}w_{\mathbf{n}}^{(i-1)}p_{ \mathbf{n}, \mathbf{m}}(\Delta; \vartheta_{i-1|0:i-1}), \quad \mathbf{m}\in \mathcal{M},
\end{aligned}$$ with $p_{\mathbf{n}, \mathbf{m}}(\Delta; \vartheta_{i-1|0:i-1})$ as in [\[transition probabilities\]](#transition probabilities){reference-type="eqref" reference="transition probabilities"}. Furthermore, given the observation $y_i$, [\[filtering distr general\]](#filtering distr general){reference-type="eqref" reference="filtering distr general"} can be written, through [\[update operator\]](#update operator){reference-type="eqref" reference="update operator"}, as $$\label{filtering in thm}
\begin{aligned}
\nu_{i|0:i}(x)
= &\, \phi_{y_i}(\nu_{i|0:i-1}(x))
= \sum_{\mathbf{m}\in \mathcal{M}}w_\mathbf{m}^{(i)} g(x, \mathbf{m}, \vartheta_{i|0:i}),\\
w_{\mathbf{m}}^{(i)} \propto &\,
\mu_{\mathbf{n}, \vartheta_{i|0:i-1}}(y_{i})w_{\mathbf{n}}^{(i-1)'},
\quad \mathbf{m}=t(y_{i},\mathbf{n}),\mathbf{n}\in \mathcal{M},
\end{aligned}$$ with $\mu_{\mathbf{m},\theta}$ as in [\[marginals\]](#marginals){reference-type="eqref" reference="marginals"}.*
> ::: {.algorithm}
> > **Note:** $\mathbf{M}_{i|0:i}=\{\mathbf{m}\in \mathcal{M}:\ w_\mathbf{m}^{(i)}>0\}\subset \mathcal{M}$ is the support of the weights of $\nu _{i|0:i}$; $\mathcal{B}(\mathbf{m})$ denotes the states reached by the dual process from $\mathbf{m}$, and $\mathcal{B}(\mathbf{M})$ those reached from all $\mathbf{m}\in \mathbf{M}$.
> :::
Algorithm [\[algorithm\]](#algorithm){reference-type="ref" reference="algorithm"} outlines the pseudo-code for implementing the update and propagation steps of Corollary [Corollary 2](#prop:rec_filtering){reference-type="ref" reference="prop:rec_filtering"}. How to use these results efficiently can depend on the model at hand. When the transition probabilities $p_{\mathbf{m},\mathbf{n}}(t;\theta)$ are available in closed form, their use could lead to the best performance, but can also at times face numerical instability issues (as pointed out in Section [4](#sec:CIR){reference-type="ref" reference="sec:CIR"} below). When the transition probabilities $p_{\mathbf{m},\mathbf{n}}(t;\theta)$ are not available in closed form, one can approximate them by simulating $N$ replicates of the dual component $M_{t}$, and then regroup probability masses according to the arrival states as done in [\[prediction in thm\]](#prediction in thm){reference-type="eqref" reference="prediction in thm"}. The dual is typically easier to simulate than the original process, given its discrete state space. For instance, pure-death or B&D processes are easily simulated using a Gillespie algorithm [@G07], whereby one alternates sampling waiting times and jump transitions for the embedded chain. Depending on the dual process, there might also be more efficient simulation strategies.
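This Monte Carlo approximation of $p_{\mathbf{m},\mathbf{n}}(t;\theta)$ can be sketched as follows for a one-dimensional B&D dual; the linear birth and death rates are illustrative assumptions, not the actual duals derived in Sections 4 and 5.

```python
import random
from collections import Counter

def simulate_bd(m0, t, birth, death, rng):
    # One Gillespie path of a birth-and-death chain on the nonnegative
    # integers up to time t: alternate exponential waiting times with the
    # current total rate and +1/-1 jumps of the embedded chain.
    m, s = m0, 0.0
    while True:
        rate = birth(m) + death(m)
        if rate == 0.0:                  # absorbing state (e.g. m = 0)
            return m
        s += rng.expovariate(rate)
        if s > t:
            return m
        m += 1 if rng.random() < birth(m) / rate else -1

def estimate_transition(m0, t, birth, death, n_rep=20000, seed=1):
    # Monte Carlo estimate of p_{m0, n}(t): simulate n_rep replicates of
    # the dual and regroup the probability mass by arrival state.
    rng = random.Random(seed)
    counts = Counter(simulate_bd(m0, t, birth, death, rng) for _ in range(n_rep))
    return {n: c / n_rep for n, c in sorted(counts.items())}

# Illustrative linear rates (assumptions, not the duals of Sections 4-5).
p_hat = estimate_transition(m0=5, t=0.5, birth=lambda m: 0.5 * m,
                            death=lambda m: 1.0 * m)
print(sum(p_hat.values()))
```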
A different type of approximation of the propagation step [\[prediction in thm\]](#prediction in thm){reference-type="eqref" reference="prediction in thm"} in Corollary [Corollary 2](#prop:rec_filtering){reference-type="ref" reference="prop:rec_filtering"} can be based on pruning the transition probabilities or the arrival weights (cf. [\[prediction in thm\]](#prediction in thm){reference-type="eqref" reference="prediction in thm"}) under a given threshold, followed by a renormalisation of the weights. Both this approximation strategy and that outlined above assign positive weights only to a finite subset of $\mathcal{M}$, hence they overcome the infinite dimensionality of the dual process state space. In the next sections we will investigate such strategies for two hidden Markov models driven by Cox--Ingersoll--Ross and $K$-dimensional Wright--Fisher diffusions.
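A minimal sketch of the pruning strategy, with an arbitrary threshold value:

```python
def prune(weights, eps=1e-4):
    # Drop mixture components whose weight falls below eps, then
    # renormalise, so only a finite, small support is carried forward.
    kept = {m: w for m, w in weights.items() if w > eps}
    total = sum(kept.values())
    return {m: w / total for m, w in kept.items()}

w = {0: 5e-5, 1: 0.29995, 2: 0.5, 3: 0.2}
w_pruned = prune(w)
print(sorted(w_pruned), sum(w_pruned.values()))   # component 0 is dropped
```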
Next, with the purpose of describing the marginal smoothing densities [\[smoothing distr general\]](#smoothing distr general){reference-type="eqref" reference="smoothing distr general"}, we need an additional assumption and some further notation.
> **Assumption 4** For $h$ as in Assumption 3, there exist functions $d:\mathcal{M}^{2}\to\mathcal{M}$ and $e:\Theta^{2}\to\Theta$ such that for all $x\in\mathcal{X},\ \mathbf{m},\mathbf{m}' \in\mathcal{M},\ \theta, \theta' \in \Theta$ $$\label{hh_stab}
> h(x, \mathbf{m}, \theta)h(x, \mathbf{m}', \theta')
> =C_{\mathbf{m}, \mathbf{m}', \theta, \theta'} h(x, d(\mathbf{m}, \mathbf{m}'), e(\theta, \theta')),$$ where $C_{\mathbf{m}, \mathbf{m}', \theta, \theta'}$ is constant in $x$.
Denote by $\overset{\leftarrow}{\vartheta}_{i}, \overset{\leftarrow}{\vartheta}_{i}'$ the quantities defined in [\[vartheta and M-sets\]](#vartheta and M-sets){reference-type="eqref" reference="vartheta and M-sets"} computed backwards. Equivalently, these are computed as in [\[vartheta and M-sets\]](#vartheta and M-sets){reference-type="eqref" reference="vartheta and M-sets"} with data in reverse order, i.e. using $y_{n:0}$ in place of $y_{0:n}$, namely $$\label{backward lambda e vartheta}
\begin{aligned}
\overset{\leftarrow}{\vartheta}_{i|i+1:n}=&\,\Theta_{\Delta}(\overset{\leftarrow}{\vartheta}_{i+1|i+1:n}), \quad \quad
\overset{\leftarrow}{\vartheta}_{i|i:n}=T(y_{i},\overset{\leftarrow}{\vartheta}_{i|i+1:n}),\quad \quad
\overset{\leftarrow}{\vartheta}_{n|n}=T(y_{n},\theta_{0}).
\end{aligned}$$
The following result extends Proposition 3 and Theorem 4 of [@KKK21]:
**Proposition 3**. *Let Assumptions 1-4 hold, and let $\nu _0= \pi$. Then, for $0\leq i\leq n-1$, we have $$p(x_{i} | y_{0:n})=
\sum_{\mathbf{m}\in \mathcal{M},\ \mathbf{n}\in \mathcal{M}}
w_{\mathbf{m},\mathbf{n}}^{(i)}g(x_{i}, d(\mathbf{m},\mathbf{n}), e(\overset{\leftarrow}{\vartheta}_{i|i+1:n},\vartheta_{i|0:i})),$$ with $$\begin{aligned}
w_{\mathbf{m},\mathbf{n}}^{(i)}
\propto&\, \overset{\leftarrow}{\omega}^{(i+1)}_{\mathbf{m}} w_\mathbf{n}^{(i)}C_{\mathbf{m}, \mathbf{n}, \overset{\leftarrow}{\vartheta}_{i|i+1:n},\vartheta_{i|0:i}},\\
\overset{\leftarrow}{\omega}_\mathbf{m}^{(i+1)}=&\,\sum _{\mathbf{n}\in \mathcal{M}} \overset{\leftarrow}{\omega}_{\mathbf{n}}^{(i+2)}\mu _{\mathbf{n}, \overset{\leftarrow}{\vartheta}_{i+1|i+2:n}}(y_{i+1})p_{t(y_{i+1}, \mathbf{n}),\mathbf{m}}(\Delta ; \overset{\leftarrow}{\vartheta}_{i+1|i+1:n})
\end{aligned}$$ $w_\mathbf{n}^{(i)}$ as in [\[filtering in thm\]](#filtering in thm){reference-type="eqref" reference="filtering in thm"} and $C_{\mathbf{m}, \mathbf{n}, \overset{\leftarrow}{\vartheta}_{i|i+1:n},\vartheta_{i|0:i}}$ as in [\[hh_stab\]](#hh_stab){reference-type="eqref" reference="hh_stab"}.*
*Proof.* Note that Bayes' Theorem and conditional independence allow us to write [\[smoothing distr general\]](#smoothing distr general){reference-type="eqref" reference="smoothing distr general"} as $$\label{smoothing_expression}
\nu _{i|0:n}(x_i) = p(x_i| y_{0:n})
\propto p(y_{i+1:n}| x_i)\,
\nu_{i|0:i}(x_i),$$ where the right-hand side involves the filtering distribution $\nu_{i|0:i}(x_i)=p(x_i| y_{0:i})$, available from Corollary [Corollary 2](#prop:rec_filtering){reference-type="ref" reference="prop:rec_filtering"}, and the so-called *cost-to-go function* $p(y_{i+1:n}| x_i)$ (sometimes called the information filter), which is the likelihood of future observations given the current signal state. Along the same lines as Proposition 3 in [@KKK21] we find that $$p(y_{i+1:n} | x_i) = \sum _{\mathbf{m}\in \mathcal{M}}\overset{\leftarrow}{\omega}_\mathbf{m}^{(i+1)}h(x_i, \mathbf{m}, \overset{\leftarrow}{\vartheta}_{i|i+1:n})$$ with $\overset{\leftarrow}{\omega}_\mathbf{m}^{(i+1)}$ as in the statement. The main claim can now be proved along the same lines as Theorem 4 in [@KKK21]. ◻
The main difference between the above result and Theorem 4 in [@KKK21] lies in the fact that the support of the weights $\{\overset{\leftarrow}{\omega}_\mathbf{m}^{(i+1)}, \mathbf{m}\in \mathcal{M}\}$ (which in [@KKK21] is denoted by $\overset{\leftarrow}{M}_{i|i+1:n}$) can possibly be countably infinite and coincide with the whole of $\mathcal{M}$. Indeed, which points of $\mathcal{M}$ receive positive weight is determined by the transition probabilities of the dual process, which in the present framework is no longer assumed to make only downward moves in $\mathcal{M}$. Section [6](#sec:experiments){reference-type="ref" reference="sec:experiments"} will deal with this possibly infinite support for a concrete implementation of the inferential strategy.
# Cox--Ingersoll--Ross hidden Markov models {#sec:CIR}
The Cox--Ingersoll--Ross diffusion, also known as the square-root process, is widely used in financial mathematics for modelling short-term interest rates and stochastic volatility. See [@CIR85; @CS92; @F94; @H93; @GY03]. It also belongs to the class of continuous-state branching processes with immigration, arising as the large-population scaling limit of certain branching Markov chains [@KW71] and as the time-evolving total mass of a Dawson--Watanabe branching measure-valued diffusion [@EG93].
Let $X_{t}$ be a CIR diffusion on $\mathbb{R}_{+}$ that solves the one-dimensional SDE $$\label{SDE}
\mathrm{d}X_t = \left( \delta \sigma^2 - 2 \gamma X_t \right)\mathrm{d}t + 2\sigma \sqrt{X_t}\, \mathrm{d}B_t,\quad \quad X_{0}\ge0,$$ where $\delta,\gamma,\sigma>0$, which is reversible with respect to the Gamma density $\pi=\text{Ga}(\delta/2, \gamma / \sigma^2)$. The following proposition identifies a B&D process as dual to the CIR diffusion.
**Proposition 4**. *Let $X_{t}$ be as in [\[SDE\]](#SDE){reference-type="eqref" reference="SDE"}, let $M_{t}$ be a *B&D* process on $\mathbb{Z}_{+}$ which jumps from $m$ to $m+1$ at rate $\lambda_m = 2\sigma^2 (\delta/2 + m)(\theta - \gamma /\sigma^2)$ and to $m-1$ at rate $\mu_m = 2 \sigma^2 \theta m$, and let $$\label{CIR duality function}
h(x, m, \theta) = \frac{\Gamma(\delta/2)}{\Gamma(\delta/2+m)} \Big(\frac{\gamma}{\sigma^2}\Big)^{-\delta/2}\theta^{\delta/2 +m}x^m e^{-(\theta - \gamma/\sigma^2)x}.$$ Then [\[duality identity\]](#duality identity){reference-type="eqref" reference="duality identity"} holds with $D_{t}=M_{t}$.*
*Proof.* The infinitesimal generator associated with [\[SDE\]](#SDE){reference-type="eqref" reference="SDE"} is $$\nonumber
\mathcal{A}f(x)=( \delta \sigma^2 - 2 \gamma x )f'(x)+ 2\sigma^{2} x f''(x),$$ for $f:\mathbb{R}_{+}\rightarrow \mathbb{R}$ vanishing at infinity. Writing $h(x,m)$ for [\[CIR duality function\]](#CIR duality function){reference-type="eqref" reference="CIR duality function"} with $\theta$ held fixed, a direct computation yields $$\begin{aligned}
\mathcal{A} h(\cdot, m)(x)
=&\,(\delta \sigma^2 - 2 \gamma x)\Big(mx^{m-1}-x^{m}(\theta-\gamma/\sigma^{2})\Big)\frac{\Gamma(\delta/2)}{\Gamma(\delta/2+m)} \Big(\frac{\gamma}{\sigma^2}\Big)^{-\delta/2}\theta^{\delta/2 +m} e^{-(\theta - \gamma/\sigma^2)x}
\\
&\, + 2\sigma^{2} x
\Big(m(m-1)x^{m-2}+x^{m}(\theta-\gamma/\sigma^{2})^{2}-2mx^{m-1}(\theta-\gamma/\sigma^{2})\Big) \\&\,
\times\frac{\Gamma(\delta/2)}{\Gamma(\delta/2+m)}\Big(\frac{\gamma}{\sigma^2}\Big)^{-\delta/2}\theta^{\delta/2 +m} e^{-(\theta - \gamma/\sigma^2)x}\\
=&\,
\frac{\delta\sigma^{2}m\theta}{\delta/2+m-1}h(x,m-1)
+2\gamma(\theta-\gamma/\sigma^{2})\frac{\delta/2+m}{\theta}h(x,m+1)\\
&\,-[2\gamma m+\delta\sigma^{2}(\theta-\gamma/\sigma^{2})] h(x,m)
+2\sigma^{2}m(m-1)\frac{\theta}{\delta/2+m-1}h(x,m-1)\\
&\,+2\sigma^{2}(\theta-\gamma/\sigma^{2})^{2}\frac{\delta/2+m}{\theta}h(x,m+1)
-4\sigma^{2}m(\theta-\gamma/\sigma^{2})h(x,m)
\\
=&\, 2 \sigma^2 \theta m h(x, m-1) + 2\sigma^2 (\delta/2 + m)(\theta - \gamma /\sigma^2) h(x, m+1) \notag\\
&\, - [2\gamma m+\sigma^{2}(\delta+4m)(\theta-\gamma/\sigma^{2})]h(x, m).\end{aligned}$$ where it can be checked that $$\begin{aligned}
2\sigma^{2}\theta m+2\sigma^{2}(\delta/2+m)(\theta-\gamma/\sigma^{2})
=2\gamma m+\sigma^{2}(\delta+4m)(\theta-\gamma/\sigma^{2}).\end{aligned}$$ Hence the r.h.s. equals $$\nonumber%\label{}
\mathcal{B}g(m)=\lambda_{m}[g(m+1)-g(m)]+\mu_{m}[g(m-1)-g(m)]$$ with $g(\cdot):=h(x,\cdot)$, $\lambda_{m}=2\sigma^2 (\delta/2 + m)(\theta - \gamma /\sigma^2)$, and $\mu_{m}=2 \sigma^2 \theta m$, which is the infinitesimal generator of a B&D process with rates $\lambda_{m},\mu_{m}$. The claim now follows from Proposition 1.2 in [@JK14]. ◻
Assign now prior $\nu_{0}=\text{Ga}(\delta/2,\gamma/\sigma^{2})$ to $X_{0}$, and assume Poisson observations are collected at equally spaced intervals of length $\Delta$, specifically $Y|X_{t}=x \overset{\text{iid}}{\sim} \text{Po}(\tau x)$, for some $\tau>0$. By the well-known conjugacy to Gamma priors, we have $X_{t}|Y=y\sim \text{Ga}(\delta/2+y, \gamma / \sigma^2+\tau)$. For simplicity, and without loss of generality, we can set $\tau=1$, which allows us to interpret the update of the Gamma rate parameter as the size of the conditioning data set. The filtering algorithm starts by updating the prior $\nu_{0}$ for the signal to $\nu_{0|0}:=\phi_{Y_{0}}(\nu_{0})$. If we observe $Y_{0}=(Y_{0,1},\ldots,Y_{0,k})$ at time $0$, then $\nu_{0|0}$ is the law of $X_{0}$ given $\sum_{j=1}^{k}y_{0,j}=m$, namely $\text{Ga}(\delta/2+m, \gamma / \sigma^2+k)$. Then $\nu_{0|0}$ is propagated forward for a $\Delta$ time interval, yielding $\nu_{1|0}:=\psi_{\Delta}(\nu_{0|0})$. In light of Proposition [Proposition 4](#prop: CIR duality){reference-type="ref" reference="prop: CIR duality"}, an application of [\[forward propagation\]](#forward propagation){reference-type="eqref" reference="forward propagation"} to $\nu_{0|0}$ yields the infinite mixture $$\label{CIR propagation}
\psi_\Delta \left( \text{Ga}(\delta/2+m, \gamma / \sigma^2+k)\right) = \sum_{n\ge0} p_{m, n}(\Delta) \text{Ga}(\delta/2+n, \gamma / \sigma^2+k),$$ where $p_{m, n}(t)$ are the transition probabilities of $M_{t}$ in Proposition [Proposition 4](#prop: CIR duality){reference-type="ref" reference="prop: CIR duality"}. Hence, the law of the signal is indexed by integers, i.e., points of the dual state space, where after the first update the distribution gives mass one to the sum of the observations $\sum_{j=1}^{k}y_{0,j}=m$, whereas after the propagation the mass is spread over the whole $\mathbb{Z}_{+}$ by the effect of the dual process. We then observe $Y_{1}\sim f_{X_{1}}$, which is used to update $\nu_{1|0}$ to $\nu_{1|1}$ and has the effect of shifting the probability masses of the mixture weights. For example, the weight $p_{m,n}(\Delta)$ in [\[CIR propagation\]](#CIR propagation){reference-type="eqref" reference="CIR propagation"} is assigned to $n\in \mathbb{Z}_{+}$, but after the update based on $Y_{1}=(Y_{1,1},\ldots,Y_{1,k'})$ it will be assigned to $n+m'$ if $\sum_{j=1}^{k'}y_{1,j}=m'$, on top of being transformed according to [\[mixture update\]](#mixture update){reference-type="eqref" reference="mixture update"}. We then propagate forward again and proceed analogously.
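The update step just described reduces to the Gamma--Poisson conjugacy; a minimal sketch (with $\tau=1$ as above) might read:

```python
def update_gamma(shape, rate, y):
    """Conjugate update: X ~ Ga(shape, rate), Y_j | X = x iid Po(x).

    The posterior is Ga(shape + sum(y), rate + len(y)), so the rate
    parameter increases exactly by the size of the conditioning data set.
    """
    return shape + sum(y), rate + len(y)

# prior Ga(delta/2, gamma/sigma^2) with delta/2 = 0.5, gamma/sigma^2 = 1.0,
# updated with k = 3 Poisson observations summing to m = 6
a, b = update_gamma(0.5, 1.0, [2, 3, 1])   # -> (6.5, 4.0)
```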
When the current distribution of the signal, after the update, is given by a mixture of type $\sum_{m\in \mathbb{Z}_{+}}w_{m}\text{Ga}(\delta/2+m, \gamma / \sigma^2+k)$, it is enough to rearrange the mixture weights after the propagation step as in [\[weights rearrangement after propagation\]](#weights rearrangement after propagation){reference-type="eqref" reference="weights rearrangement after propagation"}.
The main difference with qualitatively similar equations found in [@KKK21] is now given by the transition probabilities $p_{m, n}(t)$ in [\[CIR propagation\]](#CIR propagation){reference-type="eqref" reference="CIR propagation"}, which are those of the B&D process in Proposition [Proposition 4](#prop: CIR duality){reference-type="ref" reference="prop: CIR duality"}. Before tackling the problem of how to use the above expressions for inference, we try to provide further intuition of the extent and implications of such differences. To this end, consider the simplified parameterization $\alpha=\delta/2, \beta=\gamma/\sigma^{2}, \sigma^{2}=1/2, \tau=1$, whereby one can check that the embedded chain of the B&D process of Proposition [Proposition 4](#prop: CIR duality){reference-type="ref" reference="prop: CIR duality"} has jump probabilities $$\nonumber%\label{}
\begin{aligned}
p_{m,m+1}=\frac{k(\alpha+m)}{k(\alpha+m)+m(\beta+k)},\quad \quad
p_{m,m-1}=1-p_{m,m+1}.
\end{aligned}$$ Here $m,k$ are the same as in the left-hand side of [\[CIR propagation\]](#CIR propagation){reference-type="eqref" reference="CIR propagation"}, so $m/k$ is the sample mean. It is easily verified that $p_{m, m+1}<p_{m, m-1}$ if $m/k>\alpha/\beta$ and vice versa. Therefore, the dual evolves on $\mathbb{Z}_{+}$ so that it reverts $m/k$ towards the prior mean $\alpha/\beta$. Indeed, the dual has ergodic distribution $\text{NB}\left(\alpha,\beta/(\beta+k)\right)$, whose mean is $k\alpha/\beta$, i.e., such that $m/k$ on average coincides with $\alpha/\beta$.
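This mean-reverting behaviour of the embedded chain can be checked directly; a small illustration under the simplified parameterization above:

```python
def p_up(m, k, alpha, beta):
    """Embedded-chain probability of a jump m -> m+1 (simplified parameterization)."""
    return k * (alpha + m) / (k * (alpha + m) + m * (beta + k))

# with alpha = beta = 1 the prior mean is alpha/beta = 1:
# when the sample mean m/k exceeds it, downward jumps dominate, and vice versa
print(p_up(10, 2, 1.0, 1.0))   # m/k = 5 > 1: below 1/2
print(p_up(1, 2, 1.0, 1.0))    # m/k = 0.5 < 1: above 1/2
```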
Recall now that the dual process elicited in [@PR14] for the CIR model is $D_{t}=(M_{t},\Theta_{t})$, with $M_{t}$ a pure-death process with rates from $m$ to $m-1$ equal to $2\sigma^{2}\theta$ and $\Theta_{t}$ a deterministic process that solves $\mathrm{d}\Theta_{t}/\mathrm{d}t=-2\sigma^{2}\Theta_{t}(\Theta_{t}-\gamma/\sigma^{2})$, $\Theta_{0}=\theta$. This dual has a single ergodic state given by $(0,\beta)$ (note that [@PR14] use a slightly different parameterization, where the ergodic state $(0,\beta)$ means that, in the limit for $t\rightarrow \infty$, the gamma parameters are the prior parameters). In particular this entails the convergence of $p_{m,n}(t)$ in [\[CIR propagation\]](#CIR propagation){reference-type="eqref" reference="CIR propagation"} to 1 if $n=0$ and to 0 elsewhere as $t\rightarrow \infty$, whence the strong ergodic convergence $\psi_{t}(g(x,m,\theta))\rightarrow \pi$ as $t\rightarrow \infty$, whereby the effect of the observed data becomes progressively negligible as $t$ increases. One could then argue that in the long run, the filtering strategy based on the pure-death dual process in [@PR14] completely forgets the collected data. Hence one could expect filtering with long-spaced observations (relative to the forward process autocorrelation) to be similar to using independent priors at each data collection point. On the other hand, the B&D dual can be thought of as not forgetting but rather spreading around the probability masses in such a way as to preserve convergence of the empirical mean to the prior mean. It is not obvious a priori which of these two scenarios could be more beneficial in terms of filtering, hence in Section [6.1](#sec: CIR experiments){reference-type="ref" reference="sec: CIR experiments"} we provide numerical experiments for comparing the performance of strategies based on these different duals.
In view of such experiments, note that the transition probabilities of the above B&D dual are in principle available in closed form (cf. [@B64; @CS12]), but their computation is prone to numerical instability. Alternatively, we can approximate the transition probabilities $p_{m,n}(t)$ in [\[CIR propagation\]](#CIR propagation){reference-type="eqref" reference="CIR propagation"} by drawing $N$ sample paths of the dual started in $m$ and use the empirical distribution of the arrival points. This can in principle be done through the Gillespie algorithm [@G07], which alternates sampling waiting times and jumps of the embedded chain. A faster strategy can be achieved by writing the B&D rates in Proposition [Proposition 4](#prop: CIR duality){reference-type="ref" reference="prop: CIR duality"} as $\lambda_{m}=\lambda m+\beta$ and $\mu_{m}=\mu m$ with $$\label{algrates}
\begin{aligned}
\lambda =2\sigma^2 (\theta -\gamma/\sigma^2),\qquad \beta=\sigma^2\delta(\theta -\gamma/\sigma^2),\qquad
\mu = 2\sigma^2\theta,
\end{aligned}$$ where $\lambda,\mu$ represent the per capita birth and death rate and $\beta$ is the immigration rate. Then write $M_t = A_t + B_t$ where $A_t$ is the population size of the descendant of autochthonous individuals (already in the population at $t=0$), and $B_t$ the descendants of the immigrants. These rates define a linear B&D process, whereby [@T18] suggests simulating $A_t$ by drawing, given $A_0=i$, $$\label{tav18}
F\sim \text{Bin}(i, g(t)), \qquad A_t \sim \text{NBin}(F, h(t))+F,$$ with $h(t)=(\lambda-\mu)/(\lambda \exp\{(\lambda-\mu)t\}-\mu)$ and $g(t)=h(t)\exp\{(\lambda-\mu)t\}$, with the convention NBin$(0,p)= \delta_0$. Now let $N_s$ be the number of immigrants up to time $s$, which is a Poisson process with rate $\beta$, so that given $N_{t}$ the arrival times are uniformly distributed on $[0,t]$. Once in the population, the lineage of each immigrating individual again follows a B&D process and can be simulated using [\[tav18\]](#tav18){reference-type="eqref" reference="tav18"} starting at $i=1$. Summing the sizes of the immigrant families at time $t$ yields $B_t$.
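A sketch of this sampling scheme, under the assumption $\lambda\neq\mu$ and with hypothetical parameter values (`numpy`'s `negative_binomial` counts failures before `F` successes, matching the convention above once the case $F=0$ is handled separately):

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_linear_bd(i, lam, mu, t):
    """Draw A_t for a linear B&D process with A_0 = i, via
    F ~ Bin(i, g(t)), A_t ~ F + NBin(F, h(t)); requires lam != mu."""
    h = (lam - mu) / (lam * np.exp((lam - mu) * t) - mu)
    g = h * np.exp((lam - mu) * t)
    F = rng.binomial(i, g)
    return 0 if F == 0 else F + rng.negative_binomial(F, h)

def sample_bd_immigration(m, lam, mu, beta, t):
    """Draw M_t = A_t + B_t: autochthonous lineages plus immigrant lineages."""
    A = sample_linear_bd(m, lam, mu, t)
    # Poisson number of immigrants, with uniform arrival times on [0, t]
    arrivals = rng.uniform(0.0, t, size=rng.poisson(beta * t))
    B = sum(sample_linear_bd(1, lam, mu, t - s) for s in arrivals)
    return A + B
```

As a sanity check, a linear B&D process without immigration satisfies $\mathbb{E}[A_t\mid A_0=i]=i\,e^{(\lambda-\mu)t}$, which the sampler can be tested against.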
# Wright--Fisher hidden Markov models {#sec:WF}
The $K$-dimensional WF diffusion is a classical model in population genetics which is also widely used in many areas of applied probability and statistics, as it can model temporally evolving probability frequencies. It takes values in the simplex $$\nonumber%\label{}
\Delta_{K}=\bigg\{\mathbf{x}\in[0,1]^{K}: \sum_{1\le i\le K}x_{i}=1\bigg\}$$ and, in the population genetics interpretation, it models the temporal evolution of $K$ proportions of types in an underlying large population. Its infinitesimal generator on $C^{2}(\Delta_{K})$ is $$\label{WF generator}
\mathcal{A}= \frac{1}{2}\sum_{i,j=1}^{K}x_{i}(\delta_{ij}-x_{j})\frac{\partial^{2}}{\partial x_{i}\partial x_{j} }
+\frac{1}{2}\sum_{i=1}^{K}(\alpha_i - \theta x_i)\frac{\partial}{\partial x_{i}}$$ for $\bm{\alpha}=(\alpha_{1},\ldots,\alpha_{K})\in \mathbb{R}_{+}^{K}$, $\theta=\sum_{i=1}^{K}\alpha_{i}$, and its reversible measure is the Dirichlet distribution whose density with respect to Lebesgue measure is $$\label{Dirichlet distribution}
\pi_{\bm{\alpha}}(\mathbf{x})=\frac{\Gamma(\theta)}{\prod_{i=1}^{K}\Gamma(\alpha_{i})}x_{1}^{\alpha_{1}-1}\cdots x_{K}^{\alpha_{K}-1}, \quad x_{K}=1-\sum_{i=1}^{K-1}x_{i}.$$ See for example [@EK86], Chapter 10. The transition density of this model is (cf., e.g., [@EG93], eqn. (1.27)) $$\label{WF transition}
p_{t}(\mathbf{x},\mathbf{x}')=\sum_{m=0}^{\infty}d_{m}(t)
\sum_{\mathbf{m}\in \mathbb{Z}_{+}^{K}:|\mathbf{m}|=m}\text{MN}(\mathbf{m}; m,\mathbf{x})\pi_{\bm{\alpha}+\mathbf{m}}(\mathbf{x}'),$$ where $\text{MN}(\mathbf{m}; m,\mathbf{x})=\binom{m}{m_{1},\ldots,m_{K}}\prod_{i=1}^{K}x_{i}^{m_{i}}$, and where $d_{m}(t)$ are the transition probabilities of the block counting process of Kingman's coalescent on $\mathbb{Z}_{+}$, which has an entrance boundary at $\infty$. Cf., e.g., [@EG93], eqn. (1.12).
It is well known that a version of Kingman's typed coalescent with mutation is dual to the WF diffusion. This can be seen as a death process on $\mathbb{Z}_{+}^{K}$ which jumps from $\mathbf{m}$ to $\mathbf{m}-\mathbf{e}_{i}$ at rate $$\label{kingman's rate}
q_{\mathbf{m},\mathbf{m}-\mathbf{e}_{i}}=m_{i}(\theta+|\mathbf{m}|-1)/2.$$ See, for example, [@EG09; @GS09; @Eea10]. See also [@PR14], Section 3.3. The above death process with transitions $d_{m}(t)$ is indeed the process that counts the surviving blocks of the typed version without keeping track of which types have been removed.
Recall now that a Moran model with $N$ individuals of $K$ types is a particle process with overlapping generations whereby at discrete times a uniformly chosen individual is removed and another, uniformly chosen from the remaining individuals, produces one offspring of its own type, leaving the total population size constant. See, e.g., [@E09]. In the presence of mutation, upon reproduction, the offspring can mutate to type $j$ at parent-independent rate $\alpha_{j}$. The generator of such process on the set $B(\mathbb{Z}_{+}^{K})$ of bounded functions on $\mathbb{Z}_{+}^{K}$ can be written in terms of the multiplicities of types $\mathbf{n}\in \mathbb{Z}_{+}^{K}$ as $$\label{moran generator}
\mathcal{B}f(\mathbf{n})
=\frac{1}{2}\sum_{1\le i\ne j\le K}n_{i}(\alpha_{j}+n_{j})f(\mathbf{n}-\mathbf{e}_{i}+\mathbf{e}_{j})
-\frac{1}{2}\sum_{1\le i\ne j\le K}n_{i}(\alpha_{j}+n_{j})f(\mathbf{n}),$$ where an individual of type $i$ is removed at a rate proportional to $n_{i}$, the number of individuals of type $i$, and replaced by an individual of type $j$ at a rate proportional to $\alpha_{j}+n_{j}$, so that the overall jump rate from $\mathbf{n}$ to $\mathbf{n}-\mathbf{e}_{i}+\mathbf{e}_{j}$ is $n_{i}(\alpha_{j}+n_{j})/2$.
The following proposition extends a result in [@Cea15] (cf. Section 5).
**Proposition 5**. *Let $X_{t}$ have generator [\[WF generator\]](#WF generator){reference-type="eqref" reference="WF generator"}, let $N_{t}\in \mathbb{Z}_{+}^{K}$ be a Moran model which from $\mathbf{n}$ jumps to $\mathbf{n}-\mathbf{e}_{i}+\mathbf{e}_{j}$ at rate $n_i(\alpha_j + n_j)/2$, and let $$\label{eq: duality function Moran-WF}
h(\mathbf{x},\mathbf{n})=\frac{\Gamma(\theta+|\mathbf{n}|)}{\Gamma(\theta)}\prod_{i=1} ^K\frac{\Gamma\left(\alpha_i\right)}{\Gamma\left(\alpha_i+n_{i}\right)}x_{i}^{n_{i}}, \quad \quad
\theta= \sum _{i=1}^K \alpha _i.$$ Then [\[duality identity\]](#duality identity){reference-type="eqref" reference="duality identity"} holds with $D_{t}=N_{t}$ and $h$ as above.*
*Proof.* From [\[WF generator\]](#WF generator){reference-type="eqref" reference="WF generator"}, since $\theta=\sum_{i=1}^{K}\alpha_{i}$, we can write $$\begin{aligned}
2\mathcal{A}
=&\, \sum_{1\le i\le K}x_{i}(1-x_{i})\frac{\partial^{2}}{\partial x_{i}^{2}}-\sum_{1\le i\ne j\le K}x_{i}x_{j}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}
+\sum_{1\le i\le K}(\alpha_i(1- x_{i})-x_{i}\sum_{1\le j\le K,j\ne i}\alpha_{j})\frac{\partial}{\partial x_{i}}\\
=&\, \sum_{1\le i\ne j\le K}x_{i}x_{j}\frac{\partial^{2}}{\partial x_{i}^{2}}
-\sum_{1\le i\ne j\le K}x_{i}x_{j}\frac{\partial^{2}}{\partial x_{i}\partial x_{j}}
+\sum_{1\le i\le K}\alpha_i\sum_{1\le j\le K,j\ne i}x_{j}\frac{\partial }{\partial x_{i}}
-\sum_{1\le i\ne j\le K}\alpha_{j}x_{i}\frac{\partial }{\partial x_{i}}.
\end{aligned}$$ Then one can check that $$\begin{aligned}
2\mathcal{A}h(\mathbf{x},\mathbf{n})
=&\,\sum_{1\le i\ne j\le K}n_{i}(\alpha_{i}+n_{i}-1)\frac{\Gamma(\theta+|\mathbf{n}|)}{\Gamma(\theta)}\mathbf{x}^{\mathbf{n}-\mathbf{e}_{i}+\mathbf{e}_{j}}\prod_{h=1} ^K\frac{\Gamma\left(\alpha_h\right)}{\Gamma\left(\alpha_h+n_{h}\right)}\\
&\,-\sum_{1\le i\ne j\le K}n_{i}(\alpha_{j}+n_{j})\mathbf{x}^{\mathbf{n}}\frac{\Gamma(\theta+|\mathbf{n}|)}{\Gamma(\theta)}
\prod_{h=1} ^K\frac{\Gamma\left(\alpha_h\right)}{\Gamma\left(\alpha_h+n_{h}\right)}\\
=&\,\sum_{1\le i\ne j\le K}n_{i}(\alpha_{j}+n_{j})h(\mathbf{x},\mathbf{n}-\mathbf{e}_{i}+\mathbf{e}_{j})
-\sum_{1\le i\ne j\le K}n_{i}(\alpha_{j}+n_{j})h(\mathbf{x},\mathbf{n}).\end{aligned}$$ Hence we have $$(\mathcal{A}h (\cdot,\mathbf{n}))(\mathbf{x}) = (\mathcal{B}h(\mathbf{x},\cdot) ) (\mathbf{n})$$ where the right hand side is [\[moran generator\]](#moran generator){reference-type="eqref" reference="moran generator"} applied to $h(\mathbf{x},\mathbf{n})$ as a function of $\mathbf{n}$. The claim now follows from Proposition 1.2 in [@JK14]. ◻
Assign now prior $\nu_{0}=\pi_{\bm{\alpha}}$ to $X_{0}$, and assume categorical observations so that $\mathbb{P}(Y=j|X_{t}=x)=x_{j}$. By the well-known conjugacy to Dirichlet priors, we have $X_{t}|Y=y\sim \pi_{\bm{\alpha}+\delta_{y}}$, where $\bm{\alpha}+\delta_{y}=(\alpha_{1},\ldots,\alpha_{j}+1,\ldots,\alpha_{K})$ if $y=j$. When multiple categorical observations with vector of multiplicities $\mathbf{m}\in \mathbb{Z}_{+}^{K}$ are collected, we write $\pi_{\bm{\alpha}+\mathbf{m}}$. The filtering algorithm then proceeds by first updating $\nu_{0}$ to $\nu_{0|0}:=\phi_{Y_{0}}(\nu_{0})=\pi_{\bm{\alpha}+\mathbf{m}}$, if $Y_{0}=(Y_{0,1},\ldots,Y_{0,n})$ yields multiplicities $\mathbf{m}$, then propagating $\nu_{0|0}$ to $\nu_{1|0}:=\psi_{\Delta}(\nu_{0|0})$. In light of the previous result, an application of [\[forward propagation\]](#forward propagation){reference-type="eqref" reference="forward propagation"} to a single distribution $g(x,m,\theta)=\pi_{\bm{\alpha}+\mathbf{m}}$ yields the mixture $$\label{WF propagation}
\psi_t \left(\pi_{\bm{\alpha}+\mathbf{m}} \right) = \sum_{\mathbf{n}: |\mathbf{n}|=|\mathbf{m}|} p_{\mathbf{m},\mathbf{n}}(t) \pi_{\bm{\alpha}+\mathbf{n}},$$ where $p_{\mathbf{m},\mathbf{n}}(t)$ are the transition probabilities of $N_{t}$ in Proposition [Proposition 5](#prop: WF duality){reference-type="ref" reference="prop: WF duality"}. We then observe $Y_{1}|X_{1}$, which is in turn used to update $\nu_{1|0}$ to $\nu_{1|1}$, and so forth. We again refer the reader to [@KKK21], Section 2.4.2, for details on qualitatively similar recursive formulae.
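The update step in this recursion reduces to the Dirichlet--categorical conjugacy recalled above; a one-line sketch:

```python
import numpy as np

def update_dirichlet(alpha, counts):
    """Categorical observations with multiplicities m: Dir(alpha) -> Dir(alpha + m)."""
    return np.asarray(alpha, dtype=float) + np.asarray(counts, dtype=float)

# uniform prior on the 3-simplex, updated with multiplicities (2, 0, 1)
post = update_dirichlet([1.0, 1.0, 1.0], [2, 0, 1])   # -> array([3., 1., 2.])
```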
In [\[WF propagation\]](#WF propagation){reference-type="eqref" reference="WF propagation"}, the overall multiplicity $|\mathbf{n}|$ equals the original $|\mathbf{m}|$, as an effect of the population size preservation provided by the Moran model. The space $\{\mathbf{n}: |\mathbf{n}|=|\mathbf{m}|\}$ is finite, which shows that Assumption 3 need not lead to filtering distributions being infinite mixtures. However, it is not obvious *a priori* how [\[WF propagation\]](#WF propagation){reference-type="eqref" reference="WF propagation"} compares in terms of practical implementation with the different representation obtained in [@PR14], namely $$\label{PR14 WF propagation}
\psi_t \left(\pi_{\bm{\alpha}+\mathbf{m}} \right) = \sum_{\mathbf{n}: |\mathbf{n}|\le |\mathbf{m}|} \hat p_{\mathbf{m},\mathbf{n}}(t) \pi_{\bm{\alpha}+\mathbf{n}}$$ where $\hat p_{\mathbf{m},\mathbf{n}}(t)$ are the transition probabilities of the death process on $\mathbb{Z}_{+}^{K}$ with rates [\[kingman\'s rate\]](#kingman's rate){reference-type="eqref" reference="kingman's rate"}. Similarly to what has already been discussed for the CIR case, the death process dual has a single ergodic state given by the origin $(0,\ldots,0)$, which entails the convergence of $\hat p_{\mathbf{m},\mathbf{n}}(t)$ to $1$ if $\mathbf{n}=(0,\ldots,0)$ and 0 elsewhere, implying the strong convergence $\psi_t \left(\pi_{\bm{\alpha}+\mathbf{m}} \right)\rightarrow \pi_{\bm{\alpha}}$ in [\[PR14 WF propagation\]](#PR14 WF propagation){reference-type="eqref" reference="PR14 WF propagation"}. This is ultimately determined by the fact that Kingman's coalescent removes lineages by coalescence and mutation until absorption to the empty set.
At first glance, a similar convergence is seemingly precluded for [\[WF propagation\]](#WF propagation){reference-type="eqref" reference="WF propagation"}. However, we note in the first sum of [\[moran generator\]](#moran generator){reference-type="eqref" reference="moran generator"} that the new particle's type is either resampled from the surviving particles or drawn from the baseline distribution, in which case the new particle is of type $j$ with (parent-independent) probability $\alpha_{j}/\sum_{i=1}^{K}\alpha_{i}$. Hence a heuristic argument is that each particle will be resampled from the baseline distribution in finite time. Together with the fact that $\sum_{j=1}^{K} \pi_{\bm{\alpha}+\delta_{j}}\alpha_{j}/\sum_{i=1}^{K}\alpha_{i}=\pi_{\bm{\alpha}}$, which follows from Corollary 1.1 in [@A74], and considering that the number of particles is finite, we can therefore expect that, as $t\rightarrow \infty$, we have the convergence $\psi_t \left(\pi_{\bm{\alpha}+\mathbf{m}} \right)\rightarrow \pi_{\bm{\alpha}}$ in [\[WF propagation\]](#WF propagation){reference-type="eqref" reference="WF propagation"} in this case as well.
The transition probabilities $p_{\mathbf{m},\mathbf{n}}(t)$ in [\[WF propagation\]](#WF propagation){reference-type="eqref" reference="WF propagation"}, induced by the Moran model, are not available in closed form. This poses a limit on the direct applicability of the presented algorithms for numerical experiments. The first alternative is then to approximate them by drawing $N$ points from the discrete distribution on the dual space before the propagation, making use of the Gillespie algorithm to draw as many paths, and evaluating the empirical distribution of the arrival points. Further alternatives are suggested by the fact that an appropriately rescaled version of the Moran model converges in distribution to a WF diffusion (see, e.g., [@E09], Lemma 2.39). This would mean spatially rescaling the Moran model in [\[moran generator\]](#moran generator){reference-type="eqref" reference="moran generator"} and writing the resulting generator as (cf. [\[moran generator\]](#moran generator){reference-type="eqref" reference="moran generator"}) $$\mathcal{C}_{|\mathbf{n}|} f(\mathbf{x})
=|\mathbf{n}|^{2}\sum_{1\le i\ne j\le K}\frac{n_{i}}{|\mathbf{n}|}\frac{\alpha_{j}+n_{j}}{|\mathbf{n}|}\bigg[f\bigg(\mathbf{x}-\frac{\mathbf{e}_{i}}{|\mathbf{n}|}+\frac{\mathbf{e}_{j}}{|\mathbf{n}|}\bigg)-f(\mathbf{x})\bigg],$$ where $x_{i}=n_{i}/|\mathbf{n}|$, which by using a Taylor expansion can be shown to yield [\[WF generator\]](#WF generator){reference-type="eqref" reference="WF generator"} when $|\mathbf{n}|\rightarrow \infty$. We could therefore use the WF diffusion to approximate the Moran dual transitions in [\[WF propagation\]](#WF propagation){reference-type="eqref" reference="WF propagation"}. This strategy in principle needs to deal with the intractable terms $d_{m}(t)$ in the transition function expansion [\[WF transition\]](#WF transition){reference-type="eqref" reference="WF transition"} of the diffusion, but in this regard, one can adopt the solution found by [@JS17].
It is also known that one can construct a sequence of WF discrete Markov chains indexed by the population size which, upon appropriate rescaling, converges to the desired WF diffusion (see, e.g., [@KT81], Sec. 15.2.F or [@E09], Sec 4.1). Since two sequences that converge to the same limit can to some extent be considered close to each other, one could then consider a WF discrete chain indexed by $|\mathbf{n}|$ with a parameterization that would make it converge to [\[WF generator\]](#WF generator){reference-type="eqref" reference="WF generator"}, and use it to approximate the Moran transition probabilities. This would permit a straightforward implementation, given that WF discrete chains have multinomial transitions.
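As an illustration of the first of the alternatives above (drawing dual paths with the Gillespie algorithm), a simulation of the Moran dual with jump rates $n_i(\alpha_j+n_j)/2$ could be sketched as follows, with illustrative parameter values; the approximation of $p_{\mathbf{m},\mathbf{n}}(t)$ then regroups $N$ such draws by arrival state:

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_moran_dual(n0, alpha, t):
    """One path of the Moran dual up to time t: from n, jump to
    n - e_i + e_j at rate n_i (alpha_j + n_j) / 2 for i != j."""
    n = np.array(n0, dtype=float)
    K = len(n)
    s = 0.0
    while True:
        rates = np.outer(n, np.asarray(alpha) + n) / 2.0
        np.fill_diagonal(rates, 0.0)     # only i != j moves
        total = rates.sum()
        if total == 0.0:                 # no particles to move
            return n.astype(int)
        s += rng.exponential(1.0 / total)
        if s > t:
            return n.astype(int)
        idx = rng.choice(K * K, p=(rates / total).ravel())
        i, j = divmod(idx, K)
        n[i] -= 1.0
        n[j] += 1.0
```

Note that $|\mathbf{n}|$ is preserved along the path, consistently with the fact that the propagation mixture is supported on $\{\mathbf{n}: |\mathbf{n}|=|\mathbf{m}|\}$.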
# Numerical experiments {#sec:experiments}
To illustrate how the above results can be used in practice and how they perform in comparison with other methods, we consider particle approximations of the dual processes to evaluate the predictive distributions for the signal, denoted $\hat p(x_{k+1}|y_{1:k})$. We compare these distributions with the exact predictive distribution obtained through the results in [@PR14] and with those obtained through bootstrap particle filtering, which makes use of [\[WF transition\]](#WF transition){reference-type="eqref" reference="WF transition"}. Particle filtering can be considered the state of the art for this type of inferential problem, a general reference being [@CP20]. The experiments in this section provide an argument in favour of using the proposed approximations for filtering and smoothing, in light of the fact that these tasks can be successfully tackled once one obtains an accurate inference on the signal prediction distribution, as shown in [@KKK21].
We first briefly describe the specific particle approximation on the dual space we are going to use. To approximate a predictive distribution $\nu_{i|0:i-1}(x_{i})$, the classical particle approximation used in bootstrap particle filtering can be described as follows:
$\bullet$ sample $X_{i-1}^m \overset{\text{iid}}{\sim} \nu_{i-1|0:i-1}$;

$\bullet$ propagate the particles by sampling $X_{i}^m \sim p_{t}(X_{i-1}^m,\cdot)$ for $m = 1, \ldots, N$, with $p_{t}$ as in [\[WF transition\]](#WF transition){reference-type="eqref" reference="WF transition"};

$\bullet$ estimate $\nu_{i|0:i-1}$ with $\hat \nu_{i|0:i-1} := \frac{1}{N}\sum_{m=1}^N \delta_{X_{i}^m}$.
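As a minimal sketch (not the implementation used in the experiments), the three steps above can be written as follows; `sample_transition` is a hypothetical stand-in for a sampler of the signal transition $p_{t}$, and the toy usage employs a Gaussian random walk rather than the CIR or WF transition.

```python
import numpy as np

def bootstrap_predict(particles, sample_transition, rng):
    """One prediction step of a bootstrap particle filter.

    `particles` are iid draws from the filtering distribution
    nu_{i-1|0:i-1}; `sample_transition(x, rng)` draws from the signal
    transition p_t(x, .).  The returned particles define the empirical
    measure (1/N) sum_m delta_{X_i^m} approximating nu_{i|0:i-1}.
    """
    return np.array([sample_transition(x, rng) for x in particles])

# Toy usage with a hypothetical Gaussian random-walk transition
# (stand-in for the CIR/WF transition used in the experiments):
rng = np.random.default_rng(0)
particles = rng.normal(size=50)
step = lambda x, rng: x + 0.1 * rng.normal()
pred = bootstrap_predict(particles, step, rng)
```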
The filtering distributions obtained from the dual processes considered in this work are mixtures of distributions of the form $\nu_{i-1|0:i-1}(x_{i-1}) = \sum\nolimits_{\mathbf{m} } w_\mathbf{m} h(x_{i-1}, \mathbf{m})\pi(x_{i-1})$. We can use the approximation $\hat \nu_{i-1|0:i-1}(x_{i-1}) := \frac{1}{N}\sum\nolimits_ {n=1}^N h(x_{i-1}, \mathbf{m}^{(n)})\pi(x_{i-1})$, where $\mathbf{m}^{(n)} \sim \sum_{\mathbf{m} } w_\mathbf{m} \delta_{\mathbf{m}},$ which amounts to transposing the previous approximation to the dual space, or to performing a particle approximation of the discrete mixing measure. The natural approximation of $\nu_{i|0:i-1}(x_{i})$ is therefore as follows:
$\bullet$ sample $\mathbf{m}^{(n)} \overset{\text{iid}}{\sim} \sum_{\mathbf{m} } w_\mathbf{m} \delta_{\mathbf{m}}$;

$\bullet$ propagate the particles by sampling $\mathbf{n}^{(n)} \sim p_{\mathbf{m}^{(n)},\cdot}(t)$, where $p_{\mathbf{m}^{(n)},\cdot}(t)$ are the transition probabilities of the dual process;

$\bullet$ estimate $\nu_{i|0:i-1}(x_{i})$ with $\hat \nu_{i|0:i-1}(x_{i}) := \frac{1}{N}\sum\nolimits_ {n=1}^N h(x_i, \mathbf{n}^{(n)})\pi(x_{i})$.
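The dual-space analogue admits a similarly short sketch; here `sample_dual_transition` is a hypothetical helper for drawing from the dual transition $p_{\mathbf{m},\cdot}(t)$, and the toy usage employs a pure-death-like move on $\mathbb{Z}_{+}$.

```python
import numpy as np

def dual_predict(dual_states, weights, sample_dual_transition, n_particles, rng):
    """Particle approximation of the predictive distribution via the dual.

    `dual_states`, `weights`: support and weights of the discrete mixing
    measure sum_m w_m delta_m of the filtering distribution;
    `sample_dual_transition(m, rng)`: draws from the dual transition
    p_{m,.}(t) (hypothetical helper, e.g. a Gillespie simulator).
    Returns dual particles n^(1), ..., n^(N); the predictive density is
    then estimated by (1/N) sum_n h(x, n^(n)) pi(x).
    """
    idx = rng.choice(len(dual_states), size=n_particles, p=weights)
    return [sample_dual_transition(dual_states[i], rng) for i in idx]

# Toy usage with a pure-death-like dual move on Z_+:
rng = np.random.default_rng(1)
states = [0, 1, 2, 3]
weights = np.array([0.1, 0.2, 0.3, 0.4])
death = lambda m, rng: max(m - rng.poisson(1), 0)
propagated = dual_predict(states, weights, death, 100, rng)
```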
Here some important remarks are in order. The above dual particle approximation is a finite mixture approximation of a mixture which can be either finite or infinite. Hence the above strategy can be applied to filtering given death-like duals but also given general duals on discrete state spaces. The quality of the dual particle approximation may in general differ from that of the particle filtering approximation, since the particles live on a discrete space in the first case and on a continuous space in the second; the following sections investigate this point for the specific examples at hand. Finally, the ease of implementation of the two approximations may be very different, because simulating from the original Markov process may be much harder than simulating from the dual process. An example is the simulation of Kingman's typed coalescent, which is immediate, as compared to simulation from [\[WF transition\]](#WF transition){reference-type="eqref" reference="WF transition"}, which would be unfeasible without [@JS17].
## Cox--Ingersoll--Ross numerical experiments {#sec: CIR experiments}
The CIR diffusion admits two different duals:
$\bullet$ the death-like dual given by $D_{t}=(M_{t},\Theta_{t})$, with $M_{t}$ a pure-death process on $\mathbb{Z}_{+}$ with rates $2\sigma^{2}\theta m$ from $m$ to $m-1$ and $\Theta_{t}$ a deterministic process that solves the ODE $\mathrm{d}\Theta_{t}/\mathrm{d}t=-2\sigma^{2}\Theta_{t}(\Theta_{t}-\gamma/\sigma^{2})$, $\Theta_{0}=\theta$. Cf. [@PR14], Section 3.1.

$\bullet$ the B&D dual $M_{t}$ on $\mathbb{Z}_{+}$ with birth rates from $m$ to $m+1$ given by $\lambda_m = 2\sigma^2 (\delta/2 + m)(\theta - \gamma /\sigma^2)$ and death rates from $m$ to $m-1$ given by $\mu_m = 2 \sigma^2 \theta m$. Cf. Proposition [Proposition 4](#prop: CIR duality){reference-type="ref" reference="prop: CIR duality"}.
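A direct Gillespie simulation of the B&D dual can be sketched as follows, using the birth and death rates stated above; this is a minimal sketch, and the value $\theta = 2$ in the usage line is an arbitrary illustrative choice satisfying $\theta > \gamma/\sigma^{2}$, so that birth rates are positive.

```python
import numpy as np

def gillespie_cir_dual(m0, t, delta, sigma, gamma, theta, rng):
    """Gillespie simulation of the B&D dual of the CIR diffusion up to time t.

    Birth rate: lambda_m = 2 sigma^2 (delta/2 + m)(theta - gamma/sigma^2);
    death rate: mu_m = 2 sigma^2 theta m.  Assumes theta > gamma/sigma^2.
    """
    m, clock = m0, 0.0
    while True:
        lam = 2 * sigma**2 * (delta / 2 + m) * (theta - gamma / sigma**2)
        mu = 2 * sigma**2 * theta * m
        total = lam + mu
        clock += rng.exponential(1.0 / total)   # waiting time to the next event
        if clock > t:
            return m
        m += 1 if rng.random() * total < lam else -1

# Illustrative parameters (delta, sigma, gamma as in the experiments below;
# theta = 2.0 is an arbitrary choice with theta > gamma/sigma^2):
rng = np.random.default_rng(2)
m_end = gillespie_cir_dual(m0=5, t=0.05, delta=11, sigma=1, gamma=1.1, theta=2.0, rng=rng)
```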
Note that the latter is time-homogeneous, while the former is not. In general, temporal homogeneity is to be preferred, since a direct simulation with a Gillespie algorithm in the inhomogeneous case would require a time-rescaling. However, for this specific case, there is a convenient closed-form expression for the transition density of the first dual, which can be used to simulate arbitrary time transitions (see the third displayed equation at page 2011 in [@PR14]). The second dual, by virtue of its temporal homogeneity, can be simulated directly using a Gillespie algorithm. This may be slow if the event rate becomes large, but as suggested in Section [4](#sec:CIR){reference-type="ref" reference="sec:CIR"} we can see it as a linear B&D process, for which a convenient closed-form expression can be used to simulate arbitrary time transitions.
We compare these two particle approximations with an exact computation of the predictive distribution following [@PR14] and with a bootstrap particle filtering approach on the original state space of the signal, which is easy to implement for arbitrary time transitions thanks to the Gamma-Poisson expansion of the CIR transition density (see details in [@KKK21], Section 5).
Figure [\[fig:cv_pred_dist_CIR\]](#fig:cv_pred_dist_CIR){reference-type="ref" reference="fig:cv_pred_dist_CIR"} shows the comparison of the above-illustrated strategies, with prediction performed for a forecast time horizon of 0.05. The CIR parameters were set to $\delta = 11, \sigma = 1, \gamma = 1.1$. The starting distribution for the prediction is a filtering distribution for a dataset whose last Poisson observation equals 4 (so the starting distribution is a mixture of Gamma distributions roughly centred around 4). The density estimates for the bootstrap particle filter were obtained from a Gamma kernel density estimator with bandwidth estimated by cross-validation. This is expected to induce a negligible error because the target distribution is a finite mixture of Gamma distributions.
The figure suggests that the bootstrap particle filter is slow to converge to the exact predictive distribution. However, with 50 particles, both dual approximations are already almost indistinguishable from the exact predictive distribution. This suggests that accurately approximating the mixing measure on the discrete dual space requires fewer particles than approximating the continuous distribution on the original continuous state space.
![image](convergence_predictive_distr_CIR.pdf){width=".8\\textwidth"}
Next, we turn to investigating the error on the filtering distributions, which combines successive particle approximations. Since the update operation can be performed exactly through [\[update operation\]](#update operation){reference-type="eqref" reference="update operation"}, particle filtering using the dual process is conveniently implemented as a bootstrap particle approximation to a Baum--Welch filter with systematic resampling. We quantify the error on the filtering distributions by measuring the absolute error on the first moment and the standard deviation of the filtering distributions (with respect to the exact computation). We also include the error on the signal retrieval, measured as the absolute difference between the first moment of the filtering distributions and the value of the simulated "true" hidden signal. The mean filtering error is averaged over the second half of the sequence of observations to avoid possible transient effects at the beginning of the observation sequence, and further averaged over 50 different simulated datasets. The parameter specification is again $\delta = 11, \sigma = 1, \gamma = 1.1$, with a single Poisson observation at each of 200 observation times, with interval spacing equal to 0.1. Figure [\[fig:filtering_error_CIR\]](#fig:filtering_error_CIR){reference-type="ref" reference="fig:filtering_error_CIR"} shows that the pure-death particle approximation performs better than the B&D particle approximation, but the latter performs comparably to the bootstrap particle filter, possibly with a modest advantage.
![image](filtering_error_comparison_CIR_2.pdf){width="\\textwidth"}
## Wright--Fisher numerical experiments
The WF diffusion admits two different duals:
$\bullet$ Kingman's typed coalescent with mutation, given by a pure-death process on $\mathbb{Z}_{+}^{K}$ with rates $\lambda_{\mathbf{m}, \mathbf{m}-\mathbf{e}_{i}} = m_i(|\boldsymbol \alpha| + | \boldsymbol m| -1)/2$ from $\mathbf{m}$ to $\mathbf{m}-\mathbf{e}_{i}$. Cf. [@PR14], Section 3.3.

$\bullet$ a Moran dual process, given by a homogeneous B&D process on $\mathbb{Z}_{+}^{K}$ with rates $\lambda_{\mathbf{m},\mathbf{m}-\mathbf{e}_{i}+\mathbf{e}_{j}} = m_i(\alpha_j + m_j)/2$ from $\mathbf{m}$ to $\mathbf{m}-\mathbf{e}_{i}+\mathbf{e}_{j}$. Cf. Proposition [Proposition 5](#prop: WF duality){reference-type="ref" reference="prop: WF duality"}.
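A Gillespie simulation of the Moran dual with the rates stated above can be sketched as follows; the initial state and the choice of $\bm{\alpha}$ in the usage line are illustrative only.

```python
import numpy as np

def gillespie_moran_dual(m0, alpha, t, rng):
    """Gillespie simulation of the Moran dual on Z_+^K up to time t.

    A unit of type i is replaced by one of type j (i != j) at rate
    m_i (alpha_j + m_j) / 2; the total size |m| is conserved.
    """
    m = np.array(m0, dtype=float)
    K = len(m)
    clock = 0.0
    while True:
        rates = np.outer(m, np.asarray(alpha) + m) / 2.0  # rates[i, j]
        np.fill_diagonal(rates, 0.0)  # i = j leaves the state unchanged
        total = rates.sum()
        if total == 0.0:  # absorbing (only possible if m = 0)
            return m.astype(int)
        clock += rng.exponential(1.0 / total)
        if clock > t:
            return m.astype(int)
        i, j = np.unravel_index(rng.choice(K * K, p=rates.ravel() / total), (K, K))
        m[i] -= 1.0
        m[j] += 1.0

# Illustrative run (alpha as in the experiments; initial state arbitrary):
rng = np.random.default_rng(3)
out = gillespie_moran_dual([4, 0, 9, 2], [1.1, 1.1, 1.1, 1.1], t=0.1, rng=rng)
```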
Here both processes are temporally homogeneous and can thus be easily simulated using a Gillespie algorithm, with the only caveat that the simulation can be inefficient when the infinitesimal rates are large. Similarly to the CIR case, there is a closed-form expression for the transition probabilities in the first case, which can be used for simulation purposes for arbitrary time transitions (see Theorem 3.1 in [@PRS16]). Although this expression is easy to handle in the one-dimensional CIR case, it is more challenging in the multi-dimensional WF case, with significant numerical stability issues raised by the need to compute sums of alternating series whose terms can both overflow and underflow. In [@KKK21], these hurdles were addressed using arbitrary precision computation libraries and careful re-use of previous computations, applicable when the data are equally spaced. The Gillespie simulation strategy presents no such restriction and may be significantly faster when the event rates remain low.
As mentioned in Section [\[sec:WF\]](#sec:WF){reference-type="ref" reference="sec:WF"}, no closed-form expression is available for the Moran dual, and the Gillespie algorithm approach is the main option, likely resulting in a slow algorithm. Alternatively, it is possible to approximate the Moran dual process by a finite population Wright--Fisher process, with the quality of the approximation increasing with the population size. The advantage of this approximation is that the event rate is lower for the finite population Wright--Fisher process than for the Moran process. This is related to the fact that weak convergence of a sequence of WF chains to a WF diffusion occurs when time is rescaled by a factor of $N$ (cf. [@KT81], Sec. 15.2.F), whereas a Moran model whose individual updates occur at the times of a Poisson process with rate 1 needs a rescaling by a factor $N^{2}$ to obtain a similar convergence. In other words, in order to establish weak convergence to the diffusion, time must be measured in units of $N$ generations in the WF chain and in units of $N^{2}$ generations in the Moran model. For this reason, the resulting Gillespie simulation will be faster using a WF chain approximation to the Moran model.
The above considerations also suggest another possibility. Since the Moran process converges weakly to a Wright--Fisher diffusion, the latter could also be used as a possible approximation instead of a WF chain. In this case, it is possible to sample directly from [\[WF transition\]](#WF transition){reference-type="eqref" reference="WF transition"} for arbitrary time transitions using the algorithm in [@JS17]. Hence we would be using a WF diffusion to approximate the dual Moran transitions in [\[WF propagation\]](#WF propagation){reference-type="eqref" reference="WF propagation"}.
A standard bootstrap particle filter performed directly on the Wright--Fisher diffusion state space also crucially relies on the algorithm of [@JS17] for the prediction step, without which approximate sampling recipes from the transition density would be needed.
In Figure [\[fig:cv_pred_dist_WF\]](#fig:cv_pred_dist_WF){reference-type="ref" reference="fig:cv_pred_dist_WF"}, we compare prediction strategies for a $K=4$ WF diffusion using (figure legends indicated in parentheses):
$\bullet$ the closed-form transition of the pure death dual ("Exact" in Fig. [\[fig:cv_pred_dist_WF\]](#fig:cv_pred_dist_WF){reference-type="ref" reference="fig:cv_pred_dist_WF"});

$\bullet$ an approximation of the above using a Gillespie algorithm ("PD");

$\bullet$ the Moran dual using a Gillespie algorithm ("BD Gillespie Moran");

$\bullet$ the WF chain approximation of the Moran dual using a Gillespie algorithm ("BD Gillespie WF");

$\bullet$ the WF diffusion approximation of the Moran dual using [@JS17] ("BD diffusion WF");

$\bullet$ a bootstrap particle filtering approximation using [@JS17] ("Bootstrap PF").
Figure [\[fig:cv_pred_dist_WF\]](#fig:cv_pred_dist_WF){reference-type="ref" reference="fig:cv_pred_dist_WF"} shows that among these particle approximations of $p(x_{k+1}|y_{1:k})$, the Wright--Fisher diffusion approximation of the Moran dual seems to converge slowest, followed by the bootstrap particle filter. Prediction was performed for a forecast time horizon equal to 0.1, with WF parameters $\bm{\alpha}= (1.1, 1.1, 1.1, 1.1)$. The starting distribution for the prediction is a filtering distribution for a dataset whose last multinomial observation is equal to $(4, 0, 9, 2)$ (so the starting distribution is a mixture of Dirichlet distributions roughly centred around $(4/15, 0, 9/15, 2/15)$). The density estimates for the bootstrap particle filter are obtained from a Dirichlet kernel density estimator with bandwidth estimated by cross-validation. This is expected to induce a negligible error because the target distribution is a finite mixture of Dirichlet distributions.
![image](convergence_predictive_distr_WF4.pdf){width="\\textwidth" height="5cm"}
Figure [\[fig:filtering_error_WF\]](#fig:filtering_error_WF){reference-type="ref" reference="fig:filtering_error_WF"} evaluates the filtering error for a WF process with $K = 3$ and parameters $\bm{\alpha}= (1.1, 1.1, 1.1)$, given 20 categorical observations collected at each time, and 10 collection times with a time interval of 1. We consider increasing numbers of particles and use 100 replications to estimate the error. The figure shows that the particle approximation of the pure death dual process using the closed-form transition exhibits the best performance. The bootstrap particle approximation improves fastest as the number of particles increases. Overall, the Moran dual performs better than or comparably to bootstrap particle filtering.
![image](filtering_error_WF_2.pdf){width="\\textwidth"}
# Concluding remarks
We have provided conditions for filtering diffusion processes on $\mathbb{R}^{K}$ which avoid computations on the uncountable state space of the forward process when a dual process is available and is a regular jump continuous-time Markov chain on a discrete state space. Motivated by certain diffusion models for which only duals with a countable state space are known (e.g. B&D-like duals for WF diffusions with selection), we have investigated the performance of filtering based on a B&D dual for the CIR diffusion and based on a Moran process dual for the WF diffusion. All approximation methods proposed appear to be valuable strategies, despite resting on different simulation schemes. The optimal strategy is bound to depend on the application at hand, together with several other details like the interval lengths between data collection times, and possibly be constrained by which of these tools are available. Overall, approximate filtering using B&D-like duals may perform better or comparably to bootstrap particle filtering, with the advantage of operating on a discrete state space. The computational effort for each of these strategies is also bound to depend on a series of factors the identification of which is beyond the scope of this contribution.
[^1]: Corresponding author: ESOMAS Dept., Corso Unione Sovietica 218/bis, 10134, Torino, Italy; matteo.ruggiero\@unito.it
---
abstract: |
These notes for a master class at Aarhus University (March 22--24, 2023) provide an introduction to the theory of completion for triangulated categories.
address: |
Fakultät für Mathematik\
Universität Bielefeld\
D-33501 Bielefeld\
Germany
author:
- Henning Krause
title: Completions of triangulated categories
---
# Introduction
Completions arise in all parts of mathematics. For example, the real numbers are the completion of the rationals. This construction uses equivalence classes of Cauchy sequences and is due to Cantor [@Ca1872] and Méray [@Me1869]. There is an obvious generalisation in the setting of metric spaces, leading to the completion of a metric space. Completions of rings and modules play an important role in algebra and geometry. For categories there is the notion of ind-completion due to Grothendieck and Verdier [@GV1972] which provides an embedding into categories with filtered colimits.
In these notes we combine all these ideas to study completions of triangulated categories. We do not offer an elaborated theory and rather look at various examples which arise naturally in representation theory of algebras.
These notes are based on three lectures; they are divided into nine sections (roughly three per lecture). The first three sections provide some preparations. In particular, we introduce the ind-completion of a category and explain completions in the context of modules over commutative rings. In §[5](#se:completion){reference-type="ref" reference="se:completion"} we propose a definition of a partial completion for triangulated categories; roughly speaking these are full subcategories of the ind-completion which admit a triangulated structure. In §[6](#se:pure-inj){reference-type="ref" reference="se:pure-inj"} we provide various criteria for an exact functor between triangulated categories to be a partial completion, and §[7](#se:t-vs-c){reference-type="ref" reference="se:t-vs-c"} discusses a general set-up in terms of compactly generated triangulated categories which covers many of the examples that arise in nature. The final sections provide detailed proofs for some of the examples and discuss the role of enhancements.
It is my pleasure to thank Charley Cummings, Sira Gratz, and Davide Morigi for organising the *Categories, clusters, and completions master class* at Aarhus University. In particular, I am very grateful to them for suggesting the topic of this series of lectures. My own interest in this subject arose from my collaboration with Dave Benson, Srikanth Iyengar, and Julia Pevtsova; I am most grateful for their inspiration and for their helpful comments on these notes. Last but not least let me thank Amnon Neeman for his comments, and let me recommend the notes from his lectures for another perspective on this fascinating subject.
This work was partly supported by the Deutsche Forschungsgemeinschaft (SFB-TRR 358/1 2023 - 491392403).
# The ind-completion of a category
Let $\mathcal C$ be an essentially small category. We write $\operatorname{Fun}(\mathcal C^\mathrm{op},\mathrm{Set})$ for the category of functors $\mathcal C^\mathrm{op}\to\mathrm{Set}$. Morphisms in $\operatorname{Fun}(\mathcal C^\mathrm{op},\mathrm{Set})$ are natural transformations. Note that this category is complete and cocomplete, that is, all limits and colimits exist and are computed pointwise in $\mathrm{Set}$.
For functors $E$ and $F$ we write $\operatorname{Hom}(E,F)$ for the set of morphisms from $E$ to $F$. The *Yoneda functor* $$\mathcal C\longrightarrow\operatorname{Fun}(\mathcal C^\mathrm{op},\mathrm{Set}),\quad X\mapsto
H_X:=\operatorname{Hom}(-,X)$$ is fully faithful; this follows from Yoneda's lemma which provides a natural bijection $$\operatorname{Hom}(H_X,F)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}F(X).$$
Any functor $F\colon\mathcal C^\mathrm{op}\to\mathrm{Set}$ can be written canonically as a colimit of representable functors $$\label{eq:slice}
\operatorname*{colim}_{H_X\to F} H_X \xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}F$$ where the colimit is taken over the *slice category* $\mathcal C/F$; see [@GV1972 Proposition 3.4]. Objects in $\mathcal C/F$ are morphisms $H_X\to F$ where $X$ runs through the objects of $\mathcal C$. A morphism in $\mathcal C/F$ from $H_X\xrightarrow{\phi} F$ to $H_{X'}\xrightarrow{\phi'} F$ is a morphism $\alpha \colon X\to X'$ in $\mathcal C$ such that $\phi' H_{\alpha}=\phi$.
**Definition 1**. A *filtered colimit* in a category $\mathcal D$ is the colimit of a functor $\mathcal I\to \mathcal D$ such that the category $\mathcal I$ is *filtered*, that is
1. the category is non-empty,
2. given objects $i,i'$ there is an object $j$ with morphisms $i\to j\leftarrow i'$, and
3. given morphisms $\alpha,\alpha'\colon i\to j$ there is a morphism $\beta\colon j\to k$ such that $\beta\alpha=\beta\alpha'$.
*Remark 1*. A partially ordered set $(I,\le)$ can be viewed as a category: the objects are the elements of $I$ and there is a unique morphism $i\to j$ whenever $i\le j$. This category is filtered if and only if $(I,\le)$ is non-empty and *directed*, that is, for each pair of elements $i,i'$ there is an element $j$ such that $i,i'\le j$. When the colimit of a functor $\mathcal I\to\mathcal D$ is given by a directed partially ordered set, this colimit is also called *directed colimit* (or confusingly *direct limit*).
For each essentially small filtered category $\mathcal I$ there exists a functor $\phi\colon \mathcal J\to\mathcal I$ such that $\mathcal J$ is the category corresponding to a directed partially ordered set and any functor $X\colon \mathcal I\to \mathcal D$ induces an isomorphism $$\operatorname*{colim}_{j\in\mathcal J} X(\phi(j))\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname*{colim}_{i\in\mathcal I}X(i).$$ This fact will not be needed, but it may be useful to know; see [@GV1972 Proposition 8.1.6].
Let us get back to functors $F\colon \mathcal C^\mathrm{op}\to\mathrm{Set}$. It is not difficult to show that $F$ is a filtered colimit of representable functors if and only if the slice category $\mathcal C/F$ is filtered.
**Definition 1**. The *ind-completion* of $\mathcal C$ is the category of functors $F\colon \mathcal C^\mathrm{op}\to\mathrm{Set}$ that are filtered colimits of representable functors. We denote this category by $\operatorname{Ind}\mathcal C$; it is a category with filtered colimits and the Yoneda functor $\mathcal C\to\operatorname{Ind}\mathcal C$ is the universal functor from $\mathcal C$ to a category with filtered colimits.
It is convenient to identify $\mathcal C$ with the full subcategory of representable functors in $\operatorname{Ind}\mathcal C$. Let $X=\operatorname*{colim}_i X_i$ and $Y=\operatorname*{colim}_j Y_j$ be objects in $\operatorname{Ind}\mathcal C$, written as filtered colimits of objects in $\mathcal C$.
**Lemma 1**. *We have natural bijections $$\operatorname*{colim}_i\operatorname{Hom}(C,X_i)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname{Hom}(C,\operatorname*{colim}_i X_i)\quad \text{for each}
\quad C\in\mathcal C$$ and $$\operatorname{Hom}(X,Y)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\lim_i\operatorname*{colim}_j\operatorname{Hom}(X_i,Y_j).$$*
*Proof.* The first bijection is an immediate consequence of Yoneda's lemma. For the second bijection we compute $$\begin{aligned}
\operatorname{Hom}(X,Y)&=\operatorname{Hom}(\operatorname*{colim}_i X_i, \operatorname*{colim}_j Y_j)\\
&\cong \lim_i\operatorname{Hom}(X_i, \operatorname*{colim}_j Y_j)\\
&\cong \lim_i\operatorname*{colim}_j\operatorname{Hom}(X_i,Y_j).\qedhere\end{aligned}$$ ◻
The ind-completion takes a more familiar form when we consider additive categories. The following examples use the fact that a module $M$ over any ring is finitely presented if and only if the representable functor $\operatorname{Hom}(M,-)$ preserves filtered colimits.
**Example 1**. Let $A$ be a ring and $\mathcal C=\operatorname{proj}A$ the category of finitely generated projective $A$-modules. A theorem of Lazard says that a module is flat if and only if it is a filtered colimit of finitely generated free modules. Thus $\operatorname{Ind}\mathcal C$ identifies with the category of flat $A$-modules.
**Example 1**. Let $A$ be a ring and $\mathcal C=\operatorname{mod}A$ the category of finitely presented $A$-modules. It is well known that any module is a filtered colimit of finitely presented modules. Thus $\operatorname{Ind}\mathcal C$ identifies with the category of all $A$-modules.
Here is a useful fact which connects the above examples; it is due to Lenzing [@Le1983 Proposition 2.1]. Let $\mathcal D\subseteq\mathcal C$ be a full subcategory. The inclusion induces a fully faithful functor $\operatorname{Ind}\mathcal D\to\operatorname{Ind}\mathcal C$; this follows easily from Lemma [Lemma 1](#le:Hom-completion){reference-type="ref" reference="le:Hom-completion"}. Thus we may view $\operatorname{Ind}\mathcal D$ as a full subcategory of $\operatorname{Ind}\mathcal C$.
**Lemma 1**. *An object $X\in\operatorname{Ind}\mathcal C$ belongs to $\operatorname{Ind}\mathcal D$ if and only if each morphism $C\to X$ with $C\in\mathcal C$ factors through an object in $\mathcal D$.*
*Proof.* If $X$ belongs to $\operatorname{Ind}\mathcal D$, then it follows from Lemma [Lemma 1](#le:Hom-completion){reference-type="ref" reference="le:Hom-completion"} that each morphism from $C\in\mathcal C$ factors through an object in $\mathcal D$. Now write $X=\operatorname*{colim}_{C\to X} C$ as a filtered colimit of objects in $\mathcal C$ as in [\[eq:slice\]](#eq:slice){reference-type="eqref" reference="eq:slice"}, using the slice category $\mathcal C/X$ as index category. The condition that each morphism $C\to X$ with $C\in\mathcal C$ factors through an object in $\mathcal D$ is precisely the fact that $\mathcal D/X$ is a *cofinal subcategory* of $\mathcal C/X$. Thus the induced morphism $$\operatorname*{colim}_{D\to X} D\longrightarrow\operatorname*{colim}_{C\to X} C=X$$ is an isomorphism; see [@GV1972 Proposition 8.1.3]. ◻
The above criterion applied to the inclusion $\operatorname{proj}A\subseteq\operatorname{mod}A$ shows that a module is flat if and only if each morphism from a finitely presented module factors through a finitely generated projective module.
**Example 1**. Let $\mathcal C$ be an essentially small additive category with cokernels. A functor $F\colon\mathcal C^\mathrm{op}\to\mathrm{Ab}$ is *left exact* if it is additive and sends each cokernel sequence $X\to Y\to Z\to 0$ to an exact sequence $0\to FZ\to FY\to FX$ of abelian groups. One can show that $F$ is left exact if and only if it is a filtered colimit of representable functors; see [@Kr2022 Lemma 11.1.14]. Thus $\operatorname{Ind}\mathcal C$ identifies with the category $\operatorname{Lex}(\mathcal C^\mathrm{op},\mathrm{Ab})$ of left exact functors $\mathcal C^\mathrm{op}\to\mathrm{Ab}$. If $\mathcal C$ is abelian, then $\operatorname{Lex}(\mathcal C^\mathrm{op},\mathrm{Ab})$ is an abelian Grothendieck category and the Yoneda functor is exact.
**Example 1**. Let $A$ be a commutative noetherian ring. Let $\mathcal C=\operatorname{fl}A$ denote the category of finite length modules (i.e. the finitely generated torsion modules). Then $\operatorname{Ind}\mathcal C$ identifies with the category of all torsion $A$-modules. This follows for example by applying the criterion in Lemma [Lemma 1](#le:Lenzing){reference-type="ref" reference="le:Lenzing"} to the inclusion $\operatorname{fl}A\subseteq\operatorname{mod}A$.
The next example is relevant because we wish to study filtered colimits arising from (or even in) triangulated categories.
**Example 1**. Let $\mathcal C$ be an essentially small triangulated category. Then $\operatorname{Ind}\mathcal C$ identifies with the category $\operatorname{Coh}\mathcal C$ of cohomological functors $\mathcal C^\mathrm{op}\to\mathrm{Ab}$. Recall that $\mathcal C^\mathrm{op}\to\mathrm{Ab}$ is *cohomological* if it is additive and sends exact triangles to exact sequences.
*Proof.* Any representable functor is cohomological, and taking filtered colimits (in the category $\mathrm{Ab}$) is exact. Thus a filtered colimit of representable functors is cohomological. Conversely, suppose that $F\colon \mathcal C^\mathrm{op}\to\mathrm{Ab}$ is cohomological. Then it is easily checked that the slice category $\mathcal C/F$ is filtered. ◻
# The sequential completion of a category
The ind-completion of a category allows one to take arbitrary filtered colimits, so colimits of functors that are indexed by any filtered category. In the following we restrict to colimits of functors (or sequences) that are indexed by the natural numbers.
Let $\mathbb N=\{0,1,2,\ldots\}$ denote the set of natural numbers, viewed as a category with a single morphism $i\to j$ if $i\le j$.
Now fix a category $\mathcal C$ and consider the category $\operatorname{Fun}(\mathbb N,\mathcal C)$ of functors $\mathbb N\to\mathcal C$. An object $X$ is nothing but a sequence of morphisms $X_0\to X_1\to X_2\to \cdots$ in $\mathcal C$, and the morphisms between functors are by definition the natural transformations. We call $X$ a *Cauchy sequence* if for all $C\in\mathcal C$ the induced map $\operatorname{Hom}(C,X_i)\to\operatorname{Hom}(C,X_{i+1})$ is invertible for $i\gg 0$. This means: $$\forall\; C\in\mathcal C\;\; \exists\; n_C\in\mathbb N\;\; \forall\; j\ge i\ge
n_C\;\; \operatorname{Hom}(C,X_i)\xrightarrow{_\sim}\operatorname{Hom}(C,X_j).$$
Let $\operatorname{Cau}(\mathbb N,\mathcal C)$ denote the full subcategory consisting of all Cauchy sequences. A morphism $X\to Y$ is *eventually invertible* if for all $C\in\mathcal C$ the induced map $\operatorname{Hom}(C,X_i)\to\operatorname{Hom}(C,Y_i)$ is invertible for $i\gg 0$. This means: $$\forall\; C\in\mathcal C\;\; \exists\; n_C\in\mathbb N\;\; \forall\; i\ge
n_C\;\; \operatorname{Hom}(C,X_i)\xrightarrow{_\sim}\operatorname{Hom}(C,Y_i).$$ Let $S$ denote the class of eventually invertible morphisms in $\operatorname{Cau}(\mathbb N,\mathcal C)$.
**Definition 1**. The *sequential Cauchy completion* of $\mathcal C$ is the category $$\operatorname{Ind}_\mathrm{Cau}\mathcal C:= \operatorname{Cau}(\mathbb N,\mathcal C)[S^{-1}]$$ that is obtained from the Cauchy sequences by formally inverting all eventually invertible morphisms, together with the *canonical functor* $\mathcal C\to\operatorname{Ind}_\mathrm{Cau}\mathcal C$ that sends an object $X$ in $\mathcal C$ to the constant sequence $X\xrightarrow{\operatorname{id}} X\xrightarrow{\operatorname{id}} \cdots$.
A sequence $X\colon\mathbb N\to\mathcal C$ induces a functor $$\widetilde X\colon\mathcal C^\mathrm{op}\longrightarrow\mathrm{Set},\quad C\mapsto\operatorname*{colim}_i\operatorname{Hom}(C,X_i),$$ and this yields a functor $$\label{eq:seq-compl}
\operatorname{Ind}_\mathrm{Cau}\mathcal C\longrightarrow\operatorname{Ind}\mathcal C\subseteq\operatorname{Fun}(\mathcal C^\mathrm{op},\mathrm{Set}),\quad
X\mapsto\widetilde X,$$ because the assignment $X\mapsto\widetilde X$ maps eventually invertible morphisms to isomorphisms.
**Proposition 1** ([@Kr2020 Proposition 2.4]). *The canonical functor $\operatorname{Ind}_\mathrm{Cau}\mathcal C\to\operatorname{Ind}\mathcal C$ is fully faithful; it identifies $\operatorname{Ind}_\mathrm{Cau}\mathcal C$ with the colimits of sequences of representable functors that correspond to Cauchy sequences in $\mathcal C$. ◻*
It turns out that the class of Cauchy sequences is too restrictive; we need a more general notion of completion.
**Definition 1**. The *sequential completion* of $\mathcal C$ has as objects all sequences in $\mathcal C$, i.e. functors $\mathbb N\to\mathcal C$, and for objects $X,Y$ set $$\operatorname{Hom}(X,Y)=\lim_i\operatorname*{colim}_j\operatorname{Hom}(X_i,Y_j).$$ This category is denoted by $\operatorname{Ind}_\mathbb N\mathcal C$, and for any class $\mathcal X$ of objects the corresponding full subcategory is denoted by $\operatorname{Ind}_\mathcal X\mathcal C$.
It follows from Lemma [Lemma 1](#le:Hom-completion){reference-type="ref" reference="le:Hom-completion"} that the assignment $X\mapsto\operatorname*{colim}_i\operatorname{Hom}(-,X_i)$ induces a fully faithful functor $\operatorname{Ind}_\mathbb N\mathcal C\to\operatorname{Ind}\mathcal C$. Thus we obtain canonical inclusions $$\operatorname{Ind}_\mathcal X\mathcal C\subseteq \operatorname{Ind}_\mathbb N\mathcal C\subseteq \operatorname{Ind}\mathcal C.$$
**Example 1**. Let $\mathcal C$ be an exact category and let $\mathcal X$ denote the class of sequences $X$ such that each $X_i\to X_{i+1}$ is an admissible monomorphism. Then $\widetilde{\mathcal C}:=\operatorname{Ind}_\mathcal X\mathcal C$ admits a canonical exact structure and is called the *countable envelope* of $\mathcal C$ [@Ke1990 Appendix B].
# Completion of rings and modules
Let $A$ be an associative ring. We consider the category $\operatorname{Mod}A$ of right $A$-modules and the following full subcategories: $$\begin{aligned}
\operatorname{mod}A &= \text{finitely presented $A$-modules}\\
\operatorname{proj}A &= \text{finitely generated projective $A$-modules}\\
\operatorname{noeth}A &= \text{noetherian $A$-modules (satisfying the ascending chain condition)}\\
\operatorname{art}A &= \text{artinian $A$-modules (satisfying the descending chain condition)}\\
\operatorname{fl}A &= \operatorname{art}A \cap \operatorname{noeth}A = \text{finite length $A$-modules} \end{aligned}$$
**Definition 1**. For an ideal $I\subseteq A$ the *$I$-adic completion* of $A$ is the limit $$\widehat A:=\lim_{n\ge 0}A/I^n.$$ Similarly for an $A$-module $M$ one sets $$\widehat M:=\lim_{n\ge 0} M/M I^n.$$
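For intuition, an element of the limit $\lim_{n\ge 0}A/I^n$ is a compatible sequence of residues, one modulo each power $I^n$. The following minimal sketch (my own toy illustration, not from the text) takes $A=\mathbb Z$ and $I=(p)$, so that $\widehat A$ is the ring of $p$-adic integers:

```python
# Illustrative sketch: elements of the I-adic completion lim_n A/I^n are
# compatible sequences (a_n) with a_{n+1} = a_n modulo I^n.  For A = Z and
# I = (p) this recovers the p-adic integers Z_p.

def adic_truncations(a, p, N):
    """Compatible sequence (a mod p, a mod p^2, ..., a mod p^N)
    representing the image of the integer a in lim_n Z/p^n."""
    return [a % p**n for n in range(1, N + 1)]

def is_compatible(seq, p):
    """Check the defining compatibility: each entry reduces to the previous one."""
    return all(seq[n + 1] % p**(n + 1) == seq[n] for n in range(len(seq) - 1))

# The integer -1 maps to the sequence (p^n - 1)_n; in Z_2 all its digits are 1.
seq = adic_truncations(-1, 2, 5)
print(seq)                    # [1, 3, 7, 15, 31]
print(is_compatible(seq, 2))  # True
```

The completion $\widehat{\mathbb Z}_{(p)}=\mathbb Z_p$ is strictly larger than $\mathbb Z$: most compatible sequences do not come from a single integer.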
Let us consider the special case that $A$ is a commutative noetherian local ring and $I=\mathfrak{m}$ its unique maximal ideal. We denote by $E=E(A/\mathfrak{m})$ the injective envelope of the unique simple $A$-module. Then $D=\operatorname{Hom}(-,E)$ yields the *Matlis duality* $\operatorname{Mod}A\to\operatorname{Mod}A$, satisfying $$\operatorname{Hom}(M,DN)\cong\operatorname{Hom}(N,DM)
\qquad (M,N\in\operatorname{Mod}A).$$ Note that $M\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}D^2M$ when $M$ has finite length; this is easily checked by induction on the length of $M$. Thus $D$ induces an equivalence $$(\operatorname{fl}A)^\mathrm{op}\xrightarrow{\ \raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}\ }\operatorname{fl}A.$$ For $n\ge 0$ we write $E_n=\operatorname{Hom}(A/\mathfrak{m}^n,E)$ and note that $$E=\bigcup_{n\ge 0}E_n.$$ In fact, the module $E$ is artinian and each submodule $E_n$ is of finite length. Thus $$\operatorname{Hom}(E,E)\cong \operatorname{Hom}(\operatorname*{colim}_{n\ge 0}E_n,E)\cong
\lim_{n\ge 0}\operatorname{Hom}(E_n,E)\cong \lim_{n\ge 0}A/\mathfrak{m}^n=\widehat A.$$ In particular, each Matlis dual module $DM$ is canonically an $\widehat A$-module via the map $\widehat A\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname{End}(E)$. Thus Matlis duality yields the following commutative diagram. $$\begin{tikzcd}
(\operatorname{fl}A)^\mathrm{op}\arrow["\sim" labl,"D" ']{d}\arrow[tail]{rr}&&(\operatorname{art}A)^\mathrm{op}\arrow["\sim" labl,"D" ']{d}\\
\operatorname{fl}A\arrow[tail]{r} &\operatorname{noeth}A\arrow{r}{\widehat{}}&\operatorname{noeth}\widehat A
\end{tikzcd}$$
Given a module $M$, the *socle* $\operatorname{soc}M$ is the sum of all simple submodules. One defines inductively $\operatorname{soc}^n M\subseteq M$ for $n\ge 0$ by setting $\operatorname{soc}^0 M=0$, and $\operatorname{soc}^{n+1}M$ is given by the exact sequence $$0\longrightarrow\operatorname{soc}^n M\longrightarrow\operatorname{soc}^{n+1} M\longrightarrow\operatorname{soc}(M/\operatorname{soc}^n M)\longrightarrow 0.$$
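As a concrete check of the socle series (a toy computation of my own, not from the text), take $M$ a finite abelian $p$-group viewed over the local ring $\mathbb Z_{(p)}$. For a cyclic factor $\mathbb Z/p^a$ one has $\operatorname{soc}^n=p^{\max(a-n,0)}\,\mathbb Z/p^a$, so the length of $\operatorname{soc}^nM$ is $\sum_i\min(n,a_i)$:

```python
# Illustrative sketch: lengths of the socle series of the finite abelian
# p-group M = Z/p^{a_1} + ... + Z/p^{a_k}.  The series increases and
# stabilises at the length of M, confirming colim_n soc^n M = M.

def socle_lengths(exponents, n_max):
    """Lengths of soc^0 M, soc^1 M, ..., soc^{n_max} M."""
    return [sum(min(n, a) for a in exponents) for n in range(n_max + 1)]

# M = Z/p + Z/p^3 has length 4; the series stabilises after 3 steps.
print(socle_lengths([1, 3], 5))  # [0, 2, 3, 4, 4, 4]
```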
Recall that a ring $A$ is *semi-local* if $A/J(A)$ is a semisimple ring, where $J(A)$ denotes the Jacobson radical of $A$. In that case we have $\operatorname{soc}M\cong\operatorname{Hom}(A/J(A),M)$ for every $A$-module $M$.
**Proposition 1** ([@Kr2020 Proposition 3.4]). *Let $A$ be a commutative noetherian semi-local ring. Then the sequential Cauchy completion of $\operatorname{fl}A$ identifies with $\operatorname{art}A$.*
*Proof.* Set $\mathcal C=\operatorname{fl}A$. The assignment $X\mapsto \bar X:=\operatorname*{colim}_n X_n$ yields a fully faithful functor $$\operatorname{Ind}_\mathrm{Cau}\mathcal C\subseteq\operatorname{Ind}\mathcal C\longrightarrow\operatorname{Ind}(\operatorname{mod}A)=\operatorname{Mod}A.$$
It is well known that an $A$-module $M$ is artinian if and only if $M$ is the union of finite length modules and $\operatorname{soc}M$ has finite length [@Kr2022 Proposition 2.4.20]. In that case the socle series $(\operatorname{soc}^nM)_{n\ge 0}$ of $M$ yields a Cauchy sequence in $\mathcal C$ with $\operatorname*{colim}_n(\operatorname{soc}^nM)=M$.
Now let $X\in\operatorname{Ind}_\mathrm{Cau}\mathcal C$. Then every finitely generated submodule of $\bar X$ has finite length, so $\bar X$ is a union of finite length modules. Also $\operatorname{soc}\bar X$ has finite length, since $$\operatorname{soc}\bar X\cong\operatorname{Hom}(A/J(A),\bar X)\cong\operatorname*{colim}_n\operatorname{Hom}(A/J(A),
X_n).$$ Thus $\bar X$ is artinian. ◻
*Remark 1*. Completions of modules or categories of modules will serve as model for completions of triangulated categories. We have seen two types of completions: the *ind-completion* (the inclusion $\operatorname{fl}A\to\operatorname{art}A$) and the *adic completion* (the functor $\operatorname{noeth}A\to\operatorname{noeth}\widehat A$). Both types of completions have their analogues when we consider triangulated categories.
# Completion of triangulated categories {#se:completion}
We are ready to propose a definition of 'completion' for a triangulated category. Roughly speaking it is a triangulated approximation of the ind-completion. In fact, it will be rare that the ind-completion of a triangulated category admits a triangulated structure, but it does happen that certain full subcategories are triangulated.
Let $\mathcal C$ be an essentially small triangulated category. We denote by $\operatorname{Coh}\mathcal C$ the category of cohomological functors $\mathcal C^\mathrm{op}\to\mathrm{Ab}$. In Example [Example 1](#ex:coh){reference-type="ref" reference="ex:coh"} we have already seen that $\operatorname{Coh}\mathcal C$ equals $\operatorname{Ind}\mathcal C$. An exact functor $f\colon\mathcal C\to\mathcal D$ between triangulated categories induces the *restriction* $$f_*\colon \mathcal D\longrightarrow\operatorname{Coh}\mathcal C,\quad X\mapsto\operatorname{Hom}(-,X)\circ f.$$
**Definition 1**. We call a fully faithful exact functor $f\colon\mathcal C\to\mathcal D$ a *partial completion* of the triangulated category $\mathcal C$ if the restriction $f_*\colon\mathcal D\to\operatorname{Coh}\mathcal C$ is fully faithful. The completion is called *sequential* if the above functor factors through $\operatorname{Ind}_\mathbb N\mathcal C$, and it is *Cauchy sequential* if the functor factors through $\operatorname{Ind}_\mathrm{Cau}\mathcal C$.
A partial completion of $\mathcal C$ is far from unique. But depending on the context there are often natural choices. An essential feature of a partial completion is the fact that any object in $\mathcal D$ can be written canonically as a filtered colimit of objects in the image of $f$. Suppose for simplicity that $f$ is an inclusion. Then we have for each object $X\in\mathcal D$ an isomorphism $$\operatorname*{colim}_{C\to X}C\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}X$$ where $C\to X$ runs through all morphisms in $\mathcal D$ with $C\in\mathcal C$. This follows from [\[eq:slice\]](#eq:slice){reference-type="eqref" reference="eq:slice"}. Moreover, using Lemma [Lemma 1](#le:Hom-completion){reference-type="ref" reference="le:Hom-completion"} we can compute morphisms in $\mathcal D$ via $$\operatorname{Hom}(X,X')\cong\lim_{C\to X}\operatorname*{colim}_{C'\to X'}\operatorname{Hom}(C,C').$$ If the completion is sequential, then there is for each object $X\in\mathcal D$ a sequence $C_0\to C_1\to C_2\to\cdots$ in $\mathcal C$ such that $$\operatorname*{colim}_n C_n\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}X.$$ We note that these filtered colimits are taken in $\mathcal D$; they exist because $\mathcal D$ identifies with a full subcategory of the ind-completion of $\mathcal C$.
Examples of partial completions arise from derived categories of exact subcategories. For an exact category $\mathcal A$ we write $\mathbf D(\mathcal A)$ for its derived category and $\mathbf D^b(\mathcal A)$ for the full subcategory of bounded complexes.
**Example 1**. For a commutative noetherian ring $A$ the inclusion $\mathbf D^b(\operatorname{fl}A)\to \mathbf D^b(\operatorname{art}A)$ is a sequential partial completion.
Recall that a ring $A$ is *right coherent* if the category $\operatorname{mod}A$ of finitely presented $A$-modules is abelian.
**Example 1**. For a right coherent ring $A$ the inclusion $\mathbf D^b(\operatorname{proj}A)\to\mathbf D^b(\operatorname{mod}A)$ is a Cauchy sequential partial completion.
The assumption on the ring $A$ to be right coherent is not essential. For an arbitrary ring one takes instead of $\operatorname{mod}A$ the exact category of $A$-modules $M$ that admit a projective resolution $$\cdots \longrightarrow P_1\longrightarrow P_0\longrightarrow M\longrightarrow 0$$ such that each $P_i$ is finitely generated; it is the largest full exact subcategory of $\operatorname{Mod}A$ containing $\operatorname{proj}A$ and having enough projective objects.
We will return to these examples and provide full proofs. In fact, the proofs require the study of compactly generated triangulated categories.
# Completion of compact objects and pure-injectivity {#se:pure-inj}
Partial completions of an essentially small triangulated category $\mathcal C$ often arise as full triangulated subcategories of a compactly generated triangulated category $\mathcal T$ such that $\mathcal C$ equals the subcategory of compact objects.
Let $\mathcal T$ be a triangulated category that admits arbitrary coproducts. An object $X$ in $\mathcal T$ is called *compact* if the functor $\operatorname{Hom}(X,-)$ preserves all coproducts. We denote by $\mathcal T^c$ the full subcategory of compact objects and note that it is a thick subcategory of $\mathcal T$. The triangulated category $\mathcal T$ is *compactly generated* if $\mathcal T^c$ is essentially small and if $\mathcal T$ has no proper localising subcategory containing $\mathcal T^c$.
From now on fix a compactly generated triangulated category $\mathcal T$ and set $\mathcal C=\mathcal T^c$.
**Definition 1**. The functor $$\mathcal T\longrightarrow\operatorname{Coh}\mathcal C,\quad X\mapsto
h_X:=\operatorname{Hom}(-,X)|_\mathcal C$$ is called *restricted Yoneda functor*. The induced map $$\operatorname{Hom}(X,Y)\longrightarrow\operatorname{Hom}(h_X,h_Y)\qquad (X,Y\in\mathcal T)$$ is in general neither injective nor surjective; its kernel is the subgroup of *phantom morphisms*.
The category $\operatorname{Coh}\mathcal C$ is an extension closed subcategory of the abelian category $\operatorname{Mod}\mathcal C$ (i.e. the category of additive functors $\mathcal C^\mathrm{op}\to\mathrm{Ab}$). Thus it is an exact category (in the sense of Quillen) with enough projective and enough injective objects. In fact, the projective objects are of the form $h_X$ with $X$ a direct summand of a coproduct of compact objects in $\mathcal T$; this follows from Yoneda's lemma. An application of Brown's representability theorem shows that also the injective objects are of the form $h_X$ for an object $X$ in $\mathcal T$. This leads to the notion of a pure-injective object.
**Definition 1**. An exact triangle $X\to Y\to Z\to$ in $\mathcal T$ is called *pure-exact* if the induced sequence $0\to h_X\to h_Y\to h_Z\to 0$ is exact. The triangle *splits* if $X\to Y$ is a split monomorphism, equivalently if $Y\to Z$ is a split epimorphism. An object $X$ in $\mathcal T$ is *pure-injective* if each pure-exact triangle $X\to Y\to Z\to$ in $\mathcal T$ splits.
We have the following characterisation of pure-injectivity.
**Proposition 1** ([@Kr2000 Theorem 1.8]). *For an object $X$ in $\mathcal T$ the following are equivalent.*
1. *The map $\operatorname{Hom}(X',X)\to\operatorname{Hom}(h_{X'},h_{X})$ is bijective for all $X'\in\mathcal T$.*
2. *The object $h_X$ is injective in $\operatorname{Coh}\mathcal C$.*
3. *The object $X$ is pure-injective in $\mathcal T$.*
4. *For each set $I$ the summation morphism $\coprod_I X\to X$ factors through the canonical morphism $\coprod_I X\to \prod_IX$. ◻*
The following immediate consequence motivates our interest in pure-injectives.
**Corollary 1**. *Let $\mathcal D\subseteq\mathcal T$ be a triangulated subcategory containing all compact objects and consisting of pure-injective objects. Then the inclusion $\mathcal C\to\mathcal D$ is a partial completion. ◻*
Pure-injectivity is a useful homological condition but in practice hard to check. In particular, there is no obvious triangulated structure on pure-injectives. We provide a criterion for a strong form of pure-injectivity which is an analogue of artinianess for modules.
Let $X\in\mathcal T$ and $C\in\mathcal T^c$. A *subgroup of finite definition* is a subgroup of $\operatorname{Hom}(C,X)$ that equals the image of an induced map $\operatorname{Hom}(D,X)\to\operatorname{Hom}(C,X)$ given by a morphism $C\to D$ in $\mathcal T^c$. Note that any subgroup of finite definition of $\operatorname{Hom}(C,X)$ is an $\operatorname{End}(X)$-submodule.
**Lemma 1**. *The subgroups of finite definition of $\operatorname{Hom}(C,X)$ are closed under finite sums and intersections. Thus they form a lattice.*
*Proof.* Let $U_i$ be the image of $\operatorname{Hom}(D_i,X)\to\operatorname{Hom}(C,X)$ given by a morphism $C\to D_i$ in $\mathcal T^c$ ($i=1,2$). Then $U_1+U_2$ equals the image of the map induced by $C\to D_1\oplus D_2$. Now complete this to an exact triangle $C\to D_1\oplus D_2\to E\to$. Then $U_1\cap U_2$ equals the image of the map induced by $C\to D_1\to E$. ◻
We say that an object $X\in\mathcal T$ satisfies *dcc on subgroups of finite definition* if for each compact object $C$ any chain of subgroups of finite definition $$\cdots \subseteq U_2 \subseteq U_1 \subseteq U_0=\operatorname{Hom}(C,X)$$ stabilises.
The following result goes back to Crawley-Boevey [@CB1994 3.5], who proved this for locally finitely presented additive categories; see also [@Kr2022 Theorem 12.3.4]. The proof is quite involved; the basic idea is to translate the descending chain condition into a noetherianess condition for some appropriate Grothendieck category (a localisation of $\operatorname{Mod}\mathcal C$ cogenerated by $h_X$).
**Proposition 1**. *For an object $X\in\mathcal T$ the following are equivalent.*
1. *$X$ is $\Sigma$-pure-injective, i.e. any coproduct of copies of $X$ is pure-injective.*
2. *$X$ satisfies dcc on subgroups of finite definition.*
3. *Every product of copies of $X$ decomposes into a coproduct of indecomposable objects with local endomorphism rings.*
*Proof.* The category $\operatorname{Coh}\mathcal C$ is locally finitely presented and has products; so Crawley-Boevey's theory of purity can be applied. In particular, $X$ is pure-injective in $\mathcal T$ if and only if $h_X$ is pure-injective in $\operatorname{Coh}\mathcal C$, by Theorem 1 in [@CB1994 3.5] and Proposition [Proposition 1](#pr:pure-inj){reference-type="ref" reference="pr:pure-inj"}. Also, $h_X$ satisfies dcc on subgroups of finite definition in $\operatorname{Coh}\mathcal C$ if and only if $X$ satisfies dcc on subgroups of finite definition in $\mathcal T$, as $\operatorname{Hom}(C,X)\cong\operatorname{Hom}(h_C,h_X)$ for each compact object $C$. Now the assertion follows from Theorem 2 in [@CB1994 3.5]. ◻
Let $R$ be a commutative ring and suppose that $\mathcal C$ is *$R$-linear*. This means that there is a ring homomorphism $R\to Z(\mathcal C)$ into the *centre* of $\mathcal C$ (the ring of natural transformations $\operatorname{id}_\mathcal C\to\operatorname{id}_\mathcal C$). In particular, for each pair of objects $X,Y$ the group of morphisms $\operatorname{Hom}(X,Y)$ is naturally an $R$-module.
We view $\mathcal C$ as a full subcategory of $\operatorname{Coh}\mathcal C$ and call an object $X$ in $\operatorname{Coh}\mathcal C$ or $\mathcal T$ *artinian* over $R$ if $\operatorname{Hom}(C,X)$ is an artinian $R$-module for all $C\in\mathcal C$ (via the canonical homomorphism $R\to\operatorname{End}(C)$). Let $\operatorname{art}_R\mathcal C$ denote the full subcategory of $R$-artinian objects in $\operatorname{Coh}\mathcal C$, and let $\operatorname{art}_R\mathcal T$ denote the full subcategory of $R$-artinian objects in $\mathcal T$. We drop the subscript $R$ when $R=Z(\mathcal C)$.
**Corollary 1**. *The restricted Yoneda functor induces an equivalence $\operatorname{art}_R\mathcal T\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname{art}_R\mathcal C$. In particular, the category $\operatorname{art}_R\mathcal C$ admits a canonical triangulated structure that is induced from that of $\mathcal T$.*
*Proof.* The restricted Yoneda functor induces for $X\in\mathcal T$ and $C\in\mathcal C$ a bijection $$\operatorname{Hom}(C,X)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname{Hom}(h_C,h_X).$$ Thus $X$ is $R$-artinian if and only if $h_X$ is $R$-artinian. Now suppose that $X\in\operatorname{Coh}\mathcal C$ is $R$-artinian. One has for $X\in\operatorname{Coh}\mathcal C$ and $C\in\mathcal C$ the analogous concept of a subgroup of finite definition of $\operatorname{Hom}(C,X)$, and then artinianess over $R$ implies dcc on subgroups of finite definition. Thus it follows from Theorem 2 in [@CB1994 3.5] that $X$ is pure-injective. The exact structure on $\operatorname{Coh}\mathcal C$ agrees with the pure-exact structure. Thus $X$ is an injective object and therefore of the form $h_{\bar X}$ for a pure-injective object $\bar X$ in $\mathcal T$ by Proposition [Proposition 1](#pr:pure-inj){reference-type="ref" reference="pr:pure-inj"}. Clearly, $\bar X$ is $R$-artinian.
Any $R$-artinian object $X\in\mathcal T$ is pure-injective by Proposition [Proposition 1](#pr:dcc){reference-type="ref" reference="pr:dcc"}. Thus Proposition [Proposition 1](#pr:pure-inj){reference-type="ref" reference="pr:pure-inj"} yields a bijection $$\operatorname{Hom}(X',X)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname{Hom}(h_{X'},h_X)$$ for any pair $X,X'$ of $R$-artinian objects in $\mathcal T$.
The $R$-artinian objects in $\mathcal T$ form a thick subcategory. Thus transport of structure provides a triangulated structure for $\operatorname{art}_R\mathcal C$. ◻
**Corollary 1**. *Let $\mathcal D\subseteq\mathcal T$ be a triangulated subcategory containing all compact objects and consisting of $R$-artinian objects. Then the inclusion $\mathcal C\to\mathcal D$ is a partial completion.*
*Proof.* The assertion is an immediate consequence of Corollary [Corollary 1](#co:pure-inj){reference-type="ref" reference="co:pure-inj"} since each object in $\mathcal D$ is $\Sigma$-pure-injective thanks to Proposition [Proposition 1](#pr:dcc){reference-type="ref" reference="pr:dcc"}. ◻
We conclude with a couple of remarks. The first one addresses possible triangulated structures for the ind-completion of an essentially small triangulated category.
*Remark 1*. There is a notion of a *locally finite* triangulated category; see [@Kr2012; @XZ2005]. One way of defining this is that all cohomological functors into abelian groups (covariant or contravariant) are coproducts of direct summands of representable functors. An equivalent condition is that every short exact sequence of cohomological functors does split. Examples are the stable module category $\operatorname{stmod}A$ when $A$ is a self-injective algebra of finite representation type, or the derived category $\mathbf D^b(\operatorname{mod}A)$ of a hereditary algebra of finite representation type. Then we have equivalences $\operatorname{Ind}(\operatorname{stmod}A)\simeq\operatorname{StMod}A$ and $\operatorname{Ind}(\mathbf D^b(\operatorname{mod}A))\simeq \mathbf D(\operatorname{Mod}A)$. In particular, the ind-completions carry a triangulated structure; they are compactly generated and each pure-exact triangle splits.
Now suppose that $\mathcal C$ is an essentially small triangulated category that is not locally finite. For example, let $\mathcal C=\operatorname{stmod}A$ when $A$ is a finite dimensional self-injective algebra of infinite representation type. Passing to $\mathcal C^\mathrm{op}$ if necessary this means that not all objects in $\operatorname{Ind}\mathcal C$ are projective. So we find an exact sequence $0\to X\to Y\to Z\to 0$ which does not split. On the other hand, if $\operatorname{Ind}\mathcal C$ admits a triangulated structure, then each kernel-cokernel pair needs to split. It follows that $\operatorname{Ind}\mathcal C$ does not admit a triangulated structure.
*Remark 1*. The suspension $\Sigma\colon\mathcal T\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\mathcal T$ induces a $\mathbb Z$-grading of $\mathcal T$. Let $Z^*(\mathcal T)$ denote the *graded centre*. This is a graded commutative ring and any ring homomorphism $R\to Z^*(\mathcal T)$ from a graded commutative ring $R$ induces an $R$-linear structure on the graded morphisms $$\operatorname{Hom}^*(X,Y):= \bigoplus_{i\in\mathbb Z}\operatorname{Hom}(X,\Sigma^i Y)
\quad \text{for}\quad
X,Y\in\mathcal T.$$ In particular, the notion of an $R$-artinian object extends to the graded setting. All statements and their proofs remain valid in this generality.
# Torsion versus completion {#se:t-vs-c}
For any compactly generated triangulated category and any choice of compact objects generating a localising subcategory, there is an adjoint pair of functors that resembles derived torsion and completion functors for the derived category of a commutative ring.
Let $\mathcal T$ be a compactly generated triangulated category with suspension $\Sigma\colon\mathcal T\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\mathcal T$ and set $\mathcal C=\mathcal T^c$. We choose a thick subcategory $\mathcal C_0\subseteq\mathcal C$ and denote by $\mathcal T_0\subseteq\mathcal T$ the localising subcategory which is generated by $\mathcal C_0$. Note that $(\mathcal T_0)^c=\mathcal C_0$. The inclusion $\mathcal T_0\to\mathcal T$ admits a right adjoint, by Brown's representability theorem, which we denote by $q\colon\mathcal T\to\mathcal T_0$. This functor preserves coproducts and then another application of Brown's representability theorem yields a right adjoint $q_\rho$ which is fully faithful. Thus the left adjoint $q_\lambda$ and the right adjoint $q_\rho$ provide two embeddings of $\mathcal T_0$ into $\mathcal T$, and our notation suggests a symmetry which does not give preference to any of the inclusions. $$\begin{tikzcd}[column sep = huge]
{\mathcal T}
\arrow[twoheadrightarrow]{r}[description]{q} &\mathcal T_0
\arrow[tail,swap,yshift=1.5ex]{l}{q_\lambda}\arrow[tail,yshift=-1.5ex]{l}{q_\rho}
\end{tikzcd}$$
**Definition 1**. The choice of $\mathcal C_0\subseteq\mathcal C$ yields exact functors $$\Gamma:= q_\lambda\circ q \qquad\text{and}\qquad \Lambda:= q_\rho\circ
q$$ which form an adjoint pair. For any object $X$ in $\mathcal T$ the unit $X\to\Lambda X$ is called *completion* and the counit $\Gamma X\to X$ is called *torsion* or *local cohomology* of $X$.
Note that these functors are idempotent, since $\operatorname{id}_{\mathcal T_0}\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}q\circ q_\lambda$ and $q\circ q_\rho\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname{id}_{\mathcal T_0}$. From the definitions it is clear that the adjoint pair $(\Gamma,\Lambda)$ induces mutually inverse equivalences $$\label{eq:DG}
\begin{tikzcd}[column sep = huge]
\Lambda\mathcal T\arrow[yshift=.75ex]{r}{\Gamma} &{\Gamma\mathcal T}
\arrow[yshift=-.75ex]{l}{\Lambda}
\end{tikzcd}$$ where $\Lambda\mathcal T=\{X\in\mathcal T\mid X\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\Lambda X\}$ and $\Gamma\mathcal T=\{X\in\mathcal T\mid \Gamma
X\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}X\}$.
We are interested in the completions of the compact objects from $\mathcal T$, and we may view these as objects of $\Gamma\mathcal T$, because of the above equivalence [\[eq:DG\]](#eq:DG){reference-type="eqref" reference="eq:DG"}. Thus it is equivalent to look at the local cohomology of the compact objects from $\mathcal T$. Set $$\widehat\mathcal C:=\operatorname{thick}(\Gamma\mathcal C)\simeq\operatorname{thick}(\Lambda\mathcal C)\subseteq\Lambda\mathcal T.$$ Then we obtain the following chain of inclusions. $$\mathcal C_0=(\Gamma\mathcal T)^c\subseteq\widehat\mathcal C\subseteq\Gamma\mathcal T\subseteq\mathcal T$$ The next diagram shows how these various subcategories of $\mathcal T$ are related. The inclusion $\mathcal T_0\to\mathcal T$ admits the right adjoint $\Gamma$, and we may think of its restriction $\mathcal C\to\widehat\mathcal C$ as a mock right adjoint of the inclusion $\mathcal C_0\to\mathcal C$. $$\begin{tikzcd}
\mathcal C_0 \arrow[tail]{d}\arrow[tail]{r}&\widehat\mathcal C\arrow[tail]{r}&\Gamma\mathcal T\arrow[tail, xshift=-.75ex]{d}\\
\mathcal C\arrow[tail]{rr} \arrow{ur}&&\mathcal T
\arrow[twoheadrightarrow,xshift=.75ex,swap,"\Gamma"]{u}
\end{tikzcd}$$
**Problem 1**. Find a description of $\widehat\mathcal C$ for given triangulated categories $\mathcal C_0\subseteq\mathcal C$.
Let us get back to the distinction between ind-completion and adic completion (cf. Remark [Remark 1](#re:ind-adic){reference-type="ref" reference="re:ind-adic"}). This carries over to our triangulated setting; it means that we can approach $\widehat\mathcal C$ from two directions, using either the inclusion $\mathcal C_0\to\widehat\mathcal C$ or the functor $\mathcal C\to\widehat\mathcal C$.
We illustrate this with an example which explains the terminology; it goes back to work of Dwyer, Greenlees, and May [@DG2002; @GM1992]. Let $A$ be a commutative ring. We set $\mathcal T=\mathbf D(\operatorname{Mod}A)$ and identify $\mathbf D^b(\operatorname{proj}A)=\mathcal T^c=\mathcal C$ (the category of perfect complexes). Recall for an ideal $I\subseteq A$ and any $A$-module $M$ the definition of *$I$-torsion* $$M\longmapsto \operatorname*{colim}_{n\ge 0}(\operatorname{Hom}_A(A/I^n,M))\subseteq M$$ and *$I$-adic completion* $$M\longmapsto\lim_{n\ge 0} (M\otimes_A A/I^n).$$
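In the simplest case $A=\mathbb Z$ and $I=(p)$, the $I$-torsion functor picks out the $p$-power-torsion part of a module. The following brute-force sketch (my own illustration, not from the text) computes it for a finite abelian group presented by its cyclic factor orders:

```python
# Illustrative sketch: for A = Z and I = (p), the I-torsion
# colim_n Hom(Z/p^n, M) of a finite abelian group M is its
# p-power-torsion part; for cyclic factors this is the p-part of each order.

def p_torsion_part(factors, p):
    """Cyclic factor orders of the p-power torsion of Z/m_1 + ... + Z/m_k."""
    result = []
    for m in factors:
        q = 1
        while m % p == 0:
            m //= p
            q *= p
        if q > 1:
            result.append(q)
    return result

# M = Z/12 + Z/5: the 2-torsion part is Z/4 and the 3-torsion part is Z/3.
print(p_torsion_part([12, 5], 2))  # [4]
print(p_torsion_part([12, 5], 3))  # [3]
```

Derived functors enter only for modules that are not $I$-torsion-free or not complete in a well-behaved way; for finite groups the underived picture above already tells the whole story.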
**Example 1** ([@DG2002]). Fix a finitely generated ideal $I\subseteq A$ and let $\mathcal C_0$ denote the category of perfect complexes having $I$-torsion cohomology. Then $\mathcal T_0$ equals the category of all complexes in $\mathcal T$ having $I$-torsion cohomology. The functor $\Gamma$ equals the local cohomology functor (i.e. the right derived functor of $I$-torsion), while $\Lambda$ equals the derived completion functor (i.e. the left derived functor of $I$-adic completion). Moreover, $\widehat\mathcal C$ is triangle equivalent to $\mathbf D^b(\operatorname{proj}\widehat A)$.
*Proof.* Let $K$ denote the *Koszul complex* given by a finite sequence of generators $x_1,\ldots,x_n$ of $I$. This is the object $K=K_n$ in $\mathbf D^b(\operatorname{proj}A)$ that is obtained by setting $K_0=A$ and $K_{r}=\operatorname{Cone}(K_{r-1}\xrightarrow{x_{r}}K_{r-1})$ for $r=1,\ldots,n$. We view $K$ as a dg left module over the dg endomorphism ring $E=\operatorname{\mathcal{E}\!\!\;\mathit{nd}}_A(K)$ and we view $K^\vee=\operatorname{\mathcal{H}\!\!\;\mathit{om}}_A(K,A)$ as a dg right module over $E$. Let $\mathbf D(E)$ denote the derived category of the category of dg right $E$-modules. Then we obtain the following diagram $$\begin{tikzcd}[column sep = huge]
{\mathcal T}=\mathbf D(A) \arrow{r}[description]{q} &\mathbf D(E)
\arrow[swap,yshift=1.5ex]{l}{q_\lambda}\arrow[yshift=-1.5ex]{l}{q_\rho}
\end{tikzcd}$$ where $$q=\operatorname{\mathcal{H}\!\!\;\mathit{om}}_A(K,-)=-\otimes_A K^\vee\qquad q_\lambda=-\otimes_E K\qquad
q_\rho=\operatorname{\mathcal{H}\!\!\;\mathit{om}}_E(K^\vee,-).$$ In [@DG2002 §6] it is shown that $q_\lambda$ identifies $\mathbf D(E)$ with the category of all complexes in $\mathbf D(A)$ having $I$-torsion cohomology, while $q_\rho$ identifies $\mathbf D(E)$ with the category of all complexes in $\mathbf D(A)$ that are $I$-complete. Moreover, it is shown that $\Gamma= q_\lambda\circ q$ computes local cohomology, while $\Lambda= q_\rho\circ q$ yields derived completion. Next we compute the graded endomorphisms of $q(A)=K^\vee$ and have by adjunction $$\operatorname{Hom}_E^*(K^\vee,K^\vee)\cong\operatorname{Hom}^*_A(A,\Lambda A)\cong
H^*(\Lambda A)\cong \widehat A.$$ It follows that $$\widehat\mathcal C=\operatorname{thick}(K^\vee)\simeq\mathbf D^b(\operatorname{proj}\widehat A).\qedhere$$ ◻
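For a single generator ($n=1$) the Koszul complex is just the two-term complex $K\colon A\xrightarrow{x}A$, with $H^0(K)=\operatorname{ann}_A(x)$ and $H^1(K)=A/xA$. A minimal brute-force sketch (my own toy, assuming $A=\mathbb Z/N$) makes both cohomologies computable:

```python
# Illustrative toy: the length-one Koszul complex K: A --x--> A over A = Z/N,
# placed in degrees 0 and 1.  Then H^0(K) = ann_A(x) and H^1(K) = A/xA;
# both are found by brute force over the finite ring.

def koszul_cohomology(N, x):
    """Return (|H^0(K)|, |H^1(K)|) for the Koszul complex of x on Z/N."""
    A = range(N)
    kernel = [a for a in A if (x * a) % N == 0]  # H^0 = ann(x)
    image = {(x * a) % N for a in A}
    coker_size = N // len(image)                 # |H^1| = |A/xA|
    return len(kernel), coker_size

# A = Z/8, x = 2: ann(2) = {0, 4} and A/2A = Z/2.
print(koszul_cohomology(8, 2))  # (2, 2)
```

When $x$ is a unit both cohomologies vanish (each group has one element), matching the fact that the Koszul complex then generates nothing new; when $x$ is a zerodivisor the complex detects torsion, which is what makes it the right tool for cutting out $I$-torsion cohomology.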
We have a more specific description of $\widehat\mathcal C$ when $A$ is local; it uses the tensor triangulated structure of the derived category of a commutative ring.
**Example 1** ([@BIKP2023 Theorem 4.1]). Let $A$ be a commutative noetherian local ring and $\mathfrak{m}$ its maximal ideal. Then the derived category $\mathcal T=\mathbf D(\operatorname{Mod}A)$ is a tensor triangulated category. Let $\mathcal C_0$ denote the category of perfect complexes having $\mathfrak{m}$-torsion cohomology, which is a thick tensor ideal of $\mathcal T^c$. Then $\widehat\mathcal C$ equals the subcategory of dualisable (or rigid) objects in the tensor triangulated category $\mathcal T_0$.
Let us provide a criterion for the functor $\mathcal C_0\to\widehat\mathcal C$ to be a partial completion of triangulated categories; it covers the previous example of a local ring with $\mathcal C_0$ the category of perfect complexes having torsion cohomology. Our motivation is the following. If $\mathcal C_0\to\widehat\mathcal C$ is a partial completion, then the functor $\mathcal C\to\widehat\mathcal C$ taking $X$ to $\Gamma X$ induces a functor $$\operatorname{Coh}\mathcal C_0\supseteq\{\operatorname{Hom}(-,X)|_{\mathcal C_0}\mid X\in\mathcal C\}\xrightarrow{\ \gamma\ }\widehat\mathcal C$$ such that
1. $\gamma$ is fully faithful and almost an equivalence, up to the fact that $\Gamma\mathcal C\subseteq
\Gamma\mathcal T$ need not be a thick subcategory, and
2. the category $\{\operatorname{Hom}(-,X)|_{\mathcal C_0}\mid X\in\mathcal C\}$ is explicitly given by $\mathcal C_0\subseteq\mathcal C$.
**Proposition 1**. *Let $R$ be a commutative ring and suppose that $\mathcal C$ is $R$-linear. Suppose also that $\operatorname{Hom}(X,Y)$ is a finitely generated $R$-module for all objects $X,Y$ in $\mathcal C$ and that $\operatorname{End}(X)$ has finite length for each $X$ in $\mathcal C_0$. Then the inclusion $\mathcal C_0\to\widehat\mathcal C$ is a partial completion.*
*Proof.* The assumption implies that for each $X\in\widehat\mathcal C$ and $C\in\mathcal C_0$ the $R$-module $\operatorname{Hom}(C,X)$ has finite length. Thus we can apply Corollary [Corollary 1](#co:artinian){reference-type="ref" reference="co:artinian"}. ◻
We continue with examples. Let $A$ be a right coherent ring and denote by $\operatorname{Inj}A$ the category of all injective $A$-modules. Then the category of complexes $\mathbf K(\operatorname{Inj}A)$ (with morphisms the chain morphisms up to homotopy) is compactly generated, and taking a module to its injective resolution identifies $\mathbf D^b(\operatorname{mod}A)$ with the full subcategory of compact objects; see [@Kr2022 Proposition 9.3.12] for the noetherian case and [@Kr2015 Theorem 4.9] for the general case.
The following example should be compared with Example [Example 1](#ex:coherent){reference-type="ref" reference="ex:coherent"}.
**Example 1**. Let $A$ be a right coherent ring. We consider $\mathcal T=\mathbf K(\operatorname{Inj}A)$ and choose $\mathcal C_0=\mathbf D^b(\operatorname{proj}A)$. Then it is easily checked that $\mathcal C\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\widehat\mathcal C$.
The next example is more challenging.
**Example 1** (BG-conjecture). Let $k$ be a field and $G$ a finite group. We consider the group algebra $kG$ and set $\mathcal T=\mathbf K(\operatorname{Inj}kG)$. Let $\mathbf ik$ denote an injective resolution of the trivial representation. Then the assignment $X\mapsto X\otimes_k \mathbf ik$ identifies $\mathcal C=\mathbf D^b(\operatorname{mod}kG)$ with $\mathcal T^c$. We choose $\mathcal C_0=\operatorname{thick}(k)$ and note that $\mathcal T_0$ identifies with the derived category of dg modules over the algebra $C^*(BG;k)$ (the cochains of the classifying space $BG$ with coefficients in $k$) [@BK2008]. For $X\in\mathcal T$ we set $$H^*(G,X)=\operatorname{Hom}^*(\mathbf ik,X)=\bigoplus_{n\in\mathbb Z}\operatorname{Hom}(\mathbf ik,\Sigma^n
X)$$ and view this as a module over the cohomology algebra $H^*(G,k)$. It is well known that this algebra is noetherian and that $H^*(G,X)$ is a noetherian module for any compact object $X$. A recent conjecture of Benson and Greenlees asserts that $\widehat\mathcal C$ equals the category of objects in $\mathcal T_0$ such that $H^*(G,X)$ is a noetherian module over $H^*(G,k)$; see [@BG2023 Conjecture 1.4]. Note that $\mathcal C_0\to\widehat\mathcal C$ is a partial completion.
We offer another challenge.
**Example 1** (Strong generation conjecture). Let $A$ be a commutative noetherian local ring. It is well known that $A$ is regular if and only if the triangulated category $\mathbf D^b(\operatorname{proj}A)$ admits a strong generator (in the sense of Bondal and Van den Bergh [@BV2003]). On the other hand, $A$ is regular if and only if its completion $\widehat A$ is regular. Keeping in mind Example [Example 1](#ex:BIKP){reference-type="ref" reference="ex:BIKP"}, one may conjecture: If $\mathcal C$ admits a strong generator, then $\widehat\mathcal C$ admits a strong generator.
# Completing complexes of finite length modules
We return to a previous example and now give a full proof of the following.
**Proposition 1** ([@Kr2020 Example 4.2]). *For a commutative noetherian ring $A$ the inclusion $$\mathbf D^b(\operatorname{fl}A)\longrightarrow\mathbf D^b(\operatorname{art}A)$$ is a sequential partial completion.*
*Proof.* Recall that an $A$-module $M$ is artinian if and only if $M$ is the union of finite length modules and $\operatorname{soc}M$ has finite length [@Kr2022 Proposition 2.4.20]. In particular, the abelian category $\operatorname{art}A$ has enough injective objects.
We write $\operatorname{Mod}_0 A$ for the full subcategory of $A$-modules that are filtered colimits of finite length modules. Thus the inclusion $\operatorname{fl}A\to\operatorname{Mod}A$ induces an equivalence $\operatorname{Ind}\operatorname{fl}A\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname{Mod}_0 A$. Set $$\operatorname{Inj}_0 A= \operatorname{Inj}A \cap\operatorname{Mod}_0 A\qquad\text{and}\qquad \operatorname{inj}_0
A=\operatorname{Inj}A\cap\operatorname{art}A.$$ Note that an injective $A$-module is in $\operatorname{Mod}_0 A$ if and only if each indecomposable direct summand is artinian. Now consider the compactly generated triangulated category $\mathcal T=\mathbf K(\operatorname{Inj}_0 A)$ given by the complexes in $\operatorname{Inj}_0 A$. Then we have canonical triangle equivalences $$\mathbf D^b(\operatorname{fl}A)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\mathcal T^c \qquad\text{and}\qquad \mathbf D^b(\operatorname{art}
A)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\mathbf K^{+,b}(\operatorname{inj}_0 A)\subseteq \mathcal T;$$ see Corollary 4.2.9 and Proposition 9.3.12 in [@Kr2022]. Next observe that for $X\in \mathbf D^b(\operatorname{fl}A)$ and $Y \in\mathbf D^b(\operatorname{art}A)$ the $A$-module $\operatorname{Hom}(X,Y)$ has finite length. This amounts to showing that $\operatorname{Ext}^i(M,N)$ has finite length for all $M\in\operatorname{fl}A$, $N\in\operatorname{art}A$, and $i\in\mathbb Z$, which reduces to the case $i=0$ by taking an injective resolution of $N$. Thus we can apply Corollary [Corollary 1](#co:artinian){reference-type="ref" reference="co:artinian"} and it follows that $\mathbf D^b(\operatorname{fl}A)\to \mathbf D^b(\operatorname{art}A)$ is a partial completion. It remains to observe that each complex $X$ in $\mathbf D^b(\operatorname{art}A)$ is the colimit of the sequence $(\operatorname{soc}^n X)_{n\ge 0}$ in $\mathbf D^b(\operatorname{fl}A)$, but this need not be a Cauchy sequence. ◻
When the ring $A$ is local and regular, the completion $\mathbf D^b(\operatorname{fl}
A)\to \mathbf D^b(\operatorname{art}A)$ identifies with $\mathcal C_0\to\widehat\mathcal C$ in Example [Example 1](#ex:BIKP){reference-type="ref" reference="ex:BIKP"}.
# Completing perfect complexes
We return to another example and give a full proof of the following.
**Proposition 1** ([@Kr2020 Theorem 6.2]). *For a right coherent ring $A$ the inclusion $$\mathbf D^b(\operatorname{proj}A)\longrightarrow\mathbf D^b(\operatorname{mod}A)$$ is a Cauchy sequential partial completion.*
The proof requires several lemmas which are of independent interest. Let $\mathcal T$ be a triangulated category with suspension $\Sigma\colon\mathcal T\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\mathcal T$ and suppose that countable coproducts exist in $\mathcal T$.
**Definition 1**. A *homotopy colimit* of a sequence of morphisms $$\begin{tikzcd}
X_0\arrow{r}{\phi_0}&X_1\arrow{r}{\phi_1}&
X_2\arrow{r}{\phi_2}&\cdots
\end{tikzcd}$$ in $\mathcal T$ is an object $X$ that occurs in an exact triangle $$\begin{tikzcd}
\Sigma^{-1}X\arrow{r}&\coprod_{n\ge
0}X_n\arrow{r}{\operatorname{id}-\phi}&\coprod_{n\ge 0}X_n\arrow{r}& X.
\end{tikzcd}$$ We write $\operatorname*{hocolim}_n X_n$ for $X$ and observe that a homotopy colimit is unique up to a (non-unique) isomorphism.
Recall that an object $C$ in $\mathcal T$ is *compact* if $\operatorname{Hom}(C,-)$ preserves all coproducts. A morphism $X\to Y$ is *phantom* if any composition $C\to X\to Y$ with $C$ compact is zero. The phantom morphisms form an ideal and we write $\operatorname{Ph}(X,Y)$ for the subgroup of all phantoms in $\operatorname{Hom}(X,Y)$.
Let us compute the functor $\operatorname{Hom}(-, \operatorname*{hocolim}_n X_n)$. To this end observe that a sequence $$\begin{tikzcd}
A_0\arrow{r}{\phi_0}&A_1\arrow{r}{\phi_1}&
A_2\arrow{r}{\phi_2}&\cdots
\end{tikzcd}$$ of maps between abelian groups induces an exact sequence $$\begin{tikzcd}
0\arrow{r}&\coprod_{n\ge
0}A_n\arrow{r}{\operatorname{id}-\phi}&\coprod_{n\ge 0}A_n\arrow{r}& \operatorname*{colim}_n A_n\arrow{r}&0
\end{tikzcd}$$ because it identifies with the colimit of the exact sequences $$\begin{tikzcd}
0\arrow{r}&\coprod_{i=0}^{n-1}A_i\arrow{r}{\operatorname{id}-\phi}&\coprod_{i=0}^{n}A_i\arrow{r}& A_n\arrow{r}&0.
\end{tikzcd}$$
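The exactness of this truncated sequence can be verified by a direct matrix computation in small cases. The following sketch is our own illustration (the dimensions and the maps $\phi_i$ are arbitrary choices, not taken from the text): it builds the map $\operatorname{id}-\phi$ and the quotient map onto $A_n$ for finite-dimensional rational vector spaces and checks exactness by rank counting.

```python
import numpy as np

rng = np.random.default_rng(0)
dims = [2, 3, 3, 2]                                  # dimensions of A_0, ..., A_3
phi = [rng.integers(-3, 4, size=(dims[i + 1], dims[i])).astype(float)
       for i in range(3)]                            # phi_i : A_i -> A_{i+1}

n = 3
off = np.cumsum([0] + dims)                          # slot offsets inside the direct sums
D, E = off[n], off[n + 1]                            # dims of domain and codomain of id - phi

# (id - phi) sends a_i (slot i) to a_i in slot i minus phi_i(a_i) in slot i + 1.
M = np.zeros((E, D))
for i in range(n):
    M[off[i]:off[i + 1], off[i]:off[i + 1]] = np.eye(dims[i])
    M[off[i + 1]:off[i + 2], off[i]:off[i + 1]] = -phi[i]

# The quotient map onto A_n pushes each a_i forward via phi_{n-1} o ... o phi_i and sums.
P = np.zeros((dims[n], E))
for i in range(n + 1):
    comp = np.eye(dims[i])
    for j in range(i, n):
        comp = phi[j] @ comp
    P[:, off[i]:off[i + 1]] = comp

assert np.allclose(P @ M, 0)                         # P o (id - phi) = 0
assert np.linalg.matrix_rank(M) == D                 # id - phi is injective
assert np.linalg.matrix_rank(P) == dims[n]           # P is surjective
assert D + dims[n] == E                              # ranks add up, so the sequence is exact
print("0 -> A_0 + ... + A_{n-1} -> A_0 + ... + A_n -> A_n -> 0 is exact")
```

Note that injectivity of $\operatorname{id}-\phi$ and the vanishing $P\circ(\operatorname{id}-\phi)=0$ hold for any choice of the $\phi_i$, not just generic ones, since the block matrix of $\operatorname{id}-\phi$ is unitriangular.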
**Lemma 1**. *Let $C\in\mathcal T$ be compact. Then any sequence $X_0\to X_1\to X_2\to\cdots$ in $\mathcal T$ induces an isomorphism $$\operatorname*{colim}_n\operatorname{Hom}(C,X_n)\xrightarrow{\ \raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}\ }\operatorname{Hom}(C,\operatorname*{hocolim}_n X_n).$$*
*Proof.* The above observation gives an exact sequence $$\begin{tikzcd}[column sep=small]
0\arrow{r}&
\coprod_{n}\operatorname{Hom}(C,X_n)\arrow{r}&\coprod_{n}\operatorname{Hom}(C,X_n)\arrow{r}&
\operatorname*{colim}_n\operatorname{Hom}(C,X_n)\arrow{r}&0.
\end{tikzcd}$$ Now apply $\operatorname{Hom}(C,-)$ to the defining triangle for $\operatorname*{hocolim}_n X_n$. Comparing both sequences yields the assertion, since $$\coprod_n\operatorname{Hom}(C,X_n)\cong \operatorname{Hom}(C, \coprod_n X_n).\qedhere$$ ◻
Recall that for any sequence $\cdots\to A_2\xrightarrow{\phi_2} A_1\xrightarrow{\phi_1} A_0$ of maps between abelian groups the inverse limit and its first derived functor are given by the exact sequence $$\begin{tikzcd}
0\longrightarrow\lim_nA_n\longrightarrow\prod_{n\ge 0}A_n\arrow{r}{\operatorname{id}-\phi}&\prod_{n\ge 0}A_n\arrow{r}&
\lim^1_nA_n\arrow{r}& 0.
\end{tikzcd}$$ Note that $\lim^1_nA_n=0$ when $A_n\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}A_{n+1}$ for $n\gg 0$.
The following result goes back to work of Milnor [@Mi1962] and has been extended by several authors.
**Lemma 1**. *Let $X=\operatorname*{hocolim}_n X_n$ be a homotopy colimit in $\mathcal T$ such that each $X_n$ is a coproduct of compact objects. Then we have for any $Y$ in $\mathcal T$ a natural exact sequence $$0\longrightarrow\operatorname{Ph}(X,Y)\longrightarrow\operatorname{Hom}(X,Y)\longrightarrow\lim_n\operatorname{Hom}(X_n,Y)\longrightarrow 0$$ and an isomorphism $$\operatorname{Ph}(X,\Sigma Y)\cong \sideset{}{^{1}}\lim_n\operatorname{Hom}(X_n,Y).$$*
*Proof.* Apply $\operatorname{Hom}(-,Y)$ to the exact triangle defining $\operatorname*{hocolim}_n X_n$ and use that a morphism $X\to Y$ is phantom if and only if it factors through the canonical morphism $X\to\coprod_{n\ge 0}\Sigma X_n$. ◻
Let $\mathcal C\subseteq\mathcal T$ be a full triangulated subcategory consisting of compact objects and consider the restricted Yoneda functor $$\mathcal T\longrightarrow\operatorname{Coh}\mathcal C,\quad X\mapsto h_X:=\operatorname{Hom}(-,X)|_\mathcal C.$$ This functor induces for each pair of objects $X,Y\in\mathcal T$ a map $$\operatorname{Hom}(X,Y)\longrightarrow\operatorname{Hom}(h_X,h_Y).$$ Clearly, this map is bijective when $X$ is in $\mathcal C$, and it remains bijective when $X$ is a coproduct of objects in $\mathcal C$.
**Lemma 1**. *Let $X=\operatorname*{hocolim}_n X_n$ be a homotopy colimit in $\mathcal T$ such that each $X_n$ is a coproduct of objects in $\mathcal C$. Then we have for any $Y$ in $\mathcal T$ a natural isomorphism $$\operatorname{Hom}(X,Y)/\operatorname{Ph}(X,Y)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\operatorname{Hom}(h_X,h_Y).$$*
*Proof.* We have $$\begin{aligned}
\operatorname{Hom}(X,Y)/\operatorname{Ph}(X,Y)&\cong\lim_n\operatorname{Hom}(X_n,Y)\\
&\cong\lim_n\operatorname{Hom}(h_{X_n},h_{Y})\\
&\cong\operatorname{Hom}(\operatorname*{colim}_nh_{X_n},h_{Y})\\
&\cong\operatorname{Hom}(h_X,h_Y).
\end{aligned}$$ The first isomorphism follows from Lemma [Lemma 1](#le:phantom){reference-type="ref" reference="le:phantom"}, the second uses that each $X_n$ is a coproduct of objects in $\mathcal C$, the third is clear, and the last follows from Lemma [Lemma 1](#le:hocolim){reference-type="ref" reference="le:hocolim"}. ◻
*Proof of Proposition [Proposition 1](#pr:perfect){reference-type="ref" reference="pr:perfect"}.* Set $\mathcal P=\operatorname{proj}A$. The inclusion $\operatorname{proj}A\to\operatorname{mod}A$ induces a triangle equivalence $$\mathbf K^{-,b}(\operatorname{proj}A)\xrightarrow{\ \raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}\ }\mathbf D^b(\operatorname{mod}A).$$ Thus we may identify $\mathbf D^b(\operatorname{proj}A)\to\mathbf D^b(\operatorname{mod}A)$ with the inclusion $$\mathbf K^b(\mathcal P)\longrightarrow\mathbf K^{-,b}(\mathcal P).$$ We think of these as subcategories of $\mathbf K(\operatorname{Proj}A)$, where $\operatorname{Proj}A$ denotes the category of all projective $A$-modules. In particular $\mathbf K(\operatorname{Proj}A)$ has arbitrary coproducts and all objects from $\mathbf K^b(\mathcal P)$ are compact.
For any complex $X$ we consider the sequence of truncations $$\cdots\longrightarrow\sigma_{\ge n+1}X\longrightarrow\sigma_{\ge n}X\longrightarrow\sigma_{\ge n-1}X\longrightarrow\cdots$$ given by $$\begin{tikzcd}
\sigma_{\ge n}X\arrow{d}& \cdots \arrow{r}&
0\arrow{r}{}\arrow{d}&0\arrow{r}{}\arrow{d}&
X^{n}\arrow{r}\arrow{d}{\operatorname{id}}&X^{n+1}\arrow{r}\arrow{d}{\operatorname{id}}&\cdots \\
X&\cdots\arrow{r}&
X^{n-2}\arrow{r}{}&X^{n-1}\arrow{r}{}&X^n\arrow{r}&X^{n+1}\arrow{r}&\cdots .
\end{tikzcd}$$ For $X$ in $\mathbf K^{-,b}(\mathcal P)$ and $n\in\mathbb Z$ we set $X_n= \sigma_{\ge -n}X$. This yields a Cauchy sequence $$X_0\longrightarrow X_1\longrightarrow X_2\longrightarrow\cdots$$ in $\mathbf K^b(\mathcal P)$ with $\operatorname*{hocolim}_{n\ge 0}X_n\cong X$.
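To make the Cauchy property of the truncations concrete, here is a toy computation; the complex is our own choice ($X^i=\mathbb Q^2$ for $i\le 0$, with the same square-zero differential in every degree, so that the cohomology of $X$ is bounded). It checks that in each fixed degree the cohomology of $\sigma_{\ge m}X$ stabilises as $m\to-\infty$.

```python
import numpy as np

d = np.array([[0.0, 1.0], [0.0, 0.0]])   # the same differential in every degree
assert np.allclose(d @ d, 0)             # square-zero, so we do have a complex
rk = np.linalg.matrix_rank(d)            # rank of the differential (= 1 here)

def H(m, i):
    """dim H^i of the truncation sigma_{>= m} of X = (... -> Q^2 -d-> Q^2 -> 0),
    a complex concentrated in degrees <= 0."""
    if i < m or i > 0:
        return 0
    ker = 2 if i == 0 else 2 - rk        # degree 0 has no outgoing differential
    im = 0 if i == m else rk             # degree m of the truncation has no incoming one
    return ker - im

# In each fixed degree i, the cohomology of sigma_{>= m} X stabilises as m -> -infinity,
# reflecting that the truncations form a Cauchy sequence:
for i in range(-5, 1):
    values = {H(m, i) for m in range(-10, i)}
    assert len(values) == 1              # constant once m < i
assert H(-10, 0) == 1 and H(-10, -3) == 0   # H^0(X) = Q, interior cohomology vanishes
assert H(-3, -3) == 1                    # the transient term sits only in the cut-off degree
print("cohomology of the truncations stabilises degreewise")
```

The transient class in degree $m$ corresponds to the cone $C_m$ of $\sigma_{\ge m+1}X\to\sigma_{\ge m}X$ being concentrated in a single degree, exactly as used in the proof below.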
We claim that the restricted Yoneda functor $$\mathbf K^{-,b}(\mathcal P)\longrightarrow\operatorname{Coh}\mathbf K^b(\mathcal P),\quad
X\mapsto h_X:=\operatorname{Hom}(-,X)|_{\mathbf K^b(\mathcal P)},$$ is fully faithful. Let $X,Y$ be objects in $\mathbf K^{-,b}(\mathcal P)$. As before we write $X$ as homotopy colimit of its truncations $X_n=\sigma_{\ge -n}X$ and denote by $C_n$ the cone of $X_{n-1}\to X_{n}$. This complex is concentrated in degree $-n$; so $\operatorname{Hom}(C_n,Y)=0$ for $n\gg 0$. Thus $X_{n}\to X_{n+1}$ induces a bijection $$\operatorname{Hom}(X_{n+1},Y)\xrightarrow{\ \raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}\ }\operatorname{Hom}(X_{n},Y) \quad\text{for}\quad n\gg
0.$$ This implies $$\operatorname{Hom}(X,Y)\xrightarrow{\ \raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}\ }\lim_n\operatorname{Hom}(X_n,Y)$$ and therefore $\operatorname{Ph}(X,Y)=0$ by Lemma [Lemma 1](#le:phantom){reference-type="ref" reference="le:phantom"}. From Lemma [Lemma 1](#le:yoneda){reference-type="ref" reference="le:yoneda"} we conclude that $$\operatorname{Hom}(X,Y)\xrightarrow{\ \raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}\ }\operatorname{Hom}(h_X,h_Y).\qedhere$$ ◻
From the proof of Proposition [Proposition 1](#pr:perfect){reference-type="ref" reference="pr:perfect"} we learn that each complex in $\mathbf D^b(\operatorname{mod}A)$ is not only a filtered colimit of perfect complexes; it is actually the colimit of a Cauchy sequence which is obtained from its truncations. In particular, we are in the situation that a homotopy colimit is an honest colimit, and therefore unique up to a unique isomorphism.
# Completion using enhancements
While the ind-completion of a category is a fairly explicit construction, it is not immediately clear how to deal with additional structure. In particular, there is no obvious triangulated structure for $\operatorname{Ind}\mathcal C$ when $\mathcal C$ is triangulated. One way to address this problem is the use of enhancements.
Recall that a triangulated category is *algebraic* if it is triangle equivalent to the stable category $\operatorname{St}\mathcal A$ of a Frobenius category $\mathcal A$. A morphism between exact triangles $$\begin{tikzcd}
X\arrow{r}\arrow{d}&Y\arrow{r}\arrow{d}&
Z\arrow{r}\arrow{d}&\Sigma
X\arrow{d}\\
X'\arrow{r}&Y'\arrow{r}&Z'\arrow{r}&\Sigma X'
\end{tikzcd}$$ in $\operatorname{St}\mathcal A$ will be called *coherent* if it can be lifted to a morphism $$\begin{tikzcd}
0\arrow{r}&\tilde X\arrow{r}\arrow{d}& \tilde
Y\arrow{r}\arrow{d}&
\tilde Z\arrow{r}\arrow{d}&0\\
0\arrow{r}&\tilde X'\arrow{r}&\tilde
Y'\arrow{r}&\tilde Z'\arrow{r}&0
\end{tikzcd}$$ between exact sequences in $\mathcal A$ so that the canonical functor $\mathcal A\to\operatorname{St}\mathcal A$ maps the second to the first diagram.
**Definition 1**. Let $\mathcal C$ be a triangulated category and $\mathcal X$ a class of sequences in $\mathcal C$. We say that $\mathcal X$ is *phantomless* if for any pair of sequences $X,Y$ in $\mathcal X$ we have $$\label{eq:ph-less}
\sideset{}{^{1}}\lim_i\operatorname*{colim}_j\operatorname{Hom}(X_i,Y_j)=0.$$
This definition is consistent with our previous discussion of phantom morphisms in the following sense. Let $\mathcal C\subseteq\mathcal T$ be a triangulated subcategory consisting of compact objects such that $\mathcal T$ admits countable coproducts. Suppose that $\mathcal X$ is *stable under suspensions*, i.e. $(X_i)_{i\ge 0}$ in $\mathcal X$ implies $(\Sigma^nX_i)_{i\ge 0}$ in $\mathcal X$ for all $n\in\mathbb Z$. Then $\mathcal X$ is phantomless if and only if $$\operatorname{Ph}(\operatorname*{hocolim}_i X_i, \operatorname*{hocolim}_j Y_j)=0$$ for all $X,Y$ in $\mathcal X$. This follows from Lemmas [Lemma 1](#le:hocolim){reference-type="ref" reference="le:hocolim"} and [Lemma 1](#le:phantom){reference-type="ref" reference="le:phantom"}.
**Proposition 1** ([@Kr2020 Theorem 4.7]). *Let $\mathcal C$ be an algebraic triangulated category, viewed as a full subcategory of its sequential completion $\operatorname{Ind}_\mathbb N\mathcal C$. Let $\mathcal X$ be a class of sequences in $\mathcal C$ that is phantomless, closed under suspensions, and closed under the formation of cones. Then the full subcategory $\operatorname{Ind}_\mathcal X\mathcal C\subseteq\operatorname{Ind}_\mathbb N\mathcal C$ given by the colimits of sequences in $\mathcal X$ admits a unique triangulated structure such that the exact triangles are precisely the ones isomorphic to colimits of sequences that are given by coherent morphisms of exact triangles in $\mathcal C$.*
Let us spell out the triangulated structure for $\operatorname{Ind}_\mathcal X\mathcal C$. Fix a sequence of coherent morphisms $\eta_0\to\eta_1\to\eta_2\to\cdots$ of exact triangles $$\eta_i\colon X_i\longrightarrow Y_i\longrightarrow Z_i\longrightarrow\Sigma X_i$$ in $\mathcal C$ and suppose that it is also a sequence of morphisms $X\to Y\to Z\to \Sigma X$ of sequences in $\mathcal X$. This identifies with the sequence $$\operatorname*{colim}_i X_i \longrightarrow\operatorname*{colim}_i Y_i\longrightarrow\operatorname*{colim}_i Z_i\longrightarrow\operatorname*{colim}_i\Sigma X_i$$ in $\operatorname{Ind}_\mathcal X\mathcal C$, and the exact triangles in $\operatorname{Ind}_\mathcal X\mathcal C$ are precisely sequences of morphisms that are isomorphic to sequences of the above form.
*Proof of Proposition [Proposition 1](#pr:tria-completion){reference-type="ref" reference="pr:tria-completion"}.* We use the enhancement as follows. Suppose that $\mathcal C=\operatorname{St}\mathcal A$ for some Frobenius category $\mathcal A$. We denote by $\mathcal C\tilde{\phantom{e}}$ the stable category $\operatorname{St}\mathcal A\tilde{\phantom{e}}$ of the countable envelope of $\mathcal A$; see Example [Example 1](#ex:countable-envelope){reference-type="ref" reference="ex:countable-envelope"}. This is a triangulated category with countable coproducts and $\mathcal C$ identifies with a full subcategory of compact objects.
Given sequences $X,Y$ in $\mathcal X$ we set $\bar X=\operatorname*{hocolim}_i X_i$ and $\bar Y=\operatorname*{hocolim}_j Y_j$ in $\mathcal C\tilde{\phantom{e}}$. Using that $\mathcal X$ is phantomless we compute $$\lim_i\operatorname*{colim}_j\operatorname{Hom}(X_i,Y_j)\cong\operatorname{Hom}(h_X,h_Y)\cong\operatorname{Hom}(\bar X,\bar
Y)$$ where the first isomorphism follows from Lemma [Lemma 1](#le:hocolim){reference-type="ref" reference="le:hocolim"} and the second from Lemma [Lemma 1](#le:yoneda){reference-type="ref" reference="le:yoneda"}. Thus taking a sequence in $\mathcal X$ to its homotopy colimit in $\mathcal C\tilde{\phantom{e}}$ provides a fully faithful functor $$\operatorname*{hocolim}\colon\operatorname{Ind}_\mathcal X\mathcal C\longrightarrow\mathcal C\tilde{\phantom{e}}.$$ It then remains to compare the triangulated structures on both sides, which turn out to be equivalent by construction. ◻
The above result admits a substantial generalisation, from algebraic triangulated categories to triangulated categories with a *morphic enhancement* in the sense of Keller [@Kr2020 Appendix C]. Moreover, in some interesting cases the morphic enhancement extends to a morphic enhancement of the completion.
**Example 1**. For a right coherent ring $A$ let $\mathcal X$ denote the class of Cauchy sequences $(X_i)_{i\ge 0}$ in $\mathbf D^b(\operatorname{proj}A)$ such that $\operatorname*{colim}_i H^n(X_i)=0$ for $|n|\gg 0$. Then $\mathcal X$ is phantomless and we have a triangle equivalence $$\operatorname{Ind}_\mathcal X\mathbf D^b(\operatorname{proj}A)\xrightarrow{\raisebox{-.4ex}[0ex][0ex]{$\scriptstyle{\sim}$}}\mathbf D^b(\operatorname{mod}A).$$
We end these notes with a couple of references that complement our approach towards the completion of triangulated categories. The work of Neeman offers an intriguing approach that uses metrics on triangulated categories, thereby avoiding the use of any enhancements [@Ne2018; @Ne2020]. On the other hand, there is Lurie's approach via stable $\infty$-categories [@Lu2017 §1]; it uses a notion of enhancement that is far more sophisticated than the one presented in these notes.
D. J. Benson and J. P. C. Greenlees, Modules with finitely generated cohomology, and singularities of $C^{*}BG$, arXiv:2305.08580, 2023.
D. J. Benson, S. B. Iyengar, H. Krause, and J. Pevtsova, Local dualisable objects in local algebra, arXiv:2302.08562, 2023.
D. J. Benson and H. Krause, Complexes of injective $kG$-modules, Algebra Number Theory **2** (2008), no. 1, 1--30.
A. I. Bondal and M. Van den Bergh, Generators and representability of functors in commutative and noncommutative geometry, Mosc. Math. J. **3** (2003), no. 1, 1--36, 258.
G. Cantor, Ueber die Ausdehnung eines Satzes aus der Theorie der trigonometrischen Reihen, Math. Ann. **5** (1872), no. 1, 123--132.
W. Crawley-Boevey, Locally finitely presented additive categories, Comm. Algebra **22** (1994), no. 5, 1641--1674.
W. G. Dwyer and J. P. C. Greenlees, Complete modules and torsion modules, Amer. J. Math. **124** (2002), no. 1, 199--220.
J. P. C. Greenlees and J. P. May, Derived functors of $I$-adic completion and local homology, J. Algebra **149** (1992), no. 2, 438--453.
A. Grothendieck and J. L. Verdier, Préfaisceaux, in *SGA 4, Théorie des Topos et Cohomologie Etale des Schémas, Tome 1. Théorie des Topos*, 1--184, Lecture Notes in Math., 269, Springer, Heidelberg, 1972.
B. Keller, Chain complexes and stable categories, Manuscripta Math. **67** (1990), 379--417.
H. Krause, Smashing subcategories and the telescope conjecture---an algebraic approach, Invent. Math. **139** (2000), no. 1, 99--133.
H. Krause, Report on locally finite triangulated categories, J. K-Theory **9** (2012), no. 3, 421--458.
H. Krause, Deriving Auslander's formula, Doc. Math. **20** (2015), 669--688.
H. Krause, Completing perfect complexes, Math. Z. **296** (2020), no. 3-4, 1387--1427.
H. Krause, *Homological theory of representations*, Cambridge Studies in Advanced Mathematics, 195, Cambridge University Press, Cambridge, 2022.
H. Lenzing, Homological transfer from finitely presented to infinite modules, in *Abelian group theory (Honolulu, Hawaii, 1983)*, 734--761, Lecture Notes in Math., 1006, Springer, Berlin, 1983.
J. Lurie, Higher Algebra, 2017.
C. Méray, Remarques sur la nature des quantités définies par la condition de servir de limites à des variables données, Revue des Sociétés savantes, Sci. Math. phys. nat. (2) **4** (1869), 280--289.
J. Milnor, On axiomatic homology theory, Pacific J. Math. **12** (1962), 337--341.
A. Neeman, The categories ${\mathcal T}^c$ and ${\mathcal T}^b_c$ determine each other, arXiv:1806.06471, 2018.
A. Neeman, Metrics on triangulated categories, J. Pure Appl. Algebra **224** (2020), no. 4, 106206, 13 pp.
J. Xiao and B. Zhu, Locally finite triangulated categories, J. Algebra **290** (2005), no. 2, 473--490.
---
abstract: |
We prove a local support theorem for the exponential Radon transform for functions of exponential decay at infinity. We also show that our decay condition is essentially sharp for the classical Radon transform with hyperbolic type domains as holes, by exhibiting stretched exponential counterexamples. This highlights a difference from the support theorems for compact domains, where the decay only has to be faster than any polynomial. It also gives a refinement for non-compact domains, where a support theorem was previously proved only for functions with compact support. Our method is a variant of [@Strichartz] and [@BomanUniqueness].
author:
- "Enikő Dinnyés${}^{*,\\#}$"
- "Tibor Ódor${}^{**,\\#}$"
title: Local support theorem for the exponential Radon transform
---
# [\[sec:intro\]]{#sec:intro label="sec:intro"} Introduction
Exterior problems are a special type of region-of-interest tomography, where the internal part of the projections is unavailable. This is a version of truncated projections and falls into the category of limited-angle problems. Unique reconstruction is not always possible (as we approach the internal domain that must be avoided, the angle between possible scans becomes more and more limited).
Our support theorems address this class of problems. We measure some version of the Radon transform outside an internal domain, and if we find it to be zero everywhere, we want to conclude that the original function is zero outside the given domain.
In this article we discuss the case when we only have the data yielded by a version of the exponential Radon transform along lines not intersecting a convex but not necessarily compact (not necessarily bounded) closed set $K$. Furthermore, we do not require the density function of interest to have compact support, but only exponential decay.
If the hole $K$ is convex and compact, and we know the integrals over lines avoiding $K$, the support theorem is known for $f$ decaying faster than any polynomial (for each integer $k>0$, $|x|^k\,f(x)$ is bounded) [@Helgason]. On functions this is a weaker restriction than the exponential decay in Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"}, but $K$ has to be compact, unlike in our theorem. On the other hand, if $K$ is non-compact but closed, a similar support theorem was proved for the classical Radon transform for functions of squared exponential decay, e.g. in [@Odor]. For functions with compact support there is a general support theorem for generalized analytic Radon transforms [@BomanQuinto].
For hyperbolic type domains (containing two half-lines with the same endpoint), our results in Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"} are essentially sharp: there is a counter-example with decay $\exp(-\mu \cdot x^{1-\epsilon})$ for any $\mu > 0$ and $0< \epsilon < 1$. For parabolic domains (all directed half-lines in $K$ are parallel) we only have a counter-example with decay $\exp(-\lambda \cdot x^{1/2-\epsilon})$. It is not known whether this counter-example is essentially sharp or whether there is one with faster decay. Note that both hyperbolic and parabolic domains are assumed to be part of an angle, the intersection of two half-planes with different normal vectors.
These results show that there is a trade-off between decay conditions and geometric properties of the hole $K$.
Let $K\subset\mathbb R^2$ be a closed convex set. We say that the function $f\colon K^c\to \mathbb R$ has *exponential decay* if there exists a $\mu > 0$ such that for every $\varepsilon > 0$ the integral $\int_{K^c_\varepsilon} |f(x)|\, e^{\mu |x|}\, dx$ is finite, where $K^c_\varepsilon$ is the set of those points in $\mathbb R^2$ whose distance from $K$ is not less than $\varepsilon$.
The *exponential Radon transform* with parameter $\lambda$ is defined as $$R_\lambda f(\omega,p) = \int_{\langle x,\omega \rangle=p} e^{-\lambda\langle x,\omega^\perp\rangle} f(x)dx = \int_{-\infty}^\infty e^{-\lambda u} f(p\omega+u\omega^\perp)\,du,$$ along the line $l(\omega, p) := \left\{ x \in\mathbb{R}^2 : \langle x,\omega\rangle = p \right\}$. Here $dx$ is the Lebesgue measure on the line $\langle x, \omega\rangle = p$, and $\omega^\perp$ is $(\cos \omega, \sin\omega)^\perp = (\cos (\omega + \frac{\pi}{2}), \sin(\omega + \frac{\pi}{2})) = (-\sin\omega, \cos\omega)$. With $\lambda=0$ we get the classical Radon transform of $f$.
Note that the line $l(\omega, p)$ is the same set as $l(-\omega, -p)$, but carries the opposite orientation. Also note that, although defined on the same set, the integral $(R_\lambda f)(\omega, p)$ usually differs from $(R_\lambda f)(-\omega, -p)$, unlike in the special case of the classical Radon transform, where $Rf(\omega,p)=Rf(-\omega,-p)$.
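This orientation asymmetry is easy to observe numerically. In the sketch below, the shifted Gaussian test density, its centre $c$, and all parameters are our own choices (not from the text): the two orientations of the same line give equal classical transforms, while the exponential transforms differ by the factor $e^{-2\lambda\langle c,\omega^\perp\rangle}$, which one can verify by completing the square in the $u$-integral.

```python
import numpy as np

c = np.array([0.3, 0.7])                              # centre of the test density
f = lambda x: np.exp(-np.sum((x - c) ** 2, axis=-1))  # f(x) = exp(-|x - c|^2)

def R(lam, alpha, p, n=40001, U=20.0):
    """Numerical R_lambda f(omega, p) for omega = (cos alpha, sin alpha)."""
    w = np.array([np.cos(alpha), np.sin(alpha)])
    wp = np.array([-np.sin(alpha), np.cos(alpha)])    # omega-perp
    u = np.linspace(-U, U, n)
    vals = np.exp(-lam * u) * f(p * w + u[:, None] * wp)
    h = u[1] - u[0]
    return h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule

alpha, p, lam = 0.4, 1.1, 0.8
b = c @ np.array([-np.sin(alpha), np.cos(alpha)])     # b = <c, omega-perp>

# Classical transform (lambda = 0) is orientation independent:
assert np.isclose(R(0.0, alpha, p), R(0.0, alpha + np.pi, -p))
# The exponential transform is not; the orientations differ by exp(-2*lam*b):
assert np.isclose(R(lam, alpha, p), np.exp(-2 * lam * b) * R(lam, alpha + np.pi, -p))
print("R_0 is orientation independent, R_lambda is not")
```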
Our goal is to prove a support theorem for the exponential Radon transform for a class of functions less restrictive than that of exponential decay. We first state it in the following simple form, with its proof in the next section.
**Theorem 1**. *Let $K\subset\mathbb R^2$ be a (not necessarily bounded) closed convex set not containing a whole line, and $f\colon K^c\to \mathbb R$ be a locally integrable function of exponential decay. Assume that the exponential Radon transform $R_\lambda f(\omega, p) = 0$ along every line $l(\omega, p)$ not intersecting $K$. Then $f = 0$ in $L^1_{loc}(K^c)$.*
We will prove this theorem as a consequence of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}.
# [\[sec:motiv\]]{#sec:motiv label="sec:motiv"} Motivation: counterexamples for stretched exponential decay
The stretched exponential function $e^{-\lambda z^{\beta}}$, for $\lambda > 0$ and $0 < \beta < 1$, is obtained by inserting a fractional power into the exponential function.
The following two counter-examples work for classical Radon-transform, that is for $\lambda = 0$.
For a parabolic convex closed set $K$, in which every oriented half-line is parallel, a simple example is $$f(z)=e^{-z^{1/2-\epsilon}}, \quad z\in\mathbb C\;,$$ where $0< \epsilon < 1/2$ is arbitrary. This is a complex analytic function apart from the half-line $$\mathbb R_-=\{z: \mathrm{Im}z=0, \mathrm{Re}z\le 0\}\;;$$ it is important that the real part of $z^{1/2-\epsilon}$ is positive, because its argument is a bit less than half the original one, and $|f(z)|=e^{-\mathrm{Re}(z^{1/2-\epsilon})}$ tends to zero like $e^{-\theta_\epsilon \cdot r^{1/2-\epsilon}}$ for some $\theta_\epsilon > 0$ as $r = |z|\rightarrow\infty$ while $z$ moves along a straight line, e.g. if we put $z=az_0$ with $z_0\in\mathbb{C}$, $a\in\mathbb{R}$, $a>0$ and $a\rightarrow\infty$.
Since $f(z)$ is complex analytic on $\mathbb C\setminus\mathbb R_-$, its complex integral along any closed curve not intersecting $\mathbb R_-$ is zero. Fix a straight line not intersecting $\mathbb R_-$, fix a point on it, and draw a half circle centred at this point on the side of the line that does not contain $\mathbb R_-$. Let the closed curve consist of this half circle together with its diameter along the straight line. The complex integral of $e^{-z^{1/2-\epsilon}}$ along this curve is zero. Now take the limit as the radius tends to infinity. The complex integral along the straight line is a constant multiple of a combination of two real integrals, coming from the real part and the imaginary part separately; these tend to the Radon transforms of the real part and of the imaginary part respectively, while the norm of the integral along the half circle tends to zero. Hence the Radon transforms of the real functions $\mathrm{Re}(f)$ and $\mathrm{Im}(f)$ along any straight line not intersecting $\mathbb R_-$ must be zero.
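The vanishing predicted by this contour argument can be confirmed numerically. The following sketch uses our own parameter choices ($\epsilon=0.1$, i.e. exponent $\beta=0.4$, and the vertical line $\mathrm{Re}\,z=1$, which avoids $\mathbb R_-$); it evaluates $\int f(1+it)\,dt$ by the trapezoid rule, and both its real and imaginary parts, the two Radon-type integrals, come out as zero to high accuracy.

```python
import numpy as np

beta = 0.4                                 # exponent 1/2 - epsilon with epsilon = 0.1
t = np.linspace(-1e4, 1e4, 2_000_001)      # parametrise the line Re z = 1
z = 1.0 + 1j * t
f = np.exp(-z ** beta)                     # principal branch: cut exactly along R_-

# Trapezoid rule for int f(1 + i t) dt; the contour argument predicts the value 0,
# since int_line f dz = i * int f(1 + i t) dt vanishes.
h = t[1] - t[0]
integral = h * (f.sum() - 0.5 * (f[0] + f[-1]))

assert abs(integral.real) < 1e-6 and abs(integral.imag) < 1e-6
print("Radon transforms of Re f and Im f along the line Re z = 1 vanish numerically")
```

The truncation at $|t|=10^4$ is harmless here because $|f(1+it)|$ decays like $e^{-0.8\,|t|^{0.4}}$, which is below $10^{-13}$ at the endpoints.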
In fact, this example is applicable to any domain $K$ containing a half line.
The following example for hyperbolic domains works in a similar way.
Let $$f(z)=\exp(-z^{1-\varepsilon}), \quad z\in\mathbb C$$ for some $0<\varepsilon<1$ of our choice. The same technique is applicable where the real part of $z^{1-\varepsilon}$ is positive: in the concave angular domain $\left(-\frac{1}{(1-\varepsilon)}\frac{\pi}{2},\;\frac{1}{(1-\varepsilon)}\frac{\pi}{2}\right),$ and $K$ must contain all the rest.
When $\varepsilon$ is close to 0, the domain to be covered by $K$ is a wide angular domain, close to the half plane $\mathrm{Re}\,z > 0$. But by applying an affine transformation that maps a general $K$ containing an angular domain to the required one, this second example applies to any unbounded closed convex hyperbolic $K$.
Assume that $K\subset {\mathbb R^2}$ is closed, convex, $\mathrm{int}(K)\neq \emptyset$, unbounded and does not contain a full line. Then for every $x\in \partial K$ there is a half-line $h\subset K$ with endpoint $x$ whose relative interior lies in $\mathrm{int}(K)$. (Let $y\in \partial{K}$ and let $y\rightarrow\infty$ along the boundary of $K$. Then the limit of the segments $xy$ is a half-line $h$. If $h\subset \partial K$, move $y$ in the other direction along the boundary of $K$; in the limit we get a half-line $g$. If $g\subset \partial K$ as well, then $K$ is an angular domain, for which the statement is obviously true. Otherwise $h$ or $g$ is a suitable half-line; assume it is $h$.) Let $s$ be a support line of $K$ at $x$. Then apply an affine transform $A$ which maps $h$ to the positive half of the $x_1$ axis and $s$ to the $x_2$ axis.
Let $f$ be as before, but restricted to $AK^c$. Then $R_\lambda f(\omega,p)=0$ for every line $l(\omega, p)$ that avoids $AK$.
Let $f_A(x)=f(Ax)$. If $l(\omega, p)$ avoids $K$ then $Al(\omega,p)=:l(\omega^\star, p^\star)$ avoids $AK$, and vice versa.
If $R_\lambda f(\omega^\star,p^\star)=0$ then $R_\lambda f_A(\omega,p)=0$, and vice versa.
This yields a function whose exponential Radon transform $R_\lambda$ is zero on lines avoiding $K$ and which decays stretched exponentially at infinity.
By applying convolution, we can make it smooth, even analytic. The details are a standard computation, using that the transform $R_\lambda$ commutes with convolution; we leave them to the reader.
These counter-examples show that if $K$ is non-bounded then its behaviour is rather different from the compact case, where we have support theorems for functions decaying faster than any polynomial [@Helgason], [@Strichartz].
For functions with compact support similar theorems were known for a general $K$ [@BomanQuinto]. The novelty in this paper is extending the appropriate function class, and showing that less than exponential decay is not enough.
Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"} is a support theorem for non-bounded $K$.
For a general exponent $\lambda > 0$, the functions have to decay faster than $e^{-(\lambda t + \epsilon |t|)}$ for any fixed $0 < \epsilon < \lambda$. Note that in the exponent the $\lambda t$ term can be negative for negative $t$. Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"} and Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} show that exponential decay is enough even for non-bounded $K$, provided $K$ does not contain a whole line (a harmless condition: if $K$ contained a whole line, only lines parallel to it could avoid $K$).
In the case of the classical Radon transform (that is, $\lambda = 0$), there is still a gap in our knowledge for parabolic domains. This gap cannot be attacked directly by our technique.
**Question 2**. *What are the close to exact exponential type decay conditions for parabolic convex closed sets $K$?*
# [\[sec:exp\]]{#sec:exp label="sec:exp"} A local support theorem for the exponential Radon transform for functions of exponential decay
Consider the following conditions as local conditions.
Let $(\omega_0, p_0)\in \mathbb S^1\times \mathbb R$ be fixed, $p_0>0$. Let $\Omega \subset \mathbb S^1$ be an open subset of $\mathbb S^1$ containing $\omega_0$. Let $\mathcal L(\Omega, p_0)$ be the union of all lines $l(\omega, p)$, as point sets, such that $p > p_0$ and $\omega \in \Omega$.
The complement $K^c$ of any closed convex set $K$ can be written as a union of sets of this type, for suitable $\Omega$ and $p_0$, provided that the boundary of $K$ contains no complete straight line (that is, $K$ is neither a half plane nor a strip). For example, the complement of a parabola can be written as a union of countably many such sets, so in that case we need the statement below for infinitely many pairs $(\omega_0, p_0)$. However, the conditions on the function $f$ are much easier to write down for each $\mathcal L(\Omega, p_0)$ separately, which is why we use this description.
We define the space of functions $\mathcal E(\omega_0, \Omega, p_0)$ as follows. $f\in\mathcal E(\omega_0, \Omega, p_0)$ if and only if
a\) $f$ is continuously differentiable on $\mathcal L(\Omega, p_0)$;
b\) $f(x) =f(p\, \omega + u\, \omega^\perp)$ decays, locally uniformly in $p$ and uniformly in $\omega$, faster than every polynomial on every line $l(\omega, p)$ in $\mathcal L(\Omega, p_0)$; that is, for every $p$ and every $k\in \mathbb N$ there exist a neighbourhood $(p-\varepsilon_p, p+\varepsilon_p)$ and a constant $c_{pk}$, independent of $\omega$, such that for any $p'\in(p-\varepsilon_p, p+\varepsilon_p)$ $$\int_{-\infty}^\infty |u^k \, f(p'\, \omega + u\, \omega^\perp)|\, du < c_{pk} < \infty;$$
c\) $\partial_{\omega} f(x) =(\partial_{\omega} f)(p\, \omega + u\, \omega^\perp)$, where $\partial_{\omega}$ denotes the directional derivative in the direction $\omega$, decays, locally uniformly in $p$ and uniformly in $\omega$, faster than every polynomial on every line $l(\omega, p)$ in $\mathcal L(\Omega, p_0)$; that is, for every $p$ and every $k\in \mathbb N$ there exist a neighbourhood $(p-\varepsilon_p, p+\varepsilon_p)$ and a constant $C_{pk}$, independent of $\omega$, such that for any $p'\in(p-\varepsilon_p, p+\varepsilon_p)$ $$\int_{-\infty}^\infty |u^k \, (\partial_\omega f)(p'\, \omega + u\, \omega^\perp)|\, du < C_{pk} < \infty.$$
d\) for every $p> p_0$ we require that $f(p\, \omega_0 + u\, \omega^\perp_0)$ decays exponentially, that is, for some $\mu_p > 0$ and $K_p < \infty$ we have $$\int_{-\infty}^\infty\,|f(p\, \omega_0 + u\, \omega_0^\perp)|\, e^{\mu_p |u|}\,du < K_p < \infty$$ (note that in this condition we demand exponential decay only in the direction perpendicular to $\omega_0$, not for all $\omega\in\Omega$);
e\) we assume that there is a sequence $\left\{q_k\right\}_{k\in\mathbb N}, \; p_0 < q_k \leq +\infty$, such that for every $k\geq 0$ $$R^{(k)}_\lambda f(\omega_0, q_k):=\int_{-\infty}^\infty u^k e^{-\lambda u}\, f(q_k\omega_0 + u\omega_0^\perp) \,du = 0,$$ or, in a weaker form, that this integral exists, is finite and is known.
It is possible that $q_k$ is a finite constant ($=q_0$), implying that we know the function $f$ on the line $l(\omega_0, q_0)$. Or all $q_k$ can be $+\infty$. For a fixed $k$, $q_k=+\infty$ means that there is a sequence $q_{kj}\to +\infty$ such that $$\limsup_{j\rightarrow\infty}\left(\int_{-\infty}^\infty |u^k e^{-\lambda u}\, f(q_{kj}\, \omega_0 + u\, \omega_0^\perp)|\, du\right)< \infty$$ and $\lim_{j\rightarrow\infty}\left(\int_{-\infty}^\infty u^k e^{-\lambda u}\, f(q_{kj}\, \omega_0 + u\, \omega_0^\perp)\, du\right)$ exists and is known.
No monotonicity or any other restriction is imposed on the sequence $q_k$ or on the function $f$.
These conditions define a much wider class than the 'usual' class of functions with compact support, or even than the class of functions of exponential decay. This shows the flexibility of our technique.
*Example:* Condition $e)$ is fulfilled by the following function: $$f(x)=\exp(\langle\omega_0,x\rangle^2)\,\exp(-\langle\omega_0^\perp,x\rangle^2)\,\sin(\langle\omega_0,x\rangle^2).$$
For $\langle\omega_0,x\rangle^2=k\pi$ the term $\sin(\langle\omega_0,x\rangle^2)$ vanishes, so $q_{k}=\sqrt{k\pi}$ is suitable for every $k$ to fulfill condition $e)$.
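A numerical spot-check of this example (a Python sketch with SciPy; the direction $\omega_0=(1,0)$ and the value $\lambda=0.5$ are our illustrative choices):

```python
# Spot-check of condition e) for this example (illustrative choices:
# omega_0 = (1, 0), lambda = 0.5).  On the line x = q_k = sqrt(k*pi) the
# factor sin(x**2) vanishes, so every weighted integral is zero.
import numpy as np
from scipy.integrate import quad

lam = 0.5

def f(x, y):
    return np.exp(x**2) * np.exp(-y**2) * np.sin(x**2)

vals = []
for k in range(1, 5):
    q_k = np.sqrt(k * np.pi)
    val, _ = quad(lambda u: u**k * np.exp(-lam * u) * f(q_k, u),
                  -np.inf, np.inf)
    vals.append(val)
print(vals)   # all ~ 0
```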
**Theorem 3**. *Knowing the exponential Radon transform of a function $f$ belonging to the function class $\mathcal E(\omega_0, \Omega, p_0)$ defined on $\mathcal{L}(\Omega,p_0)$, the function $f$ is determined on $\mathcal{L}(\Omega,p_0)$.*
**Remark 4**. The system of conditions in Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} is mixed compared to polynomial decay: stronger in some directions, but weaker in others (no restriction on the decay at all).
*Proof of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}.* First, we write down an equality for the partial derivative of the exponential Radon transform $R_\lambda f(\omega, p)$ with respect to $p$. Knowing the exponential Radon transform along lines avoiding $K$ this derivative can be calculated.
*(First figure: two parallel lines $\langle x,\omega\rangle=p$ and $\langle x,\omega\rangle=p+\Delta p$; the vectors $p{\underline\omega}$ and $(\Delta p){\underline\omega}$ and the short arrows connecting corresponding points of the two lines are indicated.)*
Study the first figure for the partial derivative with respect to $p$.
$\langle x,\omega^\perp\rangle$ does not change along the arrows. The change of $f$ along the short arrows is $\Delta p\,\partial_{\omega} f(x)$, that we divide by $\Delta p$ when differentiating.
So the derivative with respect to $p$ is $$\partial_p R_\lambda f (\omega, p) = \int_{\langle x,\omega\rangle=p} e^{-\lambda\langle x,\omega^\perp\rangle}\partial_\omega f(x)\, dx,$$ where $\partial_\omega$ means directional derivative in the direction $\omega$.
Now let's find an equality for the partial derivative of $R_\lambda f(\omega, p)$ with respect to the angle $\omega$.
*(Second figure: the line $l(\omega,p)$ through $p{\underline\omega}$ and the line rotated by the angle $\Delta\omega$; the point $P= p{\underline\omega}+u{\underline\omega}^\perp$, the points $P'$ and $P''$, the coordinate $u$, the distance $\varepsilon$, and the length $(u+\varepsilon)\tan\Delta\omega$ of $PP'$ are indicated.)*
We consider the change of the integrand along the path $P \rightarrow P' \rightarrow P''$. From $P$ to $P''$, there is no change in $\langle x,\omega^\perp\rangle$, so there is no change in the term $e^{-\lambda\langle x,\omega^\perp\rangle}$. So we only have to deal with the change of $f$.
Let us study the second figure. Denote $\varepsilon = p\tan\frac{\Delta\omega}{2}$. Then the length of $PP'$ is $(u+\varepsilon)\tan(\Delta\omega)$, this will multiply $\partial_\omega f(x)$, the directional derivative of $f(x)$ in the direction $\omega$.
The length of $P'P''$ is $\frac{u+\varepsilon}{\cos\Delta\omega}+\varepsilon -u$, this will multiply $\partial_{\omega^\perp}f(x)$ approximately when $\Delta\omega$ is small.
To simplify these expressions, we can use their power series form: $$\tan \Delta\omega = \Delta\omega + o(\Delta\omega) \;\;\mathrm{and}\;\; \frac{1}{\cos \Delta\omega} = 1 + o(\Delta\omega),$$ so $$(u+\varepsilon)\tan(\Delta\omega) \approx (u+p\frac{\Delta\omega}{2})\,\Delta\omega =
u\Delta\omega + o(\Delta\omega)$$ and $$\frac{u+\varepsilon}{\cos\Delta\omega}+\varepsilon -u \approx
\frac{u+p\frac{\Delta\omega}{2}}{1} + p\frac{\Delta\omega}{2} - u = p\Delta\omega.$$
So the change of $f$ between $P$ and $P''$ is altogether $$u\Delta\omega\,\partial_\omega f(x) + p\Delta\omega\,\partial_{\omega^\perp}f(x),$$ where $u=\langle x,\omega^\perp\rangle$.
The variable of integration in the original integral is $u$, the variable of integration after rotating the line by $\Delta\omega$ is $u'=u/\cos\Delta\omega=u(1+o(\Delta\omega))\approx u$.
We divide the change of the integral by $\Delta\omega$ when differentiating with respect to $\omega$. So the derivative w.r.t. the angle $\omega$ is $$\partial_\omega R_\lambda f (\omega, p) = \int_{\langle\omega,x\rangle=p} e^{-\lambda\langle x,\omega^\perp\rangle} \,\left[ \langle x,\omega^\perp\rangle\partial_\omega f(x) \,+\, p\,\partial_{\omega^\perp} f(x) \right] \, dx;$$ and it can be calculated, knowing the exponential Radon transform along lines avoiding $K$.
Using integration by parts for the second term of this expression, $$\label{eq:intbyparts}
\begin{split}
\int_{\langle\omega,x\rangle=p} e^{-\lambda\langle x,\omega^\perp\rangle} \, p\,\partial_{\omega^\perp} f(x) \, dx = \int_{-\infty}^\infty e^{-\lambda u} \, p\,\partial_u \{f(p\,\omega + u\,\omega^\perp)\} \, du =\\
= \left[e^{-\lambda u} \, p\,f(p\,\omega + u\,\omega^\perp)\right]_{-\infty}^\infty - \int_{-\infty}^\infty e^{-\lambda u}(-\lambda) \, p\,f(p\,\omega + u\,\omega^\perp) \, du =\\
= p\lambda \int_{\langle\omega,x\rangle=p} e^{-\lambda\langle x,\omega^\perp\rangle} \, f(x) \, dx = p\lambda R_\lambda f(\omega,p).
\end{split}$$ This is a known quantity, so we will be able to subtract it from the derivative w.r.t. $\omega$. (Here $\left[e^{-\lambda u} \, p\,f(p\,\omega + u\omega^\perp)\right]_{-\infty}^\infty$ is zero because the exponential Radon transform is finite on $l(\omega,p)$.) So $$\partial_\omega R_\lambda f (\omega,p) =
\int_{\langle\omega,x\rangle=p} e^{-\lambda\langle x,\omega^\perp\rangle}\langle x,\omega^\perp\rangle\partial_\omega f(x) \, dx\; +\; p\lambda R_\lambda f(\omega,p).$$ This means, we can calculate $$\int_{\langle\omega,x\rangle=p} e^{-\lambda\langle x,\omega^\perp\rangle}\langle x,\omega^\perp\rangle\partial_\omega f(x) \,dx.$$
Then using a similar argument as the one for differentiating $R_\lambda f(\omega,p)$ w.r.t. $p$ (partial integration of $\partial_{\omega} f(x)$ w.r.t. $p=\langle\omega,x\rangle$ yields $f(x)$, and $\langle x,\omega^\perp\rangle$ does not depend on $p$), we can see that integrating the above expression w.r.t. $p$ (renamed to $q$, from $p$ to $q_1$), and substituting $\omega_0$ for $\omega$, yields $$\int_p^{q_1} \int_{\langle\omega_0,x\rangle=q} e^{-\lambda\langle x,\omega_0^\perp\rangle}\langle x,\omega_0^\perp\rangle\partial_\omega f(x) \,dx \; dq =$$ $$\int_{\langle\omega_0,x\rangle=q_1} e^{-\lambda\langle x,\omega_0^\perp\rangle}\langle x,\omega_0^\perp\rangle f(x) \,dx - \int_{\langle\omega_0,x\rangle=p} e^{-\lambda\langle x,\omega_0^\perp\rangle}\langle x,\omega_0^\perp\rangle f(x) \,dx,$$ where the first integral is given according to part e) of the system of conditions in $\mathcal{E}(\omega_0, \Omega, p_0)$, so the second integral can also be calculated for any $p>p_0$.
Now we are going to prove by induction that *the values* $$R_\lambda^{(k)}f(\omega_0,p) :=
\int_{\langle\omega_0,x\rangle=p} e^{-\lambda\langle x,\omega_0^\perp\rangle}(\langle x,\omega_0^\perp\rangle)^k \, f(x) \,dx$$ *are all determined by the values of the exponential Radon transform and are all finite, with the given system of conditions.*
Suppose we already know that the above statement holds for every $k$ with $0\le k\le K$, for some $K$ ($k, K\in\mathbb{N}$). For $k=0$ it is obvious, and we have just proved it for $k=1$, so we have it for $K=1$. Now we prove, for a fixed $K\ge 1$, that the statement is then true for $k=K+1$, hence for all $0\le k\le K+1$, too.
Having the statement for $K$, differentiate $R_\lambda^{(K)}f(\omega,p)$ w.r.t. the angle $\omega$. As before, we move along a path that keeps $\langle x,\omega^\perp\rangle$ fixed ($P\rightarrow P''$ in the second image). So the differentiation yields $$\begin{split}
\left.\partial_\omega R_\lambda^{(K)}f(\omega,p) \right|_{\omega=\omega_0}=\\
= \left.\int_{\langle x,\omega\rangle=p} e^{-\lambda\langle x,\omega^\perp\rangle}(\langle x,\omega^\perp\rangle)^K \cdot \left[\langle x,\omega^\perp\rangle\partial_\omega f(x) + \, p \, \partial_{\omega^\perp}f(x) \right] \,dx\right|_{\omega=\omega_0}= \\
=\int_{\langle x,\omega_0\rangle=p} e^{-\lambda\langle x,\omega_0^\perp\rangle}(\langle x,\omega_0^\perp\rangle)^{K+1}\partial_\omega f(x) \,dx \;+\\
- \; p \int_{\langle x,\omega_0\rangle=p} \left[ e^{-\lambda\langle x,\omega_0^\perp\rangle}(-\lambda)(\langle x,\omega_0^\perp\rangle)^K + e^{-\lambda\langle x,\omega_0^\perp\rangle} K(\langle x,\omega_0^\perp\rangle)^{K-1} \right] \, f(x) \,dx.
\end{split}$$ Here we used integration by parts as in equation ( [\[eq:intbyparts\]](#eq:intbyparts){reference-type="ref" reference="eq:intbyparts"}): we used the substitution $u=\langle x,\omega_0^\perp\rangle$ and that $\partial_{\omega^\perp}=\partial_u$, and we also needed that $R_\lambda^{(K)}f(\omega_0,p)$ is finite, which was included in the induction hypothesis.
In this last form, the first part of the second term is equal to $\lambda p R_\lambda^{(K)}f(\omega_0,p)$ and the second part is equal to $-Kp R_\lambda^{(K-1)}f(\omega_0,p)$. This means that the first term can be calculated as well.
Integrating the first term w.r.t. $p$ as we did for $k=1$ (renamed to $q$, from $p$ to $q_{K+1}$) yields $$\int_p^{q_{K+1}} \int_{\langle\omega_0,x\rangle=q} e^{-\lambda\langle x,\omega_0^\perp\rangle}(\langle x,\omega_0^\perp\rangle)^{K+1}\partial_\omega f(x) \,dx \; dq =$$ $$\int_{\langle\omega_0,x\rangle=q_{K+1}}e^{-\lambda\langle x,\omega_0^\perp\rangle}(\langle x,\omega_0^\perp\rangle)^{K+1} f(x) dx - \int_{\langle\omega_0,x\rangle=p} e^{-\lambda\langle x,\omega_0^\perp\rangle}(\langle x,\omega_0^\perp\rangle)^{K+1} f(x) dx$$ where the first integral is given according to part e) of the system of conditions in $\mathcal{E}(\omega_0, \Omega, p_0)$, so the second integral can also be calculated for any $p>p_0$. So we get that $R_\lambda^{(K+1)}f(\omega_0,p)$ can be calculated as well, along lines $l(\omega_0,p)$ avoiding the set $K$. The induction is completed.
Now we remind the reader of the following:
**Statement 5**. *If a function $f:{\mathbb R}\to {\mathbb C}$ decays exponentially and the integral of its product with every polynomial is 0, then the function is 0.*
This can be proved using the uniqueness of the two-sided Laplace transform, as follows. Although the statement is elementary and well known, we include the proof because we could not find a good reference.
There are counter-examples to this statement if the decay is only faster than any polynomial [@Helgason]. Our counter-examples for hyperbolic domains, together with the proof of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}, also yield counter-examples for this statement with decay $e^{-|x|^{1-\epsilon}}$ for any $0< \epsilon < 1$. We do not know of any other explicit counter-example.
*Proof:* We take the two-sided Laplace transform ${\mathcal L}f(s) = \int^\infty_{-\infty} f(x) e^{-sx}\, dx$ of $f$. Because $f$ is of exponential decay, that is, $|f(x)| \leq C e^{-\mu |x|}$ for a fixed $\mu > 0$ and some constant $C$, the two-sided Laplace transform ${\mathcal L}f(s)$ is defined for any $0< s < \mu$ and is complex analytic in the strip $0 < \mathrm{Re}\, z < \mu$.
Take the power series expansion of $e^{-sx}$ and interchange the summation and the integral: $${\mathcal L}f(s) =
\int^\infty_{-\infty} f(x)\Big[\sum_{n=0}^\infty \frac{(-1)^n}{n!}s^n x^n\Big]\, dx =
\sum_{n=0}^\infty \frac{(-1)^n}{n!}s^n \int^\infty_{-\infty} f(x)x^n\, dx.$$ Our condition is that $\int^\infty_{-\infty} f(x)x^n\, dx = 0$ for every $n \geq 0$, so the sum is 0, that is, ${\mathcal L}f(z) = 0$ on the strip $0 < \mathrm{Re}\, z < \mu$. According, for example, to [@Widder] Theorem 6b, this implies that $f \equiv 0$.
To complete the proof, we only have to show that interchanging the sum and the integral is valid. We estimate the following two terms independently: ${\mathcal L}f(s) = - \int^\infty_{0} f(-x) e^{sx}\, dx + \int^\infty_{0} f(x) e^{-sx}\, dx$.
First, we estimate the tail of the first term.
$$\begin{split}
\Big|\int^\infty_{0} f(-x)\Big( \sum_{n=0}^N \frac{(sx)^n}{n!}- e^{sx} \Big)\, dx \Big| \leq
C\int^\infty_{0} e^{-\mu x}\Big(e^{sx} - \sum_{n=0}^N \frac{(sx)^n}{n!}\Big)\, dx = \\
C\int^\infty_{0} e^{(s-\mu)x}\, dx - C\sum_{n=0}^N \int^\infty_{0} e^{-\mu x} \frac{(sx)^n}{n!}\, dx =
\frac{C}{\mu - s} - \frac{C}{\mu} \sum_{n=0}^N \frac{s^n}{\mu^n} = \\
\frac{C}{\mu}\Big( \frac{1}{1- \frac{s}{\mu}} - \sum_{n=0}^N \frac{s^n}{\mu^n} \Big).
\end{split}$$ This is just the tail of the power series expansion of $\frac{C}{\mu}\frac{1}{1- \frac{s}{\mu}}$, so it tends to 0 as $N\to\infty$. Hence the integral and the summation are interchangeable.
Now we estimate the tail of the second term. Here the series in the integrand is alternating, so its tail can be estimated by the first omitted, $(N+1)$st, term.
$$\begin{split}
\Big|\int^\infty_{0} f(x)\Big( \sum_{n=0}^N \frac{(-sx)^n}{n!}- e^{-sx} \Big)\, dx \Big| \leq
C\int^\infty_{0} e^{-\mu x} \Big| \sum_{n=0}^N \frac{(-sx)^n}{n!}- e^{-sx} \Big|\, dx \leq \\
C\int^\infty_{0} e^{-\mu x} \frac{(sx)^{N+1}}{(N+1)!}\, dx =
\frac{C}{\mu} \frac{s^{N+1}}{\mu^{N+1}}.
\end{split}$$ So it tends to 0 as $N\to\infty$, and the integral and the summation are interchangeable for this term too. $\square$
Observe that the statement above extends to any dimension. We can prove it via the classical Radon transform on hyperplanes, because $\int^{\infty}_{-\infty} Rf(\omega, p) p^k\,dp = \int_{{\mathbb R}^n} f(x) \langle x, \omega\rangle^k\,dx$. Of course, it can also be proved directly, but that is more technical.
At this point we need conditions a) - d) from the definition of the class $\mathcal{E}(\omega_0, \Omega, p_0)$. They ensure that if $R_\lambda^{(k)}f(\omega,p)$ is zero for all $k\in\mathbb{N}$ and all $\omega\in\Omega$, $p>p_0$, then $f\equiv 0$ on $\mathcal{L}(\Omega,p_0)$. This implies that if there are two functions $f_1$ and $f_2$ with the same value of the exponential Radon transform on lines avoiding the set $K$, then, by the linearity of the exponential Radon transform, $f_1-f_2\equiv 0$, so they are identical. $\square$
*Proof of Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"}.*
All the requirements of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}, apart from $a)$ and $c)$, are consequences of the exponential decay assumed in Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"}. Conditions $a)$ and $c)$ are not implied, because this simple decay condition ensures nothing about continuity and the derivatives. But a standard technique, convolving a function $f$ of this class with a smooth function $g$ of compact support, will help us show that Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"} still follows from Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}. Condition e) will still be valid after the convolution, with $q_k=+\infty$.
Suppose we have a function $f$ that is not zero as an element of $L^1_{loc}(K^c)$ and has exponential decay, but whose exponential Radon transform is zero along all straight lines in $K^c$ (this would be a counterexample to Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"}). We take its convolution with a $\mathcal{C}^\infty$ function $g$ such that ${\rm supp}\, g$ is compact, ${\rm supp}\, g \subset B_{\frac{\varepsilon}{2}}(0)$, and the integral of $g$ is 1. The convolution $f*g$ has exponential decay (inherited from $f$) and belongs to $\mathcal{C}^\infty$ (inherited from $g$).
Condition $e)$ is fulfilled by $f*g$ with $q_k=+\infty$, because $$\lim_{j\rightarrow\infty}\left(\int_{-\infty}^\infty u^k e^{-\lambda u}\, f(q_{kj}\, \omega_0 + u\, \omega_0^\perp)\, du\right)=0,$$ as a consequence of the exponential decay of $f$, and it is not spoiled by the convolution with $g$.
It is well known, and we will prove it a bit later, that $$R_\lambda(f*g)(\omega,p)=(R_\lambda g*R_\lambda f)(\omega,p),$$ a one-dimensional convolution in the second variable. So $R_\lambda f(\omega,p)=0$ over all straight lines in $K^c$ implies $R_\lambda(f*g)(\omega,p)=0$ over all straight lines in $K_{\varepsilon}^c$, where $K_\varepsilon$ is the $\varepsilon$-neighbourhood of $K$ and $\varepsilon$ is bigger than the diameter of the support of $g$.
But using the condition that $f$ is not identically zero in $L^1_{loc}(K^c)$, we can choose $g$ so that $f*g$ is not identically zero in $L^1_{loc}(K_{\varepsilon}^c)$, where $\varepsilon$ is bigger than the diameter of the support of $g$. (The convolution $f*g$ of a function $f$ in $L^1_{loc}$ that is non-zero on a set of positive measure, with a non-zero non-negative continuous function $g$ of compact support, is not identically zero if the support of $g$ is small enough.)
This leads to a contradiction with Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}, because $f*g$, restricted to $K_\varepsilon^c$, fulfils all the conditions of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}, including $c)$ about the decay of the partial derivatives. (We can think about $\varepsilon$ tending to zero, but it is not even necessary, we just have to choose it in such a way that $f*g$ is not zero.)
This way we proved that $R_\lambda f(\omega,p)=0$ over all straight lines in $K^c$ implies $f = 0$ in $L^1_{loc}(K^c)$.
For the proof of $R_\lambda(f*g)(\omega,p)=(R_\lambda g*R_\lambda f)(\omega,p)$, let us denote by $h$ the (two-dimensional) convolution of $f$ and $g$: $h(x)=\int_{\mathbb{R}^2}\,f(x-y)g(y)\,dy$. Then $$R_\lambda h(\omega,p) = \int_{\langle\omega,x\rangle=p}\,e^{-\lambda\langle\omega^\perp,x\rangle}\,h(x)\,dx=$$ $$=\int_{\langle\omega,x\rangle=p}\,e^{-\lambda\langle\omega^\perp,x\rangle}\,\int_{\mathbb{R}^2}\,f(x-y)g(y)\,dy\,dx=$$ $$=\int_{\langle\omega,x\rangle=p}\int_{\mathbb{R}^2}\,e^{-\lambda\langle\omega^\perp,x-y\rangle}e^{-\lambda\langle\omega^\perp,y\rangle}\,f(x-y)g(y)\,dy\,dx=$$ $$=\int_{\mathbb{R}^2}\,e^{-\lambda\langle\omega^\perp,y\rangle}\,g(y)\left\{\int_{\langle\omega,x-y\rangle=p-\langle\omega,y\rangle}\,f(x-y)\,e^{-\lambda\langle\omega^\perp,x-y\rangle}\,d(x-y)\right\}dy,$$ where $d(x-y)=dx$ for a fixed $y$. Introducing the notation $\langle\omega,y\rangle=q$, we can write the above expression as $$=\int_{\mathbb{R}^2}\,e^{-\lambda\langle\omega^\perp,y\rangle}\,g(y)\,R_\lambda f(\omega,p-q)\,dy,$$ where $q$ depends on $y$. Here $\omega$ and $p$ are fixed, so for a fixed $q$ the factor $R_\lambda f(\omega,p-q)$ is a constant. We integrate over $\mathbb{R}^2$, which can be decomposed into lines perpendicular to the vector $\omega$; integrating along these lines first, and then with respect to the signed distance $q$ of the line from the origin, we obtain that the above expression equals $$=\int_{\mathbb{R}}\,R_\lambda g(\omega,q)\,R_\lambda f(\omega,p-q)\,dq=(R_\lambda g*R_\lambda f)(\omega,p),$$ a one-dimensional convolution.
$\square$
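The identity $R_\lambda(f*g)=(R_\lambda g)*(R_\lambda f)$ can also be checked numerically. The Python sketch below (SciPy assumed) uses Gaussian densities of our choice, for which the two-dimensional convolution $f*g$ is again a Gaussian density in closed form, and compares the two sides along one line for the direction $\omega=(1,0)$.

```python
# Check of R_lam(f*g) = (R_lam g) * (R_lam f) along one line, with w = (1, 0).
# Gaussian densities are our choice: the 2-D convolution f*g is then again a
# Gaussian density, with added means and variances, in closed form.
import numpy as np
from scipy.integrate import quad

lam = 0.2
sf, sg = 0.7, 0.9                     # standard deviations of f and g
af, ag = (0.4, -0.2), (-0.1, 0.3)     # centres of f and g

def gauss2(x, y, a, s):
    return np.exp(-((x - a[0])**2 + (y - a[1])**2) / (2 * s**2)) / (2 * np.pi * s**2)

def R(dens, p):                       # exponential Radon transform along {x = p}
    return quad(lambda u: np.exp(-lam * u) * dens(p, u), -np.inf, np.inf)[0]

f = lambda x, y: gauss2(x, y, af, sf)
g = lambda x, y: gauss2(x, y, ag, sg)
sh = np.hypot(sf, sg)                 # closed form for the 2-D convolution f*g
h = lambda x, y: gauss2(x, y, (af[0] + ag[0], af[1] + ag[1]), sh)

p0 = 0.5
lhs = R(h, p0)                        # R_lam(f*g) at (w, p0)
rhs = quad(lambda q: R(g, q) * R(f, p0 - q), -np.inf, np.inf, limit=100)[0]
print(lhs, rhs)                       # agree up to quadrature error
```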
# Further statements
Using the Helgason moment condition [@Helgason], namely that $\int_{\mathbb R} Rf(\omega, p) p^k\,dp$ is a polynomial of degree at most $k$ in $\omega\in S^{n-1}$, provided $f$ decays faster than any polynomial, we can easily derive the following well known statement.
Let $\Omega\subset{\mathbb S^1}$ be an infinite set, and $f: \;{\mathbb R^2}\rightarrow {\mathbb R}$ a continuous function with compact support. Assume that $Rf(\omega, p)=0$ for every $\omega\in\Omega$ and $p\in\mathbb{R}$. Then $f\equiv 0$. Note that the condition that $f$ decays faster than any polynomial is not enough, even if $\Omega = S^{n-1}$. But by Statement [Statement 5](#th:expdec){reference-type="ref" reference="th:expdec"} we can apply the proof to functions with exponential decay too. To our knowledge, this version is not mentioned in the literature.
In ${\mathbb R^n}$ the same is true, but $\Omega\subset {\mathbb S^{n-1}}$ has to be Zariski dense! (If a homogeneous polynomial vanishes identically on $\Omega$, then it is the zero polynomial.)
We can extend this well known result as follows. The difference is that now obstacles are allowed; this contains the case of limited-angle tomography with obstacles, which might be of interest from a practical point of view.
For the presence of the obstacle we have to pay with slightly stronger conditions on $\Omega \subseteq S^{n-1}$.
**Theorem 6**. *Let $f: \;\mathbb R^2\rightarrow\mathbb R$ be a continuous function and let $K$ be a convex, closed set.*
*Let $\Omega\subset\mathbb S^1$ be a perfect set (every point is a limit point). Let $K^\#$ be the union of those lines which avoid $K$ and whose normal vectors are in $\Omega$.*
*Assume that $f$ admits the decay conditions of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} for every $\omega_0 \in \Omega$ and $R_\lambda f(\omega,p)=0$ if $l(\omega,p)\bigcap K=\emptyset$ and $\omega\in\Omega$. Then $f\equiv 0$ on $K^\#$.*
*Proof:* This is an immediate consequence of the proof of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}. $\square$
To our knowledge this statement is new even for $\lambda=0$, the case of classical Radon transform.
Note that for the exponential Radon transform there is no moment condition, so even the $K=\emptyset$ case is non-trivial. Also note that the Helgason moment condition [@Helgason] cannot be applied if $K\neq \emptyset$.
We can extend this to higher dimensions for the classical Radon transform if $\Omega$ is rich enough.
By proof mining of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"}, we can easily find sufficient conditions for $\Omega$ even in the $n=2$ case, that is Theorem [Theorem 6](#th:supp3){reference-type="ref" reference="th:supp3"}.
Such a sufficient condition is as follows.
Let $\omega\in\Omega$. We say that $\iota\in\mathbb S_\omega^{n-2}$ (the unit sphere in the tangent space $T_\omega\mathbb S^{n-1}$) is a limit direction if there is a sequence of points $\omega_1, ..., \omega_k,...\rightarrow\omega$ in $\Omega$ such that $\frac{\omega_k-\omega}{|\omega_k-\omega|}\rightarrow\iota$. Let $I(\omega)$ denote the set of all limit directions at $\omega\in\Omega$.
**Theorem 7**. *If $\Omega\subset\mathbb S^{n-1}$ is perfect and $I(\omega)$ is Zariski dense for every $\omega\in\Omega$, if $K$ is closed, and if $f:\;\mathbb R^n\rightarrow\mathbb R$ satisfies the multi-dimensional version of the decay conditions of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} for every $\omega_0\in \Omega$ and $Rf(\omega,p)=0$ for every hyperplane $H(\omega,p)$ avoiding $K$ with $\omega\in\Omega$, then $f\equiv 0$ on $K^\#$.*
*Proof:* Apply the same technique as in Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} for hyperplanes almost perpendicular (in second order) to an appropriate two-plane $\theta(\omega_0, \eta)$ spanned by the vectors $\omega_0$ and $\eta\perp \omega_0$. Change this plane at every step of the derivations. So we get the integrals $\int_{\langle x,\omega_0\rangle = p} f(x) \langle x, \eta_1\rangle\cdots \langle x, \eta_k\rangle\,dx$. This implies that $\int_{\langle x,\omega_0\rangle = p} f(x)p(x)\,dx =0$ for any polynomial $p(x)$ defined on the points of the hyperplane $H(\omega_0, p) = \{x\in {\mathbb R}^n : \langle x,\omega_0\rangle = p \}$. Then apply Statement [Statement 5](#th:expdec){reference-type="ref" reference="th:expdec"}. $\square$
Note that both in Theorem [Theorem 6](#th:supp3){reference-type="ref" reference="th:supp3"} and Theorem [Theorem 7](#th:supp4){reference-type="ref" reference="th:supp4"} functions with exponential decay automatically satisfy the decay conditions.
The generalization of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} is as follows. Let $R$ be the classical Radon-transform on ${\mathbb R}^n$, that is integration on hyperplanes, and let $R_{n-2}$ be the integration on $n-2$ planes. Here $\Omega\subseteq S^{n-1}$ is an open set containing $\omega_0$.
**Theorem 8**. *If $f\in \mathcal E(\omega_0, \Omega, p_0)$ and $Rf(\xi) = 0$ for every hyper-plane $\xi(\omega, p)$, for $\omega \in \Omega$ and $p\geq p_0$, then $f = 0$.*
*Proof:* Apply the same technique as in Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} for hyperplanes perpendicular to $\omega_0$ and a fixed $\eta\in S^{n-1}$ and $\eta\perp\omega_0$. By applying Fubini's theorem and Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} we get that $R_{n-2}f(\zeta_{n-2}) = 0$ for any $(n-2)$-plane $\zeta_{n-2}$ which is perpendicular to $\omega_0$ and in a hyperplane $\xi(\omega_0, p)$ whose oriented distance $p$ from $O$ is at least $p_0$. This means that the Radon transform of $f$ is zero in every such hyperplane $\xi(\omega_0, p)$. Note that in this case the Radon transform integrates on $(n-2)$-planes, which are hyperplanes in the hyperplane $\xi(\omega, p)$. Also note that the $(n-2)$-plane transform $R_{n-2}$ does not affect the decay conditions. $\square$
To make things simple, and easier to remember, we also state it under stronger decay conditions, as in Theorem [Theorem 1](#th:supp){reference-type="ref" reference="th:supp"}.
**Theorem 9**. *If $K \subset {\mathbb R}^n$ is a convex closed set, and $f:K^c\to \mathbb R$ is a continuous function with exponential decay and $Rf(\xi) = 0$ for every hyper-plane $\xi$ avoiding $K$, that is in $\xi \subset K^c$, then $f = 0$.*
Note that Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} and Theorem [Theorem 8](#th:supp5){reference-type="ref" reference="th:supp5"} are not consequences of Theorem [Theorem 6](#th:supp3){reference-type="ref" reference="th:supp3"} and Theorem [Theorem 7](#th:supp4){reference-type="ref" reference="th:supp4"}, respectively. This is because the decay conditions of Theorem [Theorem 3](#th:supp2){reference-type="ref" reference="th:supp2"} are required only for one direction $\omega_0$, not for all directions $\omega_0\in \Omega$ as in the latter two theorems.
Theorem [Theorem 9](#th:supp6){reference-type="ref" reference="th:supp6"} above is also close to being sharp, as in the plane. We can prove this by the rotation-symmetric extension of our planar counter-examples to ${\mathbb R}^n$, although the geometry is a bit more complicated. We say that a closed convex set $K$ is strongly hyperbolic if it does not contain a whole line, but contains half-lines in $n$ independent directions. We say that $K$ is parabolic if it does not contain a whole line and contains no $n$ independent half-lines, but contains at least one half-line. For strongly hyperbolic domains we have a rotation-symmetric counter-example which decays as $e^{-|x|^{1-\epsilon}}$, while for parabolic ones it decays as $e^{-|x|^{1/2-\epsilon}}$, for a fixed $\epsilon > 0$. The easy details are left to the reader.
Let $\omega_0$ be as before, $\lambda > 0$ and $a_0(x) = \exp(-\lambda\cdot\langle x, \omega^\perp_0\rangle)$. Then for $\langle\omega^\perp, \omega^\perp_0\rangle > 0$ the attenuated Radon transforms $R_a$, as defined in [@Novikov], [@Natterer], are the same for all such $\omega$'s up to a positive multiplicative function.
**Question 10**. *Can we extend our results to attenuated Radon transforms with sufficiently smooth weights '$a$' which are exponentially close to '$a_0$'? That is $|a(x) - a_0(x)| <C\cdot\exp(-\mu\cdot|x|)$, where $C> 0$ is a suitable constant and $\mu > \lambda > 0$.*
J. Boman. *Local non-injectivity for weighted Radon transforms*. Conference Proceedings (2011).
J. Boman. *Local uniqueness theorem for weighted Radon transform*. Inverse Problems and Imaging, 4 No. 4 (2010).
J. Boman, E.T. Quinto. *Support theorems for real-analytic Radon transforms*. Duke Mathematical J., 55 (1987), 943-948.
S. Helgason. *The Radon Transform*. Second Edition. Massachusetts Institute of Technology, Cambridge MA (1999).
F. Natterer. *Inversion of the attenuated Radon transform*. Inverse Problems, 17 (2001), 113-119.
R. G. Novikov. *An inversion formula for the attenuated X-ray transformation*. Arkiv för Matematik, 40 (2002), 145-167. (Preprint: 2000.)
T. Ódor. *Kandidátusi értekezés - Rekonstrukciós, karakterizációs és extrémum problémák a geometriában* (Hungarian). \[*PhD dissertation - Problems of Reconstruction, Characterisation and Extrema in Geometry*\]. Eötvös Loránd University, Budapest (1994).
R. S. Strichartz. *Radon inversion--variations on a theme*. Amer. Math. Monthly, 89 (1982), 377-384, 420-423.
D. V. Widder, *The Laplace Transform*, Princeton University Press (1946).
---
abstract: |
Motivated by the main results of the articles by Hattori [@Hattori] and Bouziad [@completez], we seek to answer the following questions about Hattori spaces. Let $A\subseteq \mathbb{R}$, then:
1. Given a compact set $K$ in the Euclidean topology, under what conditions is $K$ compact in the Hattori space $H(A)$?
2. When is $H(A)$ a quasi-metrizable space?
3. When is $H(A)$ a semi-stratifiable space?
4. When is $C_p(H(A))$ a normal space?
5. When is $C_p(H(A))$ a Lindelöf space?
    We obtain complete answers for 3 out of these 5 questions, while the remaining two have only partial answers, among them:\
    **Theorem**. If $\mathbb{R}\setminus A$ is analytic, then $C_p(H(A))$ is not normal.\
    Moreover, when working in the Solovay model, we can improve the previous result to require only that $\mathbb{R}\setminus A$ be uncountable.
author:
- Elmer Enrique Tovar Acosta
bibliography:
- biblio.bib
title: $Cp(X)$ for Hattori Spaces
---
54A10, 54A25, 54C05\
Hattori spaces, spaces of continuous functions, Lindelöf property, network weight, generalized metric spaces.
# Introduction
In the article [@Hattori], Hattori defines a family of intermediate topologies between the Euclidean and Sorgenfrey topologies by specifying local bases. Namely, for every $A\subseteq \mathbb{R}$ he defines a topology $\tau(A)$ on $\mathbb{R}$ as follows:
- For every $x\in A$, the set $\lbrace (x-\varepsilon,x+\varepsilon) \ | \ \varepsilon>0\rbrace$ remains a local base at $x$.
- On the other hand, if $x\in \mathbb{R}\setminus A$, then $\lbrace [x,x+\varepsilon) \ | \ \varepsilon>0\rbrace$ is a local base at $x$.
Note that we always have $\tau_e\subseteq \tau(A)\subseteq \tau_s$, where $\tau_e$ is the Euclidean topology and $\tau_s$ is the Sorgenfrey one.
We denote the space $(\mathbb{R}, \tau(A))$ as $H(A)$ and refer to it as the Hattori space associated to $A$.
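As a sanity check (this remark is ours, not from [@Hattori], though it is immediate from the definition), the two extreme choices of $A$ recover the bounding topologies:

```latex
% Every point of A keeps its Euclidean base; every point of
% \mathbb{R} \setminus A gets a Sorgenfrey base. Hence:
H(\mathbb{R}) = (\mathbb{R},\tau_e) = \mathbb{R}, \qquad
H(\emptyset) = (\mathbb{R},\tau_s) = \mathbb{S}.
```

An intermediate example is $H(\mathbb{R}\setminus\lbrace 0\rbrace)$, which differs from the Euclidean line only at $0$, where the intervals $[0,\varepsilon)$ form a local base.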
It is well-known that the Euclidean and Sorgenfrey topologies have a somewhat curious relationship -- while they share several topological properties, in others, they are entirely opposite. This prompts a natural question: under what conditions do Hattori's spaces preserve the properties of either of these topologies? Examples of this can be found in papers [@completez] and [@Hattori], where two particularly notable patterns emerge:
- If the property is shared by both topologies, then any Hattori space maintains the property.
- If the complement of $A$ is countable, then the Hattori space associated with $A$ maintains the metric/completeness properties of the Euclidean topology (see [@completez]).
These two patterns raise some natural questions:
- Since both the Euclidean and Sorgenfrey topologies are quasimetrizable, is every Hattori space quasimetrizable? Note that any answer to this question breaks one of the previous patterns.
- Do these patterns hold if we consider the space of continuous functions?
In this paper we address the latter questions. More specifically, we try to determine when $C_p(H(A))$ is a normal/Lindelöf space, and along the way we construct closed and discrete families in $C_p(H(A))$; this could be considered the main focus of this paper. We also determine under which conditions $H(A)$ is a quasimetrizable/semi-stratifiable space, and we give a characterization of compact sets in $H(A)$. We start by fixing some notation and recalling some definitions.\
# Preliminaries
Throughout this paper we will denote the Sorgenfrey line by $\mathbb{S}$, while we will use $\mathbb{S}^\star$ (also denoted by $Y$ in what follows) to refer to the "inverted\" Sorgenfrey line, that is, the space whose local bases consist of intervals of the form $(a,b]$. As always, $\mathbb{R}$ represents the real line with its usual topology. $C_p(X)$ represents the space of real-valued continuous functions defined on $X$ with the topology of pointwise convergence. Lastly, as we said before, $H(A)$ represents the Hattori space associated with $A$.\
We refer to the intervals $(x-\varepsilon,x+\varepsilon)$ and $[x,x+\varepsilon)$ as $B_e(x,\varepsilon)$ and $B_s(x,\varepsilon)$, respectively. Also, for a fixed $A\subseteq \mathbb{R}$ we define:$$B_A(x,\varepsilon)=\begin{cases}
B_e(x,\varepsilon), & \text{if } x\in A \\
B_s(x,\varepsilon), & \text{otherwise}
\end{cases}$$
The symbols $\text{cl}_e$, $\text{cl}_s$, $\text{cl}_{H(A)}$ will denote the closures in the Euclidean topology, the Sorgenfrey topology, and in $H(A)$, respectively.
We will need the concepts of quasimetrizable, $\gamma$-space, semi-stratifiable space, Moore space, $\sigma$-space, $\Sigma$-space, $p$-space, quasicomplete space and Čech complete space. We will use most of these to state exactly one theorem, so we prefer to refer the reader to [@handbook] (explicitly, chapter 10) for the first 6 definitions, [@creede] for the definitions of $p$-space and quasicomplete and lastly, [@engelking] for the definition of Čech complete space.
The main results concerning all of these definitions are summarized in the following proposition:
**Proposition 1**.
- *Every Moore space is both a quasimetrizable space and a semi-stratifiable space, and every quasimetrizable space is a $\gamma$-space.*
- *Every Čech-complete space is a $p$-space, and every $p$-space is quasicomplete.*
- *A Tychonoff space is a Moore space if and only if it is a semi-stratifiable $p$-space [@creede].*
- *A space $X$ is a $\sigma$-space if and only if $X$ is a $\Sigma$-space with a $G_\delta$ diagonal.*
Lastly, an uncountable regular cardinal $\kappa$ is a caliber of a space $X$ if, for any family $\mathcal{U}\subseteq \tau\setminus \lbrace \emptyset\rbrace$ of cardinality $\kappa$, there exists $\mathcal{U}^\prime \subseteq \mathcal{U}$ such that $|\mathcal{U}^\prime|=\kappa$ and $\bigcap \mathcal{U}^\prime\neq \emptyset.$ All undefined terms will be interpreted as in [@engelking].\
# Metric-like properties of $H(A)$ and some compactness results
We start our study of $H(A)$ by answering the following question: under which conditions on $K\subseteq \mathbb{R}$ is $K$ a compact set in $H(A)$? Since $\tau(A)$ contains the Euclidean topology, if $K$ is compact in $H(A)$ then it is compact in $\mathbb{R}$, so we need $K$ to be a closed and bounded set in $\mathbb{R}$. We have the following full characterization of compact sets in $H(A)$:
**Proposition 2**. *Let $A, K \subseteq \mathbb{R}$ with $|K|\geq \aleph_0$. The following are equivalent:*
1. *$K$ is compact in $H(A)$.*
2. - *$K\setminus A$ is countable.*
- *For every $x\in K\setminus A$ there exists $\varepsilon>0$ such that $(x-\varepsilon,x)\cap K=\emptyset$.*
Start by supposing that $K$ is compact in $H(A)$. Since $\tau_e\subseteq \tau(A)$, $K$ is compact in the Euclidean topology. Now let's verify that $K$ satisfies (2). Note that $K$ is compact and submetrizable (again, because $\tau_e\subseteq \tau(A)$), so $K$ is a compact space with a $G_\delta$ diagonal, hence a metrizable space (this is a classic result by Sneider [@sneider], see also [@handbook Ch. 9, § 2 ]). So $K\setminus A$, as a subspace of a metric space, is also metrizable. On the other hand, the topology it inherits as a subspace of $H(A)$ is the one it inherits as a subspace of $\mathbb{S}$ (this is because $K\setminus A\subseteq \mathbb{R}\setminus A$, see 2.1 in [@Hattori]). Since the only metrizable subspaces of $\mathbb{S}$ are countable, we conclude that $K\setminus A$ satisfies the first part of (2).\
Let's move on to the second part of (2). Let $x\in K\setminus A$ and suppose, for the sake of contradiction, that for every $\varepsilon>0$, $(x-\varepsilon,x)\cap K\neq \emptyset$. In particular, for each $n\in \mathbb{N}$ take $x_n\in (x-\frac{1}{n},x)\cap K$, and wlog suppose that $x_n<x_{n+1}$. Take the following open cover of $K$: $\mathcal{U}=\lbrace (-\infty,x_2)\rbrace \cup \lbrace (x_{n-1},x_{n+1}) \ | \ n\geq 2\rbrace \cup \lbrace [x,\infty)\rbrace$ (note that the last set is open because $x\notin A$). $\mathcal{U}$ admits no finite subcover, since every element of our sequence belongs to exactly one element of $\mathcal{U}$, thus contradicting the compactness of $K$. With this, we conclude the (1)$\rightarrow (2)$ implication.\
Now let's prove the reverse implication. Suppose that $K$ is a compact set in the Euclidean topology and satisfies (2). Let $\mathcal{U}\subseteq \tau(A)$ be an open cover of $K$ in $H(A)$. For each $x\in K$ let $U_x\in \mathcal{U}$ be such that $x\in U_x$. We can take $\varepsilon_x>0$ that satisfies the following conditions:
- If $x\in A$, then $(x-\varepsilon_x,x+\varepsilon_x)\subseteq U_x$.
- If $x\notin A$, then $[x,x+\varepsilon_x)\subseteq U_x$ and $(x-\varepsilon_x,x)\cap K=\emptyset$. ($\star$)
Let's consider $\mathcal{V}=\lbrace B_e(x,\varepsilon_x) \ | \ x\in K\rbrace$. This is an open cover of $K$ consisting of open sets in the Euclidean topology, and therefore there exists a finite set $F\subseteq K$ such that $$K\subseteq \bigcup_{x\in F} B_e(x,\varepsilon_x)$$But ($\star$) implies that, if $x\in K\setminus A$, then we can remove the left side of the interval $(x-\varepsilon_x,x+\varepsilon_x)$ without removing points of $K$; thus: $$K\subseteq \bigcup_{x\in F} B_A(x,\varepsilon_x)\subseteq \bigcup_{x\in F} U_x$$ And so we have found a finite subcover of $\mathcal{U}$. With this, we conclude that $K$ is compact in $H(A)$.
The first part of condition (2) does not come as a surprise, because it is often encountered when demonstrating that specific properties of $\mathbb{R}$ are preserved in $H(A)$. On the other hand, the second part may appear sudden or unrelated. However, upon closer examination and with the following reinterpretation in mind, it does make sense: the second part of (2) states that if $x\in K\setminus A$, then $x$ is not an accumulation point of $K$ in $Y$, and so we can restate our Proposition as follows:
**Proposition 3**. *Let $A,K\subseteq \mathbb{R}$ with $|K|\geq \aleph_0$. Then $K$ is compact in $H(A)$ if and only if $K$ is compact in the euclidean topology, $K\setminus A$ is countable and $\text{der}_{Y}(K)\cap (K\setminus A)=\emptyset$.*
Since $H(A)$ is first-countable and hereditarily Lindelöf, we have actually proven a bit more, namely:
**Theorem 1**. *Let $A,K\subseteq \mathbb{R}$ with $K$ infinite and compact in the euclidean topology. Then the following are equivalent:*
- *$K$ is compact in $H(A)$.*
- *$K$ is countably compact in $H(A)$.*
- *$K$ is sequentially compact in $H(A)$.*
- *$|K\setminus A|\leq \aleph_0$ and $\text{der}_Y(K)\cap (K\setminus A)=\emptyset$.*
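Condition (2) of Proposition 2 can be probed numerically on finite truncations. The following is a minimal sketch of ours (the helper `has_left_gap`, the sample sets `K1`, `K2`, and the grid of radii are illustrative choices, not from the paper), contrasting a set whose Euclidean limit point is approached only from the right with one approached from the left:

```python
def has_left_gap(K, x, eps_grid):
    """Return True if some eps in eps_grid gives (x - eps, x) ∩ K = ∅,
    i.e. x satisfies the left-gap condition in part (2) of Proposition 2."""
    return any(all(not (x - eps < y < x) for y in K) for eps in eps_grid)

# K1 = {0} ∪ {1/n}: 0 is a Euclidean limit point, but only from the right,
# so K1 remains compact in H(A) even when 0 is not in A.
K1 = [0.0] + [1.0 / n for n in range(1, 1001)]
# K2 = {0} ∪ {-1/n}: 0 is approached from the left, so when 0 is not in A
# the open cover built in the proof of Proposition 2 has no finite subcover.
K2 = [0.0] + [-1.0 / n for n in range(1, 1001)]

eps_grid = [0.1, 0.05, 0.02, 0.01]  # radii coarser than the truncation 1/1000

print(has_left_gap(K1, 0.0, eps_grid))  # True: every (-eps, 0) misses K1
print(has_left_gap(K2, 0.0, eps_grid))  # False: every (-eps, 0) meets K2
```

Of course this only inspects a finite truncation; it is meant as an illustration of the left-gap condition, not a proof.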
Now we move on to the question: when is $H(A)$ a quasimetrizable/semi-stratifiable space? The second question will be answered with a classical condition, namely $|\mathbb{R}\setminus A|\leq \aleph_0$, while the first will require a more elaborate condition. Let's start by addressing the semi-stratifiable case with a couple of simple results:
**Lemma 2**. *Let $A\subseteq \mathbb{S}$ with a countable network. Then $A$ itself is countable.*
Take a subset $A$ of $\mathbb{S}$ and suppose that $A$ is semi-stratifiable. Note that $A$ is a quasimetrizable space (since $\mathbb{S}$ is), so $A$ is a semi-stratifiable, regular and quasimetrizable space, and thus a Moore space (see [@handbook Ch. 10, § 8]). Since $\mathbb{S}$ is hereditarily Lindelöf, $A$ is a Lindelöf Moore space, thus second countable, and the previous Lemma allows us to conclude:
**Proposition 4**. *Let $A\subseteq \mathbb{S}$ be semi-stratifiable. Then $A$ is countable.*
The following Theorem can be obtained by using the results of [@completez § 3] and Proposition [Proposition 1](#Relaciones){reference-type="ref" reference="Relaciones"}. It contains most of the results concerning metric/completeness properties of Hattori spaces.
**Theorem 3**. *Let $A\subseteq \mathbb{R}$. The following are equivalent:*
- *$\mathbb{R}\setminus A$ is countable.*
- *$H(A)$ is Polish.*
- *$H(A)$ is completely metrizable.*
- *$H(A)$ is Čech-Complete.*
- *$H(A)$ is a Moore space.*
- *$H(A)$ is a p-space.*
- *$H(A)$ is quasicomplete.*
- *$H(A)$ is a $\sigma$-space.*
- *$H(A)$ is a $\Sigma$-space.*
We are ready to add "is semi-stratifiable\" to these equivalences. Note that if $H(A)$ is a semi-stratifiable space, then $\mathbb{R}\setminus A$ is semi-stratifiable too, but this set inherits the same topology as a subspace of $\mathbb{S}$, thus Proposition [Proposition 4](#semies){reference-type="ref" reference="semies"} allows us to conclude that $\mathbb{R}\setminus A$ is countable. The other implication is trivial, so we get:
**Proposition 5**. *Let $A\subseteq \mathbb{R}$. Then $H(A)$ is semi-stratifiable iff $\mathbb{R}\setminus A$ is countable.*
Since both $\mathbb{R}$ and $\mathbb{S}$ are quasimetrizable spaces, intuition suggests that $H(A)$ should be quasimetrizable regardless of the choice of $A$. This is similar to other properties shared between $\mathbb{R}$ and $\mathbb{S}$, such as being hereditarily Lindelöf and separable, being a Baire space, etc. However, this is not the case here, and the proof of this fact can be found in [@bennettquasi] and [@Quasi], where Bennett proves what is known as the "$\gamma$-space conjecture\" (see Theorem 3.1 of [@bennettquasi]) for separable generalized ordered spaces, and Kolner later modified said Theorem slightly (see Theorem 10 of [@Quasi]). We must mention that the credit for pursuing these ideas belongs to Li and Lin in [@lin], and we will expand on their ideas by adding a couple of simple results connecting Bennett's and Kolner's results. We present Bennett's theorem adapted to our context.
**Theorem 4**. *Let $A\subseteq \mathbb{R}$. The following are equivalent:*
- *$H(A)$ is a $\gamma$-space.*
- *There exists a sequence of sets $(R_n)_{n=1}^\infty$ such that:*
- *$\mathbb{R}\setminus A=\bigcup_{n=1}^\infty R_n$*
- *For each $p\in A\cap \text{cl}_e(R_n)$, there exists $y<p$ such that\
$(y,p)\cap \text{cl}_e(R_n)=\emptyset$.*
- *$H(A)$ is quasimetrizable.*
This technically solves our question completely, but it is not very enlightening. Upon closer analysis of Bennett's proof, it turns out that we can deduce more, leading to the following result, of which we will prove the first implication for the sake of completeness:
**Theorem 5**. *Let $A\subseteq \mathbb{R}$. The following are equivalent:*
- *$H(A)$ is a $\gamma$-space.*
- *There exists a sequence of sets $(R_n)_{n=1}^\infty$ such that:*
- *$\mathbb{R}\setminus A=\bigcup_{n=1}^\infty R_n$*
- *For each $p\in A\cap \text{cl}_e(R_n)$, there exists $y<p$ such that\
$(y,p)\cap \text{cl}_e(R_n)=\emptyset$.*
- *If $(x_n)\subseteq R_m$ is an increasing sequence, then it does not converge in $H(A)$.*
- *$H(A)$ is quasimetrizable.*
Let $g:\mathbb{N}\times H(A) \rightarrow \tau(A)$ be a $\gamma$-function satisfying the conditions of Definition [\[Gamma\]](#Gamma){reference-type="ref" reference="Gamma"}, let $Q=\lbrace q_k \ | \ k \in \mathbb{N}\rbrace$ be an enumeration of the rationals and $Q_k=\lbrace q_i \ | \ i\leq k\rbrace$. For every $n,m,k\in\mathbb{N}$ we define: $$R(n,m,k)=\lbrace x\in \mathbb{R} \ | \ g(n,x)\subseteq [x,\infty) \ \& \ \alpha(n,x)=m \ \& \ g(m,x)\cap Q_k\setminus \lbrace x\rbrace\neq \emptyset\rbrace$$
Let's see that $\mathbb{R}\setminus A=\bigcup_{(n,m,k)\in \mathbb{N}^3} R(n,m,k)$. Given $x\in \mathbb{R}\setminus A$, we know that $[x,\infty)$ is an open set, and since $g$ is a $\gamma$-function, this implies that there exists $n\in\mathbb{N}$ such that $g(n,x)\subseteq [x,\infty)$. This gives us our first parameter. For the second parameter, we can take $m=\alpha(n,x)$. Finally, since $\mathbb{Q}$ is a dense set in $H(A)$ and $g(m,x)\setminus \lbrace x\rbrace$ is a non-empty open set, there exists $k\in \mathbb{N}$ such that $Q_k\cap g(m,x)\setminus \lbrace x\rbrace\neq \emptyset$. Thus, $x\in R(n,m,k)$.\
The other containment is simple. This is because the first condition defining $R(n,m,k)$ implies that $[x,\infty)$ is an open set, and therefore $x\in \mathbb{R}\setminus A$.\
Let's consider a sequence $(x_j)\subseteq R(n,m,k)$ such that $x_j<p$ for all $j\in \mathbb{N}$. We will see that $(x_j)$ cannot converge to $p$ in $H(A)$. If $p\in \mathbb{R}\setminus A$, then it is clear that $(x_j)$ cannot converge to $p$ in $H(A)$, since $[p,\infty)$ is then an open neighbourhood of $p$ containing no $x_j$. Therefore $p\in A$ necessarily; let's suppose, for the sake of contradiction, that the sequence does converge.\
**Claim:** One of the following cases applies:
- There exists $q\in Q_k$ such that $p\leq q$ and $(-\infty,q)\cap Q_k=\emptyset$.
- There exist $a,q\in Q_k$ such that $a<q$, $p\in (a,q]$ and $(a,q)\cap Q_k=\emptyset$.
First, let's see that it is impossible that $q<p$ for every $q\in Q_k$. Assume this happens, and let $q_M=\max Q_k$. Since $(x_j)$ converges to $p$, there exists $N\in \mathbb{N}$ such that $x_s\in (q_M,p)$ for every $s\geq N$. Since $x_s\in R(n,m,k)$ we conclude $g(n,x_s)\subseteq [x_s,\infty)$, $\alpha(n,x_s)=m$ and $Q_k\cap g(m,x_s)\setminus \lbrace x_s\rbrace\neq \emptyset$; take $q_i$ in the last intersection. It follows that $g(m,q_i)\subseteq g(n,x_s)\subseteq [x_s,\infty)$ and thus $q_i\in [x_s,\infty)$, which is a contradiction since $q_i\leq q_M<x_s$. We conclude that there exists some $q\in Q_k$ such that $p\leq q$. The cases from the claim come from the following:
- If for every $q\in Q_k$ we have $p\leq q$, it is enough to take $q=\min Q_k$ so the first case from the claim is satisfied.
- There exist $q_i,q_j\in Q_k$ such that $q_i<p\leq q_j$. In this case we take $a=\max \lbrace x\in Q_k \ | \ x<p\rbrace$ and $q=\min \lbrace x\in Q_k \ | \ p\leq x\rbrace$, and it follows that these points satisfy the second case of the claim.
Continuing the proof, now we assert that there exist $N_1$ and $N_2$ in $\mathbb{N}$ such that:
- For all $i\geq N_1$, $q\in g(m,x_i)$ (this $q$ is the same from the corresponding case of the claim).
- For all $i\geq N_2$, $x_i\in g(m,p)$.
We prove the existence of $N_1$ following the cases from the claim:
- In this case $N_1=1$. For all $i\in \mathbb{N}$ we have $g(m,x_i)\subseteq g(n,x_i)\subseteq [x_i,\infty)$; the first inclusion follows from $m=\alpha(n,x_i)$. Since $(-\infty,q)\cap Q_k=\emptyset$ and $g(m,x_i)\cap Q_k\setminus \lbrace x_i\rbrace\neq \emptyset$, we conclude that there exists $q_j\in Q_k\cap g(m,x_i)$ such that $q_j\geq q\geq p >x_i$; thus the interval $[x_i,q_j]$ is contained in $g(m,x_i)$, because this last set is convex. This guarantees $q\in g(m,x_i)$.
- By convergence, there exists $N_1\in \mathbb{N}$ such that, for all $i\geq N_1$, we have $x_i \in (a,p)\subseteq (a,q)$. Let $z\in Q_k\cap g(m,x_i)\setminus \lbrace x_i\rbrace$; since $g(m,x_i)\subseteq [x_i,\infty)$, the choice of $a$ and $q$ guarantees $z\geq q$. Thus $[x_i,z]\subseteq g(m,x_i)$ and it follows that $q\in g(m,x_i)$.
With this, we have proven the existence of $N_1$ for both cases. As for $N_2$, it is much simpler as we can directly apply the definition of convergence to the open set $g(m, p)$.\
Now, let $i,j>N_1+N_2$ be such that $x_i<x_j$ (we can always do this by fixing $i$ and using convergence). Notice that $x_j,q\in g(m,x_j)$ and thus $[x_j,q]\subseteq g(m,x_j)$, from which we conclude that $p\in g(m,x_j)$. Since $g$ is a $\gamma$-function, we deduce $$g(m,p)\subseteq g(n,x_j)\subseteq [x_j,\infty)$$ Recall that $x_i\in g(m,p)$; thus $x_j\leq x_i$, which is a contradiction. Therefore, $(x_j)$ does not converge in the Euclidean sense to $p$, but since $p\in A$, this implies that the sequence does not converge to $p$ in $H(A)$.\
With what we have done so far, we have concluded that if $(x_n)$ is a sequence contained in $R(n,m,k)$, and $x_i < p$ for all $i$, then the sequence does not converge to $p$ in $H(A)$. As a particular case, no increasing sequence contained in $R(n,m,k)$ can converge in the Euclidean sense.\
Moreover, if $p\in A\cap \text{cl}_e(R(n,m,k))$, then there exists $\varepsilon>0$ such that $(p-\varepsilon,p)\cap R(n,m,k)=\emptyset$; otherwise we could construct a sequence fully contained in $R(n,m,k)$ that converges to $p$ and stays strictly to the left of $p$, but we just proved that this is impossible. Notice that for this $\varepsilon>0$ we have $(p-\varepsilon,p)\cap \text{cl}_e(R(n,m,k))=\emptyset$. With this, we have simultaneously proven that both conditions we were looking for are satisfied for each $R(n,m,k)$, so we conclude this proof.
This new version of the Theorem tells us a bit more, but it is still not clear enough which sets satisfy b). However, upon closer analysis, we obtain the following:
**Proposition 6**. *Suppose that $H(A)$ is a quasimetrizable space, and let\
$\mathbb{R}\setminus A=\bigcup_{n=1}^\infty R_n$ where each $R_n$ satisfies $b)$ from Theorem 5. Then for every $n\in\mathbb{N}$, $R_n$ is a closed set in $\mathbb{S}^\star$. That is, $\mathbb{R}\setminus A$ is an $F_\sigma$ set in $Y$.*
Let's assume that there exists $p\in \text{cl}_Y(R_n)\setminus R_n$. With this, for each $m\in \mathbb{N}$, we can choose a point $x_m\in (p-\frac{1}{m},p)\cap R_n$. Moreover, we can select these points in such a way that they give us an increasing sequence contained in $R_n$ that converges to $p$, which is impossible. We conclude that such a $p$ does not exist, and therefore, $R_n$ is closed in $Y$.
Now we have an important condition: the set $\mathbb{R}\setminus A$ must be an $F_\sigma$ set in $Y$. Notice also that if $\mathbb{R}\setminus A=\bigcup_{n=1}^\infty R_n$ is an $F_\sigma$ set in $Y$, then each $R_n$ satisfies part b) of Theorem 4. To see this, suppose $p\in A\cap \text{cl}_e(R_n)$. Then $p\notin R_n=\text{cl}_Y(R_n)$, so there exists $y<p$ such that $(y,p]\cap R_n=\emptyset$, and thus $(y,p)\cap R_n=\emptyset$, which implies $(y,p)\cap \text{cl}_e(R_n)=\emptyset$. With all this in mind, we can now give a definitive answer to our original question. Note that the following is essentially Theorem 10 of [@Quasi].
**Theorem 6**. *Let $A\subseteq \mathbb{R}$. The following are equivalent:*
- *$H(A)$ is a $\gamma$-space.*
- *$\mathbb{R}\setminus A$ is an $F_\sigma$ set in $Y$.*
- *$H(A)$ is a quasimetrizable space.*
To conclude this section, let's consider an explicit example of a set $A \subseteq \mathbb{R}$ such that $H(A)$ is not quasimetrizable. According to our theorem, it suffices to find a set $B$ that is not an $F_\sigma$ set in $\mathbb{S}^\star$, which is a simpler task than the original problem.
**Example 1**. *A Hattori space that is not quasimetrizable.\
Note that since $\mathbb{S}$ is a Baire space, $\mathbb{S}^\star$ is also a Baire space. Moreover, $\mathbb{R}\setminus \mathbb{Q}$ is a $G_\delta$ set in $Y$ because it is a $G_\delta$ set in the Euclidean topology. From these two observations, we can conclude that if we simply follow the usual proof that $\mathbb{Q}$ is not $G_\delta$ in $\mathbb{R}$, we can deduce that $\mathbb{Q}$ is not $G_\delta$ in $\mathbb{S}^\star$, and therefore $\mathbb{R}\setminus \mathbb{Q}$ is not an $F_\sigma$ set in $\mathbb{S}^\star$. Hence, we conclude that $H(\mathbb{Q})$ is not a quasimetrizable space.*
# Lindelöf property and normality of $C_p(H(A))$
Let's move on to the question of when $C_p(H(A))$ is a Lindelöf space. In order to prove that $C_p(H(A))$ is a Lindelöf space, it is enough to prove that it has countable network weight. Our next Proposition gives us a clear answer to this problem.
**Proposition 7**. *Let $A\subseteq \mathbb{R}$. Then $\text{nw}(H(A))=|\mathbb{R}\setminus A|+\aleph_0$.*
One inequality is trivial, so let's prove the other one. Let $\mathcal{B}$ be a network for $H(A)$; let's see that $|\mathcal{B}|\geq |\mathbb{R}\setminus A|$. For each $x\in \mathbb{R}\setminus A$ we have $[x,x+1)\in \tau(A)$, and therefore there exists $B_x\in \mathcal{B}$ such that $x\in B_x\subseteq [x,x+1)$. This assignment is one-to-one, so $|\mathcal{B}|\geq |\mathbb{R}\setminus A|$. Thus $\text{nw}(H(A))\geq |\mathbb{R}\setminus A|+\aleph_0$.
Thanks to the previous proposition and recalling that $\text{nw}(X)=\text{nw}(C_p(X))$, we obtain:
**Theorem 7**. *Let $A\subseteq \mathbb{R}$. If $|\mathbb{R}\setminus A|\leq \aleph_0$ then $C_p(H(A))$ is Lindelöf.*
In fact, we can say more. First, let's recall the following result (see [@tkachuk2011cp], problem 249):
**Proposition 8**. *Let $\omega_1$ be a caliber for a space $X$. Then $C_p(X)$ is a Lindelöf $\Sigma-$space if and only if $X$ has a countable network.*
Since $H(A)$ is separable, $\omega_1$ is a caliber for $H(A)$, thus we can use the last result and Proposition [Proposition 7](#peso red){reference-type="ref" reference="peso red"} to conclude:
**Theorem 8**. *Let $A\subseteq \mathbb{R}$. Then $C_p(H(A))$ is a Lindelöf $\Sigma-$space if and only if $\mathbb{R}\setminus A$ is countable.*
Let's move on to the normality of $C_p(H(A))$. Since the condition $|\mathbb{R} \setminus A|\leq \aleph_0$ seems to transfer many properties of $\mathbb{R}$ to $H(A)$, the natural question would be:
**Question 1**. *Let $A\subseteq\mathbb{R}$. Is $C_p(H(A))$ a normal space if and only if $|\mathbb{R}\setminus A|\leq \aleph_0$?*
In the following we will try to solve this question. One implication is straightforward: if $|\mathbb{R}\setminus A|\leq \aleph_0$, then $H(A)$ is a separable metric space and thus $C_p(H(A))$ is normal. So, the real question is:
**Question 2**. *Let $A\subseteq \mathbb{R}$. If $C_p(H(A))$ is normal, is $|\mathbb{R}\setminus A|\leq \aleph_0$?*
To tackle this problem, we will proceed by contrapositive: if the set $\mathbb{R}\setminus A$ satisfies certain conditions (in addition to being uncountable), then $C_p(H(A))$ contains an uncountable closed discrete set, and therefore $C_p(H(A))$ is not normal by Jones' Lemma (recall that $C_p(H(A))$ is separable, since $H(A)$ is submetrizable).\
One way to prove that $C_p(\mathbb{S})$ is not normal is by showing that it contains a closed and discrete subspace of size $\mathfrak{c}$. Explicitly, for every $a\in (0,1)$ define the following function: $$f_a(x)=\begin{cases} 0 \ \ \ & x\in(-\infty,-1) \\
1 &x\in[-1,-a)\\
0 &x \in [-a,a)\\
1 &x\in [a,1)\\
0 &x\in [1,\infty)
\end{cases}$$ In this way, $\lbrace f_a \ | \ a\in(0,1)\rbrace$ is a closed and discrete subspace of $C_p(\mathbb{S})$ with cardinality $\mathfrak{c}$ (a proof of this fact can be found in [@BLOG]).\
We can replicate the previous construction on any interval which carries the Sorgenfrey topology, so we get the following partial results:
**Proposition 9**. *Let $A\subseteq \mathbb{R}$. If there exist $a<b$ such that $[a,b]\subseteq \mathbb{R}\setminus A$, then $C_p(H(A))$ contains a closed and discrete subspace of cardinality $\mathfrak{c}$, and thus is not a normal space.*
Containing a non-trivial closed interval is equivalent to containing a non-trivial open interval, so:
**Proposition 10**. *Let $A\subseteq \mathbb{R}$. If $\text{int}_e(\mathbb{R}\setminus A)\neq \emptyset$, then $C_p(H(A))$ contains a closed and discrete subspace of cardinality $\mathfrak{c}$, and thus is not a normal space.*
Particularly, if $A$ is a closed set (and it is not $\mathbb{R}$) then $C_p(H(A))$ is not a normal space.\
From here on, the idea is to gradually weaken the condition of containing intervals as much as possible to reach a more general result. Let's start with the following:
**Proposition 11**. *Let $A\subseteq \mathbb{R}$ be such that $B=\mathbb{R}\setminus A$ is symmetric, meaning that if $x\in B$, then $-x\in B$, and $|B\cap (0,1)|=\mathfrak{c}$. Additionally, let's assume that $B\cap (0,1)$ is dense in $(0,1)$ and there exists $q\in B\cap [1,\infty)$. Then $C_p(H(A))$ contains a closed and discrete subset of size $\mathfrak{c}$, and therefore it is not a normal space.*
Let $B=\mathbb{R}\setminus A$. For each $a\in (0,1)\cap B$ we define:
$$f_a(x)=\begin{cases} 0 \ \ \ & x\in(-\infty,-q) \\
1 &x\in[-q,-a)\\
0 &x \in [-a,a)\\
1 &x\in [a,q)\\
0 &x\in [q,\infty)
\end{cases}$$
By defining $f_a$ in this way, we note that the "jumps\" occur at points where the local topology is that of the Sorgenfrey line, so we do not break continuity in $H(A)$; hence the family $\mathcal{F}=\lbrace f_a \ | \ a\in B\cap (0,1)\rbrace$ is contained in $C_p(H(A))$. Let's proceed to show that it is indeed a closed and discrete set.\
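For concreteness, $f_a$ can be realized as an executable step function. This is a minimal sketch of ours (the function name `f` and the parameter values $q=1.5$, $a=0.5$, $b=0.3$ are illustrative choices assumed to lie in $B$), checking the two value comparisons used in the discreteness argument:

```python
def f(a, q, x):
    """The step function f_a from the construction: it jumps at -q, -a, a, q,
    all of which are assumed to lie in B (the complement of A)."""
    if x < -q:
        return 0
    if x < -a:
        return 1
    if x < a:
        return 0
    if x < q:
        return 1
    return 0

q = 1.5          # sample point of B ∩ [1, ∞) (our choice, for illustration)
a, b = 0.5, 0.3  # sample points of B ∩ (0, 1) with b < a

# Case 1 (b < a): f_b(-a) = 1 while f_a(-a) = 0, so f_b leaves the
# basic neighbourhood [f_a, {a, -a}, 1/2].
assert f(b, q, -a) == 1 and f(a, q, -a) == 0
# Case 2 (a < b, here b = 0.7): f_b(a) = 0 while f_a(a) = 1.
assert f(0.7, q, a) == 0 and f(a, q, a) == 1
```

The assertions mirror exactly the two separation cases: in each one, $f_a$ and $f_b$ differ by $1$ at one of the points $\pm a$, so they cannot both lie in a basic neighbourhood of radius $\frac{1}{2}$ around those points.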
Let $a\in B\cap (0,1)$. Let's consider the basic open set $V=\left[f_a,\lbrace a,-a\rbrace, \frac{1}{2}\right]$ and verify the following cases:\
**Case 1.-** $0<b<a<1$. Here we have that $f_b(-a)=1$, while $f_a(-a)=0$, so $f_b\notin V$.\
**Case 2.-** $0<a<b<1$. This time $f_b(a)=0$ but $f_a(a)=1$, and so, once again, we conclude that $f_b\notin V$.\
From the previous cases we conclude that $V\cap \mathcal{F}=\lbrace f_a\rbrace$, so $\mathcal{F}$ is a discrete subset of $C_p(H(A))$. Proving that it is closed is the difficult part. Note that our functions only take values in $\lbrace 0,1\rbrace$: if $g\in C_p(H(A))$ is such that $g(z)\notin \lbrace 0,1\rbrace$ for some $z\in \mathbb{R}$, then $U=[g,\lbrace z\rbrace,\min \lbrace |g(z)|,|1-g(z)|\rbrace]$ satisfies $U\cap \mathcal{F}=\emptyset$. By this argument, we can suppose wlog that $g$ only takes the values $0$ or $1$. Once again, we have two cases:\
**Case 1.-** There exists $a\in B\cap (0,1)$ such that $g(a)\neq g(-a)$. Here we have two sub cases:\
**Sub case 1.1.-** $g(a)=0$ and $g(-a)=1$. Let's take $V=[g,\lbrace \pm a\rbrace, \frac{1}{2}]$. Suppose there exists $b\in B\cap(0,1)$ such that $f_b\in V$. Then we have that $f_b(a)\in (-\frac{1}{2},\frac{1}{2})$, that is, $f_b(a)=0$; analogously, $f_b(-a)=1$. Now $f_b(a)=0$ and $a>0$ imply that $a\in (0,b)$, so $a<b$. On the other hand, $f_b(-a)=1$ implies that $-a\in[-q,-b)$, so $-a<-b$ and thus $a>b$. Since these two inequalities contradict each other, we deduce that such $b$ cannot exist, so $V\cap \mathcal{F}=\emptyset$.\
**Sub case 1.2.-** $g(a)=1$ and $g(-a)=0$. Once again take $V=[g,\lbrace \pm a \rbrace, \frac{1}{2}]$ and suppose there exists $b\in B\cap (0,1)$ such that $f_b\in V$. Reasoning as in the previous paragraph, we deduce that $f_b(a)=1$ and $f_b(-a)=0$, so $a\in [b,q)$ and $-a\in [-b,b)$, that is, $b\leq a$ and $a\leq b$, thus $a=b$. With this we obtain $V\cap \mathcal{F}\subseteq \lbrace f_a \rbrace$; since $C_p(H(A))$ is Hausdorff, we can find an open set $U$ such that $g\in U$ but $f_a\notin U$, so $V\cap U\cap \mathcal{F}=\emptyset$ and we conclude this sub case.\
**Case 2.-** For all $y\in B\cap(0,1)$, $g(y)=g(-y)$. Once more, we have sub cases:\
**$g$ is constant in $(0,1)$**\
**Sub case 2.1.1-** $g(x)=0$ for all $x\in (0,1)$.\
By continuity we obtain $g(-1)=0$, now let's define $V=[g,\lbrace -1\rbrace,\frac{1}{2}]$. For every $b\in B\cap (0,1)$, $f_b(-1)=1$ so $f_b\notin V$ and we conclude.\
**Sub case 2.1.2-** $g(x)=1$ for all $x\in (0,1)$.\
It suffices to take $V=[g,\lbrace 0\rbrace, \frac{1}{2}]$ and reason in the same way as the last sub case noting that $f_b(0)=0$ for all $b\in B\cap (0,1)$.\
**$g$ is not constant in $(0,1)$**\
**Claim:** There exist $c,d\in B\cap (0,1)$ such that $g(c)\neq g(d)$.\
Since $g$ is not constant in $(0,1)$ there exist $x,y\in (0,1)$ such that $g(x)\neq g(y)$; by continuity we can find $\varepsilon>0$ such that $g$ is constant on $[x,x+\varepsilon)$ and on $[y,y+\varepsilon)$ (it does not matter whether the points carry the Euclidean or the Sorgenfrey topology). Thanks to density we can find $c\in [x,x+\varepsilon)\cap B\cap (0,1)$ and $d\in[y,y+\varepsilon)\cap B\cap (0,1)$; it follows that $g(c)\neq g(d)$. Moreover, without loss of generality $c<d$.\
**Sub case 2.2.1** $g(c)=0$ and $g(d)=1$.\
Let's define $\mathcal{U}=\lbrace a \in B\cap (0,d) \ | \ g(a)=0\rbrace$ and let $u= \sup \mathcal{U}$. Notice that $u\leq d$; let's see that $u\in B$. Suppose that $u\notin B$. Since $g$ is continuous, there exists $\varepsilon>0$ such that $g[(u-\varepsilon,u+\varepsilon)]\subseteq \lbrace g(u)\rbrace$; taking $z\in (u-\varepsilon,u)\cap \mathcal{U}$ gives $0=g(z)=g(u)$, and note that $u+\varepsilon\leq d$, since otherwise $g(d)=g(u)=0$, contradicting $g(d)=1$. By density of $B$ we find $w\in B\cap (0,1)\cap (u,u+\varepsilon)$, and thus $g(w)=0$, so $w\in\mathcal{U}$ with $w>u$, a contradiction. We conclude that $u\in B$.\
Moreover, $g(u)=1$: otherwise, since $g$ is constant on the Sorgenfrey neighborhood $[u,u+\varepsilon)$ for some $\varepsilon>0$, density would give a point $w\in (u,d)\cap B\cap (0,1)$ with $g(w)=0$, contradicting the choice of $u$ (if $u=d$ there is nothing to do, since $g(d)=1$). Construct an increasing sequence $(x_n)\subseteq \mathcal{U}$ converging to $u$; it follows that $g(x_n)=0$ for all $n$. Then $(-x_n)$ is a decreasing sequence converging to $-u$ with $g(-x_n)=g(x_n)=0$, so by continuity $g(-u)=0$; but, since $g(u)=1$ and $g(u)=g(-u)$, we also have $g(-u)=1$, a contradiction. We deduce that this case is impossible.\
**Sub case 2.2.2** $g(c)=1$ and $g(d)=0$.\
Analogously to the previous case, we define $\mathcal{U}=\lbrace a \in B\cap (0,d) \ | \ g(a)=1\rbrace$ and take $u=\sup \mathcal{U}$; once again $u\leq d$. Suppose $u\notin B$. By continuity there exists $\varepsilon>0$ such that $g[(u-\varepsilon,u+\varepsilon)]\subseteq \lbrace g(u)\rbrace$. Since there exists $z\in \mathcal{U}\cap (u-\varepsilon,u)$ we conclude that $g(u)=1$; note also that $u+\varepsilon\leq d$, since otherwise $d\in(u-\varepsilon,u+\varepsilon)$ would give $g(d)=1$. Now take $w\in B\cap (0,1)\cap (u,u+\varepsilon)$; then $g(w)=1$, so $w\in\mathcal{U}$ with $w>u$, a contradiction. We conclude $u\in B$. From here onward everything proceeds analogously to the previous case to get a contradiction.\
Since the last two sub cases lead to contradictions, we conclude that the condition $g(a)=g(-a)$ for all $a\in B$ implies that $g$ is constant in $(0,1)$, a case that we solved previously.
The crucial parts for the construction were as follows, let $B=\mathbb{R}\setminus A$:
- $B$ is symmetric.
- $B$ is dense in some interval, say $(0,b)$.
- There exists $q\in [b,\infty)\cap B$.
Reading carefully, we note that the second condition implies the third in the following sense: if $B\cap(0,b)$ is dense in $(0,b)$, then $B\cap (0,y)$ is dense in $(0,y)$ for every $y<b$, and we know there exists $q\in (y,b)\cap B$ since $(y,b)$ is open in $(0,b)$.\
We can generalize the result by extending the possible symmetries. Remember that the reflection with respect to a point $a \in \mathbb{R}$ is given by $r_a(x) = 2a - x$, and all these reflections satisfy the following property: they transform increasing sequences "on the right of $a$" into decreasing sequences "on the left of $a$". This is a key argument for the final part of the previous proof.\
We generalize the symmetry condition with the following definition:
**Definition 1**. *Let $A \subseteq \mathbb{R}$ and $z \in \mathbb{R}$. We say that $A$ is symmetric with respect to $z$ if for every $x \in A$, $r_z(x) \in A$.*
Also let's recall the following classic results ([@libroCp]):
**Proposition 12**. *If $C_p(X)$ is normal, then it is collection-wise normal.*
**Proposition 13**. *Let $X$ be collection-wise normal and ccc. Then $e(X)=\aleph_0$.*
The preceding results allow us to improve our reasoning: we do not need a closed and discrete set of cardinality $\mathfrak{c}$; we only need it to be uncountable.\
With these new ideas we can improve Proposition 6 in the following way:
**Theorem 9**. *Let $A\subseteq \mathbb{R}$ and $B=\mathbb{R}\setminus A$, and suppose that there exists an interval $I=(z-a,z+a)$ such that:*
- *$B\cap I$ is dense in $I$.*
- *$B\cap I$ is symmetric with respect to $z$.*
- *$|B\cap I|>\aleph_0$*
*Then $C_p(H(A))$ contains a closed and discrete subspace of size $|B\cap I|$ and thus is not normal.*
While the conditions of the theorem may seem difficult to achieve, it is relatively simple to construct sets $B$ that satisfy them. For example, we start with $C = \mathbb{Q} \cap (-1,1) \setminus \lbrace 0\rbrace$. This set is already dense in $(-1,1)$ and symmetric with respect to zero. We can then add as many irrationals as we want while maintaining symmetry, and the result will still be dense. In particular, we have the following:
**Proposition 14**. *For every $\aleph_0 < \kappa \leq \mathfrak{c}$, there exists $B \subseteq \mathbb{R}$ that satisfies the conditions of the previous theorem and has a cardinality of $\kappa$. Moreover, there exist $\mathfrak{c}$ distinct sets that satisfy these conditions.*
It suffices to perform the same construction while varying the symmetry point. In other words, we start with $\mathbb{Q} \cap (z-1,z+1) \setminus \lbrace z\rbrace$ and then add the desired irrationals. These sets will be distinct because the initial set of rationals is different for each $z$.
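For concreteness, here is a minimal instance of the construction with symmetry point $z=0$: choose any set $V$ of irrationals in $(0,1)$ with $|V|=\kappa$ and put $$B \;=\; \bigl(\mathbb{Q}\cap(-1,1)\setminus\lbrace 0\rbrace\bigr)\cup V\cup(-V).$$ Then $B$ is dense in $(-1,1)$ (it already contains the rationals of the interval), symmetric with respect to $0$, and of cardinality $\kappa$, so the theorem applies to $A=\mathbb{R}\setminus B$.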
Theorem 3 allows us to construct closed and discrete sets by leveraging the density and symmetry of a sufficiently large set. Now, let's maintain the symmetry but shift to the opposite side by considering sets that are nowhere dense.
Let $\Delta$ be the Cantor set with its usual construction, that is, $\Delta = \bigcap_{n=1}^\infty C_n$ where each $C_n$ is the union of $2^n$ intervals of length $\frac{1}{3^n}$. With this in mind, we will say that $u \in (0,1)$ is a left (right) endpoint of $\Delta$ if there exists $n \in \mathbb{N}$ such that $u$ is the left (right) endpoint of one of the intervals that form $C_n$. Furthermore, we write $[0,1]\setminus\Delta = \bigcup_{n=1}^\infty I_n$, where the $I_n$ are the intervals discarded when constructing $\Delta$. We will need the following results concerning $\Delta$.
**Lemma 10**. *Let $u\in \Delta$. Then:*
- *$u$ is a left endpoint of $\Delta$ iff there exists $u^- \in \Delta$ such that $u^-<u$ and $(u^-,u)\cap \Delta=\emptyset$.*
- *$u$ is a right endpoint of $\Delta$ iff there exists $u^+\in \Delta$ such that $u^+>u$ and $(u,u^+)\cap \Delta = \emptyset$.*
- *If $u$ is not a left endpoint, then there exists $(x_n)\subseteq \Delta$ strictly increasing sequence such that $x_n\rightarrow u$ (in the euclidean sense). Moreover, for every $\varepsilon>0$, $(u-\varepsilon,u)\cap\Delta \neq \emptyset$.*
- *If $u$ is not a right endpoint, then there exists $(x_n)\subseteq \Delta$ strictly decreasing sequence such that $x_n\rightarrow u$ (in the euclidean sense). Moreover, for every $\varepsilon>0$, $(u,u+\varepsilon)\cap\Delta \neq \emptyset$.*
Explicitly, $u^+$ is the left endpoint of the next interval that forms $C_n$ (ordered in increasing order), and similarly for $u^-$.
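As a quick illustration: $u=\tfrac{1}{3}$ is a right endpoint of $\Delta$ (it is the right endpoint of $[0,\tfrac{1}{3}]$ in $C_1$), with witnessing point $$u^{+}=\tfrac{2}{3},\qquad \bigl(\tfrac{1}{3},\tfrac{2}{3}\bigr)\cap\Delta=\emptyset,$$ while $u=\tfrac{2}{3}$ is a left endpoint, on which points of $\Delta$ accumulate only from the right.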
**Theorem 11**. *Let $A\subseteq \mathbb{R}$ such that $\Delta \subseteq \mathbb{R}\setminus A$. Then $C_p(H(A))$ contains a closed and discrete set of cardinality $\mathfrak{c}$ and thus is not normal.*
Let $B=[\frac{1}{2}, 1) \cap \Delta$ and $f(x) = 1 - x$, which is the reflection with respect to $\frac{1}{2}$. Note that $f[\Delta] = \Delta$. For each $b \in [\frac{1}{2}, 1) \cap \Delta$, we define a function as follows:
$$g_b(x)=\begin{cases} 0 \ \ \ & x\in(-\infty,0) \\
1 &x\in[0,f(b))\\
0 &x \in [f(b),b)\\
1 &x\in [b,1)\\
0 &x\in [1,\infty)
\end{cases}$$ Note that the "jumps" of $g_b$ occur at points in $\Delta$, so continuity is preserved. Therefore, $\mathcal{F}=\lbrace g_b \ | \ b\in B\rbrace \subseteq C_p(H(A))$. Additionally, by construction, $g_b$ is constant on $I_n$ for all $n \in \mathbb{N}$. Let's see that $\mathcal{F}$ is closed and discrete in $C_p(H(A))$.\
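For a concrete example, take $b=\tfrac{2}{3}\in\Delta\cap[\tfrac{1}{2},1)$, so $f(b)=\tfrac{1}{3}$; the jumps of $g_{2/3}$ occur exactly at $$0,\qquad \tfrac{1}{3},\qquad \tfrac{2}{3},\qquad 1,$$ all of which belong to $\Delta\subseteq\mathbb{R}\setminus A$ and hence carry Sorgenfrey neighborhoods, on each of which $g_{2/3}$ is constant to the right.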
Let's see that $\mathcal{F}$ is discrete.\
Given $b\in B=\Delta \cap [\frac{1}{2},1)$, we define $U=[g_b,\lbrace b,f(b)\rbrace,\frac{1}{2}]$. Let's take $c\in B$ with $c\neq b$; we have the following two cases:
- **$c<b$.** Then $f(b)<f(c)$ and thus $g_c(f(b))=1$, but $g_b(f(b))=0$, which implies $g_c\notin U$.
- **$c>b$.** This time $g_b(b)=1$ while $g_c(b)=0$; once again, $g_c\notin U$.
From these two cases we deduce that $U\cap\mathcal{F}=\lbrace g_b\rbrace$ and thus $\mathcal{F}$ is a discrete subspace.\
Now we only need to show that $\mathcal{F}$ is closed. For this, we will consider the following cases, let $g\in C_p(H(A))$.\
**Case 1.** $g[\mathbb{R}]$ is not contained in $\lbrace 0,1\rbrace$.\
It follows that there exists $x\in \mathbb{R}$ such that $g(x)\notin \lbrace 0,1\rbrace$, so we can define $$\varepsilon=\min \lbrace |g(x)|, |1-g(x)|\rbrace>0.$$ Under these conditions, take $U=[g,\lbrace x\rbrace,\varepsilon]$; it follows that for every $b\in B$, $|g_b(x)-g(x)|\geq \varepsilon$ and thus $g_b\notin U$.\
Henceforth, we assume that $g[\mathbb{R}]\subseteq \lbrace 0,1\rbrace$.\
**Case 2.-** $g$ is not constant on the intervals discarded during the construction of $\Delta$.\
Then there exists $n \in \mathbb{N}$ such that $g$ is not constant on $I_n$, so there exist $x,y\in I_n$ such that $g(x)=0$ and $g(y)=1$. Let's take $U=[g,\lbrace x,y\rbrace,\frac{1}{2}]$. For every $b\in B$, since $g_b$ is constant in $I_n$, one of the quantities $|g(x)-g_b(x)|$ or $|g(y)-g_b(y)|$ is exactly $1$ and thus $g_b\notin U$.\
From here onwards, we will also assume that $g$ is constant on each interval $I_n$.\
**Case 3.-** There exists $a\in B$ such that $g(a)\neq g(f(a))$.\
First let's suppose $g(a)=0$ and $g(f(a))=1$. Let $U=[g,\lbrace a,f(a)\rbrace, \frac{1}{2}]$ and $b\in B$ with $b\neq a$; then:
- If $b<a$, we get $g_b(a)=1$.
- If $b>a$, then $f(b)<f(a)$, so $g_b(f(a))=0$.
Both cases lead us to $g_b\notin U$ and thus $\mathcal{F}\cap U\subseteq \lbrace g_a\rbrace$, since the space is Hausdorff this suffices. The case $g(a)=1$ and $g(f(a))=0$ is analogous.\
We add to our list of assumptions $g(a)=g(f(a))$ for all $a\in B$.\
**Case 4.-** $g$ is constant in $B$.\
- $g(x)=0$ for all $x\in B$. By continuity and the fact that $g(f(b))=g(b)$, we deduce that $g(0)=0$. We take $V=[g,\lbrace 0\rbrace,\frac{1}{2}]$; since $g_b(0)=1$ for all $b\in B$, it follows that $g_b\notin V$.
- $g(x)=1$ for all $x\in B$. In this case we take $V=[g,\lbrace \frac{1}{2}\rbrace,\frac{1}{2}]$. Note first that $g(\frac{1}{2})=1$: since $\frac{1}{3}=f(\frac{2}{3})$ and $\frac{2}{3}\in B$, we have $g(\frac{1}{3})=g(\frac{2}{3})=1$, and $g$ is constant on $I_1=(\frac{1}{3},\frac{2}{3})$ and right-continuous at $\frac{1}{3}$ (a Sorgenfrey point, as $\frac{1}{3}\in\Delta$), so $g(\frac{1}{2})=1$. On the other hand, every $b\in B$ satisfies $b\geq\frac{2}{3}$, so $f(b)\leq\frac{1}{3}<\frac{1}{2}<b$ and $g_b(\frac{1}{2})=0$; thus $g_b\notin V$.
In both cases we conclude. Lastly,\
**Case 5.-** $g$ is not constant on $B$.\
First let's suppose that there exist $a,b\in B$ with $b<a$ such that $g(b)=0$ and $g(a)=1$ (the other case is analogous). Let's define the following set $$W=\lbrace c\in \Delta \ | \ c<a \ \& \ g(c)=0\rbrace.$$ Since $b\in W$, the set $W$ is non empty; on the other hand, $a$ is an upper bound for $W$, thus $u=\sup W$ exists, $b\leq u\leq a$, and $u$ is in $\Delta\cap [\frac{1}{2},a]$ since this set is closed in the euclidean topology.\
**Claim:** $u$ is not a left endpoint.\
Suppose that it is a left endpoint and let's see that this implies $u \neq a$. If $u = a$ and $u$ is a left endpoint, then $u^-$ is an upper bound for $W$ (since $g(u) = g(a) = 1$), which contradicts the choice of $u$. Now, since $u$ is a left endpoint, we can take a strictly decreasing sequence $(x_n) \subseteq B$ that converges to $u$ in the usual sense. Moreover, without loss of generality, we can assume that $x_n \in (u, a)$ for all $n \in \mathbb{N}$, which in turn allows us to assume that $g(x_n) = 1$ for all $n$ (otherwise, it would contradict the choice of $u$). The continuity of $g$ then implies that $g(u) = 1$, which again leads us to $u^-$ being an upper bound for $W$. Thus, $u$ cannot be a left endpoint.\
**Claim:** $g(u)=0$.\
Suppose $g(u) = 1$. Since $u$ is not a left endpoint, we can take a strictly increasing sequence $(x_n) \subseteq \Delta \cap [\frac{1}{2},1)$ that converges to $u$ in the usual sense. Moreover, as $u = \sup W$ and $u \notin W$, we can choose the sequence in such a way that $g(x_n) = 0$ for all $n$. It follows that $f(x_n)$ is a strictly decreasing sequence converging to $f(u)$ and such that $g(f(x_n)) = g(x_n) = 0$ for all $n \in \mathbb{N}$. This implies that $g(f(u)) = 0$ by continuity, and therefore $0 = g(f(u)) = g(u) = 1$, which is impossible. We conclude that $g(u) = 0$, which in turn allows us to conclude that $u < a$.\
**Claim:** $u$ is a right endpoint.\
Suppose this does not happen. By continuity, there exists $\varepsilon > 0$ such that $g[[u,u+\varepsilon)]\subseteq \lbrace 0\rbrace$, since $u$ is not a right endpoint $$(u,u+\varepsilon)\cap (u,a)\cap \Delta\neq \emptyset$$ which contradicts the fact that $u$ is the supremum of $W$.\
**Claim:** There does not exist a strictly increasing sequence $(x_n) \subseteq \Delta$ converging to $u$ such that $g(x_n) = 1$ for all $n \in \mathbb{N}$. That is, there exists $\varepsilon>0$ such that $g[(u-\varepsilon,u)]\subseteq \lbrace 0\rbrace$.\
Suppose that such a sequence exists. Then, $(f(x_n))$ is a strictly decreasing sequence converging to $f(u)$. By the continuity of $g$, we can conclude that $1 = g(f(u)) = g(u) = 0$, which is a contradiction. From this point forward in the proof, we will use this $\varepsilon$.\
Let's take $y\in(u-\varepsilon,u)\cap \Delta$, which is possible since $u$ is not a left endpoint. Moreover, since $u$ is a right endpoint, we can consider $u^+$; furthermore, we have that $u^+\leq a$ and $g(u^+)=1$. In order to finish the proof, let's prove that $U=[g,\lbrace u,u^+,y\rbrace,\frac{1}{2}]$ satisfies $$U\cap \mathcal{F}\subseteq \lbrace g_u,g_{u^+}\rbrace,$$ which allows us to conclude since $C_p(H(A))$ is Hausdorff.\
Let $c\in \Delta\cap[\frac{1}{2},1)$. We have three final sub cases:\
- $c<y$. In this case $g_c(y)=1$ and $g(y)=0$.
- $c\in[y,u)$. Here $g_c(u)=1$ and $g(u)=0$.
- $c>u^+$. We have $g_c(u^+)=0$ but $g(u^+)=1$.
In any case we deduce that $g_c\notin U$; since $(u,u^+)\cap\Delta=\emptyset$, the only remaining possibilities are $c=u$ and $c=u^+$, and so we can conclude.
The previous theorem has quite strong conclusions. Let's start by recalling a couple of results. The first one is a classical result concerning analytic spaces, and the second one is a theorem by Sorgenfrey.
**Theorem 12**. *Let $B \subseteq \mathbb{R}$ be an analytic and uncountable set. Then $B$ contains a set homeomorphic to the Cantor set $\Delta$.*
**Theorem 13**. *Let $C, D \subseteq \mathbb{R}$ be compact nowhere dense sets. Then there exists an order isomorphism $f: C \rightarrow D$ between them. In particular, any two Cantor sets in $\mathbb{R}$ are order isomorphic.*
Thanks to these two results, we have that if $\mathbb{R} \setminus A$ is an uncountable analytic set (for example, uncountable and open, closed, etc.), then it contains a copy of the Cantor set, let's call it $C$. Moreover, we can provide an order isomorphism between $\Delta$ and $C$, which allows us to "transfer" our construction on the usual Cantor set to this case in the following way:
Let $g$ be the order isomorphism. Since $g\left(\frac{1}{3}\right) < g\left(\frac{2}{3}\right)$, we can take an intermediate point, say $c\in\left(g\left(\frac{1}{3}\right), g\left(\frac{2}{3}\right)\right)$; note that $c\notin C$, since $g$ preserves order and $\left(\frac{1}{3},\frac{2}{3}\right)\cap\Delta=\emptyset$. This point will act as a "division" point, similar to how $\frac{1}{2}$ did originally. Now, we define $D = C \cap [c, \infty) \setminus \lbrace g(1)\rbrace$ and $I = (-\infty, c] \cap C \setminus \lbrace g(0)\rbrace$. These two sets are disjoint, and we have $C = D \cup I \cup \lbrace g(0), g(1)\rbrace$. We also need the following auxiliary function $\phi: C \rightarrow C$ given by $\phi(g(x)) = g(1-x)$. Since $g$ is bijective, $\phi$ is well-defined and bijective. Moreover, the fact that $g$ preserves the order implies that $\phi[D] = I$. Notice that $\phi$ transforms increasing sequences in $D$ into decreasing sequences in $I$. This function acts as a "pseudo-symmetry," taking on the role of $f(x) = 1-x$ in the original construction.\
The only thing we haven't reinterpreted is the concept of left or right endpoint, but we can do that very simply. A left/right endpoint of $C$ is the image under $g$ of a corresponding type of endpoint in $\Delta$. For example, $g\left(\frac{2}{3}\right), g\left(\frac{2}{9}\right)$ are left endpoints of $C$. All the conclusions of Lemma 1 can be naturally adapted to this definition using the fact that $g$ preserves order and is a homeomorphism.\
Finally, for each $d \in D$, we define
$$h_d(x)=\begin{cases} 0 \ \ \ & x\in(-\infty,g(0)) \\
1 &x\in[g(0),\phi(d))\\
0 &x \in [\phi(d),d)\\
1 &x\in [d,g(1))\\
0 &x\in [g(1),\infty)
\end{cases}$$
Under this reinterpretation of the key concepts used in the proof of Theorem 4, we can replicate that proof for this more general case by making the corresponding replacements to conclude that $\lbrace h_d \ | \ d \in D\rbrace$ is a closed and discrete set in $C_p(H(A))$. Therefore, we can state the following result:
**Theorem 14**. *Let $A\subseteq \mathbb{R}$ such that $\mathbb{R}\setminus A$ is uncountable and analytic. Then $C_p(H(A))$ is not normal.*
Unfortunately, the previous theorem is not an "if and only if" statement, as the following example shows:
**Example 2**. *Let $B$ be a Bernstein set that is also an additive subgroup of $\mathbb{R}$ (see [@bernstein] and [@bernstein2]). It follows that $B$ is an uncountable set that is symmetric with respect to $0$, dense in $\mathbb{R}$, but not analytic. Therefore, by Proposition [Proposition 11](#Simetricos densos){reference-type="ref" reference="Simetricos densos"}, we can conclude that $C_p(H(\mathbb{R}\setminus B))$ is not normal, even though $B$ is not analytic.*
All these partial advances naturally lead to the following question:
**Question 3**. *If $\mathbb{R}\setminus A$ is uncountable, is $C_p(H(A))$ not normal?*
One of the most significant models of ZF is Solovay's model, in which the Axiom of Choice does not hold fully (only the Axiom of Dependent Choice, DC); furthermore, in this model $\mathbb{R}$ has the Perfect Set Property, meaning that every subset is either countable or contains a perfect set. Since every perfect set is analytic (and uncountable), this implies that every uncountable subset of $\mathbb{R}$ contains a copy of $\Delta$. Therefore, in this model, we have an affirmative answer to the previous question:
**Theorem 15**. *(In Solovay's model) Let $A\subseteq \mathbb{R}$. If $\mathbb{R}\setminus A$ is uncountable, then $C_p(H(A))$ is not normal.*
From the last theorem it follows that if there exists a set $A$ with $\mathbb{R}\setminus A$ uncountable and $C_p(H(A))$ normal, then the Axiom of Choice is needed for its construction. Thus it will not be easy to describe such a set.
---
abstract: |
We prove that an infinite-ended group whose one-ended factors have finite-index subgroups and are in a family of groups with a nonzero multiplicative invariant is not quasi-isometrically rigid. Combining this result with work of the first author proves that a residually-finite multi-ended hyperbolic group is quasi-isometrically rigid if and only if it is virtually free. The proof adapts an argument of Whyte for commensurability of free products of closed hyperbolic surface groups.
author:
- Nir Lazarovich
- Emily Stark
bibliography:
- mybib.bib
title: Failure of quasi-isometric rigidity for infinite-ended groups
---
# Introduction
Infinite-ended groups display a strong form of quasi-isometric flexibility: Papasoglu--Whyte [@papasogluwhyte] proved the quasi-isometry type of an infinite-ended group depends only on the quasi-isometry types of its one-ended factors. Commensurability classes, on the other hand, are often finer. For example, while there is one abstract commensurability class of closed hyperbolic surface groups, there are infinitely many abstract commensurability classes of their free products. Indeed, Whyte [@whyte1999amenability Theorem 1.6] proved that free products of two hyperbolic surface groups are abstractly commensurable if and only if they have equal Euler characteristic.
We adapt Whyte's argument to free products of groups with a nonzero multiplicative invariant, like Euler characteristic. Consequently, we show quasi-isometric rigidity fails for many infinite-ended groups, where a finitely generated group $G$ is *quasi-isometrically rigid* if any group quasi-isometric to $G$ is abstractly commensurable to $G$, meaning the groups contain isomorphic finite-index subgroups. In particular, by [@lazarovich23], our main result gives the following:
**Theorem 1**. *A residually-finite multi-ended hyperbolic group is quasi-isometrically rigid if and only if it is virtually free.*
Whether all hyperbolic groups are residually finite is an open question of considerable interest in the field. We remark that in the proof of Theorem [Theorem 1](#thm:introhyp){reference-type="ref" reference="thm:introhyp"} residual-finiteness is only used to ensure that the one-ended factors of the multi-ended hyperbolic group have some proper finite-index subgroup.
Quasi-isometric rigidity is a driving problem in geometric group theory. Many well-studied groups have been shown to be quasi-isometrically rigid, including non-abelian free groups [@stallings; @dunwoody1985accessibility], fundamental groups of closed hyperbolic surfaces [@tukia; @gabai; @cassonjungreis], non-uniform lattices in rank-one symmetric spaces [@Schwartz95], mapping class groups [@behrstockkleinerminskymosher], and uniform lattices in certain hyperbolic buildings [@bourdonpajot; @haglund06; @xie06]. Fundamental groups of closed hyperbolic $3$-manifolds and free products of closed hyperbolic surface groups are non-examples, and this paper gives many more. We refer the reader to the book of Drutu--Kapovich [@drutukapovich] for additional background.
We show quasi-isometric rigidity for infinite-ended groups fails in the presence of a suitable *multiplicative invariant* (also known as *volume*, cf. [@reznikov]).
**Definition 2**. Let $\mathcal{C}$ be a family of groups closed under taking finite-index subgroups. A *multiplicative invariant on $\mathcal{C}$* is a function $\rho:\mathcal{C}\rightarrow \mathbb{R}$ so that if $G \in \mathcal{C}$ and $H$ is a finite-index subgroup of $G$, then $\rho(H) = [G:H]\rho(G)$.
**Example 3**. The following are examples of multiplicative invariants.
1. Euler characteristic for the family of groups with a finite classifying space; see the discussion on generalizations in work of Chiswell [@chiswell];
2. $\ell^2$-Betti numbers for the family of groups with a classifying space with finite skeleta in each dimension; for background, see work of Lück [@luck];
3. Volume for the family of fundamental groups of closed hyperbolic manifolds of dimension $n \geq 3$, by Mostow Rigidity [@mostow_strongrigidity];
4. Volumes arising from an upper or lower volume in the sense of Reznikov [@reznikov Section 1]. Examples include deficiency volume and rank volume [@reznikov], and a volume arising from topological complexity due to Lazarovich [@lazarovich23].
Let $\mathcal{C}$ be a family of one-ended groups and suppose that $\mathcal{C}$ is closed under taking finite-index subgroups. Suppose $\mathcal{C}$ has a multiplicative invariant that is nonzero for every element of $\mathcal{C}$.
**Theorem 4**. *Let $n>1$ and, for $1 \leq i \leq n$, let $G_i \in \mathcal{C}$ and suppose $H_i \leq G_i$ is a proper finite-index subgroup. Then, the groups $G_1* \ldots * G_n$ and $H_1 * \ldots *H_n$ are quasi-isometric and not abstractly commensurable.*
**Corollary 5**. *If $G$ is a finitely presented infinite-ended group that is not virtually free and whose one-ended factors are in $\mathcal{C}$ and contain non-trivial finite-index subgroups, then $G$ is not quasi-isometrically rigid.*
*Proof of Theorem [Theorem 1](#thm:introhyp){reference-type="ref" reference="thm:introhyp"}.* A multi-ended hyperbolic group which is not virtually free has finitely many one-ended factors. If the group is residually finite, then each factor is a residually-finite hyperbolic group and, in particular, has a proper finite-index subgroup. Every hyperbolic group has finite asymptotic dimension [@roe], and so there exists $m$ such that the asymptotic dimension of all the factors is bounded by $m$. Let $\mathcal{C}$ be the family of one-ended hyperbolic groups whose asymptotic dimension is bounded by $m$. By [@lazarovich23 Proof of Theorem A], the family $\mathcal{C}$ has a multiplicative invariant, namely $\underline{C}_{2,m}$, that is nonzero for every element of $\mathcal{C}$. The conclusion now follows from Corollary [Corollary 5](#main cor){reference-type="ref" reference="main cor"}. ◻
The free products $G_1* \ldots * G_n$ and $H_1 * \ldots *H_n$ are quasi-isometric by work of Papasoglu--Whyte [@papasogluwhyte] since $G_i$ and $H_i$ are quasi-isometric for each $i$. It remains to prove the groups are not abstractly commensurable. To do this, given a multiplicative invariant $\rho$ for a family of groups $\mathcal{C}$, we construct a multiplicative invariant $\chi_\rho$ on the family of free products whose non-free free factors are elements of $\mathcal{C}$, and we adapt an argument of Whyte [@whyte1999amenability Theorem 1.6]. We note that Sykiotis [@sykiotis] defines a related multiplicative invariant for infinite-ended groups through a limiting process; our construction is simpler and more direct.
To prove that two groups are quasi-isometric, often one finds geometric actions of the groups on the same proper metric space; in this case, the groups are said to have a *common model geometry*. Theorem [Theorem 4](#thm:intromain){reference-type="ref" reference="thm:intromain"} can be used to exhibit quasi-isometric groups with no common model geometry. Indeed, such examples arise via finitely generated groups that are not quasi-isometrically rigid, but are *action rigid*, meaning that whenever the group and another finitely generated group act geometrically on the same proper metric space, the groups are abstractly commensurable. Many examples of action rigid free products are given in [@starkwoodhouse19; @mssw], including free products of closed hyperbolic manifold groups.
## Acknowledgements {#acknowledgements .unnumbered}
NL was supported by Israel Science Foundation (grant no. 1562/19). ES was supported by NSF Grant No. DMS-2204339.
# Preliminaries {#sec:prelims}
We will use the notion of a graph of spaces and a graph of groups; we refer the reader to [@SerreTrees; @ScottWall79] for additional background.
**Definition 6**. (Graph notation.) A *graph* $\Gamma$ is a set of vertices $V\Gamma$ together with a set of edges $E\Gamma$. An edge $e$ is oriented with an initial vertex $\iota(e)$ and a terminal vertex $\tau(e)$. For every edge $e$ there is an edge $\bar{e}$ with reversed orientation so that $\iota(e) = \tau(\bar{e})$. Further, $e \neq \bar{e}$ and $e = \bar{\bar{e}}$.
We often suppress the orientation of an edge if it is not relevant to a construction.
**Definition 7**. A *graph of spaces* $(X,\Gamma)$ consists of the following data:
1. a graph $\Gamma$ called the *underlying graph*,
2. a connected space $X_v$ for each $v \in V\Gamma$ called a *vertex space*,
3. a connected space $X_e$ for each $e\in E\Gamma$ called an *edge space* such that $X_{\bar{e}} = X_e$, and
4. a $\pi_1$-injective map $\phi_e : X_e \rightarrow X_{\tau(e)}$ for each $e\in E\Gamma$.
The *total space* $X$ is the following quotient space: $$X = \left( \bigsqcup_{v \in V \Gamma} X_v \bigsqcup_{e \in E\Gamma} \bigl( X_e \times [0,1]\bigr) \right) \; \Big/ \; \sim,$$ where $\sim$ is the relation that identifies $X_e \times \{0\}$ with $X_{\bar{e}} \times \{0\}$ via the identification of $X_e$ with $X_{\bar{e}}$, and the point $(x, 1)\in X_e \times [0,1]$ with $\phi_e(x)\in X_{\tau(e)}$.
**Definition 8**. A *graph of groups* $\mathcal{G}$ consists of the following data:
1. a graph $\Gamma$ called the *underlying graph*,
2. a group $G_v$ for each $v \in V\Gamma$ called a *vertex group*,
3. a group $G_e$ for each $e \in E\Gamma$ called an *edge group*, such that $G_e = G_{\bar{e}}$,
4. and injective homomorphisms $\Theta_e: G_e \rightarrow G_{\tau(e)}$ for each $e \in E\Gamma$ (called *edge maps*).
If $\mathcal{G}$ is a graph of groups with underlying graph $\Gamma$, a *graph of spaces associated to $\mathcal{G}$* consists of a graph of spaces $(X,\Gamma)$ with points $x_v \in X_v$ and $x_e \in X_e$ so that $\pi_1(X_v,x_v) \cong G_v$ and $\pi_1(X_e, x_e) \cong G_e$ for all $v \in V\Gamma$ and $e \in E\Gamma$, and the maps $\phi_e:X_e\rightarrow X_{\tau(e)}$ are such that $(\phi_e)_* = \Theta_e$. The *fundamental group* of the graph of groups $\mathcal{G}$ is $\pi_1(X)$. The fundamental group is independent of the choice of $X$ and is denoted by $\pi_1(\mathcal{G})$.
If $G \cong \pi_1(\mathcal{G})$, where $\mathcal{G}$ is a graph of groups, then we may view each vertex group of $\mathcal{G}$ as a subgroup of $G$ (up to conjugacy). Suppose that $G = \pi_1(\mathcal{G}) =\pi_1(X)$, where $\mathcal{G}$ is a graph of groups and $X$ is the corresponding total space. If $G' \leq G$, then $G' = \pi_1(X')$, where $X'$ covers $X$. This cover $q:X' \rightarrow X$ naturally yields a graph of spaces decomposition of $X'$ with underlying graph $\Gamma'$, where the edge and vertex spaces are components of the preimages of the edge and vertex spaces in $X$. Hence, $G'$ is the fundamental group of a corresponding graph of groups $\mathcal{G}'$. Each vertex group of $\mathcal{G}'$ is then conjugate to a subgroup of a vertex group of $\mathcal{G}$.
**Definition 9**. With the notation from the previous paragraph, if $G_v$ is a vertex group of $\mathcal{G}$, we say a vertex group $G_u'$ of $\mathcal{G}'$ *covers* $G_v$ if $G_u'$ is conjugate to a subgroup of $G_v$ via the covering of the corresponding graphs of spaces.
We make use of the following observation.
**Observation 10**. Suppose $\mathcal{C}$ is a family of groups closed with respect to taking finite-index subgroups and with a multiplicative invariant $\rho$. Suppose $\mathcal{G}$ is a graph of groups with each vertex group in $\mathcal{C}$ and $G = \pi_1(\mathcal{G})$. Suppose $G'$ is an index-$k$ subgroup of $G$ and $\mathcal{G}'$ is its corresponding graph of groups decomposition. Then for each vertex group $G_v \in \mathcal{G}$, if $K = \{G'_{u_1}, \ldots, G'_{u_n}\}$ is the collection of vertex groups of $\mathcal{G}'$ covering $G_v$, then $$\sum_{i=1}^n \rho(G'_{u_i}) = k \rho(G_v).$$
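As a hedged illustration (taking for $\rho$ the Euler characteristic on closed surface groups, which are one-ended): if $G_v\cong\pi_1(\Sigma_2)$ is the genus-$2$ surface group with $\rho(G_v)=-2$, and $k=2$, then the vertex groups of $\mathcal{G}'$ covering $G_v$ are either a single index-$2$ subgroup, isomorphic to $\pi_1(\Sigma_3)$, or two conjugates of $G_v$ itself; in either case $$\sum_{i}\rho(G'_{u_i}) \;=\; -4 \;=\; 2\,\rho(G_v).$$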
# Multiplicative invariants and commensurability
**Notation 11**. For $r \geq 0$, let $F_r$ denote the free group of rank $r$. Throughout this section let $\mathcal{C}$ be a family of one-ended groups that is closed with respect to taking finite-index subgroups and has a multiplicative invariant $\rho$. Let $\mathcal{C}'$ denote the family of groups of the form $G = G_1 * \ldots * G_n * F_r$ for some $r \geq 0$, $n \geq 0$, and $G_i \in \mathcal{C}$. Equivalently, $G \in \mathcal{C}'$ if $G$ is the fundamental group of a graph of groups $\mathcal{G}$ with trivial edge groups and each vertex group either trivial or in $\mathcal{C}$.
We will work in the setting of free products, rather than the general infinite-ended case, as the arguments are cleaner. See Remark [\[rem:infendedgen\]](#rem:infendedgen){reference-type="ref" reference="rem:infendedgen"} regarding the more general setting.
**Definition 12**. Let $\rho$ be a multiplicative invariant on the family of groups $\mathcal{C}$. Define a function $\chi_{\rho}:\mathcal{C}' \rightarrow \mathbb{R}$ by $$\chi_{\rho}(G_1*\ldots *G_n*F_r) = 1-r-n+ \sum_{i=1}^n \rho(G_i).$$
**Remark 13**. The function $\chi_{\rho}$ is a variant of the Euler characteristic. Viewing $G \in \mathcal{C}'$ as the fundamental group of a graph of groups, each vertex in the graph of groups decomposition labeled $G_i \in \mathcal{C}$ is assigned $\rho(G_i)$ rather than $1$. In particular, if the multiplicative invariant $\rho$ is Euler characteristic, then $\chi_{\rho}$ is as well.
To prove that $\chi_{\rho}$ is a multiplicative invariant, we will use the following.
**Definition 14**. Let $\mathcal{G}$ be a graph of groups with underlying graph $\Gamma$, trivial edge groups, and with nontrivial vertex groups $G_1, \ldots, G_n \in \mathcal{C}$. Define $$\chi_{\rho}(\mathcal{G}) = \chi(\Gamma) - n + \sum_{i=1}^n \rho(G_i).$$
**Lemma 15**. *Let $G = G_1 * \ldots * G_n * F_r \in \mathcal{C}'$, and suppose $G \cong \pi_1(\mathcal{G})$, where $\mathcal{G}$ is a graph of groups with underlying graph $\Gamma$, trivial edge groups, and with nontrivial vertex groups $G_1, \ldots, G_n \in \mathcal{C}$. Then, $\chi_{\rho}(G) = \chi_{\rho}(\mathcal{G})$.*
*Proof.* By definition, $$\begin{aligned}
\chi_{\rho}(G) &=& 1-r-n + \sum_{i=1}^n\rho(G_i), \textrm{ and } \\
\chi_{\rho}(\mathcal{G}) &=& \chi(\Gamma) - n + \sum_{i=1}^n\rho(G_i).
\end{aligned}$$ Thus, we must show $\chi(\Gamma) = 1-r$. Indeed, it follows from the definition of the fundamental group of a graph of groups that $F_r \cong \pi_1(\Gamma)$. So, $\chi(\Gamma) = \chi(F_r) = 1-r$. ◻
**Proposition 16**. *The function $\chi_{\rho}:\mathcal{C}' \rightarrow \mathbb{R}$ is a multiplicative invariant.*
*Proof.* Let $G = G_1 * \ldots * G_n * F_r \in \mathcal{C}'$. Realize $G$ as the fundamental group of a graph of groups $\mathcal{G}$ whose underlying graph $\Gamma$ has vertex set $V\Gamma = \{v_0, \ldots, v_n\}$ and (unoriented) edge set $E\Gamma = \{e_1, \ldots, e_n, a_1, \ldots, a_r\}$, where $e_i = \{v_0, v_i\}$ for $1 \leq i \leq n$, and $a_i$ is a loop based at $v_0$ for $1 \leq i \leq r$. Suppose each edge group is trivial, the vertex group $G_{v_0}$ is trivial, and the vertex group $G_{v_i}$ is $G_i$.
Let $G' \leq G$ be an index-$k$ subgroup. As discussed in Section [2](#sec:prelims){reference-type="ref" reference="sec:prelims"}, $G' \cong \pi_1(\mathcal{G}')$, where $\mathcal{G}'$ is a graph of groups with:
- $k$ vertices with trivial vertex group that each cover $G_{v_0} = \left\langle 1 \right\rangle$,
- $k$ edges with trivial edge group that each cover $G_{a_i} = \left\langle 1 \right\rangle$ for $1 \leq i \leq r$,
- $k$ edges with trivial edge group that each cover $G_{e_i} = \left\langle 1 \right\rangle$ for $1 \leq i \leq n$, and
- for $i \in \{1, \ldots, n\}$, there exist $n_i\in \mathbb{N}_{\geq 1}$ and $n_i$ vertices $v_i^1, \ldots, v_i^{n_i}$ with vertex groups $G_{v_i^j}$ that each cover $G_{v_i}$. Further, $$\sum_{j=1}^{n_i} \rho(G_{v_i^j}) = k\rho(G_i)$$ by Observation [Observation 10](#obs:multpreimage){reference-type="ref" reference="obs:multpreimage"} since $\rho$ is a multiplicative invariant.
Indeed, the underlying graph $\Gamma'$ has $k + \sum_{i=1}^n n_i$ vertices and $k(r+n)$ edges, so $$\chi_{\rho}(\mathcal{G}') = \left( k + \sum_{i=1}^n n_i \right) - k(r+n) - \sum_{i=1}^n n_i + \sum_{i=1}^n \sum_{j=1}^{n_i} \rho(G_{v_i^j}) = k \left( 1 - r - n + \sum_{i=1}^n \rho(G_i) \right) = k \chi_{\rho}(\mathcal{G}).$$ So, by Lemma [Lemma 15](#lemma:defs_agree){reference-type="ref" reference="lemma:defs_agree"}, $\chi_{\rho}(G') = \chi_{\rho}(\mathcal{G}') = k \chi_{\rho}(\mathcal{G}) = k \chi_{\rho}(G)$. ◻
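The bookkeeping in this proof can be sanity-checked numerically. The following Python sketch (an illustration, not part of the proof) takes $\rho$ to be the Euler characteristic of closed surface groups, with a hypothetical choice of covering data; the vertex and edge counts of $\Gamma'$ are exactly those listed above.

```python
from fractions import Fraction

def chi_rho(r, rhos):
    # chi_rho(G_1 * ... * G_n * F_r) = 1 - r - n + sum(rho(G_i))
    return 1 - r - len(rhos) + sum(rhos)

# G = G_1 * G_2 with rho = Euler characteristic: G_1, G_2 closed
# surface groups of genus 2 and 3, so rho(G_1) = -2, rho(G_2) = -4.
rhos, r, k = [-2, -4], 0, 3          # pass to an index-3 subgroup

# Hypothetical rho-values of the covering vertex groups; by
# Observation 10 they must sum to k * rho(G_i) for each i.
covers = [[-2, -2, -2], [-12]]
assert all(sum(c) == k * rho for c, rho in zip(covers, rhos))

# Counts for G' as in the proof of Proposition 16:
n = len(rhos)                         # nontrivial vertex groups of G
n_p = sum(len(c) for c in covers)     # nontrivial vertex groups of G'
vertices = k + n_p                    # k trivial vertices + covers
edges = k * (r + n)                   # each edge has k preimages
r_p = 1 - (vertices - edges)          # free rank: chi(Gamma') = 1 - r'

rhos_p = [x for c in covers for x in c]
assert chi_rho(r_p, rhos_p) == k * chi_rho(r, rhos)   # Proposition 16

# Claim 18: the index is determined by r', n' and m alone.
m = n
assert Fraction(1 - r_p - n_p, 1 - m) == k
```

The same check passes for any splitting of $k\rho(G_i)$ into covering pieces, which is the content of Observation 10.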
As explained in the introduction, Theorem [Theorem 4](#thm:intromain){reference-type="ref" reference="thm:intromain"} follows from the next theorem. The proof is analogous to Whyte's argument in [@whyte1999amenability Theorem 1.6].
**Theorem 17**. *Let $\mathcal{C}$ be a family of groups closed with respect to taking finite-index subgroups and with a multiplicative invariant $\rho$ that is nonzero for every element of $\mathcal{C}$. For $m>1$ and $1 \leq i \leq m$, let $G_i \in \mathcal{C}$, and suppose $H_i \leq G_i$ is a proper finite-index subgroup. Then, the groups $G=G_1* \ldots * G_m$ and $H=H_1 * \ldots *H_m$ are not abstractly commensurable.*
*Proof.* Suppose towards a contradiction that there exist finite-index subgroups $G' \leq G$ and $H' \leq H$ of index $k$ and $k'$, respectively, so that $G' \cong H'$. Suppose that $H' \cong G' \cong G_1' * \ldots* G_n' * F_r$ where $r \geq 0$ and for each $i \in \{1, \ldots, n\}$, the group $G_i'$ covers $G_j$ and $H_{\ell}$ for some $j, \ell \in \{1,\ldots, m\}$. By Proposition [Proposition 16](#prop:prime_mult){reference-type="ref" reference="prop:prime_mult"}, $\chi_{\rho}(G') = k \chi_{\rho}(G)= k' \chi_{\rho}(H)$. We will show that $k = k'$ by relating these numbers to $r,n$, and $m$. This will complete the proof since $\chi_{\rho}(H) < \chi_{\rho}(G)$ because $\rho$ is a multiplicative invariant.
**Claim 18**. $k = k' = \frac{1-r-n}{1-m}$.
*Proof of Claim.* We prove the claim for $k$; the proof for $k'$ is analogous. First, by Proposition [Proposition 16](#prop:prime_mult){reference-type="ref" reference="prop:prime_mult"}, $$\begin{aligned}
\chi_{\rho}(G') &=& k\chi_{\rho}(G) \\
&=& k \left( 1-m + \sum_{i=1}^m \rho(G_i) \right).
\end{aligned}$$ Next, by the definition of $\chi_{\rho}$, $$\begin{aligned}
\chi_{\rho}(G') &=& 1-r-n + \sum_{i=1}^n \rho(G_i').
\end{aligned}$$
For $1 \leq i \leq m$, let $K_i = \{G_{i_j}'\}_{j=1}^{n_i}$ and $i_j \in \{1, \ldots, n\}$ be the collection of free factors of $G'$ so that $G_{i_j}'$ covers $G_i$. Then, for all $1 \leq i \leq m$, $$\sum_{j=1}^{n_i} \rho(G_{i_j}') = k\rho(G_i)$$ by Observation [Observation 10](#obs:multpreimage){reference-type="ref" reference="obs:multpreimage"}. Since the $K_i$ partition the set $\{G_i'\}_{i=1}^n$, Equation (3) can be rewritten as $$\begin{aligned}
\chi_{\rho}(G') &=& 1-r-n + \left( \sum_{j=1}^{n_1} \rho(G_{i_j}') + \ldots + \sum_{j=1}^{n_m} \rho(G_{i_j}') \right) \\
&=& 1-r-n + k\sum_{i=1}^m \rho(G_i).
\end{aligned}$$ Combining Equation (2) and Equation (5) yields $$\begin{aligned}
k \left( 1-m + \sum_{i=1}^m \rho(G_i) \right) &=& 1-r-n + k\sum_{i=1}^m \rho(G_i).
\end{aligned}$$ Thus, $k = \frac{1-r-n}{1-m}$. ◻
Therefore, $k=k'$, completing the proof of the theorem. ◻
**Remark 19**. Note that in the proof of Claim [Claim 18](#claim about index){reference-type="ref" reference="claim about index"} we have not used that the multiplicative invariant $\rho$ is non-zero. In fact, the same proof works for the multiplicative invariant $\rho(G)=1/|G|$ (where $1/\infty =0$).
**Remark 20**. (Weakening the hypotheses.) If $G = G_1 * \ldots * G_m$ and $H = H_1 * \ldots * H_m$ are in the family $\mathcal{C}'$ and $\chi_{\rho}(G)>\chi_{\rho}(H)$, then the proof of the theorem above shows that $G$ and $H$ are not abstractly commensurable. So, for example, one can weaken the hypothesis of the theorem by requiring that $H_i \leq G_i$ is a *proper* finite-index subgroup for only one $i \in \{1, \ldots, m\}$.
The general infinite-ended case now follows from Theorem [Theorem 17](#thm:mainpaper){reference-type="ref" reference="thm:mainpaper"}.
*Proof of Corollary [Corollary 5](#main cor){reference-type="ref" reference="main cor"}.* Suppose $G$ is a finitely presented infinite-ended group. Work of Stallings and Dunwoody [@stallings; @dunwoody1985accessibility] proves that $G$ is the fundamental group of a finite graph of groups with finite edge groups and vertex groups that have at most one end. If $G$ is not virtually free then there is at least one one-ended factor in this decomposition. Suppose the one-ended vertex groups in this decomposition are $G_1, \ldots, G_n$ for some $n\geq 1$ and belong to a family of groups $\mathcal{C}$ that is closed under taking finite-index subgroups. Suppose $\mathcal{C}$ has a multiplicative invariant that is nonzero for every element of $\mathcal{C}$. Suppose further that each one-ended factor of $G$ contains a non-trivial finite-index subgroup. The group $G$ is quasi-isometric to the free product $G_1 * G_1 * \ldots * G_n$ by work of Papasoglu--Whyte [@papasogluwhyte]. Thus, $G$ is not quasi-isometrically rigid by Theorem [Theorem 17](#thm:mainpaper){reference-type="ref" reference="thm:mainpaper"}. ◻
---
abstract: |
  Long linear polymers in a depinned multi-interface environment have been studied for a long time, for instance in [@Caravenna2009depinning] when the temperature is constant. In this paper, we study an extension of this model in which the temperature and the distance between interfaces go to infinity. We give a full phase diagram for this model and show that a new behaviour appears in this extension. Our key tools include new sharp results on the simple random walk evolving between interfaces.
author:
- Elric Angot
bibliography:
- bibli.bib
title: POLYMER IN A MULTI-INTERFACE MEDIUM WITH WEAK REPULSION
---
# Model and notations
Under specific constraints, a random walk can exhibit an unusually confined behavior within a very small region of space, in contrast to its typical diffusive nature when unconstrained. This phenomenon can manifest itself in various models and take different forms. For example, it can be observed in the collapse transition of a polymer when placed in a poor solvent [@Carmona2018], the pinning of a polymer on a defect line [@giacomin2007random] [@Giacomin2006smoothing], and even when random walks are confined by obstacles [@Sznitman1998] or obstacles randomly distributed in their path [@Poisat2022] [@PoisatSimenhaus2020]. These models, often inspired by fields like Biology, Chemistry, or Physics, pose intricate mathematical challenges and have remained a vibrant area of research.
In this paper, we extend the model of [@Caravenna2009depinning] by making the repulsion weaker and weaker and by observing the interplay between the repulsion and the spacing between the interfaces. We consider a (1 + 1)-dimensional model of a polymer depinned from infinitely many equi-spaced horizontal interfaces. The possible configurations of the polymer are modeled by the trajectories of the simple random walk $(i,S_i)_{i \geq 0}$, where $S_0 = 0$ and $\left(S_i - S_{i-1}\right)_{i \geq 1}$ is a sequence of i.i.d. random variables uniformly distributed on $\{-1,1\}$, i.e. $S_1 \leadsto 2\mathscr{B}(1/2)-1$. We denote by $P$ the law of the simple random walk, and $E$ its associated expectation. The polymer receives an energetic penalty $\delta > 0$ every time it touches one of the horizontal interfaces located at heights $\{kT : k \in \mathbb{Z} \}$, where $T \in 4\mathbb{N}$ (we assume this for convenience [^1]). More precisely, the polymer interacts with the interfaces through the following Hamiltonian:
$$H_{N,\delta}^{T} := -\delta \underset{i=1}{\overset{N}{\sum}}\mathbf{1} \{ S_i \in T \mathbb{Z} \} = -\delta \underset{k \in \mathbb{Z}}{\overset{}{\sum}} \underset{i=1}{\overset{N}{\sum}}\mathbf{1} \{ S_i = kT \},
\label{1.1}$$ where $N \in \mathbb{N}$ is the number of monomers constituting the polymer. We introduce the corresponding polymer measure $\mathbf{P}_{N,\delta}^T$:
$$\frac{d \mathbf{P}_{N,\delta}^T}{d P}(S) := \frac{\exp\left(H_{N,\delta}^{T}(S)\right)}{Z_{N,\delta}^{T}},
\label{1.2}$$
where the normalisation constant $Z_{N,\delta}^{T} = E \left[ \exp\left(H_{N,\delta}^{T}(S) \right) \right]$ is the *partition function*. We denote by $E_{N,\delta}^{T}$ the associated expectation. We consider an increasing sequence of integers $(T_N)_{N \in \mathbb{N}} \in (4 \mathbb{N})^{\mathbb{N}}$. The repulsion intensity is given by $( \delta_N)_{N \geq 1}$, such that $\underset{N \longrightarrow \infty}{\lim}\delta_N = 0$. Throughout this document, we take, with $a,b>0$ the parameters of the model and $\beta>0$ fixed:
$$T_N = N^a \text{ and } \delta_N = \frac{\beta}{N^b}.$$
A large part of the paper is dedicated to the case $$\begin{aligned}
& 1/3 < a < 1/2, &\text{ i.e. } & N^{1/3} \ll T_N \ll \sqrt{N} \\
& 3a-1>b, &\text{ i.e. } &N \ll T_N^3 \delta_N,
\end{aligned}
\label{H_0}$$ which is the most fruitful to us (cf. Figure [\[fig: Phase Diagram\]](#fig: Phase Diagram){reference-type="ref" reference="fig: Phase Diagram"}). That being said, all cases will be covered.
Throughout this paper, we use the following notations.
- We will use $c$ to denote a constant whose value is irrelevant and which may change from line to line.
- $u_n = o_n(v_n)$, or $u_n \ll v_n$ if $\underset{n \longrightarrow \infty}{\lim} \frac{u_n}{v_n} = 0$.
- $a_n \sim_n b_n$ if $\frac{a_n}{b_n} \longrightarrow 1$ as $n \longrightarrow \infty$.
- $u_n = O_n(v_n)$ if $\left(\frac{u_n}{v_n}\right)_{n \in \mathbb{N}}$ is bounded.
- $f(n) \asymp_n g(n)$ means that there exists $C>0$ such that for all $n \in \mathbb{N}$, $\frac{1}{C} g(n) \leq f(n) \leq Cg(n)$.
# Main results
Let us first introduce two notations.
**Definition 1**. Let $T \in 2\mathbb{N} \cup \{ \infty \}$ be fixed, and let $(S_n)_{n \in \mathbb{N}}$ be the simple random walk. We define $\left(\tau_k^T\right)_{k \in \mathbb{N}}$ as follows, with $\infty \, \mathbb{Z} := \{0\}$:
$$\tau_0^{T} = 0, \qquad \tau_{k+1}^T = \inf \left\{n > \tau_k^T : S_n \in T \mathbb{Z} \right\}.
\label{4.5000}$$ We define also $\tau^T := \underset{k=0}{\overset{\infty}{\bigcup}} {\tau_k^T}$. Moreover, we define the sequence $(\varepsilon_n^T)_{n \in \mathbb{N}^*}$ as follows: $$\varepsilon_k^T = \frac{S_{\tau_k^T} - S_{\tau_{k-1}^T} }{T}.$$ Last, we define, for $N \in \mathbb{N}$ : $$L_N := \max \left\{ n \in \mathbb{N} : \tau_n^T \leq N \right\}.$$ [\[definition 1\]]{#definition 1 label="definition 1"}
The superscript $T$ is omitted for $L_N$ for notational convenience. Clearly, $\tau_{j}^{T}$ is the $j^{\text {th }}$ time at which $S$ visits an interface and $\varepsilon_{j}^{T}$ tells us whether the $j^{\text {th }}$ interface visited is the same as the $(j-1)^{\text {th }}$ $\left(\varepsilon_{j}^{T}=0\right)$, or the one above $\left(\varepsilon_{j}^{T}=1\right)$ or the one below $\left(\varepsilon_{j}^{T}=-1\right)$. We now split our results in two theorems respectively dealing with [\[H_0\]](#H_0){reference-type="eqref" reference="H_0"} and all other cases.
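The quantities of Definition 1 are straightforward to compute along a sampled trajectory. The following Python sketch (an illustration only, with arbitrary parameter values) records $\tau_k^T$, $\varepsilon_k^T$ and $L_N$ for one simulated walk:

```python
import random

def interface_contacts(N, T, seed=0):
    """Sample a simple random walk of length N and record its contact
    times with the interfaces T*Z, together with the signs epsilon."""
    rng = random.Random(seed)
    S, taus, eps, levels = 0, [0], [], [0]
    for i in range(1, N + 1):
        S += rng.choice((-1, 1))
        if S % T == 0:                 # contact with an interface
            taus.append(i)
            eps.append(S // T - levels[-1])   # -1, 0 or +1
            levels.append(S // T)
    L_N = len(taus) - 1                # number of contacts up to time N
    return taus, eps, L_N

taus, eps, L_N = interface_contacts(N=10_000, T=20)
assert all(t2 > t1 for t1, t2 in zip(taus, taus[1:]))
assert all(e in (-1, 0, 1) for e in eps)
```

Since the walk moves by $\pm 1$, two consecutive contacts are either on the same interface or on adjacent ones, which is why $\varepsilon_k^T \in \{-1,0,1\}$ holds automatically.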
**Theorem 1**. *Under [\[H_0\]](#H_0){reference-type="eqref" reference="H_0"}, for all $\varepsilon > 0$, there exist $\nu> 0$, $M>0$ and $N_0 \in \mathbb{N}$ such that, for $N \geq N_0$: $$\mathbf{P}_{N,\delta_N}^{T_N} \left( {\frac{\nu}{ \delta_N^2}<\tau_{L_N}^{T_N}< \frac{M}{\delta_N^2}} \right) > 1-\varepsilon.
\label{ de terre}$$ Moreover: $$\underset{N \longrightarrow \infty}{\lim} \mathbf{P}_{N,\delta_N}^{T_N} \left( \exists i \leq N : S_i \in T_N\mathbb{Z} \backslash \{0\} \right) =
0.
\label{Deuxieme partie theoreme}$$ [\[main theorem\]]{#main theorem label="main theorem"}*
In other words, in this regime and with high probability, the polymer visits only the interface at the origin and the last contact is at roughly $\frac{1}{\delta_N^2} \ll N$. It eventually gets stuck between the origin and one of the two (above or below) neighbouring interfaces. This extends [@Caravenna2009depinning Theorem 1.1 (3)] to the case $b>0$.
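This picture can be explored numerically at moderate sizes. The sketch below computes the endpoint law under the polymer measure exactly, by a transfer-matrix recursion over the position of the walk; the parameter values ($N = 400$, $a = 0.45$, $b = 0.2$, $\beta = 1$) are hypothetical choices inside the regime of Theorem 1, and no asymptotic claim is attached to them.

```python
import math
from collections import defaultdict

def endpoint_law(N, T, delta):
    """Exact endpoint distribution under the polymer measure: one
    transfer step per monomer, paying e^{-delta} on touching T*Z."""
    v = defaultdict(float); v[0] = 1.0
    for _ in range(N):
        w = defaultdict(float)
        for x, p in v.items():
            for y in (x - 1, x + 1):
                pen = math.exp(-delta) if y % T == 0 else 1.0
                w[y] += 0.5 * p * pen
        v = w
    Z = sum(v.values())                 # partition function Z_{N,delta}^T
    return {x: p / Z for x, p in v.items()}, Z

# Hypothetical parameters in the regime of Theorem 1 (a = 0.45, b = 0.2):
N, beta = 400, 1.0
T = 4 * round(N**0.45 / 4)              # T_N = N^a, rounded into 4N
delta = beta / N**0.2
law, Z = endpoint_law(N, T, delta)
# mass strictly between the two interfaces neighbouring the origin
inside = sum(p for x, p in law.items() if -T < x < T)
```

Inspecting `inside` for growing $N$ gives a (non-rigorous) feel for the localization between neighbouring interfaces.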
**Theorem 2**.
1. *[\[partie 123.1 du théorème\]]{#partie 123.1 du théorème label="partie 123.1 du théorème"} When $a>b>3a-1$, $S_N$ is of order $\sqrt{N/(T_N \delta_N)}$. More precisely, let $Z$ follow a standard normal distribution. Then, uniformly in $u<v \in \overline{\mathbb{R}}$ and $N \in \mathbb{N}$: $$\mathbf{P}_{N,\delta_N}^{T_N} \left( u < \frac{S_N}{\sqrt{\pi^2 N/(T_N \delta_N) }} \leq v \right) \asymp_N P(u < Z \leq v).
\label{123.1.266}$$*
2. *[\[partie 123.2 du théorème\]]{#partie 123.2 du théorème label="partie 123.2 du théorème"} When $1/2>a$ and $b\geq a$, $S_N$ is of order $\sqrt{N}$. More precisely, let $Z$ follow a standard normal distribution, and define $x_\beta$ as the unique solution in $(0,\pi)$ of $x_\beta = \frac{\sin(x_\beta)\beta}{1-\cos(x_\beta)}$. Now, let $\kappa$ be a constant defined as:*
*$$\kappa =
\begin{cases}
1 & \text{if } b > a \\
\sqrt{\frac{x_\beta^3}{\beta(x_\beta + \sin(x_\beta))}} & \text{if } a=b.
\end{cases}$$*
*Then, uniformly in $u<v \in \overline{\mathbb{R}}$ and $N \in \mathbb{N}$: $$\mathbf{P}_{N,\delta_N}^{T_N} \left( u < \frac{S_N}{\kappa \sqrt{N}} \leq v \right) \asymp_N P(u<Z \leq v).
\label{123.1.366}$$*
3. *When $b=1/2$ and $a > 1/2$, $S_N$ is of order $\sqrt{N}$. More precisely, let $Z$ follow a standard normal distribution. Then, uniformly in $u<v \in \overline{\mathbb{R}}$ and $N \in \mathbb{N}$:*
*$$\mathbf{P}_{N,\delta_N}^{T_N} \left( u \leq \frac{S_N}{\sqrt{N}} \leq v \right) \asymp_N P(u < Z \leq v).
\label{123.1.466}$$*
4. *When $b>1/2$ and $a \geq 1/2$, for all $A \in \sigma(X_1,...,X_N)$, with the $o_N(1)$ independent of $A$: $$\mathbf{P}_{N,\delta_N}^{T_N} \left( A \right) = o_N(1) + P(A).
\label{123.1.56}$$*
5. *[\[partie 123.5 du theoreme\]]{#partie 123.5 du theoreme label="partie 123.5 du theoreme"} When $b<1/2$ and $a \geq 1/2$, we have the same results as in Theorem [\[main theorem\]](#main theorem){reference-type="ref" reference="main theorem"}.*
6. *[\[partie 123.6 du theoreme\]]{#partie 123.6 du theoreme label="partie 123.6 du theoreme"} When $3a-1=b$ and $b<1/2$, there exists $\theta<1$ and $N_0 \in \mathbb{N}$ such that for all $N > N_0$ and $m \geq 1$:*
*$$\mathbf{P}_{N,\delta_N}^{T_N} \left( \#\{ i \geq 1 : \varepsilon_i^{T_N} = \pm 1 \text{ and } \tau_i^{T_N} \leq N \} = m \right) \leq \theta^m.
\label{123.1.6p}$$*
*Moreover, there exists $C>0$ and $N_0 \in \mathbb{N}$ such that for all $N > N_0$ and $\varepsilon > 0$:*
*$$\mathbf{P}_{N,\delta_N}^{T_N} \left( \ \tau^{T_N} \cap [(1-\varepsilon)N,N] \neq \emptyset \right) \leq C \varepsilon.
\label{123.11.7}$$*
*[\[Theoreme 123.1\]]{#Theoreme 123.1 label="Theoreme 123.1"}*
Let us make some comments on Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"}.
- The first three points show that, when $b>3a-1$, the appropriately renormalized endpoint distribution is tight, and that its density is bounded from below and from above by a multiple of the standard normal density. The scaling is sub-diffusive if $a>b$, and diffusive otherwise.
- The fourth point is even more precise, and tells us that the polymer behaves as if there were no constraint. Indeed, on average, the simple random walk makes of order $\sqrt{N}$ contacts with an interface and, $b$ being greater than $1/2$, these trajectories are not penalized.
- For the fifth point, with high probability, the polymer visits only the interface at the origin, and its last contact is of order $\frac{1}{\delta_N^2} \ll N$. The first part of this statement is not surprising because the maximum of the simple random walk before time $N$ is of order $\sqrt{N}$, hence it won't reach $T_N \gg \sqrt{N}$, and this reasoning applies to the polymer.
- The sixth point shows two things: in the critical regime, the number of interfaces visited by the polymer is of order one. Moreover, it doesn't touch any interface in $[(1-\varepsilon)N,N)$, whereas the probability that the simple random walk touches an interface in this interval tends to one - see Lemma [\[lemme 123.6.6\]](#lemme 123.6.6){reference-type="ref" reference="lemme 123.6.6"}.
- Figures [\[subfig:beta_image\]](#subfig:beta_image){reference-type="ref" reference="subfig:beta_image"} and [\[subfig:renormalization_image\]](#subfig:renormalization_image){reference-type="ref" reference="subfig:renormalization_image"} contain the value of $x_\beta$ as a function of $\beta$, and the value of $\kappa$ defined in Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"}, part [\[partie 123.2 du théorème\]](#partie 123.2 du théorème){reference-type="ref" reference="partie 123.2 du théorème"}, as a function of $x_\beta$.
![value of $x_\beta$ as a function of $\beta$.](Images/x_beta_en_fonction_de_beta.png){#fig:side_by_side width="\\textwidth"}
![value of $\kappa$ in Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"}, [\[partie 123.2 du théorème\]](#partie 123.2 du théorème){reference-type="ref" reference="partie 123.2 du théorème"}.](Images/Valeur_de_Kappa.png){#fig:side_by_side width="\\textwidth"}
[\[fig:side_by_side\]]{#fig:side_by_side label="fig:side_by_side"}
Here is a table summarising the information about the regimes above. The first row gives the order of the endpoint position of the polymer, the second the typical location of the last contact between the polymer and an interface, and the third the average number of contacts between the polymer and the interfaces.

|                    | Theorem 1              | R1                              | R2                                              | R3         | R4         | R5                     | R6                 |
|--------------------|------------------------|---------------------------------|-------------------------------------------------|------------|------------|------------------------|--------------------|
| Endpoint           | $T_N$                  | $\sqrt{\frac{N}{T_N \delta_N}}$ | $\sqrt{N}$                                      | $\sqrt{N}$ | $\sqrt{N}$ | $\sqrt{N}$             | $T_N$              |
| Last contact       | $\frac{1}{\delta_N^2}$ | $N$                             | $N$                                             | $N$        | $N$        | $\frac{1}{\delta_N^2}$ | Of order $N$       |
| Number of contacts | $\frac{1}{\delta_N}$   | $T_N^3\delta_N^2$               | $\min \left\{ \sqrt{N}, \frac{N}{T_N} \right\}$ | $\sqrt{N}$ | $\sqrt{N}$ | $\frac{1}{\delta_N}$   | $T_N^3 \delta_N^2$ |
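The case analysis above can be written out programmatically. In the following sketch, the labels R1–R6 follow the numbering of the parts of Theorem 2; the handling of boundary cases is our reading of the statements, not part of the paper.

```python
def regime(a, b):
    """Classify a parameter pair (a, b), a, b > 0, according to the
    phase diagram of Theorems 1 and 2 (our reading of the statements)."""
    if 1/3 < a < 1/2 and b < 3*a - 1:
        return "Theorem 1: localized near the origin interface"
    if a > b > 3*a - 1:
        return "R1: sub-diffusive endpoint, order sqrt(N/(T_N delta_N))"
    if a < 1/2 and b >= a:
        return "R2: diffusive endpoint, order sqrt(N), constant kappa"
    if b == 1/2 and a > 1/2:
        return "R3: diffusive endpoint, order sqrt(N)"
    if b > 1/2 and a >= 1/2:
        return "R4: behaves like the unconstrained walk"
    if b < 1/2 and a >= 1/2:
        return "R5: same behaviour as in Theorem 1"
    if b == 3*a - 1 and b < 1/2:
        return "R6: critical regime, order-one number of interfaces"
    return "boundary case not covered above"

assert regime(0.4, 0.1).startswith("Theorem 1")
assert regime(0.6, 0.3).startswith("R5")
```

Exact float comparisons such as `b == 1/2` are acceptable here because the classifier is meant for hand-chosen parameter values.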
As a byproduct of our analysis, we obtain results about the return times of the simple random walk to the interfaces, which may be of independent interest. Theorem [\[theoreme\]](#theoreme){reference-type="ref" reference="theoreme"} concerns the simple random walk and its $k$-th return time to $T \mathbb{Z}$. Note that part [\[theoreme 3, partie 1\]](#theoreme 3, partie 1){reference-type="ref" reference="theoreme 3, partie 1"} of [\[theoreme\]](#theoreme){reference-type="ref" reference="theoreme"} is a combination of [@berger2019strong Theorem 2.4] for $n \geq k^2$ and [@Doney2019 Theorem 1.1] for $n \leq k^2$; we nevertheless present an alternative proof of this previously established result, based on computational techniques.
**Theorem 3**.
1. *[\[theoreme 3, partie 1\]]{#theoreme 3, partie 1 label="theoreme 3, partie 1"} There exists $C>0$ such that, for all $k \geq 1$ and $n \in 2\mathbb{N}$: $$P(\tau_k^\infty = n) \leq \frac{Ck}{n^{3/2}}.
\label{2.3}$$*
2. *[\[theoreme 3, partie 2\]]{#theoreme 3, partie 2 label="theoreme 3, partie 2"} There exists $C>0$, $C'>0$ such that, for all $T \in 2\mathbb{N}$, $0<n<2 T^2$ and $k \in \mathbb{N}$: $$P\left(\tau_k^T = n \right) \leq \frac{Ck}{\min\{T^3 , n^{3/2} \}} e^{-ng(T)} ,
\label{equation theoreme}$$ with $g(T) = - \log \cos \left( \frac{\pi}{T}\right) = \frac{\pi^2}{2T^2 } + O_T\left( \frac{1}{T^4} \right)$. Moreover, for $n> 2 T^2$ and $k \in \mathbb{N}$: $$P\left(\tau_k^T = n \text{ and } \tau_1^T<T^2,...,\tau_k^T- \tau_{k-1}^T<T^2 \right) < \frac{C}{T^3}e^{-ng(T)}\left(1+\frac{C'}{T}\right)^k.
\label{8.6}$$*
*[\[theoreme\]]{#theoreme label="theoreme"}*
Note that Theorem [\[theoreme\]](#theoreme){reference-type="ref" reference="theoreme"} part [\[theoreme 3, partie 2\]](#theoreme 3, partie 2){reference-type="ref" reference="theoreme 3, partie 2"} is a refinement of [@Caravenna2009depinning Eq. (B.9)]. Lastly, Proposition [\[Proposition 123.1.3\]](#Proposition 123.1.3){reference-type="ref" reference="Proposition 123.1.3"} gives the asymptotic behaviour of the partition function defined below [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}, which is often relevant for physicists. The free energy $\phi(\delta,T)$ is defined as $\phi(\delta,T) := \underset{n \longrightarrow \infty}{\lim} \frac{1}{n} \log Z_{n,\delta}^{T}$ and is computed in [\[3.3000\]](#3.3000){reference-type="eqref" reference="3.3000"}, [\[3.4000\]](#3.4000){reference-type="eqref" reference="3.4000"} and [\[phi\]](#phi){reference-type="eqref" reference="phi"} (see Section [\[Section 123.3.1\]](#Section 123.3.1){reference-type="ref" reference="Section 123.3.1"} for more details about it). Proposition [\[Proposition 123.1.3\]](#Proposition 123.1.3){reference-type="ref" reference="Proposition 123.1.3"} is proven in [\[Annex F\]](#Annex F){reference-type="ref" reference="Annex F"}.
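The bound in part 1 of Theorem 3 can be probed numerically: the law of $\tau_k^\infty$ is the $k$-fold convolution of the first-return law of the walk to the origin, for which a closed formula is classical. The sketch below (an illustration, not a proof) computes it for a few values of $k$ and checks that $n^{3/2} P(\tau_k^\infty = n)/k$ stays bounded over the range explored.

```python
import math

def first_return(n_max):
    # P(first return of the SRW to 0 at time 2m) = binom(2m,m)/((2m-1) 4^m)
    f = [0.0] * (n_max + 1)
    for m in range(1, n_max // 2 + 1):
        f[2 * m] = math.comb(2 * m, m) / ((2 * m - 1) * 4 ** m)
    return f

def tau_k_law(k, n_max):
    # P(tau_k^infty = n): k-fold convolution of the first-return law
    f = first_return(n_max)
    g = f[:]
    for _ in range(k - 1):
        h = [0.0] * (n_max + 1)
        for a, fa in enumerate(g):
            if fa:
                for b in range(2, n_max - a + 1, 2):
                    h[a + b] += fa * f[b]
        g = h
    return g

n_max, worst = 400, 0.0
for k in (1, 2, 5):
    g = tau_k_law(k, n_max)
    worst = max(worst, max(g[n] * n ** 1.5 / k
                           for n in range(2, n_max + 1, 2)))
assert worst < 1.5   # consistent with P(tau_k = n) <= C k / n^{3/2}
```

The largest ratio is attained at small $n$ (for $k=1$, $n=2$ gives $2^{3/2}/2 \approx 1.41$), consistent with a uniform constant $C$ in [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"}.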
**Proposition 1**. *The partition function defined in [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"} has a behaviour that depends on the case under consideration:*
*$$Z_{N, \delta_N}^{T_{N}} \asymp_N \frac{1}{\max\{1, \delta_N \min \{ \sqrt{N}, T_N \} \}} e^{N \phi(\delta_N,T_N)} \quad \text{ if $a>b$,}
\label{123.1.66}$$*
*$$Z_{N, \delta_N}^{T_{N}} \asymp_N e^{N \phi(\delta_N,T_N)} \quad \text{ if $a \leq b$. }
\label{123.1.77}$$ [\[Proposition 123.1.3\]]{#Proposition 123.1.3 label="Proposition 123.1.3"}*
Note that, when $a \geq 1/2$ and $b \geq 1/2$:
$$Z_{N, \delta_N}^{T_{N}} \asymp_N 1.
\label{123.1.88}$$ This means that the energy of the system is, up to a constant, the same as without constraints, and hints that, in this case, the constraint plays no role in the behaviour of the polymer.
**Outline of the paper** We first need renewal theory, a very useful tool to access the polymer measure. An introduction to this theory is given in Section [\[Preliminaries\]](#Preliminaries){reference-type="ref" reference="Preliminaries"}, which ends with Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} and Lemma [\[Lemme sur toucher en n pour la m.a.s.\]](#Lemme sur toucher en n pour la m.a.s.){reference-type="ref" reference="Lemme sur toucher en n pour la m.a.s."}, the two main ingredients in the proof of Theorem [\[main theorem\]](#main theorem){reference-type="ref" reference="main theorem"}. Section [\[Polymer measure\]](#Polymer measure){reference-type="ref" reference="Polymer measure"} contains the proof of Theorem [\[main theorem\]](#main theorem){reference-type="ref" reference="main theorem"}, and Section [\[Simple random walk\]](#Simple random walk){reference-type="ref" reference="Simple random walk"} contains the proof of Theorem [\[theoreme\]](#theoreme){reference-type="ref" reference="theoreme"}. They are followed by Section [\[section 60.0\]](#section 60.0){reference-type="ref" reference="section 60.0"}, which contains the proof of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"}. Section [\[Section 123.77\]](#Section 123.77){reference-type="ref" reference="Section 123.77"} contains all the tools and proofs of Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"}. The general strategy of this paper is greatly inspired by [@Caravenna2009depinning]. It consists in writing the partition function in terms of a certain renewal process, after removal of the exponential leading term (see Section [\[Preliminaries\]](#Preliminaries){reference-type="ref" reference="Preliminaries"}).
Some key estimates, see Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"}, are however different, and therefore require additional work, making the results non-trivial.
# Preparation [\[Preliminaries\]]{#Preliminaries label="Preliminaries"}
In this section, we first introduce the notion of free energy, a key concept in the study of polymers. We then introduce a renewal process that is linked to the polymer, another key concept for our paper.
## Free energy, asymptotic estimates [\[Section 123.3.1\]]{#Section 123.3.1 label="Section 123.3.1"}
Let us assume for now that $T \in 2\mathbb{N}$ is fixed. We define the *free energy* $\phi(\delta,T)$ as the rate of exponential growth of the partition function defined below [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}: $$\phi(\delta,T) := \underset{N \longrightarrow \infty}{\lim} \frac{1}{N} \log Z_{N,\delta}^{T}
= \underset{N \longrightarrow \infty}{\lim}\frac{1}{N} \log E \left( e^{H_{N,\delta}^{T}} \right).$$ The study of this function is motivated by the identification of the points where the map $\delta \longmapsto \phi(\delta, T)$ is not differentiable. These points typically correspond to significant changes or transitions in the system, known as *phase transitions*. In our specific case, we do not observe any. However, even in the absence of phase transitions, studying this function remains valuable for understanding polymer trajectories in our model, as elaborated in Section [\[théorie du renouvellement\]](#théorie du renouvellement){reference-type="ref" reference="théorie du renouvellement"}. For this reason, we recall some basic results on the free energy proven in [@Caravenna_2009].
Let us recall Definition [\[definition 1\]](#definition 1){reference-type="ref" reference="definition 1"}: $\tau_1^{T}$ is the random variable recording the time of the first contact with an interface for the simple random walk. We denote by $Q_{T}(\lambda) := E(e^{-\lambda \tau_1^{T}})$ its Laplace transform under the simple random walk law. There exists $\lambda_0^{T}$ such that $Q_{T}(\lambda)$ is finite and analytic on $(\lambda_0^{T}, \infty)$, with $Q_{T}(\lambda) \longrightarrow +\infty$ as $\lambda \downarrow \lambda_0^{T}$. More precise expressions for $Q_{T}(\lambda)$ are known, cf. equations (A.4) and (A.5) in [@Caravenna_2009]. A well-known fact is that $Q_{T}(\cdot)$ is deeply connected to the free energy. More precisely: $$\phi(\delta, T)=\left(Q_{T}\right)^{-1}\left(e^{\delta}\right),
\label{2.2}$$ for all $\delta \in \mathbb{R}$ (Theorem 1 of [@Caravenna_2009], with a $-\delta$ because in this paper we have reversed the sign of $\delta$ compared to [@Caravenna_2009]). Let us come back to our model. Equation [\[2.2\]](#2.2){reference-type="eqref" reference="2.2"} makes it easy to study the asymptotic behaviour of $\phi(\delta_N, T_N)$ when $N \rightarrow \infty$, which is
$$\phi (\delta_N, T_N) = - \frac{\delta_N}{T_N}(1 + o_{N}(1)) \text{ if $a<b$},
\label{3.3000}$$ $$\phi (\delta_N, T_N) = - \frac{x_\beta^2}{2 T_N^2}(1 + o_{N}(1)) \text{ if $a=b$ },
\label{3.4000}$$ $$\phi (\delta_N, T_N) = - \frac{\pi^2}{2 T_N^2}
\left(1 - \frac{4}{T_N \delta_N}
(1 + o_{N}(1))
\right)\text{ if $a>b$ }.
\label{phi}$$
Here, $x_\beta$ is an explicit constant defined in [\[def x_beta\]](#def x_beta){reference-type="eqref" reference="def x_beta"} - see Figure [\[subfig:beta_image\]](#subfig:beta_image){reference-type="ref" reference="subfig:beta_image"}. Those are proven in Appendix [\[Annex A.1\]](#Annex A.1){reference-type="ref" reference="Annex A.1"}.
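The constant $x_\beta$ is easy to evaluate numerically. The sketch below (an illustration only) solves the fixed-point equation by bisection, using that $x \longmapsto x(1-\cos x) - \beta \sin x$ changes sign exactly once on $(0,\pi)$, and evaluates the constant $\kappa$ of Theorem 2, part 2; one can also check numerically that $\kappa(\beta) \to 1$ as $\beta \to 0$, consistent with the case $b>a$.

```python
import math

def x_beta(beta, tol=1e-12):
    """Solve x = beta*sin(x)/(1-cos(x)) on (0, pi) by bisection."""
    f = lambda x: x * (1 - math.cos(x)) - beta * math.sin(x)
    lo, hi = 1e-9, math.pi - 1e-9      # f(lo) < 0 < f(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def kappa(beta):
    """Normalisation constant of Theorem 2, part 2, in the case a = b."""
    x = x_beta(beta)
    return math.sqrt(x**3 / (beta * (x + math.sin(x))))

x = x_beta(1.0)
assert 0 < x < math.pi
assert abs(x * (1 - math.cos(x)) - math.sin(x)) < 1e-9   # beta = 1
```

For small $\beta$, the fixed point behaves like $x_\beta \approx \sqrt{2\beta}$, which the solver reproduces.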
## Underlying renewal theory [\[théorie du renouvellement\]]{#théorie du renouvellement label="théorie du renouvellement"}
We now write a description of our basic model based on renewal theory, which has been proven in [@Caravenna_2009 Section 2.2]. Recall Definition [\[definition 1\]](#definition 1){reference-type="ref" reference="definition 1"}. We denote by $q_{T}^{j}(n)$ the joint law of $\left(\tau_{1}^{T}, \varepsilon_{1}^{T}\right)$ under the law of the simple random walk. It is defined for $(n,j) \in \mathbb{N} \times \{-1,0,1\}$:
$$q_{T}^{j}(n):=P\left(\tau_{1}^{T}=n, \varepsilon_{1}^{T}=j\right) .$$
Of course, by symmetry, $q_{T}^{1}(n)=q_{T}^{-1}(n)$ for all $n$ and $T$. We also define
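The joint law $q_T^j(n)$ can be computed exactly by dynamic programming on the positions strictly between two interfaces, which also gives a quick numerical check of two classical gambler's-ruin facts: $E[\tau_1^T] = T$ and $P(\varepsilon_1^T = 1) = \frac{1}{2T}$. A sketch (illustrative, not part of the proofs):

```python
from fractions import Fraction

def first_contact_law(T, n_max):
    """Joint law q_T^j(n) = P(tau_1^T = n, eps_1^T = j) of the first
    contact with T*Z, by exact dynamic programming over positions."""
    half = Fraction(1, 2)
    p = {1: half, -1: half}            # position after the first step
    law = {}                           # (n, j) -> probability
    for n in range(2, n_max + 1):
        q = {}
        for x, m in p.items():
            for y in (x - 1, x + 1):
                if y == 0:                       # same interface
                    law[n, 0] = law.get((n, 0), 0) + half * m
                elif abs(y) == T:                # interface above/below
                    law[n, y // T] = law.get((n, y // T), 0) + half * m
                else:
                    q[y] = q.get(y, 0) + half * m
        p = q
    return law

T = 8
law = first_contact_law(T, n_max=600)
mass = sum(law.values())
mean = sum(n * pr for (n, _), pr in law.items())
up = sum(pr for (n, j), pr in law.items() if j == 1)
assert abs(mass - 1) < 1e-12           # truncated tail is negligible
assert abs(mean - T) < 1e-6            # E[tau_1^T] = T
assert abs(up - Fraction(1, 2 * T)) < 1e-9   # P(eps_1 = +1) = 1/(2T)
```

The truncation at `n_max` is harmless here because the tail decays like $e^{-n g(T)}$, cf. Theorem 3, part 2.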
$$q_{T}(n):=P\left(\tau_{1}^{T}=n\right)=q_{T}^{0}(n)+2 q_{T}^{1}(n) .$$ We now introduce the Markov chain $\left\{\left(\tau_{j}, \varepsilon_{j}\right)\right\}_{j \geq 0}$, with law $\mathcal{P}_{\delta,T}$, taking values in $(\mathbb{N} \cup\{0\}) \times \{-1,0,1\}$, defined as follows: $\tau_{0}:=\varepsilon_{0}:=0$ and, under $\mathcal{P}_{\delta,T}$, the sequence of random vectors $\left\{\left(\tau_{j}-\tau_{j-1}, \varepsilon_{j}\right)\right\}_{j \geq 1}$ is i.i.d. with joint law:
$$\mathcal{P}_{\delta,T} \left( \tau_{1}=n, \varepsilon_{1}=j \right):=e^{-\delta} q_{T}^{j}(n) e^{-\phi(\delta, T) n} .
\label{2.4}$$
By [\[2.2\]](#2.2){reference-type="ref" reference="2.2"}, this defines a probability measure. Note that the process $\left(\tau_{j}\right)_{j \geq 0}$ alone under $\mathcal{P}_{\delta,T}$ is a renewal process, for which $\tau_{0}=0$ and the variables $\left(\tau_{j}-\tau_{j-1}\right)_{j \geq 1}$ are i.i.d., with law
$$\mathcal{P}_{\delta,T} \left( \tau_{1}=n \right)=e^{-\delta} q_{T}(n) e^{-\phi(\delta, T) n}=e^{-\delta} P\left(\tau_{1}^{T}=n\right) e^{-\phi(\delta, T) n}.
\label{définition de la mesure du renouvellement}$$
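Although $q_{T}^{j}(n)$ has no simple closed form, it can be computed exactly by propagating the law of the walk between the interfaces and recording the absorbed mass. The following sketch (Python, exact rational arithmetic; the helper name is ours, and we assume, in line with Definition 1, that $\tau_1^T$ is the first visit of the walk to $T\mathbb{Z}$ and $\varepsilon_1^T = S_{\tau_1^T}/T$) makes the symmetry $q_T^{1}(n) = q_T^{-1}(n)$ concrete:

```python
from fractions import Fraction

def first_visit_laws(T, n_max):
    """Exact q_T^j(n) = P(tau_1^T = n, eps_1^T = j) for the simple random
    walk: propagate the law on (-T, T) minus {0} and record, at each step,
    the mass absorbed at the interfaces -T, 0, +T."""
    half = Fraction(1, 2)
    mass = {1: half, -1: half}          # law of S_1, nothing absorbed yet
    q = {j: [Fraction(0)] * (n_max + 1) for j in (-1, 0, 1)}
    for n in range(2, n_max + 1):
        new = {}
        for x, p in mass.items():
            for y in (x - 1, x + 1):    # one more +-1 step
                if y == 0:
                    q[0][n] += p * half
                elif y == T:
                    q[1][n] += p * half
                elif y == -T:
                    q[-1][n] += p * half
                else:
                    new[y] = new.get(y, Fraction(0)) + p * half
        mass = new
    return q
```

For instance, with $T = 10$ one recovers $q_T^0(4) = 1/8$, the first-return probability of the free walk, and $q_T^1(10) = 2^{-10}$, the straight path to the upper interface.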
We also introduce the random set of indices at which the process touches one of the interfaces: $$\tau = \{\tau_i : i \in \mathbb{N}\}.$$
Thus, $\tau$ and $\tau^T$ (cf. below [\[4.5000\]](#4.5000){reference-type="eqref" reference="4.5000"}) contain the renewal times of their respective processes: the first is the renewal process defined in [\[définition de la mesure du renouvellement\]](#définition de la mesure du renouvellement){reference-type="ref" reference="définition de la mesure du renouvellement"}, and the second records the times of interface contacts of the simple random walk. Let us now make the link between the law $\mathcal{P}_{\delta,T}$ and $\mathbf{P}_{N,\delta}^T$, the polymer measure defined in [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}. Recall the definition of $L_N$ in [\[definition 1\]](#definition 1){reference-type="ref" reference="definition 1"}. By abuse of notation, we use $L_N$ in the context of the renewal process with the same definition, i.e., under $\mathcal{P}_{\delta,T}$, $L_N := \max \left\{ n \in \mathbb{N} : \tau_n \leq N \right\}$. We now have the following crucial result (see [@Caravenna_2009 Eq. 2.13]): for all $N, T \in 2 \mathbb{N}$ and for all $k \in \mathbb{N},\left\{t_{i}\right\}_{1 \leq i \leq k} \in \mathbb{N}^{k},\left(\sigma_{i}\right)_{1 \leq i \leq k} \in\{-1,0,1\}^{k}$:
$$\begin{aligned}
&\mathbf{P}_{N,\delta}^{T} \left( L_{N}=k,\left(\tau_{i}^{T}, \varepsilon_{i}^{T}\right)=\left(t_{i}, \sigma_{i}\right), 1 \leq i \leq k \mid N \in \tau^{T} \right) \\
&=\mathcal{P}_{\delta,T} \left( L_{N}=k,\left(\tau_{i}, \varepsilon_{i}\right)=\left(t_{i}, \sigma_{i}\right), 1 \leq i \leq k \mid N \in \tau \right) ,
\end{aligned}
\label{3.12}$$
where $\{N \in \tau\}:=\bigcup_{k=0}^{\infty}\left\{\tau_{k}=N\right\}$ and similarly for $\left\{N \in \tau^{T}\right\}$. In other words, the process $\left\{\left(\tau_{j}^{T}, \varepsilon_{j}^{T}\right)\right\}_{j}$ under $\mathbf{P}_{N,\delta}^{T} \left( \cdot \mid N \in \tau^{T} \right)$ is distributed in the same way as the Markov chain $\left\{\left(\tau_{j}, \varepsilon_{j}\right)\right\}_{j}$ under $\mathcal{P}_{\delta,T} \left( \cdot \mid N \in \tau \right)$. It is precisely this link with renewal theory that allows us to obtain precise estimates in our model. Another basic identity that we use on multiple occasions is
$$E\left[e^{H_{k, \delta}^{T}(S)} \mathbf{1}_{\left\{k \in \tau^{T}\right\}}\right]=e^{\phi(\delta, T) k} \mathcal{P}_{\delta,T} \left( k \in \tau \right) ,
\label{3.13}$$ which is true for all $k, T \in 2 \mathbb{N}$ (cf. [@Caravenna_2009 Eq (2.11)]).
## Two renewal results
To prove Theorem [\[main theorem\]](#main theorem){reference-type="ref" reference="main theorem"}, we need Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"}, which gives estimates on the renewal process and is proven in Section [\[section 60.0\]](#section 60.0){reference-type="ref" reference="section 60.0"}. Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} is linked to the polymer measure by [\[3.12\]](#3.12){reference-type="eqref" reference="3.12"} and [\[3.13\]](#3.13){reference-type="eqref" reference="3.13"}.
**Proposition 2**. *When $a>b$, i.e. when $T_N \delta_N \longrightarrow +\infty$, there exist $T_0,c_1,c_2$ depending on $a$ and $b$ such that, for $T_N > T_0$ and for all $n \in \mathbb{N}$: $$\begin{aligned}
\frac{c_1}{\min \{ n^{3/2} \max\{ \frac{1}{n},\delta_N^2 \}, T_N^3\delta_N^2 \} }
&\leq \mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) \\
&\leq
\frac{c_2}{\min \{ n^{3/2} \max\{ \frac{1}{n},\delta_N^2 \}, T_N^3\delta_N^2 \} }
\label{Proposition 1}.
\end{aligned}$$ [\[Prooposition 1\]]{#Prooposition 1 label="Prooposition 1"}*
In other words, we have three regimes:
$$\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) \asymp_n
\left\{
\begin{array}{ll}
\frac{1}{\sqrt{n}} & \text{if } n < \frac{1}{\delta_N^2}, \\
\\
\frac{1}{\delta_N^2 n^{3/2}} & \text{if } \frac{1}{\delta_N^2} < n < T_N^2,\\
\\
\frac{1}{T_N^3 \delta_N^2} & \text{if } n > T_N^2 \textit{ (stationary regime)}.
\end{array}
\right.$$
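As a sanity check (ours, not part of the statement), the three expressions agree, up to constants, at the crossover scales:

```latex
\frac{1}{\sqrt{n}}\bigg|_{n = \delta_N^{-2}} = \delta_N =
\frac{1}{\delta_N^2\, n^{3/2}}\bigg|_{n = \delta_N^{-2}},
\qquad
\frac{1}{\delta_N^2\, n^{3/2}}\bigg|_{n = T_N^2} = \frac{1}{T_N^3 \delta_N^2}.
```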
Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} is fundamental. It gives us precisely how the renewal measure behaves. Moreover, because this measure is deeply related to the polymer measure, it helps us to understand how the polymer behaves. Proposition [\[Proposition 1\]](#Proposition 1){reference-type="ref" reference="Proposition 1"} extends [@Caravenna2009depinning Proposition 2.3] to the case $b>0$. When the penalty is constant, the polymer makes a big jump after a time of order one, which takes it further than $N$. In our case, it makes a big jump every $\frac{1}{\delta_N}$ jumps. Thus, it behaves like the simple random walk during its $\frac{1}{\delta_N}$ first contacts, then it goes on a long excursion taking it further than $N$.
Another important result is [@Caravenna2009depinning Lemma 2.1], which we recall in Lemma [\[Lemme sur toucher en n pour la m.a.s.\]](#Lemme sur toucher en n pour la m.a.s.){reference-type="ref" reference="Lemme sur toucher en n pour la m.a.s."}. First, let us set
$$g(T):=-\log \cos \left(\frac{\pi}{T}\right)=\frac{\pi^{2}}{2 T^{2}}+O_{T}\left(\frac{1}{T^{4}}\right).
\label{g}$$
We then have the following
**Lemma 1**. *There exist $T_{0} \in 2\mathbb{N}, c_{2} > c_{1} > 0, c_{4} > c_{3} > 0$ such that, when $T>T_{0}$ is even and for $n \in 2 \mathbb{N}$:*
*$$\frac{c_{1}}{\min \left\{T^{3}, n^{3 / 2}\right\}} e^{-g(T) n} \leq P\left(\tau_{1}^{T}=n\right) \leq \frac{c_{2}}{\min \left\{T^{3}, n^{3 / 2}\right\}} e^{-g(T) n},
\label{probabilité pour la marche aléatoire simple que tau 1 vaille n}$$ $$\frac{c_{3}}{\min \{T, \sqrt{n}\}} e^{-g(T) n} \leq P\left(\tau_{1}^{T}>n\right) \leq \frac{c_{4}}{\min \{T, \sqrt{n}\}} e^{-g(T) n} .
\label{probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n}$$*
*[\[Lemme sur toucher en n pour la m.a.s.\]]{#Lemme sur toucher en n pour la m.a.s. label="Lemme sur toucher en n pour la m.a.s."}*
# Proof of Theorem [\[main theorem\]](#main theorem){reference-type="ref" reference="main theorem"} [\[Polymer measure\]]{#Polymer measure label="Polymer measure"}
## Proof of Equation [\[ de terre\]](# de terre){reference-type="eqref" reference=" de terre"} [\[Section 4.1\]]{#Section 4.1 label="Section 4.1"}
Let us first introduce the following events:
$$\mathcal{A}_\nu= \left\{\tau_{L_N}^{T_N} < \frac{\nu}{\delta_N^2} \right\}, \quad
\mathcal{B}_{\nu,M} = \left\{ \frac{\nu}{\delta_N^2} \leq \tau_{L_N}^{T_N} \leq\frac{M}{\delta_N^2} \right\}, \quad
\mathcal{C}_{M} = \left\{ \frac{M}{\delta_N^2} < \tau_{L_N}^{T_N} \right\}.
\label{44.2}$$
We shall prove that there exists $C>0$ such that: $$\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{A}_\nu \right) \leq C \sqrt{\nu} \mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{B}_{\nu,M} \right)
\label{Heroes 1}$$ $$\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{C}_{M} \right) \leq \frac{C}{\sqrt{M}} \mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{B}_{\nu,M} \right) ,
\label{Heroes 2}$$ which is equivalent to [\[ de terre\]](# de terre){reference-type="eqref" reference=" de terre"} in Theorem [\[main theorem\]](#main theorem){reference-type="ref" reference="main theorem"}. We denote in this section $\phi := \phi(\delta_N,T_N)$. Let us start with [\[Heroes 1\]](#Heroes 1){reference-type="eqref" reference="Heroes 1"}:
$$\begin{aligned}
&\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{A}_\nu \right) = \frac{1}{Z_{N,\delta_N}^{T_N}}\underset{k=1}{\overset{\frac{\nu}{\delta_N^2}}{\sum}}E\left( e^{H_{k,\delta_N}^{T_N}} 1 \left\{ k \in \tau^{T_N} \right\} \right) P \left( \tau_1^{T_N}>N-k \right) \\
&=\frac{1}{Z_{N,\delta_N}^{T_N}e^{-\phi N}}\underset{k=1}{\overset{\frac{\nu}{\delta_N^2}}{\sum}}e^{-k\phi}E\left( e^{H_{k,\delta_N}^{T_N}}
1
\left\{ k \in \tau^{T_N} \right\}\right) P \left( \tau_1^{T_N}>N-k \right) e^{-(N-k)\phi}.
\label{44.4}
\end{aligned}$$
First, we note that $P \left( \tau_1^{T_N}>N-k \right) e^{-(N-k)\phi} \asymp_N \frac{1}{T_N}$ because of [\[probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n\]](#probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n"}, and, with [\[phi\]](#phi){reference-type="eqref" reference="phi"} and [\[g\]](#g){reference-type="eqref" reference="g"}, $-(g(T_N) + \phi(\delta_N,T_N)) = -\frac{2 \pi ^2}{T_N^3 \delta_N}(1 + o_{N}(1))$ is $O_N(1/N)$ in our case, so that the corresponding exponential factor stays bounded. By using [\[3.13\]](#3.13){reference-type="eqref" reference="3.13"}:
$$\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{A}_\nu \right) \asymp_N \frac{1}{T_N Z_{N,\delta_N}^{T_N} e^{-N \phi}}\underset{k=1}{\overset{\frac{\nu}{\delta_N^2}}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) .
\label{4.565}$$
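The cancellation between the leading terms of [\[g\]](#g){reference-type="eqref" reference="g"} and [\[phi\]](#phi){reference-type="eqref" reference="phi"} used above can be spelled out (our intermediate computation):

```latex
g(T_N) + \phi(\delta_N,T_N)
= \frac{\pi^2}{2T_N^2} + O_{T_N}\!\left(\frac{1}{T_N^4}\right)
  - \frac{\pi^2}{2 T_N^2}\left(1 - \frac{4}{T_N \delta_N}(1 + o_N(1))\right)
= \frac{2\pi^2}{T_N^3 \delta_N}(1 + o_N(1)),
```

since $1/T_N^4 = o_N\big(1/(T_N^3 \delta_N)\big)$, as $\delta_N/T_N \rightarrow 0$.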
Using [\[Proposition 1\]](#Proposition 1){reference-type="eqref" reference="Proposition 1"}:
$$\underset{k=1}{\overset{\frac{\nu}{\delta_N^2}}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) \asymp_N
\underset{k=1}{\overset{\frac{\nu}{\delta_N^2}}{\sum}} \frac{1 }{\sqrt{k}}
\asymp_N \frac{\sqrt{\nu}}{\delta_N}.$$
Therefore:
$$\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{A}_\nu \right) \asymp_N \frac{\sqrt{\nu}}{T_N \delta_N Z_{N,\delta_N}^{T_N}e^{-N\phi}}.
\label{9.7}$$
With the same ideas as above, using the lower bounds of [\[Proposition 1\]](#Proposition 1){reference-type="eqref" reference="Proposition 1"} and [\[probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n\]](#probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n"}:
$$\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{B}_{\nu,M} \right) \asymp_N \frac{1}{T_N \delta_N Z_{N,\delta_N}^{T_N}e^{-N\phi}} \left( 1-\sqrt{\nu} + 1 - \frac{1}{\sqrt{M}} \right).
\label{9.8}$$ We now prove [\[Heroes 2\]](#Heroes 2){reference-type="eqref" reference="Heroes 2"} with the same ideas. We remark that [\[probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n\]](#probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n"} exhibits a change of behaviour when $N-k \leq T_N^2$, which must be taken into account when writing the sum:
$$\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{C}_{M} \right) \asymp_N
\frac{1}{T_N\delta_N Z_{N,\delta_N}^{T_N}e^{-N\phi}} \left( \frac{1}{\sqrt{M}} + \frac{N}{ T_N^3 \delta_N } + \frac{1}{ \delta_N T_N} \right).
\label{9.9}$$ Finally, we get the result by combining [\[9.7\]](#9.7){reference-type="eqref" reference="9.7"}, [\[9.8\]](#9.8){reference-type="eqref" reference="9.8"} and [\[9.9\]](#9.9){reference-type="eqref" reference="9.9"}, and noting that the last two terms of [\[9.9\]](#9.9){reference-type="eqref" reference="9.9"} are $o_N(1)$ by [\[H_0\]](#H_0){reference-type="eqref" reference="H_0"}.
## Proof of Equation [\[Deuxieme partie theoreme\]](#Deuxieme partie theoreme){reference-type="eqref" reference="Deuxieme partie theoreme"}
First, let us note that, up to a negligible term, we may restrict to polymers whose last contact with an interface happens before $\frac{M}{\delta_N^2}$, thanks to [\[ de terre\]](# de terre){reference-type="eqref" reference=" de terre"}. Let us denote by $\mathcal{D}= \{ \forall i \leq N, S_i \notin T_N\mathbb{Z} \backslash \{0\} \}$ the complement of the event in [\[Deuxieme partie theoreme\]](#Deuxieme partie theoreme){reference-type="eqref" reference="Deuxieme partie theoreme"}. By using the reflection principle (see [@Petrov Theorem 11, chapter III]) in the second line: [^2]
$$\begin{aligned}
\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{D} \right) &\leq o_N(1) + \frac{2}{Z_{N,\delta_N}^{T_N}}P\left(\underset{1 \leq l \leq \frac{M}{\delta_N^2}}{\max}S_l \geq T_N\right) \\
&\leq o_N(1) + \frac{4}{Z_{N,\delta_N}^{T_N}} P\left(S_{\frac{M}{\delta_N^2}} \geq T_N \right).
\end{aligned}
\label{4.120100}$$
To bound $P\left(S_{\frac{M}{\delta_N^2}} \geq T_N \right)$ from above, we use a standard estimate (see Prop. 2.1.2 in [@lawler2010random]): there exist $m > 0$ and $\lambda > 0$ such that, for all $A>0$ and $B>0$:
$$P\left( S_B \geq A \right) \leq m \exp\left( - \lambda \frac{A^2}{B} \right).
\label{Markov exponentiel}$$
Here, by taking $A = T_N$ and $B = \frac{M}{\delta_N^2}$:
$$P\left( S_{\frac{M}{\delta_N^2}} \geq T_N \right) \leq m \exp
\left(
-\frac{\lambda T_N^2 \delta_N^2}{M}
\right).$$
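For $\pm 1$ steps, a bound of this type also follows from Hoeffding's inequality, with $m = 1$ and $\lambda = 1/2$; a quick exact check on a small case (Python; our illustration, not part of the proof):

```python
from math import comb, exp

def srw_upper_tail(B, A):
    """Exact P(S_B >= A) for B steps of a simple random walk:
    S_B = 2k - B with k ~ Binomial(B, 1/2) the number of up-steps."""
    return sum(comb(B, k) for k in range(B + 1) if 2 * k - B >= A) / 2 ** B

# Hoeffding: P(S_B >= A) <= exp(-A^2 / (2B)) for +-1 steps
for A in (10, 20, 30, 40):
    assert srw_upper_tail(100, A) <= exp(-A * A / 200)
```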
Let us come back to [\[4.120100\]](#4.120100){reference-type="eqref" reference="4.120100"}. By [\[123.1.66\]](#123.1.66){reference-type="eqref" reference="123.1.66"} and [\[H_0\]](#H_0){reference-type="eqref" reference="H_0"}, there exists $C > 0$ such that $Z_{N,\delta_N}^{T_N} \geq \frac{C^{-1}}{\delta_N T_N}e^{N \phi(\delta_N,T_N)}$. Then, by [\[g\]](#g){reference-type="eqref" reference="g"}, [\[phi\]](#phi){reference-type="eqref" reference="phi"}, recalling that $\underset{N \longrightarrow \infty}{\lim} \delta_N T_N = + \infty$ and $\delta_N T_N^3 \geq N$, and by defining $c := \frac{\pi^2}{2}$: $$\begin{aligned}
\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{D} \right) &\leq o_{T_N}(1) + C \delta_N T_N \exp \left( -\frac{\lambda T_N^2 \delta_N^2}{M} + \frac{c N}{T_N^2}(1+o_{T_N}(1)) \right) = o_N(1).
\end{aligned}$$ Thus, [\[Deuxieme partie theoreme\]](#Deuxieme partie theoreme){reference-type="eqref" reference="Deuxieme partie theoreme"} is proven.
# Proof of Theorem [\[theoreme\]](#theoreme){reference-type="ref" reference="theoreme"} [\[Simple random walk\]]{#Simple random walk label="Simple random walk"}
**Outline** In this section, we prove Theorem [\[theoreme\]](#theoreme){reference-type="ref" reference="theoreme"}. Section [\[section 2\]](#section 2){reference-type="ref" reference="section 2"} contains some technical results necessary for the proof. Section [\[section 5.2\]](#section 5.2){reference-type="ref" reference="section 5.2"} contains the proof of [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"}. Then, Section [\[section 5.3\]](#section 5.3){reference-type="ref" reference="section 5.3"} contains the proof of [\[equation theoreme\]](#equation theoreme){reference-type="eqref" reference="equation theoreme"}, and Section [\[Section 5.4\]](#Section 5.4){reference-type="ref" reference="Section 5.4"} contains the proof of [\[8.6\]](#8.6){reference-type="eqref" reference="8.6"}. The proof of these equations relies on an induction and on sharp approximations of probabilities of the form $P(\tau_k^T = n)$.
## Technical results [\[section 2\]]{#section 2 label="section 2"}
The following lemmas are purely technical. Their proofs are deferred to Appendix [\[Annexe A\]](#Annexe A){reference-type="ref" reference="Annexe A"}.
**Lemma 2**. *For the simple random walk and when $n \in 2 \mathbb{N}$: $$P\left(n \in \tau^{\infty} \right) = \frac{\sqrt{2}}{\sqrt{\pi n}}\left( 1 - \frac{1}{4n} + O_n\left( \frac{1}{n^2} \right) \right),
\label{1.1000}$$ $$P\left(\tau_1^{\infty} = n\right) = \frac{\sqrt{2}}{\sqrt{\pi}n^{3/2}}\left( 1 + \frac{3}{4n} + O_n\left( \frac{1}{n^2} \right) \right).
\label{1.2000}$$ [\[lemme 1.1\]]{#lemme 1.1 label="lemme 1.1"}*
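Both expansions are easy to test numerically against the exact formulas $P(n \in \tau^\infty) = \binom{n}{n/2}2^{-n}$ and $P(\tau_1^\infty = n) = \binom{n}{n/2}\frac{1}{(n-1)2^{n}}$ (a sketch of ours):

```python
from math import comb, pi, sqrt

def p_return(n):          # P(n in tau^infty) = P(S_n = 0), n even
    return comb(n, n // 2) / 2 ** n

def p_first_return(n):    # P(tau_1^infty = n), n even
    return comb(n, n // 2) / ((n - 1) * 2 ** n)

def p_return_approx(n):   # leading expansion from the lemma
    return sqrt(2) / sqrt(pi * n) * (1 - 1 / (4 * n))

def p_first_return_approx(n):
    return sqrt(2) / (sqrt(pi) * n ** 1.5) * (1 + 3 / (4 * n))
```

At $n = 1000$ the relative errors are already below $10^{-5}$, consistent with the $O_n(1/n^2)$ remainders.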
Now, Lemma [\[lemme 1.2\]](#lemme 1.2){reference-type="ref" reference="lemme 1.2"} gives a sharp estimate of an integral which appears in [\[8.19\]](#8.19){reference-type="eqref" reference="8.19"}:
**Lemma 3**. *For $\varepsilon > 0$: $$\int_\frac{1}{2}^{1-\varepsilon} \frac{dt}{t^{3/2} (1-t)^{3/2}} =
\frac{2- 3 \varepsilon + O_{\varepsilon}(\varepsilon^2)}{\sqrt{\varepsilon}}.
\label{intégrale}$$ [\[lemme 1.2\]]{#lemme 1.2 label="lemme 1.2"}*
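This integral has the exact antiderivative $F(t) = \frac{2(2t-1)}{\sqrt{t(1-t)}}$ (with $F(1/2)=0$), which makes the expansion easy to verify (a numerical sketch of ours):

```python
from math import sqrt

def F(t):
    """Antiderivative of 1/(t(1-t))^{3/2}; F(1/2) = 0."""
    return 2 * (2 * t - 1) / sqrt(t * (1 - t))

def integrand(t):
    return 1 / (t * (1 - t)) ** 1.5

def integral_exact(eps):   # the integral from 1/2 to 1 - eps
    return F(1 - eps) - F(0.5)

def integral_approx(eps):  # leading expansion from the lemma
    return (2 - 3 * eps) / sqrt(eps)
```

A central difference of $F$ matches the integrand, and for $\varepsilon = 10^{-3}$ the two expressions agree to relative order $\varepsilon^2$.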
Finally, we have the following expansion of the cumulative distribution function of the return time to the origin:
**Lemma 4**. *For $l \in \mathbb{N}$: $$P \left( \tau_1^{\infty} < 2l \right) \leq 1 - \frac{1}{ \sqrt{\pi l}} - \frac{3}{8 \sqrt{\pi} \, l^{3/2}} + o_l \left( \frac{1}{l^{3/2}} \right).
\label{1.4}$$ [\[lemme 1.3\]]{#lemme 1.3 label="lemme 1.3"}*
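Lemma 4 can be checked against the classical identity $P(\tau_1^\infty > 2m) = P(S_{2m} = 0)$, which gives $P(\tau_1^\infty < 2l) = 1 - \binom{2l-2}{l-1}4^{-(l-1)}$ (a numerical sketch of ours):

```python
from math import comb, pi, sqrt

def cdf_exact(l):
    """P(tau_1^infty < 2l) = 1 - P(tau_1^infty > 2(l-1))
                           = 1 - P(S_{2(l-1)} = 0)."""
    m = l - 1
    return 1 - comb(2 * m, m) / 4 ** m

def cdf_approx(l):   # expansion from the lemma
    return 1 - 1 / sqrt(pi * l) - 3 / (8 * sqrt(pi) * l ** 1.5)
```

At $l = 10^4$ the two values agree to within $10^{-8}$, consistent with the $o_l(l^{-3/2})$ remainder.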
## Proof of Equation [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"} [\[section 5.2\]]{#section 5.2 label="section 5.2"}
We are going to prove Equation [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"} by induction: we want to show that there exists $C>0$ such that, if $P(\tau_k^\infty = n) \leq \frac{C k}{n^{3/2}}$ for all $n \in \mathbb{N}$, then $P(\tau_{k+1}^\infty = n) \leq \frac{C (k+1)}{n^{3/2}}$ for all $n \in \mathbb{N}$. Let $Q(k)$ be our induction hypothesis:
$$\forall n \in \mathbb{N}, P(\tau_k^\infty = n) \leq \frac{C k}{n^{3/2}}.
\label{hypothese2}$$
Let us remark that, for $k=1$, the constant $C = \sqrt{2}$ works: an elementary study of $\binom{2n}{n}\frac{1}{(2n-1)2^{2n}}$ proves it. Let us now assume that the result holds for some $k \in \mathbb{N}^*$. For $l \in \mathbb{N}$, let us denote $K_l(n) = P(\tau_l^\infty = n)$. Let us prove $Q(k+1)$.
We start with a simple computation:
$$K_{k+1}(n) = \underset{l=\frac{n}{2}}{\overset{n-2}{\sum}}K_k(l) K_1(n-l) + \underset{l=\frac{n}{2}}{\overset{n}{\sum}}K_1(l) K_k(n-l).
\label{3.1}$$ A simple bound from above and $Q(1)$ give us: $$\begin{aligned}
\underset{l=\frac{n}{2}}{\overset{n}{\sum}}K_1(l) K_k(n-l) &\leq \underset{l=\frac{n}{2}}{\overset{n}{\sum}} \frac{\sqrt{2}}{l^{3/2}} K_k(n-l) \leq \underset{l=\frac{n}{2}}{\overset{n}{\sum}} \frac{2^{5/2}}{n^{3/2}} K_k(n-l) \\
& \leq \frac{8}{n^{3/2}} P\left(\frac{n}{2}<\tau_k^\infty\right) \leq \frac{8}{n^{3/2}}.
\end{aligned}$$
Hence, [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} can be bounded from above by:
$$K_{k+1}(n) \leq \underset{l=\frac{n}{2}}{\overset{n-2}{\sum}}K_k(l) K_1(n-l) + \frac{8}{n^{3/2}}.
\label{3.3}$$
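The convolution structure [\[3.1\]](#3.1){reference-type="eqref" reference="3.1"} also lends itself to a direct numerical check of $Q(k)$ on small cases. The following sketch (ours, in floating point) convolves the exact first-return law and confirms the bound $K_k(n) \leq \sqrt{2}\,k/n^{3/2}$ on a small range:

```python
from math import comb

N = 400                                    # even truncation horizon
K1 = [0.0] * (N + 1)                       # K1[n] = P(tau_1^infty = n)
for n in range(2, N + 1, 2):
    K1[n] = comb(n, n // 2) / ((n - 1) * 2 ** n)

def convolve(a, b):
    """Truncated convolution (a*b)[n] = sum_l a[l] b[n-l]."""
    return [sum(a[l] * b[n - l] for l in range(n + 1)) for n in range(N + 1)]

K, worst = K1, 0.0
for k in range(1, 11):                     # K is the law of tau_k^infty
    worst = max(worst, max(K[n] * n ** 1.5 / k for n in range(2, N + 1)))
    K = convolve(K, K1)
assert worst <= 2 ** 0.5 + 1e-12           # Q(k) holds with C = sqrt(2)
```

The maximum is attained at $k=1$, $n=2$, in line with the choice $C=\sqrt{2}$ made above.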
Let us now bound $\underset{l=\frac{n}{2}}{\overset{n-2}{\sum}}K_k(l) K_1(n-l)$ sharply from above. To do this, we split the sum into two parts and bound each of them. Let us take $0<\alpha<1/2$, and bound from above the two quantities:
$$\Sigma_1^n := \underset{l=\frac{n}{2}}{\overset{n-2n^\alpha-2}{\sum}}K_k(l) K_1(n-l),$$
$$\Sigma_2^n := \underset{l=n-2n^\alpha}{\overset{n-2}{\sum}}K_k(l) K_1(n-l).
\label{3.5-}$$
We start with a lemma dealing with $\Sigma_1^n$:
**Lemma 5**. *Given $Q(k)$, and for $n$ large enough: $$\Sigma_1^n <
\frac{kC}{\sqrt{\pi} n^{3/2}}\left( \frac{1}{n^{\alpha/2}} + \frac{1}{6n^{3\alpha/2}} + o_n \left( \frac{1}{n^{3\alpha/2}} \right) + O_n \left( \frac{1}{n^{1-\alpha/2}} \right) \right).$$ [\[lemma 3.1\]]{#lemma 3.1 label="lemma 3.1"}*
*Proof.* We use that, for $n$ large enough, [\[1.2000\]](#1.2000){reference-type="ref" reference="1.2000"} gives $K_1(2q) \leq \frac{\sqrt{2}}{\sqrt{\pi}(2q)^{3/2}} + \frac{\sqrt{2}}{\sqrt{\pi}(2q)^{5/2}}$ for all $q \geq n^\alpha$, which covers every term $K_1(n-2p)$ of the sum below. We keep only the even terms of the sum, because $\tau_1^\infty$ takes only even values. We also use [\[hypothese2\]](#hypothese2){reference-type="eqref" reference="hypothese2"}. Hence, it comes that:
$$\begin{aligned}
&\Sigma_1^n =
\underset{p=\frac{n}{4}}{\overset{n/2-n^\alpha - 1}{\sum}}K_k(2p) K_1(n-2p) \\
&\leq \underset{p=\frac{n}{4}}{\overset{n/2-n^\alpha - 1}{\sum}}
\frac{Ck}{(n-2p)^{3/2}} \frac{\sqrt{2} } {\sqrt{\pi} (2p)^{3/2}} + \underset{p=\frac{n}{4}}{\overset{n/2-n^\alpha - 1}{\sum}}
\frac{Ck}{(n-2p)^{5/2}} \frac{\sqrt{2} } {\sqrt{\pi} (2p)^{3/2}} \\
&\leq \frac{\sqrt{2} Ck}{\sqrt{\pi}}
\int_{n/4}^{n/2 - n^\alpha} \frac{dt}{(2t)^{3/2} (n-2t)^{3/2}}
+
\frac{\sqrt{2} Ck}{\sqrt{\pi}}
\int_{n/4}^{n/2 - n^\alpha} \frac{dt}{(2t)^{3/2} (n-2t)^{5/2}}
\\
&\leq \frac{ Ck}{\sqrt{2} \sqrt{\pi} n^2 }
\int_{1/2}^{1 - \frac{2}{n^{1 - \alpha}}} \frac{dt}{t^{3/2} (1-t)^{3/2}} +
\frac{ Ck}{\sqrt{2} \sqrt{\pi} n^3 }
\int_{1/2}^{1 - \frac{2}{n^{1 - \alpha}}} \frac{dt}{t^{3/2} (1-t)^{5/2}}
.
\end{aligned}
\label{8.19}$$
Using Lemma [\[lemme 1.2\]](#lemme 1.2){reference-type="ref" reference="lemme 1.2"} with $\varepsilon = \frac{2}{n^{1 - \alpha}}$:
$$\int_{1/2}^{1 - \frac{2}{n^{1 - \alpha}}} \frac{dt}{t^{3/2} (1-t)^{3/2}} \leq
\sqrt{2} \frac{\sqrt{n}}{n^{\alpha/2}} \left( 1 + O_n \left( \frac{1}{n^{1 - \alpha}} \right)\right).$$ Moreover, $\int_{1/2}^{1 - \frac{2}{n^{1 - \alpha}}} \frac{dt}{t^{3/2} (1-t)^{5/2}} \sim \frac{n^{3/2}}{3 \sqrt{2} n^{3\alpha/2}}$. Therefore, [\[8.19\]](#8.19){reference-type="eqref" reference="8.19"} becomes: $$\begin{aligned}
\Sigma_1^n & \leq
\frac{kC}{\sqrt{\pi} n^{3/2}}\left( \frac{1}{n^{\alpha/2}} + \frac{1}{6n^{3\alpha/2}} + o_n\left( \frac{1}{n^{3\alpha/2}} \right) +O_n \left( \frac{1}{n^{1-\alpha/2}} \right) \right).
\end{aligned}$$ ◻
Now, let us prove Lemma [\[lemma 3.2\]](#lemma 3.2){reference-type="ref" reference="lemma 3.2"}, which deals with [\[3.5-\]](#3.5-){reference-type="eqref" reference="3.5-"}:
**Lemma 6**. *We have the following bound from above:*
*$$\Sigma_2^n \leq
\frac{Ck}{n^{3/2}} \left( 1 + O_n\!\left(\frac{1}{n^{1-\alpha}}\right)\right) \left( 1 - \frac{1}{\sqrt{\pi}n^{\alpha/2}} - \frac{3}{8 \sqrt{\pi} \, n^{3\alpha/2}} + o_n \left( \frac{1}{n^{3 \alpha / 2}} \right) \right).$$ [\[lemma 3.2\]]{#lemma 3.2 label="lemma 3.2"}*
*Proof.* By using $Q(k)$:
$$\begin{aligned}
\Sigma_2^n &\leq
\underset{p=n-2n^\alpha}{\overset{n-2}{\sum}} \frac{Ck}{p^{3/2}} K_1(n-p) \leq \frac{Ck}{(n-2n^\alpha)^{3/2}}P(\tau_1^\infty < 2n^\alpha).
\end{aligned}$$
Thanks to Lemma [\[lemme 1.3\]](#lemme 1.3){reference-type="ref" reference="lemme 1.3"}, and since $(n-2n^\alpha)^{-3/2} = n^{-3/2}\left(1 + O_n\!\left(\frac{1}{n^{1-\alpha}}\right)\right)$, we then have:
$$\begin{aligned}
\frac{Ck}{(n-2n^\alpha)^{3/2}}P(\tau_1^\infty < 2n^\alpha) \leq
& \frac{Ck}{n^{3/2}} \left( 1 + O_n\!\left(\frac{1}{n^{1 - \alpha}}\right)\right) \\
&\left( 1 - \frac{1}{\sqrt{\pi}n^{\alpha/2}} - \frac{3}{8 \sqrt{\pi} \, n^{3\alpha/2}} + o_n \left( \frac{1}{n^{3\alpha/2}} \right) \right).
\end{aligned}$$ ◻
We now have the tools to prove [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"}.
*Proof.* Let us start from [\[3.3\]](#3.3){reference-type="eqref" reference="3.3"}. Thanks to Lemmas [\[lemma 3.1\]](#lemma 3.1){reference-type="ref" reference="lemma 3.1"} and [\[lemma 3.2\]](#lemma 3.2){reference-type="ref" reference="lemma 3.2"}, it follows that, for $n$ large enough:
$$\begin{aligned}
K_{k+1}(n) &\leq \frac{8}{n^{3/2}} + \frac{Ck}{n^{3/2}} \left( 1 + O_n\!\left(\frac{1}{n^{1 - \alpha}}\right)\right) \left( 1 - \frac{1}{\sqrt{\pi}n^{\alpha/2}} - \frac{3}{8\sqrt{\pi}\,n^{3\alpha/2}} \right) \\ &+
\frac{kC}{\sqrt{\pi} n^{3/2}}\left( \frac{1}{n^{\alpha/2}} + \frac{1}{6 n^{3\alpha/2}} + O_n \left( \frac{1}{n^{1-\alpha/2}} \right) + o_n \left( \frac{1}{n^{3\alpha/2}} \right) \right) \\
& \leq \frac{8}{n^{3/2}} + \frac{Ck}{n^{3/2}} \left( 1 - \frac{5}{24\sqrt{\pi}\,n^{3 \alpha / 2}} + o_n \left( \frac{1}{n^{3 \alpha / 2}} \right) +
O_n\left( \frac{1}{n^{1 - \alpha}} \right) \right).
\label{3.15}
\end{aligned}$$ Therefore, when $\frac{3\alpha}{2} <1 - \alpha$ and for $n$ large enough: $$K_{k+1}(n) \leq \frac{8}{n^{3/2}} + \frac{Ck}{n^{3/2}} \left( 1 - \frac{5}{24\sqrt{\pi}\,n^{3 \alpha / 2}}(1+o_n(1)) \right).
\label{3.16}$$ Hence, $$K_{k+1}(n) \leq \frac{8}{n^{3/2}} + \frac{Ck}{n^{3/2}}.$$
Thus, we have the desired result for $n$ large enough - say larger than $M$. Hence, since $K_k(n) = 0$ if $n<k$, the implication $Q(k) \Longrightarrow Q(k+1)$ holds for $k>M$, provided $C>8$. It therefore remains to prove $Q(i)$ for $i \leq M$. Using [@Caravenna2009depinning Eq. (B.9)] with $T = \infty$, we obtain that $K_k(n) \leq \frac{ck^3}{n^{3/2}}$. By taking $C = \max \{ cM^2,8 \}$, [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"} is proven. ◻
## Proof of Equation [\[equation theoreme\]](#equation theoreme){reference-type="eqref" reference="equation theoreme"} [\[section 5.3\]]{#section 5.3 label="section 5.3"}
We now want to prove the result with an interface at height $T<\infty$. We see that, if $n<T$, $P \left( \tau_k^T=n \right) = P\left(\tau_k^\infty =n \right)$, because the simple random walk needs at least $T$ steps to reach another interface. Therefore, thanks to [\[2.3\]](#2.3){reference-type="eqref" reference="2.3"}, there is a constant $C>0$ such that, for all $k>0$ and $n<T$:
$$P\left(\tau_k^T=n \right) \leq \frac{Ck}{\min\{ n^{3/2},T^3 \}}e^{-ng(T)}.$$
The first thing we do is to drop the factor $e^{-ng(T)}$: if we can prove the bound without it, we can reinstate it by multiplying the constant by $\exp(\pi^2)$, because $ng(T) \leq n\frac{\pi^2}{2T^2}(1+o_T(1)) \leq \pi^2(1+o_T(1))$ (recall that $n \leq 2T^2$). We are going to use the exact same technique as in Subsection [\[section 5.2\]](#section 5.2){reference-type="ref" reference="section 5.2"}. Thus, let us prove the following proposition:
**Proposition 3**. *There exists a constant $C>0$ such that, for all $1<n<2T^2$: $$\forall k \in \mathbb{N}, \quad P\left(\tau_k^T = n\right) \leq \frac{Ck}{n^{3/2}}.$$ [\[proposition 4.1\]]{#proposition 4.1 label="proposition 4.1"}*
First, this constant exists for $1<n<T$. As in Subsection [\[section 5.2\]](#section 5.2){reference-type="ref" reference="section 5.2"}, we proceed by induction. Let $C>0$ be a real number, and let us denote by $Q(k)$ the induction hypothesis:
$$\forall \, 1<n<2T^2 , P\left(\tau_k ^T= n\right) \leq \frac{C k}{n^{3/2}}.
\label{hypothese}$$
For $k=1$, the bound holds thanks to [@Caravenna2009depinning Eq (2.13)], with the constant $c_2$ appearing there. Now, let us assume that $Q(k)$ is true, and take $T<n<2T^2$. We will use the following notation: $K_k^T(n) := P\left(\tau_k^T = n\right)$. We have:
$$K_{k+1}^T(n) = \underset{l=\frac{n}{2}}{\overset{n-2}{\sum}}K_k^T(l) K_1^T(n-l) + \underset{l=\frac{n}{2}}{\overset{n}{\sum}}K_1^T(l) K_k^T(n-l).
\label{4.4}$$ A simple bound from above and [\[probabilité pour la marche aléatoire simple que tau 1 vaille n\]](#probabilité pour la marche aléatoire simple que tau 1 vaille n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 vaille n"} give us: $$\begin{aligned}
\underset{l=\frac{n}{2}}{\overset{n}{\sum}}K_1^T(l) K_k^T(n-l) &\leq \underset{l=\frac{n}{2}}{\overset{n}{\sum}} \frac{c_2}{l^{3/2}} K_k^T(n-l) \leq \underset{l=\frac{n}{2}}{\overset{n}{\sum}} \frac{2^{5/2}c_2}{n^{3/2}} K_k^T(n-l) \\
& \leq \frac{8c_2}{n^{3/2}} P\left(\frac{n}{2}<\tau_k^T\right)
\leq \frac{8c_2}{n^{3/2}}.
\end{aligned}
\label{8.34}$$
Hence, [\[4.4\]](#4.4){reference-type="eqref" reference="4.4"} can be bounded from above by:
$$K_{k+1}^T(n) \leq \underset{l=\frac{n}{2}}{\overset{n-2}{\sum}}K_k^T(l) K_1^T(n-l) + \frac{8c_2}{n^{3/2}}.
\label{4.6}$$
Let us now bound $\underset{l=\frac{n}{2}}{\overset{n-2}{\sum}}K_k^T(l) K_1^T(n-l)$ sharply from above. To do this, we split the sum into three parts and bound each of them. Let us take $0<\alpha<1/2$, and bound from above the three quantities:
$$\Sigma_3^n := \underset{l=\frac{n}{2}}{\overset{n-\sqrt{n}}{\sum}}K_k^T(l) K_1^T(n-l),
\label{4.7000}$$
$$\Sigma_4^n := \underset{l=n-\sqrt{n}}{\overset{n-2n^\alpha-2}{\sum}}K_k^T(l) K_1^T(n-l),
\label{4.8000}$$
$$\Sigma_5^n := \underset{l=n-2n^\alpha}{\overset{n-2}{\sum}}K_k^T(l) K_1^T(n-l).
\label{4.9000}$$
We bound from above $\Sigma_3^n$ with [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"} and $Q(k)$, and the same computations as in the proof of Lemma [\[lemma 3.1\]](#lemma 3.1){reference-type="ref" reference="lemma 3.1"}:
$$\Sigma_3^n
\leq
kCc_2 \underset{l=n/2}{\overset{n-\sqrt{n}}{\sum}} \frac{1}{(n-l)^{3/2}l^{3/2}}
\leq \frac{kCc_2}{\sqrt{2} n^{3/2}} \left( \frac{1}{n^{1/4}}(1+o_n(1)) \right).
\label{Bourgogne 25}$$
Then, we notice that, in [\[4.8000\]](#4.8000){reference-type="eqref" reference="4.8000"} and [\[4.9000\]](#4.9000){reference-type="eqref" reference="4.9000"}, $K_1^T(n-l) = K_1(n-l)$ because $n-l<T$. Hence, we can bound from above $\Sigma_4^n$ by $\underset{l=n/2}{\overset{n-2n^\alpha-2}{\sum}}K_k^T(l) K_1(n-l)$:
$$\Sigma_4^n = \underset{l=n-\sqrt{n}}{\overset{n-2n^\alpha-2}{\sum}}K_k^T(l) K_1(n-l) <
\underset{l=n/2}{\overset{n-2n^\alpha-2}{\sum}}K_k^T(l) K_1(n-l).
\label{4.11000}$$
With this, by using the exact same technique as in Lemma [\[lemma 3.1\]](#lemma 3.1){reference-type="ref" reference="lemma 3.1"}, the following upper bound arises:
$$\Sigma_4^n \leq
\frac{kC}{\sqrt{\pi} n^{3/2}}\left( \frac{1}{n^{\alpha/2}} + \frac{1}{6n^{3\alpha/2}} + o_n \left( \frac{1}{n^{3\alpha/2}} \right) + O_n \left( \frac{1}{n^{1-\alpha/2}} \right) \right).
\label{4.12000}$$
Also, by using the exact same technique as in Lemma [\[lemma 3.2\]](#lemma 3.2){reference-type="ref" reference="lemma 3.2"}, we can bound from above [\[4.9000\]](#4.9000){reference-type="eqref" reference="4.9000"}:
$$\Sigma_5^n \leq
\frac{Ck}{n^{3/2}} \left( 1 + O_n\!\left(\frac{1}{n^{1 - \alpha}}\right)\right) \left( 1 - \frac{1}{\sqrt{\pi}n^{\alpha/2}} - \frac{3}{8 \sqrt{\pi} \, n^{3 \alpha / 2}} + o_n \left( \frac{1}{n^{3 \alpha / 2}} \right) \right).
\label{4.13000}$$
By summing the upper bounds [\[4.13000\]](#4.13000){reference-type="eqref" reference="4.13000"} and [\[4.12000\]](#4.12000){reference-type="eqref" reference="4.12000"}, and by choosing $\alpha$ small enough such that $\frac{3 \alpha}{2}<1 - \alpha$, we obtain the same result as in [\[3.15\]](#3.15){reference-type="eqref" reference="3.15"} and [\[3.16\]](#3.16){reference-type="eqref" reference="3.16"}: $$\Sigma_3^n + \Sigma_4^n + \Sigma_5^n
\leq
\frac{Ck}{n^{3/2}} \left( 1 - \frac{5}{24\sqrt{\pi}\, n^{3 \alpha / 2}}(1+o_n(1)) \right).
\label{4.14000}$$ By summing the upper bounds of [\[Bourgogne 25\]](#Bourgogne 25){reference-type="eqref" reference="Bourgogne 25"} and [\[4.14000\]](#4.14000){reference-type="eqref" reference="4.14000"}, taking $\alpha$ small enough such that $\frac{3 \alpha}{2} < 1/4$, and taking $n$ large enough such that $|o_n(1)| \leq 1$ in [\[Bourgogne 25\]](#Bourgogne 25){reference-type="eqref" reference="Bourgogne 25"} and [\[4.14000\]](#4.14000){reference-type="eqref" reference="4.14000"}:
$$\underset{l=\frac{n}{2}}{\overset{n-2}{\sum}}K_k^T(l) K_1^T(n-l) \leq \frac{Ck}{n^{3/2}}.
\label{4.15000}$$ Therefore, by plugging [\[4.15000\]](#4.15000){reference-type="eqref" reference="4.15000"} into [\[4.6\]](#4.6){reference-type="eqref" reference="4.6"}, and when $n$ is large enough - say larger than $M$:
$$K_{k+1}^T(n) \leq \frac{Ck}{n^{3/2}} + \frac{8c_2}{n^{3/2}}.$$
Thus, if $C>8c_2$, we have: $$K_{k+1}^T(n) \leq \frac{C(k+1)}{n^{3/2}}.$$
It now remains to prove the relation for all $n<M$, where $M>0$ comes from the $o_n(1)$ in [\[4.14000\]](#4.14000){reference-type="eqref" reference="4.14000"} and [\[Bourgogne 25\]](#Bourgogne 25){reference-type="eqref" reference="Bourgogne 25"}. This follows from [@Caravenna2009depinning Eq. (B.9)]: $K_k^T(n) \leq \frac{ck^3}{\min\{n^{3/2},T^3\}}$. Hence, by taking $C = \max \{ cM^2, 8c_2 \}$, the induction is proven, and Proposition [\[proposition 4.1\]](#proposition 4.1){reference-type="ref" reference="proposition 4.1"} follows.
## Proof of Equation [\[8.6\]](#8.6){reference-type="eqref" reference="8.6"} [\[Section 5.4\]]{#Section 5.4 label="Section 5.4"}
For the regime $n \geq T^2$, our method reaches its limits. However, we can restrict the set of trajectories under consideration and still obtain a result sharp enough to prove Proposition [\[prop 1.1\]](#prop 1.1){reference-type="ref" reference="prop 1.1"}. Let us note that, thanks to Proposition [\[proposition 4.1\]](#proposition 4.1){reference-type="ref" reference="proposition 4.1"}, there exist $C_0, C' > 0$ such that, when $n<2T^2$ and for $k \in \mathbb{N}$: $$\begin{aligned}
P\left(\tau_k^T = n \text{ and } \tau_1^T<T^2,...,\tau_k^T- \tau_{k-1}^T<T^2 \right)
& \leq P(\tau_k^T = n)\\
&\leq \frac{C_0}{T^3}e^{-ng(T)}\left(1+\frac{C'}{T}\right)^k.
\end{aligned}$$
Let us now prove [\[8.6\]](#8.6){reference-type="eqref" reference="8.6"}. We define: $$Q_k^T(n) := P\left(\tau_k^T = n \text{ and } \tau_1^T<T^2,...,\tau_k^T- \tau_{k-1}^T<T^2\right).
\label{4.20}$$ Let $C>C_0$ be a real number. Let $H(k)$ be the induction hypothesis:
$$\forall n>2T^2, \quad Q_k^T(n) \leq \frac{C}{T^3}e^{-ng(T)}\left(1+\frac{C'}{T}\right)^k.$$
Let us remark that $H(1)$ is true because $Q_1^T(n) = 0$ when $n \geq 2T^2$. We now prove that $H(k) \implies H(k+1)$. Let us assume that $H(k)$ is true. Let $n>2T^2$ be an integer. Then,
$$\begin{aligned}
Q_{k+1}^T(n) &= \underset{l=1}{\overset{T^2}{\sum}} Q_1^T(l) Q_k^T(n-l).
\end{aligned}
\label{4.21}$$
To bound [\[4.21\]](#4.21){reference-type="eqref" reference="4.21"} from above, let us use that $g(T) \leq \frac{10}{T^2}$, so that $e^{lg(T)} \leq 1 + \frac{C''l}{T^2}$ for $l \leq T^2$ and a certain $C''>0$. Hence, thanks to $H(k)$ (and to the bound above, with $C_0 \leq C$, when $n-l \leq 2T^2$):
$$\begin{aligned}
&\underset{l=1}{\overset{T^2}{\sum}} Q_1^T(l) Q_k^T(n-l) \leq
\frac{C e^{-ng(T)}}{T^3} \underset{l=1}{\overset{T^2}{\sum}} Q_1^T(l) e^{lg(T)} \left(1+\frac{C'}{T}\right)^k\\
&\leq \frac{C e^{-ng(T)}}{T^3} \left(1+\frac{C'}{T}\right)^k \underset{l=1}{\overset{T^2}{\sum}} Q_1^T(l) \left(1+\frac{C''l}{T^2} \right)\\
&\leq \frac{C e^{-ng(T)}}{T^3} \left(1+\frac{C'}{T}\right)^k \left( \mathcal{P}_{\delta_N,T_N} \left( \tau_1^T \leq T^2 \right) + \underset{l=1}{\overset{T^2}{\sum}} \frac{c_2}{l^{3/2}} \frac{C''l}{T^2} \right) \\
&\leq \frac{C e^{-ng(T)}}{T^3} \left(1+\frac{C'}{T}\right)^k \left( 1 + \frac{C'''}{T} \right),
\end{aligned}$$ where we have used [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"} in the penultimate line. Therefore, the induction is proven once we take $C'$ bigger than $C'''$. Equation [\[8.6\]](#8.6){reference-type="eqref" reference="8.6"} follows.
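To make the closing step explicit, here is the one-line check behind the choice of $C'$ (a sketch with the constants above, nothing new):

```latex
% Combining the bound above with H(k): whenever C' \geq C''',
Q_{k+1}^T(n)
  \;\leq\; \frac{C e^{-ng(T)}}{T^{3}} \left(1+\frac{C'}{T}\right)^{k} \left(1+\frac{C'''}{T}\right)
  \;\leq\; \frac{C e^{-ng(T)}}{T^{3}} \left(1+\frac{C'}{T}\right)^{k+1},
% which closes the induction.
```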
# Asymptotic renewal estimates [\[section 60.0\]]{#section 60.0 label="section 60.0"}
**Outline** In this section, we prove Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"}. We first establish some asymptotic estimates in Subsection [\[section 5000\]](#section 5000){reference-type="ref" reference="section 5000"}. The proof of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} then comes in three parts: the case $n \geq T_N^3 \delta_N$ is treated in Section [\[section 6\]](#section 6){reference-type="ref" reference="section 6"}, the lower bound of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} is proven in Section [\[section 7\]](#section 7){reference-type="ref" reference="section 7"}, and the upper bound in Section [\[section 8\]](#section 8){reference-type="ref" reference="section 8"}.
## First estimates [\[section 5000\]]{#section 5000 label="section 5000"}
Let us consider the renewal process $\left(\left(\tau_{n}\right)_{n \geq 0}, \mathcal{P}_{\delta_N,T_N}\right)$. It turns out that the law of $\tau_{1}$ under $\mathcal{P}_{\delta_N,T_N}$ has two sorts of jumps: small jumps, of order $O_N(1)$, carrying mass $1-\delta_N$, and big jumps, of order $O_N\left( T_N^{3} \delta_N\right)$, carrying mass $\delta_N$. Although we do not fully prove these facts, it is useful to keep them in mind. We start with an estimate of $\mathcal{P}_{\delta_N,T_N} \left( \tau_{1}=n \right)$, which is easily derived from Lemma [\[Lemme sur toucher en n pour la m.a.s.\]](#Lemme sur toucher en n pour la m.a.s.){reference-type="ref" reference="Lemme sur toucher en n pour la m.a.s."}.
**Lemma 7**. *There exist $c_2>c_1>0$ and $T_0 \in 2\mathbb{N}$ such that, for all $n \in 2\mathbb{N}$ and $T_N \geq T_0$: $$\begin{aligned}
\frac{c_1}{\min \{ n^{3/2}, T_N^3\}} e^{-(g(T_N) + \phi(T_N, \delta_N))n} &\leq \mathcal{P}_{\delta_N,T_N} \left( \tau_1 = n \right) \\
&\leq \frac{c_2}{\min \{ n^{3/2}, T_N^3 \} } e^{-(g(T_N) + \phi(T_N, \delta_N))n}.
\label{probabilité que tau 1 = n}
\end{aligned}$$*
Let us stress that this estimate always holds, even when [\[H_0\]](#H_0){reference-type="eqref" reference="H_0"} is not assumed.
*Proof.* Equation [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"} is an immediate consequence of the equations [\[définition de la mesure du renouvellement\]](#définition de la mesure du renouvellement){reference-type="eqref" reference="définition de la mesure du renouvellement"} and [\[probabilité pour la marche aléatoire simple que tau 1 vaille n\]](#probabilité pour la marche aléatoire simple que tau 1 vaille n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 vaille n"}. ◻
**Lemma 8**. *When $a>b$, there exist $N_{0} \in \mathbb{N}$ and $c_{3} > 0$ such that, for $N>N_{0}$, the inequalities below hold for any $m, n \in 2 \mathbb{N} \cup\{+\infty\}$ with $m<n$, with the same $c_1$ as in [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"}.*
*$$\begin{aligned}
\mathcal{P}_{\delta_N,T_N} \left( \tau_1 > m \right) \leq
c_3 \left( \frac{\mathbf{1} \{ m<T_N^2 \} }{\sqrt{m}} + \delta_N \right) e^{-(g(T_N) + \phi(\delta_N,T_N))m}
\label{majoration de la proba que tau 1 soit > m},
\end{aligned}$$*
*$$\mathcal{P}_{\delta_N,T_N} \left( m \leq \tau_1 < n \right) \geq
c_1 \delta_N \left( e^{-(g(T_N) + \phi(\delta_N,T_N))m} - e^{-(g(T_N) + \phi(\delta_N,T_N))n} \right).
\label{minoration de la proba que tau 1 soit entre m et n}$$ [\[lemme 4.2\]]{#lemme 4.2 label="lemme 4.2"}*
*Proof.* To prove [\[minoration de la proba que tau 1 soit entre m et n\]](#minoration de la proba que tau 1 soit entre m et n){reference-type="eqref" reference="minoration de la proba que tau 1 soit entre m et n"}, we sum the lower bound of [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"} over $k$. To this end, we use that $\min \left\{T_N^{3}, k^{3 / 2}\right\} \leq T_N^{3}$ and note that, by [\[phi\]](#phi){reference-type="eqref" reference="phi"} and [\[g\]](#g){reference-type="eqref" reference="g"}, for $N$ large enough:
$$g(T_N) + \phi(\delta_N,T_N) = \frac{2 \pi ^2}{T_N^3 \delta_N}(1 + o_N(1)).
\label{g + phi}$$
To obtain [\[majoration de la proba que tau 1 soit \> m\]](#majoration de la proba que tau 1 soit > m){reference-type="eqref" reference="majoration de la proba que tau 1 soit > m"}, we distinguish between two cases.
- For $m \geq T_N^2$, we sum the upper bound of [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"} using [\[g + phi\]](#g + phi){reference-type="eqref" reference="g + phi"}.
- For $m \leq T_N^2$, we use that $\frac{1}{\min \{ n^{3/2}, T_N^3 \}} < \frac{1}{T_N^3} + \frac{1}{n^{3/2}}$ and sum the upper bound of [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"}.
◻
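For the reader's convenience, here is a sketch of the geometric summation behind [\[minoration de la proba que tau 1 soit entre m et n\]](#minoration de la proba que tau 1 soit entre m et n){reference-type="eqref" reference="minoration de la proba que tau 1 soit entre m et n"}; the shorthand $\lambda := g(T_N)+\phi(\delta_N,T_N)$ is used only here:

```latex
% Sum the lower bound of the local estimate over even k in [m, n),
% comparing the sum with an integral (the summand is decreasing):
\mathcal{P}_{\delta_N,T_N}\left( m \leq \tau_1 < n \right)
  \;\geq\; \frac{c_1}{T_N^{3}} \sum_{\substack{m \leq k < n \\ k \in 2\mathbb{N}}} e^{-\lambda k}
  \;\geq\; \frac{c_1}{2\, T_N^{3}\, \lambda}\left( e^{-\lambda m} - e^{-\lambda n} \right).
% Since \lambda = \frac{2\pi^2}{T_N^3 \delta_N}(1+o_N(1)) by (g + phi),
% the prefactor is of order \delta_N, as claimed.
```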
We will also need three further identities:
**Lemma 9**. *When $a>b$:*
*$$\mathcal{E}_{\delta_N, T_N} \left( \tau_1 \right) = \frac{T_N^3 \delta_N^2}{2 \pi^2},
\label{espérance de tau 1}$$*
*$$\mathcal{P}_{\delta_N,T_N} \left( \varepsilon_1^2 = 1 \right) = \frac{\delta_N}{2}(1 + o_N(1)),
\label{probabilité de changer d'interface}$$*
*$$\mathcal{E}_{\delta_N, T_N} \left( \tau_1^2 \right) = \frac{T_N^6 \delta_N^3}{2 \pi^4}(1 + o_N(1)).
\label{moment d'ordre 2 de tau 1}$$*
The expected value of $\tau_1$ gives us an idea of $\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right)$ in the stationary regime. Equation [\[moment d\'ordre 2 de tau 1\]](#moment d'ordre 2 de tau 1){reference-type="eqref" reference="moment d'ordre 2 de tau 1"} gives us the second moment, hence the variance, of $\tau_1$. These equalities are proven in Appendix [\[Appendix A.2\]](#Appendix A.2){reference-type="ref" reference="Appendix A.2"}. Heuristically, Equation [\[espérance de tau 1\]](#espérance de tau 1){reference-type="eqref" reference="espérance de tau 1"} combined with [\[probabilité de changer d\'interface\]](#probabilité de changer d'interface){reference-type="eqref" reference="probabilité de changer d'interface"} tells us that the average time needed to switch interface is of order $T_N^3\delta_N$, which is greater than $N$ under [\[H_0\]](#H_0){reference-type="eqref" reference="H_0"}. Thus, we should not see any interface change before $N$.
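The heuristic in the last sentence amounts to the following back-of-the-envelope computation (not a proof: it treats the number of jumps before a switch as exactly $1/\mathcal{P}_{\delta_N,T_N}(\varepsilon_1^2=1)$):

```latex
% Mean number of jumps before an interface change, times the mean jump length:
\frac{\mathcal{E}_{\delta_N,T_N}\left( \tau_1 \right)}{\mathcal{P}_{\delta_N,T_N}\left( \varepsilon_1^2 = 1 \right)}
  = \frac{T_N^3 \delta_N^2 / (2\pi^2)}{(\delta_N/2)(1 + o_N(1))}
  = \frac{T_N^3 \delta_N}{\pi^2}\,(1 + o_N(1)),
% which is much larger than N under (H_0).
```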
## Proof of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"}, case $n \geq T_N^3 \delta_N$ [\[section 6\]]{#section 6 label="section 6"}
Let us assume for the moment that Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} is true for $n \leq T_N^3 \delta_N$. Our aim in Section [\[section 6\]](#section 6){reference-type="ref" reference="section 6"} is to extend it to all of $2\mathbb{N}$.
### Lower bound
We denote $\mu_m := \inf \{ k \geq m : k \in \tau \}$. For $n \geq T_N^3 \delta_N$:
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) \geq \mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^3\delta_N, n-1\right] \neq \emptyset, n \in \tau \right) \\
&=\sum_{k=n-T_N^3\delta_N}^{n-1} \mathcal{P}_{\delta_N,T_N} \left( \mu_{n-T_N^3\delta_N}=k \right) \mathcal{P}_{\delta_N,T_N} \left( n-k \in \tau \right) \\
& \geq
\frac{c}{T_N^{3} \delta_N^2} \mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^3\delta_N, n-1\right] \neq \emptyset \right) .
\end{aligned}
\label{Bourgogne 1}$$
The factor $c/(T_N^{3} \delta_N^2)$ comes from Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"}, admitted up to $T_N^3\delta_N$. It is now sufficient to show that there exist $cst, T_{0}>0$ such that, for $T_N>T_{0}$ and $n \geq T_N^3 \delta_N$:
$$\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^3\delta_N, n-1\right] \neq \emptyset \right) \geq cst .
\label{Bourgogne 2}$$
We prove the following equivalent fact:
$$\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^3\delta_N, n-1\right] \neq \emptyset \right) \geq C
\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^3\delta_N, n-1\right]=\emptyset \right) ,
\label{Bourgogne 3}$$
for a suitable $C>0$. Thanks to [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"}:
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^3\delta_N, n-1\right] \neq \emptyset \right) \\
&=\sum_{l=0}^{n-T_N^3\delta_N-1} \mathcal{P}_{\delta_N,T_N} \left( l \in \tau \right) \sum_{k=n-T_N^3\delta_N}^{n-1} \mathcal{P}_{\delta_N,T_N} \left( \tau_{1}=k-l \right) \\
&\geq c \sum_{l=0}^{n-T_N^3\delta_N-1} \mathcal{P}_{\delta_N,T_N} \left( l \in \tau \right) \delta_N \Bigg(e^{-(\phi(\delta_N, T_N)+g(T_N))\left(n-T_N^3\delta_N-l\right)} \\
& \hspace{6cm} -e^{-(\phi(\delta_N, T_N)+g(T_N))(n-l)}\Bigg).
\end{aligned}
\label{Bourgogne 4}$$
Thanks to [\[majoration de la proba que tau 1 soit \> m\]](#majoration de la proba que tau 1 soit > m){reference-type="eqref" reference="majoration de la proba que tau 1 soit > m"}:
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^3\delta_N, n-1\right]=\emptyset \right) \\
&=\underset{l=0}{\overset{n-T_N^3\delta_N-1}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( l \in \tau \right) \underset{k \geq n}{\overset{}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( \tau_{1}=k-l \right) \\
& \leq c \underset{l=0}{\overset{n-T_N^3\delta_N-1}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( l \in \tau \right) e^{-(\phi(\delta_N, T_N)+g(T_N))(n-l)}
\delta_N.
\end{aligned}
\label{Bourgogne 5}$$ Thus, by [\[g + phi\]](#g + phi){reference-type="eqref" reference="g + phi"}: $$\begin{aligned}
\frac{ \left( e^{-(\phi(\delta_N, T_N)+g(T_N))\left(n-T_N^3\delta_N-l\right)}-e^{-(\phi(\delta_N, T_N)+g(T_N))(n-l)} \right) \delta_N }{ \left( e^{-(\phi(\delta_N, T_N)+g(T_N))(n-l)} \right) \delta_N } > \\
e^{T_N^3\delta_N(\phi(\delta_N, T_N)+g(T_N))}-1 \sim_N e^{2 \pi^2} - 1,
\end{aligned}
\label{Bourgogne 6}$$ which bounds this ratio from below by a strictly positive real number. The desired result follows.
### Upper bound
It remains to prove the upper bound of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} for $n \geq T_N^{3} \delta_N$. We compute:
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^{3}\delta_N, n-T_N^2 \right] \neq \emptyset, n \in \tau \right) \\
& =\sum_{k=n-T_N^{3} \delta_N}^{n-T_N^{2}} \mathcal{P}_{\delta_N,T_N} \left( \mu_{n-T_N^{3}\delta_N}=k \right) \mathcal{P}_{\delta_N,T_N} \left( n-k \in \tau \right) \\
& \leq \frac{c}{\delta_N^2 T_N^{3}} \mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^{3}\delta_N, n-T_N^{2} \right] \neq \emptyset \right) \leq \frac{c}{\delta_N^2 T_N^{3}},
\end{aligned}
\label{7.1}$$
by applying the upper bound of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} to $\mathcal{P}_{\delta_N,T_N} \left( n-k \in \tau \right)$, which is legitimate because $T_N^{2} \leq n-k \leq T_N^{3} \delta_N$. If we now show that there exist $cst, N_{0}>0$ such that, for $N>N_{0}$ and for $n>T_N^{3}\delta_N$
$$\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^{3}\delta_N, n-T_N^{2}\right] \neq \emptyset \mid n \in \tau \right) \geq cst,
\label{Bourgogne 10}$$
we deduce that
$$\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) \leq \frac{1}{cst} \mathcal{P}_{\delta_N,T_N} \left( \tau \cap \left[n-T_N^{3}\delta_N, n-T_N^{2} \right] \neq \emptyset , n \in \tau \right) \leq \frac{cst}{T_N^3 \delta_N^2},
\label{Bourgogne 11}$$
and the proof is complete. Instead of [\[Bourgogne 10\]](#Bourgogne 10){reference-type="eqref" reference="Bourgogne 10"}, we prove the following equivalent relation
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^{3} \delta_N , n-T_N^{2} \right] \neq \emptyset, n \in \tau \right) \\
&\geq C \mathcal{P}_{\delta_N,T_N} \left( \tau \cap \left[n - T_N^{3}\delta_N, n-T_N^{2}\right]=\emptyset, n \in \tau \right) ,
\end{aligned}
\label{Bourgogne 12}$$
for some $C>0$. Let us start with the first term:
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^{3}\delta_N, n-T_N^{2}\right] \neq \emptyset, n \in \tau \right) \\
&=\sum_{m=0}^{n-T_N^{3}\delta_N-1} \mathcal{P}_{\delta_N,T_N} \left( m \in \tau \right)
\sum_{l=n-T_N^{3}\delta_N}^{n-T_N^{2}}
\mathcal{P}_{\delta_N,T_N} \left( \tau_{1}=l-m \right) \mathcal{P}_{\delta_N,T_N} \left( n-l \in \tau \right) .
\end{aligned}
\label{Bourgogne 13}$$
Let us notice that $\mathcal{P}_{\delta_N,T_N} \left( n-l \in \tau \right) \geq c / T_N^{3}\delta_N^2$ for $n-l \in 2 \mathbb{N}$ thanks to the lower bound of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"}. By [\[minoration de la proba que tau 1 soit entre m et n\]](#minoration de la proba que tau 1 soit entre m et n){reference-type="eqref" reference="minoration de la proba que tau 1 soit entre m et n"} and [\[g + phi\]](#g + phi){reference-type="eqref" reference="g + phi"},
$$\begin{aligned}
&\sum_{l=n-T_N^{3}\delta_N}^{n-T_N^{2}} \mathcal{P}_{\delta_N,T_N} \left( \tau_{1}=l-m \right) \\
& \geq c \delta_N \left(e^{-(\phi(\delta_N, T_N)+g(T_N))\left(n-T_N^{3}\delta_N-m\right)}-e^{-(\phi(\delta_N, T_N)+g(T_N))\left(n-T_N^{2}-m\right)}\right) \\
&=c \delta_N e^{-(\phi(\delta_N, T_N)+g(T_N))\left(n-T_N^{3}\delta_N -m \right)}
\left(1-e^{-(\phi(\delta_N, T_N)+g(T_N))\left(T_N^{3}\delta_N-T_N^{2} \right)}\right) \\
&\geq c \delta_N e^{-(\phi(\delta_N, T_N)+g(T_N))\left(n-T_N^{3} \delta_N -m\right)}
\geq
c \delta_N e^{-(\phi(\delta_N, T_N)+g(T_N))(n-m)}.
\end{aligned}
\label{Bourgogne 14}$$
Returning to [\[Bourgogne 13\]](#Bourgogne 13){reference-type="eqref" reference="Bourgogne 13"}:
$$\begin{aligned}
& \mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^{3}\delta_N, n-T_N^{2}\right] \neq \emptyset, n \in \tau \right) \\
\geq & \frac{c}{T_N^{3} \delta_N} \sum_{m=0}^{n-T_N^{3}\delta_N-1} \mathcal{P}_{\delta_N,T_N} \left( m \in \tau \right) e^{-(\phi(\delta_N, T_N)+g(T_N))(n-m)} .
\end{aligned}
\label{Bourgogne 15}$$
Now, let us focus on the second term of [\[Bourgogne 12\]](#Bourgogne 12){reference-type="eqref" reference="Bourgogne 12"}:
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^{3}\delta_N, n-T_N^{2}\right]=\emptyset, n \in \tau \right) \\
&=\sum_{m=0}^{n-T_N^{3}\delta_N-1} \mathcal{P}_{\delta_N,T_N} \left( m \in \tau \right) \sum_{l=n-T_N^{2}}^{n} \mathcal{P}_{\delta_N,T_N} \left( \tau_{1}=l-m \right) \mathcal{P}_{\delta_N,T_N} \left( n-l \in \tau \right) .
\end{aligned}
\label{Bourgogne 16}$$
Since $l-m \geq T_N^{3}\delta_N-T_N^{2}$, thanks to the upper bound in [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"}, we obtain
$$\begin{aligned}
\mathcal{P}_{\delta_N,T_N} \left( \tau_{1}=l-m \right) & \leq \frac{c}{T_N^{3}} e^{-(\phi(\delta_N, T_N)+g(T_N))(l-m)} \\
& \leq \frac{c}{T_N^{3}} e^{-(\phi(\delta_N, T_N)+g(T_N))(n-m)},
\end{aligned}
\label{Bourgogne 18}$$
because $n-l \leq T_N^{2}$ (recall [\[g + phi\]](#g + phi){reference-type="eqref" reference="g + phi"} and $\underset{N \longrightarrow \infty}{\lim} T_N \delta_N = \infty$). Moreover, thanks to the upper bound of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} applied to $\mathcal{P}_{\delta_N,T_N} \left( n-l \in \tau \right)$ for $n-l \leq T_N^{2}$, we obtain
$$\sum_{l=n-T_N^{2}}^{n} \mathcal{P}_{\delta_N,T_N} \left( n-l \in \tau \right) \leq c
\underset{l=1}{\overset{\frac{1}{\delta_N^2}}{\sum}}\frac{1}{\sqrt{l}} + \frac{c}{\delta_N^2} \underset{l=\frac{1}{\delta_N^2}}{\overset{T_N^2 }{\sum}} \frac{1}{l^{3/2}} \leq \frac{c}{\delta_N}.
\label{Bourgogne 30}$$
Returning to [\[Bourgogne 16\]](#Bourgogne 16){reference-type="eqref" reference="Bourgogne 16"}, we obtain
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \tau \cap\left[n-T_N^{3}\delta_N, n-T_N^{2}\right]=\emptyset, n \in \tau \right) \\
&\leq \frac{c}{ T_N^{3} \delta_N} \sum_{m=0}^{n-T_N^{3}\delta_N-1} \mathcal{P}_{\delta_N,T_N} \left( m \in \tau \right) e^{-(\phi(\delta_N, T_N)+g(T_N))(n-m)} .
\end{aligned}
\label{Bourgogne 17}$$
By comparing [\[Bourgogne 15\]](#Bourgogne 15){reference-type="eqref" reference="Bourgogne 15"} and [\[Bourgogne 17\]](#Bourgogne 17){reference-type="eqref" reference="Bourgogne 17"}, we see that [\[Bourgogne 12\]](#Bourgogne 12){reference-type="eqref" reference="Bourgogne 12"} is proven, which concludes the proof.
## Proof of the lower bound of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} [\[section 7\]]{#section 7 label="section 7"}
In this section, we deal with the case $n<T_N^3 \delta_N$. We allow ourselves to neglect the exponential terms in the equations of Lemma [\[lemme 4.2\]](#lemme 4.2){reference-type="ref" reference="lemme 4.2"}: indeed, by dividing $c_1$ by $e$, these equations remain true without the exponential terms. Here is the main idea of the proof: we consider only those trajectories in which exactly one jump is larger than a threshold $s_n$, defined below, all other jumps being smaller. Heuristically, such trajectories, with one big jump and many small ones, are typical. Let us start with some estimates:
**Lemma 10**. *There exist $c_5$ and $c_6$ such that, for all $1 \leq n \leq T_N^2$: $$\mathcal{E}_{\delta_N, T_N} \left( \tau_1 \mathbf{1} \{ \tau_1 \leq n \} \right) \leq c_5 \sqrt{n},
\label{7.10}$$ $$\mathcal{E}_{\delta_N, T_N} \left( \tau_1 | \tau_1 \leq n \right) \leq c_6 \sqrt{n}.
\label{7.11}$$ [\[lemme 7.5\]]{#lemme 7.5 label="lemme 7.5"}*
*Proof.* Equation [\[7.10\]](#7.10){reference-type="eqref" reference="7.10"} follows from [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"} and standard computation techniques. To prove [\[7.11\]](#7.11){reference-type="eqref" reference="7.11"}, we simply use that $\mathcal{P}_{\delta_N,T_N} \left( \tau_1 \leq n \right) \geq \mathcal{P}_{\delta_N,T_N} \left( \tau_1 = 2 \right) \geq c$. ◻
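The computation behind [\[7.10\]](#7.10){reference-type="eqref" reference="7.10"} is the usual one for a $3/2$-tail; here is a sketch, dropping the exponential terms and using that $k \leq n \leq T_N^2$ forces $\min\{k^{3/2},T_N^3\} = k^{3/2}$:

```latex
\mathcal{E}_{\delta_N, T_N}\left( \tau_1 \mathbf{1} \{ \tau_1 \leq n \} \right)
  = \sum_{\substack{k \leq n \\ k \in 2\mathbb{N}}} k\, \mathcal{P}_{\delta_N,T_N}\left( \tau_1 = k \right)
  \leq \sum_{\substack{k \leq n \\ k \in 2\mathbb{N}}} k\, \frac{c_2}{k^{3/2}}
  \leq c_2 \sum_{k \leq n} \frac{1}{\sqrt{k}}
  \leq c_5 \sqrt{n}.
```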
Let us now turn to the proof of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} for $n \leq T_N^3 \delta_N$. Rather than computing $\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right)$, we compute $\mathcal{P}_{\delta_N,T_N} \left( \left(2 \frac{c_6}{c_1}+1\right)n \in \tau \right)$, with $c_1$ appearing in [\[minoration de la proba que tau 1 soit entre m et n\]](#minoration de la proba que tau 1 soit entre m et n){reference-type="eqref" reference="minoration de la proba que tau 1 soit entre m et n"} and $c_6$ in [\[7.11\]](#7.11){reference-type="eqref" reference="7.11"}. The reason for this choice appears in Equation [\[7.111\]](#7.111){reference-type="eqref" reference="7.111"}; it simplifies the computations. We then introduce the following notation:
$$n' = \left( 2\frac{c_6}{c_1} + 1 \right)n.
\label{7.3}$$
We define $\xi_i := \tau_i - \tau_{i-1}$ and $s_n := \min\{n, 1/\delta_N^2\}$. A jump is called "small" if it is smaller than $s_n$. For $k$ in $\mathbb{N}$, we define the following event:
$$A_k(n) := \{ \xi_1 + ... + \xi_k = n' \text{ and }
\# \{ i \leq k \colon \xi_i < s_n \}= k-1 \},$$ which represents the event of reaching $n'$, defined in [\[7.3\]](#7.3){reference-type="eqref" reference="7.3"}, in $k$ jumps, exactly one of which is larger than $s_n$. Thus, $\mathcal{P}_{\delta_N,T_N} \left( n' \in \tau \right) \geq \underset{k=1}{\overset{ \sqrt{s_n}/c_1}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( A_k(n) \right)$, since we only consider a subset of the trajectories reaching $n'$. We also define, for $k$ in $\mathbb{N}$:
$$B_k(n) := \left\{ \xi_1 + ... + \xi_k < 2 \frac{c_6}{c_1} n \right\} \cap \left \{ \forall i \leq k, \xi_i< s_n \right\},$$
which represents the event that the first $k$ jumps are all smaller than $s_n$ and their sum does not exceed $2\frac{c_6}{c_1} n$, and
$$C_k(n) := \{\forall i \leq k, \, \xi_i < s_n \},$$
which represents the event that the first $k$ jumps are small. We now prove Lemma [\[lemme 7.6\]](#lemme 7.6){reference-type="ref" reference="lemme 7.6"}:
**Lemma 11**. *For $k \leq \frac{\sqrt{s_n}}{c_1}$ and $n \leq T_N^3 \delta_N$, $$\mathcal{P}_{\delta_N,T_N} \left( A_k(n) \right) > c \frac{k}{\min \{n^{3/2}, T_N^3\}} \left(1-\frac{c_1}{\sqrt{n}} \right)^{k-1}.$$ [\[lemme 7.6\]]{#lemme 7.6 label="lemme 7.6"}*
*Proof.* We first decompose according to the location of the big jump and use [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"}. Recall that we may neglect all exponential terms in these equations: $$\begin{aligned}
& \mathcal{P}_{\delta_N,T_N} \left( A_k(n) \right) =\underset{j=1}{\overset{k}{\sum}} \underset{
\substack{l_1,...,l_{k} \in \mathbb{N} \\
l_1 + ... + l_{k} =n' \\
l_j \geq s_n \\
l_i < s_n \text{ for } i \neq j }}{\overset{}{\sum}}
\mathcal{P}_{\delta_N,T_N} \left( (\xi_1,...,\xi_k) = (l_1,...,l_k) \right) \\
& = k \underset{
\substack{1 \leq l_1,...,l_{k-1} < s_n\\
l_1 + ... + l_{k-1} \leq \left( 2\frac{c_6}{c_1} \right) n \\
}}{\overset{}{\sum}}
\mathcal{P}_{\delta_N,T_N} \left( (\xi_1,...,\xi_{k-1}) = (l_1,...,l_{k-1}) \right) \\
& \hspace{3.5 cm}\mathcal{P}_{\delta_N,T_N} \left( \xi_k = n' - \underset{i=1}{\overset{k-1}{\sum}} l_i \right) \\
& \geq \frac{k c_1}{\min \{ n^{3/2}, T_N^3 \}} \underset{
\substack{1 \leq l_1,...,l_{k-1} < s_n \\
l_1 + ... + l_{k-1} \leq \left( 2\frac{c_6}{c_1} \right) n \\
}}{\overset{}{\sum}}
\mathcal{P}_{\delta_N,T_N} \left( (\xi_1,...,\xi_{k-1}) = (l_1,...,l_{k-1}) \right) \\
& \geq
\frac{k c_1}{\min \{ n^{3/2}, T_N^3 \}} \mathcal{P}_{\delta_N,T_N} \left( B_{k-1}(n) \right) .
\end{aligned}
\label{7.18}$$ We now look for a lower bound on $\mathcal{P}_{\delta_N,T_N} \left( B_{k-1}(n) \right)$. This can be done by conditioning on the event that all jumps are smaller than $s_n$, which is $C_{k-1}(n)$.
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( B_{k-1}(n) \right) = \mathcal{P}_{\delta_N,T_N} \left( B_{k-1}(n) | C_{k-1}(n) \right) \mathcal{P}_{\delta_N,T_N} \left( C_{k-1}(n) \right) \\
& \geq \mathcal{P}_{\delta_N,T_N} \left( \xi_1 + ... + \xi_{k-1} < 2\frac{c_6}{c_1}n \Big| \forall i \leq k-1, \, \xi_i<s_n \right) \mathcal{P}_{\delta_N,T_N} \left( \tau_1<s_n \right) ^{k-1} \\
& \geq \mathcal{P}_{\delta_N,T_N} \left( \xi_1 + ... + \xi_{k-1} < 2 k c_6 \sqrt{n} \Big|\forall i \leq k-1, \, \xi_i<s_n \right) \\
& \quad \mathcal{P}_{\delta_N,T_N} \left( \tau_1<s_n \right) ^{k-1}. \\
\end{aligned}$$ where we used that $k<\frac{\sqrt{s_n}}{c_1}$. To make the following equation more readable, we write $X$ for a random variable equal in law to $\xi_1 + ... + \xi_{k-1}$ when, for all $i \leq k-1$, $\xi_i$ is conditioned to be smaller than $s_n$. Lemma [\[lemme 7.5\]](#lemme 7.5){reference-type="ref" reference="lemme 7.5"} therefore gives us: $$\begin{aligned}
\mathcal{P}_{\delta_N,T_N} \left( B_{k-1}(n) \right) & \geq \mathcal{P}_{\delta_N,T_N} \left( X \leq 2E(X) \right) \mathcal{P}_{\delta_N,T_N} \left( \tau_1<s_n \right) ^{k-1} \\
& \geq \frac{1}{2}\left( 1- \frac{c_1}{\sqrt{s_n}} \right)^{k-1},
\end{aligned}
\label{7.19}$$ where we used the Markov inequality and [\[minoration de la proba que tau 1 soit entre m et n\]](#minoration de la proba que tau 1 soit entre m et n){reference-type="eqref" reference="minoration de la proba que tau 1 soit entre m et n"} in the last line. Put together, [\[7.18\]](#7.18){reference-type="eqref" reference="7.18"} and [\[7.19\]](#7.19){reference-type="eqref" reference="7.19"} give us the Lemma [\[lemme 7.6\]](#lemme 7.6){reference-type="ref" reference="lemme 7.6"}. ◻
Thus, we can conclude the proof. By Lemma [\[lemme 7.6\]](#lemme 7.6){reference-type="ref" reference="lemme 7.6"}:
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) \geq \underset{k=1}{\overset{\sqrt{s_n}/c_1}{\sum}}\mathcal{P}_{\delta_N,T_N} \left( A_k(n) \right) \\
&\geq \frac{c}{\min \{n^{3/2},T_N^3\}} \underset{k=1}{\overset{\sqrt{s_n}/c_1}{\sum}} k \left(1- \frac{c_1}{\sqrt{s_n}} \right)^{k-1} \\
& \geq \frac{c}{2 \min \{n^{3/2},T_N^3\}} \left(
- \frac{\sqrt{s_n}}{c_1} \frac{(1-\frac{c_1}{\sqrt{s_n}})^{\sqrt{s_n}/c_1}}
{\frac{c_1}{\sqrt{s_n}}}
+
\frac{1 - (1 - \frac{c_1}{\sqrt{s_n}})^\frac{\sqrt{s_n}}{c_1}}{\left(\frac{c_1}{\sqrt{s_n}} \right)^2} \right) \\
&\geq \frac{c (1 - 2 e^{-1})}{\min \{ n^{3/2} \max\{ \frac{1}{n},\delta_N^2 \}, T_N^3\delta_N^2 \}},
\end{aligned}
\label{7.111}$$ hence the desired result.
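The bracket in [\[7.111\]](#7.111){reference-type="eqref" reference="7.111"} is the standard closed form of $\sum_{k=1}^{K} k x^{k-1}$, here with $x = 1-\frac{c_1}{\sqrt{s_n}}$ and $K = \sqrt{s_n}/c_1$ (a reminder, not a new estimate):

```latex
\sum_{k=1}^{K} k\, x^{k-1}
  = \frac{d}{dx}\left( \sum_{k=0}^{K} x^{k} \right)
  = \frac{1 - x^{K}}{(1-x)^{2}} - \frac{K\, x^{K}}{1-x},
% and since x^K = (1 - c_1/\sqrt{s_n})^{\sqrt{s_n}/c_1} \leq e^{-1},
% the right-hand side is at least (s_n/c_1^2)(1 - 2e^{-1}).
```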
## Proof of the upper bound [\[section 8\]]{#section 8 label="section 8"}
This section is the most technical one. In Subsection [\[Section 8.11\]](#Section 8.11){reference-type="ref" reference="Section 8.11"}, we give the proof for $n \leq \frac{1}{\delta_N^2}$, by comparing our renewal measure with the simple random walk. Then, for $\frac{1}{\delta_N^2} \leq n \leq \delta_N T_N^3$, things get harder: we need very sharp estimates on the simple random walk, carried out in Theorem [\[theoreme\]](#theoreme){reference-type="ref" reference="theoreme"}.
### Regime $n \leq 1/\delta_N^2$ [\[Section 8.11\]]{#Section 8.11 label="Section 8.11"}
Let us start by a lemma about the simple random walk:
**Lemma 12**. *There exists $C>0$ such that, for all $T \in 2\mathbb{N}$ and $n \in 2\mathbb{N}$: $$\frac{1}{C \min\{\sqrt{n}, T\}} \leq P(S_n \in T \mathbb{Z}) \leq \frac{C}{\min\{\sqrt{n}, T\}}.
\label{Bourgogne 22}$$ [\[lemme 8.1\]]{#lemme 8.1 label="lemme 8.1"}*
We defer the proof to Appendix [\[Annexe B.1\]](#Annexe B.1){reference-type="ref" reference="Annexe B.1"}. We now prove the upper bound of Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} for $n \leq \frac{1}{\delta_N^2}$ by using the definition of $\mathcal{P}_{\delta_N,T_N}$ in [\[définition de la mesure du renouvellement\]](#définition de la mesure du renouvellement){reference-type="eqref" reference="définition de la mesure du renouvellement"}. Recall $\tau_k^{T_N}$, defined in [\[4.5000\]](#4.5000){reference-type="eqref" reference="4.5000"}. With [\[probabilité pour la marche aléatoire simple que tau 1 vaille n\]](#probabilité pour la marche aléatoire simple que tau 1 vaille n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 vaille n"} and Lemma [\[lemme 8.1\]](#lemme 8.1){reference-type="ref" reference="lemme 8.1"}:
$$\begin{aligned}
\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) &= \underset{k=1}{\overset{\infty}{\sum}}P \left( \tau_k^{T_N} = n \right) e^{-k\delta_N}e^{-n\phi(\delta_N,T_N)} \\
&\leq c \underset{k=1}{\overset{\infty}{\sum}}P \left( \tau_k^{T_N} = n \right) \leq c P \left( n \in \tau^{T_N} \right) \leq \frac{C'}{\sqrt{n}}.
\end{aligned}$$
We obtain $e^{-n \phi(\delta_N,T_N)} \leq c$ thanks to [\[phi\]](#phi){reference-type="eqref" reference="phi"} and the fact that $n$ is smaller than $\frac{1}{\delta_N^2}$, hence smaller than $T_N^2$. Thus, in this regime, our renewal measure behaves as in the simple random walk case.
### Regime $n \geq 1/\delta_N^2$ [\[Section 7.3.2\]]{#Section 7.3.2 label="Section 7.3.2"}
We now prove Proposition [\[prop 1.1\]](#prop 1.1){reference-type="ref" reference="prop 1.1"}:
**Proposition 4**. *There exists $C>0$ such that, for all $N \in \mathbb{N}$ and $\frac{1}{\delta_N^2}<n<T_N^3 \delta_N$: $$\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) \leq \frac{C}{\delta_N^2 \min\{n^{3/2},T_N^3 \} }.
\label{1.0}$$ [\[prop 1.1\]]{#prop 1.1 label="prop 1.1"}*
We distinguish between two cases:
\(i\) Let us take $\frac{1}{\delta_N^2} \leq n \leq 2T_N^2$. Since $n$ is smaller than $2T_N^2$, [\[phi\]](#phi){reference-type="eqref" reference="phi"} yields $e^{-n \phi(\delta_N,T_N)} \leq e^c$. Recall [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"}:
$$\mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) \leq e^c \underset{k=0}{\overset{\infty}{\sum}}P(\tau_k^{T_N} = n) e^{-k \delta_N},$$ with $\tau_k^{T_N}$ defined in [\[4.5000\]](#4.5000){reference-type="eqref" reference="4.5000"}. By applying the formula of Proposition [\[proposition 4.1\]](#proposition 4.1){reference-type="ref" reference="proposition 4.1"}, we can bound this sum from above by $\frac{C}{n^{3/2} \delta_N^2}$, which is the desired result.
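The last bound is an elementary summation; here is a sketch, applying Proposition [\[proposition 4.1\]](#proposition 4.1){reference-type="ref" reference="proposition 4.1"} termwise (the $k=0$ term vanishes for $n \geq 2$) and using $1-e^{-\delta_N} \geq \delta_N/2$ for $\delta_N$ small:

```latex
\sum_{k=0}^{\infty} P(\tau_k^{T_N} = n)\, e^{-k \delta_N}
  \;\leq\; \frac{C}{n^{3/2}} \sum_{k=1}^{\infty} k\, e^{-k \delta_N}
  = \frac{C}{n^{3/2}} \cdot \frac{e^{-\delta_N}}{\left(1 - e^{-\delta_N}\right)^{2}}
  \;\leq\; \frac{4C}{n^{3/2}\, \delta_N^{2}}.
```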
\(ii\) Let us take $2T_N^2 \leq n\leq T_N^3 \delta_N$. We first bound from above the contribution coming from a certain type of trajectories, namely those with only "small" jumps, i.e. jumps smaller than $T_N^2$. Recall that $L_n = \inf \{ l \in \mathbb{N}: \tau_{l}>n \}$ is the number of jumps needed to exceed $n$. We use the notations introduced in [\[4.20\]](#4.20){reference-type="eqref" reference="4.20"} and write $\xi_i := \tau_i^{T_N} - \tau_{i-1}^{T_N}$. We obtain:
$$\mathcal{P}_{\delta_N,T_N} \left( n \in \tau , \, \, \xi_i \leq T_N^2 \, \, \forall i \leq L_n \right) = \underset{k=0}{\overset{\infty}{\sum}}Q_k^{T_N}(n) e^{-k \delta_N} e^{-n \phi(\delta_N,T_N)}.$$ We use the upper bound of [\[8.6\]](#8.6){reference-type="eqref" reference="8.6"}, valid for $n \geq 2T_N^2$, and that $e^{-n(\phi+g(T_N))} \leq 1$:
$$\mathcal{P}_{\delta_N,T_N} \left( n \in \tau, \, \, \xi_i < T_N^2 \, \, \forall i \leq L_n \right) \leq \underset{k=0}{\overset{\infty}{\sum}} \frac{C}{T_N^3}\left( 1 + \frac{C'}{T_N}\right)^k e^{-k\delta_N} \leq \frac{C}{T_N^3 \delta_N},$$ because $\delta_N \gg_N \frac{1}{T_N}$. It remains to prove the equation of Proposition [\[prop 1.1\]](#prop 1.1){reference-type="ref" reference="prop 1.1"} for $n \in [2T_N^2,T_N^3 \delta_N]$ and for the trajectories with at least one jump bigger than $T_N^2$. We decompose the sum according to the number of jumps needed to reach $n$ and the number of those jumps that are bigger than $T_N^2$. Recall that $\xi_j = \tau_j^{T_N} - \tau_{j-1}^{T_N}$. Thus:
$$\begin{aligned}
& \mathcal{P}_{\delta_N,T_N} \left( \{ n \in \tau \} \cap \{ \exists i \leq L_n : \tau_i - \tau_{i-1} > T_N^2 \} \right) \\
&= \underset{k=1}{\overset{\infty}{\sum}} e^{-k \delta_N} e^{-n \phi} \underset{j=1}{\overset{k}{\sum}} P\left(\tau_k^{T_N}=n, \, \# \{ l \leq k : \xi_l > T_N^2\} = j \right) \\
& \leq
\underset{k=1}{\overset{\infty}{\sum}} e^{-k \delta_N} e^{-n \phi} \underset{j=1}{\overset{k}{\sum}} \binom{k}{j}
\underset{\substack{T_N^2 \leq l_2,...,l_j < n \\ l_2+...+l_j \leq n-T_N^2}}{\overset{}{\sum}} P(\xi_2=l_2) ... P(\xi_j=l_j)\\
&\hspace{2.5cm} \underset{l=1}{\overset{n-l_2-...-l_j-T_N^2}{\sum}}Q_{k-j}^{T_N}(l) P(\tau_1^{T_N} = n-l-l_2-...-l_j).
\end{aligned}
\label{5.5000}$$
The following lemma bounds this sum from above:
**Lemma 13**. *We have the following upper bounds, for $2 T_N^2 \leq n \leq \delta_N T_N^3$ and $l_2,...,l_j \in \mathbb{N}$ such that $n-l_2-...-l_j>T_N^2$:*
*$$\begin{aligned}
&\underset{\substack{T_N^2 \leq l_2,...,l_j < n \\ l_2 + ... + l_j \leq n-T_N^2}}{\overset{}{\sum}} P(\xi_2=l_2) ... P(\xi_j=l_j) \\
&\leq \left( \frac{c_2}{T_N^3} \right)^{j-1} \underset{\substack{ l_2 + ... + l_j \leq n-T_N^2}}{\overset{}{\sum}} e^{-(l_2+...+l_j)g(T_N)},
\label{5.6000}
\end{aligned}$$*
*$$\begin{aligned}
&\underset{l=1}{\overset{n-l_2-...-l_j-T_N^2}{\sum}}Q_{k-j}^{T_N}(l) P(\tau_1^{T_N} = n-l-l_2-...-l_j)\\
&\leq C\left(1 + \frac{C''}{T_N} \right)^k \frac{e^{-g(T_N)(n-l_2-...-l_j)}}{T_N^3}.
\end{aligned}
\label{5.7}$$*
*[\[lemme 5.1\]]{#lemme 5.1 label="lemme 5.1"}*
The proof is technical and is deferred to Appendix [\[Appendix E\]](#Appendix E){reference-type="ref" reference="Appendix E"}. Using these two upper bounds, together with the facts that $n \leq T_N^3 \delta_N$ (so that $-n(\phi + g(T_N)) \leq c$) and that $\frac{\delta_N}{2} \gg_N \frac{1}{T_N}$, we obtain:
$$\begin{aligned}
\text{\eqref{5.5000}} &\leq \underset{k=1}{\overset{\infty}{\sum}}e^{-k\delta_N} C \left( 1 + \frac{C''}{T_N}\right)^k \frac{1}{T_N^3} e^{-n(\phi + g(T_N))} \underset{j=1}{\overset{k}{\sum}} \binom{k}{j}\left( \frac{c}{T_N^3} \right)^{j-1} \underset{l_2+...+l_j\leq n}{\overset{}{\sum}}1 \\
&\leq \frac{C}{T_N^3}\underset{k=1}{\overset{\infty}{\sum}}e^{-k\delta_N/2}\underset{j=1}{\overset{k}{\sum}}\binom{k}{j} \left(\frac{c}{T_N^3}\right)^{j-1}\binom{n}{j-1} \\
&\leq \frac{C}{T_N^3}\underset{k=1}{\overset{\infty}{\sum}}e^{-k\delta_N/2}\underset{j=1}{\overset{k}{\sum}}\binom{k}{j} \left(\frac{cn}{T_N^3}\right)^{j-1}\frac{1}{(j-1)!}.
\end{aligned}$$ The computation of the sum is standard; let us recall the result:
**Lemma 14**. *For $0<\alpha<\lambda<1$: $$\underset{k=1}{\overset{\infty}{\sum}}\lambda^k \underset{j=1}{\overset{k}{\sum}}\binom{k}{j}\alpha^{j-1}\frac{1}{(j-1)!} = \frac{\lambda}{(1-\lambda)^2}e^{\frac{\alpha \lambda}{1-\lambda}}.$$ [\[lemme 5.2\]]{#lemme 5.2 label="lemme 5.2"}*
The proof is left to the reader. Thanks to this result, with $\lambda = e^{-\delta_N/2}$ and $\alpha = \frac{cn}{T_N^3}$, and using that $1-\lambda \sim_N \frac{\delta_N}{2}$:
$$\text{\eqref{5.5000}} \leq \frac{C}{\delta_N^2 T_N^3} e^{\frac{n}{\delta_N T_N^3}}.$$
Hence, equation [\[1.0\]](#1.0){reference-type="eqref" reference="1.0"} is proven for all $n \leq T_N^3\delta_N$.
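As an aside (not part of the argument), the combinatorial identity of Lemma 14 can be checked numerically by truncating the series; the following Python sketch builds the inner terms iteratively to avoid huge intermediate integers, with the truncation level `kmax` chosen so the remainder is far below double precision:

```python
import math

def lhs(lam, alpha, kmax=200):
    """Truncation of sum_{k>=1} lam^k sum_{j=1}^k C(k,j) alpha^(j-1)/(j-1)!."""
    total = 0.0
    for k in range(1, kmax + 1):
        # Inner terms t_j = C(k,j) * alpha^(j-1) / (j-1)!, built iteratively.
        t, inner = float(k), float(k)  # t_1 = C(k,1) = k
        for j in range(1, k):
            t *= (k - j) * alpha / ((j + 1) * j)  # ratio t_{j+1} / t_j
            inner += t
        total += lam**k * inner
    return total

def rhs(lam, alpha):
    """Closed form lam/(1-lam)^2 * exp(alpha*lam/(1-lam)) from Lemma 14."""
    return lam / (1 - lam) ** 2 * math.exp(alpha * lam / (1 - lam))

for lam, alpha in [(0.3, 0.1), (0.5, 0.2), (0.6, 0.4)]:
    assert abs(lhs(lam, alpha) - rhs(lam, alpha)) < 1e-9
```

The identity itself follows by exchanging the two sums and using $\sum_{k \geq j} \binom{k}{j}\lambda^k = \lambda^j/(1-\lambda)^{j+1}$.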
# Proof of Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"} [\[Section 123.77\]]{#Section 123.77 label="Section 123.77"}
**Outline of the section** In this section, we no longer work under [\[H_0\]](#H_0){reference-type="eqref" reference="H_0"}. The parameters $a$ and $b$ vary according to the case under consideration. We first establish the necessary technical lemmas in Section [\[Section 123.128\]](#Section 123.128){reference-type="ref" reference="Section 123.128"}. Then, Sections [\[123.999\]](#123.999){reference-type="ref" reference="123.999"} to [\[Section 123.7.4\]](#Section 123.7.4){reference-type="ref" reference="Section 123.7.4"} contain the proof of Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"}.
## Tools for Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"} [\[Section 123.128\]]{#Section 123.128 label="Section 123.128"}
Let us recall [\[g\]](#g){reference-type="eqref" reference="g"}. We have the following asymptotic bounds:
**Lemma 15**. *There exist $T_0 \in \mathbb{N}$ and $c_7,c_8 > 0$ depending on $\beta$, $a$ and $b$ such that, when $b \geq a$ and for all $T_N > T_0$:*
*$$\mathcal{P}_{\delta_N,T_N} \left( m \leq \tau_1 \leq n \right) \geq \frac{c_7}{T_N}\left( e^{-(g(T_N) + \phi(\delta_N,T_N))m} - e^{-(g(T_N) + \phi(\delta_N,T_N))n}\right)
\label{minoration de la proba que tau 1 soit entre m et n, quand b geq a}$$*
*$$\mathcal{P}_{\delta_N,T_N} \left( \tau_1 \geq m \right) \leq \frac{c_8}{\min\{ T_N, \sqrt{m} \}} e^{-(g(T_N) + \phi(\delta_N,T_N))m}.
\label{majoration de la proba que tau 1 soit plus grand que m, cas b geq a }$$*
Moreover, we have, with $x_\beta$ defined in [\[def x_beta\]](#def x_beta){reference-type="eqref" reference="def x_beta"}:
$$g(T_N) + \phi(\delta_N,T_N) =
\begin{cases}
\frac{\pi^2}{2 T_N^2}(1 + o_N(1)) & \text{if } a < b, \\
\\
\frac{\pi^2 - x_\beta^2}{2 T_N^2}(1 + o_N(1)) & \text{if } a = b.
\end{cases}
\label{g + phi, cas b geq a}$$
*Proof.* The proof of [\[minoration de la proba que tau 1 soit entre m et n, quand b geq a\]](#minoration de la proba que tau 1 soit entre m et n, quand b geq a){reference-type="eqref" reference="minoration de la proba que tau 1 soit entre m et n, quand b geq a"} and [\[majoration de la proba que tau 1 soit plus grand que m, cas b geq a \]](#majoration de la proba que tau 1 soit plus grand que m, cas b geq a ){reference-type="eqref" reference="majoration de la proba que tau 1 soit plus grand que m, cas b geq a "} is done by summing [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"}, using [\[3.3000\]](#3.3000){reference-type="eqref" reference="3.3000"} and [\[3.4000\]](#3.4000){reference-type="eqref" reference="3.4000"} or [\[g\]](#g){reference-type="eqref" reference="g"}. Equation [\[g + phi, cas b geq a\]](#g + phi, cas b geq a){reference-type="eqref" reference="g + phi, cas b geq a"} is obtained by combining [\[3.3000\]](#3.3000){reference-type="eqref" reference="3.3000"}, [\[3.4000\]](#3.4000){reference-type="eqref" reference="3.4000"} and [\[g\]](#g){reference-type="eqref" reference="g"}. ◻
We also need a lemma about the first and second moments of $\tau_1$ under $\mathcal{P}_{\delta_N,T_N}$ and the probability of switching interface. It is proven in Appendix [\[Annex E.22\]](#Annex E.22){reference-type="ref" reference="Annex E.22"}:
**Lemma 16**. *With $x_\beta$ defined in [\[def x_beta\]](#def x_beta){reference-type="eqref" reference="def x_beta"}: $$\mathcal{E}_{\delta_N, T_N} \left( \tau_1 \right) =
\begin{cases}
T_N(1 + o_N(1)) & \text{if } b > a, \\
\frac{T_N\beta}{x_\beta^2}
\left(
1 + \frac{x_\beta}{\sin x_\beta}
+ o_N(1)
\right) & \text{if } b = a.
\end{cases}
\label{moment d'ordre 1 de tau 1, b geq a}$$*
*$$\mathcal{E}_{\delta_N, T_N} \left( \tau_1^2 \right) =
\begin{cases}
\frac{T_N^3}{3} + o_N(N^{3a}) & \text{if } b > a, \\
\\
\frac{T_N^3 \beta}{x_\beta^3}
\left(
\frac{\beta}{\sin (x_\beta)} + \frac{1}{\sin (x_\beta)} - \frac{1}{x_\beta}
\right)(1 + o_N(1)) & \text{if } b = a.
\end{cases}
\label{moment d'ordre 2 de tau 1, b geq a}$$*
*$$\label{Probabilité de changer d'interface,b geq a}
\mathbb{P}_{\delta_N,T_N}(\epsilon_1^2 = 1) =
\begin{cases}
\frac{1}{T_N}(1 + o_N(1)) & \text{if } b > a, \\
\\
\frac{x_\beta}{T_N \sin x_\beta}(1 + o_N(1)) & \text{if } b = a.
\end{cases}$$*
*[\[lemme 123.5.654\]]{#lemme 123.5.654 label="lemme 123.5.654"}*
We observe that, when $b>a$, the renewal process behaves like the simple random walk at first order. When $b=a$, the asymptotics are the same up to a multiplicative constant.
**Proposition 5**. *There exist $T_0 \in \mathbb{N}$ and $c_9,c_{10} > 0$ depending on $\beta$, $a$ and $b$ such that, when $b \geq a$, for all $T_N > T_0$ and $n \in 2\mathbb{N}$: $$\frac{c_9}{\min \{\sqrt{n}, T_N \} }
\leq \mathcal{P}_{\delta_N,T_N} \left( n \in \tau \right) \leq
\frac{c_{10}}{\min \{\sqrt{n}, T_N \} }
\label{Proposition 123}.$$ [\[proposition 123\]]{#proposition 123 label="proposition 123"}*
Note that, for the simple random walk, $P(n \in \tau^T) \asymp_n \frac{1}{\min \{\sqrt{n},T \}}$, as proven in Lemma [\[lemme 8.1\]](#lemme 8.1){reference-type="ref" reference="lemme 8.1"}. The proof of Proposition [\[proposition 123\]](#proposition 123){reference-type="ref" reference="proposition 123"} is very similar to what has already been done, so we write it up more briefly.
*Proof.*
1. **Regime $n \leq T_N^2$** The exact same computations as in Section [\[Section 8.11\]](#Section 8.11){reference-type="ref" reference="Section 8.11"} give the upper bound for $n \leq T_N^2$. For the lower bound, we use the exact same ideas as in Section [\[section 7\]](#section 7){reference-type="ref" reference="section 7"}. The only change is $s_n$, defined after [\[7.3\]](#7.3){reference-type="eqref" reference="7.3"}, which becomes $n$; the last inequality of [\[7.111\]](#7.111){reference-type="eqref" reference="7.111"} becomes $\frac{c\left(1-2e^{-1}\right)}{\sqrt{n}}$.
2. **Regime $n \geq T_N^2$** We now know that Proposition [\[proposition 123\]](#proposition 123){reference-type="ref" reference="proposition 123"} holds for $n \leq T_N^2$. Our aim is to extend it to all of $2\mathbb{N}$. To do so, we perform the same computations as in Section [\[section 6\]](#section 6){reference-type="ref" reference="section 6"}. Let us list the main changes:
1. **Lower bound**
1. The $T_N^3 \delta_N^2$ appearing in [\[Bourgogne 1\]](#Bourgogne 1){reference-type="eqref" reference="Bourgogne 1"} becomes $T_N$.
2. The $T_N^3 \delta_N$ appearing in [\[Bourgogne 1\]](#Bourgogne 1){reference-type="eqref" reference="Bourgogne 1"}, [\[Bourgogne 2\]](#Bourgogne 2){reference-type="eqref" reference="Bourgogne 2"}, [\[Bourgogne 3\]](#Bourgogne 3){reference-type="eqref" reference="Bourgogne 3"}, [\[Bourgogne 4\]](#Bourgogne 4){reference-type="eqref" reference="Bourgogne 4"}, [\[Bourgogne 5\]](#Bourgogne 5){reference-type="eqref" reference="Bourgogne 5"} and [\[Bourgogne 6\]](#Bourgogne 6){reference-type="eqref" reference="Bourgogne 6"} becomes $T_N^2$.
3. Instead of using Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} as below [\[Bourgogne 1\]](#Bourgogne 1){reference-type="eqref" reference="Bourgogne 1"}, we use Proposition [\[proposition 123\]](#proposition 123){reference-type="ref" reference="proposition 123"}.
2. **Upper bound**
1. Instead of considering the interval $[n-T_N^3\delta_N, n-T_N^2]$ appearing in [\[7.1\]](#7.1){reference-type="eqref" reference="7.1"}, [\[Bourgogne 10\]](#Bourgogne 10){reference-type="eqref" reference="Bourgogne 10"}, [\[Bourgogne 11\]](#Bourgogne 11){reference-type="eqref" reference="Bourgogne 11"}, [\[Bourgogne 12\]](#Bourgogne 12){reference-type="eqref" reference="Bourgogne 12"}, [\[Bourgogne 13\]](#Bourgogne 13){reference-type="eqref" reference="Bourgogne 13"}, [\[Bourgogne 14\]](#Bourgogne 14){reference-type="eqref" reference="Bourgogne 14"}, [\[Bourgogne 15\]](#Bourgogne 15){reference-type="eqref" reference="Bourgogne 15"}, [\[Bourgogne 16\]](#Bourgogne 16){reference-type="eqref" reference="Bourgogne 16"} and [\[Bourgogne 17\]](#Bourgogne 17){reference-type="eqref" reference="Bourgogne 17"}, we consider $[n-T_N^2, n-T_N^2/2]$.
2. Instead of using Proposition [\[Prooposition 1\]](#Prooposition 1){reference-type="ref" reference="Prooposition 1"} as below [\[7.1\]](#7.1){reference-type="eqref" reference="7.1"}, [\[Bourgogne 13\]](#Bourgogne 13){reference-type="eqref" reference="Bourgogne 13"} and [\[Bourgogne 18\]](#Bourgogne 18){reference-type="eqref" reference="Bourgogne 18"}, we use Proposition [\[proposition 123\]](#proposition 123){reference-type="ref" reference="proposition 123"}.
3. The $T_N^3 \delta_N^2$ appearing in [\[7.1\]](#7.1){reference-type="eqref" reference="7.1"} and [\[Bourgogne 11\]](#Bourgogne 11){reference-type="eqref" reference="Bourgogne 11"} becomes $T_N$.
4. The $c \delta_N$ in [\[Bourgogne 15\]](#Bourgogne 15){reference-type="eqref" reference="Bourgogne 15"} becomes $\frac{c}{T_N}$.
5. The $\frac{c}{T_N^3 \delta_N}$ in [\[Bourgogne 16\]](#Bourgogne 16){reference-type="eqref" reference="Bourgogne 16"} becomes $\frac{c}{T_N^2}$.
6. The $\frac{c}{\delta_N}$ in [\[Bourgogne 30\]](#Bourgogne 30){reference-type="eqref" reference="Bourgogne 30"} becomes $c T_N$.
7. Other minor changes are left to the reader.
◻
## Proof of Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"}, part [\[partie 123.1 du théorème\]](#partie 123.1 du théorème){reference-type="ref" reference="partie 123.1 du théorème"} and [\[partie 123.2 du théorème\]](#partie 123.2 du théorème){reference-type="ref" reference="partie 123.2 du théorème"} [\[123.999\]]{#123.999 label="123.999"}
The proof of these two parts is very similar to that of [@Caravenna2009depinning Theorem 1.1, part 1], carried out in Section 3 of [@Caravenna2009depinning], and follows the same path. We list the changes below, step by step, and leave the reader to fill in the details. We advise the reader to first go through the four steps in the case $a \geq b$, then to do the same for the case $a<b$. Before turning to the proof, the $v_\delta$ in [@Caravenna2009depinning Section 3] becomes $v_\delta = \mathcal{P}_{\delta_N,T_N} \left( \epsilon_1^2 = 1 \right)$ and their $k_N$ becomes $k_N := N/\mathcal{E}_{\delta_N, T_N} \left( \tau_1 \right)$. We also define $s_N := \max \{ T_N^2, T_N^3 \delta_N \}$; hence, $s_N = T_N^3 \delta_N$ if $a > b$ and $s_N = T_N^2$ if $a \leq b$.
1. **[@Caravenna2009depinning Section 3.1]** We proceed in the same way to prove this step, using formulas from Section [\[section 5000\]](#section 5000){reference-type="ref" reference="section 5000"} (case $a > b$) or from Section [\[Section 123.128\]](#Section 123.128){reference-type="ref" reference="Section 123.128"} (case $a \leq b$) to apply the Berry-Esseen theorem.
2. **[@Caravenna2009depinning Section 3.2]** First, the $V_N$ defined in the first line of [@Caravenna2009depinning Section 3.2] is now taken such that $s_N \ll V_N \ll N$. Sticking to the notation of [@Caravenna2009depinning], we see that the work on $A_\nu^N$ in [@Caravenna2009depinning Eq. (3.8)] does not present additional difficulties. To prove [@Caravenna2009depinning Eq (3.9)], we need to compute the quantity $\mathcal{E}_{\delta_N, T_N} \left( (Y_{k_N + \nu k_N} - Y_{k_N})^2 \right)$, which is equal to $\nu \, v_\delta \, k_N$. This computation is left to the reader.
3. **[@Caravenna2009depinning Section 3.3]** Two things change here. First, in [@Caravenna2009depinning Eq. (3.11)], we have to replace $L_{N-p-T_N^3}$ with $L_{N-p-s_N}$. Moreover, proving that [@Caravenna2009depinning Eq. (3.10)] is equivalent to [@Caravenna2009depinning Eq. (3.11)] is harder in our case, so we detail the required changes below. Then, going from Eq (3.16) to (3.21) in [@Caravenna2009depinning] does not present any greater difficulty.
4. **[@Caravenna2009depinning Section 3.4]** Nothing changes here: the computations are more laborious, but there is no additional difficulty.
Let us now prove the part going from [@Caravenna2009depinning Eq (3.11)] to [@Caravenna2009depinning Eq (3.16)] in our setting. We assume that the reader has already checked Sections 3.1 and 3.2 of [@Caravenna2009depinning], the first four lines of [@Caravenna2009depinning Section 3.3] and [@Caravenna2009depinning Eq (3.10)], and knows all the notations used between [@Caravenna2009depinning Eq (3.11)] and [@Caravenna2009depinning Eq (3.16)]: we use the same notations below. We continue the proof from [@Caravenna2009depinning Eq (3.10)].
A first observation is that we can safely replace $L_{N-p}$ with $L_{N-p-s_N}$. To prove this, since $k_{N} \rightarrow \infty$, it suffices to show the following bound: there exist $c>0$ and $cst > 0$ such that, for every $N, M \in 2 \mathbb{N}$,
$$\sup _{p\in\left\{0, \ldots, V_{N}\right\} \cap 2 \mathbb{N}} \mathcal{P}_{\delta_N,T_N} \left( \left|Y_{L_{N-p}}-Y_{L_{N-p-s_N }}\right| \geq M \Big| N-p\in \tau \right) \leq cst (1-c)^M.
\label{Bourgogne 31}$$
We define $\Tilde{\tau} := \{ \tau_i \in \tau : \varepsilon_i^2 = 1 \}$, and we write $\Tilde{\tau}_1, \Tilde{\tau}_2,...$ for its elements. One can check that the left-hand side of [\[Bourgogne 31\]](#Bourgogne 31){reference-type="eqref" reference="Bourgogne 31"} is bounded from above by the quantity $\mathcal{P}_{\delta_N,T_N} \left( \#\Tilde{\tau} \cap\left[N-p-s_N, N-p\right) \geq M \mid N-p\in
\tau \right)$. By time-inversion and the renewal property, we then rewrite this as
$$\begin{aligned}
& \mathcal{P}_{\delta_N,T_N} \left( \#\left\{\tilde{\tau} \cap\left(0, s_N\right]\right\} \geq M \mid N-p\in \tau \right)
=\mathcal{P}_{\delta_N,T_N} \left( \Tilde{\tau}_M \leq s_N \mid N-p\in \tau \right) \\
& \leq \sum_{n=1}^{s_N} \mathcal{P}_{\delta_N,T_N} \left( \Tilde{\tau}_M = n \right) \cdot \frac{\mathcal{P}_{\delta_N,T_N} \left( N-p-n \in \tau \right) }{\mathcal{P}_{\delta_N,T_N} \left( N-p\in \tau \right) } .
\label{123.2.13}
\end{aligned}$$
Recalling that $N \gg V_{N} \gg s_N$ and using the estimate [\[Proposition 1\]](#Proposition 1){reference-type="eqref" reference="Proposition 1"}, we see that the ratio on the right-hand side of [\[123.2.13\]](#123.2.13){reference-type="eqref" reference="123.2.13"} is bounded from above by a constant, uniformly for $0 \leq n \leq s_N$ and $p\in\left\{0, \ldots, V_{N}\right\} \cap 2 \mathbb{N}$. We now have to estimate $\mathcal{P}_{\delta_N,T_N} \left( \Tilde{\tau}_{M} \leq s_N \right)$. Writing $\xi_i = \tilde{\tau}_i - \Tilde{\tau}_{i-1}$, this probability is at most $\mathcal{P}_{\delta_N,T_N} \left( \forall i \leq M, \xi_i \leq s_N \right)$. We first note that, by removing all the jumps that stay on the same interface, each $\xi_i$ stochastically dominates the variable $\tau_1$ conditioned on $\varepsilon_1^2=1$. Therefore, denoting by $(X_i)_{i \in \mathbb{N}}$ an i.i.d. sequence of random variables distributed as $\tau_1$ conditioned on $\varepsilon_1^2=1$, we have $\mathcal{P}_{\delta_N,T_N} \left( \forall i \leq M, \xi_i \leq s_N \right) \leq \mathcal{P}_{\delta_N,T_N} \left( X_1 \leq s_N \right) ^M$. We now need a lemma to control $\mathcal{P}_{\delta_N,T_N} \left( \tau_1 = n, \varepsilon_1^2 = 1 \right)$.
**Lemma 17**. *There exist $c_1,c_2$ coming from [\[probabilité pour la marche aléatoire simple que tau 1 vaille n\]](#probabilité pour la marche aléatoire simple que tau 1 vaille n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 vaille n"} and $c_{11},c_{12} > 0$ such that, for all $T \in 4\mathbb{N}$ and $n \geq T^2$: $$P \left(\tau_1^T = n, \left(\varepsilon_1^T\right)^2 = 1 \right) \geq \frac{c_1}{2 T^3} e^{-ng(T)} - \frac{c_2}{T^3} e^{-4ng(T)},
\label{123.2.14}$$ $$\mathcal{P}_{\delta_N,T_N} \left( \tau_1 = n, \varepsilon_1^2 = 1 \right) \geq \frac{c_{11}}{T_N^3} e^{-n(g(T_N)+\phi(\delta_N,T_N))} - \frac{c_{12}}{T_N^3} e^{-4n(g(T_N) + \phi(\delta_N,T_N))}.
\label{123.4}$$ [\[lemme 123.222\]]{#lemme 123.222 label="lemme 123.222"}*
*Proof.* Let us define $A := \{ (S_i)_{0 \leq i \leq n} : \forall \, 1 \leq i \leq n-1, S_i \notin \{ -T,0,T \} , S_n = 0 \}$; $B := \{ (S_i)_{0 \leq i \leq n} : \forall \, 1 \leq i \leq n-1, S_i \notin \{ -T/2,0,T/2 \} , S_n = 0 \}$ and $C := \{ (S_i)_{0 \leq i \leq n} : \forall \, 1 \leq i \leq n-1, S_i \notin \{ -T,0,T \} , |S_n| = T \}$. By applying the reflection principle at the first time the walk reaches $\pm T/2$, one sees that there are as many trajectories first touching an interface at $\pm T$ at time $n$ as there are trajectories first touching an interface at $0$ at time $n$ after having touched $\pm T/2$ before. Hence, $P(C) = P(A \backslash B) = P(A) - P(B)$. Recall that $q_T^i(n) = P(\tau_1^T = n, \left(\varepsilon_1^T \right)^2 = i)$. We deduce that $q_T^1(n) = q_T^0(n) - q_{T/2}^0(n)$. Standard considerations about trajectories of the simple random walk give that $q_T^0(n) \geq \frac{1}{2}P(\tau_1^T=n)$ and $q_{T/2}^0(n) \leq P\left(\tau_1^{T/2}=n\right)$, so $q_T^1(n) \geq \frac{1}{2}P(\tau_1^T=n) - P\left(\tau_1^{T/2}=n\right)$. Let us now use [\[probabilité pour la marche aléatoire simple que tau 1 vaille n\]](#probabilité pour la marche aléatoire simple que tau 1 vaille n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 vaille n"} and $\eqref{g}$. We then have: $$P\left(\tau_1^T = n, \left(\varepsilon_1^T\right)^2 = 1\right) \geq \frac{c_1}{2 T^3} e^{-ng(T)} - \frac{c_2}{T^3} e^{-4ng(T)}.$$ Equation [\[123.4\]](#123.4){reference-type="eqref" reference="123.4"} comes directly from [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"} and [\[123.2.14\]](#123.2.14){reference-type="eqref" reference="123.2.14"}. ◻
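The identity $q_T^1(n) = q_T^0(n) - q_{T/2}^0(n)$ at the heart of this proof can also be verified by exhaustive path counting for small parameters. The following Python sketch (illustrative only; the helper names are ours) counts simple-random-walk trajectories by dynamic programming:

```python
from collections import defaultdict

def interior_counts(n, forbidden):
    """Counts of walks S_0 = 0 after n-1 steps with S_i not in `forbidden`
    for 1 <= i <= n-1 (positions beyond max(forbidden) are unreachable,
    since the walk would have to pass through a forbidden level first)."""
    dp = {0: 1}
    for _ in range(n - 1):
        ndp = defaultdict(int)
        for pos, c in dp.items():
            for npos in (pos - 1, pos + 1):
                if npos not in forbidden:
                    ndp[npos] += c
        dp = ndp
    return dp

def q0(n, B):
    """# of walks with S_i not in {-B, 0, B} for 0 < i < n and S_n = 0."""
    dp = interior_counts(n, {-B, 0, B})
    return dp.get(-1, 0) + dp.get(1, 0)

def q1(n, T):
    """# of walks with S_i not in {-T, 0, T} for 0 < i < n and |S_n| = T."""
    dp = interior_counts(n, {-T, 0, T})
    return dp.get(T - 1, 0) + dp.get(-(T - 1), 0)

# Reflection identity q_T^1(n) = q_T^0(n) - q_{T/2}^0(n), here for T = 4:
for n in range(2, 25, 2):
    assert q1(n, 4) == q0(n, 4) - q0(n, 2)
```

Since both sides count paths of the same length $n$, dividing by $2^n$ gives the corresponding identity for probabilities.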
**Lemma 18**. *When $a>b$, there exist $c_{13}>0$ and $C>1$ such that, for all $m>C$:*
*$$\mathcal{P}_{\delta_N,T_N} \left( \tau_1 \geq m s_N | \varepsilon_1^2 = 1 \right) \geq c_{13} e^{-m} >0.
\label{123.2.16}$$ [\[Lemme Bourguignon\]]{#Lemme Bourguignon label="Lemme Bourguignon"}*
*Proof.* Combining [\[g + phi\]](#g + phi){reference-type="eqref" reference="g + phi"} and [\[probabilité de changer d\'interface\]](#probabilité de changer d'interface){reference-type="eqref" reference="probabilité de changer d'interface"} ($a>b$) or [\[g + phi, cas b geq a\]](#g + phi, cas b geq a){reference-type="eqref" reference="g + phi, cas b geq a"} and [\[Probabilité de changer d\'interface,b geq a\]](#Probabilité de changer d'interface,b geq a){reference-type="eqref" reference="Probabilité de changer d'interface,b geq a"} ($a \leq b$), and [\[123.2.14\]](#123.2.14){reference-type="eqref" reference="123.2.14"}: $$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \tau_1 \geq m s_N | \varepsilon_1^2 = 1 \right)
=
\frac{1}{\mathcal{P}_{\delta_N,T_N} \left( \varepsilon_1^2 = 1 \right) }\underset{k= m s_N}{\overset{\infty}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( \tau_1 = k, \varepsilon_1^2 = 1 \right) \\
&\geq c \min \left\{ T_N, \frac{1}{\delta_N} \right\} \underset{k= m s_N}{\overset{\infty}{\sum}} \frac{1}{T_N^3} \left(e^{-\frac{k}{s_N}} - c' e^{-\frac{3k}{T_N^2}} \right)
\geq c_{13} e^{-m}.
\end{aligned}
\label{123.1.6}$$ where we used, in the last inequality, that $e^{-m} - c' e^{-\frac{3ms_N}{T_N^2}} \geq \frac{1}{2} e^{-m}$, which holds for all $m$ larger than a certain real number $C>1$ independent of $a$ and $b$. ◻
Coming back to the number of interface changes before $s_N$, with $C$ defined in Lemma [\[Lemme Bourguignon\]](#Lemme Bourguignon){reference-type="ref" reference="Lemme Bourguignon"} and using [\[123.2.16\]](#123.2.16){reference-type="eqref" reference="123.2.16"}: $$\begin{aligned}
\mathcal{P}_{\delta_N,T_N} \left( X_1 \leq C s_N \right) &= 1-\mathcal{P}_{\delta_N,T_N}\left(X_1 \geq C s_N\right) =1- \mathcal{P}_{\delta_N,T_N} \left( \tau_1 \geq C s_N | \varepsilon_1^2 = 1 \right) \\
&\leq 1 - c.
\end{aligned}
\label{123.1.7}$$ Thus, putting everything together, $\mathcal{P}_{\delta_N,T_N} \left( \tilde{\tau}_{M} \leq s_N \right) \leq (1-c)^M$.
## Polymer measure, case $b,a \geq 1/2$ [\[Section 951\]]{#Section 951 label="Section 951"}
### Case $b = 1/2 ; a > 1/2$ [\[Section 123.7.1\]]{#Section 123.7.1 label="Section 123.7.1"}
Let us first prove that, when $b = 1/2$ and $a > 1/2$, the polymer does not touch any interface other than the origin.
**Lemma 19**. *$$\underset{N \longrightarrow \infty}{\lim} \mathbf{P}_{N,\delta_N}^{T_N} \left( \exists i \leq N : S_i \in T_N\mathbb{Z} \backslash \{0\} \right) =
0.$$ [\[Lemme 123.8.2\]]{#Lemme 123.8.2 label="Lemme 123.8.2"}*
*Proof.* First, using [\[Markov exponentiel\]](#Markov exponentiel){reference-type="eqref" reference="Markov exponentiel"} and [@Petrov Theorem 11, Chapter III] [^3] for the simple random walk:
$$P\left(\underset{i \leq N}{\max} |S_i| \geq T_N \right) \leq 4 \exp \left( -\frac{T_N^2}{8N} \right) = o_N(1).
\label{123.8.6}$$
Now, let us focus our attention on the polymer. Using that the partition function is bounded from below by a constant thanks to [\[123.1.88\]](#123.1.88){reference-type="eqref" reference="123.1.88"}, and recalling [\[1.2\]](#1.2){reference-type="eqref" reference="1.2"}, we then have:
$$\begin{aligned}
\mathbf{P}_{N,\delta_N}^{T_N} \left( \exists i \leq N, S_i = \pm T_N \right) \leq
\frac{P(\exists i \leq N, S_i = \pm T_N)}{Z_{N,\delta_N}^{T_N}} &\asymp_N P(\exists i \leq N, S_i = \pm T_N) \\
&= o_N(1).
\end{aligned}$$ ◻
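The maximal inequality $P(\max_{i \leq N}|S_i| \geq T) \leq 4\exp(-T^2/(8N))$ used in this proof can be sanity-checked by simulation. The Python sketch below is a rough Monte Carlo check with illustrative values of $N$ and $T$, not a proof:

```python
import math
import random

random.seed(0)  # reproducible runs

# Compare the empirical frequency of {max |S_i| >= T} over `samples`
# simple-random-walk trajectories with the bound 4*exp(-T^2/(8N)).
N, T, samples = 400, 80, 2000
hits = 0
for _ in range(samples):
    s, running_max = 0, 0
    for _ in range(N):
        s += random.choice((-1, 1))
        running_max = max(running_max, abs(s))
    hits += running_max >= T
bound = 4 * math.exp(-T**2 / (8 * N))
assert hits / samples <= bound
```

With these values the bound equals $4e^{-2} \approx 0.54$, while the event is a four-standard-deviation excursion, so the empirical frequency is far smaller.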
Let us come back to the simple random walk. Let $u < v$ in $\overline{\mathbb{R}}$ and denote $K_{u,v}^N := P \left( u \leq \frac{S_N}{\sqrt{N}} \leq v \right)$. Decomposing the trajectory of the random walk according to the last contact with an interface before $N$, and dismissing the trajectories touching any interface other than the origin thanks to [\[123.8.6\]](#123.8.6){reference-type="eqref" reference="123.8.6"}:
$$\begin{aligned}
K_{u,v}^N &= \underset{k=1}{\overset{N-1}{\sum}}
P(S_k = 0) P \left( u \leq \frac{S_{N-k}}{\sqrt{N}} \leq v, S_i \notin \{-T_N,0,T_N\} \forall i \leq N-k \right) \\
&+ o_N(1).
\end{aligned}
\label{123.8.7}$$
Finally, we use [\[3.13\]](#3.13){reference-type="eqref" reference="3.13"}:
$$\begin{aligned}
&\mathbf{P}_{N,\delta_N}^{T_N} \left( u \leq \frac{S_N}{\sqrt{N}} \leq v \right) = o_N(1) \, + \\
& \underset{k=1}{\overset{N-1}{\sum}} \frac{ \mathcal{P}_{\delta_N,T_N} \left( \exists i \geq 1 \colon k = \tau_i, \, \varepsilon_1+...+\varepsilon_i = 0 \right) e^{-k \phi(\delta_N,T_N)}}{Z_{N, \delta_N}^{T_{N}}} \cdot \\
&\hspace{1cm} P \left( u \leq \frac{S_{N-k}}{\sqrt{N}} \leq v, \, S_i \notin \{0, \pm T_N\} \, \forall i \leq N-k \right).
\end{aligned}
\label{123.8.9}$$
Note that $e^{-k \phi(\delta_N,T_N)} \asymp_N 1$ because of [\[3.3000\]](#3.3000){reference-type="eqref" reference="3.3000"}, [\[3.4000\]](#3.4000){reference-type="eqref" reference="3.4000"} or [\[phi\]](#phi){reference-type="eqref" reference="phi"}, and the same holds for $Z_{N, \delta_N}^{T_{N}}$ thanks to [\[123.1.88\]](#123.1.88){reference-type="eqref" reference="123.1.88"}. Therefore, we can dismiss both factors. We now want to show:
**Lemma 20**. *For all $k \leq N$, with the $\asymp_N$ being uniform over $k$: $$\mathcal{P}_{\delta_N,T_N} \left( \exists i, k = \tau_i, \, \varepsilon_1+...+\varepsilon_i = 0 \right) \asymp_N \mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) .$$*
*Proof.* First, $\mathcal{P}_{\delta_N,T_N} \left( \exists i, k = \tau_i \text{, } \varepsilon_1+...+\varepsilon_i = 0 \right) \leq \mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right)$, since the first event is included in the second. Recall Definition [\[definition 1\]](#definition 1){reference-type="ref" reference="definition 1"}, and that we also use this notation for the renewal process. Then, we use that: $$\begin{aligned}
\mathcal{P}_{\delta_N,T_N} \left( \exists i \geq 1 \colon k = \tau_i, \varepsilon_1+...+\varepsilon_i = 0 \right) &\geq \mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) \\ &- \mathcal{P}_{\delta_N,T_N} \left( \exists i\leq L_N: \varepsilon_i \neq 0 \right) .
\label{Bourgogne 21}
\end{aligned}$$ Equation [\[3.12\]](#3.12){reference-type="eqref" reference="3.12"} allows us to compare the polymer measure with the renewal measure. We combine it with a rough use of [\[Proposition 1\]](#Proposition 1){reference-type="eqref" reference="Proposition 1"}, which gives that $\mathcal{P}_{\delta_N,T_N} \left( r \in \tau \right) \geq \frac{1}{N}$ for $r \leq N$. Hence: $$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \exists i\leq L_N:\varepsilon_i \neq 0 \right) \\
&= \underset{k=1}{\overset{N}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( \exists i \leq L_N \colon \varepsilon_i \neq 0, k \in \tau \right) \mathcal{P}_{\delta_N,T_N} \left( \tau_1 \geq N-k \right) \\
&\leq \underset{k=1}{\overset{N}{\sum}} \mathcal{P}_{\delta_N,T_N} \left( \exists i \leq L_N \colon \varepsilon_i \neq 0 | k \in \tau \right) \frac{1}{\mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) } \\
&\leq N \underset{k=1}{\overset{N}{\sum}} \mathbf{P}_{N,\delta_N}^{T_N} \left( \exists i \leq N : S_i \in T_N \mathbb{Z} \backslash \{0\} | k \in \tau \right) \\
&\leq
N^3 \mathbf{P}_{N,\delta_N}^{T_N} \left( \exists i \leq N : S_i \in T_N \mathbb{Z} \backslash \{0\} \right) .
\end{aligned}$$ Now, combining [\[123.8.6\]](#123.8.6){reference-type="eqref" reference="123.8.6"} and [\[123.1.88\]](#123.1.88){reference-type="eqref" reference="123.1.88"} gives that $\mathbf{P}_{N,\delta_N}^{T_N} \left( \exists i \leq N : S_i \in T_N \mathbb{Z} \backslash \{0\} \right) \leq e^{-cN}$ for a certain $c>0$. Hence, coming back to [\[Bourgogne 21\]](#Bourgogne 21){reference-type="eqref" reference="Bourgogne 21"}:
$$\begin{aligned}
&\mathcal{P}_{\delta_N,T_N} \left( \exists i \colon k = \tau_i \text{, } \varepsilon_1+...+\varepsilon_i = 0 \right) \\
& \geq \mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) - \mathcal{P}_{\delta_N,T_N} \left( \exists i\leq N: S_i \in T_N \mathbb{Z} \backslash \{0\} \right) \\
&\asymp_N \frac{1}{\sqrt{k}} - N^2e^{-cN} \asymp_N \frac{1}{\sqrt{k}} \asymp_N \mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) .
\end{aligned}$$ ◻
Now, combining [\[Proposition 1\]](#Proposition 1){reference-type="eqref" reference="Proposition 1"} and standard facts about the simple random walk gives that $\mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) \asymp_N \frac{1}{\sqrt{k}} \asymp_N P(S_k = 0)$. Hence, [\[123.8.9\]](#123.8.9){reference-type="eqref" reference="123.8.9"} becomes:
$$\begin{aligned}
&\mathbf{P}_{N,\delta_N}^{T_N} \left( u \leq \frac{S_N}{\sqrt{N}} \leq v \right) \\
&\asymp_N
o_N(1) + \underset{k=1}{\overset{N-1}{\sum}} P( S_k = 0) P \left( u \leq \frac{S_{N-k}}{\sqrt{N}} \leq v, S_i \neq 0 \, \forall i \leq N-k \right).
\end{aligned}
\label{123.8.11}$$
Comparing [\[123.8.11\]](#123.8.11){reference-type="eqref" reference="123.8.11"} and [\[123.8.7\]](#123.8.7){reference-type="eqref" reference="123.8.7"} gives that $\mathbf{P}_{N,\delta_N}^{T_N} \left( u \leq \frac{S_N}{\sqrt{N}} \leq v \right) \asymp_N K_{u,v}^N$, with the $\asymp_N$ uniform. This completes the proof.
### Case $b > 1/2 ; a \geq 1/2$ [\[Section 123.7.2\]]{#Section 123.7.2 label="Section 123.7.2"}
Recalling Proposition [\[Proposition 123.1.3\]](#Proposition 123.1.3){reference-type="ref" reference="Proposition 123.1.3"} and [\[3.3000\]](#3.3000){reference-type="eqref" reference="3.3000"} or [\[3.4000\]](#3.4000){reference-type="eqref" reference="3.4000"} or [\[phi\]](#phi){reference-type="eqref" reference="phi"}, $Z_{N,\delta_N}^{T_N} \longrightarrow 1$. Recall Definition [\[definition 1\]](#definition 1){reference-type="ref" reference="definition 1"}. Let us prove a lemma about the number of contacts between the simple random walk and the interfaces:
**Lemma 21**. *When $a \geq 1/2$, and with $c' = c_4^{-1} e^{N g(T_N)}$ coming from Equation [\[probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n\]](#probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n"}: $$P\left(L_N \geq c' \sqrt{N} \ln(N) \right) \leq \frac{1}{N}.$$*
*Proof.* Using [\[probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n\]](#probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 soit plus grande que n"}, we have $P\left(\tau_1^{T_N} \geq T_N^2 \right) \geq \frac{1}{c' \sqrt{N}}$ because $T_N \geq \sqrt{N}$. Hence:
$$\begin{aligned}
P\left(L_N \geq c' \sqrt{N} \ln(N) \right) &\leq
P\left(\tau_1^{T_N} \leq N\right)^{c' \sqrt{N} \ln(N)}
\leq \left(1 - \frac{1}{c' \sqrt{N}} \right)^{c' \sqrt{N} \ln(N)} \\
& \leq e^{-\ln(N)} = \frac{1}{N}.
\end{aligned}$$ ◻
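The final step uses the elementary inequality $1-x \leq e^{-x}$, which gives $\left(1-\frac{1}{c'\sqrt{N}}\right)^{c'\sqrt{N}\ln(N)} \leq e^{-\ln(N)}$. A quick numerical sanity check (not part of the proof; the grid of values is arbitrary):

```python
import math

def geometric_bound_holds(m, N):
    # Since 1 - x <= e^{-x}, we get (1 - 1/m)^(m*ln N) <= e^{-ln N} = 1/N.
    lhs = (1.0 - 1.0 / m) ** (m * math.log(N))
    return lhs <= 1.0 / N

# check the inequality over an arbitrary grid of values
assert all(geometric_bound_holds(m, N)
           for m in (2, 10, 100, 10**4)
           for N in (10, 100, 10**6))
```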
Hence, for any event $A$, $P\left(A \cap \{ L_N \leq c' \sqrt{N} \ln(N) \} \right) \geq P(A) + o_N(1)$. Thus: $$\begin{aligned}
\mathbf{P}_{N,\delta_N}^{T_N} \left( A \right) &= \frac{o_N(1) + E\left(1\left(A \cap \{ L_N \leq c' \sqrt{N} \ln(N) \}\right) e^{-\delta_N L_N} \right)}{Z_{N,\delta_N}^{T_N} } \\
&=
P(A) + o_N(1),
\end{aligned}$$ having used that $e^{-\delta_N L_N} \sim_N 1$ because $L_N \leq c' \sqrt{N} \ln(N)$ and $b>\frac{1}{2}$.
## Case $b<1/2 ; a \geq 1/2$ [\[Section 123.7.3\]]{#Section 123.7.3 label="Section 123.7.3"}
The proof is identical to the one in Section [\[Section 4.1\]](#Section 4.1){reference-type="ref" reference="Section 4.1"}, except that $P(\tau_1^{T_N} > N-k) \asymp_N \frac{1}{\sqrt{N-k}}$ instead of $\frac{1}{T_N}$ in [\[44.4\]](#44.4){reference-type="eqref" reference="44.4"}; hence the $T_N$ in the denominator of [\[9.7\]](#9.7){reference-type="eqref" reference="9.7"} and [\[9.8\]](#9.8){reference-type="eqref" reference="9.8"} becomes a $\sqrt{N}$, and [\[9.9\]](#9.9){reference-type="eqref" reference="9.9"} becomes $\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{C}_{M} \right) \asymp_N \frac{1}{\sqrt{MN} \delta_N Z_{N,\delta_N}^{T_N}e^{-N\phi}}$.
## Critical case, $3a-b = 1, a<1/2$ [\[Section 123.7.4\]]{#Section 123.7.4 label="Section 123.7.4"}
Let us start with a lemma giving $\mathcal{E}_{\delta_N, T_N} \left( \tau_1|\varepsilon_1^2 = 1 \right)$.
**Lemma 22**. *When $a>b$: $$\mathcal{E}_{\delta_N, T_N} \left( \tau_1 1\{\varepsilon_1^2 = 1\} \right) \sim_N \frac{T_N^3 \delta_N^2}{4 \pi^2},
\label{123.8.29}$$ $$\mathcal{E}_{\delta_N, T_N} \left( \tau_1|\varepsilon_1^2 = 1 \right) \sim_N
\frac{T_N^3 \delta_N}{2 \pi^2}.
\label{123.8.30}$$*
*Proof.* Recall $\gamma$ defined in [\[definition de gamma\]](#definition de gamma){reference-type="eqref" reference="definition de gamma"}. Using the equation above (A.5) in [@Caravenna_2009] and following the computations in (A.7) and (A.9) of [@Caravenna_2009]:
$$\mathcal{E}_{\delta_N, T_N} \left( \tau_1 1\{\varepsilon_1^2 = 1\} \right) =
- 2e^{\delta_N} \left(\Tilde{Q}_{T_N}^{1}\right)' (\gamma(\phi(\delta_N,T_N))) \gamma'(\phi(\delta_N,T_N)),
\label{123.8.31}$$ with $$\Tilde{Q}_{T_N}^1(\gamma) = \frac{\tan \gamma}{2 \sin(T_N \gamma)}.$$ Hence, with the computation of $\gamma'$ done below (A.12) in [@Caravenna_2009] and recalling [\[phi\]](#phi){reference-type="eqref" reference="phi"}: $$\gamma'(\phi(\delta_N,T_N)) \sim_N -\frac{T_N}{\pi}.
\label{123.8.32}$$ Moreover, [\[A.16\]](#A.16){reference-type="eqref" reference="A.16"} gives us that $\gamma(\phi(\delta_N,T_N)) = \frac{\pi}{T_N} - \frac{2\pi}{\delta_N T_N^2}(1 + o_N(1))$. Hence: $$\left(\Tilde{Q}_{T_N}^{1}\right)'(\gamma) = \frac{1}{2 \cos^2(\gamma) \sin(T_N \gamma) } - \frac{T_N \tan(\gamma) \cos(T_N \gamma)}{2 \sin^2(T_N \gamma)} \sim_N \frac{\delta_N^2 T_N^2 }{8 \pi}.
\label{123.8.33}$$
Combining [\[123.8.33\]](#123.8.33){reference-type="eqref" reference="123.8.33"} and [\[123.8.32\]](#123.8.32){reference-type="eqref" reference="123.8.32"} with [\[123.8.31\]](#123.8.31){reference-type="eqref" reference="123.8.31"}, we get [\[123.8.29\]](#123.8.29){reference-type="eqref" reference="123.8.29"}. Using [\[probabilité de changer d\'interface\]](#probabilité de changer d'interface){reference-type="eqref" reference="probabilité de changer d'interface"}, we get [\[123.8.30\]](#123.8.30){reference-type="eqref" reference="123.8.30"}. ◻
Let us now prove [\[123.1.6p\]](#123.1.6p){reference-type="eqref" reference="123.1.6p"}.
*Proof.* Recall that $T_N^3 \delta_N = \beta N$. In our case, Equation [\[123.8.30\]](#123.8.30){reference-type="eqref" reference="123.8.30"} gives $\mathcal{E}_{\delta_N, T_N} \left( \tau_1|\varepsilon_1^2 = 1 \right) \sim_N
\frac{\beta N}{2 \pi^2}$. Using [\[123.2.16\]](#123.2.16){reference-type="eqref" reference="123.2.16"} with $m=\frac{1}{\beta}$, the probability of having more than $m$ interface changes is smaller than $(1-c)^m$ for some constant $c$ independent of $N$. ◻
Let us now prove [\[123.11.7\]](#123.11.7){reference-type="eqref" reference="123.11.7"}.
*Proof.* First, using [\[9.8\]](#9.8){reference-type="eqref" reference="9.8"} with $\nu= 0$ and $M$ large, together with the notations defined in [\[44.2\]](#44.2){reference-type="eqref" reference="44.2"}, the probability under the polymer measure that the last contact occurs before $\frac{M}{\delta_N^2}$ satisfies:
$$\mathbf{P}_{N,\delta_N}^{T_N} \left( \mathcal{B}_{0,M} \right) \asymp_N \frac{1}{T_N \delta_N Z_{N,\delta_N}^{T_N}e^{-N\phi}} \left( 1 - \frac{1}{\sqrt{M}} \right).
\label{123.6.32}$$
Now, using the same idea as in [\[44.4\]](#44.4){reference-type="eqref" reference="44.4"}, remembering that $L_N$ is the time of the last contact of the polymer with an interface (cf. Definition [\[definition 1\]](#definition 1){reference-type="ref" reference="definition 1"}), and denoting $\phi := \phi(\delta_N,T_N)$:
$$\begin{aligned}
&\mathbf{P}_{N,\delta_N}^{T_N} \left( \tau_{L_N}^{T_N} \geq (1-\varepsilon)N \right) = \\
&\frac{1}{Z_{N,\delta_N}^{T_N}e^{-\phi N}}\underset{k=(1-\varepsilon)N}{\overset{N}{\sum}}e^{-k\phi}E\left( e^{H_{k,\delta_N}^{T_N}} 1\{ k \in \tau^{T_N} \}\right) P \left( \tau_1^{T_N}>N-k \right) e^{-(N-k)\phi}.
\end{aligned}$$
With [\[3.13\]](#3.13){reference-type="eqref" reference="3.13"} and [\[Proposition 1\]](#Proposition 1){reference-type="eqref" reference="Proposition 1"}, $E\left( e^{H_{k,\delta_N}^{T_N}} 1\{ k \in \tau^{T_N} \}\right) e^{-k \phi} = \mathcal{P}_{\delta_N,T_N} \left( k \in \tau \right) \asymp_N \frac{1}{T_N^3 \delta_N^2}$. Then, by combining Lemma [\[Lemme sur toucher en n pour la m.a.s.\]](#Lemme sur toucher en n pour la m.a.s.){reference-type="ref" reference="Lemme sur toucher en n pour la m.a.s."} and Equation [\[g + phi\]](#g + phi){reference-type="eqref" reference="g + phi"}, we obtain that $P \left( \tau_1^{T_N}>N-k \right) e^{-(N-k)\phi} \asymp_N \frac{1}{\sqrt{N-k+1}} + \frac{1}{T_N}$ (the $+1$ comes from $k=N$, to avoid dividing by 0). Thus:
$$\mathbf{P}_{N,\delta_N}^{T_N} \left( L_N \geq (1-\varepsilon)N \right) \asymp_N
\frac{e^{\phi N}}{Z_{N,\delta_N}^{T_N}}\underset{k=(1-\varepsilon)N}{\overset{N}{\sum}}
\frac{1}{T_N^3 \delta_N^2} \left( \frac{1}{T_N} + \frac{1}{\sqrt{N-k+1}} \right).
\label{123.6.33}$$
A simple computation, remembering that we are in the case where $T_N^3 \delta_N = c N$, gives:
$$\underset{k=(1-\varepsilon)N}{\overset{N}{\sum}}
\frac{1}{T_N^3 \delta_N^2} \left( \frac{1}{T_N} + \frac{1}{\sqrt{N-k+1}} \right) \asymp_N \frac{\varepsilon}{T_N \delta_N}.
\label{123.6.34}$$
With [\[123.6.33\]](#123.6.33){reference-type="eqref" reference="123.6.33"} and [\[123.6.34\]](#123.6.34){reference-type="eqref" reference="123.6.34"}, $\mathbf{P}_{N,\delta_N}^{T_N} \left( L_N \geq (1-\varepsilon)N \right) \asymp_N \frac{\varepsilon}{ T_N \delta_N Z_{N,\delta_N}^{T_N}e^{-\phi N}}$. Therefore, comparing this to [\[123.6.32\]](#123.6.32){reference-type="eqref" reference="123.6.32"}, there exists $C>0$ such that, for all $\varepsilon > 0$:
$$\mathbf{P}_{N,\delta_N}^{T_N} \left( L_N \geq (1-\varepsilon)N \right) \leq C \varepsilon
\mathbf{P}_{N,\delta_N}^{T_N} \left( L_N \leq \frac{M}{\delta_N^2} \right)
\leq C \varepsilon .
\label{123.6.36}$$ ◻
To finish the proof, we have to prove the following Lemma:
**Lemma 23**. *For $a<\frac{1}{2}$ and for $\varepsilon>0$ fixed: $$\underset{N \longrightarrow \infty}{\lim} P \left(\tau^{T_N} \cap [ (1-\varepsilon)N,N ] \neq \emptyset \right) = 1.$$ [\[lemme 123.6.6\]]{#lemme 123.6.6 label="lemme 123.6.6"}*
*Proof.* With Lemma [\[Lemme sur toucher en n pour la m.a.s.\]](#Lemme sur toucher en n pour la m.a.s.){reference-type="ref" reference="Lemme sur toucher en n pour la m.a.s."}: $$\begin{aligned}
&P\left(\tau^{T_N} \cap [ (1-\varepsilon)N,N ] = \emptyset \right) =
\underset{k=0}{\overset{(1-\varepsilon)N}{\sum}} P \left( k \in \tau^{T_N} \right) P \left(\tau_1^{T_N}>N-k \right) \\
&\asymp_N
\underset{k=0}{\overset{(1-\varepsilon)N}{\sum}} \left( \frac{1}{T_N} + \frac{1}{\sqrt{k+1}} \right) \frac{1}{T_N} e^{-c\frac{N-k}{T_N^2}} \leq N e^{-c \varepsilon \frac{N}{T_N^2}} \ll 1.
\end{aligned}$$ ◻
# Asymptotic renewal estimates
## Asymptotic free energy estimates [\[Annex A.1\]]{#Annex A.1 label="Annex A.1"}
We would like to stress that, for us, $\delta_N > 0$ although we are in the repulsive case. Thus, the equations below are not exactly the same as those of [@Caravenna2009depinning] because of the minus sign we have to add. By [@Caravenna_2009 Theorem 1], $Q_{T}(\phi(\delta, T))=e^{\delta}$. Moreover:
$$Q_{T}(\lambda)=1+\sqrt{e^{-2 \lambda}-1} \cdot \frac{1-\cos \left(T \arctan \sqrt{e^{-2 \lambda}-1}\right)}{\sin \left(T \arctan \sqrt{e^{-2 \lambda}-1}\right)},$$
which comes, for example, from [@Caravenna_2009 Eq. (A.5)]. If we denote
$$\gamma=\gamma(\delta_N, T_N):=\arctan \sqrt{e^{-2 \phi(\delta_N, T_N)}-1},
\label{definition de gamma}$$
we can therefore write
$$\widetilde{Q}_{T_N}(\gamma(\delta_N, T_N))=e^{\delta_N} \quad \text { where } \quad \widetilde{Q}_{T_N}(\gamma)=1+\tan \gamma \cdot \frac{1-\cos (T_N \gamma)}{\sin (T_N \gamma)} .
\label{Q tilde}$$
Note that $\gamma \mapsto \widetilde{Q}_{T_N}(\gamma)$ is an increasing function with $\widetilde{Q}_{T_N}(0)=1$ and $\widetilde{Q}_{T_N}(\gamma) \rightarrow+\infty$ when $\gamma \uparrow \frac{\pi}{T_N}$, hence $0<\gamma(\delta_N, T_N)<\frac{\pi}{T_N}$. We therefore have to study the equation $\widetilde{Q}_{T_N}(\gamma)=e^{\delta_N}$ for $0<\gamma<\frac{\pi}{T_N}$. An asymptotic study shows us that $$(1+o_N(1)) \gamma \cdot \frac{1-\cos (T_N \gamma)}{\sin (T_N \gamma)}=\delta_N (1+o_N(1)).$$
Setting $x=T_N \gamma$ gives us
$$(1+o_N(1)) x \cdot \frac{1-\cos x}{\sin x}=T_N \delta_N (1+o_N(1)),
\label{x}$$
where $0<x<\pi$. Three cases now emerge: according to whether $a$ is equal to, greater than, or less than $b$, the right-hand side of [\[x\]](#x){reference-type="eqref" reference="x"} tends to a constant, to $+\infty$, or to $0$, respectively.
- When $a<b$, the right-hand side in [\[x\]](#x){reference-type="eqref" reference="x"} tends to 0. We can therefore expand further:
$$x^2(1+o_N(1)) = 2 T_N \delta_N (1+o_N(1)).$$ Hence $x = \sqrt{2 T_N \delta_N}(1+o_N(1))$, and since $\gamma(\delta,T) = \frac{x}{T}$, $$\gamma(\delta_N,T_N) = \sqrt{ \frac{2\delta_N}{T_N}}(1+o_N(1)).
\label{gamma a<b}$$
Remembering [\[definition de gamma\]](#definition de gamma){reference-type="eqref" reference="definition de gamma"}:
$$\sqrt{e^{-2 \phi(\delta_N, T_N)}-1}=\tan \left( \sqrt{2 \frac{\delta_N}{T_N}}(1+o_N(1)) \right) .$$
Because the function $\lambda \mapsto \arctan \sqrt{e^{-2 \lambda}-1}$ is decreasing, and continuously differentiable, with non-zero first derivative:
$$\phi(\delta_N,T_N) = -\frac{\delta_N}{T_N}(1 + o_N(1)).$$
- When $a=b$ and with the same ideas:
$$x = x_{\beta}(1+o_N(1)), \text{ where } x_\beta = \frac{\sin(x_\beta)\beta}{1-\cos(x_\beta)},
\label{def x_beta}$$ $$\gamma(\delta_N,T_N) = \frac{x_\beta}{T_N}(1+o_N(1)).
\label{gamma a=b}$$ and $$\phi(\delta_N,T_N) = - \frac{x_\beta^2}{2 T_N^2}(1 + o_N(1)).$$
- When $a>b$ and with the same ideas (we give more details that will be useful later on): $$(1+o_N(1)) \frac{2 \pi}{\pi-x}=T_N \delta_N,
\label{A.16}$$ $$\gamma(\delta_N, T_N)=\frac{\pi}{T_N}- \frac{2 \pi }{\delta_N T_N^{2}}(1+o_N(1)),
\label{gamma a>b}$$ $$\phi(\delta_N, T_N)= - \frac{\pi^2}{2 T_N^2} \left( 1 - \frac{4}{T_N \delta_N}
(1 + o_N(1)) \right).
\label{phi a>b}$$
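In the case $a=b$, the constant $x_\beta$ is only defined implicitly, as the root of $x(1-\cos x) = \beta \sin x$ on $(0,\pi)$. The following bisection sketch (purely illustrative, not part of the argument) confirms numerically that this root exists and is easily computed:

```python
import math

def x_beta(beta, tol=1e-12):
    """Solve x*(1 - cos x) = beta*sin x for x in (0, pi) by bisection.

    F(x) = x*(1 - cos x) - beta*sin x satisfies F(0+) < 0 and F(pi-) > 0,
    so a root exists in (0, pi).
    """
    F = lambda x: x * (1.0 - math.cos(x)) - beta * math.sin(x)
    lo, hi = 1e-9, math.pi - 1e-9
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if F(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = x_beta(1.0)
assert 0.0 < x < math.pi
# the fixed-point relation x = beta*sin(x)/(1 - cos(x)) holds at the root
assert abs(x * (1.0 - math.cos(x)) - math.sin(x)) < 1e-9
```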
## Estimates for the time to change interface, case a\>b [\[Appendix A.2\]]{#Appendix A.2 label="Appendix A.2"}
We are now looking for asymptotic estimates of the variables $\left(\tau_{1}, \varepsilon_{1}\right)$ under $\mathcal{P}_{T_N,\delta_N}$ defined in [\[2.4\]](#2.4){reference-type="eqref" reference="2.4"}, when $N \rightarrow \infty$, with $T_N = N^b$, $\delta_N = \frac{\beta}{N^a}$ and $a>b$. Let us first focus on $Q_{T_N}^{1}(\phi(\delta_N, T_N))$, where $Q_{T_N}^{1}(\lambda):=E\left(e^{-\lambda \tau_{1}^{T_N}} \mathbf{1}_{\left\{\varepsilon_{1}^{T_N}=1\right\}}\right)=\sum_{n \in \mathbb{N}} e^{-\lambda n} q_{T_N}^{1}(n)$. With the same ideas as in the calculation above and thanks to [@Caravenna_2009 Eq. (A.5)], we can write
$$Q_{T_N}^{1}(\phi(\delta_N, T_N))=\widetilde{Q}_{T_N}^{1}(\gamma(\delta_N, T_N)), \quad \text { where } \quad \widetilde{Q}_{T_N}^{1}(\gamma):=\frac{\tan \gamma}{2 \sin (T_N \gamma)},$$
and, according to [\[gamma a\>b\]](#gamma a>b){reference-type="eqref" reference="gamma a>b"}, we obtain, when $N \rightarrow \infty$:
$$Q_{T_N}^{1}(\phi(\delta_N, T_N))=\frac{\pi}{T_N} \frac{1}{2 \cdot \frac{2 \pi}{e^{\delta_N}-1} \frac{1}{T_N}}(1+o_N(1)) \sim_N \frac{\delta_N}{4}.$$
In particular, thanks to [\[définition de la mesure du renouvellement\]](#définition de la mesure du renouvellement){reference-type="eqref" reference="définition de la mesure du renouvellement"}, we may write when $N \rightarrow \infty$:
$$\mathcal{E}_{\delta_N, T_N} \left( \varepsilon_{1}^{2} \right) =2 \mathcal{P}_{\delta_N,T_N} \left( \varepsilon_{1}=+1 \right) =2 e^{-\delta_N} Q_{T_N}^{1}(\phi(\delta_N, T_N)) \sim_N \frac{\delta_N}{2} .$$
This proves [\[probabilité de changer d\'interface\]](#probabilité de changer d'interface){reference-type="eqref" reference="probabilité de changer d'interface"}. Now, let us determine the asymptotic behaviour of $\mathcal{E}_{\delta_N, T_N} \left( \tau_{1} \right)$ when $N \rightarrow \infty$. Thanks to [\[définition de la mesure du renouvellement\]](#définition de la mesure du renouvellement){reference-type="eqref" reference="définition de la mesure du renouvellement"}, we can write:
$$\mathcal{E}_{\delta_N, T_N} \left( \tau_{1} \right) =e^{-\delta_N} \sum_{n \in \mathbb{N}} n q_{T_N}(n) e^{-\phi(\delta_N, T_N) n}=-e^{-\delta_N} \cdot Q_{T_N}^{\prime}(\phi(\delta_N, T_N)),
\label{A.7}$$ $$\mathcal{E}_{\delta_N, T_N} \left( \tau_{1}^{2} \right) =e^{-\delta_N} \sum_{n \in \mathbb{N}} n^{2} q_{T_N}(n) e^{-\phi(\delta_N, T_N) n}=
e^{-\delta_N} \cdot Q_{T_N}^{\prime \prime}(\phi(\delta_N, T_N)).$$
Thus, the problem is to determine $Q_{T_N}^{\prime}(\lambda)$ for $\lambda=\phi(\delta_N, T_N)$. Using the function $\gamma(\lambda):=$ $\arctan \sqrt{e^{-2 \lambda}-1}$ defined in [\[definition de gamma\]](#definition de gamma){reference-type="eqref" reference="definition de gamma"} and remembering [\[Q tilde\]](#Q tilde){reference-type="eqref" reference="Q tilde"}, since $Q_{T_N}=\widetilde{Q}_{T_N} \circ \gamma$, it follows that
$$Q_{T_N}^{\prime}(\lambda)=\widetilde{Q}_{T_N}^{\prime}(\gamma(\lambda)) \cdot \gamma^{\prime}(\lambda),$$ $$Q_{T_N}^{\prime \prime}(\lambda)=\gamma^{\prime \prime}(\lambda) \cdot \widetilde{Q}_{T_N}^{\prime}(\gamma(\lambda))+\left(\gamma^{\prime}(\lambda)\right)^{2} \cdot \widetilde{Q}_{T_N}^{\prime \prime}(\gamma(\lambda)) .$$
By a direct computation,
$$\widetilde{Q}_{T_N}^{\prime}(\gamma) =\frac{1 - \cos (T_N \gamma)}{\sin (T_N \gamma)} \cdot\left(\frac{1}{\cos ^{2} \gamma}+\frac{T_N\tan \gamma}{\sin (T_N\gamma)}\right),$$ $$\begin{aligned}
\widetilde{Q}_{T_N}^{\prime \prime}(\gamma) =&\frac{1-\cos (T_N\gamma)}{\sin (T_N\gamma)} \\
&\cdot\left(\frac{2 T_N}{\sin (T_N\gamma) \cos ^{2} \gamma}+\frac{2 \sin \gamma}{\cos ^{3} \gamma}+\frac{T_N^{2} \tan \gamma}{\sin ^{2}(T_N\gamma)}(1-\cos (T_N\gamma))\right),
\end{aligned}$$
and
$$\gamma^{\prime}(\lambda)=-\frac{1}{\sqrt{e^{-2 \lambda}-1}}, \quad \gamma^{\prime \prime}(\lambda)=-\frac{e^{-2 \lambda}}{\left(e^{-2 \lambda}-1\right)^{3 / 2}} .$$
Thanks to [\[A.7\]](#A.7){reference-type="eqref" reference="A.7"} and [\[definition de gamma\]](#definition de gamma){reference-type="eqref" reference="definition de gamma"}:
$$\mathcal{E}_{\delta_N, T_N} \left( \tau_{1} \right) =-e^{-\delta_N} \cdot \widetilde{Q}_{T_N}^{\prime}(\gamma(\delta_N, T_N)) \cdot \gamma^{\prime}(\phi(\delta_N, T_N)) .$$
Asymptotics [\[gamma a\>b\]](#gamma a>b){reference-type="eqref" reference="gamma a>b"} and [\[phi a\>b\]](#phi a>b){reference-type="eqref" reference="phi a>b"} give
$$\begin{aligned}
&\widetilde{Q}_{T_N}^{\prime}(\gamma(\delta_N, T_N))= \frac{\delta_N^2 T_N^2}{2 \pi}(1 + o_N(1)), \\
&\gamma^{\prime}(\phi(\delta_N, T_N))= - \frac{T_N}{\pi}(1 + o_N(1)),
\end{aligned}$$
and
$$\widetilde{Q}_{T_N}^{\prime \prime}(\gamma(\delta_N, T_N))= \frac{T_N^4 \delta_N^3}{2\pi^2}(1+o_N(1)), \quad \gamma^{\prime \prime}(\phi(\delta_N, T_N))=
- \left(
\frac{T_N}{\pi}
\right)^{3}(1 + o_N(1)) .$$
By combining the preceding relations:
$$\mathcal{E}_{\delta_N, T_N} \left( \tau_{1} \right) = \frac{T_N^3 \delta_N^2}{2 \pi^2}(1 + o_N(1)) \quad \text{ and } \quad
\mathcal{E}_{\delta_N, T_N} \left( \tau_{1}^{2} \right) = \frac{T_N^6 \delta_N^3}{2\pi^4}
(1 + o_N(1)),$$
which prove [\[espérance de tau 1\]](#espérance de tau 1){reference-type="eqref" reference="espérance de tau 1"} and [\[moment d\'ordre 2 de tau 1\]](#moment d'ordre 2 de tau 1){reference-type="eqref" reference="moment d'ordre 2 de tau 1"}.
# Proof of Lemma [\[lemme 8.1\]](#lemme 8.1){reference-type="ref" reference="lemme 8.1"} [\[Annexe B.1\]]{#Annexe B.1 label="Annexe B.1"}
We do the proof with $2n$ instead of $n$ to simplify notations. If $2n\leq T$, then [\[Bourgogne 22\]](#Bourgogne 22){reference-type="eqref" reference="Bourgogne 22"} follows easily from $P\left(2n \in \tau^T\right) = \binom{2n}{n} \frac{1}{2^{2n}} \sim \frac{c}{\sqrt{n}}$. Else, for $k \in \frac{1}{2}\mathbb{Z}$:
$$\begin{aligned}
P(S_{2n} = 2kT) &= \binom{2n}{n+kT}\frac{1}{2^{2n}} \asymp_n \frac{n^{2n}}{\sqrt{n} (n+kT)^{n+kT}(n-kT)^{n-kT} } \\
& \asymp_n \frac{1}{\sqrt{n}}\exp\left( -(n-kT) \ln(1-kT/n) - (n+kT)\ln(1+kT/n) \right).
\end{aligned}
\label{B.1}$$ Moreover, $(1-x)\ln(1-x) + (1+x)\ln(1+x) \sim_0 x^2$. Therefore, by continuity of $\exp$, there exists $\varepsilon > 0$ such that, when $\left| \frac{kT}{n} \right| \leq \varepsilon$: $$\exp\left( -(n-kT) \ln(1-kT/n) - (n+kT)\ln(1+kT/n) \right) \asymp_n \exp \left(-\frac{(kT)^2}{n}\right).$$ Now, we need to bound $P(S_n \geq \varepsilon n)$ from above.
**Lemma 24**. *When $\varepsilon > 0$ , one has: $$P(S_n \geq \varepsilon n) \leq \exp\left(-\frac{n}{2}(\varepsilon^2 + O(\varepsilon^3))\right).$$*
*Proof.* The exponential Markov inequality with $\lambda = \frac{1}{2}\log\left( 1 + \frac{2\varepsilon}{1-\varepsilon} \right)$ combined with a second-order expansion of $\log$ yields:
$$\begin{aligned}
P(S_n \geq \varepsilon n) &\leq \frac{E\left( e^{\lambda S_n} \right)}{e^{\lambda \varepsilon n}} \leq \exp \left( n\left( (1-\varepsilon)\lambda + \log \left( \frac{1 + e^{-2\lambda}}{2} \right) \right) \right) \\
&\leq \exp\left(-\frac{n}{2}\left(\varepsilon^2 + O \left(\varepsilon^3 \right) \right) \right).
\end{aligned}$$ ◻
Hence, when $\varepsilon$ is small enough:
$$P(|S_n| \geq \varepsilon n) \leq 2\exp\left( -\frac{n \varepsilon^2}{3} \right) \ll \frac{1}{\min\{T, \sqrt{n}\}}.
\label{B.5}$$
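The Gaussian-type bound [\[B.5\]](#B.5){reference-type="eqref" reference="B.5"} can be checked against the exact binomial tail of the simple random walk for moderate $n$ (the values of $n$ and $\varepsilon$ below are illustrative):

```python
import math

def srw_tail(n, eps):
    """Exact P(|S_n| >= eps*n) for a simple +/-1 random walk of n steps."""
    thresh = eps * n
    total = 0.0
    for j in range(n + 1):          # j = number of +1 steps, so S_n = 2j - n
        if abs(2 * j - n) >= thresh:
            total += math.comb(n, j) / 2.0 ** n
    return total

n, eps = 1000, 0.1
# bound (B.5): P(|S_n| >= eps*n) <= 2*exp(-n*eps^2/3) for small eps
assert srw_tail(n, eps) <= 2.0 * math.exp(-n * eps ** 2 / 3.0)
```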
Now, let us compute $\underset{k=0}{\overset{\frac{\varepsilon n}{T}}{\sum}} \exp \left( - \frac{(kT)^2}{n}\right)$. By standard techniques:
$$\begin{aligned}
&\underset{k=0}{\overset{\frac{ \varepsilon n}{T}}{\sum}} \exp \left( - \frac{(kT)^2}{n}\right)
\asymp_n 1 + \int_0^{\frac{\varepsilon n}{T}} \exp \left( - \frac{(Tt)^2}{n}\right) dt \\
&\asymp_n 1 + \frac{\sqrt{n}}{T} \int_0^{\sqrt{n} \varepsilon} e^{-t^2}dt \asymp_n 1 + \frac{\sqrt{n} }{T} \asymp_n \frac{\max\{ T,\sqrt{n} \}}{T}.
\end{aligned}
\label{B.6}$$ Therefore, when $\varepsilon$ is small enough, thanks to [\[B.1\]](#B.1){reference-type="eqref" reference="B.1"}, [\[B.5\]](#B.5){reference-type="eqref" reference="B.5"} and [\[B.6\]](#B.6){reference-type="eqref" reference="B.6"}:
$$\begin{aligned}
P(S_{2n} \in T\mathbb{Z}) &\asymp_n \underset{k= - \frac{ \varepsilon n}{T}}{\overset{\frac{ \varepsilon n}{T}}{\sum}} P(S_{2n} = kT) + P(|S_{2n}| \geq \varepsilon n ) \\
&\asymp_n \frac{\max\{T,\sqrt{n}\}}{T \sqrt{n}} \asymp_n \frac{1}{\min\{\sqrt{n},T\}}.
\end{aligned}$$
Hence, Lemma [\[lemme 8.1\]](#lemme 8.1){reference-type="ref" reference="lemme 8.1"} is proven.
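As an illustrative sanity check of Lemma [\[lemme 8.1\]](#lemme 8.1){reference-type="ref" reference="lemme 8.1"}, the exact probability $P(S_{2n} \in T\mathbb{Z})$ can be computed by enumeration and compared with $\frac{1}{\min\{\sqrt{n},T\}}$ in both regimes $T \ll \sqrt{n}$ and $T \gg \sqrt{n}$ (the values of $n$ and $T$ are arbitrary):

```python
import math

def prob_in_TZ(two_n, T):
    """Exact P(S_{2n} is a multiple of T) for a simple random walk."""
    total = 0.0
    for j in range(two_n + 1):      # S_{2n} = 2j - 2n ranges over even values
        if (2 * j - two_n) % T == 0:
            total += math.comb(two_n, j) / 2.0 ** two_n
    return total

n = 400
for T in (10, 100):                  # T << sqrt(n) and T >> sqrt(n)
    ratio = prob_in_TZ(2 * n, T) * min(math.sqrt(n), T)
    # the product should stay of order one, uniformly over the two regimes
    assert 0.2 < ratio < 5.0
```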
# Proof of technical results [\[Annexe A\]]{#Annexe A label="Annexe A"}
## Proof of Lemma [\[lemme 1.1\]](#lemme 1.1){reference-type="ref" reference="lemme 1.1"}
We recall the extended Stirling formula [@Abrahamovitz 6.1.37]:
$$n! = \left(\frac{n}{e}\right)^n\sqrt{2\pi n}\left( 1 + \frac{1}{12 n} + O_n\left( \frac{1}{n^2} \right) \right).$$
So, because $P \left(2n \in \tau^\infty \right) = \binom{2n}{n}\frac{1}{4^n}$:
$$P \left(2n \in \tau^\infty \right) \sim
\frac{1}{\sqrt{n\pi}} \frac{1 + \frac{1}{24n} + O_n \left( \frac{1}{n^2} \right)}{\left( 1 + \frac{1}{12n} + O_n \left( \frac{1}{n^2} \right) \right)^2} \sim \frac{1}{\sqrt{n\pi}} \left( 1 - \frac{1}{8n} + O_n\left( \frac{1}{n^2} \right) \right).$$ By evaluating at $n$ instead of $2n$, Equation [\[1.1000\]](#1.1000){reference-type="eqref" reference="1.1000"} follows. Equation [\[1.2000\]](#1.2000){reference-type="eqref" reference="1.2000"} is straightforward, because $P\left(\tau_1^\infty = n \right) = \frac{1}{n-1}P \left(n \in \tau^\infty \right)$ by [@kesten2008introduction Eq. (3.7)].
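The two-term expansion obtained above can be compared with the exact value $P(2n\in\tau^\infty)=\binom{2n}{n}4^{-n}$; the relative error is indeed $O(n^{-2})$ (an illustrative check, with arbitrary $n$):

```python
import math

def exact_return_prob(n):
    """Exact P(2n is a renewal point) = C(2n, n) / 4^n for the SRW."""
    return math.comb(2 * n, n) / 4.0 ** n

def two_term_approx(n):
    # 1/sqrt(pi*n) * (1 - 1/(8n)), the expansion obtained from Stirling
    return (1.0 - 1.0 / (8.0 * n)) / math.sqrt(math.pi * n)

n = 50
# the relative error of the two-term expansion is O(1/n^2)
assert abs(exact_return_prob(n) / two_term_approx(n) - 1.0) < 1e-4
```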
## Proof of Lemma [\[lemme 1.2\]](#lemme 1.2){reference-type="ref" reference="lemme 1.2"}
Let us first substitute $t \mapsto \sin^2(t)$. A primitive of $\frac{8}{\sin^2(2t)}$ is $\frac{-4 \cos(2t)}{\sin(2t)}$, and because $\cos(2\arcsin(u)) = 1 - 2u^2$ and $\sin(2 \arcsin{u}) = 2u \sqrt{1-u^2}$, we obtain: $$\begin{aligned}
\int_\frac{1}{2}^{1-\varepsilon} \frac{dt}{t^{3/2} (1-t)^{3/2}}
&=
8 \int_{\arcsin{\sqrt{1/2}}}^{\arcsin{\sqrt{1-\varepsilon}}} \frac{dt}{\sin^2(2t)} = \frac{2-4\varepsilon}{ \sqrt{1-\varepsilon}\sqrt{\varepsilon}} \\
& = \frac{2-4\varepsilon}{ \left(1 - \frac{\varepsilon}{2} + O_{\varepsilon}(\varepsilon^2) \right)\sqrt{\varepsilon}} = \frac{2 - 3\varepsilon + O_{\varepsilon}(\varepsilon^2)}{\sqrt{\varepsilon}}.
\end{aligned}$$
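The closed form $\int_{1/2}^{1-\varepsilon} \frac{dt}{t^{3/2}(1-t)^{3/2}} = \frac{2-4\varepsilon}{\sqrt{1-\varepsilon}\sqrt{\varepsilon}}$ can be verified numerically; the sketch below uses a composite Simpson rule and an arbitrary test value $\varepsilon = 0.1$:

```python
import math

def integrand(t):
    return t ** (-1.5) * (1.0 - t) ** (-1.5)

def simpson(f, a, b, m=20000):
    """Composite Simpson rule with 2*m subintervals."""
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i + 1) * h) for i in range(m))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, m))
    return s * h / 3.0

eps = 0.1
numeric = simpson(integrand, 0.5, 1.0 - eps)
closed_form = (2.0 - 4.0 * eps) / (math.sqrt(1.0 - eps) * math.sqrt(eps))
assert abs(numeric / closed_form - 1.0) < 1e-6
```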
## Proof of Lemma [\[lemme 1.3\]](#lemme 1.3){reference-type="ref" reference="lemme 1.3"}
Let us prove an equivalent result, that is $P \left( \tau_1^\infty > 2l \right) = \frac{1}{\sqrt{\pi l} } + \frac{3}{8 \sqrt{\pi}\, l^{3/2}} + o_l \left( \frac{1}{l^{3/2}}\right)$. Thanks to [\[1.2000\]](#1.2000){reference-type="eqref" reference="1.2000"}: $$\begin{aligned}
P \left( \tau_1^\infty > 2l \right)
&=
\frac{\sqrt{2}}{\sqrt{\pi}}\underset{p=l}{\overset{\infty}{\sum}}
\frac{1 + \frac{3}{8p} + o_p\left(\frac{1}{p} \right) }{(2p)^{3/2}} \\
&= \frac{1}{\sqrt{\pi l} } + \frac{1}{2\sqrt{\pi}}
\underset{p=l}{\overset{\infty}{\sum}} \left( \frac{1}{p^{3/2}} - \int_p^{p+1} \frac{dt}{t^{3/2}} + \frac{3}{8p^{5/2}} \right)
+ o_l \left( \frac{1}{l^{3/2}}\right)\\
&=
\frac{1}{\sqrt{\pi l}} + o_l \left( \frac{1}{l^{3/2}} \right) + \frac{1}{2\sqrt{\pi}} \underset{p=l}{\overset{\infty}{\sum}} \frac{9}{8p^{5/2}}.
\end{aligned}$$ Hence the result we were looking for.
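The leading term of this tail estimate can be checked against the exact first-return distribution $P(\tau_1^\infty = 2p) = \frac{1}{2p-1}\binom{2p}{p}2^{-2p}$ (an illustrative numerical check, with arbitrary $l$):

```python
import math

def first_return_prob(p):
    """P(tau_1 = 2p) = C(2p, p) / ((2p - 1) * 4^p) for the SRW."""
    return math.comb(2 * p, p) / ((2 * p - 1) * 4.0 ** p)

l = 500
tail = 1.0 - sum(first_return_prob(p) for p in range(1, l + 1))  # P(tau_1 > 2l)
# leading-order asymptotics: P(tau_1 > 2l) ~ 1/sqrt(pi*l)
assert abs(tail * math.sqrt(math.pi * l) - 1.0) < 0.01
```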
# Proof of Lemma [\[lemme 5.1\]](#lemme 5.1){reference-type="ref" reference="lemme 5.1"} [\[Appendix E\]]{#Appendix E label="Appendix E"}
Equation [\[5.6000\]](#5.6000){reference-type="eqref" reference="5.6000"} follows easily from [\[probabilité que tau 1 = n\]](#probabilité que tau 1 = n){reference-type="eqref" reference="probabilité que tau 1 = n"} after dropping the first constraint in the sum on the left-hand side. Equation [\[5.7\]](#5.7){reference-type="eqref" reference="5.7"} is somewhat more complicated. We have to split the sum at $l=T_N^2$.
- For $l<T_N^2$, we use that $g(T_N) \sim \frac{c}{T_N^2}$, so $e^{g(T_N) T_N^2} \sim c$. We denote $R(n,T_N) := \frac{Ce^{-g(T_N)(n-l_2-...-l_j)}}{T_N^3}$, with the constant $C$ changing from line to line and independent of all other variables. With [\[probabilité pour la marche aléatoire simple que tau 1 vaille n\]](#probabilité pour la marche aléatoire simple que tau 1 vaille n){reference-type="eqref" reference="probabilité pour la marche aléatoire simple que tau 1 vaille n"}:
$$\begin{aligned}
& \underset{l=1}{\overset{ T_N^2 }{\sum}}Q_{k-j}^{T_N}(l) P \left(\tau_1^{T_N}
= n-l-l_1-...-l_j \right) \\ &\leq R(n,T_N) \underset{l=1}{\overset{ T_N^2 }{\sum}}Q_{k-j}^{T_N} (l) \leq R(n,T_N)
P\left(\tau_{k-j}^{T_N} \leq T_N^2 \right) \leq R(n,T_N).
\end{aligned}
\label{B.1000}$$
- For $l\geq T_N^2$, we can use [\[8.6\]](#8.6){reference-type="eqref" reference="8.6"}, which leads to the following upper bound:
$$\begin{aligned}
&\underset{l=T_N^2}{\overset{ n-l_2-...-l_j-T_N^2 }{\sum}}Q_{k-j}^{T_N}(l) P\left(\tau_1^{T_N} = n-l-l_2-...-l_j\right) \\
&\leq \frac{R(n,T_N)}{T_N^3} \left(1 + \frac{C}{T_N} \right)^k \underset{l=1}{\overset{n}{\sum}} 1 \leq R(n,T_N) \left(1 + \frac{C}{T_N} \right)^k ,
\end{aligned}
\label{B.2000}$$ because $n \leq T_N^3$. By summing [\[B.1000\]](#B.1000){reference-type="eqref" reference="B.1000"} and [\[B.2000\]](#B.2000){reference-type="eqref" reference="B.2000"}, we have proven [\[5.7\]](#5.7){reference-type="eqref" reference="5.7"}.
# Asymptotic results for Theorem [\[Theoreme 123.1\]](#Theoreme 123.1){reference-type="ref" reference="Theoreme 123.1"}
## Proof of Proposition [\[Proposition 123.1.3\]](#Proposition 123.1.3){reference-type="ref" reference="Proposition 123.1.3"} [\[Annex F\]]{#Annex F label="Annex F"}
Using $L_N$ defined in [\[definition 1\]](#definition 1){reference-type="eqref" reference="definition 1"} and the Markov property, together with [\[3.13\]](#3.13){reference-type="eqref" reference="3.13"} and [\[majoration de la proba que tau 1 soit \> m\]](#majoration de la proba que tau 1 soit > m){reference-type="eqref" reference="majoration de la proba que tau 1 soit > m"}:
$$\begin{aligned}
&Z_{N, \delta_N}^{T_{N}} =E\left[e^{H_{N, \delta}^{T_{N}}(S)}\right]=\sum_{r=0}^{N} E\left[e^{H_{N, \delta}^{T_{N}}(S)} \mathbf{1}_{\left\{ \tau_{L_N}^{T_N} =r\right\}}\right] \\
& =\sum_{r=0}^{N} E\left[e^{H_{r, \delta}^{T_{N}}(S)} \mathbf{1}_{\left\{r \in \tau^{T_{N}}\right\}}\right] P\left(\tau_{1}^{T_{N}}>N-r\right) \\
& =\sum_{r=0}^{N} e^{\phi\left(\delta_N, T_{N}\right) r} \mathcal{P}_{\delta_N,T_N} \left( r \in \tau^{T_N} \right) P\left(\tau_{1}^{T_{N}}>N-r\right) \\
&\asymp_N
e^{\phi\left(\delta_N, T_{N}\right) N}
\sum_{r=0}^{N}
e^{(\phi\left(\delta_N, T_{N}\right) + g(T_N))(N- r)} \mathcal{P}_{\delta_N,T_N} \left( r \in \tau \right) \left(
\frac{1}{\sqrt{N-r}} + \frac{1}{T_N} \right).
\end{aligned}
\label{123.4.14}$$
Now, we distinguish several cases. We write $\phi$ for $\phi(\delta_N,T_N)$ and $g$ for $g(T_N)$.
1. If $a \leq b$, [\[Proposition 123\]](#Proposition 123){reference-type="eqref" reference="Proposition 123"} gives $\mathcal{P}_{\delta_N,T_N} \left( r \in \tau \right) \asymp_N \frac{1}{T_N} + \frac{1}{\sqrt{r}}$. Moreover, [\[g + phi, cas b geq a\]](#g + phi, cas b geq a){reference-type="eqref" reference="g + phi, cas b geq a"} gives $g+\phi \asymp \frac{1}{T_N^2}$. Plugging this in [\[123.4.14\]](#123.4.14){reference-type="eqref" reference="123.4.14"} gives us that $Z_{N, \delta_N}^{T_{N}} \asymp_N e^{N \phi(\delta_N,T_N)}$.
2. If $a > b$, [\[g + phi\]](#g + phi){reference-type="eqref" reference="g + phi"} gives $g+\phi \asymp \frac{1}{T_N^3 \delta_N}$. Remember [\[Proposition 1\]](#Proposition 1){reference-type="eqref" reference="Proposition 1"} and the three types of behaviour of $\mathcal{P}_{\delta_N,T_N} \left( r \in \tau \right)$. We then distinguish several sub-cases:
1. If $T_N^3 \delta_N \ll N$, the main contribution to the sum comes from $N - T_N^3 \delta_N \leq r \leq N$. There, $\mathcal{P}_{\delta_N,T_N} \left( r \in \tau \right) \asymp_N \frac{1}{T_N^3 \delta_N^2}$, so the whole sum is of order $\frac{1}{T_N \delta_N}$.
2. If $T_N^3 \delta_N \geq N$ and $T_N^2 \leq N$, we can neglect the term $e^{(\phi + g)(N-r)}$. Splitting the sum over $1 \leq r \leq \frac{1}{\delta_N^2}$, $\frac{1}{\delta_N^2} \leq r \leq T_N^2$ and $T_N^2 \leq r \leq N$ shows that the first two parts contribute the most, giving a contribution of order $\frac{1}{T_N \delta_N}$.
3. If $\frac{1}{\delta_N^2} \leq N \leq T_N^2$, we now split the sum into two parts: $1 \leq r \leq \frac{1}{\delta_N^2}$ and $\frac{1}{\delta_N^2} \leq r \leq N$. The first part contributes the most and gives a contribution of order $\frac{1}{\delta_N \sqrt{N}}$.
4. If $N \leq \frac{1}{\delta_N^2}$, then $\mathcal{P}_{\delta_N,T_N} \left( r \in \tau \right) \asymp_N \frac{1}{\sqrt{r}}$. Hence, the sum has the same order as $\underset{r=1}{\overset{N-1}{\sum}} \frac{1}{\sqrt{r} \sqrt{N-r}}$, which is of constant order.
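The last claim holds because $\underset{r=1}{\overset{N-1}{\sum}} \frac{1}{\sqrt{r}\sqrt{N-r}}$ is a Riemann-type sum for $\int_0^1 \frac{dx}{\sqrt{x(1-x)}} = \pi$. A quick numerical illustration (the value of $N$ is arbitrary):

```python
import math

def arcsine_sum(N):
    # Riemann-type sum approximating the arcsine integral, whose value is pi
    return sum(1.0 / (math.sqrt(r) * math.sqrt(N - r)) for r in range(1, N))

# the sum converges to pi = integral_0^1 dx / sqrt(x*(1-x)) as N grows
assert abs(arcsine_sum(10**4) - math.pi) < 0.05
```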
## Asymptotic estimates [\[Annex E.22\]]{#Annex E.22 label="Annex E.22"}
To prove Lemma [\[lemme 123.5.654\]](#lemme 123.5.654){reference-type="ref" reference="lemme 123.5.654"}, we use ideas of Appendix A.2 in [@Caravenna2009depinning] and follow their proof. Let us remind the reader that, from [\[gamma a\<b\]](#gamma a<b){reference-type="eqref" reference="gamma a<b"} and [\[gamma a=b\]](#gamma a=b){reference-type="eqref" reference="gamma a=b"}:
$$\gamma =
\left\{
\begin{array}{l}
\sqrt{ \frac{2\delta_N}{T_N}}(1+O_N(\min\{ T_N^3 \delta_N^3, \delta_N \})), \text{ if $b>a$} ,\\
\frac{x_\beta}{T_N}(1 + o_N(1))
\text{ if $b=a$}.\\
\end{array}
\right.$$
Moreover, to prove [\[moment d\'ordre 2 de tau 1, b geq a\]](#moment d'ordre 2 de tau 1, b geq a){reference-type="eqref" reference="moment d'ordre 2 de tau 1, b geq a"}, one needs the following approximation, when $b>a$:
$$\phi(\delta_N,T_N) = -\frac{\delta_N}{T_N}\left( 1 + o \left( \max\{ \delta_N, T_N^3 \delta_N^3 \} \right) \right).$$
Hence, the equation above (A.5) in [@Caravenna2009depinning] gives us that
$$\Tilde{Q}_{T_N}^1(\gamma) =
\left\{
\begin{array}{l}
\frac{1}{2 T_N}(1+o_N(1)), \text{ if $b>a$,} \\
\frac{x_\beta}{2\sin(x_\beta) T_N }(1+o_N(1)), \text{ if $b=a$. }
\end{array}
\right.$$
Using (A.6) in [@Caravenna2009depinning], i.e. $\mathcal{P}_{\delta_N,T_N} \left( \varepsilon^2=1 \right) = 2 e^{-\delta_N}\Tilde{Q}_{T_N}^1(\gamma(\delta_N,T_N))$, we easily obtain [\[Probabilité de changer d\'interface,b geq a\]](#Probabilité de changer d'interface,b geq a){reference-type="eqref" reference="Probabilité de changer d'interface,b geq a"}. Now, using (A.11) and (A.12) in [@Caravenna2009depinning], and remembering that the $x$ in [@Caravenna2009depinning] is $\gamma/T$:
$$\Tilde{Q}_{T_N}'(\gamma) =
\left\{
\begin{array}{l}
\sqrt{2T_N \delta_N}\left( 1 + \frac{T_N \delta_N}{3} \right)(1+o_N(1)) \text{ if $b>a$} \\
\frac{\beta}{x_\beta} \left( 1 + \frac{x_\beta}{\sin (x_\beta)} + o_{N}(1) \right), \text{ if $b=a$, }
\end{array}
\right.$$ $$\Tilde{Q}_{T_N}''(\gamma) =
\left\{
\begin{array}{l}
T_N(1 + \delta_N T_N)(1+o_N(1)) \text{ if $b>a$} \\
\frac{T_N \beta}{x_\beta} \left(
\frac{2}{\sin(x_\beta)} + \frac{\beta}{\sin(x_\beta)} + o_{T_N}(1)
\right), \text{ if $b=a$, }
\end{array}
\right.$$ $$\gamma'(\phi(\delta_N,T_N)) =
\left\{
\begin{array}{l}
- \left( \frac{T_N}{2 \delta_N} \right)^{1/2}\left( 1 + o_N(\max \{\delta_N,T_N^3 \delta_N^3 \}) \right) \text{ if $b>a$} \\
- \frac{T_N}{x_\beta}(1+o_{T_N}(1)) \text{ if $b=a$, }
\end{array}
\right.$$ $$\gamma''(\phi(\delta_N,T_N)) =
\left\{
\begin{array}{l}
- \left( \frac{T_N}{2 \delta_N} \right)^{3/2}\left( 1 + o_N(\max \{\delta_N,T_N^3 \delta_N^3 \}) \right) \text{ if $b>a$} \\
- \frac{T_N^3}{x_\beta^3}(1+o_{T_N}(1)) \text{ if $b=a$. }
\end{array}
\right.$$
Combining these equations with (A.7) and (A.8) in [@Caravenna2009depinning], we get [\[moment d\'ordre 1 de tau 1, b geq a\]](#moment d'ordre 1 de tau 1, b geq a){reference-type="eqref" reference="moment d'ordre 1 de tau 1, b geq a"} and [\[moment d\'ordre 2 de tau 1, b geq a\]](#moment d'ordre 2 de tau 1, b geq a){reference-type="eqref" reference="moment d'ordre 2 de tau 1, b geq a"}.
[^1]: The assumption that $T$ is even is used in some estimates, and $T \in 4\mathbb{N}$ is used in the proof of Lemma [\[lemme 123.222\]](#lemme 123.222){reference-type="ref" reference="lemme 123.222"}, where we use the reflection principle.
[^2]: Note that there is a mistake: the equation of Theorem 11 should have a $\leq$ instead of a $\geq$.
[^3]: and that, in the equation of this theorem, it should be a $\leq$ instead of a $\geq$.