
Inverting Ray-Knight identities on trees

Xiaodan Li ([email protected]), Department of Mathematics, Shanghai University of Finance and Economics; Yushu Zheng ([email protected]), Shanghai Center for Mathematical Sciences, Fudan University
Abstract

In this paper, we first introduce the Ray-Knight identity and the percolation Ray-Knight identity related to the loop soup with intensity $\alpha\ (\geq 0)$ on trees. Then we present the inversions of the above identities, which are expressed in terms of repelling jump processes. In particular, the inversion in the case of $\alpha=0$ gives the conditional law of a continuous-time Markov chain given its local time field. We further show that the fine mesh limits of these repelling jump processes are the self-repelling diffusions [21, 1] involved in the inversion of the Ray-Knight identity on the corresponding metric graph. This is a generalization of results in [20, 15, 16], where the authors explore the case of $\alpha=1/2$ on a general graph. Our construction is different from [20, 15] and is based on the link between random networks and loop soups.

MSC2020 subject classifications: 60K35, 60J55, 60G55, 60J27, 60J65.

Keywords: Ray-Knight identities, vertex repelling jump processes, loop soup.

1 Introduction

Imagine a Brownian crook who spent a month in a large metropolis. The number of nights he spent in hotels A, B, C, $\cdots$ is known, but not the order, nor his itinerary. So the only information the police have is the total hotel bills. This vivid story is quoted from [21], which is also the paper the name ‘Brownian burglar’ comes from. In [21], Warren and Yor constructed the Brownian burglar to describe the law of reflected Brownian motion on the positive half line conditioned on its local time process (also called its occupation time field). Meanwhile, Aldous [2] used the tree structure of the Brownian excursion to show that the genealogy of the conditioned Brownian motion is a time-changed Kingman coalescent. The article [21] can be viewed as a construction of the process in time, while [2] is a construction in space. A natural question then arises.

(Q1) How can we describe the law of a continuous-time Markov chain (CTMC) conditionally on its occupation time field?

This problem can actually be seen as a special case of a more general class of recovery problems that we now explain. The occupation time field of a CTMC is considered in the generalized second Ray-Knight theorem, which provides an identity between the law of the sum of half of a squared Gaussian free field (GFF) with boundary condition $0$ and the occupation time field of an independent Markovian path on one hand, and the law of half of a squared GFF with boundary condition $\sqrt{2u}$ on the other hand (see [7, 6, 17]). We call this identity a Ray-Knight identity. It is well-known that the occupation time field of a loop soup with intensity $\alpha=1/2$ is distributed as half of a squared GFF by Le Jan's isomorphism [12]. Therefore the generalized second Ray-Knight identity can also be stated using a loop soup with intensity $1/2$. In the case of a loop soup with arbitrary intensity $\alpha>0$, an analogous identity holds: adding the occupation time field of an independent CTMC run until local time $u$ to the occupation time field of a loop soup with ‘boundary condition’ $0$ gives the distribution of the occupation time field of a loop soup with ‘boundary condition’ $u$; see Proposition 2.2 below for a precise statement. We call any such identity a Ray-Knight identity. Inverting the Ray-Knight identity refers to recovering the CTMC conditioned on the total occupation time field.

Vertex-reinforced jump processes (VRJP), conceived by Werner and first studied by Davis and Volkov [4, 5], are continuous-time jump processes favouring sites with higher local times. Surprisingly, Sabot and Tarrès [20] found that a time change of a variant of VRJP provides an inversion of the Ray-Knight identity on a general graph in the case of $\alpha=1/2$. It is natural to wonder whether an analogous description holds for an arbitrary intensity $\alpha$.

(Q2) For any $\alpha>0$, how can we describe the process that inverts the Ray-Knight identity related to the loop soup with intensity $\alpha$?

Note that (Q1) can be viewed as a special case of (Q2) with $\alpha=0$ if we generalize (Q2) accordingly. Intuitively, when $\alpha=0$, the external interference of the loop soup disappears, and the problem reduces to extracting the CTMC from its own occupation time field. An equivalent interpretation of (Q2) is to recover the loop soup with intensity $\alpha$ conditioned on its local time field.

In [14], Lupu gave a ‘signed version’ of Le Jan's isomorphism where the loop soup at intensity $\alpha=1/2$ is naturally coupled with a signed GFF. In [15, Theorem 8], Lupu, Sabot and Tarrès gave the corresponding version of the Ray-Knight identity (we call it a percolation Ray-Knight identity). Besides the identity of local time fields, by adding a percolation along with the Markovian path and finally sampling signs in every connected component of the percolation, one can start with a GFF with boundary condition $0$ and end up with a GFF with boundary condition $\sqrt{2u}$. The inversion of the signed isomorphism is carried out in that paper and involves another type of self-interacting process [15, §3]. It leads to the following question.

(Q3) Can we generalize the percolation Ray-Knight identity to the case of a loop soup with intensity $\alpha>0$? If so, how can we describe the process that inverts the percolation Ray-Knight identity?

The analogous problems can also be considered for Brownian motion and the Brownian loop soup. In [16], Lupu, Sabot and Tarrès constructed, out of a divergent Bass-Burdzy flow, a self-repelling diffusion that inverts the Ray-Knight identity related to the GFF on the line, and showed that this self-repelling diffusion is the fine mesh limit of the vertex repelling jump processes involved in the case of grid graphs on the line. More generally, it was shown in [21, 1] that the self-repelling diffusion inverting the Ray-Knight identity on the positive half line can be constructed with the Jacobi flow.

We want to explore the relationship between the repelling jump processes in (Q1)-(Q3) and the self-repelling diffusions in [1, 2, 21]. Our last question is

(Q4) Are the fine mesh limits of the repelling jump processes involved in (Q1)-(Q3) the self-repelling diffusions?

In this paper, we focus on nearest-neighbour CTMCs on a tree $\mathbb{T}$ and give a complete answer to the above questions (Q1)-(Q4). It is shown that the percolation Ray-Knight identity has a simple form in this case, see Theorem 2.4. We construct two kinds of repelling jump processes, namely the vertex repelling jump process and the percolation-vertex repelling jump process, which invert the Ray-Knight identity and the percolation Ray-Knight identity related to the loop soup respectively, and we show that the fine mesh limits of these repelling jump processes are the self-repelling diffusions involved in the inversion of the Ray-Knight identity on the metric graph associated to $\mathbb{T}$. The inversion processes in the case of general graphs are constructed in another paper of ours in preparation; they involve non-local jump rates, which is why we restrict our discussion to trees here.

The main feature of this paper is the intuitive way of constructing vertex repelling jump processes, which is rather different from [20, 15]. It is inspired by the recovery of the loop soup with intensity $1/2$ given its local time field (see [23, §2.5] and [22, Proposition 7]), where Werner uses the crossings of the loop soup, which greatly simplify the recovery. In our case, the introduction of crossings translates the problem into a ‘discrete-time version’ of inverting the Ray-Knight identity, which can be stated as recovering the path of a discrete-time Markov chain conditioned on the number of crossings over each edge, see Proposition 3.4. This inversion has a surprisingly nice description, which can be seen as a ‘reversed’ oriented version of the edge-reinforced random walk.

The paper is organized as follows. In §2, we introduce the Ray-Knight identity and the percolation Ray-Knight identity related to the loop soup and give the main results of the paper. In §3-4, the vertex repelling jump process and the percolation-vertex repelling jump process are shown to invert the Ray-Knight identity and the percolation Ray-Knight identity respectively. In §5, we verify that the fine mesh limits of the repelling jump processes are the self-repelling diffusions. In Appendix A, we give the rigorous definition and basic properties of a class of processes, called processes with terminated jump rates, which covers the repelling jump processes.

2 Statements of main results

In this section, we first recall the Ray-Knight identity. Then we introduce a new Ray-Knight identity that we call percolation Ray-Knight identity. Finally, we present our results concerning the inversion of these identities and the fine mesh limit of the inversion processes.

Notations

We will use the following notations throughout the paper. $\mathbb{N}=\{0,1,2,\cdots\}$, $\mathbb{R}^{+}=[0,\infty)$. For any stochastic process $R$ on some state space $\mathcal{S}$, for $t,u\geq 0$, $x\in\mathcal{S}$ and a specified point $x_{0}\in\mathcal{S}$, we denote by

  • $L^{R}(t,x)$ the local time of $R$. When $\mathcal{S}$ is discrete, $L^{R}(t,x):=\int_{0}^{t}1_{\{R_{s}=x\}}\,\mathrm{d}s$;

  • $u\mapsto\tau_{u}^{R}$ the right-continuous inverse of $t\mapsto L^{R}(t,x_{0})$;

  • $T^{R}$ the lifetime of $R$;

  • $H_{x}^{R}:=\inf\{t>0:R_{t}=x\}$ the hitting time of $x$;

  • (when $\mathcal{S}$ is discrete) $J_{i}^{R}=J_{i}(R)$ the $i$-th jump time of $R$;

  • $R_{[0,t]}:=(R_{s}:0\leq s\leq t)$ the path of $R$ up to time $t$.

The superscripts in the above notations are omitted when $R=X$, the CTMC to be introduced shortly.

2.1 Ray-Knight identity related to loop soup

Consider a tree $\mathbb{T}$, i.e. a finite or countable connected graph without cycles, with root $x_{0}$. Denote by $V$ its set of vertices, and by $E$, resp. $\vec{E}$, its set of undirected, resp. directed, edges. Assume that every vertex $x\in V$ has finite degree. We write $x<y$ if $x$ is an ancestor of $y$, and $x\sim y$ if $x$ and $y$ are neighbours. Denote by $\mathfrak{p}(x)$ the parent of $x$. For $x\sim y$, we simply write $xy:=(x,y)$ for a directed edge. The tree is endowed with a killing measure $(k_{x})_{x\in V}$ on $V$ and conductances $(C_{xy})_{xy\in\vec{E}}$ on $\vec{E}$. We do not assume the symmetry of the conductances at the moment. Write $C^{*}_{xy}=C^{*}_{yx}=\sqrt{C_{xy}C_{yx}}$ for $x\sim y$.

Consider the CTMC $X=(X_{t})_{0\leq t<T^{X}}$ on $V$ which, being at $x$, jumps to $y$ with rate $C_{xy}$ and is killed with rate $k_{x}$. $T^{X}$ is the time when the process is killed or explodes. Let $\mathcal{L}$ be the unrooted oriented loop soup with some fixed intensity $\alpha>0$ associated to $X$. (See for example [13] for the precise definition.)

Denote by $L_{\cdot}(\mathcal{L})$ the occupation time field of $\mathcal{L}$, i.e. for all $x\in V$, $L_{x}(\mathcal{L})$ is the sum of the local times at $x$ of the loops in $\mathcal{L}$. It is well-known that when $X$ is transient, $L_{x}(\mathcal{L})$ follows a Gamma$(\alpha,G(x,x)^{-1})$ distribution (the density of the Gamma$(a,b)$ distribution at $x$ is $1_{\{x>0\}}\frac{b^{a}}{\Gamma(a)}x^{a-1}e^{-bx}$), where $G$ is the Green function of $X$; when $X$ is recurrent, $L_{x}(\mathcal{L})=\infty$ for all $x\in V$ a.s. We first suppose $X$ is transient, which ensures that the conditional distribution of $\mathcal{L}$ given $L_{x_{0}}(\mathcal{L})$ exists. For $u\geq 0$, let $\mathcal{L}^{(u)}$ have the law of $\mathcal{L}$ given $L_{x_{0}}(\mathcal{L})=u$. Without particular mention, we always assume that $X$ starts from $x_{0}$. The next proposition (see [15, Proposition 3.7] or [3, Proposition 5.3]) connects the path of $X$ with the loops in $\mathcal{L}^{(u)}$ that visit $x_{0}$.

Proposition 2.1.

For any $u>0$, consider the path $(X_{t})_{0\leq t\leq\tau_{u}}$ conditioned on $\tau_{u}<T^{X}$. Let $D=(d_{1},d_{2},\cdots)$ be a Poisson-Dirichlet partition with parameter $\alpha$, independent of $X$. Set $s_{n}:=u\cdot\sum_{k=1}^{n}d_{k}$. Then the family of unrooted loops

\left\{\pi\left(\big(X_{\tau_{s_{j-1}}+t}\big)_{0\leq t\leq\tau_{s_{j}}-\tau_{s_{j-1}}}\right):j\geq 1\right\}

is distributed as the collection of loops in $\mathcal{L}^{(u)}$ that visit $x_{0}$, where $\pi$ is the quotient map that maps a rooted loop to its corresponding unrooted loop.

By Proposition 2.1, we can take $\mathcal{L}^{(0)}=\{\gamma\in\mathcal{L}:\gamma\text{ does not visit }x_{0}\}$ and $\mathcal{L}^{(u)}$ to be the collection of the loops in $\mathcal{L}^{(0)}$ together with the loops derived by partitioning $X_{[0,\tau_{u}]}$, where $X$ and $\mathcal{L}^{(0)}$ are required to be independent. This special choice of $(\mathcal{L}^{(u)}:u\geq 0)$ provides a continuous version of the conditional distribution, and we work with this version from now on. Note that the above definition also makes sense when $\alpha=0$. In this case, $\mathcal{L}^{(0)}=\emptyset$ and $\mathcal{L}^{(u)}$ consists of the single loop $\pi(X_{[0,\tau_{u}]})$ (the Poisson-Dirichlet partition with parameter $0$ is understood as the trivial partition $D=(1,0,0,\cdots)$). So we also allow $\alpha=0$ from now on. The generalized second Ray-Knight theorem related to the loop soup, which is immediate from the above definition, reads as follows.
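
The partition of the path in Proposition 2.1 is easy to simulate. Below is a minimal sketch (in Python, assuming numpy; the function names are ours) that produces the cut levels $s_{1}<s_{2}<\cdots$ via the standard stick-breaking representation, whose ranked version is the Poisson-Dirichlet partition with parameter $\alpha>0$.

import numpy as np

def pd_cut_levels(u, alpha, eps=1e-10, rng=None):
    # Stick-breaking (GEM) weights d_1, d_2, ...; their decreasing
    # rearrangement is the Poisson-Dirichlet partition with parameter alpha.
    # The partial sums s_n = u*(d_1+...+d_n) are the local-time levels at
    # which X_{[0, tau_u]} is cut into the loops of Proposition 2.1.
    rng = rng or np.random.default_rng()
    sticks, rest = [], 1.0
    while rest > eps:             # stop once the unbroken mass is negligible
        w = rng.beta(1.0, alpha)  # requires alpha > 0; alpha = 0 gives the
        sticks.append(rest * w)   # trivial partition D = (1, 0, 0, ...)
        rest *= 1.0 - w
    return u * np.cumsum(sticks)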

Proposition 2.2 (Ray-Knight identity).

Let $\mathcal{L}^{(0)}$ and $X$ be independent. Then for $u>0$, conditionally on $\tau_{u}<T^{X}$,

\left(L_{x}(\mathcal{L}^{(0)})+L(\tau_{u},x)\right)_{x\in V}\text{ has the same law as }\left(L_{x}(\mathcal{L}^{(u)})\right)_{x\in V}.

2.2 Percolation Ray-Knight identity related to loop soup

In this part, we assume the symmetry of the conductances (i.e. $C_{xy}=C_{yx}$ for any $x\sim y$) and that $0<\alpha<1$. An element $O$ of $\{0,1\}^{E}$ is also called a configuration on $E$. When $O(e)=1$, the edge $e$ is thought of as being open. Without particular mention, $O$ is also used to denote the set of open edges it induces. A percolation on $E$ refers to a random configuration on $E$.

Definition 2.3.

For $u\geq 0$, let $\mathcal{O}^{(u)}$ be the percolation on $E$ such that, conditionally on $L_{\cdot}(\mathcal{L}^{(u)})=\ell$:

  • edges are open independently;

  • the edge $\{x,y\}$ is open with probability

\dfrac{I_{1-\alpha}\left(2C_{xy}\sqrt{\ell_{x}\ell_{y}}\right)}{I_{\alpha-1}\left(2C_{xy}\sqrt{\ell_{x}\ell_{y}}\right)},   (2.1)

where $I_{\nu}$ is the modified Bessel function: for $\nu\geq-1$ and $z\geq 0$,

I_{\nu}(z)=(z/2)^{\nu}\sum_{n=0}^{\infty}\dfrac{(z/2)^{2n}}{n!\,\Gamma(n+\nu+1)}.
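
Numerically, (2.1) can be evaluated with any implementation of $I_{\nu}$; a minimal sketch (Python, using scipy.special.iv, which computes the modified Bessel function above; the function name is ours):

from scipy.special import iv   # modified Bessel function I_nu

def open_probability(alpha, C_xy, ell_x, ell_y):
    # Probability (2.1) that the edge {x,y} is open, given the occupation
    # field ell of L^(u); requires 0 < alpha < 1.
    z = 2.0 * C_xy * (ell_x * ell_y) ** 0.5
    if z == 0.0:
        return 0.0   # I_{1-alpha}(0) = 0, while I_{alpha-1}(0+) diverges
    return iv(1.0 - alpha, z) / iv(alpha - 1.0, z)

In particular, an edge with zero occupation at an endpoint is closed, and the probability increases to $1$ as $\ell_{x}\ell_{y}\rightarrow\infty$.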

Consider the loop soup $\mathcal{L}^{(0)}$ and the percolation $\mathcal{O}^{(0)}$. For $t\geq 0$, define the aggregated local times

\phi_{t}(x):=L_{x}(\mathcal{L}^{(0)})+L(t,x),\quad x\in V.   (2.2)

The process $\mathcal{X}:=(X_{t},\mathcal{O}_{t})_{0\leq t\leq\tau_{u}}$ is defined as follows: $(X_{0},\mathcal{O}_{0}):=(x_{0},\mathcal{O}^{(0)})$. Conditionally on $L_{\cdot}(\mathcal{L}^{(0)})$ and $(\mathcal{X}_{s}:0\leq s\leq t)$, if $X_{t}=x$ and $y\sim x$, then

  • $X_{t}$ jumps to its neighbour $y$ with rate $C_{xy}$, and $\mathcal{O}_{t}(\{x,y\})$ is then set to $1$ (if it was not already);

  • in case $\mathcal{O}_{t}(\{x,y\})=0$, $\mathcal{O}_{t}(\{x,y\})$ is set to $1$ without $X_{t}$ jumping with rate

C_{xy}\sqrt{\dfrac{\phi_{t}(y)}{\phi_{t}(x)}}\cdot\dfrac{K_{\alpha}\big(C_{xy}\sqrt{\phi_{t}(x)\phi_{t}(y)}\big)}{K_{1-\alpha}\big(C_{xy}\sqrt{\phi_{t}(x)\phi_{t}(y)}\big)},   (2.3)

where $K_{\nu}(z)=\frac{1}{2}\Gamma(\nu)\Gamma(1-\nu)\left(I_{-\nu}(z)-I_{\nu}(z)\right)$.
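
For $0<\nu<1$, the $K_{\nu}$ defined above coincides with the usual modified Bessel function of the second kind, so the rate (2.3) can be computed directly; a short sketch (Python with scipy; the function name is ours):

from math import sqrt
from scipy.special import kv   # equals the K_nu above for 0 < nu < 1

def extra_opening_rate(alpha, C_xy, phi_x, phi_y):
    # Rate (2.3) at which a still-closed edge {x,y} is opened
    # without the first coordinate X_t jumping.
    z = C_xy * sqrt(phi_x * phi_y)
    return C_xy * sqrt(phi_y / phi_x) * kv(alpha, z) / kv(1.0 - alpha, z)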

Theorem 2.4 (Percolation Ray-Knight identity).

With the notations above, conditionally on $\tau_{u}<T^{X}$, $(\phi_{\tau_{u}},\mathcal{O}_{\tau_{u}})$ has the same law as $(L_{\cdot}(\mathcal{L}^{(u)}),\mathcal{O}^{(u)})$.

Theorem 2.4 will be proved in §4. The process $(\mathcal{O}_{t})_{0\leq t\leq\tau_{u}}$ has a natural interpretation in terms of the loop soup on a metric graph. Specifically, let $\widetilde{\mathbb{T}}$ be the metric graph associated to $\mathbb{T}$, where edges are considered as intervals, so that one can construct a Brownian motion $B$ which moves continuously on $\widetilde{\mathbb{T}}$ and whose print on the vertices is distributed as $X$ (see §4 for details). Let $\widetilde{\mathcal{L}}$ be the unrooted oriented loop soup with intensity $\alpha>0$ associated to $B$. Starting with the loop soup $\widetilde{\mathcal{L}}$ with boundary condition $0$ at $x_{0}$ and an independent Brownian motion $B$ starting at $x_{0}$, one can consider the field $(\widetilde{\phi}_{t}(x),\,x\in\widetilde{\mathbb{T}})$ given by the aggregated local time at $x$ of the loop soup $\widetilde{\mathcal{L}}$ and of the Brownian motion $B$ up to time $t$. Then one can construct $\mathcal{O}_{t}$ as the configuration in which an edge $e$ is open if and only if the field $\widetilde{\phi}_{t}$ does not have any zero on $e$. See Proposition 4.1.

Remark 2.5.

Since the laws involved in the Ray-Knight identity and the percolation Ray-Knight identity do not depend on $k_{x_{0}}$, we can generalize the above results to the case where $X$ is recurrent.

2.3 Inversion of Ray-Knight identities

To ease the presentation, we write $\phi^{(u)}$ for $L_{\cdot}(\mathcal{L}^{(u)})$ from now on. Theorem 2.4 allows us to identify $(\phi^{(u)},\mathcal{O}^{(u)})$ with $(\phi_{\tau_{u}},\mathcal{O}_{\tau_{u}})$, which we will do.

Definition 2.6.

We call the triple

\left(\phi^{(0)},X_{[0,\tau_{u}]},\phi^{(u)}\right)

a Ray-Knight triple (with parameter $\alpha$) associated to $X$. Similarly, recalling the notation $\mathcal{X}_{t}=(X_{t},\mathcal{O}_{t})$, the triple

\left((\phi^{(0)},\mathcal{O}^{(0)}),\mathcal{X}_{[0,\tau_{u}]},(\phi^{(u)},\mathcal{O}^{(u)})\right)

will be called a percolation Ray-Knight triple.

Inverting the Ray-Knight, resp. percolation Ray-Knight, identity is to deduce the conditional law of $X_{[0,\tau_{u}]}$, resp. $\mathcal{X}_{[0,\tau_{u}]}$, given $\phi^{(u)}$, resp. $(\phi^{(u)},\mathcal{O}^{(u)})$.

We introduce an adjacency relation on $V\times\{0,1\}^{E}$: two elements $(x,O_{1})$ and $(y,O_{2})$ of $V\times\{0,1\}^{E}$ are neighbours if they satisfy either one of the following: (1) $O_{1}=O_{2}$ and $x\sim y$; (2) $O_{1}$ and $O_{2}$ differ by exactly one edge $e$ and $x,y\in e$. This defines a graph $\mathbb{T}^{\prime}$ with finite degrees, and $\mathcal{X}_{t}$ is a nearest-neighbour jump process on $\mathbb{T}^{\prime}$.

The inversion of the Ray-Knight identities is expressed in terms of several processes defined by jump rates. Readers are referred to Appendix A for the rigorous definition and basic properties of such processes. All the continuous-time processes defined below are assumed to be right-continuous, minimal, nearest-neighbour jump processes with a finite or infinite lifetime. The collection of all such sample paths on $\mathbb{T}$ (resp. $\mathbb{T}^{\prime}$) is denoted by $\Omega$ (resp. $\Omega^{\prime}$).

Given $\lambda\in(\mathbb{R}^{+})^{V}$, set for $x,y\in V$ with $x\sim y$ and $\omega\in\Omega$ with $T^{\omega}>t$,

\begin{split}&\Lambda_{t}(x)=\Lambda_{t}(\lambda,\omega)(x)=\Lambda_{t}(\lambda,\omega_{[0,t]})(x):=\lambda(x)-\int_{0}^{t}1_{\{\omega_{s}=x\}}\,\mathrm{d}s,\\ &\varphi_{t}(xy)=\varphi_{t}(\lambda,\omega)(xy)=\varphi_{t}(\lambda,\omega_{[0,t]})(xy):=2C^{*}_{xy}\sqrt{\Lambda_{t}(x,\omega)\Lambda_{t}(y,\omega)}.\end{split}   (2.4)

Intuitively, $\lambda$ is viewed as the initial local time field, and $\Lambda_{t}$ stands for the local time field remaining after running the process $\omega$ until time $t$. Although these quantities depend on $\omega$, we will systematically drop $\omega$ from the notation for the sake of concision whenever the process is clear from the context.

2.3.1 Inversion of Ray-Knight identity

Keep in mind that most of the following definitions have a parameter $\alpha\geq 0$, which is always omitted from the notations for simplicity.

Set

\mathfrak{R}=\begin{cases}\big\{\lambda\in(\mathbb{R}^{+})^{V}:\lambda(x)>0\ \forall x\in V\big\},&\text{ if }\alpha>0;\\ \big\{\lambda\in(\mathbb{R}^{+})^{V}:\lambda(x_{0})>0,\,\text{supp}(\lambda)\text{ is connected and finite}\big\},&\text{ if }\alpha=0.\end{cases}

Note that with probability $1$, $\phi^{(u)}\in\mathfrak{R}$. We will take $\mathfrak{R}$ as the range of $\phi^{(u)}$ and consider only the law of $X_{[0,\tau_{u}]}$ given $\phi^{(u)}=\lambda\in\mathfrak{R}$.

Now we define the vertex repelling jump process we are interested in. Given $\lambda\in\mathfrak{R}$, its distribution $\mathbb{P}^{\lambda}_{x_{0}}$ on $\Omega$ is such that the process $\omega=(\omega_{t},0\leq t<T^{\omega})$ starts at $\omega_{0}=x_{0}$, behaves such that

  • conditionally on $t<T^{\omega}$ and $(\omega_{s}:0\leq s\leq t)$ with $\omega_{t}=x$, it jumps to a neighbour $y$ of $x$ with rate $r^{\lambda}_{t}\big(x,y,\omega_{[0,t]}\big)$;

  • (resurrect mechanism) every time $\lim_{s\rightarrow t-}\Lambda_{s}(\omega_{s})=0$ and $\omega_{t-}\neq x_{0}$, it jumps to $\mathfrak{p}(\omega_{t-})$ at time $t$,

and stops at time $T^{\omega}=T^{\lambda}(\omega)$, where for $x,y\in V$ with $x\sim y$ and $\omega\in\Omega$ with $T^{\omega}>t$,

r^{\lambda}_{t}(x,y,\omega_{[0,t]}):=\left\{\begin{aligned} &C^{*}_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha-1}\left(\varphi_{t}(xy)\right)}{I_{\alpha}\left(\varphi_{t}(xy)\right)},&&\text{ if }y=\mathfrak{p}(x),\\ &C^{*}_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha}\left(\varphi_{t}(xy)\right)}{I_{\alpha-1}\left(\varphi_{t}(xy)\right)},&&\text{ if }x=\mathfrak{p}(y),\end{aligned}\right.   (2.5)

and $T^{\lambda}=T^{\lambda}_{0}\wedge T^{\lambda}_{\infty}$ with

T^{\lambda}_{0}(\omega):=\sup\big\{t\geq 0:\Lambda_{t}(x_{0})>0\big\};
T^{\lambda}_{\infty}(\omega):=\sup\big\{t\geq 0:\omega_{[0,t]}\text{ has finitely many jumps}\big\}.

Here $T^{\lambda}_{0}$ represents the time when the local time at $x_{0}$ is exhausted. The process can be roughly described as follows: the total local time available at each vertex is given at the beginning. As the process runs, it eats up the local time. The jump rates are given in terms of the remaining local time. The process finally stops whenever the available local time at $x_{0}$ is used up or an explosion occurs.
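
To make the dynamics concrete, here is a crude time-discretized simulation sketch of $\mathbb{P}^{\lambda}_{x_{0}}$ for $\alpha>0$ (where, by Remark 2.7 below, the resurrect mechanism never fires). It is written in Python with numpy/scipy; the data layout and all names are ours, and an exact sampler would instead invert the time change of the rates.

import numpy as np
from scipy.special import iv

def simulate_repelling(parent, Cstar, lam, x0, alpha, dt=1e-4, t_max=50.0, rng=None):
    # Crude Euler scheme: in each slice of length dt the process jumps with
    # probability ~ rate*dt (rates (2.5)), otherwise eats local time at its
    # current site.  parent[x] = p(x) with parent[x0] = None;
    # Cstar[frozenset({x,y})] = C*_{xy}; lam is a field in R; alpha > 0.
    rng = rng or np.random.default_rng()
    Lam = dict(lam)                                  # remaining local time
    children = {x: [] for x in Lam}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    x, t, path = x0, 0.0, [(0.0, x0)]
    while Lam[x0] > 0.0 and t < t_max:
        nbrs = ([parent[x]] if parent[x] is not None else []) + children[x]
        rates = {}
        for y in nbrs:
            c = Cstar[frozenset((x, y))]
            phi = 2.0 * c * np.sqrt(Lam[x] * Lam[y])     # varphi_t(xy), (2.4)
            ratio = iv(alpha - 1.0, phi) / iv(alpha, phi)
            rates[y] = c * np.sqrt(Lam[y] / Lam[x]) * (
                ratio if y == parent[x] else 1.0 / ratio)
        R = sum(rates.values())
        if rng.random() < min(R * dt, 1.0):              # jump in this slice
            ys = list(rates)
            x = ys[rng.choice(len(ys), p=[rates[y] / R for y in ys])]
            path.append((t, x))
        else:
            Lam[x] = max(Lam[x] - dt, 1e-12)             # eat local time at x
        t += dt
    return path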

Remark 2.7.

It holds that $I_{1}=I_{-1}$ and, as $z\downarrow 0$,

I_{\nu}(z)\sim\begin{cases}\Gamma(\nu+1)^{-1}(\tfrac{1}{2}z)^{\nu},&\text{ if }\nu>-1;\\ \tfrac{1}{2}z,&\text{ if }\nu=-1.\end{cases}

Hence,

\text{for }\alpha>0,~I_{\alpha-1}(z)/I_{\alpha}(z)\sim 2\alpha z^{-1};\quad I_{-1}(z)/I_{0}(z)\sim z/2.   (2.6)

The different behaviours in (2.6) for $\alpha>0$ and $\alpha=0$ indicate different behaviours of the vertex repelling jump process in the two cases. Intuitively speaking, when $\alpha>0$, as the process is about to exhaust the local time at some $x\neq x_{0}$, the jump rate to $\mathfrak{p}(x)$ goes to infinity. So the resurrect mechanism is actually not needed in this case, and there is still some positive local time left at every vertex other than $x_{0}$ at the end. In the case of $\alpha=0$, the process exhibits a different picture. Contrary to the case of $\alpha>0$, the jump rate to the children goes to infinity. So intuitively the process is ‘pushed’ to the boundary of $\text{supp}(\lambda)$ and exhausts the available local time at one of the boundary vertices. To guide the process back to $x_{0}$, we resurrect it by letting it jump to the parent of the vertex. In this way, the process finally ends up exhausting all the available local time at every vertex.

By Remark 2.7, we can see that the vertex repelling jump process is in line with our intuition about the inversion of the Ray-Knight identity. In the case of $\alpha>0$, the given time field includes the external time field $\phi^{(0)}$, so the inversion process only uses part of the local time at each vertex other than $x_{0}$ and ends up exhausting the local time at $x_{0}$. In the case of $\alpha=0$, the given time field is exactly the local time field of the CTMC itself, so the inversion process must use up the local time at every vertex before finally stopping at $x_{0}$.

Theorem 2.8.

Suppose $\left(\phi^{(0)},X_{[0,\tau_{u}]},\phi^{(u)}\right)$ is a Ray-Knight triple associated to $X$. For any $\lambda\in\mathfrak{R}$, the conditional distribution of $X_{[0,\tau_{u}]}$ given $\tau_{u}<T^{X}$ and $\phi^{(u)}=\lambda$ is $\mathbb{P}^{\lambda}_{x_{0}}$.

2.3.2 Inversion of percolation Ray-Knight identity

Assume the symmetry of the conductances and that $0<\alpha<1$. Our goal is to introduce the percolation-vertex repelling jump process that inverts the percolation Ray-Knight identity. Given $\lambda\in\mathfrak{R}$ and a configuration $O$ on $E$, the distribution $\mathbb{P}^{\lambda}_{(x_{0},O)}$ on $\Omega^{\prime}$ is such that the process $\omega=(\omega_{t},0\leq t<T^{\omega})$ (the first coordinate being a jump process on $V$ and the second coordinate a percolation on $E$) starts at $(x_{0},O)$ and moves such that, conditionally on $t<T^{\omega}$ and $(\omega_{s}:0\leq s\leq t)$ with $\omega_{t}=(x,O_{1})$, it jumps from $(x,O_{1})$ to $(y,O_{2})$ with rate

\left\{\begin{aligned} &1_{\{O_{1}(\{x,y\})=1\}}C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{1-\alpha}\left(\varphi_{t}(xy)\right)}{I_{\alpha}\left(\varphi_{t}(xy)\right)},&&\text{ if }y=\mathfrak{p}(x),\ O_{1}=O_{2};\\ &1_{\{O_{1}(\{x,y\})=1\}}C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha}\left(\varphi_{t}(xy)\right)}{I_{1-\alpha}\left(\varphi_{t}(xy)\right)},&&\text{ if }x=\mathfrak{p}(y),\ O_{1}=O_{2};\\ &C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha-1}\left(\varphi_{t}(xy)\right)-I_{1-\alpha}\left(\varphi_{t}(xy)\right)}{I_{\alpha}\left(\varphi_{t}(xy)\right)},&&\text{ if }y=\mathfrak{p}(x),\ O_{2}=O_{1}\setminus\{x,y\};\\ &C_{xz}\sqrt{\dfrac{\Lambda_{t}(z)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{-\alpha}\left(\varphi_{t}(xz)\right)-I_{\alpha}\left(\varphi_{t}(xz)\right)}{I_{1-\alpha}\left(\varphi_{t}(xz)\right)},&&\text{ if }x=y,\ O_{2}=O_{1}\setminus\{x,z\}\text{ for }z\text{ with }x=\mathfrak{p}(z),\end{aligned}\right.   (2.7)

and stops at time $T^{\omega}=T^{\lambda,O}(\omega)$ when the process explodes or uses up the local time at $x_{0}$. Here for $\omega=(\omega^{1},\omega^{2})\in\Omega^{\prime}$, $\Lambda_{t}(x,\omega)$ and $\varphi_{t}(xy,\omega)$ are defined as $\Lambda_{t}(x,\omega^{1})$ and $\varphi_{t}(xy,\omega^{1})$ respectively.
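
For concreteness, the four cases of (2.7) translate directly into a rate table; a hedged sketch (Python with scipy; the encoding of configurations as sets of open edges and all names are ours):

from math import sqrt
from scipy.special import iv

def pvr_rates(alpha, C, parent, children, Lam, x, O1):
    # All nonzero jump rates (2.7) out of the state (x, O1), returned as a
    # dict keyed by the target state (y, O2).  O1 is a frozenset of open
    # edges, each edge a frozenset of its two endpoints.
    rates = {}
    nbrs = ([parent[x]] if parent[x] is not None else []) + children[x]
    for y in nbrs:                          # cases 1-2: move, O2 = O1
        e = frozenset((x, y))
        if e in O1:
            phi = 2.0 * C[e] * sqrt(Lam[x] * Lam[y])
            r = iv(1 - alpha, phi) / iv(alpha, phi) if y == parent[x] \
                else iv(alpha, phi) / iv(1 - alpha, phi)
            rates[(y, O1)] = C[e] * sqrt(Lam[y] / Lam[x]) * r
    if parent[x] is not None:               # case 3: move up, O2 = O1 \ {x,y}
        y = parent[x]
        e = frozenset((x, y))
        phi = 2.0 * C[e] * sqrt(Lam[x] * Lam[y])
        rates[(y, O1- {e})] = C[e] * sqrt(Lam[y] / Lam[x]) * \
            (iv(alpha - 1, phi) - iv(1 - alpha, phi)) / iv(alpha, phi)
    for z in children[x]:                   # case 4: close {x,z} without moving
        e = frozenset((x, z))
        if e in O1:
            phi = 2.0 * C[e] * sqrt(Lam[x] * Lam[z])
            rates[(x, O1 - {e})] = C[e] * sqrt(Lam[z] / Lam[x]) * \
                (iv(-alpha, phi) - iv(alpha, phi)) / iv(1 - alpha, phi)
    return rates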

Theorem 2.9.

Suppose $((\phi^{(0)},\mathcal{O}^{(0)}),\mathcal{X}_{[0,\tau_{u}]},(\phi^{(u)},\mathcal{O}^{(u)}))$ is a percolation Ray-Knight triple associated to $X$. For any $\lambda\in\mathfrak{R}$ and configuration $O$ on $E$, the conditional distribution of $\left(\mathcal{X}_{\tau_{u}-t}\right)_{0\leq t\leq\tau_{u}}$ given $\left(\phi^{(u)},\mathcal{O}^{(u)}\right)=(\lambda,O)$ and $\tau_{u}<T^{X}$ is $\mathbb{P}^{\lambda}_{(x_{0},O)}$.

2.4 Mesh limit of vertex repelling jump processes

We only tackle the case of simple random walks on dyadic grids, which can be easily generalized to general CTMCs on trees. Let $B$ be a reflected Brownian motion on $[0,\infty)$. View $0$ as the ‘root’ and let $(\widetilde{\phi}^{(0)},B_{[0,\tau^{B}_{u}]},\widetilde{\phi}^{(u)})$ be a Ray-Knight triple associated to $B$ (defined in a similar way to that for CTMCs). The conditional law of $B_{[0,\tau^{B}_{u}]}$ given $\widetilde{\phi}^{(u)}=\lambda$ is the self-repelling diffusion $W^{\lambda}$, which can be constructed from the burglar process. See §5 for details.

Denote $\mathbb{N}_{k}:=2^{-k}\mathbb{N}$. Consider $\mathbb{T}_{k}=(\mathbb{N}_{k},E_{k})$, where $E_{k}:=\big\{\{x,y\}:x,y\in\mathbb{N}_{k},\ |x-y|=2^{-k}\big\}$, endowed with conductances $C_{e}^{k}=2^{k-1}$ on each edge and no killing. The induced CTMC $X^{(k)}$ is the print of $B$ on $\mathbb{N}_{k}$. Let $\big(\phi^{(0),k},X^{(k)}_{[0,\tau^{X^{(k)}}_{u}]},\phi^{(u),k}\big)$ be a Ray-Knight triple associated to $X^{(k)}$. It holds that

\left(X^{(k)}_{2^{k}t},L^{X^{(k)}}(2^{k}t,x),\phi^{(0),k}(x)\right)_{t\geq 0,x\in\mathbb{R}^{+}}\overset{d}{\rightarrow}\left(B_{t},L^{B}(t,x),\widetilde{\phi}^{(0)}(x)\right)_{t\geq 0,x\in\mathbb{R}^{+}},

where $L^{X^{(k)}}(2^{k}t,\cdot)$ and $\phi^{(0),k}(\cdot)$ are considered to be linearly interpolated outside $\mathbb{N}_{k}$.

In view of this, for any non-negative continuous function $\lambda$ on $\mathbb{R}^{+}$ satisfying some additional conditions, we naturally consider the vertex repelling jump process $X^{\lambda,(k)}$, which has the conditional law of $\big(X^{(k)}_{2^{k}t}:0\leq t\leq 2^{-k}\tau^{X^{(k)}}_{u}\big)$ given $\phi^{(u),k}=\lambda|_{\mathbb{N}_{k}}$. The jump rates of the process are given in (5.4).

Theorem 2.10.

For $\lambda\in\widetilde{\mathfrak{R}}$ (defined in §5), the family of vertex repelling jump processes $X^{\lambda,(k)}$ converges weakly as $k\rightarrow\infty$ to the self-repelling diffusion $W^{\lambda}$ for the uniform topology, where the processes are assumed to stay at $0$ after their lifetimes.

3 Inverting Ray-Knight identity

In this section, we will obtain the inversion of the Ray-Knight identity. The main idea is to first introduce the information on crossings and explore the law of the CTMC conditioned on both local times and crossings. Then, by ‘averaging over crossings’, we get the representation of the inversion as the vertex repelling jump process stated in Theorem 2.8.

To begin with, observe that it suffices to consider the case when $X$ is recurrent. In fact, for $x\in V$, we set $h(x)=\mathbb{P}^{x}(H_{x_{0}}<T^{X})$, where $\mathbb{P}^{x}$ is the law of $X$ starting from $x$. Let $Y$ be the CTMC on $V$ starting from $x_{0}$ induced by the conductances $C^{h}_{xy}=\frac{h(y)}{h(x)}C_{xy}$ and no killing. Then by a standard argument, we can show that $Y$ is recurrent and that $Y_{[0,\tau^{Y}_{u}]}$ has the law of $X_{[0,\tau_{u}]}$ conditioned on $\tau_{u}<T^{X}$. Note that $Y$ can also be obtained by removing the killing rate at $x_{0}$ from the $h$-transform of $X$ (the latter process is killed at $x_{0}$ with rate $C_{x_{0}}-C^{h}_{x_{0}}$, where $C_{x}=k_{x}+\sum_{y:y\sim x}C_{xy}$ and $C^{h}_{x}=\sum_{y:y\sim x}C^{h}_{xy}$). Combining the two facts that (1) the law of the loop soup is invariant under the $h$-transform (cf. [3, Proposition 3.2]) and (2) the law of the Ray-Knight triple does not depend on the killing rate at $x_{0}$, we see that the Ray-Knight triple associated to $X$ has the same law as that associated to $Y$. It is then easy to deduce Theorem 2.8 in the transient case from that in the recurrent case.

Throughout this section, we will assume $X$ is recurrent.

3.1 The representation of the inversion as a vertex-edge repelling process

Definition 3.1.

An element $n=\left(n(xy)\right)_{xy\in\vec{E}}\in\mathbb{N}^{\vec{E}}$ is called a network (on $\mathbb{T}$). For any network $n$, set

\widecheck{n}(xy)=n(xy)+(\alpha-1)\cdot 1_{\{y=\mathfrak{p}(x)\}},

and for $x\in V$, $n(x):=\sum_{y:y\sim x}n(xy)$ and $\widecheck{n}(x):=\sum_{y:y\sim x}\widecheck{n}(xy)$. If $n(xy)=n(yx)$ for any $\{x,y\}\in E$, we say that the network $n$ is sourceless, denoted by $\partial n=\emptyset$. Given $\lambda\in\mathfrak{R}$, we call $\mathcal{N}$ a sourceless $\alpha$-random network associated to $\lambda$ if $\mathcal{N}$ is a sourceless network and $\left(\mathcal{N}(xy):x=\mathfrak{p}(y)\right)$ are independent, with $\mathcal{N}(xy)$ following the Bessel$(\alpha-1,2C^{*}_{xy}\sqrt{\lambda(x)\lambda(y)})$ distribution. (For $\nu\geq-1$ and $z>0$, the Bessel$(\nu,z)$ distribution is the distribution on $\mathbb{N}$ given by

b_{\nu,z}(n)=I_{\nu}(z)^{-1}\dfrac{(z/2)^{2n+\nu}}{n!\,\Gamma(n+\nu+1)},\quad n\in\mathbb{N};   (3.1)

the Bessel$(\nu,0)$ distribution is defined to be the Dirac measure at $0$.)

More generally, let $\mathfrak{p}(x_{0},x)$ be the unique self-avoiding path from $x_{0}$ to $x$, also seen as a collection of unoriented edges. For $i\in V\backslash\{x_{0}\}$, we say that $n$ has sources $(x_{0},i)$, denoted by $\partial n=(x_{0},i)$, if for any $xy\in\vec{E}$ with $x=\mathfrak{p}(y)$,

\left\{\begin{array}{ll}n(xy)=n(yx)-1,&\text{ if }\{x,y\}\in\mathfrak{p}(x_{0},i);\\ n(xy)=n(yx),&\text{ if }\{x,y\}\notin\mathfrak{p}(x_{0},i).\end{array}\right.

Given $\lambda\in\mathfrak{R}$ and $i\in V\backslash\{x_{0}\}$, we call $\mathcal{N}$ an $\alpha$-random network with sources $(x_{0},i)$ associated to $\lambda$ if $\mathcal{N}$ is a network with sources $(x_{0},i)$ and $\left(\mathcal{N}(xy):x=\mathfrak{p}(y)\right)$ are independent, with $\mathcal{N}(xy)$ following the Bessel$(\alpha,2C^{*}_{xy}\sqrt{\lambda(x)\lambda(y)})$ distribution if $x<i$, and the Bessel$(\alpha-1,2C^{*}_{xy}\sqrt{\lambda(x)\lambda(y)})$ distribution otherwise.

Remark. We will sometimes use the convention that a network with sources $(x_{0},x_{0})$ is a sourceless network.
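
Sampling from the Bessel$(\nu,z)$ distribution (3.1) is straightforward by truncating the pmf; a small sketch (Python with numpy/scipy, using log-weights for numerical stability; the names and the truncation level are ours):

import numpy as np
from scipy.special import gammaln

def bessel_sample(nu, z, n_max=500, rng=None):
    # One draw from the Bessel(nu, z) distribution (3.1), truncated at n_max;
    # Bessel(nu, 0) is the Dirac measure at 0.
    rng = rng or np.random.default_rng()
    if z == 0.0:
        return 0
    n = np.arange(n_max + 1)
    logw = (2 * n + nu) * np.log(z / 2.0) - gammaln(n + 1) - gammaln(n + nu + 1)
    w = np.exp(logw - logw.max())             # normalizing by the sum below
    return int(rng.choice(n, p=w / w.sum()))  # replaces the factor I_nu(z)^{-1}

One draw of the sourceless $\alpha$-random network then takes, independently for every edge $xy$ with $x=\mathfrak{p}(y)$, $\mathcal{N}(xy)=\mathcal{N}(yx)=$ bessel_sample$(\alpha-1,\,2C^{*}_{xy}\sqrt{\lambda(x)\lambda(y)})$.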

Every loop configuration $\mathscr{L}$ (i.e. a collection of unrooted, oriented loops) induces a network $\theta(\mathscr{L})$: for $x\sim y$,

\theta(\mathscr{L})(xy):=\#\text{ crossings from }x\text{ to }y\text{ by the loops in }\mathscr{L}.

Due to the tree structure, it holds that $\theta(\mathscr{L})(xy)=\theta(\mathscr{L})(yx)$ for any $xy\in\vec{E}$, i.e. $\theta(\mathscr{L})$ is sourceless.

Let $\left(\phi^{(0)},X_{[0,\tau_{u}]},\phi^{(u)}\right)$ be the Ray-Knight triple associated to $X$, where $\phi^{(0)}$ is the local time field of a loop soup $\mathcal{L}^{(0)}$ independent of $X$. The path $X_{[0,\tau_{u}]}$ is also viewed as a loop configuration consisting of a single loop. Let $\mathcal{N}^{(u)}:=\theta(\mathcal{L}^{(0)})+\theta(X_{[0,\tau_{u}]})$. We have the following result, whose proof is contained in §3.1.1. [10, Theorem 3.1] provides another proof for the case of $\alpha=0$.

Proposition 3.2.

For $\lambda\in\mathfrak{R}$, conditionally on $\phi^{(u)}=\lambda$, $\mathcal{N}^{(u)}$ is a sourceless $\alpha$-random network associated to $\lambda$.

With this proposition, for the recovery of $X_{[0,\tau_{u}]}$, it suffices to derive the law of $X_{[0,\tau_{u}]}$ given $\phi^{(u)}$ and $\mathcal{N}^{(u)}$. Set

\mathfrak{N}:=\begin{cases}\big\{n\in\mathbb{N}^{\vec{E}}:\partial n=\emptyset\big\},&\text{if }\alpha>0;\\ \big\{n\in\mathbb{N}^{\vec{E}}:\partial n=\emptyset,\,\text{supp}(n)\text{ is connected and finite}\big\},&\text{if }\alpha=0.\end{cases}

Given $n\in\mathfrak{N}$, for $\omega\in\Omega$ and $x,y\in V$ with $x\sim y$, set

\Theta_{t}(xy)=\Theta_{t}(n,\omega)(xy)=\Theta_{t}(n,\omega_{[0,t]})(xy):=n(xy)-\#\left\{0<s\leq t:\omega_{s-}=x,\ \omega_{s}=y\right\},   (3.2)

which represents the crossings remaining after running the process $\omega$ until time $t$ with initial crossings $n$. Recall the notations introduced at the beginning of §2.3.1. Given $\lambda\in\mathfrak{R}$ and $n\in\mathfrak{N}$, the vertex-edge repelling jump process $X^{\lambda,n}$ is defined to be a process starting from $x_{0}$ that behaves such that

  • conditionally on $t<T^{X^{\lambda,n}}$ and $\left(X^{\lambda,n}_{s}:0\leq s\leq t\right)$ with $X^{\lambda,n}_{t}=x$, it jumps to a neighbour $y$ of $x$ with rate $r^{\lambda,n}_{t}\big(x,y,X^{\lambda,n}_{[0,t]}\big)$;

  • every time $\lim_{s\rightarrow t-}\Lambda_{s}(X^{\lambda,n}_{s})=0$ and $X^{\lambda,n}_{t-}\neq x_{0}$, it jumps to $\mathfrak{p}(X^{\lambda,n}_{t-})$ at time $t$,

and stops at time $T^{X^{\lambda,n}}$ when the process exhausts the local time at $x_{0}$ or explodes. Here for $\omega\in\Omega$ with $T^{\omega}>t$,

r^{\lambda,n}_{t}(x,y,\omega):=\frac{\widecheck{\Theta}_{t}(xy)}{\Lambda_{t}(x)}.   (3.3)

In the above expression, $\Theta_{t}$ is viewed as a network, and $\widecheck{\Theta}_{t}$ is defined as before.

Intuitively, for this process, both the local time and the crossings available are given at the beginning. The process eats up local time during its stays at vertices and consumes crossings at its jumps over edges.
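
Since the rates (3.3) change only through $\Lambda_{t}(x)$ while the process sits at $x$, the holding times can be sampled exactly: writing $\widecheck{\Theta}(x)$ for the total remaining checked crossings at $x$, the survival probability of the holding time under (3.3) shows that the fraction of the remaining local time consumed before the next jump is Beta$(1,\widecheck{\Theta}(x))$-distributed, and the jump goes to $y$ with probability proportional to $\widecheck{\Theta}(xy)$. A sketch of this sampler (Python; all names are ours, and a sourceless input $n$ is assumed):

import numpy as np

def simulate_vertex_edge(parent, lam, n, alpha, x0, rng=None):
    # Exact sampler sketch for X^{lam,n}.  parent[x] = p(x), parent[x0] = None;
    # n[(x, y)] = n[(y, x)] is a sourceless network in N; lam is a field in R.
    rng = rng or np.random.default_rng()
    Lam, Th = dict(lam), dict(n)        # remaining local time and crossings
    children = {x: [] for x in Lam}
    for v, p in parent.items():
        if p is not None:
            children[p].append(v)
    def check(x, y):                    # \widecheck{Theta}_t(xy) of (3.3)
        return Th[(x, y)] + (alpha - 1.0) * (y == parent[x])
    x, t, path = x0, 0.0, [(0.0, x0)]
    while True:
        nbrs = ([parent[x]] if parent[x] is not None else []) + children[x]
        if x == x0 and sum(Th[(x0, y)] for y in nbrs) == 0:
            return path + [(t + Lam[x0], x0)]   # final holding exhausts Lambda(x0)
        w = [max(check(x, y), 0.0) for y in nbrs]
        tot = sum(w)
        if tot <= 0.0:                  # resurrect mechanism (alpha = 0 only):
            t += Lam[x]; Lam[x] = 0.0   # sit out the remaining local time,
            y = parent[x]               # then jump to the parent
        else:
            h = Lam[x] * (1.0 - rng.random() ** (1.0 / tot))  # Beta(1, tot)
            t += h; Lam[x] -= h
            y = nbrs[rng.choice(len(nbrs), p=[v / tot for v in w])]
        Th[(x, y)] -= 1                 # consume one crossing over xy
        x = y
        path.append((t, x))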

Theorem 3.3.

For any $\lambda\in\mathfrak{R}$ and $n\in\mathfrak{N}$, $X^{\lambda,n}$ has the law of $X_{[0,\tau_{u}]}$ conditioned on $\phi^{(u)}=\lambda$ and $\mathcal{N}^{(u)}=n$.

3.1.1 Proof of Proposition 3.2

For a subset $V_{0}$ of $V$, the print of $X$ on $V_{0}$ is by definition

\Big(X_{A^{-1}(t)}:0\leq t<A\big(T^{X}\big)\Big),\text{ where }A(u):=\int_{0}^{u}1_{\{X_{s}\in V_{0}\}}\,\mathrm{d}s.   (3.4)

We can also naturally define the print of a (rooted or unrooted) loop. The print of a loop configuration is the collection of the prints of the loops in the configuration. Recall that $\mathcal{L}^{(u)}$ consists of the partition of the path $X_{[0,\tau_{u}]}$ and an independent loop soup $\mathcal{L}^{(0)}$. By the excursion theory (resp. basic properties of Poisson random measures), the prints of $X_{[0,\tau_{u}]}$ (resp. $\mathcal{L}^{(0)}$) on the different branches at $x_{0}$ (a branch at $x_{0}$ is a connected component of the tree obtained by removing the vertex $x_{0}$, to which we add back $x_{0}$) are independent. In particular, for any $x\sim x_{0}$, $\mathcal{N}^{(u)}(x_{0}x)$ is independent of the prints of $\mathcal{L}^{(u)}$ on the branches that do not contain $x$. By considering other vertices as the root of $\mathbb{T}$, we readily obtain that given $\phi^{(u)}=\lambda$, $\left(\mathcal{N}^{(u)}(yz):y=\mathfrak{p}(z)\right)$ are independent and the conditional law of $\mathcal{N}^{(u)}(yz)$ depends only on $\lambda(y)$ and $\lambda(z)$.

It now reduces to considering the law of $\mathcal{N}^{(u)}(yz)$ conditioned on $\phi^{(u)}(y)=\lambda(y)$ and $\phi^{(u)}(z)=\lambda(z)$. We focus on the case $yz=x_{0}x$ only, since the other edges are handled in the same way. The following hold:

  • $\mathcal{N}^{(u)}(x_{0}x)$ has a Poisson distribution with parameter $uC_{x_{0},x}$;

  • $\phi^{(u)}(x)-L_{x}(\mathcal{L}^{(0)})$ equals a sum of $\mathcal{N}^{(u)}(x_{0}x)$ i.i.d. exponential random variables with parameter $C_{x,x_{0}}$;

  • $L_{x}(\mathcal{L}^{(0)})$ follows a Gamma$(\alpha,C_{x,x_{0}})$ distribution (a Gamma$(0,\beta)$ r.v. is interpreted as a r.v. identically equal to $0$). In fact, $L_{x}(\mathcal{L}^{(0)})$ follows a Gamma$(\alpha,G^{\bar{x}_{0}}(x,x)^{-1})$ distribution, where $G^{\bar{x}_{0}}$ is the Green function of the process $X$ killed at $x_{0}$; the recurrence of $X$ implies that $G^{\bar{x}_{0}}(x,x)=C_{x,x_{0}}^{-1}$.

  • The above exponential random variables, $\mathcal{N}^{(u)}(x_{0}x)$ and $L_{x}(\mathcal{L}^{(0)})$ are mutually independent.

It follows that the conditional law of $\mathcal{N}^{(u)}(x_{0}x)$ is the same as the conditional law of $U$ given

R_{*}+R_{1}+\cdots+R_{U}=\lambda(x),

where $U$ has the Poisson$(uC_{x_{0},x})$ distribution, $R_{*}$ has the Gamma$(\alpha,C_{x,x_{0}})$ distribution, $R_{1},R_{2},\cdots$ have the Exponential$(C_{x,x_{0}})$ distribution, and $U,R_{*},R_{1},R_{2},\cdots$ are mutually independent. By directly writing the density of $R_{*}+R_{1}+\cdots+R_{U}$ and then conditioning on the sum being $\lambda(x)$, we readily obtain that the conditional distribution is Bessel$(\alpha-1,2C^{*}_{x_{0}x}\sqrt{u\lambda(x)})$ (see also [8, §2.7]). We have thus proved the proposition.
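
For the reader's convenience, we spell out the computation behind the last step. Since $R_{*}+R_{1}+\cdots+R_{n}$ is Gamma$(n+\alpha,C_{x,x_{0}})$-distributed, for $n\in\mathbb{N}$,

\mathbb{P}\left(U=n\mid R_{*}+R_{1}+\cdots+R_{U}=\lambda(x)\right)\propto e^{-uC_{x_{0},x}}\frac{(uC_{x_{0},x})^{n}}{n!}\cdot\frac{C_{x,x_{0}}^{n+\alpha}\,\lambda(x)^{n+\alpha-1}e^{-C_{x,x_{0}}\lambda(x)}}{\Gamma(n+\alpha)}\propto\frac{\big(C^{*}_{x_{0}x}\sqrt{u\lambda(x)}\big)^{2n}}{n!\,\Gamma(n+\alpha)},

which, once normalized, is exactly the weight $b_{\alpha-1,z}(n)$ of (3.1) with $z=2C^{*}_{x_{0}x}\sqrt{u\lambda(x)}$.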

3.1.2 Proof of Theorem 3.3

The recovery of $X_{[0,\tau_{u}]}$ given $\phi^{(u)}$ and $\mathcal{N}^{(u)}$ is carried out in the following two steps: (1) reconstruct the jump chain of $X_{[0,\tau_{u}]}$ conditioned on $\phi^{(u)}$ and $\mathcal{N}^{(u)}$; (2) assign the holding times before every jump conditioned on $\phi^{(u)}$, $\mathcal{N}^{(u)}$ and the jump chain. We shall prove that the process recovered by these two steps is exactly the vertex-edge repelling jump process.

For step (1), it is easy to see that the conditional law of the jump chain actually depends only on $\mathcal{N}^{(u)}$. Moreover, let $\bar{X}$ be distributed as the jump chain of $X$. For $k\in\mathbb{Z}^{+}$, let $\bar{\tau}_{k}:=\inf\{l>0:\#\{1\leq j\leq l:\bar{X}_{j}=x_{0}\}=k\}$. Fixing a sourceless network $n$, set $m=n(x_{0})$. We consider $\bar{X}$ up to $\bar{\tau}_{m}$. Let $\bar{\mathcal{L}}^{(0)}$ have the law of the discrete loop soup induced by $\mathcal{L}^{(0)}$, and be independent of $\bar{X}$. Set $\bar{\mathcal{N}}:=\theta(\bar{X}_{[0,\bar{\tau}_{m}]})+\theta(\bar{\mathcal{L}}^{(0)})$, where $\theta(\bar{X}_{[0,\bar{\tau}_{m}]})$ and $\theta(\bar{\mathcal{L}}^{(0)})$ have the obvious meaning. Denote by $\bar{X}^{n}$ the process $\bar{X}$ conditioned on $\bar{\mathcal{N}}=n$. Then $\bar{X}^{n}$ has the same law as the jump chain of $X_{[0,\tau_{u}]}$ conditioned on $\mathcal{N}^{(u)}=n$. The following proposition plays a central role in the inversion. We present a combinatorial proof later.

Proposition 3.4.

The law of $\bar{X}^{n}$ can be described as follows. It starts from $x_{0}$. Conditionally on $\left(\bar{X}^{n}_{k}:0\leq k\leq l\right)$ with $l<T^{\bar{X}^{n}}$ and $\bar{X}^{n}_{l}=x$,

  • if $\widecheck{\Theta}_{l}(x)>0$, it jumps to a neighbour $y$ of $x$ with probability

\frac{\widecheck{\Theta}_{l}(xy)}{\widecheck{\Theta}_{l}(x)};

  • if $\widecheck{\Theta}_{l}(x)=0$ and $x\neq x_{0}$, it jumps to the parent of $x$ (this can only happen when $\alpha=0$).

Finally, $\bar{X}^{n}$ stops at time $T^{\bar{X}^{n}}$. Here

\Theta_{l}(xy)=\Theta_{l}(n,\bar{X}^{n}_{[0,l]})(xy):=n(xy)-\#\{0\leq k\leq l-1:\bar{X}^{n}_{k}=x,\,\bar{X}^{n}_{k+1}=y\},
T^{\bar{X}^{n}}:=\inf\{k\geq 0:\Theta_{k}(x_{0})=0\}.
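
As a toy illustration (our own example, with $\alpha=0$): on the path $x_{0}\sim x_{1}\sim x_{2}$ with $n(x_{0}x_{1})=n(x_{1}x_{0})=2$ and $n(x_{1}x_{2})=n(x_{2}x_{1})=1$, the chain first jumps to $x_{1}$, where $\widecheck{\Theta}_{1}(x_{1}x_{0})=2-1=1$ and $\widecheck{\Theta}_{1}(x_{1}x_{2})=1$, so it continues to $x_{2}$ or returns to $x_{0}$ with probability $1/2$ each; all its subsequent moves are then forced by the two rules above, and $x_{0}x_{1}x_{2}x_{1}x_{0}x_{1}x_{0}$ and $x_{0}x_{1}x_{0}x_{1}x_{2}x_{1}x_{0}$ are indeed the two equally likely trajectories consistent with $n$.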

We now turn to step (2), i.e. recovering the jump times.

Proposition 3.5.

Given $\phi^{(u)}=\lambda$, $\mathcal{N}^{(u)}=n$ and the jump chain of $X_{[0,\tau_{u}]}$, denote by $r(x)$ the number of visits to $x$ by the jump chain and by $h_{i}^{x}$ the $i$-th holding time at vertex $x$. Note that $r(x)=n(x)$ for each $x$ in the case of $\alpha=0$. The following hold:

(i) $\big(\sum_{j=1}^{i}h^{x_{0}}_{j}:1\leq i\leq n(x_{0})\big)$ has the same law as $n(x_{0})$ i.i.d. uniform random variables on $[0,u]$ ranked in ascending order, and $h^{x_{0}}_{n(x_{0})+1}=u-\sum_{j=1}^{n(x_{0})}h^{x_{0}}_{j}$;

(ii) for any $x\,(\neq x_{0})$ visited by $X_{[0,\tau_{u}]}$, $\left(\frac{h^{x}_{1}}{\lambda(x)},\cdots,\frac{h^{x}_{r(x)}}{\lambda(x)},1-\frac{\sum_{j=1}^{r(x)}h^{x}_{j}}{\lambda(x)}\right)$ follows the Dirichlet distribution with parameter $(1,\cdots,1,n(x)-r(x)+\alpha)$.

Here the $m$-variable Dirichlet distribution with parameter $(1,\cdots,1,0)$ is interpreted as the $(m-1)$-variable Dirichlet distribution with parameter $(1,\cdots,1)$.

Proof.

$\Big\{\sum_{j=1}^{i}h^{x_{0}}_{j}:1\leq i\leq n(x_{0})\Big\}$ is distributed as the jump times of a Poisson process on $[0,u]$ conditioned on there being exactly $n(x_{0})$ jumps during this time interval. This proves (i).

For $x\neq x_{0}$, $\left\{h^{x}_{j}:1\leq j\leq r(x)\right\}$ has the law of $r(x)$ i.i.d. exponential variables with parameter $C_{x}=k_{x}+\sum_{y:y\sim x}C_{xy}$ conditioned on the sum of them and an independent Gamma$(n(x)-r(x)+\alpha,C_{x})$ random variable being $\lambda(x)$ (recall that a Gamma$(0,\beta)$ r.v. is identically equal to $0$). Here we use the fact that every visit, either by $X_{[0,\tau_{u}]}$ or by $\mathcal{L}^{(0)}$, is accompanied by an exponential holding time with parameter $C_{x}$, and that the accumulated local time of the one-point loops at $x$ provides a Gamma$(\alpha,C_{x})$ time duration. ∎
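
Proposition 3.5 (ii) also has a convenient sequential form: by the standard stick-breaking decomposition of the Dirichlet distribution, $h^{x}_{i}$ is a Beta$(1,n(x)+\alpha-i)$ fraction of the local time still unassigned at $x$. A minimal Monte Carlo sanity check of this equivalence (Python with numpy; the toy parameter values and all names are ours):

import numpy as np

rng = np.random.default_rng(0)
lam_x, r, n_x, alpha, N = 2.0, 3, 5, 0.5, 100_000    # toy values for some x != x0
# Proposition 3.5 (ii) directly:
direct = lam_x * rng.dirichlet([1.0] * r + [n_x - r + alpha], size=N)[:, :r]
# Sequential stick-breaking form of the same Dirichlet law:
seq = np.empty((N, r))
for k in range(N):
    left = lam_x
    for i in range(1, r + 1):
        h = left * rng.beta(1.0, n_x + alpha - i)    # Beta(1, n(x) + alpha - i)
        seq[k, i - 1] = h
        left -= h
print(direct.mean(axis=0), seq.mean(axis=0))         # the two should agree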

Propositions 3.4 and 3.5 give a representation of the inversion of the Ray-Knight identity in terms of its jump chain and holding times. Using this, we can readily calculate the jump rates.

Proof of Theorem 3.3.

It suffices to show that the jump chain and holding times of $X^{\lambda,n}$ are given by Propositions 3.4 and 3.5 respectively. As shown in the proof of Theorem A.6, $X^{\lambda,n}$ can be realized by a sequence of i.i.d. exponential random variables. It is direct from this realization that the jump chain of $X^{\lambda,n}$ coincides with $\bar{X}^{n}$ in Proposition 3.4. To see this, one only needs to note that for any fixed $t\geq 0$ and $\omega\in\Omega$ with $T^{\omega}>t$ and $\omega_{t}=x$, the jump rate $r^{\lambda,n}_{t}(x,y,\omega)$ is proportional to $\widecheck{\Theta}_{t}(n,\omega)(xy)$ for $y\sim x$. So it remains to check the holding times. We consider $X_{[0,\tau_{u}]}$ given $\phi^{(u)}=\lambda$ and $\mathcal{N}^{(u)}=n$. Let $h^{x}_{i}$ be the $i$-th holding time at $x$. For $1\leq i\leq n(x)$, set $l^{x}_{i}:=\sum_{j=1}^{i}h^{x}_{j}$. By Proposition 3.5, conditionally on all the preceding holding times, $h^{x}_{i+1}/(\lambda(x)-l^{x}_{i})$ follows a Beta$(1,\widecheck{n}(x)-i)$ distribution. We can readily check that

(\lambda(x)-l^{x}_{i})\cdot\text{Beta}(1,\widecheck{n}(x)-i)\overset{d}{=}\big(u_{x}^{(i)}\big)^{-1}(\gamma),

where $u_{x}^{(i)}(t)=\int_{0}^{t}\frac{\widecheck{n}(x)-i}{\lambda(x)-l^{x}_{i}-s}\,\mathrm{d}s$ ($0\leq t<\lambda(x)-l^{x}_{i}$) and $\gamma$ is an exponential random variable with parameter $1$. These are exactly the holding times of $X^{\lambda,n}$. We have thus proved the theorem. ∎

The remaining part is devoted to the proof of Proposition 3.4.

Case of $\alpha=0$. Condition on $\left(\bar{X}^{n}_{k}:0\leq k\leq l\right)$ with $l<T^{\bar{X}^{n}}$ and $\bar{X}^{n}_{l}=x$. The remaining path of $\bar{X}$ is completed by uniformly choosing a path with edge crossings $\Theta_{l}$, since the conditional probability of any such path is the same, equal to

\prod_{yz\in\vec{E}}(C_{yz}/C_{y})^{\Theta_{l}(yz)}.

So for the probability of the next jump, it suffices to count, for any $y\sim x$, the number of possible paths with edge crossings $\Theta_{l}$ whose first jump is to $y$. Here we use the same idea as in the proof of [9, Proposition 2.1]. This number equals the number of relative orders of exiting each vertex satisfying: (1) the first exit from $x$ is to $y$; (2) the last exit from any $z\neq x_{0}$ is to $\mathfrak{p}(z)$. In particular, it is proportional to the number of relative orders of exiting $x$ satisfying the above conditions at the vertex $x$, which equals

\left\{\begin{array}{ll}\frac{\left(\Theta_{l}(x)-1-1_{\{x\neq x_{0}\}}\right)!}{\prod_{z:z\sim x}\left(\Theta_{l}(xz)-1_{\{z=\mathfrak{p}(x)\}}-1_{\{z=y\}}\right)!},&\text{ if }\Theta_{l}(x)\geq 2;\\ 1_{\{y=\mathfrak{p}(x)\}},&\text{ if }\Theta_{l}(x)=1,\end{array}\right.

where $(-1)!:=\infty$. It follows that the conditional probability that $\bar{X}^{n}_{l+1}=y$ is

\begin{cases}\dfrac{\widecheck{\Theta}_{l}(xy)}{\widecheck{\Theta}_{l}(x)},&\text{ if }\Theta_{l}(x)\geq 2;\\ 1_{\{y=\mathfrak{p}(x)\}},&\text{ if }\Theta_{l}(x)=1.\end{cases}   (3.5)

Case of $\alpha>0$. (1) The conditional transition probability at $x_{0}$. We will first deal with the conditional transition probability given $\left(\bar{X}^{n}_{k}:0\leq k\leq l\right)$ with $l<T^{\bar{X}^{n}}$ and $\bar{X}^{n}_{l}=x_{0}$. We further condition on the remaining crossings of $\bar{X}^{n}$, i.e. $\theta^{\prime}:=\theta(\bar{X}^{n}_{[l,T^{\bar{X}^{n}}]})$. Note that $\Theta_{l}=\theta^{\prime}+\theta(\bar{\mathcal{L}}^{(0)})$. In particular, it holds that $\theta^{\prime}(x_{0}y)=\Theta_{l}(x_{0}y)$ for any $y\sim x_{0}$. By (3.5), the conditional probability that $\bar{X}^{n}_{l+1}=y$ is

\dfrac{\theta^{\prime}(x_{0}y)}{\theta^{\prime}(x_{0})}=\dfrac{\Theta_{l}(x_{0}y)}{\Theta_{l}(x_{0})},

which is independent of the further condition and hence gives the conclusion.

(2) The conditional transition probability at $x\neq x_{0}$. Recall that the law of the Ray-Knight triple is independent of $k_{x_{0}}$. In this case, it is easier to consider the process $X$ killed at $x_{0}$ with rate $1$. With an abuse of notation, we still use $X$ and $\mathcal{L}$ to denote this process and its associated loop soup respectively (such notations are only used in this part). The main idea is that the recovery of the Markovian path can be viewed as the recovery of the discrete loop soup instead. We choose to work on the extended graph $\mathbb{T}^{K}$ to make sure that the probability of a loop configuration is proportional to $\alpha^{\#\text{loops in the configuration}}$ as $K\rightarrow\infty$. When choosing an outgoing crossing at $x$ right after a crossing to $x$, there is a unique choice that leads to one more loop in the configuration than the others, and this unique choice is always to $\mathfrak{p}(x)$. So the conditional transition probability from $x$ to $y$ is proportional to $\widecheck{\Theta}(xy)$. Let us make some preparations first.

(a) Concatenation process $\mathbf{L}$. First we give a representation of the path $\bar{X}^{n}$ as a concatenation of loops in a loop soup as follows. Let $\bar{\mathcal{L}}$ be the discrete loop soup associated to $\mathcal{L}$. We focus on the loops in $\bar{\mathcal{L}}$ that visit $x_{0}$. Denote by $\mathfrak{K}$ the number of such loops. For each of them, we uniformly and independently root it at one of its visits to $x_{0}$. Then we choose uniformly at random (among all $\mathfrak{K}!$ choices) an order for the rooted loops, labelled in order by $\{\bar{\gamma}_{i}:1\leq i\leq\mathfrak{K}\}$, and concatenate them:

\mathbf{L}:=\bar{\gamma}_{1}\circ\bar{\gamma}_{2}\circ\cdots\circ\bar{\gamma}_{\mathfrak{K}}.

We call $\mathbf{L}$ the concatenation process of $\bar{\mathcal{L}}$. It can easily be deduced from the properties of the discrete loop soup that the path between consecutive visits of $x_{0}$ in any $\bar{\gamma}_{i}$ has the same law as an excursion of $\bar{X}$ at $x_{0}$. Thus, conditionally on $\theta(\mathbf{L})(x_{0})=m$, $\mathbf{L}$ has the same law as $\bar{X}_{[0,\bar{\tau}_{m}]}$. Consequently, we have the following corollary.

Corollary 3.6.

Given a network $n\in\mathfrak{N}$, denote by $\mathbf{L}^{n}$ the process $\mathbf{L}$ conditioned on $\theta(\bar{\mathcal{L}})=n$. Then $\mathbf{L}^{n}$ has the same law as $\bar{X}^{n}$.

(b) Pairing on the extended graph. To further explore the law of $\mathbf{L}^{n}$, we need to introduce the extended graphs of $\mathbb{T}$ and the notion of ‘pairing’. Let $K\in\mathbb{Z}^{+}$. Replace each edge of $\mathbb{T}$ by $K$ copies. The graph thus obtained, denoted by $\mathbb{T}^{K}=(V,E^{K})$, is an extended graph (of $\mathbb{T}$). The collection of all directed edges in $\mathbb{T}^{K}$ is denoted by $\vec{E}^{K}$. The graph $\mathbb{T}^{K}$ is equipped with the killing measure $\delta_{x_{0}}$, the Dirac measure at $x_{0}$, and for any $x\sim y$, the conductance on each one of the $K$ directed edges from $x$ to $y$ is $C^{K}_{xy}:=C_{xy}/K$. Any element in $\mathbb{N}^{\vec{E}^{K}}$ is called a network on $\mathbb{T}^{K}$. We will use $N$ to denote a deterministic network on $\mathbb{T}^{K}$. For a network $N$ on $\mathbb{T}^{K}$, the projection of $N$ on $\mathbb{T}$ is the network on $\mathbb{T}$ defined by:

N^{\mathbb{T}}(xy):=\sum_{\vec{e}\in\vec{E}^{K}}N(\vec{e})1_{\{\vec{e}\text{ is from }x\text{ to }y\}}.

In the following, we only focus on networks $N\in\{0,1\}^{\vec{E}^{K}}$. For such a network $N$, denote $[N]:=\{\vec{e}\in\vec{E}^{K}:N(\vec{e})=1\}$. For simplicity, we omit the superscript ‘$\rightarrow$’ for directed edges throughout this part.

Definition 3.7.

Given a sourceless network $N\in\{0,1\}^{\vec{E}^{K}}$, a pairing of $N$ is defined to be a bijection from $[N]$ to $[N]$ such that for any $e\in[N]$, the image of $e$ is a directed edge whose tail is the head of $e$.

Given $N\in\{0,1\}^{\vec{E}^{K}}$, a pairing $b$ of $N$ and a subset $[N_{0}]$ of $[N]$, the restriction $b|_{[N_{0}]}=\{b(e):e\in[N_{0}]\}$ determines a set of loops and bridges. Precisely, for each $e\in[N_{0}]$, following the pairing on $e$, we arrive at a new (directed) edge $b(e)$. Continuing to keep track of the pairing on the new edge $b(e)$, we arrive at another edge $b(b(e))$. This procedure stops when we arrive either at the initial edge $e$ again, or at an edge in $[N]\setminus[N_{0}]$, since we lose the information about the pairing on $[N]\setminus[N_{0}]$. In the former case, a loop is obtained. In the latter case, we get a path whose first edge is $e$ and whose last edge is in $[N]\setminus[N_{0}]$. Any two such paths are either disjoint, or one is a part of the other. This naturally determines a partial order. All the maximal elements with respect to this partial order, together with the loops obtained in the former case, form the set of bridges and loops determined by $b|_{[N_{0}]}$.

Now we start the proof. Let XKX^{K} be the CTMC on 𝕋K{\mathbb{T}}^{K} starting from x0x_{0} induced by the conductances (CxyK)\left(C^{K}_{xy}\right) and killing measure δx0\delta_{x_{0}}. Consider the discrete loop soup ¯K\bar{\mathcal{L}}^{K} associated to XKX^{K}, the projection of which on 𝕋{\mathbb{T}} has the same law as ¯\bar{\mathcal{L}}. Let 𝐋K\mathbf{L}^{K} be its concatenation process. By Corollary 3.6, it suffices to consider the law of (the projection of) 𝐋K\mathbf{L}^{K} given (θ(¯K))𝕋=n\left(\theta(\bar{\mathcal{L}}^{K})\right)^{\mathbb{T}}=n.

Observe that conditionally on $(\mathbf{L}^{K}_{k}:0\leq k\leq l)$ with $\mathbf{L}^{K}_{l}=x\,(\neq x_{0})$, at the next step the process necessarily jumps along its present loop in $\bar{\mathcal{L}}^{K}$, which is a loop visiting both $x$ and $x_{0}$. Therefore, we focus on $\bar{\mathcal{L}}^{K,x}:=\{\gamma\in\bar{\mathcal{L}}^{K}:\gamma\text{ visits }x\}$. In the following, loop configurations always refer to configurations consisting of only loops visiting $x$. Note that for any $n\in\mathfrak{N}$, given $\big(\theta(\bar{\mathcal{L}}^{K})\big)^{\mathbb{T}}=n$, the probability that $\bar{\mathcal{L}}^{K}$ uses every edge at most once tends to $1$ as $K\rightarrow\infty$. So by standard arguments (see [22, 23]), it suffices to show that for any $N\in\{0,1\}^{\vec{E}^{K}}$ with $N^{\mathbb{T}}\leq n$ and $N^{\mathbb{T}}(xy)=n(xy)$ for any $y\sim x$, when further conditioned on $\theta(\bar{\mathcal{L}}^{K,x})=N$, the transition probability of $\mathbf{L}^{K}$ at $x$ is given by the statements in the proposition. Due to the independence of $\bar{\mathcal{L}}^{K,x}$ and $\bar{\mathcal{L}}^{K}\setminus\bar{\mathcal{L}}^{K,x}$, the transition probability at $x$ does not depend on the earlier conditioning $\big(\theta(\bar{\mathcal{L}}^{K})\big)^{\mathbb{T}}=n$.

A key observation is that conditionally on θ(¯K,x)=N\theta({\bar{\mathcal{L}}}^{K,x})=N, the probability of a loop configuration of ¯K,x\bar{\mathcal{L}}^{K,x} is proportional to α#loops in the configuration\alpha^{\#\text{loops in the configuration}}. In fact, ¯K,x\bar{\mathcal{L}}^{K,x} is a Poisson random measure on the space of loops visiting xx. The intensity of a loop is the product of the conductances of the edges divided by the multiplicity of the loop. Note that for any configuration \mathscr{L} with θ()=N\theta({\mathscr{L}})=N, the multiplicities of the loops in \mathscr{L} are all 11. Hence, the probability that ¯K,x=\bar{\mathcal{L}}^{K,x}=\mathscr{L} is

\[P^{x}_{\emptyset}\cdot\alpha^{\#\text{loops in }\mathscr{L}}\prod_{yz\in\vec{E}}\big(C_{yz}/(KC_{y})\big)^{N^{\mathbb{T}}(yz)},\]

where PxP^{x}_{\emptyset} is the probability that ¯K,x\bar{\mathcal{L}}^{K,x} is empty.

Let 𝔟\mathfrak{b} be the pairing of [N][N] induced by ¯K,x\bar{\mathcal{L}}^{K,x}. Namely, 𝔟(e)\mathfrak{b}(e) is defined to be the edge right after ee in the same loop in ¯K,x\bar{\mathcal{L}}^{K,x}. We are interested in the conditional law of 𝔟(𝐋l1K𝐋lK)\mathfrak{b}\left(\mathbf{L}^{K}_{l-1}\mathbf{L}^{K}_{l}\right) given the path 𝐋[0,l]K\mathbf{L}^{K}_{[0,l]}.

Let

\[\left\{\begin{aligned}
[N_{1}]&=\big\{e\in[N]:e\text{ is crossed by }\mathbf{L}^{K}_{[0,l-1]}\text{ and the tail of }e\text{ is }x\big\};\\
[N_{2}]&=\big\{e\in[N]:\text{the tail of }e\text{ is not }x\big\}.
\end{aligned}\right.\]
(a) the loops and bridges determined by 𝔟|[N1][N2]\mathfrak{b}|_{[N_{1}]\cup[N_{2}]}
(b) cut off loops at xx
Figure 1: (a) $\mathfrak{b}|_{[N_{1}]\cup[N_{2}]}$ determines some crossed loops (the first red loop) visiting $x$, one partly explored bridge (the second red bridge) from $x$ to $x$, and $(N_{l}^{\mathbb{T}}(x)-1)$ unexplored excursions (blue line on the right) from $x$ to $x$. Our aim is to find the missing piece right after $\mathbf{L}^{K}_{[s,l]}$ in the present unfinished loop. The loop is completed either by directly gluing the second red bridge, or by adding some other blue bridges in between. (b) For any loop in $\mathcal{L}^{K,x}$ that is not or only partly explored by $\mathbf{L}^{K}_{[0,l]}$, we cut the unexplored part of the loop at every visit to $x$ (including the visit at time $l$). This gives the bridges related to the loop.

Note that the path $\mathbf{L}^{K}_{[0,l]}$ already determines $\mathfrak{b}|_{[N_{1}]}$. We further condition on $\mathfrak{b}|_{[N_{2}]}$. Then $\mathfrak{b}|_{[N_{1}]\cup[N_{2}]}$ determines a set of loops and bridges from and to $x$ (see Figure 1(a)). The loops are exactly the loops in $\mathcal{L}^{K,x}$ that have been completely crossed by $\mathbf{L}^{K}_{[0,l]}$, and the bridges are the pieces obtained by cutting the remaining loops in $\mathcal{L}^{K,x}$ at their visits to $x$ (see Figure 1(b)). Denote $N_{l}(e)=1_{\{e\in[N]\text{ and }e\text{ is not crossed by }\mathbf{L}^{K}_{[0,l]}\}}$ for $e\in\vec{E}^{K}$. Then there are exactly $N^{\mathbb{T}}_{l}(x)$ bridges, among which, for any $y\sim x$, exactly $N^{\mathbb{T}}_{l}(xy)$ bridges have their first edge entering $y$. Moreover, there is exactly one bridge partly crossed by the path $\mathbf{L}^{K}_{[0,l]}$. This bridge is a part of the loop that $\mathbf{L}^{K}$ is walking along at time $l$, so it visits $x_{0}$ by the construction of $\mathbf{L}^{K}$, which implies that its first edge enters $\mathfrak{p}(x)$ due to the tree structure.

Now we focus on the conditional law of the pairing of these bridges, i.e. the law of 𝔟:=𝔟|[N]\mathfrak{b}^{*}:=\mathfrak{b}|_{[N^{*}]}, where [N]=[N]([N1][N2])[N^{*}]=[N]\setminus([N_{1}]\cup[N_{2}]) is the collection of the last edges in the above Nl𝕋(x)N^{\mathbb{T}}_{l}(x) bridges. Note that 𝔟([N])\mathfrak{b}^{*}([N^{*}]) consists of the first edges in these bridges. Denote [N]={e1,,er}[N^{*}]=\{e_{1},\cdots,e_{r}\} and 𝔟([N])={f1,,fr}\mathfrak{b}^{*}([N^{*}])=\{f_{1},\cdots,f_{r}\}, where r=Nl𝕋(x)r=N^{\mathbb{T}}_{l}(x) and we assign the subscripts such that

  • eie_{i} and fif_{i} are in the same bridge (i=1,,ri=1,\cdots,r);

  • e1=𝐋l1K𝐋lKe_{1}=\mathbf{L}^{K}_{l-1}\mathbf{L}^{K}_{l}. (So the bridge containing e1e_{1} and f1f_{1} is exactly the unique bridge partly crossed by 𝐋[0,l]K\mathbf{L}^{K}_{[0,l]}.)

The totality of bijections from {e1,,er}\{e_{1},\cdots,e_{r}\} to {f1,,fr}\{f_{1},\cdots,f_{r}\} is denoted by \mathcal{B}. Every bb\in\mathcal{B} pairs the bridges into a loop configuration. We simply call it the configuration completed by bb. It is easy to see that this defines a one-to-one correspondence between \mathcal{B} and all the possible configurations obtained by pairing these bridges.

Recall that the probability of a loop configuration of $\mathcal{L}^{K,x}$ is proportional to $\alpha^{\#\text{loops}}$. So the conditional probability that $\mathfrak{b}^{*}$ equals a fixed $b\in\mathcal{B}$ is proportional to $\alpha^{\#(b)}$, where $\#(b):=\#\{\text{loops in the configuration completed by }b\}$. Set $\mathcal{B}_{i}:=\{b\in\mathcal{B}:b(e_{1})=f_{i}\}$. For $1\leq i,j\leq r$ with $i\neq j$, a bijection $\Upsilon_{ij}$ from $\mathcal{B}_{i}$ to $\mathcal{B}_{j}$ can be defined as follows: for any $b\in\mathcal{B}_{i}$, $\Upsilon_{ij}(b)$ is obtained by exchanging the images of $e_{1}$ and $b^{-1}(f_{j})$. Precisely,

(Υij(b))(e):={fj, if e=e1;fi, if e=b1(fj);b(e), otherwise.\displaystyle\left(\Upsilon_{ij}(b)\right)(e):=\left\{\begin{array}[]{ll}f_{j},&\text{ if }e=e_{1};\\ f_{i},&\text{ if }e=b^{-1}(f_{j});\\ b(e),&\text{ otherwise}.\end{array}\right.

We can readily check that when $x\neq x_{0}$, for $b\in\mathcal{B}_{i}$,

{#(b)=#(Υij(b)), if i,j1;#(b)=#(Υij(b))+1, if i=1 and j1.\displaystyle\left\{\begin{array}[]{ll}\#(b)=\#(\Upsilon_{ij}(b)),&\text{ if $i,j\neq 1$;}\\ \#(b)=\#(\Upsilon_{ij}(b))+1,&\text{ if $i=1$ and $j\neq 1$}.\end{array}\right.

Hence, if we denote by $p_{i}$ the conditional probability that $\mathfrak{b}^{*}\in\mathcal{B}_{i}$, then $p_{i}=p_{j}$ for $i,j\neq 1$, and $p_{1}=\alpha p_{j}$ for $j\neq 1$. Thus,

{p1=α/(r+α1);pi=1/(r+α1), for i1.\displaystyle\left\{\begin{aligned} p_{1}&=\alpha/(r+\alpha-1);\\ p_{i}&=1/(r+\alpha-1),\text{ for }i\neq 1.\end{aligned}\right.

It follows that the conditional probability that $\mathbf{L}^{K}_{l+1}=y$ is

\[\sum_{i=1}^{r}1_{\{\text{the head of }f_{i}\text{ is }y\}}\cdot p_{i}=\dfrac{\widecheck{N}^{\mathbb{T}}_{l}(xy)}{\widecheck{N}^{\mathbb{T}}_{l}(x)}=\dfrac{\widecheck{\Theta}_{l}(n,\mathbf{L}^{K}_{[0,l]})(xy)}{\widecheck{\Theta}_{l}(n,\mathbf{L}^{K}_{[0,l]})(x)},\]

where the path $\mathbf{L}^{K}_{[0,l]}$ is understood as its projection on ${\mathbb{T}}$, and the first equality is due to the fact that $f_{1}$ enters $\mathfrak{p}(x)$. The conditional probability does not depend on the additional conditioning on $\mathfrak{b}|_{[N_{2}]}$. This completes the proof.
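To illustrate the transition mechanism just derived, the following sketch (Python) samples the image $\mathfrak{b}^{*}(e_{1})$: weight $\alpha$ for $f_{1}$ (completing the present loop) and weight $1$ for each of the other $r-1$ bridges. The function name and the final frequency check are ours.

```python
import random

def sample_next_bridge(r, alpha, rng=random):
    """Sample i with b*(e_1) = f_i: weight alpha for i = 1 (completing the
    present loop through f_1) and weight 1 for each other bridge, so that
    p_1 = alpha/(r + alpha - 1) and p_i = 1/(r + alpha - 1) for i != 1."""
    weights = [alpha] + [1.0] * (r - 1)
    u = rng.random() * sum(weights)
    for i, w in enumerate(weights, start=1):
        u -= w
        if u < 0:
            return i
    return r

counts = [sample_next_bridge(r=3, alpha=0.5) for _ in range(100_000)]
print([counts.count(i) / len(counts) for i in (1, 2, 3)])  # ~ [0.2, 0.4, 0.4]
```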

3.2 The representation of the inversion as a vertex repelling jump process

Let 𝒩λ\mathcal{N}^{\lambda} be a sourceless α\alpha-random network associated to λ\lambda as in Definition 3.1 and XλX^{\lambda} be a process distributed, conditionally on 𝒩λ=n\mathcal{N}^{\lambda}=n, as Xλ,nX^{\lambda,n}. By Proposition 3.2 and Theorem 3.3, XλX^{\lambda} has the law of X[0,τu]X_{[0,\tau_{u}]} conditioned on ϕ(u)=λ\phi^{(u)}=\lambda. The goal of this subsection is to show the following proposition, so as to obtain Theorem 2.8. Recall the distribution x0λ\mathbb{P}^{\lambda}_{x_{0}} introduced in §2.3.1.

Proposition 3.8.

For any λ\lambda\in\mathfrak{R}, XλX^{\lambda} has the law x0λ\mathbb{P}^{\lambda}_{x_{0}}.

In other words, XλX^{\lambda} is a jump process on VV with jump rates given by (2.5). Recall the definition of Λt\Lambda_{t} and Θt\Theta_{t} in (2.4) and (3.2) respectively. The key to Proposition 3.8 is the following lemma, which reveals a renewal property of the remaining crossings of XλX^{\lambda}, i.e. Θt(𝒩λ,X[0,t]λ)=Θt(n,X[0,t]λ)|n=𝒩λ\Theta_{t}({\mathcal{N}^{\lambda}},X^{\lambda}_{[0,t]})=\Theta_{t}(n,X^{\lambda}_{[0,t]})\big{|}_{n=\mathcal{N}^{\lambda}}. The proof is given in §3.2.1.

Lemma 3.9.

For any λ\lambda\in\mathfrak{R}, conditionally on t<TXλt<T^{X^{\lambda}} and (Xsλ:0st)\left(X^{\lambda}_{s}:0\leq s\leq t\right), the network Θt(𝒩λ,X[0,t]λ)\Theta_{t}(\mathcal{N}^{\lambda},X^{\lambda}_{[0,t]}) is an α\alpha-random network with sources (x0,Xtλ)(x_{0},X^{\lambda}_{t}) associated to Λt\allowbreak\Lambda_{t}.

Remark. In the statement of the lemma and below, a network with sources $(x_{0},x_{0})$ is to be understood as a sourceless network.

Proof of Proposition 3.8.

By Lemma 3.9, for λ\lambda\in\mathfrak{R}, conditionally on t<TXλt<T^{X^{\lambda}} and (Xsλ:0st)\left(X^{\lambda}_{s}:0\leq s\leq t\right) with Xtλ=xX^{\lambda}_{t}=x, for any yxy\sim x,

  • if x=𝔭(y)x=\mathfrak{p}(y), Θt(𝒩λ)(xy)\Theta_{t}(\mathcal{N}^{\lambda})(xy) follows a Bessel (α1\alpha-1, φt(xy)\varphi_{t}(xy)) distribution;

  • if y=𝔭(x)y=\mathfrak{p}(x), Θt(𝒩λ)(xy)1\Theta_{t}(\mathcal{N}^{\lambda})(xy)-1 follows a Bessel (α\alpha, φt(xy)\varphi_{t}(xy)) distribution.

Note that if further conditioned on Θt(𝒩λ)\Theta_{t}(\mathcal{N}^{\lambda}), the process jumps to yy at time tt with rate

Θwidecheckt(𝒩λ)(xy)Λt(x).\dfrac{\widecheck{\Theta}_{t}(\mathcal{N}^{\lambda})(xy)}{\Lambda_{t}(x)}.

So using Corollary A.10 and averaging over $\Theta_{t}(\mathcal{N}^{\lambda})(xy)$, we get that the probability of a jump of $X^{\lambda}$ from $x$ to $y$ during $[t,t+\Delta t]$ is

{(Iα(φt(xy))1k0k+αΛt(x)(φt(xy)/2)2k+αk!Γ(k+α+1))Δt+o(Δt), if y=𝔭(x);(Iα1(φt(xy))1k0kΛt(x)(φt(xy)/2)2k+α1k!Γ(k+α))Δt+o(Δt), if x=𝔭(y),\displaystyle\begin{cases}\Big{(}I_{\alpha}\left(\varphi_{t}(xy)\right)^{-1}\sum_{k\geq 0}\dfrac{k+\alpha}{\Lambda_{t}(x)}\dfrac{\left(\varphi_{t}(xy)/2\right)^{2k+\alpha}}{k!\cdot\Gamma(k+\alpha+1)}\Big{)}\Delta t+o(\Delta t),&\text{ if $y=\mathfrak{p}(x)$};\\ \Big{(}I_{\alpha-1}\left(\varphi_{t}(xy)\right)^{-1}\sum_{k\geq 0}\dfrac{k}{\Lambda_{t}(x)}\dfrac{\left(\varphi_{t}(xy)/2\right)^{2k+\alpha-1}}{k!\cdot\Gamma(k+\alpha)}\Big{)}\Delta t+o(\Delta t),&\text{ if $x=\mathfrak{p}(y)$},\\ \end{cases}

which gives the jump rates (2.5). ∎
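The averaging step can be checked numerically: the two series above sum to $(\varphi_{t}(xy)/2)I_{\alpha-1}(\varphi_{t}(xy))$ and $(\varphi_{t}(xy)/2)I_{\alpha}(\varphi_{t}(xy))$ respectively, and $(\varphi_{t}(xy)/2)/\Lambda_{t}(x)=C_{xy}\sqrt{\Lambda_{t}(y)/\Lambda_{t}(x)}$, which is how (2.5) emerges. A sketch of the verification (Python with SciPy; the parameter values are arbitrary):

```python
import math
from scipy.special import iv

def check_bessel_averages(alpha=0.7, z=1.3, kmax=60):
    """Verify the two series identities behind the displayed rates:
    sum_k (k+alpha) (z/2)^{2k+alpha}   / (k! Gamma(k+alpha+1)) = (z/2) I_{alpha-1}(z),
    sum_k  k        (z/2)^{2k+alpha-1} / (k! Gamma(k+alpha))   = (z/2) I_alpha(z)."""
    s1 = sum((k + alpha) * (z / 2) ** (2 * k + alpha)
             / (math.factorial(k) * math.gamma(k + alpha + 1)) for k in range(kmax))
    s2 = sum(k * (z / 2) ** (2 * k + alpha - 1)
             / (math.factorial(k) * math.gamma(k + alpha)) for k in range(kmax))
    assert abs(s1 - (z / 2) * iv(alpha - 1, z)) < 1e-12
    assert abs(s2 - (z / 2) * iv(alpha, z)) < 1e-12

check_bessel_averages()
```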

3.2.1 Proof of Lemma 3.9

For any xVx\in V, set

x:={λ:λ(x)>0},𝔑x:={n𝔑:n=(x0,x)}.\displaystyle\mathfrak{R}^{x}:=\{\lambda\in\mathfrak{R}:\lambda(x)>0\},\ \mathfrak{N}^{x}:=\{n\in\mathfrak{N}:\partial n=(x_{0},x)\}. (3.6)

In particular, $\mathfrak{R}=\mathfrak{R}^{x_{0}}$ and $\mathfrak{N}=\mathfrak{N}^{x_{0}}$. First let us generalize the notations $X^{\lambda}$, $X^{\lambda,n}$ and $\mathcal{N}^{\lambda}$. Forgetting the original definitions, we construct a family of random processes $\big\{X^{\lambda,n}_{[0,T^{X^{\lambda,n}}]}:\lambda\in\mathfrak{R},\,n\in\bigcup_{x\in V}\mathfrak{N}^{x}\big\}$, a family of random networks $\{\mathcal{N}^{\lambda}:\lambda\in\mathfrak{R}\}$ and a family of probability measures $\{{\mathbb{P}}_{x}:x\in V\}$ on the same measurable space, such that for any $x\in V$, $\lambda\in\mathfrak{R}^{x}$ and $n\in\mathfrak{N}^{x}$, under ${\mathbb{P}}_{x}$,

  • $X^{\lambda,n}$ is a process that starts at $x$, has the jump rates $r^{\lambda,n}_{t}$ (as defined in (3.3)) and the same resurrect mechanism as the vertex-edge repelling jump process, and stops at $T^{X^{\lambda,n}}$, the time when the process exhausts the local time at $x_{0}$ or explodes;

  • 𝒩λ\mathcal{N}^{\lambda} is an α\alpha-random network with sources (x0,x)(x_{0},x) associated to λ\lambda;

  • XλX^{\lambda} is a process distributed, conditionally on 𝒩λ=n\mathcal{N}^{\lambda}=n, as Xλ,nX^{\lambda,n}.

It is easy to see that for λ\lambda\in\mathfrak{R} and n𝔑n\in\mathfrak{N}, under x0{\mathbb{P}}_{x_{0}}, XλX^{\lambda}, Xλ,nX^{\lambda,n} and 𝒩λ\mathcal{N}^{\lambda} are consistent with the original definition.

We start the proof with an observation. To emphasize the tree where XλX^{\lambda} is defined, let us write Xλ=Xλ,𝕋X^{\lambda}=X^{\lambda,{\mathbb{T}}}. Any connected subgraph 𝕋0=(V0,E0){\mathbb{T}}_{0}=(V_{0},E_{0}) containing x0x_{0} is automatically equipped with conductances (Ce)eE0(C_{e})_{e\in E_{0}} and no killing. The induced CTMC is exactly the print of XX on V0V_{0} (recall (3.4)). The following restriction principle is a simple consequence of Theorem 3.3 and the excursion theory.

Proposition 3.10 (Restriction principle).

For any connected subgraph 𝕋0=(V0,E0){\mathbb{T}}_{0}=(V_{0},E_{0}) containing x0x_{0}, the print of Xλ,𝕋X^{\lambda,{\mathbb{T}}} on V0V_{0} (similarly defined as (3.4)) has the same law as Xλ|𝕋0,𝕋0X^{\lambda|_{{\mathbb{T}}_{0}},{\mathbb{T}}_{0}}.

Observe that by the restriction principle, it suffices to tackle the case where 𝕋{\mathbb{T}} is finite, which we will assume henceforth.

For $\lambda\in\mathfrak{R}^{x}$, consider $X^{\lambda}$ under ${\mathbb{P}}_{x}$. In the following, we simply write $\Theta_{t}(\mathcal{N}^{\lambda})$ for $\Theta_{t}(\mathcal{N}^{\lambda},X^{\lambda}_{[0,t]})$ and denote $J_{1}=J_{1}^{X^{\lambda}}$. For $0<t\leq\lambda(x)$, let $\lambda^{t}(y):=\lambda(y)-t\cdot 1_{\{y=x\}}$ for $y\in V$. We will show that under ${\mathbb{P}}_{x}$:

(i) for any $0<t\leq\lambda(x)$, conditionally on $X^{\lambda}_{s}=x$ for all $s\in[0,t]$, $\Theta_{t}(\mathcal{N}^{\lambda})$ is an $\alpha$-random network with sources $(x_{0},x)$ associated to $\lambda^{t}$;

(ii) for any $y\sim x$, conditionally on $J_{1}\leq\lambda(x)$ and $X^{\lambda}_{J_{1}}=y$, $\Theta_{J_{1}}(\mathcal{N}^{\lambda})$ is an $\alpha$-random network with sources $(x_{0},y)$ associated to $\lambda^{J_{1}}$.

Note that once (i) and (ii) are proved, we have, conditionally on a stay or jump at the beginning of $X^{\lambda}$:

(a) the remaining crossings $\Theta_{t}(\mathcal{N}^{\lambda})$ are distributed as an $\alpha$-random network associated to the remaining local time field;

(b) the process in the future is distributed as $X^{\lambda^{\prime}}$ under ${\mathbb{P}}_{y}$, where $\lambda^{\prime}$ is the remaining local time and $y$ equals $x$ or the vertex it jumps to accordingly. In fact, it is simple to deduce from the strong renewal property of $X^{\lambda,n}$ (see Corollary A.9) an analogous property for $X^{\lambda}$ that reads as follows: for any stopping time $S$, conditionally on $S<T^{X^{\lambda}}$ and $(X^{\lambda}_{t}:0\leq t\leq S)$ with $X^{\lambda}_{S}=y$ and $\Lambda_{S}(\cdot,X^{\lambda}_{[0,S]})=\lambda^{\prime}$, the process after $S$, i.e. $(X^{\lambda}_{S+t}:0\leq t\leq T^{X^{\lambda}}-S)$, has the same law as $X^{\lambda^{\prime},\mathcal{N}^{\prime}}$ under ${\mathbb{P}}_{y}$, where $\mathcal{N}^{\prime}$ is a random network following the conditional distribution of $\Theta_{S}(\mathcal{N}^{\lambda})$. Then the statement follows from (a).

Under ${\mathbb{P}}_{x_{0}}$, iteratively using (a) and (b), we see that after a chain of stays or jumps, $\Theta_{t}(\mathcal{N}^{\lambda})$ remains distributed as an $\alpha$-random network associated to the remaining local time, which leads to the conclusion.

We present the proof of (ii), and the proof of (i) is similar. First consider the case of J1=λ(x)J_{1}=\lambda(x). Notice that this event has a positive probability under x{\mathbb{P}}_{x} only when α=0\alpha=0, xx0x\neq x_{0}, y=𝔭(x)y=\mathfrak{p}(x) and 𝒩λ(x)=𝒩λ(xy)=1\mathcal{N}^{\lambda}(x)=\mathcal{N}^{\lambda}(xy)=1. In this case, 𝒩λ\mathcal{N}^{\lambda} is a 0-random network with sources (x0,x)(x_{0},x) associated to λ\lambda conditioned on 𝒩λ(x)=1\mathcal{N}^{\lambda}(x)=1. Since ΘJ1(𝒩λ)(ij)=𝒩λ(ij)1{ij=xy}\Theta_{J_{1}}(\mathcal{N}^{\lambda})(ij)=\mathcal{N}^{\lambda}(ij)-1_{\{ij=xy\}} for ijEij\in\vec{E}, it is easily seen that ΘJ1(𝒩λ)\Theta_{J_{1}}(\mathcal{N}^{\lambda}) is a 0-random network with sources (x0,x)(x_{0},x) associated to λJ1\lambda^{J_{1}}.

Now we focus on the case where J1<λ(x)J_{1}<\lambda(x). Recall the α\alpha-random network defined in Definition 3.1. A simple calculation shows that the law of an α\alpha-random network with sources (x0,i)(x_{0},i) associated to λ\lambda is given by:

Kλ,i(n)=1{n=(x0,i)}σαi(λ)1λ(x0)λ(i)xVλ(x)n(x)+α12deg(x)xyE(Cxy)nwidecheck(xy)Γ(nwidecheck(xy)+1),\displaystyle\begin{split}K^{\lambda,i}(n)&=1_{\{\partial n=(x_{0},i)\}}\cdot\sigma^{i}_{\alpha}(\lambda)^{-1}\sqrt{\dfrac{\lambda(x_{0})}{\lambda(i)}}\cdot\prod_{x\in V}\lambda(x)^{n(x)+\frac{\alpha-1}{2}\text{deg}(x)}\prod_{xy\in\vec{E}}\frac{\left(C^{*}_{xy}\right)^{\widecheck{n}(xy)}}{\Gamma(\widecheck{n}(xy)+1)},\end{split}

where deg(x):=#{yV:yx}\text{deg}(x):=\#\{y\in V:y\sim x\} and

\[\sigma^{i}_{\alpha}(\lambda):=\prod_{\{x,y\}\in\mathfrak{p}(x_{0},i)}I_{\alpha}\big(2C^{*}_{xy}\sqrt{\lambda(x)\lambda(y)}\big)\prod_{\{x,y\}\in E\setminus\mathfrak{p}(x_{0},i)}I_{\alpha-1}\big(2C^{*}_{xy}\sqrt{\lambda(x)\lambda(y)}\big).\]

For $r\in\mathfrak{N}^{y}$, set $r^{\prime}(ij)=r(ij)+1_{\{ij=xy\}}$ for $ij\in\vec{E}$. Then for any Borel subset $D\subset(0,u)$,

x(J1D,XJ1λ=y,ΘJ1(𝒩λ)=r)=Kλ,x(r)x(J1(Xλ,r)D,XJ1(Xλ,r)λ,r=y)=Kλ,x(r)D(λs(x)λ(x))rwidecheck(x)rwidecheck(xy)λs(x)ds=DC(λ,s)Kλs,y(r)ds,\displaystyle\begin{aligned} &\quad\,{\mathbb{P}}_{x}\left(J_{1}\in D,\,X^{\lambda}_{J_{1}}=y,\,\Theta_{J_{1}}(\mathcal{N}^{\lambda})=r\right)\\ &=K^{\lambda,x}(r^{\prime})\cdot{\mathbb{P}}_{x}\Big{(}J_{1}(X^{\lambda,r^{\prime}})\in D,\,X^{\lambda,r^{\prime}}_{J_{1}(X^{\lambda,r^{\prime}})}=y\Big{)}\\ &=K^{\lambda,x}(r^{\prime})\int_{D}\left(\dfrac{\lambda^{s}(x)}{\lambda(x)}\right)^{\widecheck{r^{\prime}}(x)}\dfrac{\widecheck{r^{\prime}}(xy)}{\lambda^{s}(x)}\,\mathrm{d}s=\int_{D}C(\lambda,s)K^{\lambda^{s},y}(r)\,\mathrm{d}s,\end{aligned} (3.7)

where the second equality is due to (A.1) and in the last expression,

C(λ,s):=Kλ,x(r)Kλs,y(r)(λs(x)λ(x))rwidecheck(x)rwidecheck(xy)λs(x)=σαy(λs)σαx(λ)λ(x0)λ(y)λs(x0)λ(x)(λ(x)λs(x))(1α)1{xx0},C(\lambda,s):=\dfrac{K^{\lambda,x}(r^{\prime})}{K^{\lambda^{s},y}(r)}\left(\dfrac{\lambda^{s}(x)}{\lambda(x)}\right)^{\widecheck{r^{\prime}}(x)}\dfrac{\widecheck{r^{\prime}}(xy)}{\lambda^{s}(x)}=\dfrac{\sigma^{y}_{\alpha}(\lambda^{s})}{\sigma^{x}_{\alpha}(\lambda)}\sqrt{\dfrac{\lambda(x_{0})\lambda(y)}{\lambda^{s}(x_{0})\lambda(x)}}\left(\dfrac{\lambda(x)}{\lambda^{s}(x)}\right)^{(1-\alpha)1_{\{x\neq x_{0}\}}},

which is independent of $r$. Summing over $r\in\mathfrak{N}^{y}$ in (3.7), we get

x(J1D,XJ1λ=y)=DC(λ,s)ds.\displaystyle{\mathbb{P}}_{x}\left(J_{1}\in D,\,X^{\lambda}_{J_{1}}=y\right)=\int_{D}C(\lambda,s)\,\mathrm{d}s. (3.8)

By a monotone class argument, we can replace `$1_{D}$' in (3.8) by any non-negative measurable function on ${\mathbb{R}}^{+}$ vanishing on $[u,\infty)$. In particular,

𝔼x(KλJ1,y(r);J1D,XJ1λ=y)=DC(λ,s)Kλs,y(r)ds.\displaystyle\begin{aligned} &{\mathbb{E}}_{x}\left(K^{\lambda^{J_{1}},y}(r);J_{1}\in D,\,X_{J_{1}}^{\lambda}=y\right)=\int_{D}C(\lambda,s)K^{\lambda^{s},y}(r)\,\mathrm{d}s.\end{aligned} (3.9)

Comparing (3.7) and (3.9), we have

x(ΘJ1(𝒩λ)=r,J1D,XJ1λ=y)=𝔼x(KλJ1,y(r);J1D,XJ1λ=y).\displaystyle{\mathbb{P}}_{x}\left(\Theta_{J_{1}}(\mathcal{N}^{\lambda})=r,\,J_{1}\in D,\,X^{\lambda}_{J_{1}}=y\right)={\mathbb{E}}_{x}\left(K^{\lambda^{J_{1}},y}(r);\,J_{1}\in D,\,X^{\lambda}_{J_{1}}=y\right).

We have thus proved (ii).

4 Inverting percolation Ray-Knight identity

In this section, we only consider the case where the conductances are symmetric and α(0,1)\alpha\in(0,1). In §4.1, we will introduce the metric-graph Brownian motion. It turns out that the metric-graph Brownian motion together with the associated percolation process gives a realization of 𝒳\mathcal{X} defined in §2.2, which leads to the percolation Ray-Knight identity (Theorem 2.4). §4.2 is devoted to the proof of the inversion of percolation Ray-Knight identity (Theorem 2.9).

4.1 Percolation Ray-Knight identity

Replacing every edge $e=\{x,y\}$ of ${\mathbb{T}}$ by an interval $I_{e}$ of length $1/2C_{xy}$ defines the metric graph associated to ${\mathbb{T}}$, denoted by ${\widetilde{\mathbb{T}}}$. $V$ is considered to be a subset of ${\widetilde{\mathbb{T}}}$. One can naturally construct a metric-graph Brownian motion $B$ on ${\widetilde{\mathbb{T}}}$, i.e. a diffusion that behaves like a Brownian motion inside each edge, performs Brownian excursions into the adjacent edges when hitting a vertex in $V$, and is killed at each vertex $x\in V$ with rate $k_{x}$. Let ${\widetilde{\mathcal{L}}}$ be the loop soup associated to $B$. Then $X$ and $B$, resp. $\mathcal{L}$ and ${\widetilde{\mathcal{L}}}$, can be naturally coupled through restriction (i.e. $X$, resp. $\mathcal{L}$, is the print of $B$, resp. ${\widetilde{\mathcal{L}}}$, on $V$), which we will assume from now on. See [14] for more details on metric graphs, metric-graph Brownian motions and the above couplings. Notations such as $L_{\cdot}({\widetilde{\mathcal{L}}})$ and ${\widetilde{\mathcal{L}}}^{(u)}$ $(u\geq 0)$ are defined for ${\widetilde{\mathcal{L}}}$ in the same way as for $\mathcal{L}$. Assume that $B$ starts from $x_{0}$. We also have the Ray-Knight identity for $B$, obtained by replacing $\mathcal{L},X,\tau_{u}$ in Theorem 2.2 by ${\widetilde{\mathcal{L}}},B,\tau^{B}_{u}$ respectively.

Let $B$ and ${\widetilde{\mathcal{L}}}^{(0)}$ be independent, and let us always work under the condition $\tau_{u}<T^{X}$ (or equivalently $\tau^{B}_{u}<T^{B}$). For $x\in{\widetilde{\mathbb{T}}}$ and $t\leq\tau_{u}$, we set

ϕ~t(x):=Lx(~(0))+LB(Q1(t),x),\widetilde{\phi}_{t}(x):=L_{x}({\widetilde{\mathcal{L}}}^{(0)})+L^{B}(Q^{-1}(t),x),

where Q(t)=xVLB(t,x)Q(t)=\sum_{x\in V}L^{B}(t,x) and Q1Q^{-1} is the right-continuous inverse. By the coupling of BB and XX, it holds that ϕ~t|V=ϕt\widetilde{\phi}_{t}|_{V}=\phi_{t} (defined in (2.2)). Next, for t0t\geq 0, 𝒪~t\widetilde{\mathcal{O}}_{t} will denote a percolation on EE defined as: for eEe\in E,

𝒪~t(e)=1{ϕ~t has no zero on Ie}.\widetilde{\mathcal{O}}_{t}(e)=1_{\left\{\widetilde{\phi}_{t}\text{ has no zero on }I_{e}\right\}}.

Recall the notations defined in §2.2. We will show the following proposition, which immediately implies the percolation Ray-Knight identity.

Proposition 4.1.

(1) $(\widetilde{\phi}_{\tau_{u}},\widetilde{\mathcal{O}}_{\tau_{u}})$ has the same law as $(L_{\cdot}(\mathcal{L}^{(u)}),\mathcal{O}^{(u)})$;

(2) $\widetilde{\mathcal{X}}:=(X_{t},\widetilde{\mathcal{O}}_{t})_{0\leq t\leq\tau_{u}}$ has the same law as $\mathcal{X}=(X_{t},\mathcal{O}_{t})_{0\leq t\leq\tau_{u}}$.

The remaining part is devoted to the proof of Proposition 4.1. First we present two lemmas in terms of the loop soups \mathcal{L} and ~{\widetilde{\mathcal{L}}}, which will later be translated into the analogous versions associated to 𝒳~\widetilde{\mathcal{X}} using the Ray-Knight identity.

Define O(),O(~)O(\mathcal{L}),O({\widetilde{\mathcal{L}}}) on EE as follows: for eEe\in E,

O(~)(e):=1{L(~) has no zero on Ie},O()(e):=1{e is crossed by }.O({\widetilde{\mathcal{L}}})(e):=1_{\{L_{\cdot}({\widetilde{\mathcal{L}}})\text{ has no zero on }I_{e}\}},~{}O(\mathcal{L})(e):=1_{\{e\text{ is crossed by }\mathcal{L}\}}.

Remember that \mathcal{L} and ~{\widetilde{\mathcal{L}}} are naturally coupled. So it holds that O(~)O()O({\widetilde{\mathcal{L}}})\geq O(\mathcal{L}). Moreover, they have the following relations.

Lemma 4.2.

Conditionally on \mathcal{L}, (O(~)(e):eE,O()(e)=0)\left(O({\widetilde{\mathcal{L}}})(e):e\in E,\,O(\mathcal{L})(e)=0\right) is a family of independent random variables, and

(O(~)({x,y})=0|L()=,O()({x,y})=0)=𝕂1α(2Cxyxy),\displaystyle\begin{split}&{\mathbb{P}}\left(O({\widetilde{\mathcal{L}}})(\{x,y\})=0\,\big{|}\,L_{\cdot}(\mathcal{L})=\ell,O(\mathcal{L})(\{x,y\})=0\right)=\mathbb{K}_{1-\alpha}(2C_{xy}\sqrt{\ell_{x}\ell_{y}}),\end{split} (4.1)

where 𝕂ν(z):=2(z/2)νΓ(ν)Kν(z)\mathbb{K}_{\nu}(z):=\frac{2\left(z/2\right)^{\nu}}{\Gamma(\nu)}K_{\nu}\left(z\right).

Lemma 4.3.

Conditionally on L()L_{\cdot}(\mathcal{L}), (O(~)(e):eE)\left(O({\widetilde{\mathcal{L}}})(e):e\in E\right) is a family of independent random variables, and

(O(~)({x,y})=1|L()=)=I1α(2Cxyxy)Iα1(2Cxyxy).\displaystyle\begin{split}&{\mathbb{P}}\left(O({\widetilde{\mathcal{L}}})(\{x,y\})=1\,\big{|}\,L_{\cdot}(\mathcal{L})=\ell\right)=\dfrac{I_{1-\alpha}\left(2C_{xy}\sqrt{\ell_{x}\ell_{y}}\right)}{I_{\alpha-1}\left(2C_{xy}\sqrt{\ell_{x}\ell_{y}}\right)}.\end{split} (4.2)
Proof of Lemma 4.2.

The independence follows from an argument using the Ray-Knight identity of $B$ and the excursion theory, similar to that in the proof of Proposition 3.2. With the same idea as [14, §3], conditionally on $O(\mathcal{L})(\{x,y\})=0$, the trace of ${\widetilde{\mathcal{L}}}$ on $I_{\{x,y\}}$ consists of the loops entirely contained in $I_{\{x,y\}}$, together with the excursions from and to $x$ (resp. $y$) inside $I_{\{x,y\}}$ of the loops in ${\widetilde{\mathcal{L}}}$ visiting $x$ (resp. $y$). By considering the contribution of each part to the occupation time field, we see that the left-hand side of (4.1) equals the probability that the sum of three independent processes

(at(h)+bt(h,x)+bht(h,y))0th\displaystyle\left(a_{t}^{(h)}+b^{(h,\ell_{x})}_{t}+b_{h-t}^{(h,\ell_{y})}\right)_{0\leq t\leq h} (4.3)

has a zero on (0,h)(0,h). Here h=1/2Cxyh=1/2C_{xy}, (at(h))0th(a_{t}^{(h)})_{0\leq t\leq h} is a BESQ002α,h\text{BESQ}^{2\alpha,h}_{0\rightarrow 0} (i.e. a 2α2\alpha-dimensional BESQ bridge from 0 to 0 over [0,h][0,h]) and (bt(h,l))0th(b_{t}^{(h,l)})_{0\leq t\leq h} is a BESQl00,h\text{BESQ}^{0,h}_{l\rightarrow 0}.

For (4.3) to have a zero, the process bt(h,x)b^{(h,\ell_{x})}_{t} has to hit 0 before the last zero of (at(h)+bht(h,y))0th(a_{t}^{(h)}+b_{h-t}^{(h,\ell_{y})})_{0\leq t\leq h}. The density of the first zero of bt(h,x)b^{(h,\ell_{x})}_{t} is

1{0<t<h}x2t2exp((ht)x2ht)dt.\displaystyle 1_{\{0<t<h\}}\dfrac{\ell_{x}}{2t^{2}}\exp\left(-\frac{(h-t)\ell_{x}}{2ht}\right)\,\mathrm{d}t. (4.4)

To get this, one can start with the well-known fact that the first zero of a $2\delta$-dimensional BESQ process starting from $x$ is distributed as $x/(2\,\mathrm{Gamma}(1-\delta,1))$ (Cf. for example [11, Proposition 2.9]). Then use the fact that for a $2\delta$-dimensional BESQ process $\rho_{t}$ starting from $x$, the process $(1-u/h)^{2}\rho_{uh/(h-u)}$ $(0\leq u\leq h)$ is a $\text{BESQ}^{2\delta,h}_{x\rightarrow 0}$.
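These two facts also give a direct way to simulate the first zero of the bridge and to check (4.4). A Monte Carlo sketch (Python with NumPy; the values of $\ell_{x}$ and $h$ are arbitrary):

```python
import numpy as np

def first_zero_of_bridge(ell_x, h, size=100_000, rng=None):
    """Sample the first zero of a BESQ^{0,h}_{ell_x -> 0} bridge using the
    two facts above: the first zero of a BESQ^0 process started from ell_x
    is ell_x/(2E) with E ~ Exp(1), and under the bridge transform
    (1 - u/h)^2 rho_{uh/(h-u)} a first zero S of rho maps to h S/(h + S)."""
    rng = rng or np.random.default_rng(0)
    S = ell_x / (2 * rng.exponential(size=size))  # first zero of BESQ^0
    return h * S / (h + S)                        # first zero of the bridge

# The antiderivative of (4.4) is F(t) = exp(-(h-t) ell_x / (2ht)); compare
# it with the empirical CDF (ell_x = 0.8 and h = 1.0 chosen arbitrarily):
ell_x, h = 0.8, 1.0
T = first_zero_of_bridge(ell_x, h)
for t in (0.2, 0.5, 0.8):
    print(t, (T <= t).mean(), np.exp(-(h - t) * ell_x / (2 * h * t)))
```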

Since (at(h)+bht(h,y))0th(a_{t}^{(h)}+b_{h-t}^{(h,\ell_{y})})_{0\leq t\leq h} has the same law as a BESQ0y2α,h\text{BESQ}^{2\alpha,h}_{0\rightarrow\ell_{y}}, its last zero has the same law as hh minus the first zero of a BESQy02α,h\text{BESQ}^{2\alpha,h}_{\ell_{y}\rightarrow 0}, which has the density

1{0<t<h}2α1Γ(1α)hαy1αtα(ht)α2exp(yt2h(ht))dt.\displaystyle 1_{\{0<t<h\}}\dfrac{2^{\alpha-1}}{\Gamma(1-\alpha)}h^{\alpha}\ell_{y}^{1-\alpha}t^{-\alpha}(h-t)^{\alpha-2}\exp{\left(-\frac{\ell_{y}t}{2h(h-t)}\right)}\,\mathrm{d}t. (4.5)

Gathering (4.4), (4.5) and taking $h=1/2C_{xy}$, we get that the probability of (4.3) having a zero is

0h0tx2s2exp((hs)x2hs)2α1Γ(1α)hαy1αtα(ht)α2exp(yt2h(ht))dsdt\displaystyle\int_{0}^{h}\int_{0}^{t}\dfrac{\ell_{x}}{2s^{2}}\exp\left(-\frac{(h-s)\ell_{x}}{2hs}\right)\dfrac{2^{\alpha-1}}{\Gamma(1-\alpha)}h^{\alpha}\ell_{y}^{1-\alpha}t^{-\alpha}(h-t)^{\alpha-2}\exp{\left(-\frac{\ell_{y}t}{2h(h-t)}\right)}\,\mathrm{d}s\,\mathrm{d}t
=2α1Γ(1α)y1αhαexp(y+x2h)0hexp(x2ty2(ht))tα(ht)α2dt\displaystyle=\dfrac{2^{\alpha-1}}{\Gamma(1-\alpha)}\ell_{y}^{1-\alpha}h^{\alpha}\exp\left(\dfrac{\ell_{y}+\ell_{x}}{2h}\right)\int_{0}^{h}\exp\left(-\dfrac{\ell_{x}}{2t}-\dfrac{\ell_{y}}{2(h-t)}\right)t^{-\alpha}(h-t)^{\alpha-2}\,\mathrm{d}t
=1Γ(1α)0exp(xy(2h)2ss)dssα\displaystyle=\dfrac{1}{\Gamma(1-\alpha)}\int_{0}^{\infty}\exp\left(-\frac{\ell_{x}\ell_{y}}{(2h)^{2}s}-s\right)\dfrac{\,\mathrm{d}s}{s^{\alpha}}
=2Γ(1α)1(Cxyxy)1αK1α(2Cxyxy),\displaystyle=2\Gamma(1-\alpha)^{-1}\left(C_{xy}\sqrt{\ell_{x}\ell_{y}}\right)^{1-\alpha}K_{1-\alpha}\left(2C_{xy}\sqrt{\ell_{x}\ell_{y}}\right),

where in the second equality, we use the change of variable s=yt2h(ht)s=\dfrac{\ell_{y}t}{2h(h-t)} and the last equality follows from [19, (136)]. ∎
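The last two equalities can be verified numerically, with $a=(C_{xy})^{2}\ell_{x}\ell_{y}$. A sketch (Python with SciPy; the parameter values are arbitrary):

```python
import math
from scipy.integrate import quad
from scipy.special import kv

def check_k_integral(alpha=0.3, a=0.7):
    """Check the closing step: with a = (C_xy)^2 ell_x ell_y,
    (1/Gamma(1-alpha)) int_0^inf exp(-a/s - s) s^{-alpha} ds
      = (2/Gamma(1-alpha)) a^{(1-alpha)/2} K_{1-alpha}(2 sqrt(a))."""
    lhs, _ = quad(lambda s: math.exp(-a / s - s) * s ** (-alpha), 0, math.inf)
    lhs /= math.gamma(1 - alpha)
    rhs = (2 / math.gamma(1 - alpha)) * a ** ((1 - alpha) / 2) \
        * kv(1 - alpha, 2 * math.sqrt(a))
    assert abs(lhs - rhs) < 1e-7

check_k_integral()
```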

Proof of Lemma 4.3.

By Proposition 3.2,

(O()({x,y})=0|L()=)=(Cxyxy)α1Γ(α)Iα1(2Cxyxy).\displaystyle{\mathbb{P}}\left(O(\mathcal{L})(\{x,y\})=0|L_{\cdot}(\mathcal{L})=\ell\right)=\dfrac{\left(C_{xy}\sqrt{\ell_{x}\ell_{y}}\right)^{\alpha-1}}{\Gamma(\alpha)I_{\alpha-1}(2C_{xy}\sqrt{\ell_{x}\ell_{y}})}. (4.6)

The probability we are interested in equals $1$ minus the product of (4.1) and (4.6). ∎

Proof of Proposition 4.1.

By the Ray-Knight identity of $B$, $\widetilde{\phi}_{\tau_{u}}$ has the law of $L_{\cdot}({\widetilde{\mathcal{L}}}^{(u)})$. So it follows from Lemma 4.3 that $(\widetilde{\phi}_{\tau_{u}},\widetilde{\mathcal{O}}_{\tau_{u}})$ has the same law as $(L_{\cdot}(\mathcal{L}^{(u)}),\mathcal{O}^{(u)})$. This proves (1).

For (2), if XX jumps through the edge {x,y}\{x,y\}, then BB crosses the interval I{x,y}I_{\{x,y\}}, which makes ϕ~t\widetilde{\phi}_{t} positive on this interval. So 𝒪~t({x,y})\widetilde{\mathcal{O}}_{t}(\{x,y\}) turns to 11.

It remains to calculate the rate of opening an edge without $X$ jumping. First, it is easy to deduce that $(\widetilde{\mathcal{X}},\phi)$ is a Markov process, which allows us to condition only on $(\widetilde{\mathcal{X}}_{t},\phi_{t})$ when calculating the rate. Note that the opening of $\{x,y\}$ without $X$ jumping can happen only when $X$ stays at $x$ and $x=\mathfrak{p}(y)$. We use $\{X_{[t,t+\Delta t]}=x\}$ to stand for the event that $X$ stays at $x$ during $[t,t+\Delta t]$. Denote by $A_{1}$ the event that at some time in $[t,t+\Delta t]$, $\{x,y\}$ is opened without $X$ jumping, and let $A_{2}:=A_{1}\cap\{X_{[t,t+\Delta t]}=x\}$. For $(\widetilde{\mathcal{X}}_{t},\phi_{t})$ with $X_{t}=x$ and $\widetilde{\mathcal{O}}_{t}(\{x,y\})=0$, if we write ${\mathbb{Q}}={\mathbb{P}}(\cdot\,|\,X_{t}=x,\phi_{t})$, then

(A2|𝒳~t,ϕt)=(𝒪~t+Δt({x,y})=1,X[t,t+Δt]=x|𝒪~t({x,y})=0)\displaystyle\quad\,{\mathbb{P}}(A_{2}\,|\,\widetilde{\mathcal{X}}_{t},\phi_{t})={\mathbb{Q}}(\widetilde{\mathcal{O}}_{t+\Delta t}(\{x,y\})=1,X_{[t,t+\Delta t]}=x\,|\,\widetilde{\mathcal{O}}_{t}(\{x,y\})=0)
=(𝒪~t({x,y})=0)1[(𝒪~t({x,y})=0,X[t,t+Δt]=x)\displaystyle={\mathbb{Q}}(\widetilde{\mathcal{O}}_{t}(\{x,y\})=0)^{-1}\cdot\Big{[}{\mathbb{Q}}(\widetilde{\mathcal{O}}_{t}(\{x,y\})=0,X_{[t,t+\Delta t]}=x)
(𝒪~t+Δt({x,y})=0,X[t,t+Δt]=x)]\displaystyle\quad-{\mathbb{Q}}(\widetilde{\mathcal{O}}_{t+\Delta t}(\{x,y\})=0,X_{[t,t+\Delta t]}=x)\Big{]}
=(X[t,t+Δt]=x)[1(𝒪~t+Δt({x,y})=0|X[t,t+Δt]=x)(𝒪~t({x,y})=0)].\displaystyle={\mathbb{Q}}(X_{[t,t+\Delta t]}=x)\cdot\Bigg{[}1-\dfrac{{\mathbb{Q}}(\widetilde{\mathcal{O}}_{t+\Delta t}(\{x,y\})=0\,|\,X_{[t,t+\Delta t]}=x)}{{\mathbb{Q}}(\widetilde{\mathcal{O}}_{t}(\{x,y\})=0)}\Bigg{]}.

By considering the print of BB on the branch at xx containing yy and using the Ray-Knight identity of BB, we can readily check that the fraction in the above square brackets equals

(O(~)({x,y})=0|O()({x,y})=0,Lx()=ϕt(x)+Δt,Ly()=ϕt(y))(O(~)({x,y})=0|O()({x,y})=0,Lx()=ϕt(x),Ly()=ϕt(y)),\displaystyle\dfrac{{\mathbb{P}}\big{(}O({\widetilde{\mathcal{L}}})(\{x,y\})=0\,|\,O(\mathcal{L})(\{x,y\})=0,L_{x}(\mathcal{L})=\phi_{t}(x)+\Delta t,L_{y}(\mathcal{L})=\phi_{t}(y)\big{)}}{{\mathbb{P}}\big{(}O({\widetilde{\mathcal{L}}})(\{x,y\})=0\,|\,O(\mathcal{L})(\{x,y\})=0,L_{x}(\mathcal{L})=\phi_{t}(x),L_{y}(\mathcal{L})=\phi_{t}(y)\big{)}},

which further equals f(ϕt(x)+Δt)/f(ϕt(x))f(\phi_{t}(x)+\Delta t)/f(\phi_{t}(x)) with f(s):=𝕂1α(2Cxysϕt(y))f(s):=\mathbb{K}_{1-\alpha}\left(2C_{xy}\sqrt{s\phi_{t}(y)}\right) by Lemma 4.2. Therefore,

(A2|𝒳~t,ϕt)=f(ϕt(x))f(ϕt(x))Δt+o(Δt).\displaystyle{\mathbb{P}}(A_{2}\,|\,\widetilde{\mathcal{X}}_{t},\phi_{t})=-\dfrac{f^{\prime}(\phi_{t}(x))}{f(\phi_{t}(x))}\Delta t+o(\Delta t).

Observe that $A_{1}\setminus A_{2}\subset A_{1}\cap\{X\text{ has a jump during }[t,t+\Delta t]\}$, and we can readily deduce that the conditional probability of the latter event is $o(\Delta t)$. So

(A1|𝒳~t,ϕt)=(A2|𝒳~t,ϕt)+o(Δt)=f(ϕt(x))f(ϕt(x))Δt+o(Δt),\displaystyle{\mathbb{P}}(A_{1}\,|\,\widetilde{\mathcal{X}}_{t},\phi_{t})={\mathbb{P}}(A_{2}\,|\,\widetilde{\mathcal{X}}_{t},\phi_{t})+o(\Delta t)=-\dfrac{f^{\prime}(\phi_{t}(x))}{f(\phi_{t}(x))}\Delta t+o(\Delta t),

which leads to the rate in (2.3) by using [zνKν(z)]=zνKν1(z)[z^{\nu}K_{\nu}(z)]^{\prime}=-z^{\nu}K_{\nu-1}(z) and Kν=KνK_{-\nu}=K_{\nu}. ∎

4.2 Percolation-vertex repelling jump process

Let 𝒳=(Xt,𝒪t)0tτu\overleftarrow{\mathcal{X}}=(\overleftarrow{X}_{t},\overleftarrow{\mathcal{O}}_{t})_{0\leq t\leq\tau_{u}} be the time reversal of 𝒳\mathcal{X}, i.e. 𝒳=(𝒳τut)0tτu\overleftarrow{\mathcal{X}}=\left(\mathcal{X}_{\tau_{u}-t}\right)_{0\leq t\leq\tau_{u}}. In this part, we will verify that for any λ\lambda\in\mathfrak{R} and configuration OO on EE, the jump rate of 𝒳\overleftarrow{\mathcal{X}} conditionally on τu<TX\tau_{u}<T^{X} and (ϕ(u),𝒪(u))=(λ,O)\left(\phi^{(u)},\mathcal{O}^{(u)}\right)=(\lambda,O) is given by (2.7), which leads to Theorem 2.9.

Recall the notations in (2.4). In the following, we will use $\Lambda_{t}$ and $\varphi_{t}$ to represent $\Lambda_{t}(\lambda,\overleftarrow{X}_{[0,t]})$ and $\varphi_{t}(\lambda,\overleftarrow{X}_{[0,t]})$ respectively. Note that given $\phi^{(u)}=\lambda$, the aggregated local time $\phi_{\tau_{u}-t}$ equals $\Lambda_{t}$. Let us condition on $\tau_{u}<T^{X}$, $(\phi^{(u)},\mathcal{O}^{(u)})=(\lambda,O)$ and the path $\overleftarrow{\mathcal{X}}_{[0,t]}$ with $\overleftarrow{X}_{t}=x$, and calculate the jump rate of the edge $\{x,y\}$, i.e. the rate at which $\overleftarrow{X}$ jumps from $x$ to $y$ without modifying $\overleftarrow{\mathcal{O}}_{t}$, or $\{x,y\}$ is closed in $\overleftarrow{\mathcal{O}}_{t}$ without $\overleftarrow{X}$ jumping, or the jump and the closure happen simultaneously. Due to the Markov property of $\mathcal{X}$, it suffices to condition only on $(\overleftarrow{\mathcal{X}}_{t},\Lambda_{t})$. For simplicity, denote ${\mathbb{Q}}={\mathbb{P}}(\cdot\,|\,\overleftarrow{X}_{t}=x,\Lambda_{t})={\mathbb{P}}(\cdot\,|\,X_{\tau_{u}-t}=x,\phi_{\tau_{u}-t})$ henceforth.

4.2.1 Two-point case

We start with the simplest case: 𝕋{\mathbb{T}} contains only two vertices. The jump rate is analyzed in the following two cases.

(1) if y=𝔭(x)y=\mathfrak{p}(x), it holds that 𝒪t({x,y})=1\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1. This allows us to further ignore the condition on 𝒪t({x,y})\overleftarrow{\mathcal{O}}_{t}(\{x,y\}). Note that the law of X\overleftarrow{X} given ϕ(u)=λ\phi^{(u)}=\lambda and τu<TX\tau_{u}<T^{X} is also x0λ{\mathbb{P}}_{x_{0}}^{\lambda} (defined in §2.3). So the rate of the jump from xx to yy by X\overleftarrow{X} at time tt is CxyΛt(y)Λt(x)Iα1(φt(xy))Iα(φt(xy))C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha-1}(\varphi_{t}(xy))}{I_{\alpha}(\varphi_{t}(xy))} as shown in (2.5).

We further consider the probability that the jump is accompanied by the closure of {x,y}\{x,y\} in 𝒪\mathcal{O}. Observe that this happens if and only if 𝒪(τut)({x,y})=0\mathcal{O}_{(\tau_{u}-t)-}(\{x,y\})=0. Conditionally on Λt\Lambda_{t} and X\overleftarrow{X} jumping from xx to yy at time tt, 𝒪(τut)({x,y})\mathcal{O}_{(\tau_{u}-t)-}(\{x,y\}) has the same law as O(~)({x,y})O({\widetilde{\mathcal{L}}})(\{x,y\}) given Lx(~)=Λt(x)L_{x}({\widetilde{\mathcal{L}}})=\Lambda_{t}(x) and Ly(~)=Λt(y)L_{y}({\widetilde{\mathcal{L}}})=\Lambda_{t}(y). By Lemma 4.3, the conditional probability that 𝒪(τut)({x,y})=0\mathcal{O}_{(\tau_{u}-t)-}(\{x,y\})=0 is

1I1α(φt(xy))Iα1(φt(xy)).1-\dfrac{I_{1-\alpha}(\varphi_{t}(xy))}{I_{\alpha-1}(\varphi_{t}(xy))}. (4.7)

Therefore, X\overleftarrow{X} jumps from xx to yy and 𝒪({x,y})\overleftarrow{\mathcal{O}}(\{x,y\}) turns to 0 at time tt with rate

CxyΛt(y)Λt(x)Iα1(φt(xy))Iα(φt(xy))(1I1α(φt(xy))Iα1(φt(xy)))=CxyΛt(y)Λt(x)Iα1(φt(xy))I1α(φt(xy))Iα(φt(xy));\displaystyle\begin{aligned} &\quad C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha-1}(\varphi_{t}(xy))}{I_{\alpha}(\varphi_{t}(xy))}\cdot\Big{(}1-\dfrac{I_{1-\alpha}(\varphi_{t}(xy))}{I_{\alpha-1}(\varphi_{t}(xy))}\Big{)}\\ &=C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha-1}(\varphi_{t}(xy))-I_{1-\alpha}(\varphi_{t}(xy))}{I_{\alpha}(\varphi_{t}(xy))};\end{aligned}

while X\overleftarrow{X} jumps from xx to yy at time tt without modifying 𝒪t\overleftarrow{\mathcal{O}}_{t} with rate

CxyΛt(y)Λt(x)I1α(φt(xy))Iα(φt(xy)).C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{1-\alpha}(\varphi_{t}(xy))}{I_{\alpha}(\varphi_{t}(xy))}.

(2) If $x=\mathfrak{p}(y)$, there are two possible transitions: $\overleftarrow{X}$ jumps from $x$ to $y$ without modifying $\overleftarrow{\mathcal{O}}_{t}$, or $\{x,y\}$ is closed without $\overleftarrow{X}$ jumping. Denote by $A_{1}$ (resp. $A_{2}$) the event that a transition of the first (resp. second) kind occurs during $[t,t+\Delta t]$. Note that $A_{1}$ can happen only when $\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1$. We have

\[\begin{split}
{\mathbb{P}}(A_{1}\,|\,\overleftarrow{X}_{t}=x,\,\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1,\,\Lambda_{t})&={{\mathbb{Q}}(A_{1})}/{{\mathbb{Q}}(\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1)}\\
&=C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha}(\varphi_{t}(xy))}{I_{\alpha-1}(\varphi_{t}(xy))}\Delta t\cdot\Big(\dfrac{I_{1-\alpha}(\varphi_{t}(xy))}{I_{\alpha-1}(\varphi_{t}(xy))}\Big)^{-1}+o(\Delta t)\\
&=C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha}(\varphi_{t}(xy))}{I_{1-\alpha}(\varphi_{t}(xy))}\Delta t+o(\Delta t).
\end{split}\]
(4.8)

Now we turn to A2A_{2}. Observe that A2A_{2} is also the event that at some time in [τutΔt,τut][\tau_{u}-t-\Delta t,\tau_{u}-t], {x,y}\{x,y\} is opened in 𝒪\mathcal{O} without XX jumping. Let A3=A2{X[τutΔt,τut]=x}A_{3}=A_{2}\cap\{X_{[\tau_{u}-t-\Delta t,\tau_{u}-t]}=x\}. Then

(A3|Xt=x,𝒪t({x,y})=1,Λt)=(𝒪τutΔt({x,y})=0,X[τutΔt,τut]=x|𝒪τut({x,y})=1)=(X[τutΔt,τut]=x|𝒪τut({x,y})=1)(𝒪τutΔt({x,y})=1,X[τutΔt,τut]=x|𝒪τut({x,y})=1)=:q1q2.\displaystyle\begin{split}&\quad\,{\mathbb{P}}(A_{3}\,|\,\overleftarrow{X}_{t}=x,\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1,\Lambda_{t})\\ &={\mathbb{Q}}(\mathcal{O}_{\tau_{u}-t-\Delta t}(\{x,y\})=0,X_{[\tau_{u}-t-\Delta t,\tau_{u}-t]}=x\,|\,\mathcal{O}_{\tau_{u}-t}(\{x,y\})=1)\\ &={\mathbb{Q}}(X_{[\tau_{u}-t-\Delta t,\tau_{u}-t]}=x\,|\,\mathcal{O}_{\tau_{u}-t}(\{x,y\})=1)\\ &-{\mathbb{Q}}(\mathcal{O}_{\tau_{u}-t-\Delta t}(\{x,y\})=1,X_{[\tau_{u}-t-\Delta t,\tau_{u}-t]}=x\,|\,\mathcal{O}_{\tau_{u}-t}(\{x,y\})=1)=:q_{1}-q_{2}.\end{split}

We observe that

q1=1CxyΛt(y)Λt(x)Iα(φt(xy))I1α(φt(xy))Δt+o(Δt)q_{1}=1-C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{\alpha}\left(\varphi_{t}(xy)\right)}{I_{1-\alpha}\left(\varphi_{t}(xy)\right)}\Delta t+o(\Delta t)

by (4.8); while if we denote h(s)=I1α(2CxysΛt(y))Iα1(2CxysΛt(y))h(s)=\frac{I_{1-\alpha}(2C_{xy}\sqrt{s\Lambda_{t}(y)})}{I_{\alpha-1}(2C_{xy}\sqrt{s\Lambda_{t}(y)})}, then by Lemma 4.3,

\[\begin{aligned}
q_{2}&=q_{1}\,{\mathbb{P}}(\mathcal{O}_{\tau_{u}-t-\Delta t}(\{x,y\})=1\,|\,X_{\tau_{u}-t-\Delta t}=x,\,\Lambda_{\tau_{u}-t-\Delta t}=\ell-\Delta t\cdot\delta_{x})\big|_{\ell=\Lambda_{t}}\\
&=q_{1}h(\Lambda_{t}(x)-\Delta t)=q_{1}\big[h(\Lambda_{t}(x))-h^{\prime}(\Lambda_{t}(x))\Delta t+o(\Delta t)\big]\\
&=1-C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{-\alpha}(\varphi_{t}(xy))}{I_{1-\alpha}(\varphi_{t}(xy))}\Delta t+o(\Delta t).
\end{aligned}\]

So (A3|Xt=x,𝒪t({x,y})=1,Λt){\mathbb{P}}(A_{3}\,|\,\overleftarrow{X}_{t}=x,\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1,\Lambda_{t}) equals

CxyΛt(y)Λt(x)Iα(φt(xy))Iα(φt(xy))I1α(φt(xy))Δt+o(Δt).\displaystyle C_{xy}\sqrt{\dfrac{\Lambda_{t}(y)}{\Lambda_{t}(x)}}\cdot\dfrac{I_{-\alpha}(\varphi_{t}(xy))-I_{\alpha}(\varphi_{t}(xy))}{I_{1-\alpha}(\varphi_{t}(xy))}\Delta t+o(\Delta t). (4.9)

Since $A_{2}\setminus A_{3}\subset\{\overleftarrow{X}\text{ has more than }2\text{ jumps during }[t,t+\Delta t]\}$, by Lemma A.11 the probability of $A_{2}\setminus A_{3}$ is $o(\Delta t)$ under ${\mathbb{Q}}(\cdot\,|\,\mathcal{O}_{\tau_{u}-t}(\{x,y\})=1)$. So ${\mathbb{P}}(A_{2}\,|\,\overleftarrow{X}_{t}=x,\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1,\Lambda_{t})$ also equals (4.9). We have thus shown all the rates in (2.7) in the two-point case. The rates already determine the process (Cf. Appendix A), so we are done.
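For reference, the rates obtained in the two-point case can be evaluated directly from the Bessel functions. The sketch below (Python with SciPy) merely tabulates them; the function and argument names are ours.

```python
from math import sqrt
from scipy.special import iv

def reversed_two_point_rates(alpha, C, lam_x, lam_y, parent_is_y):
    """Evaluate the rates of the reversed pair at x in the two-point case,
    as derived above; phi = 2 C sqrt(lam_x lam_y)."""
    phi = 2 * C * sqrt(lam_x * lam_y)
    pref = C * sqrt(lam_y / lam_x)
    if parent_is_y:  # y = p(x): the closure of {x,y} needs a jump
        return {
            "jump, edge stays open": pref * iv(1 - alpha, phi) / iv(alpha, phi),
            "jump and close {x,y}":  pref * (iv(alpha - 1, phi)
                                             - iv(1 - alpha, phi)) / iv(alpha, phi),
        }
    else:            # x = p(y): the closure of {x,y} happens without a jump
        return {
            "jump, edge stays open": pref * iv(alpha, phi) / iv(1 - alpha, phi),
            "close {x,y}, no jump":  pref * (iv(-alpha, phi)
                                             - iv(alpha, phi)) / iv(1 - alpha, phi),
        }

print(reversed_two_point_rates(0.4, 1.0, 0.9, 1.1, parent_is_y=True))
```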

4.2.2 General case

For general 𝕋{\mathbb{T}}, the strategy is to introduce a coupling, which connects the rate in the general case with that in the two-point case. In the following, to emphasize the tree where 𝒳\mathcal{X} is defined, we also write 𝒳=𝒳𝕋=(X𝕋,𝒪𝕋)\overleftarrow{\mathcal{X}}=\overleftarrow{\mathcal{X}}^{\mathbb{T}}=(\overleftarrow{X}^{\mathbb{T}},\overleftarrow{\mathcal{O}}^{\mathbb{T}}). Recall that we condition on (𝒳t,Λt)(\overleftarrow{\mathcal{X}}_{t},\Lambda_{t}) with Xt=x\overleftarrow{X}_{t}=x.

If $y=\mathfrak{p}(x)$, divide ${\mathbb{T}}$ into $3$ parts: ${\mathbb{T}}_{i}=(V_{i},E_{i})$ $(i=1,2,3)$ is the connected subgraph of ${\mathbb{T}}$ with $V_{2}=\{x,y\}$, $V_{3}=\{z\in V:z=x\text{ or }x<z\}$ and $V_{1}=V\setminus V_{3}$. Each ${\mathbb{T}}_{i}$ is equipped with the conductances and killing measure such that the induced CTMC on ${\mathbb{T}}_{i}$ has the law of the print of $X$ on $V_{i}$ (for example, for ${\mathbb{T}}_{2}$, the conductance on $\{x,y\}$ is $C_{xy}$, and the killing measure at $x$ (resp. $y$) is the effective conductance (on ${\mathbb{T}}$) from $x$ (resp. $y$) to the infinity point after cutting the edge $\{x,y\}$). View $x_{0}$, $y$ and $x$ as the roots of ${\mathbb{T}}_{1}$, ${\mathbb{T}}_{2}$ and ${\mathbb{T}}_{3}$ respectively. We have the following coupling. Note that $(\overleftarrow{\mathcal{X}},\Lambda)=(\overleftarrow{\mathcal{X}}^{\mathbb{T}},\Lambda^{\mathbb{T}})$ is a Markov process. So we can naturally define $(\overleftarrow{\mathcal{X}},\Lambda)$ starting from a given triple $(x,O,\lambda)$, which represents the starting point, the initial configuration and the time field respectively. Now we consider $(\overleftarrow{\mathcal{X}}^{{\mathbb{T}}_{i}},\Lambda^{{\mathbb{T}}_{i}})$ starting from $(x_{i},\mathcal{O}_{t}|_{E_{i}},\Lambda_{t}|_{V_{i}})$ respectively, where $x_{1}=y$ and $x_{2}=x_{3}=x$. If we glue the excursions of $\overleftarrow{X}^{{\mathbb{T}}_{i}}$ at $x$ and $y$ according to the local time, preserve the evolution of $\overleftarrow{\mathcal{O}}^{{\mathbb{T}}_{i}}$ along with $\overleftarrow{X}^{{\mathbb{T}}_{i}}$, and recover a process from the glued excursion process starting from $x$ up to the time when the local time at $x_{0}$ is exhausted (some of the excursions are not used in the recovery procedure), then the process obtained has the law of $(\overleftarrow{\mathcal{X}}_{t+s}:0\leq s\leq T^{\mathcal{X}}-t)$ given $(\overleftarrow{\mathcal{X}}_{t},\Lambda_{t})$.

In the following, assume $(\overleftarrow{\mathcal{X}},\Lambda)$ and $(\overleftarrow{\mathcal{X}}^{{\mathbb{T}}_{2}},\Lambda^{{\mathbb{T}}_{2}})$ are coupled in the above way. Let $A_{1}$ and $A_{2}$ (resp. $A^{\prime}_{1}$ and $A^{\prime}_{2}$) be the events that $\overleftarrow{X}$ (resp. $\overleftarrow{X}^{{\mathbb{T}}_{2}}$) jumps from $x$ to $y$ with and without the closure of $\{x,y\}$ in $\overleftarrow{\mathcal{O}}$ (resp. $\overleftarrow{\mathcal{O}}^{{\mathbb{T}}_{2}}$) during $[t,t+\Delta t]$ (resp. $[0,\Delta t]$) respectively. It follows directly from the coupling that for $(\overleftarrow{\mathcal{X}}_{t},\Lambda_{t})$ with $\overleftarrow{X}_{t}=x$ and $\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1$,

(Aj|𝒳t,Λt)(Aj|𝒳t,Λt)=(Aj|𝒳Q2(t)𝕋2,ΛQ2(t)𝕋2)(j=1,2),\displaystyle{\mathbb{P}}(A_{j}|\overleftarrow{\mathcal{X}}_{t},\Lambda_{t})\leq{\mathbb{P}}(A^{\prime}_{j}|\overleftarrow{\mathcal{X}}_{t},\Lambda_{t})={\mathbb{P}}\big{(}A^{\prime}_{j}\big{|}\overleftarrow{\mathcal{X}}^{{\mathbb{T}}_{2}}_{Q_{2}(t)},\Lambda^{{\mathbb{T}}_{2}}_{Q_{2}(t)}\big{)}~{}(j=1,2), (4.10)

where Q2(t)=0t1{XsV2}dsQ_{2}(t)=\int_{0}^{t}1_{\{\overleftarrow{X}_{s}\in V_{2}\}}\,\mathrm{d}s.

Recall that =(|Xt=x,Λt){\mathbb{Q}}={\mathbb{P}}(\cdot\,|\,\overleftarrow{X}_{t}=x,\Lambda_{t}). It is plain that

(𝒪t({x,y})=1)[(A1|𝒳t,Λt)+(A2|𝒳t,Λt)]=(there is a jump of X from x to y during [t,t+Δt]).\displaystyle\begin{aligned} &\quad{\mathbb{Q}}(\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1)\cdot\big{[}{\mathbb{P}}(A_{1}|\overleftarrow{\mathcal{X}}_{t},\Lambda_{t})+{\mathbb{P}}(A_{2}|\overleftarrow{\mathcal{X}}_{t},\Lambda_{t})\big{]}\\ &={\mathbb{Q}}\big{(}\text{there is a jump of $\overleftarrow{X}$ from $x$ to $y$ during }[t,t+\Delta t]\big{)}.\end{aligned} (4.11)

On the other hand, the jump rates of $\overleftarrow{\mathcal{X}}^{{\mathbb{T}}_{2}}$ have been calculated in §4.2.1. Together with the rates of $\overleftarrow{X}$ given in Theorem 2.8, we can readily verify that

(𝒪t({x,y})=1)[(A1|𝒳Q2(t)𝕋2,ΛQ2(t)𝕋2)+(A2|𝒳Q2(t)𝕋2,ΛQ2(t)𝕋2)]=(there is a jump of X from x to y during [t,t+Δt])+o(Δt).\displaystyle\begin{aligned} &\quad\,{\mathbb{Q}}(\overleftarrow{\mathcal{O}}_{t}(\{x,y\})=1)\cdot\Big{[}{\mathbb{P}}\big{(}A^{\prime}_{1}\big{|}\overleftarrow{\mathcal{X}}^{{\mathbb{T}}_{2}}_{Q_{2}(t)},\Lambda^{{\mathbb{T}}_{2}}_{Q_{2}(t)}\big{)}+{\mathbb{P}}\big{(}A^{\prime}_{2}\big{|}\overleftarrow{\mathcal{X}}^{{\mathbb{T}}_{2}}_{Q_{2}(t)},\Lambda^{{\mathbb{T}}_{2}}_{Q_{2}(t)}\big{)}\Big{]}\\ &={\mathbb{Q}}\big{(}\text{there is a jump of $\overleftarrow{X}$ from $x$ to $y$ during }[t,t+\Delta t]\big{)}+o(\Delta t).\end{aligned} (4.12)

Comparing (4.11) and (4.12), we see that the inequality in (4.10) is in fact an equality up to a difference of $o(\Delta t)$. This gives the jump rates.

The jump rates in the case of $x=\mathfrak{p}(y)$ can also be deduced from the jump rates in the two-point case by a similar argument. We leave the details to the reader.

5 Mesh limits of repelling jump processes

In this section, we first introduce the self-repelling diffusion WλW^{\lambda} which inverts the Ray-Knight identity of reflected Brownian motion, and then show that WλW^{\lambda} is the mesh limit of vertex repelling jump processes as stated in Theorem 2.10.

5.1 Self-repelling diffusions

Let $\mathcal{Z}$ be the process whose local time flow is the $\text{Jacobi}(2\alpha,0)$ flow, the so-called burglar. See [1, §5] and [21] for details.

For any non-negative continuous function ff on +{\mathbb{R}}^{+} with f(0)>0f(0)>0, denote by 𝔡f\mathfrak{d}_{f} the hitting time of 0 by ff, i.e. 𝔡f:=inf{x>0:f(x)=0}(0,]\mathfrak{d}_{f}:=\inf\{x>0:f(x)=0\}\in(0,\infty]. We call ff admissible if

0𝔡ff(x)1dx=.\int_{0}^{\mathfrak{d}_{f}}f(x)^{-1}\,\mathrm{d}x=\infty.

Let $\widetilde{\mathfrak{R}}$ be the set of non-negative, continuous, admissible functions $\lambda$ on ${\mathbb{R}}^{+}$ with $\lambda(0)>0$ when $\alpha>0$, and further with finite and connected support when $\alpha=0$. The $\text{BESQ}^{2\alpha}$ process is in $\widetilde{\mathfrak{R}}$ a.s.

For f~f\in\widetilde{\mathfrak{R}}, define the following change of scale ηf\eta_{f} and change of time KfK_{f} associated to a deterministic continuous process RR on +{\mathbb{R}}^{+} starting from 0:

ηf(y)=0yf(x)1dx,y[0,𝔡f),\displaystyle\eta_{f}(y)=\int_{0}^{y}f(x)^{-1}\,\mathrm{d}x,y\in[0,\mathfrak{d}_{f}),
Kf(t)=KfR(t)=0t(fηf1(Rs))2ds,t[0,H𝔡fR).\displaystyle K_{f}(t)=K^{R}_{f}(t)=\int_{0}^{t}\big{(}f\circ\eta_{f}^{-1}(R_{s})\big{)}^{2}\,\mathrm{d}s,\ t\in[0,H^{R}_{\mathfrak{d}_{f}}).

Set Φ(R[0,a),f):=(ηf1(RKf1(t)):t[0,Kf(a)))\Phi(R_{[0,a)},f):=\left(\eta^{-1}_{f}\big{(}R_{K_{f}^{-1}(t)}\big{)}:t\in[0,K_{f}(a))\right) for 0<aH𝔡fR0<a\leq H^{R}_{\mathfrak{d}_{f}}.
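Numerically, $\eta_{f}$, $K_{f}$ and $\Phi$ can be approximated on a grid. The following sketch (Python with NumPy) is only illustrative; it assumes $f$ is a vectorized positive function on $[0,x_{\max})$ and $R$ is a nonnegative path sampled with mesh $\mathrm{d}t$; values of $R$ beyond the range of $\eta_{f}$ are clamped.

```python
import numpy as np

def Phi(R, f, dt, xmax, n=10_000):
    """A discretized sketch of Phi(R_{[0,a)}, f): apply the scale change
    eta_f and the time change K_f to a sampled nonnegative path R (an
    array with R[i] = R_{i*dt}).  `f` is a vectorized positive function
    on [0, xmax); everything is approximated on a uniform grid."""
    xs = np.linspace(0, xmax, n, endpoint=False)
    eta = np.concatenate([[0.0], np.cumsum(1 / f(xs[:-1])) * (xmax / n)])
    def eta_inv(y):                 # right-continuous inverse on the grid
        return xs[np.searchsorted(eta, y, side="right") - 1]
    speeds = f(eta_inv(R)) ** 2     # (f o eta_f^{-1}(R_s))^2
    K = np.concatenate([[0.0], np.cumsum(speeds[:-1]) * dt])
    def K_inv(t):
        return np.searchsorted(K, t, side="right") - 1
    times = np.arange(0.0, K[-1], dt)
    return np.array([eta_inv(R[K_inv(t)]) for t in times])

# e.g. apply to a reflected random-walk approximation of a path with
# f(x) = 1 + x**2 on [0, 10):
rw = np.abs(np.cumsum(np.random.default_rng(0).choice([-0.005, 0.005], 20_000)))
print(Phi(rw, lambda x: 1 + x**2, dt=1e-4, xmax=10.0)[:5])
```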

For any λ~\lambda\in\widetilde{\mathfrak{R}}, define the self-repelling diffusion WλW^{\lambda} as follows:

  • when $\alpha=0$, let $\mathcal{Z}^{(1)}$, $\mathcal{Z}^{(2)}$ and $\mathcal{J}^{2,2}$ be three independent processes such that $\mathcal{Z}^{(1)}$ and $\mathcal{Z}^{(2)}$ are identically distributed as $\mathcal{Z}$ and $\mathcal{J}^{2,2}$ is a diffusion on $[0,1]$ with infinitesimal generator $2x(1-x)\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+2(1-2x)\frac{\mathrm{d}}{\mathrm{d}x}$. Set $\lambda^{(1)}(x)=\lambda(x)\mathcal{J}^{2,2}_{\eta_{\lambda}(x)}$, $\lambda^{(2)}(x)=\lambda(x)-\lambda^{(1)}(x)$ and $t^{(i)}=K^{\mathcal{Z}^{(i)}}_{\lambda^{(i)}}(\infty)$ $(i=1,2)$; it holds that $T^{\mathcal{Z}^{(i)}}=\infty$ (Cf. [21]). Define

    Wtλ:={Φ(𝒵(1),λ(1),)(t),0tt(1),Φ(𝒵(2),λ(2),)(t(1)+t(2)t),t(1)<tt(1)+t(2);\displaystyle W^{\lambda}_{t}:=\begin{cases}\Phi(\mathcal{Z}^{(1)},\lambda^{(1)},\infty)(t),&\text{$0\leq t\leq t^{(1)}$},\\ \Phi(\mathcal{Z}^{(2)},\lambda^{(2)},\infty)(t^{(1)}+t^{(2)}-t),&t^{(1)}<t\leq t^{(1)}+t^{(2)};\end{cases} (5.1)
  • when α>0\alpha>0, set

    Wλ:=Φ(𝒵,λ).W^{\lambda}:=\Phi(\mathcal{Z},\lambda). (5.2)
Proposition 5.1 ([1, 21]).

Let (ϕ~(0),B[0,τuB],ϕ~(u))(\widetilde{\phi}^{(0)},B_{[0,\tau^{B}_{u}]},\widetilde{\phi}^{(u)}) be a Ray-Knight triple associated to BB. For any λ~\lambda\in\widetilde{\mathfrak{R}}, WλW^{\lambda} has the conditional law of B[0,τuB]B_{[0,\tau^{B}_{u}]} given ϕ~(u)=λ\widetilde{\phi}^{(u)}=\lambda.

We call $W^{\lambda}$ the self-repelling diffusion on ${\mathbb{R}}^{+}$. Recall the setting in §2.4. Let $\Omega_{k}$ be the collection of right-continuous, minimal, nearest-neighbor paths on ${\mathbb{N}}_{k}$. Fix $\lambda\in\widetilde{\mathfrak{R}}$. For $k\geq 1$, $x,y\in{\mathbb{N}}_{k}$ with $x\sim y$ and $\omega\in\Omega_{k}$ with $T^{\omega}>t$, set

Λtk(x,ω)=λ(x)2k0t1{ωs=x}ds,φtk(xy,ω)=2CxykΛtk(x,ω)Λtk(y,ω).\displaystyle\begin{split}&\Lambda^{k}_{t}(x,\omega)=\lambda(x)-2^{k}\int_{0}^{t}1_{\{\omega_{s}=x\}}\,\mathrm{d}s,\\ &\varphi_{t}^{k}(xy,\omega)=2C^{k}_{xy}\sqrt{\Lambda_{t}^{k}(x,\omega)\Lambda_{t}^{k}(y,\omega)}.\end{split} (5.3)

It is easy to check that $X^{\lambda,(k)}$ is a jump process on ${\mathbb{N}}_{k}$ with jump rates $r^{\lambda,(k)}(x,y)$, and with a resurrect mechanism and lifetime similar to those before, where

\[r^{\lambda,(k)}(x,y):=\left\{\begin{aligned}
&2^{2k-1}\sqrt{\dfrac{\Lambda^{k}_{t}(y)}{\Lambda^{k}_{t}(x)}}\cdot\dfrac{I_{\alpha-1}(\varphi^{k}_{t}(xy))}{I_{\alpha}(\varphi^{k}_{t}(xy))},&&\text{ if }y=\mathfrak{p}(x);\\
&2^{2k-1}\sqrt{\dfrac{\Lambda^{k}_{t}(y)}{\Lambda^{k}_{t}(x)}}\cdot\dfrac{I_{\alpha}(\varphi^{k}_{t}(xy))}{I_{\alpha-1}(\varphi^{k}_{t}(xy))},&&\text{ if }x=\mathfrak{p}(y).
\end{aligned}\right.\]
(5.4)
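A sketch evaluating (5.4) (Python with SciPy). Reading the prefactor $2^{2k-1}$ as the conductance $C^{k}_{xy}$ entering $\varphi^{k}$ in (5.3) is our assumption, made only for this illustration.

```python
from math import sqrt
from scipy.special import iv

def mesh_rate(alpha, k, lam_x, lam_y, toward_root):
    """Evaluate the jump rates (5.4) on N_k (illustrative; `toward_root`
    means y = p(x)).  We read the prefactor as C^k_xy = 2^{2k-1} and take
    phi^k = 2 C^k_xy sqrt(lam_x lam_y) as in (5.3) -- an assumption made
    here only for the sketch."""
    C = 2.0 ** (2 * k - 1)
    phi = 2 * C * sqrt(lam_x * lam_y)
    ratio = (iv(alpha - 1, phi) / iv(alpha, phi) if toward_root
             else iv(alpha, phi) / iv(alpha - 1, phi))
    return C * sqrt(lam_y / lam_x) * ratio

print(mesh_rate(alpha=0.5, k=1, lam_x=1.0, lam_y=0.8, toward_root=True))
```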

The proof of Theorem 2.10 follows the same routine as in [16], so we omit the details here.

Appendix A Processes with terminated jump rates

Let $G=(V,E)$ be a finite or countable graph in which every vertex has finite degree. All the processes considered in this part are assumed to be right-continuous, minimal, nearest-neighbour jump processes on $G$ with a finite or infinite lifetime. We consider the process to stay at a cemetery point $\Delta$ after its lifetime. The collection of all such sample paths is denoted by $\Omega$, and $\Omega_{\infty}:=\{\omega\in\Omega:\omega\text{ does not explode}\}$. Let $\{\mathscr{F}_{t}:t\geq 0\}$ be the natural filtration of the coordinate process on $\Omega_{\infty}$.

Let us introduce some notations that will be used throughout this part. For 0a<b0\leq a<b\leq\infty, [a,b[a,b\rangle represents the interval [a,b][a,b] or [a,b)[a,b) according as b<b<\infty or b=b=\infty. Let l0l\geq 0, t(0,]t\in(0,\infty], {ti}1il\{t_{i}\}_{1\leq i\leq l} be positive real numbers and {xi}0il\{x_{i}\}_{0\leq i\leq l} be vertices in VV, such that 0=:t0<t1<<tltl+1:=t0=:t_{0}<t_{1}<\cdots<t_{l}\leq t_{l+1}:=t and x0x1xlx_{0}\sim x_{1}\sim\cdots\sim x_{l}. Denote by σ(x0t1x1t2tlxl𝑡)\sigma(x_{0}\overset{t_{1}}{\rightarrow}x_{1}\overset{t_{2}}{\rightarrow}\cdots\overset{t_{l}}{\rightarrow}x_{l}\overset{t}{\rightarrow}) the function: [0,tV[0,t\rangle\rightarrow V, that equals xix_{i} on [ti,ti+1)[t_{i},t_{i+1}) (resp. [ti,ti+1[t_{i},t_{i+1}\rangle) for i=0,,l1i=0,\cdots,l-1 (resp. i=li=l).

Definition A.1 (LSC stopping time).

Let $T$ be a stopping time with respect to $(\mathscr{F}_{t}:t\geq 0)$ that is lower semi-continuous under the Skorokhod topology; we simply say $T$ is an LSC stopping time. The notation $1_{\{t<T(\omega_{[0,t]})\}}$ has the natural meaning when $T$ is a stopping time. We further say $T$ is regular if for any $\omega$ with $t<T(\omega)$ and $\omega_{t}=x$, there exists $l=l(\omega_{[0,t]})>0$ such that for any $0<s\leq l$ and $y\sim x$, it holds that $T\big(\omega_{[0,t]}\circ\sigma(x\overset{s}{\rightarrow}y\overset{\infty}{\rightarrow})\big)\geq t+l$. Intuitively, if we regard $T(\omega)$ as the lifetime of $\omega$, the above statement means that if $\omega$ is alive at time $t$, then as long as it makes exactly one jump during $[t,t+l]$, it is bound to stay alive until time $t+l$.

Definition A.2 (Terminated jump rates).

Given an LSC stopping time $T$, denote

𝒟T:={(t,x,y,ω)+×V×V×Ω:0t<T(ω),xy,ωt=x}.\mathscr{D}_{T}:=\big{\{}(t,x,y,\omega)\in{\mathbb{R}}^{+}\times V\times V\times\Omega_{\infty}:0\leq t<T(\omega),\,x\sim y,\,\omega_{t}=x\big{\}}.

A function $r=r_{t}(x,y)=r(t,x,y,\omega)\colon\mathscr{D}_{T}\rightarrow{\mathbb{R}}^{+}$ is called (a family of) $T$-terminated jump rates if (1) it is continuous with respect to the product topology, where $\Omega_{\infty}$ is equipped with the Skorokhod topology; (2) for any $x\sim y$, the process $\big(r_{t}(x,y)1_{\{t<T\}}:t\geq 0\big)$ is adapted to $(\mathscr{F}_{t})$. For such $r$, we also write $r_{t}(x,y,\omega)=r_{t}(x,y,\omega_{[0,t]})$ on $\{t<T(\omega)\}$.

From now on, we always assume that $T$ is an LSC stopping time and $r$ is a family of $T$-terminated jump rates.

Definition A.3 (Processes with terminated jump rates).

A process $Z=(Z_{s}:0\leq s<T^{Z})$ is said to have jump rates $r$ if for any $t\geq 0$, conditionally on $t<T^{Z}$ and $(Z_{s}:0\leq s\leq t)$ with $Z_{t}=x$,

  • (R1)

    the probability that the first jump of $Z$ after time $t$ occurs in $[t,t+\Delta t]$ and the jump is to a neighbour $y$ is $r_{t}(x,y,Z_{[0,t]})\Delta t+o(\Delta t)$, where $o(\Delta t)$ depends on the conditioned path $Z_{[0,t]}$,

and the process finally stops at time $T^{Z}$. Here $T^{Z}=T^{Z}_{0}\wedge T^{Z}_{\infty}$ with

T^{Z}_{0}:=\sup\{t\geq 0:t<T(Z_{[0,t]})\};\qquad T^{Z}_{\infty}:=\sup\{t\geq 0:Z_{[0,t]}\text{ has finitely many jumps}\}.
Remark A.4.

In the above definition, if $T$ is regular, then the condition (R1) can be replaced by (R2) below without affecting the law of the defined process (see Remark A.13 for details).

  • (R2)

    the probability of a jump of $Z$ from $x$ to a neighbour $y$ in $[t,t+\Delta t]$ is $r_{t}(x,y,Z_{[0,t]})\Delta t+o(\Delta t)$, where $o(\Delta t)$ depends on the conditioned path $Z_{[0,t]}$.

The following renewal property follows directly from the definition.

Proposition A.5 (Renewal property).

Suppose $Z$ is a process with jump rates $r$. Then for any $t\geq 0$, conditionally on $t<T^{Z}$ and $(Z_{s}:0\leq s\leq t)$, the process $(Z_{t+s}:0\leq s<T^{Z}-t)$ is a process with jump rates $r^{\prime}_{u}(x,y,\omega)=r_{t+u}(x,y,Z_{[0,t)}\circ\omega)$ starting from $Z_{t}$, where $Z_{[0,t)}\circ\omega$ is the function that equals $Z_{[0,t)}$ on $[0,t)$ and equals $\omega(\cdot-t)$ on $[t,\infty)$.

Now we tackle the basic problem: the existence and uniqueness of processes with jump rates $r$. Let $\mathscr{B}$ be the $\sigma$-field on $\Omega$ generated by the coordinate maps.

Theorem A.6.

For any $x_{0}\in V$, there exists a process with jump rates $r$ starting from $x_{0}$. Moreover, if $T$ is regular, then all such processes have the same distribution on $(\Omega,\mathscr{B})$.

Proof of the existence part of Theorem A.6.

As in the case of a CTMC, we can use a sequence of i.i.d. exponential random variables to construct the process. Precisely, let $(\gamma_{x}:x\sim x_{0})$ be a family of i.i.d. exponential random variables with parameter $1$. For $x\sim x_{0}$, we set

u_{x_{0}x}(t)=\int_{0}^{t}r_{s}(x_{0},x,\sigma(x_{0}\overset{\infty}{\rightarrow}))\,\mathrm{d}s\qquad\big(0\leq t<T(\sigma(x_{0}\overset{\infty}{\rightarrow}))\big)

and $\Gamma_{x_{0}x}=(u_{x_{0}x})^{-1}(\gamma_{x})$, where $(u_{x_{0}x})^{-1}$ is the right-continuous inverse and $\Gamma_{x_{0}x}:=\infty$ if $\gamma_{x}\geq u_{x_{0}x}\big(T(\sigma(x_{0}\overset{\infty}{\rightarrow}))-\big)$. The process starts at $x_{0}$. If $\Gamma_{x_{0}}:=\min_{x\sim x_{0}}\Gamma_{x_{0}x}=\infty$, the process stays at $x_{0}$ until the lifetime $T(\sigma(x_{0}\overset{\infty}{\rightarrow}))$. Otherwise, i.e. if $\Gamma_{x_{0}}<\infty$, it jumps at time $J_{1}:=\Gamma_{x_{0}}$ to $x_{1}$, the unique $x\sim x_{0}$ such that $\Gamma_{x_{0}x}=\Gamma_{x_{0}}$. For the second jump, the protocol is the same except that $x_{0}$ and $\sigma(x_{0}\overset{\infty}{\rightarrow})$ are replaced by $x_{1}$ and $\sigma(x_{0}\overset{J_{1}}{\rightarrow}x_{1}\overset{\infty}{\rightarrow})$ respectively.

In the same way that one verifies the jump rates of a CTMC constructed from a sequence of i.i.d. exponential random variables, it is simple to check that the process constructed above has jump rates $r$. ∎
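To make this construction concrete, here is a schematic simulation sketch under our own assumptions: a user-supplied function rates(t, path) returning the current rates $\{r_{t}(x,y,\cdot):y\sim x\}$ as a dictionary, a lifetime functional T(path), and a crude Euler discretization of the clocks $u_{x_{0}x}$. None of these names come from the paper.

import numpy as np

def sample_path(x0, rates, T, t_max=10.0, dt=1e-3, rng=None):
    """Approximate the exponential-clock construction on a time grid."""
    rng = rng or np.random.default_rng()
    path = [(0.0, x0)]                    # (jump time, state) pairs
    t, x = 0.0, x0
    clocks = {}                           # i.i.d. Exp(1) clocks gamma_y, one per neighbour
    while t < t_max and t < T(path):
        for y, r in rates(t, path).items():
            clocks.setdefault(y, rng.exponential(1.0))
            clocks[y] -= r * dt           # run gamma_y down by the integrated rate
        rung = [y for y, g in clocks.items() if g <= 0.0]
        if rung:                          # the first clock to ring gives the jump
            x = rung[0]
            path.append((t + dt, x))
            clocks = {}                   # fresh clocks after each jump
        t += dt
    return path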

The proof of the uniqueness part is deferred to §A.1, after the introduction of the path probability.

Remark A.7.

In this remark, we give rigorous definitions of the repelling processes presented in §2–§4. We only construct the process with law ${\mathbb{P}}^{\lambda}_{x_{0}}$ (defined in §2.3.1); the constructions of the others are similar. For $\lambda\in\mathfrak{R}$, let $T^{\lambda}(\omega):=\sup\{t\geq 0:\Lambda_{t}(\omega_{t-},\omega)>0\}$ and consider $r^{\lambda}$ (defined as (2.5)) restricted to $\mathscr{D}_{T^{\lambda}}$. Intuitively, $T^{\lambda}(\omega)$ represents the time when the process $\omega$ uses up the local time at some vertex. We can readily check that $T^{\lambda}$ is a regular LSC stopping time and $r^{\lambda}$ is a family of $T^{\lambda}$-terminated jump rates. We first run a process $Z^{1}$ with jump rates $r^{\lambda}$ starting from $x_{0}$ up to $T^{Z^{1}}$. If the death is due to the exhaustion of local time at some vertex $x\neq x_{0}$, i.e. $T^{Z^{1}}=T^{Z^{1}}_{0}$ and $Z^{1}_{T^{Z^{1}}-}=x$, we record the remaining local time at each vertex, say $\varrho$. Then we resurrect the process by letting it jump to $\mathfrak{p}(x)$ and running an independent process $Z^{2}$ with jump rates $r^{\varrho}$ starting from $\mathfrak{p}(x)$ up to $T^{Z^{2}}$. After the death, we resurrect the process again. This continues until the process explodes or exhausts the local time at $x_{0}$. This procedure defines a process, whose law is defined to be ${\mathbb{P}}^{\lambda}_{x_{0}}$. By Theorem A.6, such a process always exists and its law is determined by the definition.

As the analysis in §2 shows, when $\alpha>0$, $Z^{1}$ cannot die at $x\neq x_{0}$. So in this case we do not actually need the resurrection procedure, and $Z^{1}$ up to $T^{Z^{1}}$ has the law ${\mathbb{P}}^{\lambda}_{x_{0}}$.
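The resurrection procedure of Remark A.7 can be summarized by the following loop, a sketch under our own assumptions: run_until_death, remaining_local_time and parent are hypothetical helpers (passed in as arguments) that respectively simulate one run with jump rates $r^{\varrho}$ and report how it died, compute the leftover local time field, and return $\mathfrak{p}(x)$; they only mirror the verbal description above.

def sample_P_lambda(x0, lam, run_until_death, remaining_local_time, parent):
    """Concatenate runs with rates r^rho until explosion or exhaustion at x0."""
    rho, start, pieces = lam, x0, []
    while True:
        piece, death = run_until_death(start, rho)  # one run, plus how it died
        pieces.append(piece)
        if death.exploded or death.vertex == x0:    # stop: explosion, or local
            return pieces                           # time at x0 is used up
        rho = remaining_local_time(piece, rho)      # leftover field at the death time
        start = parent(death.vertex)                # resurrect at p(x)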

Based on Theorem A.6, if $T$ is regular, then when considering a process with terminated jump rates we can always work with the special realization presented in the proof of the existence part above. The following three corollaries follow immediately from this perspective.

Corollary A.8.

Suppose $Z$ is a process with jump rates $r$ starting from $x_{0}$. Let $J_{1}$ be the first jump time of $Z$. Then

{\mathbb{P}}\left(J_{1}\in\mathrm{d}t,\,Z_{J_{1}}=x\right)=1_{\{t<T(\sigma(x_{0}\overset{\infty}{\rightarrow}))\}}\exp\Big(-\int_{0}^{t}\sum_{y:y\sim x_{0}}r_{s}(x_{0},y,\sigma(x_{0}\overset{\infty}{\rightarrow}))\,\mathrm{d}s\Big)\cdot r_{t}(x_{0},x,\sigma(x_{0}\overset{\infty}{\rightarrow}))\,\mathrm{d}t.  (A.1)
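As a sanity check (ours, not from the paper), when the rates depend neither on time nor on the path, say $r_{s}(x_{0},y,\cdot)\equiv q_{x_{0}y}$ and $T\equiv\infty$, (A.1) reduces to the familiar first-jump law of a CTMC:

{\mathbb{P}}\left(J_{1}\in\mathrm{d}t,\,Z_{J_{1}}=x\right)=e^{-q_{x_{0}}t}\,q_{x_{0}x}\,\mathrm{d}t,\qquad q_{x_{0}}:=\sum_{y:y\sim x_{0}}q_{x_{0}y},

i.e. an exponential holding time with parameter $q_{x_{0}}$ together with the usual jump distribution $q_{x_{0}x}/q_{x_{0}}$.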
Corollary A.9 (Strong renewal property).

Suppose $Z$ is a process on $G$ starting from $x_{0}$ with jump rates $r$. Then for any stopping time $S$ with respect to the natural filtration of $Z$, conditionally on $S<T^{Z}$ and $(Z_{s}:0\leq s\leq S)$, the process $(Z_{S+s}:0\leq s<T^{Z}-S)$ is a process with jump rates $r^{\prime}_{t}(x,y,\omega)=r_{t+S}(x,y,Z_{[0,S)}\circ\omega)$ starting from $Z_{S}$.

The proof of the above strong renewal property is almost a word-for-word copy of the proof of the strong Markov property of CTMCs presented in [18, Theorem 6.5.4].

The next corollary gives a more accurate bound on the probability in (R1) and (R2). We state it only for the event in (R2). For $t\geq 0$ and $\sigma\in\Omega_{t}:=\{\omega_{[0,t]}:\omega\in\Omega_{\infty}\}$, we use $\sigma^{\rightarrow}$ to denote the function in $\Omega_{\infty}$ that equals $\sigma$ on $[0,t]$ and stays at $\sigma_{t}$ after time $t$.

Corollary A.10.

Suppose $Z$ is a process with jump rates $r$ starting from $x_{0}$. Then for any $\sigma\in\Omega_{t}$ with $\sigma_{t}=x$ and $t<T(\sigma)$, if we denote $R(z)=\exp\Big(-\int_{t}^{t+\Delta t}r_{s}(x,z,\sigma^{\rightarrow})\,\mathrm{d}s\Big)$ for $z\sim x$, it holds that

{\mathbb{P}}\left(\text{there is a jump of }Z\text{ from }x\text{ to }y\text{ in }[t,t+\Delta t]\,\big|\,Z_{[0,t]}=\sigma\right)\in\bigg[\Big(\prod_{z:z\sim x,\,z\neq y}R(z)\Big)\cdot\big(1-R(y)\big),\ 1-R(y)\bigg].

A.1 Path probability

Assume $T$ is regular. Let $Z$ be a process with jump rates $r$ starting from $x_{0}$. For any Borel subsets $\{D_{i}\}_{1\leq i\leq l}$ of ${\mathbb{R}}^{+}$, $t\in(0,\infty]$ and $\{x_{i}\}_{1\leq i\leq l}$ with $x_{0}\sim x_{1}\sim\cdots\sim x_{l}$, denote

A\big(x_{0}\overset{D_{1}}{\rightarrow}x_{1}\overset{D_{2}}{\rightarrow}\cdots\overset{D_{l}}{\rightarrow}x_{l}\overset{t}{\rightarrow}\big):=\Big\{\sigma(x_{0}\overset{s_{1}}{\rightarrow}x_{1}\overset{s_{2}}{\rightarrow}\cdots\overset{s_{l}}{\rightarrow}x_{l}\overset{t}{\rightarrow}):0<s_{1}<\cdots<s_{l}\leq t\text{ and }s_{i}\in D_{i}\ \forall\,1\leq i\leq l\Big\}.  (A.2)

The goal of this part is to calculate the path probability

{\mathbb{P}}\Big(Z_{[0,t]}\in A\big(x_{0}\overset{D_{1}}{\rightarrow}x_{1}\overset{D_{2}}{\rightarrow}\cdots\overset{D_{l}}{\rightarrow}x_{l}\overset{t}{\rightarrow}\big)\Big).

The key point of the calculation is the following lemma. At first glance the lemma may seem obvious; the main difficulty is the dependence of the $o(\Delta t)$ term in (R1) on the conditioned path $Z_{[0,t]}$.

Lemma A.11.

Conditionally on $t<T^{Z}$ and $(Z_{s}:0\leq s\leq t)$ with $Z_{t}=x$, the probability that $Z$ has at least $2$ jumps during $[t,t+\Delta t]$ and the first jump is to $y$ is $O((\Delta t)^{2})$, where $O((\Delta t)^{2})$ depends on the conditioned path.

Remark. Without the regularity of $T$, it may happen that the conditional probability of two jumps in $[t,t+\Delta t]$ is comparable to $\Delta t$. For example, consider the following path-dependent Poisson process. Starting from $0$, it jumps to $1$ with rate $1$. Its jump rate at $1$ is given by:

r_{s}\big(1,2,\sigma(0\overset{t}{\rightarrow}1\overset{\infty}{\rightarrow})\big):=\frac{1}{2t-s}\quad\text{for }t>0\text{ and }t\leq s\leq 2t.

Let us consider the special realization presented in the proof of Theorem A.6 with the above rates. Then, conditionally on the process jumping from $0$ to $1$ at time $t$, it is bound to jump from $1$ to $2$ before time $2t$. So the probability of two jumps in $[0,\Delta t]$ is no less than $1-e^{-\Delta t/2}$.
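Indeed (filling in the computation), conditionally on the jump to $1$ at time $t$, the probability of staying at $1$ throughout $[t,2t)$ is $\exp\big(-\int_{t}^{2t}\frac{\mathrm{d}s}{2t-s}\big)=0$, since

\int_{t}^{2t}\frac{\mathrm{d}s}{2t-s}=\lim_{\varepsilon\downarrow 0}\int_{t}^{2t-\varepsilon}\frac{\mathrm{d}s}{2t-s}=\lim_{\varepsilon\downarrow 0}\log\frac{t}{\varepsilon}=\infty;

hence the second jump occurs before $2t$ almost surely, and on $\{J_{1}\leq\Delta t/2\}$, an event of probability $1-e^{-\Delta t/2}$, both jumps occur before $\Delta t$.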

Before the proof, we mention a simple result. Recall the notation $\sigma^{\rightarrow}$. It is easy to see that given $s>t\geq 0$ and $\sigma\in\Omega_{t}$ with $\sigma_{t}=x$ and $s<T(\sigma^{\rightarrow})$,

{\mathbb{P}}\left(Z_{v}=x\text{ on }[t,s]\,\big|\,Z_{[0,t]}=\sigma\right)=\exp\Big(-\int_{t}^{s}\sum_{y:y\sim x}r_{u}(x,y,\sigma^{\rightarrow})\,\mathrm{d}u\Big).  (A.3)

To see this, one can consider the left-hand side of (A.3) as a function of $s$ and formulate a differential equation.
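Explicitly (a step we spell out), write $p(s)$ for the left-hand side of (A.3). By (R1) and the renewal property, the probability of no jump in $[s,s+\Delta s]$ is $1-\sum_{y:y\sim x}r_{s}(x,y,\sigma^{\rightarrow})\Delta s+o(\Delta s)$, so

p(s+\Delta s)=p(s)\Big(1-\sum_{y:y\sim x}r_{s}(x,y,\sigma^{\rightarrow})\,\Delta s+o(\Delta s)\Big),\qquad p^{\prime}(s)=-p(s)\sum_{y:y\sim x}r_{s}(x,y,\sigma^{\rightarrow}),

which with $p(t)=1$ has the exponential on the right-hand side of (A.3) as its unique solution.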

Proof of Lemma A.11.

The renewal property (in Proposition A.5) enables us to consider only the case $t=0$. Let $J_{1}$ and $J_{2}$ be the first and second jump times of $Z$ respectively, and $B=B(y):=\big\{J_{1},J_{2}\in[0,\Delta t],\,Z_{J_{1}}=y\big\}$ for $y\sim x_{0}$. The event $B$ can be divided according to the jump times as follows. For $n\geq 1$, define

B_{j,n}:=\Big\{J_{1}\in\Big(\frac{j-1}{2^{n}}\Delta t,\frac{j}{2^{n}}\Delta t\Big],\,J_{2}\in\Big(\frac{j}{2^{n}}\Delta t,\Delta t\Big],\,Z_{J_{1}}=y\Big\}\quad(1\leq j\leq 2^{n}-1).

Set $B_{n}:=\bigcup_{j=1}^{2^{n}-1}B_{j,n}$. Then it holds that $B_{n}\uparrow B$. So it suffices to show that there exists a constant $C$ independent of $n$ such that

{\mathbb{P}}(B_{n})\leq C(\Delta t)^{2}\quad\text{for all }n\geq 1.

For any $j$ and $n$ with $1\leq j\leq 2^{n}-1$, let $B_{j,n}^{(1)}$ be the event that $J_{1}\in\big(\frac{j-1}{2^{n}}\Delta t,\frac{j}{2^{n}}\Delta t\big]$ and $Z_{u}=y$ for $u\in\big[J_{1},\frac{j}{2^{n}}\Delta t\big]$. Then

{\mathbb{P}}\left(B_{j,n}\right)={\mathbb{P}}\Big(J_{2}\in\Big(\frac{j}{2^{n}}\Delta t,\Delta t\Big]\,\Big|\,B_{j,n}^{(1)}\Big)\cdot{\mathbb{P}}\Big(B_{j,n}^{(1)}\Big).

For the first probability on the right-hand side, observe that if we further condition on $J_{1}$, then we have conditioned on the whole path $(Z_{s}:0\leq s\leq\frac{j}{2^{n}}\Delta t)$. By the regularity of $T$, for $\Delta t$ sufficiently small, $T(\sigma(x_{0}\overset{u}{\rightarrow}y\overset{\infty}{\rightarrow}))>\Delta t$ for any $0<u\leq\Delta t$. Then it follows from (A.3) that the conditional probability of $J_{2}\in\big(\frac{j}{2^{n}}\Delta t,\Delta t\big]$ is

1-\exp\bigg(-\int_{(\frac{j}{2^{n}}\Delta t,\Delta t]}\sum_{z:z\sim y}r_{v}\big(y,z,(Z_{[0,\frac{j}{2^{n}}\Delta t]})^{\rightarrow}\big)\,\mathrm{d}v\bigg)\leq C\Delta t,  (A.4)

where $C=\sup\Big\{\sum_{z:z\sim y}r_{v}(y,z,\sigma(x_{0}\overset{u}{\rightarrow}y\overset{\infty}{\rightarrow})):0\leq u\leq v\leq\Delta t\Big\}$, which is finite by the continuity of $r$. The same bound also holds for ${\mathbb{P}}\Big(J_{2}\in\big(\frac{j}{2^{n}}\Delta t,\Delta t\big]\,\Big|\,B_{j,n}^{(1)}\Big)$. So

{\mathbb{P}}(B_{n})=\sum_{j=1}^{2^{n}-1}{\mathbb{P}}(B_{j,n})\leq C\Delta t\sum_{j=1}^{2^{n}-1}{\mathbb{P}}\Big(B_{j,n}^{(1)}\Big)\leq C\Delta t\cdot{\mathbb{P}}(J_{1}\in[0,\Delta t],\,Z_{J_{1}}=y)=C\Delta t\cdot\left(r_{0}(x_{0},y,\sigma(x_{0}\overset{\infty}{\rightarrow}))\Delta t+o(\Delta t)\right)\leq C^{\prime}(\Delta t)^{2}.

That completes the proof. ∎

Corollary A.12.

Conditionally on $t<T^{Z}$ and $(Z_{s}:0\leq s\leq t)$ with $Z_{t}=x$, the probability that $Z$ has exactly one jump during $[t,t+\Delta t]$ and the jump is to $y$ is $r_{t}(x,y,Z_{[0,t]})\Delta t+o(\Delta t)$, where $o(\Delta t)$ depends on the conditioned path.

Let us start the calculation of the path probability. The case $l=0$ has been covered by (A.3). For $l\geq 1$, Corollary A.12 enables us to use the method of formulating differential equations to calculate path probabilities. Fix $x_{1}\sim x_{0}$ and $0<t_{1}<t_{2}\leq\infty$ with $t_{2}<T(\sigma(x_{0}\overset{t_{1}}{\rightarrow}x_{1}\overset{\infty}{\rightarrow}))$. By the lower semi-continuity of $T$, there exists $0<u<t_{2}-t_{1}$ such that for any $0\leq s<u$, $t_{2}<T\big(\sigma(x_{0}\overset{t_{1}+s}{\rightarrow}x_{1}\overset{\infty}{\rightarrow})\big)$. For such $s$, let us calculate

q(s):={\mathbb{P}}\Big(Z_{[0,t_{2}]}\in A\big(x_{0}\overset{[t_{1},t_{1}+s]}{\rightarrow}x_{1}\overset{t_{2}}{\rightarrow}\big)\Big).

For $0<\Delta s<u-s$,

q(s+\Delta s)-q(s)={\mathbb{P}}\Big(Z_{[0,t_{2}]}\in A\big(x_{0}\overset{[t_{1}+s,t_{1}+s+\Delta s]}{\rightarrow}x_{1}\overset{t_{2}}{\rightarrow}\big)\Big)=:q_{1}q_{2}q_{3},

where $q_{1}={\mathbb{P}}(E_{1})$, $q_{2}={\mathbb{P}}(E_{2}\,|\,E_{1})$, $q_{3}={\mathbb{P}}(E_{3}\,|\,E_{1}\cap E_{2})$ with

E_{1}=\{Z_{u}=x_{0}\text{ for }u\in[0,t_{1}+s]\},
E_{2}=\{Z\text{ has exactly one jump during }[t_{1}+s,t_{1}+s+\Delta s]\text{ and the jump is to }x_{1}\},
E_{3}=\{Z_{u}=x_{1}\text{ for }u\in[t_{1}+s+\Delta s,t_{2}]\}.

It holds that

\left\{\begin{aligned}&q_{1}=\exp\Big(-\int_{0}^{t_{1}+s}\sum_{y:y\sim x_{0}}r_{v}\big(x_{0},y,\sigma(x_{0}\overset{\infty}{\rightarrow})\big)\,\mathrm{d}v\Big);\\&q_{2}=r_{t_{1}+s}\big(x_{0},x_{1},\sigma(x_{0}\overset{\infty}{\rightarrow})\big)\cdot\Delta s+o(\Delta s);\\&q_{3}=\exp\Big(-\int_{t_{1}+s}^{t_{2}}\sum_{y:y\sim x_{1}}r_{v}\big(x_{1},y,\sigma(x_{0}\overset{t_{1}+s}{\rightarrow}x_{1}\overset{\infty}{\rightarrow})\big)\,\mathrm{d}v\Big)+o(1),\end{aligned}\right.  (A.5)

where $q_{1}$ and $q_{2}$ follow from (A.3) and Corollary A.12 respectively. For $q_{3}$, it suffices to note that for any $0\leq h\leq\Delta s$, conditionally on $Z_{[0,t_{1}+s+\Delta s]}=\sigma(x_{0}\overset{t_{1}+s+h}{\rightarrow}x_{1}\overset{t_{1}+s+\Delta s}{\rightarrow})$, the probability of $E_{3}$ is

\exp\Big(-\int_{t_{1}+s+\Delta s}^{t_{2}}\sum_{y:y\sim x_{1}}r_{v}\big(x_{1},y,\sigma(x_{0}\overset{t_{1}+s+h}{\rightarrow}x_{1}\overset{\infty}{\rightarrow})\big)\,\mathrm{d}v\Big),

which together with the continuity of $r$ easily leads to the expression for $q_{3}$.

By further considering the case $\Delta s<0$, we can formulate a differential equation whose solution gives: for $0\leq s<u$,

q(s)=\int_{t_{1}}^{t_{1}+s}\exp\Big(-\int_{0}^{t_{2}}\sum_{y:y\sim\sigma_{v}}r_{v}(\sigma_{v},y,\sigma)\,\mathrm{d}v\Big)\cdot r_{s^{\prime}}(x_{0},x_{1},\sigma)\,\mathrm{d}s^{\prime},

where $\sigma=\sigma(x_{0}\overset{s^{\prime}}{\rightarrow}x_{1}\overset{\infty}{\rightarrow})$, which varies with $s^{\prime}$. Observe that the lower semi-continuity of $T$ implies that $\{s^{\prime}\in[0,t_{2}]:t_{2}<T(\sigma(x_{0}\overset{s^{\prime}}{\rightarrow}x_{1}\overset{t_{2}}{\rightarrow}))\}$ is a relatively open subset of $[0,t_{2}]$. So we can readily deduce that for any Borel subset $D$ of ${\mathbb{R}}^{+}$ and $x_{1}\in V$,

{\mathbb{P}}\left(Z_{[0,t]}\in A\left(x_{0}\overset{D}{\rightarrow}x_{1}\overset{t}{\rightarrow}\right)\right)=\int_{D}1_{\{s^{\prime}<t<T(\sigma)\}}\exp\Big(-\int_{0}^{t}\sum_{y:y\sim\sigma_{v}}r_{v}(\sigma_{v},y,\sigma)\,\mathrm{d}v\Big)\cdot r_{s^{\prime}}(x_{0},x_{1},\sigma)\,\mathrm{d}s^{\prime}.

Similarly, we can inductively check that for Borel subsets $D_{i}$ of ${\mathbb{R}}^{+}$ and $x_{i}\in V$ ($1\leq i\leq l$),

{\mathbb{P}}\left(Z_{[0,t]}\in A_{t}\right)=\idotsint_{D_{1}\times\cdots\times D_{l}}1_{\{s_{1}<\cdots<s_{l}<t<T(\sigma)\}}\cdot\prod_{j=1}^{l}r_{s_{j}}(x_{j-1},x_{j},\sigma)\cdot\exp\Big(-\int_{0}^{t}\sum_{y:y\sim\sigma_{v}}r_{v}(\sigma_{v},y,\sigma)\,\mathrm{d}v\Big)\cdot\prod_{j=1}^{l}\mathrm{d}s_{j},  (A.6)

where $\sigma=\sigma(x_{0}\overset{s_{1}}{\rightarrow}x_{1}\overset{s_{2}}{\rightarrow}\cdots\overset{s_{l}}{\rightarrow}x_{l}\overset{t}{\rightarrow})$ and $A_{t}=A\big(x_{0}\overset{D_{1}}{\rightarrow}x_{1}\overset{D_{2}}{\rightarrow}\cdots\overset{D_{l}}{\rightarrow}x_{l}\overset{t}{\rightarrow}\big)$.
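As a consistency check (ours), take $T\equiv\infty$ and time-homogeneous, path-independent rates $r_{v}(x,y,\sigma)\equiv q_{xy}$. With $s_{0}:=0$, $s_{l+1}:=t$ and $q_{x}:=\sum_{y:y\sim x}q_{xy}$, (A.6) becomes

{\mathbb{P}}\left(Z_{[0,t]}\in A_{t}\right)=\idotsint_{D_{1}\times\cdots\times D_{l}}1_{\{s_{1}<\cdots<s_{l}<t\}}\cdot\prod_{j=1}^{l}q_{x_{j-1}x_{j}}\cdot\exp\Big(-\sum_{j=0}^{l}q_{x_{j}}(s_{j+1}-s_{j})\Big)\cdot\prod_{j=1}^{l}\mathrm{d}s_{j},

which is the classical finite-dimensional density of a CTMC path.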

As $(x_{i})$, $(D_{i})$ and $t$ vary, the sets $A_{t}=A\big(x_{0}\overset{D_{1}}{\rightarrow}x_{1}\overset{D_{2}}{\rightarrow}\cdots\overset{D_{l}}{\rightarrow}x_{l}\overset{t}{\rightarrow}\big)$ constitute a $\pi$-system. The $\sigma$-field generated by these sets on $(\Omega,\mathscr{B})$ contains all sets of the form $\{\omega:\omega_{s}=x\}$ ($x\in V$, $s\geq 0$) and hence equals $\mathscr{B}$. So (A.6) determines the law of $Z$. This completes the proof of the uniqueness part of Theorem A.6.

Remark A.13.

Observe that if we replace (R1) by (R2) in Definition A.3, then following the same routine, we obtain the same path probability (A.6). In fact, the only difference in the proof is that in (A.3) '$=$' should be replaced by '$\geq$', and the later proofs still work with minor modifications. This easily leads to the statement in Remark A.4.

Acknowledgments. The authors are grateful to Prof. Elie Aïdékon for introducing this project and for inspiring discussions. The authors would also like to thank Prof. Jiangang Ying and Shuo Qin for helpful discussions and valuable suggestions.

Funding. The first author is partially supported by the Fundamental Research Funds for the Central Universities. The second author is partially supported by NSFC, China (No. 11871162).

References

  • [1] Aïdékon, E., Hu, Y. and Shi, Z. (2022+). The stochastic Jacobi flow. In preparation.
  • [2] Aldous, D. J. (1998). Brownian excursion conditioned on its local time. Electronic Communications in Probability 3, 79–90.
  • [3] Chang, Y. and Le Jan, Y. (2016). Markov loops in discrete spaces. Probability and Statistical Physics in St. Petersburg 91, 215.
  • [4] Davis, B. and Volkov, S. (2002). Continuous time vertex-reinforced jump processes. Probability Theory and Related Fields 123, 281–300.
  • [5] Davis, B. and Volkov, S. (2004). Vertex-reinforced jump processes on trees and finite graphs. Probability Theory and Related Fields 128, 42–62.
  • [6] Eisenbaum, N. (1994). Dynkin's isomorphism theorem and the Ray-Knight theorems. Probability Theory and Related Fields 99, 321–335.
  • [7] Eisenbaum, N., Kaspi, H., Marcus, M. B., Rosen, J. and Shi, Z. (2000). A Ray-Knight theorem for symmetric Markov processes. The Annals of Probability 28, 1781–1796.
  • [8] Feller, W. (1957). An Introduction to Probability Theory and Its Applications, Vol. 1. John Wiley, New York.
  • [9] Huang, R., Kious, D., Sidoravicius, V. and Tarrès, P. (2018). Explicit formula for the density of local times of Markov jump processes. Electronic Communications in Probability 23, 1–7.
  • [10] Knight, F. B. (1998). On the upcrossing chains of stopped Brownian motion. In Séminaire de Probabilités XXXII, 343–375. Springer, Heidelberg.
  • [11] Lawler, G. F. (2018). Notes on the Bessel process. http://www.math.uchicago.edu/~lawler/bessel18new.pdf.
  • [12] Le Jan, Y. (2011). Markov Paths, Loops and Fields: École d'Été de Probabilités de Saint-Flour XXXVIII–2008, Vol. 2026. Springer, Heidelberg.
  • [13] Le Jan, Y. (2017). Markov loops, coverings and fields. Annales de la Faculté des sciences de Toulouse: Mathématiques, Ser. 6, 26, 401–416.
  • [14] Lupu, T. (2016). From loop clusters and random interlacements to the free field. The Annals of Probability 44, 2117–2146.
  • [15] Lupu, T., Sabot, C. and Tarrès, P. (2019). Inverting the coupling of the signed Gaussian free field with a loop-soup. Electronic Journal of Probability 24, 1–28.
  • [16] Lupu, T., Sabot, C. and Tarrès, P. (2021). Inverting the Ray-Knight identity on the line. Electronic Journal of Probability 26, 1–25.
  • [17] Marcus, M. B. and Rosen, J. (2006). Markov Processes, Gaussian Processes, and Local Times, Vol. 100. Cambridge University Press, Cambridge.
  • [18] Norris, J. R. (1998). Markov Chains, Vol. 2. Cambridge University Press, Cambridge.
  • [19] Pitman, J. and Yor, M. (1999). The law of the maximum of a Bessel bridge. Electronic Journal of Probability 4, 1–35.
  • [20] Sabot, C. and Tarrès, P. (2016). Inverting Ray-Knight identity. Probability Theory and Related Fields 165, 559–580.
  • [21] Warren, J. and Yor, M. (1998). The Brownian burglar: conditioning Brownian motion by its local time process. In Séminaire de Probabilités XXXII, 328–342. Springer, Heidelberg.
  • [22] Werner, W. (2016). On the spatial Markov property of soups of unoriented and oriented loops. In Séminaire de Probabilités XLVIII, Lecture Notes in Mathematics, 481–503.
  • [23] Werner, W. and Powell, E. (2020). Lecture notes on the Gaussian free field. arXiv preprint.
  • [23] {barticle}[author] \bauthor\bsnmWerner, \bfnmWendelin\binitsW. and \bauthor\bsnmPowell, \bfnmEllen\binitsE. (\byear2020). \btitleLecture notes on the Gaussian free field. \bjournalarXiv: Probability. \endbibitem