Inverting Ray-Knight identities on trees
Abstract
In this paper, we first introduce the Ray-Knight identity and the percolation Ray-Knight identity related to loop soups with intensity $\alpha$ on trees. Then we present the inversions of the above identities, which are expressed in terms of repelling jump processes. In particular, the inversion in the case $\alpha=0$ gives the conditional law of a continuous-time Markov chain given its local time field. We further show that the fine mesh limits of these repelling jump processes are the self-repelling diffusions [21, 1] involved in the inversion of the Ray-Knight identity on the corresponding metric graph. This generalizes results in [20, 15, 16], where the authors explore the case $\alpha=1/2$ on a general graph. Our construction is different from that of [20, 15] and is based on the link between random networks and loop soups.
1 Introduction
Imagine a Brownian crook who spent a month in a large metropolis. The number of nights he spent in hotels A, B, C, etc. is known; but not the order, nor his itinerary. So the only information the police has is the total hotel bills. This vivid story is quoted from [21], which is also the paper the name ‘Brownian burglar’ comes from. In [21], Warren and Yor constructed the Brownian burglar to describe the law of reflected Brownian motion on the positive half line conditioned on its local time process (also called the occupation time field). Meanwhile, Aldous [2] used the tree structure of the Brownian excursion to show that the genealogy of the conditioned Brownian motion is a time-changed Kingman coalescent. The article [21] can be viewed as a construction of the process in time, while [2] can be viewed as a construction in space. Then a natural question arises.
(Q1) How can we describe the law of a continuous-time Markov chain (CTMC) conditionally on its occupation time field?
This problem can actually be seen as a special case of a more general class of recovery problems that we now explain. The occupation time field of a CTMC is considered in the generalized second Ray-Knight theorem, which provides an identity between the law of the sum of half of a squared Gaussian free field (GFF) with boundary condition $0$ and the occupation time field of an independent Markovian path on one hand, and the law of half of a squared GFF with boundary condition $\sqrt{2u}$ on the other hand (see [7, 6, 17]). We call this identity a Ray-Knight identity. It is well-known that the occupation time field of a loop soup with intensity $1/2$ is distributed as half of a squared GFF by Le Jan’s isomorphism [12]. Therefore the generalized second Ray-Knight identity can also be stated using a loop soup with intensity $1/2$. In the case of a loop soup with arbitrary intensity $\alpha>0$, an analogous identity holds: adding to the occupation time field of a loop soup with ‘boundary condition’ $0$ the occupation time field of an independent CTMC run until local time $u$ gives the distribution of the occupation time field of a loop soup with ‘boundary condition’ $u$; see Proposition 2.2 below for a precise statement. We call any such identity a Ray-Knight identity. Inverting the Ray-Knight identity refers to recovering the CTMC conditioned on the total occupation time field.
Vertex-reinforced jump processes (VRJP), conceived by Werner and first studied by Davis and Volkov [4, 5], are continuous-time jump processes favouring sites with higher local times. Surprisingly, Sabot and Tarrès [20] found that a time change of a variant of VRJP provides an inversion of the Ray-Knight identity on a general graph in the case $\alpha=1/2$. It is natural to wonder whether an analogous description holds for an arbitrary intensity $\alpha$.
(Q2) For any $\alpha>0$, how can we describe the process that inverts the Ray-Knight identity related to the loop soup with intensity $\alpha$?
Note that (Q1) can be viewed as the special case $\alpha=0$ of (Q2) if we generalize (Q2) to $\alpha=0$. Intuitively, when $\alpha=0$, the external interference of the loop soup disappears. Hence the problem reduces to extracting the CTMC from its own occupation time field. Another equivalent interpretation of (Q2) is to recover the loop soup with intensity $\alpha$ conditioned on its local time field.
In [14], Lupu gave a ‘signed version’ of Le Jan’s isomorphism in which the loop soup at intensity $1/2$ is naturally coupled with a signed GFF. In [15, Theorem 8], Lupu, Sabot and Tarrès gave the corresponding version of the Ray-Knight identity (we call it a percolation Ray-Knight identity). Besides the identity of local time fields, by adding a percolation along with the Markovian path and finally sampling signs in every connected component of the percolation, one can start with a GFF with boundary condition $0$ and end up with a GFF with boundary condition $\sqrt{2u}$. The inversion of the signed isomorphism is carried out in that paper and involves another type of self-interacting process [15, §3]. This leads to the following question.
(Q3) Can we generalize the percolation Ray-Knight identity to the case of the loop soup with intensity $\alpha$? If so, how can we describe the process that inverts the percolation Ray-Knight identity?
The analogous problems can also be considered for Brownian motion and the Brownian loop soup. In [16], Lupu, Sabot and Tarrès constructed a self-repelling diffusion out of a divergent Bass-Burdzy flow which inverts the Ray-Knight identity related to the GFF on the line, and showed that this self-repelling diffusion is the fine mesh limit of the vertex repelling jump processes involved in the case of grid graphs on the line. More generally, it was shown in [21, 1] that the self-repelling diffusion inverting the Ray-Knight identity on the positive half line can be constructed from the Jacobi flow.
We want to explore the relationship between the repelling jump processes in (Q1)-(Q3) and the self-repelling diffusions in [1, 2, 21]. Our last question is
(Q4) Are the fine mesh limits of the repelling jump processes involved in (Q1)-(Q3) the self-repelling diffusions?
In this paper, we focus on nearest-neighbour CTMCs on a tree and give a complete answer to the above questions (Q1)-(Q4). It is shown that the percolation Ray-Knight identity has a simple form in this case; see Theorem 2.4. We construct two kinds of repelling jump processes, namely the vertex repelling jump process and the percolation-vertex repelling jump process, that invert the Ray-Knight identity and the percolation Ray-Knight identity related to the loop soup respectively, and show that the fine mesh limits of these repelling jump processes are the self-repelling diffusions involved in the inversion of the Ray-Knight identity on the associated metric graph. The inversion processes in the case of general graphs, which involve non-local jump rates, are constructed in another paper of ours in preparation; for this reason we restrict our discussion to trees here.
The main feature of this paper is the intuitive way of constructing the repelling jump processes, which is rather different from [20, 15]. It is inspired by the recovery of a loop soup given its local time field (see [23, §2.5] and [22, Proposition 7]), where Werner introduces the crossings of the loop soup, which greatly simplify the recovery. In our case, the introduction of crossings translates the problem into a ‘discrete-time version’ of inverting the Ray-Knight identity, which can be stated as recovering the path of a discrete-time Markov chain conditioned on the number of crossings over each edge; see Proposition 3.4. This inversion has a surprisingly nice description, which can be seen as a ‘reversed’ oriented version of the edge-reinforced random walk.
The paper is organized as follows. In §2, we introduce the Ray-Knight identity and percolation Ray-Knight identity related to loop soup and give the main results of the paper. In §3-4, the vertex repelling jump process and the percolation-vertex repelling jump process are shown to invert the Ray-Knight identity and the percolation Ray-Knight identity respectively. In §5, we verify that the mesh limits of repelling jump processes are the self-repelling diffusions. In Appendix A, we give the rigorous definition and basic properties of a class of processes, called processes with terminated jump rates, covering the repelling jump processes.
2 Statements of main results
In this section, we first recall the Ray-Knight identity. Then we introduce a new Ray-Knight identity that we call percolation Ray-Knight identity. Finally, we present our results concerning the inversion of these identities and the fine mesh limit of the inversion processes.
Notations
We will use the following notations throughout the paper. For any stochastic process on some state space and a specified point of that space, we denote by:
• the local time of the process (when the state space is discrete, the local time at a point is simply the occupation time there);
• the right-continuous inverse of the local time;
• the lifetime of the process;
• the hitting time of a point;
• (when the state space is discrete) the -th jump time of the process;
• the path of the process up to a given time.
The superscripts in the above notations are omitted when the process in question is the CTMC to be introduced immediately.
2.1 Ray-Knight identity related to loop soup
Consider a tree , i.e. a finite or countable connected graph without cycles, with root . Denote by its set of vertices, and by , resp. , its set of undirected, resp. directed, edges. Assume that every vertex has finite degree. We write if is an ancestor of , and if and are neighbours. Denote by the parent of . For , we simply write for a directed edge. The tree is endowed with a killing measure on and conductances on . We do not assume the symmetry of the conductances for the moment. Write for .
Consider the CTMC on which, being at , jumps to with rate and is killed with rate . Its lifetime is the time when the process is killed or explodes. Let be the unrooted oriented loop soup with some fixed intensity $\alpha$ associated to . (See for example [13] for the precise definition.)
Denote by the occupation time field of , i.e. for all , is the sum of the local times at of the loops in . It is well-known that when is transient, follows a Gamma distribution (the density of the Gamma$(a,b)$ distribution at $t>0$ being $b^{a}t^{a-1}e^{-bt}/\Gamma(a)$), where is the Green function of ; when is recurrent, for all a.s. We first suppose is transient, which ensures that the conditional distribution of given exists. For , let have the law of given . Without particular mention, we always assume that starts from . The next proposition (see [15, Proposition 3.7] or [3, Proposition 5.3]) connects the path of with the loops in that visit .
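As a quick numerical aside before the proposition: on a finite tree, the Green function entering the Gamma law above can be computed by a direct matrix inversion. The sketch below is ours, under an assumed normalization of the generator and of the Gamma scale (see [13] for the precise conventions).

```python
import numpy as np

# A sketch (under an assumed normalization; see [13]) of the Green function
# entering the Gamma law above, on the finite path tree 0 - 1 - 2 rooted at 0,
# with unit conductances and killing only at the root.
W = np.array([[0.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])          # conductances
k = np.array([0.5, 0.0, 0.0])            # killing measure
minus_L = np.diag(W.sum(axis=1) + k) - W # minus the generator of the CTMC
G = np.linalg.inv(minus_L)               # Green function G(x, y)
# Under this normalization, the occupation field of the loop soup of
# intensity alpha at x would be Gamma(alpha)-distributed with scale G(x, x).
print(np.diag(G))
```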
Proposition 2.1.
For any , consider the path conditioned on . Let be a Poisson-Dirichlet partition with parameter , independent of . Set . Then the family of unrooted loops
is distributed as all the loops in that visit , where is the quotient map that maps a rooted loop to its corresponding unrooted loop.
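The Poisson-Dirichlet partitioning in Proposition 2.1 can be sampled via the standard GEM stick-breaking representation of the one-parameter Poisson-Dirichlet distribution. In the sketch below, taking the stick-breaking parameter equal to the soup intensity $\alpha$ is our reading of the statement (whose parameter did not survive extraction); for $\alpha=0$ the partition is trivial and no sampling is needed.

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_dirichlet_weights(theta, eps=1e-12):
    """One-parameter Poisson-Dirichlet weights via GEM stick-breaking:
    V_i ~ Beta(1, theta) i.i.d., W_i = V_i * prod_{j<i}(1 - V_j),
    truncated once the remaining stick is below eps."""
    weights, stick = [], 1.0
    while stick > eps:
        v = rng.beta(1.0, theta)
        weights.append(stick * v)
        stick *= 1.0 - v
    return np.array(weights)

# Split a total local time u at the root into loop contributions.
alpha, u = 0.5, 3.0                         # theta = alpha is our assumption
pieces = u * poisson_dirichlet_weights(alpha)
print(len(pieces), round(pieces.sum(), 6))  # pieces sum to u (up to truncation)
```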
By Proposition 2.1, we can take and to be the collection of loops in and the loops derived by partitioning , where and are required to be independent. This special choice of provides a continuous version of the conditional distribution, so we work with this version from now on. Note that the above definition also makes sense when $\alpha=0$. In this case, and consists of a single loop (the Poisson-Dirichlet partition with parameter $0$ is considered as the trivial partition). So we also allow $\alpha=0$ from now on. The generalized second Ray-Knight theorem related to loop soup reads as follows, which is direct from the above definition.
Proposition 2.2 (Ray-Knight identity).
Let and be independent. Then for , conditionally on ,
2.2 Percolation Ray-Knight identity related to loop soup
In this part, we assume the symmetry of the conductances (i.e. for any ) and that . An element of is also called a configuration on . When , the edge is thought of as being open. A configuration is also used to stand for the set of open edges it induces, without particular mention. A percolation on refers to a random configuration on .
Definition 2.3.
For , let be the percolation on such that, conditionally on :
• edges are open independently;
• the edge is open with probability
(2.1)
where $I_{\nu}$ is the modified Bessel function of the first kind: for $\nu>-1$ and $z\ge 0$,
$$I_{\nu}(z)=\sum_{k=0}^{\infty}\frac{1}{k!\,\Gamma(k+\nu+1)}\Big(\frac{z}{2}\Big)^{2k+\nu}.$$
Consider the loop soup and the percolation . For , define the aggregated local times
(2.2)
The process is defined as follows: . Conditionally on and , if and , then
• it jumps to its neighbour with rate , and is then set to (if it was not already);
• in case , is set to without jumping with rate
(2.3)
where .
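To make Definition 2.3 concrete, here is a minimal sampling sketch: conditionally on the local times, each edge is opened independently with the probability prescribed by (2.1). Since the formula (2.1) is not reproduced above, the opening probability is passed in as a function; the toy usage plugs in the $\alpha=1/2$ probability $1-e^{-2W_{xy}\sqrt{\ell_x\ell_y}}$ from [15], and for general $\alpha$ one would substitute the Bessel-quotient expression (2.1).

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_percolation(edges, ell, p_open):
    """Open each edge {x, y} of the tree independently with probability
    p_open(ell[x], ell[y], W_xy), a stand-in for formula (2.1)."""
    return {(x, y): rng.random() < p_open(ell[x], ell[y], W)
            for (x, y, W) in edges}

# Toy usage on the path 0 - 1 - 2 with unit conductances; for concreteness we
# plug in the alpha = 1/2 opening probability of [15].
edges = [(0, 1, 1.0), (1, 2, 1.0)]
ell = {0: 2.0, 1: 1.5, 2: 0.7}
p_half = lambda lx, ly, W: 1.0 - np.exp(-2.0 * W * np.sqrt(lx * ly))
print(sample_percolation(edges, ell, p_half))
```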
Theorem 2.4 (Percolation Ray-Knight identity).
With the notations above, conditionally on , has the same law as .
Theorem 2.4 will be proved in §4. The process has a natural interpretation in terms of loop soup on a metric graph. Specifically, let be the metric graph associated to , where edges are considered as intervals, so that one can construct a Brownian motion which moves continuously on , and whose print on the vertices is distributed as (see §4 for details). Let be the unrooted oriented loop soup with intensity associated to . Starting with the loop soup with boundary condition at and an independent Brownian motion starting at , one can consider the field which is the aggregated local time at of the loop soup and the Brownian motion up to time . Then one can construct as the configuration where an edge is open if the field does not have any zero on the edge . See Proposition 4.1.
Remark 2.5.
Since the laws involved in the Ray-Knight and percolation Ray-Knight identities do not depend on , we can generalize the above results to the case where is recurrent.
2.3 Inversion of Ray-Knight identities
To ease the presentation, write for from now on. Theorem 2.4 allows us to identify with , which we will do.
Definition 2.6.
We call the triple
a Ray-Knight triple (with parameter ) associated to . Similarly, recalling the notation , the triple
will be called a percolation Ray-Knight triple.
Inverting the Ray-Knight identity, resp. the percolation Ray-Knight identity, means deducing the conditional law of , resp. , given , resp. .
We introduce an adjacency relation on : for , and are neighbours if they satisfy one of the following: (1) and ; (2) and differ by exactly one edge and . This defines a graph with finite degree, and is a nearest-neighbour jump process on .
The inversion of Ray-Knight identities is expressed in terms of several processes defined by jump rates. Readers are referred to Appendix A for the rigorous definition and basic properties of such processes. All the continuous-time processes defined below are assumed to be right-continuous, minimal, nearest-neighbour jump processes with a finite or infinite lifetime. The collection of all such sample paths on (resp. ) is denoted by (resp. ).
Given , set for with and with ,
(2.4)
Intuitively, is viewed as the initial local time field and stands for the remaining local time field while running the process until time . Although these quantities depend on , we will systematically drop it from the notation for the sake of concision whenever the process is clear from the context.
2.3.1 Inversion of Ray-Knight identity
Keep in mind that most of the following definitions have a parameter , which is always omitted in notations for simplicity.
Set
Note that with probability , . We will take as the range of and consider only the law of given .
Now we define the vertex repelling jump process we are interested in. Given , its distribution on is such that the process starts at and behaves as follows:
• conditionally on and with , it jumps to a neighbour of with rate ;
• (resurrection mechanism) every time and , it jumps to at time ,
and it stops at time , where for with and with ,
(2.5)
and with
Here represents the time when the local time at is exhausted. The process can be roughly described as follows: the total local time available at each vertex is given at the beginning. As the process runs, it eats the local time. The jump rates are given in terms of the remaining local time. It finally stops whenever the available local time at is used up or an explosion occurs.
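The following discretized simulation sketch mirrors this verbal description (it is an illustration, not the rigorous construction, which is given in Appendix A). The jump-rate function is a placeholder argument, since the true rates (2.5) involve Bessel-function ratios of the remaining local times; the local-time budget, the resurrection to the parent and the stopping rule at the root follow the description above.

```python
import numpy as np

rng = np.random.default_rng(2)

def run_repelling(tree, root, budget, rate, dt=1e-3, t_max=1e3):
    """Discretized sketch of the verbal description above.
    tree: vertex -> (parent, children); budget: initial local time field;
    rate(x, y, rem): jump rate from x to neighbour y given the remaining
    local-time field rem -- a placeholder for (2.5)."""
    x, rem, t, path = root, dict(budget), 0.0, [root]
    while t < t_max:
        rem[x] -= dt                      # the process eats local time at x
        if rem[x] <= 0:
            if x == root:                 # local time at the root exhausted: stop
                return path
            x = tree[x][0]                # resurrection: jump to the parent
            path.append(x)
            continue
        parent, children = tree[x]
        nbrs = ([parent] if parent is not None else []) + list(children)
        for y in nbrs:                    # small-dt Bernoulli step per neighbour
            if rng.random() < rate(x, y, rem) * dt:
                x = y
                path.append(x)
                break
        t += dt
    return path

tree = {0: (None, [1]), 1: (0, [2]), 2: (1, [])}
rate = lambda x, y, rem: 1.0              # constant placeholder rate
print(run_repelling(tree, 0, {0: 1.0, 1: 0.6, 2: 0.3}, rate)[:12])
```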
Remark 2.7.
It holds that and as ,
Hence,
(2.6)
The different behaviors in (2.6) for $\alpha>0$ and $\alpha=0$ indicate different behaviors of the vertex repelling jump process in the two cases. Intuitively speaking, when $\alpha>0$, as the process tends to stay at some vertex, the jump rate away from it goes to infinity. So it actually does not need the resurrection mechanism in this case, and there is still some positive local time left at any vertex other than the root at the end. In the case of $\alpha=0$, the process exhibits a different picture. Contrary to the case $\alpha>0$, the jump rate to the children goes to infinity as the local time runs out. So intuitively the process is ‘pushed’ towards the boundary and exhausts the available local time at one of the boundary vertices. To guide the process back to the root, we resurrect it by letting it jump to the parent of the vertex. In this way, the process finally ends up exhausting all the available local time at each vertex.
By Remark 2.7, we can see that the vertex repelling jump process is in line with our intuition about the inversion of the Ray-Knight identity. Since in the case $\alpha>0$ the given time field includes the external time field of the loop soup, the inversion process will only use part of the local time at any vertex other than the root and will end up exhausting the local time at the root. In the case $\alpha=0$, the given time field is exactly the local time field of the CTMC itself, so the inversion process will certainly use up the local time at each vertex before finally stopping at the root.
Theorem 2.8.
Suppose is a Ray-Knight triple associated to . For any , the conditional distribution of given and is .
2.3.2 Inversion of percolation Ray-Knight identity
Assume the symmetry of the conductances and that . Our goal is to introduce the percolation-vertex repelling jump process that inverts the percolation Ray-Knight identity. Given and a configuration on , the distribution on is such that the process (the first coordinate being a jump process on and the second a percolation on ) starts at and moves such that, conditionally on and with , it jumps from to with rate
(2.7)
and stops at the time when the process explodes or uses up the local time at . Here for , , are defined as , respectively.
Theorem 2.9.
Suppose is a percolation Ray-Knight triple associated to . For any and configuration on , the conditional distribution of given and is .
2.4 Mesh limit of vertex repelling jump processes
We only tackle the case of simple random walks on dyadic grids, which can be easily generalized to general CTMCs on trees. Let be a reflected Brownian motion on . View as the ‘root’ and let be a Ray-Knight triple associated to (defined in a similar way to that for CTMCs). The conditional law of given is the self-repelling diffusion , which can be constructed with the burglar process. See §5 for details.
Denote . Consider , where , endowed with conductances on each edge and no killing. The induced CTMC is the print of on . Let be a Ray-Knight triple associated to . It holds that
where and are considered to be linearly interpolated outside .
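The ‘print’ operation used here is easy to illustrate: given a finely sampled continuous trajectory, one records the nearest-neighbour chain of grid points it successively hits. A small sketch (ours, with a crude Euler sampling of reflected Brownian motion):

```python
import numpy as np

rng = np.random.default_rng(3)

def grid_print(path, mesh):
    """Nearest-neighbour chain of grid levels successively hit by a finely
    sampled path; assumes at most one level is crossed per sampling step."""
    chain = [int(round(path[0] / mesh))]
    for v in path:
        k = chain[-1]
        if v >= (k + 1) * mesh:
            chain.append(k + 1)
        elif v <= (k - 1) * mesh:
            chain.append(k - 1)
    return chain

# Reflected Brownian motion on [0, infinity), crudely sampled.
dt, steps = 1e-5, 200_000
B = np.abs(np.cumsum(rng.normal(0.0, np.sqrt(dt), steps)))
print(grid_print(B, mesh=2.0**-3)[:20])
```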
In view of this, for any non-negative, continuous function on with some additional conditions, we naturally consider the vertex repelling jump process , which has the conditional law of given . The jump rates of the process are given in (5.4).
Theorem 2.10.
For (defined in §5), the family of vertex repelling jump processes converges weakly, as the mesh size tends to , to the self-repelling diffusion for the uniform topology, where the processes are assumed to stay put after their lifetimes.
3 Inverting Ray-Knight identity
In this section, we will obtain the inversion of the Ray-Knight identity. The main idea is to first introduce the information on crossings and explore the law of the CTMC conditioned on both local times and crossings. Then, by ‘averaging over crossings’, we obtain the representation of the inversion as the vertex repelling jump process stated in Theorem 2.8.
To begin with, observe that it suffices to consider only the case when is recurrent. In fact, for , we set , where is the law of starting from . Let be the CTMC on starting from induced by the conductances and no killing. Then by a standard argument, we can show that is recurrent and has the law of conditioned on . Note that can also be obtained by removing the killing rate at from the -transform of (the latter process is killed at with rate , where and ). Combining the two facts that (1) the law of the loop soup is invariant under the -transform (cf. [3, Proposition 3.2]) and (2) the law of the Ray-Knight triple does not depend on the killing rate at , we see that the Ray-Knight triple associated to has the same law as that associated to . It is then easy to deduce Theorem 2.8 in the transient case from that in the recurrent case.
Throughout this section, we will assume is recurrent.
3.1 The representation of the inversion as a vertex-edge repelling process
Definition 3.1.
An element is called a network (on ). For any network , set
and for , and . If for any in , we say that the network is sourceless, denoted by . Given , we call a sourceless -random network associated to if is a sourceless network, and are independent with following the Bessel distribution. (For $\nu>-1$ and $a>0$, the Bessel$(\nu,a)$ distribution is the distribution on $\mathbb{N}=\{0,1,2,\dots\}$ given by
(3.1) $\mathrm{P}(n)=\dfrac{(a/2)^{2n+\nu}}{I_{\nu}(a)\,n!\,\Gamma(n+\nu+1)}$;
the Bessel$(\nu,0)$ distribution is defined to be the Dirac measure at $0$.)
More generally, let be the unique self-avoiding path from to , also seen as a collection of unoriented edges. For , we say that has sources , denoted by , if for any with ,
Given and , we call an -random network with sources associated to if is a network with sources and are independent with following the Bessel , distribution if , and the Bessel , distribution otherwise.
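For numerical experiments with these random networks, the Bessel probability mass function reconstructed in (3.1) above can be evaluated stably in log scale; its normalizing constant is exactly the modified Bessel function $I_\nu$. A short sketch:

```python
import numpy as np
from scipy.special import iv, gammaln

def bessel_pmf(n, nu, a):
    """pmf of the Bessel(nu, a) distribution at n = 0, 1, 2, ...,
    P(n) = (a/2)^(2n+nu) / (I_nu(a) n! Gamma(n+nu+1)),
    with Bessel(nu, 0) the Dirac mass at 0."""
    if a == 0:
        return 1.0 if n == 0 else 0.0
    log_num = (2 * n + nu) * np.log(a / 2.0) - gammaln(n + 1) - gammaln(n + nu + 1)
    return float(np.exp(log_num) / iv(nu, a))

print([round(bessel_pmf(n, 0.5, 2.0), 4) for n in range(6)])
print(sum(bessel_pmf(n, 0.5, 2.0) for n in range(200)))  # ~ 1.0
```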
Remark. We will sometimes use the convention that a network with sources is a sourceless network.
Every loop configuration (i.e. a collection of unrooted, oriented loops) induces a network : for ,
Due to the tree structure, it holds that for any , i.e. is sourceless.
Let be the Ray-Knight triple associated to , where is the local time field of a loop soup independent of . The path is also viewed as a loop configuration consisting of a single loop. Let . We have the following result, the proof of which is contained in §3.1.1; [10, Theorem 3.1] provides another proof in a special case.
Proposition 3.2.
For , conditionally on , is a sourceless -random network associated to .
With this proposition, for the recovery of , it suffices to derive the law of given and . Set
Given , for and with , set
(3.2)
which represents the remaining crossings while running the process until time with the initial crossings . Recall the notations introduced at the beginning of §2.3.1. Given and , the vertex-edge repelling jump process is defined to be a process starting from that behaves as follows:
• conditionally on and with , it jumps to a neighbour of with rate ;
• every time and , it jumps to at time ,
and it stops at time when the process exhausts the local time at or explodes. Here for with ,
(3.3)
In the above expression, is viewed as a network, and is defined as before.
Intuitively, for this process, both the local time and the crossings available are given at the beginning. The process eats the local time during its stays at vertices and consumes crossings at its jumps over edges.
Theorem 3.3.
For any and , has the law of conditioned on and .
3.1.1 Proof of Proposition 3.2
For a subset of , the print of on is by definition
(3.4)
We can also naturally define the print of a (rooted or unrooted) loop. The print of a loop configuration is the collection of the prints of the loops in the configuration. Recall that consists of the partition of the path of and an independent loop soup . By the excursion theory (resp. basic properties of Poisson random measures), the prints of (resp. ) on the different branches at (a branch at a vertex being defined as a connected component of the tree obtained by removing that vertex, to which we add the vertex back) are independent. In particular, for any , is independent of the prints of on the branches that do not contain . By considering other vertices as the root of , we readily obtain that given , are independent and the conditional law of depends only on and .
Now it reduces to considering the law of conditioned on and . We focus on the case of only, since it is the same for other edges. It holds that:
(i) has a Poisson distribution with parameter ;
(ii) equals a sum of i.i.d. exponential random variables with parameter ;
(iii) follows a Gamma(, ) distribution (a Gamma(, ) r.v. is interpreted as a r.v. identically equal to ); in fact, follows a Gamma() distribution, where is the Green function of the process killed at , and the recurrence of implies that ;
(iv) the above exponential random variables, and are mutually independent.
It follows that the conditional law of is the same as the conditional law of given
when has Poisson() distribution, has Gamma(, ) distribution, have Exponential() distribution and are mutually independent. By directly writing the density of and then conditioning on the sum being , we can readily obtain that the conditional distribution is Bessel () (see also [8, §2.7]). We have thus proved the proposition.
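The conditioning mechanism of this proof is easy to test by simulation. In the sketch below, the parameters (Poisson rate $\lambda$, Gamma shape $a$, unit-rate exponentials, target sum $t$) are illustrative assumptions, the actual ones being expressed through the conductance and the Green function; with these choices, writing out the densities as in the proof gives the conditional law Bessel$(a-1,\,2\sqrt{\lambda t})$ for the Poisson count, which a crude binned Monte Carlo reproduces.

```python
import numpy as np
from scipy.special import iv, gammaln

rng = np.random.default_rng(4)

# Monte Carlo check, under illustrative parameters, that conditioning
#   N ~ Poisson(lam), T ~ Gamma(a, 1), E_1, E_2, ... ~ Exp(1) (independent)
# on T + E_1 + ... + E_N ~ t yields N ~ Bessel(a - 1, 2 sqrt(lam t)).
lam, a, t, eps, M = 1.3, 0.5, 2.0, 0.02, 200_000
N = rng.poisson(lam, M)
S = rng.gamma(a, 1.0, M) + np.array([rng.gamma(n, 1.0) if n else 0.0 for n in N])
kept = N[np.abs(S - t) < eps]            # crude conditioning by binning

def bessel_pmf(n, nu, z):
    return float(np.exp((2 * n + nu) * np.log(z / 2.0) - gammaln(n + 1)
                        - gammaln(n + nu + 1)) / iv(nu, z))

for n in range(4):
    print(n, round((kept == n).mean(), 3),
          round(bessel_pmf(n, a - 1.0, 2.0 * np.sqrt(lam * t)), 3))
```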
3.1.2 Proof of Theorem 3.3
The recovery of given and is carried out in the following two steps: (1) reconstruct the jump chain of conditioned on and ; (2) assign the holding times before every jump conditioned on , and the jump chain. We shall prove that the process recovered by the above two steps is exactly the vertex-edge repelling jump process.
For step (1), it is easy to see that the conditional law of the jump chain actually depends only on . Moreover, let be distributed as the jump chain of . For , let . Fixing a sourceless network , set . We consider up to . Let have the law of the discrete loop soup induced by , and be independent of . Set , where and have the obvious meaning. Denote by the process conditioned on . Then has the same law as the jump chain of conditioned on . The following proposition plays a central role in the inversion. We present a combinatorial proof later.
Proposition 3.4.
The law of can be described as follows. It starts from . Conditionally on with and :
• if , it jumps to a neighbour of with probability
• if and , it jumps to the parent of (this can only happen when ).
It finally stops at time . Here
Now turn to step (2), i.e. recovering the jump times.
Proposition 3.5.
Given , and the jump chain of , denote by the number of visits to by the jump chain and the -th holding time at vertex . Note that for each in the case of . The following hold:
(i) have the same law as i.i.d. uniform random variables on ranked in ascending order, and ;
(ii) for any visited by , follows a Dirichlet distribution with parameter
Here the -variable Dirichlet distribution with parameter is interpreted as the -variable Dirichlet distribution with parameter .
Proof.
is distributed as the jump times of a Poisson process on conditioned on there being exactly jumps during this time interval. This yields (i).
For , has the law of i.i.d. exponential variables with parameter conditioned on the sum of them and an independent Gamma(, ) random variable being (recall that a Gamma() r.v. is identically equal to ). Here we use the fact that every visit, whether by or , is accompanied by an exponential holding time with parameter , and the accumulated local time of the one-point loops at provides a Gamma(, ) time duration. ∎
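The two distributional facts just proved translate into a direct sampling recipe for step (2). In this sketch, the Dirichlet weight of the leftover Gamma component is set to $\alpha$ for illustration, the exact parameter of the one-point-loop contribution not being legible above.

```python
import numpy as np

rng = np.random.default_rng(5)

u, k = 2.0, 5                              # budget and number of visits at the root
cuts = np.sort(rng.uniform(0.0, u, k - 1)) # (i): ordered uniforms on [0, u]
holding_root = np.diff(np.concatenate(([0.0], cuts, [u])))  # spacings = holding times

alpha, v = 0.5, 1.4                        # budget at another vertex, visited k times
# (ii): Dirichlet(1, ..., 1, alpha); the last coordinate is the local time left
# to the one-point loops (its weight alpha is an illustrative assumption).
fractions = rng.dirichlet(np.concatenate((np.ones(k), [alpha])))
holding_x = v * fractions[:k]
print(holding_root.sum(), round(holding_x.sum() + v * fractions[-1], 6))
```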
Propositions 3.4 and 3.5 give a representation of the inversion of the Ray-Knight identity in terms of its jump chain and holding times. Using this, we can readily calculate the jump rates.
Proof of Theorem 3.3.
It suffices to show that the jump chain and holding times of are given by Propositions 3.4 and 3.5 respectively. As shown in the proof of Theorem A.6, can be realized by a sequence of i.i.d. exponential random variables. It is direct from this realization that the jump chain of coincides with that in Proposition 3.4. To see this, one only needs to note that for any fixed and with and , the jump rate is proportional to for . So it remains to check the holding times. We consider given and . Let be the -th holding time at . For , set . By Proposition 3.5, conditionally on all the previous holding times, follows a Beta distribution. We can readily check that
where () and is an exponential random variable with parameter . These are exactly the holding times of . We have thus proved the theorem. ∎
The remaining part is devoted to the proof of Proposition 3.4.
Case of . Condition on with and . It holds that the remaining path of is completed by uniformly choosing a path with edge crossings , since the conditional probabilities of all such paths are the same, equal to
So for the probability of the next jump, it suffices to count, for any , the number of all possible paths with edge crossings whose first jump is to . Here we use the same idea as in the proof of [9, Proposition 2.1]. This number equals the number of relative orders of exiting each vertex satisfying (1) the first exit from is to ; (2) the last exit from any is to . In particular, it is proportional to the number of relative orders of exits at the vertex satisfying the above conditions, which equals
where . It follows that the conditional probability that is
(3.5)
Case of . (1) The conditional transition probability at . We will first deal with the conditional transition probability given with and . We further condition on the remaining crossings of , i.e. . Note that . In particular, it holds that for any . By (3.5), the conditional probability that is
which is independent of the further condition and hence gives the conclusion.
(2) The conditional transition probability at . Recall that the law of the Ray-Knight triple is independent of . In this case, it is easier to consider the process killed at with rate . With an abuse of notation, we still use and to denote this process and its associated loop soup respectively (such notations are only used in this part). The main idea is that the recovery of the Markovian path can be viewed as the recovery of the discrete loop soup instead. We choose to work on the extended graph to make sure that the probability of a loop configuration is proportional to as . When choosing an outgoing crossing at right after a crossing to , there is a unique choice that leads to one more loop in the configuration than the others. This unique choice is always to . So the conditional transition probability from to is proportional to . Let us make some preparations first.
(a) Concatenation process . First we give a representation of the path as a concatenation of loops in a loop soup as follows. Let be the discrete loop soup associated to . We focus on the loops in that visit . Denote by the number of such loops. For each of them, we uniformly and independently root it at one of its visits to . Then we choose uniformly at random (among all choices) an order for the rooted loops, labelled in order by , and concatenate them:
We call the concatenation process of . It can easily be deduced from the properties of the discrete loop soup that the path between consecutive visits to in any has the same law as an excursion of at . Thus, conditionally on , has the same law as . Consequently, we have the following corollary.
Corollary 3.6.
Given a network , denote by the process conditioned on . Then has the same law as .
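The construction of the concatenation process is itself a short algorithm. The sketch below represents each loop as a cyclic list of vertices through the root, re-roots it uniformly at one of its visits to the root, and concatenates the loops in a uniform random order:

```python
import random

random.seed(6)

def concatenation_process(loops, root):
    """Root each loop (a cyclic vertex list containing `root`) uniformly at
    one of its visits to `root`, order the rooted loops uniformly at random,
    and concatenate them into a single path from the root back to the root."""
    rooted = []
    for loop in loops:
        visits = [i for i, v in enumerate(loop) if v == root]
        i = random.choice(visits)          # uniform re-rooting
        rooted.append(loop[i:] + loop[:i])
    random.shuffle(rooted)                 # uniform order of concatenation
    return [v for loop in rooted for v in loop] + [root]

loops = [[0, 1, 0, 2], [0, 1], [0, 2, 0, 1, 0, 3]]
print(concatenation_process(loops, root=0))
```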
(b) Pairing on the extended graph. To further explore the law of , we need to introduce the extended graphs of and the definition of ‘pairing’. Let . Replace each edge of by copies. The graph thus obtained, denoted by , is an extended graph (of ). The collection of all directed edges in is denoted by . The graph is equipped with the killing measure , the Dirac measure at , and for any , the conductance on each one of the directed edges from to is . Any element in is called a network on . We will use to denote a deterministic network on . For a network on , the projection of on is a network on defined by:
In the following, we only focus on the network . For such a network , denote . For simplicity, we omit the superscript ‘’ for directed edges throughout this part.
Definition 3.7.
Given a sourceless network , a pairing of is defined to be a bijection from to , such that for any , the image of is a directed edge whose head is the tail of .
Given , a pairing of and a subset of determine a set of loops and bridges. Precisely, for each , following the pairing on , we arrive at a new (directed) edge . Continuing to keep track of the pairing on the new edge , we arrive at another edge . This procedure stops when we arrive either at the initial edge again, or at an edge , because we lose the information about the pairing on . In the former case, a loop is obtained. In the latter case, we get a path whose first edge is and whose last edge is . Any two such paths are either disjoint, or one is a part of the other. This naturally determines a partial order. All the maximal elements with respect to this partial order, together with the loops obtained in the former case, form the set of bridges and loops determined by .
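The tracing step of this construction is mechanical, as the following sketch shows: starting from a directed edge, we follow the pairing and report either a completed loop or a bridge, the latter stopping at the first edge on which the pairing is not revealed. (The extraction of the maximal bridges with respect to the partial order is omitted.)

```python
def trace_pairing(pairing, known, start):
    """Follow a pairing (directed edge -> next directed edge) from `start`,
    using only the edges in `known` (the subset A on which the pairing is
    revealed). Returns ('loop', edges) if the initial edge is reached again,
    or ('bridge', edges) if an edge outside `known` is reached first."""
    walk, e = [start], pairing[start]
    while e != start:
        walk.append(e)
        if e not in known:                 # pairing on e unknown: the path stops
            return ('bridge', walk)
        e = pairing[e]
    return ('loop', walk)

# Toy pairing cycling through four directed edges a -> b -> c -> d -> a.
pairing = {'a': 'b', 'b': 'c', 'c': 'd', 'd': 'a'}
print(trace_pairing(pairing, known={'a', 'b', 'c', 'd'}, start='a'))  # a loop
print(trace_pairing(pairing, known={'a', 'b'}, start='a'))            # a bridge
```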
Now we start the proof. Let be the CTMC on starting from induced by the conductances and killing measure . Consider the discrete loop soup associated to , the projection of which on has the same law as . Let be its concatenation process. By Corollary 3.6, it suffices to consider the law of (the projection of) given .
Observe that conditionally on with , for the next step, it will definitely jump along its present loop in , which is a loop visiting both and . Therefore, we focus on . In the following, loop configurations always refer to configurations consisting only of loops visiting . Note that for any , given , the probability that uses every edge at most once tends to as . So by the standard arguments (see [22, 23]), it suffices to show that for any with and for any , when further conditioned on , the transition probability of at is given by the statements in the proposition. It is easily seen that, due to the independence of and , the transition probability at is independent of the earlier condition.
A key observation is that conditionally on , the probability of a loop configuration of is proportional to . In fact, is a Poisson random measure on the space of loops visiting . The intensity of a loop is the product of the conductances of the edges divided by the multiplicity of the loop. Note that for any configuration with , the multiplicities of the loops in are all . Hence, the probability that is
where is the probability that is empty.
Let be the pairing of induced by . Namely, is defined to be the edge right after in the same loop in . We are interested in the conditional law of given the path .
Let


Note that the path already determines . We further condition on . It holds that determines a set of loops and bridges from and to (see Figure 1(a)). The loops are exactly the loops in that have been completely crossed by , and the bridges are partitions of the remaining loops in (see Figure 1(b)). Denote for . Then there are exactly bridges, including exactly bridges whose first edge enters , for any . Moreover, there is exactly one bridge partly crossed by the path . This bridge is a part of the loop that is walking along at time . So it visits by the construction of , which implies that the first edge of the bridge enters due to the tree structure.
Now we focus on the conditional law of the pairing of these bridges, i.e. the law of , where is the collection of the last edges in the above bridges. Note that consists of the first edges in these bridges. Denote and , where and we assign the subscripts such that:
• and are in the same bridge ();
• (so the bridge containing and is exactly the unique bridge partly crossed by ).
The totality of bijections from to is denoted by . Every pairs the bridges into a loop configuration. We simply call it the configuration completed by . It is easy to see that this defines a one-to-one correspondence between and all the possible configurations obtained by pairing these bridges.
Recall that the probability of a loop configuration of is proportional to . So the conditional probability that equals a fixed is proportional to , where . Set . For with , a bijection from to can be defined as follows. For any , is defined by exchanging the image of and . Precisely,
We can readily check that when , for ,
Hence, if we denote by the conditional probability that , then for , and for . Thus,
It follows that the conditional probability of is
where the path is understood as its projection on , and the first equality is due to the fact that enters . The conditional probability is independent of the further conditioning on . That completes the proof.
3.2 The representation of the inversion as a vertex repelling jump process
Let be a sourceless -random network associated to as in Definition 3.1 and be a process distributed, conditionally on , as . By Proposition 3.2 and Theorem 3.3, has the law of conditioned on . The goal of this subsection is to show the following proposition, so as to obtain Theorem 2.8. Recall the distribution introduced in §2.3.1.
Proposition 3.8.
For any , has the law .
In other words, is a jump process on with jump rates given by (2.5). Recall the definition of and in (2.4) and (3.2) respectively. The key to Proposition 3.8 is the following lemma, which reveals a renewal property of the remaining crossings of , i.e. . The proof is given in §3.2.1.
Lemma 3.9.
For any , conditionally on and , the network is an -random network with sources associated to .
Remark. In the statement of the proposition and below, a network with sources has to be understood as a sourceless network.
Proof of Proposition 3.8.
By Lemma 3.9, for , conditionally on and with , for any :
• if , follows a Bessel(, ) distribution;
• if , follows a Bessel(, ) distribution.
Note that if further conditioned on , the process jumps to at time with rate
So using Corollary A.10 and averaging over , we get that the probability of a jump of from to during is
which gives the jump rates (2.5). ∎
3.2.1 Proof of Lemma 3.9
For any , set
(3.6)
In particular, and . First let us generalize the notations , and . Setting aside the original definitions, we construct , and , a family of random processes, random networks and probability measures respectively, on the same measurable space, such that for any , , , under :
• is a process that starts at , has the jump rates (as defined in (3.3)) and the same resurrection mechanism as the vertex-edge repelling jump process, and stops at the time when the process exhausts the local time at or explodes;
• is an -random network with sources associated to ;
• is a process distributed, conditionally on , as .
It is easy to see that for and , under , , and are consistent with the original definition.
We start the proof with an observation. To emphasize the tree where is defined, let us write . Any connected subgraph containing is automatically equipped with conductances and no killing. The induced CTMC is exactly the print of on (recall (3.4)). The following restriction principle is a simple consequence of Theorem 3.3 and the excursion theory.
Proposition 3.10 (Restriction principle).
For any connected subgraph containing , the print of on (similarly defined as (3.4)) has the same law as .
Observe that by the restriction principle, it suffices to tackle the case where is finite, which we will assume henceforth.
For , consider under . In the following, we simply write for and denote . For , let for . We will show that under :
(i) for any , conditionally on on , is an -random network with sources associated to ;
(ii) for any , conditionally on and , is an -random network with sources associated to .
Note that once (i) and (ii) are proved, we have, conditionally on a stay or jump at the beginning of :
(a) the remaining crossings are distributed as an -random network associated to the remaining local time field;
(b) the process in the future is distributed as under , where is the remaining local time and equals or the vertex it jumps to accordingly. In fact, it is simple to deduce from the strong renewal property of (see Corollary A.9) an analogous property for that reads as follows: for any stopping time , conditionally on and with and , the process after , i.e. , has the same law as under , where is a random network following the conditional distribution of . Then the statement follows from (a).
Under , iteratively using (a) and (b), we see that after a chain of stays or jumps, remains distributed as an -random network associated to the remaining local time, which leads to the conclusion.
We present the proof of (ii), and the proof of (i) is similar. First consider the case of . Notice that this event has a positive probability under only when , , and . In this case, is a -random network with sources associated to conditioned on . Since for , it is easily seen that is a -random network with sources associated to .
Now we focus on the case where . Recall the -random network defined in Definition 3.1. A simple calculation shows that the law of an -random network with sources associated to is given by:
where and
For , set for . Then for any Borel subset ,
(3.7)
where the second equality is due to (A.1) and in the last expression,
that is independent of . Summing over in (3.7), we get
(3.8)
By a monotone class argument, we can replace ‘’ in (3.8) by any non-negative measurable function on vanishing on . In particular,
(3.9)
Comparing (3.7) and (3.9), we have
We have thus proved (ii).
4 Inverting percolation Ray-Knight identity
In this section, we only consider the case where the conductances are symmetric and . In §4.1, we will introduce the metric-graph Brownian motion. It turns out that the metric-graph Brownian motion together with the associated percolation process gives a realization of defined in §2.2, which leads to the percolation Ray-Knight identity (Theorem 2.4). §4.2 is devoted to the proof of the inversion of percolation Ray-Knight identity (Theorem 2.9).
4.1 Percolation Ray-Knight identity
Replacing every edge of by an interval of length , we obtain the metric graph associated to , denoted by . is considered to be a subset of . One can naturally construct a metric-graph Brownian motion on , i.e. a diffusion that behaves like a Brownian motion inside each edge, performs Brownian excursions into the adjacent edges when hitting a vertex, and is killed at each vertex with rate . Let be the loop soup associated to . Then and , resp. and , can be naturally coupled through restriction (i.e. , resp. , is the print of , resp. , on ), which we will assume from now on. See [14] for more details on metric graphs, metric-graph Brownian motions and the above couplings. Notations such as and are defined for in the same way as for . Assume that starts from . We also have the Ray-Knight identity for , that is, Proposition 2.2 with replaced by respectively.
Let and be independent, and we always consider under the condition (or equivalently ). For and , we set
where and is the right-continuous inverse. By the coupling of and , it holds that (defined in (2.2)). Next, for , will denote a percolation on defined as: for ,
Recall the notations defined in §2.2. We will show the following proposition, which immediately implies the percolation Ray-Knight identity.
Proposition 4.1.
(1) has the same law as ;
(2) has the same law as .
The remaining part is devoted to the proof of Proposition 4.1. First we present two lemmas in terms of the loop soups and , which will later be translated into the analogous versions associated to using the Ray-Knight identity.
Define on as follows: for ,
Remember that and are naturally coupled. So it holds that . Moreover, they have the following relations.
Lemma 4.2.
Conditionally on , is a family of independent random variables, and
(4.1)
where .
Lemma 4.3.
Conditionally on , is a family of independent random variables, and
(4.2)
Proof of Lemma 4.2.
The independence follows from an argument using the Ray-Knight identity of and the excursion theory similar to that in the proof of Proposition 3.2. With the same idea as [14, §3], conditionally on , the trace of in consists of the loops entirely contained in , excursions from and to (resp. ) inside of loops in visiting (resp. ). By considering the contribution to the occupation time field by each part, we have that the left-hand side of (4.1) is the same as the probability that the sum of three independent processes
(4.3)
has a zero on . Here , is a (i.e. a -dimensional BESQ bridge from to over ) and is a .
For (4.3) to have a zero, the process has to hit before the last zero of . The density of the first zero of is
(4.4)
To get this, one can start with the well-known fact that the first zero of a -dimensional BESQ process starting from is distributed as (Cf. for example [11, Proposition 2.9]). Then use the fact that for a -dimensional BESQ process starting from , the process is a .
Since has the same law as a , its last zero has the same law as minus the first zero of a , which has the density
(4.5)
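The BESQ fact invoked above, namely that the first zero $T_0$ of a $0$-dimensional squared Bessel process started from $x$ satisfies $\mathbb{P}(T_0\le t)=e^{-x/(2t)}$ (equivalently, $x/(2T_0)$ is a standard exponential), can be checked numerically with a crude Euler scheme:

```python
import numpy as np

rng = np.random.default_rng(8)

def besq0_first_zero(x, dt=1e-3, t_max=20.0):
    """Euler scheme for a 0-dimensional BESQ process dX = 2 sqrt(X) dW
    started from x; returns the (censored at t_max) first hitting time of 0."""
    X, t = x, 0.0
    while X > 0.0 and t < t_max:
        X += 2.0 * np.sqrt(X) * rng.normal(0.0, np.sqrt(dt))
        t += dt
    return t

x, M = 1.0, 400
T0 = np.array([besq0_first_zero(x) for _ in range(M)])
for t in (0.5, 1.0, 2.0):                  # empirical vs exact exp(-x / (2 t))
    print(t, round((T0 <= t).mean(), 3), round(np.exp(-x / (2.0 * t)), 3))
```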
Proof of Lemma 4.3.
Proof of Proposition 4.1.
By the Ray-Knight identity of , has the law of . So it follows from Lemma 4.3 that has the same law as . That proves (1).
For (2), if jumps through the edge , then crosses the interval , which makes positive on this interval. So turns to .
It remains to calculate the rate of opening an edge without jumping. First, it is easy to deduce that is a Markov process, which allows us to condition only on when calculating the rate. Note that opening without jumping can happen only when stays at and . We use to stand for the event that stays at during . Denote by the event that at some time in , is opened without jumping. Let . For with and , if we write , then
By considering the print of on the branch at containing and using the Ray-Knight identity of , we can readily check that the fraction in the above square brackets equals
which further equals with by Lemma 4.2. Therefore,
Observe that . We can readily deduce that the conditional probability of the latter event is . So
which leads to the rate in (2.3) by using and . ∎
4.2 Percolation-vertex repelling jump process
Let be the time reversal of , i.e. . In this part, we will verify that for any and configuration on , the jump rate of conditionally on and is given by (2.7), which leads to Theorem 2.9.
Recall the notations in (2.4). In the following, we will use and to represent and respectively. Note that given , the aggregated local time equals . Let us condition on , and the path with , and calculate the jump rate of the edge , i.e. the rate of the jump from to by without modifying , or the closure of in without jumping, or the jump and closure simultaneously. Due to the Markov property of , it suffices to condition only on . For simplicity, denote by henceforth.
4.2.1 Two-point case
We start with the simplest case: contains only two vertices. The jump rate is analyzed in the following two cases.
(1) if , it holds that . This allows us to further ignore the condition on . Note that the law of given and is also (defined in §2.3). So the rate of the jump from to by at time is as shown in (2.5).
We further consider the probability that the jump is accompanied by the closure of in . Observe that this happens if and only if . Conditionally on and jumping from to at time , has the same law as given and . By Lemma 4.3, the conditional probability that is
(4.7)
Therefore, jumps from to and turns to at time with rate
while jumps from to at time without modifying with rate
(2) if , there are two possible jumps: jumps from to without modifying , or is closed without a jump. Denote by (resp. ) the event that a jump of the first (resp. second) kind occurs during . Note that can happen only when . We have
(4.8)
Now we turn to . Observe that is also the event that at some time in , is opened in without jumping. Let . Then
We observe that
by (4.8); while if we denote , then by Lemma 4.3,
So equals
(4.9)
Since , by Lemma A.11 the probability of is under . So also equals (4.9). We have thus shown all the rates in (2.7) in the two-point case. These rates already determine the process (cf. Appendix A). So we are done.
4.2.2 General case
For general , the strategy is to introduce a coupling, which connects the rate in the general case with that in the two-point case. In the following, to emphasize the tree where is defined, we also write . Recall that we condition on with .
If , divide into parts: is the connected subgraph of with , , . Each is equipped with the conductances and killing measure such that the induced CTMC on has the law of the print of on (for example, for , the conductance on is and the killing measure on (resp. ) is the effective conductance (on ) from (resp. ) to the point at infinity after cutting the edge ). View , and as the roots of , and respectively. We have the following coupling. Note that is a Markov process. So we can naturally define starting from a given pair , which represents the starting point, the initial configuration and the time field respectively. Now we consider starting from respectively, where . If we glue the excursions of at and according to the local time, preserve the evolution of along with , and recover a process from the glued excursion process starting from up to the time when the local time at is exhausted (some of the excursions are not used in the recovery procedure), then the process obtained has the law of given .
In the following, assume and are coupled in the above way. Let and (resp. and ) be the events that (resp. ) jumps from to with and without the closure of in (resp. ) during (resp. ) respectively. It follows directly from the coupling that for with and ,
(4.10)
where .
Recall that . It is plain that
(4.11)
On the other hand, the jump rates of have been calculated in §4.2.1. Together with the rates of shown in Theorem 2.8, we can readily verify that
(4.12)
Comparing (4.11) and (4.12), we see that the inequality in (4.10) is in fact an equality up to a difference of . This gives the jump rates.
The jump rates in the case of can also be deduced from the jump rates in the two-point case using a similar argument. We leave the details to the reader.
5 Mesh limits of repelling jump processes
In this section, we first introduce the self-repelling diffusion which inverts the Ray-Knight identity of reflected Brownian motion, and then show that it is the fine mesh limit of the vertex repelling jump processes, as stated in Theorem 2.10.
5.1 Self-repelling diffusions
Let be the process whose local time flow is the Jacobi flow; this is the so-called burglar. See [1, §5] and [21] for details.
For any non-negative continuous function on with , denote by the hitting time of by , i.e. . We call admissible if
Let be the set of non-negative, continuous, admissible functions on with when , and further with finite and connected support when . The process is in a.s.
For , define the following change of scale and change of time associated to a deterministic continuous process on starting from :
Set for .
For any , define the self-repelling diffusion as follows:
• when , let , and be three independent processes such that and are identically distributed as and is a diffusion on with infinitesimal generator . Set , and (it holds that ; cf. [21]). Define
(5.1)
• when , set
(5.2)
Proposition 5.1 ([1, 21]).
Let be a Ray-Knight triple associated to . For any , has the conditional law of given .
We call the self-repelling diffusion on . Recall the setting in §2.4. Let be the collection of right-continuous, minimal, nearest-neighbour paths on . Fix . For , with and with , set
(5.3)
It is easy to check that is a jump process on with jump rates and a resurrection mechanism and lifetime similar to before, where
(5.4)
Appendix A Processes with terminated jump rates
Let be a finite or countable graph with finite degree. All the processes considered in this part are assumed to be right-continuous, minimal, nearest-neighbour jump processes on with a finite or infinite lifetime. We consider the process to stay at a cemetery point after the lifetime. The collection of all such sample paths is denoted by , and . Let be the natural filtration of the coordinate process on .
Let us introduce some notations that will be used throughout this part. For , represents the interval or according as or . Let , , be positive real numbers and be vertices in , such that and . Denote by the function: , that equals on (resp. ) for (resp. ).
Definition A.1 (LSC stopping time).
Let be a stopping time with respect to such that it is lower semi-continuous under the Skorokhod topology. We simply say that is an LSC stopping time. The notation has the natural meaning when is a stopping time. We further say that is regular if for any with and , there exists such that for any and , it holds that . Intuitively, if we regard as the lifetime of , then the above statement means that if is alive at time , then as long as it has exactly one jump during , it is bound to stay alive until time .
Definition A.2 (Terminated jump rates).
Given an LSC stopping time , denote
A function : is called (a family of) -terminated jump rates if (1) it is continuous with respect to the product topology, where is equipped with the Skorokhod topology; (2) for any , the process is adapted to . For such , we also write on .
From now on, we always assume that is an LSC stopping time and is a family of -terminated jump rates.
Definition A.3 (Processes with terminated jump rates).
A process is said to have jump rates if for any , conditionally on and with ,
(R1) the probability that the first jump of after time occurs in and the jump is to a neighbour is , where depends on the conditioned path ,
and the process finally stops at time . Here with
Remark A.4.
In the above definition, if is regular, then condition (R1) can be replaced by (R2) below without affecting the law of the defined process (see Remark A.13 for details).
(R2) the probability of a jump of from to a neighbour in is , where depends on the conditioned path .
The following renewal property is direct from the definition.
Proposition A.5 (Renewal property).
Suppose is a process with jump rates . Then for any , conditionally on and , the process is a process with jump rates starting from , where is a function that equals on and equals on .
Now we tackle the basic problem: the existence and uniqueness of the processes with jump rates . Let be the -field on generated by the coordinate maps.
Theorem A.6.
For any , there exists a process with jump rates starting from . Moreover, if is regular, then all such processes have the same distribution on .
Proof of the existence part of Theorem A.6.
Similar to the case of a CTMC, we can use a sequence of i.i.d. exponential random variables to construct the process. Precisely, let be a family of i.i.d. exponential random variables with parameter . For , we set
and , where is the right-continuous inverse and if , then . The process starts at . If , the process stays at until the lifetime . Otherwise, i.e., , it jumps at time to , which is the unique such that . For the second jump, the protocol is the same except that and are replaced by and respectively.
In the same way that one verifies the jump rates of a CTMC constructed from a sequence of i.i.d. exponential random variables, it is simple to check that the process constructed above has jump rates . ∎
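For concreteness, here is a discretized sketch of this clock construction, an Euler approximation of the right-continuous-inverse step in the proof; `rate` and `zeta` are user-supplied stand-ins for the terminated jump rates and the LSC stopping time.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_with_clocks(x0, neighbours, rate, zeta, horizon=50.0, dt=1e-3):
    """Discretized version of the clock construction: attach an independent
    Exp(1) clock to each neighbour after every jump; while at x, neighbour y
    is selected as soon as the integral of its (path-dependent, terminated)
    rate exceeds its clock."""
    x, t, path = x0, 0.0, [(0.0, x0)]
    clocks = {y: rng.exponential(1.0) for y in neighbours(x)}
    accum = {y: 0.0 for y in neighbours(x)}
    while t < horizon and not zeta(path, t):
        t += dt
        for y in accum:
            accum[y] += rate(path, t, x, y) * dt   # integrated rate toward y
            if accum[y] >= clocks[y]:              # the clock rings: jump to y
                x = y
                path.append((t, x))
                clocks = {z: rng.exponential(1.0) for z in neighbours(x)}
                accum = {z: 0.0 for z in neighbours(x)}
                break
    return path

# Toy usage: simple random walk on {0, 1, 2} with unit rates, no termination.
nbrs = {0: [1], 1: [0, 2], 2: [1]}
print(run_with_clocks(0, lambda v: nbrs[v], lambda p, t, x, y: 1.0,
                      lambda p, t: False, horizon=5.0)[:6])
```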
The proof of the uniqueness part is deferred to §A.1, after the introduction of path probabilities.
Remark A.7.
In this remark, we give the rigorous definitions of the repelling processes presented in §2-4. We only construct the process with law (defined in §2.3.1); it is similar for the others. For , let and consider (defined as (2.5)) restricted to . Intuitively, represents the time when the process uses up the local time at some vertex. We can readily check that is a regular LSC stopping time and is a family of -terminated jump rates. We first run a process with jump rates starting from up to . If the death is due to the exhaustion of the local time at some vertex , i.e. and , we record the remaining local time at each vertex, say . Then we resurrect the process by letting it jump to and running an independent process with jump rates starting from up to . After the death, we resurrect the process again. This continues until it explodes or exhausts the local time at . This procedure defines a process, the law of which is defined to be . By Theorem A.6, such a process always exists and its law is already determined by the definition.
As the analysis in §2 shows, when , cannot die at . So in this case we actually do not need the resurrection procedure, and up to has the law .
Based on Theorem A.6, if is regular, when considering a process with terminated jump rates, we can always focus on the special realization presented in the above proof of the existence part. The following three corollaries are immediately derived from this perspective.
Corollary A.8.
Suppose is a process with jump rates . Let be the first jump time of . Then
(A.1)
Corollary A.9 (Strong renewal property).
Suppose is a process on starting from with jump rates . Then for any stopping time with respect to the natural filtration of , conditionally on and , the process is a process with jump rates starting from .
The proof of the above strong renewal property is almost a word-for-word copy of the proof of the strong Markov property of CTMCs presented in [18, Theorem 6.5.4].
The next corollary gives a sharper bound on the probability in (R1) and (R2). We state it only in terms of the event in (R2). For and , we write for the function in that equals on and stays at after time .
Corollary A.10.
Suppose is a process with jump rates starting from . Then for any with and , denoting for , it holds that
A.1 Path probability
Assume is regular. Let be a process with jump rates starting from . For any Borel subsets of , and with , denote
(A.2)
The goal of this part is to calculate the path probability
The key point of the calculation is the following lemma. At first glance the lemma may seem obvious; the main difficulty is due to the dependence of in (R1) on the conditioned path .
Lemma A.11.
Conditionally on and with , the probability that has at least jumps during and the first jump is to is , where depends on the conditioned path.
Remark. Without the regularity of , it may happen that the conditional probability of two jumps in is comparable to . For example, consider the following path-dependent Poisson process. Starting from , it jumps to with rate . Its jump rate at is given by:
Consider the special realization from the proof of Theorem A.6 with the above rates. Then, conditionally on the process jumping from to at time , it is bound to jump from to before time . So the probability of two jumps in is no less than .
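Since the displayed rate is not reproduced above, the following Monte Carlo sketch uses an assumed rate with the same blow-up feature: the jump 0 → 1 has rate 1, and after a jump to 1 at time s the rate of the jump 1 → 2 is t ↦ 1/(2s − t) on [s, 2s), whose integral diverges, so the second jump is bound to occur before time 2s. The estimated probability of two jumps in (0, h], divided by h, then stays bounded away from 0 as h → 0, instead of vanishing.

```python
import math
import random

def two_jumps_before(h, trials=200_000):
    """Estimate P(two jumps of the path-dependent process in (0, h])."""
    count = 0
    for _ in range(trials):
        s = random.expovariate(1.0)        # first jump time, rate 1
        if s >= h:
            continue
        # Second jump time: solving \int_s^u dt / (2s - t) = E with
        # E ~ Exp(1) gives u = 2s - s * exp(-E), so u < 2s almost surely.
        u = 2 * s - s * math.exp(-random.expovariate(1.0))
        if u <= h:
            count += 1
    return count / trials

for h in (0.1, 0.05, 0.01):
    print(h, two_jumps_before(h) / h)      # ratio stays of order 1 as h -> 0
```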
Before the proof, we mention a simple result. Recall the notation . It is easy to see that given and with and ,
(A.3)
To see this, one can view the left-hand side of (A.3) as a function of and derive a differential equation.
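Schematically, under assumed notation (the omitted symbols are not reproduced here): if f(u) denotes the left-hand side of (A.3) as a function of the terminal time u > t and R(u) denotes the total jump rate along the conditioned path, then (R1) gives f(u + h) = f(u)(1 − R(u)h + o(h)), whence

\[
f'(u) = -R(u)\,f(u), \qquad f(t) = 1, \qquad\text{so that}\qquad f(u) = \exp\Big(-\int_t^u R(v)\,\mathrm{d}v\Big),
\]

which is the exponential form asserted in (A.3).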
Proof of Lemma A.11.
The renewal property (Proposition A.5) enables us to consider only the case . Let and be the first and second jump times of , respectively, and for . The event can be divided according to the jump times as follows. For , define
Set . Then it holds that . So it suffices to show that there exists a constant independent of , such that
For any and with , let be the event that and . Then
For the first probability on the right-hand side, observe that if we further condition on , then we have conditioned on the whole path . By the regularity of , for sufficiently small, for any . Then it follows from (A.3) that the conditional probability of is
(A.4)
where , which is finite by the continuity of . The same bound also holds for . So
That completes the proof. ∎
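For orientation, the order of the bound can be seen schematically: if the total jump rate along any conditioned path is bounded by a constant \(\Lambda\) on \([t, t+h]\) (this local bound is what the regularity assumption supplies; \(\Lambda\) is not the paper's notation), then

\[
\mathbb{P}\big(\text{at least two jumps in }(t, t+h]\big)
\;\le\; \int_t^{t+h}\Lambda\,\Big(\int_u^{t+h}\Lambda\,\mathrm{d}v\Big)\,\mathrm{d}u
\;=\; \frac{\Lambda^2 h^2}{2} \;=\; O(h^2),
\]

in contrast with the path-dependent example in the remark above, where no such bound holds near the first jump time.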
Corollary A.12.
Conditionally on and with , the probability that has exactly one jump during and the jump is to is , where depends on the conditioned path.
Let us now calculate the path probabilities. The case has been covered by (A.3). For , Corollary A.12 enables us to calculate path probabilities by formulating differential equations. Fix and with . By the lower semi-continuity of , there exists such that for any , . For such , let us calculate
For ,
where , , with
It holds that
(A.5)
where and follow from (A.3) and Corollary A.12, respectively. For , it suffices to note that for any , conditionally on , the probability of is
which together with the continuity of easily leads to the probability .
By further considering the case , we can formulate a differential equation, the solution of which gives: for ,
where varies with . Observe that the lower semi-continuity of implies that is a relatively open subset of . So we readily deduce that for any Borel subset in and ,
Similarly, we can inductively check that for Borel subsets in and (),
(A.6)
where and .
As , and vary, all these sets constitute a -system. The -field generated by these sets on contains all sets of the form (, ) and hence equals . So (A.6) determines the law of . This completes the proof of the uniqueness part of Theorem A.6.
Remark A.13.
Observe that if we replace (R1) by (R2) in Definition A.3, then following the same routine we obtain the same path probability (A.6). In fact, the only difference in the proof is that '' in (A.3) should be replaced by '', and the subsequent arguments still work with minor modifications. This easily leads to the statement in Remark A.4.
[Acknowledgments] The authors are grateful to Prof. Elie Aïdékon for introducing this project and for inspiring discussions. The authors would also like to thank Prof. Jiangang Ying and Shuo Qin for helpful discussions and valuable suggestions.
The first author is partially supported by the Fundamental Research Funds for the Central Universities. The second author is partially supported by NSFC, China (No. 11871162).
References
- [1] Aïdékon, E., Hu, Y. and Shi, Z. (2022+). The stochastic Jacobi flow. In preparation.
- [2] Aldous, D. J. (1998). Brownian excursion conditioned on its local time. Electronic Communications in Probability 3, 79-90.
- [3] Chang, Y. and Le Jan, Y. (2016). Markov loops in discrete spaces. Probability and Statistical Physics in St. Petersburg 91, 215.
- [4] Davis, B. and Volkov, S. (2002). Continuous time vertex-reinforced jump processes. Probability Theory and Related Fields 123, 281-300.
- [5] Davis, B. and Volkov, S. (2004). Vertex-reinforced jump processes on trees and finite graphs. Probability Theory and Related Fields 128, 42-62.
- [6] Eisenbaum, N. (1994). Dynkin's isomorphism theorem and the Ray-Knight theorems. Probability Theory and Related Fields 99, 321-335.
- [7] Eisenbaum, N., Kaspi, H., Marcus, M. B., Rosen, J. and Shi, Z. (2000). A Ray-Knight theorem for symmetric Markov processes. The Annals of Probability 28, 1781-1796.
- [8] Feller, W. (1957). An Introduction to Probability Theory and Its Applications, Vol. 1. John Wiley, New York.
- [9] Huang, R., Kious, D., Sidoravicius, V. and Tarrès, P. (2018). Explicit formula for the density of local times of Markov jump processes. Electronic Communications in Probability 23, 1-7.
- [10] Knight, F. B. (1998). On the upcrossing chains of stopped Brownian motion. In Séminaire de Probabilités XXXII, 343-375. Springer, Heidelberg.
- [11] Lawler, G. F. (2018). Notes on the Bessel process. http://www.math.uchicago.edu/~lawler/bessel18new.pdf.
- [12] Le Jan, Y. (2011). Markov Paths, Loops and Fields: École d'Été de Probabilités de Saint-Flour XXXVIII-2008, Vol. 2026. Springer, Heidelberg.
- [13] Le Jan, Y. (2017). Markov loops, coverings and fields. Annales de la Faculté des sciences de Toulouse: Mathématiques, Ser. 6, 26, 401-416.
- [14] Lupu, T. (2016). From loop clusters and random interlacements to the free field. The Annals of Probability 44, 2117-2146.
- [15] Lupu, T., Sabot, C. and Tarrès, P. (2019). Inverting the coupling of the signed Gaussian free field with a loop-soup. Electronic Journal of Probability 24, 1-28.
- [16] Lupu, T., Sabot, C. and Tarrès, P. (2021). Inverting the Ray-Knight identity on the line. Electronic Journal of Probability 26, 1-25.
- [17] Marcus, M. B. and Rosen, J. (2006). Markov Processes, Gaussian Processes, and Local Times, Vol. 100. Cambridge University Press, Cambridge.
- [18] Norris, J. R. (1998). Markov Chains, Vol. 2. Cambridge University Press, Cambridge.
- [19] Pitman, J. and Yor, M. (1999). The law of the maximum of a Bessel bridge. Electronic Journal of Probability 4, 1-35.
- [20] Sabot, C. and Tarrès, P. (2016). Inverting Ray-Knight identity. Probability Theory and Related Fields 165, 559-580.
- [21] Warren, J. and Yor, M. (1998). The Brownian burglar: conditioning Brownian motion by its local time process. In Séminaire de Probabilités XXXII, 328-342. Springer, Heidelberg.
- [22] Werner, W. (2016). On the spatial Markov property of soups of unoriented and oriented loops. In Séminaire de Probabilités XLVIII, Lecture Notes in Mathematics, 481-503.
- [23] Werner, W. and Powell, E. (2020). Lecture notes on the Gaussian free field. arXiv: Probability.