

Branching Brownian motion with self repulsion

Anton Bovier
Institut für Angewandte Mathematik
Rheinische Friedrich-Wilhelms-Universität
Endenicher Allee 60
53115 Bonn, Germany
[email protected]
 and Lisa Hartung
Institut für Mathematik
Johannes Gutenberg-Universität Mainz
Staudingerweg 9, 55099 Mainz, Germany
[email protected]
Abstract.

We consider a model of branching Brownian motion with self-repulsion. Self-repulsion is introduced via a change of measure that penalises particles spending time in an ϵ\epsilon-neighbourhood of each other. We derive a simplified version of the model where only branching events are penalised. This model is almost exactly solvable, and we derive a precise description of the particle numbers and branching times. In the limit of weak penalty, an interesting universal time-inhomogeneous branching process emerges. The position of the maximum is governed by an F-KPP-type reaction-diffusion equation with a time-dependent reaction term.

Key words and phrases:
branching Brownian motion, excluded volume, extreme values, F-KPP equation
2000 Mathematics Subject Classification:
60J80, 60G70, 82B44
This work was partly funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy - GZ 2047/1, Projekt-ID 390685813 and GZ 2151 - Project-ID 390873048, through Project-ID 211504053 - SFB 1060, through Project-ID 233630050 -TRR 146, through Project-ID 443891315 within SPP 2265, and Project-ID 446173099.

1. Introduction

Branching Brownian motion (BBM) [23, 3] can be seen as an elementary model for the evolution of a population of individuals that are subject to birth, death, and motion in space. One of the primary interests in this model has been the analysis of the speed of spread of such a population in space, as well as finer properties of the front. Indeed, BBM has been investigated from the point of view of extreme value theory over the last 40 years, see, e.g., [9, 19, 12, 13, 5, 6, 7, 4, 14, 8].

As a model for population dynamics, BBM is somewhat unrealistic, as it leads to uncontrolled exponential growth of the population size. In fact, in the standard normalisation, the population size grows like exp(t)\exp(t), while the population spreads over a volume of order tt, leading to an unsustainable density of the population. Several variants of the model that resolve this problem have been proposed where, according to some selection rule, offspring are selected to survive in such a way that the total population size stays controlled [11, 21, 15, 20]. Versions where competitive interactions between particles are present were considered, e.g., in [16, 17, 2, 1].

In this paper, we propose a model where the population size is controlled by penalising the fact that particles stay close to each other. Before defining the model precisely, recall that BBM is constructed as follows: start with a single particle which performs a standard Brownian motion x(t)x(t) in {\mathbb{R}} with x(0)=0x(0)=0 and continues for a standard exponentially distributed holding time TT, independent of xx. At time TT, the particle splits independently of xx and TT into kk offspring with probability pkp_{k}, where k=1pk=1\sum_{k=1}^{\infty}p_{k}=1, k=1kpk=2\sum_{k=1}^{\infty}kp_{k}=2 and K=k=1k(k1)pk<K=\sum_{k=1}^{\infty}k(k-1)p_{k}<\infty. In the present paper, we choose the simplest option, p2=1p_{2}=1, all other pkp_{k} zero, except in Section 8, where we allow for p0>0p_{0}>0. These particles continue along independent Brownian paths starting from x(T)x(T) and are subject to the same splitting rule. And so on. We let n(t)n(t) denote the number of particles at time tt and label the particles at time tt arbitrarily by 1,2,3,,n(t)1,2,3,\dots,n(t), and denote by x1(t),,xn(t)(t)x_{1}(t),\dots,x_{n(t)}(t) the positions of these particles at that time. For sts\leq t, we let xi(s)x_{i}(s) be the position of the ancestor of particle ii at time ss. We denote by {\mathbb{P}} the law of BBM.
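The recursive construction just described is easy to turn into a short simulation. The following sketch (our code, not the authors'; function name and parameters are ours) generates the particle positions x1(t),,xn(t)(t)x_{1}(t),\dots,x_{n(t)}(t) for binary branching, p2=1p_{2}=1:

```python
import math
import random

def simulate_bbm(t, rng=random):
    """Simulate binary branching Brownian motion up to time t.

    Each particle performs an independent standard Brownian motion and,
    after an independent Exp(1) holding time, splits into two offspring
    at its current position.  Returns the particle positions at time t.
    """
    positions = []
    stack = [(t, 0.0)]  # (remaining time, current position) per branch
    while stack:
        remaining, x = stack.pop()
        hold = rng.expovariate(1.0)
        if hold >= remaining:
            # no further branching on this branch: diffuse up to time t
            positions.append(x + rng.gauss(0.0, math.sqrt(remaining)))
        else:
            # diffuse until the branching time, then split into two
            y = x + rng.gauss(0.0, math.sqrt(hold))
            stack.append((remaining - hold, y))
            stack.append((remaining - hold, y))
    return positions
```

For binary branching, n(t)n(t) is geometric with parameter et{\mathrm{e}}^{-t} (cf. Section 2), so the mean number of particles returned is et{\mathrm{e}}^{t}.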

Alternatively, BBM can be constructed as a Gaussian process indexed by a continuous time Galton-Watson tree with mean zero and covariances, conditioned on the Galton-Watson tree, given by

𝔼[xk(s)x(r)|σ(GW)]=d(xk(s),x(r))sr,{\mathbb{E}}\left[x_{k}(s)x_{\ell}(r)|{\sigma}(GW)\right]=d(x_{k}(s),x_{\ell}(r))\wedge s\wedge r, (1.1)

where d(xk(t),x(t))d(x_{k}(t),x_{\ell}(t)) is the time of the most recent common ancestor of the particles labeled kk and \ell in the Galton-Watson tree.

For t<t<\infty and for some ϵ>0\epsilon>0, we define the penalty function

It(x)0tij=1n(s)𝟙|xi(s)xj(s)|ϵds.I_{t}(x)\equiv\int_{0}^{t}\sum_{i\neq j=1}^{n(s)}\mathbbm{1}_{|x_{i}(s)-x_{j}(s)|\leq\epsilon}ds. (1.2)

(The notation here is not quite consistent, as the labelling of the n(s)n(s) particles at time ss is changing with ss. This can be remedied by using the Ulam-Kesten-Harris labelling of the tree, but maybe this is not necessary here.) We are interested in the law of xtx_{t} under the tilted measure Pt,σP_{t,{\sigma}} given by

Pt,σ(A)𝔼[𝟙xtAeλIt(x)]𝔼[eλIt(x)],P_{t,{\sigma}}(A)\equiv\frac{{\mathbb{E}}\left[\mathbbm{1}_{x_{t}\in A}{\mathrm{e}}^{-{\lambda}I_{t}(x)}\right]}{{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{t}(x)}\right]}, (1.3)

for any Borel set AA. The function ItI_{t} measures the total time for which any two particles stay within distance ϵ\epsilon of each other up to time tt. This seems to be a reasonable measure for competitive pressure. In a typical realisation of BBM, the density of particles at time ss will be of order es/s{\mathrm{e}}^{s}/s, and hence the ϵ\epsilon-neighbourhood of any particle contains about ϵes/s\epsilon{\mathrm{e}}^{s}/s other particles. Thus, for a typical configuration xx of BBM, It(x)ϵe2t/(2t)I_{t}(x)\sim\epsilon{\mathrm{e}}^{2t}/(2t). This penalty is most easily avoided by reducing the particle number by not branching. For a particle not to branch up to time tt has probability et{\mathrm{e}}^{-t}, which is far less costly. Reducing the particle density by making the particles move much farther apart would be far more costly.

A simplified model.

Analysing the measure Pt,σP_{t,{\sigma}} directly seems rather difficult. We suggest an approximation that should share the qualitative features of the full measure. For this we consider a lower bound on ItI_{t}. Note that, whenever branching occurs, the offspring start at the same point and thus are all closer to each other than ϵ\epsilon. Let us for simplicity take a branching law such that p2=1p_{2}=1, i.e. only binary branching occurs. Then we can bound

It(x)It(x)i=1n(t)1τϵ(i),I_{t}(x)\geq I^{\prime}_{t}(x)\equiv\sum_{i=1}^{n(t)-1}{\tau}_{\epsilon}(i), (1.4)

where τϵ(i){\tau}_{\epsilon}(i) is the first time the two Brownian motions that start at the ii-th branching event are a distance ϵ\epsilon apart. For small ϵ\epsilon, the probability that one of the two branches branches again before the time τϵ{\tau}_{\epsilon} is of order ϵ2\epsilon^{2}, so that it will be a good approximation to treat the τϵ(i){\tau}_{\epsilon}(i) as independent and having the same distribution as

τϵinf{t>0:|Bt|>ϵ}.{\tau}_{\epsilon}\equiv\inf\{t>0:|B_{t}|>\epsilon\}. (1.5)

Then,

𝔼[eλi=1n(t)1τϵ(i)]𝔼[𝔼[eλτϵ]n(t)1]𝔼[σn(t)1].{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}\sum_{i=1}^{n(t)-1}{\tau}_{\epsilon}(i)}\right]\approx{\mathbb{E}}\left[{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}{\tau}_{\epsilon}}\right]^{n(t)-1}\right]\equiv{\mathbb{E}}\left[{\sigma}^{n(t)-1}\right]. (1.6)

But (as follows from Theorem 5.35 and Proposition 7.48 in [22]),

𝔼[eλτϵ]=sech(ϵ2λ),{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}{\tau}_{\epsilon}}\right]=\operatorname{sech}(\epsilon\sqrt{2{\lambda}}), (1.7)

which for small λϵ2{\lambda}\epsilon^{2} behaves like exp(λϵ2)\exp(-{\lambda}\epsilon^{2}). Note that, since xeλxx\mapsto{\mathrm{e}}^{-{\lambda}x} is convex and 𝔼[τϵ]=ϵ2{\mathbb{E}}[{\tau}_{\epsilon}]=\epsilon^{2}, Jensen’s inequality applied conditionally on n(t)n(t) also gives

𝔼[eλi=1n(t)1τϵ(i)]𝔼[eλi=1n(t)1𝔼[τϵ(i)]]=𝔼[eλϵ2(n(t)1)].{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}\sum_{i=1}^{n(t)-1}{\tau}_{\epsilon}(i)}\right]\geq{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}\sum_{i=1}^{n(t)-1}{\mathbb{E}}[{\tau}_{\epsilon}(i)]}\right]={\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}\epsilon^{2}(n(t)-1)}\right]. (1.8)
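The small-λϵ2{\lambda}\epsilon^{2} behaviour of (1.7) is easy to check numerically; the following sketch (function name is ours) compares sech(ϵ2λ)\operatorname{sech}(\epsilon\sqrt{2{\lambda}}) with exp(λϵ2)\exp(-{\lambda}\epsilon^{2}):

```python
import math

def laplace_exit_time(lam, eps):
    """E[exp(-lam * tau_eps)] for the exit time of (-eps, eps), cf. (1.7)."""
    return 1.0 / math.cosh(eps * math.sqrt(2.0 * lam))

# For small lam*eps^2 the transform behaves like exp(-lam*eps^2); the
# discrepancy is of order (lam*eps^2)^2.
for lam in (1e-1, 1e-2, 1e-3):
    exact = laplace_exit_time(lam, 1.0)   # choose eps = 1 for illustration
    approx = math.exp(-lam)
    print(lam, exact, approx, abs(exact - approx))
```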

One might think that the approximate model is a poor substitute for the full model, since it ignores the repulsion of particles after the time that they first separate. However, as we will see shortly, already It(x)I^{\prime}_{t}(x) suppresses branching so much that the total number of particles will stay finite for any time. Hence we can expect that these finitely many particles can remain separate rather easily and that the remaining effect of ItI_{t} will be relatively mild.

Outline.

The remainder of this paper is organised as follows. In Section 2 we derive exact formulas for the partition function, the particle number, and the first branching time in the simplified model. In Section 3 we introduce the notion of quasi-Markovian Galton-Watson trees. In Section 4 we show that the branching times in the simplified model are given by such a tree. In Section 5, we consider the limit when λ0{\lambda}\downarrow 0 and derive a universal asymptotic model, which is a specific quasi-Markovian Galton-Watson tree. In Section 6, we consider the position of the maximal particle and show that its distribution is governed by an F-KPP equation with a time-dependent reaction term, and analyse the behaviour of its solutions. We discuss the relation of the approximate model to the full model in Section 7. In Section 8, we briefly look at the case when p0>0p_{0}>0. In this case, the process dies out and we derive the rate at which the number of particles tends to zero.

2. Partition function, particle numbers, and first branching time

2.1. The partition function

The first object we consider is the normalising factor or partition function

vσ(t)𝔼[σ(λ,ϵ)n(t)1].v_{\sigma}(t)\equiv{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n(t)-1}\right]. (2.1)
Lemma 2.1.

Let v~σ(t)\tilde{v}_{\sigma}(t) be the solution of the ordinary differential equation

ddtv~σ(t)=σ(λ,ϵ)v~σ(t)2v~σ(t),\frac{d}{dt}\tilde{v}_{\sigma}(t)={\sigma}({\lambda},\epsilon)\tilde{v}_{\sigma}(t)^{2}-\tilde{v}_{\sigma}(t), (2.2)

with initial condition v~σ(0)=1\tilde{v}_{\sigma}(0)=1. Then vσ(t)=v~σ(λ,ϵ)(t)v_{\sigma}(t)=\tilde{v}_{{\sigma}({\lambda},\epsilon)}(t).

Proof.

The derivation of the ode is similar to that of the F-KPP equation for BBM (see [8]). Clearly, vσ(0)=1v_{\sigma}(0)=1. Conditioning on the time of the first branching event, we get

vσ(t)=et+0t𝑑se(ts)σ(λ,ϵ)vσ(s)2.v_{\sigma}(t)={\mathrm{e}}^{-t}+\int_{0}^{t}ds{\mathrm{e}}^{-(t-s)}{\sigma}({\lambda},\epsilon)v_{\sigma}(s)^{2}. (2.3)

Differentiating with respect to tt gives

ddtvσ(t)\displaystyle\frac{d}{dt}v_{\sigma}(t) =\displaystyle= et+σ(λ,ϵ)vσ(t)20t𝑑se(ts)σ(λ,ϵ)vσ(s)2\displaystyle-{\mathrm{e}}^{-t}+{\sigma}({\lambda},\epsilon)v_{\sigma}(t)^{2}-\int_{0}^{t}ds{\mathrm{e}}^{-(t-s)}{\sigma}({\lambda},\epsilon)v_{\sigma}(s)^{2} (2.4)
=\displaystyle= σ(λ,ϵ)vσ(t)2vσ(t).\displaystyle{\sigma}({\lambda},\epsilon)v_{\sigma}(t)^{2}-v_{\sigma}(t).

Thus vσv_{\sigma} solves the equation (2.2) with σ=σ(λ,ϵ){\sigma}={\sigma}({\lambda},\epsilon), which proves the lemma. ∎

A first inspection of Eq. (2.2) shows why the cases λ=0{\lambda}=0 and λ>0{\lambda}>0 are vastly different.

Equation (2.2) has the two fixed points 0 and 1/σ1/{\sigma}. Here 0 is stable and 1/σ1/{\sigma} is unstable. Hence all solutions with initial condition 0vσ(0)<1/σ0\leq v_{\sigma}(0)<1/{\sigma} will converge to 0, while solutions with vσ(0)>1/σv_{\sigma}(0)>1/{\sigma} will tend to infinity. Only the special initial condition vσ(0)=1/σv_{\sigma}(0)=1/{\sigma} leads to the constant solution. Since we start with the initial condition vσ(0)=1v_{\sigma}(0)=1, if λ=0{\lambda}=0 and hence σ=1{\sigma}=1, we get this special constant solution, while for λ>0{\lambda}>0, the solution will tend to zero. In fact, we can solve (2.2) exactly. To do this it is convenient to define

v^σ(t)etv~σ(t).\hat{v}_{\sigma}(t)\equiv{\mathrm{e}}^{t}\tilde{v}_{\sigma}(t). (2.5)

Then v^σ\hat{v}_{\sigma} solves

ddtv^σ(t)=σv^σ(t)2et,\frac{d}{dt}\hat{v}_{\sigma}(t)={\sigma}\hat{v}_{\sigma}(t)^{2}{\mathrm{e}}^{-t}, (2.6)

also with initial condition v^σ(0)=1\hat{v}_{\sigma}(0)=1. Dividing both sides by v^σ2\hat{v}_{\sigma}^{2}, this can be written as

ddt1v^σ(t)=σet,-\frac{d}{dt}\frac{1}{\hat{v}_{\sigma}(t)}={\sigma}{\mathrm{e}}^{-t}, (2.7)

which can be integrated to give

1v^σ(t)+1v^σ(0)=σ(1et),-\frac{1}{\hat{v}_{\sigma}(t)}+\frac{1}{\hat{v}_{\sigma}(0)}={\sigma}\left(1-{\mathrm{e}}^{-t}\right), (2.8)

or

v^σ(t)=11v^σ(0)σ(1et),{\hat{v}_{\sigma}(t)}=\frac{1}{\frac{1}{\hat{v}_{\sigma}(0)}-{\sigma}\left(1-{\mathrm{e}}^{-t}\right)}, (2.9)

and

v~σ(t)=1et(1v~σ(0)σ)+σ.{\tilde{v}_{\sigma}(t)}=\frac{1}{{\mathrm{e}}^{t}\left(\frac{1}{\tilde{v}_{\sigma}(0)}-{\sigma}\right)+{\sigma}}. (2.10)

Using the initial condition v^σ(0)=1\hat{v}_{\sigma}(0)=1, we get

v~σ(t)=et(1σ)+σet.\tilde{v}_{\sigma}(t)=\frac{{\mathrm{e}}^{-t}}{\left(1-{\sigma}\right)+{\sigma}{\mathrm{e}}^{-t}}. (2.11)

Thus, provided σ<1{\sigma}<1,

limtv^σ(t)=11σ.\lim_{t\uparrow\infty}\hat{v}_{\sigma}(t)=\frac{1}{1-{\sigma}}. (2.12)
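The closed form (2.11) can be cross-checked against a direct numerical integration of (2.2); the sketch below (our code, with a standard fourth-order Runge-Kutta step) treats σ(0,1){\sigma}\in(0,1) as a free parameter:

```python
import math

def v_closed(sigma, t):
    """Closed-form solution (2.11) of dv/dt = sigma*v^2 - v, v(0) = 1."""
    return math.exp(-t) / ((1.0 - sigma) + sigma * math.exp(-t))

def v_numeric(sigma, t, steps=20000):
    """Fourth-order Runge-Kutta integration of the ODE (2.2)."""
    f = lambda v: sigma * v * v - v
    h = t / steps
    v = 1.0  # initial condition v(0) = 1
    for _ in range(steps):
        k1 = f(v)
        k2 = f(v + 0.5 * h * k1)
        k3 = f(v + 0.5 * h * k2)
        k4 = f(v + h * k3)
        v += (h / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
    return v
```

For σ<1{\sigma}<1 both decay like et/(1σ){\mathrm{e}}^{-t}/(1-{\sigma}), while σ=1{\sigma}=1 gives the constant solution v1v\equiv 1, in line with the fixed-point discussion above.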

2.2. Particle numbers

From the formula for the partition function we can readily infer the mean number of particles at time tt, namely,

E^λ,tn(t)=1+σddσlnv^σ(t)=1+σ1et1σ(1et)=11σ(1et),\widehat{E}_{{\lambda},t}n(t)=1+{\sigma}\frac{d}{d{\sigma}}\ln\hat{v}_{\sigma}(t)=1+{\sigma}\frac{1-{\mathrm{e}}^{-t}}{1-{\sigma}(1-{\mathrm{e}}^{-t})}=\frac{1}{1-{\sigma}(1-{\mathrm{e}}^{-t})}, (2.13)

and for tt\uparrow\infty this converges to 1/(1σ)1/(1-{\sigma}). For small λϵ2{\lambda}\epsilon^{2}, this in turn behaves like (λϵ2)1({\lambda}\epsilon^{2})^{-1}.
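Since n(t)n(t) is geometric with parameter et{\mathrm{e}}^{-t} under the plain BBM law, formula (2.13) can be verified by evaluating the tilted expectation as an explicit series; a small sketch (function names are ours):

```python
import math

def tilted_mean_n(sigma, t, kmax=20000):
    """Mean of n(t) under the sigma-tilted law, computed as the ratio
    sum_k k*sigma^(k-1)*P(n(t)=k) / sum_k sigma^(k-1)*P(n(t)=k),
    where n(t) is geometric with parameter exp(-t) under plain BBM."""
    p = math.exp(-t)
    num = den = 0.0
    for k in range(1, kmax + 1):
        w = (sigma * (1.0 - p)) ** (k - 1) * p
        num += k * w
        den += w
    return num / den

def tilted_mean_closed(sigma, t):
    """Closed form (2.13)."""
    return 1.0 / (1.0 - sigma * (1.0 - math.exp(-t)))
```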

In fact, we can even compute the distribution of the number of particles at times sts\leq t. To do so, we want to compute the Laplace (Fourier) transforms

E^t,σ[eγn(s)]=𝔼[eγn(s)σn(t)1]𝔼[σn(t)1]=𝔼[eγn(s)σn(t)1]vσ(t).\widehat{E}_{t,{\sigma}}\left[{\mathrm{e}}^{{\gamma}n(s)}\right]=\frac{{\mathbb{E}}\left[{\mathrm{e}}^{{\gamma}n(s)}{\sigma}^{n(t)-1}\right]}{{\mathbb{E}}\left[{\sigma}^{n(t)-1}\right]}=\frac{{\mathbb{E}}\left[{\mathrm{e}}^{{\gamma}n(s)}{\sigma}^{n(t)-1}\right]}{v_{\sigma}(t)}. (2.14)

The denominator has already been calculated. For the numerator we write

𝔼[eγn(s)σn(t)1]\displaystyle{\mathbb{E}}\left[{\mathrm{e}}^{{\gamma}n(s)}{\sigma}^{n(t)-1}\right] =\displaystyle= 𝔼[eγn(s)𝔼[σn(t)1|s]]\displaystyle{\mathbb{E}}\left[{\mathrm{e}}^{{\gamma}n(s)}{\mathbb{E}}\left[{\sigma}^{n(t)-1}\big{|}{\mathcal{F}}_{s}\right]\right] (2.15)
=\displaystyle= 𝔼[eγn(s)𝔼[σi=1n(s)n(i)(ts)1|s]],\displaystyle{\mathbb{E}}\left[{\mathrm{e}}^{{\gamma}n(s)}{\mathbb{E}}\left[{\sigma}^{\sum_{i=1}^{n(s)}n^{(i)}(t-s)-1}\big{|}{\mathcal{F}}_{s}\right]\right],

where n(i)(ts)n^{(i)}(t-s) is the number of particles at time tt that have particle ii as common ancestor at time ss. Using the independence properties, this equals

𝔼[eγn(s)σn(s)1(𝔼[σn(ts)1])n(s)]\displaystyle{\mathbb{E}}\left[{\mathrm{e}}^{{\gamma}n(s)}{\sigma}^{n(s)-1}\left({\mathbb{E}}\left[{\sigma}^{n(t-s)-1}\right]\right)^{n(s)}\right]
=eγvσ(ts)𝔼[(eγσvσ(ts))n(s)1]\displaystyle={\mathrm{e}}^{{\gamma}}v_{\sigma}(t-s){\mathbb{E}}\left[\left({\mathrm{e}}^{{\gamma}}{\sigma}v_{\sigma}(t-s)\right)^{n(s)-1}\right]
=eγvσ(ts)es1eγσvσ(ts)(1es)\displaystyle={\mathrm{e}}^{{\gamma}}v_{\sigma}(t-s)\frac{{\mathrm{e}}^{-s}}{1-{\mathrm{e}}^{{\gamma}}{\sigma}v_{\sigma}(t-s)(1-{\mathrm{e}}^{-s})}
=eseγvσ(ts)1σ(1es)\displaystyle=\frac{{\mathrm{e}}^{-s}}{{\mathrm{e}}^{-{\gamma}}v_{\sigma}(t-s)^{-1}-{\sigma}(1-{\mathrm{e}}^{-s})}
=eseγ(1σ(1et+s))etsσ(1es)\displaystyle=\frac{{\mathrm{e}}^{-s}}{{\mathrm{e}}^{-{\gamma}}\left(1-{\sigma}(1-{\mathrm{e}}^{-t+s})\right){\mathrm{e}}^{t-s}-{\sigma}(1-{\mathrm{e}}^{-s})}
=eteγ(1σ(1et+s))σ(estet).\displaystyle=\frac{{\mathrm{e}}^{-t}}{{\mathrm{e}}^{-{\gamma}}\left(1-{\sigma}(1-{\mathrm{e}}^{-t+s})\right)-{\sigma}({\mathrm{e}}^{s-t}-{\mathrm{e}}^{-t})}. (2.16)

Dividing by vσ(t)v_{\sigma}(t), we arrive at

E^t,σ[eγn(s)]=1σ(1et)eγ(1σ(1et+s))σ(estet).\widehat{E}_{t,{\sigma}}\left[{\mathrm{e}}^{{\gamma}n(s)}\right]=\frac{1-{\sigma}(1-{\mathrm{e}}^{-t})}{{\mathrm{e}}^{-{\gamma}}\left(1-{\sigma}(1-{\mathrm{e}}^{-t+s})\right)-{\sigma}({\mathrm{e}}^{s-t}-{\mathrm{e}}^{-t})}. (2.17)

From this exact formula we can derive various special cases.

Theorem 2.2.
  • (i)

    Under the measure P^λ,t\widehat{P}_{{\lambda},t}, the number of particles at time tt is geometrically distributed with parameter 1σ(λ,ϵ)(1et)1-{\sigma}({\lambda},\epsilon)(1-{\mathrm{e}}^{-t}). In particular, the number of particles converges, as tt\uparrow\infty, to a geometric random variable with parameter 1σ(λ,ϵ)1-{\sigma}({\lambda},\epsilon).

  • (ii)

    As tt\uparrow\infty, the number of particles at time s(t)=t+ln(1σ)+ρs(t)=t+\ln(1-{\sigma})+\rho, for all ρln(1σ)\rho\leq-\ln(1-{\sigma}), converges in distribution to a geometric random variable with parameter
    (1+σeρ)1(1+{\sigma}{\mathrm{e}}^{\rho})^{-1}.

Proof.

Inserting s=ts=t into (2.17) we get that

E^t,σ[eγn(t)]=1σ(1et)eγσ(1et),\widehat{E}_{t,{\sigma}}\left[{\mathrm{e}}^{{\gamma}n(t)}\right]=\frac{1-{\sigma}(1-{\mathrm{e}}^{-t})}{{\mathrm{e}}^{-{\gamma}}-{\sigma}(1-{\mathrm{e}}^{-t})}, (2.18)

which is the Laplace transform of the geometric distribution with parameter 1σ(λ,ϵ)(1et)1-{\sigma}({\lambda},\epsilon)(1-{\mathrm{e}}^{-t}). This implies (i). Similarly, with s=t+ln(1σ)+ρs=t+\ln(1-{\sigma})+\rho, and ρln(1σ)\rho\leq-\ln(1-{\sigma})

E^t,σ[eγn(s)]=1σ(1et)eγ(1σ+σ(1σ)eρ)σ((1σ)eρet).\widehat{E}_{t,{\sigma}}\left[{\mathrm{e}}^{{\gamma}n(s)}\right]=\frac{1-{\sigma}(1-{\mathrm{e}}^{-t})}{{\mathrm{e}}^{-{\gamma}}\left(1-{\sigma}+{\sigma}(1-{\sigma}){\mathrm{e}}^{\rho}\right)-{\sigma}\left((1-{\sigma}){\mathrm{e}}^{\rho}-{\mathrm{e}}^{-t}\right)}. (2.19)

If we now take tt\uparrow\infty, we get

limtE^t,σ[eγn(t+ln(1σ)+ρ)]=1eγ(1+σeρ)σeρ=(1+σeρ)1eγσeρ1+σeρ,\lim_{t\uparrow\infty}\widehat{E}_{t,{\sigma}}\left[{\mathrm{e}}^{{\gamma}n(t+\ln(1-{\sigma})+\rho)}\right]=\frac{1}{{\mathrm{e}}^{-{\gamma}}\left(1+{\sigma}{\mathrm{e}}^{\rho}\right)-{\sigma}{\mathrm{e}}^{\rho}}=\frac{(1+{\sigma}{\mathrm{e}}^{\rho})^{-1}}{{\mathrm{e}}^{-{\gamma}}-\frac{{\sigma}{\mathrm{e}}^{\rho}}{1+{\sigma}{\mathrm{e}}^{\rho}}}, (2.20)

which is the Laplace transform of the geometric distribution with parameter (1+σeρ)1(1+{\sigma}{\mathrm{e}}^{\rho})^{-1}. ∎

Remark.

Note that, for fixed ss, taking the limit tt\uparrow\infty, we get unsurprisingly eγ{\mathrm{e}}^{{\gamma}}, indicating that there is just one particle.

We see that the mean number of particles ranges from 11 (as ρ\rho\downarrow-\infty) over 1+σ1+{\sigma} (for ρ=0\rho=0) to (1σ)1(1-{\sigma})^{-1} (for ρ=ln(1σ)\rho=-\ln(1-{\sigma})). Note that if σ=1{\sigma}=1, n(t)n(t) is geometric with parameter et{\mathrm{e}}^{-t}, which corresponds to BBM with binary branching.

2.3. Distribution of the first branching time

We have seen so far that the repulsion strongly suppresses the number of branchings. The first branching time is then

τ1inf{s>0:n(s)=2}.{\tau}_{1}\equiv\inf\{s>0:n(s)=2\}. (2.21)
Theorem 2.3.

The distribution of the first branching time under P^λ,t\widehat{P}_{{\lambda},t} is given by

P^λ,t(τ1tr)=σ(λ,ϵ)eret1σ(λ,ϵ)(1er).\widehat{P}_{{\lambda},t}\left({\tau}_{1}\leq t-r\right)={\sigma}({\lambda},\epsilon)\frac{{\mathrm{e}}^{-r}-{\mathrm{e}}^{-t}}{1-{\sigma}({\lambda},\epsilon)(1-{\mathrm{e}}^{-r})}. (2.22)
Proof.

Note that after the first branching, there will be two independent BBMs that run for the remaining time tτ1t-{\tau}_{1} and that are subject to the same penalty as before. In particular, given τ1{\tau}_{1}, the total particle number n(t)n(t) is equal to the sum of the number of particles in these two branches,

n(t)=n~(0)(tτ1)+n~(1)(tτ1),n(t)=\tilde{n}^{(0)}(t-{\tau}_{1})+\tilde{n}^{(1)}(t-{\tau}_{1}), (2.23)

where n~(i)\tilde{n}^{(i)} are the particles in the two branches that split at time τ1=e0(0){\tau}_{1}=e_{0}(0). Denote by vσ(t,tr)v_{\sigma}(t,t-r) the unnormalised mass of paths that branch before time trtt-r\leq t, i.e. set

vσ(t,tr)𝔼[σ(λ,ϵ)n(t)1𝟙τ1tr].v_{\sigma}(t,t-r)\equiv{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n(t)-1}\mathbbm{1}_{{\tau}_{1}\leq t-r}\right]. (2.24)

We get

vσ(t,tr)\displaystyle v_{\sigma}(t,t-r) =\displaystyle= 0tres𝔼[σ(λ,ϵ)n~(0)(ts)+n~(1)(ts)1]𝑑s\displaystyle\int_{0}^{t-r}{\mathrm{e}}^{-s}{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{\tilde{n}^{(0)}(t-s)+\tilde{n}^{(1)}(t-s)-1}\right]ds (2.25)
=\displaystyle= 0tresσ(λ,ϵ)𝔼[σ(λ,ϵ)n(ts)1]𝔼[σ(λ,ϵ)n(ts)1]𝑑s\displaystyle\int_{0}^{t-r}{\mathrm{e}}^{-s}{\sigma}({\lambda},\epsilon){\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n(t-s)-1}\right]{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n(t-s)-1}\right]ds
=\displaystyle= 0tr𝑑sesσ(λ,ϵ)vσ(ts)2.\displaystyle\int_{0}^{t-r}ds{\mathrm{e}}^{-s}{\sigma}({\lambda},\epsilon)v_{\sigma}(t-s)^{2}.

Since vσ(ts)v_{\sigma}(t-s) is known, this is an explicit formula, namely

vσ(t,tr)\displaystyle v_{\sigma}(t,t-r) =\displaystyle= σ(λ,ϵ)e2t0tr𝑑ses1(1σ(λ,ϵ)(1e(ts)))2\displaystyle{\sigma}({\lambda},\epsilon){\mathrm{e}}^{-2t}\int_{0}^{t-r}ds{\mathrm{e}}^{s}\frac{1}{(1-{\sigma}({\lambda},\epsilon)(1-{\mathrm{e}}^{-(t-s)}))^{2}} (2.26)
=\displaystyle= etσ(λ,ϵ)eret(1σ(λ,ϵ)(1et))(1σ(λ,ϵ)(1er)).\displaystyle{\mathrm{e}}^{-t}{\sigma}({\lambda},\epsilon)\frac{{\mathrm{e}}^{-r}-{\mathrm{e}}^{-t}}{(1-{\sigma}({\lambda},\epsilon)(1-{\mathrm{e}}^{-t}))(1-{\sigma}({\lambda},\epsilon)(1-{\mathrm{e}}^{-r}))}.

Since P^λ,t(τ1tr)=vσ(t,tr)vσ(t)\widehat{P}_{{\lambda},t}\left({\tau}_{1}\leq t-r\right)=\frac{v_{\sigma}(t,t-r)}{v_{\sigma}(t)}, (2.22) follows. ∎

Remark.

Note that, for rr fixed, P^λ,t(τ1tr)\widehat{P}_{{\lambda},t}\left({\tau}_{1}\leq t-r\right) converges, as tt\uparrow\infty, to

σ(λ,ϵ)er1σ(λ,ϵ)(1er).\frac{{\sigma}({\lambda},\epsilon){\mathrm{e}}^{-r}}{1-{\sigma}({\lambda},\epsilon)(1-{\mathrm{e}}^{-r})}. (2.27)

Note further that vσ(t)=vσ(t,t)+𝔼𝟙τ1>t=vσ(t,t)+etv_{\sigma}(t)=v_{\sigma}(t,t)+{\mathbb{E}}\mathbbm{1}_{{\tau}_{1}>t}=v_{\sigma}(t,t)+{\mathrm{e}}^{-t} and therefore

P^λ,t(τ1t)=vσ(t,t)vσ(t)=1et/vσ(t)<1.\widehat{P}_{{\lambda},t}\left({\tau}_{1}\leq t\right)=\frac{v_{\sigma}(t,t)}{v_{\sigma}(t)}=1-{\mathrm{e}}^{-t}/v_{\sigma}(t)<1. (2.28)
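Theorem 2.3 can also be checked by Monte Carlo: simulate plain binary branching (the spatial motion is irrelevant for the branching times), reweight each realisation by σn(t)1{\sigma}^{n(t)-1} as in (1.6), and compare the empirical law of τ1{\tau}_{1} with (2.22). A sketch, with function names and parameter values of our choosing:

```python
import math
import random

def branching_sample(t, rng):
    """Plain binary branching at rate 1: returns (n(t), tau_1), where
    tau_1 = inf{s : n(s) = 2} (tau_1 may exceed t)."""
    n, s, tau1 = 1, 0.0, float("inf")
    while True:
        s += rng.expovariate(n)  # with n particles, next split after Exp(n)
        if s > t:
            return n, tau1
        if n == 1:
            tau1 = s
        n += 1

def mc_first_branching(sigma, t, r, reps=100000, seed=0):
    """Monte Carlo estimate of P-hat_{lambda,t}(tau_1 <= t - r) obtained
    by reweighting plain branching with sigma**(n(t)-1)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(reps):
        n, tau1 = branching_sample(t, rng)
        w = sigma ** (n - 1)
        den += w
        num += w if tau1 <= t - r else 0.0
    return num / den

def cdf_closed(sigma, t, r):
    """Right-hand side of (2.22)."""
    return sigma * (math.exp(-r) - math.exp(-t)) / (1.0 - sigma * (1.0 - math.exp(-r)))
```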

3. Quasi-Markovian time-inhomogeneous Galton-Watson trees

In this section we introduce a class of models that are continuous-time versions of Galton-Watson processes which are time-inhomogeneous and in general not Markov, but have an underlying discrete-time Markov property. These processes emerge in the models introduced above.

We start with discrete time trees and we introduce the usual Ulam-Harris labelling.

Let us define the set of (infinite) multi-indices

𝐈+,\mathbf{I}\equiv{\mathbb{Z}}_{+}^{\mathbb{N}}, (3.1)

and let 𝐅𝐈\mathbf{F}\subset\mathbf{I} denote the subset of multi-indices that contain only finitely many entries that are different from zero. Ignoring leading zeros, we see that

𝐅=k=0+k,\mathbf{F}=\cup_{k=0}^{\infty}{\mathbb{Z}}_{+}^{k}, (3.2)

where +0{\mathbb{Z}}_{+}^{0} is either the empty multi-index or the multi-index containing only zeros. A discrete-time tree is then identified by a consistent sequence of sets of multi-indices, q(n)q(n) at time nn as follows.

  • {(0,0,)}={u(0)}=q(0)\{(0,0,\dots)\}=\{u(0)\}=q(0).

  • If uq(n)u\in q(n) then u+(0,,0n×0,k,0,)q(n+1)u+(\underbrace{0,\dots,0}_{n\times 0},k,0,\dots)\in q(n+1) if 0klu(n)10\leq k\leq l^{u}(n)-1, where

lu(n)=#{ offspring of the particle corresponding to uat timen}.l^{u}(n)=\#\{\mbox{ offspring of the particle corresponding to }u\,\mbox{at time}\,n\}. (3.3)

We can make the assignment of labels backwards consistent as follows. For u(u1,u2,u3,)+u\equiv(u_{1},u_{2},u_{3},\dots)\in{\mathbb{Z}}_{+}^{\mathbb{N}}, we define the function u(r),r+u(r),r\in{\mathbb{R}}_{+}, through

u(r){u,ifr,0,if>r.u_{\ell}(r)\equiv\begin{cases}u_{\ell},&\,\,\hbox{\rm if}\,\,\ell\leq r,\\ 0,&\,\,\hbox{\rm if}\,\,\ell>r.\end{cases} (3.4)

Clearly, if u(n)q(n)u(n)\in q(n) and rnr\leq n, then u(r)q(r)u(r)\in q(r). This allows us to define the boundary of the tree at infinity as follows:

𝐓{u𝐈:n<,u(n)q(n)}.{\partial}\mathbf{T}\equiv\left\{u\in\mathbf{I}:\forall n<\infty,u(n)\in q(n)\right\}. (3.5)

We also want to be able to consider a branch of a tree as an entire new tree. For this we use the notation u=(u1,u2,u3,)\overleftarrow{u}=(u_{1},u_{2},u_{3},\dots) if u=(u0,u1,u2,)u=(u_{0},u_{1},u_{2},\dots).

Given a discrete-time tree, we can turn it into a continuous-time tree by assigning waiting times to each vertex, resp. to each multi-index in the tree. E.g., in the case of the standard continuous-time Galton-Watson tree, we simply assign standard, iid, exponential random variables, eu(n)e_{u}(n), to each vertex, resp. multi-index. Note that we choose the notation in such a way that we think of uu as an element of the boundary of the tree, and eu(n)e_{u}(n) is the waiting time attached to the vertex labelled u(n)u(n) (in the nn-th generation). This time represents the waiting time from the birth of this branch to its next branching. This assigns a total time, Tu(n)T_{u}(n) for the branching of a multi-index at discrete time nn, as

Tu(n)=t0+k=0neu(k),T_{u}(n)=t_{0}+\sum_{k=0}^{n}e_{u}(k), (3.6)

where t0t_{0}\in{\mathbb{R}} is an initial time associated to the root of the tree and eu(0)e_{u}(0) is the time of the first branching of the root of the tree. We denote by n{\mathcal{F}}_{n} the σ{\sigma}-algebra generated by the branching times of the first nn generations of the tree, i.e.

nσ(t0,eu(k),kn,u𝐓).{\mathcal{F}}_{n}\equiv{\sigma}\left(t_{0},e_{u}(k),k\leq n,u\in{\partial}\mathbf{T}\right). (3.7)

We need to define further σ{\sigma}-algebras that correspond to events that take place in sub-trees. For a given multi-index uu, define the set of multi-indices that coincide with uu in the first nn entries,

𝒰n(u){v𝐓:kn,vk=uk}.{\mathcal{U}}_{n}(u)\equiv\left\{v\in{\partial}\mathbf{T}:\forall k\leq n,v_{k}=u_{k}\right\}. (3.8)

Naturally, this is the subtree that branches off the branch uu in the nn-th generation. Next, we define the σ{\sigma}-algebra generated by the times in these subtrees,

𝒢n(u)σ(ev(k),v𝒰n(u),kn).{\mathcal{G}}_{n}(u)\equiv{\sigma}\left(e_{v}(k),v\in{\mathcal{U}}_{n}(u),k\geq n\right). (3.9)
Definition 3.1.

A set n(u)𝒢n(u){\mathcal{E}}_{n}(u)\in{\mathcal{G}}_{n}(u) is called normal if it is of the form

n(u)={en(u)r}n+1((u(n),0))n+1((u(n),1)),{\mathcal{E}}_{n}(u)=\{e_{n}(u)\leq r\}\cap{\mathcal{E}}_{n+1}((u(n),0))\cap{\mathcal{E}}_{n+1}((u(n),1)), (3.10)

or if

n(u)=+𝐓,{\mathcal{E}}_{n}(u)={\mathbb{R}}_{+}^{\mathbf{T}}, (3.11)

where the two events n+1{\mathcal{E}}_{n+1} are normal. We say that a normal event n(u){\mathcal{E}}_{n}(u) has finite horizon if there exists an NN with nN<n\leq N<\infty such that n(u)N{\mathcal{E}}_{n}(u)\in{\mathcal{F}}_{N}.

Definition 3.2.

We say that the assignment of branching times is quasi-Markov (with time horizon tt), if there is a family of probability measures, Qt,T,t+,TtQ_{t,T},t\in{\mathbb{R}}_{+},T\leq t on 𝒢0(0){\mathcal{G}}_{0}(0) and a family of probability measures qt,T,t+,Ttq_{t,T},t\in{\mathbb{R}}_{+},T\leq t on (+,(+))({\mathbb{R}}_{+},{\mathcal{B}}({\mathbb{R}}_{+})) that have the following property. For any event 0(0)𝒢0(0){\mathcal{E}}_{0}(0)\in{\mathcal{G}}_{0}(0) which is of the form

0(0)={e0(0)r}1((0,0))1((0,1)),{\mathcal{E}}_{0}(0)=\{e_{0}(0)\leq r\}\cap{\mathcal{E}}_{1}((0,0))\cap{\mathcal{E}}_{1}((0,1)), (3.12)

where 1(u)𝒢1(u){\mathcal{E}}_{1}(u)\in{\mathcal{G}}_{1}(u), for all t0<r<tt_{0}<r<t,

Qt,t0(0(0))=t0rqt,t0(ds)Qt,s(1(00))Qt,s(1(01)).Q_{t,t_{0}}({\mathcal{E}}_{0}(0))=\int_{t_{0}}^{r}q_{t,t_{0}}(ds)Q_{t,s}({\mathcal{E}}_{1}(\overleftarrow{00}))Q_{t,s}({\mathcal{E}}_{1}(\overleftarrow{01})). (3.13)
Lemma 3.3.

The measures Qt,t0Q_{t,t_{0}} on the σ{\sigma}-algebra generated by the normal events with finite horizon in 𝒢0(u){\mathcal{G}}_{0}(u) are uniquely determined by the family of measures qt,s,stq_{t,s},s\leq t, where qt,sq_{t,s} is the law of eu(n)e_{u}(n) conditioned on Tn1(u)=sT_{n-1}(u)=s.

Proof.

From (3.13) it follows by simple iteration that the measure of any normal event of finite horizon is expressed uniquely in terms of qq. Noting further that the set of finite horizon events is intersection-stable, the assertion follows from Dynkin’s theorem. ∎

The total tree at continuous time tt is then described as follows:

  • (i)

    The branches of the tree alive are

𝒜(t){u(n):u𝐓,n0s.t.Tn1(u)t<Tn(u)}.{\mathcal{A}}(t)\equiv\left\{u(n):u\in{\partial}\mathbf{T},n\in{\mathbb{N}}_{0}\;\text{s.t.}\;T_{n-1}(u)\leq t<T_{n}(u)\right\}. (3.14)
  • (ii)

    The entire tree up to time tt is the set

    𝒯(t){u(k):kn,u(n)𝒜(t)}.{\mathcal{T}}(t)\equiv\left\{u(k):k\leq n,u(n)\in{\mathcal{A}}(t)\right\}. (3.15)

Note that both sets are empty if t<t0t<t_{0}. It is a bit cumbersome to write, but the distribution of the set 𝒯(t){\mathcal{T}}(t) together with the lengths of all branches can be written down explicitly in terms of the laws qq and the branching laws of the underlying discrete-time tree.
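As an illustration, the Ulam-Harris bookkeeping of this section is straightforward to implement. The sketch below (our code) stores waiting times by label, accumulates the branching times Tu(n)T_{u}(n) as in (3.6), and returns the alive set 𝒜(t){\mathcal{A}}(t) of (3.14) for the standard binary continuous-time Galton-Watson tree with iid Exp(1) times:

```python
import random

def alive_set(t, t0=0.0, rng=random):
    """Grow a binary continuous-time Galton-Watson tree with iid Exp(1)
    waiting times; return {label u(n): birth time T_{n-1}(u)} for the
    branches alive at time t.  Labels are Ulam-Harris tuples."""
    if t < t0:
        return {}  # the tree is empty before its root time t0
    alive = {}
    stack = [((), t0)]  # (label, birth time of this branch)
    while stack:
        u, birth = stack.pop()
        split = birth + rng.expovariate(1.0)  # T_n(u) = T_{n-1}(u) + e_u(n), cf. (3.6)
        if split > t:
            alive[u] = birth  # T_{n-1}(u) <= t < T_n(u), cf. (3.14)
        else:
            stack.append((u + (0,), split))
            stack.append((u + (1,), split))
    return alive
```

The full tree 𝒯(t){\mathcal{T}}(t) of (3.15) is then recovered by taking all prefixes of the labels in the alive set.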

4. The simplified model as quasi-Markov Galton-Watson tree

We return to the approximate model defined in Section 1. For simplicity, we keep the assumption that the underlying tree is binary. We first show that the branching times under the law P^λ,t\widehat{P}_{{\lambda},t} define a quasi-Markov Galton-Watson tree.

Lemma 4.1.

The branching times of the simplified model under the law P^λ,t\widehat{P}_{{\lambda},t} are quasi-Markov, where Qt,TQ_{t,T} is the marginal distribution of P^λ,tT\widehat{P}_{{\lambda},t-T} and qt,Tq_{t,T} is absolutely continuous w.r.t. Lebesgue measure with density

e(sT)σ(λ,ϵ)vσ(ts)2vσ(tT)𝟙sT,\frac{{\mathrm{e}}^{-(s-T)}{\sigma}({\lambda},\epsilon)v_{\sigma}(t-s)^{2}}{v_{\sigma}(t-T)}\mathbbm{1}_{s\geq T}, (4.1)

namely,

P^λ,tt0(0(0))=t0rqt,t0(ds)P^λ,ts(1(00))P^λ,ts(1(01)).\widehat{P}_{{\lambda},t-t_{0}}({\mathcal{E}}_{0}(0))=\int_{t_{0}}^{r}q_{t,t_{0}}(ds)\widehat{P}_{{\lambda},t-s}({\mathcal{E}}_{1}(\overleftarrow{00}))\widehat{P}_{{\lambda},t-s}({\mathcal{E}}_{1}(\overleftarrow{01})). (4.2)
Proof.

Let 0(0)={e0(0)r0}1(00)1(01){\mathcal{E}}_{0}(0)=\{e_{0}(0)\leq r_{0}\}\cap{\mathcal{E}}_{1}(00)\cap{\mathcal{E}}_{1}(01). We now have

n(t)=n~(00)(tT1(00))+n~(01)(tT1(00)).n(t)=\tilde{n}^{(00)}(t-T_{1}(00))+\tilde{n}^{(01)}(t-T_{1}(00)). (4.3)

where the n~\tilde{n} are the particle numbers in the two respective branches of the tree. In analogy to (2.25), we obtain

vσ(tT)P^λ,tT(0(0))\displaystyle v_{\sigma}(t-T)\widehat{P}_{{\lambda},t-T}({\mathcal{E}}_{0}(0))
=Ttes+T𝔼[σ(λ,ϵ)n(00)(ts)+n(01)(ts)1𝟙1(00)𝟙1(01)]ds\displaystyle=\int_{T}^{t}{\mathrm{e}}^{-s+T}{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n^{(00)}(t-s)+n^{(01)}(t-s)-1}\mathbbm{1}_{{\mathcal{E}}_{1}(00)}\mathbbm{1}_{{\mathcal{E}}_{1}(01)}\right]ds
=Ttes+Tσ(λ,ϵ)𝔼[σ(λ,ϵ)n(00)(ts)1𝟙1(00)]𝔼[σ(λ,ϵ)n(01)(ts)1𝟙1(01)]ds\displaystyle=\int_{T}^{t}{\mathrm{e}}^{-s+T}{\sigma}({\lambda},\epsilon){\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n^{(00)}(t-s)-1}\mathbbm{1}_{{\mathcal{E}}_{1}(00)}\right]{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n^{(01)}(t-s)-1}\mathbbm{1}_{{\mathcal{E}}_{1}(01)}\right]ds
=Ttes+Tσ(λ,ϵ)vσ(ts)2𝔼[σ(λ,ϵ)n(00)(ts)1𝟙1(00)]vσ(ts)𝔼[σ(λ,ϵ)n(01)(ts)1𝟙1(01)]vσ(ts)ds\displaystyle=\int_{T}^{t}{\mathrm{e}}^{-s+T}{\sigma}({\lambda},\epsilon)v_{\sigma}(t-s)^{2}\frac{{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n^{(00)}(t-s)-1}\mathbbm{1}_{{\mathcal{E}}_{1}(00)}\right]}{v_{\sigma}(t-s)}\frac{{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n^{(01)}(t-s)-1}\mathbbm{1}_{{\mathcal{E}}_{1}(01)}\right]}{v_{\sigma}(t-s)}ds
=Ttes+Tσ(λ,ϵ)vσ(ts)2P^λ,ts(1(00))P^λ,ts(1(01))𝑑s,\displaystyle=\int_{T}^{t}{\mathrm{e}}^{-s+T}{\sigma}({\lambda},\epsilon)v_{\sigma}(t-s)^{2}\widehat{P}_{{\lambda},t-s}\left({\mathcal{E}}_{1}(\overleftarrow{00})\right)\widehat{P}_{{\lambda},t-s}\left({\mathcal{E}}_{1}(\overleftarrow{01})\right)ds, (4.4)

where we used the independence of the events in the two branches under the original BBM measure and the definition of \widehat{P}_{{\lambda},t}. This concludes the proof. ∎

5. The limit λ(t)0{\lambda}(t)\downarrow 0

We have seen that a penalty with fixed λ<{\lambda}<\infty and ϵ>0\epsilon>0 enforces that only a finite number of branchings take place, even if we let tt tend to infinity. To get more interesting results, we consider now the case when λ=λ(t){\lambda}={\lambda}(t) depends on tt such that λ(t)0{\lambda}(t)\downarrow 0 as tt\uparrow\infty. In fact, we will see that a rather interesting limiting model arises in this setting. Clearly, in this case σ(λ(t),ϵ)eλ(t)ϵ21λ(t)ϵ2{\sigma}({\lambda}(t),\epsilon)\approx{\mathrm{e}}^{-{\lambda}(t)\epsilon^{2}}\approx 1-{\lambda}(t)\epsilon^{2} is a good approximation.

We first look at the partition function.

Lemma 5.1.

Assume that λ(t)0{\lambda}(t)\downarrow 0. Then

limtetλ(t)ϵ2vσ(λ(t),ϵ)(t)=1.\lim_{t\uparrow\infty}{\mathrm{e}}^{t}{\lambda}(t)\epsilon^{2}v_{{\sigma}({\lambda}(t),\epsilon)}(t)=1. (5.1)
Proof.

We just use the explicit form of vσ(t)v_{\sigma}(t) given in (2.11). This gives

etλ(t)ϵ2vλ(t)(t)\displaystyle{\mathrm{e}}^{t}{\lambda}(t)\epsilon^{2}v_{{\lambda}(t)}(t) =\displaystyle= λ(t)ϵ21σ(λ,ϵ)+σ(λ,ϵ)et\displaystyle\frac{{\lambda}(t)\epsilon^{2}}{1-{\sigma}({\lambda},\epsilon)+{\sigma}({\lambda},\epsilon){\mathrm{e}}^{-t}} (5.2)
=\displaystyle=\frac{{\lambda}(t)\epsilon^{2}}{{\lambda}(t)\epsilon^{2}+O({\lambda}(t)^{2})+O({\mathrm{e}}^{-t})}=1+O({\lambda}(t)),

which implies the statement of the lemma. ∎
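As a numerical sanity check of (5.1), one can evaluate the explicit partition function used in (5.2) directly, taking {\sigma}({\lambda},\epsilon)={\mathrm{e}}^{-{\lambda}\epsilon^{2}} as in the approximation above (the helper name and all parameter values below are our own illustrative choices):

```python
import math

# partition function from (2.11), as rearranged in (5.2);
# sigma(lam, eps) = e^{-lam*eps^2} is the approximation used in Section 5
def v_sigma(t, lam, eps):
    sigma = math.exp(-lam * eps**2)
    return math.exp(-t) / (1.0 - sigma + sigma * math.exp(-t))

for lam in (1e-3, 1e-5, 1e-7):
    t = 50.0
    val = math.exp(t) * lam * v_sigma(t, lam, 1.0)   # eps = 1
    print(lam, val)   # approaches 1 as lam -> 0
```

The printed values approach 1 at rate O({\lambda}), in line with the error term in (5.2).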

From Theorem 2.2 we derive the asymptotics of the particle number.

Theorem 5.2.

Assume that λ(t)0{\lambda}(t)\downarrow 0, but t+ln(λ(t)ϵ2)t+\ln({\lambda}(t)\epsilon^{2})\uparrow\infty, as tt\uparrow\infty. Then:

  • (i)

    The number of particles at time tt times λ(t)ϵ2{\lambda}(t)\epsilon^{2}, λ(t)ϵ2n(t){\lambda}(t)\epsilon^{2}n(t), converges in distribution to an exponential random variable with parameter 11.

  • (ii)

    For any ρ\rho\in{\mathbb{R}}, the number of particles at time s(t)=t+ln(λ(t)ϵ2)+ρs(t)=t+\ln({\lambda}(t)\epsilon^{2})+\rho converges in distribution to a geometric distribution with parameter 1/(1+eρ)1/(1+{\mathrm{e}}^{\rho}).

  • (iii)

    If ρ(t)\rho(t)\uparrow\infty but ln(λ(t)ϵ2)+ρ(t)0\ln({\lambda}(t)\epsilon^{2})+\rho(t)\leq 0 , the number of particles at time s(t)=t+ln(λ(t)ϵ2)+ρ(t)s(t)=t+\ln({\lambda}(t)\epsilon^{2})+\rho(t) divided by 1+eρ(t)1+{\mathrm{e}}^{\rho(t)} converges in distribution to an exponential random variable with parameter 11.

Proof.

The proof follows easily from the explicit computations of the Laplace transforms of the particle numbers, see Eq. (2.14) and (2.19). ∎

The next theorem gives the asymptotics of the first branching time.

Theorem 5.3.

Let λ(t){\lambda}(t) be as in Theorem 5.2. Then, for any ρ\rho\in{\mathbb{R}},

limtP^λ(t),t(τ1t+ln(λ(t)ϵ2)+ρ)=1eρ+1.\lim_{t\uparrow\infty}\widehat{P}_{{\lambda}(t),t}\left({\tau}_{1}\leq t+\ln({\lambda}(t)\epsilon^{2})+\rho\right)=\frac{1}{{\mathrm{e}}^{-\rho}+1}. (5.3)
Proof.

From the explicit formula (2.22), we get that,

P^λ(t),t(τ1tr)=(1λϵ2)(eret)λϵ2+er(1λϵ2)(1+O(λ2ϵ4))=1e(tr)erλ(t)ϵ2+1(1+O(λ2ϵ4)).\widehat{P}_{{\lambda}(t),t}\left({\tau}_{1}\leq t-r\right)=\frac{(1-{\lambda}\epsilon^{2})({\mathrm{e}}^{-r}-{\mathrm{e}}^{-t})}{{\lambda}\epsilon^{2}+{\mathrm{e}}^{-r}(1-{\lambda}\epsilon^{2})}(1+O({\lambda}^{2}\epsilon^{4}))=\frac{1-{\mathrm{e}}^{-(t-r)}}{{\mathrm{e}}^{r}{\lambda}(t)\epsilon^{2}+1}(1+O({\lambda}^{2}\epsilon^{4})). (5.4)

To get something non-trivial, the first term in the denominator should be of order one. This suggests choosing r=r(t)=-\ln({\lambda}(t)\epsilon^{2})-\rho. Eq. (5.3) then follows directly. ∎
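The convergence in (5.3) can be checked numerically from the middle expression in (5.4): evaluated at r=-\ln({\lambda}\epsilon^{2})-\rho, it reproduces the logistic limit (the helper name and parameter values below are illustrative):

```python
import math

# middle expression of (5.4), dropping the (1 + O(lam^2 eps^4)) correction;
# helper name and parameters are our own choices
def first_branching_cdf(t, r, lam, eps):
    le = lam * eps**2
    return (1.0 - le) * (math.exp(-r) - math.exp(-t)) / (le + math.exp(-r) * (1.0 - le))

lam, eps, t = 1e-6, 1.0, 40.0
for rho in (-2.0, 0.0, 2.0):
    r = -math.log(lam * eps**2) - rho
    lhs = first_branching_cdf(t, r, lam, eps)
    rhs = 1.0 / (math.exp(-rho) + 1.0)
    print(rho, lhs, rhs)   # the two columns agree up to O(lam)
```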

5.1. The limiting Quasi-Markov Galton-Watson tree

Theorem 5.3 also suggests defining

τ~1τ1tln(λ(t)ϵ2).\tilde{\tau}_{1}\equiv{\tau}_{1}-t-\ln({\lambda}(t)\epsilon^{2}). (5.5)

τ~1\tilde{\tau}_{1} should be thought of as the position of the first branching seen from the standard position t+ln(λ(t)ϵ2)t+\ln({\lambda}(t)\epsilon^{2}).

To derive the asymptotics of the consecutive branching times, we just have to look at

P^λ(t),t(eu(n+1)Δ|n)=P^λ(t),tTu(n)(τ1Δ).\widehat{P}_{{\lambda}(t),t}(e_{u}(n+1)\leq{\Delta}|{\mathcal{F}}_{n})=\widehat{P}_{{\lambda}(t),t-T_{u}(n)}({\tau}_{1}\leq{\Delta}). (5.6)

For this we have from the previous computation

\widehat{P}_{{\lambda}(t),t-T_{u}(n)}\left({\tau}_{1}\leq{\Delta}\right)=\frac{1-{\mathrm{e}}^{-{\Delta}}}{{\mathrm{e}}^{t-T_{u}(n)+\ln({\lambda}(t)\epsilon^{2})}{\mathrm{e}}^{-{\Delta}}+1}. (5.7)

Recall that, e.g., T_{u}(1)=e_{u}(1)=t+\ln({\lambda}\epsilon^{2})+\rho, where \rho is a finite random variable, so that, in general, T_{u}(n)=t+\ln({\lambda}\epsilon^{2})+O(1), and the right-hand side of (5.7) converges to a non-trivial distribution.

The asymptotic results above suggest considering the branching times of the process in the limit t\uparrow\infty, {\lambda}(t)\downarrow 0, around the time t+\ln({\lambda}(t)\epsilon^{2}). We have seen that the time of the first branching shifted by this value converges in distribution to a random variable with distribution function 1/({\mathrm{e}}^{-\rho}+1) (which is supported on (-\infty,\infty)).

This suggests defining a limiting model as a quasi-Markov Galton-Watson tree with the measures

q,T(eΔ)=1eΔeΔT+1.q_{\infty,T}(e\leq{\Delta})=\frac{1-{\mathrm{e}}^{-{\Delta}}}{{\mathrm{e}}^{-{\Delta}-T}+1}. (5.8)

This gives, in particular, for the first branching time,

Q,t0(e0(0)Δ)=1eΔeΔt0+1.Q_{\infty,t_{0}}(e_{0}(0)\leq{\Delta})=\frac{1-{\mathrm{e}}^{-{\Delta}}}{{\mathrm{e}}^{-{\Delta}-t_{0}}+1}. (5.9)

We have to choose t0t_{0} to match this with the known asymptotics of the first branching time, see (5.3). It turns out that

limt0Q,t0(e0(0)t0+Δ)=limt01et0ΔeΔ+1=11+eΔ,\lim_{t_{0}\downarrow-\infty}Q_{\infty,t_{0}}(e_{0}(0)\leq-t_{0}+{\Delta})=\lim_{t_{0}\downarrow-\infty}\frac{1-{\mathrm{e}}^{t_{0}-{\Delta}}}{{\mathrm{e}}^{-{\Delta}}+1}=\frac{1}{1+{\mathrm{e}}^{-{\Delta}}}, (5.10)

for all {\Delta}\in{\mathbb{R}}. So the picture is that we start the process at t_{0}=-\infty: the first branching time is infinitely far in the future and occurs at a finite random time distributed according to (5.10). The density of this distribution is \frac{1}{4}\cosh({\Delta}/2)^{-2}. In particular, it has mean zero and variance \pi^{2}/3.
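The stated density and moments are easy to verify numerically: the derivative of the logistic distribution function 1/(1+{\mathrm{e}}^{-{\Delta}}) is {\mathrm{e}}^{-{\Delta}}/(1+{\mathrm{e}}^{-{\Delta}})^{2}=\frac{1}{4}\cosh({\Delta}/2)^{-2} (the grid and step size below are arbitrary choices):

```python
import math

# density of the limiting first-branching time: derivative of 1/(1 + e^{-x})
def density(x):
    return 0.25 / math.cosh(x / 2.0)**2

# numerical total mass and moments on a wide symmetric grid
h, L = 0.001, 40.0
xs = [-L + i * h for i in range(int(2 * L / h) + 1)]
mass = sum(density(x) for x in xs) * h
mean = sum(x * density(x) for x in xs) * h
var = sum(x * x * density(x) for x in xs) * h
print(mass, mean, var, math.pi**2 / 3.0)   # ~1, ~0, ~pi^2/3
```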

We have the following result.

Theorem 5.4.

Assume that {\lambda}(t)\downarrow 0 and t+\ln({\lambda}(t)\epsilon^{2})\uparrow\infty, as t\uparrow\infty. Then, for any {\Delta}\in{\mathbb{R}} and events {\mathcal{E}}_{1}(u)\in{\mathcal{G}}_{1},

limtP^λ(t),t({e0(0)Δ+t+ln(λ(t)ϵ2)}1((0,0))1((0,1)))\displaystyle\lim_{t\uparrow\infty}\widehat{P}_{{\lambda}(t),t}\left(\{e_{0}(0)\leq{\Delta}+t+\ln({\lambda}(t)\epsilon^{2})\}\cap{\mathcal{E}}_{1}((0,0))\cap{\mathcal{E}}_{1}((0,1))\right)
=Q,({e0(0)Δ}1((0,0))1((0,1))),\displaystyle=Q_{\infty,-\infty}\left(\{e_{0}(0)\leq{\Delta}\}\cap{\mathcal{E}}_{1}((0,0))\cap{\mathcal{E}}_{1}((0,1))\right), (5.11)

where Q,Q_{\infty,-\infty} is the law of the limiting model.

Proof.

All we need to show is that the measures q converge. But this follows from the computations indicated above. ∎

Figure 1. Scaling towards the limit process

Note further that, as long as T_{u}(n) is negative, the distribution of e_{u}(n) is concentrated around -T_{u}(n), while, as T_{u}(n) becomes positive and large, it tends to a standard exponential distribution. In fact,

𝔼[eu(n+1)|n]=(1+e+Tu(n))ln(1+eTu(n)).{\mathbb{E}}\left[e_{u}(n+1)|{\mathcal{F}}_{n}\right]=\left(1+{\mathrm{e}}^{+T_{u}(n)}\right)\ln\left(1+{\mathrm{e}}^{-T_{u}(n)}\right). (5.12)

Clearly this converges to 11, as Tu(n)T_{u}(n)\uparrow\infty and behaves like Tu(n)-T_{u}(n), as Tu(n)T_{u}(n) tends to -\infty.
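The conditional mean formula (5.12) can be checked against a numerical integral of the tail of q_{\infty,T} from (5.8) (the helper names and the discretisation below are illustrative):

```python
import math

# conditional law of the next branching increment, cf. (5.8):
# q_{infty,T}(e <= D) = (1 - e^{-D}) / (e^{-D-T} + 1)
def tail(D, T):
    """1 - q_{infty,T}(e <= D), the probability that the increment exceeds D."""
    return 1.0 - (1.0 - math.exp(-D)) / (math.exp(-D - T) + 1.0)

def mean_numeric(T, h=1e-3, L=60.0):
    # E[e] as the integral of the tail over [0, infty), midpoint rule
    n = int(L / h)
    return sum(tail((i + 0.5) * h, T) for i in range(n)) * h

for T in (-3.0, 0.0, 3.0):
    closed = (1.0 + math.exp(T)) * math.log(1.0 + math.exp(-T))
    print(T, mean_numeric(T), closed)   # the two columns agree
```

For T far negative the mean is close to -T, and for T large it is close to 1, as claimed.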

6. The distribution of the front

An obvious first question is the distribution of the maximum of BBM under the law P^λ,t\widehat{P}_{{\lambda},t}. We define, for any zz\in{\mathbb{R}},

u_{\sigma}(t,z)={\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{n(t)-1}\mathbbm{1}_{\forall_{i=1}^{n(t)}x_{i}(t)\leq z}\right]. (6.1)

Then

\widehat{P}_{{\lambda},t}\left(\forall_{i=1}^{n(t)}x_{i}(t)\leq z\right)=\frac{u_{\sigma}(t,z)}{v_{\sigma}(t)}\equiv 1-w_{\sigma}(t,z). (6.2)

Note that we use the choice 1wσ1-w_{\sigma} to be closer to the usual formulation of the F-KPP equation.

Interestingly, wσw_{\sigma} solves a time-dependent version of the F-KPP equation.

Lemma 6.1.

wσw_{\sigma} defined in (6.2) is the unique solution of the equation

{\partial}_{t}w_{\sigma}=\frac{1}{2}{\partial}_{xx}w_{\sigma}+w_{\sigma}(1-w_{\sigma})\frac{{\sigma}}{(1-{\sigma}){\mathrm{e}}^{t}+{\sigma}}, (6.3)

with initial condition wσ(0,x)=𝟙x0w_{\sigma}(0,x)=\mathbbm{1}_{x\leq 0}.

Proof.

In complete analogy to the derivation of the F-KPP equation (see, e.g. [8]), uσu_{\sigma} satisfies the recursive equation

u_{\sigma}(t,z)={\mathrm{e}}^{-t}\Phi_{t}(z)+\int_{0}^{t}ds\,{\mathrm{e}}^{-(t-s)}\int dy\frac{{\mathrm{e}}^{-\frac{y^{2}}{2(t-s)}}}{\sqrt{2\pi(t-s)}}{\sigma}({\lambda},\epsilon)u_{\sigma}(s,z-y)^{2}, (6.4)

where Φt(z)=zex22t2πt𝑑x\Phi_{t}(z)=\int_{-\infty}^{z}\frac{{\mathrm{e}}^{-\frac{x^{2}}{2t}}}{\sqrt{2\pi t}}dx is the probability that a single Brownian motion at time tt is smaller than zz. Letting H(t,x)etx22t2πtH(t,x)\equiv\frac{{\mathrm{e}}^{-t-\frac{x^{2}}{2t}}}{\sqrt{2\pi t}}, we can write this as

u_{\sigma}(t,z)=\int_{-\infty}^{\infty}dy\,H(t,z-y)\mathbbm{1}_{y\geq 0}+\int_{0}^{t}ds\int dy\,H(s,y){\sigma}({\lambda},\epsilon)u_{\sigma}(t-s,z-y)^{2}. (6.5)

Note that H is the Green function of the differential operator {\partial}_{t}-\frac{1}{2}{\partial}_{xx}+1, so (6.5) is the mild formulation of the partial differential equation

{\partial}_{t}u_{\sigma}(t,z)=\frac{1}{2}{\partial}_{zz}u_{\sigma}(t,z)-u_{\sigma}(t,z)+{\sigma}({\lambda},\epsilon)u_{\sigma}(t,z)^{2}, (6.6)

with initial condition uσ(0,z)=𝟙z0u_{\sigma}(0,z)=\mathbbm{1}_{z\geq 0}. This equation is the F-KPP equation if σ(λ,ϵ)=1{\sigma}({\lambda},\epsilon)=1, i.e. if λ=0{\lambda}=0, and looks similar to it in general. Hence,

twσ\displaystyle{\partial}_{t}w_{\sigma} =\displaystyle= tuσvσ+uσtvσvσ2=12xxwσ+σwσ(1wσ)vσ\displaystyle-\frac{{\partial}_{t}u_{\sigma}}{v_{\sigma}}+\frac{u_{\sigma}{\partial}_{t}v_{\sigma}}{v_{\sigma}^{2}}=\frac{1}{2}{\partial}_{xx}w_{\sigma}+{\sigma}w_{\sigma}(1-w_{\sigma})v_{\sigma} (6.7)
=\displaystyle= 12xxwσ+wσ(1wσ)σ(1σ)et+σ,\displaystyle\frac{1}{2}{\partial}_{xx}w_{\sigma}+w_{\sigma}(1-w_{\sigma})\frac{{\sigma}}{(1-{\sigma}){\mathrm{e}}^{t}+{\sigma}},

where we used the explicit form of vσv_{\sigma} from (2.11). ∎

Note that (6.3) is a time-dependent version of the F-KPP equation, where the non-linear term is modulated down over time. Time dependent F-KPP equations have been studied in the past, see, e.g. [25, 18, 24], but we did not find this specific example in the literature. For small λ{\lambda}, (6.3) becomes

twσ=12xxwσ+wσ(1wσ)1+O(λϵ2)λϵ2et+1.{\partial}_{t}w_{\sigma}=\frac{1}{2}{\partial}_{xx}w_{\sigma}+w_{\sigma}(1-w_{\sigma})\frac{1+O({\lambda}\epsilon^{2})}{{\lambda}\epsilon^{2}{\mathrm{e}}^{t}+1}. (6.8)

For future use, note that (6.8) is a special case of a class of F-KPP equations of the form

tψ=12xxψ+g(t)ψ(1ψ),{\partial}_{t}\psi=\frac{1}{2}{\partial}_{xx}\psi+g(t)\psi(1-\psi), (6.9)

where g:+g:{\mathbb{R}}\rightarrow{\mathbb{R}}_{+}. We will be interested in the case when gg is bounded, monotone decreasing, and integrable.
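To illustrate the behaviour of solutions of (6.9), here is a minimal explicit finite-difference sketch (the grid, the step sizes, the value of {\sigma}, and the boundary treatment are illustrative choices, not from the paper):

```python
import math

def step_fkpp(psi, g_t, dx, dt):
    # one explicit Euler step of  d_t psi = (1/2) d_xx psi + g(t) psi (1 - psi)
    n = len(psi)
    new = psi[:]
    for i in range(1, n - 1):
        lap = (psi[i - 1] - 2.0 * psi[i] + psi[i + 1]) / dx**2
        new[i] = psi[i] + dt * (0.5 * lap + g_t * psi[i] * (1.0 - psi[i]))
    new[0], new[-1] = new[1], new[-2]        # crude Neumann boundaries
    return new

# Heaviside-type initial data on [-20, 20]; g(t) = sigma/((1-sigma)e^t + sigma)
dx, dt, sigma = 0.1, 0.002, 0.99
xs = [-20.0 + dx * i for i in range(401)]
psi = [1.0 if x <= 0.0 else 0.0 for x in xs]
t = 0.0
while t < 5.0:
    g_t = sigma / ((1.0 - sigma) * math.exp(t) + sigma)
    psi = step_fkpp(psi, g_t, dx, dt)
    t += dt
# the level-1/2 point (the "front") should have moved to the right of the origin
front = next(x for x, p in zip(xs, psi) if p < 0.5)
print(front)
```

While g(t)\approx 1 the front spreads at speed close to \sqrt{2}; as g decays, the front slows down, which is the regime change analysed below.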

The key tool for analysing solutions of (6.3) is the Feynman-Kac representation for ψ\psi, see Bramson [10].

Lemma 6.2.

If ψ\psi is a solution of the equation (6.9) with initial condition ψ(0,x)=ρ(x)\psi(0,x)=\rho(x), then ψ\psi satisfies

ψ(t,x)=𝔼x[exp(0tg(ts)(1ψ(ts,Bs))𝑑s)ρ(Bt)],\psi(t,x)={\mathbb{E}}_{x}\left[\exp\left(\int_{0}^{t}g(t-s)(1-\psi(t-s,B_{s}))ds\right)\rho(B_{t})\right], (6.10)

where BB is Brownian motion starting in xx.

The strategy used by Bramson to exploit this representation is to insert a priori bounds on \psi into the right-hand side of the equation in order to get sharp upper and lower bounds. Here we want to do the same, but we need to take into account the specifics of the function g. Going back to the specific case (6.8), g remains close to 1 for a fairly long time (of order -\ln({\lambda}\epsilon^{2})) and then decays exponentially to zero with rate 1. Therefore, we expect that initially the solution behaves like that of the F-KPP equation and approaches a travelling wave. Subsequently, the wave slows down and essentially comes to a halt. Finally, at very large times, we see a pure diffusion. We will deal with these three regimes separately, beginning with the initial phase when g(t)\sim 1.

Lemma 6.3.

Assume that g is non-increasing, bounded above by one and below by zero. Define G(t)=\int_{0}^{t}g(s)ds. Then

eG(t)tw0(t,x)wσ(t,x)w0(t,x),x,t+.{\mathrm{e}}^{G(t)-t}w_{0}(t,x)\leq w_{\sigma}(t,x)\leq w_{0}(t,x),\quad\forall x\in{\mathbb{R}},t\in{\mathbb{R}}_{+}. (6.11)
Proof.

Starting from (6.10), we see that

\displaystyle\psi(t,x) \displaystyle= {\mathbb{E}}_{x}\left[\exp\left(\int_{0}^{t}(1-\psi(t-s,B_{s}))ds+\int_{0}^{t}(g(t-s)-1)(1-\psi(t-s,B_{s}))ds\right)\rho(B_{t})\right] (6.12)
\displaystyle\geq \exp\left(\int_{0}^{t}(g(t-s)-1)ds\right){\mathbb{E}}_{x}\left[\exp\left(\int_{0}^{t}(1-\psi(t-s,B_{s}))ds\right)\rho(B_{t})\right]
\displaystyle= \exp(G(t)-t)\psi_{0}(t,x).

Here we used that (g(t-s)-1)(1-\psi(t-s,B_{s}))\geq g(t-s)-1, since g\leq 1 and 0\leq 1-\psi\leq 1. The upper bound follows in the same way from g(t-s)\leq 1. ∎

GG can be computed explicitly for g(t)=σ/((1σ)et+σ)g(t)={\sigma}/((1-{\sigma}){\mathrm{e}}^{t}+{\sigma}), namely

G(t)=t-\ln\left(1+(1/{\sigma}-1){\mathrm{e}}^{t}\right)+\ln(1/{\sigma})=t-\ln\left({\sigma}+(1-{\sigma}){\mathrm{e}}^{t}\right). (6.13)

Notice that

limtG(t)=ln(1σ).\lim_{t\uparrow\infty}G(t)=-\ln(1-{\sigma}). (6.14)

Define, for δ>0{\delta}>0,

τδsup{t>0:1g(t)δ}.{\tau}_{\delta}\equiv\sup\left\{t>0:1-g(t)\leq{\delta}\right\}. (6.15)

Obviously,

σ(1σ)eτδ+σ=1δ,\frac{{\sigma}}{(1-{\sigma}){\mathrm{e}}^{{\tau}_{\delta}}+{\sigma}}=1-{\delta}, (6.16)

so

τδ=ln(1/σ1)ln(1/δ1).{\tau}_{\delta}=-\ln(1/{\sigma}-1)-\ln(1/{\delta}-1). (6.17)

Finally, G(τδ)=τδ+ln(1δ)+ln(1/σ)G({\tau}_{\delta})={\tau}_{\delta}+\ln(1-{\delta})+\ln(1/{\sigma}).
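These identities are easy to verify numerically (the helper names and parameter values are illustrative):

```python
import math

def g(t, sigma):
    # reaction strength from (6.3)
    return sigma / ((1.0 - sigma) * math.exp(t) + sigma)

def G_closed(t, sigma):
    # closed form (6.13): G(t) = t - ln(sigma + (1 - sigma) e^t)
    return t - math.log(sigma + (1.0 - sigma) * math.exp(t))

sigma, delta = 0.999, 0.3
# tau_delta from (6.17): last time at which g is still >= 1 - delta
tau = -math.log(1.0 / sigma - 1.0) - math.log(1.0 / delta - 1.0)
print(g(tau, sigma), 1.0 - delta)                            # equal, cf. (6.16)
print(G_closed(tau, sigma),
      tau + math.log(1.0 - delta) + math.log(1.0 / sigma))   # equal
# closed form against a numerical integral of g
h = 1e-4
n = int(tau / h)
G_num = sum(g((i + 0.5) * h, sigma) for i in range(n)) * h
print(G_num, G_closed(n * h, sigma))                         # agree
```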

In the limit λ0{\lambda}\downarrow 0, we get

τδln(λϵ2)ln(1/δ1),{\tau}_{\delta}\sim-\ln({\lambda}\epsilon^{2})-\ln(1/{\delta}-1), (6.18)
G(τδ)τδln(1δ),G({\tau}_{\delta})-{\tau}_{\delta}\sim\ln(1-{\delta}), (6.19)

and

limtG(t)ln(λϵ2).\lim_{t\uparrow\infty}G(t)\sim-\ln({\lambda}\epsilon^{2}). (6.20)

We see that, as λ0{\lambda}\downarrow 0, τδ{\tau}_{\delta}\uparrow\infty. This allows us to deduce the precise behaviour of the solution at this time via Bramson’s results.

Lemma 6.4.

Let w_{\sigma} satisfy (6.3) with Heaviside initial condition. Then, as {\lambda}\downarrow 0,

(1δ)v0(x)wσ(τδ,x+m(τδ))v0(x),(1-{\delta})v_{0}(x)\leq w_{\sigma}\left({\tau}_{\delta},x+m({\tau}_{\delta})\right)\leq v_{0}(x), (6.21)

where v0v_{0} is a travelling wave of the F-KPP equation with speed 2\sqrt{2}

12xxv0+2xv0+v0(1v0)=0,\frac{1}{2}{\partial}_{xx}v_{0}+\sqrt{2}{\partial}_{x}v_{0}+v_{0}(1-v_{0})=0, (6.22)

and m(t)2t322lntm(t)\equiv\sqrt{2}t-\frac{3}{2\sqrt{2}}\ln t.

Proof.

This follows immediately from Bramson’s theorems A and B in [10], Lemma 6.3, and (6.19). ∎

Next we look at the behaviour of the solution for times when g(t)1g(t)\ll 1.

Lemma 6.5.

Let ψ\psi solve (6.9) and gg be integrable. Define, for Δ>0{\Delta}>0, TΔT_{\Delta} by

TΔ=inf{t>0:tg(s)𝑑sΔ}.T_{\Delta}=\inf\left\{t>0:\int_{t}^{\infty}g(s)ds\leq{\Delta}\right\}. (6.23)

Then, for t>TΔt>T_{\Delta},

𝔼x[wσ(TΔ,BtTΔ)]wσ(t,x)eΔ𝔼x[wσ(TΔ,BtTΔ)],{\mathbb{E}}_{x}\left[w_{\sigma}(T_{\Delta},B_{t-T_{\Delta}})\right]\leq w_{\sigma}(t,x)\leq{\mathrm{e}}^{\Delta}{\mathbb{E}}_{x}\left[w_{\sigma}(T_{\Delta},B_{t-T_{\Delta}})\right], (6.24)

where BB is Brownian motion started in xx.

Proof.

We have that, for tTΔt\geq T_{\Delta},

wσ(t,x)=𝔼x[exp(0tTΔg(ts)(1wσ(ts,Bs))𝑑s)wσ(TΔ,BtTΔ)].w_{\sigma}(t,x)={\mathbb{E}}_{x}\left[\exp\left(\int_{0}^{t-T_{\Delta}}g(t-s)(1-w_{\sigma}(t-s,B_{s}))ds\right)w_{\sigma}(T_{\Delta},B_{t-T_{\Delta}})\right]. (6.25)

The exponent is trivially bounded by

0\leq\int_{0}^{t-T_{\Delta}}g(t-s)(1-w_{\sigma}(t-s,B_{s}))ds\leq\int_{0}^{t-T_{\Delta}}g(t-s)ds=\int_{T_{\Delta}}^{t}g(s)ds\leq{\Delta}. (6.26)

Inserting these bounds into (6.25) gives (6.24). ∎

Note that in our case, TΔT_{\Delta} is determined by

G()G(TΔ)=Δ.G(\infty)-G(T_{\Delta})={\Delta}. (6.27)

But

G()G(TΔ)\displaystyle G(\infty)-G(T_{\Delta}) =\displaystyle= ln(1σ)TΔ+ln(σ+(1σ)eTΔ)\displaystyle-\ln(1-{\sigma})-T_{\Delta}+\ln\left({\sigma}+(1-{\sigma}){\mathrm{e}}^{T_{\Delta}}\right)
=\displaystyle= ln(1σ)+ln(σeTΔ+(1σ))=ln(σeTΔ/(1σ)+1)\displaystyle-\ln(1-{\sigma})+\ln\left({\sigma}{\mathrm{e}}^{-T_{\Delta}}+(1-{\sigma})\right)=\ln\left({\sigma}{\mathrm{e}}^{-T_{\Delta}}/(1-{\sigma})+1\right)

Now set TΔ=ln(1σ)+zT_{\Delta}=-\ln(1-{\sigma})+z. Then

G()G(TΔ)=ln(σez+1).G(\infty)-G(T_{\Delta})=\ln\left({\sigma}{\mathrm{e}}^{-z}+1\right). (6.29)

Hence

T_{\Delta}=-\ln(1/{\sigma}-1)-\ln\left({\mathrm{e}}^{\Delta}-1\right)\sim-\ln(1/{\sigma}-1)+\ln(1/{\Delta}), (6.30)

for small Δ{\Delta}. In particular, we have that

T_{\Delta}-{\tau}_{\delta}\sim-\ln\left({\mathrm{e}}^{\Delta}-1\right)+\ln(1/{\delta}-1)\sim\ln(1/{\Delta})+\ln(1/{\delta}), (6.31)

for small Δ{\Delta} and δ{\delta}.
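The defining relation (6.27) can be checked against the closed form for T_{\Delta} (the helper name and parameter values are illustrative):

```python
import math

def G_tail(T, sigma):
    # G(infty) - G(T) = ln(sigma e^{-T}/(1 - sigma) + 1), cf. (6.29)
    return math.log(sigma * math.exp(-T) / (1.0 - sigma) + 1.0)

sigma, Delta = 0.999, 0.1
# closed form for T_Delta, cf. (6.30)
T = -math.log(1.0 / sigma - 1.0) - math.log(math.exp(Delta) - 1.0)
print(T, G_tail(T, sigma), Delta)   # G(infty) - G(T_Delta) equals Delta
```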

What is left to do is to control the evolution of the solution between times {\tau}_{\delta} and T_{\Delta}. The Feynman-Kac representation gives, for {\tau}_{\delta}\leq t\leq T_{\Delta},

w_{\sigma}(t,x)={\mathbb{E}}_{x}\left[\exp\left(\int_{0}^{t-{\tau}_{\delta}}g(t-s)(1-w_{\sigma}(t-s,B_{s}))ds\right)w_{\sigma}({\tau}_{\delta},B_{t-{\tau}_{\delta}})\right]. (6.32)

To start, the following bounds are straightforward from the Feynman-Kac representation and the fact that wσ[0,1]w_{\sigma}\in[0,1].

Lemma 6.6.

With the notation above, for τδtTΔ{\tau}_{\delta}\leq t\leq T_{\Delta},

𝔼x[wσ(τδ,Btτδ)]wσ(t,x)eG(t)G(τδ)𝔼x[wσ(τδ,Btτδ)]1,{\mathbb{E}}_{x}\left[w_{\sigma}({\tau}_{\delta},B_{t-{\tau}_{\delta}})\right]\leq w_{\sigma}(t,x)\leq{\mathrm{e}}^{G(t)-G({\tau}_{\delta})}{\mathbb{E}}_{x}\left[w_{\sigma}({\tau}_{\delta},B_{t-{\tau}_{\delta}})\right]\land 1, (6.33)

where

G(t)-G({\tau}_{\delta})=-\ln\left((1-{\delta}){\mathrm{e}}^{-(t-{\tau}_{\delta})}+{\delta}\right), (6.34)

and eG(TΔ)G(τδ)1δeΔ{\mathrm{e}}^{G(T_{\Delta})-G({\tau}_{\delta})}\sim\frac{1}{{\delta}}{\mathrm{e}}^{-{\Delta}}.

Remark.

We expect the upper bound to be closer to the correct answer.

We now combine all estimates. This gives, for tTΔt\geq T_{\Delta},

(1-{\delta}){\mathbb{E}}_{0}\left[v_{0}(x+B_{t-{\tau}_{\delta}})\right]\leq w_{\sigma}(t,x+m({\tau}_{\delta}))\leq(1/{\delta}){\mathbb{E}}_{0}\left[v_{0}(x+B_{t-{\tau}_{\delta}})\right]\land 1. (6.35)

If we choose, e.g., {\delta}=1/2, we see that the upper and lower bounds only differ by a factor 4.

The expectation over v0v_{0} can be bounded using the known tail estimates (see [10] and [14]),

v0(x)\displaystyle v_{0}(x) \displaystyle\leq Cxe2x,ifx>1,\displaystyle Cx{\mathrm{e}}^{-\sqrt{2}x},\;\hbox{if}\,x>1, (6.36)
v0(x)\displaystyle v_{0}(x) \displaystyle\geq 1Ce(22)x,ifx<1,\displaystyle 1-C{\mathrm{e}}^{(2-\sqrt{2})x},\;\hbox{if}\,x<-1, (6.37)

However, the resulting expressions are not very nice and not very precise, so we leave their computation to the interested reader.

We conclude this section by summarising the behaviour of the solution as a function of t when {\lambda}\downarrow 0.

Theorem 6.7.

Assume that {\lambda}\downarrow 0. Let 0<{\delta}<1 and let {\tau}_{\delta} be defined as in (6.15). Then

{\tau}_{\delta}=-\ln(1/{\sigma}-1)-\ln(1/{\delta}-1)\sim-\ln({\lambda}\epsilon^{2})-\ln(1/{\delta}-1). (6.38)

Moreover, for Δ>0{\Delta}>0, let TΔT_{\Delta} be defined in (6.23). Then,

TΔ=ln(1/σ1)ln(eΔ1)ln(λϵ2)+ln(1/Δ),T_{\Delta}=-\ln(1/{\sigma}-1)-\ln\left({\mathrm{e}}^{\Delta}-1\right)\sim-\ln({\lambda}\epsilon^{2})+\ln(1/{\Delta}), (6.39)

for λ0{\lambda}\downarrow 0 and Δ{\Delta} small. Then the solution wσw_{\sigma} of (6.3) can be described as follows.

  • (i)

    For 0tτδ0\ll t\leq{\tau}_{\delta}

    wσ(t,x+m(t))v0(x),w_{\sigma}(t,x+m(t))\sim v_{0}(x), (6.40)

    where v0v_{0} is the solution of (6.22).

  • (ii)

    For τδ<t<TΔ{\tau}_{\delta}<t<T_{\Delta} we have

    (1δ)𝔼x[v0(Btτδ)]wσ(t,x+m(τδ))eG(t)G(τδ)𝔼x[v0(Btτδ)]1,(1-{\delta}){\mathbb{E}}_{x}\left[v_{0}(B_{t-{\tau}_{\delta}})\right]\leq w_{\sigma}(t,x+m({\tau}_{\delta}))\leq{\mathrm{e}}^{G(t)-G({\tau}_{\delta})}{\mathbb{E}}_{x}\left[v_{0}(B_{t-{\tau}_{\delta}})\right]\land 1, (6.41)
  • (iii)

    For tTΔt\geq T_{\Delta}, we have

    (1δ)𝔼x[v0(Btτδ)]wσ(t,x+m(τδ))1δ𝔼x[v0(Btτδ)]1.(1-{\delta}){\mathbb{E}}_{x}\left[v_{0}(B_{t-{\tau}_{\delta}})\right]\leq w_{\sigma}(t,x+m({\tau}_{\delta}))\leq\frac{1}{{\delta}}{\mathbb{E}}_{x}\left[v_{0}(B_{t-{\tau}_{\delta}})\right]\land 1. (6.42)

This picture corresponds to the geometric picture established in the preceding sections, in a time-reversed way: the diffusive behaviour at large times corresponds to the Brownian motion up to the time of the first branching (\sim t+\ln({\lambda}\epsilon^{2})), the travelling wave behaviour at times up to {\tau}_{\delta} corresponds to the almost free branching at the late times after t+\ln({\lambda}\epsilon^{2}), and the finite time interval between {\tau}_{\delta} and T_{\Delta}, when the travelling wave comes to a halt, corresponds to the first branching steps that are asymptotically described by the limiting quasi-Markov Galton-Watson tree of Section 5.

7. Comparison to the full model

We will show that the original model, with the interaction given by I_t (see Eq. (1.2)), behaves similarly to the simplified model. In particular, the first branching happens at least as late as in the simplified model.

Lemma 7.1.

Let τ1{\tau}_{1} be the first branching time. Then

Pt,σ(τ1tr)σ(eret)(1σ(1et))(1σ(1er)).P_{t,{\sigma}}\left({\tau}_{1}\leq t-r\right)\leq\frac{{\sigma}({\mathrm{e}}^{-r}-{\mathrm{e}}^{-t})}{(1-{\sigma}(1-{\mathrm{e}}^{-t}))(1-{\sigma}(1-{\mathrm{e}}^{-r}))}. (7.1)

For λ0{\lambda}\downarrow 0 and tt\uparrow\infty, this behaves as

Pt,σ(τ1tr)erλϵ2(λϵ2+er)=1λ2ϵ4er+λϵ2.P_{t,{\sigma}}\left({\tau}_{1}\leq t-r\right)\leq\frac{{\mathrm{e}}^{-r}}{{\lambda}\epsilon^{2}({\lambda}\epsilon^{2}+{\mathrm{e}}^{-r})}=\frac{1}{{\lambda}^{2}\epsilon^{4}{\mathrm{e}}^{r}+{\lambda}\epsilon^{2}}. (7.2)

and so

P_{t,{\sigma}}\left({\tau}_{1}\leq t+2\ln({\lambda}(t)\epsilon^{2})-\rho\right)\leq{\mathrm{e}}^{-\rho}. (7.3)
Proof.

Set

V(t,r)𝔼[eλIt(x)𝟙τ1r].V(t,r)\equiv{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{t}(x)}\mathbbm{1}_{{\tau}_{1}\leq r}\right]. (7.4)

Then

P_{t,{\sigma}}\left({\tau}_{1}\leq t-r\right)=\frac{V(t,t-r)}{{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{t}(x)}\right]}\leq{\mathrm{e}}^{t}V(t,t-r), (7.5)

where we simply bounded the denominator by the probability that there is no branching up to time tt. Inserting the explicit form of vσ(t,tr)v_{\sigma}(t,t-r) gives (7.1). The asymptotic formulae for small λ{\lambda} are straightforward. ∎

One can improve the bound above as follows. Instead of bounding the denominator just by the probability that there is no branching up to time t, we can keep the event that there is no branching up to time t-q and bound the interaction of the remaining piece uniformly by

Iq(x)0qn(s)(n(s)1)𝑑s.I_{q}(x)\leq\int_{0}^{q}n(s)(n(s)-1)ds. (7.6)

Hence the denominator becomes

𝔼[eλIt(x)]𝔼[eλIt(x)𝟙τ1>tq]=et+q𝔼[eλIq(x)].{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{t}(x)}\right]\geq{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{t}(x)}\mathbbm{1}_{{\tau}_{1}>t-q}\right]={\mathrm{e}}^{-t+q}{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{q}(x)}\right]. (7.7)

Now

𝔼[eλIq(x)]\displaystyle{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{q}(x)}\right] \displaystyle\geq 𝔼[eλ0qn(s)(n(s)1)𝑑s𝟙n(s)c𝔼n(s)sq]\displaystyle{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}\int_{0}^{q}n(s)(n(s)-1)ds}\mathbbm{1}_{n(s)\leq c{\mathbb{E}}n(s)\forall_{s\leq q}}\right] (7.8)
\displaystyle\geq eλ0qc𝔼n(s)(c𝔼n(s)1)𝑑s𝔼[𝟙n(s)c𝔼n(s)sq].\displaystyle{\mathrm{e}}^{-{\lambda}\int_{0}^{q}c{\mathbb{E}}n(s)(c{\mathbb{E}}n(s)-1)ds}{\mathbb{E}}\left[\mathbbm{1}_{n(s)\leq c{\mathbb{E}}n(s)\forall_{s\leq q}}\right].

Since n(s)/{\mathbb{E}}n(s) is a positive martingale, Doob's maximal inequality gives, for c>1,

{\mathbb{P}}\left(\exists\,s\leq q:n(s)>c\,{\mathbb{E}}n(s)\right)\leq 1/c. (7.9)

Moreover, 𝔼[n(s)]=es{\mathbb{E}}[n(s)]={\mathrm{e}}^{s}, and hence

{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{t}(x)}\right]\geq{\mathrm{e}}^{-t+q}(1-1/c){\mathrm{e}}^{-{\lambda}c^{2}{\mathrm{e}}^{2q}/2}. (7.10)

Finally, we make the close-to-optimal choice q=\frac{1}{2}\ln(1/(c^{2}{\lambda})), which yields

{\mathbb{E}}\left[{\mathrm{e}}^{-{\lambda}I_{t}(x)}\right]\geq{\mathrm{e}}^{-t+\frac{1}{2}\ln(1/(c^{2}{\lambda}))}(1-1/c){\mathrm{e}}^{-1}={\mathrm{e}}^{-t}({\lambda}c^{2})^{-1/2}(1-1/c)/{\mathrm{e}}. (7.11)

Thus, choosing c=2c=2, (7.3) improves to

Pt,σ(τ1tr)4eλλ2ϵ4er+λϵ2.P_{t,{\sigma}}\left({\tau}_{1}\leq t-r\right)\leq 4{\mathrm{e}}\frac{\sqrt{{\lambda}}}{{\lambda}^{2}\epsilon^{4}{\mathrm{e}}^{r}+{\lambda}\epsilon^{2}\;}. (7.12)

Hence,

Pt,σ(τ1t+ln(λ3/2ϵ4)ρ)4eeρ+λϵ24eρ+1.P_{t,{\sigma}}\left({\tau}_{1}\leq t+\ln({\lambda}^{3/2}\epsilon^{4})-\rho\right)\leq\frac{4{\mathrm{e}}}{{\mathrm{e}}^{\rho}+\sqrt{{\lambda}}\epsilon^{2}}\sim 4{\mathrm{e}}^{-\rho+1}. (7.13)

This is still not perfect for small λ{\lambda}, but it seems very hard to improve the bound on the denominator much more. Improvement would need to come from a matching bound in the numerator.
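The two probabilistic inputs used above, {\mathbb{E}}[n(s)]={\mathrm{e}}^{s} and the Doob bound (7.9), can be checked by a small Monte Carlo simulation of the underlying Yule process (all parameters, the seed, and the helper name below are our own choices):

```python
import math, random

def yule_population(t, rng):
    # Yule process: each particle branches into two at rate 1;
    # returns n(t) and the running max of the martingale n(s) e^{-s} on [0, t]
    n, s, max_mart = 1, 0.0, 1.0
    while True:
        s += rng.expovariate(n)       # waiting time: min of n rate-1 clocks
        if s > t:
            return n, max_mart
        n += 1
        max_mart = max(max_mart, n * math.exp(-s))

rng = random.Random(1)
t, reps, c = 3.0, 20000, 2.0
mean_n, exceed = 0.0, 0
for _ in range(reps):
    n, m = yule_population(t, rng)
    mean_n += n / reps
    exceed += m > c
print(mean_n, math.exp(t))        # E[n(t)] = e^t
print(exceed / reps, 1.0 / c)     # Doob: P(sup_{s<=t} n(s)e^{-s} > c) <= 1/c
```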

8. The case p0>0p_{0}>0

The behaviour of the model is very different if particles are allowed to die. In that case, the process dies out almost surely. However, it is still interesting to see how exactly this happens. To simplify things, we assume in the sequel that p_{0}>0 and p_{2}=1-p_{0}. Note first that the approximate penalty function changes slightly, since now the number of branching events is no longer related to the number of particles. Let m(t) and d(t) denote the numbers of (binary) branchings and deaths, respectively, that occurred up to time t. Clearly, n(t)=1+m(t)-d(t). We then have

I_{t}(x)\geq\sum_{i=1}^{m(t)}{\tau}_{\epsilon}(i). (8.1)

We consider first the partition function v_{\sigma}(t)={\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{m(t)}\right]. The analogue of Lemma 2.1 is as follows.

Lemma 8.1.

Let v~σ(t)\tilde{v}_{\sigma}(t) be the solution of the ordinary differential equation

ddtv~σ(t)=σ(λ,ϵ)p2v~σ(t)2v~σ(t)+p0,\frac{d}{dt}\tilde{v}_{\sigma}(t)={\sigma}({\lambda},\epsilon)p_{2}\tilde{v}_{\sigma}(t)^{2}-\tilde{v}_{\sigma}(t)+p_{0}, (8.2)

with initial condition v~σ(0)=1\tilde{v}_{\sigma}(0)=1. Then vσ(t)=v~σ(λ,ϵ)(t)v_{\sigma}(t)=\tilde{v}_{{\sigma}({\lambda},\epsilon)}(t).

Proof.

We proceed as in the proof of Lemma 2.1. Since now the first event could be either a branching (with probability p2p_{2}) or a death (with probability p0p_{0}), we get the recursion

vσ(t)=et+0t𝑑se(ts)(p2σ(λ,ϵ)vσ(s)2+p0).v_{\sigma}(t)={\mathrm{e}}^{-t}+\int_{0}^{t}ds{\mathrm{e}}^{-(t-s)}\left(p_{2}{\sigma}({\lambda},\epsilon)v_{\sigma}(s)^{2}+p_{0}\right). (8.3)

Differentiating yields the asserted claim. ∎

The presence of the term p0>0p_{0}>0 eliminates the fixpoint 0 in equation (8.2). In fact, (8.2) has the two fixpoints

v^{\pm}_{\sigma}\equiv\frac{1}{2{\sigma}p_{2}}\left(1\pm\sqrt{1-4{\sigma}p_{2}+4{\sigma}p_{2}^{2}}\right). (8.4)

Note that for σ(λ,ϵ)=1{\sigma}({\lambda},\epsilon)=1, this simplifies to

v0±=12p2(1±(12p2)2),v^{\pm}_{0}=\frac{1}{2p_{2}}\left(1\pm\sqrt{(1-2p_{2})^{2}}\right), (8.5)

which gives the fixpoints 1 and p_{0}/p_{2}. For {\sigma}({\lambda},\epsilon)=1 we clearly have v_{0}(t)=1 for all t.

If λ>0{\lambda}>0, but λ1{\lambda}\ll 1 (i.e. σ(λ,ϵ)<1{\sigma}({\lambda},\epsilon)<1, but 1σ(λ,ϵ)1-{\sigma}({\lambda},\epsilon) small), we can expand

vσ±={12σ(λ,ϵ)p2(1±(2σ(λ,ϵ)p21)1+4p22σ(λ,ϵ)(1σ(λ,ϵ))(2p2σ(λ,ϵ)1)2),ifp2>1/2,12σ(λ,ϵ)p2(1±(12σ(λ,ϵ)p2)1+4p22σ(λ,ϵ)(1σ(λ,ϵ))(12p2σ(λ,ϵ))2),ifp21/2.v_{\sigma}^{\pm}=\begin{cases}\frac{1}{2{\sigma}({\lambda},\epsilon)p_{2}}\left(1\pm(2{\sigma}({\lambda},\epsilon)p_{2}-1)\sqrt{1+\frac{4p_{2}^{2}{\sigma}({\lambda},\epsilon)(1-{\sigma}({\lambda},\epsilon))}{(2p_{2}{\sigma}({\lambda},\epsilon)-1)^{2}}}\right),&\text{if}\;\;p_{2}>1/2,\\ \frac{1}{2{\sigma}({\lambda},\epsilon)p_{2}}\left(1\pm(1-2{\sigma}({\lambda},\epsilon)p_{2})\sqrt{1+\frac{4p_{2}^{2}{\sigma}({\lambda},\epsilon)(1-{\sigma}({\lambda},\epsilon))}{(1-2p_{2}{\sigma}({\lambda},\epsilon))^{2}}}\right),&\text{if}\;\;p_{2}\leq 1/2.\end{cases} (8.6)

In particular, the smaller fixpoint is

vσ{p0p2+O((1σ(λ,ϵ))2),ifp2>1/2,1p2(1σ(λ,ϵ))12p2σ(λ,ϵ),ifp21/2.v_{\sigma}^{-}\approx\begin{cases}\frac{p_{0}}{p_{2}}+O((1-{\sigma}({\lambda},\epsilon))^{2}),&\text{if}\;\;p_{2}>1/2,\\ 1-\frac{p_{2}(1-{\sigma}({\lambda},\epsilon))}{1-2p_{2}{\sigma}({\lambda},\epsilon)},&\text{if}\;\;p_{2}\leq 1/2.\end{cases} (8.7)

Now set fσ(t)vσ(t)vσf_{\sigma}(t)\equiv v_{\sigma}(t)-v^{-}_{\sigma}. Then fσf_{\sigma} satisfies the differential equation

tfσ(t)=σ(λ,ϵ)p2fσ(t)2+(2σ(λ,ϵ)p2vσ1)fσ(t),{\partial}_{t}f_{\sigma}(t)={\sigma}({\lambda},\epsilon)p_{2}f_{\sigma}(t)^{2}+(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)f_{\sigma}(t), (8.8)

with initial condition fσ(0)=1vσf_{\sigma}(0)=1-v^{-}_{\sigma}. We can solve this equation as in the case p0=0p_{0}=0. Define

f^σ(t)e(2σ(λ,ϵ)p2vσ1)tfσ(t).\hat{f}_{\sigma}(t)\equiv{\mathrm{e}}^{-(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}f_{\sigma}(t). (8.9)

Then

tf^σ(t)=σ(λ,ϵ)p2f^σ(t)2e(2σ(λ,ϵ)p2vσ1)t,{\partial}_{t}\hat{f}_{\sigma}(t)={\sigma}({\lambda},\epsilon)p_{2}\hat{f}_{\sigma}(t)^{2}{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}, (8.10)

which has the solution

f^σ(t)=111vσσ(λ,ϵ)p212σ(λ,ϵ)p2vσ(1e(2σ(λ,ϵ)p2vσ1)t).\hat{f}_{\sigma}(t)=\frac{1}{\frac{1}{1-v^{-}_{\sigma}}-\frac{{\sigma}({\lambda},\epsilon)p_{2}}{1-2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}}\left(1-{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}\right)}. (8.11)

Hence

fσ(t)=e(2σ(λ,ϵ)p2vσ1)t11vσσ(λ,ϵ)p212σ(λ,ϵ)p2vσ(1e(2σ(λ,ϵ)p2vσ1)t).f_{\sigma}(t)=\frac{{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}}{\frac{1}{1-v^{-}_{\sigma}}-\frac{{\sigma}({\lambda},\epsilon)p_{2}}{1-2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}}\left(1-{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}\right)}. (8.12)

Note that in the case ${\lambda}=0$, $v_{\sigma}(t)$ is just one, while otherwise $f_{\sigma}(t)$ decays exponentially to zero, so that $v_{\sigma}(t)\rightarrow v_{\sigma}^{-}>0$, indicating that the number of branchings in the process remains finite.
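As a numerical sanity check of the closed form (8.12), one can compare $v_{\sigma}(t)=v^{-}_{\sigma}+f_{\sigma}(t)$ with a direct Euler integration of the flow ${\partial}_{t}v={\sigma}({\lambda},\epsilon)p_{2}v^{2}-v+p_{0}$, $v(0)=1$, of which (8.12) is the explicit solution. The parameter values below are illustrative, and we again assume $p_{0}=1-p_{2}$.

```python
import math

# Illustrative values; sigma stands for sigma(lambda, eps), p0 = 1 - p2 assumed.
p2, sigma = 0.3, 0.9
p0 = 1.0 - p2
a = 2.0 * sigma * p2

v_minus = (1.0 - math.sqrt(1.0 - 4.0 * sigma * p2 * p0)) / a
rate = a * v_minus - 1.0  # the exponent 2*sigma*p2*v_minus - 1 in (8.12)

def v_closed(t):
    """v_sigma(t) = v_minus + f_sigma(t), with f_sigma given by (8.12)."""
    denom = (1.0 / (1.0 - v_minus)
             - sigma * p2 / (1.0 - a * v_minus) * (1.0 - math.exp(rate * t)))
    return v_minus + math.exp(rate * t) / denom

# Euler integration of  dv/dt = sigma*p2*v^2 - v + p0,  v(0) = 1.
T, n = 5.0, 50000
dt = T / n
v = 1.0
for _ in range(n):
    v += dt * (sigma * p2 * v * v - v + p0)

print(abs(v - v_closed(T)) < 1e-3)
```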

Next let us consider the generating function of the particle number,

wσ,γ(t)𝔼[σ(λ,ϵ)m(t)eγn(t)].w_{{\sigma},{\gamma}}(t)\equiv{\mathbb{E}}\left[{\sigma}({\lambda},\epsilon)^{m(t)}{\mathrm{e}}^{-{\gamma}n(t)}\right].\ (8.13)

We readily see that this function satisfies the equation

wσ,γ(t)=eγet+p2σ(λ,ϵ)0te(ts)wσ,γ(t)2𝑑s+p0(1et).w_{{\sigma},{\gamma}}(t)={\mathrm{e}}^{-{\gamma}}{\mathrm{e}}^{-t}+p_{2}{\sigma}({\lambda},\epsilon)\int_{0}^{t}{\mathrm{e}}^{-(t-s)}w_{{\sigma},{\gamma}}(t)^{2}ds+p_{0}\left(1-{\mathrm{e}}^{-t}\right). (8.14)

This implies the differential equation

{\partial}_{t}w_{{\sigma},{\gamma}}(t)=p_{2}{\sigma}({\lambda},\epsilon)w_{{\sigma},{\gamma}}(t)^{2}-w_{{\sigma},{\gamma}}(t)+p_{0}, (8.15)

with initial condition $w_{{\sigma},{\gamma}}(0)={\mathrm{e}}^{-{\gamma}}$. Thus, $w$ and $v$ differ only in their initial conditions, and it is therefore easy to see that
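Indeed, setting $g_{{\sigma},{\gamma}}(t)\equiv w_{{\sigma},{\gamma}}(t)-v^{-}_{\sigma}$ and using that $v^{-}_{\sigma}$ solves ${\sigma}({\lambda},\epsilon)p_{2}v^{2}-v+p_{0}=0$, the same computation as for $f_{\sigma}$ gives (a sketch of the intermediate step):

```latex
% g solves the same equation (8.8) as f, only with a different initial condition:
{\partial}_{t}g_{{\sigma},{\gamma}}(t)
  = {\sigma}({\lambda},\epsilon)p_{2}\,g_{{\sigma},{\gamma}}(t)^{2}
    + \bigl(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1\bigr)g_{{\sigma},{\gamma}}(t),
  \qquad g_{{\sigma},{\gamma}}(0) = {\mathrm{e}}^{-{\gamma}} - v^{-}_{\sigma},
```

so the formula below is obtained from (8.12) by replacing the initial value $1-v^{-}_{\sigma}$ with ${\mathrm{e}}^{-{\gamma}}-v^{-}_{\sigma}$ and adding back $v^{-}_{\sigma}$.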

wσ,γ(t)=vσ+e(2σ(λ,ϵ)p2vσ1)t1eγvσσ(λ,ϵ)p212σ(λ,ϵ)p2vσ(1e(2σ(λ,ϵ)p2vσ1)t).w_{{\sigma},{\gamma}}(t)=v_{\sigma}^{-}+\frac{{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}}{\frac{1}{{\mathrm{e}}^{-{\gamma}}-v^{-}_{\sigma}}-\frac{{\sigma}({\lambda},\epsilon)p_{2}}{1-2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}}\left(1-{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}\right)}. (8.16)

From this expression we can compute, e.g., the expected number of particles at time $t$ under the measure $\hat{P}_{{\sigma},t}$,

E^σ,t[n(t)]=γln(wσ,γ(t))|γ=0,\hat{E}_{{\sigma},t}[n(t)]=-\frac{{\partial}}{{\partial}{\gamma}}\ln\left(w_{{\sigma},{\gamma}}(t)\right)\big{|}_{{\gamma}=0}, (8.17)

which reads

\hat{E}_{{\sigma},t}[n(t)]=\frac{1}{v_{\sigma}(t)}\frac{{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}}{\left(1-\frac{{\sigma}({\lambda},\epsilon)p_{2}(1-v^{-}_{\sigma})}{1-2{\sigma}({\lambda},\epsilon)p_{2}v_{\sigma}^{-}}\left(1-{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}\right)\right)^{2}}. (8.18)

For $t\uparrow\infty$, this behaves to leading order, provided $v_{\sigma}^{-}>0$, like

1vσe(2σ(λ,ϵ)p2vσ1)t(1σ(λ,ϵ)p2(1vσ)12σ(λ,ϵ)p2vσ)2.\frac{1}{v_{\sigma}^{-}}\frac{{\mathrm{e}}^{(2{\sigma}({\lambda},\epsilon)p_{2}v^{-}_{\sigma}-1)t}}{\left(1-\frac{{\sigma}({\lambda},\epsilon)p_{2}(1-v^{-}_{\sigma})}{1-2{\sigma}({\lambda},\epsilon)p_{2}v_{\sigma}^{-}}\right)^{2}}. (8.19)

This implies that the process dies out exponentially fast unless the death rate is zero, which, of course, comes as no surprise.

Alternative computation of $\hat{E}_{{\sigma},t}[n(t)]$.

Instead of passing through the generating function for $n(t)$, we can also proceed by deriving a direct recursion for $\hat{E}_{{\sigma},t}[n(t)]$. To do so, define the un-normalised expectation

u_{\sigma}(t)={\mathbb{E}}\left[n(t){\sigma}({\lambda},\epsilon)^{m(t)}\right]. (8.20)

Clearly we have

uσ(t)=et+p20te(ts)σ(λ,ϵ)2uσ(s)vσ(s)𝑑s,u_{\sigma}(t)={\mathrm{e}}^{-t}+p_{2}\int_{0}^{t}{\mathrm{e}}^{-(t-s)}{\sigma}({\lambda},\epsilon)2u_{\sigma}(s)v_{\sigma}(s)ds, (8.21)

where we used that if the first event is a death, then $n(t)$ is zero, while if it is a branching at time $t-s$, then $n(t)=n_{1}(s)+n_{2}(s)$, where the $n_{i}$ are independent copies. This implies the differential equation

{\partial}_{t}u_{\sigma}(t)=2p_{2}{\sigma}({\lambda},\epsilon)u_{\sigma}(t)v_{\sigma}(t)-u_{\sigma}(t). (8.22)

The solution of this can be written directly as

uσ(t)=exp(0t(2p2σ(λ,ϵ)vσ(s)1)𝑑s).u_{\sigma}(t)=\exp\left(\int_{0}^{t}\left(2p_{2}{\sigma}({\lambda},\epsilon)v_{\sigma}(s)-1\right)ds\right). (8.23)

Since $v_{\sigma}$ is explicit, one can verify that this gives the same answer as (8.18).
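This verification can also be sketched numerically (illustrative parameter values; $p_{0}=1-p_{2}$ assumed): integrate $2p_{2}{\sigma}({\lambda},\epsilon)v_{\sigma}(s)-1$ with the trapezoidal rule and compare $u_{\sigma}(t)$ from (8.23) with the value implied by (8.18), namely $\hat{E}_{{\sigma},t}[n(t)]\,v_{\sigma}(t)$.

```python
import math

# Illustrative values; sigma stands for sigma(lambda, eps), p0 = 1 - p2 assumed.
p2, sigma = 0.3, 0.9
p0 = 1.0 - p2
a = 2.0 * sigma * p2

# Smaller fixpoint v_sigma^- and the exponent 2*sigma*p2*v_minus - 1.
v_minus = (1.0 - math.sqrt(1.0 - 4.0 * sigma * p2 * p0)) / a
rate = a * v_minus - 1.0

def v_closed(t):
    """v_sigma(t) = v_minus + f_sigma(t), with f_sigma given by (8.12)."""
    denom = (1.0 / (1.0 - v_minus)
             - sigma * p2 / (1.0 - a * v_minus) * (1.0 - math.exp(rate * t)))
    return v_minus + math.exp(rate * t) / denom

def integrand(s):
    # Integrand of (8.23).
    return 2.0 * p2 * sigma * v_closed(s) - 1.0

# Trapezoidal approximation of the integral in (8.23) at time T.
T, n = 3.0, 20000
dt = T / n
integral = 0.5 * dt * (integrand(0.0) + integrand(T)) \
           + dt * sum(integrand(k * dt) for k in range(1, n))
u_numeric = math.exp(integral)

# u_sigma(T) read off from (8.18): E_hat[n(T)] * v_sigma(T).
c = sigma * p2 * (1.0 - v_minus) / (1.0 - a * v_minus)
u_closed = math.exp(rate * T) / (1.0 - c * (1.0 - math.exp(rate * T))) ** 2

print(abs(u_numeric - u_closed) < 1e-6)
```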

References

  • [1] L. Addario-Berry, J. Berestycki, and S. Penington. Branching Brownian motion with decay of mass and the nonlocal Fisher-KPP equation. Comm. Pure Appl. Math., 72(12):2487–2577, 2019.
  • [2] L. Addario-Berry and S. Penington. The front location in branching Brownian motion with decay of mass. Ann. Probab., 45(6A):3752–3794, 2017.
  • [3] S. R. Adke and J. E. Moyal. A birth, death, and diffusion process. J. Math. Anal. Appl., 7:209–224, 1963.
  • [4] E. Aïdékon, J. Berestycki, E. Brunet, and Z. Shi. Branching Brownian motion seen from its tip. Probab. Theor. Rel. Fields, 157:405–451, 2013.
  • [5] L.-P. Arguin, A. Bovier, and N. Kistler. Genealogy of extremal particles of branching Brownian motion. Comm. Pure Appl. Math., 64(12):1647–1676, 2011.
  • [6] L.-P. Arguin, A. Bovier, and N. Kistler. Poissonian statistics in the extremal process of branching Brownian motion. Ann. Appl. Probab., 22(4):1693–1711, 2012.
  • [7] L.-P. Arguin, A. Bovier, and N. Kistler. The extremal process of branching Brownian motion. Probab. Theor. Rel. Fields, 157:535–574, 2013.
  • [8] A. Bovier. Gaussian Processes on Trees. From Spin Glasses to Branching Brownian Motion, volume 163 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2017.
  • [9] M. D. Bramson. Maximal displacement of branching Brownian motion. Comm. Pure Appl. Math., 31(5):531–581, 1978.
  • [10] M. D. Bramson. Convergence of solutions of the Kolmogorov equation to travelling waves. Mem. Amer. Math. Soc., 44(285):iv+190, 1983.
  • [11] E. Brunet and B. Derrida. Shift in the velocity of a front due to a cutoff. Phys. Rev. E (3), 56(3, part A):2597–2604, 1997.
  • [12] B. Chauvin and A. Rouault. KPP equation and supercritical branching Brownian motion in the subcritical speed area. Application to spatial trees. Probab. Theory Related Fields, 80(2):299–314, 1988.
  • [13] B. Chauvin and A. Rouault. Supercritical branching Brownian motion and K-P-P equation in the critical speed-area. Math. Nachr., 149:41–59, 1990.
  • [14] A. Cortines, L. Hartung, and O. Louidor. The structure of extreme level sets in branching Brownian motion. Ann. Probab., 47(4):2257–2302, 2019.
  • [15] A. Cortines and B. Mallein. A $N$-branching random walk with random selection. ALEA Lat. Am. J. Probab. Math. Stat., 14(1):117–137, 2017.
  • [16] J. Engländer. The center of mass for spatial branching processes and an application for self-interaction. Electron. J. Probab., 15:no. 63, 1938–1970, 2010.
  • [17] J. Engländer. Spatial branching in random environments and with interaction, volume 20 of Advanced Series on Statistical Science & Applied Probability. World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2015.
  • [18] F. Hamel, J. Nolen, J.-M. Roquejoffre, and L. Ryzhik. A short proof of the logarithmic Bramson correction in Fisher-KPP equations. Netw. Heterog. Media, 8(1):275–289, 2013.
  • [19] S. P. Lalley and T. Sellke. A conditional limit theorem for the frontier of a branching Brownian motion. Ann. Probab., 15(3):1052–1061, 1987.
  • [20] P. Maillard and J. Schweinsberg. Yaglom-type limit theorems for branching Brownian motion with absorption. arXiv e-print 2010.16133, 2020.
  • [21] B. Mallein. Branching random walk with selection at critical rate. Bernoulli, 23(3):1784–1821, 2017.
  • [22] P. Mörters and Y. Peres. Brownian motion, volume 30 of Cambridge Series in Statistical and Probabilistic Mathematics. Cambridge University Press, Cambridge, 2010.
  • [23] J. E. Moyal. Multiplicative population chains. Proc. Roy. Soc. Ser. A, 266:518–526, 1962.
  • [24] J. Nolen, J.-M. Roquejoffre, and L. Ryzhik. Power-like delay in time inhomogeneous Fisher-KPP equations. Comm. Partial Differential Equations, 40(3):475–505, 2015.
  • [25] L. Rossi and L. Ryzhik. Transition waves for a class of space-time dependent monostable equations. Commun. Math. Sci., 12(5):879–900, 2014.