
Doubly stochastic Yule cascades (Part II): The explosion problem in the non-reversible case

Radu Dascaliuc Department of Mathematics, Oregon State University, Corvallis OR, 97331. [email protected]    Tuan N. Pham Department of Mathematics, Brigham Young University, Provo UT, 84602. [email protected]    Enrique Thomann Department of Mathematics, Oregon State University, Corvallis OR, 97331. [email protected]    Edward C. Waymire Department of Mathematics, Oregon State University, Corvallis OR, 97331. [email protected]
Abstract

We analyze the explosion problem for a class of stochastic models introduced in [part1_2021], referred to as doubly stochastic Yule cascades. These models arise naturally in the construction of solutions to evolutionary PDEs as well as in purely probabilistic first passage percolation phenomena having a Markov-type statistical dependence, new for this context. Using cut-set arguments and a greedy algorithm, we respectively establish criteria for non-explosion and explosion without requiring the time-reversibility of the underlying branching Markov chain (a condition required in [part1_2021]). Notable applications include the explosion of the self-similar cascade of the Navier-Stokes equations in dimension d=3 and non-explosion in dimensions d\geq 12.

Keywords. Yule cascade, doubly stochastic Yule cascade, stochastic explosion, Navier-Stokes equations, KPP equation.

AMS subject classification. 60H30, 60J80.

1 Introduction

Largely motivated by the probabilistic method for the deterministic three-dimensional incompressible Navier-Stokes equations (NSE) introduced by Le Jan and Sznitman [lejan], relaxed in [chaos], the authors introduced a new class of stochastic branching models, referred to as doubly stochastic Yule cascades, in [part1_2021]. This class of branching models may also be viewed in the context of statistically dependent first passage percolation models on tree graphs, or in the context of certain cellular aging models in biology [DA_PS_1988, KB_PP_2010]. Roughly speaking, a doubly stochastic Yule cascade evolves from a single progenitor in which each particle waits for a random length of time depending on the particle’s position on the genealogical tree before giving birth to a new generation of particles, each evolving by the same rules as its parent. If the lengths of times between particle reproductions are sufficiently short then branching may produce infinitely many particles in a finite time, an event referred to as stochastic explosion. In this paper, we establish some sufficient conditions for explosion and non-explosion, and apply them for the doubly stochastic Yule cascades associated with several well-known differential equations and probabilistic models.

We begin by recalling, in the context of binary trees, some of the definitions introduced in [part1_2021], where they appear in greater generality. With standard notation, denote by \mathbb{T}=\{\theta\}\cup\left(\cup_{n=1}^{\infty}\{1,2\}^{n}\right) the indexed binary tree with root \theta.

Definition \thedefi (Yule cascade).

A tree-indexed family of independent random variables {Yv}v𝕋\{Y_{v}\}_{v\in\mathbb{T}} is said to be a Yule cascade with positive parameters (intensities) {λv}v𝕋\{\lambda_{v}\}_{v\in\mathbb{T}} if YvExp(λv)Y_{v}\sim\text{Exp}(\lambda_{v}) for every v𝕋{v\in\mathbb{T}}.

In the above, we used the standard notation YvExp(λv)Y_{v}\sim{\rm Exp}(\lambda_{v}) to denote that YvY_{v} is exponentially distributed with intensity λv\lambda_{v}. If λvλ>0\lambda_{v}\equiv\lambda>0 is a constant, this is the familiar Yule cascade with intensity λ\lambda. The special case λ=1\lambda=1 is referred to as the standard Yule cascade.

Definition \thedefi (Doubly stochastic Yule cascade).

Consider a tree-indexed family of random variables Y={Yv}v𝕋Y=\{Y_{v}\}_{v\in\mathbb{T}} and a tree-indexed family of positive random variables Λ={λv}v𝕋\Lambda=\{\lambda_{v}\}_{v\in\mathbb{T}}. YY is said to be a doubly stochastic Yule (DSY) cascade with random intensities Λ\Lambda if YY is, conditionally given Λ\Lambda, a Yule cascade with intensities Λ\Lambda.

Equivalently, {Yv}v𝕋\{Y_{v}\}_{v\in\mathbb{T}} is a DSY cascade with random intensities {λv}v𝕋\{\lambda_{v}\}_{v\in\mathbb{T}} if and only if {Tv=λvYv}v𝕋\{T_{v}=\lambda_{v}Y_{v}\}_{v\in\mathbb{T}} is a family of i.i.d. mean one exponentially distributed random variables (called holding times or clocks) independent of {λv}v𝕋\{\lambda_{v}\}_{v\in\mathbb{T}}. Thus, a DSY cascade is completely specified once the family of random intensities is specified. For this reason, one can write a DSY cascade as {λv1Tv}v𝕋\{\lambda_{v}^{-1}T_{v}\}_{v\in\mathbb{T}}. The associated counting process defined by

N(t)=\begin{cases}1&\text{if }t\leq\frac{T_{\theta}}{\lambda_{\theta}},\\ \operatorname{card}\left\{v\in\mathbb{T}:\ \sum\limits_{j=0}^{|v|-1}\frac{T_{v|j}}{\lambda_{v|j}}<t\leq\sum\limits_{j=0}^{|v|}\frac{T_{v|j}}{\lambda_{v|j}}\right\}&\text{if }t>\frac{T_{\theta}}{\lambda_{\theta}}\end{cases}

is referred to as the Yule process or doubly stochastic Yule process depending on the respective cascade model being considered. In the definition of N(t)N(t), we have used standard notations for the length of a vertex v=(v1,,vn)𝕋v=(v_{1},\dots,v_{n})\in\mathbb{T} and the truncation of vv of length jj, namely |v|=n|v|=n and v|j=(v1,,vj)v|j=(v_{1},\dots,v_{j}) for 0j|v|0\leq j\leq|v|. By convention, v|0=θ.v|0=\theta.

Definition \thedefi (Explosion time).

The explosion time of a DSY cascade {λv1Tv}v𝕋\{\lambda_{v}^{-1}T_{v}\}_{v\in\mathbb{T}} is a [0,][0,\infty]-valued random variable ζ\zeta defined by

\zeta=\lim_{n\to\infty}\,\min_{|v|=n}\,\sum\limits_{j=0}^{n}\frac{T_{v|j}}{\lambda_{v|j}}.

The event of non-explosion is defined by [ζ=][\zeta=\infty]. The cascade is said to be non-explosive if (ζ=)=1\mathbb{P}(\zeta=\infty)=1, and explosive if (ζ=)<1\mathbb{P}(\zeta=\infty)<1.
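To make the definition concrete, the following Python sketch (a minimal Monte Carlo illustration, not part of the paper; the depth-dependent intensity choices are assumptions made only for this example) computes the truncated minima \zeta_{n}=\min_{|v|=n}\sum_{j=0}^{n}T_{v|j}/\lambda_{v|j}, whose limit is the explosion time \zeta.

import numpy as np

rng = np.random.default_rng(0)

def zeta_n(n, lam_of_depth):
    """Truncated explosion time: min over |v| = n of sum_{j=0}^{n} T_{v|j} / lambda_{v|j},
    for intensities that depend only on the depth |v| (a simplifying assumption)."""
    partial = rng.exponential(1.0, size=1) / lam_of_depth(0)      # the root, depth 0
    for depth in range(1, n + 1):
        partial = np.repeat(partial, 2)                           # each vertex has two children
        partial = partial + rng.exponential(1.0, size=partial.size) / lam_of_depth(depth)
    return float(partial.min())

# lambda_v = 2^{|v|}: clocks shrink geometrically and zeta_n appears to level off (explosion)
print([round(zeta_n(n, lambda d: 2.0 ** d), 3) for n in (5, 10, 15, 20)])
# lambda_v = 1: the standard Yule cascade; zeta_n keeps growing (non-explosion)
print([round(zeta_n(n, lambda d: 1.0), 3) for n in (5, 10, 15, 20)])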

According to the definition, the explosion event [\zeta<\infty] has a positive probability in an explosive cascade. However, in all applications considered in this paper, explosion is in fact a 0-1 event (Section 2). The DSY cascades introduced in Section 1 are quite general. To consider the explosion problem, we will further assume a certain branching Markov chain structure underlying the random intensities \lambda_{v}.

Using estimates on the spectral radius of an operator defined on the underlying branching Markov chains, the authors obtained in [part1_2021] criteria for non-explosion assuming a time-reversibility condition. These conditions were applied to the Bessel cascade, a particular cascade associated with the Navier-Stokes equations, and to the cascade of the Kolmogorov-Petrovski-Piskunov (KPP) equation in the Fourier domain to show that they are both non-explosive. Other, more purely probabilistic, models were also analyzed there. However, time-reversibility is too restrictive and does not apply to several examples of particular importance. These include the cascades induced by the scaling properties of the NSE, referred to here as self-similar cascades associated with the NSE. The focus of this paper is to establish explosion and non-explosion criteria for a class of DSY cascades using probabilistic arguments that do not depend on time-reversibility.

To establish a new non-explosion criterion (Section 2), we rely on the sub-criticality of nonhomogeneous Galton-Watson processes associated with the cascade, which is more intuitive and robust than the method of large deviation estimates used in [part1_2021]. The key idea to obtain non-explosion is to construct a sequence of recurring finite cut-sets through an inspection process. The construction of these cut-sets takes into consideration the uncountable set of paths s\in\partial\mathbb{T}=\{1,2\}^{\infty} in a binary tree, which involves a new notion of recurrence that naturally generalizes neighborhood recurrence along each path (Section 2). As an application, Section 2 implies the non-explosion of the self-similar cascade associated with the NSE in dimensions d\geq 12 and recovers the non-explosion result for the Bessel cascade of the 3-dimensional NSE and the cascade associated with the KPP equation proven in [part1_2021].

Remark \therem.

In spite of its intuitive appeal, it is the Markov dependence along tree paths, in place of independence, that makes the use of cut-sets technically challenging here. Similar challenges arise in computing the speeds of branching random walk particles whose displacements along paths form a Markov chain rather than being independent, and in first passage percolation “flow” problems [HK_1986, GG_HK1984] when the passage times along paths are statistically dependent rather than i.i.d.

We establish a criterion for a.s. explosion (Section 2) based on a “fastest path” constructed by a greedy algorithm: at each time of branching, the fastest path chooses to follow the descendant vertex with the larger intensity λv\lambda_{v}. This criterion allows one to show the explosion of a cascade even if the explosion does not occur along any deterministic path s𝕋s\in{\partial{\mathbb{T}}}, i.e.

j=0Ts|jλs|j=a.s.s𝕋.\sum\limits_{j=0}^{\infty}{\frac{{{T}_{s|j}}}{\lambda_{s|j}}}=\infty\ \ \ {\text{a.s.}}\ \forall s\in\partial\mathbb{T}.

As an application, we obtain the a.s. explosion of the self-similar cascade of the 3-dimensional Navier-Stokes equations. As shown in [athreya, alphariccati, smallness], the stochastic explosion of DSY cascades associated with certain evolution equations leads to both nonuniqueness and finite-time blowup of solutions to the initial-value problems in appropriate settings. In particular, the aforementioned explosion of the self-similar DSY cascade corresponding to the Navier-Stokes equations leads to a non-uniqueness result for an equation introduced by Montgomery-Smith [smith] as a model for possible Navier-Stokes finite-time blowup. Remarkably, our DSY framework also provides an alternative way to prove finite-time blowup of the smooth solutions to the Montgomery-Smith equations [smallness]. For the actual Navier-Stokes equations, a possible resolution of the outstanding nonuniqueness and finite-time blowup problems hinges on a better understanding of the connections between geometric structure of the nonlinear term and the branching structure of the associated DSY cascade.

2 Main results and organization of the paper

Motivated by the PDE and probabilistic models mentioned above, we are particularly interested in DSY cascades in which the random intensities λv\lambda_{v} are of the form λv=λ(Xv)\lambda_{v}=\lambda(X_{v}) where, for each v𝕋v\in\mathbb{T}, XvX_{v} is a random variable taking values in a measurable space (S,𝒮)(S,\mathscr{S}) and λ:S(0,)\lambda:S\to(0,\infty) is a measurable function. Throughout the paper, we assume the following two properties:

  1. (A)

    For any path s𝕋s\in\partial\mathbb{T}, the sequence XθX_{\theta}, Xs|1X_{s|1}, Xs|2X_{s|2},…is a time homogeneous Markov chain.

  2. (B)

    For any path s𝕋s\in\partial\mathbb{T}, the stationary transition probability p(x,dy)p(x,dy) does not depend on ss.

In [part1_2021], DSY cascades satisfying conditions (A) and (B) are called DSY cascades of type (\mathscr{M}). We will refer to the family {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} as the underlying branching Markov chain of the DSY cascade.

In the statements of the main results, the following conditions are sometimes required. We use the standard notation uvu*v for concatenation of vertices u,v𝕋u,v\in\mathbb{T}.

  1. (C)

    For each v𝕋v\in\mathbb{T}, the two subfamilies {Xv1w}w𝕋\{X_{v*1*w}\}_{w\in\mathbb{T}} and {Xv2w}w𝕋\{X_{v*2*w}\}_{w\in\mathbb{T}} are conditionally independent of each other given Xv1X_{v*1} and Xv2X_{v*2}.

  2. (D)

    For each v𝕋v\in\mathbb{T}, the conditional joint distribution (Xv1,Xv2)(X_{v*1},X_{v*2}) given XvX_{v} does not depend on vv.

The next condition generalizes neighborhood recurrence along a path to the setting of branching Markov chains. First, for a\in S, A\in\mathscr{S}, n\geq 1 and s\in\partial\mathbb{T}, define

In(a,A)=a(Xs|1A,Xs|2A,,Xs|nA).I_{n}(a,A)=\mathbb{P}_{a}(X_{s|1}\not\in A,\,X_{s|2}\not\in A,\ldots,\,X_{s|n}\not\in A). (2.1)

Notice that I_{n} does not depend on s\in\partial\mathbb{T} by virtue of Properties (A) and (B).

  1. (E)

    (Cut-set recurrence condition) There exists a set A𝒮A\in\mathscr{S}, a function ψ:S\psi:S\to\mathbb{R} and a number r>2,r>2, such that for all aSa\in S, s𝕋s\in\partial\mathbb{T} and nn\in\mathbb{N},

    In(a,A)ψ(a)rn.I_{n}(a,A)\leq\psi(a)r^{-n}.

While conditions (C) and (D) are intuitive, condition (E) is somewhat technical. It implies a kind of recurrence property for the branching Markov chain; in the language to be made clear later, the set AA is cut-set recurrent with respect to {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}}. The intuition is that the Markov chain {Xs|j}j0\{X_{s|j}\}_{j\geq 0} returns to the set AA infinitely often in a uniform manner across all the paths s𝕋s\in\partial\mathbb{T} (cf. Section 3 for the precise definition). Specifically, we have the following result.

Theorem \thethm.

Assume that the tree-indexed family of random variables {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} satisfies (A), (B), (C), (E). Then, with AA and ψ\psi given by condition (E), we have:

  1. (i)

    The set AA is cut-set recurrent with respect to {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} in the sense of Section 3.

  2. (ii)

    The cardinality of the first passage cut-set is a.s. finite and satisfies

    𝔼a[cardΠA(1)]2+ψ(a)2rr2.\mathbb{E}_{a}[{\rm card}\,\Pi_{A}^{(1)}]\leq 2+\psi(a)\frac{2r}{r-2}\,.

    Moreover, if \psi is bounded on A then the expected value of the cardinality of each passage cut-set is bounded by

    𝔼a[cardΠA(k)]μ(a)kk.\mathbb{E}_{a}[{\rm card}\,\Pi_{A}^{(k)}]\leq\mu(a)^{k}\qquad\forall\,k\in\mathbb{N}.

    where \mu(a)=2+\frac{2r}{r-2}\,\sup\{\psi(x):\,x\in A\cup\{a\}\}.

The proof will be given in Section 4. This theorem leads to a strategy to show the non-explosion of a DSY cascade by finding an appropriate value c>0c>0 such that the set A=λ1((0,c])A=\lambda^{-1}((0,c]) is cut-set recurrent. Next, we state the main criterion for non-explosion. We say that a function ϕ:(0,)(0,)\phi:(0,\infty)\rightarrow(0,\infty) is locally bounded if it maps every bounded set into a bounded set.

Theorem \thethm.

Let {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} be a branching Markov chain with values on a measurable state space (S,𝒮)(S,\mathscr{S}) and satisfy (A), (B), (C). Let {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} be a DSY cascade with λ:S(0,)\lambda:S\to(0,\infty) a measurable function. Suppose that condition (E) holds for A=λ1((0,c])A=\lambda^{-1}((0,c]) for some c>0c>0 and ψ=ϕλ\psi=\phi\circ\lambda, where ϕ:(0,)(0,)\phi:(0,\infty)\to(0,\infty) is locally bounded. Then the DSY cascade is non-explosive for any initial state Xθ=aSX_{\theta}=a\in S.

The proof of this theorem is in Section 5. In our applications, the case S=(0,)S=(0,\infty) is of particular interest. In this case, we have the following corollary.

Corollary \thecor.

Let {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} be a DSY cascade in which {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} has properties (A), (B), (C) on a measurable state space S=(0,)S=(0,\infty) (and 𝒮\mathscr{S} is the Borel σ\sigma-algebra). Suppose that the condition (E) holds for A=(0,c]A=(0,c] for some c>0c>0, with λ\lambda and ψ\psi both being locally bounded. Then the DSY cascade {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} with Xθ=aX_{\theta}=a is non-explosive for any aS.a\in S.

Remark \therem.

Note that in the context of Section 2, In(a,A)I_{n}(a,A) from (2.1) becomes

In(a,c)=a(Xs|1>c,Xs|2>c,,Xs|n>c).{I}_{n}(a,c)=\mathbb{P}_{a}(X_{s|1}>c,\,X_{s|2}>c,\ldots,\,X_{s|n}>c). (2.2)
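For a concrete feel of (2.2) and of condition (E), the following sketch (purely illustrative; the multiplicative chain and its parameters are assumptions, not a model from this paper) estimates I_{n}(a,c) by Monte Carlo and checks that successive ratios fall below 1/2, i.e. that a geometric bound with some r>2 is plausible.

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical chain used only for illustration: X_{j+1} = X_j * R_{j+1} with i.i.d.
# lognormal ratios (the multiplicative-random-walk structure discussed in Section 7).
mu, sigma, a, c = -1.0, 0.5, 5.0, 1.0
n_max, n_samples = 6, 400_000

R = rng.lognormal(mean=mu, sigma=sigma, size=(n_samples, n_max))
X = a * np.cumprod(R, axis=1)              # X_{s|1}, ..., X_{s|n_max} along a single path
outside = np.cumprod(X > c, axis=1)        # indicator of {X_{s|1} > c, ..., X_{s|j} > c}
I = outside.mean(axis=0)                   # Monte Carlo estimates of I_n(a, c), n = 1..n_max

for n in range(n_max):
    ratio = I[n] / I[n - 1] if n > 0 else I[0]
    print(f"n = {n+1}:  I_n ~ {I[n]:.4f}   ratio ~ {ratio:.3f}")
# Condition (E) asks for I_n <= psi(a) r^{-n} with r > 2, i.e. ratios eventually below 1/2.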

We will give in Section 6 two sufficient conditions (relatively simple to check) for condition (E) to be satisfied. An interesting class of DSY cascades is the self-similar DSY cascades. This class includes the cascade of the α\alpha-Riccati equation, the cascade of the complex Burgers equation, and the cascade of the Navier-Stokes equations with a scale-invariant kernel (Section 12).

Definition \thedefi.

Let SS be a subgroup of (0,)(0,\infty). A DSY cascade {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} is said to be self-similar if the branching Markov chain {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} satisfies the following:

  1. (i)

    {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} satisfies (A), (B), (C).

  2. (ii)

    For any initial state Xθ=aSX_{\theta}=a\in S and v𝕋v\in\mathbb{T}, the family {Yw=Xvw/Xv}w𝕋\{Y_{w}=X_{v*w}/X_{v}\}_{w\in\mathbb{T}} also satisfies (A), (B), (C) with the same transition probability as {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}}.

In Section 7, we provide a useful characterization of self-similar DSY cascades: these are exactly the DSY cascades in which the branching Markov chain {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} satisfies (A), (B), (C) such that along each path s𝕋s\in\partial\mathbb{T}, the sequence {Xs|j}j0\{X_{s|j}\}_{j\geq 0} is a multiplicative random walk on SS (i.e. lnXs|j\ln X_{s|j} is an additive random walk on the additive group lnS\ln S). Multiplicative random walks on (0,)(0,\infty) are natural examples of non-reversible Markov chains, which are of interest in this paper. Self-similar DSY cascades admit a rather simple criterion for non-explosion:

Proposition \theprop.

Let \{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} be a self-similar DSY cascade in which \lambda:S\to(0,\infty) is a locally bounded function. Suppose \mathbb{E}[R_{1}^{b}]<1/2 for some b>0, where R_{1}=X_{s|1}/X_{\theta} denotes the one-step ratio along any path (its distribution p(1,dx) does not depend on the initial state; see Section 7). Then the cascade is non-explosive for any initial state X_{\theta}=a\in S.
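As a quick numerical sanity check of this moment criterion (not part of the argument; the lognormal law for the one-step ratio is an assumption made only for illustration), one can search over b and compare the minimal moment to 1/2.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical one-step ratio law for R_1; the criterion asks for E[R_1^b] < 1/2 for some b > 0.
samples = rng.lognormal(mean=-1.0, sigma=0.5, size=500_000)

bs = np.linspace(0.1, 8.0, 80)
moments = np.array([np.mean(samples ** b) for b in bs])
print(f"min over b of E[R_1^b] ~ {moments.min():.3f} at b ~ {bs[np.argmin(moments)]:.2f}"
      "   (criterion requires a value below 0.5)")
# For a lognormal(mu, sigma^2) ratio the exact moment is exp(b*mu + b^2*sigma^2/2),
# minimized at b = -mu/sigma^2; here exp(-mu^2/(2*sigma^2)) = exp(-2) ~ 0.135 < 1/2.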

The proof of this result is given in Section 7. The main criterion for a.s. explosion of DSY cascades is as follows.

Theorem \thethm.

Let {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} be a DSY cascade in which {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} satisfies (A), (B), (D). Put Z=max{λ(X1),λ(X2)}Z=\max\{\lambda(X_{1}),\lambda(X_{2})\}. If there exists a constant κ(0,1)\kappa\in(0,1) such that

𝔼a[1Z]κλ(a)aS\mathbb{E}_{a}\left[\frac{1}{Z}\right]\leq\frac{\kappa}{\lambda(a)}\ \ \ \forall\,a\in S

then 𝔼aζλ(a)1(1κ)1{{\mathbb{E}}_{a}}\zeta\leq{\lambda(a)^{-1}}{(1-\kappa)^{-1}}. In particular, the cascade is a.s. explosive for any initial state Xθ=aSX_{\theta}=a\in S.

The proof of this theorem is given in Section 8. In Section 12, it is shown that the explosion criterion is satisfied by the self-similar DSY cascades associated with the 3-dimensional Navier-Stokes equations. We also show that for large spatial dimensions (d\geq 12), the self-similar DSY cascades are non-explosive. The range 3<d<12 remains inconclusive to us. Nevertheless, we show that the explosion event in a self-similar DSY cascade is a 0-1 event. More generally, one has the following.

Theorem \thethm.

Let \{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} be a DSY cascade in which \{X_{v}\}_{v\in\mathbb{T}} has properties (A), (B), (C), (D). If one of the following conditions holds, then either \mathbb{P}_{x}(\zeta=\infty)=0 for all x\in S or \mathbb{P}_{x}(\zeta=\infty)=1 for all x\in S.

  1. (i)

    {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} is a self-similar DSY cascade and λ\lambda is a multiplicative function, i.e. λ(xy)=λ(x)λ(y)\lambda(xy)=\lambda(x)\lambda(y) for all x,ySx,y\in S.

  2. (ii)

    Along each path s𝕋s\in\partial\mathbb{T}, {Xs|j}j0\{X_{s|j}\}_{j\geq 0} is an ergodic time-reversible Markov chain on a countable state space SS.

  3. (iii)

    Along each path s𝕋s\in\partial\mathbb{T}, {Xs|j}j0\{X_{s|j}\}_{j\geq 0} is time-reversible with respect to a probability measure γ(dx)\gamma(dx) (i.e. γ(dx)p(x,dy)=γ(dy)p(y,dx)\gamma(dx)p(x,dy)=\gamma(dy)p(y,dx) as measures on S×SS\times S) which satisfies γ(dy)p(x,dy)γ(dy)\gamma(dy)\ll p(x,dy)\ll\gamma(dy) for every xSx\in S.

The proof of this result is given in Section 9. Applications of the non-explosion and explosion criteria are given in the last three sections of the paper. In Section 10, we focus on examples originating from probability, whereas Section 11 gives examples originating from differential equations. The applications to the NSE are of particular interest and are treated in a separate section (Section 12).

3 Preliminary notions

To prepare for the proofs of the main results, we introduce some preliminary notions on a general rooted tree (see also [lyons90, lyons92], [lyons, Sec. 2.5]).

Let 𝒯𝕍={θ}(n1n)\mathscr{T}\subset\mathbb{V}=\{\theta\}\cup(\cup_{n\geq 1}\mathbb{N}^{n}) be a connected locally finite random tree rooted at θ\theta. Each vertex v=(v1,,vn)𝒯v=(v_{1},\ldots,v_{n})\in\mathscr{T} is connected to the root by a unique path, so one can identify a vertex with the path connecting it to the root. For a finite or infinite path s=(s1,s2,)s=(s_{1},s_{2},\ldots), we denote by vsv*s the path obtained by appending ss to vv:

vs=(v1,,vn,s1,s2,)v*s=(v_{1},\ldots,v_{n},s_{1},s_{2},\ldots)

The length of a path v=(v1,,vn)v=(v_{1},\ldots,v_{n}), or the genealogical height of vertex vv, is |v|=n|v|=n. For 1jn1\leq j\leq n, the truncation of vv up to the jj’th generation is v|j=(v1,,vj)v|j=(v_{1},\ldots,v_{j}). We use the convention that v|0=θv|0=\theta and |θ|=0|\theta|=0. The boundary of 𝒯\mathscr{T} is defined as the set of infinite paths:

𝒯={s:s|j𝒯,j0}.\partial\mathscr{T}=\{s\in\mathbb{N}^{\infty}:~{}s|j\in\mathscr{T},\ \forall\,j\geq 0\}.
Remark \therem.

One can interpret 𝒯\partial\mathscr{T} as the boundary of 𝒯\mathscr{T} in the metric space 𝒯\mathscr{T}\cup\mathbb{N}^{\infty} endowed with the metric d(v,w)=j=12j(1δαj,βj)d(v,w)=\sum^{\infty}_{j=1}2^{-j}(1-\delta_{\alpha_{j},\beta_{j}}) where δx,y\delta_{x,y} is the standard Kronecker delta and

\alpha_{j}=\begin{cases}v_{j}&\text{if }j\leq|v|\\ 0&\text{if }j>|v|\end{cases},\qquad\beta_{j}=\begin{cases}w_{j}&\text{if }j\leq|w|\\ 0&\text{if }j>|w|\end{cases}.
Definition \thedefi (cut-sets).

A finite set of vertices V\subset\mathscr{T}\backslash\{\theta\} is called a cut-set of \mathscr{T} if for each path s\in\partial\mathscr{T}, there exists a unique j\geq 1 such that s|j\in V.

Intuitively, a cut-set is a set of vertices that separates the root from the boundary.

Definition \thedefi (Passage sets).

Let {Yv}v𝕍\{Y_{v}\}_{v\in\mathbb{V}} be a tree-indexed family of random variables independent of 𝒯\mathscr{T} taking values on a measurable state space (S,𝒮)({S},\mathscr{S}). For A𝒮A\in\mathscr{S}, we define a sequence of random sets {ΠA(n)}n1\{\Pi_{A}^{(n)}\}_{n\geq 1} depending on 𝒯\mathscr{T} as follows.

ΠA(1)={v𝒯\{θ}:YvA,Yv|jA 0<j<|v|},\Pi_{A}^{(1)}=\{v\in\mathscr{T}\backslash\{\theta\}:\ Y_{v}\in A,\ Y_{v|j}\not\in A~{}~{}\forall\,0<j<|v|\},
ΠA(n)={v𝒯\{θ}:YvA,1j<|v|,v|jΠA(n1),Yv|iAj<i<|v|}\Pi_{A}^{(n)}\ =\{v\in\mathscr{T}\backslash\{\theta\}:\ Y_{v}\in A,\ \exists 1\leq j<|v|,\ v|j\in\Pi_{A}^{(n-1)},\ Y_{v|i}\not\in A\ \forall\,j<i<|v|\}

for all n2n\geq 2. We call ΠA(n)\Pi_{A}^{(n)} the nn’th passage set of {Yv}v𝒯\{Y_{v}\}_{v\in\mathscr{T}} through AA.

Definition \thedefi (Cut-set recurrence).

Let {Yv}v𝕍\{Y_{v}\}_{v\in\mathbb{V}} be a tree-indexed family of random variables on a measurable state space (S,𝒮)({S},\mathscr{S}). A set A𝒮A\in\mathscr{S} is said to be cut-set recurrent with respect to ({Yv}v𝕍,𝒯)(\{Y_{v}\}_{v\in\mathbb{V}},\mathscr{T}) if each passage set ΠA(1)\Pi_{A}^{(1)}, ΠA(2)\Pi_{A}^{(2)}, ΠA(3)\Pi_{A}^{(3)}, …is almost surely a cut-set. In this case, ΠA(n)\Pi_{A}^{(n)} is called the nn’th passage cut-set.

For the sake of simplicity, we will write the structure ({Yv}v𝕍,𝒯)(\{Y_{v}\}_{v\in\mathbb{V}},\mathscr{T}) simply as {Yv}v𝒯\{Y_{v}\}_{v\in\mathscr{T}}.
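The following short sketch illustrates the notion of a first passage set on the binary tree. It is not part of the paper: the values Y_{v} evolve by a hypothetical multiplicative rule, and the truncation depth is an artifact of the simulation.

import numpy as np

rng = np.random.default_rng(3)

def first_passage_set(a, c, max_depth=30):
    """First passage set Pi_A^(1) through A = (0, c] on the binary tree, truncated at
    max_depth.  Each child multiplies its parent's value by a lognormal ratio
    (an assumption made only for this illustration)."""
    passage, stack = [], [((), a)]       # vertices encoded as tuples over {1, 2}; () is the root
    while stack:
        v, x = stack.pop()
        for child in (1, 2):
            y = x * rng.lognormal(-1.0, 0.5)
            u = v + (child,)
            if y <= c:
                passage.append(u)        # first entry into A along this path: record and stop
            elif len(u) < max_depth:
                stack.append((u, y))     # still outside A: keep inspecting below u
            # (paths still outside A at max_depth are abandoned by the truncation)
    return passage

cut = first_passage_set(a=4.0, c=1.0)
if cut:
    print(len(cut), "vertices in the first passage set; maximal depth", max(map(len, cut)))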

4 Proof of the cut-set recurrence theorem (Section 2)

With the terminology introduced in the previous section, the proof of Section 2 uses the decay of InrnI_{n}\lesssim r^{-n} and an inspection process on whether XvAX_{v}\notin A to construct a sequence of passage cut-sets. Specifically at each vertex v𝕋v\in\mathbb{T}, starting from the root θ\theta, we inspect each offspring u{v1,v2}u\in\{v*1,\,v*2\} of vv. If XuAX_{u}\notin A then uu passes the inspection. Otherwise, if XuAX_{u}\in A, uu does not pass the inspection and the inspection process along the path containing uu stops at uu. For each offspring of vv that passes the inspection, we proceed to inspect its own offspring. For example, if Xv1AX_{v*1}\notin A and Xv2AX_{v*2}\notin A then we inspect the vertices v11v*11, v12v*12, v21v*21, v22v*22. If Xv1AX_{v*1}\notin A and Xv2AX_{v*2}\in A, we only inspect v11v*11 and v12v*12. In this manner, the inspection process either continues indefinitely or stops after finitely many steps. Note that we do not inspect the root: θ\theta is considered passing the inspection whether or not XθAX_{\theta}\notin A.

For a vertex v\in\mathbb{T} that passes the inspection, we denote by \mathscr{O}_{v} the number of its offspring that pass the inspection. The distribution of \mathscr{O}_{\theta} is given by

a(𝒪θ=2)\displaystyle\mathbb{P}_{a}\left(\mathscr{O}_{\theta}=2\right) =\displaystyle= a(X1A,X2A),\displaystyle\mathbb{P}_{a}\left(X_{1}\notin A,\,X_{2}\notin A\right),
a(𝒪θ=1)\displaystyle\mathbb{P}_{a}(\mathscr{O}_{\theta}=1) =\displaystyle= a(X1A,X2A)+a(X1A,X2A),\displaystyle\mathbb{P}_{a}(X_{1}\notin A,\,X_{2}\in A)+\mathbb{P}_{a}(X_{1}\in A,\,X_{2}\notin A),
a(𝒪θ=0)\displaystyle\mathbb{P}_{a}(\mathscr{O}_{\theta}=0) =\displaystyle= a(X1A,X2A).\displaystyle\mathbb{P}_{a}(X_{1}\in A,\,X_{2}\in A).

For v𝕋{θ}v\in\mathbb{T}\setminus\{\theta\}, let Bv=[Xv|1A,Xv|2A,,XvA]B_{v}=[X_{v|1}\notin A,X_{v|2}\notin A,\dots,X_{v}\notin A] so that In(a,A)=a(Bv)I_{n}(a,A)=\mathbb{P}_{a}(B_{v}). Note that this probability only depends on n=|v|n=|v| due to properties (A) and (B). Let

N(a)=inf{n1:In(a,A)=0}N(a)=\inf\{n\geq 1:~{}I_{n}(a,A)=0\} (4.1)

which could be infinity. For each vertex vv with 1|v|=n<N(a)1\leq|v|=n<N(a), the number of offspring passing the inspection has a distribution

a(𝒪v=2)\displaystyle\mathbb{P}_{a}(\mathscr{O}_{v}=2) =\displaystyle= a(Xv1A,Xv2A|Bv),\displaystyle\mathbb{P}_{a}(X_{v*1}\notin A,\,\,X_{v*2}\notin A\ |\ B_{v}), (4.2)
a(𝒪v=1)\displaystyle\mathbb{P}_{a}({{\mathscr{O}}_{v}}=1) =\displaystyle= a(Xv1A,Xv2A|Bv)+a(Xv1A,Xv2A|Bv),\displaystyle\mathbb{P}_{a}(X_{v*1}\notin A,X_{v*2}\in A\ |\ B_{v})+\mathbb{P}_{a}(X_{v*1}\in A,X_{v*2}\notin A\ |\ B_{v}), (4.3)
a(𝒪v=0)\displaystyle\mathbb{P}_{a}(\mathscr{O}_{v}=0) =\displaystyle= a(Xv1A,Xv2A|Bv).\displaystyle\mathbb{P}_{a}(X_{v*1}\in A,\,\,X_{v*2}\in A\ |\ B_{v}). (4.4)

Invoking Properties (A) and (B) once again, one can write 𝒪v𝒪n\mathscr{O}_{v}\sim\mathscr{O}_{n} for n=|v|.n=|v|. Denote pn,i(a)=a(𝒪n=i)p_{n,i}(a)=\mathbb{P}_{a}(\mathscr{O}_{n}=i) for i=0,1,2i=0,1,2. Let ZnZ_{n} be the total number of individuals at generation nn that pass the inspection. Then Z0=1Z_{0}=1 and

Zn+1=j=1Zn𝒪n,j 0n<N(a){{Z}_{n+1}}=\sum\limits_{j=1}^{{{Z}_{n}}}{{\mathscr{O}_{n,j}}}\ \ \ \forall\,0\leq n<N(a)

where \mathscr{O}_{n,1}, \mathscr{O}_{n,2}, \mathscr{O}_{n,3},…are i.i.d. copies of \mathscr{O}_{n}. The inspection process can thus be viewed as a nonhomogeneous Galton-Watson process with the offspring distribution given above. We will show that this process stops eventually.

Lemma \thelem.

Under the assumptions of Section 2, for every a\in S, the inspection process starting at X_{\theta}=a almost surely stops after finitely many steps.

Proof.

We have

μ0:=𝔼𝒪0=2a(𝒪0=2)+a(𝒪0=1)=a(X1A)+a(X2A)=2I1.\mu_{0}:=\mathbb{E}\mathscr{O}_{0}=2\mathbb{P}_{a}(\mathscr{O}_{0}=2)+\mathbb{P}_{a}(\mathscr{O}_{0}=1)=\mathbb{P}_{a}(X_{1}\notin A)+\mathbb{P}_{a}(X_{2}\notin A)=2I_{1}.

If N(a)=1 then \mathbb{P}_{a}(X_{1}\notin A)=\mathbb{P}_{a}(X_{2}\notin A)=0. In this case, the inspection process stops at the root. Consider the case 2\leq N(a)\leq\infty. For 1\leq n<N(a), denote the two terms on the right hand side of (4.3) by \bar{p}_{n,1} and \tilde{p}_{n,1}.

μn:=𝔼𝒪n\displaystyle{{\mu}_{n}}:=\mathbb{E}{\mathscr{O}_{n}} =\displaystyle= 2pn,2+p¯n,1+p~n,1=(pn,2+p¯n,1)+(pn,2+p~n,1)\displaystyle 2{{p}_{n,2}}+{\bar{p}_{n,1}}+\tilde{p}_{n,1}=({{p}_{n,2}}+{\bar{p}_{n,1}})+({{p}_{n,2}}+{\tilde{p}_{n,1}})
=\displaystyle= (Xv1A|Bv)+(Xv2A|Bv)\displaystyle\mathbb{P}(X_{v*1}\notin A\,|\,B_{v})+\mathbb{P}(X_{v*2}\notin A\,|\,B_{v})
=\displaystyle= 2(Xv1A|Bv)\displaystyle 2\mathbb{P}(X_{v*1}\notin A\,|\,B_{v})
=\displaystyle= 2In+1In.\displaystyle 2\frac{{{I}_{n+1}}}{{{I}_{n}}}.

By Wald’s identity, 𝔼Zn=μn1𝔼Zn1\mathbb{E}{{Z}_{n}}=\mu_{n-1}\mathbb{E}{{Z}_{n-1}}. Applying this identity consecutively, we get

\mathbb{E}Z_{n}=\mu_{n-1}\cdots\mu_{1}\,\mathbb{E}Z_{1}=\mu_{n-1}\cdots\mu_{1}\mu_{0}=2^{n}I_{n}\leq\psi(a)\left(\frac{2}{r}\right)^{n}~{}~{}~{}\forall\,1\leq n<N(a),\,n<\infty. (4.5)

If N(a)<N(a)<\infty then a(Xv|1A,Xv|2A,,Xv|nA)=0\mathbb{P}_{a}(X_{v|1}\notin A,\,X_{v|2}\notin A,\ldots,X_{v|n}\notin A)=0 for all v𝕋v\in\mathbb{T}, |v|=nN(a)|v|=n\geq N(a). Hence, the inspection process stops almost surely after N(a)N(a) generations. Consider the case N(a)=N(a)=\infty. Put Z=lim infZnZ_{\infty}=\liminf Z_{n}. By Fatou’s lemma,

𝔼Zlim infn𝔼Znlim infnψ(a)(2r)n=0.\mathbb{E}{{Z}_{\infty}}\leq\underset{n\to\infty}{\mathop{\liminf}}\,\mathbb{E}{{Z}_{n}}\leq\underset{n\to\infty}{\mathop{\liminf}}\,\psi(a)\left(\frac{2}{r}\right)^{n}=0.

Hence, Z=0Z_{\infty}=0 a.s. Because ZnZ_{n}’s are nonnegative integers, Zn=0Z_{n}=0 a.s. for some random value of nn. ∎
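For intuition (and only as an illustration of the lemma just proved, with a hypothetical lognormal multiplicative chain standing in for \{X_{v}\} and A=(0,c]), the sketch below simulates the generation sizes Z_{n} of the inspection process and observes the extinction predicted above.

import numpy as np

rng = np.random.default_rng(4)

def generation_sizes(a, c, n_gens=25):
    """Sizes Z_n of the inspection process: a vertex passes while its value stays outside
    A = (0, c].  The lognormal multiplicative chain is a stand-in used only to illustrate
    the decay of E[Z_n] = 2^n I_n."""
    alive = np.array([a])                # values at the passing vertices of generation 0
    sizes = [1]
    for _ in range(n_gens):
        children = np.repeat(alive, 2) * rng.lognormal(-1.0, 0.5, size=2 * alive.size)
        alive = children[children > c]   # only offspring outside A pass the inspection
        sizes.append(alive.size)
        if alive.size == 0:
            break
    return sizes

runs = [generation_sizes(a=4.0, c=1.0) for _ in range(2000)]
print("runs that died out within 25 generations:", sum(r[-1] == 0 for r in runs), "/ 2000")
print("Monte Carlo E[Z_3] ~", round(float(np.mean([r[3] if len(r) > 3 else 0 for r in runs])), 3))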

To complete the proof of Part (i) of Section 2, we use Section 4 to show that the set A is cut-set recurrent with respect to the branching Markov chain \{X_{v}\}_{v\in\mathbb{T}}. By definition, the first passage set \Pi_{A}^{(1)} consists, for each s\in\partial\mathbb{T}, of the first vertex v\neq\theta such that X_{v}\in A (see Figure 1). In other words, it consists of the vertices at which the inspection process stops. By Section 4, \Pi_{A}^{(1)} is a.s. a finite set and, thus, must be a cut-set.

Suppose that for some k1k\geq 1, the passage set ΠA(k)\Pi_{A}^{(k)} is a cut-set. For each u𝕋u\in\mathbb{T}, denote by 𝕋(u)\mathbb{T}^{(u)} the subtree of 𝕋\mathbb{T} rooted at uu and write Xv(u)=XuvX^{(u)}_{v}=X_{u*v} for v𝕋v\in\mathbb{T}. By conditions (A) and (B), we have that condition (E) is also satisfied by the branching Markov chain Xv(u)X^{(u)}_{v}. Indeed, the analogue to (2.1) for this branching Markov process is

In(a,A;X(u))\displaystyle I_{n}(a^{\prime},A;X^{(u)}) \displaystyle\equiv (Xus|(|u|+1)A,Xus|(|u|+2)A,,Xus|(|u|+n)A|Xu=a)\displaystyle\mathbb{P}(X_{u*s|(|u|+1)}\notin A,X_{u*s|(|u|+2)}\notin A,\dots,X_{u*s|(|u|+n)}\notin A|X_{u}=a^{\prime})
=\displaystyle= a(Xs|1A,Xs|2A,,Xs|nA)\displaystyle\mathbb{P}_{a^{\prime}}(X_{s|1}\notin A,X_{s|2}\notin A,\dots,X_{s|n}\notin A)
=\displaystyle= In(a,A)\displaystyle I_{n}(a^{\prime},A)

Given XuX_{u}, by Section 4 the inspection process on 𝕋(u)\mathbb{T}^{(u)} must terminate a.s. Thus, for each path s𝕋s\in\partial\mathbb{T} passing through uu, there exists a random integer jj, |u|<j<|u|<j<\infty such that Xs|jAX_{s|j}\in A. Let Nu,sN_{u,s} be the smallest value of such jj’s. Let

ΠA(1)(𝕋(u))={s|Nu,s:s𝕋passing through u}{\Pi^{(1)}_{A}(\mathbb{T}^{(u)})}=\left\{s|_{{N}_{u,s}}:\ s\in{\partial{\mathbb{T}}}~{}~{}\text{passing through }u\right\} (4.6)

corresponding to the first passage set of the tree rooted at uu through AA. Thus, by Section 4, ΠA(1)(𝕋(u)){\Pi^{(1)}_{A}(\mathbb{T}^{(u)})} is a.s. finite and nonempty and separates boundary paths in 𝕋\partial\mathbb{T} that pass through uu from uu. Note that, by definition, the k+1k+1’st passage set is given by

ΠA(k+1)=uΠA(k)ΠA(1)(𝕋(u))\Pi_{A}^{(k+1)}=\bigcup_{u\in\Pi_{A}^{(k)}}{\Pi^{(1)}_{A}(\mathbb{T}^{(u)})} (4.7)

which is nonempty and almost surely finite. Since \Pi_{A}^{(k)} is a cut-set, \Pi_{A}^{(k+1)} is also a cut-set. We conclude that A is cut-set recurrent, thus completing the proof of Part (i) of Section 2.

Figure 1: ΠA(1)\Pi_{A}^{(1)} consists of the bold dots. ΠA(2)\Pi_{A}^{(2)} consists of the stars.

We begin the proof of Part (ii) of Section 2 by showing the estimate for the mean cardinality of ΠA(1)\Pi_{A}^{(1)}, namely

𝔼a[cardΠA(1)]2+ψ(a)2rr2\mathbb{E}_{a}[{\rm card}\,\Pi_{A}^{(1)}]\leq 2+\psi(a)\frac{2r}{r-2} (4.8)

and use induction for the other estimate. With N(a) defined in (4.1), if N(a)=1 then \Pi_{A}^{(1)}=\{1,2\} a.s. and (4.8) holds. Consider the case N(a)\geq 2. The mean total number of individuals passing the inspection process is

𝔼a[n=0N(a)Zn]=n=0N(a)𝔼Zn1+ψ(a)n=1N(a)(2r)n1+ψ(a)rr2,\mathbb{E}_{a}\left[\sum\limits_{n=0}^{N(a)}{{{Z}_{n}}}\right]=\sum\limits_{n=0}^{N(a)}{\mathbb{E}{{Z}_{n}}}\leq 1+\psi(a)\sum\limits_{n=1}^{N(a)}\left(\frac{2}{r}\right)^{n}\leq 1+\psi(a)\frac{r}{r-2}\,,

according to (4.5). Because each vertex in ΠA(1)\Pi_{A}^{(1)} is an offspring of a vertex that passes the inspection process, card ΠA(1)\Pi_{A}^{(1)} is at most twice the total number of vertices passing the inspection process. Therefore,

𝔼a[cardΠA(1)]2𝔼[n=0Zn]2+ψ(a)2rr2.\mathbb{E}_{a}[{\rm card}\,{\Pi_{A}^{(1)}}]\leq 2\mathbb{E}\left[\sum\limits_{n=0}^{\infty}{{{Z}_{n}}}\right]\leq 2+\psi(a)\frac{2r}{r-2}.

Consider now uΠA(k)u\in\Pi_{A}^{(k)} for some k1k\geq 1, so that Xu=aA.X_{u}=a^{\prime}\in A. Then, by properties (A) and (B) of the branching Markov chain we have

𝔼a[card(ΠA(1)(𝕋(u)))|Xu=a]=𝔼Xu=a[card(ΠA(1)(𝕋(u)))]=𝔼Xθ=a[card(ΠA(1))]2+ψ(a)2rr2{{\mathbb{E}}_{a}}[\operatorname{card}({\Pi^{(1)}_{A}(\mathbb{T}^{(u)})})|X_{u}=a^{\prime}]={\mathbb{E}}_{X_{u}=a^{\prime}}[\operatorname{card}({\Pi^{(1)}_{A}(\mathbb{T}^{(u)})})]={{\mathbb{E}}_{{{X}_{\theta}}=a^{\prime}}}[\operatorname{card}(\Pi^{(1)}_{A})]\leq 2+\psi(a^{\prime})\frac{2r}{r-2}~{}~{}~{}

Recall that \psi is now assumed to be bounded on A, and set M(a)=\sup\left\{\psi(x):\,x\in A\cup\{a\}\right\}. Then

𝔼a[card(ΠA(1)(𝕋(u)))|Xu=a]2+M(a)2rr2μ(a){{\mathbb{E}}_{a}}[\operatorname{card}({{\Pi^{(1)}_{A}(\mathbb{T}^{(u)})}})|X_{u}=a^{\prime}]\leq 2+M(a)\frac{2r}{r-2}\equiv\mu(a) (4.9)

The proof of Section 2 is completed by noting that

𝔼a[cardΠA(k)]=𝔼a[𝔼a[cardΠA(k)|ΠA(k1)]]μ(a)𝔼a[cardΠA(k1)](μ(a))k.\mathbb{E}_{a}[{\rm card}\,{\Pi_{A}^{(k)}}]=\mathbb{E}_{a}[\mathbb{E}_{a}[{\rm card}\,{\Pi_{A}^{(k)}}|\Pi_{A}^{(k-1)}]]\leq\mu(a)\mathbb{E}_{a}[{\rm card}\,{\Pi_{A}^{(k-1)}}]\leq(\mu(a))^{k}.

5 Proof of the non-explosion criterion (Section 2)

Thanks to Section 2, we can define a random sequence of nonnegative integers {Hk}\{H_{k}\} where H0=1H_{0}=1 and

Hk=max{|v|:vΠA(k)}k.H_{k}=\max\{|v|:~{}v\in\Pi_{A}^{(k)}\}\ \ \ \forall\,k\in\mathbb{N}. (5.1)

One can observe that for any kk\in\mathbb{N} and s𝕋s\in\partial\mathbb{T}, there exists lk=lk(s)[k,Hk)l_{k}=l_{k}(s)\in[k,\,H_{k}) satisfying s|lkΠA(k)s|l_{k}\in\Pi_{A}^{(k)}. In order to establish the non-explosive character of the DSY cascade, note that for kk\in\mathbb{N} and with nHkn\geq H_{k}, we have:

ζn=min|v|=nj=1nTv|jλ(Xv|j)min|v|=ni=1kTv|liλ(Xv|li)1cmin|v|=ni=1kTv|li{{\zeta}_{n}}=\underset{|v|=n}{\mathop{\min}}\,\sum\limits_{j=1}^{n}\frac{T_{v|j}}{\lambda(X_{v|j})}\geq\underset{|v|=n}{\mathop{\min}}\,\sum\limits_{i=1}^{k}{\frac{{{{{T}}}_{v|{{l}_{i}}}}}{\lambda(X_{v|{{l}_{i}}})}}\geq\frac{1}{c}\underset{|v|=n}{\mathop{\min}}\,\sum\limits_{i=1}^{k}{{{{{T}}}_{v|{{l}_{i}}}}}

where l_{i}=l_{i}(s_{v}) with s_{v}\in\partial\mathbb{T} such that s_{v}|_{|v|}=v. In the last inequality we have used that A=\lambda^{-1}((0,c]), so that X_{v|l_{i}}\in A implies \lambda(X_{v|l_{i}})\leq c. Since \{T_{v}\}_{v\in\mathbb{T}} is independent of \{X_{v}\}_{v\in\mathbb{T}}, in order to establish that the DSY cascade is not explosive it suffices to show that almost surely,

limkmin|v|=Hki=1kTv|li=\underset{k\to\infty}{\mathop{\lim}}\,\underset{|v|=H_{k}}{\mathop{\min}}\,\sum\limits_{i=1}^{k}{{{{{T}}}_{v|{{l}_{i}}}}}=\infty (5.2)

For this, we define a random tree, referred to as the “reduced tree”, consisting of the root \theta and the vertices in the passage cut-sets \Pi_{A}^{(1)}, \Pi_{A}^{(2)}, \Pi_{A}^{(3)}, …Throughout this section we will write

K_{u}=\textrm{card}\,{\Pi^{(1)}_{A}(\mathbb{T}^{(u)})} (5.3)

to denote the random number of vertices in the first passage cut-set of the tree rooted at u. For the reduced tree, the root \theta has K_{\theta}=\textrm{card}\,{\Pi^{(1)}_{A}} offspring, each of which is a vertex in the cut-set \Pi^{(1)}_{A}. Each vertex u of the first generation of the reduced tree has K_{u} offspring, each of which is a vertex in the cut-set \Pi^{(1)}_{A}(\mathbb{T}^{(u)}), and so on (Figure 2). Thus, the first generation of the reduced tree consists of vertices in the first cut-set \Pi_{A}^{(1)}. The second generation of this tree consists of vertices in the second cut-set \Pi_{A}^{(2)}. In general, the k’th generation of the reduced tree consists of vertices in the k’th cut-set \Pi_{A}^{(k)}.

Figure 2: The reduced tree 𝒯\mathscr{T} corresponding to the binary tree 𝕋\mathbb{T} in Figure 1.

The set of all vertices of the random reduced tree is denoted by 𝒯𝕍={θ}(n=1n)\mathscr{T}\subset\mathbb{V}=\{\theta\}\cup(\cup_{n=1}^{\infty}\mathbb{N}^{n}). Recall that the configuration of 𝒯\mathscr{T} is independent of the clocks {Tv}v𝕍\{T_{v}\}_{v\in\mathbb{V}} after a natural relabeling of the vertices (Figure 2). Now (5.2) becomes a non-explosion problem of a DSY cascade {Tv}v𝕍\{T_{v}\}_{v\in\mathbb{V}} with all intensities equal to 1 on a random tree structure 𝒯\mathscr{T}. The uniform bound obtained in Part (ii) of Section 2 on the expected number of offspring of each vertex turns out to be crucial for our approach. For further criteria for non-explosion of a DSY cascade on a random tree structure, see [part1_2021, Sec. 4].

To show (5.2), we will use a cut-set argument similar to the one used in the proof of Section 2. Let \varepsilon>0 be a number to be chosen. Given the random tree \mathscr{T}, we start an inspection process of whether T_{v}\leq\varepsilon as follows. At each vertex v\in\mathscr{T}, starting from the root \theta, we inspect its offspring. If an offspring u\in\{v*1,\,v*2,\ldots,\,v*K_{v}\} satisfies T_{u}\leq\varepsilon then it passes the inspection. Otherwise, it does not pass the inspection and the inspection process along the path containing u stops at u. For any vertex that has passed the inspection, we continue to inspect its own offspring. Note that we do not inspect the root: \theta is considered to pass the inspection whether or not T_{\theta}\leq\varepsilon. In this manner, the inspection process might keep going indefinitely or stop after finitely many steps. To show that the process almost surely stops after a finite number of inspections, we establish the following more general result.

Lemma \thelem.

Assume there exists 0<ν<0<\nu<\infty such that

max{1,𝔼Kθ,supv𝒯\{θ}𝔼[Kv|{Ku}|u||v|1]}<ν<.\max\left\{1,\,\mathbb{E}K_{\theta},\,\sup_{v\in\mathscr{T}\backslash\{\theta\}}\mathbb{E}\left[{{K}_{v}}|\ \{{K}_{u}\}_{|u|\leq|v|-1}\right]\right\}<\nu<\infty.

Let 0<\varepsilon<\log\frac{\nu}{\nu-1}. Then the inspection process almost surely stops after finitely many steps and B=(\varepsilon,\infty) is cut-set recurrent with respect to (\{T_{v}\}_{v\in\mathbb{V}},\mathscr{T}).

Proof.

The probability for a vertex v𝒯v\in\mathscr{T} to pass the inspection is

δ:=(Tvε)=0εet𝑑t=1eε<1ν.\delta:=\mathbb{P}({T}_{v}\leq\varepsilon)=\int_{0}^{\varepsilon}e^{-t}dt=1-e^{-\varepsilon}<\frac{1}{\nu}.

The number of offspring of the root θ\theta that pass the inspection is

K~θ=j=1Kθ𝟙Tjε{{\tilde{K}}_{\theta}}=\sum\limits_{j=1}^{{{K}_{\theta}}}{{\mathbbm{1}_{{{T}_{j}}\leq\varepsilon}}}

which is the sum of KθK_{\theta} i.i.d. random variables with distribution Bernoulli(δ\delta) that are also independent of KθK_{\theta}. Thus

𝔼K~θ=𝔼[δKθ]δν<1.\mathbb{E}\tilde{K}_{\theta}=\mathbb{E}[\delta{K}_{\theta}]\leq\delta\nu<1.

For each n\geq 0, denote by V_{n} the total number of vertices of the n’th generation that pass the inspection. We have V_{0}=1 and V_{1}=\tilde{K}_{\theta}. If we label the vertices of the n’th generation that pass the inspection by v_{n,j} for 1\leq j\leq V_{n} then

Vn+1=j=1VnK~vn,jn0.{{V}_{n+1}}=\sum\limits_{j=1}^{{{V}_{n}}}{{{{\tilde{K}}}_{{{v}_{n,j}}}}}\ \ \ \forall\,n\geq 0.

Write u=v_{n,j}. Since \{T_{v}\}_{v\in\mathbb{V}} is independent of \{K_{v}\}_{v\in\mathbb{V}}, we have

𝔼[K~u|Vn]=𝔼[j=1Ku𝟙Tujε|Vn]=δ𝔼[Ku|Vn]\mathbb{E}\left[{{{\tilde{K}}}_{u}}|{{V}_{n}}\right]=\mathbb{E}\left[\sum\limits_{j=1}^{{{K}_{u}}}{{\mathbbm{1}_{{{T}_{u*j}}\leq\varepsilon}}|\,{{V}_{n}}}\right]=\delta\mathbb{E}\left[{{K}_{u}}|{{V}_{n}}\right]

Moreover,

𝔼[Ku|Vn]\displaystyle\mathbb{E}\left[{{K}_{u}}|{{V}_{n}}\right] =\displaystyle= 𝔼[𝔼[Ku|{Kv}|v|n]|Vn]\displaystyle\mathbb{E}\left[\mathbb{E}[{{K}_{u}}|{{\{{{K}_{v}}\}}_{|v|\leq n}}]|{{V}_{n}}\right]
\displaystyle\leq 𝔼[ν|Vn]=ν.\displaystyle\mathbb{E}\left[\nu|{{V}_{n}}\right]=\nu.

Thus

\mathbb{E}V_{n+1}=\mathbb{E}\left[\mathbb{E}\left[\tilde{K}_{u}\,|\,V_{n}\right]V_{n}\right]\leq\delta\nu\,\mathbb{E}V_{n}.

Because δν<1\delta\nu<1, limn𝔼Vn=0\lim_{n\to\infty}\mathbb{E}V_{n}=0. Put V=lim infVnV_{\infty}=\liminf V_{n}. By Fatou’s lemma, 𝔼V=0\mathbb{E}V_{\infty}=0. Therefore, V=0V_{\infty}=0 almost surely and hence, the inspection process for {Tv}v𝕍\{T_{v}\}_{v\in\mathbb{V}} almost surely stops in finitely many steps.

To show that the interval B=(ε,)B=(\varepsilon,\infty) is cut-set recurrent with respect to ({Tv}v𝕍,𝒯)(\{T_{v}\}_{v\in\mathbb{V}},\mathscr{T}), construct the sequence of passage cut-sets Π~B(k)\tilde{\Pi}_{B}^{(k)} by the procedure described in the proof of Section 2: The first passage cut-set Π~B(1)\tilde{\Pi}_{B}^{(1)} consists of, for each s𝒯s\in\partial\mathscr{T}, the first vertex vθv\neq\theta such that Tv>εT_{v}>\varepsilon. Starting at each vertex in Π~B(k)\tilde{\Pi}_{B}^{(k)}, we start a new independent inspection process. The union of the vertices where one of these inspection processes stops gives the passage cut-set Π~B(k+1)\tilde{\Pi}_{B}^{(k+1)}. Since each of these sets is finite, BB is cut-set recurrent as claimed. ∎
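A minimal numerical check of this lemma is sketched below (assumptions: a geometric offspring law stands in for the reduced-tree offspring counts K_{v}, and the specific value of \nu is arbitrary); it verifies \delta\nu<1 for \varepsilon<\log\frac{\nu}{\nu-1} and observes the extinction of the \varepsilon-inspection process.

import numpy as np

rng = np.random.default_rng(5)

# If nu bounds the (conditional) mean offspring number and eps < log(nu/(nu-1)),
# then delta*nu < 1 and the eps-inspection process dies out.
nu = 4.0
eps = 0.9 * np.log(nu / (nu - 1.0))
delta = 1.0 - np.exp(-eps)
print("delta * nu =", round(delta * nu, 3), "(< 1, so the inspection process is subcritical)")

def still_alive(n_gens=60):
    v = 1                                            # V_0 = 1: the root always passes
    for _ in range(n_gens):
        if v == 0:
            return False
        k = rng.geometric(1.0 / nu, size=v) - 1      # offspring counts with mean nu - 1 <= nu
        clocks = rng.exponential(1.0, size=int(k.sum()))
        v = int((clocks <= eps).sum())               # children with T <= eps pass the inspection
    return v > 0

print("runs still alive after 60 generations:", sum(still_alive() for _ in range(2000)), "/ 2000")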

Define a strictly increasing sequence of random variables H~k\tilde{H}_{k} as: H~0=1\tilde{H}_{0}=1 and

H~k=max{|v|:vΠ~B(k)}k.\tilde{H}_{k}=\max\{|v|:~{}v\in\tilde{\Pi}_{B}^{(k)}\}~{}~{}~{}\forall\,k\in\mathbb{N}.

It then follows that almost surely, for each kk\in\mathbb{N} and s𝒯s\in\partial\mathscr{T}, there exists l~k(s)[k,H~k)\tilde{l}_{k}(s)\in[k,\,\tilde{H}_{k}) such that s|l~k(s)Π~B(k)s|{\tilde{l}_{k}(s)}\in\tilde{\Pi}_{B}^{(k)}. Then, for n>H~kn>\tilde{H}_{k}

\tilde{\zeta}_{n}=\underset{\begin{smallmatrix}v\in\mathscr{T}\\ |v|=n\end{smallmatrix}}{\mathop{\min}}\,\sum\limits_{j=1}^{n}T_{v|j}\geq\underset{\begin{smallmatrix}v\in\mathscr{T}\\ |v|=n\end{smallmatrix}}{\mathop{\min}}\,\sum\limits_{i=1}^{k}T_{v|\tilde{l}_{i}}\geq k\varepsilon

where l~i=l~i(sv)\tilde{l}_{i}=\tilde{l}_{i}(s_{v}) with sv𝒯s_{v}\in\partial\mathscr{T} such that sv||v|=vs_{v}|_{|v|}=v. Letting kk\to\infty, we get ζ~=limζ~n=\tilde{\zeta}=\lim{\tilde{\zeta}}_{n}=\infty which implies (5.2). This completes the proof of Section 2.

6 Criteria for cut-set recurrence

In this section, we give two criteria for a branching Markov chain to satisfy the condition (E) in terms of the transition probability density p(x,y)p(x,y). These criteria will be used to show the non-explosion of the cascade of the KPP equation on the Fourier side (Section 11.2) and of the Bessel cascade of the three-dimensional Navier-Stokes equations (Section 11.4).

Proposition \theprop.

Let {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} be a branching Markov chain on the state space SS satisfying (A) and (B) with the transition probability density p(x,y)p(x,y). Suppose there exist a constant b>0b>0 and functions ϕ1,ϕ2:S(0,)\phi_{1},\phi_{2}:S\to(0,\infty) such that:

  1. (i)

    p(x,y)ϕ1(x)ϕ2(y)p(x,y)\leq\phi_{1}(x)\phi_{2}(y) for all x,ySb=λ1((b,))x,y\in S_{b}=\lambda^{-1}((b,\infty)).

  2. (ii)

    ϕ1=ϕλ\phi_{1}=\phi\circ\lambda for some locally bounded function ϕ:(0,)(0,)\phi:(0,\infty)\to(0,\infty).

  3. (iii)

    ϕ2\phi_{2}, ϕ1ϕ2L1(Sb)\phi_{1}\phi_{2}\in L^{1}(S_{b}).

Then {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} satisfies the condition (E).

Proof.

By Section 2, it is sufficient to show that there exist constants c>bc>b and r>2r>2 such that Inϕ1(a)rn{I}_{n}\lesssim\phi_{1}(a)r^{-n}, where In{I}_{n} is defined by

In(a,c)=a(Xs|1,Xs|nSc)aS,nI_{n}(a,c)=\mathbb{P}_{a}(X_{s|1},\ldots X_{s|n}\in S_{c})\ \ \ \forall a\in S,\,n\in\mathbb{N}

with Sc=λ1((c,))S_{c}=\lambda^{-1}((c,\infty)). Fix an initial state Xθ=a>0X_{\theta}=a>0 and a path s𝕋s\in\partial\mathbb{T}. For simplicity, we write the sequence XθX_{\theta}, Xs|1X_{s|1}, Xs|2X_{s|2},…as X0X_{0}, X1X_{1}, X2X_{2},…Since this is a time homogeneous Markov chain with transition probability density pp,

In=ScScp(a,y1)p(y1,y2)p(yn1,yn)𝑑yn𝑑y1n1.{I}_{n}={\int_{S_{c}}{\ldots\int_{S_{c}}p(a,y_{1})p(y_{1},y_{2})\ldots p(y_{n-1},y_{n})d{{y}_{n}}\ldots d{{y}_{1}}}}~{}~{}~{}\forall\,n\geq 1.

By the condition (i), p(yj1,yj)ϕ1(yj1)ϕ2(yj)p(y_{j-1},y_{j})\leq\phi_{1}(y_{j-1})\phi_{2}(y_{j}). We obtain the estimate

Inϕ1(a)(Scϕ1ϕ2𝑑x)n1Scϕ2𝑑xϕ1(a)ϕ2L1(Sb)ϕ1ϕ2L1(Sb)rn{{I}_{n}}\leq{{\phi}_{1}}(a){{\left(\int_{S_{c}}{{{\phi}_{1}}{{\phi}_{2}}dx}\right)}^{n-1}}\int_{S_{c}}{{{\phi}_{2}}dx}\leq{{\phi}_{1}}(a)\frac{{{\left\|{{\phi}_{2}}\right\|}_{{{L}^{1}}(S_{b})}}}{{{\left\|{{\phi}_{1}}{{\phi}_{2}}\right\|}_{{{L}^{1}}(S_{b})}}}{{r}^{-n}}

where 1/r=Scϕ1ϕ2𝑑x1/r=\int_{S_{c}}{{{\phi}_{1}}{{\phi}_{2}}dx}. By choosing c>bc>b sufficiently large, we get r>2r>2. ∎
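The choice of c in this proof can be illustrated numerically. In the sketch below the bounds \phi_{1}(x)=\phi_{2}(x)=e^{-x/2} and \lambda(x)=x (so that S_{c}=(c,\infty)) are hypothetical choices made only for this example; the point is that r=1/\int_{S_{c}}\phi_{1}\phi_{2}\,dx exceeds 2 once c is large enough (here, as soon as c>\log 2).

import numpy as np
from scipy.integrate import quad

# Hypothetical bounds: phi1(x) = phi2(x) = exp(-x/2), lambda(x) = x, S_c = (c, infinity).
phi1 = phi2 = lambda x: np.exp(-x / 2.0)
for c in (0.5, np.log(2.0), 1.0, 2.0):
    integral, _ = quad(lambda x: phi1(x) * phi2(x), c, np.inf)   # int_{S_c} phi1*phi2 dx
    print(f"c = {c:.3f}:  r = 1/integral = {1.0 / integral:.3f}")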

Proposition \theprop.

Let \{X_{v}\}_{v\in\mathbb{T}} be a branching Markov chain on the state space S=(0,\infty) satisfying (A), (B) and the following conditions:

  1. (i)

    For any path s𝕋s\in\partial\mathbb{T}, the Markov chain XθX_{\theta}, Xs|1X_{s|1}, Xs|2X_{s|2},…is time-reversible with the transition probability density p(x,y)p(x,y) and the invariant probability density γ(x)\gamma(x).

  2. (ii)

    There exist c1>1c_{1}>1 and a locally bounded function ψ1:(0,)\psi_{1}:(0,\infty)\to\mathbb{R} such that

    p(x,y)ψ1(x)γ(y)x,y>c1.p(x,y)\leq\psi_{1}(x)\gamma(y)~{}~{}~{}~{}\forall\,x,y>c_{1}.
  3. (iii)

    There exists c2>1c_{2}>1 such that

    p(x,y)c2yc1<y<x.p(x,y)\leq\frac{c_{2}}{y}\ \ \ \forall\,c_{1}<y<x.
  4. (iv)

    There exists a function ψ2:(0,)\psi_{2}:(0,\infty)\to\mathbb{R} such that

    p(x,y)ψ2(yx)xc1<x<y.p(x,y)\leq\frac{\psi_{2}(y-x)}{x}\ \ \ \forall\,c_{1}<x<y.
  5. (v)

    There exists α>2c2\alpha>2c_{2} such that

    0ψ2(x)xα𝑑x<,0γ(x)xα𝑑x<.\int_{0}^{\infty}\psi_{2}(x)x^{\alpha}dx<\infty,~{}~{}~{}\int_{0}^{\infty}\gamma(x)x^{\alpha}dx<\infty.

Then {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} satisfies the condition (E) with A=(0,c]A=(0,c], 2<r<α/c22<r<\alpha/c_{2}, ψ=2γxαL1ψ1\psi=2\|\gamma x^{\alpha}\|_{L^{1}}\psi_{1} and

c=max{c1,2α1r(ψ2L1+ψ2xαL1)1(c2r)/α}.c=\max\left\{c_{1},\,\frac{2^{\alpha-1}r(\|\psi_{2}\|_{L^{1}}+\|\psi_{2}x^{\alpha}\|_{L^{1}})}{1-(c_{2}r)/\alpha}\right\}.
Proof.

As noted in Section 2, it is sufficient to show Inψ(a)rn{I}_{n}\leq\psi(a)r^{-n}, where In{I}_{n} is defined by (2.2). Fix an initial state Xθ=a>0X_{\theta}=a>0 and a path s𝕋s\in\partial\mathbb{T}. Again, we write the sequence XθX_{\theta}, Xs|1X_{s|1}, Xs|2X_{s|2},…as X0X_{0}, X1X_{1}, X2X_{2},…Since this is a time homogeneous Markov chain with transition probability density pp,

In=ccp(a,y1)p(y1,y2)p(yn1,yn)𝑑yn𝑑y1n1.{I}_{n}={\int_{c}^{\infty}{\ldots\int_{c}^{\infty}p(a,y_{1})p(y_{1},y_{2})\ldots p(y_{n-1},y_{n})d{{y}_{n}}\ldots d{{y}_{1}}}}~{}~{}~{}\forall\,n\geq 1.

Consider a sequence of functions {fn}\{f_{n}\} given by f1(y)=p(a,y)f_{1}(y)=p(a,y) and

fn+1(x)=cp(y,x)fn(y)𝑑yn1.{{f}_{n+1}}(x)=\int_{c}^{\infty}{p(y,x){{f}_{n}}(y)dy}\ \ \ \forall n\geq 1.

Then In=cfn(y)𝑑y{I}_{n}=\int_{c}^{\infty}f_{n}(y)dy. Define a sequence of functions {gn}\{g_{n}\} by g1(x)1g_{1}(x)\equiv 1 and

gn+1(x)=cp(x,y)gn(y)𝑑yn1.g_{n+1}(x)=\int_{c}^{\infty}p(x,y)g_{n}(y)dy~{}~{}~{}\forall n\geq 1. (6.1)

We now show by induction that fn(x)ψ1(a)γ(x)gn(x)f_{n}(x)\leq\psi_{1}(a)\gamma(x)g_{n}(x). This is true for n=1n=1 thanks to condition (ii). Suppose fnψ1(a)γgnf_{n}\leq\psi_{1}(a)\gamma g_{n} for some n1n\geq 1. By the induction hypothesis and condition (i),

f_{n+1}(x)=\int_{c}^{\infty}p(y,x)f_{n}(y)\,dy\leq\psi_{1}(a)\int_{c}^{\infty}p(y,x)\gamma(y)g_{n}(y)\,dy=\psi_{1}(a)\int_{c}^{\infty}p(x,y)\gamma(x)g_{n}(y)\,dy=\psi_{1}(a)\gamma(x)g_{n+1}(x)

which completes the proof by induction. We show by induction that

gn(x)min{xαrn, 1}n,x>c.g_{n}(x)\leq\min\{x^{\alpha}r^{-n},\,1\}~{}~{}~{}\forall n\in\mathbb{N},\,x>c. (6.2)

Using the fact that 0p(x,y)𝑑y=1\int_{0}^{\infty}p(x,y)dy=1, we can deduce from (6.1) that gn(x)1g_{n}(x)\leq 1 for all nn\in\mathbb{N} and x>0x>0. Because c>1c>1, (6.2) is true for n=1n=1. Suppose by induction that for some n1n\geq 1,

gn(x)xαrnx>c.g_{n}(x)\leq x^{\alpha}r^{-n}~{}~{}~{}\forall x>c.

We decompose gn+1g_{n+1} given by (6.1) as follows.

gn+1(x)=cxp(x,y)gn(y)𝑑y{1}+xp(x,y)gn(y)𝑑y{2}.{{g}_{n+1}}(x)=\underbrace{\int_{c}^{x}{p}(x,y){{g}_{n}}(y)dy}_{\{1\}}+\underbrace{\int_{x}^{\infty}{p}(x,y){{g}_{n}}(y)dy}_{\{2\}}.

The first term can be estimated using condition (iii):

{1}cxc2y(yαrn)𝑑y=c2rncxyα1𝑑y<xαrn1κ.\{1\}\leq\int_{c}^{x}{\frac{{{c}_{2}}}{y}({{y}^{\alpha}}{{r}^{-n}})dy}={{c}_{2}}{{r}^{-n}}\int_{c}^{x}{{{y}^{\alpha-1}}dy}<{{x}^{\alpha}}{{r}^{-n-1}}\kappa.

The second term can be estimated using condition (iv):

{2}xψ2(yx)x(yαrn)𝑑y=rnx0ψ2(z)(x+z)α𝑑z.\{2\}\leq\int_{x}^{\infty}{\frac{\psi_{2}(y-x)}{x}({{y}^{\alpha}}{{r}^{-n}})dy}=\frac{{{r}^{-n}}}{x}\int_{0}^{\infty}{\psi_{2}(z){{(x+z)}^{\alpha}}dz}.

By the inequality (x+z)α2α1(xα+zα)(x+z)^{\alpha}\leq 2^{\alpha-1}(x^{\alpha}+z^{\alpha}), we have

{2}\displaystyle\{2\} <\displaystyle< rn2α1c0ψ2(z)(xα+zα)𝑑z\displaystyle\frac{{{r}^{-n}}{{2}^{\alpha-1}}}{c}\int_{0}^{\infty}{\psi_{2}(z)({{x}^{\alpha}}+{{z}^{\alpha}})dz}
<\displaystyle< xαrn1(2α1rψ2L1+2α1rψ2zαL1c)\displaystyle{{x}^{\alpha}}{{r}^{-n-1}}\left(\frac{{{2}^{\alpha-1}}r{{\left\|\psi_{2}\right\|}_{{{L}^{1}}}}+{{2}^{\alpha-1}}r{{\left\|\psi_{2}{{z}^{\alpha}}\right\|}_{{{L}^{1}}}}}{c}\right)
<\displaystyle< xαrn1(1κ)\displaystyle{{x}^{\alpha}}{{r}^{-n-1}}(1-\kappa)

where κ=c2rα\kappa=\frac{c_{2}r}{\alpha}. By the above estimates, gn+1(x)={1}+{2}<xαrn1g_{n+1}(x)=\{1\}+\{2\}<x^{\alpha}r^{-n-1}. Therefore, (6.2) is true for every nn. We proceed to estimate In{I}_{n}. Put βn=rn/α\beta_{n}=r^{n/\alpha}. If βn>c\beta_{n}>c then

Inψ1(a)=1ψ1(a)cfn(x)𝑑x\displaystyle\frac{{{{I}}_{n}}}{\psi_{1}(a)}=\frac{1}{\psi_{1}(a)}\int_{c}^{\infty}{{{f}_{n}}(x)dx} \displaystyle\leq cγ(x)gn(x)𝑑x\displaystyle\int_{c}^{\infty}{\gamma(x){{g}_{n}}(x)dx}
\displaystyle\leq cβnγ(x)xαrn𝑑x{3}+βnγ(x)𝑑x{4}.\displaystyle\underbrace{\int_{c}^{{{\beta}_{n}}}{\gamma(x){{x}^{\alpha}}{{r}^{-n}}dx}}_{\{3\}}+\underbrace{\int_{{{\beta}_{n}}}^{\infty}{\gamma(x)dx}}_{\{4\}}.

Each term can be estimated as follows:

{3}\displaystyle\{3\} \displaystyle\leq rn0γ(x)xα𝑑x=rnγxαL1,\displaystyle{{r}^{-n}}\int_{0}^{\infty}{\gamma(x){{x}^{\alpha}}dx}={{r}^{-n}}{{\left\|\gamma{{x}^{\alpha}}\right\|}_{{{L}^{1}}}},
{4}\displaystyle\{4\} \displaystyle\leq βnαβnγ(x)xα𝑑xrnγxαL1.\displaystyle\beta_{n}^{-\alpha}\int_{{{\beta}_{n}}}^{\infty}{\gamma(x){{x}^{\alpha}}dx}\leq{{r}^{-n}}{{\left\|\gamma{{x}^{\alpha}}\right\|}_{{{L}^{1}}}}.

Therefore, In2γxαL1ψ1(a)rn{I}_{n}\leq 2\|\gamma x^{\alpha}\|_{L^{1}}\psi_{1}(a)r^{-n}. If βnc\beta_{n}\leq c then

Inψ1(a){4}rnγxαL1.\frac{{I}_{n}}{\psi_{1}(a)}\leq\{4\}\leq r^{-n}\|\gamma x^{\alpha}\|_{L_{1}}.

In this case, we have InγxαL1ψ1(a)rn{I}_{n}\leq\|\gamma x^{\alpha}\|_{L^{1}}\psi_{1}(a)r^{-n}. ∎

7 Self-similar DSY cascades and proof of the proposition in Section 2

We begin this section by noting some properties of self-similar DSY cascades.

Lemma \thelem.

Let {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} be a self-similar DSY cascade on S(0,)S\subset(0,\infty). Then for any initial state Xθ=aSX_{\theta}=a\in S and path s𝕋s\in\partial\mathbb{T}, the ratios {Rs|j=Xs|j/Xs|j1}j1\{R_{s|j}=X_{s|j}/X_{s|j-1}\}_{j\geq 1} form an i.i.d. sequence with the common distribution p(1,dx)p(1,dx).

Proof.

For v𝕋\{θ}v\in\mathbb{T}\backslash\{\theta\}, the ratio RvR_{v} can be written as Rv=Xv/XvR_{v}=X_{v}/X_{\overleftarrow{v}}, where v\overleftarrow{v} denotes the parent of vv. Now fix v𝕋\{θ}v\in\mathbb{T}\backslash\{\theta\}. By the definition of self-similar DSY cascades, the family {Yw=Xvw/Xv}w𝕋\{Y_{w}=X_{\overleftarrow{v}*w}/X_{\overleftarrow{v}}\}_{w\in\mathbb{T}} satisfies (A), (B) with the transition probability p(x,dy)p(x,dy). Since Yθ=1Y_{\theta}=1, Y1Y_{1} and Y2Y_{2} have the distribution p(1,dx)p(1,dx). Therefore, RvR_{v} also has the distribution p(1,dx)p(1,dx).

Next, we show that the transition probability has the scaling property p(a,adx)=p(1,dx)p(a,adx)=p(1,dx) for all aSa\in S. This is obtained by expressing the cumulative distribution functions of R1R_{1} in two ways. First, a(R1x)=0xp(1,dz)\mathbb{P}_{a}(R_{1}\leq x)=\int_{0}^{x}p(1,dz) for any a,xSa,x\in S. Second, a(R1x)=a(X1ax)=0axp(a,dy)\mathbb{P}_{a}(R_{1}\leq x)=\mathbb{P}_{a}(X_{1}\leq ax)=\int_{0}^{ax}p(a,dy). By the change of variables z=y/az=y/a in the last integral, one obtains p(a,adz)=p(1,dz)p(a,adz)=p(1,dz).

Finally, we show that along each path s𝕋s\in\partial\mathbb{T}, the ratios {Rs|j}j1\{R_{s|j}\}_{j\geq 1} are independent. For any r1,r2,,rnSr_{1},r_{2},\ldots,r_{n}\in S,

a(R1r1,,Rnrn)\displaystyle{{\mathbb{P}}_{a}}({{R}_{1}}\leq{{r}_{1}},\ldots,{{R}_{n}}\leq{{r}_{n}}) =\displaystyle= a(X1ar1,X2X1r2,,XnXn1rn)\displaystyle{{\mathbb{P}}_{a}}({{X}_{1}}\leq a{{r}_{1}},{{X}_{2}}\leq{{X}_{1}}{{r}_{2}},\ldots,{{X}_{n}}\leq{{X}_{n-1}}{{r}_{n}})
=\displaystyle= 0ar10x1r20xn1rnp(a,dx1)p(x1,dx2)p(xn1,dxn)\displaystyle\int_{0}^{a{{r}_{1}}}{\int_{0}^{{{x}_{1}}{{r}_{2}}}{\ldots\int_{0}^{{{x}_{n-1}}{{r}_{n}}}{p(a,d{{x}_{1}})p({{x}_{1}},d{{x}_{2}})\ldots p({{x}_{n-1}},d{{x}_{n}})}}}

where RjR_{j} and XjX_{j} are short notations for Rs|jR_{s|j} and Xs|jX_{s|j}. By the change of variables yj=xj/xj1y_{j}=x_{j}/x_{j-1} (with x0=ax_{0}=a) and the scaling property of pp, one has

a(R1r1,,Rnrn)\displaystyle{{\mathbb{P}}_{a}}({{R}_{1}}\leq{{r}_{1}},\ldots,{{R}_{n}}\leq{{r}_{n}}) =\displaystyle= 0r10r20rnp(1,dy1)p(1,dy2)p(1,dyn)\displaystyle\int_{0}^{{{r}_{1}}}{\int_{0}^{{{r}_{2}}}{\ldots\int_{0}^{{{r}_{n}}}{p(1,d{{y}_{1}})p(1,d{{y}_{2}})\ldots p(1,d{{y}_{n}})}}}
=\displaystyle= (R1r1)(Rnrn).\displaystyle\mathbb{P}({{R}_{1}}\leq{{r}_{1}})\ldots\mathbb{P}({{R}_{n}}\leq{{r}_{n}}).

We now give the proof of the proposition stated in Section 2. By the corollary in Section 2 (taking into account the remark following it), we only need to find c>0, r>2, and a locally bounded function \psi on (0,\infty) such that

In(a,c)=a(Xs|1>c,Xs|2>c,,Xs|n>c)ψ(a)rnn1I_{n}(a,c)=\mathbb{P}_{a}(X_{s|1}>c,\,X_{s|2}>c,\ldots,X_{s|n}>c)\leq\psi(a)r^{-n}\ \ \ \forall\,n\geq 1

where s𝕋s\in\partial\mathbb{T} is an arbitrary path. For simplicity, we write the sequences {Xs|j}j0\{X_{s|j}\}_{j\geq 0} and {Rs|j}j1\{R_{s|j}\}_{j\geq 1} as {Xj}j0\{X_{j}\}_{j\geq 0} and {Rj}j1\{R_{j}\}_{j\geq 1}, respectively. Recall that p(1,dx)p(1,dx) is the distribution of R1R_{1}. Define I0=1I_{0}=1. For n1n\geq 1, we have

In(a,c)\displaystyle{{I}_{n}}(a,c) =\displaystyle= a(aR1>c,,aR1R2Rn>c)\displaystyle{{\mathbb{P}}_{a}}\left(a{{R}_{1}}>c,\ldots,a{{R}_{1}}{{R}_{2}}...{{R}_{n}}>c\right)
=\displaystyle= a(R1>ca,aR2>cR1,,aR2Rn>cR1)\displaystyle{{\mathbb{P}}_{a}}\left({{R}_{1}}>\frac{c}{a},a{{R}_{2}}>\frac{c}{{{R}_{1}}},\ldots,a{{R}_{2}}...{{R}_{n}}>\frac{c}{{{R}_{1}}}\right)
=\displaystyle= c/aa(aR2>cx,,aR2Rn>cx)p(1,dx)\displaystyle\int_{c/a}^{\infty}{{{\mathbb{P}}_{a}}\left(a{{R}_{2}}>\frac{c}{x},\ldots,a{{R}_{2}}...{{R}_{n}}>\frac{c}{x}\right)p(1,dx)}
=\displaystyle= c/aIn1(a,c/x)p(1,dx).\displaystyle\int_{c/a}^{\infty}{{{I}_{n-1}}(a,c/x)p(1,dx)}.

Therefore,

In(a,t)=t/aIn1(a,t/x)p(1,dx)a,tS.{{I}_{n}}(a,t)=\int_{t/a}^{\infty}{{{I}_{n-1}}(a,t/x)p(1,dx)}\ \ \ \forall a,t\in S.

Next, we show by induction that

In(a,t)(at)brna,tS,n1I_{n}(a,t)\leq\left(\frac{a}{t}\right)^{b}r^{-n}\ \ \ \forall a,t\in S,\,n\geq 1 (7.1)

where r=1/𝔼[R1b]>2r=1/\mathbb{E}[R_{1}^{b}]>2. For n=1n=1,

I1(a,t)=t/ap(1,dx)(at)bt/axbp(1,dx)(at)b𝔼[R1b]=(at)br1.{{I}_{1}}(a,t)=\int_{t/a}^{\infty}{p(1,dx)}\leq{{\left(\frac{a}{t}\right)}^{b}}\int_{t/a}^{\infty}{{{x}^{b}}p(1,dx)}\leq{{\left(\frac{a}{t}\right)}^{b}}\mathbb{E}[{R_{1}^{b}}]={{\left(\frac{a}{t}\right)}^{b}}{{r}^{-1}}.

Suppose (7.1) is true for some n1n\geq 1. Then

In+1(a,t)t/a(axt)brnp(1,dx)(at)brn𝔼[R1b]=(at)brn1.{{I}_{n+1}}(a,t)\leq\int_{t/a}^{\infty}{{{\left(\frac{ax}{t}\right)}^{b}}{{r}^{-n}}p(1,dx)}\leq{{\left(\frac{a}{t}\right)}^{b}}{{r}^{-n}}\mathbb{E}[{R_{1}^{b}}]={{\left(\frac{a}{t}\right)}^{b}}{{r}^{-n-1}}.

Therefore, (7.1) is also true for n+1n+1. We can now choose c=1c=1 and ψ(a)=ab\psi(a)=a^{b}.
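As a quick numerical illustration of the geometric decay (7.1) (a sketch only, not part of the proof), one may take the ratios R_j to be uniform on (0,1), so that E[R_1^b] = 1/3 < 1/2 with b = 2, and estimate I_n(a,1) by Monte Carlo; the sample size and the values of a and n below are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)
a, b, n, samples = 2.0, 2.0, 6, 200_000
r = 3.0                                   # r = 1/E[R_1^b] = 3 for uniform(0,1) ratios and b = 2
R = rng.uniform(size=(samples, n))        # i.i.d. ratios R_1, ..., R_n along a fixed path
X = a * np.cumprod(R, axis=1)             # X_j = a R_1 ... R_j
I_n = np.mean(np.all(X > 1.0, axis=1))    # Monte Carlo estimate of I_n(a, 1)
print(I_n, a**b * r**(-n))                # the estimate should fall below the bound a^b r^{-n}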

8 Proof of Section 2

Starting from the root θ\theta, we construct a random path in 𝕋\partial\mathbb{T} by recursively annexing one vertex at a time as follows. Suppose v𝕋v\in\mathbb{T} is the most recently annexed vertex. If λ(Xv1)λ(Xv2)\lambda(X_{v*1})\geq\lambda(X_{v*2}) then v1v*1 is the next to be annexed. Otherwise, v2v*2 is the next to be annexed. This random path, which we denote by ss, is a path with stepwise maximal intensities: at every branching step, the path follows the branch that has a larger intensity.

Figure 3: Path with stepwise maximal intensities.

For n0n\geq 0, let Zn=λ(Xs|n)Z_{n}=\lambda(X_{s|n}). Then

ζζ:=j=0Ts|jλ(Xs|j)=j=0Ts|jZj.\zeta\leq{{\zeta}_{*}}:=\sum\limits_{j=0}^{\infty}{\frac{{{T}_{s|j}}}{\lambda({{X}_{s|j}})}}=\sum\limits_{j=0}^{\infty}{\frac{{{T}_{s|j}}}{{{Z}_{j}}}}.

By the independence of the clocks and the intensities,

𝔼aζ∗=j=0𝔼a[Zj1].{{\mathbb{E}}_{a}}\zeta_{*}=\sum\limits_{j=0}^{\infty}{\mathbb{E}_{a}[Z_{j}^{-1}]}. (8.1)

We only need to show that

𝔼a[Zj1]κjλ(a)1j,aS.\mathbb{E}_{a}[Z_{j}^{-1}]\leq\kappa^{j}\lambda(a)^{-1}\ \ \ \forall\,j\in\mathbb{N},\,a\in S. (8.2)

Indeed, once (8.2) is proved, one can infer from (8.1) that

𝔼aζ𝔼aζ∗j=0κjλ(a)=1λ(a)11κ<{{\mathbb{E}}_{a}}\zeta\leq{{\mathbb{E}}_{a}}\zeta_{*}\leq\sum\limits_{j=0}^{\infty}{\frac{{{\kappa}^{j}}}{\lambda(a)}}=\frac{1}{\lambda(a)}\frac{1}{1-\kappa}<\infty

which leads to ζ<\zeta<\infty a.s. Next, to show (8.2), it suffices to show that

𝔼a[Zj+11]κ𝔼a[Zj1]j0,aS.\mathbb{E}_{a}[Z_{j+1}^{-1}]\leq\kappa\mathbb{E}_{a}[Z_{j}^{-1}]\ \ \ \forall\,j\geq 0,\,a\in S. (8.3)

Fix j0j\geq 0 and put v=s|j𝕋v=s|j\in\mathbb{T}. Then Zj=λ(Xv)Z_{j}=\lambda(X_{v}) and Zj+1=max{λ(Xv1),λ(Xv2)}Z_{j+1}=\max\{\lambda(X_{v*1}),\lambda(X_{v*2})\}. Because the joint distribution of (Xv1,Xv2)(X_{v*1},X_{v*2}) given XvX_{v} is the same as the joint distribution of (X1,X2)(X_{1},X_{2}) given XθX_{\theta},

𝔼a[Zj+11|Xv=x]\displaystyle\mathbb{E}_{a}[Z_{j+1}^{-1}|{{X}_{v}}=x] =\displaystyle= 𝔼a[1max{λ(Xv1),λ(Xv2)}|Xv=x]\displaystyle\mathbb{E}_{a}\left[\frac{1}{\max\left\{\lambda({{X}_{v*1}}),\lambda({{X}_{v*2}})\right\}}|{{X}_{v}}=x\right]
=\displaystyle= 𝔼[1max{λ(X1),λ(X2)}|Xθ=x]\displaystyle\mathbb{E}\left[\frac{1}{\max\left\{\lambda({{X}_{1}}),\lambda({{X}_{2}})\right\}}|{{X}_{\theta}}=x\right]
=\displaystyle= 𝔼x[Z11]κλ(x)1.\displaystyle\mathbb{E}_{x}[Z_{1}^{-1}]\leq\kappa\lambda(x)^{-1}.

Thus, 𝔼[Zj+11|Xv]κλ(Xv)1=κZj1\mathbb{E}[Z_{j+1}^{-1}|X_{v}]\leq\kappa\lambda(X_{v})^{-1}=\kappa Z_{j}^{-1}. The proof of (8.3) is then completed by the law of total expectation,

𝔼a[Zj+11]=𝔼a[𝔼[Zj+11|Xv]]κ𝔼a[Zj1].\mathbb{E}_{a}[Z_{j+1}^{-1}]=\mathbb{E}_{a}[\mathbb{E}[Z_{j+1}^{-1}|{{X}_{v}}]]\leq\kappa\mathbb{E}_{a}[Z_{j}^{-1}].

9 Proof of Section 2

With the initial state Xθ=xSX_{\theta}=x\in S, the explosion time can be expressed as ζ=Tθλ(x)1+min{ζ(1),ζ(2)}\zeta=T_{\theta}\lambda(x)^{-1}+\min\left\{{{\zeta}^{(1)}},{{\zeta}^{(2)}}\right\} where ζ(k)\zeta^{(k)} is the explosion time of the sub-cascade starting at vertex k{1,2}k\in\{1,2\}. Since TθT_{\theta} is a.s. finite, conditioning on (X1,X2)(X_{1},X_{2}) and using the conditional independence of the two sub-cascades, we obtain

f(x):=x(ζ=)\displaystyle f(x):={{\mathbb{P}}_{x}}(\zeta=\infty) =\displaystyle= 𝔼x[𝔼[𝟙ζ(1)=𝟙ζ(2)=|X1,X2]]\displaystyle{{\mathbb{E}}_{x}}[\mathbb{E}[{\mathbbm{1}_{{{\zeta}^{(1)}}=\infty}}{\mathbbm{1}_{{{\zeta}^{(2)}}=\infty}}|{{X}_{1}},{{X}_{2}}]] (9.1)
=\displaystyle= 𝔼x[𝔼[𝟙ζ(1)=|X1]𝔼[𝟙ζ(2)=|X2]]\displaystyle{{\mathbb{E}}_{x}}[\mathbb{E}[{\mathbbm{1}_{{{\zeta}^{(1)}}=\infty}}|{{X}_{1}}]\mathbb{E}[{\mathbbm{1}_{{{\zeta}^{(2)}}=\infty}}|{{X}_{2}}]]
=\displaystyle= 𝔼x[f(X1)f(X2)].\displaystyle{{\mathbb{E}}_{x}}[f(X_{1})f(X_{2})].

By Section 7, we can express XvX_{v}, v𝕋v\in\mathbb{T}, as a product of i.i.d. ratios Xv=xRv|1Rv||v|X_{v}=xR_{v|1}\ldots R_{v||v|}. Let us first assume that condition (i) is satisfied. We use the multiplicativity of λ\lambda to rewrite the explosion time as ζ=λ(x)1ζ~\zeta=\lambda(x)^{-1}\widetilde{\zeta} where

ζ~=sup𝑛min|v|=nj=0nTv|jλ(Rv|1)λ(Rv|j).\widetilde{\zeta}=\underset{n}{\mathop{\sup}}\,\underset{|v|=n}{\mathop{\min}}\,\sum\limits_{j=0}^{n}{\frac{{{T}_{v|j}}}{\lambda({{R}_{v|1}})...\lambda({{R}_{v|j}})}}.

The event [ζ=][\zeta=\infty] is the same as the event [ζ~=][\widetilde{\zeta}=\infty], whose probability does not depend on xx. Thus, f(x)cf(x)\equiv c. By (9.1), c=c2c=c^{2}. Therefore, c{0,1}c\in\{0,1\}.

Next, assume that condition (iii) is satisfied. From (9.1), one has the estimate 2f(x)𝔼x[f(X1)2]+𝔼x[f(X2)2]2f(x)\leq\mathbb{E}_{x}[f(X_{1})^{2}]+\mathbb{E}_{x}[f(X_{2})^{2}]. Now note that along each path s𝕋s\in\partial\mathbb{T}, γ\gamma is a stationary distribution of the Markov chain {Xs|j}j0\{X_{s|j}\}_{j\geq 0}. By integrating the above inequality against γ(dx)\gamma(dx), one obtains

2𝔼γf(Xθ)𝔼γ[f(X1)2]+𝔼γ[f(X2)2]=2𝔼γ[f(Xθ)2]2\mathbb{E}_{\gamma}f({{X}_{\theta}})\leq\mathbb{E}_{\gamma}[f{{({{X}_{1}})}^{2}}]+\mathbb{E}_{\gamma}[f{{({{X}_{2}})}^{2}}]=2\mathbb{E}_{\gamma}[f{{({{X}_{\theta}})}^{2}}]

which implies f(x){0,1}f(x)\in\{0,1\} for γ\gamma-a.e. xSx\in S. Consider two cases.
\blacksquare f(a)=1f(a)=1 for some aSa\in S:
Using the inequality f(X1)f(X2)f(X1)f(X_{1})f(X_{2})\leq f(X_{1}), one has 1=f(a)𝔼a[f(X1)]=Sf(x)p(a,dx)1=f(a)\leq{{\mathbb{E}}_{a}}[f({{X}_{1}})]=\int_{S}{f(x)p(a,dx)}. Thus, f(x)=1f(x)=1 for p(a,dx)p(a,dx)-a.e. xSx\in S. Because γ(dx)p(a,dx)\gamma(dx)\ll p(a,dx), f(x)=1f(x)=1 for γ\gamma-a.e. xSx\in S. Now take an arbitrary bSb\in S. Since p(b,dx)γ(dx)p(b,dx)\ll\gamma(dx), f(x)=1f(x)=1 for p(b,dx)p(b,dx)-a.e. xSx\in S. By the inequality (f(X1)1)(f(X2)1)0(f(X_{1})-1)(f(X_{2})-1)\geq 0, we have

f(b)=𝔼b[f(X1)f(X2)]𝔼b[f(X1)]+𝔼b[f(X2)]1=2Sf(x)p(b,dx)1=1.f(b)={{\mathbb{E}}_{b}}[f({{X}_{1}})f({{X}_{2}})]\geq{{\mathbb{E}}_{b}}[f({{X}_{1}})]+{{\mathbb{E}}_{b}}[f({{X}_{2}})]-1=2\int_{S}{f(x)p(b,dx)}-1=1.

Therefore, f(b)=1f(b)=1.
\blacksquare f(x)<1f(x)<1 for all xSx\in S:
In this case, f(x)=0f(x)=0 for γ\gamma-a.e. xSx\in S. For any bSb\in S, f(x)=0f(x)=0 for p(b,dx)p(b,dx)-a.e. xSx\in S. By the inequality f(X1)f(X2)f(X1)f(X_{1})f(X_{2})\leq f(X_{1}), we have

f(b)=𝔼b[f(X1)f(X2)]𝔼b[f(X1)]=Sf(x)p(b,dx)=0.f(b)={{\mathbb{E}}_{b}}[f({{X}_{1}})f({{X}_{2}})]\leq{{\mathbb{E}}_{b}}[f({{X}_{1}})]=\int_{S}{f(x)p(b,dx)}=0.

Therefore, f(b)=0f(b)=0.

Finally, assume that condition (ii) is satisfied. Using the same technique as in Part (iii) and noting that γ\gamma is fully supported on SS in the discrete topology, one has f(x){0,1}f(x)\in\{0,1\} for all xSx\in S. Let A={xS:f(x)=1}A=\{x\in S:\,f(x)=1\}. If A=A=\emptyset then f(x)0f(x)\equiv 0. Otherwise, for any aAa\in A, we have 1=f(a)𝔼a[f(X1)]=p(a,A)1=f(a)\leq\mathbb{E}_{a}[f(X_{1})]=p(a,A), which leads to p(a,A)=1p(a,A)=1. This implies that AA contains all the states that can be reached from an element of AA in one step. By the irreducibility of the Markov chain, A=SA=S.

10 DSY cascades resulting from probabilistic models

In this section, we give four examples of DSY cascades motivated by probabilistic considerations. The first three are non-explosive, while the fourth can be explosive or non-explosive depending on its parameters.

10.1 A pure birth process

The classical Yule process is a pure birth process starting from a single progenitor in which each particle survives for an exponentially distributed time with mean λ1\lambda^{-1} (a deterministic constant) before being replaced by two offspring, each evolving independently in the same manner. The population is finite at every finite time (i.e. non-explosive) if and only if

ζ=limnmin|v|=n1λj=0nTv|j=a.s.\zeta=\underset{n\to\infty}{\mathop{\lim}}\,\underset{|v|=n}{\mathop{\min}}\,\frac{1}{\lambda}\sum\limits_{j=0}^{n}T_{v|j}=\infty~{}~{}~{}\text{a.s.} (10.1)

The classical Yule cascades are the simplest case of DSY cascades where λ(x)λ\lambda(x)\equiv\lambda. The branching Markov chain can be chosen arbitrarily. We can of course choose Xv1X_{v}\equiv 1 (deterministic). It is well-known that the classical Yule cascades are non-explosive. One can apply Section 2 with the choice c=2λc=2\lambda to obtain an alternative proof by cut-set theory. In fact, Section 5 implies the non-explosion of a more general pure birth process:

Proposition \theprop.

Let μ\mu be a positive number. Consider a branching process starting with a single progenitor in which a particle, independently of all others, survives for a mean-one exponentially distributed time before being replaced by a random number of offspring. The offspring distributions are not necessarily the same across all the particles. Suppose that the expected number of offspring of each particle is less than μ\mu. Then the total population is finite at any finite time.

In Section 10.1, if the offspring distributions are the same across all the particles then the pure birth process is a continuous-time Galton-Watson process in which the offspring distribution has a finite expected value. The non-explosion problem for the case of infinite expected offspring number was studied in [amini2013].

10.2 A mean-field cascade (dependent first passage percolation on a tree)

Consider a DSY cascade {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} on the state space S=(0,)S=(0,\infty) such that along each path s𝕋s\in\partial\mathbb{T}, {Xs|j}j1\{X_{s|j}\}_{j\geq 1} is an i.i.d. sequence of random variables. The branching Markov chain {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} clearly has properties (A), (B), (C), (D). Choose c>0c>0 sufficiently large such that (X1>c)<1/2\mathbb{P}(X_{1}>c)<1/2. Then

In(a,c)=a(Xs|1>c,Xs|2>c,,Xs|n>c)=(X1>c)n.I_{n}(a,c)=\mathbb{P}_{a}(X_{s|1}>c,X_{s|2}>c,\ldots,X_{s|n}>c)=\mathbb{P}(X_{1}>c)^{n}.

Therefore, the condition (E) holds for ψ1\psi\equiv 1 and r=1/(X1>c)r=1/\mathbb{P}(X_{1}>c). By Section 2, the cascade is non-explosive.

10.3 A cascade with geometric-like sequence along each path

Consider a DSY cascade {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} on the state space S=(0,)S=(0,\infty) in which the branching Markov chain has properties (A), (B), (C) and the transition probability density is given by

p(x,y)=ey1e2x𝟙0<y<2x.p(x,y)=\frac{e^{-y}}{1-e^{-2x}}\mathbbm{1}_{0<y<2x}.

Intuitively, Xv1X_{v*1} can be as large as 2Xv2X_{v} but with a small probability. This allows the sequence {Xs|j}j0\{X_{s|j}\}_{j\geq 0} to behave like a geometric sequence up to any prescribed index. It is the geometric growth of intensities along each path that causes the explosion in the α\alpha-Riccati equation for α>1\alpha>1 (Section 11.1). We show that the present cascade is in fact non-explosive. Denote

In(a)=a(Xs|1,Xs|2,,Xs|n>1)=11p(a,y1)p(y1,y2)p(yn1,yn)𝑑yn𝑑y1.{{I}_{n}}(a)={{\mathbb{P}}_{a}}({{X}_{s|1}},{{X}_{s|2}},\ldots,{{X}_{s|n}}>1)=\int_{1}^{\infty}{\ldots\int_{1}^{\infty}{p}(a,{{y}_{1}})p({{y}_{1}},{{y}_{2}})\ldots p({{y}_{n-1}},{{y}_{n}})d{{y}_{n}}\ldots d{{y}_{1}}}.

If a1/2a\leq 1/2 then p(a,y1)=0p(a,y_{1})=0 for all y1>12ay_{1}>1\geq 2a. In this case, In(a)=0I_{n}(a)=0. If a>1/2a>1/2 then

In(a)=12a12yn1ey11e2aey21e2y1eyn1e2yn1𝑑yn𝑑y1.{{I}_{n}}(a)=\int_{1}^{2a}{\ldots\int_{1}^{2{{y}_{n-1}}}{\frac{{{e}^{-{{y}_{1}}}}}{1-{{e}^{-2a}}}\frac{{{e}^{-{{y}_{2}}}}}{1-{{e}^{-2{{y}_{1}}}}}\ldots\frac{{{e}^{-{{y}_{n}}}}}{1-{{e}^{-2{{y}_{n-1}}}}}d{{y}_{n}}\ldots d{{y}_{1}}}}.

Because

12yn1eyn1e2yn1𝑑yn=e1e2yn11e2yn1<e1,\int_{1}^{2{{y}_{n-1}}}{\frac{{{e}^{-{{y}_{n}}}}}{1-{{e}^{-2{{y}_{n-1}}}}}d{{y}_{n}}}=\frac{{{e}^{-1}}-{{e}^{-2{{y}_{n-1}}}}}{1-{{e}^{-2{{y}_{n-1}}}}}<{{e}^{-1}},

we have In(a)e1In1(a)I_{n}(a)\leq e^{-1}I_{n-1}(a). Using this inequality repeatedly, we get

In(a)en+1I1(a)=en+1e1e2a1e2a<ena>0,n.{{I}_{n}}(a)\leq{{e}^{-n+1}}{{I}_{1}}(a)={{e}^{-n+1}}\frac{{{e}^{-1}}-{{e}^{-2a}}}{1-{{e}^{-2a}}}<{{e}^{-n}}\ \ \ \forall a>0,\,n\in\mathbb{N}.

Therefore, the condition (E) holds for ψ1\psi\equiv 1 and r=er=e. By Section 2, the cascade is non-explosive.
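As a numerical aside (a sketch only; the initial state, depth, and sample size are arbitrary choices), the bound I_n(a) < e^{-n} can be checked by Monte Carlo: the transition density p(x,·) is an exponential density truncated to (0,2x), which can be sampled by inversion.

import numpy as np

rng = np.random.default_rng(1)
a, n, samples = 2.0, 5, 1_000_000
X = np.full(samples, a)
alive = np.ones(samples, dtype=bool)                   # paths with X_{s|1}, ..., X_{s|j} > 1 so far
for _ in range(n):
    U = rng.uniform(size=samples)
    X = -np.log(1.0 - U * (1.0 - np.exp(-2.0 * X)))    # inverse-CDF sample from p(x, .) on (0, 2x)
    alive &= (X > 1.0)
print(alive.mean(), np.exp(-n))                        # estimate of I_n(a) versus e^{-n}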

10.4 Birth-death branching Markov chain

Consider a DSY cascade {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} on the state space S=S=\mathbb{N} in which the branching Markov chain {Xv}v𝕋\{X_{v}\}_{v\in\mathbb{T}} has properties (A) and (B) with the transition probabilities given by

(Xv1=j+1|Xv=j)=(Xv2=j+1|Xv=j)=βj,\mathbb{P}(X_{v*1}=j+1\,|\,X_{v}=j)=\mathbb{P}(X_{v*2}=j+1\,|\,X_{v}=j)=\beta_{j},
(Xv1=j1|Xv=j)=(Xv2=j1|Xv=j)=δj,\mathbb{P}(X_{v*1}=j-1\,|\,X_{v}=j)=\mathbb{P}(X_{v*2}=j-1\,|\,X_{v}=j)=\delta_{j},

where β1=1\beta_{1}=1, and δj=1βj(0,1)\delta_{j}=1-\beta_{j}\in(0,1) for j=2,3,j=2,3,\dots Along each path s𝕋s\in\partial\mathbb{T}, the sequence Xs|0X_{s|0}, Xs|1X_{s|1}, Xs|2X_{s|2}, …is the birth-death process on SS with reflection at 11 and birth-death rates βj,δj\beta_{j},\,\delta_{j}. This is an ergodic time-reversible Markov chain (see [RB_EW2009], Theorem 3.1(b), p. 241) with the invariant probability

γj=β2βj1δ2δjγ1,j=2,3,\gamma_{j}=\frac{\beta_{2}\cdots\beta_{j-1}}{\delta_{2}\cdots\delta_{j}}\gamma_{1},\ \ \ j=2,3,\dots (10.2)

provided that

j=2β2βj1δ2δj<.\sum_{j=2}^{\infty}\frac{\beta_{2}\cdots\beta_{j-1}}{\delta_{2}\cdots\delta_{j}}<\infty.

In particular, this is the case when δ2=δ3=δ4==δ(1/2,1)\delta_{2}=\delta_{3}=\delta_{4}=\cdots=\delta\in(1/2,1). Since each state is visited infinitely often, the pathwise total waiting time is infinite:

j=0Ts|jλ(Xs|j)=a.s.\sum_{j=0}^{\infty}\frac{T_{s|j}}{\lambda(X_{s|j})}=\infty\ \ \ \text{a.s.}

However, it will be shown below that the cascade can still be explosive.

Proposition \theprop.

Let δ2=δ3==δ(0,12)\delta_{2}=\delta_{3}=\cdots=\delta\in(0,\frac{1}{\sqrt{2}}) and λ(k)=bk\lambda(k)=b^{k} where b>1b>1. Suppose that for each v𝕋v\in\mathbb{T}, Xv1X_{v*1} and Xv2X_{v*2} are conditionally independent given XvX_{v}. Then the cascade is a.s. explosive for any initial state Xθ=aX_{\theta}=a\in\mathbb{N}.

Proof.

One can see that condition (D) is satisfied. Put Z=max{λ(X1),λ(X2)}Z=\max\{\lambda(X_{1}),\lambda(X_{2})\}. Then Z1=min{bX1,bX2}Z^{-1}=\min\{b^{-X_{1}},b^{-X_{2}}\}. First, consider the case b<δ21b<\delta^{-2}-1. Put

c=max{1b,bδ2+1δ2b}<1.c=\max\left\{\frac{1}{b},\,b\delta^{2}+\frac{1-\delta^{2}}{b}\right\}<1.

By Section 2, it suffices to show that 𝔼a[Z1]cba\mathbb{E}_{a}[Z^{-1}]\leq cb^{-a} for all aa\in\mathbb{N}. For a=1a=1, 𝔼1[Z1]=b2cb1\mathbb{E}_{1}[Z^{-1}]=b^{-2}\leq cb^{-1}. For a2a\geq 2,

𝔼a[Z1]\displaystyle{{\mathbb{E}}_{a}}[{{Z}^{-1}}] =\displaystyle= 1ba1a(Z=ba1)+1ba+1a(Z=ba+1)\displaystyle\frac{1}{{{b}^{a-1}}}{{\mathbb{P}}_{a}}(Z={{b}^{a-1}})+\frac{1}{{{b}^{a+1}}}{{\mathbb{P}}_{a}}(Z={{b}^{a+1}})
=\displaystyle= 1ba1a(X1=X2=a1)+1ba+1a(X1=a+1 or X2=a+1)\displaystyle\frac{1}{{{b}^{a-1}}}{{\mathbb{P}}_{a}}({{X}_{1}}={{X}_{2}}=a-1)+\frac{1}{{{b}^{a+1}}}{{\mathbb{P}}_{a}}({{X}_{1}}=a+1\text{\ \ or\ \ }{{X}_{2}}=a+1)
=\displaystyle= 1ba1δ2+1ba+1(1δ2)cba.\displaystyle\frac{1}{{{b}^{a-1}}}{{\delta}^{2}}+\frac{1}{{{b}^{a+1}}}(1-{{\delta}^{2}})\leq\frac{c}{{{b}^{a}}}.

By Section 2, 𝔼aζ<\mathbb{E}_{a}\zeta<\infty and the cascade is a.s. explosive in this case. Next, we consider the case bδ21b\geq\delta^{-2}-1. Take 1<q<δ211<q<\delta^{-2}-1 and denote

ζq=supn0min|v|=nj=0nqXv|jTv|j.\zeta_{q}=\sup_{n\geq 0}\min_{|v|=n}\sum_{j=0}^{n}q^{-X_{v|j}}T_{v|j}.

We proved in the first case that 𝔼a[ζq]<\mathbb{E}_{a}[\zeta_{q}]<\infty. Observe that ζζq\zeta\leq\zeta_{q} and thus, 𝔼aζ𝔼aζq<\mathbb{E}_{a}\zeta\leq\mathbb{E}_{a}\zeta_{q}<\infty. This implies ζ<\zeta<\infty a.s. ∎
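The case b < δ^{-2}−1 can also be illustrated numerically (a sketch only; the values b = 1.5, δ = 0.5, the truncation depth, and the number of runs are arbitrary choices satisfying b < δ^{-2}−1). The simulation follows the greedy path of Section 8 and compares the sample mean of the truncated series Σ_j T_{s|j}/Z_j with the bound 1/(λ(a)(1−c)).

import numpy as np

rng = np.random.default_rng(2)
b, delta, a, depth, runs = 1.5, 0.5, 1, 400, 5_000
c = max(1.0 / b, b * delta**2 + (1.0 - delta**2) / b)      # the constant c < 1 from the proof
est = np.zeros(runs)
for k in range(runs):
    x, total = a, 0.0
    for _ in range(depth):
        total += rng.exponential() / b**x                   # T_{s|j} / lambda(X_{s|j})
        if x == 1:
            kids = (2, 2)                                    # beta_1 = 1: both children move up
        else:
            kids = [x + 1 if rng.random() < 1.0 - delta else x - 1 for _ in range(2)]
        x = max(kids)                                        # greedy step: follow the larger intensity
    est[k] = total
print(est.mean(), 1.0 / (b**a * (1.0 - c)))                  # sample mean versus the bound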

Proposition \theprop.

Let δ2=δ3==δ(2+34,1)\delta_{2}=\delta_{3}=\cdots=\delta\in(\frac{2+\sqrt{3}}{4},1). Then for any function λ:(0,)\lambda:\mathbb{N}\to(0,\infty), a DSY cascade {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} with property (C) is non-explosive for any initial state Xθ=aX_{\theta}=a\in\mathbb{N}.

Proof.

Any function from \mathbb{N} to (0,)(0,\infty) is locally bounded. By Section 2, we only need to find r>2r>2 and a function ψ:(0,)\psi:\mathbb{N}\to(0,\infty) such that

a(Xs|1,Xs|2,,Xs|n>1)ψ(a)rnn,a.\mathbb{P}_{a}(X_{s|1},X_{s|2},\ldots,X_{s|n}>1)\leq\psi(a)r^{-n}\ \ \ \forall n,a\in\mathbb{N}.

As shown below, one can choose r=12δ(1δ)>2r=\frac{1}{2\sqrt{\delta(1-\delta)}}>2 and ψ(a)=a(δ/β)(a1)/212βδ2βδ\psi(a)=\frac{a(\delta/\beta)^{(a-1)/2}}{1-2\sqrt{\beta\delta}}2\sqrt{\beta\delta}. Fix a path s𝕋s\in\partial\mathbb{T} and write X0,X1,X2,X_{0},X_{1},X_{2},\ldots in lieu of Xs|0,Xs|1,Xs|2,X_{s|0},X_{s|1},X_{s|2},\ldots Denote

τ=min{k1:Xk=1}.\tau=\min\{k\geq 1:\ X_{k}=1\}.

With the initial state X0=aX_{0}=a\in\mathbb{N}, τ\tau can be viewed as the first passage time of a simple random walk on \mathbb{Z} with probability of going to the left δ\delta, and probability of going to the right β=1δ\beta=1-\delta. One has the estimates ([RB_EW2009, p. 11])

a(τ=N)\displaystyle{{\mathbb{P}}_{a}}(\tau=N) =\displaystyle= aN(NN+1a2)βN+1a2δN+a12\displaystyle\frac{a}{N}\left(\begin{matrix}N\\ \frac{N+1-a}{2}\\ \end{matrix}\right){{\beta}^{\frac{N+1-a}{2}}}{{\delta}^{\frac{N+a-1}{2}}}
\displaystyle\leq a(δβ)(a1)/21N(NN)(βδ)N/2\displaystyle a{{\left(\frac{\delta}{\beta}\right)}^{(a-1)/2}}\frac{1}{N}\left(\begin{matrix}N\\ N^{\prime}\\ \end{matrix}\right){{(\beta\delta)}^{N/2}}

with N=[N/2]N^{\prime}=[N/2] and with the convention that (Nk)=0\left(\begin{matrix}N\\ k\\ \end{matrix}\right)=0 if k<0k<0 or kk is not an integer. Because δ>1/2\delta>1/2, the Markov chain is recurrent on \mathbb{N}. Thus, a(τ=)=0\mathbb{P}_{a}(\tau=\infty)=0. It is known that (NN)2NN1/2\left(\begin{matrix}N\\ N^{\prime}\\ \end{matrix}\right)\lesssim{{2}^{N}}{{N}^{-1/2}} as a consequence of Stirling’s formula k!=2πkkkek(1+o(1))k!=\sqrt{2\pi k}k^{k}e^{-k}(1+\text{o}(1)). Thus,

a(τ=N)a(δβ)(a1)/22NN3/2(βδ)N/2a(δβ)a/2(2βδ)N.{{\mathbb{P}}_{a}}(\tau=N)\lesssim a{{\left(\frac{\delta}{\beta}\right)}^{(a-1)/2}}\frac{{{2}^{N}}}{{{N}^{3/2}}}{{(\beta\delta)}^{N/2}}\lesssim a{{\left(\frac{\delta}{\beta}\right)}^{a/2}}{{(2\sqrt{\beta\delta})}^{N}}.

We obtain

a(X1,X2,,Xn>1)\displaystyle{{\mathbb{P}}_{a}}({{X}_{1}},{{X}_{2}},...,{{X}_{n}}>1) =\displaystyle= a(τ>n)\displaystyle{{\mathbb{P}}_{a}}(\tau>n)
\displaystyle\lesssim a(δβ)a/2k=n+1(2βδ)k\displaystyle a{{\left(\frac{\delta}{\beta}\right)}^{a/2}}\sum\limits_{k=n+1}^{\infty}{{{(2\sqrt{\beta\delta})}^{k}}}
\displaystyle\leq ψ(a)rn.\displaystyle\psi(a){{r}^{-n}}.

Remark \therem.

Section 10.4 and Section 10.4 do not give a conclusion about the explosion or non-explosion in the case δ2=δ3==δ[12,2+34]\delta_{2}=\delta_{3}=\cdots=\delta\in[\frac{1}{\sqrt{2}},\frac{2+\sqrt{3}}{4}] and λ(k)=bk\lambda(k)=b^{k}, b>1b>1. However, by Section 2 (ii), we know that it must be a 010-1 event.

11 DSY cascades resulting from differential equations

Our first example serves as a precursor to the general method of associating a branching cascade structure to a quasilinear evolutionary partial differential equation. This method goes back to McKean’s treatment for the Fisher-KPP equation in the physical space [mckean, MB1978] and Le Jan and Sznitman’s treatment for the Navier-Stokes equations in the Fourier space [lejan].

11.1 The α\alpha-Riccati equation

For α>0\alpha>0, consider the α\alpha-Riccati equation

u(t)=u(t)+u2(αt),u(0)=u0.u^{\prime}(t)=-u(t)+u^{2}(\alpha t),\quad u(0)=u_{0}. (11.1)

This equation can be viewed as a toy model for the self-similar Navier-Stokes equations (see [alphariccati]). It also arises in purely probabilistic models (see [athreya, DA_PS_1988]). After rewriting the equation in the mild formulation

u(t)=u0et+0tesu2(α(ts))𝑑s,u(t)=u_{0}e^{-t}+\int_{0}^{t}e^{-s}u^{2}({\alpha(t-s)})\,ds,

we can interpret uu as the expected value of a stochastic functional U(t)U(t) defined implicitly by

U(t)=u0 1Tθt+U(1)(α(tTθ))U(2)(α(tTθ)) 1Tθ<t,U(t)=u_{0}\,{\mathds{1}_{T_{\theta}\geq t}}+U^{(1)}\left(\alpha(t-T_{\theta})\right)\,U^{(2)}\left(\alpha(t-T_{\theta})\right)\,{\mathds{1}_{T_{\theta}<t}},

where U(1)U^{(1)} and U(2)U^{(2)} are two independent copies of UU. Thus, the stochastic functional U(t)U(t) is defined over the stochastic structure {α|v|Tv}v𝕋\left\{{\alpha^{-|v|}}{T_{v}}\right\}_{v\in\mathbb{T}}. In the notation of the present paper, this is a self-similar DSY cascade with λ(x)=x\lambda(x)=x and Xv=α|v|X_{v}=\alpha^{|v|} (deterministic). The explosion problem of the α\alpha-Riccati equation, especially in connection with the existence and uniqueness of solutions, was studied in detail in [athreya, alphariccati]. The cascade is known to be explosive if and only if α>1\alpha>1. Section 2 provides an alternate justification. Indeed, the ratios along each path are Rs|j=Xs|j/Xs|j1R_{s|j}={X_{s|j}}/{X_{s|j-1}}=\alpha. With Xθ=a>0X_{\theta}=a>0, Z=max{X1,X2}=αaZ=\max\{X_{1},X_{2}\}=\alpha a. We have

𝔼a[Z1]=1αa=ca\mathbb{E}_{a}[Z^{-1}]=\frac{1}{\alpha a}=\frac{c}{a}

with c=1/α<1c=1/\alpha<1. The non-explosion of the cascade of the α\alpha-Riccati equation for α1\alpha\leq 1 can be inferred from the non-explosion of the standard Yule cascade (see [complexburgers, yule]). If α<1\alpha<1, Section 2 provides an alternative justification. Indeed, with b>0b>0 sufficiently large, 𝔼[R1b]=αb<1/2\mathbb{E}[R_{1}^{b}]=\alpha^{b}<1/2. Note that the case α=1\alpha=1 results in a classical Yule process, which is non-explosive.
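A minimal numerical sketch of this dichotomy (not part of the argument; the depths shown are arbitrary): since the chain X_v = α^{|v|} is deterministic, the quantity min_{|v|=n} Σ_{j=0}^{n} α^{-j} T_{v|j} can be computed exactly over all 2^n vertices of generation n, and one can watch whether it stabilizes (explosion, α > 1) or keeps growing (non-explosion, α ≤ 1).

import numpy as np

rng = np.random.default_rng(3)

def min_path_sum(alpha, depth):
    # minimal cumulative waiting time over all 2^depth paths down to generation `depth`
    sums = np.array([rng.exponential()])          # root term T_theta / alpha^0
    for j in range(1, depth + 1):
        sums = np.repeat(sums, 2)                  # every vertex has two children
        sums += rng.exponential(size=sums.size) / alpha**j
    return sums.min()

for alpha in (0.5, 1.0, 2.0):
    print(alpha, [round(float(min_path_sum(alpha, d)), 3) for d in (5, 10, 15, 20)])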

In the case α1\alpha\leq 1, the stochastic non-explosion was exploited in [complexburgers] to prove existence and uniqueness results as well as long-time behavior of solutions to (11.1). In the case α>1\alpha>1, the stochastic explosion was used to prove a nonuniqueness result in [athreya] for the case u0=0u_{0}=0 and [alphariccati] developed a framework for using stochastic explosion and distribution of branches of the underlying cascade to prove both non-uniqueness and finite-time blowup of the solutions for more general initial data.

DSY cascades associated with evolutionary PDEs in Fourier space

The next examples concern the DSY cascades originating from partial differential equations. A common feature of these equations is that they define a dissipative dynamical system in which, when formulated in the Fourier space, the linear term determines the intensity of the random waiting time and the quadratic nonlinear term leads to a random binary tree. In all of these examples, the equations can be written in the Fourier space as

u^(ξ,t)=eλ(ξ)tu^0(ξ)+0teλ(ξ)sλ(ξ)ρ(ξ)dB(u^(η,ts),u^(ξη,ts))𝑑η𝑑s\hat{u}(\xi,t)=e^{-\lambda(\xi)t}\hat{u}_{0}(\xi)+\int_{0}^{t}e^{-\lambda(\xi)s}\lambda(\xi)\rho(\xi)\int_{\mathbb{R}^{d}}B(\hat{u}(\eta,t-s),\hat{u}(\xi-\eta,t-s))d\eta ds (11.2)

where λ\lambda, ρ\rho are radially symmetric positive functions, and BB is a bilinear map. The functions λ\lambda, ρ\rho, BB will be determined by the specific PDE under consideration. A key step in the probabilistic reformulation of (11.2) is to find a function h(ξ)h(\xi) such that

H(η|ξ)=ρ(ξ)h(ξ)h(η)h(ξη)H(\eta|\xi)=\frac{\rho(\xi)}{h(\xi)}h(\eta)h(\xi-\eta) (11.3)

is a probability density function on d\mathbb{R}^{d}. Once hh is identified, we introduce the new unknown χ(ξ,t)=u^(ξ,t)/h(ξ)\chi(\xi,t)=\hat{u}(\xi,t)/h(\xi), which satisfies the normalized equation

χ(ξ,t)=eλ(ξ)tχ0(ξ)+0teλ(ξ)sλ(ξ)dB(χ(η,ts),χ(ξη,ts))H(η|ξ)𝑑η𝑑s.{\chi}(\xi,t)=e^{-\lambda(\xi)t}\chi_{0}(\xi)+\int_{0}^{t}e^{-\lambda(\xi)s}\lambda(\xi)\int_{\mathbb{R}^{d}}B(\chi(\eta,t-s),\chi(\xi-\eta,t-s))H(\eta|\xi)\;d\eta ds. (11.4)

Equation (11.4) leads to a family of wave numbers {Wv}v𝕋\{W_{v}\}_{v\in\mathbb{T}} satisfying Wθ=ξW_{\theta}=\xi, Wv1+Wv2=WvW_{v*1}+W_{v*2}=W_{v} for all v𝕋v\in\mathbb{T}, and conditionally given WvW_{v}, Wv1W_{v*1} and Wv2W_{v*2} are each distributed as H(|Wv)H(\cdot|W_{v}). For Xv=WvX_{v}=W_{v}, one gets a DSY cascade {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}}. In most cases, the holding times between branchings depend only on the magnitudes of the random wave numbers, which, in turn, have a well-behaved branching Markov structure. For example, for the Navier-Stokes equations, the choice of Xv=|Wv|X_{v}=|W_{v}| turns out to be more efficient than the choice of Xv=WvX_{v}=W_{v}.

Similarly to the α\alpha-Riccati equation (Section 11.1), the solution χ(ξ,t)\chi(\xi,t) can be expressed as the expected value of a “solution” stochastic functional 𝒳(ξ,t)\mathscr{X}(\xi,t) defined over the DSY cascade by

𝒳(ξ,t)={χ0(ξ)ifTθ/λ(ξ)tB(𝒳(1)(W1,tTθ),𝒳(2)(W2,tTθ))ifTθ/λ(ξ)<t\mathscr{X}(\xi,t)=\left\{\begin{array}[]{*{35}{r}}{{\chi}_{0}}(\xi)&\text{if}&{{T}_{\theta}}/\lambda(\xi)\geq t\\ B\left({\mathscr{X}^{(1)}}({{W}_{1}},t-{{T}_{\theta}}),{\mathscr{X}^{(2)}}({{W}_{2}},t-{{T}_{\theta}})\right)&\text{if}&{{T}_{\theta}}/\lambda(\xi)<t\\ \end{array}\right. (11.5)

where 𝒳(1)\mathscr{X}^{(1)} and 𝒳(2)\mathscr{X}^{(2)} are (conditionally) independent copies of 𝒳\mathscr{X}. The stochastic explosion or non-explosion of the associated DSY cascades has direct implications to the existence and uniqueness of global-in-time solutions of these equations [chaos, alphariccati, smallness].

We will illustrate the aforementioned generic scheme in greater detail for the Fisher-KPP equation (on the Fourier side) in the next subsection. Note that the stochastic structure on the Fourier side (namely, DSY cascade) is very different from that on the physical side (namely, branching Brownian motion) which was identified in [mckean, MB1978].

11.2 The Fourier-transformed KPP equation

The KPP equation introduced by Fisher, Kolmogorov, Petrovsky and Piskunov, also referred to as the F-KPP equation, has the general form

ut=Duxx+ru(1u)t>0,xu_{t}=Du_{xx}+ru(1-u)~{}~{}~{}\forall t>0,\,x\in\mathbb{R}

where DD and rr are positive constants. McKean constructed a cascade model for this equation in the physical domain [mckean, MB1978]. We now construct a cascade for this equation in the Fourier domain. By rescaling the time and space variables, we can assume that D=r=1D=r=1. Then by introducing v=1uv=1-u, we get

vt=vxx+v2vt>0,x.v_{t}=v_{xx}+v^{2}-v~{}~{}~{}\forall t>0,\,x\in\mathbb{R}.

Taking Fourier transform with respect to xx, we get

v^t=(1+ξ2)v^+12πv^v^.{{\hat{v}}_{t}}=-(1+{{\xi}^{2}})\hat{v}+\frac{1}{\sqrt{2\pi}}\hat{v}*\hat{v}.

In the integral form,

v^(ξ,t)=e(1+ξ2)tv^0(ξ)+0te(1+ξ2)sv^(η,ts)v^(ξη,ts)𝑑η𝑑s.\hat{v}(\xi,t)={{e}^{-(1+{{\xi}^{2}})t}}{{{\hat{v}}}_{0}}(\xi)+\int_{0}^{t}{\int_{-\infty}^{\infty}{{{e}^{-(1+{{\xi}^{2}})s}}\hat{v}(\eta,t-s)\hat{v}(\xi-\eta,t-s)d\eta ds}}.

We normalize v^\hat{v} by χ(ξ,t)=v^(ξ,t)h(ξ)\chi(\xi,t)=\frac{\hat{v}(\xi,t)}{h(\xi)} where hh is a positive function to be determined. The function χ\chi then satisfies the equation

χ(ξ,t)=e(1+ξ2)tχ0(ξ)+0t(1+ξ2)e(1+ξ2)sχ(η,ts)χ(ξη,ts)H(η|ξ)𝑑η𝑑s\chi(\xi,t)={{e}^{-(1+{{\xi}^{2}})t}}{{\chi}_{0}}(\xi)+\int_{0}^{t}{\int_{-\infty}^{\infty}{(1+{{\xi}^{2}}){{e}^{-(1+{{\xi}^{2}})s}}\chi(\eta,t-s)\chi(\xi-\eta,t-s)H(\eta|\xi)d\eta ds}} (11.6)

where H(η|ξ)=h(η)h(ξη)(1+ξ2)h(ξ)H(\eta|\xi)=\frac{h(\eta)h(\xi-\eta)}{(1+\xi^{2})h(\xi)}. For HH to be a probability density function, hh must satisfy the equation

hh=(1+ξ2)h.h*h=(1+\xi^{2})h.

The function w(x)=2πhˇ(x)w(x)=\sqrt{2\pi}\check{h}(x) satisfies w′′=ww2w^{\prime\prime}=w-w^{2}, which has a solution w(x)=31+coshxw(x)=\frac{3}{1+\cosh x}. It follows that

h(ξ)=3ξsinh(πξ).h(\xi)=\frac{3\xi}{\sinh(\pi\xi)}. (11.7)
Figure 4: Graph of h(ξ)=3ξ/sinh(πξ)h(\xi)=3\xi/\sinh(\pi\xi).
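As a numerical sanity check (a sketch only, not needed for the derivation), the convolution identity h*h = (1+ξ^2)h for the kernel (11.7) can be verified by direct quadrature on a truncated grid; the grid parameters below are arbitrary choices.

import numpy as np

def h(x):
    x = np.asarray(x, dtype=float)
    out = np.full(x.shape, 3.0 / np.pi)             # limiting value h(0) = 3/pi
    nz = x != 0
    out[nz] = 3.0 * x[nz] / np.sinh(np.pi * x[nz])
    return out

eta = np.linspace(-40.0, 40.0, 400_001)              # h decays like e^{-pi|eta|}, so the tails are negligible
d_eta = eta[1] - eta[0]
for xi in (0.5, 1.0, 3.0):
    conv = np.sum(h(eta) * h(xi - eta)) * d_eta      # Riemann sum for (h*h)(xi)
    hx = 3.0 * xi / np.sinh(np.pi * xi)
    print(xi, conv / ((1.0 + xi**2) * hx))           # each ratio should be close to 1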

Then the representation (11.6) is equivalent to χ=𝔼ξ[𝒳(ξ,t)]\chi=\mathbb{E}_{\xi}[\mathscr{X}(\xi,t)] where 𝒳\mathscr{X} is a stochastic functional defined recursively by

𝒳(ξ,t)={χ0(ξ)ifTθt𝒳(1)(W1,tTθ)𝒳(2)(W2,tTθ)ifTθ<t\mathscr{X}(\xi,t)=\left\{\begin{array}[]{*{35}{r}}{{\chi}_{0}}(\xi)&\text{if}&{{T}_{\theta}}\geq t\\ {\mathscr{X}^{(1)}}({{W}_{1}},t-{{T}_{\theta}}){\mathscr{X}^{(2)}}({{W}_{2}},t-{{T}_{\theta}})&\text{if}&{{T}_{\theta}}<t\\ \end{array}\right. (11.8)

where 𝒳(1)\mathscr{X}^{(1)} and 𝒳(2)\mathscr{X}^{(2)} are i.i.d. copies of 𝒳\mathscr{X}. This definition, when applied recursively, leads to a family of exponential mean-one clocks {Tv}v𝕋\{T_{v}\}_{v\in\mathbb{T}} and a family of frequencies {Wv}v𝕋\{W_{v}\}_{v\in\mathbb{T}} described as follows. For ξ\xi\in\mathbb{R} and t>0t>0, we start a branching process with a particle located at Wθ=ξW_{\theta}=\xi. The particle is time tt away from the horizon. The clock runs for an exponential time T¯θExp(1+ξ2)\bar{T}_{\theta}\sim{\rm Exp}(1+\xi^{2}). If T¯θ>t\bar{T}_{\theta}>t then the branching process stops (no branching occurs). Otherwise, the particle dies and is replaced by two particles. A random variable W1W_{1}\in\mathbb{R} is then sampled according to the p.d.f. H(η|ξ)H(\eta|\xi). The first particle is placed at W1W_{1} and the second particle is placed at W2=ξW1W_{2}=\xi-W_{1}. Each particle is now tT¯θt-\bar{T}_{\theta} away from the time horizon. These particles continue to evolve by the same rule, independently of each other. Note that Tv=(1+Wv2)T¯vExp(1)T_{v}=(1+W_{v}^{2})\bar{T}_{v}\sim\text{Exp}(1). Therefore, we can associate the equation (11.6) with a DSY cascade {λ(Xv)1Tv}v𝕋\{\lambda(X_{v})^{-1}T_{v}\}_{v\in\mathbb{T}} where Xv=WvX_{v}=W_{v}, λ(ξ)=1+ξ2\lambda(\xi)=1+\xi^{2} and p(ξ,η)=H(η|ξ)p(\xi,\eta)=H(\eta|\xi). The corresponding explosion time is

ζ=ζ(ξ)=limnmin|v|=nj=0nTv|j1+Wv|j2.\zeta=\zeta(\xi)=\underset{n\to\infty}{\mathop{\lim}}\,\underset{|v|=n}{\mathop{\min}}\,\sum\limits_{j=0}^{n}{\frac{{{T}_{v|j}}}{1+W_{v|j}^{2}}}.

The stochastic functional 𝒳\mathscr{X} is a.s. well-defined by (11.8) for all t>0t>0 if and only if ζ=\zeta=\infty a.s. We will apply Section 2 and Section 6 to show the non-explosion of the cascade. The same cascade was analyzed by the authors in [part1_2021, Example 5.4] and Orum in [orum, Sec. 7.9]. The non-explosion was proved by using a large deviation method together with a spectral radius technique in [part1_2021], and by exploiting the uniqueness of solutions to the KPP equation in [orum].

We start the proof of the non-explosion by observing that the function hh given by (11.7) is even and decreasing on (0,)(0,\infty). According to Section 11.2 below, hh is logarithmically concave on (0,)(0,\infty). Thus,

h(η)h(ξη)=h(|η|)h(|ξη|)h2(|η|+|ξη|2)h2(|ξ|2)=h2(ξ2).h(\eta)h(\xi-\eta)=h(|\eta|)h(|\xi-\eta|)\leq{{h}^{2}}\left(\frac{|\eta|+|\xi-\eta|}{2}\right)\leq{{h}^{2}}\left(\frac{|\xi|}{2}\right)={{h}^{2}}\left(\frac{\xi}{2}\right).

Now fix a number κ(1/2,1)\kappa\in(1/2,1). Using the fact that hh is bounded from above by 3/π3/\pi, we have

h(η)h(ξη)=[h(η)h(ξη)]1κ[h(η)h(ξη)]κc1h(η)1κh2κ(ξ2)h(\eta)h(\xi-\eta)={{[h(\eta)h(\xi-\eta)]}^{1-\kappa}}{{[h(\eta)h(\xi-\eta)]}^{\kappa}}\leq{{c}_{1}}h{{(\eta)}^{1-\kappa}}{{h}^{2\kappa}}\left(\frac{\xi}{2}\right)

where c1=(3/π)1κc_{1}=(3/\pi)^{1-\kappa}. Therefore, p(ξ,η)ϕ1(ξ)ϕ2(η)p(\xi,\eta)\leq\phi_{1}(\xi)\phi_{2}(\eta) where ϕ1(ξ)=c1h2κ(ξ/2)/h(ξ)\phi_{1}(\xi)=c_{1}h^{2\kappa}(\xi/2)/h(\xi) and ϕ2(η)=h(η)1κ\phi_{2}(\eta)=h(\eta)^{1-\kappa}. Since 2κ>12\kappa>1, ϕ1\phi_{1} decays exponentially as ξ±\xi\to\pm\infty. Thus, Section 6 is satisfied with b=1b=1.

Lemma \thelem.

Let h(x)=3xsinh(πx)h(x)=\frac{3x}{\sinh(\pi x)}. Then h(x)th(y)1th(tx+(1t)y)h(x)^{t}h(y)^{1-t}\leq h(tx+(1-t)y) for all x,y>0x,y>0, t(0,1)t\in(0,1).

Proof.

Put k(x)=logh(x)k(x)=\log h(x). By basic differentiations, k′′(x)=1x2+π2sinh2(πx)k^{\prime\prime}(x)=-\frac{1}{x^{2}}+\frac{\pi^{2}}{\sinh^{2}(\pi x)}. Using the fact that sinh(πx)>πx\sinh(\pi x)>\pi x for all x>0x>0, we have

k′′(x)<1x2+π2(πx)2=0.k^{\prime\prime}(x)<-\frac{1}{x^{2}}+\frac{\pi^{2}}{(\pi x)^{2}}=0.

Therefore, kk is concave on (0,)(0,\infty). ∎

11.3 The complex Burgers equation

Consider the one-dimensional Burgers equation

ut=uxxuux,u(x,0)=u0(x),u_{t}=u_{xx}-uu_{x},\ \ \ u(x,0)=u_{0}(x),

where u=u(x,t)u=u(x,t) is a complex-valued function. In the analysis of self-similar solutions in the Fourier domain [complexburgers], the equation is associated with a self-similar DSY cascade {λ(Xv)1Tv}v𝕋\left\{\lambda({X_{v}})^{-1}{{{T}_{v}}}\right\}_{v\in\mathbb{T}} in which λ(x)=x2\lambda(x)=x^{2} and the ratios Rs|n=Xs|n/Xs|n1R_{s|n}={X_{s|n}}/{X_{s|n-1}} are uniformly distributed on (0,1)(0,1). Since 𝔼[R12]=1/3\mathbb{E}[R_{1}^{2}]=1/3, the cascade is non-explosive according to Section 2.

11.4 Bessel cascade of the three-dimensional NSE

The dd-dimensional incompressible Navier-Stokes equations are given by

{tu+uu=Δupind×(0,),divu=0ind×(0,),u(,0)=u0ind.\left\{{\begin{array}[]{*{20}{rcl}}{{\partial_{t}}u+u\cdot\nabla u=\Delta u-\nabla p}&~{}~{}{\rm in}&\mathbb{R}^{d}\times(0,\infty),\\ {{\rm div}\,u=0}&~{}~{}{\rm in}&\mathbb{R}^{d}\times(0,\infty),\\ {u(\cdot,0)={u_{0}}}&~{}~{}{\rm in}&\mathbb{R}^{d}.\\ \end{array}}\right. (NSE)

In the general formulation (11.2) for the Navier-Stokes equations, ρ(ξ)=|ξ|1\rho(\xi)=|\xi|^{-1} and the branching distribution of wave numbers is given by

H(η|ξ)=h(η)h(ξη)|ξ|h(ξ).H(\eta|\xi)=\frac{h(\eta)h(\xi-\eta)}{|\xi|h(\xi)}. (11.9)

The function h:d\{0}(0,)h:\mathbb{R}^{d}\backslash\{0\}\to(0,\infty) satisfies the equation hh=|ξ|hh*h=|\xi|h and is called a standard majorizing kernel [rabi]. In three dimensions, the Bessel cascade is the DSY cascade corresponding to the choice of h(ξ)=12πe|ξ||ξ|h(\xi)=\frac{1}{2\pi}\frac{e^{-|\xi|}}{|\xi|} (see e.g. [rabi, chaos, lejan], [orum, Prop. 3.8]). In the analysis of the explosion problem, the choice Xv=|Wv|X_{v}=|W_{v}| turns out to be more efficient than the choice Xv=WvX_{v}=W_{v}. The corresponding function λ\lambda is λ(x)=x2\lambda(x)=x^{2}. With y=|η|y=|\eta| and x=|ξ|x=|\xi|, the branching distribution of X1=|W1|X_{1}=|W_{1}| given Xθ=xX_{\theta}=x is (see [chaos]):

p(x,y)={e2x1xe2yif0<xy,1e2yxif0<y<x.p\left(x,y\right)=\left\{\begin{array}[]{*{35}{l}}\frac{{{e}^{2x}}-1}{x}{{e}^{-2y}}&\text{if}&0<x\leq y,\\[6.0pt] \frac{1-{{e}^{-2y}}}{x}&\text{if}&0<y<x.\\ \end{array}\right.

Along each path s𝕋s\in\partial\mathbb{T}, the sequence {Xs|j}j0\{X_{s|j}\}_{j\geq 0} is a time-reversible Markov chain with the invariant probability density

γ(x)=4xe2x.\gamma(x)=4xe^{-2x}.

The non-explosion of the Bessel cascade was proved in [part1_2021, Example 5.2] via a large deviation method. Here we present an alternative proof using Section 2 and Section 6. The conditions (i)–(v) in Section 6 are satisfied for the choice of

ψ1(x)=e2x14x,ψ2(x)=e2x,c1=c2=2,α=5.\psi_{1}(x)=\frac{e^{2x}-1}{4x},\ \psi_{2}(x)=e^{-2x},\ c_{1}=c_{2}=2,\,\alpha=5.

Therefore, the DSY cascade {Xv2Tv}v𝕋\left\{X_{v}^{-2}T_{v}\right\}_{{v\in\mathbb{T}}} is non-explosive.
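A brief numerical check (illustrative only; the grid and test points are arbitrary choices) that p(x,·) above is a probability density in y and that γ satisfies the detailed balance relation γ(x)p(x,y) = γ(y)p(y,x):

import numpy as np

def p(x, y):
    # branching density of X_1 = |W_1| given X_theta = x for the Bessel cascade
    return np.where(y >= x, (np.exp(2 * x) - 1) / x * np.exp(-2 * y), (1 - np.exp(-2 * y)) / x)

gamma = lambda x: 4 * x * np.exp(-2 * x)             # invariant probability density

y = np.linspace(1e-6, 60.0, 600_000)
dy = y[1] - y[0]
for x in (0.3, 1.0, 4.0):
    print(x, np.sum(p(x, y)) * dy)                   # total mass of p(x, .), close to 1
x1, y1 = 0.7, 2.5
print(gamma(x1) * p(x1, y1), gamma(y1) * p(y1, x1))  # detailed balance: the two numbers agree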

12 Cascade for the Navier-Stokes equations with scale-invariant kernel

This section explores the explosion/non-explosion problem of a self-similar DSY cascade naturally associated with the dd-dimensional incompressible Navier-Stokes equations with d3d\geq 3. Based on the general scheme (11.2)–(11.4), the present cascade corresponds to ρ(ξ)=|ξ|1\rho(\xi)=|\xi|^{-1} and the choice of the standard majorizing kernel h(ξ)=Cd|ξ|1dh(\xi)=C_{d}|\xi|^{1-d}, called the scale-invariant kernel. The probability density function H(|ξ)H(\cdot|\xi) is

H(η|ξ)=Cd|ξ|d2|η|d1|ξη|d1.H(\eta|\xi)=C_{d}\frac{|\xi|^{d-2}}{|\eta|^{d-1}|\xi-\eta|^{d-1}}. (12.1)

In contrast to the Bessel kernel discussed above, this family of kernels scales according to the natural scaling of (NSE), namely

u(x,t)κu(κx,κ2t),p(x,t)κ2p(κx,κ2t),u0(x)κu0(κx),κ.u(x,t)\to\kappa u(\kappa x,\kappa^{2}t),~{}~{}p(x,t)\to\kappa^{2}p(\kappa x,\kappa^{2}t),~{}~{}u_{0}(x)\to\kappa u_{0}(\kappa x),\ \ \kappa\in\mathbb{R}.

Choose the branching Markov chain Xv=|Wv|X_{v}=|W_{v}|. The corresponding intensity function is λ(x)=x2\lambda(x)=x^{2}. In the case d=3d=3, the ratios Rs|n=Xs|n/Xs|n1R_{s|n}={X_{s|n}}/{X_{s|n-1}} have the dilogarithmic distribution with density

f3(r)=2π21rlnr+1|r1|f_{3}(r)=\frac{2}{\pi^{2}}\frac{1}{r}\ln\frac{r+1}{|r-1|} (12.2)

and the corresponding self-similar DSY cascade is referred to as the dilogarithmic cascade [chaos]. It was shown in [chaos] that, for d=3d=3, the event of non-explosion [ζ=][\zeta=\infty] is a 010-1 event and that a.s. explosion does not occur along any deterministic path s𝕋s\in\partial\mathbb{T}, i.e. j=0Xs|j2Ts|j=\sum_{j=0}^{\infty}X_{s|j}^{-2}T_{s|j}=\infty (see [chaos, Cor. 5.1]). Section 2 (i) implies that the non-explosion is a 010-1 event for any dimension d3d\geq 3. We will show that non-explosion occurs with probability one for d12d\geq 12 (Section 12.1), and with probability zero for d=3d=3 (Section 12.2).

Remark \therem.

In the construction of solutions to the 3-dimensional incompressible Navier-Stokes equations, Le Jan and Sznitman circumvented the problem of stochastic explosion by introducing a clever, albeit ad hoc, critical coin tossing device. This technique assures the a.s. termination of the branching after finitely many steps. Subcritical coin tosses could also be implemented for the same effect (see [rabi]). The elimination of the coin-toss device naturally leads to an intrinsic explosion problem for the branching cascade itself. This problem depends on the Markov transition probabilities of the wave numbers H(|ξ)H(\cdot|\xi), which are determined by the choice of hh. In the construction of global or local-in-time solutions, Bhattacharya et al [rabi] relaxed the condition on hh to require only hhC|ξ|θhh*h\leq C|\xi|^{\theta}h for some constants θ1\theta\leq 1 and C>0C>0 (called the majorizing kernel). Majorizing kernels determine the space of initial data suitable for the construction of solutions. While our focus here is on zero forcing, nonzero forcing can also be accommodated in this framework (see [chaos]).

We will derive the branching distribution of X1=|W1|X_{1}=|W_{1}| given Xθ=|ξ|X_{\theta}=|\xi| as follows. Write x=|ξ|x=|\xi|, y=|η|y=|\eta| and z=|ξη|z=|\xi-\eta| and let ϕ\phi be the angle between ξ\xi and η\eta. Then z=x22xycosϕ+y2z=\sqrt{x^{2}-2xy\cos\phi+y^{2}}. Put k(x)=h(ξ)=Cdx1dk(x)=h(\xi)=C_{d}x^{1-d}. Choose the spherical coordinates in d\mathbb{R}^{d} by η=(ycosϕ,y(sinϕ)w)\eta=(y\cos\phi,\,y(\sin\phi)w) where w𝕊d2w\in\mathbb{S}^{d-2}, the unit sphere in d1\mathbb{R}^{d-1}. Then dη=yd1sind2ϕdydϕdwd\eta=y^{d-1}\sin^{d-2}\phi\,dyd\phi dw. Therefore,

H(η|ξ)dη=Cdxd2sind2ϕ(x22xycosϕ+y2)d12dydϕdw.H(\eta|\xi)\,d\eta={{C}_{d}}{\frac{x^{d-2}{{\sin}^{d-2}}\phi}{{{\left({{x}^{2}}-2xy\cos\phi+{{y}^{2}}\right)}^{\frac{d-1}{2}}}}\,dyd\phi dw}. (12.3)

For any smooth function g(y)g(y),

𝔼xg(|W1|)\displaystyle{{\mathbb{E}}_{x}}g(|{{W}_{1}}|) =\displaystyle= dg(|η|)H(η|ξ)𝑑η\displaystyle\int_{{{\mathbb{R}}^{d}}}{g(|\eta|)H(\eta|\xi)\,d\eta}
=\displaystyle= 00π𝕊d2g(y)Cdxd2sind2ϕ(x22xycosϕ+y2)d12𝑑w𝑑ϕ𝑑y\displaystyle\int_{0}^{\infty}{\int_{0}^{\pi}{\int_{{\mathbb{S}^{d-2}}}{g(y)\,{C}_{d}}{\frac{x^{d-2}{{\sin}^{d-2}}\phi}{{{\left({{x}^{2}}-2xy\cos\phi+{{y}^{2}}\right)}^{\frac{d-1}{2}}}}\,dwd\phi dy}}}
=\displaystyle= |𝕊d2|Cdxd200πg(y)sind2ϕ(x22xycosϕ+y2)d12𝑑ϕ𝑑y.|{\mathbb{S}^{d-2}}|{{C}_{d}}{{x}^{d-2}}\int_{0}^{\infty}{\int_{0}^{\pi}{g(y)\frac{{{\sin}^{d-2}}\phi}{{{\left({{x}^{2}}-2xy\cos\phi+{{y}^{2}}\right)}^{\frac{d-1}{2}}}}d\phi dy}}.

Thus, the distribution of |W1||W_{1}| given |ξ||\xi| has a density

p(x,y)=cdxd20πsind2ϕ(x22xycosϕ+y2)d12𝑑ϕ,p\left(x,y\right)={{c}_{d}}{{x}^{d-2}}\int_{0}^{\pi}{\frac{{{\sin}^{d-2}}\phi}{{{\left({{x}^{2}}-2xy\cos\phi+{{y}^{2}}\right)}^{\frac{d-1}{2}}}}d\phi}, (12.4)

where

cd=|𝕊d2|Cd=Γ(d12)Γ(d22)2π3/2.{{c}_{d}}=|{\mathbb{S}^{d-2}}|{{C}_{d}}=\frac{\Gamma\left(\frac{d-1}{2}\right)}{\Gamma\left(\frac{d-2}{2}\right)}\frac{2}{{{\pi}^{3/2}}}. (12.5)

The ratios Rs|n=Xs|n/Xs|n1R_{s|n}={X_{s|n}}/{X_{s|n-1}} have a density

fd(r)=p(1,r)=cd0πsind2ϕ(12rcosϕ+r2)d12𝑑ϕf_{d}(r)=p(1,r)={{c}_{d}}\int_{0}^{\pi}{\frac{{{\sin}^{d-2}}\phi}{{{\left({{1}}-2r\cos\phi+{{r}^{2}}\right)}^{\frac{d-1}{2}}}}d\phi} (12.6)

which is independent of s𝕋s\in\partial\mathbb{T} and nn\in\mathbb{N}. Therefore, for any d3d\geq 3, {Xv2Tv}v𝕋\{X_{v}^{-2}T_{v}\}_{v\in\mathbb{T}} is a self-similar DSY cascade with the ratio distribution given by (12.6).
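As a quick consistency check (a sketch only; the test values of r are arbitrary), the integral formula (12.6) for d = 3 can be compared with the closed dilogarithmic form (12.2):

import numpy as np
from scipy.integrate import quad

c3 = 2.0 / np.pi**2
def f3_integral(r):
    # formula (12.6) with d = 3
    val, _ = quad(lambda t: np.sin(t) / (1.0 - 2.0 * r * np.cos(t) + r * r), 0.0, np.pi)
    return c3 * val

f3_closed = lambda r: 2.0 / np.pi**2 / r * np.log((r + 1.0) / abs(r - 1.0))   # formula (12.2)
for r in (0.3, 0.8, 1.5, 4.0):
    print(r, f3_integral(r), f3_closed(r))           # the two columns agree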

Remark \therem.

One can check with basic calculations that for d=3d=3, the distribution of R1R_{1} is symmetric with respect to the group identity r=1r=1 on (0,)(0,\infty), i.e. the median of R1R_{1} is equal to 1. If d4d\geq 4, one can show 01fd(r)𝑑r>1fd(r)𝑑r\int_{0}^{1}f_{d}(r)dr>\int_{1}^{\infty}f_{d}(r)dr by using the change of variable r1/rr\mapsto 1/r. Thus, the median of R1R_{1} is less than 1, which better facilitates the non-explosion of the cascade. At this heuristic level, the non-explosion is more likely to happen for d4d\geq 4 than for d=3d=3. One may suspect that for very large dd, the self-similar cascade of the Navier-Stokes equations is non-explosive. The problem will be further discussed in the next section.

Besides the spherical coordinates, one can also express H(η|ξ)dηH(\eta|\xi)d\eta in the coordinates (ϕ1=ϕ,ϕ2,w)(\phi_{1}=\phi,\phi_{2},w) where ϕ2\phi_{2} is the angle between ξ\xi and ξη\xi-\eta. Using the sine rule in the triangle made by ξ\xi, η\eta, ξη\xi-\eta, we get

y=xsinϕ2sin(πϕ1ϕ2)=xsinϕ2sin(ϕ1+ϕ2).y=x\frac{\sin\phi_{2}}{\sin(\pi-\phi_{1}-\phi_{2})}=x\frac{\sin\phi_{2}}{\sin(\phi_{1}+\phi_{2})}.

Then changing variables in (12.3) from (y,ϕ,w)(y,\phi,w) to (ϕ1,ϕ2,w)(\phi_{1},\phi_{2},w), noting the Jacobian (y,ϕ,w)(ϕ1,ϕ2,w)=xsinϕ1sin2(ϕ1+ϕ2)\frac{\partial(y,\phi,w)}{\partial(\phi_{1},\phi_{2},w)}=\frac{-x\sin\phi_{1}}{\sin^{2}(\phi_{1}+\phi_{2})}, we obtain

H(η|ξ)dη=Cdsind3(ϕ1+ϕ2)dϕ1dϕ2dw,H(\eta|\xi)\,d\eta={{C}_{d}}\,{\sin}^{d-3}(\phi_{1}+\phi_{2})\,d\phi_{1}d\phi_{2}dw, (12.7)

where (ϕ1,ϕ2)Δ:={(ϕ1,ϕ2):ϕ1,ϕ20,ϕ1+ϕ2π}(\phi_{1},\phi_{2})\in\Delta:=\{(\phi_{1},\phi_{2}):\ \phi_{1},\phi_{2}\geq 0,\ \phi_{1}+\phi_{2}\leq\pi\}. Here ϕ1\phi_{1} and ϕ2\phi_{2} are two angles of the triangle formed by the vectors ξ/|ξ|,\xi/|\xi|, η/|ξ|\eta/|\xi|, and (ξη)/|ξ|(\xi-\eta)/|\xi|. As η\eta is randomized according to H(|ξ)H(\cdot|\xi), ϕ1\phi_{1} and ϕ2\phi_{2} are values of the corresponding random variables Φ1\Phi_{1} and Φ2\Phi_{2}. The joint distribution density of the random angles (Φ1,Φ2)(\Phi_{1},\Phi_{2}) conditionally given eξ=ξ/|ξ|e_{\xi}=\xi/|\xi| is

Sd(ϕ1,ϕ2)=cdsind3(ϕ1+ϕ2),S_{d}(\phi_{1},\phi_{2})=c_{d}{\sin}^{d-3}(\phi_{1}+\phi_{2}), (12.8)

where cdc_{d} is given by (12.5). It is easy to see that (Φ1,Φ2)(\Phi_{1},\Phi_{2}) is uniformly distributed in the triangle Δ\Delta if and only if d=3d=3.

12.1 Non-explosion of the self-similar Navier-Stokes cascades in high dimensions

As dd\to\infty, SdS_{d} given by (12.8) approaches the uniform distribution on the line segment {ϕ1,ϕ20,ϕ1+ϕ2=π/2}\{\phi_{1},\phi_{2}\geq 0,\,\phi_{1}+\phi_{2}=\pi/2\}. Geometrically, the triad ξ\xi, W1W_{1}, W2W_{2} tends to form a right triangle with legs W1W_{1}, W2W_{2} and hypotenuse ξ\xi. Consequently, R1R_{1} and R2R_{2} become bounded by 1 in the limit dd\to\infty. The explosion time of the self-similar cascade of the Navier-Stokes equations then becomes bounded below by that of the classical Yule cascade, or equivalently the α\alpha-Riccati cascade with α=1\alpha=1, which is non-explosive. Hence, it is quite conceivable that the self-similar DSY cascade of the Navier-Stokes equations is non-explosive if dd is sufficiently large. The following result affirms that this is the case.

Proposition \theprop.

For d12d\geq 12, the self-similar cascade of the Navier-Stokes equations in d\mathbb{R}^{d} is a.s. non-explosive for any initial state ξd\{0}\xi\in\mathbb{R}^{d}\backslash\{0\}.

Proof.

As shown above, R1R_{1} has a density function fd(r)f_{d}(r) given by (12.6). By Section 2, it is sufficient to show that αd:=𝔼[R1(d3)/2]<1/2\alpha_{d}:=\mathbb{E}[R_{1}^{(d-3)/2}]<1/2. With the change of variable ϕs=12rcosϕ+r2\phi\mapsto s=\sqrt{1-2r\cos\phi+r^{2}}, one gets

rd32fd(r)=cdr(4r)d32|1r|1+r((1+r)2s2)d32(s2(1r)2)d32sd2𝑑s.{{r}^{\frac{d-3}{2}}}{{f}_{d}}(r)=\frac{{{c}_{d}}}{r{{(4r)}^{\frac{d-3}{2}}}}\int_{|1-r|}^{1+r}{\frac{{{({{(1+r)}^{2}}-{{s}^{2}})}^{\frac{d-3}{2}}}{{({{s}^{2}}-{{(1-r)}^{2}})}^{\frac{d-3}{2}}}}{{{s}^{d-2}}}ds}.

Put f¯d(r)=r(d3)/2fd(r)\bar{f}_{d}(r)=r^{(d-3)/2}f_{d}(r). We show f¯d(r)f¯d2(r)\bar{f}_{d}(r)\leq\bar{f}_{d-2}(r) for all d5d\geq 5, r>0r>0. By the fact that (1(d3)sd3)=1sd2\left(-\frac{1}{(d-3)s^{d-3}}\right)^{\prime}=\frac{1}{s^{d-2}} and by integration by parts, one obtains

r(4r)d32cdf¯d(r)\displaystyle\frac{r{{(4r)}^{\frac{d-3}{2}}}}{{{c}_{d}}}{\bar{f}_{d}}(r) =\displaystyle= |1r|1+r((1+r)2s2)d32(s2(1r)2)d32sd2𝑑s\displaystyle\int_{|1-r|}^{1+r}{\frac{{{({{(1+r)}^{2}}-{{s}^{2}})}^{\frac{d-3}{2}}}{{({{s}^{2}}-{{(1-r)}^{2}})}^{\frac{d-3}{2}}}}{{{s}^{d-2}}}ds}
=\displaystyle= |1r|1+r((1+r)2s2)d52(s2(1r)2)d52sd4(2+2r22s24r)𝑑s\displaystyle\int_{|1-r|}^{1+r}{\frac{{{({{(1+r)}^{2}}-{{s}^{2}})}^{\frac{d-5}{2}}}{{({{s}^{2}}-{{(1-r)}^{2}})}^{\frac{d-5}{2}}}}{{{s}^{d-4}}}(\underbrace{2+2{{r}^{2}}-2{{s}^{2}}}_{\leq 4r})ds}
\displaystyle\leq r(4r)d32cdf¯d2(r).\displaystyle\frac{r{{(4r)}^{\frac{d-3}{2}}}}{{{c}_{d}}}{\bar{f}_{d-2}}(r).

Thus, αdαd2\alpha_{d}\leq\alpha_{d-2} for all d5d\geq 5. The next step is to evaluate αd\alpha_{d} for some values of dd. By the sine rule in the triangle with edges 1,R1,R21,R_{1},R_{2} (Figure 5), R1=sinΦ2sin(Φ1+Φ2){{R}_{1}}=\frac{\sin{{\Phi}_{2}}}{\sin({{\Phi}_{1}}+{{\Phi}_{2}})}. Thus,

αd\displaystyle{{\alpha}_{d}} =\displaystyle= Δ(sinϕ2sin(ϕ1+ϕ2))d32Sd(ϕ1,ϕ2)𝑑ϕ1𝑑ϕ2=cdΔ(sinϕ2sin(ϕ1+ϕ2))d32𝑑ϕ1𝑑ϕ2.\displaystyle\iint\limits_{\Delta}{{{\left(\frac{\sin{{\phi}_{2}}}{\sin({{\phi}_{1}}+{{\phi}_{2}})}\right)}^{\frac{d-3}{2}}}{{S}_{d}}({{\phi}_{1}},{{\phi}_{2}})d{{\phi}_{1}}d{{\phi}_{2}}}={{c}_{d}}\iint\limits_{\Delta}{{{\left(\sin{{\phi}_{2}}\sin({{\phi}_{1}}+{{\phi}_{2}})\right)}^{\frac{d-3}{2}}}d{{\phi}_{1}}d{{\phi}_{2}}}.

We obtain the numerical approximations

α100.5427,α110.5143,α120.4898,α130.4684.\alpha_{10}\approx 0.5427,\ \ \ \alpha_{11}\approx 0.5143,\ \ \ \alpha_{12}\approx 0.4898,\ \ \ \alpha_{13}\approx 0.4684.

This implies that αd<1/2\alpha_{d}<1/2 for all d12d\geq 12. ∎
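The numerical values of α_d used above can be reproduced, up to quadrature error, with a short midpoint-rule computation (a sketch only; the grid size is an arbitrary choice).

import numpy as np
from math import gamma, pi

def alpha(d, m=1500):
    # alpha_d = c_d * integral of (sin(phi2) sin(phi1+phi2))^((d-3)/2) over the triangle Delta
    c_d = gamma((d - 1) / 2) / gamma((d - 2) / 2) * 2.0 / pi**1.5
    phi = (np.arange(m) + 0.5) * pi / m                         # midpoints on (0, pi)
    p1, p2 = np.meshgrid(phi, phi, indexing="ij")
    base = np.where(p1 + p2 < pi, np.sin(p2) * np.sin(p1 + p2), 0.0)
    return c_d * np.sum(base ** ((d - 3) / 2)) * (pi / m)**2

for d in (10, 11, 12, 13):
    print(d, round(float(alpha(d)), 4))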

Remark \therem.

Although an explicit formula for αd\alpha_{d} is not needed for the above proof, one can obtain it with the aid of Mathematica:

αd=2d32Γ(d14)3πΓ(d22)Γ(d+14).{{\alpha}_{d}}=\frac{{{2}^{\frac{d-3}{2}}}\Gamma{{\left(\frac{d-1}{4}\right)}^{3}}}{\pi\Gamma\left(\frac{d-2}{2}\right)\Gamma\left(\frac{d+1}{4}\right)}.
Remark \therem.

As mentioned before, (12.8) implies that, as dd\to\infty, the random vector (Φ1,Φ2)(\Phi_{1},\Phi_{2}) converges in distribution to a vector (Φ1,Φ2)(\Phi^{\infty}_{1},\Phi_{2}^{\infty}) distributed uniformly on the line segment {ϕ1,ϕ20,ϕ1+ϕ2=π/2}\{\phi_{1},\phi_{2}\geq 0,\,\phi_{1}+\phi_{2}=\pi/2\}. The corresponding random variables R1R_{1} and R2R_{2} would then represent the sides of the right triangle with hypotenuse 1 and adjacent angles Φ1\Phi^{\infty}_{1}, Φ2\Phi_{2}^{\infty}. This configuration gives rise to a self-similar DSY cascade that can be viewed as the limit case of the cascades associated with (12.1), corresponding to d=d=\infty. The mean-field model of this cascade for d=d=\infty yields R1=R21/2R_{1}=R_{2}\equiv 1/\sqrt{2}, which is exactly the DSY cascade of the α\alpha-Riccati equation with α=1/2\alpha=1/2. The special case α=1/2\alpha=1/2 has several critical properties in the context of α\alpha-Riccati cascades. First, this continuous-time Markov process is a Poisson process with unit intensity. Second, the infinitesimal generator for the Markov process defined by the set of vertices alive at time tt is a bounded operator if and only if α1/2\alpha\leq 1/2 (see [alphariccati, yule]).

12.2 Explosion of the self-similar Navier-Stokes cascade in dimension d=3d=3

In the case d=3d=3, the self-similar DSY cascade (or dilogarithmic cascade) has several special properties. First, the ratios Rs|n=Xs|n/Xs|n1R_{s|n}=X_{s|n}/X_{s|n-1} have a dilogarithmic distribution with density given by (12.2). Second, conditionally given WvW_{v}, the angle Φ1\Phi_{1} between Wv1W_{v*1} and WvW_{v} and the angle Φ2\Phi_{2} between Wv2W_{v*2} and WvW_{v} have a joint uniform distribution in

Δ={(ϕ1,ϕ2):ϕ1,ϕ20,ϕ1+ϕ2π}.\Delta=\{(\phi_{1},\phi_{2}):\ \phi_{1},\phi_{2}\geq 0,\ \phi_{1}+\phi_{2}\leq\pi\}.

We will use Section 2 to show that the dilogarithmic cascade is a.s. explosive. Consider the random variable Rmax=max{R1,R2}R_{\max}=\max\{R_{1},\,R_{2}\}. Recall that R1=|W1|/|ξ|R_{1}=|W_{1}|/|\xi| and R2=|W2|/|ξ|R_{2}={|W_{2}|}/{|\xi|}. By the triangle inequality, Rmax[1/2,)R_{\max}\in[1/2,\infty). Our first step is to compute the distribution of RmaxR_{\max}.

Lemma \thelem.

The random variable Rmax=max{R1,R2}R_{\max}=\max\{R_{1},\,R_{2}\} has a probability density

g(r)=4π21rlnr|r1|𝟙r1/2.g(r)=\frac{4}{\pi^{2}}\frac{1}{r}\ln\frac{r}{|r-1|}{\mathds{1}_{r\geq 1/2}}.
Proof.

For any r>0r>0, denote

G(r):=(Rmaxr)=2(R1R2r).G(r):=\mathbb{P}(R_{\max}\leq r)=2\mathbb{P}(R_{1}\leq R_{2}\leq r)\,.

The event [R1R2r][R_{1}\leq R_{2}\leq r] can be written as [0Φ2ϕ2(r),Φ2Φ1ϕ1(r,Φ2)][0\leq\Phi_{2}\leq\phi_{2}^{*}(r),\ \Phi_{2}\leq\Phi_{1}\leq\phi_{1}^{*}(r,\Phi_{2})] (see Figure 5).

Figure 5: An illustration of the event [R1R2r][R_{1}\leq R_{2}\leq r].

We observe that

cosϕ2=12r,tanϕ1=rsinϕ21rcosϕ2.\cos\phi_{2}^{*}=\frac{1}{2r},\ \ \ \tan\phi_{1}^{*}=\frac{r\sin\phi_{2}}{1-r\cos{\phi_{2}}}\,.

Thus,

G(r)=2|Δ|0ϕ2ϕ2ϕ1𝑑ϕ1𝑑ϕ2=4π20ϕ2(ϕ1ϕ2)𝑑ϕ2.G(r)=\frac{2}{|\Delta|}\int_{0}^{\phi_{2}^{*}}\int_{\phi_{2}}^{\phi_{1}^{*}}{d}\phi_{1}{d}\phi_{2}=\frac{4}{\pi^{2}}\int_{0}^{\phi_{2}^{*}}\left(\phi_{1}^{*}-\phi_{2}\right)\,{d}\phi_{2}\,.

By the chain rule of differentiation,

dGdr=4π20ϕ2(r)ϕ1r(r,ϕ2)𝑑ϕ2+4π2(ϕ1(r,ϕ2)ϕ2)dϕ2dr.\frac{dG}{dr}=\frac{4}{\pi^{2}}\int_{0}^{\phi_{2}^{*}(r)}\frac{\partial\phi_{1}^{*}}{\partial r}(r,\phi_{2})\,d\phi_{2}+\frac{4}{\pi^{2}}\left(\phi_{1}^{*}(r,\phi_{2}^{*})-\phi_{2}^{*}\right)\frac{d\phi_{2}^{*}}{dr}\,. (12.9)

Since ϕ1(r,ϕ2)=ϕ2\phi_{1}^{*}(r,\phi_{2}^{*})=\phi_{2}^{*}, the second term on the right hand side is zero. To compute ϕ1/r{\partial\phi_{1}^{*}}/{\partial r}, we note that on one hand

rtan(ϕ1(r,ϕ2))=rrsinϕ21rcosϕ2=sinϕ2(1rcosϕ2)2,\frac{\partial}{\partial r}\tan\left(\phi_{1}^{*}(r,\phi_{2})\right)=\frac{\partial}{\partial r}\frac{r\sin\phi_{2}}{1-r\cos{\phi_{2}}}=\frac{\sin\phi_{2}}{(1-r\cos{\phi_{2})^{2}}}\,, (12.10)

and on the other hand (the chain rule),

rtan(ϕ1(r,ϕ2))=1cos2(ϕ1(r,ϕ2))ϕ1r(r,ϕ2).\frac{\partial}{\partial r}\tan\left(\phi_{1}^{*}(r,\phi_{2})\right)=\frac{1}{\cos^{2}\left(\phi_{1}^{*}(r,\phi_{2})\right)}\frac{\partial\phi_{1}^{*}}{\partial r}(r,\phi_{2})\,. (12.11)

Observe that (Figure 5)

cos2(ϕ1(r,ϕ2))=(1rcosϕ2)2(1rcosϕ2)2+(rsinϕ2)2=(1rcosϕ2)21+r22rcosϕ2.\cos^{2}\left(\phi_{1}^{*}(r,\phi_{2})\right)=\frac{(1-r\cos\phi_{2})^{2}}{(1-r\cos\phi_{2})^{2}+(r\sin\phi_{2})^{2}}=\frac{(1-r\cos\phi_{2})^{2}}{1+r^{2}-2r\cos\phi_{2}}\,. (12.12)

By (12.10), (12.11), (12.12), we obtain

ϕ1r(r,ϕ2)=sinϕ21+r22rcosϕ2.\frac{\partial\phi_{1}^{*}}{\partial r}(r,\phi_{2})=\frac{\sin\phi_{2}}{1+r^{2}-2r\cos\phi_{2}}\,.

Now substitute this result into (12.9):

dGdr=4π20ϕ2sinϕ21+r22rcosϕ2𝑑ϕ2=4π212r12rdu1+r2u=4π21rlnr|r1|.\frac{dG}{dr}=\frac{4}{\pi^{2}}\int_{0}^{\phi_{2}^{*}}\frac{\sin\phi_{2}}{1+r^{2}-2r\cos\phi_{2}}\,d\phi_{2}=\frac{4}{\pi^{2}}\frac{1}{2r}\int_{1}^{2r}\frac{du}{1+r^{2}-u}=\frac{4}{\pi^{2}}\frac{1}{r}\ln\frac{r}{|r-1|}\,.

Remark \therem.

If we change variable R~=2Rmax1\tilde{R}=2R_{\max}-1, then the distribution density of R~\tilde{R} is given by

g~(r)=2r1+r2π21rln|r+1r1|,r0,\tilde{g}(r)=\frac{2r}{1+r}\frac{2}{\pi^{2}}\frac{1}{r}\ln\left|\frac{r+1}{r-1}\right|,\quad r\geq 0,

which is a tilted dilogarithmic distribution. Recall that R1R_{1} and R2R_{2} individually have the dilogarithmic distribution.

Proposition \theprop.

The self-similar cascade of the Navier-Stokes equations in 3\mathbb{R}^{3} is a.s. explosive for any initial state ξ3\{0}\xi\in\mathbb{R}^{3}\backslash\{0\}.

Proof.

By Section 12.2,

𝔼[Rmax2]=1/2r2g(r)𝑑r=8π2<1.\mathbb{E}\left[R_{\max}^{-2}\right]=\int_{1/2}^{\infty}r^{-2}g(r)\,dr=\frac{8}{\pi^{2}}<1. (12.13)

We can now apply Section 2 with Xθ=|ξ|X_{\theta}=|\xi|, λ(x)=x2\lambda(x)=x^{2}, Z=|ξ|2Rmax2Z=|\xi|^{2}R_{\max}^{2}, and κ=8/π2\kappa=8/\pi^{2}. ∎
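As a numerical sanity check on (12.13) (a sketch only; sample sizes are arbitrary), one can either integrate against the density g of R_max computed above or sample (Φ1,Φ2) uniformly on Δ and average R_max^{-2} directly; both give approximately 8/π² ≈ 0.8106.

import numpy as np
from scipy.integrate import quad

g = lambda r: 4.0 / np.pi**2 / r * np.log(r / abs(r - 1.0))      # density of R_max on [1/2, infinity)
pieces = [(0.5, 1.0), (1.0, 2.0), (2.0, np.inf)]                 # split at the logarithmic singularity r = 1
moment = sum(quad(lambda r: g(r) / r**2, lo, hi)[0] for lo, hi in pieces)

rng = np.random.default_rng(4)
u, v = rng.uniform(0, np.pi, size=(2, 1_000_000))
flip = u + v > np.pi                                             # fold the square onto the triangle Delta
u, v = np.where(flip, np.pi - u, u), np.where(flip, np.pi - v, v)
r_max = np.maximum(np.sin(u), np.sin(v)) / np.sin(u + v)
print(moment, np.mean(r_max**-2), 8.0 / np.pi**2)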

Our explosion criterion (Section 2) is not satisfied for dimensions d4d\geq 4. The following proposition provides another way to obtain the expectation in (12.13) for d=3d=3 and also shows that d=4d=4 is the critical case for our method.

Proposition \theprop.

For d3d\geq 3, we have

κd:=𝔼[Rmax2]=4πΓ(d12)2Γ(d22)Γ(d2).{{\kappa}_{d}}:=\mathbb{E}[R_{\max}^{-2}]=\frac{4}{\pi}\frac{\Gamma{{\left(\frac{d-1}{2}\right)}^{2}}}{\Gamma\left(\frac{d-2}{2}\right)\Gamma\left(\frac{d}{2}\right)}.

The sequence {κd}d3\{\kappa_{d}\}_{d\geq 3} is strictly increasing with κ3=8/π2\kappa_{3}=8/\pi^{2}, κ4=1\kappa_{4}=1 and limdκd=4/π\lim_{d\to\infty}\kappa_{d}=4/\pi.

Proof.

By the sine rule in the triangle with edges 1,R1,R21,R_{1},R_{2} (Figure 5), R1=sinΦ2sin(Φ1+Φ2){{R}_{1}}=\frac{\sin{{\Phi}_{2}}}{\sin({{\Phi}_{1}}+{{\Phi}_{2}})} and R2=sinΦ1sin(Φ1+Φ2){{R}_{2}}=\frac{\sin{{\Phi}_{1}}}{\sin({{\Phi}_{1}}+{{\Phi}_{2}})}. On the triangle Δ1={(ϕ1,ϕ2)Δ:ϕ1ϕ2}\Delta_{1}=\{(\phi_{1},\phi_{2})\in\Delta:\phi_{1}\geq\phi_{2}\}, we have Rmax=R2R_{\max}=R_{2}. Thus,

κd=2Δ1r22Sd(ϕ1,ϕ2)𝑑ϕ2𝑑ϕ1=2cdΔ1sind1(ϕ1+ϕ2)sin2ϕ1𝑑ϕ2𝑑ϕ1=2cd0π/2K(ϕ1)sin2ϕ1𝑑ϕ1{{\kappa}_{d}}=2\iint\limits_{{{\Delta}_{1}}}{r_{2}^{-2}{{S}_{d}}({{\phi}_{1}},{{\phi}_{2}})d{{\phi}_{2}}d{{\phi}_{1}}}=2{{c}_{d}}\iint\limits_{{{\Delta}_{1}}}{\frac{{{\sin}^{d-1}}({{\phi}_{1}}+{{\phi}_{2}})}{{{\sin}^{2}}{{\phi}_{1}}}d{{\phi}_{2}}d{{\phi}_{1}}}=2{{c}_{d}}\int_{0}^{\pi/2}{\frac{K({{\phi}_{1}})}{{{\sin}^{2}}{{\phi}_{1}}}d{{\phi}_{1}}}

where SdS_{d} is given by (12.8) and K(ϕ)=0ϕ(sind1(ϕ+ϕ2)+sind1(ϕϕ2))𝑑ϕ2K(\phi)=\int_{0}^{\phi}{({{\sin}^{d-1}}(\phi+{{\phi}_{2}})+{{\sin}^{d-1}}(\phi-{{\phi}_{2}}))d{{\phi}_{2}}}. One can express KK in terms of the hypergeometric function as follows.

K(ϕ)={K~(ϕ)if0ϕπ4,πΓ(d2)Γ(d+12)K~(ϕ)ifπ4ϕπ2K(\phi)=\left\{\begin{array}[]{*{35}{r}}\widetilde{K}(\phi)&\text{if}&0\leq\phi\leq\frac{\pi}{4},\\ \frac{\sqrt{\pi}\Gamma\left(\frac{d}{2}\right)}{\Gamma\left(\frac{d+1}{2}\right)}-\widetilde{K}(\phi)&\text{if}&\frac{\pi}{4}\leq\phi\leq\frac{\pi}{2}\\ \end{array}\right.

where K~(ϕ)=1d1F2(12,d2,d+22;sin2(2ϕ))sind(2ϕ)\widetilde{K}(\phi)=\frac{1}{d}\,{{\,}_{1}}{{F}_{2}}\left(\frac{1}{2},\frac{d}{2},\frac{d+2}{2};{{\sin}^{2}}(2\phi)\right){{\sin}^{d}}(2\phi). One can now compute κd\kappa_{d} either with the aid of Mathematica or directly by induction on even and odd dd:

κd=2cd0π/4(K(ϕ)sin2ϕ+K(π/2ϕ)cos2ϕ)𝑑ϕ=4πΓ(d12)2Γ(d22)Γ(d2).{{\kappa}_{d}}=2{{c}_{d}}\int_{0}^{\pi/4}{\left(\frac{K(\phi)}{{{\sin}^{2}}\phi}+\frac{K(\pi/2-\phi)}{{{\cos}^{2}}\phi}\right)d\phi=}\frac{4}{\pi}\frac{\Gamma{{\left(\frac{d-1}{2}\right)}^{2}}}{\Gamma\left(\frac{d-2}{2}\right)\Gamma\left(\frac{d}{2}\right)}.

Using the asymptotic approximation Γ(x+1/2)Γ(x)x\frac{\Gamma(x+1/2)}{\Gamma(x)}\sim\sqrt{x} as xx\to\infty, one gets κd4π\kappa_{d}\to\frac{4}{\pi} as dd\to\infty. To show the monotonicity, we use Kershaw’s inequality Γ(x+1)Γ(x+1/2)>(x+14)1/2\frac{\Gamma(x+1)}{\Gamma(x+1/2)}>(x+\frac{1}{4})^{1/2} for all x>0x>0 (see [kershaw]).

κd+1κd=4(d1)(d2)(Γ(d2)Γ(d12))4>4(d1)(d2)(d234)2>1.\frac{{{\kappa}_{d+1}}}{{{\kappa}_{d}}}=\frac{4}{(d-1)(d-2)}{{\left(\frac{\Gamma\left(\frac{d}{2}\right)}{\Gamma\left(\frac{d-1}{2}\right)}\right)}^{4}}>\frac{4}{(d-1)(d-2)}{{\left(\frac{d}{2}-\frac{3}{4}\right)}^{2}}>1.
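The values of κ_d for small d can also be reproduced by Monte Carlo (illustrative only; sample sizes are arbitrary), sampling (Φ1,Φ2) from S_d by rejection from the uniform distribution on Δ; this recovers κ_3 ≈ 8/π², κ_4 = 1 and the monotone increase in d.

import numpy as np

rng = np.random.default_rng(5)

def kappa_mc(d, n=2_000_000):
    u, v = rng.uniform(0, np.pi, size=(2, n))
    flip = u + v > np.pi
    u, v = np.where(flip, np.pi - u, u), np.where(flip, np.pi - v, v)    # uniform on Delta
    keep = rng.uniform(size=n) < np.sin(u + v) ** (d - 3)                # rejection: target density S_d
    r_max = np.maximum(np.sin(u[keep]), np.sin(v[keep])) / np.sin(u[keep] + v[keep])
    return np.mean(r_max ** -2)

for d in (3, 4, 5, 6):
    print(d, round(float(kappa_mc(d)), 4))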

Remark \therem.

As mentioned in the introduction, there is a strong connection between the stochastic explosion of the DSY cascade underlying a certain evolution PDE and the well-posedness problem for the PDE itself. In particular, if we simplify the mild-type formulation (11.2) corresponding to the Navier-Stokes equations in Fourier space by replacing u^\hat{u} with a scalar function and replacing the sophisticated product-like structure in B(u^,u^)B(\hat{u},\hat{u}) (coming from the Fourier transform of (u)u(u\cdot\nabla)u) with a simple product of scalars, we obtain a non-linear scalar PDE with exactly the same DSY cascade structure as the Navier-Stokes equations (NSE). This scalar PDE was first considered by Montgomery-Smith [smith] as a model for a finite-time blowup. As shown in [smallness], the simplified product structure of BB allows one to recover the blowup results from [smith] directly from the DSY cascade. Also, the explosion of the self-similar DSY cascade (Section 12.2) yields a nonuniqueness result for the initial value problem of the Montgomery-Smith equation [smallness].

In the case of the 3-dimensional Navier-Stokes equations, the problem of existence and uniqueness of global solutions naturally involves the possibility of cancellations in (11.5) due to the geometric structure of the bilinear product BB. While the present paper focused on the time evolution of the DSY cascade process itself, it remains to be determined whether the additional geometric structure emanating from the nonlinear term BB corresponding to (NSE) provides sufficient cancellations for some smooth initial data to negate the impact of explosion of the self-similar DSY cascade.

References