
Existence and properties of connections decay rate for high temperature percolation models

Sébastien Ott Dipartimento di Matematica e Fisica, Univ. Roma Tre, 00146 Roma, Italy [email protected]
Abstract.

We consider generic finite range percolation models on \mathbb{Z}^{d} under a high temperature assumption (exponential decay of connection probabilities and exponential ratio weak mixing). We prove that the rate of decay of point-to-point connections exists in every direction and show that it naturally extends to a norm on \mathbb{R}^{d}. This result is the basic input needed to obtain a fine understanding of the high temperature phase and is usually proven using correlation inequalities (such as FKG). The present work makes no use of such model-specific properties.

1. Introduction and results

1.1. Decay rate of connections

Let P denote an edge-percolation measure on \mathbb{Z}^{d}. The central objects of our investigation are the rates of exponential decay of point-to-point connection probabilities (two-point functions):

Definition 1.1 (Inverse correlation length).

Let s\in\mathbb{S}^{d-1}. The point-to-point decay rates are

\begin{gathered}\overline{\nu}(s)=\limsup_{n\to\infty}-\frac{1}{n}\log P(0\leftrightarrow ns),\\ \underline{\nu}(s)=\liminf_{n\to\infty}-\frac{1}{n}\log P(0\leftrightarrow ns).\end{gathered} (1)

1.2. Motivation

The main motivation of this work comes from the (supposed) universal behaviour of two-point functions in high temperature systems: they should decay exponentially with a well-defined rate, and the pre-factor to this decay should be the one predicted by the Ornstein-Zernike theory [10, 14]. See [13] for a review of this topic.

On the one hand, some fairly satisfactory universal statements are available in perturbative regimes (very high temperature), see [2]. On the other hand, a non-perturbative approach (giving statements about the whole high temperature regime) has been developed over the past decades, proving the expected behaviour in various specific models: [1, 7, 8, 4, 5, 6, 11]. A recurrent ingredient in these proofs is the availability of correlation inequalities.

The latest non-perturbative approaches (mainly [6] combined with refinements from [12] and [11]) seem to be robust enough to tackle the problem (with some work…) for generic percolation models under a high temperature assumption, conditionally on the existence of the decay rate and the validity of some of its properties. This latter condition is usually where correlation inequalities are crucially needed.

To give an idea of the problem, let us consider some translation invariant percolation model P. When P satisfies the FKG inequality, one has P(0\leftrightarrow x+y)\geq P(0\leftrightarrow x\leftrightarrow x+y)\geq P(0\leftrightarrow x)P(0\leftrightarrow y). The equality \overline{\nu}=\underline{\nu}\equiv\nu is then an easy consequence of Fekete's Lemma. One can further extend \nu by positive homogeneity. The above inequality directly implies that \nu satisfies the triangle inequality.
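To spell this step out (the following two lines are only an illustration of the previous paragraph, with a_{n} introduced for this purpose and integer parts ignored): fixing s\in\mathbb{S}^{d-1} and setting a_{n}=-\log P(0\leftrightarrow ns), the FKG bound applied to x=ns, y=ms gives

a_{n+m}\leq a_{n}+a_{m},

so Fekete's Lemma yields \lim_{n\to\infty}\frac{a_{n}}{n}=\inf_{n\geq 1}\frac{a_{n}}{n}, i.e. \overline{\nu}(s)=\underline{\nu}(s)\equiv\nu(s); extending by \nu(x)=\|x\|\nu(x/\|x\|) and applying P(0\leftrightarrow x+y)\geq P(0\leftrightarrow x)P(0\leftrightarrow y) along nx and ny then gives \nu(x+y)\leq\nu(x)+\nu(y).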

The main problem is that “satisfying the FKG inequality” (or any other correlation inequality) is a very model-specific property (which fails for some arbitrarily small perturbations of, for example, FK percolation), while \overline{\nu}=\underline{\nu} and \nu being a norm should be generic properties of high temperature models (a condition insensitive to sufficiently small perturbations). We therefore introduce a suitable notion of high temperature phase for percolation models and prove that the wanted properties of \nu hold for any model in this phase. To the best of the author's knowledge, this is the first non-perturbative proof of this type of result not relying on correlation inequalities.

1.3. Results

Our main result is (see Section 2 for missing definitions):

Theorem 1.1.

Let E\subset\big\{\{i,j\}\subset\mathbb{Z}^{d}\big\} be finite range, irreducible, and invariant under translations. Let P be a percolation measure on E. Suppose that

  • P is invariant under translations,

  • P has the insertion tolerance property (Definition 2.3) with constant \theta>0,

  • P satisfies the exponential ratio weak mixing property (Definition 2.1) with rate c_{\mathrm{mix}}>0 and constant C_{\mathrm{mix}}<\infty for the set of local connection events,

  • there exists c_{\mathrm{co}}>0 such that P(0\leftrightarrow\Lambda_{n}^{c})\leq e^{-c_{\mathrm{co}}n} for any n large enough.

Then, for any s\in\mathbb{S}^{d-1},

\overline{\nu}(s)=\underline{\nu}(s)\equiv\nu(s). (2)

Moreover, the extension of \nu by positive homogeneity of order one defines a norm on \mathbb{R}^{d}.

Remark 1.1.

The ratio weak mixing condition we demand may look less natural and more stringent than the (non-ratio) weak mixing property. However, it has been shown, see [3, Theorem 3.3], that in many cases the two are equivalent. In particular, if P(\omega)\propto\prod_{C\in\mathrm{cl}(\omega)}f(C) (formally, as the R.H.S. is infinite; \mathrm{cl}(\omega) denotes the set of connected components of \omega), the assumption P(0\leftrightarrow\Lambda_{N}^{c}\,|\,\mathcal{F}_{E\setminus E(\Lambda_{N})})\leq e^{-cN} implies that the model has exponentially bounded controlling regions in the sense of [3].

Moreover, modulo straightforward changes in the proofs, one can replace the exponential mixing by any power law mixing with power >d. But this type of mixing can generally be enhanced to exponential mixing (see for example the discussion on mixing in [9]).

Remark 1.2.

The insertion tolerance property excludes degeneracies occurring in hard-core models. Moreover, it gives lower bounds on local connection probabilities, implying for example that the decay rates of Definition 1.1 lie in (\epsilon,\epsilon^{-1}) for some \epsilon>0 (non-degeneracy).
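To make the non-degeneracy claim quantitative (this short computation is not part of the remark; it uses the notation of Section 2 and ignores integer parts): insertion tolerance and Lemma 2.1 give P(0\leftrightarrow ns)\geq\theta^{c_{E}n}, so \overline{\nu}(s)\leq c_{E}\log\theta^{-1}<\infty, while ns\notin\Lambda_{m} for m<n/\sqrt{d}, so the assumption P(0\leftrightarrow\Lambda_{m}^{c})\leq e^{-c_{\mathrm{co}}m} gives \underline{\nu}(s)\geq c_{\mathrm{co}}/\sqrt{d}>0.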

Remark 1.3.

\nu obviously inherits additional symmetries of P.

1.4. Strategy of the proof

The idea of the proof goes as follows: one expects that the existence of \nu and the fact that \nu is a norm are closely related to some form of sub-additivity. The latter property can be recovered from mixing if typical clusters realizing connections are somehow directed. We thus introduce various notions of “directed connections” for which we prove existence of an asymptotic decay rate. We then show that all these rates are equal and define a norm \tilde{\nu}. To relate the obtained “directed rate” to the “real rates”, we make a small detour: we introduce point-to-hyperplane decay rates and their directed version. Showing that these two agree is much easier than for point-to-point connections and is done using a suitable coarse-graining argument. We then relate directed point-to-point to directed point-to-hyperplane connections via convex duality (approximately: directed point-to-hyperplane connections in a direction s are realized by a directed point-to-point connection in an optimal direction s^{\prime}). Finally, we relate (non-directed) point-to-point connections to (non-directed) point-to-hyperplane connections via another coarse-graining argument.

2. Definitions and notations

2.1. General notations

Denote by \|\cdot\| the Euclidean norm on \mathbb{R}^{d} and by \mathrm{d} the associated distance. Write \mathbb{S}^{d-1} for the unit sphere of \|\cdot\|. \langle\,\cdot\,,\,\cdot\,\rangle will denote the scalar product. s will always be an element of \mathbb{S}^{d-1}. For a (possibly asymmetric) norm \mu:\mathbb{R}^{d}\to\mathbb{R}_{+}, define the unit ball of \mu and its polar set (“Wulff shape”)

\mathcal{U}_{\mu}=\{x\in\mathbb{R}^{d}:\ \mu(x)\leq 1\},\quad\mathcal{W}_{\mu}=\bigcap_{s\in\mathbb{S}^{d-1}}\{x\in\mathbb{R}^{d}:\ \langle x,s\rangle\leq\mu(s)\}.

For A\subset\mathbb{R}^{d} and x\in\mathbb{R}^{d}, write A+x for the translate of A by x, \partial A for the boundary of A, and \mathring{A}=A\setminus\partial A for the interior of A.

Denote

\Lambda_{N}=[-N,N]^{d},\quad\Lambda_{N}(x)=x+\Lambda_{N}.

We also denote by \Lambda_{N} the intersection of \Lambda_{N} with \mathbb{Z}^{d}.

Define the half-spaces: for s\in\mathbb{S}^{d-1},

H_{s}=\{x\in\mathbb{R}^{d}:\ \langle x,s\rangle\geq 0\},\quad H_{s}(x)=x+H_{s}. (3)

Then, for \delta\in[0,1], define the cones

\mathcal{Y}_{s,\delta}=\{x\in\mathbb{R}^{d}:\ \langle x,s\rangle\geq(1-\delta)\|x\|\},\quad\mathcal{Y}_{s,\delta}(x)=x+\mathcal{Y}_{s,\delta}. (4)

\delta=0 gives the half-line \mathbb{R}_{+}s and \delta=1 gives the half-space H_{s}.

Also introduce the truncated cones

\mathcal{Y}_{s,\delta}^{K}=\mathcal{Y}_{s,\delta}\setminus H_{s}(Ks),\quad\mathcal{Y}_{s,\delta}^{K}(x)=x+\mathcal{Y}_{s,\delta}^{K}. (5)

For x\in\mathbb{R}^{d}, we denote by \mathrm{int}(x) the point of \mathbb{Z}^{d} closest to x, with some fixed tie-breaking rule respecting the symmetries/translations of \mathbb{Z}^{d}. We will often omit \mathrm{int} from the notation.

We fix a priori some arbitrary total order on \mathbb{Z}^{d}.

We will regularly use the following notation: for a sequence (a_{n})_{n\geq 1}\in\mathbb{R}^{\mathbb{N}}, we denote \overline{a}=\limsup_{n\to\infty}a_{n}\in\mathbb{R}\cup\{\pm\infty\} and \underline{a}=\liminf_{n\to\infty}a_{n}\in\mathbb{R}\cup\{\pm\infty\}. When \overline{a}=\underline{a}, we write a for the common limit.

A quantity f(n) is o_{n}(1) if \lim_{n\to\infty}f(n)=0.

2.2. Percolation

We consider edge percolation models. In all this work, E will be a subset of \big\{\{i,j\}\subset\mathbb{Z}^{d}\big\} with the following properties:

  • Irreducibility: (\mathbb{Z}^{d},E) is connected.

  • Finite Range: there exists r>0 such that \|i-j\|\geq r\implies\{i,j\}\notin E. The smallest such r is called the range of E and is denoted R\equiv R(E).

  • Translation Invariance: for any e\in E and x\in\mathbb{Z}^{d}, x+e\in E.

The graph distance on (\mathbb{Z}^{d},E) is denoted \mathrm{d}_{E}. As E is finite range and irreducible, there exists c_{E}>0 such that

c_{E}^{-1}\mathrm{d}(x,y)\leq\mathrm{d}_{E}(x,y)\leq c_{E}\mathrm{d}(x,y).

For a set A\subset\mathbb{Z}^{d}, denote A^{c}=\mathbb{Z}^{d}\setminus A, \partial^{\mathrm{\scriptscriptstyle int}}A=\{x\in A:\ \exists y\in A^{c},\ \{x,y\}\in E\}, and \partial^{\mathrm{\scriptscriptstyle ext}}A=\{x\in A^{c}:\ \exists y\in A,\ \{x,y\}\in E\}. Also define E(A)=\{\{x,y\}\in E:\ x,y\in A\}.

For \omega\in\{0,1\}^{E}, we systematically identify the \{0,1\}-valued function with the induced edge set \{e:\omega_{e}=1\}, the set of open edges. When talking about connectivity properties of \omega, it is understood that the graph (\mathbb{Z}^{d},\omega) is considered.

For F\subset E finite, denote \mathcal{F}_{F}=\{A\subset\{0,1\}^{F}\} and, for F\subset E infinite, denote by \mathcal{F}_{F} the sigma-algebra generated by the collection (\mathcal{F}_{F^{\prime}})_{F^{\prime}\subset F\text{ finite}}. A percolation measure P is a probability measure on (\{0,1\}^{E},\mathcal{F}_{E}). We write \{x\leftrightarrow y\} for the event that x,y lie in the same connected component (and \{A\leftrightarrow B\} for the event that there exist x\in A, y\in B with x\leftrightarrow y). We will also write \{x\xleftrightarrow{F}y\} for the event that x is connected to y by a path of open edges in F. \omega will be a random variable with law P.

2.3. Hypotheses

One of our hypotheses is a mixing condition, called the exponential ratio weak mixing property for connection events:

Definition 2.1 (Ratio mixing).

We say that P has the ratio weak mixing property with rate c>0 and constant C\geq 0 if, for any sets F,F^{\prime}\subset E and events A\in\mathcal{F}_{F}, B\in\mathcal{F}_{F^{\prime}} with P(A)P(B)>0,

\Big|1-\frac{P(A\cap B)}{P(A)P(B)}\Big|\leq C\sum_{e\in F,e^{\prime}\in F^{\prime}}e^{-c\mathrm{d}(e,e^{\prime})}, (6)

where \mathrm{d} is the Euclidean distance. We say that the property is satisfied for a class \mathcal{C}\subset\mathcal{F}_{E} if (6) holds whenever, in addition to the above hypotheses, A,B\in\mathcal{C}.
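Let us record how (6) is typically applied below (this reformulation is ours, not part of the definition): if the supports satisfy \mathrm{d}(F,F^{\prime})\geq K, every term of the sum is at most e^{-cK}, so

\Big|1-\frac{P(A\cap B)}{P(A)P(B)}\Big|\leq C|F||F^{\prime}|e^{-cK},

which is small as soon as |F||F^{\prime}| grows slower than e^{cK/2}; this is the origin of factors such as e^{-c_{\mathrm{mix}}K/2} appearing in Lemma 3.1 and in Section 4.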

Definition 2.2 (Connection events).

The class of local connection events is the set of events of the form

\{A\xleftrightarrow{\Delta}B\},

where A,B\subset\mathbb{Z}^{d} and \Delta\subset E are finite.

Definition 2.3 (Insertion tolerance).

A percolation measure P on E is said to have the insertion tolerance property if for any edge e\in E there exists \theta_{e}>0 such that

P(\omega_{e}=1\,|\,\mathcal{F}_{E\setminus\{e\}})\geq\theta_{e}.

If P is finite range and translation invariant, this is equivalent to the existence of \theta>0 such that

\min_{e\in E}P(\omega_{e}=1\,|\,\mathcal{F}_{E\setminus\{e\}})\geq\theta.

A useful consequence of insertion tolerance is

Lemma 2.1.

Suppose P is a finite range, translation invariant percolation measure on E satisfying the insertion tolerance property. Then, for any x,y\in\mathbb{Z}^{d} and any sets A,B\subset\mathbb{Z}^{d},

P(x\leftrightarrow A,y\leftrightarrow B,x\leftrightarrow y)\geq\theta^{\mathrm{d}_{E}(x,y)}P(x\leftrightarrow A,y\leftrightarrow B),

where \mathrm{d}_{E} is the graph distance on (\mathbb{Z}^{d},E) and \theta>0 is given by Definition 2.3.

Proof.

Let \gamma be a path (seen as a set of edges) from x to y realizing \mathrm{d}_{E}(x,y) (in particular, |\gamma|=\mathrm{d}_{E}(x,y)). Now,

P(x\leftrightarrow A,y\leftrightarrow B,x\leftrightarrow y)\geq P\big(P(x\leftrightarrow A,y\leftrightarrow B,\gamma\subset\omega\,|\,\mathcal{F}_{E\setminus\gamma})\big)
=P\big(\mathds{1}_{\omega\cup\gamma\in\{x\leftrightarrow A\}}\mathds{1}_{\omega\cup\gamma\in\{y\leftrightarrow B\}}P(\gamma\subset\omega\,|\,\mathcal{F}_{E\setminus\gamma})\big)
\geq P\big(\mathds{1}_{\omega\in\{x\leftrightarrow A\}}\mathds{1}_{\omega\in\{y\leftrightarrow B\}}\big)\,\theta^{|\gamma|}.

We will regularly use this kind of argument without explicitly writing down the details.

3. Coarser lattice, restricted connections, preliminary results

Throughout this section, we work under the hypotheses of Theorem 1.1.

3.1. Coarse connections

To avoid dealing with trivialities arising from the discrete structure of \mathbb{Z}^{d}, we will look at a coarser notion of connection. Let R_{0}\geq R (recall R is the range) be a fixed integer such that \big(\Lambda_{R_{0}},E(\Lambda_{R_{0}})\big) is connected. Denote by \Gamma=((2R_{0}+1)\mathbb{Z})^{d} the coarser lattice. For x\in\Gamma, denote \Lambda(x)=x+\Lambda_{R_{0}}. To lighten notation, for x,y\in\Gamma we will write \{x\leftrightarrow y\} for the event \{\Lambda(x)\leftrightarrow\Lambda(y)\}. By Lemma 2.1, these events have the same asymptotic decay rates as the point-to-point connections.

For a point x\in\mathbb{R}^{d}, denote by B_{x}=v_{x}+[-R_{0}-1/2,R_{0}+1/2)^{d} the box such that v_{x}\in\Gamma and x\in B_{x}. For a set \Delta\subset\mathbb{R}^{d}, we denote [\Delta]=\bigcup_{x\in\Delta}B_{x}\cap\mathbb{Z}^{d}. For x,y\in\mathbb{R}^{d} and \Delta\subset\mathbb{R}^{d}, we write \{x\xleftrightarrow{\Delta}y\}=\{\Lambda(v_{x})\xleftrightarrow{E([\Delta])}\Lambda(v_{y})\}.

In the same spirit, for \Delta\subset\mathbb{R}^{d}, we say that an event is \Delta-measurable if it is in \mathcal{F}_{E([\Delta])}.

3.2. A family of coarse-grainings

We will regularly use coarse-grainings of the cluster of 0. We describe here a generic coarse-graining procedure parametrized by the “unit cell” of the coarse-graining. These procedures are a general formulation of the coarse-graining procedure applied in [6]. Let 0\in\Delta\subset\mathbb{Z}^{d} be finite. Let \Delta_{K}=\bigcup_{x\in\Delta}\Lambda_{K}(x). Let \mathcal{T}=\mathcal{T}(\Delta,K) be the set of embedded rooted trees defined as follows: T\in\mathcal{T} is the data of a set of vertices t=\{t_{0},\cdots,t_{m}\} where each t_{i}\in\mathbb{Z}^{d}, and a set of edges f=\{f_{1},\cdots,f_{m}\} with f_{i}\subset t, |f_{i}|=2, such that

  • The graph (t,f) is a tree.

  • A given point of \mathbb{Z}^{d} can occur only once as an element of t.

  • t_{0}=0, and t_{i}\in\partial^{\mathrm{\scriptscriptstyle ext}}(\Delta_{K}+t_{j}) where f_{i}=\{t_{i},t_{j}\}.

  • The labels and edges can be inductively reconstructed from the set of vertices (without labels) W as follows: t_{i} is the smallest (for the fixed total order on \mathbb{Z}^{d}) element of W\setminus\{0,t_{1},\cdots,t_{i-1}\} belonging to \bigcup_{j=0}^{i-1}\partial^{\mathrm{\scriptscriptstyle ext}}(t_{j}+\Delta_{K}), and f_{i} is given by \{t_{i},v^{*}\} where v^{*} is the smallest element of \{t_{0},\cdots,t_{i-1}\} with t_{i}\in\partial^{\mathrm{\scriptscriptstyle ext}}(v^{*}+\Delta_{K}).

A fairly direct observation is that the degree of a vertex t_{i} in (t,f) is less than d_{\Delta_{K}}=|\partial^{\mathrm{\scriptscriptstyle ext}}\Delta_{K}|, and one has a natural inclusion of \mathcal{T}_{l}=\{T\in\mathcal{T}:\ |t|=l\} into the set of sub-trees of \mathbb{T}_{d_{\Delta_{K}}} (the d_{\Delta_{K}}-regular tree) containing 0 and having l vertices. In particular, there exists a universal c>0 such that |\mathcal{T}_{l}|\leq e^{c\log(d_{\Delta_{K}})l}.
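For completeness, here is one standard way to obtain the last bound (this counting argument is ours and the constant is not optimized): the number of subtrees with l vertices containing the root of the rooted D-ary tree is the Fuss–Catalan number

\frac{1}{(D-1)l+1}\binom{Dl}{l}\leq\binom{Dl}{l}\leq\frac{(Dl)^{l}}{l!}\leq(eD)^{l}=e^{(1+\log D)l},

so, with D=d_{\Delta_{K}}\geq 2, any c\geq 1+1/\log 2 works in the bound |\mathcal{T}_{l}|\leq e^{c\log(d_{\Delta_{K}})l}.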

We now define a mapping \mathrm{CG}_{\Delta,K} from the set of clusters containing 0 to \mathcal{T}(\Delta,K). We define it via an algorithm constructing T\in\mathcal{T} from C\ni 0 (see Figure 1). Fix some C\ni 0. Consider the graph formed by the vertices of \mathbb{Z}^{d} and the edges in C. Construct t,f as follows:

Set t_{0}=0, t=\{t_{0}\}, f=\varnothing, V=\Delta_{K}, i=1;
while A=\big\{z\in\partial^{\mathrm{\scriptscriptstyle ext}}V:\ z\xleftrightarrow{(z+\Delta)\setminus V}\partial^{\mathrm{\scriptscriptstyle ext}}(z+\Delta)\big\}\neq\varnothing do
       Set t_{i}=\min A;
       Let v^{*} be the smallest v\in t such that t_{i}\in\partial^{\mathrm{\scriptscriptstyle ext}}(\Delta_{K}+v);
       Set f_{i}=\{v^{*},t_{i}\};
       Update t=t\cup\{t_{i}\}, f=f\cup\{f_{i}\}, V=V\cup(t_{i}+\Delta_{K}), i=i+1;
end while
Set m=i;
return (t,f);
Algorithm 1: Coarse-graining of a cluster containing 0.

Write \mathrm{CG}_{\Delta,K}(C)=(t(C),f(C)). One automatically has that C is contained in a (K+2\times\mathrm{radius}(\Delta))-neighbourhood of \mathrm{CG}_{\Delta,K}(C).

Figure 1. Left: a possible cell \Delta. Right: a coarse-graining using the square cell. Required connections are depicted in red.

The usefulness of such a coarse-graining comes from combining the combinatorial control mentioned above on trees with a given number of vertices with the following energy bound.

Lemma 3.1.

Suppose the hypotheses of Theorem 1.1 hold. Then, there exists K_{0}\geq 0 such that for any finite 0\in\Delta\subset\mathbb{Z}^{d}, any K\geq K_{0}, and any T=(t,f)\in\mathcal{T}(\Delta,K),

P\big(\mathrm{CG}_{\Delta,K}(C_{0})=T\big)\leq\Big(P(0\leftrightarrow\Delta^{c})(1+|\Delta|e^{-c_{\mathrm{mix}}K/2})\Big)^{|f|}.
Proof.

Let T=(t,f). The event \mathrm{CG}_{\Delta,K}(C_{0})=T implies in particular that

\bigcap_{i=1}^{|f|}\{t_{i}\xleftrightarrow{(t_{i}+\Delta)\setminus V_{i}}\partial^{\mathrm{\scriptscriptstyle ext}}(t_{i}+\Delta)\}\equiv\bigcap_{i=1}^{|f|}A_{i}

occurs, where V_{i}=\bigcup_{0\leq j<i}(t_{j}+\Delta_{K}). Now, let F_{i} denote the support of A_{i}. One has |F_{i}|\leq C|\Delta| for any i (recall E has finite range) and \mathrm{d}(F_{i},F_{j})\geq K. In particular, by (6),

P\Big(\bigcap_{i=1}^{|f|}A_{i}\Big)\leq P\Big(\bigcap_{i=1}^{|f|-1}A_{i}\Big)P(A_{|f|})\Big(1+C_{\mathrm{mix}}\sum_{e\in F_{|f|},\,e^{\prime}:\mathrm{d}(e,e^{\prime})\geq K}e^{-c_{\mathrm{mix}}\mathrm{d}(e,e^{\prime})}\Big)
\leq P\Big(\bigcap_{i=1}^{|f|-1}A_{i}\Big)P(A_{|f|})\Big(1+C|\Delta|K^{d-1}e^{-c_{\mathrm{mix}}K}\Big)
\leq P\Big(\bigcap_{i=1}^{|f|-1}A_{i}\Big)P(0\leftrightarrow\Delta^{c})\Big(1+C|\Delta|K^{d-1}e^{-c_{\mathrm{mix}}K}\Big)

where we used inclusion of events and translation invariance in the last line. Iterating |f| times gives the result. ∎

4. Proofs

The proof proceeds by introducing a family of decay rates (rates associated with various connection events). The idea is to prove the wanted properties for convenient rates and then to prove that all rates are in fact the same. Again, we work under the hypotheses of Theorem 1.1, which are implicitly assumed in all statements.

4.1. Constrained point-to-point

We first introduce a family of connection events. For \delta\in(0,1] and s,s^{\prime}\in\mathbb{S}^{d-1} such that s\in\mathring{\mathcal{Y}}_{s^{\prime},\delta}, define

Q_{s^{\prime},\delta}(s,N)=\{0\xleftrightarrow{\mathcal{Y}_{s^{\prime},\delta}\setminus H_{s^{\prime}}(Ns)}Ns\}.
Lemma 4.1.

For any \delta\in(0,1] and s,s^{\prime}\in\mathbb{S}^{d-1} such that s\in\mathring{\mathcal{Y}}_{s^{\prime},\delta}, the limit

\tilde{\nu}_{s^{\prime},\delta}(s)=\lim_{N\to\infty}-\frac{1}{N}\log P(Q_{s^{\prime},\delta}(s,N))

exists.

Proof.

Fix s,s^{\prime}\in\mathbb{S}^{d-1} and \delta\in(0,1] such that s\in\mathring{\mathcal{Y}}_{s^{\prime},\delta}. By assumption, P(0\leftrightarrow\Lambda_{n}^{c})\leq e^{-c_{\mathrm{co}}n}. Denote l=2\limsup_{N\to\infty}-\frac{1}{N}\log P(Q_{s^{\prime},\delta}(s,N)) and set \alpha=\frac{l}{c_{\mathrm{co}}}. In particular, there exists N_{0} such that for any N\geq N_{0}, P(Q_{s^{\prime},\delta}(s,N))\geq e^{-lN}. So, -\frac{1}{N}\log P(Q_{s^{\prime},\delta}(s,N)) has the same upper and lower limits as the sequence

\frac{a_{N}}{N}=-\frac{1}{N}\log P\big(0\xleftrightarrow{(\mathcal{Y}_{s^{\prime},\delta}\setminus H_{s^{\prime}}(Ns))\cap\Lambda_{\alpha N}}Ns\big).

See Figure 2 for the volume in which the connection is required to occur.

Figure 2. The volume (\mathcal{Y}_{s,1}\setminus H_{s}(Ns))\cap\Lambda_{\alpha N}.

This additional manipulation is only needed to handle \delta=1, see Figure 3. We show that a_{N} satisfies the hypotheses of Lemma A.1. Let \Delta_{N}=(\mathcal{Y}_{s^{\prime},\delta}\setminus H_{s^{\prime}}(Ns))\cap\Lambda_{\alpha N}. Let n\geq m be large enough, \ell=\log(m)^{2}, and set N=n+m+\ell. Then \Delta_{n}\subset\Delta_{N}, \big((n+\ell)s+\Delta_{m}\big)\subset\Delta_{N}, and \mathrm{d}(\Delta_{n},(n+\ell)s+\Delta_{m})\geq\ell. Then,

P\big(0\xleftrightarrow{\Delta_{N}}Ns\big)\geq\theta^{c_{E}\ell}P\big(0\xleftrightarrow{\Delta_{n}}ns,\ (n+\ell)s\xleftrightarrow{(n+\ell)s+\Delta_{m}}Ns\big)

by inclusion of events and Lemma 2.1. Then, ratio mixing implies

P\big(0\xleftrightarrow{\Delta_{n}}ns,\ (n+\ell)s\xleftrightarrow{(n+\ell)s+\Delta_{m}}Ns\big)\geq(1-|\Delta_{m}|e^{-c_{\mathrm{mix}}\ell/2})P\big(0\xleftrightarrow{\Delta_{n}}ns\big)P\big(0\xleftrightarrow{\Delta_{m}}ms\big)

for any m large enough. |\Delta_{m}| being upper bounded by a polynomial of degree d in m, the wanted property follows with g(m)=\log(m)^{2} and f(m)=2+c_{E}\log(\theta^{-1})\log(m)^{2}. ∎

Figure 3. Left: construction of the local event when \delta<1. Right: construction of the local event when \delta=1. Dotted lines denote the use of insertion tolerance.

Lemma 4.2.

For any s\in\mathbb{S}^{d-1}, \tilde{\nu}_{s^{\prime},\delta}(s) does not depend on \delta\in(0,1] and s^{\prime}\in\mathbb{S}^{d-1}, as long as s\in\mathring{\mathcal{Y}}_{s^{\prime},\delta}.

Proof.

Fix s\in\mathbb{S}^{d-1} and omit it from the notation. Let \delta^{\prime},\delta^{\prime\prime}\in(0,1] and s^{\prime},s^{\prime\prime}\in\mathbb{S}^{d-1} be such that s\in\mathring{\mathcal{Y}}_{s^{\prime},\delta^{\prime}}\cap\mathring{\mathcal{Y}}_{s^{\prime\prime},\delta^{\prime\prime}}. To lighten notation, write r^{\prime}=\tilde{\nu}_{s^{\prime},\delta^{\prime}} and r^{\prime\prime}=\tilde{\nu}_{s^{\prime\prime},\delta^{\prime\prime}}. We first prove r^{\prime}\leq r^{\prime\prime}. Let \alpha=\frac{2r^{\prime\prime}}{c_{\mathrm{co}}}. In particular, defining \Delta_{n}=(\mathcal{Y}_{s^{\prime\prime},\delta^{\prime\prime}}\setminus H_{s^{\prime\prime}}(ns))\cap\Lambda_{\alpha n} (see Figure 4),

P(0\xleftrightarrow{\Delta_{n}}ns)=e^{-r^{\prime\prime}n(1+o_{n}(1))}.
Figure 4. The volume \Delta_{n} when \delta^{\prime\prime}=1.

Then, fix \epsilon>0 small and n large enough. Write \ell=\log(n)^{2}. For any N large, (1-\epsilon)N=q(n+\ell)+b with b<n+\ell (integer parts are implicitly taken). One has

P\big(Q_{s^{\prime},\delta^{\prime}}(s,N)\big)\geq\theta^{c_{E}\epsilon N+b+q\ell}P\Big(\bigcap_{i=0}^{q-1}\big\{(\frac{\epsilon}{2}N+i(n+\ell))s\xleftrightarrow{(\frac{\epsilon}{2}N+i(n+\ell))s+\Delta_{n}}(\frac{\epsilon}{2}N+i(n+\ell)+n)s\big\}\Big),

where we used insertion tolerance (Lemma 2.1). See Figure 5.

Figure 5. The construction of the lower bound. Dotted lines denote the use of insertion tolerance.

Using ratio mixing and translation invariance, the probability in the RHS is lower bounded by

e^{-q}\prod_{i=0}^{q-1}P\big(0\xleftrightarrow{\Delta_{n}}ns\big)=e^{-q}e^{-r^{\prime\prime}qn(1+o_{n}(1))}

whenever n is larger than some fixed constant. Taking the log, dividing by -N and letting N\to\infty, one obtains

r^{\prime}\leq c_{E}\log(\theta^{-1})\epsilon+\frac{(\log(\theta^{-1})\ell+1)(1-\epsilon)}{n+\ell}+\frac{(1-\epsilon)nr^{\prime\prime}}{n+\ell}.

\epsilon>0 is arbitrary and n is arbitrarily large. Take n\to\infty and then \epsilon\searrow 0 to obtain the wanted inequality.

Repeating the argument with (s^{\prime},\delta^{\prime}) and (s^{\prime\prime},\delta^{\prime\prime}) exchanged yields the reverse inequality, and thus the result. ∎

From Lemmas 4.1 and 4.2, it is natural to introduce \tilde{\nu}:\mathbb{R}^{d}\to\mathbb{R}_{+} as the extension by positive homogeneity of \tilde{\nu}_{s^{\prime},\delta}(s).

Lemma 4.3.

\tilde{\nu} defines a norm on \mathbb{R}^{d}.

Proof.

First, point separation follows from the exponential decay assumption (c_{\mathrm{co}}>0). Then, positive homogeneity of order one is a direct consequence of the way we extended \tilde{\nu} to \mathbb{R}^{d} and of

P\big(Q_{s,1}(s,N)\big)=P\big(Q_{-s,1}(-s,N)\big),

by translation invariance. There remains the triangle inequality. Fix x,y\in\mathbb{R}^{d}. Let s_{xy}=\frac{x+y}{\|x+y\|}, s_{x}=\frac{x}{\|x\|}, s_{y}=\frac{y}{\|y\|}. We can suppose that x,x+y\in\mathring{H}_{s_{xy}} (otherwise, exchange the roles of 0 and x+y, see Figure 6).

Figure 6.

Then, for \epsilon>0 fixed, for any \delta>0 small enough and any N large,

P\big(Q_{s_{xy},1}(s_{xy},N\|x+y\|)\big)\geq\theta^{\epsilon c(\|x\|+\|y\|)N}P\big(Q_{s_{x},\delta}(s_{x},\|x\|(1-\epsilon)N),\ N(x+y)\xleftrightarrow{\mathcal{Y}_{-s_{y},\delta}^{\|y\|(1-\epsilon)N}(N(x+y))}Nx+\epsilon N\|y\|s_{y}\big),

where we used insertion tolerance. See Figure 7.

Figure 7. Construction of the forced connection through Nx. Dotted lines denote the use of insertion tolerance.

Now, for \epsilon>0 fixed and \delta>0 small enough (depending on \epsilon), one can use ratio mixing to obtain that the last probability is lower bounded by

(1-e^{-c^{\prime}\epsilon N})P\big(Q_{s_{x},\delta}(s_{x},\|x\|(1-\epsilon)N)\big)P\big(Q_{s_{y},\delta}(s_{y},\|y\|(1-\epsilon)N)\big).

Taking the log, dividing by -N and sending N\to\infty, one obtains

\|x+y\|\tilde{\nu}(s_{xy})\leq\log(\theta^{-1})\epsilon c(\|x\|+\|y\|)+(1-\epsilon)\|x\|\tilde{\nu}(s_{x})+(1-\epsilon)\|y\|\tilde{\nu}(s_{y}).

\epsilon>0 was arbitrary; taking \epsilon\searrow 0 and using positive homogeneity gives \tilde{\nu}(x+y)\leq\tilde{\nu}(x)+\tilde{\nu}(y). ∎

4.2. Point-to-half-space

Lemma 4.4.

Let s\in\mathbb{S}^{d-1}. The limit

\nu_{H}(s)=\lim_{N\to\infty}-\frac{1}{N}\log P\big(0\leftrightarrow H_{s}(Ns)\big)

exists.

Proof.

We fix s\in\mathbb{S}^{d-1} and omit it from the notation. Let (n_{k})_{k\geq 1} be an increasing sequence of integers such that

\lim_{k\to\infty}-\frac{1}{n_{k}}\log P\big(0\leftrightarrow H_{s}(n_{k}s)\big)=\limsup_{N\to\infty}-\frac{1}{N}\log P\big(0\leftrightarrow H_{s}(Ns)\big)\equiv\overline{\nu}_{H}.

In particular, P\big(0\leftrightarrow H_{s}(n_{k}s)\big)=e^{-n_{k}\overline{\nu}_{H}(1+o_{k}(1))}.

By our hypotheses,

P(0\leftrightarrow\Lambda_{M}^{c})\leq e^{-c_{\mathrm{co}}M}

for any M large enough. Let then \alpha=\frac{\overline{\nu}_{H}}{c_{\mathrm{co}}}. Set \Delta_{k}=\Lambda_{\alpha n_{k}}\setminus H_{s}(n_{k}s), K_{k}=\log(n_{k})^{2}, \overline{\Delta}_{k}=\bigcup_{v\in\Delta_{k}}\Lambda_{K_{k}}(v). See Figure 8. In particular, we have

P(0\leftrightarrow\Delta_{k}^{c})\leq P(0\leftrightarrow\Lambda_{\alpha n_{k}}^{c})+P\big(0\leftrightarrow H_{s}(n_{k}s)\big)\leq e^{-\overline{\nu}_{H}n_{k}(1+o_{k}(1))} (7)

where we used a union bound.

Figure 8. The cell \Delta_{k}.

We now coarse-grain C_{0} using \mathrm{CG}_{k}\equiv\mathrm{CG}_{\Delta_{k},K_{k}} (see Section 3.2). Write \mathrm{CG}_{k}(C_{0})=(t(C_{0}),f(C_{0})). One has that C_{0} is included in a 3\alpha n_{k}-neighbourhood of t(C_{0}). We have

P(0\leftrightarrow X)=\sum_{T\in\mathcal{T}}P\big(0\leftrightarrow X,\mathrm{CG}_{k}(C_{0})=T\big)\leq\sum_{T\sim X}P\big(\mathrm{CG}_{k}(C_{0})=T\big) (8)

where T\sim X means that \mathrm{d}(X,t(T))\leq 3\alpha n_{k}. We can then use Lemma 3.1 and the bound on the number of trees to obtain that, for any fixed large enough k, as N goes to infinity,

P\big(0\leftrightarrow H_{s}(Ns)\big)\leq\sum_{l\geq\frac{N}{n_{k}+K_{k}}}\sum_{T\in\mathcal{T}_{l}}P\big(\mathrm{CG}_{k}(C_{0})=T\big)
\leq\sum_{l\geq\frac{N}{n_{k}+K_{k}}}e^{c\log(d_{\overline{\Delta}_{k}})l}e^{-\overline{\nu}_{H}n_{k}l(1+o_{k}(1))}
=\sum_{l\geq\frac{N}{n_{k}+K_{k}}}e^{-\overline{\nu}_{H}n_{k}l(1+o_{k}(1)+o_{n_{k}}(1))}
=e^{-N\overline{\nu}_{H}(1+o_{k}(1)+o_{n_{k}}(1))}\big(1-o_{k}(1)\big)^{-1}

as d_{\overline{\Delta}_{k}} is upper bounded by a polynomial of degree d in n_{k} and any tree T with T\sim H_{s}(Ns) has |f|\geq\frac{N}{n_{k}+K_{k}}. In particular, for any k large enough,

\underline{\nu}_{H}\equiv\liminf_{N\to\infty}-\frac{1}{N}\log P(0\leftrightarrow Ns+H_{s})\geq\overline{\nu}_{H}(1+o_{k}(1)+o_{n_{k}}(1)).

Taking k\to\infty yields \underline{\nu}_{H}\geq\overline{\nu}_{H}. The direction s being arbitrary, \overline{\nu}_{H}(s)=\underline{\nu}_{H}(s) for all s\in\mathbb{S}^{d-1}. ∎

4.3. Constrained point-to-half-space

Lemma 4.5.

Let s\in\mathbb{S}^{d-1}. The limit

\tilde{\nu}_{H}(s)=\lim_{N\to\infty}-\frac{1}{N}\log P\big(0\xleftrightarrow{H_{s}}H_{s}(Ns)\big)

exists. Moreover,

\tilde{\nu}_{H}(s)=\nu_{H}(s).
Proof.

We fix s\in\mathbb{S}^{d-1} and omit it from the notation. By inclusion of events, one has \liminf_{N\to\infty}-\frac{1}{N}\log P\big(0\xleftrightarrow{H_{s}}H_{s}(Ns)\big)\geq\nu_{H}. To obtain the other bound, start with, for any \epsilon>0,

P\big(0\leftrightarrow H_{s}(Ns)\big)\leq\theta^{-c_{E}\epsilon N}P(\epsilon Ns\leftrightarrow H_{s}(Ns))=e^{\lambda\epsilon N}e^{-\nu_{H}(1-\epsilon)N(1+o_{N}(1))}, (9)

where \lambda=\log(\theta^{-1})c_{E}>0, and

P\big(0\xleftrightarrow{H_{s}}H_{s}(Ns)\big)\geq e^{-\lambda\epsilon N}P(\epsilon Ns\xleftrightarrow{H_{s}}H_{s}(Ns)), (10)

by Lemma 2.1 (insertion tolerance).

We then use a coarse-graining described in Section 3.2 (the same as in the proof of Lemma 4.4, with different sizes). Set \Delta_{n}=\Lambda_{\alpha n}\setminus H_{s}(ns), K_{n}=\log(n)^{2}, and \overline{\Delta}_{n}=\bigcup_{v\in\Delta_{n}}\Lambda_{K_{n}}(v), where \alpha is the same quantity as in the proof of Lemma 4.4. As in Lemma 4.4,

P(0\leftrightarrow\Delta_{n}^{c})\leq e^{-\nu_{H}n(1+o_{n}(1))}.

We use \mathrm{CG}_{n}\equiv\mathrm{CG}_{\Delta_{n},K_{n}}. Write \mathrm{CG}_{n}(C_{0})=\big(t(C_{0}),f(C_{0})\big).

Now, any cluster contributing to \{\epsilon Ns\leftrightarrow H_{s}(Ns)\}\setminus\{\epsilon Ns\xleftrightarrow{H_{s}}H_{s}(Ns)\} has |f|\geq\frac{\epsilon N}{\sqrt{d}(\alpha n+\log(n)^{2})}+\frac{(1-\epsilon)N}{n+\log(n)^{2}} (see Figure 9). So, applying the same argument as in the proof of Lemma 4.4,

P(\epsilon Ns\leftrightarrow H_{s}(Ns))-P(\epsilon Ns\xleftrightarrow{H_{s}}H_{s}(Ns))\leq e^{-\nu_{H}N(1-\epsilon+\frac{\epsilon}{\sqrt{d}\alpha}+o_{n}(1))}.

In particular, for any fixed n large enough and any N large,

\frac{P(\epsilon Ns\xleftrightarrow{H_{s}}H_{s}(Ns))}{P(\epsilon Ns\leftrightarrow H_{s}(Ns))}\geq 1-e^{-\nu_{H}N(\epsilon^{\prime}+o_{n}(1)+o_{N}(1))},

where \epsilon^{\prime}=\frac{\epsilon}{\sqrt{d}\alpha}. Plugging this into (10) and using (9), one obtains

P\big(0\xleftrightarrow{H_{s}}H_{s}(Ns)\big)\geq e^{-2\lambda\epsilon N}(1-e^{-\nu_{H}N(\epsilon^{\prime}+o_{n}(1)+o_{N}(1))})P(0\leftrightarrow H_{s}(Ns))
=e^{-2\lambda\epsilon N}(1-e^{-\nu_{H}N(\epsilon^{\prime}+o_{n}(1)+o_{N}(1))})e^{-\nu_{H}N(1+o_{N}(1))}.

In particular, \limsup_{N\to\infty}-\frac{1}{N}\log P(0\xleftrightarrow{H_{s}}H_{s}(Ns))\leq\nu_{H}+2\lambda\epsilon. \epsilon>0 being arbitrary, taking \epsilon\searrow 0 yields the result. ∎

Figure 9. Coarse-graining of a cluster contributing to \{\epsilon Ns\leftrightarrow H_{s}(Ns)\}\setminus\{\epsilon Ns\xleftrightarrow{H_{s}}H_{s}(Ns)\}.

We highlight at this point that we could easily remove the “directedness” constraint for point-to-half-space connections, which seems much harder to do for point-to-point connections.

4.4. Convex duality

We saw that \tilde{\nu} defines a norm on \mathbb{R}^{d}. In particular, \mathcal{U}_{\tilde{\nu}} (the unit ball of \tilde{\nu}) is a convex set. To each s\in\mathbb{S}^{d-1}, we associate the set of dual directions

s^{\star}=\Big\{s^{\prime}\in\mathbb{S}^{d-1}:\ H_{s^{\prime}}\big(\frac{\langle s,s^{\prime}\rangle}{\tilde{\nu}(s)}s^{\prime}\big)\cap\mathcal{U}_{\tilde{\nu}}\subset\partial\mathcal{U}_{\tilde{\nu}}\Big\}.

It is the set of directions normal to the boundaries of half-spaces tangent to \mathcal{U}_{\tilde{\nu}} at \frac{s}{\tilde{\nu}(s)} (see Figure 10). By abuse of notation, we will write s^{\star} for an arbitrarily chosen element of this set. It satisfies \langle s,s^{\star}\rangle>0. Moreover, for a fixed s^{\star}, any s having s^{\star} as dual direction is a minimizer of s^{\prime}\mapsto\frac{\tilde{\nu}(s^{\prime})}{\langle s^{\star},s^{\prime}\rangle} under the constraint \langle s^{\star},s^{\prime}\rangle>0. Notice that this notion of duality is not the classical convex duality between \mathcal{U}_{\tilde{\nu}} and \mathcal{W}_{\tilde{\nu}} (but it is related to it via normalization of the dual directions).
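As a simple illustrative example (ours, not taken from the text): if \tilde{\nu}=\lambda\|\cdot\| for some \lambda>0, then \mathcal{U}_{\tilde{\nu}} is the Euclidean ball of radius 1/\lambda, the only half-space tangent to it at s/\tilde{\nu}(s)=s/\lambda has outward normal s, so s^{\star}=s and the duality relation (11) of Lemma 4.6 below reduces to \tilde{\nu}(s)=\nu_{H}(s): for an isotropic decay rate, point-to-point and point-to-half-space rates coincide.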

Figure 10. Duality between directions.

The duality statement is

Lemma 4.6.

For any s\in\mathbb{S}^{d-1},

\tilde{\nu}(s)=\nu_{H}(s^{\star})\langle s,s^{\star}\rangle. (11)
Proof.

Fix s\in\mathbb{S}^{d-1}. Let s^{\star} be a dual direction of s. Start with the easy inequality. By inclusion of events and Lemma 4.2,

P\big(0\leftrightarrow H_{s^{\star}}(Ns^{\star})\big)\geq P\big(0\xleftrightarrow{H_{s^{\star}}\setminus H_{s^{\star}}(aNs)}aNs\big)=e^{-a\tilde{\nu}(s)N(1+o_{N}(1))}

where a=\langle s,s^{\star}\rangle^{-1}. Taking the log, dividing by -N and letting N\to\infty, one gets \nu_{H}(s^{\star})\leq a\tilde{\nu}(s).

We now proceed to the harder inequality. We use Lemma 4.5. The idea is illustrated in Figure 11. Then, using the same argument as in the proof of Lemma 4.4, for some \alpha large enough,

P\big(0\xleftrightarrow{H_{s^{\star}}}H_{s^{\star}}(Ns^{\star})\big)\leq CP\big(0\xleftrightarrow{H_{s^{\star}}\cap\Lambda_{\alpha N}}H_{s^{\star}}(Ns^{\star})\big).

By a union bound, this is in turn upper bounded by

C\sum_{x\in\partial^{\mathrm{\scriptscriptstyle int}}[H_{s^{\star}}(Ns^{\star})]\cap\Lambda_{\alpha N}}P(0\xleftrightarrow{H_{s^{\star}}\setminus H_{s^{\star}}(x)}x). (12)

Let \delta<1 be such that \partial^{\mathrm{\scriptscriptstyle int}}[H_{s^{\star}}(Ns^{\star})]\cap\Lambda_{\alpha N}\subset\mathcal{Y}_{s^{\star},\delta} for any N large enough. Let \epsilon>0 be small. Choose a finite subset S of \mathbb{S}^{d-1}\cap\mathcal{Y}_{s^{\star},\delta} such that |S|\leq c^{\prime\prime}\epsilon^{-d+1} and \mathcal{Y}_{s^{\star},\delta}\subset\bigcup_{s^{\prime}\in S}\mathcal{Y}_{s^{\prime},\epsilon}. Denote A_{s^{\prime}}(N)=\partial^{\mathrm{\scriptscriptstyle int}}[Ns^{\star}+H_{s^{\star}}]\cap\mathcal{Y}_{s^{\prime},\epsilon}. Then, by insertion tolerance, (12) is upper bounded by

C\sum_{s^{\prime}\in S}\sum_{x\in A_{s^{\prime}}(N)}\theta^{-c^{\prime}\epsilon N}P(0\xleftrightarrow{H_{s^{\star}}\setminus H_{s^{\star}}(a_{s^{\prime}}Ns^{\prime})}a_{s^{\prime}}Ns^{\prime})

with a_{s^{\prime}}=\langle s^{\prime},s^{\star}\rangle^{-1}. By Lemma 4.2, P(0\xleftrightarrow{H_{s^{\star}}\setminus H_{s^{\star}}(a_{s^{\prime}}Ns^{\prime})}a_{s^{\prime}}Ns^{\prime})=e^{-a_{s^{\prime}}N\tilde{\nu}(s^{\prime})(1+o_{N}(1))} with the o_{N}(1) depending on s^{\prime}. Denote it o^{s^{\prime}}_{N}(1). Now, a_{s^{\prime}}\tilde{\nu}(s^{\prime}) is minimal if s^{\prime},s^{\star} are dual directions. So, combining all the previous observations,

P\big(0\xleftrightarrow{H_{s^{\star}}}H_{s^{\star}}(Ns^{\star})\big)\leq C^{\prime}N^{d-1}\epsilon^{1-d}\theta^{-c^{\prime}\epsilon N}e^{a_{s}N\tilde{\nu}(s)\max_{s^{\prime}\in S}o_{N}^{s^{\prime}}(1)}e^{-a_{s}N\tilde{\nu}(s)}.

Taking the log, dividing by -N and taking N\to\infty gives

\nu_{H}(s^{\star})\geq\log(\theta)c^{\prime}\epsilon+a_{s}\tilde{\nu}(s).

Taking then \epsilon\searrow 0 yields the result. ∎

Figure 11. The connection to H_{s}(Ns) is made at the point minimizing the distance measured with \tilde{\nu} (here the square mark). The grey square is a dilation of \mathcal{U}_{\tilde{\nu}}.

4.5. Final coarse-graining

Let us summarize what we did so far. First, we constructed a norm using a directed version of the point-to-point connections (Lemmas 4.1, 4.2, and 4.3). Then, we proved an equivalence (at the level of exponential rates) between directed and undirected point-to-half-space connections (Lemmas 4.4 and 4.5). Finally, we related these two quantities using convex duality (Lemma 4.6). We can now gather these three results to prove our key estimate:

Lemma 4.7.

For any \epsilon>0, there exists L_{0}\geq 0 such that for any L\geq L_{0},

P\big(0\leftrightarrow(L\mathcal{U}_{\tilde{\nu}})^{c}\big)\leq e^{-L(1-\epsilon)}. (13)
Proof.

Fix \epsilon>0 and \delta>0. Take S a finite subset of \mathbb{S}^{d-1} such that |S|\leq c\delta^{-d+1} and \bigcup_{s\in S}\mathcal{Y}_{s,\delta}\cap\mathcal{U}_{\tilde{\nu}}=\mathcal{U}_{\tilde{\nu}}. For s\in S, denote A_{s}=\partial^{\mathrm{\scriptscriptstyle ext}}(L\mathcal{U}_{\tilde{\nu}})\cap\mathcal{Y}_{s,\delta}. Then,

P\big(0\leftrightarrow(L\mathcal{U}_{\tilde{\nu}})^{c}\big)\leq\sum_{s\in S}\sum_{x\in A_{s}}P\big(0\xleftrightarrow{L\mathcal{U}_{\tilde{\nu}}}x\big)
\leq\theta^{-c^{\prime}\delta L}(c^{\prime\prime}L^{d-1})\sum_{s\in S}P\big(0\xleftrightarrow{L\mathcal{U}_{\tilde{\nu}}}\frac{sL}{\tilde{\nu}(s)}\big),

where we used insertion tolerance in the second line. Now, for any fixed s\in S, let s^{\star} be dual to s. See Figure 12. One then has

P\big(0\xleftrightarrow{L\mathcal{U}_{\tilde{\nu}}}\frac{sL}{\tilde{\nu}(s)}\big)\leq P\big(0\leftrightarrow\frac{L\langle s,s^{\star}\rangle}{\tilde{\nu}(s)}s^{\star}+H_{s^{\star}}\big)\leq e^{-\frac{L\langle s,s^{\star}\rangle}{\tilde{\nu}(s)}\nu_{H}(s^{\star})(1+o_{L}(1))}=e^{-L(1+o_{L}(1))}.

Now, the o_{L}(1) depends on s. Write it o_{L}^{s}(1). One therefore obtains

P\big(0\leftrightarrow(L\mathcal{U}_{\tilde{\nu}})^{c}\big)\leq\theta^{-c\delta L}(c^{\prime}L^{d-1})c^{\prime\prime}\delta^{-d+1}e^{-L}e^{\max_{s\in S}o_{L}^{s}(1)}.

Take \delta small enough and then L large enough to have \theta^{-c\delta L}(c^{\prime}L^{d-1})c^{\prime\prime}\delta^{-d+1}\leq e^{\epsilon L/2} and \max_{s\in S}o_{L}^{s}(1)\leq\epsilon L/2. ∎

Figure 12. For each direction s, we choose a dual direction for which connecting to a half-space is the same as connecting in direction s.

We then use the coarse-graining procedure of Section 3.2 with \Delta=L\mathcal{U}_{\tilde{\nu}} and K=\log(L)^{2}: \mathrm{CG}_{L}\equiv\mathrm{CG}_{L\mathcal{U}_{\tilde{\nu}},\log(L)^{2}}.

As a corollary of this construction, we obtain

Corollary 4.8.

For any s\in\mathbb{S}^{d-1},

\overline{\nu}(s)\leq\tilde{\nu}(s)\leq\underline{\nu}(s).

In particular, \nu is well defined and defines a norm on \mathbb{R}^{d}.

Proof.

Fix some s\in\mathbb{S}^{d-1}. One has the direct lower bound \tilde{\nu}(s)\geq\overline{\nu}(s). To obtain the other bound, we use \mathrm{CG}_{L}. Any cluster C\ni 0,Ns has |f(C)|\geq\frac{N\tilde{\nu}(s)}{L+\log(L)^{2}} (recall that \mathcal{U}_{\tilde{\nu}} is convex). Fix \epsilon>0 small and take L\geq L_{0}(\epsilon). Using the bound on the combinatorics of trees and Lemmas 3.1 and 4.7, one obtains

P(0\leftrightarrow Ns)\leq e^{(\epsilon+o_{L}(1))N}e^{-N\tilde{\nu}(s)}.

Taking the log, dividing by -N and letting N\to\infty gives \underline{\nu}(s)\geq\tilde{\nu}(s)-\epsilon+o_{L}(1). Letting L\to\infty and then \epsilon\searrow 0 gives the result. ∎

Acknowledgements

The author thanks Roma Tre University for its hospitality and is supported by the Swiss NSF through an Early PostDoc.Mobility grant. The author also thanks Yvan Velenik for a careful reading of the manuscript and for useful discussions.

Appendix A Relaxed Fekete’s Lemma

We use the following lemma, whose proof is an easy adaptation of the usual proof of Fekete's Lemma.

Lemma A.1.

Suppose (a_{n})_{n\geq 1} is a sequence with c_{-}n<a_{n}<c_{+}n for some 0<c_{-}\leq c_{+}<\infty. Suppose that there exist N_{0}\geq 1 and functions f,g:\mathbb{Z}_{>0}\to\mathbb{Z} such that

  • f(n)=o(n), g(n)=o(n),

  • for any n,m\geq N_{0}, a_{n+m+g(\min(n,m))}\leq a_{n}+a_{m}+f(\min(n,m)).

Then, the limit \lim_{n\to\infty}\frac{a_{n}}{n} exists in [c_{-},c_{+}].

Proof.

Let \underline{l}=\liminf_{n\to\infty}\frac{a_{n}}{n}. Let (n_{k})_{k\geq 1} be an increasing sequence such that \lim_{k\to\infty}\frac{a_{n_{k}}}{n_{k}}=\underline{l}. Fix k such that n_{k}\geq N_{0}. For any N large enough, N=q(n_{k}+g(n_{k}))+r with r<n_{k}+g(n_{k}). Then, by q-1 iterations of our sub-additivity-type hypothesis,

\frac{a_{N}}{N}\leq\frac{(q-1)(a_{n_{k}}+f(n_{k}))+a_{n_{k}+g(n_{k})+r}}{q(n_{k}+g(n_{k}))+r}=\underline{l}+o_{k}(1)+o_{n_{k}}(1)+o_{N}(1).

Taking N\to\infty, one obtains

\limsup_{N\to\infty}\frac{a_{N}}{N}\leq\underline{l}+o_{k}(1)+o_{n_{k}}(1).

k being arbitrary, one can now take k\to\infty to obtain the wanted result. ∎

References

  • [1] D. B. Abraham, J. T. Chayes, and L. Chayes. Random surface correlation functions. Communications in Mathematical Physics, 96(4):439–471, December 1984.
  • [2] D. B. Abraham and H. Kunz. Ornstein-Zernike theory of classical fluids at low density. Phys. Rev. Lett., 39(16):1011–1014, 1977.
  • [3] K. S. Alexander. On weak mixing in lattice models. Probability Theory and Related Fields, 110(4):441–471, May 1998.
  • [4] M. Campanino and D. Ioffe. Ornstein-Zernike theory for the Bernoulli bond percolation on \mathbb{Z}^{d}. Ann. Probab., 30(2):652–682, 2002.
  • [5] M. Campanino, D. Ioffe, and Y. Velenik. Ornstein-Zernike theory for finite range Ising models above T_{c}. Probab. Theory Related Fields, 125(3):305–349, 2003.
  • [6] M. Campanino, D. Ioffe, and Y. Velenik. Fluctuation theory of connectivities for subcritical random cluster models. Ann. Probab., 36(4):1287–1321, 2008.
  • [7] J. T. Chayes and L. Chayes. Ornstein-Zernike behavior for self-avoiding walks at all noncritical temperatures. Comm. Math. Phys., 105(2):221–238, 1986.
  • [8] D. Ioffe. Ornstein-Zernike behaviour and analyticity of shapes for self-avoiding walks on {\bf Z}^{d}. Markov Process. Related Fields, 4(3):323–350, 1998.
  • [9] F. Martinelli. Lectures on Glauber Dynamics for Discrete Spin Models. Springer Berlin Heidelberg, Berlin, Heidelberg, 1999.
  • [10] L. S. Ornstein and F. Zernike. Accidental deviations of density and opalescence at the critical point of a single substance. Proc. Akad. Sci., 17:793–806, 1914.
  • [11] S. Ott. Sharp Asymptotics for the Truncated Two-Point Function of the Ising Model with a Positive Field. Comm. Math. Phys., 374:1361–1387, 2020.
  • [12] S. Ott and Y. Velenik. Potts models with a defect line. Comm. Math. Phys., 362(1):55–106, Aug 2018.
  • [13] S. Ott and Y. Velenik. Asymptotics of correlations in the Ising model: a brief survey. Panoramas et Synthèses, 2019.
  • [14] F. Zernike. The clustering-tendency of the molecules in the critical state and the extinction of light caused thereby. Koninklijke Nederlandse Akademie van Wetenschappen Proceedings Series B Physical Sciences, 18:1520–1527, 1916.